Published: 2024-02-29
Nvidia, Hugging Face and ServiceNow are raising the bar on AI for code generation with StarCoder2, a new family of open-access large language models (LLMs).
Available today in three different sizes, the models have been trained on more than 600 programming languages, including low-resource ones, to help enterprises accelerate various code-related tasks in their development workflows. They have been developed under the open BigCode Project, a joint effort of ServiceNow and Hugging Face to ensure responsible development and use of large language models for code. They are being made available royalty-free under Open Responsible AI Licenses (OpenRAIL).
“StarCoder2 stands as a testament to the combined power of open scientific collaboration and responsible AI practices with an ethical data supply chain. The state‐of‐the‐art open‐access model improves on prior generative AI performance to increase developer productivity and provides developers equal access to the benefits of code generation AI, which in turn enables organizations of any size to more easily meet their full business potential,” Harm de Vries, lead of ServiceNow’s StarCoder2 development team and co‐lead of BigCode, said in a statement.
While BigCode’s original StarCoder LLM debuted in a single 15B-parameter size and was trained on about 80 programming languages, the latest generation steps beyond it with models in three different sizes – 3B, 7B and 15B – trained on 619 programming languages. According to BigCode, the training data for the new models, known as The Stack, was more than seven times larger than the one used previously.
More importantly, the BigCode community used new training techniques for the latest generation to ensure that the models can understand and generate low-resource programming languages such as COBOL, as well as mathematics and discussions of program source code.
The smallest, 3 billion-parameter model was trained using ServiceNow’s Fast LLM framework, while the 7B model was developed with Hugging Face’s nanotron framework. Both aim to deliver high-performance text-to-code and text-to-workflow generation while requiring less compute.
Meanwhile, the largest 15 billion-parameter model has been trained and optimized with the end‐to‐end Nvidia NeMo cloud‐native framework and Nvidia TensorRT‐LLM software.
While it remains to be seen how well these models perform in different coding scenarios, the companies did note that the performance of the smallest 3B model alone matched that of the original 15B StarCoder LLM.
Depending on their needs, enterprise teams can use any of these models and fine-tune them further on their organizational data for different use cases. This can be anything from specialized tasks such as application source code generation, workflow generation and text summarization to code completion, advanced code summarization and code snippet retrieval.
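As a concrete illustration of the code-completion use case, models in the StarCoder family are typically prompted in a fill-in-the-middle (FIM) format, where the model is given the code before and after a gap and asked to generate what goes between. The sketch below builds such a prompt string; the exact sentinel token names are an assumption based on the original StarCoder and should be verified against the chosen model’s tokenizer configuration before use.

```python
# Sketch: building a fill-in-the-middle (FIM) prompt for a code-completion
# model. The sentinel token strings below are assumptions (modeled on the
# original StarCoder) -- check them against the model's tokenizer config.

FIM_PREFIX = "<fim_prefix>"
FIM_SUFFIX = "<fim_suffix>"
FIM_MIDDLE = "<fim_middle>"


def build_fim_prompt(prefix: str, suffix: str) -> str:
    """Arrange code-before and code-after so the model generates the gap.

    The model is expected to continue generating after the FIM_MIDDLE
    sentinel, producing the code that belongs between prefix and suffix.
    """
    return f"{FIM_PREFIX}{prefix}{FIM_SUFFIX}{suffix}{FIM_MIDDLE}"


# Example: ask the model to fill in the body of a function.
prompt = build_fim_prompt(
    prefix="def add(a, b):\n    return ",
    suffix="\n\nprint(add(2, 3))\n",
)
print(prompt)
```

The completed prompt would then be tokenized and passed to the model; everything the model emits after the final sentinel is the suggested in-between code.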
The companies emphasized that the models, thanks to their broader and deeper training, can draw on repository context, enabling accurate and context‐aware predictions. Ultimately, all this paves the way to accelerated development, freeing engineers and developers to focus on more critical tasks.
“Since every software ecosystem has a proprietary programming language, code LLMs can drive breakthroughs in efficiency and innovation in every industry,” Jonathan Cohen, vice president of applied research at Nvidia, said in the press statement.
“Nvidia’s collaboration with ServiceNow and Hugging Face introduces secure, responsibly developed models, and supports broader access to accountable generative AI that we hope will benefit the global community,” he added.
As mentioned earlier, all models in the StarCoder2 family are being made available under the Open RAIL-M license with royalty-free access and use. The supporting code is available on the BigCode project’s GitHub repository. As an alternative, teams can also download and use all three models from Hugging Face.
That said, the 15B model trained by Nvidia is also coming to Nvidia AI Foundation, enabling developers to experiment with it directly from their browser or via an API endpoint.
While StarCoder is not the first entry in the space of AI-driven code generation, the wide variety of options the latest generation of the project brings certainly allows enterprises to take advantage of LLMs in application development while also saving on compute.
Other notable players in this space are OpenAI and Amazon. The former offers Codex, which powers the GitHub Copilot service, while the latter has its CodeWhisperer tool. There’s also strong competition from Replit, which has a few small AI coding models on Hugging Face, and Codeium, which recently nabbed $65 million in Series B funding at a valuation of $500 million.