
NLP-as-a-Service – The Next Evolution in AI


    One of CoreWeave’s founding principles is to provide a more accessible cloud infrastructure for developers and founders, purpose-built for compute-intensive workloads.

    CoreWeave’s platform has continuously evolved to address one common pain point shared by all of our clients: legacy cloud providers make it extremely difficult to scale because they offer limited high-performance compute options at monopolistic prices.

    CoreWeave is excited to announce a massive step forward for visionary businesses that are building products on top of large language models, making it even easier to deploy NLP services on CoreWeave Cloud.

    In partnership with our friends at Anlatan, the creators of NovelAI, we launched GooseAI: a fully managed inference service delivered by API. With feature parity to other well-known APIs, GooseAI delivers a plug-and-play solution for serving open-source language models at over 70% cost savings, simply by changing one line of code.
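    To make the "one line of code" claim concrete, here is a minimal sketch of what switching providers looks like when an API mirrors the OpenAI completions interface. The endpoint URL and engine name below are illustrative assumptions, not guaranteed values; check the GooseAI documentation for the current ones. The helper only assembles the request and does not send it.

```python
import json

# Two OpenAI-style API bases. The GooseAI URL is an assumption based on
# the service's advertised OpenAI compatibility; verify against its docs.
OPENAI_BASE = "https://api.openai.com/v1"
GOOSEAI_BASE = "https://api.goose.ai/v1"  # the "one line" you change

def build_completion_request(base_url, engine, prompt, max_tokens=64):
    """Assemble an OpenAI-style text-completion request (not sent here)."""
    return {
        "url": f"{base_url}/engines/{engine}/completions",
        "headers": {
            "Authorization": "Bearer $API_KEY",  # placeholder credential
            "Content-Type": "application/json",
        },
        "body": json.dumps({"prompt": prompt, "max_tokens": max_tokens}),
    }

# Same code path, different base URL: that is the entire migration.
req = build_completion_request(GOOSEAI_BASE, "gpt-neo-20b", "Once upon a time")
```

    Because only the base URL (and credential) differ, any client built against the OpenAI-compatible request shape can target either provider without touching its business logic.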

    In 2021, we built a state-of-the-art NVIDIA A100 cluster for distributed training and partnered closely with EleutherAI to train the world’s largest publicly accessible language model: GPT-NeoX-20B. This investment in the AI community was a no-brainer for our team, after hearing developers’ frustrations that large models were too expensive to deploy at scale and too hard to access.

    Since then, we have been building a dead simple solution for anyone looking to deploy GPT-NeoX-20B and other models like it. As of February 2nd, you can start using our GPT-NeoX-20B beta on GooseAI.

    Here’s what you need to know:

    • GooseAI is an industry-leading, fully managed inference service delivered via API
    • Feature parity with industry-standard APIs, like OpenAI’s, at 50% lower cost
    • State-of-the-art open-source NLP models, including EleutherAI’s GPT-NeoX-20B, available out of the box
    • All the advantages of CoreWeave Cloud with zero infrastructure overhead, including the industry’s fastest spin-up times and most responsive auto-scaling

    Go to GooseAI today to start serving the model, or feel free to get in touch with us to learn more. I hope to see you join our growing community of NeoX-20B developers!

