
NLP-as-a-Service – The Next Evolution in AI


    One of CoreWeave’s founding principles is to provide a more accessible cloud infrastructure for developers and founders, purpose-built for compute-intensive workloads.

    CoreWeave’s platform has continuously evolved to address one common pain point shared by all of our clients: legacy cloud providers make it extremely difficult to scale because they offer limited high-performance compute options at monopolistic prices.

    CoreWeave is excited to announce a massive step forward for visionary businesses that are building products on top of large language models, while making it even easier to deploy NLP services on CoreWeave Cloud.

    In partnership with our friends at Anlatan, the creators of NovelAI, we launched GooseAI: a fully managed inference service delivered via API. Offering feature parity with other well-known APIs, GooseAI is a plug-and-play solution for serving open-source language models at over 70% cost savings; switching requires changing just one line of code.
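
    For teams that already integrate with an OpenAI-style client, the switch can look like the sketch below. This is a minimal illustration using the openai Python package; the base URL, engine identifier, and key shown are placeholders for illustration rather than a verbatim excerpt from the GooseAI documentation.

    # Minimal sketch: redirecting an existing OpenAI-client integration to GooseAI.
    # The base URL, engine name, and key below are illustrative assumptions.
    import openai

    openai.api_key = "YOUR_GOOSEAI_API_KEY"      # key from your GooseAI dashboard
    openai.api_base = "https://api.goose.ai/v1"  # the one-line change: point the client at GooseAI

    completion = openai.Completion.create(
        engine="gpt-neo-20b",         # assumed identifier for the GPT-NeoX-20B engine
        prompt="Once upon a time,",
        max_tokens=50,
    )
    print(completion.choices[0].text)

    Everything else in the request path stays the same, which is what makes the migration a drop-in change.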

    In 2021, we built a state-of-the-art NVIDIA A100 cluster for distributed training and partnered closely with EleutherAI to train the world’s largest publicly accessible language model: GPT-NeoX-20B. This investment in the AI community was a no-brainer for our team after hearing frustrations that large models were too expensive to deploy at scale and too hard to access.

    Since then, we have been building a dead-simple solution for anyone looking to deploy GPT-NeoX-20B and other models like it. As of February 2nd, you can start using our GPT-NeoX-20B beta on GooseAI.
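
    If you prefer to call the HTTP endpoint directly, a completion request against the GPT-NeoX-20B beta could look roughly like the following. The path and payload fields mirror the common OpenAI-style completions convention and are assumptions for illustration, not a copy of the official API reference.

    # Illustrative sketch: a raw HTTP completion request with the requests library.
    # The endpoint path and payload fields are assumed OpenAI-style conventions.
    import requests

    API_KEY = "YOUR_GOOSEAI_API_KEY"  # placeholder key

    response = requests.post(
        "https://api.goose.ai/v1/engines/gpt-neo-20b/completions",
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"prompt": "The next evolution in AI is", "max_tokens": 60, "temperature": 0.8},
        timeout=60,
    )
    response.raise_for_status()
    print(response.json()["choices"][0]["text"])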

    Here’s what you need to know:

    • GooseAI is an industry-leading, fully managed inference service delivered via API
    • Feature parity with industry-standard APIs, like OpenAI, at 50% lower cost
    • State-of-the-art open-source NLP models, including EleutherAI’s GPT-NeoX-20B, available out of the box
    • All the advantages of CoreWeave Cloud with zero infrastructure overhead, including the industry’s fastest spin-up times and most responsive auto-scaling

    Head over to GooseAI today to start serving the model, or get in touch with us to learn more. I hope to see you join our growing community of NeoX-20B developers!

