Intro to Protege Engine
✨ Machine Learning for Product Engineers
Why Protege Engine?
Machine Learning is hard.
For Executives:
- The vast majority of use-cases are prohibitively expensive, from either an engineering or an API-cost perspective.
- You shouldn't need to hire an engineer with a PhD to write prompts for you.
- Protege Engine reduces the Total Cost of Ownership of AI Solutions by 60%-90% by training small language models for specific tasks.
- Protege Engine provides value in any context where human understanding is both required and expensive to scale.
For Engineers:
- Your non-technical co-workers and users often know a lot about their domain, but they need your help to distill that expert knowledge into data you can use to build solutions.
- Your team might not be tooled to produce AI solutions right now.
- You shouldn't pay exorbitant prices for massive models to do a domain-specific task, poorly.
- You shouldn't have to endlessly edit prompts to get good output.
- You don't need AGI to rank or classify, but you do need a dataset and a training pipeline.
The Solution
An API platform that abstracts away the annoyances of configuring prompts, training models, and iterating -- all while maintaining optionality for power users and Unicorn AI Engineers.
Protege Engine provides a *complete training loop, incorporating human feedback*.
At its simplest, Protege Engine sits between your product and an "Inference Backend", proxying requests and enabling them to be replayed and used for training. This grants visibility into the inner workings of your LLM API requests, supports multi-service fail-over and load-balancing, and facilitates the provisioning of small language models that can achieve parity with their massive proprietary cousins.
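To make the fail-over behavior concrete, here is a minimal illustrative sketch of trying multiple inference backends in order. The function and backend names are hypothetical stand-ins, not Protege Engine's actual API:

```python
# Illustrative only: a proxy that falls back to the next backend on failure.
def infer_with_failover(prompt: str, backends: list) -> str:
    """Try each inference backend in order; return the first successful response."""
    errors = []
    for backend in backends:
        try:
            return backend(prompt)
        except Exception as exc:  # a production proxy would catch narrower error types
            errors.append(f"{backend.__name__}: {exc}")
    raise RuntimeError("all backends failed: " + "; ".join(errors))

def flaky_backend(prompt: str) -> str:
    raise TimeoutError("backend unavailable")   # simulated outage

def stable_backend(prompt: str) -> str:
    return "echo: " + prompt                    # stand-in for a real model call

print(infer_with_failover("hello", [flaky_backend, stable_backend]))
```

Because the proxy sees every request and response, the same interception point that enables fail-over also lets requests be logged and replayed for training.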
At its most complex, Protege Engine can be deeply integrated with a product experience to collect explicit and implicit user interactions and to train a model to perform complex operations in a domain-specific context.
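The deep-integration path hinges on capturing user interactions as training signal. Below is a hedged sketch of what such a feedback record might look like; the field names, event kinds, and serialization format are assumptions for illustration, not Protege Engine's schema:

```python
# Hypothetical feedback event: field names and kinds are illustrative assumptions.
from dataclasses import dataclass, asdict
import json

@dataclass
class FeedbackEvent:
    request_id: str   # ties feedback back to the proxied inference request
    kind: str         # "explicit" (e.g. thumbs up/down) or "implicit" (e.g. output accepted)
    score: float      # normalized signal strength in [0, 1]

def to_training_record(event: FeedbackEvent) -> str:
    """Serialize one feedback event as a JSON line for a training dataset."""
    return json.dumps(asdict(event))

print(to_training_record(FeedbackEvent("req-123", "explicit", 1.0)))
```

Accumulating records like these against replayed requests is what closes the training loop described above.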