8 Product Ideas For AI Infrastructure Builders
List of ideas for AI infrastructure builders to consider
Welcome to Infinite Curiosity, a weekly newsletter that explores the intersection of Artificial Intelligence and Startups. Tech enthusiasts across 200 countries have been reading what I write. Subscribe to this newsletter for free to receive it in your inbox every week:
Hello friends,
I love infrastructure products. Companies that build infrastructure for developers have huge upside potential. A large influx of developers is coming into AI, and all of them will need infrastructure to build their products. In this post, I discuss 8 ideas that AI infrastructure builders should consider.
I asked DALL-E to generate an image in the style of Salvador Dali where people are building infrastructure. This is what it came up with. Let’s dive in.
1. Infrastructure to build LLM-infused products
Many products are being built using LLMs, with applications spanning sales, marketing, coding, and more. We need products that provide the infrastructure to build these applications. OpenAI has been a pioneer in this field: developers are using its APIs to build a wide variety of applications. But we need more infrastructure products that help these developers build.
2. Infrastructure to generate content using AI
This is for applications that use AI to generate content. We need an infrastructure product that helps application builders create image, text, audio, or video content. The outputs can serve a variety of use cases: summarizing reviews, product marketing videos, training videos, customer support, and more.
3. Infrastructure for inference
When it comes to building production AI systems, the work comprises two things: training and inference. Training refers to using available data to build a model; inference refers to using that model to produce outputs, e.g. predictions. Inference needs a lot of compute power, mostly because it happens at enormous scale. Every time someone types something into ChatGPT, inference runs, and each API call to the inference engine consumes compute power. The compute available on the market won't be able to meet this demand, so people will have to find new ways to generate compute power.
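To see why inference dominates, here's a rough back-of-the-envelope sketch. It uses the common heuristics of ~6 FLOPs per parameter per training token and ~2 FLOPs per parameter per inference token for dense transformer models; the model size, training-token count, and traffic numbers are illustrative assumptions, not real figures.

```python
# Back-of-the-envelope comparison of training vs. lifetime inference compute.
# The 6*N (training) and 2*N (inference) FLOPs-per-token heuristics are
# standard approximations for dense transformers; everything else below
# is an illustrative assumption.

PARAMS = 70e9           # model size: 70B parameters (assumed)
TRAINING_TOKENS = 2e12  # tokens seen during training (assumed)

def training_flops(params: float, tokens: float) -> float:
    """~6 FLOPs per parameter per training token (forward + backward pass)."""
    return 6 * params * tokens

def inference_flops(params: float, tokens: float) -> float:
    """~2 FLOPs per parameter per processed token (forward pass only)."""
    return 2 * params * tokens

train = training_flops(PARAMS, TRAINING_TOKENS)

# Suppose the deployed model serves 10M requests/day at ~1,000 tokens each.
daily_inference = inference_flops(PARAMS, 10e6 * 1000)

# Days of serving until cumulative inference compute exceeds training compute.
breakeven_days = train / daily_inference
print(f"Inference overtakes training after ~{breakeven_days:.0f} days")
```

Under these assumptions, inference compute surpasses the one-time training cost within a couple of years of serving, and the gap only widens as usage grows.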
4. Infrastructure to catalog product usage data
Foundation models are being trained on large amounts of publicly available data, and many are being open sourced. To build a moat, you need data that's unique to you. This comes in the form of product usage data, e.g. what people clicked on, what they liked, how long they watched a video. We need an infrastructure product to catalog and process this usage data. It can then be fed into reinforcement learning models to provide relevant recommendations to users.
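As a minimal sketch of what such a catalog might hold, here's a toy event schema and a simple aggregation. The schema, field names, and event types are illustrative assumptions, not any real product's telemetry format.

```python
# A minimal sketch of cataloging product usage events. Schema and field
# names are hypothetical, chosen only to illustrate the idea.
from dataclasses import dataclass
from collections import Counter

@dataclass(frozen=True)
class UsageEvent:
    user_id: str
    event_type: str     # e.g. "click", "like", "watch"
    item_id: str
    value: float = 1.0  # e.g. seconds watched for "watch" events

events = [
    UsageEvent("u1", "click", "video42"),
    UsageEvent("u1", "watch", "video42", value=37.5),
    UsageEvent("u2", "click", "video42"),
    UsageEvent("u2", "like", "video7"),
]

# Aggregate into per-item engagement counts -- the kind of feature a
# downstream recommendation (e.g. reinforcement learning) model could consume.
clicks_per_item = Counter(e.item_id for e in events if e.event_type == "click")
print(clicks_per_item)
```

A real system would stream events into durable storage and compute these aggregates continuously, but the core job is the same: turn raw interactions into features a model can learn from.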
5. Infrastructure to build and serve verticalized foundation models
We are currently in an era of building foundation models for a given modality, e.g. text or images. As we look at various domains, we need specialized foundation models that can be used as a starting point for building applications. For example, let's say we have a foundation model for text data. It's too broad! Someone can build a foundation model for all marketing-related work. One can then use this verticalized foundation model to build various marketing applications across different sectors. We need infrastructure to train these models, host them efficiently, and serve them for inference.
6. Infrastructure for on-device AI applications
Mobile devices are getting more powerful, and data privacy laws are getting stricter around the world. People are building native AI applications on mobile devices, because moving the data from the device to the cloud is expensive and bandwidth-constrained. We need infrastructure for developers who are building on-device AI applications.
7. Infrastructure to stream ML models
We need an infrastructure product that allows ML developers to build models to be consumed by anyone. These models would be available via API, and customers would access them all through a single interface, much like we listen to songs on Spotify. Revenue would then be split based on relative usage, similar to Spotify's model: if a developer builds a model that's useful to a lot of people, they earn more. The buyers and sellers don't necessarily need to know each other, and speed, quality, and consistency are guaranteed by the infrastructure provider.
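The single-interface-plus-metering idea can be sketched in a few lines. The registry API, model names, and toy model functions below are all hypothetical; a real hub would handle authentication, billing, and SLAs.

```python
# Sketch of a single-interface model marketplace with usage metering.
# The ModelHub API and the registered model names are hypothetical.
from collections import Counter
from typing import Callable, Dict

class ModelHub:
    def __init__(self) -> None:
        self._models: Dict[str, Callable[[str], str]] = {}
        self.usage = Counter()  # calls per model; later drives the revenue split

    def register(self, name: str, fn: Callable[[str], str]) -> None:
        """A developer publishes a model under a name."""
        self._models[name] = fn

    def call(self, name: str, prompt: str) -> str:
        """Customers hit every model through this one interface; each call is metered."""
        self.usage[name] += 1
        return self._models[name](prompt)

hub = ModelHub()
hub.register("summarizer", lambda text: text[:20] + "...")
hub.register("shouter", lambda text: text.upper())

hub.call("summarizer", "A long product review that needs summarizing.")
hub.call("shouter", "hello")
hub.call("shouter", "world")
print(hub.usage)  # the shouter's developer earns twice the summarizer's share
```

Because every call flows through `call()`, the provider can guarantee consistency at the interface and pay developers proportionally to their share of `usage`.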
8. Infrastructure for distributed model training
We need an infrastructure product that allows people to contribute their compute power. Very few companies in the world have the resources to train a large AI model. To mount a challenge to them, a large number of individual computers can form a cluster and pool computing resources to train a large model. The infrastructure provider needs to figure out an incentive mechanism for people to contribute compute. Customers can buy compute from this cluster and pay for usage, and the revenue can be distributed based on how much each node contributed.
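The payout mechanism at the end can be made concrete with a small sketch: pay each node in proportion to the compute it contributed. The node names, GPU-hour figures, and revenue number are illustrative assumptions.

```python
# Sketch of a contribution-proportional payout for a compute-sharing cluster.
# Node names and all numbers are illustrative assumptions.

def split_revenue(contributions: dict, revenue: float) -> dict:
    """Pay each node in proportion to the compute it contributed."""
    total = sum(contributions.values())
    return {node: revenue * share / total for node, share in contributions.items()}

# GPU-hours contributed by each node during the billing period (assumed).
contributions = {"node_a": 120.0, "node_b": 60.0, "node_c": 20.0}

payouts = split_revenue(contributions, revenue=1000.0)
print(payouts)  # node_a earns 600.0, node_b 300.0, node_c 100.0
```

A production incentive scheme would also need to verify that the claimed work was actually done, but the proportional split is the simplest baseline to build on.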
If you are getting value from this newsletter, consider subscribing for free and sharing with your friends: