LLM Value: Analyzing Foundation Models & Applications

Reid McCrabb

4 min read

February 16, 2024

Introduction

The use cases for AI models, and specifically LLMs, are plentiful. From sentiment analysis to coding assistance to content creation, AI is being integrated into nearly everything humans do. But where the value actually accrues is still being discovered. Where is there money to be made? Who holds competitive advantages in each area?

The best way to look at this is through the four layers of the LLM stack: hardware, foundation models, tooling, and AI applications. Once these layers are understood, the whole picture becomes clearer.

Hardware

The lowest level of the AI stack is the hardware it is built on: the graphics processing unit, or GPU, used for LLM training and inference. The shortage of GPUs is a significant bottleneck for AI, as it means the infrastructure is not ready for training more advanced models.

This is one of the key reasons Taiwan is of such importance to the US: it is home to TSMC, which fabricates the chips behind nearly all the GPUs used by American AI companies. A Chinese takeover of the island would be catastrophic for US defense tech and enterprise efficiency.

Moreover, this supply/demand imbalance is why Nvidia has been surging over the past 12 months, surpassing Google and Amazon in market capitalization at roughly $1.2 trillion. While hardware is difficult to build, once a moat is created, the upside is massive. In this case, Nvidia’s moat of the H100, CUDA, and TensorRT keeps trading higher and higher.

Foundation Models

The foundation model layer is where OpenAI has a large advantage. No other model competes with GPT-4 on quality. But for many tasks, simpler and smaller models make more sense, as they can be faster, cheaper, and more transparent. A series of open-source models provide these advantages – more on that in this article.

Despite OpenAI’s massive success with its models, selling model access appears to be a race to the bottom. With ever-cheaper alternatives emerging, OpenAI has had to cut its prices continuously, to the point that GPT-3.5-Turbo is nearly free.

This is why their focus appears to be on building out the ChatGPT product and the GPT Store, rather than doubling down on extending their lead in model strength. With open-source models closing the gap in quality, there is all the more reason to believe there may not be a great standalone business here.

Source: Artificial Analysis

Tooling

The toolset for large language models (LLMs) is still developing, with vector databases emerging as a key innovation. These databases store embeddings – numerical vector representations of text – and quickly retrieve the entries most similar to a query, making LLM-powered retrieval faster and more accurate.

Pinecone is leading in this new area, providing a specialized vector database for LLM applications. MongoDB, known for its robust database systems, has also joined the space with vector search of its own – a sign of the growing importance of this technology in AI and machine learning, and of the trend toward more advanced, efficient tools for developers.

Source: DataCamp
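To make the idea concrete, here is a minimal sketch of the operation at the heart of a vector database – storing embeddings and returning the stored entries most similar to a query vector. The document IDs and toy 3-dimensional vectors are made up for illustration; a production system like Pinecone layers approximate-nearest-neighbor indexing, persistence, and horizontal scale on top of this core idea.

```python
# Minimal sketch of the core operation a vector database performs:
# store embeddings, then return the entries most similar to a query.
import numpy as np

# Toy 3-dimensional embeddings; a real system would use an embedding
# model producing vectors with hundreds or thousands of dimensions.
documents = {
    "doc1": np.array([0.9, 0.1, 0.0]),
    "doc2": np.array([0.1, 0.8, 0.1]),
    "doc3": np.array([0.0, 0.2, 0.9]),
}

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def top_k(query, k=2):
    # Score every stored vector against the query and keep the best k.
    scored = [(doc_id, cosine_similarity(query, vec))
              for doc_id, vec in documents.items()]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)[:k]

query_vector = np.array([0.85, 0.15, 0.05])
print(top_k(query_vector))  # "doc1" should rank first
```

Cosine similarity is the usual choice here because it compares the direction of embeddings rather than their magnitude, which is what matters for semantic closeness.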

Other tools – LLM frameworks such as LangChain and LlamaIndex, and platforms like Cohere – were created to enable developers to test and implement techniques such as RAG (retrieval-augmented generation).
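As a rough illustration of the RAG pattern these frameworks implement, here is a sketch in which `embed` and `llm_complete` are hypothetical stand-ins for a real embedding model and a hosted LLM API – the retrieve-then-augment flow is the point, not the specific calls.

```python
# Sketch of the RAG pattern: retrieve relevant text, then augment the
# LLM prompt with it. Both helpers below are hypothetical stand-ins;
# a real pipeline would call an embedding model and a hosted LLM API.
import numpy as np

def embed(text):
    # Hypothetical: hash words into a fixed-size bag-of-words vector.
    vec = np.zeros(64)
    for word in text.lower().split():
        vec[hash(word) % 64] += 1.0
    return vec

def llm_complete(prompt):
    # Hypothetical: a real implementation would call an LLM API here.
    return f"[answer generated from a prompt of {len(prompt)} chars]"

corpus = [
    "Pinecone is a managed vector database.",
    "RAG grounds LLM answers in retrieved documents.",
    "GPUs accelerate both training and inference.",
]

def cosine(a, b):
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9)

def answer(question, k=2):
    q = embed(question)
    # Retrieve: rank corpus chunks by similarity to the question.
    ranked = sorted(corpus, key=lambda chunk: -cosine(embed(chunk), q))
    context = "\n".join(ranked[:k])
    # Augment + generate: stuff the retrieved context into the prompt.
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
    return llm_complete(prompt)

print(answer("What is RAG?"))
```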

Applications

Likely the most popular pure AI application is Perplexity – a new kind of search engine challenging Google. Perplexity delivers direct answers using LLMs and RAG (as described above) rather than the page of blue links Google provides. This is a serious threat to Google, as launching a Perplexity competitor would cannibalize its links business.

We are learning that most of the value gain will come not from AI startups, but from businesses that successfully integrate the technology to make their operations more efficient and open up new features for users. Companies with existing platforms can leverage applications of artificial intelligence such as recommendation engines, chatbots for enhanced customer experience, content generation for marketing, and much more.

Andrew Ng, Stanford AI professor and founder of Coursera, explains that AI’s versatility makes it akin to electricity: “Just as electricity transformed almost everything 100 years ago, today I have a hard time thinking of an industry that I don’t think AI will transform in the next several years.”

Conclusion

Looking ahead, the future of LLMs and AI in general seems poised for exponential growth, driven by advances in hardware, model development, tooling, and innovative applications like those above. As AI continues to integrate into every aspect of our lives and businesses, the challenge will be to harness its potential responsibly and ethically, ensuring that the benefits are widely distributed and contribute to the betterment of humanity.

AI’s impact will be far-reaching, transforming industries in ways we are only beginning to understand. The journey of LLMs and AI is just beginning, and the opportunities for growth, innovation, and transformation are boundless.

While the companies and users adopting AI are getting massive productivity boosts, not all of the companies supplying the technology are being equally rewarded. Notably, model prices are being slashed, and the models themselves are becoming a commodity.