AI+Web3 is likely to be one of the hotspots in the next bull market. What are the specific trends and opportunities?

Author: Lao Bai, Partner of ABCEDE Investment Research

AI, the hottest topic at the moment, is widely considered the key to the fourth industrial revolution. Another hot concept in the technology industry is Web3, which is seen as the core of the next-generation Internet.

AI and Web3 are two concepts that will trigger a technological revolution. If they can join forces, what kind of “surprises” will they bring us?

01 First, let’s talk about AI itself

The AI industry was previously in a slump. You know Illia Polosukhin, the founder of Near? He used to work in AI and was a major contributor to TensorFlow, the most popular machine-learning framework. The speculation is that he saw little hope in the (pre-large-model) machine-learning field, so he moved to Web3.

Finally, at the end of last year, the industry got ChatGPT (GPT-3.5), which brought a real qualitative change rather than the earlier waves of hype and mere quantitative change. A few months later, the AI startup wave spread to Web3 as well. In Silicon Valley, Web2 is in a frenzy: capital is FOMOing in, homogeneous solutions are fighting price wars, and the big companies are racing on large models…

But it should be noted that after more than half a year of explosive growth, AI has also entered a relative bottleneck period. For example, Google search interest in AI has plummeted, ChatGPT's user growth has slowed significantly, and the randomness of AI output limits many practical applications… In short, we are still very, very far from the legendary "AGI – Artificial General Intelligence".

Currently, the Silicon Valley venture capital circle has several judgments on the next development of AI:

1) There are no vertical models, only large models + vertical applications (more on Web3+AI later)

2) Edge devices, and the data locked on them such as mobile data, may be a moat; AI running on edge devices may also be an opportunity

3) Context length may trigger a qualitative change in the future (vector databases currently serve as AI memory, but context length is still not enough)

02 Web3+AI

AI and Web3 are actually two completely different fields. AI requires centralized computing power and massive data for training, which makes it very centralized, while Web3 is all about decentralization. So the two are not naturally compatible. However, the idea that AI changes productivity while blockchain changes production relations has taken deep root, so people will keep searching for the point of integration. In the past two months alone, I have discussed more than 10 AI projects.

Before talking about the new integration tracks, let's first talk about the old AI+Web3 projects, which are mostly platform-style, represented by FET and AGIX. How should I put it, this is what friends of mine who specialize in AI in China told me: "Most people who used to work in AI are of little use now, whether in Web2 or Web3; much of that background is a burden rather than an asset. The direction and the future are OpenAI-style, Transformer-based large models. Large models have saved AI." You can draw your own conclusions.

Therefore, the generic platform type is not the favored Web3+AI model. None of the 10-plus projects I talked to take that approach; what I have seen so far basically falls into the following tracks:

1. Bot/Agent/Assistant model assetization

2. Computing power platform

3. Data platform

4. Generative AI

5. DeFi trading/audit/yield/risk control

6. ZKML


1. Bot/Agent/Assistant model assetization

The assetization of Bots/Agents/Assistants is the most discussed track and the one with the most serious homogeneity. In simple terms, most of these projects use OpenAI as the underlying technology, combine it with other open-source or self-developed techniques, such as TTS (text-to-speech), plus domain-specific data, and fine-tune bots that outperform plain ChatGPT in a particular field.

For example, you can train a beautiful English teacher who teaches you English. You can choose whether she has an American or a British accent, and her personality and chat style can be adjusted too. Compared with ChatGPT's mechanical, official answers, the interactive experience is better. There is a DApp in the industry called HIM, a virtual-boyfriend Web3 game aimed at female players, which can be considered representative of this type.

Based on this idea, theoretically, you can have many Bots/Agents serving you. For example, if you want to cook boiled fish, there may be a Cooking Bot specifically tailored for this field to teach you, providing more professional answers than ChatGPT. If you want to travel, there is also a travel assistant Bot that provides various travel suggestions and plans. Or if you are a project party, you can create a Discord customer service bot to help you answer community questions.
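The "one base model, many vertical bots" idea above can be sketched in a few lines. This is a toy illustration, not any project's actual code; `base_model_complete`, the bot names, and the prompts are all hypothetical stand-ins for a real LLM API:

```python
# Sketch: vertical "bots" as thin wrappers over one shared base model.
# `base_model_complete` is a hypothetical stand-in for a real LLM API call.

def base_model_complete(system_prompt: str, user_message: str) -> str:
    # Placeholder: a real implementation would call a hosted large model here.
    return f"[{system_prompt.split(':')[0]}] answering: {user_message}"

class VerticalBot:
    """A domain-specific assistant built on the shared base model."""
    def __init__(self, name: str, system_prompt: str):
        self.name = name
        self.system_prompt = system_prompt

    def ask(self, question: str) -> str:
        # Specialization lives entirely in the prompt (and, in practice,
        # fine-tuning data); the underlying model is the same for every bot.
        return base_model_complete(self.system_prompt, question)

cooking_bot = VerticalBot(
    "CookingBot",
    "CookingBot: you are a professional chef; answer cooking questions step by step.",
)
travel_bot = VerticalBot(
    "TravelBot",
    "TravelBot: you are a travel planner; propose itineraries and budgets.",
)

print(cooking_bot.ask("How do I make boiled fish?"))
```

The point of the sketch is that every "vertical" bot shares the same underlying call; the differentiation is in prompts, data, and packaging, which is exactly why the track is so homogeneous.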

In addition to this type of "GPT-based vertical application" bot, there are derivative projects built on top, such as bot-based "model assetization". It is a bit like the "assetization" of PFP NFTs. Can popular AI prompts be assetized too? Different prompts in Midjourney generate different images, and different prompts produce different effects when steering a model, so prompts themselves have value and can be turned into assets.

There are also projects building index and search portals for these bots. One day, when there are thousands of bots, how do you find the one that suits you? We may then need a portal like Hao123 in the Web2 world, or a search engine like Google, to help you "locate" it.
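A bot "portal" of this kind could start as little more than a registry with keyword search. A minimal sketch, with invented bot names and tags:

```python
# Sketch: a minimal "Hao123/Google for bots" -- a registry with keyword search.
# Bot names, descriptions, and tags are made up for illustration.

from dataclasses import dataclass, field

@dataclass
class BotEntry:
    name: str
    description: str
    tags: set = field(default_factory=set)

class BotRegistry:
    def __init__(self):
        self._bots = []

    def register(self, bot: BotEntry) -> None:
        self._bots.append(bot)

    def search(self, query: str) -> list:
        """Rank bots by how many query words hit their tags or description."""
        words = set(query.lower().split())
        scored = []
        for bot in self._bots:
            haystack = bot.tags | set(bot.description.lower().split())
            score = len(words & haystack)
            if score:
                scored.append((score, bot.name))
        return [name for _, name in sorted(scored, reverse=True)]

registry = BotRegistry()
registry.register(BotEntry("CookingBot", "recipes and cooking help", {"cooking", "recipes"}))
registry.register(BotEntry("TravelBot", "trip planning and itineraries", {"travel", "trips"}))

print(registry.search("help me with cooking a recipe"))
```

A real portal would rank on reputation, usage, and semantic similarity rather than raw keyword overlap, but the registry-plus-ranking shape is the same.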

In my personal opinion, the assetization of Bots (models) has two drawbacks and two directions at this stage:

1) Drawbacks

Drawback 1 – Homogeneity is severe, because this is the AI+Web3 track users most easily understand; it is a bit like NFTs with a utility attribute. The primary market is already turning into a red ocean, with everyone rushing in. But since the underlying technology is all OpenAI, there are effectively no technical barriers, and teams can only compete on design and operations;

Drawback 2 – Things like putting a Starbucks membership-card NFT on the chain are a good attempt to stand out, but for most users they may be less convenient than a physical or electronic membership card. Web3 bots have the same problem: if I want to practice English with a robot or chat with "Musk" or "Socrates", isn't Web2's Character.AI more convenient?

2) Directions

Direction 1 – In the near to medium term, putting models on the chain may be a way forward. Currently these models are like most ETH NFT PFPs: the metadata points to off-chain servers or IPFS rather than living purely on-chain. Models usually run from tens to hundreds of megabytes, so for now they have to be stored on servers.

But with the recent rapid decline in storage prices (a 2TB SSD for about 500 RMB), and the progress of storage projects such as Filecoin's FVM and EthStorage, putting models of hundreds of megabytes on the chain within the next two to three years should not be difficult.

You may ask: what are the benefits of putting a model on the chain? Once the model is on the chain, it can be called directly by other contracts, which makes it more crypto-native and will certainly enable more ways to play. It has the feel of a fully on-chain game, where all the data is native to the chain. Some teams are already exploring this direction, but it is still very early.
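To make the composability point concrete, here is a toy sketch of why an on-chain model matters: once the model lives in chain state, another "contract" can call it directly. The chain, the two-byte "model", and the pricing contract are all invented for illustration:

```python
# Sketch: why an on-chain model is composable -- other "contracts" can call it
# directly as shared state. Everything here is a toy stand-in.

import hashlib

class ToyChain:
    """Minimal key-value state standing in for on-chain storage."""
    def __init__(self):
        self.state = {}

    def store_model(self, weights: bytes) -> str:
        # Content-address the model by hash, like calldata-committed storage.
        model_id = hashlib.sha256(weights).hexdigest()[:16]
        self.state[model_id] = weights
        return model_id

    def call_model(self, model_id: str, x: float) -> float:
        # Toy "model": interpret the stored bytes as coefficients (a, b)
        # of a linear function a*x + b.
        a, b = self.state[model_id]
        return a * x + b

chain = ToyChain()
model_id = chain.store_model(bytes([3, 5]))  # "weights": a=3, b=5

# A second contract can now compose with the model natively:
def pricing_contract(x: float) -> float:
    return 2 * chain.call_model(model_id, x)

print(pricing_contract(10))  # 2 * (3*10 + 5) = 70
```

In today's NFT pattern the weights would sit on an off-chain server, so `pricing_contract` could not exist as an on-chain call; that is the difference on-chain models would make.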

Direction 2 – In the medium to long term, if you think carefully about smart contracts, they are actually better suited to "machine-to-machine interaction" than to human-machine interaction. AI now has the AutoGPT concept, so you can create a "virtual avatar" or "virtual assistant" that not only chats with you but also performs tasks to your requirements, such as booking flights and hotels, buying domain names, and building websites.

Which is more convenient for an AI assistant: operating your various bank accounts and Alipay, or holding a blockchain address for transfers? The answer is obvious. So in the future, will there be swarms of AutoGPT-style AI assistants automatically conducting C2C, B2C, and even B2B payments and settlements through blockchains and smart contracts across all kinds of task scenarios? At that point the boundary between Web2 and Web3 will become very blurred.
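The settlement idea can be sketched as follows: the assistant holds an address on a shared ledger and pays service providers directly, instead of juggling logins to each provider's own payment system. The ledger, task, and provider names are hypothetical:

```python
# Sketch: an AutoGPT-style assistant settling tasks through one shared ledger
# instead of per-service bank logins. All names and amounts are invented.

class Ledger:
    """Toy stand-in for on-chain balances and transfers."""
    def __init__(self, balances):
        self.balances = dict(balances)

    def transfer(self, src: str, dst: str, amount: int) -> None:
        if self.balances.get(src, 0) < amount:
            raise ValueError("insufficient balance")
        self.balances[src] -= amount
        self.balances[dst] = self.balances.get(dst, 0) + amount

class Assistant:
    def __init__(self, address: str, ledger: Ledger):
        self.address = address
        self.ledger = ledger

    def execute(self, task: str, provider: str, price: int) -> str:
        # A real agent would first perform the task (book the flight, buy
        # the domain); here we model only the settlement step, which is the
        # part a blockchain address simplifies.
        self.ledger.transfer(self.address, provider, price)
        return f"done: {task}, paid {price} to {provider}"

ledger = Ledger({"user-wallet": 100, "airline": 0})
bot = Assistant("user-wallet", ledger)
print(bot.execute("book flight PEK->SFO", "airline", 60))
```

One address and one transfer primitive cover every provider; that is the convenience argument in miniature.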

2. Computing Power Platform

Projects in the computing-power area may not be as numerous as in the Bot/model track, but they are easier to understand. AI requires a huge amount of computing power, and over the past decade BTC and ETH have proven that there is a way to spontaneously organize and coordinate massive computing power, in a decentralized fashion, so that it collaborates and competes under economic incentives and game theory. That method can now be applied to AI.

The two most famous projects in the industry are undoubtedly Together and Gensyn. One raised tens of millions in its seed round; the other raised $43 million in its Series A. They reportedly raised so much because they need funds and computing power to train their own models first, and will then evolve into computing-power platforms that provide training for other AI projects.

Financing for inference-focused computing-power platforms is much smaller, because they essentially aggregate idle GPUs and other computing power and provide it to AI projects that need inference. RNDR aggregates rendering compute; these platforms aggregate inference compute. The technical moat is still vague, though, and I even wonder whether one day RNDR or Web3 cloud-computing platforms will extend their reach into inference compute.
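An inference computing-power platform of this kind is, at its core, a matching market between jobs and idle GPUs. A deliberately simplified sketch (provider names and prices are made up) that assigns each job to the cheapest open offer:

```python
# Sketch: matching inference jobs to idle GPU providers by price.
# Provider names and hourly prices are invented for illustration.

import heapq

class ComputeMarket:
    def __init__(self):
        self._offers = []  # min-heap of (price_per_hour, provider)

    def offer(self, provider: str, price_per_hour: float) -> None:
        heapq.heappush(self._offers, (price_per_hour, provider))

    def match(self) -> tuple:
        """Assign the next inference job to the cheapest available provider."""
        if not self._offers:
            raise RuntimeError("no compute available")
        return heapq.heappop(self._offers)

market = ComputeMarket()
market.offer("gpu-node-a", 1.2)
market.offer("gpu-node-b", 0.8)

price, provider = market.match()
print(provider, price)  # cheapest offer matched first
```

Real platforms add verification that the work was actually done and token incentives on top, but price-ordered matching of heterogeneous supply is the economic core.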

Compared with model assetization, computing-power platforms are a more practical and predictable direction: there will almost certainly be demand, and one or two top projects, in this field. The only open question is whether training and inference will each have their own leader, or whether one leader will cover both.

3. Data Platforms

This is actually not difficult to understand because the underlying components of AI are algorithms (models), computational power, and data.

Since there are “decentralized versions” of algorithms and computing power, data will certainly not be absent. This is also the direction Dr. Lu Qi, founder of MiraclePlus (Qiji Chuangtan), is most optimistic about when discussing AI and Web3.

Web3 has always emphasized data privacy and sovereignty, and technologies like ZK can ensure data reliability and integrity. AI trained on on-chain Web3 data should therefore differ from AI trained on off-chain Web2 data, so the overall thesis makes sense. Ocean is currently the reference project in this field, and projects building AI data markets on Ocean have already appeared in the primary market.

4. Generative AI

In simple terms, this means using AI to draw, or to generate content for other scenarios, for example creating NFTs, or generating maps and NPC backstories in games. NFTs feel harder to pull off, because AI-generated content lacks scarcity, but GameFi is a plausible path, and teams in the primary market have been trying it.

However, a few days ago I saw news that Unity (which, along with Unreal Engine, has dominated the game-engine market for many years) has launched its own AI generation tools, Sentis and Muse. They are currently in testing and are expected to launch officially next year. How should I put it, I feel the Web3 game-focused AIGC projects may well be impacted by Unity…

5. DeFi Trading/Audit/Yield/Risk Control

There have been projects attempting each of these categories, but homogeneity here is not as pronounced.

1) DeFi Trading – This is a bit tricky, because once a trading strategy works and more people adopt it, it tends to decay and has to be swapped for a new one. I am also curious what the success rate of AI trading bots will turn out to be, and where they will rank among human traders.

2) Audit – AI can presumably help quickly catch common, already-known vulnerability patterns, but it is unlikely to help with novel or logic-level vulnerabilities that have never occurred before; that will have to wait for the AGI era.

3) Yield and Risk Control – Yield is easy to understand: imagine an AI-powered YFI. You deposit money, and the AI finds platforms for staking, LP pools, mining, and so on based on your risk preferences. As for risk control, making it a standalone project feels odd; it makes more sense to offer it as a plugin-style service to lending and similar DeFi platforms.
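The "AI-powered YFI" idea boils down to mapping a risk preference to an allocation across pools. A toy sketch, with invented pool names, APYs, and risk scores:

```python
# Sketch: an "AI YFI" allocator choosing pools by risk preference.
# Pool names, APYs, and risk scores are invented for illustration.

POOLS = [
    # (name, expected_apy, risk_score in [0, 1])
    ("stable-lending", 0.04, 0.1),
    ("blue-chip-lp",   0.12, 0.4),
    ("degen-farm",     0.60, 0.9),
]

def allocate(amount: float, max_risk: float) -> dict:
    """Split funds evenly across the highest-APY pools whose risk score
    fits the user's tolerance."""
    eligible = [p for p in POOLS if p[2] <= max_risk]
    eligible.sort(key=lambda p: p[1], reverse=True)  # best APY first
    top = eligible[:2]  # cap at two pools for simplicity
    if not top:
        return {}
    share = amount / len(top)
    return {name: share for name, _, _ in top}

print(allocate(1000, max_risk=0.5))
```

A real product would replace the static table and even split with learned return/risk estimates and on-chain execution, but "filter by tolerance, rank by expected return, split" is the skeleton.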


6. ZKML

ZKML is becoming increasingly popular in the blockchain industry because it combines two cutting-edge technologies: ZK from inside the industry and ML (machine learning, a narrow branch of AI) from outside it.

In theory, combining with ZK can give ML privacy, integrity, and accuracy guarantees. But if you ask for specific use cases, many project teams cannot name any; they are building the infrastructure first…

Currently, the only real demand is privacy for patient data in some medical fields. Narratives like integrity or anti-cheating for on-chain games always feel a bit forced.

There are only a few star projects in this field, such as Modulus Labs, EZKL, and Giza, and they are highly sought after in the primary market. Few people in the world understand ZK, and even fewer understand both ZK and ML, so the technical threshold is high and homogeneity is not an issue. Lastly, note that ZKML mostly targets inference rather than training.