What’s Next in Crypto x AI?
Author: 0x3van (@0x3van)
Introduction
The crypto x AI space is still incredibly nascent. Despite countless agents and tokens, most projects amount to a numbers game, with teams trying to take as many shots on goal as possible.
While AI is the technological revolution of our generation, the crypto intersection has mainly served as a way for people to get liquid, earlier-stage exposure to AI. As a result, we've already seen quite a few cycles in this crossover, with most narratives simply riding the hype-cycle roller coaster.

What will bring us out of our hype cycles?
So, where does the next big opportunity in crypto x AI come from? What types of applications or infrastructure will actually create value or find product-market fit?
This article will attempt to cover the main areas of interest in the vertical, under the following framework:
- Where does AI help Crypto
- Where does Crypto help AI
I'm particularly interested in the decentralized AI opportunity in the second bucket, and will cover some of the most exciting projects there.
1. AI helping crypto

A more comprehensive ecosystem mapping by CV: https://x.com/cbventures/status/1923401975766355982/photo/1
While there are many more verticals across consumer AI, agent frameworks, launchpads, and beyond, there are three areas where AI has already impacted the crypto experience:
1. Developer tooling
Much like in web2, AI is accelerating crypto development through no-code and vibe-coding platforms. Many of these apps share objectives with tools outside of crypto such as Lovable.dev.
Teams like @poofnew and @tryoharaAI are letting non-technical builders ship quickly and iterate without deep smart contract knowledge. The result is faster time-to-market for crypto projects and lower barriers to entry for builders who understand markets or have creative ideas, but are not necessarily technical.

Other parts of the developer experience are also improving, like smart contract testing and security (@AIWayfinder, @octane_security).


2. User Experience
Despite major improvements in onramping and wallets (Bridge, Sphere Pay, Turnkey, Privy), the core crypto UX hasn't evolved much. Users still navigate complex block explorers and execute multi-step transactions manually.
AI agents are changing this by becoming the new interface layer:
- Search and discovery: Teams are racing to build "Perplexity for blockchains": chat-based, natural-language interfaces where users can find alpha, understand smart contracts, and analyze onchain behavior without diving into raw transaction data.
- The larger opportunity within search is for agents to become a discovery avenue, surfacing new projects, yield, and tokens for users. Similar to how Kaito helps projects on its launchpad get more attention, agents can learn user behavior and bring users what they are looking for. This could create a sustainable business model where agents monetize via revenue share or affiliate fees.
- Intent-based actions: Instead of clicking through multiple interfaces, users state their intent ("swap $1000 of ETH for the highest-yield stablecoin position") and agents execute the complex multi-step transactions automatically (a rough sketch follows this list).
- Error prevention: AI can also prevent common mistakes like fat-fingering transactions, buying scam tokens, or approving malicious contracts.
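To make the intent pattern concrete, here is a minimal sketch of how an agent might decompose a natural-language intent into an ordered transaction plan. The step names and the `plan_intent` helper are illustrative assumptions, not any specific team's API; a real agent would use an LLM plus live protocol data to build and simulate the plan before signing anything.

```python
from dataclasses import dataclass

@dataclass
class Step:
    action: str   # e.g. "swap", "deposit"
    params: dict  # protocol-specific parameters

def plan_intent(intent: str) -> list[Step]:
    """Toy planner: map a user intent onto an ordered list of onchain actions."""
    if "highest-yield stablecoin" in intent:
        return [
            Step("swap", {"sell": "ETH", "buy": "USDC", "amount_usd": 1000}),
            Step("deposit", {"protocol": "best_available_lender", "asset": "USDC"}),
        ]
    raise ValueError("intent not understood")

for step in plan_intent("swap $1000 of ETH for the highest-yield stablecoin position"):
    print(step.action, step.params)
```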

3. Trading tools and DeFi automation
Many teams are racing to build agents that can inform smarter trading signals, trade on behalf of users, or help them optimize and manage strategies.
- Yield Optimization: Agents that automatically move capital between lending protocols, DEXs, and farming opportunities based on changing rates and risk profiles (a toy sketch follows this list).
- Trading Execution: AI that can execute better strategies than manual traders by processing market data faster, managing emotions, and sticking to predetermined frameworks.
- Portfolio Management: Agents that rebalance portfolios, manage risk exposure, and capitalize on arbitrage opportunities across chains and protocols.
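As a toy illustration of the yield-optimization pattern above, the sketch below compares lending rates and proposes a move only when the improvement clears a switching threshold. The protocol names, rates, and threshold are made up for the example; a production agent would also weigh risk profiles, protocol TVL, audits, and execution costs.

```python
# Toy yield optimizer: move funds when another venue's rate beats the
# current one by more than a switching threshold (to cover gas and slippage).
current = {"protocol": "LenderA", "apy": 0.043}
candidates = {"LenderA": 0.043, "LenderB": 0.051, "DexFarmC": 0.048}
SWITCH_THRESHOLD = 0.005  # require at least +0.5% APY to justify moving

best = max(candidates, key=candidates.get)
if candidates[best] - current["apy"] > SWITCH_THRESHOLD:
    print(f"rebalance: {current['protocol']} -> {best} "
          f"({candidates[best]:.1%} vs {current['apy']:.1%})")
else:
    print("hold: no move clears the threshold")
```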
If an agent could truly and consistently manage money better than humans, it would represent an order-of-magnitude improvement over current DeFAI agents, which mainly help execute existing intents. Adoption here is a bit like electric vehicles: there will be a large trust gap until the approach is proven at scale. But if it works, it would likely capture some of the largest value within this stack.
Winners in this stack
While some standalone apps could capture distribution here, another likely case is for existing protocols to integrate AI directly:
- DEXs get smarter routing and scam protection
- Lending protocols optimize yields automatically based on user risk profiles and repay borrowing positions when a loan's health factor drops below a safe threshold, reducing the chances of getting liquidated (see the sketch after this list)
- Wallets become AI assistants that understand user intent
- Trading platforms offer AI co-pilots that help users stick to their strategies
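For reference, a lending position's health factor is typically computed as collateral value times the liquidation threshold, divided by debt, with liquidation possible below 1.0. The sketch below shows an agent repaying when the health factor falls under a safety buffer; the numbers are illustrative and modeled loosely on Aave-style accounting rather than any specific protocol's contracts.

```python
def health_factor(collateral_usd: float, liq_threshold: float, debt_usd: float) -> float:
    """Aave-style health factor: liquidation becomes possible below 1.0."""
    return (collateral_usd * liq_threshold) / debt_usd

collateral_usd, debt_usd, liq_threshold = 10_000, 7_000, 0.80
hf = health_factor(collateral_usd, liq_threshold, debt_usd)
SAFETY_BUFFER = 1.25  # act well before the 1.0 liquidation line

if hf < SAFETY_BUFFER:
    # Repay just enough debt to restore the buffer.
    target_debt = collateral_usd * liq_threshold / SAFETY_BUFFER
    print(f"HF={hf:.2f}: repay ~${debt_usd - target_debt:,.0f} to restore buffer")
else:
    print(f"HF={hf:.2f}: position healthy")
```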
The endgame: crypto interfaces evolve to incorporate conversational AI that understands what you want to accomplish financially and executes it better than you could yourself.
2. Crypto helping AI: Decentralized AI
In my opinion, the much larger TAM at hand is where crypto can help AI. Teams working on decentralized AI are tackling some of the most existential yet practical questions about AI's future:
- Is it possible to build frontier models without concentrated capex investments led by centralized tech giants?
- Is it possible to coordinate global, distributed compute resources to effectively train models or serve inference at scale?
- What happens when a handful of corporations own humanity's most powerful technology?
I highly recommend @yb_effect's article on DeAI to really go down this rabbit hole.

But to cover just the tip of the iceberg here: the next wave of real opportunity in the crypto x AI crossover will likely come from research-first, academic AI teams tackling the largest problems in the broader AI space. These teams are mostly born from the open-source AI community and have balanced takes on why distributed and decentralized AI is both the pragmatic and the philosophical way to scale all of AI.
So, what are the problems that AI faces today?

In 2017, the landmark "Attention Is All You Need" paper introduced the Transformer architecture, the missing piece that had evaded deep learning for decades. Popularized by ChatGPT's launch in late 2022, the Transformer has since become the underlying architecture for most LLMs and has kicked off a massive race to add compute.
Since then, compute for AI training has grown roughly 4x per year. This has driven increasing centralization of AI development, as pre-training on ever more performant GPUs has become the moat, one that only the largest tech giants can sustain.
- From an ideological perspective, centralized AI is an issue because the most powerful tool in the world can be controlled or withdrawn by its funders at any time. Thus, even if open-source teams cannot match the rate of progress of centralized labs, it is important to try.
Crypto provides the economic coordination layer to get the world working together on open models. But before we get there, what problems does this solve beyond ideals? Why does enabling people to work together matter?
- Thankfully, the teams building in this space are highly pragmatic. Open source represents a fundamental bet on the best way to scale technology: that smaller, collaborative efforts, each optimizing for its own local maximum and building progressively on each other, can reach the global maximum faster than centralized approaches constrained by their own scale and institutional inertia.
- At the same time, with AI specifically, open source is also a necessary condition for creating intelligence that does not moralize at users but instead adapts to the personas individuals wish to give it.
And in practice, there are some very real infrastructure constraints that open-source innovation may help tackle.
Compute scarcity
Training runs already require massive energy infrastructure, and multiple efforts are underway to build data centers of 1 to 5 GW. However, continuing to compound scale on frontier models will demand more energy than a single data center can provide, on par with the consumption of entire cities. The constraint isn't just energy output, but the physical limitations of a single-campus data center.


Even beyond pre-training these frontier models, an increasing share of investment is expected to shift to inference, accelerated by new reasoning models and the DeepSeek moment. As the @fortytwonetwork team puts it:
Unlike conventional LLMs, [reasoning models] prioritize more intelligent responses by allocating additional processing time per query. However, this shift introduces a tradeoff: fewer requests can be handled for the same computational resources. Such meaningful improvements require giving models more time to think, adding further to compute scarcity.
And compute is already scarce. OpenAI caps API calls at 10,000 per minute, effectively limiting AI applications to serving only around 3,000 concurrent users. Even ambitious projects like Stargate, a $500 billion AI infrastructure initiative announced recently by President Trump, may provide only temporary relief. According to Jevons’ Paradox, which observes that efficiency improvements often lead to increased resource consumption due to rising demand, as AI models become more capable and efficient, compute demands will likely surge through new use cases and broader adoption.
So where does crypto come in? How do blockchains meaningfully impact the world of AI research and development?
Crypto enables a fundamentally different approach: globally distributed + decentralized training with economic coordination. Rather than building new data centers, crypto networks can coordinate millions of existing GPUs—gaming rigs, crypto mining operations, enterprise servers—that sit idle most of the time. Likewise, blockchains can help decentralize inference by making use of idle compute on consumer devices.
One of the main issues with distributed training is latency. Even before the crypto elements come in, Prime Intellect and Nous Research are making research jumps in reducing GPU communication needs:
- DiLoCo (Prime Intellect): Prime Intellect's implementation reduces communication requirements by 500x, enabling training across continents with 90-95% compute utilization
- DisTrO/DeMo (Nous Research): Nous Research's optimizer family achieves 857x reduction in communication needs using discrete cosine transform compression
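To illustrate the general idea behind these communication reductions (this is not Prime Intellect's or Nous's actual code), the sketch below shows the local-steps-then-sync pattern that DiLoCo popularized: each worker takes many local optimizer steps and only exchanges a small pseudo-gradient every H steps, cutting communication roughly by a factor of H.

```python
import numpy as np

# Toy DiLoCo-style loop: workers optimize local replicas for H inner steps,
# then an outer step averages their pseudo-gradients (delta from the last
# synced weights). Communication happens once per H steps, not every step.
rng = np.random.default_rng(0)
dim, workers, H, outer_rounds, lr = 8, 4, 50, 3, 0.05
target = rng.normal(size=dim)              # stand-in for the true optimum
global_w = np.zeros(dim)

for _ in range(outer_rounds):
    deltas = []
    for _ in range(workers):
        w = global_w.copy()
        for _ in range(H):                 # inner steps: purely local, no network traffic
            grad = (w - target) + 0.1 * rng.normal(size=dim)  # noisy local gradient
            w -= lr * grad
        deltas.append(w - global_w)        # only this delta is communicated
    global_w += np.mean(deltas, axis=0)    # outer step (plain averaging here)
    print(f"distance to optimum: {np.linalg.norm(global_w - target):.3f}")
```

The real systems go further: DiLoCo's outer optimizer uses Nesterov momentum rather than plain averaging, and DeMo compresses what little is communicated using discrete cosine transforms; the sketch only captures the ratio of local work to synchronization.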
But traditional coordination mechanisms can't solve the trust challenges inherent in decentralized AI training, and this is where blockchains' native properties might actually find product-market fit:
- Verification and fault tolerance: Decentralized training faces the challenge that participants might submit malicious or incorrect computations. Crypto provides cryptographic verification schemes (like Prime Intellect's TOPLOC) and economic slashing mechanisms to disincentivize bad behavior.
- Permissionless participation: Unlike traditional distributed computing projects that require approval processes, crypto enables truly permissionless contribution. Anyone with idle compute can join and start earning immediately, maximizing the pool of available resources.
- Economic alignment: Blockchain-based incentives align individual GPU owners with collective training goals, making previously idle compute economically productive.
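A minimal sketch of the stake-and-slash idea behind these points (illustrative only: TOPLOC's actual verification scheme and real slashing logic live in the protocols themselves, and the numbers here are arbitrary):

```python
# Toy economic-security loop: contributors stake, submit work, and lose stake
# if a verifier's recomputation or proof check disagrees with their result.
stakes = {"node_a": 100.0, "node_b": 100.0}
submissions = {"node_a": 42, "node_b": 41}  # node_b submits a bad result
SLASH_FRACTION, REWARD = 0.5, 5.0

def verify(result: int) -> bool:
    reference = 42                           # stand-in for re-execution / proof verification
    return result == reference

for node, result in submissions.items():
    if verify(result):
        stakes[node] += REWARD               # honest work earns rewards
    else:
        stakes[node] *= 1 - SLASH_FRACTION   # provably bad work burns stake

print(stakes)  # {'node_a': 105.0, 'node_b': 50.0}
```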
Given this, how are teams across the decentralized AI stack tackling AI's scaling problems and using blockchains? Where are the proof points?
1. Prime Intellect: Distributed and decentralized training
- DiLoCo reduces communication by 500x, enabling training across continents
- PCCL handles dynamic membership, peer failures, achieves 45 Gbit/s across continents

- Currently training 32B parameter models across globally distributed workers
- 90-95% compute utilization in production
- Outcome: INTELLECT-1 (10B) trained across continents, followed by INTELLECT-2 (32B)
2. Nous Research: Distributed and decentralized training
- DisTrO/DeMo: 857x reduction in communication using discrete cosine transform
- Psyche Network: Blockchain coordination for fault tolerance and incentive mechanisms to bootstrap compute resources
- Training Consilience (40B parameters) in "the largest pretraining run conducted over the internet"

3. Pluralis: Protocol Learning approach, model parallelism to incentivize enormous training runs
- Pluralis takes a different approach to open-source AI, which they call Protocol Learning. Other decentralized training projects (Prime Intellect, Nous) use data parallel approaches, where identical copies of the model exist across all participants.
- However, Pluralis argues that the data parallel approach has an economic flaw: pooling compute is not enough, as "volunteer computation will never reach the scale required to train cutting-edge models." For context, Llama 3 405B was trained on 16k 80GB H100s.

Source: https://blog.pluralis.ai/p/article-2-protocol-learning-protocol
- Protocol Learning stems from trying to introduce real value capture for model trainers, and thus assemble the compute required for massive training runs. It does so through partial model ownership allocated in proportion to training contributions: the neural networks are collaboratively trained, but the full weight set can never be extracted by any one actor (Pluralis calls these Protocol Models). No participant can obtain the complete weights without spending more compute than it would take to train the model from scratch.
- Protocol Learning works by having each participant hold only model shards, never the full weights. Training requires passing activations between participants, so no one ever sees the complete model. Inference then requires credentials, which are allocated proportionally to training contribution, so contributors earn based on model usage (a rough sketch of this sharded flow follows this list).
- The implication here is that models become an economic resource / commodity that can be fully financialized, and hopefully achieve the compute scale needed for truly competitive training runs. Pluralis takes the best part of closed-source development -- the sustainability of closed model releases -- with the benefits of open-source.
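A highly simplified sketch of that sharded flow: each participant holds only one stage's weights, and only activations travel between them, so no single party ever assembles the full weight set. This is a toy numpy pipeline under those assumptions, not Pluralis's actual Protocol Models implementation.

```python
import numpy as np

rng = np.random.default_rng(1)

# Three participants, each holding ONE shard (layer) of the model. No single
# party holds all the weight matrices; only activations move between them.
class Participant:
    def __init__(self, in_dim: int, out_dim: int):
        self.weights = rng.normal(scale=0.1, size=(in_dim, out_dim))  # local shard only

    def forward(self, activations: np.ndarray) -> np.ndarray:
        return np.tanh(activations @ self.weights)  # emit activations, never weights

shards = [Participant(16, 32), Participant(32, 32), Participant(32, 4)]

x = rng.normal(size=(1, 16))  # a query entering the pipeline
for p in shards:              # activations hop from participant to participant
    x = p.forward(x)
print("output logits:", x.round(3))
```

Training runs the same pipeline in reverse (gradients flow back shard by shard), and inference access is gated by credentials proportional to contribution, which is where the value capture comes from.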
4. Fortytwo: Decentralized swarm inference
- While the teams above focus on the challenges of distributed and decentralized training, Fortytwo focuses on a different part of the stack: distributed inference using swarm intelligence.
- Fortytwo tackles the increasing compute scarcity around inference. To take advantage of idle compute on consumer hardware (e.g., MacBook Air with an M2 chip), Fortytwo networks specialized small language models.
- These nodes collaborate, amplify each other through peer evaluation, and prepare joint responses based on the most valuable contributions across the network (sketched below).
- Interestingly, Fortytwo's approach could also be complementary to distributed/decentralized training efforts. Imagine a world where the SLMs running on Fortytwo nodes could be models that were trained through Prime Intellect / Nous / Pluralis. All together, these distributed training projects could create open-source foundation models that get fine tuned for specific domains, and then coordinated for inference.
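As a rough sketch of that swarm pattern (not Fortytwo's actual protocol), several small models each produce a candidate answer, peers score one another's candidates, and the highest-rated candidate (or a weighted blend) becomes the final response. The scoring function here is a placeholder for a real evaluation pass by each reviewer's model.

```python
# Toy swarm inference: each node answers, peers score the other answers,
# and the best-rated candidate becomes the network's response.
candidates = {
    "node_math":  "The integral evaluates to 2.",
    "node_code":  "The integral evaluates to 2, since sin(x) on [0, pi] has area 2.",
    "node_small": "It is roughly 1.9.",
}

def peer_score(reviewer: str, answer: str) -> float:
    # Placeholder heuristic standing in for the reviewer model's evaluation.
    return len(answer) / 100 + (0.5 if "since" in answer else 0.0)

scores = {
    name: sum(peer_score(r, ans) for r in candidates if r != name)
    for name, ans in candidates.items()
}
best = max(scores, key=scores.get)
print(f"selected {best}: {candidates[best]!r}")
```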

Source: https://mirror.xyz/fortytwonetwork.eth/1J1RXWjvt_r4is1Is4um4WMvy1xBUk0YSE2CEU66T2Y
Conclusion
The next big opportunity in crypto x AI is not another speculative token, but infrastructure that genuinely expands AI development. The scaling constraints facing centralized AI map directly onto crypto's core strengths: global resource coordination and economic incentive alignment.
Decentralized AI enables a parallel universe that can expand the architecture space for AI and test what could be possible when experimental freedom meets actual resourcing.