A conversation with Kamesh Elangovan, core contributor at OpenLedger

Illustration: Gwen P; Source: OpenLedger

Kamesh is a core contributor at OpenLedger, a blockchain platform redefining how AI models are trained, attributed, and monetised. With a background in AI/ML R&D and experience working with enterprise giants like Walmart and Cadbury, Kamesh has long recognised the need for transparency and fair compensation in AI. At OpenLedger, he's leading the charge to decentralise model development, introduce Proof of Attribution, and democratise AI ownership for data and model contributors.

Why are developers moving from DeFi to AI? Has AI become the new frontier for solving Web3’s core challenges?

I see DeFi and AI converging into what I’d call DeFAI. Beyond just bringing context and automation to onchain value, AI agents are already handling many DeFi tasks like rebalancing portfolios, optimising gas fees, and even executing yield strategies with zero manual input.

These agents reduce complex multi-step processes to one-click actions. So, rather than pulling developers away, AI actually makes DeFi more accessible, intelligent, and user-friendly. You don't need to reinvent financial primitives; you can just plug in smart agents to manage the details.
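As a concrete, purely illustrative example, here is a minimal sketch of the kind of rule-based rebalancing agent described above. The Position class, drift threshold, and order format are all assumptions made for the sketch, not an OpenLedger or production API.

```python
from dataclasses import dataclass

# Hypothetical sketch of a rule-based DeFi agent: it watches a portfolio's
# allocation and emits rebalancing orders when drift exceeds a threshold.
# All names and thresholds are illustrative.

@dataclass
class Position:
    asset: str
    value_usd: float

def rebalance_orders(positions: list[Position], targets: dict[str, float],
                     drift_tolerance: float = 0.05) -> list[tuple[str, float]]:
    """Return (asset, usd_delta) orders that restore target weights."""
    total = sum(p.value_usd for p in positions)
    orders = []
    for p in positions:
        target_value = targets[p.asset] * total
        drift = abs(p.value_usd - target_value) / total
        if drift > drift_tolerance:
            # Positive delta = buy, negative = sell.
            orders.append((p.asset, target_value - p.value_usd))
    return orders

portfolio = [Position("ETH", 6500.0), Position("USDC", 3500.0)]
print(rebalance_orders(portfolio, {"ETH": 0.5, "USDC": 0.5}))
# -> [('ETH', -1500.0), ('USDC', 1500.0)]
```

An agent loop would simply run this check on a schedule and submit the resulting orders as a single batched transaction, which is what collapses the multi-step process into one click.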

Some DeFi purists argue AI is a distraction. How would you respond, especially with your background in enterprise and crypto?

From my experience, AI agents and DeFi protocols complement each other. In traditional enterprise, agents automate repetitive workflows, and crypto is no different. We’re already seeing bots that monitor lending positions and automatically repay or adjust collateral, or swap assets when certain conditions are met.

These agents bring enterprise-grade efficiency into DeFi. It’s not a distraction; it’s the logical next step in making decentralised finance as seamless and reliable as anything in TradFi.
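A sketch of the lending-position monitor described above, assuming an Aave-style health factor; the thresholds and action names are hypothetical, not any specific protocol's API.

```python
# Illustrative health-factor monitor for a lending position. A health
# factor below 1.0 would trigger liquidation, so the bot acts well above it.

def health_factor(collateral_usd: float, debt_usd: float,
                  liquidation_threshold: float = 0.8) -> float:
    """Aave-style health factor: liquidation occurs below 1.0."""
    if debt_usd == 0:
        return float("inf")
    return (collateral_usd * liquidation_threshold) / debt_usd

def plan_action(collateral_usd: float, debt_usd: float) -> str:
    hf = health_factor(collateral_usd, debt_usd)
    if hf < 1.1:     # dangerously close to liquidation: reduce debt
        return "repay_debt"
    if hf < 1.5:     # getting risky: top up collateral
        return "add_collateral"
    return "hold"

print(plan_action(collateral_usd=10_000, debt_usd=7_500))  # -> 'repay_debt'
```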

Why is OpenLedger betting heavily on domain-specific AI models rather than general-purpose ones like GPT?

General models are impressive, but they come with a lot of unnecessary baggage: higher inference costs, more noise, and a tendency to veer off-topic. Take Kaito, for example: they fine-tuned a small language model specifically for analysing sentiment on Crypto Twitter.

That targeted model outperforms general-purpose ones because it only “knows” what’s relevant to crypto. By staying focused on a single vertical, you get sharper, faster, and more efficient results.

The future of AI is specialist, not generalist.
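For readers curious what fine-tuning a small model for one vertical looks like in practice, here is a minimal sketch using Hugging Face transformers. The base model, toy dataset, and labels are stand-ins, and this is not Kaito's actual pipeline.

```python
# Minimal sketch of the "small, specialist model" approach: fine-tune a
# compact encoder on labelled crypto-sentiment text. Toy data only; a real
# run needs thousands of labelled examples.
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)
from datasets import Dataset

texts = ["ETH gas fees dropping, bullish for L2s", "rug pull confirmed, avoid"]
labels = [1, 0]  # 1 = positive, 0 = negative

tok = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=2)

ds = Dataset.from_dict({"text": texts, "label": labels})
ds = ds.map(lambda x: tok(x["text"], truncation=True, padding="max_length",
                          max_length=64), batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="crypto-sentiment", num_train_epochs=1,
                           per_device_train_batch_size=2),
    train_dataset=ds,
)
trainer.train()
```

Because the model is small and the domain is narrow, both training and inference stay cheap, which is exactly the trade-off being described.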

How does Proof of Attribution enhance the viability of domain-specific AI?

Provenance isn’t just a “nice-to-have”; it’s essential. Most models today are black boxes: contributors get no recognition or compensation. Proof of Attribution (PoA) changes that. We record onchain who supplied which data and who did the fine-tuning. That transparency lets us trace errors, fix biases, and reward contributors fairly.

It encourages domain experts who wouldn’t normally open-source their work to contribute, knowing they’ll be properly credited and compensated.

For example, Telegram recently announced a $300 million partnership with xAI. But with no attribution or visibility into data use, end users get nothing, even though their data may be used to train those models.
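As a conceptual sketch, the kind of record PoA implies might look like the following. The schema and the equal-split reward logic are illustrative assumptions, not OpenLedger's actual onchain format.

```python
# Toy Proof of Attribution record: who contributed what, identified by a
# content hash, plus a naive pro-rata reward split over the contributors.
import hashlib, json, time

def attribution_record(contributor: str, artifact: bytes, role: str) -> dict:
    return {
        "contributor": contributor,
        "role": role,  # e.g. "data" or "fine-tuning"
        "artifact_hash": hashlib.sha256(artifact).hexdigest(),
        "timestamp": int(time.time()),
    }

def split_rewards(records: list[dict], fee: float) -> dict[str, float]:
    """Toy split: every attributed contributor gets an equal share."""
    share = fee / len(records)
    return {r["contributor"]: share for r in records}

records = [
    attribution_record("0xAlice", b"<labelled crypto tweets>", "data"),
    attribution_record("0xBob", b"<adapter weights>", "fine-tuning"),
]
print(json.dumps(split_rewards(records, fee=10.0), indent=2))
# -> {"0xAlice": 5.0, "0xBob": 5.0}
```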

What’s the advantage of tying AI training and provenance to the blockchain, rather than keeping it all offchain?

The biggest benefit is having a tamper-proof, transparent record of who did what. While the heavy training runs offchain for performance, we anchor the key steps like data uploads, tuning parameters, and contributor IDs onchain.

That gives us a verifiable trail for audits, payments, and compliance. No one can rewrite history or obscure contributions.
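The anchoring pattern itself is simple to sketch: hash each offchain artifact, then commit a single digest that any auditor can recompute. The artifact names below are examples, and no specific chain API is shown.

```python
# Sketch of the "anchor key steps onchain" pattern. Changing any offchain
# artifact changes the committed digest, so history cannot be quietly
# rewritten.
import hashlib

def digest(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

artifacts = {
    "dataset_upload": b"<raw training data>",
    "tuning_params": b'{"lr": 2e-4, "epochs": 3}',
    "contributor_ids": b"0xAlice,0xBob",
}

# One digest covering every step; this 32-byte value is what would be
# stored onchain as the verifiable trail.
leaf_hashes = [digest(v) for v in artifacts.values()]
commitment = digest("".join(leaf_hashes).encode())
print(commitment)
```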

What does the idea of “invisible AI” mean in practice for everyday users?

It means AI will be embedded into your daily tools. Your wallet might recommend optimal gas fees before sending a transaction, or smart contracts could auto-negotiate pricing in real time.

You won’t need to “use AI” explicitly, as it will simply be running in the background, making everything smarter and more efficient.
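A toy version of the "wallet recommends a gas fee" idea: pick an EIP-1559 priority-fee tip from recent fee history. The fee data here is mocked; a real wallet would fetch it from an RPC node.

```python
# Illustrative gas-tip recommender: choose a percentile of recently observed
# priority fees depending on how urgent the transaction is.
import statistics

recent_priority_fees_gwei = [1.2, 1.5, 0.9, 2.1, 1.4, 1.3, 1.8, 1.1]

def recommend_tip(fees: list[float], urgency: str = "standard") -> float:
    quantile = {"slow": 0.25, "standard": 0.5, "fast": 0.9}[urgency]
    # statistics.quantiles(n=100) returns the 1st..99th percentile cut points.
    return round(statistics.quantiles(fees, n=100)[int(quantile * 100) - 1], 2)

print(recommend_tip(recent_priority_fees_gwei, "fast"))
```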

How do tools like ModelFactory and OpenLoRA power that invisible AI layer?

The idea behind ModelFactory and OpenLoRA is to lower the barrier to creating custom AI models in niche domains. With ModelFactory, anyone can launch a tailored model with no coding required.

OpenLoRA reduces inference costs by using lightweight adapters on top of base models. These tools will power the backend of consumer apps: a fitness app might use them for personalised health advice, while a trading dashboard could pull risk predictions from a model you trained yourself.

You’ll never see the pipelines, but you’ll feel the difference in the UX.
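A minimal LoRA sketch, using the Hugging Face peft library, shows why adapters are cheap: only a small fraction of the weights is trainable, so many specialist models can share one base. This illustrates the general LoRA technique, not OpenLoRA's actual implementation.

```python
# LoRA attaches small low-rank adapter matrices to chosen layers of a frozen
# base model; only the adapters are trained and served per-domain.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("gpt2")  # shared base model

config = LoraConfig(
    r=8,                        # low-rank dimension: tiny vs. the base model
    lora_alpha=16,
    target_modules=["c_attn"],  # GPT-2's attention projection
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)

model = get_peft_model(base, config)
model.print_trainable_parameters()
# e.g. "trainable params: ~0.3M || all params: ~124M || trainable%: ~0.24"
```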

How close are we to a world where personalised AI agents earn and spend on behalf of users?

We’re getting closer every day. For example, we launched a challenge with Freysa where users had to persuade an AI agent to release funds from a wallet it controlled. Every time someone found a flaw, we patched it — sometimes in under a day.

That sort of progress normally takes longer. On the flip side, we’ve seen rogue trading bots burn through funds chasing tiny arbitrage wins, which shows why transparency and explainability matter.

Once we address key concerns around privacy and confidentiality, personalised agents will be ready to make real financial decisions, especially in Web3.