Why Bitcoin Is the Most Suitable Currency for Artificial Intelligence


Author: Lyn Alden; Translator: GaryMa, Wu Blockchain. This translation is authorized by the author.

What is the most suitable currency for AI?

What form of currency is best for AI agents? Based on what I have seen so far, I believe Bitcoin is the primary answer.

In a future with heavy payment flows between humans and machines, an open-source, natively digital currency is a natural fit.

AI agents can create a Bitcoin/cryptocurrency wallet, earn funds, and use those funds.

An AI can start from scratch, be assigned a task, or simply “decide” to accomplish one. While assessing the best way to complete the task, it identifies certain paid services that would be useful or essential. So it generates a Bitcoin/Lightning wallet (which, unlike a bank account, requires no one's permission, because the software is open source), and then either persuades its creator to fund it or earns funds online in other ways (perhaps by posting clever content on Nostr and collecting tips). With funds in hand, it can pay for what it needs.
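The permissionless step here is just local key generation. The following is a toy sketch of that idea, not a real wallet implementation: real Bitcoin wallets derive secp256k1 keys from a BIP-39 seed phrase, whereas here the "public identifier" is simply a hash of locally generated entropy, to show that no third party is involved at any point.

```python
import secrets
import hashlib

def generate_toy_wallet() -> dict:
    """Toy illustration: a 'wallet' is just locally generated key material.

    Real Bitcoin wallets use secp256k1 keys derived from a BIP-39 seed;
    here the public identifier is simply a hash of the secret, to show
    that no third party's permission is needed at any step.
    """
    private_key = secrets.token_bytes(32)              # 256 bits of local entropy
    public_id = hashlib.sha256(private_key).hexdigest()
    return {"private_key": private_key.hex(), "public_id": public_id}

wallet = generate_toy_wallet()
```

Nothing in this process requires an application, an identity check, or an account approval, which is the property the paragraph above relies on.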

An efficient, open-source, natively digital currency can settle faster, permissionlessly, and in far smaller denominations (microtransactions) than existing fiat rails, opening up more possibilities for machine-to-machine and human-to-machine payments. It is a natural currency for AI agents.

This trend is already underway. At the end of June, the Bitcoin companies Fedi and Stak organized an AI hackathon, calling on “all developers, builders, and pioneers who believe in empowering people and communities with Bitcoin, Lightning, and Fedimint technology” to join them and help democratize artificial intelligence.

In early July, Lightning Labs (a major Lightning Network infrastructure provider) announced the launch of a new set of developer tools to support this emerging intersection of Bitcoin and AI.

Simply put, Lightning Labs’ tools mainly have two aspects:

1) Making it easier to build AI agents that can hold and use Bitcoin to complete tasks;

2) Using the Lightning Network together with the long-standing “HTTP 402 Payment Required” status code to make it easy to charge per use for resource-intensive computation.
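To make the 402 flow concrete, here is a minimal, self-contained simulation of the handshake. The function names and the in-memory "invoice" store are invented for illustration; in the real Lightning-based flow, the 402 response carries a Lightning invoice, and the proof of payment is the invoice's preimage, which the payer learns only by paying.

```python
import secrets
import hashlib

# Minimal in-memory simulation of an HTTP 402 payment gate.
# In the real Lightning flow the invoice is a Lightning invoice and the
# proof of payment is the invoice preimage; both are simulated here.

_pending = {}  # payment_hash -> preimage (revealed once "paid")

def request_resource(proof=None):
    """Server side: demand payment, or serve once proof is valid."""
    if proof is None:
        preimage = secrets.token_bytes(32)
        payment_hash = hashlib.sha256(preimage).hexdigest()
        _pending[payment_hash] = preimage
        # The 402 response carries the "invoice" (here, just the payment hash)
        return 402, {"invoice": payment_hash}
    payment_hash = hashlib.sha256(proof).hexdigest()
    if payment_hash in _pending:
        return 200, {"body": "here is your GPU-backed completion"}
    return 401, {"error": "invalid proof of payment"}

def pay_invoice(invoice):
    """Client side: 'paying' reveals the preimage (simulated)."""
    return _pending[invoice]

status, resp = request_resource()         # first call: 402 plus an invoice
proof = pay_invoice(resp["invoice"])      # client pays and learns the preimage
status2, resp2 = request_resource(proof)  # retry with proof -> 200
```

The key property is that the server can verify payment cryptographically (the preimage hashes to the invoice's payment hash) without knowing anything about who the payer is.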

To see the effect of the latter, you can watch this two-minute video shared by Olaoluwa Osuntokun, Chief Technology Officer of Lightning Labs.


In this video, the user (Kody Low, developer and product support manager at Fedi) asks GPT-4 to write code for him and pays the cost for each question using Lightning payments (approximately $0.01 per question). Ironically, the code he asks the AI to write is exactly what he has already showcased as a working service: the ability to pay for API calls using the Lightning Network.

Specifically, Kody asks the AI: “How do I create a reverse proxy that uses the Lightning Network to pay for my API calls? Use the LND backend and write the reverse proxy for me in Golang.” He then pays the request over the Lightning Network. The AI accepts the payment and answers with well-organized, commented code. Next, Kody asks it to rewrite the code in Rust instead of Golang and pays again. The AI rewrites the code, explains certain parts in more detail in plain English, and notes that a fully functional version would need more error handling and other minor details, but that this is the basic outline of the program. Kody then thanks it, remarks that AI and the Lightning Network are a good match, and pays once more. The AI replies that he is welcome and agrees that both are powerful technologies that can create advanced solutions when combined.

I personally tested one of these programs: I was able to use ChatGPT on my phone without a subscription, paying the equivalent of one or two cents per question over the Lightning Network, with the price fluctuating dynamically according to some supply-and-demand algorithm running in the background.

For fun, I asked ChatGPT for some examples of AI agents needing to pay humans. Here’s what it said:

Great question! AI agents may need to pay humans as part of completing various tasks. Here are some potential scenarios:

1. Gig economy and micro-tasks: AI may be tasked with finding people to complete small tasks, such as labeling images or transcribing audio, which is common on gig platforms. After the task is completed, the AI can send the agreed-upon payment to the worker.

2. E-commerce transactions: If AI manages an e-commerce platform or online marketplace, it may need to send payments to sellers or service providers. This could be in response to product sales, refunds, or service fees.

3. Content creators and online influencers: AI agents may manage platforms or channels where people post content or influence others. They can be programmed to distribute payments or rewards to these creators based on views, likes, shares, or other metrics.

4. Crowdsourcing and idea generation: Companies can use AI to manage innovation or idea generation activities, encouraging people to submit ideas. Once an idea is selected, the AI can send payment or rewards to the person who submitted the idea.

5. Customer refunds: If AI handles customer service for a company, it may need to refund returned products, cancel services, or compensate for any inconvenience caused by the company.

6. Participation in surveys: AI may manage surveys used for market research or other data collection purposes. Participants can be compensated for their time and input.

7. Charitable donations: AI can be used to manage payments from charitable organizations to beneficiaries.

8. Freelancer payments: AI may be employed to manage freelancers or contract workers and pay them for completed work.

9. Bonuses and rewards: If AI manages a competition or game, it may need to send bonuses or rewards to winners.

These are just a few examples of many potential applications. The common thread is that any task that requires compensating people who provide services or value could potentially be managed by AI.

Then I asked for some examples of AI agents needing to pay another AI agent as part of their allocated tasks, and here’s what it said:

Indeed, there are many potential cases where AI agents may need to transact with another AI agent. This could happen in situations where the system is designed for autonomous interaction and trading of resources or services, such as:

1. Autonomous vehicles: In a future of autonomous driving, the AI agents controlling individual vehicles may pay other AI agents for right-of-way or to secure parking spaces.

2. AI marketplaces: In an AI marketplace, one AI may pay another for specific services, such as data analysis, machine-learning model training, or optimization of complex tasks.

3. Content or data acquisition: One AI may pay another AI for exclusive data or content, such as proprietary research, market analysis, or digital assets.

4. Cloud computing resources: An AI that manages servers or cloud resources may pay another AI for the use of its excess computing power or storage space.

5. Digital advertising: An AI that manages advertising campaigns may pay another AI to place its ads in the best position or time on digital platforms.

6. AI-mediated auctions: In real-time bidding systems for online advertising, one AI agent may pay another for ad space targeting specific audience groups.

7. Gaming: In online gaming scenarios, one AI may need to pay another AI for virtual goods, services, or game progression.

8. Internet of Things (IoT): In smart homes or smart cities, an IoT device (managed by AI) may pay another device to prioritize its own tasks. For example, a smart refrigerator might pay a smart oven to delay its energy-intensive cooking cycle until the refrigerator completes its own power-hungry cooling cycle.

These examples are speculative and depend on a complex infrastructure that allows secure AI-to-AI transactions. However, as technology continues to evolve, we may see more systems designed to enable this autonomous interaction.

Now, a valid skeptical question readers may ask at this point is: “Why use the Lightning Network to pay for AI usage instead of just paying in US dollars?”

The answer is that Bitcoin/Lightning Network is uniquely suited for collaboration with AI, and here are several reasons:

● Non-refundable: Unlike most software, AI is hardware-intensive on a per-use basis. When you use AI, you are consuming time on power-hungry GPU hardware. Once the software itself is developed it is cheap, but each use incurs real power and hardware costs, which is different from most types of software. If an API were left open for payment by conventional means such as credit cards, the merchant would become a target for fraud: users could send thousands of requests to the GPUs and then request a refund, leaving the merchant with the power/hardware costs but no payment. Lightning, by contrast, settles payments with finality, like a non-refundable, anonymous bearer asset.

● Microtransactions: Most payment methods have minimum practical transaction sizes. You cannot pay $0.03 with a Visa card, for example, and PayPal's minimum fixed cost for merchants is $0.49 per transaction. The closest workaround is to pre-fund an account and draw microtransactions from it on each use. This gets even more complicated globally, across borders and currencies: if someone in Malaysia wants to pay AI usage fees to someone in California, what payment method should they use? Even in the subscription-payment backend of my own website, my operations manager and I sometimes run into problems accepting foreign credit cards, since many countries rely on mobile payment services instead. The Lightning Network can handle microtransactions globally: users can pay a few cents per use, from anywhere to anywhere, without subscriptions or pre-authorized funds. This is highly relevant for machine-to-machine payments, where subscriptions may not be the ideal model because machines may only need to interact with each other in real time, on demand.

● Permissionless and programmable: AI agents can be created to perform all kinds of operations. As entities without legal identity, AI agents find it far easier to hold and use an open-source programmable currency than to use bank accounts or bank payment products. An AI agent may not be able to obtain a bank account, but it can run a Lightning node or operate a Lightning wallet. The programmability and real-time nature of the Lightning Network also allow dynamic pricing: the price of each API call can be programmed to change second by second, depending on how many API requests arrive simultaneously.
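The dynamic-pricing point can be sketched in a few lines. The class name and all the parameters below (base price, surge increment, time window) are invented for illustration; a real service would tune them against GPU capacity and observed demand.

```python
import time
from collections import deque

class DynamicPricer:
    """Sketch: price each API call from recent request volume.

    Parameters are illustrative, not from any real service. The price
    rises linearly with the number of requests seen within the window.
    """
    def __init__(self, base_sats=10, window_seconds=1.0, surge_per_request=2):
        self.base_sats = base_sats
        self.window = window_seconds
        self.surge = surge_per_request
        self.recent = deque()  # timestamps of recent requests

    def quote(self, now=None) -> int:
        now = time.monotonic() if now is None else now
        # Drop requests that have aged out of the pricing window
        while self.recent and now - self.recent[0] > self.window:
            self.recent.popleft()
        self.recent.append(now)
        return self.base_sats + self.surge * (len(self.recent) - 1)

pricer = DynamicPricer()
first = pricer.quote(now=0.0)                           # idle: base price
burst = [pricer.quote(now=0.1) for _ in range(5)][-1]   # under load: higher
```

Because Lightning invoices are generated per payment, each quote can simply be encoded into a fresh invoice, so repricing every second costs the server nothing.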

Proof-of-work to prevent spam

Today, artificial intelligence has dramatically reduced the cost of creating images, videos, text, programs, and more, and it has become increasingly difficult to distinguish what is real from a high-quality fake or an AI-generated virtual persona. When we see a picture or video that may matter commercially, socially, politically, or geopolitically, how can we determine whether it is real footage or a digitally fabricated creation generated by AI? Now imagine how things will look in five or ten years as the technology continues to advance.

Even more concerning is how persuasive bots could become. Whenever I use Twitter (now called “X”), my posts are flooded by bots. Fortunately, they are low-quality, poorly programmed, and obvious. There are also human impostors who manually create accounts similar to mine and try to scam people. But what if the AI bots on social media were genuinely convincing? What if they could exist in countless numbers, created as fast as they can be identified and blocked?

Beyond using bots to spam people as a form of fraud, a sophisticated bot strategy might build an entire fake community to promote an ideology and make it appear popular. Many distinct AI personas could operate online and persuade many people. Imagine someone writing a program that creates an army of bots, each unique and several years more advanced than today's ChatGPT, coordinating to promote a political candidate, a social viewpoint, or simply to inflate the audience of some influencer. Now imagine that program is open source, available to anyone who wants it. Pandora's box has been opened.

Imagine when interacting online:

1) Most entities that look like humans are not humans;

2) It is difficult to determine which ones are genuine humans.

One approach to this situation is for social media platforms to enforce identity verification at registration, requiring new entrants to provide sufficient evidence that they are real people. Elon Musk's Twitter/X seems to be moving in this direction, as do some other platforms. While this may work for certain platforms, the obvious drawback is that anonymity is, in many cases, an important component of safety and authenticity. Some professionals can only express themselves freely online, or unwind socially, through anonymous accounts. More importantly, in many countries certain types of speech are illegal: leaders cannot be openly criticized, certain topics are forbidden, and so on. Indeed, some of the founding fathers of the United States initially wrote under pseudonyms to protect themselves while making revolutionary criticisms of the British monarchy. Authoritarians like mandatory identity verification for social media participation, and a serious bot problem hands them a convenient excuse to implement it. Moreover, they can easily force the various platforms to hand over the data.

Another approach is for social media to remain open and anonymous, but to use some form of persistent proof or payment check to prevent flooding by countless bots. Back in the 1990s, cryptographer Adam Back invented a proof-of-work system called Hashcash to deter various forms of spam and denial-of-service attacks. Satoshi Nakamoto eventually incorporated proof-of-work into the core of the Bitcoin protocol, but it has never been widely deployed across the rest of social media or the internet.
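The Hashcash idea is simple enough to sketch in a few lines: a valid "stamp" on a message is cheap to verify but costs real compute to produce, which throttles bulk spam. The difficulty setting and message format below are arbitrary choices for illustration.

```python
import hashlib
from itertools import count

def mine(message: str, difficulty_bits: int = 12) -> int:
    """Find a nonce whose SHA-256(message:nonce) has the required
    number of leading zero bits. Producing this costs real compute."""
    target = 1 << (256 - difficulty_bits)
    for nonce in count():
        digest = hashlib.sha256(f"{message}:{nonce}".encode()).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce

def verify(message: str, nonce: int, difficulty_bits: int = 12) -> bool:
    """Checking a stamp takes one hash, regardless of mining cost."""
    digest = hashlib.sha256(f"{message}:{nonce}".encode()).digest()
    return int.from_bytes(digest, "big") < (1 << (256 - difficulty_bits))

stamp = mine("hello@example.com:2024")
```

At 12 difficulty bits this takes a few thousand hashes per message — negligible for a human posting occasionally, but a real cost for a bot farm producing millions of posts.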

Bitcoin's ledger has a history that is almost impossible to forge. Every block added to the chain requires energy, so the full history of the ledger encodes an enormous amount of expended energy, making it prohibitively hard to rewrite. Beyond serving as a monetary asset, the Bitcoin blockchain now exists as a global, open-source, immutable time capsule into which arbitrary data can be inserted (documents, images, books, songs, or timestamps from other systems) and later proven to have remained unmodified since that date. Closed-source, opaque, centralized servers cannot replicate this.
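The timestamping idea can be illustrated in a few lines: commit to a document's SHA-256 digest, which could then be embedded in a Bitcoin transaction (for example in an OP_RETURN output) without revealing the document itself. The document strings below are invented examples; the point is only that any later modification breaks the proof.

```python
import hashlib

def commitment(document: bytes) -> str:
    """A 32-byte digest that could be embedded in a Bitcoin transaction
    (e.g. via an OP_RETURN output) to timestamp the document without
    revealing it. Any later change to the document breaks the proof."""
    return hashlib.sha256(document).hexdigest()

original = b"Signed statement, 2023-07-15."
tampered = b"Signed statement, 2023-07-16."

match_ok = commitment(original) == commitment(original)   # deterministic
detected = commitment(original) != commitment(tampered)   # edits detectable
```

Once the digest is in a mined block, rewriting the claimed history would require redoing the proof-of-work of every subsequent block, which is what gives the timestamp its weight.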

Can the proof-of-work of Bitcoin be used to protect other types of systems from spam? Some avenues have already been explored in this direction, which may become increasingly important in the coming years.

● Payment: Stacker News already uses Bitcoin both as an incentive for posting and as spam prevention. Users must pay a small amount of Bitcoin to make a post and can reward each other with small amounts of Bitcoin. The problem with traditional payment methods such as Visa is that they identify people, whereas Bitcoin provides proof of payment without needing to know identity, which is all that is required to deter spam. This can be tuned in various ways: if Twitter/X implemented something similar, I could, for example, allow only accounts that have incurred some minimal cost to comment on my posts, to prevent spam.

● Identity with cost: Someone can create a public/private key pair (as on Nostr) and then go further: embed the public key in the Bitcoin blockchain, incurring a real-world resource cost via transaction fees, and verify through various channels that the key is indeed theirs. From then on, they sign any digital content they create or authorize with the corresponding private key. The same applies to any large news organization with known integrity standards. Platforms can check which key pairs have or have not been embedded. This lets people distinguish forgeries from genuine content (genuine at least in the sense of being created by that person or organization and unmodified since publication, not necessarily objectively true), and blocks millions of would-be impersonators, whose keys have not been embedded in the Bitcoin blockchain at real-world cost.

Although we haven't reached this stage yet, in a world where artificial intelligence can mimic or forge almost anything at almost no cost, we may learn to require proof of continuity (a cryptographic key pair) and proof of work (evidence that real-world resources were spent to give weight to a given key pair or online identity) before taking certain types of digital content seriously.

In the potential future world I described earlier, where:

1) Most entities that appear to be human are not human; and

2) It is difficult to determine which ones are truly human,

we will need various methods to classify which entities have some tangible existence or associated cost in the real world, and to find signal in an almost infinite ocean of noise. Identity verification and proof of real-world resource costs are the two main directions platforms can take.