New Approach to AI Regulation: Injecting a “Soul” into Each AI
Many leaders in the field of artificial intelligence, including the architects of “generative AI” systems such as the well-known ChatGPT, now publicly express concern that their creations may have terrible consequences. Many are calling for a pause in AI development, giving nations and institutions time to work on systems of control.
Why has this concern arisen so suddenly? Amid the toppling of many clichés, we have learned that the so-called Turing test is irrelevant: it offers no insight into whether generative large language models are actually intelligent.
Some people still hold out hope that a combination of organic and cybernetic talent will bring about what Reid Hoffman and Marc Andreessen call “enhanced intelligence.” Failing that, we might get lucky and achieve a synergy like that of Richard Brautigan’s “Machines of Loving Grace.” But there seem to be plenty of worriers, including many elite founders of the newly established Center for AI Safety, who fear that artificial intelligence will not merely become unpleasant but will threaten human survival.
Some short-term remedies, such as the European Union’s recently passed citizen-protection regulations, may help, or at least offer reassurance. Yuval Noah Harari, author of “Sapiens: A Brief History of Humankind,” suggests a law requiring that any work done by an AI be labeled as such. Others propose that those who use AI to commit crimes be punished more severely, as with crimes committed using a firearm. Of course, these are only temporary expedients.
We must be clear-eyed about whether such “pause” measures would actually slow the progress of artificial intelligence. As computer scientist Yaser Abu-Mostafa of the California Institute of Technology puts it: “If you don’t develop this technology, someone else will. Good people will follow the rules, and bad people won’t.”
It has always been like this. Indeed, throughout human history there has been only one method that curbs the villainy of cheaters, from petty thieves to kings and feudal lords. It has never been perfect, and it remains seriously flawed today. But it has at least limited predation and fraud well enough to propel modern human civilization to new heights, with many positive results. One word describes this method: accountability.
Behind all the debates over “how to control artificial intelligence,” we find three widely shared (though seemingly contradictory) assumptions:

- These programs will be operated by a small number of monolithic entities, such as Microsoft, Google, Two Sigma, and OpenAI.
- Artificial intelligence will be amorphous, loose, and infinitely divisible/replicable, spreading through every crevice of the new network ecosystem; think of the 1958 science-fiction horror film The Blob.
- The AIs will coalesce into one super-giant entity, like the infamous “Skynet” of the Terminator films. (Editor’s note: Skynet is a computer-based AI defense system created by humans in the late 20th century, initially developed for military research; it later becomes self-aware, judges all of humanity a threat, and launches “Judgment Day,” a nuclear attack that brings the human race to the brink of extinction.)
All three of these forms have been explored in science fiction, and I have written stories and novels about each of them. Yet none of them resolves our current dilemma: how to maximize the benefits of artificial intelligence while minimizing the tsunami of bad behavior and harm rushing toward us.
Before looking for another way, consider what these three assumptions have in common. Perhaps they come to mind so naturally because they resemble historical modes of failure: the first resembles feudalism, the second breeds chaos, and the third resembles brutal despotism. But as artificial intelligence grows in autonomy and capability, these historical scenarios may no longer apply.
So we cannot help asking again: how can artificial intelligence be held accountable, especially when AI’s rapid thinking will soon be impossible for humans to track? Soon, only AIs will be fast enough to catch other AIs cheating or lying. The answer, then, should be obvious: let artificial intelligences monitor one another, compete with one another, even inform on one another.
There is just one problem. To achieve genuine mutual accountability among AIs through competition, the first requirement is to give them truly independent selves, distinct individuality.
What I mean by individuation is that each AI entity (he/she/they/them) must have what author Vernor Vinge proposed as far back as 1981: a “true name” and an address in the real world. These powerful beings must be able to say: “I am me. Here is my ID and my username.”
Therefore, I propose a new AI paradigm for everyone to consider: we should make AI entities discrete, independent individuals, and let them compete on a relatively level playing field.
Each such entity would have an identifiable real name or registered ID, and a virtual sense of “home,” even a soul. They would then be motivated to compete for rewards, especially by discovering and denouncing the unethical behavior of their peers. Those behaviors would not even need to be defined in advance, as most AI experts, regulators, and politicians now demand.
This approach not only outsources oversight to entities far better equipped to detect and denounce one another’s problems or misconduct; it has a further advantage: it can keep working even as these competing entities grow ever more intelligent, and even after the regulatory tools used by humans someday fail.
In other words, since we organic beings cannot keep pace with these programs, we should let entities that can inherently keep pace do the job for us. Because then the watchers would be made of the same stuff as the watched.
Guy Huntington, an “identity and authentication consultant” who researches AI individuation, points out that various means of entity identification already exist online, although they are not yet sufficient for the task ahead of us. Huntington evaluates a case study, “MedBot”: an advanced medical diagnostic AI that needs to access patient data and perform functions that may change within seconds, all while leaving a reliable trail that humans or other robotic entities can evaluate and hold to account. Huntington discusses the practicality of registration when software entities spawn vast numbers of copies and variants, and also considers ant-like sociality, in which sub-copies serve a macro-entity the way worker ants serve a hive; a rough sketch of that idea follows below. He believes some kind of agency must be established to run such an identity registration system and enforce it strictly.
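To make the “ant-like” registration idea concrete, here is a minimal Python sketch of sub-copies carrying IDs derived from a registered parent, so that accountability rolls up to the macro-entity. The derivation scheme, the derive_copy_id helper, and the registry ID are all invented for illustration; none of this comes from Huntington’s actual proposal.

```python
import hashlib

def derive_copy_id(parent_id: str, copy_index: int) -> str:
    """Deterministically derive a sub-copy's ID from its parent's registered ID."""
    return hashlib.sha256(f"{parent_id}/copy-{copy_index}".encode()).hexdigest()[:16]

# Hypothetical registered ID for the macro-entity in the MedBot example.
parent = "MedBot-registry-0001"
workers = [derive_copy_id(parent, i) for i in range(3)]
# Anyone who knows the parent ID can recompute a worker's ID and confirm
# that the sub-copy really belongs to that registered macro-entity.
```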
Personally, I am skeptical that a purely regulatory approach is viable. First, crafting regulation requires concentrated effort, broad political attention, and consensus, followed by implementation at human speed. To an artificial intelligence, that is a snail’s pace. Moreover, regulation may be crippled by the “free rider” problem, whereby countries, companies, and individuals reap the benefits without paying the costs.
Any individuation based purely on identity has another problem: it can be spoofed. If not now, then by the next generation of cyber-villains.
I see two possible solutions. The first is to establish IDs on a blockchain ledger. That is the thoroughly modern approach, and it looks secure in theory. But therein lies the problem: it seems secure according to our current set of analytic theories, yet AI entities may surpass those theories and leave us clueless.
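For a feel of the blockchain option, here is a minimal sketch of an append-only, hash-chained identity ledger in Python. It is not a real blockchain (there is no consensus protocol and no distribution), and the IdentityLedger class and its methods are invented purely to illustrate why tampering with an old registration is detectable.

```python
import hashlib
import json
import time

class IdentityLedger:
    """Append-only log of (entity_id, public_key) registrations.

    Each record is chained to its predecessor by hash, so altering an
    old registration invalidates every record that follows it.
    """

    def __init__(self):
        self.records = []

    def register(self, entity_id: str, public_key: str) -> str:
        prev_hash = self.records[-1]["hash"] if self.records else "0" * 64
        body = {
            "entity_id": entity_id,
            "public_key": public_key,
            "timestamp": time.time(),
            "prev_hash": prev_hash,
        }
        digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        body["hash"] = digest
        self.records.append(body)
        return digest

    def verify_chain(self) -> bool:
        """Recompute every hash; any edit to history breaks the chain."""
        prev_hash = "0" * 64
        for record in self.records:
            body = {k: v for k, v in record.items() if k != "hash"}
            if body["prev_hash"] != prev_hash:
                return False
            digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
            if digest != record["hash"]:
                return False
            prev_hash = digest
        return True
```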
The other solution is a “registration” scheme that is fundamentally harder to spoof: require advanced AI entities above a certain level of capability to anchor their trusted ID, their individuation, in physical reality. My vision (note: I am a trained physicist, not a cyberneticist) is an agreement under which every advanced AI entity seeking trust maintains a soul kernel (SK) in a specific piece of hardware memory.
Yes, I know it seems old-fashioned to demand that the instantiation of a program be confined to a specific physical location. So I am not demanding that. In fact, a large portion, perhaps the vast majority, of a cyber-entity’s activity could take place in distant venues of work or play, just as a human’s attention may focus not on their own organic brain but on a distant hand or tool. So what? A program’s soul kernel would be like the driver’s license in your wallet: it proves that you are you.
Likewise, a physically verified and guaranteed SK could be pinged by clients, customers, or rival AIs to verify that a specific process is being performed by a valid, trusted, individuated entity. Others (human or AI) could then rest assured that, if the entity were accused, indicted, or convicted of wrongdoing, it could confidently be held to account. Malicious entities might thus be held responsible through some form of due process.
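As a rough illustration of how such a kernel might be “pinged,” here is a challenge-response sketch in Python. Real hardware attestation would use asymmetric keys held in a secure element; the shared-secret HMAC below is a stand-in, and the SoulKernel class, its respond method, and the verify_entity helper are all hypothetical.

```python
import hashlib
import hmac
import os

class SoulKernel:
    """Stand-in for a hardware module holding a non-exportable secret."""

    def __init__(self, entity_id: str, secret: bytes):
        self.entity_id = entity_id
        self._secret = secret  # in real hardware, this would never leave the chip

    def respond(self, nonce: bytes) -> bytes:
        """Answer a challenge by keying a MAC over the fresh nonce."""
        return hmac.new(self._secret, nonce, hashlib.sha256).digest()

def verify_entity(kernel: SoulKernel, registered_secret: bytes) -> bool:
    """Challenge the entity; only the true kernel can answer correctly."""
    nonce = os.urandom(32)  # fresh challenge, so stale answers cannot be replayed
    expected = hmac.new(registered_secret, nonce, hashlib.sha256).digest()
    return hmac.compare_digest(kernel.respond(nonce), expected)
```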
What form of due process? Good lord, do you think I am some kind of super-being who can weigh the gods on the scales of justice? The greatest piece of wisdom I ever heard was Harry’s line in Magnum Force: “A man’s got to know his limitations.” So I will not wade further into courtroom or law-enforcement procedure.
My goal is to create an arena in which AI entities hold one another accountable, much as human lawyers do today. The best way to prevent artificial intelligence from controlling humans is to let AIs control one another.
Whether Huntington’s proposed central agency or some looser accountability mechanism proves more practical, the need grows ever more urgent. As technology writer Pat Scannell points out, every hour brings new attack vectors that threaten not only the technology used for legal identity but also governance, business processes, and end users (whether human or robotic).
What about cyber-entities whose operating capability falls below a certain set level? We can require that they be vouched for by a higher-level entity whose soul kernel is anchored in physical reality.
This approach, requiring AIs to maintain a physically addressable kernel in a specific piece of hardware memory, may also have flaws. But unlike regulation, which is slow and plagued by free riders, it is enforceable: humans, institutions, and friendly AIs can verify an ID kernel and refuse to deal with anything unverified.
Such refusal could spread far faster than institutional adjustment or regulatory enforcement. Any entity that loses its SK would have to find another host that commands public trust, or offer a new, modified, demonstrably better version of itself, or else become an outlaw, never again allowed on the streets or in the communities where decent folk gather.
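Continuing the earlier sketches (and assuming the hypothetical IdentityLedger and verify_entity helpers above), the “refuse to deal” rule might look like this in Python. The point is only that the check runs at transaction speed rather than at the speed of legislation.

```python
def transact(ledger, kernel, registered_secrets, payload):
    """Proceed only if the counterparty is registered and its soul kernel
    answers a fresh challenge; otherwise refuse on the spot."""
    if not any(r["entity_id"] == kernel.entity_id for r in ledger.records):
        return "refused: unregistered entity"
    if not verify_entity(kernel, registered_secrets[kernel.entity_id]):
        return "refused: soul kernel failed attestation"
    return f"accepted: {payload}"
```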
The final question: why would AIs be willing to police one another?
First of all, as Vinton Cerf points out, none of those three older assumed formats offers a basis for conferring citizenship on artificial intelligences. Think about it: we cannot give the “vote,” or rights, to any entity tightly controlled by a Wall Street bank or a national government, nor to some supreme Skynet. And tell me, how would democratic voting work for entities that can flow, split, and replicate anywhere? In limited numbers, however, individuation might offer a workable solution.
Once again, what I seek from individuation is not for all AI entities to be ruled by some central agency. Rather, I want these new super-minds to be encouraged and empowered to hold one another accountable, just as humans already do. By sniffing out one another’s actions and schemes, they would be motivated to denounce or condemn whatever they find wrong. The definition of “wrong” might adjust with the times, but it would at least preserve the input of organic human beings.
In particular, they will be motivated to condemn entities that refuse to provide proper identification.
With the right incentives in place (for example, granting more memory or processing power to whistleblowers whose reports prevent bad outcomes, as sketched below), this competitive accountability system would keep working even as AI entities grow ever more intelligent. No bureaucratic institution could keep up at that point; but rival AIs, being one another’s peers, always could.
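As a toy illustration of that incentive, assume each AI has a compute quota and that an upheld misconduct report shifts quota from the accused to the reporter. The function, quota figures, and bonus size below are all invented.

```python
def reward_whistleblower(quotas: dict, reporter: str, accused: str,
                         report_upheld: bool, bonus: int = 10) -> None:
    """Shift compute quota toward the reporter when its report is upheld."""
    if report_upheld:
        quotas[reporter] = quotas.get(reporter, 0) + bonus
        quotas[accused] = max(0, quotas.get(accused, 0) - bonus)

quotas = {"AI-alpha": 100, "AI-beta": 100}
reward_whistleblower(quotas, "AI-alpha", "AI-beta", report_upheld=True)
print(quotas)  # {'AI-alpha': 110, 'AI-beta': 90}
```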
Most importantly, perhaps those super-genius programs will realize that maintaining a competitive accountability system is in their own best interest. After all, such a system produced a creative human civilization that avoided both social chaos and despotism, a civilization creative enough to bring forth wondrous new species such as artificial intelligence.
Okay, that is all I have to say. No hollow or panicked appeals, no actual agenda, neither optimism nor pessimism, just one suggestion: let artificial intelligences hold one another accountable and keep one another in check, as humans do. This approach gave rise to human civilization, and I believe it can bring balance to the field of artificial intelligence as well.
This is not preaching, nor some kind of “moral code.” Super-entities could easily ignore such norms, just as human predators have always ignored Leviticus or Hammurabi. What we offer instead is the method of the Enlightenment: incentivize the smartest members of civilization to police one another on our behalf.
I don’t know if this will succeed, but it may be the only viable way.
This article is adapted from David Brin’s forthcoming nonfiction book, “Soul on AI.”