AI needs to be governed. Let’s see what that should look like.

Nick Linck
5 min read · May 4, 2023

This is a work in progress. Please critique the framework laid out below.

Midjourney generation: “AI which is super powerful and a crowd of people who work symbiotically with it”

First, why do we need AI governance?

For me, the two most important reasons are:

1) to define and restrict misuse of AI

2) to prevent a massive increase in wealth inequality

Beyond the two points above, GPT does a great job explaining four other reasons.

I hope we can agree AI needs to be governed.

Before we get into the framework…

What kind of “laws” would the “AI Governing Body” decide?

Here are four examples:

1) Taxation on companies using AI. Sam Altman, CEO of OpenAI (the company behind ChatGPT), proposed an AI tax back in 2021. Without any governance, the massive amount of money that AI earns may sit in the hands of a few.

2) How AI tax money is spent

3) Restrict certain uses of AI (a heavily debated topic)

4) Require tests to be run on AI models companies release

Now that we have a sense of what a governing body would do, the next logical question is…

Do we have just one global governing body?

Probably not.

It would be great if the whole world played by the same rules.

This would ensure laws that limit the use of AI would apply to all individuals and organizations building AI.

Something like a “pause” on AI development would actually work.

But getting the world to agree on anything AND to be happy with that agreement seems unlikely given our track record…

And if people do not agree, we have a world with more conflict.

More conflict, less peace.

As a world let’s do what we can to minimize conflict ❌ 💣

So if not one global governing body, we will have multiple governing bodies.

Who should do the governing and who should they govern?

Who should govern?

In this proposed framework, people with similar values and goals should come together to form AI governing bodies.

We expect companies to be the most successful in forming governing bodies, and we think it is not ideal for governments to be the first to form them.

Why companies and not governments?

Companies already have groups of people with shared values and goals, and they are already governing the AI they have built.

Companies like OpenAI and Google could come together to form a shared governing body or establish their own.

The argument for governments:

They already have law enforcement to help enforce whatever decisions the AI governing body determines necessary.

But…

1) Governments will never please everyone in their country. Values do not follow geographic borders. Again, conflict will rise 📈 💣 👀

2) There isn’t a need for their force if governing bodies only govern those who opt in to their system

An opt-in governance framework

1) minimizes conflict, since individuals decide who governs them

2) creates a competitive marketplace for AI governing bodies

We love competitive marketplaces that lead to happier users 🫶

An opt-in governance framework never made sense for countries, since laws were generally geographically specific: you had to follow the laws of the place you were born into.

For AI governance, there is not currently a need to make geographically specific laws.

Once we have robots, communities will need to determine how, when, and where different types of robots can operate.

At this point, governments can consult with the established governing bodies.

Why would individuals opt-in to be governed?

1) To cast their “vote” on what AI governance laws they agree with. AI safety people, this is for you.

And what will drive most people to opt-in…

2) Money: the most favored governing bodies will distribute a form of UBI (Universal Basic Income).

Each governing body will have its own method for distributing the excess value from AI.

Some may choose to keep all of their value, some may donate to charities, and others may distribute it directly to individuals.
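To make this concrete, here is a minimal sketch of one possible distribution policy. Everything here is an illustrative assumption, not part of any real proposal: the function name, the 50/50 charity/UBI split, and the member names are all hypothetical.

```python
# Hypothetical sketch: one way a governing body might split its excess
# AI revenue between charity and an equal per-member UBI payout.

def distribute_surplus(surplus: float, members: list[str],
                       charity_share: float = 0.5) -> dict[str, float]:
    """Send a fixed share to charity; split the rest equally as UBI."""
    charity_amount = surplus * charity_share
    ubi_pool = surplus - charity_amount
    per_member = ubi_pool / len(members) if members else 0.0
    payouts = {m: per_member for m in members}
    payouts["__charity__"] = charity_amount
    return payouts

payouts = distribute_surplus(1_000_000, ["alice", "bob", "carol", "dan"])
# With a 50% charity share, each of the four members receives 125000.0
# and 500000.0 goes to charity.
```

A real governing body could swap in any policy here, weighted payouts, need-based payouts, or keeping everything, which is exactly the kind of difference the competitive marketplace would let individuals choose between.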

OpenAI alludes to UBI in Planning for AGI and Beyond.

What would an opt-in governance framework look like in practice?

Most likely a dashboard displaying and comparing AI governing bodies and their laws/stances.

Individuals select their governing body and are supported/governed by that governing body.

Getting individuals to use this dashboard seems easy once companies have formed governing bodies on the platform, especially once those governing bodies are distributing wealth to their supporters.
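The core of such a dashboard could be sketched as a simple data model: governing bodies publish their stances, individuals opt in to exactly one body, and the dashboard compares stances side by side. All class names, field names, and example stances below are assumptions for illustration only.

```python
# Hypothetical data model for an opt-in AI governance dashboard.
from dataclasses import dataclass, field

@dataclass
class GoverningBody:
    name: str
    stances: dict[str, str]                 # e.g. {"ai_tax": "2% of revenue"}
    members: set[str] = field(default_factory=set)

class Dashboard:
    def __init__(self) -> None:
        self.bodies: dict[str, GoverningBody] = {}
        self.membership: dict[str, str] = {}  # individual -> body name

    def register_body(self, body: GoverningBody) -> None:
        self.bodies[body.name] = body

    def opt_in(self, individual: str, body_name: str) -> None:
        # Joining a new body automatically opts you out of the old one,
        # so each individual is governed by exactly one body at a time.
        if individual in self.membership:
            self.bodies[self.membership[individual]].members.discard(individual)
        self.bodies[body_name].members.add(individual)
        self.membership[individual] = body_name

    def compare(self, topic: str) -> dict[str, str]:
        """Show each body's stance on a topic, side by side."""
        return {name: body.stances.get(topic, "no stance")
                for name, body in self.bodies.items()}
```

For example, registering two bodies with different `ai_tax` stances and calling `compare("ai_tax")` would give an individual exactly the side-by-side view described above before they opt in.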

What would motivate companies to form governing bodies and use this?

The typical motivators… money, status, and power.

Money

In the long run, AI will likely become a commodity. AI research is largely open, and it is already possible for companies to reverse engineer specific AI models.

The organizations that make money off of AI will be the ones with the best forms of distribution.

The best forms of distribution come when you have direct access to, and the most influence on, users.

If an individual joins a governing body, they are incentivized to become a user of that body’s products in order to increase the pool of wealth their governing body can distribute to them.

Status

Organizations with more people voting for their governance framework will naturally feel a sense of higher status.

People love to be loved.

Plus, higher status will lead to more users (not just individuals but companies too).

Power

As more individuals join the governing body:

1) the governing body will have more people to help it achieve its goals.

2) other organizations will be more likely to want to partner with it.

If money, status, and power are not enough to convince organizations to use the platform, hopefully supporting a fair, democratic process to govern the most disruptive technology in human history will be all the motivation they need to participate.

Follow me on Twitter @nick_linck to see how this progresses!

Conclusion

The future of AI is uncertain, but one thing is clear: it will dramatically change the way we live, work, and interact with one another. Like all technologies, if we do not use it properly, it poses a huge threat to our safety, both physically and mentally.

This is one proposed framework for how we as a society can organize opinions on how AI should be regulated. Please add comments or DM me on Twitter if you want to chat more about bringing this to life. We are making it happen 🚀
