Can You Slow Down AI?

Jim Luhrs
3 min read · Apr 3, 2023


Recently we saw a very public plea from Elon Musk to halt development at OpenAI & any other AI lab building systems more advanced than GPT-4, calling for a “6-month pause”. But why does Elon want to pull the handbrake on AI development, and is it even possible?

Well, it wasn’t Elon’s idea, but he is credited as one of the “external advisors” of the Future of Life Institute, which wrote a 14-page paper outlining its intent to put a hold on such technologies so that a framework can be put in place, leaving humanity better prepared for the cataclysmic technology revolution of AI. There are some notable external advisors listed, including actors Alan Alda & Morgan Freeman, as well as about a dozen professors from prestigious universities including Berkeley, MIT & Harvard.

It’s clear that AI is going to scale faster than any technology we have seen before, so this group, along with the more than 25,000 people who signed the petition, wants to see better rules and regulations put in place before things get out of hand. Unfortunately, government regulation is almost always retrospective: laws get put in place when it’s too late, usually after bad things have already happened. What the group is trying to do is pre-empt what the rules should look like and draft a set of best-practice rules and laws every company should work to.

I doubt any government can put together a robust regulatory framework for AI in six months or less, so our best bet is to have this group and some private financial benefactors roll out what they are proposing in their “Policymaking in the Pause” document.

At a minimum, they propose:
- New and capable regulatory authorities dedicated to AI
- Oversight and tracking of highly capable AI systems and large pools of computational capability
- Provenance and watermarking systems to help distinguish real from synthetic and to track model leaks
- A robust auditing and certification ecosystem
- Liability for AI-caused harm
- Robust public funding for technical AI safety research
- Well-resourced institutions for coping with the dramatic economic and political disruptions (especially to democracy) that AI will cause

So are we going to see a slowdown or a hold on AI development? I would bet heavily that we won’t see any slowdown from any company, with the possible exception of OpenAI.

It just isn’t in anyone’s financial interest to put a hold on development. Imagine telling Tesla to switch off its factories for six months and produce no cars while waiting for new safety regulations; it just wouldn’t happen. A six-month pause in AI development isn’t going to happen either: companies will simply build in stealth or hold back their press releases. In most cases, they will be able to change their code bases to implement best practices or self-governed rules.

I think the only real issue is if an AGI arrives before these rules are put in place; then we may well have a real Skynet-type event on our hands. But I’m not wagering on an AGI coming out this year.

Hopefully, the Future of Life Institute can pull together enough money and talent to build a robust framework ASAP. What we need is a well-rounded framework that all companies can work to, one any government can point at and quickly say, “Yes, these are the laws and rules we are adopting.”

For the time being, it will just have to be the Wild West. Look at crypto: it has been 14 years since Bitcoin came out, and over a decade since XRP, and we still can’t get clear answers from most governments on asset classification, taxation & other regulatory requirements. It’s clear that we are not going to slow the AI beast, but I do hope whoever creates the first true AGI has the foresight to put some epic controls in place.
