The U.S. is already one step ahead of the game: last December, members of the U.S. Congress introduced a bill on the ‘Development and Implementation of Artificial Intelligence’. Its aim is to establish a federal advisory committee on AI. The drafters reasoned that understanding AI “is critical to the economic prosperity and social stability of the United States.”
How forward-thinking of them. But they have nothing on China: the Chinese State Council has declared that it wants the country to be the world leader in AI by 2025, which implies knocking the U.S. from pole position. Even the U.K. is eyeing a leading role. And what about Germany? Ever since the pandemonium of last summer’s election campaign, when the two largest parties frantically pushed for an AI ‘masterplan’ in response to China’s announcement, not much has actually happened.
Not that I expected anything faster. What matters far more, I think, is that politicians have the issue on their radar at all, and that they understand the implications of artificial intelligence.
This is where opinions are diametrically opposed. Tesla’s Elon Musk and celebrity physicist Stephen Hawking have branded the technology “our biggest existential threat.” Steve Wozniak has tried to stake out a more balanced position, while Mark Zuckerberg has praised AI to the high heavens.
Businesses, of course, are optimistic about what the future holds for AI and are already using it for a wide array of applications: from communication and cognitive search to predictive analytics and translation. The next big thing is the autonomous car. The German automotive sector, traditionally focused on sheet metal and steel, is also undergoing significant change, and other sectors are following suit. To companies, AI is the game changer that will improve all our lives and revolutionize the economy. The results of our latest study on the working world of 2030 show that the majority of the 3,800 business leaders surveyed already anticipate a close human-machine symbiosis in the coming years. However, the same study also reveals a clear split in opinion: roughly half of the respondents were pessimistic about the effects of AI, while the other half were optimistic.
So what do we do now? The most important question is what implications AI will actually have: will it usher in a bright new future or social disorder? The debate over job losses is already in full swing.
Apocalyptic scenarios aren’t the only things we should be thinking about, but it is still worth reflecting on regulation at this early stage. I think the AI expert Oren Etzioni has the right attitude. Following the example of Isaac Asimov’s laws of robotics, he proposes three simple rules for artificial intelligence systems so that we are equipped for worst-case scenarios and can prevent any conceivable damage. An AI system, he says, must be subject to the same laws that apply to its human operator; it must clearly disclose that it is not human; and it may not retain or share confidential information without explicit permission. These rules may seem simplistic, but they serve as a very good starting point and basis for discussion.
Are these ideas too far ahead of their time? I don’t think so. The earlier we tackle these issues, the better placed we will be to shape the future of artificial intelligence. Isaac Asimov wrote his laws of robotics back in 1942, and they are still considered exemplary today. And if that’s not a good source of motivation, then I don’t know what is.