Franz Broseph appeared to Claes de Graaff like any other Diplomacy player. The username was a joke – Austrian Emperor Franz Joseph I resurrected as an online gamer – but Diplomacy players tend to enjoy that kind of humour. The game combines military strategy and political intrigue to recreate the First World War. Players negotiate with allies, adversaries, and everyone else as they plot the movements of their armies across 20th-century Europe.
When Franz Broseph joined a 20-player online tournament at the end of August, he lied to and ultimately betrayed other competitors. He finished first.
A chemist from the Netherlands, de Graaff finished sixth. He had played Diplomacy for nearly a decade, both online and at competitions around the world. He did not realise he had lost to a machine until it was disclosed many weeks later. Franz Broseph was a bot.
“I was stunned,” 36-year-old de Graaff said. “It appeared so genuine – so lifelike. It could read my texts, speak with me, and establish arrangements that were mutually advantageous, allowing us both to advance. Additionally, it lied to me and betrayed me, as top players often do.”
Franz Broseph is among a new breed of online chatbots that are fast pushing machines into uncharted terrain. He was created by a team of artificial intelligence researchers from the technology company Meta, the Massachusetts Institute of Technology, and other notable universities.
Conversing with these bots can feel like conversing with a human. In other words, it can seem as if machines have passed a test meant to prove their intelligence.
Computer scientists have toiled for more than seven decades to develop technology that can pass the Turing test: the technical tipping point at which people can no longer tell whether they are conversing with a machine or a person. Alan Turing, the renowned British mathematician, philosopher, and Second World War codebreaker, proposed the test in 1950. He believed it could show the world when machines had attained genuine intelligence.
The Turing test is a subjective measure. It depends on whether the individuals asking the questions believe they are conversing with a human when they are actually conversing with a machine.
But regardless of who is answering the questions, machines will soon render this test obsolete.
Bots such as Franz Broseph have already passed the test in specific circumstances, such as negotiating Diplomacy manoeuvres and making restaurant reservations. ChatGPT, a bot released in November by the San Francisco lab OpenAI, gives users the impression that they are conversing with a real person. The lab reported that over a million people have used it. Universities are concerned that because ChatGPT can write virtually anything, even term papers, it could make a mockery of classwork. Some people even refer to these bots as sentient or conscious while conversing with them, believing that machines have somehow developed an understanding of the world around them.
Privately, OpenAI has developed GPT-4, a technology that is even more potent than ChatGPT. It can generate images as well as text.
However, these bots are not sentient. They are not conscious. They are not intelligent – at least not in the way humans are intelligent. Even those who build the technology acknowledge this.
“These systems can do a lot of valuable things,” said Ilya Sutskever, chief scientist at OpenAI and one of the most influential AI researchers of the past decade, of the new wave of chatbots. At the same time, he added, they are not there yet. People believe they can accomplish things they cannot.
As new technologies emerge from research labs, it is now evident — if it wasn’t already — that scientists must rethink and reconfigure how they track the development of artificial intelligence. The Turing test falls short of expectations.
AI technologies have repeatedly conquered supposedly impossible challenges, including chess (1997), “Jeopardy!” (2011), Go (2016), and poker (2019). Now they have surpassed another – and, once again, it does not necessarily mean what we had anticipated.
We, the public, need a new framework for understanding what AI can and cannot do, what it will do in the future, and how it will affect our lives, for better or worse.
About five years ago, Google, OpenAI, and other AI laboratories began creating neural networks that analysed vast quantities of digital text, including novels, news articles, Wikipedia entries, and online chat logs. Researchers call them “large language models”. These systems learned to construct their own text by identifying billions of patterns in how humans connect words, characters, and symbols.
OpenAI revealed the DALL-E tool six months prior to the launch of its chatbot.
With a nod to “WALL-E”, the 2008 animated film about an autonomous robot, and Salvador Dalí, the surrealist painter, this experimental technology enables users to generate digital images simply by describing what they wish to see. This is another neural network, constructed similarly to Franz Broseph or ChatGPT. The difference is that it learned from both images and words: by analysing millions of digital images and the captions that described them, it learned to recognise the connections between pictures and text.
This is known as a multimodal system. Google, OpenAI, and other groups are already using similar techniques to build systems that generate video. Startups are building bots that can navigate software applications and websites on a user’s behalf.
These are not systems that can be evaluated with the Turing test or any other straightforward method. Their ultimate goal is not conversation.
The Turing test judged whether a machine could mimic a human. Artificial intelligence is typically portrayed as the rise of machines that can think like people. But the technologies being developed today are vastly different from us. They cannot deal with concepts they have never encountered. And they cannot take ideas and explore them in the physical world.
However, there are numerous ways in which these bots are superior to you and me. They do not become weary. They do not let their emotions interfere with their efforts. They can rapidly access far more information. And they can generate text, images, and other forms of content at speeds and in quantities that humans cannot match.
In the years to come, their abilities will also improve dramatically.
In the coming months and years, these bots will assist you in locating information on the internet. They will explain topics in a manner that you can comprehend. They will even compose your tweets, blog articles, and term papers if you desire.
In your spreadsheets, they will tabulate your monthly expenditures. They will search real estate websites for homes within your price range. They will create online avatars with human-like appearances and voices. They will create short films using music and dialogue.
Without a doubt, these bots will change the world. However, it is up to you to be wary of what these systems say and do, to edit what they provide, and to approach everything you encounter online with scepticism. Researchers know how to equip these systems with a vast array of abilities, but they have not yet figured out how to imbue them with reason, common sense, or a sense of truth.
That is still up to you.
This piece was first published in The New York Times.