Chatbots: a long and complex history

Eliza, widely recognized as the first chatbot, wasn’t as versatile as similar services today. The program, which relied on natural language understanding, reacted to keywords and then essentially reflected the dialogue back to the user. Still, as Joseph Weizenbaum, the MIT computer scientist who created Eliza, wrote in a 1966 research paper, “Some subjects have been very hard to convince that ELIZA (with its present script) is not human.”
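To make that keyword-and-reflection trick concrete, here is a minimal Python sketch written for illustration; it is not Weizenbaum’s code, and the rule table is a hypothetical fraction of what a real ELIZA “script” contained.

```python
import re

# Tiny illustrative rule table; the real ELIZA "script" was far larger
# and also swapped pronouns ("my" -> "your") before replying.
RULES = [
    (r"\bi am (.*)", "How long have you been {0}?"),
    (r"\bi feel (.*)", "Why do you feel {0}?"),
    (r"\bmy (.*)", "Tell me more about your {0}."),
]
FALLBACK = "Please go on."  # no keyword matched: punt the dialogue back

def respond(user_input: str) -> str:
    text = user_input.lower().rstrip(".!?")
    for pattern, template in RULES:
        match = re.search(pattern, text)
        if match:
            return template.format(match.group(1))
    return FALLBACK

print(respond("I am worried about my future"))
# -> How long have you been worried about my future?
```

Note how the reply simply recycles the user’s own words (including the unswapped “my”), which is exactly the kind of shallow pattern-matching that nonetheless convinced some of Weizenbaum’s subjects they were talking to a person.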

For Weizenbaum, that fact was cause for concern, according to a 2008 MIT obituary. Those who interacted with Eliza were willing to open their hearts to it, even knowing it was a computer program. “ELIZA shows, if nothing else, how easy it is to create and maintain the illusion of understanding, hence perhaps of judgment deserving of credibility,” Weizenbaum wrote in 1966. “A certain danger lurks there.” He spent the end of his career warning against giving machines too much responsibility, and became a staunch philosophical critic of artificial intelligence.

In the decades since, our complicated relationship with artificial intelligence and machines has been evident in Hollywood plots like “Her” or “Ex Machina,” not to mention the harmless debates with people who insist on saying “thank you” to voice assistants like Alexa or Siri.

Contemporary chatbots can also elicit strong emotional reactions from users when they don’t work as expected – or when they become so good at imitating the flawed human speech they were trained on that they start making racist and incendiary comments. It didn’t take long, for example, for Meta’s new chatbot to stir up controversy this month by spouting wildly incorrect political commentary and antisemitic remarks in conversations with users.
However, proponents of this technology argue that it can streamline customer service jobs and increase efficiency across a much broader range of industries. The technology powers the digital assistants many of us use every day to play music, order deliveries, or check homework assignments. Some also make the case for these chatbots offering comfort to the lonely, elderly, or isolated. At least one startup has gone so far as to use the technology to seemingly keep deceased relatives alive, by creating computer-generated versions of them based on uploaded chat histories.

Meanwhile, others warn that the technology behind AI-powered chatbots remains more limited than some would like it to be. “These technologies are really good at faking humans and sounding human-like, but they’re not deep,” said Gary Marcus, an artificial intelligence researcher and professor emeritus at New York University. “These systems are mimics, but they’re very superficial mimics. They don’t really understand what they’re talking about.”


However, as these services expand into more corners of our lives, and as companies take steps to further customize these tools, our relationships with them may become increasingly complex as well.

The evolution of chatbots

Sanjeev P. Khudanpur remembers chatting with Eliza while in graduate school. For all its historic importance in the tech industry, he said, it didn’t take long to see its limitations.

Khudanpur is an expert in the application of information-theoretic methods to human language technologies and a professor at Johns Hopkins University.

Joseph Weizenbaum, the inventor of Eliza, sits at a computer desktop at the Computer Museum in Paderborn, Germany, in May 2005.
In 1971, psychiatrist Kenneth Colby at Stanford University created another early chatbot, which he named “PARRY” because it was meant to imitate a paranoid schizophrenic. (The New York Times’ 2001 obituary for Colby included a colorful transcript of the conversation that ensued when researchers brought Eliza and PARRY together.)

But in the decades after these tools emerged, there was a shift away from the idea of “talking to computers.” That is “because the problem turned out to be very difficult,” Khudanpur said. Instead, the focus turned to “goal-oriented dialogue,” he said.


To understand the difference, think about the conversations you might have with Alexa or Siri. Typically, you ask these digital assistants to help buy a ticket, check the weather, or play a song. That is goal-oriented dialogue, and it became the main focus of academic and industry research as computer scientists sought to extract something useful from computers’ ability to scan human language.
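For a sense of what goal-oriented dialogue looks like in code, here is a toy Python sketch, not taken from any real assistant; the intents, slot names, and matching rules are all illustrative assumptions.

```python
import re

# Map a user utterance to an intent plus the "slots" needed to act on it.
def parse_utterance(utterance: str) -> dict:
    text = utterance.lower()
    if "weather" in text:
        match = re.search(r"\bin ([a-z ]+)", text)
        return {"intent": "check_weather",
                "slots": {"city": match.group(1).strip() if match else None}}
    if "play" in text:
        match = re.search(r"\bplay (.+)", text)
        return {"intent": "play_music",
                "slots": {"track": match.group(1) if match else None}}
    return {"intent": "unknown", "slots": {}}

print(parse_utterance("What's the weather in Boston?"))
# -> {'intent': 'check_weather', 'slots': {'city': 'boston'}}
```

Once an utterance is reduced to an intent and its slots, the assistant can hand the request off to a weather, ticketing, or music service – which is what makes the dialogue “goal-oriented” rather than open-ended chat.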

While they used technology similar to the earlier social chat programs, Khudanpur said, “you couldn’t really call them chatbots. You could call them voice assistants, or just digital assistants, which helped you carry out specific tasks.”


He added that there was a decades-long “lull” in this technology until the widespread adoption of the internet. “The big breakthroughs came probably in this millennium,” Khudanpur said. “With the rise of companies that successfully employed some kind of computerized agent to carry out routine tasks.”

With the advent of smart speakers like Alexa, it has become more common for people to talk to devices.

“People always get upset when their bags get lost, and the human agents who deal with them are always stressed out by all the negativity, so they said, let’s give it to a computer,” Khudanpur said. “You could yell all you wanted at the computer; all it wanted to know was, ‘Do you have your tag number so I can tell you where your bag is?’”

In 2008, for example, Alaska Airlines launched “Jenn,” a digital assistant to help travelers. In a sign of our tendency to humanize these tools, an early review of the service in The New York Times noted: “Jenn is not annoying. She is depicted on the website as a young brunette with a kind smile. Her voice has human inflections. Type in a question, and she replies intelligently. And for those who inevitably try, say, a clumsy bar pickup line, she politely suggests getting back to business.”

A return to social chatbots, and social problems

In the early 2000s, researchers began revisiting the development of social chatbots that could carry an extended conversation with humans. Often trained on large swaths of data from the internet, these chatbots learned to be extremely good mimics of how humans speak – but they also risked echoing some of the worst of the internet.

In 2016, for example, Microsoft’s public experiment with an AI chatbot called Tay crashed and burned in less than 24 hours. Tay was designed to talk like a teenager, but it quickly started making racist and hateful comments, to the point that Microsoft shut it down. (The company said there was also a coordinated effort by humans to trick Tay into making some of the offensive comments.)

“The more you chat with Tay, the smarter she gets, so the experience can be more personalized for you,” Microsoft said at the time.


That refrain has been echoed by other tech giants that released public chatbots, including Meta’s BlenderBot 3, which launched earlier this month. The Meta chatbot falsely claimed that Donald Trump is still president and that there is “definitely a lot of evidence” that the election was stolen, among other controversial remarks.

BlenderBot 3 also professed to be more than just a bot. In one conversation, it claimed “the fact that I’m alive and conscious right now makes me human.”

Meta’s new chatbot, BlenderBot 3, explains to a user why it is actually human. It didn’t take long, however, for the chatbot to generate controversy by making incendiary remarks.

Despite all the advances since Eliza, and the huge amounts of new data available to train these language processing programs, Marcus, the New York University professor, said, “It’s not clear to me that you can really build a reliable and safe chatbot.”

Take the 2015 Facebook project nicknamed “M,” an automated personal assistant that was supposed to be the company’s text-based answer to services like Siri and Alexa. “The idea was it was going to be this universal assistant that would help you order in a romantic dinner and get musicians to play for you and deliver flowers – way beyond what Siri can do,” Marcus said. Instead, the service was shut down in 2018, after an underwhelming run.

Khudanpur, on the other hand, remains optimistic about the technology’s potential use cases. “I have this whole vision of how AI is going to empower humans at an individual level,” he said. “Imagine if my bot could read all the scientific articles in my field; then I wouldn’t have to go read them all. I’d simply think, and ask questions, and engage in dialogue,” he said. “In other words, I will have an alter ego of mine, which has complementary superpowers.”
