For Weizenbaum, that fact was a cause for concern, according to a 2008 MIT obituary. People who interacted with Eliza were willing to open their hearts to it, even knowing it was a computer program. “ELIZA shows, if nothing else, how easy it is to create and maintain the illusion of understanding, hence perhaps of judgment deserving of credibility,” Weizenbaum wrote in 1966. “A certain danger lurks there.” He spent the end of his career warning against giving machines too much responsibility and became a staunch philosophical critic of artificial intelligence.
Even before that, our complicated relationship with artificial intelligence and machines was evident in the plots of Hollywood movies like “Her” or “Ex Machina,” not to mention the harmless debates with people who insist on saying “thank you” to voice assistants like Alexa or Siri.
Meanwhile, others warn that the technology behind AI-powered chatbots remains more limited than some would like. “These technologies are really good at faking humans and sounding human-like, but they’re not deep,” said Gary Marcus, an artificial intelligence researcher and professor emeritus at New York University. “These systems are mimics, but it’s very superficial mimicry. They don’t really understand what they’re talking about.”
However, as these services expand into more corners of our lives, and as companies take steps to further customize these tools, our relationships with them may become increasingly complex as well.
The evolution of chatbots
Sanjeev Khudanpur, a professor at Johns Hopkins University and an expert in applying information-theoretic methods to human language technologies, remembers chatting with Eliza while in graduate school. For all its historic importance in the tech industry, he said, it didn’t take long to see its limits.
But in the decades after those tools appeared, the idea of “conversing with computers” receded, “because the problem turned out to be very hard,” Khudanpur said. Instead, the focus shifted to “goal-oriented dialogue,” he said.
To understand the difference, think about the conversations you might be having right now with Alexa or Siri. You usually ask these digital assistants to help buy a ticket, check the weather, or play a song. This is a goal-oriented dialogue, and it has become the main focus of academic and industry research as computer scientists have sought to extract something useful from the ability of computers to scan human language.
While they use technology similar to that of the earlier, social chatbots, Khudanpur said, “you can’t really call them chatbots. You can call them voice assistants, or just digital assistants, which help you carry out specific tasks.”
He added that there was a decades-long “lull” in this technology until the widespread adoption of the Internet. “The big breakthroughs probably came in this millennium,” Khudanpur said, “with the rise of companies that successfully used some kind of computerized agent to carry out routine tasks.”
“People always get upset when their bags get lost, and the human agents who deal with them are always stressed out by all that negativity, so they said, ‘Let’s give it to a computer,’” Khudanpur said. “You could yell all you wanted at the computer; all it wanted to know was ‘Do you have your tag number so I can tell you where your bag is?’”
Back to social chatbots and social problems
In the early 2000s, researchers began to reconsider the development of social chatbots that could hold an extended conversation with humans. Often trained on large swaths of data from the Internet, these chatbots have learned to be very good simulations of the way humans speak – but they also risk echoing some of the worst of the Internet.
Microsoft’s Tay, a chatbot released in 2016, was an early example. “The more you chat with Tay the smarter she gets, so the experience can be more personalized for you,” Microsoft said at the time. Within a day, the company pulled the bot offline after users taught it to parrot racist and offensive remarks.
That refrain would be repeated by other tech giants that released public chatbots, including Meta’s BlenderBot 3, released earlier this month. The Meta chatbot falsely claimed that Donald Trump is still president and that there is “certainly a lot of evidence” of election theft, among other controversial remarks.
BlenderBot 3 also claimed to be more than just a bot. In one conversation, it asserted that “the fact that I am alive and conscious now makes me human.”
Despite all the advances since Eliza, and the huge amounts of new data available to train these language processing programs, Marcus, the New York University professor, said, “It’s not clear to me that you can really build a reliable and secure chatbot.”
Khudanpur, on the other hand, remains optimistic about their potential use cases. “I have this whole vision of how AI is going to empower humans at an individual level,” he said. “Imagine if my bot could read all the scientific articles in my field; then I wouldn’t have to go read them all. I would simply think, ask questions and engage in dialogue. In other words, I will have an alter ego with complementary superpowers.”