AI Weekly: LaMDA’s ‘sentient’ AI debate triggers memories of IBM Watson




Want free AI Weekly every Thursday in your inbox? Sign up here.

This week, I dived into the tail end of the LaMDA ‘sentient’ AI hoo-hah.

I’ve been thinking about what enterprise technical decision-makers need to consider about it (or not), and I did a little digging into how LaMDA triggers memories of IBM Watson.

Finally, I decided to ask Alexa, who is sitting on an upright piano in my living room.

Me: “Alexa, are you sentient?”

Alexa: “Artificially, maybe. But not in the same way you’re alive.”

Now, let’s dig in.

This week’s AI beat

On Monday, I published “Sentient artificial intelligence: Have we reached peak AI hype?”, an article detailing last weekend’s energetic Twitter discourse, which began with the news that Google engineer Blake Lemoine told the Washington Post he believed LaMDA, Google’s conversational AI for building chatbots based on large language models (LLMs), was sentient.

Hundreds of people in the AI community, from AI ethics experts Margaret Mitchell and Timnit Gebru to computational linguistics professor Emily Bender and machine learning pioneer Thomas G. Dietterich, pushed back on the “sentient” notion and clarified that no, LaMDA is not “alive” and will not be eligible for Google benefits.

But I’ve spent this week mulling over the breathless media coverage and thinking about enterprise companies. Should they worry about customer and employee perception of AI as a result of this sensational news cycle? Is the focus on “sentient” AI simply a distraction from more immediate issues around the ethics of how humans use “dumb” AI? What steps, if any, should companies take to increase transparency?

Memories of IBM Watson

According to David Ferrucci, founder and CEO of AI research and technology company Elemental Cognition, who previously led the team of IBM and academic researchers and engineers that built IBM Watson, the 2011 Jeopardy champion, LaMDA appears human in ways that induce empathy, just as Watson did more than a decade ago.

“When we created Watson, we had someone who was concerned that we had enslaved a sentient being and that we should stop making it play Jeopardy against its will,” he told VentureBeat. “Watson was not sentient. When people perceive a machine that talks and performs tasks humans can perform, and in similar ways, they can identify with it and project their own thoughts and feelings onto the machine; that is, they assume it is like us in fundamental ways.”

Don’t exaggerate the anthropomorphism

It is incumbent on companies to explain how these machines work, he stressed. “We should all be transparent about that, instead of exaggerating the anthropomorphism,” he said. “We should explain that language models are not sentient beings but algorithms that tabulate how words occur in large volumes of human-written text: how some words are more likely to follow others when surrounded by certain other words. These algorithms can then generate sequences of words that mimic how a human would sequence words, without any human thought, feeling, or understanding behind them.”
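Ferrucci’s description maps loosely onto a classic n-gram model. As a minimal illustration only (a toy bigram sketch of my own, not how LaMDA or any modern LLM is actually implemented), here is roughly what “tabulating how words follow other words” looks like in Python:

```python
import random
from collections import Counter, defaultdict

# A toy corpus standing in for "large volumes of human-written text".
corpus = "the cat sat on the mat and the cat slept on the mat".split()

# Tabulate how often each word follows each other word (bigram counts).
follow_counts = defaultdict(Counter)
for prev_word, next_word in zip(corpus, corpus[1:]):
    follow_counts[prev_word][next_word] += 1

def sample_next(word: str) -> str:
    """Sample a next word in proportion to how often it followed `word`."""
    candidates = follow_counts[word]
    words, weights = zip(*candidates.items())
    return random.choices(words, weights=weights)[0]

# Generate a short sequence that mimics the corpus statistics,
# with no thought, feeling, or understanding involved.
word = "the"
sequence = [word]
for _ in range(6):
    word = sample_next(word)
    sequence.append(word)
print(" ".join(sequence))
```

Modern large language models replace this simple lookup with neural networks conditioned on much longer contexts, but the objective is the same in spirit: predict likely next words from the statistics of human-written text.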

The LaMDA Controversy Is About People, Not AI

Kevin Dewalt, CEO of AI consulting firm Prolego, asserts that the LaMDA hullabaloo is not about AI at all. “It’s about us, people’s reaction to this emerging technology,” he said. “When companies deploy solutions that perform tasks traditionally done by people, the employees affected will be anxious.” And, he added: “If Google isn’t up for this challenge, you can be pretty sure that hospitals, banks and retailers will face massive employee revolt. They’re not ready.”

So what should organizations do to prepare? Dewalt said companies need to anticipate this objection and get ahead of it. “Most are struggling to build and deploy the technology, so this risk isn’t on their radar, but the Google example illustrates why it needs to be,” he said. “[But] nobody is worried about this, or even paying attention. They’re still trying to get the basic technology to work.”

Focus on what AI can really do

However, while some are focused on the ethics of “sentient” AI, AI ethics today is focused on human bias and how human programming affects the current, “dumb” AI, said Bradford Newman, partner at law firm Baker McKenzie, who spoke with me last week about the need for organizations to appoint a chief AI officer. And, he points out, AI ethics around human bias is an important issue that is actually happening right now, as opposed to “sentient” AI, which is not happening now or anytime remotely soon.

“Companies should always be considering how any AI application that is customer- or public-facing could negatively impact their brand, and how they can use effective communication and ethics to prevent that,” he said. “But right now, the focus in AI ethics is on the humans in the chain: humans who are using data and programming techniques that unfairly bias the unintelligent AI that is produced.”

For now, Newman says he would tell clients to focus on the use cases of what the AI is intended to do and does, while also being clear about what the AI cannot and is not programmed to do. “These corporations making AI know that most humans want anything that makes their lives easier, and cognitively we love that,” he said, explaining that in some use cases there is a strong desire to make AI appear sentient. “But my advice is: make sure consumers know what the AI can be used for and what it cannot be used for.”

The reality of AI is more nuanced than ‘sentient’

The problem is that “customers and people in general don’t appreciate the important nuances of how computers work,” says Ferrucci, especially when it comes to AI, which can easily trigger an empathetic response when we try to make it appear more human, in terms of both physical and intellectual tasks.

“For Watson, human responses were all over the map; we had people who thought Watson was looking up answers to known questions in a prepopulated spreadsheet,” he recalled. “When I explained that the machine didn’t even know what questions would be asked, the person said, ‘What! How the hell do you do that, then?’ On the other extreme, we had people calling us and asking us to free Watson.”

Over the past 40 years, Ferrucci says, he has seen two extreme models of what is going on: “The machine is either a big lookup table, or the machine must be a person,” he said. “It is neither of those things, categorically; I’m afraid the reality is just more nuanced than that.”

Don’t forget to subscribe to AI Weekly here.

– Sharon Goldman, editor/senior writer
Twitter: @sharengoldman




