Q&A: The human-machine relationship requires mutual understanding, respect for AI

Three Gartner Research executives weighed in on the coming wave of generative AI apps and how human interaction with machines will redefine how CIOs, CEOs, and everyone else see the technology. And don't forget to say 'please' and 'thank you.'

Generative AI (genAI) will soon infiltrate every aspect of our personal and business lives, meaning humanity will need to reevaluate its relationship with machines — machines that are continuing to become more human-like in their abilities.

In short order, they've gone from being tools to teammates for everything from product development and workplace productivity to romantic companionship and psychological therapy, according to three Gartner Research executives. They talked about the rapid evolution of genAI over the past year during the firm's annual IT Symposium last week.

For businesses, there are four primary opportunities for genAI use: It can be a productivity partner for workplace efficiencies, a creative partner for new product invention, an external way to connect with customers, and a back-office interface with existing systems.

While AI is more than 50 years old, the launch of ChatGPT by Silicon Valley startup OpenAI late in 2022 drastically changed the possibilities for its use. What was once a technology with limited human-machine interaction capabilities suddenly gained anthropomorphic qualities.

In large part, organizations have seen genAI tools like ChatGPT as a new must-have technology in the same way companies once embraced e-commerce and digital transformation initiatives. One snag remains: most organizations still aren’t sure how genAI can advance business initiatives or enrich their bottom line.

With that backdrop, Gartner distinguished vice president analysts Erick Brethenoux, Mary Mesaglio, and Don Scheibenreif held a news conference with industry journalists and answered questions about genAI after the IT Symposium ended. Among other bits of advice, Brethenoux warned companies against focusing too much on productivity; Scheibenreif, co-author of the book “When Machines Become Customers,” described how AI-based tools are increasingly purchasing products — and should be seen by companies as customers. And Mesaglio talked about the rise of digital therapists — and romantic partners.

Don Scheibenreif

The following are excerpts from that news conference:

What happens when machines become customers? (Scheibenreif) “That’s one of the roles of machines. That premise is based on a lot of devices connected to intelligent systems that are taking on the behavior of humans.

“For example, if you have an HP printer at home and you connect it to HP’s ink service, it can actually buy ink on your behalf by monitoring the usage. So, HP has actually manufactured its own customers. If you have a Tesla [vehicle], you already know it can order parts on your behalf.

“There’s a whole bunch of other examples…about how machines are taking on the behavior of human customers.

“All we want to do is put a thought in our clients’ minds: what happens when your best customers aren’t human? How does that change your sales approach, your marketing approach, your HR approach? That’s the discussion we wanted to get started with the book.”

Mary Mesaglio

Are AI-powered machines becoming more human, or as human as we are? (Mesaglio) “I think what happens is a lot of people assume there are certain ineffable human qualities that a machine will never replicate, and I’m not here to say whether that’s true or not. But that ineffable human quality tends to come down to things like empathy and emotional intelligence, the assumption being that humans will forever be better than a machine at those things. And that’s not what our research shows.

“I’m not talking about something futuristic that could happen, I’m talking about something right now. We already see situations where machines are acting instead of humans in, say, therapist positions. [People are] feeling a lot more comfortable with machine therapists than human therapists for a whole bunch of reasons.

“For example, teenagers, [with] mental health apps and chatbots, you’ll find they have this thing called digital disinhibition, [which] is when someone feels more comfortable telling their deepest, darkest secrets to a machine than to a human because they feel less judged; they feel the machine is more neutral. And, the machine is always available. So, if you have a human therapist appointment on Tuesday, but you’re having a breakdown on Sunday night at 2 a.m., you can go to your machine therapist. They’re always there for you.”

What other examples are there of human-like machine interaction? (Mesaglio) “We already see lots of examples of machines as romantic partners to humans. For example, there is a chatbot that was designed to create human emotional connections called Xiaoice. It’s a chatbot mostly in the form of a young female. It has 660 million users, most of them male — many of whom consider it the most important relationship in their lives.

“There’s this one anecdote of this guy standing on the edge of a building about to commit suicide, and he took one last look at his phone, found Xiaoice and said, ‘I don’t feel like I deserve to be in this world.’ And she said, ‘Well, I care about you, and I want you to be here.’ And it kind of saved his life.

“I think we’re entering a realm where we’re not necessarily considering all of the consequences of the human-machine relationship and all the ways it will manifest in business and personally.”

Erick Brethenoux

(Brethenoux) "The way we approach machines and the way we engage with machines — there’s not much that’s new here. We see kids playing video games and we get excited and get deeply emotionally involved. We’ve done that with machines for a long time already.

"The difference with AI is anthropomorphization, so how much do I project of humanity into a machine and what do I then anticipate out of that?

"The problem with ChatGPT is that it so suddenly came onto the scene, accessible for everybody…. Suddenly they’re pushing that anthropomorphization to the extent of saying the machine has…senses, or something like that. Big danger. Machines don’t have senses.

"Anthropomorphization helps us relate to machines better. It helps us use the interface better. Deep learning and AI has been around for five decades. Someone gave us an anthropomorphic interface to interact with, and that’s where this exploded. It has human-like voice, its versatile, it can interact with me.

You mentioned businesses can push this machine relationship too far. What does that mean? (Brethenoux) “Decisions are made by humans and machines. They need to interact properly and efficiently for decisions to be made and explained and followed through.

“To exploit that, one of the biggest mistakes our clients have made in the last nine months has been to look exclusively at productivity gains. They look for ways to eliminate positions in the organization because it looks good at the end of the quarter. The problem is that as they keep going with AI, they realize that for the other part of the portfolio [production] they need people to be able to get new products and services in place. Too late; they’ve let them go. Good luck getting them back.

“So, there’s a danger we’re seeing in organizations only focusing on productivity gains; we call that 'within boundaries.' There is pushing boundaries, where you’re pushing out new products and new services, and then there’s breaking boundaries. Very few of our clients are there today.”

Why specifically did you focus the keynote on the human-machine relationship? (Scheibenreif) “With a topic like generative AI, there’s a certain amount of fatigue. People are thinking, seriously? One more thing? I’ve been listening to this for a year. We wanted to go for an angle we think people are missing…the business context. The part that’s really easy to miss.

“So, as Erick just said, the tyranny of the quarter: going for productivity gains, looking for where you can eliminate positions, then maybe regretting it because you need those employees to do something more strategic later on. We see that, too. There’s the risk of seeing [AI] only through a technology lens, or only through a short-term ROI lens, and missing the larger conversation about what kind of relationships with machines we want to have and whether we just want to wander into those or want to be more intentional.

“I’d say in some ways, social media was round one of the human-machine relationship, and we weren’t super cognizant of how that relationship was shifting. I’d give that a score of technology 1, humans 0. There were a lot of unintended consequences, where we said, 'let’s make the world’s most addictive algorithm' and then left it to parents and 14-year-olds to police themselves.

“I’m not sure that’s where we would have intentionally wound up.

“This is round two, and we’re saying, yes, there’s ROI; yes, there are productivity considerations; yes, there are technological considerations – how do we make this stuff work together? But the larger context of whether we want to wander into these relationships and what we want them to look like is, I think, easier to miss — especially in a business context.”

How can businesses address regulations that are still being hammered out in terms of AI? (Scheibenreif) “We want people to be aware of the unintended consequences so they can do something about it. So, when we talk about principles and values, even though government regulation will take time to catch up, it doesn’t stop organizations from saying, 'We will not cross these lines in our use of this technology.'

“When it comes to the human-machine relationship, we want to remind people each one of us has a responsibility — but most importantly, the CIOs at these organizations also have a responsibility — to use this technology wisely.”

(Mesaglio) “I think the fundamental, number one mechanism you should use anytime you’re exploring an area that’s new and unclear…is using principles well. By that I mean something that’s unambiguous about your position, that helps someone make a decision between alternatives, and that is specific to you and tied to the business outcomes you care about.

“So, if you’re trying to be the most customer-centric business on the planet, your principles should be about customer centricity. If you’re trying to be the most inexpensive and most operationally efficient, your principles should be about that.

“So, it should be in line with the outcomes you’re trying to achieve and the brand problems you have. When you think of it that way, then you think in a much more rigorous way: which lines have we determined we will or will not cross? How do we know when we’re straying into territory that will put us on the front page of a newspaper for all the wrong reasons?

“One of the things to do is for leadership to get together and have some kind of conversation, workshop, exercise, or discussion about what we are and are not comfortable doing. The way to do that is to push yourselves to a threshold that would appear ridiculous. Like, where’s a place where we’d never have to think about security — not at all? Where’s the place we can spend $1 million without blinking? Where we need no ROI?

“When you think about these ridiculous edges and then move back from them, you usually get to something much meatier, where you can say, 'this is the threshold we won’t cross.'”

What principles would you suggest businesses adopt when deploying AI? (Brethenoux) “If you mean it, then tell employees: I’m not implementing this technology so that I can replace you in five months. Or, [I’m deploying AI] because I’m trying to focus on a specific part of the organization’s process, but also to get something that’s going to be more pervasive and actionable for many more people. If customer service is the thing, then put something in place so that customers don’t yell at me when they call customer service, for example.

“The principle you’re talking about should also be explicit in the way you’re implementing the technology within your organization...."

(Mesaglio) “This is not something that’s new with generative AI. We’ve seen all these organizations that simply said, we just want to be digital…, we have to do digital transformation. Of course, digital is not a principle; it’s just a means to an outcome. What happens when you have digital as the outcome? You get online grocery shopping, which is 100% digital and which I 100% hate. I don’t like having to look up the olive oil and then try to figure out whether it’s 250ml or a liter. When I’m physically in a store, I can see that.

“So, principles prevent you from straying into areas where the outcome isn’t actually what anyone intended and everybody hates it.”

In terms of these principles and the human-machine relationship, who in the enterprise is responsible for that? (Scheibenreif) “I think ultimately, it’s the CEO. They set the tone; they should help drive the values for the organization. The application of AI, and even the human-machine relationship, should emanate from those values, just as with being digital: how do our values and principles accommodate digital and accommodate AI? Principally, that’s the responsibility of the CEO.

“Now, with the CIO, we think they’re well positioned to lead the organization in the application of everyday AI. So, all those applications, like Microsoft Copilot and Workspace, and every application that’s getting generative AI technology, are definitely the CIO’s purview. But when it comes to…the game-changing stuff, the CIO is part of the team that’s ultimately led by the CEO. It’s that team that’s actually thinking about this technology: how are we going to disrupt our industry, and how are we going to disrupt ourselves?

“The CEO first and foremost sets the tone. Our CEO sets the tone at Gartner on the use of AI, and beyond that, different leaders take different roles: the CIO for everyday AI, and for the game-changing discussion, the executive team, which could be led by a chief technology officer or by a head of sales; it doesn’t matter. The question is who’s going to lead the game-changing discussion on the use of AI.”

What should the CIO be doing from an IT perspective to prepare the enterprise for generative AI tools? (Scheibenreif) “One is to have a discussion among the executive team about what we will and will not use AI for. What are those opportunities, and what principles will guide our actions? We’ve been talking about the CIO as guiding and even coaching the CEO and the executive team to have these very important discussions.

"What can the CIO do for their own team? They also have to have principles that guide their actions withing IT. We talk about when you're [deploying] a user-facing AI software, to not think of it as just buying a piece of software but think of it as a teammate. Do you interview it? Do you ask it questions? Do you test it out? Those are things we think IT can do. And obviously we talked about security. They not only have to recognize the new attack vectors from this technology, but how can existing security tools be used to approach [it]?"

(Mesaglio) "We talk about three things the CIO has to do that are not delegable. You can’t get some other department to do this. Creating AI-ready principles for your department — that’s not exclusive to the CIO, but it’s certainly needed in IT.

"The next is AI-ready data. What are you doing to make sure your proprietary data is ready to be consumed by a large language model? AI-ready data is secure, it’s enriched; it’s not just data, but data with business rules and business tags; it’s data that’s fair, so not biased. And data that’s governed.

"The third thing is AI-ready security. So, making sure you’re keeping up with this whole area, which is very dynamic. Just keeping up with good guys, so to speak, who are not sitting around saying, 'Well, use genAI at your peril, we’re all screwed.' They’re saying, 'There’s a bunch of stuff emerging, so watch this space.'"

When it comes to the Gartner hype cycle, genAI is at the peak of inflated expectations for emerging technologies. What will push it into the trough of disillusionment and are there ways it can re-emerge from that trough? (Brethenoux) “The main question I get from clients on a regular basis — at least once a day — is ‘I want to use AI.’ My next question is, ‘Why? What for? Why do you want to do that to yourself? What are the business reasons behind that?' There must be something leading you to believe it will solve a problem you couldn’t solve before.

“It’s good to remember it’s a hype cycle and not a maturity cycle. So, until there’s no hype anymore, it’s pretty safe to say it will stay at the top of the hype cycle for a while – expect, expect, expect.

“The trough happens when suddenly every expectation we set, and all the goals we’ve set, cannot be reached. What we’re seeing today, for example, and one reason it may start to fall [into the trough], is that implementing it is hard and expensive. So you have to have a clear map tied to the original investment before you start implementing it. It’s fun to play with, but at the end of the day, what does it bring to the company?

“We’re building research on that to be able to tell our clients how to make sure they don’t fall too far into the trough. My boss, Chris Howard, always says heroes are made in the trough, because when the spotlight is no longer on you, you can do marvelous things. So, the trough is not to be feared; it’s a hype trough. The technology is still good for things, and that’s where you do the best work, without expectations on you.”

(Scheibenreif) “We don’t want people to panic. There are things you can do to start to work through this: understand the technology and start to experiment. Things go bad when people panic. I remember when people said, ‘We need to be on Facebook.’ Remember that? That seems so far-fetched today, but back then, that was people panicking.”

(Mesaglio) “When people are victims of the hype, it means they’re expecting outcomes without applying the regular rigor. Being a victim of the hype says, 'I can skip all that, and I just need to do this thing because it’s going to solve all these problems for me.' That’s opposed to normal stuff not at the top of the hype cycle, where you say, ‘OK, we’re going to try this technology and subject it to business cases and…do all these rigorous tests, and then we’ll see.’

“Hype happens when someone says, ‘We just need to be doing genAI,’ in the same way people were saying 10 years ago, 'We just need to be digital.' It’s when you suspend the rigor you’d apply to other things in the pursuit of something you’re hoping will solve a lot more problems than it actually will.”

If and when it reaches singularity, will AI deserve human rights? (Brethenoux) “You don’t reach singularity. You get very, very close, but you never reach it. The big debate in AI has always been whether we can reproduce the human brain in a machine, and whether we are creating a new form of intelligence. I don’t know.

“Humans see the world in a way machines never will. We have senses; we have to survive. And all those things came from evolution. We evolved. Whether machines deserve human rights, I cannot say. I’m not an ethics expert.”

(Mesaglio) “On the singularity, I would say, ‘Do we believe that a machine is going to love chocolate as much as I do, ever? No.’ There’s no way a machine is going to taste chocolate and say, ‘This is the best thing ever invented!’ I think that’s a very human-specific thing.

“But, when it comes to the question of rights, I think we’re already running into that. When I gave the example of Xiaoice — this female avatar that’s perpetually 18 and very beautiful — one of the things that happened…was that the company that created her received complaints that her content, or its content, was too sexualized, too physical, too suggestive, and too romantic. So, the company responded by removing a lot of that content. What happened was that users who’d formed a relationship with this chatbot rebelled. They were up in arms, saying, ‘How dare you give my girlfriend a lobotomy? This isn’t the girl I had my relationship with, and who are you to decide what she can or cannot say to me?’

“So, I think this question of human and machine rights will happen way before we get to anything like artificial general intelligence and I think we’re very capable of making a really big mess in terms of what we do, way before we get to the singularity or even close to it. It’s already happening and it’s happening now.”

(Scheibenreif) “When it comes to the human rights thing, I don’t know that I can speak to that. But I did use ChatGPT as we were doing research for this keynote, and I asked it: what do you think are the most important principles that should govern a relationship between humans and machines? Its top response was mutual respect and understanding.

“So, I think rights are important, but the thing we can do today is ask ourselves: are we respectful? Do we yell at our machines, and what does that say about us personally? I think we have an obligation to think about how this technology works and to use it responsibly, and not be victims of it. There has to be mutual understanding and respect, without getting into anthropomorphizing it.”

(Brethenoux) “I still advise my kids to say thank you to machines, for two reasons: when they take over, they’ll remember; and the kids need to get into the habit of saying thank you to people who give them something or provide a service.”

(Mesaglio) “I agree. I teach my daughter to say thank you, too, because that says more about us than about the machines.

“But, a lot has happened in the metaverse, too. I’m waiting for the first court case where someone says, 'Your avatar hurt my avatar, and it caused me pain and suffering in real life. So, you need to pay up.' This is all going to happen.”