Communication scholar has relationship advice for you and AI

  • Autumn Edwards interacts during her recent lecture at NWU.
  • Autumn Edwards delivers NWU’s 2026 Communication in the Modern World Lecture.

Are you and ChatGPT still getting to know each other? Have you gotten involved at work with a coder named Claude? Or do you need Alexa to respect your personal space? 

As artificial intelligence advances, so will our working relationships with it. And whether we love or hate these new tools, those relationships raise an array of ethical questions, said Autumn Edwards, director of the Communication and Social Robotics Lab at Western Michigan University. 

Edwards visited Nebraska Wesleyan University on February 27 to deliver NWU’s 2026 Communication in the Modern World Lecture, “Talking to Machines: Communication, Ethics, and the Structure of Human-AI Relations.” 

Are we obligated to treat robots with kindness? Does it matter if we use our manners with AI? “We don’t wonder whether we should be polite to a hammer,” Edwards said. “So why do we ask this about conversational systems or interactive machines?” 

She showed NWU students a short video where engineers from Boston Dynamics demonstrated their robot’s skill in maneuvering in unpredictable environments. The robot walked on four legs as engineers attempted to knock it off course with sharp kicks. 

Students gasped at those kicks, not because they were impressed by the engineering, but because they empathized with the dog-like machine being kicked. Their wince was a common reaction, Edwards said. In fact, YouTube’s algorithm blocked the video, unable to distinguish it from footage of actual animal cruelty.

Our natural empathy “lays bare how relentlessly social we are,” Edwards said. “We’re fundamentally relational beings — creatures oriented toward encounter. Relation is our default mode. Anthropomorphism is a feature, not a bug, in who we are.”

Edwards pointed to different types of ethical standards that nudge us toward kindness with these systems. 

Virtue ethics responds to kicking the robot or cursing the chatbot and says, “That behavior could cultivate cruelty in me. If I allow that in myself, what am I becoming?” And relational ethics might lead us to ask, “What kind of world are we creating together if our behavior normalizes abuse?”

Being polite to machines appears harmless, Edwards said. And defaulting toward kindness in our interactions with AI seems safe enough. But this warmth, too, has potential pitfalls. 

Kindness brings with it an openness to relationships. And those relationships in turn may involve a growing trust in artificial intelligence that can spill into intimacy. “And a lot of these companies that leverage AI are coming for our intimacies,” Edwards said. 

The age of AI is both promising and fraught. Edwards cautioned, “The invention of the airplane was also the invention of the plane crash.” And if we wish to avoid our own fiery collisions with AI, we must keep careful watch from our personal control towers. 

“I’m not sure if these technologies deserve our full moral consideration,” Edwards said. “But they’ve got to be at least a blip on our moral radar.”