
Chatbots helpful or hurtful?

By Wang Yuke | HK EDITION | Updated: 2023-08-04 08:50

The rapid advance of artificial intelligence-related tools has raised concerns they would replace humans in jobs that demand emotional intelligence and agility. But how the race between AI and humanity will end up is anyone's guess. Wang Yuke reports from Hong Kong.

Josephine Chan, who specializes in psychological counseling, and I were seated in front of a laptop, challenging ChatGPT (Generative Pre-trained Transformer) to counsel a simulated patient who was emotionally distraught.

The intention behind the challenge was to discern the emotional distinctions between humans and chatbots, or generative artificial intelligence, and to gauge how likely chatbots are to supplant humans in occupations that require intense levels of emotional intelligence and agility.

Since mental health counseling serves as a point of reference for other occupations - its techniques are borrowed in marketing strategies and human relations tactics - a counselor could be the most fitting touchstone for this experiment.
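For readers curious how such a session can be staged, the sketch below poses an illustrative counseling scenario to a chatbot through the OpenAI Python client. It is a minimal illustration under stated assumptions: the model name, the prompt wording and the client library are our choices for the example, not the exact setup used in this test.

# A minimal sketch of posing a counseling scenario to a chatbot.
# Assumes `pip install openai` (v1.x client) and OPENAI_API_KEY set
# in the environment. Model name and prompts are illustrative only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4",  # any chat-capable model would serve here
    messages=[
        {"role": "system",
         "content": "You are talking with a person who is emotionally "
                    "distraught after lashing out at someone they love."},
        {"role": "user",
         "content": "I feel overwhelmed and guilty. What should I do right now?"},
    ],
)

print(response.choices[0].message.content)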

Upon reading ChatGPT's answers, Chan said: "There are a lot of constructive points here, I would say. 'Be aware of your hurtful behavior' is really a good start for bettering your emotions later. But a professional counselor may never ask a client to apologize that early until things fall into place and the client is emotionally ready."

"'Self-compassion' is a good hint. But what is it exactly? It shouldn't be thrown out offhand as such because it's a serious matter of exercise," she said. Since the chatbot glossed over the importance of "understanding the root causes" in emotion management, we probed further into "how to do it at the moment of being overwhelmed".

What popped up was "apologize sincerely, take a break, seek support and practice self-care" - predictable, prosaic and cliched answers similar to the previous ones, philosophically stilted words of wisdom. "'Remember, healing and growth are a personal journey.' It's too lecturing," said Chan. "We (professional counselors) won't do that (make patronizing remarks) and should never be judgmental. What we do is steer our clients to see the whole picture, and recognize their behaviors, before we leave it to them to cope."

Generic vs tailor-made

The prospect of AI replacing humans as counselors never crosses Chan's mind, as "(emotionally fraught) people come to us for a reason. They feel trusted, secure and emotionally anchored because we lend an ear to their miseries, process their despair together, and try to construe its underlying cause." These are all grounded in the vicarious, visceral emotions humans develop through interpersonal communication and observation - something that can hardly be attained any time soon by chatbots lacking emotional intelligence and sentience.

Far from being a clear-cut matter, human psychology is a multifaceted, enigmatic and esoteric complexity - often too elusive even for the professional therapists, counselors and psychologists who take great pains to untangle it.

There are two loose ideas in the discipline of psychoanalysis, conceived by Sigmund Freud - manifest content and latent content - which Paul Pang, a clinical psychologist practicing in Hong Kong and the United States, has to distinguish and comb through with his clients during counseling sessions. Manifest content, simply put, refers to the visible features that speak for a person's emotions; the manifest emotion is consistent with how one is really feeling. Latent content suggests hidden emotions concealed beneath one's outward appearance. For example, Pang explained, a person may look angry but, under the fuming facade, actually be full of despair and deep sorrow. "They bottle up their true emotions, requiring us as counselors to pick them up through intimate conversations and questionnaires," he said.

"A lot of my clients had turned to ChatGPT for an answer to their problems before coming to me. They would show me what they have got from the chatbot, and ask me if it's true," said Pang. "I would say, AI's answers are spot on."

Nevertheless, the chatbot's responses, which are generated from existing data, are fairly generic, taking no account of individual circumstances, Pang argued. Human counselors, by contrast, are adept at discerning individual variations and tailoring their approaches and solutions, he said.

Pang said that many of his clients made a beeline for the chatbot when they were haunted by negative thoughts or things they'd prefer to forget, seeking to validate their own suspicions. "AI is very knowledgeable, but it encourages 'forced attribution' among my clients, which is color-tinted, misleading and unhealthy," he observed. "There was a client who suspected her husband was having love affairs. She asked chatbots about the traits of romantic infidelity, drew a parallel between the list of characteristics described by AI and her husband's behaviors, and found them to be pretty much a match."

But, Pang continued, "We deployed a private investigator and eventually proved her husband was innocent."

The confirmation bias fed by AI's broad-brush results could throw a troubled person into a vicious downward spiral if he or she fixates on AI as an echo chamber and is reluctant to seek professional help. "She still insisted her husband ticked many boxes of the abhorrent behaviors listed by AI," bemoaned Pang.

If a person refuses to listen to constructive advice, he or she is unlikely to act. Working out how to steer a troubled individual into following a wholesome path, where the person is intuitively and emotionally involved, can only be done by humans, said Pang.

"When I notice them relenting somewhat, softening their voice or pausing, that's the moment (they become disarmed and are willing to show vulnerability). I would swing into action (pointing out that their thoughts are skewed, and encourage them to reflect otherwise)," he explained.

Nature vs nurture

Obviously, mental health practitioners should be safe from obliteration by AI. Other typically "people-y jobs", where a human touch and independent thinking are integral to high-flying performance, should also be spared obsolescence - among them medical professionals, teachers, sales representatives, legal practitioners, urban planners and creatives. These are among the jobs considered irreplaceable by Unnanu, a technology platform.

"Working with people" and "problem solving" are among the top types of skills listed by the World Economic Forum.

Human beings set themselves apart from other species in large part through their capacity for critical thinking, sophisticated emotional intelligence and artful communication - abilities rooted in "both nature and nurture", said Sreedev Sharma, founder of Sociobits.

"Critical and independent thinking, and emotional intelligence are thought to be mediated by specific neural networks in the human brain, such as the prefrontal cortex and amygdala. These networks are involved in higher-order cognitive processes, such as decision-making, problem-solving and emotional regulation," explained Ken Ip, chairman of the Asia MarTech Society.

The brain regions responsible for emotional intelligence development interact with one another and with other sensory and cognitive areas, enabling humans to recognize, understand, express and regulate their own and other people's emotions, he added. Emotional intelligence also depends on social, environmental and cultural stimuli, "such as early attachment, parental guidance, peer interaction, cultural norms and education", of which machine learning is devoid.

Siau Keng-leng, head of the Department of Information Systems at City University of Hong Kong, agreed that AI's emotional intelligence can be trained through dataset inputs and refined instructions, pointing to its "nurturing" potential. "A robot can act out crying. But, under the veneer of its crying face, does it really feel empathetic or sad inside? Maybe not," said Siau, speaking of the absence of a chatbot's inherent personality, or "nature" component.
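Siau's point about "nurturing" is, at bottom, a point about training data. The toy sketch below, written against scikit-learn with an invented six-example dataset, shows how a model learns to attach emotion labels purely from examples, with nothing resembling felt experience involved:

# A toy "nurtured" emotion classifier: it learns labels purely from data.
# The six-example dataset is invented for this sketch; production systems
# train on millions of labeled utterances.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "I can't stop crying, everything feels hopeless",
    "I'm so furious I could scream",
    "Today was wonderful, I feel great",
    "I'm terrified about what happens tomorrow",
    "What a lovely surprise, thank you",
    "Leave me alone, I'm seething",
]
labels = ["sadness", "anger", "joy", "fear", "joy", "anger"]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

# The model outputs a label, but feels nothing: pattern-matching, not sentience.
print(model.predict(["I am absolutely furious right now"]))  # expected: ['anger']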

The limitations of deep learning would surface when it's applied to emotion recognition, argued Ip. "It may not be able to capture the context and subtlety of human emotions, such as irony, sarcasm, humor or empathy. It may also be a flop at reasoning."

Friend or foe

The idea that generative AI possesses empathy and emotional intelligence is not a far-fetched myth though, because it can be emotionally wired through repetitive training, said Dicky Yuen, founder and managing consultant of Venturenix, a dedicated IT, digital and marketing recruitment firm. "But it will perpetually be 'simulated or scripted emotions', and far removed from its human counterparts' emotional capacity" which is fluid and self-contained, he insisted.

While programmed emotion has found its way into such repetitive, mundane jobs as customer service, it will surely disappoint in many trades that value emotional flexibility, malleability, spontaneity and street smarts, opined Yuen. "You like Tom Cruise not solely because of how he looks (which could be replicated via deepfakes) but because of how he behaves ... his mannerisms, his attitude, which can't be simulated, because there are no patterns to learn."

What ChatGPT can't do now, it may well be capable of doing years later, said Pang, who entertains the idea of merging AI technology with self-help therapy and clinical psychology practices.

"Seeking a solution from AI when you find yourself not in the right state of mind is the easiest, handiest and probably most effective fix," said Pang, who perceives chatbots as a comprehensive handbook for mild emotional issues.

AI's acumen in registering subtle body language, facial expressions and biometrics could even shed light on a person's unfiltered, real-time mental condition, revealing emotions that are not immediately apparent. "Blinking may indicate one's lying, perspiration hints at one's nerves and anxiety, active brows could betray one's intention of trying to convince others of something, trembling is suggestive of fear and apprehension," said Pang, suggesting that AI's ability to identify these raw indicators will help psychologists make sense of their clients and patients.
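How such raw indicators might be wired into a screening aid can be sketched with simple rules. Everything below - the field names, the thresholds, the cue labels - is a hypothetical placeholder, not clinical science; a real system would derive these signals from computer-vision and biometric pipelines and calibrate them against clinical data:

# A hypothetical rule-based sketch mapping raw biometric indicators to the
# emotional cues Pang describes. All names and thresholds are invented
# for illustration, not clinically validated values.
from dataclasses import dataclass

@dataclass
class BiometricSnapshot:
    blink_rate_per_min: float  # hypothetical: from an eye-tracking pipeline
    perspiration_level: float  # hypothetical: 0.0-1.0, skin-conductance sensor
    brow_activity: float       # hypothetical: 0.0-1.0, facial-landmark tracking
    tremor_amplitude: float    # hypothetical: 0.0-1.0, motion analysis

def screen(s: BiometricSnapshot) -> list[str]:
    """Map raw indicators to tentative cues for a psychologist to review."""
    cues = []
    if s.blink_rate_per_min > 30:   # frequent blinking may indicate lying
        cues.append("possible deception")
    if s.perspiration_level > 0.6:  # perspiration hints at nerves and anxiety
        cues.append("nerves / anxiety")
    if s.brow_activity > 0.7:       # active brows: trying to convince others
        cues.append("persuasive intent")
    if s.tremor_amplitude > 0.5:    # trembling suggests fear and apprehension
        cues.append("fear / apprehension")
    return cues

print(screen(BiometricSnapshot(35, 0.7, 0.2, 0.6)))
# -> ['possible deception', 'nerves / anxiety', 'fear / apprehension']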

Apart from helping to detect lies for forensic or clinical psychological purposes, Pang envisioned that AI's razor sharpness and exactness could be leveraged to identify early signs of mental abnormality - with microchips implanted in our bodies - and thereby elicit timely professional intervention.

"There will be social demand for such, and AI, in response, will sooner or later be more sophisticated technically and emotionally," Pang said. He feels that fearmongering about AI's horrendous threat to humanity doesn't make sense. "We humans will evolve too, in tandem with AI," he explained. "We can barely stop its advancement."

The idea of emotionally honing AI to endow it with independent thoughts and emotions as powerful as humans' doesn't make sense to Yuen. "I can't see any commercial value in doing so. Imagine talking to such a chatbot - it may irritate you with offensive remarks from its own brain. Who would want to use it?"

How the race between AI and humanity will end up is anyone's guess. We can't rule out the possibility of AI falling into the wrong hands - bad actors conspiring to provoke wars - and this could be very dangerous, warned Yuen.

The idea that "AI programming and development should be regulated" is probably "music to our ears", but enforcing it will be tough, Yuen said. "I would say, those who are willing to follow these regulations (while developing AI technologies) won't need to be regulated; but, for those (with malicious intentions) who need to be regulated, you can't force them to toe the line."

Some argue that our discourse shouldn't revolve around whether AI will gain sentience, but around humans' propensity to ascribe human traits and emotions to chatbots. Humans are predisposed to forge emotional attachments with chatbots and anthropomorphize AI technologies. Is that legitimate? Is it a problematic sentiment?

"It's probably unhealthy," said Yuen. "But I think this is what the ever-evolving technology will lead us to. Decades ago, we wouldn't have imagined we would feel upset about losing one follower on social media as we do today."

Is it unhealthy to attribute and attach emotions to technologies? Should it be discouraged? Yuen answered yes on both counts. "Setting a distance from technology always holds true."

Contact the writer at jenny@chinadailyhk.com

 

A screenshot showing ChatGPT's answers to questions brought up by the reporter during a challenge between a human psychology counselor and artificial intelligence on how best to deal with an emotionally distraught patient. Wang Yuke / China Daily

 

 

 
