
AI evaluates emotional intelligence through sophisticated algorithms

Many companies use artificial intelligence to evaluate customer conversations. Such software can also recognize emotions, BR research shows. Data protection experts view this critically.

There is a sentence that most people in Germany know from telephone hotlines: “In order to test and improve our service quality, we would like to record the call.”

What many callers don’t know: Some call centers use artificial intelligence (AI) to evaluate their emotions on the phone – based on their voice.

For example, one of the world’s largest call center operators, Teleperformance, has AI software that is designed to recognize, among other things, the emotions of callers and call center agents. How fast do they speak? Do they sound excited or anxious? Are customers threatening to terminate their contracts or file lawsuits? The AI is designed to evaluate all of this, as the company’s internal documents show.

In Germany, almost two thousand agents make calls for Teleperformance, including on behalf of energy suppliers. The documents show that Teleperformance customers can use the software for their German-speaking hotlines. It is unclear how many conversations the company has analyzed with the AI software. The company did not want to comment on this when asked by BR.

Analyzing customer emotions in real time

The call center company 11880, on the other hand, is open about the use of AI for emotion recognition. 11880’s clients include housing associations, car dealerships and electric scooter rental companies. The company handles customer complaints for them.

The AI analyzes speech melody, intensity, rhythm and sound in real time. In total, more than 6,000 voice parameters are analyzed in order to calculate emotions, says Jörn Hausmann, a manager at 11880. The software is designed to recognize feelings such as anger and annoyance, but also friendliness.
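11880 has not published how its software works, and the roughly 6,000 voice parameters it cites are far beyond what fits here. Purely to illustrate the idea, the minimal Python sketch below (assuming the open-source librosa library; the features, thresholds and file name are invented for the example) extracts a handful of acoustic properties – loudness, "harshness", spectral variability – and maps them to a coarse mood label of the kind that could drive an on-screen smiley.

```python
# Minimal, illustrative sketch only - NOT 11880's or Teleperformance's system.
# Assumes the open-source librosa library; features and thresholds are invented.
import librosa

def mood_from_audio(path: str) -> str:
    y, sr = librosa.load(path, sr=16000)                       # mono waveform
    loudness = librosa.feature.rms(y=y).mean()                 # intensity proxy
    harshness = librosa.feature.zero_crossing_rate(y).mean()   # rough agitation proxy
    variability = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13).std()  # speech-melody proxy

    # Hand-tuned, hypothetical rules standing in for a trained emotion classifier.
    if loudness > 0.10 and harshness > 0.15:
        return "agitated"            # would be shown as an unhappy smiley
    if loudness < 0.02 and variability < 20:
        return "flat"                # quiet and monotone
    return "neutral/friendly"        # default: green smiley

print(mood_from_audio("call_snippet.wav"))  # hypothetical recording
```

Even this toy version makes the core problem visible: whatever thresholds the designer picks become the operational definition of "friendly" or "agitated".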

Hausmann emphasizes that the AI software for emotion analysis is meant to support the agents; its use has been approved by the works council and reviewed by a data protection officer.

During ongoing calls, the call center agents see smileys on their screens that indicate the mood of the conversation. One smiley shows the emotions of the caller, another shows those of the call center agent. The caller’s emotions are also continuously tracked and evaluated. This is to ensure that the agents remain friendly and, in the best case, the customers hang up more satisfied. This approach is not without controversy.

Recorded message “legally problematic”

The company’s recorded message only mentions an evaluation “to check and improve our service quality”. Law professor Lena Rudkowski from the University of Giessen calls recorded messages of this kind “legally problematic” because callers cannot know how the call will be evaluated. Emotion recognition is something “that the customer does not have to expect”.

Callers are not informed about the emotion AI, confirms 11880 manager Jörn Hausmann. Consent to the evaluation is not necessary: “Nothing is stored here and no customer profiles are derived from it,” says Hausmann. This means that even if customers object to the recording, their emotions are still evaluated.

Lawyer Wedde: Emotion recognition is inadmissible

Law professor Peter Wedde from the Frankfurt University of Applied Sciences believes that emotion analysis in call centers is not legally permissible. Employers are only allowed to process the data of their own employees that is absolutely necessary, says Wedde. The analysis of emotions allows “extensive insights into personality.” In the broadest sense, according to Wedde, this is health data, the processing of which is subject to extremely strict conditions.

For lawyer Rudkowski from the University of Giessen, it is questionable whether a constant real-time analysis that team leaders could view would not constitute total surveillance of call center agents. Employers are not allowed to “monitor their employees continuously throughout their entire working hours and put them under considerable psychological pressure to adapt,” stresses Rudkowski.

AI in the workplace – no clear rules

Almost 160,000 people work in call centers in Germany. It is unclear how many of these call centers use artificial intelligence for speech or emotion analysis. Industry experts estimate that between ten and thirty percent of call centers in Germany evaluate telephone calls using AI.

So far, there is no legal regulation of artificial intelligence in Germany – including in the workplace. The federal government is currently working on a new employee data protection law that will also affect the use of AI, the Federal Ministry of Labor wrote in response to a BR inquiry.

MEP: Emotion AI “pseudoscientific”

At the EU level, the AI Act is intended to fundamentally regulate the use of artificial intelligence in the future. In June, the EU Parliament adopted its position on the draft law by a large majority. The parliamentary position stipulates that emotion recognition using voice AI should be banned in the workplace, in education and for law enforcement purposes.

Sergey Lagodinsky, member of the European Parliament for Alliance 90/The Greens, calls the use of AI for emotion recognition “pseudoscientific”. In an interview with BR, Lagodinsky says: “Every person has the right to their inner life and not to share their emotions.”

The European Data Protection Supervisor, Wojciech Wiewiórowski, also describes the use of AI for emotion recognition as “extremely undesirable”, with an exception for health and research purposes.

Debate about banning emotion recognition

Individual members of the conservative European People’s Party (EPP), however, are moving away from strict bans in the AI Act. Angelika Niebler (CSU) said in an interview with BR that emotion recognition in the workplace is an issue where “opportunities and risks must be carefully weighed up.”

The Commission, Council and EU Parliament are currently negotiating the final content of the AI Act. The negotiations are expected to be concluded by the end of this year.

You have put in twelve hours of work and are exhausted and disheartened, but you make an effort to maintain a neutral expression. You are too tired to keep typing, yet you refrain from yawning, frowning, or looking upset, because you are under constant surveillance – surveillance that tracks your emotions and will not hesitate to report any sign of excessive anger.

It’s not the plot of a sci-fi movie or an episode of “Black Mirror,” but it could very well be the reality for certain Chinese employees working at major tech companies.

A Chinese company named Taigusys has developed an AI emotion-recognition system that can identify and monitor the facial expressions of multiple individuals and produce detailed reports on each person’s emotional state. However, researchers argue that such systems are not only frequently inaccurate, but also fundamentally unethical.

The Guardian first reported on this system in May. Taigusys counts multinational corporations such as Huawei, China Mobile, China Unicom, and PetroChina among its key clients, though it’s uncertain whether these companies are using this particular product.

Insider reached out to Taigusys and the 36 companies on its client list, but all of them remained silent in response to the requests for comment.

According to Taigusys, its system aids in “addressing new challenges” and can “minimize conflicts” arising from emotional or confrontational behavior.

This is achieved through an AI system that can simultaneously analyze the facial expressions of multiple individuals. An algorithm then evaluates each person’s facial muscle movements and biometric signals based on various criteria outlined by Taigusys.

The program seeks out “positive” emotions such as happiness, surprise, and being moved by something uplifting. It also scans for negative emotions like disgust, sorrow, confusion, disdain, and anger, along with neutral emotions such as being “focused” on a task.

Taigusys claims that its software can even detect fake smiles.

The emotion-recognition software is also capable of generating reports on individuals and suggesting “emotional support” if they surpass the recommended thresholds for “negative emotions.”
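Taigusys has not disclosed how its thresholds or reports are implemented. As a rough illustration of the reporting step described above, here is a minimal Python sketch; the 25 percent threshold, the input format and the aggregation are assumptions for the example, while the emotion categories mirror those the company describes.

```python
# Illustrative sketch of per-person emotion aggregation and threshold flagging.
# Threshold, input format and aggregation are invented, not the vendor's parameters.
from collections import Counter

NEGATIVE = {"disgust", "sorrow", "confusion", "disdain", "anger"}

def emotion_report(frame_labels: list[str], negative_threshold: float = 0.25) -> dict:
    counts = Counter(frame_labels)
    total = max(len(frame_labels), 1)
    negative_share = sum(counts[e] for e in NEGATIVE) / total
    return {
        "distribution": {e: c / total for e, c in counts.items()},
        "negative_share": negative_share,
        "flag_for_support": negative_share > negative_threshold,  # triggers the "emotional support" suggestion
    }

# One person's labels across a day of sampled video frames (made-up data).
print(emotion_report(["happiness", "focused", "anger", "anger", "focused", "sorrow"]))
```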

“As per our system’s product description, based on the analysis of one’s facial features, we can determine the level of confrontational, stressed, or nervous behavior in an individual, among other metrics. We can also assess the person’s emotional response and identify any suspicious activity,” the company stated.

Activists are warning against the implementation of such programs due to ethical and human rights concerns.

In a joint research paper, Vidushi Marda, senior program officer at British human rights organization Article 19, and Shazeda Ahmed, a Ph.D. candidate at UC Berkeley School of Information, revealed that they found 27 companies in China, including Taigusys, working on developing emotion-recognition programs.

“If this kind of technology is deployed, it violates the ethical and legal rights of employees within the workplace. Even in a privately-owned workplace, there should be an expectation of privacy and dignity, as well as the employees’ freedom to act and think independently,” Marda emphasized.

She pointed out that emotion-recognition systems are based on pseudoscience and the unfounded assumption that facial expressions are connected to a person’s inner emotional state.

“These technologies cannot be ethically used, as they are based on unethical assumptions,” Marda asserted.

Additionally, Ahmed suggested that intense panoptic surveillance imposed by such technology could lead to individuals self-censoring and modifying their behavior to manipulate the system.

Daniel Leufer, Europe policy analyst at the digital civil rights non-profit Access Now, said that emotion recognition programs, quite apart from their shaky scientific foundation, would represent a “severe violation” of various human rights, including privacy, freedom of expression, and freedom of thought, even if the technology worked as claimed.

“Most importantly, all these violations potentially occur even if emotion recognition is not scientifically possible. The mere fact that people believe it is, and create and deploy systems claiming to do it, has real effects on people,” Leufer opined.

“Simply refrain from using, developing, or funding it. Its scientific validity is questionable, it infringes on various fundamental rights, and it’s just plain unsettling,” he declared.

“A society that monitors our emotions is one that will reward some individuals and penalize others. Are specific emotions indicators of criminal behavior or low productivity?” observed Rob Reich, a political science professor at Stanford and co-author of an upcoming book on ethics and technology, “System Error: Where Silicon Valley Went Wrong and How We Can Reboot.”

“Government surveillance of our emotions will result in systematic privacy violations and a decline in human dignity,” Reich emphasized.

“It is never morally justifiable for individuals to utilize AI systems to monitor people’s emotions, especially when those conducting the surveillance hold positions of authority over those being surveilled, such as employers, law enforcement, and the government,” Leufer added.

This might explain why, so far, prisons are among the few workplaces to openly acknowledge using the technology.

The Guardian interviewed Chen Wei, a general manager at Taigusys, who informed them that the company’s systems are operational in 300 prisons and detention centers across China. The system is linked to approximately 60,000 cameras in these facilities and has contributed to keeping prisoners “more compliant,” according to Chen.

With this system, authorities can continuously monitor individuals in real-time, 24 hours a day.

“Violence and suicide are prevalent in detention centers. Even if authorities no longer physically abuse prisoners, they often attempt to break them by depriving them of sleep. Consequently, some prisoners experience mental breakdowns and attempt suicide. Our system helps prevent such incidents,” Chen informed The Guardian.

Apart from the ethical concerns associated with emotion recognition software, many doubt the technology’s capability to provide an accurate assessment.

Desmond Ong, an assistant professor at the department of information systems and analytics at the National University of Singapore’s School of Computing, suggests that the software could potentially aid in “identifying critical life-and-death situations, such as identifying fatigued, intoxicated, or mentally unstable pilots and train operators.”

However, he cautioned that it could also be misused to unfairly penalize or excessively optimize employee performance, such as making ’empathy’ a key performance metric for call center employees.

Ong pointed out that systems like the one developed by Taigusys also oversimplify human emotional expression by reducing individuals to the display of basic emotions.

Moreover, the extent to which AI can truly comprehend the complexity of human emotions is uncertain.

Sandra Wachter, an associate professor and senior research fellow at the University of Oxford’s Oxford Internet Institute, stated to Insider that it would be improbable for an algorithm to accurately decipher the highly intricate emotional states of individuals solely through facial expressions.

She highlighted, for instance, how women are often socialized to smile politely, which may not necessarily indicate happiness or agreement.

“Implementing emotion-recognition software in the workplace also poses a challenge to diversity as it compels individuals to conform to an algorithmic and artificial ‘mainstream’ standard, thereby impinging on people’s autonomy to freely express themselves,” Wachter remarked.

“We can observe a conflict with fundamental human rights, such as freedom of expression and the right to privacy,” she added.

AI struggles to interpret human emotions effectively. So, why are regulators focusing on this technology?

I recently visited one of my favorite spots in New York City, the public library, to peruse original letters, writings, and musings of Charles Darwin. The renowned English scientist had a penchant for writing, and his curiosity and keen observations come to life on the pages.

In addition to advancing the theory of evolution, Darwin analyzed the expressions and emotions of humans and animals. He deliberated in his writings on the scientific, universal, and predictable nature of emotions, and he sketched figures with exaggerated expressions, which were on display at the library.

Lately, with everyone expressing concern about ChatGPT, AI general intelligence, and the possibility of automation replacing jobs, I’ve noticed that regulators have been issuing more warnings regarding AI and emotion recognition.

Emotion recognition, in this context far removed from Darwin’s, involves attempting to discern a person’s feelings or state of mind using AI analysis of video, facial images, or audio recordings.

The concept is not overly intricate: the AI model might identify an open mouth, narrowed eyes, and tense cheeks with a thrown-back head, for example, and interpret it as laughter, deducing that the individual is happy.
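To make that inference chain concrete, here is a deliberately crude Python sketch: a hard-coded rule that maps the detected cues mentioned above straight to "happy". Real systems learn such mappings from data rather than hard-coding them, but the leap from surface cues to an inner state is the same – and so is the brittleness, since identical cues could accompany pain, crying, or acting.

```python
# Toy version of the inference described above: detected facial cues -> emotion label.
# Real systems use learned models over facial "action units"; this hard-coded rule
# only makes the leap from cues to "happy" explicit.
def infer_emotion(cues: set[str]) -> str:
    laughter_cues = {"open_mouth", "narrowed_eyes", "tense_cheeks", "head_back"}
    if laughter_cues <= cues:
        # The system's guess - the person could equally be in pain or performing.
        return "happy"
    return "unknown"

print(infer_emotion({"open_mouth", "narrowed_eyes", "tense_cheeks", "head_back"}))
```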

However, in practice, this process is exceptionally complex—and, according to some, it represents a precarious and invasive instance of the type of pseudoscience often produced by artificial intelligence.

Numerous advocates for privacy and human rights, such as European Digital Rights and Access Now, are urging for a complete ban on emotion recognition. Although the EU AI Act version approved by the European Parliament in June does not constitute a total ban, it prohibits the use of emotion recognition in law enforcement, border control, workplaces, and educational institutions.

In the US, some lawmakers have raised concerns about this particular technology, suggesting that it may be a strong candidate for future AI regulation. Senator Ron Wyden, a leading advocate for regulation, recently commended the EU for addressing this issue and cautioned, “Relying on facial expressions, eye movements, tone of voice, and gait to assess an individual’s character or future actions is unreliable. Nonetheless, millions of dollars are being invested in the development of emotion-detection AI based on questionable science.”

Now, the question is: why is this such a significant concern? Are worries about emotion recognition well-founded, and could stringent regulation impede positive innovation?

Several companies are already marketing this technology for a wide range of applications, although it is not yet widely deployed. For example, Affectiva has been investigating how AI analyzing people’s facial expressions can be used to determine if a car driver is drowsy and to gauge audience reactions to a movie trailer. On the other hand, companies like HireVue have sold emotion recognition as a method to vet potential job candidates, a practice that has faced extensive criticism.

“I generally support allowing the private sector to develop this technology. There are important use cases, such as assisting visually impaired individuals to better understand others’ emotions,” stated Daniel Castro, vice president of the Information Technology and Innovation Foundation, a think tank based in Washington, DC.

However, some applications of this technology are more troubling. Several companies are selling software to law enforcement agencies to identify deception or flag suspicious behavior.

A European Union-sponsored pilot project, iBorderCtrl, incorporates emotion recognition as part of its technology to manage border crossings. According to its website, the Automatic Deception Detection System “assesses the likelihood of deception in interviews by analyzing interviewees’ non-verbal micro-gestures” (while acknowledging the “scientific controversy surrounding its effectiveness”).

Nevertheless, the most publicized use (or misuse) of emotion recognition technology is observed in China, and this is undoubtedly a concern for legislators.

China has repeatedly utilized emotion AI for surveillance, particularly to monitor the Uyghurs in Xinjiang, as reported by a software engineer who claimed to have installed the systems in police stations. Emotion recognition was intended to detect a nervous or anxious “state of mind,” akin to a lie detector. As a human rights advocate cautioned the BBC, “It’s individuals in highly coercive circumstances, under immense pressure, understandably feeling nervous, and that’s interpreted as an indication of guilt.” Some schools in the country have also employed the technology on students to assess understanding and performance.

Ella Jakubowska, a senior policy advisor at the Brussels-based organization European Digital Rights, states she has not come across “any credible use case” for emotion recognition: “Both [facial recognition and emotion recognition] are about social control; about who monitors and who gets monitored; about where we see a concentration of power.”

Moreover, there is evidence that emotion recognition models are inherently unreliable. Emotions are intricate, and even human beings often struggle to accurately identify them in others. While the technology has advanced in recent years due to the availability of more and better data and increased computing power, the accuracy varies widely depending on the system’s intended outcomes and the quality of input data.

“The technology is imperfect, which probably has less to do with the limitations of computer vision and more to do with the complex, culturally and contextually variable, and imprecise nature of human emotions,” Castro noted.

This brings us back to Darwin. A key debate in this field is whether science can definitively determine emotions. We may witness progress in affective computing as the underlying science of emotion continues to develop—or we may not.

It reflects the current state of AI, where the technology is highly hyped and the concept of AI enhancing our understanding of the world seems compelling. However, AI expert Meredith Broussard questions whether everything can be reduced to a mathematical problem.

The internet is filled with negativity, insults, and significant disagreements that have global implications.

AI is often portrayed as exciting and revolutionary, but it also perpetuates inequality, disrupts job markets, and affects education. It has been likened to a thrilling amusement park ride and a captivating magic trick, but also seen as humanity’s ultimate invention and moral responsibility. AI is labeled as the trend of the decade, yet some see it as outdated marketing jargon from over half a century ago. It can be seen as human-like or alien, incredibly intelligent or incredibly basic. Some believe it will stimulate economic growth, while others believe it is a bubble waiting to burst. AI is expected to bring abundance and enable humanity to thrive in the universe, while others fear it will lead to our demise.

What is everyone really talking about?

Artificial intelligence is currently one of the most influential and discussed technologies. But what exactly is it? While it may seem like a straightforward question, it is an increasingly urgent one. Simply put, AI encompasses a range of technologies that enable computers to perform tasks that are typically associated with human intelligence, such as facial recognition, speech understanding, autonomous driving, language generation, answering questions, and creating visual content. However, even this definition encompasses a wide array of capabilities.

This complexity brings about challenges. What does it mean for machines to understand speech or generate a sentence? What types of tasks can we expect machines to perform, and how much trust should we place in their abilities?

As this technology rapidly transitions from prototype to commercial product, these questions become relevant to all of us. However, I, like most people, do not have definitive answers. Those developing AI are also uncertain about what AI truly is. “These are the kinds of questions that are important enough that everyone feels like they can have an opinion,” says Chris Olah, chief scientist at the AI lab Anthropic in San Francisco. “I also think you can argue about this as much as you want and there’s no evidence that’s going to contradict you right now.”

If you’re willing to delve into this topic, I can explain the reasons behind the uncertainty, disagreements, and why it matters. It all starts with a casual joke.

In 2022, during the initial episode of Mystery AI Hype Theater 3000, hosts Alex Hanna and Emily Bender, known for poking fun at the exaggerated claims made by Silicon Valley, make a far-fetched suggestion. While humorously critiquing a lengthy Medium post by Google’s VP of Engineering, Blaise Agüera y Arcas, titled “Can machines learn how to behave?” where he argues that AI can grasp concepts in a manner akin to human understanding, such as moral values, Hanna and Bender decide to replace the term “AI” with “mathy math” to underscore their viewpoint.

The purpose of this irreverent phrase is to strip away what they perceive as exaggerated claims and human-like attributions in the quoted content. Soon thereafter, Hanna, a sociologist and research director at the Distributed AI Research Institute, and Bender, a computational linguist at the University of Washington and a prominent critic of the tech industry, emphasize the disparity between Agüera y Arcas’ intended message and their interpretation.

Agüera y Arcas asks, “How should AIs, their creators, and their users be held morally accountable?”

Bender counters, “How should mathy math be held morally accountable?”

Bender asserts a category error, rejecting Agüera y Arcas’ perspective as nonsensical. “Can we please stop it with the ‘an AI’ or ‘the AIs’ as if they are, like, individuals in the world?” says Bender.

Although it might appear as if they are discussing different topics, they are not. Both parties are referring to large language models, the technology driving the current AI surge. However, the discourse surrounding AI has become more polarized than ever. In a tweet in May, Sam Altman, CEO of OpenAI, expressed his amazement at the latest update to GPT-4, the company’s flagship model, by saying, “Feels like magic to me.”

The gap between mathematics and magic is substantial.

AI has believers who have strong faith in the current capabilities of technology and in its inevitable future advancements. They claim that artificial general intelligence is within reach and that superintelligence will follow. On the other hand, there are skeptics who dismiss such assertions as mystical nonsense.

The popular narrative surrounding AI is influenced by prominent figures in big tech, such as Sundar Pichai, Satya Nadella, Elon Musk, and Altman, as well as renowned computer scientists like Geoffrey Hinton. Sometimes, these proponents and doomsayers are the same individuals, asserting that the technology is so advanced that it’s detrimental.

As AI hype has grown, a vocal opposition against exaggerated claims has emerged. This opposition includes a group of researchers, including Hanna and Bender, and outspoken critics of the industry such as influential computer scientist and former Googler Timnit Gebru, and NYU cognitive scientist Gary Marcus. Each of them has followers who engage in contentious discussions with proponents.

In summary, AI has become a concept that varies in meaning for different people, leading to the formation of devoted fan bases. It often seems like these groups are talking past each other, and not always with sincerity.

Perhaps you find all of this to be trivial or tiresome. However, considering the influence and intricacy of these technologies, which already play a role in determining insurance premiums, retrieving information, and performing various tasks, it’s crucial that we establish a shared understanding of what we are really discussing.

In my conversations with individuals at the forefront of this field, no one has provided a clear explanation of what exactly they are developing. (A quick note: This article primarily focuses on the AI debate in the US and Europe, as many of the most well-funded and advanced AI laboratories are located there. However, significant research is also being conducted in other regions with their own diverse perspectives on AI, particularly in China.) The rapid pace of development contributes to this, but the field also lacks consensus. Current large language models can accomplish remarkable tasks, yet there is no unified understanding of their underlying mechanisms.

These models are trained to finish sentences and appear capable of performing a wide range of tasks, from solving high school math problems and coding to passing legal exams and creating poetry. When individuals accomplish these tasks, their intelligence is acknowledged. But what about when a computer achieves the same? Is the display of intelligence sufficient?

These questions are central to the meaning of “artificial intelligence,” a term that has been a subject of debate for decades. However, the discourse surrounding AI has become more contentious with the emergence of large language models that can imitate human speech and writing with stunning realism.

We have developed machines with human-like behavior but have not abandoned the tendency to imagine a human-like mind behind them. This tendency results in exaggerated assessments of AI capabilities, entrenches opinions, and adds fuel to the broader cultural clash between techno-optimists and techno-skeptics.

In addition to this atmosphere of uncertainty, there is a wealth of cultural influences, ranging from the science fiction that likely shaped many individuals in the industry to more pernicious ideologies that impact our thoughts about the future. Given this complex mix, debates about AI are no longer solely academic (and perhaps never were). AI triggers intense emotions and prompts mature individuals to engage in name-calling.

“The current state of the debate is not intellectually healthy,” Marcus comments.

For years, Marcus has highlighted the shortcomings and limitations of deep learning, the technology that propelled AI into the mainstream, powering everything from LLMs to image recognition to self-driving cars. His 2001 book “The Algebraic Mind” argued that neural networks, the foundation of deep learning, are incapable of reasoning independently. (We will skip over this for now, but we will revisit it later and examine the significance of a term like “reasoning” in a sentence like this.)

Marcus states that he has attempted to engage Hinton, who publicly expressed existential concerns about the technology he helped develop, in a proper debate about the actual capabilities of large language models. “He simply refuses,” Marcus remarks. “He dismisses me as a fool.” (Having previously spoken with Hinton about Marcus, I can confirm this. “ChatGPT clearly has a better understanding of neural networks than he does,” Hinton told me last year.) Marcus also faced criticism when he wrote an essay titled “Deep learning is facing limitations.” Altman responded to it with a tweet: “I wish I had the confidence of a mediocre skeptic of deep learning.”

Despite this, Marcus’s consistent advocacy has established his reputation and earned him an opportunity to sit alongside Altman and provide testimony before the US Senate’s AI oversight committee last year.

That’s why these conflicts are more important than typical online arguments. While there are significant egos and large amounts of money involved, these disputes are crucial when industry leaders and opinionated scientists are called upon by heads of state and lawmakers to explain the capabilities and potential dangers of this technology. They are significant when this technology becomes integrated into everyday software, such as search engines, word-processing applications, and mobile assistants. AI is here to stay, and it’s essential to understand what we are investing in and who might be deceived.

“Debating whether AI is omnipresent or non-existent is a unique discussion in the history of technology,” Stephen Cave and Kanta Dihal emphasize in Imagining AI, a collection of essays from 2023 that delves into how various cultural beliefs influence people’s perspectives on artificial intelligence. “The fact that such a debate can take place regarding AI reflects its mythical nature.”

Ultimately, AI represents an idea or an ideal shaped not only by mathematical and computer science principles but also by worldviews and science fiction concepts. Establishing a concrete definition of AI will bring clarity to many issues. While consensus may be challenging to achieve, agreeing on the essence of AI would provide a solid foundation for discussions on its future development.

Why can’t we all get along?

In a 2014 BBC interview reflecting on her career, renowned cognitive scientist Margaret Boden, now 87, was asked whether she believed there were any limitations preventing computers (referred to by her as “tin cans”) from emulating human capabilities.

Boden stated, “I certainly don’t believe there are limitations in principle because to do so would imply that [human thinking] occurs through magic, and I don’t subscribe to that view.” However, she cautioned that powerful computers alone would not suffice. The field of AI also requires “powerful ideas” – new theories about the nature of thought and new algorithms that could replicate it. “But these are very, very challenging, and I see no reason to assume that we will eventually answer all of these questions. Perhaps we will, or perhaps we won’t.”

Boden’s reflections on the early years of the current AI surge illustrate the ongoing uncertainty that pervades the field. This ambivalence, whether we will or won’t achieve AI’s goals, echoes the decades during which she and her colleagues grappled with the same profound questions that researchers continue to confront today. AI was conceived as an ambitious pursuit over 70 years ago, and we still debate the attainability of its goals, and how we will ascertain if they have been reached. Most, if not all, of these disagreements stem from our limited understanding of intelligence and our struggle to define and recognize it. The field is rife with conjecture, but certainty remains elusive.

We have been grappling with this issue since the idea of AI gained serious consideration. Even before that, when the narratives we consumed implanted the notion of human-like machines deep within our collective imagination. The protracted history of these disputes means that present-day conflicts often reinforce chasms that have existed since the inception of AI, making it even more challenging for people to find common ground.

To comprehend our current position, we must understand our origins, including the hype surrounding AI’s inception, which aimed to attract funding.

A concise history of AI marketing

Computer scientist John McCarthy is acknowledged for coining the term “artificial intelligence” in 1955 while applying for funding for a summer research program at Dartmouth College in New Hampshire.

The objective was for McCarthy and a small team of fellow researchers, a who’s-who of postwar U.S. mathematicians and computer scientists – or, as Harry Law, a researcher specializing in AI history at the University of Cambridge and ethics and policy at Google DeepMind, refers to them, “John McCarthy and the boys” – to convene for two months (not a typo) and make significant progress in this new research challenge they had set for themselves.

From left to right, Oliver Selfridge, Nathaniel Rochester, Ray Solomonoff, Marvin Minsky, Peter Milner, John McCarthy, and Claude Shannon sitting on the lawn at the 1956 Dartmouth conference.

The research was to proceed on the assumption that every aspect of learning, or any other feature of intelligence, can in principle be described so precisely that a machine can be made to imitate it, McCarthy and his colleagues wrote. They aimed to figure out how to make machines use language, form abstractions and concepts, solve problems that were then only solvable by humans, and improve themselves.

The list of objectives they aimed to achieve, referred to by Bender as “the starry-eyed dream,” remains largely unchanged. Using language, creating concepts, and solving problems are still crucial goals for AI. Their confidence also carries over: they claimed that significant progress could be made if a group of scientists worked on it together for a summer. However, this “summer” has now extended over seven decades, and the degree to which these problems have been solved continues to be a subject of debate on the internet.

However, what is frequently omitted from this well-known history is that artificial intelligence was originally not called “artificial intelligence.”

Several of McCarthy’s colleagues did not like the term he had coined. Arthur Samuel, a participant at Dartmouth and the creator of the first checkers-playing computer, was quoted as saying, “The word ‘artificial’ makes you think there’s something kind of phony about this.” Claude Shannon, a coauthor of the Dartmouth proposal, preferred the term “automata studies.” AI pioneers Herbert Simon and Allen Newell also referred to their work as “complex information processing” for many years.

In reality, “artificial intelligence” was just one of numerous labels that could have represented the assortment of ideas that the Dartmouth group was drawing on. The historian Jonnie Penn has identified potential alternatives that were in use at the time, including “engineering psychology,” “applied epistemology,” “neural cybernetics,” “non-numerical computing,” “neural dynamics,” “advanced automatic programming,” and “hypothetical automata.” This list demonstrates the diverse range of influences for their new field, encompassing biology, neuroscience, statistics, and more. Marvin Minsky, another participant at Dartmouth, has characterized AI as a “suitcase word” that can encompass numerous distinct interpretations.

However, McCarthy wanted a name that encapsulated the extensive scope of his vision. By naming this new field “artificial intelligence,” he was able to capture people’s attention—and funding. It’s important to note: AI is appealing and fashionable.

In addition to terminology, the Dartmouth proposal formalized a division between competing approaches to artificial intelligence that has endured within the field. McCarthy and his colleagues aimed to use computer code to describe “every aspect of learning or any other feature of intelligence” in order for machines to replicate them. This laid the groundwork for what later became known as rule-based or symbolic AI (referred to now as GOFAI, “good old-fashioned AI”). However, creating hard-coded rules that accurately represented the processes of problem-solving for real, complex problems proved to be incredibly difficult.

The alternative approach favored neural networks, computer programs that would attempt to learn those rules independently in the form of statistical patterns. Although the concept initially seemed less promising, some researchers continued to develop variations of neural networks alongside symbolic AI. However, it would take many years, substantial computing power, and a wealth of internet data before they truly took off. Fast-forward to the present day and this approach underpins the entire AI boom.

A key point to note is that, similar to today’s researchers, AI’s trailblazers debated fundamental concepts and became engrossed in their own promotional hype. Even the GOFAI team was embroiled in disputes.

Aaron Sloman, a philosopher and fellow AI pioneer now in his late 80s, recalls how “old friends” Minsky and McCarthy “strongly disagreed” when he met them in the ’70s: “Minsky thought McCarthy’s claims about logic could not work, and McCarthy thought Minsky’s mechanisms could not do what could be done using logic. I got along well with both of them, but I was saying, ‘Neither of you have got it right.’” (Sloman still believes that no one can fully explain how human reasoning utilizes intuition as much as logic, but that’s a separate issue!)

As the fortunes of the technology fluctuated, the term “AI” went in and out of vogue. In the early ’70s, both research paths were essentially put on hold after the UK government issued a report arguing that the AI dream had become too ambitious and costly.

In 2008, while I was completing a computer science PhD, only one individual in the department was involved in neural network research. A similar memory is shared by Bender: “During my college years, there was a running joke that AI encompasses tasks that we haven’t yet mastered with computers. Once we figure out how it’s done, it ceases to be perceived as magic and no longer falls into the category of AI.”

However, that enchanting vision, as outlined in the Dartmouth proposal, persisted and, as we witness today, formed the basis of the AGI aspiration.

In 1950, five years before McCarthy began talking about artificial intelligence, the renowned mathematician Alan Turing raised the query: can machines think? To address this question, Turing introduced a hypothetical assessment called the imitation game. The scenario envisions a human and a computer positioned behind a screen, both interacting with a second human who poses questions to each. Turing posited that if the questioner cannot differentiate the responses of the human from those of the computer, the computer could be deemed capable of thought.
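The protocol itself is simple enough to sketch in a few lines. The Python below is only an illustration of the setup Turing describes; the respondents, questions, and judge are hypothetical stand-ins, and the point is merely that the interrogator sees two unlabeled channels and must guess which is the machine.

```python
# Minimal sketch of the imitation game's protocol; respondents, questions and the
# judge are placeholder stand-ins, not any particular test implementation.
import random

def imitation_game(human_answer, machine_answer, judge, questions):
    respondents = {"A": human_answer, "B": machine_answer}
    if random.random() < 0.5:                       # hide which channel is which,
        respondents = {"A": machine_answer, "B": human_answer}  # as the screen does
    transcript = {label: [fn(q) for q in questions] for label, fn in respondents.items()}
    guess = judge(transcript)                       # judge names the channel it believes is the machine
    truly_machine = "A" if respondents["A"] is machine_answer else "B"
    return guess == truly_machine                   # frequent wrong guesses -> the machine "passes"

# Hypothetical respondents and a naive judge, just to make the sketch runnable.
questions = ["What is 7 x 8?", "Write one line about rain."]
human = lambda q: "56" if "7 x 8" in q else "Rain taps the window like a clock."
machine = lambda q: "56" if "7 x 8" in q else "Rain falls softly on the quiet street."
naive_judge = lambda transcript: "A"                # always guesses channel A
print(imitation_game(human, machine, naive_judge, questions))
```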

Unlike McCarthy’s group, Turing comprehended the complexity of defining thought. The Turing test served as a strategy to bypass this conundrum. “Essentially, rather than focusing on the core essence of intelligence, I intend to seek out its manifestation in the environment. I’ll look for its shadow,” as explained by Law.

In 1952, BBC Radio assembled a panel to delve deeper into Turing’s concepts. Turing was joined by two of his colleagues from Manchester University—Maxwell Newman, a mathematics professor, and Geoffrey Jefferson, a neurosurgery professor—along with Richard Braithwaite, a philosopher specializing in science, ethics, and religion at the University of Cambridge.

Braithwaite initiated the discussion: “Thinking is conventionally perceived to be the domain of humans, and perhaps other higher animals. The question might seem too absurd to deliberate on. However, everything hinges on the interpretation of ‘thinking.’”

The panelists deliberated on Turing’s question but did not arrive at a definitive conclusion.

Whenever they endeavored to define the constituents of thought and its mechanisms, the parameters shifted. “As soon as we can observe cause-and-effect processes unfolding in the brain, we no longer regard it as thought but rather as an uncreative, laborious task,” Turing remarked.

The crux of the issue lay here: if one panelist proposed a behavior that could be construed as evidence of thought, such as reacting vehemently to a novel idea, another would counter by suggesting that a computer could be programmed to execute the same action.

Newman highlighted that programming a computer to display the message “I disapprove of this new program” would be a straightforward task. Nonetheless, he conceded that this would be an act of guile.

Exactly, Jefferson noted: he sought a computer that would articulate “I disapprove of this new program” because it genuinely disliked the new program. Hence, for Jefferson, mere behavior was inadequate; it was the process leading to the behavior that held significance.

However, Turing dissented. As he previously observed, isolating a specific process—the laborious task, to use his words—did not satisfactorily define thought either. So, what remained?

“From this standpoint, one might be inclined to define thought as comprising those cognitive processes that elude our understanding,” Turing suggested. “If this holds true, creating a thinking machine would entail producing a device that performs intriguing tasks without our fully comprehending how it achieves this.”

It is intriguing to witness individuals grappling with these concepts for the first time. “The debate is remarkably insightful,” commented Tomer Ullman, a cognitive scientist at Harvard University. “Several points raised are still relevant, perhaps even more so. It seems they were essentially circling the idea that the Turing test primarily aligns with behaviorist principles.”

For Turing, intelligence proved challenging to define but straightforward to identify. He posited that the manifestation of intelligence sufficed and did not expound upon the manner in which such behavior should be engendered.

Nonetheless, when probed, most people possess an innate intuition about what constitutes intelligence versus what does not. There are astute and unwise approaches to appearing intelligent. In 1981, Ned Block, a philosopher at New York University, demonstrated that Turing’s proposition fell short of these innate intuitions. Since it omitted any reference to the source of the behavior, the Turing test could be manipulated through deceit, as Newman had highlighted during the BBC broadcast.

“Could the determination of whether a machine genuinely thinks or exhibits intelligence hinge on the tendency of human interrogators to be credulous?” Block queried. (Or as computer scientist Mark Riedl has remarked: “The Turing test is not for AI to pass but for humans to fall short.”)

Imagine a machine containing a vast table of every possible question paired with a canned answer: feed in a question and it returns the matching reply. This is what Block proposed. He argued that even though such a machine would seem intelligent, it would only have the intelligence of its programmers.
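A few lines of code make Block’s machine concrete. This is a toy sketch, not anyone’s real system: a dictionary of canned question-answer pairs that looks conversational on exactly the inputs its programmers anticipated and on nothing else.

```python
# Toy "Blockhead": a lookup table of canned question-answer pairs. It can seem
# fluent on the questions its programmers anticipated, but there is no process
# behind the answers - which is exactly Block's objection to behavior-only tests.
CANNED = {
    "how are you?": "I'm doing well, thank you for asking.",
    "what do you think of this new program?": "I object to this new program.",
}

def blockhead(question: str) -> str:
    return CANNED.get(question.lower().strip(), "I'd rather not say.")

print(blockhead("What do you think of this new program?"))
```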

Block concluded that whether behavior is intelligent depends on how it is produced, not on how it seems. Blockheads, also known as Block’s toasters, stand as strong evidence against Turing’s proposal.

A few months ago, artificial intelligence wasn’t a popular topic, but now it is gaining momentum. Many magazines have labeled 2023 as the year of artificial intelligence. Despite this, the challenges we face with artificial intelligence today are similar to what we encountered with the internet and computers in the past.

Artificial intelligence is being utilized in various fields, offering significant benefits. However, this fascinating technology still has limitations. It lacks the ability that some humans possess to understand and reflect on the cultural context of their creations.

Humans can think abstractly and find innovative solutions beyond available data. An AI model can offer assistance in the artistic field in multiple ways.

Here are some tasks an AI model can perform (OpenAI, 2023):

  • Craft poetry and song lyrics in various styles and themes, as well as aid in writing song lyrics.
  • Assist in creative writing by providing ideas, suggestions, and helping develop plots, characters, and dialogues for plays, novels, short stories, scripts, and more.
  • Describe landscapes, scenes, or visual elements to create detailed and vivid mental images.
  • Generate original and appealing ideas for artistic or band names.
  • Provide information and knowledge related to artists, artistic movements, styles, famous artworks, music, and more.

AI, like any other technology, initially enters the world as a novelty that captivates everyone. Initially, not many people are interested in the substance of the technology. Only a few individuals can comprehend how this new technology could transform the environment. People typically use it without considering its potential implications (Levinson, 1977).

Machines or models have been programmed to behave ethically, yet there are still imperfections. Many ethical dilemmas have been identified (Lawton & Wigmore, 2023). Experts predict that in 2023, we may witness the first death attributed to a chatbot.

Most emotional AI models are based on flawed or pseudoscientific principles (Romero, 2022).

Algorithms can identify a person crying but often cannot determine the cause (Purdy et al., 2019). For instance, furrowing the brow does not always indicate anger, even though an algorithm might conclude so.

Our emotions do not always accurately reflect our internal states. An AI making assumptions about emotional states could worsen racial or gender inequities (Agarwal, 2022).

The use of AI models and devices is currently unregulated (Candelon et al., 2021).

Can we compete with AI?

Art is a creative and aesthetic expression by human beings (Lind, 1993). It is a narrative construction based on emotions, feelings, and perceptions about the human experience and reality, while serving aesthetic and/or symbolic purposes.

AI will be capable of producing its own artistic narrative based on patterns and statistical models (Manovich, 2018), and eventually compete with humans, with certain advantages and drawbacks.

Ultimately, the choice will come down to whether one prefers the composition of a particular AI or the creation of the next Bach, Mozart, or Beethoven.

Artificial intelligence certainly has numerous advantages, but we should not overlook the drawbacks, risks, and expenses associated with its use, and I’m not just talking about the financial cost.

Those who truly comprehend the real worth of this technology will be the ones to reap the benefits. We need to gain more knowledge not only about its capabilities but also about its limitations. That’s why we need to delve deeper into the structure and operation of artificial intelligence.

In short and straightforward terms, artificial intelligence strives to mimic certain cognitive functions and enhance the efficiency and precision of various tasks.

Technology alters our lives, whether for better or for worse.

Marshall McLuhan once remarked: “We shape our tools, and thereafter our tools shape us.”

Once we embrace a technology, once we allow it to enter our homes, our workplaces, we never want to let it go. We believe there can’t be anything superior or different. We must bear in mind that every technology has a life cycle. It’s highly probable that artificial intelligence will evolve into something we can’t currently envisage.

Technology isn’t limited solely to physical devices and tools but also encompasses systems, structures, and modes of thought that develop in response to technological innovation. AI is focused on developing algorithms and programs that enable machines to process information, learn from it, make decisions, solve problems, and carry out tasks in a manner akin to that of a human being. AI, like any other technology, is a social and cultural phenomenon that influences how we perceive the world, communicate, interact, and construct knowledge.

Artificial intelligence, akin to any technology, should be viewed as a valuable ally that can significantly contribute to simplifying multiple tasks for the benefit of humanity. It should serve humans, and not the other way around.

I’ll conclude my presentation with this thought from the Canadian philosopher and media scholar, Marshall McLuhan:

“If we understand the revolutionary transformations caused by new technologies, we can anticipate and control them, but if we continue in our subliminal trance, we will always be slaves to technological effects” (McLuhan in Norden, 1969).

In a realm where emotions and reasoning frequently overlap, emotional intelligence (EQ) is recognized as a fundamental aspect of achieving personal and professional success. Traditionally, EQ assessments have thrived on interpersonal interactions, depending on subtle perceptions and experiences to evaluate one’s capability to handle emotions, both their own and those of others. However, as artificial intelligence becomes more embedded in our everyday lives, it raises the question: can AI truly gauge and comprehend human emotions, especially within the critical context of job recruitment?

AI’s Involvement in Emotional Analysis

Emotional intelligence transcends being merely a trend in corporate environments; it serves as a predictive marker for job performance, leadership effectiveness, and team collaboration. Organizations have long appreciated EQ for its role in creating a positive work atmosphere and facilitating business achievements. The World Economic Forum indicates that by 2025, emotional intelligence will rank among the essential job skills. Acknowledging this, developers have eagerly sought to utilize AI’s capabilities to assess this vital soft skill.

How AI Measures EQ

AI measures emotional intelligence by employing advanced algorithms that are capable of scrutinizing video interviews, voice variations, and even written communications. Tools such as HeyMilo, which employ machine learning, evaluate candidates’ verbal indicators to rate their emotional skills. This largely involves analyzing speech patterns that are thought to reflect a candidate’s empathy, resilience, and collaborative ability.

For example, an AI system can examine the minute facial expressions a candidate exhibits in response to interview queries and compare them with a database of emotional reactions. Companies like Affectiva have created emotional AI that can gauge the sincerity of a smile or detect subtle signs of anxiety in vocal tones.

The Foundation of the Technology

The technology that underpins AI’s emotional evaluation is based on psychological studies correlating facial expressions with emotions, a discipline initiated by psychologists such as Paul Ekman. Modern AI systems integrate these insights along with extensive datasets to train models to recognize emotional expressions accurately. Recent progress in natural language processing (NLP) further empowers AI to grasp the emotional nuances behind written text, thereby enhancing its ability to evaluate emotional intelligence comprehensively.
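As a concrete (and deliberately simplistic) picture of what text-based emotional scoring involves, consider the sketch below. The word lists and weighting are invented for illustration; commercial tools use trained NLP models rather than keyword counts, but the basic move – turning free text into a numeric "tone" score – is the same, and so is the risk that whatever the lexicon or training data rewards becomes the de facto definition of emotional intelligence.

```python
# Toy lexicon-based scorer for "emotional tone" in written answers. Word lists and
# weights are invented for illustration; real EQ tools use trained NLP models.
EMPATHY_WORDS = {"understand", "listen", "appreciate", "sorry", "help"}
HOSTILE_WORDS = {"stupid", "useless", "whatever", "annoying"}

def tone_score(text: str) -> float:
    words = {w.strip(".,!?-").lower() for w in text.split()}
    score = len(words & EMPATHY_WORDS) - len(words & HOSTILE_WORDS)
    return score / max(len(words), 1)   # crude normalization by answer length

print(tone_score("I understand your concern and I'm sorry - let me help."))
```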

Precision and Reliability

Research indicates mixed outcomes regarding AI’s capability in assessing EQ. A study from the University of Southern California reveals that AI can sometimes match or even surpass human precision in identifying specific emotions. They discovered that algorithms were particularly skilled at recognizing happiness, albeit less proficient with more complex emotions like guilt or pride. The level of accuracy often depends on the quality of the data used for training the algorithms—an adage of “garbage in, garbage out” holds true here.

Obstacles and Ethical Issues

Nevertheless, the integration of AI into EQ evaluations presents several challenges. A significant concern involves privacy—what are the ethical implications of scrutinizing an individual’s facial expressions and vocal tones without their clear consent? Additionally, there exists a potential for bias. If AI is trained on flawed datasets, it could unintentionally reinforce stereotypes or discriminate. Ensuring fairness in AI systems demands continuous assessment and adjustment.

The Prospects of AI in EQ Evaluation

Looking forward, the possibilities for AI in the evaluation of emotional intelligence are vast. As the technology progresses, it may become a commonplace tool in recruitment processes, providing a more nuanced insight into candidates’ emotional skills. However, the realization of this potential relies on confronting ethical dilemmas directly and guaranteeing that these AI systems remain as unbiased and equitable as possible.

The evolution of AI in assessing emotional intelligence is just commencing. Although the technology offers a novel approach to human resources practices, its efficacy will rest on our capability to merge the emotional with the algorithmic, ensuring that human empathy is preserved in this interplay. As we navigate this path, the intersection of AI and EQ could transform our understanding of emotional intelligence, turning it into a measurable and visible aspect of our hiring, work, and interpersonal interactions.
