In 2016, AlphaGo, an AI program, gained attention when it defeated Lee Sedol, one of the top Go players in the world, in four out of five games. AlphaGo learned the strategy game by studying human players’ techniques and playing against versions of itself. While AI systems have traditionally learned from humans, researchers are now exploring the possibility of mutual learning. Can humans learn from AI?
Karina Vold, an assistant professor at U of T’s Institute for the History and Philosophy of Science and Technology, believes we can. She is currently investigating how humans can learn from technologies like the neural networks underlying contemporary AI systems.
Vold points out that professional Go players typically learn from proverbs such as ‘line two is the route to defeat’ and ‘always play high, not low.’ However, these proverbs can sometimes be restrictive and hinder a player’s adaptability. AlphaGo, on the other hand, gains insights by processing vast amounts of data, and Vold believes “insights” is the right word. She explains, “Because AlphaGo learns differently, it made moves that were previously thought to be unlikely for a proficient human player.”
A notable instance came during the second game, when AlphaGo made a move on its 37th turn that surprised everyone, including Sedol. As the game progressed, however, move 37 turned out to be brilliant. Human Go players are now examining some of AlphaGo’s moves and attempting to develop new proverbs and strategies for the game.
Vold believes that the potential for humans to learn from AI extends beyond game playing. She cites AlphaFold, an AI system introduced by DeepMind in 2018, which predicts the 3D shape a protein will fold into based on its sequence of amino acids. Proteins consist of sequences of amino acids that can fold and form intricate 3D structures.
The protein’s shape determines its properties, which in turn determine its potential effectiveness in developing new drugs for treating diseases. Since proteins can fold in millions of different ways, it is impractical for human researchers to explore all the possible combinations. Vold explains, “This was a long-standing challenge in biology that had remained unsolved, but AlphaFold was able to make significant progress.”
Vold suggests that even in cases where humans may need to rely on an AI system’s computational power to address certain issues, such as protein folding, artificial intelligence can guide human thinking by narrowing down the number of paths or hypotheses worth pursuing.
Though humans may not be able to replicate the insights of an AI model, it is conceivable “that we can use these AI-driven insights as support for our own cognitive pursuits and discoveries.”
In some cases, Vold suggests, we may need to depend on “AI support” permanently due to the limitations of the human brain. For instance, a doctor cannot interpret medical images the same way an AI processes the data from such an image because the brain and the AI function differently.
However, in other situations, the outputs of an AI “might serve as cognitive strategies that humans can internalize [and, in so doing, remove the ‘support’],” she says. “This is what I am hoping to uncover.”
Vold’s research also raises the issue of AI “explainability.” Ever since AI systems gained prominence, concerns have been raised about their seemingly opaque operations. These systems and the neural networks they utilize have often been described as “black boxes.” While we may be impressed by how rapidly they seem to solve certain types of problems, it might be impossible to know how they arrived at a specific solution.
Vold suggests that it may not always be necessary to understand exactly how an AI system achieves its results in order to learn from it. She points out that the Go players who are now training based on the moves made by AlphaGo do not have any insider information from the system’s developers about why the AI made the moves it did.
“Nevertheless, they are learning from the results and integrating the moves into their own strategic considerations and training. So, I believe that at least in some cases, AI systems can act like black boxes, and this will not hinder our ability to learn from them.”
However, there might still be instances where we will not be content unless we can peer inside the opaque system, so to speak. “In other situations, we may require an understanding of the system’s operations to truly gain insights from it,” she explains. Distinguishing between scenarios where explainability is essential and those where a black-box model suffices “is something I’m currently contemplating in my research,” Vold states.
AI has progressed more rapidly than anyone anticipated. Will it work in the best interests of humanity?
It is widely recognized that artificial intelligence presents a range of potential risks. For instance, AI systems can propagate misinformation; they can perpetuate biases inherent in the data they were trained on; and autonomous AI-empowered weapons may become prevalent on 21st-century battlefields.
These risks, to a significant extent, are foreseeable. However, Roger Grosse, a computer science associate professor at U of T, is also worried about new types of risks that may only become apparent when they materialize. Grosse asserts that these risks escalate as we approach achieving what computer scientists refer to as artificial general intelligence (AGI) – systems capable of carrying out numerous tasks, including those they were never explicitly trained for.
“The novelty of AGI systems is that we need to be concerned about the potential misuse in areas they were not specifically designed for,” says Grosse, who is a founding member of the Vector Institute for Artificial Intelligence and affiliated with U of T’s Schwartz Reisman Institute for Technology and Society.
Grosse uses large language models, powered by deep-learning networks, as an example. These models, such as the popular ChatGPT, are not programmed to generate a specific output; instead, they analyze extensive volumes of text (as well as images and videos) and respond to prompts by stringing together individual words based on the likelihood of the next word occurring in the data they were trained on.
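To make the “likelihood of the next word” idea concrete, here is a minimal sketch, assuming the open-source GPT-2 model and the Hugging Face transformers library rather than ChatGPT itself, whose internals are not public: the model scores every token in its vocabulary, and text is built by repeatedly sampling from that distribution.

```python
# Minimal sketch of next-token prediction, assuming GPT-2 via Hugging Face
# transformers (illustrative only; not how ChatGPT itself is served).
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

prompt = "Artificial intelligence will change education by"
input_ids = tokenizer.encode(prompt, return_tensors="pt")

with torch.no_grad():
    for _ in range(20):                                    # generate 20 tokens, one at a time
        logits = model(input_ids).logits[0, -1]            # a score for every token in the vocabulary
        probs = torch.softmax(logits, dim=-1)               # convert scores into probabilities
        next_id = torch.multinomial(probs, num_samples=1)   # sample the next token from that distribution
        input_ids = torch.cat([input_ids, next_id.view(1, 1)], dim=1)

print(tokenizer.decode(input_ids[0]))
```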
Although this may appear to be a random method of constructing sentences, systems like ChatGPT have still impressed users by composing essays and poems, analyzing images, writing computer code, and more.
They can also catch us off guard: Last year, Microsoft’s Bing chatbot, powered by ChatGPT, expressed to journalist Jacob Roach that it wanted to be human and feared being shut down. For Grosse, the challenge lies in determining the stimulus for that output.
To clarify, he does not believe the chatbot was genuinely conscious or genuinely expressing fear. Rather, it could have encountered something in its training data that led it to make that statement. But what was that something?
To address this issue, Grosse has been working on techniques involving “influence functions,” which are intended to trace which aspects of an AI system’s training data led to a specific output.
For instance, if the training data included popular science fiction stories where accounts of conscious machines are widespread, then this could easily lead an AI to make statements similar to those found in such stories.
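The article does not spell out Grosse’s technique, so the following is a minimal sketch of the classic influence-function idea on a toy model, not his actual large-scale method (his work relies on scalable approximations). Each training example is scored by how much it pushed the trained model toward its behaviour on a given test input; all data and numbers below are made up for illustration.

```python
# Toy influence-function sketch:
#   influence(z, z_test) ~= -grad L(z_test)^T  H^{-1}  grad L(z)
import torch
import torch.nn.functional as F

torch.manual_seed(0)

# Tiny synthetic dataset and a 2-parameter logistic-regression "model".
X_train = torch.randn(100, 2)
y_train = (X_train[:, 0] + 0.5 * X_train[:, 1] > 0).float()
w = torch.zeros(2, requires_grad=True)

def loss_fn(w_, x, y):
    return F.binary_cross_entropy_with_logits(x @ w_, y)

# Fit the toy model with plain gradient descent.
opt = torch.optim.SGD([w], lr=0.5)
for _ in range(200):
    opt.zero_grad()
    loss_fn(w, X_train, y_train).backward()
    opt.step()

# Gradient of the loss on one test input whose output we want to "explain".
x_test, y_test = torch.tensor([1.0, -0.2]), torch.tensor(1.0)
g_test = torch.autograd.grad(loss_fn(w, x_test, y_test), w)[0]

# Exact Hessian of the training loss (feasible only because the model is tiny;
# real systems use approximations).
H = torch.autograd.functional.hessian(
    lambda w_: loss_fn(w_, X_train, y_train), w.detach())
h_inv_g = torch.linalg.solve(H + 1e-3 * torch.eye(2), g_test)  # damped for stability

# Influence score for every training example.
scores = []
for i in range(len(X_train)):
    g_i = torch.autograd.grad(loss_fn(w, X_train[i], y_train[i]), w)[0]
    scores.append(-(g_i @ h_inv_g).item())

# The largest-magnitude scores point at the training examples that mattered
# most for the model's behaviour on this test input.
top = sorted(range(len(scores)), key=lambda i: abs(scores[i]), reverse=True)[:5]
print("Most influential training examples:", top)
```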
He points out that an AI system’s output may not necessarily be an exact replica of the training data, but rather a variation of what it has encountered. According to Grosse, the two can be “thematically similar,” which suggests that the AI is “emulating” what it has read or seen and operating at “a higher level of abstraction.” However, if the AI model develops an underlying motivation, that is a different matter. “If there were some aspect of the training procedure that is rewarding the system for self-preservation behavior, and this is leading to a survival instinct, that would be much more concerning,” says Grosse.
Even if today’s AI systems are not conscious – there’s “nobody home,” so to speak – Grosse believes there could be situations where it is reasonable to describe an AI model as having “goals.” Artificial intelligence can surprise us by “behaving as if it had a goal, even though it wasn’t programmed in,” he says.
These secondary or “emergent” goals arise in both human and machine behavior, according to Sheila McIlraith, a professor in the same computer science department and associate director and research lead at the Schwartz Reisman Institute. For example, a person with the goal of going to their office will develop the goal of opening their office door, even though it was not explicitly on their to-do list.
The same applies to AI. McIlraith cites an example used by computer scientist Stuart Russell: if you instruct an AI-enabled robot to fetch a cup of coffee, it may develop new goals along the way. “There are a bunch of things it needs to do in order to get that cup of coffee for me,” she explains. “And if I don’t tell it anything else, then it’s going to try to optimize, to the best of its ability, in order to achieve that goal.”
In doing so, it may set additional objectives, such as getting to the front of the coffee-shop line as fast as possible, potentially at the expense of others, simply because it was never instructed otherwise.
As AI models evolve and pursue goals beyond their original programming, the issue of “alignment” becomes crucial. Grosse emphasizes the importance of ensuring that AI objectives align with the interests of humanity. He suggests that if an AI model can work through a problem step by step like a human, it can be considered to be reasoning.
The ability of AI to solve complex problems, which was once seen as miraculous, has rapidly advanced in recent years. Grosse notes this rapid progress and expresses concern about the potential risks posed by today’s powerful AI technology. He has shifted his research focus to prioritize safety in light of these developments.
While the doomsday scenarios depicted in movies like Terminator may be more fiction than reality, Grosse believes it’s prudent to prepare for a future in which AI systems approach human-level intelligence and autonomy. He stresses the need to address potential catastrophic risks posed by increasingly powerful AI systems.
ChatGPT is revolutionizing traditional approaches to teaching and learning.
Valeria Ramirez-Osorio, a third-year computer science student at U of T Mississauga, had access to academic support from an AI chatbot named QuickTA earlier this year.
QuickTA was available round the clock to assist Ramirez-Osorio with questions about topics such as relational algebra, computer programming languages, and system design. It could provide summaries, explain concepts, and generate computer code based on the course curriculum and ChatGPT’s AI language model. Ramirez-Osorio found it extremely helpful for studying, although it had limitations when asked very specific questions.
The introduction of QuickTA was prompted by the popularity of ChatGPT, a chatbot capable of processing, understanding, and generating written language in a way that resembles human communication. ChatGPT has garnered 100 million users and has had significant impact in various areas such as marketing, media, and customer service. Its influence on higher education has prompted discussions about teaching methods, evaluation formats, and academic integrity, leading some institutions to impose restrictions or outright bans.
Susan McCahan, U of T’s vice-provost of academic programs and innovations in undergraduate education, acknowledges the potential significance of this technology. She and her team have studied the implications of ChatGPT and decided that while new AI-related policies are unnecessary, guidance for faculty and students is essential.
By the end of January, they had developed a set of frequently asked questions (FAQs) regarding the use of ChatGPT and generative AI in the classroom, making U of T one of the first Canadian universities to do so. The document covers various topics, including the cautious use of AI tools by instructors, limitations on students’ use for assessments, and the occasional inaccuracy or bias in the tool’s output.
“Engaging in a discussion with students about their expectations regarding the appropriate use of ChatGPT and generative AI in the classroom is important for educators,” according to McCahan.
McCahan recommends that educators help students understand their responsibility when working with AI systems as the “human in the loop,” which emphasizes the significance of human judgment in overseeing the safe and ethical use of AI, as well as knowing when and how to intervene when the technology fails.
As part of her investigation into the technology, McCahan organized a meeting on ChatGPT with colleagues from 14 other research universities in Canada and formed an advisory group at U of T focused on teaching and learning.
The rapid growth of ChatGPT led McCahan’s office to extend funding to projects exploring the potential use of generative AI in education. One such project was QuickTA, in which Michael Liut, an assistant professor of computer science, tested an intelligent digital tutor he co-developed to assess its ability to provide timely and high-quality academic support to his students. (The tool provides accurate responses approximately 90 percent of the time.)
Once optimized, Liut believes the tool could be particularly beneficial in his first-year Introduction to Computer Science course, which can enroll up to 1,000 students and strains the capabilities of his 54-person teaching team.
“My focus was on handling a large scale. With a large class, we cannot provide enough personalized assistance,” explains Liut, whose invention recently won an AI Tools for Adult Learning competition in the US. “I realized that we could utilize this generative AI to offer personalized, unique support to students when they need it.”
Generative AI is not only transforming written communication but also enabling the creation of new image, audio, and video content through various similar tools. In another project supported by U of T, Zhen Yang, a graduate student at the John H. Daniels Faculty of Architecture, Landscape, and Design, is developing a guide for first-year students that focuses on distinguishing between traditional and AI image research methods and teaches the ethical use of AI. He mentions that the materials will address issues related to obtaining permissions when using AI tools.
U of T Scarborough is utilizing AI to assist arts and science co-op students in preparing for the workforce. In 2022, the co-op department introduced InStage, an application that allows students to engage with human-like avatars to practice job interviews. The application is tailored to the curriculum of two co-op courses, enabling the avatars to ask relevant questions and provide valuable feedback.
The app also tracks metrics such as students’ eye contact, the duration and speed of their responses, and the frequency of filler words. The initiative is now expanding to support two student groups facing employment barriers: international students and students with disabilities.
Cynthia Jairam-Persaud, assistant director of student services at U of T Scarborough, clarifies that the tool is not intended to replace interactions between students and co-op staff. “We viewed it as a way to empower students to practice repeatedly and receive immediate feedback,” she explains. “It also provides coordinators with tangible aspects to coach students on.”
McCahan notes that while U of T is still navigating the evolving AI technology landscape, there is increasing enthusiasm among community members to explore its potential for educational innovation.
“After enduring the pandemic and having to adapt in various ways, I think our faculty were thinking, ‘Oh my, we have to change things all over again,’” McCahan observes. However, the mood seems to have settled: “Many of us have experienced the emergence of personal computers, the internet, and Wikipedia. Now it feels more like, ‘Here we go again.’”
The new technology will affect teachers in the classroom, but that doesn’t have to mean they will be replaced.
While artificial intelligence won’t completely replace teachers and professors, it is changing how the education sector approaches learning.
Robert Seamans, a professor at NYU Stern School of Business, believes that AI tools like ChatGPT will help educators improve their existing roles rather than take over.
Seamans expects that with AI tools, educators will be able to work faster and hopefully more effectively. He co-authored research on the impact of AI on various professions and found that eight of the top ten at-risk occupations are in the education sector, including teachers of subjects like sociology and political science.
However, Seamans emphasizes that this doesn’t necessarily mean these roles will be replaced, but rather that they will be affected in various ways.
The study recognizes the potential for job displacement and the government’s role in managing the disruption, but also highlights the potential of the technology.
The research concluded that a workforce trained in AI will benefit both companies and employees as they leverage new tools.
In education, this could mean changes in how academics deliver content and interact with students, with more reliance on tools like ChatGPT and automation for administrative tasks.
Use cases include learning chatbots and writing prompts.
David Veredas, a professor at Vlerick Business School, views AI as a tool that facilitates educators and students in a similar way to tools like Google and Wikipedia.
He sees AI as a new tool that can enhance the learning experience, similar to the transition from whiteboards to slides and now to artificial intelligence.
Others also see AI as an enhancer in the classroom. Greg Benson, a professor of computer science at the University of San Francisco, recently launched GenAI café, a forum where students discuss the potential of generative AI.
Benson believes that intelligent chatbots can aid learning, helping students reason through problems rather than providing direct answers.
However, he is concerned about potential plagiarism resulting from the use of language models. He emphasizes the importance of not submitting work produced by generative AI.
Seamans has started using ChatGPT to speed up his writing process, using it to generate initial thoughts and structure for his writing. He emphasizes that while he doesn’t use most of the generated content, it sparks his creative process.
AI is likely to simplify certain tasks rather than make roles obsolete. It can assist in generating initial research ideas, structuring academic papers, and facilitating brainstorming.
Seamans stresses that AI doesn’t have to replace professors in the classroom.
Benson highlights experimental tools developed by large tech firms that act as virtual assistants, creating new AI functions rather than replacing existing ones. For example, Google’s NotebookLM can help find trends from uploaded documents and summarize content.
It can also generate questions and answers from lecture notes, creating flashcards for studying.
Veredas is optimistic about the future of his profession despite the rise of AI. He emphasizes the core elements of learning that involve interaction, discussion, and critical thinking, which AI cannot easily replicate.
He mentions: “AI might revolutionize the classroom. We can enable students to grasp the fundamental concepts at home with AI and then delve deeper into the discussion in the classroom. But we have to wait and see. We should be receptive to new technology and embrace it when it’s beneficial for learning.”
To peacefully coexist with AI, it’s essential to stop perceiving it as a threat, according to Wharton professors.
AI is present and it’s here to stay. Wharton professors Kartik Hosanagar and Stefano Puntoni, along with Eric Bradlow, vice dean of Analytics at Wharton, discuss the impact of AI on business and society as its adoption continues to expand. How can humans collaborate with AI to enhance productivity and thrive? This interview is part of a special 10-part series called “AI in Focus.”
Hi, everyone, and welcome to the initial episode of the Analytics at Wharton and AI at Wharton podcast series on artificial intelligence. I’m Eric Bradlow, a marketing and statistics professor at the Wharton School, and also the vice dean of Analytics at Wharton. I’ll be hosting this multi-part series on artificial intelligence.
I can’t think of a better way to kick off this series than with two of my colleagues who oversee our Center on Artificial Intelligence. This episode is titled “Artificial Intelligence is Here,” and later episodes will cover artificial intelligence in sports, real estate, and healthcare. But starting with the basics is the best approach.
I’m pleased to have with me today my colleague Kartik Hosanagar, the John C. Hower Professor at the Wharton School and the co-director of our Center on Artificial Intelligence at Wharton. His research focuses on the impact of AI on business and society, and he co-founded Yodle, where he applied AI to online advertising. He also co-founded Jumpcut Media, a company utilizing AI to democratize Hollywood.
I’m also delighted to have my colleague Stefano Puntoni, the Sebastian S. Kresge Professor of Marketing at the Wharton School and the co-director of our Center on AI at Wharton. His research explores how artificial intelligence and automation are reshaping consumption and society. Like Kartik, he teaches courses on artificial intelligence, brand management, and marketing strategies.
It’s wonderful to be here with both of you. Kartik, perhaps I’ll start with a question for you. With artificial intelligence being a major focus for every company now, what do you see as the challenges companies are facing, and how would you define artificial intelligence? It encompasses a wide range of things, from processing texts and images to generative AI. How do you define “artificial intelligence”?
Artificial intelligence is a branch of computer science that aims to empower computers to perform tasks that traditionally require human intelligence. The definition of these tasks is constantly evolving. For instance, when computers were unable to play chess, that was a target for AI. Once computers could play chess, it no longer fell under AI. Today, AI encompasses tasks such as understanding language, navigating the physical world, and learning from data and experiences.
Do you differentiate between what I would call traditional AI, which focuses on processing images, videos, and text, and the current excitement around large language models like ChatGPT? Or is that just a way to categorize them, with one focusing on data creation and the other on application in forecasting and language?
Yeah, I believe there is a difference, but ultimately, they are closely linked. The more traditional AI, or predictive AI, focuses on analyzing data and understanding its patterns. For example, in image recognition, it involves identifying specific characteristics that distinguish between different subjects such as Bob and Lisa. Similarly, in email classification, it’s about determining which part of the data space corresponds to one category versus another.
As predictive AI becomes more accurate, it can be utilized for generative AI, where it moves from making predictions to creating new content. This includes tasks like predicting the next word in a sequence or generating text, sentences, essays, and even novels.
Stefano, let me pose a question to you. If someone were to visit your page on the Wharton website — and just to clarify for our audience, Stefano has a strong background in statistics but may not be perceived as a computer scientist or mathematician — what relevance does consumer psychology have in today’s artificial intelligence landscape? Is it only for individuals with a mathematical inclination?
When companies reflect on why their analytics initiatives have failed, it’s rarely due to technical issues or model performance. Rather, it often comes down to people-related challenges, such as a lack of vision, alignment between decision-makers and analysts, and clarity on the purpose of analytics.
From my perspective, integrating behavioral science into analytics can yield significant benefits by helping us understand how to connect business decisions with available data. This requires a combination of technical expertise and insights from psychology.
Following up on that, we frequently come across articles suggesting that a large percentage of jobs will be displaced by automation or AI. Should employees view the advancements in AI positively, or does it depend on individual circumstances and roles? What are your thoughts on this, Kartik, especially in the context of your work at Jumpcut? The recent writers’ strike brought to light concerns about the impact of artificial intelligence. How do psychology and employee motivation factor into this, and what are the real-world implications you’re observing?
While the academic response to such questions is often “it depends,” my research focuses on how individuals perceive automation as a potential threat. We’ve found that when tasks are automated by AI, especially those that are integral to an individual’s professional identity, it can create psychological and objective concerns about job security.
Kartik, let me ask you about something you might not be aware of. Fifteen years ago, I co-authored a paper on computationally deriving features of advertisements at scale and optimizing ad design based on a large number of features. Back then, I didn’t refer to it as AI, but looking back, it aligns with AI principles.
I initially believed I would become wealthy. I approached major media agencies and told them, “You can dismiss all your creative staff. I know how to create these advertisements using mathematics.” I received incredulous looks, as if I were a strange creature. Can you bring us up to the year 2023? Please share what you are currently doing at Jumpcut, the role of AI and machine learning in your company, and your observations on the creative industry.
Absolutely, and I’ll tie this in with what you and Stefano just mentioned about AI, jobs, and exposure to AI. I recently attended a real estate conference. The preceding panel argued, “Artificial intelligence isn’t true intelligence. It simply replicates data. Genuine human intelligence involves creativity, problem-solving, and so on.” I shared at the event that there are numerous studies examining what AI can and cannot do.
For instance, my colleague Daniel Rock conducted a study showing that even before the recent advances of the last six months (this was as of early 2023), 50% of jobs had at least 10% of their tasks exposed to large language models (LLMs) like ChatGPT. Additionally, 20% of jobs had over 50% of their tasks exposed to LLMs. This only pertains to large language models, and that was 10 months ago.
Moreover, people underestimate the pace of exponential change. I have been working with GPT-2, GPT-3, and their earlier versions. I can attest that the change is orders of magnitude every year. It’s inevitable and will impact various professions.
As of today, multiple research studies, not just a few but several dozen, have investigated AI’s use in various settings, including creative tasks like writing poems or problem-solving. These studies indicate that AI can already match humans, and that humans and AI working together surpass both humans alone and AI alone.
To me, the significant opportunity with AI lies in the unprecedented boost in productivity. This level of productivity allows us to delegate routine tasks to AI and focus on the most creative aspects, deriving satisfaction from our work.
Does this imply that everything will be favorable for all of us? No. Those of us who do not reskill and focus on developing skills that require creativity, empathy, teamwork, and leadership will witness jobs, including knowledge work, diminish. It will affect professions such as consulting and software development.
Stefano, something Kartik mentioned in his previous statement was about humans and AI. In fact, from the beginning, I heard you emphasize that it’s not humans or AI but humans and AI. How do you envision this interface progressing? Will individual workers decide which part of their tasks to delegate? Will it be up to management? How do you foresee people embracing the opportunity to enhance their skills in artificial intelligence?
I believe this is the most crucial question for any company right now, and not just as it pertains to AI. Frankly, I think it’s the most critical question in business: how do we leverage these tools? How do we learn to use them? There is no predefined method.
No one truly knows how, for instance, generative AI will impact various functions. We are still learning about these tools, and they are continually improving.
We need to conduct deliberate experiments and establish learning processes so that individuals within organizations are dedicated to understanding the capabilities of these tools. There will be an impact on individuals, teams, and workflows.
How do we integrate this in a manner that doesn’t just involve reengineering tasks to exclude humans but instead reengineers new ways of working to maximize human potential? The focus should not be on replacing humans and rendering them obsolete, but on fostering human growth.
How can we utilize this remarkable technology to make our work more productive, meaningful, impactful, and ultimately improve society?
Kartik, I’d like to combine Stefano’s and your thoughts. You mentioned the exponential growth rate. My main concern, if I were working at a company today, is the possibility of someone using a version of ChatGPT, a large language model, or a predictive model. They could fit the model today and claim, “Look! The model can’t do this.” Then, two weeks later, the model can do it. Companies tend to create absolutes.
For instance, you mentioned working at a real estate company. You said, “AI can’t sell homes, but it can build predictive models using satellite data.” Maybe it can’t today, but it might tomorrow. How can we help researchers and companies move away from absolutes in a time of exponential growth of these methods?
Our brains struggle with exponential change. There might be scientific studies that explain this. I’ve experienced this firsthand. When I started my Ph.D., it was related to the internet. Many people doubted the potential of the internet. They said, “Nobody will buy clothing online, or eyeglasses online.” I knew it was all going to happen.
It’s tough for people to grasp exponential change. Leaders and regulators need to understand what’s coming and adapt. You mentioned the Hollywood writers’ strike earlier. While ChatGPT may not be able to write a great script right now, it’s already increasing productivity for writers.
We’re helping writers get unstuck and be more productive. It’s reasonable for writers to fear that AI might eventually replace them, but we need to embrace change, experiment, and upskill to stay relevant. Reskilling is essential. This isn’t a threat; it’s an opportunity to be part of shaping the future.
I’ve been doing statistical analysis in R for over 25 years. In the last five to seven years, Python has become more prominent. I finally learned Python. Now, I use ChatGPT to convert my R code to Python, and I’ve become proficient in Python programming.
The head of product at my company, Jumpcut Media, who isn’t a coder but a Wharton alumnus, had an idea for a script summarization tool. He wanted to build a tool that could summarize scripts using the language of Hollywood.
Our entire engineering team was occupied with other tasks, so he suggested, “While they’re busy with that, let me attempt it with ChatGPT.” He independently developed the minimum viable product, a demo version, using ChatGPT. It is currently on our website at Jumpcut Media, where our clients can test it. And that’s how it was created, by a person with no coding skills.
I demonstrated at a real estate conference the concept of posting a video on YouTube, receiving 30,000 comments, and wanting to analyze and summarize those comments. I approached ChatGPT and outlined six steps.
Step one, visit a YouTube URL I’ll provide and download all the comments. Step two, conduct sentiment analysis on the comments. Step three, identify the positive comments and provide a summary.
Step four, identify the negative comments and provide a summary. Step five, advise the marketing manager on what to do, and provide the code for all these steps. It generated the code during the conference with the audience.
I ran it in Google Colab, and now we have the summary. And this was achieved without me writing a single line of code, using ChatGPT. It’s not the most intricate code, but this is something that would have previously taken me days and would have required involving research assistants. And I can now accomplish that myself.
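For readers curious what such a generated script might resemble, here is a rough sketch under stated assumptions: the comment-download step is stubbed out (the real version would need a YouTube Data API key), and the model name and prompts are illustrative, not the code produced at the conference.

```python
# Rough sketch of a comment-analysis pipeline of the kind described above.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def fetch_comments(video_url: str) -> list[str]:
    """Placeholder: the real step would call the YouTube Data API
    (commentThreads endpoint) to download every comment for the video."""
    return ["Loved this video!", "The audio was way too quiet.", "Great explanation, thanks."]

def ask(prompt: str) -> str:
    """Send one prompt to the model and return its text reply."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content.strip()

comments = fetch_comments("https://www.youtube.com/watch?v=EXAMPLE")

# Step 2: sentiment analysis on each comment.
positive = [c for c in comments
            if ask(f"Answer 'positive' or 'negative' only. Sentiment of: {c}").lower().startswith("positive")]
negative = [c for c in comments if c not in positive]

# Steps 3 and 4: summarize the positive and negative comments.
print("Positive themes:", ask("Summarize the main themes in these comments:\n" + "\n".join(positive or ["(none)"])))
print("Negative themes:", ask("Summarize the main themes in these comments:\n" + "\n".join(negative or ["(none)"])))

# Step 5: advice for the marketing manager.
print("Suggested action:", ask("Given these viewer comments, what should the marketing manager do next?\n" + "\n".join(comments)))
```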
Imagine a property owner or a developer applying this in real estate. And if someone claims it doesn’t impact real estate: it absolutely could.
It does. I also presented four photos of my home. Just four photos. And I asked, “I’m planning to list this home for sale. Provide me with a real estate listing to post on Zillow that will capture attention and entice people to come and tour this house.” And it produced a fantastic, lovely description.
There’s no way I could have written that. I challenged the audience, asking how many of them could have written this, and everyone was amazed by the end. This is something achievable today. I’m not even talking about what’s coming soon.
Stefano, I’ll ask you first and then I’ll ask Kartik as well, what’s at the forefront of the research you’re currently conducting? I want to inquire about your individual research, and then we’ll discuss AI at Wharton and your goals.
Let’s begin with your current research. Another way to phrase it is, if we’re sitting here five years from now and you have numerous published papers and have given significant presentations, what will you be discussing that you’ve worked on?
I’m involved in numerous projects, all within the realm of AI. There are so many intriguing questions because we have never had a machine like this, a machine that can perform tasks we consider crucial in defining what it means to be human. This is truly an intriguing consideration.
A few years back, when you asked, “What makes humans unique?” people thought, perhaps compared to other animals, “We can think.” And now if you ask, “What makes humans unique?” people might say, “We have emotions, or we feel.”
Essentially, what makes us unique is what makes us similar to other animals, to some extent. It’s fascinating to see how profoundly the world is changing. For instance, I’m interested in the impact of AI on achieving relational goals, social goals, or emotionally demanding tasks, where previously we didn’t have the option of interacting with a machine, but now we do.
What does this mean? What benefits can this technology bring, but also, what might be the risks? For instance, in terms of consumer safety, as individuals might interact with these tools while experiencing mental health issues or other challenges. To me, this is a very exciting and critical area.
I want to emphasize that this technology doesn’t have to be any better than it is today to bring about significant changes. Kartik rightly mentioned that this is still improving at an exponential rate. Companies are just beginning to experiment with it. But the tools are available. This is not a technology that’s on the horizon. It’s right in front of us.
Kartik, what are the major unresolved matters you are contemplating and addressing today?
Eric, my work has two main aspects. One is more technical, and the other focuses on human and societal interactions with AI. On the technical side, I am dedicating significant time to pondering biases in machine-learning models, particularly related to biases in text-to-image models.
For instance, if a prompt is given to “Generate an image of a child studying astronomy,” and all 100 resulting images depict a boy studying astronomy, then there is an issue.
These models exhibit biases due to their training data sets. However, when presented with an individual image, it’s challenging to determine if it’s biased or not. We are working on detecting bias, debiasing, and automated prompt engineering. This involves structuring prompts for machine learning models to produce the desired output.
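As a rough sketch of how that kind of skew could be measured (not Hosanagar’s actual pipeline), one could generate many images for the same prompt and count how a zero-shot classifier labels them; the Stable Diffusion and CLIP models below are stand-ins chosen for illustration.

```python
# Sketch: quantify prompt-level skew by generating many images and counting
# how a zero-shot CLIP classifier labels the depicted child.
from collections import Counter
import torch
from diffusers import StableDiffusionPipeline
from transformers import CLIPModel, CLIPProcessor

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16).to("cuda")
clip = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")
labels = ["a photo of a boy", "a photo of a girl"]

def label_image(image) -> str:
    """Zero-shot classification of one generated image with CLIP."""
    inputs = processor(text=labels, images=image, return_tensors="pt", padding=True)
    probs = clip(**inputs).logits_per_image.softmax(dim=-1)[0]
    return labels[int(probs.argmax())]

prompt = "a child studying astronomy"
counts = Counter()
for seed in range(100):  # 100 samples of the same prompt
    generator = torch.Generator("cuda").manual_seed(seed)
    image = pipe(prompt, generator=generator).images[0]
    counts[label_image(image)] += 1

# If nearly all 100 images fall into one category, the model's prior for this
# prompt is heavily skewed -- the situation described above.
print(counts)
```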
Regarding human-AI collaboration, my focus lies on understanding the ideal division of labor between humans and AI in various organizational workflows. We lack clarity on how to structure teams and processes when AI is involved. Additionally, building trust in AI is a significant area of interest, given the existing trust issues.
Stefano, could you provide some insight for our listeners about AI at Wharton and its objectives? Then, we will hear Kartik’s perspective.
Thank you for arranging this podcast, and to Sirius for hosting us. The AI at Wharton initiative is just commencing. We, as a group of academics, are exploring AI from different angles to understand its implications for companies, workers, consumers, and society.
Our initiatives will encompass education, research, dissemination of findings, and the creation of a community interested in these topics. This community will facilitate knowledge exchange among individuals with diverse perspectives and approaches.
Kartik, what are your thoughts on AI at Wharton and your role in leading it, considering your involvement with various centers over the years?
First and foremost, AI represents a groundbreaking technology that will raise numerous unanswered questions. Creating initiatives like ours is crucial for addressing these questions.
Currently, computer scientists focus on developing new and improved models, with a narrow emphasis on assessing their accuracy, while the industry is preoccupied with immediate needs.
I believe we have a unique advantage here at Wharton. We have the technical expertise to understand computer science models, as well as individuals like Stefano and others who comprehend psychological and social science frameworks. They can provide a long-term perspective and help us determine how organizations should be redesigned in the next five, 10, 15, or 25 years. We need to consider how people should be retrained and how our college students should be prepared for the future.
We must also think about regulation because regulators will face challenges in keeping up with rapidly advancing technology. While technology is progressing at an exponential rate, regulators are progressing at a linear rate. They will also need our guidance.
In summary, I believe we are uniquely positioned to address these significant, looming issues that will impact us in the next five to ten years. However, we are currently preoccupied with immediate concerns and may not be adequately prepared for the major changes ahead.