
Artificial Intelligence and Higher Ed

April 25, 2023

Exploring the benefits and potential disruptions of generative AI and its place in and out of the classroom. 

Seattle University President Eduardo Peñalver and College of Science and Engineering Dean Amit Shukla, PhD, penned an opinion piece for the Puget Sound Business Journal weighing the impacts and implications of generative AI in higher education. 

Here is the article as it appears in the publication: 

Opinion: Generative AI is a powerful tool that requires a human touch

Generative artificial intelligence (AI) is at once intriguing, exciting and, yes, a little disturbing.

For those of us in higher education, these technologies have apparent potential to disrupt traditional teaching and learning models. There is well-founded concern about generative AI’s implications for academic integrity, along with a recognition that these new technologies can enhance student learning and experience.

We are always looking at ways to help students develop their skills in critical thinking, problem solving, communication, leadership and teamwork so they can continue to shape the world. Far from rendering these sorts of capabilities superfluous, emergent AI technologies only underscore their importance.

The world faces numerous grand challenges around sustainability, public health, access to clean water, energy, food security and many others. Successfully confronting these challenges requires an education system deeply rooted in the recognition that we all have a responsibility to make the world a better place. We need to educate future leaders who approach these challenges with morality and ethics at the heart of any solutions.

As a university in the Jesuit tradition, we believe that effective learning is always situated in a specific context — rooted in previous experience and dependent upon reflection about those experiences. Education becomes most meaningful when it is put into action and reinforced by further reflection. Repeating this cycle over and over again is how transformative learning happens. It is remarkable that some of these same traits of the Jesuit educational model are shared by the reinforcement learning methods used for artificial intelligence.

Early reviews of ChatGPT, an artificial intelligence chatbot, were giddy about its astonishing capabilities. Users regaled us with computer-generated stories about lost socks written in the style of the Declaration of Independence or about removing a peanut butter and jelly sandwich from a VCR in the form of a Biblical verse.

The power of this technology is genuinely impressive, and its ability to mimic human language across a broad range of domains is unprecedented. But the technology’s basic architecture is untethered to actual meaning. Additionally, these models can be biased by their training data, and they can be sensitive both to paraphrasing and to the need to guess user intent. The power of reinforcement learning is therefore also the source of its greatest weakness.

Although AI models are constantly taking in new information, that information takes the form of new symbolic data without any context. They have no experience (or even conception) of reality. Their sole reality is a world of perceived regularities among symbolic representations and, as a result, they have no way to conceive of concepts like truth and accuracy.

Recent reports have unearthed troubling tendencies. In an essay for Scientific American, NYU psychologist Gary Marcus observed that ChatGPT was prone to “hallucinations,” made-up facts that it would nonetheless assert with great confidence.

One law professor asked ChatGPT to summarize some leading decisions of the National Labor Relations Board, and it conjured fictitious cases out of thin air.

In another case, ChatGPT asserted that former Vice President Walter Mondale challenged his own president, Jimmy Carter, for the Democratic nomination in the 1980 election. (For those not alive in 1980, this did not happen, and such assertions will not help students learn history or U.S. electoral politics.)

Closer to home, in an essay submitted for one of our classes at Seattle University, ChatGPT described a 2005 Supreme Court case as the cause of another case that had occurred several decades earlier.

On the other hand, many educators are effectively using these tools to supplement and enhance student learning and mastery of concepts from coding to rhetoric.

Generative AI is no replacement for human intelligence. This recent technology is based on a system of machine learning known as Reinforcement Learning from Human Feedback (RLHF). Machine learning does not yet generate what we might call “understanding” or “comprehension.”

These RLHF models combine massive quantities of training data, a reward model for reinforcement and an optimization algorithm for fine-tuning the language model. Their use of language emerges from deep statistical analysis of those data sets, which they use to “predict” the most probable sequence of words in response to the prompt they receive.
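To see what “prediction without meaning” looks like, consider a minimal sketch in Python. It is illustrative only, and ours rather than anything ChatGPT actually runs: in place of the deep statistical machinery described above, it uses simple word-pair counts over a tiny handmade corpus, but it makes the same point. The prediction is pure statistics over symbols, with no concept of truth.

from collections import Counter, defaultdict

# Toy stand-in for a language model (illustrative, not ChatGPT's
# architecture): count which word follows which in a tiny corpus,
# then "predict" by choosing the statistically most common successor.
corpus = (
    "the model predicts the next word "
    "the model has no concept of truth "
    "the next word is chosen by probability"
).split()

following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1  # tally each observed word pair

def predict(word):
    # Return the most probable next word observed in the corpus.
    candidates = following.get(word)
    return candidates.most_common(1)[0][0] if candidates else "<unknown>"

# Generate a short sequence starting from the prompt word "the".
word = "the"
sequence = [word]
for _ in range(5):
    word = predict(word)
    sequence.append(word)
print(" ".join(sequence))  # e.g. "the model predicts the model predicts"

The sketch produces fluent-looking fragments because the statistics of its corpus are fluent, not because it knows anything. Scale that idea up by many orders of magnitude, add a reward model and fine-tuning, and the same underlying limitation persists.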

Clearly, there are limits to this generative technology, and we must be mindful of them.

What ChatGPT and other AI engines based on this technology require is the guidance of educated human beings rooted in the reality and constraints of the world. In an increasingly complex and technologically driven world, the challenges we face are inherently multidisciplinary. They require us to incorporate context and perspectives, learn from our experiences, take ethical actions and evaluate and reflect with empathy to create a more just and humane world. They require leaders to be innovative, inclusive and committed to the truth.

As we continue to build and improve these tools, we must recognize that they will continue to reflect the limitations of the human beings who have created them, as well as the limitations intrinsic to their architecture. Even while they reduce the challenges of certain kinds of work, they generate the need for new kinds of work and reflection.

As these models proliferate and continue to grow in capability, it will become the task of institutions like ours to train future leaders who can understand and manage them: leaders who can develop, implement and oversee policies for responsible use, grounded in ethics and morality and in service of humanity.

Artificial intelligence tools are designed by human beings and use learning models trained by the data we provide. It is therefore our responsibility to ensure that AI’s use of those inputs contributes to the betterment of the world. It is our responsibility to question the results AI generates and, applying our ethically informed judgment, to correct its biases and inaccuracies. Doing this will continue to require substantial human input, attention and care.

The future demands leaders who are innovative and creative, who can understand and effectively wield the new tools that generative AI is making available. Rather than seeking to suppress or hide from these technologies, higher education needs to respond to them collaboratively so we can help our students use them to augment their own capabilities and enhance their learning.

Finally, we feel it necessary to make clear that this commentary was not written by artificial intelligence. Instead, it was composed by two higher education leaders who are thinking about this subject a lot these days.

We are confident that, no matter what the future of these technologies entails, there will always be a need for thoughtful reflections produced by real people. If higher education responds to emergent technologies in a wise and thoughtful way, it can and will continue to be at the forefront of forming such human beings.

Save the Date: Seattle University will host an “Ethics and Technology” conference in late June, bringing together great minds in science, tech, ethics and religion, including academic, business and nonprofit leaders.