Artificial intelligence — bane or boon to humanity?

Thursday, February 8, 2024

University of Lethbridge professor Dr. Sidney Shapiro weighs in on the possibilities of AI 

Artificial intelligence (AI) is certainly a hot topic — some people tout it as the next big thing for increasing productivity and reducing costs, while others worry that robots will replace workers. Every day brings a new headline, and the average person may be left wondering whether AI is a good or bad thing.

It’s both, says Dr. Sidney Shapiro, a Dhillon School of Business assistant professor with expertise in data analysis and AI.

“Because there are a lot of unknowns, people are very worried about what it can do in the future, but they’re not looking at what it can do right now,” says Shapiro.

Right now, Shapiro says AI is in the midst of a big transition. Companies are looking at how to innovate and deliver more value for shareholders, that is, make more money by automating everything. But that can backfire, as has happened with self-checkouts.

Overall, AI is just a tool with benefits and drawbacks, and the legal system hasn’t caught up with its implications. Is AI poised to take over the world? Shapiro says that’s not likely to happen anytime soon. A house builder, for example, may find some AI tools helpful with certain aspects of the work, but humans are still needed to build the home.

“Until computers get much more powerful, it’s going to be difficult to achieve the vision people have of AI as something that completely transforms our lives,” Shapiro says. “The reality is that there’s a lot of hype in AI right now. And that hype overestimates what you can do with AI. It’s a useful tool, but it doesn’t replace what we can do as people, which is come up with original ideas.”

He says AI has been around for more than 40 years in various forms, and it has typically been used as a sorting tool. Analyzing data by looking for patterns in numbers can help businesses better target their advertising. For example, people in a certain demographic, like young parents, will likely be more receptive to ads about diapers.
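
As a rough illustration of that kind of sorting, here is a short Python sketch (an invented example, not Shapiro’s work) that ranks demographic segments by their past click-through rate on an ad, so the most receptive group, such as young parents for a diaper ad, is targeted first. All names and figures below are made up.

    # Hypothetical sketch: rank demographic segments by how often they
    # clicked a given ad in the past, the pattern-in-numbers sorting
    # described above. All figures are invented.
    segments = {
        "young parents": (10_000, 420),  # (impressions, clicks)
        "students":      (10_000, 95),
        "retirees":      (10_000, 60),
    }

    # Sort segments by click-through rate, most receptive first.
    ranked = sorted(
        segments.items(),
        key=lambda item: item[1][1] / item[1][0],
        reverse=True,
    )

    for name, (impressions, clicks) in ranked:
        print(f"{name}: {clicks / impressions:.1%} click-through")
    # Young parents rank first, so the diaper ad is aimed at them.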

What’s happened with AI more recently is the building of large language models (LLMs) like OpenAI’s ChatGPT. By learning statistical patterns in words, LLMs can predict the next word in a sentence and, word by word, generate the next paragraph of an essay.
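
To make the next-word idea concrete, here is a toy Python sketch (my illustration, not how ChatGPT is actually built): it counts which word follows which in a tiny sample text, then always picks the most frequent continuation. A real LLM learns these patterns as neural-network weights over vast amounts of text rather than keeping a lookup table, but the underlying predict-the-next-word task is the same.

    # Toy bigram model: predict the next word from counted patterns.
    # Not a real LLM; just an illustration of next-word prediction.
    from collections import Counter, defaultdict

    corpus = "the cat sat on the mat and the cat chased the dog".split()

    # Count which word follows each word in the sample text.
    following = defaultdict(Counter)
    for word, nxt in zip(corpus, corpus[1:]):
        following[word][nxt] += 1

    def next_word(word):
        """Return the continuation seen most often after `word`."""
        return following[word].most_common(1)[0][0]

    print(next_word("the"))  # -> "cat" ("the cat" appears twice here)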

“We’re looking at the underlying patterns and a great example of this is a resumé,” says Shapiro. “There are zillions of resumés online. An LLM can generate new information based on all the different possibilities it has in its database.”

While AI may have a large database to draw from, the data is limited to information that’s already known.

“What we do at a university is usually try to find new knowledge that doesn’t already exist and find new connections between data that nobody has thought of before,” he says. “Although it might be possible to calculate all the possibilities using AI, AI is generally not very good at coming up with new things it doesn’t already know about.”

Another issue with AI is that it consumes a huge amount of energy, so it’s not very sustainable. Every time a new question is asked, an LLM works through a large number of possibilities to pick the answer that’s most likely. If you ask ChatGPT to write a poem, it will generate one. However, Shapiro says it’s not possible to know why that particular poem was the one generated.

“The philosophical question is ‘Could you take every great novel, remix them, press a button and another great novel, a new one, comes out?’” Shapiro says. “Theoretically, it’s possible but that hasn’t happened yet. So, you have to ask what makes a novel great and not just a collection of words. If you’re using something like ChatGPT to write an essay, it can regurgitate knowledge but not very well and not within the context of what you know.”

One of the controversies surrounding AI is where the data to train LLMs came from in the first place — the internet, newspapers or entire books? In December, a CBC News investigation found that at least 2,500 copyrighted books by more than 1,200 Canadian authors were part of a now-defunct dataset used to train AI. Authors have also launched lawsuits against OpenAI and Microsoft, alleging their work was used to train AI systems. The training of LLMs requires large amounts of content, so data — whether from the internet or social media — is like the new oil, Shapiro says.

While ChatGPT can help generate ideas for a student essay, educators are concerned about its effects on student learning and its potential for cheating. Shapiro says some professors have changed their assignments in response. They may ask students to write an essay themselves, then generate a version with ChatGPT and critique the two.

“We’ve gotten into a pattern of having students write essays and we’ve gotten away from oral exams and asking students to do live presentations,” he says. “Whether we are using AI in our classes or not, students will be using AI in their jobs when they graduate. The question is how we prepare students for the future so they understand the tools and can leverage them in a way that works.”

Anyone interested in speaking with Shapiro about AI can reach him at sidney.shapiro@uleth.ca.


—30—

Contact:

Caroline Zentner, public affairs advisor

University of Lethbridge

403-394-3975 or 403-795-5403 (cell)

caroline.zentner@uleth.ca

Our University’s Blackfoot name is Iniskim, meaning Sacred Buffalo Stone. The University is located in traditional Blackfoot Confederacy territory. We honour the Blackfoot people and their traditional ways of knowing in caring for this land, as well as all Indigenous Peoples who have helped shape and continue to strengthen our University community.