
How is AI like ChatGPT helping Nevadans? And how can it hurt us?

ChatGPT can answer many medical questions correctly, but experts warn against using it on its own for medical advice.
AFP via Getty Images

From self-driving cars to virtual assistants on smartphones, artificial intelligence has become a big part of our lives.

But what happens when we combine artificial intelligence with cutting-edge chatbots like ChatGPT and ask them to write college essays?

At the same time, automation has already taken over many American jobs, and predictions suggest that by 2035, roughly 40% to 65% of service-level jobs in Southern Nevada could be automated.

And on Tuesday, Congress held hearings about the potential misuse of artificial intelligence. The CEO of OpenAI, the company behind ChatGPT, said he fears AI could be used to compromise elections.

ChatGPT is essentially a far more advanced version of Siri or Cortana: you give it a prompt or a question, and it draws on what it has learned from the internet, textbooks, studies and more to provide an answer. It's not always accurate, but it does a pretty good job nonetheless. It's like Google, but it does all the hard work for you.

The problem some educators are pointing out is that students across the country are using the software for homework help, and some even use it to write their essays. So, say goodbye to pulling an all-nighter cranking out your history essay.

Kendall Hartley, an associate professor of educational technology at the University of Nevada, Las Vegas, researches the effects of social media and smartphones on young adult learning. He admits ChatGPT has thrown a curveball at educational institutions, and that the current plagiarism detection software at UNLV can't detect plagiarism from a chatbot like ChatGPT.

"So we do have tools we've invested in at the university. In particular, iThenticate is the one we use, especially for the graduate students," said Hartley. "Anytime you work with a thesis, or they develop a thesis, they're going to have to run it through there. There's also similar tools that we use at the undergraduate level through the learning management system, and it's going to come back with a report that says, here's the level of duplication and 15% to 25% is a common one at the thesis or dissertation level, because they're using other sources there, but they're citing those sources appropriately. And you can see in the report, okay, this is legitimate. I ran similar things that ChatGPT produced through iThenticate, for example, and it comes back as 0%, I've never seen a 0% before as a report back from anything, because usually there's at least some combinations. So ChatGPT is taking stuff that's out there, but it's actually generating original content each time it comes across."

When asked if the content is truly original, Hartley said it's original at least in that form, and original relative to the sources that current plagiarism detection software mines for information. Hartley also said that OpenAI, the company behind ChatGPT, and the companies behind current plagiarism detection software are developing tools that could detect artificially generated material.

Now, something perhaps even more concerning is the fear that artificial intelligence and automation will take jobs and industries away from humans; with how quickly artificial intelligence is advancing, it's a valid concern to have.

Dr. Johannes Moenius, professor of global business and spatial economic analysis at the University of Redlands, shares that same concern. He appeared on State of Nevada six years ago to talk about a study he and a colleague released, which predicted 65% of Nevada jobs would be susceptible to automation by 2035. So, what drives that prediction for Moenius, and what would that susceptibility look like?

"If you look at robot prices, they have fallen and fallen. And still we haven't seen the level of adaptation that those large falls may have suggested, we still only see the Tipsy Robot [the robotic bartender at Planet Hollywood in Las Vegas] and a few other places," said Moenius. "But that shouldn't deceive us, because now with what I'm afraid of is the way [AI] materializes itself … it will provide very soon, the tools to even enable small businesses, not only large chains, like McDonald's, to analyze their environment, and figure out what kind of implementation measures for other types of automation they could use in their business to become more profitable."

The stereotypical idea of automation is humanoid robots doing jobs that humans would normally do, but Moenius stresses thinking a little more realistically. Artificial intelligence can give computers and machines of all sizes the ability to find more effective ways to do a task. The more advanced artificial intelligence gets, the more it will be able to learn from movement and repetition and, in turn, start suggesting more productive ways to execute a task. In other words, it might end up cutting out some middlemen.

"In essence, when you use a kitchen aid, that's already a helper in the kitchen," said Moenius. "And so once that kitchen aid gets all the ingredients or the type of machine gets all the ingredients necessary to produce whatever meal has been programmed, then yes … we won't see changes in the front end, people still like to talk to people, but most of the automation we will see will be in back offices and kitchens."

This all sounds fairly grim, but one tech company in Las Vegas sees the future of automation and artificial intelligence as a more positive one.

Lars Buttler is chair and co-founder of the AI Foundation, a local organization with both commercial and nonprofit arms dedicated to democratizing artificial intelligence software. What does that mean, though?

"Right now we build applications mostly for enterprises, cities, governments and universities. And in those applications, we take those large language models and other foundational models and add capabilities to make them much more helpful," said Buttler.

"Right now you have to be an expert in prompting if you want to use something like ChatGPT really deeply. It doesn't really remember you or your context, it doesn't really know what your problems are, and it never is proactive. It never comes to you with some good ideas. Now imagine you could take all these models and for any application pick the best and give them systematically all your content. You make them basically yours: your sidekick, your co-pilot. You can even make them look like you and talk like you, and instead of just having to type, it has your interests and goals in mind."

The AI Foundation did a similar project for UNLV's Digital President Keith Whitfield. The digital president can be accessed through UNLV's website and students can talk to the president and ask questions about resources that UNLV offers.

Could something like this get out of hand? Buttler thinks it's possible, but said that at the moment, no one on Earth knows how to make artificial intelligence conscious or able to set its own objectives. However, he added that even when AI does gain that ability, it might not mean the end of the world as we know it.

"I'm kind of a tech optimist. I think that's the best thing we want," said Buttler. "As long as artificial intelligence is not really conscious of what's going on, it could actually cause much more mischief."

Guests: Kendall Hartley, associate professor of educational technology, University of Nevada, Las Vegas; Lars Buttler, co-founder and chair, The AI Foundation; Dr. Johannes Moenius, professor of global business and spatial economic analysis, University of Redlands in California

Christopher Alvarez is a news producer and podcast audio editor at Nevada Public Radio for the State of Nevada program, and has been with them for over a year.