Artificial intelligence could increase foreign espionage, displace jobs without proper guardrails, experts say

One expert surmised that China is already training young kids on machine learning to defeat the U.S.

Quickly evolving artificial intelligence technologies like ChatGPT could increase cyberattacks from foreign countries and displace workers in the U.S. labor force, highlighting the need for new skills and training among American students and workers, according to experts.

Netra AI CEO Don Horan noted that artificial intelligence could be used to quickly generate malicious code by stripping the algorithms of their intended controls and producing content outside their authorized purview.

He said that foreign actors can use tools like ChatGPT to improve espionage and accelerate elicitation, a process in which a perpetrator gets to know a subject very well by gathering information and building "the profile of a human being."

This information is then used to force people to comply with their intended mission.

AI EXPERTS, PROFESSORS REVEAL HOW CHATGPT WILL RADICALLY ALTER THE CLASSROOM: ‘AGE OF THE CREATOR’

The Welcome to ChatGPT lettering of the US company OpenAI can be seen on a computer screen. (Silas Stein/picture alliance via Getty Images)

"Spies use it all the time. You meet a new person, fall in love and then find out they're a Russian spy or a Chinese spy. We've seen it in the news for years," Horan said.

Artificial intelligence can also be used to custom-tailor phishing techniques, a scam where someone attempts to steal valuable information by sending electronic messages to unsuspecting users.

For example, if a bad actor knew you had a dog, they could pretend to be a sibling and share a video of a cute Goldendoodle. You click on the video, but what you don't realize is that code on the back end is now giving someone access to your device and all your personal information.

These scams already exist today, with state infrastructure and civilians likely getting attacked millions of times daily. But artificial intelligence allows this to be done at scale, vastly increasing the number of attacks sent out. Horan said that the risk is potentially "astronomical" and will likely cause cybersecurity budgets to balloon.

"It's definitely possible," Horan said. "I'm sure foreign governments are already using stuff like this to do those style of attacks on our citizens."

Horan, who previously worked as the acting executive deputy CIO for the State of New York, added that AI can also be used to orchestrate man-in-the-middle attacks, wherein a bad actor positions themselves between a user and an application to eavesdrop on or impersonate one of the parties.

It can also be employed to launch denial-of-service attacks, a form of cyberwarfare intended to jam websites and make them inaccessible to users.

VOICE ACTORS WARN ARTIFICIAL INTELLIGENCE COULD REPLACE THEM, CUT INDUSTRY JOBS AND PAY

A ChatGPT prompt is shown on a device near a public school in Brooklyn, New York, Thursday, Jan. 5, 2023. New York City school officials this week started blocking the impressive but controversial writing tool that can generate paragraphs of human-like text. (AP Photo/Peter Morgan)

Horan said these types of attacks were frequent during the height of the COVID-19 pandemic. He experienced routine instances where foreign actors hit his site to bar people from services they needed, like unemployment benefits.

Rayid Ghani, a professor of AI and an expert in ethics, fairness, equity, and AI regulation at Carnegie Mellon University's Heinz College, said there are numerous ethical implications regarding this evolving technology.

Ruminating on fairness and equity, Ghani said artificial intelligence-powered facial recognition software could have biases when identifying the faces of different races and genders.

AI could also introduce issues of fairness and equity when it comes to the allocation of health care resources or hiring screenings.

Ghani said it is essential to highlight how these concerns and issues are not exclusive to AI and already exist within today's human-led processes. However, the problem could be exacerbated if AI is allowed to operate within a large swathe of different industries. 

He drew an analogy between AI and the judges working in the U.S. today. The country likely has tens of thousands of judges making countless small and large decisions. Some of them, perhaps many, are biased, but in different ways across the political and ideological spectrum.

"The risk with AI is if you have three such AI systems that will help make all these decisions. So, the risk gets consolidated. If those three are bad, we're screwed," Ghani said.

He stressed that while individual human decisions carry lower risk, that doesn't necessarily mean they produce better outcomes.

CHATGPT LEADS LAWMAKERS TO CALL FOR REGULATING ARTIFICIAL INTELLIGENCE

ChatGPT, one type of generative AI, has recently taken the world by storm. (iStock)

Ghani advocated for transparency within the systems to curtail these issues before they arise. He noted that many of these tools could better explain why they came to a specific conclusion or output.

"Humans are not necessarily transparent. We make decisions and then we posthoc justify those decisions. Lots of these systems we cannot understand or describe how they work," he said.

Ghani also questioned whether AI could be relied on as things rapidly change. For example, are these systems adaptable to a world-changing event like an international pandemic, or will their ability to update themselves lead to more problems in times of crisis?

According to Ghani, these technologies also raise the question of accountability. Who is responsible when someone does something terrible based on information they gleaned from an AI? Is it the person who took action, the AI, or the developer of the AI?

When asked about a potential impact on the current labor force, Ghani said it was a valid concern, noting that AI changed the job market and will continue to do so.

"It does make processes more efficient, which means people are going to lose jobs and yes, it will create new jobs, but those jobs are not at the same scale as the jobs lost," he said.

For example, if an AI can write a first pass of a document, a company can now put out those documents in two days, whereas before it may have taken a week.

"You're either going to write more things or you're going to cut down on the number of people," Ghani said.

AL GORE EXPLAINS GLOBAL AI PROGRAM THAT IS SPYING ON THOUSANDS OF FACILITIES TO MONITOR EMISSIONS

He added that the people new tools displace, or are likely to displace, are often different from those who will take over the new jobs, posing a fundamental ethical issue.

From a policy perspective, he said, the U.S. should figure out how to account for that and give people the opportunity to train and acquire the tools and skills needed for the new jobs that displaced their old ones.

"If we value in our society that we don't want those people to people left behind and lose their jobs, how do we augment them? Do we create new scaling programs specifically targeting people that will lose their jobs preemptively or create other social programs," Ghani said.  

Horan did not go as far as Ghani's prediction about workforce changes but did predict a definitive shift wherein more people will be employed in the technology and machine learning sectors.

He also claimed that the underlying issue is a lack of math, science and technology skills among young school-aged children. To remedy this, Horan said the U.S. school system should focus on these subjects in students' early years and leave other topics for later in their educational development.

"I bet you countries like China—their kids are sitting there doing machine learning, their kids are sitting there doing annotation and learning this technology at a very young age to defeat the United States."

Despite concerns, the experts Fox News Digital spoke with said AI offers enormous benefits to the U.S. so long as guardrails are put in place and citizens are appropriately trained to take advantage of these innovations.

Nick Mattei, a computer science and AI expert at Tulane University, specifically tried to quell concerns about AI "destroying the world," noting that new technologies have always prompted unease as they came into the mainstream.

CHATGPT AI ACCUSED OF LIBERAL BIAS AFTER REFUSING TO WRITE HUNTER BIDEN NEW YORK POST COVERAGE

OpenAI's DALL-E 2 seen on a mobile device, with an AI brain on screen, on Jan. 22, 2023, in Brussels, Belgium. (Photo by Jonathan Raa/NurPhoto via Getty Images)

He recalled that the invention of cars prompted some to believe quick and easy transportation would make people lazy and convince them not to go to work. He also cited stories about how people thought stop signs would put police out of business because they would not need to control traffic in the intersections.

"It is not fundamentally reconfiguring society, but it is challenging us to think about the systems we are already a part of," Mattei said.

Speaking on the impact of AI on jobs, Mattei said AI would simultaneously remove some old jobs but also bring forth the need for new ones.

He recalled a job right out of college where he translated code from one programming language to another. Now, programs like ChatGPT and GitHub's Copilot can do that task automatically, making his old job obsolete.

"This thing about technology destroying or changing work, I mean, it's true. That is why we often try to work with this technology to make things more efficient or change the way that work is done," he said.


But Mattei also predicted new jobs as AI alters how numerous tasks are done, like how teachers grade papers, how content and advertisements are generated for a website, how special effects for movies and television are created and how people build out virtual models.

"I don't know how it's going to change things. But it is going to," Mattei said. 
