Google CEO Sundar Pichai said last week that concerns about harmful applications of artificial intelligence are "very legitimate."

In a Washington Post interview, Pichai said AI tools will need ethical guardrails and that companies will have to think deeply about how the technology can be abused.

“I think tech has to realize it just can’t build it and then fix it,” said Pichai, fresh from his testimony before House lawmakers. “I think that doesn’t work.”

Tech giants have to ensure artificial intelligence with “agency of its own” doesn't harm humankind, Pichai noted.


The tech executive, who runs a company that uses AI in many of its products, including its powerful search engine, said he is optimistic about the technology's long-term benefits. But his assessment of AI's potential downsides parallels that of critics who have warned about its misuse and abuse.

Advocates and technologists have been warning about the power of AI to embolden authoritarian regimes, empower mass surveillance and spread misinformation, among other possibilities.

SpaceX and Tesla founder Elon Musk once said that AI could prove to be “far more dangerous than nukes.”

Google's work on Project Maven, a military AI program, sparked a protest from its employees and led the tech giant to announce that it would not continue the work after the contract expires in 2019.


Pichai said in the interview that governments worldwide are still trying to grasp AI’s effects and the potential need for regulation.

“Sometimes I worry people underestimate the scale of change that’s possible in the mid- to long-term, and I think the questions are actually pretty complex,” he told the Post. Other tech companies, such as Microsoft, have embraced regulation of AI — both by the companies that create the technology and the governments that oversee its use.

Google CEO Sundar Pichai appears before the House Judiciary Committee to be questioned about the internet giant's privacy, security and data collection, on Capitol Hill in Washington, Tuesday, Dec. 11, 2018. (AP)

But AI, if handled properly, could have “tremendous benefits,” Pichai explained, including helping doctors detect eye disease and other ailments through automated scans of health data.

“Regulating a technology in its early days is hard, but I do think companies should self-regulate,” he told the newspaper. “This is why we've tried hard to articulate a set of AI principles. We may not have gotten everything right, but we thought it was important to start a conversation.”

In January, Pichai, who joined Google in 2004 and became chief executive 11 years later, called AI “one of the most important things that humanity is working on” and said it could prove to be “more profound” for human society than “electricity or fire.”

However, the race to build machines that can operate on their own has rekindled fears that Silicon Valley’s culture of disruption could result in technology that harms people and eliminates jobs.