Kamala Harris can't be trusted with AI regulation

AI regulation needs to be handled by a panel of experts, not Kamala Harris

Recently, the White House decided that appointing an unqualified, politicized leader is perfect for tackling the complex issue of AI regulation. Kamala Harris, who has now become the AI czar, will likely lead America into a very gloomy future. The nation must correct this blunder before it’s too late.  

We can only solve a problem by asking the right questions, and Harris and a polarized Congress are clearly unable to do so. The United States must replace her with an unbiased committee of experts who can develop and protect effective AI regulations, shielding decisions moving forward from the current toxic partisan environment.
 
Unless you’ve been living under a rock, you have seen every newsfeed and broadcast explain how important it is for us to control AI. So you would assume that the White House would adopt an all-hands-on-deck approach to preventing the downfall of humanity.


Instead, they’ve been asking the public about accountability measures, funneling $140 million into research, development and budget proposals, and making symbolic appointments tainted by incompetence and political motives.

Vice President Kamala Harris, who failed as border czar, now also gets to be AI czar.

Regardless of whether the vice president is qualified for her current job, she was never a good choice for AI czar. Her knowledge of this expansive, almost uncontrollable technology is minimal. Former and current presidential staff call her a bully and report that the White House has become an unhealthy environment with her in it. We also know she failed to improve the southern border crisis.

Harris is not only an uneducated regulator of AI but also the wrong leader to drive results in a high-stakes game. In parallel, an unbiased look at the recent congressional hearings with the CEO of OpenAI (creator of ChatGPT) and the CEO of TikTok reveals how little our representatives understand about what is at stake. The result is a no-win situation among decision-makers in both the executive and legislative branches.
 
Congress and Harris, fueled by their political agendas and handicapped by their limited understanding of AI implications that go far beyond privacy protection, will have little chance of formulating a path forward.

The only way to find the correct solution is through an unbiased committee of experts spanning legal, social, economic and technical disciplines. This group should fully explore and grasp how AI will affect our national and personal security, as well as the unprecedented economic opportunities it creates.

They must explore the real dangers of AI, unlike the newly released AI Risk Management Framework and the Blueprint for an AI Bill of Rights, which are fixated on the issues of privacy and discrimination.
 
An unbiased and apolitical leader must be appointed, Warren Commission-style, to head an independent committee of experts with a broad understanding of AI applications and their social and economic integration, a group that can help us develop effective policies. That committee must not be made up of Microsoft, Google, OpenAI and the other usual suspects with significant monetary interests.
 
AI regulation is a wide-reaching issue that is not contained by national borders. Mistakes made in the U.S. or any other country will affect the greater population, which is why we need a strong group of critical thinkers. The wise words of business educator Marshall Goldsmith are extremely applicable to our current predicament: what got us here won’t get us there. Selfish concerns and a fixation on privacy and budgets won’t produce streamlined AI regulation.
 
Political maneuvering and on-air showcasing for public appeasement, whether Biden and Harris meeting with Big Tech CEOs or Altman testifying before Congress, do not reduce the possibility of AI destroying the world as we know it.


This is why we need an independent committee to balance the importance of personal privacy with the need to identify dangerous actors. We need a policing system for classifying and defining suspicious signals. To take down malicious actors, the public and AI providers must also surrender a sliver of their privacy. If not, this exciting technology could be repurposed as a weapon of mass destruction.

 
Let’s stop the political nonsense and the showcasing in pointless meetings. Let’s not put our national security and humanity at risk by placing politics over logic. Speeches are nice, and it is true that companies have an ethical, legal and moral responsibility to ensure that their AI is safe.

It is also true that harmful technology can come from companies with the best intentions. However, the implications of AI are very different from previously faced global threats like nuclear annihilation. Former Secretary of State Henry Kissinger believes that U.S.-China tensions mimic Cold War times, but the potential for destruction has risen drastically.


Foreign foes can leverage AI to develop weapons deadlier than previous nuclear threats. The legislation and policies needed now cannot be limited to controlling countries or big companies; the threat can come from a few smart people with computers anywhere in the world.

Legislation and policies have to include social change as well as serious penalties. To adequately protect the world, regulations must be divorced from political beliefs, and we must move past our comfort zone for the benefit of humanity. 

CLICK HERE TO READ MORE FROM SID MOHASSEB
