Why a Federal Privacy Law is Necessary with the Advancement of AI

Senate Committee on Commerce, Science, and Transportation Chair Maria Cantwell (D-WA) led a committee hearing entitled “The Need to Protect Americans’ Privacy and the AI Accelerant” on July 11, 2024. Cantwell highlighted the importance of federal privacy legislation in countering the misuse of personal data by AI.

In April, House Energy & Commerce Committee Chair Cathy McMorris Rodgers (R-WA) and Cantwell published a draft of the bipartisan, bicameral American Privacy Rights Act (APRA), which aims to establish federal privacy regulations to replace the current patchwork of state legislation. The APRA succeeded the American Data Privacy and Protection Act (ADPPA), which initially gained momentum but ultimately stalled due to insufficient support. The APRA addresses some of the problems that led to the ADPPA’s failure and was recently revised to win broader support. A revised APRA draft was approved by the U.S. House Committee on Energy and Commerce Subcommittee on Innovation, Data, and Commerce on May 23, 2024, ahead of a planned House Energy and Commerce Committee markup.

The revised draft removed the sections on civil rights protections; added new sections on privacy by design, data security for covered minors, and the Children’s Online Privacy Protection Act; expanded the permitted purposes for public research; imposed additional responsibilities on data brokers; and allowed individuals to request that consequential decisions be made by a human rather than an algorithm. The revised draft drew substantial backlash from civil rights and privacy organizations and strong resistance from GOP leaders. As a result, the markup was canceled, and it is unclear when it will be rescheduled.

More than 140 countries have enacted comprehensive national privacy legislation, but in the United States individual states pass their own laws, and the current patchwork of state legislation has created regulatory inconsistency and made compliance difficult for companies. With the rapid development of AI technologies, federal data privacy protections such as HIPAA are more important than ever.

At present, no legislation stops U.S. businesses from training their large language models on personal information without notifying consumers, there are no limits on the use of algorithms to make decisions about housing, credit, and employment, decisions that can have significant consequences for consumers, and personal information can be bought and sold without consumer consent. The lack of regulation has allowed U.S. businesses to accelerate the development and use of AI technologies, but at the cost of privacy. Sen. Cantwell said at the hearing that Americans’ privacy is under attack, with connected devices and AI being used for surveillance and online tracking.

Experts at the hearing described the serious effects that advancing AI systems have on data privacy. AI systems enable comprehensive consumer profiling and internet surveillance and allow fraud and deepfakes to be carried out with minimal cost and human involvement. Dr. Ryan Calo, a professor at the University of Washington School of Law and Co-Director of the university’s Tech Policy Lab, cautioned that AI technologies, which are designed to identify patterns in large data sets, allow companies to infer sensitive information about people from relatively mundane data. For example, AI can infer details about mental health and pregnancy from relatively non-sensitive data, creating a serious gap in privacy protection. He also cautioned that the sensitive information inferred by AI systems is being used against consumers, who do not know what is happening behind the scenes, and that the problem will only worsen.

Amba Kak, Co-Executive Director of the AI Now Institute, said the trajectory of AI is at a critical inflection point. Without regulatory intervention, extractive, intrusive, and often harmful data practices and business models will be replicated. Federal data privacy legislation, particularly legislation with strong data minimization requirements, could help end the culture of impunity and unaccountability that is harming consumers. With strong federal data privacy protections, companies would be expected to assess whether the benefits of using new AI features outweigh the potential for harm.

About Thomas Brown
Thomas Brown worked as a reporter for several years at ComplianceHome. Thomas is a seasoned journalist with several years’ experience in the healthcare sector and has contributed to healthcare and information technology news publications. Thomas has a particular interest in the application of healthcare information technology to better serve the interests of patients, including areas such as data protection and innovations such as telehealth. Follow Thomas on X https://x.com/Thomas7Brown