There are big opportunities with generative AI (artificial intelligence). It has the potential to increase efficiencies and drive innovation across industries unlike anything we have seen before—or so we keep saying, and now we are even seeing it. It also has the potential to bring big risks: data leaks, deepfakes, and so much more that businesses need to be aware of. Another thing businesses need to be aware of: legislation related to AI is coming and, in some states, it is already here. Let’s consider a few examples today.
The ELVIS Act
In January of this year, Tennessee Governor Bill Lee announced the ELVIS (Ensuring Likeness Voice and Image Security) Act. Here’s why this is important: Tennessee’s music industry supports more than 61,617 jobs across the state, contributes $5.8 billion to the state’s GDP (gross domestic product), and fills more than 4,500 music venues. If the music industry crashes, so, too, will Tennessee’s economy.
As of January, Tennessee’s existing law protected name, image, and likeness, but it didn’t specifically address new, personalized generative AI cloning models and services that enable impersonation of a person’s voice. The ELVIS Act aims to rectify this by giving songwriters, performers, and music industry professionals protection from the misuse of AI to clone their voices. In March of this year, the ELVIS Act passed, and today it protects against unauthorized use of someone’s likeness by adding “voice” to the attributes the law covers.
AI and the Election
Oregon also recognizes some of the challenges that come along with AI and passed Senate Bill 1571 earlier this year. The legislation mandates disclosure of AI-generated content in campaign materials. Those who do not comply are subject to penalties and fines.
The objective here was to minimize the number of deepfakes ahead of an already high-profile election season and ensure there is disclosure of any ads or other campaign materials that use AI to manipulate an image, video, or audio. Other states—including Montana, California, Michigan, Minnesota, Texas, and Washington, just to name a few—have introduced similar bills to regulate the use of AI in elections.
At the Federal Level
Certainly, these are only a few examples of state legislation related to the use of AI. While there is no comprehensive AI legislation at the federal level, some guidance has been provided.
The White House Office of Science and Technology Policy has identified five principles that should guide the design, use, and deployment of automated systems to protect the American public in the age of artificial intelligence. The Blueprint for an AI Bill of Rights is a guide for a society that protects all people from these threats.
The five principles include:
Safe and effective systems: Systems should undergo pre-deployment testing, risk identification and mitigation, and ongoing monitoring to demonstrate they are safe and effective based on their intended use.
Algorithmic discrimination protections: Designers, developers, and deployers of automated systems should take proactive and continuous measures to protect individuals and communities from algorithmic discrimination and to use and design systems in an equitable way.
Data privacy: Designers, developers, and deployers of automated systems should seek your permission and respect your decisions regarding collection, use, access, transfer, and deletion of your data in appropriate ways and to the greatest extent possible; where not possible, alternative privacy by design safeguards should be used.
Notice and explanation: Automated systems should provide explanations that are technically valid, meaningful, and useful to you and to any operators or others who need to understand the system, calibrated to the level of risk based on the context.
Human alternatives, consideration, and fallback: Automated systems with an intended use within sensitive domains, including, but not limited to, criminal justice, employment, education, and health, should additionally be tailored to the purpose, provide meaningful access for oversight, include training for any people interacting with the system, and incorporate human consideration for adverse or high-risk decisions.
Of course, this is simply a framework the government suggests should be applied to all automated systems. In the absence of federal legislation on AI, we can expect more and more legislation at the state and local levels, and this trend will likely continue. Are you preparing for what comes next with AI?