The Attorney General of Massachusetts yesterday released guidance on the use of artificial intelligence and the obligations companies have under state consumer protection and data privacy laws to ensure the technology does not take advantage of or otherwise deceive consumers.
The guidance isn't necessarily anything new for companies in the accounts receivable management industry, but it does reinforce that state and federal regulators are looking closely at how artificial intelligence tools are being used, and that it's important for companies to know exactly what their AI-backed tools are doing when interacting with consumers.
The advisory listed several acts or practices that may be considered unfair or deceptive under the state's Consumer Protection Act, including:
- Falsely advertising the quality, value, or usability of AI systems
- Supplying an AI system that is defective, unusable, or impractical for the purpose advertised
- Misrepresenting the reliability, manner of performance, safety, or condition of an AI system
- Offering for sale or use an AI system in breach of warranty, in that the system is not fit for the ordinary purposes for which such systems are used, or is unfit for the specific purpose for which it is sold where the supplier knows of that purpose
- Misrepresenting audio or video content of a person for the purpose of deceiving another into engaging in a business transaction or supplying personal information, as if to a trusted business partner, as in the case of deepfakes, voice cloning, or chatbots used to engage in fraud
- Failing to comply with Massachusetts statutes, rules, regulations, or laws meant for the protection of the public's health, safety, or welfare
Companies were also warned that the state's anti-discrimination laws prohibit the use of technology that discriminates against individuals based on a legally protected characteristic.