The state has just issued a broad executive order setting new expectations for the use and oversight of artificial intelligence. The move includes a directive for the state’s Department of Technology to develop recommendations and best practices for watermarking AI‑generated or significantly altered images and video.
Under the order, companies that want to work with the state must also demonstrate that they have safeguards against AI-related harms, such as bias, illegal content, and civil-rights violations, to qualify for contracts. Agencies will review how these technologies are governed and incorporated into services.
The order also aims to broaden the state’s use of generative AI to enhance public services, including a new AI-powered tool that helps residents access programs and benefits based on life events, like starting a business or searching for a job.
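To make the watermarking directive concrete, here is a minimal, purely illustrative sketch of one disclosure approach: a "sidecar manifest" that binds an AI-generated label to an image's content hash. This is an assumption for illustration only, not the state's recommended method; real provenance standards (such as C2PA content credentials) embed cryptographically signed metadata in the file itself, and the field names below are hypothetical.

```python
# Illustrative only: a sidecar manifest labeling an image as AI-generated.
# Field names ("content_sha256", "ai_generated", "generator") are
# hypothetical, not drawn from any standard or regulation.
import hashlib
import json

def make_disclosure_manifest(image_bytes: bytes, generator: str) -> str:
    """Return a JSON manifest binding a disclosure label to image content."""
    # Hashing the bytes ties the label to this exact content, so the
    # manifest no longer matches if the image is altered afterward.
    digest = hashlib.sha256(image_bytes).hexdigest()
    manifest = {
        "content_sha256": digest,
        "ai_generated": True,
        "generator": generator,
    }
    return json.dumps(manifest, indent=2)

# Example with placeholder image bytes:
print(make_disclosure_manifest(b"\x89PNG...", "example-model"))
```

A sidecar file is the simplest possible design; its obvious weakness is that it can be stripped from the image, which is why embedded, signed watermarks are the direction most standards efforts take.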
A broader framework for AI oversight

This development is part of a larger trend in state-level AI policy. Across the country, many states have introduced laws targeting areas such as algorithmic transparency, discrimination, and synthetic media.
Colorado’s AI Act requires risk assessments and transparency notices for high‑risk AI systems that affect consumers in areas like employment, housing, and healthcare. Illinois has passed legislation requiring companies to conduct bias audits on automated decision systems, and Texas has enacted laws addressing deepfakes, including mandates to disclose AI‑generated or manipulated content in certain contexts.
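As a toy illustration of what a bias audit like those Illinois requires might compute, the sketch below measures demographic parity difference, the gap in selection rates between groups in an automated decision system. This is an assumption about one commonly reported fairness statistic, not the metric any statute mandates, and the data format is invented for the example.

```python
# Toy illustration of one statistic bias audits often report:
# demographic parity difference (the selection-rate gap between groups).
# The (group, approved) data format is hypothetical.
def demographic_parity_gap(decisions: list[tuple[str, bool]]) -> float:
    """Return the max difference in approval rates across groups."""
    outcomes: dict[str, list[bool]] = {}
    for group, approved in decisions:
        outcomes.setdefault(group, []).append(approved)
    # Approval rate per group: approvals / total decisions for that group.
    rates = {g: sum(v) / len(v) for g, v in outcomes.items()}
    return max(rates.values()) - min(rates.values())

# Group A approves 2 of 3; group B approves 1 of 3 -> gap of 1/3.
data = [("A", True), ("A", True), ("A", False),
        ("B", True), ("B", False), ("B", False)]
print(round(demographic_parity_gap(data), 4))  # → 0.3333
```

Real audits go well beyond a single number, but a gap like this is the kind of disparity that transparency and audit mandates are designed to surface.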