A California appellate court recently affirmed a $4 million jury verdict in favor of a police captain after a sexually explicit, AI-generated image resembling her was circulated among her colleagues. In Washington state, a trooper filed suit alleging a supervisor used AI to create and distribute a deepfake video depicting him kissing a co-worker. Both cases made headlines, and both were filed under workplace law.
But Bradford Kelley, a shareholder at Littler Mendelson who focuses on AI and employment law, describes these cases in a recent brief. He says HR leaders risk missing the bigger picture if they treat this as a cybersecurity issue and move on.
Deepfakes aren’t just a cybersecurity threat
“It’s not just deepfakes,” Kelley told HR Executive in an interview. “If someone uses a generative AI tool to generate a song that shows they’re romantically interested in a colleague, that’s not necessarily a deepfake issue, but it’s definitely an issue where AI could be weaponized.”
The wave of AI policies HR teams drafted in recent years was largely focused on a different problem set, including protecting confidential data, managing IP risk and ensuring accuracy in AI-assisted work. Many may not have addressed how to handle an employee using a readily available AI tool to harass, humiliate or intimidate a co-worker.
The distinction matters because the barrier to entry for creating AI material is now essentially zero. Producing a harassing song, a romantic story involving a real colleague, a fabricated conversation or a mocking image no longer requires technical skill or significant effort. This could alter the risk calculus for HR leaders in ways that haven’t been widely discussed.
EEOC and regulatory risks
As deepfake technology becomes increasingly accessible, workplace risks also ramp up. “The potential consequences span a wide array of legal domains, including employment discrimination, privacy law violations, intentional infliction of emotional distress and even criminal liability,” according to Littler’s brief.

The U.S. Equal Employment Opportunity Commission has already moved to address AI-generated harassment explicitly. Its enforcement guidance on workplace harassment identifies the sharing of “AI-generated and deepfake images and videos” as examples of conduct that can constitute unlawful harassment based on protected characteristics.
And the legal exposure extends beyond sexual content. Attorneys at Littler note that AI tools can be used to generate manipulated images targeting an employee’s race, disability, religion or national origin. That’s a Title VII problem, a potential Americans with Disabilities Act problem, and a hostile work environment claim regardless of whether anyone called it a deepfake.
New legislation is also moving quickly. The federal TAKE IT DOWN Act and Florida’s Brooke’s Law both mandate the removal of nonconsensual intimate AI-generated content within 48 hours, signaling that the legislative environment around this issue is tightening fast.
Beyond policy gaps, HR leaders need to think about what happens when a complaint lands on their desk. The standard investigation playbook was built for a world where authorship was relatively straightforward. AI complicates that. When a harasser can blame AI, HR now faces attribution questions that existing frameworks weren’t designed to handle.
Advice for HR leaders
Kelley and his colleagues at Littler recommend that employers begin treating AI-generated content with the same rigor as physical evidence. That’s a major shift from how most HR investigations operate today.
Update the policy language
Existing anti-harassment policies should explicitly prohibit the creation or distribution of AI-generated content that demeans or harasses employees based on protected characteristics. The language should be specific enough that employees cannot reasonably claim ambiguity.
Retool training
Standard harassment training doesn’t address this issue. HR leaders should consider adding concrete examples of AI-facilitated harassment (the romantic song, the fabricated conversation, the altered image) so employees understand that using an AI tool isn’t an excuse.
Prepare the investigation infrastructure
HR teams and their legal counsel should think through now, before a complaint arrives, how they will handle digital evidence, assess credibility and document findings in cases where AI is involved.