From "How to Use GenAI" to "How We Train for It": What Georgia's New AI Training Tells Local Leaders
A look at several states' push to train their workforces on GenAI use.
11/20/2025 · 4 min read


Last week we published "A local leader's guide to using generative AI (without losing your voice)"—a practical playbook for using tools like ChatGPT responsibly in everyday government communications.
In just the past few days, multiple state governments have shifted from treating AI as an interesting curiosity to treating it as something their workforce needs to actually know how to use. Georgia announced a statewide partnership to train public employees on responsible GenAI use. Massachusetts launched a similar training rollout for its tech services staff. And New York is already running a hands-on pilot for 1,000 state employees, with plans to scale it up.
If you're a local elected official, organizer, or comms staffer, this matters to you (even if you aren’t in Georgia, Massachusetts, or New York).
Why? Because training is how you get from responsible individual use (what we covered last week) to responsible organizational adoption (what's coming for everyone, whether you're ready or not).
Let's look at what's actually happening, why it's picking up speed, and what you can do about it right now.
What Georgia Announced (and Why It's Worth Paying Attention To)
On November 19, 2025, the Georgia Technology Authority and the state's Office of Artificial Intelligence announced a partnership with InnovateUS to provide free, statewide GenAI training for public sector employees. The focus is explicitly on what they're calling "confidence and care"—meaning everyday usefulness plus guardrails.
A few things are worth noting here. First, the training is free and widely available, which signals Georgia wants broad, normalized adoption rather than some boutique pilot limited to IT teams. Second, the emphasis is on responsible use, not just capability. The InnovateUS course series Georgia is rolling out covers privacy, hallucinations, bias, and when not to use AI at all. Third, they're treating AI as a workforce skill. Georgia's framing is basically that public service in 2025 requires AI literacy the way 2015 required social media literacy.
This Isn't Just Georgia
As we noted, Massachusetts and New York have similar initiatives underway. Salt Lake City partnered with InnovateUS earlier this month to bring responsible GenAI training directly to city staff. And national organizations like the National League of Cities have started publishing step-by-step guidance for training municipal teams.
The throughline: governments at every level are converging on the same conclusion. Untrained AI use is already happening, so training is the only safe path forward. That squares with what we're hearing across the public sector. AI is becoming a bring-your-own-tool reality. When organizations don't set norms, employees improvise in secret, and risk goes up.
What Does This Mean for You?
Our post last week emphasized three core principles: don't feed sensitive local information into public AI tools, treat AI output as a draft (never the final version), and use a voice guide so the work still sounds like you. Last week's guide was about how you personally use AI well. This week's news is about how your organization learns to use AI well together.
For local governments, that "together" part really matters. Local work isn't just technical. It's relational, contextual, and high-trust. Your residents know when something sounds off. They also know when the city is handling technology carefully versus recklessly. Training helps preserve that trust at scale.
Why Training Matters Even More at the Local Level
State agencies are big. They can build central AI teams and formal review processes. Local governments and community organizations usually can't. Instead, you have a tiny comms shop (or none at all), staff doing three jobs at once, heavy public-records and political scrutiny, and hyper-local facts that absolutely must be right.
That combination creates two specific risks.
The first is what I'd call "confidently wrong" local details. AI tools are especially prone to hallucinating local context: street names, meeting dates, ordinance language, and program eligibility. Without training, someone eventually copy-pastes a polished paragraph that includes a made-up policy detail. Even if it's an honest mistake, it can erode trust fast.
The second is invisible inconsistency. In small teams, AI adoption can become scattered. One person uses it daily, another refuses, a third is unsure and hides it. Training creates shared expectations about what you use AI for, what you never use it for, how you review and localize output, and how you store records. That consistency is both a public service benefit and a political protection.
What You Can Do Right Now (Even Without a State Program)
You don't need to wait for a statewide rollout to get these benefits. Here's a lightweight version you can implement this month.
Start by naming an AI point person. This doesn't need to be a full-time role—just someone who collects good prompts, tracks policy updates, and helps staff troubleshoot.
Next, adopt a one-page AI use norm. Borrow from last week's guide and keep it simple: no sensitive constituent data in public tools, AI output is always a draft, a human checks facts and tone, and you save final versions the same way you always have (public records!).
Then run a 45-minute training huddle. Demo two or three everyday tasks like rewriting constituent emails, simplifying staff reports, or drafting meeting talking points. Show the pitfalls—hallucinations, generic AI voice, bias slips. Practice with real local examples using your voice guide.
Finally, take advantage of free public-sector training where you can. InnovateUS's Responsible AI courses are open to all public professionals, not just state employees. So are ICMA and NLC GenAI sessions for local government. Even a small team doing one module together can align your norms quickly.
A Useful Reframe
A lot of local officials hear "AI training" and think they don't have time for another tech initiative.
But this wave of state programs is actually about the opposite. It's about capacity—giving small teams leverage for routine writing and analysis. It's about risk—preventing unreviewed AI output from becoming a public mistake. And it's about equity—ensuring tools are used in ways that don't amplify bias or exclude residents.
It's professional development, not science fiction. The states moving fastest right now are basically saying: AI use is inevitable, but untrained AI use is optional. That's a framing local leaders can adopt too.
If the trend holds (and it almost certainly will), expect three near-term shifts.
First, more "official" tools will appear inside government. Training and approved platforms will travel together, like New York's model. Second, AI norms will become part of onboarding. New hires in comms and admin roles will be trained on GenAI use the same way they're trained on open-records compliance. Third, public expectations will change. Residents won't demand that you avoid AI. They'll demand that you use it carefully and transparently.
