Terry Gerton: Let’s start with kind of a big level-setting. As we talk about AI, from your perspective, what is sort of the current state of AI competency across the federal workforce?
John Pescatore: Well, I think it’s important for us to understand the current status of what AI is. There’s not even a good definition of what that is. Obviously, everybody’s using a lot of tools today to do internet searches with AI engines, and we hear about it in the press and politicians talk about it. It’s a very mature technology — it’s been around for over 20 years — but it’s really exploded here for a number of reasons. So it’s way overhyped. And if you think about the way bad guys try to fool us into falling for phishing attacks, they try to get a sense of urgency going — you’ve got to move, you’ve got to do something quickly — that’s sort of where we are with AI. So from the point of view of the state of competency, there’s not even a definition of what it means to be competent yet. So what we have to point out first is: What does it really mean when we talk about using AI, or buying AI, or protecting ourselves against AI? We’re still sort of in a definitional phase.
John Pescatore: In the federal government, probably two years ago, I did one of the lunchtime briefings on Capitol Hill that we give to Senate staff on topics like this. So I’ve been thinking about this for several years. Over that time period, SANS has put out three training courses focused on the key aspects of AI, and we’re starting to see the demand for that sort of training go up. So we know there’s a thirst to become competent. It’s only very recently that you can even define what that means.
Terry Gerton: That’s a fair point. And AI is going to show up differently depending on the kind of work that an individual does in the federal government. But there is this new bill that’s been introduced in the House — the AI Training Extension Act of 2025. So to your last point, as there’s growing interest in training, what is this bill trying to accomplish?
John Pescatore: There had been previous draft bills that came out last year — maybe even 2024 — that were focused on the procurement of AI. They had to do with training procurement officers and people involved in technology procurements on how to evaluate AI.
John Pescatore: This current draft legislation that just came out in June aims to broaden that to train IT people, end users and security people. That’s very key — the procurement side is important, but it wasn’t really focused on the real problems. When we look at it today — already in private industry, and certainly in some spots in the federal government — there are three things we have to worry about with AI. One is bad guys using it against us. That gets a lot of press, and we spend a lot of time there. But the most important one is: How do we make sure, if the mission side is rolling out the use of AI, that it’s done securely? That’s really the number one. And the final one is: How do we — security people and IT people — use AI to do our jobs more efficiently? In a time when it’s harder than ever to hire new people, how do we use AI tools to make our people more productive and fill some of the gaps? But that first one — protecting our own mission’s and our organization’s use of AI — is really where the most work and training needs to be done.
Terry Gerton: So what are the key aspects of that kind of training? What do people need to be focused on? What do we need to make sure that our federal employees have as basic AI skills?
John Pescatore: So it depends on the role of the person. That’s how we start our training out. We have one that’s a broad one for leaders and managers — what do they need to think about? So for example, governance of AI is very important. Who is in charge of deciding what data the AI engine ingests, and how is that data protected? A great example is Microsoft — this was early last year, over a year ago now — they had a security incident with their own use of AI in Microsoft Azure that led to a huge breach, because they hadn’t thought through the governance of AI.
John Pescatore: So in that example, employees’ PCs — their disk drives — were indexed into the AI engine. That included every one of their emails and their passwords. So it was not a failure of the technology. It was a failure of the governance and the definition of how it was to be used. So that’s the first one — governance. That’s the same in information security in general: We have to have governance in place before we can start coming up with policies, and before we can implement controls to put those policies into effect. So that’s number one.
John Pescatore: From the typical IT person’s point of view — someone who might be involved in an IT project — it’s understanding the basic concepts and what it really means. We tend to talk about AI as one blob, but there are many different types of AI, like machine learning. A lot of what you’re seeing today is generative AI — the chat-style queries and the ability to fake pictures and voices. But there are a lot of different uses of AI. And then the final group after that is the cybersecurity people. There’s been AI in use in cybersecurity tools for really 20 years; it was called machine learning for a long time. Now we do have some options where security people can take advantage of AI tools to do some things — but not all that it’s claimed to be.
Terry Gerton: I’m speaking with John Pescatore. He’s the director of emerging security trends at the SANS Institute. Let’s think about how agencies would acquire this kind of training or do it themselves, as the General Services Administration is thinking about centralizing procurement. Should we think about the government buying a common training package for agencies? Should agencies think about specific training programs for particular skill sets or mission sets? What should they be thinking about as they begin to acquire training for their AI teams?
John Pescatore: I think the way the government has gone about looking at cybersecurity training and IT training in general still holds. And it’s not to start by looking at the training — it’s to start by looking at the roles. Those are typically the defined job categories within the various frameworks, like the NICE framework and other government efforts, where a role is defined, then the skills needed for that role, and then how those skills are demonstrated and how they are acquired. So I might acquire those skills in different ways — maybe I worked 10 years in a field and I’ve never taken a training course, but I have those skills. Maybe I’m brand new, right out of college; I have a degree but I’ve never had any hands-on experience, so I’m lacking some of those skills, or I may have others.
John Pescatore: So then you get to certification, which says: How do we assess what skills the person does have? It’s not just a paper, resume exercise. And then what’s available to fill those skill gaps? That’s where training comes in. A degree in computer science might fill some of it, but we know from many years of experience that most computer science programs don’t necessarily teach people how to do things; they teach the concepts of how to talk about things. Then we might say that for certain skills — operational skills — people need hands-on experience. That may involve training with lab-type hands-on environments. Others may be strictly concepts. So in AI, we’ve definitely seen a need for that concept-type training, so that managers understand what this means, and similarly so that technical managers understand how to evaluate their own staff’s needs. And then there’s a lot of hands-on need.
John Pescatore: We can look at the medical world, for example. What they saw over the years was, obviously, we need highly skilled doctors. But we also need people to operate MRI machines and CAT scan machines — people who understand the medical side of things, but also understand the technology side of things. Then we need people to evaluate what the technology is saying. And then finally, all of that feeds a small number of experts. The same tiering is true in the federal government for IT and IT security.
Terry Gerton: What I hear you saying is there’s all kinds of different training. What an individual who’s preparing emails and PowerPoints needs in terms of AI training is different from someone who’s managing large databases or deploying cybersecurity. But through all of that, there is a focus on transparency and ethics. Where would you bring those two topics into the training planning structure?
John Pescatore: Well, transparency and ethics I would largely lump under governance, because that’s what you have to think through when a program is going to start up and do something with AI — say, provide better health care to state, local and tribal entities using AI tools — all the use cases that have been talked about.
John Pescatore: That’s where you get into, from the start: Well now, how do we do this and protect any information that’s in there? How do we validate the output from a safety point of view? And then from an ethics point of view: How do we make sure the inputs to this result in ethical outputs? So we’ve learned that AI is very good at hallucinating. If AI says, “make something up,” another AI engine ingests that. And all of a sudden, all the AI engines think it’s true.
John Pescatore: So there’s safeguards that we’ve done from a safety point of — the old, I used to work for the Secret Service and we used to have people do bomb checks — make sure that we don’t have bombs around the properties. And you would take bomb training. And the old joke was: Read the manual first on how to defuse any bomb, and read it to the end, because it’ll say cut the blue wire after you cut the red wire. So it’s the same with AI. There’s some things — if you don’t think from end to end — quality and safety, then ethics and transparency are meaningless. But if you think about it from a procurement point of view — transparency on how do we know what’s going on on the inside of this thing? It says it’s using AI. What does that mean? We still need very definitional levels defined that have not yet been reached.
Copyright
© 2025 Federal News Network. All rights reserved.