Generative AI is making waves in almost every industry, from creative writing to coding and even healthcare. It’s hard not to be excited about the possibilities when it comes to automating and optimizing everything we do.
But what about when it comes to policy development in security and regulatory compliance?
Let’s just say that if AI were a student, it wouldn’t quite be ready to pass the final exam.
Now, don’t get me wrong. I’m bullish on AI. Its potential is enormous. It’s hard to believe that ChatGPT was released only about two years ago, and companies like OpenAI, Google, Meta, and others have made staggering leaps forward since then.
But when you look closely at the application of generative AI in the high-stakes world of compliance, you start to see a few chinks in the armor. So let’s explore why generative AI is almost, but not quite, ready for the compliance spotlight, and why we shouldn’t hand over the policy-writing reins just yet.
Fear and Loathing
First things first: AI hallucinations. No, we’re not talking about some trippy sci-fi movie where robots dream of electric sheep. We’re talking about when generative AI confidently produces text that sounds plausible but is, in fact, flat-out wrong. The problem with this is obvious—if AI decides that your cybersecurity policy needs a control for “protecting against alien invasions,” well, you’ve got bigger problems than just editing.
And it’s not just about an AI adding controls that are irrelevant or nonsensical. In compliance and security policies, the same unreliability can just as easily cause an AI to omit critical controls altogether.
This also touches on the critical issue of explainability. In compliance, it’s not just about generating text—it’s about being able to explain and audit why certain decisions were made. Unfortunately, most generative AI models lack the transparency to show how they arrived at specific recommendations. And when it comes to an audit, simply stating that “the AI said so” won’t cut it. Explainability is non-negotiable in this field, and AI isn’t there yet.
80% Correct, But Which 80%?
Accuracy is a persistent issue, and in the world of compliance it’s a risky one. Though many generative AI models are right about 80% of the time (sometimes more, sometimes less), that missing 20% is where the dragons live.
We asked IT marketer J.P. Roe about this issue, as he’s been immersed in a very specific corner of AI for the last eight months. He’s been writing professionally for over twenty years, and his science fiction works garnered the attention of developers in Europe who are trying to perfect a novel-writing AI.
“It’s all a matter of quality,” Roe told us. “The existing AI models can produce short fiction that’s on par with, say, a disinterested high school student in a creative writing class.”
“Technically, it’s fiction. It checks all the boxes. But it’s not hard to see the holes in plot and prose. The AI doesn’t get the context — like at all. In some of the early experiments, I spent six hours writing prompts about a single chapter’s story arc and the AI couldn’t produce any results that didn’t leave out something important or add a completely wild, anomalous plot twist.”
When we told him that there’s a rising demand for AI-generated policies, his response came without hesitation:
“That’s scary. After what I’ve seen working with several purpose-built models, I personally would not use it for anything legally load-bearing unless I had a compliance expert coming behind it to fix the mistakes. Don’t forget that in the Pareto principle—the 80/20 rule—the 20% is the important part. If 20% of the policy is broken, that’s 80% of your outcome.”
You see, it’s not about whether AI gets most of it right—it’s about whether a non-expert can spot the parts that are wrong. And trust me, when it comes to compliance, it’s often the parts you didn’t know you missed that bite the hardest.
The challenge here is that compliance requires a level of precision and expertise that most users—and even AI systems—don’t have. Sure, AI might nail the basics. It might generate a policy that looks good at first glance. But ask yourself: Do you really want to explain to an auditor why your policy includes a random, inaccurate control? Or worse, leaves out something critical?
There’s also the question of accountability.
If AI-generated policies lead to non-compliance, fines, or a breach, who takes the blame? The AI? The developer? The user? Right now, there’s no clear legal framework governing the accountability of AI-generated content, particularly in high-stakes industries like compliance. Until there are clear legal guidelines, using AI in this context introduces significant risks.
Context Matters—And AI Doesn’t Get It
Writing security policies isn’t just about checking boxes. It’s about context. What controls are necessary for your business, in your industry, under your specific set of regulatory requirements? Generative AI, no matter how advanced, struggles with this contextualization.
It’s not just about producing text; it’s about understanding the operational impact of that text.
Policies need to be more than words on a page—they need to be operationalized. They need to map to actual processes, technologies, and personnel in your organization. Right now, AI doesn’t have the mechanisms in place to bridge that gap. A more realistic approach in the near term might be to use AI as a policy assistant, not the final decision-maker.
AI can help with initial drafting, gathering information, and even suggesting baseline controls. But humans still need to take those suggestions, contextualize them, and finalize the policies. AI-assisted compliance could be an exciting middle ground while we wait for AI’s full potential to be realized.
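To make that division of labor concrete, here’s a minimal sketch in Python of what “AI drafts, human finalizes” could look like. Everything in it is hypothetical (it describes no real product or API); the point is that human sign-off can be enforced structurally rather than left to good intentions.

```python
# A minimal sketch of an AI-assisted (not AI-authored) policy workflow.
# All names here are hypothetical; any LLM client could supply the draft.
from dataclasses import dataclass, field

@dataclass
class PolicySection:
    title: str
    ai_draft: str                   # the model's starting point
    reviewer: str | None = None     # named human expert who signed off
    final_text: str | None = None   # the text that actually ships
    notes: list[str] = field(default_factory=list)

    @property
    def approved(self) -> bool:
        # A section counts only once a named human has finalized it.
        return self.reviewer is not None and self.final_text is not None

def publish(sections: list[PolicySection]) -> None:
    # Refuse to release anything that lacks human sign-off.
    pending = [s.title for s in sections if not s.approved]
    if pending:
        raise RuntimeError(f"Blocked: no human sign-off for {pending}")
    print("Published:", ", ".join(s.title for s in sections))

# Usage: the AI drafts, the human contextualizes and finalizes.
section = PolicySection(
    title="Access Control",
    ai_draft="All users must authenticate before accessing systems.",
)
section.notes.append("Too generic: scope to production systems, require MFA.")
section.reviewer = "J. Smith, CISO"
section.final_text = ("Production access requires MFA and is reviewed "
                      "quarterly by the security team.")
publish([section])  # raises if any section skipped human review
```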
The Price of Precision
Let’s talk dollars and cents. This isn’t just about AI’s ability to spit out a draft policy; it’s about the cost of making that policy accurate, actionable, and reliable. Right now, achieving the precision that compliance demands requires a significant investment.
Generative AI models, particularly those used in compliance, need substantial computational resources and specialized training to ensure they meet regulatory standards. The infrastructure and expertise needed to fine-tune these systems don’t come cheap, and the cost of maintaining and updating them is even higher due to the constant evolution of regulations.
Additionally, even with cutting-edge technologies like retrieval-augmented generation (RAG) and robotic process automation (RPA) improving accuracy, these systems remain costly and complex to deploy at scale. Compliance policies need more than accurate generation; they require operational integration and explainability, and this is where current AI solutions struggle.
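For context, retrieval-augmented generation means grounding the model’s output in source documents you supply, such as the actual regulatory texts, rather than relying on its training data alone. Here’s a deliberately toy sketch, with a crude keyword scorer standing in for a real embedding search and the framework snippets paraphrased as placeholders:

```python
# Toy retrieval-augmented generation (RAG) loop for policy drafting.
# Illustrative only: real systems use embedding search and chunking,
# and the framework snippets below are paraphrased placeholders.

def retrieve(query: str, corpus: dict[str, str], k: int = 2) -> list[str]:
    # Crude keyword-overlap scoring stands in for a real vector search.
    words = query.lower().split()
    ranked = sorted(
        corpus.items(),
        key=lambda item: sum(w in item[1].lower() for w in words),
        reverse=True,
    )
    return [f"[{doc_id}] {text}" for doc_id, text in ranked[:k]]

def build_prompt(query: str, passages: list[str]) -> str:
    # Constraining the draft to cited sources is what makes it auditable.
    sources = "\n".join(passages)
    return (f"Draft a policy control for: {query}\n"
            f"Use ONLY the sources below and cite their IDs.\n\n{sources}")

corpus = {
    "NIST-AC-2": "Account management: define, approve, and review accounts.",
    "ISO-A.9.2": "User access provisioning: apply a formal assignment process.",
}
passages = retrieve("account provisioning and access review", corpus)
prompt = build_prompt("user account provisioning and periodic review", passages)
print(prompt)  # this grounded prompt would go to an LLM; a human reviews the output
```

Even in this toy form, the requirement to cite retrieved sources is what moves the output toward the explainability auditors demand; the hard (and expensive) part is doing it reliably at scale.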
Much like how autonomous driving is being developed incrementally, compliance-related AI will likely evolve through hybrid models, where humans and AI work together to deliver more accurate outcomes. This incremental progress, driven by benchmarking and continuous improvement, will slowly build the trust needed for AI to take on greater roles in policy creation—but that’s still on the horizon.
It’s Not All Doom and Gloom—But the Stakes Are High
Now, I’m not saying generative AI is a lost cause for compliance. In fact, I think it has enormous potential. AI-assisted compliance, where humans work in tandem with AI, could be a game-changer. AI can help with the grunt work—summarizing regulations, suggesting baseline controls, and even drafting initial policy outlines. But finalizing those policies? That still needs the hands and minds of human experts.
The key is knowing what AI is good at and where it falls short. And for now, it isn’t capable of keeping an MSP (managed service provider) or its clients in compliance without a lot of handholding and oversight.
The Road Ahead: AI’s Role in Compliance—One Day, But Not Today
So, is AI going to write your next security policy? Probably not. But could it help you get there faster? Absolutely.
Blacksmith is working toward a future where AI plays a bigger role in compliance—just not the role some people are envisioning. Until AI can consistently deliver 100% accurate, context-aware policies, the compliance stakes are too high to trust it fully, and we’re not willing to sacrifice your safety and reputation to appease a trend.
What we are willing to do is find the right places where AI can slot into the compliance landscape, particularly in areas like explainability and operational integration.
In the meantime, don’t gamble with your compliance needs. Cutting corners with AI today could mean costly mistakes tomorrow.
Why take the risk?
At Blacksmith InfoSec, we combine the expertise of seasoned professionals with the efficiency of modern technology to help you build policies that are not only audit-ready but tailored to your business.
Get the precision and accountability you need now, while staying ahead of future advancements. Secure. Simplify. Succeed. Reach out to us today, and let’s forge a compliance program you can truly trust.