UpSkill AI: Compliance AI That Can't Afford to Be Wrong
Last updated on November 15, 2024
Compliance professionals can’t use ChatGPT.
Not for the work that matters. A confident-sounding wrong answer about anti-money laundering regulations could cost their clients regulatory fines, license revocations, or worse. When a compliance officer advises a financial services firm on their AML obligations, “the AI said so” doesn’t hold up when regulators come asking questions.
AI is transforming how knowledge workers operate – except in industries where accuracy isn’t a nice-to-have but a legal requirement. Compliance is one of those industries. The Compliance Company (TCC) saw this gap.
TCC is a New Zealand-based consultancy specializing in AML/CFT (Anti-Money Laundering and Countering Financing of Terrorism) compliance. They advise financial services firms on regulatory obligations, conduct audits, and train compliance teams. They’ve built deep expertise – proprietary training materials, detailed interpretations of regulatory guidance, practical knowledge of how compliance works across dozens of industry contexts.
Their clients constantly needed answers. What are our obligations under the latest AML guidance? How should we handle this specific scenario? What does this regulatory update mean for our business? Getting those answers meant searching through legislation, regulatory guidance documents, internal policies, and TCC’s own training materials. Time-consuming work, even for experts.
TCC had the domain expertise. What they needed was a way to make that expertise instantly accessible – without sacrificing the accuracy their industry demands.
This is how we built UpSkill AI Q&A: an AI platform for a domain where “I think so” isn’t acceptable.
Why Compliance Is Different
Building AI for compliance requires more than “RAG with legal documents.” The constraints are fundamentally different from most AI applications.
Traceability is mandatory. In compliance, the right answer isn’t enough – showing where it came from matters just as much. When a regulator asks “how did you reach that conclusion?”, pointing to a source document is the expectation. Generic AI can’t do this. It synthesizes information in ways that obscure origins, and it can’t reliably indicate which specific passage informed which part of its response.
Multiple sources, potential conflicts. A compliance question might require information from official legislation, regulatory guidance, and an organization’s internal policies. These sources don’t always agree. The AML Act says one thing; the FMA’s guidance interprets it with nuance; a company’s internal policy adds further specificity. An AI system can’t just blend these together and hope for the best. It needs to surface each perspective so professionals can apply judgment.
Hallucination isn’t just annoying – it’s dangerous. When ChatGPT confidently makes something up about a recipe or a historical fact, it’s frustrating. When an AI confidently makes something up about regulatory obligations, someone might act on it. A compliance officer might advise a client based on that answer. The client might make business decisions. And when the regulator shows up, “the AI hallucinated” isn’t a defense.
Domain expertise shapes everything. You can’t build a compliance AI by throwing documents into a vector database and calling it done. Understanding what matters – which sections carry legal weight, how regulatory language should be interpreted, what questions practitioners actually ask, what a useful answer looks like – requires deep domain knowledge. The retrieval strategy, the prompt design, the output format – all of it depends on understanding the domain.
TCC brought that domain expertise. Our job was to build a system worthy of it.
Grounded, Verifiable Answers
The core challenge: an AI system that compliance professionals can trust.
Trust in this context has specific requirements. The answer must come from authoritative sources. The user must be able to verify where each claim originated. And the system must be transparent about what it doesn’t know.
The retrieval problem comes first. Before the AI can answer anything, it needs to find the right information. Not just text that’s semantically similar to the question – the relevant passages from the right sources. In compliance, close isn’t good enough. A passage about customer due diligence requirements for banks might be semantically similar to one about requirements for real estate agents, but they’re governed by different rules. The retrieval system needs to be precise.
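To make that precision requirement concrete, here is an illustrative sketch (not TCC’s actual implementation) of metadata-filtered retrieval: candidates are hard-filtered by sector before any relevance scoring, so a banking question can never surface a real-estate passage no matter how similar the wording. The passage texts, sector labels, and word-overlap scoring are invented stand-ins; a production system would use embedding similarity.

```python
from dataclasses import dataclass

@dataclass
class Passage:
    text: str
    source: str   # e.g. "AML Act", "FMA guidance", "internal policy"
    sector: str   # the industry this passage actually governs

def retrieve(query: str, sector: str, corpus: list[Passage], k: int = 2) -> list[Passage]:
    # Hard metadata filter first: semantic closeness alone is not enough
    # when near-identical passages are governed by different rules.
    candidates = [p for p in corpus if p.sector == sector]
    # Toy relevance score: shared-word overlap stands in for vector similarity.
    q_words = set(query.lower().split())
    scored = sorted(
        candidates,
        key=lambda p: len(q_words & set(p.text.lower().split())),
        reverse=True,
    )
    return scored[:k]

# Two semantically similar passages governed by different rules.
corpus = [
    Passage("Customer due diligence requirements for banks ...", "AML Act", "banking"),
    Passage("Customer due diligence requirements for real estate agents ...", "AML Act", "real-estate"),
]
results = retrieve("customer due diligence requirements for banks", "banking", corpus)
```

Filtering before scoring is the design point: the wrong-sector passage is excluded outright rather than merely ranked lower.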
Users control their sources. We designed the system so users choose which knowledge bases to query for each question: TCC’s proprietary training materials (UpSkill content), official regulatory guidance (legislation and FMA documents), or their organization’s internal compliance documents. Professionals decide what informs their answers. A user might want to check what the law actually says before seeing how their internal policy interprets it. Or they might want TCC’s practical guidance without wading through regulatory language.
Structured responses preserve source integrity. When users select multiple sources, the system returns separate answers for each. If the regulatory guidance says one thing and internal policy says another, users see both – clearly labeled, not blended. This matters. In compliance, the distinction between “what the law requires” and “what our policy says” is significant. Hiding that distinction behind a synthesized answer would undermine the tool’s usefulness.
Citation is non-negotiable. Every claim links back to its source. Users can click through to the original document, the specific section, the exact language that informed the response. In a domain where “because the AI said so” carries no weight, this is the foundation of trust.
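A hypothetical sketch of that response shape: one labeled answer per selected source, each carrying its own citations, never merged into a single synthesized text. The source labels, field names, and the `Citation` type here are invented for illustration, not the platform’s real schema.

```python
from dataclasses import dataclass, field

@dataclass
class Citation:
    document: str   # the original document the claim came from
    section: str    # the specific section a user can click through to

@dataclass
class SourceAnswer:
    source: str                  # e.g. "legislation", "upskill", "internal"
    text: str
    citations: list[Citation] = field(default_factory=list)

@dataclass
class Response:
    # One clearly labeled answer per source, never a blended synthesis.
    answers: list[SourceAnswer]

def build_response(per_source: dict[str, tuple[str, list[Citation]]]) -> Response:
    return Response(answers=[
        SourceAnswer(source=src, text=text, citations=cites)
        for src, (text, cites) in per_source.items()
    ])

resp = build_response({
    "legislation": ("Section 11 requires ...", [Citation("AML/CFT Act", "s 11")]),
    "internal": ("Our policy adds ...", []),
})
```

Keeping the per-source boundary in the data model, rather than in the prompt alone, is what makes “what the law requires” versus “what our policy says” impossible to blur downstream.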
Answers are checked before delivery. A separate process reviews each response against the source material before it reaches the user – verifying that claims follow from context and citations are accurate. Single-pass generation isn’t reliable enough when 90% correct still means wrong.
The system knows its limits. If it can’t produce a confident answer, it says so and flags the question for human review. Off-topic questions get declined. Ambiguous questions prompt clarification. Conflicting sources get surfaced, not hidden.
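The two behaviours above – verify every claim, decline when verification fails – can be sketched together. This is a deliberately simplified illustration: the word-overlap grounding check is a stand-in for a real verifier model or entailment check, and the threshold and status labels are invented.

```python
def claim_is_grounded(claim: str, context: list[str], threshold: float = 0.6) -> bool:
    # Toy heuristic: what fraction of the claim's words appear in some
    # retrieved passage. A real system would use a stronger verifier.
    claim_words = set(claim.lower().split())
    if not claim_words:
        return False
    best = max(
        (len(claim_words & set(c.lower().split())) / len(claim_words) for c in context),
        default=0.0,
    )
    return best >= threshold

def review(draft_claims: list[str], context: list[str]) -> dict:
    ungrounded = [c for c in draft_claims if not claim_is_grounded(c, context)]
    if ungrounded:
        # Decline rather than deliver a partially verified answer;
        # flagged questions go to human review.
        return {"status": "needs_human_review", "flagged": ungrounded}
    return {"status": "deliver", "flagged": []}

context = ["Reporting entities must verify the identity of each customer"]
ok = review(["reporting entities must verify customer identity"], context)
bad = review(["penalties are capped at ten dollars"], context)
```

The key design choice is the failure mode: an unverifiable claim blocks the whole response rather than being quietly dropped.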
Enterprise Infrastructure
AI accuracy means nothing if the system can’t handle real organizational complexity.
TCC’s clients aren’t individual users – they’re compliance teams within financial services firms. Each organization has its own internal policies, its own uploaded documents, its own conversation history. The platform needed to support this from day one, not as an afterthought.
Multi-tenancy is foundational. Each organization operates in its own isolated workspace. Documents uploaded by one firm are invisible to others. Conversation history stays within the organization. For enterprise compliance tools, data isolation is a requirement – firms won’t upload their AML programs to a system where boundaries are unclear.
Role-based access matches how compliance teams work. Administrators manage the knowledge base and oversee usage. Compliance professionals query the system and contribute documents. View-only users can ask questions but can’t modify anything. These aren’t arbitrary permission levels – they reflect how compliance functions operate within organizations.
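A minimal sketch of how tenant isolation and these three role levels might fit together, assuming invented role names and an `org_id` field on every record (not the platform’s actual schema): every data access is scoped to the user’s organization first, and roles only gate actions within that scope.

```python
from dataclasses import dataclass

# Hypothetical permission map mirroring the three levels described above.
ROLE_PERMISSIONS = {
    "admin": {"query", "upload", "manage_users"},
    "professional": {"query", "upload"},
    "viewer": {"query"},
}

@dataclass
class User:
    user_id: str
    org_id: str
    role: str

@dataclass
class Document:
    doc_id: str
    org_id: str
    title: str

def visible_documents(user: User, all_docs: list[Document]) -> list[Document]:
    # Tenant isolation: documents never cross organization boundaries.
    return [d for d in all_docs if d.org_id == user.org_id]

def authorize(user: User, action: str) -> bool:
    # Role check applies only within the user's own workspace.
    return action in ROLE_PERMISSIONS.get(user.role, set())

user = User("u1", "org-a", "viewer")
docs = [Document("d1", "org-a", "AML Programme"), Document("d2", "org-b", "Risk Policy")]
```

Scoping by organization before checking roles means a permission bug can at worst expose the wrong action, never the wrong firm’s documents.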
Observability makes debugging possible. Every query, every retrieval, every response is logged and traceable. When something goes wrong – and in production, things go wrong – the team can trace exactly what happened: what the user asked, what documents were retrieved, how the response was generated. Without this, improving the system would be guesswork.
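As an illustration of that traceability, here is a toy per-query trace: each stage (query received, retrieval, generation) is recorded under one trace ID so a bad answer can be replayed end to end. Field names and stage labels are illustrative, not the platform’s actual logging schema.

```python
import time
import uuid

class QueryTrace:
    """Collects every stage of one query under a single trace id."""

    def __init__(self, user_query: str):
        self.trace_id = str(uuid.uuid4())
        self.events: list[dict] = []
        self.log("query_received", query=user_query)

    def log(self, stage: str, **fields) -> None:
        # In production these events would go to a log store, not a list.
        self.events.append(
            {"trace_id": self.trace_id, "stage": stage, "ts": time.time(), **fields}
        )

trace = QueryTrace("What are our CDD obligations?")
trace.log("retrieval", doc_ids=["aml-act-s11"], source="legislation")
trace.log("response_generated", verified=True)
```

With every event sharing one trace ID, “what did the user ask, what was retrieved, what was generated” becomes a single lookup instead of guesswork.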
Document processing handles real-world messiness. Organizations upload compliance documents in various formats – PDFs, Word documents, policy manuals that weren’t designed for machine readability. The pipeline processes these into searchable, retrievable content. This is unglamorous work, but it’s where many AI systems fail. A document that can’t be properly processed is a document the system can’t use.
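A minimal sketch of the ingestion step, assuming text has already been extracted from the PDF or Word file: normalize the whitespace artifacts extraction leaves behind, then split into overlapping chunks ready for indexing. Chunk and overlap sizes here are arbitrary examples; real pipelines also handle extraction, OCR, and layout recovery.

```python
def normalize(text: str) -> str:
    # Collapse the stray newlines and tabs that PDF extraction typically leaves.
    return " ".join(text.split())

def chunk(text: str, size: int = 50, overlap: int = 10) -> list[str]:
    # Overlapping windows so a passage split across a boundary still
    # appears whole in at least one chunk.
    words = normalize(text).split()
    step = size - overlap
    return [" ".join(words[i:i + size]) for i in range(0, max(len(words), 1), step)]
```

The overlap is the point: without it, a due-diligence clause falling exactly on a chunk boundary could become unretrievable.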
How We Worked Together
The product needed to reflect how compliance professionals think and work. That meant close collaboration throughout – not a requirements handoff followed by months of silence.
Understanding before building. Before writing code, we needed to understand how compliance consulting works – what questions practitioners ask, what useful answers look like, which sources carry authority, how teams use information daily. The technical architecture emerged from this understanding.
“They took the time to understand our business and what we were trying to achieve in the complex world of regulatory compliance.”
Clear scope, complete visibility. We established early what would be built, what was out of scope, and how we’d know when it was done. TCC saw progress throughout. No surprises, no scope creep, no deferred decisions on critical questions.
“Softcery defined the project scope clearly and thoroughly and then managed it end to end. They gave us complete clarity at every stage of the project.”
Education as part of delivery. TCC needed to understand the AI landscape to make informed product decisions and speak credibly about the technology to their market. What’s possible today, what’s hype, where the real tradeoffs lie – this knowledge transferred alongside the software.
“They educated and empowered us along the way, positioning us to be the spokesperson for this cutting-edge technology.”
Domain expertise shaped every decision. How should sources be displayed? What tone should responses take? When should the system decline to answer? These are domain questions. TCC had the answers. We translated them into working software.
Where It Stands
UpSkill AI Q&A is live and serving compliance professionals in New Zealand and Australia.
The platform handles real queries from real users – questions about AML obligations, regulatory interpretations, internal policy applications. Each answer is grounded in source material, cited, and verified before delivery. When the system isn’t confident, it says so.
The two jurisdictions have different regulatory frameworks and terminology. The platform handles both – multi-jurisdictional support was part of the original architecture, not a retrofit.
TCC is pioneering what AI-assisted compliance advisory looks like – defining the category in their market.
“Not only did they develop a first-of-its-kind product with us, but they also educated and empowered us along the way, positioning us to be the spokesperson for this cutting-edge technology. Softcery are true experts in AI and software development. We couldn’t have chosen a better partner for this journey.”
– Jeanette Kreft, Managing Director, The Compliance Company & UpSkill
See exactly what's standing between your prototype and a system you can confidently put in front of customers. Your custom launch plan shows the specific gaps you need to close and the fastest way to close them.
Get Your AI Launch Plan