
An Accurate and Trustworthy Chatbot for Data Interactions

Jurisdiction: Northern Territory

Ask your data, trust the answer


How can government agencies deploy conversational and analytical AI to interrogate complex datasets while maintaining the high accuracy and auditability standards required for official decision-making?

Government agencies manage thousands of datasets but lack intuitive ways to extract insights. While AI chatbots show promise, government's accuracy requirements (where 90% isn't sufficient) create unique challenges around trust, vetting, and accountability.
Currently, there is a heavy focus on more advanced AI and LLM frameworks and architectures with stronger logical and reasoning capabilities, such as the recent release of ChatGPT-5, which promises ‘PhD-level’ intelligence. However, a known drawback of such advances in reasoning is hallucination, where the model confidently presents inaccurate information or fabricated sources that must be questioned by the user.
Government solutions, and the ongoing uptake of AI in government, require the opposite set of criteria: accuracy is paramount, while the reasoning capabilities of models are less critical and can instead be user-driven. As such, there is a pressing need to develop tools, approaches and solutions that strike this balance.

Your solution should demonstrate or focus on at least one of the following:
- Conversational data interrogation across multiple government datasets
- Trust scoring and vetting mechanisms that validate AI responses (see the sketch after this list)
- Grounded, scope-limited responses (no hallucinations about unrelated topics)
- Transferable framework that works across departments (HR, finance, operations)
- Suggested question scaffolding to guide users toward productive queries
- Audit trails showing how conclusions were reached
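
To make the trust-scoring, grounding and audit-trail criteria concrete, here is a minimal Python sketch of one possible structure. Everything in it is an illustrative assumption rather than a required design: the `answer_query` function, the `SCOPE_KEYWORDS` whitelist, the toy contracts table and the completeness-based trust score all stand in for whatever your team builds.

```python
# Minimal sketch: grounded, scope-limited answering with a trust score and an audit trail.
# All dataset contents, scope keywords, and scoring rules below are illustrative assumptions.
from dataclasses import dataclass, field
import pandas as pd

# Illustrative stand-in for a real dataset such as AusTender procurement records.
CONTRACTS = pd.DataFrame({
    "vendor":   ["Acme Pty Ltd", "Acme Pty Ltd", "Borealis Co", "Cirrus Ltd"],
    "amount":   [12_000.0, 250_000.0, 18_500.0, None],   # None models a missing value
    "category": ["IT", "IT", "Office", "IT"],
})

# Scope control: questions outside these topics are refused rather than guessed at.
SCOPE_KEYWORDS = {"vendor", "payment", "contract", "procurement", "amount", "outlier"}

@dataclass
class Answer:
    text: str
    trust_score: float                                   # 0.0 (refused) to 1.0 (fully grounded)
    audit_trail: list[str] = field(default_factory=list)

def answer_query(question: str, data: pd.DataFrame) -> Answer:
    audit = [f"question received: {question!r}"]

    # 1. Scope check: only answer questions the grounded pipeline can handle.
    if not SCOPE_KEYWORDS & set(question.lower().split()):
        audit.append("question rejected: outside configured scope")
        return Answer("I can only answer questions about procurement data.", 0.0, audit)

    # 2. Grounded computation: the answer is derived directly from the data,
    #    not generated as free text by a model.
    per_vendor = data.groupby("vendor")["amount"].sum(min_count=1)
    audit.append(f"aggregated {len(data)} rows into {len(per_vendor)} vendor totals")

    # 3. Trust score: here, simply the share of rows with a complete 'amount' value.
    complete = data["amount"].notna().mean()
    audit.append(f"completeness of 'amount' column: {complete:.0%}")

    top_vendor = per_vendor.idxmax()
    text = f"Highest total payments: {top_vendor} (${per_vendor.max():,.2f})."
    return Answer(text, trust_score=round(float(complete), 2), audit_trail=audit)

if __name__ == "__main__":
    result = answer_query("Show me vendor payment outliers", CONTRACTS)
    print(result.text)
    print("trust score:", result.trust_score)
    print("\n".join(result.audit_trail))
```

The design point the sketch tries to show is that any language model is confined to routing the question to a pre-approved, deterministic computation; the figures in the answer always come from the data, which is what makes the trust score and the audit trail meaningful.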

Example questions that users of your solution might ask include:
- Finance: "Am I going to meet my budget this year?" and "Show me vendor payment outliers"
- HR: "What's happening with leave patterns in my team?"
- Operations: "Are there any red flags in our procurement data?"

Data integration: Teams must use at least one large-scale dataset, either from Government or from third-party sites, and demonstrate the accuracy of their solution in some manner (one possible approach is sketched after these requirements).
Ethical AI: All AI proposals must demonstrate a commitment to ethical AI practices, including considerations for privacy, bias prevention, and transparency in algorithmic decision-making.
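
The challenge leaves the method of demonstrating accuracy open. One simple approach, sketched below under stated assumptions, is to score the chatbot's numeric answers against reference values computed directly from the source dataset; the question format, vendor names and the 1% tolerance used here are all hypothetical.

```python
# Minimal sketch of one way to demonstrate accuracy: compare the chatbot's numeric
# answers against reference values computed directly from the source dataset.
# The question set, tolerance, and answer format are illustrative assumptions.
import pandas as pd

def reference_total_spend(contracts: pd.DataFrame, vendor: str) -> float:
    """Ground-truth value computed deterministically from the dataset."""
    return float(contracts.loc[contracts["vendor"] == vendor, "amount"].sum())

def evaluate(chatbot_answers: dict[str, float], contracts: pd.DataFrame,
             tolerance: float = 0.01) -> float:
    """Fraction of test questions where the chatbot's number matches the reference."""
    correct = 0
    for vendor, claimed in chatbot_answers.items():
        expected = reference_total_spend(contracts, vendor)
        if abs(claimed - expected) <= tolerance * max(expected, 1.0):
            correct += 1
    return correct / len(chatbot_answers)

if __name__ == "__main__":
    contracts = pd.DataFrame({
        "vendor": ["Acme Pty Ltd", "Acme Pty Ltd", "Borealis Co"],
        "amount": [12_000.0, 250_000.0, 18_500.0],
    })
    # Answers the chatbot produced for "What is the total spend with <vendor>?"
    chatbot_answers = {"Acme Pty Ltd": 262_000.0, "Borealis Co": 18_000.0}
    print(f"accuracy: {evaluate(chatbot_answers, contracts):.0%}")  # -> accuracy: 50%
```

A fuller evaluation would cover a broader question set drawn from the highlighted datasets, but the same pattern applies: every figure the chatbot claims is checked against a value the dataset itself can reproduce.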

Eligibility: Open to all, but special consideration will be given to teams with a local NT lead.

Entry: Challenge entry is available to all teams in Australia.

Dataset Highlights

- AusTender Procurement Statistics
- Portfolio Budget Statements
- APS Employee Census 2024 Results
- Freedom of Information Statistics
- Hugging Face Tabular Dataset
- Kaggle Human Resources Datasets
- Kaggle Employee Leave Tracking Data
