
NTelligence

Project Info

Team Name


NTelligence


Team Members


ahsan, Rafsan, and 1 other member with an unpublished profile.

Project Description


NTelligence is a conversational analytics tool for NT Government. Staff ask plain-English questions (e.g., “Top 5 departments by average rating in 2023”) and NTelligence returns a safe, explainable answer with evidence: the query plan, parameterised SQL, parameters, rows, and a grounded summary.
It reduces spreadsheet/SQL chaos, speeds up decisions in regional and central teams, and bakes in privacy controls (allow-listed tables/columns, date windows, aggregation/masking for sensitive attributes).

How it works (one-liner):
UI → FastAPI (server.py) → Agent 1 Scope Guard → Agent 2 Planner (QueryPlan JSON) → Agent 3 SQL (parameterised) ↔ KB (tables + curated views) → Agent 4 Verify → answer + evidence.
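The Planner step above emits a structured QueryPlan before any SQL is written. A minimal sketch of what that hand-off could look like, using a stdlib dataclass in place of the actual Pydantic v2 model (field names, the `plan_question` helper, and the fixed plan are illustrative assumptions, not the project's real schema):

```python
from dataclasses import dataclass

# Hypothetical shape of the QueryPlan JSON emitted by Agent 2 (Planner);
# the field names here are illustrative, not the project's actual schema.
@dataclass
class QueryPlan:
    intent: str      # e.g. "top_n_by_metric"
    table: str       # must be an allow-listed table or curated view
    metric: str
    group_by: str
    year: int
    limit: int = 5

def plan_question(question: str) -> QueryPlan:
    # Stand-in for the LLM planner: returns a fixed plan for the demo prompt.
    return QueryPlan(intent="top_n_by_metric", table="joinempperf",
                     metric="AVG(rating)", group_by="department",
                     year=2023, limit=5)

plan = plan_question("Top 5 departments by avg rating in 2023")
print(plan.table)   # joinempperf
```

Keeping the plan as typed, validated data (rather than free text) is what lets the Scope Guard and SQL agents check it mechanically before anything runs.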

Built-in demo prompts:

“Top 5 departments by avg rating in 2023”

“Staff allocation next week — Darwin & Palmerston”

“Scaffold queries” (shows Plan → SQL → Rows → Summary)


#nt government #northern territory #conversational analytics #natural language to sql #fastapi #pydantic #sqlalchemy #pydantic-ai #openai #data governance #privacy #public sector #verification #auditability #govhack

Data Story



Problem
Across agencies, routine questions require ad-hoc SQL and scattered spreadsheets. That’s slow, inconsistent, and risky around sensitive data—especially for small/regional teams.

Data approach

Demo KB: synthetic employee, action, and perf tables, plus curated views joinempperf and joinempaction.

Execution: Agent 3 compiles parameterised SQL and runs only on allowed tables/views.

Protection: Scope Guard enforces allow-lists, clamps date windows, and requires aggregation/masking for sensitive fields where policy demands it.

Verification: Agent 4 grounds summaries only in returned rows and surfaces simple trust signals (e.g., coverage, freshness).
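The execution and protection steps above can be sketched together: table names can't be bound as SQL parameters, so the Scope Guard checks them against an allow-list, while user-supplied values are always bound. This is a stdlib sqlite3 sketch, not the project's actual SQLAlchemy code; `run_scoped` and the column names are assumptions:

```python
import sqlite3

# Allow-listed tables and curated views from the demo KB.
ALLOWED_TABLES = {"employee", "action", "perf", "joinempperf", "joinempaction"}

def run_scoped(conn, table, year, limit):
    # Scope Guard: table names cannot be bound as parameters, so they are
    # validated against the allow-list instead of being interpolated blindly.
    if table not in ALLOWED_TABLES:
        raise PermissionError(f"table {table!r} is not allow-listed")
    # Values (year, limit) are bound as named parameters, never interpolated.
    sql = (f"SELECT department, AVG(rating) AS avg_rating FROM {table} "
           "WHERE year = :year GROUP BY department "
           "ORDER BY avg_rating DESC LIMIT :limit")
    return conn.execute(sql, {"year": year, "limit": limit}).fetchall()

# Seed a tiny in-memory demo DB matching the curated view's shape.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE joinempperf (department TEXT, rating REAL, year INT)")
conn.executemany("INSERT INTO joinempperf VALUES (?, ?, ?)",
                 [("Health", 4.5, 2023), ("Education", 4.1, 2023),
                  ("Health", 4.3, 2022)])
print(run_scoped(conn, "joinempperf", 2023, 5))
```

Because every answer ships its SQL and parameters as evidence, the same check that blocks an out-of-scope table also makes each query trivially reproducible.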

Why this helps the NT

Speed & fairness: regional offices get the same analytical power as central teams—less waiting, more consistent answers.

Evidence-based allocation: staffing and programs can be directed using fresh, comparable metrics.

Transparency: every answer includes the plan, SQL, params, and rows—easy to check or reproduce.

What we built (hack deliverables)

Backend (FastAPI/uvicorn, Pydantic v2, pydantic-ai, SQLAlchemy, SQLite demo; Postgres-ready)

System architecture diagram + sequence diagram

Suggested-prompts UI flow (shows Scaffold queries: Plan → SQL → Rows → Summary)

README + seed DB for instant run
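Agent 4's grounding check (summaries drawn only from returned rows) can be sketched as a simple rule: every number quoted in the summary must appear in the result set. The `grounded` helper and its regex heuristic are an assumption for illustration, not the project's actual verifier:

```python
import re

def grounded(summary: str, rows: list) -> bool:
    # Simplified Agent 4-style check: every number quoted in the summary
    # must appear somewhere in the returned rows; otherwise the summary
    # is flagged as ungrounded.
    row_values = {str(v) for row in rows for v in row}
    numbers = re.findall(r"\d+(?:\.\d+)?", summary)
    return all(n in row_values for n in numbers)

rows = [("Health", 4.5), ("Education", 4.1)]
print(grounded("Health leads with an average rating of 4.5.", rows))  # True
print(grounded("Health leads with an average rating of 4.9.", rows))  # False
```

A real verifier would also cover entity names, units, and the trust signals mentioned above (coverage, freshness), but the principle is the same: the summary is checked against the rows, not trusted on its own.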

Limitations & next steps

Add a charting agent (auto-visualise rows), caching and cost controls, and connectors for common NT data stores and the NT Open Data Portal.

Expand policies per-agency; incorporate more detailed freshness/quality checks.


Evidence of Work

Homepage

Team DataSets

This team does not currently have any datasets.

Challenge Entries

An Accurate and Trustworthy Chatbot for Data Interactions

How can government agencies deploy conversational and analytical AI to interrogate complex datasets while maintaining the high degree of accuracy and auditability standards required for official decision-making?

Ask your data, trust the answer

Eligibility: Open to all, but special consideration will exist for teams with a local NT lead.

Go to Challenge | 24 teams have entered this challenge.