Project Description
GlassBox AI is a lightweight, modular web app built with Python and Streamlit to help government agencies and businesses make AI systems more transparent, fair, and accountable. Developed for GovHack 2025, it includes practical tools such as a Transparency Statement Generator for policy‑aligned documentation, an Explainability Card for plain‑language model summaries, and a Fairness Dashboard to visualise potential bias in datasets. Designed to run locally for security and ease of use, GlassBox AI bridges the gap between technical teams and decision‑makers, turning complex AI governance principles into clear, actionable outputs that build trust and support ethical, compliant AI adoption.
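As a rough illustration, the sketch below shows how these three modules could sit behind a single Streamlit entry point. All page, field, and function names here are illustrative assumptions, not GlassBox AI's actual code.

```python
# app.py - a minimal sketch of wiring the three GlassBox AI tools into one
# Streamlit app; every label and widget below is an illustrative assumption.
import streamlit as st

st.set_page_config(page_title="GlassBox AI", layout="wide")
st.title("GlassBox AI")

# One tab per tool, mirroring the modules described above.
transparency, explainability, fairness = st.tabs(
    ["Transparency Statement", "Explainability Card", "Fairness Dashboard"]
)

with transparency:
    st.header("Transparency Statement Generator")
    system_name = st.text_input("AI system name")
    purpose = st.text_area("Intended purpose")
    if st.button("Generate statement"):
        # A real generator would map these inputs onto the policy template.
        st.markdown(f"**{system_name}** is used for: {purpose}")

with explainability:
    st.header("Explainability Card")
    st.file_uploader("Upload model metadata (JSON)", type="json")

with fairness:
    st.header("Fairness Dashboard")
    st.file_uploader("Upload dataset (CSV)", type="csv")
```

Running everything through one local `streamlit run app.py` entry point matches the design goal of keeping data on the user's machine.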
References:
- https://www.finance.gov.au/government/public-data/data-and-digital-ministers-meeting/national-framework-assurance-artificial-intelligence-government
- https://www.digital.gov.au/policy/ai/policy
- https://www.digital.gov.au/sites/default/files/documents/2025-08/Australian%20Government%20AI%20technical%20standard.pdf
- https://www.govai.gov.au/
- https://www.apsreform.gov.au/research/trust-in-australian-public-services
Data Story
In 2025, government agencies face growing pressure to prove their AI systems are fair, explainable, and compliant with policy — yet most lack the tools to turn raw model outputs into clear, trustworthy insights. GlassBox AI bridges that gap by ingesting real datasets, running bias checks, generating plain‑language explainability cards, and producing transparency statements aligned with Australian assurance frameworks. In one pilot test, a sample recruitment model’s fairness dashboard revealed a 12% disparity in selection rates between demographic groups, prompting targeted data balancing that reduced the gap to under 3%. By transforming complex metrics into actionable visuals and policy‑ready documentation, GlassBox AI empowers decision‑makers to detect risks early, build public trust, and ensure AI adoption is both ethical and accountable.
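To make the pilot's 12% figure concrete: a selection-rate gap of this kind can be computed as the difference between the highest and lowest per-group selection rates (a demographic-parity-style metric). The sketch below shows one way the Fairness Dashboard might run that check, assuming a tabular dataset; the `group` and `selected` column names and the toy data are hypothetical.

```python
# fairness_check.py - a minimal sketch of a selection-rate disparity check;
# column names and example data are illustrative assumptions.
import pandas as pd

def selection_rate_gap(df: pd.DataFrame, group_col: str, outcome_col: str) -> float:
    """Return the gap between the highest and lowest group selection rates."""
    rates = df.groupby(group_col)[outcome_col].mean()
    return float(rates.max() - rates.min())

# Toy example: group A is selected at 62% (31/50) vs 50% (25/50) for group B,
# a 12-percentage-point gap like the one surfaced in the pilot.
data = pd.DataFrame({
    "group":    ["A"] * 50 + ["B"] * 50,
    "selected": [1] * 31 + [0] * 19 + [1] * 25 + [0] * 25,
})
print(f"Selection-rate gap: {selection_rate_gap(data, 'group', 'selected'):.0%}")
# -> Selection-rate gap: 12%
```

A dashboard built on a check like this can flag a gap the moment a dataset is uploaded, which is what allows the targeted data balancing described above to happen before a model reaches production.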