Project Description
The Problem
The world is racing toward the Fourth Industrial Revolution. Today, technology has become a widespread and integral resource. A 2018 release by the Australian Bureau of Statistics estimates that 97.1% of Australian households with children under 15 have access to the internet; with the rise of large language models and artificial intelligence, this figure will certainly continue to climb.
While technology brings unparalleled convenience and amenities, the pace of digital advancement has decisively outpaced governance and education, even as our reliance on technology continues to skyrocket. This gap has exacerbated a host of problems relating to digital safety: the internet is rife with realistic deepfakes, AI-generated misinformation has sown discord, and scam activity stole over $2.0 billion from our community in 2024 alone.
These risks are only amplified for vulnerable populations accessing the internet. In acknowledgment of the problem, the federal government has passed legislation banning children under 16 from accessing social media sites. But device use is endemic, and technology is a powerful tool. Solutions that rely on outright bans are therefore ineffective and unsustainable.
In light of this, we sought to create a one-stop solution to ameliorate these risks. Our tool aims to function as a digital sword, shield and salve against misinformation and cybercrime: flagging risks as they appear, equipping users with digital literacy skills, and providing assistance to those who have fallen prey to cybercrime.
Our Solution
Our solution is underpinned by the proverb that prevention is better than cure. The tool, aptly named IsThisReal, reflects a common search query used to verify the authenticity of online content. This helps the website rank higher in search results, aiding panicked netizens who may not previously have known it existed.
In particular, our tool has been designed to address the following issues:
Scams
1. You sure about that?
An AI-powered checker scans uploaded communication and detects whether it is likely to be a scam.
Upload emails, chat messages, website links, and other suspicious digital communication. The tool checks the upload against risk factors outlined in the Australian Government's publication The Little Book of Scams and holistically computes a scam score out of 10. It also provides a simple justification, facilitating quick detection and easy understanding.
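As an illustration, the scoring step described above could be sketched as a keyword-weighted rule check. This is a minimal sketch: the risk factors, trigger phrases, and weights below are invented for illustration and are not taken from The Little Book of Scams or the live tool.

```python
# Minimal sketch of a rule-based scam scorer. The risk factors and
# weights here are illustrative placeholders, not the production model.
RISK_FACTORS = {
    "urgency": (["act now", "immediately", "account suspended"], 3),
    "payment": (["gift card", "wire transfer", "bitcoin"], 3),
    "credentials": (["verify your password", "confirm your details"], 2),
    "too_good": (["you have won", "free prize"], 2),
}

def scam_score(text: str) -> tuple[int, list[str]]:
    """Return a score out of 10 and the risk factors that fired."""
    text = text.lower()
    score, reasons = 0, []
    for factor, (phrases, weight) in RISK_FACTORS.items():
        if any(phrase in text for phrase in phrases):
            score += weight
            reasons.append(factor)
    return min(score, 10), reasons
```

The list of fired factors doubles as the "simple justification" shown to the user; a real deployment would replace the keyword lists with the AI-powered classifier.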
Currently, the tool is only available in English. Multi-language compatibility, aiming to cater to Australia’s diverse demographics, is in the works.
2. Who just called me?
Suspicious phone numbers or emails can be reported by community members on the website.
Reporting these numbers through official channels is often a time-intensive and laborious process. In particular, children and people who speak English as a second language may be deterred from making a report because of the lengthy justifications and elaboration required. Our website allows users to submit just a phone number, and does not require any additional writing.
Submissions are automatically sorted, and phone numbers with a high number of reports are flagged for manual review before being published. This function facilitates reverse lookups while maintaining privacy and avoiding spam, and it also helps power the scam detection tool.
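The sorting-and-flagging step might look something like the sketch below. The review threshold of five reports is an assumed value chosen for illustration, not the tool's actual cutoff.

```python
from collections import Counter

# Assumed value for illustration: numbers reported at least this many
# times are queued for manual review before publication.
REVIEW_THRESHOLD = 5

def flag_for_review(reported_numbers: list[str]) -> list[str]:
    """Aggregate community reports and return numbers needing manual review."""
    counts = Counter(reported_numbers)
    return [number for number, n in counts.items() if n >= REVIEW_THRESHOLD]
```

Keeping the raw submissions private and publishing only reviewed numbers is what lets the system support reverse lookups without exposing reporters or rewarding spam.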
3. I've been scammed. Now what?
Falling victim to a scam is distressing enough; what happens after should not add to that distress. An AI-powered chatbot is in the works to direct victims to specifically tailored resources, simplifying the processes of reporting and recovery.
Alternatively, the website directs victims to the scam helpline of their personal bank, provides the relevant links where a report can be filed, and offers step-by-step instructions guiding them through the reporting process.
4. Wow, Cheap EKKA Tickets!
Monthly scam reports will be published on the website, highlighting ongoing community events and their associated risks. They will also feature data on the most common scam types of the previous month, with explanations of each scam type and tips on how to detect it.
These reports are currently published in English and Simplified Chinese; translation into additional languages is possible to boost accessibility.
Misinformation
1. OMG, Is that Joe Biden on a Pink Elephant?
AI-generated content, images and videos are getting increasingly realistic — spotting a “fake” is not always possible at first glance. With the rise of quick-scrolling short form content and its popularity amongst vulnerable populations, like children and the elderly, the ability to tell apart user-generated and machine-generated content is essential.
An AI-powered browser extension works as a detection tool, estimating the percentage probability that an image or video was machine-generated.
We recognise that the technology behind detecting AI content is still nascent, and possibly unreliable at times, so our tool will request simple feedback from users to refine its predictions. A prompt for feedback may also encourage users to pay greater attention to the content they're accessing, equipping them with the skills needed to spot a fake.
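One simple way the feedback loop described above could refine predictions is by re-tuning the extension's flagging threshold against user verdicts. This is a sketch under assumed names; the function and data shapes below are hypothetical, not the tool's actual API.

```python
# Sketch of a feedback loop: each entry pairs the model's probability
# that content is machine-generated with the user's verdict. The
# flagging threshold is re-tuned to best match those verdicts.
def retune_threshold(feedback: list[tuple[float, bool]],
                     default: float = 0.5) -> float:
    """feedback: (model_probability, user_confirmed_ai) pairs."""
    if not feedback:
        return default
    candidates = sorted({p for p, _ in feedback})

    def accuracy(threshold: float) -> float:
        # Fraction of verdicts the threshold would have agreed with.
        hits = sum((p >= threshold) == is_ai for p, is_ai in feedback)
        return hits / len(feedback)

    return max(candidates, key=accuracy)
```

A production system would likely use proper probability calibration over far more data, but even this crude version illustrates how human feedback keeps the final say with users rather than the model.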

2. That's just wrong!
Misinformation doesn’t just come from AI-generated content. In today’s world, the anonymity of the internet provides fertile ground for “ragebait” posts provoking strong negative emotions, and erroneous information intending to instigate violence or promote hateful mentalities.
Currently, the Australian Government's eSafety guide advises users on how to report misinformation or fake news to the platform on which they encountered it, but the guide is lengthy and requires clicking through several tabs before landing on the relevant report link. A future AI tool planned for our website will direct users quickly to the correct page and offer simple instructions on making an effective report. This is especially useful for younger users who may not enjoy interrupting their scroll time to read wordy paragraphs; simplifying the process may incentivise them to report content.
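At its core, directing users to the correct report page is a routing problem. The sketch below uses placeholder URLs (each platform's real reporting link would be substituted in practice), with a general fallback for unrecognised platforms.

```python
# Sketch of routing a user to the right reporting page. The URLs are
# placeholders, not the platforms' actual report links.
REPORT_LINKS = {
    "facebook": "https://example.com/report/facebook",
    "instagram": "https://example.com/report/instagram",
    "tiktok": "https://example.com/report/tiktok",
}

FALLBACK_GUIDE = "https://example.com/report/general"

def report_link(platform: str) -> str:
    """Map a platform name to its report page, with a general fallback."""
    return REPORT_LINKS.get(platform.strip().lower(), FALLBACK_GUIDE)
```

The planned AI layer would sit in front of this lookup, inferring the platform from a free-text description so users never have to navigate the tabs themselves.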
Overall
For the sake of education, transparency, and integrity, all the information used in the making of our tools will be accessible on our website.
Additionally, some users may prefer not to use AI tools. To account for this, our website will provide the same instructions in written form, so users can click through the pages to find information applicable to their situation, as well as generally stay informed on digital trends.
Try it out!
Our website is live at https://projects-uzez.onrender.com/
Key Ideas
Detection and protection: Our AI-powered tools work in the background to scan content for digital risks, and flag suspicious communication to prevent users from falling victim to scams or misinformation.
Trust and integrity: All information used to train our AI tools is readily available, ensuring transparency and upholding integrity. AI capabilities are still developing, and information provided or decisions made can sometimes be unreliable. Users are prompted to provide feedback on AI decisions, allowing humans — not machines — to have the final say. A request for feedback could also prompt users to take a closer look at the content they’re interacting with, and critically zero in on discrepancies they may have previously missed.
Inclusion and accessibility: Currently, our monthly scam reports are published in English and Chinese, allowing a greater proportion of the population to understand and access them. Our AI misinformation tool was designed specifically with children in mind; in an age of short-form content, a mobile application with these safeguards would equip them with the relevant skills required to differentiate between machine-generated and human-generated content. Likewise, our scam tools were designed for all populations, including the elderly, and people who use English as a second language, in hopes of simplifying the detection process and removing barriers to reporting scams.
Governance and responsibility: All data and statistics used are sourced from official government reports. Our monthly scam reports utilise Scamwatch’s latest statistics, and display them in an accessible and easy-to-digest manner. Government websites can sometimes be overwhelming, because of the quantity of information presented — our website curates that information, and presents it in a manner that’s easily navigable.
Digital literacy and empowerment: In hopes of raising awareness and instilling key digital literacy skills in the population, the website is named so that it appears higher in search results when a related query is entered. AI tools aren't everyone's cup of tea; as such, the website provides all the information used to train our models, so users can click through educational content, make their own decisions, and acquire the skills needed to confidently navigate the digital world.