What happens when machines begin to understand us?
That’s the theme we unpacked at our latest AI & Society meetup, held in April 2025 at The Precinct in Brisbane. Thanks to John Crook, CEO of Rule 30 AI, we zoomed in on one of AI’s most transformative breakthroughs: Word2Vec – the model that helped AI go beyond recognising words to understanding what they mean in context.
Here’s a recap!
🎤 The Semantic Rosetta Stone — A keynote by John Crook
We were honoured to be joined by John Crook, CEO and Founder of Rule 30 AI, who delivered a rich, insightful talk that connected history, linguistics, and machine learning.
John opened with the story of the Rosetta Stone: discovered in 1799 and critical to deciphering Egyptian hieroglyphs. Its trilingual inscriptions — in hieroglyphs, Egyptian Demotic, and Greek — allowed scholars to decode a long-lost language by cross-referencing familiar structures.
He drew a powerful analogy to Word2Vec, developed by Tomáš Mikolov and his team at Google in 2013. Just as the Rosetta Stone helped humans translate symbols into meaning, Word2Vec helped machines understand the meaning of words based on their context.
John walked us through the evolution of vector spaces:
- How computers needed a way to represent meaning numerically
- Why simple 1D or 2D spaces were too limited
- How Mikolov’s breakthrough introduced 300-dimensional word vectors trained on massive corpora
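For the hands-on folks, here’s a minimal sketch of what training such vectors looks like today using the open-source gensim library. To be clear, this isn’t John’s demo code: the toy corpus is invented, and the parameters are shrunk to suit it.

```python
# A minimal, hedged sketch of training Word2Vec-style vectors with
# gensim. The toy corpus below is invented for illustration; real
# models are trained on corpora of billions of words.
from gensim.models import Word2Vec

sentences = [
    ["the", "king", "rules", "the", "kingdom"],
    ["the", "queen", "rules", "the", "kingdom"],
    ["paris", "is", "the", "capital", "of", "france"],
    ["rome", "is", "the", "capital", "of", "italy"],
]

# vector_size=300 matches the dimensionality John described;
# window and min_count are kept small for this tiny corpus.
model = Word2Vec(sentences, vector_size=300, window=5, min_count=1)

print(model.wv["king"].shape)  # (300,) — one 300-dimensional vector per word
```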
He showed how these vectors allow relationships like:
- King – man + woman ≈ Queen
- Paris – France + Italy ≈ Rome
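Those analogies aren’t just slideware: with the pretrained Google News vectors (downloadable via gensim; roughly 1.6 GB on first use), you can reproduce them in a few lines. A hedged sketch:

```python
# Reproducing the classic analogy arithmetic with the pretrained
# 300-dimensional Google News vectors released by Mikolov's team.
import gensim.downloader as api

wv = api.load("word2vec-google-news-300")  # downloads on first use

# King - man + woman ≈ Queen
print(wv.most_similar(positive=["king", "woman"], negative=["man"], topn=1))
# -> [('queen', 0.71...)]

# Paris - France + Italy ≈ Rome
print(wv.most_similar(positive=["Paris", "Italy"], negative=["France"], topn=1))
# -> [('Rome', ...)]
```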
John ended with a live demo of an interactive tool that visualises the positioning of words in high-dimensional space, making abstract maths tangible for all.
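If you want to poke at that geometry yourself, one common trick (a rough approximation of what the demo tool does, not the tool itself) is to project the 300 dimensions down to two with PCA. A sketch, reusing `wv` from the snippet above:

```python
# Project a handful of 300-dimensional vectors down to 2D so related
# words land near each other on a plot. PCA discards a lot of
# information, but the clusters are still suggestive.
import matplotlib.pyplot as plt
from sklearn.decomposition import PCA

words = ["king", "queen", "man", "woman", "Paris", "France", "Rome", "Italy"]
coords = PCA(n_components=2).fit_transform([wv[w] for w in words])

for (x, y), word in zip(coords, words):
    plt.scatter(x, y)
    plt.annotate(word, (x, y))
plt.show()
```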
It was a brilliant reminder that AI’s power often lies in its ability to mirror our own meaning-making — with clarity, precision, and complexity.
🧠 The AI Pulse
A month’s a long time in AI. It’s obviously impossible to cover everything, but since we met last month (March), a few trends have stood out…
- The Ghibli effect: ChatGPT usage hit a record high after its viral image-generation feature sparked a frenzy of Studio Ghibli-style AI art.
- Meta’s LLaMA 3 is democratising access to powerful language models
- OpenAI’s Sora continues to blur lines between reality and synthetic video
- Anthropic’s Claude 3 Opus may now surpass GPT-4 in some tasks
- And with elections approaching worldwide, AI-driven misinformation is an urgent theme

🤝 Workshop
In the workshop we split off into groups to discuss applications of AI in smart cities and how language-based AI could best be used.
If you were part of a group below, add a comment on what I’ve missed.
Disclaimer: I’ve missed a lot. We didn’t get time for formal feedback from every group, so here are the questions and a high-level overview.
Time is a tricky thing, but allowing time for feedback next time is a must!
🟦 Group 1 – AI in Public Services
How can AI that understands language improve services in cities?
There are lots of ways AI can improve public services.
Legal Example. A highlight of the discussion was how AI could help reduce bottlenecks in the legal system by triaging low-risk or high-impact legal cases based on natural language descriptions. The group also emphasised the importance of human oversight, especially when legal nuance or empathy is involved.
Transport Example. AI could use computer vision and traffic data to identify crash risks in real time or to improve congestion management. The idea of “just-in-time” AI decision-making in city systems resonated — provided the data is diverse and robust.
Takeaway: AI can enhance fairness and efficiency — but must be built on context-aware, human-centred design.
🟥 Group 2 – Risks of Misunderstanding
What are the dangers when AI misinterprets human meaning?
This was a big question… what are the ethical, emotional, and safety risks when AI misunderstands human intent? For example:
- Resume scanners penalising career gaps due to caregiving
- AI misreading sarcasm or satire, and accidentally spreading misinformation
Some possible things to consider…
Trust is a crucial part of this. If users feel misunderstood, adoption will likely drop, even if the tech is useful. In critical sectors like health or justice, misunderstanding could be catastrophic.
Takeaway: AI must learn to listen like humans do — with empathy, awareness, and fallbacks. Context is everything.
🟧 Group 3 – Design a Brisbane Chatbot
If we built a local chatbot, what should it do – and not do?
Some possible things to consider…
- Real-time local info: TransLink, weather, events, road closures
- Multilingual support (Mandarin, Vietnamese, Hindi, etc.)
- Friendly, conversational tone with Aussie warmth
- Integration with smart city features like e-scooter locations or parking
What it should not do:
- Give generic global answers
- Overreach on data collection
- Replace human judgement in legal or emergency advice
- Use cold or bureaucratic language
Takeaway: Trustworthy bots are local, transparent, human-informed — and built with community voice at the centre.
🟨 Group 4 – Finding Insights in Feedback
How could AI help cities listen better to communities?
From climate surveys to community feedback on development, there’s potential for AI to surface themes and sentiments from massive text datasets.
The group raised concerns about transparency, explainability and bias — if we’re going to automate the “listening”, we must also show how we’re interpreting what people say.
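To make that concrete, here’s a hedged sketch of one simple approach that ties back to the keynote: embed each piece of feedback by averaging its word vectors (reusing `wv` from the snippets above), then cluster to surface recurring themes. The comments are invented, and a real pipeline would need the transparency safeguards the group called for.

```python
# Surface rough themes in community feedback by averaging word
# vectors per comment and clustering the results. Illustrative only.
import numpy as np
from sklearn.cluster import KMeans

comments = [
    "More shade trees along the bikeway please",
    "The bus timetable changes make my commute longer",
    "Love the new riverside park but it needs more bins",
]

def embed(text):
    # Average the vectors of the words the model knows.
    vectors = [wv[w] for w in text.split() if w in wv]
    return np.mean(vectors, axis=0)

X = np.stack([embed(c) for c in comments])
labels = KMeans(n_clusters=2, n_init=10).fit_predict(X)
for theme, comment in zip(labels, comments):
    print(theme, comment)
```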
Takeaway: AI should amplify civic voices — not overwrite or simplify them. NLP needs transparency and civic tech needs trust.
🟩 Group 5 – Fairness and Inclusion in Language Models
How do we ensure AI understands all voices fairly?
This group explored how language models can easily marginalise or misrepresent communities if not trained inclusively.
Some possible things to consider…
- Accessible design (including low literacy & disability-friendly features)
- Cultural nuance and code-switching awareness
- Community review and testing of chatbot responses
- Representation in training datasets
The group landed on the idea of ESG-aligned AI design — inclusion isn’t an add-on; it’s a foundation.
Takeaway: If AI is to be used in society, it must be built by and for all of society.
🙌 Final Reflections
AI & Society is about more than machine learning. It’s about meaning-making, responsibility, and connection. April’s event reminded us that even the most technical breakthroughs, like Word2Vec, sit within deep social and ethical contexts.
Thank you to all who joined — and to those who introduced yourselves, challenged assumptions, and helped build something meaningful.
We’ll see you at the next one. 🗓️ AI & Society – Brisbane | May Event – May 14, 2025. Register to attend here.
Stay curious 🧐
Sophia | AI & Society