As the year gets underway, artificial intelligence continues to surface both opportunities and cautionary lessons. This January edition focuses on emerging AI tools, cultural and ethical reflections, real-world misuse and rapid advances in robotics.

AI in New Zealand

Engineering New Zealand's upcoming free AI webinars

  • Turning AI into action: This webinar focuses on moving beyond experimentation to practical, value-driven AI use. The session will explore New Zealand-specific case studies, from Auckland Transport’s predictive maintenance models to Air New Zealand’s enterprise GPT integration, demonstrating how local leaders are already scaling productivity. It also features live demonstrations of deep-research tools, meeting-transcript analysis and citation-backed platforms like NotebookLM that ground their outputs in source material, helping to reduce “hallucinations” and protect data integrity.
  • Computational design on CMUA – revisiting with an AI lens: The Canterbury Multi-Use Arena (Te Kaha) is one of the most complex venue projects delivered in New Zealand, involving difficult trade-offs between structural performance, sustainability, constructability and long-term operations. While the original design pre-dated modern AI tools, the project reflects the type of data-intensive, multi-criteria decision-making where AI can add real value. This session revisits Te Kaha through an AI lens, showing how Mott MacDonald is using AI to complement computational design workflows and support better, faster engineering decisions.

AI Forum NZ webinars: all webinars in the AEC series can be accessed here. The following webinars are coming up:

  • AI tools showcase: The AI Forum of New Zealand is hosting a practical showcase of AI tools, with demonstrations focused on real-world use rather than hype. This will cover bid prep and proposal writing, automated drawing review and clash detection, meeting transcription and action extraction, and data analytics for project performance.
  • Real New Zealand AI case studies and success stories from the field: This webinar shares practical examples of how engineering firms, including GHD and Mott MacDonald, are implementing AI with measurable results. It covers real-world use cases such as document automation and RFI processing, AI-supported project scheduling and resource optimisation, design automation and generative design, and quality control using computer vision. The session is relevant for anyone interested in how AI is disrupting engineering practice and creating new opportunities to evolve how we work.

The latest news

The AI muscle gap: This New Zealand AI insight explores the growing gap between organisations investing in AI tools and their ability to use them effectively. It argues that capability, governance and organisational learning are now the limiting factors for AI value, not technology access. The piece is a useful reminder that building “AI muscle” takes sustained effort, leadership and real-world practice, particularly in regulated and safety-critical sectors like engineering.

New Zealand AI research platform proposal successful in first stage: University of Auckland researchers, working with partners across New Zealand universities, research organisations and businesses, have been awarded seed funding for a test bed to develop agentic AI. These next-generation AI assistants, or agents, will learn and maintain themselves to perform tasks autonomously but safely in conjunction with humans.

AI in engineering

The IEEE Technology 2026 Predictions Report: This report examines the AI and digital technology trends expected to shape industry and professional practice over the coming years. It covers advances in artificial intelligence, cybersecurity, data governance and human–AI interaction, with a clear focus on implications for real-world systems and decision-making.

Structured AI: This company develops engineering AI tools focused on document-heavy regulatory processes, including consenting workflows. Its products are positioned for use in regulatory environments where organisations are exploring how AI could help structure applications, check completeness and support faster processing, while retaining human decision-making.

How civil engineers can strike the AI balance: A practical overview from ASCE on where AI is already being applied in civil engineering (inspection, traffic, predictive maintenance), with a useful caution on governance and professional judgement.

Generative AI approaches for architectural design automation: This recent research paper explores how AI is being used to improve quality assurance and decision-making in complex engineering and construction projects. It focuses on data-driven methods for defect detection, consistency and managing uncertainty across multidisciplinary workflows. The paper also highlights practical challenges, including data quality, transparency and trust in AI-supported decisions. It provides a useful snapshot of how AI is moving from research into repeatable engineering practice.

How generative design is reshaping engineering workflows: This article explores how generative design uses AI to rapidly generate and evaluate multiple design options against defined constraints. It highlights benefits such as faster iteration, improved performance trade-offs and greater support for early-stage decision-making. The piece also notes the shift in the engineer’s role towards setting parameters, interpreting outputs and exercising professional judgement. Overall, it provides a practical view of how AI-enabled generative tools are being integrated into real engineering practice.
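To make that generate-and-evaluate loop concrete, the short Python sketch below proposes rectangular beam sections across a parameter grid, discards options that breach a deflection limit and ranks the survivors by material use. The span, load, limit and section sizes are illustrative assumptions made for this newsletter, not figures from the article.

import itertools

# Illustrative inputs only - not from any real project.
E = 200e9                      # Young's modulus for steel (Pa)
SPAN = 6.0                     # simply supported span (m)
UDL = 10e3                     # uniformly distributed load (N/m)
DEFLECTION_LIMIT = SPAN / 500  # assumed serviceability limit (m)

def evaluate(b, h):
    """Return (feasible, deflection, area) for a solid rectangular section b x h in metres."""
    I = b * h ** 3 / 12                               # second moment of area
    deflection = 5 * UDL * SPAN ** 4 / (384 * E * I)  # midspan deflection, UDL on a simple span
    area = b * h                                      # proxy for material use / cost
    return deflection <= DEFLECTION_LIMIT, deflection, area

# Generate candidate options, keep the feasible ones, then rank them.
widths = [0.10, 0.15, 0.20, 0.25]
depths = [0.20, 0.30, 0.40, 0.50]
options = []
for b, h in itertools.product(widths, depths):
    feasible, deflection, area = evaluate(b, h)
    if feasible:
        options.append((area, deflection, b, h))

for area, deflection, b, h in sorted(options)[:3]:
    print(f"{b:.2f} x {h:.2f} m  deflection {deflection * 1000:.1f} mm  area {area:.3f} m^2")

Real generative design tools work with far richer geometry, solvers and objectives, but the underlying pattern of proposing options, checking them against constraints and ranking the survivors is the same; the engineer still sets the parameters and judges the results.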

AI global

When simple AI tools go wrong: the Clawdbot case. A browser-based AI assistant called Clawdbot has gone viral for promising easy automation of everyday online tasks. However, security researchers have raised serious concerns about how it operates, including poor safeguards around credentials and website access. Follow-up analysis has highlighted how easily “helpful” AI agents can become a cybersecurity risk when deployed without robust controls. This is a timely reminder that convenience does not equal safety, particularly when AI tools interact directly with live systems and user accounts. Related reading: Clawdbot: when easy AI becomes a security nightmare

The risks of fake AI imagery in the real world: A recent UK case has highlighted the misuse of AI-generated images presented as genuine evidence. As synthetic media becomes more convincing, this raises serious questions around trust, verification and professional responsibility. Robust review processes and human judgement remain essential safeguards.

Robotic roundup

Robotics in review: editors look back at 2025. A curated overview of 2025 robotics developments, useful if you want a broader context setter rather than a single-technology story.

Walmart expands drone delivery at scale: Walmart is partnering with Wing to roll out what it describes as the world’s largest residential drone delivery service, targeting more than 270 locations by 2027. Early data suggests strong repeat use, indicating growing consumer comfort with autonomous delivery.

Columbia’s EMO robot learns to lip-sync like a human: Researchers at Columbia University have developed a robotic head capable of learning human-like facial expressions and lip synchronisation by observing itself and analysing video data. While experimental, the work highlights rapid progress in embodied AI and human–robot interaction.

AI tools and models

Moonshot AI releases Kimi K2.5: Moonshot AI has released Kimi K2.5, a new model designed around agent-based workflows and task coordination. The release reflects a broader shift toward multi-agent systems that can plan, delegate and execute complex tasks.

Introducing ChatGPT Health: OpenAI has announced new healthcare-focused capabilities for ChatGPT, aimed at supporting clinicians and patients with trusted, medically grounded information. The announcement reinforces the importance of domain constraints and safety in high-risk applications.

Interesting reading

The adolescence of technology: Dario Amodei reflects on AI as a technology entering an “adolescent” phase: powerful, fast-moving and not yet fully understood. The essay offers a useful lens for thinking about governance, restraint and responsibility during periods of rapid capability growth.

Neurodiversity, brains and machines: This reflective piece explores the intersection between neurodiversity and AI, considering how different ways of thinking can both shape and be shaped by intelligent systems. It raises important questions about inclusion, cognitive diversity and the assumptions embedded in AI design.

Management as an AI superpower: Ethan Mollick argues that the value of AI in organisations increasingly depends on how well it is managed, directed and constrained by humans. Rather than replacing expertise, AI amplifies the importance of clear goals, good judgement and strong leadership.

Give it a go

How to build a simple app using Claude Artifacts

Anthropic’s Claude now allows users to create small apps and tools directly within the chat interface using “Artifacts”. You describe a problem in plain language, Claude asks clarifying questions where needed, and it then generates a working prototype that can be refined over time. This lowers the barrier for non-developers to explore lightweight AI-assisted tools, while still requiring careful validation.
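As a purely illustrative example (the scenario below is invented for this newsletter), a plain-language prompt along these lines is enough to start a prototype:

“Build me a simple tool where I can paste site inspection notes, have them grouped by issue type, and see anything that mentions safety flagged at the top.”

Claude can then ask clarifying questions, produce the artifact and iterate on it in the same conversation; as with any AI output, the result still needs checking before it is used for real work.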


Final note

January is often a quieter month, but the themes emerging here are consistent with what we expect to see throughout 2026: more capable AI agents; deeper integration with physical systems; and growing emphasis on governance, judgement and professional responsibility.

If you have tools, case studies or events you would like included in future editions, please get in touch.
