31 Mar 2026
AI is showing up not just as a new set of tools, but as a governance, infrastructure, and deployment challenge. This edition has a stronger focus on regulation, privacy, cyber risk, sovereignty, and the practical realities of adopting AI responsibly.
AI in New Zealand
Engineering New Zealand upcoming AI webinars
Advancing Construction with BIM Enabled Robotics. As they began the New Dunedin Hospital Outpatients Project, Southbase aimed to push the boundaries of speed, accuracy, and safety for delivering this complex asset. Building on their extensive BIM coordination and digital implementation, they achieved two New Zealand construction firsts with the Hilti Jaibot and the HP SitePrint. This presentation will cover the digital requirements necessary to enable these workflows on site.
AI and Automation Workshop In this May webinar, Amir Mohammadi, co-founder and director of Nodey, shares how you can use AI and automation to transform structural engineering workflows and reduce repetitive administrative work. Through real examples and live demonstrations, he will show how tools such as n8n and Cursor can automate workflows, streamline business processes, and support rapid development of practical internal tools. Participants will also receive resources and ideas to help explore similar automation approaches in their own organisations.
For those who missed our last Panel Q&A session
Engineering and AI Panel Q&A Webinar 3 The third in our Q&A series, this panel webinar explored AI in 2026: expectations, risks, and opportunities for engineering companies. Hear what is actually making a difference, what is genuinely useful right now, and how engineering companies should decide what to adopt and what to avoid.
AI Forum NZ webinars
Real AI case studies from the field Examples from GHD and Mott MacDonald show how AI is being used in practice for document automation, project scheduling, design automation, and quality control using computer vision. The webinar offers a strong local picture of AI moving from experimentation into measurable business use.
Other news from New Zealand
New Zealand risks falling behind in AI RNZ reports on a proposed $3.5 billion AI-focused data centre near Invercargill, highlighting both the scale of the infrastructure opportunity and the growing concern that New Zealand lacks the capability base to make full use of it. For engineers, it is a useful reminder that AI adoption is as much about skills and systems as it is about hardware.
TEKEVER and EPE partner in New Zealand This partnership points to growing momentum in New Zealand’s autonomous systems capability, particularly for maritime, land and surveillance applications. It is relevant for engineers working in defence, remote operations, and critical infrastructure monitoring.
Halter AI Cow Collars Reach $2B Valuation Halter’s growth continues to show how AI can create real commercial value in a New Zealand-founded business by combining software, sensing and automation in a practical field application. It is a strong local example of AI delivering productivity gains in a core New Zealand sector.
AI and cyber risk Tech New Zealand’s National Cyber Security Summit is worth watching for signals on how AI is changing cyber threats, resilience expectations, and risk management in New Zealand. For engineers, the relevance is less about AI product news and more about what secure adoption means in practice, especially for digital systems and critical infrastructure.
AI and Engineering
AI Forum NZ AEC knowledge hub The AI Forum Knowledge Hub brings together practical tools, case studies and guidance to help people understand and apply AI in real-world settings. It is a useful starting point for engineers who want trusted, accessible material rather than hype-driven commentary.
AI and the grid bottleneck Growing demand from AI data centres is putting pressure on electricity systems and raising new challenges for utilities, grid operators, and infrastructure developers. Recent coverage points to increasing attention on transmission constraints, flexible demand, and co-located generation as part of the response.
AI in materials discovery and advanced manufacturing A recent review in Communications Materials looks at the role of open-source AI infrastructure in materials discovery and advanced manufacturing. It offers a stronger connection to chemical, materials, and process engineering than many of the more general AI stories in circulation.
AI and Governance
Agents of chaos A new study explores what happened when autonomous AI agents were deployed in a live environment with persistent memory, email, Discord, file access, and shell execution. The findings are a useful warning for anyone thinking about agentic AI in practice, highlighting risks around unauthorised actions, information disclosure, identity spoofing, and accountability when systems are given real autonomy.
Can the law keep up with AI? As AI capability moves faster than legal reform, questions of liability, accountability, and oversight are becoming harder to ignore. The University of Auckland article is a useful big-picture read for engineers working in regulated fields or higher-risk applications.
New Zealand AI regulatory update New Zealand is shaping its AI approach through existing law, public sector guidance, and the national AI strategy rather than a standalone AI Act. For engineers, the summary offers a practical local reference point on governance, compliance, and responsible deployment.
Sovereign by design The report argues that AI sovereignty is not about doing everything domestically, but about reducing dependence on foreign firms at the infrastructure, cloud, and model layers so countries retain strategic choice. Focusing on Canada, it sets out where the country is exposed, where it still has strengths, and what options it has to build more control over data, compute, and AI deployment.
AI music fraud case ends in guilty plea A US man has pleaded guilty after using AI-generated songs and bot-driven streams to collect millions in music royalties. The case is one of the first successful prosecutions of AI-related fraud in the music industry and shows how generative tools can be used to scale deception as well as productivity.
Hong Kong issues generative AI guideline Hong Kong’s Digital Policy Office has released a practical guideline for developers, service providers, and users of generative AI, covering technical limits, governance principles, and risks such as data leakage, bias, and error. It is framed as operational guidance rather than a new AI law, with an emphasis on balancing innovation and responsibility.
AI global
International AI safety report 2026 The latest International AI Safety Report offers a global evidence review of what today’s most capable AI systems can do, where their limits still sit, and what risks are emerging as they are deployed more widely. It highlights continued gains in reasoning, coding, and autonomous operation, while also pointing to growing concern about deepfakes, cyber misuse, reliability failures, and wider systemic effects.
Blue Origin enters the space data centre race Blue Origin’s reported move into space-linked data centre infrastructure shows how quickly AI demand is spilling into adjacent sectors such as energy, aerospace and compute infrastructure. For engineers, it is another sign that AI growth is creating new design and systems challenges well beyond software alone.
Anthropic challenges US risk label Anthropic has filed suit after being labelled a supply chain risk by the US government, following a dispute over military use of its AI systems. The case highlights the growing tension between AI safety positions, defence contracting, and government control over critical digital infrastructure.
AI-powered cyborg cockroaches This story points to the widening range of AI-enabled robotics applications, including small autonomous systems for sensing and surveillance in difficult environments. It is a reminder that robotics development is moving well beyond conventional industrial formats.
AI helps design cancer vaccine for a dog This story highlights how generative AI and protein modelling tools are beginning to support more tailored approaches in biomedical research. It is an interesting example of AI tools being combined in specialised scientific workflows rather than used in isolation.
How to onboard AI agents at work Harvard Business Review argues that AI agents need structured onboarding much like new staff do, with clear roles, boundaries, access settings, and performance expectations. It is a useful management lens for organisations moving from occasional prompting toward more agentic use of AI in day-to-day work.
The factory model for software engineering Addy Osmani argues that coding agents are changing software engineering from a craft model centred on manual production toward a factory model built on orchestration, review, and leverage. The piece is less about replacing engineers than about changing where value sits as agents take on more of the implementation work.
AI tools and models
TurboQuant cuts AI memory use Google Research says TurboQuant can sharply reduce the memory needed for key-value caches in large language models, reporting at least a six-fold cut in memory use and up to eight times faster attention computation. The work is aimed at improving long-context inference, where memory movement increasingly limits performance and cost.
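To see why KV cache memory matters at long context, a rough back-of-envelope sketch helps: cache size scales with layers, KV heads, head dimension, sequence length, and bits per stored value, so dropping precision from 16-bit floats to a few bits per value gives roughly the savings reported. The model shape below is hypothetical and not tied to any specific system, and this is not the TurboQuant algorithm itself, just the arithmetic behind the headline numbers.

```python
def kv_cache_bytes(layers, kv_heads, head_dim, seq_len, bits_per_value):
    """Approximate KV cache size: keys and values for every layer and token."""
    num_values = 2 * layers * kv_heads * head_dim * seq_len  # 2 = keys + values
    return num_values * bits_per_value / 8

# Hypothetical model shape: 32 layers, 8 KV heads, head dimension 128,
# serving a 128k-token context.
fp16 = kv_cache_bytes(32, 8, 128, seq_len=128_000, bits_per_value=16)
quant = kv_cache_bytes(32, 8, 128, seq_len=128_000, bits_per_value=2.5)
print(f"fp16 cache: {fp16 / 1e9:.1f} GB, quantised: {quant / 1e9:.1f} GB")
print(f"reduction: {fp16 / quant:.1f}x")
```

Even at modest model sizes, the fp16 cache runs to tens of gigabytes per long-context request, which is why moving less memory also speeds up attention.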
OpenAI launches GPT-5.4 OpenAI has released GPT-5.4, positioning it as a model for professional work with stronger reasoning, coding and tool use across documents, spreadsheets and presentations. It is a useful marker of where leading AI tools are heading, with more emphasis on real workflow support rather than chat alone.
Gemini updates for Workspace Google’s latest Gemini updates are aimed at making Workspace tools more useful for drafting, analysis, slide creation and file search using context from a user’s own documents, email and the web. It is a good example of AI becoming more embedded in everyday work software.
Gemini Embedding 2 Google has released Gemini Embedding 2, its first natively multimodal embedding model, which maps text, images, video, audio, and documents into a single embedding space. Google says it is designed for retrieval, classification, and search across different media types, with support for more than 100 languages and public preview access through the Gemini API and Vertex AI.
Copilot Cowork Microsoft has launched Copilot Cowork as an execution layer inside Microsoft 365 Copilot, designed to delegate tasks, coordinate workflows, and act across emails, meetings, files, and data within Microsoft 365. Microsoft says it is built on the Anthropic technology behind Claude Cowork, with Microsoft security, governance, and audit controls added.
Robotic roundup
Humanoid robot plays tennis Real-time control, balance, and perception are improving quickly in embodied AI, and a humanoid robot rallying in tennis is a vivid example of that progress. Although still experimental, it points to robotics capability expanding into more dynamic environments.
Amazon tests doorstep delivery robots Amazon’s acquisition of Rivr signals continued investment in physical AI for last-mile logistics, particularly in the difficult final metres between vehicle and customer. It is another indicator that robotics development is increasingly focused on practical deployment challenges, not just warehouse automation.
Robot dogs guard major data centres Robotic security systems are beginning to appear in high-value infrastructure environments such as major data centres, where continuous monitoring and perimeter inspection are critical. The story is a useful example of robotics moving into specialist operational roles where reliability and risk reduction matter.
Gecko Robotics lands major US Navy deal Gecko Robotics’ latest defence contract shows the continuing demand for robotics in inspection, maintenance and asset intelligence for critical infrastructure and military systems. It reinforces the value of robotics where access is difficult, risk is high, and asset condition data matters.
Service robot malfunctions A useful reminder that real-world deployment brings edge cases, safety issues and messy human environments that robotics systems still struggle with.
Interesting reading
Humanity’s Last Exam Benchmark scores often overstate how capable frontier models really are, which is why more demanding tests are starting to matter. This paper introduces one of the toughest examples yet, designed to probe expert-level reasoning across a wide range of domains.
What 81,000 people want from AI Anthropic’s research offers a broad picture of how people want AI systems to behave, including preferences around helpfulness, tone, values and control. It is useful reading for anyone interested in AI alignment, design choices and what users actually expect from these tools.
Gen AI as a 24x7 Tutor This paper explores how generative AI tools perform as tutors for engineering and maths content, comparing different GPT-based approaches. It is relevant for educators and engineers interested in AI-supported learning, and in the limits of these tools in technical teaching contexts.
Give it a go
Comparing AI tools in parallel Side-by-side testing can reveal far more than vendor claims, and this AI Forum resource sets out a practical way to compare tools on the same task. Teams weighing up options will find it a useful method for judging accuracy, reasoning, and usability.
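One lightweight way to run such a comparison is to send the same prompt to every tool and score each response against a shared checklist of points it should cover. The harness below is a hypothetical sketch, not the AI Forum's method: the "tools" are stand-in callables with canned answers where real vendor API calls would go, and the checklist scoring is deliberately crude.

```python
def score(response, checklist):
    """Fraction of required checklist points mentioned in the response."""
    hits = sum(1 for item in checklist if item.lower() in response.lower())
    return hits / len(checklist)

def compare(tools, prompt, checklist):
    """Run every tool on the same prompt and rank them by checklist score."""
    results = {name: score(fn(prompt), checklist) for name, fn in tools.items()}
    return sorted(results.items(), key=lambda kv: kv[1], reverse=True)

# Stand-in tools with canned answers, for illustration only
tools = {
    "tool_a": lambda p: "Bending moment governs; check deflection and shear.",
    "tool_b": lambda p: "Check deflection.",
}
checklist = ["bending moment", "deflection", "shear"]
print(compare(tools, "Review this beam design for a 6 m span", checklist))
```

Keyword matching is a blunt measure; in practice the same structure works with a human reviewer or a scoring rubric filling in for the `score` function.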
If you have tools, case studies or events you would like included in future editions or just want to let us know what you want more of, please get in touch.