The tendency of large language models (LLMs) to "hallucinate" continues to trouble CIOs eyeing production use cases, even as work on mitigations such as fine-tuning and retrieval-augmented generation (RAG) continues.
