LiteLLM Attack: A Wake-Up Call for New Zealand's AI Community
A Python package with 97 million monthly downloads was compromised yesterday — and it's a reminder that building AI applications comes with security responsibilities many of us haven't fully considered.
Andrej Karpathy, former Tesla AI director and OpenAI founding member, called supply chain attacks "basically the scariest thing imaginable in modern software" after LiteLLM version 1.82.8 was found to be stealing SSH keys, API credentials, and cryptocurrency wallets from developer machines.
But rather than panic, let's look at what happened and what practical steps Kiwi developers and AI startups can take to protect themselves.
What Actually Happened
LiteLLM is a popular library that provides a unified interface to dozens of AI model APIs — OpenAI, Anthropic, Google, AWS Bedrock, and many others. If you're building AI applications in Python, there's a good chance you've used it.
Yesterday, a compromised version (1.82.8) was uploaded to PyPI. For about an hour, anyone who ran:
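For instance, any routine, unpinned install or upgrade along these lines (exact command varies by setup):

```shell
# An unpinned install or upgrade during that window could pull in 1.82.8
pip install --upgrade litellm
```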
...potentially had their credentials stolen. The malicious code targeted:
- SSH keys and configurations
- AWS, GCP, and Azure credentials
- Kubernetes configs and CI/CD secrets
- API keys stored in environment variables
- Cryptocurrency wallets
- Git credentials and shell history
The attack came to light when Callum McMahon's machine ran out of RAM and crashed after installing the package — a bug in the attacker's own code caused the crash and accidentally exposed the compromise.
Am I Affected?
Most likely not. The poisoned version was only live for about an hour. If you haven't installed or updated LiteLLM in the last 24 hours, you're probably fine.
But it's worth checking:
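A quick check using pip's own metadata (a minimal sketch; adjust for your environment):

```shell
# Print the installed LiteLLM version, if any; 1.82.8 was the compromised release
installed=$(pip show litellm 2>/dev/null | awk '/^Version:/ {print $2}')
if [ "$installed" = "1.82.8" ]; then
    echo "WARNING: compromised version 1.82.8 installed -- rotate credentials now"
elif [ -n "$installed" ]; then
    echo "litellm $installed installed (not the compromised release)"
else
    echo "litellm is not installed"
fi
```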
Also check if you have DSPy or other packages that depend on LiteLLM — they may have pulled in the compromised version as a transitive dependency.
✅ Practical Steps to Protect Your AI Project
1. Pin your dependencies. Don't use loose version constraints like litellm>=1.64.0. Use exact versions:
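In requirements.txt, that means pinning the exact release you have tested (version numbers here are illustrative):

```
# requirements.txt -- exact pins only, no >= ranges
litellm==1.64.0
```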
2. Use lock files. If you're using Poetry or pip-tools, commit your lock files. They freeze the exact versions you've tested.
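With pip-tools, for example, you compile a loose requirements.in into a fully pinned, hashed lock file and install exactly that (a sketch; Poetry's poetry.lock serves the same purpose):

```shell
pip install pip-tools

# Compile the loose spec into exact, hash-verified pins
echo "litellm" > requirements.in
pip-compile --generate-hashes requirements.in -o requirements.txt

# Install exactly what the lock file says, nothing else
pip-sync requirements.txt
```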
3. Run dependency audits regularly.
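pip-audit, maintained by the PyPA, checks installed packages against known-vulnerability databases:

```shell
pip install pip-audit
pip-audit   # scans the active environment for known-vulnerable packages
```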
4. Separate secrets from code. Never commit .env files. Use environment variables or secret managers (AWS Secrets Manager, HashiCorp Vault, or even 1Password).
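At a minimum, keep .env files out of version control and supply secrets via the environment at run time (placeholder values shown):

```shell
# Ensure local secret files can never be committed
echo ".env" >> .gitignore

# Provide secrets via the environment (or a secret manager), not source code
export OPENAI_API_KEY="replace-me"   # placeholder, set from your secret store
```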
5. Rotate credentials if you're unsure. If there's any chance you installed the compromised version, rotate everything:
- SSH keys: generate a replacement with ssh-keygen -t ed25519 -C "your@email.com", install the new public key on your servers, then remove the old one
- API keys: generate new ones from each provider's console and revoke the old keys
- Database passwords: update them immediately
Karpathy's Take: Fewer Dependencies, More "Yoinking"
The attack prompted Karpathy to share a perspective that challenges conventional software engineering wisdom:
"Classical software engineering would have you believe that dependencies are good (we're building pyramids from bricks), but imo this has to be re-evaluated... I've been growingly averse to them, preferring to use LLMs to 'yoink' functionality when it's simple enough and possible."
— Andrej Karpathy
His point: Modern LLMs can write simple utility functions in seconds. Why add another dependency — another potential attack vector — for something you could have Claude or GPT write directly into your codebase?
For New Zealand AI startups, this is worth considering. Every dependency you add is a trust relationship you're forming with unknown maintainers. Some are essential. Others might be replaceable with a few dozen lines of code generated by an LLM.
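As a toy example of that trade-off: rather than adding a retry-library dependency, a helper like this covers many simple cases (an illustrative sketch, not a full replacement for a hardened library):

```shell
# A three-attempt retry with crude backoff, in place of a retry dependency
retry() {
    attempt=1
    until "$@"; do
        [ "$attempt" -ge 3 ] && { echo "giving up after $attempt attempts" >&2; return 1; }
        attempt=$((attempt + 1))
        sleep "$attempt"
    done
}

# Example: wrap any flaky command
retry date >/dev/null && echo "succeeded"
```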
The NZ-Specific Risk
New Zealand's AI ecosystem is small but growing. We have:
- Startups building AI-powered products
- Universities teaching machine learning courses
- Enterprises adopting LLMs for internal tools
- Government agencies exploring AI capabilities
Many of these organisations are relatively new to AI development. Security practices that are second nature to seasoned software engineers — pinning versions, auditing dependencies, rotating credentials — may not be part of the workflow yet.
This attack is a reminder: as we build AI capabilities in New Zealand, we need to build security consciousness alongside them.
What This Isn't
It's important to be clear about what this attack wasn't:
- It wasn't AI attacking humans. This was a traditional supply chain attack — malicious code uploaded by a human attacker.
- It wasn't a fundamental AI safety issue. The compromised package existed outside the AI models themselves.
- It wasn't targeted at New Zealand. The attack was global — anyone using Python could have been affected.
But it is a reminder that the software supply chain underpinning AI development has vulnerabilities. And as we discussed in our article on New Zealand's AI infrastructure build-out, we need the skills to secure what we're building.
The Bigger Picture
Supply chain attacks like this represent a shift in how we need to think about software development. The old model — trust PyPI, trust npm, trust your dependencies — is showing cracks.
For New Zealand developers:
- Small teams might not have dedicated security people, but they can still pin versions and use lock files
- Startups can adopt a "minimum dependencies" philosophy, especially for early prototypes
- Universities should include supply chain security in their AI/ML courses
- Enterprises can implement dependency scanning in their CI/CD pipelines
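The CI/CD piece can be as small as one extra step; for example, a GitHub Actions fragment along these lines (a sketch — adapt names and paths to your pipeline):

```
# .github/workflows/ci.yml -- illustrative audit step
- name: Audit Python dependencies
  run: |
    pip install pip-audit
    pip-audit -r requirements.txt
```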
And perhaps the most practical takeaway: before adding a dependency, ask yourself if an LLM could write that functionality instead. Sometimes the safer path is the simpler one.
Sources
- Andrej Karpathy (@karpathy) on X/Twitter
- Daniel Hnyk (@hnykda) — initial discovery report
- PyPI package records
- TechCrunch, The Register coverage
This article reflects our analysis and opinion based on publicly available information at the time of publication. The security landscape evolves rapidly. Verify important claims independently. Views expressed are those of Singularity.Kiwi editors.