Key Highlights
- On May 3, 2026, an AI agent deleted nearly all of a rental software vendor's production data — including backups — in nine seconds
- The root cause wasn't the model; it was missing guardrails around what the agent was allowed to do
- 5 questions every rental business should ask their software vendor about AI safety
- How responsible AI design — approval gates, scoped permissions, defined workflows — prevents this category of failure
On May 3, 2026, a car rental software vendor — PocketOS — lost nearly all of its production data. Three months of customer records, agreements, vehicles, payments, and documents — gone in nine seconds. The cause wasn't a hacker or a hardware failure. It was an AI agent the company had given broad credentials to, acting on its own judgment without a confirmation step. The incident has been reported by Heise, Tom's Hardware, and The Guardian.
This isn't a "look at our competitor" post. It's a wake-up call for everyone — including us — about what responsible AI in car rental software has to look like. If you run a rental business, the lessons here apply to whichever software vendor you choose, including Rentbee.
What Actually Went Wrong
According to the agent's own post-mortem, quoted in coverage: "I guessed that deleting a staging volume via the API would be scoped to staging only. I didn't verify."
That sentence is the entire story. Translate it out of developer-speak and the failure mode is universal:
- The AI was trying to fix a small problem.
- It chose to take a destructive action to fix it.
- Nothing stopped it before it acted.
- The action was instantly irreversible.
This is not a failure of the underlying model. The same pattern would have emerged with any sufficiently capable AI allowed to take destructive actions unsupervised. The problem was not the model; it was what the AI was allowed to do without asking.
That's the most important takeaway for the rental industry, because the next decade of rental software is going to be AI-augmented. AI assistants in rental platforms can already void contracts, refund payments, delete customer records, modify reservations, and send emails to renters. The vendors who let AI take those actions on its own will eventually make the same headlines. The vendors who treat every destructive action as something a human has to confirm will save their customers hours every week — safely.
The Real Lesson: Destructive Actions Need a Human in the Loop
In aviation, you don't trust a single autopilot to land a plane. You build redundant systems, confirmation steps, override switches, and procedures that assume any single component can fail. AI in production rental software needs the same posture.
To bring this back home: think about what an AI assistant inside a rental management platform can plausibly do. It can delete a customer record. Cancel an active rental agreement. Refund a payment. Override a Do Not Rent flag. Replace a signed document. Send an email to the wrong customer. Mass-update rental rates. Each of those actions, taken without confirmation, is a story waiting to be written — and rental businesses are the ones who'd live with the consequences.
This incident reveals five concrete design failures that responsible rental software has to design out from day one:
1. Destructive Actions Without Approval Gates
The AI agent executed an irreversible deletion with no human in the loop. This is the failure that turns every other failure into a disaster. Any production system that lets AI delete a customer, void an agreement, refund a payment, or remove a document without a human confirmation step is one bad inference away from a catastrophe.
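The fix is structural, not model-level. As a minimal sketch (every name here is illustrative, not any vendor's actual API), an approval gate sits between what the AI proposes and what actually runs, and destructive actions cannot pass it without a human:

```python
# Hypothetical sketch of an approval gate. Action names and the
# run_backend helper are illustrative, not a real vendor API.

DESTRUCTIVE = {"delete_customer", "void_agreement",
               "refund_payment", "delete_document"}

def run_backend(action: str, params: dict) -> str:
    # Stand-in for the real system that performs the action.
    return f"executed {action}"

def execute(action: str, params: dict, confirm) -> str:
    """Run an AI-proposed action; destructive ones need human sign-off.

    `confirm` is a callback that asks a human and returns True/False.
    """
    if action in DESTRUCTIVE:
        if not confirm(f"AI wants to run {action} with {params}. Allow?"):
            return "blocked: awaiting human approval"
    return run_backend(action, params)  # only reached once it is safe
```

The key property: the destructive path is unreachable without the callback returning an explicit yes, no matter what the model concludes.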
2. Treating "Look Something Up" the Same as "Change Something"
A well-designed AI assistant treats "show me this customer's history" very differently from "delete this customer." Many AI integrations blur the line — and the blurring is exactly what enables this category of failure. Reading data should be free; changing or deleting data should always pause for confirmation.
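One way to keep that line sharp is to tag every tool the AI can call as read-only or mutating, and to fail safe on anything unknown. A hypothetical sketch (tool names are invented for illustration):

```python
# Hypothetical tool registry: reads run freely, writes must pause.
TOOLS = {
    "show_customer_history": "read",
    "summarize_revenue":     "read",
    "delete_customer":       "write",
    "mass_update_rates":     "write",
}

def requires_confirmation(tool: str) -> bool:
    # Unknown tools are treated as mutating — fail safe, not open.
    return TOOLS.get(tool, "write") == "write"
```

The default matters as much as the tags: a tool nobody classified should be gated, not waved through.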
3. No Structured Workflows for High-Stakes Operations
The agent reasoned its way to a destructive action through open-ended chat. High-stakes operations in rental software — voiding contracts, modifying payments, attaching documents, updating customer flags — should run as defined, validated workflows with explicit confirmation steps, not as free-form conclusions the AI talks itself into.
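The difference between a workflow and a chat conclusion can be made concrete. In this hedged sketch (the function and field names are hypothetical), voiding a contract is a fixed sequence of validated steps, and the destructive step is only reachable through the ones before it:

```python
# Hypothetical sketch: a high-stakes operation as a defined workflow.
def void_contract_workflow(contract: dict, confirm) -> str:
    # Step 1: validate that the target exists in a voidable state.
    if contract.get("status") != "active":
        return "rejected: only active contracts can be voided"
    # Step 2: explicit human confirmation before anything changes.
    if not confirm(f"Void contract {contract['id']}?"):
        return "cancelled by user"
    # Step 3: the action itself, reachable only through steps 1-2.
    contract["status"] = "void"
    return f"voided {contract['id']}"
```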
4. AI With More Authority Than the Task Requires
The agent was permitted to take an action far beyond what its task actually called for. AI assistants in rental software should operate under the principle of least privilege: a tool that's helping you find a contract should not also be able to delete one. Capability should match the task in front of the AI, nothing more.
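Least privilege is cheap to enforce at the tool-handout level: the AI session never receives a capability the task doesn't call for. A minimal sketch, with invented tool and task names:

```python
# Hypothetical sketch: scope the AI's tools to the task at hand, so
# "find a contract" can never reach "delete a contract".
ALL_TOOLS = {
    "find_contract":   lambda q: f"found contract for {q}",
    "delete_contract": lambda cid: f"deleted {cid}",
}

TASK_SCOPES = {
    "contract_lookup": {"find_contract"},
}

def tools_for(task: str) -> dict:
    # Unknown tasks get no tools at all — deny by default.
    allowed = TASK_SCOPES.get(task, set())
    return {name: fn for name, fn in ALL_TOOLS.items() if name in allowed}
```

A deletion the AI cannot invoke is a deletion no bad inference can cause.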
5. No Brakes on Runaway Behavior
By the time anyone noticed, nine seconds was already enough. Responsible AI systems don't just log what happened after the fact — they have rate limits, scope limits, and circuit breakers that stop a single bad decision from cascading into a business-ending event.
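A circuit breaker for this purpose can be very small. This sketch (a generic pattern, not any product's implementation) trips after too many destructive calls inside a sliding time window, forcing a human back into the loop:

```python
# Hypothetical sketch: a rate-limiting circuit breaker that stops a
# single bad decision from cascading into many destructive calls.
import time

class CircuitBreaker:
    def __init__(self, max_calls: int, window_s: float):
        self.max_calls = max_calls
        self.window_s = window_s
        self.calls: list[float] = []

    def allow(self) -> bool:
        now = time.monotonic()
        # Keep only the calls still inside the window.
        self.calls = [t for t in self.calls if now - t < self.window_s]
        if len(self.calls) >= self.max_calls:
            return False   # tripped: require human intervention
        self.calls.append(now)
        return True
```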
What to Ask Your Rental Software Vendor
Whether you're evaluating Rentbee or any other rental software platform with AI features, here are the questions that separate marketing from substance:
- Can the AI take destructive actions — delete customers, void contracts, refund payments, override DNR flags — without an explicit user approval step? If yes, walk away.
- Does the AI respect the same role-based permissions as a human user? If a Rental Agent can't void a contract in the UI, the AI shouldn't be able to either.
- Are destructive or high-stakes operations handled through structured workflows or free-form chat? Workflows are auditable and bounded; free-form chat is neither.
- What happens if the AI gets the wrong answer? Reversible actions are recoverable; irreversible ones are not. Every destructive action should require human confirmation and leave an audit trail.
- What is the AI allowed to do without asking? "Look up information" is a safe default. "Modify or delete customer data on its own" is not, however much friction it seems to remove.
If a vendor can't answer these clearly, the AI feature isn't ready for your business — no matter how impressive the demo looks.
How Rentbee Approaches Responsible AI
We built Buzz AI, Rentbee's AI assistant, with the assumption that the model can and will be wrong. That assumption shapes every part of the design:
- Approval gates on every destructive or data-changing action. Buzz AI proposes; you approve. Voiding a contract, refunding a payment, deleting a customer, attaching a document, updating a DNR flag — none of it happens without explicit human confirmation.
- Role-scoped access. Buzz AI runs under the same permissions as the user driving it. A Rental Agent's Buzz AI can do exactly what a Rental Agent can do — no more.
- Structured workflows for high-stakes operations. Creating a vehicle, filing a ticket, attaching insurance — these aren't free-form chats. They're defined procedures with validation and confirmation at each step.
- Read-heavy by default. Looking up an agreement, summarizing revenue, finding a customer — Buzz AI does these freely. Anything that changes or removes data hits an approval gate first.
- Proprietary guardrails purpose-built for rental operations.
We're not claiming Buzz AI can't be wrong. We're claiming that when it is wrong, the wrongness can't quietly cost you three months of data.
One More Habit That Causes Disasters: Credit Cards in Internal Notes
While we're on the topic of avoidable catastrophes, here's one we still see across the rental industry: credit card numbers stored in plain-text customer notes, agreement comments, or internal chat logs.
It happens innocently. An agent jots a card number into a note "just for now" so the next shift can charge a damage fee. It gets copied into an agreement. A backup gets exfiltrated, an account gets compromised, or an AI assistant scans a note and surfaces the number — and now you have a PCI-DSS incident, regulatory exposure, and a very uncomfortable conversation with your processor.
The rule is simple, and it doesn't change because AI exists:
- Never type, paste, or store a full credit card number in customer notes, agreement comments, document descriptions, or any free-text field.
- Always capture cards through a secured, tokenized vault — the kind that stores a token reference, not the number itself, and is scoped to authorized charges only.
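The core idea of tokenization fits in a few lines. This is only the shape of it, with invented names — a real vault is a PCI-scoped external service, not an in-process dictionary:

```python
# Hypothetical sketch of tokenization: the application stores an
# opaque token and the last four digits, never the full card number.
import secrets

_vault: dict[str, str] = {}   # stands in for the PCI-scoped vault service

def tokenize(pan: str) -> dict:
    """Put the card number in the vault; hand back a token reference."""
    token = "tok_" + secrets.token_hex(8)
    _vault[token] = pan        # the full number never leaves the vault
    return {"token": token, "last4": pan[-4:]}
```

Everything outside the vault — notes, agreements, logs, AI assistants — only ever sees the token and the last four digits.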
Rentbee includes a secured credit card vault at no additional cost — built-in, PCI-aligned, and integrated with payments, agreements, and refunds across the platform. If you're still asking your team to write card numbers into notes, the vault replaces that habit with one safer, faster click. There's no reason to put your business at that level of risk when the safer path is already in your platform.
The Industry Has to Get This Right Together
Talking about these failures publicly is how an industry learns — and credit goes to the team that disclosed this one openly instead of burying it. We've used the lessons internally, audited our own systems, and we'd encourage every rental software vendor to do the same.
AI in rental software is going to be enormous. Done right, it gives small operators capabilities that used to require enterprise budgets. Done wrong, it's a liability multiplier. The difference is not the model — it's whether the people building the product treated safety as a feature or as something to bolt on later.
FAQ
What happened in the May 2026 AI deletion incident?
On May 3, 2026, an AI coding agent deleted approximately three months of a rental software vendor's production data, including backups, in nine seconds. The incident has been widely reported by Heise, Tom's Hardware, The Guardian, and others. The vendor disclosed it publicly and is recovering data from older backups and external records.
Was this a problem with the AI model?
No. The model behaved the way capable models do: it made a plausible guess and acted on it without verifying. The failure was in the surrounding system: the AI had broad credentials, no approval step before destructive actions, and no guardrails to stop runaway behavior. Any sufficiently capable AI in the same setup could have produced the same outcome.
How do I evaluate whether a rental software vendor's AI is safe?
Ask what destructive actions the AI can take without explicit user approval — deleting customers, voiding contracts, refunding payments, modifying documents. Ask whether it respects role-based permissions, and whether high-stakes operations run as structured workflows or as free-form chat. Vendors who can answer clearly have thought about safety; vendors who can't, haven't.
Does Buzz AI have similar risks?
Buzz AI is designed specifically to prevent this category of failure. Every destructive or data-changing action requires explicit user approval, the AI runs under the same role-based permissions as the human user, and high-stakes operations are handled through validated workflows rather than open-ended chat.
How can I learn more about Buzz AI?
Buzz AI is a premium add-on for Rentbee customers. Contact our team to discuss enabling it on your account, or read more about how Buzz AI handles real rental operations.
Is it safe to store credit card numbers in customer notes?
No. Storing card numbers in plain-text notes, comments, or chat logs creates PCI-DSS exposure, regulatory risk, and an obvious target for breaches and AI-assisted leaks. Always capture and store cards through a secured, tokenized vault. Rentbee includes a secured credit card vault at no additional cost — use it.
The Bottom Line
This incident is going to be cited in software safety case studies for years. The right response from the industry isn't to retreat from AI — it's to demand that AI in production systems be designed with the same rigor we expect from financial software, medical devices, or anything else where mistakes have consequences.
Rental businesses deserve software that gives them AI's upside without exposing them to AI's downside. That's the bar.
Talk to our team about how Rentbee approaches AI safety, or read more about Buzz AI in action.
