"Embarrassment" from Decades Ago Unearthed Due to Microsoft's Boss's Playful AI Experiment
A playful experiment with artificial intelligence (AI) has unearthed an old "embarrassment" in the world of computing: a legacy program written nearly four decades ago turns out to harbour a logic error that went undetected the entire time. The finding shows how AI can now expose flaws in ancient computer code that is still in use today.

The experiment was conducted by Mark Russinovich, Chief Technology Officer (CTO) of Microsoft Azure, who set out to test the capabilities of Anthropic's latest AI model, Claude Opus 4.6.

The code under test was no ordinary program. "Enhancer" is Russinovich's own creation, written in May 1986 in 6502 assembly language. The small program modifies the Applesoft BASIC interpreter to allow variables in GOTO, GOSUB, and RESTORE commands.

The results exceeded expectations. Claude Opus 4.6 not only read through the old code but also decompiled the 6502 machine language into a more readable form, adding labels and logic comments that were judged highly accurate. More importantly, the AI found a hidden logic error that had gone unnoticed for about 40 years.

The key finding is a bug Claude described as "silent incorrect behaviour": when the program cannot find the target line it is looking for, it displays no error message. Instead, execution simply jumps to the next line, or even to the end of the program.

Claude also offered a fix consistent with 6502 programming patterns: check the carry flag, which is automatically set when a line is not found, and in that case route execution to an error-handling mechanism.

"We are entering an era of AI-accelerated vulnerability discovery that operates automatically. This capability will be exploited by both the good guys and the attackers," Russinovich said.
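The failure mode described above can be illustrated with a rough Python analogue. This is a hypothetical sketch, not the original 6502 code: the dictionary `PROGRAM`, the function names, and the `not_found` variable (standing in for the carry flag) are all invented for illustration.

```python
# Hypothetical analogue of the reported bug: on a miss, the buggy line
# lookup silently continues at the next line instead of raising an error.

PROGRAM = {10: 'PRINT "A"', 20: 'GOTO 30', 40: 'END'}  # line 30 is absent

def find_line_buggy(target):
    """Return the first line >= target; silently 'jumps ahead' on a miss."""
    for num in sorted(PROGRAM):
        if num >= target:
            return num          # on a miss this is the *next* line, no error
    return None                 # fell off the end of the program

def find_line_fixed(target):
    """Mimic the suggested fix: treat 'not found' (carry set) as an error."""
    not_found = target not in PROGRAM   # stands in for the 6502 carry flag
    if not_found:
        # branch to the error handler instead of continuing silently
        raise LookupError(f"undefined line {target}")
    return target

print(find_line_buggy(30))   # quietly yields line 40, hiding the mistake
```

In the buggy variant, a `GOTO 30` lands on line 40 with no indication anything went wrong, which matches the "silent incorrect behaviour" Claude reported; the fixed variant surfaces the miss as an explicit error.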