Why Cybersecurity Can Never Be Solved, and Why Security Tools Always Miss Some Bugs

Summary
  • Guaranteed security is impossible due to "unknown unknowns" and the gap between software models and hardware reality, as seen with vulnerabilities like Zenbleed.
  • Security is an empirical property defined by the cost and incentive for attackers; strong defenses work by making the cost of exploitation impractically high.
  • The solution to cybersecurity is an incremental process of refining defenses based on successful attacks (counterexamples) rather than seeking universal guarantees.

How to Solve Cybersecurity Once and For All
Photo by Philipp Katzenberger on Unsplash

For the full research paper, see "How to Solve Cybersecurity Once and For All" by Marcel Böhme on IEEE Xplore.


Despite decades of intensive research into cybersecurity, critical software systems remain vulnerable to new and persistent threats. A striking example occurred at a recent Pwn2Own competition, where a single individual successfully exploited every major web browser, including Chrome, Safari, and Edge, which are used by billions of people worldwide. While building security into software from the ground up is theoretically the most effective approach, it assumes that source code perfectly reflects the programmer's intentions and that we can control its behavior with precision. However, the continuous stream of new vulnerabilities suggests that simply developing better tools and processes is not enough to stop attackers from launching successful exploits indefinitely.

 

To understand why security guarantees are elusive, consider the cat-and-mouse game played by mobile phone vendors. A vendor might employ the best offensive and defensive strategies to secure a device, yet the first jailbreak often appears within weeks of its release. Even after the vendor patches the flaw and strengthens the system, new jailbreaks continue to emerge for years, triggering a cycle of constant security updates. This does not mean the defenses are ineffective; rather, it demonstrates that making universal claims about a system being completely free of security flaws is practically impossible.

 

One of the fundamental reasons we cannot guarantee security is the problem of "unknown unknowns": we often do not know which specific properties a system must uphold to be secure until a vulnerability is discovered retrospectively. A prime example is speculative execution, which was designed as a performance optimization for processors. Only later was it discovered that this feature violates the constant-time property required by secure cryptographic protocols, allowing attackers to measure timing differences and infer secret values.
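To make the constant-time property concrete, here is a minimal Python sketch (my own illustration, not from the paper) contrasting an early-exit comparison, whose running time leaks how far a guess matches a secret, with the standard library's constant-time hmac.compare_digest:

```python
import hmac

# Early-exit comparison: returns at the first mismatching byte, so its
# running time reveals how many leading bytes of the guess are correct.
def leaky_compare(secret: bytes, guess: bytes) -> bool:
    if len(secret) != len(guess):
        return False
    for s, g in zip(secret, guess):
        if s != g:
            return False  # time taken depends on where the mismatch occurs
    return True

# Constant-time comparison: compare_digest inspects every byte regardless
# of where a mismatch occurs, so timing does not depend on the secret.
def safe_compare(secret: bytes, guess: bytes) -> bool:
    return hmac.compare_digest(secret, guess)

print(leaky_compare(b"s3cret!!", b"s3creXYZ"))  # False, but fails faster on early mismatches
print(safe_compare(b"s3cret!!", b"s3cret!!"))   # True, in time independent of the input
```

The sobering lesson of speculative execution is that even the careful version only holds at the source level: the hardware underneath can reintroduce timing differences that the programmer's model never accounted for.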

IEEE Reference: M. Böhme, "How to Solve Cybersecurity Once and For All," IEEE Security & Privacy, vol. 23, no. 3, pp. 79-82, May-June 2025. Available on IEEE Xplore.

Another major hurdle is the "modeling gap": security tools reason within a simplified model of a system rather than the actual deployed environment. Verification methods and static analysis tools rely on assumptions about how code executes, but those assumptions can be broken by the underlying hardware. For instance, the Zenbleed vulnerability in AMD Zen 2 processors allowed data to leak across virtual machine boundaries due to a hardware bug, rendering software-level security guarantees effectively useless in that context.

 

Given these limitations, security should not be viewed as a binary property of being either secure or insecure. Instead, it is an empirical and numeric property that must be continuously strengthened to reduce the likelihood of a compromise. The true goal of security tooling is not to provide rigid guarantees but to increase the degree of difficulty for attackers. If an enterprise system appears secure, it is likely because no one has yet found a counterexample to disprove that assumption.

 

We must begin to think about cybersecurity in economic terms as a function of attacker cost and incentive. In the mobile phone example, the accumulation of mitigations over time eventually made the cost of developing a new jailbreak impractically high for most people. However, when there is substantial demand to overcome security measures, as there is with jailbreaks, a system will inevitably appear insecure to the outside world regardless of the strength of its defenses.
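One way to make this economic framing concrete is a toy model, entirely my own and with hypothetical names (the paper does not formalize it this way), in which an attack is only worth attempting while the expected payoff exceeds the cost of developing an exploit:

```python
from dataclasses import dataclass

# Illustrative model with hypothetical fields; not from the paper.
@dataclass
class AttackEconomics:
    exploit_dev_cost: float      # cost to find and weaponize a new flaw
    expected_payoff: float       # bounty, resale value, or value of access
    success_probability: float   # chance the exploit works when deployed

    def is_rational(self) -> bool:
        # An attacker only invests while the expected gain exceeds the cost.
        return self.expected_payoff * self.success_probability > self.exploit_dev_cost

# Each accumulated mitigation raises exploit_dev_cost; new jailbreaks stop
# appearing once the inequality flips for most would-be attackers.
jailbreak = AttackEconomics(exploit_dev_cost=2_000_000,
                            expected_payoff=500_000,
                            success_probability=0.8)
print(jailbreak.is_rational())  # False: mitigations made the attack uneconomical
```

In this view, a defense succeeds not by eliminating every flaw but by pushing the development cost above what the incentive can justify.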

 

Therefore, the failure of a tool to prevent a specific flaw should not be seen as a total defeat but as an opportunity for "hardening". Successful attacks reported through bug bounties or red-team exercises provide the most reliable signal of defense strength. We should identify the specific reason a defense failed in a particular instance and fix it incrementally. This counterexample-guided approach allows defenses to converge toward a point where finding new vulnerabilities becomes too costly for attackers.
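The counterexample-guided loop can be sketched in a few lines of Python. This is a deliberately simplified simulation with made-up bug classes, not an implementation of any particular tool: each successful attack exposes a gap, and the defense is patched to close exactly that gap.

```python
# Minimal sketch of counterexample-guided hardening (illustration only;
# the bug classes are hypothetical). The "defense" is modeled as the set
# of bug classes it catches; each successful attack reveals one it missed.
def harden(defense: set[str], successful_attacks: list[str]) -> set[str]:
    for bug_class in successful_attacks:
        if bug_class not in defense:
            # A counterexample: fix the specific reason the defense
            # failed instead of redesigning the whole defense.
            print(f"defense failed on {bug_class}; hardening")
            defense = defense | {bug_class}
    return defense

# Usage: each reported exploit incrementally strengthens the defense.
defense = {"sql-injection", "xss"}
defense = harden(defense, ["xss", "use-after-free", "race-condition"])
print(sorted(defense))
```

Real defenses are of course not simple sets, but the shape of the loop is the point: attacks, not proofs, drive the convergence.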

 

Researchers and practitioners need to shift their focus from confirming that their tools can find known bugs to actively seeking out what their tools miss. Evaluating progress by simply increasing detection rates on existing benchmarks tells us nothing about a tool's ability to stop future, unknown attacks. Instead of constantly inventing new techniques for every new vulnerability type, we should ask why existing defenses failed and address those root causes directly.
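A toy example (with hypothetical CVE identifiers, again my own illustration) shows why benchmark recall can mislead: a tool can score perfectly on the bugs it was tuned against while missing every newly disclosed one.

```python
# Hypothetical bug identifiers, purely for illustration.
known_benchmark = {"CVE-A", "CVE-B", "CVE-C"}   # bugs the tool was evaluated on
detected        = {"CVE-A", "CVE-B", "CVE-C"}   # everything the tool flags
newly_disclosed = {"CVE-C", "CVE-D", "CVE-E"}   # found in the wild afterwards

benchmark_recall = len(detected & known_benchmark) / len(known_benchmark)
missed_in_the_wild = newly_disclosed - detected

print(f"benchmark recall: {benchmark_recall:.0%}")          # 100% -- looks perfect
print(f"missed in the wild: {sorted(missed_in_the_wild)}")  # the real signal
```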

 

A useful metaphor for this process is a fishing net where the net represents security tools and the fish represent flaws. There will never be an ultimate net that catches every single fish, as some will always slip through. However, this does not undermine the utility of the net; rather, it suggests we should support the evolution of our defenses based on the specific "fish" that escape. The focus moves from conceptually designing new nets to empirically improving existing ones.

 

For the security community, this means actively seeking evidence that falsifies claims of effectiveness rather than just supporting them. Offensive security researchers should treat every new attack as a counterexample that exposes gaps in current defenses, while software engineers can look for ways to automate the localization and repair of these failures. By adopting this mindset, we can stop trying to prove our systems are perfect and start the practical work of solving cybersecurity one counterexample at a time.