Security by obscurity

Logical Complexity = Algorithmic Information + Epistemic Gaming

Shannon sought security against an attacker with unlimited computational powers: if an information source conveys some information, then Shannon’s attacker will surely extract that information. Diffie and Hellman refined Shannon’s attacker model by taking into account the fact that real attackers are computationally limited. This idea became one of the great new paradigms of computer science, and led to modern cryptography.

Shannon also sought security against an attacker with unlimited logical and observational powers, expressed through the maxim that “the enemy knows the system”. This view is still endorsed in cryptography. The popular formulation, going back to Kerckhoffs, is that “there is no security by obscurity”, meaning that the algorithms cannot be kept hidden from the attacker, and that security should rely only on the secret keys. In fact, modern cryptography goes even further than Shannon or Kerckhoffs in tacitly assuming that if there is an algorithm that can break the system, then the attacker will surely find that algorithm. The attacker is no longer viewed as an omnipotent computer, but he is still construed as an omnipotent programmer. Hackers’ ongoing successes seem to justify this view.

So the Diffie-Hellman step from unlimited to limited computational powers has not been extended into a step from unlimited to limited logical or programming powers. Is the assumption that all feasible algorithms will eventually be discovered and implemented really different from the assumption that everything computable will eventually be computed? We explore some ways to refine the current models of the attacker, and of the defender, by taking into account their limited logical and programming powers. If the adaptive attacker actively queries the system to seek out its vulnerabilities, can the system gain some security by actively learning the attacker’s methods, and adapting to them? An overview of this project is here.

PS from 2023:

Back in 2011, the idea of gaming security by obscurity was pursued in a project, described in the paper “Gaming security by obscurity”, and presented as a “clear blue sky concept” at the New Paradigms workshop. The proposed implementation relied on the capability of approximating the hardness of programming a specified function. That capability was science fiction. In the meantime, GPTs emerged as unsupervised learners that compress given contexts quite effectively. While the algorithmic complexity underlying such compressions is still, of course, not strictly computable, approximating it has become a matter of daily practice. Here is an interface:
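(Independently of that interface, the general idea of approximating algorithmic complexity by compression can be sketched with an off-the-shelf compressor: the compressed length of a string is a computable upper bound on its algorithmic information content. The snippet below is only an illustration of this principle, with zlib standing in for a learned compressor such as a GPT; the names `complexity_estimate` and `conditional_estimate` are hypothetical, not part of any published interface.)

```python
import os
import zlib


def complexity_estimate(data: bytes, level: int = 9) -> int:
    """Upper-bound proxy for the algorithmic complexity of `data`:
    the length in bytes of its zlib-compressed form. Any lossless
    compressor (including a learned one) yields such an upper bound."""
    return len(zlib.compress(data, level))


def conditional_estimate(x: bytes, y: bytes) -> int:
    """Rough proxy for the conditional complexity K(x | y): the extra
    description length x needs once y is available, estimated as
    C(y + x) - C(y)."""
    return complexity_estimate(y + x) - complexity_estimate(y)


if __name__ == "__main__":
    patterned = b"ab" * 500        # highly regular: compresses well, low estimate
    random_ = os.urandom(1000)     # incompressible with overwhelming probability
    print(complexity_estimate(patterned))  # small
    print(complexity_estimate(random_))    # close to 1000
```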