[quote author=oda.krell link=topic=178336.msg7232347#msg7232347 date=1402407707]
[quote author=JorgeStolfi link=topic=178336.msg7231237#msg7231237 date=1402403179]
[b](You are not referring to the P != NP conjecture I suppose?[/b] It is just as uncertain, [b]but it has absolutely no relevance[/b] to those two assumptions in the bitcoin protocol.)
[/quote]
I am. And it has. At least to 'what the protocol can handle in principle being thrown at it', not 'what the protocol is and does right now'. As long as there [i]are[/i] problems that are hard to solve but easy to check, the protocol can be adapted to the threat that a particular problem turned out to be easy to solve after all.
[/quote]

First, let me say that I was just objecting to that "pure incorruptible mathematics" stuff by the reporter. My skepticism does [b]not[/b] come from that (or from any other strictly technical issue), but from political/social/practical problems, for which my probabilities are obviously different from those of most people here.

As for the P != NP issue, that is a misunderstanding that theoretical computer scientists insist on spreading for survival reasons. (I am guilty of that myself: I was a TC scientist for many years and even taught that snake oil to students once.)

The truth is that the polynomial and superpolynomial classes can be defined only if the problem instance size n is allowed to go to infinity -- and the difference appears only when n is "about to get there". If one places any finite bound on n (say, n < 10^500), the two classes become the same: any algorithm that does not loop forever runs in polynomial time. In that case the whole theory of computational complexity collapses, and provides no proofs or insight at all. (In fact, if there is a finite bound on n, every non-looping algorithm runs in O(1) time.)

This is not an extraordinary claim, but only a trivial consequence of the definitions. Said another way: for any time function t() there is a polynomial p() -- for instance the constant polynomial p(n) = 1 + the maximum of t(m) over all m up to 10^500 -- such that t(n) < p(n) for all n up to 10^500.

One may argue that complexity theory is only an approximation that gives practically useful results even though it uses an unreal model of problems and computers -- just as the physical laws of heat conduction and of the ideal gas are useful even though they assume a continuous medium instead of a collection of atoms. However, the simplifications made in those physical laws usually introduce only very small errors, whereas the assumption of an unbounded n completely changes the problem. The tools of complexity theory simply don't work for bounded n.

Consider these two problems:

P1: "given a boolean formula of arbitrary length n, find an assignment for its variables that makes it true"
P2: "given a boolean formula of length n < 10^50, find an assignment for its variables that makes it true"

Complexity theory says that P1 is NP-complete, and many TC scientists will tell you that it is a very hard problem that almost certainly requires superpolynomial time and is therefore infeasible. As for P2, the theory says that it is trivial, and all TC scientists will tell you that it can be solved in constant time. But any instance of P1 that one may encounter in practice is an instance of P2 too. So complexity theory says two completely different things about the practical problem -- neither of which has any useful consequence.
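
To make the P2 point concrete, here is a minimal brute-force SAT sketch (my own illustration, not part of the original argument; the formula representation and names are assumptions). Once the formula length is bounded, the number of variables is bounded too, so the loop below tries at most a fixed number of assignments -- which is exactly what lets the theory call it "constant time":

[code]
from itertools import product

def brute_force_sat(formula, variables):
    # Try every truth assignment for `variables` and return the first one
    # that makes `formula` (a function from an assignment dict to bool) true,
    # or None if no assignment works.  If len(variables) is bounded by a
    # constant, the loop runs at most a constant number of times.
    for values in product([False, True], repeat=len(variables)):
        assignment = dict(zip(variables, values))
        if formula(assignment):
            return assignment
    return None

# Hypothetical example: (x or y) and (not x or z)
f = lambda a: (a["x"] or a["y"]) and (not a["x"] or a["z"])
print(brute_force_sat(f, ["x", "y", "z"]))  # -> {'x': False, 'y': True, 'z': False}
[/code]

Of course that "constant" is up to 2^(number of variables) assignments, which for anything near the P2 bound is astronomically larger than the age of the universe -- so the "trivial" verdict is just as useless in practice as the "infeasible" one.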