Satoshi never wrote or implied that the 1 MB block size limit was meant to be a cap on normal traffic. Nowhere -- not in the whitepaper, nor in his forum posts and messages -- does he say that block space was supposed to be an intentionally scarce resource, reserved exclusively for the transactions that paid the highest fees. On the contrary, whenever the "scaling problem" was discussed, he always denied that it was a problem, because (he wrote) continuing increases in computing speed and bandwidth would outpace the growth of the blockchain, *even if traffic continued growing to Visa levels*.

However, in [one post on bitcointalk](http://satoshi.nakamotoinstitute.org/posts/bitcointalk/468/#selection-39.139-39.236) he described a tiered fee structure (whether it was actually implemented, or only proposed, is not clear). That was in response to other people raising the issue. In the next post he said that it would have to be more complicated, and he never mentioned it again.

If you read it carefully, that proposal is very different from Greg's "fee market", and does not imply a fixed block size limit. Unlike Greg's, that policy is meant to ensure minimal service to *low*-fee transactions (including zero-fee ones -- Satoshi had a soft spot in his heart for them), while still providing faster service to those who pay more. The numbers in the first column of his table would still have to be adjusted as traffic grows, so that the total space allocated for all classes is at least twice the average block size; otherwise the schema would have the same problem as Greg's (driving users away with unpredictably long delays) -- only worse, because (as discussed below) each fee class would hit its own cap and build its own backlog.

Note the last two paragraphs of that post, and in particular:

> At some price, you can **pretty much always get in** if you're willing to outbid the other customers.

This statement only makes sense if the topmost tier has no size limit, so that any transaction that pays the corresponding fee (thus out-bidding all those who paid lower fees) is pretty much guaranteed to be included in the next block.

Anyway, this scheme assumes that the spaces allocated for each fee class are sufficient to serve the daily average input traffic of that class, with some leeway. Otherwise there would be unbounded backlogs and delays for that fee class. Given that he thought that traffic might grow slowly but steadily to Visa-size levels eventually, he must have given those numbers as examples, assuming that they would be adjusted upwards to always stay ahead of the traffic.

> Just including the minimum 0.01 goes a long way.

This line seems to clarify what the first paragraph says: even if there are 2 MB of transactions in the queue, almost all paying 0.02 or more, a transaction that pays 0.01 will have a 10--12% chance of being included in the first 250 kB of the next block. Thus, if the same situation persists for the following blocks, the expected delay to confirmation of that transaction will be 8--10 block times.

Therefore, here is how I understand that proposal. When assembling the next candidate block, a miner would scan the transactions in the input queue either in random order, in order of increasing fees, or in chronological order, first-come-first-served (another basic "common business sense" principle that Greg refuses to consider). Not in order of decreasing fees, since that would make the tier sizes irrelevant (indeed it would reduce to Greg's schema). While the candidate block was less than 50 kB, every transaction would be included, even free ones.
After that point, transactions that paid less than 0.01 would be deferred to future blocks, while all transactions paying 0.01 or more would continue to be included -- until reaching the 250 kB mark. Then, transactions that paid less than 0.02 would be deferred, while those that paid 0.02 or more would be included. And so on -- until some last tier, which would be filled with the highest-paying transactions still in the input queue.

This schema would be almost the opposite of Greg's fee market, because (except for the highest tier) it was intended to *guarantee* some space in the blocks for transactions that paid *low* fees. Namely, the schema would guarantee inclusion of a zero-fee transaction if it happened to be in the first 50 kB taken from the input queue; of a transaction that paid 0.01, if it happened to be in the first 250 kB; and so on. In Greg's fee market, by contrast, users have no guarantees, no matter how much they pay.

However, the schema would work only as long as the total input traffic was less than the 1 MB total cap; because, as has been pointed out a million times, a situation in which the input traffic is higher than the capacity would not be sustainable. Even if one assumes that miners can discard zero-fee transactions at will, that condition would still be required for the fee-paying ones.

Moreover, that schema would have required a similar condition on the input traffic of each paying class. For instance, with chronological ordering, the input traffic of transactions paying 0.01 would have to be less than some cap S(1) between 200 and 250 kB every 10 minutes, depending on the amount of zero-fee traffic. If that limit were exceeded, there would be an ever-growing backlog of transactions paying 0.01, whose confirmation delay would grow without bound -- again, an unsustainable situation. The transactions paying 0.02 would have a cap S(2) somewhere between 83 and 333 kB, depending on how much 0.01 and zero-fee traffic there was. And so on.

There would be a traffic cap even for the zero-fee class, namely 50 kB every 10 minutes. Zero-fee transactions exceeding that cap would pile up in the queue, without bound, and would eventually be discarded. It is hard to imagine why anyone, except a malicious spammer, would issue a zero-fee transaction with that risk.

So, in that respect, Satoshi's schema above is even worse than Greg's, because it puts an even lower cap on adoption, multiplies the risk of large backlogs, and makes DoS attacks even easier and cheaper.

So, in a steady-state (sustainable regime) situation, the input traffic in each class must be well below the cap of that class. In that steady state, the average volume of confirmed transactions in each class must be equal to the input traffic volume. That is, if the users generate S(c) kilobytes of transactions of class c every 10 minutes, then each solved block must contain S(c) kilobytes of transactions of that class. For that to happen, transactions must spend some time in the queue; how long depends on the queue ordering, on the class c, and on the traffic profile S(0), S(1), ...

Under a sustainable regime, the various queue ordering options differ in the size of the queue, in the distribution of confirmation delays, and in the degree of feedback that they provide to users, for each class.
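To make that reading concrete, here is a minimal sketch (in Python; mine, not anything Satoshi published) of the tiered assembly rule described above, using the tier boundaries mentioned in this discussion -- 50 kB for free transactions, 250 kB for 0.01, 333 kB for 0.02 -- and an illustrative 0.05 fee for the top tier, which runs up to the 1 MB cap:

```python
# Illustrative sketch only: the 50/250/333 kB tier boundaries are the numbers
# discussed above; the 0.05 top-tier fee and the function name are mine.

# (block offset in kB up to which the tier extends, minimum fee in BTC)
TIERS = [(50, 0.00), (250, 0.01), (333, 0.02), (1000, 0.05)]

def assemble_block(queue, tiers=TIERS, max_kb=1000):
    """queue: list of (size_kb, fee) pairs, scanned first-come-first-served
    (or in random or increasing-fee order -- anything but decreasing fee)."""
    block, used = [], 0.0
    for size_kb, fee in queue:
        if used + size_kb > max_kb:
            continue  # does not fit under the 1 MB cap; leave it for a later block
        # minimum fee required at the current fill level of the candidate block
        min_fee = next(f for limit, f in tiers if used < limit)
        if fee >= min_fee:
            block.append((size_kb, fee))
            used += size_kb
        # transactions below the current tier's fee stay queued for future blocks
    return block
```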
While the queue ordering cannot affect the average throughput (which is equal to the input traffic), it can affect the network capacity (the cap on traffic, for each class and in total) and/or how the service degrades as the input traffic approaches the cap.

The easiest case to analyze (and arguably the best for users) is when the queue is sorted by increasing fee before assembling the block. Then, in a steady state with regular 10-minute block intervals, the effect is trivial: the entire queue is copied into the block, and no entries are delayed to the next block. That's because the conditions for steady state imply that all transactions of class c are included before reaching its size limit. But then all users would enjoy next-block confirmation, independently of the fee that they pay. Delays would occur only in case of temporary traffic surges, or (equivalently) extra-long block intervals. Namely, if the queued-up traffic in one class c happened to exceed the space allotted for it, then some transactions would be deferred to future blocks. (Preferably, preserving first-come-first-served order within that class.)

The random-priority case is a bit more difficult to analyze. Suppose that the miner builds his candidate block in an instant, and does not change it until the block gets solved, by him or someone else. Let Q(c) be the fraction of queued transactions that pay the fee of class c, at that instant.

In order to select S(0) kilobytes of zero-fee transactions while filling the first 50 kB, we must have Q(0) x 50 = S(0), that is, Q(0) = S(0)/50. For example, if S(0) = 25, then Q(0) = 0.50 -- that is, 50% of the entries in the queue will be zero-fee.

In those 50 kB, there will be Q(1) x (50 - S(0)) kilobytes of class 1 transactions. After that, the class-zero transactions in the queue are ignored, so the fraction of class 1 transactions in the queue becomes effectively Q(1)/(1 - Q(0)). Therefore, to get S(1) kB of class 1 transactions while filling the first 250 kB, we must have

S(1) = Q(1) x (50 - S(0)) + Q(1)/(1 - Q(0)) x (250 - 50)

that is,

Q(1) = S(1) / (50 - S(0) + 200/(1 - Q(0)))

For example, if S(0) = 25 and S(1) = 200, then Q(1) = 200/425 = 0.47. That is, ~47% of the entries in the queue will be paying 0.01. And so on.

Since we are assuming steady state, hence an average block size smaller than the 1 MB limit, all transactions of the topmost class that remain in the queue, after filling all the lower-class sections of the block, will be included in the block.

That lets us compute the actual amount V(c) (in kilobytes) of transactions of each class c in the queue. The ratio V(c)/S(c) will be the average confirmation delay (in blocks) experienced by transactions of class c. (The actual delay will have a Poisson distribution.)

Note that, as the traffic S(c) in one class approaches saturation, that class will make up a larger fraction of the queue. For example, if S(0) = 45 and S(1) = 200, then 90% of the queue will be zero-fee transactions, and 9.98% will be class 1. If there is any substantial traffic in the other classes, the actual volume of class 0 and class 1 entries in the queue will be huge, and so will be the respective delays. Then what would be the point of paying 0.02 or 0.05, rather than 0.01?
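For what it's worth, here are a few lines of Python (the function name is mine) that just redo the queue-composition arithmetic above, with tiers at 50 kB and 250 kB:

```python
# S0, S1 = input traffic (kB per 10-minute block) of the zero-fee and 0.01 classes.
def queue_fractions(S0, S1):
    Q0 = S0 / 50.0                                  # Q(0) = S(0)/50
    Q1 = S1 / (50.0 - S0 + 200.0 / (1.0 - Q0))      # Q(1) = S(1)/(50 - S(0) + 200/(1 - Q(0)))
    return Q0, Q1

print(queue_fractions(25, 200))   # (0.5, ~0.47)   -- half the queue is zero-fee
print(queue_fractions(45, 200))   # (0.9, ~0.0998) -- the queue is swamped by zero-fee entries
```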
It is the same problem that occurs in Greg's fee market: the mechanism that is supposed to sustain the fees at some equilibrium above the minimum generates zero "pressure" while the average minimum-paying traffic is below a certain limit, and is unsustainable if that traffic exceeds that limit, even if by a small amount. With such a "zero-infinity" feedback mechanism, fluctuations in traffic and in interblock intervals would produce large and unpredictable swings in backlogs and confirmation delays.

However, that schema would make sense only if, in a steady-state situation, the traffic in each class was less than the space ideally allotted for it. That is, it assumed that, every 10 minutes, there would be less than 50 kB of free transactions, less than 250 kB of transactions that paid up to 0.01, less than 333 kB of transactions that paid up to 0.02, and so on.
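A toy computation (mine, with made-up numbers) shows the "zero-infinity" character of that condition: with, say, a 250 kB per-block allotment for one class, input traffic just below the cap generates no backlog at all, while traffic just above it generates a backlog that grows without bound:

```python
def backlog_after(blocks, input_kb, cap_kb=250.0):
    """Backlog (kB) of one fee class after `blocks` blocks, given steady
    input traffic of `input_kb` per block and a per-block allotment of `cap_kb`."""
    queue = 0.0
    for _ in range(blocks):
        queue += input_kb             # traffic arriving during one block interval
        queue -= min(queue, cap_kb)   # at most cap_kb of this class fits in the block
    return queue

print(backlog_after(1000, 249.0))   # 0.0    -- no pressure at all below the cap
print(backlog_after(1000, 251.0))   # 1000.0 -- unbounded growth just above it
```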