It's my wishful thinking
humblebun
You should really try COBOL, Lisp, Ada, or Erlang. Dead languages are the best
I see you are a person of taste
While you describe how error correction works, there are other factors you fail to account for.
It is widely known that a physical qubit's T2 time decreases when you place it among others. The ultimate question is: when you add qubits, can you overcome this extra decoherence with EC or not?
Say you want to build a QC with 1000 logical qubits, and you want to be sure the error rate doesn't exceed 0.01% after 1 second. You assemble it, and it turns out you have 0.1%. You choose some simple code, say a [[7,1]] code, and now you have to assemble a 7000-qubit chip to run 1000 qubits' worth of logic. You assemble it again, and the error rate is higher now (due to decoherence and crosstalk). But the question is: how much higher? If it's lower than your EC gain, then you just drop in a few more qubits, use a [[15,2]] code, and you're good to go. But what if not?
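For what it's worth, this dilemma fits in a few lines of Python. A minimal sketch, assuming the standard threshold-theorem scaling p_L ≈ A·(p/p_th)^((d+1)/2), a made-up linear crosstalk model, and a rotated-surface-code-style overhead of 2d²−1 physical qubits per logical qubit; the constants A, p_th, and both slope values are illustrative assumptions, not measurements from any real chip:

N_LOGICAL = 1000  # the machine in the example above: 1000 logical qubits

def p_physical(n_qubits, p0=1e-3, slope=0.0):
    """Hypothetical model: base error rate p0 plus crosstalk growing with chip size."""
    return p0 + slope * n_qubits

def p_logical(p, d, p_th=1e-2, A=0.1):
    """Threshold-theorem ansatz for a distance-d code; only meaningful below p_th."""
    return A * (p / p_th) ** ((d + 1) // 2)

for label, slope in [("mild crosstalk", 1e-9), ("harsh crosstalk", 1e-6)]:
    print(label)
    for d in (3, 5, 7):
        n = N_LOGICAL * (2 * d * d - 1)  # rotated-surface-code-style overhead
        p = p_physical(n, slope=slope)
        pl = min(p_logical(p, d), 1.0)   # cap: the ansatz breaks above threshold
        note = "" if p < 1e-2 else "  <- above threshold: a bigger code makes it worse"
        print(f"  d={d}: {n:6d} physical qubits, p={p:.2e}, p_L={pl:.2e}{note}")

In the mild scenario each distance step suppresses p_L by roughly an order of magnitude; in the harsh one the bigger chip pushes p past the threshold and enlarging the code actively hurts. That's exactly the "what if not?" case.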
When you look at it for a minute, your phone will dim the screen, and in the reflection you'll see the person who has harmed you.
It was shown this year at what scale, 47 qubits? How could you be certain this will hold for millions and billions?
Take a look again, it's still there
But who guarantees that EC will overcome the decoherence introduced by that number of qubits? It's a non-trivial question that nobody can answer for certain
Took me a minute to notice, but it was worth it
If qubit counts double every year
And then we still need to increase the coherence time, which is around 50 ms for the current 433-qubit chip. Error correction might work, but it might not
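Taking the "double every year" premise above at face value, the timeline to "millions and billions" is a two-liner; the doubling rate is the commenter's assumption, not an observed trend:

import math

start = 433  # qubit count of the chip mentioned above
for target in (1e6, 1e9):
    years = math.log2(target / start)
    print(f"{start} -> {target:.0e} qubits in about {years:.1f} yearly doublings")

That comes out to roughly 11 years to a million qubits and 21 to a billion, assuming the doubling never stalls.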
We've all been there, Hose. Take a rest and drink enough water