**There are no classical registers in quantum computing**

In classical computers you can have a well-defined "current state at a given time" (stored notably in CPU registers and DRAM in modern systems), and this state changes over time (on each CPU clock cycle) in a controlled way.

Therefore, it is easy to map a sequential description of an algorithm onto real classical hardware. For example, a classical algorithm might be described sequentially as:

```
a = b + c
d = 2 * a
```

and in a classical computer this might actually be implemented in two separate steps:

- a CPU clock edge happens
- one ADD instruction stores the intermediate result in a register that represents `a`
- a CPU clock edge happens
- one MUL instruction stores the final result in a register that represents `d`
- a CPU clock edge happens
- ...
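The two-step execution above can be sketched in plain Python (a hypothetical model, with a dict standing in for the register file), to emphasize that the intermediate result `a` is stored and then read back in a later step:

```python
# Hypothetical register file: state that persists between clock ticks.
registers = {"a": 0, "b": 3, "c": 4, "d": 0}

# clock tick 1: ADD stores the intermediate result in register a
registers["a"] = registers["b"] + registers["c"]

# clock tick 2: MUL reads a back and stores the final result in d
registers["d"] = 2 * registers["a"]

print(registers["d"])  # 14
```

It is exactly this "store, then read back later" step that has no quantum analogue.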

In quantum computing, however, you cannot save the "intermediate state of a computation" and operate on it in a later step. You set up the inputs and the circuit, information flows through in a single indivisible step, and the sensor device at the end of the circuit makes a probabilistic reading.
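As a minimal sketch of that "single indivisible step" (a one-qubit toy simulation, not any real quantum API): the whole circuit is just a matrix applied to the input state, and the only output is one probabilistic reading at the end.

```python
import random

s = 1 / 2 ** 0.5
# Hadamard gate as a 2x2 matrix: the entire "circuit" of this example
H = [[s, s], [s, -s]]

def apply(gate, state):
    # Matrix-vector product: the circuit acting on the state vector.
    return [sum(gate[i][j] * state[j] for j in range(len(state)))
            for i in range(len(gate))]

state = [1.0, 0.0]       # input |0>
state = apply(H, state)  # no intermediate readout is possible here

# The "sensor" at the end: one probabilistic reading.
probs = [abs(amp) ** 2 for amp in state]
outcome = random.choices([0, 1], weights=probs)[0]
print(outcome)  # 0 or 1, each with probability 0.5
```

A classical simulation like this can of course peek at `state`, but the physical device cannot: only `outcome` ever comes out.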

Therefore, unless we are treating quantum circuits as black boxes between classical registers, sequential algorithm descriptions don't make much sense.

It is this fact that makes quantum computers much harder to program.

So a more useful description of quantum computing looks more like the combinational logic blocks (i.e. blocks with no registers and therefore no state) in hardware description languages such as Verilog and VHDL, which are just textual descriptions of a graph of circuits.

For example, in a Verilog combinational block, when you say:

```
a = b + c
```

it doesn't mean "on the next clock cycle of the algorithm, the register `a` will be worth `b + c`" like in, say, C or Python.

It rather means:

- `a` is a wire
- `b` is a wire
- `c` is a wire
- `+` is an adding circuit with `b` and `c` as inputs and `a` as output

Therefore, as soon as `b` or `c` change, `a` also "immediately" changes. With "immediately" in quotes because in practice electrons do take some time to move, and so the clock period can't be made shorter than this propagation time.

A "propagation time" analogue is also present in quantum computers, where each experiment takes some time to finish, and the shorter that time, the faster you can rerun the experiment to reduce the uncertainty of the result.
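Repeating the experiment is just sampling: estimating an outcome probability from N shots gives a statistical error that shrinks like 1/sqrt(N). A minimal sketch (with an assumed true probability of 0.5 standing in for the device):

```python
import random

p = 0.5  # assumed true probability of reading 1 (hypothetical device)

def estimate(shots, rng):
    # Run the "experiment" `shots` times and count the 1-readings.
    hits = sum(rng.random() < p for _ in range(shots))
    return hits / shots

rng = random.Random(42)
for shots in (100, 10_000, 1_000_000):
    # The estimate wanders closer to p as shots grows.
    print(shots, estimate(shots, rng))
```

So the faster a single run finishes, the more shots fit in a given wall-clock budget, and the tighter the estimate.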

Of course, for any maximum input size, you could make one huge combinational circuit that implements that algorithm. But in classical computing we don't do that because silicon is expensive to design and produce, so it is much more economical to design a circuit that solves a wider range of problems than a huge specialized circuit, even if each problem is solved a bit more slowly.

In quantum computers, you don't have a choice. Unless you can use a divide and conquer style algorithm to generate smaller subproblems (which generally implies a P problem which might not be so interesting to a quantum computer), you just need a minimum number of qubits and gates for each given algorithm.

> "..rarely if not never would someone explaining a classical math algorithm revert to its representation in binary circuits" Actually, this was quite common among those involved in the development of computers in the 50's and 60's. In fact, it's how I learned to program (c. 1970). I've always assumed that the reason was that they were more familiar with electronic circuit logic, and because it represented a lower level logic that could explain the behavior of the higher level logic of CPUs and programs. – RBarryYoung – 2019-11-19T20:26:31.090