Making a Quantum Oracle
We shall now construct this magical quantum oracle. In the current case we have one bit x and one bit y=f(x), so a single-qubit oracle would suffice. But since we eventually want to generalize to n-bit x, and since we may want to keep the value x for later use, we define an (n+1)-qubit register whose lowest qubit represents y and whose remaining qubits represent x. We refer to the lowest qubit as the "oracle qubit" and the rest as the "input qubits". For a given function f(x), let us define a unitary operation U_f on the qubit state | x, y >
U_f :  | x > -> | x' > = | x >
       | y > -> | y' > = | y .xor. f(x) >
If we feed y=0 into this operation, we obtain
| x > -> | x >
| 0 > -> | f(x) >
and this means that U_f is identified as the quantum oracle of f when it operates on | x, 0 >.
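The rule above can be sketched directly as a matrix constructor. The following is a minimal pure-Python sketch (the function name and table representation are my own): given the truth table of f, it builds the permutation matrix of U_f, indexing the basis state | x, y > by 2*x + y so that y is the lowest qubit.

```python
def oracle_matrix(f_table):
    """Matrix of U_f : | x, y > -> | x, y .xor. f(x) >.

    f_table[x] = f(x) for x = 0 .. 2**n - 1; the result is a
    2**(n+1) x 2**(n+1) matrix over basis states indexed as 2*x + y.
    """
    n_inputs = len(f_table)          # number of input values, 2**n
    dim = 2 * n_inputs               # one extra (oracle) qubit for y
    U = [[0] * dim for _ in range(dim)]
    for x in range(n_inputs):
        for y in (0, 1):
            col = 2 * x + y                     # incoming state | x, y >
            row = 2 * x + (y ^ f_table[x])      # outgoing state | x, y .xor. f(x) >
            U[row][col] = 1
    return U

# Feeding y = 0 picks out U_f | x, 0 > = | x, f(x) >:
f_table = [0, 1]                     # one-bit example: f(x) = x
U = oracle_matrix(f_table)
for x in range(len(f_table)):
    column = [U[row][2 * x] for row in range(len(U))]   # image of | x, 0 >
    assert column[2 * x + f_table[x]] == 1
```

Since U_f merely permutes basis states, each column and row contains exactly one 1, which makes unitarity manifest.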
We explicitly show how to achieve this oracle U_f in terms of its matrix expression for the case of a single-qubit x. With the oracle qubit added, the qubit space is four-dimensional, and the oracle U_f becomes a four-by-four matrix.
With the basis states ordered as | 0,0 >, | 0,1 >, | 1,0 >, | 1,1 > (y the lowest qubit), the four possible functions f give:

A : f(0)=0, f(1)=0

        [ 1 0 0 0 ]
  U_f = [ 0 1 0 0 ]
        [ 0 0 1 0 ]
        [ 0 0 0 1 ]

B : f(0)=0, f(1)=1

        [ 1 0 0 0 ]
  U_f = [ 0 1 0 0 ]
        [ 0 0 0 1 ]
        [ 0 0 1 0 ]

C : f(0)=1, f(1)=0

        [ 0 1 0 0 ]
  U_f = [ 1 0 0 0 ]
        [ 0 0 1 0 ]
        [ 0 0 0 1 ]

D : f(0)=1, f(1)=1

        [ 0 1 0 0 ]
  U_f = [ 1 0 0 0 ]
        [ 0 0 0 1 ]
        [ 0 0 1 0 ]

Note that case B is nothing but the CNOT gate with x as control and y as target, while case D simply applies a NOT to the oracle qubit.
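The four cases can be checked mechanically. Here is a small self-contained sketch (the dictionary layout is my own) that writes out each matrix together with its truth table and verifies U_f | x, y > = | x, y .xor. f(x) > entry by entry, with basis states indexed as 2*x + y:

```python
# Each entry pairs a truth table [f(0), f(1)] with its 4x4 oracle matrix.
ORACLES = {
    "A": ([0, 0], [[1,0,0,0], [0,1,0,0], [0,0,1,0], [0,0,0,1]]),  # identity
    "B": ([0, 1], [[1,0,0,0], [0,1,0,0], [0,0,0,1], [0,0,1,0]]),  # CNOT
    "C": ([1, 0], [[0,1,0,0], [1,0,0,0], [0,0,1,0], [0,0,0,1]]),
    "D": ([1, 1], [[0,1,0,0], [1,0,0,0], [0,0,0,1], [0,0,1,0]]),  # NOT on y
}

for name, (f, U) in ORACLES.items():
    for x in (0, 1):
        for y in (0, 1):
            col = 2 * x + y               # incoming | x, y >
            row = 2 * x + (y ^ f[x])      # outgoing | x, y .xor. f(x) >
            assert U[row][col] == 1, (name, x, y)
```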
Extensions to the higher-qubit case should be straightforward. For a two-qubit x, there are 2^4 = 16 possible two-valued functions f(x), and correspondingly there should be 16 possible oracles U_f, which are now eight-by-eight matrices operating on the three-qubit space | x, y >. Writing down their explicit forms would be a good exercise.
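As a sketch of that exercise (helper names are my own), one can enumerate all 16 truth tables for a two-bit x, build the corresponding eight-by-eight matrices with the same rule as before, and confirm that each is a permutation matrix and hence unitary:

```python
from itertools import product

def oracle_matrix(f_table):
    """Matrix of U_f : | x, y > -> | x, y .xor. f(x) >, basis index 2*x + y."""
    dim = 2 * len(f_table)
    U = [[0] * dim for _ in range(dim)]
    for x, fx in enumerate(f_table):
        for y in (0, 1):
            U[2 * x + (y ^ fx)][2 * x + y] = 1
    return U

# All 2^4 = 16 functions f : {0,1}^2 -> {0,1}, as truth tables of length 4.
oracles = [oracle_matrix(list(f)) for f in product((0, 1), repeat=4)]
assert len(oracles) == 16
for U in oracles:
    assert len(U) == 8                                  # eight-by-eight
    assert all(sum(row) == 1 for row in U)              # one 1 per row ...
    assert all(sum(col) == 1 for col in zip(*U))        # ... and per column
```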