Examples and applications
During this lesson, we'll explore some variational algorithm examples and how to apply them:
- How to write a custom variational algorithm
- How to apply a variational algorithm to find minimum eigenvalues
- How to utilize variational algorithms to solve application use cases
Note that the Qiskit patterns framework can be applied to all the problems we introduce here. However, to avoid repetition, we will only explicitly call out the framework steps in one example case, run on real hardware.
Problem definitions
Imagine that we want to use a variational algorithm to find the eigenvalues of the following observable:

$$\hat{O}_1 = 2\,II - 2\,XX + 3\,YY - 3\,ZZ$$

This observable has the following eigenvalues:

$$\lambda_0 = -6, \qquad \lambda_1 = \lambda_2 = 4, \qquad \lambda_3 = 6$$

And eigenstates:

$$|\psi_0\rangle = \frac{|00\rangle + |11\rangle}{\sqrt{2}}, \qquad |\psi_1\rangle = \frac{|00\rangle - |11\rangle}{\sqrt{2}}, \qquad |\psi_2\rangle = \frac{|01\rangle - |10\rangle}{\sqrt{2}}, \qquad |\psi_3\rangle = \frac{|01\rangle + |10\rangle}{\sqrt{2}}$$
from qiskit.quantum_info import SparsePauliOp
observable_1 = SparsePauliOp.from_list([("II", 2), ("XX", -2), ("YY", 3), ("ZZ", -3)])
Custom VQE
We'll first explore how to construct a VQE instance manually to find the lowest eigenvalue of observable_1. This will incorporate a variety of techniques that we have covered throughout this course.
def cost_func_vqe(params, ansatz, hamiltonian, estimator):
    """Return estimate of energy from estimator

    Parameters:
        params (ndarray): Array of ansatz parameters
        ansatz (QuantumCircuit): Parameterized ansatz circuit
        hamiltonian (SparsePauliOp): Operator representation of Hamiltonian
        estimator (Estimator): Estimator primitive instance

    Returns:
        float: Energy estimate
    """
    pub = (ansatz, hamiltonian, params)
    cost = estimator.run([pub]).result()[0].data.evs
    return cost
from qiskit.circuit.library import n_local
from qiskit import QuantumCircuit
import numpy as np
reference_circuit = QuantumCircuit(2)
reference_circuit.x(0)
variational_form = n_local(
    num_qubits=2,
    rotation_blocks=["rz", "ry"],
    entanglement_blocks="cx",
    entanglement="linear",
    reps=1,
)
raw_ansatz = reference_circuit.compose(variational_form)
raw_ansatz.decompose().draw("mpl")

We will start by debugging on local simulators.
from qiskit.primitives import StatevectorEstimator as Estimator
from qiskit.primitives import StatevectorSampler as Sampler
estimator = Estimator()
sampler = Sampler()
We now choose an initial set of parameters:
x0 = np.ones(raw_ansatz.num_parameters)
print(x0)
[1. 1. 1. 1. 1. 1. 1. 1.]
We can minimize this cost function to calculate optimal parameters:
# SciPy minimizer routine
from scipy.optimize import minimize
import time
start_time = time.time()
result = minimize(
    cost_func_vqe,
    x0,
    args=(raw_ansatz, observable_1, estimator),
    method="COBYLA",
    options={"maxiter": 1000, "disp": True},
)
end_time = time.time()
execution_time = end_time - start_time
Return from COBYLA because the trust region radius reaches its lower bound.
Number of function values = 103 Least value of F = -5.999999998357189
The corresponding X is:
[2.27483579e+00 8.37593091e-01 1.57080508e+00 5.82932911e-06
2.49973063e+00 6.41884255e-01 6.33686904e-01 6.33688223e-01]
result
message: Return from COBYLA because the trust region radius reaches its lower bound.
success: True
status: 0
fun: -5.999999998357189
x: [ 2.275e+00 8.376e-01 1.571e+00 5.829e-06 2.500e+00
6.419e-01 6.337e-01 6.337e-01]
nfev: 103
maxcv: 0.0
Because this toy problem uses only two qubits, we can check this by using NumPy's linear algebra eigensolver.
from numpy.linalg import eigvalsh
solution_eigenvalue = min(eigvalsh(observable_1.to_matrix()))
print(f"""Number of iterations: {result.nfev}""")
print(f"""Time (s): {execution_time}""")
print(
    f"Percent error: {100 * abs((result.fun - solution_eigenvalue) / solution_eigenvalue):.2e}"
)
Number of iterations: 103
Time (s): 0.4394676685333252
Percent error: 2.74e-08
As you can see, the result is extremely close to the ideal.
Experimenting to improve speed and accuracy
Add reference state
In the previous example, we did not use any reference operator. Now let us think about how the ideal eigenstate can be obtained. Consider the following circuit.
from qiskit import QuantumCircuit
ideal_qc = QuantumCircuit(2)
ideal_qc.h(0)
ideal_qc.cx(0, 1)
ideal_qc.draw("mpl")
We can quickly check that this circuit gives us the desired state.
from qiskit.quantum_info import Statevector
Statevector(ideal_qc)
Statevector([0.70710678+0.j, 0.        +0.j, 0.        +0.j,
             0.70710678+0.j],
            dims=(2, 2))
Now that we have seen what a circuit preparing the solution state looks like, it seems reasonable to use it as the reference circuit, so that the full ansatz becomes:
reference = QuantumCircuit(2)
reference.h(0)
reference.cx(0, 1)
# Include barrier to separate reference from variational form
reference.barrier()
ref_ansatz = variational_form.decompose().compose(reference, front=True)
ref_ansatz.draw("mpl")

For this new circuit, the ideal solution could be reached with all the parameters set to $0$, so this confirms that the choice of reference circuit is reasonable.
Now let us compare the number of cost function evaluations, optimizer iterations and time taken with those of the previous attempt.
import time
start_time = time.time()
ref_result = minimize(
    cost_func_vqe, x0, args=(ref_ansatz, observable_1, estimator), method="COBYLA"
)
end_time = time.time()
execution_time = end_time - start_time
Using our optimal parameters to calculate the minimum eigenvalue:
experimental_min_eigenvalue_ref = cost_func_vqe(
    ref_result.x, ref_ansatz, observable_1, estimator
)
print(experimental_min_eigenvalue_ref)
-5.999999996759607
print("ADDED REFERENCE STATE:")
print(f"""Number of iterations: {ref_result.nfev}""")
print(f"""Time (s): {execution_time}""")
print(
    f"Percent error: {100 * abs((experimental_min_eigenvalue_ref - solution_eigenvalue) / solution_eigenvalue):.2e}"
)
ADDED REFERENCE STATE:
Number of iterations: 127
Time (s): 0.5620882511138916
Percent error: 5.40e-08
Depending on your specific system, this may or may not improve speed or accuracy in this very small example. The point is that physically motivated reference states become increasingly important for speed and accuracy as problems scale.
Change the initial point
Now that we have seen the effect of adding the reference state, we will look into what happens when we choose different initial points. In particular, we will use $\vec{\theta}_0 = (0, 0, 0, 0, 6, 0, 0, 0)$ and $\vec{\theta}_0 = (6, 6, 6, 6, 6, 6, 6, 6)$.
Remember that, as discussed when the reference state was introduced, the ideal solution would be found when all the parameters are $0$, so the first initial point should give fewer evaluations.
import time
start_time = time.time()
x0 = [0, 0, 0, 0, 6, 0, 0, 0]
x0_1_result = minimize(
    cost_func_vqe, x0, args=(raw_ansatz, observable_1, estimator), method="COBYLA"
)
end_time = time.time()
execution_time = end_time - start_time
print("INITIAL POINT 1:")
print(f"""Number of iterations: {x0_1_result.nfev}""")
print(f"""Time (s): {execution_time}""")
INITIAL POINT 1:
Number of iterations: 108
Time (s): 0.4492197036743164
Adjusting the initial point to $(6, 6, 6, 6, 6, 6, 6, 6)$:
import time
start_time = time.time()
x0 = 6 * np.ones(raw_ansatz.num_parameters)
x0_2_result = minimize(
    cost_func_vqe, x0, args=(raw_ansatz, observable_1, estimator), method="COBYLA"
)
end_time = time.time()
execution_time = end_time - start_time
print("INITIAL POINT 2:")
print(f"""Number of iterations: {x0_2_result.nfev}""")
print(f"""Time (s): {execution_time}""")
INITIAL POINT 2:
Number of iterations: 107
Time (s): 0.40889453887939453
By experimenting with different initial points, you might be able to achieve convergence faster and with fewer function evaluations.
Experimenting with different optimizers
We can adjust the optimizer using SciPy minimize's method argument; the full list of options is in the SciPy documentation. We originally used a constrained minimizer (COBYLA). In this example, we'll explore using an unconstrained minimizer (BFGS) instead.
import time
start_time = time.time()
result = minimize(
    cost_func_vqe, x0, args=(raw_ansatz, observable_1, estimator), method="BFGS"
)
end_time = time.time()
execution_time = end_time - start_time
print("CHANGED TO BFGS OPTIMIZER:")
print(f"""Number of iterations: {result.nfev}""")
print(f"""Time (s): {execution_time}""")
CHANGED TO BFGS OPTIMIZER:
Number of iterations: 117
Time (s): 0.31656408309936523
VQD example
Here we explicitly apply the Qiskit patterns framework.
Step 1: Map classical inputs to a quantum problem
Now, instead of looking for only the lowest eigenvalue of our observable, we will look for all eigenvalues $\lambda_k$ with $k = 0, \ldots, 3$ (where $\lambda_0 \leq \lambda_1 \leq \lambda_2 \leq \lambda_3$).
Remember that the cost functions of VQD are:

$$C_k(\vec{\theta}) = \langle \psi(\vec{\theta})|\hat{H}|\psi(\vec{\theta})\rangle + \sum_{j=0}^{k-1} \beta_j \left|\langle \psi(\vec{\theta})|\psi(\vec{\theta}_j)\rangle\right|^2$$

This is particularly important because a vector of penalty coefficients (in this case $\vec{\beta}$) must be passed as an argument when we define the VQD object.
Also, in Qiskit's implementation of VQD, instead of considering the effective observables described in the previous notebook, the fidelities are calculated directly via the ComputeUncompute algorithm, which leverages a Sampler primitive to sample the probability of obtaining $|0\rangle^{\otimes n}$ for the circuit $U^\dagger(\vec{\theta}_j)\,U(\vec{\theta}_k)\,|0\rangle^{\otimes n}$. This works precisely because this probability is

$$\left|\langle 0|^{\otimes n}\, U^\dagger(\vec{\theta}_j)\, U(\vec{\theta}_k)\, |0\rangle^{\otimes n}\right|^2 = \left|\langle \psi(\vec{\theta}_j)|\psi(\vec{\theta}_k)\rangle\right|^2 .$$