The Inner Product of Complex Vectors
A review and solution in Python
I recently began watching Leonard Susskind’s awesome introductory Quantum Mechanics course on YouTube, The Theoretical Minimum. Toward the end of lecture one, Susskind introduces the Dirac notation for complex vector spaces and shows how to find the inner product of two complex vectors.
The Dirac notation for two complex vectors A and B looks like this: \( |A\rangle \) and \( |B\rangle \).
These are assumed to be one-column vectors with the same number of rows. For example:
\( |A\rangle = \pmatrix{2 + 1j\\3 - 2j\\5 + 1j} \)
\( |B\rangle = \pmatrix{3 + 2j\\1 - 4j\\6 + 1j} \)
This “\( |\,\textit{symbol}\,\rangle \)” notation is called a ket, which is an unusual name for anything. It will make sense in a minute, but before we get to that, let’s see how we would implement what we just wrote in numpy.
Note that we are going to stick to one-dimensional numpy arrays of three elements each (in this case). Numpy handles vectors this way better and more simply for what we’ll eventually need to do, even if it does mean that operations like “transpose” end up being a no-op. Trying to work with vertical (column) vectors in numpy complicates the code and creates more work for us.
import numpy as np
# Create a complex numpy array.
A = np.array([2 + 1j, 3 - 2j, 5 + 1j])
# A = np.vstack(A)
print("A = \n", A)
# Create another array
B = np.array([3 + 2j, 1 - 4j, 6 + 1j])
# B = np.vstack(B)
print("B = \n", B)
# Show that transpose will end up being effectively a no-op
print(A.shape == A.transpose().shape, (A == A.transpose()).all())
A =
[2.+1.j 3.-2.j 5.+1.j]
B =
[3.+2.j 1.-4.j 6.+1.j]
True True
Back to the Abstract Math
Now that we have two vectors to work with, I promised to clear up the word ket. Let’s do that now. The Dirac notation for the inner product of the two vectors looks like this:
\( \langle A|B \rangle \)
Earlier we expressed the two vectors using ket notation:
\( |A\rangle \)
\( |B\rangle \)
Now it looks like something has happened to the “A” term. It has moved to the beginning of the brackets, so written on its own it would look like this:
\( \langle A| \)
This is called a “bra”, so Dirac notation is also referred to as “bra-ket” notation. If this is looking to you like a cute pun on the word “bracket,” you may congratulate yourself on paying attention. That’s exactly what it is!
But this bra is not just the same A vector “moved over.”
We need to do two things to it to make it into a bra, also known as the “dual” of A.
First, notationally, we transpose it, turning it into a row vector. So instead of this: \( \pmatrix{2 + 1j\\3 - 2j\\5 + 1j} \)
we get this: \( \pmatrix{2 + 1j, 3 - 2j, 5 + 1j} \)
We showed earlier that this is a no-op in Python, but we mention it here for completeness.
Next, we need to take the conjugate of each term, which (in case you’ve forgotten it like I had) simply means flipping the sign of the imaginary part.
Thus our final result for the \( \langle A| \) term would be: \( \pmatrix{2 - 1j, 3 + 2j, 5 - 1j} \).
Doing those two things to a complex vector A is called taking its “conjugate transpose”; the result is also called the “dual” of A. For a finite-dimensional vector space, it is also called the “Hermitian adjoint.”
So we can say that, given a finite complex vector A:
The “dual of A” = \( \langle A| \) = “the conjugate transpose of A” = “the Hermitian adjoint of A”.
I think that’s enough words for the same thing for now.
“Conjugate transpose” is nice for our purposes, as it tells us exactly what we need to call to construct a dual. As we showed before, the transpose is a no-op for our one-dimensional numpy vector, but we’ll leave it in as a bow to the formal math. (You can experiment with taking it out, and you’ll get the same answer for the inner product.)
# Find A's "conjugate transpose".
A_dual = A.transpose().conjugate()
A_dual
array([2.-1.j, 3.+2.j, 5.-1.j])
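As a quick aside, if we had kept the np.vstack call that we commented out earlier, A would be a true column vector and the transpose would no longer be a no-op. Here is a small sketch of that (the A_col names are just for illustration); it isn’t needed for anything that follows:
# With a true (3, 1) column vector, the conjugate transpose
# really does flip the shape from a column to a row.
A_col = np.vstack(A)                        # shape (3, 1)
A_col_dual = A_col.conjugate().transpose()  # shape (1, 3)
print(A_col.shape, A_col_dual.shape)
(3, 1) (1, 3)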
Find the inner product
With all this out of the way, we’re now ready to find the inner product:
result = np.inner(A_dual, B)
result
(50-10j)
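Just to double-check the machinery, here is the same inner product written out by hand. It is the bra’s entries times the ket’s entries, summed (np.inner itself does no complex conjugation, which is exactly why we conjugated A first):
\( \langle A|B \rangle = (2-1j)(3+2j) + (3+2j)(1-4j) + (5-1j)(6+1j) = (8+1j) + (11-10j) + (31-1j) = 50-10j \)
which matches the numpy result.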
Encapsulating it as a function
I wrote this article partly as a way to review the formal math notation and understand how it relates to Python, but if what you really wanted was just a simple Python function to do it for you, here you go.
def complex_inner_product(A, B):
    """Find the inner product of two equal-length complex numpy 1-D arrays."""
    assert A.shape == B.shape
    A_dual = A.conjugate()
    return np.inner(A_dual, B)
# Test it. We should get what we got above.
test_ok = result == complex_inner_product(A, B)
print(test_ok)
assert(test_ok)
True
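As an aside, numpy has a built-in that bundles both steps: np.vdot conjugates its first argument before taking the dot product. A quick sanity check (not a replacement for the exercise above) that it agrees with our function:
# np.vdot conjugates its first argument, so it computes <A|B> directly.
assert np.vdot(A, B) == result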
# Test the postulate that <A|B> is related to <B|A> as a conjugate,
# assuming they are complex numbers.
reversed = complex_inner_product(B, A)
assert reversed == result.conjugate()
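That is, \( \langle B|A \rangle = \overline{\langle A|B \rangle} \): reversing the order gives \( 50+10j \), the conjugate of our earlier result \( 50-10j \).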