Office: WH-115, Cell-Phone: 607-761-9850, Email: alex@math.binghamton.edu, Office Hours: MWF 1:00 - 2:00 and by appointment.

Class meeting times and locations: MWF 11:20 - 12:50 in SL-206.

Binghamton University follows the recommendations of public health experts to protect the health of students, faculty, staff and the community at large. Safeguarding public health depends on each of us strictly following requirements as they are instituted and for as long as they remain in force. Health and safety standards will be enforced in this course.

Current rules require everyone to wear a face covering that completely covers **both the nose and mouth**
while indoors (unless they are eating or alone in a private space like an office). A face shield is not an acceptable
substitute. Classroom safety requirements will continue to be based on guidance from public health authorities and
will be uniformly applied across campus. If these requirements change, a campus-wide announcement will be made to
inform the University.

**Instructors and students must follow all applicable campus requirements for use of face coverings.**
The University recommends and supports swift action and clear consequences since a student’s non-compliance risks the
safety of others. Instructors will immediately notify students of any in-class instance of inadvertent
non-compliance. Any in-class instance of deliberate non-compliance after warning will result in the student being
asked to leave the class immediately. Work missed because of ejection from class for non-compliance may only be
made up later with the instructor's permission. All students are responsible for bringing a mask to class in order
to comply with campus requirements. If you forget your face covering or it does not meet the requirements, you will
be asked to leave the room immediately. You may not return until you meet the requirement.

If a student does not comply with the requirements or the instructor’s direction, the instructor will immediately cancel the remainder of the class session and inform the dean’s office, which will work with the Student Records office to issue a failing grade (“F”) for the course regardless of when in the semester the incident occurs. The dean’s office will also inform the Office of Student Conduct. If a student’s refusal to comply is a second offense, the Office of Student Conduct may recommend dismissal from the University. If the rules for health and safety measures change, the campus will be notified and the new requirements will take effect.

``Schaum's Outline of Linear Algebra, 6th Edition'' by Seymour Lipschutz and Marc Lipson, McGraw-Hill Education; 6th edition (October 25, 2017), ISBN-13: 978-1260011449, ISBN-10: 1260011445.

We will cover as much of the book as we have time for, reviewing topics from elementary linear algebra as needed, but working over a general field. A detailed list of topics and the notation I will use can be found on the following webpage: Linear Algebra and Matrix Theory, along with links to Panopto recordings of my lectures on linear algebra from another semester and my written lecture notes.

More briefly, the topics covered include: Vector spaces over any field, subspaces, sums, direct sums, intersections, spanning sets, independence, basis, dimension, coordinates, linear transformations, matrix representing a linear transformation with respect to a pair of bases, kernel, range, injective, surjective, bijective, invertible, isomorphism, the dimension theorem, transition matrices, equivalence of matrices representing the same transformation with respect to different bases, operators on V, similarity, invariant subspaces, quotient spaces, isomorphism theorems, linear functionals, dual spaces, dual bases, adjoint operators, eigenvalues, eigenvectors, diagonalization of operators, characteristic polynomials, determinants, geometric and algebraic multiplicities, Jordan canonical form, rational canonical form, inner product spaces in real and complex cases, orthogonality, projections, Gram-Schmidt theorem, unitary, normal, and self-adjoint operators, spectral theorems.

Exam 1: Friday, Feb. 25, 2022.

Exam 2: Wednesday, March 30, 2022.

Exam 3: Friday, May 6, 2022.

**Final Exam: Wednesday, May 18, 2022, 5:40 - 7:40 PM, EB-J01.**

There will be three 90-minute exams during the semester and one Final Exam (2 hours long) during the scheduled Finals period. There will be ten 5-point quizzes distributed throughout the semester, once per week, but not in a week when an exam is given. The three semester exams will be worth 100 points each, and the (2-hour) Final Exam will be worth 150 points. The contents of each exam will be determined one week before the exam. The Final Exam will be comprehensive, covering the whole course. ANYONE UNABLE TO TAKE AN EXAM SHOULD CONTACT THE PROFESSOR AHEAD OF TIME TO EXPLAIN THE REASON. A MESSAGE CAN BE LEFT AT THE MATH DEPT OFFICE (777-2147) OR SENT TO HIS EMAIL ADDRESS. NO ONE SHOULD MISS THE FINAL!
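For reference, the maximum point total under this scheme (assuming all ten quizzes are given and no adjustments are made) can be sketched as a small calculation:

```python
# Point breakdown sketched from the grading scheme above.
exam_points = 3 * 100    # three 100-point semester exams
final_points = 150       # one 2-hour final exam
quiz_points = 10 * 5     # ten 5-point quizzes
total = exam_points + final_points + quiz_points
print(total)  # 500 possible points
```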

It is hoped that classes and exams will be held in-person, with whatever precautions are required due to COVID. Properly worn masks will probably still be required in Spring 2022. A student may test positive for COVID and have to be under quarantine, or a student may be too ill to take an exam or quiz (the flu is still a threat). To deal with those situations, I will make use of Gradescope so that students with a valid medical excuse can take an exam or quiz online remotely. That method of testing is not as secure as an in-person exam, so I have to rely on your honesty not to cheat in that situation. If I find that any cheating has occurred, the penalty will be severe. Information about how to use Gradescope is in the next section.

In case the university administration cancels classes because of a snow emergency, we can still have a virtual class meeting on zoom. Panopto recordings of lectures from a previous semester are already available with the posted lecture notes, but if we have a class meeting on zoom, it will be recorded and a link made available on this webpage. Since snow cancellations are rare, I will not set up a recurring zoom meeting for this class, but will instead schedule one at a time as needed. An email announcing each zoom meeting will be sent in the morning, including a link to the meeting.

Students who are unable to attend class in-person for exams or quizzes should find the exam or quiz on Gradescope, take it within the time limit, and submit the solutions as a single electronic pdf file. I highly recommend using CamScanner for this purpose. It is available from this website: CamScanner Website.

**All submitted documents should be sent to me using Gradescope according to the directions in
the following file:**
Gradescope Submitting Guide.

Here is the webpage for direct access to Gradescope: Gradescope login page.

A practice Exam 1 and its solutions can be found through the following link: Practice Exam 1 and its solutions.

A practice Exam 2 and its solutions can be found through the following link: Practice Exam 2 and its solutions.

A practice Exam 3 and its solutions can be found through the following link: Practice Exam 3 and its solutions.

A practice question for the final exam and its solution can be found through the following link: Final Practice Question and its solution.

A practice final exam will not be posted, so it is recommended that students review all questions and solutions from Exams 1, 2 and 3, especially the questions you got wrong.

Quizzes and their solutions will be posted here after they are given and graded. These will be very useful to correct your mistakes and to prepare for exams and the final.

Here is a link to Quiz 1 and its solutions.

Here is a link to Quiz 2 and its solutions.

Here is a link to Quiz 3 and its solutions.

Here is a link to Quiz 4 and its solutions.

Here is a link to Quiz 5 and its solutions.

Here is a link to Quiz 6 and its solutions.

Here is a link to Quiz 7 and its solutions.

Here is a link to Quiz 8 and its solutions.

Here is a link to Quiz 9 and its solutions.

Here is a link to Quiz 10 and its solutions.

Exams and their solutions will be posted here after they are given and graded. These will be very useful to correct your mistakes and to prepare for later exams and the final.

Here is a link to Exam 1 and its solutions and a graphical display of the grade distribution for Exam 1 and letter grade interpretations of the numerical scores: Exam 1 grade distribution.

Here is a link to Exam 2 and its solutions and a graphical display of the grade distribution for Exam 2 and letter grade interpretations of the numerical scores: Exam 2 grade distribution.

Here is a link to Exam 3 and its solutions and a graphical display of the grade distribution for Exam 3 and letter grade interpretations of the numerical scores: Exam 3 grade distribution.

Here is a link to a graphical display of the grade distribution for the Final Exam and letter grade interpretations of the numerical scores: Final Exam grade distribution.

Here is a link to a graphical display of the grade distribution for the course totals and letter grade interpretations of the numerical scores: Course totals grade distribution.

The numerical scores for each exam will be given a letter grade interpretation, as will the Total of all points earned. The letter grades on the exams indicate how a student is doing, and will be taken into consideration in making the interpretation for the Totals. The course grade will be determined by the interpretation of Total points earned. Only borderline cases will be subject to further adjustment based on Homework. Any cases of cheating will be subject to investigation by the Academic Honesty Committee of Harpur College.

For each section of material covered there may be an assignment of problems from the textbook or some exercises may be given in class. They will be due one week from the day they are assigned (or the next scheduled class meeting after that if there is a holiday). Late assignments will be accepted at the discretion of the Professor. Assignments will be examined by the professor, who will record the fact that an assignment was attempted. QUESTIONS ABOUT PROBLEMS SHOULD BE ASKED OF THE PROFESSOR AT THE BEGINNING OF CLASS. DO NOT DEPEND ON THE PROFESSOR TO FIND AND CORRECT YOUR MISTAKES. The number of homeworks attempted will be considered as a factor in determining your course grade if you are a borderline case in the Total curve.

Class attendance is required at both the lectures and the discussion sessions, and sleeping in class does not count as being there. Questions are welcomed at any time during lectures. At the start of each class be ready to ask questions about homework problems or about the previous lecture. I hope there will be a substantial amount of participation by the students, and I want to create an atmosphere where you all feel very free to ask questions and make comments. If anyone feels that I have not answered a question clearly, completely, and with respect and consideration for the student who asked it, I want you to let me know about it immediately so I can correct the problem. You can do this in class or in my office hours, verbally or in writing, on paper or by email, or by whatever means makes you most comfortable, but with enough detail that I understand what you think was the problem. It will be too late to help if you only tell me at the end of the course.

The material is a combination of theory and calculation, and it is necessary to
understand the theory in order to do sensible calculations and interpret
them correctly. There is a significant difference between training and education,
and I feel strongly that our goal at this university is to educate you, not just to
train you to do computations. Theory is not presented to impress you with my knowledge
of the subject, but to give you the depth of understanding expected of an adult with a
university education in this subject. I will try to give you the benefit of my 44 years
of experience teaching mathematics at the university level, but it will require your
consistent concentrated study to master this material. While much learning can take place
in the classroom, **a significant part of it must be done by you outside of class.**
Using the book, class notes, and homework exercises, **only you can achieve success in this course.**
Students who do not take this course seriously, who do not take the advice I give, are
not likely to be rewarded at the end. I am here to help and guide you, and I also make
and grade the exams to judge how much you have learned, but grades are earned by you, not
given by me. Exams will be a combination of theory questions and
calculations appropriate for a course of this level.

Students requesting disability-related accommodations should register with the Services for Students with Disabilities office (SSD). They are the appropriate entity on campus to determine and authorize disability-related accommodations. The office is located in the University Union, room 119. Phone number 607-777-2686. For students already registered with SSD, please provide your academic accommodation letter as soon as possible so that we can discuss the implementation of your accommodations.

The following link to a webpage of the Dean of Students contains important contact information for sources of help if you are having personal problems. Dean of Students Help Page

This class is scheduled to meet three times per week for a total of 270 minutes per week. In addition to attending all classes, you should expect to need 8 to 10 hours per week outside of the class meetings to study the material and do homework.

During classes students are expected to behave according to university rules. Some students feel free to use their cellphones during class, but most professors find that insulting, and it certainly prevents students from concentrating on the lecture. Any professor who sees a student using a cellphone instead of paying attention can ask the student to put the cellphone away, or can take it until the end of class. Addiction to cellphones is a serious condition affecting many people! Students may use cellphones to photograph notes from the board. Some students may use laptop computers to take notes, but they should be careful not to use them for internet browsing during class. The Professor has the final decision about what to allow in the classroom.

Some homework assignments may be made in class, but most are already posted here. Many assigned problems below are solved in the book, so you should try to solve them yourself first, and then look at the book's solutions. There are some typographical errors in the textbook. To get credit for trying problems, you must write them up and turn them in. Some of the problems below are reviewing elementary linear algebra, but some may include new material.

Chapter 1: 1.5, 1.6

Chapter 2: 2.15, 2.16, 2.18, 2.19, 2.20, 2.25, 2.28, 2.32

Chapter 3: 3.12, 3.24, 3.26, 3.29, 3.30, 3.32.

Chapter 4: 4.4, 4.5, 4.6, 4.7, 4.9, 4.10, 4.11, 4.12, 4.14, 4.19, 4.21, 4.25, 4.26, 4.29, 4.30, 4.31, 4.33, 4.34, 4.35, 4.40 (review of fundamental theorems), 4.53, 4.54, 4.55, 4.56, 4.58, 4.59, 4.62, 4.63, 4.64.

Chapter 5: 5.6, 5.7, 5.10, 5.13, 5.15, 5.16, 5.17, 5.18, 5.23, 5.28, 5.30, 5.31, 5.34, 5.36, 5.37.

Chapter 6: 6.1, 6.2, 6.4, 6.6, 6.7, 6.8, 6.10, 6.11, 6.12, 6.29, 6.30, 6.31, 6.33, 6.36.

Chapter 7: 7.3, 7.5, 7.6, 7.8, 7.9, 7.11, 7.13, 7.14, 7.15, 7.16, 7.17, 7.18, 7.19, 7.21, 7.22, 7.25, 7.27, 7.28, 7.29, 7.33, 7.34, 7.35, 7.36, 7.39, 7.42, 7.43, 7.49, 7.50, 7.51.

Chapter 8: 8.2, 8.3, 8.4 (note error in answer to part b, should be -24), 8.6, 8.13, 8.16, 8.17, 8.22-8.34 (These are theory problems which have been discussed in class or covered in elementary linear algebra. They are for review.)

Chapter 9: 9.1-9.9, 9.11, 9.12, 9.14-9.17, 9.24, 9.25, 9.27, 9.28, 9.31.

Chapter 10: 10.3, 10.5-10.20 (theory questions), 10.29, 10.31, 10.32, (additional problems about quotient spaces): 10.23-10.27.

The following problems are suggested for study if we have time to cover Chapters 11, 12 or 13.

Chapter 11: 11.1, 11.2, 11.4, 11.5. Note the difference between my definition and notation and the book's for the transition matrix from basis S to basis T.

Chapter 12: 12.1, 12.2, 12.3, 12.5, 12.6, 12.7, 12.11, 12.17, 12.19, 12.20.

Chapter 13: 13.1, 13.2, 13.8, 13.10, 13.11, 13.15, 13.17, 13.18.

Here is a list of topics covered in lectures which may be covered on Exam 1.

Linear Systems, solving by row reduction of the augmented matrix [A|B] and interpretation in terms of free and dependent variables.

Consistent vs. inconsistent systems. Homogeneous systems AX=O.

Elementary row operations and reduction to Reduced Row Echelon Form (RREF).
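As an illustration (not part of the course materials), the row-reduction procedure above can be sketched in Python using exact rational arithmetic; the `rref` helper below is a minimal version for small examples:

```python
from fractions import Fraction

def rref(M):
    """Reduce a matrix (a list of rows of Fractions) to Reduced Row Echelon Form."""
    M = [row[:] for row in M]
    rows, cols = len(M), len(M[0])
    pivot_row = 0
    for col in range(cols):
        # find a row at or below pivot_row with a nonzero entry in this column
        pr = next((r for r in range(pivot_row, rows) if M[r][col] != 0), None)
        if pr is None:
            continue
        M[pivot_row], M[pr] = M[pr], M[pivot_row]
        pivot = M[pivot_row][col]
        M[pivot_row] = [x / pivot for x in M[pivot_row]]   # scale the pivot to 1
        for r in range(rows):
            if r != pivot_row and M[r][col] != 0:
                factor = M[r][col]
                M[r] = [a - factor * b for a, b in zip(M[r], M[pivot_row])]
        pivot_row += 1
        if pivot_row == rows:
            break
    return M

# Augmented matrix [A|B] for the system: x + 2y = 5, 3x + 4y = 11
aug = [[Fraction(1), Fraction(2), Fraction(5)],
       [Fraction(3), Fraction(4), Fraction(11)]]
R = rref(aug)   # rows reduce to [1 0 | 1] and [0 1 | 2], so x = 1, y = 2
```

Both variables are dependent here; a system with more unknowns than pivots would leave free variables.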

Matrices, the set of all mxn matrices with entries in field F, F^{m}_{n}, addition of
matrices, multiplication of a matrix by a scalar in F.

The span of a set of vectors in F^{m}_{n} as the set of all linear combinations
from that set.

Matrix shapes, names of special patterns.

Rank of a matrix.

How an mxn matrix A determines a function L_{A}: F^{n}
--> F^{m} by L_{A}(X) = AX.

Linearity properties of the function L_{A}, that is,
L_{A}(X+Y) = L_{A}(X) + L_{A}(Y) for any X, Y in F^{n},
and L_{A}(rX) = r L_{A}(X) for any X in F^{n} and any r in F.
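These two linearity properties can be checked numerically for any particular matrix; here is a small sketch (an illustration only, with an arbitrary 3x2 matrix):

```python
from fractions import Fraction

def mat_vec(A, X):
    """Compute L_A(X) = AX for an m x n matrix A and a length-n vector X."""
    return [sum(a * x for a, x in zip(row, X)) for row in A]

A = [[1, 2], [3, 4], [5, 6]]    # a 3x2 matrix, so L_A : F^2 --> F^3
X, Y, r = [1, -1], [2, 3], Fraction(5)

# Additivity: L_A(X + Y) = L_A(X) + L_A(Y)
lhs = mat_vec(A, [x + y for x, y in zip(X, Y)])
rhs = [u + v for u, v in zip(mat_vec(A, X), mat_vec(A, Y))]
assert lhs == rhs

# Homogeneity: L_A(rX) = r L_A(X)
assert mat_vec(A, [r * x for x in X]) == [r * u for u in mat_vec(A, X)]
```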

Definition of Ker(L_{A}) and of Range(L_{A}) = Im(L_{A}) and
how to find them by row reduction methods.

Properties of general functions: one-to-one (injective), onto (surjective), both (bijective), invertible. Composition of functions, associativity of composition.

Connection between properties of matrix A and function L_{A}.

Definition of matrix multiplication AB through the definition L_{A} composed with L_{B} equals L_{AB}. Lemma that L_{A} = L_{B} iff A=B.

Formula for the matrix product of an mxn matrix A with an nxp matrix B giving an mxp matrix
C = AB whose columns are A(Col_{k}(B)) for k = 1, ..., p.

Definition of standard basis vectors e_{1}, ... , e_{n} in F^{n} and lemma that Ae_{j} = Col_{j}(A), so AX is the sum of x_{j} Col_{j}(A).
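The column description of the matrix product can be sketched directly in code (an illustration, not course material): each column of C = AB is A applied to the corresponding column of B, and Ae_{j} picks out the j-th column of A.

```python
def mat_vec(A, X):
    return [sum(a * x for a, x in zip(row, X)) for row in A]

def mat_mul(A, B):
    """Build C = AB column by column: Col_k(C) = A(Col_k(B))."""
    cols_B = list(zip(*B))                       # columns of B
    cols_C = [mat_vec(A, col) for col in cols_B]
    return [list(row) for row in zip(*cols_C)]   # reassemble C from its columns

A = [[1, 2], [3, 4]]          # 2x2
B = [[5, 6, 7], [8, 9, 10]]   # 2x3, so C = AB is 2x3
C = mat_mul(A, B)
assert C == [[21, 24, 27], [47, 54, 61]]

# A e_1 is the first column of A
assert mat_vec(A, [1, 0]) == [1, 3]
```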

Abstract definition of a vector space V over a field F. Examples, F^{m}_{n}
is a vector space. For any set S, the set Fun = {f : S ---> F} of
all functions from S to the field F, is a vector space.

Definition of a linear transformation L : V ---> W from a vector space to a vector space. Ker(L), Range(L) = Im(L).

Basic facts about vector spaces and about linear transformations (maps), and examples.

Definition and some examples of subspaces.

Definition of when a square matrix is
invertible, uniqueness of the inverse when it exists, and an
algorithm to decide and find it by row reduction of [A | I_{n}].
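The row-reduction algorithm for the inverse can be sketched as follows (a minimal illustration with exact rational arithmetic; the code assumes the input is invertible and does not handle the singular case):

```python
from fractions import Fraction

def inverse(A):
    """Invert A by row-reducing the augmented matrix [A | I_n] to [I_n | A^{-1}]."""
    n = len(A)
    M = [[Fraction(x) for x in row] + [Fraction(int(i == j)) for j in range(n)]
         for i, row in enumerate(A)]
    for col in range(n):
        pr = next(r for r in range(col, n) if M[r][col] != 0)  # assumes A invertible
        M[col], M[pr] = M[pr], M[col]
        piv = M[col][col]
        M[col] = [x / piv for x in M[col]]               # scale pivot to 1
        for r in range(n):
            if r != col:
                f = M[r][col]
                M[r] = [a - f * b for a, b in zip(M[r], M[col])]
    return [row[n:] for row in M]                        # right half is A^{-1}

A = [[2, 1], [5, 3]]
Ainv = inverse(A)   # det(A) = 1, so the inverse has integer entries
```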

Definition of transpose of a matrix, of symmetric and anti-symmetric matrices.

Elementary matrices and how they can be used to achieve elementary row or column operations.

The rules of matrix algebra.

The span of a set of vectors S in a vector space V, and why it forms a subspace of V.

How to check that a subset W in V is a subspace of V.

Linear independence or dependence of a subset of V, definition and method of determining that.

Theorems and examples about spanning and independence, connection with rank of a matrix.

Definition of a basis for a vector space, and how to decide if a subset is a basis of V.

Finding a basis for important examples of subspaces, Ker(L), Range(L), where L:V---> W is a linear map.

Dimension of V as the number of vectors in any basis for V.

The standard basis for several examples of vector spaces, including
all the F^{m}_{n} examples and the vector space of polynomials with
degree at most k.

Row-space and Column-space of a matrix, and their dimension related to the rank of the matrix.

Information about the linear transformation
L_{A}: F^{n}--> F^{m} associated with rank(A).

The relationship between the dimensions of Ker(L), Range(L) and V for L:V---> W.

Extending an independent set to a basis, cutting down a spanning set to a basis.

Use of a basis S of V to give
coordinates with respect to S for each vector v in V. How that coordinate function, [v]_{S},
is a linear map from V to
F^{n} when a basis S for V consists of n vectors.

Transition matrices which give the relationship between the coordinates of a vector
v with respect to different bases. If S and T are two bases of the same vector space, V,
then the transition matrix from S to T is the square invertible matrix _{T}P_{S} such that
[v]_{T} = _{T}P_{S} [v]_{S}.
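As a small illustration (not course material), take V = R^2, let S = {(1,1), (1,-1)}, and let T be the standard basis E. In that case the transition matrix _{E}P_{S} simply has the vectors of S as its columns, and [v]_{E} = _{E}P_{S} [v]_{S}:

```python
def mat_vec(P, X):
    return [sum(p * x for p, x in zip(row, P_x)) for row, P_x in ((r, X) for r in P)]

def mv(P, X):
    # clearer version of matrix-vector product used below
    return [sum(p * x for p, x in zip(row, X)) for row in P]

# Basis S of R^2, written in standard coordinates
s1, s2 = [1, 1], [1, -1]
# Transition matrix from S to the standard basis E: columns are s1 and s2
P = [[1, 1], [1, -1]]

coords_S = [3, 2]                               # [v]_S, i.e. v = 3*s1 + 2*s2
v = [3 * a + 2 * b for a, b in zip(s1, s2)]     # v in standard coordinates
assert mv(P, coords_S) == v                     # [v]_E = _{E}P_{S} [v]_S
```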

How to represent a general linear map L:V---> W with respect to a choice of basis S in V and basis T in W by a matrix, that is,

using coordinates
with respect to S, [ . ]_{S}, and coordinates with respect to T, [ . ]_{T}, to find
a matrix _{T}[L]_{S}, such that _{T}[L]_{S} [v]_{S} =
[L(v)]_{T}.

The algorithm for finding that matrix by a row reduction of [T | L(S)].

If S and S' are two bases of V, and T and T' are two bases of W, and L:V---> W then there is a
relationship between _{T}[L]_{S}, the matrix representing L from S to T, and
_{T'}[L]_{S'}, the matrix representing L from S' to T'.

That relationship is _{T'}[L]_{S'} =
_{T'}Q_{T} _{T}[L]_{S} _{S}P_{S'}
where _{S}P_{S'} is the transition matrix from S' to S, and
_{T'}Q_{T} is the transition matrix from T to T'.

Row/Column equivalence of two mxn matrices, B = QAP, for appropriate size invertible matrices Q and P.

Block Identity Form (BIF) as best matrix Row/Column equivalent to a given matrix A, best matrix representing L:V-->W given choice of bases S' and T'.

The concept of isomorphism (bijective linear map) and its properties.

Lin(V,W) = {L:V-->W | L is linear} is a vector space under addition of functions and scalar multiplication of a function.

Isomorphism between F^{m}_{n} and Lin(F^{n},F^{m}) by taking matrix A
to linear map L_{A}.

End(V) = Lin(V,V) as a ring under + and composition, as well as a vector space, making it an algebra.

Polynomial ring F[t] another example of an algebra. Recursive definition of non-negative powers of a square
matrix, A^{n}, and of an L in End(V), L^{n}.

Evaluation of any polynomial f(t) in F[t] at a square matrix A or at an L in End(V), f(A) and f(L).

Concepts and facts about F[t], for example, degree of a nonzero polynomial, Euclidean Algorithm in F[t], root f(a) = 0 for a in F iff there is a linear factor (t-a) in f(t), irreducible polynomials in F[t].

Discussion of when a polynomial f(t) in F[t] is satisfied by a square matrix A, f(A) = 0 matrix, or satisfied by an L in End(V), f(L) = 0 map on V.

Material on determinants, their definition using permutations or by cofactor expansion, their properties, and methods of calculating them (definition by permutations or by cofactor expansions, crosshatching method for matrices of size n = 2 or n = 3 ONLY, using row operations).

The use of determinant to get the characteristic polynomial, det(tI_{n} - A), whose roots
give the eigenvalues of A, and whose expression as a product of powers of distinct linear factors
gives the algebraic multiplicities.

Eigenspaces, their properties, and how to decide if a matrix can be diagonalized or not. Theorems about eigenspaces and diagonalizability.
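For a 2x2 matrix the characteristic polynomial det(tI - A) = t^2 - tr(A)t + det(A) can be worked out by hand; here is an illustrative check (not course material) with a matrix chosen to have integer eigenvalues:

```python
A = [[4, 1], [2, 3]]
tr = A[0][0] + A[1][1]                        # trace = 7
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]   # determinant = 10
char = lambda t: t * t - tr * t + det         # det(tI - A) = t^2 - 7t + 10 = (t-2)(t-5)
assert char(2) == 0 and char(5) == 0          # the eigenvalues are 2 and 5

# An eigenvector for eigenvalue 2 lies in Ker(A - 2I)
v = [1, -2]
Av = [A[0][0] * v[0] + A[0][1] * v[1], A[1][0] * v[0] + A[1][1] * v[1]]
assert Av == [2 * x for x in v]               # A v = 2 v
```

Since the two eigenvalues are distinct, this matrix is diagonalizable.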

Independence of the union of bases for distinct eigenspaces.

Geometric multiplicity and its relationship to algebraic multiplicity for each eigenvalue.

L invariant subspace W of V for L in End(V), restriction of L to W, properties of the restricted L.

Classical adjoint of square matrix A, adj(A), and property that A adj(A) = det(A)I_{n} = adj(A) A.

Cayley-Hamilton theorem, characteristic polynomial of A is satisfied by A.
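For a 2x2 matrix Cayley-Hamilton says A^2 - tr(A)A + det(A)I = 0, which is easy to verify by hand or in a short sketch (an illustration only):

```python
def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[1, 2], [3, 4]]
tr, det = 1 + 4, 1 * 4 - 2 * 3      # tr(A) = 5, det(A) = -2
A2 = mat_mul(A, A)                  # A^2 = [[7, 10], [15, 22]]
I = [[1, 0], [0, 1]]

# A satisfies its own characteristic polynomial: A^2 - 5A - 2I = 0
result = [[A2[i][j] - tr * A[i][j] + det * I[i][j] for j in range(2)]
          for i in range(2)]
assert result == [[0, 0], [0, 0]]
```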

Definition of minimal polynomial of A, m_{A}(t), and its properties and relationship to characteristic
poly of A.

Sums and direct sums of subspaces of V. dim(W_{1} + W_{2}) = dim(W_{1}) + dim(W_{2}) - dim(W_{1} ∩ W_{2}).

Generalized eigenspaces for L in End(V) assuming all eigenvalues of L are in the field F.

Primary decomposition theorem, that V is a direct sum of the generalized eigenspaces for L.

L is diagonalizable iff its minimal polynomial is a product of distinct linear factors.

Quotient spaces V/W for W any subspace of V. Definitions and basic theorems, for example: For L:V-->W, U subspace of V, there exists an induced linear map L^{-}:V/U-->W such that L equals L^{-} composed with the projection map from V onto V/U iff U is contained in Ker(L).

Definition of when vector spaces are isomorphic. First and Second Isomorphism Theorems.

For L:V-->V, W an L-invariant subspace of V, existence of the induced linear map L^{-}:V/W-->V/W, and theorem about block upper triangular form of a matrix representing such an L obtained by extending a basis of W to a basis of V. Applications to characteristic and minimal polynomial of L.

Jordan blocks and Jordan canonical form theorem for L:V-->V with all eigenvalues in the field F.

Correspondence between a basic Jordan block and a special basis of an L-invariant subspace of V coming from a chain of generalized eigenspaces for L.

The combinatorics of Jordan blocks for a fixed eigenvalue, connection with the partition function p(n).
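The connection with p(n) is that the possible Jordan block structures for an eigenvalue of algebraic multiplicity m correspond to the partitions of m (the block sizes sum to m). A short recursive count (an illustration only, not course material):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def partitions(n, max_part=None):
    """Number of ways to write n as a sum of positive integers, order ignored."""
    if max_part is None:
        max_part = n
    if n == 0:
        return 1
    # choose the largest part k, then partition the remainder with parts <= k
    return sum(partitions(n - k, min(k, n - k)) for k in range(1, max_part + 1))

# An eigenvalue of algebraic multiplicity m admits p(m) Jordan block structures
assert [partitions(m) for m in range(1, 6)] == [1, 2, 3, 5, 7]
```

For example, multiplicity 4 allows 5 structures: 4, 3+1, 2+2, 2+1+1, 1+1+1+1.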

The meaning of the number of basic Jordan blocks for a fixed eigenvalue, and of the size of the largest block.

Problems about counting all possible Jordan forms when given characteristic and minimal polynomials of L.

Problems about finding a Jordan form basis for L:V-->V when dim(V) is small.

Assuming L:V-->V has characteristic polynomial factored into irreducible polynomials to powers, get results about the Primary Decomposition Theorem and the Rational Canonical Form (RCF) matrix representing L.

Definition of cyclic subspace Z(v,L) generated by v and L, and of the Companion matrix C(f(t)) of a monic polynomial f(t).

Theorem that the characteristic and minimal polynomials of a Companion matrix C(f(t)) are both equal to f(t).

RCF as a block diagonal form matrix representing L made from Companion matrix blocks, each coming from a cyclic subspace.

The combinatorics of Companion matrix blocks related to the characteristic and minimal polynomials of L.

The standard dot product on R^{n} and its properties: bilinear, symmetric, positive definite.

Definition of length of a vector, ||v||, the Cauchy-Schwarz inequality, definition of angle between vectors using the standard dot product.

Definition of orthogonal (perpendicular) sets of vectors in R^{n}, and of orthonormal sets of vectors.

Theorem: Any orthogonal set of non-zero vectors is independent.

Theorem: With respect to an orthogonal basis S of R^{n}, the coordinates of any vector v with respect to S can be computed using the dot product.

Definition of an orthogonal nxn real matrix, A transpose equals A inverse, A^{T} = A^{-1}.

Theorem: A is an orthogonal matrix iff the set of its columns forms an orthonormal set in R^{n}.
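This theorem can be checked on a concrete example (an illustration only); the matrix below is a rational orthogonal matrix built from the 3-4-5 right triangle, so exact arithmetic applies:

```python
from fractions import Fraction as Fr

# A rational orthogonal matrix: a rotation with cos = 3/5, sin = 4/5
A = [[Fr(3, 5), Fr(-4, 5)],
     [Fr(4, 5), Fr(3, 5)]]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

cols = list(zip(*A))                  # the columns of A
assert dot(cols[0], cols[0]) == 1     # each column is a unit vector
assert dot(cols[1], cols[1]) == 1
assert dot(cols[0], cols[1]) == 0     # the columns are orthogonal

At = [list(r) for r in zip(*A)]       # A^{T}
AtA = [[dot(row, col) for col in cols] for row in At]
assert AtA == [[1, 0], [0, 1]]        # A^{T} A = I, so A^{T} = A^{-1}
```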

Applications of the standard dot product to geometry in R^{n}, projection maps.

Projection of one vector onto another. Projection of any vector v in R^{n} into a given subspace W.

General solution for Proj_{W}(v) by solving a linear system.

A better solution using projection maps if you have an orthogonal basis of W.

Gram-Schmidt orthogonalization process in R^{n}, a method to convert a basis of subspace W into an orthogonal basis of W.

Normalization to unit vectors then gives an orthonormal basis of W.
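The Gram-Schmidt process can be sketched as follows (an illustration with exact rational arithmetic, not course material); each new vector has its projections onto the earlier orthogonal vectors subtracted off:

```python
from fractions import Fraction

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def gram_schmidt(basis):
    """Convert a basis of a subspace W into an orthogonal basis of W."""
    ortho = []
    for v in basis:
        w = list(v)
        for u in ortho:
            c = Fraction(dot(v, u), dot(u, u))          # projection coefficient
            w = [wi - c * ui for wi, ui in zip(w, u)]   # subtract Proj_u(v)
        ortho.append(w)
    return ortho

basis = [[1, 1, 0], [1, 0, 1]]
u1, u2 = gram_schmidt(basis)     # u2 = (1/2, -1/2, 1)
assert dot(u1, u2) == 0          # the resulting set is orthogonal
```

Dividing each u_i by its length ||u_i|| would then give an orthonormal basis (which generally requires square roots, hence the exact check stops at orthogonality).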

Theorems about orthogonal matrices in relation to the standard dot product of R^{n}.

Theorem: For any real symmetric matrix, A, and any two distinct eigenvalues of A, their eigenspaces are orthogonal.

Theorem: For any real symmetric matrix, A, all its eigenvalues are real.

Theorem: For any real symmetric matrix, A, there is an orthogonal matrix, P, such that P^{T}AP = D is diagonal.

Pythagorean Theorem and the Triangle inequality in R^{n}.

Standard dot product in complex n-space C^{n}. Properties: sesquilinear, conjugate symmetric, positive definite.

Hermitian conjugate A^{*} = A^{H}, of a complex matrix, A.

Definitions: A is called Hermitian when A^{*} = A, skew-Hermitian when A^{*} = -A, and unitary when A^{*} = A^{-1}.

Theorem: A is unitary iff the set of its columns forms an orthonormal basis of C^{n}.

Theorem: All eigenvalues of a Hermitian matrix are real.

Orthogonal complement ``S^{perp}" of a subset, S, in R^{n} or C^{n}, equals span(S)^{perp}.

Definition of an orthogonal direct sum of subspaces. For any subspace, W, in F^{n}, W + W^{perp} = F^{n}.

For real symmetric matrix, A, the sum of its distinct eigenspaces is an orthogonal direct sum.

Definition: Complex nxn matrix A is called normal when A A^{*} = A^{*} A, that is, A commutes with its conjugate transpose.

General inner product space (I.P.S) over the real numbers.

Cauchy-Schwarz inequality for any I.P.S. and how it gives a geometry on the space.

For a finite dimensional I.P.S. how the inner product is given by the matrix of inner products of pairs of basis vectors: If S = {v_{1},...,v_{n}} is a basis of I.P.S. V and M = [(v_{i},v_{j})] is the matrix of inner products of pairs of basis vectors from S, then for any pair of vectors u and v from V, we have the formula for the inner product (u,v) = [u]_{S}^{Tr} M [v]_{S}.

For any matrix M, the formula above defines a bilinear form on V, that is, a bilinear function (. , .) : VxV --> R.

The bilinear form defined by a matrix M is symmetric iff M is a symmetric matrix.

The symmetric bilinear form defined by a symmetric matrix, M, is positive definite iff the matrix M is positive definite, that is, when X^{Tr} M X >= 0 for every nx1 column vector X, and it equals 0 iff X is the zero column vector.

Function space examples of I.P.S. where the inner product is defined by a definite integral.

Every subspace of an I.P.S. is also an I.P.S. with the inner product just restricted to the subspace.

Examples using spaces of continuous functions on an interval, and subspaces of polynomial functions.

Use of Gram-Schmidt process in an I.P.S. to get an orthogonal basis of a subspace.

How the matrix, M, representing an inner product with respect to one basis, S, changes when the inner product is represented with respect to a different basis, T.

If P is the transition matrix from T to S, then M changes to P^{Tr} M P, so we call this relation
between matrices ``congruence".

The ``best" matrix congruent to a given positive definite matrix M can be shown to be a diagonal matrix with all diagonal entries (eigenvalues) strictly positive.

This is an application of orthogonal diagonalization of a real symmetric matrix.

Properties of general bilinear forms, definition of rank of a bilinear form as the rank of any matrix representing the form.

Types of bilinear forms, such as, non-degenerate or degenerate, symmetric or alternating or skew-symmetric.

Classification of alternating bilinear forms, of symmetric bilinear forms.

The vector space of all bilinear forms on V, and how the matrix representing any form with respect to a given basis provides an isomorphism with the vector space of all nxn matrices, where n = dim(V).

Quadratic forms and their relationship with bilinear forms.

Sylvester's Law of Inertia classifying all quadratic forms on a finite-dimensional real vector space.

Generalization of I.P.S. to complex vector spaces, Hermitian forms, modified properties: sesquilinear, conjugate symmetric, positive definite.

Dual spaces, V^{*} = Lin(V, F), and specific examples of their elements, usually called linear functionals.

Given any basis, S = {v_{1},...,v_{n}}, of V, we have the dual basis
S^{*} = {f_{1},...,f_{n}} in the dual space V^{*} such that
f_{i}(v_{j}) = delta_{ij}.

Examples of how to find dual basis, S^{*}, when given basis, S.
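As a concrete example (an illustration only, not course material): for a basis S of F^2, the dual basis functionals, written in standard coordinates, are the rows of the inverse of the matrix whose columns are the vectors of S. Here the 2x2 inverse is written out directly:

```python
from fractions import Fraction

# Basis S of F^2 (F = rationals here), written as column vectors
s1, s2 = [1, 1], [1, -1]

# The matrix with columns s1, s2 has det = -2; the rows of its inverse
# give the dual basis functionals f_1, f_2 in coordinates.
f1 = [Fraction(1, 2), Fraction(1, 2)]
f2 = [Fraction(1, 2), Fraction(-1, 2)]

def apply(f, v):
    """Evaluate the functional f (a coordinate row) at the vector v."""
    return sum(a * b for a, b in zip(f, v))

# Defining property of the dual basis: f_i(v_j) = delta_ij
assert apply(f1, s1) == 1 and apply(f1, s2) == 0
assert apply(f2, s1) == 0 and apply(f2, s2) == 1
```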

Theorem showing how a basis, S, of V and its dual basis, S^{*}, give formulas for the coordinates
of any v in V with respect to S, and of any f in V^{*} with respect to S^{*}.

Theorem: If S and T are two bases of V, and S^{*} and T^{*} are their dual bases in
V^{*}, then the relationship between the two transition matrices is
_{S*}P_{T*} = (_{T}P_{S})^{Tr}.

Double dual space, (V^{*})^{*}, and the natural map, ^{^} : V --> (V^{*})^{*} defined by v^{^}(f) = f(v) for any v in V and any f in V^{*}.

For any subset S in V, definition of the annihilator of S: Ann(S) = S^{0} = {f in V^{*} | f(s) = 0 for all s in S}.

Ann(S) = Ann(span(S)) is a subspace of V^{*}, and some basic facts about it.

Definition of the transpose of a linear map L : V --> U as the linear map L^{*} : U^{*} --> V^{*} such that L^{*}(f) = f ⚬ L, the composition of f and L.

Theorems about L^{*}. If S is a basis of V and T is a basis of U and A represents L from S to T, and B
represents L^{*} from T^{*} to S^{*}, then B = A^{Tr}.

Additional results about linear operators on an inner product space, V, whether the field is the reals or the complex numbers.

Last modified on 5-20-2022.