A Course in Algebra. E. B. Vinberg. Graduate Studies in Mathematics, Volume 56. American Mathematical Society.


Vinberg has written an algebra book that is excellent both as a classroom text and for self-study. It starts with the most basic concepts and builds from there.

This is a comprehensive textbook on modern algebra written by an internationally renowned specialist.

It covers material traditionally found in advanced undergraduate and basic graduate courses and presents it in a lucid style. The author includes almost no technically difficult proofs and, reflecting his point of view on mathematics, tries wherever possible to replace calculations and difficult deductions with conceptual proofs and to associate geometric images with algebraic objects.

The effort spent on the part of students in absorbing these ideas will pay off when they turn to solving problems outside of this textbook. Another important feature is the presentation of most topics on several levels, allowing students to move smoothly from initial acquaintance with the subject to thorough study and a deeper understanding. Basic topics are included, such as algebraic structures, linear algebra, polynomials, and groups, as well as more advanced topics, such as affine and projective spaces, tensor algebra, Galois theory, Lie groups, and associative algebras and their representations.

Some applications of linear algebra and group theory to physics are discussed. The book is written with extreme care and contains over exercises and 70 figures.

It is ideal as a textbook and also suitable for independent study for advanced undergraduates and graduate students. Readership: advanced undergraduates, graduate students, and research mathematicians interested in algebra. This is a masterly textbook on basic algebra.

It is, at the same time, demanding and down-to-earth, challenging and user-friendly, abstract and concrete, concise and comprehensible, and above all extremely educating, inspiring and enlightening.

We will denote the diagonal matrix with diagonal entries $a_1, \ldots, a_n$ by $\operatorname{diag}(a_1, \ldots, a_n)$. The following obvious properties relate matrix multiplication to the other operations. As in the statement concerning associativity, we assume here that the sizes of the matrices agree so that all operations make sense. The sum and product of square matrices of the same order $n$ are well defined; they are also square matrices of order $n$.

These properties imply that the square matrices of order $n$ form an associative algebra with respect to the above operations. We denote it $L_n(K)$; $L_1(K)$ is the field $K$ itself. In our notation the letter "L" comes from "linear"; the reason for this choice is that matrices can be interpreted as linear maps (see Section 2). (This algebra is often denoted $M_n(K)$ as well.) (ii) The algebra $L_n(K)$ contains zero divisors. This follows, for instance, from the second equality above.

Moreover, there exist nonzero matrices with zero squares, e.g., $\begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}$. (iii) Not every nonzero matrix in $L_n(K)$ is invertible. This follows from the existence of zero divisors, since a zero divisor cannot be an invertible element (see the proof of the absence of zero divisors in a field in Section 1).


For instance, the matrices $\begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix}$ and $\begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}$ are not invertible in $L_2(K)$. A matrix $E_{ij}$ that has 1 as the $(i,j)$th entry and zeros in all other places is called a matrix unit (not to be confused with the identity matrix!). Write down the multiplication table of the algebra $L_n(K)$ in the basis of matrix units. Clearly, any scalar matrix commutes with all other square matrices of the same order. Prove the converse: a matrix that commutes with all square matrices of the same order is scalar. Prove that in $L_2(\mathbb{R})$, matrices of the type $\begin{pmatrix} a & -b \\ b & a \end{pmatrix}$ form a subalgebra isomorphic to the algebra of complex numbers. Prove that in the algebra $L_2(\mathbb{C})$, regarded as an algebra over $\mathbb{R}$, matrices of the type $\begin{pmatrix} a & b \\ -\bar b & \bar a \end{pmatrix}$ form a subalgebra isomorphic to the algebra of quaternions (see Example 1).

For a matrix $A = (a_{ij})$, define the transposed matrix $A^T$ by $(A^T)_{ij} = a_{ji}$. Observe that all constructions in the last three sections would remain unchanged if we replaced $K$ by a commutative associative ring with unity, for instance, the ring of integers or a ring of residue classes. The only difference lies in terminology.

Chapter 2. Elements of Linear Algebra

Systems of Linear Equations

Fix a field $K$. We are going to abuse the language slightly and call elements of $K$ numbers. A linear equation with variables $x_1, x_2, \ldots, x_n$ is an equation of the form
$$a_1 x_1 + a_2 x_2 + \cdots + a_n x_n = b.$$
A system of $m$ linear equations with $n$ variables has the following general form:
$$a_{i1} x_1 + a_{i2} x_2 + \cdots + a_{in} x_n = b_i, \qquad i = 1, \ldots, m.$$
A system of equations is called compatible if it has at least one solution and incompatible otherwise.

A compatible system can have one or more solutions. To solve a system of equations means to find all of its solutions. Observe that a solution of a system of equations with $n$ variables is an ordered collection of $n$ numbers, i.e., an element of $K^n$.

There exists a simple general method for solving systems of linear equations called Gaussian elimination. Its idea lies in reducing every system of linear equations to an equivalent system that has a simple form and whose solutions are easy to find. Recall that two systems of equations are called equivalent if their sets of solutions coincide. Gaussian elimination is performed using special transformations called elementary. Definition 2. An elementary transformation of a system of linear equations is a transformation of one of the following three types: (i) adding an equation multiplied by a number to another equation; (ii) interchanging two equations; (iii) multiplying an equation by a nonzero number. Notice that a transformation of the first type changes only one equation, the one to which the other, multiplied by a number, is being added.

Clearly, every solution of the original system of equations is a solution of the system obtained using an elementary transformation. On the other hand, the original system of equations can be reconstructed from the new one using an appropriate elementary transformation of the same type.

For instance, if we add to the first equation the second one multiplied by $c$, we can get back by adding to the first equation of the new system the second equation (which is the same as in the original system) multiplied by $-c$. Thus, under any elementary transformation we obtain a system that is equivalent to the original one. Since it is easier to work not with systems themselves but with their extended matrices, here is the corresponding definition for matrices. An elementary row transformation of a matrix is a transformation of one of the following three types: (i) adding a row multiplied by a number to another row; (ii) interchanging two rows; (iii) multiplying a row by a nonzero number.
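The three row transformations are easy to express in code. Here is a minimal sketch in Python (the function names are ours, not the book's), using exact rational arithmetic so that no rounding intrudes:

```python
from fractions import Fraction

# The three elementary row transformations, acting in place on a matrix
# stored as a list of lists of Fractions.

def add_multiple(rows, i, j, c):
    """Type (i): add row j multiplied by c to row i."""
    rows[i] = [a + c * b for a, b in zip(rows[i], rows[j])]

def swap(rows, i, j):
    """Type (ii): interchange rows i and j."""
    rows[i], rows[j] = rows[j], rows[i]

def scale(rows, i, c):
    """Type (iii): multiply row i by a nonzero number c."""
    assert c != 0
    rows[i] = [c * a for a in rows[i]]

# Each transformation is undone by another transformation of the same
# type, so it carries a system to an equivalent one:
m = [[Fraction(1), Fraction(2)], [Fraction(3), Fraction(4)]]
add_multiple(m, 1, 0, Fraction(-3))   # eliminate the first entry of row 1
assert m == [[1, 2], [0, -2]]
add_multiple(m, 1, 0, Fraction(3))    # the inverse transformation restores m
assert m == [[1, 2], [3, 4]]
```

The in-place asserts at the end illustrate the invertibility observation of the text: applying a type (i) transformation with coefficient $-c$ undoes the one with coefficient $c$.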

Obviously, every elementary transformation of a system of equations leads to a corresponding elementary row transformation of its extended matrix and its coefficient matrix.

We can show now that every matrix can be reduced to quite a simple form by elementary transformations. Call the first nonzero element of a nonzero row its pivotal element. A matrix is in step form if (i) the indices of the pivotal elements of its nonzero rows form a strictly increasing sequence; (ii) zero rows, if they exist, are at the bottom. In other words, if $j_1, j_2, \ldots, j_r$ are the column indices of the pivotal elements of the nonzero rows, then $j_1 < j_2 < \cdots < j_r$. Theorem 2.

Every matrix can be reduced to step form by elementary transformations. Proof. If the given matrix is the zero matrix, it is already in step form. If it is nonzero, let $j_1$ be the index of its first nonzero column. By interchanging rows, if necessary, we obtain a matrix in which $a_{1j_1} \ne 0$. Then we add to every row, from the second down, the first row multiplied by an appropriate number, so that all entries of the $j_1$th column except the first one become zero.

We obtain a matrix of the form $\begin{pmatrix} a_{1j_1} & * \\ 0 & A_1 \end{pmatrix}$. Applying the same procedure to the matrix $A_1$, we finally obtain a matrix in step form. Remark 2. In this proof we did not use elementary transformations of the third type, but they can be useful in solving particular systems. Example 2. Reduce the following matrix to step form (the matrix itself is not legible in this copy). The previous example was specially designed so that $j_1 = 1$; in some sense, this situation is typical.
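The reduction procedure from the proof can be sketched as follows (a hypothetical helper, not the book's notation; it uses only transformations of types (i) and (ii) and exact rational arithmetic):

```python
from fractions import Fraction

def to_step_form(a):
    """Reduce a matrix (a list of rows) to step form by elementary
    row transformations of types (i) and (ii)."""
    a = [[Fraction(x) for x in row] for row in a]
    m = len(a)
    n = len(a[0]) if m else 0
    r = 0                       # index of the row currently being processed
    for j in range(n):          # scan the columns from left to right
        # find a row at or below r with a nonzero entry in column j
        p = next((i for i in range(r, m) if a[i][j] != 0), None)
        if p is None:
            continue            # column j contributes no pivot
        a[r], a[p] = a[p], a[r]                 # type (ii): interchange rows
        for i in range(r + 1, m):               # type (i): clear below pivot
            c = a[i][j] / a[r][j]
            a[i] = [x - c * y for x, y in zip(a[i], a[r])]
        r += 1
    return a

s = to_step_form([[0, 2, 4], [1, 1, 1], [2, 4, 6]])
# Pivot indices strictly increase and the zero row ends up at the bottom:
assert s == [[1, 1, 1], [0, 2, 4], [0, 0, 0]]
```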

Indeed, $j_1 > 1$ only when the first column of the original matrix is zero; in such a case the matrix in step form begins with zero columns. Now we apply the above theorem to solving systems of linear equations. A system of linear equations is said to be in step form if its extended matrix is in step form. Theorem 2. Every system of linear equations is equivalent to a system in step form. Thus, it is enough to learn how to solve systems already in step form. We need to introduce a few terms.

A matrix is called strictly triangular if it is upper triangular with nonzero diagonal entries. Accordingly, a system of linear equations is called strictly triangular if its coefficient matrix is strictly triangular. Remark 2.

First case: the system contains an equation of the form $0 = b$, where $b \ne 0$. Hence, it is incompatible. Second case: after deleting zero equations (i.e., equations of the form $0 = 0$), we obtain a strictly triangular system. We can uniquely determine $x_n$ from the last equation, then $x_{n-1}$ from the previous one, and so on.

Therefore, the system has a unique solution. Third case: $r < n$, so that some variables remain free. A compatible system of linear equations is called determined if it has a unique solution and underdetermined if it has more than one solution. As follows from the previous discussion, an underdetermined system has infinitely many solutions whenever $K$ is infinite. Up to renumbering of the variables, a general solution of such a system expresses the principal variables through the free ones. Example 2. (The worked computations of this example are not legible in this copy.)

For consistency, we can think that for determined systems all variables are principal and no variable is free. Then the general solution is the unique solution of the system. A strictly triangular matrix can be reduced to the identity matrix by elementary row transformations.

To achieve this, we add the last row multiplied by an appropriate coefficient to all other rows.

This coefficient is chosen here in such a way that all entries of the last column but the last one become zero. Then, similarly, we add the penultimate row to others so that all entries of the next to the last column except for the diagonal entry become zero, etc.

Finally, we obtain a diagonal matrix. By multiplying its rows by appropriate numbers, we obtain the identity matrix. Using this method, we do not stop at the step form when solving a system of linear equations but continue with the transformations and reduce the coefficient matrix for the principal variables to the identity matrix. Then the general solution is easily obtained from the resulting matrix. This procedure is called reverse Gaussian elimination. Example 2.

We continue reducing the matrix from the previous example. First, we delete the zero row. Then we subtract the third row from the second, subtract the doubled second row from the first, and multiply the third row by $-1$ (the intermediate matrices are not legible in this copy). Therefore, the system of equations from the earlier example is solved. A system of homogeneous linear equations is always compatible, as it has the zero solution. If it is determined, then it has only the zero solution, and if it is underdetermined, it has at least one nonzero solution (even infinitely many if $K$ is infinite).
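The forward pass and the reverse elimination described above can be combined in one routine. A sketch of our own (not the book's code), which brings the matrix to reduced step form with pivots equal to 1:

```python
from fractions import Fraction

def rref(a):
    """Forward elimination to step form followed by the reverse
    elimination; returns (reduced matrix, pivot column indices)."""
    a = [[Fraction(x) for x in row] for row in a]
    m, n = len(a), len(a[0])
    pivots, r = [], 0
    for j in range(n):
        p = next((i for i in range(r, m) if a[i][j] != 0), None)
        if p is None:
            continue
        a[r], a[p] = a[p], a[r]                 # type (ii)
        a[r] = [x / a[r][j] for x in a[r]]      # type (iii): make pivot 1
        for i in range(m):                      # type (i): clear whole column
            if i != r and a[i][j] != 0:
                c = a[i][j]
                a[i] = [x - c * y for x, y in zip(a[i], a[r])]
        pivots.append(j)
        r += 1
    return a, pivots

# Extended matrix of x + y = 3, x - y = 1; the last column of the
# result displays the unique solution x = 2, y = 1.
a, piv = rref([[1, 1, 3], [1, -1, 1]])
assert a == [[1, 0, 2], [0, 1, 1]] and piv == [0, 1]
```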

In the preceding notation, the latter case holds when $r < n$. Theorem 2. Every system of homogeneous linear equations in which the number of equations is less than the number of variables has a nonzero solution. Underdetermined systems of linear equations differ by the degree of indeterminacy, which is naturally defined as the number of free variables in the general solution of the system.

For instance, a line in three-dimensional space is given by a system of two linear equations with one free variable, and a plane by a system of one equation with two free variables. The same system of linear equations can admit different general solutions with different free variables, so it is natural to ask if the number of free variables always remains constant.

A positive answer to this question relies on the concept of dimension introduced in the next section. In the remaining part of this section, we will interpret Gaussian elimination in the language of matrix multiplication.

First of all, if $X$ denotes the column of variables and $B$ the column of free terms, the system can be written in matrix form as $AX = B$, where $A$ is the coefficient matrix. Indeed, the $i$th element of the column $AX$ equals $a_{i1}x_1 + a_{i2}x_2 + \cdots + a_{in}x_n$. Setting this element equal to the $i$th element of the column $B$, we obtain exactly the $i$th equation of the system.

Let $U$ be a square matrix of order $m$. Multiplying both sides of the equation $AX = B$ on the left by $U$, we obtain the equation $UAX = UB$, which is the matrix form of another system of linear equations. Every solution of the original system satisfies it. Moreover, if $U$ is invertible, multiplication by $U^{-1}$ on the left takes the new equation back to the original one, so the two systems are equivalent. It is easy to see that the extended matrix of the new system is obtained from the extended matrix of the original one by multiplication by $U$ on the left.

Furthermore, a direct check shows that the elementary row transformations of a matrix $A$ are equivalent to multiplying it on the left by the so-called elementary matrices of the following three types: (i) $E + cE_{ij}$ ($i \ne j$), which adds the $j$th row multiplied by $c$ to the $i$th row; (ii) the matrix obtained from $E$ by interchanging the $i$th and $j$th rows; (iii) the matrix obtained from $E$ by multiplying the $i$th row by a nonzero number $\lambda$. All elementary matrices are invertible; moreover, their inverses are elementary matrices corresponding to the inverse elementary transformations. In the language of matrices, Gaussian elimination consists of successively multiplying the equation $AX = B$ on the left by elementary matrices.
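The correspondence between row transformations and left multiplication by elementary matrices can be checked directly; a small sketch (the helper names are ours):

```python
from fractions import Fraction

def identity(n):
    return [[Fraction(i == j) for j in range(n)] for i in range(n)]

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

def elem_add(n, i, j, c):
    """Elementary matrix E + c*E_ij: left multiplication by it adds
    c times row j to row i (here E_ij denotes a matrix unit)."""
    u = identity(n)
    u[i][j] += c
    return u

a = [[Fraction(1), Fraction(2)], [Fraction(3), Fraction(4)]]
u = elem_add(2, 1, 0, Fraction(-3))
# Left multiplication performs the type (i) transformation:
assert matmul(u, a) == [[1, 2], [0, -2]]
# The inverse elementary matrix undoes it:
assert matmul(elem_add(2, 1, 0, Fraction(3)), matmul(u, a)) == [[1, 2], [3, 4]]
```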

Use of other matrices instead of elementary ones provides us with different methods for solving systems of linear equations. For instance, such is the method of rotations, in which the matrices $U$ are taken to be plane rotation matrices (the displayed matrix is not legible in this copy).

Basis and Dimension of a Vector Space

The concept of dimension is one of the most fundamental ideas in mathematics.

In different branches of mathematics, it assumes various forms, as does the concept of space itself. In this section we will define the dimension of a vector space and discuss questions related to this notion. The dimension of a vector space is defined as the number of vectors in its basis (see Section 1). However, before we state the definition precisely, two questions must be answered: does every vector space have a basis, and do all bases of a given space contain the same number of vectors? To answer these questions, we need to introduce several new notions and prove a few statements that are also of independent interest.

Let $V$ be a vector space over a field $K$. Vectors $a_1, a_2, \ldots, a_n \in V$ are said to be linearly dependent if there exist numbers $\lambda_1, \lambda_2, \ldots, \lambda_n \in K$, not all equal to zero, such that $\lambda_1 a_1 + \lambda_2 a_2 + \cdots + \lambda_n a_n = 0$. Otherwise, they are said to be linearly independent. Note that the notion of linear dependence or independence refers not to separate vectors but to their collections, or systems.

The notion of a system of vectors is different from that of a set of vectors. First, vectors in a system are assumed to be numbered.

Second, some of them may be equal to each other. Thus, a system of $n$ vectors is actually a mapping of the set $\{1, 2, \ldots, n\}$ into $V$. Notice, though, that the property of being dependent or independent does not depend on how the vectors are numbered within the system.

The term "linear combination" actually has two meanings: a formal expression of the form $\lambda_1 a_1 + \lambda_2 a_2 + \cdots + \lambda_n a_n$ and the vector that is its value. In the definition above, the coefficients $\lambda_1, \lambda_2, \ldots, \lambda_n$ are not all zero; such a linear combination is called nontrivial. In other words, linear independence of vectors $a_1, a_2, \ldots, a_n$ means that no nontrivial linear combination of them equals zero. Example 2. A system that consists of exactly one vector is linearly dependent if and only if this vector is zero. A system that consists of two vectors is linearly dependent if and only if these vectors are proportional.

Three geometric vectors (see Section 1) are linearly dependent if and only if they are coplanar. Clearly, if a system of vectors contains a linearly dependent subsystem, it is linearly dependent itself. For instance, any system of vectors that contains two proportional vectors is linearly dependent.
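Linear dependence can be tested mechanically by the elimination method of the previous section; a sketch of our own, assuming vectors with rational coordinates:

```python
from fractions import Fraction

def linearly_independent(vectors):
    """Decide linear independence of the given rows by reducing the
    matrix they form to step form and counting the nonzero rows."""
    a = [[Fraction(x) for x in v] for v in vectors]
    m, n = len(a), len(a[0])
    r = 0
    for j in range(n):
        p = next((i for i in range(r, m) if a[i][j] != 0), None)
        if p is None:
            continue
        a[r], a[p] = a[p], a[r]
        for i in range(r + 1, m):
            c = a[i][j] / a[r][j]
            a[i] = [x - c * y for x, y in zip(a[i], a[r])]
        r += 1
    return r == m       # independent exactly when no zero row appeared

# A system containing proportional vectors is linearly dependent:
assert not linearly_independent([[1, 2, 3], [2, 4, 6]])
assert linearly_independent([[1, 0, 0], [0, 1, 0]])
```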

Lemma 2. Vectors $a_1, a_2, \ldots, a_n$ are linearly dependent if and only if one of them can be expressed as a linear combination of the others. Proof. Suppose $\lambda_1 a_1 + \lambda_2 a_2 + \cdots + \lambda_n a_n = 0$ with, say, $\lambda_1 \ne 0$; then $a_1 = -\lambda_1^{-1}(\lambda_2 a_2 + \cdots + \lambda_n a_n)$. The converse is obvious. Remark: not every vector of a linearly dependent system can be expressed as a linear combination of the others.

For example, let $a$ be a nonzero vector; in the linearly dependent system $(a, 0)$, the zero vector is a linear combination of $a$, but $a$ is not a linear combination of the zero vector. Lemma 2. Let vectors $a_1, a_2, \ldots, a_n$ be linearly independent. A vector $b$ can be expressed as a linear combination of $a_1, a_2, \ldots, a_n$ if and only if the vectors $a_1, a_2, \ldots, a_n, b$ are linearly dependent. Proof. If $b$ can be expressed as a linear combination of $a_1, a_2, \ldots, a_n$, then the vectors $a_1, a_2, \ldots, a_n, b$ are linearly dependent by the previous lemma. Conversely, suppose $\lambda_1 a_1 + \lambda_2 a_2 + \cdots + \lambda_n a_n + \mu b = 0$, where not all of the coefficients $\lambda_1, \lambda_2, \ldots, \lambda_n, \mu$ are zero. We claim that $\mu \ne 0$.

Indeed, otherwise $a_1, a_2, \ldots, a_n$ would be linearly dependent; hence $b = -\mu^{-1}(\lambda_1 a_1 + \cdots + \lambda_n a_n)$. Let $b$ be a vector expressed as a linear combination of vectors $a_1, a_2, \ldots, a_n$. This expression is unique if and only if $a_1, a_2, \ldots, a_n$ are linearly independent. Let $S \subset V$ be a subset. The collection of all possible finite linear combinations of vectors from $S$ is called the linear span of $S$ and is denoted $\langle S \rangle$.

It is the smallest subspace of $V$ containing $S$ (check this!). A vector space is called finite-dimensional if it is spanned by a finite number of vectors, and infinite-dimensional otherwise. Proposition 2. If a vector space is spanned by $n$ vectors $a_1, a_2, \ldots, a_n$, then any $m > n$ vectors $b_1, b_2, \ldots, b_m$ in it are linearly dependent. Proof. We can express them in terms of $a_1, a_2, \ldots, a_n$. A linear combination of $b_1, \ldots, b_m$ equals zero whenever its coefficients satisfy a certain system of $n$ homogeneous linear equations with $m$ unknowns, and this system has a nonzero solution by Theorem 2.

Therefore, the vectors $b_1, b_2, \ldots, b_m$ are linearly dependent. In view of Lemma 2, this proves the proposition. A basis of a vector space $V$ is a linearly independent system of vectors that spans $V$. Every finite-dimensional space $V$ has a basis. More precisely, every finite subset $S$ of $V$ that spans $V$ contains a basis of $V$. Proof. If $S$ is linearly dependent, it contains a vector that can be expressed in terms of the other vectors by Lemma 2. If we remove this vector from $S$, we obtain a set that still spans $V$ but contains a smaller number of vectors.

Continuing further, we will finally obtain a linearly independent set that spans V, i. All bases of a finite-dimensional space V contain the same number of vectors.

This number is called the dimension of $V$ and is denoted $\dim V$. Proof. Assume $V$ contains two bases with different numbers of vectors. Then, according to Proposition 2, the vectors of the larger basis would be linearly dependent, a contradiction. The zero vector space, which consists of the zero vector only, is regarded as having the "empty basis"; accordingly, its dimension is considered to be zero.

The dimension of $E^2$ (respectively, $E^3$) is 2 (respectively, 3); this follows from Example 1. The field of complex numbers regarded as a vector space over $\mathbb{R}$ has dimension 2, and the algebra of quaternions (see Example 1) has dimension 4.

If $X$ is infinite, then for any $n$, the space $F(X, K)$ contains $n$ linearly independent vectors, e.g., the $\delta$-functions of $n$ distinct points. Thus, in this case $F(X, K)$ is infinite-dimensional. The field $\mathbb{R}$ regarded as a vector space over $\mathbb{Q}$ is infinite-dimensional. Indeed, if it were finite-dimensional, every real number would be determined by the collection of its coordinates in some basis, i.e., by a finite collection of rational numbers.

But then the set of real numbers would be countable, which is not so.

Exercise 2. Determine the number of vectors in an n-dimensional vector space over a finite field with q elements.

Prove that the space of all continuous functions on an interval is infinite-dimensional. Basis and Dimension of a Vector Space 49 subset, i. Moreover, each linearly independent subset of S can be completed to a maximal linearly independent subset. We need to show that every vector in S can be expressed as a linear combination of el, e2, By definition, every vector in S can be expressed as a linear combination of vectors from S.

Hence, it suffices to show that every vector $a \in S$ can be expressed as a linear combination of $e_1, e_2, \ldots, e_k$. Any linearly independent system of vectors in a vector space $V$ can be completed to a basis. In particular, any nonzero vector is contained in some basis, and any $n$ linearly independent vectors in an $n$-dimensional vector space already form a basis. Determine the number of bases of an $n$-dimensional space over a field of $q$ elements. Any subspace $U$ of a finite-dimensional space $V$ is also finite-dimensional, and $\dim U \le \dim V$. Proof.

By Proposition 2, a maximal linearly independent subset of $U$ is finite and is a basis of $U$. Determine the number of $k$-dimensional subspaces of an $n$-dimensional vector space over a field of $q$ elements. The next theorem provides a complete description of all finite-dimensional vector spaces. Finite-dimensional vector spaces over the same field are isomorphic if and only if their dimensions are the same. Proof. An isomorphism maps a basis to a basis, so isomorphic spaces have equal dimensions. Conversely, by Proposition 1, every $n$-dimensional space over $K$ is isomorphic to $K^n$. The space $K^n$ possesses a "distinguished" basis consisting of the unit rows (see Example 1). On the other hand, if we fix a basis in an $n$-dimensional space $V$, then by assigning to each vector the row of its coordinates in this basis, as in the proof of Proposition 1, we obtain an isomorphism of $V$ onto $K^n$.

This isomorphism maps basis vectors to unit rows. In this sense we can say that the space of rows is nothing but a finite-dimensional space with a fixed basis. The set of all bases of an n-dimensional vector space V can be described in the following way.

Thus, the vectors $e'_1, \ldots, e'_n$ obtained in this way form a basis if and only if the matrix of their coordinates is of a special kind; such a matrix is called nonsingular (see also Definition 2). The aforesaid establishes a one-to-one correspondence between the set of all bases of $V$ and the set of nonsingular matrices of order $n$. We can extend the law of matrix multiplication to the case where the entries of one of the two matrices are vectors (this makes sense because of how the operations on a vector space are defined).

Then this equality can be written in matrix form. Let $x \in V$ be a vector; its expansion in the basis can likewise be written as a matrix product. The notions of basis and dimension can be extended to infinite-dimensional vector spaces. For this, we need to define linear combinations of an infinite system of vectors.

In a purely algebraic situation there is no other way but to restrict our attention to the case of linear combinations where only finitely many coefficients are nonzero.

Thus, the sum is finite, hence it makes sense. Just as in the case of finite systems of vectors, this definition leads to the notions of linear expression, linear dependence, and basis.

The dimension of a space is the cardinality of its basis. In particular, a vector space with a countable basis is called countable-dimensional. Consider the set of all sequences infinite rows of elements of a field K. Clearly, it is a vector space with respect to operations of addition and multiplication by elements of K that are defined just as they are for rows of finite length. We say that a sequence is finitary if only a finite number of its entries are nonzero. Finitary sequences form a subspace in the space of all sequences.

As its basis vectors, we can take the sequences $e_i = (0, \ldots, 0, 1, 0, \ldots)$ with 1 in the $i$th place and zeros elsewhere; just as in Proposition 1, they form a basis. Exercise 2. Prove that $\mathbb{R}$ regarded as a vector space over $\mathbb{Q}$ is not countable-dimensional. Prove that every uncountable set of vectors in a countable-dimensional space is linearly dependent (hence, every basis of such a space is countable).

Prove that any finite or countable linearly independent system of vectors in a countable-dimensional space can be completed to a basis. Prove that a subspace of a countable-dimensional vector space is at most countable-dimensional (i.e., finite-dimensional or countable-dimensional). Give an example of a countable-dimensional subspace of a countable-dimensional vector space that does not coincide with the whole space.

Similar statements can be proved for spaces of uncountable dimension, but this requires the use of set theory transfinite induction or Zorn's lemma. On the other hand, this purely algebraic approach has a restricted area of applications.

Usually, a space of uncountable dimension is endowed with a topology, which gives meaning to infinite sums of vectors. The notion of dimension is closely related to those of rank of a matrix and rank of a system of vectors. The rank of a system of vectors is the dimension of its linear span.

The rank of a matrix is the rank of the system of its rows; the rank of a matrix $A$ is denoted $\operatorname{rk} A$. Two systems of vectors are called equivalent if each vector of either system can be linearly expressed through the vectors of the other. Obviously, this holds if and only if the corresponding linear spans coincide. Thus, equivalent systems of vectors have the same rank.

The definition of an elementary transformation implies that rows of a matrix A' obtained from another matrix A using an elementary transformation can be expressed as a linear combination of the rows of A. But as A can be obtained from A' using the inverse transformation, its rows can be expressed as a linear combination of the rows of A'. Therefore, the systems of rows of A and A' are equivalent and the ranks of these matrices are equal. This is useful for calculating the rank of a matrix.

Proposition 2. The rank of a matrix is equal to the number of nonzero rows of the matrix in step form to which it is reduced by elementary transformations. Proof. Since the rank of a matrix does not change under elementary transformations, it suffices to prove that the rank of a matrix in step form equals the number of its nonzero rows. This will follow if we prove that the nonzero rows of a matrix in step form are linearly independent.

Consider a matrix in step form. Assume that a linear combination of its nonzero rows with coefficients $\lambda_1, \lambda_2, \ldots, \lambda_r$ equals zero. Looking at the $j_1$th coordinate, we obtain $\lambda_1 = 0$; continuing further, we see that all the coefficients $\lambda_1, \lambda_2, \ldots, \lambda_r$ are zero. In particular, the number of nonzero rows in a matrix in step form to which a given matrix is reduced is constant, regardless of the sequence of elementary transformations chosen.
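The proposition yields a practical way to compute the rank; a sketch in Python (assuming rational entries; the function name is ours):

```python
from fractions import Fraction

def rank(a):
    """rk A = the number of nonzero rows after reduction to step form."""
    a = [[Fraction(x) for x in row] for row in a]
    m, n = len(a), len(a[0])
    r = 0
    for j in range(n):
        p = next((i for i in range(r, m) if a[i][j] != 0), None)
        if p is None:
            continue
        a[r], a[p] = a[p], a[r]
        for i in range(r + 1, m):
            c = a[i][j] / a[r][j]
            a[i] = [x - c * y for x, y in zip(a[i], a[r])]
        r += 1
    return r

a = [[1, 2, 3], [4, 5, 6], [5, 7, 9]]   # third row = first + second
assert rank(a) == 2
# The rank is unchanged by an elementary transformation
# (here: add row 0 to row 2):
b = [a[0], a[1], [x + y for x, y in zip(a[2], a[0])]]
assert rank(b) == 2
```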

A system of linear equations is compatible if and only if the rank of its coefficient matrix equals the rank of its extended matrix.

Linear Maps

Every algebraic theory considers maps that are more general than isomorphisms. Usually, these maps are called homomorphisms or, in the case of vector spaces, linear maps. While isomorphisms fully preserve the inner properties of algebraic structures and their elements, homomorphisms do so only partially.

Let $U$ and $V$ be vector spaces over a field $K$. A map $\varphi: U \to V$ is called linear if $\varphi(a + b) = \varphi(a) + \varphi(b)$ and $\varphi(\lambda a) = \lambda \varphi(a)$ for all $a, b \in U$ and $\lambda \in K$. This definition differs from that of an isomorphism between two vector spaces only in that it does not require the map to be bijective.

Observe that under a linear map the zero vector is mapped to the zero vector, and the opposite of a vector is mapped to the opposite of its image. A rotation is a linear map, and even an isomorphism, from $E^2$ to itself (see Figure 2).

An orthogonal projection onto a plane defines a linear map, but not an isomorphism, from $E^3$ to the space of geometric vectors on this plane. Differentiation is a linear map from the space of all functions continuously differentiable on a given interval of the real line to the space of functions continuous on this interval. The map $f \mapsto \int_a^b f(x)\,dx$ is a linear map from the space of functions continuous on $[a, b]$ to $\mathbb{R}$ regarded as a one-dimensional vector space over $\mathbb{R}$.

A linear map $\varphi: U \to V$ is uniquely determined by the images of the basis vectors of $U$. Indeed, let $(e_i)$ be a basis of $U$; then for $x = \sum_i x_i e_i$ we have $\varphi(x) = \sum_i x_i \varphi(e_i)$. These considerations lead us towards a more analytical description of linear maps.

We shall provide it for the spaces of rows. Let $\varphi: K^n \to K^m$ be a linear map. Apply it to the unit rows $e_1, e_2, \ldots, e_n$ and form the matrix $A$ whose $j$th column consists of the coordinates of the row $\varphi(e_j)$. It is called the matrix of the linear map $\varphi$. Thus we have established a one-to-one correspondence between linear maps from $K^n$ to $K^m$ and $m \times n$ matrices.

In a similar way, we can determine the matrix of a linear map $\varphi: U \to V$ between two arbitrary finite-dimensional vector spaces. Namely, its $j$th column contains the coordinates of the image of the $j$th basis vector of $U$. Of course, this matrix depends on the choice of bases in the spaces $U$ and $V$.

Let $\varphi$ be a rotation of the plane through an angle $\alpha$ (Figure 2). This means that the matrix of $\varphi$ is
$$\begin{pmatrix} \cos\alpha & -\sin\alpha \\ \sin\alpha & \cos\alpha \end{pmatrix}.$$
Here we will determine the matrix of the projection in the earlier example. Choose a basis $e_1, e_2$ of the plane and complete it to a basis of the whole space with a vector $e_3$ orthogonal to this plane. Since under the projection $e_1$ and $e_2$ are mapped to themselves and $e_3$ to 0, the matrix in question is
$$\begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \end{pmatrix}$$
for these choices of bases. Unlike an isomorphism, a linear map might be neither injective nor surjective.

Violations of these two properties provide us with two subspaces associated to any linear map. Definition 2. The image of a linear map $\varphi: U \to V$ is the set $\operatorname{Im} \varphi = \{\varphi(x) : x \in U\}$; its kernel is the set $\operatorname{Ker} \varphi = \{x \in U : \varphi(x) = 0\}$. The image is a subspace of $V$, and the kernel is a subspace of $U$. For example, let us prove the second claim: if $\varphi(x) = \varphi(y) = 0$, then $\varphi(x + y) = 0$ and $\varphi(\lambda x) = 0$. The kernel of the projection map above consists of the vectors orthogonal to the plane. The kernel of the differentiation map consists of the constant functions, and its image is the space of all continuous functions. The latter follows from the existence of an antiderivative of any continuous function (this is shown in advanced calculus).

A linear map $\varphi$ is injective if and only if $\operatorname{Ker} \varphi = 0$. Indeed, injectivity of $\varphi$ means that for any $b \in \operatorname{Im} \varphi$ the equation $\varphi(x) = b$ has exactly one solution. Thus it suffices to prove only the second claim of the theorem: if $x_0$ is a solution of $\varphi(x) = b$, then the set of all solutions is $x_0 + \operatorname{Ker} \varphi$. In this way we see that the set of solutions of a compatible system of linear equations is obtained from one of its solutions by adding all solutions of the corresponding homogeneous system. But what is the dimension of the space of solutions of a homogeneous system? The answer is given by the following theorem. Let $\varphi: K^n \to K^m$ be the linear map with matrix $A$; then $\dim \operatorname{Ker} \varphi = n - r$, where $r = \operatorname{rk} A$. Proof. Using elementary transformations, we reduce the system to reduced step form.

In view of Proposition 2, $r$ is the number of principal variables, so a generic solution assigns arbitrary values to the $n - r$ free variables and determines the principal ones from them. To prove the theorem, it remains to show that the solutions obtained by setting one free variable equal to 1 and the others equal to 0 form a basis of $\operatorname{Ker} \varphi$.

For any $\lambda_1, \lambda_2, \ldots, \lambda_{n-r}$, there is exactly one solution whose free variables take these values, namely the corresponding linear combination of the solutions just constructed. The values of the principal variables are uniquely determined by the values of the free ones. Thus, every solution of the system is a unique linear combination of the constructed solutions.

Given a system of homogeneous linear equations, any basis of the space of its solutions is called a fundamental system of solutions. The above proof provides a working algorithm for constructing such a system of solutions. Corollary 2. $\dim \operatorname{Im} \varphi + \dim \operatorname{Ker} \varphi = \dim U$. Proof.
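The algorithm for constructing a fundamental system of solutions can be sketched as follows (our illustration, not the book's code; free variables are set to 1 one at a time, as in the proof):

```python
from fractions import Fraction

def fundamental_system(a):
    """Basis of the solution space of the homogeneous system A x = 0:
    reduce A to reduced step form, then set one free variable to 1 and
    the others to 0."""
    a = [[Fraction(x) for x in row] for row in a]
    m, n = len(a), len(a[0])
    pivots, r = [], 0
    for j in range(n):
        p = next((i for i in range(r, m) if a[i][j] != 0), None)
        if p is None:
            continue
        a[r], a[p] = a[p], a[r]
        a[r] = [x / a[r][j] for x in a[r]]
        for i in range(m):
            if i != r and a[i][j] != 0:
                c = a[i][j]
                a[i] = [x - c * y for x, y in zip(a[i], a[r])]
        pivots.append(j)
        r += 1
    free = [j for j in range(n) if j not in pivots]
    basis = []
    for f in free:
        x = [Fraction(0)] * n
        x[f] = Fraction(1)
        for row_idx, j in enumerate(pivots):
            x[j] = -a[row_idx][f]   # principal variables from the free one
        basis.append(x)
    return basis

# x1 + x2 + x3 = 0 has n - r = 2 independent solutions:
b = fundamental_system([[1, 1, 1]])
assert len(b) == 2
assert all(sum(v) == 0 for v in b)
```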

The proof follows from comparing the statements of the two preceding theorems. Corollary. The rank of the system of columns of any matrix (its column rank) equals the rank of its system of rows (its row rank). Proof. Let $\varphi: K^n \to K^m$ be the linear map with matrix $A$, and let $e_1, e_2, \ldots, e_n$ be the unit rows of $K^n$. The image of $\varphi$ is spanned by $\varphi(e_1), \ldots, \varphi(e_n)$, whose coordinates form the columns of $A$; hence $\dim \operatorname{Im} \varphi$ equals the column rank of $A$. Comparing this with the previous corollary completes the proof.

This defines the following linear map: It is not difficult to prove that whenever char K 0 2, 'p is surjective. For this, it suffices to show that Im ep contains 6-functions of all faces see Example 2. A function f for which ' f is a 6-function of the bottom face is shown in Figure 2.

Functions comprising a basis of $\operatorname{Ker} \varphi$ are shown in Figure 2. Since the columns of a matrix $A$ are the rows of its transposed matrix $A^T$ (see Section 1), we can define elementary column transformations just as we defined elementary row transformations of a matrix. They correspond to elementary row transformations of the transposed matrix. Thus the rank of a matrix does not change under elementary column transformations as well as under elementary row transformations. Elementary column transformations are equivalent to multiplying the matrix on the right by elementary matrices.

We turn now to operations on linear maps: they can be added, multiplied by numbers, and composed, and the corresponding operations on their matrices are matrix addition, multiplication by numbers, and matrix multiplication. As an example, we prove the first distributive law. Let $\varphi, \psi: U \to V$ and $\chi: V \to W$ be linear maps; then $\chi(\varphi + \psi) = \chi\varphi + \chi\psi$, since for every $x \in U$ we have $\chi((\varphi + \psi)(x)) = \chi(\varphi(x)) + \chi(\psi(x))$. Associativity of composition holds for arbitrary maps: let $M, N, P, Q$ be sets and $\alpha: M \to N$, $\beta: N \to P$, $\gamma: P \to Q$ be maps; then $\gamma(\beta\alpha) = (\gamma\beta)\alpha$. That the matrix operations agree with the operations on linear maps is clear for the linear operations (addition and multiplication by numbers). To prove this for multiplication, let $\varphi: K^n \to K^m$ and $\psi: K^p \to K^n$ be linear maps with matrices $A$ and $B$; applying $\varphi\psi$ to the unit rows $e_1, \ldots, e_p$ shows that the matrix of $\varphi\psi$ is $AB$. In the language of linear maps, the matrix equality proved in Example 1 expresses the fact that the composition of two rotations of the plane is the rotation through the sum of their angles.

As the latter statement is geometrically obvious, we have thus proved the formulas for the sine and cosine of the sum of two angles. The properties of matrix operations obtained in Section 1 reflect the corresponding properties of operations on linear maps. Obviously, the identity map $\operatorname{id}: V \to V$ is linear. The matrix of the identity map $\operatorname{id}: K^n \to K^n$ is the identity matrix $E$ of order $n$. Thus the properties of the identity matrix follow from those of the identity map.
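The angle-addition argument can be checked numerically (a small sketch of ours; floating-point arithmetic, so we compare up to rounding error):

```python
import math

def rotation(alpha):
    """Matrix of the rotation of the plane through angle alpha
    (in the standard basis)."""
    return [[math.cos(alpha), -math.sin(alpha)],
            [math.sin(alpha),  math.cos(alpha)]]

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

# Composition of rotations is the rotation through the sum of the
# angles, which encodes the addition formulas for sine and cosine:
a, b = 0.7, 0.4
lhs = matmul(rotation(a), rotation(b))
rhs = rotation(a + b)
assert all(abs(lhs[i][j] - rhs[i][j]) < 1e-12
           for i in range(2) for j in range(2))
```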


Here $\varphi: K^n \to K^m$ is the linear map determined by a matrix $A$, and "id" stands for the identity maps of the spaces $K^n$ and $K^m$ in the first and second equalities, respectively. Recall that a map is invertible if and only if it is bijective. If $\varphi: U \to V$ is an invertible linear map, then the inverse map $\varphi^{-1}$ is also linear: applying $\varphi^{-1}$ to $\varphi(a) + \varphi(b) = \varphi(a + b)$ shows that $\varphi^{-1}$ is additive. The second condition for linearity is checked in the same way.

In other words, $A$ is nonsingular if and only if its rows (or columns) are linearly independent. Theorem 2. A square matrix is invertible if and only if it is nonsingular. Proof. Let $\varphi: K^n \to K^n$ be the linear map determined by $A$. According to the discussion above, $A$ is invertible if and only if the map $\varphi$ is bijective.

By Theorem 2, $\varphi$ is bijective if and only if $\operatorname{rk} A = n$, i.e., if and only if $A$ is nonsingular. In view of Theorem 2, the inverse matrix can be found by solving the matrix equation $AX = E$. Such an equation can be solved just like the equation $AX = B$ with a column $B$: this is equivalent to elementary row transformations of the "extended" matrix $(A \mid E)$. Reducing the left half of this matrix to the identity matrix (which is possible because $A$ is nonsingular), we obtain the inverse matrix on the right.

Using linear maps, prove that the rank of the product of two matrices (not necessarily square) does not exceed the rank of each of them. Also prove that if one of these matrices is nonsingular, then the rank of the product equals the rank of the other matrix.
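The inversion procedure via the extended matrix $(A \mid E)$ can be sketched as follows (our illustration, with exact rational arithmetic):

```python
from fractions import Fraction

def inverse(a):
    """Invert a nonsingular matrix by row-reducing the extended matrix
    (A | E) until the left half becomes the identity; the right half is
    then the inverse. Raises ValueError if A is singular."""
    n = len(a)
    # build the extended matrix (A | E)
    aug = [[Fraction(x) for x in row] + [Fraction(i == j) for j in range(n)]
           for i, row in enumerate(a)]
    for j in range(n):
        p = next((i for i in range(j, n) if aug[i][j] != 0), None)
        if p is None:
            raise ValueError("matrix is singular")
        aug[j], aug[p] = aug[p], aug[j]
        aug[j] = [x / aug[j][j] for x in aug[j]]    # make the pivot 1
        for i in range(n):                          # clear the column
            if i != j and aug[i][j] != 0:
                c = aug[i][j]
                aug[i] = [x - c * y for x, y in zip(aug[i], aug[j])]
    return [row[n:] for row in aug]

inv = inverse([[1, 2], [3, 4]])
assert inv == [[-2, 1], [Fraction(3, 2), Fraction(-1, 2)]]
```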

Determinants

In the previous section we explained how to find out whether a matrix is nonsingular or, equivalently, whether a system of n vectors in an n-dimensional space is linearly independent. In each particular case this question can be answered by reducing the matrix to step form by elementary row transformations.

However, it is of interest to have a general condition on the entries of a matrix that would tell us when the matrix is nonsingular. We will first give an example of such a condition for geometric vectors.

A pair of noncollinear vectors a1, a2 ∈ E² is said to be positively oriented if the turn from a1 to a2 through the angle less than π is in the positive direction, i.e., counterclockwise. For any vectors a1, a2, consider the parallelogram with sides a1, a2. Denote by area(a1, a2) the oriented area of this parallelogram, i.e., its usual area taken with the sign + if the pair (a1, a2) is positively oriented, with the sign − if it is negatively oriented, and equal to 0 if a1 and a2 are collinear. The value |area(a1, a2)| measures, in some sense, the degree of linear independence of a1 and a2. The function area(a1, a2) of the vector arguments a1 and a2 has the following properties:

    (i) it is linear in each of its arguments;
    (ii) area(a, a) = 0 for every vector a;
    (iii) area(e1, e2) = 1 for a positively oriented orthonormal basis e1, e2.

The last two properties are obvious.

To prove the first, consider the area of a parallelogram as the product of its base and height. Take a1 as the base; then the height is the length of the projection of a2 onto the direction orthogonal to a1. Since this projection is a linear operation, area(a1, a2) is linear in a2. Similarly, if we choose a2 as the base, we can prove that area(a1, a2) is linear in a1.

Properties (i)–(iii) are sufficient to calculate area(a1, a2): if a1 = (a11, a12) and a2 = (a21, a22) in a positively oriented orthonormal basis, then

    area(a1, a2) = a11 a22 − a12 a21.

This expression is called the determinant of the second order. The discussion above implies that vectors a1 and a2 are linearly independent if and only if the matrix composed of their coordinates has a nonzero determinant.

Similarly, it can be shown that the oriented volume vol(a1, a2, a3) of a parallelepiped formed by vectors a1, a2, a3 has the analogous properties: it is linear in each argument, it vanishes whenever two of the arguments are equal, and it equals 1 on a positively oriented orthonormal basis. Using these properties, we can express vol(a1, a2, a3) in terms of the coordinates of a1, a2, a3 in a positively oriented orthonormal basis as follows (try doing the calculations yourself!):

    vol(a1, a2, a3) = a11 a22 a33 + a12 a23 a31 + a13 a21 a32 − a11 a23 a32 − a12 a21 a33 − a13 a22 a31,

where ai = (ai1, ai2, ai3). This expression is called the determinant of the third order. Thus, vectors a1, a2, a3 are linearly independent if and only if the matrix composed of their coordinates has a nonzero determinant.
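The six-term formula can be transcribed directly into code. The following Python fragment (an added illustration; the name `vol` is ours) evaluates it and exhibits the three defining properties on examples:

```python
def vol(a1, a2, a3):
    """Oriented volume via the six-term third-order determinant formula."""
    (a11, a12, a13), (a21, a22, a23), (a31, a32, a33) = a1, a2, a3
    return (a11 * a22 * a33 + a12 * a23 * a31 + a13 * a21 * a32
            - a11 * a23 * a32 - a12 * a21 * a33 - a13 * a22 * a31)
```

For a positively oriented orthonormal basis the value is 1, for linearly dependent vectors it is 0, and interchanging two arguments reverses the sign.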

Two schemes in Figure 2. show which products of entries enter the third-order determinant with the sign + and which with the sign −. The determinant of a matrix A is denoted either by det A or by the same matrix with its parentheses replaced by vertical lines. In the case of arbitrary dimension and an arbitrary field, we do not have notions such as area or volume.

Hence it is natural to define the determinant as a function with properties similar to (i)–(iii). We begin by introducing the necessary definitions.

Let V be a vector space over a field K and f(a1, a2, ..., an) a function of n vector arguments. The function f is called multilinear if it is linear in each argument when the values of the other arguments are fixed. A multilinear function f is called skew-symmetric if its value is multiplied by −1 whenever two of its arguments are interchanged. A skew-symmetric multilinear function has an important property: it vanishes whenever two of its arguments take equal values. Indeed, when these two arguments are interchanged, the value of the function does not change, and yet it is multiplied by −1. Hence, it equals zero. In fact, this property implies skew-symmetry as defined above.


To prove this, notice that when checking whether a function changes sign under the interchange of two particular arguments, all the other arguments are fixed. Thus it suffices to consider the case of a bilinear (i.e., two-argument) function. Let f be a bilinear function that becomes zero whenever the values of its arguments are equal. Then

    0 = f(a + b, a + b) = f(a, a) + f(a, b) + f(b, a) + f(b, b) = f(a, b) + f(b, a),

so f(b, a) = −f(a, b), which is skew-symmetry.

A sequence (k1, k2, ..., kn) in which every one of the numbers 1, 2, ..., n occurs exactly once is called an arrangement of the numbers 1, 2, ..., n. Notice that k1 can assume n possible values; k2, n − 1 values if k1 is fixed; k3, n − 2 values if k1 and k2 are fixed, etc.

Hence, the total number of arrangements is n(n − 1)(n − 2) ··· 1 = n!. The arrangement (1, 2, ..., n) is called trivial. We say that a pair of numbers forms an inversion in a given arrangement if the greater of them stands to the left of the lesser.

An arrangement is called even (respectively, odd) if it contains an even (respectively, odd) number of inversions. We also define the sign of an arrangement, which we set equal to 1 if the arrangement is even and −1 if it is odd.

The sign of an arrangement (k1, k2, ..., kn) is denoted by sign(k1, k2, ..., kn).

Example. For n = 3, the odd arrangements are (1, 3, 2) (one inversion), (3, 2, 1) (three inversions), and (2, 1, 3) (one inversion); the even arrangements are (1, 2, 3), (2, 3, 1), and (3, 1, 2). The trivial arrangement does not contain any inversions and is thus even.
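These notions are easy to experiment with. The following Python sketch (added as an illustration; the helper names `inversions` and `sign` are ours) counts inversions and computes the sign of an arrangement:

```python
from itertools import permutations

def inversions(arr):
    """Pairs in which the greater number stands to the left of the lesser."""
    return sum(1 for i in range(len(arr)) for j in range(i + 1, len(arr))
               if arr[i] > arr[j])

def sign(arr):
    """+1 for an even arrangement, -1 for an odd one."""
    return -1 if inversions(arr) % 2 else 1
```

Enumerating `permutations(range(1, n + 1))` reproduces the counts from the text: n! arrangements in total, half of them even and half odd.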

A course in algebra. E. B. Vinberg

Conversely, in the arrangement (n, n − 1, ..., 2, 1) every pair of numbers forms an inversion. Therefore, the number of inversions in this arrangement equals

    n(n − 1)/2 ≡ ⌊n/2⌋ (mod 2).

The interchange of positions of two elements in an arrangement is called a transposition of these elements.

Proposition. Any transposition changes the sign of an arrangement.

Proof. When a transposition is applied to two adjacent elements, only their relative position changes, while the number of inversions decreases or increases by 1. Hence, in this case the sign changes.

In the general case, if there are s elements between the two elements being transposed, their transposition can be achieved by 2s + 1 successive transpositions of adjacent elements. As we showed above, each of these changes the sign of the arrangement; since 2s + 1 is odd, in the end the sign will be the opposite of the original one. □

The numbers of even and odd arrangements are equal. Indeed, write down all even arrangements and transpose the first two elements in each of them.

We will obtain all odd arrangements, once each. Now we can state and prove the main theorem.

Theorem. For any c ∈ K, there exists a unique skew-symmetric multilinear function f of n arguments on the space K^n that satisfies the condition f(e1, e2, ..., en) = c, where e1, e2, ..., en is the standard basis of K^n. This function has the following form:

    f(a1, a2, ..., an) = c Σ sign(k1, k2, ..., kn) a_{1k_1} a_{2k_2} ··· a_{nk_n},

where ai = (a_{i1}, a_{i2}, ..., a_{in}) and the sum is taken over all arrangements (k1, k2, ..., kn) of the numbers 1, 2, ..., n.

Proof.

Let ai = Σ_k a_{ik} e_k. Then, by multilinearity,

    f(a1, a2, ..., an) = Σ a_{1k_1} a_{2k_2} ··· a_{nk_n} f(e_{k_1}, e_{k_2}, ..., e_{k_n}),

where the sum is taken over all sequences (k1, k2, ..., kn). If the indices k1, k2, ..., kn are not all distinct, then f(e_{k_1}, e_{k_2}, ..., e_{k_n}) = 0. If they are all distinct, then

    f(e_{k_1}, e_{k_2}, ..., e_{k_n}) = sign(k1, k2, ..., kn) · c.

Indeed, if this equality holds for some arrangement (k1, k2, ...), then it also holds for any arrangement obtained from it by a transposition, since under a transposition both sides are multiplied by −1. By the condition of the theorem, the equality holds for the trivial arrangement. But it is obvious that any arrangement can be obtained from the trivial one by a successive application of transpositions. Therefore, this equality holds for any arrangement and we obtain the expression stated in the theorem.


We conclude that if there is a function f that satisfies all the conditions of the theorem, then it has the form above; this proves uniqueness. Conversely, let f be defined by this formula. Linearity in each of the arguments is clear, since for any i every summand contains exactly one entry of ai as a factor. The condition f(e1, e2, ..., en) = c is satisfied, because in this case only the trivial arrangement contributes a nonzero summand. It remains to check that f is skew-symmetric.

Consider what happens when the arguments ai and aj are interchanged. We can split the set of all arrangements into pairs of arrangements obtained from each other by the transposition of ki and kj. According to the proposition on transpositions, the two arrangements of each pair have opposite signs, so the summands sign(k1, ..., kn) a_{1k_1} ··· a_{nk_n} corresponding to them enter the sum with opposite signs. When ai is interchanged with aj, the products within each pair are interchanged, hence the whole expression is multiplied by −1. □

The proof of the above theorem also shows that a skew-symmetric multilinear function is completely determined by its value on the standard basis.

The function satisfying the conditions of the theorem with c = 1 is called the determinant of order n. Therefore,

    det(a1, a2, ..., an) = Σ sign(k1, k2, ..., kn) a_{1k_1} a_{2k_2} ··· a_{nk_n},

where the sum is over all arrangements (k1, k2, ..., kn). Similarly, by identifying each matrix with the collection of its rows, we can consider every function of n elements of K^n as a function of a square matrix of order n, and vice versa.

The uniqueness statement of the theorem can now be restated as follows: if f is a skew-symmetric multilinear function of matrix rows, then

    f(A) = (det A) · f(E).
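For small matrices, the defining expansion over arrangements can be evaluated directly. The following brute-force Python sketch (an added illustration, practical only for small n; the names `sign` and `det` are ours) implements it:

```python
from itertools import permutations
from math import prod

def sign(arr):
    """Sign of an arrangement: +1 if the number of inversions is even."""
    inv = sum(1 for i in range(len(arr)) for j in range(i + 1, len(arr))
              if arr[i] > arr[j])
    return -1 if inv % 2 else 1

def det(a):
    """Determinant as the signed sum over all arrangements (k1, ..., kn)."""
    n = len(a)
    return sum(sign(ks) * prod(a[i][ks[i]] for i in range(n))
               for ks in permutations(range(n)))
```

For example, `det([[1, 2], [3, 4]])` gives −2, and interchanging the two rows reverses the sign, in accordance with skew-symmetry.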

Calculating a determinant of large order directly from the definition would require summing up n! products. There exist much simpler ways to calculate determinants. They are based on properties of determinants that we will prove below.

Proposition. The determinant of a matrix does not change under an elementary row transformation of the first type.

Proof. Suppose, for definiteness, that we add to the first row of A the second row multiplied by c, and denote the new matrix by A'. By multilinearity, det A' equals det A plus c times the determinant of the matrix whose first row coincides with its second row; the latter determinant is zero, so det A' = det A. □

We also know that when two rows are interchanged, the determinant is multiplied by −1, and when a row is multiplied by a number, the determinant is multiplied by this number.

Thus, we know how the determinant changes under any elementary row transformation of a given matrix. Since any matrix can be reduced to step form, and every square matrix in step form is triangular (but maybe not strictly triangular), it remains to figure out how to calculate the determinant of a triangular matrix.

Proposition. The determinant of a triangular matrix equals the product of its diagonal entries.

Proof. For any matrix, the product of the diagonal entries occurs as a summand in the expansion of the determinant: it corresponds to the trivial arrangement and thus enters with the sign +. When the matrix is triangular, all other summands in this expansion are zero. □

Besides providing us with a practical method of calculating determinants, the propositions above yield the following criterion.
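This proposition is easy to confirm against the full expansion over arrangements (the brute-force `det` below is an added illustration, not the book's text):

```python
from itertools import permutations
from math import prod

def det(a):
    """Brute-force determinant: signed sum over all arrangements."""
    def sign(arr):
        inv = sum(1 for i in range(len(arr)) for j in range(i + 1, len(arr))
                  if arr[i] > arr[j])
        return -1 if inv % 2 else 1
    n = len(a)
    return sum(sign(ks) * prod(a[i][ks[i]] for i in range(n))
               for ks in permutations(range(n)))

# For an upper triangular matrix, only the diagonal product survives.
upper = [[2, 7, 1], [0, 3, 5], [0, 0, 4]]
```

Here every non-trivial arrangement picks at least one entry below the diagonal, which is zero, so `det(upper)` equals 2 · 3 · 4.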

Theorem. A square matrix A is nonsingular if and only if det A ≠ 0.

Proof. Reduce A to step form by elementary row transformations.

If at some point we used transformations of the second or third type, then the determinant could have changed, but whether or not it equals zero has been preserved. Now, A is nonsingular if and only if its step form is strictly triangular, and this is equivalent to the step form having a nonzero determinant. □

We continue studying properties of the determinant.

Theorem. det A^T = det A.

Proof. Just like the determinant of A, the determinant of A^T is the algebraic sum of all possible products of n entries of A, one from each row and each column. So we only have to check that each such product appears in the expressions for det A and det A^T with the same sign. Let a_{i1 j1} a_{i2 j2} ··· a_{in jn} be a product of n entries of the matrix A, one from each row and each column.


Lemma. The product sign(i1, i2, ..., in) · sign(j1, j2, ..., jn) does not change under any rearrangement of the factors a_{i1 j1}, a_{i2 j2}, ..., a_{in jn}.

Proof of Lemma. Any rearrangement of the factors can be achieved by successively exchanging two of them. At each such exchange, each of the sequences (i1, i2, ..., in) and (j1, j2, ..., jn) undergoes a transposition, so each of the signs changes, and their product does not. □

We continue with the proof of the theorem. To find the sign with which the product a_{i1 j1} ··· a_{in jn} appears in det A, we must order its factors by their row numbers; the sign is then the sign of the resulting arrangement of column numbers, which by the lemma equals sign(i1, ..., in) · sign(j1, ..., jn). To find the sign of the same product in det A^T, we must order its factors by their column numbers, i.e., by their row numbers in A^T; by the lemma, we obtain the same value.

This means that the product we are considering appears in det A and det A^T with the same sign. □

It follows from this theorem that every property of the determinant as a function of rows is also valid for it as a function of columns. In particular, we have

Corollary. The determinant is a skew-symmetric multilinear function of matrix columns.

Consider now a block-triangular matrix A with square diagonal blocks B and D, off-diagonal block C, and the other off-diagonal block zero. When B and D are fixed, the determinant of A is a skew-symmetric multilinear function of its lower rows, hence a skew-symmetric multilinear function of the rows of C.
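The theorem and its corollary are easy to spot-check numerically (the brute-force `det` and the matrix `a` below are added illustrations, not the book's text):

```python
from itertools import permutations
from math import prod

def det(a):
    """Brute-force determinant: signed sum over all arrangements."""
    def sign(arr):
        inv = sum(1 for i in range(len(arr)) for j in range(i + 1, len(arr))
                  if arr[i] > arr[j])
        return -1 if inv % 2 else 1
    n = len(a)
    return sum(sign(ks) * prod(a[i][ks[i]] for i in range(n))
               for ks in permutations(range(n)))

def transpose(a):
    """Rows of the result are the columns of the argument."""
    return [list(col) for col in zip(*a)]

a = [[1, 2, 3], [4, 5, 6], [7, 8, 10]]
```

For this matrix, `det(transpose(a))` equals `det(a)`, and interchanging two columns multiplies the determinant by −1, exactly as the corollary asserts.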

By the corollary on uniqueness, such a function is proportional to the determinant of its rows, and comparing values we obtain det A = det B · det D. Due to the theorem on the transposed determinant, the same formula holds for a block-triangular matrix of the other type.

Example. Here we will calculate the so-called Vandermonde determinant

    V(x1, x2, ..., xn),

the determinant of the matrix whose ith row is (1, xi, xi², ..., xi^{n−1}). The answer turns out to be the product of all differences xj − xi with i < j.
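Before doing the calculation, one can confirm the claimed answer on a concrete example (an added Python illustration; the brute-force `det` is ours):

```python
from itertools import permutations
from math import prod

def det(a):
    """Brute-force determinant: signed sum over all arrangements."""
    def sign(arr):
        inv = sum(1 for i in range(len(arr)) for j in range(i + 1, len(arr))
                  if arr[i] > arr[j])
        return -1 if inv % 2 else 1
    n = len(a)
    return sum(sign(ks) * prod(a[i][ks[i]] for i in range(n))
               for ks in permutations(range(n)))

xs = [2, 3, 5, 7]
# Vandermonde matrix: ith row is (1, x_i, x_i^2, x_i^3).
v = [[x ** j for j in range(len(xs))] for x in xs]
# Product of all differences x_j - x_i with i < j.
product_of_differences = prod(xs[j] - xs[i]
                              for i in range(len(xs))
                              for j in range(i + 1, len(xs)))
```

For xs = [2, 3, 5, 7] both `det(v)` and the product of differences equal 240; in particular the determinant is nonzero exactly when the xi are pairwise distinct.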

