# Linear Algebra

## Standard Functions

Linear algebra functions in Julia are largely implemented by calling functions from LAPACK. Sparse factorizations call functions from SuiteSparse.

`*(A, B)`

Matrix multiplication.

`\(A, B)`

Matrix division using a polyalgorithm. For input matrices `A` and `B`, the result `X` is such that `A*X == B` when `A` is square. The solver that is used depends upon the structure of `A`. A direct solver is used for upper or lower triangular `A`. For Hermitian `A` (equivalent to symmetric `A` for non-complex `A`) the `BunchKaufman` factorization is used. Otherwise an LU factorization is used. For rectangular `A` the result is the minimum-norm least squares solution computed by a pivoted QR factorization of `A` and a rank estimate of `A` based on the R factor.

When `A` is sparse, a similar polyalgorithm is used. For indefinite matrices, the `LDLt` factorization does not use pivoting during the numerical factorization and therefore the procedure can fail even for invertible matrices.
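A minimal sketch of the square case (the matrix values are arbitrary illustrations). Since this `A` is Hermitian, the rules above select the Bunch-Kaufman path:

```julia
A = [2.0 1.0; 1.0 3.0]   # square, symmetric (hence Hermitian) matrix
b = [3.0, 4.0]
x = A \ b                # the result satisfies A*x == b up to roundoff
```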

`vecdot(x, y)`

For any iterable containers `x` and `y` (including arrays of any dimension) of numbers (or any element type for which `dot` is defined), compute the Euclidean dot product (the sum of `dot(x[i], y[i])`) as if they were vectors.

`factorize(A)`

Compute a convenient factorization (including LU, Cholesky, Bunch-Kaufman, LowerTriangular, UpperTriangular) of `A`, based upon the type of the input matrix. The return value can then be reused for efficient solving of multiple systems. For example: `A = factorize(A); x = A\b; y = A\C`.

`full(F)`

Reconstruct the matrix `A` from the factorization `F = factorize(A)`.

`lu(A) → L, U, p`

Compute the LU factorization of `A`, such that `A[p,:] == L*U`.
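For example, the documented permutation identity can be checked directly (matrix values are arbitrary):

```julia
A = [4.0 3.0; 6.0 3.0]
L, U, p = lu(A)
# The permuted rows of A equal the product L*U
```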

`lufact(A[, pivot=Val{true}]) → F`

Compute the LU factorization of `A`. The return type of `F` depends on the type of `A`. In most cases, if `A` is a subtype `S` of `AbstractMatrix` with an element type `T` supporting `+`, `-`, `*` and `/`, the return type is `LU{T,S{T}}`. If pivoting is chosen (default) the element type should also support `abs` and `<`. When `A` is sparse and has elements of type `Float32`, `Float64`, `Complex{Float32}`, or `Complex{Float64}`, the return type is `UmfpackLU`. Some examples are shown in the table below.

| Type of input `A`   | Type of output `F`     | Relationship between `F` and `A`             |
|---------------------|------------------------|----------------------------------------------|
| `Matrix()`          | `LU`                   | `F[:L]*F[:U] == A[F[:p], :]`                 |
| `Tridiagonal()`     | `LU{T,Tridiagonal{T}}` | `F[:L]*F[:U] == A[F[:p], :]`                 |
| `SparseMatrixCSC()` | `UmfpackLU`            | `F[:L]*F[:U] == (F[:Rs] .* A)[F[:p], F[:q]]` |

The individual components of the factorization `F` can be accessed by indexing:

| Component | Description                         | `LU` | `LU{T,Tridiagonal{T}}` | `UmfpackLU` |
|-----------|-------------------------------------|------|------------------------|-------------|
| `F[:L]`   | `L` (lower triangular) part of `LU` | ✓    | ✓                      | ✓           |
| `F[:U]`   | `U` (upper triangular) part of `LU` | ✓    | ✓                      | ✓           |
| `F[:p]`   | (right) permutation `Vector`        | ✓    | ✓                      | ✓           |
| `F[:P]`   | (right) permutation `Matrix`        | ✓    | ✓                      |             |
| `F[:q]`   | left permutation `Vector`           |      |                        | ✓           |
| `F[:Rs]`  | `Vector` of scaling factors         |      |                        | ✓           |
| `F[:(:)]` | `(L, U, p, q, Rs)` components       |      |                        | ✓           |

| Supported function | `LU` | `LU{T,Tridiagonal{T}}` | `UmfpackLU` |
|--------------------|------|------------------------|-------------|
| `/`                | ✓    |                        |             |
| `\`                | ✓    | ✓                      | ✓           |
| `cond`             | ✓    |                        | ✓           |
| `det`              | ✓    | ✓                      | ✓           |
| `logdet`           | ✓    | ✓                      |             |
| `logabsdet`        | ✓    | ✓                      |             |
| `size`             | ✓    | ✓                      |             |
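A short sketch of the dense case from the first table row (matrix values are arbitrary):

```julia
A = [4.0 3.0; 6.0 3.0]
F = lufact(A)
# Components are accessed by indexing, per the component table above
L, U, p = F[:L], F[:U], F[:p]
```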

`lufact!(A) → LU`

`lufact!` is the same as `lufact()`, but saves space by overwriting the input `A`, instead of creating a copy. For sparse `A`, the `nzval` field is not overwritten but the index fields, `colptr` and `rowval`, are decremented in place, converting from 1-based indices to 0-based indices.

`chol(A[, LU]) → F`

Compute the Cholesky factorization of a symmetric positive definite matrix `A` and return the matrix `F`. If `LU` is `Val{:U}` (Upper), `F` is of type `UpperTriangular` and `A = F'*F`. If `LU` is `Val{:L}` (Lower), `F` is of type `LowerTriangular` and `A = F*F'`. `LU` defaults to `Val{:U}`.
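A minimal sketch using the default upper-triangular form (matrix values are arbitrary):

```julia
A = [4.0 2.0; 2.0 3.0]   # symmetric positive definite
U = chol(A)              # default LU = Val{:U}: upper triangular factor
# A == U'*U up to roundoff
```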

`cholfact(A, [LU=:U[, pivot=Val{false}]][; tol=-1.0]) → Cholesky`

Compute the Cholesky factorization of a dense symmetric positive (semi)definite matrix `A` and return either a `Cholesky` if `pivot==Val{false}` or a `CholeskyPivoted` if `pivot==Val{true}`. `LU` may be `:L` for using the lower part or `:U` for the upper part. The default is to use `:U`. The triangular matrix can be obtained from the factorization `F` with `F[:L]` and `F[:U]`. The following functions are available for `Cholesky` objects: `size`, `\`, `inv`, `det`. For `CholeskyPivoted` a `rank` is also defined. If `pivot==Val{false}`, a `PosDefException` is thrown in case the matrix is not positive definite. The argument `tol` determines the tolerance for determining the rank. For negative values, the tolerance is the machine precision.

`cholfact(A; shift=0, perm=Int[]) → CHOLMOD.Factor`

Compute the Cholesky factorization of a sparse positive definite matrix `A`. A fill-reducing permutation is used. `F = cholfact(A)` is most frequently used to solve systems of equations with `F\b`, but the methods `diag`, `det`, and `logdet` are also defined for `F`. You can also extract individual factors from `F`, using `F[:L]`. However, since pivoting is on by default, the factorization is internally represented as `A == P'*L*L'*P` with a permutation matrix `P`; using just `L` without accounting for `P` will give incorrect answers. To include the effects of permutation, it is typically preferable to extract "combined" factors like `PtL = F[:PtL]` (the equivalent of `P'*L`) and `LtP = F[:UP]` (the equivalent of `L'*P`).

Setting the optional `shift` keyword argument computes the factorization of `A+shift*I` instead of `A`. If the `perm` argument is nonempty, it should be a permutation of `1:size(A,1)` giving the ordering to use (instead of CHOLMOD's default AMD ordering).

The function calls the C library CHOLMOD; many other functions from the library are wrapped but not exported.

`cholfact!(A[, LU=:U[, pivot=Val{false}]][; tol=-1.0]) → Cholesky`

`cholfact!` is the same as `cholfact()`, but saves space by overwriting the input `A`, instead of creating a copy. `cholfact!` can also reuse the symbolic factorization from a different matrix `F` with the same structure when used as `cholfact!(F::CholmodFactor, A)`.

`ldltfact(::SymTridiagonal) → LDLt`

Compute an `LDLt` factorization of a real symmetric tridiagonal matrix `A` such that `A = L*Diagonal(d)*L'`, where `L` is a unit lower triangular matrix and `d` is a vector. The main use of an `LDLt` factorization `F = ldltfact(A)` is to solve the linear system of equations `Ax = b` with `F\b`.
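For example, solving a small tridiagonal system via the factorization (values are arbitrary):

```julia
A = SymTridiagonal([4.0, 5.0, 6.0], [1.0, 2.0])
F = ldltfact(A)
b = [1.0, 2.0, 3.0]
x = F \ b                # solves A*x == b using the LDLt factorization
```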

`ldltfact(::Union{SparseMatrixCSC, Symmetric{Float64,SparseMatrixCSC{Float64,SuiteSparse_long}}, Hermitian{Complex{Float64},SparseMatrixCSC{Complex{Float64},SuiteSparse_long}}}; shift=0, perm=Int[]) → CHOLMOD.Factor`

Compute the `LDLt` factorization of a sparse symmetric or Hermitian matrix. A fill-reducing permutation is used. `F = ldltfact(A)` is most frequently used to solve systems of equations `A*x = b` with `F\b`, but the methods `diag`, `det`, and `logdet` are also defined for `F`. You can also extract individual factors from `F`, using `F[:L]`. However, since pivoting is on by default, the factorization is internally represented as `A == P'*L*D*L'*P` with a permutation matrix `P`; using just `L` without accounting for `P` will give incorrect answers. To include the effects of permutation, it is typically preferable to extract "combined" factors like `PtL = F[:PtL]` (the equivalent of `P'*L`) and `LtP = F[:UP]` (the equivalent of `L'*P`). The complete list of supported factors is `:L, :PtL, :D, :UP, :U, :LD, :DU, :PtLD, :DUP`.

Setting the optional `shift` keyword argument computes the factorization of `A+shift*I` instead of `A`. If the `perm` argument is nonempty, it should be a permutation of `1:size(A,1)` giving the ordering to use (instead of CHOLMOD's default AMD ordering).

The function calls the C library CHOLMOD; many other functions from the library are wrapped but not exported.

`ldltfact!(::SymTridiagonal) → LDLt`

Same as `ldltfact`, but saves space by overwriting the input `A`, instead of creating a copy.

`qr(A[, pivot=Val{false}][; thin=true]) → Q, R, [p]`

Compute the (pivoted) QR factorization of `A` such that either `A = Q*R` or `A[:,p] = Q*R`. Also see `qrfact`. The default is to compute a thin factorization. Note that `R` is not extended with zeros when the full `Q` is requested.
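A minimal sketch of the default thin factorization (matrix values are arbitrary):

```julia
A = [1.0 2.0; 3.0 4.0; 5.0 6.0]
Q, R = qr(A)             # thin factorization: Q is 3x2, R is 2x2
# Q has orthonormal columns and Q*R reproduces A
```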

`qrfact(A[, pivot=Val{false}]) → F`

Computes the QR factorization of `A`. The return type of `F` depends on the element type of `A` and whether pivoting is specified (with `pivot==Val{true}`).

| Return type   | `eltype(A)`     | `pivot`      | Relationship between `F` and `A` |
|---------------|-----------------|--------------|----------------------------------|
| `QR`          | not `BlasFloat` | either       | `A == F[:Q]*F[:R]`               |
| `QRCompactWY` | `BlasFloat`     | `Val{false}` | `A == F[:Q]*F[:R]`               |
| `QRPivoted`   | `BlasFloat`     | `Val{true}`  | `A[:,F[:p]] == F[:Q]*F[:R]`      |

`BlasFloat` refers to any of: `Float32`, `Float64`, `Complex64` or `Complex128`.

The individual components of the factorization `F` can be accessed by indexing:

| Component | Description                               | `QR`            | `QRCompactWY`      | `QRPivoted`     |
|-----------|-------------------------------------------|-----------------|--------------------|-----------------|
| `F[:Q]`   | `Q` (orthogonal/unitary) part of `QR`     | ✓ (`QRPackedQ`) | ✓ (`QRCompactWYQ`) | ✓ (`QRPackedQ`) |
| `F[:R]`   | `R` (upper right triangular) part of `QR` | ✓               | ✓                  | ✓               |
| `F[:p]`   | pivot `Vector`                            |                 |                    | ✓               |
| `F[:P]`   | (pivot) permutation `Matrix`              |                 |                    | ✓               |

The following functions are available for the `QR` objects: `size` and `\`. When `A` is rectangular, `\` will return a least squares solution, and if the solution is not unique, the one with smallest norm is returned.

Multiplication with respect to either thin or full `Q` is allowed, i.e. both `F[:Q]*F[:R]` and `F[:Q]*A` are supported. A `Q` matrix can be converted into a regular matrix with `full()`, which has a named argument `thin`.

Note: `qrfact` returns multiple types because LAPACK uses several representations that minimize the memory storage requirements of products of Householder elementary reflectors, so that the `Q` and `R` matrices can be stored compactly rather than as two separate dense matrices.

The data contained in `QR` or `QRPivoted` can be used to construct the `QRPackedQ` type, which is a compact representation of the rotation matrix:

\[Q = \prod_{i=1}^{\min(m,n)} (I - \tau_i v_i v_i^T)\]

where \(\tau_i\) is the scale factor and \(v_i\) is the projection vector associated with the \(i^{th}\) Householder elementary reflector.

The data contained in `QRCompactWY` can be used to construct the `QRCompactWYQ` type, which is a compact representation of the rotation matrix

\[Q = I + Y T Y^T\]

where `Y` is \(m \times r\) lower trapezoidal and `T` is \(r \times r\) upper triangular. The *compact WY* representation [Schreiber1989] is not to be confused with the older *WY* representation [Bischof1987]. (The LAPACK documentation uses `V` in lieu of `Y`.)

[Bischof1987] *(1, 2)* C Bischof and C Van Loan, "The WY representation for products of Householder matrices", SIAM J Sci Stat Comput 8 (1987), s2-s13. doi:10.1137/0908009

[Schreiber1989] R Schreiber and C Van Loan, "A storage-efficient WY representation for products of Householder transformations", SIAM J Sci Stat Comput 10 (1989), 53-57. doi:10.1137/0910005

`qrfact(A) → SPQR.Factorization`

Compute the QR factorization of a sparse matrix `A`. A fill-reducing permutation is used. The main application of this type is to solve least squares problems with `\`. The function calls the C library SPQR; a few additional functions from the library are wrapped but not exported.

`qrfact!(A[, pivot=Val{false}])`

`qrfact!` is the same as `qrfact()` when `A` is a subtype of `StridedMatrix`, but saves space by overwriting the input `A`, instead of creating a copy.

`full(QRCompactWYQ[, thin=true]) → Matrix`

Converts an orthogonal or unitary matrix stored as a `QRCompactWYQ` object, i.e. in the compact WY format [Bischof1987], to a dense matrix.

Optionally takes a `thin` Boolean argument, which if `true` omits the columns that span the rows of `R` in the QR factorization that are zero. The resulting matrix is the `Q` in a thin QR factorization (sometimes called the reduced QR factorization). If `false`, returns a `Q` that spans all rows of `R` in its corresponding QR factorization.

`bkfact(A) → BunchKaufman`

Compute the Bunch-Kaufman [Bunch1977] factorization of a real symmetric or complex Hermitian matrix `A` and return a `BunchKaufman` object. The following functions are available for `BunchKaufman` objects: `size`, `\`, `inv`, `issym`, `ishermitian`.

[Bunch1977] J R Bunch and L Kaufman, "Some stable methods for calculating inertia and solving symmetric linear systems", Mathematics of Computation 31:137 (1977), 163-179.

`bkfact!(A) → BunchKaufman`

`bkfact!` is the same as `bkfact()`, but saves space by overwriting the input `A`, instead of creating a copy.

`eig(A, [irange,] [vl,] [vu,] [permute=true,] [scale=true]) → D, V`

Computes eigenvalues and eigenvectors of `A`. See `eigfact()` for details on the `balance` keyword argument.

    julia> eig([1.0 0.0 0.0; 0.0 3.0 0.0; 0.0 0.0 18.0])
    ([1.0,3.0,18.0],
    3x3 Array{Float64,2}:
     1.0  0.0  0.0
     0.0  1.0  0.0
     0.0  0.0  1.0)

`eig` is a wrapper around `eigfact()`, extracting all parts of the factorization to a tuple; where possible, using `eigfact()` is recommended.

`eig(A, B) → D, V`

Computes generalized eigenvalues and vectors of `A` with respect to `B`.

`eig` is a wrapper around `eigfact()`, extracting all parts of the factorization to a tuple; where possible, using `eigfact()` is recommended.

`eigvals(A, [irange,] [vl,] [vu]) → values`

Returns the eigenvalues of `A`. If `A` is `Symmetric`, `Hermitian` or `SymTridiagonal`, it is possible to calculate only a subset of the eigenvalues by specifying either a `UnitRange` `irange` covering indices of the sorted eigenvalues, or a pair `vl` and `vu` for the lower and upper boundaries of the eigenvalues.

For general non-symmetric matrices it is possible to specify how the matrix is balanced before the eigenvector calculation. The option `permute=true` permutes the matrix to become closer to upper triangular, and `scale=true` scales the matrix by its diagonal elements to make rows and columns more equal in norm. The default is `true` for both options.

`eigvals!(A, [irange,] [vl,] [vu]) → values`

Same as `eigvals`, but saves space by overwriting the input `A` (and `B`), instead of creating a copy.

`eigmax(A)`

Returns the largest eigenvalue of `A`.

`eigmin(A)`

Returns the smallest eigenvalue of `A`.

`eigvecs(A, [eigvals,] [permute=true,] [scale=true]) → Matrix`

Returns a matrix `M` whose columns are the eigenvectors of `A`. (The `k`th eigenvector can be obtained from the slice `M[:, k]`.) The `permute` and `scale` keywords are the same as for `eigfact()`.

For `SymTridiagonal` matrices, if the optional vector of eigenvalues `eigvals` is specified, returns the specific corresponding eigenvectors.

`eigfact(A, [irange,] [vl,] [vu,] [permute=true,] [scale=true]) → Eigen`

Computes the eigenvalue decomposition of `A`, returning an `Eigen` factorization object `F` which contains the eigenvalues in `F[:values]` and the eigenvectors in the columns of the matrix `F[:vectors]`. (The `k`th eigenvector can be obtained from the slice `F[:vectors][:, k]`.)

The following functions are available for `Eigen` objects: `inv`, `det`.

If `A` is `Symmetric`, `Hermitian` or `SymTridiagonal`, it is possible to calculate only a subset of the eigenvalues by specifying either a `UnitRange` `irange` covering indices of the sorted eigenvalues or a pair `vl` and `vu` for the lower and upper boundaries of the eigenvalues.

For general nonsymmetric matrices it is possible to specify how the matrix is balanced before the eigenvector calculation. The option `permute=true` permutes the matrix to become closer to upper triangular, and `scale=true` scales the matrix by its diagonal elements to make rows and columns more equal in norm. The default is `true` for both options.

`eigfact(A, B) → GeneralizedEigen`

Computes the generalized eigenvalue decomposition of `A` and `B`, returning a `GeneralizedEigen` factorization object `F` which contains the generalized eigenvalues in `F[:values]` and the generalized eigenvectors in the columns of the matrix `F[:vectors]`. (The `k`th generalized eigenvector can be obtained from the slice `F[:vectors][:, k]`.)

`eigfact!(A[, B])`

Same as `eigfact()`, but saves space by overwriting the input `A` (and `B`), instead of creating a copy.
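A minimal sketch of the eigenvalue/eigenvector identity for `eigfact` (a diagonal matrix is used so the result is easy to read):

```julia
A = [2.0 0.0; 0.0 5.0]
F = eigfact(A)
lambda = F[:values][1]
v = F[:vectors][:, 1]     # eigenvector for the first eigenvalue
# A*v == lambda*v up to roundoff
```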

`hessfact(A)`

Compute the Hessenberg decomposition of `A` and return a `Hessenberg` object. If `F` is the factorization object, the unitary matrix can be accessed with `F[:Q]` and the Hessenberg matrix with `F[:H]`. When `Q` is extracted, the resulting type is the `HessenbergQ` object, and it may be converted to a regular matrix with `full()`.

`hessfact!(A)`

`hessfact!` is the same as `hessfact()`, but saves space by overwriting the input `A`, instead of creating a copy.

`schurfact(A) → Schur`

Computes the Schur factorization of the matrix `A`. The (quasi) triangular Schur factor can be obtained from the `Schur` object `F` with either `F[:Schur]` or `F[:T]`, and the unitary/orthogonal Schur vectors can be obtained with `F[:vectors]` or `F[:Z]`, such that `A = F[:vectors]*F[:Schur]*F[:vectors]'`. The eigenvalues of `A` can be obtained with `F[:values]`.

`schurfact!(A)`

Computes the Schur factorization of `A`, overwriting `A` in the process. See `schurfact()`.

`schur(A) → Schur[:T], Schur[:Z], Schur[:values]`

See `schurfact()`.
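A small sketch of the reassembly identity documented for `schurfact` (matrix values are arbitrary):

```julia
A = [0.0 1.0; -2.0 3.0]
F = schurfact(A)
T, Z = F[:Schur], F[:vectors]
# A is recovered from the Schur factor and the Schur vectors
```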

`ordschur(Q, T, select) → Schur`

Reorders the Schur factorization of a real matrix `A = Q*T*Q'` according to the logical array `select`, returning a Schur object `F`. The selected eigenvalues appear in the leading diagonal of `F[:Schur]`, and the corresponding leading columns of `F[:vectors]` form an orthonormal basis of the corresponding right invariant subspace. A complex conjugate pair of eigenvalues must be either both included or both excluded via `select`.

`ordschur!(Q, T, select) → Schur`

Reorders the Schur factorization of a real matrix `A = Q*T*Q'`, overwriting `Q` and `T` in the process. See `ordschur()`.

`ordschur(S, select) → Schur`

Reorders the Schur factorization `S` of type `Schur`.

`ordschur!(S, select) → Schur`

Reorders the Schur factorization `S` of type `Schur`, overwriting `S` in the process. See `ordschur()`.

`schurfact(A, B) → GeneralizedSchur`

Computes the Generalized Schur (or QZ) factorization of the matrices `A` and `B`. The (quasi) triangular Schur factors can be obtained from the `Schur` object `F` with `F[:S]` and `F[:T]`, the left unitary/orthogonal Schur vectors can be obtained with `F[:left]` or `F[:Q]`, and the right unitary/orthogonal Schur vectors can be obtained with `F[:right]` or `F[:Z]`, such that `A = F[:left]*F[:S]*F[:right]'` and `B = F[:left]*F[:T]*F[:right]'`. The generalized eigenvalues of `A` and `B` can be obtained with `F[:alpha]./F[:beta]`.

`schur(A, B) → GeneralizedSchur[:S], GeneralizedSchur[:T], GeneralizedSchur[:Q], GeneralizedSchur[:Z]`

See `schurfact()`.

`ordschur(S, T, Q, Z, select) → GeneralizedSchur`

Reorders the Generalized Schur factorization of a matrix pair `(A, B) = (Q*S*Z^{H}, Q*T*Z^{H})` according to the logical array `select` and returns a GeneralizedSchur object `GS`. The selected eigenvalues appear in the leading diagonal of both `GS[:S]` and `GS[:T]`, and the left and right unitary/orthogonal Schur vectors are also reordered such that `(A, B) = GS[:Q]*(GS[:S], GS[:T])*GS[:Z]^{H}` still holds and the generalized eigenvalues of `A` and `B` can still be obtained with `GS[:alpha]./GS[:beta]`.

`ordschur!(S, T, Q, Z, select) → GeneralizedSchur`

Reorders the Generalized Schur factorization of a matrix pair by overwriting the matrices `(S, T, Q, Z)` in the process. See `ordschur()`.

`ordschur(GS, select) → GeneralizedSchur`

Reorders the Generalized Schur factorization of a GeneralizedSchur object. See `ordschur()`.

`ordschur!(GS, select) → GeneralizedSchur`

Reorders the Generalized Schur factorization of a GeneralizedSchur object by overwriting the object with the new factorization. See `ordschur()`.

`svdfact(A[, thin=true]) → SVD`

Compute the Singular Value Decomposition (SVD) of `A` and return an `SVD` object. `U`, `S`, `V` and `Vt` can be obtained from the factorization `F` with `F[:U]`, `F[:S]`, `F[:V]` and `F[:Vt]`, such that `A = U*diagm(S)*Vt`. If `thin` is `true`, an economy mode decomposition is returned. The algorithm produces `Vt`, and hence `Vt` is more efficient to extract than `V`. The default is to produce a thin decomposition.

`svdfact!(A[, thin=true]) → SVD`

`svdfact!` is the same as `svdfact()`, but saves space by overwriting the input `A`, instead of creating a copy. If `thin` is `true`, an economy mode decomposition is returned. The default is to produce a thin decomposition.

`svd(A[, thin=true]) → U, S, V`

Wrapper around `svdfact` extracting all parts of the factorization to a tuple. Direct use of `svdfact` is therefore generally more efficient. Computes the SVD of `A`, returning `U`, vector `S`, and `V` such that `A == U*diagm(S)*V'`. If `thin` is `true`, an economy mode decomposition is returned. The default is to produce a thin decomposition.

`svdvals(A)`

Returns the singular values of `A`.

`svdvals!(A)`

Returns the singular values of `A`, while saving space by overwriting the input.
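A minimal sketch of the thin SVD identity above (matrix values are arbitrary):

```julia
A = [1.0 0.0; 0.0 2.0; 0.0 0.0]
U, S, V = svd(A)               # thin SVD: U is 3x2
# Singular values are returned in decreasing order
```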

`svdfact(A, B) → GeneralizedSVD`

Compute the generalized SVD of `A` and `B`, returning a `GeneralizedSVD` factorization object `F`, such that `A = F[:U]*F[:D1]*F[:R0]*F[:Q]'` and `B = F[:V]*F[:D2]*F[:R0]*F[:Q]'`.

`svd(A, B) → U, V, Q, D1, D2, R0`

Wrapper around `svdfact` extracting all parts of the factorization to a tuple. Direct use of `svdfact` is therefore generally more efficient. The function returns the generalized SVD of `A` and `B`, returning `U`, `V`, `Q`, `D1`, `D2`, and `R0` such that `A = U*D1*R0*Q'` and `B = V*D2*R0*Q'`.

`svdvals(A, B)`

Return only the singular values from the generalized singular value decomposition of `A` and `B`.

`givens{T}(::T, ::T, ::Integer, ::Integer) → {Givens, T}`

Computes the tuple `(G, r) = givens(f, g, i1, i2)` where `G` is a Givens rotation and `r` is a scalar such that `G*x = y` with `x[i1] = f`, `x[i2] = g`, `y[i1] = r`, and `y[i2] = 0`. The cosine and sine of the rotation angle can be extracted from the `Givens` type with `G.c` and `G.s` respectively. The arguments `f` and `g` can be either `Float32`, `Float64`, `Complex{Float32}`, or `Complex{Float64}`. The `Givens` type supports left multiplication `G*A` and conjugated transpose right multiplication `A*G'`. The type doesn't have a `size` and can therefore be multiplied with matrices of arbitrary size as long as `i2 <= size(A, 2)` for `G*A` or `i2 <= size(A, 1)` for `A*G'`.

`givens{T}(::AbstractArray{T}, ::Integer, ::Integer, ::Integer) → {Givens, T}`

Computes the tuple `(G, r) = givens(A, i1, i2, col)` where `G` is a Givens rotation and `r` is a scalar such that `G*A[:, col] = y` with `y[i1] = r` and `y[i2] = 0`. The cosine and sine of the rotation angle can be extracted from the `Givens` type with `G.c` and `G.s` respectively. The element type of `A` can be either `Float32`, `Float64`, `Complex{Float32}`, or `Complex{Float64}`. The `Givens` type supports left multiplication `G*A` and conjugated transpose right multiplication `A*G'`. The type doesn't have a `size` and can therefore be multiplied with matrices of arbitrary size as long as `i2 <= size(A, 2)` for `G*A` or `i2 <= size(A, 1)` for `A*G'`.
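A minimal sketch of using a Givens rotation to zero out a subdiagonal entry, via the documented left multiplication `G*A` (matrix values are arbitrary):

```julia
A = [3.0 1.0; 4.0 2.0]
G, r = givens(A[1,1], A[2,1], 1, 2)
B = G*A        # left multiplication rotates rows 1 and 2
# B[1,1] == r == hypot(3, 4), and B[2,1] is rotated to zero
```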

`triu(M)`

Upper triangle of a matrix.

`triu(M, k)`

Returns the upper triangle of `M` starting from the `k`th superdiagonal.

`triu!(M)`

Upper triangle of a matrix, overwriting `M` in the process.

`triu!(M, k)`

Returns the upper triangle of `M` starting from the `k`th superdiagonal, overwriting `M` in the process.

`tril(M)`

Lower triangle of a matrix.

`tril(M, k)`

Returns the lower triangle of `M` starting from the `k`th superdiagonal.

`tril!(M)`

Lower triangle of a matrix, overwriting `M` in the process.

`tril!(M, k)`

Returns the lower triangle of `M` starting from the `k`th superdiagonal, overwriting `M` in the process.

`diagind(M[, k])`

A `Range` giving the indices of the `k`th diagonal of the matrix `M`.

`diag(M[, k])`

The `k`th diagonal of a matrix, as a vector. Use `diagm` to construct a diagonal matrix.

`diagm(v[, k])`

Construct a diagonal matrix and place `v` on the `k`th diagonal.
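For example, `diagm` and `diag` are inverses of each other on the main diagonal:

```julia
D = diagm([1, 2, 3])     # 3x3 matrix with the vector on the main diagonal
d0 = diag(D)             # [1, 2, 3]: the main diagonal
d1 = diag(D, 1)          # [0, 0]: the first superdiagonal
```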

`scale(A, b)`

`scale(b, A)`

Scale an array `A` by a scalar `b`, returning a new array.

If `A` is a matrix and `b` is a vector, then `scale(A, b)` scales each column `i` of `A` by `b[i]` (similar to `A*diagm(b)`), while `scale(b, A)` scales each row `i` of `A` by `b[i]` (similar to `diagm(b)*A`), returning a new array.

Note: for large `A`, `scale` can be much faster than `A .* b` or `b .* A`, due to the use of BLAS.

`scale!(A, b)`

`scale!(b, A)`

Scale an array `A` by a scalar `b`, similar to `scale()` but overwriting `A` in-place.

If `A` is a matrix and `b` is a vector, then `scale!(A, b)` scales each column `i` of `A` by `b[i]` (similar to `A*diagm(b)`), while `scale!(b, A)` scales each row `i` of `A` by `b[i]` (similar to `diagm(b)*A`), again operating in-place on `A`.

`Tridiagonal(dl, d, du)`

Construct a tridiagonal matrix from the lower diagonal, diagonal, and upper diagonal, respectively. The result is of type `Tridiagonal` and provides efficient specialized linear solvers, but may be converted into a regular matrix with `full()`.

`Bidiagonal(dv, ev, isupper)`

Constructs an upper (`isupper=true`) or lower (`isupper=false`) bidiagonal matrix using the given diagonal (`dv`) and off-diagonal (`ev`) vectors. The result is of type `Bidiagonal` and provides efficient specialized linear solvers, but may be converted into a regular matrix with `full()`.

`SymTridiagonal(d, du)`

Construct a real symmetric tridiagonal matrix from the diagonal and upper diagonal, respectively. The result is of type `SymTridiagonal` and provides efficient specialized eigensolvers, but may be converted into a regular matrix with `full()`.

`rank(M)`

Compute the rank of a matrix.
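A small sketch showing how the three argument vectors of `Tridiagonal` map onto the dense form (values are arbitrary):

```julia
T = Tridiagonal([1.0, 2.0], [4.0, 5.0, 6.0], [3.0, 7.0])
# full() converts the specialized type to a regular dense matrix
M = full(T)
```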

`norm(A[, p])`

Compute the `p`-norm of a vector or the operator norm of a matrix `A`, defaulting to the `p=2`-norm.

For vectors, `p` can assume any numeric value (even though not all values produce a mathematically valid vector norm). In particular, `norm(A, Inf)` returns the largest value in `abs(A)`, whereas `norm(A, -Inf)` returns the smallest.

For matrices, the matrix norm induced by the vector `p`-norm is used, where valid values of `p` are `1`, `2`, or `Inf`. (Note that for sparse matrices, `p=2` is currently not implemented.) Use `vecnorm()` to compute the Frobenius norm.

`vecnorm(A[, p])`

For any iterable container `A` (including arrays of any dimension) of numbers (or any element type for which `norm` is defined), compute the `p`-norm (defaulting to `p=2`) as if `A` were a vector of the corresponding length.

For example, if `A` is a matrix and `p=2`, then this is equivalent to the Frobenius norm.
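A few concrete cases of the norms described above (values are arbitrary):

```julia
v = [3.0, -4.0]
nv = norm(v)            # Euclidean norm: 5.0
ni = norm(v, Inf)       # largest absolute value: 4.0
A = [1.0 2.0; 3.0 4.0]
n1 = norm(A, 1)         # induced 1-norm: maximum absolute column sum
nf = vecnorm(A)         # Frobenius norm: sqrt(1 + 4 + 9 + 16)
```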

`cond(M[, p])`

Condition number of the matrix `M`, computed using the operator `p`-norm. Valid values for `p` are `1`, `2` (default), or `Inf`.

`condskeel(M[, x, p])`

\[\begin{split}\kappa_S(M, p) & = \left\Vert \left\vert M \right\vert \left\vert M^{-1} \right\vert \right\Vert_p \\ \kappa_S(M, x, p) & = \left\Vert \left\vert M \right\vert \left\vert M^{-1} \right\vert \left\vert x \right\vert \right\Vert_p\end{split}\]

Skeel condition number \(\kappa_S\) of the matrix `M`, optionally with respect to the vector `x`, as computed using the operator `p`-norm. `p` is `Inf` by default, if not provided. Valid values for `p` are `1`, `2`, or `Inf`.

This quantity is also known in the literature as the Bauer condition number, relative condition number, or componentwise relative condition number.

`trace(M)`

Matrix trace.

`det(M)`

Matrix determinant.

`logdet(M)`

Log of matrix determinant. Equivalent to `log(det(M))`, but may provide increased accuracy and/or speed.

`logabsdet(M)`

Log of the absolute value of the determinant of a real matrix. Equivalent to `(log(abs(det(M))), sign(det(M)))`, but may provide increased accuracy and/or speed.

`inv(M)`

Matrix inverse.

`pinv(M[, tol])`

Computes the Moore-Penrose pseudoinverse.

For matrices `M` with floating point elements, it is convenient to compute the pseudoinverse by inverting only singular values above a given threshold, `tol`.

The optimal choice of `tol` varies both with the value of `M` and the intended application of the pseudoinverse. The default value of `tol` is `eps(real(float(one(eltype(M)))))*maximum(size(M))`, which is essentially machine epsilon for the real part of a matrix element multiplied by the larger matrix dimension. For inverting dense ill-conditioned matrices in a least-squares sense, `tol = sqrt(eps(real(float(one(eltype(M))))))` is recommended.

For more information, see [issue8859], [B96], [S84], [KY88].

[issue8859] Issue 8859, "Fix least squares", https://github.com/JuliaLang/julia/pull/8859

[B96] Åke Björck, "Numerical Methods for Least Squares Problems", SIAM Press, Philadelphia, 1996, "Other Titles in Applied Mathematics", Vol. 51. doi:10.1137/1.9781611971484

[S84] G. W. Stewart, "Rank Degeneracy", SIAM Journal on Scientific and Statistical Computing, 5(2), 1984, 403-413. doi:10.1137/0905030

[KY88] Konstantinos Konstantinides and Kung Yao, "Statistical analysis of effective singular values in matrix rank determination", IEEE Transactions on Acoustics, Speech and Signal Processing, 36(5), 1988, 757-763. doi:10.1109/29.1585

`nullspace(M)`

Basis for the null space of `M`.

`repmat(A, n, m)`

Construct a matrix by repeating the given matrix `n` times in dimension 1 and `m` times in dimension 2.

`repeat(A, inner = Int[], outer = Int[])`

Construct an array by repeating the entries of `A`. The i-th element of `inner` specifies the number of times that the individual entries of the i-th dimension of `A` should be repeated. The i-th element of `outer` specifies the number of times that a slice along the i-th dimension of `A` should be repeated.

`kron(A, B)`

Kronecker tensor product of two vectors or two matrices.

`blkdiag(A...)`

Concatenate matrices block-diagonally. Currently only implemented for sparse matrices.
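A small sketch of the Kronecker product: each entry `A[i,j]` becomes the block `A[i,j]*B` (values are arbitrary):

```julia
A = [1 2; 3 4]
B = [0 1; 1 0]
K = kron(A, B)   # 4x4 block matrix: block (i,j) is A[i,j]*B
```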

`linreg(x, y) → a, b`

Perform linear regression. Returns `a` and `b` such that `a+b*x` is the closest straight line to the given points `(x, y)`, i.e., such that the squared error between `y` and `a+b*x` is minimized.

**Example**:

    using PyPlot
    x = [1.0:12.0;]
    y = [5.5, 6.3, 7.6, 8.8, 10.9, 11.79, 13.48, 15.02, 17.77, 20.81, 22.0, 22.99]
    a, b = linreg(x, y)          # Linear regression
    plot(x, y, "o")              # Plot (x, y) points
    plot(x, [a+b*i for i in x])  # Plot line determined by linear regression

`linreg(x, y, w)`

Weighted least-squares linear regression.

`expm`

(*A*)¶Compute the matrix exponential of

`A`

, defined by\[e^A = \sum_{n=0}^{\infty} \frac{A^n}{n!}.\]For symmetric or Hermitian

`A`

, an eigendecomposition (`eigfact()`

) is used, otherwise the scaling and squaring algorithm (see [H05]) is chosen.[H05] Nicholas J. Higham, “The squaring and scaling method for the matrix exponential revisited”, SIAM Journal on Matrix Analysis and Applications, 26(4), 2005, 1179-1193. doi:10.1137/090768539

`logm`

(*A*)¶If

`A`

has no negative real eigenvalue, compute the principal matrix logarithm of`A`

, i.e. the unique matrix \(X\) such that \(e^X = A\) and \(-\pi < Im(\lambda) < \pi\) for all the eigenvalues \(\lambda\) of \(X\). If`A`

has nonpositive eigenvalues, a warning is printed and whenever possible a nonprincipal matrix function is returned.If

`A`

is symmetric or Hermitian, its eigendecomposition (`eigfact()`

) is used, if`A`

is triangular an improved version of the inverse scaling and squaring method is employed (see [AH12] and [AHR13]). For general matrices, the complex Schur form (`schur()`

) is computed and the triangular algorithm is used on the triangular factor.

[AH12] Awad H. Al-Mohy and Nicholas J. Higham, “Improved inverse scaling and squaring algorithms for the matrix logarithm”, SIAM Journal on Scientific Computing, 34(4), 2012, C153-C169. doi:10.1137/110852553

[AHR13] Awad H. Al-Mohy, Nicholas J. Higham and Samuel D. Relton, “Computing the Fréchet derivative of the matrix logarithm and estimating the condition number”, SIAM Journal on Scientific Computing, 35(4), 2013, C394-C410. doi:10.1137/120885991

`sqrtm`

(*A*)¶If

`A`

has no negative real eigenvalues, compute the principal matrix square root of`A`

, that is the unique matrix \(X\) with eigenvalues having positive real part such that \(X^2 = A\). Otherwise, a nonprincipal square root is returned.If

`A`

is symmetric or Hermitian, its eigendecomposition (`eigfact()`

) is used to compute the square root. Otherwise, the square root is determined by means of the Björck-Hammarling method, which computes the complex Schur form (`schur()`

) and then the complex square root of the triangular factor.

[BH83] Åke Björck and Sven Hammarling, “A Schur method for the square root of a matrix”, Linear Algebra and its Applications, 52-53, 1983, 127-140. doi:10.1016/0024-3795(83)80010-X

`lyap`

(*A*,*C*)¶Computes the solution

`X`

to the continuous Lyapunov equation`AX+XA'+C=0`

, where no eigenvalue of`A`

has a zero real part and no two eigenvalues are negative complex conjugates of each other.
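A minimal sketch with a diagonal stable `A`, where the solution is easy to verify by hand. (On newer Julia versions `lyap` lives in the `LinearAlgebra` standard library.)

```julia
using LinearAlgebra   # needed on Julia >= 0.7; not needed on the version documented here

A = [-1.0 0.0; 0.0 -2.0]   # eigenvalues have nonzero real part
C = [ 2.0 0.0; 0.0  4.0]

X = lyap(A, C)
# X satisfies A*X + X*A' + C == 0 (up to floating-point error);
# here X is the 2x2 identity matrix.
```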

`sylvester`

(*A*,*B*,*C*)¶Computes the solution

`X`

to the Sylvester equation`AX+XB+C=0`

, where`A`

,`B`

and`C`

have compatible dimensions and`A`

and`-B`

have no eigenvalues with equal real part.
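A minimal sketch with diagonal matrices, chosen so the solution is the identity. (On newer Julia versions `sylvester` lives in the `LinearAlgebra` standard library.)

```julia
using LinearAlgebra   # needed on Julia >= 0.7; not needed on the version documented here

A = [ 1.0 0.0; 0.0  2.0]
B = [ 3.0 0.0; 0.0  4.0]
C = [-4.0 0.0; 0.0 -6.0]

X = sylvester(A, B, C)
# X satisfies A*X + X*B + C == 0; here X is the 2x2 identity matrix.
```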

`issym`

(*A*) → Bool¶Test whether a matrix is symmetric.

`isposdef`

(*A*) → Bool¶Test whether a matrix is positive definite.

`isposdef!`

(*A*) → Bool¶Test whether a matrix is positive definite, overwriting

`A`

in the process.

`istril`

(*A*) → Bool¶Test whether a matrix is lower triangular.

`istriu`

(*A*) → Bool¶Test whether a matrix is upper triangular.

`isdiag`

(*A*) → Bool¶Test whether a matrix is diagonal.

`ishermitian`

(*A*) → Bool¶Test whether a matrix is Hermitian.
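The structure predicates above can be sketched as follows. (On newer Julia versions these predicates live in the `LinearAlgebra` standard library.)

```julia
using LinearAlgebra   # needed on Julia >= 0.7; not needed on the version documented here

A = [2.0 1.0; 1.0 2.0]

isposdef(A)          # true: symmetric with positive eigenvalues (1 and 3)
ishermitian(A)       # true: real symmetric implies Hermitian
istriu([1 2; 0 3])   # true
istril([1 2; 0 3])   # false
isdiag([1 0; 0 2])   # true
```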

`transpose`

(*A*)¶The transposition operator (

`.'`

).

`transpose!`

(*dest*,*src*)¶Transpose array

`src`

and store the result in the preallocated array`dest`

, which should have a size corresponding to`(size(src,2),size(src,1))`

. No in-place transposition is supported and unexpected results will happen if`src`

and`dest`

have overlapping memory regions.
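A minimal sketch of the preallocated-destination pattern. (On newer Julia versions `transpose!` lives in the `LinearAlgebra` standard library.)

```julia
using LinearAlgebra   # needed on Julia >= 0.7; not needed on the version documented here

src  = [1 2 3;
        4 5 6]
dest = zeros(Int, 3, 2)   # must have size (size(src,2), size(src,1))

transpose!(dest, src)
# dest is now [1 4; 2 5; 3 6]; src is unchanged
```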

`ctranspose`

(*A*)¶The conjugate transposition operator (

`'`

).

`ctranspose!`

(*dest*,*src*)¶Conjugate transpose array

`src`

and store the result in the preallocated array`dest`

, which should have a size corresponding to`(size(src,2),size(src,1))`

. No in-place transposition is supported and unexpected results will happen if`src`

and`dest`

have overlapping memory regions.

`eigs`

(*A; nev=6, ncv=max(20,2*nev+1), which="LM", tol=0.0, maxiter=300, sigma=nothing, ritzvec=true, v0=zeros((0,))*) -> (*d*, [*v*,] *nconv*, *niter*, *nmult*, *resid*)¶Computes eigenvalues

`d`

of`A`

using Lanczos or Arnoldi iterations for real symmetric or general nonsymmetric matrices respectively.The following keyword arguments are supported:

`nev`

: Number of eigenvalues`ncv`

: Number of Krylov vectors used in the computation; should satisfy`nev+1<=ncv<=n`

for real symmetric problems and`nev+2<=ncv<=n`

for other problems, where`n`

is the size of the input matrix`A`

. The default is`ncv=max(20,2*nev+1)`

. Note that these restrictions limit the input matrix`A`

to be of dimension at least 2.`which`

: type of eigenvalues to compute. See the note below.`which`

type of eigenvalues `:LM`

eigenvalues of largest magnitude (default) `:SM`

eigenvalues of smallest magnitude `:LR`

eigenvalues of largest real part `:SR`

eigenvalues of smallest real part `:LI`

eigenvalues of largest imaginary part (nonsymmetric or complex `A`

only)`:SI`

eigenvalues of smallest imaginary part (nonsymmetric or complex `A`

only)`:BE`

compute half of the eigenvalues from each end of the spectrum, biased in favor of the high end. (real symmetric `A`

only)`tol`

: tolerance (\(tol \le 0.0\) defaults to`DLAMCH('EPS')`

)`maxiter`

: Maximum number of iterations (default = 300)`sigma`

: Specifies the level shift used in inverse iteration. If`nothing`

(default), defaults to ordinary (forward) iterations. Otherwise, find eigenvalues close to`sigma`

using shift and invert iterations.`ritzvec`

: Returns the Ritz vectors`v`

(eigenvectors) if`true`

`v0`

: starting vector from which to start the iterations

`eigs`

returns the`nev`

requested eigenvalues in`d`

, the corresponding Ritz vectors`v`

(only if`ritzvec=true`

), the number of converged eigenvalues`nconv`

, the number of iterations`niter`

and the number of matrix vector multiplications`nmult`

, as well as the final residual vector`resid`

.

**Note**

The

`sigma`

and`which`

keywords interact: the description of eigenvalues searched for by`which`

does _not_ necessarily refer to the eigenvalues of`A`

, but rather the linear operator constructed by the specification of the iteration mode implied by`sigma`

.`sigma`

iteration mode `which`

refers to eigenvalues of`nothing`

ordinary (forward) \(A\) real or complex inverse with level shift `sigma`

\((A - \sigma I )^{-1}\)

`eigs`

(*A*, *B; nev=6, ncv=max(20,2*nev+1), which="LM", tol=0.0, maxiter=300, sigma=nothing, ritzvec=true, v0=zeros((0,))*) -> (*d*, [*v*,] *nconv*, *niter*, *nmult*, *resid*)Computes generalized eigenvalues

`d`

of`A`

and`B`

using Lanczos or Arnoldi iterations for real symmetric or general nonsymmetric matrices respectively.The following keyword arguments are supported:

`nev`

: Number of eigenvalues`ncv`

: Number of Krylov vectors used in the computation; should satisfy`nev+1<=ncv<=n`

for real symmetric problems and`nev+2<=ncv<=n`

for other problems, where`n`

is the size of the input matrices`A`

and`B`

. The default is`ncv=max(20,2*nev+1)`

. Note that these restrictions limit the input matrix`A`

to be of dimension at least 2.`which`

: type of eigenvalues to compute. See the note below.`which`

type of eigenvalues `:LM`

eigenvalues of largest magnitude (default) `:SM`

eigenvalues of smallest magnitude `:LR`

eigenvalues of largest real part `:SR`

eigenvalues of smallest real part `:LI`

eigenvalues of largest imaginary part (nonsymmetric or complex `A`

only)`:SI`

eigenvalues of smallest imaginary part (nonsymmetric or complex `A`

only)`:BE`

compute half of the eigenvalues from each end of the spectrum, biased in favor of the high end. (real symmetric `A`

only)`tol`

: tolerance (\(tol \le 0.0\) defaults to`DLAMCH('EPS')`

)`maxiter`

: Maximum number of iterations (default = 300)`sigma`

: Specifies the level shift used in inverse iteration. If`nothing`

(default), defaults to ordinary (forward) iterations. Otherwise, find eigenvalues close to`sigma`

using shift and invert iterations.`ritzvec`

: Returns the Ritz vectors`v`

(eigenvectors) if`true`

`v0`

: starting vector from which to start the iterations

`eigs`

returns the`nev`

requested eigenvalues in`d`

, the corresponding Ritz vectors`v`

(only if`ritzvec=true`

), the number of converged eigenvalues`nconv`

, the number of iterations`niter`

and the number of matrix vector multiplications`nmult`

, as well as the final residual vector`resid`

.

**Note**

The

`sigma`

and`which`

keywords interact: the description of eigenvalues searched for by`which`

does _not_ necessarily refer to the eigenvalue problem \(Av = Bv\lambda\), but rather the linear operator constructed by the specification of the iteration mode implied by`sigma`

.`sigma`

iteration mode `which`

refers to the problem`nothing`

ordinary (forward) \(Av = Bv\lambda\) real or complex inverse with level shift `sigma`

\((A - \sigma B )^{-1}B v = v\nu\)

`svds`

(*A; nsv=6, ritzvec=true, tol=0.0, maxiter=1000*) -> (*left_sv*, *s*, *right_sv*, *nconv*, *niter*, *nmult*, *resid*)¶`svds`

computes largest singular values`s`

of`A`

using Lanczos or Arnoldi iterations. Uses`eigs()`

underneath.Inputs are:

`A`

: Linear operator. It can either be a subtype of`AbstractArray`

(e.g., sparse matrix) or duck typed. For duck typing`A`

has to support`size(A)`

,`eltype(A)`

,`A*vector`

and`A'*vector`

.`nsv`

: Number of singular values.`ritzvec`

: Whether to return the left and right singular vectors`left_sv`

and`right_sv`

, default is`true`

. If`false`

the singular vectors are omitted from the output.`tol`

: tolerance, see`eigs()`

.`maxiter`

: Maximum number of iterations, see`eigs()`

.

**Example**:

```julia
X = sprand(10, 5, 0.2)
svds(X, nsv=2)
```

`peakflops`

(*n; parallel=false*)¶`peakflops`

computes the peak flop rate of the computer by using double precision`Base.LinAlg.BLAS.gemm!()`

. By default, if no arguments are specified, it multiplies a matrix of size`nxn`

, where`n=2000`

. If the underlying BLAS is using multiple threads, higher flop rates are realized. The number of BLAS threads can be set with`blas_set_num_threads(n)`

.If the keyword argument

`parallel`

is set to`true`

,`peakflops`

is run in parallel on all the worker processors. The flop rate of the entire parallel computer is returned. When running in parallel, only 1 BLAS thread is used. The argument`n`

still refers to the size of the problem that is solved on each processor.

## BLAS Functions¶

`Base.LinAlg.BLAS`

provides wrappers for some of the BLAS functions for
linear algebra. Those BLAS functions that overwrite one of the input
arrays have names ending in `'!'`

.

Usually a function has 4 methods defined, one each for `Float64`

,
`Float32`

, `Complex128`

and `Complex64`

arrays.

`dot`

(*n*,*X*,*incx*,*Y*,*incy*)¶Dot product of two vectors consisting of

`n`

elements of array`X`

with stride`incx`

and`n`

elements of array`Y`

with stride`incy`

.

`dotu`

(*n*,*X*,*incx*,*Y*,*incy*)¶Dot function for two complex vectors.

`dotc`

(*n*,*X*,*incx*,*U*,*incy*)¶Dot function for two complex vectors conjugating the first vector.

`blascopy!`

(*n*,*X*,*incx*,*Y*,*incy*)¶Copy

`n`

elements of array`X`

with stride`incx`

to array`Y`

with stride`incy`

. Returns`Y`

.

`nrm2`

(*n*,*X*,*incx*)¶2-norm of a vector consisting of

`n`

elements of array`X`

with stride`incx`

.

`asum`

(*n*,*X*,*incx*)¶sum of the absolute values of the first

`n`

elements of array`X`

with stride`incx`

.

`axpy!`

(*a*,*X*,*Y*)¶Overwrite

`Y`

with`a*X+Y`

. Returns`Y`

.
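A minimal sketch. On the version documented here these wrappers live in `Base.LinAlg.BLAS`; the example below assumes the newer `LinearAlgebra.BLAS` module name.

```julia
using LinearAlgebra   # BLAS wrappers live under LinearAlgebra.BLAS on Julia >= 0.7

x = [1.0, 2.0, 3.0]
y = [10.0, 10.0, 10.0]

BLAS.axpy!(2.0, x, y)   # overwrites y with 2.0*x + y
# y is now [12.0, 14.0, 16.0]
```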

`scal!`

(*n*,*a*,*X*,*incx*)¶Overwrite

`X`

with`a*X`

. Returns`X`

.

`scal`

(*n*,*a*,*X*,*incx*)¶Returns

`a*X`

.

`ger!`

(*alpha*,*x*,*y*,*A*)¶Rank-1 update of the matrix

`A`

with vectors`x`

and`y`

as`alpha*x*y'+A`

.

`syr!`

(*uplo*,*alpha*,*x*,*A*)¶Rank-1 update of the symmetric matrix

`A`

with vector`x`

as`alpha*x*x.'+A`

. When`uplo`

is ‘U’ the upper triangle of`A`

is updated (‘L’ for lower triangle). Returns`A`

.

`syrk!`

(*uplo*,*trans*,*alpha*,*A*,*beta*,*C*)¶Rank-k update of the symmetric matrix

`C`

as`alpha*A*A.'+beta*C`

or`alpha*A.'*A+beta*C`

according to whether`trans`

is ‘N’ or ‘T’. When`uplo`

is ‘U’ the upper triangle of`C`

is updated (‘L’ for lower triangle). Returns`C`

.

`syrk`

(*uplo*,*trans*,*alpha*,*A*)¶Returns either the upper triangle or the lower triangle, according to

`uplo`

(‘U’ or ‘L’), of`alpha*A*A.'`

or`alpha*A.'*A`

, according to`trans`

(‘N’ or ‘T’).

`her!`

(*uplo*,*alpha*,*x*,*A*)¶Methods for complex arrays only. Rank-1 update of the Hermitian matrix

`A`

with vector`x`

as`alpha*x*x'+A`

. When`uplo`

is ‘U’ the upper triangle of`A`

is updated (‘L’ for lower triangle). Returns`A`

.

`herk!`

(*uplo*,*trans*,*alpha*,*A*,*beta*,*C*)¶Methods for complex arrays only. Rank-k update of the Hermitian matrix

`C`

as`alpha*A*A'+beta*C`

or`alpha*A'*A+beta*C`

according to whether`trans`

is ‘N’ or ‘T’. When`uplo`

is ‘U’ the upper triangle of`C`

is updated (‘L’ for lower triangle). Returns`C`

.

`herk`

(*uplo*,*trans*,*alpha*,*A*)¶Methods for complex arrays only. Returns either the upper triangle or the lower triangle, according to

`uplo`

(‘U’ or ‘L’), of`alpha*A*A'`

or`alpha*A'*A`

, according to`trans`

(‘N’ or ‘T’).

`gbmv!`

(*trans*,*m*,*kl*,*ku*,*alpha*,*A*,*x*,*beta*,*y*)¶Update vector

`y`

as`alpha*A*x+beta*y`

or`alpha*A'*x+beta*y`

according to`trans`

(‘N’ or ‘T’). The matrix`A`

is a general band matrix of dimension`m`

by`size(A,2)`

with`kl`

sub-diagonals and`ku`

super-diagonals. Returns the updated`y`

.

`gbmv`

(*trans*,*m*,*kl*,*ku*,*alpha*,*A*,*x*,*beta*,*y*)¶Returns

`alpha*A*x`

or`alpha*A'*x`

according to`trans`

(‘N’ or ‘T’). The matrix`A`

is a general band matrix of dimension`m`

by`size(A,2)`

with`kl`

sub-diagonals and`ku`

super-diagonals.

`sbmv!`

(*uplo*,*k*,*alpha*,*A*,*x*,*beta*,*y*)¶Update vector

`y`

as`alpha*A*x+beta*y`

where`A`

is a symmetric band matrix of order`size(A,2)`

with`k`

super-diagonals stored in the argument`A`

. The storage layout for`A`

is described in the reference BLAS module, level-2 BLAS at <http://www.netlib.org/lapack/explore-html/>.Returns the updated

`y`

.

`sbmv`

(*uplo*,*k*,*alpha*,*A*,*x*)¶Returns

`alpha*A*x`

where`A`

is a symmetric band matrix of order`size(A,2)`

with`k`

super-diagonals stored in the argument`A`

.

`sbmv`

(*uplo*,*k*,*A*,*x*)Returns

`A*x`

where`A`

is a symmetric band matrix of order`size(A,2)`

with`k`

super-diagonals stored in the argument`A`

.

`gemm!`

(*tA*,*tB*,*alpha*,*A*,*B*,*beta*,*C*)¶Update

`C`

as`alpha*A*B+beta*C`

or the other three variants according to`tA`

(transpose`A`

) and`tB`

. Returns the updated`C`

.
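A minimal sketch of the in-place update, using the newer `LinearAlgebra.BLAS` module name (on the version documented here it is `Base.LinAlg.BLAS`).

```julia
using LinearAlgebra

A = [1.0 2.0; 3.0 4.0]
B = [1.0 0.0; 0.0 1.0]
C = zeros(2, 2)

# C = 1.0*A*B + 0.0*C; 'N','N' means neither A nor B is transposed
BLAS.gemm!('N', 'N', 1.0, A, B, 0.0, C)
# C is now equal to A, since B is the identity
```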

`gemm`

(*tA*,*tB*,*alpha*,*A*,*B*)¶Returns

`alpha*A*B`

or the other three variants according to`tA`

(transpose`A`

) and`tB`

.

`gemm`

(*tA*,*tB*,*A*,*B*)Returns

`A*B`

or the other three variants according to`tA`

(transpose`A`

) and`tB`

.

`gemv!`

(*tA*,*alpha*,*A*,*x*,*beta*,*y*)¶Update the vector

`y`

as`alpha*A*x+beta*y`

or`alpha*A'x+beta*y`

according to`tA`

(transpose`A`

). Returns the updated`y`

.

`gemv`

(*tA*,*alpha*,*A*,*x*)¶Returns

`alpha*A*x`

or`alpha*A'x`

according to`tA`

(transpose`A`

).

`gemv`

(*tA*,*A*,*x*)Returns

`A*x`

or`A'x`

according to`tA`

(transpose`A`

).

`symm!`

(*side*,*ul*,*alpha*,*A*,*B*,*beta*,*C*)¶Update

`C`

as`alpha*A*B+beta*C`

or`alpha*B*A+beta*C`

according to`side`

.`A`

is assumed to be symmetric. Only the`ul`

triangle of`A`

is used. Returns the updated`C`

.

`symm`

(*side*,*ul*,*alpha*,*A*,*B*)¶Returns

`alpha*A*B`

or`alpha*B*A`

according to`side`

.`A`

is assumed to be symmetric. Only the`ul`

triangle of`A`

is used.

`symm`

(*side*,*ul*,*A*,*B*)Returns

`A*B`

or`B*A`

according to`side`

.`A`

is assumed to be symmetric. Only the`ul`

triangle of`A`

is used.

`symm`

(*tA*,*tB*,*alpha*,*A*,*B*)Returns

`alpha*A*B`

or the other three variants according to`tA`

(transpose`A`

) and`tB`

.

`symv!`

(*ul*,*alpha*,*A*,*x*,*beta*,*y*)¶Update the vector

`y`

as`alpha*A*x+beta*y`

.`A`

is assumed to be symmetric. Only the`ul`

triangle of`A`

is used. Returns the updated`y`

.

`symv`

(*ul*,*alpha*,*A*,*x*)¶Returns

`alpha*A*x`

.`A`

is assumed to be symmetric. Only the`ul`

triangle of`A`

is used.

`symv`

(*ul*,*A*,*x*)Returns

`A*x`

.`A`

is assumed to be symmetric. Only the`ul`

triangle of`A`

is used.

`trmm!`

(*side*,*ul*,*tA*,*dA*,*alpha*,*A*,*B*)¶Update

`B`

as`alpha*A*B`

or one of the other three variants determined by`side`

(`A`

on left or right) and`tA`

(transpose`A`

). Only the`ul`

triangle of`A`

is used.`dA`

indicates if`A`

is unit-triangular (the diagonal is assumed to be all ones). Returns the updated`B`

.

`trmm`

(*side*,*ul*,*tA*,*dA*,*alpha*,*A*,*B*)¶Returns

`alpha*A*B`

or one of the other three variants determined by`side`

(`A`

on left or right) and`tA`

(transpose`A`

). Only the`ul`

triangle of`A`

is used.`dA`

indicates if`A`

is unit-triangular (the diagonal is assumed to be all ones).

`trsm!`

(*side*,*ul*,*tA*,*dA*,*alpha*,*A*,*B*)¶Overwrite

`B`

with the solution to`A*X=alpha*B`

or one of the other three variants determined by`side`

(`A`

on left or right of`X`

) and`tA`

(transpose`A`

). Only the`ul`

triangle of`A`

is used.`dA`

indicates if`A`

is unit-triangular (the diagonal is assumed to be all ones). Returns the updated`B`

.

`trsm`

(*side*,*ul*,*tA*,*dA*,*alpha*,*A*,*B*)¶Returns the solution to

`A*X=alpha*B`

or one of the other three variants determined by`side`

(`A`

on left or right of`X`

) and`tA`

(transpose`A`

). Only the`ul`

triangle of`A`

is used.`dA`

indicates if`A`

is unit-triangular (the diagonal is assumed to be all ones).

`trmv!`

(*side*,*ul*,*tA*,*dA*,*alpha*,*A*,*b*)¶Update

`b`

as`alpha*A*b`

or one of the other three variants determined by`side`

(`A`

on left or right) and`tA`

(transpose`A`

). Only the`ul`

triangle of`A`

is used.`dA`

indicates if`A`

is unit-triangular (the diagonal is assumed to be all ones). Returns the updated`b`

.

`trmv`

(*side*,*ul*,*tA*,*dA*,*alpha*,*A*,*b*)¶Returns

`alpha*A*b`

or one of the other three variants determined by`side`

(`A`

on left or right) and`tA`

(transpose`A`

). Only the`ul`

triangle of`A`

is used.`dA`

indicates if`A`

is unit-triangular (the diagonal is assumed to be all ones).

`trsv!`

(*ul*,*tA*,*dA*,*A*,*b*)¶Overwrite

`b`

with the solution to`A*x=b`

or one of the other two variants determined by`tA`

(transpose`A`

) and`ul`

(triangle of`A`

used).`dA`

indicates if`A`

is unit-triangular (the diagonal is assumed to be all ones). Returns the updated`b`

.

`trsv`

(*ul*,*tA*,*dA*,*A*,*b*)¶Returns the solution to

`A*x=b`

or one of the other two variants determined by`tA`

(transpose`A`

) and`ul`

(triangle of`A`

used).`dA`

indicates if`A`

is unit-triangular (the diagonal is assumed to be all ones).
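A minimal sketch of a triangular solve, using the newer `LinearAlgebra.BLAS` module name (on the version documented here it is `Base.LinAlg.BLAS`).

```julia
using LinearAlgebra

A = [2.0 1.0;
     0.0 2.0]        # upper triangular
b = [4.0, 2.0]

# Solve A*x = b using the upper ('U') triangle, no transpose ('N'),
# non-unit diagonal ('N'); trsv returns the solution without modifying b:
x = BLAS.trsv('U', 'N', 'N', A, b)
# back substitution gives x == [1.5, 1.0]
```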

`blas_set_num_threads`

(*n*)¶Set the number of threads the BLAS library should use.

`I`

¶An object of type

`UniformScaling`

, representing an identity matrix of any size.

## LAPACK Functions¶

`Base.LinAlg.LAPACK`

provides wrappers for some of the LAPACK functions for
linear algebra. Those functions that overwrite one of the input
arrays have names ending in `'!'`

.

Usually a function has 4 methods defined, one each for `Float64`

,
`Float32`

, `Complex128`

and `Complex64`

arrays.

Note that the LAPACK API provided by Julia can and will change in the future. Since this API is not user-facing, there is no commitment to support/deprecate this specific set of functions in future releases.

`gbtrf!`

(*kl*,*ku*,*m*,*AB) -> (AB*,*ipiv*)¶Compute the LU factorization of a banded matrix

`AB`

.`kl`

is the first subdiagonal containing a nonzero band,`ku`

is the last superdiagonal containing one, and`m`

is the first dimension of the matrix`AB`

. Returns the LU factorization in-place and`ipiv`

, the vector of pivots used.

`gbtrs!`

(*trans*,*kl*,*ku*,*m*,*AB*,*ipiv*,*B*)¶Solve the equation

`AB*X=B`

.`trans`

determines the orientation of`AB`

. It may be`N`

(no transpose),`T`

(transpose), or`C`

(conjugate transpose).`kl`

is the first subdiagonal containing a nonzero band,`ku`

is the last superdiagonal containing one, and`m`

is the first dimension of the matrix`AB`

.`ipiv`

is the vector of pivots returned from`gbtrf!`

. Returns the vector or matrix`X`

, overwriting`B`

in-place.

`gebal!`

(*job*,*A) -> (ilo*,*ihi*,*scale*)¶Balance the matrix

`A`

before computing its eigensystem or Schur factorization.`job`

can be one of`N`

(`A`

will not be permuted or scaled),`P`

(`A`

will only be permuted),`S`

(`A`

will only be scaled), or`B`

(`A`

will be both permuted and scaled). Modifies`A`

in-place and returns`ilo`

,`ihi`

, and`scale`

. If permuting was turned on,`A[i,j]=0`

if`j>i`

and`1<j<ilo`

or`j>ihi`

.`scale`

contains information about the scaling/permutations performed.

`gebak!`

(*job*,*side*,*ilo*,*ihi*,*scale*,*V*)¶Transform the eigenvectors

`V`

of a matrix balanced using`gebal!`

to the unscaled/unpermuted eigenvectors of the original matrix. Modifies`V`

in-place.`side`

can be`L`

(left eigenvectors are transformed) or`R`

(right eigenvectors are transformed).

`gebrd!`

(*A) -> (A*,*d*,*e*,*tauq*,*taup*)¶Reduce

`A`

in-place to bidiagonal form`A=QBP'`

. Returns`A`

, containing the bidiagonal matrix`B`

;`d`

, containing the diagonal elements of`B`

;`e`

, containing the off-diagonal elements of`B`

;`tauq`

, containing the elementary reflectors representing`Q`

; and`taup`

, containing the elementary reflectors representing`P`

.

`gelqf!`

(*A*,*tau*)¶Compute the

`LQ`

factorization of`A`

,`A=LQ`

.`tau`

contains scalars which parameterize the elementary reflectors of the factorization.`tau`

must have length greater than or equal to the smallest dimension of`A`

.Returns

`A`

and`tau`

modified in-place.

`gelqf!`

(*A) -> (A*,*tau*)Compute the

`LQ`

factorization of`A`

,`A=LQ`

.Returns

`A`

, modified in-place, and`tau`

, which contains scalars which parameterize the elementary reflectors of the factorization.

`geqlf!`

(*A*,*tau*)¶Compute the

`QL`

factorization of`A`

,`A=QL`

.`tau`

contains scalars which parameterize the elementary reflectors of the factorization.`tau`

must have length greater than or equal to the smallest dimension of`A`

.Returns

`A`

and`tau`

modified in-place.

`geqlf!`

(*A) -> (A*,*tau*)Compute the

`QL`

factorization of`A`

,`A=QL`

.Returns

`A`

, modified in-place, and`tau`

, which contains scalars which parameterize the elementary reflectors of the factorization.

`geqrf!`

(*A*,*tau*)¶Compute the

`QR`

factorization of`A`

,`A=QR`

.`tau`

contains scalars which parameterize the elementary reflectors of the factorization.`tau`

must have length greater than or equal to the smallest dimension of`A`

.Returns

`A`

and`tau`

modified in-place.

`geqrf!`

(*A) -> (A*,*tau*)Compute the

`QR`

factorization of`A`

,`A=QR`

.Returns

`A`

, modified in-place, and`tau`

, which contains scalars which parameterize the elementary reflectors of the factorization.

`geqp3!`

(*A*,*jpvt*,*tau*)¶Compute the pivoted

`QR`

factorization of`A`

,`AP=QR`

using BLAS level 3.`P`

is a pivoting matrix, represented by`jpvt`

.`tau`

stores the elementary reflectors.`jpvt`

must have length greater than or equal to`n`

if`A`

is an`(mxn)`

matrix.`tau`

must have length greater than or equal to the smallest dimension of`A`

.`A`

,`jpvt`

, and`tau`

are modified in-place.

`geqp3!`

(*A*,*jpvt) -> (A*,*jpvt*,*tau*)Compute the pivoted

`QR`

factorization of`A`

,`AP=QR`

using BLAS level 3.`P`

is a pivoting matrix, represented by`jpvt`

.`jpvt`

must have length greater than or equal to`n`

if`A`

is an`(mxn)`

matrix.Returns

`A`

and`jpvt`

, modified in-place, and`tau`

, which stores the elementary reflectors.

`geqp3!`

(*A) -> (A*,*jpvt*,*tau*)Compute the pivoted

`QR`

factorization of`A`

,`AP=QR`

using BLAS level 3.Returns

`A`

, modified in-place,`jpvt`

, which represents the pivoting matrix`P`

, and`tau`

, which stores the elementary reflectors.

`gerqf!`

(*A*,*tau*)¶Compute the

`RQ`

factorization of`A`

,`A=RQ`

.`tau`

contains scalars which parameterize the elementary reflectors of the factorization.`tau`

must have length greater than or equal to the smallest dimension of`A`

.Returns

`A`

and`tau`

modified in-place.

`gerqf!`

(*A) -> (A*,*tau*)Compute the

`RQ`

factorization of`A`

,`A=RQ`

.Returns`A`

, modified in-place, and`tau`

, which contains scalars which parameterize the elementary reflectors of the factorization.

`geqrt!`

(*A*,*T*)¶Compute the blocked

`QR`

factorization of`A`

,`A=QR`

.`T`

contains upper triangular block reflectors which parameterize the elementary reflectors of the factorization. The first dimension of`T`

sets the block size and it must be between 1 and`n`

. The second dimension of`T`

must equal the smallest dimension of`A`

.Returns

`A`

and`T`

modified in-place.

`geqrt!`

(*A*,*nb) -> (A*,*T*)Compute the blocked

`QR`

factorization of`A`

,`A=QR`

.`nb`

sets the block size and it must be between 1 and`n`

, the second dimension of`A`

.Returns

`A`

, modified in-place, and`T`

, which contains upper triangular block reflectors which parameterize the elementary reflectors of the factorization.

`geqrt3!`

(*A*,*T*)¶Recursively computes the blocked

`QR`

factorization of`A`

,`A=QR`

.`T`

contains upper triangular block reflectors which parameterize the elementary reflectors of the factorization. The first dimension of`T`

sets the block size and it must be between 1 and`n`

. The second dimension of`T`

must equal the smallest dimension of`A`

.Returns

`A`

and`T`

modified in-place.

`geqrt3!`

(*A) -> (A*,*T*)Recursively computes the blocked

`QR`

factorization of`A`

,`A=QR`

.Returns

`A`

, modified in-place, and`T`

, which contains upper triangular block reflectors which parameterize the elementary reflectors of the factorization.

`getrf!`

(*A) -> (A*,*ipiv*,*info*)¶Compute the pivoted

`LU`

factorization of`A`

,`A=LU`

.Returns

`A`

, modified in-place,`ipiv`

, the pivoting information, and an`info`

code which indicates success (`info=0`

), a singular value in`U`

(`info=i`

, in which case`U[i,i]`

is singular), or an error code (`info<0`

).

`tzrzf!`

(*A) -> (A*,*tau*)¶Transforms the upper trapezoidal matrix

`A`

to upper triangular form in-place. Returns`A`

and`tau`

, the scalar parameters for the elementary reflectors of the transformation.

`ormrz!`

(*side*,*trans*,*A*,*tau*,*C*)¶Multiplies the matrix

`C`

by`Q`

from the transformation supplied by`tzrzf!`

. Depending on`side`

or`trans`

the multiplication can be left-sided (`side=L,Q*C`

) or right-sided (`side=R,C*Q`

) and`Q`

can be unmodified (`trans=N`

), transposed (`trans=T`

), or conjugate transposed (`trans=C`

). Returns matrix`C`

which is modified in-place with the result of the multiplication.

`gels!`

(*trans*,*A*,*B) -> (F*,*B*,*ssr*)¶Solves the linear equation

`A*X=B`

,`A.'*X=B`

, or`A'*X=B`

using a QR or LQ factorization. Modifies the matrix/vector`B`

in place with the solution.`A`

is overwritten with its`QR`

or`LQ`

factorization.`trans`

may be one of`N`

(no modification),`T`

(transpose), or`C`

(conjugate transpose).`gels!`

searches for the minimum norm/least squares solution.`A`

may be under or over determined. The solution is returned in`B`

.

`gesv!`

(*A*,*B) -> (B*,*A*,*ipiv*)¶Solves the linear equation

`A*X=B`

where`A`

is a square matrix using the`LU`

factorization of`A`

.`A`

is overwritten with its`LU`

factorization and`B`

is overwritten with the solution`X`

.`ipiv`

contains the pivoting information for the`LU`

factorization of`A`

.
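A minimal sketch, using the newer `LinearAlgebra.LAPACK` module name (on the version documented here it is `Base.LinAlg.LAPACK`).

```julia
using LinearAlgebra

A = [4.0 3.0; 6.0 3.0]
B = [7.0, 9.0]
A0, B0 = copy(A), copy(B)   # gesv! overwrites both arguments

B, A, ipiv = LAPACK.gesv!(A, B)
# B now holds the solution X of A0*X == B0; here X == [1.0, 1.0]
```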

`getrs!`

(*trans*,*A*,*ipiv*,*B*)¶Solves the linear equation

`A*X=B`

,`A.'*X=B`

, or`A'*X=B`

for square`A`

. Modifies the matrix/vector`B`

in place with the solution.`A`

is the`LU`

factorization from`getrf!`

, with`ipiv`

the pivoting information.`trans`

may be one of`N`

(no modification),`T`

(transpose), or`C`

(conjugate transpose).

`getri!`

(*A*,*ipiv*)¶Computes the inverse of

`A`

, using its`LU`

factorization found by`getrf!`

.`ipiv`

is the pivot information output and`A`

contains the`LU`

factorization of`getrf!`

.`A`

is overwritten with its inverse.

`gesvx!`

(*fact*,*trans*,*A*,*AF*,*ipiv*,*equed*,*R*,*C*,*B) -> (X*,*equed*,*R*,*C*,*B*,*rcond*,*ferr*,*berr*,*work*)¶Solves the linear equation

`A*X=B`

(`trans=N`

),`A.'*X=B`

(`trans=T`

), or`A'*X=B`

(`trans=C`

) using the`LU`

factorization of`A`

.`fact`

may be`E`

, in which case`A`

will be equilibrated and copied to`AF`

;`F`

, in which case`AF`

and`ipiv`

from a previous`LU`

factorization are inputs; or`N`

, in which case`A`

will be copied to`AF`

and then factored. If`fact=F`

,`equed`

may be`N`

, meaning`A`

has not been equilibrated;`R`

, meaning`A`

was multiplied by`diagm(R)`

from the left;`C`

, meaning`A`

was multiplied by`diagm(C)`

from the right; or`B`

, meaning`A`

was multiplied by`diagm(R)`

from the left and`diagm(C)`

from the right. If`fact=F`

and`equed=R`

or`B`

the elements of`R`

must all be positive. If`fact=F`

and`equed=C`

or`B`

the elements of`C`

must all be positive.Returns the solution

`X`

;`equed`

, which is an output if`fact`

is not`N`

, and describes the equilibration that was performed;`R`

, the row equilibration diagonal;`C`

, the column equilibration diagonal;`B`

, which may be overwritten with its equilibrated form`diagm(R)*B`

(if`trans=N`

and`equed=R,B`

) or`diagm(C)*B`

(if`trans=T,C`

and`equed=C,B`

);`rcond`

, the reciprocal condition number of`A`

after equilbrating;`ferr`

, the forward error bound for each solution vector in`X`

;`berr`

, the componentwise relative backward error for each solution vector in`X`

; and`work`

, the reciprocal pivot growth factor.

`gesvx!`

(*A*,*B*)The no-equilibration, no-transpose simplification of

`gesvx!`

.

`gelsd!`

(*A*,*B*,*rcond) -> (B*,*rnk*)¶Computes the least norm solution of

`A*X=B`

by finding the`SVD`

factorization of`A`

, then dividing-and-conquering the problem.`B`

is overwritten with the solution`X`

. Singular values below`rcond`

will be treated as zero. Returns the solution in`B`

and the effective rank of`A`

in`rnk`

.

`gelsy!`

(*A*,*B*,*rcond) -> (B*,*rnk*)¶Computes the least norm solution of

`A*X=B`

by finding the full`QR`

factorization of`A`

, then dividing-and-conquering the problem.`B`

is overwritten with the solution`X`

. Singular values below`rcond`

will be treated as zero. Returns the solution in`B`

and the effective rank of`A`

in`rnk`

.

`gglse!`

(*A*,*c*,*B*,*d) -> (X*,*res*)¶Solves the equation

`A*x=c`

where`x`

is subject to the equality constraint`B*x=d`

. Uses the formula`||c-A*x||^2=0`

to solve. Returns`X`

and the residual sum-of-squares.

`geev!`

(*jobvl*,*jobvr*,*A) -> (W*,*VL*,*VR*)¶Finds the eigensystem of

`A`

. If`jobvl=N`

, the left eigenvectors of`A`

aren’t computed. If`jobvr=N`

, the right eigenvectors of`A`

aren’t computed. If`jobvl=V`

or`jobvr=V`

, the corresponding eigenvectors are computed. Returns the eigenvalues in`W`

, the right eigenvectors in`VR`

, and the left eigenvectors in`VL`

.

`gesdd!`

(*job*,*A) -> (U*,*S*,*VT*)¶Finds the singular value decomposition of

`A`

,`A=U*S*V'`

, using a divide and conquer approach. If`job=A`

, all the columns of`U`

and the rows of`V'`

are computed. If`job=N`

, no columns of`U`

or rows of`V'`

are computed. If`job=O`

,`A`

is overwritten with the columns of (thin)`U`

and the rows of (thin)`V'`

. If`job=S`

, the columns of (thin)`U`

and the rows of (thin)`V'`

are computed and returned separately.

`gesvd!`

(*jobu*,*jobvt*,*A) -> (U*,*S*,*VT*)¶Finds the singular value decomposition of

`A`

,`A=U*S*V'`

. If`jobu=A`

, all the columns of`U`

are computed. If`jobvt=A`

all the rows of`V'`

are computed. If`jobu=N`

, no columns of`U`

are computed. If`jobvt=N`

no rows of`V'`

are computed. If`jobu=O`

,`A`

is overwritten with the columns of (thin)`U`

. If`jobvt=O`

,`A`

is overwritten with the rows of (thin)`V'`

. If`jobu=S`

, the columns of (thin)`U`

are computed and returned separately. If`jobvt=S`

the rows of (thin)`V'`

are computed and returned separately.`jobu`

and`jobvt`

can’t both be`O`

.Returns

`U`

,`S`

, and`Vt`

, where`S`

are the singular values of`A`

.
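A minimal sketch on a diagonal matrix, whose singular values can be read off directly. The example uses the newer `LinearAlgebra.LAPACK` module name (on the version documented here it is `Base.LinAlg.LAPACK`).

```julia
using LinearAlgebra

A = [3.0 0.0; 0.0 1.0]

# 'A','A': compute all columns of U and all rows of V'
U, S, Vt = LAPACK.gesvd!('A', 'A', copy(A))   # gesvd! destroys its input

# S == [3.0, 1.0], and U*Diagonal(S)*Vt reconstructs A
```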

`ggsvd!`

(*jobu*,*jobv*,*jobq*,*A*,*B) -> (U*,*V*,*Q*,*alpha*,*beta*,*k*,*l*,*R*)¶Finds the generalized singular value decomposition of

`A`

and`B`

,`U'*A*Q=D1*R`

and`V'*B*Q=D2*R`

.`D1`

has`alpha`

on its diagonal and`D2`

has`beta`

on its diagonal. If`jobu=U`

, the orthogonal/unitary matrix`U`

is computed. If`jobv=V`

the orthogonal/unitary matrix`V`

is computed. If`jobq=Q`

, the orthogonal/unitary matrix`Q`

is computed. If`jobu`

,`jobv`

or`jobq`

is`N`

, that matrix is not computed. This function is only available in LAPACK versions prior to 3.6.0.

`ggsvd3!`

(*jobu*,*jobv*,*jobq*,*A*,*B) -> (U*,*V*,*Q*,*alpha*,*beta*,*k*,*l*,*R*)¶Finds the generalized singular value decomposition of

`A`

and`B`

,`U'*A*Q=D1*R`

and`V'*B*Q=D2*R`

.`D1`

has`alpha`

on its diagonal and`D2`

has`beta`

on its diagonal. If`jobu=U`

, the orthogonal/unitary matrix`U`

is computed. If`jobv=V`

the orthogonal/unitary matrix`V`

is computed. If`jobq=Q`

, the orthogonal/unitary matrix`Q`

is computed. If`jobu`

,`jobv`

, or`jobq`

is`N`

, that matrix is not computed. This function requires LAPACK 3.6.0.

`geevx!`

(*balanc*,*jobvl*,*jobvr*,*sense*,*A) -> (A*,*w*,*VL*,*VR*,*ilo*,*ihi*,*scale*,*abnrm*,*rconde*,*rcondv*)¶Finds the eigensystem of

`A`

with matrix balancing. If`jobvl=N`

, the left eigenvectors of`A`

aren’t computed. If`jobvr=N`

, the right eigenvectors of`A`

aren’t computed. If`jobvl=V`

or`jobvr=V`

, the corresponding eigenvectors are computed. If`balanc=N`

, no balancing is performed. If`balanc=P`

,`A`

is permuted but not scaled. If`balanc=S`

,`A`

is scaled but not permuted. If`balanc=B`

,`A`

is permuted and scaled. If`sense=N`

, no reciprocal condition numbers are computed. If`sense=E`

, reciprocal condition numbers are computed for the eigenvalues only. If`sense=V`

, reciprocal condition numbers are computed for the right eigenvectors only. If`sense=B`

, reciprocal condition numbers are computed for the right eigenvectors and the eigenvalues. If`sense=E`or`sense=B`

, the right and left eigenvectors must be computed.

`ggev!`

(*jobvl*,*jobvr*,*A*,*B) -> (alpha*,*beta*,*vl*,*vr*)¶Finds the generalized eigendecomposition of

`A`

and`B`

. If`jobvl=N`

, the left eigenvectors aren’t computed. If`jobvr=N`

, the right eigenvectors aren’t computed. If`jobvl=V`

or`jobvr=V`

, the corresponding eigenvectors are computed.

`gtsv!`

(*dl*,*d*,*du*,*B*)¶Solves the equation

`A*X=B`

where`A`

is a tridiagonal matrix with`dl`

on the subdiagonal,`d`

on the diagonal, and`du`

on the superdiagonal. Overwrites

`B`

with the solution`X`

and returns it.
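A short sketch of solving a 2×2 tridiagonal system with `gtsv!` (hypothetical data; assumes the wrappers are reachable as `LinearAlgebra.LAPACK`):

```julia
using LinearAlgebra

dl = [1.0]         # subdiagonal
d  = [4.0, 5.0]    # diagonal
du = [2.0]         # superdiagonal
b  = [1.0, 2.0]    # right-hand side, overwritten with the solution
x = LAPACK.gtsv!(dl, d, du, b)
# gtsv! also destroys dl, d, du, so verify against a fresh Tridiagonal:
@assert Tridiagonal([1.0], [4.0, 5.0], [2.0]) * x ≈ [1.0, 2.0]
```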

`gttrf!`

(*dl*,*d*,*du) -> (dl*,*d*,*du*,*du2*,*ipiv*)¶Finds the

`LU`

factorization of a tridiagonal matrix with`dl`

on the subdiagonal,`d`

on the diagonal, and`du`

on the superdiagonal. Modifies

`dl`

,`d`

, and`du`

in-place and returns them, along with the second superdiagonal`du2`

and the pivoting vector`ipiv`

.

`gttrs!`

(*trans*,*dl*,*d*,*du*,*du2*,*ipiv*,*B*)¶Solves the equation

`A*X=B`

(`trans=N`

),`A.'*X=B`

(`trans=T`

), or`A'*X=B`

(`trans=C`

) using the`LU`

factorization computed by`gttrf!`

.`B`

is overwritten with the solution`X`

.

`orglq!`

(*A*,*tau*,*k = length(tau)*)¶Explicitly finds the matrix

`Q`

of an`LQ`

factorization after calling`gelqf!`

on`A`

. Uses the output of`gelqf!`

.`A`

is overwritten by`Q`

.

`orgqr!`

(*A*,*tau*,*k = length(tau)*)¶Explicitly finds the matrix

`Q`

of a`QR`

factorization after calling`geqrf!`

on`A`

. Uses the output of`geqrf!`

.`A`

is overwritten by`Q`

.
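A sketch of recovering the thin `Q` and `R` of a QR factorization via `geqrf!` and `orgqr!` (illustrative data; assumes a modern `LinearAlgebra.LAPACK` namespace):

```julia
using LinearAlgebra

A = [2.0 1.0; 1.0 3.0; 0.0 1.0]
F, tau = LAPACK.geqrf!(copy(A))   # R above the diagonal, reflectors below
R = triu(F[1:2, 1:2])
Q = LAPACK.orgqr!(F, tau)         # F is overwritten with the thin Q (3×2)
@assert Q * R ≈ A                 # QR reconstruction
```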

`ormlq!`

(*side*,*trans*,*A*,*tau*,*C*)¶Computes

`Q*C`

(`trans=N`

),`Q.'*C`

(`trans=T`

),`Q'*C`

(`trans=C`

) for`side=L`

or the equivalent right-sided multiplication for`side=R`

using`Q`

from an`LQ`

factorization of`A`

computed using`gelqf!`

.`C`

is overwritten.

`ormqr!`

(*side*,*trans*,*A*,*tau*,*C*)¶Computes

`Q*C`

(`trans=N`

),`Q.'*C`

(`trans=T`

),`Q'*C`

(`trans=C`

) for`side=L`

or the equivalent right-sided multiplication for`side=R`

using`Q`

from a`QR`

factorization of`A`

computed using`geqrf!`

.`C`

is overwritten.

`gemqrt!`

(*side*,*trans*,*V*,*T*,*C*)¶Computes

`Q*C`

(`trans=N`

),`Q.'*C`

(`trans=T`

),`Q'*C`

(`trans=C`

) for`side=L`

or the equivalent right-sided multiplication for`side=R`

using`Q`

from a`QR`

factorization of`A`

computed using`geqrt!`

.`C`

is overwritten.

`posv!`

(*uplo*,*A*,*B) -> (A*,*B*)¶Finds the solution to

`A*X=B`

where`A`

is a symmetric or Hermitian positive definite matrix. If`uplo=U`

the upper Cholesky decomposition of`A`

is computed. If`uplo=L`

the lower Cholesky decomposition of`A`

is computed.`A`

is overwritten by its Cholesky decomposition.`B`

is overwritten with the solution`X`

.
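A minimal `posv!` example on a small symmetric positive definite system (made-up data; assumes the `LinearAlgebra.LAPACK` wrappers):

```julia
using LinearAlgebra

A = [4.0 1.0; 1.0 3.0]    # symmetric positive definite
Acopy = copy(A)
b = [1.0, 2.0]
LAPACK.posv!('U', A, b)   # A ← its upper Cholesky factor, b ← solution
@assert Acopy * b ≈ [1.0, 2.0]
```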

`potrf!`

(*uplo*,*A*)¶Computes the Cholesky (upper if

`uplo=U`

, lower if`uplo=L`

) decomposition of positive-definite matrix`A`

.`A`

is overwritten and returned with an info code.

`potri!`

(*uplo*,*A*)¶Computes the inverse of positive-definite matrix

`A`

after calling`potrf!`

to find its (upper if`uplo=U`

, lower if`uplo=L`

) Cholesky decomposition.`A`

is overwritten by its inverse and returned.

`potrs!`

(*uplo*,*A*,*B*)¶Finds the solution to

`A*X=B`

where`A`

is a symmetric or Hermitian positive definite matrix whose Cholesky decomposition was computed by`potrf!`

. If`uplo=U`

the upper Cholesky decomposition of`A`

was computed. If`uplo=L`

the lower Cholesky decomposition of`A`

was computed.`B`

is overwritten with the solution`X`

.
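The two-step pattern — factor once with `potrf!`, then reuse the factor with `potrs!` for several right-hand sides — can be sketched as follows (hypothetical data; assumes `LinearAlgebra.LAPACK`):

```julia
using LinearAlgebra

A = [4.0 2.0; 2.0 3.0]                  # symmetric positive definite
C, info = LAPACK.potrf!('U', copy(A))   # C holds the upper Cholesky factor
@assert info == 0                       # info == 0 signals success
b1 = [2.0, 1.0]
b2 = [0.0, 1.0]
LAPACK.potrs!('U', C, b1)   # reuse the same factor for each right-hand side
LAPACK.potrs!('U', C, b2)
```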

`pstrf!`

(*uplo*,*A*,*tol) -> (A*,*piv*,*rank*,*info*)¶Computes the (upper if

`uplo=U`

, lower if`uplo=L`

) pivoted Cholesky decomposition of positive-definite matrix`A`

with a user-set tolerance`tol`

.`A`

is overwritten by its Cholesky decomposition. Returns

`A`

, the pivots`piv`

, the rank of`A`

, and an`info`

code. If`info=0`

, the factorization succeeded. If`info=i>0`, then`A`

is indefinite or rank-deficient.

`ptsv!`

(*D*,*E*,*B*)¶Solves

`A*X=B`

for positive-definite tridiagonal`A`

.`D`

is the diagonal of`A`

and`E`

is the off-diagonal.`B`

is overwritten with the solution`X`

and returned.

`pttrf!`

(*D*,*E*)¶Computes the LDLt factorization of a positive-definite tridiagonal matrix with

`D`

as diagonal and`E`

as off-diagonal.`D`

and`E`

are overwritten and returned.

`pttrs!`

(*D*,*E*,*B*)¶Solves

`A*X=B`

for positive-definite tridiagonal`A`

with diagonal`D`

and off-diagonal`E`

after computing`A`

‘s LDLt factorization using`pttrf!`

.`B`

is overwritten with the solution`X`

.

`trtri!`

(*uplo*,*diag*,*A*)¶Finds the inverse of (upper if

`uplo=U`

, lower if`uplo=L`

) triangular matrix`A`

. If`diag=N`

,`A`

has non-unit diagonal elements. If`diag=U`

, all diagonal elements of`A`

are one.`A`

is overwritten with its inverse.

`trtrs!`

(*uplo*,*trans*,*diag*,*A*,*B*)¶Solves

`A*X=B`

(`trans=N`

),`A.'*X=B`

(`trans=T`

), or`A'*X=B`

(`trans=C`

) for (upper if`uplo=U`

, lower if`uplo=L`

) triangular matrix`A`

. If`diag=N`

,`A`

has non-unit diagonal elements. If`diag=U`

, all diagonal elements of`A`

are one.`B`

is overwritten with the solution`X`

.
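A minimal `trtrs!` call for an upper triangular system (illustrative data; assumes `LinearAlgebra.LAPACK`):

```julia
using LinearAlgebra

A = [2.0 1.0; 0.0 3.0]   # upper triangular, non-unit diagonal
b = [3.0, 6.0]
LAPACK.trtrs!('U', 'N', 'N', A, b)   # b ← solution X of A*X = b
```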

`trcon!`

(*norm*,*uplo*,*diag*,*A*)¶Finds the reciprocal condition number of (upper if

`uplo=U`

, lower if`uplo=L`

) triangular matrix`A`

. If`diag=N`

,`A`

has non-unit diagonal elements. If`diag=U`

, all diagonal elements of`A`

are one. If`norm=I`

, the condition number is found in the infinity norm. If`norm=O`

or`1`

, the condition number is found in the one norm.

`trevc!`

(*side*,*howmny*,*select*,*T*,*VL = similar(T)*,*VR = similar(T)*)¶Finds the eigensystem of an upper triangular matrix

`T`

. If`side=R`

, the right eigenvectors are computed. If`side=L`

, the left eigenvectors are computed. If`side=B`

, both sets are computed. If`howmny=A`

, all eigenvectors are found. If`howmny=B`

, all eigenvectors are found and backtransformed using`VL`

and`VR`

. If`howmny=S`

, only the eigenvectors corresponding to the values in`select`

are computed.

`trrfs!`

(*uplo*,*trans*,*diag*,*A*,*B*,*X*,*Ferr*,*Berr) -> (Ferr*,*Berr*)¶Estimates the error in the solution to

`A*X=B`

(`trans=N`

),`A.'*X=B`

(`trans=T`

),`A'*X=B`

(`trans=C`

) for`side=L`

, or the equivalent right-sided equations`X*A=B`(`side=R`), after computing`X`

using`trtrs!`

. If`uplo=U`

,`A`

is upper triangular. If`uplo=L`

,`A`

is lower triangular. If`diag=N`

,`A`

has non-unit diagonal elements. If`diag=U`

, all diagonal elements of`A`

are one.`Ferr`

and`Berr`

are optional inputs.`Ferr`

is the forward error and`Berr`

is the backward error, each component-wise.

`stev!`

(*job*,*dv*,*ev) -> (dv*,*Zmat*)¶Computes the eigensystem for a symmetric tridiagonal matrix with

`dv`

as diagonal and`ev`

as off-diagonal. If`job=N`

only the eigenvalues are found and returned in`dv`

. If`job=V`

then the eigenvectors are also found and returned in`Zmat`

.
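A sketch of `stev!` on a 2×2 symmetric tridiagonal matrix (hypothetical data; assumes `LinearAlgebra.LAPACK`):

```julia
using LinearAlgebra

dv = [2.0, 2.0]   # diagonal
ev = [1.0]        # off-diagonal
w, Z = LAPACK.stev!('V', dv, ev)   # eigenvalues in w (ascending), vectors in Z
# the inputs are destroyed, so verify against a fresh SymTridiagonal:
@assert SymTridiagonal([2.0, 2.0], [1.0]) * Z ≈ Z * Diagonal(w)
```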

`stebz!`

(*range*,*order*,*vl*,*vu*,*il*,*iu*,*abstol*,*dv*,*ev) -> (dv*,*iblock*,*isplit*)¶Computes the eigenvalues for a symmetric tridiagonal matrix with

`dv`

as diagonal and`ev`

as off-diagonal. If`range=A`

, all the eigenvalues are found. If`range=V`

, the eigenvalues in the half-open interval`(vl,vu]`

are found. If`range=I`

, the eigenvalues with indices between`il`

and`iu`

are found. If`order=B`

, eigenvalues are ordered within a block. If`order=E`

, they are ordered across all the blocks.`abstol`

can be set as a tolerance for convergence.

`stegr!`

(*jobz*,*range*,*dv*,*ev*,*vl*,*vu*,*il*,*iu) -> (w*,*Z*)¶Computes the eigenvalues (

`jobz=N`

) or eigenvalues and eigenvectors (`jobz=V`

) for a symmetric tridiagonal matrix with`dv`

as diagonal and`ev`

as off-diagonal. If`range=A`

, all the eigenvalues are found. If`range=V`

, the eigenvalues in the half-open interval`(vl,vu]`

are found. If`range=I`

, the eigenvalues with indices between`il`

and`iu`

are found. The eigenvalues are returned in`w`

and the eigenvectors in`Z`

.

`stein!`

(*dv*,*ev_in*,*w_in*,*iblock_in*,*isplit_in*)¶Computes the eigenvectors for a symmetric tridiagonal matrix with

`dv`

as diagonal and`ev_in`

as off-diagonal.`w_in`

specifies the input eigenvalues for which to find corresponding eigenvectors.`iblock_in`

specifies the submatrices corresponding to the eigenvalues in`w_in`

.`isplit_in`

specifies the splitting points between the submatrix blocks.

`syconv!`

(*uplo*,*A*,*ipiv) -> (A*,*work*)¶Converts a symmetric matrix

`A`

(which has been factorized into a triangular matrix) into two matrices`L`

and`D`

. If`uplo=U`

,`A`

is upper triangular. If`uplo=L`

, it is lower triangular.`ipiv`

is the pivot vector from the triangular factorization.`A`

is overwritten by`L`

and`D`

.

`sysv!`

(*uplo*,*A*,*B) -> (B*,*A*,*ipiv*)¶Finds the solution to

`A*X=B`

for symmetric matrix`A`

. If`uplo=U`

, the upper half of`A`

is stored. If`uplo=L`

, the lower half is stored.`B`

is overwritten by the solution`X`

.`A`

is overwritten by its Bunch-Kaufman factorization.`ipiv`

contains pivoting information about the factorization.
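A minimal `sysv!` example on a symmetric indefinite system (made-up data; assumes `LinearAlgebra.LAPACK`):

```julia
using LinearAlgebra

A = [2.0 1.0; 1.0 -1.0]   # symmetric but indefinite
Acopy = copy(A)
b = [1.0, 0.0]
x, _, ipiv = LAPACK.sysv!('U', A, b)   # A ← Bunch-Kaufman factors, b ← x
@assert Acopy * x ≈ [1.0, 0.0]
```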

`sytrf!`

(*uplo*,*A) -> (A*,*ipiv*)¶Computes the Bunch-Kaufman factorization of a symmetric matrix

`A`

. If`uplo=U`

, the upper half of`A`

is stored. If`uplo=L`

, the lower half is stored. Returns

`A`

, overwritten by the factorization, and a pivot vector`ipiv`

.

`sytri!`

(*uplo*,*A*,*ipiv*)¶Computes the inverse of a symmetric matrix

`A`

using the results of`sytrf!`

. If`uplo=U`

, the upper half of`A`

is stored. If`uplo=L`

, the lower half is stored.`A`

is overwritten by its inverse.

`sytrs!`

(*uplo*,*A*,*ipiv*,*B*)¶Solves the equation

`A*X=B`

for a symmetric matrix`A`

using the results of`sytrf!`

. If`uplo=U`

, the upper half of`A`

is stored. If`uplo=L`

, the lower half is stored.`B`

is overwritten by the solution`X`

.

`hesv!`

(*uplo*,*A*,*B) -> (B*,*A*,*ipiv*)¶Finds the solution to

`A*X=B`

for Hermitian matrix`A`

. If`uplo=U`

, the upper half of`A`

is stored. If`uplo=L`

, the lower half is stored.`B`

is overwritten by the solution`X`

.`A`

is overwritten by its Bunch-Kaufman factorization.`ipiv`

contains pivoting information about the factorization.

`hetrf!`

(*uplo*,*A) -> (A*,*ipiv*)¶Computes the Bunch-Kaufman factorization of a Hermitian matrix

`A`

. If`uplo=U`

, the upper half of`A`

is stored. If`uplo=L`

, the lower half is stored. Returns

`A`

, overwritten by the factorization, and a pivot vector.

`hetri!`

(*uplo*,*A*,*ipiv*)¶Computes the inverse of a Hermitian matrix

`A`

using the results of`hetrf!`

. If`uplo=U`

, the upper half of`A`

is stored. If`uplo=L`

, the lower half is stored.`A`

is overwritten by its inverse.

`hetrs!`

(*uplo*,*A*,*ipiv*,*B*)¶Solves the equation

`A*X=B`

for a Hermitian matrix`A`

using the results of`hetrf!`

. If`uplo=U`

, the upper half of`A`

is stored. If`uplo=L`

, the lower half is stored.`B`

is overwritten by the solution`X`

.

`syev!`

(*jobz*,*uplo*,*A*)¶Finds the eigenvalues (

`jobz=N`

) or eigenvalues and eigenvectors (`jobz=V`

) of a symmetric matrix`A`

. If`uplo=U`

, the upper triangle of`A`

is used. If`uplo=L`

, the lower triangle of`A`

is used.
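A sketch of `syev!` computing eigenvalues only (illustrative data; assumes `LinearAlgebra.LAPACK`):

```julia
using LinearAlgebra

A = [2.0 1.0; 1.0 2.0]
# jobz = 'N': eigenvalues only, returned in ascending order
W = LAPACK.syev!('N', 'U', copy(A))   # copy: the input is destroyed
```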

`syevr!`

(*jobz*,*range*,*uplo*,*A*,*vl*,*vu*,*il*,*iu*,*abstol) -> (W*,*Z*)¶Finds the eigenvalues (

`jobz=N`

) or eigenvalues and eigenvectors (`jobz=V`

) of a symmetric matrix`A`

. If`uplo=U`

, the upper triangle of`A`

is used. If`uplo=L`

, the lower triangle of`A`

is used. If`range=A`

, all the eigenvalues are found. If`range=V`

, the eigenvalues in the half-open interval`(vl,vu]`

are found. If`range=I`

, the eigenvalues with indices between`il`

and`iu`

are found.`abstol`

can be set as a tolerance for convergence. The eigenvalues are returned in

`W`

and the eigenvectors in`Z`

.

`sygvd!`

(*itype*,*jobz*,*uplo*,*A*,*B) -> (w*,*A*,*B*)¶Finds the generalized eigenvalues (

`jobz=N`

) or eigenvalues and eigenvectors (`jobz=V`

) of a symmetric matrix`A`

and symmetric positive-definite matrix`B`

. If`uplo=U`

, the upper triangles of`A`

and`B`

are used. If`uplo=L`

, the lower triangles of`A`

and`B`

are used. If`itype=1`

, the problem to solve is`A*x=lambda*B*x`

. If`itype=2`

, the problem to solve is`A*B*x=lambda*x`

. If`itype=3`

, the problem to solve is`B*A*x=lambda*x`

.

`bdsqr!`

(*uplo*,*d*,*e_*,*Vt*,*U*,*C) -> (d*,*Vt*,*U*,*C*)¶Computes the singular value decomposition of a bidiagonal matrix with

`d`

on the diagonal and`e_`

on the off-diagonal. If`uplo=U`

,`e_`

is the superdiagonal. If`uplo=L`

,`e_`

is the subdiagonal. Can optionally also compute the product`Q'*C`

. Returns the singular values in

`d`

, and the matrix`C`

overwritten with`Q'*C`

.

`bdsdc!`

(*uplo*,*compq*,*d*,*e_) -> (d*,*e*,*u*,*vt*,*q*,*iq*)¶Computes the singular value decomposition of a bidiagonal matrix with

`d`

on the diagonal and`e_`

on the off-diagonal using a divide and conquer method. If`uplo=U`

,`e_`

is the superdiagonal. If`uplo=L`

,`e_`

is the subdiagonal. If`compq=N`

, only the singular values are found. If`compq=I`

, the singular values and vectors are found. If`compq=P`

, the singular values and vectors are found in compact form. Only works for real types. Returns the singular values in

`d`

, and if`compq=P`

, the compact singular vectors in`iq`

.

`gecon!`

(*normtype*,*A*,*anorm*)¶Finds the reciprocal condition number of matrix

`A`

. If`normtype=I`

, the condition number is found in the infinity norm. If`normtype=O`

or`1`

, the condition number is found in the one norm.`A`

must be the result of`getrf!`

and`anorm`

is the norm of`A`

in the relevant norm.

`gehrd!`

(*ilo*,*ihi*,*A) -> (A*,*tau*)¶Converts a matrix

`A`

to Hessenberg form. If`A`

is balanced with`gebal!`

then`ilo`

and`ihi`

are the outputs of`gebal!`

. Otherwise they should be`ilo=1`

and`ihi=size(A,2)`

.`tau`

contains the elementary reflectors of the factorization.

`orghr!`

(*ilo*,*ihi*,*A*,*tau*)¶Explicitly finds

`Q`

, the orthogonal/unitary matrix from`gehrd!`

.`ilo`

,`ihi`

,`A`

, and`tau`

must correspond to the input/output to`gehrd!`

.
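The `gehrd!`/`orghr!` pair can be sketched as follows: reduce to Hessenberg form, then rebuild `Q` from the stored reflectors (hypothetical data, no prior balancing, so `ilo=1` and `ihi=size(A,2)`; assumes `LinearAlgebra.LAPACK`):

```julia
using LinearAlgebra

A = [4.0 1.0 2.0; 3.0 4.0 1.0; 1.0 1.0 3.0]
H, tau = LAPACK.gehrd!(1, 3, copy(A))   # Hessenberg part in triu(H, -1),
                                        # reflectors stored below it
Q = LAPACK.orghr!(1, 3, copy(H), tau)   # rebuild Q from the reflectors
@assert Q * triu(H, -1) * Q' ≈ A        # A = Q * Hess * Q'
```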

`gees!`

(*jobvs*,*A) -> (A*,*vs*,*w*)¶Computes the eigenvalues (

`jobvs=N`

) or the eigenvalues and Schur vectors (`jobvs=V`

) of matrix`A`

.`A`

is overwritten by its Schur form. Returns

`A`

,`vs`

containing the Schur vectors, and`w`

, containing the eigenvalues.

`gges!`

(*jobvsl*,*jobvsr*,*A*,*B) -> (A*,*B*,*alpha*,*beta*,*vsl*,*vsr*)¶Computes the generalized eigenvalues, generalized Schur form, left Schur vectors (

`jobvsl=V`

), or right Schur vectors (`jobvsr=V`

) of`A`

and`B`

. The generalized eigenvalues are returned in

`alpha`

and`beta`

. The left Schur vectors are returned in`vsl`

and the right Schur vectors are returned in`vsr`

.

`trsen!`

(*compq*,*job*,*select*,*T*,*Q) -> (T*,*Q*,*w*)¶Reorder the Schur factorization of a matrix and optionally finds reciprocal condition numbers. If

`job=N`

, no condition numbers are found. If`job=E`

, only the condition number for this cluster of eigenvalues is found. If`job=V`

, only the condition number for the invariant subspace is found. If`job=B`

then the condition numbers for the cluster and subspace are found. If`compq=V`

the Schur vectors`Q`

are updated. If`compq=N`

the Schur vectors are not modified.`select`

determines which eigenvalues are in the cluster. Returns

`T`

,`Q`

, and reordered eigenvalues in`w`

.

`tgsen!`

(*select*,*S*,*T*,*Q*,*Z) -> (S*,*T*,*alpha*,*beta*,*Q*,*Z*)¶Reorders the vectors of a generalized Schur decomposition.

`select`

specifies the eigenvalues in each cluster.

`trsyl!`

(*transa*,*transb*,*A*,*B*,*C*,*isgn=1) -> (C*,*scale*)¶Solves the Sylvester matrix equation

`A*X+/-X*B=scale*C`

where`A`

and`B`

are both quasi-upper triangular. If`transa=N`

,`A`

is not modified. If`transa=T`

,`A`

is transposed. If`transa=C`

,`A`

is conjugate transposed. Similarly for`transb`

and`B`

. If`isgn=1`

, the equation`A*X+X*B=scale*C`

is solved. If`isgn=-1`

, the equation`A*X-X*B=scale*C`

is solved. Returns

`X`

(overwriting`C`

) and`scale`

.
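A minimal `trsyl!` example with triangular coefficient matrices (made-up data chosen so the spectra of `A` and `-B` do not overlap, which guarantees a unique solution; assumes `LinearAlgebra.LAPACK`):

```julia
using LinearAlgebra

A = [1.0 2.0; 0.0 3.0]    # upper triangular, hence quasi-upper triangular
B = [4.0 1.0; 0.0 5.0]
C = [1.0 0.0; 2.0 1.0]
Ccopy = copy(C)
X, scale = LAPACK.trsyl!('N', 'N', A, B, C)   # solves A*X + X*B = scale*C
@assert A * X + X * B ≈ scale * Ccopy
```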