# Linear Algebra

## Standard Functions

Linear algebra functions in Julia are largely implemented by calling functions from LAPACK. Sparse factorizations call functions from SuiteSparse.

`Base.:*` — Method. `*(x, y...)`

Multiplication operator. `x*y*z*...` calls this function with all arguments, i.e. `*(x, y, z, ...)`.
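
For example, chained multiplication and the explicit call form are equivalent:

```
julia> 2 * 3 * 4
24

julia> *(2, 3, 4)
24
```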

`Base.:\` — Method. `\(x, y)`

Left division operator: multiplication of `y` by the inverse of `x` on the left. Gives floating-point results for integer arguments.

```
julia> 3 \ 6
2.0
julia> inv(3) * 6
2.0
julia> A = [1 2; 3 4]; x = [5, 6];
julia> A \ x
2-element Array{Float64,1}:
-4.0
4.5
julia> inv(A) * x
2-element Array{Float64,1}:
-4.0
4.5
```

`Base.LinAlg.dot` — Function. `dot(n, X, incx, Y, incy)`

Dot product of two vectors consisting of `n` elements of array `X` with stride `incx` and `n` elements of array `Y` with stride `incy`.

**Example**

```
julia> dot(10, ones(10), 1, ones(20), 2)
10.0
```

`Base.LinAlg.vecdot` — Function. `vecdot(x, y)`

For any iterable containers `x` and `y` (including arrays of any dimension) of numbers (or any element type for which `dot` is defined), compute the Euclidean dot product (the sum of `dot(x[i], y[i])`) as if they were vectors.

**Examples**

```
julia> vecdot(1:5, 2:6)
70
julia> x = fill(2., (5,5));
julia> y = fill(3., (5,5));
julia> vecdot(x, y)
150.0
```

`Base.LinAlg.cross` — Function.

```
cross(x, y)
×(x,y)
```

Compute the cross product of two 3-vectors.

**Example**

```
julia> a = [0;1;0]
3-element Array{Int64,1}:
0
1
0
julia> b = [0;0;1]
3-element Array{Int64,1}:
0
0
1
julia> cross(a,b)
3-element Array{Int64,1}:
1
0
0
```

`Base.LinAlg.factorize` — Function. `factorize(A)`

Compute a convenient factorization of `A`, based upon the type of the input matrix. If `A` is passed as a generic matrix, `factorize` checks every element of `A` to verify or rule out each property (symmetric, triangular, etc.), short-circuiting as soon as it can rule out symmetry or triangular structure. The return value can be reused for efficient solving of multiple systems. For example: `A=factorize(A); x=A\b; y=A\C`.

| Properties of `A` | Type of factorization |
|---|---|
| Positive-definite | Cholesky (see `cholfact`) |
| Dense symmetric/Hermitian | Bunch-Kaufman (see `bkfact`) |
| Sparse symmetric/Hermitian | LDLt (see `ldltfact`) |
| Triangular | Triangular |
| Diagonal | Diagonal |
| Bidiagonal | Bidiagonal |
| Tridiagonal | LU (see `lufact`) |
| Symmetric real tridiagonal | LDLt (see `ldltfact`) |
| General square | LU (see `lufact`) |
| General non-square | QR (see `qrfact`) |

If `factorize` is called on a Hermitian positive-definite matrix, for instance, then `factorize` will return a Cholesky factorization.

**Example**

```
julia> A = Array(Bidiagonal(ones(5, 5), true))
5×5 Array{Float64,2}:
1.0 1.0 0.0 0.0 0.0
0.0 1.0 1.0 0.0 0.0
0.0 0.0 1.0 1.0 0.0
0.0 0.0 0.0 1.0 1.0
0.0 0.0 0.0 0.0 1.0
julia> factorize(A) # factorize will check to see that A is already factorized
5×5 Bidiagonal{Float64}:
1.0 1.0 ⋅ ⋅ ⋅
⋅ 1.0 1.0 ⋅ ⋅
⋅ ⋅ 1.0 1.0 ⋅
⋅ ⋅ ⋅ 1.0 1.0
⋅ ⋅ ⋅ ⋅ 1.0
```

This returns a `5×5 Bidiagonal{Float64}`, which can now be passed to other linear algebra functions (e.g. eigensolvers) which will use specialized methods for `Bidiagonal` types.

`Base.LinAlg.Diagonal` — Type. `Diagonal(A::AbstractMatrix)`

Constructs a matrix from the diagonal of `A`.

**Example**

```
julia> A = [1 2 3; 4 5 6; 7 8 9]
3×3 Array{Int64,2}:
1 2 3
4 5 6
7 8 9
julia> Diagonal(A)
3×3 Diagonal{Int64}:
1 ⋅ ⋅
⋅ 5 ⋅
⋅ ⋅ 9
```

`Diagonal(V::AbstractVector)`

Constructs a matrix with `V` as its diagonal.

**Example**

```
julia> V = [1; 2]
2-element Array{Int64,1}:
1
2
julia> Diagonal(V)
2×2 Diagonal{Int64}:
1 ⋅
⋅ 2
```

`Base.LinAlg.Bidiagonal` — Type. `Bidiagonal(dv, ev, isupper::Bool)`

Constructs an upper (`isupper=true`) or lower (`isupper=false`) bidiagonal matrix using the given diagonal (`dv`) and off-diagonal (`ev`) vectors. The result is of type `Bidiagonal` and provides efficient specialized linear solvers, but may be converted into a regular matrix with `convert(Array, _)` (or `Array(_)` for short). The length of `ev` must be one less than the length of `dv`.

**Example**

```
julia> dv = [1; 2; 3; 4]
4-element Array{Int64,1}:
1
2
3
4
julia> ev = [7; 8; 9]
3-element Array{Int64,1}:
7
8
9
julia> Bu = Bidiagonal(dv, ev, true) # ev is on the first superdiagonal
4×4 Bidiagonal{Int64}:
1 7 ⋅ ⋅
⋅ 2 8 ⋅
⋅ ⋅ 3 9
⋅ ⋅ ⋅ 4
julia> Bl = Bidiagonal(dv, ev, false) # ev is on the first subdiagonal
4×4 Bidiagonal{Int64}:
1 ⋅ ⋅ ⋅
7 2 ⋅ ⋅
⋅ 8 3 ⋅
⋅ ⋅ 9 4
```

`Bidiagonal(dv, ev, uplo::Char)`

Constructs an upper (`uplo='U'`) or lower (`uplo='L'`) bidiagonal matrix using the given diagonal (`dv`) and off-diagonal (`ev`) vectors. The result is of type `Bidiagonal` and provides efficient specialized linear solvers, but may be converted into a regular matrix with `convert(Array, _)` (or `Array(_)` for short). The length of `ev` must be one less than the length of `dv`.

**Example**

```
julia> dv = [1; 2; 3; 4]
4-element Array{Int64,1}:
1
2
3
4
julia> ev = [7; 8; 9]
3-element Array{Int64,1}:
7
8
9
julia> Bu = Bidiagonal(dv, ev, 'U') #e is on the first superdiagonal
4×4 Bidiagonal{Int64}:
1 7 ⋅ ⋅
⋅ 2 8 ⋅
⋅ ⋅ 3 9
⋅ ⋅ ⋅ 4
julia> Bl = Bidiagonal(dv, ev, 'L') #e is on the first subdiagonal
4×4 Bidiagonal{Int64}:
1 ⋅ ⋅ ⋅
7 2 ⋅ ⋅
⋅ 8 3 ⋅
⋅ ⋅ 9 4
```

`Bidiagonal(A, isupper::Bool)`

Construct a `Bidiagonal` matrix from the main diagonal of `A` and its first super- (if `isupper=true`) or sub-diagonal (if `isupper=false`).

**Example**

```
julia> A = [1 1 1 1; 2 2 2 2; 3 3 3 3; 4 4 4 4]
4×4 Array{Int64,2}:
1 1 1 1
2 2 2 2
3 3 3 3
4 4 4 4
julia> Bidiagonal(A, true) #contains the main diagonal and first superdiagonal of A
4×4 Bidiagonal{Int64}:
1 1 ⋅ ⋅
⋅ 2 2 ⋅
⋅ ⋅ 3 3
⋅ ⋅ ⋅ 4
julia> Bidiagonal(A, false) #contains the main diagonal and first subdiagonal of A
4×4 Bidiagonal{Int64}:
1 ⋅ ⋅ ⋅
2 2 ⋅ ⋅
⋅ 3 3 ⋅
⋅ ⋅ 4 4
```

`Base.LinAlg.SymTridiagonal` — Type. `SymTridiagonal(dv, ev)`

Construct a symmetric tridiagonal matrix from the diagonal and first sub/super-diagonal, respectively. The result is of type `SymTridiagonal` and provides efficient specialized eigensolvers, but may be converted into a regular matrix with `convert(Array, _)` (or `Array(_)` for short).

**Example**

```
julia> dv = [1; 2; 3; 4]
4-element Array{Int64,1}:
1
2
3
4
julia> ev = [7; 8; 9]
3-element Array{Int64,1}:
7
8
9
julia> SymTridiagonal(dv, ev)
4×4 SymTridiagonal{Int64}:
1 7 ⋅ ⋅
7 2 8 ⋅
⋅ 8 3 9
⋅ ⋅ 9 4
```

`Base.LinAlg.Tridiagonal` — Type. `Tridiagonal(dl, d, du)`

Construct a tridiagonal matrix from the first subdiagonal, diagonal, and first superdiagonal, respectively. The result is of type `Tridiagonal` and provides efficient specialized linear solvers, but may be converted into a regular matrix with `convert(Array, _)` (or `Array(_)` for short). The lengths of `dl` and `du` must be one less than the length of `d`.

**Example**

```
julia> dl = [1; 2; 3]
3-element Array{Int64,1}:
1
2
3
julia> du = [4; 5; 6]
3-element Array{Int64,1}:
4
5
6
julia> d = [7; 8; 9; 0]
4-element Array{Int64,1}:
7
8
9
0
julia> Tridiagonal(dl, d, du)
4×4 Tridiagonal{Int64}:
7 4 ⋅ ⋅
1 8 5 ⋅
⋅ 2 9 6
⋅ ⋅ 3 0
```

`Tridiagonal(A)`

Returns a `Tridiagonal` array based on (abstract) matrix `A`, using its first lower diagonal, main diagonal, and first upper diagonal.

**Example**

```
julia> A = [1 2 3 4; 1 2 3 4; 1 2 3 4; 1 2 3 4]
4×4 Array{Int64,2}:
1 2 3 4
1 2 3 4
1 2 3 4
1 2 3 4
julia> Tridiagonal(A)
4×4 Tridiagonal{Int64}:
1 2 ⋅ ⋅
1 2 3 ⋅
⋅ 2 3 4
⋅ ⋅ 3 4
```

`Base.LinAlg.Symmetric` — Type. `Symmetric(A, uplo=:U)`

Construct a `Symmetric` view of the upper (if `uplo = :U`) or lower (if `uplo = :L`) triangle of the matrix `A`.

**Example**

```
julia> A = [1 0 2 0 3; 0 4 0 5 0; 6 0 7 0 8; 0 9 0 1 0; 2 0 3 0 4]
5×5 Array{Int64,2}:
1 0 2 0 3
0 4 0 5 0
6 0 7 0 8
0 9 0 1 0
2 0 3 0 4
julia> Supper = Symmetric(A)
5×5 Symmetric{Int64,Array{Int64,2}}:
1 0 2 0 3
0 4 0 5 0
2 0 7 0 8
0 5 0 1 0
3 0 8 0 4
julia> Slower = Symmetric(A, :L)
5×5 Symmetric{Int64,Array{Int64,2}}:
1 0 6 0 2
0 4 0 9 0
6 0 7 0 3
0 9 0 1 0
2 0 3 0 4
```

Note that `Supper` will not be equal to `Slower` unless `A` is itself symmetric (e.g. if `A == A.'`).

`Base.LinAlg.Hermitian` — Type. `Hermitian(A, uplo=:U)`

Construct a `Hermitian` view of the upper (if `uplo = :U`) or lower (if `uplo = :L`) triangle of the matrix `A`.

**Example**

```
julia> A = [1 0 2+2im 0 3-3im; 0 4 0 5 0; 6-6im 0 7 0 8+8im; 0 9 0 1 0; 2+2im 0 3-3im 0 4];
julia> Hupper = Hermitian(A)
5×5 Hermitian{Complex{Int64},Array{Complex{Int64},2}}:
1+0im 0+0im 2+2im 0+0im 3-3im
0+0im 4+0im 0+0im 5+0im 0+0im
2-2im 0+0im 7+0im 0+0im 8+8im
0+0im 5+0im 0+0im 1+0im 0+0im
3+3im 0+0im 8-8im 0+0im 4+0im
julia> Hlower = Hermitian(A, :L)
5×5 Hermitian{Complex{Int64},Array{Complex{Int64},2}}:
1+0im 0+0im 6+6im 0+0im 2-2im
0+0im 4+0im 0+0im 9+0im 0+0im
6-6im 0+0im 7+0im 0+0im 3+3im
0+0im 9+0im 0+0im 1+0im 0+0im
2+2im 0+0im 3-3im 0+0im 4+0im
```

Note that `Hupper` will not be equal to `Hlower` unless `A` is itself Hermitian (e.g. if `A == A'`).

`Base.LinAlg.LowerTriangular` — Type. `LowerTriangular(A::AbstractMatrix)`

Construct a `LowerTriangular` view of the matrix `A`.

**Example**

```
julia> A = [1.0 2.0 3.0; 4.0 5.0 6.0; 7.0 8.0 9.0]
3×3 Array{Float64,2}:
1.0 2.0 3.0
4.0 5.0 6.0
7.0 8.0 9.0
julia> LowerTriangular(A)
3×3 LowerTriangular{Float64,Array{Float64,2}}:
1.0 ⋅ ⋅
4.0 5.0 ⋅
7.0 8.0 9.0
```

`Base.LinAlg.UpperTriangular` — Type. `UpperTriangular(A::AbstractMatrix)`

Construct an `UpperTriangular` view of the matrix `A`.

**Example**

```
julia> A = [1.0 2.0 3.0; 4.0 5.0 6.0; 7.0 8.0 9.0]
3×3 Array{Float64,2}:
1.0 2.0 3.0
4.0 5.0 6.0
7.0 8.0 9.0
julia> UpperTriangular(A)
3×3 UpperTriangular{Float64,Array{Float64,2}}:
1.0 2.0 3.0
⋅ 5.0 6.0
⋅ ⋅ 9.0
```

`Base.LinAlg.lu` — Function. `lu(A, pivot=Val{true}) -> L, U, p`

Compute the LU factorization of `A`, such that `A[p,:] = L*U`. By default, pivoting is used. This can be overridden by passing `Val{false}` for the second argument.

See also `lufact`.

**Example**

```
julia> A = [4. 3.; 6. 3.]
2×2 Array{Float64,2}:
4.0 3.0
6.0 3.0
julia> L, U, p = lu(A)
([1.0 0.0; 0.666667 1.0], [6.0 3.0; 0.0 1.0], [2, 1])
julia> A[p, :] == L * U
true
```

`Base.LinAlg.lufact` — Function. `lufact(A::SparseMatrixCSC) -> F::UmfpackLU`

Compute the LU factorization of a sparse matrix `A`.

For sparse `A` with real or complex element type, the return type of `F` is `UmfpackLU{Tv, Ti}`, with `Tv` = `Float64` or `Complex128` respectively and `Ti` an integer type (`Int32` or `Int64`).

The individual components of the factorization `F` can be accessed by indexing:

| Component | Description |
|---|---|
| `F[:L]` | `L` (lower triangular) part of `LU` |
| `F[:U]` | `U` (upper triangular) part of `LU` |
| `F[:p]` | right permutation `Vector` |
| `F[:q]` | left permutation `Vector` |
| `F[:Rs]` | `Vector` of scaling factors |
| `F[:(:)]` | `(L,U,p,q,Rs)` components |

The relation between `F` and `A` is

`F[:L]*F[:U] == (F[:Rs] .* A)[F[:p], F[:q]]`

`F` further supports the following functions: `\`, `cond`, and `det`.

`lufact(A::SparseMatrixCSC)` uses the UMFPACK library that is part of SuiteSparse. As this library only supports sparse matrices with `Float64` or `Complex128` elements, `lufact` converts `A` into a copy that is of type `SparseMatrixCSC{Float64}` or `SparseMatrixCSC{Complex128}` as appropriate.

`lufact(A [,pivot=Val{true}]) -> F::LU`

Compute the LU factorization of `A`.

In most cases, if `A` is a subtype `S` of `AbstractMatrix{T}` with an element type `T` supporting `+`, `-`, `*` and `/`, the return type is `LU{T,S{T}}`. If pivoting is chosen (default) the element type should also support `abs` and `<`.

The individual components of the factorization `F` can be accessed by indexing:

| Component | Description |
|---|---|
| `F[:L]` | `L` (lower triangular) part of `LU` |
| `F[:U]` | `U` (upper triangular) part of `LU` |
| `F[:p]` | (right) permutation `Vector` |
| `F[:P]` | (right) permutation `Matrix` |

The relationship between `F` and `A` is

`F[:L]*F[:U] == A[F[:p], :]`

`F` further supports the following functions:

| Supported function | `LU` | `LU{T,Tridiagonal{T}}` |
|---|---|---|
| `/` | ✓ | |
| `\` | ✓ | ✓ |
| `cond` | ✓ | |
| `inv` | ✓ | ✓ |
| `det` | ✓ | ✓ |
| `logdet` | ✓ | ✓ |
| `logabsdet` | ✓ | ✓ |
| `size` | ✓ | ✓ |

**Example**

```
julia> A = [4 3; 6 3]
2×2 Array{Int64,2}:
4 3
6 3
julia> F = lufact(A)
Base.LinAlg.LU{Float64,Array{Float64,2}} with factors L and U:
[1.0 0.0; 1.5 1.0]
[4.0 3.0; 0.0 -1.5]
julia> F[:L] * F[:U] == A[F[:p], :]
true
```

`Base.LinAlg.lufact!` — Function. `lufact!(A, pivot=Val{true}) -> LU`

`lufact!` is the same as `lufact`, but saves space by overwriting the input `A`, instead of creating a copy. An `InexactError` exception is thrown if the factorization produces a number not representable by the element type of `A`, e.g. for integer types.
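
A small sketch of the in-place behavior (arbitrary example values; outputs elided, and the integer case is expected to throw because the LU factors are not integers):

```
julia> A = [4 3; 6 3];           # integer matrix

julia> lufact!(A)                # expected to throw InexactError

julia> B = [4.0 3.0; 6.0 3.0];

julia> lufact!(B);               # B now holds the packed LU factors in-place
```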

`Base.LinAlg.chol` — Function. `chol(A) -> U`

Compute the Cholesky factorization of a positive definite matrix `A` and return the `UpperTriangular` matrix `U` such that `A = U'U`.

**Example**

```
julia> A = [1. 2.; 2. 50.]
2×2 Array{Float64,2}:
1.0 2.0
2.0 50.0
julia> U = chol(A)
2×2 UpperTriangular{Float64,Array{Float64,2}}:
1.0 2.0
⋅ 6.78233
julia> U'U
2×2 Array{Float64,2}:
1.0 2.0
2.0 50.0
```

`chol(x::Number) -> y`

Compute the square root of a non-negative number `x`.

**Example**

```
julia> chol(16)
4.0
```

`Base.LinAlg.cholfact` — Function. `cholfact(A; shift = 0.0, perm = Int[]) -> CHOLMOD.Factor`

Compute the Cholesky factorization of a sparse positive definite matrix `A`. `A` must be a `SparseMatrixCSC` or a `Symmetric`/`Hermitian` view of a `SparseMatrixCSC`. Note that even if `A` doesn't have the type tag, it must still be symmetric or Hermitian. A fill-reducing permutation is used. `F = cholfact(A)` is most frequently used to solve systems of equations with `F\b`, but the methods `diag`, `det`, and `logdet` are also defined for `F`. You can also extract individual factors from `F`, using `F[:L]`. However, since pivoting is on by default, the factorization is internally represented as `A == P'*L*L'*P` with a permutation matrix `P`; using just `L` without accounting for `P` will give incorrect answers. To include the effects of permutation, it's typically preferable to extract "combined" factors like `PtL = F[:PtL]` (the equivalent of `P'*L`) and `LtP = F[:UP]` (the equivalent of `L'*P`).

Setting the optional `shift` keyword argument computes the factorization of `A+shift*I` instead of `A`. If the `perm` argument is nonempty, it should be a permutation of `1:size(A,1)` giving the ordering to use (instead of CHOLMOD's default AMD ordering).

This method uses the CHOLMOD library from SuiteSparse, which only supports doubles or complex doubles. Input matrices not of those element types will be converted to `SparseMatrixCSC{Float64}` or `SparseMatrixCSC{Complex128}` as appropriate.

Many other functions from CHOLMOD are wrapped but not exported from the `Base.SparseArrays.CHOLMOD` module.
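
A minimal sketch of the common use case, solving a sparse symmetric positive definite system (the matrix here is an arbitrary small example):

```
julia> A = sparse([2.0 -1.0 0.0; -1.0 2.0 -1.0; 0.0 -1.0 2.0]);

julia> F = cholfact(A);

julia> b = ones(3);

julia> x = F \ b;                # solve A*x = b via the CHOLMOD factorization

julia> norm(A*x - b) < 1e-10
true
```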

`cholfact(A, [uplo::Symbol,] Val{false}) -> Cholesky`

Compute the Cholesky factorization of a dense symmetric positive definite matrix `A` and return a `Cholesky` factorization. The matrix `A` can either be a `Symmetric` or `Hermitian` `StridedMatrix` or a *perfectly* symmetric or Hermitian `StridedMatrix`. In the latter case, the optional argument `uplo` may be `:L` for using the lower part or `:U` for the upper part of `A`. The default is to use `:U`. The triangular Cholesky factor can be obtained from the factorization `F` with `F[:L]` and `F[:U]`. The following functions are available for `Cholesky` objects: `size`, `\`, `inv`, and `det`. A `PosDefException` exception is thrown in case the matrix is not positive definite.

**Example**

```
julia> A = [4. 12. -16.; 12. 37. -43.; -16. -43. 98.]
3×3 Array{Float64,2}:
4.0 12.0 -16.0
12.0 37.0 -43.0
-16.0 -43.0 98.0
julia> C = cholfact(A)
Base.LinAlg.Cholesky{Float64,Array{Float64,2}} with factor:
[2.0 6.0 -8.0; 0.0 1.0 5.0; 0.0 0.0 3.0]
julia> C[:U]
3×3 UpperTriangular{Float64,Array{Float64,2}}:
2.0 6.0 -8.0
⋅ 1.0 5.0
⋅ ⋅ 3.0
julia> C[:L]
3×3 LowerTriangular{Float64,Array{Float64,2}}:
2.0 ⋅ ⋅
6.0 1.0 ⋅
-8.0 5.0 3.0
julia> C[:L] * C[:U] == A
true
```

`cholfact(A, [uplo::Symbol,] Val{true}; tol = 0.0) -> CholeskyPivoted`

Compute the pivoted Cholesky factorization of a dense symmetric positive semi-definite matrix `A` and return a `CholeskyPivoted` factorization. The matrix `A` can either be a `Symmetric` or `Hermitian` `StridedMatrix` or a *perfectly* symmetric or Hermitian `StridedMatrix`. In the latter case, the optional argument `uplo` may be `:L` for using the lower part or `:U` for the upper part of `A`. The default is to use `:U`. The triangular Cholesky factor can be obtained from the factorization `F` with `F[:L]` and `F[:U]`. The following functions are available for `CholeskyPivoted` objects: `size`, `\`, `inv`, `det`, and `rank`. The argument `tol` determines the tolerance for determining the rank. For negative values, the tolerance is the machine precision.
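
A brief sketch (the matrix is an arbitrary positive definite example):

```
julia> A = [4.0 2.0; 2.0 3.0];

julia> F = cholfact(A, Val{true});   # pivoted Cholesky

julia> rank(F)
2
```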

`Base.LinAlg.cholfact!` — Function. `cholfact!(F::Factor, A; shift = 0.0) -> CHOLMOD.Factor`

Compute the Cholesky ($LL'$) factorization of `A`, reusing the symbolic factorization `F`. `A` must be a `SparseMatrixCSC` or a `Symmetric`/`Hermitian` view of a `SparseMatrixCSC`. Note that even if `A` doesn't have the type tag, it must still be symmetric or Hermitian.

See also `cholfact`.

This method uses the CHOLMOD library from SuiteSparse, which only supports doubles or complex doubles. Input matrices not of those element types will be converted to `SparseMatrixCSC{Float64}` or `SparseMatrixCSC{Complex128}` as appropriate.

`cholfact!(A, [uplo::Symbol,] Val{false}) -> Cholesky`

The same as `cholfact`, but saves space by overwriting the input `A`, instead of creating a copy. An `InexactError` exception is thrown if the factorization produces a number not representable by the element type of `A`, e.g. for integer types.

**Example**

```
julia> A = [1 2; 2 50]
2×2 Array{Int64,2}:
1 2
2 50
julia> cholfact!(A)
ERROR: InexactError()
```

`cholfact!(A, [uplo::Symbol,] Val{true}; tol = 0.0) -> CholeskyPivoted`

The same as `cholfact`, but saves space by overwriting the input `A`, instead of creating a copy. An `InexactError` exception is thrown if the factorization produces a number not representable by the element type of `A`, e.g. for integer types.

`Base.LinAlg.lowrankupdate` — Function. `lowrankupdate(C::Cholesky, v::StridedVector) -> CC::Cholesky`

Update a Cholesky factorization `C` with the vector `v`. If `A = C[:U]'C[:U]` then `CC = cholfact(C[:U]'C[:U] + v*v')` but the computation of `CC` only uses `O(n^2)` operations.
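
A small sketch of a rank-one update (arbitrary example data):

```
julia> A = [4.0 2.0; 2.0 3.0];

julia> C = cholfact(A);

julia> v = [1.0, 1.0];

julia> CC = lowrankupdate(C, v);

julia> CC[:U]' * CC[:U] ≈ A + v*v'
true
```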

`Base.LinAlg.lowrankdowndate` — Function. `lowrankdowndate(C::Cholesky, v::StridedVector) -> CC::Cholesky`

Downdate a Cholesky factorization `C` with the vector `v`. If `A = C[:U]'C[:U]` then `CC = cholfact(C[:U]'C[:U] - v*v')` but the computation of `CC` only uses `O(n^2)` operations.

`Base.LinAlg.lowrankupdate!` — Function. `lowrankupdate!(C::Cholesky, v::StridedVector) -> CC::Cholesky`

Update a Cholesky factorization `C` with the vector `v`. If `A = C[:U]'C[:U]` then `CC = cholfact(C[:U]'C[:U] + v*v')` but the computation of `CC` only uses `O(n^2)` operations. The input factorization `C` is updated in place such that on exit `C == CC`. The vector `v` is destroyed during the computation.

`Base.LinAlg.lowrankdowndate!` — Function. `lowrankdowndate!(C::Cholesky, v::StridedVector) -> CC::Cholesky`

Downdate a Cholesky factorization `C` with the vector `v`. If `A = C[:U]'C[:U]` then `CC = cholfact(C[:U]'C[:U] - v*v')` but the computation of `CC` only uses `O(n^2)` operations. The input factorization `C` is updated in place such that on exit `C == CC`. The vector `v` is destroyed during the computation.

`Base.LinAlg.ldltfact` — Function. `ldltfact(A; shift = 0.0, perm=Int[]) -> CHOLMOD.Factor`

Compute the $LDL'$ factorization of a sparse matrix `A`. `A` must be a `SparseMatrixCSC` or a `Symmetric`/`Hermitian` view of a `SparseMatrixCSC`. Note that even if `A` doesn't have the type tag, it must still be symmetric or Hermitian. A fill-reducing permutation is used. `F = ldltfact(A)` is most frequently used to solve systems of equations `A*x = b` with `F\b`. The returned factorization object `F` also supports the methods `diag`, `det`, `logdet`, and `inv`. You can extract individual factors from `F` using `F[:L]`. However, since pivoting is on by default, the factorization is internally represented as `A == P'*L*D*L'*P` with a permutation matrix `P`; using just `L` without accounting for `P` will give incorrect answers. To include the effects of permutation, it is typically preferable to extract "combined" factors like `PtL = F[:PtL]` (the equivalent of `P'*L`) and `LtP = F[:UP]` (the equivalent of `L'*P`). The complete list of supported factors is `:L, :PtL, :D, :UP, :U, :LD, :DU, :PtLD, :DUP`.

Setting the optional `shift` keyword argument computes the factorization of `A+shift*I` instead of `A`. If the `perm` argument is nonempty, it should be a permutation of `1:size(A,1)` giving the ordering to use (instead of CHOLMOD's default AMD ordering).

This method uses the CHOLMOD library from SuiteSparse, which only supports doubles or complex doubles. Input matrices not of those element types will be converted to `SparseMatrixCSC{Float64}` or `SparseMatrixCSC{Complex128}` as appropriate.

Many other functions from CHOLMOD are wrapped but not exported from the `Base.SparseArrays.CHOLMOD` module.
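
A minimal sketch, analogous to the `cholfact` usage above, for solving a sparse symmetric system (arbitrary example matrix):

```
julia> A = sparse([2.0 1.0 0.0; 1.0 2.0 1.0; 0.0 1.0 2.0]);

julia> F = ldltfact(A);

julia> x = F \ ones(3);          # solve A*x = b using the LDL' factorization

julia> norm(A*x - ones(3)) < 1e-10
true
```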

`ldltfact(S::SymTridiagonal) -> LDLt`

Compute an `LDLt` factorization of a real symmetric tridiagonal matrix such that `A = L*Diagonal(d)*L'` where `L` is a unit lower triangular matrix and `d` is a vector. The main use of an `LDLt` factorization `F = ldltfact(A)` is to solve the linear system of equations `Ax = b` with `F\b`.
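
A short sketch with an arbitrary symmetric tridiagonal system:

```
julia> S = SymTridiagonal([3.0, 4.0, 5.0], [1.0, 2.0]);

julia> F = ldltfact(S);

julia> b = [6.0, 7.0, 8.0];

julia> S * (F \ b) ≈ b
true
```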

`Base.LinAlg.ldltfact!` — Function. `ldltfact!(F::Factor, A; shift = 0.0) -> CHOLMOD.Factor`

Compute the $LDL'$ factorization of `A`, reusing the symbolic factorization `F`. `A` must be a `SparseMatrixCSC` or a `Symmetric`/`Hermitian` view of a `SparseMatrixCSC`. Note that even if `A` doesn't have the type tag, it must still be symmetric or Hermitian.

See also `ldltfact`.

This method uses the CHOLMOD library from SuiteSparse, which only supports doubles or complex doubles. Input matrices not of those element types will be converted to `SparseMatrixCSC{Float64}` or `SparseMatrixCSC{Complex128}` as appropriate.

`ldltfact!(S::SymTridiagonal) -> LDLt`

Same as `ldltfact`, but saves space by overwriting the input `A`, instead of creating a copy.

`Base.LinAlg.qr` — Function. `qr(A, pivot=Val{false}; thin::Bool=true) -> Q, R, [p]`

Compute the (pivoted) QR factorization of `A` such that either `A = Q*R` or `A[:,p] = Q*R`. Also see `qrfact`. The default is to compute a thin factorization. Note that `R` is not extended with zeros when the full `Q` is requested.
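
For example, for a tall matrix the thin factorization returns a `Q` with the same dimensions as `A` (arbitrary example values):

```
julia> A = [1.0 2.0; 3.0 4.0; 5.0 6.0];

julia> Q, R = qr(A);

julia> size(Q), size(R)
((3, 2), (2, 2))

julia> Q*R ≈ A
true
```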

`qr(v::AbstractVector) -> w, r`

Computes the polar decomposition of a vector. Returns `w`, a unit vector in the direction of `v`, and `r`, the norm of `v`.

See also `normalize`, `normalize!`, and `LinAlg.qr!`.

**Example**

```
julia> v = [1; 2]
2-element Array{Int64,1}:
1
2
julia> w, r = qr(v)
([0.447214, 0.894427], 2.23606797749979)
julia> w*r == v
true
```

`Base.LinAlg.qr!` — Function. `LinAlg.qr!(v::AbstractVector) -> w, r`

Computes the polar decomposition of a vector. Instead of returning a new vector as `qr(v::AbstractVector)`, this function mutates the input vector `v` in place. Returns `w`, a unit vector in the direction of `v` (this is a mutation of `v`), and `r`, the norm of `v`.

See also `normalize`, `normalize!`, and `qr`.

`Base.LinAlg.qrfact` — Function. `qrfact(A, pivot=Val{false}) -> F`

Compute the QR factorization of the matrix `A`: an orthogonal (or unitary if `A` is complex-valued) matrix `Q`, and an upper triangular matrix `R`, such that $A = QR$.

The returned object `F` stores the factorization in a packed format:

  * if `pivot == Val{true}` then `F` is a `QRPivoted` object,
  * otherwise if the element type of `A` is a BLAS type (`Float32`, `Float64`, `Complex64` or `Complex128`), then `F` is a `QRCompactWY` object,
  * otherwise `F` is a `QR` object.

The individual components of the factorization `F` can be accessed by indexing with a symbol:

  * `F[:Q]`: the orthogonal/unitary matrix `Q`
  * `F[:R]`: the upper triangular matrix `R`
  * `F[:p]`: the permutation vector of the pivot (`QRPivoted` only)
  * `F[:P]`: the permutation matrix of the pivot (`QRPivoted` only)

The following functions are available for the `QR` objects: `inv`, `size`, and `\`. When `A` is rectangular, `\` will return a least squares solution, and if the solution is not unique, the one with smallest norm is returned.

Multiplication with respect to either thin or full `Q` is allowed, i.e. both `F[:Q]*F[:R]` and `F[:Q]*A` are supported. A `Q` matrix can be converted into a regular matrix with `full`, which has a named argument `thin`.

**Example**

```
julia> A = [3.0 -6.0; 4.0 -8.0; 0.0 1.0]
3×2 Array{Float64,2}:
3.0 -6.0
4.0 -8.0
0.0 1.0
julia> F = qrfact(A)
Base.LinAlg.QRCompactWY{Float64,Array{Float64,2}} with factors Q and R:
[-0.6 0.0 0.8; -0.8 0.0 -0.6; 0.0 -1.0 0.0]
[-5.0 10.0; 0.0 -1.0]
julia> F[:Q] * F[:R] == A
true
```

`qrfact` returns multiple types because LAPACK uses several representations that minimize the memory storage requirements of products of Householder elementary reflectors, so that the `Q` and `R` matrices can be stored compactly rather than as two separate dense matrices.

`qrfact(A) -> SPQR.Factorization`

Compute the `QR` factorization of a sparse matrix `A`. A fill-reducing permutation is used. The main application of this type is to solve least squares problems with `\`. The function calls the C library SPQR and a few additional functions from the library are wrapped but not exported.

`Base.LinAlg.qrfact!` — Function. `qrfact!(A, pivot=Val{false})`

`qrfact!` is the same as `qrfact` when `A` is a subtype of `StridedMatrix`, but saves space by overwriting the input `A`, instead of creating a copy. An `InexactError` exception is thrown if the factorization produces a number not representable by the element type of `A`, e.g. for integer types.

`Base.LinAlg.QR` — Type. `QR <: Factorization`

A QR matrix factorization stored in a packed format, typically obtained from `qrfact`. If $A$ is an `m`×`n` matrix, then $A = QR$, where $Q$ is an orthogonal/unitary matrix and $R$ is upper triangular. The matrix $Q$ is stored as a sequence of Householder reflectors $v_i$ and coefficients $\tau_i$ where $Q = \prod_{i=1}^{\min(m,n)} (I - \tau_i v_i v_i')$.

The object has two fields:

  * `factors` is an `m`×`n` matrix. The upper triangular part contains the elements of $R$, that is `R = triu(F.factors)` for a `QR` object `F`. The subdiagonal part contains the reflectors $v_i$ stored in a packed format where $v_i$ is the $i$th column of the matrix `V = eye(m,n) + tril(F.factors,-1)`.
  * `τ` is a vector of length `min(m,n)` containing the coefficients $\tau_i$.

`Base.LinAlg.QRCompactWY` — Type. `QRCompactWY <: Factorization`

A QR matrix factorization stored in a compact blocked format, typically obtained from `qrfact`. If $A$ is an `m`×`n` matrix, then $A = QR$, where $Q$ is an orthogonal/unitary matrix and $R$ is upper triangular. It is similar to the `QR` format except that the orthogonal/unitary matrix $Q$ is stored in *Compact WY* format [Schreiber1989], as a lower trapezoidal matrix $V$ and an upper triangular matrix $T$ where $Q = \prod_{i=1}^{\min(m,n)} (I - \tau_i v_i v_i') = I - V T V'$, such that $v_i$ is the $i$th column of $V$, and $\tau_i$ is the $i$th diagonal element of $T$.

The object has two fields:

  * `factors`, as in the `QR` type, is an `m`×`n` matrix. The upper triangular part contains the elements of $R$, that is `R = triu(F.factors)` for a `QR` object `F`. The subdiagonal part contains the reflectors $v_i$ stored in a packed format such that `V = eye(m,n) + tril(F.factors,-1)`.
  * `T` is a square matrix with `min(m,n)` columns, whose upper triangular part gives the matrix $T$ above (the subdiagonal elements are ignored).

This format should not be confused with the older *WY* representation [Bischof1987].

**[Bischof1987]**

C Bischof and C Van Loan, "The WY representation for products of Householder matrices", SIAM J Sci Stat Comput 8 (1987), s2-s13. doi:10.1137/0908009

**[Schreiber1989]**

R Schreiber and C Van Loan, "A storage-efficient WY representation for products of Householder transformations", SIAM J Sci Stat Comput 10 (1989), 53-57. doi:10.1137/0910005

`Base.LinAlg.QRPivoted` — Type. `QRPivoted <: Factorization`

A QR matrix factorization with column pivoting in a packed format, typically obtained from `qrfact`. If $A$ is an `m`×`n` matrix, then $A P = Q R$, where $P$ is a permutation matrix, $Q$ is an orthogonal/unitary matrix and $R$ is upper triangular. The matrix $Q$ is stored as a sequence of Householder reflectors, $Q = \prod_{i=1}^{\min(m,n)} (I - \tau_i v_i v_i')$.

The object has three fields:

  * `factors` is an `m`×`n` matrix. The upper triangular part contains the elements of $R$, that is `R = triu(F.factors)` for a `QR` object `F`. The subdiagonal part contains the reflectors $v_i$ stored in a packed format where $v_i$ is the $i$th column of the matrix `V = eye(m,n) + tril(F.factors,-1)`.
  * `τ` is a vector of length `min(m,n)` containing the coefficients $\tau_i$.
  * `jpvt` is an integer vector of length `n` corresponding to the permutation $P$.

`Base.LinAlg.lqfact!` — Function. `lqfact!(A) -> LQ`

Compute the LQ factorization of `A`, using the input matrix as a workspace. See also `lq`.

`Base.LinAlg.lqfact` — Function. `lqfact(A) -> LQ`

Compute the LQ factorization of `A`. See also `lq`.

`Base.LinAlg.lq` — Function. `lq(A; [thin=true]) -> L, Q`

Perform an LQ factorization of `A` such that `A = L*Q`. The default is to compute a thin factorization. The LQ factorization is the QR factorization of `A.'`. `L` is not extended with zeros if the full `Q` is requested.
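
A brief sketch (arbitrary example values):

```
julia> A = [1.0 2.0 3.0; 4.0 5.0 6.0];

julia> L, Q = lq(A);

julia> L*Q ≈ A
true
```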

`Base.LinAlg.bkfact` — Function. `bkfact(A, uplo::Symbol=:U, symmetric::Bool=issymmetric(A), rook::Bool=false) -> BunchKaufman`

Compute the Bunch-Kaufman [Bunch1977] factorization of a symmetric or Hermitian matrix `A` and return a `BunchKaufman` object. `uplo` indicates which triangle of matrix `A` to reference. If `symmetric` is `true`, `A` is assumed to be symmetric. If `symmetric` is `false`, `A` is assumed to be Hermitian. If `rook` is `true`, rook pivoting is used. If `rook` is `false`, rook pivoting is not used. The following functions are available for `BunchKaufman` objects: `size`, `\`, `inv`, `issymmetric`, `ishermitian`.

**[Bunch1977]**

J R Bunch and L Kaufman, Some stable methods for calculating inertia and solving symmetric linear systems, Mathematics of Computation 31:137 (1977), 163-179.
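
A small sketch with a symmetric indefinite matrix (arbitrary example values), the case where Bunch-Kaufman is typically preferred over Cholesky:

```
julia> A = [4.0 1.0; 1.0 -2.0];       # symmetric but not positive definite

julia> F = bkfact(A);

julia> b = [1.0, 2.0];

julia> A * (F \ b) ≈ b
true
```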

`Base.LinAlg.bkfact!` — Function. `bkfact!(A, uplo::Symbol=:U, symmetric::Bool=issymmetric(A), rook::Bool=false) -> BunchKaufman`

`bkfact!` is the same as `bkfact`, but saves space by overwriting the input `A`, instead of creating a copy.

`Base.LinAlg.eig` — Function.

```
eig(A::Union{SymTridiagonal, Hermitian, Symmetric}, irange::UnitRange) -> D, V
eig(A::Union{SymTridiagonal, Hermitian, Symmetric}, vl::Real, vu::Real) -> D, V
eig(A, permute::Bool=true, scale::Bool=true) -> D, V
```

Computes eigenvalues (`D`) and eigenvectors (`V`) of `A`. See `eigfact` for details on the `irange`, `vl`, and `vu` arguments (for `SymTridiagonal`, `Hermitian`, and `Symmetric` matrices) and the `permute` and `scale` keyword arguments. The eigenvectors are returned columnwise.

**Example**

```
julia> eig([1.0 0.0 0.0; 0.0 3.0 0.0; 0.0 0.0 18.0])
([1.0, 3.0, 18.0], [1.0 0.0 0.0; 0.0 1.0 0.0; 0.0 0.0 1.0])
```

`eig` is a wrapper around `eigfact`, extracting all parts of the factorization to a tuple; where possible, using `eigfact` is recommended.

`eig(A, B) -> D, V`

Computes generalized eigenvalues (`D`) and vectors (`V`) of `A` with respect to `B`.

`eig` is a wrapper around `eigfact`, extracting all parts of the factorization to a tuple; where possible, using `eigfact` is recommended.

**Example**

```
julia> A = [1 0; 0 -1]
2×2 Array{Int64,2}:
1 0
0 -1
julia> B = [0 1; 1 0]
2×2 Array{Int64,2}:
0 1
1 0
julia> eig(A, B)
(Complex{Float64}[0.0+1.0im, 0.0-1.0im], Complex{Float64}[0.0-1.0im 0.0+1.0im; -1.0-0.0im -1.0+0.0im])
```

`Base.LinAlg.eigvals` — Function. `eigvals(A; permute::Bool=true, scale::Bool=true) -> values`

Returns the eigenvalues of `A`.

For general non-symmetric matrices it is possible to specify how the matrix is balanced before the eigenvalue calculation. The option `permute=true` permutes the matrix to become closer to upper triangular, and `scale=true` scales the matrix by its diagonal elements to make rows and columns more equal in norm. The default is `true` for both options.
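
For instance (checked approximately, since eigenvalue ordering and rounding can vary):

```
julia> A = [2.0 1.0; 1.0 2.0];

julia> λ = eigvals(A);           # eigenvalues of A, here approximately [1.0, 3.0]

julia> sort(λ) ≈ [1.0, 3.0]
true
```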

`eigvals(A, B) -> values`

Computes the generalized eigenvalues of `A` and `B`.

**Example**

```
julia> A = [1 0; 0 -1]
2×2 Array{Int64,2}:
1 0
0 -1
julia> B = [0 1; 1 0]
2×2 Array{Int64,2}:
0 1
1 0
julia> eigvals(A,B)
2-element Array{Complex{Float64},1}:
0.0+1.0im
0.0-1.0im
```

`eigvals(A::Union{SymTridiagonal, Hermitian, Symmetric}, irange::UnitRange) -> values`

Returns the eigenvalues of `A`. It is possible to calculate only a subset of the eigenvalues by specifying a `UnitRange` `irange` covering indices of the sorted eigenvalues, e.g. the 2nd to 8th eigenvalues.

```
julia> A = SymTridiagonal([1.; 2.; 1.], [2.; 3.])
3×3 SymTridiagonal{Float64}:
1.0 2.0 ⋅
2.0 2.0 3.0
⋅ 3.0 1.0
julia> eigvals(A, 2:2)
1-element Array{Float64,1}:
1.0
julia> eigvals(A)
3-element Array{Float64,1}:
-2.14005
1.0
5.14005
```

`eigvals(A::Union{SymTridiagonal, Hermitian, Symmetric}, vl::Real, vu::Real) -> values`

Returns the eigenvalues of `A`. It is possible to calculate only a subset of the eigenvalues by specifying a pair `vl` and `vu` for the lower and upper boundaries of the eigenvalues.

```
julia> A = SymTridiagonal([1.; 2.; 1.], [2.; 3.])
3×3 SymTridiagonal{Float64}:
1.0 2.0 ⋅
2.0 2.0 3.0
⋅ 3.0 1.0
julia> eigvals(A, -1, 2)
1-element Array{Float64,1}:
1.0
julia> eigvals(A)
3-element Array{Float64,1}:
-2.14005
1.0
5.14005
```

`Base.LinAlg.eigvals!` — Function. `eigvals!(A; permute::Bool=true, scale::Bool=true) -> values`

Same as `eigvals`, but saves space by overwriting the input `A`, instead of creating a copy. The option `permute=true` permutes the matrix to become closer to upper triangular, and `scale=true` scales the matrix by its diagonal elements to make rows and columns more equal in norm.

`eigvals!(A, B) -> values`

Same as `eigvals`, but saves space by overwriting the input `A` (and `B`), instead of creating copies.

`eigvals!(A::Union{SymTridiagonal, Hermitian, Symmetric}, irange::UnitRange) -> values`

Same as `eigvals`, but saves space by overwriting the input `A`, instead of creating a copy. `irange` is a range of eigenvalue *indices* to search for - for instance, the 2nd to 8th eigenvalues.

`eigvals!(A::Union{SymTridiagonal, Hermitian, Symmetric}, vl::Real, vu::Real) -> values`

Same as `eigvals`, but saves space by overwriting the input `A`, instead of creating a copy. `vl` is the lower bound of the interval to search for eigenvalues, and `vu` is the upper bound.

`Base.LinAlg.eigmax` — Function. `eigmax(A; permute::Bool=true, scale::Bool=true)`

Returns the largest eigenvalue of `A`. The option `permute=true` permutes the matrix to become closer to upper triangular, and `scale=true` scales the matrix by its diagonal elements to make rows and columns more equal in norm. Note that if the eigenvalues of `A` are complex, this method will fail, since complex numbers cannot be sorted.

**Example**

```
julia> A = [0 im; -im 0]
2×2 Array{Complex{Int64},2}:
0+0im 0+1im
0-1im 0+0im
julia> eigmax(A)
1.0
julia> A = [0 im; -1 0]
2×2 Array{Complex{Int64},2}:
0+0im 0+1im
-1+0im 0+0im
julia> eigmax(A)
ERROR: DomainError:
Stacktrace:
[1] #eigmax#46(::Bool, ::Bool, ::Function, ::Array{Complex{Int64},2}) at ./linalg/eigen.jl:238
[2] eigmax(::Array{Complex{Int64},2}) at ./linalg/eigen.jl:236
```

`Base.LinAlg.eigmin` — Function. `eigmin(A; permute::Bool=true, scale::Bool=true)`

Returns the smallest eigenvalue of `A`. The option `permute=true` permutes the matrix to become closer to upper triangular, and `scale=true` scales the matrix by its diagonal elements to make rows and columns more equal in norm. Note that if the eigenvalues of `A` are complex, this method will fail, since complex numbers cannot be sorted.

**Example**

```
julia> A = [0 im; -im 0]
2×2 Array{Complex{Int64},2}:
0+0im 0+1im
0-1im 0+0im
julia> eigmin(A)
-1.0
julia> A = [0 im; -1 0]
2×2 Array{Complex{Int64},2}:
0+0im 0+1im
-1+0im 0+0im
julia> eigmin(A)
ERROR: DomainError:
Stacktrace:
[1] #eigmin#47(::Bool, ::Bool, ::Function, ::Array{Complex{Int64},2}) at ./linalg/eigen.jl:280
[2] eigmin(::Array{Complex{Int64},2}) at ./linalg/eigen.jl:278
```

`Base.LinAlg.eigvecs` — Function. `eigvecs(A::SymTridiagonal[, eigvals]) -> Matrix`

Returns a matrix `M` whose columns are the eigenvectors of `A`. (The `k`th eigenvector can be obtained from the slice `M[:, k]`.)

If the optional vector of eigenvalues `eigvals` is specified, `eigvecs` returns the specific corresponding eigenvectors.

**Example**

```
julia> A = SymTridiagonal([1.; 2.; 1.], [2.; 3.])
3×3 SymTridiagonal{Float64}:
1.0 2.0 ⋅
2.0 2.0 3.0
⋅ 3.0 1.0
julia> eigvals(A)
3-element Array{Float64,1}:
-2.14005
1.0
5.14005
julia> eigvecs(A)
3×3 Array{Float64,2}:
0.418304 -0.83205 0.364299
-0.656749 -7.39009e-16 0.754109
0.627457 0.5547 0.546448
julia> eigvecs(A, [1.])
3×1 Array{Float64,2}:
0.83205
4.26351e-17
-0.5547
```

`eigvecs(A; permute::Bool=true, scale::Bool=true) -> Matrix`

Returns a matrix `M` whose columns are the eigenvectors of `A`. (The `k`th eigenvector can be obtained from the slice `M[:, k]`.) The `permute` and `scale` keywords are the same as for `eigfact`.

**Example**

```
julia> eigvecs([1.0 0.0 0.0; 0.0 3.0 0.0; 0.0 0.0 18.0])
3×3 Array{Float64,2}:
1.0 0.0 0.0
0.0 1.0 0.0
0.0 0.0 1.0
```

`eigvecs(A, B) -> Matrix`

Returns a matrix `M` whose columns are the generalized eigenvectors of `A` and `B`. (The `k`th eigenvector can be obtained from the slice `M[:, k]`.)

**Example**

```
julia> A = [1 0; 0 -1]
2×2 Array{Int64,2}:
1 0
0 -1
julia> B = [0 1; 1 0]
2×2 Array{Int64,2}:
0 1
1 0
julia> eigvecs(A, B)
2×2 Array{Complex{Float64},2}:
0.0-1.0im 0.0+1.0im
-1.0-0.0im -1.0+0.0im
```

`Base.LinAlg.eigfact` — Function. `eigfact(A; permute::Bool=true, scale::Bool=true) -> Eigen`

Computes the eigenvalue decomposition of `A`, returning an `Eigen` factorization object `F` which contains the eigenvalues in `F[:values]` and the eigenvectors in the columns of the matrix `F[:vectors]`. (The `k`th eigenvector can be obtained from the slice `F[:vectors][:, k]`.)

The following functions are available for `Eigen` objects: `inv`, `det`, and `isposdef`.

For general nonsymmetric matrices it is possible to specify how the matrix is balanced before the eigenvector calculation. The option `permute=true` permutes the matrix to become closer to upper triangular, and `scale=true` scales the matrix by its diagonal elements to make rows and columns more equal in norm. The default is `true` for both options.

**Example**

```
julia> F = eigfact([1.0 0.0 0.0; 0.0 3.0 0.0; 0.0 0.0 18.0])
Base.LinAlg.Eigen{Float64,Float64,Array{Float64,2},Array{Float64,1}}([1.0, 3.0, 18.0], [1.0 0.0 0.0; 0.0 1.0 0.0; 0.0 0.0 1.0])
julia> F[:values]
3-element Array{Float64,1}:
1.0
3.0
18.0
julia> F[:vectors]
3×3 Array{Float64,2}:
1.0 0.0 0.0
0.0 1.0 0.0
0.0 0.0 1.0
```

`eigfact(A, B) -> GeneralizedEigen`

Computes the generalized eigenvalue decomposition of `A` and `B`, returning a `GeneralizedEigen` factorization object `F` which contains the generalized eigenvalues in `F[:values]` and the generalized eigenvectors in the columns of the matrix `F[:vectors]`. (The `k`th generalized eigenvector can be obtained from the slice `F[:vectors][:, k]`.)

`eigfact(A::Union{SymTridiagonal, Hermitian, Symmetric}, irange::UnitRange) -> Eigen`

Computes the eigenvalue decomposition of `A`, returning an `Eigen` factorization object `F` which contains the eigenvalues in `F[:values]` and the eigenvectors in the columns of the matrix `F[:vectors]`. (The `k`th eigenvector can be obtained from the slice `F[:vectors][:, k]`.)

The following functions are available for `Eigen` objects: `inv`, `det`, and `isposdef`.

The `UnitRange` `irange` specifies indices of the sorted eigenvalues to search for.

If `irange` is not `1:n`, where `n` is the dimension of `A`, then the returned factorization will be a *truncated* factorization.
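
A short sketch, requesting only part of the spectrum of a symmetric tridiagonal matrix (the same matrix as in the `eigvals` examples above):

```
julia> A = SymTridiagonal([1.; 2.; 1.], [2.; 3.]);

julia> F = eigfact(A, 1:2);      # the two smallest eigenvalues and their eigenvectors

julia> length(F[:values])
2

julia> size(F[:vectors])
(3, 2)
```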

`eigfact(A::Union{SymTridiagonal, Hermitian, Symmetric}, vl::Real, vu::Real) -> Eigen`

Computes the eigenvalue decomposition of `A`, returning an `Eigen` factorization object `F` which contains the eigenvalues in `F[:values]` and the eigenvectors in the columns of the matrix `F[:vectors]`. (The `k`th eigenvector can be obtained from the slice `F[:vectors][:, k]`.)

The following functions are available for `Eigen` objects: `inv`, `det`, and `isposdef`.

`vl` is the lower bound of the window of eigenvalues to search for, and `vu` is the upper bound.

If [`vl`, `vu`] does not contain all eigenvalues of `A`, then the returned factorization will be a *truncated* factorization.

`Base.LinAlg.eigfact!` — Function. `eigfact!(A, [B])`

Same as `eigfact`, but saves space by overwriting the input `A` (and `B`), instead of creating a copy.

`Base.LinAlg.hessfact` — Function. `hessfact(A) -> Hessenberg`

Compute the Hessenberg decomposition of `A` and return a `Hessenberg` object. If `F` is the factorization object, the unitary matrix can be accessed with `F[:Q]` and the Hessenberg matrix with `F[:H]`. When `Q` is extracted, the resulting type is the `HessenbergQ` object, and may be converted to a regular matrix with `convert(Array, _)` (or `Array(_)` for short).

**Example**

```
julia> A = [4. 9. 7.; 4. 4. 1.; 4. 3. 2.]
3×3 Array{Float64,2}:
4.0 9.0 7.0
4.0 4.0 1.0
4.0 3.0 2.0
julia> F = hessfact(A);
julia> F[:Q] * F[:H] * F[:Q]'
3×3 Array{Float64,2}:
4.0 9.0 7.0
4.0 4.0 1.0
4.0 3.0 2.0
```

`Base.LinAlg.hessfact!` — Function. `hessfact!(A) -> Hessenberg`

`hessfact!` is the same as `hessfact`, but saves space by overwriting the input `A`, instead of creating a copy.

`Base.LinAlg.schurfact` — Function. `schurfact(A::StridedMatrix) -> F::Schur`

Computes the Schur factorization of the matrix `A`. The (quasi) triangular Schur factor can be obtained from the `Schur` object `F` with either `F[:Schur]` or `F[:T]`, and the orthogonal/unitary Schur vectors can be obtained with `F[:vectors]` or `F[:Z]`, such that `A = F[:vectors]*F[:Schur]*F[:vectors]'`. The eigenvalues of `A` can be obtained with `F[:values]`.

**Example**

```
julia> A = [-2. 1. 3.; 2. 1. -1.; -7. 2. 7.]
3×3 Array{Float64,2}:
-2.0 1.0 3.0
2.0 1.0 -1.0
-7.0 2.0 7.0
julia> F = schurfact(A)
Base.LinAlg.Schur{Float64,Array{Float64,2}} with factors T and Z:
[2.0 0.801792 6.63509; -8.55988e-11 2.0 8.08286; 0.0 0.0 1.99999]
[0.577351 0.154299 -0.801784; 0.577346 -0.77152 0.267262; 0.577354 0.617211 0.534522]
and values:
Complex{Float64}[2.0+8.28447e-6im, 2.0-8.28447e-6im, 1.99999+0.0im]
julia> F[:vectors] * F[:Schur] * F[:vectors]'
3×3 Array{Float64,2}:
-2.0 1.0 3.0
2.0 1.0 -1.0
-7.0 2.0 7.0
```

`schurfact(A::StridedMatrix, B::StridedMatrix) -> F::GeneralizedSchur`

Computes the Generalized Schur (or QZ) factorization of the matrices `A` and `B`. The (quasi) triangular Schur factors can be obtained from the `Schur` object `F` with `F[:S]` and `F[:T]`, the left unitary/orthogonal Schur vectors can be obtained with `F[:left]` or `F[:Q]`, and the right unitary/orthogonal Schur vectors can be obtained with `F[:right]` or `F[:Z]`, such that `A=F[:left]*F[:S]*F[:right]'` and `B=F[:left]*F[:T]*F[:right]'`. The generalized eigenvalues of `A` and `B` can be obtained with `F[:alpha]./F[:beta]`.

`Base.LinAlg.schurfact!` — Function. `schurfact!(A::StridedMatrix) -> F::Schur`

Same as `schurfact` but uses the input argument as workspace.

`schurfact!(A::StridedMatrix, B::StridedMatrix) -> F::GeneralizedSchur`

Same as `schurfact` but uses the input matrices `A` and `B` as workspace.

`Base.LinAlg.schur` — Function. `schur(A::StridedMatrix) -> T::Matrix, Z::Matrix, λ::Vector`

Computes the Schur factorization of the matrix `A`. The methods return the (quasi) triangular Schur factor `T` and the orthogonal/unitary Schur vectors `Z` such that `A = Z*T*Z'`. The eigenvalues of `A` are returned in the vector `λ`.

See `schurfact`.

**Example**

```
julia> A = [-2. 1. 3.; 2. 1. -1.; -7. 2. 7.]
3×3 Array{Float64,2}:
-2.0 1.0 3.0
2.0 1.0 -1.0
-7.0 2.0 7.0
julia> T, Z, lambda = schur(A)
([2.0 0.801792 6.63509; -8.55988e-11 2.0 8.08286; 0.0 0.0 1.99999], [0.577351 0.154299 -0.801784; 0.577346 -0.77152 0.267262; 0.577354 0.617211 0.534522], Complex{Float64}[2.0+8.28447e-6im, 2.0-8.28447e-6im, 1.99999+0.0im])
julia> Z * T * Z'
3×3 Array{Float64,2}:
-2.0 1.0 3.0
2.0 1.0 -1.0
-7.0 2.0 7.0
```

`schur(A::StridedMatrix, B::StridedMatrix) -> S::StridedMatrix, T::StridedMatrix, Q::StridedMatrix, Z::StridedMatrix, α::Vector, β::Vector`

See `schurfact`.

`Base.LinAlg.ordschur` — Function. `ordschur(F::Schur, select::Union{Vector{Bool},BitVector}) -> F::Schur`

Reorders the Schur factorization `F` of a matrix `A = Z*T*Z'` according to the logical array `select`, returning the reordered factorization `F` object. The selected eigenvalues appear in the leading diagonal of `F[:Schur]` and the corresponding leading columns of `F[:vectors]` form an orthogonal/unitary basis of the corresponding right invariant subspace. In the real case, a complex conjugate pair of eigenvalues must be either both included or both excluded via `select`.
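
A minimal sketch: select the eigenvalue below 2.5 so it moves to the leading block (arbitrary example matrix whose eigenvalues are 3 and 2):

```
julia> A = [3.0 1.0; 0.0 2.0];

julia> F = schurfact(A);

julia> select = map(λ -> real(λ) < 2.5, F[:values]);

julia> Fo = ordschur(F, select);

julia> Fo[:values][1] ≈ 2.0
true
```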

`ordschur(T::StridedMatrix, Z::StridedMatrix, select::Union{Vector{Bool},BitVector}) -> T::StridedMatrix, Z::StridedMatrix, λ::Vector`

Reorders the Schur factorization of a real matrix `A = Z*T*Z'` according to the logical array `select`, returning the reordered matrices `T` and `Z` as well as the vector of eigenvalues `λ`. The selected eigenvalues appear in the leading diagonal of `T` and the corresponding leading columns of `Z` form an orthogonal/unitary basis of the corresponding right invariant subspace. In the real case, a complex conjugate pair of eigenvalues must be either both included or both excluded via `select`.

`ordschur(F::GeneralizedSchur, select::Union{Vector{Bool},BitVector}) -> F::GeneralizedSchur`

Reorders the Generalized Schur factorization `F` of a matrix pair `(A, B) = (Q*S*Z', Q*T*Z')` according to the logical array `select` and returns a `GeneralizedSchur` object `F`. The selected eigenvalues appear in the leading diagonal of both `F[:S]` and `F[:T]`, and the left and right orthogonal/unitary Schur vectors are also reordered such that `(A, B) = F[:Q]*(F[:S], F[:T])*F[:Z]'` still holds and the generalized eigenvalues of `A` and `B` can still be obtained with `F[:alpha]./F[:beta]`.

`ordschur(S::StridedMatrix, T::StridedMatrix, Q::StridedMatrix, Z::StridedMatrix, select) -> S::StridedMatrix, T::StridedMatrix, Q::StridedMatrix, Z::StridedMatrix, α::Vector, β::Vector`

Reorders the Generalized Schur factorization of a matrix pair `(A, B) = (Q*S*Z', Q*T*Z')` according to the logical array `select` and returns the matrices `S`, `T`, `Q`, `Z` and vectors `α` and `β`. The selected eigenvalues appear in the leading diagonal of both `S` and `T`, and the left and right unitary/orthogonal Schur vectors are also reordered such that `(A, B) = Q*(S, T)*Z'` still holds and the generalized eigenvalues of `A` and `B` can still be obtained with `α./β`.

`Base.LinAlg.ordschur!` — Function. `ordschur!(F::Schur, select::Union{Vector{Bool},BitVector}) -> F::Schur`

Same as `ordschur` but overwrites the factorization `F`.

`ordschur!(T::StridedMatrix, Z::StridedMatrix, select::Union{Vector{Bool},BitVector}) -> T::StridedMatrix, Z::StridedMatrix, λ::Vector`

Same as `ordschur` but overwrites the input arguments.

`ordschur!(F::GeneralizedSchur, select::Union{Vector{Bool},BitVector}) -> F::GeneralizedSchur`

Same as `ordschur` but overwrites the factorization `F`.

`ordschur!(S::StridedMatrix, T::StridedMatrix, Q::StridedMatrix, Z::StridedMatrix, select) -> S::StridedMatrix, T::StridedMatrix, Q::StridedMatrix, Z::StridedMatrix, α::Vector, β::Vector`

Same as `ordschur` but overwrites the input arguments.

`Base.LinAlg.svdfact` — Function. `svdfact(A; thin::Bool=true) -> SVD`

Compute the singular value decomposition (SVD) of `A` and return an `SVD` object.

`U`, `S`, `V` and `Vt` can be obtained from the factorization `F` with `F[:U]`, `F[:S]`, `F[:V]` and `F[:Vt]`, such that `A = U*diagm(S)*Vt`. The algorithm produces `Vt` and hence `Vt` is more efficient to extract than `V`. The singular values in `S` are sorted in descending order.

If `thin=true` (default), a thin SVD is returned. For an $M \times N$ matrix `A`, `U` is $M \times M$ for a full SVD (`thin=false`) and $M \times \min(M, N)$ for a thin SVD.

**Example**

```
julia> A = [1. 0. 0. 0. 2.; 0. 0. 3. 0. 0.; 0. 0. 0. 0. 0.; 0. 2. 0. 0. 0.]
4×5 Array{Float64,2}:
1.0 0.0 0.0 0.0 2.0
0.0 0.0 3.0 0.0 0.0
0.0 0.0 0.0 0.0 0.0
0.0 2.0 0.0 0.0 0.0
julia> F = svdfact(A)
Base.LinAlg.SVD{Float64,Float64,Array{Float64,2}}([0.0 1.0 0.0 0.0; 1.0 0.0 0.0 0.0; 0.0 0.0 0.0 -1.0; 0.0 0.0 1.0 0.0], [3.0, 2.23607, 2.0, 0.0], [-0.0 0.0 … -0.0 0.0; 0.447214 0.0 … 0.0 0.894427; -0.0 1.0 … -0.0 0.0; 0.0 0.0 … 1.0 0.0])
julia> F[:U] * diagm(F[:S]) * F[:Vt]
4×5 Array{Float64,2}:
1.0 0.0 0.0 0.0 2.0
0.0 0.0 3.0 0.0 0.0
0.0 0.0 0.0 0.0 0.0
0.0 2.0 0.0 0.0 0.0
```

`svdfact(A, B) -> GeneralizedSVD`

Compute the generalized SVD of `A` and `B`, returning a `GeneralizedSVD` factorization object `F`, such that `A = F[:U]*F[:D1]*F[:R0]*F[:Q]'` and `B = F[:V]*F[:D2]*F[:R0]*F[:Q]'`.

For an M-by-N matrix `A` and P-by-N matrix `B`,

  * `F[:U]` is a M-by-M orthogonal matrix,
  * `F[:V]` is a P-by-P orthogonal matrix,
  * `F[:Q]` is a N-by-N orthogonal matrix,
  * `F[:R0]` is a (K+L)-by-N matrix whose rightmost (K+L)-by-(K+L) block is nonsingular upper block triangular,
  * `F[:D1]` is a M-by-(K+L) diagonal matrix with 1s in the first K entries,
  * `F[:D2]` is a P-by-(K+L) matrix whose top right L-by-L block is diagonal,

where `K+L` is the effective numerical rank of the matrix `[A; B]`.

The entries of `F[:D1]` and `F[:D2]` are related, as explained in the LAPACK documentation for the generalized SVD and the xGGSVD3 routine which is called underneath (in LAPACK 3.6.0 and newer).

`Base.LinAlg.svdfact!` — Function. `svdfact!(A, thin::Bool=true) -> SVD`

`svdfact!` is the same as `svdfact`, but saves space by overwriting the input `A`, instead of creating a copy.

`svdfact!(A, B) -> GeneralizedSVD`

`svdfact!` is the same as `svdfact`, but modifies the arguments `A` and `B` in-place, instead of making copies.

`Base.LinAlg.svd` — Function. `svd(A; thin::Bool=true) -> U, S, V`

Computes the SVD of `A`, returning `U`, vector `S`, and `V` such that `A == U*diagm(S)*V'`. The singular values in `S` are sorted in descending order.

If `thin=true` (default), a thin SVD is returned. For an $M \times N$ matrix `A`, `U` is $M \times M$ for a full SVD (`thin=false`) and $M \times \min(M, N)$ for a thin SVD.

`svd` is a wrapper around `svdfact`, extracting all parts of the `SVD` factorization to a tuple. Direct use of `svdfact` is therefore more efficient.

**Example**

```
julia> A = [1. 0. 0. 0. 2.; 0. 0. 3. 0. 0.; 0. 0. 0. 0. 0.; 0. 2. 0. 0. 0.]
4×5 Array{Float64,2}:
1.0 0.0 0.0 0.0 2.0
0.0 0.0 3.0 0.0 0.0
0.0 0.0 0.0 0.0 0.0
0.0 2.0 0.0 0.0 0.0
julia> U, S, V = svd(A)
([0.0 1.0 0.0 0.0; 1.0 0.0 0.0 0.0; 0.0 0.0 0.0 -1.0; 0.0 0.0 1.0 0.0], [3.0, 2.23607, 2.0, 0.0], [-0.0 0.447214 -0.0 0.0; 0.0 0.0 1.0 0.0; … ; -0.0 0.0 -0.0 1.0; 0.0 0.894427 0.0 0.0])
julia> U*diagm(S)*V'
4×5 Array{Float64,2}:
1.0 0.0 0.0 0.0 2.0
0.0 0.0 3.0 0.0 0.0
0.0 0.0 0.0 0.0 0.0
0.0 2.0 0.0 0.0 0.0
```

`svd(A, B) -> U, V, Q, D1, D2, R0`

Wrapper around `svdfact` extracting all parts of the factorization to a tuple. Direct use of `svdfact` is therefore generally more efficient. The function returns the generalized SVD of `A` and `B`, returning `U`, `V`, `Q`, `D1`, `D2`, and `R0` such that `A = U*D1*R0*Q'` and `B = V*D2*R0*Q'`.

`Base.LinAlg.svdvals` — Function. `svdvals(A)`

Returns the singular values of `A` in descending order.

**Example**

```
julia> A = [1. 0. 0. 0. 2.; 0. 0. 3. 0. 0.; 0. 0. 0. 0. 0.; 0. 2. 0. 0. 0.]
4×5 Array{Float64,2}:
1.0 0.0 0.0 0.0 2.0
0.0 0.0 3.0 0.0 0.0
0.0 0.0 0.0 0.0 0.0
0.0 2.0 0.0 0.0 0.0
julia> svdvals(A)
4-element Array{Float64,1}:
3.0
2.23607
2.0
0.0
```

`svdvals(A, B)`

Return the generalized singular values from the generalized singular value decomposition of `A` and `B`. See also `svdfact`.

`Base.LinAlg.Givens` — Type. `LinAlg.Givens(i1,i2,c,s) -> G`

A Givens rotation linear operator. The fields `c` and `s` represent the cosine and sine of the rotation angle, respectively. The `Givens` type supports left multiplication `G*A` and conjugated transpose right multiplication `A*G'`. The type doesn't have a `size` and can therefore be multiplied with matrices of arbitrary size as long as `i2<=size(A,1)` for `G*A` or `i2<=size(A,2)` for `A*G'`.

See also: `givens`

`Base.LinAlg.givens` — Function. `givens{T}(f::T, g::T, i1::Integer, i2::Integer) -> (G::Givens, r::T)`

Computes the Givens rotation `G` and scalar `r` such that for any vector `x` where

```
x[i1] = f
x[i2] = g
```

the result of the multiplication `y = G*x` has the property that

```
y[i1] = r
y[i2] = 0
```

See also: `LinAlg.Givens`

`givens(A::AbstractArray, i1::Integer, i2::Integer, j::Integer) -> (G::Givens, r)`

Computes the Givens rotation `G` and scalar `r` such that the result of the multiplication `B = G*A` has the property that

```
B[i1,j] = r
B[i2,j] = 0
```

See also: `LinAlg.Givens`

`givens(x::AbstractVector, i1::Integer, i2::Integer) -> (G::Givens, r)`

Computes the Givens rotation `G` and scalar `r` such that the result of the multiplication `B = G*x` has the property that

```
B[i1] = r
B[i2] = 0
```

See also: `LinAlg.Givens`
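
A short sketch: build a rotation that zeroes the second entry of a vector (arbitrary values, chosen so `r` is the Euclidean norm `5.0`):

```
julia> x = [3.0, 4.0];

julia> G, r = givens(x, 1, 2);

julia> y = G*x;

julia> r ≈ 5.0 && y[1] ≈ 5.0 && abs(y[2]) < 1e-14
true
```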

`Base.LinAlg.triu` — Function. `triu(M)`

Upper triangle of a matrix.

**Example**

```
julia> a = ones(4,4)
4×4 Array{Float64,2}:
1.0 1.0 1.0 1.0
1.0 1.0 1.0 1.0
1.0 1.0 1.0 1.0
1.0 1.0 1.0 1.0
julia> triu(a)
4×4 Array{Float64,2}:
1.0 1.0 1.0 1.0
0.0 1.0 1.0 1.0
0.0 0.0 1.0 1.0
0.0 0.0 0.0 1.0
```

`triu(M, k::Integer)`

Returns the upper triangle of `M` starting from the `k`th superdiagonal.

**Example**

```
julia> a = ones(4,4)
4×4 Array{Float64,2}:
1.0 1.0 1.0 1.0
1.0 1.0 1.0 1.0
1.0 1.0 1.0 1.0
1.0 1.0 1.0 1.0
julia> triu(a,3)
4×4 Array{Float64,2}:
0.0 0.0 0.0 1.0
0.0 0.0 0.0 0.0
0.0 0.0 0.0 0.0
0.0 0.0 0.0 0.0
julia> triu(a,-3)
4×4 Array{Float64,2}:
1.0 1.0 1.0 1.0
1.0 1.0 1.0 1.0
1.0 1.0 1.0 1.0
1.0 1.0 1.0 1.0
```

`Base.LinAlg.triu!` — Function. `triu!(M)`

Upper triangle of a matrix, overwriting `M` in the process. See also `triu`.

`triu!(M, k::Integer)`

Returns the upper triangle of `M` starting from the `k`th superdiagonal, overwriting `M` in the process.

**Example**

```
julia> M = [1 2 3 4 5; 1 2 3 4 5; 1 2 3 4 5; 1 2 3 4 5; 1 2 3 4 5]
5×5 Array{Int64,2}:
1 2 3 4 5
1 2 3 4 5
1 2 3 4 5
1 2 3 4 5
1 2 3 4 5
julia> triu!(M, 1)
5×5 Array{Int64,2}:
0 2 3 4 5
0 0 3 4 5
0 0 0 4 5
0 0 0 0 5
0 0 0 0 0
```

`Base.LinAlg.tril` — Function. `tril(M)`

Lower triangle of a matrix.

**Example**

```
julia> a = ones(4,4)
4×4 Array{Float64,2}:
1.0 1.0 1.0 1.0
1.0 1.0 1.0 1.0
1.0 1.0 1.0 1.0
1.0 1.0 1.0 1.0
julia> tril(a)
4×4 Array{Float64,2}:
1.0 0.0 0.0 0.0
1.0 1.0 0.0 0.0
1.0 1.0 1.0 0.0
1.0 1.0 1.0 1.0
```

`tril(M, k::Integer)`

Returns the lower triangle of `M` starting from the `k`th superdiagonal.

**Example**

```
julia> a = ones(4,4)
4×4 Array{Float64,2}:
1.0 1.0 1.0 1.0
1.0 1.0 1.0 1.0
1.0 1.0 1.0 1.0
1.0 1.0 1.0 1.0
julia> tril(a,3)
4×4 Array{Float64,2}:
1.0 1.0 1.0 1.0
1.0 1.0 1.0 1.0
1.0 1.0 1.0 1.0
1.0 1.0 1.0 1.0
julia> tril(a,-3)
4×4 Array{Float64,2}:
0.0 0.0 0.0 0.0
0.0 0.0 0.0 0.0
0.0 0.0 0.0 0.0
1.0 0.0 0.0 0.0
```

`Base.LinAlg.tril!`

— Function.`tril!(M)`

Lower triangle of a matrix, overwriting `M`

in the process. See also `tril`

.

`tril!(M, k::Integer)`

Returns the lower triangle of `M`

starting from the `k`

th superdiagonal, overwriting `M`

in the process.

**Example**

```
julia> M = [1 2 3 4 5; 1 2 3 4 5; 1 2 3 4 5; 1 2 3 4 5; 1 2 3 4 5]
5×5 Array{Int64,2}:
1 2 3 4 5
1 2 3 4 5
1 2 3 4 5
1 2 3 4 5
1 2 3 4 5
julia> tril!(M, 2)
5×5 Array{Int64,2}:
1 2 3 0 0
1 2 3 4 0
1 2 3 4 5
1 2 3 4 5
1 2 3 4 5
```

`Base.LinAlg.diagind`

— Function.`diagind(M, k::Integer=0)`

A `Range`

giving the indices of the `k`

th diagonal of the matrix `M`

.

**Example**

```
julia> A = [1 2 3; 4 5 6; 7 8 9]
3×3 Array{Int64,2}:
1 2 3
4 5 6
7 8 9
julia> diagind(A,-1)
2:4:6
```

`Base.LinAlg.diag`

— Function.`diag(M, k::Integer=0)`

The `k`

th diagonal of a matrix, as a vector. Use `diagm`

to construct a diagonal matrix.

**Example**

```
julia> A = [1 2 3; 4 5 6; 7 8 9]
3×3 Array{Int64,2}:
1 2 3
4 5 6
7 8 9
julia> diag(A,1)
2-element Array{Int64,1}:
2
6
```

`Base.LinAlg.diagm`

— Function.`diagm(v, k::Integer=0)`

Construct a matrix by placing `v`

on the `k`

th diagonal.

**Example**

```
julia> diagm([1,2,3],1)
4×4 Array{Int64,2}:
0 1 0 0
0 0 2 0
0 0 0 3
0 0 0 0
```

`Base.LinAlg.scale!`

— Function.```
scale!(A, b)
scale!(b, A)
```

Scale an array `A`

by a scalar `b`

overwriting `A`

in-place.

If `A`

is a matrix and `b`

is a vector, then `scale!(A,b)`

scales each column `i`

of `A`

by `b[i]`

(similar to `A*Diagonal(b)`

), while `scale!(b,A)`

scales each row `i`

of `A`

by `b[i]`

(similar to `Diagonal(b)*A`

), again operating in-place on `A`

. An `InexactError`

exception is thrown if the scaling produces a number not representable by the element type of `A`

, e.g. for integer types.

**Example**

```
julia> a = [1 2; 3 4]
2×2 Array{Int64,2}:
1 2
3 4
julia> b = [1; 2]
2-element Array{Int64,1}:
1
2
julia> scale!(a,b)
2×2 Array{Int64,2}:
1 4
3 8
julia> a = [1 2; 3 4];
julia> b = [1; 2];
julia> scale!(b,a)
2×2 Array{Int64,2}:
1 2
6 8
```

`Base.LinAlg.rank`

— Function.`rank(M[, tol::Real])`

Compute the rank of a matrix by counting how many singular values of `M`

have magnitude greater than `tol`

. By default, the value of `tol`

is the largest dimension of `M`

multiplied by the `eps`

of the `eltype`

of `M`

.

**Example**

```
julia> rank(eye(3))
3
julia> rank(diagm([1, 0, 2]))
2
julia> rank(diagm([1, 0.001, 2]), 0.1)
2
julia> rank(diagm([1, 0.001, 2]), 0.00001)
3
```

`Base.LinAlg.norm`

— Function.`norm(A::AbstractArray, p::Real=2)`

Compute the `p`

-norm of a vector or the operator norm of a matrix `A`

, defaulting to the 2-norm.

`norm(A::AbstractVector, p::Real=2)`

For vectors, this is equivalent to `vecnorm`

and equal to:

$\| A \|_p = \left( \sum_{i=1}^n | a_i |^p \right)^{1/p}$

with $a_i$ the entries of $A$ and $n$ its length.

`p`

can assume any numeric value (even though not all values produce a mathematically valid vector norm). In particular, `norm(A, Inf)`

returns the largest value in `abs(A)`

, whereas `norm(A, -Inf)`

returns the smallest.

**Example**

```
julia> v = [3, -2, 6]
3-element Array{Int64,1}:
3
-2
6
julia> norm(v)
7.0
julia> norm(v, Inf)
6.0
```

`norm(A::AbstractMatrix, p::Real=2)`

For matrices, the matrix norm induced by the vector `p`

-norm is used, where valid values of `p`

are `1`

, `2`

, or `Inf`

. (Note that for sparse matrices, `p=2`

is currently not implemented.) Use `vecnorm`

to compute the Frobenius norm.

When `p=1`

, the matrix norm is the maximum absolute column sum of `A`

:

$\| A \|_1 = \max_{1 \leq j \leq n} \sum_{i=1}^m | a_{ij} |$

with $a_{ij}$ the entries of $A$, and $m$ and $n$ its dimensions.

When `p=2`

, the matrix norm is the spectral norm, equal to the largest singular value of `A`

.

When `p=Inf`

, the matrix norm is the maximum absolute row sum of `A`

:

$\| A \|_\infty = \max_{1 \leq i \leq m} \sum_{j=1}^n | a_{ij} |$

**Example**

```
julia> A = [1 -2 -3; 2 3 -1]
2×3 Array{Int64,2}:
1 -2 -3
2 3 -1
julia> norm(A, Inf)
6.0
```

`norm(x::Number, p::Real=2)`

For numbers, return $\left( |x|^p \right)^{1/p}$. This is equivalent to `vecnorm`

.

`norm(A::RowVector, q::Real=2)`

For row vectors, return the $q$-norm of `A`

, which is equivalent to the p-norm with value `p = q/(q-1)`

. They coincide at `p = q = 2`

.

The difference in norm between a vector space and its dual arises to preserve the relationship between duality and the inner product, and the result is consistent with the p-norm of a `1 × n`

matrix.

`Base.LinAlg.vecnorm`

— Function.`vecnorm(A, p::Real=2)`

For any iterable container `A`

(including arrays of any dimension) of numbers (or any element type for which `norm`

is defined), compute the `p`

-norm (defaulting to `p=2`

) as if `A`

were a vector of the corresponding length.

The `p`

-norm is defined as:

$\| A \|_p = \left( \sum_{i=1}^n | a_i |^p \right)^{1/p}$

with $a_i$ the entries of $A$ and $n$ its length.

`p`

can assume any numeric value (even though not all values produce a mathematically valid vector norm). In particular, `vecnorm(A, Inf)`

returns the largest value in `abs(A)`

, whereas `vecnorm(A, -Inf)`

returns the smallest. If `A`

is a matrix and `p=2`

, then this is equivalent to the Frobenius norm.

**Example**

```
julia> vecnorm([1 2 3; 4 5 6; 7 8 9])
16.881943016134134
julia> vecnorm([1 2 3 4 5 6 7 8 9])
16.881943016134134
```

`vecnorm(x::Number, p::Real=2)`

For numbers, return $\left( |x|^p \right) ^{1/p}$.

`Base.LinAlg.normalize!`

— Function.`Base.LinAlg.normalize`

— Function.`normalize(v::AbstractVector, p::Real=2)`

Normalize the vector `v`

so that its `p`

-norm equals unity, i.e. `norm(v, p) == vecnorm(v, p) == 1`

. See also `normalize!`

and `vecnorm`

.

**Examples**

```
julia> a = [1,2,4];
julia> b = normalize(a)
3-element Array{Float64,1}:
0.218218
0.436436
0.872872
julia> norm(b)
1.0
julia> c = normalize(a, 1)
3-element Array{Float64,1}:
0.142857
0.285714
0.571429
julia> norm(c, 1)
1.0
```

`Base.LinAlg.cond`

— Function.`cond(M, p::Real=2)`

Condition number of the matrix `M`

, computed using the operator `p`

-norm. Valid values for `p`

are `1`

, `2`

(default), or `Inf`

.
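
A small illustrative sketch (not from the docstring); for a diagonal matrix the 2-norm condition number is the ratio of the largest to the smallest singular value:

```
julia> cond(eye(3))
1.0
julia> cond([1.0 0.0; 0.0 2.0])
2.0
```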

`Base.LinAlg.condskeel`

— Function.`condskeel(M, [x, p::Real=Inf])`

Skeel condition number $\kappa_S$ of the matrix `M`

, optionally with respect to the vector `x`

, as computed using the operator `p`

-norm. $\left\vert M \right\vert$ denotes the matrix of (entry wise) absolute values of $M$; $\left\vert M \right\vert_{ij} = \left\vert M_{ij} \right\vert$. Valid values for `p`

are `1`

, `2`

and `Inf`

(default).

This quantity is also known in the literature as the Bauer condition number, relative condition number, or componentwise relative condition number.
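
For reference, the standard definitions (added here for clarity; not quoted from the docstring) are $\kappa_S(M, p) = \| \, |M| \, |M^{-1}| \, \|_p$ and $\kappa_S(M, x, p) = \| \, |M| \, |M^{-1}| \, |x| \, \|_p$.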

`Base.LinAlg.trace`

— Function.`trace(M)`

Matrix trace. Sums the diagonal elements of `M`

.

**Example**

```
julia> A = [1 2; 3 4]
2×2 Array{Int64,2}:
1 2
3 4
julia> trace(A)
5
```

`Base.LinAlg.det`

— Function.`det(M)`

Matrix determinant.

**Example**

```
julia> M = [1 0; 2 2]
2×2 Array{Int64,2}:
1 0
2 2
julia> det(M)
2.0
```

`Base.LinAlg.logdet`

— Function.`logdet(M)`

Log of matrix determinant. Equivalent to `log(det(M))`

, but may provide increased accuracy and/or speed.

**Examples**

```
julia> M = [1 0; 2 2]
2×2 Array{Int64,2}:
1 0
2 2
julia> logdet(M)
0.6931471805599453
julia> logdet(eye(3))
0.0
```

`Base.LinAlg.logabsdet`

— Function.`logabsdet(M)`

Log of absolute value of matrix determinant. Equivalent to `(log(abs(det(M))), sign(det(M)))`

, but may provide increased accuracy and/or speed.
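
An illustrative sketch (the matrix is chosen so the determinant is exactly -2; output formatting may differ slightly):

```
julia> A = [-1.0 0.0; 0.0 2.0];
julia> logabsdet(A)
(0.6931471805599453, -1.0)
julia> log(abs(det(A))), sign(det(A))
(0.6931471805599453, -1.0)
```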

`Base.inv`

— Function.`inv(M)`

Matrix inverse. Computes matrix `N`

such that `M * N = I`

, where `I`

is the identity matrix. Computed by solving the left-division `N = M \ I`

.

**Example**

```
julia> M = [2 5; 1 3]
2×2 Array{Int64,2}:
2 5
1 3
julia> N = inv(M)
2×2 Array{Float64,2}:
3.0 -5.0
-1.0 2.0
julia> M*N == N*M == eye(2)
true
```

`Base.LinAlg.pinv`

— Function.`pinv(M[, tol::Real])`

Computes the Moore-Penrose pseudoinverse.

For matrices `M`

with floating point elements, it is convenient to compute the pseudoinverse by inverting only singular values above a given threshold, `tol`

.

The optimal choice of `tol`

varies both with the value of `M`

and the intended application of the pseudoinverse. The default value of `tol`

is `eps(real(float(one(eltype(M)))))*maximum(size(M))`

, which is essentially machine epsilon for the real part of a matrix element multiplied by the larger matrix dimension. For inverting dense ill-conditioned matrices in a least-squares sense, `tol = sqrt(eps(real(float(one(eltype(M))))))`

is recommended.

For more information, see [issue8859], [B96], [S84], [KY88].

**Example**

```
julia> M = [1.5 1.3; 1.2 1.9]
2×2 Array{Float64,2}:
1.5 1.3
1.2 1.9
julia> N = pinv(M)
2×2 Array{Float64,2}:
1.47287 -1.00775
-0.930233 1.16279
julia> M * N
2×2 Array{Float64,2}:
1.0 -2.22045e-16
4.44089e-16 1.0
```

**[issue8859]**

Issue 8859, "Fix least squares", https://github.com/JuliaLang/julia/pull/8859

**[B96]**

Åke Björck, "Numerical Methods for Least Squares Problems", SIAM Press, Philadelphia, 1996, "Other Titles in Applied Mathematics", Vol. 51. doi:10.1137/1.9781611971484

**[S84]**

G. W. Stewart, "Rank Degeneracy", SIAM Journal on Scientific and Statistical Computing, 5(2), 1984, 403-413. doi:10.1137/0905030

**[KY88]**

Konstantinos Konstantinides and Kung Yao, "Statistical analysis of effective singular values in matrix rank determination", IEEE Transactions on Acoustics, Speech and Signal Processing, 36(5), 1988, 757-763. doi:10.1109/29.1585

`Base.LinAlg.nullspace`

— Function.`nullspace(M)`

Basis for nullspace of `M`

.

**Example**

```
julia> M = [1 0 0; 0 1 0; 0 0 0]
3×3 Array{Int64,2}:
1 0 0
0 1 0
0 0 0
julia> nullspace(M)
3×1 Array{Float64,2}:
0.0
0.0
1.0
```

`Base.repmat`

— Function.`repmat(A, m::Integer, n::Integer=1)`

Construct a matrix by repeating the given matrix (or vector) `m`

times in dimension 1 and `n`

times in dimension 2.

**Examples**

```
julia> repmat([1, 2, 3], 2)
6-element Array{Int64,1}:
1
2
3
1
2
3
julia> repmat([1, 2, 3], 2, 3)
6×3 Array{Int64,2}:
1 1 1
2 2 2
3 3 3
1 1 1
2 2 2
3 3 3
```

`Base.repeat`

— Function.`repeat(A::AbstractArray; inner=ntuple(x->1, ndims(A)), outer=ntuple(x->1, ndims(A)))`

Construct an array by repeating the entries of `A`

. The i-th element of `inner`

specifies the number of times that the individual entries of the i-th dimension of `A`

should be repeated. The i-th element of `outer`

specifies the number of times that a slice along the i-th dimension of `A`

should be repeated. If `inner`

or `outer`

are omitted, no repetition is performed.

**Examples**

```
julia> repeat(1:2, inner=2)
4-element Array{Int64,1}:
1
1
2
2
julia> repeat(1:2, outer=2)
4-element Array{Int64,1}:
1
2
1
2
julia> repeat([1 2; 3 4], inner=(2, 1), outer=(1, 3))
4×6 Array{Int64,2}:
1 2 1 2 1 2
1 2 1 2 1 2
3 4 3 4 3 4
3 4 3 4 3 4
```

`Base.kron`

— Function.`kron(A, B)`

Kronecker tensor product of two vectors or two matrices.

**Example**

```
julia> A = [1 2; 3 4]
2×2 Array{Int64,2}:
1 2
3 4
julia> B = [im 1; 1 -im]
2×2 Array{Complex{Int64},2}:
0+1im 1+0im
1+0im 0-1im
julia> kron(A, B)
4×4 Array{Complex{Int64},2}:
0+1im 1+0im 0+2im 2+0im
1+0im 0-1im 2+0im 0-2im
0+3im 3+0im 0+4im 4+0im
3+0im 0-3im 4+0im 0-4im
```

`Base.SparseArrays.blkdiag`

— Function.`blkdiag(A...)`

Concatenate matrices block-diagonally. Currently only implemented for sparse matrices.

**Example**

```
julia> blkdiag(speye(3), 2*speye(2))
5×5 SparseMatrixCSC{Float64,Int64} with 5 stored entries:
[1, 1] = 1.0
[2, 2] = 1.0
[3, 3] = 1.0
[4, 4] = 2.0
[5, 5] = 2.0
```

`Base.LinAlg.linreg`

— Function.`linreg(x, y)`

Perform simple linear regression using Ordinary Least Squares. Returns `a`

and `b`

such that `a + b*x`

is the closest straight line to the given points `(x, y)`

, i.e., such that the squared error between `y`

and `a + b*x`

is minimized.

**Examples:**

```
using PyPlot
x = 1.0:12.0
y = [5.5, 6.3, 7.6, 8.8, 10.9, 11.79, 13.48, 15.02, 17.77, 20.81, 22.0, 22.99]
a, b = linreg(x, y) # Linear regression
plot(x, y, "o") # Plot (x, y) points
plot(x, a + b*x) # Plot line determined by linear regression
```

See also:

`Base.LinAlg.expm`

— Function.`expm(A)`

Compute the matrix exponential of `A`

, defined by

$e^A = \sum_{n=0}^{\infty} \frac{A^n}{n!}.$

For symmetric or Hermitian `A`

, an eigendecomposition (`eigfact`

) is used, otherwise the scaling and squaring algorithm (see [H05]) is chosen.

**[H05]**

Nicholas J. Higham, "The squaring and scaling method for the matrix exponential revisited", SIAM Journal on Matrix Analysis and Applications, 26(4), 2005, 1179-1193. doi:10.1137/090768539

**Example**

```
julia> A = eye(2, 2)
2×2 Array{Float64,2}:
1.0 0.0
0.0 1.0
julia> expm(A)
2×2 Array{Float64,2}:
2.71828 0.0
0.0 2.71828
```

`Base.LinAlg.logm`

— Function.`logm(A{T}::StridedMatrix{T})`

If `A`

has no negative real eigenvalue, compute the principal matrix logarithm of `A`

, i.e. the unique matrix $X$ such that $e^X = A$ and $-\pi < Im(\lambda) < \pi$ for all the eigenvalues $\lambda$ of $X$. If `A`

has nonpositive eigenvalues, a nonprincipal matrix function is returned whenever possible.

If `A`

is symmetric or Hermitian, its eigendecomposition (`eigfact`

) is used, if `A`

is triangular an improved version of the inverse scaling and squaring method is employed (see [AH12] and [AHR13]). For general matrices, the complex Schur form (`schur`

) is computed and the triangular algorithm is used on the triangular factor.

**[AH12]**

Awad H. Al-Mohy and Nicholas J. Higham, "Improved inverse scaling and squaring algorithms for the matrix logarithm", SIAM Journal on Scientific Computing, 34(4), 2012, C153-C169. doi:10.1137/110852553

**[AHR13]**

Awad H. Al-Mohy, Nicholas J. Higham and Samuel D. Relton, "Computing the Fréchet derivative of the matrix logarithm and estimating the condition number", SIAM Journal on Scientific Computing, 35(4), 2013, C394-C410. doi:10.1137/120885991

**Example**

```
julia> A = 2.7182818 * eye(2)
2×2 Array{Float64,2}:
2.71828 0.0
0.0 2.71828
julia> logm(A)
2×2 Symmetric{Float64,Array{Float64,2}}:
1.0 0.0
0.0 1.0
```

`Base.LinAlg.sqrtm`

— Function.`sqrtm(A)`

If `A`

has no negative real eigenvalues, compute the principal matrix square root of `A`

, that is the unique matrix $X$ with eigenvalues having positive real part such that $X^2 = A$. Otherwise, a nonprincipal square root is returned.

If `A`

is symmetric or Hermitian, its eigendecomposition (`eigfact`

) is used to compute the square root. Otherwise, the square root is determined by means of the Björck-Hammarling method [BH83], which computes the complex Schur form (`schur`

) and then the complex square root of the triangular factor.

**[BH83]**

Åke Björck and Sven Hammarling, "A Schur method for the square root of a matrix", Linear Algebra and its Applications, 52-53, 1983, 127-140. doi:10.1016/0024-3795(83)80010-X

**Example**

```
julia> A = [4 0; 0 4]
2×2 Array{Int64,2}:
4 0
0 4
julia> sqrtm(A)
2×2 Array{Float64,2}:
2.0 0.0
0.0 2.0
```

`Base.LinAlg.lyap`

— Function.`lyap(A, C)`

Computes the solution `X`

to the continuous Lyapunov equation `AX + XA' + C = 0`

, where no eigenvalue of `A`

has a zero real part and no two eigenvalues are negative complex conjugates of each other.
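
A minimal sketch (inputs are illustrative, chosen so a solution exists) that checks the defining equation numerically:

```
julia> A = [1.0 0.0; 0.0 2.0]; C = [2.0 0.0; 0.0 4.0];
julia> X = lyap(A, C);
julia> norm(A*X + X*A' + C) < 1e-10   # X satisfies the continuous Lyapunov equation
true
```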

`Base.LinAlg.sylvester`

— Function.`sylvester(A, B, C)`

Computes the solution `X`

to the Sylvester equation `AX + XB + C = 0`

, where `A`

, `B`

and `C`

have compatible dimensions and `A`

and `-B`

have no eigenvalues with equal real part.
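
A minimal sketch along the same lines (illustrative inputs; `A` and `-B` share no eigenvalues with equal real part):

```
julia> A = [1.0 0.0; 0.0 2.0]; B = [3.0 0.0; 0.0 4.0]; C = ones(2, 2);
julia> X = sylvester(A, B, C);
julia> norm(A*X + X*B + C) < 1e-10
true
```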

`Base.LinAlg.issymmetric`

— Function.`issymmetric(A) -> Bool`

Test whether a matrix is symmetric.

**Examples**

```
julia> a = [1 2; 2 -1]
2×2 Array{Int64,2}:
1 2
2 -1
julia> issymmetric(a)
true
julia> b = [1 im; -im 1]
2×2 Array{Complex{Int64},2}:
1+0im 0+1im
0-1im 1+0im
julia> issymmetric(b)
false
```

`Base.LinAlg.isposdef`

— Function.`isposdef(A) -> Bool`

Test whether a matrix is positive definite.

**Example**

```
julia> A = [1 2; 2 50]
2×2 Array{Int64,2}:
1 2
2 50
julia> isposdef(A)
true
```

`Base.LinAlg.isposdef!`

— Function.`isposdef!(A) -> Bool`

Test whether a matrix is positive definite, overwriting `A`

in the process.

**Example**

```
julia> A = [1. 2.; 2. 50.];
julia> isposdef!(A)
true
julia> A
2×2 Array{Float64,2}:
1.0 2.0
2.0 6.78233
```

`Base.LinAlg.istril`

— Function.`istril(A) -> Bool`

Test whether a matrix is lower triangular.

**Examples**

```
julia> a = [1 2; 2 -1]
2×2 Array{Int64,2}:
1 2
2 -1
julia> istril(a)
false
julia> b = [1 0; -im -1]
2×2 Array{Complex{Int64},2}:
1+0im 0+0im
0-1im -1+0im
julia> istril(b)
true
```

`Base.LinAlg.istriu`

— Function.`istriu(A) -> Bool`

Test whether a matrix is upper triangular.

**Examples**

```
julia> a = [1 2; 2 -1]
2×2 Array{Int64,2}:
1 2
2 -1
julia> istriu(a)
false
julia> b = [1 im; 0 -1]
2×2 Array{Complex{Int64},2}:
1+0im 0+1im
0+0im -1+0im
julia> istriu(b)
true
```

`Base.LinAlg.isdiag`

— Function.`isdiag(A) -> Bool`

Test whether a matrix is diagonal.

**Examples**

```
julia> a = [1 2; 2 -1]
2×2 Array{Int64,2}:
1 2
2 -1
julia> isdiag(a)
false
julia> b = [im 0; 0 -im]
2×2 Array{Complex{Int64},2}:
0+1im 0+0im
0+0im 0-1im
julia> isdiag(b)
true
```

`Base.LinAlg.ishermitian`

— Function.`ishermitian(A) -> Bool`

Test whether a matrix is Hermitian.

**Examples**

```
julia> a = [1 2; 2 -1]
2×2 Array{Int64,2}:
1 2
2 -1
julia> ishermitian(a)
true
julia> b = [1 im; -im 1]
2×2 Array{Complex{Int64},2}:
1+0im 0+1im
0-1im 1+0im
julia> ishermitian(b)
true
```

`Base.LinAlg.RowVector`

— Type.`RowVector(vector)`

A lazy-view wrapper of an `AbstractVector`

, which turns a length-`n`

vector into a `1×n`

shaped row vector and represents the transpose of a vector (the elements are also transposed recursively). This type is usually constructed (and unwrapped) via the `transpose`

function or `.'`

operator (or related `ctranspose`

or `'`

operator).

By convention, a vector can be multiplied by a matrix on its left (`A * v`

) whereas a row vector can be multiplied by a matrix on its right (such that `v.' * A = (A.' * v).'`

). It differs from a `1×n`

-sized matrix by the facts that its transpose returns a vector and the inner product `v1.' * v2`

returns a scalar, but will otherwise behave similarly.
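
A brief sketch of this behavior (illustrative values; the printed type parameters may differ):

```
julia> v = [1, 2, 3];
julia> r = v.'
1×3 RowVector{Int64,Array{Int64,1}}:
1 2 3
julia> r * v    # inner product of a row vector and a vector is a scalar
14
julia> r.'      # transposing a row vector gives back a vector
3-element Array{Int64,1}:
1
2
3
```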

`Base.LinAlg.ConjArray`

— Type.`ConjArray(array)`

A lazy-view wrapper of an `AbstractArray`

, taking the elementwise complex conjugate. This type is usually constructed (and unwrapped) via the `conj`

function (or related `ctranspose`

), but currently this is the default behavior for `RowVector`

only. For other arrays, the `ConjArray`

constructor can be used directly.

**Examples**

```
julia> [1+im, 1-im]'
1×2 RowVector{Complex{Int64},ConjArray{Complex{Int64},1,Array{Complex{Int64},1}}}:
1-1im 1+1im
julia> ConjArray([1+im 0; 0 1-im])
2×2 ConjArray{Complex{Int64},2,Array{Complex{Int64},2}}:
1-1im 0+0im
0+0im 1+1im
```

`Base.transpose`

— Function.`transpose(A::AbstractMatrix)`

The transposition operator (`.'`

).

**Example**

```
julia> A = [1 2 3; 4 5 6; 7 8 9]
3×3 Array{Int64,2}:
1 2 3
4 5 6
7 8 9
julia> transpose(A)
3×3 Array{Int64,2}:
1 4 7
2 5 8
3 6 9
```

`transpose(v::AbstractVector)`

The transposition operator (`.'`

).

**Example**

```
julia> v = [1,2,3]
3-element Array{Int64,1}:
1
2
3
julia> transpose(v)
1×3 RowVector{Int64,Array{Int64,1}}:
1 2 3
```

`Base.LinAlg.transpose!`

— Function.`transpose!(dest,src)`

Transpose array `src`

and store the result in the preallocated array `dest`

, which should have a size corresponding to `(size(src,2),size(src,1))`

. No in-place transposition is supported and unexpected results will happen if `src`

and `dest`

have overlapping memory regions.
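
A small sketch (the preallocated `dest` here is illustrative):

```
julia> src = [1 2 3; 4 5 6];
julia> dest = Array{Int}(3, 2);   # preallocated output with the transposed size
julia> transpose!(dest, src)
3×2 Array{Int64,2}:
1 4
2 5
3 6
```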

`Base.ctranspose`

— Function.`ctranspose(A)`

The conjugate transposition operator (`'`

).

**Example**

```
julia> A = [3+2im 9+2im; 8+7im 4+6im]
2×2 Array{Complex{Int64},2}:
3+2im 9+2im
8+7im 4+6im
julia> ctranspose(A)
2×2 Array{Complex{Int64},2}:
3-2im 8-7im
9-2im 4-6im
```

`Base.LinAlg.ctranspose!`

— Function.`ctranspose!(dest,src)`

Conjugate transpose array `src`

and store the result in the preallocated array `dest`

, which should have a size corresponding to `(size(src,2),size(src,1))`

. No in-place transposition is supported and unexpected results will happen if `src`

and `dest`

have overlapping memory regions.

`Base.LinAlg.eigs`

— Method.`eigs(A; nev=6, ncv=max(20,2*nev+1), which=:LM, tol=0.0, maxiter=300, sigma=nothing, ritzvec=true, v0=zeros((0,))) -> (d,[v,],nconv,niter,nmult,resid)`

Computes eigenvalues `d`

of `A`

using implicitly restarted Lanczos or Arnoldi iterations for real symmetric or general nonsymmetric matrices respectively.

The following keyword arguments are supported:

* `nev`: Number of eigenvalues
* `ncv`: Number of Krylov vectors used in the computation; should satisfy `nev+1 <= ncv <= n` for real symmetric problems and `nev+2 <= ncv <= n` for other problems, where `n` is the size of the input matrix `A`. The default is `ncv = max(20,2*nev+1)`. Note that these restrictions limit the input matrix `A` to be of dimension at least 2.
* `which`: type of eigenvalues to compute. See the note below.

`which` | type of eigenvalues |
---|---|

`:LM` | eigenvalues of largest magnitude (default) |

`:SM` | eigenvalues of smallest magnitude |

`:LR` | eigenvalues of largest real part |

`:SR` | eigenvalues of smallest real part |

`:LI` | eigenvalues of largest imaginary part (nonsymmetric or complex `A` only) |

`:SI` | eigenvalues of smallest imaginary part (nonsymmetric or complex `A` only) |

`:BE` | compute half of the eigenvalues from each end of the spectrum, biased in favor of the high end. (real symmetric `A` only) |

* `tol`: parameter defining the relative tolerance for convergence of Ritz values (eigenvalue estimates). A Ritz value $θ$ is considered converged when its associated residual is less than or equal to the product of `tol` and $max(ɛ^{2/3}, |θ|)$, where `ɛ = eps(real(eltype(A)))/2` is LAPACK's machine epsilon. The residual associated with $θ$ and its corresponding Ritz vector $v$ is defined as the norm $||Av - vθ||$. The specified value of `tol` should be positive; otherwise, it is ignored and $ɛ$ is used instead. Default: $ɛ$.
* `maxiter`: Maximum number of iterations (default = 300)
* `sigma`: Specifies the level shift used in inverse iteration. If `nothing` (default), defaults to ordinary (forward) iterations. Otherwise, find eigenvalues close to `sigma` using shift and invert iterations.
* `ritzvec`: Returns the Ritz vectors `v` (eigenvectors) if `true`
* `v0`: starting vector from which to start the iterations

`eigs`

returns the `nev`

requested eigenvalues in `d`

, the corresponding Ritz vectors `v`

(only if `ritzvec=true`

), the number of converged eigenvalues `nconv`

, the number of iterations `niter`

and the number of matrix vector multiplications `nmult`

, as well as the final residual vector `resid`

.

**Example**

```
julia> A = spdiagm(1:4);
julia> λ, ϕ = eigs(A, nev = 2);
julia> λ
2-element Array{Float64,1}:
4.0
3.0
```

The `sigma`

and `which`

keywords interact: the description of eigenvalues searched for by `which`

do *not* necessarily refer to the eigenvalues of `A`

, but rather the linear operator constructed by the specification of the iteration mode implied by `sigma`

.

`sigma` | iteration mode | `which` refers to eigenvalues of |
---|---|---|

`nothing` | ordinary (forward) | $A$ |

real or complex | inverse with level shift `sigma` | $(A - \sigma I )^{-1}$ |

Although `tol`

has a default value, the best choice depends strongly on the matrix `A`

. We recommend that users _always_ specify a value for `tol`

which suits their specific needs.

For details of how the errors in the computed eigenvalues are estimated, see:

B. N. Parlett, "The Symmetric Eigenvalue Problem", SIAM: Philadelphia, 2/e (1998), Ch. 13.2, "Accessing Accuracy in Lanczos Problems", pp. 290-292 ff.

R. B. Lehoucq and D. C. Sorensen, "Deflation Techniques for an Implicitly Restarted Arnoldi Iteration", SIAM Journal on Matrix Analysis and Applications (1996), 17(4), 789–821. doi:10.1137/S0895479895281484

`Base.LinAlg.eigs`

— Method.`eigs(A, B; nev=6, ncv=max(20,2*nev+1), which=:LM, tol=0.0, maxiter=300, sigma=nothing, ritzvec=true, v0=zeros((0,))) -> (d,[v,],nconv,niter,nmult,resid)`

Computes generalized eigenvalues `d`

of `A`

and `B`

using implicitly restarted Lanczos or Arnoldi iterations for real symmetric or general nonsymmetric matrices respectively.

The following keyword arguments are supported:

* `nev`: Number of eigenvalues
* `ncv`: Number of Krylov vectors used in the computation; should satisfy `nev+1 <= ncv <= n` for real symmetric problems and `nev+2 <= ncv <= n` for other problems, where `n` is the size of the input matrices `A` and `B`. The default is `ncv = max(20,2*nev+1)`. Note that these restrictions limit the input matrix `A` to be of dimension at least 2.
* `which`: type of eigenvalues to compute. See the note below.

`which` | type of eigenvalues |
---|---|

`:LM` | eigenvalues of largest magnitude (default) |

`:SM` | eigenvalues of smallest magnitude |

`:LR` | eigenvalues of largest real part |

`:SR` | eigenvalues of smallest real part |

`:LI` | eigenvalues of largest imaginary part (nonsymmetric or complex `A` only) |

`:SI` | eigenvalues of smallest imaginary part (nonsymmetric or complex `A` only) |

`:BE` | compute half of the eigenvalues from each end of the spectrum, biased in favor of the high end. (real symmetric `A` only) |

* `tol`: relative tolerance used in the convergence criterion for eigenvalues, similar to `tol` in the `eigs(A)` method for the ordinary eigenvalue problem, but effectively for the eigenvalues of $B^{-1} A$ instead of $A$. See the documentation for the ordinary eigenvalue problem in `eigs(A)` and the accompanying note about `tol`.
* `maxiter`: Maximum number of iterations (default = 300)
* `sigma`: Specifies the level shift used in inverse iteration. If `nothing` (default), defaults to ordinary (forward) iterations. Otherwise, find eigenvalues close to `sigma` using shift and invert iterations.
* `ritzvec`: Returns the Ritz vectors `v` (eigenvectors) if `true`
* `v0`: starting vector from which to start the iterations

`eigs`

returns the `nev`

requested eigenvalues in `d`

, the corresponding Ritz vectors `v`

(only if `ritzvec=true`

), the number of converged eigenvalues `nconv`

, the number of iterations `niter`

and the number of matrix vector multiplications `nmult`

, as well as the final residual vector `resid`

.

**Example**

```
julia> A = speye(4, 4); B = spdiagm(1:4);
julia> λ, ϕ = eigs(A, B, nev = 2);
julia> λ
2-element Array{Float64,1}:
1.0
0.5
```

The `sigma`

and `which`

keywords interact: the description of eigenvalues searched for by `which`

do *not* necessarily refer to the eigenvalue problem $Av = Bv\lambda$, but rather the linear operator constructed by the specification of the iteration mode implied by `sigma`

.

`sigma` | iteration mode | `which` refers to the problem |
---|---|---|

`nothing` | ordinary (forward) | $Av = Bv\lambda$ |

real or complex | inverse with level shift `sigma` | $(A - \sigma B )^{-1}B = v\nu$ |

`Base.LinAlg.svds`

— Function.`svds(A; nsv=6, ritzvec=true, tol=0.0, maxiter=1000, ncv=2*nsv, u0=zeros((0,)), v0=zeros((0,))) -> (SVD([left_sv,] s, [right_sv,]), nconv, niter, nmult, resid)`

Computes the largest singular values `s`

of `A`

using implicitly restarted Lanczos iterations derived from `eigs`

.

**Inputs**

* `A`: Linear operator whose singular values are desired. `A` may be represented as a subtype of `AbstractArray`, e.g., a sparse matrix, or any other type supporting the four methods `size(A)`, `eltype(A)`, `A * vector`, and `A' * vector`.
* `nsv`: Number of singular values. Default: 6.
* `ritzvec`: If `true`, return the left and right singular vectors `left_sv` and `right_sv`. If `false`, omit the singular vectors. Default: `true`.
* `tol`: tolerance, see `eigs`.
* `maxiter`: Maximum number of iterations, see `eigs`. Default: 1000.
* `ncv`: Maximum size of the Krylov subspace, see `eigs` (there called `nev`). Default: `2*nsv`.
* `u0`: Initial guess for the first left Krylov vector. It may have length `m` (the first dimension of `A`), or 0.
* `v0`: Initial guess for the first right Krylov vector. It may have length `n` (the second dimension of `A`), or 0.

**Outputs**

* `svd`: An `SVD` object containing the left singular vectors, the requested values, and the right singular vectors. If `ritzvec = false`, the left and right singular vectors will be empty.
* `nconv`: Number of converged singular values.
* `niter`: Number of iterations.
* `nmult`: Number of matrix–vector products used.
* `resid`: Final residual vector.

**Example**

```
julia> A = spdiagm(1:4);
julia> s = svds(A, nsv = 2)[1];
julia> s[:S]
2-element Array{Float64,1}:
4.0
3.0
```

`svds(A)`

is formally equivalent to calling `eigs`

to perform implicitly restarted Lanczos tridiagonalization on the Hermitian matrix $\begin{pmatrix} 0 & A^\prime \\ A & 0 \end{pmatrix}$, whose eigenvalues are plus and minus the singular values of $A$.

`Base.LinAlg.peakflops`

— Function.`peakflops(n::Integer=2000; parallel::Bool=false)`

`peakflops`

computes the peak flop rate of the computer by using double precision `gemm!`

. By default, if no arguments are specified, it multiplies a matrix of size `n x n`

, where `n = 2000`

. If the underlying BLAS is using multiple threads, higher flop rates are realized. The number of BLAS threads can be set with `BLAS.set_num_threads(n)`

.

If the keyword argument `parallel`

is set to `true`

, `peakflops`

is run in parallel on all the worker processors. The flop rate of the entire parallel computer is returned. When running in parallel, only 1 BLAS thread is used. The argument `n`

still refers to the size of the problem that is solved on each processor.

## Low-level matrix operations

Matrix operations involving transpositions, such as `A' \ B`

are converted by the Julia parser into calls to specially named functions like `Ac_ldiv_B`

. If you want to overload these operations for your own types, then it is useful to know the names of these functions.

Also, in many cases there are in-place versions of matrix operations that allow you to supply a pre-allocated output vector or matrix. This is useful when optimizing critical code in order to avoid the overhead of repeated allocations. These in-place operations are suffixed with `!`

below (e.g. `A_mul_B!`

) according to the usual Julia convention.

`Base.LinAlg.A_ldiv_B!`

— Function.`A_ldiv_B!([Y,] A, B) -> Y`

Compute `A \ B`

in-place and store the result in `Y`

, returning the result. If only two arguments are passed, then `A_ldiv_B!(A, B)`

overwrites `B`

with the result.

The argument `A`

should *not* be a matrix. Rather, instead of matrices it should be a factorization object (e.g. produced by `factorize`

or `cholfact`

). The reason for this is that factorization itself is both expensive and typically allocates memory (although it can also be done in-place via, e.g., `lufact!`

), and performance-critical situations requiring `A_ldiv_B!`

usually also require fine-grained control over the factorization of `A`

.
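
A hedged sketch of the intended pattern (values are illustrative): factorize once, then solve in-place.

```
julia> A = [4.0 2.0; 2.0 3.0]; b = [6.0, 5.0];
julia> F = lufact(A);      # factor A once; the factorization can be reused
julia> A_ldiv_B!(F, b);    # overwrites b with the solution of A \ b
julia> b
2-element Array{Float64,1}:
1.0
1.0
```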

`Base.A_ldiv_Bc`

— Function.`A_ldiv_Bc(A, B)`

For matrices or vectors $A$ and $B$, calculates $A$ \ $Bᴴ$.

`Base.A_ldiv_Bt`

— Function.`A_ldiv_Bt(A, B)`

For matrices or vectors $A$ and $B$, calculates $A$ \ $Bᵀ$.

`Base.LinAlg.A_mul_B!`

— Function.`A_mul_B!(Y, A, B) -> Y`

Calculates the matrix-matrix or matrix-vector product $A⋅B$ and stores the result in `Y`

, overwriting the existing value of `Y`

. Note that `Y`

must not be aliased with either `A`

or `B`

.

**Example**

```
julia> A=[1.0 2.0; 3.0 4.0]; B=[1.0 1.0; 1.0 1.0]; Y = similar(B); A_mul_B!(Y, A, B);
julia> Y
2×2 Array{Float64,2}:
3.0 3.0
7.0 7.0
```

`Base.A_mul_Bc`

— Function.`A_mul_Bc(A, B)`

For matrices or vectors $A$ and $B$, calculates $A⋅Bᴴ$.

`Base.A_mul_Bt`

— Function.`A_mul_Bt(A, B)`

For matrices or vectors $A$ and $B$, calculates $A⋅Bᵀ$.

`Base.A_rdiv_Bc`

— Function.`A_rdiv_Bc(A, B)`

For matrices or vectors $A$ and $B$, calculates $A / Bᴴ$.

`Base.A_rdiv_Bt`

— Function.`A_rdiv_Bt(A, B)`

For matrices or vectors $A$ and $B$, calculates $A / Bᵀ$.

`Base.Ac_ldiv_B`

— Function.`Ac_ldiv_B(A, B)`

For matrices or vectors $A$ and $B$, calculates $Aᴴ$ \ $B$.

`Base.LinAlg.Ac_ldiv_B!`

— Function.`Ac_ldiv_B!([Y,] A, B) -> Y`

Similar to `A_ldiv_B!`

, but return $Aᴴ$ \ $B$, computing the result in-place in `Y`

(or overwriting `B`

if `Y`

is not supplied).

`Base.Ac_ldiv_Bc`

— Function.`Ac_ldiv_Bc(A, B)`

For matrices or vectors $A$ and $B$, calculates $Aᴴ$ \ $Bᴴ$.

`Base.Ac_mul_B`

— Function.`Ac_mul_B(A, B)`

For matrices or vectors $A$ and $B$, calculates $Aᴴ⋅B$.

`Base.Ac_mul_Bc`

— Function.`Ac_mul_Bc(A, B)`

For matrices or vectors $A$ and $B$, calculates $Aᴴ Bᴴ$.

`Base.Ac_rdiv_B`

— Function.`Ac_rdiv_B(A, B)`

For matrices or vectors $A$ and $B$, calculates $Aᴴ / B$.

`Base.Ac_rdiv_Bc`

— Function.`Ac_rdiv_Bc(A, B)`

For matrices or vectors $A$ and $B$, calculates $Aᴴ / Bᴴ$.

`Base.At_ldiv_B`

— Function.`At_ldiv_B(A, B)`

For matrices or vectors $A$ and $B$, calculates $Aᵀ$ \ $B$.

`Base.LinAlg.At_ldiv_B!`

— Function.`At_ldiv_B!([Y,] A, B) -> Y`

Similar to `A_ldiv_B!`

, but return $Aᵀ$ \ $B$, computing the result in-place in `Y`

(or overwriting `B`

if `Y`

is not supplied).

`Base.At_ldiv_Bt`

— Function.`At_ldiv_Bt(A, B)`

For matrices or vectors $A$ and $B$, calculates $Aᵀ$ \ $Bᵀ$.

`Base.At_mul_B`

— Function.`At_mul_B(A, B)`

For matrices or vectors $A$ and $B$, calculates $Aᵀ⋅B$.

`Base.At_mul_Bt`

— Function.`At_mul_Bt(A, B)`

For matrices or vectors $A$ and $B$, calculates $Aᵀ⋅Bᵀ$.

`Base.At_rdiv_B`

— Function.`At_rdiv_B(A, B)`

For matrices or vectors $A$ and $B$, calculates $Aᵀ / B$.

`Base.At_rdiv_Bt`

— Function.`At_rdiv_Bt(A, B)`

For matrices or vectors $A$ and $B$, calculates $Aᵀ / Bᵀ$.

## BLAS Functions

In Julia (as in much of scientific computation), dense linear-algebra operations are based on the LAPACK library, which in turn is built on top of basic linear-algebra building-blocks known as the BLAS. There are highly optimized implementations of BLAS available for every computer architecture, and sometimes in high-performance linear algebra routines it is useful to call the BLAS functions directly.

`Base.LinAlg.BLAS`

provides wrappers for some of the BLAS functions. Those BLAS functions that overwrite one of the input arrays have names ending in `'!'`

. Usually, a BLAS function has four methods defined, for `Float64`

, `Float32`

, `Complex128`

, and `Complex64`

arrays.

### BLAS Character Arguments

Many BLAS functions accept arguments that determine whether to transpose an argument (`trans`

), which triangle of a matrix to reference (`uplo`

or `ul`

), whether the diagonal of a triangular matrix can be assumed to be all ones (`dA`

) or which side of a matrix multiplication the input argument belongs on (`side`

). The possibilities are:

#### Multiplication Order

`side` | Meaning |
---|---|

`'L'` | The argument goes on the left side of a matrix-matrix operation. |

`'R'` | The argument goes on the right side of a matrix-matrix operation. |

#### Triangle Referencing

`uplo` /`ul` | Meaning |
---|---|

`'U'` | Only the upper triangle of the matrix will be used. |

`'L'` | Only the lower triangle of the matrix will be used. |

#### Transposition Operation

`trans` /`tX` | Meaning |
---|---|

`'N'` | The input matrix `X` is not transposed or conjugated. |

`'T'` | The input matrix `X` will be transposed. |

`'C'` | The input matrix `X` will be conjugated and transposed. |

#### Unit Diagonal

`diag` /`dX` | Meaning |
---|---|

`'N'` | The diagonal values of the matrix `X` will be read. |

`'U'` | The diagonal of the matrix `X` is assumed to be all ones. |

`Base.LinAlg.BLAS.dotu`

— Function.`dotu(n, X, incx, Y, incy)`

Dot function for two complex vectors consisting of `n`

elements of array `X`

with stride `incx`

and `n`

elements of array `Y`

with stride `incy`

.

**Example:**

```
julia> Base.BLAS.dotu(10, im*ones(10), 1, complex.(ones(20), ones(20)), 2)
-10.0 + 10.0im
```

`Base.LinAlg.BLAS.dotc`

— Function.`dotc(n, X, incx, U, incy)`

Dot function for two complex vectors, consisting of `n`

elements of array `X`

with stride `incx`

and `n`

elements of array `U`

with stride `incy`

, conjugating the first vector.

**Example:**

```
julia> Base.BLAS.dotc(10, im*ones(10), 1, complex.(ones(20), ones(20)), 2)
10.0 - 10.0im
```

`Base.LinAlg.BLAS.blascopy!`

— Function.`blascopy!(n, X, incx, Y, incy)`

Copy `n`

elements of array `X`

with stride `incx`

to array `Y`

with stride `incy`

. Returns `Y`

.
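
An illustrative sketch (not from the docstring), copying every second element of `X` into `Y`:

```
julia> X = collect(1.0:6.0); Y = zeros(3);
julia> Base.BLAS.blascopy!(3, X, 2, Y, 1)
3-element Array{Float64,1}:
1.0
3.0
5.0
```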

`Base.LinAlg.BLAS.nrm2`

— Function.`nrm2(n, X, incx)`

2-norm of a vector consisting of `n`

elements of array `X`

with stride `incx`

.

**Example:**

```
julia> Base.BLAS.nrm2(4, ones(8), 2)
2.0
julia> Base.BLAS.nrm2(1, ones(8), 2)
1.0
```

`Base.LinAlg.BLAS.asum`

— Function.`asum(n, X, incx)`

Sum of the absolute values of the first `n`

elements of array `X`

with stride `incx`

.

**Example:**

```
julia> Base.BLAS.asum(5, im*ones(10), 2)
5.0
julia> Base.BLAS.asum(2, im*ones(10), 5)
2.0
```

`Base.LinAlg.axpy!`

— Function.`axpy!(a, X, Y)`

Overwrite `Y`

with `a*X + Y`

, where `a`

is a scalar. Returns `Y`

.

**Example:**

```
julia> x = [1; 2; 3];
julia> y = [4; 5; 6];
julia> Base.BLAS.axpy!(2, x, y)
3-element Array{Int64,1}:
6
9
12
```

`Base.LinAlg.BLAS.scal!`

— Function.`scal!(n, a, X, incx)`

Overwrite `X`

with `a*X`

for the first `n`

elements of array `X`

with stride `incx`

. Returns `X`

.

`Base.LinAlg.BLAS.scal`

— Function.`scal(n, a, X, incx)`

Returns `X`

scaled by `a`

for the first `n`

elements of array `X`

with stride `incx`

.
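
An illustrative sketch (values chosen for clarity), scaling every second element in-place:

```
julia> X = ones(6);
julia> Base.BLAS.scal!(3, 2.0, X, 2)   # scale 3 elements of X with stride 2 by 2.0
6-element Array{Float64,1}:
2.0
1.0
2.0
1.0
2.0
1.0
```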

`Base.LinAlg.BLAS.ger!`

— Function.`ger!(alpha, x, y, A)`

Rank-1 update of the matrix `A`

with vectors `x`

and `y`

as `alpha*x*y' + A`

.
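
A minimal sketch (illustrative values) of the rank-1 update `alpha*x*y' + A`:

```
julia> A = zeros(2, 2); x = [1.0, 2.0]; y = [3.0, 4.0];
julia> Base.BLAS.ger!(1.0, x, y, A)
2×2 Array{Float64,2}:
3.0 4.0
6.0 8.0
```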

`Base.LinAlg.BLAS.syr!`

— Function.`syr!(uplo, alpha, x, A)`

Rank-1 update of the symmetric matrix `A`

with vector `x`

as `alpha*x*x.' + A`

. `uplo`

controls which triangle of `A`

is updated. Returns `A`

.

`Base.LinAlg.BLAS.syrk!`

— Function.`Base.LinAlg.BLAS.syrk`

— Function.`Base.LinAlg.BLAS.her!`

— Function.`her!(uplo, alpha, x, A)`

Methods for complex arrays only. Rank-1 update of the Hermitian matrix `A`

with vector `x`

as `alpha*x*x' + A`

. `uplo`

controls which triangle of `A`

is updated. Returns `A`

.

`Base.LinAlg.BLAS.herk!`

— Function.`Base.LinAlg.BLAS.herk`

— Function.`Base.LinAlg.BLAS.gbmv!`

— Function.`gbmv!(trans, m, kl, ku, alpha, A, x, beta, y)`

Update vector `y`

as `alpha*A*x + beta*y`

or `alpha*A'*x + beta*y`

according to `trans`

. The matrix `A`

is a general band matrix of dimension `m`

by `size(A,2)`

with `kl`

sub-diagonals and `ku`

super-diagonals. `alpha`

and `beta`

are scalars. Returns the updated `y`

.

`Base.LinAlg.BLAS.gbmv`

— Function.`gbmv(trans, m, kl, ku, alpha, A, x)`

Returns `alpha*A*x`

or `alpha*A'*x`

according to `trans`

. The matrix `A`

is a general band matrix of dimension `m`

by `size(A,2)`

with `kl`

sub-diagonals and `ku`

super-diagonals, and `alpha`

is a scalar.

`Base.LinAlg.BLAS.sbmv!`

— Function.`sbmv!(uplo, k, alpha, A, x, beta, y)`

Update vector `y`

as `alpha*A*x + beta*y`

where `A`

is a symmetric band matrix of order `size(A,2)`

with `k`

super-diagonals stored in the argument `A`

. The storage layout for `A`

is described in the reference BLAS module, level-2 BLAS at http://www.netlib.org/lapack/explore-html/. Only the `uplo`

triangle of `A`

is used.

Returns the updated `y`

.

`Base.LinAlg.BLAS.sbmv`

— Method.`sbmv(uplo, k, alpha, A, x)`

Returns `alpha*A*x`

where `A`

is a symmetric band matrix of order `size(A,2)`

with `k`

super-diagonals stored in the argument `A`

. Only the `uplo`

triangle of `A`

is used.

`Base.LinAlg.BLAS.sbmv`

— Method.`sbmv(uplo, k, A, x)`

Returns `A*x`

where `A`

is a symmetric band matrix of order `size(A,2)`

with `k`

super-diagonals stored in the argument `A`

. Only the `uplo`

triangle of `A`

is used.

`Base.LinAlg.BLAS.gemm!`

— Function.`gemm!(tA, tB, alpha, A, B, beta, C)`

Update `C`

as `alpha*A*B + beta*C`

or the other three variants according to `tA`

and `tB`

. Returns the updated `C`

.

`Base.LinAlg.BLAS.gemm`

— Method.`gemm(tA, tB, alpha, A, B)`

Returns `alpha*A*B`

or the other three variants according to `tA`

and `tB`

.

`Base.LinAlg.BLAS.gemm`

— Method.`gemm(tA, tB, A, B)`

Returns `A*B`

or the other three variants according to `tA`

and `tB`

.
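
A short sketch (illustrative values) computing `2*A*B.'` via `gemm`:

```
julia> A = [1.0 2.0; 3.0 4.0]; B = ones(2, 2);
julia> Base.BLAS.gemm('N', 'T', 2.0, A, B)   # 2.0 * A * B.'
2×2 Array{Float64,2}:
6.0 6.0
14.0 14.0
```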

`Base.LinAlg.BLAS.gemv!`

— Function.`gemv!(tA, alpha, A, x, beta, y)`

Update the vector `y`

as `alpha*A*x + beta*y`

or `alpha*A'x + beta*y`

according to `tA`

. `alpha`

and `beta`

are scalars. Returns the updated `y`

.

`Base.LinAlg.BLAS.gemv`

— Method.`gemv(tA, alpha, A, x)`

Returns `alpha*A*x`

or `alpha*A'x`

according to `tA`

. `alpha`

is a scalar.

`Base.LinAlg.BLAS.gemv`

— Method.`gemv(tA, A, x)`

Returns `A*x`

or `A'x`

according to `tA`

.
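
A short sketch (illustrative values) of the transposed matrix-vector product:

```
julia> A = [1.0 2.0; 3.0 4.0]; x = [1.0, 1.0];
julia> Base.BLAS.gemv('T', A, x)   # A.' * x
2-element Array{Float64,1}:
4.0
6.0
```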

`Base.LinAlg.BLAS.symm!`

— Function.`Base.LinAlg.BLAS.symm`

— Method.`Base.LinAlg.BLAS.symm`

— Method.`Base.LinAlg.BLAS.symv!`

— Function.`symv!(ul, alpha, A, x, beta, y)`

Update the vector `y`

as `alpha*A*x + beta*y`

. `A`

is assumed to be symmetric. Only the `ul`

triangle of `A`

is used. `alpha`

and `beta`

are scalars. Returns the updated `y`

.

`Base.LinAlg.BLAS.symv`

— Method.`symv(ul, alpha, A, x)`

Returns `alpha*A*x`

. `A`

is assumed to be symmetric. Only the `ul`

triangle of `A`

is used. `alpha`

is a scalar.

`Base.LinAlg.BLAS.symv`

— Method.`symv(ul, A, x)`

Returns `A*x`

. `A`

is assumed to be symmetric. Only the `ul`

triangle of `A`

is used.

`Base.LinAlg.BLAS.trmm!`

— Function.`Base.LinAlg.BLAS.trmm`

— Function.`Base.LinAlg.BLAS.trsm!`

— Function.`Base.LinAlg.BLAS.trsm`

— Function.`Base.LinAlg.BLAS.trmv!`

— Function.`Base.LinAlg.BLAS.trmv`

— Function.`Base.LinAlg.BLAS.trsv!`

— Function.`Base.LinAlg.BLAS.trsv`

— Function.`Base.LinAlg.BLAS.set_num_threads`

— Function.`set_num_threads(n)`

Set the number of threads the BLAS library should use.

`Base.LinAlg.I`

— Constant.`I`

An object of type `UniformScaling`

, representing an identity matrix of any size.

**Example**

```
julia> ones(5, 6) * I == ones(5, 6)
true
julia> [1 2im 3; 1im 2 3] * I
2×3 Array{Complex{Int64},2}:
1+0im 0+2im 3+0im
0+1im 2+0im 3+0im
```

## LAPACK Functions

`Base.LinAlg.LAPACK`

provides wrappers for some of the LAPACK functions for linear algebra. Those functions that overwrite one of the input arrays have names ending in `'!'`

.

Usually a function has 4 methods defined, one each for `Float64`

, `Float32`

, `Complex128`

and `Complex64`

arrays.

Note that the LAPACK API provided by Julia can and will change in the future. Since this API is not user-facing, there is no commitment to support/deprecate this specific set of functions in future releases.

`Base.LinAlg.LAPACK.gbtrf!`

— Function.`gbtrf!(kl, ku, m, AB) -> (AB, ipiv)`

Compute the LU factorization of a banded matrix `AB`

. `kl`

is the first subdiagonal containing a nonzero band, `ku`

is the last superdiagonal containing one, and `m`

is the first dimension of the matrix `AB`

. Returns the LU factorization in-place and `ipiv`

, the vector of pivots used.

`Base.LinAlg.LAPACK.gbtrs!`

— Function.`gbtrs!(trans, kl, ku, m, AB, ipiv, B)`

Solve the equation `AB * X = B`

. `trans`

determines the orientation of `AB`

. It may be `N`

(no transpose), `T`

(transpose), or `C`

(conjugate transpose). `kl`

is the first subdiagonal containing a nonzero band, `ku`

is the last superdiagonal containing one, and `m`

is the first dimension of the matrix `AB`

. `ipiv`

is the vector of pivots returned from `gbtrf!`

. Returns the vector or matrix `X`

, overwriting `B`

in-place.

`Base.LinAlg.LAPACK.gebal!`

— Function.`gebal!(job, A) -> (ilo, ihi, scale)`

Balance the matrix `A`

before computing its eigensystem or Schur factorization. `job`

can be one of `N`

(`A`

will not be permuted or scaled), `P`

(`A`

will only be permuted), `S`

(`A`

will only be scaled), or `B`

(`A`

will be both permuted and scaled). Modifies `A`

in-place and returns `ilo`

, `ihi`

, and `scale`

. If permuting was turned on, `A[i,j] = 0`

if `j > i`

and `1 < j < ilo`

or `j > ihi`

. `scale`

contains information about the scaling/permutations performed.

`Base.LinAlg.LAPACK.gebak!`

— Function.`gebak!(job, side, ilo, ihi, scale, V)`

Transform the eigenvectors `V`

of a matrix balanced using `gebal!`

to the unscaled/unpermuted eigenvectors of the original matrix. Modifies `V`

in-place. `side`

can be `L`

(left eigenvectors are transformed) or `R`

(right eigenvectors are transformed).

`Base.LinAlg.LAPACK.gebrd!`

— Function.`gebrd!(A) -> (A, d, e, tauq, taup)`

Reduce `A`

in-place to bidiagonal form `A = QBP'`

. Returns `A`

, containing the bidiagonal matrix `B`

; `d`

, containing the diagonal elements of `B`

; `e`

, containing the off-diagonal elements of `B`

; `tauq`

, containing the elementary reflectors representing `Q`

; and `taup`

, containing the elementary reflectors representing `P`

.

`Base.LinAlg.LAPACK.gelqf!`

— Function.`gelqf!(A, tau)`

Compute the `LQ`

factorization of `A`

, `A = LQ`

. `tau`

contains scalars which parameterize the elementary reflectors of the factorization. `tau`

must have length greater than or equal to the smallest dimension of `A`

.

Returns `A`

and `tau`

modified in-place.

`gelqf!(A) -> (A, tau)`

Compute the `LQ`

factorization of `A`

, `A = LQ`

.

Returns `A`

, modified in-place, and `tau`

, which contains scalars which parameterize the elementary reflectors of the factorization.

`Base.LinAlg.LAPACK.geqlf!`

— Function.`geqlf!(A, tau)`

Compute the `QL`

factorization of `A`

, `A = QL`

. `tau`

contains scalars which parameterize the elementary reflectors of the factorization. `tau`

must have length greater than or equal to the smallest dimension of `A`

.

Returns `A`

and `tau`

modified in-place.

`geqlf!(A) -> (A, tau)`

Compute the `QL`

factorization of `A`

, `A = QL`

.

Returns `A`

, modified in-place, and `tau`

, which contains scalars which parameterize the elementary reflectors of the factorization.

`Base.LinAlg.LAPACK.geqrf!`

— Function.`geqrf!(A, tau)`

Compute the `QR`

factorization of `A`

, `A = QR`

. `tau`

contains scalars which parameterize the elementary reflectors of the factorization. `tau`

must have length greater than or equal to the smallest dimension of `A`

.

Returns `A`

and `tau`

modified in-place.

`geqrf!(A) -> (A, tau)`

Compute the `QR`

factorization of `A`

, `A = QR`

.

Returns `A`

, modified in-place, and `tau`

, which contains scalars which parameterize the elementary reflectors of the factorization.

`Base.LinAlg.LAPACK.geqp3!`

— Function.`geqp3!(A, jpvt, tau)`

Compute the pivoted `QR`

factorization of `A`

, `AP = QR`

using BLAS level 3. `P`

is a pivoting matrix, represented by `jpvt`

. `tau`

stores the elementary reflectors. `jpvt`

must have length greater than or equal to `n`

if `A`

is an `(m x n)`

matrix. `tau`

must have length greater than or equal to the smallest dimension of `A`

.

`A`

, `jpvt`

, and `tau`

are modified in-place.

`geqp3!(A, jpvt) -> (A, jpvt, tau)`

Compute the pivoted `QR`

factorization of `A`

, `AP = QR`

using BLAS level 3. `P`

is a pivoting matrix, represented by `jpvt`

. `jpvt`

must have length greater than or equal to `n`

if `A`

is an `(m x n)`

matrix.

Returns `A`

and `jpvt`

, modified in-place, and `tau`

, which stores the elementary reflectors.

`geqp3!(A) -> (A, jpvt, tau)`

Compute the pivoted `QR`

factorization of `A`

, `AP = QR`

using BLAS level 3.

Returns `A`

, modified in-place, `jpvt`

, which represents the pivoting matrix `P`

, and `tau`

, which stores the elementary reflectors.

`Base.LinAlg.LAPACK.gerqf!`

— Function.`gerqf!(A, tau)`

Compute the `RQ`

factorization of `A`

, `A = RQ`

. `tau`

contains scalars which parameterize the elementary reflectors of the factorization. `tau`

must have length greater than or equal to the smallest dimension of `A`

.

Returns `A`

and `tau`

modified in-place.

`gerqf!(A) -> (A, tau)`

Compute the `RQ`

factorization of `A`

, `A = RQ`

.

Returns `A`

, modified in-place, and `tau`

, which contains scalars which parameterize the elementary reflectors of the factorization.

`Base.LinAlg.LAPACK.geqrt!`

— Function.`geqrt!(A, T)`

Compute the blocked `QR`

factorization of `A`

, `A = QR`

. `T`

contains upper triangular block reflectors which parameterize the elementary reflectors of the factorization. The first dimension of `T`

sets the block size and it must be between 1 and `n`

. The second dimension of `T`

must equal the smallest dimension of `A`

.

Returns `A`

and `T`

modified in-place.

`geqrt!(A, nb) -> (A, T)`

Compute the blocked `QR`

factorization of `A`

, `A = QR`

. `nb`

sets the block size and it must be between 1 and `n`

, the second dimension of `A`

.

Returns `A`

, modified in-place, and `T`

, which contains upper triangular block reflectors which parameterize the elementary reflectors of the factorization.

`Base.LinAlg.LAPACK.geqrt3!`

— Function.`geqrt3!(A, T)`

Recursively computes the blocked `QR`

factorization of `A`

, `A = QR`

. `T`

contains upper triangular block reflectors which parameterize the elementary reflectors of the factorization. The first dimension of `T`

sets the block size and it must be between 1 and `n`

. The second dimension of `T`

must equal the smallest dimension of `A`

.

Returns `A`

and `T`

modified in-place.

`geqrt3!(A) -> (A, T)`

Recursively computes the blocked `QR`

factorization of `A`

, `A = QR`

.

Returns `A`

, modified in-place, and `T`

, which contains upper triangular block reflectors which parameterize the elementary reflectors of the factorization.

`Base.LinAlg.LAPACK.getrf!`

— Function.`getrf!(A) -> (A, ipiv, info)`

Compute the pivoted `LU`

factorization of `A`

, `A = LU`

.

Returns `A`

, modified in-place, `ipiv`

, the pivoting information, and an `info`

code which indicates success (`info = 0`

), a singular value in `U`

(`info = i`

, in which case `U[i,i]`

is singular), or an error code (`info < 0`

).

`Base.LinAlg.LAPACK.tzrzf!`

— Function.`tzrzf!(A) -> (A, tau)`

Transforms the upper trapezoidal matrix `A`

to upper triangular form in-place. Returns `A`

and `tau`

, the scalar parameters for the elementary reflectors of the transformation.

`Base.LinAlg.LAPACK.ormrz!`

— Function.`ormrz!(side, trans, A, tau, C)`

Multiplies the matrix `C`

by `Q`

from the transformation supplied by `tzrzf!`

. Depending on `side`

or `trans`

the multiplication can be left-sided (`side = L, Q*C`

) or right-sided (`side = R, C*Q`

) and `Q`

can be unmodified (`trans = N`

), transposed (`trans = T`

), or conjugate transposed (`trans = C`

). Returns matrix `C`

which is modified in-place with the result of the multiplication.

`Base.LinAlg.LAPACK.gels!`

— Function.`gels!(trans, A, B) -> (F, B, ssr)`

Solves the linear equation `A * X = B`

, `A.' * X = B`

, or `A' * X = B`

using a QR or LQ factorization. Modifies the matrix/vector `B`

in place with the solution. `A`

is overwritten with its `QR`

or `LQ`

factorization. `trans`

may be one of `N`

(no modification), `T`

(transpose), or `C`

(conjugate transpose). `gels!`

searches for the minimum norm/least squares solution. `A`

may be under or over determined. The solution is returned in `B`

.

`Base.LinAlg.LAPACK.gesv!`

— Function.`gesv!(A, B) -> (B, A, ipiv)`

Solves the linear equation `A * X = B`

where `A`

is a square matrix using the `LU`

factorization of `A`

. `A`

is overwritten with its `LU`

factorization and `B`

is overwritten with the solution `X`

. `ipiv`

contains the pivoting information for the `LU`

factorization of `A`

.
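
A minimal sketch (illustrative values; note that both `A` and `B` are overwritten):

```
julia> A = [4.0 2.0; 2.0 3.0]; B = [6.0, 5.0];
julia> Base.LinAlg.LAPACK.gesv!(A, B);
julia> B        # overwritten with the solution X
2-element Array{Float64,1}:
1.0
1.0
```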

`Base.LinAlg.LAPACK.getrs!`

— Function.`getrs!(trans, A, ipiv, B)`

Solves the linear equation `A * X = B`

, `A.' * X = B`

, or `A' * X = B`

for square `A`

. Modifies the matrix/vector `B`

in place with the solution. `A`

is the `LU`

factorization from `getrf!`

, with `ipiv`

the pivoting information. `trans`

may be one of `N`

(no modification), `T`

(transpose), or `C`

(conjugate transpose).

`Base.LinAlg.LAPACK.getri!`

— Function.`getri!(A, ipiv)`

Computes the inverse of `A`

, using its `LU`

factorization found by `getrf!`

. `ipiv`

is the pivot information output and `A`

contains the `LU`

factorization of `getrf!`

. `A`

is overwritten with its inverse.

`Base.LinAlg.LAPACK.gesvx!`

— Function.`gesvx!(fact, trans, A, AF, ipiv, equed, R, C, B) -> (X, equed, R, C, B, rcond, ferr, berr, work)`

Solves the linear equation `A * X = B`

(`trans = N`

), `A.' * X = B`

(`trans = T`

), or `A' * X = B`

(`trans = C`

) using the `LU`

factorization of `A`

. `fact`

may be `E`

, in which case `A`

will be equilibrated and copied to `AF`

; `F`

, in which case `AF`

and `ipiv`

from a previous `LU`

factorization are inputs; or `N`

, in which case `A`

will be copied to `AF`

and then factored. If `fact = F`

, `equed`

may be `N`

, meaning `A`

has not been equilibrated; `R`

, meaning `A`

was multiplied by `diagm(R)`

from the left; `C`

, meaning `A`

was multiplied by `diagm(C)`

from the right; or `B`

, meaning `A`

was multiplied by `diagm(R)`

from the left and `diagm(C)`

from the right. If `fact = F`

and `equed = R`

or `B`

the elements of `R`

must all be positive. If `fact = F`

and `equed = C`

or `B`

the elements of `C`

must all be positive.

Returns the solution `X`

; `equed`

, which is an output if `fact`

is not `N`

, and describes the equilibration that was performed; `R`

, the row equilibration diagonal; `C`

, the column equilibration diagonal; `B`

, which may be overwritten with its equilibrated form `diagm(R)*B`

(if `trans = N`

and `equed = R,B`

) or `diagm(C)*B`

(if `trans = T,C`

and `equed = C,B`

); `rcond`

, the reciprocal condition number of `A`

after equilbrating; `ferr`

, the forward error bound for each solution vector in `X`

; `berr`

, the componentwise relative backward error of each solution vector in `X`

; and `work`

, the reciprocal pivot growth factor.

`gesvx!(A, B)`

The no-equilibration, no-transpose simplification of `gesvx!`

.

`Base.LinAlg.LAPACK.gelsd!`

— Function.`gelsd!(A, B, rcond) -> (B, rnk)`

Computes the least norm solution of `A * X = B`

by finding the `SVD`

factorization of `A`

, then dividing-and-conquering the problem. `B`

is overwritten with the solution `X`

. Singular values below `rcond`

will be treated as zero. Returns the solution in `B`

and the effective rank of `A`

in `rnk`

.

`Base.LinAlg.LAPACK.gelsy!`

— Function.`gelsy!(A, B, rcond) -> (B, rnk)`

Computes the least norm solution of `A * X = B`

by finding the full `QR`

factorization of `A`

, then dividing-and-conquering the problem. `B`

is overwritten with the solution `X`

. Singular values below `rcond`

will be treated as zero. Returns the solution in `B`

and the effective rank of `A`

in `rnk`

.

`Base.LinAlg.LAPACK.gglse!`

— Function.`gglse!(A, c, B, d) -> (X,res)`

Solves the equation `A * x = c`

where `x`

is subject to the equality constraint `B * x = d`

. The problem is solved in a least squares sense, minimizing `||c - A*x||^2` subject to the constraint. Returns `X`

and the residual sum-of-squares.

`Base.LinAlg.LAPACK.geev!`

— Function.`geev!(jobvl, jobvr, A) -> (W, VL, VR)`

Finds the eigensystem of `A`

. If `jobvl = N`

, the left eigenvectors of `A`

aren't computed. If `jobvr = N`

, the right eigenvectors of `A`

aren't computed. If `jobvl = V`

or `jobvr = V`

, the corresponding eigenvectors are computed. Returns the eigenvalues in `W`

, the right eigenvectors in `VR`

, and the left eigenvectors in `VL`

.

`Base.LinAlg.LAPACK.gesdd!`

— Function.`gesdd!(job, A) -> (U, S, VT)`

Finds the singular value decomposition of `A`

, `A = U * S * V'`

, using a divide and conquer approach. If `job = A`

, all the columns of `U`

and the rows of `V'`

are computed. If `job = N`

, no columns of `U`

or rows of `V'`

are computed. If `job = O`

, `A`

is overwritten with the columns of (thin) `U`

and the rows of (thin) `V'`

. If `job = S`

, the columns of (thin) `U`

and the rows of (thin) `V'`

are computed and returned separately.

`Base.LinAlg.LAPACK.gesvd!`

— Function.`gesvd!(jobu, jobvt, A) -> (U, S, VT)`

Finds the singular value decomposition of `A`, `A = U * S * V'`. If `jobu = A`, all the columns of `U` are computed. If `jobvt = A`, all the rows of `V'` are computed. If `jobu = N`, no columns of `U` are computed. If `jobvt = N`, no rows of `V'` are computed. If `jobu = O`, `A` is overwritten with the columns of (thin) `U`. If `jobvt = O`, `A` is overwritten with the rows of (thin) `V'`. If `jobu = S`, the columns of (thin) `U` are computed and returned separately. If `jobvt = S`, the rows of (thin) `V'` are computed and returned separately. `jobu` and `jobvt` can't both be `O`.

Returns `U`, `S`, and `Vt`, where `S` are the singular values of `A`.
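
An illustrative sketch (not from the original docstring) that requests the full SVD of a hypothetical matrix; the input is destroyed by the call:

```
# Full SVD via gesvd!(jobu, jobvt, A) -> (U, S, VT).
import Base.LinAlg.LAPACK

A = [1.0 2.0; 3.0 4.0; 5.0 6.0]

# 'A' requests all columns of U and all rows of V'; A itself is overwritten.
U, S, VT = LAPACK.gesvd!('A', 'A', copy(A))

# Sanity check: rebuild A from the thin part of the factorization.
Ahat = U[:, 1:2] * diagm(S) * VT
```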

`Base.LinAlg.LAPACK.ggsvd!`

— Function.`ggsvd!(jobu, jobv, jobq, A, B) -> (U, V, Q, alpha, beta, k, l, R)`

Finds the generalized singular value decomposition of `A` and `B`, `U'*A*Q = D1*R` and `V'*B*Q = D2*R`. `D1` has `alpha` on its diagonal and `D2` has `beta` on its diagonal. If `jobu = U`, the orthogonal/unitary matrix `U` is computed. If `jobv = V`, the orthogonal/unitary matrix `V` is computed. If `jobq = Q`, the orthogonal/unitary matrix `Q` is computed. If `jobu`, `jobv` or `jobq` is `N`, that matrix is not computed. This function is only available in LAPACK versions prior to 3.6.0.

`Base.LinAlg.LAPACK.ggsvd3!`

— Function.`ggsvd3!(jobu, jobv, jobq, A, B) -> (U, V, Q, alpha, beta, k, l, R)`

Finds the generalized singular value decomposition of `A` and `B`, `U'*A*Q = D1*R` and `V'*B*Q = D2*R`. `D1` has `alpha` on its diagonal and `D2` has `beta` on its diagonal. If `jobu = U`, the orthogonal/unitary matrix `U` is computed. If `jobv = V`, the orthogonal/unitary matrix `V` is computed. If `jobq = Q`, the orthogonal/unitary matrix `Q` is computed. If `jobu`, `jobv`, or `jobq` is `N`, that matrix is not computed. This function requires LAPACK 3.6.0.

`Base.LinAlg.LAPACK.geevx!`

— Function.`geevx!(balanc, jobvl, jobvr, sense, A) -> (A, w, VL, VR, ilo, ihi, scale, abnrm, rconde, rcondv)`

Finds the eigensystem of `A` with matrix balancing. If `jobvl = N`, the left eigenvectors of `A` aren't computed. If `jobvr = N`, the right eigenvectors of `A` aren't computed. If `jobvl = V` or `jobvr = V`, the corresponding eigenvectors are computed. If `balanc = N`, no balancing is performed. If `balanc = P`, `A` is permuted but not scaled. If `balanc = S`, `A` is scaled but not permuted. If `balanc = B`, `A` is permuted and scaled. If `sense = N`, no reciprocal condition numbers are computed. If `sense = E`, reciprocal condition numbers are computed for the eigenvalues only. If `sense = V`, reciprocal condition numbers are computed for the right eigenvectors only. If `sense = B`, reciprocal condition numbers are computed for the eigenvalues and the right eigenvectors. If `sense = E,B`, the right and left eigenvectors must be computed.

`Base.LinAlg.LAPACK.ggev!`

— Function.`ggev!(jobvl, jobvr, A, B) -> (alpha, beta, vl, vr)`

Finds the generalized eigendecomposition of `A` and `B`. If `jobvl = N`, the left eigenvectors aren't computed. If `jobvr = N`, the right eigenvectors aren't computed. If `jobvl = V` or `jobvr = V`, the corresponding eigenvectors are computed.

`Base.LinAlg.LAPACK.gtsv!`

— Function.`gtsv!(dl, d, du, B)`

Solves the equation `A * X = B` where `A` is a tridiagonal matrix with `dl` on the subdiagonal, `d` on the diagonal, and `du` on the superdiagonal.

Overwrites `B` with the solution `X` and returns it.
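
For example, a minimal sketch with a hypothetical diagonally dominant system (all four arguments are modified in place):

```
# Solve a tridiagonal system with gtsv!(dl, d, du, B).
import Base.LinAlg.LAPACK

dl = fill(-1.0, 3)    # subdiagonal
d  = fill( 4.0, 4)    # diagonal
du = fill(-1.0, 3)    # superdiagonal
b  = ones(4)

x = LAPACK.gtsv!(copy(dl), copy(d), copy(du), copy(b))   # x is the overwritten copy of b
```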

`Base.LinAlg.LAPACK.gttrf!`

— Function.`gttrf!(dl, d, du) -> (dl, d, du, du2, ipiv)`

Finds the `LU` factorization of a tridiagonal matrix with `dl` on the subdiagonal, `d` on the diagonal, and `du` on the superdiagonal.

Modifies `dl`, `d`, and `du` in-place and returns them and the second superdiagonal `du2` and the pivoting vector `ipiv`.

`Base.LinAlg.LAPACK.gttrs!`

— Function.`gttrs!(trans, dl, d, du, du2, ipiv, B)`

Solves the equation `A * X = B` (`trans = N`), `A.' * X = B` (`trans = T`), or `A' * X = B` (`trans = C`) using the `LU` factorization computed by `gttrf!`. `B` is overwritten with the solution `X`.
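
A hedged sketch of the factor-once, solve-many workflow implied by `gttrf!` and `gttrs!`, using hypothetical data:

```
# LU-factor a tridiagonal matrix once, then reuse the factorization.
import Base.LinAlg.LAPACK

dl = fill(-1.0, 3); d = fill(4.0, 4); du = fill(-1.0, 3)
dl, d, du, du2, ipiv = LAPACK.gttrf!(dl, d, du)

b1 = ones(4)
b2 = collect(1.0:4.0)
LAPACK.gttrs!('N', dl, d, du, du2, ipiv, b1)   # b1 now holds the first solution
LAPACK.gttrs!('N', dl, d, du, du2, ipiv, b2)   # b2 now holds the second solution
```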

`Base.LinAlg.LAPACK.orglq!`

— Function.`orglq!(A, tau, k = length(tau))`

Explicitly finds the matrix `Q` of an `LQ` factorization after calling `gelqf!` on `A`. Uses the output of `gelqf!`. `A` is overwritten by `Q`.

`Base.LinAlg.LAPACK.orgqr!`

— Function.`orgqr!(A, tau, k = length(tau))`

Explicitly finds the matrix `Q` of a `QR` factorization after calling `geqrf!` on `A`. Uses the output of `geqrf!`. `A` is overwritten by `Q`.

`Base.LinAlg.LAPACK.orgql!`

— Function.`orgql!(A, tau, k = length(tau))`

Explicitly finds the matrix `Q` of a `QL` factorization after calling `geqlf!` on `A`. Uses the output of `geqlf!`. `A` is overwritten by `Q`.

`Base.LinAlg.LAPACK.orgrq!`

— Function.`orgrq!(A, tau, k = length(tau))`

Explicitly finds the matrix `Q` of an `RQ` factorization after calling `gerqf!` on `A`. Uses the output of `gerqf!`. `A` is overwritten by `Q`.
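
As an illustration of how these routines pair with the corresponding factorizations, the sketch below forms `Q` explicitly after a QR factorization. It assumes `geqrf!(A)` returns `(A, tau)` as in its own docstring; the matrix is hypothetical.

```
# Compact QR via geqrf!, then materialize the thin Q with orgqr!.
import Base.LinAlg.LAPACK

A = [2.0 1.0; 1.0 3.0; 0.0 1.0]
F, tau = LAPACK.geqrf!(copy(A))   # reflectors stored in F, scalar factors in tau

LAPACK.orgqr!(F, tau)             # F is overwritten by the thin Q (3×2 here)
```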

`Base.LinAlg.LAPACK.ormlq!`

— Function.`ormlq!(side, trans, A, tau, C)`

Computes `Q * C` (`trans = N`), `Q.' * C` (`trans = T`), `Q' * C` (`trans = C`) for `side = L` or the equivalent right-sided multiplication for `side = R` using `Q` from an `LQ` factorization of `A` computed using `gelqf!`. `C` is overwritten.

`Base.LinAlg.LAPACK.ormqr!`

— Function.`ormqr!(side, trans, A, tau, C)`

Computes `Q * C` (`trans = N`), `Q.' * C` (`trans = T`), `Q' * C` (`trans = C`) for `side = L` or the equivalent right-sided multiplication for `side = R` using `Q` from a `QR` factorization of `A` computed using `geqrf!`. `C` is overwritten.

`Base.LinAlg.LAPACK.ormql!`

— Function.`ormql!(side, trans, A, tau, C)`

Computes `Q * C` (`trans = N`), `Q.' * C` (`trans = T`), `Q' * C` (`trans = C`) for `side = L` or the equivalent right-sided multiplication for `side = R` using `Q` from a `QL` factorization of `A` computed using `geqlf!`. `C` is overwritten.

`Base.LinAlg.LAPACK.ormrq!`

— Function.`ormrq!(side, trans, A, tau, C)`

Computes `Q * C` (`trans = N`), `Q.' * C` (`trans = T`), `Q' * C` (`trans = C`) for `side = L` or the equivalent right-sided multiplication for `side = R` using `Q` from an `RQ` factorization of `A` computed using `gerqf!`. `C` is overwritten.

`Base.LinAlg.LAPACK.gemqrt!`

— Function.`gemqrt!(side, trans, V, T, C)`

Computes `Q * C` (`trans = N`), `Q.' * C` (`trans = T`), `Q' * C` (`trans = C`) for `side = L` or the equivalent right-sided multiplication for `side = R` using `Q` from a `QR` factorization of `A` computed using `geqrt!`. `C` is overwritten.

`Base.LinAlg.LAPACK.posv!`

— Function.`posv!(uplo, A, B) -> (A, B)`

Finds the solution to `A * X = B` where `A` is a symmetric or Hermitian positive definite matrix. If `uplo = U`, the upper Cholesky decomposition of `A` is computed. If `uplo = L`, the lower Cholesky decomposition of `A` is computed. `A` is overwritten by its Cholesky decomposition. `B` is overwritten with the solution `X`.
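
A minimal sketch with a hypothetical positive definite matrix:

```
# Solve a positive-definite system with posv!(uplo, A, B) -> (A, B).
import Base.LinAlg.LAPACK

A = [4.0 1.0; 1.0 3.0]   # symmetric positive definite
b = [1.0, 2.0]

# A's copy is overwritten by its Cholesky factor, b's copy by the solution.
F, x = LAPACK.posv!('U', copy(A), copy(b))
```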

`Base.LinAlg.LAPACK.potrf!`

— Function.`potrf!(uplo, A)`

Computes the Cholesky (upper if `uplo = U`, lower if `uplo = L`) decomposition of positive-definite matrix `A`. `A` is overwritten and returned with an info code.

`Base.LinAlg.LAPACK.potri!`

— Function.`potri!(uplo, A)`

Computes the inverse of positive-definite matrix `A` after calling `potrf!` to find its (upper if `uplo = U`, lower if `uplo = L`) Cholesky decomposition.

`A` is overwritten by its inverse and returned.

`Base.LinAlg.LAPACK.potrs!`

— Function.`potrs!(uplo, A, B)`

Finds the solution to `A * X = B` where `A` is a symmetric or Hermitian positive definite matrix whose Cholesky decomposition was computed by `potrf!`. If `uplo = U`, the upper Cholesky decomposition of `A` was computed. If `uplo = L`, the lower Cholesky decomposition of `A` was computed. `B` is overwritten with the solution `X`.
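
The sketch below (illustrative, hypothetical data) factors once with `potrf!` and then solves two right-hand sides against the same factor with `potrs!`:

```
# Cholesky-factor once, solve repeatedly.
import Base.LinAlg.LAPACK

A = [4.0 1.0; 1.0 3.0]
F, info = LAPACK.potrf!('U', copy(A))    # F holds the upper Cholesky factor
info == 0 || error("matrix is not positive definite")

b1 = [1.0, 2.0]
b2 = [0.0, 1.0]
LAPACK.potrs!('U', F, b1)    # b1 now holds the solution of A*x = [1, 2]
LAPACK.potrs!('U', F, b2)    # b2 now holds the solution of A*x = [0, 1]
```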

`Base.LinAlg.LAPACK.pstrf!`

— Function.`pstrf!(uplo, A, tol) -> (A, piv, rank, info)`

Computes the (upper if `uplo = U`, lower if `uplo = L`) pivoted Cholesky decomposition of positive-definite matrix `A` with a user-set tolerance `tol`. `A` is overwritten by its Cholesky decomposition.

Returns `A`, the pivots `piv`, the rank of `A`, and an `info` code. If `info = 0`, the factorization succeeded. If `info = i > 0`, then `A` is indefinite or rank-deficient.

`Base.LinAlg.LAPACK.ptsv!`

— Function.`ptsv!(D, E, B)`

Solves `A * X = B` for positive-definite tridiagonal `A`. `D` is the diagonal of `A` and `E` is the off-diagonal. `B` is overwritten with the solution `X` and returned.
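
A minimal sketch with hypothetical data:

```
# Positive-definite tridiagonal solve with ptsv!(D, E, B).
import Base.LinAlg.LAPACK

D = fill(4.0, 4)     # diagonal
E = fill(-1.0, 3)    # off-diagonal
b = ones(4)

x = LAPACK.ptsv!(copy(D), copy(E), copy(b))   # the copy of b is overwritten and returned
```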

`Base.LinAlg.LAPACK.pttrf!`

— Function.`pttrf!(D, E)`

Computes the LDLt factorization of a positive-definite tridiagonal matrix with `D` as diagonal and `E` as off-diagonal. `D` and `E` are overwritten and returned.

`Base.LinAlg.LAPACK.pttrs!`

— Function.`pttrs!(D, E, B)`

Solves `A * X = B` for positive-definite tridiagonal `A` with diagonal `D` and off-diagonal `E` after computing `A`'s LDLt factorization using `pttrf!`. `B` is overwritten with the solution `X`.

`Base.LinAlg.LAPACK.trtri!`

— Function.`trtri!(uplo, diag, A)`

Finds the inverse of (upper if `uplo = U`, lower if `uplo = L`) triangular matrix `A`. If `diag = N`, `A` has non-unit diagonal elements. If `diag = U`, all diagonal elements of `A` are one. `A` is overwritten with its inverse.

`Base.LinAlg.LAPACK.trtrs!`

— Function.`trtrs!(uplo, trans, diag, A, B)`

Solves `A * X = B` (`trans = N`), `A.' * X = B` (`trans = T`), or `A' * X = B` (`trans = C`) for (upper if `uplo = U`, lower if `uplo = L`) triangular matrix `A`. If `diag = N`, `A` has non-unit diagonal elements. If `diag = U`, all diagonal elements of `A` are one. `B` is overwritten with the solution `X`.
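
For instance (an illustrative sketch with a hypothetical upper triangular matrix):

```
# Triangular solve with trtrs!(uplo, trans, diag, A, B).
import Base.LinAlg.LAPACK

A = [2.0 1.0; 0.0 3.0]   # upper triangular, non-unit diagonal
b = [3.0, 6.0]

LAPACK.trtrs!('U', 'N', 'N', A, b)   # b is overwritten with the solution of A*x = b
```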

`Base.LinAlg.LAPACK.trcon!`

— Function.`trcon!(norm, uplo, diag, A)`

Finds the reciprocal condition number of (upper if `uplo = U`, lower if `uplo = L`) triangular matrix `A`. If `diag = N`, `A` has non-unit diagonal elements. If `diag = U`, all diagonal elements of `A` are one. If `norm = I`, the condition number is found in the infinity norm. If `norm = O` or `1`, the condition number is found in the one norm.

`Base.LinAlg.LAPACK.trevc!`

— Function.`trevc!(side, howmny, select, T, VL = similar(T), VR = similar(T))`

Finds the eigensystem of an upper triangular matrix `T`. If `side = R`, the right eigenvectors are computed. If `side = L`, the left eigenvectors are computed. If `side = B`, both sets are computed. If `howmny = A`, all eigenvectors are found. If `howmny = B`, all eigenvectors are found and backtransformed using `VL` and `VR`. If `howmny = S`, only the eigenvectors corresponding to the values in `select` are computed.

`Base.LinAlg.LAPACK.trrfs!`

— Function.`trrfs!(uplo, trans, diag, A, B, X, Ferr, Berr) -> (Ferr, Berr)`

Estimates the error in the solution to `A * X = B` (`trans = N`), `A.' * X = B` (`trans = T`), or `A' * X = B` (`trans = C`) for `side = L`, or in the equivalent right-sided equations `X * A = B` for `side = R`, after computing `X` using `trtrs!`. If `uplo = U`, `A` is upper triangular. If `uplo = L`, `A` is lower triangular. If `diag = N`, `A` has non-unit diagonal elements. If `diag = U`, all diagonal elements of `A` are one. `Ferr` and `Berr` are optional inputs. `Ferr` is the forward error and `Berr` is the backward error, each component-wise.

`Base.LinAlg.LAPACK.stev!`

— Function.`stev!(job, dv, ev) -> (dv, Zmat)`

Computes the eigensystem for a symmetric tridiagonal matrix with `dv` as diagonal and `ev` as off-diagonal. If `job = N`, only the eigenvalues are found and returned in `dv`. If `job = V`, then the eigenvectors are also found and returned in `Zmat`.
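
A minimal sketch for a hypothetical symmetric tridiagonal matrix:

```
# Eigenvalues and eigenvectors via stev!(job, dv, ev) -> (dv, Zmat).
import Base.LinAlg.LAPACK

dv = fill(2.0, 4)     # diagonal
ev = fill(-1.0, 3)    # off-diagonal

w, Z = LAPACK.stev!('V', copy(dv), copy(ev))   # eigenvalues in w, eigenvectors in Z
```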

`Base.LinAlg.LAPACK.stebz!`

— Function.`stebz!(range, order, vl, vu, il, iu, abstol, dv, ev) -> (dv, iblock, isplit)`

Computes the eigenvalues for a symmetric tridiagonal matrix with `dv` as diagonal and `ev` as off-diagonal. If `range = A`, all the eigenvalues are found. If `range = V`, the eigenvalues in the half-open interval `(vl, vu]` are found. If `range = I`, the eigenvalues with indices between `il` and `iu` are found. If `order = B`, the eigenvalues are ordered within a block. If `order = E`, they are ordered across all the blocks. `abstol` can be set as a tolerance for convergence.

`Base.LinAlg.LAPACK.stegr!`

— Function.`stegr!(jobz, range, dv, ev, vl, vu, il, iu) -> (w, Z)`

Computes the eigenvalues (`jobz = N`) or eigenvalues and eigenvectors (`jobz = V`) for a symmetric tridiagonal matrix with `dv` as diagonal and `ev` as off-diagonal. If `range = A`, all the eigenvalues are found. If `range = V`, the eigenvalues in the half-open interval `(vl, vu]` are found. If `range = I`, the eigenvalues with indices between `il` and `iu` are found. The eigenvalues are returned in `w` and the eigenvectors in `Z`.

`Base.LinAlg.LAPACK.stein!`

— Function.`stein!(dv, ev_in, w_in, iblock_in, isplit_in)`

Computes the eigenvectors for a symmetric tridiagonal matrix with `dv` as diagonal and `ev_in` as off-diagonal. `w_in` specifies the input eigenvalues for which to find corresponding eigenvectors. `iblock_in` specifies the submatrices corresponding to the eigenvalues in `w_in`. `isplit_in` specifies the splitting points between the submatrix blocks.

`Base.LinAlg.LAPACK.syconv!`

— Function.`syconv!(uplo, A, ipiv) -> (A, work)`

Converts a symmetric matrix `A` (which has been factorized into a triangular matrix) into two matrices `L` and `D`. If `uplo = U`, `A` is upper triangular. If `uplo = L`, it is lower triangular. `ipiv` is the pivot vector from the triangular factorization. `A` is overwritten by `L` and `D`.

`Base.LinAlg.LAPACK.sysv!`

— Function.`sysv!(uplo, A, B) -> (B, A, ipiv)`

Finds the solution to `A * X = B` for symmetric matrix `A`. If `uplo = U`, the upper half of `A` is stored. If `uplo = L`, the lower half is stored. `B` is overwritten by the solution `X`. `A` is overwritten by its Bunch-Kaufman factorization. `ipiv` contains pivoting information about the factorization.

`Base.LinAlg.LAPACK.sytrf!`

— Function.`sytrf!(uplo, A) -> (A, ipiv, info)`

Computes the Bunch-Kaufman factorization of a symmetric matrix `A`. If `uplo = U`, the upper half of `A` is stored. If `uplo = L`, the lower half is stored.

Returns `A`, overwritten by the factorization, a pivot vector `ipiv`, and the error code `info` which is a non-negative integer. If `info` is positive the matrix is singular and the diagonal part of the factorization is exactly zero at position `info`.

`Base.LinAlg.LAPACK.sytri!`

— Function.`sytri!(uplo, A, ipiv)`

Computes the inverse of a symmetric matrix `A` using the results of `sytrf!`. If `uplo = U`, the upper half of `A` is stored. If `uplo = L`, the lower half is stored. `A` is overwritten by its inverse.

`Base.LinAlg.LAPACK.sytrs!`

— Function.`sytrs!(uplo, A, ipiv, B)`

Solves the equation `A * X = B` for a symmetric matrix `A` using the results of `sytrf!`. If `uplo = U`, the upper half of `A` is stored. If `uplo = L`, the lower half is stored. `B` is overwritten by the solution `X`.
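
A hedged sketch of the Bunch-Kaufman factor-then-solve pattern for a hypothetical symmetric indefinite matrix:

```
# Factor with sytrf!, then solve with sytrs! against the stored factorization.
import Base.LinAlg.LAPACK

A = [2.0 1.0; 1.0 -3.0]                      # symmetric but indefinite
F, ipiv, info = LAPACK.sytrf!('U', copy(A))
info == 0 || error("matrix is singular")

b = [1.0, 2.0]
LAPACK.sytrs!('U', F, ipiv, b)               # b is overwritten with the solution
```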

`Base.LinAlg.LAPACK.hesv!`

— Function.`hesv!(uplo, A, B) -> (B, A, ipiv)`

Finds the solution to `A * X = B` for Hermitian matrix `A`. If `uplo = U`, the upper half of `A` is stored. If `uplo = L`, the lower half is stored. `B` is overwritten by the solution `X`. `A` is overwritten by its Bunch-Kaufman factorization. `ipiv` contains pivoting information about the factorization.

`Base.LinAlg.LAPACK.hetrf!`

— Function.`hetrf!(uplo, A) -> (A, ipiv, info)`

Computes the Bunch-Kaufman factorization of a Hermitian matrix `A`. If `uplo = U`, the upper half of `A` is stored. If `uplo = L`, the lower half is stored.

Returns `A`, overwritten by the factorization, a pivot vector `ipiv`, and the error code `info` which is a non-negative integer. If `info` is positive the matrix is singular and the diagonal part of the factorization is exactly zero at position `info`.

`Base.LinAlg.LAPACK.hetri!`

— Function.`hetri!(uplo, A, ipiv)`

Computes the inverse of a Hermitian matrix `A` using the results of `hetrf!`. If `uplo = U`, the upper half of `A` is stored. If `uplo = L`, the lower half is stored. `A` is overwritten by its inverse.

`Base.LinAlg.LAPACK.hetrs!`

— Function.`hetrs!(uplo, A, ipiv, B)`

Solves the equation `A * X = B` for a Hermitian matrix `A` using the results of `hetrf!`. If `uplo = U`, the upper half of `A` is stored. If `uplo = L`, the lower half is stored. `B` is overwritten by the solution `X`.

`Base.LinAlg.LAPACK.syev!`

— Function.`syev!(jobz, uplo, A)`

Finds the eigenvalues (`jobz = N`) or eigenvalues and eigenvectors (`jobz = V`) of a symmetric matrix `A`. If `uplo = U`, the upper triangle of `A` is used. If `uplo = L`, the lower triangle of `A` is used.

`Base.LinAlg.LAPACK.syevr!`

— Function.`syevr!(jobz, range, uplo, A, vl, vu, il, iu, abstol) -> (W, Z)`

Finds the eigenvalues (`jobz = N`) or eigenvalues and eigenvectors (`jobz = V`) of a symmetric matrix `A`. If `uplo = U`, the upper triangle of `A` is used. If `uplo = L`, the lower triangle of `A` is used. If `range = A`, all the eigenvalues are found. If `range = V`, the eigenvalues in the half-open interval `(vl, vu]` are found. If `range = I`, the eigenvalues with indices between `il` and `iu` are found. `abstol` can be set as a tolerance for convergence.

The eigenvalues are returned in `W` and the eigenvectors in `Z`.
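
For example, the following illustrative call (hypothetical matrix) computes the whole spectrum; with `range = A` the `vl`, `vu`, `il`, `iu` arguments are ignored, and a negative `abstol` lets LAPACK pick its default tolerance:

```
# All eigenvalues and eigenvectors of a symmetric matrix via syevr!.
import Base.LinAlg.LAPACK

A = [2.0 1.0; 1.0 2.0]
W, Z = LAPACK.syevr!('V', 'A', 'U', copy(A), 0.0, 0.0, 0, 0, -1.0)
```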

`Base.LinAlg.LAPACK.sygvd!`

— Function.`sygvd!(itype, jobz, uplo, A, B) -> (w, A, B)`

Finds the generalized eigenvalues (`jobz = N`) or eigenvalues and eigenvectors (`jobz = V`) of a symmetric matrix `A` and symmetric positive-definite matrix `B`. If `uplo = U`, the upper triangles of `A` and `B` are used. If `uplo = L`, the lower triangles of `A` and `B` are used. If `itype = 1`, the problem to solve is `A * x = lambda * B * x`. If `itype = 2`, the problem to solve is `A * B * x = lambda * x`. If `itype = 3`, the problem to solve is `B * A * x = lambda * x`.

`Base.LinAlg.LAPACK.bdsqr!`

— Function.`bdsqr!(uplo, d, e_, Vt, U, C) -> (d, Vt, U, C)`

Computes the singular value decomposition of a bidiagonal matrix with `d` on the diagonal and `e_` on the off-diagonal. If `uplo = U`, `e_` is the superdiagonal. If `uplo = L`, `e_` is the subdiagonal. Can optionally also compute the product `Q' * C`.

Returns the singular values in `d`, and the matrix `C` overwritten with `Q' * C`.

`Base.LinAlg.LAPACK.bdsdc!`

— Function.`bdsdc!(uplo, compq, d, e_) -> (d, e, u, vt, q, iq)`

Computes the singular value decomposition of a bidiagonal matrix with `d` on the diagonal and `e_` on the off-diagonal using a divide and conquer method. If `uplo = U`, `e_` is the superdiagonal. If `uplo = L`, `e_` is the subdiagonal. If `compq = N`, only the singular values are found. If `compq = I`, the singular values and vectors are found. If `compq = P`, the singular values and vectors are found in compact form. Only works for real types.

Returns the singular values in `d`, and if `compq = P`, the compact singular vectors in `iq`.

`Base.LinAlg.LAPACK.gecon!`

— Function.`gecon!(normtype, A, anorm)`

Finds the reciprocal condition number of matrix `A`. If `normtype = I`, the condition number is found in the infinity norm. If `normtype = O` or `1`, the condition number is found in the one norm. `A` must be the result of `getrf!` and `anorm` is the norm of `A` in the relevant norm.
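
A hedged sketch of the intended workflow, assuming `getrf!(A)` returns `(A, ipiv, info)` as in its own docstring (hypothetical matrix):

```
# Estimate the reciprocal 1-norm condition number of A.
import Base.LinAlg.LAPACK

A = [4.0 1.0; 1.0 3.0]
anorm = norm(A, 1)                        # take the norm before factoring
F, ipiv, info = LAPACK.getrf!(copy(A))    # LU factorization required by gecon!

rcond = LAPACK.gecon!('1', F, anorm)
```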

`Base.LinAlg.LAPACK.gehrd!`

— Function.`gehrd!(ilo, ihi, A) -> (A, tau)`

Converts a matrix `A` to Hessenberg form. If `A` is balanced with `gebal!` then `ilo` and `ihi` are the outputs of `gebal!`. Otherwise they should be `ilo = 1` and `ihi = size(A,2)`. `tau` contains the elementary reflectors of the factorization.

`Base.LinAlg.LAPACK.orghr!`

— Function.`orghr!(ilo, ihi, A, tau)`

Explicitly finds `Q`, the orthogonal/unitary matrix from `gehrd!`. `ilo`, `ihi`, `A`, and `tau` must correspond to the input/output to `gehrd!`.

`Base.LinAlg.LAPACK.gees!`

— Function.`gees!(jobvs, A) -> (A, vs, w)`

Computes the eigenvalues (`jobvs = N`) or the eigenvalues and Schur vectors (`jobvs = V`) of matrix `A`. `A` is overwritten by its Schur form.

Returns `A`, `vs` containing the Schur vectors, and `w`, containing the eigenvalues.

`Base.LinAlg.LAPACK.gges!`

— Function.`gges!(jobvsl, jobvsr, A, B) -> (A, B, alpha, beta, vsl, vsr)`

Computes the generalized eigenvalues, generalized Schur form, left Schur vectors (`jobvsl = V`), or right Schur vectors (`jobvsr = V`) of `A` and `B`.

The generalized eigenvalues are returned in `alpha` and `beta`. The left Schur vectors are returned in `vsl` and the right Schur vectors are returned in `vsr`.

`Base.LinAlg.LAPACK.trexc!`

— Function.`trexc!(compq, ifst, ilst, T, Q) -> (T, Q)`

Reorders the Schur factorization of a matrix. If `compq = V`, the Schur vectors `Q` are reordered. If `compq = N`, they are not modified. `ifst` and `ilst` specify the reordering of the vectors.

`Base.LinAlg.LAPACK.trsen!`

— Function.`trsen!(compq, job, select, T, Q) -> (T, Q, w)`

Reorders the Schur factorization of a matrix and optionally finds reciprocal condition numbers. If `job = N`, no condition numbers are found. If `job = E`, only the condition number for this cluster of eigenvalues is found. If `job = V`, only the condition number for the invariant subspace is found. If `job = B`, then the condition numbers for the cluster and subspace are found. If `compq = V`, the Schur vectors `Q` are updated. If `compq = N`, the Schur vectors are not modified. `select` determines which eigenvalues are in the cluster.

Returns `T`, `Q`, and reordered eigenvalues in `w`.

`Base.LinAlg.LAPACK.tgsen!`

— Function.`tgsen!(select, S, T, Q, Z) -> (S, T, alpha, beta, Q, Z)`

Reorders the vectors of a generalized Schur decomposition. `select` specifies the eigenvalues in each cluster.

`Base.LinAlg.LAPACK.trsyl!`

— Function.`trsyl!(transa, transb, A, B, C, isgn=1) -> (C, scale)`

Solves the Sylvester matrix equation `A * X +/- X * B = scale*C` where `A` and `B` are both quasi-upper triangular. If `transa = N`, `A` is not modified. If `transa = T`, `A` is transposed. If `transa = C`, `A` is conjugate transposed. Similarly for `transb` and `B`. If `isgn = 1`, the equation `A * X + X * B = scale * C` is solved. If `isgn = -1`, the equation `A * X - X * B = scale * C` is solved.

Returns `X` (overwriting `C`) and `scale`.
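
A minimal sketch with hypothetical (quasi-)upper triangular matrices:

```
# Solve the Sylvester equation A*X + X*B = scale*C.
import Base.LinAlg.LAPACK

A = [1.0 2.0; 0.0 3.0]
B = [4.0 1.0; 0.0 5.0]
C = ones(2, 2)

X, scale = LAPACK.trsyl!('N', 'N', A, B, copy(C))   # the copy of C is overwritten by X
```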