Merge branch 'master' into abomonation

commit afef66227e

.travis.yml (11 changes)
@@ -11,6 +11,12 @@ matrix:
  allow_failures:
    - rust: nightly
    - rust: beta

addons:
  apt:
    packages:
      - gfortran
      - libblas3gf
      - liblapack3gf

script:
  - rustc --version
  - cargo --version

@@ -19,3 +25,8 @@ script:
  - cargo build --verbose --features serde-serialize
  - cargo build --verbose --features abomonation-serialize
  - cargo test --verbose --features "arbitrary serde-serialize abomonation-serialize"
  - cd nalgebra-lapack; cargo test --verbose

env:
  matrix:
    - CARGO_FEATURE_SYSTEM_NETLIB=1 CARGO_FEATURE_EXCLUDE_LAPACKE=1 CARGO_FEATURE_EXCLUDE_CBLAS=1
CHANGELOG.md (107 changes)
@@ -4,20 +4,117 @@ documented here.

This project adheres to [Semantic Versioning](http://semver.org/).

## [0.13.0] - WIP

## [0.13.0]

The **nalgebra-lapack** crate has been updated. It now includes a broad range
of matrix decompositions using LAPACK bindings.

### Breaking semantic change
* The implementation of slicing with steps now matches the documentation.
  Before, the step identified the number to add to pass from one column/row
  index to the next one, which made a step of 0 invalid. Now (as the
  documentation already stated), the step is the number of ignored rows/columns
  between each selected row/column, so a step of 0 means that no row/column is
  ignored. For example, a step of, say, 3 in previous versions should now be
  set to 2.
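The step change above boils down to index arithmetic. A plain-Rust sketch (not nalgebra code; `selected_indices` is a hypothetical helper illustrating the new convention):

```rust
// New convention: `ignored` rows/columns are skipped between consecutive
// selected indices, so the distance between picks is `ignored + 1`.
fn selected_indices(first: usize, count: usize, ignored: usize) -> Vec<usize> {
    (0..count).map(|k| first + k * (ignored + 1)).collect()
}

fn main() {
    // Taking every other row: previous versions required a step of 2,
    // the new convention uses a step of 1 (one ignored row between picks).
    assert_eq!(selected_indices(0, 3, 1), vec![0, 2, 4]);
    // A step of 0 is now valid and selects contiguous rows.
    assert_eq!(selected_indices(2, 2, 0), vec![2, 3]);
}
```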

### Modified
* The trait `Axpy` has been replaced by a method `.axpy`.
* The alias `MatrixNM` is now deprecated. Use `MatrixMN` instead (we
  reordered M and N to be in alphabetical order).
* In-place componentwise multiplication and division,
  `.component_mul_mut(...)` and `.component_div_mut(...)`, have been
  deprecated for a future renaming. Use `.component_mul_assign(...)` and
  `.component_div_assign(...)` instead.

### Added
* `alga::general::Real` is now re-exported by nalgebra.
* `::zeros(...)` that creates a matrix filled with zeroes.
* `::from_partial_diagonal(...)` that creates a matrix from diagonal elements.
  The matrix can be rectangular. If not enough elements are provided, the rest
  of the diagonal is set to 0.
* `.conjugate_transpose()` computes the transposed conjugate of a complex
  matrix.
* `.conjugate_transpose_to(...)` computes the transposed conjugate of a
  complex matrix. The result is written into a user-provided matrix.
* `.transpose_to(...)` is the same as `.transpose()` but stores the result in
  the provided matrix.
* `.conjugate_transpose_to(...)` is the same as `.conjugate_transpose()` but
  stores the result in the provided matrix.
* Implements `IntoIterator` for `&Matrix`, `&mut Matrix` and `Matrix`.
* `.mul_to(...)` multiplies two matrices and stores the result in the given buffer.
* `.tr_mul_to(...)` left-multiplies `self.transpose()` by another matrix and stores the result in the given buffer.
* `.add_scalar(...)` that adds a scalar to each component of a matrix.
* `.add_scalar_mut(...)` that adds a scalar in-place to each component of a matrix.
* `.kronecker(a, b)` computes the Kronecker product (i.e., matrix tensor
  product) of two matrices.
* `.set_row(i, row)` sets the i-th row of the matrix.
* `.set_column(j, column)` sets the j-th column of the matrix.
* `.apply(f)` replaces each component of a matrix with the result of the
  closure `f` called on it.
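The Kronecker product entry above can be sketched in plain Rust over row-major `Vec` storage (an illustrative implementation, not nalgebra's):

```rust
// Kronecker product of an (m x n) and a (p x q) matrix, both stored
// row-major, producing an (m*p x n*q) matrix.
fn kronecker(a: &[f64], (m, n): (usize, usize),
             b: &[f64], (p, q): (usize, usize)) -> Vec<f64> {
    let mut out = vec![0.0; m * p * n * q];
    for i in 0..m {
        for j in 0..n {
            for k in 0..p {
                for l in 0..q {
                    // Block (i, j) of the result is a[i][j] * b.
                    out[(i * p + k) * (n * q) + (j * q + l)] =
                        a[i * n + j] * b[k * q + l];
                }
            }
        }
    }
    out
}

fn main() {
    // [1 2; 3 4] kron [0 1; 1 0]
    let a = [1.0, 2.0, 3.0, 4.0];
    let b = [0.0, 1.0, 1.0, 0.0];
    let k = kronecker(&a, (2, 2), &b, (2, 2));
    assert_eq!(k.len(), 16);
    // First result row interleaves 1*B and 2*B: [0, 1, 0, 2].
    assert_eq!(&k[0..4], &[0.0, 1.0, 0.0, 2.0]);
}
```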
Pure Rust implementation of some BLAS operations:

* `.iamax()` returns the index of the maximum value of a vector.
* `.axpy(...)` computes `self = a * x + b * self`.
* `.gemv(...)` computes `self = alpha * a * x + beta * self` with a matrix `a` and a vector `x`.
* `.ger(...)` computes `self = alpha * x * y^t + beta * self` where `x` and `y` are vectors.
* `.gemm(...)` computes `self = alpha * a * b + beta * self` where `a` and `b` are matrices.
* `.gemv_symm(...)` is the same as `.gemv` except that `self` is assumed symmetric.
* `.ger_symm(...)` is the same as `.ger` except that `self` is assumed symmetric.

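The axpy semantics listed above can be sketched over plain slices (illustrative only, not nalgebra's implementation):

```rust
// axpy: self = a * x + b * self, element-wise over equally-sized vectors.
fn axpy(this: &mut [f64], a: f64, x: &[f64], b: f64) {
    assert_eq!(this.len(), x.len());
    for (s, &xi) in this.iter_mut().zip(x) {
        *s = a * xi + b * *s;
    }
}

fn main() {
    let mut y = vec![1.0, 2.0, 3.0];
    // y = 2 * x + 1 * y
    axpy(&mut y, 2.0, &[10.0, 20.0, 30.0], 1.0);
    assert_eq!(y, vec![21.0, 42.0, 63.0]);
}
```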
New slicing methods:
* `.rows_range(...)` that retrieves a reference to a range of rows.
* `.rows_range_mut(...)` that retrieves a mutable reference to a range of rows.
* `.columns_range(...)` that retrieves a reference to a range of columns.
* `.columns_range_mut(...)` that retrieves a mutable reference to a range of columns.

Matrix decompositions implemented in pure Rust:
* Cholesky, SVD, LU, QR, Hessenberg, Schur, symmetric eigendecomposition,
  bidiagonal, symmetric tridiagonal.
* Computation of Householder reflectors and Givens rotations.

Matrix edition:
* `.upper_triangle()` extracts the upper triangle of a matrix, including the diagonal.
* `.lower_triangle()` extracts the lower triangle of a matrix, including the diagonal.
* `.fill(...)` fills the matrix with a single value.
* `.fill_with_identity(...)` fills the matrix with the identity.
* `.fill_diagonal(...)` fills the matrix diagonal with a single value.
* `.fill_row(...)` fills a selected matrix row with a single value.
* `.fill_column(...)` fills a selected matrix column with a single value.
* `.set_diagonal(...)` sets the matrix diagonal.
* `.set_row(...)` sets a selected row.
* `.set_column(...)` sets a selected column.
* `.fill_lower_triangle(...)` fills some sub-diagonals below the main diagonal with a value.
* `.fill_upper_triangle(...)` fills some super-diagonals above the main diagonal with a value.
* `.swap_rows(...)` swaps two rows.
* `.swap_columns(...)` swaps two columns.

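The triangle-extraction entries can be sketched over row-major storage (an illustrative helper, not nalgebra's implementation):

```rust
// Keep the upper triangle (diagonal included) of a square row-major matrix,
// zeroing everything strictly below the main diagonal.
fn upper_triangle(m: &[f64], n: usize) -> Vec<f64> {
    (0..n * n)
        .map(|idx| {
            let (i, j) = (idx / n, idx % n);
            if j >= i { m[idx] } else { 0.0 }
        })
        .collect()
}

fn main() {
    let m = [1.0, 2.0, 3.0, 4.0]; // [1 2; 3 4]
    // Entry below the diagonal (3) is zeroed.
    assert_eq!(upper_triangle(&m, 2), vec![1.0, 2.0, 0.0, 4.0]);
}
```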
Column removal:
* `.remove_column(...)` removes one column.
* `.remove_fixed_columns<D>(...)` removes `D` columns.
* `.remove_columns(...)` removes a number of columns known at run-time.

Row removal:
* `.remove_row(...)` removes one row.
* `.remove_fixed_rows<D>(...)` removes `D` rows.
* `.remove_rows(...)` removes a number of rows known at run-time.

Column insertion:
* `.insert_column(...)` adds one column at the given position.
* `.insert_fixed_columns<D>(...)` adds `D` columns at the given position.
* `.insert_columns(...)` adds, at the given position, a number of columns known at run-time.

Row insertion:
* `.insert_row(...)` adds one row at the given position.
* `.insert_fixed_rows<D>(...)` adds `D` rows at the given position.
* `.insert_rows(...)` adds, at the given position, a number of rows known at run-time.

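Column removal amounts to copying while skipping one source column per row; a plain-Rust sketch over row-major storage (illustrative only):

```rust
// Remove column `j` from a rows x cols row-major matrix,
// yielding rows x (cols - 1).
fn remove_column(m: &[f64], (rows, cols): (usize, usize), j: usize) -> Vec<f64> {
    (0..rows * (cols - 1))
        .map(|idx| {
            let (r, c) = (idx / (cols - 1), idx % (cols - 1));
            // Source columns after `j` shift left by one.
            let src = if c < j { c } else { c + 1 };
            m[r * cols + src]
        })
        .collect()
}

fn main() {
    // [1 2 3; 4 5 6] with column 1 removed -> [1 3; 4 6]
    let m = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0];
    assert_eq!(remove_column(&m, (2, 3), 1), vec![1.0, 3.0, 4.0, 6.0]);
}
```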
## [0.12.0]
The main change of this release is the update of the dependency serde to 1.0.

### Added
* `.trace()` that computes the trace of a matrix (i.e., the sum of its
  diagonal elements.)
* `.trace()` that computes the trace of a matrix (the sum of its diagonal
  elements.)

## [0.11.0]
The [website](http://nalgebra.org) has been fully rewritten and gives a good

Cargo.toml

@@ -19,19 +19,20 @@ path = "src/lib.rs"
arbitrary = [ "quickcheck" ]
serde-serialize = [ "serde", "serde_derive", "num-complex/serde" ]
abomonation-serialize = [ "abomonation" ]
debug = [ ]

[dependencies]
typenum = "1.4"
typenum = "1.7"
generic-array = "0.8"
rand = "0.3"
num-traits = "0.1"
num-complex = "0.1"
approx = "0.1"
alga = "0.5"
matrixmultiply = "0.1"
serde = { version = "1.0", optional = true }
serde_derive = { version = "1.0", optional = true }
abomonation = { version = "0.4", optional = true }
# clippy = "*"

[dependencies.quickcheck]
optional = true

@@ -39,3 +40,6 @@ version = "0.4"

[dev-dependencies]
serde_json = "1.0"

[workspace]
members = [ "nalgebra-lapack" ]
Makefile (6 changes)
@@ -1,11 +1,11 @@
all:
	CARGO_INCREMENTAL=1 cargo build --features "arbitrary serde-serialize"
	cargo check --features "debug arbitrary serde-serialize"

doc:
	CARGO_INCREMENTAL=1 cargo doc --no-deps --features "arbitrary serde-serialize"
	cargo doc --no-deps --features "debug arbitrary serde-serialize"

bench:
	cargo bench

test:
	cargo test --features "arbitrary serde-serialize"
	cargo test --features "debug arbitrary serde-serialize"
README.md (10 changes)
@@ -24,3 +24,13 @@
<b>Linear algebra library</b>
<i>for the Rust programming language.</i>
</p>

-----

<p align = "center">
    <i>Click this button if you wish to donate to support the development of</i> <b>nalgebra</b>:
</p>

<p align = "center">
    <a href="https://www.patreon.com/bePatron?u=7111380"><img src="https://c5.patreon.com/external/logo/become_a_patron_button.png" alt="Become a Patron!" /></a>
</p>
@@ -4,20 +4,12 @@ macro_rules! bench_binop(
    ($name: ident, $t1: ty, $t2: ty, $binop: ident) => {
        #[bench]
        fn $name(bh: &mut Bencher) {
            const LEN: usize = 1 << 13;

            let mut rng = IsaacRng::new_unseeded();

            let elems1: Vec<$t1> = (0usize .. LEN).map(|_| rng.gen::<$t1>()).collect();
            let elems2: Vec<$t2> = (0usize .. LEN).map(|_| rng.gen::<$t2>()).collect();
            let mut i = 0;
            let a = rng.gen::<$t1>();
            let b = rng.gen::<$t2>();

            bh.iter(|| {
                i = (i + 1) & (LEN - 1);

                unsafe {
                    test::black_box(elems1.get_unchecked(i).$binop(*elems2.get_unchecked(i)))
                }
                a.$binop(b)
            })
        }
    }
@@ -27,43 +19,27 @@ macro_rules! bench_binop_ref(
    ($name: ident, $t1: ty, $t2: ty, $binop: ident) => {
        #[bench]
        fn $name(bh: &mut Bencher) {
            const LEN: usize = 1 << 13;

            let mut rng = IsaacRng::new_unseeded();

            let elems1: Vec<$t1> = (0usize .. LEN).map(|_| rng.gen::<$t1>()).collect();
            let elems2: Vec<$t2> = (0usize .. LEN).map(|_| rng.gen::<$t2>()).collect();
            let mut i = 0;
            let a = rng.gen::<$t1>();
            let b = rng.gen::<$t2>();

            bh.iter(|| {
                i = (i + 1) & (LEN - 1);

                unsafe {
                    test::black_box(elems1.get_unchecked(i).$binop(elems2.get_unchecked(i)))
                }
                a.$binop(&b)
            })
        }
    }
);

macro_rules! bench_binop_na(
    ($name: ident, $t1: ty, $t2: ty, $binop: ident) => {
macro_rules! bench_binop_fn(
    ($name: ident, $t1: ty, $t2: ty, $binop: path) => {
        #[bench]
        fn $name(bh: &mut Bencher) {
            const LEN: usize = 1 << 13;

            let mut rng = IsaacRng::new_unseeded();

            let elems1: Vec<$t1> = (0usize .. LEN).map(|_| rng.gen::<$t1>()).collect();
            let elems2: Vec<$t2> = (0usize .. LEN).map(|_| rng.gen::<$t2>()).collect();
            let mut i = 0;
            let a = rng.gen::<$t1>();
            let b = rng.gen::<$t2>();

            bh.iter(|| {
                i = (i + 1) & (LEN - 1);

                unsafe {
                    test::black_box(na::$binop(elems1.get_unchecked(i), elems2.get_unchecked(i)))
                }
                $binop(&a, &b)
            })
        }
    }
@@ -0,0 +1,192 @@
use rand::{IsaacRng, Rng};
use test::{self, Bencher};
use na::{Vector2, Vector3, Vector4, Matrix2, Matrix3, Matrix4,
         MatrixN, U10,
         DMatrix, DVector};
use std::ops::{Add, Sub, Mul, Div};

#[path="../common/macros.rs"]
mod macros;

bench_binop!(mat2_mul_m, Matrix2<f32>, Matrix2<f32>, mul);
bench_binop!(mat3_mul_m, Matrix3<f32>, Matrix3<f32>, mul);
bench_binop!(mat4_mul_m, Matrix4<f32>, Matrix4<f32>, mul);

bench_binop_ref!(mat2_tr_mul_m, Matrix2<f32>, Matrix2<f32>, tr_mul);
bench_binop_ref!(mat3_tr_mul_m, Matrix3<f32>, Matrix3<f32>, tr_mul);
bench_binop_ref!(mat4_tr_mul_m, Matrix4<f32>, Matrix4<f32>, tr_mul);

bench_binop!(mat2_add_m, Matrix2<f32>, Matrix2<f32>, add);
bench_binop!(mat3_add_m, Matrix3<f32>, Matrix3<f32>, add);
bench_binop!(mat4_add_m, Matrix4<f32>, Matrix4<f32>, add);

bench_binop!(mat2_sub_m, Matrix2<f32>, Matrix2<f32>, sub);
bench_binop!(mat3_sub_m, Matrix3<f32>, Matrix3<f32>, sub);
bench_binop!(mat4_sub_m, Matrix4<f32>, Matrix4<f32>, sub);

bench_binop!(mat2_mul_v, Matrix2<f32>, Vector2<f32>, mul);
bench_binop!(mat3_mul_v, Matrix3<f32>, Vector3<f32>, mul);
bench_binop!(mat4_mul_v, Matrix4<f32>, Vector4<f32>, mul);

bench_binop_ref!(mat2_tr_mul_v, Matrix2<f32>, Vector2<f32>, tr_mul);
bench_binop_ref!(mat3_tr_mul_v, Matrix3<f32>, Vector3<f32>, tr_mul);
bench_binop_ref!(mat4_tr_mul_v, Matrix4<f32>, Vector4<f32>, tr_mul);

bench_binop!(mat2_mul_s, Matrix2<f32>, f32, mul);
bench_binop!(mat3_mul_s, Matrix3<f32>, f32, mul);
bench_binop!(mat4_mul_s, Matrix4<f32>, f32, mul);

bench_binop!(mat2_div_s, Matrix2<f32>, f32, div);
bench_binop!(mat3_div_s, Matrix3<f32>, f32, div);
bench_binop!(mat4_div_s, Matrix4<f32>, f32, div);

bench_unop!(mat2_inv, Matrix2<f32>, try_inverse);
bench_unop!(mat3_inv, Matrix3<f32>, try_inverse);
bench_unop!(mat4_inv, Matrix4<f32>, try_inverse);

bench_unop!(mat2_transpose, Matrix2<f32>, transpose);
bench_unop!(mat3_transpose, Matrix3<f32>, transpose);
bench_unop!(mat4_transpose, Matrix4<f32>, transpose);

#[bench]
fn mat_div_scalar(b: &mut Bencher) {
    let a = DMatrix::from_row_slice(1000, 1000, &vec![2.0; 1000000]);
    let n = 42.0;

    b.iter(|| {
        let mut aa = a.clone();
        let mut b = aa.slice_mut((0, 0), (1000, 1000));
        b /= n
    })
}

#[bench]
fn mat100_add_mat100(bench: &mut Bencher) {
    let a = DMatrix::<f64>::new_random(100, 100);
    let b = DMatrix::<f64>::new_random(100, 100);

    bench.iter(|| { &a + &b })
}

#[bench]
fn mat4_mul_mat4(bench: &mut Bencher) {
    let a = DMatrix::<f64>::new_random(4, 4);
    let b = DMatrix::<f64>::new_random(4, 4);

    bench.iter(|| { &a * &b })
}

#[bench]
fn mat5_mul_mat5(bench: &mut Bencher) {
    let a = DMatrix::<f64>::new_random(5, 5);
    let b = DMatrix::<f64>::new_random(5, 5);

    bench.iter(|| { &a * &b })
}

#[bench]
fn mat6_mul_mat6(bench: &mut Bencher) {
    let a = DMatrix::<f64>::new_random(6, 6);
    let b = DMatrix::<f64>::new_random(6, 6);

    bench.iter(|| { &a * &b })
}

#[bench]
fn mat7_mul_mat7(bench: &mut Bencher) {
    let a = DMatrix::<f64>::new_random(7, 7);
    let b = DMatrix::<f64>::new_random(7, 7);

    bench.iter(|| { &a * &b })
}

#[bench]
fn mat8_mul_mat8(bench: &mut Bencher) {
    let a = DMatrix::<f64>::new_random(8, 8);
    let b = DMatrix::<f64>::new_random(8, 8);

    bench.iter(|| { &a * &b })
}

#[bench]
fn mat9_mul_mat9(bench: &mut Bencher) {
    let a = DMatrix::<f64>::new_random(9, 9);
    let b = DMatrix::<f64>::new_random(9, 9);

    bench.iter(|| { &a * &b })
}

#[bench]
fn mat10_mul_mat10(bench: &mut Bencher) {
    let a = DMatrix::<f64>::new_random(10, 10);
    let b = DMatrix::<f64>::new_random(10, 10);

    bench.iter(|| { &a * &b })
}

#[bench]
fn mat10_mul_mat10_static(bench: &mut Bencher) {
    let a = MatrixN::<f64, U10>::new_random();
    let b = MatrixN::<f64, U10>::new_random();

    bench.iter(|| { &a * &b })
}

#[bench]
fn mat100_mul_mat100(bench: &mut Bencher) {
    let a = DMatrix::<f64>::new_random(100, 100);
    let b = DMatrix::<f64>::new_random(100, 100);

    bench.iter(|| { &a * &b })
}

#[bench]
fn mat500_mul_mat500(bench: &mut Bencher) {
    let a = DMatrix::<f64>::from_element(500, 500, 5f64);
    let b = DMatrix::<f64>::from_element(500, 500, 6f64);

    bench.iter(|| { &a * &b })
}

#[bench]
fn copy_from(bench: &mut Bencher) {
    let a = DMatrix::<f64>::new_random(1000, 1000);
    let mut b = DMatrix::<f64>::new_random(1000, 1000);

    bench.iter(|| {
        b.copy_from(&a);
    })
}

#[bench]
fn axpy(bench: &mut Bencher) {
    let x = DVector::<f64>::from_element(100000, 2.0);
    let mut y = DVector::<f64>::from_element(100000, 3.0);
    let a = 42.0;

    bench.iter(|| {
        y.axpy(a, &x, 1.0);
    })
}

#[bench]
fn tr_mul_to(bench: &mut Bencher) {
    let a = DMatrix::<f64>::new_random(1000, 1000);
    let b = DVector::<f64>::new_random(1000);
    let mut c = DVector::from_element(1000, 0.0);

    bench.iter(|| {
        a.tr_mul_to(&b, &mut c)
    })
}

#[bench]
fn mat_mul_mat(bench: &mut Bencher) {
    let a = DMatrix::<f64>::new_random(100, 100);
    let b = DMatrix::<f64>::new_random(100, 100);
    let mut ab = DMatrix::<f64>::from_element(100, 100, 0.0);

    bench.iter(|| {
        test::black_box(a.mul_to(&b, &mut ab));
    })
}
@@ -0,0 +1,2 @@
mod matrix;
mod vector;
@@ -0,0 +1,128 @@
use rand::{IsaacRng, Rng};
use test::{self, Bencher};
use typenum::U10000;
use na::{Vector2, Vector3, Vector4, VectorN, DVector};
use std::ops::{Add, Sub, Mul, Div};

#[path="../common/macros.rs"]
mod macros;

bench_binop!(vec2_add_v_f32, Vector2<f32>, Vector2<f32>, add);
bench_binop!(vec3_add_v_f32, Vector3<f32>, Vector3<f32>, add);
bench_binop!(vec4_add_v_f32, Vector4<f32>, Vector4<f32>, add);

bench_binop!(vec2_add_v_f64, Vector2<f64>, Vector2<f64>, add);
bench_binop!(vec3_add_v_f64, Vector3<f64>, Vector3<f64>, add);
bench_binop!(vec4_add_v_f64, Vector4<f64>, Vector4<f64>, add);

bench_binop!(vec2_sub_v, Vector2<f32>, Vector2<f32>, sub);
bench_binop!(vec3_sub_v, Vector3<f32>, Vector3<f32>, sub);
bench_binop!(vec4_sub_v, Vector4<f32>, Vector4<f32>, sub);

bench_binop!(vec2_mul_s, Vector2<f32>, f32, mul);
bench_binop!(vec3_mul_s, Vector3<f32>, f32, mul);
bench_binop!(vec4_mul_s, Vector4<f32>, f32, mul);

bench_binop!(vec2_div_s, Vector2<f32>, f32, div);
bench_binop!(vec3_div_s, Vector3<f32>, f32, div);
bench_binop!(vec4_div_s, Vector4<f32>, f32, div);

bench_binop_ref!(vec2_dot_f32, Vector2<f32>, Vector2<f32>, dot);
bench_binop_ref!(vec3_dot_f32, Vector3<f32>, Vector3<f32>, dot);
bench_binop_ref!(vec4_dot_f32, Vector4<f32>, Vector4<f32>, dot);

bench_binop_ref!(vec2_dot_f64, Vector2<f64>, Vector2<f64>, dot);
bench_binop_ref!(vec3_dot_f64, Vector3<f64>, Vector3<f64>, dot);
bench_binop_ref!(vec4_dot_f64, Vector4<f64>, Vector4<f64>, dot);

bench_binop_ref!(vec3_cross, Vector3<f32>, Vector3<f32>, cross);

bench_unop!(vec2_norm, Vector2<f32>, norm);
bench_unop!(vec3_norm, Vector3<f32>, norm);
bench_unop!(vec4_norm, Vector4<f32>, norm);

bench_unop!(vec2_normalize, Vector2<f32>, normalize);
bench_unop!(vec3_normalize, Vector3<f32>, normalize);
bench_unop!(vec4_normalize, Vector4<f32>, normalize);

bench_binop_ref!(vec10000_dot_f64, VectorN<f64, U10000>, VectorN<f64, U10000>, dot);
bench_binop_ref!(vec10000_dot_f32, VectorN<f32, U10000>, VectorN<f32, U10000>, dot);

#[bench]
fn vec10000_axpy_f64(bh: &mut Bencher) {
    let mut rng = IsaacRng::new_unseeded();
    let mut a = DVector::new_random(10000);
    let b = DVector::new_random(10000);
    let n = rng.gen::<f64>();

    bh.iter(|| {
        a.axpy(n, &b, 1.0)
    })
}

#[bench]
fn vec10000_axpy_beta_f64(bh: &mut Bencher) {
    let mut rng = IsaacRng::new_unseeded();
    let mut a = DVector::new_random(10000);
    let b = DVector::new_random(10000);
    let n = rng.gen::<f64>();
    let beta = rng.gen::<f64>();

    bh.iter(|| {
        a.axpy(n, &b, beta)
    })
}

#[bench]
fn vec10000_axpy_f64_slice(bh: &mut Bencher) {
    let mut rng = IsaacRng::new_unseeded();
    let mut a = DVector::new_random(10000);
    let b = DVector::new_random(10000);
    let n = rng.gen::<f64>();

    bh.iter(|| {
        let mut a = a.fixed_rows_mut::<U10000>(0);
        let b = b.fixed_rows::<U10000>(0);

        a.axpy(n, &b, 1.0)
    })
}

#[bench]
fn vec10000_axpy_f64_static(bh: &mut Bencher) {
    let mut rng = IsaacRng::new_unseeded();
    let mut a = VectorN::<f64, U10000>::new_random();
    let b = VectorN::<f64, U10000>::new_random();
    let n = rng.gen::<f64>();

    // NOTE: for some reason, it is much faster if the arguments are boxed (Box::new(VectorN...)).
    bh.iter(|| {
        a.axpy(n, &b, 1.0)
    })
}

#[bench]
fn vec10000_axpy_f32(bh: &mut Bencher) {
    let mut rng = IsaacRng::new_unseeded();
    let mut a = DVector::new_random(10000);
    let b = DVector::new_random(10000);
    let n = rng.gen::<f32>();

    bh.iter(|| {
        a.axpy(n, &b, 1.0)
    })
}

#[bench]
fn vec10000_axpy_beta_f32(bh: &mut Bencher) {
    let mut rng = IsaacRng::new_unseeded();
    let mut a = DVector::new_random(10000);
    let b = DVector::new_random(10000);
    let n = rng.gen::<f32>();
    let beta = rng.gen::<f32>();

    bh.iter(|| {
        a.axpy(n, &b, beta)
    })
}
@@ -0,0 +1 @@
mod quaternion;
@@ -0,0 +1,22 @@
use rand::{IsaacRng, Rng};
use test::{self, Bencher};
use na::{Quaternion, UnitQuaternion, Vector3};
use std::ops::{Add, Sub, Mul, Div};

#[path="../common/macros.rs"]
mod macros;

bench_binop!(quaternion_add_q, Quaternion<f32>, Quaternion<f32>, add);
bench_binop!(quaternion_sub_q, Quaternion<f32>, Quaternion<f32>, sub);
bench_binop!(quaternion_mul_q, Quaternion<f32>, Quaternion<f32>, mul);

bench_binop!(unit_quaternion_mul_v, UnitQuaternion<f32>, Vector3<f32>, mul);

bench_binop!(quaternion_mul_s, Quaternion<f32>, f32, mul);
bench_binop!(quaternion_div_s, Quaternion<f32>, f32, div);

bench_unop!(quaternion_inv, Quaternion<f32>, try_inverse);
bench_unop!(unit_quaternion_inv, UnitQuaternion<f32>, inverse);

// bench_unop_self!(quaternion_conjugate, Quaternion<f32>, conjugate);
// bench_unop!(quaternion_normalize, Quaternion<f32>, normalize);
@@ -0,0 +1,21 @@
#![feature(test)]
#![allow(unused_macros)]

extern crate test;
extern crate rand;
extern crate typenum;
extern crate nalgebra as na;

use rand::{Rng, IsaacRng};
use na::DMatrix;

mod core;
mod linalg;
mod geometry;

fn reproductible_dmatrix(nrows: usize, ncols: usize) -> DMatrix<f64> {
    let mut rng = IsaacRng::new_unseeded();
    DMatrix::<f64>::from_fn(nrows, ncols, |_, _| rng.gen())
}
@@ -0,0 +1,75 @@
use test::{self, Bencher};
use na::{Matrix4, DMatrix, Bidiagonal};

#[path="../common/macros.rs"]
mod macros;

// Without unpack.
#[bench]
fn bidiagonalize_100x100(bh: &mut Bencher) {
    let m = DMatrix::<f64>::new_random(100, 100);
    bh.iter(|| test::black_box(Bidiagonal::new(m.clone())))
}

#[bench]
fn bidiagonalize_100x500(bh: &mut Bencher) {
    let m = DMatrix::<f64>::new_random(100, 500);
    bh.iter(|| test::black_box(Bidiagonal::new(m.clone())))
}

#[bench]
fn bidiagonalize_4x4(bh: &mut Bencher) {
    let m = Matrix4::<f64>::new_random();
    bh.iter(|| test::black_box(Bidiagonal::new(m.clone())))
}

#[bench]
fn bidiagonalize_500x100(bh: &mut Bencher) {
    let m = DMatrix::<f64>::new_random(500, 100);
    bh.iter(|| test::black_box(Bidiagonal::new(m.clone())))
}

#[bench]
fn bidiagonalize_500x500(bh: &mut Bencher) {
    let m = DMatrix::<f64>::new_random(500, 500);
    bh.iter(|| test::black_box(Bidiagonal::new(m.clone())))
}

// With unpack.
#[bench]
fn bidiagonalize_unpack_100x100(bh: &mut Bencher) {
    let m = DMatrix::<f64>::new_random(100, 100);
    bh.iter(|| {
        let bidiag = Bidiagonal::new(m.clone());
        let _ = bidiag.unpack();
    })
}

#[bench]
fn bidiagonalize_unpack_100x500(bh: &mut Bencher) {
    let m = DMatrix::<f64>::new_random(100, 500);
    bh.iter(|| {
        let bidiag = Bidiagonal::new(m.clone());
        let _ = bidiag.unpack();
    })
}

#[bench]
fn bidiagonalize_unpack_500x100(bh: &mut Bencher) {
    let m = DMatrix::<f64>::new_random(500, 100);
    bh.iter(|| {
        let bidiag = Bidiagonal::new(m.clone());
        let _ = bidiag.unpack();
    })
}

#[bench]
fn bidiagonalize_unpack_500x500(bh: &mut Bencher) {
    let m = DMatrix::<f64>::new_random(500, 500);
    bh.iter(|| {
        let bidiag = Bidiagonal::new(m.clone());
        let _ = bidiag.unpack();
    })
}
@@ -0,0 +1,109 @@
use test::{self, Bencher};
use na::{DMatrix, DVector, Cholesky};

#[bench]
fn cholesky_100x100(bh: &mut Bencher) {
    let m = DMatrix::<f64>::new_random(100, 100);
    let m = &m * m.transpose();

    bh.iter(|| test::black_box(Cholesky::new(m.clone())))
}

#[bench]
fn cholesky_500x500(bh: &mut Bencher) {
    let m = DMatrix::<f64>::new_random(500, 500);
    let m = &m * m.transpose();

    bh.iter(|| test::black_box(Cholesky::new(m.clone())))
}

// With unpack.
#[bench]
fn cholesky_decompose_unpack_100x100(bh: &mut Bencher) {
    let m = DMatrix::<f64>::new_random(100, 100);
    let m = &m * m.transpose();

    bh.iter(|| {
        let chol = Cholesky::new(m.clone()).unwrap();
        let _ = chol.unpack();
    })
}

#[bench]
fn cholesky_decompose_unpack_500x500(bh: &mut Bencher) {
    let m = DMatrix::<f64>::new_random(500, 500);
    let m = &m * m.transpose();

    bh.iter(|| {
        let chol = Cholesky::new(m.clone()).unwrap();
        let _ = chol.unpack();
    })
}

#[bench]
fn cholesky_solve_10x10(bh: &mut Bencher) {
    let m = DMatrix::<f64>::new_random(10, 10);
    let m = &m * m.transpose();
    let v = DVector::<f64>::new_random(10);
    let chol = Cholesky::new(m.clone()).unwrap();

    bh.iter(|| {
        let _ = chol.solve(&v);
    })
}

#[bench]
fn cholesky_solve_100x100(bh: &mut Bencher) {
    let m = DMatrix::<f64>::new_random(100, 100);
    let m = &m * m.transpose();
    let v = DVector::<f64>::new_random(100);
    let chol = Cholesky::new(m.clone()).unwrap();

    bh.iter(|| {
        let _ = chol.solve(&v);
    })
}

#[bench]
fn cholesky_solve_500x500(bh: &mut Bencher) {
    let m = DMatrix::<f64>::new_random(500, 500);
    let m = &m * m.transpose();
    let v = DVector::<f64>::new_random(500);
    let chol = Cholesky::new(m.clone()).unwrap();

    bh.iter(|| {
        let _ = chol.solve(&v);
    })
}

#[bench]
fn cholesky_inverse_10x10(bh: &mut Bencher) {
    let m = DMatrix::<f64>::new_random(10, 10);
    let m = &m * m.transpose();
    let chol = Cholesky::new(m.clone()).unwrap();

    bh.iter(|| {
        let _ = chol.inverse();
    })
}

#[bench]
fn cholesky_inverse_100x100(bh: &mut Bencher) {
    let m = DMatrix::<f64>::new_random(100, 100);
    let m = &m * m.transpose();
    let chol = Cholesky::new(m.clone()).unwrap();

    bh.iter(|| {
        let _ = chol.inverse();
    })
}

#[bench]
fn cholesky_inverse_500x500(bh: &mut Bencher) {
    let m = DMatrix::<f64>::new_random(500, 500);
    let m = &m * m.transpose();
    let chol = Cholesky::new(m.clone()).unwrap();

    bh.iter(|| {
        let _ = chol.inverse();
    })
}
@@ -0,0 +1,30 @@
use test::Bencher;
use na::{DMatrix, Eigen};

#[bench]
fn eigen_100x100(bh: &mut Bencher) {
    let m = DMatrix::<f64>::new_random(100, 100);

    bh.iter(|| Eigen::new(m.clone(), 1.0e-7, 0))
}

#[bench]
fn eigen_500x500(bh: &mut Bencher) {
    let m = DMatrix::<f64>::new_random(500, 500);

    bh.iter(|| Eigen::new(m.clone(), 1.0e-7, 0))
}

#[bench]
fn eigenvalues_100x100(bh: &mut Bencher) {
    let m = DMatrix::<f64>::new_random(100, 100);

    bh.iter(|| m.clone().eigenvalues(1.0e-7, 0))
}

#[bench]
fn eigenvalues_500x500(bh: &mut Bencher) {
    let m = DMatrix::<f64>::new_random(500, 500);

    bh.iter(|| m.clone().eigenvalues(1.0e-7, 0))
}
@@ -0,0 +1,114 @@
use test::{self, Bencher};
use na::{DMatrix, DVector, FullPivLU};

// Without unpack.
#[bench]
fn full_piv_lu_decompose_10x10(bh: &mut Bencher) {
    let m = DMatrix::<f64>::new_random(10, 10);
    bh.iter(|| test::black_box(FullPivLU::new(m.clone())))
}

#[bench]
fn full_piv_lu_decompose_100x100(bh: &mut Bencher) {
    let m = DMatrix::<f64>::new_random(100, 100);
    bh.iter(|| test::black_box(FullPivLU::new(m.clone())))
}

#[bench]
fn full_piv_lu_decompose_500x500(bh: &mut Bencher) {
    let m = DMatrix::<f64>::new_random(500, 500);
    bh.iter(|| test::black_box(FullPivLU::new(m.clone())))
}

#[bench]
fn full_piv_lu_solve_10x10(bh: &mut Bencher) {
    let m = DMatrix::<f64>::new_random(10, 10);
    let lu = FullPivLU::new(m.clone());

    bh.iter(|| {
        let mut b = DVector::<f64>::from_element(10, 1.0);
        lu.solve(&mut b);
    })
}

#[bench]
fn full_piv_lu_solve_100x100(bh: &mut Bencher) {
    let m = DMatrix::<f64>::new_random(100, 100);
    let lu = FullPivLU::new(m.clone());

    bh.iter(|| {
        let mut b = DVector::<f64>::from_element(100, 1.0);
        lu.solve(&mut b);
    })
}

#[bench]
fn full_piv_lu_solve_500x500(bh: &mut Bencher) {
    let m = DMatrix::<f64>::new_random(500, 500);
    let lu = FullPivLU::new(m.clone());

    bh.iter(|| {
        let mut b = DVector::<f64>::from_element(500, 1.0);
        lu.solve(&mut b);
    })
}

#[bench]
fn full_piv_lu_inverse_10x10(bh: &mut Bencher) {
    let m = DMatrix::<f64>::new_random(10, 10);
    let lu = FullPivLU::new(m.clone());

    bh.iter(|| {
        test::black_box(lu.try_inverse())
    })
}

#[bench]
fn full_piv_lu_inverse_100x100(bh: &mut Bencher) {
    let m = DMatrix::<f64>::new_random(100, 100);
    let lu = FullPivLU::new(m.clone());

    bh.iter(|| {
        test::black_box(lu.try_inverse())
    })
}

#[bench]
fn full_piv_lu_inverse_500x500(bh: &mut Bencher) {
    let m = DMatrix::<f64>::new_random(500, 500);
    let lu = FullPivLU::new(m.clone());

    bh.iter(|| {
        test::black_box(lu.try_inverse())
    })
}

#[bench]
fn full_piv_lu_determinant_10x10(bh: &mut Bencher) {
    let m = DMatrix::<f64>::new_random(10, 10);
    let lu = FullPivLU::new(m.clone());

    bh.iter(|| {
        test::black_box(lu.determinant())
    })
}

#[bench]
fn full_piv_lu_determinant_100x100(bh: &mut Bencher) {
    let m = DMatrix::<f64>::new_random(100, 100);
    let lu = FullPivLU::new(m.clone());

    bh.iter(|| {
        test::black_box(lu.determinant())
    })
}

#[bench]
fn full_piv_lu_determinant_500x500(bh: &mut Bencher) {
    let m = DMatrix::<f64>::new_random(500, 500);
    let lu = FullPivLU::new(m.clone());

    bh.iter(|| {
        test::black_box(lu.determinant())
    })
}
@@ -0,0 +1,60 @@
use test::{self, Bencher};
use na::{Matrix4, DMatrix, Hessenberg};

#[path="../common/macros.rs"]
mod macros;

// Without unpack.
#[bench]
fn hessenberg_decompose_4x4(bh: &mut Bencher) {
    let m = Matrix4::<f64>::new_random();
    bh.iter(|| test::black_box(Hessenberg::new(m.clone())))
}

#[bench]
fn hessenberg_decompose_100x100(bh: &mut Bencher) {
    let m = DMatrix::<f64>::new_random(100, 100);
    bh.iter(|| test::black_box(Hessenberg::new(m.clone())))
}

#[bench]
fn hessenberg_decompose_200x200(bh: &mut Bencher) {
    let m = DMatrix::<f64>::new_random(200, 200);
    bh.iter(|| test::black_box(Hessenberg::new(m.clone())))
}

#[bench]
fn hessenberg_decompose_500x500(bh: &mut Bencher) {
    let m = DMatrix::<f64>::new_random(500, 500);
    bh.iter(|| test::black_box(Hessenberg::new(m.clone())))
}

// With unpack.
#[bench]
fn hessenberg_decompose_unpack_100x100(bh: &mut Bencher) {
    let m = DMatrix::<f64>::new_random(100, 100);
    bh.iter(|| {
        let hess = Hessenberg::new(m.clone());
        let _ = hess.unpack();
    })
}

#[bench]
fn hessenberg_decompose_unpack_200x200(bh: &mut Bencher) {
    let m = DMatrix::<f64>::new_random(200, 200);
    bh.iter(|| {
        let hess = Hessenberg::new(m.clone());
        let _ = hess.unpack();
    })
}

#[bench]
fn hessenberg_decompose_unpack_500x500(bh: &mut Bencher) {
    let m = DMatrix::<f64>::new_random(500, 500);
    bh.iter(|| {
        let hess = Hessenberg::new(m.clone());
        let _ = hess.unpack();
    })
}
@@ -0,0 +1,114 @@
use test::{self, Bencher};
use na::{DMatrix, DVector, LU};

// Without unpack.
#[bench]
fn lu_decompose_10x10(bh: &mut Bencher) {
    let m = DMatrix::<f64>::new_random(10, 10);
    bh.iter(|| test::black_box(LU::new(m.clone())))
}

#[bench]
fn lu_decompose_100x100(bh: &mut Bencher) {
    let m = DMatrix::<f64>::new_random(100, 100);
    bh.iter(|| test::black_box(LU::new(m.clone())))
}

#[bench]
fn lu_decompose_500x500(bh: &mut Bencher) {
    let m = DMatrix::<f64>::new_random(500, 500);
    bh.iter(|| test::black_box(LU::new(m.clone())))
}

#[bench]
fn lu_solve_10x10(bh: &mut Bencher) {
    let m = DMatrix::<f64>::new_random(10, 10);
    let lu = LU::new(m.clone());

    bh.iter(|| {
        let mut b = DVector::<f64>::from_element(10, 1.0);
        lu.solve(&mut b);
    })
}

#[bench]
fn lu_solve_100x100(bh: &mut Bencher) {
    let m = DMatrix::<f64>::new_random(100, 100);
    let lu = LU::new(m.clone());

    bh.iter(|| {
        let mut b = DVector::<f64>::from_element(100, 1.0);
        lu.solve(&mut b);
    })
}

#[bench]
fn lu_solve_500x500(bh: &mut Bencher) {
    let m = DMatrix::<f64>::new_random(500, 500);
    let lu = LU::new(m.clone());

    bh.iter(|| {
        let mut b = DVector::<f64>::from_element(500, 1.0);
        lu.solve(&mut b);
    })
}

#[bench]
fn lu_inverse_10x10(bh: &mut Bencher) {
    let m = DMatrix::<f64>::new_random(10, 10);
    let lu = LU::new(m.clone());

    bh.iter(|| {
        test::black_box(lu.try_inverse())
    })
}

#[bench]
fn lu_inverse_100x100(bh: &mut Bencher) {
    let m = DMatrix::<f64>::new_random(100, 100);
    let lu = LU::new(m.clone());

    bh.iter(|| {
        test::black_box(lu.try_inverse())
    })
}

#[bench]
fn lu_inverse_500x500(bh: &mut Bencher) {
    let m = DMatrix::<f64>::new_random(500, 500);
    let lu = LU::new(m.clone());

    bh.iter(|| {
        test::black_box(lu.try_inverse())
    })
}

#[bench]
fn lu_determinant_10x10(bh: &mut Bencher) {
    let m = DMatrix::<f64>::new_random(10, 10);
    let lu = LU::new(m.clone());

    bh.iter(|| {
        test::black_box(lu.determinant())
    })
}

#[bench]
fn lu_determinant_100x100(bh: &mut Bencher) {
    let m = DMatrix::<f64>::new_random(100, 100);
    let lu = LU::new(m.clone());

    bh.iter(|| {
        test::black_box(lu.determinant())
    })
}

#[bench]
fn lu_determinant_500x500(bh: &mut Bencher) {
    let m = DMatrix::<f64>::new_random(500, 500);
    let lu = LU::new(m.clone());

    bh.iter(|| {
        test::black_box(lu.determinant())
    })
}
@@ -0,0 +1,11 @@
mod solve;
mod cholesky;
mod qr;
mod hessenberg;
mod bidiagonal;
mod lu;
mod full_piv_lu;
mod svd;
mod schur;
mod symmetric_eigen;
// mod eigen;
@@ -0,0 +1,137 @@
use test::{self, Bencher};
use na::{Matrix4, DMatrix, DVector, QR};

#[path="../common/macros.rs"]
mod macros;

// Without unpack.
#[bench]
fn qr_decompose_100x100(bh: &mut Bencher) {
    let m = DMatrix::<f64>::new_random(100, 100);
    bh.iter(|| test::black_box(QR::new(m.clone())))
}

#[bench]
fn qr_decompose_100x500(bh: &mut Bencher) {
    let m = DMatrix::<f64>::new_random(100, 500);
    bh.iter(|| test::black_box(QR::new(m.clone())))
}

#[bench]
fn qr_decompose_4x4(bh: &mut Bencher) {
    let m = Matrix4::<f64>::new_random();
    bh.iter(|| test::black_box(QR::new(m.clone())))
}

#[bench]
fn qr_decompose_500x100(bh: &mut Bencher) {
    let m = DMatrix::<f64>::new_random(500, 100);
    bh.iter(|| test::black_box(QR::new(m.clone())))
}

#[bench]
fn qr_decompose_500x500(bh: &mut Bencher) {
    let m = DMatrix::<f64>::new_random(500, 500);
    bh.iter(|| test::black_box(QR::new(m.clone())))
}

// With unpack.
#[bench]
fn qr_decompose_unpack_100x100(bh: &mut Bencher) {
    let m = DMatrix::<f64>::new_random(100, 100);
    bh.iter(|| {
        let qr = QR::new(m.clone());
        let _ = qr.unpack();
    })
}

#[bench]
fn qr_decompose_unpack_100x500(bh: &mut Bencher) {
    let m = DMatrix::<f64>::new_random(100, 500);
    bh.iter(|| {
        let qr = QR::new(m.clone());
        let _ = qr.unpack();
    })
}

#[bench]
fn qr_decompose_unpack_500x100(bh: &mut Bencher) {
    let m = DMatrix::<f64>::new_random(500, 100);
    bh.iter(|| {
        let qr = QR::new(m.clone());
        let _ = qr.unpack();
    })
}

#[bench]
fn qr_decompose_unpack_500x500(bh: &mut Bencher) {
    let m = DMatrix::<f64>::new_random(500, 500);
    bh.iter(|| {
        let qr = QR::new(m.clone());
        let _ = qr.unpack();
    })
}

#[bench]
fn qr_solve_10x10(bh: &mut Bencher) {
    let m = DMatrix::<f64>::new_random(10, 10);
    let qr = QR::new(m.clone());

    bh.iter(|| {
        let mut b = DVector::<f64>::from_element(10, 1.0);
        qr.solve(&mut b);
    })
}

#[bench]
fn qr_solve_100x100(bh: &mut Bencher) {
    let m = DMatrix::<f64>::new_random(100, 100);
    let qr = QR::new(m.clone());

    bh.iter(|| {
        let mut b = DVector::<f64>::from_element(100, 1.0);
        qr.solve(&mut b);
    })
}

#[bench]
fn qr_solve_500x500(bh: &mut Bencher) {
    let m = DMatrix::<f64>::new_random(500, 500);
    let qr = QR::new(m.clone());

    bh.iter(|| {
        let mut b = DVector::<f64>::from_element(500, 1.0);
        qr.solve(&mut b);
    })
}

#[bench]
fn qr_inverse_10x10(bh: &mut Bencher) {
    let m = DMatrix::<f64>::new_random(10, 10);
    let qr = QR::new(m.clone());

    bh.iter(|| {
        test::black_box(qr.try_inverse())
    })
}

#[bench]
fn qr_inverse_100x100(bh: &mut Bencher) {
    let m = DMatrix::<f64>::new_random(100, 100);
    let qr = QR::new(m.clone());

    bh.iter(|| {
        test::black_box(qr.try_inverse())
    })
}

#[bench]
fn qr_inverse_500x500(bh: &mut Bencher) {
    let m = DMatrix::<f64>::new_random(500, 500);
    let qr = QR::new(m.clone());

    bh.iter(|| {
        test::black_box(qr.try_inverse())
    })
}
@@ -0,0 +1,51 @@
use test::{self, Bencher};
use na::{Matrix4, RealSchur};

#[bench]
fn schur_decompose_4x4(bh: &mut Bencher) {
    let m = Matrix4::<f64>::new_random();
    bh.iter(|| test::black_box(RealSchur::new(m.clone())))
}

#[bench]
fn schur_decompose_10x10(bh: &mut Bencher) {
    let m = ::reproductible_dmatrix(10, 10);
    bh.iter(|| test::black_box(RealSchur::new(m.clone())))
}

#[bench]
fn schur_decompose_100x100(bh: &mut Bencher) {
    let m = ::reproductible_dmatrix(100, 100);
    bh.iter(|| test::black_box(RealSchur::new(m.clone())))
}

#[bench]
fn schur_decompose_200x200(bh: &mut Bencher) {
    let m = ::reproductible_dmatrix(200, 200);
    bh.iter(|| test::black_box(RealSchur::new(m.clone())))
}

#[bench]
fn eigenvalues_4x4(bh: &mut Bencher) {
    let m = Matrix4::<f64>::new_random();
    bh.iter(|| test::black_box(m.complex_eigenvalues()))
}

#[bench]
fn eigenvalues_10x10(bh: &mut Bencher) {
    let m = ::reproductible_dmatrix(10, 10);
    bh.iter(|| test::black_box(m.complex_eigenvalues()))
}

#[bench]
fn eigenvalues_100x100(bh: &mut Bencher) {
    let m = ::reproductible_dmatrix(100, 100);
    bh.iter(|| test::black_box(m.complex_eigenvalues()))
}

#[bench]
fn eigenvalues_200x200(bh: &mut Bencher) {
    let m = ::reproductible_dmatrix(200, 200);
    bh.iter(|| test::black_box(m.complex_eigenvalues()))
}
@@ -0,0 +1,82 @@
use test::Bencher;
use na::{DMatrix, DVector};

#[bench]
fn solve_l_triangular_100x100(bh: &mut Bencher) {
    let m = DMatrix::<f64>::new_random(100, 100);
    let v = DVector::<f64>::new_random(100);

    bh.iter(|| {
        let _ = m.solve_lower_triangular(&v);
    })
}

#[bench]
fn solve_l_triangular_1000x1000(bh: &mut Bencher) {
    let m = DMatrix::<f64>::new_random(1000, 1000);
    let v = DVector::<f64>::new_random(1000);

    bh.iter(|| {
        let _ = m.solve_lower_triangular(&v);
    })
}

#[bench]
fn tr_solve_l_triangular_100x100(bh: &mut Bencher) {
    let m = DMatrix::<f64>::new_random(100, 100);
    let v = DVector::<f64>::new_random(100);

    bh.iter(|| {
        let _ = m.tr_solve_lower_triangular(&v);
    })
}

#[bench]
fn tr_solve_l_triangular_1000x1000(bh: &mut Bencher) {
    let m = DMatrix::<f64>::new_random(1000, 1000);
    let v = DVector::<f64>::new_random(1000);

    bh.iter(|| {
        let _ = m.tr_solve_lower_triangular(&v);
    })
}

#[bench]
fn solve_u_triangular_100x100(bh: &mut Bencher) {
    let m = DMatrix::<f64>::new_random(100, 100);
    let v = DVector::<f64>::new_random(100);

    bh.iter(|| {
        let _ = m.solve_upper_triangular(&v);
    })
}

#[bench]
fn solve_u_triangular_1000x1000(bh: &mut Bencher) {
    let m = DMatrix::<f64>::new_random(1000, 1000);
    let v = DVector::<f64>::new_random(1000);

    bh.iter(|| {
        let _ = m.solve_upper_triangular(&v);
    })
}

#[bench]
fn tr_solve_u_triangular_100x100(bh: &mut Bencher) {
    let m = DMatrix::<f64>::new_random(100, 100);
    let v = DVector::<f64>::new_random(100);

    bh.iter(|| {
        let _ = m.tr_solve_upper_triangular(&v);
    })
}

#[bench]
fn tr_solve_u_triangular_1000x1000(bh: &mut Bencher) {
    let m = DMatrix::<f64>::new_random(1000, 1000);
    let v = DVector::<f64>::new_random(1000);

    bh.iter(|| {
        let _ = m.tr_solve_upper_triangular(&v);
    })
}
@@ -0,0 +1,99 @@
use test::{self, Bencher};
use na::{Matrix4, SVD};

#[bench]
fn svd_decompose_4x4(bh: &mut Bencher) {
    let m = Matrix4::<f64>::new_random();
    bh.iter(|| test::black_box(SVD::new(m.clone(), true, true)))
}

#[bench]
fn svd_decompose_10x10(bh: &mut Bencher) {
    let m = ::reproductible_dmatrix(10, 10);
    bh.iter(|| test::black_box(SVD::new(m.clone(), true, true)))
}

#[bench]
fn svd_decompose_100x100(bh: &mut Bencher) {
    let m = ::reproductible_dmatrix(100, 100);
    bh.iter(|| test::black_box(SVD::new(m.clone(), true, true)))
}

#[bench]
fn svd_decompose_200x200(bh: &mut Bencher) {
    let m = ::reproductible_dmatrix(200, 200);
    bh.iter(|| test::black_box(SVD::new(m.clone(), true, true)))
}

#[bench]
fn rank_4x4(bh: &mut Bencher) {
    let m = Matrix4::<f64>::new_random();
    bh.iter(|| test::black_box(m.rank(1.0e-10)))
}

#[bench]
fn rank_10x10(bh: &mut Bencher) {
    let m = ::reproductible_dmatrix(10, 10);
    bh.iter(|| test::black_box(m.rank(1.0e-10)))
}

#[bench]
fn rank_100x100(bh: &mut Bencher) {
    let m = ::reproductible_dmatrix(100, 100);
    bh.iter(|| test::black_box(m.rank(1.0e-10)))
}

#[bench]
fn rank_200x200(bh: &mut Bencher) {
    let m = ::reproductible_dmatrix(200, 200);
    bh.iter(|| test::black_box(m.rank(1.0e-10)))
}

#[bench]
fn singular_values_4x4(bh: &mut Bencher) {
    let m = Matrix4::<f64>::new_random();
    bh.iter(|| test::black_box(m.singular_values()))
}

#[bench]
fn singular_values_10x10(bh: &mut Bencher) {
    let m = ::reproductible_dmatrix(10, 10);
    bh.iter(|| test::black_box(m.singular_values()))
}

#[bench]
fn singular_values_100x100(bh: &mut Bencher) {
    let m = ::reproductible_dmatrix(100, 100);
    bh.iter(|| test::black_box(m.singular_values()))
}

#[bench]
fn singular_values_200x200(bh: &mut Bencher) {
    let m = ::reproductible_dmatrix(200, 200);
    bh.iter(|| test::black_box(m.singular_values()))
}

#[bench]
fn pseudo_inverse_4x4(bh: &mut Bencher) {
    let m = Matrix4::<f64>::new_random();
    bh.iter(|| test::black_box(m.clone().pseudo_inverse(1.0e-10)))
}

#[bench]
fn pseudo_inverse_10x10(bh: &mut Bencher) {
    let m = ::reproductible_dmatrix(10, 10);
    bh.iter(|| test::black_box(m.clone().pseudo_inverse(1.0e-10)))
}

#[bench]
fn pseudo_inverse_100x100(bh: &mut Bencher) {
    let m = ::reproductible_dmatrix(100, 100);
    bh.iter(|| test::black_box(m.clone().pseudo_inverse(1.0e-10)))
}

#[bench]
fn pseudo_inverse_200x200(bh: &mut Bencher) {
    let m = ::reproductible_dmatrix(200, 200);
    bh.iter(|| test::black_box(m.clone().pseudo_inverse(1.0e-10)))
}
@@ -0,0 +1,27 @@
use test::{self, Bencher};
use na::{Matrix4, SymmetricEigen};

#[bench]
fn symmetric_eigen_decompose_4x4(bh: &mut Bencher) {
    let m = Matrix4::<f64>::new_random();
    bh.iter(|| test::black_box(SymmetricEigen::new(m.clone())))
}

#[bench]
fn symmetric_eigen_decompose_10x10(bh: &mut Bencher) {
    let m = ::reproductible_dmatrix(10, 10);
    bh.iter(|| test::black_box(SymmetricEigen::new(m.clone())))
}

#[bench]
fn symmetric_eigen_decompose_100x100(bh: &mut Bencher) {
    let m = ::reproductible_dmatrix(100, 100);
    bh.iter(|| test::black_box(SymmetricEigen::new(m.clone())))
}

#[bench]
fn symmetric_eigen_decompose_200x200(bh: &mut Bencher) {
    let m = ::reproductible_dmatrix(200, 200);
    bh.iter(|| test::black_box(SymmetricEigen::new(m.clone())))
}
@@ -1,53 +0,0 @@
#![feature(test)]

extern crate test;
extern crate rand;
extern crate nalgebra as na;

use rand::{IsaacRng, Rng};
use test::Bencher;
use na::{Vector2, Vector3, Vector4, Matrix2, Matrix3, Matrix4};
use std::ops::{Add, Sub, Mul, Div};

#[path="common/macros.rs"]
mod macros;

bench_binop!(_bench_mat2_mul_m, Matrix2<f32>, Matrix2<f32>, mul);
bench_binop!(_bench_mat3_mul_m, Matrix3<f32>, Matrix3<f32>, mul);
bench_binop!(_bench_mat4_mul_m, Matrix4<f32>, Matrix4<f32>, mul);

bench_binop_ref!(_bench_mat2_tr_mul_m, Matrix2<f32>, Matrix2<f32>, tr_mul);
bench_binop_ref!(_bench_mat3_tr_mul_m, Matrix3<f32>, Matrix3<f32>, tr_mul);
bench_binop_ref!(_bench_mat4_tr_mul_m, Matrix4<f32>, Matrix4<f32>, tr_mul);

bench_binop!(_bench_mat2_add_m, Matrix2<f32>, Matrix2<f32>, add);
bench_binop!(_bench_mat3_add_m, Matrix3<f32>, Matrix3<f32>, add);
bench_binop!(_bench_mat4_add_m, Matrix4<f32>, Matrix4<f32>, add);

bench_binop!(_bench_mat2_sub_m, Matrix2<f32>, Matrix2<f32>, sub);
bench_binop!(_bench_mat3_sub_m, Matrix3<f32>, Matrix3<f32>, sub);
bench_binop!(_bench_mat4_sub_m, Matrix4<f32>, Matrix4<f32>, sub);

bench_binop!(_bench_mat2_mul_v, Matrix2<f32>, Vector2<f32>, mul);
bench_binop!(_bench_mat3_mul_v, Matrix3<f32>, Vector3<f32>, mul);
bench_binop!(_bench_mat4_mul_v, Matrix4<f32>, Vector4<f32>, mul);

bench_binop_ref!(_bench_mat2_tr_mul_v, Matrix2<f32>, Vector2<f32>, tr_mul);
bench_binop_ref!(_bench_mat3_tr_mul_v, Matrix3<f32>, Vector3<f32>, tr_mul);
bench_binop_ref!(_bench_mat4_tr_mul_v, Matrix4<f32>, Vector4<f32>, tr_mul);

bench_binop!(_bench_mat2_mul_s, Matrix2<f32>, f32, mul);
bench_binop!(_bench_mat3_mul_s, Matrix3<f32>, f32, mul);
bench_binop!(_bench_mat4_mul_s, Matrix4<f32>, f32, mul);

bench_binop!(_bench_mat2_div_s, Matrix2<f32>, f32, div);
bench_binop!(_bench_mat3_div_s, Matrix3<f32>, f32, div);
bench_binop!(_bench_mat4_div_s, Matrix4<f32>, f32, div);

bench_unop!(_bench_mat2_inv, Matrix2<f32>, try_inverse);
bench_unop!(_bench_mat3_inv, Matrix3<f32>, try_inverse);
bench_unop!(_bench_mat4_inv, Matrix4<f32>, try_inverse);

bench_unop!(_bench_mat2_transpose, Matrix2<f32>, transpose);
bench_unop!(_bench_mat3_transpose, Matrix3<f32>, transpose);
bench_unop!(_bench_mat4_transpose, Matrix4<f32>, transpose);
@@ -1,28 +0,0 @@
#![feature(test)]

extern crate test;
extern crate rand;
extern crate nalgebra as na;

use rand::{IsaacRng, Rng};
use test::Bencher;
use na::{Quaternion, UnitQuaternion, Vector3};
use std::ops::{Add, Sub, Mul, Div};

#[path="common/macros.rs"]
mod macros;

bench_binop!(_bench_quaternion_add_q, Quaternion<f32>, Quaternion<f32>, add);
bench_binop!(_bench_quaternion_sub_q, Quaternion<f32>, Quaternion<f32>, sub);
bench_binop!(_bench_quaternion_mul_q, Quaternion<f32>, Quaternion<f32>, mul);

bench_binop!(_bench_unit_quaternion_mul_v, UnitQuaternion<f32>, Vector3<f32>, mul);

bench_binop!(_bench_quaternion_mul_s, Quaternion<f32>, f32, mul);
bench_binop!(_bench_quaternion_div_s, Quaternion<f32>, f32, div);

bench_unop!(_bench_quaternion_inv, Quaternion<f32>, try_inverse);
bench_unop!(_bench_unit_quaternion_inv, UnitQuaternion<f32>, inverse);

// bench_unop_self!(_bench_quaternion_conjugate, Quaternion<f32>, conjugate);
// bench_unop!(_bench_quaternion_normalize, Quaternion<f32>, normalize);
@@ -1,43 +0,0 @@
#![feature(test)]

extern crate test;
extern crate rand;
extern crate nalgebra as na;

use rand::{IsaacRng, Rng};
use test::Bencher;
use na::{Vector2, Vector3, Vector4};
use std::ops::{Add, Sub, Mul, Div};

#[path="common/macros.rs"]
mod macros;

bench_binop!(_bench_vec2_add_v, Vector2<f32>, Vector2<f32>, add);
bench_binop!(_bench_vec3_add_v, Vector3<f32>, Vector3<f32>, add);
bench_binop!(_bench_vec4_add_v, Vector4<f32>, Vector4<f32>, add);

bench_binop!(_bench_vec2_sub_v, Vector2<f32>, Vector2<f32>, sub);
bench_binop!(_bench_vec3_sub_v, Vector3<f32>, Vector3<f32>, sub);
bench_binop!(_bench_vec4_sub_v, Vector4<f32>, Vector4<f32>, sub);

bench_binop!(_bench_vec2_mul_s, Vector2<f32>, f32, mul);
bench_binop!(_bench_vec3_mul_s, Vector3<f32>, f32, mul);
bench_binop!(_bench_vec4_mul_s, Vector4<f32>, f32, mul);

bench_binop!(_bench_vec2_div_s, Vector2<f32>, f32, div);
bench_binop!(_bench_vec3_div_s, Vector3<f32>, f32, div);
bench_binop!(_bench_vec4_div_s, Vector4<f32>, f32, div);

bench_binop_ref!(_bench_vec2_dot, Vector2<f32>, Vector2<f32>, dot);
bench_binop_ref!(_bench_vec3_dot, Vector3<f32>, Vector3<f32>, dot);
bench_binop_ref!(_bench_vec4_dot, Vector4<f32>, Vector4<f32>, dot);

bench_binop_ref!(_bench_vec3_cross, Vector3<f32>, Vector3<f32>, cross);

bench_unop!(_bench_vec2_norm, Vector2<f32>, norm);
bench_unop!(_bench_vec3_norm, Vector3<f32>, norm);
bench_unop!(_bench_vec4_norm, Vector4<f32>, norm);

bench_unop!(_bench_vec2_normalize, Vector2<f32>, normalize);
bench_unop!(_bench_vec3_normalize, Vector3<f32>, normalize);
bench_unop!(_bench_vec4_normalize, Vector4<f32>, normalize);
@@ -1,11 +1,10 @@
 extern crate alga;
 extern crate nalgebra as na;
 
-use alga::general::Real;
 use alga::linear::FiniteDimInnerSpace;
-use na::{Unit, ColumnVector, OwnedColumnVector, Vector2, Vector3};
-use na::storage::Storage;
-use na::dimension::{DimName, U1};
+use na::{Real, DefaultAllocator, Unit, VectorN, Vector2, Vector3};
+use na::allocator::Allocator;
+use na::dimension::Dim;
 
 /// Reflects a vector wrt. the hyperplane with normal `plane_normal`.
 fn reflect_wrt_hyperplane_with_algebraic_genericity<V>(plane_normal: &Unit<V>, vector: &V) -> V
|
|||
|
||||
|
||||
/// Reflects a vector wrt. the hyperplane with normal `plane_normal`.
|
||||
fn reflect_wrt_hyperplane_with_structural_genericity<N, D, S>(plane_normal: &Unit<ColumnVector<N, D, S>>,
|
||||
vector: &ColumnVector<N, D, S>)
|
||||
-> OwnedColumnVector<N, D, S::Alloc>
|
||||
fn reflect_wrt_hyperplane_with_dimensional_genericity<N: Real, D: Dim>(plane_normal: &Unit<VectorN<N, D>>,
|
||||
vector: &VectorN<N, D>)
|
||||
-> VectorN<N, D>
|
||||
where N: Real,
|
||||
D: DimName,
|
||||
S: Storage<N, D, U1> {
|
||||
D: Dim,
|
||||
DefaultAllocator: Allocator<N, D> {
|
||||
let n = plane_normal.as_ref(); // Get the underlying V.
|
||||
vector - n * (n.dot(vector) * na::convert(2.0))
|
||||
}
|
||||
|
@@ -57,8 +56,8 @@ fn main() {
     assert_eq!(reflect_wrt_hyperplane_with_algebraic_genericity(&plane2, &v2).y, -2.0);
     assert_eq!(reflect_wrt_hyperplane_with_algebraic_genericity(&plane3, &v3).y, -2.0);
 
-    assert_eq!(reflect_wrt_hyperplane_with_structural_genericity(&plane2, &v2).y, -2.0);
-    assert_eq!(reflect_wrt_hyperplane_with_structural_genericity(&plane3, &v3).y, -2.0);
+    assert_eq!(reflect_wrt_hyperplane_with_dimensional_genericity(&plane2, &v2).y, -2.0);
+    assert_eq!(reflect_wrt_hyperplane_with_dimensional_genericity(&plane3, &v3).y, -2.0);
 
     // Call each specific implementation depending on the dimension.
     assert_eq!(reflect_wrt_hyperplane2(&plane2, &v2).y, -2.0);
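All the generic reflection functions exercised above compute the same quantity, the `vector - n * (n.dot(vector) * na::convert(2.0))` expression shared by their bodies, i.e. the reflection of a vector $v$ about the hyperplane with unit normal $n$:

```latex
\mathrm{refl}_n(v) = v - 2\,(n \cdot v)\, n
```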
@@ -0,0 +1,22 @@
# Change Log

## [0.4.0] - 2016-09-07

* Made all traits use associated types for their output type parameters. This
  simplifies usage of the traits and is consistent with the concept of
  associated types used as output type parameters (not input type parameters) as
  described in [the associated type
  RFC](https://github.com/rust-lang/rfcs/blob/master/text/0195-associated-items.md).
* Implemented the `check_info!` macro to check all LAPACK calls.
* Implemented error handling with [error_chain](https://crates.io/crates/error-chain).

## [0.3.0] - 2016-09-06

* Documentation is hosted at https://docs.rs/nalgebra-lapack/
* Updated `nalgebra` to 0.10.
* Renamed the traits `HasSVD` to `SVD` and `HasEigensystem` to `Eigensystem`.
* Added a `Solve` trait for solving a linear matrix equation.
* Added an `Inverse` trait for computing the multiplicative inverse of a matrix.
* Added a `Cholesky` trait for decomposing a positive-definite matrix.
* The `Eigensystem` and `SVD` traits are now generic over types. The
  associated types have been removed.
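The `check_info!` macro mentioned in the 0.4.0 notes can be sketched as follows. This is a hypothetical illustration, not the crate's actual implementation (which reports errors through error_chain); it only relies on LAPACK's documented convention that a routine's `info` output is 0 on success, negative for an invalid argument, and positive for a numerical failure. The `solve_checked` wrapper and its `fake_dgesv_info` parameter are made up for the example.

```rust
// Hypothetical sketch of an info-checking macro in the spirit of
// `check_info!`: bail out of the surrounding function whenever a
// LAPACK routine reports a nonzero `info` value.
macro_rules! check_info {
    ($info:expr, $routine:expr) => {
        if $info != 0 {
            return Err(format!(
                "LAPACK routine `{}` failed with info = {}",
                $routine, $info
            ));
        }
    };
}

// Example wrapper; `fake_dgesv_info` stands in for the `info` output
// of a real call to the LAPACK routine `dgesv`.
fn solve_checked(fake_dgesv_info: i32) -> Result<(), String> {
    check_info!(fake_dgesv_info, "dgesv");
    Ok(())
}
```

Centralizing the check in a macro keeps every binding site to a single line while guaranteeing that no failed LAPACK call goes unnoticed.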
@@ -0,0 +1,40 @@
[package]
name = "nalgebra-lapack"
version = "0.11.2"
authors = [ "Sébastien Crozet <developer@crozet.re>", "Andrew Straw <strawman@astraw.com>" ]

description = "Linear algebra library with transformations and statically-sized or dynamically-sized matrices."
documentation = "http://nalgebra.org/doc/nalgebra/index.html"
homepage = "http://nalgebra.org"
repository = "https://github.com/sebcrozet/nalgebra"
readme = "README.md"
keywords = [ "linear", "algebra", "matrix", "vector" ]
license = "BSD-3-Clause"

[features]
serde-serialize = [ "serde", "serde_derive" ]

# For BLAS/LAPACK
default = ["openblas"]
openblas = ["lapack/openblas"]
netlib = ["lapack/netlib"]
accelerate = ["lapack/accelerate"]

[dependencies]
nalgebra = { version = "0.12", path = ".." }
num-traits = "0.1"
num-complex = "0.1"
alga = "0.5"
serde = { version = "0.9", optional = true }
serde_derive = { version = "0.9", optional = true }
# clippy = "*"

[dependencies.lapack]
version = "0.11"
default-features = false

[dev-dependencies]
nalgebra = { version = "0.12", path = "..", features = [ "arbitrary" ] }
quickcheck = "0.4"
approx = "0.1"
rand = "0.3"
@@ -0,0 +1,21 @@
The MIT License (MIT)

Copyright (c) 2015 Andrew D. Straw

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
THE SOFTWARE.
@@ -0,0 +1,11 @@
all:
	cargo build

test:
	cargo test

doc:
	cargo doc --all --no-deps

bench:
	cargo bench
@@ -0,0 +1,59 @@
# nalgebra-lapack [![Version][version-img]][version-url] [![Status][status-img]][status-url] [![Doc][doc-img]][doc-url]

Rust library for linear algebra using nalgebra and LAPACK.

## Documentation

Documentation is available [here](https://docs.rs/nalgebra-lapack/).

## License

MIT

## Cargo features to select the LAPACK provider

Like the [lapack crate](https://crates.io/crates/lapack) from which this
behavior is inherited, nalgebra-lapack uses [cargo
features](http://doc.crates.io/manifest.html#the-[features]-section) to select
which LAPACK provider (i.e. implementation) is used. Command line arguments to
cargo are the easiest way to do this, and the best provider depends on your
particular system. In some cases, the providers can be further tuned with
environment variables.

Below are examples of how to invoke `cargo build` on two different systems
using two different providers. The `--no-default-features --features "provider"`
arguments work the same way with other `cargo` commands.

### Ubuntu

As tested on Ubuntu 12.04, do this to build the lapack package against
the system installation of netlib without LAPACKE (note the E) or
CBLAS:

    sudo apt-get install gfortran libblas3gf liblapack3gf
    export CARGO_FEATURE_SYSTEM_NETLIB=1
    export CARGO_FEATURE_EXCLUDE_LAPACKE=1
    export CARGO_FEATURE_EXCLUDE_CBLAS=1

    export CARGO_FEATURES='--no-default-features --features netlib'
    cargo build ${CARGO_FEATURES}

### Mac OS X

On Mac OS X, do this to use Apple's Accelerate framework:

    export CARGO_FEATURES='--no-default-features --features accelerate'
    cargo build ${CARGO_FEATURES}

[version-img]: https://img.shields.io/crates/v/nalgebra-lapack.svg
[version-url]: https://crates.io/crates/nalgebra-lapack
[status-img]: https://travis-ci.org/strawlab/nalgebra-lapack.svg?branch=master
[status-url]: https://travis-ci.org/strawlab/nalgebra-lapack
[doc-img]: https://docs.rs/nalgebra-lapack/badge.svg
[doc-url]: https://docs.rs/nalgebra-lapack/

## Contributors

This integration of LAPACK with nalgebra was
[initiated](https://github.com/strawlab/nalgebra-lapack) by Andrew Straw. It
then became officially supported and was integrated into the main nalgebra
repository.
@@ -0,0 +1,8 @@
#![feature(test)]

extern crate test;
extern crate rand;
extern crate nalgebra as na;
extern crate nalgebra_lapack as nl;

mod linalg;

@@ -0,0 +1,21 @@
use test::{self, Bencher};
use na::{DMatrix, Matrix4};
use nl::Hessenberg;

#[bench]
fn hessenberg_decompose_100x100(bh: &mut Bencher) {
    let m = DMatrix::<f64>::new_random(100, 100);
    bh.iter(|| test::black_box(Hessenberg::new(m.clone())))
}

#[bench]
fn hessenberg_decompose_4x4(bh: &mut Bencher) {
    let m = Matrix4::<f64>::new_random();
    bh.iter(|| test::black_box(Hessenberg::new(m.clone())))
}

#[bench]
fn hessenberg_decompose_500x500(bh: &mut Bencher) {
    let m = DMatrix::<f64>::new_random(500, 500);
    bh.iter(|| test::black_box(Hessenberg::new(m.clone())))
}

@@ -0,0 +1,34 @@
use test::{self, Bencher};
use na::{DMatrix, Matrix4};
use nl::LU;

#[bench]
fn lu_decompose_100x100(bh: &mut Bencher) {
    let m = DMatrix::<f64>::new_random(100, 100);
    bh.iter(|| test::black_box(LU::new(m.clone())))
}

#[bench]
fn lu_decompose_100x500(bh: &mut Bencher) {
    let m = DMatrix::<f64>::new_random(100, 500);
    bh.iter(|| test::black_box(LU::new(m.clone())))
}

#[bench]
fn lu_decompose_4x4(bh: &mut Bencher) {
    let m = Matrix4::<f64>::new_random();
    bh.iter(|| test::black_box(LU::new(m.clone())))
}

#[bench]
fn lu_decompose_500x100(bh: &mut Bencher) {
    let m = DMatrix::<f64>::new_random(500, 100);
    bh.iter(|| test::black_box(LU::new(m.clone())))
}

#[bench]
fn lu_decompose_500x500(bh: &mut Bencher) {
    let m = DMatrix::<f64>::new_random(500, 500);
    bh.iter(|| test::black_box(LU::new(m.clone())))
}

@@ -0,0 +1,3 @@
mod qr;
mod lu;
mod hessenberg;

@@ -0,0 +1,33 @@
use test::{self, Bencher};
use na::{DMatrix, Matrix4};
use nl::QR;

#[bench]
fn qr_decompose_100x100(bh: &mut Bencher) {
    let m = DMatrix::<f64>::new_random(100, 100);
    bh.iter(|| test::black_box(QR::new(m.clone())))
}

#[bench]
fn qr_decompose_100x500(bh: &mut Bencher) {
    let m = DMatrix::<f64>::new_random(100, 500);
    bh.iter(|| test::black_box(QR::new(m.clone())))
}

#[bench]
fn qr_decompose_4x4(bh: &mut Bencher) {
    let m = Matrix4::<f64>::new_random();
    bh.iter(|| test::black_box(QR::new(m.clone())))
}

#[bench]
fn qr_decompose_500x100(bh: &mut Bencher) {
    let m = DMatrix::<f64>::new_random(500, 100);
    bh.iter(|| test::black_box(QR::new(m.clone())))
}

#[bench]
fn qr_decompose_500x500(bh: &mut Bencher) {
    let m = DMatrix::<f64>::new_random(500, 500);
    bh.iter(|| test::black_box(QR::new(m.clone())))
}

@@ -0,0 +1,183 @@
#[cfg(feature = "serde-serialize")]
use serde;

use num::Zero;
use num_complex::Complex;

use na::{Scalar, DefaultAllocator, Matrix, MatrixN, MatrixMN};
use na::dimension::Dim;
use na::storage::Storage;
use na::allocator::Allocator;

use lapack::fortran as interface;

/// The Cholesky decomposition of a symmetric positive-definite matrix.
#[cfg_attr(feature = "serde-serialize", derive(Serialize, Deserialize))]
#[cfg_attr(feature = "serde-serialize",
           serde(bound(serialize =
               "DefaultAllocator: Allocator<N, D>,
                MatrixN<N, D>: serde::Serialize")))]
#[cfg_attr(feature = "serde-serialize",
           serde(bound(deserialize =
               "DefaultAllocator: Allocator<N, D>,
                MatrixN<N, D>: serde::Deserialize<'de>")))]
#[derive(Clone, Debug)]
pub struct Cholesky<N: Scalar, D: Dim>
    where DefaultAllocator: Allocator<N, D, D> {
    l: MatrixN<N, D>
}

impl<N: Scalar, D: Dim> Copy for Cholesky<N, D>
    where DefaultAllocator: Allocator<N, D, D>,
          MatrixN<N, D>: Copy { }

impl<N: CholeskyScalar + Zero, D: Dim> Cholesky<N, D>
    where DefaultAllocator: Allocator<N, D, D> {

    /// Computes the Cholesky decomposition of the given symmetric positive-definite square
    /// matrix.
    ///
    /// Only the lower-triangular part of the input matrix is considered.
    #[inline]
    pub fn new(mut m: MatrixN<N, D>) -> Option<Self> {
        // FIXME: check symmetry as well?
        assert!(m.is_square(), "Unable to compute the cholesky decomposition of a non-square matrix.");

        let uplo = b'L';
        let dim  = m.nrows() as i32;
        let mut info = 0;

        N::xpotrf(uplo, dim, m.as_mut_slice(), dim, &mut info);
        lapack_check!(info);

        Some(Cholesky { l: m })
    }

    /// Retrieves the lower-triangular factor of the Cholesky decomposition.
    pub fn unpack(mut self) -> MatrixN<N, D> {
        self.l.fill_upper_triangle(Zero::zero(), 1);
        self.l
    }

    /// Retrieves the lower-triangular factor of the Cholesky decomposition, without zeroing-out
    /// its strict upper-triangular part.
    ///
    /// This is an allocation-less version of `self.l()`. The values of the strict upper-triangular
    /// part are garbage and should be ignored by further computations.
    pub fn unpack_dirty(self) -> MatrixN<N, D> {
        self.l
    }

    /// Retrieves the lower-triangular factor of the Cholesky decomposition.
    pub fn l(&self) -> MatrixN<N, D> {
        let mut res = self.l.clone();
        res.fill_upper_triangle(Zero::zero(), 1);
        res
    }

    /// Retrieves the lower-triangular factor of the Cholesky decomposition, without zeroing-out
    /// its strict upper-triangular part.
    ///
    /// This is an allocation-less version of `self.l()`. The values of the strict upper-triangular
    /// part are garbage and should be ignored by further computations.
    pub fn l_dirty(&self) -> &MatrixN<N, D> {
        &self.l
    }

    /// Solves the symmetric positive-definite linear system `self * x = b`, where `x` is the
    /// unknown to be determined.
    pub fn solve<R2: Dim, C2: Dim, S2>(&self, b: &Matrix<N, R2, C2, S2>) -> Option<MatrixMN<N, R2, C2>>
        where S2: Storage<N, R2, C2>,
              DefaultAllocator: Allocator<N, R2, C2> {
        let mut res = b.clone_owned();
        if self.solve_mut(&mut res) {
            Some(res)
        }
        else {
            None
        }
    }

    /// Solves in-place the symmetric positive-definite linear system `self * x = b`, where `x` is
    /// the unknown to be determined.
    pub fn solve_mut<R2: Dim, C2: Dim>(&self, b: &mut MatrixMN<N, R2, C2>) -> bool
        where DefaultAllocator: Allocator<N, R2, C2> {
        let dim = self.l.nrows();

        assert!(b.nrows() == dim, "The number of rows of `b` must be equal to the dimension of the matrix `a`.");

        let nrhs = b.ncols() as i32;
        let lda  = dim as i32;
        let ldb  = dim as i32;
        let mut info = 0;

        N::xpotrs(b'L', dim as i32, nrhs, self.l.as_slice(), lda, b.as_mut_slice(), ldb, &mut info);
        lapack_test!(info)
    }

    /// Computes the inverse of the decomposed matrix.
    pub fn inverse(mut self) -> Option<MatrixN<N, D>> {
        let dim = self.l.nrows();
        let mut info = 0;

        N::xpotri(b'L', dim as i32, self.l.as_mut_slice(), dim as i32, &mut info);
        lapack_check!(info);

        // Copy lower triangle to upper triangle.
        for i in 0 .. dim {
            for j in i + 1 .. dim {
                unsafe { *self.l.get_unchecked_mut(i, j) = *self.l.get_unchecked(j, i) };
            }
        }

        Some(self.l)
    }
}

/*
 *
 * Lapack functions dispatch.
 *
 */
/// Trait implemented by floats (`f32`, `f64`) and complex floats (`Complex<f32>`, `Complex<f64>`)
/// supported by the Cholesky decomposition.
pub trait CholeskyScalar: Scalar {
    #[allow(missing_docs)]
    fn xpotrf(uplo: u8, n: i32, a: &mut [Self], lda: i32, info: &mut i32);
    #[allow(missing_docs)]
    fn xpotrs(uplo: u8, n: i32, nrhs: i32, a: &[Self], lda: i32, b: &mut [Self], ldb: i32, info: &mut i32);
    #[allow(missing_docs)]
    fn xpotri(uplo: u8, n: i32, a: &mut [Self], lda: i32, info: &mut i32);
}

macro_rules! cholesky_scalar_impl(
    ($N: ty, $xpotrf: path, $xpotrs: path, $xpotri: path) => (
        impl CholeskyScalar for $N {
            #[inline]
            fn xpotrf(uplo: u8, n: i32, a: &mut [Self], lda: i32, info: &mut i32) {
                $xpotrf(uplo, n, a, lda, info)
            }

            #[inline]
            fn xpotrs(uplo: u8, n: i32, nrhs: i32, a: &[Self], lda: i32,
                      b: &mut [Self], ldb: i32, info: &mut i32) {
                $xpotrs(uplo, n, nrhs, a, lda, b, ldb, info)
            }

            #[inline]
            fn xpotri(uplo: u8, n: i32, a: &mut [Self], lda: i32, info: &mut i32) {
                $xpotri(uplo, n, a, lda, info)
            }
        }
    )
);

cholesky_scalar_impl!(f32, interface::spotrf, interface::spotrs, interface::spotri);
cholesky_scalar_impl!(f64, interface::dpotrf, interface::dpotrs, interface::dpotri);
cholesky_scalar_impl!(Complex<f32>, interface::cpotrf, interface::cpotrs, interface::cpotri);
cholesky_scalar_impl!(Complex<f64>, interface::zpotrf, interface::zpotrs, interface::zpotri);

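The `xpotrf` call delegates the actual factorization to LAPACK. For intuition about what it computes, here is a minimal dependency-free sketch of the classic Cholesky algorithm (the `chol` helper and its flat row-major layout are illustrative assumptions, not the crate's API); like `Cholesky::new`, it returns `None` when the matrix is not positive-definite:

```rust
/// Cholesky factorization of a symmetric positive-definite `n × n` matrix
/// stored row-major in `a`. Returns the lower-triangular factor `L` such
/// that `L * Lᵀ == A`, or `None` if the matrix is not positive-definite
/// (the situation LAPACK reports through a non-zero `info`).
fn chol(a: &[f64], n: usize) -> Option<Vec<f64>> {
    let mut l = vec![0.0; n * n];
    for i in 0..n {
        for j in 0..=i {
            // Dot product of the already-computed partial rows i and j of L.
            let s: f64 = (0..j).map(|k| l[i * n + k] * l[j * n + k]).sum();
            if i == j {
                let d = a[i * n + i] - s;
                if d <= 0.0 {
                    return None; // not positive-definite
                }
                l[i * n + i] = d.sqrt();
            } else {
                l[i * n + j] = (a[i * n + j] - s) / l[j * n + j];
            }
        }
    }
    Some(l)
}

fn main() {
    // A small symmetric positive-definite example with an exact factor:
    // L = [[2, 0, 0], [6, 1, 0], [-8, 5, 3]].
    let a = [4.0, 12.0, -16.0,
             12.0, 37.0, -43.0,
             -16.0, -43.0, 98.0];
    println!("{:?}", chol(&a, 3).unwrap());
}
```

The crate's `solve` then amounts to two triangular solves against this factor, which is why only the lower triangle of the input is ever read.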
@@ -0,0 +1,253 @@
#[cfg(feature = "serde-serialize")]
use serde;

use num::Zero;
use num_complex::Complex;

use alga::general::Real;

use ::ComplexHelper;
use na::{Scalar, DefaultAllocator, Matrix, VectorN, MatrixN};
use na::dimension::{Dim, U1};
use na::storage::Storage;
use na::allocator::Allocator;

use lapack::fortran as interface;

/// Eigendecomposition of a real square matrix with real eigenvalues.
#[cfg_attr(feature = "serde-serialize", derive(Serialize, Deserialize))]
#[cfg_attr(feature = "serde-serialize",
           serde(bound(serialize =
               "DefaultAllocator: Allocator<N, D, D> + Allocator<N, D>,
                VectorN<N, D>: serde::Serialize,
                MatrixN<N, D>: serde::Serialize")))]
#[cfg_attr(feature = "serde-serialize",
           serde(bound(deserialize =
               "DefaultAllocator: Allocator<N, D, D> + Allocator<N, D>,
                VectorN<N, D>: serde::Serialize,
                MatrixN<N, D>: serde::Deserialize<'de>")))]
#[derive(Clone, Debug)]
pub struct Eigen<N: Scalar, D: Dim>
    where DefaultAllocator: Allocator<N, D> +
                            Allocator<N, D, D> {
    /// The eigenvalues of the decomposed matrix.
    pub eigenvalues: VectorN<N, D>,
    /// The (right) eigenvectors of the decomposed matrix.
    pub eigenvectors: Option<MatrixN<N, D>>,
    /// The left eigenvectors of the decomposed matrix.
    pub left_eigenvectors: Option<MatrixN<N, D>>
}

impl<N: Scalar, D: Dim> Copy for Eigen<N, D>
    where DefaultAllocator: Allocator<N, D> +
                            Allocator<N, D, D>,
          VectorN<N, D>: Copy,
          MatrixN<N, D>: Copy { }

impl<N: EigenScalar + Real, D: Dim> Eigen<N, D>
    where DefaultAllocator: Allocator<N, D, D> +
                            Allocator<N, D> {
    /// Computes the eigenvalues and eigenvectors of the square matrix `m`.
    ///
    /// If `eigenvectors` is `false`, the eigenvectors are not computed explicitly.
    pub fn new(mut m: MatrixN<N, D>, left_eigenvectors: bool, eigenvectors: bool)
               -> Option<Eigen<N, D>> {
        assert!(m.is_square(), "Unable to compute the eigenvalue decomposition of a non-square matrix.");

        let ljob = if left_eigenvectors { b'V' } else { b'N' };
        let rjob = if eigenvectors { b'V' } else { b'N' };

        let (nrows, ncols) = m.data.shape();
        let n = nrows.value();

        let lda = n as i32;

        let mut wr = unsafe { Matrix::new_uninitialized_generic(nrows, U1) };
        // FIXME: Tap into the workspace.
        let mut wi = unsafe { Matrix::new_uninitialized_generic(nrows, U1) };

        let mut info = 0;
        let mut placeholder1 = [ N::zero() ];
        let mut placeholder2 = [ N::zero() ];

        let lwork = N::xgeev_work_size(ljob, rjob, n as i32, m.as_mut_slice(), lda,
                                       wr.as_mut_slice(), wi.as_mut_slice(), &mut placeholder1,
                                       n as i32, &mut placeholder2, n as i32, &mut info);

        lapack_check!(info);

        let mut work = unsafe { ::uninitialized_vec(lwork as usize) };

        match (left_eigenvectors, eigenvectors) {
            (true, true) => {
                let mut vl = unsafe { Matrix::new_uninitialized_generic(nrows, ncols) };
                let mut vr = unsafe { Matrix::new_uninitialized_generic(nrows, ncols) };

                N::xgeev(ljob, rjob, n as i32, m.as_mut_slice(), lda, wr.as_mut_slice(),
                         wi.as_mut_slice(), &mut vl.as_mut_slice(), n as i32, &mut vr.as_mut_slice(),
                         n as i32, &mut work, lwork, &mut info);
                lapack_check!(info);

                if wi.iter().all(|e| e.is_zero()) {
                    return Some(Eigen {
                        eigenvalues: wr, left_eigenvectors: Some(vl), eigenvectors: Some(vr)
                    })
                }
            },
            (true, false) => {
                let mut vl = unsafe { Matrix::new_uninitialized_generic(nrows, ncols) };

                N::xgeev(ljob, rjob, n as i32, m.as_mut_slice(), lda, wr.as_mut_slice(),
                         wi.as_mut_slice(), &mut vl.as_mut_slice(), n as i32, &mut placeholder2,
                         1 as i32, &mut work, lwork, &mut info);
                lapack_check!(info);

                if wi.iter().all(|e| e.is_zero()) {
                    return Some(Eigen {
                        eigenvalues: wr, left_eigenvectors: Some(vl), eigenvectors: None
                    });
                }
            },
            (false, true) => {
                let mut vr = unsafe { Matrix::new_uninitialized_generic(nrows, ncols) };

                N::xgeev(ljob, rjob, n as i32, m.as_mut_slice(), lda, wr.as_mut_slice(),
                         wi.as_mut_slice(), &mut placeholder1, 1 as i32, &mut vr.as_mut_slice(),
                         n as i32, &mut work, lwork, &mut info);
                lapack_check!(info);

                if wi.iter().all(|e| e.is_zero()) {
                    return Some(Eigen {
                        eigenvalues: wr, left_eigenvectors: None, eigenvectors: Some(vr)
                    });
                }
            },
            (false, false) => {
                N::xgeev(ljob, rjob, n as i32, m.as_mut_slice(), lda, wr.as_mut_slice(),
                         wi.as_mut_slice(), &mut placeholder1, 1 as i32, &mut placeholder2,
                         1 as i32, &mut work, lwork, &mut info);
                lapack_check!(info);

                if wi.iter().all(|e| e.is_zero()) {
                    return Some(Eigen {
                        eigenvalues: wr, left_eigenvectors: None, eigenvectors: None
                    });
                }
            }
        }

        None
    }

    /// The complex eigenvalues of the given matrix.
    ///
    /// Panics if the eigenvalue computation does not converge.
    pub fn complex_eigenvalues(mut m: MatrixN<N, D>) -> VectorN<Complex<N>, D>
        where DefaultAllocator: Allocator<Complex<N>, D> {
        assert!(m.is_square(), "Unable to compute the eigenvalue decomposition of a non-square matrix.");

        let nrows = m.data.shape().0;
        let n = nrows.value();

        let lda = n as i32;

        let mut wr = unsafe { Matrix::new_uninitialized_generic(nrows, U1) };
        let mut wi = unsafe { Matrix::new_uninitialized_generic(nrows, U1) };

        let mut info = 0;
        let mut placeholder1 = [ N::zero() ];
        let mut placeholder2 = [ N::zero() ];

        let lwork = N::xgeev_work_size(b'N', b'N', n as i32, m.as_mut_slice(), lda,
                                       wr.as_mut_slice(), wi.as_mut_slice(), &mut placeholder1,
                                       n as i32, &mut placeholder2, n as i32, &mut info);

        lapack_panic!(info);

        let mut work = unsafe { ::uninitialized_vec(lwork as usize) };

        N::xgeev(b'N', b'N', n as i32, m.as_mut_slice(), lda, wr.as_mut_slice(),
                 wi.as_mut_slice(), &mut placeholder1, 1 as i32, &mut placeholder2,
                 1 as i32, &mut work, lwork, &mut info);
        lapack_panic!(info);

        let mut res = unsafe { Matrix::new_uninitialized_generic(nrows, U1) };

        for i in 0 .. res.len() {
            res[i] = Complex::new(wr[i], wi[i]);
        }

        res
    }

    /// The determinant of the decomposed matrix.
    #[inline]
    pub fn determinant(&self) -> N {
        let mut det = N::one();
        for e in self.eigenvalues.iter() {
            det *= *e;
        }

        det
    }
}

/*
 *
 * Lapack functions dispatch.
 *
 */
/// Trait implemented by scalar types for which Lapack functions exist to compute the
/// eigendecomposition.
pub trait EigenScalar: Scalar {
    #[allow(missing_docs)]
    fn xgeev(jobvl: u8, jobvr: u8, n: i32, a: &mut [Self], lda: i32,
             wr: &mut [Self], wi: &mut [Self],
             vl: &mut [Self], ldvl: i32, vr: &mut [Self], ldvr: i32,
             work: &mut [Self], lwork: i32, info: &mut i32);
    #[allow(missing_docs)]
    fn xgeev_work_size(jobvl: u8, jobvr: u8, n: i32, a: &mut [Self], lda: i32,
                       wr: &mut [Self], wi: &mut [Self], vl: &mut [Self], ldvl: i32,
                       vr: &mut [Self], ldvr: i32, info: &mut i32) -> i32;
}

macro_rules! real_eigensystem_scalar_impl (
    ($N: ty, $xgeev: path) => (
        impl EigenScalar for $N {
            #[inline]
            fn xgeev(jobvl: u8, jobvr: u8, n: i32, a: &mut [Self], lda: i32,
                     wr: &mut [Self], wi: &mut [Self],
                     vl: &mut [Self], ldvl: i32, vr: &mut [Self], ldvr: i32,
                     work: &mut [Self], lwork: i32, info: &mut i32) {
                $xgeev(jobvl, jobvr, n, a, lda, wr, wi, vl, ldvl, vr, ldvr, work, lwork, info)
            }

            #[inline]
            fn xgeev_work_size(jobvl: u8, jobvr: u8, n: i32, a: &mut [Self], lda: i32,
                               wr: &mut [Self], wi: &mut [Self], vl: &mut [Self], ldvl: i32,
                               vr: &mut [Self], ldvr: i32, info: &mut i32) -> i32 {
                // Workspace query: calling xgeev with lwork == -1 only reports
                // the optimal workspace size in work[0].
                let mut work = [ Zero::zero() ];
                let lwork = -1 as i32;

                $xgeev(jobvl, jobvr, n, a, lda, wr, wi, vl, ldvl, vr, ldvr, &mut work, lwork, info);
                ComplexHelper::real_part(work[0]) as i32
            }
        }
    )
);

real_eigensystem_scalar_impl!(f32, interface::sgeev);
real_eigensystem_scalar_impl!(f64, interface::dgeev);

// FIXME: decomposition of complex matrices and matrices with complex eigenvalues.
// eigensystem_complex_impl!(f32, interface::cgeev);
// eigensystem_complex_impl!(f64, interface::zgeev);

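The `wi.iter().all(|e| e.is_zero())` checks explain why `Eigen::new` can return `None` on a perfectly valid matrix: a real matrix may have complex-conjugate eigenvalue pairs, which this real-only decomposition refuses to represent. A dependency-free 2 × 2 illustration of that situation (the `eigenvalues_2x2` helper is hypothetical, not part of the crate):

```rust
/// Eigenvalues of a 2 × 2 real matrix [[a, b], [c, d]] from its
/// characteristic polynomial λ² − tr·λ + det. Like `Eigen::new`, this
/// returns `None` when the eigenvalues are a complex-conjugate pair
/// (negative discriminant), i.e. when the imaginary parts `wi` would
/// be non-zero.
fn eigenvalues_2x2(a: f64, b: f64, c: f64, d: f64) -> Option<(f64, f64)> {
    let tr = a + d;
    let det = a * d - b * c;
    let disc = tr * tr - 4.0 * det;
    if disc < 0.0 {
        return None; // complex pair: wi != 0
    }
    let s = disc.sqrt();
    Some(((tr + s) / 2.0, (tr - s) / 2.0))
}

fn main() {
    // [[2, 1], [1, 2]] is symmetric: always real eigenvalues (3 and 1).
    assert_eq!(eigenvalues_2x2(2.0, 1.0, 1.0, 2.0), Some((3.0, 1.0)));
    // A 90° rotation has eigenvalues ±i, so the real decomposition fails.
    assert_eq!(eigenvalues_2x2(0.0, -1.0, 1.0, 0.0), None);
}
```

This is also why `complex_eigenvalues` exists as a separate entry point: it rebuilds `Complex::new(wr[i], wi[i])` and never rejects a matrix.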
@@ -0,0 +1,178 @@
use num::Zero;
use num_complex::Complex;

use ::ComplexHelper;
use na::{Scalar, Matrix, DefaultAllocator, VectorN, MatrixN};
use na::dimension::{DimSub, DimDiff, U1};
use na::storage::Storage;
use na::allocator::Allocator;

use lapack::fortran as interface;

/// The Hessenberg decomposition of a general matrix.
#[cfg_attr(feature = "serde-serialize", derive(Serialize, Deserialize))]
#[cfg_attr(feature = "serde-serialize",
           serde(bound(serialize =
               "DefaultAllocator: Allocator<N, D, D> +
                                  Allocator<N, DimDiff<D, U1>>,
                MatrixN<N, D>: serde::Serialize,
                VectorN<N, DimDiff<D, U1>>: serde::Serialize")))]
#[cfg_attr(feature = "serde-serialize",
           serde(bound(deserialize =
               "DefaultAllocator: Allocator<N, D, D> +
                                  Allocator<N, DimDiff<D, U1>>,
                MatrixN<N, D>: serde::Deserialize<'de>,
                VectorN<N, DimDiff<D, U1>>: serde::Deserialize<'de>")))]
#[derive(Clone, Debug)]
pub struct Hessenberg<N: Scalar, D: DimSub<U1>>
    where DefaultAllocator: Allocator<N, D, D> +
                            Allocator<N, DimDiff<D, U1>> {
    h:   MatrixN<N, D>,
    tau: VectorN<N, DimDiff<D, U1>>
}

impl<N: Scalar, D: DimSub<U1>> Copy for Hessenberg<N, D>
    where DefaultAllocator: Allocator<N, D, D> +
                            Allocator<N, DimDiff<D, U1>>,
          MatrixN<N, D>: Copy,
          VectorN<N, DimDiff<D, U1>>: Copy { }

impl<N: HessenbergScalar + Zero, D: DimSub<U1>> Hessenberg<N, D>
    where DefaultAllocator: Allocator<N, D, D> +
                            Allocator<N, DimDiff<D, U1>> {
    /// Computes the Hessenberg decomposition of the matrix `m`.
    pub fn new(mut m: MatrixN<N, D>) -> Hessenberg<N, D> {
        let nrows = m.data.shape().0;
        let n = nrows.value() as i32;

        assert!(m.is_square(), "Unable to compute the hessenberg decomposition of a non-square matrix.");
        assert!(!m.is_empty(), "Unable to compute the hessenberg decomposition of an empty matrix.");

        let mut tau = unsafe { Matrix::new_uninitialized_generic(nrows.sub(U1), U1) };

        let mut info = 0;
        let lwork = N::xgehrd_work_size(n, 1, n, m.as_mut_slice(), n, tau.as_mut_slice(), &mut info);
        let mut work = unsafe { ::uninitialized_vec(lwork as usize) };

        lapack_panic!(info);

        N::xgehrd(n, 1, n, m.as_mut_slice(), n, tau.as_mut_slice(), &mut work, lwork, &mut info);
        lapack_panic!(info);

        Hessenberg { h: m, tau: tau }
    }

    /// Computes the Hessenberg matrix of this decomposition.
    #[inline]
    pub fn h(&self) -> MatrixN<N, D> {
        let mut h = self.h.clone_owned();
        h.fill_lower_triangle(N::zero(), 2);

        h
    }
}

impl<N: HessenbergReal + Zero, D: DimSub<U1>> Hessenberg<N, D>
    where DefaultAllocator: Allocator<N, D, D> +
                            Allocator<N, DimDiff<D, U1>> {
    /// Computes the matrices `(Q, H)` of this decomposition.
    #[inline]
    pub fn unpack(self) -> (MatrixN<N, D>, MatrixN<N, D>) {
        (self.q(), self.h())
    }

    /// Computes the unitary matrix `Q` of this decomposition.
    #[inline]
    pub fn q(&self) -> MatrixN<N, D> {
        let n = self.h.nrows() as i32;
        let mut q = self.h.clone_owned();
        let mut info = 0;

        let lwork = N::xorghr_work_size(n, 1, n, q.as_mut_slice(), n, self.tau.as_slice(), &mut info);
        let mut work = vec![ N::zero(); lwork as usize ];

        N::xorghr(n, 1, n, q.as_mut_slice(), n, self.tau.as_slice(), &mut work, lwork, &mut info);

        q
    }
}

/*
 *
 * Lapack functions dispatch.
 *
 */
/// Trait implemented by scalars for which Lapack implements the Hessenberg reduction.
pub trait HessenbergScalar: Scalar {
    #[allow(missing_docs)]
    fn xgehrd(n: i32, ilo: i32, ihi: i32, a: &mut [Self], lda: i32,
              tau: &mut [Self], work: &mut [Self], lwork: i32, info: &mut i32);
    #[allow(missing_docs)]
    fn xgehrd_work_size(n: i32, ilo: i32, ihi: i32, a: &mut [Self], lda: i32,
                        tau: &mut [Self], info: &mut i32) -> i32;
}

/// Trait implemented by reals for which Lapack implements the Hessenberg decomposition.
pub trait HessenbergReal: HessenbergScalar {
    #[allow(missing_docs)]
    fn xorghr(n: i32, ilo: i32, ihi: i32, a: &mut [Self], lda: i32, tau: &[Self],
              work: &mut [Self], lwork: i32, info: &mut i32);
    #[allow(missing_docs)]
    fn xorghr_work_size(n: i32, ilo: i32, ihi: i32, a: &mut [Self], lda: i32,
                        tau: &[Self], info: &mut i32) -> i32;
}

macro_rules! hessenberg_scalar_impl(
    ($N: ty, $xgehrd: path) => (
        impl HessenbergScalar for $N {
            #[inline]
            fn xgehrd(n: i32, ilo: i32, ihi: i32, a: &mut [Self], lda: i32,
                      tau: &mut [Self], work: &mut [Self], lwork: i32, info: &mut i32) {
                $xgehrd(n, ilo, ihi, a, lda, tau, work, lwork, info)
            }

            #[inline]
            fn xgehrd_work_size(n: i32, ilo: i32, ihi: i32, a: &mut [Self], lda: i32,
                                tau: &mut [Self], info: &mut i32) -> i32 {
                let mut work = [ Zero::zero() ];
                let lwork = -1 as i32;

                $xgehrd(n, ilo, ihi, a, lda, tau, &mut work, lwork, info);
                ComplexHelper::real_part(work[0]) as i32
            }
        }
    )
);

macro_rules! hessenberg_real_impl(
    ($N: ty, $xorghr: path) => (
        impl HessenbergReal for $N {
            #[inline]
            fn xorghr(n: i32, ilo: i32, ihi: i32, a: &mut [Self], lda: i32, tau: &[Self],
                      work: &mut [Self], lwork: i32, info: &mut i32) {
                $xorghr(n, ilo, ihi, a, lda, tau, work, lwork, info)
            }

            #[inline]
            fn xorghr_work_size(n: i32, ilo: i32, ihi: i32, a: &mut [Self], lda: i32,
                                tau: &[Self], info: &mut i32) -> i32 {
                let mut work = [ Zero::zero() ];
                let lwork = -1 as i32;

                $xorghr(n, ilo, ihi, a, lda, tau, &mut work, lwork, info);
                ComplexHelper::real_part(work[0]) as i32
            }
        }
    )
);

hessenberg_scalar_impl!(f32, interface::sgehrd);
hessenberg_scalar_impl!(f64, interface::dgehrd);
hessenberg_scalar_impl!(Complex<f32>, interface::cgehrd);
hessenberg_scalar_impl!(Complex<f64>, interface::zgehrd);

hessenberg_real_impl!(f32, interface::sorghr);
hessenberg_real_impl!(f64, interface::dorghr);

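The `xgehrd` routine reduces `A` to upper-Hessenberg form `H = Qᵀ · A · Q` (zero below the first subdiagonal) using Householder reflectors encoded in `tau`. As a dependency-free illustration of the same similarity transform, here is the smallest possible case done with a single Givens rotation instead of a reflector (the `hessenberg_3x3` helper is a hypothetical sketch, not the crate's algorithm):

```rust
/// Reduce a 3 × 3 matrix to upper-Hessenberg form with one Givens rotation:
/// find an orthogonal Q such that H = Qᵀ · A · Q has H[2][0] == 0. Because
/// it is a similarity transform, the eigenvalues (hence the trace) of A
/// are preserved.
fn hessenberg_3x3(a: [[f64; 3]; 3]) -> [[f64; 3]; 3] {
    // Pick c, s so that rotating rows 1 and 2 zeroes the entry a[2][0].
    let r = (a[1][0] * a[1][0] + a[2][0] * a[2][0]).sqrt();
    if r == 0.0 {
        return a; // already Hessenberg
    }
    let (c, s) = (a[1][0] / r, a[2][0] / r);
    let mut h = a;
    // Apply G from the left (mixes rows 1 and 2)...
    for j in 0..3 {
        let (x, y) = (h[1][j], h[2][j]);
        h[1][j] = c * x + s * y;
        h[2][j] = -s * x + c * y;
    }
    // ...and Gᵀ from the right (mixes columns 1 and 2), which does not
    // touch column 0, so the zero just created survives.
    for i in 0..3 {
        let (x, y) = (h[i][1], h[i][2]);
        h[i][1] = c * x + s * y;
        h[i][2] = -s * x + c * y;
    }
    h
}

fn main() {
    let h = hessenberg_3x3([[1.0, 2.0, 3.0],
                            [4.0, 5.0, 6.0],
                            [3.0, 8.0, 9.0]]);
    assert!(h[2][0].abs() < 1e-12);                               // Hessenberg
    assert!((h[0][0] + h[1][1] + h[2][2] - 15.0).abs() < 1e-12);  // trace kept
}
```

For larger matrices LAPACK chains one Householder reflector per column; `fill_lower_triangle(N::zero(), 2)` in `h()` then discards the stored reflector coefficients below the subdiagonal.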
@@ -0,0 +1,27 @@
#![macro_use]

macro_rules! lapack_check(
    ($info: expr) => (
        // FIXME: return a richer error.
        if $info != 0 {
            return None;
        }
        // if $info < 0 {
        //     return Err(Error::from(ErrorKind::LapackIllegalArgument(-$info)));
        // } else if $info > 0 {
        //     return Err(Error::from(ErrorKind::LapackFailure($info)));
        // }
    );
);

macro_rules! lapack_panic(
    ($info: expr) => (
        assert!($info == 0, "Lapack error.");
    );
);

macro_rules! lapack_test(
    ($info: expr) => (
        $info == 0
    );
);

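These three macros all encode LAPACK's `info` return convention: `info < 0` means argument number `-info` was illegal, `info > 0` signals a numerical failure (e.g. a non-positive-definite matrix in `xpotrf`), and `0` is success. The commented-out lines sketch the richer error the FIXME asks for; a minimal standalone version of that idea (the `LapackError` type is hypothetical, not part of the crate):

```rust
/// LAPACK's return convention mapped to a Rust `Result`: a negative `info`
/// flags an illegal argument (its absolute value is the 1-based argument
/// index), a positive one a numerical failure, and zero success.
#[derive(Debug, PartialEq)]
enum LapackError {
    IllegalArgument(i32),
    Failure(i32),
}

fn check(info: i32) -> Result<(), LapackError> {
    if info < 0 {
        Err(LapackError::IllegalArgument(-info))
    } else if info > 0 {
        Err(LapackError::Failure(info))
    } else {
        Ok(())
    }
}

fn main() {
    assert_eq!(check(0), Ok(()));
    assert_eq!(check(-3), Err(LapackError::IllegalArgument(3)));
    assert_eq!(check(2), Err(LapackError::Failure(2)));
}
```

Collapsing both failure kinds into `None` (as `lapack_check!` does today) loses this distinction, which is exactly the trade-off the FIXME records.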
@ -0,0 +1,148 @@
|
|||
//! # nalgebra-lapack
|
||||
//!
|
||||
//! Rust library for linear algebra using nalgebra and LAPACK.
|
||||
//!
|
||||
//! ## Documentation
|
||||
//!
|
||||
//! Documentation is available [here](https://docs.rs/nalgebra-lapack/).
|
||||
//!
|
||||
//! ## License
|
||||
//!
|
||||
//! MIT
|
||||
//!
|
||||
//! ## Cargo features to select lapack provider
|
||||
//!
|
||||
//! Like the [lapack crate](https://crates.io/crates/lapack) from which this
|
||||
//! behavior is inherited, nalgebra-lapack uses [cargo
|
||||
//! features](http://doc.crates.io/manifest.html#the-[features]-section) to select
|
||||
//! which lapack provider (or implementation) is used. Command line arguments to
|
||||
//! cargo are the easiest way to do this, and the best provider depends on your
|
||||
//! particular system. In some cases, the providers can be further tuned with
|
||||
//! environment variables.
|
||||
//!
|
||||
//! Below are given examples of how to invoke `cargo build` on two different systems
|
||||
//! using two different providers. The `--no-default-features --features "provider"`
|
||||
//! arguments will be consistent for other `cargo` commands.
|
||||
//!
|
||||
//! ### Ubuntu
|
||||
//!
|
||||
//! As tested on Ubuntu 12.04, do this to build the lapack package against
|
||||
//! the system installation of netlib without LAPACKE (note the E) or
|
||||
//! CBLAS:
|
||||
//!
|
||||
//! ```.ignore
|
||||
//! sudo apt-get install gfortran libblas3gf liblapack3gf
|
||||
//! export CARGO_FEATURE_SYSTEM_NETLIB=1
|
||||
//! export CARGO_FEATURE_EXCLUDE_LAPACKE=1
|
||||
//! export CARGO_FEATURE_EXCLUDE_CBLAS=1
|
||||
//!
|
||||
//! export CARGO_FEATURES='--no-default-features --features netlib'
|
||||
//! cargo build ${CARGO_FEATURES}
|
||||
//! ```
|
||||
//!
|
||||
//! ### Mac OS X
|
||||
//!
|
||||
//! On Mac OS X, do this to use Apple's Accelerate framework:
|
||||
//!
|
||||
//! ```.ignore
|
||||
//! export CARGO_FEATURES='--no-default-features --features accelerate'
|
||||
//! cargo build ${CARGO_FEATURES}
|
||||
//! ```
|
||||
//!
|
||||
//! [version-img]: https://img.shields.io/crates/v/nalgebra-lapack.svg
|
||||
//! [version-url]: https://crates.io/crates/nalgebra-lapack
|
||||
//! [status-img]: https://travis-ci.org/strawlab/nalgebra-lapack.svg?branch=master
|
||||
//! [status-url]: https://travis-ci.org/strawlab/nalgebra-lapack
|
||||
//! [doc-img]: https://docs.rs/nalgebra-lapack/badge.svg
|
||||
//! [doc-url]: https://docs.rs/nalgebra-lapack/
//!
//! ## Contributors
//! This integration of LAPACK with nalgebra was
//! [initiated](https://github.com/strawlab/nalgebra-lapack) by Andrew Straw. It
//! then became officially supported and integrated into the main nalgebra
//! repository.

#![deny(non_camel_case_types)]
#![deny(unused_parens)]
#![deny(non_upper_case_globals)]
#![deny(unused_qualifications)]
#![deny(unused_results)]
#![deny(missing_docs)]
#![doc(html_root_url = "http://nalgebra.org/rustdoc")]

extern crate num_traits as num;
extern crate num_complex;
extern crate lapack;
extern crate alga;
extern crate nalgebra as na;

mod lapack_check;

mod svd;
mod eigen;
mod symmetric_eigen;
mod cholesky;
mod lu;
mod qr;
mod hessenberg;
mod schur;

use num_complex::Complex;

pub use self::svd::SVD;
pub use self::cholesky::{Cholesky, CholeskyScalar};
pub use self::lu::{LU, LUScalar};
pub use self::eigen::Eigen;
pub use self::symmetric_eigen::SymmetricEigen;
pub use self::qr::QR;
pub use self::hessenberg::Hessenberg;
pub use self::schur::RealSchur;

trait ComplexHelper {
    type RealPart;

    fn real_part(self) -> Self::RealPart;
}

impl ComplexHelper for f32 {
    type RealPart = f32;

    #[inline]
    fn real_part(self) -> Self::RealPart {
        self
    }
}

impl ComplexHelper for f64 {
    type RealPart = f64;

    #[inline]
    fn real_part(self) -> Self::RealPart {
        self
    }
}

impl ComplexHelper for Complex<f32> {
    type RealPart = f32;

    #[inline]
    fn real_part(self) -> Self::RealPart {
        self.re
    }
}

impl ComplexHelper for Complex<f64> {
    type RealPart = f64;

    #[inline]
    fn real_part(self) -> Self::RealPart {
        self.re
    }
}

// Allocates an uninitialized, exactly-sized vector. The caller must initialize
// every element before reading it.
unsafe fn uninitialized_vec<T: Copy>(n: usize) -> Vec<T> {
    let mut res = Vec::new();
    res.reserve_exact(n);
    res.set_len(n);
    res
}
@@ -0,0 +1,320 @@
use num::{Zero, One};
use num_complex::Complex;

use ::ComplexHelper;
use na::{Scalar, DefaultAllocator, Matrix, MatrixMN, MatrixN, VectorN};
use na::dimension::{Dim, DimMin, DimMinimum, U1};
use na::storage::Storage;
use na::allocator::Allocator;

use lapack::fortran as interface;

/// LU decomposition with partial pivoting.
///
/// This decomposes a matrix `M` with m rows and n columns into three parts:
/// * `L`, a `m × min(m, n)` lower-triangular matrix;
/// * `U`, a `min(m, n) × n` upper-triangular matrix;
/// * `P`, a `m × m` permutation matrix.
///
/// Those are such that `M == P * L * U`.
#[cfg_attr(feature = "serde-serialize", derive(Serialize, Deserialize))]
#[cfg_attr(feature = "serde-serialize",
           serde(bound(serialize =
               "DefaultAllocator: Allocator<N, R, C> +
                                  Allocator<i32, DimMinimum<R, C>>,
                MatrixMN<N, R, C>: serde::Serialize,
                PermutationSequence<DimMinimum<R, C>>: serde::Serialize")))]
#[cfg_attr(feature = "serde-serialize",
           serde(bound(deserialize =
               "DefaultAllocator: Allocator<N, R, C> +
                                  Allocator<i32, DimMinimum<R, C>>,
                MatrixMN<N, R, C>: serde::Deserialize<'de>,
                PermutationSequence<DimMinimum<R, C>>: serde::Deserialize<'de>")))]
#[derive(Clone, Debug)]
pub struct LU<N: Scalar, R: DimMin<C>, C: Dim>
    where DefaultAllocator: Allocator<i32, DimMinimum<R, C>> +
                            Allocator<N, R, C> {
    lu: MatrixMN<N, R, C>,
    p:  VectorN<i32, DimMinimum<R, C>>
}

impl<N: Scalar, R: DimMin<C>, C: Dim> Copy for LU<N, R, C>
    where DefaultAllocator: Allocator<N, R, C> +
                            Allocator<i32, DimMinimum<R, C>>,
          MatrixMN<N, R, C>: Copy,
          VectorN<i32, DimMinimum<R, C>>: Copy { }

impl<N: LUScalar, R: Dim, C: Dim> LU<N, R, C>
    where N: Zero + One,
          R: DimMin<C>,
          DefaultAllocator: Allocator<N, R, C> +
                            Allocator<N, R, R> +
                            Allocator<N, R, DimMinimum<R, C>> +
                            Allocator<N, DimMinimum<R, C>, C> +
                            Allocator<i32, DimMinimum<R, C>> {

    /// Computes the LU decomposition with partial (row) pivoting of the matrix `m`.
    pub fn new(mut m: MatrixMN<N, R, C>) -> Self {
        let (nrows, ncols) = m.data.shape();
        let min_nrows_ncols = nrows.min(ncols);
        let nrows = nrows.value() as i32;
        let ncols = ncols.value() as i32;

        let mut ipiv: VectorN<i32, _> = Matrix::zeros_generic(min_nrows_ncols, U1);

        let mut info = 0;

        N::xgetrf(nrows, ncols, m.as_mut_slice(), nrows, ipiv.as_mut_slice(), &mut info);
        lapack_panic!(info);

        LU { lu: m, p: ipiv }
    }

    /// Gets the lower-triangular matrix part of the decomposition.
    #[inline]
    pub fn l(&self) -> MatrixMN<N, R, DimMinimum<R, C>> {
        let (nrows, ncols) = self.lu.data.shape();
        let mut res = self.lu.columns_generic(0, nrows.min(ncols)).into_owned();

        res.fill_upper_triangle(Zero::zero(), 1);
        res.fill_diagonal(One::one());

        res
    }

    /// Gets the upper-triangular matrix part of the decomposition.
    #[inline]
    pub fn u(&self) -> MatrixMN<N, DimMinimum<R, C>, C> {
        let (nrows, ncols) = self.lu.data.shape();
        let mut res = self.lu.rows_generic(0, nrows.min(ncols)).into_owned();

        res.fill_lower_triangle(Zero::zero(), 1);

        res
    }

    /// Gets the row permutation matrix of this decomposition.
    ///
    /// Computing the permutation matrix explicitly is costly and usually not necessary.
    /// To permute the rows of a matrix or vector, use the method `self.permute(...)` instead.
    #[inline]
    pub fn p(&self) -> MatrixN<N, R> {
        let (dim, _) = self.lu.data.shape();
        let mut id = Matrix::identity_generic(dim, dim);
        self.permute(&mut id);

        id
    }

    // FIXME: when we support resizing a matrix, we could add unwrap_u/unwrap_l that would
    // re-use the memory from the internal matrix!

    /// Gets the LAPACK permutation indices.
    #[inline]
    pub fn permutation_indices(&self) -> &VectorN<i32, DimMinimum<R, C>> {
        &self.p
    }

    /// Applies the permutation matrix to a given matrix or vector in-place.
    #[inline]
    pub fn permute<C2: Dim>(&self, rhs: &mut MatrixMN<N, R, C2>)
        where DefaultAllocator: Allocator<N, R, C2> {

        let (nrows, ncols) = rhs.shape();

        N::xlaswp(ncols as i32, rhs.as_mut_slice(), nrows as i32,
                  1, self.p.len() as i32, self.p.as_slice(), -1);
    }

    fn generic_solve_mut<R2: Dim, C2: Dim>(&self, trans: u8, b: &mut MatrixMN<N, R2, C2>) -> bool
        where DefaultAllocator: Allocator<N, R2, C2> +
                                Allocator<i32, R2> {

        let dim = self.lu.nrows();

        assert!(self.lu.is_square(), "Unable to solve a set of under/over-determined equations.");
        assert!(b.nrows() == dim, "The number of rows of `b` must be equal to the dimension of the matrix `a`.");

        let nrhs = b.ncols() as i32;
        let lda  = dim as i32;
        let ldb  = dim as i32;
        let mut info = 0;

        N::xgetrs(trans, dim as i32, nrhs, self.lu.as_slice(), lda, self.p.as_slice(),
                  b.as_mut_slice(), ldb, &mut info);
        lapack_test!(info)
    }

    /// Solves the linear system `self * x = b`, where `x` is the unknown to be determined.
    pub fn solve<R2: Dim, C2: Dim, S2>(&self, b: &Matrix<N, R2, C2, S2>) -> Option<MatrixMN<N, R2, C2>>
        where S2: Storage<N, R2, C2>,
              DefaultAllocator: Allocator<N, R2, C2> +
                                Allocator<i32, R2> {

        let mut res = b.clone_owned();
        if self.generic_solve_mut(b'N', &mut res) {
            Some(res)
        }
        else {
            None
        }
    }

    /// Solves the linear system `self.transpose() * x = b`, where `x` is the unknown to be
    /// determined.
    pub fn solve_transpose<R2: Dim, C2: Dim, S2>(&self, b: &Matrix<N, R2, C2, S2>)
                                                 -> Option<MatrixMN<N, R2, C2>>
        where S2: Storage<N, R2, C2>,
              DefaultAllocator: Allocator<N, R2, C2> +
                                Allocator<i32, R2> {

        let mut res = b.clone_owned();
        if self.generic_solve_mut(b'T', &mut res) {
            Some(res)
        }
        else {
            None
        }
    }

    /// Solves the linear system `self.conjugate_transpose() * x = b`, where `x` is the unknown to
    /// be determined.
    pub fn solve_conjugate_transpose<R2: Dim, C2: Dim, S2>(&self, b: &Matrix<N, R2, C2, S2>)
                                                           -> Option<MatrixMN<N, R2, C2>>
        where S2: Storage<N, R2, C2>,
              DefaultAllocator: Allocator<N, R2, C2> +
                                Allocator<i32, R2> {

        let mut res = b.clone_owned();
        // `b'C'` requests the conjugate transpose from LAPACK (`b'T'` is the plain transpose).
        if self.generic_solve_mut(b'C', &mut res) {
            Some(res)
        }
        else {
            None
        }
    }

    /// Solves in-place the linear system `self * x = b`, where `x` is the unknown to be determined.
    ///
    /// Returns `false` if no solution was found (the decomposed matrix is singular).
    pub fn solve_mut<R2: Dim, C2: Dim>(&self, b: &mut MatrixMN<N, R2, C2>) -> bool
        where DefaultAllocator: Allocator<N, R2, C2> +
                                Allocator<i32, R2> {

        self.generic_solve_mut(b'N', b)
    }

    /// Solves in-place the linear system `self.transpose() * x = b`, where `x` is the unknown to be
    /// determined.
    ///
    /// Returns `false` if no solution was found (the decomposed matrix is singular).
    pub fn solve_transpose_mut<R2: Dim, C2: Dim>(&self, b: &mut MatrixMN<N, R2, C2>) -> bool
        where DefaultAllocator: Allocator<N, R2, C2> +
                                Allocator<i32, R2> {

        self.generic_solve_mut(b'T', b)
    }

    /// Solves in-place the linear system `self.conjugate_transpose() * x = b`, where `x` is the
    /// unknown to be determined.
    ///
    /// Returns `false` if no solution was found (the decomposed matrix is singular).
    pub fn solve_conjugate_transpose_mut<R2: Dim, C2: Dim>(&self, b: &mut MatrixMN<N, R2, C2>) -> bool
        where DefaultAllocator: Allocator<N, R2, C2> +
                                Allocator<i32, R2> {

        self.generic_solve_mut(b'C', b)
    }
}

impl<N: LUScalar, D: Dim> LU<N, D, D>
    where N: Zero + One,
          D: DimMin<D, Output = D>,
          DefaultAllocator: Allocator<N, D, D> +
                            Allocator<i32, D> {
    /// Computes the inverse of the decomposed matrix.
    pub fn inverse(mut self) -> Option<MatrixN<N, D>> {
        let dim = self.lu.nrows() as i32;
        let mut info = 0;
        let lwork = N::xgetri_work_size(dim, self.lu.as_mut_slice(),
                                        dim, self.p.as_mut_slice(),
                                        &mut info);
        lapack_check!(info);

        let mut work = unsafe { ::uninitialized_vec(lwork as usize) };

        N::xgetri(dim, self.lu.as_mut_slice(), dim, self.p.as_mut_slice(),
                  &mut work, lwork, &mut info);
        lapack_check!(info);

        Some(self.lu)
    }
}

/*
 *
 * Lapack functions dispatch.
 *
 */
/// Trait implemented by scalars for which LAPACK implements the LU decomposition.
pub trait LUScalar: Scalar {
    #[allow(missing_docs)]
    fn xgetrf(m: i32, n: i32, a: &mut [Self], lda: i32, ipiv: &mut [i32], info: &mut i32);
    #[allow(missing_docs)]
    fn xlaswp(n: i32, a: &mut [Self], lda: i32, k1: i32, k2: i32, ipiv: &[i32], incx: i32);
    #[allow(missing_docs)]
    fn xgetrs(trans: u8, n: i32, nrhs: i32, a: &[Self], lda: i32, ipiv: &[i32],
              b: &mut [Self], ldb: i32, info: &mut i32);
    #[allow(missing_docs)]
    fn xgetri(n: i32, a: &mut [Self], lda: i32, ipiv: &[i32],
              work: &mut [Self], lwork: i32, info: &mut i32);
    #[allow(missing_docs)]
    fn xgetri_work_size(n: i32, a: &mut [Self], lda: i32, ipiv: &[i32], info: &mut i32) -> i32;
}

macro_rules! lup_scalar_impl(
    ($N: ty, $xgetrf: path, $xlaswp: path, $xgetrs: path, $xgetri: path) => (
        impl LUScalar for $N {
            #[inline]
            fn xgetrf(m: i32, n: i32, a: &mut [Self], lda: i32, ipiv: &mut [i32], info: &mut i32) {
                $xgetrf(m, n, a, lda, ipiv, info)
            }

            #[inline]
            fn xlaswp(n: i32, a: &mut [Self], lda: i32, k1: i32, k2: i32, ipiv: &[i32], incx: i32) {
                $xlaswp(n, a, lda, k1, k2, ipiv, incx)
            }

            #[inline]
            fn xgetrs(trans: u8, n: i32, nrhs: i32, a: &[Self], lda: i32, ipiv: &[i32],
                      b: &mut [Self], ldb: i32, info: &mut i32) {
                $xgetrs(trans, n, nrhs, a, lda, ipiv, b, ldb, info)
            }

            #[inline]
            fn xgetri(n: i32, a: &mut [Self], lda: i32, ipiv: &[i32],
                      work: &mut [Self], lwork: i32, info: &mut i32) {
                $xgetri(n, a, lda, ipiv, work, lwork, info)
            }

            #[inline]
            fn xgetri_work_size(n: i32, a: &mut [Self], lda: i32, ipiv: &[i32], info: &mut i32) -> i32 {
                // Workspace query: LAPACK writes the optimal `lwork` into `work[0]`
                // when called with `lwork == -1`.
                let mut work = [ Zero::zero() ];
                let lwork = -1 as i32;

                $xgetri(n, a, lda, ipiv, &mut work, lwork, info);
                ComplexHelper::real_part(work[0]) as i32
            }
        }
    )
);

lup_scalar_impl!(f32, interface::sgetrf, interface::slaswp, interface::sgetrs, interface::sgetri);
lup_scalar_impl!(f64, interface::dgetrf, interface::dlaswp, interface::dgetrs, interface::dgetri);
lup_scalar_impl!(Complex<f32>, interface::cgetrf, interface::claswp, interface::cgetrs, interface::cgetri);
lup_scalar_impl!(Complex<f64>, interface::zgetrf, interface::zlaswp, interface::zgetrs, interface::zgetri);
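The `M == P * L * U` contract documented for `LU` can be checked on a tiny example without LAPACK. The following free-standing sketch (the `lu2` helper and plain row-major arrays are hypothetical, not the crate's API) performs one pivoted elimination step on a 2x2 matrix and verifies the reconstruction entry-wise.

```rust
// Hypothetical sketch (plain arrays, not the `LU` API above): one step of LU
// with partial pivoting on a 2x2 matrix, so that P * L * U reconstructs it.
fn lu2(m: [[f64; 2]; 2]) -> ([[f64; 2]; 2], [[f64; 2]; 2], bool) {
    // Pivot: bring the row with the largest first-column magnitude on top.
    let swap = m[1][0].abs() > m[0][0].abs();
    let (r0, r1) = if swap { (m[1], m[0]) } else { (m[0], m[1]) };

    // One elimination step yields unit-lower `L` and upper `U`.
    let l21 = r1[0] / r0[0];
    let l = [[1.0, 0.0], [l21, 1.0]];
    let u = [[r0[0], r0[1]], [0.0, r1[1] - l21 * r0[1]]];
    (l, u, swap)
}

fn main() {
    let m = [[1.0, 3.0], [2.0, 4.0]];
    let (l, u, swap) = lu2(m);

    // `P` undoes the row swap: check `m == P * L * U` entry-wise.
    for i in 0..2 {
        let pi = if swap { 1 - i } else { i };
        for j in 0..2 {
            let lu = l[pi][0] * u[0][j] + l[pi][1] * u[1][j];
            assert!((lu - m[i][j]).abs() < 1e-12);
        }
    }
    println!("P * L * U == M");
}
```

Pivoting on the largest available entry is what keeps the multipliers in `L` bounded by 1, which is why xgetrf does the same at every column.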
@@ -0,0 +1,200 @@
#[cfg(feature = "serde-serialize")]
use serde;

use num_complex::Complex;
use num::Zero;

use ::ComplexHelper;
use na::{Scalar, DefaultAllocator, Matrix, VectorN, MatrixMN};
use na::dimension::{Dim, DimMin, DimMinimum, U1};
use na::storage::Storage;
use na::allocator::Allocator;

use lapack::fortran as interface;

/// The QR decomposition of a general matrix.
#[cfg_attr(feature = "serde-serialize", derive(Serialize, Deserialize))]
#[cfg_attr(feature = "serde-serialize",
           serde(bound(serialize =
               "DefaultAllocator: Allocator<N, R, C> +
                                  Allocator<N, DimMinimum<R, C>>,
                MatrixMN<N, R, C>: serde::Serialize,
                VectorN<N, DimMinimum<R, C>>: serde::Serialize")))]
#[cfg_attr(feature = "serde-serialize",
           serde(bound(deserialize =
               "DefaultAllocator: Allocator<N, R, C> +
                                  Allocator<N, DimMinimum<R, C>>,
                MatrixMN<N, R, C>: serde::Deserialize<'de>,
                VectorN<N, DimMinimum<R, C>>: serde::Deserialize<'de>")))]
#[derive(Clone, Debug)]
pub struct QR<N: Scalar, R: DimMin<C>, C: Dim>
    where DefaultAllocator: Allocator<N, R, C> +
                            Allocator<N, DimMinimum<R, C>> {
    qr:  MatrixMN<N, R, C>,
    tau: VectorN<N, DimMinimum<R, C>>
}

impl<N: Scalar, R: DimMin<C>, C: Dim> Copy for QR<N, R, C>
    where DefaultAllocator: Allocator<N, R, C> +
                            Allocator<N, DimMinimum<R, C>>,
          MatrixMN<N, R, C>: Copy,
          VectorN<N, DimMinimum<R, C>>: Copy { }

impl<N: QRScalar + Zero, R: DimMin<C>, C: Dim> QR<N, R, C>
    where DefaultAllocator: Allocator<N, R, C> +
                            Allocator<N, R, DimMinimum<R, C>> +
                            Allocator<N, DimMinimum<R, C>, C> +
                            Allocator<N, DimMinimum<R, C>> {
    /// Computes the QR decomposition of the matrix `m`.
    pub fn new(mut m: MatrixMN<N, R, C>) -> QR<N, R, C> {
        let (nrows, ncols) = m.data.shape();

        let mut info = 0;
        let mut tau = unsafe { Matrix::new_uninitialized_generic(nrows.min(ncols), U1) };

        if nrows.value() == 0 || ncols.value() == 0 {
            return QR { qr: m, tau: tau };
        }

        let lwork = N::xgeqrf_work_size(nrows.value() as i32, ncols.value() as i32,
                                        m.as_mut_slice(), nrows.value() as i32,
                                        tau.as_mut_slice(), &mut info);

        let mut work = unsafe { ::uninitialized_vec(lwork as usize) };

        N::xgeqrf(nrows.value() as i32, ncols.value() as i32, m.as_mut_slice(),
                  nrows.value() as i32, tau.as_mut_slice(), &mut work, lwork, &mut info);

        QR { qr: m, tau: tau }
    }

    /// Retrieves the upper-trapezoidal submatrix `R` of this decomposition.
    #[inline]
    pub fn r(&self) -> MatrixMN<N, DimMinimum<R, C>, C> {
        let (nrows, ncols) = self.qr.data.shape();
        self.qr.rows_generic(0, nrows.min(ncols)).upper_triangle()
    }
}

impl<N: QRReal + Zero, R: DimMin<C>, C: Dim> QR<N, R, C>
    where DefaultAllocator: Allocator<N, R, C> +
                            Allocator<N, R, DimMinimum<R, C>> +
                            Allocator<N, DimMinimum<R, C>, C> +
                            Allocator<N, DimMinimum<R, C>> {
    /// Retrieves the matrices `(Q, R)` of this decomposition.
    pub fn unpack(self) -> (MatrixMN<N, R, DimMinimum<R, C>>, MatrixMN<N, DimMinimum<R, C>, C>) {
        (self.q(), self.r())
    }

    /// Computes the orthogonal matrix `Q` of this decomposition.
    #[inline]
    pub fn q(&self) -> MatrixMN<N, R, DimMinimum<R, C>> {
        let (nrows, ncols) = self.qr.data.shape();
        let min_nrows_ncols = nrows.min(ncols);

        if min_nrows_ncols.value() == 0 {
            return MatrixMN::from_element_generic(nrows, min_nrows_ncols, N::zero());
        }

        let mut q = self.qr.generic_slice((0, 0), (nrows, min_nrows_ncols)).into_owned();

        let mut info = 0;
        let nrows = nrows.value() as i32;

        let lwork = N::xorgqr_work_size(nrows, min_nrows_ncols.value() as i32,
                                        self.tau.len() as i32, q.as_mut_slice(), nrows,
                                        self.tau.as_slice(), &mut info);

        let mut work = vec![ N::zero(); lwork as usize ];

        N::xorgqr(nrows, min_nrows_ncols.value() as i32, self.tau.len() as i32, q.as_mut_slice(),
                  nrows, self.tau.as_slice(), &mut work, lwork, &mut info);

        q
    }
}

/*
 *
 * Lapack functions dispatch.
 *
 */
/// Trait implemented by scalar types for which LAPACK functions exist to compute the
/// QR decomposition.
pub trait QRScalar: Scalar {
    #[allow(missing_docs)]
    fn xgeqrf(m: i32, n: i32, a: &mut [Self], lda: i32, tau: &mut [Self],
              work: &mut [Self], lwork: i32, info: &mut i32);

    #[allow(missing_docs)]
    fn xgeqrf_work_size(m: i32, n: i32, a: &mut [Self], lda: i32,
                        tau: &mut [Self], info: &mut i32) -> i32;
}

/// Trait implemented by reals for which LAPACK functions exist to compute the
/// QR decomposition.
pub trait QRReal: QRScalar {
    #[allow(missing_docs)]
    fn xorgqr(m: i32, n: i32, k: i32, a: &mut [Self], lda: i32, tau: &[Self], work: &mut [Self],
              lwork: i32, info: &mut i32);

    #[allow(missing_docs)]
    fn xorgqr_work_size(m: i32, n: i32, k: i32, a: &mut [Self], lda: i32,
                        tau: &[Self], info: &mut i32) -> i32;
}

macro_rules! qr_scalar_impl(
    ($N: ty, $xgeqrf: path) => (
        impl QRScalar for $N {
            #[inline]
            fn xgeqrf(m: i32, n: i32, a: &mut [Self], lda: i32, tau: &mut [Self],
                      work: &mut [Self], lwork: i32, info: &mut i32) {
                $xgeqrf(m, n, a, lda, tau, work, lwork, info)
            }

            #[inline]
            fn xgeqrf_work_size(m: i32, n: i32, a: &mut [Self], lda: i32, tau: &mut [Self],
                                info: &mut i32) -> i32 {
                // Workspace query: `lwork == -1` makes LAPACK return the optimal
                // workspace size in `work[0]`.
                let mut work = [ Zero::zero() ];
                let lwork = -1 as i32;

                $xgeqrf(m, n, a, lda, tau, &mut work, lwork, info);
                ComplexHelper::real_part(work[0]) as i32
            }
        }
    )
);

macro_rules! qr_real_impl(
    ($N: ty, $xorgqr: path) => (
        impl QRReal for $N {
            #[inline]
            fn xorgqr(m: i32, n: i32, k: i32, a: &mut [Self], lda: i32, tau: &[Self],
                      work: &mut [Self], lwork: i32, info: &mut i32) {
                $xorgqr(m, n, k, a, lda, tau, work, lwork, info)
            }

            #[inline]
            fn xorgqr_work_size(m: i32, n: i32, k: i32, a: &mut [Self], lda: i32, tau: &[Self],
                                info: &mut i32) -> i32 {
                let mut work = [ Zero::zero() ];
                let lwork = -1 as i32;

                $xorgqr(m, n, k, a, lda, tau, &mut work, lwork, info);
                ComplexHelper::real_part(work[0]) as i32
            }
        }
    )
);

qr_scalar_impl!(f32, interface::sgeqrf);
qr_scalar_impl!(f64, interface::dgeqrf);
qr_scalar_impl!(Complex<f32>, interface::cgeqrf);
qr_scalar_impl!(Complex<f64>, interface::zgeqrf);

qr_real_impl!(f32, interface::sorgqr);
qr_real_impl!(f64, interface::dorgqr);
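The factorization that xgeqrf/xorgqr compute (via Householder reflections) can be illustrated with classical Gram-Schmidt on a 2x2 example. The sketch below is hypothetical and free-standing (plain arrays, not the `QR` type above); it checks that the columns of `Q` are orthonormal and that `Q * R` reconstructs the input.

```rust
// Hypothetical sketch: classical Gram-Schmidt QR on the two columns of a 2x2
// matrix. Returns the columns `q1`, `q2` of `Q` and the entries of `R`.
fn qr2(a1: [f64; 2], a2: [f64; 2]) -> ([f64; 2], [f64; 2], (f64, f64, f64)) {
    let r11 = (a1[0] * a1[0] + a1[1] * a1[1]).sqrt();
    let q1 = [a1[0] / r11, a1[1] / r11];

    // Remove the `q1` component of `a2`, then normalize the remainder.
    let r12 = q1[0] * a2[0] + q1[1] * a2[1];
    let v = [a2[0] - r12 * q1[0], a2[1] - r12 * q1[1]];
    let r22 = (v[0] * v[0] + v[1] * v[1]).sqrt();
    let q2 = [v[0] / r22, v[1] / r22];
    (q1, q2, (r11, r12, r22))
}

fn main() {
    let (a1, a2) = ([3.0, 4.0], [1.0, 2.0]);
    let (q1, q2, (r11, r12, r22)) = qr2(a1, a2);

    // Columns of `Q` are orthonormal, and `Q * R` reconstructs the input.
    assert!((q1[0] * q2[0] + q1[1] * q2[1]).abs() < 1e-12);
    for i in 0..2 {
        assert!((q1[i] * r11 - a1[i]).abs() < 1e-12);
        assert!((q1[i] * r12 + q2[i] * r22 - a2[i]).abs() < 1e-12);
    }
    println!("Q * R == M");
}
```

LAPACK prefers Householder reflections over Gram-Schmidt because they stay orthogonal to machine precision, but the resulting `Q` and `R` are the same up to column signs.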
@@ -0,0 +1,214 @@
#[cfg(feature = "serde-serialize")]
use serde;

use num::Zero;
use num_complex::Complex;

use alga::general::Real;

use ::ComplexHelper;
use na::{Scalar, DefaultAllocator, Matrix, VectorN, MatrixN};
use na::dimension::{Dim, U1};
use na::storage::Storage;
use na::allocator::Allocator;

use lapack::fortran as interface;

/// Eigendecomposition of a real square matrix with real eigenvalues.
#[cfg_attr(feature = "serde-serialize", derive(Serialize, Deserialize))]
#[cfg_attr(feature = "serde-serialize",
           serde(bound(serialize =
               "DefaultAllocator: Allocator<N, D, D> + Allocator<N, D>,
                VectorN<N, D>: serde::Serialize,
                MatrixN<N, D>: serde::Serialize")))]
#[cfg_attr(feature = "serde-serialize",
           serde(bound(deserialize =
               "DefaultAllocator: Allocator<N, D, D> + Allocator<N, D>,
                VectorN<N, D>: serde::Deserialize<'de>,
                MatrixN<N, D>: serde::Deserialize<'de>")))]
#[derive(Clone, Debug)]
pub struct RealSchur<N: Scalar, D: Dim>
    where DefaultAllocator: Allocator<N, D> +
                            Allocator<N, D, D> {

    re: VectorN<N, D>,
    im: VectorN<N, D>,
    t:  MatrixN<N, D>,
    q:  MatrixN<N, D>
}

impl<N: Scalar, D: Dim> Copy for RealSchur<N, D>
    where DefaultAllocator: Allocator<N, D, D> + Allocator<N, D>,
          MatrixN<N, D>: Copy,
          VectorN<N, D>: Copy { }

impl<N: RealSchurScalar + Real, D: Dim> RealSchur<N, D>
    where DefaultAllocator: Allocator<N, D, D> +
                            Allocator<N, D> {
    /// Computes the eigenvalues and real Schur form of the matrix `m`.
    ///
    /// Panics if the method did not converge.
    pub fn new(m: MatrixN<N, D>) -> Self {
        Self::try_new(m).expect("RealSchur decomposition: convergence failed.")
    }

    /// Computes the eigenvalues and real Schur form of the matrix `m`.
    ///
    /// Returns `None` if the method did not converge.
    pub fn try_new(mut m: MatrixN<N, D>) -> Option<Self> {
        assert!(m.is_square(), "Unable to compute the eigenvalue decomposition of a non-square matrix.");

        let (nrows, ncols) = m.data.shape();
        let n = nrows.value();

        let lda = n as i32;

        let mut info = 0;

        let mut wr = unsafe { Matrix::new_uninitialized_generic(nrows, U1) };
        let mut wi = unsafe { Matrix::new_uninitialized_generic(nrows, U1) };
        let mut q  = unsafe { Matrix::new_uninitialized_generic(nrows, ncols) };
        // Placeholders:
        let mut bwork  = [ 0i32 ];
        let mut unused = 0;

        let lwork = N::xgees_work_size(b'V', b'N', n as i32, m.as_mut_slice(), lda, &mut unused,
                                       wr.as_mut_slice(), wi.as_mut_slice(), q.as_mut_slice(), n as i32,
                                       &mut bwork, &mut info);
        lapack_check!(info);

        let mut work = unsafe { ::uninitialized_vec(lwork as usize) };

        N::xgees(b'V', b'N', n as i32, m.as_mut_slice(), lda, &mut unused,
                 wr.as_mut_slice(), wi.as_mut_slice(), q.as_mut_slice(),
                 n as i32, &mut work, lwork, &mut bwork, &mut info);
        lapack_check!(info);

        Some(RealSchur { re: wr, im: wi, t: m, q: q })
    }

    /// Retrieves the unitary matrix `Q` and the upper-quasitriangular matrix `T` such that the
    /// decomposed matrix equals `Q * T * Q.transpose()`.
    pub fn unpack(self) -> (MatrixN<N, D>, MatrixN<N, D>) {
        (self.q, self.t)
    }

    /// Computes the real eigenvalues of the decomposed matrix.
    ///
    /// Returns `None` if some eigenvalues are complex.
    pub fn eigenvalues(&self) -> Option<VectorN<N, D>> {
        if self.im.iter().all(|e| e.is_zero()) {
            Some(self.re.clone())
        }
        else {
            None
        }
    }

    /// Computes the complex eigenvalues of the decomposed matrix.
    pub fn complex_eigenvalues(&self) -> VectorN<Complex<N>, D>
        where DefaultAllocator: Allocator<Complex<N>, D> {

        let mut out = unsafe { VectorN::new_uninitialized_generic(self.t.data.shape().0, U1) };

        for i in 0 .. out.len() {
            out[i] = Complex::new(self.re[i], self.im[i])
        }

        out
    }
}

/*
 *
 * Lapack functions dispatch.
 *
 */
/// Trait implemented by scalars for which LAPACK implements the real Schur decomposition.
pub trait RealSchurScalar: Scalar {
    #[allow(missing_docs)]
    fn xgees(jobvs:  u8,
             sort:   u8,
             // select: ???
             n:      i32,
             a:      &mut [Self],
             lda:    i32,
             sdim:   &mut i32,
             wr:     &mut [Self],
             wi:     &mut [Self],
             vs:     &mut [Self],
             ldvs:   i32,
             work:   &mut [Self],
             lwork:  i32,
             bwork:  &mut [i32],
             info:   &mut i32);

    #[allow(missing_docs)]
    fn xgees_work_size(jobvs: u8,
                       sort:  u8,
                       // select: ???
                       n:     i32,
                       a:     &mut [Self],
                       lda:   i32,
                       sdim:  &mut i32,
                       wr:    &mut [Self],
                       wi:    &mut [Self],
                       vs:    &mut [Self],
                       ldvs:  i32,
                       bwork: &mut [i32],
                       info:  &mut i32)
                       -> i32;
}

macro_rules! real_eigensystem_scalar_impl (
    ($N: ty, $xgees: path) => (
        impl RealSchurScalar for $N {
            #[inline]
            fn xgees(jobvs:  u8,
                     sort:   u8,
                     // select: ???
                     n:      i32,
                     a:      &mut [$N],
                     lda:    i32,
                     sdim:   &mut i32,
                     wr:     &mut [$N],
                     wi:     &mut [$N],
                     vs:     &mut [$N],
                     ldvs:   i32,
                     work:   &mut [$N],
                     lwork:  i32,
                     bwork:  &mut [i32],
                     info:   &mut i32) {
                $xgees(jobvs, sort, None, n, a, lda, sdim, wr, wi, vs, ldvs, work, lwork, bwork, info);
            }

            #[inline]
            fn xgees_work_size(jobvs: u8,
                               sort:  u8,
                               // select: ???
                               n:     i32,
                               a:     &mut [$N],
                               lda:   i32,
                               sdim:  &mut i32,
                               wr:    &mut [$N],
                               wi:    &mut [$N],
                               vs:    &mut [$N],
                               ldvs:  i32,
                               bwork: &mut [i32],
                               info:  &mut i32)
                               -> i32 {
                let mut work = [ Zero::zero() ];
                let lwork = -1 as i32;

                $xgees(jobvs, sort, None, n, a, lda, sdim, wr, wi, vs, ldvs, &mut work, lwork, bwork, info);
                ComplexHelper::real_part(work[0]) as i32
            }
        }
    )
);

real_eigensystem_scalar_impl!(f32, interface::sgees);
real_eigensystem_scalar_impl!(f64, interface::dgees);
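The parallel `wr`/`wi` arrays returned by xgees (and paired up by `complex_eigenvalues`) can be illustrated on a 2x2 example, where the eigenvalues follow directly from the characteristic polynomial x^2 - tr(M) x + det(M). The `eig2` helper below is hypothetical and free-standing, not the crate's API.

```rust
// Hypothetical sketch of the wr/wi layout: eigenvalues of a 2x2 matrix stored
// as parallel real/imaginary arrays, as xgees does.
fn eig2(m: [[f64; 2]; 2]) -> ([f64; 2], [f64; 2]) {
    let tr = m[0][0] + m[1][1];
    let det = m[0][0] * m[1][1] - m[0][1] * m[1][0];
    let disc = tr * tr - 4.0 * det;

    if disc < 0.0 {
        // A complex-conjugate pair: same real part, opposite imaginary parts.
        let im = (-disc).sqrt() / 2.0;
        ([tr / 2.0, tr / 2.0], [im, -im])
    } else {
        let d = disc.sqrt() / 2.0;
        ([tr / 2.0 + d, tr / 2.0 - d], [0.0, 0.0])
    }
}

fn main() {
    // A rotation-like matrix: eigenvalues 1 +/- 2i.
    let (wr, wi) = eig2([[1.0, -2.0], [2.0, 1.0]]);
    assert_eq!(wr, [1.0, 1.0]);
    assert_eq!(wi, [2.0, -2.0]);
    // `eigenvalues()` above would return `None` here, since some `wi` are nonzero.
    assert!(wi.iter().any(|e| *e != 0.0));
}
```

This is exactly the test `eigenvalues()` performs: it returns the real parts only when every `wi` entry is zero.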
@@ -0,0 +1,279 @@
#[cfg(feature = "serde-serialize")]
use serde;

use std::cmp;
use num::Signed;

use na::{Scalar, Matrix, VectorN, MatrixN, MatrixMN,
         DefaultAllocator};
use na::dimension::{Dim, DimMin, DimMinimum, U1};
use na::storage::Storage;
use na::allocator::Allocator;

use lapack::fortran as interface;

/// The SVD decomposition of a general matrix.
#[cfg_attr(feature = "serde-serialize", derive(Serialize, Deserialize))]
#[cfg_attr(feature = "serde-serialize",
           serde(bound(serialize =
               "DefaultAllocator: Allocator<N, DimMinimum<R, C>> +
                                  Allocator<N, R, R> +
                                  Allocator<N, C, C>,
                MatrixN<N, R>: serde::Serialize,
                MatrixN<N, C>: serde::Serialize,
                VectorN<N, DimMinimum<R, C>>: serde::Serialize")))]
#[cfg_attr(feature = "serde-serialize",
           serde(bound(deserialize =
               "DefaultAllocator: Allocator<N, DimMinimum<R, C>> +
                                  Allocator<N, R, R> +
                                  Allocator<N, C, C>,
                MatrixN<N, R>: serde::Deserialize<'de>,
                MatrixN<N, C>: serde::Deserialize<'de>,
                VectorN<N, DimMinimum<R, C>>: serde::Deserialize<'de>")))]
#[derive(Clone, Debug)]
pub struct SVD<N: Scalar, R: DimMin<C>, C: Dim>
    where DefaultAllocator: Allocator<N, R, R> +
                            Allocator<N, DimMinimum<R, C>> +
                            Allocator<N, C, C> {
    /// The left-singular vectors `U` of this SVD.
    pub u:  MatrixN<N, R>, // FIXME: should be MatrixMN<N, R, DimMinimum<R, C>>
    /// The right-singular vectors `V^t` of this SVD.
    pub vt: MatrixN<N, C>, // FIXME: should be MatrixMN<N, DimMinimum<R, C>, C>
    /// The singular values of this SVD.
    pub singular_values: VectorN<N, DimMinimum<R, C>>
}

impl<N: Scalar, R: DimMin<C>, C: Dim> Copy for SVD<N, R, C>
    where DefaultAllocator: Allocator<N, C, C> +
                            Allocator<N, R, R> +
                            Allocator<N, DimMinimum<R, C>>,
          MatrixMN<N, R, R>: Copy,
          MatrixMN<N, C, C>: Copy,
          VectorN<N, DimMinimum<R, C>>: Copy { }

/// Trait implemented by floats (`f32`, `f64`) and complex floats (`Complex<f32>`, `Complex<f64>`)
/// supported by the Singular Value Decomposition.
pub trait SVDScalar<R: DimMin<C>, C: Dim>: Scalar
    where DefaultAllocator: Allocator<Self, R, R> +
                            Allocator<Self, R, C> +
                            Allocator<Self, DimMinimum<R, C>> +
                            Allocator<Self, C, C> {
    /// Computes the SVD decomposition of `m`.
    fn compute(m: MatrixMN<Self, R, C>) -> Option<SVD<Self, R, C>>;
}

impl<N: SVDScalar<R, C>, R: DimMin<C>, C: Dim> SVD<N, R, C>
    where DefaultAllocator: Allocator<N, R, R> +
                            Allocator<N, R, C> +
                            Allocator<N, DimMinimum<R, C>> +
                            Allocator<N, C, C> {
    /// Computes the Singular Value Decomposition of the matrix `m`.
    pub fn new(m: MatrixMN<N, R, C>) -> Option<Self> {
        N::compute(m)
    }
}

macro_rules! svd_impl(
    ($t: ty, $lapack_func: path) => (
        impl<R: Dim, C: Dim> SVDScalar<R, C> for $t
            where R: DimMin<C>,
                  DefaultAllocator: Allocator<$t, R, C> +
                                    Allocator<$t, R, R> +
                                    Allocator<$t, C, C> +
                                    Allocator<$t, DimMinimum<R, C>> {

            fn compute(mut m: MatrixMN<$t, R, C>) -> Option<SVD<$t, R, C>> {
                let (nrows, ncols) = m.data.shape();

                if nrows.value() == 0 || ncols.value() == 0 {
                    return None;
                }

                let job = b'A';

                let lda = nrows.value() as i32;

                let mut u  = unsafe { Matrix::new_uninitialized_generic(nrows, nrows) };
                let mut s  = unsafe { Matrix::new_uninitialized_generic(nrows.min(ncols), U1) };
                let mut vt = unsafe { Matrix::new_uninitialized_generic(ncols, ncols) };

                let ldu  = nrows.value();
                let ldvt = ncols.value();

                // First call is a workspace query (`lwork == -1`).
                let mut work  = [ 0.0 ];
                let mut lwork = -1 as i32;
                let mut info  = 0;
                let mut iwork = unsafe { ::uninitialized_vec(8 * cmp::min(nrows.value(), ncols.value())) };

                $lapack_func(job, nrows.value() as i32, ncols.value() as i32, m.as_mut_slice(),
                             lda, s.as_mut_slice(), u.as_mut_slice(), ldu as i32, vt.as_mut_slice(),
                             ldvt as i32, &mut work, lwork, &mut iwork, &mut info);
                lapack_check!(info);

                lwork = work[0] as i32;
                let mut work = unsafe { ::uninitialized_vec(lwork as usize) };

                $lapack_func(job, nrows.value() as i32, ncols.value() as i32, m.as_mut_slice(),
                             lda, s.as_mut_slice(), u.as_mut_slice(), ldu as i32, vt.as_mut_slice(),
                             ldvt as i32, &mut work, lwork, &mut iwork, &mut info);
                lapack_check!(info);

                Some(SVD { u: u, singular_values: s, vt: vt })
            }
        }

        impl<R: DimMin<C>, C: Dim> SVD<$t, R, C>
            // FIXME: All those bounds…
            where DefaultAllocator: Allocator<$t, R, C> +
                                    Allocator<$t, C, R> +
                                    Allocator<$t, U1, R> +
                                    Allocator<$t, U1, C> +
                                    Allocator<$t, R, R> +
                                    Allocator<$t, DimMinimum<R, C>> +
                                    Allocator<$t, DimMinimum<R, C>, R> +
                                    Allocator<$t, DimMinimum<R, C>, C> +
                                    Allocator<$t, R, DimMinimum<R, C>> +
                                    Allocator<$t, C, C> {
            /// Reconstructs the matrix from its decomposition.
            ///
            /// Useful if some components (e.g. some singular values) of this decomposition have
            /// been manually changed by the user.
            #[inline]
            pub fn recompose(self) -> MatrixMN<$t, R, C> {
                let nrows = self.u.data.shape().0;
                let ncols = self.vt.data.shape().1;
                let min_nrows_ncols = nrows.min(ncols);

                let mut res: MatrixMN<_, R, C> = Matrix::zeros_generic(nrows, ncols);

                {
                    let mut sres = res.generic_slice_mut((0, 0), (min_nrows_ncols, ncols));
                    sres.copy_from(&self.vt.rows_generic(0, min_nrows_ncols));

                    for i in 0 .. min_nrows_ncols.value() {
                        let eigval  = self.singular_values[i];
                        let mut row = sres.row_mut(i);
                        row *= eigval;
                    }
                }

                self.u * res
            }

            /// Computes the pseudo-inverse of the decomposed matrix.
            ///
            /// All singular values below `epsilon` will be set to zero instead of being inverted.
            #[inline]
            pub fn pseudo_inverse(&self, epsilon: $t) -> MatrixMN<$t, C, R> {
                let nrows = self.u.data.shape().0;
                let ncols = self.vt.data.shape().1;
                let min_nrows_ncols = nrows.min(ncols);

                let mut res: MatrixMN<_, C, R> = Matrix::zeros_generic(ncols, nrows);

                {
                    let mut sres = res.generic_slice_mut((0, 0), (min_nrows_ncols, nrows));
                    self.u.columns_generic(0, min_nrows_ncols).transpose_to(&mut sres);

                    for i in 0 .. min_nrows_ncols.value() {
                        let eigval  = self.singular_values[i];
                        let mut row = sres.row_mut(i);

                        if eigval.abs() > epsilon {
                            row /= eigval
                        }
                        else {
                            row.fill(0.0);
                        }
                    }
                }

                self.vt.tr_mul(&res)
            }

            /// The rank of the decomposed matrix.
            ///
            /// This is the number of singular values that are not too small (i.e. greater than
            /// the given `epsilon`).
            #[inline]
            pub fn rank(&self, epsilon: $t) -> usize {
                let mut i = 0;

                for e in self.singular_values.as_slice().iter() {
                    if e.abs() > epsilon {
                        i += 1;
                    }
                }

                i
            }

            // FIXME: add methods to retrieve the null-space and column-space? (Respectively
            // corresponding to the zero and non-zero singular values).
        }
    )
);
|
||||

/*
macro_rules! svd_complex_impl(
    ($name: ident, $t: ty, $lapack_func: path) => (
        impl SVDScalar for Complex<$t> {
            fn compute<R: Dim, C: Dim, S>(mut m: Matrix<$t, R, C, S>) -> Option<SVD<$t, R, C, S::Alloc>>
                Option<(MatrixN<Complex<$t>, R, S::Alloc>,
                        VectorN<$t, DimMinimum<R, C>, S::Alloc>,
                        MatrixN<Complex<$t>, C, S::Alloc>)>
            where R: DimMin<C>,
                  S: ContiguousStorage<Complex<$t>, R, C>,
                  S::Alloc: OwnedAllocator<Complex<$t>, R, C, S> +
                            Allocator<Complex<$t>, R, R> +
                            Allocator<Complex<$t>, C, C> +
                            Allocator<$t, DimMinimum<R, C>> {
            let (nrows, ncols) = m.data.shape();

            if nrows.value() == 0 || ncols.value() == 0 {
                return None;
            }

            let jobu  = b'A';
            let jobvt = b'A';

            let lda = nrows.value() as i32;
            let min_nrows_ncols = nrows.min(ncols);


            let mut u  = unsafe { Matrix::new_uninitialized_generic(nrows, nrows) };
            let mut s  = unsafe { Matrix::new_uninitialized_generic(min_nrows_ncols, U1) };
            let mut vt = unsafe { Matrix::new_uninitialized_generic(ncols, ncols) };

            let ldu  = nrows.value();
            let ldvt = ncols.value();

            let mut work  = [ Complex::new(0.0, 0.0) ];
            let mut lwork = -1 as i32;
            let mut rwork = vec![ 0.0; (5 * min_nrows_ncols.value()) ];
            let mut info  = 0;

            $lapack_func(jobu, jobvt, nrows.value() as i32, ncols.value() as i32, m.as_mut_slice(),
                         lda, s.as_mut_slice(), u.as_mut_slice(), ldu as i32, vt.as_mut_slice(),
                         ldvt as i32, &mut work, lwork, &mut rwork, &mut info);
            lapack_check!(info);

            lwork = work[0].re as i32;
            let mut work = vec![Complex::new(0.0, 0.0); lwork as usize];

            $lapack_func(jobu, jobvt, nrows.value() as i32, ncols.value() as i32, m.as_mut_slice(),
                         lda, s.as_mut_slice(), u.as_mut_slice(), ldu as i32, vt.as_mut_slice(),
                         ldvt as i32, &mut work, lwork, &mut rwork, &mut info);
            lapack_check!(info);

            Some((u, s, vt))
        }
    );
);
*/

svd_impl!(f32, interface::sgesdd);
svd_impl!(f64, interface::dgesdd);
// svd_complex_impl!(lapack_svd_complex_f32, f32, interface::cgesvd);
// svd_complex_impl!(lapack_svd_complex_f64, f64, interface::zgesvd);
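The `pseudo_inverse` and `rank` methods above both hinge on the same thresholding rule: singular values with magnitude above `epsilon` are kept (and inverted for the pseudo-inverse), the rest are treated as zero. A minimal plain-Rust sketch of that rule, with hypothetical helper names (no nalgebra types involved):

```rust
// Sketch of the singular-value thresholding used by `pseudo_inverse` and
// `rank` above. `invert_singular_values` and `rank_from_singular_values`
// are hypothetical helpers, not part of nalgebra-lapack.
fn invert_singular_values(singular_values: &[f64], epsilon: f64) -> Vec<f64> {
    singular_values
        .iter()
        .map(|&s| if s.abs() > epsilon { 1.0 / s } else { 0.0 })
        .collect()
}

fn rank_from_singular_values(singular_values: &[f64], epsilon: f64) -> usize {
    singular_values.iter().filter(|s| s.abs() > epsilon).count()
}

fn main() {
    let svals = [2.0, 0.5, 1.0e-12];
    // The tiny singular value is zeroed instead of being inverted.
    println!("{:?}", invert_singular_values(&svals, 1.0e-7));
    println!("{}", rank_from_singular_values(&svals, 1.0e-7));
}
```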
@ -0,0 +1,176 @@
#[cfg(feature = "serde-serialize")]
use serde;

use num::Zero;
use std::ops::MulAssign;

use alga::general::Real;

use ::ComplexHelper;
use na::{Scalar, DefaultAllocator, Matrix, VectorN, MatrixN};
use na::dimension::{Dim, U1};
use na::storage::Storage;
use na::allocator::Allocator;

use lapack::fortran as interface;

/// Eigendecomposition of a real square symmetric matrix with real eigenvalues.
#[cfg_attr(feature = "serde-serialize", derive(Serialize, Deserialize))]
#[cfg_attr(feature = "serde-serialize",
           serde(bound(serialize =
               "DefaultAllocator: Allocator<N, D, D> +
                                  Allocator<N, D>,
                VectorN<N, D>: serde::Serialize,
                MatrixN<N, D>: serde::Serialize")))]
#[cfg_attr(feature = "serde-serialize",
           serde(bound(deserialize =
               "DefaultAllocator: Allocator<N, D, D> +
                                  Allocator<N, D>,
                VectorN<N, D>: serde::Deserialize<'de>,
                MatrixN<N, D>: serde::Deserialize<'de>")))]
#[derive(Clone, Debug)]
pub struct SymmetricEigen<N: Scalar, D: Dim>
    where DefaultAllocator: Allocator<N, D> +
                            Allocator<N, D, D> {
    /// The eigenvectors of the decomposed matrix.
    pub eigenvectors: MatrixN<N, D>,

    /// The unsorted eigenvalues of the decomposed matrix.
    pub eigenvalues: VectorN<N, D>,
}


impl<N: Scalar, D: Dim> Copy for SymmetricEigen<N, D>
    where DefaultAllocator: Allocator<N, D, D> +
                            Allocator<N, D>,
          MatrixN<N, D>: Copy,
          VectorN<N, D>: Copy { }

impl<N: SymmetricEigenScalar + Real, D: Dim> SymmetricEigen<N, D>
    where DefaultAllocator: Allocator<N, D, D> +
                            Allocator<N, D> {

    /// Computes the eigenvalues and eigenvectors of the symmetric matrix `m`.
    ///
    /// Only the lower-triangular part of `m` is read. Panics if the method did not converge.
    pub fn new(m: MatrixN<N, D>) -> Self {
        let (vals, vecs) = Self::do_decompose(m, true).expect("SymmetricEigen: convergence failure.");
        SymmetricEigen { eigenvalues: vals, eigenvectors: vecs.unwrap() }
    }

    /// Computes the eigenvalues and eigenvectors of the symmetric matrix `m`.
    ///
    /// Only the lower-triangular part of `m` is read. Returns `None` if the method did not
    /// converge.
    pub fn try_new(m: MatrixN<N, D>) -> Option<Self> {
        Self::do_decompose(m, true).map(|(vals, vecs)| {
            SymmetricEigen { eigenvalues: vals, eigenvectors: vecs.unwrap() }
        })
    }

    fn do_decompose(mut m: MatrixN<N, D>, eigenvectors: bool) -> Option<(VectorN<N, D>, Option<MatrixN<N, D>>)> {
        assert!(m.is_square(), "Unable to compute the eigenvalue decomposition of a non-square matrix.");

        let jobz = if eigenvectors { b'V' } else { b'N' };

        let nrows = m.data.shape().0;
        let n = nrows.value();

        let lda = n as i32;

        let mut values = unsafe { Matrix::new_uninitialized_generic(nrows, U1) };
        let mut info = 0;

        let lwork = N::xsyev_work_size(jobz, b'L', n as i32, m.as_mut_slice(), lda, &mut info);
        lapack_check!(info);

        let mut work = unsafe { ::uninitialized_vec(lwork as usize) };

        N::xsyev(jobz, b'L', n as i32, m.as_mut_slice(), lda, values.as_mut_slice(), &mut work, lwork, &mut info);
        lapack_check!(info);

        let vectors = if eigenvectors { Some(m) } else { None };
        Some((values, vectors))
    }

    /// Computes only the eigenvalues of the input matrix.
    ///
    /// Panics if the method does not converge.
    pub fn eigenvalues(m: MatrixN<N, D>) -> VectorN<N, D> {
        Self::do_decompose(m, false).expect("SymmetricEigen eigenvalues: convergence failure.").0
    }

    /// Computes only the eigenvalues of the input matrix.
    ///
    /// Returns `None` if the method does not converge.
    pub fn try_eigenvalues(m: MatrixN<N, D>) -> Option<VectorN<N, D>> {
        Self::do_decompose(m, false).map(|res| res.0)
    }

    /// The determinant of the decomposed matrix.
    #[inline]
    pub fn determinant(&self) -> N {
        let mut det = N::one();
        for e in self.eigenvalues.iter() {
            det *= *e;
        }

        det
    }

    /// Rebuild the original matrix.
    ///
    /// This is useful if some of the eigenvalues have been manually modified.
    pub fn recompose(&self) -> MatrixN<N, D> {
        let mut u_t = self.eigenvectors.clone();
        for i in 0 .. self.eigenvalues.len() {
            let val = self.eigenvalues[i];
            u_t.column_mut(i).mul_assign(val);
        }
        u_t.transpose_mut();
        &self.eigenvectors * u_t
    }
}


/*
 *
 * Lapack functions dispatch.
 *
 */
/// Trait implemented by scalars for which Lapack implements the eigendecomposition of symmetric
/// real matrices.
pub trait SymmetricEigenScalar: Scalar {
    #[allow(missing_docs)]
    fn xsyev(jobz: u8, uplo: u8, n: i32, a: &mut [Self], lda: i32, w: &mut [Self], work: &mut [Self],
             lwork: i32, info: &mut i32);
    #[allow(missing_docs)]
    fn xsyev_work_size(jobz: u8, uplo: u8, n: i32, a: &mut [Self], lda: i32, info: &mut i32) -> i32;
}

macro_rules! real_eigensystem_scalar_impl (
    ($N: ty, $xsyev: path) => (
        impl SymmetricEigenScalar for $N {
            #[inline]
            fn xsyev(jobz: u8, uplo: u8, n: i32, a: &mut [Self], lda: i32, w: &mut [Self], work: &mut [Self],
                     lwork: i32, info: &mut i32) {
                $xsyev(jobz, uplo, n, a, lda, w, work, lwork, info)
            }


            #[inline]
            fn xsyev_work_size(jobz: u8, uplo: u8, n: i32, a: &mut [Self], lda: i32, info: &mut i32) -> i32 {
                let mut work  = [ Zero::zero() ];
                let mut w     = [ Zero::zero() ];
                let lwork     = -1 as i32;

                $xsyev(jobz, uplo, n, a, lda, &mut w, &mut work, lwork, info);
                ComplexHelper::real_part(work[0]) as i32
            }
        }
    )
);

real_eigensystem_scalar_impl!(f32, interface::ssyev);
real_eigensystem_scalar_impl!(f64, interface::dsyev);
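The `determinant` method above folds the eigenvalues with a product, using the identity det(M) = ∏ λᵢ for a symmetric matrix M. The same fold in plain Rust (`det_from_eigenvalues` is a hypothetical standalone helper, not part of the crate):

```rust
// det(M) equals the product of the eigenvalues of M; this mirrors the
// loop in `SymmetricEigen::determinant` above.
fn det_from_eigenvalues(eigenvalues: &[f64]) -> f64 {
    eigenvalues.iter().product()
}

fn main() {
    // A symmetric matrix with eigenvalues 1, 2 and 3 has determinant 6.
    println!("{}", det_from_eigenvalues(&[1.0, 2.0, 3.0]));
}
```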
@ -0,0 +1,9 @@
#[macro_use]
extern crate quickcheck;
#[macro_use]
extern crate approx;
extern crate nalgebra as na;
extern crate nalgebra_lapack as nl;


mod linalg;
@ -0,0 +1,101 @@
use std::cmp;

use nl::Cholesky;
use na::{DMatrix, DVector, Vector4, Matrix3, Matrix4x3, Matrix4};

quickcheck!{
    fn cholesky(m: DMatrix<f64>) -> bool {
        if m.len() != 0 {
            let m = &m * m.transpose();
            if let Some(chol) = Cholesky::new(m.clone()) {
                let l = chol.unpack();
                let reconstructed_m = &l * l.transpose();

                return relative_eq!(reconstructed_m, m, epsilon = 1.0e-7)
            }
        }
        return true
    }

    fn cholesky_static(m: Matrix3<f64>) -> bool {
        let m = &m * m.transpose();
        if let Some(chol) = Cholesky::new(m) {
            let l = chol.unpack();
            let reconstructed_m = &l * l.transpose();

            relative_eq!(reconstructed_m, m, epsilon = 1.0e-7)
        }
        else {
            false
        }
    }

    fn cholesky_solve(n: usize, nb: usize) -> bool {
        if n != 0 {
            let n  = cmp::min(n, 15);  // To avoid slowing down the test too much.
            let nb = cmp::min(nb, 15); // To avoid slowing down the test too much.
            let m  = DMatrix::<f64>::new_random(n, n);
            let m  = &m * m.transpose();

            if let Some(chol) = Cholesky::new(m.clone()) {
                let b1 = DVector::new_random(n);
                let b2 = DMatrix::new_random(n, nb);

                let sol1 = chol.solve(&b1).unwrap();
                let sol2 = chol.solve(&b2).unwrap();

                return relative_eq!(&m * sol1, b1, epsilon = 1.0e-6) &&
                       relative_eq!(&m * sol2, b2, epsilon = 1.0e-6)
            }
        }

        return true;
    }

    fn cholesky_solve_static(m: Matrix4<f64>) -> bool {
        let m = &m * m.transpose();
        match Cholesky::new(m) {
            Some(chol) => {
                let b1 = Vector4::new_random();
                let b2 = Matrix4x3::new_random();

                let sol1 = chol.solve(&b1).unwrap();
                let sol2 = chol.solve(&b2).unwrap();

                relative_eq!(m * sol1, b1, epsilon = 1.0e-7) &&
                relative_eq!(m * sol2, b2, epsilon = 1.0e-7)
            },
            None => true
        }
    }

    fn cholesky_inverse(n: usize) -> bool {
        if n != 0 {
            let n = cmp::min(n, 15); // To avoid slowing down the test too much.
            let m = DMatrix::<f64>::new_random(n, n);
            let m = &m * m.transpose();

            if let Some(m1) = Cholesky::new(m.clone()).unwrap().inverse() {
                let id1 = &m * &m1;
                let id2 = &m1 * &m;

                return id1.is_identity(1.0e-6) && id2.is_identity(1.0e-6);
            }
        }

        return true;
    }

    fn cholesky_inverse_static(m: Matrix4<f64>) -> bool {
        let m = m * m.transpose();
        match Cholesky::new(m.clone()).unwrap().inverse() {
            Some(m1) => {
                let id1 = &m * &m1;
                let id2 = &m1 * &m;

                id1.is_identity(1.0e-5) && id2.is_identity(1.0e-5)
            },
            None => true
        }
    }
}
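The quickcheck properties above verify that the unpacked factor satisfies L·Lᵀ = M. For intuition, a hand-rolled 2×2 Cholesky in plain Rust (no LAPACK; `cholesky2` is a hypothetical helper and assumes the input is symmetric positive-definite):

```rust
// Minimal 2x2 Cholesky factorization: returns the lower-triangular L
// such that L * L^T reconstructs m. Assumes m is symmetric
// positive-definite, so the square roots are well defined.
fn cholesky2(m: [[f64; 2]; 2]) -> [[f64; 2]; 2] {
    let l00 = m[0][0].sqrt();
    let l10 = m[1][0] / l00;
    let l11 = (m[1][1] - l10 * l10).sqrt();
    [[l00, 0.0], [l10, l11]]
}

fn main() {
    // For [[4, 2], [2, 3]], L is [[2, 0], [1, sqrt(2)]].
    let l = cholesky2([[4.0, 2.0], [2.0, 3.0]]);
    println!("{:?}", l);
}
```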
@ -0,0 +1,38 @@
use std::cmp;

use nl::Hessenberg;
use na::{DMatrix, Matrix4};

quickcheck!{
    fn hessenberg(n: usize) -> bool {
        if n != 0 {
            let n = cmp::min(n, 25);
            let m = DMatrix::<f64>::new_random(n, n);

            match Hessenberg::new(m.clone()) {
                Some(hess) => {
                    let h = hess.h();
                    let p = hess.p();

                    relative_eq!(m, &p * h * p.transpose(), epsilon = 1.0e-7)
                },
                None => true
            }
        }
        else {
            true
        }
    }

    fn hessenberg_static(m: Matrix4<f64>) -> bool {
        match Hessenberg::new(m) {
            Some(hess) => {
                let h = hess.h();
                let p = hess.p();

                relative_eq!(m, p * h * p.transpose(), epsilon = 1.0e-7)
            },
            None => true
        }
    }
}
@ -0,0 +1,107 @@
use std::cmp;

use nl::LU;
use na::{DMatrix, DVector, Matrix4, Matrix4x3, Matrix3x4, Vector4};

quickcheck!{
    fn lup(m: DMatrix<f64>) -> bool {
        if m.len() != 0 {
            let lup = LU::new(m.clone());
            let l = lup.l();
            let u = lup.u();
            let mut computed1 = &l * &u;
            lup.permute(&mut computed1);

            let computed2 = lup.p() * l * u;

            relative_eq!(computed1, m, epsilon = 1.0e-7) &&
            relative_eq!(computed2, m, epsilon = 1.0e-7)
        }
        else {
            true
        }
    }

    fn lu_static(m: Matrix3x4<f64>) -> bool {
        let lup = LU::new(m);
        let l = lup.l();
        let u = lup.u();
        let mut computed1 = l * u;
        lup.permute(&mut computed1);

        let computed2 = lup.p() * l * u;

        relative_eq!(computed1, m, epsilon = 1.0e-7) &&
        relative_eq!(computed2, m, epsilon = 1.0e-7)
    }

    fn lu_solve(n: usize, nb: usize) -> bool {
        if n != 0 {
            let n  = cmp::min(n, 25);  // To avoid slowing down the test too much.
            let nb = cmp::min(nb, 25); // To avoid slowing down the test too much.
            let m  = DMatrix::<f64>::new_random(n, n);

            let lup = LU::new(m.clone());
            let b1 = DVector::new_random(n);
            let b2 = DMatrix::new_random(n, nb);

            let sol1 = lup.solve(&b1).unwrap();
            let sol2 = lup.solve(&b2).unwrap();

            let tr_sol1 = lup.solve_transpose(&b1).unwrap();
            let tr_sol2 = lup.solve_transpose(&b2).unwrap();

            relative_eq!(&m * sol1, b1, epsilon = 1.0e-7) &&
            relative_eq!(&m * sol2, b2, epsilon = 1.0e-7) &&
            relative_eq!(m.transpose() * tr_sol1, b1, epsilon = 1.0e-7) &&
            relative_eq!(m.transpose() * tr_sol2, b2, epsilon = 1.0e-7)
        }
        else {
            true
        }
    }

    fn lu_solve_static(m: Matrix4<f64>) -> bool {
        let lup = LU::new(m);
        let b1 = Vector4::new_random();
        let b2 = Matrix4x3::new_random();

        let sol1 = lup.solve(&b1).unwrap();
        let sol2 = lup.solve(&b2).unwrap();
        let tr_sol1 = lup.solve_transpose(&b1).unwrap();
        let tr_sol2 = lup.solve_transpose(&b2).unwrap();

        relative_eq!(m * sol1, b1, epsilon = 1.0e-7) &&
        relative_eq!(m * sol2, b2, epsilon = 1.0e-7) &&
        relative_eq!(m.transpose() * tr_sol1, b1, epsilon = 1.0e-7) &&
        relative_eq!(m.transpose() * tr_sol2, b2, epsilon = 1.0e-7)
    }

    fn lu_inverse(n: usize) -> bool {
        if n != 0 {
            let n = cmp::min(n, 25); // To avoid slowing down the test too much.
            let m = DMatrix::<f64>::new_random(n, n);

            if let Some(m1) = LU::new(m.clone()).inverse() {
                let id1 = &m * &m1;
                let id2 = &m1 * &m;

                return id1.is_identity(1.0e-7) && id2.is_identity(1.0e-7);
            }
        }

        return true;
    }

    fn lu_inverse_static(m: Matrix4<f64>) -> bool {
        match LU::new(m.clone()).inverse() {
            Some(m1) => {
                let id1 = &m * &m1;
                let id2 = &m1 * &m;

                id1.is_identity(1.0e-5) && id2.is_identity(1.0e-5)
            },
            None => true
        }
    }
}
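The `lup` property above checks that the permuted product of the factors reconstructs M. A tiny 2×2 LU without pivoting in plain Rust (`lu2` is a hypothetical helper; assumes the pivot `m[0][0]` is nonzero) shows the shape of the factors:

```rust
// Minimal 2x2 Doolittle LU without pivoting: returns (L, U) with
// unit-diagonal L and upper-triangular U such that L * U == m.
// Assumes m[0][0] != 0 (no pivoting is performed).
fn lu2(m: [[f64; 2]; 2]) -> ([[f64; 2]; 2], [[f64; 2]; 2]) {
    let l10 = m[1][0] / m[0][0];
    let l = [[1.0, 0.0], [l10, 1.0]];
    let u = [[m[0][0], m[0][1]], [0.0, m[1][1] - l10 * m[0][1]]];
    (l, u)
}

fn main() {
    let (l, u) = lu2([[4.0, 2.0], [6.0, 4.0]]);
    // Multiplying L by U reconstructs the input.
    println!("{:?} {:?}", l, u);
}
```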
@ -0,0 +1,7 @@
mod real_eigensystem;
mod symmetric_eigen;
mod cholesky;
mod lu;
mod qr;
mod svd;
mod real_schur;
@ -0,0 +1,20 @@
use nl::QR;
use na::{DMatrix, Matrix4x3};

quickcheck!{
    fn qr(m: DMatrix<f64>) -> bool {
        let qr = QR::new(m.clone());
        let q  = qr.q();
        let r  = qr.r();

        relative_eq!(m, q * r, epsilon = 1.0e-7)
    }

    fn qr_static(m: Matrix4x3<f64>) -> bool {
        let qr = QR::new(m);
        let q  = qr.q();
        let r  = qr.r();

        relative_eq!(m, q * r, epsilon = 1.0e-7)
    }
}
@ -0,0 +1,48 @@
use std::cmp;

use nl::Eigen;
use na::{DMatrix, Matrix4};

quickcheck!{
    fn eigensystem(n: usize) -> bool {
        if n != 0 {
            let n = cmp::min(n, 25);
            let m = DMatrix::<f64>::new_random(n, n);

            match Eigen::new(m.clone(), true, true) {
                Some(eig) => {
                    let eigvals                = DMatrix::from_diagonal(&eig.eigenvalues);
                    let transformed_eigvectors = &m * eig.eigenvectors.as_ref().unwrap();
                    let scaled_eigvectors      = eig.eigenvectors.as_ref().unwrap() * &eigvals;

                    let transformed_left_eigvectors = m.transpose() * eig.left_eigenvectors.as_ref().unwrap();
                    let scaled_left_eigvectors      = eig.left_eigenvectors.as_ref().unwrap() * &eigvals;

                    relative_eq!(transformed_eigvectors, scaled_eigvectors, epsilon = 1.0e-7) &&
                    relative_eq!(transformed_left_eigvectors, scaled_left_eigvectors, epsilon = 1.0e-7)
                },
                None => true
            }
        }
        else {
            true
        }
    }

    fn eigensystem_static(m: Matrix4<f64>) -> bool {
        match Eigen::new(m, true, true) {
            Some(eig) => {
                let eigvals                = Matrix4::from_diagonal(&eig.eigenvalues);
                let transformed_eigvectors = m * eig.eigenvectors.unwrap();
                let scaled_eigvectors      = eig.eigenvectors.unwrap() * eigvals;

                let transformed_left_eigvectors = m.transpose() * eig.left_eigenvectors.unwrap();
                let scaled_left_eigvectors      = eig.left_eigenvectors.unwrap() * eigvals;

                relative_eq!(transformed_eigvectors, scaled_eigvectors, epsilon = 1.0e-7) &&
                relative_eq!(transformed_left_eigvectors, scaled_left_eigvectors, epsilon = 1.0e-7)
            },
            None => true
        }
    }
}
@ -0,0 +1,21 @@
use std::cmp;
use nl::RealSchur;
use na::{DMatrix, Matrix4};

quickcheck! {
    fn schur(n: usize) -> bool {
        let n = cmp::max(1, cmp::min(n, 10));
        let m = DMatrix::<f64>::new_random(n, n);

        let (vecs, vals) = RealSchur::new(m.clone()).unpack();

        relative_eq!(&vecs * vals * vecs.transpose(), m, epsilon = 1.0e-7)
    }

    fn schur_static(m: Matrix4<f64>) -> bool {
        let (vecs, vals) = RealSchur::new(m.clone()).unpack();

        relative_eq!(vecs * vals * vecs.transpose(), m, epsilon = 1.0e-7)
    }
}
@ -0,0 +1,57 @@
use nl::SVD;
use na::{DMatrix, Matrix3x4};

quickcheck!{
    fn svd(m: DMatrix<f64>) -> bool {
        if m.nrows() != 0 && m.ncols() != 0 {
            let svd = SVD::new(m.clone()).unwrap();
            let sm  = DMatrix::from_partial_diagonal(m.nrows(), m.ncols(), svd.singular_values.as_slice());

            let reconstructed_m  = &svd.u * sm * &svd.vt;
            let reconstructed_m2 = svd.recompose();

            relative_eq!(reconstructed_m, m, epsilon = 1.0e-7) &&
            relative_eq!(reconstructed_m2, reconstructed_m, epsilon = 1.0e-7)
        }
        else {
            true
        }
    }

    fn svd_static(m: Matrix3x4<f64>) -> bool {
        let svd = SVD::new(m).unwrap();
        let sm  = Matrix3x4::from_partial_diagonal(svd.singular_values.as_slice());

        let reconstructed_m  = &svd.u * &sm * &svd.vt;
        let reconstructed_m2 = svd.recompose();

        relative_eq!(reconstructed_m, m, epsilon = 1.0e-7) &&
        relative_eq!(reconstructed_m2, m, epsilon = 1.0e-7)
    }

    fn pseudo_inverse(m: DMatrix<f64>) -> bool {
        if m.nrows() == 0 || m.ncols() == 0 {
            return true;
        }

        let svd = SVD::new(m.clone()).unwrap();
        let im  = svd.pseudo_inverse(1.0e-7);

        if m.nrows() <= m.ncols() {
            return (&m * &im).is_identity(1.0e-7)
        }

        if m.nrows() >= m.ncols() {
            return (im * m).is_identity(1.0e-7)
        }

        return true;
    }

    fn pseudo_inverse_static(m: Matrix3x4<f64>) -> bool {
        let svd = SVD::new(m).unwrap();
        let im  = svd.pseudo_inverse(1.0e-7);

        (m * im).is_identity(1.0e-7)
    }
}
@ -0,0 +1,20 @@
use std::cmp;

use nl::SymmetricEigen;
use na::{DMatrix, Matrix4};

quickcheck!{
    fn symmetric_eigen(n: usize) -> bool {
        let n      = cmp::max(1, cmp::min(n, 10));
        let m      = DMatrix::<f64>::new_random(n, n);
        let eig    = SymmetricEigen::new(m.clone());
        let recomp = eig.recompose();
        relative_eq!(m.lower_triangle(), recomp.lower_triangle(), epsilon = 1.0e-5)
    }

    fn symmetric_eigen_static(m: Matrix4<f64>) -> bool {
        let eig    = SymmetricEigen::new(m);
        let recomp = eig.recompose();
        relative_eq!(m.lower_triangle(), recomp.lower_triangle(), epsilon = 1.0e-5)
    }
}
@ -1,7 +1,7 @@
use core::Matrix;
use core::dimension::{Dynamic, U1, U2, U3, U4, U5, U6};
use core::matrix_array::MatrixArray;
use core::matrix_vec::MatrixVec;
use core::storage::Owned;

/*
 *
@ -10,14 +10,18 @@ use core::matrix_vec::MatrixVec;
 *
 *
 */
/// A dynamically sized column-major matrix.
pub type DMatrix<N> = Matrix<N, Dynamic, Dynamic, MatrixVec<N, Dynamic, Dynamic>>;
/// A statically sized column-major matrix with `R` rows and `C` columns.
#[deprecated(note = "This matrix name contains a typo. Use MatrixMN instead.")]
pub type MatrixNM<N, R, C> = Matrix<N, R, C, Owned<N, R, C>>;

/// A statically sized column-major matrix with `R` rows and `C` columns.
pub type MatrixNM<N, R, C> = Matrix<N, R, C, MatrixArray<N, R, C>>;
pub type MatrixMN<N, R, C> = Matrix<N, R, C, Owned<N, R, C>>;

/// A statically sized column-major square matrix with `D` rows and columns.
pub type MatrixN<N, D> = MatrixNM<N, D, D>;
pub type MatrixN<N, D> = MatrixMN<N, D, D>;

/// A dynamically sized column-major matrix.
pub type DMatrix<N> = MatrixN<N, Dynamic>;

/// A stack-allocated, column-major, 1x1 square matrix.
pub type Matrix1<N> = MatrixN<N, U1>;
@ -33,75 +37,75 @@ pub type Matrix5<N> = MatrixN<N, U5>;
pub type Matrix6<N> = MatrixN<N, U6>;

/// A stack-allocated, column-major, 1x2 matrix.
pub type Matrix1x2<N> = MatrixNM<N, U1, U2>;
pub type Matrix1x2<N> = MatrixMN<N, U1, U2>;
/// A stack-allocated, column-major, 1x3 matrix.
pub type Matrix1x3<N> = MatrixNM<N, U1, U3>;
pub type Matrix1x3<N> = MatrixMN<N, U1, U3>;
/// A stack-allocated, column-major, 1x4 matrix.
pub type Matrix1x4<N> = MatrixNM<N, U1, U4>;
pub type Matrix1x4<N> = MatrixMN<N, U1, U4>;
/// A stack-allocated, column-major, 1x5 matrix.
pub type Matrix1x5<N> = MatrixNM<N, U1, U5>;
pub type Matrix1x5<N> = MatrixMN<N, U1, U5>;
/// A stack-allocated, column-major, 1x6 matrix.
pub type Matrix1x6<N> = MatrixNM<N, U1, U6>;
pub type Matrix1x6<N> = MatrixMN<N, U1, U6>;

/// A stack-allocated, column-major, 2x3 matrix.
pub type Matrix2x3<N> = MatrixNM<N, U2, U3>;
pub type Matrix2x3<N> = MatrixMN<N, U2, U3>;
/// A stack-allocated, column-major, 2x4 matrix.
pub type Matrix2x4<N> = MatrixNM<N, U2, U4>;
pub type Matrix2x4<N> = MatrixMN<N, U2, U4>;
/// A stack-allocated, column-major, 2x5 matrix.
pub type Matrix2x5<N> = MatrixNM<N, U2, U5>;
pub type Matrix2x5<N> = MatrixMN<N, U2, U5>;
/// A stack-allocated, column-major, 2x6 matrix.
pub type Matrix2x6<N> = MatrixNM<N, U2, U6>;
pub type Matrix2x6<N> = MatrixMN<N, U2, U6>;

/// A stack-allocated, column-major, 3x4 matrix.
pub type Matrix3x4<N> = MatrixNM<N, U3, U4>;
pub type Matrix3x4<N> = MatrixMN<N, U3, U4>;
/// A stack-allocated, column-major, 3x5 matrix.
pub type Matrix3x5<N> = MatrixNM<N, U3, U5>;
pub type Matrix3x5<N> = MatrixMN<N, U3, U5>;
/// A stack-allocated, column-major, 3x6 matrix.
pub type Matrix3x6<N> = MatrixNM<N, U3, U6>;
pub type Matrix3x6<N> = MatrixMN<N, U3, U6>;

/// A stack-allocated, column-major, 4x5 matrix.
pub type Matrix4x5<N> = MatrixNM<N, U4, U5>;
pub type Matrix4x5<N> = MatrixMN<N, U4, U5>;
/// A stack-allocated, column-major, 4x6 matrix.
pub type Matrix4x6<N> = MatrixNM<N, U4, U6>;
pub type Matrix4x6<N> = MatrixMN<N, U4, U6>;

/// A stack-allocated, column-major, 5x6 matrix.
pub type Matrix5x6<N> = MatrixNM<N, U5, U6>;
pub type Matrix5x6<N> = MatrixMN<N, U5, U6>;


/// A stack-allocated, column-major, 2x1 matrix.
pub type Matrix2x1<N> = MatrixNM<N, U2, U1>;
pub type Matrix2x1<N> = MatrixMN<N, U2, U1>;
/// A stack-allocated, column-major, 3x1 matrix.
pub type Matrix3x1<N> = MatrixNM<N, U3, U1>;
pub type Matrix3x1<N> = MatrixMN<N, U3, U1>;
/// A stack-allocated, column-major, 4x1 matrix.
pub type Matrix4x1<N> = MatrixNM<N, U4, U1>;
pub type Matrix4x1<N> = MatrixMN<N, U4, U1>;
/// A stack-allocated, column-major, 5x1 matrix.
pub type Matrix5x1<N> = MatrixNM<N, U5, U1>;
pub type Matrix5x1<N> = MatrixMN<N, U5, U1>;
/// A stack-allocated, column-major, 6x1 matrix.
pub type Matrix6x1<N> = MatrixNM<N, U6, U1>;
pub type Matrix6x1<N> = MatrixMN<N, U6, U1>;

/// A stack-allocated, column-major, 3x2 matrix.
pub type Matrix3x2<N> = MatrixNM<N, U3, U2>;
pub type Matrix3x2<N> = MatrixMN<N, U3, U2>;
/// A stack-allocated, column-major, 4x2 matrix.
pub type Matrix4x2<N> = MatrixNM<N, U4, U2>;
pub type Matrix4x2<N> = MatrixMN<N, U4, U2>;
/// A stack-allocated, column-major, 5x2 matrix.
pub type Matrix5x2<N> = MatrixNM<N, U5, U2>;
pub type Matrix5x2<N> = MatrixMN<N, U5, U2>;
/// A stack-allocated, column-major, 6x2 matrix.
pub type Matrix6x2<N> = MatrixNM<N, U6, U2>;
pub type Matrix6x2<N> = MatrixMN<N, U6, U2>;

/// A stack-allocated, column-major, 4x3 matrix.
pub type Matrix4x3<N> = MatrixNM<N, U4, U3>;
pub type Matrix4x3<N> = MatrixMN<N, U4, U3>;
/// A stack-allocated, column-major, 5x3 matrix.
pub type Matrix5x3<N> = MatrixNM<N, U5, U3>;
pub type Matrix5x3<N> = MatrixMN<N, U5, U3>;
/// A stack-allocated, column-major, 6x3 matrix.
pub type Matrix6x3<N> = MatrixNM<N, U6, U3>;
pub type Matrix6x3<N> = MatrixMN<N, U6, U3>;

/// A stack-allocated, column-major, 5x4 matrix.
pub type Matrix5x4<N> = MatrixNM<N, U5, U4>;
pub type Matrix5x4<N> = MatrixMN<N, U5, U4>;
/// A stack-allocated, column-major, 6x4 matrix.
pub type Matrix6x4<N> = MatrixNM<N, U6, U4>;
pub type Matrix6x4<N> = MatrixMN<N, U6, U4>;

/// A stack-allocated, column-major, 6x5 matrix.
pub type Matrix6x5<N> = MatrixNM<N, U6, U5>;
pub type Matrix6x5<N> = MatrixMN<N, U6, U5>;


/*
@ -115,7 +119,7 @@ pub type Matrix6x5<N> = MatrixNM<N, U6, U5>;
pub type DVector<N> = Matrix<N, Dynamic, U1, MatrixVec<N, Dynamic, U1>>;

/// A statically sized D-dimensional column vector.
pub type VectorN<N, D> = MatrixNM<N, D, U1>;
pub type VectorN<N, D> = MatrixMN<N, D, U1>;

/// A stack-allocated, 1-dimensional column vector.
pub type Vector1<N> = VectorN<N, U1>;
@ -142,7 +146,7 @@ pub type Vector6<N> = VectorN<N, U6>;
pub type RowDVector<N> = Matrix<N, U1, Dynamic, MatrixVec<N, U1, Dynamic>>;

/// A statically sized D-dimensional row vector.
pub type RowVectorN<N, D> = MatrixNM<N, U1, D>;
pub type RowVectorN<N, D> = MatrixMN<N, U1, D>;

/// A stack-allocated, 1-dimensional row vector.
pub type RowVector1<N> = RowVectorN<N, U1>;
@ -2,10 +2,10 @@

use std::any::Any;

use core::Scalar;
use core::{DefaultAllocator, Scalar};
use core::constraint::{SameNumberOfRows, SameNumberOfColumns, ShapeConstraint};
use core::dimension::{Dim, U1};
use core::storage::{Storage, OwnedStorage};
use core::storage::ContiguousStorageMut;

/// A matrix allocator of a memory buffer that may contain `R::to_usize() * C::to_usize()`
/// elements of type `N`.
|
///
/// Every allocator must be both static and dynamic, though not all implementations may share
/// the same `Buffer` type.
pub trait Allocator<N: Scalar, R: Dim, C: Dim>: Any + Sized {
pub trait Allocator<N: Scalar, R: Dim, C: Dim = U1>: Any + Sized {
    /// The type of buffer this allocator can instantiate.
    type Buffer: OwnedStorage<N, R, C, Alloc = Self>;
    type Buffer: ContiguousStorageMut<N, R, C> + Clone;

    /// Allocates a buffer with the given number of rows and columns without initializing its content.
    unsafe fn allocate_uninitialized(nrows: R, ncols: C) -> Self::Buffer;
@ -27,15 +27,20 @@ pub trait Allocator<N: Scalar, R: Dim, C: Dim>: Any + Sized {
|
|||
fn allocate_from_iterator<I: IntoIterator<Item = N>>(nrows: R, ncols: C, iter: I) -> Self::Buffer;
|
||||
}
|
||||
|
||||
-/// A matrix data allocator dedicated to the given owned matrix storage.
-pub trait OwnedAllocator<N: Scalar, R: Dim, C: Dim, S: OwnedStorage<N, R, C, Alloc = Self>>:
-    Allocator<N, R, C, Buffer = S> {
-}
-
-impl<N, R, C, T, S> OwnedAllocator<N, R, C, S> for T
-    where N: Scalar, R: Dim, C: Dim,
-          T: Allocator<N, R, C, Buffer = S>,
-          S: OwnedStorage<N, R, C, Alloc = T> {
+/// A matrix reallocator. Changes the size of the memory buffer that initially contains (RFrom ×
+/// CFrom) elements to a smaller or larger size (RTo, CTo).
+pub trait Reallocator<N: Scalar, RFrom: Dim, CFrom: Dim, RTo: Dim, CTo: Dim>:
+    Allocator<N, RFrom, CFrom> + Allocator<N, RTo, CTo> {
+    /// Reallocates a buffer of shape `(RTo, CTo)`, possibly reusing a previously allocated buffer
+    /// `buf`. Data stored by `buf` are linearly copied to the output:
+    ///
+    /// * The copy is performed as if both were just arrays (without a matrix structure).
+    /// * If `buf` is larger than the output size, then extra elements of `buf` are truncated.
+    /// * If `buf` is smaller than the output size, then extra elements of the output are left
+    ///   uninitialized.
+    unsafe fn reallocate_copy(nrows: RTo, ncols: CTo,
+                              buf: <Self as Allocator<N, RFrom, CFrom>>::Buffer)
+                              -> <Self as Allocator<N, RTo, CTo>>::Buffer;
}
/// The number of rows of the result of a componentwise operation on two matrices.

@@ -45,45 +50,36 @@ pub type SameShapeR<R1, R2> = <ShapeConstraint as SameNumberOfRows<R1, R2>>::Representative;
pub type SameShapeC<C1, C2> = <ShapeConstraint as SameNumberOfColumns<C1, C2>>::Representative;

-// FIXME: Bad name.
-/// Restricts the given number of rows and columns to be respectively the same. Can only be used
-/// when `Self = SA::Alloc`.
-pub trait SameShapeAllocator<N, R1, C1, R2, C2, SA>:
+/// Restricts the given number of rows and columns to be respectively the same.
+pub trait SameShapeAllocator<N, R1, C1, R2, C2>:
    Allocator<N, R1, C1> +
    Allocator<N, SameShapeR<R1, R2>, SameShapeC<C1, C2>>
    where R1: Dim, R2: Dim, C1: Dim, C2: Dim,
          N: Scalar,
-          SA: Storage<N, R1, C1, Alloc = Self>,
          ShapeConstraint: SameNumberOfRows<R1, R2> + SameNumberOfColumns<C1, C2> {
}
-impl<N, R1, R2, C1, C2, SA> SameShapeAllocator<N, R1, C1, R2, C2, SA> for SA::Alloc
+impl<N, R1, R2, C1, C2> SameShapeAllocator<N, R1, C1, R2, C2> for DefaultAllocator
    where R1: Dim, R2: Dim, C1: Dim, C2: Dim,
          N: Scalar,
-          SA: Storage<N, R1, C1>,
-          SA::Alloc:
-            Allocator<N, R1, C1> +
-            Allocator<N, SameShapeR<R1, R2>, SameShapeC<C1, C2>>,
+          DefaultAllocator: Allocator<N, R1, C1> + Allocator<N, SameShapeR<R1, R2>, SameShapeC<C1, C2>>,
          ShapeConstraint: SameNumberOfRows<R1, R2> + SameNumberOfColumns<C1, C2> {
}

-// XXX: Bad name.
-/// Restricts the given number of rows to be equal. Can only be used when `Self = SA::Alloc`.
-pub trait SameShapeColumnVectorAllocator<N, R1, R2, SA>:
-    Allocator<N, R1, U1> +
-    Allocator<N, SameShapeR<R1, R2>, U1> +
-    SameShapeAllocator<N, R1, U1, R2, U1, SA>
+/// Restricts the given number of rows to be equal.
+pub trait SameShapeVectorAllocator<N, R1, R2>:
+    Allocator<N, R1> +
+    Allocator<N, SameShapeR<R1, R2>> +
+    SameShapeAllocator<N, R1, U1, R2, U1>
    where R1: Dim, R2: Dim,
          N: Scalar,
-          SA: Storage<N, R1, U1, Alloc = Self>,
          ShapeConstraint: SameNumberOfRows<R1, R2> {
}

-impl<N, R1, R2, SA> SameShapeColumnVectorAllocator<N, R1, R2, SA> for SA::Alloc
+impl<N, R1, R2> SameShapeVectorAllocator<N, R1, R2> for DefaultAllocator
    where R1: Dim, R2: Dim,
          N: Scalar,
-          SA: Storage<N, R1, U1>,
-          SA::Alloc:
-            Allocator<N, R1, U1> +
-            Allocator<N, SameShapeR<R1, R2>, U1>,
+          DefaultAllocator: Allocator<N, R1, U1> + Allocator<N, SameShapeR<R1, R2>>,
          ShapeConstraint: SameNumberOfRows<R1, R2> {
}
@@ -0,0 +1,458 @@
use std::mem;
use num::{Zero, One, Signed};
use matrixmultiply;
use alga::general::{ClosedMul, ClosedAdd};

use core::{Scalar, Matrix, Vector};
use core::dimension::{Dim, U1, U2, U3, U4, Dynamic};
use core::constraint::{ShapeConstraint, SameNumberOfRows, SameNumberOfColumns, AreMultipliable, DimEq};
use core::storage::{Storage, StorageMut};


impl<N: Scalar + PartialOrd + Signed, D: Dim, S: Storage<N, D>> Vector<N, D, S> {
    /// Computes the index of the vector component with the largest absolute value.
    #[inline]
    pub fn iamax(&self) -> usize {
        assert!(!self.is_empty(), "The input vector must not be empty.");

        let mut the_max = unsafe { self.vget_unchecked(0).abs() };
        let mut the_i = 0;

        for i in 1 .. self.nrows() {
            let val = unsafe { self.vget_unchecked(i).abs() };

            if val > the_max {
                the_max = val;
                the_i = i;
            }
        }

        the_i
    }
}
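The `iamax` scan above can be sketched on a plain slice. This is an illustrative stand-alone helper, not nalgebra's generic implementation; the unchecked accesses are replaced by safe indexing:

```rust
// Index of the element with the largest absolute value (BLAS "iamax").
fn iamax(v: &[f64]) -> usize {
    assert!(!v.is_empty(), "The input vector must not be empty.");
    let mut the_max = v[0].abs();
    let mut the_i = 0;
    for (i, x) in v.iter().enumerate().skip(1) {
        let val = x.abs();
        if val > the_max {
            the_max = val;
            the_i = i;
        }
    }
    the_i
}
```

Ties keep the earliest index, matching the strict `>` comparison in the method above.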
impl<N: Scalar + PartialOrd + Signed, R: Dim, C: Dim, S: Storage<N, R, C>> Matrix<N, R, C, S> {
    /// Computes the index of the matrix component with the largest absolute value.
    #[inline]
    pub fn iamax_full(&self) -> (usize, usize) {
        assert!(!self.is_empty(), "The input matrix must not be empty.");

        let mut the_max = unsafe { self.get_unchecked(0, 0).abs() };
        let mut the_ij = (0, 0);

        for j in 0 .. self.ncols() {
            for i in 0 .. self.nrows() {
                let val = unsafe { self.get_unchecked(i, j).abs() };

                if val > the_max {
                    the_max = val;
                    the_ij = (i, j);
                }
            }
        }

        the_ij
    }
}
impl<N, R: Dim, C: Dim, S: Storage<N, R, C>> Matrix<N, R, C, S>
    where N: Scalar + Zero + ClosedAdd + ClosedMul {
    /// The dot product between two matrices (seen as vectors).
    ///
    /// Note that this is **not** the matrix multiplication as in, e.g., numpy. For matrix
    /// multiplication, use one of: `.gemm`, `mul_to`, `.mul`, `*`.
    #[inline]
    pub fn dot<R2: Dim, C2: Dim, SB>(&self, rhs: &Matrix<N, R2, C2, SB>) -> N
        where SB: Storage<N, R2, C2>,
              ShapeConstraint: DimEq<R, R2> + DimEq<C, C2> {
        assert!(self.nrows() == rhs.nrows(), "Dot product dimensions mismatch.");

        // So we do some special cases for common fixed-size vectors of dimension lower than 8
        // because the `for` loop below won't be very efficient on those.
        if (R::is::<U2>() || R2::is::<U2>()) &&
           (C::is::<U1>() || C2::is::<U1>()) {
            unsafe {
                let a = *self.get_unchecked(0, 0) * *rhs.get_unchecked(0, 0);
                let b = *self.get_unchecked(1, 0) * *rhs.get_unchecked(1, 0);

                return a + b;
            }
        }
        if (R::is::<U3>() || R2::is::<U3>()) &&
           (C::is::<U1>() || C2::is::<U1>()) {
            unsafe {
                let a = *self.get_unchecked(0, 0) * *rhs.get_unchecked(0, 0);
                let b = *self.get_unchecked(1, 0) * *rhs.get_unchecked(1, 0);
                let c = *self.get_unchecked(2, 0) * *rhs.get_unchecked(2, 0);

                return a + b + c;
            }
        }
        if (R::is::<U4>() || R2::is::<U4>()) &&
           (C::is::<U1>() || C2::is::<U1>()) {
            unsafe {
                let mut a = *self.get_unchecked(0, 0) * *rhs.get_unchecked(0, 0);
                let mut b = *self.get_unchecked(1, 0) * *rhs.get_unchecked(1, 0);
                let c = *self.get_unchecked(2, 0) * *rhs.get_unchecked(2, 0);
                let d = *self.get_unchecked(3, 0) * *rhs.get_unchecked(3, 0);

                a += c;
                b += d;

                return a + b;
            }
        }

        // All this is inspired from the "unrolled version" discussed in:
        // http://blog.theincredibleholk.org/blog/2012/12/10/optimizing-dot-product/
        //
        // And this comment from bluss:
        // https://users.rust-lang.org/t/how-to-zip-two-slices-efficiently/2048/12
        let mut res = N::zero();

        // We have to define them outside of the loop (and not inside at first assignment)
        // otherwise vectorization won't kick in for some reason.
        let mut acc0;
        let mut acc1;
        let mut acc2;
        let mut acc3;
        let mut acc4;
        let mut acc5;
        let mut acc6;
        let mut acc7;

        for j in 0 .. self.ncols() {
            let mut i = 0;

            acc0 = N::zero();
            acc1 = N::zero();
            acc2 = N::zero();
            acc3 = N::zero();
            acc4 = N::zero();
            acc5 = N::zero();
            acc6 = N::zero();
            acc7 = N::zero();

            while self.nrows() - i >= 8 {
                acc0 += unsafe { *self.get_unchecked(i + 0, j) * *rhs.get_unchecked(i + 0, j) };
                acc1 += unsafe { *self.get_unchecked(i + 1, j) * *rhs.get_unchecked(i + 1, j) };
                acc2 += unsafe { *self.get_unchecked(i + 2, j) * *rhs.get_unchecked(i + 2, j) };
                acc3 += unsafe { *self.get_unchecked(i + 3, j) * *rhs.get_unchecked(i + 3, j) };
                acc4 += unsafe { *self.get_unchecked(i + 4, j) * *rhs.get_unchecked(i + 4, j) };
                acc5 += unsafe { *self.get_unchecked(i + 5, j) * *rhs.get_unchecked(i + 5, j) };
                acc6 += unsafe { *self.get_unchecked(i + 6, j) * *rhs.get_unchecked(i + 6, j) };
                acc7 += unsafe { *self.get_unchecked(i + 7, j) * *rhs.get_unchecked(i + 7, j) };
                i += 8;
            }

            res += acc0 + acc4;
            res += acc1 + acc5;
            res += acc2 + acc6;
            res += acc3 + acc7;

            for k in i .. self.nrows() {
                res += unsafe { *self.get_unchecked(k, j) * *rhs.get_unchecked(k, j) }
            }
        }

        res
    }
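The unrolled loop above can be sketched on plain slices. This sketch uses an accumulator array where the code above deliberately keeps eight separate locals, but the split-accumulator idea (independent partial sums that the compiler can vectorize, plus a scalar tail loop) is the same; it is illustrative, not the crate's API:

```rust
// Dot product with the 8-way unrolled main loop and a scalar tail.
fn dot_unrolled(a: &[f64], b: &[f64]) -> f64 {
    assert_eq!(a.len(), b.len(), "Dot product dimensions mismatch.");
    let mut acc = [0.0f64; 8];
    let mut i = 0;
    // Main loop: 8 independent accumulators, no cross-iteration dependency.
    while a.len() - i >= 8 {
        for k in 0..8 {
            acc[k] += a[i + k] * b[i + k];
        }
        i += 8;
    }
    // Reduce the accumulators, then handle the remaining < 8 elements.
    let mut res: f64 = acc.iter().sum();
    for k in i..a.len() {
        res += a[k] * b[k];
    }
    res
}
```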
    /// The dot product between the transpose of `self` and `rhs`.
    #[inline]
    pub fn tr_dot<R2: Dim, C2: Dim, SB>(&self, rhs: &Matrix<N, R2, C2, SB>) -> N
        where SB: Storage<N, R2, C2>,
              ShapeConstraint: DimEq<C, R2> + DimEq<R, C2> {
        let (nrows, ncols) = self.shape();
        assert!((ncols, nrows) == rhs.shape(), "Transposed dot product dimension mismatch.");

        let mut res = N::zero();

        for j in 0 .. self.nrows() {
            for i in 0 .. self.ncols() {
                res += unsafe { *self.get_unchecked(j, i) * *rhs.get_unchecked(i, j) }
            }
        }

        res
    }
}
fn array_axpy<N>(y: &mut [N], a: N, x: &[N], beta: N, stride1: usize, stride2: usize, len: usize)
    where N: Scalar + Zero + ClosedAdd + ClosedMul {
    for i in 0 .. len {
        unsafe {
            let y = y.get_unchecked_mut(i * stride1);
            *y = a * *x.get_unchecked(i * stride2) + beta * *y;
        }
    }
}

fn array_ax<N>(y: &mut [N], a: N, x: &[N], stride1: usize, stride2: usize, len: usize)
    where N: Scalar + Zero + ClosedAdd + ClosedMul {
    for i in 0 .. len {
        unsafe {
            *y.get_unchecked_mut(i * stride1) = a * *x.get_unchecked(i * stride2);
        }
    }
}
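The strided kernel can be sketched with safe indexing; the strides let the same loop walk rows of a column-major matrix or elements of a contiguous vector (illustrative helper, not the crate's internal function):

```rust
// y[i * stride1] = a * x[i * stride2] + beta * y[i * stride1], for i in 0..len.
fn array_axpy(y: &mut [f64], a: f64, x: &[f64], beta: f64,
              stride1: usize, stride2: usize, len: usize) {
    for i in 0..len {
        y[i * stride1] = a * x[i * stride2] + beta * y[i * stride1];
    }
}
```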
impl<N, D: Dim, S> Vector<N, D, S>
    where N: Scalar + Zero + ClosedAdd + ClosedMul,
          S: StorageMut<N, D> {
    /// Computes `self = a * x + b * self`.
    ///
    /// If `b` is zero, `self` is never read from.
    #[inline]
    pub fn axpy<D2: Dim, SB>(&mut self, a: N, x: &Vector<N, D2, SB>, b: N)
        where SB: Storage<N, D2>,
              ShapeConstraint: DimEq<D, D2> {

        assert_eq!(self.nrows(), x.nrows(), "Axpy: mismatched vector shapes.");

        let rstride1 = self.strides().0;
        let rstride2 = x.strides().0;

        let y = self.data.as_mut_slice();
        let x = x.data.as_slice();

        if !b.is_zero() {
            array_axpy(y, a, x, b, rstride1, rstride2, x.len());
        }
        else {
            array_ax(y, a, x, rstride1, rstride2, x.len());
        }
    }
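The `b == 0` branch matters: dispatching to a write-only kernel means `self` may hold garbage (or NaN) without polluting the result. A contiguous sketch of that dispatch (illustrative, not the crate's method):

```rust
// self = a * x + b * self; when b is zero, y is only written, never read.
fn axpy(y: &mut [f64], a: f64, x: &[f64], b: f64) {
    assert_eq!(y.len(), x.len(), "Axpy: mismatched vector shapes.");
    if b != 0.0 {
        for i in 0..y.len() { y[i] = a * x[i] + b * y[i]; }
    } else {
        for i in 0..y.len() { y[i] = a * x[i]; }
    }
}
```

Note that `0.0 * NaN` is NaN, so the branch is a correctness issue, not just an optimization.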
    /// Computes `self = alpha * a * x + beta * self`, where `a` is a matrix, `x` a vector, and
    /// `alpha, beta` two scalars.
    ///
    /// If `beta` is zero, `self` is never read.
    #[inline]
    pub fn gemv<R2: Dim, C2: Dim, D3: Dim, SB, SC>(&mut self,
                                                   alpha: N,
                                                   a: &Matrix<N, R2, C2, SB>,
                                                   x: &Vector<N, D3, SC>,
                                                   beta: N)
        where N: One,
              SB: Storage<N, R2, C2>,
              SC: Storage<N, D3>,
              ShapeConstraint: DimEq<D, R2> +
                               AreMultipliable<R2, C2, D3, U1> {
        let dim1 = self.nrows();
        let (nrows2, ncols2) = a.shape();
        let dim3 = x.nrows();

        assert!(ncols2 == dim3 && dim1 == nrows2, "Gemv: dimensions mismatch.");

        if ncols2 == 0 {
            return;
        }

        // FIXME: avoid bound checks.
        let col2 = a.column(0);
        let val = unsafe { *x.vget_unchecked(0) };
        self.axpy(alpha * val, &col2, beta);

        for j in 1 .. ncols2 {
            let col2 = a.column(j);
            let val = unsafe { *x.vget_unchecked(j) };

            self.axpy(alpha * val, &col2, N::one());
        }
    }
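The structure above is the classic column-major gemv: the product is a linear combination of the columns of `a`, built as one axpy per column. A slice-based sketch (the column-of-`Vec` representation and the helper name are illustrative; unlike the method above, it pre-scales `y` by `beta` instead of folding `beta` into the first axpy):

```rust
// y = alpha * A * x + beta * y, with A given as a slice of columns.
fn gemv(y: &mut [f64], alpha: f64, a_cols: &[Vec<f64>], x: &[f64], beta: f64) {
    assert_eq!(a_cols.len(), x.len(), "Gemv: dimensions mismatch.");
    for yi in y.iter_mut() { *yi *= beta; }
    // Accumulate alpha * x[j] * (column j of A) into y.
    for (col, &xj) in a_cols.iter().zip(x) {
        for (yi, &aij) in y.iter_mut().zip(col) {
            *yi += alpha * aij * xj;
        }
    }
}
```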
    /// Computes `self = alpha * a * x + beta * self`, where `a` is a **symmetric** matrix, `x` a
    /// vector, and `alpha, beta` two scalars.
    ///
    /// If `beta` is zero, `self` is never read. Only the lower-triangular part of `a` (including
    /// the diagonal) is actually read.
    #[inline]
    pub fn gemv_symm<D2: Dim, D3: Dim, SB, SC>(&mut self,
                                               alpha: N,
                                               a: &Matrix<N, D2, D2, SB>,
                                               x: &Vector<N, D3, SC>,
                                               beta: N)
        where N: One,
              SB: Storage<N, D2, D2>,
              SC: Storage<N, D3>,
              ShapeConstraint: DimEq<D, D2> +
                               AreMultipliable<D2, D2, D3, U1> {
        let dim1 = self.nrows();
        let dim2 = a.nrows();
        let dim3 = x.nrows();

        assert!(a.is_square(), "Symmetric gemv: the input matrix must be square.");
        assert!(dim2 == dim3 && dim1 == dim2, "Symmetric gemv: dimensions mismatch.");

        if dim2 == 0 {
            return;
        }

        // FIXME: avoid bound checks.
        let col2 = a.column(0);
        let val = unsafe { *x.vget_unchecked(0) };
        self.axpy(alpha * val, &col2, beta);
        self[0] += alpha * x.rows_range(1 ..).dot(&a.slice_range(1 .., 0));

        for j in 1 .. dim2 {
            let col2 = a.column(j);
            let dot = x.rows_range(j ..).dot(&col2.rows_range(j ..));

            let val;
            unsafe {
                val = *x.vget_unchecked(j);
                *self.vget_unchecked_mut(j) += alpha * dot;
            }
            self.rows_range_mut(j + 1 ..).axpy(alpha * val, &col2.rows_range(j + 1 ..), N::one());
        }
    }
}
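The key idea in the symmetric variant is that each stored entry below the diagonal serves twice, as `A[i][j]` and as `A[j][i]`. A sketch with an explicit lower-triangle representation (column `j` holds valid entries only for rows `i >= j`; the representation and helper name are illustrative, not the crate's):

```rust
// y = A * x where A is symmetric and only its lower triangle is stored.
fn gemv_lower_symm(y: &mut [f64], a_lower: &[Vec<f64>], x: &[f64]) {
    let n = x.len();
    for yi in y.iter_mut() { *yi = 0.0; }
    for j in 0..n {
        // Diagonal term contributes once.
        y[j] += a_lower[j][j] * x[j];
        // Each strictly-lower entry contributes to both y[i] and y[j].
        for i in j + 1..n {
            let aij = a_lower[j][i];
            y[i] += aij * x[j];
            y[j] += aij * x[i];
        }
    }
}
```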
impl<N, R1: Dim, C1: Dim, S: StorageMut<N, R1, C1>> Matrix<N, R1, C1, S>
    where N: Scalar + Zero + ClosedAdd + ClosedMul {

    /// Computes `self = alpha * x * y.transpose() + beta * self`.
    ///
    /// If `beta` is zero, `self` is never read.
    #[inline]
    pub fn ger<D2: Dim, D3: Dim, SB, SC>(&mut self, alpha: N, x: &Vector<N, D2, SB>, y: &Vector<N, D3, SC>, beta: N)
        where N: One,
              SB: Storage<N, D2>,
              SC: Storage<N, D3>,
              ShapeConstraint: DimEq<R1, D2> + DimEq<C1, D3> {
        let (nrows1, ncols1) = self.shape();
        let dim2 = x.nrows();
        let dim3 = y.nrows();

        assert!(nrows1 == dim2 && ncols1 == dim3, "ger: dimensions mismatch.");

        for j in 0 .. ncols1 {
            // FIXME: avoid bound checks.
            let val = unsafe { *y.vget_unchecked(j) };
            self.column_mut(j).axpy(alpha * val, x, beta);
        }
    }
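The rank-1 update is again one axpy per column: column `j` of the result is `alpha * y[j] * x + beta * (old column j)`. A column-major sketch (representation and helper name illustrative):

```rust
// m = alpha * x * yᵀ + beta * m, with m stored as a slice of columns.
fn ger(m: &mut [Vec<f64>], alpha: f64, x: &[f64], y: &[f64], beta: f64) {
    assert_eq!(m.len(), y.len(), "ger: dimensions mismatch.");
    for (col, &yj) in m.iter_mut().zip(y) {
        assert_eq!(col.len(), x.len(), "ger: dimensions mismatch.");
        for (cij, &xi) in col.iter_mut().zip(x) {
            *cij = alpha * xi * yj + beta * *cij;
        }
    }
}
```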
    /// Computes `self = alpha * a * b + beta * self`, where `a, b, self` are matrices.
    /// `alpha` and `beta` are scalars.
    ///
    /// If `beta` is zero, `self` is never read.
    #[inline]
    pub fn gemm<R2: Dim, C2: Dim, R3: Dim, C3: Dim, SB, SC>(&mut self,
                                                            alpha: N,
                                                            a: &Matrix<N, R2, C2, SB>,
                                                            b: &Matrix<N, R3, C3, SC>,
                                                            beta: N)
        where N: One,
              SB: Storage<N, R2, C2>,
              SC: Storage<N, R3, C3>,
              ShapeConstraint: SameNumberOfRows<R1, R2> +
                               SameNumberOfColumns<C1, C3> +
                               AreMultipliable<R2, C2, R3, C3> {
        let (nrows1, ncols1) = self.shape();
        let (nrows2, ncols2) = a.shape();
        let (nrows3, ncols3) = b.shape();

        assert_eq!(ncols2, nrows3, "gemm: dimensions mismatch for multiplication.");
        assert_eq!((nrows1, ncols1), (nrows2, ncols3), "gemm: dimensions mismatch for addition.");

        // We assume large matrices will be Dynamic but small matrices static.
        // We could use matrixmultiply for large statically-sized matrices but the performance
        // threshold to activate it would be different from SMALL_DIM because our code optimizes
        // better for statically-sized matrices.
        let is_dynamic = R1::is::<Dynamic>() || C1::is::<Dynamic>() ||
                         R2::is::<Dynamic>() || C2::is::<Dynamic>() ||
                         R3::is::<Dynamic>() || C3::is::<Dynamic>();
        // Threshold determined empirically.
        const SMALL_DIM: usize = 5;

        if is_dynamic &&
           nrows1 > SMALL_DIM && ncols1 > SMALL_DIM &&
           nrows2 > SMALL_DIM && ncols2 > SMALL_DIM {
            if N::is::<f32>() {
                let (rsa, csa) = a.strides();
                let (rsb, csb) = b.strides();
                let (rsc, csc) = self.strides();

                unsafe {
                    matrixmultiply::sgemm(
                        nrows2,
                        ncols2,
                        ncols3,
                        mem::transmute_copy(&alpha),
                        a.data.ptr() as *const f32,
                        rsa as isize, csa as isize,
                        b.data.ptr() as *const f32,
                        rsb as isize, csb as isize,
                        mem::transmute_copy(&beta),
                        self.data.ptr_mut() as *mut f32,
                        rsc as isize, csc as isize);
                }
            }
            else if N::is::<f64>() {
                let (rsa, csa) = a.strides();
                let (rsb, csb) = b.strides();
                let (rsc, csc) = self.strides();

                unsafe {
                    matrixmultiply::dgemm(
                        nrows2,
                        ncols2,
                        ncols3,
                        mem::transmute_copy(&alpha),
                        a.data.ptr() as *const f64,
                        rsa as isize, csa as isize,
                        b.data.ptr() as *const f64,
                        rsb as isize, csb as isize,
                        mem::transmute_copy(&beta),
                        self.data.ptr_mut() as *mut f64,
                        rsc as isize, csc as isize);
                }
            }
        }
        else {
            for j1 in 0 .. ncols1 {
                // FIXME: avoid bound checks.
                self.column_mut(j1).gemv(alpha, a, &b.column(j1), beta);
            }
        }
    }
}
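The fallback branch computes the product one output column at a time: column `j` of `C` is a gemv of `A` with column `j` of `B`. A column-major sketch of that path (representation and helper name illustrative; the `matrixmultiply` fast path is not reproduced here):

```rust
// c = alpha * a * b + beta * c, all matrices stored as slices of columns.
fn gemm(c: &mut [Vec<f64>], alpha: f64, a: &[Vec<f64>], b: &[Vec<f64>], beta: f64) {
    for (c_col, b_col) in c.iter_mut().zip(b) {
        for cij in c_col.iter_mut() { *cij *= beta; }
        // Column j of C accumulates alpha * B[k][j] * (column k of A).
        for (a_col, &bkj) in a.iter().zip(b_col) {
            for (cij, &aik) in c_col.iter_mut().zip(a_col) {
                *cij += alpha * aik * bkj;
            }
        }
    }
}
```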
impl<N, R1: Dim, C1: Dim, S: StorageMut<N, R1, C1>> Matrix<N, R1, C1, S>
    where N: Scalar + Zero + ClosedAdd + ClosedMul {
    /// Computes `self = alpha * x * y.transpose() + beta * self`, where `self` is a **symmetric**
    /// matrix.
    ///
    /// If `beta` is zero, `self` is never read. The result is symmetric. Only the lower-triangular
    /// (including the diagonal) part of `self` is read/written.
    #[inline]
    pub fn ger_symm<D2: Dim, D3: Dim, SB, SC>(&mut self,
                                              alpha: N,
                                              x: &Vector<N, D2, SB>,
                                              y: &Vector<N, D3, SC>,
                                              beta: N)
        where N: One,
              SB: Storage<N, D2>,
              SC: Storage<N, D3>,
              ShapeConstraint: DimEq<R1, D2> + DimEq<C1, D3> {
        let dim1 = self.nrows();
        let dim2 = x.nrows();
        let dim3 = y.nrows();

        assert!(self.is_square(), "Symmetric ger: the input matrix must be square.");
        assert!(dim1 == dim2 && dim1 == dim3, "ger: dimensions mismatch.");

        for j in 0 .. dim1 {
            // FIXME: avoid bound checks.
            let val = unsafe { *y.vget_unchecked(j) };
            let subdim = Dynamic::new(dim1 - j);
            self.generic_slice_mut((j, j), (subdim, U1)).axpy(alpha * val, &x.rows_range(j ..), beta);
        }
    }
}
211 src/core/cg.rs
@@ -7,20 +7,20 @@
use num::One;

-use core::{Scalar, SquareMatrix, OwnedSquareMatrix, ColumnVector, Unit};
-use core::dimension::{DimName, DimNameSub, DimNameDiff, U1, U2, U3, U4};
-use core::storage::{Storage, StorageMut, OwnedStorage};
-use core::allocator::{Allocator, OwnedAllocator};
-use geometry::{PointBase, OrthographicBase, PerspectiveBase, IsometryBase, OwnedRotation, OwnedPoint};
+use core::{DefaultAllocator, Scalar, SquareMatrix, Vector, Unit,
+           VectorN, MatrixN, Vector3, Matrix3, Matrix4};
+use core::dimension::{DimName, DimNameSub, DimNameDiff, U1};
+use core::storage::{Storage, StorageMut};
+use core::allocator::Allocator;
+use geometry::{Point, Isometry, Point3, Rotation2, Rotation3, Orthographic3, Perspective3, IsometryMatrix3};

use alga::general::{Real, Field};
use alga::linear::Transformation;


-impl<N, D: DimName, S> SquareMatrix<N, D, S>
+impl<N, D: DimName> MatrixN<N, D>
    where N: Scalar + Field,
-          S: OwnedStorage<N, D, D>,
-          S::Alloc: OwnedAllocator<N, D, D, S> {
+          DefaultAllocator: Allocator<N, D, D> {
    /// Creates a new homogeneous matrix that applies the same scaling factor on each dimension.
    #[inline]
    pub fn new_scaling(scaling: N) -> Self {
@@ -32,9 +32,9 @@ impl<N, D: DimName, S> SquareMatrix<N, D, S>

    /// Creates a new homogeneous matrix that applies a distinct scaling factor for each dimension.
    #[inline]
-    pub fn new_nonuniform_scaling<SB>(scaling: &ColumnVector<N, DimNameDiff<D, U1>, SB>) -> Self
+    pub fn new_nonuniform_scaling<SB>(scaling: &Vector<N, DimNameDiff<D, U1>, SB>) -> Self
        where D: DimNameSub<U1>,
-              SB: Storage<N, DimNameDiff<D, U1>, U1> {
+              SB: Storage<N, DimNameDiff<D, U1>> {
        let mut res = Self::one();
        for i in 0 .. scaling.len() {
            res[(i, i)] = scaling[i];
@@ -45,10 +45,9 @@ impl<N, D: DimName, S> SquareMatrix<N, D, S>

    /// Creates a new homogeneous matrix that applies a pure translation.
    #[inline]
-    pub fn new_translation<SB>(translation: &ColumnVector<N, DimNameDiff<D, U1>, SB>) -> Self
+    pub fn new_translation<SB>(translation: &Vector<N, DimNameDiff<D, U1>, SB>) -> Self
        where D: DimNameSub<U1>,
-              SB: Storage<N, DimNameDiff<D, U1>, U1>,
-              S::Alloc: Allocator<N, DimNameDiff<D, U1>, U1> {
+              SB: Storage<N, DimNameDiff<D, U1>> {
        let mut res = Self::one();
        res.fixed_slice_mut::<DimNameDiff<D, U1>, U1>(0, D::dim() - 1).copy_from(translation);
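For `D = 3`, the homogeneous matrix built here is the identity with the shift copied into the top of the last column, which is exactly the slice `fixed_slice_mut(0, D::dim() - 1)` targets. A plain-array illustration (not nalgebra types; row-major here for readability, while nalgebra itself stores column-major):

```rust
// 2D homogeneous translation: identity plus (tx, ty) in the last column.
fn homogeneous_translation_2d(tx: f64, ty: f64) -> [[f64; 3]; 3] {
    [[1.0, 0.0, tx],
     [0.0, 1.0, ty],
     [0.0, 0.0, 1.0]]
}

// Multiplying it with a homogeneous point (x, y, 1) adds the shift.
fn apply(m: &[[f64; 3]; 3], p: [f64; 3]) -> [f64; 3] {
    let mut out = [0.0; 3];
    for i in 0..3 {
        for j in 0..3 {
            out[i] += m[i][j] * p[j];
        }
    }
    out
}
```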
@@ -56,44 +55,30 @@ impl<N, D: DimName, S> SquareMatrix<N, D, S>
    }
}

-impl<N, S> SquareMatrix<N, U3, S>
-    where N: Real,
-          S: OwnedStorage<N, U3, U3>,
-          S::Alloc: OwnedAllocator<N, U3, U3, S> {
+impl<N: Real> Matrix3<N> {
    /// Builds a 2 dimensional homogeneous rotation matrix from an angle in radians.
    #[inline]
-    pub fn new_rotation(angle: N) -> Self
-        where S::Alloc: Allocator<N, U2, U2> {
-        OwnedRotation::<N, U2, S::Alloc>::new(angle).to_homogeneous()
+    pub fn new_rotation(angle: N) -> Self {
+        Rotation2::new(angle).to_homogeneous()
    }
}
-impl<N, S> SquareMatrix<N, U4, S>
-    where N: Real,
-          S: OwnedStorage<N, U4, U4>,
-          S::Alloc: OwnedAllocator<N, U4, U4, S> {
+impl<N: Real> Matrix4<N> {
    /// Builds a 3D homogeneous rotation matrix from an axis and an angle (multiplied together).
    ///
    /// Returns the identity matrix if the given argument is zero.
    #[inline]
-    pub fn new_rotation<SB>(axisangle: ColumnVector<N, U3, SB>) -> Self
-        where SB: Storage<N, U3, U1>,
-              S::Alloc: Allocator<N, U3, U3> {
-        OwnedRotation::<N, U3, S::Alloc>::new(axisangle).to_homogeneous()
+    pub fn new_rotation(axisangle: Vector3<N>) -> Self {
+        Rotation3::new(axisangle).to_homogeneous()
    }

    /// Builds a 3D homogeneous rotation matrix from an axis and an angle (multiplied together).
    ///
    /// Returns the identity matrix if the given argument is zero.
    #[inline]
-    pub fn new_rotation_wrt_point<SB>(axisangle: ColumnVector<N, U3, SB>, pt: OwnedPoint<N, U3, S::Alloc>) -> Self
-        where SB: Storage<N, U3, U1>,
-              S::Alloc: Allocator<N, U3, U3> +
-                        Allocator<N, U3, U1> +
-                        Allocator<N, U1, U3> {
-        let rot = OwnedRotation::<N, U3, S::Alloc>::from_scaled_axis(axisangle);
-        IsometryBase::rotation_wrt_point(rot, pt).to_homogeneous()
+    pub fn new_rotation_wrt_point(axisangle: Vector3<N>, pt: Point3<N>) -> Self {
+        let rot = Rotation3::from_scaled_axis(axisangle);
+        Isometry::rotation_wrt_point(rot, pt).to_homogeneous()
    }

    /// Builds a 3D homogeneous rotation matrix from an axis and an angle (multiplied together).
@@ -101,37 +86,32 @@ impl<N, S> SquareMatrix<N, U4, S>
    /// Returns the identity matrix if the given argument is zero.
    /// This is identical to `Self::new_rotation`.
    #[inline]
-    pub fn from_scaled_axis<SB>(axisangle: ColumnVector<N, U3, SB>) -> Self
-        where SB: Storage<N, U3, U1>,
-              S::Alloc: Allocator<N, U3, U3> {
-        OwnedRotation::<N, U3, S::Alloc>::from_scaled_axis(axisangle).to_homogeneous()
+    pub fn from_scaled_axis(axisangle: Vector3<N>) -> Self {
+        Rotation3::from_scaled_axis(axisangle).to_homogeneous()
    }

    /// Creates a new rotation from Euler angles.
    ///
    /// The primitive rotations are applied in order: 1 roll − 2 pitch − 3 yaw.
-    pub fn from_euler_angles(roll: N, pitch: N, yaw: N) -> Self
-        where S::Alloc: Allocator<N, U3, U3> {
-        OwnedRotation::<N, U3, S::Alloc>::from_euler_angles(roll, pitch, yaw).to_homogeneous()
+    pub fn from_euler_angles(roll: N, pitch: N, yaw: N) -> Self {
+        Rotation3::from_euler_angles(roll, pitch, yaw).to_homogeneous()
    }

    /// Builds a 3D homogeneous rotation matrix from an axis and a rotation angle.
-    pub fn from_axis_angle<SB>(axis: &Unit<ColumnVector<N, U3, SB>>, angle: N) -> Self
-        where SB: Storage<N, U3, U1>,
-              S::Alloc: Allocator<N, U3, U3> {
-        OwnedRotation::<N, U3, S::Alloc>::from_axis_angle(axis, angle).to_homogeneous()
+    pub fn from_axis_angle(axis: &Unit<Vector3<N>>, angle: N) -> Self {
+        Rotation3::from_axis_angle(axis, angle).to_homogeneous()
    }

    /// Creates a new homogeneous matrix for an orthographic projection.
    #[inline]
    pub fn new_orthographic(left: N, right: N, bottom: N, top: N, znear: N, zfar: N) -> Self {
-        OrthographicBase::new(left, right, bottom, top, znear, zfar).unwrap()
+        Orthographic3::new(left, right, bottom, top, znear, zfar).unwrap()
    }

    /// Creates a new homogeneous matrix for a perspective projection.
    #[inline]
    pub fn new_perspective(aspect: N, fovy: N, znear: N, zfar: N) -> Self {
-        PerspectiveBase::new(aspect, fovy, znear, zfar).unwrap()
+        Perspective3::new(aspect, fovy, znear, zfar).unwrap()
    }

    /// Creates an isometry that corresponds to the local frame of an observer standing at the
@@ -140,57 +120,30 @@ impl<N, S> SquareMatrix<N, U4, S>
    /// It maps the view direction `target - eye` to the positive `z` axis and the origin to the
    /// `eye`.
    #[inline]
-    pub fn new_observer_frame<SB>(eye: &PointBase<N, U3, SB>,
-                                  target: &PointBase<N, U3, SB>,
-                                  up: &ColumnVector<N, U3, SB>)
-                                  -> Self
-        where SB: OwnedStorage<N, U3, U1, Alloc = S::Alloc>,
-              SB::Alloc: OwnedAllocator<N, U3, U1, SB> +
-                         Allocator<N, U1, U3> +
-                         Allocator<N, U3, U3> {
-        IsometryBase::<N, U3, SB, OwnedRotation<N, U3, SB::Alloc>>
-            ::new_observer_frame(eye, target, up).to_homogeneous()
+    pub fn new_observer_frame(eye: &Point3<N>, target: &Point3<N>, up: &Vector3<N>) -> Self {
+        IsometryMatrix3::new_observer_frame(eye, target, up).to_homogeneous()
    }

    /// Builds a right-handed look-at view matrix.
    #[inline]
-    pub fn look_at_rh<SB>(eye: &PointBase<N, U3, SB>,
-                          target: &PointBase<N, U3, SB>,
-                          up: &ColumnVector<N, U3, SB>)
-                          -> Self
-        where SB: OwnedStorage<N, U3, U1, Alloc = S::Alloc>,
-              SB::Alloc: OwnedAllocator<N, U3, U1, SB> +
-                         Allocator<N, U1, U3> +
-                         Allocator<N, U3, U3> {
-        IsometryBase::<N, U3, SB, OwnedRotation<N, U3, SB::Alloc>>
-            ::look_at_rh(eye, target, up).to_homogeneous()
+    pub fn look_at_rh(eye: &Point3<N>, target: &Point3<N>, up: &Vector3<N>) -> Self {
+        IsometryMatrix3::look_at_rh(eye, target, up).to_homogeneous()
    }

    /// Builds a left-handed look-at view matrix.
    #[inline]
-    pub fn look_at_lh<SB>(eye: &PointBase<N, U3, SB>,
-                          target: &PointBase<N, U3, SB>,
-                          up: &ColumnVector<N, U3, SB>)
-                          -> Self
-        where SB: OwnedStorage<N, U3, U1, Alloc = S::Alloc>,
-              SB::Alloc: OwnedAllocator<N, U3, U1, SB> +
-                         Allocator<N, U1, U3> +
-                         Allocator<N, U3, U3> {
-        IsometryBase::<N, U3, SB, OwnedRotation<N, U3, SB::Alloc>>
-            ::look_at_lh(eye, target, up).to_homogeneous()
+    pub fn look_at_lh(eye: &Point3<N>, target: &Point3<N>, up: &Vector3<N>) -> Self {
+        IsometryMatrix3::look_at_lh(eye, target, up).to_homogeneous()
    }
}
-impl<N, D: DimName, S> SquareMatrix<N, D, S>
-    where N: Scalar + Field,
-          S: Storage<N, D, D> {
-
+impl<N: Scalar + Field, D: DimName, S: Storage<N, D, D>> SquareMatrix<N, D, S> {
    /// Computes the transformation equal to `self` followed by a uniform scaling factor.
    #[inline]
-    pub fn append_scaling(&self, scaling: N) -> OwnedSquareMatrix<N, D, S::Alloc>
+    pub fn append_scaling(&self, scaling: N) -> MatrixN<N, D>
        where D: DimNameSub<U1>,
-              S::Alloc: Allocator<N, DimNameDiff<D, U1>, D> {
+              DefaultAllocator: Allocator<N, D, D> {
        let mut res = self.clone_owned();
        res.append_scaling_mut(scaling);
        res

@@ -198,9 +151,9 @@ impl<N, D: DimName, S> SquareMatrix<N, D, S>

    /// Computes the transformation equal to a uniform scaling factor followed by `self`.
    #[inline]
-    pub fn prepend_scaling(&self, scaling: N) -> OwnedSquareMatrix<N, D, S::Alloc>
+    pub fn prepend_scaling(&self, scaling: N) -> MatrixN<N, D>
        where D: DimNameSub<U1>,
-              S::Alloc: Allocator<N, D, DimNameDiff<D, U1>> {
+              DefaultAllocator: Allocator<N, D, D> {
        let mut res = self.clone_owned();
        res.prepend_scaling_mut(scaling);
        res
@@ -208,11 +161,10 @@ impl<N, D: DimName, S> SquareMatrix<N, D, S>

    /// Computes the transformation equal to `self` followed by a non-uniform scaling factor.
    #[inline]
-    pub fn append_nonuniform_scaling<SB>(&self, scaling: &ColumnVector<N, DimNameDiff<D, U1>, SB>)
-        -> OwnedSquareMatrix<N, D, S::Alloc>
+    pub fn append_nonuniform_scaling<SB>(&self, scaling: &Vector<N, DimNameDiff<D, U1>, SB>) -> MatrixN<N, D>
        where D: DimNameSub<U1>,
-              SB: Storage<N, DimNameDiff<D, U1>, U1>,
-              S::Alloc: Allocator<N, U1, D> {
+              SB: Storage<N, DimNameDiff<D, U1>>,
+              DefaultAllocator: Allocator<N, D, D> {
        let mut res = self.clone_owned();
        res.append_nonuniform_scaling_mut(scaling);
        res

@@ -220,11 +172,10 @@ impl<N, D: DimName, S> SquareMatrix<N, D, S>

    /// Computes the transformation equal to a non-uniform scaling factor followed by `self`.
    #[inline]
-    pub fn prepend_nonuniform_scaling<SB>(&self, scaling: &ColumnVector<N, DimNameDiff<D, U1>, SB>)
-        -> OwnedSquareMatrix<N, D, S::Alloc>
+    pub fn prepend_nonuniform_scaling<SB>(&self, scaling: &Vector<N, DimNameDiff<D, U1>, SB>) -> MatrixN<N, D>
        where D: DimNameSub<U1>,
-              SB: Storage<N, DimNameDiff<D, U1>, U1>,
-              S::Alloc: Allocator<N, D, U1> {
+              SB: Storage<N, DimNameDiff<D, U1>>,
+              DefaultAllocator: Allocator<N, D, D> {
        let mut res = self.clone_owned();
        res.prepend_nonuniform_scaling_mut(scaling);
        res
@ -232,11 +183,10 @@ impl<N, D: DimName, S> SquareMatrix<N, D, S>
|
|||
|
||||
/// Computes the transformation equal to `self` followed by a translation.
|
||||
#[inline]
|
||||
pub fn append_translation<SB>(&self, shift: &ColumnVector<N, DimNameDiff<D, U1>, SB>)
|
||||
-> OwnedSquareMatrix<N, D, S::Alloc>
|
||||
pub fn append_translation<SB>(&self, shift: &Vector<N, DimNameDiff<D, U1>, SB>) -> MatrixN<N, D>
|
||||
where D: DimNameSub<U1>,
|
||||
SB: Storage<N, DimNameDiff<D, U1>, U1>,
|
||||
S::Alloc: Allocator<N, DimNameDiff<D, U1>, U1> {
|
||||
SB: Storage<N, DimNameDiff<D, U1>>,
|
||||
DefaultAllocator: Allocator<N, D, D> {
|
||||
let mut res = self.clone_owned();
|
||||
res.append_translation_mut(shift);
|
||||
res
|
||||
|
@ -244,28 +194,23 @@ impl<N, D: DimName, S> SquareMatrix<N, D, S>
|
|||
|
||||
/// Computes the transformation equal to a translation followed by `self`.
|
||||
#[inline]
|
||||
pub fn prepend_translation<SB>(&self, shift: &ColumnVector<N, DimNameDiff<D, U1>, SB>)
|
||||
-> OwnedSquareMatrix<N, D, S::Alloc>
|
||||
pub fn prepend_translation<SB>(&self, shift: &Vector<N, DimNameDiff<D, U1>, SB>) -> MatrixN<N, D>
|
||||
where D: DimNameSub<U1>,
|
||||
SB: Storage<N, DimNameDiff<D, U1>, U1>,
|
||||
S::Alloc: Allocator<N, DimNameDiff<D, U1>, U1> +
|
||||
Allocator<N, DimNameDiff<D, U1>, DimNameDiff<D, U1>> +
|
||||
Allocator<N, U1, DimNameDiff<D, U1>> {
|
||||
SB: Storage<N, DimNameDiff<D, U1>>,
|
||||
DefaultAllocator: Allocator<N, D, D> +
|
||||
Allocator<N, DimNameDiff<D, U1>> {
|
||||
let mut res = self.clone_owned();
|
||||
res.prepend_translation_mut(shift);
|
||||
res
|
||||
}
|
||||
}
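The `append_translation` method above left-multiplies the homogeneous matrix by a translation. A minimal plain-Rust sketch (a hypothetical `append_translation` helper over a 3x3 array in 2D homogeneous coordinates, not nalgebra's generic implementation) of that update:

```rust
// Add `shift[j] * last_row[i]` to every non-homogeneous row `j`: this is
// exactly left-multiplication by a translation matrix, done in place.
fn append_translation(m: &mut [[f64; 3]; 3], shift: [f64; 2]) {
    for i in 0..3 {
        for j in 0..2 {
            m[j][i] += shift[j] * m[2][i];
        }
    }
}

fn main() {
    // Starting from the identity, appending the translation (2, 3) should
    // place the shift in the last column.
    let mut m = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]];
    append_translation(&mut m, [2.0, 3.0]);
    assert_eq!(m, [[1.0, 0.0, 2.0], [0.0, 1.0, 3.0], [0.0, 0.0, 1.0]]);
}
```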
-impl<N, D: DimName, S> SquareMatrix<N, D, S>
-    where N: Scalar + Field,
-          S: StorageMut<N, D, D> {
+impl<N: Scalar + Field, D: DimName, S: StorageMut<N, D, D>> SquareMatrix<N, D, S> {
    /// Computes in-place the transformation equal to `self` followed by a uniform scaling factor.
    #[inline]
    pub fn append_scaling_mut(&mut self, scaling: N)
-        where D: DimNameSub<U1>,
-              S::Alloc: Allocator<N, DimNameDiff<D, U1>, D> {
+        where D: DimNameSub<U1> {
        let mut to_scale = self.fixed_rows_mut::<DimNameDiff<D, U1>>(0);
        to_scale *= scaling;
    }

@@ -273,18 +218,16 @@ impl<N, D: DimName, S> SquareMatrix<N, D, S>
    /// Computes in-place the transformation equal to a uniform scaling factor followed by `self`.
    #[inline]
    pub fn prepend_scaling_mut(&mut self, scaling: N)
-        where D: DimNameSub<U1>,
-              S::Alloc: Allocator<N, D, DimNameDiff<D, U1>> {
+        where D: DimNameSub<U1> {
        let mut to_scale = self.fixed_columns_mut::<DimNameDiff<D, U1>>(0);
        to_scale *= scaling;
    }

    /// Computes in-place the transformation equal to `self` followed by a non-uniform scaling factor.
    #[inline]
-    pub fn append_nonuniform_scaling_mut<SB>(&mut self, scaling: &ColumnVector<N, DimNameDiff<D, U1>, SB>)
+    pub fn append_nonuniform_scaling_mut<SB>(&mut self, scaling: &Vector<N, DimNameDiff<D, U1>, SB>)
        where D: DimNameSub<U1>,
-              SB: Storage<N, DimNameDiff<D, U1>, U1>,
-              S::Alloc: Allocator<N, U1, D> {
+              SB: Storage<N, DimNameDiff<D, U1>> {
        for i in 0 .. scaling.len() {
            let mut to_scale = self.fixed_rows_mut::<U1>(i);
            to_scale *= scaling[i];

@@ -293,10 +236,9 @@ impl<N, D: DimName, S> SquareMatrix<N, D, S>
    /// Computes in-place the transformation equal to a non-uniform scaling factor followed by `self`.
    #[inline]
-    pub fn prepend_nonuniform_scaling_mut<SB>(&mut self, scaling: &ColumnVector<N, DimNameDiff<D, U1>, SB>)
+    pub fn prepend_nonuniform_scaling_mut<SB>(&mut self, scaling: &Vector<N, DimNameDiff<D, U1>, SB>)
        where D: DimNameSub<U1>,
-              SB: Storage<N, DimNameDiff<D, U1>, U1>,
-              S::Alloc: Allocator<N, D, U1> {
+              SB: Storage<N, DimNameDiff<D, U1>> {
        for i in 0 .. scaling.len() {
            let mut to_scale = self.fixed_columns_mut::<U1>(i);
            to_scale *= scaling[i];

@@ -305,10 +247,9 @@ impl<N, D: DimName, S> SquareMatrix<N, D, S>
    /// Computes the transformation equal to `self` followed by a translation.
    #[inline]
-    pub fn append_translation_mut<SB>(&mut self, shift: &ColumnVector<N, DimNameDiff<D, U1>, SB>)
+    pub fn append_translation_mut<SB>(&mut self, shift: &Vector<N, DimNameDiff<D, U1>, SB>)
        where D: DimNameSub<U1>,
-              SB: Storage<N, DimNameDiff<D, U1>, U1>,
-              S::Alloc: Allocator<N, DimNameDiff<D, U1>, U1> {
+              SB: Storage<N, DimNameDiff<D, U1>> {
        for i in 0 .. D::dim() {
            for j in 0 .. D::dim() - 1 {
                self[(j, i)] += shift[j] * self[(D::dim() - 1, i)];

@@ -318,12 +259,10 @@ impl<N, D: DimName, S> SquareMatrix<N, D, S>
    /// Computes the transformation equal to a translation followed by `self`.
    #[inline]
-    pub fn prepend_translation_mut<SB>(&mut self, shift: &ColumnVector<N, DimNameDiff<D, U1>, SB>)
+    pub fn prepend_translation_mut<SB>(&mut self, shift: &Vector<N, DimNameDiff<D, U1>, SB>)
        where D: DimNameSub<U1>,
-              SB: Storage<N, DimNameDiff<D, U1>, U1>,
-              S::Alloc: Allocator<N, DimNameDiff<D, U1>, U1> +
-                        Allocator<N, DimNameDiff<D, U1>, DimNameDiff<D, U1>> +
-                        Allocator<N, U1, DimNameDiff<D, U1>> {
+              SB: Storage<N, DimNameDiff<D, U1>>,
+              DefaultAllocator: Allocator<N, DimNameDiff<D, U1>> {
        let scale = self.fixed_slice::<U1, DimNameDiff<D, U1>>(D::dim() - 1, 0).tr_dot(&shift);
        let post_translation = self.fixed_slice::<DimNameDiff<D, U1>, DimNameDiff<D, U1>>(0, 0) * shift;

@@ -335,19 +274,12 @@ impl<N, D: DimName, S> SquareMatrix<N, D, S>
}

-impl<N, D, SA, SB> Transformation<PointBase<N, DimNameDiff<D, U1>, SB>> for SquareMatrix<N, D, SA>
-    where N: Real,
-          D: DimNameSub<U1>,
-          SA: OwnedStorage<N, D, D>,
-          SB: OwnedStorage<N, DimNameDiff<D, U1>, U1, Alloc = SA::Alloc>,
-          SA::Alloc: OwnedAllocator<N, D, D, SA> +
-                     Allocator<N, DimNameDiff<D, U1>, DimNameDiff<D, U1>> +
-                     Allocator<N, DimNameDiff<D, U1>, U1> +
-                     Allocator<N, U1, DimNameDiff<D, U1>>,
-          SB::Alloc: OwnedAllocator<N, DimNameDiff<D, U1>, U1, SB> {
+impl<N: Real, D: DimNameSub<U1>> Transformation<Point<N, DimNameDiff<D, U1>>> for MatrixN<N, D>
+    where DefaultAllocator: Allocator<N, D, D> +
+                            Allocator<N, DimNameDiff<D, U1>> +
+                            Allocator<N, DimNameDiff<D, U1>, DimNameDiff<D, U1>> {
    #[inline]
-    fn transform_vector(&self, v: &ColumnVector<N, DimNameDiff<D, U1>, SB>)
-        -> ColumnVector<N, DimNameDiff<D, U1>, SB> {
+    fn transform_vector(&self, v: &VectorN<N, DimNameDiff<D, U1>>) -> VectorN<N, DimNameDiff<D, U1>> {
        let transform = self.fixed_slice::<DimNameDiff<D, U1>, DimNameDiff<D, U1>>(0, 0);
        let normalizer = self.fixed_slice::<U1, DimNameDiff<D, U1>>(D::dim() - 1, 0);
        let n = normalizer.tr_dot(&v);

@@ -360,8 +292,7 @@ impl<N, D, SA, SB> Transformation<PointBase<N, DimNameDiff<D, U1>, SB>> for Squa
    }

    #[inline]
-    fn transform_point(&self, pt: &PointBase<N, DimNameDiff<D, U1>, SB>)
-        -> PointBase<N, DimNameDiff<D, U1>, SB> {
+    fn transform_point(&self, pt: &Point<N, DimNameDiff<D, U1>>) -> Point<N, DimNameDiff<D, U1>> {
        let transform = self.fixed_slice::<DimNameDiff<D, U1>, DimNameDiff<D, U1>>(0, 0);
        let translation = self.fixed_slice::<DimNameDiff<D, U1>, U1>(0, D::dim() - 1);
        let normalizer = self.fixed_slice::<U1, DimNameDiff<D, U1>>(D::dim() - 1, 0);

@@ -4,21 +4,22 @@ use num::Signed;
use alga::general::{ClosedMul, ClosedDiv};

-use core::{Scalar, Matrix, OwnedMatrix, MatrixSum};
+use core::{DefaultAllocator, Scalar, Matrix, MatrixMN, MatrixSum};
use core::dimension::Dim;
use core::storage::{Storage, StorageMut};
-use core::allocator::SameShapeAllocator;
+use core::allocator::{Allocator, SameShapeAllocator};
use core::constraint::{ShapeConstraint, SameNumberOfRows, SameNumberOfColumns};

/// The type of the result of a matrix componentwise operation.
-pub type MatrixComponentOp<N, R1, C1, R2, C2, SA> = MatrixSum<N, R1, C1, R2, C2, SA>;
+pub type MatrixComponentOp<N, R1, C1, R2, C2> = MatrixSum<N, R1, C1, R2, C2>;

impl<N: Scalar, R: Dim, C: Dim, S: Storage<N, R, C>> Matrix<N, R, C, S> {
    /// Computes the componentwise absolute value.
    #[inline]
-    pub fn abs(&self) -> OwnedMatrix<N, R, C, S::Alloc>
-        where N: Signed {
+    pub fn abs(&self) -> MatrixMN<N, R, C>
+        where N: Signed,
+              DefaultAllocator: Allocator<N, R, C> {
        let mut res = self.clone_owned();

        for e in res.iter_mut() {

@@ -32,48 +33,72 @@ impl<N: Scalar, R: Dim, C: Dim, S: Storage<N, R, C>> Matrix<N, R, C, S> {
}

macro_rules! component_binop_impl(
-    ($($binop: ident, $binop_mut: ident, $Trait: ident . $binop_assign: ident, $desc:expr, $desc_mut:expr);* $(;)*) => {$(
-        impl<N: Scalar, R: Dim, C: Dim, S: Storage<N, R, C>> Matrix<N, R, C, S> {
+    ($($binop: ident, $binop_mut: ident, $binop_assign: ident, $Trait: ident . $op_assign: ident, $desc:expr, $desc_mut:expr);* $(;)*) => {$(
+        impl<N: Scalar, R1: Dim, C1: Dim, SA: Storage<N, R1, C1>> Matrix<N, R1, C1, SA> {
            #[doc = $desc]
            #[inline]
-            pub fn $binop<R2, C2, SB>(&self, rhs: &Matrix<N, R2, C2, SB>) -> MatrixComponentOp<N, R, C, R2, C2, S>
+            pub fn $binop<R2, C2, SB>(&self, rhs: &Matrix<N, R2, C2, SB>) -> MatrixComponentOp<N, R1, C1, R2, C2>
                where N: $Trait,
                      R2: Dim, C2: Dim,
                      SB: Storage<N, R2, C2>,
-                      S::Alloc: SameShapeAllocator<N, R, C, R2, C2, S>,
-                      ShapeConstraint: SameNumberOfRows<R, R2> + SameNumberOfColumns<C, C2> {
+                      DefaultAllocator: SameShapeAllocator<N, R1, C1, R2, C2>,
+                      ShapeConstraint: SameNumberOfRows<R1, R2> + SameNumberOfColumns<C1, C2> {
+                assert_eq!(self.shape(), rhs.shape(), "Componentwise mul/div: mismatched matrix dimensions.");
                let mut res = self.clone_owned_sum();

-                for (res, rhs) in res.iter_mut().zip(rhs.iter()) {
-                    res.$binop_assign(*rhs);
+                for j in 0 .. res.ncols() {
+                    for i in 0 .. res.nrows() {
+                        unsafe {
+                            res.get_unchecked_mut(i, j).$op_assign(*rhs.get_unchecked(i, j));
+                        }
+                    }
                }

                res
            }
        }

-        impl<N: Scalar, R: Dim, C: Dim, S: StorageMut<N, R, C>> Matrix<N, R, C, S> {
+        impl<N: Scalar, R1: Dim, C1: Dim, SA: StorageMut<N, R1, C1>> Matrix<N, R1, C1, SA> {
            #[doc = $desc_mut]
            #[inline]
+            pub fn $binop_assign<R2, C2, SB>(&mut self, rhs: &Matrix<N, R2, C2, SB>)
+                where N: $Trait,
+                      R2: Dim,
+                      C2: Dim,
+                      SB: Storage<N, R2, C2>,
+                      ShapeConstraint: SameNumberOfRows<R1, R2> + SameNumberOfColumns<C1, C2> {
+                assert_eq!(self.shape(), rhs.shape(), "Componentwise mul/div: mismatched matrix dimensions.");
+
+                for j in 0 .. self.ncols() {
+                    for i in 0 .. self.nrows() {
+                        unsafe {
+                            self.get_unchecked_mut(i, j).$op_assign(*rhs.get_unchecked(i, j));
+                        }
+                    }
+                }
+            }
+
+            #[doc = $desc_mut]
+            #[inline]
+            #[deprecated(note = "This is renamed using the `_assign` suffix instead of the `_mut` suffix.")]
            pub fn $binop_mut<R2, C2, SB>(&mut self, rhs: &Matrix<N, R2, C2, SB>)
                where N: $Trait,
                      R2: Dim,
                      C2: Dim,
                      SB: Storage<N, R2, C2>,
-                      ShapeConstraint: SameNumberOfRows<R, R2> + SameNumberOfColumns<C, C2> {
-                for (me, rhs) in self.iter_mut().zip(rhs.iter()) {
-                    me.$binop_assign(*rhs);
-                }
+                      ShapeConstraint: SameNumberOfRows<R1, R2> + SameNumberOfColumns<C1, C2> {
+                self.$binop_assign(rhs)
            }
        }
    )*}
);

component_binop_impl!(
-    component_mul, component_mul_mut, ClosedMul.mul_assign,
+    component_mul, component_mul_mut, component_mul_assign, ClosedMul.mul_assign,
    "Componentwise matrix multiplication.", "Mutable, componentwise matrix multiplication.";
-    component_div, component_div_mut, ClosedDiv.div_assign,
+    component_div, component_div_mut, component_div_assign, ClosedDiv.div_assign,
    "Componentwise matrix division.", "Mutable, componentwise matrix division.";
    // FIXME: add other operators like bitshift, etc. ?
);
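What the generated `component_mul` / `component_mul_assign` pair computes is a Hadamard (componentwise) product. A plain-Rust sketch over slices (not the actual macro expansion, and with safe indexing instead of `get_unchecked`):

```rust
// In-place componentwise multiplication, mirroring `component_mul_assign`:
// each entry of `lhs` is multiplied by the matching entry of `rhs`.
fn component_mul_assign(lhs: &mut [f64], rhs: &[f64]) {
    assert_eq!(lhs.len(), rhs.len(), "Componentwise mul: mismatched dimensions.");
    for (l, r) in lhs.iter_mut().zip(rhs.iter()) {
        *l *= r;
    }
}

fn main() {
    let mut a = [1.0, 2.0, 3.0];
    component_mul_assign(&mut a, &[4.0, 5.0, 6.0]);
    assert_eq!(a, [4.0, 10.0, 18.0]);
}
```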

@@ -6,8 +6,7 @@ use core::dimension::{Dim, DimName, Dynamic};
pub struct ShapeConstraint;

/// Constraints `C1` and `R2` to be equivalent.
-pub trait AreMultipliable<R1: Dim, C1: Dim,
-                          R2: Dim, C2: Dim> {
+pub trait AreMultipliable<R1: Dim, C1: Dim, R2: Dim, C2: Dim>: DimEq<C1, R2> {
}

@@ -15,11 +14,30 @@ impl<R1: Dim, C1: Dim, R2: Dim, C2: Dim> AreMultipliable<R1, C1, R2, C2> for Sha
    where ShapeConstraint: DimEq<C1, R2> {
}

+/// Constraints `D1` and `D2` to be equivalent.
+pub trait DimEq<D1: Dim, D2: Dim> {
+    /// This is either equal to `D1` or `D2`, always choosing the one (if any) which is a type-level
+    /// constant.
+    type Representative: Dim;
+}
+
+impl<D: Dim> DimEq<D, D> for ShapeConstraint {
+    type Representative = D;
+}
+
+impl<D: DimName> DimEq<D, Dynamic> for ShapeConstraint {
+    type Representative = D;
+}
+
+impl<D: DimName> DimEq<Dynamic, D> for ShapeConstraint {
+    type Representative = D;
+}
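The `DimEq` impls added above can be demonstrated in miniature. Below is a self-contained sketch of the same pattern (with stand-in `Dyn` and `U3` types, not nalgebra's real `Dim` machinery): a constraint trait whose associated `Representative` type picks the "most static" of two dimension types.

```rust
#[allow(dead_code)]
struct Dyn; // stand-in for `Dynamic`
#[allow(dead_code)]
struct U3;  // stand-in for a type-level dimension constant

struct ShapeConstraint;

trait DimEq<D1, D2> {
    // The representative dimension: the type-level constant if one exists.
    type Representative;
}

impl<D> DimEq<D, D> for ShapeConstraint {
    type Representative = D;
}
impl DimEq<U3, Dyn> for ShapeConstraint {
    type Representative = U3;
}
impl DimEq<Dyn, U3> for ShapeConstraint {
    type Representative = U3;
}

// A function usable only when the two dimensions are compatible.
fn check<D1, D2>()
where
    ShapeConstraint: DimEq<D1, D2>,
{
}

fn main() {
    check::<U3, U3>();  // same static dimension: accepted
    check::<U3, Dyn>(); // static vs dynamic: accepted, representative is U3
    check::<Dyn, U3>();
    // The representative of (Dyn, U3) is the static side:
    assert!(std::any::type_name::<<ShapeConstraint as DimEq<Dyn, U3>>::Representative>()
        .ends_with("U3"));
}
```

Mismatched static dimensions simply fail to satisfy the bound, so the error surfaces at compile time rather than as a runtime panic.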

macro_rules! equality_trait_decl(
    ($($doc: expr, $Trait: ident),* $(,)*) => {$(
        // XXX: we can't do something like `DimEq<D1> for D2` because we would require a blanket impl…
        #[doc = $doc]
-        pub trait $Trait<D1: Dim, D2: Dim> {
+        pub trait $Trait<D1: Dim, D2: Dim>: DimEq<D1, D2> + DimEq<D2, D1> {
            /// This is either equal to `D1` or `D2`, always choosing the one (if any) which is a type-level
            /// constant.
            type Representative: Dim;

@@ -40,9 +58,6 @@ macro_rules! equality_trait_decl(
);

equality_trait_decl!(
-    "Constraints `D1` and `D2` to be equivalent.",
-    DimEq,
-
    "Constraints `D1` and `D2` to be equivalent. \
     They are both assumed to be the number of \
     rows of a matrix.",

@@ -1,5 +1,7 @@
#[cfg(feature = "arbitrary")]
use quickcheck::{Arbitrary, Gen};
+#[cfg(feature = "arbitrary")]
+use core::storage::Owned;

use std::iter;
use num::{Zero, One, Bounded};

@@ -8,38 +10,44 @@ use typenum::{self, Cmp, Greater};
use alga::general::{ClosedAdd, ClosedMul};

-use core::{Scalar, Matrix, SquareMatrix, ColumnVector, Unit};
+use core::{DefaultAllocator, Scalar, Matrix, Vector, Unit, MatrixMN, MatrixN, VectorN};
use core::dimension::{Dim, DimName, Dynamic, U1, U2, U3, U4, U5, U6};
-use core::allocator::{Allocator, OwnedAllocator};
-use core::storage::{Storage, OwnedStorage};
+use core::allocator::Allocator;
+use core::storage::Storage;

/*
 *
 * Generic constructors.
 *
 */
-impl<N: Scalar, R: Dim, C: Dim, S: OwnedStorage<N, R, C>> Matrix<N, R, C, S>
-    // XXX: needed because of a compiler bug. See the rust compiler issue #26026.
-    where S::Alloc: OwnedAllocator<N, R, C, S> {
+impl<N: Scalar, R: Dim, C: Dim> MatrixMN<N, R, C>
+    where DefaultAllocator: Allocator<N, R, C> {
    /// Creates a new uninitialized matrix. If the matrix has a compile-time dimension, this panics
    /// if `nrows != R::to_usize()` or `ncols != C::to_usize()`.
    #[inline]
-    pub unsafe fn new_uninitialized_generic(nrows: R, ncols: C) -> Matrix<N, R, C, S> {
-        Matrix::from_data(S::Alloc::allocate_uninitialized(nrows, ncols))
+    pub unsafe fn new_uninitialized_generic(nrows: R, ncols: C) -> Self {
+        Self::from_data(DefaultAllocator::allocate_uninitialized(nrows, ncols))
    }

    /// Creates a matrix with all its elements set to `elem`.
    #[inline]
-    pub fn from_element_generic(nrows: R, ncols: C, elem: N) -> Matrix<N, R, C, S> {
+    pub fn from_element_generic(nrows: R, ncols: C, elem: N) -> Self {
        let len = nrows.value() * ncols.value();
-        Matrix::from_iterator_generic(nrows, ncols, iter::repeat(elem).take(len))
+        Self::from_iterator_generic(nrows, ncols, iter::repeat(elem).take(len))
    }

+    /// Creates a matrix with all its elements set to 0.
+    #[inline]
+    pub fn zeros_generic(nrows: R, ncols: C) -> Self
+        where N: Zero {
+        Self::from_element_generic(nrows, ncols, N::zero())
+    }
+
    /// Creates a matrix with all its elements filled by an iterator.
    #[inline]
-    pub fn from_iterator_generic<I>(nrows: R, ncols: C, iter: I) -> Matrix<N, R, C, S>
+    pub fn from_iterator_generic<I>(nrows: R, ncols: C, iter: I) -> Self
        where I: IntoIterator<Item = N> {
-        Matrix::from_data(S::Alloc::allocate_from_iterator(nrows, ncols, iter))
+        Self::from_data(DefaultAllocator::allocate_from_iterator(nrows, ncols, iter))
    }

    /// Creates a matrix with its elements filled with the components provided by a slice in

@@ -48,7 +56,7 @@ impl<N: Scalar, R: Dim, C: Dim, S: OwnedStorage<N, R, C>> Matrix<N, R, C, S>
    /// The order of elements in the slice must follow the usual mathematical writing, i.e.,
    /// row-by-row.
    #[inline]
-    pub fn from_row_slice_generic(nrows: R, ncols: C, slice: &[N]) -> Matrix<N, R, C, S> {
+    pub fn from_row_slice_generic(nrows: R, ncols: C, slice: &[N]) -> Self {
        assert!(slice.len() == nrows.value() * ncols.value(),
                "Matrix init. error: the slice did not contain the right number of elements.");

@@ -69,14 +77,14 @@ impl<N: Scalar, R: Dim, C: Dim, S: OwnedStorage<N, R, C>> Matrix<N, R, C, S>
    /// Creates a matrix with its elements filled with the components provided by a slice. The
    /// components must have the same layout as the matrix data storage (i.e. row-major or column-major).
    #[inline]
-    pub fn from_column_slice_generic(nrows: R, ncols: C, slice: &[N]) -> Matrix<N, R, C, S> {
-        Matrix::from_iterator_generic(nrows, ncols, slice.iter().cloned())
+    pub fn from_column_slice_generic(nrows: R, ncols: C, slice: &[N]) -> Self {
+        Self::from_iterator_generic(nrows, ncols, slice.iter().cloned())
    }

    /// Creates a matrix filled with the results of a function applied to each of its component
    /// coordinates.
    #[inline]
-    pub fn from_fn_generic<F>(nrows: R, ncols: C, mut f: F) -> Matrix<N, R, C, S>
+    pub fn from_fn_generic<F>(nrows: R, ncols: C, mut f: F) -> Self
        where F: FnMut(usize, usize) -> N {
        let mut res = unsafe { Self::new_uninitialized_generic(nrows, ncols) };

@@ -94,7 +102,7 @@ impl<N: Scalar, R: Dim, C: Dim, S: OwnedStorage<N, R, C>> Matrix<N, R, C, S>
    /// If the matrix is not square, the largest square submatrix starting at index `(0, 0)` is set
    /// to the identity matrix. All other entries are set to zero.
    #[inline]
-    pub fn identity_generic(nrows: R, ncols: C) -> Matrix<N, R, C, S>
+    pub fn identity_generic(nrows: R, ncols: C) -> Self
        where N: Zero + One {
        Self::from_diagonal_element_generic(nrows, ncols, N::one())
    }

@@ -104,10 +112,9 @@ impl<N: Scalar, R: Dim, C: Dim, S: OwnedStorage<N, R, C>> Matrix<N, R, C, S>
    /// If the matrix is not square, the largest square submatrix starting at index `(0, 0)` is set
    /// to the identity matrix. All other entries are set to zero.
    #[inline]
-    pub fn from_diagonal_element_generic(nrows: R, ncols: C, elt: N) -> Matrix<N, R, C, S>
+    pub fn from_diagonal_element_generic(nrows: R, ncols: C, elt: N) -> Self
        where N: Zero + One {
-        let mut res = unsafe { Self::new_uninitialized_generic(nrows, ncols) };
-        res.fill(N::zero());
+        let mut res = Self::zeros_generic(nrows, ncols);

        for i in 0 .. ::min(nrows.value(), ncols.value()) {
            unsafe { *res.get_unchecked_mut(i, i) = elt }

@@ -116,12 +123,29 @@ impl<N: Scalar, R: Dim, C: Dim, S: OwnedStorage<N, R, C>> Matrix<N, R, C, S>
        res
    }

+    /// Creates a new matrix that may be rectangular. The first `elts.len()` diagonal elements are
+    /// filled with the content of `elts`. Others are set to 0.
+    ///
+    /// Panics if `elts.len()` is larger than the minimum among `nrows` and `ncols`.
+    #[inline]
+    pub fn from_partial_diagonal_generic(nrows: R, ncols: C, elts: &[N]) -> Self
+        where N: Zero {
+        let mut res = Self::zeros_generic(nrows, ncols);
+        assert!(elts.len() <= ::min(nrows.value(), ncols.value()), "Too many diagonal elements provided.");
+
+        for (i, elt) in elts.iter().enumerate() {
+            unsafe { *res.get_unchecked_mut(i, i) = *elt }
+        }
+
+        res
+    }
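The new `from_partial_diagonal_generic` above admits a straightforward sketch in plain Rust (a hypothetical standalone helper over `Vec<Vec<f64>>`, not nalgebra's storage-generic version):

```rust
// Fill the first `elts.len()` diagonal entries of an otherwise-zero
// `nrows x ncols` matrix; panics if too many elements are provided.
fn from_partial_diagonal(nrows: usize, ncols: usize, elts: &[f64]) -> Vec<Vec<f64>> {
    assert!(elts.len() <= nrows.min(ncols), "Too many diagonal elements provided.");
    let mut res = vec![vec![0.0; ncols]; nrows];
    for (i, e) in elts.iter().enumerate() {
        res[i][i] = *e;
    }
    res
}

fn main() {
    // A rectangular 3x2 matrix with diagonal (1, 2); the last row stays zero.
    let m = from_partial_diagonal(3, 2, &[1.0, 2.0]);
    assert_eq!(m, vec![vec![1.0, 0.0], vec![0.0, 2.0], vec![0.0, 0.0]]);
}
```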

    /// Builds a new matrix from its rows.
    ///
    /// Panics if not enough rows are provided (for statically-sized matrices), or if all rows do
    /// not have the same dimensions.
    #[inline]
-    pub fn from_rows<SB>(rows: &[Matrix<N, U1, C, SB>]) -> Matrix<N, R, C, S>
+    pub fn from_rows<SB>(rows: &[Matrix<N, U1, C, SB>]) -> Self
        where SB: Storage<N, U1, C> {
        assert!(rows.len() > 0, "At least one row must be given.");

@@ -144,8 +168,8 @@ impl<N: Scalar, R: Dim, C: Dim, S: OwnedStorage<N, R, C>> Matrix<N, R, C, S>
    /// Panics if not enough columns are provided (for statically-sized matrices), or if all
    /// columns do not have the same dimensions.
    #[inline]
-    pub fn from_columns<SB>(columns: &[ColumnVector<N, R, SB>]) -> Matrix<N, R, C, S>
-        where SB: Storage<N, R, U1> {
+    pub fn from_columns<SB>(columns: &[Vector<N, R, SB>]) -> Self
+        where SB: Storage<N, R> {
        assert!(columns.len() > 0, "At least one column must be given.");
        let ncols = C::try_to_usize().unwrap_or(columns.len());

@@ -160,31 +184,27 @@ impl<N: Scalar, R: Dim, C: Dim, S: OwnedStorage<N, R, C>> Matrix<N, R, C, S>
        // FIXME: optimize that.
        Self::from_fn_generic(R::from_usize(nrows), C::from_usize(ncols), |i, j| columns[j][i])
    }
}
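Several constructors above (including `from_columns`) bottom out in `from_fn_generic`, which fills a matrix from a closure over `(row, col)` indices. A minimal sketch of that fill over a flat column-major buffer (a hypothetical standalone `from_fn`, not nalgebra's generic version):

```rust
// Build an `nrows x ncols` matrix in column-major order (as nalgebra stores
// its data) from a closure over (row, col) indices.
fn from_fn(nrows: usize, ncols: usize, f: impl Fn(usize, usize) -> f64) -> Vec<f64> {
    let mut data = vec![0.0; nrows * ncols];
    for j in 0..ncols {
        for i in 0..nrows {
            data[j * nrows + i] = f(i, j);
        }
    }
    data
}

fn main() {
    // Entry (i, j) = 10*i + j; column-major layout puts column 0 first.
    let m = from_fn(2, 2, |i, j| (i * 10 + j) as f64);
    assert_eq!(m, vec![0.0, 10.0, 1.0, 11.0]);
}
```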

-impl<N, R: Dim, C: Dim, S> Matrix<N, R, C, S>
-    where N: Scalar + Rand,
-          S: OwnedStorage<N, R, C>,
-          S::Alloc: OwnedAllocator<N, R, C, S> {
    /// Creates a matrix filled with random values.
    #[inline]
-    pub fn new_random_generic(nrows: R, ncols: C) -> Matrix<N, R, C, S> {
-        Matrix::from_fn_generic(nrows, ncols, |_, _| rand::random())
+    pub fn new_random_generic(nrows: R, ncols: C) -> Self
+        where N: Rand {
+        Self::from_fn_generic(nrows, ncols, |_, _| rand::random())
    }
}

-impl<N, D: Dim, S> SquareMatrix<N, D, S>
-    where N: Scalar + Zero,
-          S: OwnedStorage<N, D, D>,
-          S::Alloc: OwnedAllocator<N, D, D, S> {
+impl<N, D: Dim> MatrixN<N, D>
+    where N: Scalar,
+          DefaultAllocator: Allocator<N, D, D> {
    /// Creates a square matrix with its diagonal set to `diag` and all other entries set to 0.
    #[inline]
-    pub fn from_diagonal<SB: Storage<N, D, U1>>(diag: &ColumnVector<N, D, SB>) -> Self {
+    pub fn from_diagonal<SB: Storage<N, D>>(diag: &Vector<N, D, SB>) -> Self
+        where N: Zero {
        let (dim, _) = diag.data.shape();
-        let mut res = Self::from_element_generic(dim, dim, N::zero());
+        let mut res = Self::zeros_generic(dim, dim);

        for i in 0 .. diag.len() {
-            unsafe { *res.get_unchecked_mut(i, i) = *diag.get_unchecked(i, 0); }
+            unsafe { *res.get_unchecked_mut(i, i) = *diag.vget_unchecked(i); }
        }

        res

@@ -199,25 +219,31 @@ impl<N, D: Dim, S> SquareMatrix<N, D, S>
 */
macro_rules! impl_constructors(
    ($($Dims: ty),*; $(=> $DimIdent: ident: $DimBound: ident),*; $($gargs: expr),*; $($args: ident),*) => {
-        impl<N: Scalar, $($DimIdent: $DimBound, )* S> Matrix<N $(, $Dims)*, S>
-            where S: OwnedStorage<N $(, $Dims)*>,
-                  S::Alloc: OwnedAllocator<N $(, $Dims)*, S> {
+        impl<N: Scalar, $($DimIdent: $DimBound, )*> MatrixMN<N $(, $Dims)*>
+            where DefaultAllocator: Allocator<N $(, $Dims)*> {
            /// Creates a new uninitialized matrix.
            #[inline]
-            pub unsafe fn new_uninitialized($($args: usize),*) -> Matrix<N $(, $Dims)*, S> {
+            pub unsafe fn new_uninitialized($($args: usize),*) -> Self {
                Self::new_uninitialized_generic($($gargs),*)
            }

            /// Creates a matrix with all its elements set to `elem`.
            #[inline]
-            pub fn from_element($($args: usize,)* elem: N) -> Matrix<N $(, $Dims)*, S> {
+            pub fn from_element($($args: usize,)* elem: N) -> Self {
                Self::from_element_generic($($gargs, )* elem)
            }

+            /// Creates a matrix with all its elements set to `0`.
+            #[inline]
+            pub fn zeros($($args: usize),*) -> Self
+                where N: Zero {
+                Self::zeros_generic($($gargs),*)
+            }
+
            /// Creates a matrix with all its elements filled by an iterator.
            #[inline]
-            pub fn from_iterator<I>($($args: usize,)* iter: I) -> Matrix<N $(, $Dims)*, S>
+            pub fn from_iterator<I>($($args: usize,)* iter: I) -> Self
                where I: IntoIterator<Item = N> {
                Self::from_iterator_generic($($gargs, )* iter)
            }

@@ -228,14 +254,14 @@ macro_rules! impl_constructors(
            /// The order of elements in the slice must follow the usual mathematical writing, i.e.,
            /// row-by-row.
            #[inline]
-            pub fn from_row_slice($($args: usize,)* slice: &[N]) -> Matrix<N $(, $Dims)*, S> {
+            pub fn from_row_slice($($args: usize,)* slice: &[N]) -> Self {
                Self::from_row_slice_generic($($gargs, )* slice)
            }

            /// Creates a matrix with its elements filled with the components provided by a slice
            /// in column-major order.
            #[inline]
-            pub fn from_column_slice($($args: usize,)* slice: &[N]) -> Matrix<N $(, $Dims)*, S> {
+            pub fn from_column_slice($($args: usize,)* slice: &[N]) -> Self {
                Self::from_column_slice_generic($($gargs, )* slice)
            }

@@ -243,7 +269,7 @@ macro_rules! impl_constructors(
            /// component coordinates.
            // FIXME: don't take a dimension if the matrix is statically sized.
            #[inline]
-            pub fn from_fn<F>($($args: usize,)* f: F) -> Matrix<N $(, $Dims)*, S>
+            pub fn from_fn<F>($($args: usize,)* f: F) -> Self
                where F: FnMut(usize, usize) -> N {
                Self::from_fn_generic($($gargs, )* f)
            }

@@ -252,7 +278,7 @@ macro_rules! impl_constructors(
            /// submatrix (starting at the first row and column) is set to the identity while all
            /// other entries are set to zero.
            #[inline]
-            pub fn identity($($args: usize,)*) -> Matrix<N $(, $Dims)*, S>
+            pub fn identity($($args: usize,)*) -> Self
                where N: Zero + One {
                Self::identity_generic($($gargs),* )
            }

@@ -260,19 +286,28 @@ macro_rules! impl_constructors(
            /// Creates a matrix with its diagonal filled with `elt` and all other
            /// components set to zero.
            #[inline]
-            pub fn from_diagonal_element($($args: usize,)* elt: N) -> Matrix<N $(, $Dims)*, S>
+            pub fn from_diagonal_element($($args: usize,)* elt: N) -> Self
                where N: Zero + One {
                Self::from_diagonal_element_generic($($gargs, )* elt)
            }

+            /// Creates a new matrix that may be rectangular. The first `elts.len()` diagonal
+            /// elements are filled with the content of `elts`. Others are set to 0.
+            ///
+            /// Panics if `elts.len()` is larger than the minimum among `nrows` and `ncols`.
+            #[inline]
+            pub fn from_partial_diagonal($($args: usize,)* elts: &[N]) -> Self
+                where N: Zero {
+                Self::from_partial_diagonal_generic($($gargs, )* elts)
+            }
        }

-        impl<N: Scalar + Rand, $($DimIdent: $DimBound, )* S> Matrix<N $(, $Dims)*, S>
-            where S: OwnedStorage<N $(, $Dims)*>,
-                  S::Alloc: OwnedAllocator<N $(, $Dims)*, S> {
+        impl<N: Scalar + Rand, $($DimIdent: $DimBound, )*> MatrixMN<N $(, $Dims)*>
+            where DefaultAllocator: Allocator<N $(, $Dims)*> {
            /// Creates a matrix filled with random values.
            #[inline]
-            pub fn new_random($($args: usize),*) -> Matrix<N $(, $Dims)*, S> {
+            pub fn new_random($($args: usize),*) -> Self {
                Self::new_random_generic($($gargs),*)
            }
        }
@@ -305,10 +340,9 @@ impl_constructors!(Dynamic, Dynamic;
 * Zero, One, Rand traits.
 *
 */
-impl<N, R: DimName, C: DimName, S> Zero for Matrix<N, R, C, S>
+impl<N, R: DimName, C: DimName> Zero for MatrixMN<N, R, C>
    where N: Scalar + Zero + ClosedAdd,
-          S: OwnedStorage<N, R, C>,
-          S::Alloc: OwnedAllocator<N, R, C, S> {
+          DefaultAllocator: Allocator<N, R, C> {
    #[inline]
    fn zero() -> Self {
        Self::from_element(N::zero())

@@ -320,20 +354,18 @@ impl<N, R: DimName, C: DimName, S> Zero for Matrix<N, R, C, S>
    }
}

-impl<N, D: DimName, S> One for Matrix<N, D, D, S>
+impl<N, D: DimName> One for MatrixN<N, D>
    where N: Scalar + Zero + One + ClosedMul + ClosedAdd,
-          S: OwnedStorage<N, D, D>,
-          S::Alloc: OwnedAllocator<N, D, D, S> {
+          DefaultAllocator: Allocator<N, D, D> {
    #[inline]
    fn one() -> Self {
        Self::identity()
    }
}

-impl<N, R: DimName, C: DimName, S> Bounded for Matrix<N, R, C, S>
+impl<N, R: DimName, C: DimName> Bounded for MatrixMN<N, R, C>
    where N: Scalar + Bounded,
-          S: OwnedStorage<N, R, C>,
-          S::Alloc: OwnedAllocator<N, R, C, S> {
+          DefaultAllocator: Allocator<N, R, C> {
    #[inline]
    fn max_value() -> Self {
        Self::from_element(N::max_value())

@@ -345,9 +377,8 @@ impl<N, R: DimName, C: DimName, S> Bounded for Matrix<N, R, C, S>
    }
}

-impl<N: Scalar + Rand, R: Dim, C: Dim, S> Rand for Matrix<N, R, C, S>
-    where S: OwnedStorage<N, R, C>,
-          S::Alloc: OwnedAllocator<N, R, C, S> {
+impl<N: Scalar + Rand, R: Dim, C: Dim> Rand for MatrixMN<N, R, C>
+    where DefaultAllocator: Allocator<N, R, C> {
    #[inline]
    fn rand<G: Rng>(rng: &mut G) -> Self {
        let nrows = R::try_to_usize().unwrap_or(rng.gen_range(0, 10));

@@ -359,11 +390,11 @@ impl<N: Scalar + Rand, R: Dim, C: Dim, S> Rand for Matrix<N, R, C, S>

#[cfg(feature = "arbitrary")]
-impl<N, R, C, S> Arbitrary for Matrix<N, R, C, S>
+impl<N, R, C> Arbitrary for MatrixMN<N, R, C>
    where R: Dim, C: Dim,
          N: Scalar + Arbitrary + Send,
-          S: OwnedStorage<N, R, C> + Send,
-          S::Alloc: OwnedAllocator<N, R, C, S> {
+          DefaultAllocator: Allocator<N, R, C>,
+          Owned<N, R, C>: Clone + Send {
    #[inline]
    fn arbitrary<G: Gen>(g: &mut G) -> Self {
        let nrows = R::try_to_usize().unwrap_or(g.gen_range(0, 10));

@@ -381,13 +412,12 @@ impl<N, R, C, S> Arbitrary for Matrix<N, R, C, S>
 */
macro_rules! componentwise_constructors_impl(
    ($($R: ty, $C: ty, $($args: ident:($irow: expr,$icol: expr)),*);* $(;)*) => {$(
-        impl<N, S> Matrix<N, $R, $C, S>
+        impl<N> MatrixMN<N, $R, $C>
            where N: Scalar,
-                  S: OwnedStorage<N, $R, $C>,
-                  S::Alloc: OwnedAllocator<N, $R, $C, S> {
+                  DefaultAllocator: Allocator<N, $R, $C> {
            /// Initializes this matrix from its components.
            #[inline]
-            pub fn new($($args: N),*) -> Matrix<N, $R, $C, S> {
+            pub fn new($($args: N),*) -> Self {
                unsafe {
                    let mut res = Self::new_uninitialized();
                    $( *res.get_unchecked_mut($irow, $icol) = $args; )*

@@ -549,16 +579,15 @@ componentwise_constructors_impl!(
 * Axis constructors.
 *
 */
-impl<N, R: DimName, S> ColumnVector<N, R, S>
+impl<N, R: DimName> VectorN<N, R>
    where N: Scalar + Zero + One,
-          S: OwnedStorage<N, R, U1>,
-          S::Alloc: OwnedAllocator<N, R, U1, S> {
+          DefaultAllocator: Allocator<N, R> {
    /// The column vector with a 1 as its first component, and zero elsewhere.
    #[inline]
    pub fn x() -> Self
        where R::Value: Cmp<typenum::U0, Output = Greater> {
-        let mut res = Self::from_element(N::zero());
-        unsafe { *res.get_unchecked_mut(0, 0) = N::one(); }
+        let mut res = Self::zeros();
+        unsafe { *res.vget_unchecked_mut(0) = N::one(); }
|
||||
|
||||
res
|
||||
}
|
||||
|
@ -567,8 +596,8 @@ where N: Scalar + Zero + One,
|
|||
#[inline]
|
||||
pub fn y() -> Self
|
||||
where R::Value: Cmp<typenum::U1, Output = Greater> {
|
||||
let mut res = Self::from_element(N::zero());
|
||||
unsafe { *res.get_unchecked_mut(1, 0) = N::one(); }
|
||||
let mut res = Self::zeros();
|
||||
unsafe { *res.vget_unchecked_mut(1) = N::one(); }
|
||||
|
||||
res
|
||||
}
|
||||
|
@ -577,8 +606,8 @@ where N: Scalar + Zero + One,
|
|||
#[inline]
|
||||
pub fn z() -> Self
|
||||
where R::Value: Cmp<typenum::U2, Output = Greater> {
|
||||
let mut res = Self::from_element(N::zero());
|
||||
unsafe { *res.get_unchecked_mut(2, 0) = N::one(); }
|
||||
let mut res = Self::zeros();
|
||||
unsafe { *res.vget_unchecked_mut(2) = N::one(); }
|
||||
|
||||
res
|
||||
}
|
||||
|
@ -587,8 +616,8 @@ where N: Scalar + Zero + One,
|
|||
#[inline]
|
||||
pub fn w() -> Self
|
||||
where R::Value: Cmp<typenum::U3, Output = Greater> {
|
||||
let mut res = Self::from_element(N::zero());
|
||||
unsafe { *res.get_unchecked_mut(3, 0) = N::one(); }
|
||||
let mut res = Self::zeros();
|
||||
unsafe { *res.vget_unchecked_mut(3) = N::one(); }
|
||||
|
||||
res
|
||||
}
|
||||
|
@ -597,8 +626,8 @@ where N: Scalar + Zero + One,
|
|||
#[inline]
|
||||
pub fn a() -> Self
|
||||
where R::Value: Cmp<typenum::U4, Output = Greater> {
|
||||
let mut res = Self::from_element(N::zero());
|
||||
unsafe { *res.get_unchecked_mut(4, 0) = N::one(); }
|
||||
let mut res = Self::zeros();
|
||||
unsafe { *res.vget_unchecked_mut(4) = N::one(); }
|
||||
|
||||
res
|
||||
}
|
||||
|
@ -607,8 +636,8 @@ where N: Scalar + Zero + One,
|
|||
#[inline]
|
||||
pub fn b() -> Self
|
||||
where R::Value: Cmp<typenum::U5, Output = Greater> {
|
||||
let mut res = Self::from_element(N::zero());
|
||||
unsafe { *res.get_unchecked_mut(5, 0) = N::one(); }
|
||||
let mut res = Self::zeros();
|
||||
unsafe { *res.vget_unchecked_mut(5) = N::one(); }
|
||||
|
||||
res
|
||||
}
|
||||
|
|
|
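The rewritten axis constructors above reduce to "zero the vector, then set one component to one". A minimal plain-Rust sketch of that behavior, using a hypothetical `axis` helper over fixed-size arrays rather than nalgebra's `VectorN`:

```rust
// Hypothetical stand-in for the axis constructors: `axis::<3>(0)` mirrors
// what `Vector3::<f64>::x()` produces.
fn axis<const D: usize>(i: usize) -> [f64; D] {
    let mut v = [0.0; D]; // Self::zeros()
    v[i] = 1.0;           // *res.vget_unchecked_mut(i) = N::one()
    v
}

fn main() {
    assert_eq!(axis::<3>(0), [1.0, 0.0, 0.0]); // Vector3::x()
    assert_eq!(axis::<3>(2), [0.0, 0.0, 1.0]); // Vector3::z()
}
```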
@@ -3,37 +3,35 @@ use std::mem;
 use std::convert::{From, Into, AsRef, AsMut};
 use alga::general::{SubsetOf, SupersetOf};
 
-use core::{Scalar, Matrix};
+use core::{DefaultAllocator, Scalar, Matrix, MatrixMN};
 use core::dimension::{Dim,
                       U1,  U2,  U3,  U4,
                       U5,  U6,  U7,  U8,
                       U9,  U10, U11, U12,
                       U13, U14, U15, U16
 };
-use core::constraint::{ShapeConstraint, SameNumberOfRows, SameNumberOfColumns};
-use core::storage::{Storage, StorageMut, OwnedStorage};
 use core::iter::{MatrixIter, MatrixIterMut};
-use core::allocator::{OwnedAllocator, SameShapeAllocator};
+use core::constraint::{ShapeConstraint, SameNumberOfRows, SameNumberOfColumns};
+use core::storage::{ContiguousStorage, ContiguousStorageMut, Storage, StorageMut};
+use core::allocator::{Allocator, SameShapeAllocator};
 
 
 // FIXME: too bad this won't work for all slice conversions.
-impl<N1, N2, R1, C1, R2, C2, SA, SB> SubsetOf<Matrix<N2, R2, C2, SB>> for Matrix<N1, R1, C1, SA>
+impl<N1, N2, R1, C1, R2, C2> SubsetOf<MatrixMN<N2, R2, C2>> for MatrixMN<N1, R1, C1>
     where R1: Dim, C1: Dim, R2: Dim, C2: Dim,
           N1: Scalar,
           N2: Scalar + SupersetOf<N1>,
-          SA: OwnedStorage<N1, R1, C1>,
-          SB: OwnedStorage<N2, R2, C2>,
-          SB::Alloc: OwnedAllocator<N2, R2, C2, SB>,
-          SA::Alloc: OwnedAllocator<N1, R1, C1, SA> +
-                     SameShapeAllocator<N1, R1, C1, R2, C2, SA>,
+          DefaultAllocator: Allocator<N2, R2, C2> +
+                            Allocator<N1, R1, C1> +
+                            SameShapeAllocator<N1, R1, C1, R2, C2>,
           ShapeConstraint: SameNumberOfRows<R1, R2> + SameNumberOfColumns<C1, C2> {
     #[inline]
-    fn to_superset(&self) -> Matrix<N2, R2, C2, SB> {
+    fn to_superset(&self) -> MatrixMN<N2, R2, C2> {
         let (nrows, ncols) = self.shape();
         let nrows2 = R2::from_usize(nrows);
         let ncols2 = C2::from_usize(ncols);
 
-        let mut res = unsafe { Matrix::<N2, R2, C2, SB>::new_uninitialized_generic(nrows2, ncols2) };
+        let mut res = unsafe { MatrixMN::<N2, R2, C2>::new_uninitialized_generic(nrows2, ncols2) };
         for i in 0 .. nrows {
             for j in 0 .. ncols {
                 unsafe {
@@ -46,12 +44,12 @@ impl<N1, N2, R1, C1, R2, C2, SA, SB> SubsetOf<Matrix<N2, R2, C2, SB>> for Matrix
     }
 
     #[inline]
-    fn is_in_subset(m: &Matrix<N2, R2, C2, SB>) -> bool {
+    fn is_in_subset(m: &MatrixMN<N2, R2, C2>) -> bool {
         m.iter().all(|e| e.is_in_subset())
     }
 
     #[inline]
-    unsafe fn from_superset_unchecked(m: &Matrix<N2, R2, C2, SB>) -> Self {
+    unsafe fn from_superset_unchecked(m: &MatrixMN<N2, R2, C2>) -> Self {
         let (nrows2, ncols2) = m.shape();
         let nrows = R1::from_usize(nrows2);
         let ncols = C1::from_usize(ncols2);
@@ -90,10 +88,9 @@ impl<'a, N: Scalar, R: Dim, C: Dim, S: StorageMut<N, R, C>> IntoIterator for &'a
 
 macro_rules! impl_from_into_asref_1D(
     ($(($NRows: ident, $NCols: ident) => $SZ: expr);* $(;)*) => {$(
-        impl<N, S> From<[N; $SZ]> for Matrix<N, $NRows, $NCols, S>
+        impl<N> From<[N; $SZ]> for MatrixMN<N, $NRows, $NCols>
             where N: Scalar,
-                  S: OwnedStorage<N, $NRows, $NCols>,
-                  S::Alloc: OwnedAllocator<N, $NRows, $NCols, S> {
+                  DefaultAllocator: Allocator<N, $NRows, $NCols> {
             #[inline]
             fn from(arr: [N; $SZ]) -> Self {
                 unsafe {
@@ -107,8 +104,7 @@ macro_rules! impl_from_into_asref_1D(
 
         impl<N, S> Into<[N; $SZ]> for Matrix<N, $NRows, $NCols, S>
             where N: Scalar,
-                  S: OwnedStorage<N, $NRows, $NCols>,
-                  S::Alloc: OwnedAllocator<N, $NRows, $NCols, S> {
+                  S: ContiguousStorage<N, $NRows, $NCols> {
             #[inline]
             fn into(self) -> [N; $SZ] {
                 unsafe {
@@ -122,8 +118,7 @@ macro_rules! impl_from_into_asref_1D(
 
         impl<N, S> AsRef<[N; $SZ]> for Matrix<N, $NRows, $NCols, S>
             where N: Scalar,
-                  S: OwnedStorage<N, $NRows, $NCols>,
-                  S::Alloc: OwnedAllocator<N, $NRows, $NCols, S> {
+                  S: ContiguousStorage<N, $NRows, $NCols> {
             #[inline]
             fn as_ref(&self) -> &[N; $SZ] {
                 unsafe {
@@ -134,8 +129,7 @@ macro_rules! impl_from_into_asref_1D(
 
         impl<N, S> AsMut<[N; $SZ]> for Matrix<N, $NRows, $NCols, S>
             where N: Scalar,
-                  S: OwnedStorage<N, $NRows, $NCols>,
-                  S::Alloc: OwnedAllocator<N, $NRows, $NCols, S> {
+                  S: ContiguousStorageMut<N, $NRows, $NCols> {
             #[inline]
             fn as_mut(&mut self) -> &mut [N; $SZ] {
                 unsafe {
@@ -165,10 +159,8 @@ impl_from_into_asref_1D!(
 
 macro_rules! impl_from_into_asref_2D(
     ($(($NRows: ty, $NCols: ty) => ($SZRows: expr, $SZCols: expr));* $(;)*) => {$(
-        impl<N, S> From<[[N; $SZRows]; $SZCols]> for Matrix<N, $NRows, $NCols, S>
-            where N: Scalar,
-                  S: OwnedStorage<N, $NRows, $NCols>,
-                  S::Alloc: OwnedAllocator<N, $NRows, $NCols, S> {
+        impl<N: Scalar> From<[[N; $SZRows]; $SZCols]> for MatrixMN<N, $NRows, $NCols>
+            where DefaultAllocator: Allocator<N, $NRows, $NCols> {
             #[inline]
             fn from(arr: [[N; $SZRows]; $SZCols]) -> Self {
                 unsafe {
@@ -180,10 +172,8 @@ macro_rules! impl_from_into_asref_2D(
             }
         }
 
-        impl<N, S> Into<[[N; $SZRows]; $SZCols]> for Matrix<N, $NRows, $NCols, S>
-            where N: Scalar,
-                  S: OwnedStorage<N, $NRows, $NCols>,
-                  S::Alloc: OwnedAllocator<N, $NRows, $NCols, S> {
+        impl<N: Scalar, S> Into<[[N; $SZRows]; $SZCols]> for Matrix<N, $NRows, $NCols, S>
+            where S: ContiguousStorage<N, $NRows, $NCols> {
             #[inline]
             fn into(self) -> [[N; $SZRows]; $SZCols] {
                 unsafe {
@@ -195,10 +185,8 @@ macro_rules! impl_from_into_asref_2D(
             }
         }
 
-        impl<N, S> AsRef<[[N; $SZRows]; $SZCols]> for Matrix<N, $NRows, $NCols, S>
-            where N: Scalar,
-                  S: OwnedStorage<N, $NRows, $NCols>,
-                  S::Alloc: OwnedAllocator<N, $NRows, $NCols, S> {
+        impl<N: Scalar, S> AsRef<[[N; $SZRows]; $SZCols]> for Matrix<N, $NRows, $NCols, S>
+            where S: ContiguousStorage<N, $NRows, $NCols> {
             #[inline]
             fn as_ref(&self) -> &[[N; $SZRows]; $SZCols] {
                 unsafe {
@@ -207,10 +195,8 @@ macro_rules! impl_from_into_asref_2D(
             }
         }
 
-        impl<N, S> AsMut<[[N; $SZRows]; $SZCols]> for Matrix<N, $NRows, $NCols, S>
-            where N: Scalar,
-                  S: OwnedStorage<N, $NRows, $NCols>,
-                  S::Alloc: OwnedAllocator<N, $NRows, $NCols, S> {
+        impl<N: Scalar, S> AsMut<[[N; $SZRows]; $SZCols]> for Matrix<N, $NRows, $NCols, S>
+            where S: ContiguousStorageMut<N, $NRows, $NCols> {
             #[inline]
             fn as_mut(&mut self) -> &mut [[N; $SZRows]; $SZCols] {
                 unsafe {
@@ -222,7 +208,7 @@ macro_rules! impl_from_into_asref_2D(
 );
 
 
-// Implement for matrices with shape 2x2 .. 4x4.
+// Implement for matrices with shape 2x2 .. 6x6.
 impl_from_into_asref_2D!(
     (U2, U2) => (2, 2); (U2, U3) => (2, 3); (U2, U4) => (2, 4); (U2, U5) => (2, 5); (U2, U6) => (2, 6);
     (U3, U2) => (3, 2); (U3, U3) => (3, 3); (U3, U4) => (3, 4); (U3, U5) => (3, 5); (U3, U6) => (3, 6);
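The `From<[[N; $SZRows]; $SZCols]>` conversions above rely on nalgebra's column-major storage: the outer array is an array of columns. A small sketch of that flattening for the 2×3 case, with a hypothetical `from_columns` helper:

```rust
// Column-major flattening: element (i, j) lands at index j * nrows + i,
// which is the layout the contiguous-storage conversions above reinterpret.
fn from_columns(cols: [[f64; 2]; 3]) -> [f64; 6] {
    let mut flat = [0.0; 6];
    for (j, col) in cols.iter().enumerate() {
        for (i, &x) in col.iter().enumerate() {
            flat[j * 2 + i] = x;
        }
    }
    flat
}

fn main() {
    let flat = from_columns([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]);
    assert_eq!(flat, [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]);
}
```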
@@ -9,8 +9,7 @@ use std::ops::{Deref, DerefMut};
 
 use core::{Scalar, Matrix};
 use core::dimension::{U1, U2, U3, U4, U5, U6};
-use core::storage::OwnedStorage;
-use core::allocator::OwnedAllocator;
+use core::storage::{ContiguousStorage, ContiguousStorageMut};
 
 /*
  *
@@ -35,22 +34,20 @@ macro_rules! coords_impl(
 macro_rules! deref_impl(
     ($R: ty, $C: ty; $Target: ident) => {
         impl<N: Scalar, S> Deref for Matrix<N, $R, $C, S>
-            where S: OwnedStorage<N, $R, $C>,
-                  S::Alloc: OwnedAllocator<N, $R, $C, S> {
+            where S: ContiguousStorage<N, $R, $C> {
             type Target = $Target<N>;
 
             #[inline]
             fn deref(&self) -> &Self::Target {
-                unsafe { mem::transmute(self) }
+                unsafe { mem::transmute(self.data.ptr()) }
             }
         }
 
        impl<N: Scalar, S> DerefMut for Matrix<N, $R, $C, S>
-            where S: OwnedStorage<N, $R, $C>,
-                  S::Alloc: OwnedAllocator<N, $R, $C, S> {
+            where S: ContiguousStorageMut<N, $R, $C> {
             #[inline]
             fn deref_mut(&mut self) -> &mut Self::Target {
-                unsafe { mem::transmute(self) }
+                unsafe { mem::transmute(self.data.ptr_mut()) }
             }
         }
     }
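The rewritten `Deref` impl transmutes the storage's data pointer into a reference to a coordinate struct with identical layout. A minimal sketch of the idea on a plain contiguous buffer (the `XY` struct here is illustrative, not nalgebra's generated type):

```rust
// A #[repr(C)] struct of two f64 has the same layout as [f64; 2], so the
// pointer reinterpretation below is sound, mirroring the Deref impl above.
#[repr(C)]
struct XY {
    x: f64,
    y: f64,
}

fn main() {
    let v = [3.0f64, 4.0];
    let xy: &XY = unsafe { &*(v.as_ptr() as *const XY) };
    assert_eq!(xy.x, 3.0);
    assert_eq!(xy.y, 4.0);
}
```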
@@ -1,373 +0,0 @@
-use std::cmp;
-
-use alga::general::Real;
-use core::{SquareMatrix, OwnedSquareMatrix, ColumnVector, OwnedColumnVector};
-use dimension::{Dim, Dynamic, U1};
-use storage::{Storage, OwnedStorage};
-use allocator::{Allocator, OwnedAllocator};
-
-
-impl<N, D: Dim, S> SquareMatrix<N, D, S>
-    where N: Real,
-          S: OwnedStorage<N, D, D>,
-          S::Alloc: OwnedAllocator<N, D, D, S> {
-    /// Gets the householder matrix corresponding to a reflection to the hyperplane
-    /// defined by `vector`. It can be a reflection contained in a subspace.
-    ///
-    /// # Arguments
-    /// * `dimension` - the dimension of the space the resulting matrix operates in
-    /// * `start` - the starting dimension of the subspace of the reflection
-    /// * `vector` - the vector defining the reflection.
-    pub fn new_householder_generic<SB, D2>(dimension: D, start: usize, vector: &ColumnVector<N, D2, SB>)
-        -> OwnedSquareMatrix<N, D, S::Alloc>
-        where D2: Dim,
-              SB: Storage<N, D2, U1> {
-        let mut qk = Self::identity_generic(dimension, dimension);
-        let subdim = vector.shape().0;
-
-        let stop = subdim + start;
-
-        assert!(dimension.value() >= stop, "Householder matrix creation: subspace dimension index out of bounds.");
-
-        for j in start .. stop {
-            for i in start .. stop {
-                unsafe {
-                    let vv = *vector.get_unchecked(i - start, 0) * *vector.get_unchecked(j - start, 0);
-                    let qkij = *qk.get_unchecked(i, j);
-                    *qk.get_unchecked_mut(i, j) = qkij - vv - vv;
-                }
-            }
-        }
-
-        qk
-    }
-}
-
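The removed `new_householder_generic` fills `I - 2·v·vᵀ` entry by entry (the `qkij - vv - vv` update). A plain 2×2 sketch of that loop, assuming `v` is already a unit vector:

```rust
// Build the Householder matrix H = I - 2*v*v^T for a unit vector v,
// mirroring the entrywise update in the removed code above.
fn householder_matrix(v: [f64; 2]) -> [[f64; 2]; 2] {
    let mut h = [[1.0, 0.0], [0.0, 1.0]]; // identity
    for i in 0..2 {
        for j in 0..2 {
            let vv = v[i] * v[j];
            h[i][j] -= vv + vv; // qkij - vv - vv
        }
    }
    h
}

fn main() {
    // v = e2: H reflects across the x-axis.
    let h = householder_matrix([0.0, 1.0]);
    assert_eq!(h, [[1.0, 0.0], [0.0, -1.0]]);
}
```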
-impl<N: Real, D: Dim, S: Storage<N, D, D>> SquareMatrix<N, D, S> {
-    /// QR decomposition using Householder reflections.
-    pub fn qr(self) -> (OwnedSquareMatrix<N, D, S::Alloc>, OwnedSquareMatrix<N, D, S::Alloc>)
-        where S::Alloc: Allocator<N, Dynamic, U1> +
-                        Allocator<N, D, U1> {
-
-        let (nrows, ncols) = self.data.shape();
-
-        // XXX: too restrictive.
-        assert!(nrows.value() >= ncols.value(), "");
-
-        let mut q = OwnedSquareMatrix::<N, D, S::Alloc>::identity_generic(nrows, ncols);
-        let mut r = self.into_owned();
-
-        // Temporary buffer that contains a column.
-        let mut col = unsafe {
-            OwnedColumnVector::<N, D, S::Alloc>::new_uninitialized_generic(nrows, U1)
-        };
-
-        for ite in 0 .. cmp::min(nrows.value() - 1, ncols.value()) {
-            let subdim = Dynamic::new(nrows.value() - ite);
-            let mut v = col.rows_mut(0, subdim.value());
-            v.copy_from(&r.generic_slice((ite, ite), (subdim, U1)));
-
-            let alpha =
-                if unsafe { *v.get_unchecked(ite, 0) } >= ::zero() {
-                    -v.norm()
-                }
-                else {
-                    v.norm()
-                };
-
-            unsafe {
-                let x = *v.get_unchecked(0, 0);
-                *v.get_unchecked_mut(0, 0) = x - alpha;
-            }
-
-            if !v.normalize_mut().is_zero() {
-                let mut qk = OwnedSquareMatrix::<N, D, S::Alloc>::new_householder_generic(nrows, ite, &v);
-                r = &qk * r;
-
-                // FIXME: add a method `q.mul_tr(qk) := q * qk.transpose` ?
-                qk.transpose_mut();
-                q = q * qk;
-            }
-        }
-
-        (q, r)
-    }
-
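The removed `qr` picks `alpha = ∓‖v‖` with the sign opposite to the leading entry, which avoids cancellation when forming `v - alpha*e1`. A standalone sketch of one such reflection on a 2-vector:

```rust
// Reflect v onto the x-axis with a Householder step: H*v = (alpha, 0),
// where alpha = -sign(v[0]) * |v|, as in the removed `qr` loop above.
fn householder_reflect(v: [f64; 2]) -> [f64; 2] {
    let norm = (v[0] * v[0] + v[1] * v[1]).sqrt();
    let alpha = if v[0] >= 0.0 { -norm } else { norm };
    // u = normalize(v - alpha*e1); then H*v = v - 2*(u . v)*u.
    let mut u = [v[0] - alpha, v[1]];
    let un = (u[0] * u[0] + u[1] * u[1]).sqrt();
    u[0] /= un;
    u[1] /= un;
    let dot = u[0] * v[0] + u[1] * v[1];
    [v[0] - 2.0 * dot * u[0], v[1] - 2.0 * dot * u[1]]
}

fn main() {
    let r = householder_reflect([3.0, 4.0]);
    assert!((r[0] - (-5.0)).abs() < 1e-12); // alpha = -|v| = -5
    assert!(r[1].abs() < 1e-12);            // second component annihilated
}
```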
-    /// Eigendecomposition of a square symmetric matrix.
-    pub fn eig(&self, eps: N, niter: usize)
-        -> (OwnedSquareMatrix<N, D, S::Alloc>, OwnedColumnVector<N, D, S::Alloc>)
-        where S::Alloc: Allocator<N, D, U1> +
-                        Allocator<N, Dynamic, U1> {
-
-        assert!(self.is_square(),
-                "Unable to compute the eigenvectors and eigenvalues of a non-square matrix.");
-
-        let dim = self.data.shape().0;
-
-        let (mut eigenvectors, mut eigenvalues) = self.hessenberg();
-
-        if dim.value() == 1 {
-            return (eigenvectors, eigenvalues.diagonal());
-        }
-
-        // Allocate arrays for Givens rotation components
-        let mut c = unsafe { OwnedColumnVector::<N, D, S::Alloc>::new_uninitialized_generic(dim, U1) };
-        let mut s = unsafe { OwnedColumnVector::<N, D, S::Alloc>::new_uninitialized_generic(dim, U1) };
-
-        let mut iter = 0;
-        let mut curdim = dim.value() - 1;
-
-        for _ in 0 .. dim.value() {
-            let mut stop = false;
-
-            while !stop && iter < niter {
-                let lambda;
-
-                unsafe {
-                    let a = *eigenvalues.get_unchecked(curdim - 1, curdim - 1);
-                    let b = *eigenvalues.get_unchecked(curdim - 1, curdim);
-                    let c = *eigenvalues.get_unchecked(curdim, curdim - 1);
-                    let d = *eigenvalues.get_unchecked(curdim, curdim);
-
-                    let trace = a + d;
-                    let determinant = a * d - b * c;
-
-                    let constquarter: N = ::convert(0.25f64);
-                    let consthalf: N = ::convert(0.5f64);
-
-                    let e = (constquarter * trace * trace - determinant).sqrt();
-
-                    let lambda1 = consthalf * trace + e;
-                    let lambda2 = consthalf * trace - e;
-
-                    if (lambda1 - d).abs() < (lambda2 - d).abs() {
-                        lambda = lambda1;
-                    }
-                    else {
-                        lambda = lambda2;
-                    }
-                }
-
-                // Shift matrix
-                for k in 0 .. curdim + 1 {
-                    unsafe {
-                        let a = *eigenvalues.get_unchecked(k, k);
-                        *eigenvalues.get_unchecked_mut(k, k) = a - lambda;
-                    }
-                }
-
-                // Givens rotation from left
-                for k in 0 .. curdim {
-                    let x_i = unsafe { *eigenvalues.get_unchecked(k, k) };
-                    let x_j = unsafe { *eigenvalues.get_unchecked(k + 1, k) };
-
-                    let ctmp;
-                    let stmp;
-
-                    if x_j.abs() < eps {
-                        ctmp = N::one();
-                        stmp = N::zero();
-                    }
-                    else if x_i.abs() < eps {
-                        ctmp = N::zero();
-                        stmp = -N::one();
-                    }
-                    else {
-                        let r = x_i.hypot(x_j);
-                        ctmp = x_i / r;
-                        stmp = -x_j / r;
-                    }
-
-                    c[k] = ctmp;
-                    s[k] = stmp;
-
-                    for j in k .. (curdim + 1) {
-                        unsafe {
-                            let a = *eigenvalues.get_unchecked(k, j);
-                            let b = *eigenvalues.get_unchecked(k + 1, j);
-
-                            *eigenvalues.get_unchecked_mut(k, j)     = ctmp * a - stmp * b;
-                            *eigenvalues.get_unchecked_mut(k + 1, j) = stmp * a + ctmp * b;
-                        }
-                    }
-                }
-
-                // Givens rotation from right applied to eigenvalues
-                for k in 0 .. curdim {
-                    for i in 0 .. (k + 2) {
-                        unsafe {
-                            let a = *eigenvalues.get_unchecked(i, k);
-                            let b = *eigenvalues.get_unchecked(i, k + 1);
-
-                            *eigenvalues.get_unchecked_mut(i, k)     = c[k] * a - s[k] * b;
-                            *eigenvalues.get_unchecked_mut(i, k + 1) = s[k] * a + c[k] * b;
-                        }
-                    }
-                }
-
-                // Shift back
-                for k in 0 .. curdim + 1 {
-                    unsafe {
-                        let a = *eigenvalues.get_unchecked(k, k);
-                        *eigenvalues.get_unchecked_mut(k, k) = a + lambda;
-                    }
-                }
-
-                // Givens rotation from right applied to eigenvectors
-                for k in 0 .. curdim {
-                    for i in 0 .. dim.value() {
-                        unsafe {
-                            let a = *eigenvectors.get_unchecked(i, k);
-                            let b = *eigenvectors.get_unchecked(i, k + 1);
-
-                            *eigenvectors.get_unchecked_mut(i, k)     = c[k] * a - s[k] * b;
-                            *eigenvectors.get_unchecked_mut(i, k + 1) = s[k] * a + c[k] * b;
-                        }
-                    }
-                }
-
-                iter = iter + 1;
-                stop = true;
-
-                for j in 0 .. curdim {
-                    // Check last row.
-                    if unsafe { *eigenvalues.get_unchecked(curdim, j) }.abs() >= eps {
-                        stop = false;
-                        break;
-                    }
-
-                    // Check last column.
-                    if unsafe { *eigenvalues.get_unchecked(j, curdim) }.abs() >= eps {
-                        stop = false;
-                        break;
-                    }
-                }
-            }
-
-            if stop {
-                if curdim > 1 {
-                    curdim = curdim - 1;
-                }
-                else {
-                    break;
-                }
-            }
-        }
-
-        (eigenvectors, eigenvalues.diagonal())
-    }
-
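The Givens coefficients in the removed `eig` are `c = x_i / r`, `s = -x_j / r` with `r = hypot(x_i, x_j)`, chosen so that the rotation annihilates the subdiagonal entry. A standalone sketch:

```rust
// Compute the Givens pair (c, s) exactly as the removed `eig` does.
fn givens(x_i: f64, x_j: f64) -> (f64, f64) {
    let r = x_i.hypot(x_j);
    (x_i / r, -x_j / r)
}

fn main() {
    let (c, s) = givens(3.0, 4.0);
    // Applying the rotation to (x_i, x_j) yields (r, 0):
    let r = c * 3.0 - s * 4.0;
    let z = s * 3.0 + c * 4.0;
    assert!((r - 5.0).abs() < 1e-12);
    assert!(z.abs() < 1e-12);
}
```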
-    /// Cholesky decomposition G of a square symmetric positive definite matrix A, such that A = G * G^T.
-    ///
-    /// Returns `Err` if `self` is not symmetric or not positive definite.
-    #[inline]
-    pub fn cholesky(&self) -> Result<OwnedSquareMatrix<N, D, S::Alloc>, &'static str> {
-        let out = self.transpose();
-
-        if !out.relative_eq(self, N::default_epsilon(), N::default_max_relative()) {
-            return Err("Cholesky: Input matrix is not symmetric");
-        }
-
-        self.do_cholesky(out)
-    }
-
-    /// Cholesky decomposition G of a square symmetric positive definite matrix A, such that A = G * G^T.
-    /// The symmetry of the input is not checked.
-    #[inline]
-    pub fn cholesky_unchecked(&self) -> Result<OwnedSquareMatrix<N, D, S::Alloc>, &'static str> {
-        let out = self.transpose();
-        self.do_cholesky(out)
-    }
-
-    #[inline(always)]
-    fn do_cholesky(&self, mut out: OwnedSquareMatrix<N, D, S::Alloc>)
-        -> Result<OwnedSquareMatrix<N, D, S::Alloc>, &'static str> {
-        assert!(self.is_square(), "The input matrix must be square.");
-
-        for i in 0 .. out.nrows() {
-            for j in 0 .. (i + 1) {
-                let mut sum = out[(i, j)];
-
-                for k in 0 .. j {
-                    sum = sum - out[(i, k)] * out[(j, k)];
-                }
-
-                if i > j {
-                    out[(i, j)] = sum / out[(j, j)];
-                }
-                else if sum > N::zero() {
-                    out[(i, i)] = sum.sqrt();
-                }
-                else {
-                    return Err("Cholesky: Input matrix is not positive definite to machine precision.");
-                }
-            }
-        }
-
-        for i in 0 .. out.nrows() {
-            for j in i + 1 .. out.ncols() {
-                out[(i, j)] = N::zero();
-            }
-        }
-
-        Ok(out)
-    }
-
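The removed `do_cholesky` is the classic in-place Cholesky–Banachiewicz recurrence. A plain 3×3 sketch of the same loop (a hypothetical `cholesky` helper producing the lower-triangular factor, panicking on NaN rather than returning `Err` for a non-positive-definite input):

```rust
// Lower-triangular G with A = G * G^T, following the removed loop above:
// off-diagonal entries divide by the diagonal, diagonal entries take sqrt.
fn cholesky(a: [[f64; 3]; 3]) -> [[f64; 3]; 3] {
    let mut g = [[0.0; 3]; 3];
    for i in 0..3 {
        for j in 0..=i {
            let mut sum = a[i][j];
            for k in 0..j {
                sum -= g[i][k] * g[j][k];
            }
            if i > j {
                g[i][j] = sum / g[j][j];
            } else {
                g[i][i] = sum.sqrt(); // NaN here means "not positive definite"
            }
        }
    }
    g
}

fn main() {
    let a = [[4.0, 2.0, 0.0], [2.0, 5.0, 2.0], [0.0, 2.0, 5.0]];
    let g = cholesky(a);
    // Verify A == G * G^T.
    for i in 0..3 {
        for j in 0..3 {
            let mut s = 0.0;
            for k in 0..3 {
                s += g[i][k] * g[j][k];
            }
            assert!((s - a[i][j]).abs() < 1e-12);
        }
    }
}
```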
-    /// Hessenberg decomposition: returns the matrix `self` in Hessenberg form and the
-    /// corresponding similarity transformation.
-    ///
-    /// # Returns
-    /// The tuple (`q`, `h`) such that `q * h * q^T = self`.
-    pub fn hessenberg(&self) -> (OwnedSquareMatrix<N, D, S::Alloc>, OwnedSquareMatrix<N, D, S::Alloc>)
-        where S::Alloc: Allocator<N, D, U1> + Allocator<N, Dynamic, U1> {
-
-        let (nrows, ncols) = self.data.shape();
-        let mut h = self.clone_owned();
-
-        let mut q = OwnedSquareMatrix::<N, D, S::Alloc>::identity_generic(nrows, ncols);
-
-        if ncols.value() <= 2 {
-            return (q, h);
-        }
-
-        // Temporary buffer that contains a column.
-        let mut col = unsafe {
-            OwnedColumnVector::<N, D, S::Alloc>::new_uninitialized_generic(nrows, U1)
-        };
-
-        for ite in 0 .. (ncols.value() - 2) {
-            let subdim = Dynamic::new(nrows.value() - (ite + 1));
-            let mut v = col.rows_mut(0, subdim.value());
-            v.copy_from(&h.generic_slice((ite + 1, ite), (subdim, U1)));
-
-            let alpha = v.norm();
-
-            unsafe {
-                let x = *v.get_unchecked(0, 0);
-                *v.get_unchecked_mut(0, 0) = x - alpha;
-            }
-
-            if !v.normalize_mut().is_zero() {
-                // XXX: we output the householder matrix to a pre-allocated matrix instead of
-                // returning a value to `p`. This would avoid allocation at each iteration.
-                let p = OwnedSquareMatrix::<N, D, S::Alloc>::new_householder_generic(nrows, ite + 1, &v);
-
-                q = q * &p;
-                h = &p * h * p;
-            }
-        }
-
-        (q, h)
-    }
-}
@@ -4,6 +4,8 @@
 //! heap-allocated buffers for matrices with at least one dimension unknown at compile-time.
 
 use std::mem;
+use std::ptr;
+use std::cmp;
 use std::ops::Mul;
 
 use typenum::Prod;
@@ -11,7 +13,8 @@ use generic_array::ArrayLength;
 
 use core::Scalar;
 use core::dimension::{Dim, DimName, Dynamic};
-use core::allocator::Allocator;
+use core::allocator::{Allocator, Reallocator};
+use core::storage::{Storage, StorageMut};
 use core::matrix_array::MatrixArray;
 use core::matrix_vec::MatrixVec;
@@ -107,3 +110,110 @@ impl<N: Scalar, R: DimName> Allocator<N, R, Dynamic> for DefaultAllocator {
         MatrixVec::new(nrows, ncols, res)
     }
 }
+
+/*
+ *
+ * Reallocator.
+ *
+ */
+// Anything -> Static × Static
+impl<N: Scalar, RFrom, CFrom, RTo, CTo> Reallocator<N, RFrom, CFrom, RTo, CTo> for DefaultAllocator
+    where RFrom: Dim,
+          CFrom: Dim,
+          RTo: DimName,
+          CTo: DimName,
+          Self: Allocator<N, RFrom, CFrom>,
+          RTo::Value: Mul<CTo::Value>,
+          Prod<RTo::Value, CTo::Value>: ArrayLength<N> {
+
+    #[inline]
+    unsafe fn reallocate_copy(rto: RTo, cto: CTo, buf: <Self as Allocator<N, RFrom, CFrom>>::Buffer) -> MatrixArray<N, RTo, CTo> {
+        let mut res = <Self as Allocator<N, RTo, CTo>>::allocate_uninitialized(rto, cto);
+
+        let (rfrom, cfrom) = buf.shape();
+
+        let len_from = rfrom.value() * cfrom.value();
+        let len_to   = rto.value() * cto.value();
+        ptr::copy_nonoverlapping(buf.ptr(), res.ptr_mut(), cmp::min(len_from, len_to));
+
+        res
+    }
+}
+
+
+// Static × Static -> Dynamic × Any
+impl<N: Scalar, RFrom, CFrom, CTo> Reallocator<N, RFrom, CFrom, Dynamic, CTo> for DefaultAllocator
+    where RFrom: DimName,
+          CFrom: DimName,
+          CTo: Dim,
+          RFrom::Value: Mul<CFrom::Value>,
+          Prod<RFrom::Value, CFrom::Value>: ArrayLength<N> {
+
+    #[inline]
+    unsafe fn reallocate_copy(rto: Dynamic, cto: CTo, buf: MatrixArray<N, RFrom, CFrom>) -> MatrixVec<N, Dynamic, CTo> {
+        let mut res = <Self as Allocator<N, Dynamic, CTo>>::allocate_uninitialized(rto, cto);
+
+        let (rfrom, cfrom) = buf.shape();
+
+        let len_from = rfrom.value() * cfrom.value();
+        let len_to   = rto.value() * cto.value();
+        ptr::copy_nonoverlapping(buf.ptr(), res.ptr_mut(), cmp::min(len_from, len_to));
+
+        res
+    }
+}
+
+// Static × Static -> Static × Dynamic
+impl<N: Scalar, RFrom, CFrom, RTo> Reallocator<N, RFrom, CFrom, RTo, Dynamic> for DefaultAllocator
+    where RFrom: DimName,
+          CFrom: DimName,
+          RTo: DimName,
+          RFrom::Value: Mul<CFrom::Value>,
+          Prod<RFrom::Value, CFrom::Value>: ArrayLength<N> {
+
+    #[inline]
+    unsafe fn reallocate_copy(rto: RTo, cto: Dynamic, buf: MatrixArray<N, RFrom, CFrom>) -> MatrixVec<N, RTo, Dynamic> {
+        let mut res = <Self as Allocator<N, RTo, Dynamic>>::allocate_uninitialized(rto, cto);
+
+        let (rfrom, cfrom) = buf.shape();
+
+        let len_from = rfrom.value() * cfrom.value();
+        let len_to   = rto.value() * cto.value();
+        ptr::copy_nonoverlapping(buf.ptr(), res.ptr_mut(), cmp::min(len_from, len_to));
+
+        res
+    }
+}
+
+// All conversions from a dynamic buffer to a dynamic buffer.
+impl<N: Scalar, CFrom: Dim, CTo: Dim> Reallocator<N, Dynamic, CFrom, Dynamic, CTo> for DefaultAllocator {
+    #[inline]
+    unsafe fn reallocate_copy(rto: Dynamic, cto: CTo, buf: MatrixVec<N, Dynamic, CFrom>) -> MatrixVec<N, Dynamic, CTo> {
+        let new_buf = buf.resize(rto.value() * cto.value());
+        MatrixVec::new(rto, cto, new_buf)
+    }
+}
+
+impl<N: Scalar, CFrom: Dim, RTo: DimName> Reallocator<N, Dynamic, CFrom, RTo, Dynamic> for DefaultAllocator {
+    #[inline]
+    unsafe fn reallocate_copy(rto: RTo, cto: Dynamic, buf: MatrixVec<N, Dynamic, CFrom>) -> MatrixVec<N, RTo, Dynamic> {
+        let new_buf = buf.resize(rto.value() * cto.value());
+        MatrixVec::new(rto, cto, new_buf)
+    }
+}
+
+impl<N: Scalar, RFrom: DimName, CTo: Dim> Reallocator<N, RFrom, Dynamic, Dynamic, CTo> for DefaultAllocator {
+    #[inline]
+    unsafe fn reallocate_copy(rto: Dynamic, cto: CTo, buf: MatrixVec<N, RFrom, Dynamic>) -> MatrixVec<N, Dynamic, CTo> {
+        let new_buf = buf.resize(rto.value() * cto.value());
+        MatrixVec::new(rto, cto, new_buf)
+    }
+}
+
+impl<N: Scalar, RFrom: DimName, RTo: DimName> Reallocator<N, RFrom, Dynamic, RTo, Dynamic> for DefaultAllocator {
+    #[inline]
+    unsafe fn reallocate_copy(rto: RTo, cto: Dynamic, buf: MatrixVec<N, RFrom, Dynamic>) -> MatrixVec<N, RTo, Dynamic> {
+        let new_buf = buf.resize(rto.value() * cto.value());
+        MatrixVec::new(rto, cto, new_buf)
+    }
+}
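The `reallocate_copy` impls above all share one contract: the new buffer receives the first `min(len_from, len_to)` elements of the old one, in memory order, and any remaining slots are left uninitialized. A safe sketch of that contract over slices (the sketch zero-fills the tail instead of leaving it uninitialized):

```rust
// Mirror of the reallocate_copy contract: copy min(old_len, new_len)
// elements, then (here) zero-fill the rest for safety.
fn reallocate_copy(old: &[i32], new_len: usize) -> Vec<i32> {
    let n = old.len().min(new_len);
    let mut res = vec![0; new_len]; // stand-in for allocate_uninitialized
    res[..n].copy_from_slice(&old[..n]);
    res
}

fn main() {
    assert_eq!(reallocate_copy(&[1, 2, 3, 4], 2), vec![1, 2]);       // shrink
    assert_eq!(reallocate_copy(&[1, 2], 4), vec![1, 2, 0, 0]);       // grow
}
```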
@ -3,9 +3,11 @@
|
|||
//! Traits and tags for identifying the dimension of all algebraic entities.
|
||||
|
||||
use std::fmt::Debug;
|
||||
use std::any::Any;
|
||||
use std::any::{TypeId, Any};
|
||||
use std::cmp;
|
||||
use std::ops::{Add, Sub, Mul, Div};
|
||||
use typenum::{self, Unsigned, UInt, B1, Bit, UTerm, Sum, Prod, Diff, Quot};
|
||||
use typenum::{self, Unsigned, UInt, B1, Bit, UTerm, Sum, Prod, Diff, Quot,
|
||||
Min, Minimum, Max, Maximum};
|
||||
|
||||
#[cfg(feature = "serde-serialize")]
|
||||
use serde::{Serialize, Serializer, Deserialize, Deserializer};
|
||||
|
@ -55,6 +57,11 @@ impl IsNotStaticOne for Dynamic { }
|
|||
/// Trait implemented by any type that can be used as a dimension. This includes type-level
|
||||
/// integers and `Dynamic` (for dimensions not known at compile-time).
|
||||
pub trait Dim: Any + Debug + Copy + PartialEq + Send {
|
||||
#[inline(always)]
|
||||
fn is<D: Dim>() -> bool {
|
||||
TypeId::of::<Self>() == TypeId::of::<D>()
|
||||
}
|
||||
|
||||
/// Gets the compile-time value of `Self`. Returns `None` if it is not known, i.e., if `Self =
|
||||
/// Dynamic`.
|
||||
fn try_to_usize() -> Option<usize>;
|
||||
|
@ -85,6 +92,24 @@ impl Dim for Dynamic {
|
|||
}
|
||||
}
|
||||
|
||||
impl Add<usize> for Dynamic {
|
||||
type Output = Dynamic;
|
||||
|
||||
#[inline]
|
||||
fn add(self, rhs: usize) -> Dynamic {
|
||||
Dynamic::new(self.value + rhs)
|
||||
}
|
||||
}
|
||||
|
||||
impl Sub<usize> for Dynamic {
|
||||
type Output = Dynamic;
|
||||
|
||||
#[inline]
|
||||
fn sub(self, rhs: usize) -> Dynamic {
|
||||
Dynamic::new(self.value - rhs)
|
||||
}
|
||||
}
|
||||
|
||||
/*
|
||||
*
|
||||
* Operations.
|
||||
|
@ -93,7 +118,7 @@ impl Dim for Dynamic {
|
|||
|
||||
macro_rules! dim_ops(
|
||||
($($DimOp: ident, $DimNameOp: ident,
|
||||
$Op: ident, $op: ident,
|
||||
$Op: ident, $op: ident, $op_path: path,
|
||||
$DimResOp: ident, $DimNameResOp: ident,
|
||||
$ResOp: ident);* $(;)*) => {$(
|
||||
pub type $DimResOp<D1, D2> = <D1 as $DimOp<D2>>::Output;
|
||||
|
@ -120,7 +145,7 @@ macro_rules! dim_ops(
|
|||
|
||||
#[inline]
|
||||
fn $op(self, other: D) -> Dynamic {
|
||||
Dynamic::new(self.value.$op(other.value()))
|
||||
Dynamic::new($op_path(self.value, other.value()))
|
||||
}
|
||||
}
|
||||
|
||||
|
@ -129,7 +154,7 @@ macro_rules! dim_ops(
|
|||
|
||||
#[inline]
|
||||
fn $op(self, other: Dynamic) -> Dynamic {
|
||||
Dynamic::new(self.value().$op(other.value))
|
||||
Dynamic::new($op_path(self.value(), other.value))
|
||||
}
|
||||
}
|
||||
|
||||
|
@@ -155,10 +180,12 @@ macro_rules! dim_ops(
 );
 
 dim_ops!(
-    DimAdd, DimNameAdd, Add, add, DimSum, DimNameSum, Sum;
-    DimMul, DimNameMul, Mul, mul, DimProd, DimNameProd, Prod;
-    DimSub, DimNameSub, Sub, sub, DimDiff, DimNameDiff, Diff;
-    DimDiv, DimNameDiv, Div, div, DimQuot, DimNameQuot, Quot;
+    DimAdd, DimNameAdd, Add, add, Add::add, DimSum, DimNameSum, Sum;
+    DimMul, DimNameMul, Mul, mul, Mul::mul, DimProd, DimNameProd, Prod;
+    DimSub, DimNameSub, Sub, sub, Sub::sub, DimDiff, DimNameDiff, Diff;
+    DimDiv, DimNameDiv, Div, div, Div::div, DimQuot, DimNameQuot, Quot;
+    DimMin, DimNameMin, Min, min, cmp::min, DimMinimum, DimNameMinimum, Minimum;
+    DimMax, DimNameMax, Max, max, cmp::max, DimMaximum, DimNameMaximum, Maximum;
 );
@@ -0,0 +1,565 @@
use num::{Zero, One};
use std::cmp;
use std::ptr;

use core::{DefaultAllocator, Scalar, Matrix, DMatrix, MatrixMN, Vector, RowVector};
use core::dimension::{Dim, DimName, DimSub, DimDiff, DimAdd, DimSum, DimMin, DimMinimum, U1, Dynamic};
use core::constraint::{ShapeConstraint, DimEq, SameNumberOfColumns, SameNumberOfRows};
use core::allocator::{Allocator, Reallocator};
use core::storage::{Storage, StorageMut};

impl<N: Scalar + Zero, R: Dim, C: Dim, S: Storage<N, R, C>> Matrix<N, R, C, S> {
    /// Extracts the upper triangular part of this matrix (including the diagonal).
    #[inline]
    pub fn upper_triangle(&self) -> MatrixMN<N, R, C>
        where DefaultAllocator: Allocator<N, R, C> {
        let mut res = self.clone_owned();
        res.fill_lower_triangle(N::zero(), 1);

        res
    }

    /// Extracts the lower triangular part of this matrix (including the diagonal).
    #[inline]
    pub fn lower_triangle(&self) -> MatrixMN<N, R, C>
        where DefaultAllocator: Allocator<N, R, C> {
        let mut res = self.clone_owned();
        res.fill_upper_triangle(N::zero(), 1);

        res
    }
}
impl<N: Scalar, R: Dim, C: Dim, S: StorageMut<N, R, C>> Matrix<N, R, C, S> {
    /// Sets all the elements of this matrix to `val`.
    #[inline]
    pub fn fill(&mut self, val: N) {
        for e in self.iter_mut() {
            *e = val
        }
    }

    /// Fills `self` with the identity matrix.
    #[inline]
    pub fn fill_with_identity(&mut self)
        where N: Zero + One {
        self.fill(N::zero());
        self.fill_diagonal(N::one());
    }

    /// Sets all the diagonal elements of this matrix to `val`.
    #[inline]
    pub fn fill_diagonal(&mut self, val: N) {
        let (nrows, ncols) = self.shape();
        let n = cmp::min(nrows, ncols);

        for i in 0 .. n {
            unsafe { *self.get_unchecked_mut(i, i) = val }
        }
    }

    /// Sets all the elements of the selected row to `val`.
    #[inline]
    pub fn fill_row(&mut self, i: usize, val: N) {
        assert!(i < self.nrows(), "Row index out of bounds.");
        for j in 0 .. self.ncols() {
            unsafe { *self.get_unchecked_mut(i, j) = val }
        }
    }

    /// Sets all the elements of the selected column to `val`.
    #[inline]
    pub fn fill_column(&mut self, j: usize, val: N) {
        assert!(j < self.ncols(), "Column index out of bounds.");
        for i in 0 .. self.nrows() {
            unsafe { *self.get_unchecked_mut(i, j) = val }
        }
    }

    /// Fills the diagonal of this matrix with the content of the given vector.
    #[inline]
    pub fn set_diagonal<R2: Dim, S2>(&mut self, diag: &Vector<N, R2, S2>)
        where R: DimMin<C>,
              S2: Storage<N, R2>,
              ShapeConstraint: DimEq<DimMinimum<R, C>, R2> {
        let (nrows, ncols) = self.shape();
        let min_nrows_ncols = cmp::min(nrows, ncols);
        assert_eq!(diag.len(), min_nrows_ncols, "Mismatched dimensions.");

        for i in 0 .. min_nrows_ncols {
            unsafe { *self.get_unchecked_mut(i, i) = *diag.vget_unchecked(i) }
        }
    }

    /// Fills the selected row of this matrix with the content of the given vector.
    #[inline]
    pub fn set_row<C2: Dim, S2>(&mut self, i: usize, row: &RowVector<N, C2, S2>)
        where S2: Storage<N, U1, C2>,
              ShapeConstraint: SameNumberOfColumns<C, C2> {
        self.row_mut(i).copy_from(row);
    }

    /// Fills the selected column of this matrix with the content of the given vector.
    #[inline]
    pub fn set_column<R2: Dim, S2>(&mut self, i: usize, column: &Vector<N, R2, S2>)
        where S2: Storage<N, R2, U1>,
              ShapeConstraint: SameNumberOfRows<R, R2> {
        self.column_mut(i).copy_from(column);
    }

    /// Sets all the elements of the lower-triangular part of this matrix to `val`.
    ///
    /// The parameter `shift` allows some subdiagonals to be left untouched:
    /// * If `shift = 0` then the diagonal is overwritten as well.
    /// * If `shift = 1` then the diagonal is left untouched.
    /// * If `shift > 1`, then the diagonal and the first `shift - 1` subdiagonals are left
    ///   untouched.
    #[inline]
    pub fn fill_lower_triangle(&mut self, val: N, shift: usize) {
        for j in 0 .. self.ncols() {
            for i in (j + shift) .. self.nrows() {
                unsafe { *self.get_unchecked_mut(i, j) = val }
            }
        }
    }

    /// Sets all the elements of the upper-triangular part of this matrix to `val`.
    ///
    /// The parameter `shift` allows some superdiagonals to be left untouched:
    /// * If `shift = 0` then the diagonal is overwritten as well.
    /// * If `shift = 1` then the diagonal is left untouched.
    /// * If `shift > 1`, then the diagonal and the first `shift - 1` superdiagonals are left
    ///   untouched.
    #[inline]
    pub fn fill_upper_triangle(&mut self, val: N, shift: usize) {
        for j in shift .. self.ncols() {
            // FIXME: is there a more efficient way to avoid the min?
            // (necessary for rectangular matrices)
            for i in 0 .. cmp::min(j + 1 - shift, self.nrows()) {
                unsafe { *self.get_unchecked_mut(i, j) = val }
            }
        }
    }

    /// Swaps two rows in-place.
    #[inline]
    pub fn swap_rows(&mut self, irow1: usize, irow2: usize) {
        assert!(irow1 < self.nrows() && irow2 < self.nrows());

        if irow1 != irow2 {
            // FIXME: optimize that.
            for i in 0 .. self.ncols() {
                unsafe { self.swap_unchecked((irow1, i), (irow2, i)) }
            }
        }
        // Otherwise do nothing.
    }

    /// Swaps two columns in-place.
    #[inline]
    pub fn swap_columns(&mut self, icol1: usize, icol2: usize) {
        assert!(icol1 < self.ncols() && icol2 < self.ncols());

        if icol1 != icol2 {
            // FIXME: optimize that.
            for i in 0 .. self.nrows() {
                unsafe { self.swap_unchecked((i, icol1), (i, icol2)) }
            }
        }
        // Otherwise do nothing.
    }
}
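The `shift` semantics of `fill_lower_triangle` can be checked with a plain-Rust sketch on a row-of-rows matrix; the function below is an illustrative re-implementation of the same loop, not the nalgebra method itself:

```rust
// Plain-Rust sketch of `fill_lower_triangle` on a matrix stored as rows.
fn fill_lower_triangle(m: &mut Vec<Vec<i32>>, val: i32, shift: usize) {
    let nrows = m.len();
    let ncols = if nrows > 0 { m[0].len() } else { 0 };
    for j in 0..ncols {
        // In column `j`, rows `j + shift` and below are overwritten, so
        // `shift = 0` touches the diagonal and `shift = 1` spares it.
        for i in (j + shift)..nrows {
            m[i][j] = val;
        }
    }
}

fn main() {
    let mut m = vec![vec![1; 3]; 3];
    fill_lower_triangle(&mut m, 0, 1); // shift = 1: diagonal left untouched
    assert_eq!(m, vec![vec![1, 1, 1],
                       vec![0, 1, 1],
                       vec![0, 0, 1]]);
}
```

This is exactly why `upper_triangle()` above calls `fill_lower_triangle(N::zero(), 1)`: the shift of 1 zeroes everything strictly below the diagonal while keeping the diagonal itself.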
impl<N: Scalar, D: Dim, S: StorageMut<N, D, D>> Matrix<N, D, D, S> {
    /// Copies the upper-triangle of this matrix to its lower-triangular part.
    ///
    /// This makes the matrix symmetric. Panics if the matrix is not square.
    pub fn fill_lower_triangle_with_upper_triangle(&mut self) {
        assert!(self.is_square(), "The input matrix should be square.");

        let dim = self.nrows();
        for j in 0 .. dim {
            for i in j + 1 .. dim {
                unsafe {
                    *self.get_unchecked_mut(i, j) = *self.get_unchecked(j, i);
                }
            }
        }
    }

    /// Copies the lower-triangle of this matrix to its upper-triangular part.
    ///
    /// This makes the matrix symmetric. Panics if the matrix is not square.
    pub fn fill_upper_triangle_with_lower_triangle(&mut self) {
        assert!(self.is_square(), "The input matrix should be square.");

        for j in 1 .. self.ncols() {
            for i in 0 .. j {
                unsafe {
                    *self.get_unchecked_mut(i, j) = *self.get_unchecked(j, i);
                }
            }
        }
    }
}
/*
 *
 * FIXME: specialize all the following for slices.
 *
 */
impl<N: Scalar, R: Dim, C: Dim, S: Storage<N, R, C>> Matrix<N, R, C, S> {
    /*
     *
     * Column removal.
     *
     */
    /// Removes the `i`-th column from this matrix.
    #[inline]
    pub fn remove_column(self, i: usize) -> MatrixMN<N, R, DimDiff<C, U1>>
        where C: DimSub<U1>,
              DefaultAllocator: Reallocator<N, R, C, R, DimDiff<C, U1>> {
        self.remove_fixed_columns::<U1>(i)
    }

    /// Removes `D::dim()` consecutive columns from this matrix, starting with the `i`-th
    /// (included).
    #[inline]
    pub fn remove_fixed_columns<D>(self, i: usize) -> MatrixMN<N, R, DimDiff<C, D>>
        where D: DimName,
              C: DimSub<D>,
              DefaultAllocator: Reallocator<N, R, C, R, DimDiff<C, D>> {

        self.remove_columns_generic(i, D::name())
    }

    /// Removes `n` consecutive columns from this matrix, starting with the `i`-th (included).
    #[inline]
    pub fn remove_columns(self, i: usize, n: usize) -> MatrixMN<N, R, Dynamic>
        where C: DimSub<Dynamic, Output = Dynamic>,
              DefaultAllocator: Reallocator<N, R, C, R, Dynamic> {

        self.remove_columns_generic(i, Dynamic::new(n))
    }

    /// Removes `nremove.value()` columns from this matrix, starting with the `i`-th (included).
    ///
    /// This is the generic implementation of `.remove_columns(...)` and
    /// `.remove_fixed_columns(...)` which have nicer API interfaces.
    #[inline]
    pub fn remove_columns_generic<D>(self, i: usize, nremove: D) -> MatrixMN<N, R, DimDiff<C, D>>
        where D: Dim,
              C: DimSub<D>,
              DefaultAllocator: Reallocator<N, R, C, R, DimDiff<C, D>> {

        let mut m = self.into_owned();
        let (nrows, ncols) = m.data.shape();
        assert!(i + nremove.value() <= ncols.value(), "Column index out of range.");

        if nremove.value() != 0 && i + nremove.value() < ncols.value() {
            // The first `i * nrows` values are left untouched.
            let copied_value_start = i + nremove.value();

            unsafe {
                let ptr_in = m.data.ptr().offset((copied_value_start * nrows.value()) as isize);
                let ptr_out = m.data.ptr_mut().offset((i * nrows.value()) as isize);

                ptr::copy(ptr_in, ptr_out, (ncols.value() - copied_value_start) * nrows.value());
            }
        }

        unsafe {
            Matrix::from_data(DefaultAllocator::reallocate_copy(nrows, ncols.sub(nremove), m.data))
        }
    }
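Because the storage is column-major, removing columns boils down to one overlapping copy that slides the trailing columns over the removed ones, followed by a shrinking reallocation. A safe sketch of that same move using `copy_within` (illustrative names on a plain `Vec`, not the nalgebra API):

```rust
// Column-major sketch of `remove_columns_generic`: slide the kept trailing
// columns left, then truncate the buffer.
fn remove_columns(data: &mut Vec<f64>, nrows: usize, ncols: usize, i: usize, nremove: usize) {
    assert!(i + nremove <= ncols, "Column index out of range.");
    if nremove != 0 && i + nremove < ncols {
        let src = (i + nremove) * nrows; // first value of the first kept trailing column
        let dst = i * nrows;
        let len = (ncols - i - nremove) * nrows;
        data.copy_within(src..src + len, dst); // memmove-style overlapping copy
    }
    data.truncate((ncols - nremove) * nrows);
}

fn main() {
    // 2x4 matrix, column-major: columns [1,2], [3,4], [5,6], [7,8].
    let mut data = vec![1., 2., 3., 4., 5., 6., 7., 8.];
    remove_columns(&mut data, 2, 4, 1, 2); // drop columns 1 and 2
    assert_eq!(data, vec![1., 2., 7., 8.]);
}
```

`copy_within` has the same overlapping-copy semantics as the `ptr::copy` call in the real implementation.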
    /*
     *
     * Row removal.
     *
     */
    /// Removes the `i`-th row from this matrix.
    #[inline]
    pub fn remove_row(self, i: usize) -> MatrixMN<N, DimDiff<R, U1>, C>
        where R: DimSub<U1>,
              DefaultAllocator: Reallocator<N, R, C, DimDiff<R, U1>, C> {
        self.remove_fixed_rows::<U1>(i)
    }

    /// Removes `D::dim()` consecutive rows from this matrix, starting with the `i`-th (included).
    #[inline]
    pub fn remove_fixed_rows<D>(self, i: usize) -> MatrixMN<N, DimDiff<R, D>, C>
        where D: DimName,
              R: DimSub<D>,
              DefaultAllocator: Reallocator<N, R, C, DimDiff<R, D>, C> {

        self.remove_rows_generic(i, D::name())
    }

    /// Removes `n` consecutive rows from this matrix, starting with the `i`-th (included).
    #[inline]
    pub fn remove_rows(self, i: usize, n: usize) -> MatrixMN<N, Dynamic, C>
        where R: DimSub<Dynamic, Output = Dynamic>,
              DefaultAllocator: Reallocator<N, R, C, Dynamic, C> {

        self.remove_rows_generic(i, Dynamic::new(n))
    }

    /// Removes `nremove.value()` rows from this matrix, starting with the `i`-th (included).
    ///
    /// This is the generic implementation of `.remove_rows(...)` and `.remove_fixed_rows(...)`
    /// which have nicer API interfaces.
    #[inline]
    pub fn remove_rows_generic<D>(self, i: usize, nremove: D) -> MatrixMN<N, DimDiff<R, D>, C>
        where D: Dim,
              R: DimSub<D>,
              DefaultAllocator: Reallocator<N, R, C, DimDiff<R, D>, C> {
        let mut m = self.into_owned();
        let (nrows, ncols) = m.data.shape();
        assert!(i + nremove.value() <= nrows.value(), "Row index out of range.");

        if nremove.value() != 0 {
            unsafe {
                compress_rows(&mut m.data.as_mut_slice(), nrows.value(), ncols.value(), i, nremove.value());
            }
        }

        unsafe {
            Matrix::from_data(DefaultAllocator::reallocate_copy(nrows.sub(nremove), ncols, m.data))
        }
    }
    /*
     *
     * Columns insertion.
     *
     */
    /// Inserts a column filled with `val` at the `i`-th position.
    #[inline]
    pub fn insert_column(self, i: usize, val: N) -> MatrixMN<N, R, DimSum<C, U1>>
        where C: DimAdd<U1>,
              DefaultAllocator: Reallocator<N, R, C, R, DimSum<C, U1>> {
        self.insert_fixed_columns::<U1>(i, val)
    }

    /// Inserts `D::dim()` columns filled with `val` starting at the `i`-th position.
    #[inline]
    pub fn insert_fixed_columns<D>(self, i: usize, val: N) -> MatrixMN<N, R, DimSum<C, D>>
        where D: DimName,
              C: DimAdd<D>,
              DefaultAllocator: Reallocator<N, R, C, R, DimSum<C, D>> {
        let mut res = unsafe { self.insert_columns_generic_uninitialized(i, D::name()) };
        res.fixed_columns_mut::<D>(i).fill(val);
        res
    }

    /// Inserts `n` columns filled with `val` starting at the `i`-th position.
    #[inline]
    pub fn insert_columns(self, i: usize, n: usize, val: N) -> MatrixMN<N, R, Dynamic>
        where C: DimAdd<Dynamic, Output = Dynamic>,
              DefaultAllocator: Reallocator<N, R, C, R, Dynamic> {
        let mut res = unsafe { self.insert_columns_generic_uninitialized(i, Dynamic::new(n)) };
        res.columns_mut(i, n).fill(val);
        res
    }

    /// Inserts `ninsert.value()` columns starting at the `i`-th position of this matrix.
    ///
    /// The added column values are not initialized.
    #[inline]
    pub unsafe fn insert_columns_generic_uninitialized<D>(self, i: usize, ninsert: D)
                                                          -> MatrixMN<N, R, DimSum<C, D>>
        where D: Dim,
              C: DimAdd<D>,
              DefaultAllocator: Reallocator<N, R, C, R, DimSum<C, D>> {

        let m = self.into_owned();
        let (nrows, ncols) = m.data.shape();
        let mut res = Matrix::from_data(DefaultAllocator::reallocate_copy(nrows, ncols.add(ninsert), m.data));

        assert!(i <= ncols.value(), "Column insertion index out of range.");

        if ninsert.value() != 0 && i != ncols.value() {
            let ptr_in = res.data.ptr().offset((i * nrows.value()) as isize);
            let ptr_out = res.data.ptr_mut().offset(((i + ninsert.value()) * nrows.value()) as isize);

            ptr::copy(ptr_in, ptr_out, (ncols.value() - i) * nrows.value())
        }

        res
    }
    /*
     *
     * Rows insertion.
     *
     */
    /// Inserts a row filled with `val` at the `i`-th position.
    #[inline]
    pub fn insert_row(self, i: usize, val: N) -> MatrixMN<N, DimSum<R, U1>, C>
        where R: DimAdd<U1>,
              DefaultAllocator: Reallocator<N, R, C, DimSum<R, U1>, C> {
        self.insert_fixed_rows::<U1>(i, val)
    }

    /// Inserts `D::dim()` rows filled with `val` starting at the `i`-th position.
    #[inline]
    pub fn insert_fixed_rows<D>(self, i: usize, val: N) -> MatrixMN<N, DimSum<R, D>, C>
        where D: DimName,
              R: DimAdd<D>,
              DefaultAllocator: Reallocator<N, R, C, DimSum<R, D>, C> {
        let mut res = unsafe { self.insert_rows_generic_uninitialized(i, D::name()) };
        res.fixed_rows_mut::<D>(i).fill(val);
        res
    }

    /// Inserts `n` rows filled with `val` starting at the `i`-th position.
    #[inline]
    pub fn insert_rows(self, i: usize, n: usize, val: N) -> MatrixMN<N, Dynamic, C>
        where R: DimAdd<Dynamic, Output = Dynamic>,
              DefaultAllocator: Reallocator<N, R, C, Dynamic, C> {
        let mut res = unsafe { self.insert_rows_generic_uninitialized(i, Dynamic::new(n)) };
        res.rows_mut(i, n).fill(val);
        res
    }

    /// Inserts `ninsert.value()` rows starting at the `i`-th position of this matrix.
    ///
    /// The added row values are not initialized.
    /// This is the generic implementation of `.insert_rows(...)` and
    /// `.insert_fixed_rows(...)` which have nicer API interfaces.
    #[inline]
    pub unsafe fn insert_rows_generic_uninitialized<D>(self, i: usize, ninsert: D)
                                                       -> MatrixMN<N, DimSum<R, D>, C>
        where D: Dim,
              R: DimAdd<D>,
              DefaultAllocator: Reallocator<N, R, C, DimSum<R, D>, C> {

        let m = self.into_owned();
        let (nrows, ncols) = m.data.shape();
        let mut res = Matrix::from_data(DefaultAllocator::reallocate_copy(nrows.add(ninsert), ncols, m.data));

        assert!(i <= nrows.value(), "Row insertion index out of range.");

        if ninsert.value() != 0 {
            extend_rows(&mut res.data.as_mut_slice(), nrows.value(), ncols.value(), i, ninsert.value());
        }

        res
    }
    /*
     *
     * Resizing.
     *
     */

    /// Resizes this matrix so that it contains `new_nrows` rows and `new_ncols` columns.
    ///
    /// The values are copied such that `self[(i, j)] == result[(i, j)]`. If the result has more
    /// rows and/or columns than `self`, then the extra rows or columns are filled with `val`.
    pub fn resize(self, new_nrows: usize, new_ncols: usize, val: N) -> DMatrix<N>
        where DefaultAllocator: Reallocator<N, R, C, Dynamic, Dynamic> {

        self.resize_generic(Dynamic::new(new_nrows), Dynamic::new(new_ncols), val)
    }

    /// Resizes this matrix so that it contains `R2::value()` rows and `C2::value()` columns.
    ///
    /// The values are copied such that `self[(i, j)] == result[(i, j)]`. If the result has more
    /// rows and/or columns than `self`, then the extra rows or columns are filled with `val`.
    pub fn fixed_resize<R2: DimName, C2: DimName>(self, val: N) -> MatrixMN<N, R2, C2>
        where DefaultAllocator: Reallocator<N, R, C, R2, C2> {

        self.resize_generic(R2::name(), C2::name(), val)
    }

    /// Resizes `self` such that it has dimensions `new_nrows × new_ncols`.
    ///
    /// The values are copied such that `self[(i, j)] == result[(i, j)]`. If the result has more
    /// rows and/or columns than `self`, then the extra rows or columns are filled with `val`.
    #[inline]
    pub fn resize_generic<R2: Dim, C2: Dim>(self, new_nrows: R2, new_ncols: C2, val: N) -> MatrixMN<N, R2, C2>
        where DefaultAllocator: Reallocator<N, R, C, R2, C2> {

        let (nrows, ncols) = self.shape();
        let mut data = self.data.into_owned();

        if new_nrows.value() == nrows {
            let res = unsafe { DefaultAllocator::reallocate_copy(new_nrows, new_ncols, data) };

            Matrix::from_data(res)
        }
        else {
            let mut res;

            unsafe {
                if new_nrows.value() < nrows {
                    compress_rows(&mut data.as_mut_slice(), nrows, ncols, new_nrows.value(), nrows - new_nrows.value());
                    res = Matrix::from_data(DefaultAllocator::reallocate_copy(new_nrows, new_ncols, data));
                }
                else {
                    res = Matrix::from_data(DefaultAllocator::reallocate_copy(new_nrows, new_ncols, data));
                    extend_rows(&mut res.data.as_mut_slice(), nrows, ncols, nrows, new_nrows.value() - nrows);
                }
            }

            if new_ncols.value() > ncols {
                res.columns_range_mut(ncols ..).fill(val);
            }

            if new_nrows.value() > nrows {
                res.slice_range_mut(nrows .., .. cmp::min(ncols, new_ncols.value())).fill(val);
            }

            res
        }
    }
}
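The resize invariant `self[(i, j)] == result[(i, j)]` on the overlapping block can be stated as a small safe sketch over a column-major buffer; the helper below is illustrative and sidesteps the `compress_rows`/`extend_rows` in-place machinery by allocating a fresh buffer:

```rust
// Safe sketch of column-major `resize`: values are copied so that
// old[(i, j)] == new[(i, j)] wherever both exist; everything else gets `val`.
fn resize(data: &[f64], nrows: usize, ncols: usize,
          new_nrows: usize, new_ncols: usize, val: f64) -> Vec<f64> {
    let mut out = vec![val; new_nrows * new_ncols];
    for j in 0..ncols.min(new_ncols) {
        for i in 0..nrows.min(new_nrows) {
            out[j * new_nrows + i] = data[j * nrows + i];
        }
    }
    out
}

fn main() {
    // 2x2 column-major matrix [[1, 3], [2, 4]].
    let m = vec![1., 2., 3., 4.];
    let r = resize(&m, 2, 2, 3, 3, 0.);
    // The overlapping 2x2 block is preserved; the new row and column are 0.
    assert_eq!(r, vec![1., 2., 0., 3., 4., 0., 0., 0., 0.]);
}
```

The real implementation avoids the extra allocation by shifting rows in place and then filling only the two freshly exposed regions (`columns_range_mut` and `slice_range_mut` above).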
unsafe fn compress_rows<N: Scalar>(data: &mut [N], nrows: usize, ncols: usize, i: usize, nremove: usize) {
    let new_nrows = nrows - nremove;
    let ptr_in = data.as_ptr();
    let ptr_out = data.as_mut_ptr();

    let mut curr_i = i;

    for k in 0 .. ncols - 1 {
        ptr::copy(ptr_in.offset((curr_i + (k + 1) * nremove) as isize),
                  ptr_out.offset(curr_i as isize),
                  new_nrows);

        curr_i += new_nrows;
    }

    // Deal with the last column, from which fewer values have to be copied.
    let remaining_len = nrows - i - nremove;
    ptr::copy(ptr_in.offset((nrows * ncols - remaining_len) as isize),
              ptr_out.offset(curr_i as isize),
              remaining_len);
}


unsafe fn extend_rows<N: Scalar>(data: &mut [N], nrows: usize, ncols: usize, i: usize, ninsert: usize) {
    let new_nrows = nrows + ninsert;
    let ptr_in = data.as_ptr();
    let ptr_out = data.as_mut_ptr();

    let remaining_len = nrows - i;
    let mut curr_i = new_nrows * ncols - remaining_len;

    // Deal with the last column, from which fewer values have to be copied.
    ptr::copy(ptr_in.offset((nrows * ncols - remaining_len) as isize),
              ptr_out.offset(curr_i as isize),
              remaining_len);

    for k in (0 .. ncols - 1).rev() {
        curr_i -= new_nrows;

        ptr::copy(ptr_in.offset((k * nrows + i) as isize),
                  ptr_out.offset(curr_i as isize),
                  nrows);
    }
}
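`compress_rows` is easier to audit in a safe form: in a column-major buffer, each of the first `ncols - 1` copies moves one block spanning a column's surviving tail plus the next column's head, skipping a growing offset of removed rows; the last column contributes fewer values. A sketch with `copy_within` (illustrative, not the crate's unsafe function):

```rust
// Safe re-statement of `compress_rows` on a column-major buffer.
fn compress_rows(data: &mut Vec<f64>, nrows: usize, ncols: usize, i: usize, nremove: usize) {
    let new_nrows = nrows - nremove;
    let mut curr = i; // write cursor in the compacted buffer
    for k in 0..ncols - 1 {
        // The source skips (k + 1) blocks of removed rows.
        let src = curr + (k + 1) * nremove;
        data.copy_within(src..src + new_nrows, curr);
        curr += new_nrows;
    }
    // The last column has fewer values left to move.
    let remaining = nrows - i - nremove;
    let src = nrows * ncols - remaining;
    data.copy_within(src..src + remaining, curr);
    data.truncate(new_nrows * ncols);
}

fn main() {
    // 3x2 column-major matrix: columns [1,2,3] and [4,5,6]; remove row 1.
    let mut data = vec![1., 2., 3., 4., 5., 6.];
    compress_rows(&mut data, 3, 2, 1, 1);
    assert_eq!(data, vec![1., 3., 4., 6.]);
}
```

As in the unsafe original, the copies overlap their source, which is why a memmove-style primitive (`ptr::copy` there, `copy_within` here) is required rather than `ptr::copy_nonoverlapping`.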
@@ -1,203 +0,0 @@
use approx::ApproxEq;

use alga::general::Field;

use core::{Scalar, Matrix, SquareMatrix, OwnedSquareMatrix};
use core::dimension::Dim;
use core::storage::{Storage, StorageMut};


impl<N, D: Dim, S> SquareMatrix<N, D, S>
    where N: Scalar + Field + ApproxEq,
          S: Storage<N, D, D> {
    /// Attempts to invert this matrix.
    #[inline]
    pub fn try_inverse(self) -> Option<OwnedSquareMatrix<N, D, S::Alloc>> {
        let mut res = self.into_owned();

        if res.shape().0 <= 3 {
            if res.try_inverse_mut() {
                Some(res)
            }
            else {
                None
            }
        }
        else {
            gauss_jordan_inverse(res)
        }
    }
}


impl<N, D: Dim, S> SquareMatrix<N, D, S>
    where N: Scalar + Field + ApproxEq,
          S: StorageMut<N, D, D> {
    /// Attempts to invert this matrix in-place. Returns `false` and leaves `self` untouched if
    /// inversion fails.
    #[inline]
    pub fn try_inverse_mut(&mut self) -> bool {
        assert!(self.is_square(), "Unable to invert a non-square matrix.");

        let dim = self.shape().0;

        unsafe {
            match dim {
                0 => true,
                1 => {
                    let determinant = self.get_unchecked(0, 0).clone();
                    if determinant == N::zero() {
                        false
                    }
                    else {
                        *self.get_unchecked_mut(0, 0) = N::one() / determinant;
                        true
                    }
                },
                2 => {
                    let determinant = self.determinant();

                    if determinant == N::zero() {
                        false
                    }
                    else {
                        let m11 = *self.get_unchecked(0, 0); let m12 = *self.get_unchecked(0, 1);
                        let m21 = *self.get_unchecked(1, 0); let m22 = *self.get_unchecked(1, 1);

                        *self.get_unchecked_mut(0, 0) =  m22 / determinant;
                        *self.get_unchecked_mut(0, 1) = -m12 / determinant;

                        *self.get_unchecked_mut(1, 0) = -m21 / determinant;
                        *self.get_unchecked_mut(1, 1) =  m11 / determinant;

                        true
                    }
                },
                3 => {
                    let m11 = *self.get_unchecked(0, 0);
                    let m12 = *self.get_unchecked(0, 1);
                    let m13 = *self.get_unchecked(0, 2);

                    let m21 = *self.get_unchecked(1, 0);
                    let m22 = *self.get_unchecked(1, 1);
                    let m23 = *self.get_unchecked(1, 2);

                    let m31 = *self.get_unchecked(2, 0);
                    let m32 = *self.get_unchecked(2, 1);
                    let m33 = *self.get_unchecked(2, 2);


                    let minor_m12_m23 = m22 * m33 - m32 * m23;
                    let minor_m11_m23 = m21 * m33 - m31 * m23;
                    let minor_m11_m22 = m21 * m32 - m31 * m22;

                    let determinant = m11 * minor_m12_m23 -
                                      m12 * minor_m11_m23 +
                                      m13 * minor_m11_m22;

                    if determinant == N::zero() {
                        false
                    }
                    else {
                        *self.get_unchecked_mut(0, 0) = minor_m12_m23 / determinant;
                        *self.get_unchecked_mut(0, 1) = (m13 * m32 - m33 * m12) / determinant;
                        *self.get_unchecked_mut(0, 2) = (m12 * m23 - m22 * m13) / determinant;

                        *self.get_unchecked_mut(1, 0) = -minor_m11_m23 / determinant;
                        *self.get_unchecked_mut(1, 1) = (m11 * m33 - m31 * m13) / determinant;
                        *self.get_unchecked_mut(1, 2) = (m13 * m21 - m23 * m11) / determinant;

                        *self.get_unchecked_mut(2, 0) = minor_m11_m22 / determinant;
                        *self.get_unchecked_mut(2, 1) = (m12 * m31 - m32 * m11) / determinant;
                        *self.get_unchecked_mut(2, 2) = (m11 * m22 - m21 * m12) / determinant;

                        true
                    }
                },
                _ => {
                    let oself = self.clone_owned();
                    if let Some(res) = gauss_jordan_inverse(oself) {
                        self.copy_from(&res);
                        true
                    }
                    else {
                        false
                    }
                }
            }
        }
    }
}
/// Inverts the given matrix using Gauss-Jordan elimination.
fn gauss_jordan_inverse<N, D, S>(mut matrix: SquareMatrix<N, D, S>) -> Option<OwnedSquareMatrix<N, D, S::Alloc>>
    where D: Dim,
          N: Scalar + Field + ApproxEq,
          S: StorageMut<N, D, D> {

    assert!(matrix.is_square(), "Unable to invert a non-square matrix.");
    let dim = matrix.data.shape().0;
    let mut res: OwnedSquareMatrix<N, D, S::Alloc> = Matrix::identity_generic(dim, dim);
    let dim = dim.value();

    unsafe {
        for k in 0 .. dim {
            // Search a non-zero value on the k-th column.
            // FIXME: would it be worth it to spend some more time searching for the
            // max instead?

            let mut n0 = k; // Index of a non-zero entry.

            while n0 != dim {
                if !matrix.get_unchecked(n0, k).is_zero() {
                    break;
                }

                n0 += 1;
            }

            if n0 == dim {
                return None
            }

            // Swap the pivot row.
            if n0 != k {
                for j in 0 .. dim {
                    matrix.swap_unchecked((n0, j), (k, j));
                    res.swap_unchecked((n0, j), (k, j));
                }
            }

            let pivot = *matrix.get_unchecked(k, k);

            for j in k .. dim {
                let selfval = *matrix.get_unchecked(k, j) / pivot;
                *matrix.get_unchecked_mut(k, j) = selfval;
            }

            for j in 0 .. dim {
                let resval = *res.get_unchecked(k, j) / pivot;
                *res.get_unchecked_mut(k, j) = resval;
            }

            for l in 0 .. dim {
                if l != k {
                    let normalizer = *matrix.get_unchecked(l, k);

                    for j in k .. dim {
                        let selfval = *matrix.get_unchecked(l, j) - *matrix.get_unchecked(k, j) * normalizer;
                        *matrix.get_unchecked_mut(l, j) = selfval;
                    }

                    for j in 0 .. dim {
                        let resval = *res.get_unchecked(l, j) - *res.get_unchecked(k, j) * normalizer;
                        *res.get_unchecked_mut(l, j) = resval;
                    }
                }
            }
        }

        Some(res)
    }
}
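The deleted `gauss_jordan_inverse` follows the textbook pivot-search / row-swap / normalize / eliminate loop, applying every row operation to an identity matrix in parallel. A self-contained version over `Vec<Vec<f64>>` (illustrative, not the generic nalgebra code):

```rust
// Minimal Gauss-Jordan inversion: returns None when no non-zero pivot exists
// in some column, i.e. the matrix is singular.
fn gauss_jordan_inverse(mut m: Vec<Vec<f64>>) -> Option<Vec<Vec<f64>>> {
    let dim = m.len();
    let mut res: Vec<Vec<f64>> = (0..dim)
        .map(|i| (0..dim).map(|j| if i == j { 1.0 } else { 0.0 }).collect())
        .collect();

    for k in 0..dim {
        // Find a row at or below k with a non-zero entry in column k, and swap it up.
        let n0 = (k..dim).find(|&r| m[r][k] != 0.0)?;
        m.swap(k, n0);
        res.swap(k, n0);

        // Normalize the pivot row.
        let pivot = m[k][k];
        for j in 0..dim {
            m[k][j] /= pivot;
            res[k][j] /= pivot;
        }

        // Eliminate column k from every other row.
        for l in 0..dim {
            if l != k {
                let factor = m[l][k];
                for j in 0..dim {
                    m[l][j] -= m[k][j] * factor;
                    res[l][j] -= res[k][j] * factor;
                }
            }
        }
    }

    Some(res)
}

fn main() {
    let inv = gauss_jordan_inverse(vec![vec![4.0, 7.0], vec![2.0, 6.0]]).unwrap();
    // Inverse of [[4, 7], [2, 6]] is [[0.6, -0.7], [-0.2, 0.4]].
    assert!((inv[0][0] - 0.6).abs() < 1e-12);
    assert!((inv[0][1] + 0.7).abs() < 1e-12);
    assert!((inv[1][0] + 0.2).abs() < 1e-12);
    assert!((inv[1][1] - 0.4).abs() < 1e-12);
}
```

Like the deleted code, this takes the first non-zero pivot rather than the largest one, which is simpler but less numerically robust than partial pivoting.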
@@ -81,6 +81,13 @@ macro_rules! iterator {
                 self.size_hint().0
             }
         }
+
+        impl<'a, N: Scalar, R: Dim, C: Dim, S: 'a + $Storage<N, R, C>> ExactSizeIterator for $Name<'a, N, R, C, S> {
+            #[inline]
+            fn len(&self) -> usize {
+                self.size
+            }
+        }
     }
 }
File diff suppressed because it is too large
@ -7,42 +7,39 @@ use alga::general::{AbstractMagma, AbstractGroupAbelian, AbstractGroup, Abstract
|
|||
ClosedAdd, ClosedNeg, ClosedMul};
|
||||
use alga::linear::{VectorSpace, NormedSpace, InnerSpace, FiniteDimVectorSpace, FiniteDimInnerSpace};
|
||||
|
||||
use core::{Scalar, Matrix, SquareMatrix};
|
||||
use core::{DefaultAllocator, Scalar, MatrixMN, MatrixN};
|
||||
use core::dimension::{Dim, DimName};
|
||||
use core::storage::OwnedStorage;
|
||||
use core::allocator::OwnedAllocator;
|
||||
use core::storage::{Storage, StorageMut};
|
||||
use core::allocator::Allocator;
|
||||
|
||||
/*
|
||||
*
|
||||
* Additive structures.
|
||||
*
|
||||
*/
|
||||
impl<N, R: DimName, C: DimName, S> Identity<Additive> for Matrix<N, R, C, S>
|
||||
impl<N, R: DimName, C: DimName> Identity<Additive> for MatrixMN<N, R, C>
|
||||
where N: Scalar + Zero,
|
||||
S: OwnedStorage<N, R, C>,
|
||||
S::Alloc: OwnedAllocator<N, R, C, S> {
|
||||
DefaultAllocator: Allocator<N, R, C> {
|
||||
#[inline]
|
||||
fn identity() -> Self {
|
||||
Self::from_element(N::zero())
|
||||
}
|
||||
}
|
||||
|
||||
impl<N, R: DimName, C: DimName, S> AbstractMagma<Additive> for Matrix<N, R, C, S>
|
||||
impl<N, R: DimName, C: DimName> AbstractMagma<Additive> for MatrixMN<N, R, C>
|
||||
where N: Scalar + ClosedAdd,
|
||||
S: OwnedStorage<N, R, C>,
|
||||
S::Alloc: OwnedAllocator<N, R, C, S> {
|
||||
DefaultAllocator: Allocator<N, R, C> {
|
||||
#[inline]
|
||||
fn operate(&self, other: &Self) -> Self {
|
||||
self + other
|
||||
}
|
||||
}
|
||||
|
||||
impl<N, R: DimName, C: DimName, S> Inverse<Additive> for Matrix<N, R, C, S>
|
||||
impl<N, R: DimName, C: DimName> Inverse<Additive> for MatrixMN<N, R, C>
|
||||
where N: Scalar + ClosedNeg,
|
||||
S: OwnedStorage<N, R, C>,
|
||||
S::Alloc: OwnedAllocator<N, R, C, S> {
|
||||
DefaultAllocator: Allocator<N, R, C> {
|
||||
#[inline]
|
||||
fn inverse(&self) -> Matrix<N, R, C, S> {
|
||||
fn inverse(&self) -> MatrixMN<N, R, C> {
|
||||
-self
|
||||
}
|
||||
|
||||
|
@ -54,10 +51,9 @@ impl<N, R: DimName, C: DimName, S> Inverse<Additive> for Matrix<N, R, C, S>
|
|||
|
||||
macro_rules! inherit_additive_structure(
|
||||
($($marker: ident<$operator: ident> $(+ $bounds: ident)*),* $(,)*) => {$(
|
||||
impl<N, R: DimName, C: DimName, S> $marker<$operator> for Matrix<N, R, C, S>
|
||||
impl<N, R: DimName, C: DimName> $marker<$operator> for MatrixMN<N, R, C>
|
||||
where N: Scalar + $marker<$operator> $(+ $bounds)*,
|
||||
S: OwnedStorage<N, R, C>,
|
||||
S::Alloc: OwnedAllocator<N, R, C, S> { }
|
||||
DefaultAllocator: Allocator<N, R, C> { }
|
||||
)*}
|
||||
);
|
||||
|
||||
|
@ -70,10 +66,9 @@ inherit_additive_structure!(
|
|||
AbstractGroupAbelian<Additive> + Zero + ClosedAdd + ClosedNeg
|
||||
);
|
||||
|
||||
impl<N, R: DimName, C: DimName, S> AbstractModule for Matrix<N, R, C, S>
|
||||
impl<N, R: DimName, C: DimName> AbstractModule for MatrixMN<N, R, C>
|
||||
where N: Scalar + RingCommutative,
|
||||
S: OwnedStorage<N, R, C>,
|
||||
S::Alloc: OwnedAllocator<N, R, C, S> {
|
||||
DefaultAllocator: Allocator<N, R, C> {
|
||||
type AbstractRing = N;
|
||||
|
||||
#[inline]
|
||||
|
@@ -82,24 +77,21 @@ impl<N, R: DimName, C: DimName, S> AbstractModule for Matrix<N, R, C, S>
     }
 }
 
-impl<N, R: DimName, C: DimName, S> Module for Matrix<N, R, C, S>
+impl<N, R: DimName, C: DimName> Module for MatrixMN<N, R, C>
     where N: Scalar + RingCommutative,
-          S: OwnedStorage<N, R, C>,
-          S::Alloc: OwnedAllocator<N, R, C, S> {
+          DefaultAllocator: Allocator<N, R, C> {
     type Ring = N;
 }
 
-impl<N, R: DimName, C: DimName, S> VectorSpace for Matrix<N, R, C, S>
+impl<N, R: DimName, C: DimName> VectorSpace for MatrixMN<N, R, C>
     where N: Scalar + Field,
-          S: OwnedStorage<N, R, C>,
-          S::Alloc: OwnedAllocator<N, R, C, S> {
+          DefaultAllocator: Allocator<N, R, C> {
     type Field = N;
 }
 
-impl<N, R: DimName, C: DimName, S> FiniteDimVectorSpace for Matrix<N, R, C, S>
+impl<N, R: DimName, C: DimName> FiniteDimVectorSpace for MatrixMN<N, R, C>
     where N: Scalar + Field,
-          S: OwnedStorage<N, R, C>,
-          S::Alloc: OwnedAllocator<N, R, C, S> {
+          DefaultAllocator: Allocator<N, R, C> {
     #[inline]
     fn dimension() -> usize {
         R::dim() * C::dim()
@@ -131,10 +123,8 @@ impl<N, R: DimName, C: DimName, S> FiniteDimVectorSpace for Matrix<N, R, C, S>
     }
 }
 
-impl<N, R: DimName, C: DimName, S> NormedSpace for Matrix<N, R, C, S>
-    where N: Real,
-          S: OwnedStorage<N, R, C>,
-          S::Alloc: OwnedAllocator<N, R, C, S> {
+impl<N: Real, R: DimName, C: DimName> NormedSpace for MatrixMN<N, R, C>
+    where DefaultAllocator: Allocator<N, R, C> {
     #[inline]
     fn norm_squared(&self) -> N {
         self.norm_squared()
@@ -166,10 +156,8 @@ impl<N, R: DimName, C: DimName, S> NormedSpace for Matrix<N, R, C, S>
     }
 }
 
-impl<N, R: DimName, C: DimName, S> InnerSpace for Matrix<N, R, C, S>
-    where N: Real,
-          S: OwnedStorage<N, R, C>,
-          S::Alloc: OwnedAllocator<N, R, C, S> {
+impl<N: Real, R: DimName, C: DimName> InnerSpace for MatrixMN<N, R, C>
+    where DefaultAllocator: Allocator<N, R, C> {
     type Real = N;
 
     #[inline]
@@ -187,12 +175,10 @@ impl<N, R: DimName, C: DimName, S> InnerSpace for Matrix<N, R, C, S>
 // In particular:
 //     − use `x()` instead of `::canonical_basis_element`
 //     − use `::new(x, y, z)` instead of `::from_slice`
-impl<N, R: DimName, C: DimName, S> FiniteDimInnerSpace for Matrix<N, R, C, S>
-    where N: Real,
-          S: OwnedStorage<N, R, C>,
-          S::Alloc: OwnedAllocator<N, R, C, S> {
+impl<N: Real, R: DimName, C: DimName> FiniteDimInnerSpace for MatrixMN<N, R, C>
+    where DefaultAllocator: Allocator<N, R, C> {
     #[inline]
-    fn orthonormalize(vs: &mut [Matrix<N, R, C, S>]) -> usize {
+    fn orthonormalize(vs: &mut [MatrixMN<N, R, C>]) -> usize {
         let mut nbasis_elements = 0;
 
         for i in 0 .. vs.len() {
@@ -229,7 +215,7 @@ impl<N, R: DimName, C: DimName, S> FiniteDimInnerSpace for Matrix<N, R, C, S>
         match Self::dimension() {
             1 => {
                 if vs.len() == 0 {
-                    f(&Self::canonical_basis_element(0));
+                    let _ = f(&Self::canonical_basis_element(0));
                 }
             },
             2 => {
@@ -241,7 +227,7 @@ impl<N, R: DimName, C: DimName, S> FiniteDimInnerSpace for Matrix<N, R, C, S>
                     let v   = &vs[0];
                     let res = Self::from_column_slice(&[-v[1], v[0]]);
 
-                    f(&res.normalize());
+                    let _ = f(&res.normalize());
                 }
 
                 // Otherwise, nothing.
@@ -266,11 +252,11 @@ impl<N, R: DimName, C: DimName, S> FiniteDimInnerSpace for Matrix<N, R, C, S>
                     let _ = a.normalize_mut();
 
                     if f(&a.cross(v)) {
-                        f(&a);
+                        let _ = f(&a);
                     }
                 }
                 else if vs.len() == 2 {
-                    f(&vs[0].cross(&vs[1]).normalize());
+                    let _ = f(&vs[0].cross(&vs[1]).normalize());
                 }
             },
             _ => {
@@ -307,20 +293,18 @@ impl<N, R: DimName, C: DimName, S> FiniteDimInnerSpace for Matrix<N, R, C, S>
  *
  *
  */
-impl<N, D: DimName, S> Identity<Multiplicative> for SquareMatrix<N, D, S>
+impl<N, D: DimName> Identity<Multiplicative> for MatrixN<N, D>
     where N: Scalar + Zero + One,
-          S: OwnedStorage<N, D, D>,
-          S::Alloc: OwnedAllocator<N, D, D, S> {
+          DefaultAllocator: Allocator<N, D, D> {
     #[inline]
     fn identity() -> Self {
         Self::identity()
     }
 }
 
-impl<N, D: DimName, S> AbstractMagma<Multiplicative> for SquareMatrix<N, D, S>
-    where N: Scalar + Zero + ClosedAdd + ClosedMul,
-          S: OwnedStorage<N, D, D>,
-          S::Alloc: OwnedAllocator<N, D, D, S> {
+impl<N, D: DimName> AbstractMagma<Multiplicative> for MatrixN<N, D>
+    where N: Scalar + Zero + One + ClosedAdd + ClosedMul,
+          DefaultAllocator: Allocator<N, D, D> {
     #[inline]
     fn operate(&self, other: &Self) -> Self {
         self * other
@@ -329,10 +313,9 @@ impl<N, D: DimName, S> AbstractMagma<Multiplicative> for SquareMatrix<N, D, S>
 
 macro_rules! impl_multiplicative_structure(
     ($($marker: ident<$operator: ident> $(+ $bounds: ident)*),* $(,)*) => {$(
-        impl<N, D: DimName, S> $marker<$operator> for SquareMatrix<N, D, S>
-            where N: Scalar + Zero + ClosedAdd + ClosedMul + $marker<$operator> $(+ $bounds)*,
-                  S: OwnedStorage<N, D, D>,
-                  S::Alloc: OwnedAllocator<N, D, D, S> { }
+        impl<N, D: DimName> $marker<$operator> for MatrixN<N, D>
+            where N: Scalar + Zero + One + ClosedAdd + ClosedMul + $marker<$operator> $(+ $bounds)*,
+                  DefaultAllocator: Allocator<N, D, D> { }
     )*}
 );
 
@@ -341,421 +324,24 @@ impl_multiplicative_structure!(
     AbstractMonoid<Multiplicative> + One
 );
 
-// // FIXME: Field too strong?
-// impl<N, S> Matrix for Matrix<N, S>
-//     where N: Scalar + Field,
-//           S: Storage<N> {
-//     type Field     = N;
-//     type Row       = OwnedMatrix<N, Static<U1>, S::C, S::Alloc>;
-//     type Column    = OwnedMatrix<N, S::R, Static<U1>, S::Alloc>;
-//     type Transpose = OwnedMatrix<N, S::C, S::R, S::Alloc>;
-
-//     #[inline]
-//     fn nrows(&self) -> usize {
-//         self.shape().0
-//     }
-
-//     #[inline]
-//     fn ncolumns(&self) -> usize {
-//         self.shape().1
-//     }
-
-//     #[inline]
-//     fn row(&self, row: usize) -> Self::Row {
-//         let mut res: Self::Row = ::zero();
-
-//         for (column, e) in res.iter_mut().enumerate() {
-//             *e = self[(row, column)];
-//         }
-
-//         res
-//     }
-
-//     #[inline]
-//     fn column(&self, column: usize) -> Self::Column {
-//         let mut res: Self::Column = ::zero();
-
-//         for (row, e) in res.iter_mut().enumerate() {
-//             *e = self[(row, column)];
-//         }
-
-//         res
-//     }
-
-//     #[inline]
-//     unsafe fn get_unchecked(&self, i: usize, j: usize) -> Self::Field {
-//         self.get_unchecked(i, j)
-//     }
-
-//     #[inline]
-//     fn transpose(&self) -> Self::Transpose {
-//         self.transpose()
-//     }
-// }
-
-// impl<N, S> MatrixMut for Matrix<N, S>
-//     where N: Scalar + Field,
-//           S: StorageMut<N> {
-//     #[inline]
-//     fn set_row_mut(&mut self, irow: usize, row: &Self::Row) {
-//         assert!(irow < self.shape().0, "Row index out of bounds.");
-
-//         for (icol, e) in row.iter().enumerate() {
-//             unsafe { self.set_unchecked(irow, icol, *e) }
-//         }
-//     }
-
-//     #[inline]
-//     fn set_column_mut(&mut self, icol: usize, col: &Self::Column) {
-//         assert!(icol < self.shape().1, "Column index out of bounds.");
-//         for (irow, e) in col.iter().enumerate() {
-//             unsafe { self.set_unchecked(irow, icol, *e) }
-//         }
-//     }
-
-//     #[inline]
-//     unsafe fn set_unchecked(&mut self, i: usize, j: usize, val: Self::Field) {
-//         *self.get_unchecked_mut(i, j) = val
-//     }
-// }
-
-// // FIXME: Real is needed here only for invertibility...
-// impl<N: Real> SquareMatrixMut for $t<N> {
-//     #[inline]
-//     fn from_diagonal(diag: &Self::Coordinates) -> Self {
-//         let mut res: $t<N> = ::zero();
-//         res.set_diagonal_mut(diag);
-//         res
-//     }
-
-//     #[inline]
-//     fn set_diagonal_mut(&mut self, diag: &Self::Coordinates) {
-//         for (i, e) in diag.iter().enumerate() {
-//             unsafe { self.set_unchecked(i, i, *e) }
-//         }
-//     }
-// }
-
-
-// Specializations depending on the dimension.
-// matrix_group_approx_impl!(common: $t, 1, $vector, $($compN),+);
-
-// // FIXME: Real is needed here only for invertibility...
-// impl<N: Real> SquareMatrix for $t<N> {
-//     type Vector = $vector<N>;
-
-//     #[inline]
-//     fn diagonal(&self) -> Self::Coordinates {
-//         $vector::new(self.m11)
-//     }
-
-//     #[inline]
-//     fn determinant(&self) -> Self::Field {
-//         self.m11
-//     }
-
-//     #[inline]
-//     fn try_inverse(&self) -> Option<Self> {
-//         let mut res = *self;
-//         if res.try_inverse_mut() {
-//             Some(res)
-//         }
-//         else {
-//             None
-//         }
-//     }
-
-//     #[inline]
-//     fn try_inverse_mut(&mut self) -> bool {
-//         if relative_eq!(&self.m11, &::zero()) {
-//             false
-//         }
-//         else {
-//             self.m11 = ::one::<N>() / ::determinant(self);
-
-//             true
-//         }
-//     }
-
-//     #[inline]
-//     fn transpose_mut(&mut self) {
-//         // no-op
-//     }
-// }
-
-// ident, 2, $vector: ident, $($compN: ident),+) => {
-//     matrix_group_approx_impl!(common: $t, 2, $vector, $($compN),+);
-
-//     // FIXME: Real is needed only for inversion here.
-//     impl<N: Real> SquareMatrix for $t<N> {
-//         type Vector = $vector<N>;
-
-//         #[inline]
-//         fn diagonal(&self) -> Self::Coordinates {
-//             $vector::new(self.m11, self.m22)
-//         }
-
-//         #[inline]
-//         fn determinant(&self) -> Self::Field {
-//             self.m11 * self.m22 - self.m21 * self.m12
-//         }
-
-//         #[inline]
-//         fn try_inverse(&self) -> Option<Self> {
-//             let mut res = *self;
-//             if res.try_inverse_mut() {
-//                 Some(res)
-//             }
-//             else {
-//                 None
-//             }
-//         }
-
-//         #[inline]
-//         fn try_inverse_mut(&mut self) -> bool {
-//             let determinant = ::determinant(self);
-
-//             if relative_eq!(&determinant, &::zero()) {
-//                 false
-//             }
-//             else {
-//                 *self = Matrix2::new(
-//                     self.m22 / determinant , -self.m12 / determinant,
-//                     -self.m21 / determinant,  self.m11 / determinant);
-
-//                 true
-//             }
-//         }
-
-//         #[inline]
-//         fn transpose_mut(&mut self) {
-//             mem::swap(&mut self.m12, &mut self.m21)
-//         }
-//     }
-
-// ident, 3, $vector: ident, $($compN: ident),+) => {
-//     matrix_group_approx_impl!(common: $t, 3, $vector, $($compN),+);
-
-//     // FIXME: Real is needed only for inversion here.
-//     impl<N: Real> SquareMatrix for $t<N> {
-//         type Vector = $vector<N>;
-
-//         #[inline]
-//         fn diagonal(&self) -> Self::Coordinates {
-//             $vector::new(self.m11, self.m22, self.m33)
-//         }
-
-//         #[inline]
-//         fn determinant(&self) -> Self::Field {
-//             let minor_m12_m23 = self.m22 * self.m33 - self.m32 * self.m23;
-//             let minor_m11_m23 = self.m21 * self.m33 - self.m31 * self.m23;
-//             let minor_m11_m22 = self.m21 * self.m32 - self.m31 * self.m22;
-
-//             self.m11 * minor_m12_m23 - self.m12 * minor_m11_m23 + self.m13 * minor_m11_m22
-//         }
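The removed 3×3 determinant above expands along the first row using 2×2 minors. A standalone sketch of that arithmetic on plain arrays (no nalgebra types; `det3` is an illustrative name, not part of the crate):

```rust
// Determinant of a 3x3 matrix by cofactor expansion along the first
// row, mirroring the `minor_m12_m23` / `minor_m11_m23` / `minor_m11_m22`
// terms of the commented-out code above.
fn det3(m: [[f64; 3]; 3]) -> f64 {
    let minor_m12_m23 = m[1][1] * m[2][2] - m[2][1] * m[1][2];
    let minor_m11_m23 = m[1][0] * m[2][2] - m[2][0] * m[1][2];
    let minor_m11_m22 = m[1][0] * m[2][1] - m[2][0] * m[1][1];

    m[0][0] * minor_m12_m23 - m[0][1] * minor_m11_m23 + m[0][2] * minor_m11_m22
}

fn main() {
    // The identity has determinant 1; a diagonal matrix multiplies its entries.
    assert!((det3([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]) - 1.0).abs() < 1e-12);
    assert!((det3([[2.0, 0.0, 0.0], [0.0, 3.0, 0.0], [0.0, 0.0, 4.0]]) - 24.0).abs() < 1e-12);
}
```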
-
-//         #[inline]
-//         fn try_inverse(&self) -> Option<Self> {
-//             let mut res = *self;
-//             if res.try_inverse_mut() {
-//                 Some(res)
-//             }
-//             else {
-//                 None
-//             }
-//         }
-
-//         #[inline]
-//         fn try_inverse_mut(&mut self) -> bool {
-//             let minor_m12_m23 = self.m22 * self.m33 - self.m32 * self.m23;
-//             let minor_m11_m23 = self.m21 * self.m33 - self.m31 * self.m23;
-//             let minor_m11_m22 = self.m21 * self.m32 - self.m31 * self.m22;
-
-//             let determinant = self.m11 * minor_m12_m23 -
-//                               self.m12 * minor_m11_m23 +
-//                               self.m13 * minor_m11_m22;
-
-//             if relative_eq!(&determinant, &::zero()) {
-//                 false
-//             }
-//             else {
-//                 *self = Matrix3::new(
-//                     (minor_m12_m23 / determinant),
-//                     ((self.m13 * self.m32 - self.m33 * self.m12) / determinant),
-//                     ((self.m12 * self.m23 - self.m22 * self.m13) / determinant),
-
-//                     (-minor_m11_m23 / determinant),
-//                     ((self.m11 * self.m33 - self.m31 * self.m13) / determinant),
-//                     ((self.m13 * self.m21 - self.m23 * self.m11) / determinant),
-
-//                     (minor_m11_m22 / determinant),
-//                     ((self.m12 * self.m31 - self.m32 * self.m11) / determinant),
-//                     ((self.m11 * self.m22 - self.m21 * self.m12) / determinant)
-//                 );
-
-//                 true
-//             }
-//         }
-
-//         #[inline]
-//         fn transpose_mut(&mut self) {
-//             mem::swap(&mut self.m12, &mut self.m21);
-//             mem::swap(&mut self.m13, &mut self.m31);
-//             mem::swap(&mut self.m23, &mut self.m32);
-//         }
-//     }
-
-// ident, $dimension: expr, $vector: ident, $($compN: ident),+) => {
-//     matrix_group_approx_impl!(common: $t, $dimension, $vector, $($compN),+);
-
-//     // FIXME: Real is needed only for inversion here.
-//     impl<N: Real> SquareMatrix for $t<N> {
-//         type Vector = $vector<N>;
-
-//         #[inline]
-//         fn diagonal(&self) -> Self::Coordinates {
-//             let mut diagonal: $vector<N> = ::zero();
-
-//             for i in 0 .. $dimension {
-//                 unsafe { diagonal.unsafe_set(i, self.get_unchecked(i, i)) }
-//             }
-
-//             diagonal
-//         }
-
-//         #[inline]
-//         fn determinant(&self) -> Self::Field {
-//             // FIXME: extremely naive implementation.
-//             let mut det = ::zero();
-
-//             for icol in 0 .. $dimension {
-//                 let e = unsafe { self.unsafe_at((0, icol)) };
-
-//                 if e != ::zero() {
-//                     let minor_mat = self.delete_row_column(0, icol);
-//                     let minor     = minor_mat.determinant();
-
-//                     if icol % 2 == 0 {
-//                         det += minor;
-//                     }
-//                     else {
-//                         det -= minor;
-//                     }
-//                 }
-//             }
-
-//             det
-//         }
-
-//         #[inline]
-//         fn try_inverse(&self) -> Option<Self> {
-//             let mut res = *self;
-//             if res.try_inverse_mut() {
-//                 Some(res)
-//             }
-//             else {
-//                 None
-//             }
-//         }
-
-//         #[inline]
-//         fn try_inverse_mut(&mut self) -> bool {
-//             let mut res: $t<N> = ::one();
-
-//             // Inversion using Gauss-Jordan elimination
-//             for k in 0 .. $dimension {
-//                 // search a non-zero value on the k-th column
-//                 // FIXME: would it be worth it to spend some more time searching for the
-//                 // max instead?
-
-//                 let mut n0 = k; // index of a non-zero entry
-
-//                 while n0 != $dimension {
-//                     if self[(n0, k)] != ::zero() {
-//                         break;
-//                     }
-
-//                     n0 = n0 + 1;
-//                 }
-
-//                 if n0 == $dimension {
-//                     return false
-//                 }
-
-//                 // swap pivot line
-//                 if n0 != k {
-//                     for j in 0 .. $dimension {
-//                         self.swap((n0, j), (k, j));
-//                         res.swap((n0, j), (k, j));
-//                     }
-//                 }
-
-//                 let pivot = self[(k, k)];
-
-//                 for j in k .. $dimension {
-//                     let selfval = self[(k, j)] / pivot;
-//                     self[(k, j)] = selfval;
-//                 }
-
-//                 for j in 0 .. $dimension {
-//                     let resval = res[(k, j)] / pivot;
-//                     res[(k, j)] = resval;
-//                 }
-
-//                 for l in 0 .. $dimension {
-//                     if l != k {
-//                         let normalizer = self[(l, k)];
-
-//                         for j in k .. $dimension {
-//                             let selfval = self[(l, j)] - self[(k, j)] * normalizer;
-//                             self[(l, j)] = selfval;
-//                         }
-
-//                         for j in 0 .. $dimension {
-//                             let resval = res[(l, j)] - res[(k, j)] * normalizer;
-//                             res[(l, j)] = resval;
-//                         }
-//                     }
-//                 }
-//             }
-
-//             *self = res;
-
-//             true
-//         }
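The removed generic `try_inverse_mut` above is a Gauss-Jordan elimination that applies every row operation to an identity matrix in parallel. A self-contained sketch of the same algorithm on a flat row-major buffer (`try_inverse` here is an illustrative helper, not the crate's API):

```rust
// Invert an n-by-n matrix stored row-major in a Vec, by Gauss-Jordan
// elimination: pick a non-zero pivot in column k, swap it into row k,
// normalize the pivot row, then eliminate column k from all other rows.
// Returns None when no pivot exists (singular matrix).
fn try_inverse(mut m: Vec<f64>, n: usize) -> Option<Vec<f64>> {
    // `res` starts as the identity and accumulates the inverse.
    let mut res = vec![0.0; n * n];
    for i in 0..n {
        res[i * n + i] = 1.0;
    }

    for k in 0..n {
        // Search a non-zero entry in the k-th column, from row k down.
        let mut n0 = k;
        while n0 != n && m[n0 * n + k] == 0.0 {
            n0 += 1;
        }
        if n0 == n {
            return None;
        }
        // Swap the pivot row into place, in both matrices.
        if n0 != k {
            for j in 0..n {
                m.swap(n0 * n + j, k * n + j);
                res.swap(n0 * n + j, k * n + j);
            }
        }
        // Normalize the pivot row.
        let pivot = m[k * n + k];
        for j in 0..n {
            m[k * n + j] /= pivot;
            res[k * n + j] /= pivot;
        }
        // Eliminate the k-th column from every other row.
        for l in 0..n {
            if l != k {
                let factor = m[l * n + k];
                for j in 0..n {
                    m[l * n + j] -= m[k * n + j] * factor;
                    res[l * n + j] -= res[k * n + j] * factor;
                }
            }
        }
    }
    Some(res)
}

fn main() {
    // [[4, 7], [2, 6]] has determinant 10 and inverse [[0.6, -0.7], [-0.2, 0.4]].
    let inv = try_inverse(vec![4.0, 7.0, 2.0, 6.0], 2).unwrap();
    assert!((inv[0] - 0.6).abs() < 1e-12);
    assert!((inv[1] + 0.7).abs() < 1e-12);
    assert!((inv[2] + 0.2).abs() < 1e-12);
    assert!((inv[3] - 0.4).abs() < 1e-12);
}
```

Like the removed code, this pays no attention to numerical pivoting quality (the `FIXME` about searching for the maximum pivot instead still applies).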
-
-//         #[inline]
-//         fn transpose_mut(&mut self) {
-//             for i in 1 .. $dimension {
-//                 for j in 0 .. i {
-//                     self.swap((i, j), (j, i))
-//                 }
-//             }
-//         }
 
 
 /*
  *
  * Ordering
  *
  */
-impl<N, R: Dim, C: Dim, S> MeetSemilattice for Matrix<N, R, C, S>
+impl<N, R: Dim, C: Dim> MeetSemilattice for MatrixMN<N, R, C>
     where N: Scalar + MeetSemilattice,
-          S: OwnedStorage<N, R, C>,
-          S::Alloc: OwnedAllocator<N, R, C, S> {
+          DefaultAllocator: Allocator<N, R, C> {
     #[inline]
     fn meet(&self, other: &Self) -> Self {
         self.zip_map(other, |a, b| a.meet(&b))
     }
 }
 
-impl<N, R: Dim, C: Dim, S> JoinSemilattice for Matrix<N, R, C, S>
+impl<N, R: Dim, C: Dim> JoinSemilattice for MatrixMN<N, R, C>
     where N: Scalar + JoinSemilattice,
-          S: OwnedStorage<N, R, C>,
-          S::Alloc: OwnedAllocator<N, R, C, S> {
+          DefaultAllocator: Allocator<N, R, C> {
     #[inline]
     fn join(&self, other: &Self) -> Self {
         self.zip_map(other, |a, b| a.join(&b))
@@ -763,10 +349,9 @@ impl<N, R: Dim, C: Dim, S> JoinSemilattice for Matrix<N, R, C, S>
     }
 }
 
-impl<N, R: Dim, C: Dim, S> Lattice for Matrix<N, R, C, S>
+impl<N, R: Dim, C: Dim> Lattice for MatrixMN<N, R, C>
     where N: Scalar + Lattice,
-          S: OwnedStorage<N, R, C>,
-          S::Alloc: OwnedAllocator<N, R, C, S> {
+          DefaultAllocator: Allocator<N, R, C> {
     #[inline]
     fn meet_join(&self, other: &Self) -> (Self, Self) {
         let shape = self.data.shape();
@@ -21,7 +21,7 @@ use generic_array::{ArrayLength, GenericArray};
 
 use core::Scalar;
 use core::dimension::{DimName, U1};
-use core::storage::{Storage, StorageMut, Owned, OwnedStorage};
+use core::storage::{Storage, StorageMut, Owned, ContiguousStorage, ContiguousStorageMut};
 use core::allocator::Allocator;
 use core::default_allocator::DefaultAllocator;
 
@@ -139,22 +139,10 @@ unsafe impl<N, R, C> Storage<N, R, C> for MatrixArray<N, R, C>
           R: DimName,
           C: DimName,
           R::Value: Mul<C::Value>,
-          Prod<R::Value, C::Value>: ArrayLength<N> {
+          Prod<R::Value, C::Value>: ArrayLength<N>,
+          DefaultAllocator: Allocator<N, R, C, Buffer = Self> {
     type RStride = U1;
     type CStride = R;
-    type Alloc   = DefaultAllocator;
-
-    #[inline]
-    fn into_owned(self) -> Owned<N, R, C, Self::Alloc> {
-        self
-    }
-
-    #[inline]
-    fn clone_owned(&self) -> Owned<N, R, C, Self::Alloc> {
-        let it = self.iter().cloned();
-
-        Self::Alloc::allocate_from_iterator(self.shape().0, self.shape().1, it)
-    }
 
     #[inline]
     fn ptr(&self) -> *const N {
@@ -170,30 +158,44 @@ unsafe impl<N, R, C> Storage<N, R, C> for MatrixArray<N, R, C>
     fn strides(&self) -> (Self::RStride, Self::CStride) {
         (Self::RStride::name(), Self::CStride::name())
     }
 
+    #[inline]
+    fn is_contiguous(&self) -> bool {
+        true
+    }
+
+    #[inline]
+    fn into_owned(self) -> Owned<N, R, C>
+        where DefaultAllocator: Allocator<N, R, C> {
+        self
+    }
+
+    #[inline]
+    fn clone_owned(&self) -> Owned<N, R, C>
+        where DefaultAllocator: Allocator<N, R, C> {
+        let it = self.iter().cloned();
+
+        DefaultAllocator::allocate_from_iterator(self.shape().0, self.shape().1, it)
+    }
+
+    #[inline]
+    fn as_slice(&self) -> &[N] {
+        &self[..]
+    }
 }
 
 unsafe impl<N, R, C> StorageMut<N, R, C> for MatrixArray<N, R, C>
     where N: Scalar,
           R: DimName,
           C: DimName,
           R::Value: Mul<C::Value>,
-          Prod<R::Value, C::Value>: ArrayLength<N> {
+          Prod<R::Value, C::Value>: ArrayLength<N>,
+          DefaultAllocator: Allocator<N, R, C, Buffer = Self> {
     #[inline]
     fn ptr_mut(&mut self) -> *mut N {
         self[..].as_mut_ptr()
     }
 }
 
-unsafe impl<N, R, C> OwnedStorage<N, R, C> for MatrixArray<N, R, C>
-    where N: Scalar,
-          R: DimName,
-          C: DimName,
-          R::Value: Mul<C::Value>,
-          Prod<R::Value, C::Value>: ArrayLength<N> {
-    #[inline]
-    fn as_slice(&self) -> &[N] {
-        &self[..]
-    }
-
     #[inline]
     fn as_mut_slice(&mut self) -> &mut [N] {
@@ -201,6 +203,24 @@ unsafe impl<N, R, C> OwnedStorage<N, R, C> for MatrixArray<N, R, C>
     }
 }
 
+unsafe impl<N, R, C> ContiguousStorage<N, R, C> for MatrixArray<N, R, C>
+    where N: Scalar,
+          R: DimName,
+          C: DimName,
+          R::Value: Mul<C::Value>,
+          Prod<R::Value, C::Value>: ArrayLength<N>,
+          DefaultAllocator: Allocator<N, R, C, Buffer = Self> {
+}
+
+unsafe impl<N, R, C> ContiguousStorageMut<N, R, C> for MatrixArray<N, R, C>
+    where N: Scalar,
+          R: DimName,
+          C: DimName,
+          R::Value: Mul<C::Value>,
+          Prod<R::Value, C::Value>: ArrayLength<N>,
+          DefaultAllocator: Allocator<N, R, C, Buffer = Self> {
+}
+
 
 /*
  *
@@ -1,28 +1,31 @@
 use std::marker::PhantomData;
 use std::ops::{Range, RangeFrom, RangeTo, RangeFull};
 use std::slice;
 
 use core::{Scalar, Matrix};
-use core::dimension::{Dim, DimName, Dynamic, DimMul, DimProd, U1};
+use core::dimension::{Dim, DimName, Dynamic, U1};
 use core::iter::MatrixIter;
 use core::storage::{Storage, StorageMut, Owned};
-use core::allocator::Allocator;
+use core::default_allocator::DefaultAllocator;
 
 macro_rules! slice_storage_impl(
     ($doc: expr; $Storage: ident as $SRef: ty; $T: ident.$get_addr: ident ($Ptr: ty as $Ref: ty)) => {
         #[doc = $doc]
-        pub struct $T<'a, N: Scalar, R: Dim, C: Dim, RStride: Dim, CStride: Dim, Alloc> {
+        #[derive(Debug)]
+        pub struct $T<'a, N: Scalar, R: Dim, C: Dim, RStride: Dim, CStride: Dim> {
            ptr:       $Ptr,
            shape:     (R, C),
            strides:   (RStride, CStride),
-           _phantoms: PhantomData<($Ref, Alloc)>,
+           _phantoms: PhantomData<$Ref>,
        }
 
-        // Dynamic and () are arbitrary. It's just to be able to call the constructors with
-        // `Slice::`
-        impl<'a, N: Scalar, R: Dim, C: Dim> $T<'a, N, R, C, Dynamic, Dynamic, ()> {
+        // Dynamic is arbitrary. It's just to be able to call the constructors with `Slice::`
+        impl<'a, N: Scalar, R: Dim, C: Dim> $T<'a, N, R, C, Dynamic, Dynamic> {
            /// Create a new matrix slice without bound checking.
            #[inline]
            pub unsafe fn new_unchecked<RStor, CStor, S>(storage: $SRef, start: (usize, usize), shape: (R, C))
-                                                        -> $T<'a, N, R, C, S::RStride, S::CStride, S::Alloc>
+                                                        -> $T<'a, N, R, C, S::RStride, S::CStride>
                where RStor: Dim,
                      CStor: Dim,
                      S: $Storage<N, RStor, CStor> {
@@ -37,17 +40,29 @@ macro_rules! slice_storage_impl(
                                                start:   (usize, usize),
                                                shape:   (R, C),
                                                strides: (RStride, CStride))
-                                               -> $T<'a, N, R, C, RStride, CStride, S::Alloc>
+                                               -> $T<'a, N, R, C, RStride, CStride>
                where RStor: Dim,
                      CStor: Dim,
                      S: $Storage<N, RStor, CStor>,
                      RStride: Dim,
                      CStride: Dim {
 
+               $T::from_raw_parts(storage.$get_addr(start.0, start.1), shape, strides)
+           }
+
+           /// Create a new matrix slice without bound checking and from a raw pointer.
+           #[inline]
+           pub unsafe fn from_raw_parts<RStride, CStride>(ptr:     $Ptr,
+                                                          shape:   (R, C),
+                                                          strides: (RStride, CStride))
+                                                          -> $T<'a, N, R, C, RStride, CStride>
+               where RStride: Dim,
+                     CStride: Dim {
+
                $T {
-                   ptr:       storage.$get_addr(start.0, start.1),
+                   ptr:       ptr,
                    shape:     shape,
-                   strides:   (strides.0, strides.1),
+                   strides:   strides,
                    _phantoms: PhantomData
                }
            }
@@ -65,11 +80,11 @@ slice_storage_impl!("A mutable matrix data storage for mutable matrix slice. Onl
 );
 
 
-impl<'a, N: Scalar, R: Dim, C: Dim, RStride: Dim, CStride: Dim, Alloc> Copy
-for SliceStorage<'a, N, R, C, RStride, CStride, Alloc> { }
+impl<'a, N: Scalar, R: Dim, C: Dim, RStride: Dim, CStride: Dim> Copy
+for SliceStorage<'a, N, R, C, RStride, CStride> { }
 
-impl<'a, N: Scalar, R: Dim, C: Dim, RStride: Dim, CStride: Dim, Alloc> Clone
-for SliceStorage<'a, N, R, C, RStride, CStride, Alloc> {
+impl<'a, N: Scalar, R: Dim, C: Dim, RStride: Dim, CStride: Dim> Clone
+for SliceStorage<'a, N, R, C, RStride, CStride> {
     #[inline]
     fn clone(&self) -> Self {
         SliceStorage {
@@ -83,26 +98,11 @@ for SliceStorage<'a, N, R, C, RStride, CStride, Alloc> {
 
 macro_rules! storage_impl(
     ($($T: ident),* $(,)*) => {$(
-        unsafe impl<'a, N, R: Dim, C: Dim, RStride: Dim, CStride: Dim, Alloc> Storage<N, R, C>
-            for $T<'a, N, R, C, RStride, CStride, Alloc>
-            where N: Scalar,
-                  Alloc: Allocator<N, R, C> {
+        unsafe impl<'a, N: Scalar, R: Dim, C: Dim, RStride: Dim, CStride: Dim> Storage<N, R, C>
+            for $T<'a, N, R, C, RStride, CStride> {
 
             type RStride = RStride;
             type CStride = CStride;
-            type Alloc   = Alloc;
-
-            #[inline]
-            fn into_owned(self) -> Owned<N, R, C, Self::Alloc> {
-                self.clone_owned()
-            }
-
-            #[inline]
-            fn clone_owned(&self) -> Owned<N, R, C, Self::Alloc> {
-                let (nrows, ncols) = self.shape();
-                let it = MatrixIter::new(self).cloned();
-                Alloc::allocate_from_iterator(nrows, ncols, it)
-            }
 
             #[inline]
             fn ptr(&self) -> *const N {
@@ -118,20 +118,74 @@ macro_rules! storage_impl(
             fn strides(&self) -> (Self::RStride, Self::CStride) {
                 self.strides
             }
 
+            #[inline]
+            fn is_contiguous(&self) -> bool {
+                // Common cases that can be deduced at compile-time even if one of the dimensions
+                // is Dynamic.
+                if (RStride::is::<U1>() && C::is::<U1>()) || // Column vector.
+                   (CStride::is::<U1>() && R::is::<U1>()) {  // Row vector.
+                    true
+                }
+                else {
+                    let (nrows, _)     = self.shape();
+                    let (srows, scols) = self.strides();
+
+                    srows.value() == 1 && scols.value() == nrows.value()
+                }
+            }
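The general branch of the new `is_contiguous` above encodes the column-major contiguity condition: the row stride must be 1 and the column stride must equal the number of rows. A minimal sketch of that predicate on plain integers (the helper name is illustrative, not the crate's API):

```rust
// A column-major storage is contiguous when walking down a column moves
// one element at a time (srows == 1) and jumping to the next column
// skips exactly one full column (scols == nrows).
fn is_contiguous(nrows: usize, srows: usize, scols: usize) -> bool {
    srows == 1 && scols == nrows
}

fn main() {
    // A full 3x4 column-major matrix has strides (1, 3): contiguous.
    assert!(is_contiguous(3, 1, 3));
    // A slice taking every other row has row stride 2: not contiguous.
    assert!(!is_contiguous(3, 2, 3));
    // A slice of whole columns from a taller matrix has scols > nrows: gaps.
    assert!(!is_contiguous(2, 1, 3));
}
```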
+
+            #[inline]
+            fn into_owned(self) -> Owned<N, R, C>
+                where DefaultAllocator: Allocator<N, R, C> {
+                self.clone_owned()
+            }
+
+            #[inline]
+            fn clone_owned(&self) -> Owned<N, R, C>
+                where DefaultAllocator: Allocator<N, R, C> {
+                let (nrows, ncols) = self.shape();
+                let it = MatrixIter::new(self).cloned();
+                DefaultAllocator::allocate_from_iterator(nrows, ncols, it)
+            }
+
+            #[inline]
+            fn as_slice(&self) -> &[N] {
+                let (nrows, ncols) = self.shape();
+                if nrows.value() != 0 && ncols.value() != 0 {
+                    let sz = self.linear_index(nrows.value() - 1, ncols.value() - 1);
+                    unsafe { slice::from_raw_parts(self.ptr, sz + 1) }
+                }
+                else {
+                    unsafe { slice::from_raw_parts(self.ptr, 0) }
+                }
+            }
         }
     )*}
 );
 
 storage_impl!(SliceStorage, SliceStorageMut);
 
-unsafe impl<'a, N, R: Dim, C: Dim, RStride: Dim, CStride: Dim, Alloc> StorageMut<N, R, C>
-    for SliceStorageMut<'a, N, R, C, RStride, CStride, Alloc>
-    where N: Scalar,
-          Alloc: Allocator<N, R, C> {
+unsafe impl<'a, N: Scalar, R: Dim, C: Dim, RStride: Dim, CStride: Dim> StorageMut<N, R, C>
+    for SliceStorageMut<'a, N, R, C, RStride, CStride> {
     #[inline]
     fn ptr_mut(&mut self) -> *mut N {
         self.ptr
     }
 
+    #[inline]
+    fn as_mut_slice(&mut self) -> &mut [N] {
+        let (nrows, ncols) = self.shape();
+        if nrows.value() != 0 && ncols.value() != 0 {
+            let sz = self.linear_index(nrows.value() - 1, ncols.value() - 1);
+            unsafe { slice::from_raw_parts_mut(self.ptr, sz + 1) }
+        }
+        else {
+            unsafe { slice::from_raw_parts_mut(self.ptr, 0) }
+        }
+    }
 }
@@ -139,35 +193,45 @@ impl<N: Scalar, R: Dim, C: Dim, S: Storage<N, R, C>> Matrix<N, R, C, S> {
     #[inline]
     fn assert_slice_index(&self, start: (usize, usize), shape: (usize, usize), steps: (usize, usize)) {
         let my_shape = self.shape();
-        assert!(start.0 + (shape.0 - 1) * steps.0 <= my_shape.0, "Matrix slicing out of bounds.");
-        assert!(start.1 + (shape.1 - 1) * steps.1 <= my_shape.1, "Matrix slicing out of bounds.");
+        // NOTE: we don't do any subtraction to avoid underflow for zero-sized matrices.
+        //
+        // Terms that would have been negative are moved to the other side of the inequality
+        // instead.
+        assert!(start.0 + (steps.0 + 1) * shape.0 <= my_shape.0 + steps.0, "Matrix slicing out of bounds.");
+        assert!(start.1 + (steps.1 + 1) * shape.1 <= my_shape.1 + steps.1, "Matrix slicing out of bounds.");
     }
 }
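Under the new step semantics (`step` counts *skipped* rows/columns, so consecutive selected indices are `step + 1` apart), the last touched index is `start + (shape - 1) * (step + 1)`. Requiring it to stay below the matrix dimension, and clearing denominators to avoid underflow when `shape == 0`, gives exactly the inequality asserted above. A standalone sketch of that check (`slice_in_bounds` is an illustrative name):

```rust
// One axis of the rewritten `assert_slice_index`:
//     start + (steps + 1) * shape <= my_shape + steps
// is `start + (shape - 1) * (steps + 1) < my_shape` with all terms kept
// non-negative, so shape == 0 never underflows.
fn slice_in_bounds(start: usize, shape: usize, steps: usize, my_shape: usize) -> bool {
    start + (steps + 1) * shape <= my_shape + steps
}

fn main() {
    // 10 rows: 4 rows from index 2, skipping 1 row between each,
    // touches rows 2, 4, 6, 8 -- in bounds.
    assert!(slice_in_bounds(2, 4, 1, 10));
    // Starting at 4 instead would need row 10 -- out of bounds.
    assert!(!slice_in_bounds(4, 4, 1, 10));
    // An empty slice is always in bounds, even on an empty matrix.
    assert!(slice_in_bounds(0, 0, 0, 0));
}
```

This also makes the meaning in the changelog concrete: a slice written with `step = 3` against the old semantics (distance between indices) becomes `step = 2` (skipped indices) here.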
 
 
 macro_rules! matrix_slice_impl(
-    ($me: ident: $Me: ty, $MatrixSlice: ident, $SliceStorage: ident, $Storage: ident, $data: expr;
+    ($me: ident: $Me: ty, $MatrixSlice: ident, $SliceStorage: ident, $Storage: ident.$get_addr: ident (), $data: expr;
     $row: ident,
+    $row_part: ident,
     $rows: ident,
     $rows_with_step: ident,
     $fixed_rows: ident,
     $fixed_rows_with_step: ident,
     $rows_generic: ident,
+    $rows_generic_with_step: ident,
     $column: ident,
+    $column_part: ident,
     $columns: ident,
     $columns_with_step: ident,
     $fixed_columns: ident,
     $fixed_columns_with_step: ident,
     $columns_generic: ident,
+    $columns_generic_with_step: ident,
     $slice: ident,
     $slice_with_steps: ident,
     $fixed_slice: ident,
     $fixed_slice_with_steps: ident,
     $generic_slice: ident,
-    $generic_slice_with_steps: ident) => {
+    $generic_slice_with_steps: ident,
+    $rows_range_pair: ident,
+    $columns_range_pair: ident) => {
     /// A matrix slice.
-    pub type $MatrixSlice<'a, N, R, C, RStride, CStride, Alloc>
-        = Matrix<N, R, C, $SliceStorage<'a, N, R, C, RStride, CStride, Alloc>>;
+    pub type $MatrixSlice<'a, N, R, C, RStride, CStride>
+        = Matrix<N, R, C, $SliceStorage<'a, N, R, C, RStride, CStride>>;
 
     impl<N: Scalar, R: Dim, C: Dim, S: $Storage<N, R, C>> Matrix<N, R, C, S> {
         /*
@@ -175,73 +239,80 @@ macro_rules! matrix_slice_impl(
         * Row slicing.
         *
         */

-       /// Returns a slice containing the i-th column of this matrix.
+       /// Returns a slice containing the i-th row of this matrix.
        #[inline]
-       pub fn $row($me: $Me, i: usize) -> $MatrixSlice<N, U1, C, S::RStride, S::CStride, S::Alloc> {
+       pub fn $row($me: $Me, i: usize) -> $MatrixSlice<N, U1, C, S::RStride, S::CStride> {
            $me.$fixed_rows::<U1>(i)
        }

+       /// Returns a slice containing the `n` first elements of the i-th row of this matrix.
+       #[inline]
+       pub fn $row_part($me: $Me, i: usize, n: usize) -> $MatrixSlice<N, U1, Dynamic, S::RStride, S::CStride> {
+           $me.$generic_slice((i, 0), (U1, Dynamic::new(n)))
+       }
+
        /// Extracts from this matrix a set of consecutive rows.
        #[inline]
        pub fn $rows($me: $Me, first_row: usize, nrows: usize)
-           -> $MatrixSlice<N, Dynamic, C, S::RStride, S::CStride, S::Alloc> {
-
-           let my_shape = $me.data.shape();
-           $me.assert_slice_index((first_row, 0), (nrows, my_shape.1.value()), (1, 1));
-           let shape = (Dynamic::new(nrows), my_shape.1);
-
-           unsafe {
-               let data = $SliceStorage::new_unchecked($data, (first_row, 0), shape);
-               Matrix::from_data_statically_unchecked(data)
-           }
+           -> $MatrixSlice<N, Dynamic, C, S::RStride, S::CStride> {
+
+           $me.$rows_generic(first_row, Dynamic::new(nrows))
        }

-       /// Extracts from this matrix a set of consecutive rows regularly spaced by `step` rows.
+       /// Extracts from this matrix a set of consecutive rows regularly skipping `step` rows.
        #[inline]
        pub fn $rows_with_step($me: $Me, first_row: usize, nrows: usize, step: usize)
-           -> $MatrixSlice<N, Dynamic, C, Dynamic, S::CStride, S::Alloc> {
-
-           $me.$rows_generic(first_row, Dynamic::new(nrows), Dynamic::new(step))
+           -> $MatrixSlice<N, Dynamic, C, Dynamic, S::CStride> {
+
+           $me.$rows_generic_with_step(first_row, Dynamic::new(nrows), step)
        }

        /// Extracts a compile-time number of consecutive rows from this matrix.
        #[inline]
-       pub fn $fixed_rows<RSlice>($me: $Me, first_row: usize)
-           -> $MatrixSlice<N, RSlice, C, S::RStride, S::CStride, S::Alloc>
-           where RSlice: DimName {
+       pub fn $fixed_rows<RSlice: DimName>($me: $Me, first_row: usize)
+           -> $MatrixSlice<N, RSlice, C, S::RStride, S::CStride> {

            $me.$rows_generic(first_row, RSlice::name())
        }

+       /// Extracts from this matrix a compile-time number of rows regularly skipping `step`
+       /// rows.
+       #[inline]
+       pub fn $fixed_rows_with_step<RSlice: DimName>($me: $Me, first_row: usize, step: usize)
+           -> $MatrixSlice<N, RSlice, C, Dynamic, S::CStride> {
+
+           $me.$rows_generic_with_step(first_row, RSlice::name(), step)
+       }
+
+       /// Extracts from this matrix `nrows` rows. The number of rows may or may not be
+       /// known at compile-time.
+       #[inline]
+       pub fn $rows_generic<RSlice: Dim>($me: $Me, row_start: usize, nrows: RSlice)
+           -> $MatrixSlice<N, RSlice, C, S::RStride, S::CStride> {

            let my_shape = $me.data.shape();
-           $me.assert_slice_index((first_row, 0), (RSlice::dim(), my_shape.1.value()), (1, 1));
-           let shape = (RSlice::name(), my_shape.1);
+           $me.assert_slice_index((row_start, 0), (nrows.value(), my_shape.1.value()), (0, 0));
+
+           let shape = (nrows, my_shape.1);

            unsafe {
-               let data = $SliceStorage::new_unchecked($data, (first_row, 0), shape);
+               let data = $SliceStorage::new_unchecked($data, (row_start, 0), shape);
                Matrix::from_data_statically_unchecked(data)
            }
        }

-       /// Extracts from this matrix a compile-time number of rows regularly spaced by `step` rows.
-       #[inline]
-       pub fn $fixed_rows_with_step<RSlice>($me: $Me, first_row: usize, step: usize)
-           -> $MatrixSlice<N, RSlice, C, Dynamic, S::CStride, S::Alloc>
-           where RSlice: DimName {
-
-           $me.$rows_generic(first_row, RSlice::name(), Dynamic::new(step))
-       }
-
-       /// Extracts from this matrix `nrows` rows regularly spaced by `step` rows. Both arguments may
+       /// Extracts from this matrix `nrows` rows regularly skipping `step` rows. Both arguments may
        /// or may not be values known at compile-time.
        #[inline]
-       pub fn $rows_generic<RSlice, RStep>($me: $Me, row_start: usize, nrows: RSlice, step: RStep)
-           -> $MatrixSlice<N, RSlice, C, DimProd<RStep, S::RStride>, S::CStride, S::Alloc>
-           where RSlice: Dim,
-                 RStep: DimMul<S::RStride> {
+       pub fn $rows_generic_with_step<RSlice>($me: $Me, row_start: usize, nrows: RSlice, step: usize)
+           -> $MatrixSlice<N, RSlice, C, Dynamic, S::CStride>
+           where RSlice: Dim {

            let my_shape = $me.data.shape();
            let my_strides = $me.data.strides();
-           $me.assert_slice_index((row_start, 0), (nrows.value(), my_shape.1.value()), (step.value(), 1));
+           $me.assert_slice_index((row_start, 0), (nrows.value(), my_shape.1.value()), (step, 0));

-           let strides = (step.mul(my_strides.0), my_strides.1);
+           let strides = (Dynamic::new((step + 1) * my_strides.0.value()), my_strides.1);
            let shape = (nrows, my_shape.1);

            unsafe {
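The new `step` parameter counts the rows *skipped* between two consecutive picked rows, so the effective row stride of the slice is `(step + 1)` times the base stride, and `step == 0` keeps every row. A minimal sketch of that stride arithmetic (the helper name `effective_stride` is illustrative, not nalgebra API):

```rust
// `step` is the number of skipped rows/columns between two picked ones,
// so the slice's effective stride is (step + 1) * base_stride.
// A step of 0 therefore selects consecutive rows.
fn effective_stride(base_stride: usize, step: usize) -> usize {
    (step + 1) * base_stride
}

fn main() {
    assert_eq!(effective_stride(1, 0), 1); // step 0: consecutive rows
    assert_eq!(effective_stride(1, 2), 3); // skip 2 rows between picks
    assert_eq!(effective_stride(4, 1), 8); // non-unit base stride
}
```

Under the old semantics a step of 3 meant "every 3rd row"; the same slice is now requested with a step of 2.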
@@ -257,42 +328,59 @@ macro_rules! matrix_slice_impl(
         */
        /// Returns a slice containing the i-th column of this matrix.
        #[inline]
-       pub fn $column($me: $Me, i: usize) -> $MatrixSlice<N, R, U1, S::RStride, S::CStride, S::Alloc> {
+       pub fn $column($me: $Me, i: usize) -> $MatrixSlice<N, R, U1, S::RStride, S::CStride> {
            $me.$fixed_columns::<U1>(i)
        }

+       /// Returns a slice containing the `n` first elements of the i-th column of this matrix.
+       #[inline]
+       pub fn $column_part($me: $Me, i: usize, n: usize) -> $MatrixSlice<N, Dynamic, U1, S::RStride, S::CStride> {
+           $me.$generic_slice((0, i), (Dynamic::new(n), U1))
+       }
+
        /// Extracts from this matrix a set of consecutive columns.
        #[inline]
        pub fn $columns($me: $Me, first_col: usize, ncols: usize)
-           -> $MatrixSlice<N, R, Dynamic, S::RStride, S::CStride, S::Alloc> {
-
-           let my_shape = $me.data.shape();
-           $me.assert_slice_index((0, first_col), (my_shape.0.value(), ncols), (1, 1));
-           let shape = (my_shape.0, Dynamic::new(ncols));
-
-           unsafe {
-               let data = $SliceStorage::new_unchecked($data, (0, first_col), shape);
-               Matrix::from_data_statically_unchecked(data)
-           }
+           -> $MatrixSlice<N, R, Dynamic, S::RStride, S::CStride> {
+
+           $me.$columns_generic(first_col, Dynamic::new(ncols))
        }

-       /// Extracts from this matrix a set of consecutive columns regularly spaced by `step` columns.
+       /// Extracts from this matrix a set of consecutive columns regularly skipping `step`
+       /// columns.
        #[inline]
        pub fn $columns_with_step($me: $Me, first_col: usize, ncols: usize, step: usize)
-           -> $MatrixSlice<N, R, Dynamic, S::RStride, Dynamic, S::Alloc> {
-
-           $me.$columns_generic(first_col, Dynamic::new(ncols), Dynamic::new(step))
+           -> $MatrixSlice<N, R, Dynamic, S::RStride, Dynamic> {
+
+           $me.$columns_generic_with_step(first_col, Dynamic::new(ncols), step)
        }

        /// Extracts a compile-time number of consecutive columns from this matrix.
        #[inline]
-       pub fn $fixed_columns<CSlice>($me: $Me, first_col: usize)
-           -> $MatrixSlice<N, R, CSlice, S::RStride, S::CStride, S::Alloc>
-           where CSlice: DimName {
+       pub fn $fixed_columns<CSlice: DimName>($me: $Me, first_col: usize)
+           -> $MatrixSlice<N, R, CSlice, S::RStride, S::CStride> {

            $me.$columns_generic(first_col, CSlice::name())
        }

+       /// Extracts from this matrix a compile-time number of columns regularly skipping
+       /// `step` columns.
+       #[inline]
+       pub fn $fixed_columns_with_step<CSlice: DimName>($me: $Me, first_col: usize, step: usize)
+           -> $MatrixSlice<N, R, CSlice, S::RStride, Dynamic> {
+
+           $me.$columns_generic_with_step(first_col, CSlice::name(), step)
+       }
+
+       /// Extracts from this matrix `ncols` columns. The number of columns may or may not be
+       /// known at compile-time.
+       #[inline]
+       pub fn $columns_generic<CSlice: Dim>($me: $Me, first_col: usize, ncols: CSlice)
+           -> $MatrixSlice<N, R, CSlice, S::RStride, S::CStride> {

            let my_shape = $me.data.shape();
-           $me.assert_slice_index((0, first_col), (my_shape.0.value(), CSlice::dim()), (1, 1));
-           let shape = (my_shape.0, CSlice::name());
+           $me.assert_slice_index((0, first_col), (my_shape.0.value(), ncols.value()), (0, 0));
+           let shape = (my_shape.0, ncols);

            unsafe {
                let data = $SliceStorage::new_unchecked($data, (0, first_col), shape);
@@ -300,30 +388,19 @@ macro_rules! matrix_slice_impl(
            }
        }

-       /// Extracts from this matrix a compile-time number of columns regularly spaced by `step`
-       /// columns.
-       #[inline]
-       pub fn $fixed_columns_with_step<CSlice>($me: $Me, first_col: usize, step: usize)
-           -> $MatrixSlice<N, R, CSlice, S::RStride, Dynamic, S::Alloc>
-           where CSlice: DimName {
-
-           $me.$columns_generic(first_col, CSlice::name(), Dynamic::new(step))
-       }
-
-       /// Extracts from this matrix `ncols` columns regularly spaced by `step` columns. Both arguments may
+       /// Extracts from this matrix `ncols` columns skipping `step` columns. Both arguments may
        /// or may not be values known at compile-time.
        #[inline]
-       pub fn $columns_generic<CSlice, CStep>($me: $Me, first_col: usize, ncols: CSlice, step: CStep)
-           -> $MatrixSlice<N, R, CSlice, S::RStride, DimProd<CStep, S::CStride>, S::Alloc>
-           where CSlice: Dim,
-                 CStep: DimMul<S::CStride> {
+       pub fn $columns_generic_with_step<CSlice: Dim>($me: $Me, first_col: usize, ncols: CSlice, step: usize)
+           -> $MatrixSlice<N, R, CSlice, S::RStride, Dynamic> {

            let my_shape = $me.data.shape();
            let my_strides = $me.data.strides();

-           $me.assert_slice_index((0, first_col), (my_shape.0.value(), ncols.value()), (1, step.value()));
+           $me.assert_slice_index((0, first_col), (my_shape.0.value(), ncols.value()), (0, step));

-           let strides = (my_strides.0, step.mul(my_strides.1));
+           let strides = (my_strides.0, Dynamic::new((step + 1) * my_strides.1.value()));
            let shape = (my_shape.0, ncols);

            unsafe {
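In the column-major storage used here, element `(i, j)` lives at offset `i + j * cstride`, and skipping `step` columns multiplies the column stride by `step + 1`. A std-only sketch of what the strided column slice reads (the `pick_columns` helper is hypothetical, not part of the crate):

```rust
// Column-major storage: element (i, j) lives at offset i + j * cstride,
// with cstride = nrows for a contiguous matrix. Skipping `step` columns
// means reading every (step + 1)-th column.
fn pick_columns(data: &[i32], nrows: usize, ncols: usize, first: usize, step: usize) -> Vec<i32> {
    let cstride = nrows;
    (first..ncols)
        .step_by(step + 1)
        .flat_map(|j| data[j * cstride..j * cstride + nrows].iter().copied())
        .collect()
}

fn main() {
    // 2x4 matrix stored column-major: columns [1,2], [3,4], [5,6], [7,8].
    let m = [1, 2, 3, 4, 5, 6, 7, 8];
    // Start at column 0, skip 1 column between picks: columns 0 and 2.
    assert_eq!(pick_columns(&m, 2, 4, 0, 1), vec![1, 2, 5, 6]);
    // Start at column 1 with step 0: columns 1, 2 and 3.
    assert_eq!(pick_columns(&m, 2, 4, 1, 0), vec![3, 4, 5, 6, 7, 8]);
}
```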
@@ -341,9 +418,9 @@ macro_rules! matrix_slice_impl(
        /// consecutive elements.
        #[inline]
        pub fn $slice($me: $Me, start: (usize, usize), shape: (usize, usize))
-           -> $MatrixSlice<N, Dynamic, Dynamic, S::RStride, S::CStride, S::Alloc> {
+           -> $MatrixSlice<N, Dynamic, Dynamic, S::RStride, S::CStride> {

-           $me.assert_slice_index(start, shape, (1, 1));
+           $me.assert_slice_index(start, shape, (0, 0));
            let shape = (Dynamic::new(shape.0), Dynamic::new(shape.1));

            unsafe {
@@ -359,9 +436,8 @@ macro_rules! matrix_slice_impl(
        /// original matrix.
        #[inline]
        pub fn $slice_with_steps($me: $Me, start: (usize, usize), shape: (usize, usize), steps: (usize, usize))
-           -> $MatrixSlice<N, Dynamic, Dynamic, Dynamic, Dynamic, S::Alloc> {
+           -> $MatrixSlice<N, Dynamic, Dynamic, Dynamic, Dynamic> {
            let shape = (Dynamic::new(shape.0), Dynamic::new(shape.1));
-           let steps = (Dynamic::new(steps.0), Dynamic::new(steps.1));

            $me.$generic_slice_with_steps(start, shape, steps)
        }
@@ -370,11 +446,11 @@ macro_rules! matrix_slice_impl(
        /// CSlice::dim())` consecutive components.
        #[inline]
        pub fn $fixed_slice<RSlice, CSlice>($me: $Me, irow: usize, icol: usize)
-           -> $MatrixSlice<N, RSlice, CSlice, S::RStride, S::CStride, S::Alloc>
+           -> $MatrixSlice<N, RSlice, CSlice, S::RStride, S::CStride>
            where RSlice: DimName,
                  CSlice: DimName {

-           $me.assert_slice_index((irow, icol), (RSlice::dim(), CSlice::dim()), (1, 1));
+           $me.assert_slice_index((irow, icol), (RSlice::dim(), CSlice::dim()), (0, 0));
            let shape = (RSlice::name(), CSlice::name());

            unsafe {
@@ -389,22 +465,21 @@ macro_rules! matrix_slice_impl(
        /// the original matrix.
        #[inline]
        pub fn $fixed_slice_with_steps<RSlice, CSlice>($me: $Me, start: (usize, usize), steps: (usize, usize))
-           -> $MatrixSlice<N, RSlice, CSlice, Dynamic, Dynamic, S::Alloc>
+           -> $MatrixSlice<N, RSlice, CSlice, Dynamic, Dynamic>
            where RSlice: DimName,
                  CSlice: DimName {
            let shape = (RSlice::name(), CSlice::name());
-           let steps = (Dynamic::new(steps.0), Dynamic::new(steps.1));
            $me.$generic_slice_with_steps(start, shape, steps)
        }

        /// Creates a slice that may or may not have a fixed size and stride.
        #[inline]
        pub fn $generic_slice<RSlice, CSlice>($me: $Me, start: (usize, usize), shape: (RSlice, CSlice))
-           -> $MatrixSlice<N, RSlice, CSlice, S::RStride, S::CStride, S::Alloc>
+           -> $MatrixSlice<N, RSlice, CSlice, S::RStride, S::CStride>
            where RSlice: Dim,
                  CSlice: Dim {

-           $me.assert_slice_index(start, (shape.0.value(), shape.1.value()), (1, 1));
+           $me.assert_slice_index(start, (shape.0.value(), shape.1.value()), (0, 0));

            unsafe {
                let data = $SliceStorage::new_unchecked($data, start, shape);
@@ -414,69 +489,335 @@ macro_rules! matrix_slice_impl(
        /// Creates a slice that may or may not have a fixed size and stride.
        #[inline]
-       pub fn $generic_slice_with_steps<RSlice, CSlice, RStep, CStep>($me: $Me,
+       pub fn $generic_slice_with_steps<RSlice, CSlice>($me: $Me,
                                                         start: (usize, usize),
                                                         shape: (RSlice, CSlice),
-                                                        steps: (RStep, CStep))
-           -> $MatrixSlice<N, RSlice, CSlice, DimProd<RStep, S::RStride>, DimProd<CStep, S::CStride>, S::Alloc>
+                                                        steps: (usize, usize))
+           -> $MatrixSlice<N, RSlice, CSlice, Dynamic, Dynamic>
            where RSlice: Dim,
-                 CSlice: Dim,
-                 RStep: DimMul<S::RStride>,
-                 CStep: DimMul<S::CStride> {
+                 CSlice: Dim {

-           $me.assert_slice_index(start, (shape.0.value(), shape.1.value()), (steps.0.value(), steps.1.value()));
+           $me.assert_slice_index(start, (shape.0.value(), shape.1.value()), steps);

            let my_strides = $me.data.strides();
-           let strides = (steps.0.mul(my_strides.0), steps.1.mul(my_strides.1));
+           let strides = (Dynamic::new((steps.0 + 1) * my_strides.0.value()),
+                          Dynamic::new((steps.1 + 1) * my_strides.1.value()));

            unsafe {
                let data = $SliceStorage::new_with_strides_unchecked($data, start, shape, strides);
                Matrix::from_data_statically_unchecked(data)
            }
        }

+       /*
+        *
+        * Splitting.
+        *
+        */
+       /// Splits this NxM matrix into two parts delimited by two ranges.
+       ///
+       /// Panics if the ranges overlap or if the first range is empty.
+       #[inline]
+       pub fn $rows_range_pair<Range1: SliceRange<R>, Range2: SliceRange<R>>($me: $Me, r1: Range1, r2: Range2)
+           -> ($MatrixSlice<N, Range1::Size, C, S::RStride, S::CStride>,
+               $MatrixSlice<N, Range2::Size, C, S::RStride, S::CStride>) {
+
+           let (nrows, ncols) = $me.data.shape();
+           let strides = $me.data.strides();
+
+           let start1 = r1.begin(nrows);
+           let start2 = r2.begin(nrows);
+
+           let end1 = r1.end(nrows);
+           let end2 = r2.end(nrows);
+
+           let nrows1 = r1.size(nrows);
+           let nrows2 = r2.size(nrows);
+
+           assert!(start2 >= end1 || start1 >= end2, "Rows range pair: the slice ranges must not overlap.");
+           assert!(end2 <= nrows.value(), "Rows range pair: index out of range.");
+
+           unsafe {
+               let ptr1 = $data.$get_addr(start1, 0);
+               let ptr2 = $data.$get_addr(start2, 0);
+
+               let data1 = $SliceStorage::from_raw_parts(ptr1, (nrows1, ncols), strides);
+               let data2 = $SliceStorage::from_raw_parts(ptr2, (nrows2, ncols), strides);
+               let slice1 = Matrix::from_data_statically_unchecked(data1);
+               let slice2 = Matrix::from_data_statically_unchecked(data2);
+
+               (slice1, slice2)
+           }
+       }
+
+       /// Splits this NxM matrix into two parts delimited by two ranges.
+       ///
+       /// Panics if the ranges overlap or if the first range is empty.
+       #[inline]
+       pub fn $columns_range_pair<Range1: SliceRange<C>, Range2: SliceRange<C>>($me: $Me, r1: Range1, r2: Range2)
+           -> ($MatrixSlice<N, R, Range1::Size, S::RStride, S::CStride>,
+               $MatrixSlice<N, R, Range2::Size, S::RStride, S::CStride>) {
+
+           let (nrows, ncols) = $me.data.shape();
+           let strides = $me.data.strides();
+
+           let start1 = r1.begin(ncols);
+           let start2 = r2.begin(ncols);
+
+           let end1 = r1.end(ncols);
+           let end2 = r2.end(ncols);
+
+           let ncols1 = r1.size(ncols);
+           let ncols2 = r2.size(ncols);
+
+           assert!(start2 >= end1 || start1 >= end2, "Columns range pair: the slice ranges must not overlap.");
+           assert!(end2 <= ncols.value(), "Columns range pair: index out of range.");
+
+           unsafe {
+               let ptr1 = $data.$get_addr(0, start1);
+               let ptr2 = $data.$get_addr(0, start2);
+
+               let data1 = $SliceStorage::from_raw_parts(ptr1, (nrows, ncols1), strides);
+               let data2 = $SliceStorage::from_raw_parts(ptr2, (nrows, ncols2), strides);
+               let slice1 = Matrix::from_data_statically_unchecked(data1);
+               let slice2 = Matrix::from_data_statically_unchecked(data2);
+
+               (slice1, slice2)
+           }
+       }
    }
    }
);
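The non-overlap assertion guarding both range-pair splits can be stated on plain half-open index ranges: `[start1, end1)` and `[start2, end2)` may touch but not intersect. A small sketch (`ranges_disjoint` is an illustrative helper, not part of the crate):

```rust
// Disjointness check used before handing out the two slices:
// half-open ranges may share an endpoint but must not intersect.
fn ranges_disjoint(r1: (usize, usize), r2: (usize, usize)) -> bool {
    r2.0 >= r1.1 || r1.0 >= r2.1
}

fn main() {
    assert!(ranges_disjoint((0, 2), (2, 5)));  // touching ranges are fine
    assert!(ranges_disjoint((3, 5), (0, 3)));  // order does not matter
    assert!(!ranges_disjoint((0, 3), (2, 5))); // overlap is rejected
}
```

This is what makes it sound to return two mutable slices over the same storage in the `_mut` variants: disjoint ranges can never alias.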

matrix_slice_impl!(
-   self: &Self, MatrixSlice, SliceStorage, Storage, &self.data;
+   self: &Self, MatrixSlice, SliceStorage, Storage.get_address_unchecked(), &self.data;
    row,
+   row_part,
    rows,
    rows_with_step,
    fixed_rows,
    fixed_rows_with_step,
    rows_generic,
+   rows_generic_with_step,
    column,
+   column_part,
    columns,
    columns_with_step,
    fixed_columns,
    fixed_columns_with_step,
    columns_generic,
+   columns_generic_with_step,
    slice,
    slice_with_steps,
    fixed_slice,
    fixed_slice_with_steps,
    generic_slice,
-   generic_slice_with_steps);
+   generic_slice_with_steps,
+   rows_range_pair,
+   columns_range_pair);


matrix_slice_impl!(
-   self: &mut Self, MatrixSliceMut, SliceStorageMut, StorageMut, &mut self.data;
+   self: &mut Self, MatrixSliceMut, SliceStorageMut, StorageMut.get_address_unchecked_mut(), &mut self.data;
    row_mut,
+   row_part_mut,
    rows_mut,
    rows_with_step_mut,
    fixed_rows_mut,
    fixed_rows_with_step_mut,
    rows_generic_mut,
+   rows_generic_with_step_mut,
    column_mut,
+   column_part_mut,
    columns_mut,
    columns_with_step_mut,
    fixed_columns_mut,
    fixed_columns_with_step_mut,
    columns_generic_mut,
+   columns_generic_with_step_mut,
    slice_mut,
    slice_with_steps_mut,
    fixed_slice_mut,
    fixed_slice_with_steps_mut,
    generic_slice_mut,
-   generic_slice_with_steps_mut);
+   generic_slice_with_steps_mut,
+   rows_range_pair_mut,
+   columns_range_pair_mut);

+/// A range with a size that may be known at compile-time.
+///
+/// This may be:
+/// * A single `usize` index, e.g., `4`
+/// * A left-open range `std::ops::RangeTo`, e.g., `.. 4`
+/// * A right-open range `std::ops::RangeFrom`, e.g., `4 ..`
+/// * A full range `std::ops::RangeFull`, e.g., `..`
+pub trait SliceRange<D: Dim> {
+    /// Type of the range size. May be a type-level integer.
+    type Size: Dim;
+
+    /// The start index of the range.
+    fn begin(&self, shape: D) -> usize;
+    // NOTE: this is the index immediately after the last index.
+    /// The index immediately after the last index inside the range.
+    fn end(&self, shape: D) -> usize;
+    /// The number of elements of the range, i.e., `self.end - self.begin`.
+    fn size(&self, shape: D) -> Self::Size;
+}
+
+impl<D: Dim> SliceRange<D> for usize {
+    type Size = U1;
+
+    #[inline(always)]
+    fn begin(&self, _: D) -> usize {
+        *self
+    }
+
+    #[inline(always)]
+    fn end(&self, _: D) -> usize {
+        *self + 1
+    }
+
+    #[inline(always)]
+    fn size(&self, _: D) -> Self::Size {
+        U1
+    }
+}
+
+impl<D: Dim> SliceRange<D> for Range<usize> {
+    type Size = Dynamic;
+
+    #[inline(always)]
+    fn begin(&self, _: D) -> usize {
+        self.start
+    }
+
+    #[inline(always)]
+    fn end(&self, _: D) -> usize {
+        self.end
+    }
+
+    #[inline(always)]
+    fn size(&self, _: D) -> Self::Size {
+        Dynamic::new(self.end - self.start)
+    }
+}
+
+impl<D: Dim> SliceRange<D> for RangeFrom<usize> {
+    type Size = Dynamic;
+
+    #[inline(always)]
+    fn begin(&self, _: D) -> usize {
+        self.start
+    }
+
+    #[inline(always)]
+    fn end(&self, dim: D) -> usize {
+        dim.value()
+    }
+
+    #[inline(always)]
+    fn size(&self, dim: D) -> Self::Size {
+        Dynamic::new(dim.value() - self.start)
+    }
+}
+
+impl<D: Dim> SliceRange<D> for RangeTo<usize> {
+    type Size = Dynamic;
+
+    #[inline(always)]
+    fn begin(&self, _: D) -> usize {
+        0
+    }
+
+    #[inline(always)]
+    fn end(&self, _: D) -> usize {
+        self.end
+    }
+
+    #[inline(always)]
+    fn size(&self, _: D) -> Self::Size {
+        Dynamic::new(self.end)
+    }
+}
+
+impl<D: Dim> SliceRange<D> for RangeFull {
+    type Size = D;
+
+    #[inline(always)]
+    fn begin(&self, _: D) -> usize {
+        0
+    }
+
+    #[inline(always)]
+    fn end(&self, dim: D) -> usize {
+        dim.value()
+    }
+
+    #[inline(always)]
+    fn size(&self, dim: D) -> Self::Size {
+        dim
+    }
+}
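Every supported range form thus reduces to a `(begin, size)` pair given the dimension it indexes. The same resolution can be mirrored on plain `usize` dimensions; the `ResolveRange` trait below is a hypothetical std-only stand-in for `SliceRange`, not the crate's API:

```rust
use std::ops::{Range, RangeFrom, RangeFull, RangeTo};

// Every supported range form reduces to a (begin, size) pair
// once the dimension it indexes is known.
trait ResolveRange {
    fn resolve(&self, dim: usize) -> (usize, usize);
}

impl ResolveRange for usize {
    fn resolve(&self, _: usize) -> (usize, usize) { (*self, 1) }
}
impl ResolveRange for Range<usize> {
    fn resolve(&self, _: usize) -> (usize, usize) { (self.start, self.end - self.start) }
}
impl ResolveRange for RangeFrom<usize> {
    fn resolve(&self, dim: usize) -> (usize, usize) { (self.start, dim - self.start) }
}
impl ResolveRange for RangeTo<usize> {
    fn resolve(&self, _: usize) -> (usize, usize) { (0, self.end) }
}
impl ResolveRange for RangeFull {
    fn resolve(&self, dim: usize) -> (usize, usize) { (0, dim) }
}

fn main() {
    assert_eq!(4usize.resolve(10), (4, 1));       // single index
    assert_eq!((2usize..5).resolve(10), (2, 3));  // bounded range
    assert_eq!((3usize..).resolve(10), (3, 7));   // open on the right
    assert_eq!((..6usize).resolve(10), (0, 6));   // open on the left
    assert_eq!((..).resolve(10), (0, 10));        // full range
}
```

Note that only `usize` (size `U1`) and `RangeFull` (size `D`) keep a compile-time size in the real trait; the three partial ranges are `Dynamic`.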
+
+impl<N: Scalar, R: Dim, C: Dim, S: Storage<N, R, C>> Matrix<N, R, C, S> {
+    /// Slices a sub-matrix containing the rows indexed by the range `rows` and the columns indexed
+    /// by the range `cols`.
+    #[inline]
+    pub fn slice_range<RowRange, ColRange>(&self, rows: RowRange, cols: ColRange)
+        -> MatrixSlice<N, RowRange::Size, ColRange::Size, S::RStride, S::CStride>
+        where RowRange: SliceRange<R>,
+              ColRange: SliceRange<C> {
+
+        let (nrows, ncols) = self.data.shape();
+        self.generic_slice((rows.begin(nrows), cols.begin(ncols)),
+                           (rows.size(nrows), cols.size(ncols)))
+    }
+
+    /// Slice containing all the rows indexed by the range `rows`.
+    #[inline]
+    pub fn rows_range<RowRange: SliceRange<R>>(&self, rows: RowRange)
+        -> MatrixSlice<N, RowRange::Size, C, S::RStride, S::CStride> {
+        self.slice_range(rows, ..)
+    }
+
+    /// Slice containing all the columns indexed by the range `cols`.
+    #[inline]
+    pub fn columns_range<ColRange: SliceRange<C>>(&self, cols: ColRange)
+        -> MatrixSlice<N, R, ColRange::Size, S::RStride, S::CStride> {
+        self.slice_range(.., cols)
+    }
+}
+
+impl<N: Scalar, R: Dim, C: Dim, S: StorageMut<N, R, C>> Matrix<N, R, C, S> {
+    /// Slices a mutable sub-matrix containing the rows indexed by the range `rows` and the columns
+    /// indexed by the range `cols`.
+    pub fn slice_range_mut<RowRange, ColRange>(&mut self, rows: RowRange, cols: ColRange)
+        -> MatrixSliceMut<N, RowRange::Size, ColRange::Size, S::RStride, S::CStride>
+        where RowRange: SliceRange<R>,
+              ColRange: SliceRange<C> {
+
+        let (nrows, ncols) = self.data.shape();
+        self.generic_slice_mut((rows.begin(nrows), cols.begin(ncols)),
+                               (rows.size(nrows), cols.size(ncols)))
+    }
+
+    /// Slice containing all the rows indexed by the range `rows`.
+    #[inline]
+    pub fn rows_range_mut<RowRange: SliceRange<R>>(&mut self, rows: RowRange)
+        -> MatrixSliceMut<N, RowRange::Size, C, S::RStride, S::CStride> {
+        self.slice_range_mut(rows, ..)
+    }
+
+    /// Slice containing all the columns indexed by the range `cols`.
+    #[inline]
+    pub fn columns_range_mut<ColRange: SliceRange<C>>(&mut self, cols: ColRange)
+        -> MatrixSliceMut<N, R, ColRange::Size, S::RStride, S::CStride> {
+        self.slice_range_mut(.., cols)
+    }
+}
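`slice_range` ultimately resolves its two ranges to a `(start, shape)` pair and reads that block out of column-major storage. A minimal std-only sketch of the underlying read (the `sub_block` helper on a plain buffer is hypothetical):

```rust
// What a (start, shape) slice reads from a contiguous column-major
// buffer: element (i, j) of the block is data[(start.0 + i) + (start.1 + j) * nrows].
fn sub_block(data: &[i32], nrows: usize, start: (usize, usize), shape: (usize, usize)) -> Vec<i32> {
    let mut out = Vec::with_capacity(shape.0 * shape.1);
    for j in 0..shape.1 {
        for i in 0..shape.0 {
            out.push(data[(start.0 + i) + (start.1 + j) * nrows]);
        }
    }
    out
}

fn main() {
    // 3x3 matrix stored column-major:
    // 1 4 7
    // 2 5 8
    // 3 6 9
    let m = [1, 2, 3, 4, 5, 6, 7, 8, 9];
    // Rows 1.., columns ..2: the 2x2 block [[2, 5], [3, 6]],
    // returned here in column-major order.
    assert_eq!(sub_block(&m, 3, (1, 0), (2, 2)), vec![2, 3, 5, 6]);
}
```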
@@ -2,7 +2,8 @@ use std::ops::Deref;

 use core::Scalar;
 use core::dimension::{Dim, DimName, Dynamic, U1};
-use core::storage::{Storage, StorageMut, Owned, OwnedStorage};
+use core::storage::{Storage, StorageMut, Owned, ContiguousStorage, ContiguousStorageMut};
 use core::allocator::Allocator;
+use core::default_allocator::DefaultAllocator;

 #[cfg(feature = "abomonation-serialize")]
@@ -48,6 +49,26 @@ impl<N, R: Dim, C: Dim> MatrixVec<N, R, C> {
     pub unsafe fn data_mut(&mut self) -> &mut Vec<N> {
         &mut self.data
     }

+    /// Resizes the underlying mutable data storage and unwraps it.
+    ///
+    /// If `sz` is larger than the current size, additional elements are uninitialized.
+    /// If `sz` is smaller than the current size, excess elements are truncated.
+    #[inline]
+    pub unsafe fn resize(mut self, sz: usize) -> Vec<N> {
+        let len = self.len();
+
+        if sz < len {
+            self.data.set_len(sz);
+            self.data.shrink_to_fit();
+        }
+        else {
+            self.data.reserve_exact(sz - len);
+            self.data.set_len(sz);
+        }
+
+        self.data
+    }
 }

 impl<N, R: Dim, C: Dim> Deref for MatrixVec<N, R, C> {
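A safe `Vec`-based analogue of the `resize` semantics above, for illustration only: shrinking truncates and frees excess capacity, growing reserves space. Here the new elements are zero-filled, where the unsafe original deliberately leaves them uninitialized (`resize_buffer` is a hypothetical helper, not crate API):

```rust
// Safe sketch of MatrixVec::resize: shrink truncates and releases
// excess capacity; grow extends the buffer (zero-filled here instead
// of uninitialized, which is the only behavioral difference).
fn resize_buffer(mut data: Vec<f64>, sz: usize) -> Vec<f64> {
    if sz < data.len() {
        data.truncate(sz);
        data.shrink_to_fit();
    } else {
        data.resize(sz, 0.0);
    }
    data
}

fn main() {
    assert_eq!(resize_buffer(vec![1.0, 2.0, 3.0], 2), vec![1.0, 2.0]);
    assert_eq!(resize_buffer(vec![1.0], 3), vec![1.0, 0.0, 0.0]);
}
```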
@@ -65,24 +86,14 @@ impl<N, R: Dim, C: Dim> Deref for MatrixVec<N, R, C> {
 * Dynamic − Dynamic
 *
 */
-unsafe impl<N: Scalar, C: Dim> Storage<N, Dynamic, C> for MatrixVec<N, Dynamic, C> {
+unsafe impl<N: Scalar, C: Dim> Storage<N, Dynamic, C> for MatrixVec<N, Dynamic, C>
+    where DefaultAllocator: Allocator<N, Dynamic, C, Buffer = Self> {
     type RStride = U1;
     type CStride = Dynamic;
-    type Alloc = DefaultAllocator;
-
-    #[inline]
-    fn into_owned(self) -> Owned<N, Dynamic, C, Self::Alloc> {
-        self
-    }
-
-    #[inline]
-    fn clone_owned(&self) -> Owned<N, Dynamic, C, Self::Alloc> {
-        self.clone()
-    }

     #[inline]
     fn ptr(&self) -> *const N {
-        self[..].as_ptr()
+        self.data.as_ptr()
     }

     #[inline]
@@ -94,27 +105,39 @@ unsafe impl<N: Scalar, C: Dim> Storage<N, Dynamic, C> for MatrixVec<N, Dynamic,
     fn strides(&self) -> (Self::RStride, Self::CStride) {
         (Self::RStride::name(), self.nrows)
     }
-}
-
-
-unsafe impl<N: Scalar, R: DimName> Storage<N, R, Dynamic> for MatrixVec<N, R, Dynamic> {
-    type RStride = U1;
-    type CStride = R;
-    type Alloc = DefaultAllocator;

     #[inline]
-    fn into_owned(self) -> Owned<N, R, Dynamic, Self::Alloc> {
+    fn is_contiguous(&self) -> bool {
+        true
+    }
+
+    #[inline]
+    fn into_owned(self) -> Owned<N, Dynamic, C>
+        where DefaultAllocator: Allocator<N, Dynamic, C> {
         self
     }

     #[inline]
-    fn clone_owned(&self) -> Owned<N, R, Dynamic, Self::Alloc> {
+    fn clone_owned(&self) -> Owned<N, Dynamic, C>
+        where DefaultAllocator: Allocator<N, Dynamic, C> {
         self.clone()
     }
+
+    #[inline]
+    fn as_slice(&self) -> &[N] {
+        &self[..]
+    }
 }


+unsafe impl<N: Scalar, R: DimName> Storage<N, R, Dynamic> for MatrixVec<N, R, Dynamic>
+    where DefaultAllocator: Allocator<N, R, Dynamic, Buffer = Self> {
+    type RStride = U1;
+    type CStride = R;

     #[inline]
     fn ptr(&self) -> *const N {
-        self[..].as_ptr()
+        self.data.as_ptr()
     }

     #[inline]
@@ -126,6 +149,28 @@ unsafe impl<N: Scalar, R: DimName> Storage<N, R, Dynamic> for MatrixVec<N, R, Dy
     fn strides(&self) -> (Self::RStride, Self::CStride) {
         (Self::RStride::name(), self.nrows)
     }
+
+    #[inline]
+    fn is_contiguous(&self) -> bool {
+        true
+    }
+
+    #[inline]
+    fn into_owned(self) -> Owned<N, R, Dynamic>
+        where DefaultAllocator: Allocator<N, R, Dynamic> {
+        self
+    }
+
+    #[inline]
+    fn clone_owned(&self) -> Owned<N, R, Dynamic>
+        where DefaultAllocator: Allocator<N, R, Dynamic> {
+        self.clone()
+    }
+
+    #[inline]
+    fn as_slice(&self) -> &[N] {
+        &self[..]
+    }
 }

@@ -133,20 +178,14 @@ unsafe impl<N: Scalar, R: DimName> Storage<N, R, Dynamic> for MatrixVec<N, R, Dy
 /*
  *
- * StorageMut, OwnedStorage.
+ * StorageMut, ContiguousStorage.
  *
  */
-unsafe impl<N: Scalar, C: Dim> StorageMut<N, Dynamic, C> for MatrixVec<N, Dynamic, C> {
+unsafe impl<N: Scalar, C: Dim> StorageMut<N, Dynamic, C> for MatrixVec<N, Dynamic, C>
+    where DefaultAllocator: Allocator<N, Dynamic, C, Buffer = Self> {
     #[inline]
     fn ptr_mut(&mut self) -> *mut N {
-        self.as_mut_slice().as_mut_ptr()
-    }
-}
-
-unsafe impl<N: Scalar, C: Dim> OwnedStorage<N, Dynamic, C> for MatrixVec<N, Dynamic, C> {
-    #[inline]
-    fn as_slice(&self) -> &[N] {
-        &self[..]
+        self.data.as_mut_ptr()
     }

     #[inline]
@@ -155,18 +194,20 @@ unsafe impl<N: Scalar, C: Dim> StorageMut<N, Dynamic, C> for MatrixVec<N, Dyna
     }
 }

-unsafe impl<N: Scalar, R: DimName> StorageMut<N, R, Dynamic> for MatrixVec<N, R, Dynamic> {
-    #[inline]
-    fn ptr_mut(&mut self) -> *mut N {
-        self.as_mut_slice().as_mut_ptr()
-    }
+unsafe impl<N: Scalar, C: Dim> ContiguousStorage<N, Dynamic, C> for MatrixVec<N, Dynamic, C>
+    where DefaultAllocator: Allocator<N, Dynamic, C, Buffer = Self> {
 }

-unsafe impl<N: Scalar, R: DimName> OwnedStorage<N, R, Dynamic> for MatrixVec<N, R, Dynamic> {
+unsafe impl<N: Scalar, C: Dim> ContiguousStorageMut<N, Dynamic, C> for MatrixVec<N, Dynamic, C>
+    where DefaultAllocator: Allocator<N, Dynamic, C, Buffer = Self> {
+}
+
+
+unsafe impl<N: Scalar, R: DimName> StorageMut<N, R, Dynamic> for MatrixVec<N, R, Dynamic>
+    where DefaultAllocator: Allocator<N, R, Dynamic, Buffer = Self> {
     #[inline]
-    fn as_slice(&self) -> &[N] {
-        &self[..]
+    fn ptr_mut(&mut self) -> *mut N {
+        self.data.as_mut_ptr()
     }

     #[inline]
@@ -189,3 +230,11 @@ impl<N: Abomonation, R: Dim, C: Dim> Abomonation for MatrixVec<N, R, C> {
         self.data.exhume(bytes)
     }
 }

+unsafe impl<N: Scalar, R: DimName> ContiguousStorage<N, R, Dynamic> for MatrixVec<N, R, Dynamic>
+    where DefaultAllocator: Allocator<N, R, Dynamic, Buffer = Self> {
+}
+
+unsafe impl<N: Scalar, R: DimName> ContiguousStorageMut<N, R, Dynamic> for MatrixVec<N, R, Dynamic>
+    where DefaultAllocator: Allocator<N, R, Dynamic, Buffer = Self> {
+}
@@ -6,6 +6,7 @@ pub mod allocator;
 pub mod storage;
 pub mod coordinates;
 mod ops;
+mod blas;
 pub mod iter;
 pub mod default_allocator;
@@ -15,8 +16,6 @@ mod construction;
 mod properties;
 mod alias;
 mod matrix_alga;
-mod determinant;
-mod inverse;
 mod conversion;
 mod matrix_slice;
 mod matrix_array;
@@ -24,8 +23,7 @@ mod matrix_vec;
 mod cg;
 mod unit;
 mod componentwise;
-
-mod decompositions;
 mod edition;

 #[doc(hidden)]
 pub mod helper;
529 src/core/ops.rs
@@ -1,15 +1,16 @@
 use std::iter;
 use std::ops::{Add, AddAssign, Sub, SubAssign, Mul, MulAssign, Div, DivAssign, Neg,
                Index, IndexMut};
-use num::{Zero, One};
+use std::cmp::PartialOrd;
+use num::{Zero, One, Signed};

 use alga::general::{ClosedMul, ClosedDiv, ClosedAdd, ClosedSub, ClosedNeg};

-use core::{Scalar, Matrix, OwnedMatrix, SquareMatrix, MatrixSum, MatrixMul, MatrixTrMul};
-use core::dimension::{Dim, DimMul, DimName, DimProd};
-use core::constraint::{ShapeConstraint, SameNumberOfRows, SameNumberOfColumns, AreMultipliable};
-use core::storage::{Storage, StorageMut, OwnedStorage};
-use core::allocator::{SameShapeAllocator, Allocator, OwnedAllocator};
+use core::{DefaultAllocator, Scalar, Matrix, MatrixN, MatrixMN, MatrixSum};
+use core::dimension::{Dim, DimName, DimProd, DimMul};
+use core::constraint::{ShapeConstraint, SameNumberOfRows, SameNumberOfColumns, AreMultipliable, DimEq};
+use core::storage::{Storage, StorageMut, ContiguousStorageMut};
+use core::allocator::{SameShapeAllocator, Allocator, SameShapeR, SameShapeC};

 /*
  *
@@ -70,8 +71,9 @@ impl<N, R: Dim, C: Dim, S> IndexMut<(usize, usize)> for Matrix<N, R, C, S>
 */
 impl<N, R: Dim, C: Dim, S> Neg for Matrix<N, R, C, S>
     where N: Scalar + ClosedNeg,
-          S: Storage<N, R, C> {
-    type Output = OwnedMatrix<N, R, C, S::Alloc>;
+          S: Storage<N, R, C>,
+          DefaultAllocator: Allocator<N, R, C> {
+    type Output = MatrixMN<N, R, C>;

     #[inline]
     fn neg(self) -> Self::Output {
@@ -83,8 +85,9 @@ impl<N, R: Dim, C: Dim, S> Neg for Matrix<N, R, C, S>

 impl<'a, N, R: Dim, C: Dim, S> Neg for &'a Matrix<N, R, C, S>
     where N: Scalar + ClosedNeg,
-          S: Storage<N, R, C> {
-    type Output = OwnedMatrix<N, R, C, S::Alloc>;
+          S: Storage<N, R, C>,
+          DefaultAllocator: Allocator<N, R, C> {
+    type Output = MatrixMN<N, R, C>;

     #[inline]
     fn neg(self) -> Self::Output {
@@ -109,33 +112,156 @@ impl<N, R: Dim, C: Dim, S> Matrix<N, R, C, S>
 * Addition & Substraction
 *
 */

 macro_rules! componentwise_binop_impl(
     ($Trait: ident, $method: ident, $bound: ident;
-     $TraitAssign: ident, $method_assign: ident) => {
+     $TraitAssign: ident, $method_assign: ident, $method_assign_statically_unchecked: ident,
+     $method_assign_statically_unchecked_rhs: ident;
+     $method_to: ident, $method_to_statically_unchecked: ident) => {
+
+    impl<N, R1: Dim, C1: Dim, SA: Storage<N, R1, C1>> Matrix<N, R1, C1, SA>
+        where N: Scalar + $bound {
+
+        /*
+         *
+         * Methods without dimension checking at compile-time.
+         * This is useful for code reuse because the sum representative system does not plays
+         * easily with static checks.
+         *
+         */
+        #[inline]
+        fn $method_to_statically_unchecked<R2: Dim, C2: Dim, SB,
+                                           R3: Dim, C3: Dim, SC>(&self,
+                                                                 rhs: &Matrix<N, R2, C2, SB>,
+                                                                 out: &mut Matrix<N, R3, C3, SC>)
+            where SB: Storage<N, R2, C2>,
+                  SC: StorageMut<N, R3, C3> {
+            assert!(self.shape() == rhs.shape(), "Matrix addition/subtraction dimensions mismatch.");
+            assert!(self.shape() == out.shape(), "Matrix addition/subtraction output dimensions mismatch.");
+
+            // This is the most common case and should be deduced at compile-time.
+            // FIXME: use specialization instead?
+            if self.data.is_contiguous() && rhs.data.is_contiguous() && out.data.is_contiguous() {
+                let arr1 = self.data.as_slice();
+                let arr2 = rhs.data.as_slice();
+                let out  = out.data.as_mut_slice();
+                for i in 0 .. arr1.len() {
+                    unsafe {
+                        *out.get_unchecked_mut(i) = arr1.get_unchecked(i).$method(*arr2.get_unchecked(i));
+                    }
+                }
+            }
+            else {
+                for j in 0 .. self.ncols() {
+                    for i in 0 .. self.nrows() {
+                        unsafe {
+                            let val = self.get_unchecked(i, j).$method(*rhs.get_unchecked(i, j));
+                            *out.get_unchecked_mut(i, j) = val;
+                        }
+                    }
+                }
+            }
+        }
+
+        #[inline]
+        fn $method_assign_statically_unchecked<R2, C2, SB>(&mut self, rhs: &Matrix<N, R2, C2, SB>)
+            where R2: Dim,
+                  C2: Dim,
+                  SA: StorageMut<N, R1, C1>,
+                  SB: Storage<N, R2, C2> {
+            assert!(self.shape() == rhs.shape(), "Matrix addition/subtraction dimensions mismatch.");
+
+            // This is the most common case and should be deduced at compile-time.
+            // FIXME: use specialization instead?
+            if self.data.is_contiguous() && rhs.data.is_contiguous() {
+                let arr1 = self.data.as_mut_slice();
+                let arr2 = rhs.data.as_slice();
+                for i in 0 .. arr2.len() {
+                    unsafe {
+                        arr1.get_unchecked_mut(i).$method_assign(*arr2.get_unchecked(i));
+                    }
+                }
+            }
+            else {
+                for j in 0 .. rhs.ncols() {
+                    for i in 0 .. rhs.nrows() {
+                        unsafe {
+                            self.get_unchecked_mut(i, j).$method_assign(*rhs.get_unchecked(i, j))
+                        }
+                    }
+                }
+            }
+        }
+
+        #[inline]
+        fn $method_assign_statically_unchecked_rhs<R2, C2, SB>(&self, rhs: &mut Matrix<N, R2, C2, SB>)
+            where R2: Dim,
+                  C2: Dim,
+                  SB: StorageMut<N, R2, C2> {
+            assert!(self.shape() == rhs.shape(), "Matrix addition/subtraction dimensions mismatch.");
+
+            // This is the most common case and should be deduced at compile-time.
+            // FIXME: use specialization instead?
+            if self.data.is_contiguous() && rhs.data.is_contiguous() {
+                let arr1 = self.data.as_slice();
+                let arr2 = rhs.data.as_mut_slice();
+                for i in 0 .. arr1.len() {
+                    unsafe {
+                        let res = arr1.get_unchecked(i).$method(*arr2.get_unchecked(i));
+                        *arr2.get_unchecked_mut(i) = res;
+                    }
+                }
+            }
+            else {
+                for j in 0 .. self.ncols() {
+                    for i in 0 .. self.nrows() {
+                        unsafe {
+                            let r = rhs.get_unchecked_mut(i, j);
+                            *r = self.get_unchecked(i, j).$method(*r)
+                        }
+                    }
+                }
+            }
+        }
+
+        /*
+         *
+         * Methods without dimension checking at compile-time.
+         * This is useful for code reuse because the sum representative system does not plays
+         * easily with static checks.
+         *
+         */
+        /// Equivalent to `self + rhs` but stores the result into `out` to avoid allocations.
+        #[inline]
+        pub fn $method_to<R2: Dim, C2: Dim, SB,
+                          R3: Dim, C3: Dim, SC>(&self,
+                                                rhs: &Matrix<N, R2, C2, SB>,
+                                                out: &mut Matrix<N, R3, C3, SC>)
+            where SB: Storage<N, R2, C2>,
+                  SC: StorageMut<N, R3, C3>,
+                  ShapeConstraint: SameNumberOfRows<R1, R2> + SameNumberOfColumns<C1, C2> +
+                                   SameNumberOfRows<R1, R3> + SameNumberOfColumns<C1, C3> {
+            self.$method_to_statically_unchecked(rhs, out)
+        }
+    }

 impl<'b, N, R1, C1, R2, C2, SA, SB> $Trait<&'b Matrix<N, R2, C2, SB>> for Matrix<N, R1, C1, SA>
     where R1: Dim, C1: Dim, R2: Dim, C2: Dim,
           N: Scalar + $bound,
           SA: Storage<N, R1, C1>,
           SB: Storage<N, R2, C2>,
-          SA::Alloc: SameShapeAllocator<N, R1, C1, R2, C2, SA>,
+          DefaultAllocator: SameShapeAllocator<N, R1, C1, R2, C2>,
           ShapeConstraint: SameNumberOfRows<R1, R2> + SameNumberOfColumns<C1, C2> {
-    type Output = MatrixSum<N, R1, C1, R2, C2, SA>;
+    type Output = MatrixSum<N, R1, C1, R2, C2>;

     #[inline]
-    fn $method(self, right: &'b Matrix<N, R2, C2, SB>) -> Self::Output {
-        assert!(self.shape() == right.shape(), "Matrix addition/subtraction dimensions mismatch.");
-        let mut res = self.into_owned_sum::<R2, C2>();
-
-        // XXX: optimize our iterator!
-        //
-        // Using our own iterator prevents loop unrolling, wich breaks some optimization
-        // (like SIMD). On the other hand, using the slice iterator is 4x faster.
-
-        // for (left, right) in res.iter_mut().zip(right.iter()) {
-        for (left, right) in res.as_mut_slice().iter_mut().zip(right.iter()) {
-            *left = left.$method(*right)
-        }
-
+    fn $method(self, rhs: &'b Matrix<N, R2, C2, SB>) -> Self::Output {
+        assert!(self.shape() == rhs.shape(), "Matrix addition/subtraction dimensions mismatch.");
+        let mut res = self.into_owned_sum::<R2, C2>();
+        res.$method_assign_statically_unchecked(rhs);
         res
     }
 }
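The hunk above replaces per-element iterator loops with a shared helper that takes a contiguous fast path (raw slice indexing, which the optimizer can vectorize) and falls back to indexed `(i, j)` access for strided storage. A plain-Rust sketch of that strategy, using hypothetical helper names (`add_to_contiguous`, `add_to_indexed`) that are not part of nalgebra's API:

```rust
// Fast path: all three buffers are contiguous, so a flat loop suffices.
fn add_to_contiguous(a: &[f64], b: &[f64], out: &mut [f64]) {
    assert!(a.len() == b.len() && a.len() == out.len(),
            "Matrix addition/subtraction dimensions mismatch.");
    for i in 0..a.len() {
        out[i] = a[i] + b[i];
    }
}

// Fallback for non-contiguous (e.g. sliced) storage: walk the logical indices.
// `at` and `bt` abstract over the two inputs' element accessors.
fn add_to_indexed(nrows: usize, ncols: usize,
                  at: impl Fn(usize, usize) -> f64,
                  bt: impl Fn(usize, usize) -> f64,
                  out: &mut [f64]) {
    for j in 0..ncols {
        for i in 0..nrows {
            // Column-major output layout, as in nalgebra.
            out[i + j * nrows] = at(i, j) + bt(i, j);
        }
    }
}
```

The single flat loop in the fast path is what lets LLVM unroll and SIMD-vectorize the addition, which the diff's comments cite as roughly 4x faster than a custom matrix iterator.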
@@ -145,26 +271,16 @@ macro_rules! componentwise_binop_impl(
           N: Scalar + $bound,
           SA: Storage<N, R1, C1>,
           SB: Storage<N, R2, C2>,
-          SB::Alloc: SameShapeAllocator<N, R2, C2, R1, C1, SB>,
+          DefaultAllocator: SameShapeAllocator<N, R2, C2, R1, C1>,
           ShapeConstraint: SameNumberOfRows<R2, R1> + SameNumberOfColumns<C2, C1> {
-    type Output = MatrixSum<N, R2, C2, R1, C1, SB>;
+    type Output = MatrixSum<N, R2, C2, R1, C1>;

     #[inline]
-    fn $method(self, right: Matrix<N, R2, C2, SB>) -> Self::Output {
-        assert!(self.shape() == right.shape(), "Matrix addition/subtraction dimensions mismatch.");
-        let mut res = right.into_owned_sum::<R1, C1>();
-
-        // XXX: optimize our iterator!
-        //
-        // Using our own iterator prevents loop unrolling, wich breaks some optimization
-        // (like SIMD). On the other hand, using the slice iterator is 4x faster.
-
-        // for (left, right) in self.iter().zip(res.iter_mut()) {
-        for (left, right) in self.iter().zip(res.as_mut_slice().iter_mut()) {
-            *right = left.$method(*right)
-        }
-
-        res
+    fn $method(self, rhs: Matrix<N, R2, C2, SB>) -> Self::Output {
+        let mut rhs = rhs.into_owned_sum::<R1, C1>();
+        assert!(self.shape() == rhs.shape(), "Matrix addition/subtraction dimensions mismatch.");
+        self.$method_assign_statically_unchecked_rhs(&mut rhs);
+        rhs
     }
 }
@@ -173,13 +289,13 @@ macro_rules! componentwise_binop_impl(
           N: Scalar + $bound,
           SA: Storage<N, R1, C1>,
           SB: Storage<N, R2, C2>,
-          SA::Alloc: SameShapeAllocator<N, R1, C1, R2, C2, SA>,
+          DefaultAllocator: SameShapeAllocator<N, R1, C1, R2, C2>,
           ShapeConstraint: SameNumberOfRows<R1, R2> + SameNumberOfColumns<C1, C2> {
-    type Output = MatrixSum<N, R1, C1, R2, C2, SA>;
+    type Output = MatrixSum<N, R1, C1, R2, C2>;

     #[inline]
-    fn $method(self, right: Matrix<N, R2, C2, SB>) -> Self::Output {
-        self.$method(&right)
+    fn $method(self, rhs: Matrix<N, R2, C2, SB>) -> Self::Output {
+        self.$method(&rhs)
     }
 }
@@ -188,13 +304,21 @@ macro_rules! componentwise_binop_impl(
           N: Scalar + $bound,
           SA: Storage<N, R1, C1>,
           SB: Storage<N, R2, C2>,
-          SA::Alloc: SameShapeAllocator<N, R1, C1, R2, C2, SA>,
+          DefaultAllocator: SameShapeAllocator<N, R1, C1, R2, C2>,
           ShapeConstraint: SameNumberOfRows<R1, R2> + SameNumberOfColumns<C1, C2> {
-    type Output = MatrixSum<N, R1, C1, R2, C2, SA>;
+    type Output = MatrixSum<N, R1, C1, R2, C2>;

     #[inline]
-    fn $method(self, right: &'b Matrix<N, R2, C2, SB>) -> Self::Output {
-        self.clone_owned().$method(right)
+    fn $method(self, rhs: &'b Matrix<N, R2, C2, SB>) -> Self::Output {
+        let mut res = unsafe {
+            let (nrows, ncols) = self.shape();
+            let nrows: SameShapeR<R1, R2> = Dim::from_usize(nrows);
+            let ncols: SameShapeC<C1, C2> = Dim::from_usize(ncols);
+            Matrix::new_uninitialized_generic(nrows, ncols)
+        };
+
+        self.$method_to_statically_unchecked(rhs, &mut res);
+        res
     }
 }
@@ -206,11 +330,8 @@ macro_rules! componentwise_binop_impl(
           ShapeConstraint: SameNumberOfRows<R1, R2> + SameNumberOfColumns<C1, C2> {

     #[inline]
-    fn $method_assign(&mut self, right: &'b Matrix<N, R2, C2, SB>) {
-        assert!(self.shape() == right.shape(), "Matrix addition/subtraction dimensions mismatch.");
-        for (left, right) in self.iter_mut().zip(right.iter()) {
-            left.$method_assign(*right)
-        }
+    fn $method_assign(&mut self, rhs: &'b Matrix<N, R2, C2, SB>) {
+        self.$method_assign_statically_unchecked(rhs)
     }
 }
@@ -222,32 +343,34 @@ macro_rules! componentwise_binop_impl(
           ShapeConstraint: SameNumberOfRows<R1, R2> + SameNumberOfColumns<C1, C2> {

     #[inline]
-    fn $method_assign(&mut self, right: Matrix<N, R2, C2, SB>) {
-        self.$method_assign(&right)
+    fn $method_assign(&mut self, rhs: Matrix<N, R2, C2, SB>) {
+        self.$method_assign(&rhs)
     }
 }
 }
);

-componentwise_binop_impl!(Add, add, ClosedAdd; AddAssign, add_assign);
-componentwise_binop_impl!(Sub, sub, ClosedSub; SubAssign, sub_assign);
+componentwise_binop_impl!(Add, add, ClosedAdd;
+                          AddAssign, add_assign, add_assign_statically_unchecked, add_assign_statically_unchecked_mut;
+                          add_to, add_to_statically_unchecked);
+componentwise_binop_impl!(Sub, sub, ClosedSub;
+                          SubAssign, sub_assign, sub_assign_statically_unchecked, sub_assign_statically_unchecked_mut;
+                          sub_to, sub_to_statically_unchecked);

-impl<N, R: DimName, C: DimName, S> iter::Sum for Matrix<N, R, C, S>
+impl<N, R: DimName, C: DimName> iter::Sum for MatrixMN<N, R, C>
     where N: Scalar + ClosedAdd + Zero,
-          S: OwnedStorage<N, R, C>,
-          S::Alloc: OwnedAllocator<N, R, C, S>
+          DefaultAllocator: Allocator<N, R, C>
 {
-    fn sum<I: Iterator<Item = Matrix<N, R, C, S>>>(iter: I) -> Matrix<N, R, C, S> {
+    fn sum<I: Iterator<Item = MatrixMN<N, R, C>>>(iter: I) -> MatrixMN<N, R, C> {
         iter.fold(Matrix::zero(), |acc, x| acc + x)
     }
 }

-impl<'a, N, R: DimName, C: DimName, S> iter::Sum<&'a Matrix<N, R, C, S>> for Matrix<N, R, C, S>
+impl<'a, N, R: DimName, C: DimName> iter::Sum<&'a MatrixMN<N, R, C>> for MatrixMN<N, R, C>
     where N: Scalar + ClosedAdd + Zero,
-          S: OwnedStorage<N, R, C>,
-          S::Alloc: OwnedAllocator<N, R, C, S>
+          DefaultAllocator: Allocator<N, R, C>
 {
-    fn sum<I: Iterator<Item = &'a Matrix<N, R, C, S>>>(iter: I) -> Matrix<N, R, C, S> {
+    fn sum<I: Iterator<Item = &'a MatrixMN<N, R, C>>>(iter: I) -> MatrixMN<N, R, C> {
         iter.fold(Matrix::zero(), |acc, x| acc + x)
     }
 }
@@ -266,8 +389,9 @@ macro_rules! componentwise_scalarop_impl(
     $TraitAssign: ident, $method_assign: ident) => {
         impl<N, R: Dim, C: Dim, S> $Trait<N> for Matrix<N, R, C, S>
             where N: Scalar + $bound,
-                  S: Storage<N, R, C> {
-            type Output = OwnedMatrix<N, R, C, S::Alloc>;
+                  S: Storage<N, R, C>,
+                  DefaultAllocator: Allocator<N, R, C> {
+            type Output = MatrixMN<N, R, C>;

             #[inline]
             fn $method(self, rhs: N) -> Self::Output {

@@ -289,8 +413,9 @@ macro_rules! componentwise_scalarop_impl(

         impl<'a, N, R: Dim, C: Dim, S> $Trait<N> for &'a Matrix<N, R, C, S>
             where N: Scalar + $bound,
-                  S: Storage<N, R, C> {
-            type Output = OwnedMatrix<N, R, C, S::Alloc>;
+                  S: Storage<N, R, C>,
+                  DefaultAllocator: Allocator<N, R, C> {
+            type Output = MatrixMN<N, R, C>;

             #[inline]
             fn $method(self, rhs: N) -> Self::Output {
@@ -302,9 +427,11 @@ macro_rules! componentwise_scalarop_impl(
             where N: Scalar + $bound,
                   S: StorageMut<N, R, C> {
             #[inline]
-            fn $method_assign(&mut self, right: N) {
-                for left in self.iter_mut() {
-                    left.$method_assign(right)
+            fn $method_assign(&mut self, rhs: N) {
+                for j in 0 .. self.ncols() {
+                    for i in 0 .. self.nrows() {
+                        unsafe { self.get_unchecked_mut(i, j).$method_assign(rhs) };
+                    }
                 }
             }
         }
@@ -316,35 +443,35 @@ componentwise_scalarop_impl!(Div, div, ClosedDiv; DivAssign, div_assign);

 macro_rules! left_scalar_mul_impl(
     ($($T: ty),* $(,)*) => {$(
-        impl<R: Dim, C: Dim, S> Mul<Matrix<$T, R, C, S>> for $T
-            where S: Storage<$T, R, C> {
-            type Output = OwnedMatrix<$T, R, C, S::Alloc>;
+        impl<R: Dim, C: Dim, S: Storage<$T, R, C>> Mul<Matrix<$T, R, C, S>> for $T
+            where DefaultAllocator: Allocator<$T, R, C> {
+            type Output = MatrixMN<$T, R, C>;

             #[inline]
-            fn mul(self, right: Matrix<$T, R, C, S>) -> Self::Output {
-                let mut res = right.into_owned();
+            fn mul(self, rhs: Matrix<$T, R, C, S>) -> Self::Output {
+                let mut res = rhs.into_owned();

                 // XXX: optimize our iterator!
                 //
                 // Using our own iterator prevents loop unrolling, wich breaks some optimization
                 // (like SIMD). On the other hand, using the slice iterator is 4x faster.

-                // for right in res.iter_mut() {
-                for right in res.as_mut_slice().iter_mut() {
-                    *right = self * *right
+                // for rhs in res.iter_mut() {
+                for rhs in res.as_mut_slice().iter_mut() {
+                    *rhs = self * *rhs
                 }

                 res
             }
         }

-        impl<'b, R: Dim, C: Dim, S> Mul<&'b Matrix<$T, R, C, S>> for $T
-            where S: Storage<$T, R, C> {
-            type Output = OwnedMatrix<$T, R, C, S::Alloc>;
+        impl<'b, R: Dim, C: Dim, S: Storage<$T, R, C>> Mul<&'b Matrix<$T, R, C, S>> for $T
+            where DefaultAllocator: Allocator<$T, R, C> {
+            type Output = MatrixMN<$T, R, C>;

             #[inline]
-            fn mul(self, right: &'b Matrix<$T, R, C, S>) -> Self::Output {
-                self * right.clone_owned()
+            fn mul(self, rhs: &'b Matrix<$T, R, C, S>) -> Self::Output {
+                self * rhs.clone_owned()
             }
         }
     )*}
@@ -361,84 +488,66 @@ left_scalar_mul_impl!(
 // Matrix × Matrix
 impl<'a, 'b, N, R1: Dim, C1: Dim, R2: Dim, C2: Dim, SA, SB> Mul<&'b Matrix<N, R2, C2, SB>>
 for &'a Matrix<N, R1, C1, SA>
-    where N: Scalar + Zero + ClosedAdd + ClosedMul,
-          SB: Storage<N, R2, C2>,
+    where N: Scalar + Zero + One + ClosedAdd + ClosedMul,
           SA: Storage<N, R1, C1>,
-          SA::Alloc: Allocator<N, R1, C2>,
+          SB: Storage<N, R2, C2>,
+          DefaultAllocator: Allocator<N, R1, C2>,
           ShapeConstraint: AreMultipliable<R1, C1, R2, C2> {
-    type Output = MatrixMul<N, R1, C1, C2, SA>;
+    type Output = MatrixMN<N, R1, C2>;

     #[inline]
-    fn mul(self, right: &'b Matrix<N, R2, C2, SB>) -> Self::Output {
-        let (nrows1, ncols1) = self.shape();
-        let (nrows2, ncols2) = right.shape();
-
-        assert!(ncols1 == nrows2, "Matrix multiplication dimensions mismatch.");
-
-        let mut res: MatrixMul<N, R1, C1, C2, SA> = unsafe {
-            Matrix::new_uninitialized_generic(self.data.shape().0, right.data.shape().1)
+    fn mul(self, rhs: &'b Matrix<N, R2, C2, SB>) -> Self::Output {
+        let mut res = unsafe {
+            Matrix::new_uninitialized_generic(self.data.shape().0, rhs.data.shape().1)
         };

-        for i in 0 .. nrows1 {
-            for j in 0 .. ncols2 {
-                let mut acc = N::zero();
-
-                unsafe {
-                    for k in 0 .. ncols1 {
-                        acc = acc + *self.get_unchecked(i, k) * *right.get_unchecked(k, j);
-                    }
-
-                    *res.get_unchecked_mut(i, j) = acc;
-                }
-            }
-        }
-
+        self.mul_to(rhs, &mut res);
         res
     }
 }

 impl<'a, N, R1: Dim, C1: Dim, R2: Dim, C2: Dim, SA, SB> Mul<Matrix<N, R2, C2, SB>>
 for &'a Matrix<N, R1, C1, SA>
-    where N: Scalar + Zero + ClosedAdd + ClosedMul,
+    where N: Scalar + Zero + One + ClosedAdd + ClosedMul,
           SB: Storage<N, R2, C2>,
           SA: Storage<N, R1, C1>,
-          SA::Alloc: Allocator<N, R1, C2>,
+          DefaultAllocator: Allocator<N, R1, C2>,
           ShapeConstraint: AreMultipliable<R1, C1, R2, C2> {
-    type Output = MatrixMul<N, R1, C1, C2, SA>;
+    type Output = MatrixMN<N, R1, C2>;

     #[inline]
-    fn mul(self, right: Matrix<N, R2, C2, SB>) -> Self::Output {
-        self * &right
+    fn mul(self, rhs: Matrix<N, R2, C2, SB>) -> Self::Output {
+        self * &rhs
     }
 }

 impl<'b, N, R1: Dim, C1: Dim, R2: Dim, C2: Dim, SA, SB> Mul<&'b Matrix<N, R2, C2, SB>>
 for Matrix<N, R1, C1, SA>
-    where N: Scalar + Zero + ClosedAdd + ClosedMul,
+    where N: Scalar + Zero + One + ClosedAdd + ClosedMul,
           SB: Storage<N, R2, C2>,
           SA: Storage<N, R1, C1>,
-          SA::Alloc: Allocator<N, R1, C2>,
+          DefaultAllocator: Allocator<N, R1, C2>,
           ShapeConstraint: AreMultipliable<R1, C1, R2, C2> {
-    type Output = MatrixMul<N, R1, C1, C2, SA>;
+    type Output = MatrixMN<N, R1, C2>;

     #[inline]
-    fn mul(self, right: &'b Matrix<N, R2, C2, SB>) -> Self::Output {
-        &self * right
+    fn mul(self, rhs: &'b Matrix<N, R2, C2, SB>) -> Self::Output {
+        &self * rhs
     }
 }

 impl<N, R1: Dim, C1: Dim, R2: Dim, C2: Dim, SA, SB> Mul<Matrix<N, R2, C2, SB>>
 for Matrix<N, R1, C1, SA>
-    where N: Scalar + Zero + ClosedAdd + ClosedMul,
+    where N: Scalar + Zero + One + ClosedAdd + ClosedMul,
           SB: Storage<N, R2, C2>,
           SA: Storage<N, R1, C1>,
-          SA::Alloc: Allocator<N, R1, C2>,
+          DefaultAllocator: Allocator<N, R1, C2>,
           ShapeConstraint: AreMultipliable<R1, C1, R2, C2> {
-    type Output = MatrixMul<N, R1, C1, C2, SA>;
+    type Output = MatrixMN<N, R1, C2>;

     #[inline]
-    fn mul(self, right: Matrix<N, R2, C2, SB>) -> Self::Output {
-        &self * &right
+    fn mul(self, rhs: Matrix<N, R2, C2, SB>) -> Self::Output {
+        &self * &rhs
    }
 }
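The hunk above replaces the hand-rolled triple loop in `Mul` with a call to the new `mul_to`, which writes into a preallocated output. A minimal plain-Rust sketch of what `mul_to` computes, using a hypothetical `Mat` stand-in for nalgebra's column-major `Matrix` (not part of its API):

```rust
struct Mat {
    nrows: usize,
    ncols: usize,
    data: Vec<f64>, // column-major: element (i, j) lives at data[i + j * nrows]
}

impl Mat {
    fn get(&self, i: usize, j: usize) -> f64 {
        self.data[i + j * self.nrows]
    }

    // out = self * rhs, written into a preallocated buffer (no allocation here).
    fn mul_to(&self, rhs: &Mat, out: &mut Mat) {
        assert_eq!(self.ncols, rhs.nrows, "Matrix multiplication dimensions mismatch.");
        assert!(out.nrows == self.nrows && out.ncols == rhs.ncols,
                "Matrix multiplication output dimensions mismatch.");
        for j in 0..rhs.ncols {
            for i in 0..self.nrows {
                let mut acc = 0.0;
                for k in 0..self.ncols {
                    acc += self.get(i, k) * rhs.get(k, j);
                }
                out.data[i + j * out.nrows] = acc;
            }
        }
    }
}
```

In the diff, the actual `mul_to` forwards to `gemm` (`out = 1·self·rhs + 0·out`), so this triple loop is only the reference semantics, not the optimized code path.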
@@ -447,84 +556,106 @@ for Matrix<N, R1, C1, SA>
 // − we can't use `a *= b` when C2 is not equal to C1.
 impl<N, R1, C1, R2, SA, SB> MulAssign<Matrix<N, R2, C1, SB>> for Matrix<N, R1, C1, SA>
     where R1: Dim, C1: Dim, R2: Dim,
-          N: Scalar + Zero + ClosedAdd + ClosedMul,
+          N: Scalar + Zero + One + ClosedAdd + ClosedMul,
           SB: Storage<N, R2, C1>,
-          SA: OwnedStorage<N, R1, C1>,
+          SA: ContiguousStorageMut<N, R1, C1> + Clone,
           ShapeConstraint: AreMultipliable<R1, C1, R2, C1>,
-          SA::Alloc: OwnedAllocator<N, R1, C1, SA> {
+          DefaultAllocator: Allocator<N, R1, C1, Buffer = SA> {
     #[inline]
-    fn mul_assign(&mut self, right: Matrix<N, R2, C1, SB>) {
-        *self = &*self * right
+    fn mul_assign(&mut self, rhs: Matrix<N, R2, C1, SB>) {
+        *self = &*self * rhs
     }
 }

 impl<'b, N, R1, C1, R2, SA, SB> MulAssign<&'b Matrix<N, R2, C1, SB>> for Matrix<N, R1, C1, SA>
     where R1: Dim, C1: Dim, R2: Dim,
-          N: Scalar + Zero + ClosedAdd + ClosedMul,
+          N: Scalar + Zero + One + ClosedAdd + ClosedMul,
           SB: Storage<N, R2, C1>,
-          SA: OwnedStorage<N, R1, C1>,
+          SA: ContiguousStorageMut<N, R1, C1> + Clone,
           ShapeConstraint: AreMultipliable<R1, C1, R2, C1>,
           // FIXME: this is too restrictive. See comments for the non-ref version.
-          SA::Alloc: OwnedAllocator<N, R1, C1, SA> {
+          DefaultAllocator: Allocator<N, R1, C1, Buffer = SA> {
     #[inline]
-    fn mul_assign(&mut self, right: &'b Matrix<N, R2, C1, SB>) {
-        *self = &*self * right
+    fn mul_assign(&mut self, rhs: &'b Matrix<N, R2, C1, SB>) {
+        *self = &*self * rhs
     }
 }

 // Transpose-multiplication.
 impl<N, R1: Dim, C1: Dim, SA> Matrix<N, R1, C1, SA>
-    where N: Scalar,
+    where N: Scalar + Zero + One + ClosedAdd + ClosedMul,
           SA: Storage<N, R1, C1> {
-    /// Equivalent to `self.transpose() * right`.
+    /// Equivalent to `self.transpose() * rhs`.
     #[inline]
-    pub fn tr_mul<R2: Dim, C2: Dim, SB>(&self, right: &Matrix<N, R2, C2, SB>) -> MatrixTrMul<N, R1, C1, C2, SA>
-        where N: Zero + ClosedAdd + ClosedMul,
-              SB: Storage<N, R2, C2>,
-              SA::Alloc: Allocator<N, C1, C2>,
-              ShapeConstraint: AreMultipliable<C1, R1, R2, C2> {
+    pub fn tr_mul<R2: Dim, C2: Dim, SB>(&self, rhs: &Matrix<N, R2, C2, SB>) -> MatrixMN<N, C1, C2>
+        where SB: Storage<N, R2, C2>,
+              DefaultAllocator: Allocator<N, C1, C2>,
+              ShapeConstraint: SameNumberOfRows<R1, R2> {
+
+        let mut res = unsafe {
+            Matrix::new_uninitialized_generic(self.data.shape().1, rhs.data.shape().1)
+        };
+
+        self.tr_mul_to(rhs, &mut res);
+        res
+    }
+
+    /// Equivalent to `self.transpose() * rhs` but stores the result into `out` to avoid
+    /// allocations.
+    #[inline]
+    pub fn tr_mul_to<R2: Dim, C2: Dim, SB,
+                     R3: Dim, C3: Dim, SC>(&self,
+                                           rhs: &Matrix<N, R2, C2, SB>,
+                                           out: &mut Matrix<N, R3, C3, SC>)
+        where SB: Storage<N, R2, C2>,
+              SC: StorageMut<N, R3, C3>,
+              ShapeConstraint: SameNumberOfRows<R1, R2> +
+                               DimEq<C1, R3> +
+                               DimEq<C2, C3> {
         let (nrows1, ncols1) = self.shape();
-        let (nrows2, ncols2) = right.shape();
+        let (nrows2, ncols2) = rhs.shape();
+        let (nrows3, ncols3) = out.shape();

         assert!(nrows1 == nrows2, "Matrix multiplication dimensions mismatch.");
-
-        let mut res: MatrixTrMul<N, R1, C1, C2, SA> = unsafe {
-            Matrix::new_uninitialized_generic(self.data.shape().1, right.data.shape().1)
-        };
+        assert!(nrows3 == ncols1 && ncols3 == ncols2, "Matrix multiplication output dimensions mismatch.");

         for i in 0 .. ncols1 {
             for j in 0 .. ncols2 {
-                let mut acc = N::zero();
-
-                unsafe {
-                    for k in 0 .. nrows1 {
-                        acc += *self.get_unchecked(k, i) * *right.get_unchecked(k, j);
-                    }
-
-                    *res.get_unchecked_mut(i, j) = acc;
-                }
+                let dot = self.column(i).dot(&rhs.column(j));
+                unsafe { *out.get_unchecked_mut(i, j) = dot };
             }
         }
-
-        res
+    }
+
+    /// Equivalent to `self * rhs` but stores the result into `out` to avoid allocations.
+    #[inline]
+    pub fn mul_to<R2: Dim, C2: Dim, SB,
+                  R3: Dim, C3: Dim, SC>(&self,
+                                        rhs: &Matrix<N, R2, C2, SB>,
+                                        out: &mut Matrix<N, R3, C3, SC>)
+        where SB: Storage<N, R2, C2>,
+              SC: StorageMut<N, R3, C3>,
+              ShapeConstraint: SameNumberOfRows<R3, R1> +
+                               SameNumberOfColumns<C3, C2> +
+                               AreMultipliable<R1, C1, R2, C2> {
+        out.gemm(N::one(), self, rhs, N::zero());
     }

     /// The kronecker product of two matrices (aka. tensor product of the corresponding linear
     /// maps).
     pub fn kronecker<R2: Dim, C2: Dim, SB>(&self, rhs: &Matrix<N, R2, C2, SB>)
-        -> OwnedMatrix<N, DimProd<R1, R2>, DimProd<C1, C2>, SA::Alloc>
+        -> MatrixMN<N, DimProd<R1, R2>, DimProd<C1, C2>>
         where N: ClosedMul,
               R1: DimMul<R2>,
               C1: DimMul<C2>,
               SB: Storage<N, R2, C2>,
-              SA::Alloc: Allocator<N, DimProd<R1, R2>, DimProd<C1, C2>> {
+              DefaultAllocator: Allocator<N, DimProd<R1, R2>, DimProd<C1, C2>> {
         let (nrows1, ncols1) = self.data.shape();
         let (nrows2, ncols2) = rhs.data.shape();

-        let mut res: OwnedMatrix<_, _, _, SA::Alloc> =
-            unsafe { Matrix::new_uninitialized_generic(nrows1.mul(nrows2), ncols1.mul(ncols2)) };
+        let mut res = unsafe { Matrix::new_uninitialized_generic(nrows1.mul(nrows2), ncols1.mul(ncols2)) };

         {
             let mut data_res = res.data.ptr_mut();
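The `kronecker` method above builds the block matrix whose `(i1, j1)` block is `a[i1][j1] * B`. A row-major plain-Rust sketch of that definition (a free function for illustration, not nalgebra's pointer-based implementation):

```rust
// Kronecker product of an (ar × ac) matrix `a` with a (br × bc) matrix `b`,
// producing an (ar·br × ac·bc) matrix.
fn kronecker(a: &[Vec<f64>], b: &[Vec<f64>]) -> Vec<Vec<f64>> {
    let (ar, ac) = (a.len(), a[0].len());
    let (br, bc) = (b.len(), b[0].len());
    let mut res = vec![vec![0.0; ac * bc]; ar * br];
    for i1 in 0..ar {
        for j1 in 0..ac {
            for i2 in 0..br {
                for j2 in 0..bc {
                    // Block (i1, j1) of the result is a[i1][j1] * B.
                    res[i1 * br + i2][j1 * bc + j2] = a[i1][j1] * b[i2][j2];
                }
            }
        }
    }
    res
}
```

The nalgebra version computes the same entries but writes them directly through the raw column-major pointer (`res.data.ptr_mut()`) visible at the end of the hunk.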
@@ -549,22 +680,76 @@ impl<N, R1: Dim, C1: Dim, SA> Matrix<N, R1, C1, SA>
     }
 }

-impl<N, D: DimName, S> iter::Product for SquareMatrix<N, D, S>
+impl<N: Scalar + ClosedAdd, R: Dim, C: Dim, S: Storage<N, R, C>> Matrix<N, R, C, S> {
+    /// Adds a scalar to `self`.
+    #[inline]
+    pub fn add_scalar(&self, rhs: N) -> MatrixMN<N, R, C>
+        where DefaultAllocator: Allocator<N, R, C> {
+        let mut res = self.clone_owned();
+        res.add_scalar_mut(rhs);
+        res
+    }
+
+    /// Adds a scalar to `self` in-place.
+    #[inline]
+    pub fn add_scalar_mut(&mut self, rhs: N)
+        where S: StorageMut<N, R, C> {
+        for e in self.iter_mut() {
+            *e += rhs
+        }
+    }
+}
+
+impl<N, D: DimName> iter::Product for MatrixN<N, D>
     where N: Scalar + Zero + One + ClosedMul + ClosedAdd,
-          S: OwnedStorage<N, D, D>,
-          S::Alloc: OwnedAllocator<N, D, D, S>
+          DefaultAllocator: Allocator<N, D, D>
 {
-    fn product<I: Iterator<Item = SquareMatrix<N, D, S>>>(iter: I) -> SquareMatrix<N, D, S> {
+    fn product<I: Iterator<Item = MatrixN<N, D>>>(iter: I) -> MatrixN<N, D> {
         iter.fold(Matrix::one(), |acc, x| acc * x)
     }
 }

-impl<'a, N, D: DimName, S> iter::Product<&'a SquareMatrix<N, D, S>> for SquareMatrix<N, D, S>
+impl<'a, N, D: DimName> iter::Product<&'a MatrixN<N, D>> for MatrixN<N, D>
     where N: Scalar + Zero + One + ClosedMul + ClosedAdd,
-          S: OwnedStorage<N, D, D>,
-          S::Alloc: OwnedAllocator<N, D, D, S>
+          DefaultAllocator: Allocator<N, D, D>
 {
-    fn product<I: Iterator<Item = &'a SquareMatrix<N, D, S>>>(iter: I) -> SquareMatrix<N, D, S> {
+    fn product<I: Iterator<Item = &'a MatrixN<N, D>>>(iter: I) -> MatrixN<N, D> {
         iter.fold(Matrix::one(), |acc, x| acc * x)
     }
 }

+impl<N: Scalar + PartialOrd + Signed, R: Dim, C: Dim, S: Storage<N, R, C>> Matrix<N, R, C, S> {
+    /// Returns the absolute value of the coefficient with the largest absolute value.
+    #[inline]
+    pub fn amax(&self) -> N {
+        let mut max = N::zero();
+
+        for e in self.iter() {
+            let ae = e.abs();
+
+            if ae > max {
+                max = ae;
+            }
+        }
+
+        max
+    }
+
+    /// Returns the absolute value of the coefficient with the smallest absolute value.
+    #[inline]
+    pub fn amin(&self) -> N {
+        let mut it  = self.iter();
+        let mut min = it.next().expect("amin: empty matrices not supported.").abs();
+
+        for e in it {
+            let ae = e.abs();
+
+            if ae < min {
+                min = ae;
+            }
+        }
+
+        min
+    }
+}
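The new `amax`/`amin` methods return the largest and smallest absolute values among a matrix's coefficients. Their semantics over a flat coefficient slice can be sketched in plain Rust (standalone functions for illustration, not nalgebra's API):

```rust
// Largest absolute value; 0.0 for an empty input, mirroring the N::zero() seed above.
fn amax(v: &[f64]) -> f64 {
    let mut max = 0.0;
    for e in v {
        let ae = e.abs();
        if ae > max {
            max = ae;
        }
    }
    max
}

// Smallest absolute value; like the method above, empty inputs are not supported.
fn amin(v: &[f64]) -> f64 {
    let mut it = v.iter();
    let mut min = it.next().expect("amin: empty matrices not supported.").abs();
    for e in it {
        let ae = e.abs();
        if ae < min {
            min = ae;
        }
    }
    min
}
```

Note the asymmetry carried over from the diff: `amax` seeds with zero and so tolerates empty input, while `amin` panics on it.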
@ -2,33 +2,38 @@
|
|||
use num::{Zero, One};
|
||||
use approx::ApproxEq;
|
||||
|
||||
use alga::general::{ClosedAdd, ClosedMul, ClosedSub, Field};
|
||||
use alga::general::{ClosedAdd, ClosedMul, Real};
|
||||
|
||||
use core::{Scalar, Matrix, SquareMatrix};
|
||||
use core::dimension::Dim;
|
||||
use core::{DefaultAllocator, Scalar, Matrix, SquareMatrix};
|
||||
use core::dimension::{Dim, DimMin};
|
||||
use core::storage::Storage;
|
||||
use core::allocator::Allocator;
|
||||
|
||||
|
||||
impl<N: Scalar, R: Dim, C: Dim, S: Storage<N, R, C>> Matrix<N, R, C, S> {
|
||||
/// Indicates if this is a square matrix.
|
||||
#[inline]
|
||||
pub fn is_square(&self) -> bool {
|
||||
let shape = self.shape();
|
||||
shape.0 == shape.1
|
||||
pub fn is_empty(&self) -> bool {
|
||||
let (nrows, ncols) = self.shape();
|
||||
nrows == 0 || ncols == 0
|
||||
}
|
||||
|
||||
/// Indicates if this is a square matrix.
|
||||
#[inline]
|
||||
pub fn is_square(&self) -> bool {
|
||||
let (nrows, ncols) = self.shape();
|
||||
nrows == ncols
|
||||
}
|
||||
}
|
||||
|
||||
impl<N: Scalar, R: Dim, C: Dim, S: Storage<N, R, C>> Matrix<N, R, C, S>
|
||||
// FIXME: ApproxEq prevents us from using those methods on integer matrices…
|
||||
where N: ApproxEq,
|
||||
N::Epsilon: Copy {
|
||||
/// Indicated if this is the identity matrix within a relative error of `eps`.
|
||||
///
|
||||
/// If the matrix is diagonal, this checks that diagonal elements (i.e. at coordinates `(i, i)`
|
||||
/// for i from `0` to `min(R, C)`) are equal one; and that all other elements are zero.
|
||||
#[inline]
|
||||
pub fn is_identity(&self, eps: N::Epsilon) -> bool
|
||||
where N: Zero + One {
|
||||
where N: Zero + One + ApproxEq,
|
||||
N::Epsilon: Copy {
|
||||
let (nrows, ncols) = self.shape();
|
||||
let d;
|
||||
|
||||
|
@ -75,32 +80,35 @@ impl<N: Scalar, R: Dim, C: Dim, S: Storage<N, R, C>> Matrix<N, R, C, S>
|
|||
|
||||
true
|
||||
}
|
||||
}
impl<N: Scalar + ApproxEq, D: Dim, S: Storage<N, D, D>> SquareMatrix<N, D, S>
    where N: Zero + One + ClosedAdd + ClosedMul,
          N::Epsilon: Copy {
    /// Checks that this matrix is orthogonal, i.e., that it is square and `M × Mᵀ = Id`.
    /// Checks that `Mᵀ × M = Id`.
    ///
    /// In this definition `Id` is approximately equal to the identity matrix with a relative error
    /// equal to `eps`.
    #[inline]
    pub fn is_orthogonal(&self, eps: N::Epsilon) -> bool {
        self.is_square() && (self.tr_mul(self)).is_identity(eps)
    pub fn is_orthogonal(&self, eps: N::Epsilon) -> bool
        where N: Zero + One + ClosedAdd + ClosedMul + ApproxEq,
              S: Storage<N, R, C>,
              N::Epsilon: Copy,
              DefaultAllocator: Allocator<N, C, C> {
        (self.tr_mul(self)).is_identity(eps)
    }
}

impl<N: Real, D: Dim, S: Storage<N, D, D>> SquareMatrix<N, D, S>
    where DefaultAllocator: Allocator<N, D, D> {
    /// Checks that this matrix is orthogonal and has a determinant equal to 1.
    #[inline]
    pub fn is_special_orthogonal(&self, eps: N::Epsilon) -> bool
        where N: ClosedSub + PartialOrd {
        self.is_orthogonal(eps) && self.determinant() > N::zero()
    pub fn is_special_orthogonal(&self, eps: N) -> bool
        where D: DimMin<D, Output = D>,
              DefaultAllocator: Allocator<(usize, usize), D> {
        self.is_square() && self.is_orthogonal(eps) && self.determinant() > N::zero()
    }

    /// Returns `true` if this matrix is invertible.
    #[inline]
    pub fn is_invertible(&self) -> bool
        where N: Field {
    pub fn is_invertible(&self) -> bool {
        // FIXME: improve this?
        self.clone_owned().try_inverse().is_some()
    }
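`is_orthogonal` reduces to the `is_identity` test applied to `Mᵀ × M`, and `is_special_orthogonal` additionally requires a positive determinant. A self-contained sketch of both checks on row-major buffers (function names are illustrative, not the crate's API):

```rust
/// Computes Mᵀ × M for a square row-major n×n matrix.
fn tr_mul(m: &[f64], n: usize) -> Vec<f64> {
    let mut out = vec![0.0; n * n];
    for i in 0..n {
        for j in 0..n {
            for k in 0..n {
                out[i * n + j] += m[k * n + i] * m[k * n + j];
            }
        }
    }
    out
}

/// A matrix is orthogonal when Mᵀ × M is the identity within `eps`.
fn is_orthogonal(m: &[f64], n: usize, eps: f64) -> bool {
    tr_mul(m, n).iter().enumerate().all(|(idx, &v)| {
        let expected = if idx / n == idx % n { 1.0 } else { 0.0 };
        (v - expected).abs() <= eps
    })
}

fn main() {
    // A 2D rotation by 0.3 radians: orthogonal with determinant +1.
    let (s, c) = 0.3_f64.sin_cos();
    let rot = [c, -s, s, c];
    assert!(is_orthogonal(&rot, 2, 1.0e-10));

    // A reflection is also orthogonal, but its determinant is -1,
    // so it is orthogonal without being *special* orthogonal.
    let refl = [1.0, 0.0, 0.0, -1.0];
    let det = refl[0] * refl[3] - refl[1] * refl[2];
    assert!(is_orthogonal(&refl, 2, 1.0e-10) && det < 0.0);
}
```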
@ -1,3 +1,4 @@
use std::any::TypeId;
use std::fmt::Debug;
use std::any::Any;

@ -5,5 +6,12 @@ use std::any::Any;
///
/// This does not make any assumption on the algebraic properties of `Self`.
pub trait Scalar: Copy + PartialEq + Debug + Any {
    #[inline]
    /// Tests if `Self` is the same type as `T`.
    ///
    /// Typically used to test if `Self` is an `f32` or an `f64` with `N::is::<f32>()`.
    fn is<T: Scalar>() -> bool {
        TypeId::of::<Self>() == TypeId::of::<T>()
    }
}
impl<T: Copy + PartialEq + Debug + Any> Scalar for T { }
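The default `is` method relies on `std::any::TypeId` to compare concrete types at runtime, which is why `Scalar` requires the `Any` bound. A trimmed-down, standalone version of the trait showing the same mechanism:

```rust
use std::any::{Any, TypeId};
use std::fmt::Debug;

// A minimal version of the trait, just enough to exercise the default method.
trait Scalar: Copy + PartialEq + Debug + Any {
    /// Tests if `Self` is the same type as `T` by comparing their `TypeId`s.
    fn is<T: Scalar>() -> bool {
        TypeId::of::<Self>() == TypeId::of::<T>()
    }
}

// Blanket impl: any `Copy + PartialEq + Debug + Any` type is a `Scalar`.
impl<T: Copy + PartialEq + Debug + Any> Scalar for T { }

fn main() {
    assert!(<f32 as Scalar>::is::<f32>());
    assert!(!<f32 as Scalar>::is::<f64>());
    assert!(<u8 as Scalar>::is::<u8>());
}
```

The `Any` supertrait forces `Self: 'static`, which is what makes `TypeId::of::<Self>()` legal inside the default method.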
@ -1,39 +1,28 @@
//! Abstract definition of a matrix data storage.

use std::fmt::Debug;
use std::mem;
use std::any::Any;

use core::Scalar;
use dimension::Dim;
use allocator::{Allocator, SameShapeR, SameShapeC};
use core::default_allocator::DefaultAllocator;
use core::dimension::{Dim, U1};
use core::allocator::{Allocator, SameShapeR, SameShapeC};

/*
 * Aliases for sum storage.
 * Aliases for allocation results.
 */
/// The data storage for the sum of two matrices with dimensions `(R1, C1)` and `(R2, C2)`.
pub type SumStorage<N, R1, C1, R2, C2, SA> =
    <<SA as Storage<N, R1, C1>>::Alloc as Allocator<N, SameShapeR<R1, R2>, SameShapeC<C1, C2>>>::Buffer;
pub type SameShapeStorage<N, R1, C1, R2, C2> = <DefaultAllocator as Allocator<N, SameShapeR<R1, R2>, SameShapeC<C1, C2>>>::Buffer;

/*
 * Aliases for multiplication storage.
 */
/// The data storage for the multiplication of two matrices with dimensions `(R1, C1)` on the left
/// hand side, and with `C2` columns on the right hand side.
pub type MulStorage<N, R1, C1, C2, SA> =
    <<SA as Storage<N, R1, C1>>::Alloc as Allocator<N, R1, C2>>::Buffer;

/// The data storage for the multiplication of two matrices with dimensions `(R1, C1)` on the left
/// hand side, and with `C2` columns on the right hand side. The first matrix is implicitly
/// transposed.
pub type TrMulStorage<N, R1, C1, C2, SA> =
    <<SA as Storage<N, R1, C1>>::Alloc as Allocator<N, C1, C2>>::Buffer;

/*
 * Alias for allocation result.
 */
// FIXME: better name than Owned ?
/// The owned data storage that can be allocated from `S`.
pub type Owned<N, R, C, A> =
    <A as Allocator<N, R, C>>::Buffer;
pub type Owned<N, R, C = U1> = <DefaultAllocator as Allocator<N, R, C>>::Buffer;

/// The row-stride of the owned data storage for a buffer of dimension `(R, C)`.
pub type RStride<N, R, C = U1> = <<DefaultAllocator as Allocator<N, R, C>>::Buffer as Storage<N, R, C>>::RStride;

/// The column-stride of the owned data storage for a buffer of dimension `(R, C)`.
pub type CStride<N, R, C = U1> = <<DefaultAllocator as Allocator<N, R, C>>::Buffer as Storage<N, R, C>>::CStride;

/// The trait shared by all matrix data storage.

@ -45,22 +34,13 @@ pub type Owned<N, R, C, A> =
/// should **not** allow the user to modify the size of the underlying buffer with safe methods
/// (for example the `MatrixVec::data_mut` method is unsafe because the user could change the
/// vector's size so that it no longer contains enough elements: this will lead to UB).
pub unsafe trait Storage<N: Scalar, R: Dim, C: Dim>: Sized {
pub unsafe trait Storage<N: Scalar, R: Dim, C: Dim = U1>: Debug + Sized {
    /// The static stride of this storage's rows.
    type RStride: Dim;

    /// The static stride of this storage's columns.
    type CStride: Dim;

    /// The allocator for this family of storage.
    type Alloc: Allocator<N, R, C>;

    /// Builds a matrix data storage that does not contain any reference.
    fn into_owned(self) -> Owned<N, R, C, Self::Alloc>;

    /// Clones this data storage into one that does not contain any reference.
    fn clone_owned(&self) -> Owned<N, R, C, Self::Alloc>;

    /// The matrix data pointer.
    fn ptr(&self) -> *const N;

@ -110,6 +90,24 @@ pub unsafe trait Storage<N: Scalar, R: Dim, C: Dim>: Sized {
    unsafe fn get_unchecked(&self, irow: usize, icol: usize) -> &N {
        self.get_unchecked_linear(self.linear_index(irow, icol))
    }

    /// Indicates whether this data buffer stores its elements contiguously.
    #[inline]
    fn is_contiguous(&self) -> bool;

    /// Retrieves the data buffer as a contiguous slice.
    ///
    /// The matrix components may not be stored in a contiguous way, depending on the strides.
    #[inline]
    fn as_slice(&self) -> &[N];

    /// Builds a matrix data storage that does not contain any reference.
    fn into_owned(self) -> Owned<N, R, C>
        where DefaultAllocator: Allocator<N, R, C>;

    /// Clones this data storage into one that does not contain any reference.
    fn clone_owned(&self) -> Owned<N, R, C>
        where DefaultAllocator: Allocator<N, R, C>;
}

@ -118,7 +116,7 @@ pub unsafe trait Storage<N: Scalar, R: Dim, C: Dim>: Sized {
/// Note that a mutable access does not mean that the matrix owns its data. For example, a mutable
/// matrix slice can provide mutable access to its elements even if it does not own its data (it
/// contains only an internal reference to them).
pub unsafe trait StorageMut<N: Scalar, R: Dim, C: Dim>: Storage<N, R, C> {
pub unsafe trait StorageMut<N: Scalar, R: Dim, C: Dim = U1>: Storage<N, R, C> {
    /// The matrix mutable data pointer.
    fn ptr_mut(&mut self) -> *mut N;

@ -163,22 +161,24 @@ pub unsafe trait StorageMut<N: Scalar, R: Dim, C: Dim>: Storage<N, R, C> {
        self.swap_unchecked_linear(lid1, lid2)
    }
}

/// A matrix storage that does not contain any reference and that is stored contiguously in memory.
///
/// The storage requirement means that for any value of `i` in `[0, nrows * ncols)`, the call to
/// `.get_unchecked_linear` succeeds. This trait is unsafe because failing to comply with this may
/// cause Undefined Behaviors.
pub unsafe trait OwnedStorage<N: Scalar, R: Dim, C: Dim>: StorageMut<N, R, C> + Clone + Any
    where Self::Alloc: Allocator<N, R, C, Buffer = Self> {
    // NOTE: We could auto-impl those two methods but we don't, to make sure the user is aware that
    // the data must be contiguous.
    /// Converts this data storage to a slice.
    #[inline]
    fn as_slice(&self) -> &[N];

    /// Converts this data storage to a mutable slice.
    /// Retrieves the mutable data buffer as a contiguous slice.
    ///
    /// Matrix components may not be contiguous, depending on its strides.
    #[inline]
    fn as_mut_slice(&mut self) -> &mut [N];
}

/// A matrix storage that is stored contiguously in memory.
///
/// The storage requirement means that for any value of `i` in `[0, nrows * ncols)`, the call to
/// `.get_unchecked_linear` returns one of the matrix components. This trait is unsafe because
/// failing to comply with this may cause Undefined Behaviors.
pub unsafe trait ContiguousStorage<N: Scalar, R: Dim, C: Dim = U1>: Storage<N, R, C> { }

/// A mutable matrix storage that is stored contiguously in memory.
///
/// The storage requirement means that for any value of `i` in `[0, nrows * ncols)`, the call to
/// `.get_unchecked_linear` returns one of the matrix components. This trait is unsafe because
/// failing to comply with this may cause Undefined Behaviors.
pub unsafe trait ContiguousStorageMut<N: Scalar, R: Dim, C: Dim = U1>: ContiguousStorage<N, R, C> + StorageMut<N, R, C> { }
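The `RStride`/`CStride` pair is what lets `linear_index` map a `(row, column)` coordinate to a position in the underlying buffer, and also what distinguishes a contiguous storage from a strided slice. A sketch of the stride arithmetic on a plain buffer (the function is illustrative; the trait computes this generically over `Dim` types):

```rust
/// Maps a (row, column) coordinate to a linear buffer index given explicit strides.
fn linear_index(irow: usize, icol: usize, rstride: usize, cstride: usize) -> usize {
    irow * rstride + icol * cstride
}

fn main() {
    // A column-major 3×2 buffer: row stride = 1, column stride = nrows = 3.
    // Element names encode (row, col), e.g. 21.0 is at row 1, col 0.
    let buf = [11.0, 21.0, 31.0, 12.0, 22.0, 32.0];
    assert_eq!(buf[linear_index(1, 0, 1, 3)], 21.0);
    assert_eq!(buf[linear_index(2, 1, 1, 3)], 32.0);

    // A slice keeping every other row of a larger parent could have
    // row stride 2 and column stride 6: consecutive elements are no longer
    // adjacent in memory, so such a storage is not contiguous.
    assert_eq!(linear_index(1, 1, 2, 6), 8);
}
```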
@ -0,0 +1,8 @@
//! Various tools useful for testing/debugging/benchmarking.

mod random_orthogonal;
mod random_sdp;

pub use self::random_orthogonal::*;
pub use self::random_sdp::*;
@ -0,0 +1,51 @@
#[cfg(feature = "arbitrary")]
use quickcheck::{Arbitrary, Gen};
#[cfg(feature = "arbitrary")]
use core::storage::Owned;

use num_complex::Complex;
use alga::general::Real;
use core::{DefaultAllocator, MatrixN};
use core::dimension::{Dim, Dynamic, U2};
use core::allocator::Allocator;
use geometry::UnitComplex;

/// A random orthogonal matrix.
#[derive(Clone, Debug)]
pub struct RandomOrthogonal<N: Real, D: Dim = Dynamic>
    where DefaultAllocator: Allocator<N, D, D> {
    m: MatrixN<N, D>
}

impl<N: Real, D: Dim> RandomOrthogonal<N, D>
    where DefaultAllocator: Allocator<N, D, D> {
    /// Retrieves the generated matrix.
    pub fn unwrap(self) -> MatrixN<N, D> {
        self.m
    }

    /// Creates a new random orthogonal matrix from its dimension and a random real number generator.
    pub fn new<Rand: FnMut() -> N>(dim: D, mut rand: Rand) -> Self {
        let mut res = MatrixN::identity_generic(dim, dim);

        // Create an orthogonal matrix by composing planar 2D rotations.
        for i in 0 .. dim.value() - 1 {
            let c = Complex::new(rand(), rand());
            let rot: UnitComplex<N> = UnitComplex::from_complex(c);
            rot.rotate(&mut res.fixed_rows_mut::<U2>(i));
        }

        RandomOrthogonal { m: res }
    }
}

#[cfg(feature = "arbitrary")]
impl<N: Real + Arbitrary + Send, D: Dim> Arbitrary for RandomOrthogonal<N, D>
    where DefaultAllocator: Allocator<N, D, D>,
          Owned<N, D, D>: Clone + Send {
    fn arbitrary<G: Gen>(g: &mut G) -> Self {
        let dim = D::try_to_usize().unwrap_or(g.gen_range(1, 50));
        Self::new(D::from_usize(dim), || N::arbitrary(g))
    }
}
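The construction above starts from the identity and applies a planar rotation to each pair of adjacent rows; since every planar rotation is orthogonal, the composition stays orthogonal. A standalone sketch of the same idea on row-major buffers, with the orthogonality property checked explicitly (names are illustrative, not the crate's API):

```rust
/// Applies a 2D rotation (cos, sin) to rows `i` and `i + 1` of a row-major n×n
/// matrix, mirroring how `RandomOrthogonal::new` composes planar rotations.
fn rotate_rows(m: &mut [f64], n: usize, i: usize, c: f64, s: f64) {
    for j in 0..n {
        let (a, b) = (m[i * n + j], m[(i + 1) * n + j]);
        m[i * n + j] = c * a - s * b;
        m[(i + 1) * n + j] = s * a + c * b;
    }
}

fn main() {
    let n = 3;
    let mut q = vec![0.0; n * n];
    for i in 0..n { q[i * n + i] = 1.0; } // start from the identity

    // Compose planar rotations with arbitrary angles (stand-ins for `rand()`).
    for (i, angle) in [(0, 0.7_f64), (1, -1.2)] {
        let (s, c) = angle.sin_cos();
        rotate_rows(&mut q, n, i, c, s);
    }

    // QᵀQ should be the identity within floating-point error.
    for i in 0..n {
        for j in 0..n {
            let dot: f64 = (0..n).map(|k| q[k * n + i] * q[k * n + j]).sum();
            let expected = if i == j { 1.0 } else { 0.0 };
            assert!((dot - expected).abs() < 1.0e-12);
        }
    }
}
```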
@ -0,0 +1,54 @@
#[cfg(feature = "arbitrary")]
use quickcheck::{Arbitrary, Gen};
#[cfg(feature = "arbitrary")]
use core::storage::Owned;

use alga::general::Real;
use core::{DefaultAllocator, MatrixN};
use core::dimension::{Dim, Dynamic};
use core::allocator::Allocator;

use debug::RandomOrthogonal;

/// A random, well-conditioned, symmetric positive-definite matrix.
#[derive(Clone, Debug)]
pub struct RandomSDP<N: Real, D: Dim = Dynamic>
    where DefaultAllocator: Allocator<N, D, D> {
    m: MatrixN<N, D>
}

impl<N: Real, D: Dim> RandomSDP<N, D>
    where DefaultAllocator: Allocator<N, D, D> {
    /// Retrieves the generated matrix.
    pub fn unwrap(self) -> MatrixN<N, D> {
        self.m
    }

    /// Creates a new well-conditioned symmetric positive-definite matrix from its dimension and a
    /// random real number generator.
    pub fn new<Rand: FnMut() -> N>(dim: D, mut rand: Rand) -> Self {
        let mut m = RandomOrthogonal::new(dim, || rand()).unwrap();
        let mt = m.transpose();

        for i in 0 .. dim.value() {
            let mut col = m.column_mut(i);
            let eigenval = N::one() + rand().abs();
            col *= eigenval;
        }

        RandomSDP { m: m * mt }
    }
}

#[cfg(feature = "arbitrary")]
impl<N: Real + Arbitrary + Send, D: Dim> Arbitrary for RandomSDP<N, D>
    where DefaultAllocator: Allocator<N, D, D>,
          Owned<N, D, D>: Clone + Send {
    fn arbitrary<G: Gen>(g: &mut G) -> Self {
        let dim = D::try_to_usize().unwrap_or(g.gen_range(1, 50));
        Self::new(D::from_usize(dim), || N::arbitrary(g))
    }
}
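`RandomSDP::new` scales each column of an orthogonal matrix `Q` by an eigenvalue of at least one, then multiplies by `Qᵀ`: the result is `Q · D · Qᵀ`, which is symmetric with all eigenvalues bounded away from zero, hence positive-definite and well-conditioned. A standalone sketch of that factorization (illustrative names, not the crate's API):

```rust
/// Builds M = Q · D · Qᵀ from a row-major orthogonal n×n matrix `q` and a
/// diagonal of eigenvalues `eigs`; the result is symmetric positive-definite
/// whenever all eigenvalues are positive.
fn spd_from_orthogonal(q: &[f64], eigs: &[f64], n: usize) -> Vec<f64> {
    let mut m = vec![0.0; n * n];
    for i in 0..n {
        for j in 0..n {
            for k in 0..n {
                m[i * n + j] += q[i * n + k] * eigs[k] * q[j * n + k];
            }
        }
    }
    m
}

fn main() {
    // A 2D rotation as the orthogonal factor, eigenvalues of the form 1 + |r|.
    let (s, c) = 0.9_f64.sin_cos();
    let q = [c, -s, s, c];
    let m = spd_from_orthogonal(&q, &[1.5, 2.5], 2);

    // Symmetric…
    assert!((m[1] - m[2]).abs() < 1.0e-12);
    // …and positive-definite (2×2 criterion: positive trace and determinant).
    assert!(m[0] + m[3] > 0.0 && m[0] * m[3] - m[1] * m[2] > 0.0);
}
```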
@ -1,33 +1,43 @@
use std::fmt;
use std::hash;
use std::marker::PhantomData;
use approx::ApproxEq;

use alga::general::{Real, SubsetOf};
use alga::linear::Rotation;

use core::{Scalar, OwnedSquareMatrix};
use core::dimension::{DimName, DimNameSum, DimNameAdd, U1};
use core::storage::{Storage, OwnedStorage};
use core::allocator::{Allocator, OwnedAllocator};
use geometry::{TranslationBase, PointBase};
#[cfg(feature = "serde-serialize")]
use serde;

#[cfg(feature = "abomonation-serialize")]
use abomonation::Abomonation;

use alga::general::{Real, SubsetOf};
use alga::linear::Rotation;

/// An isometry that uses a data storage deduced from the allocator `A`.
pub type OwnedIsometryBase<N, D, A, R> =
    IsometryBase<N, D, <A as Allocator<N, D, U1>>::Buffer, R>;
use core::{DefaultAllocator, MatrixN};
use core::dimension::{DimName, DimNameSum, DimNameAdd, U1};
use core::storage::Owned;
use core::allocator::Allocator;
use geometry::{Translation, Point};

/// A direct isometry, i.e., a rotation followed by a translation.
#[repr(C)]
#[derive(Hash, Debug, Clone, Copy)]
#[derive(Debug)]
#[cfg_attr(feature = "serde-serialize", derive(Serialize, Deserialize))]
pub struct IsometryBase<N: Scalar, D: DimName, S, R> {
#[cfg_attr(feature = "serde-serialize",
           serde(bound(
               serialize = "R: serde::Serialize,
                            DefaultAllocator: Allocator<N, D>,
                            Owned<N, D>: serde::Serialize")))]
#[cfg_attr(feature = "serde-serialize",
           serde(bound(
               deserialize = "R: serde::Deserialize<'de>,
                              DefaultAllocator: Allocator<N, D>,
                              Owned<N, D>: serde::Deserialize<'de>")))]
pub struct Isometry<N: Real, D: DimName, R>
    where DefaultAllocator: Allocator<N, D> {
    /// The pure rotational part of this isometry.
    pub rotation: R,
    /// The pure translational part of this isometry.
    pub translation: TranslationBase<N, D, S>,
    pub translation: Translation<N, D>,

    // One dummy private field just to prevent explicit construction.

@ -36,11 +46,12 @@ pub struct IsometryBase<N: Scalar, D: DimName, S, R> {
}

#[cfg(feature = "abomonation-serialize")]
impl<N, D, S, R> Abomonation for IsometryBase<N, D, S, R>
impl<N, D, R> Abomonation for IsometryBase<N, D, R>
    where N: Scalar,
          D: DimName,
          R: Abomonation,
          TranslationBase<N, D, S>: Abomonation
          TranslationBase<N, D>: Abomonation,
          DefaultAllocator: Allocator<N, D>
{
    unsafe fn entomb(&self, writer: &mut Vec<u8>) {
        self.rotation.entomb(writer);

@ -58,15 +69,35 @@ impl<N, D, S, R> Abomonation for IsometryBase<N, D, S, R>
    }
}

impl<N, D: DimName, S, R> IsometryBase<N, D, S, R>
    where N: Real,
          S: OwnedStorage<N, D, U1>,
          R: Rotation<PointBase<N, D, S>>,
          S::Alloc: OwnedAllocator<N, D, U1, S> {
impl<N: Real + hash::Hash, D: DimName + hash::Hash, R: hash::Hash> hash::Hash for Isometry<N, D, R>
    where DefaultAllocator: Allocator<N, D>,
          Owned<N, D>: hash::Hash {
    fn hash<H: hash::Hasher>(&self, state: &mut H) {
        self.translation.hash(state);
        self.rotation.hash(state);
    }
}

impl<N: Real, D: DimName + Copy, R: Rotation<Point<N, D>> + Copy> Copy for Isometry<N, D, R>
    where DefaultAllocator: Allocator<N, D>,
          Owned<N, D>: Copy {
}

impl<N: Real, D: DimName, R: Rotation<Point<N, D>> + Clone> Clone for Isometry<N, D, R>
    where DefaultAllocator: Allocator<N, D> {
    #[inline]
    fn clone(&self) -> Self {
        Isometry::from_parts(self.translation.clone(), self.rotation.clone())
    }
}

impl<N: Real, D: DimName, R: Rotation<Point<N, D>>> Isometry<N, D, R>
    where DefaultAllocator: Allocator<N, D> {
    /// Creates a new isometry from its rotational and translational parts.
    #[inline]
    pub fn from_parts(translation: TranslationBase<N, D, S>, rotation: R) -> IsometryBase<N, D, S, R> {
        IsometryBase {
    pub fn from_parts(translation: Translation<N, D>, rotation: R) -> Isometry<N, D, R> {
        Isometry {
            rotation: rotation,
            translation: translation,
            _noconstruct: PhantomData

@ -75,7 +106,7 @@ impl<N, D: DimName, S, R> IsometryBase<N, D, S, R>

    /// Inverts `self`.
    #[inline]
    pub fn inverse(&self) -> IsometryBase<N, D, S, R> {
    pub fn inverse(&self) -> Isometry<N, D, R> {
        let mut res = self.clone();
        res.inverse_mut();
        res

@ -91,7 +122,7 @@ impl<N, D: DimName, S, R> IsometryBase<N, D, S, R>

    /// Appends to `self` the given translation in-place.
    #[inline]
    pub fn append_translation_mut(&mut self, t: &TranslationBase<N, D, S>) {
    pub fn append_translation_mut(&mut self, t: &Translation<N, D>) {
        self.translation.vector += &t.vector
    }

@ -105,7 +136,7 @@ impl<N, D: DimName, S, R> IsometryBase<N, D, S, R>
    /// Appends in-place to `self` a rotation centered at the point `p`, i.e., the rotation that
    /// leaves `p` invariant.
    #[inline]
    pub fn append_rotation_wrt_point_mut(&mut self, r: &R, p: &PointBase<N, D, S>) {
    pub fn append_rotation_wrt_point_mut(&mut self, r: &R, p: &Point<N, D>) {
        self.translation.vector -= &p.coords;
        self.append_rotation_mut(r);
        self.translation.vector += &p.coords;

@ -115,7 +146,7 @@ impl<N, D: DimName, S, R> IsometryBase<N, D, S, R>
    /// `self.translation`.
    #[inline]
    pub fn append_rotation_wrt_center_mut(&mut self, r: &R) {
        let center = PointBase::from_coordinates(self.translation.vector.clone());
        let center = Point::from_coordinates(self.translation.vector.clone());
        self.append_rotation_wrt_point_mut(r, &center)
    }
}

@ -124,16 +155,15 @@ impl<N, D: DimName, S, R> IsometryBase<N, D, S, R>
// and makes it hard to use it, e.g., for Transform × Isometry implementation.
// This is OK since all constructors of the isometry enforce the Rotation bound already (and
// explicit struct construction is prevented by the dummy ZST field).
impl<N, D: DimName, S, R> IsometryBase<N, D, S, R>
    where N: Scalar,
          S: Storage<N, D, U1> {
impl<N: Real, D: DimName, R> Isometry<N, D, R>
    where DefaultAllocator: Allocator<N, D> {
    /// Converts this isometry into its equivalent homogeneous transformation matrix.
    #[inline]
    pub fn to_homogeneous(&self) -> OwnedSquareMatrix<N, DimNameSum<D, U1>, S::Alloc>
    pub fn to_homogeneous(&self) -> MatrixN<N, DimNameSum<D, U1>>
        where D: DimNameAdd<U1>,
              R: SubsetOf<OwnedSquareMatrix<N, DimNameSum<D, U1>, S::Alloc>>,
              S::Alloc: Allocator<N, DimNameSum<D, U1>, DimNameSum<D, U1>> {
        let mut res: OwnedSquareMatrix<N, _, S::Alloc> = ::convert_ref(&self.rotation);
              R: SubsetOf<MatrixN<N, DimNameSum<D, U1>>>,
              DefaultAllocator: Allocator<N, DimNameSum<D, U1>, DimNameSum<D, U1>> {
        let mut res: MatrixN<N, _> = ::convert_ref(&self.rotation);
        res.fixed_slice_mut::<D, U1>(0, D::dim()).copy_from(&self.translation.vector);

        res

@ -141,30 +171,24 @@ impl<N, D: DimName, S, R> IsometryBase<N, D, S, R>
}

impl<N, D: DimName, S, R> Eq for IsometryBase<N, D, S, R>
    where N: Real,
          S: OwnedStorage<N, D, U1>,
          R: Rotation<PointBase<N, D, S>> + Eq,
          S::Alloc: OwnedAllocator<N, D, U1, S> {
impl<N: Real, D: DimName, R> Eq for Isometry<N, D, R>
    where R: Rotation<Point<N, D>> + Eq,
          DefaultAllocator: Allocator<N, D> {
}

impl<N, D: DimName, S, R> PartialEq for IsometryBase<N, D, S, R>
    where N: Real,
          S: OwnedStorage<N, D, U1>,
          R: Rotation<PointBase<N, D, S>> + PartialEq,
          S::Alloc: OwnedAllocator<N, D, U1, S> {
impl<N: Real, D: DimName, R> PartialEq for Isometry<N, D, R>
    where R: Rotation<Point<N, D>> + PartialEq,
          DefaultAllocator: Allocator<N, D> {
    #[inline]
    fn eq(&self, right: &IsometryBase<N, D, S, R>) -> bool {
    fn eq(&self, right: &Isometry<N, D, R>) -> bool {
        self.translation == right.translation &&
        self.rotation == right.rotation
    }
}

impl<N, D: DimName, S, R> ApproxEq for IsometryBase<N, D, S, R>
    where N: Real,
          S: OwnedStorage<N, D, U1>,
          R: Rotation<PointBase<N, D, S>> + ApproxEq<Epsilon = N::Epsilon>,
          S::Alloc: OwnedAllocator<N, D, U1, S>,
impl<N: Real, D: DimName, R> ApproxEq for Isometry<N, D, R>
    where R: Rotation<Point<N, D>> + ApproxEq<Epsilon = N::Epsilon>,
          DefaultAllocator: Allocator<N, D>,
          N::Epsilon: Copy {
    type Epsilon = N::Epsilon;

@ -201,32 +225,16 @@ impl<N, D: DimName, S, R> ApproxEq for IsometryBase<N, D, S, R>
 * Display
 *
 */
impl<N, D: DimName, S, R> fmt::Display for IsometryBase<N, D, S, R>
    where N: Real + fmt::Display,
          S: OwnedStorage<N, D, U1>,
          R: fmt::Display,
          S::Alloc: OwnedAllocator<N, D, U1, S> + Allocator<usize, D, U1> {
impl<N: Real + fmt::Display, D: DimName, R> fmt::Display for Isometry<N, D, R>
    where R: fmt::Display,
          DefaultAllocator: Allocator<N, D> +
                            Allocator<usize, D> {
    fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
        let precision = f.precision().unwrap_or(3);

        try!(writeln!(f, "IsometryBase {{"));
        try!(writeln!(f, "Isometry {{"));
        try!(write!(f, "{:.*}", precision, self.translation));
        try!(write!(f, "{:.*}", precision, self.rotation));
        writeln!(f, "}}")
    }
}

// /*
//  *
//  * Absolute
//  *
//  */
// impl<N: Absolute> Absolute for $t<N> {
//     type AbsoluteValue = $submatrix<N::AbsoluteValue>;
//
//     #[inline]
//     fn abs(m: &$t<N>) -> $submatrix<N::AbsoluteValue> {
//         Absolute::abs(&m.submatrix)
//     }
// }
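`to_homogeneous` writes the rotation into the top-left `D × D` block and the translation into the first `D` entries of the last column, so the augmented point `(x, y, 1)` is rotated first and translated second. A concrete 2D sketch of that layout on fixed-size arrays (the free function is illustrative, not the method itself):

```rust
/// Embeds a 2D rotation block and a translation vector into a 3×3 homogeneous
/// matrix: rotation in the top-left 2×2 block, translation in the last column.
fn to_homogeneous(rot: &[f64; 4], t: &[f64; 2]) -> [f64; 9] {
    [rot[0], rot[1], t[0],
     rot[2], rot[3], t[1],
     0.0,    0.0,    1.0]
}

fn main() {
    let (s, c) = 0.5_f64.sin_cos();
    let h = to_homogeneous(&[c, -s, s, c], &[3.0, 4.0]);

    // Applying the homogeneous matrix to the augmented point (1, 0, 1):
    // the point is rotated to (cos θ, sin θ), then translated by (3, 4).
    let p = [1.0, 0.0, 1.0];
    let x = h[0] * p[0] + h[1] * p[1] + h[2] * p[2];
    let y = h[3] * p[0] + h[4] * p[1] + h[5] * p[2];
    assert!((x - (c + 3.0)).abs() < 1.0e-12);
    assert!((y - (s + 4.0)).abs() < 1.0e-12);
}
```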
@ -1,14 +1,14 @@
use alga::general::{AbstractMagma, AbstractGroup, AbstractLoop, AbstractMonoid, AbstractQuasigroup,
                    AbstractSemigroup, Real, Inverse, Multiplicative, Identity, Id};
use alga::linear::{Transformation, Similarity, AffineTransformation, DirectIsometry, Isometry,
use alga::linear::{Transformation, Similarity, AffineTransformation, DirectIsometry,
                   Rotation, ProjectiveTransformation};
use alga::linear::Isometry as AlgaIsometry;

use core::ColumnVector;
use core::dimension::{DimName, U1};
use core::storage::OwnedStorage;
use core::allocator::OwnedAllocator;
use core::{DefaultAllocator, VectorN};
use core::dimension::DimName;
use core::allocator::Allocator;

use geometry::{IsometryBase, TranslationBase, PointBase};
use geometry::{Isometry, Translation, Point};

/*

@ -16,22 +16,18 @@ use geometry::{IsometryBase, TranslationBase, PointBase};
 * Algebraic structures.
 *
 */
impl<N, D: DimName, S, R> Identity<Multiplicative> for IsometryBase<N, D, S, R>
    where N: Real,
          S: OwnedStorage<N, D, U1>,
          R: Rotation<PointBase<N, D, S>>,
          S::Alloc: OwnedAllocator<N, D, U1, S> {
impl<N: Real, D: DimName, R> Identity<Multiplicative> for Isometry<N, D, R>
    where R: Rotation<Point<N, D>>,
          DefaultAllocator: Allocator<N, D> {
    #[inline]
    fn identity() -> Self {
        Self::identity()
    }
}

impl<N, D: DimName, S, R> Inverse<Multiplicative> for IsometryBase<N, D, S, R>
    where N: Real,
          S: OwnedStorage<N, D, U1>,
          R: Rotation<PointBase<N, D, S>>,
          S::Alloc: OwnedAllocator<N, D, U1, S> {
impl<N: Real, D: DimName, R> Inverse<Multiplicative> for Isometry<N, D, R>
    where R: Rotation<Point<N, D>>,
          DefaultAllocator: Allocator<N, D> {
    #[inline]
    fn inverse(&self) -> Self {
        self.inverse()

@ -43,11 +39,9 @@ impl<N, D: DimName, S, R> Inverse<Multiplicative> for IsometryBase<N, D, S, R>
    }
}

impl<N, D: DimName, S, R> AbstractMagma<Multiplicative> for IsometryBase<N, D, S, R>
    where N: Real,
          S: OwnedStorage<N, D, U1>,
          R: Rotation<PointBase<N, D, S>>,
          S::Alloc: OwnedAllocator<N, D, U1, S> {
impl<N: Real, D: DimName, R> AbstractMagma<Multiplicative> for Isometry<N, D, R>
    where R: Rotation<Point<N, D>>,
          DefaultAllocator: Allocator<N, D> {
    #[inline]
    fn operate(&self, rhs: &Self) -> Self {
        self * rhs

@ -56,11 +50,9 @@ impl<N, D: DimName, S, R> AbstractMagma<Multiplicative> for IsometryBase<N, D, S

macro_rules! impl_multiplicative_structures(
    ($($marker: ident<$operator: ident>),* $(,)*) => {$(
        impl<N, D: DimName, S, R> $marker<$operator> for IsometryBase<N, D, S, R>
            where N: Real,
                  S: OwnedStorage<N, D, U1>,
                  R: Rotation<PointBase<N, D, S>>,
                  S::Alloc: OwnedAllocator<N, D, U1, S> { }
        impl<N: Real, D: DimName, R> $marker<$operator> for Isometry<N, D, R>
            where R: Rotation<Point<N, D>>,
                  DefaultAllocator: Allocator<N, D> { }
    )*}
);

@ -77,49 +69,43 @@ impl_multiplicative_structures!(
 * Transformation groups.
 *
 */
impl<N, D: DimName, S, R> Transformation<PointBase<N, D, S>> for IsometryBase<N, D, S, R>
    where N: Real,
          S: OwnedStorage<N, D, U1>,
          R: Rotation<PointBase<N, D, S>>,
          S::Alloc: OwnedAllocator<N, D, U1, S> {
impl<N: Real, D: DimName, R> Transformation<Point<N, D>> for Isometry<N, D, R>
    where R: Rotation<Point<N, D>>,
          DefaultAllocator: Allocator<N, D> {
    #[inline]
    fn transform_point(&self, pt: &PointBase<N, D, S>) -> PointBase<N, D, S> {
    fn transform_point(&self, pt: &Point<N, D>) -> Point<N, D> {
        self * pt
    }

    #[inline]
    fn transform_vector(&self, v: &ColumnVector<N, D, S>) -> ColumnVector<N, D, S> {
    fn transform_vector(&self, v: &VectorN<N, D>) -> VectorN<N, D> {
        self * v
    }
}

impl<N, D: DimName, S, R> ProjectiveTransformation<PointBase<N, D, S>> for IsometryBase<N, D, S, R>
    where N: Real,
          S: OwnedStorage<N, D, U1>,
          R: Rotation<PointBase<N, D, S>>,
          S::Alloc: OwnedAllocator<N, D, U1, S> {
impl<N: Real, D: DimName, R> ProjectiveTransformation<Point<N, D>> for Isometry<N, D, R>
    where R: Rotation<Point<N, D>>,
          DefaultAllocator: Allocator<N, D> {
    #[inline]
    fn inverse_transform_point(&self, pt: &PointBase<N, D, S>) -> PointBase<N, D, S> {
    fn inverse_transform_point(&self, pt: &Point<N, D>) -> Point<N, D> {
        self.rotation.inverse_transform_point(&(pt - &self.translation.vector))
    }

    #[inline]
    fn inverse_transform_vector(&self, v: &ColumnVector<N, D, S>) -> ColumnVector<N, D, S> {
    fn inverse_transform_vector(&self, v: &VectorN<N, D>) -> VectorN<N, D> {
        self.rotation.inverse_transform_vector(v)
    }
}

impl<N, D: DimName, S, R> AffineTransformation<PointBase<N, D, S>> for IsometryBase<N, D, S, R>
    where N: Real,
          S: OwnedStorage<N, D, U1>,
          R: Rotation<PointBase<N, D, S>>,
          S::Alloc: OwnedAllocator<N, D, U1, S> {
impl<N: Real, D: DimName, R> AffineTransformation<Point<N, D>> for Isometry<N, D, R>
    where R: Rotation<Point<N, D>>,
          DefaultAllocator: Allocator<N, D> {
    type Rotation = R;
    type NonUniformScaling = Id;
    type Translation = TranslationBase<N, D, S>;
    type Translation = Translation<N, D>;

    #[inline]
    fn decompose(&self) -> (TranslationBase<N, D, S>, R, Id, R) {
    fn decompose(&self) -> (Translation<N, D>, R, Id, R) {
        (self.translation.clone(), self.rotation.clone(), Id::new(), R::identity())
    }

@ -136,7 +122,7 @@ impl<N, D: DimName, S, R> AffineTransformation<PointBase<N, D, S>> for IsometryB
    #[inline]
    fn append_rotation(&self, r: &Self::Rotation) -> Self {
        let shift = r.transform_vector(&self.translation.vector);
        IsometryBase::from_parts(TranslationBase::from_vector(shift), r.clone() * self.rotation.clone())
        Isometry::from_parts(Translation::from_vector(shift), r.clone() * self.rotation.clone())
    }

    #[inline]

@ -155,22 +141,20 @@ impl<N, D: DimName, S, R> AffineTransformation<PointBase<N, D, S>> for IsometryB
    }

    #[inline]
    fn append_rotation_wrt_point(&self, r: &Self::Rotation, p: &PointBase<N, D, S>) -> Option<Self> {
    fn append_rotation_wrt_point(&self, r: &Self::Rotation, p: &Point<N, D>) -> Option<Self> {
        let mut res = self.clone();
        res.append_rotation_wrt_point_mut(r, p);
        Some(res)
    }
}

impl<N, D: DimName, S, R> Similarity<PointBase<N, D, S>> for IsometryBase<N, D, S, R>
    where N: Real,
          S: OwnedStorage<N, D, U1>,
          R: Rotation<PointBase<N, D, S>>,
          S::Alloc: OwnedAllocator<N, D, U1, S> {
impl<N: Real, D: DimName, R> Similarity<Point<N, D>> for Isometry<N, D, R>
    where R: Rotation<Point<N, D>>,
          DefaultAllocator: Allocator<N, D> {
    type Scaling = Id;

    #[inline]
    fn translation(&self) -> TranslationBase<N, D, S> {
    fn translation(&self) -> Translation<N, D> {
        self.translation.clone()
    }

@ -187,12 +171,10 @@ impl<N, D: DimName, S, R> Similarity<PointBase<N, D, S>> for IsometryBase<N, D,

macro_rules! marker_impl(
    ($($Trait: ident),*) => {$(
        impl<N, D: DimName, S, R> $Trait<PointBase<N, D, S>> for IsometryBase<N, D, S, R>
            where N: Real,
                  S: OwnedStorage<N, D, U1>,
                  R: Rotation<PointBase<N, D, S>>,
                  S::Alloc: OwnedAllocator<N, D, U1, S> { }
        impl<N: Real, D: DimName, R> $Trait<Point<N, D>> for Isometry<N, D, R>
            where R: Rotation<Point<N, D>>,
                  DefaultAllocator: Allocator<N, D> { }
    )*}
);

marker_impl!(Isometry, DirectIsometry);
marker_impl!(AlgaIsometry, DirectIsometry);
@@ -1,19 +1,16 @@
-use core::MatrixArray;
-use core::dimension::{U1, U2, U3};
+use core::dimension::{U2, U3};

-use geometry::{Rotation, IsometryBase, UnitQuaternion, UnitComplex};
+use geometry::{Isometry, Rotation2, Rotation3, UnitQuaternion, UnitComplex};

-/// A D-dimensional isometry.
-pub type Isometry<N, D> = IsometryBase<N, D, MatrixArray<N, D, U1>, Rotation<N, D>>;
-
 /// A 2-dimensional isometry using a unit complex number for its rotational part.
-pub type Isometry2<N> = IsometryBase<N, U2, MatrixArray<N, U2, U1>, UnitComplex<N>>;
+pub type Isometry2<N> = Isometry<N, U2, UnitComplex<N>>;

 /// A 3-dimensional isometry using a unit quaternion for its rotational part.
-pub type Isometry3<N> = IsometryBase<N, U3, MatrixArray<N, U3, U1>, UnitQuaternion<N>>;
+pub type Isometry3<N> = Isometry<N, U3, UnitQuaternion<N>>;

-/// A 2-dimensional isometry using a rotation matrix for its rotation part.
-pub type IsometryMatrix2<N> = Isometry<N, U2>;
+/// A 2-dimensional isometry using a rotation matrix for its rotational part.
+pub type IsometryMatrix2<N> = Isometry<N, U2, Rotation2<N>>;

-/// A 3-dimensional isometry using a rotation matrix for its rotation part.
-pub type IsometryMatrix3<N> = Isometry<N, U3>;
+/// A 3-dimensional isometry using a rotation matrix for its rotational part.
+pub type IsometryMatrix3<N> = Isometry<N, U3, Rotation3<N>>;
@@ -1,5 +1,7 @@
 #[cfg(feature = "arbitrary")]
 use quickcheck::{Arbitrary, Gen};
+#[cfg(feature = "arbitrary")]
+use core::storage::Owned;

 use num::One;
 use rand::{Rng, Rand};
@@ -7,31 +9,33 @@ use rand::{Rng, Rand};
 use alga::general::Real;
 use alga::linear::Rotation as AlgaRotation;

-use core::ColumnVector;
-use core::dimension::{DimName, U1, U2, U3, U4};
-use core::allocator::{OwnedAllocator, Allocator};
-use core::storage::OwnedStorage;
+use core::{DefaultAllocator, Vector2, Vector3};
+use core::dimension::{DimName, U2, U3};
+use core::allocator::Allocator;

-use geometry::{PointBase, TranslationBase, RotationBase, IsometryBase, UnitQuaternionBase, UnitComplex};
+use geometry::{Point, Translation, Rotation, Isometry, UnitQuaternion, UnitComplex,
+               Point3, Rotation2, Rotation3};

-impl<N, D: DimName, S, R> IsometryBase<N, D, S, R>
-    where N: Real,
-          S: OwnedStorage<N, D, U1>,
-          R: AlgaRotation<PointBase<N, D, S>>,
-          S::Alloc: OwnedAllocator<N, D, U1, S> {
+impl<N: Real, D: DimName, R: AlgaRotation<Point<N, D>>> Isometry<N, D, R>
+    where DefaultAllocator: Allocator<N, D> {
     /// Creates a new identity isometry.
     #[inline]
     pub fn identity() -> Self {
-        Self::from_parts(TranslationBase::identity(), R::identity())
+        Self::from_parts(Translation::identity(), R::identity())
     }
+
+    /// The isometry that applies the rotation `r` with its axis passing through the point `p`.
+    /// This effectively lets `p` invariant.
+    #[inline]
+    pub fn rotation_wrt_point(r: R, p: Point<N, D>) -> Self {
+        let shift = r.transform_vector(&-&p.coords);
+        Self::from_parts(Translation::from_vector(shift + p.coords), r)
+    }
 }

-impl<N, D: DimName, S, R> One for IsometryBase<N, D, S, R>
-    where N: Real,
-          S: OwnedStorage<N, D, U1>,
-          R: AlgaRotation<PointBase<N, D, S>>,
-          S::Alloc: OwnedAllocator<N, D, U1, S> {
+impl<N: Real, D: DimName, R: AlgaRotation<Point<N, D>>> One for Isometry<N, D, R>
+    where DefaultAllocator: Allocator<N, D> {
     /// Creates a new identity isometry.
     #[inline]
     fn one() -> Self {
@@ -39,37 +43,21 @@ impl<N, D: DimName, S, R> One for IsometryBase<N, D, S, R>
     }
 }

-impl<N, D: DimName, S, R> Rand for IsometryBase<N, D, S, R>
-    where N: Real + Rand,
-          S: OwnedStorage<N, D, U1>,
-          R: AlgaRotation<PointBase<N, D, S>> + Rand,
-          S::Alloc: OwnedAllocator<N, D, U1, S> {
+impl<N: Real + Rand, D: DimName, R> Rand for Isometry<N, D, R>
+    where R: AlgaRotation<Point<N, D>> + Rand,
+          DefaultAllocator: Allocator<N, D> {
     #[inline]
     fn rand<G: Rng>(rng: &mut G) -> Self {
         Self::from_parts(rng.gen(), rng.gen())
     }
 }

-impl<N, D: DimName, S, R> IsometryBase<N, D, S, R>
-    where N: Real,
-          S: OwnedStorage<N, D, U1>,
-          R: AlgaRotation<PointBase<N, D, S>>,
-          S::Alloc: OwnedAllocator<N, D, U1, S> {
-    /// The isometry that applies the rotation `r` with its axis passing through the point `p`.
-    /// This effectively lets `p` invariant.
-    #[inline]
-    pub fn rotation_wrt_point(r: R, p: PointBase<N, D, S>) -> Self {
-        let shift = r.transform_vector(&-&p.coords);
-        Self::from_parts(TranslationBase::from_vector(shift + p.coords), r)
-    }
-}
-
 #[cfg(feature = "arbitrary")]
-impl<N, D: DimName, S, R> Arbitrary for IsometryBase<N, D, S, R>
+impl<N, D: DimName, R> Arbitrary for Isometry<N, D, R>
     where N: Real + Arbitrary + Send,
-          S: OwnedStorage<N, D, U1> + Send,
-          R: AlgaRotation<PointBase<N, D, S>> + Arbitrary + Send,
-          S::Alloc: OwnedAllocator<N, D, U1, S> {
+          R: AlgaRotation<Point<N, D>> + Arbitrary + Send,
+          Owned<N, D>: Send,
+          DefaultAllocator: Allocator<N, D> {
     #[inline]
     fn arbitrary<G: Gen>(rng: &mut G) -> Self {
         Self::from_parts(Arbitrary::arbitrary(rng), Arbitrary::arbitrary(rng))
@@ -83,45 +71,31 @@ impl<N, D: DimName, S, R> Arbitrary for IsometryBase<N, D, S, R>
 */

 // 2D rotation.
-impl<N, S, SR> IsometryBase<N, U2, S, RotationBase<N, U2, SR>>
-    where N: Real,
-          S: OwnedStorage<N, U2, U1, Alloc = SR::Alloc>,
-          SR: OwnedStorage<N, U2, U2>,
-          S::Alloc: OwnedAllocator<N, U2, U1, S>,
-          SR::Alloc: OwnedAllocator<N, U2, U2, SR> {
+impl<N: Real> Isometry<N, U2, Rotation2<N>> {
     /// Creates a new isometry from a translation and a rotation angle.
     #[inline]
-    pub fn new(translation: ColumnVector<N, U2, S>, angle: N) -> Self {
-        Self::from_parts(TranslationBase::from_vector(translation), RotationBase::<N, U2, SR>::new(angle))
+    pub fn new(translation: Vector2<N>, angle: N) -> Self {
+        Self::from_parts(Translation::from_vector(translation), Rotation::<N, U2>::new(angle))
     }
 }

-impl<N, S> IsometryBase<N, U2, S, UnitComplex<N>>
-    where N: Real,
-          S: OwnedStorage<N, U2, U1>,
-          S::Alloc: OwnedAllocator<N, U2, U1, S> {
+impl<N: Real> Isometry<N, U2, UnitComplex<N>> {
     /// Creates a new isometry from a translation and a rotation angle.
     #[inline]
-    pub fn new(translation: ColumnVector<N, U2, S>, angle: N) -> Self {
-        Self::from_parts(TranslationBase::from_vector(translation), UnitComplex::from_angle(angle))
+    pub fn new(translation: Vector2<N>, angle: N) -> Self {
+        Self::from_parts(Translation::from_vector(translation), UnitComplex::from_angle(angle))
     }
 }

 // 3D rotation.
 macro_rules! isometry_construction_impl(
     ($RotId: ident < $($RotParams: ident),*>, $RRDim: ty, $RCDim: ty) => {
-        impl<N, S, SR> IsometryBase<N, U3, S, $RotId<$($RotParams),*>>
-            where N: Real,
-                  S: OwnedStorage<N, U3, U1, Alloc = SR::Alloc>,
-                  SR: OwnedStorage<N, $RRDim, $RCDim>,
-                  S::Alloc: OwnedAllocator<N, U3, U1, S>,
-                  SR::Alloc: OwnedAllocator<N, $RRDim, $RCDim, SR> +
-                             Allocator<N, U3, U3> {
+        impl<N: Real> Isometry<N, U3, $RotId<$($RotParams),*>> {
             /// Creates a new isometry from a translation and a rotation axis-angle.
             #[inline]
-            pub fn new(translation: ColumnVector<N, U3, S>, axisangle: ColumnVector<N, U3, S>) -> Self {
+            pub fn new(translation: Vector3<N>, axisangle: Vector3<N>) -> Self {
                 Self::from_parts(
-                    TranslationBase::from_vector(translation),
+                    Translation::from_vector(translation),
                     $RotId::<$($RotParams),*>::from_scaled_axis(axisangle))
             }
@@ -137,12 +111,12 @@ macro_rules! isometry_construction_impl(
             /// * up - Vertical direction. The only requirement of this parameter is to not be collinear
             ///   to `eye - at`. Non-collinearity is not checked.
             #[inline]
-            pub fn new_observer_frame(eye:    &PointBase<N, U3, S>,
-                                      target: &PointBase<N, U3, S>,
-                                      up:     &ColumnVector<N, U3, S>)
+            pub fn new_observer_frame(eye:    &Point3<N>,
+                                      target: &Point3<N>,
+                                      up:     &Vector3<N>)
                 -> Self {
                 Self::from_parts(
-                    TranslationBase::from_vector(eye.coords.clone()),
+                    Translation::from_vector(eye.coords.clone()),
                     $RotId::new_observer_frame(&(target - eye), up))
             }
@@ -157,14 +131,14 @@ macro_rules! isometry_construction_impl(
             /// * up - A vector approximately aligned with required the vertical axis. The only
             ///   requirement of this parameter is to not be collinear to `target - eye`.
             #[inline]
-            pub fn look_at_rh(eye:    &PointBase<N, U3, S>,
-                              target: &PointBase<N, U3, S>,
-                              up:     &ColumnVector<N, U3, S>)
+            pub fn look_at_rh(eye:    &Point3<N>,
+                              target: &Point3<N>,
+                              up:     &Vector3<N>)
                 -> Self {
                 let rotation = $RotId::look_at_rh(&(target - eye), up);
                 let trans    = &rotation * (-eye);

-                Self::from_parts(TranslationBase::from_vector(trans.coords), rotation)
+                Self::from_parts(Translation::from_vector(trans.coords), rotation)
             }

             /// Builds a left-handed look-at view matrix.
@@ -178,18 +152,18 @@ macro_rules! isometry_construction_impl(
             /// * up - A vector approximately aligned with required the vertical axis. The only
             ///   requirement of this parameter is to not be collinear to `target - eye`.
             #[inline]
-            pub fn look_at_lh(eye:    &PointBase<N, U3, S>,
-                              target: &PointBase<N, U3, S>,
-                              up:     &ColumnVector<N, U3, S>)
+            pub fn look_at_lh(eye:    &Point3<N>,
+                              target: &Point3<N>,
+                              up:     &Vector3<N>)
                 -> Self {
                 let rotation = $RotId::look_at_lh(&(target - eye), up);
                 let trans    = &rotation * (-eye);

-                Self::from_parts(TranslationBase::from_vector(trans.coords), rotation)
+                Self::from_parts(Translation::from_vector(trans.coords), rotation)
             }
         }
     }
 );

-isometry_construction_impl!(RotationBase<N, U3, SR>, U3, U3);
-isometry_construction_impl!(UnitQuaternionBase<N, SR>, U4, U1);
+isometry_construction_impl!(Rotation3<N>, U3, U3);
+isometry_construction_impl!(UnitQuaternion<N>, U4, U1);
@@ -1,50 +1,47 @@
 use alga::general::{Real, SubsetOf, SupersetOf};
 use alga::linear::Rotation;

-use core::{SquareMatrix, OwnedSquareMatrix};
-use core::dimension::{DimName, DimNameAdd, DimNameSum, U1};
-use core::storage::OwnedStorage;
-use core::allocator::{Allocator, OwnedAllocator};
+use core::{DefaultAllocator, MatrixN};
+use core::dimension::{DimName, DimNameAdd, DimNameSum, DimMin, U1};
+use core::allocator::Allocator;

-use geometry::{PointBase, TranslationBase, IsometryBase, SimilarityBase, TransformBase, SuperTCategoryOf, TAffine};
+use geometry::{Point, Translation, Isometry, Similarity, Transform, SuperTCategoryOf, TAffine};

 /*
  * This file provides the following conversions:
  * =============================================
  *
- * IsometryBase -> IsometryBase
- * IsometryBase -> SimilarityBase
- * IsometryBase -> TransformBase
- * IsometryBase -> Matrix (homogeneous)
+ * Isometry -> Isometry
+ * Isometry -> Similarity
+ * Isometry -> Transform
+ * Isometry -> Matrix (homogeneous)
 */

-impl<N1, N2, D: DimName, SA, SB, R1, R2> SubsetOf<IsometryBase<N2, D, SB, R2>> for IsometryBase<N1, D, SA, R1>
+impl<N1, N2, D: DimName, R1, R2> SubsetOf<Isometry<N2, D, R2>> for Isometry<N1, D, R1>
     where N1: Real,
           N2: Real + SupersetOf<N1>,
-          R1: Rotation<PointBase<N1, D, SA>> + SubsetOf<R2>,
-          R2: Rotation<PointBase<N2, D, SB>>,
-          SA: OwnedStorage<N1, D, U1>,
-          SB: OwnedStorage<N2, D, U1>,
-          SA::Alloc: OwnedAllocator<N1, D, U1, SA>,
-          SB::Alloc: OwnedAllocator<N2, D, U1, SB> {
+          R1: Rotation<Point<N1, D>> + SubsetOf<R2>,
+          R2: Rotation<Point<N2, D>>,
+          DefaultAllocator: Allocator<N1, D> +
+                            Allocator<N2, D> {
     #[inline]
-    fn to_superset(&self) -> IsometryBase<N2, D, SB, R2> {
-        IsometryBase::from_parts(
+    fn to_superset(&self) -> Isometry<N2, D, R2> {
+        Isometry::from_parts(
             self.translation.to_superset(),
             self.rotation.to_superset()
         )
     }

     #[inline]
-    fn is_in_subset(iso: &IsometryBase<N2, D, SB, R2>) -> bool {
-        ::is_convertible::<_, TranslationBase<N1, D, SA>>(&iso.translation) &&
+    fn is_in_subset(iso: &Isometry<N2, D, R2>) -> bool {
+        ::is_convertible::<_, Translation<N1, D>>(&iso.translation) &&
         ::is_convertible::<_, R1>(&iso.rotation)
     }

     #[inline]
-    unsafe fn from_superset_unchecked(iso: &IsometryBase<N2, D, SB, R2>) -> Self {
-        IsometryBase::from_parts(
+    unsafe fn from_superset_unchecked(iso: &Isometry<N2, D, R2>) -> Self {
+        Isometry::from_parts(
             iso.translation.to_subset_unchecked(),
             iso.rotation.to_subset_unchecked()
         )
@@ -52,95 +49,91 @@ impl<N1, N2, D: DimName, SA, SB, R1, R2> SubsetOf<IsometryBase<N2, D, SB, R2>> f
     }
 }

-impl<N1, N2, D: DimName, SA, SB, R1, R2> SubsetOf<SimilarityBase<N2, D, SB, R2>> for IsometryBase<N1, D, SA, R1>
+impl<N1, N2, D: DimName, R1, R2> SubsetOf<Similarity<N2, D, R2>> for Isometry<N1, D, R1>
     where N1: Real,
           N2: Real + SupersetOf<N1>,
-          R1: Rotation<PointBase<N1, D, SA>> + SubsetOf<R2>,
-          R2: Rotation<PointBase<N2, D, SB>>,
-          SA: OwnedStorage<N1, D, U1>,
-          SB: OwnedStorage<N2, D, U1>,
-          SA::Alloc: OwnedAllocator<N1, D, U1, SA>,
-          SB::Alloc: OwnedAllocator<N2, D, U1, SB> {
+          R1: Rotation<Point<N1, D>> + SubsetOf<R2>,
+          R2: Rotation<Point<N2, D>>,
+          DefaultAllocator: Allocator<N1, D> +
+                            Allocator<N2, D> {
     #[inline]
-    fn to_superset(&self) -> SimilarityBase<N2, D, SB, R2> {
-        SimilarityBase::from_isometry(
+    fn to_superset(&self) -> Similarity<N2, D, R2> {
+        Similarity::from_isometry(
             self.to_superset(),
             N2::one()
         )
     }

     #[inline]
-    fn is_in_subset(sim: &SimilarityBase<N2, D, SB, R2>) -> bool {
-        ::is_convertible::<_, IsometryBase<N1, D, SA, R1>>(&sim.isometry) &&
+    fn is_in_subset(sim: &Similarity<N2, D, R2>) -> bool {
+        ::is_convertible::<_, Isometry<N1, D, R1>>(&sim.isometry) &&
         sim.scaling() == N2::one()
     }

     #[inline]
-    unsafe fn from_superset_unchecked(sim: &SimilarityBase<N2, D, SB, R2>) -> Self {
+    unsafe fn from_superset_unchecked(sim: &Similarity<N2, D, R2>) -> Self {
         ::convert_ref_unchecked(&sim.isometry)
     }
 }

-impl<N1, N2, D, SA, SB, R, C> SubsetOf<TransformBase<N2, D, SB, C>> for IsometryBase<N1, D, SA, R>
+impl<N1, N2, D, R, C> SubsetOf<Transform<N2, D, C>> for Isometry<N1, D, R>
     where N1: Real,
           N2: Real + SupersetOf<N1>,
-          SA: OwnedStorage<N1, D, U1>,
-          SB: OwnedStorage<N2, DimNameSum<D, U1>, DimNameSum<D, U1>>,
           C: SuperTCategoryOf<TAffine>,
-          R: Rotation<PointBase<N1, D, SA>> +
-             SubsetOf<OwnedSquareMatrix<N1, DimNameSum<D, U1>, SA::Alloc>> + // needed by: .to_homogeneous()
-             SubsetOf<SquareMatrix<N2, DimNameSum<D, U1>, SB>>, // needed by: ::convert_unchecked(mm)
-          D: DimNameAdd<U1>,
-          SA::Alloc: OwnedAllocator<N1, D, U1, SA> +
-          SB::Alloc: OwnedAllocator<N2, DimNameSum<D, U1>, DimNameSum<D, U1>, SB> +
-                     Allocator<N2, D, D> + // needed by: mm.fixed_slice_mut
-                     Allocator<N2, D, U1> + // needed by: m.fixed_slice
-                     Allocator<N2, U1, D> { // needed by: m.fixed_slice
+          R: Rotation<Point<N1, D>> +
+             SubsetOf<MatrixN<N1, DimNameSum<D, U1>>> +
+             SubsetOf<MatrixN<N2, DimNameSum<D, U1>>>,
+          D: DimNameAdd<U1> +
+             DimMin<D, Output = D>, // needed by .is_special_orthogonal()
+          DefaultAllocator: Allocator<N1, D> +
+                            Allocator<N1, D, D> + // needed by R
+                            Allocator<N1, DimNameSum<D, U1>, DimNameSum<D, U1>> + // needed by: .to_homogeneous()
+                            Allocator<N2, DimNameSum<D, U1>, DimNameSum<D, U1>> + // needed by R
+                            Allocator<(usize, usize), D> + // needed by .is_special_orthogonal()
+                            Allocator<N2, D, D> +
+                            Allocator<N2, D> {
     #[inline]
-    fn to_superset(&self) -> TransformBase<N2, D, SB, C> {
-        TransformBase::from_matrix_unchecked(self.to_homogeneous().to_superset())
+    fn to_superset(&self) -> Transform<N2, D, C> {
+        Transform::from_matrix_unchecked(self.to_homogeneous().to_superset())
     }

     #[inline]
-    fn is_in_subset(t: &TransformBase<N2, D, SB, C>) -> bool {
+    fn is_in_subset(t: &Transform<N2, D, C>) -> bool {
         <Self as SubsetOf<_>>::is_in_subset(t.matrix())
     }

     #[inline]
-    unsafe fn from_superset_unchecked(t: &TransformBase<N2, D, SB, C>) -> Self {
+    unsafe fn from_superset_unchecked(t: &Transform<N2, D, C>) -> Self {
         Self::from_superset_unchecked(t.matrix())
     }
 }

-impl<N1, N2, D, SA, SB, R> SubsetOf<SquareMatrix<N2, DimNameSum<D, U1>, SB>> for IsometryBase<N1, D, SA, R>
+impl<N1, N2, D, R> SubsetOf<MatrixN<N2, DimNameSum<D, U1>>> for Isometry<N1, D, R>
     where N1: Real,
           N2: Real + SupersetOf<N1>,
-          SA: OwnedStorage<N1, D, U1>,
-          SB: OwnedStorage<N2, DimNameSum<D, U1>, DimNameSum<D, U1>>,
-          R: Rotation<PointBase<N1, D, SA>> +
-             SubsetOf<OwnedSquareMatrix<N1, DimNameSum<D, U1>, SA::Alloc>> + // needed by: .to_homogeneous()
-             SubsetOf<SquareMatrix<N2, DimNameSum<D, U1>, SB>>, // needed by: ::convert_unchecked(mm)
-          D: DimNameAdd<U1>,
-          SA::Alloc: OwnedAllocator<N1, D, U1, SA> +
-          SB::Alloc: OwnedAllocator<N2, DimNameSum<D, U1>, DimNameSum<D, U1>, SB> +
-                     Allocator<N2, D, D> + // needed by: mm.fixed_slice_mut
-                     Allocator<N2, D, U1> + // needed by: m.fixed_slice
-                     Allocator<N2, U1, D> { // needed by: m.fixed_slice
+          R: Rotation<Point<N1, D>> +
+             SubsetOf<MatrixN<N1, DimNameSum<D, U1>>> +
+             SubsetOf<MatrixN<N2, DimNameSum<D, U1>>>,
+          D: DimNameAdd<U1> +
+             DimMin<D, Output = D>, // needed by .is_special_orthogonal()
+          DefaultAllocator: Allocator<N1, D> +
+                            Allocator<N1, D, D> + // needed by R
+                            Allocator<N1, DimNameSum<D, U1>, DimNameSum<D, U1>> + // needed by: .to_homogeneous()
+                            Allocator<N2, DimNameSum<D, U1>, DimNameSum<D, U1>> + // needed by R
+                            Allocator<(usize, usize), D> + // needed by .is_special_orthogonal()
+                            Allocator<N2, D, D> +
+                            Allocator<N2, D> {
     #[inline]
-    fn to_superset(&self) -> SquareMatrix<N2, DimNameSum<D, U1>, SB> {
+    fn to_superset(&self) -> MatrixN<N2, DimNameSum<D, U1>> {
         self.to_homogeneous().to_superset()
     }

     #[inline]
-    fn is_in_subset(m: &SquareMatrix<N2, DimNameSum<D, U1>, SB>) -> bool {
+    fn is_in_subset(m: &MatrixN<N2, DimNameSum<D, U1>>) -> bool {
         let rot    = m.fixed_slice::<D, D>(0, 0);
         let bottom = m.fixed_slice::<U1, D>(D::dim(), 0);

@@ -154,9 +147,9 @@ impl<N1, N2, D, SA, SB, R> SubsetOf<SquareMatrix<N2, DimNameSum<D, U1>, SB>> for
     }

     #[inline]
-    unsafe fn from_superset_unchecked(m: &SquareMatrix<N2, DimNameSum<D, U1>, SB>) -> Self {
+    unsafe fn from_superset_unchecked(m: &MatrixN<N2, DimNameSum<D, U1>>) -> Self {
         let t = m.fixed_slice::<D, U1>(0, D::dim()).into_owned();
-        let t = TranslationBase::from_vector(::convert_unchecked(t));
+        let t = Translation::from_vector(::convert_unchecked(t));

         Self::from_parts(t, ::convert_unchecked(m.clone_owned()))
     }
@@ -1,14 +1,13 @@
 use std::ops::{Mul, MulAssign, Div, DivAssign};

 use alga::general::Real;
-use alga::linear::Rotation;
+use alga::linear::Rotation as AlgaRotation;

-use core::ColumnVector;
+use core::{DefaultAllocator, VectorN};
 use core::dimension::{DimName, U1, U3, U4};
-use core::storage::OwnedStorage;
-use core::allocator::OwnedAllocator;
+use core::allocator::Allocator;

-use geometry::{PointBase, RotationBase, IsometryBase, TranslationBase, UnitQuaternionBase};
+use geometry::{Point, Rotation, Isometry, Translation, UnitQuaternion};

 // FIXME: there are several cloning of rotations that we could probably get rid of (but we didn't
 // yet because that would require to add a bound like `where for<'a, 'b> &'a R: Mul<&'b R, Output = R>`
@@ -22,41 +21,41 @@ use geometry::{PointBase, RotationBase, IsometryBase, TranslationBase, UnitQuate
 *
 * (Operators)
 *
- * IsometryBase × IsometryBase
- * IsometryBase × R
+ * Isometry × Isometry
+ * Isometry × R
 *
 *
- * IsometryBase ÷ IsometryBase
- * IsometryBase ÷ R
+ * Isometry ÷ Isometry
+ * Isometry ÷ R
 *
- * IsometryBase × PointBase
- * IsometryBase × ColumnVector
+ * Isometry × Point
+ * Isometry × Vector
 *
 *
- * IsometryBase × TranslationBase
- * TranslationBase × IsometryBase
- * TranslationBase × R -> IsometryBase<R>
+ * Isometry × Translation
+ * Translation × Isometry
+ * Translation × R -> Isometry<R>
 *
- * NOTE: The following are provided explicitly because we can't have R × IsometryBase.
- * RotationBase × IsometryBase<RotationBase>
- * UnitQuaternion × IsometryBase<UnitQuaternion>
+ * NOTE: The following are provided explicitly because we can't have R × Isometry.
+ * Rotation × Isometry<Rotation>
+ * UnitQuaternion × Isometry<UnitQuaternion>
 *
- * RotationBase ÷ IsometryBase<RotationBase>
- * UnitQuaternion ÷ IsometryBase<UnitQuaternion>
+ * Rotation ÷ Isometry<Rotation>
+ * UnitQuaternion ÷ Isometry<UnitQuaternion>
 *
- * RotationBase × TranslationBase -> IsometryBase<RotationBase>
- * UnitQuaternion × TranslationBase -> IsometryBase<UnitQuaternion>
+ * Rotation × Translation -> Isometry<Rotation>
+ * UnitQuaternion × Translation -> Isometry<UnitQuaternion>
 *
 *
 * (Assignment Operators)
 *
- * IsometryBase ×= TranslationBase
+ * Isometry ×= Translation
 *
- * IsometryBase ×= IsometryBase
- * IsometryBase ×= R
+ * Isometry ×= Isometry
+ * Isometry ×= R
 *
- * IsometryBase ÷= IsometryBase
- * IsometryBase ÷= R
+ * Isometry ÷= Isometry
+ * Isometry ÷= R
 *
 */

@@ -65,11 +64,9 @@ macro_rules! isometry_binop_impl(
     ($Op: ident, $op: ident;
      $lhs: ident: $Lhs: ty, $rhs: ident: $Rhs: ty, Output = $Output: ty;
      $action: expr; $($lives: tt),*) => {
-        impl<$($lives ,)* N, D: DimName, S, R> $Op<$Rhs> for $Lhs
-            where N: Real,
-                  S: OwnedStorage<N, D, U1>,
-                  R: Rotation<PointBase<N, D, S>>,
-                  S::Alloc: OwnedAllocator<N, D, U1, S> {
+        impl<$($lives ,)* N: Real, D: DimName, R> $Op<$Rhs> for $Lhs
+            where R: AlgaRotation<Point<N, D>>,
+                  DefaultAllocator: Allocator<N, D> {
             type Output = $Output;

             #[inline]
@@ -114,22 +111,18 @@ macro_rules! isometry_binop_assign_impl_all(
      $lhs: ident: $Lhs: ty, $rhs: ident: $Rhs: ty;
      [val] => $action_val: expr;
      [ref] => $action_ref: expr;) => {
-        impl<N, D: DimName, S, R> $OpAssign<$Rhs> for $Lhs
-            where N: Real,
-                  S: OwnedStorage<N, D, U1>,
-                  R: Rotation<PointBase<N, D, S>>,
-                  S::Alloc: OwnedAllocator<N, D, U1, S> {
+        impl<N: Real, D: DimName, R> $OpAssign<$Rhs> for $Lhs
+            where R: AlgaRotation<Point<N, D>>,
+                  DefaultAllocator: Allocator<N, D> {
             #[inline]
             fn $op_assign(&mut $lhs, $rhs: $Rhs) {
                 $action_val
             }
         }

-        impl<'b, N, D: DimName, S, R> $OpAssign<&'b $Rhs> for $Lhs
-            where N: Real,
-                  S: OwnedStorage<N, D, U1>,
-                  R: Rotation<PointBase<N, D, S>>,
-                  S::Alloc: OwnedAllocator<N, D, U1, S> {
+        impl<'b, N: Real, D: DimName, R> $OpAssign<&'b $Rhs> for $Lhs
+            where R: AlgaRotation<Point<N, D>>,
+                  DefaultAllocator: Allocator<N, D> {
             #[inline]
             fn $op_assign(&mut $lhs, $rhs: &'b $Rhs) {
                 $action_ref
@@ -138,18 +131,18 @@ macro_rules! isometry_binop_assign_impl_all(
     }
 );

-// IsometryBase × IsometryBase
-// IsometryBase ÷ IsometryBase
+// Isometry × Isometry
+// Isometry ÷ Isometry
 isometry_binop_impl_all!(
     Mul, mul;
-    self: IsometryBase<N, D, S, R>, rhs: IsometryBase<N, D, S, R>, Output = IsometryBase<N, D, S, R>;
+    self: Isometry<N, D, R>, rhs: Isometry<N, D, R>, Output = Isometry<N, D, R>;
     [val val] => &self * &rhs;
     [ref val] => self * &rhs;
     [val ref] => &self * rhs;
     [ref ref] => {
         let shift = self.rotation.transform_vector(&rhs.translation.vector);

-        IsometryBase::from_parts(TranslationBase::from_vector(&self.translation.vector + shift),
+        Isometry::from_parts(Translation::from_vector(&self.translation.vector + shift),
                              self.rotation.clone() * rhs.rotation.clone()) // FIXME: too bad we have to clone.
     };
 );
@@ -157,7 +150,7 @@ isometry_binop_impl_all!(

 isometry_binop_impl_all!(
     Div, div;
-    self: IsometryBase<N, D, S, R>, rhs: IsometryBase<N, D, S, R>, Output = IsometryBase<N, D, S, R>;
+    self: Isometry<N, D, R>, rhs: Isometry<N, D, R>, Output = Isometry<N, D, R>;
     [val val] => self * rhs.inverse();
     [ref val] => self * rhs.inverse();
     [val ref] => self * rhs.inverse();
@@ -165,10 +158,10 @@ isometry_binop_impl_all!(
 );

-// IsometryBase ×= TranslationBase
+// Isometry ×= Translation
 isometry_binop_assign_impl_all!(
     MulAssign, mul_assign;
-    self: IsometryBase<N, D, S, R>, rhs: TranslationBase<N, D, S>;
+    self: Isometry<N, D, R>, rhs: Translation<N, D>;
     [val] => *self *= &rhs;
     [ref] => {
         let shift = self.rotation.transform_vector(&rhs.vector);
@@ -176,11 +169,11 @@ isometry_binop_assign_impl_all!(
     };
 );

-// IsometryBase ×= IsometryBase
-// IsometryBase ÷= IsometryBase
+// Isometry ×= Isometry
+// Isometry ÷= Isometry
 isometry_binop_assign_impl_all!(
     MulAssign, mul_assign;
-    self: IsometryBase<N, D, S, R>, rhs: IsometryBase<N, D, S, R>;
+    self: Isometry<N, D, R>, rhs: Isometry<N, D, R>;
     [val] => *self *= &rhs;
     [ref] => {
         let shift = self.rotation.transform_vector(&rhs.translation.vector);
@@ -191,55 +184,55 @@ isometry_binop_assign_impl_all!(

 isometry_binop_assign_impl_all!(
     DivAssign, div_assign;
-    self: IsometryBase<N, D, S, R>, rhs: IsometryBase<N, D, S, R>;
+    self: Isometry<N, D, R>, rhs: Isometry<N, D, R>;
     [val] => *self /= &rhs;
     [ref] => *self *= rhs.inverse();
 );

-// IsometryBase ×= R
-// IsometryBase ÷= R
+// Isometry ×= R
+// Isometry ÷= R
 isometry_binop_assign_impl_all!(
     MulAssign, mul_assign;
-    self: IsometryBase<N, D, S, R>, rhs: R;
+    self: Isometry<N, D, R>, rhs: R;
     [val] => self.rotation *= rhs;
     [ref] => self.rotation *= rhs.clone();
 );

 isometry_binop_assign_impl_all!(
     DivAssign, div_assign;
-    self: IsometryBase<N, D, S, R>, rhs: R;
+    self: Isometry<N, D, R>, rhs: R;
     // FIXME: don't invert explicitly?
     [val] => *self *= rhs.inverse();
     [ref] => *self *= rhs.inverse();
 );

-// IsometryBase × R
-// IsometryBase ÷ R
+// Isometry × R
+// Isometry ÷ R
 isometry_binop_impl_all!(
     Mul, mul;
-    self: IsometryBase<N, D, S, R>, rhs: R, Output = IsometryBase<N, D, S, R>;
-    [val val] => IsometryBase::from_parts(self.translation, self.rotation * rhs);
-    [ref val] => IsometryBase::from_parts(self.translation.clone(), self.rotation.clone() * rhs); // FIXME: do not clone.
-    [val ref] => IsometryBase::from_parts(self.translation, self.rotation * rhs.clone());
-    [ref ref] => IsometryBase::from_parts(self.translation.clone(), self.rotation.clone() * rhs.clone());
+    self: Isometry<N, D, R>, rhs: R, Output = Isometry<N, D, R>;
+    [val val] => Isometry::from_parts(self.translation, self.rotation * rhs);
+    [ref val] => Isometry::from_parts(self.translation.clone(), self.rotation.clone() * rhs); // FIXME: do not clone.
+    [val ref] => Isometry::from_parts(self.translation, self.rotation * rhs.clone());
+    [ref ref] => Isometry::from_parts(self.translation.clone(), self.rotation.clone() * rhs.clone());
 );

 isometry_binop_impl_all!(
     Div, div;
-    self: IsometryBase<N, D, S, R>, rhs: R, Output = IsometryBase<N, D, S, R>;
-    [val val] => IsometryBase::from_parts(self.translation, self.rotation / rhs);
-    [ref val] => IsometryBase::from_parts(self.translation.clone(), self.rotation.clone() / rhs);
-    [val ref] => IsometryBase::from_parts(self.translation, self.rotation / rhs.clone());
-    [ref ref] => IsometryBase::from_parts(self.translation.clone(), self.rotation.clone() / rhs.clone());
+    self: Isometry<N, D, R>, rhs: R, Output = Isometry<N, D, R>;
+    [val val] => Isometry::from_parts(self.translation, self.rotation / rhs);
+    [ref val] => Isometry::from_parts(self.translation.clone(), self.rotation.clone() / rhs);
+    [val ref] => Isometry::from_parts(self.translation, self.rotation / rhs.clone());
+    [ref ref] => Isometry::from_parts(self.translation.clone(), self.rotation.clone() / rhs.clone());
 );

-// IsometryBase × PointBase
+// Isometry × Point
 isometry_binop_impl_all!(
     Mul, mul;
-    self: IsometryBase<N, D, S, R>, right: PointBase<N, D, S>, Output = PointBase<N, D, S>;
+    self: Isometry<N, D, R>, right: Point<N, D>, Output = Point<N, D>;
     [val val] => self.translation * self.rotation.transform_point(&right);
     [ref val] => &self.translation * self.rotation.transform_point(&right);
     [val ref] => self.translation * self.rotation.transform_point(right);
@@ -247,10 +240,12 @@ isometry_binop_impl_all!(
 );

-// IsometryBase × Vector
+// Isometry × Vector
 isometry_binop_impl_all!(
     Mul, mul;
-    self: IsometryBase<N, D, S, R>, right: ColumnVector<N, D, S>, Output = ColumnVector<N, D, S>;
+    // FIXME: because of `transform_vector`, we cant use a generic storage type for the rhs vector,
+    // i.e., right: Vector<N, D, S> where S: Storage<N, D>.
+    self: Isometry<N, D, R>, right: VectorN<N, D>, Output = VectorN<N, D>;
     [val val] => self.rotation.transform_vector(&right);
     [ref val] => self.rotation.transform_vector(&right);
     [val ref] => self.rotation.transform_vector(right);
@@ -258,38 +253,38 @@ isometry_binop_impl_all!(
 );

-// IsometryBase × TranslationBase
+// Isometry × Translation
 isometry_binop_impl_all!(
     Mul, mul;
-    self: IsometryBase<N, D, S, R>, right: TranslationBase<N, D, S>, Output = IsometryBase<N, D, S, R>;
+    self: Isometry<N, D, R>, right: Translation<N, D>, Output = Isometry<N, D, R>;
     [val val] => &self * &right;
     [ref val] => self * &right;
     [val ref] => &self * right;
     [ref ref] => {
         let new_tr = &self.translation.vector + self.rotation.transform_vector(&right.vector);
-        IsometryBase::from_parts(TranslationBase::from_vector(new_tr), self.rotation.clone())
+        Isometry::from_parts(Translation::from_vector(new_tr), self.rotation.clone())
     };
 );

-// TranslationBase × IsometryBase
+// Translation × Isometry
 isometry_binop_impl_all!(
     Mul, mul;
-    self: TranslationBase<N, D, S>, right: IsometryBase<N, D, S, R>, Output = IsometryBase<N, D, S, R>;
-    [val val] => IsometryBase::from_parts(self * right.translation, right.rotation);
-    [ref val] => IsometryBase::from_parts(self * &right.translation, right.rotation);
-    [val ref] => IsometryBase::from_parts(self * &right.translation, right.rotation.clone());
-    [ref ref] => IsometryBase::from_parts(self * &right.translation, right.rotation.clone());
+    self: Translation<N, D>, right: Isometry<N, D, R>, Output = Isometry<N, D, R>;
+    [val val] => Isometry::from_parts(self * right.translation, right.rotation);
+    [ref val] => Isometry::from_parts(self * &right.translation, right.rotation);
|
||||
[val ref] => Isometry::from_parts(self * &right.translation, right.rotation.clone());
|
||||
[ref ref] => Isometry::from_parts(self * &right.translation, right.rotation.clone());
|
||||
);
|
||||
|
||||
|
||||
// TranslationBase × R
|
||||
// Translation × R
|
||||
isometry_binop_impl_all!(
|
||||
Mul, mul;
|
||||
self: TranslationBase<N, D, S>, right: R, Output = IsometryBase<N, D, S, R>;
|
||||
[val val] => IsometryBase::from_parts(self, right);
|
||||
[ref val] => IsometryBase::from_parts(self.clone(), right);
|
||||
[val ref] => IsometryBase::from_parts(self, right.clone());
|
||||
[ref ref] => IsometryBase::from_parts(self.clone(), right.clone());
|
||||
self: Translation<N, D>, right: R, Output = Isometry<N, D, R>;
|
||||
[val val] => Isometry::from_parts(self, right);
|
||||
[ref val] => Isometry::from_parts(self.clone(), right);
|
||||
[val ref] => Isometry::from_parts(self, right.clone());
|
||||
[ref ref] => Isometry::from_parts(self.clone(), right.clone());
|
||||
);
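Each `[val val]`/`[ref val]`/`[val ref]`/`[ref ref]` arm above expands to one of the four by-value/by-reference `impl`s that Rust's operator traits require. A minimal, self-contained sketch of the pattern these macros generate (the `Rot2` type here is hypothetical, for illustration only, not part of nalgebra):

```rust
use std::ops::Mul;

// A toy 2D rotation represented by the (cos, sin) of its angle.
#[derive(Clone, Copy, Debug, PartialEq)]
struct Rot2 { c: f64, s: f64 }

// The four impls the macro arms expand to: val*val, ref*val, val*ref, ref*ref.
impl Mul<Rot2> for Rot2 {
    type Output = Rot2;
    fn mul(self, rhs: Rot2) -> Rot2 {
        // Angle-addition formulas.
        Rot2 { c: self.c * rhs.c - self.s * rhs.s,
               s: self.s * rhs.c + self.c * rhs.s }
    }
}
impl<'a> Mul<Rot2> for &'a Rot2 {
    type Output = Rot2;
    fn mul(self, rhs: Rot2) -> Rot2 { *self * rhs }
}
impl<'b> Mul<&'b Rot2> for Rot2 {
    type Output = Rot2;
    fn mul(self, rhs: &'b Rot2) -> Rot2 { self * *rhs }
}
impl<'a, 'b> Mul<&'b Rot2> for &'a Rot2 {
    type Output = Rot2;
    fn mul(self, rhs: &'b Rot2) -> Rot2 { *self * *rhs }
}

fn main() {
    let a = Rot2 { c: 0.0, s: 1.0 }; // 90 degrees
    let b = Rot2 { c: 1.0, s: 0.0 }; // identity
    // All four operand combinations compile and agree.
    assert_eq!(a * b, a);
    assert_eq!(&a * b, a);
    assert_eq!(a * &b, a);
    assert_eq!(&a * &b, a);
}
```

Writing these four impls by hand for every operand pair is exactly the boilerplate `isometry_binop_impl_all!` exists to avoid.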


@@ -300,12 +295,9 @@ macro_rules! isometry_from_composition_impl(
    ($R1: ty, $C1: ty),($R2: ty, $C2: ty) $(for $Dims: ident: $DimsBound: ident),*;
    $lhs: ident: $Lhs: ty, $rhs: ident: $Rhs: ty, Output = $Output: ty;
    $action: expr; $($lives: tt),*) => {
        impl<$($lives ,)* N $(, $Dims: $DimsBound)*, SA, SB> $Op<$Rhs> for $Lhs
            where N: Real,
                  SA: OwnedStorage<N, $R1, $C1>,
                  SB: OwnedStorage<N, $R2, $C2, Alloc = SA::Alloc>,
                  SA::Alloc: OwnedAllocator<N, $R1, $C1, SA>,
                  SB::Alloc: OwnedAllocator<N, $R2, $C2, SB> {
        impl<$($lives ,)* N: Real $(, $Dims: $DimsBound)*> $Op<$Rhs> for $Lhs
            where DefaultAllocator: Allocator<N, $R1, $C1> +
                                    Allocator<N, $R2, $C2> {
            type Output = $Output;

            #[inline]

@@ -352,51 +344,51 @@ macro_rules! isometry_from_composition_impl_all(
);


// RotationBase × TranslationBase
// Rotation × Translation
isometry_from_composition_impl_all!(
    Mul, mul;
    (D, D), (D, U1) for D: DimName;
    self: RotationBase<N, D, SA>, right: TranslationBase<N, D, SB>, Output = IsometryBase<N, D, SB, RotationBase<N, D, SA>>;
    [val val] => IsometryBase::from_parts(TranslationBase::from_vector(&self * right.vector), self);
    [ref val] => IsometryBase::from_parts(TranslationBase::from_vector(self * right.vector), self.clone());
    [val ref] => IsometryBase::from_parts(TranslationBase::from_vector(&self * &right.vector), self);
    [ref ref] => IsometryBase::from_parts(TranslationBase::from_vector(self * &right.vector), self.clone());
    self: Rotation<N, D>, right: Translation<N, D>, Output = Isometry<N, D, Rotation<N, D>>;
    [val val] => Isometry::from_parts(Translation::from_vector(&self * right.vector), self);
    [ref val] => Isometry::from_parts(Translation::from_vector(self * right.vector), self.clone());
    [val ref] => Isometry::from_parts(Translation::from_vector(&self * &right.vector), self);
    [ref ref] => Isometry::from_parts(Translation::from_vector(self * &right.vector), self.clone());
);


// UnitQuaternionBase × TranslationBase
// UnitQuaternion × Translation
isometry_from_composition_impl_all!(
    Mul, mul;
    (U4, U1), (U3, U1);
    self: UnitQuaternionBase<N, SA>, right: TranslationBase<N, U3, SB>,
    Output = IsometryBase<N, U3, SB, UnitQuaternionBase<N, SA>>;
    [val val] => IsometryBase::from_parts(TranslationBase::from_vector(&self * right.vector), self);
    [ref val] => IsometryBase::from_parts(TranslationBase::from_vector( self * right.vector), self.clone());
    [val ref] => IsometryBase::from_parts(TranslationBase::from_vector(&self * &right.vector), self);
    [ref ref] => IsometryBase::from_parts(TranslationBase::from_vector( self * &right.vector), self.clone());
    self: UnitQuaternion<N>, right: Translation<N, U3>,
    Output = Isometry<N, U3, UnitQuaternion<N>>;
    [val val] => Isometry::from_parts(Translation::from_vector(&self * right.vector), self);
    [ref val] => Isometry::from_parts(Translation::from_vector( self * right.vector), self.clone());
    [val ref] => Isometry::from_parts(Translation::from_vector(&self * &right.vector), self);
    [ref ref] => Isometry::from_parts(Translation::from_vector( self * &right.vector), self.clone());
);

// RotationBase × IsometryBase
// Rotation × Isometry
isometry_from_composition_impl_all!(
    Mul, mul;
    (D, D), (D, U1) for D: DimName;
    self: RotationBase<N, D, SA>, right: IsometryBase<N, D, SB, RotationBase<N, D, SA>>,
    Output = IsometryBase<N, D, SB, RotationBase<N, D, SA>>;
    self: Rotation<N, D>, right: Isometry<N, D, Rotation<N, D>>,
    Output = Isometry<N, D, Rotation<N, D>>;
    [val val] => &self * &right;
    [ref val] => self * &right;
    [val ref] => &self * right;
    [ref ref] => {
        let shift = self * &right.translation.vector;
        IsometryBase::from_parts(TranslationBase::from_vector(shift), self * &right.rotation)
        Isometry::from_parts(Translation::from_vector(shift), self * &right.rotation)
    };
);

// RotationBase ÷ IsometryBase
// Rotation ÷ Isometry
isometry_from_composition_impl_all!(
    Div, div;
    (D, D), (D, U1) for D: DimName;
    self: RotationBase<N, D, SA>, right: IsometryBase<N, D, SB, RotationBase<N, D, SA>>,
    Output = IsometryBase<N, D, SB, RotationBase<N, D, SA>>;
    self: Rotation<N, D>, right: Isometry<N, D, Rotation<N, D>>,
    Output = Isometry<N, D, Rotation<N, D>>;
    // FIXME: don't call inverse explicitly?
    [val val] => self * right.inverse();
    [ref val] => self * right.inverse();

@@ -405,28 +397,28 @@ isometry_from_composition_impl_all!(
);
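As the `FIXME` notes, the `Div` arms are implemented by multiplying with an explicitly computed inverse. A self-contained sketch of that `a / b = a * b⁻¹` convention, using a hypothetical angle-based rotation type (not part of nalgebra):

```rust
use std::ops::{Div, Mul};

// Toy rotation by an angle in radians (illustration only).
#[derive(Clone, Copy, Debug)]
struct Rot(f64);

impl Rot {
    // The inverse of a rotation negates its angle.
    fn inverse(self) -> Rot { Rot(-self.0) }
}

impl Mul for Rot {
    type Output = Rot;
    // Composing rotations adds their angles.
    fn mul(self, rhs: Rot) -> Rot { Rot(self.0 + rhs.0) }
}

// Division delegates to multiplication by the inverse, mirroring the macro arms above.
impl Div for Rot {
    type Output = Rot;
    fn div(self, rhs: Rot) -> Rot { self * rhs.inverse() }
}

fn main() {
    let a = Rot(1.0);
    let b = Rot(0.25);
    let c = a / b;
    assert!((c.0 - 0.75).abs() < 1e-12);
}
```

For group-like transformations this identity always holds; the FIXME only questions whether the inverse should be materialized or fused into the multiplication.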


// UnitQuaternionBase × IsometryBase
// UnitQuaternion × Isometry
isometry_from_composition_impl_all!(
    Mul, mul;
    (U4, U1), (U3, U1);
    self: UnitQuaternionBase<N, SA>, right: IsometryBase<N, U3, SB, UnitQuaternionBase<N, SA>>,
    Output = IsometryBase<N, U3, SB, UnitQuaternionBase<N, SA>>;
    self: UnitQuaternion<N>, right: Isometry<N, U3, UnitQuaternion<N>>,
    Output = Isometry<N, U3, UnitQuaternion<N>>;
    [val val] => &self * &right;
    [ref val] => self * &right;
    [val ref] => &self * right;
    [ref ref] => {
        let shift = self * &right.translation.vector;
        IsometryBase::from_parts(TranslationBase::from_vector(shift), self * &right.rotation)
        Isometry::from_parts(Translation::from_vector(shift), self * &right.rotation)
    };
);


// UnitQuaternionBase ÷ IsometryBase
// UnitQuaternion ÷ Isometry
isometry_from_composition_impl_all!(
    Div, div;
    (U4, U1), (U3, U1);
    self: UnitQuaternionBase<N, SA>, right: IsometryBase<N, U3, SB, UnitQuaternionBase<N, SA>>,
    Output = IsometryBase<N, U3, SB, UnitQuaternionBase<N, SA>>;
    self: UnitQuaternion<N>, right: Isometry<N, U3, UnitQuaternion<N>>,
    Output = Isometry<N, U3, UnitQuaternion<N>>;
    // FIXME: don't call inverse explicitly?
    [val val] => self * right.inverse();
    [ref val] => self * right.inverse();

@@ -14,7 +14,7 @@ mod point_coordinates;
mod rotation;
mod rotation_construction;
mod rotation_ops;
mod rotation_alga; // FIXME: implement RotationBase methods.
mod rotation_alga; // FIXME: implement Rotation methods.
mod rotation_conversion;
mod rotation_alias;
mod rotation_specialization;

@@ -23,9 +23,8 @@ mod quaternion;
mod quaternion_construction;
mod quaternion_ops;
mod quaternion_alga;
mod quaternion_alias;
mod quaternion_coordinates;
mod quaternion_conversion;
mod quaternion_coordinates;

mod unit_complex;
mod unit_complex_construction;

@@ -61,6 +60,8 @@ mod transform_alga;
mod transform_conversion;
mod transform_alias;

mod reflection;

mod orthographic;
mod perspective;

@@ -71,7 +72,6 @@ pub use self::rotation::*;
pub use self::rotation_alias::*;

pub use self::quaternion::*;
pub use self::quaternion_alias::*;

pub use self::unit_complex::*;

@@ -87,5 +87,7 @@ pub use self::similarity_alias::*;
pub use self::transform::*;
pub use self::transform_alias::*;

pub use self::orthographic::{OrthographicBase, Orthographic3};
pub use self::perspective::{PerspectiveBase, Perspective3};
pub use self::reflection::*;

pub use self::orthographic::Orthographic3;
pub use self::perspective::Perspective3;

@@ -8,7 +8,7 @@ macro_rules! md_impl(
    // Operator, operator method, and scalar bounds.
    $Op: ident, $op: ident $(where N: $($ScalarBounds: ident),*)*;
    // Storage dimensions, and dimension bounds.
    ($R1: ty, $C1: ty),($R2: ty, $C2: ty) for $($Dims: ident: $DimsBound: ident $(<$BoundParam: ty>)*),+
    ($R1: ty, $C1: ty),($R2: ty, $C2: ty) for $($Dims: ident: $DimsBound: ident $(<$($BoundParam: ty),*>)*),+
    // [Optional] Extra allocator bounds.
    $(where $ConstraintType: ty: $ConstraintBound: ident<$($ConstraintBoundParams: ty $( = $EqBound: ty )*),*> )*;
    // Argument identifiers and types + output.

@@ -17,10 +17,11 @@ macro_rules! md_impl(
    $action: expr;
    // Lifetime.
    $($lives: tt),*) => {
        impl<$($lives ,)* N $(, $Dims: $DimsBound $(<$BoundParam>)*)*, SA, SB> $Op<$Rhs> for $Lhs
            where N: Scalar + Zero + ClosedAdd + ClosedMul $($(+ $ScalarBounds)*)*,
                  SA: Storage<N, $R1, $C1>,
                  SB: Storage<N, $R2, $C2>,
        impl<$($lives ,)* N $(, $Dims: $DimsBound $(<$($BoundParam),*>)*)*> $Op<$Rhs> for $Lhs
            where N: Scalar + Zero + One + ClosedAdd + ClosedMul $($(+ $ScalarBounds)*)*,
                  DefaultAllocator: Allocator<N, $R1, $C1> +
                                    Allocator<N, $R2, $C2> +
                                    Allocator<N, $R1, $C2>,
                  $( $ConstraintType: $ConstraintBound<$( $ConstraintBoundParams $( = $EqBound )*),*> ),*
        {
            type Output = $Result;

@@ -41,7 +42,7 @@ macro_rules! md_impl_all(
    // Operator, operator method, and scalar bounds.
    $Op: ident, $op: ident $(where N: $($ScalarBounds: ident),*)*;
    // Storage dimensions, and dimension bounds.
    ($R1: ty, $C1: ty),($R2: ty, $C2: ty) for $($Dims: ident: $DimsBound: ident $(<$BoundParam: ty>)*),+
    ($R1: ty, $C1: ty),($R2: ty, $C2: ty) for $($Dims: ident: $DimsBound: ident $(<$($BoundParam: ty),*>)*),+
    // [Optional] Extra allocator bounds.
    $(where $ConstraintType: ty: $ConstraintBound: ident<$($ConstraintBoundParams: ty $( = $EqBound: ty )*),*> )*;
    // Argument identifiers and types + output.

@@ -54,28 +55,28 @@ macro_rules! md_impl_all(

    md_impl!(
        $Op, $op $(where N: $($ScalarBounds),*)*;
        ($R1, $C1),($R2, $C2) for $($Dims: $DimsBound $(<$BoundParam>)*),+
        ($R1, $C1),($R2, $C2) for $($Dims: $DimsBound $(<$($BoundParam),*>)*),+
        $(where $ConstraintType: $ConstraintBound<$($ConstraintBoundParams $( = $EqBound )*),*>)*;
        $lhs: $Lhs, $rhs: $Rhs, Output = $Result;
        $action_val_val; );

    md_impl!(
        $Op, $op $(where N: $($ScalarBounds),*)*;
        ($R1, $C1),($R2, $C2) for $($Dims: $DimsBound $(<$BoundParam>)*),+
        ($R1, $C1),($R2, $C2) for $($Dims: $DimsBound $(<$($BoundParam),*>)*),+
        $(where $ConstraintType: $ConstraintBound<$($ConstraintBoundParams $( = $EqBound )*),*>)*;
        $lhs: &'a $Lhs, $rhs: $Rhs, Output = $Result;
        $action_ref_val; 'a);

    md_impl!(
        $Op, $op $(where N: $($ScalarBounds),*)*;
        ($R1, $C1),($R2, $C2) for $($Dims: $DimsBound $(<$BoundParam>)*),+
        ($R1, $C1),($R2, $C2) for $($Dims: $DimsBound $(<$($BoundParam),*>)*),+
        $(where $ConstraintType: $ConstraintBound<$($ConstraintBoundParams $( = $EqBound )*),*>)*;
        $lhs: $Lhs, $rhs: &'b $Rhs, Output = $Result;
        $action_val_ref; 'b);

    md_impl!(
        $Op, $op $(where N: $($ScalarBounds),*)*;
        ($R1, $C1),($R2, $C2) for $($Dims: $DimsBound $(<$BoundParam>)*),+
        ($R1, $C1),($R2, $C2) for $($Dims: $DimsBound $(<$($BoundParam),*>)*),+
        $(where $ConstraintType: $ConstraintBound<$($ConstraintBoundParams $( = $EqBound )*),*>)*;
        $lhs: &'a $Lhs, $rhs: &'b $Rhs, Output = $Result;
        $action_ref_ref; 'a, 'b);

@@ -89,19 +90,18 @@ macro_rules! md_assign_impl(
    // Operator, operator method, and scalar bounds.
    $Op: ident, $op: ident $(where N: $($ScalarBounds: ident),*)*;
    // Storage dimensions, and dimension bounds.
    ($R1: ty, $C1: ty),($R2: ty, $C2: ty) for $($Dims: ident: $DimsBound: ident $(<$BoundParam: ty>)*),+
    ($R1: ty, $C1: ty),($R2: ty, $C2: ty) for $($Dims: ident: $DimsBound: ident $(<$($BoundParam: ty),*>)*),+
    // [Optional] Extra allocator bounds.
    $(where $ConstraintType: ty: $ConstraintBound: ident<$($ConstraintBoundParams: ty $( = $EqBound: ty )*),*> )*;
    $(where $ConstraintType: ty: $ConstraintBound: ident $(<$($ConstraintBoundParams: ty $( = $EqBound: ty )*),*>)* )*;
    // Argument identifiers and types.
    $lhs: ident: $Lhs: ty, $rhs: ident: $Rhs: ty;
    // Actual implementation and lifetimes.
    $action: expr; $($lives: tt),*) => {
        impl<$($lives ,)* N $(, $Dims: $DimsBound $(<$BoundParam>)*)*, SA, SB> $Op<$Rhs> for $Lhs
            where N: Scalar + Zero + ClosedAdd + ClosedMul $($(+ $ScalarBounds)*)*,
                  SA: OwnedStorage<N, $R1, $C1>, // FIXME: this is too restrictive.
                  SB: Storage<N, $R2, $C2>,
                  SA::Alloc: OwnedAllocator<N, $R1, $C1, SA>,
                  $( $ConstraintType: $ConstraintBound<$( $ConstraintBoundParams $( = $EqBound )*),*> ),*
        impl<$($lives ,)* N $(, $Dims: $DimsBound $(<$($BoundParam),*>)*)*> $Op<$Rhs> for $Lhs
            where N: Scalar + Zero + One + ClosedAdd + ClosedMul $($(+ $ScalarBounds)*)*,
                  DefaultAllocator: Allocator<N, $R1, $C1> +
                                    Allocator<N, $R2, $C2>,
                  $( $ConstraintType: $ConstraintBound $(<$( $ConstraintBoundParams $( = $EqBound )*),*>)* ),*
        {
            #[inline]
            fn $op(&mut $lhs, $rhs: $Rhs) {

@@ -118,9 +118,9 @@ macro_rules! md_assign_impl_all(
    // Operator, operator method, and scalar bounds.
    $Op: ident, $op: ident $(where N: $($ScalarBounds: ident),*)*;
    // Storage dimensions, and dimension bounds.
    ($R1: ty, $C1: ty),($R2: ty, $C2: ty) for $($Dims: ident: $DimsBound: ident $(<$BoundParam: ty>)*),+
    ($R1: ty, $C1: ty),($R2: ty, $C2: ty) for $($Dims: ident: $DimsBound: ident $(<$($BoundParam: ty),*>)*),+
    // [Optional] Extra allocator bounds.
    $(where $ConstraintType: ty: $ConstraintBound: ident<$($ConstraintBoundParams: ty $( = $EqBound: ty )*),*> )*;
    $(where $ConstraintType: ty: $ConstraintBound: ident$(<$($ConstraintBoundParams: ty $( = $EqBound: ty )*),*>)* )*;
    // Argument identifiers and types.
    $lhs: ident: $Lhs: ty, $rhs: ident: $Rhs: ty;
    // Actual implementation and lifetimes.

@@ -128,15 +128,15 @@ macro_rules! md_assign_impl_all(
    [ref] => $action_ref: expr;) => {
    md_assign_impl!(
        $Op, $op $(where N: $($ScalarBounds),*)*;
        ($R1, $C1),($R2, $C2) for $($Dims: $DimsBound $(<$BoundParam>)*),+
        $(where $ConstraintType: $ConstraintBound<$($ConstraintBoundParams $( = $EqBound )*),*>)*;
        ($R1, $C1),($R2, $C2) for $($Dims: $DimsBound $(<$($BoundParam),*>)*),+
        $(where $ConstraintType: $ConstraintBound $(<$($ConstraintBoundParams $( = $EqBound )*),*>)*)*;
        $lhs: $Lhs, $rhs: $Rhs;
        $action_val; );

    md_assign_impl!(
        $Op, $op $(where N: $($ScalarBounds),*)*;
        ($R1, $C1),($R2, $C2) for $($Dims: $DimsBound $(<$BoundParam>)*),+
        $(where $ConstraintType: $ConstraintBound<$($ConstraintBoundParams $( = $EqBound )*),*>)*;
        ($R1, $C1),($R2, $C2) for $($Dims: $DimsBound $(<$($BoundParam),*>)*),+
        $(where $ConstraintType: $ConstraintBound $(<$($ConstraintBoundParams $( = $EqBound )*),*>)*)*;
        $lhs: $Lhs, $rhs: &'b $Rhs;
        $action_ref; 'b);
}

@@ -146,14 +146,14 @@ macro_rules! md_assign_impl_all(
/// Macro for the implementation of addition and subtraction.
macro_rules! add_sub_impl(
    ($Op: ident, $op: ident, $bound: ident;
    ($R1: ty, $C1: ty),($R2: ty, $C2: ty) $(-> ($RRes: ty))* for $($Dims: ident: $DimsBound: ident),+;
    ($R1: ty, $C1: ty),($R2: ty, $C2: ty) $(-> ($RRes: ty))* for $($Dims: ident: $DimsBound: ident $(<$($BoundParam: ty),*>)*),+;
    $lhs: ident: $Lhs: ty, $rhs: ident: $Rhs: ty, Output = $Result: ty;
    $action: expr; $($lives: tt),*) => {
        impl<$($lives ,)* N $(, $Dims: $DimsBound)*, SA, SB> $Op<$Rhs> for $Lhs
        impl<$($lives ,)* N $(, $Dims: $DimsBound $(<$($BoundParam),*>)*)*> $Op<$Rhs> for $Lhs
            where N: Scalar + $bound,
                  SA: Storage<N, $R1, $C1>,
                  SB: Storage<N, $R2, $C2>,
                  SA::Alloc: SameShapeAllocator<N, $R1, $C1, $R2, $C2, SA>,
                  DefaultAllocator: Allocator<N, $R1, $C1> +
                                    Allocator<N, $R2, $C2> +
                                    SameShapeAllocator<N, $R1, $C1, $R2, $C2>,
                  ShapeConstraint: SameNumberOfRows<$R1, $R2 $(, Representative = $RRes)*> +
                                   SameNumberOfColumns<$C1, $C2> {
            type Output = $Result;

@@ -173,11 +173,10 @@ macro_rules! add_sub_assign_impl(
    ($R1: ty, $C1: ty),($R2: ty, $C2: ty) for $($Dims: ident: $DimsBound: ident),+;
    $lhs: ident: $Lhs: ty, $rhs: ident: $Rhs: ty;
    $action: expr; $($lives: tt),*) => {
        impl<$($lives ,)* N $(, $Dims: $DimsBound)*, SA, SB> $Op<$Rhs> for $Lhs
        impl<$($lives ,)* N $(, $Dims: $DimsBound)*> $Op<$Rhs> for $Lhs
            where N: Scalar + $bound,
                  SA: OwnedStorage<N, $R1, $C1>, // FIXME: this is too restrictive.
                  SB: Storage<N, $R2, $C2>,
                  SA::Alloc: OwnedAllocator<N, $R1, $C1, SA>,
                  DefaultAllocator: Allocator<N, $R1, $C1> +
                                    Allocator<N, $R2, $C2>,
                  ShapeConstraint: SameNumberOfRows<$R1, $R2> + SameNumberOfColumns<$C1, $C2> {
            #[inline]
            fn $op(&mut $lhs, $rhs: $Rhs) {

@@ -1,70 +1,65 @@
#[cfg(feature="arbitrary")]
use quickcheck::{Arbitrary, Gen};
use rand::{Rand, Rng};

#[cfg(feature = "serde-serialize")]
use serde::{Serialize, Serializer, Deserialize, Deserializer};
use serde;
use std::fmt;

use alga::general::Real;

use core::{Scalar, SquareMatrix, OwnedSquareMatrix, ColumnVector, OwnedColumnVector, MatrixArray};
use core::dimension::{U1, U3, U4};
use core::storage::{OwnedStorage, Storage, StorageMut};
use core::allocator::OwnedAllocator;
use core::{Matrix4, Vector, Vector3};
use core::dimension::U3;
use core::storage::Storage;
use core::helper;

use geometry::{PointBase, OwnedPoint};
use geometry::Point3;

/// A 3D orthographic projection stored as a homogeneous 4x4 matrix.
#[derive(Debug, Clone, Copy)] // FIXME: Hash
pub struct OrthographicBase<N: Scalar, S: Storage<N, U4, U4>> {
    matrix: SquareMatrix<N, U4, S>
pub struct Orthographic3<N: Real> {
    matrix: Matrix4<N>
}

#[cfg(feature = "serde-serialize")]
impl<N, S> Serialize for OrthographicBase<N, S>
    where N: Scalar,
          S: Storage<N, U4, U4>,
          SquareMatrix<N, U4, S>: Serialize,
{
    fn serialize<T>(&self, serializer: T) -> Result<T::Ok, T::Error>
        where T: Serializer
    {
        self.matrix.serialize(serializer)
impl<N: Real> Copy for Orthographic3<N> { }

impl<N: Real> Clone for Orthographic3<N> {
    #[inline]
    fn clone(&self) -> Self {
        Orthographic3::from_matrix_unchecked(self.matrix.clone())
    }
}

#[cfg(feature = "serde-serialize")]
impl<'de, N, S> Deserialize<'de> for OrthographicBase<N, S>
    where N: Scalar,
          S: Storage<N, U4, U4>,
          SquareMatrix<N, U4, S>: Deserialize<'de>,
{
    fn deserialize<T>(deserializer: T) -> Result<Self, T::Error>
        where T: Deserializer<'de>
    {
        SquareMatrix::deserialize(deserializer).map(|x| OrthographicBase { matrix: x })
impl<N: Real> fmt::Debug for Orthographic3<N> {
    fn fmt(&self, f: &mut fmt::Formatter) -> Result<(), fmt::Error> {
        self.matrix.fmt(f)
    }
}

/// A 3D orthographic projection stored as a static homogeneous 4x4 matrix.
pub type Orthographic3<N> = OrthographicBase<N, MatrixArray<N, U4, U4>>;

impl<N, S> Eq for OrthographicBase<N, S>
    where N: Scalar + Eq,
          S: Storage<N, U4, U4> { }

impl<N: Scalar, S: Storage<N, U4, U4>> PartialEq for OrthographicBase<N, S> {
impl<N: Real> PartialEq for Orthographic3<N> {
    #[inline]
    fn eq(&self, right: &Self) -> bool {
        self.matrix == right.matrix
    }
}

impl<N, S> OrthographicBase<N, S>
    where N: Real,
          S: OwnedStorage<N, U4, U4>,
          S::Alloc: OwnedAllocator<N, U4, U4, S> {
#[cfg(feature = "serde-serialize")]
impl<N: Real + serde::Serialize> serde::Serialize for Orthographic3<N> {
    fn serialize<S>(&self, serializer: S) -> Result<S::Ok, S::Error>
        where S: serde::Serializer {
        self.matrix.serialize(serializer)
    }
}

#[cfg(feature = "serde-serialize")]
impl<'a, N: Real + serde::Deserialize<'a>> serde::Deserialize<'a> for Orthographic3<N> {
    fn deserialize<Des>(deserializer: Des) -> Result<Self, Des::Error>
        where Des: serde::Deserializer<'a> {
        let matrix = Matrix4::<N>::deserialize(deserializer)?;

        Ok(Orthographic3::from_matrix_unchecked(matrix))
    }
}

impl<N: Real> Orthographic3<N> {
    /// Creates a new orthographic projection matrix.
    #[inline]
    pub fn new(left: N, right: N, bottom: N, top: N, znear: N, zfar: N) -> Self {

@@ -72,7 +67,7 @@ impl<N, S> OrthographicBase<N, S>
        assert!(bottom < top, "The top corner must be higher than the bottom corner.");
        assert!(znear < zfar, "The far plane must be farther than the near plane.");

        let matrix = SquareMatrix::<N, U4, S>::identity();
        let matrix = Matrix4::<N>::identity();
        let mut res = Self::from_matrix_unchecked(matrix);

        res.set_left_and_right(left, right);

@@ -87,8 +82,8 @@ impl<N, S> OrthographicBase<N, S>
    /// It is not checked whether or not the given matrix actually represents an orthographic
    /// projection.
    #[inline]
    pub fn from_matrix_unchecked(matrix: SquareMatrix<N, U4, S>) -> Self {
        OrthographicBase {
    pub fn from_matrix_unchecked(matrix: Matrix4<N>) -> Self {
        Orthographic3 {
            matrix: matrix
        }
    }

@@ -105,24 +100,10 @@ impl<N, S> OrthographicBase<N, S>

        Self::new(-width * half, width * half, -height * half, height * half, znear, zfar)
    }
}

impl<N: Real, S: Storage<N, U4, U4>> OrthographicBase<N, S> {
    /// A reference to the underlying homogeneous transformation matrix.
    #[inline]
    pub fn as_matrix(&self) -> &SquareMatrix<N, U4, S> {
        &self.matrix
    }

    /// Retrieves the underlying homogeneous matrix.
    #[inline]
    pub fn unwrap(self) -> SquareMatrix<N, U4, S> {
        self.matrix
    }

    /// Retrieves the inverse of the underlying homogeneous matrix.
    #[inline]
    pub fn inverse(&self) -> OwnedSquareMatrix<N, U4, S::Alloc> {
    pub fn inverse(&self) -> Matrix4<N> {
        let mut res = self.to_homogeneous();

        let inv_m11 = N::one() / self.matrix[(0, 0)];

@@ -142,10 +123,22 @@ impl<N: Real, S: Storage<N, U4, U4>> OrthographicBase<N, S> {

    /// Computes the corresponding homogeneous matrix.
    #[inline]
    pub fn to_homogeneous(&self) -> OwnedSquareMatrix<N, U4, S::Alloc> {
    pub fn to_homogeneous(&self) -> Matrix4<N> {
        self.matrix.clone_owned()
    }

    /// A reference to the underlying homogeneous transformation matrix.
    #[inline]
    pub fn as_matrix(&self) -> &Matrix4<N> {
        &self.matrix
    }

    /// Retrieves the underlying homogeneous matrix.
    #[inline]
    pub fn unwrap(self) -> Matrix4<N> {
        self.matrix
    }

    /// The smallest x-coordinate of the view cuboid.
    #[inline]
    pub fn left(&self) -> N {

@@ -185,10 +178,8 @@ impl<N: Real, S: Storage<N, U4, U4>> OrthographicBase<N, S> {
    // FIXME: when we get specialization, specialize the Mul impl instead.
    /// Projects a point. Faster than matrix multiplication.
    #[inline]
    pub fn project_point<SB>(&self, p: &PointBase<N, U3, SB>) -> OwnedPoint<N, U3, SB::Alloc>
        where SB: Storage<N, U3, U1> {

        OwnedPoint::<N, U3, SB::Alloc>::new(
    pub fn project_point(&self, p: &Point3<N>) -> Point3<N> {
        Point3::new(
            self.matrix[(0, 0)] * p[0] + self.matrix[(0, 3)],
            self.matrix[(1, 1)] * p[1] + self.matrix[(1, 3)],
            self.matrix[(2, 2)] * p[2] + self.matrix[(2, 3)]

@@ -197,10 +188,9 @@ impl<N: Real, S: Storage<N, U4, U4>> OrthographicBase<N, S> {

    /// Un-projects a point. Faster than multiplication by the underlying matrix inverse.
    #[inline]
    pub fn unproject_point<SB>(&self, p: &PointBase<N, U3, SB>) -> OwnedPoint<N, U3, SB::Alloc>
        where SB: Storage<N, U3, U1> {
    pub fn unproject_point(&self, p: &Point3<N>) -> Point3<N> {

        OwnedPoint::<N, U3, SB::Alloc>::new(
        Point3::new(
            (p[0] - self.matrix[(0, 3)]) / self.matrix[(0, 0)],
            (p[1] - self.matrix[(1, 3)]) / self.matrix[(1, 1)],
            (p[2] - self.matrix[(2, 3)]) / self.matrix[(2, 2)]

@@ -210,18 +200,16 @@ impl<N: Real, S: Storage<N, U4, U4>> OrthographicBase<N, S> {
    // FIXME: when we get specialization, specialize the Mul impl instead.
    /// Projects a vector. Faster than matrix multiplication.
    #[inline]
    pub fn project_vector<SB>(&self, p: &ColumnVector<N, U3, SB>) -> OwnedColumnVector<N, U3, SB::Alloc>
        where SB: Storage<N, U3, U1> {
    pub fn project_vector<SB>(&self, p: &Vector<N, U3, SB>) -> Vector3<N>
        where SB: Storage<N, U3> {

        OwnedColumnVector::<N, U3, SB::Alloc>::new(
        Vector3::new(
            self.matrix[(0, 0)] * p[0],
            self.matrix[(1, 1)] * p[1],
            self.matrix[(2, 2)] * p[2]
        )
    }
}
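`project_point` and `unproject_point` above are per-axis affine maps using only the diagonal and last-column entries of the matrix. Under the standard convention where the x range `[left, right]` maps to `[-1, 1]`, those entries are `2 / (right - left)` and `-(right + left) / (right - left)`. An illustrative, self-contained sketch of one axis (the exact values nalgebra stores are set by `set_left_and_right`, which is not shown in this hunk):

```rust
// Coefficients of the per-axis orthographic map sending [left, right] to [-1, 1],
// under the standard convention.
fn ortho_axis(left: f64, right: f64) -> (f64, f64) {
    let m00 = 2.0 / (right - left);             // diagonal entry
    let m03 = -(right + left) / (right - left); // last-column (translation) entry
    (m00, m03)
}

// Same shape as `project_point`: matrix[(0, 0)] * p[0] + matrix[(0, 3)].
fn project(m00: f64, m03: f64, x: f64) -> f64 {
    m00 * x + m03
}

// Same shape as `unproject_point`: (p[0] - matrix[(0, 3)]) / matrix[(0, 0)].
fn unproject(m00: f64, m03: f64, x: f64) -> f64 {
    (x - m03) / m00
}

fn main() {
    let (m00, m03) = ortho_axis(-2.0, 6.0);
    // The interval endpoints land on -1 and 1.
    assert_eq!(project(m00, m03, -2.0), -1.0);
    assert_eq!(project(m00, m03, 6.0), 1.0);
    // unproject is the exact inverse of project.
    let x = 3.5;
    assert!((unproject(m00, m03, project(m00, m03, x)) - x).abs() < 1e-12);
}
```

This is why projecting with these methods is cheaper than a full 4x4 matrix-vector product: only one multiply and one add per coordinate.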

impl<N: Real, S: StorageMut<N, U4, U4>> OrthographicBase<N, S> {
    /// Sets the smallest x-coordinate of the view cuboid.
    #[inline]
    pub fn set_left(&mut self, left: N) {

@@ -289,10 +277,7 @@ impl<N: Real, S: StorageMut<N, U4, U4>> OrthographicBase<N, S> {
    }
}

impl<N, S> Rand for OrthographicBase<N, S>
    where N: Real + Rand,
          S: OwnedStorage<N, U4, U4>,
          S::Alloc: OwnedAllocator<N, U4, U4, S> {
impl<N: Real + Rand> Rand for Orthographic3<N> {
    fn rand<R: Rng>(r: &mut R) -> Self {
        let left = Rand::rand(r);
        let right = helper::reject_rand(r, |x: &N| *x > left);

@@ -306,10 +291,8 @@ impl<N, S> Rand for OrthographicBase<N, S>
}

#[cfg(feature="arbitrary")]
impl<N, S> Arbitrary for OrthographicBase<N, S>
    where N: Real + Arbitrary,
          S: OwnedStorage<N, U4, U4> + Send,
          S::Alloc: OwnedAllocator<N, U4, U4, S> {
impl<N: Real + Arbitrary> Arbitrary for Orthographic3<N>
    where Matrix4<N>: Send {
    fn arbitrary<G: Gen>(g: &mut G) -> Self {
        let left = Arbitrary::arbitrary(g);
        let right = helper::reject(g, |x: &N| *x > left);

@@ -3,77 +3,71 @@ use quickcheck::{Arbitrary, Gen};
 use rand::{Rand, Rng};
 
 #[cfg(feature = "serde-serialize")]
-use serde::{Serialize, Serializer, Deserialize, Deserializer};
+use serde;
+use std::fmt;
 
 use alga::general::Real;
 
-use core::{Scalar, SquareMatrix, OwnedSquareMatrix, ColumnVector, OwnedColumnVector, MatrixArray};
-use core::dimension::{U1, U3, U4};
-use core::storage::{OwnedStorage, Storage, StorageMut};
-use core::allocator::OwnedAllocator;
+use core::{Scalar, Matrix4, Vector, Vector3};
+use core::dimension::U3;
+use core::storage::Storage;
 use core::helper;
 
-use geometry::{PointBase, OwnedPoint};
+use geometry::Point3;
 
 /// A 3D perspective projection stored as a homogeneous 4x4 matrix.
-#[derive(Debug, Clone, Copy)] // FIXME: Hash
-pub struct PerspectiveBase<N: Scalar, S: Storage<N, U4, U4>> {
-    matrix: SquareMatrix<N, U4, S>
+pub struct Perspective3<N: Scalar> {
+    matrix: Matrix4<N>
 }
 
-#[cfg(feature = "serde-serialize")]
-impl<N, S> Serialize for PerspectiveBase<N, S>
-    where N: Scalar,
-          S: Storage<N, U4, U4>,
-          SquareMatrix<N, U4, S>: Serialize,
-{
-    fn serialize<T>(&self, serializer: T) -> Result<T::Ok, T::Error>
-        where T: Serializer
-    {
-        self.matrix.serialize(serializer)
+impl<N: Real> Copy for Perspective3<N> { }
+
+impl<N: Real> Clone for Perspective3<N> {
+    #[inline]
+    fn clone(&self) -> Self {
+        Perspective3::from_matrix_unchecked(self.matrix.clone())
     }
 }
 
-#[cfg(feature = "serde-serialize")]
-impl<'de, N, S> Deserialize<'de> for PerspectiveBase<N, S>
-    where N: Scalar,
-          S: Storage<N, U4, U4>,
-          SquareMatrix<N, U4, S>: Deserialize<'de>,
-{
-    fn deserialize<T>(deserializer: T) -> Result<Self, T::Error>
-        where T: Deserializer<'de>
-    {
-        SquareMatrix::deserialize(deserializer).map(|x| PerspectiveBase { matrix: x })
+impl<N: Real> fmt::Debug for Perspective3<N> {
+    fn fmt(&self, f: &mut fmt::Formatter) -> Result<(), fmt::Error> {
+        self.matrix.fmt(f)
     }
 }
 
-/// A 3D perspective projection stored as a static homogeneous 4x4 matrix.
-pub type Perspective3<N> = PerspectiveBase<N, MatrixArray<N, U4, U4>>;
-
-impl<N, S> Eq for PerspectiveBase<N, S>
-    where N: Scalar + Eq,
-          S: Storage<N, U4, U4> { }
-
-impl<N, S> PartialEq for PerspectiveBase<N, S>
-    where N: Scalar,
-          S: Storage<N, U4, U4> {
+impl<N: Real> PartialEq for Perspective3<N> {
     #[inline]
     fn eq(&self, right: &Self) -> bool {
         self.matrix == right.matrix
     }
 }
 
-impl<N, S> PerspectiveBase<N, S>
-    where N: Real,
-          S: OwnedStorage<N, U4, U4>,
-          S::Alloc: OwnedAllocator<N, U4, U4, S> {
+#[cfg(feature = "serde-serialize")]
+impl<N: Real + serde::Serialize> serde::Serialize for Perspective3<N> {
+    fn serialize<S>(&self, serializer: S) -> Result<S::Ok, S::Error>
+        where S: serde::Serializer {
+        self.matrix.serialize(serializer)
+    }
+}
+
+#[cfg(feature = "serde-serialize")]
+impl<'a, N: Real + serde::Deserialize<'a>> serde::Deserialize<'a> for Perspective3<N> {
+    fn deserialize<Des>(deserializer: Des) -> Result<Self, Des::Error>
+        where Des: serde::Deserializer<'a> {
+        let matrix = Matrix4::<N>::deserialize(deserializer)?;
+
+        Ok(Perspective3::from_matrix_unchecked(matrix))
+    }
+}
+
+impl<N: Real> Perspective3<N> {
     /// Creates a new perspective matrix from the aspect ratio, y field of view, and near/far planes.
     pub fn new(aspect: N, fovy: N, znear: N, zfar: N) -> Self {
         assert!(!relative_eq!(zfar - znear, N::zero()), "The near-plane and far-plane must not be superimposed.");
         assert!(!relative_eq!(aspect, N::zero()), "The aspect ratio must not be zero.");
 
-        let matrix  = SquareMatrix::<N, U4, S>::identity();
-        let mut res = PerspectiveBase::from_matrix_unchecked(matrix);
+        let matrix  = Matrix4::identity();
+        let mut res = Perspective3::from_matrix_unchecked(matrix);
 
         res.set_fovy(fovy);
         res.set_aspect(aspect);
@@ -91,32 +85,15 @@ impl<N, S> PerspectiveBase<N, S>
     /// It is not checked whether or not the given matrix actually represents a perspective
     /// projection.
     #[inline]
-    pub fn from_matrix_unchecked(matrix: SquareMatrix<N, U4, S>) -> Self {
-        PerspectiveBase {
+    pub fn from_matrix_unchecked(matrix: Matrix4<N>) -> Self {
+        Perspective3 {
             matrix: matrix
         }
     }
-}
-
-impl<N, S> PerspectiveBase<N, S>
-    where N: Real,
-          S: Storage<N, U4, U4> {
-
-    /// A reference to the underlying homogeneous transformation matrix.
-    #[inline]
-    pub fn as_matrix(&self) -> &SquareMatrix<N, U4, S> {
-        &self.matrix
-    }
-
-    /// Retrieves the underlying homogeneous matrix.
-    #[inline]
-    pub fn unwrap(self) -> SquareMatrix<N, U4, S> {
-        self.matrix
-    }
 
     /// Retrieves the inverse of the underlying homogeneous matrix.
     #[inline]
-    pub fn inverse(&self) -> OwnedSquareMatrix<N, U4, S::Alloc> {
+    pub fn inverse(&self) -> Matrix4<N> {
         let mut res = self.to_homogeneous();
 
         res[(0, 0)] = N::one() / self.matrix[(0, 0)];
@@ -135,10 +112,22 @@ impl<N, S> PerspectiveBase<N, S>
 
     /// Computes the corresponding homogeneous matrix.
     #[inline]
-    pub fn to_homogeneous(&self) -> OwnedSquareMatrix<N, U4, S::Alloc> {
+    pub fn to_homogeneous(&self) -> Matrix4<N> {
         self.matrix.clone_owned()
     }
 
+    /// A reference to the underlying homogeneous transformation matrix.
+    #[inline]
+    pub fn as_matrix(&self) -> &Matrix4<N> {
+        &self.matrix
+    }
+
+    /// Retrieves the underlying homogeneous matrix.
+    #[inline]
+    pub fn unwrap(self) -> Matrix4<N> {
+        self.matrix
+    }
+
     /// Gets the `width / height` aspect ratio of the view frustum.
     #[inline]
     pub fn aspect(&self) -> N {
@@ -174,11 +163,9 @@ impl<N, S> PerspectiveBase<N, S>
     // FIXME: when we get specialization, specialize the Mul impl instead.
     /// Projects a point. Faster than matrix multiplication.
     #[inline]
-    pub fn project_point<SB>(&self, p: &PointBase<N, U3, SB>) -> OwnedPoint<N, U3, SB::Alloc>
-        where SB: Storage<N, U3, U1> {
-
+    pub fn project_point(&self, p: &Point3<N>) -> Point3<N> {
         let inverse_denom = -N::one() / p[2];
-        OwnedPoint::<N, U3, SB::Alloc>::new(
+        Point3::new(
             self.matrix[(0, 0)] * p[0] * inverse_denom,
             self.matrix[(1, 1)] * p[1] * inverse_denom,
             (self.matrix[(2, 2)] * p[2] + self.matrix[(2, 3)]) * inverse_denom
@@ -187,12 +174,10 @@ impl<N, S> PerspectiveBase<N, S>
 
     /// Un-projects a point. Faster than multiplication by the matrix inverse.
     #[inline]
-    pub fn unproject_point<SB>(&self, p: &PointBase<N, U3, SB>) -> OwnedPoint<N, U3, SB::Alloc>
-        where SB: Storage<N, U3, U1> {
-
+    pub fn unproject_point(&self, p: &Point3<N>) -> Point3<N> {
         let inverse_denom = self.matrix[(2, 3)] / (p[2] + self.matrix[(2, 2)]);
 
-        OwnedPoint::<N, U3, SB::Alloc>::new(
+        Point3::new(
             p[0] * inverse_denom / self.matrix[(0, 0)],
             p[1] * inverse_denom / self.matrix[(1, 1)],
             -inverse_denom
@@ -202,22 +187,17 @@ impl<N, S> PerspectiveBase<N, S>
     // FIXME: when we get specialization, specialize the Mul impl instead.
     /// Projects a vector. Faster than matrix multiplication.
     #[inline]
-    pub fn project_vector<SB>(&self, p: &ColumnVector<N, U3, SB>) -> OwnedColumnVector<N, U3, SB::Alloc>
-        where SB: Storage<N, U3, U1> {
+    pub fn project_vector<SB>(&self, p: &Vector<N, U3, SB>) -> Vector3<N>
+        where SB: Storage<N, U3> {
 
         let inverse_denom = -N::one() / p[2];
-        OwnedColumnVector::<N, U3, SB::Alloc>::new(
+        Vector3::new(
             self.matrix[(0, 0)] * p[0] * inverse_denom,
             self.matrix[(1, 1)] * p[1] * inverse_denom,
             self.matrix[(2, 2)]
         )
     }
-}
-
 
-impl<N, S> PerspectiveBase<N, S>
-    where N: Real,
-          S: StorageMut<N, U4, U4> {
     /// Updates this perspective matrix with a new `width / height` aspect ratio of the view
     /// frustum.
     #[inline]
@@ -256,10 +236,7 @@ impl<N, S> PerspectiveBase<N, S>
     }
 }
 
-impl<N, S> Rand for PerspectiveBase<N, S>
-    where N: Real + Rand,
-          S: OwnedStorage<N, U4, U4>,
-          S::Alloc: OwnedAllocator<N, U4, U4, S> {
+impl<N: Real + Rand> Rand for Perspective3<N> {
     fn rand<R: Rng>(r: &mut R) -> Self {
         let znear = Rand::rand(r);
         let zfar  = helper::reject_rand(r, |&x: &N| !(x - znear).is_zero());
@@ -270,10 +247,7 @@ impl<N, S> Rand for PerspectiveBase<N, S>
 }
 
 #[cfg(feature="arbitrary")]
-impl<N, S> Arbitrary for PerspectiveBase<N, S>
-    where N: Real + Arbitrary,
-          S: OwnedStorage<N, U4, U4> + Send,
-          S::Alloc: OwnedAllocator<N, U4, U4, S> {
+impl<N: Real + Arbitrary> Arbitrary for Perspective3<N> {
     fn arbitrary<G: Gen>(g: &mut G) -> Self {
         let znear = Arbitrary::arbitrary(g);
         let zfar  = helper::reject(g, |&x: &N| !(x - znear).is_zero());
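The projection and un-projection formulas used by `project_point` and `unproject_point` above can be checked in isolation. The following is a self-contained sketch (no nalgebra dependency): plain `f64` arrays stand in for `Point3`, and `m00`, `m11`, `m22`, `m23` stand in for `self.matrix[(0, 0)]` and friends; the matrix entries in `main` are illustrative values, not taken from the commit.

```rust
// Stand-in for Perspective3::project_point: perspective divide by -z.
fn project(m: (f64, f64, f64, f64), p: [f64; 3]) -> [f64; 3] {
    let (m00, m11, m22, m23) = m;
    let inv = -1.0 / p[2];
    [m00 * p[0] * inv, m11 * p[1] * inv, (m22 * p[2] + m23) * inv]
}

// Stand-in for Perspective3::unproject_point: inverts the division above.
fn unproject(m: (f64, f64, f64, f64), q: [f64; 3]) -> [f64; 3] {
    let (m00, m11, m22, m23) = m;
    let inv = m23 / (q[2] + m22);
    [q[0] * inv / m00, q[1] * inv / m11, -inv]
}

fn main() {
    // m22/m23 computed from the near/far planes as a typical perspective
    // matrix would; m00 = m11 = 1 corresponds to fovy = 90° at aspect 1.
    let (znear, zfar) = (0.1, 100.0);
    let m22 = (zfar + znear) / (znear - zfar);
    let m23 = 2.0 * zfar * znear / (znear - zfar);
    let m = (1.0, 1.0, m22, m23);

    // Round-tripping a point in front of the camera (z < 0) recovers it.
    let p = [1.0, 2.0, -3.0];
    let r = unproject(m, project(m, p));
    assert!((0..3).all(|i| (r[i] - p[i]).abs() < 1e-9));
    println!("roundtrip ok");
}
```

The round trip is exact up to floating-point error because the scalar `inverse_denom` computed during un-projection is algebraically `-p[2]`, which cancels the `-1 / p[2]` applied during projection.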
@@ -1,89 +1,81 @@
 use num::One;
+use std::hash;
 use std::fmt;
 use std::cmp::Ordering;
 use approx::ApproxEq;
 
 #[cfg(feature = "serde-serialize")]
-use serde::{Serialize, Serializer, Deserialize, Deserializer};
+use serde;
 
 #[cfg(feature = "abomonation-serialize")]
 use abomonation::Abomonation;
 
-use core::{Scalar, ColumnVector, OwnedColumnVector};
+use core::{DefaultAllocator, Scalar, VectorN};
 use core::iter::{MatrixIter, MatrixIterMut};
 use core::dimension::{DimName, DimNameSum, DimNameAdd, U1};
-use core::storage::{Storage, StorageMut, MulStorage};
-use core::allocator::{Allocator, SameShapeR};
-
-// XXX Bad name: we can't even add points…
-/// The type of the result of the sum of a point with a vector.
-pub type PointSum<N, D1, D2, SA> =
-    PointBase<N, SameShapeR<D1, D2>,
-              <<SA as Storage<N, D1, U1>>::Alloc as Allocator<N, SameShapeR<D1, D2>, U1>>::Buffer>;
-
-/// The type of the result of the multiplication of a point by a matrix.
-pub type PointMul<N, R1, C1, SA> = PointBase<N, R1, MulStorage<N, R1, C1, U1, SA>>;
-
-/// A point with an owned storage.
-pub type OwnedPoint<N, D, A> = PointBase<N, D, <A as Allocator<N, D, U1>>::Buffer>;
+use core::allocator::Allocator;
 
 /// A point in an n-dimensional euclidean space.
 #[repr(C)]
-#[derive(Hash, Debug)]
-pub struct PointBase<N: Scalar, D: DimName, S: Storage<N, D, U1>> {
+#[derive(Debug)]
+pub struct Point<N: Scalar, D: DimName>
+    where DefaultAllocator: Allocator<N, D> {
     /// The coordinates of this point, i.e., the shift from the origin.
-    pub coords: ColumnVector<N, D, S>
+    pub coords: VectorN<N, D>
 }
 
-impl<N, D, S> Copy for PointBase<N, D, S>
-    where N: Scalar,
-          D: DimName,
-          S: Storage<N, D, U1> + Copy { }
+impl<N: Scalar + hash::Hash, D: DimName + hash::Hash> hash::Hash for Point<N, D>
+    where DefaultAllocator: Allocator<N, D>,
+          <DefaultAllocator as Allocator<N, D>>::Buffer: hash::Hash {
+    fn hash<H: hash::Hasher>(&self, state: &mut H) {
+        self.coords.hash(state)
+    }
+}
 
-impl<N, D, S> Clone for PointBase<N, D, S>
-    where N: Scalar,
-          D: DimName,
-          S: Storage<N, D, U1> + Clone {
+impl<N: Scalar, D: DimName> Copy for Point<N, D>
+    where DefaultAllocator: Allocator<N, D>,
+          <DefaultAllocator as Allocator<N, D>>::Buffer: Copy { }
+
+impl<N: Scalar, D: DimName> Clone for Point<N, D>
+    where DefaultAllocator: Allocator<N, D>,
+          <DefaultAllocator as Allocator<N, D>>::Buffer: Clone {
     #[inline]
     fn clone(&self) -> Self {
-        PointBase::from_coordinates(self.coords.clone())
+        Point::from_coordinates(self.coords.clone())
     }
 }
 
 #[cfg(feature = "serde-serialize")]
-impl<N, D, S> Serialize for PointBase<N, D, S>
-    where N: Scalar,
-          D: DimName,
-          S: Storage<N, D, U1>,
-          ColumnVector<N, D, S>: Serialize,
-{
-    fn serialize<T>(&self, serializer: T) -> Result<T::Ok, T::Error>
-        where T: Serializer
-    {
+impl<N: Scalar, D: DimName> serde::Serialize for Point<N, D>
+    where DefaultAllocator: Allocator<N, D>,
+          <DefaultAllocator as Allocator<N, D>>::Buffer: serde::Serialize {
+
+    fn serialize<S>(&self, serializer: S) -> Result<S::Ok, S::Error>
+        where S: serde::Serializer {
         self.coords.serialize(serializer)
     }
 }
 
 #[cfg(feature = "serde-serialize")]
-impl<'de, N, D, S> Deserialize<'de> for PointBase<N, D, S>
-    where N: Scalar,
-          D: DimName,
-          S: Storage<N, D, U1>,
-          ColumnVector<N, D, S>: Deserialize<'de>,
-{
-    fn deserialize<T>(deserializer: T) -> Result<Self, T::Error>
-        where T: Deserializer<'de>
-    {
-        ColumnVector::deserialize(deserializer).map(|x| PointBase { coords: x })
+impl<'a, N: Scalar, D: DimName> serde::Deserialize<'a> for Point<N, D>
+    where DefaultAllocator: Allocator<N, D>,
+          <DefaultAllocator as Allocator<N, D>>::Buffer: serde::Deserialize<'a> {
+
+    fn deserialize<Des>(deserializer: Des) -> Result<Self, Des::Error>
+        where Des: serde::Deserializer<'a> {
+        let coords = VectorN::<N, D>::deserialize(deserializer)?;
+
+        Ok(Point::from_coordinates(coords))
     }
 }
 
 
 #[cfg(feature = "abomonation-serialize")]
-impl<N, D, S> Abomonation for PointBase<N, D, S>
+impl<N, D> Abomonation for PointBase<N, D>
     where N: Scalar,
           D: DimName,
-          S: Storage<N, D, U1>,
-          ColumnVector<N, D, S>: Abomonation
+          ColumnVector<N, D>: Abomonation,
+          DefaultAllocator: Allocator<N, D>
 {
     unsafe fn entomb(&self, writer: &mut Vec<u8>) {
         self.coords.entomb(writer)
@@ -98,27 +90,38 @@ impl<N, D, S> Abomonation for PointBase<N, D, S>
     }
 }
 
-impl<N: Scalar, D: DimName, S: Storage<N, D, U1>> PointBase<N, D, S> {
-    /// Creates a new point with the given coordinates.
-    #[inline]
-    pub fn from_coordinates(coords: ColumnVector<N, D, S>) -> PointBase<N, D, S> {
-        PointBase {
-            coords: coords
-        }
-    }
-}
-
-impl<N: Scalar, D: DimName, S: Storage<N, D, U1>> PointBase<N, D, S> {
-    /// Moves this point into one that owns its data.
-    #[inline]
-    pub fn into_owned(self) -> OwnedPoint<N, D, S::Alloc> {
-        PointBase::from_coordinates(self.coords.into_owned())
-    }
+impl<N: Scalar, D: DimName> Point<N, D>
+    where DefaultAllocator: Allocator<N, D> {
 
     /// Clones this point into one that owns its data.
     #[inline]
-    pub fn clone_owned(&self) -> OwnedPoint<N, D, S::Alloc> {
-        PointBase::from_coordinates(self.coords.clone_owned())
+    pub fn clone(&self) -> Point<N, D> {
+        Point::from_coordinates(self.coords.clone_owned())
     }
 
+    /// Converts this point into a vector in homogeneous coordinates, i.e., appends a `1` at the
+    /// end of it.
+    #[inline]
+    pub fn to_homogeneous(&self) -> VectorN<N, DimNameSum<D, U1>>
+        where N: One,
+              D: DimNameAdd<U1>,
+              DefaultAllocator: Allocator<N, DimNameSum<D, U1>> {
+
+        let mut res = unsafe {
+            VectorN::<_, DimNameSum<D, U1>>::new_uninitialized()
+        };
+        res.fixed_slice_mut::<D, U1>(0, 0).copy_from(&self.coords);
+        res[(D::dim(), 0)] = N::one();
+
+        res
+    }
+
+    /// Creates a new point with the given coordinates.
+    #[inline]
+    pub fn from_coordinates(coords: VectorN<N, D>) -> Point<N, D> {
+        Point {
+            coords: coords
+        }
+    }
+
     /// The dimension of this point.
@@ -136,44 +139,26 @@ impl<N: Scalar, D: DimName, S: Storage<N, D, U1>> PointBase<N, D, S> {
 
     /// Iterates through this point coordinates.
     #[inline]
-    pub fn iter(&self) -> MatrixIter<N, D, U1, S> {
+    pub fn iter(&self) -> MatrixIter<N, D, U1, <DefaultAllocator as Allocator<N, D>>::Buffer> {
         self.coords.iter()
     }
 
     /// Gets a reference to the i-th element of this point without bound-checking.
     #[inline]
     pub unsafe fn get_unchecked(&self, i: usize) -> &N {
-        self.coords.get_unchecked(i, 0)
+        self.coords.vget_unchecked(i)
     }
 
-
-    /// Converts this point into a vector in homogeneous coordinates, i.e., appends a `1` at the
-    /// end of it.
-    #[inline]
-    pub fn to_homogeneous(&self) -> OwnedColumnVector<N, DimNameSum<D, U1>, S::Alloc>
-        where N: One,
-              D: DimNameAdd<U1>,
-              S::Alloc: Allocator<N, DimNameSum<D, U1>, U1> {
-
-        let mut res = unsafe { OwnedColumnVector::<N, _, S::Alloc>::new_uninitialized() };
-        res.fixed_slice_mut::<D, U1>(0, 0).copy_from(&self.coords);
-        res[(D::dim(), 0)] = N::one();
-
-        res
-    }
-}
-
-impl<N: Scalar, D: DimName, S: StorageMut<N, D, U1>> PointBase<N, D, S> {
     /// Mutably iterates through this point coordinates.
     #[inline]
-    pub fn iter_mut(&mut self) -> MatrixIterMut<N, D, U1, S> {
+    pub fn iter_mut(&mut self) -> MatrixIterMut<N, D, U1, <DefaultAllocator as Allocator<N, D>>::Buffer> {
         self.coords.iter_mut()
     }
 
     /// Gets a mutable reference to the i-th element of this point without bound-checking.
     #[inline]
     pub unsafe fn get_unchecked_mut(&mut self, i: usize) -> &mut N {
-        self.coords.get_unchecked_mut(i, 0)
+        self.coords.vget_unchecked_mut(i)
     }
 
     /// Swaps two entries without bound-checking.
@@ -183,9 +168,8 @@ impl<N: Scalar, D: DimName, S: StorageMut<N, D, U1>> PointBase<N, D, S> {
     }
 }
 
-impl<N, D: DimName, S> ApproxEq for PointBase<N, D, S>
-    where N: Scalar + ApproxEq,
-          S: Storage<N, D, U1>,
+impl<N: Scalar + ApproxEq, D: DimName> ApproxEq for Point<N, D>
+    where DefaultAllocator: Allocator<N, D>,
           N::Epsilon: Copy {
     type Epsilon = N::Epsilon;
 
@@ -215,22 +199,19 @@ impl<N, D: DimName, S> ApproxEq for PointBase<N, D, S>
     }
 }
 
-impl<N, D: DimName, S> Eq for PointBase<N, D, S>
-    where N: Scalar + Eq,
-          S: Storage<N, D, U1> { }
+impl<N: Scalar + Eq, D: DimName> Eq for Point<N, D>
+    where DefaultAllocator: Allocator<N, D> { }
 
-impl<N, D: DimName, S> PartialEq for PointBase<N, D, S>
-    where N: Scalar,
-          S: Storage<N, D, U1> {
+impl<N: Scalar, D: DimName> PartialEq for Point<N, D>
+    where DefaultAllocator: Allocator<N, D> {
     #[inline]
     fn eq(&self, right: &Self) -> bool {
         self.coords == right.coords
     }
 }
 
-impl<N, D: DimName, S> PartialOrd for PointBase<N, D, S>
-    where N: Scalar + PartialOrd,
-          S: Storage<N, D, U1> {
+impl<N: Scalar + PartialOrd, D: DimName> PartialOrd for Point<N, D>
+    where DefaultAllocator: Allocator<N, D> {
     #[inline]
     fn partial_cmp(&self, other: &Self) -> Option<Ordering> {
         self.coords.partial_cmp(&other.coords)
@@ -262,9 +243,8 @@ impl<N, D: DimName, S> PartialOrd for PointBase<N, D, S>
  * Display
  *
  */
-impl<N, D: DimName, S> fmt::Display for PointBase<N, D, S>
-    where N: Scalar + fmt::Display,
-          S: Storage<N, D, U1> {
+impl<N: Scalar + fmt::Display, D: DimName> fmt::Display for Point<N, D>
+    where DefaultAllocator: Allocator<N, D> {
     fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
         try!(write!(f, "{{"));
@@ -1,26 +1,22 @@
 use alga::general::{Field, Real, MeetSemilattice, JoinSemilattice, Lattice};
 use alga::linear::{AffineSpace, EuclideanSpace};
 
-use core::{ColumnVector, Scalar};
-use core::dimension::{DimName, U1};
-use core::storage::OwnedStorage;
-use core::allocator::OwnedAllocator;
+use core::{DefaultAllocator, Scalar, VectorN};
+use core::dimension::DimName;
+use core::allocator::Allocator;
 
-use geometry::PointBase;
+use geometry::Point;
 
 
-impl<N, D: DimName, S> AffineSpace for PointBase<N, D, S>
-    where N: Scalar + Field,
-          S: OwnedStorage<N, D, U1>,
-          S::Alloc: OwnedAllocator<N, D, U1, S> {
-    type Translation = ColumnVector<N, D, S>;
+impl<N: Scalar + Field, D: DimName> AffineSpace for Point<N, D>
+    where DefaultAllocator: Allocator<N, D> {
+    type Translation = VectorN<N, D>;
 }
 
-impl<N, D: DimName, S> EuclideanSpace for PointBase<N, D, S>
-    where N: Real,
-          S: OwnedStorage<N, D, U1>,
-          S::Alloc: OwnedAllocator<N, D, U1, S> {
-    type Coordinates = ColumnVector<N, D, S>;
+impl<N: Real, D: DimName> EuclideanSpace for Point<N, D>
+    where DefaultAllocator: Allocator<N, D> {
+    type Coordinates = VectorN<N, D>;
     type Real = N;
 
     #[inline]
@@ -49,35 +45,32 @@ impl<N, D: DimName, S> EuclideanSpace for PointBase<N, D, S>
  * Ordering
  *
  */
-impl<N, D: DimName, S> MeetSemilattice for PointBase<N, D, S>
+impl<N, D: DimName> MeetSemilattice for Point<N, D>
     where N: Scalar + MeetSemilattice,
-          S: OwnedStorage<N, D, U1>,
-          S::Alloc: OwnedAllocator<N, D, U1, S> {
+          DefaultAllocator: Allocator<N, D> {
     #[inline]
     fn meet(&self, other: &Self) -> Self {
-        PointBase::from_coordinates(self.coords.meet(&other.coords))
+        Point::from_coordinates(self.coords.meet(&other.coords))
     }
 }
 
-impl<N, D: DimName, S> JoinSemilattice for PointBase<N, D, S>
+impl<N, D: DimName> JoinSemilattice for Point<N, D>
     where N: Scalar + JoinSemilattice,
-          S: OwnedStorage<N, D, U1>,
-          S::Alloc: OwnedAllocator<N, D, U1, S> {
+          DefaultAllocator: Allocator<N, D> {
     #[inline]
     fn join(&self, other: &Self) -> Self {
-        PointBase::from_coordinates(self.coords.join(&other.coords))
+        Point::from_coordinates(self.coords.join(&other.coords))
     }
 }
 
 
-impl<N, D: DimName, S> Lattice for PointBase<N, D, S>
+impl<N, D: DimName> Lattice for Point<N, D>
     where N: Scalar + Lattice,
-          S: OwnedStorage<N, D, U1>,
-          S::Alloc: OwnedAllocator<N, D, U1, S> {
+          DefaultAllocator: Allocator<N, D> {
     #[inline]
     fn meet_join(&self, other: &Self) -> (Self, Self) {
         let (meet, join) = self.coords.meet_join(&other.coords);
 
-        (PointBase::from_coordinates(meet), PointBase::from_coordinates(join))
+        (Point::from_coordinates(meet), Point::from_coordinates(join))
     }
 }
@@ -1,10 +1,6 @@
-use core::MatrixArray;
 use core::dimension::{U1, U2, U3, U4, U5, U6};
 
-use geometry::PointBase;
-
-/// A statically sized D-dimensional column point.
-pub type Point<N, D> = PointBase<N, D, MatrixArray<N, D, U1>>;
+use geometry::Point;
 
 /// A statically sized 1-dimensional column point.
 pub type Point1<N> = Point<N, U1>;
@@ -5,28 +5,25 @@ use rand::{Rand, Rng};
 use num::{Zero, One, Bounded};
 
 use alga::general::ClosedDiv;
-use core::{Scalar, ColumnVector};
-use core::storage::{Storage, OwnedStorage};
-use core::allocator::{Allocator, OwnedAllocator};
+use core::{DefaultAllocator, Scalar, VectorN};
+use core::allocator::Allocator;
 use core::dimension::{DimName, DimNameAdd, DimNameSum, U1, U2, U3, U4, U5, U6};
 
-use geometry::PointBase;
+use geometry::Point;
 
-impl<N, D: DimName, S> PointBase<N, D, S>
-    where N: Scalar,
-          S: OwnedStorage<N, D, U1>,
-          S::Alloc: OwnedAllocator<N, D, U1, S> {
+impl<N: Scalar, D: DimName> Point<N, D>
+    where DefaultAllocator: Allocator<N, D> {
     /// Creates a new point with uninitialized coordinates.
     #[inline]
     pub unsafe fn new_uninitialized() -> Self {
-        Self::from_coordinates(ColumnVector::<_, D, _>::new_uninitialized())
+        Self::from_coordinates(VectorN::new_uninitialized())
     }
 
     /// Creates a new point with all coordinates equal to zero.
     #[inline]
     pub fn origin() -> Self
         where N: Zero {
-        Self::from_coordinates(ColumnVector::<_, D, _>::from_element(N::zero()))
+        Self::from_coordinates(VectorN::from_element(N::zero()))
     }
 
     /// Creates a new point from its homogeneous vector representation.
@@ -34,11 +31,10 @@ impl<N, D: DimName, S> PointBase<N, D, S>
     /// In practice, this builds a D-dimensional point with the same first D components as `v`
     /// divided by the last component of `v`. Returns `None` if this divisor is zero.
     #[inline]
-    pub fn from_homogeneous<SB>(v: ColumnVector<N, DimNameSum<D, U1>, SB>) -> Option<Self>
+    pub fn from_homogeneous(v: VectorN<N, DimNameSum<D, U1>>) -> Option<Self>
         where N: Scalar + Zero + One + ClosedDiv,
               D: DimNameAdd<U1>,
-              SB: Storage<N, DimNameSum<D, U1>, U1, Alloc = S::Alloc>,
-              S::Alloc: Allocator<N, DimNameSum<D, U1>, U1> {
+              DefaultAllocator: Allocator<N, DimNameSum<D, U1>> {
 
         if !v[D::dim()].is_zero() {
             let coords = v.fixed_slice::<D, U1>(0, 0) / v[D::dim()];
@@ -56,39 +52,34 @@ impl<N, D: DimName, S> PointBase<N, D, S>
  * Traits that build points.
  *
  */
-impl<N, D: DimName, S> Bounded for PointBase<N, D, S>
-    where N: Scalar + Bounded,
-          S: OwnedStorage<N, D, U1>,
-          S::Alloc: OwnedAllocator<N, D, U1, S> {
+impl<N: Scalar + Bounded, D: DimName> Bounded for Point<N, D>
+    where DefaultAllocator: Allocator<N, D> {
     #[inline]
     fn max_value() -> Self {
-        Self::from_coordinates(ColumnVector::max_value())
+        Self::from_coordinates(VectorN::max_value())
     }
 
     #[inline]
     fn min_value() -> Self {
-        Self::from_coordinates(ColumnVector::min_value())
+        Self::from_coordinates(VectorN::min_value())
     }
 }
 
-impl<N, D: DimName, S> Rand for PointBase<N, D, S>
-    where N: Scalar + Rand,
-          S: OwnedStorage<N, D, U1>,
-          S::Alloc: OwnedAllocator<N, D, U1, S> {
+impl<N: Scalar + Rand, D: DimName> Rand for Point<N, D>
+    where DefaultAllocator: Allocator<N, D> {
     #[inline]
     fn rand<G: Rng>(rng: &mut G) -> Self {
-        PointBase::from_coordinates(rng.gen())
+        Point::from_coordinates(rng.gen())
    }
 }
 
 #[cfg(feature="arbitrary")]
-impl<N, D: DimName, S> Arbitrary for PointBase<N, D, S>
-    where N: Scalar + Arbitrary + Send,
-          S: OwnedStorage<N, D, U1> + Send,
-          S::Alloc: OwnedAllocator<N, D, U1, S> {
+impl<N: Scalar + Arbitrary + Send, D: DimName> Arbitrary for Point<N, D>
+    where DefaultAllocator: Allocator<N, D>,
+          <DefaultAllocator as Allocator<N, D>>::Buffer: Send {
     #[inline]
     fn arbitrary<G: Gen>(g: &mut G) -> Self {
-        PointBase::from_coordinates(ColumnVector::arbitrary(g))
+        Point::from_coordinates(VectorN::arbitrary(g))
     }
 }
 
@@ -99,13 +90,11 @@ impl<N, D: DimName, S> Arbitrary for PointBase<N, D, S>
  */
 macro_rules! componentwise_constructors_impl(
     ($($D: ty, $($args: ident:$irow: expr),*);* $(;)*) => {$(
-        impl<N, S> PointBase<N, $D, S>
-            where N: Scalar,
-                  S: OwnedStorage<N, $D, U1>,
-                  S::Alloc: OwnedAllocator<N, $D, U1, S> {
+        impl<N: Scalar> Point<N, $D>
+            where DefaultAllocator: Allocator<N, $D> {
             /// Initializes this point from its components.
             #[inline]
-            pub fn new($($args: N),*) -> PointBase<N, $D, S> {
+            pub fn new($($args: N),*) -> Point<N, $D> {
                 unsafe {
                     let mut res = Self::new_uninitialized();
                     $( *res.get_unchecked_mut($irow) = $args; )*
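The `to_homogeneous` / `from_homogeneous` pair in the point diffs above follows the usual homogeneous-coordinates convention: appending a `1` on the way up, and dividing by the last component (and dropping it) on the way down. A self-contained sketch of that convention for the 3-dimensional case, using plain arrays rather than the crate's `Point`/`VectorN` types:

```rust
// Append a 1 to lift a 3D point into homogeneous coordinates,
// mirroring Point::to_homogeneous.
fn to_homogeneous(p: [f64; 3]) -> [f64; 4] {
    [p[0], p[1], p[2], 1.0]
}

// Divide by the last component and drop it, mirroring
// Point::from_homogeneous; a zero last component yields None.
fn from_homogeneous(v: [f64; 4]) -> Option<[f64; 3]> {
    if v[3] == 0.0 {
        None // a zero last component encodes a direction, not a point
    } else {
        Some([v[0] / v[3], v[1] / v[3], v[2] / v[3]])
    }
}

fn main() {
    let p = [1.0, 2.0, 3.0];
    // Round trip: lifting then lowering recovers the original point.
    assert_eq!(from_homogeneous(to_homogeneous(p)), Some(p));
    // Any nonzero scale of a homogeneous vector maps to the same point.
    assert_eq!(from_homogeneous([2.0, 4.0, 6.0, 2.0]), Some(p));
    // A zero divisor is rejected.
    assert_eq!(from_homogeneous([1.0, 2.0, 3.0, 0.0]), None);
    println!("ok");
}
```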