Merge branch 'dev' into bytemuck

This commit is contained in:
Christopher Durham 2021-07-22 17:47:57 -05:00 committed by GitHub
commit 07c3fbc191
No known key found for this signature in database
GPG Key ID: 4AEE18F83AFDEB23
66 changed files with 674 additions and 428 deletions

View File

@ -4,6 +4,26 @@ documented here.
This project adheres to [Semantic Versioning](https://semver.org/). This project adheres to [Semantic Versioning](https://semver.org/).
## [0.28.0]
### Added
- Implement `Hash` for `Transform`.
- Implement `Borrow` and `BorrowMut` for contiguous slices.
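A minimal sketch of the `Borrow`/`BorrowMut` pattern described above (the `ContiguousStorage` type here is illustrative, not nalgebra's actual storage type): a container whose elements are contiguous in memory can hand out its backing slice through the standard traits.

```rust
use std::borrow::{Borrow, BorrowMut};

// Illustrative stand-in for a contiguous matrix/vector storage.
struct ContiguousStorage<T> {
    data: Vec<T>,
}

impl<T> Borrow<[T]> for ContiguousStorage<T> {
    fn borrow(&self) -> &[T] {
        &self.data
    }
}

impl<T> BorrowMut<[T]> for ContiguousStorage<T> {
    fn borrow_mut(&mut self) -> &mut [T] {
        &mut self.data
    }
}

fn main() {
    let mut s = ContiguousStorage { data: vec![1.0f32, 2.0, 3.0] };

    // The type annotation selects the `[T]` impl over the blanket
    // `Borrow<Self>` impl.
    let view: &[f32] = s.borrow();
    assert_eq!(view, &[1.0, 2.0, 3.0]);

    let view_mut: &mut [f32] = s.borrow_mut();
    view_mut[0] = 9.0;
    assert_eq!(s.data[0], 9.0);
}
```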
### Modified
- The `OPoint<T, D>` type has been added. It takes the dimension as a type-level integer (e.g. `Const<3>`) instead
  of a const-generic. The type `Point<T, const D: usize>` is now an alias for `OPoint`. This change doesn't affect any
  of the existing code using `Point`. However, it allows the use of `OPoint` in a generic context where the dimension
  cannot be easily expressed as a const-generic (because of the current limitations of const-generics in Rust).
- Several clippy warnings were fixed. This results in some method signature changes (e.g. taking `self` instead of `&self`),
  but this should not have any practical influence on existing codebases.
- The `Point::new` constructors are no longer const-fn. This is due to some limitations in const-fn
not allowing custom trait-bounds. Use the `point!` macro instead to build points in const environments.
- `Dynamic::new` and `Unit::new_unchecked` are now const-fn.
- Methods returning `Result<(), ()>` now return `bool` instead.
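The `OPoint`/`Point` relationship above can be sketched as follows (the definitions here are illustrative, not nalgebra's actual ones): a type-level integer `Const<N>` carries the dimension as a *type*, and the const-generic `Point` form is just an alias over the general `OPoint`.

```rust
use std::marker::PhantomData;

// Illustrative type-level integer, standing in for nalgebra's `Const<N>`.
pub struct Const<const N: usize>;

// The general type takes the dimension as a type parameter...
pub struct OPoint<T, D> {
    pub coords: Vec<T>, // placeholder storage for the sketch
    _dim: PhantomData<D>,
}

// ...and the const-generic form is an alias over it, so existing code
// naming `Point<T, 3>` keeps compiling unchanged.
pub type Point<T, const D: usize> = OPoint<T, Const<D>>;

impl<T, const D: usize> Point<T, D> {
    pub fn from_coords(coords: Vec<T>) -> Self {
        assert_eq!(coords.len(), D);
        OPoint { coords, _dim: PhantomData }
    }
}

// Generic code can name the dimension as an ordinary type parameter `D`,
// which const-generics alone cannot always express today.
fn dimension_type_of<T, D>(_p: &OPoint<T, D>) -> &'static str {
    std::any::type_name::<D>()
}

fn main() {
    let p: Point<f32, 3> = Point::from_coords(vec![1.0, 2.0, 3.0]);
    assert_eq!(p.coords.len(), 3);
    println!("dimension type: {}", dimension_type_of(&p));
}
```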
### Fixed
- Fixed a potential unsoundness issue when converting a mutable slice to a `&mut [T]`.
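The soundness concern behind that fix can be sketched like this (the `StridedSlice` type is illustrative, not nalgebra's actual code): recovering a `&mut [T]` from strided storage is only sound when the elements really are contiguous, so the conversion must check the stride first.

```rust
// Illustrative strided view over a backing buffer.
struct StridedSlice<'a, T> {
    data: &'a mut [T],
    len: usize,
    stride: usize, // distance between consecutive logical elements
}

impl<'a, T> StridedSlice<'a, T> {
    /// Returns the elements as a plain `&mut [T]` only when they are
    /// contiguous; a non-unit stride means the logical elements are
    /// interleaved with unrelated data, so no slice can be handed out.
    fn try_as_mut_slice(&mut self) -> Option<&mut [T]> {
        if self.stride == 1 {
            Some(&mut self.data[..self.len])
        } else {
            None
        }
    }
}

fn main() {
    let mut backing = [1, 2, 3, 4];
    let mut dense = StridedSlice { data: &mut backing, len: 4, stride: 1 };
    assert!(dense.try_as_mut_slice().is_some());

    let mut backing2 = [1, 2, 3, 4];
    let mut strided = StridedSlice { data: &mut backing2, len: 2, stride: 2 };
    assert!(strided.try_as_mut_slice().is_none());
}
```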
## [0.27.1] ## [0.27.1]
### Fixed ### Fixed
- Fixed a bug in the conversion from `glam::Vec2` or `glam::DVec2` to `Isometry2`. - Fixed a bug in the conversion from `glam::Vec2` or `glam::DVec2` to `Isometry2`.
@ -38,7 +58,7 @@ conversions targeting the versions 0.13, 0.14, and 0.15 of `glam`.
Fix a regression introduced in 0.26.0 preventing `DVector` from being serialized with `serde`. Fix a regression introduced in 0.26.0 preventing `DVector` from being serialized with `serde`.
## [0.26.0] ## [0.26.0]
This releases integrates `min-const-generics` to nalgebra. See This release integrates `min-const-generics` to nalgebra. See
[our blog post](https://www.dimforge.com/blog/2021/04/12/integrating-const-generics-to-nalgebra) [our blog post](https://www.dimforge.com/blog/2021/04/12/integrating-const-generics-to-nalgebra)
for details about this release. for details about this release.
@ -78,7 +98,7 @@ for details about this release.
## [0.25.3] ## [0.25.3]
### Added ### Added
- The `Vector::simd_cap_magnitude` method to cap the magnitude of the a vector with - The `Vector::simd_cap_magnitude` method to cap the magnitude of the vector with
SIMD components. SIMD components.
## [0.25.2] ## [0.25.2]
@ -109,7 +129,7 @@ This updates all the dependencies of nalgebra to their latest version, including
### New crate: nalgebra-sparse ### New crate: nalgebra-sparse
Alongside this release of `nalgebra`, we are releasing `nalgebra-sparse`: a crate dedicated to sparse matrix Alongside this release of `nalgebra`, we are releasing `nalgebra-sparse`: a crate dedicated to sparse matrix
computation with `nalgebra`. The `sparse` module of `nalgebra`itself still exists for backward compatibility computation with `nalgebra`. The `sparse` module of `nalgebra`itself still exists for backward compatibility,
but it will be deprecated soon in favor of the `nalgebra-sparse` crate. but it will be deprecated soon in favor of the `nalgebra-sparse` crate.
### Added ### Added
@ -125,7 +145,7 @@ but it will be deprecated soon in favor of the `nalgebra-sparse` crate.
## [0.24.0] ## [0.24.0]
### Added ### Added
* The `DualQuaternion` type. It is still work-in-progress but the basics are here: * The `DualQuaternion` type. It is still work-in-progress, but the basics are here:
creation from its real and dual part, multiplication of two dual quaternions, creation from its real and dual part, multiplication of two dual quaternions,
and normalization. and normalization.
@ -157,7 +177,7 @@ In this release we improved the documentation of the matrix and vector types by:
and `Vector.apply(f)`. and `Vector.apply(f)`.
* The `Quaternion::from([N; 4])` conversion to build a quaternion from an array of four elements. * The `Quaternion::from([N; 4])` conversion to build a quaternion from an array of four elements.
* The `Isometry::from(Translation)` conversion to build an isometry from a translation. * The `Isometry::from(Translation)` conversion to build an isometry from a translation.
* The `Vector::ith_axis(i)` which build a unit vector, e.g., `Unit<Vector3<f32>>` with its i-th component set to 1.0 and the * The `Vector::ith_axis(i)` which build a unit vector, e.g., `Unit<Vector3<f32>>` with its i-th component set to 1.0, and the
others set to zero. others set to zero.
* The `Isometry.lerp_slerp` and `Isometry.try_lerp_slerp` methods to interpolate between two isometries using linear * The `Isometry.lerp_slerp` and `Isometry.try_lerp_slerp` methods to interpolate between two isometries using linear
interpolation for the translational part, and spherical interpolation for the rotational part. interpolation for the translational part, and spherical interpolation for the rotational part.
@ -166,7 +186,7 @@ In this release we improved the documentation of the matrix and vector types by:
## [0.22.0] ## [0.22.0]
In this release, we are using the new version 0.2 of simba. One major change of that version is that the In this release, we are using the new version 0.2 of simba. One major change of that version is that the
use of `libm` is now opt-in when building targetting `no-std` environment. If you are using floating-point use of `libm` is now opt-in when building targeting `no-std` environment. If you are using floating-point
operations with nalgebra in a `no-std` environment, you will need to enable the new `libm` feature operations with nalgebra in a `no-std` environment, you will need to enable the new `libm` feature
of nalgebra for your code to compile again. of nalgebra for your code to compile again.
@ -174,7 +194,7 @@ of nalgebra for your code to compile again.
* The `libm` feature that enables `libm` when building for `no-std` environment. * The `libm` feature that enables `libm` when building for `no-std` environment.
* The `libm-force` feature that enables `libm` even when building for a not `no-std` environment. * The `libm-force` feature that enables `libm` even when building for a not `no-std` environment.
* `Cholesky::new_unchecked` which build a Cholesky decomposition without checking that its input is * `Cholesky::new_unchecked` which build a Cholesky decomposition without checking that its input is
positive-definite. It can be use with SIMD types. positive-definite. It can be used with SIMD types.
* The `Default` trait is now implemented for matrices, and quaternions. They are all filled with zeros, * The `Default` trait is now implemented for matrices, and quaternions. They are all filled with zeros,
except for `UnitQuaternion` which is initialized with the identity. except for `UnitQuaternion` which is initialized with the identity.
* Matrix exponential `matrix.exp()`. * Matrix exponential `matrix.exp()`.
@ -345,7 +365,7 @@ library (i.e. it supports `#![no_std]`). See the corresponding [documentation](h
* Add methods `.rotation_between_axis(...)` and `.scaled_rotation_between_axis(...)` to `UnitComplex` * Add methods `.rotation_between_axis(...)` and `.scaled_rotation_between_axis(...)` to `UnitComplex`
to compute the rotation matrix between two 2D **unit** vectors. to compute the rotation matrix between two 2D **unit** vectors.
* Add methods `.axis_angle()` to `UnitComplex` and `UnitQuaternion` in order to retrieve both the * Add methods `.axis_angle()` to `UnitComplex` and `UnitQuaternion` in order to retrieve both the
unit rotation axis and the rotation angle simultaneously. unit rotation axis, and the rotation angle simultaneously.
* Add functions to construct a random matrix with a user-defined distribution: `::from_distribution(...)`. * Add functions to construct a random matrix with a user-defined distribution: `::from_distribution(...)`.
## [0.14.0] ## [0.14.0]
@ -366,7 +386,7 @@ library (i.e. it supports `#![no_std]`). See the corresponding [documentation](h
the matrix `M` such that for all vector `v` we have the matrix `M` such that for all vector `v` we have
`M * v == self.cross(&v)`. `M * v == self.cross(&v)`.
* `.iamin()` that returns the index of the vector entry with * `.iamin()` that returns the index of the vector entry with
smallest absolute value. the smallest absolute value.
* The `mint` feature that can be enabled in order to allow conversions from * The `mint` feature that can be enabled in order to allow conversions from
and to types of the [mint](https://crates.io/crates/mint) crate. and to types of the [mint](https://crates.io/crates/mint) crate.
* Aliases for matrix and vector slices. Their are named by adding `Slice` * Aliases for matrix and vector slices. Their are named by adding `Slice`
@ -404,7 +424,7 @@ This adds support for serialization using the
* The alias `MatrixNM` is now deprecated. Use `MatrixMN` instead (we * The alias `MatrixNM` is now deprecated. Use `MatrixMN` instead (we
reordered M and N to be in alphabetical order). reordered M and N to be in alphabetical order).
* In-place componentwise multiplication and division * In-place componentwise multiplication and division
`.component_mul_mut(...)` and `.component_div_mut(...)` have bee deprecated `.component_mul_mut(...)` and `.component_div_mut(...)` have been deprecated
for a future renaming. Use `.component_mul_assign(...)` and for a future renaming. Use `.component_mul_assign(...)` and
`.component_div_assign(...)` instead. `.component_div_assign(...)` instead.
@ -582,7 +602,7 @@ only:
* The free functions `::prepend_rotation`, `::append_rotation`, * The free functions `::prepend_rotation`, `::append_rotation`,
`::append_rotation_wrt_center`, `::append_rotation_wrt_point`, `::append_rotation_wrt_center`, `::append_rotation_wrt_point`,
`::append_transformation`, and `::append_translation ` have been removed. `::append_transformation`, and `::append_translation ` have been removed.
Instead create the rotation or translation object explicitly and use Instead, create the rotation or translation object explicitly and use
multiplication to compose it with anything else. multiplication to compose it with anything else.
* The free function `::outer` has been removed. Use column-vector × * The free function `::outer` has been removed. Use column-vector ×
@ -608,7 +628,7 @@ Binary operations are now allowed between references as well. For example
### Modified ### Modified
Removed unused parameters to methods from the `ApproxEq` trait. Those were Removed unused parameters to methods from the `ApproxEq` trait. Those were
required before rust 1.0 to help type inference. The are not needed any more required before rust 1.0 to help type inference. They are not needed any more
since it now allowed to write for a type `T` that implements `ApproxEq`: since it now allowed to write for a type `T` that implements `ApproxEq`:
`<T as ApproxEq>::approx_epsilon()`. This replaces the old form: `<T as ApproxEq>::approx_epsilon()`. This replaces the old form:
`ApproxEq::approx_epsilon(None::<T>)`. `ApproxEq::approx_epsilon(None::<T>)`.
@ -627,7 +647,7 @@ since it now allowed to write for a type `T` that implements `ApproxEq`:
`UnitQuaternion::from_axisangle`. The new `::new` method now requires a `UnitQuaternion::from_axisangle`. The new `::new` method now requires a
not-normalized quaternion. not-normalized quaternion.
Methods names starting with `new_with_` now start with `from_`. This is more Method names starting with `new_with_` now start with `from_`. This is more
idiomatic in Rust. idiomatic in Rust.
The `Norm` trait now uses an associated type instead of a type parameter. The `Norm` trait now uses an associated type instead of a type parameter.
@ -658,8 +678,8 @@ crate for vectors, rotations and points. To enable them, activate the
## [0.8.0] ## [0.8.0]
### Modified ### Modified
* Almost everything (types, methods, and traits) now use full names instead * Almost everything (types, methods, and traits) now use fulls names instead
of abbreviations (e.g. `Vec3` becomes `Vector3`). Most changes are abvious. of abbreviations (e.g. `Vec3` becomes `Vector3`). Most changes are obvious.
Note however that: Note however that:
- `::sqnorm` becomes `::norm_squared`. - `::sqnorm` becomes `::norm_squared`.
- `::sqdist` becomes `::distance_squared`. - `::sqdist` becomes `::distance_squared`.
@ -693,11 +713,11 @@ you [there](https://users.nphysics.org)!
### Removed ### Removed
* Removed zero-sized elements `Vector0`, `Point0`. * Removed zero-sized elements `Vector0`, `Point0`.
* Removed 4-dimensional transformations `Rotation4` and `Isometry4` (which had an implementation to incomplete to be useful). * Removed 4-dimensional transformations `Rotation4` and `Isometry4` (which had an implementation too incomplete to be useful).
### Modified ### Modified
* Vectors are now multipliable with isometries. This will result into a pure rotation (this is how * Vectors are now multipliable with isometries. This will result into a pure rotation (this is how
vectors differ from point semantically: they design directions so they are not translatable). vectors differ from point semantically: they design directions, so they are not translatable).
* `{Isometry3, Rotation3}::look_at` reimplemented and renamed to `::look_at_rh` and `::look_at_lh` to agree * `{Isometry3, Rotation3}::look_at` reimplemented and renamed to `::look_at_rh` and `::look_at_lh` to agree
with the computer graphics community (in particular, the GLM library). Use the `::look_at_rh` with the computer graphics community (in particular, the GLM library). Use the `::look_at_rh`
variant to build a view matrix that variant to build a view matrix that

View File

@ -1,6 +1,6 @@
[package] [package]
name = "nalgebra" name = "nalgebra"
version = "0.27.1" version = "0.28.0"
authors = [ "Sébastien Crozet <developer@crozet.re>" ] authors = [ "Sébastien Crozet <developer@crozet.re>" ]
description = "General-purpose linear algebra library with transformations and statically-sized or dynamically-sized matrices." description = "General-purpose linear algebra library with transformations and statically-sized or dynamically-sized matrices."

View File

@ -4,7 +4,7 @@ version = "0.0.0"
authors = [ "You" ] authors = [ "You" ]
[dependencies] [dependencies]
nalgebra = "0.27.0" nalgebra = "0.28.0"
[[bin]] [[bin]]
name = "example" name = "example"

View File

@ -1,6 +1,6 @@
[package] [package]
name = "nalgebra-glm" name = "nalgebra-glm"
version = "0.13.0" version = "0.14.0"
authors = ["sebcrozet <developer@crozet.re>"] authors = ["sebcrozet <developer@crozet.re>"]
description = "A computer-graphics oriented API for nalgebra, inspired by the C++ GLM library." description = "A computer-graphics oriented API for nalgebra, inspired by the C++ GLM library."
@ -27,4 +27,4 @@ abomonation-serialize = [ "nalgebra/abomonation-serialize" ]
num-traits = { version = "0.2", default-features = false } num-traits = { version = "0.2", default-features = false }
approx = { version = "0.5", default-features = false } approx = { version = "0.5", default-features = false }
simba = { version = "0.5", default-features = false } simba = { version = "0.5", default-features = false }
nalgebra = { path = "..", version = "0.27", default-features = false } nalgebra = { path = "..", version = "0.28", default-features = false }

View File

@ -21,7 +21,7 @@
**nalgebra-glm** using the module prefix `glm::`. For example you will write `glm::rotate(...)` instead **nalgebra-glm** using the module prefix `glm::`. For example you will write `glm::rotate(...)` instead
of the more verbose `nalgebra_glm::rotate(...)`: of the more verbose `nalgebra_glm::rotate(...)`:
```rust ```
extern crate nalgebra_glm as glm; extern crate nalgebra_glm as glm;
``` ```

View File

@ -1,6 +1,6 @@
[package] [package]
name = "nalgebra-lapack" name = "nalgebra-lapack"
version = "0.18.0" version = "0.19.0"
authors = [ "Sébastien Crozet <developer@crozet.re>", "Andrew Straw <strawman@astraw.com>" ] authors = [ "Sébastien Crozet <developer@crozet.re>", "Andrew Straw <strawman@astraw.com>" ]
description = "Matrix decompositions using nalgebra matrices and Lapack bindings." description = "Matrix decompositions using nalgebra matrices and Lapack bindings."
@ -29,7 +29,7 @@ accelerate = ["lapack-src/accelerate"]
intel-mkl = ["lapack-src/intel-mkl"] intel-mkl = ["lapack-src/intel-mkl"]
[dependencies] [dependencies]
nalgebra = { version = "0.27", path = ".." } nalgebra = { version = "0.28", path = ".." }
num-traits = "0.2" num-traits = "0.2"
num-complex = { version = "0.4", default-features = false } num-complex = { version = "0.4", default-features = false }
simba = "0.5" simba = "0.5"
@ -39,7 +39,7 @@ lapack-src = { version = "0.8", default-features = false }
# clippy = "*" # clippy = "*"
[dev-dependencies] [dev-dependencies]
nalgebra = { version = "0.27", features = [ "arbitrary", "rand" ], path = ".." } nalgebra = { version = "0.28", features = [ "arbitrary", "rand" ], path = ".." }
proptest = { version = "1", default-features = false, features = ["std"] } proptest = { version = "1", default-features = false, features = ["std"] }
quickcheck = "1" quickcheck = "1"
approx = "0.5" approx = "0.5"

View File

@ -30,7 +30,7 @@
//! the system installation of netlib without LAPACKE (note the E) or //! the system installation of netlib without LAPACKE (note the E) or
//! CBLAS: //! CBLAS:
//! //!
//! ```.ignore //! ```ignore
//! sudo apt-get install gfortran libblas3gf liblapack3gf //! sudo apt-get install gfortran libblas3gf liblapack3gf
//! export CARGO_FEATURE_SYSTEM_NETLIB=1 //! export CARGO_FEATURE_SYSTEM_NETLIB=1
//! export CARGO_FEATURE_EXCLUDE_LAPACKE=1 //! export CARGO_FEATURE_EXCLUDE_LAPACKE=1
@ -44,7 +44,7 @@
//! //!
//! On macOS, do this to use Apple's Accelerate framework: //! On macOS, do this to use Apple's Accelerate framework:
//! //!
//! ```.ignore //! ```ignore
//! export CARGO_FEATURES="--no-default-features --features accelerate" //! export CARGO_FEATURES="--no-default-features --features accelerate"
//! cargo build ${CARGO_FEATURES} //! cargo build ${CARGO_FEATURES}
//! ``` //! ```

View File

@ -21,5 +21,5 @@ quote = "1.0"
proc-macro2 = "1.0" proc-macro2 = "1.0"
[dev-dependencies] [dev-dependencies]
nalgebra = { version = "0.27.0", path = ".." } nalgebra = { version = "0.28.0", path = ".." }
trybuild = "1.0.42" trybuild = "1.0.42"

View File

@ -1,6 +1,6 @@
[package] [package]
name = "nalgebra-sparse" name = "nalgebra-sparse"
version = "0.3.0" version = "0.4.0"
authors = [ "Andreas Longva", "Sébastien Crozet <developer@crozet.re>" ] authors = [ "Andreas Longva", "Sébastien Crozet <developer@crozet.re>" ]
edition = "2018" edition = "2018"
description = "Sparse matrix computation based on nalgebra." description = "Sparse matrix computation based on nalgebra."
@ -20,7 +20,7 @@ compare = [ "matrixcompare-core" ]
slow-tests = [] slow-tests = []
[dependencies] [dependencies]
nalgebra = { version="0.27", path = "../" } nalgebra = { version="0.28", path = "../" }
num-traits = { version = "0.2", default-features = false } num-traits = { version = "0.2", default-features = false }
proptest = { version = "1.0", optional = true } proptest = { version = "1.0", optional = true }
matrixcompare-core = { version = "0.1.0", optional = true } matrixcompare-core = { version = "0.1.0", optional = true }
@ -28,7 +28,7 @@ matrixcompare-core = { version = "0.1.0", optional = true }
[dev-dependencies] [dev-dependencies]
itertools = "0.10" itertools = "0.10"
matrixcompare = { version = "0.3.0", features = [ "proptest-support" ] } matrixcompare = { version = "0.3.0", features = [ "proptest-support" ] }
nalgebra = { version="0.27", path = "../", features = ["compare"] } nalgebra = { version="0.28", path = "../", features = ["compare"] }
[package.metadata.docs.rs] [package.metadata.docs.rs]
# Enable certain features when building docs for docs.rs # Enable certain features when building docs for docs.rs

View File

@ -7,7 +7,7 @@
//! The following example illustrates how to convert between matrix formats with the `From` //! The following example illustrates how to convert between matrix formats with the `From`
//! implementations. //! implementations.
//! //!
//! ```rust //! ```
//! use nalgebra_sparse::{csr::CsrMatrix, csc::CscMatrix, coo::CooMatrix}; //! use nalgebra_sparse::{csr::CsrMatrix, csc::CscMatrix, coo::CooMatrix};
//! use nalgebra::DMatrix; //! use nalgebra::DMatrix;
//! //!

View File

@ -20,7 +20,7 @@ use crate::SparseFormatError;
/// ///
/// # Examples /// # Examples
/// ///
/// ```rust /// ```
/// use nalgebra_sparse::{coo::CooMatrix, csr::CsrMatrix, csc::CscMatrix}; /// use nalgebra_sparse::{coo::CooMatrix, csr::CsrMatrix, csc::CscMatrix};
/// ///
/// // Initialize a matrix with all zeros (no explicitly stored entries). /// // Initialize a matrix with all zeros (no explicitly stored entries).

View File

@ -19,7 +19,7 @@ use std::slice::{Iter, IterMut};
/// ///
/// # Usage /// # Usage
/// ///
/// ```rust /// ```
/// use nalgebra_sparse::csc::CscMatrix; /// use nalgebra_sparse::csc::CscMatrix;
/// use nalgebra::{DMatrix, Matrix3x4}; /// use nalgebra::{DMatrix, Matrix3x4};
/// use matrixcompare::assert_matrix_eq; /// use matrixcompare::assert_matrix_eq;
@ -97,7 +97,7 @@ use std::slice::{Iter, IterMut};
/// represents the matrix in a column-by-column fashion. The entries associated with column `j` are /// represents the matrix in a column-by-column fashion. The entries associated with column `j` are
/// determined as follows: /// determined as follows:
/// ///
/// ```rust /// ```
/// # let col_offsets: Vec<usize> = vec![0, 0]; /// # let col_offsets: Vec<usize> = vec![0, 0];
/// # let row_indices: Vec<usize> = vec![]; /// # let row_indices: Vec<usize> = vec![];
/// # let values: Vec<i32> = vec![]; /// # let values: Vec<i32> = vec![];

View File

@ -19,7 +19,7 @@ use std::slice::{Iter, IterMut};
/// ///
/// # Usage /// # Usage
/// ///
/// ```rust /// ```
/// use nalgebra_sparse::csr::CsrMatrix; /// use nalgebra_sparse::csr::CsrMatrix;
/// use nalgebra::{DMatrix, Matrix3x4}; /// use nalgebra::{DMatrix, Matrix3x4};
/// use matrixcompare::assert_matrix_eq; /// use matrixcompare::assert_matrix_eq;
@ -97,7 +97,7 @@ use std::slice::{Iter, IterMut};
/// represents the matrix in a row-by-row fashion. The entries associated with row `i` are /// represents the matrix in a row-by-row fashion. The entries associated with row `i` are
/// determined as follows: /// determined as follows:
/// ///
/// ```rust /// ```
/// # let row_offsets: Vec<usize> = vec![0, 0]; /// # let row_offsets: Vec<usize> = vec![0, 0];
/// # let col_indices: Vec<usize> = vec![]; /// # let col_indices: Vec<usize> = vec![];
/// # let values: Vec<i32> = vec![]; /// # let values: Vec<i32> = vec![];

View File

@ -73,7 +73,7 @@
//! //!
//! # Example: COO -> CSR -> matrix-vector product //! # Example: COO -> CSR -> matrix-vector product
//! //!
//! ```rust //! ```
//! use nalgebra_sparse::{coo::CooMatrix, csr::CsrMatrix}; //! use nalgebra_sparse::{coo::CooMatrix, csr::CsrMatrix};
//! use nalgebra::{DMatrix, DVector}; //! use nalgebra::{DMatrix, DVector};
//! use matrixcompare::assert_matrix_eq; //! use matrixcompare::assert_matrix_eq;

View File

@ -90,7 +90,7 @@
//! `C <- 3.0 * C + 2.0 * A^T * B`, where `A`, `B`, `C` are matrices and `A^T` is the transpose //! `C <- 3.0 * C + 2.0 * A^T * B`, where `A`, `B`, `C` are matrices and `A^T` is the transpose
//! of `A`. The simplest way to write this is: //! of `A`. The simplest way to write this is:
//! //!
//! ```rust //! ```
//! # use nalgebra_sparse::csr::CsrMatrix; //! # use nalgebra_sparse::csr::CsrMatrix;
//! # let a = CsrMatrix::identity(10); let b = CsrMatrix::identity(10); //! # let a = CsrMatrix::identity(10); let b = CsrMatrix::identity(10);
//! # let mut c = CsrMatrix::identity(10); //! # let mut c = CsrMatrix::identity(10);
@ -109,7 +109,7 @@
//! //!
//! An alternative way to implement this expression (here using CSR matrices) is: //! An alternative way to implement this expression (here using CSR matrices) is:
//! //!
//! ```rust //! ```
//! # use nalgebra_sparse::csr::CsrMatrix; //! # use nalgebra_sparse::csr::CsrMatrix;
//! # let a = CsrMatrix::identity(10); let b = CsrMatrix::identity(10); //! # let a = CsrMatrix::identity(10); let b = CsrMatrix::identity(10);
//! # let mut c = CsrMatrix::identity(10); //! # let mut c = CsrMatrix::identity(10);

View File

@ -40,6 +40,7 @@ pub trait Reallocator<T: Scalar, RFrom: Dim, CFrom: Dim, RTo: Dim, CTo: Dim>:
/// Reallocates a buffer of shape `(RTo, CTo)`, possibly reusing a previously allocated buffer /// Reallocates a buffer of shape `(RTo, CTo)`, possibly reusing a previously allocated buffer
/// `buf`. Data stored by `buf` are linearly copied to the output: /// `buf`. Data stored by `buf` are linearly copied to the output:
/// ///
/// # Safety
/// * The copy is performed as if both were just arrays (without a matrix structure). /// * The copy is performed as if both were just arrays (without a matrix structure).
/// * If `buf` is larger than the output size, then extra elements of `buf` are truncated. /// * If `buf` is larger than the output size, then extra elements of `buf` are truncated.
/// * If `buf` is smaller than the output size, then extra elements of the output are left /// * If `buf` is smaller than the output size, then extra elements of the output are left

View File

@ -79,7 +79,7 @@ where
} }
#[inline] #[inline]
unsafe fn is_contiguous(&self) -> bool { fn is_contiguous(&self) -> bool {
true true
} }
@ -286,11 +286,7 @@ where
unsafe fn exhume<'a, 'b>(&'a mut self, mut bytes: &'b mut [u8]) -> Option<&'b mut [u8]> { unsafe fn exhume<'a, 'b>(&'a mut self, mut bytes: &'b mut [u8]) -> Option<&'b mut [u8]> {
for element in self.as_mut_slice() { for element in self.as_mut_slice() {
let temp = bytes; let temp = bytes;
bytes = if let Some(remainder) = element.exhume(temp) { bytes = element.exhume(temp)?
remainder
} else {
return None;
}
} }
Some(bytes) Some(bytes)
} }
@ -327,7 +323,7 @@ mod rkyv_impl {
for ArrayStorage<T, R, C> for ArrayStorage<T, R, C>
{ {
fn serialize(&self, serializer: &mut S) -> Result<Self::Resolver, S::Error> { fn serialize(&self, serializer: &mut S) -> Result<Self::Resolver, S::Error> {
Ok(self.0.serialize(serializer)?) self.0.serialize(serializer)
} }
} }

View File

@ -1388,12 +1388,12 @@ where
{ {
work.gemv(T::one(), mid, &rhs.column(0), T::zero()); work.gemv(T::one(), mid, &rhs.column(0), T::zero());
self.column_mut(0) self.column_mut(0)
.gemv_tr(alpha.inlined_clone(), &rhs, work, beta.inlined_clone()); .gemv_tr(alpha.inlined_clone(), rhs, work, beta.inlined_clone());
for j in 1..rhs.ncols() { for j in 1..rhs.ncols() {
work.gemv(T::one(), mid, &rhs.column(j), T::zero()); work.gemv(T::one(), mid, &rhs.column(j), T::zero());
self.column_mut(j) self.column_mut(j)
.gemv_tr(alpha.inlined_clone(), &rhs, work, beta.inlined_clone()); .gemv_tr(alpha.inlined_clone(), rhs, work, beta.inlined_clone());
} }
} }

View File

@ -386,7 +386,7 @@ impl<T: Scalar + Zero + One + ClosedMul + ClosedAdd, D: DimName, S: Storage<T, D
(D::dim() - 1, 0), (D::dim() - 1, 0),
(Const::<1>, DimNameDiff::<D, U1>::name()), (Const::<1>, DimNameDiff::<D, U1>::name()),
) )
.tr_dot(&shift); .tr_dot(shift);
let post_translation = self.generic_slice( let post_translation = self.generic_slice(
(0, 0), (0, 0),
(DimNameDiff::<D, U1>::name(), DimNameDiff::<D, U1>::name()), (DimNameDiff::<D, U1>::name(), DimNameDiff::<D, U1>::name()),
@ -423,7 +423,7 @@ where
(D::dim() - 1, 0), (D::dim() - 1, 0),
(Const::<1>, DimNameDiff::<D, U1>::name()), (Const::<1>, DimNameDiff::<D, U1>::name()),
); );
let n = normalizer.tr_dot(&v); let n = normalizer.tr_dot(v);
if !n.is_zero() { if !n.is_zero() {
return transform * (v / n); return transform * (v / n);

View File

@ -53,7 +53,10 @@ impl<T: Scalar, R: Dim, C: Dim> OMatrix<T, R, C>
where where
DefaultAllocator: Allocator<T, R, C>, DefaultAllocator: Allocator<T, R, C>,
{ {
/// Creates a new uninitialized matrix. If the matrix has a compile-time dimension, this panics /// Creates a new uninitialized matrix.
///
/// # Safety
/// If the matrix has a compile-time dimension, this panics
/// if `nrows != R::to_usize()` or `ncols != C::to_usize()`. /// if `nrows != R::to_usize()` or `ncols != C::to_usize()`.
#[inline] #[inline]
pub unsafe fn new_uninitialized_generic(nrows: R, ncols: C) -> mem::MaybeUninit<Self> { pub unsafe fn new_uninitialized_generic(nrows: R, ncols: C) -> mem::MaybeUninit<Self> {
@ -827,7 +830,7 @@ where
Standard: Distribution<T>, Standard: Distribution<T>,
{ {
#[inline] #[inline]
fn sample<'a, G: Rng + ?Sized>(&self, rng: &'a mut G) -> OMatrix<T, R, C> { fn sample<G: Rng + ?Sized>(&self, rng: &mut G) -> OMatrix<T, R, C> {
let nrows = R::try_to_usize().unwrap_or_else(|| rng.gen_range(0..10)); let nrows = R::try_to_usize().unwrap_or_else(|| rng.gen_range(0..10));
let ncols = C::try_to_usize().unwrap_or_else(|| rng.gen_range(0..10)); let ncols = C::try_to_usize().unwrap_or_else(|| rng.gen_range(0..10));
@ -864,7 +867,7 @@ where
{ {
/// Generate a uniformly distributed random unit vector. /// Generate a uniformly distributed random unit vector.
#[inline] #[inline]
fn sample<'a, G: Rng + ?Sized>(&self, rng: &'a mut G) -> Unit<OVector<T, D>> { fn sample<G: Rng + ?Sized>(&self, rng: &mut G) -> Unit<OVector<T, D>> {
Unit::new_normalize(OVector::from_distribution_generic( Unit::new_normalize(OVector::from_distribution_generic(
D::name(), D::name(),
Const::<1>, Const::<1>,

View File

@ -10,6 +10,7 @@ impl<'a, T: Scalar, R: Dim, C: Dim, RStride: Dim, CStride: Dim>
{ {
     /// Creates, without bound-checking, a matrix slice from an array and with dimensions and strides specified by generic types instances.
     ///
+    /// # Safety
     /// This method is unsafe because the input data array is not checked to contain enough elements.
     /// The generic types `R`, `C`, `RStride`, `CStride` can either be type-level integers or integers wrapped with `Dynamic::new()`.
     #[inline]
@@ -59,6 +60,7 @@ impl<'a, T: Scalar, R: Dim, C: Dim, RStride: Dim, CStride: Dim>
 impl<'a, T: Scalar, R: Dim, C: Dim> MatrixSlice<'a, T, R, C> {
     /// Creates, without bound-checking, a matrix slice from an array and with dimensions specified by generic types instances.
     ///
+    /// # Safety
     /// This method is unsafe because the input data array is not checked to contain enough elements.
     /// The generic types `R` and `C` can either be type-level integers or integers wrapped with `Dynamic::new()`.
     #[inline]
@@ -146,6 +148,7 @@ impl<'a, T: Scalar, R: Dim, C: Dim, RStride: Dim, CStride: Dim>
 {
     /// Creates, without bound-checking, a mutable matrix slice from an array and with dimensions and strides specified by generic types instances.
     ///
+    /// # Safety
     /// This method is unsafe because the input data array is not checked to contain enough elements.
     /// The generic types `R`, `C`, `RStride`, `CStride` can either be type-level integers or integers wrapped with `Dynamic::new()`.
     #[inline]
@@ -217,6 +220,7 @@ impl<'a, T: Scalar, R: Dim, C: Dim, RStride: Dim, CStride: Dim>
 impl<'a, T: Scalar, R: Dim, C: Dim> MatrixSliceMutMN<'a, T, R, C> {
     /// Creates, without bound-checking, a mutable matrix slice from an array and with dimensions specified by generic types instances.
     ///
+    /// # Safety
     /// This method is unsafe because the input data array is not checked to contain enough elements.
     /// The generic types `R` and `C` can either be type-level integers or integers wrapped with `Dynamic::new()`.
     #[inline]
@@ -1,6 +1,7 @@
 #[cfg(all(feature = "alloc", not(feature = "std")))]
 use alloc::vec::Vec;
 use simba::scalar::{SubsetOf, SupersetOf};
+use std::borrow::{Borrow, BorrowMut};
 use std::convert::{AsMut, AsRef, From, Into};
 use simba::simd::{PrimitiveSimdValue, SimdValue};
@@ -192,32 +193,47 @@ impl<T: Scalar, const R: usize, const C: usize> From<SMatrix<T, R, C>> for [[T;
     }
 }

-macro_rules! impl_from_into_asref_2D(
-    ($(($NRows: ty, $NCols: ty) => ($SZRows: expr, $SZCols: expr));* $(;)*) => {$(
-        impl<T: Scalar, S> AsRef<[[T; $SZRows]; $SZCols]> for Matrix<T, $NRows, $NCols, S>
+macro_rules! impl_from_into_asref_borrow_2D(
+    //does the impls on one case for either AsRef/AsMut and Borrow/BorrowMut
+    (
+        ($NRows: ty, $NCols: ty) => ($SZRows: expr, $SZCols: expr);
+        $Ref:ident.$ref:ident(), $Mut:ident.$mut:ident()
+    ) => {
+        impl<T: Scalar, S> $Ref<[[T; $SZRows]; $SZCols]> for Matrix<T, $NRows, $NCols, S>
         where S: ContiguousStorage<T, $NRows, $NCols> {
             #[inline]
-            fn as_ref(&self) -> &[[T; $SZRows]; $SZCols] {
+            fn $ref(&self) -> &[[T; $SZRows]; $SZCols] {
                 unsafe {
                     &*(self.data.ptr() as *const [[T; $SZRows]; $SZCols])
                 }
             }
         }

-        impl<T: Scalar, S> AsMut<[[T; $SZRows]; $SZCols]> for Matrix<T, $NRows, $NCols, S>
+        impl<T: Scalar, S> $Mut<[[T; $SZRows]; $SZCols]> for Matrix<T, $NRows, $NCols, S>
         where S: ContiguousStorageMut<T, $NRows, $NCols> {
             #[inline]
-            fn as_mut(&mut self) -> &mut [[T; $SZRows]; $SZCols] {
+            fn $mut(&mut self) -> &mut [[T; $SZRows]; $SZCols] {
                 unsafe {
                     &mut *(self.data.ptr_mut() as *mut [[T; $SZRows]; $SZCols])
                 }
             }
         }
+    };
+
+    //collects the mappings from typenum pairs to consts
+    ($(($NRows: ty, $NCols: ty) => ($SZRows: expr, $SZCols: expr));* $(;)*) => {$(
+        impl_from_into_asref_borrow_2D!(
+            ($NRows, $NCols) => ($SZRows, $SZCols); AsRef.as_ref(), AsMut.as_mut()
+        );
+        impl_from_into_asref_borrow_2D!(
+            ($NRows, $NCols) => ($SZRows, $SZCols); Borrow.borrow(), BorrowMut.borrow_mut()
+        );
     )*}
 );

 // Implement for matrices with shape 2x2 .. 6x6.
-impl_from_into_asref_2D!(
+impl_from_into_asref_borrow_2D!(
     (U2, U2) => (2, 2); (U2, U3) => (2, 3); (U2, U4) => (2, 4); (U2, U5) => (2, 5); (U2, U6) => (2, 6);
     (U3, U2) => (3, 2); (U3, U3) => (3, 3); (U3, U4) => (3, 4); (U3, U5) => (3, 5); (U3, U6) => (3, 6);
     (U4, U2) => (4, 2); (U4, U3) => (4, 3); (U4, U4) => (4, 4); (U4, U5) => (4, 5); (U4, U6) => (4, 6);
@@ -451,6 +467,12 @@ impl<'a, T: Scalar + Copy> From<&'a [T]> for DVectorSlice<'a, T> {
     }
 }

+impl<'a, T: Scalar> From<DVectorSlice<'a, T>> for &'a [T] {
+    fn from(vec: DVectorSlice<'a, T>) -> &'a [T] {
+        vec.data.into_slice()
+    }
+}
+
 impl<'a, T: Scalar + Copy> From<&'a mut [T]> for DVectorSliceMut<'a, T> {
     #[inline]
     fn from(slice: &'a mut [T]) -> Self {
@@ -458,6 +480,12 @@ impl<'a, T: Scalar + Copy> From<&'a mut [T]> for DVectorSliceMut<'a, T> {
     }
 }

+impl<'a, T: Scalar> From<DVectorSliceMut<'a, T>> for &'a mut [T] {
+    fn from(vec: DVectorSliceMut<'a, T>) -> &'a mut [T] {
+        vec.data.into_slice_mut()
+    }
+}
+
 impl<T: Scalar + PrimitiveSimdValue, R: Dim, C: Dim> From<[OMatrix<T::Element, R, C>; 2]>
 for OMatrix<T, R, C>
 where
@@ -20,7 +20,7 @@ pub struct Dynamic {
 impl Dynamic {
     /// A dynamic size equal to `value`.
     #[inline]
-    pub fn new(value: usize) -> Self {
+    pub const fn new(value: usize) -> Self {
         Self { value }
     }
 }
@@ -587,6 +587,7 @@ impl<T: Scalar, R: Dim, C: Dim, S: Storage<T, R, C>> Matrix<T, R, C, S> {
     /// Inserts `ninsert.value()` columns starting at the `i-th` place of this matrix.
     ///
+    /// # Safety
     /// The added column values are not initialized.
     #[inline]
     pub unsafe fn insert_columns_generic_uninitialized<D>(
@@ -668,6 +669,7 @@ impl<T: Scalar, R: Dim, C: Dim, S: Storage<T, R, C>> Matrix<T, R, C, S> {
     /// Inserts `ninsert.value()` rows at the `i-th` place of this matrix.
     ///
+    /// # Safety
     /// The added rows values are not initialized.
     /// This is the generic implementation of `.insert_rows(...)` and
     /// `.insert_fixed_rows(...)` which have nicer API interfaces.
@@ -44,7 +44,7 @@ impl<D: Dim> DimRange<D> for usize {
 #[test]
 fn dimrange_usize() {
     assert_eq!(DimRange::contained_by(&0, Const::<0>), false);
-    assert_eq!(DimRange::contained_by(&0, Const::<1>), true);
+    assert!(DimRange::contained_by(&0, Const::<1>));
 }

 impl<D: Dim> DimRange<D> for ops::Range<usize> {
@@ -68,24 +68,23 @@ impl<D: Dim> DimRange<D> for ops::Range<usize> {
 #[test]
 fn dimrange_range_usize() {
-    use std::usize::MAX;
     assert_eq!(DimRange::contained_by(&(0..0), Const::<0>), false);
     assert_eq!(DimRange::contained_by(&(0..1), Const::<0>), false);
-    assert_eq!(DimRange::contained_by(&(0..1), Const::<1>), true);
+    assert!(DimRange::contained_by(&(0..1), Const::<1>));
+    assert!(DimRange::contained_by(
+        &((usize::MAX - 1)..usize::MAX),
+        Dynamic::new(usize::MAX)
+    ));
     assert_eq!(
-        DimRange::contained_by(&((MAX - 1)..MAX), Dynamic::new(MAX)),
-        true
-    );
-    assert_eq!(
-        DimRange::length(&((MAX - 1)..MAX), Dynamic::new(MAX)),
+        DimRange::length(&((usize::MAX - 1)..usize::MAX), Dynamic::new(usize::MAX)),
         Dynamic::new(1)
     );
     assert_eq!(
-        DimRange::length(&(MAX..(MAX - 1)), Dynamic::new(MAX)),
+        DimRange::length(&(usize::MAX..(usize::MAX - 1)), Dynamic::new(usize::MAX)),
         Dynamic::new(0)
     );
     assert_eq!(
-        DimRange::length(&(MAX..MAX), Dynamic::new(MAX)),
+        DimRange::length(&(usize::MAX..usize::MAX), Dynamic::new(usize::MAX)),
         Dynamic::new(0)
     );
 }
@@ -111,20 +110,19 @@ impl<D: Dim> DimRange<D> for ops::RangeFrom<usize> {
 #[test]
 fn dimrange_rangefrom_usize() {
-    use std::usize::MAX;
     assert_eq!(DimRange::contained_by(&(0..), Const::<0>), false);
     assert_eq!(DimRange::contained_by(&(0..), Const::<0>), false);
-    assert_eq!(DimRange::contained_by(&(0..), Const::<1>), true);
+    assert!(DimRange::contained_by(&(0..), Const::<1>));
+    assert!(DimRange::contained_by(
+        &((usize::MAX - 1)..),
+        Dynamic::new(usize::MAX)
+    ));
     assert_eq!(
-        DimRange::contained_by(&((MAX - 1)..), Dynamic::new(MAX)),
-        true
-    );
-    assert_eq!(
-        DimRange::length(&((MAX - 1)..), Dynamic::new(MAX)),
+        DimRange::length(&((usize::MAX - 1)..), Dynamic::new(usize::MAX)),
         Dynamic::new(1)
     );
     assert_eq!(
-        DimRange::length(&(MAX..), Dynamic::new(MAX)),
+        DimRange::length(&(usize::MAX..), Dynamic::new(usize::MAX)),
         Dynamic::new(0)
     );
 }
@@ -177,7 +175,7 @@ impl<D: Dim> DimRange<D> for ops::RangeFull {
 #[test]
 fn dimrange_rangefull() {
-    assert_eq!(DimRange::contained_by(&(..), Const::<0>), true);
+    assert!(DimRange::contained_by(&(..), Const::<0>));
     assert_eq!(DimRange::length(&(..), Const::<1>), Const::<1>);
 }
@@ -206,32 +204,31 @@ impl<D: Dim> DimRange<D> for ops::RangeInclusive<usize> {
 #[test]
 fn dimrange_rangeinclusive_usize() {
-    use std::usize::MAX;
     assert_eq!(DimRange::contained_by(&(0..=0), Const::<0>), false);
-    assert_eq!(DimRange::contained_by(&(0..=0), Const::<1>), true);
+    assert!(DimRange::contained_by(&(0..=0), Const::<1>));
     assert_eq!(
-        DimRange::contained_by(&(MAX..=MAX), Dynamic::new(MAX)),
+        DimRange::contained_by(&(usize::MAX..=usize::MAX), Dynamic::new(usize::MAX)),
         false
     );
     assert_eq!(
-        DimRange::contained_by(&((MAX - 1)..=MAX), Dynamic::new(MAX)),
+        DimRange::contained_by(&((usize::MAX - 1)..=usize::MAX), Dynamic::new(usize::MAX)),
         false
     );
-    assert_eq!(
-        DimRange::contained_by(&((MAX - 1)..=(MAX - 1)), Dynamic::new(MAX)),
-        true
-    );
+    assert!(DimRange::contained_by(
+        &((usize::MAX - 1)..=(usize::MAX - 1)),
+        Dynamic::new(usize::MAX)
+    ));
     assert_eq!(DimRange::length(&(0..=0), Const::<1>), Dynamic::new(1));
     assert_eq!(
-        DimRange::length(&((MAX - 1)..=MAX), Dynamic::new(MAX)),
+        DimRange::length(&((usize::MAX - 1)..=usize::MAX), Dynamic::new(usize::MAX)),
         Dynamic::new(2)
     );
     assert_eq!(
-        DimRange::length(&(MAX..=(MAX - 1)), Dynamic::new(MAX)),
+        DimRange::length(&(usize::MAX..=(usize::MAX - 1)), Dynamic::new(usize::MAX)),
         Dynamic::new(0)
     );
     assert_eq!(
-        DimRange::length(&(MAX..=MAX), Dynamic::new(MAX)),
+        DimRange::length(&(usize::MAX..=usize::MAX), Dynamic::new(usize::MAX)),
         Dynamic::new(1)
     );
 }
@@ -257,21 +254,20 @@ impl<D: Dim> DimRange<D> for ops::RangeTo<usize> {
 #[test]
 fn dimrange_rangeto_usize() {
-    use std::usize::MAX;
-    assert_eq!(DimRange::contained_by(&(..0), Const::<0>), true);
+    assert!(DimRange::contained_by(&(..0), Const::<0>));
     assert_eq!(DimRange::contained_by(&(..1), Const::<0>), false);
-    assert_eq!(DimRange::contained_by(&(..0), Const::<1>), true);
+    assert!(DimRange::contained_by(&(..0), Const::<1>));
+    assert!(DimRange::contained_by(
+        &(..(usize::MAX - 1)),
+        Dynamic::new(usize::MAX)
+    ));
     assert_eq!(
-        DimRange::contained_by(&(..(MAX - 1)), Dynamic::new(MAX)),
-        true
-    );
-    assert_eq!(
-        DimRange::length(&(..(MAX - 1)), Dynamic::new(MAX)),
-        Dynamic::new(MAX - 1)
+        DimRange::length(&(..(usize::MAX - 1)), Dynamic::new(usize::MAX)),
+        Dynamic::new(usize::MAX - 1)
     );
     assert_eq!(
-        DimRange::length(&(..MAX), Dynamic::new(MAX)),
-        Dynamic::new(MAX)
+        DimRange::length(&(..usize::MAX), Dynamic::new(usize::MAX)),
+        Dynamic::new(usize::MAX)
     );
 }
@@ -296,21 +292,20 @@ impl<D: Dim> DimRange<D> for ops::RangeToInclusive<usize> {
 #[test]
 fn dimrange_rangetoinclusive_usize() {
-    use std::usize::MAX;
     assert_eq!(DimRange::contained_by(&(..=0), Const::<0>), false);
     assert_eq!(DimRange::contained_by(&(..=1), Const::<0>), false);
-    assert_eq!(DimRange::contained_by(&(..=0), Const::<1>), true);
+    assert!(DimRange::contained_by(&(..=0), Const::<1>));
     assert_eq!(
-        DimRange::contained_by(&(..=(MAX)), Dynamic::new(MAX)),
+        DimRange::contained_by(&(..=(usize::MAX)), Dynamic::new(usize::MAX)),
         false
     );
+    assert!(DimRange::contained_by(
+        &(..=(usize::MAX - 1)),
+        Dynamic::new(usize::MAX)
+    ));
     assert_eq!(
-        DimRange::contained_by(&(..=(MAX - 1)), Dynamic::new(MAX)),
-        true
-    );
-    assert_eq!(
-        DimRange::length(&(..=(MAX - 1)), Dynamic::new(MAX)),
-        Dynamic::new(MAX)
+        DimRange::length(&(..=(usize::MAX - 1)), Dynamic::new(usize::MAX)),
+        Dynamic::new(usize::MAX)
     );
 }
@@ -336,7 +336,7 @@ mod rkyv_impl {
     for Matrix<T, R, C, S>
 {
     fn serialize(&self, serializer: &mut _S) -> Result<Self::Resolver, _S::Error> {
-        Ok(self.data.serialize(serializer)?)
+        self.data.serialize(serializer)
     }
 }
@@ -1581,7 +1581,7 @@ impl<T: Scalar + Zero + One, D: DimAdd<U1> + IsNotStaticOne, S: Storage<T, D, D>
         let dim = DimSum::<D, U1>::from_usize(self.nrows() + 1);
         let mut res = OMatrix::identity_generic(dim, dim);
         res.generic_slice_mut::<D, D>((0, 0), self.data.shape())
-            .copy_from(&self);
+            .copy_from(self);
         res
     }
 }
@@ -1819,7 +1819,6 @@ macro_rules! impl_fmt {
     where
         T: Scalar + $trait,
         S: Storage<T, R, C>,
-        DefaultAllocator: Allocator<usize, R, C>,
     {
         fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
             #[cfg(feature = "std")]
@@ -1837,20 +1836,17 @@ macro_rules! impl_fmt {
                 4
             }

-            let (nrows, ncols) = self.data.shape();
+            let (nrows, ncols) = self.shape();

-            if nrows.value() == 0 || ncols.value() == 0 {
+            if nrows == 0 || ncols == 0 {
                 return write!(f, "[ ]");
             }

             let mut max_length = 0;
-            let mut lengths: OMatrix<usize, R, C> = Matrix::zeros_generic(nrows, ncols);
-            let (nrows, ncols) = self.shape();

             for i in 0..nrows {
                 for j in 0..ncols {
-                    lengths[(i, j)] = val_width(&self[(i, j)], f);
-                    max_length = crate::max(max_length, lengths[(i, j)]);
+                    max_length = crate::max(max_length, val_width(&self[(i, j)], f));
                 }
             }
@@ -1867,7 +1863,7 @@ macro_rules! impl_fmt {
             for i in 0..nrows {
                 write!(f, "")?;
                 for j in 0..ncols {
-                    let number_length = lengths[(i, j)] + 1;
+                    let number_length = val_width(&self[(i, j)], f) + 1;
                     let pad = max_length_with_space - number_length;
                     write!(f, " {:>thepad$}", "", thepad = pad)?;
                     match f.precision() {
@@ -1900,6 +1896,15 @@ impl_fmt!(fmt::UpperHex, "{:X}", "{:1$X}");
 impl_fmt!(fmt::Binary, "{:b}", "{:.1$b}");
 impl_fmt!(fmt::Pointer, "{:p}", "{:.1$p}");

+#[cfg(test)]
+mod tests {
+    #[test]
+    fn empty_display() {
+        let vec: Vec<f64> = Vec::new();
+        let dvector = crate::DVector::from_vec(vec);
+        assert_eq!(format!("{}", dvector), "[ ]")
+    }
+
 #[test]
 fn lower_exp() {
     let test = crate::Matrix2::new(1e6, 2e5, 2e-5, 1.);
@@ -1914,6 +1919,7 @@ fn lower_exp() {
         "
     )
 }
+}

 /// # Cross product
 impl<T: Scalar + ClosedAdd + ClosedSub + ClosedMul, R: Dim, C: Dim, S: Storage<T, R, C>>
@@ -77,6 +77,23 @@ macro_rules! slice_storage_impl(
                 $T::from_raw_parts(storage.$get_addr(start.0, start.1), shape, strides)
             }
         }

+        impl <'a, T: Scalar, R: Dim, C: Dim, RStride: Dim, CStride: Dim>
+            $T<'a, T, R, C, RStride, CStride>
+        where
+            Self: ContiguousStorage<T, R, C>
+        {
+            /// Extracts the original slice from this storage
+            pub fn into_slice(self) -> &'a [T] {
+                let (nrows, ncols) = self.shape();
+                if nrows.value() != 0 && ncols.value() != 0 {
+                    let sz = self.linear_index(nrows.value() - 1, ncols.value() - 1);
+                    unsafe { slice::from_raw_parts(self.ptr, sz + 1) }
+                } else {
+                    unsafe { slice::from_raw_parts(self.ptr, 0) }
+                }
+            }
+        }
     }
 );
@@ -108,6 +125,23 @@ impl<'a, T: Scalar, R: Dim, C: Dim, RStride: Dim, CStride: Dim> Clone
     }
 }

+impl<'a, T: Scalar, R: Dim, C: Dim, RStride: Dim, CStride: Dim>
+    SliceStorageMut<'a, T, R, C, RStride, CStride>
+where
+    Self: ContiguousStorageMut<T, R, C>,
+{
+    /// Extracts the original slice from this storage
+    pub fn into_slice_mut(self) -> &'a mut [T] {
+        let (nrows, ncols) = self.shape();
+        if nrows.value() != 0 && ncols.value() != 0 {
+            let sz = self.linear_index(nrows.value() - 1, ncols.value() - 1);
+            unsafe { slice::from_raw_parts_mut(self.ptr, sz + 1) }
+        } else {
+            unsafe { slice::from_raw_parts_mut(self.ptr, 0) }
+        }
+    }
+}
+
 macro_rules! storage_impl(
     ($($T: ident),* $(,)*) => {$(
         unsafe impl<'a, T: Scalar, R: Dim, C: Dim, RStride: Dim, CStride: Dim> Storage<T, R, C>
@@ -132,7 +166,7 @@ macro_rules! storage_impl(
             }

             #[inline]
-            unsafe fn is_contiguous(&self) -> bool {
+            fn is_contiguous(&self) -> bool {
                 // Common cases that can be deduced at compile-time even if one of the dimensions
                 // is Dynamic.
                 if (RStride::is::<U1>() && C::is::<U1>()) || // Column vector.
@@ -58,7 +58,7 @@ pub unsafe trait Storage<T: Scalar, R: Dim, C: Dim = U1>: Debug + Sized {
     /// Compute the index corresponding to the irow-th row and icol-th column of this matrix. The
     /// index must be such that the following holds:
     ///
-    /// ```.ignore
+    /// ```ignore
     /// let lindex = self.linear_index(irow, icol);
     /// assert!(*self.get_unchecked(irow, icol) == *self.get_unchecked_linear(lindex))
     /// ```
@@ -70,24 +70,36 @@ pub unsafe trait Storage<T: Scalar, R: Dim, C: Dim = U1>: Debug + Sized {
     }

     /// Gets the address of the i-th matrix component without performing bound-checking.
+    ///
+    /// # Safety
+    /// If the index is out of bounds, dereferencing the result will cause undefined behavior.
     #[inline]
-    unsafe fn get_address_unchecked_linear(&self, i: usize) -> *const T {
+    fn get_address_unchecked_linear(&self, i: usize) -> *const T {
         self.ptr().wrapping_add(i)
     }

     /// Gets the address of the i-th matrix component without performing bound-checking.
+    ///
+    /// # Safety
+    /// If the index is out of bounds, dereferencing the result will cause undefined behavior.
     #[inline]
-    unsafe fn get_address_unchecked(&self, irow: usize, icol: usize) -> *const T {
+    fn get_address_unchecked(&self, irow: usize, icol: usize) -> *const T {
         self.get_address_unchecked_linear(self.linear_index(irow, icol))
     }

     /// Retrieves a reference to the i-th element without bound-checking.
+    ///
+    /// # Safety
+    /// If the index is out of bounds, the method will cause undefined behavior.
     #[inline]
     unsafe fn get_unchecked_linear(&self, i: usize) -> &T {
         &*self.get_address_unchecked_linear(i)
     }

     /// Retrieves a reference to the i-th element without bound-checking.
+    ///
+    /// # Safety
+    /// If the index is out of bounds, the method will cause undefined behavior.
     #[inline]
     unsafe fn get_unchecked(&self, irow: usize, icol: usize) -> &T {
         self.get_unchecked_linear(self.linear_index(irow, icol))
@@ -95,12 +107,14 @@ pub unsafe trait Storage<T: Scalar, R: Dim, C: Dim = U1>: Debug + Sized {

     /// Indicates whether this data buffer stores its elements contiguously.
     ///
-    /// This method is unsafe because unsafe code relies on this properties to performe
-    /// some low-lever optimizations.
-    unsafe fn is_contiguous(&self) -> bool;
+    /// # Safety
+    /// This function must not return `true` if the underlying storage is not contiguous,
+    /// or undefined behaviour will occur.
+    fn is_contiguous(&self) -> bool;

     /// Retrieves the data buffer as a contiguous slice.
     ///
+    /// # Safety
     /// The matrix components may not be stored in a contiguous way, depending on the strides.
     /// This method is unsafe because this can yield to invalid aliasing when called on some pairs
     /// of matrix slices originating from the same matrix with strides.
@@ -129,30 +143,45 @@ pub unsafe trait StorageMut<T: Scalar, R: Dim, C: Dim = U1>: Storage<T, R, C> {
     fn ptr_mut(&mut self) -> *mut T;

     /// Gets the mutable address of the i-th matrix component without performing bound-checking.
+    ///
+    /// # Safety
+    /// If the index is out of bounds, dereferencing the result will cause undefined behavior.
     #[inline]
-    unsafe fn get_address_unchecked_linear_mut(&mut self, i: usize) -> *mut T {
+    fn get_address_unchecked_linear_mut(&mut self, i: usize) -> *mut T {
         self.ptr_mut().wrapping_add(i)
     }

     /// Gets the mutable address of the i-th matrix component without performing bound-checking.
+    ///
+    /// # Safety
+    /// If the index is out of bounds, dereferencing the result will cause undefined behavior.
     #[inline]
-    unsafe fn get_address_unchecked_mut(&mut self, irow: usize, icol: usize) -> *mut T {
+    fn get_address_unchecked_mut(&mut self, irow: usize, icol: usize) -> *mut T {
         let lid = self.linear_index(irow, icol);
         self.get_address_unchecked_linear_mut(lid)
     }

     /// Retrieves a mutable reference to the i-th element without bound-checking.
+    ///
+    /// # Safety
+    /// If the index is out of bounds, the method will cause undefined behavior.
     unsafe fn get_unchecked_linear_mut(&mut self, i: usize) -> &mut T {
         &mut *self.get_address_unchecked_linear_mut(i)
     }

     /// Retrieves a mutable reference to the element at `(irow, icol)` without bound-checking.
+    ///
+    /// # Safety
+    /// If the index is out of bounds, the method will cause undefined behavior.
     #[inline]
     unsafe fn get_unchecked_mut(&mut self, irow: usize, icol: usize) -> &mut T {
         &mut *self.get_address_unchecked_mut(irow, icol)
     }

     /// Swaps two elements using their linear index without bound-checking.
+    ///
+    /// # Safety
+    /// If the indices are out of bounds, the method will cause undefined behavior.
     #[inline]
     unsafe fn swap_unchecked_linear(&mut self, i1: usize, i2: usize) {
         let a = self.get_address_unchecked_linear_mut(i1);
@@ -162,6 +191,9 @@ pub unsafe trait StorageMut<T: Scalar, R: Dim, C: Dim = U1>: Storage<T, R, C> {
     }

     /// Swaps two elements without bound-checking.
+    ///
+    /// # Safety
+    /// If the indices are out of bounds, the method will cause undefined behavior.
     #[inline]
     unsafe fn swap_unchecked(&mut self, row_col1: (usize, usize), row_col2: (usize, usize)) {
         let lid1 = self.linear_index(row_col1.0, row_col1.1);
@@ -174,6 +206,7 @@ pub unsafe trait StorageMut<T: Scalar, R: Dim, C: Dim = U1>: Storage<T, R, C> {
     ///
     /// Matrix components may not be contiguous, depending on its strides.
     ///
+    /// # Safety
     /// The matrix components may not be stored in a contiguous way, depending on the strides.
     /// This method is unsafe because this can yield to invalid aliasing when called on some pairs
     /// of matrix slices originating from the same matrix with strides.
@@ -95,7 +95,7 @@ mod rkyv_impl {
 impl<T: Serialize<S>, S: Fallible + ?Sized> Serialize<S> for Unit<T> {
     fn serialize(&self, serializer: &mut S) -> Result<Self::Resolver, S::Error> {
-        Ok(self.value.serialize(serializer)?)
+        self.value.serialize(serializer)
     }
 }
@@ -221,7 +221,7 @@ impl<T: Normed> Unit<T> {
 impl<T> Unit<T> {
     /// Wraps the given value, assuming it is already normalized.
     #[inline]
-    pub fn new_unchecked(value: T) -> Self {
+    pub const fn new_unchecked(value: T) -> Self {
         Unit { value }
     }
@@ -102,6 +102,7 @@ impl<T, R: Dim, C: Dim> VecStorage<T, R, C> {
     /// The underlying mutable data storage.
     ///
+    /// # Safety
     /// This is unsafe because this may cause UB if the size of the vector is changed
     /// by the user.
     #[inline]
@@ -111,6 +112,7 @@ impl<T, R: Dim, C: Dim> VecStorage<T, R, C> {
     /// Resizes the underlying mutable data storage and unwraps it.
     ///
+    /// # Safety
     /// If `sz` is larger than the current size, additional elements are uninitialized.
     /// If `sz` is smaller than the current size, additional elements are truncated.
     #[inline]
@@ -178,7 +180,7 @@ where
     }

     #[inline]
-    unsafe fn is_contiguous(&self) -> bool {
+    fn is_contiguous(&self) -> bool {
         true
     }
@@ -227,7 +229,7 @@ where
     }

     #[inline]
-    unsafe fn is_contiguous(&self) -> bool {
+    fn is_contiguous(&self) -> bool {
         true
     }
@@ -38,14 +38,23 @@ use simba::scalar::{ClosedNeg, RealField};
 /// If a feature that you need is missing, feel free to open an issue or a PR.
 /// See https://github.com/dimforge/nalgebra/issues/487
 #[repr(C)]
-#[derive(Debug, Eq, PartialEq, Copy, Clone)]
-pub struct DualQuaternion<T: Scalar> {
+#[derive(Debug, Copy, Clone)]
+pub struct DualQuaternion<T> {
     /// The real component of the quaternion
     pub real: Quaternion<T>,
     /// The dual component of the quaternion
     pub dual: Quaternion<T>,
 }

+impl<T: Scalar + Eq> Eq for DualQuaternion<T> {}
+
+impl<T: Scalar> PartialEq for DualQuaternion<T> {
+    #[inline]
+    fn eq(&self, right: &Self) -> bool {
+        self.real == right.real && self.dual == right.dual
+    }
+}
+
 impl<T: Scalar + Zero> Default for DualQuaternion<T> {
     fn default() -> Self {
         Self {
@@ -291,7 +300,7 @@ where
 }

 impl<T: RealField> DualQuaternion<T> {
-    fn to_vector(&self) -> OVector<T, U8> {
+    fn to_vector(self) -> OVector<T, U8> {
         (*self.as_ref()).into()
     }
 }
@@ -740,7 +749,7 @@ where
     /// ```
     #[inline]
     #[must_use]
-    pub fn to_isometry(&self) -> Isometry3<T> {
+    pub fn to_isometry(self) -> Isometry3<T> {
         Isometry3::from_parts(self.translation(), self.rotation())
     }
@@ -891,7 +900,7 @@ where
     /// ```
     #[inline]
     #[must_use]
-    pub fn to_homogeneous(&self) -> Matrix4<T> {
+    pub fn to_homogeneous(self) -> Matrix4<T> {
         self.to_isometry().to_homogeneous()
     }
 }
@@ -60,15 +60,17 @@ use crate::geometry::{AbstractRotation, Point, Translation};
     feature = "serde-serialize-no-std",
     serde(bound(serialize = "R: Serialize,
                      DefaultAllocator: Allocator<T, Const<D>>,
-                     Owned<T, Const<D>>: Serialize"))
+                     Owned<T, Const<D>>: Serialize,
+                     T: Scalar"))
 )]
 #[cfg_attr(
     feature = "serde-serialize-no-std",
     serde(bound(deserialize = "R: Deserialize<'de>,
                        DefaultAllocator: Allocator<T, Const<D>>,
-                       Owned<T, Const<D>>: Deserialize<'de>"))
+                       Owned<T, Const<D>>: Deserialize<'de>,
+                       T: Scalar"))
 )]
-pub struct Isometry<T: Scalar, R, const D: usize> {
+pub struct Isometry<T, R, const D: usize> {
     /// The pure rotational part of this isometry.
     pub rotation: R,
     /// The pure translational part of this isometry.


@ -86,7 +86,7 @@ where
Standard: Distribution<T> + Distribution<R>, Standard: Distribution<T> + Distribution<R>,
{ {
#[inline] #[inline]
fn sample<'a, G: Rng + ?Sized>(&self, rng: &'a mut G) -> Isometry<T, R, D> { fn sample<G: Rng + ?Sized>(&self, rng: &mut G) -> Isometry<T, R, D> {
Isometry::from_parts(rng.gen(), rng.gen()) Isometry::from_parts(rng.gen(), rng.gen())
} }
} }


@ -19,7 +19,7 @@ use crate::geometry::{Point3, Projective3};
/// A 3D orthographic projection stored as a homogeneous 4x4 matrix. /// A 3D orthographic projection stored as a homogeneous 4x4 matrix.
#[repr(C)] #[repr(C)]
pub struct Orthographic3<T: RealField> { pub struct Orthographic3<T> {
matrix: Matrix4<T>, matrix: Matrix4<T>,
} }
@ -239,7 +239,7 @@ impl<T: RealField> Orthographic3<T> {
/// ``` /// ```
#[inline] #[inline]
#[must_use] #[must_use]
pub fn to_homogeneous(&self) -> Matrix4<T> { pub fn to_homogeneous(self) -> Matrix4<T> {
self.matrix self.matrix
} }
@ -287,7 +287,7 @@ impl<T: RealField> Orthographic3<T> {
/// ``` /// ```
#[inline] #[inline]
#[must_use] #[must_use]
pub fn to_projective(&self) -> Projective3<T> { pub fn to_projective(self) -> Projective3<T> {
Projective3::from_matrix_unchecked(self.matrix) Projective3::from_matrix_unchecked(self.matrix)
} }


@ -14,13 +14,13 @@ use simba::scalar::RealField;
use crate::base::dimension::U3; use crate::base::dimension::U3;
use crate::base::storage::Storage; use crate::base::storage::Storage;
use crate::base::{Matrix4, Scalar, Vector, Vector3}; use crate::base::{Matrix4, Vector, Vector3};
use crate::geometry::{Point3, Projective3}; use crate::geometry::{Point3, Projective3};
/// A 3D perspective projection stored as a homogeneous 4x4 matrix. /// A 3D perspective projection stored as a homogeneous 4x4 matrix.
#[repr(C)] #[repr(C)]
pub struct Perspective3<T: Scalar> { pub struct Perspective3<T> {
matrix: Matrix4<T>, matrix: Matrix4<T>,
} }
@ -141,7 +141,7 @@ impl<T: RealField> Perspective3<T> {
/// Computes the corresponding homogeneous matrix. /// Computes the corresponding homogeneous matrix.
#[inline] #[inline]
#[must_use] #[must_use]
pub fn to_homogeneous(&self) -> Matrix4<T> { pub fn to_homogeneous(self) -> Matrix4<T> {
self.matrix.clone_owned() self.matrix.clone_owned()
} }
@ -162,7 +162,7 @@ impl<T: RealField> Perspective3<T> {
/// This transformation seen as a `Projective3`. /// This transformation seen as a `Projective3`.
#[inline] #[inline]
#[must_use] #[must_use]
pub fn to_projective(&self) -> Projective3<T> { pub fn to_projective(self) -> Projective3<T> {
Projective3::from_matrix_unchecked(self.matrix) Projective3::from_matrix_unchecked(self.matrix)
} }
@ -305,7 +305,7 @@ where
Standard: Distribution<T>, Standard: Distribution<T>,
{ {
/// Generate an arbitrary random variate for testing purposes. /// Generate an arbitrary random variate for testing purposes.
fn sample<'a, R: Rng + ?Sized>(&self, r: &'a mut R) -> Perspective3<T> { fn sample<R: Rng + ?Sized>(&self, r: &mut R) -> Perspective3<T> {
use crate::base::helper; use crate::base::helper;
let znear = r.gen(); let znear = r.gen();
let zfar = helper::reject_rand(r, |&x: &T| !(x - znear).is_zero()); let zfar = helper::reject_rand(r, |&x: &T| !(x - znear).is_zero());


@ -17,7 +17,7 @@ use simba::simd::SimdPartialOrd;
use crate::base::allocator::Allocator; use crate::base::allocator::Allocator;
use crate::base::dimension::{DimName, DimNameAdd, DimNameSum, U1}; use crate::base::dimension::{DimName, DimNameAdd, DimNameSum, U1};
use crate::base::iter::{MatrixIter, MatrixIterMut}; use crate::base::iter::{MatrixIter, MatrixIterMut};
use crate::base::{Const, DefaultAllocator, OVector, SVector, Scalar}; use crate::base::{Const, DefaultAllocator, OVector, Scalar};
/// A point in a Euclidean space. /// A point in a Euclidean space.
/// ///
@ -40,35 +40,53 @@ use crate::base::{Const, DefaultAllocator, OVector, SVector, Scalar};
/// of said transformations for details. /// of said transformations for details.
#[repr(C)] #[repr(C)]
#[derive(Debug, Clone)] #[derive(Debug, Clone)]
pub struct Point<T, const D: usize> { pub struct OPoint<T: Scalar, D: DimName>
where
DefaultAllocator: Allocator<T, D>,
{
/// The coordinates of this point, i.e., the shift from the origin. /// The coordinates of this point, i.e., the shift from the origin.
pub coords: SVector<T, D>, pub coords: OVector<T, D>,
} }
impl<T: Scalar + hash::Hash, const D: usize> hash::Hash for Point<T, D> { impl<T: Scalar + hash::Hash, D: DimName> hash::Hash for OPoint<T, D>
where
DefaultAllocator: Allocator<T, D>,
{
fn hash<H: hash::Hasher>(&self, state: &mut H) { fn hash<H: hash::Hasher>(&self, state: &mut H) {
self.coords.hash(state) self.coords.hash(state)
} }
} }
impl<T: Scalar + Copy, const D: usize> Copy for Point<T, D> {} impl<T: Scalar + Copy, D: DimName> Copy for OPoint<T, D>
where
#[cfg(feature = "bytemuck")] DefaultAllocator: Allocator<T, D>,
unsafe impl<T: Scalar, const D: usize> bytemuck::Zeroable for Point<T, D> where OVector<T, D>: Copy,
SVector<T, D>: bytemuck::Zeroable
{ {
} }
#[cfg(feature = "bytemuck")] #[cfg(feature = "bytemuck")]
unsafe impl<T: Scalar, const D: usize> bytemuck::Pod for Point<T, D> unsafe impl<T: Scalar, D: DimName> bytemuck::Zeroable for OPoint<T, D>
where
OVector<T, D>: bytemuck::Zeroable,
DefaultAllocator: Allocator<T, D>,
{
}
#[cfg(feature = "bytemuck")]
unsafe impl<T: Scalar, D: DimName> bytemuck::Pod for OPoint<T, D>
where where
T: Copy, T: Copy,
SVector<T, D>: bytemuck::Pod, OVector<T, D>: bytemuck::Pod,
DefaultAllocator: Allocator<T, D>,
{ {
} }
#[cfg(feature = "serde-serialize-no-std")] #[cfg(feature = "serde-serialize-no-std")]
impl<T: Scalar + Serialize, const D: usize> Serialize for Point<T, D> { impl<T: Scalar + Serialize, D: DimName> Serialize for OPoint<T, D>
where
DefaultAllocator: Allocator<T, D>,
<DefaultAllocator as Allocator<T, D>>::Buffer: Serialize,
{
fn serialize<S>(&self, serializer: S) -> Result<S::Ok, S::Error> fn serialize<S>(&self, serializer: S) -> Result<S::Ok, S::Error>
where where
S: Serializer, S: Serializer,
@ -78,22 +96,27 @@ impl<T: Scalar + Serialize, const D: usize> Serialize for Point<T, D> {
} }
#[cfg(feature = "serde-serialize-no-std")] #[cfg(feature = "serde-serialize-no-std")]
impl<'a, T: Scalar + Deserialize<'a>, const D: usize> Deserialize<'a> for Point<T, D> { impl<'a, T: Scalar + Deserialize<'a>, D: DimName> Deserialize<'a> for OPoint<T, D>
where
DefaultAllocator: Allocator<T, D>,
<DefaultAllocator as Allocator<T, D>>::Buffer: Deserialize<'a>,
{
fn deserialize<Des>(deserializer: Des) -> Result<Self, Des::Error> fn deserialize<Des>(deserializer: Des) -> Result<Self, Des::Error>
where where
Des: Deserializer<'a>, Des: Deserializer<'a>,
{ {
let coords = SVector::<T, D>::deserialize(deserializer)?; let coords = OVector::<T, D>::deserialize(deserializer)?;
Ok(Self::from(coords)) Ok(Self::from(coords))
} }
} }
#[cfg(feature = "abomonation-serialize")] #[cfg(feature = "abomonation-serialize")]
impl<T, const D: usize> Abomonation for Point<T, D> impl<T, D: DimName> Abomonation for OPoint<T, D>
where where
T: Scalar, T: Scalar,
SVector<T, D>: Abomonation, OVector<T, D>: Abomonation,
DefaultAllocator: Allocator<T, D>,
{ {
unsafe fn entomb<W: Write>(&self, writer: &mut W) -> IOResult<()> { unsafe fn entomb<W: Write>(&self, writer: &mut W) -> IOResult<()> {
self.coords.entomb(writer) self.coords.entomb(writer)
@ -108,7 +131,10 @@ where
} }
} }
impl<T: Scalar, const D: usize> Point<T, D> { impl<T: Scalar, D: DimName> OPoint<T, D>
where
DefaultAllocator: Allocator<T, D>,
{
/// Returns a point containing the result of `f` applied to each of its entries. /// Returns a point containing the result of `f` applied to each of its entries.
/// ///
/// # Example /// # Example
@ -123,7 +149,10 @@ impl<T: Scalar, const D: usize> Point<T, D> {
/// ``` /// ```
#[inline] #[inline]
#[must_use] #[must_use]
pub fn map<T2: Scalar, F: FnMut(T) -> T2>(&self, f: F) -> Point<T2, D> { pub fn map<T2: Scalar, F: FnMut(T) -> T2>(&self, f: F) -> OPoint<T2, D>
where
DefaultAllocator: Allocator<T2, D>,
{
self.coords.map(f).into() self.coords.map(f).into()
} }
@ -163,20 +192,21 @@ impl<T: Scalar, const D: usize> Point<T, D> {
/// ``` /// ```
#[inline] #[inline]
#[must_use] #[must_use]
pub fn to_homogeneous(&self) -> OVector<T, DimNameSum<Const<D>, U1>> pub fn to_homogeneous(&self) -> OVector<T, DimNameSum<D, U1>>
where where
T: One, T: One,
Const<D>: DimNameAdd<U1>, D: DimNameAdd<U1>,
DefaultAllocator: Allocator<T, DimNameSum<Const<D>, U1>>, DefaultAllocator: Allocator<T, DimNameSum<D, U1>>,
{ {
let mut res = unsafe { let mut res = unsafe {
crate::unimplemented_or_uninitialized_generic!( crate::unimplemented_or_uninitialized_generic!(
<DimNameSum<Const<D>, U1> as DimName>::name(), <DimNameSum<D, U1> as DimName>::name(),
Const::<1> Const::<1>
) )
}; };
res.fixed_slice_mut::<D, 1>(0, 0).copy_from(&self.coords); res.generic_slice_mut((0, 0), (D::name(), Const::<1>))
res[(D, 0)] = T::one(); .copy_from(&self.coords);
res[(D::dim(), 0)] = T::one();
res res
} }
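What `to_homogeneous` computes (and what `from_homogeneous`, further down in this diff, inverts) can be sketched with plain arrays instead of nalgebra's generic vectors — the helper names below are hypothetical, not the library's API. A point `(x, y)` maps to `(x, y, 1)`; the inverse divides by the last component and rejects vectors whose last component is zero, matching the `!v[D::dim()].is_zero()` check in the diff.

```rust
// Hypothetical free-function sketch of the point <-> homogeneous
// vector round trip for D = 2.
fn to_homogeneous(p: [f64; 2]) -> [f64; 3] {
    // Copy the coordinates, then set the trailing component to one.
    [p[0], p[1], 1.0]
}

fn from_homogeneous(v: [f64; 3]) -> Option<[f64; 2]> {
    if v[2] != 0.0 {
        // Divide back by the last component.
        Some([v[0] / v[2], v[1] / v[2]])
    } else {
        // A zero last component encodes a direction, not a point.
        None
    }
}

fn main() {
    assert_eq!(to_homogeneous([1.0, 2.0]), [1.0, 2.0, 1.0]);
    assert_eq!(from_homogeneous([2.0, 4.0, 2.0]), Some([1.0, 2.0]));
    assert_eq!(from_homogeneous([1.0, 2.0, 0.0]), None);
}
```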
@ -184,7 +214,7 @@ impl<T: Scalar, const D: usize> Point<T, D> {
/// Creates a new point with the given coordinates. /// Creates a new point with the given coordinates.
#[deprecated(note = "Use Point::from(vector) instead.")] #[deprecated(note = "Use Point::from(vector) instead.")]
#[inline] #[inline]
pub fn from_coordinates(coords: SVector<T, D>) -> Self { pub fn from_coordinates(coords: OVector<T, D>) -> Self {
Self { coords } Self { coords }
} }
@ -243,8 +273,7 @@ impl<T: Scalar, const D: usize> Point<T, D> {
#[inline] #[inline]
pub fn iter( pub fn iter(
&self, &self,
) -> MatrixIter<T, Const<D>, Const<1>, <DefaultAllocator as Allocator<T, Const<D>>>::Buffer> ) -> MatrixIter<T, D, Const<1>, <DefaultAllocator as Allocator<T, D>>::Buffer> {
{
self.coords.iter() self.coords.iter()
} }
@ -270,8 +299,7 @@ impl<T: Scalar, const D: usize> Point<T, D> {
#[inline] #[inline]
pub fn iter_mut( pub fn iter_mut(
&mut self, &mut self,
) -> MatrixIterMut<T, Const<D>, Const<1>, <DefaultAllocator as Allocator<T, Const<D>>>::Buffer> ) -> MatrixIterMut<T, D, Const<1>, <DefaultAllocator as Allocator<T, D>>::Buffer> {
{
self.coords.iter_mut() self.coords.iter_mut()
} }
@ -289,9 +317,10 @@ impl<T: Scalar, const D: usize> Point<T, D> {
} }
} }
impl<T: Scalar + AbsDiffEq, const D: usize> AbsDiffEq for Point<T, D> impl<T: Scalar + AbsDiffEq, D: DimName> AbsDiffEq for OPoint<T, D>
where where
T::Epsilon: Copy, T::Epsilon: Copy,
DefaultAllocator: Allocator<T, D>,
{ {
type Epsilon = T::Epsilon; type Epsilon = T::Epsilon;
@ -306,9 +335,10 @@ where
} }
} }
impl<T: Scalar + RelativeEq, const D: usize> RelativeEq for Point<T, D> impl<T: Scalar + RelativeEq, D: DimName> RelativeEq for OPoint<T, D>
where where
T::Epsilon: Copy, T::Epsilon: Copy,
DefaultAllocator: Allocator<T, D>,
{ {
#[inline] #[inline]
fn default_max_relative() -> Self::Epsilon { fn default_max_relative() -> Self::Epsilon {
@ -327,9 +357,10 @@ where
} }
} }
impl<T: Scalar + UlpsEq, const D: usize> UlpsEq for Point<T, D> impl<T: Scalar + UlpsEq, D: DimName> UlpsEq for OPoint<T, D>
where where
T::Epsilon: Copy, T::Epsilon: Copy,
DefaultAllocator: Allocator<T, D>,
{ {
#[inline] #[inline]
fn default_max_ulps() -> u32 { fn default_max_ulps() -> u32 {
@ -342,16 +373,22 @@ where
} }
} }
impl<T: Scalar + Eq, const D: usize> Eq for Point<T, D> {} impl<T: Scalar + Eq, D: DimName> Eq for OPoint<T, D> where DefaultAllocator: Allocator<T, D> {}
impl<T: Scalar, const D: usize> PartialEq for Point<T, D> { impl<T: Scalar, D: DimName> PartialEq for OPoint<T, D>
where
DefaultAllocator: Allocator<T, D>,
{
#[inline] #[inline]
fn eq(&self, right: &Self) -> bool { fn eq(&self, right: &Self) -> bool {
self.coords == right.coords self.coords == right.coords
} }
} }
impl<T: Scalar + PartialOrd, const D: usize> PartialOrd for Point<T, D> { impl<T: Scalar + PartialOrd, D: DimName> PartialOrd for OPoint<T, D>
where
DefaultAllocator: Allocator<T, D>,
{
#[inline] #[inline]
fn partial_cmp(&self, other: &Self) -> Option<Ordering> { fn partial_cmp(&self, other: &Self) -> Option<Ordering> {
self.coords.partial_cmp(&other.coords) self.coords.partial_cmp(&other.coords)
@ -381,25 +418,28 @@ impl<T: Scalar + PartialOrd, const D: usize> PartialOrd for Point<T, D> {
/* /*
* inf/sup * inf/sup
*/ */
impl<T: Scalar + SimdPartialOrd, const D: usize> Point<T, D> { impl<T: Scalar + SimdPartialOrd, D: DimName> OPoint<T, D>
where
DefaultAllocator: Allocator<T, D>,
{
/// Computes the infimum (aka. componentwise min) of two points. /// Computes the infimum (aka. componentwise min) of two points.
#[inline] #[inline]
#[must_use] #[must_use]
pub fn inf(&self, other: &Self) -> Point<T, D> { pub fn inf(&self, other: &Self) -> OPoint<T, D> {
self.coords.inf(&other.coords).into() self.coords.inf(&other.coords).into()
} }
/// Computes the supremum (aka. componentwise max) of two points. /// Computes the supremum (aka. componentwise max) of two points.
#[inline] #[inline]
#[must_use] #[must_use]
pub fn sup(&self, other: &Self) -> Point<T, D> { pub fn sup(&self, other: &Self) -> OPoint<T, D> {
self.coords.sup(&other.coords).into() self.coords.sup(&other.coords).into()
} }
/// Computes the (infimum, supremum) of two points. /// Computes the (infimum, supremum) of two points.
#[inline] #[inline]
#[must_use] #[must_use]
pub fn inf_sup(&self, other: &Self) -> (Point<T, D>, Point<T, D>) { pub fn inf_sup(&self, other: &Self) -> (OPoint<T, D>, OPoint<T, D>) {
let (inf, sup) = self.coords.inf_sup(&other.coords); let (inf, sup) = self.coords.inf_sup(&other.coords);
(inf.into(), sup.into()) (inf.into(), sup.into())
} }
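The `inf` / `sup` methods above are componentwise min and max; a plain-array sketch (not nalgebra's SIMD-aware implementation) makes the semantics concrete:

```rust
// Componentwise minimum (infimum) of two 2D points.
fn inf(a: [f64; 2], b: [f64; 2]) -> [f64; 2] {
    [a[0].min(b[0]), a[1].min(b[1])]
}

// Componentwise maximum (supremum) of two 2D points.
fn sup(a: [f64; 2], b: [f64; 2]) -> [f64; 2] {
    [a[0].max(b[0]), a[1].max(b[1])]
}

fn main() {
    // Note the result need not equal either input.
    assert_eq!(inf([1.0, 4.0], [2.0, 3.0]), [1.0, 3.0]);
    assert_eq!(sup([1.0, 4.0], [2.0, 3.0]), [2.0, 4.0]);
}
```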
@ -410,7 +450,10 @@ impl<T: Scalar + SimdPartialOrd, const D: usize> Point<T, D> {
* Display * Display
* *
*/ */
impl<T: Scalar + fmt::Display, const D: usize> fmt::Display for Point<T, D> { impl<T: Scalar + fmt::Display, D: DimName> fmt::Display for OPoint<T, D>
where
DefaultAllocator: Allocator<T, D>,
{
fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result { fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
write!(f, "{{")?; write!(f, "{{")?;


@ -1,4 +1,8 @@
use crate::geometry::Point; use crate::geometry::OPoint;
use crate::Const;
/// A point with `D` elements.
pub type Point<T, const D: usize> = OPoint<T, Const<D>>;
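The pattern behind this alias can be sketched in a few lines — the `DimName` trait and `OPoint` body below are simplified stand-ins for nalgebra's, kept only to show how a type-level integer (`Const<3>`) and a const-generic (`Point<T, 3>`) name the same dimension:

```rust
use std::marker::PhantomData;

// Type-level integer: `Const<3>` is a distinct type carrying the number 3.
struct Const<const D: usize>;

// Simplified stand-in for nalgebra's `DimName` trait.
trait DimName {
    const DIM: usize;
    fn dim() -> usize { Self::DIM }
}

impl<const D: usize> DimName for Const<D> {
    const DIM: usize = D;
}

// Simplified stand-in for `OPoint`: the dimension lives in the type
// parameter `D`, so generic code can name dimensions that have no
// const-generic spelling.
struct OPoint<T, D: DimName> {
    coords: Vec<T>, // stand-in for the allocator-chosen buffer
    _dim: PhantomData<D>,
}

// The const-generic form is just an alias over the type-level form,
// exactly as in the diff above.
type Point<T, const D: usize> = OPoint<T, Const<D>>;

fn main() {
    let p: Point<f64, 3> = OPoint { coords: vec![1.0, 2.0, 3.0], _dim: PhantomData };
    assert_eq!(Const::<3>::dim(), 3);
    assert_eq!(p.coords.len(), 3);
}
```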
/// A statically sized 1-dimensional column point. /// A statically sized 1-dimensional column point.
/// ///


@ -10,22 +10,26 @@ use rand::{
use crate::base::allocator::Allocator; use crate::base::allocator::Allocator;
use crate::base::dimension::{DimNameAdd, DimNameSum, U1}; use crate::base::dimension::{DimNameAdd, DimNameSum, U1};
use crate::base::{DefaultAllocator, SVector, Scalar}; use crate::base::{DefaultAllocator, Scalar};
use crate::{ use crate::{
Const, OVector, Point1, Point2, Point3, Point4, Point5, Point6, Vector1, Vector2, Vector3, Const, DimName, OPoint, OVector, Point1, Point2, Point3, Point4, Point5, Point6, Vector1,
Vector4, Vector5, Vector6, Vector2, Vector3, Vector4, Vector5, Vector6,
}; };
use simba::scalar::{ClosedDiv, SupersetOf}; use simba::scalar::{ClosedDiv, SupersetOf};
use crate::geometry::Point; use crate::geometry::Point;
/// # Other construction methods /// # Other construction methods
impl<T: Scalar, const D: usize> Point<T, D> { impl<T: Scalar, D: DimName> OPoint<T, D>
where
DefaultAllocator: Allocator<T, D>,
{
/// Creates a new point with uninitialized coordinates. /// Creates a new point with uninitialized coordinates.
#[inline] #[inline]
pub unsafe fn new_uninitialized() -> Self { pub unsafe fn new_uninitialized() -> Self {
Self::from(crate::unimplemented_or_uninitialized_generic!( Self::from(crate::unimplemented_or_uninitialized_generic!(
Const::<D>, Const::<1> D::name(),
Const::<1>
)) ))
} }
@ -49,7 +53,7 @@ impl<T: Scalar, const D: usize> Point<T, D> {
where where
T: Zero, T: Zero,
{ {
Self::from(SVector::from_element(T::zero())) Self::from(OVector::from_element(T::zero()))
} }
/// Creates a new point from a slice. /// Creates a new point from a slice.
@ -68,7 +72,7 @@ impl<T: Scalar, const D: usize> Point<T, D> {
/// ``` /// ```
#[inline] #[inline]
pub fn from_slice(components: &[T]) -> Self { pub fn from_slice(components: &[T]) -> Self {
Self::from(SVector::from_row_slice(components)) Self::from(OVector::from_row_slice(components))
} }
/// Creates a new point from its homogeneous vector representation. /// Creates a new point from its homogeneous vector representation.
@ -102,14 +106,15 @@ impl<T: Scalar, const D: usize> Point<T, D> {
/// assert_eq!(pt, Some(Point2::new(1.0, 2.0))); /// assert_eq!(pt, Some(Point2::new(1.0, 2.0)));
/// ``` /// ```
#[inline] #[inline]
pub fn from_homogeneous(v: OVector<T, DimNameSum<Const<D>, U1>>) -> Option<Self> pub fn from_homogeneous(v: OVector<T, DimNameSum<D, U1>>) -> Option<Self>
where where
T: Scalar + Zero + One + ClosedDiv, T: Scalar + Zero + One + ClosedDiv,
Const<D>: DimNameAdd<U1>, D: DimNameAdd<U1>,
DefaultAllocator: Allocator<T, DimNameSum<Const<D>, U1>>, DefaultAllocator: Allocator<T, DimNameSum<D, U1>>,
{ {
if !v[D].is_zero() { if !v[D::dim()].is_zero() {
let coords = v.fixed_slice::<D, 1>(0, 0) / v[D].inlined_clone(); let coords =
v.generic_slice((0, 0), (D::name(), Const::<1>)) / v[D::dim()].inlined_clone();
Some(Self::from(coords)) Some(Self::from(coords))
} else { } else {
None None
@ -125,9 +130,10 @@ impl<T: Scalar, const D: usize> Point<T, D> {
/// let pt2 = pt.cast::<f32>(); /// let pt2 = pt.cast::<f32>();
/// assert_eq!(pt2, Point2::new(1.0f32, 2.0)); /// assert_eq!(pt2, Point2::new(1.0f32, 2.0));
/// ``` /// ```
pub fn cast<To: Scalar>(self) -> Point<To, D> pub fn cast<To: Scalar>(self) -> OPoint<To, D>
where where
Point<To, D>: SupersetOf<Self>, OPoint<To, D>: SupersetOf<Self>,
DefaultAllocator: Allocator<To, D>,
{ {
crate::convert(self) crate::convert(self)
} }
@ -138,38 +144,43 @@ impl<T: Scalar, const D: usize> Point<T, D> {
* Traits that build points. * Traits that build points.
* *
*/ */
impl<T: Scalar + Bounded, const D: usize> Bounded for Point<T, D> { impl<T: Scalar + Bounded, D: DimName> Bounded for OPoint<T, D>
where
DefaultAllocator: Allocator<T, D>,
{
#[inline] #[inline]
fn max_value() -> Self { fn max_value() -> Self {
Self::from(SVector::max_value()) Self::from(OVector::max_value())
} }
#[inline] #[inline]
fn min_value() -> Self { fn min_value() -> Self {
Self::from(SVector::min_value()) Self::from(OVector::min_value())
} }
} }
#[cfg(feature = "rand-no-std")] #[cfg(feature = "rand-no-std")]
impl<T: Scalar, const D: usize> Distribution<Point<T, D>> for Standard impl<T: Scalar, D: DimName> Distribution<OPoint<T, D>> for Standard
where where
Standard: Distribution<T>, Standard: Distribution<T>,
DefaultAllocator: Allocator<T, D>,
{ {
/// Generate a `Point` where each coordinate is an independent variate from `[0, 1)`. /// Generate a `Point` where each coordinate is an independent variate from `[0, 1)`.
#[inline] #[inline]
fn sample<'a, G: Rng + ?Sized>(&self, rng: &mut G) -> Point<T, D> { fn sample<'a, G: Rng + ?Sized>(&self, rng: &mut G) -> OPoint<T, D> {
Point::from(rng.gen::<SVector<T, D>>()) OPoint::from(rng.gen::<OVector<T, D>>())
} }
} }
#[cfg(feature = "arbitrary")] #[cfg(feature = "arbitrary")]
impl<T: Scalar + Arbitrary + Send, const D: usize> Arbitrary for Point<T, D> impl<T: Scalar + Arbitrary + Send, D: DimName> Arbitrary for OPoint<T, D>
where where
<DefaultAllocator as Allocator<T, Const<D>>>::Buffer: Send, <DefaultAllocator as Allocator<T, D>>::Buffer: Send,
DefaultAllocator: Allocator<T, D>,
{ {
#[inline] #[inline]
fn arbitrary(g: &mut Gen) -> Self { fn arbitrary(g: &mut Gen) -> Self {
Self::from(SVector::arbitrary(g)) Self::from(OVector::arbitrary(g))
} }
} }
@ -181,7 +192,7 @@ where
// NOTE: the impl for Point1 is not with the others so that we // NOTE: the impl for Point1 is not with the others so that we
// can add a section with the impl block comment. // can add a section with the impl block comment.
/// # Construction from individual components /// # Construction from individual components
impl<T> Point1<T> { impl<T: Scalar> Point1<T> {
/// Initializes this point from its components. /// Initializes this point from its components.
/// ///
/// # Example /// # Example
@ -192,7 +203,7 @@ impl<T> Point1<T> {
/// assert_eq!(p.x, 1.0); /// assert_eq!(p.x, 1.0);
/// ``` /// ```
#[inline] #[inline]
pub const fn new(x: T) -> Self { pub fn new(x: T) -> Self {
Point { Point {
coords: Vector1::new(x), coords: Vector1::new(x),
} }
@ -200,13 +211,13 @@ impl<T> Point1<T> {
} }
macro_rules! componentwise_constructors_impl( macro_rules! componentwise_constructors_impl(
($($doc: expr; $Point: ident, $Vector: ident, $($args: ident:$irow: expr),*);* $(;)*) => {$( ($($doc: expr; $Point: ident, $Vector: ident, $($args: ident:$irow: expr),*);* $(;)*) => {$(
impl<T> $Point<T> { impl<T: Scalar> $Point<T> {
#[doc = "Initializes this point from its components."] #[doc = "Initializes this point from its components."]
#[doc = "# Example\n```"] #[doc = "# Example\n```"]
#[doc = $doc] #[doc = $doc]
#[doc = "```"] #[doc = "```"]
#[inline] #[inline]
pub const fn new($($args: T),*) -> Self { pub fn new($($args: T),*) -> Self {
Point { coords: $Vector::new($($args),*) } Point { coords: $Vector::new($($args),*) }
} }
} }


@ -7,6 +7,7 @@ use crate::base::dimension::{DimNameAdd, DimNameSum, U1};
use crate::base::{Const, DefaultAllocator, Matrix, OVector, Scalar}; use crate::base::{Const, DefaultAllocator, Matrix, OVector, Scalar};
use crate::geometry::Point; use crate::geometry::Point;
use crate::{DimName, OPoint};
/* /*
* This file provides the following conversions: * This file provides the following conversions:
@ -16,67 +17,69 @@ use crate::geometry::Point;
* Point -> Vector (homogeneous) * Point -> Vector (homogeneous)
*/ */
impl<T1, T2, const D: usize> SubsetOf<Point<T2, D>> for Point<T1, D> impl<T1, T2, D: DimName> SubsetOf<OPoint<T2, D>> for OPoint<T1, D>
where where
T1: Scalar, T1: Scalar,
T2: Scalar + SupersetOf<T1>, T2: Scalar + SupersetOf<T1>,
DefaultAllocator: Allocator<T1, D> + Allocator<T2, D>,
{ {
#[inline] #[inline]
fn to_superset(&self) -> Point<T2, D> { fn to_superset(&self) -> OPoint<T2, D> {
Point::from(self.coords.to_superset()) OPoint::from(self.coords.to_superset())
} }
#[inline] #[inline]
fn is_in_subset(m: &Point<T2, D>) -> bool { fn is_in_subset(m: &OPoint<T2, D>) -> bool {
// TODO: is there a way to reuse the `.is_in_subset` from the matrix implementation of // TODO: is there a way to reuse the `.is_in_subset` from the matrix implementation of
// SubsetOf? // SubsetOf?
m.iter().all(|e| e.is_in_subset()) m.iter().all(|e| e.is_in_subset())
} }
#[inline] #[inline]
fn from_superset_unchecked(m: &Point<T2, D>) -> Self { fn from_superset_unchecked(m: &OPoint<T2, D>) -> Self {
Self::from(Matrix::from_superset_unchecked(&m.coords)) Self::from(Matrix::from_superset_unchecked(&m.coords))
} }
} }
impl<T1, T2, const D: usize> SubsetOf<OVector<T2, DimNameSum<Const<D>, U1>>> for Point<T1, D> impl<T1, T2, D> SubsetOf<OVector<T2, DimNameSum<D, U1>>> for OPoint<T1, D>
where where
Const<D>: DimNameAdd<U1>, D: DimNameAdd<U1>,
T1: Scalar, T1: Scalar,
T2: Scalar + Zero + One + ClosedDiv + SupersetOf<T1>, T2: Scalar + Zero + One + ClosedDiv + SupersetOf<T1>,
DefaultAllocator: DefaultAllocator: Allocator<T1, D>
Allocator<T1, DimNameSum<Const<D>, U1>> + Allocator<T2, DimNameSum<Const<D>, U1>>, + Allocator<T2, D>
+ Allocator<T1, DimNameSum<D, U1>>
+ Allocator<T2, DimNameSum<D, U1>>,
// + Allocator<T1, D> // + Allocator<T1, D>
// + Allocator<T2, D>, // + Allocator<T2, D>,
{ {
#[inline] #[inline]
fn to_superset(&self) -> OVector<T2, DimNameSum<Const<D>, U1>> { fn to_superset(&self) -> OVector<T2, DimNameSum<D, U1>> {
let p: Point<T2, D> = self.to_superset(); let p: OPoint<T2, D> = self.to_superset();
p.to_homogeneous() p.to_homogeneous()
} }
#[inline] #[inline]
fn is_in_subset(v: &OVector<T2, DimNameSum<Const<D>, U1>>) -> bool { fn is_in_subset(v: &OVector<T2, DimNameSum<D, U1>>) -> bool {
crate::is_convertible::<_, OVector<T1, DimNameSum<Const<D>, U1>>>(v) && !v[D].is_zero() crate::is_convertible::<_, OVector<T1, DimNameSum<D, U1>>>(v) && !v[D::dim()].is_zero()
} }
#[inline] #[inline]
fn from_superset_unchecked(v: &OVector<T2, DimNameSum<Const<D>, U1>>) -> Self { fn from_superset_unchecked(v: &OVector<T2, DimNameSum<D, U1>>) -> Self {
let coords = v.fixed_slice::<D, 1>(0, 0) / v[D].inlined_clone(); let coords = v.generic_slice((0, 0), (D::name(), Const::<1>)) / v[D::dim()].inlined_clone();
Self { Self {
coords: crate::convert_unchecked(coords), coords: crate::convert_unchecked(coords),
} }
} }
} }
impl<T: Scalar + Zero + One, const D: usize> From<Point<T, D>> impl<T: Scalar + Zero + One, D: DimName> From<OPoint<T, D>> for OVector<T, DimNameSum<D, U1>>
for OVector<T, DimNameSum<Const<D>, U1>>
where where
Const<D>: DimNameAdd<U1>, D: DimNameAdd<U1>,
DefaultAllocator: Allocator<T, DimNameSum<Const<D>, U1>>, DefaultAllocator: Allocator<T, DimNameSum<D, U1>> + Allocator<T, D>,
{ {
#[inline] #[inline]
fn from(t: Point<T, D>) -> Self { fn from(t: OPoint<T, D>) -> Self {
t.to_homogeneous() t.to_homogeneous()
} }
} }
@ -97,10 +100,13 @@ impl<T: Scalar, const D: usize> From<Point<T, D>> for [T; D] {
} }
} }
impl<T: Scalar, const D: usize> From<OVector<T, Const<D>>> for Point<T, D> { impl<T: Scalar, D: DimName> From<OVector<T, D>> for OPoint<T, D>
where
DefaultAllocator: Allocator<T, D>,
{
#[inline] #[inline]
fn from(coords: OVector<T, Const<D>>) -> Self { fn from(coords: OVector<T, D>) -> Self {
Point { coords } OPoint { coords }
} }
} }


@ -1,9 +1,9 @@
use std::ops::{Deref, DerefMut}; use std::ops::{Deref, DerefMut};
use crate::base::coordinates::{X, XY, XYZ, XYZW, XYZWA, XYZWAB}; use crate::base::coordinates::{X, XY, XYZ, XYZW, XYZWA, XYZWAB};
use crate::base::Scalar; use crate::base::{Scalar, U1, U2, U3, U4, U5, U6};
use crate::geometry::Point; use crate::geometry::OPoint;
/* /*
* *
@ -12,8 +12,8 @@ use crate::geometry::Point;
*/ */
macro_rules! deref_impl( macro_rules! deref_impl(
($D: expr, $Target: ident $(, $comps: ident)*) => { ($D: ty, $Target: ident $(, $comps: ident)*) => {
impl<T: Scalar> Deref for Point<T, $D> impl<T: Scalar> Deref for OPoint<T, $D>
{ {
type Target = $Target<T>; type Target = $Target<T>;
@ -23,7 +23,7 @@ macro_rules! deref_impl(
} }
} }
impl<T: Scalar> DerefMut for Point<T, $D> impl<T: Scalar> DerefMut for OPoint<T, $D>
{ {
#[inline] #[inline]
fn deref_mut(&mut self) -> &mut Self::Target { fn deref_mut(&mut self) -> &mut Self::Target {
@ -33,9 +33,9 @@ macro_rules! deref_impl(
} }
); );
deref_impl!(1, X, x); deref_impl!(U1, X, x);
deref_impl!(2, XY, x, y); deref_impl!(U2, XY, x, y);
deref_impl!(3, XYZ, x, y, z); deref_impl!(U3, XYZ, x, y, z);
deref_impl!(4, XYZW, x, y, z, w); deref_impl!(U4, XYZW, x, y, z, w);
deref_impl!(5, XYZWA, x, y, z, w, a); deref_impl!(U5, XYZWA, x, y, z, w, a);
deref_impl!(6, XYZWAB, x, y, z, w, a, b); deref_impl!(U6, XYZWAB, x, y, z, w, a, b);
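The `deref_impl!` invocations above generate `Deref`/`DerefMut` to the coordinate structs `X`, `XY`, ..., which is what makes `point.x` / `point.y` field access work. A self-contained sketch of the same trick (simplified types, not nalgebra's; the pointer cast is sound only because both sides are `#[repr(C)]` with identical layout):

```rust
use std::ops::{Deref, DerefMut};

// Named-field view of two coordinates, layout-compatible with [T; 2].
#[repr(C)]
struct XY<T> { x: T, y: T }

#[repr(C)]
struct Point2<T> { coords: [T; 2] }

impl<T> Deref for Point2<T> {
    type Target = XY<T>;
    fn deref(&self) -> &XY<T> {
        // Sound only because both types are #[repr(C)] with matching layout.
        unsafe { &*(self.coords.as_ptr() as *const XY<T>) }
    }
}

impl<T> DerefMut for Point2<T> {
    fn deref_mut(&mut self) -> &mut XY<T> {
        unsafe { &mut *(self.coords.as_mut_ptr() as *mut XY<T>) }
    }
}

fn main() {
    let mut p = Point2 { coords: [1.0, 2.0] };
    assert_eq!(p.x, 1.0);   // read through Deref
    p.y = 5.0;              // write through DerefMut
    assert_eq!(p.coords[1], 5.0);
}
```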


@ -8,18 +8,23 @@ use simba::scalar::{ClosedAdd, ClosedDiv, ClosedMul, ClosedNeg, ClosedSub};
use crate::base::constraint::{ use crate::base::constraint::{
AreMultipliable, SameNumberOfColumns, SameNumberOfRows, ShapeConstraint, AreMultipliable, SameNumberOfColumns, SameNumberOfRows, ShapeConstraint,
}; };
use crate::base::dimension::{Dim, U1}; use crate::base::dimension::{Dim, DimName, U1};
use crate::base::storage::Storage; use crate::base::storage::Storage;
use crate::base::{Const, Matrix, SVector, Scalar, Vector}; use crate::base::{Const, Matrix, OVector, Scalar, Vector};
use crate::geometry::Point; use crate::allocator::Allocator;
use crate::geometry::{OPoint, Point};
use crate::DefaultAllocator;
/* /*
* *
* Indexing. * Indexing.
* *
*/ */
impl<T: Scalar, const D: usize> Index<usize> for Point<T, D> { impl<T: Scalar, D: DimName> Index<usize> for OPoint<T, D>
where
DefaultAllocator: Allocator<T, D>,
{
type Output = T; type Output = T;
#[inline] #[inline]
@ -28,7 +33,10 @@ impl<T: Scalar, const D: usize> Index<usize> for Point<T, D> {
} }
} }
impl<T: Scalar, const D: usize> IndexMut<usize> for Point<T, D> { impl<T: Scalar, D: DimName> IndexMut<usize> for OPoint<T, D>
where
DefaultAllocator: Allocator<T, D>,
{
#[inline] #[inline]
fn index_mut(&mut self, i: usize) -> &mut Self::Output { fn index_mut(&mut self, i: usize) -> &mut Self::Output {
&mut self.coords[i] &mut self.coords[i]
@ -40,7 +48,10 @@ impl<T: Scalar, const D: usize> IndexMut<usize> for Point<T, D> {
* Neg. * Neg.
* *
*/ */
impl<T: Scalar + ClosedNeg, const D: usize> Neg for Point<T, D> { impl<T: Scalar + ClosedNeg, D: DimName> Neg for OPoint<T, D>
where
DefaultAllocator: Allocator<T, D>,
{
type Output = Self; type Output = Self;
#[inline] #[inline]
@ -49,8 +60,11 @@ impl<T: Scalar + ClosedNeg, const D: usize> Neg for Point<T, D> {
} }
} }
impl<'a, T: Scalar + ClosedNeg, const D: usize> Neg for &'a Point<T, D> { impl<'a, T: Scalar + ClosedNeg, D: DimName> Neg for &'a OPoint<T, D>
type Output = Point<T, D>; where
DefaultAllocator: Allocator<T, D>,
{
type Output = OPoint<T, D>;
#[inline] #[inline]
fn neg(self) -> Self::Output { fn neg(self) -> Self::Output {
@ -66,102 +80,103 @@ impl<'a, T: Scalar + ClosedNeg, const D: usize> Neg for &'a Point<T, D> {
// Point - Point // Point - Point
add_sub_impl!(Sub, sub, ClosedSub; add_sub_impl!(Sub, sub, ClosedSub;
(Const<D>, U1), (Const<D>, U1) -> (Const<D>, U1) (D, U1), (D, U1) -> (D, U1)
const D; for; where; const; for D; where D: DimName, DefaultAllocator: Allocator<T, D>;
-self: &'a Point<T, D>, right: &'b Point<T, D>, Output = SVector<T, D>;
+self: &'a OPoint<T, D>, right: &'b OPoint<T, D>, Output = OVector<T, D>;
&self.coords - &right.coords; 'a, 'b);

add_sub_impl!(Sub, sub, ClosedSub;
-(Const<D>, U1), (Const<D>, U1) -> (Const<D>, U1)
-const D; for; where;
+(D, U1), (D, U1) -> (D, U1)
+const; for D; where D: DimName, DefaultAllocator: Allocator<T, D>;
-self: &'a Point<T, D>, right: Point<T, D>, Output = SVector<T, D>;
+self: &'a OPoint<T, D>, right: OPoint<T, D>, Output = OVector<T, D>;
&self.coords - right.coords; 'a);

add_sub_impl!(Sub, sub, ClosedSub;
-(Const<D>, U1), (Const<D>, U1) -> (Const<D>, U1)
-const D; for; where;
+(D, U1), (D, U1) -> (D, U1)
+const; for D; where D: DimName, DefaultAllocator: Allocator<T, D>;
-self: Point<T, D>, right: &'b Point<T, D>, Output = SVector<T, D>;
+self: OPoint<T, D>, right: &'b OPoint<T, D>, Output = OVector<T, D>;
self.coords - &right.coords; 'b);

add_sub_impl!(Sub, sub, ClosedSub;
-(Const<D>, U1), (Const<D>, U1) -> (Const<D>, U1)
-const D; for; where;
+(D, U1), (D, U1) -> (D, U1)
+const; for D; where D: DimName, DefaultAllocator: Allocator<T, D>;
-self: Point<T, D>, right: Point<T, D>, Output = SVector<T, D>;
+self: OPoint<T, D>, right: OPoint<T, D>, Output = OVector<T, D>;
self.coords - right.coords; );

// Point - Vector
add_sub_impl!(Sub, sub, ClosedSub;
-(Const<D1>, U1), (D2, U1) -> (Const<D1>, U1)
-const D1;
-for D2, SB;
-where D2: Dim, SB: Storage<T, D2>;
-self: &'a Point<T, D1>, right: &'b Vector<T, D2, SB>, Output = Point<T, D1>;
+(D1, U1), (D2, U1) -> (D1, U1)
+const;
+for D1, D2, SB;
+where D1: DimName, D2: Dim, SB: Storage<T, D2>, DefaultAllocator: Allocator<T, D1>;
+self: &'a OPoint<T, D1>, right: &'b Vector<T, D2, SB>, Output = OPoint<T, D1>;
Self::Output::from(&self.coords - right); 'a, 'b);

add_sub_impl!(Sub, sub, ClosedSub;
-(Const<D1>, U1), (D2, U1) -> (Const<D1>, U1)
-const D1;
-for D2, SB;
-where D2: Dim, SB: Storage<T, D2>;
-self: &'a Point<T, D1>, right: Vector<T, D2, SB>, Output = Point<T, D1>;
+(D1, U1), (D2, U1) -> (D1, U1)
+const;
+for D1, D2, SB;
+where D1: DimName, D2: Dim, SB: Storage<T, D2>, DefaultAllocator: Allocator<T, D1>;
+self: &'a OPoint<T, D1>, right: Vector<T, D2, SB>, Output = OPoint<T, D1>;
Self::Output::from(&self.coords - &right); 'a); // TODO: should not be a ref to `right`.

add_sub_impl!(Sub, sub, ClosedSub;
-(Const<D1>, U1), (D2, U1) -> (Const<D1>, U1)
-const D1;
-for D2, SB;
-where D2: Dim, SB: Storage<T, D2>;
-self: Point<T, D1>, right: &'b Vector<T, D2, SB>, Output = Point<T, D1>;
+(D1, U1), (D2, U1) -> (D1, U1)
+const;
+for D1, D2, SB;
+where D1: DimName, D2: Dim, SB: Storage<T, D2>, DefaultAllocator: Allocator<T, D1>;
+self: OPoint<T, D1>, right: &'b Vector<T, D2, SB>, Output = OPoint<T, D1>;
Self::Output::from(self.coords - right); 'b);

add_sub_impl!(Sub, sub, ClosedSub;
-(Const<D1>, U1), (D2, U1) -> (Const<D1>, U1)
-const D1;
-for D2, SB;
-where D2: Dim, SB: Storage<T, D2>;
-self: Point<T, D1>, right: Vector<T, D2, SB>, Output = Point<T, D1>;
+(D1, U1), (D2, U1) -> (D1, U1)
+const;
+for D1, D2, SB;
+where D1: DimName, D2: Dim, SB: Storage<T, D2>, DefaultAllocator: Allocator<T, D1>;
+self: OPoint<T, D1>, right: Vector<T, D2, SB>, Output = OPoint<T, D1>;
Self::Output::from(self.coords - right); );

// Point + Vector
add_sub_impl!(Add, add, ClosedAdd;
-(Const<D1>, U1), (D2, U1) -> (Const<D1>, U1)
-const D1;
-for D2, SB;
-where D2: Dim, SB: Storage<T, D2>;
-self: &'a Point<T, D1>, right: &'b Vector<T, D2, SB>, Output = Point<T, D1>;
+(D1, U1), (D2, U1) -> (D1, U1)
+const;
+for D1, D2, SB;
+where D1: DimName, D2: Dim, SB: Storage<T, D2>, DefaultAllocator: Allocator<T, D1>;
+self: &'a OPoint<T, D1>, right: &'b Vector<T, D2, SB>, Output = OPoint<T, D1>;
Self::Output::from(&self.coords + right); 'a, 'b);

add_sub_impl!(Add, add, ClosedAdd;
-(Const<D1>, U1), (D2, U1) -> (Const<D1>, U1)
-const D1;
-for D2, SB;
-where D2: Dim, SB: Storage<T, D2>;
-self: &'a Point<T, D1>, right: Vector<T, D2, SB>, Output = Point<T, D1>;
+(D1, U1), (D2, U1) -> (D1, U1)
+const;
+for D1, D2, SB;
+where D1: DimName, D2: Dim, SB: Storage<T, D2>, DefaultAllocator: Allocator<T, D1>;
+self: &'a OPoint<T, D1>, right: Vector<T, D2, SB>, Output = OPoint<T, D1>;
Self::Output::from(&self.coords + &right); 'a); // TODO: should not be a ref to `right`.

add_sub_impl!(Add, add, ClosedAdd;
-(Const<D1>, U1), (D2, U1) -> (Const<D1>, U1)
-const D1;
-for D2, SB;
-where D2: Dim, SB: Storage<T, D2>;
-self: Point<T, D1>, right: &'b Vector<T, D2, SB>, Output = Point<T, D1>;
+(D1, U1), (D2, U1) -> (D1, U1)
+const;
+for D1, D2, SB;
+where D1: DimName, D2: Dim, SB: Storage<T, D2>, DefaultAllocator: Allocator<T, D1>;
+self: OPoint<T, D1>, right: &'b Vector<T, D2, SB>, Output = OPoint<T, D1>;
Self::Output::from(self.coords + right); 'b);

add_sub_impl!(Add, add, ClosedAdd;
-(Const<D1>, U1), (D2, U1) -> (Const<D1>, U1)
-const D1;
-for D2, SB;
-where D2: Dim, SB: Storage<T, D2>;
-self: Point<T, D1>, right: Vector<T, D2, SB>, Output = Point<T, D1>;
+(D1, U1), (D2, U1) -> (D1, U1)
+const;
+for D1, D2, SB;
+where D1: DimName, D2: Dim, SB: Storage<T, D2>, DefaultAllocator: Allocator<T, D1>;
+self: OPoint<T, D1>, right: Vector<T, D2, SB>, Output = OPoint<T, D1>;
Self::Output::from(self.coords + right); );
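The hunk above replaces the const-generic `Point<T, const D: usize>` impls with `OPoint<T, D: DimName>` ones, where the dimension is a type-level integer. A minimal, self-contained sketch of the idea (not nalgebra's actual code — `DimName`, `U2`, and the `Vec`-backed storage here are stand-ins) shows how a type-level dimension trait constrains a point type and drives `point - point -> vector`:

```rust
use std::marker::PhantomData;
use std::ops::Sub;

// Stand-in for nalgebra's `DimName`: a type-level dimension.
trait DimName {
    const DIM: usize;
}

struct U2;
impl DimName for U2 {
    const DIM: usize = 2;
}

// Hypothetical stand-in for `OPoint<T, D>`; nalgebra picks the storage via
// `DefaultAllocator: Allocator<T, D>`, a plain `Vec` is used here instead.
struct OPoint<T, D: DimName> {
    coords: Vec<T>,
    _dim: PhantomData<D>,
}

impl<T: Copy + Sub<Output = T>, D: DimName> OPoint<T, D> {
    fn new(coords: Vec<T>) -> Self {
        // The type-level dimension fixes the expected length.
        assert_eq!(coords.len(), D::DIM);
        Self { coords, _dim: PhantomData }
    }

    // Mirrors the `Point - Point -> Vector` impls generated above.
    fn sub(&self, rhs: &Self) -> Vec<T> {
        self.coords.iter().zip(&rhs.coords).map(|(a, b)| *a - *b).collect()
    }
}

fn main() {
    let a = OPoint::<f64, U2>::new(vec![3.0, 5.0]);
    let b = OPoint::<f64, U2>::new(vec![1.0, 2.0]);
    assert_eq!(a.sub(&b), vec![2.0, 3.0]);
}
```

As the changelog notes, `Point<T, const D: usize>` stays available as an alias, so existing call sites are unaffected; only generic code that needs a dimension inexpressible as a const generic has to name `OPoint` directly.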
// TODO: replace by the shared macro: add_sub_assign_impl?
macro_rules! op_assign_impl(
($($TraitAssign: ident, $method_assign: ident, $bound: ident);* $(;)*) => {$(
-impl<'b, T, D2: Dim, SB, const D1: usize> $TraitAssign<&'b Vector<T, D2, SB>> for Point<T, D1>
+impl<'b, T, D1: DimName, D2: Dim, SB> $TraitAssign<&'b Vector<T, D2, SB>> for OPoint<T, D1>
where T: Scalar + $bound,
SB: Storage<T, D2>,
-ShapeConstraint: SameNumberOfRows<Const<D1>, D2> {
+ShapeConstraint: SameNumberOfRows<D1, D2>,
+DefaultAllocator: Allocator<T, D1> {
#[inline]
fn $method_assign(&mut self, right: &'b Vector<T, D2, SB>) {

@@ -169,10 +184,11 @@ macro_rules! op_assign_impl(
}
}

-impl<T, D2: Dim, SB, const D1: usize> $TraitAssign<Vector<T, D2, SB>> for Point<T, D1>
+impl<T, D1: DimName, D2: Dim, SB> $TraitAssign<Vector<T, D2, SB>> for OPoint<T, D1>
where T: Scalar + $bound,
SB: Storage<T, D2>,
-ShapeConstraint: SameNumberOfRows<Const<D1>, D2> {
+ShapeConstraint: SameNumberOfRows<D1, D2>,
+DefaultAllocator: Allocator<T, D1> {
#[inline]
fn $method_assign(&mut self, right: Vector<T, D2, SB>) {
@@ -214,28 +230,30 @@ md_impl_all!(

macro_rules! componentwise_scalarop_impl(
($Trait: ident, $method: ident, $bound: ident;
$TraitAssign: ident, $method_assign: ident) => {
-impl<T: Scalar + $bound, const D: usize> $Trait<T> for Point<T, D>
+impl<T: Scalar + $bound, D: DimName> $Trait<T> for OPoint<T, D>
+where DefaultAllocator: Allocator<T, D>
{
-type Output = Point<T, D>;
+type Output = OPoint<T, D>;
#[inline]
fn $method(self, right: T) -> Self::Output {
-Point::from(self.coords.$method(right))
+OPoint::from(self.coords.$method(right))
}
}

-impl<'a, T: Scalar + $bound, const D: usize> $Trait<T> for &'a Point<T, D>
+impl<'a, T: Scalar + $bound, D: DimName> $Trait<T> for &'a OPoint<T, D>
+where DefaultAllocator: Allocator<T, D>
{
-type Output = Point<T, D>;
+type Output = OPoint<T, D>;
#[inline]
fn $method(self, right: T) -> Self::Output {
-Point::from((&self.coords).$method(right))
+OPoint::from((&self.coords).$method(right))
}
}

-impl<T: Scalar + $bound, const D: usize> $TraitAssign<T> for Point<T, D>
+impl<T: Scalar + $bound, D: DimName> $TraitAssign<T> for OPoint<T, D>
-/* where DefaultAllocator: Allocator<T, D> */
+where DefaultAllocator: Allocator<T, D>
{
#[inline]
fn $method_assign(&mut self, right: T) {
@@ -250,23 +268,25 @@ componentwise_scalarop_impl!(Div, div, ClosedDiv; DivAssign, div_assign);

macro_rules! left_scalar_mul_impl(
($($T: ty),* $(,)*) => {$(
-impl<const D: usize> Mul<Point<$T, D>> for $T
+impl<D: DimName> Mul<OPoint<$T, D>> for $T
+where DefaultAllocator: Allocator<$T, D>
{
-type Output = Point<$T, D>;
+type Output = OPoint<$T, D>;
#[inline]
-fn mul(self, right: Point<$T, D>) -> Self::Output {
-Point::from(self * right.coords)
+fn mul(self, right: OPoint<$T, D>) -> Self::Output {
+OPoint::from(self * right.coords)
}
}

-impl<'b, const D: usize> Mul<&'b Point<$T, D>> for $T
+impl<'b, D: DimName> Mul<&'b OPoint<$T, D>> for $T
+where DefaultAllocator: Allocator<$T, D>
{
-type Output = Point<$T, D>;
+type Output = OPoint<$T, D>;
#[inline]
-fn mul(self, right: &'b Point<$T, D>) -> Self::Output {
-Point::from(self * &right.coords)
+fn mul(self, right: &'b OPoint<$T, D>) -> Self::Output {
+OPoint::from(self * &right.coords)
}
}
)*}
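`left_scalar_mul_impl!` generates a `scalar * point` impl for each listed primitive type, complementing the `point * scalar` impls above. A hedged, std-only illustration of that pattern with a hypothetical `Pt2` type (not nalgebra's actual type or macro output):

```rust
use std::ops::Mul;

#[derive(Debug, PartialEq, Clone, Copy)]
struct Pt2 {
    x: f32,
    y: f32,
}

// Rust's orphan rules allow this because `Pt2` is local: the macro in the
// diff stamps out one such impl per scalar type (`f32`, `f64`, integers, ...),
// so `2.0 * point` works as well as `point * 2.0`.
impl Mul<Pt2> for f32 {
    type Output = Pt2;
    fn mul(self, rhs: Pt2) -> Pt2 {
        Pt2 { x: self * rhs.x, y: self * rhs.y }
    }
}

fn main() {
    let p = Pt2 { x: 1.0, y: -2.0 };
    assert_eq!(2.0f32 * p, Pt2 { x: 2.0, y: -4.0 });
}
```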


@@ -139,7 +139,7 @@ mod rkyv_impl {
impl<T: Serialize<S>, S: Fallible + ?Sized> Serialize<S> for Quaternion<T> {
fn serialize(&self, serializer: &mut S) -> Result<Self::Resolver, S::Error> {
-Ok(self.coords.serialize(serializer)?)
+self.coords.serialize(serializer)
}
}
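The `Ok(expr?)` to `expr` change here (and in the identical `Translation` hunk below) is clippy's `needless_question_mark`: unwrapping a `Result` with `?` only to rewrap it in `Ok` is a no-op. A small sketch with a hypothetical parser function:

```rust
use std::num::ParseIntError;

// Instead of `Ok(s.trim().parse::<i32>()?)` — the `?` plus `Ok(...)` pair
// adds nothing — the inner Result is returned directly.
fn parse_trimmed(s: &str) -> Result<i32, ParseIntError> {
    s.trim().parse::<i32>()
}

fn main() {
    assert_eq!(parse_trimmed(" 42 "), Ok(42));
    assert!(parse_trimmed("not a number").is_err());
}
```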
@@ -1478,7 +1478,7 @@ where
/// ```
#[inline]
#[must_use]
-pub fn to_rotation_matrix(&self) -> Rotation<T, 3> {
+pub fn to_rotation_matrix(self) -> Rotation<T, 3> {
let i = self.as_ref()[0];
let j = self.as_ref()[1];
let k = self.as_ref()[2];

@@ -1513,7 +1513,7 @@ where
/// The angles are produced in the form (roll, pitch, yaw).
#[inline]
#[deprecated(note = "This is renamed to use `.euler_angles()`.")]
-pub fn to_euler_angles(&self) -> (T, T, T)
+pub fn to_euler_angles(self) -> (T, T, T)
where
T: RealField,
{

@@ -1561,7 +1561,7 @@ where
/// ```
#[inline]
#[must_use]
-pub fn to_homogeneous(&self) -> Matrix4<T> {
+pub fn to_homogeneous(self) -> Matrix4<T> {
self.to_rotation_matrix().to_homogeneous()
}
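These `&self` to `self` receiver changes are the clippy-driven signature changes mentioned in the 0.28.0 changelog: for small `Copy` types, a by-value receiver is at least as cheap as a reference and is the conventional form for `to_*` conversions. A minimal sketch with a hypothetical `Angle` type (not nalgebra code):

```rust
#[derive(Clone, Copy)]
struct Angle(f64);

impl Angle {
    // By-value receiver: because `Angle` is `Copy`, the call copies the
    // receiver instead of moving it, so callers are unaffected in practice.
    fn to_degrees_value(self) -> f64 {
        self.0.to_degrees()
    }
}

fn main() {
    let a = Angle(std::f64::consts::PI);
    assert!((a.to_degrees_value() - 180.0).abs() < 1e-9);
    // `a` is still usable after the by-value call — no move occurred.
    let _again = a.to_degrees_value();
}
```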


@@ -171,7 +171,7 @@ where
Standard: Distribution<T>,
{
#[inline]
-fn sample<'a, R: Rng + ?Sized>(&self, rng: &'a mut R) -> Quaternion<T> {
+fn sample<R: Rng + ?Sized>(&self, rng: &mut R) -> Quaternion<T> {
Quaternion::new(rng.gen(), rng.gen(), rng.gen(), rng.gen())
}
}

@@ -535,10 +535,10 @@ where
SC: Storage<T, U3>,
{
// TODO: code duplication with Rotation.
-let c = na.cross(&nb);
+let c = na.cross(nb);
if let Some(axis) = Unit::try_new(c, T::default_epsilon()) {
-let cos = na.dot(&nb);
+let cos = na.dot(nb);
// The cosine may be out of [-1, 1] because of inaccuracies.
if cos <= -T::one() {

@@ -548,7 +548,7 @@ where
} else {
Some(Self::from_axis_angle(&axis, cos.acos() * s))
}
-} else if na.dot(&nb) < T::zero() {
+} else if na.dot(nb) < T::zero() {
// PI
//
// The rotation axis is undefined but the angle not zero. This is not a

@@ -860,7 +860,7 @@ where
{
/// Generate a uniformly distributed random rotation quaternion.
#[inline]
-fn sample<'a, R: Rng + ?Sized>(&self, rng: &'a mut R) -> UnitQuaternion<T> {
+fn sample<R: Rng + ?Sized>(&self, rng: &mut R) -> UnitQuaternion<T> {
// Ken Shoemake's Subgroup Algorithm
// Uniform random rotations.
// In D. Kirk, editor, Graphics Gems III, pages 124-132. Academic, New York, 1992.
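The `fn sample<'a, R>(&self, rng: &'a mut R)` to `fn sample<R>(&self, rng: &mut R)` rewrites across these hunks are clippy's `needless_lifetimes`: when a named lifetime appears on a single input and is never tied to the output, elision infers it. A std-only sketch of the equivalence (hypothetical `Counter` type, not rand's `Distribution` trait):

```rust
struct Counter(u32);

// Explicit form: `fn bump<'a>(c: &'a mut Counter) -> u32`.
// Elided form, exactly equivalent, and what the diff standardizes on:
fn bump(c: &mut Counter) -> u32 {
    c.0 += 1;
    c.0
}

fn main() {
    let mut c = Counter(0);
    assert_eq!(bump(&mut c), 1);
    assert_eq!(bump(&mut c), 2);
}
```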


@@ -1,5 +1,5 @@
use crate::base::constraint::{AreMultipliable, DimEq, SameNumberOfRows, ShapeConstraint};
-use crate::base::{Const, Matrix, Scalar, Unit, Vector};
+use crate::base::{Const, Matrix, Unit, Vector};
use crate::dimension::{Dim, U1};
use crate::storage::{Storage, StorageMut};
use simba::scalar::ComplexField;

@@ -7,7 +7,7 @@ use simba::scalar::ComplexField;
use crate::geometry::Point;

/// A reflection wrt. a plane.
-pub struct Reflection<T: Scalar, D: Dim, S: Storage<T, D>> {
+pub struct Reflection<T, D, S> {
axis: Vector<T, D, S>,
bias: T,
}

@@ -90,7 +90,7 @@ impl<T: ComplexField, D: Dim, S: Storage<T, D>> Reflection<T, D, S> {
}

let m_two: T = crate::convert(-2.0f64);
-lhs.gerc(m_two, &work, &self.axis, T::one());
+lhs.gerc(m_two, work, &self.axis, T::one());
}

/// Applies the reflection to the rows of `lhs`.

@@ -111,6 +111,6 @@ impl<T: ComplexField, D: Dim, S: Storage<T, D>> Reflection<T, D, S> {
}

let m_two = sign.scale(crate::convert(-2.0f64));
-lhs.gerc(m_two, &work, &self.axis, sign);
+lhs.gerc(m_two, work, &self.axis, sign);
}
}
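The `&work` to `work` changes here (and `&rhs` to `rhs`, `&nb` to `nb` elsewhere in this commit) are clippy's `needless_borrow`: taking a reference to something that is already a reference, or that deref-coerces anyway, is redundant. A std-only sketch (hypothetical `dot` helper, not nalgebra's `gerc`):

```rust
// Takes slices; callers can pass `&Vec<f64>` or `&[f64; N]` thanks to
// deref/unsize coercion — an extra `&` on an already-reference adds nothing.
fn dot(a: &[f64], b: &[f64]) -> f64 {
    a.iter().zip(b).map(|(x, y)| x * y).sum()
}

fn main() {
    let a = vec![1.0, 2.0];
    let b = [3.0, 4.0];
    // `dot(&a, &b)` — no second `&` in front of an argument that is
    // already a reference, which is the form the diff standardizes on.
    assert_eq!(dot(&a, &b), 11.0);
}
```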


@@ -55,7 +55,7 @@ use crate::geometry::Point;
///
#[repr(C)]
#[derive(Debug)]
-pub struct Rotation<T: Scalar, const D: usize> {
+pub struct Rotation<T, const D: usize> {
matrix: SMatrix<T, D, D>,
}

@@ -215,9 +215,9 @@ impl<T: Scalar, const D: usize> Rotation<T, D> {
/// A mutable reference to the underlying matrix representation of this rotation.
///
-/// This is suffixed by "_unchecked" because this allows the user to replace the matrix by another one that is
-/// non-square, non-inversible, or non-orthonormal. If one of those properties is broken,
-/// subsequent method calls may be UB.
+/// This is suffixed by "_unchecked" because this allows the user to replace the
+/// matrix by another one that is non-inversible or non-orthonormal. If one of
+/// those properties is broken, subsequent method calls may return bogus results.
#[inline]
pub fn matrix_mut_unchecked(&mut self) -> &mut SMatrix<T, D, D> {
&mut self.matrix


@@ -274,7 +274,7 @@ where
{
/// Generate a uniformly distributed random rotation.
#[inline]
-fn sample<'a, R: Rng + ?Sized>(&self, rng: &'a mut R) -> Rotation2<T> {
+fn sample<R: Rng + ?Sized>(&self, rng: &mut R) -> Rotation2<T> {
let twopi = Uniform::new(T::zero(), T::simd_two_pi());
Rotation2::new(rng.sample(twopi))
}

@@ -883,7 +883,7 @@ impl<T: SimdRealField> Rotation3<T> {
///
/// The angles are produced in the form (roll, pitch, yaw).
#[deprecated(note = "This is renamed to use `.euler_angles()`.")]
-pub fn to_euler_angles(&self) -> (T, T, T)
+pub fn to_euler_angles(self) -> (T, T, T)
where
T: RealField,
{


@@ -27,19 +27,19 @@ use crate::geometry::{AbstractRotation, Isometry, Point, Translation};
#[cfg_attr(feature = "serde-serialize-no-std", derive(Serialize, Deserialize))]
#[cfg_attr(
feature = "serde-serialize-no-std",
-serde(bound(serialize = "T: Serialize,
+serde(bound(serialize = "T: Scalar + Serialize,
R: Serialize,
DefaultAllocator: Allocator<T, Const<D>>,
Owned<T, Const<D>>: Serialize"))
)]
#[cfg_attr(
feature = "serde-serialize-no-std",
-serde(bound(deserialize = "T: Deserialize<'de>,
+serde(bound(deserialize = "T: Scalar + Deserialize<'de>,
R: Deserialize<'de>,
DefaultAllocator: Allocator<T, Const<D>>,
Owned<T, Const<D>>: Deserialize<'de>"))
)]
-pub struct Similarity<T: Scalar, R, const D: usize> {
+pub struct Similarity<T, R, const D: usize> {
/// The part of this similarity that does not include the scaling factor.
pub isometry: Isometry<T, R, D>,
scaling: T,


@@ -1,6 +1,7 @@
use approx::{AbsDiffEq, RelativeEq, UlpsEq};
use std::any::Any;
use std::fmt::Debug;
+use std::hash;
use std::marker::PhantomData;

#[cfg(feature = "serde-serialize-no-std")]

@@ -166,14 +167,16 @@ where
_phantom: PhantomData<C>,
}

-// TODO
-// impl<T: RealField + hash::Hash, D: DimNameAdd<U1> + hash::Hash, C: TCategory> hash::Hash for Transform<T, C, D>
-// where DefaultAllocator: Allocator<T, DimNameSum<Const<D>, U1>, DimNameSum<Const<D>, U1>>,
-// Owned<T, DimNameSum<Const<D>, U1>, DimNameSum<Const<D>, U1>>: hash::Hash {
-// fn hash<H: hash::Hasher>(&self, state: &mut H) {
-// self.matrix.hash(state);
-// }
-// }
+impl<T: RealField + hash::Hash, C: TCategory, const D: usize> hash::Hash for Transform<T, C, D>
+where
+Const<D>: DimNameAdd<U1>,
+DefaultAllocator: Allocator<T, DimNameSum<Const<D>, U1>, DimNameSum<Const<D>, U1>>,
+Owned<T, DimNameSum<Const<D>, U1>, DimNameSum<Const<D>, U1>>: hash::Hash,
+{
+fn hash<H: hash::Hasher>(&self, state: &mut H) {
+self.matrix.hash(state);
+}
+}
impl<T: RealField, C: TCategory, const D: usize> Copy for Transform<T, C, D> impl<T: RealField, C: TCategory, const D: usize> Copy for Transform<T, C, D>
where where


@@ -124,7 +124,7 @@ md_impl_all!(
if C::has_normalizer() {
let normalizer = self.matrix().fixed_slice::<1, D>(D, 0);
-let n = normalizer.tr_dot(&rhs);
+let n = normalizer.tr_dot(rhs);

if !n.is_zero() {
return transform * (rhs / n);


@@ -139,7 +139,7 @@ mod rkyv_impl {
impl<T: Serialize<S>, S: Fallible + ?Sized, const D: usize> Serialize<S> for Translation<T, D> {
fn serialize(&self, serializer: &mut S) -> Result<Self::Resolver, S::Error> {
-Ok(self.vector.serialize(serializer)?)
+self.vector.serialize(serializer)
}
}


@@ -69,7 +69,7 @@ where
{
/// Generate an arbitrary random variate for testing purposes.
#[inline]
-fn sample<'a, G: Rng + ?Sized>(&self, rng: &'a mut G) -> Translation<T, D> {
+fn sample<G: Rng + ?Sized>(&self, rng: &mut G) -> Translation<T, D> {
Translation::from(rng.gen::<SVector<T, D>>())
}
}


@@ -261,7 +261,7 @@ where
/// ```
#[inline]
#[must_use]
-pub fn to_rotation_matrix(&self) -> Rotation2<T> {
+pub fn to_rotation_matrix(self) -> Rotation2<T> {
let r = self.re;
let i = self.im;

@@ -282,7 +282,7 @@ where
/// ```
#[inline]
#[must_use]
-pub fn to_homogeneous(&self) -> Matrix3<T> {
+pub fn to_homogeneous(self) -> Matrix3<T> {
self.to_rotation_matrix().to_homogeneous()
}
}


@@ -383,8 +383,8 @@ where
SB: Storage<T, U2>,
SC: Storage<T, U2>,
{
-let sang = na.perp(&nb);
-let cang = na.dot(&nb);
+let sang = na.perp(nb);
+let cang = na.dot(nb);

Self::from_angle(sang.simd_atan2(cang) * s)
}


@@ -14,7 +14,7 @@ and the official package manager: [cargo](https://github.com/rust-lang/cargo).

Simply add the following to your `Cargo.toml` file:

-```.ignore
+```ignore
[dependencies]
// TODO: replace the * by the latest version.
nalgebra = "*"

@@ -26,7 +26,7 @@ Most useful functionalities of **nalgebra** are grouped in the root module `nalg
However, the recommended way to use **nalgebra** is to import types and traits
explicitly, and call free-functions using the `na::` prefix:

-```.rust
+```
#[macro_use]
extern crate approx; // For the macro relative_eq!
extern crate nalgebra as na;

@@ -87,7 +87,6 @@ an optimized set of tools for computer graphics and physics. Those features incl
html_root_url = "https://docs.rs/nalgebra/0.25.0"
)]
#![cfg_attr(not(feature = "std"), no_std)]
-#![cfg_attr(all(feature = "alloc", not(feature = "std")), feature(alloc))]
#![cfg_attr(feature = "no_unsound_assume_init", allow(unreachable_code))]

#[cfg(feature = "rand-no-std")]

@@ -102,6 +101,7 @@ extern crate approx;
extern crate num_traits as num;

#[cfg(all(feature = "alloc", not(feature = "std")))]
+#[cfg_attr(test, macro_use)]
extern crate alloc;

#[cfg(not(feature = "std"))]


@@ -98,7 +98,7 @@ pub fn clear_row_unchecked<T: ComplexField, R: Dim, C: Dim>(
reflection_norm.signum().conjugate(),
);
top.columns_range_mut(irow + shift..)
-.tr_copy_from(&refl.axis());
+.tr_copy_from(refl.axis());
} else {
top.columns_range_mut(irow + shift..).tr_copy_from(&axis);
}


@@ -27,7 +27,7 @@
//! In `proptest`, it is usually preferable to have free functions that generate *strategies*.
//! Currently, the [matrix](fn.matrix.html) function fills this role. The analogous function for
//! column vectors is [vector](fn.vector.html). Let's take a quick look at how it may be used:
-//! ```rust
+//! ```
//! use nalgebra::proptest::matrix;
//! use proptest::prelude::*;
//!

@@ -52,7 +52,7 @@
//! number of columns to vary. One way to do this is to use `proptest` combinators in combination
//! with [matrix](fn.matrix.html) as follows:
//!
-//! ```rust
+//! ```
//! use nalgebra::{Dynamic, OMatrix, Const};
//! use nalgebra::proptest::matrix;
//! use proptest::prelude::*;

@@ -92,7 +92,7 @@
//!
//! If you don't care about the dimensions of matrices, you can write tests like these:
//!
-//! ```rust
+//! ```
//! use nalgebra::{DMatrix, DVector, Dynamic, Matrix3, OMatrix, Vector3, U3};
//! use proptest::prelude::*;
//!


@@ -31,11 +31,9 @@ impl<'a, T: Clone> Iterator for ColumnEntries<'a, T> {
if self.curr >= self.i.len() {
None
} else {
-let res = Some(
-(unsafe { self.i.get_unchecked(self.curr).clone() }, unsafe {
+let res = Some((unsafe { *self.i.get_unchecked(self.curr) }, unsafe {
self.v.get_unchecked(self.curr).clone()
-}),
-);
+}));
self.curr += 1;
res
}

@@ -80,10 +78,12 @@ pub trait CsStorage<T, R, C = U1>: for<'a> CsStorageIter<'a, T, R, C> {
fn shape(&self) -> (R, C);
/// Retrieve the i-th row index of the underlying row index buffer.
///
+/// # Safety
/// No bound-checking is performed.
unsafe fn row_index_unchecked(&self, i: usize) -> usize;
/// The i-th value on the contiguous value buffer of this storage.
///
+/// # Safety
/// No bound-checking is performed.
unsafe fn get_value_unchecked(&self, i: usize) -> &T;
/// The i-th value on the contiguous value buffer of this storage.

@@ -155,7 +155,7 @@ where
#[inline]
fn column_row_indices(&'a self, j: usize) -> Self::ColumnRowIndices {
let rng = self.column_range(j);
-self.i[rng.clone()].iter().cloned()
+self.i[rng].iter().cloned()
}
}

@@ -489,7 +489,7 @@ where
// Sort the index vector.
let range = self.data.column_range(j);
-self.data.i[range.clone()].sort();
+self.data.i[range.clone()].sort_unstable();

// Permute the values too.
for (i, irow) in range.clone().zip(self.data.i[range].iter().cloned()) {
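The `sort()` to `sort_unstable()` changes in these sparse-matrix hunks follow clippy's `stable_sort_primitive` lint: for primitive keys such as `usize` row indices, equal elements are indistinguishable, so stability buys nothing, while the unstable sort avoids allocating a temporary buffer. A std-only sketch:

```rust
fn main() {
    // Row indices of a sparse column, as in the `CsMatrix` code above.
    let mut row_indices: Vec<usize> = vec![4, 1, 3, 1, 0];
    // Same result as `sort()` for primitives, without the extra allocation.
    row_indices.sort_unstable();
    assert_eq!(row_indices, vec![0, 1, 1, 3, 4]);
}
```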


@@ -271,7 +271,7 @@ where
// Keep the output sorted.
let range = res.data.p[j]..nz;
-res.data.i[range.clone()].sort();
+res.data.i[range.clone()].sort_unstable();

for p in range {
res.data.vals[p] = workspace[res.data.i[p]].inlined_clone()


@@ -63,7 +63,7 @@ impl<T: RealField, D: Dim, S: CsStorage<T, D, D>> CsMatrix<T, D, D, S> {
let mut column = self.data.column_entries(j);
let mut diag_found = false;

-while let Some((i, val)) = column.next() {
+for (i, val) in &mut column {
if i == j {
if val.is_zero() {
return false;

@@ -109,7 +109,7 @@ impl<T: RealField, D: Dim, S: CsStorage<T, D, D>> CsMatrix<T, D, D, S> {
let mut column = self.data.column_entries(j);
let mut diag = None;

-while let Some((i, val)) = column.next() {
+for (i, val) in &mut column {
if i == j {
if val.is_zero() {
return false;

@@ -151,7 +151,7 @@ impl<T: RealField, D: Dim, S: CsStorage<T, D, D>> CsMatrix<T, D, D, S> {
// We don't compute a postordered reach here because it will be sorted after anyway.
self.lower_triangular_reach(b, &mut reach);
// We sort the reach so the result matrix has sorted indices.
-reach.sort();
+reach.sort_unstable();
let mut workspace =
unsafe { crate::unimplemented_or_uninitialized_generic!(b.data.shape().0, Const::<1>) };

@@ -167,7 +167,7 @@ impl<T: RealField, D: Dim, S: CsStorage<T, D, D>> CsMatrix<T, D, D, S> {
let mut column = self.data.column_entries(j);
let mut diag_found = false;

-while let Some((i, val)) = column.next() {
+for (i, val) in &mut column {
if i == j {
if val.is_zero() {
break;
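The `while let Some(x) = it.next()` to `for x in &mut it` rewrites here are clippy's `while_let_on_iterator`: the `for` loop over `&mut it` is equivalent, and borrowing the iterator mutably keeps it usable after an early `break`. A std-only sketch of the pattern with hypothetical `(index, value)` entries:

```rust
fn main() {
    // Stand-in for `self.data.column_entries(j)` above.
    let mut column = vec![(0usize, 1.0f64), (1, 0.0), (2, 3.0)].into_iter();
    let mut diag_found = false;
    // Borrowing with `&mut` means the loop does not consume `column`.
    for (i, val) in &mut column {
        if i == 1 {
            diag_found = val != 0.0;
            break;
        }
    }
    assert!(!diag_found); // the (1, 0.0) entry had a zero diagonal value
    // Iteration can resume where the loop broke off.
    assert_eq!(column.next(), Some((2, 3.0)));
}
```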


@@ -267,12 +267,12 @@ impl<T: RealField + simba::scalar::RealField> AffineTransformation<Point3<T>>
#[inline]
fn append_translation(&self, translation: &Self::Translation) -> Self {
-self * Self::from_parts(translation.clone(), UnitQuaternion::identity())
+self * Self::from_parts(*translation, UnitQuaternion::identity())
}

#[inline]
fn prepend_translation(&self, translation: &Self::Translation) -> Self {
-Self::from_parts(translation.clone(), UnitQuaternion::identity()) * self
+Self::from_parts(*translation, UnitQuaternion::identity()) * self
}

#[inline]

@@ -287,12 +287,12 @@ impl<T: RealField + simba::scalar::RealField> AffineTransformation<Point3<T>>
#[inline]
fn append_scaling(&self, _: &Self::NonUniformScaling) -> Self {
-self.clone()
+*self
}

#[inline]
fn prepend_scaling(&self, _: &Self::NonUniformScaling) -> Self {
-self.clone()
+*self
}
}


@@ -272,12 +272,12 @@ where
match Self::dimension() {
1 => {
-if vs.len() == 0 {
+if vs.is_empty() {
let _ = f(&Self::canonical_basis_element(0));
}
}
2 => {
-if vs.len() == 0 {
+if vs.is_empty() {
let _ = f(&Self::canonical_basis_element(0))
&& f(&Self::canonical_basis_element(1));
} else if vs.len() == 1 {

@@ -290,7 +290,7 @@ where
// Otherwise, nothing.
}
3 => {
-if vs.len() == 0 {
+if vs.is_empty() {
let _ = f(&Self::canonical_basis_element(0))
&& f(&Self::canonical_basis_element(1))
&& f(&Self::canonical_basis_element(2));
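`vs.len() == 0` becoming `vs.is_empty()` is clippy's `len_zero` lint: `is_empty` states the intent directly, is defined on all std collections and slices, and can be cheaper for containers where `len` is not O(1). A std-only sketch:

```rust
fn main() {
    let vs: Vec<[f64; 3]> = Vec::new();
    // The idiomatic check the diff standardizes on:
    assert!(vs.is_empty());
    // Equivalent to the replaced form.
    assert_eq!(vs.is_empty(), vs.len() == 0);
}
```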


@@ -23,7 +23,7 @@ impl<T: RealField + simba::scalar::RealField, const D: usize> EuclideanSpace for
#[inline]
fn coordinates(&self) -> Self::Coordinates {
-self.coords.clone()
+self.coords
}

#[inline]


@@ -144,11 +144,7 @@ impl<T: RealField + simba::scalar::RealField> NormedSpace for Quaternion<T> {
#[inline]
fn try_normalize(&self, min_norm: T) -> Option<Self> {
-if let Some(v) = self.coords.try_normalize(min_norm) {
-Some(Self::from(v))
-} else {
-None
-}
+self.coords.try_normalize(min_norm).map(Self::from)
}

#[inline]
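Collapsing `if let Some(v) = x { Some(f(v)) } else { None }` into `x.map(f)` is clippy's `manual_map`; the two forms are exactly equivalent. A std-only sketch with a hypothetical 2-D `try_normalize` (not nalgebra's):

```rust
// Returns the unit vector, or `None` when the norm is at or below `min_norm`.
fn try_normalize(v: (f64, f64), min_norm: f64) -> Option<(f64, f64)> {
    let n = (v.0 * v.0 + v.1 * v.1).sqrt();
    (n > min_norm).then(|| (v.0 / n, v.1 / n))
}

fn main() {
    // `map` replaces the manual `if let Some(...) { Some(...) } else { None }`.
    let wrapped = try_normalize((3.0, 4.0), 1e-9).map(|(x, y)| [x, y]);
    assert_eq!(wrapped, Some([0.6, 0.8]));
    assert_eq!(try_normalize((0.0, 0.0), 1e-9), None);
}
```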
@@ -234,17 +230,17 @@ impl<T: RealField + simba::scalar::RealField> AffineTransformation<Point3<T>>
#[inline]
fn decompose(&self) -> (Id, Self, Id, Self) {
-(Id::new(), self.clone(), Id::new(), Self::identity())
+(Id::new(), *self, Id::new(), Self::identity())
}

#[inline]
fn append_translation(&self, _: &Self::Translation) -> Self {
-self.clone()
+*self
}

#[inline]
fn prepend_translation(&self, _: &Self::Translation) -> Self {
-self.clone()
+*self
}

#[inline]

@@ -259,12 +255,12 @@ impl<T: RealField + simba::scalar::RealField> AffineTransformation<Point3<T>>
#[inline]
fn append_scaling(&self, _: &Self::NonUniformScaling) -> Self {
-self.clone()
+*self
}

#[inline]
fn prepend_scaling(&self, _: &Self::NonUniformScaling) -> Self {
-self.clone()
+*self
}
}

@@ -278,7 +274,7 @@ impl<T: RealField + simba::scalar::RealField> Similarity<Point3<T>> for UnitQuat
#[inline]
fn rotation(&self) -> Self {
-self.clone()
+*self
}

#[inline]


@@ -79,7 +79,7 @@ impl<T: RealField + simba::scalar::RealField, const D: usize> Transformation<Poi
     #[inline]
     fn transform_vector(&self, v: &SVector<T, D>) -> SVector<T, D> {
-        v.clone()
+        *v
     }
 }
@@ -93,7 +93,7 @@ impl<T: RealField + simba::scalar::RealField, const D: usize> ProjectiveTransfor
     #[inline]
     fn inverse_transform_vector(&self, v: &SVector<T, D>) -> SVector<T, D> {
-        v.clone()
+        *v
     }
 }
@@ -176,7 +176,7 @@ impl<T: RealField + simba::scalar::RealField, const D: usize> AlgaTranslation<Po
 {
     #[inline]
     fn to_vector(&self) -> SVector<T, D> {
-        self.vector.clone()
+        self.vector
     }

     #[inline]
@@ -186,7 +186,7 @@ impl<T: RealField + simba::scalar::RealField, const D: usize> AlgaTranslation<Po
     #[inline]
     fn powf(&self, n: T) -> Option<Self> {
-        Some(Self::from(&self.vector * n))
+        Some(Self::from(self.vector * n))
     }

     #[inline]


@@ -90,17 +90,17 @@ impl<T: RealField + simba::scalar::RealField> AffineTransformation<Point2<T>> fo
     #[inline]
     fn decompose(&self) -> (Id, Self, Id, Self) {
-        (Id::new(), self.clone(), Id::new(), Self::identity())
+        (Id::new(), *self, Id::new(), Self::identity())
     }

     #[inline]
     fn append_translation(&self, _: &Self::Translation) -> Self {
-        self.clone()
+        *self
     }

     #[inline]
     fn prepend_translation(&self, _: &Self::Translation) -> Self {
-        self.clone()
+        *self
     }

     #[inline]
@@ -115,12 +115,12 @@ impl<T: RealField + simba::scalar::RealField> AffineTransformation<Point2<T>> fo
     #[inline]
     fn append_scaling(&self, _: &Self::NonUniformScaling) -> Self {
-        self.clone()
+        *self
     }

     #[inline]
     fn prepend_scaling(&self, _: &Self::NonUniformScaling) -> Self {
-        self.clone()
+        *self
     }
 }
@@ -134,7 +134,7 @@ impl<T: RealField + simba::scalar::RealField> Similarity<Point2<T>> for UnitComp
     #[inline]
     fn rotation(&self) -> Self {
-        self.clone()
+        *self
     }

     #[inline]


@@ -1108,3 +1108,31 @@ fn partial_eq_different_types() {
     // assert_ne!(static_mat, typenum_static_mat);
     //assert_ne!(typenum_static_mat, static_mat);
 }
+
+fn generic_omatrix_to_string<D>(
+    vector: &nalgebra::OVector<f64, D>,
+    matrix: &nalgebra::OMatrix<f64, D, D>,
+) -> (String, String)
+where
+    D: nalgebra::Dim,
+    nalgebra::DefaultAllocator: nalgebra::base::allocator::Allocator<f64, D>,
+    nalgebra::DefaultAllocator: nalgebra::base::allocator::Allocator<f64, D, D>,
+{
+    (vector.to_string(), matrix.to_string())
+}
+
+#[test]
+fn omatrix_to_string() {
+    let dvec: nalgebra::DVector<f64> = nalgebra::dvector![1.0, 2.0];
+    let dmatr: nalgebra::DMatrix<f64> = nalgebra::dmatrix![1.0, 2.0; 3.0, 4.0];
+    let svec: nalgebra::SVector<f64, 2> = nalgebra::vector![1.0, 2.0];
+    let smatr: nalgebra::SMatrix<f64, 2, 2> = nalgebra::matrix![1.0, 2.0; 3.0, 4.0];
+
+    assert_eq!(
+        generic_omatrix_to_string(&dvec, &dmatr),
+        (dvec.to_string(), dmatr.to_string())
+    );
+
+    assert_eq!(
+        generic_omatrix_to_string(&svec, &smatr),
+        (svec.to_string(), smatr.to_string())
+    );
+}
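
The `try_normalize` change in the quaternion hunk above is the standard clippy-suggested rewrite of `if let Some(v) = … { Some(f(v)) } else { None }` into `Option::map`. A minimal standalone sketch of that refactor — the `Vec4` and `Quaternion` types here are simplified stand-ins for illustration, not nalgebra's real definitions:

```rust
// Illustrates the `if let Some(..) / else None` -> `Option::map` rewrite.
#[derive(Clone, Copy, Debug, PartialEq)]
struct Vec4([f64; 4]);

impl Vec4 {
    fn norm(&self) -> f64 {
        self.0.iter().map(|x| x * x).sum::<f64>().sqrt()
    }

    // Returns the normalized vector, or None if the norm is below `min_norm`.
    fn try_normalize(&self, min_norm: f64) -> Option<Vec4> {
        let n = self.norm();
        if n <= min_norm {
            None
        } else {
            Some(Vec4(self.0.map(|x| x / n)))
        }
    }
}

#[derive(Debug, PartialEq)]
struct Quaternion {
    coords: Vec4,
}

impl From<Vec4> for Quaternion {
    fn from(coords: Vec4) -> Self {
        Quaternion { coords }
    }
}

impl Quaternion {
    // Before: if let Some(v) = self.coords.try_normalize(min_norm) {
    //             Some(Self::from(v))
    //         } else { None }
    // After (as in the diff): forward the Option through `map`.
    fn try_normalize(&self, min_norm: f64) -> Option<Self> {
        self.coords.try_normalize(min_norm).map(Self::from)
    }
}

fn main() {
    let q = Quaternion { coords: Vec4([0.0, 0.0, 0.0, 2.0]) };
    assert_eq!(
        q.try_normalize(1e-9),
        Some(Quaternion { coords: Vec4([0.0, 0.0, 0.0, 1.0]) })
    );

    let tiny = Quaternion { coords: Vec4([0.0; 4]) };
    assert!(tiny.try_normalize(1e-9).is_none());
}
```

The `map` form is shorter and makes clear that the `None` case is simply forwarded unchanged, which is why clippy flags the longer form.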