The previous implementation was correct only for real elements. The
Cholesky decomposition is `L L^H`, so the determinant is `det(L) *
det(L^H)`. Since `L` is a triangular matrix, `det(L)` is the product
of the diagonal elements of `L`. Since `L^H` is triangular and its
diagonal elements are the conjugates of the diagonal elements of `L`,
`det(L^H)` is the conjugate of `det(L)`. So the overall determinant
is the product of the diagonal elements of `L`, multiplied by the conjugate of that product.
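A minimal sketch of the resulting computation (illustrative only, not the exact code of this change):
```rust
use nalgebra::{Complex, DMatrix};

// det(A) = det(L) * conj(det(L)) = |det(L)|^2, so the result is real and non-negative.
fn cholesky_determinant(l: &DMatrix<Complex<f64>>) -> Complex<f64> {
    let det_l = l
        .diagonal()
        .iter()
        .fold(Complex::new(1.0, 0.0), |acc, z| acc * *z);
    det_l * det_l.conj()
}
```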
Most call sites still invoke undefined behavior through `assume_init`. When the `no_unsound_assume_init` feature is enabled, those call sites invoke `unimplemented!()` instead, to make it easier to fix them gradually.
Progress towards #556.
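A sketch of the gating pattern this describes (the helper shown here is hypothetical; the real call sites are adapted individually):
```rust
use std::mem::MaybeUninit;

// Hypothetical helper illustrating the gate; not nalgebra's actual code.
unsafe fn assume_init_or_bail<T>(value: MaybeUninit<T>) -> T {
    #[cfg(feature = "no_unsound_assume_init")]
    {
        // With the feature enabled, a known-unsound call site fails loudly
        // instead of reading uninitialized memory.
        unimplemented!("this call site still relies on unsound assume_init")
    }
    #[cfg(not(feature = "no_unsound_assume_init"))]
    {
        value.assume_init()
    }
}
```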
This implements `UnitDualQuaternion` as an alternative to `Isometry3`
for representing 3D isometries. It also provides the `sclerp`
operation, which performs screw-linear interpolation
between two unit dual quaternions.
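A hedged usage sketch (the `from_parts` constructor shown here is an assumption of the sketch):
```rust
use nalgebra::{Translation3, UnitDualQuaternion, UnitQuaternion, Vector3};

// Interpolate between two rigid motions along the screw axis joining them.
let start = UnitDualQuaternion::from_parts(
    Translation3::new(0.0, 0.0, 0.0),
    UnitQuaternion::identity(),
);
let end = UnitDualQuaternion::from_parts(
    Translation3::new(1.0, 2.0, 3.0),
    UnitQuaternion::from_axis_angle(&Vector3::y_axis(), std::f64::consts::FRAC_PI_2),
);
// t = 0.5 yields the halfway point of the screw motion from `start` to `end`.
let halfway = start.sclerp(&end, 0.5);
```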
This introduces functionality for creating strategies for matrices and
vectors, as well as an implementation of Arbitrary.
Strategies for the geometric types (Point3, Quaternion etc.) are not
currently part of this contribution.
The current strategy implementation for matrices has some limitations
that lead to suboptimal shrinking behavior. This is documented in the
module-level docs, with some additional comments in the code.
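For illustration, here is the kind of property test these strategies enable (the `matrix` helper name and its parameter forms are assumptions of this sketch):
```rust
use nalgebra::proptest::matrix;
use proptest::prelude::*;

proptest! {
    // Shapes are drawn from the given row/column ranges, elements from the value strategy.
    #[test]
    fn transpose_twice_is_identity(m in matrix(-10.0f64..10.0, 1..5usize, 1..5usize)) {
        prop_assert_eq!(m.transpose().transpose(), m);
    }
}
```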
There are two major additions in this commit. The first is a new storage
trait, `ReshapableStorage`, that can be implemented for storage types
that can be reshaped in-place. I have implemented this for both the
`ArrayStorage` and `VecStorage` types, as they are the most common and
they are just interpretations of a flat list.
The second is a `Matrix::reshape_generic` method that allows a matrix to
be reshaped in place, provided that the underlying storage can handle it.
In practice, this means that the standard matrix types (`MatrixMN` and
`DMatrix`) can be resized to any size that has the same element count.
Resizing between array and vector storage is not implemented due to
`Storage` only being implemented for `VecStorage` variants where at
least one dimension is `Dynamic`.
Additionally, only the generic reshape function is added as it can be a
basis for other reshaping functions (see the resizing functions) and I
am not particularly in the mood to implement a variety of reshaping
methods.
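A short usage sketch of `reshape_generic` (illustrative; exact argument forms may differ):
```rust
use nalgebra::{DMatrix, Dynamic};

// Reinterpret the same 12 column-major elements as a 3x4 matrix instead of 2x6.
let m = DMatrix::from_iterator(2, 6, (0..12).map(|i| i as f64));
let reshaped = m.reshape_generic(Dynamic::new(3), Dynamic::new(4));
assert_eq!(reshaped.shape(), (3, 4));
```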
Previously, most dimension mismatch asserts used raw `assert!` and did
not include the mismatching dimensions in the panic message. When using
dynamic matrices, this led to somewhat-opaque panics such as:
```rust
let m1 = DMatrix::<f32>::zeros(2, 3);
let m2 = DMatrix::<f32>::zeros(5, 10);
m1 + m2 // panic: Matrix addition/subtraction dimensions mismatch.
```
This patch adds dimension information in the panic messages wherever
doing so did not add additional bounds checks, mostly by simply changing
`assert!(a == b, ...)` cases to `assert_eq!`. After:
```rust
// panic: assertion failed: `(left == right)`
// left: `(2, 3)`,
// right: `(5, 10)`: Matrix addition/subtraction dimensions mismatch.
```
Note that `gemv` and `ger` were not updated, as they are called from
within other functions on submatrices -- e.g., `gemv` is called from
`gemm`, which is called from `mul_to`. Including dimension
information in the `gemv` panic messages would be confusing to
`mul` / `mul_to` users, because it would include dimensions of the column
vectors that `gemm` passes to `gemv` rather than of the original `mul`
arguments. A fix would be to add bounds checks to `mul_to`, but that may
have performance and redundancy implications, so is left to another
patch.
`Extend<N>` was already implemented, but nalgebra vector/matrix iterators
yield `&N`, not `N`, so also implementing `Extend<&N>` makes it
easier to use.
This seems to be common practice: the standard library's `Vec` also
implements `Extend` for both `T` and `&T`.
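A small usage sketch, assuming `DVector` carries both impls after this change:
```rust
use nalgebra::DVector;

let mut acc = DVector::from_vec(vec![1.0f64, 2.0]);
let tail = DVector::from_vec(vec![3.0f64, 4.0]);

// `tail.iter()` yields `&f64`; with `Extend<&N>` this no longer needs `.cloned()`.
acc.extend(tail.iter());
// The pre-existing `Extend<N>` impl still accepts owned elements.
acc.extend(vec![5.0f64, 6.0]);
assert_eq!(acc.len(), 6);
```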
The various nalgebra-lapack `FooScalar` traits are still `Copy` because they make use of uninitialized memory.
The nalgebra-glm `Number` trait still uses `Copy` because upstream `approx` requires it.
After we yield the final element from the iterator, we don't offset
`ptr` again, to avoid having it go out of bounds.
However, `inner_end` may be several elements out-of-bounds, depending on
the value of `size`. Therefore, we use `wrapping_offset` to avoid
undefined behavior.
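A minimal standalone illustration of the distinction (not the iterator code itself):
```rust
let data = [1i32, 2, 3];
let ptr = data.as_ptr();

// `offset` is `unsafe` and requires the result to stay inside the allocation
// (or exactly one element past its end):
let _one_past_end = unsafe { ptr.offset(data.len() as isize) };

// `wrapping_offset` is safe even when the result is further out of bounds,
// as long as the resulting pointer is never dereferenced:
let _far_out = ptr.wrapping_offset(data.len() as isize + 2);
```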
`./ci/test.sh` now passes locally.
Refactoring done via the following sed commands:
```bash
export RELEVANT_SOURCEFILES="$(find src -name '*.rs') $(find examples -name '*.rs')"
for f in $RELEVANT_SOURCEFILES; do sed -i 's/N\([0-9]\?\): *Scalar + \(Arbitrary\)/N\1: Scalar + Copy + \2/' $f; done
for f in $RELEVANT_SOURCEFILES; do sed -i 's/N\([0-9]\?\): *Scalar + \(Serialize\)/N\1: Scalar + Copy + \2/' $f; done
for f in $RELEVANT_SOURCEFILES; do sed -i 's/N\([0-9]\?\): *Scalar + \(Deserialize\)/N\1: Scalar + Copy + \2/' $f; done
export RELEVANT_SOURCEFILES="$(find nalgebra-glm -name '*.rs')"
for f in $RELEVANT_SOURCEFILES; do sed -i 's/N\([0-9]\?\): *Scalar,/N\1: Scalar + Copy,/' $f; done
for f in $RELEVANT_SOURCEFILES; do sed -i 's/N\([0-9]\?\): *Scalar>/N\1: Scalar + Copy>/' $f; done
for f in nalgebra-glm/src/traits.rs; do sed -i 's/Scalar + Ring/Scalar + Copy + Ring/' $f; done # Number trait definition
```
This should semantically be a no-op, but enables refactorings to use non-Copy scalars on a case-by-case basis.
Also, the only instance of a `One + Zero` trait bound was changed into a `Zero + One` bound to match the others.
The following sed scripts were used in the refactoring (with each clause added to reduce the error count of `cargo check`):
```bash
export RELEVANT_SOURCEFILES="$(find src -name '*.rs') $(find examples -name '*.rs')"
for f in $RELEVANT_SOURCEFILES; do sed -i 's/N: Scalar,/N: Scalar+Copy,/' $f; done
for f in $RELEVANT_SOURCEFILES; do sed -i 's/N: Scalar + Field/N: Scalar + Copy + Field/' $f; done
for f in $RELEVANT_SOURCEFILES; do sed -i 's/N: Scalar + Zero/N: Scalar + Copy + Zero/' $f; done
for f in $RELEVANT_SOURCEFILES; do sed -i 's/N: Scalar + Closed/N: Scalar + Copy + Closed/' $f; done
for f in $RELEVANT_SOURCEFILES; do sed -i 's/N: Scalar + Eq/N: Scalar + Copy + Eq/' $f; done
for f in $RELEVANT_SOURCEFILES; do sed -i 's/N: Scalar + PartialOrd/N: Scalar + Copy + PartialOrd/' $f; done
for f in $RELEVANT_SOURCEFILES; do sed -i 's/N: *Scalar + Zero/N: Scalar + Copy + Zero/' $f; done
for f in $RELEVANT_SOURCEFILES; do sed -i 's/N: Scalar + PartialEq/N: Scalar + Copy + PartialEq/' $f; done
for f in $RELEVANT_SOURCEFILES; do sed -i 's/N: Scalar>/N: Scalar+Copy>/' $f; done
for f in $RELEVANT_SOURCEFILES; do sed -i 's/N: Scalar + $bound/N: Scalar + Copy + $bound/' $f; done
for f in $RELEVANT_SOURCEFILES; do sed -i 's/N: *Scalar + $bound/N: Scalar + Copy + $bound/' $f; done
for f in $RELEVANT_SOURCEFILES; do sed -i 's/N\([0-9]\): *Scalar,/N\1: Scalar+Copy,/' $f; done
for f in $RELEVANT_SOURCEFILES; do sed -i 's/N: *Scalar + $trait/N: Scalar + Copy + $trait/' $f; done
for f in $RELEVANT_SOURCEFILES; do sed -i 's/N\([0-9]\): *Scalar + Superset/N\1: Scalar + Copy + Superset/' $f; done
for f in $RELEVANT_SOURCEFILES; do sed -i 's/N\([0-9]\): *Scalar + \([a-zA-Z]*Eq\)/N\1: Scalar + Copy + \2/' $f; done
for f in $RELEVANT_SOURCEFILES; do sed -i 's/N\([0-9]\?\): *Scalar + \([a-zA-Z]*Eq\)/N\1: Scalar + Copy + \2/' $f; done
for f in $RELEVANT_SOURCEFILES; do sed -i 's/N\([0-9]\?\): *Scalar + \(hash::\)/N\1: Scalar + Copy + \2/' $f; done
for f in $RELEVANT_SOURCEFILES; do sed -i 's/N\([0-9]\?\): *Scalar {/N\1: Scalar + Copy {/' $f; done
for f in $RELEVANT_SOURCEFILES; do sed -i 's/N\([0-9]\?\): *Scalar + \(Zero\)/N\1: Scalar + Copy + \2/' $f; done
for f in $RELEVANT_SOURCEFILES; do sed -i 's/N\([0-9]\?\): *Scalar + \(Bounded\)/N\1: Scalar + Copy + \2/' $f; done
for f in $RELEVANT_SOURCEFILES; do sed -i 's/N\([0-9]\?\): *Scalar + \(Lattice\)/N\1: Scalar + Copy + \2/' $f; done
for f in $RELEVANT_SOURCEFILES; do sed -i 's/N\([0-9]\?\): *Scalar + \(Meet\|Join\)/N\1: Scalar + Copy + \2/' $f; done
for f in $RELEVANT_SOURCEFILES; do sed -i 's/N\([0-9]\?\): *Scalar + \(fmt::\)/N\1: Scalar + Copy + \2/' $f; done
for f in $RELEVANT_SOURCEFILES; do sed -i 's/N\([0-9]\?\): *Scalar + \(Ring\)/N\1: Scalar + Copy + \2/' $f; done
for f in $RELEVANT_SOURCEFILES; do sed -i 's/N\([0-9]\?\): *Scalar + \(Hash\)/N\1: Scalar + Copy + \2/' $f; done
for f in $RELEVANT_SOURCEFILES; do sed -i 's/N\([0-9]\?\): *Scalar + \(Send\|Sync\)/N\1: Scalar + Copy + \2/' $f; done
for f in $RELEVANT_SOURCEFILES; do sed -i 's/One + Zero/Zero + One/' $f; done
for f in $RELEVANT_SOURCEFILES; do sed -i 's/N\([0-9]\?\): *Scalar + \(Zero\)/N\1: Scalar + Copy + \2/' $f; done
for f in $RELEVANT_SOURCEFILES; do sed -i 's/N\([0-9]\?\): *Scalar + \($marker\)/N\1: Scalar + Copy + \2/' $f; done
for f in $RELEVANT_SOURCEFILES; do sed -i 's/N\([0-9]\?\): *Scalar>/N\1: Scalar + Copy>/' $f; done
for f in $RELEVANT_SOURCEFILES; do sed -i 's/Scalar+Copy/Scalar + Copy/' $f; done
```
The added method `Vector::axcpy` generalises `Vector::gemv` to
noncommutative cases: inside `gemv` we can now write
`self.axcpy(alpha, &col2, val, beta)` instead of the usual
`self.axpy(alpha * val, &col2, beta)`. Hence, `axcpy` preserves the
order of scalar multiplication, which is important for applications where
commutativity is not guaranteed (e.g., matrices of quaternions).
This commit also removes the helpers `array_axpy` and `array_ax`, and
replaces them with `array_axcpy` and `array_axc` respectively, which
likewise preserve the order of scalar multiplication.
Finally, `Vector::axpy` is preserved, but it is now expressed in terms of
`Vector::axcpy` like so:
```rust
self.axcpy(alpha, x, N::one(), beta)
```
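For reference, a sketch of the element-wise operation `axcpy` performs (illustrative, not the actual implementation):
```rust
use std::ops::{Add, Mul};

// self[i] <- a * x[i] * c + b * self[i]
// Keeping `a * x[i] * c` in exactly this order is what preserves correctness
// when the scalar type is noncommutative (e.g. quaternions).
fn axcpy_sketch<T>(y: &mut [T], a: T, x: &[T], c: T, b: T)
where
    T: Copy + Add<Output = T> + Mul<Output = T>,
{
    for (yi, &xi) in y.iter_mut().zip(x) {
        *yi = a * xi * c + b * *yi;
    }
}
```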
When creating a matrix with only one zero dimension, we end up with a
matrix with a total size of zero, but a non-zero stride for elements.
While such a matrix can never actually have any elements, we need to be
careful with how we use the pointer associated with it.
Since such a pointer will always be dangling, it can never be used with `ptr.offset`,
which requires that the pointer be in bounds or one past the end of an
allocation. Violating this results in undefined behavior.
This commit adds checks before the uses of `ptr.offset`. If we ever
need to offset from a pointer when our actual allocation size is zero,
we skip offsetting and return the original pointer. This is fine
because any actual use of the original or offset pointer would
already be undefined behavior - we should never be trying to dereference
the pointer associated with a zero-size matrix.
This issue was caught by running `cargo miri test` on the project.
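A sketch of the guard pattern described above (the helper is hypothetical; the real checks live at the individual `ptr.offset` call sites):
```rust
// Hypothetical helper illustrating the check; not nalgebra's exact code.
unsafe fn offset_unless_empty<T>(base: *const T, total_len: usize, offset: isize) -> *const T {
    if total_len == 0 {
        // The matrix owns no elements, so `base` is dangling; offsetting it would be UB.
        // Returning it unchanged is fine because callers must never dereference it.
        base
    } else {
        base.offset(offset)
    }
}
```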
The eigenvalue problem is solved in two different methods, each calculating the
discriminant of the underlying quadratic equation in a different way.
Use the formulation whose computation is considered more numerically stable.
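As an illustration of the kind of choice involved (the commit's exact routine may differ): for a symmetric 2x2 matrix `[[a, b], [b, c]]` the eigenvalues are the roots of `x^2 - (a + c)x + (ac - b^2) = 0`, and the discriminant can be computed either in the expanded form `(a + c)^2 - 4(ac - b^2)`, which is prone to cancellation, or in the algebraically equivalent factored form `(a - c)^2 + 4b^2`, which behaves better numerically:
```rust
// Eigenvalues of the symmetric 2x2 matrix [[a, b], [b, c]] using the factored,
// cancellation-resistant form of the discriminant.
fn sym2x2_eigenvalues(a: f64, b: f64, c: f64) -> (f64, f64) {
    let half_trace = 0.5 * (a + c);
    // ((a - c) / 2)^2 + b^2  ==  ((a + c)^2 - 4 * (a * c - b^2)) / 4
    let delta = (0.25 * (a - c) * (a - c) + b * b).sqrt();
    (half_trace + delta, half_trace - delta)
}
```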