Extend<N> was already implemented, but nalgebra's vector/matrix iterators
yield &N rather than N, so also implementing Extend<&N> makes it easier
to use.
This seems to be common practice: the standard library's Vec also
implements Extend for both T and &T.
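As a rough sketch, forwarding Extend<&'a N> to an existing Extend<N> by
copying the referenced scalars could look like the following (the Buffer
type and its impls are illustrative, not nalgebra's actual code):
```rust
// Illustrative only: forward `Extend<&'a N>` to `Extend<N>` by copying,
// mirroring what the standard library's `Vec` does for `T` and `&T`.
struct Buffer<N> {
    data: Vec<N>,
}

impl<N> Extend<N> for Buffer<N> {
    fn extend<I: IntoIterator<Item = N>>(&mut self, iter: I) {
        self.data.extend(iter);
    }
}

impl<'a, N: Copy + 'a> Extend<&'a N> for Buffer<N> {
    fn extend<I: IntoIterator<Item = &'a N>>(&mut self, iter: I) {
        // Dereference each `&N` into an owned `N` and reuse `Extend<N>`.
        self.data.extend(iter.into_iter().copied());
    }
}
```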
```bash
export RELEVANT_SOURCEFILES="$(find nalgebra-lapack -name '*.rs')"
for f in $RELEVANT_SOURCEFILES; do sed -i 's/N\([0-9]\?\): *Scalar,/N\1: Scalar + Copy,/' $f; done
for f in $RELEVANT_SOURCEFILES; do sed -i 's/N\([0-9]\?\): *Scalar>/N\1: Scalar + Copy>/' $f; done
for f in $RELEVANT_SOURCEFILES; do sed -i 's/\([A-Z]*Scalar\): Scalar {/\1: Scalar + Copy {/' $f; done
for f in $RELEVANT_SOURCEFILES; do sed -i 's/SVDScalar<R: DimMin<C>, C: Dim>: Scalar/SVDScalar<R: DimMin<C>, C: Dim>: Scalar + Copy/' $f; done
```
The various nalgebra-lapack `FooScalar` traits are still `Copy` because they make use of uninitialized memory.
nalgebra-glm's `Number` still requires `Copy` because upstream `approx` requires it.
After we yield the final element from the iterator, we don't offset
`ptr` again, to avoid having it go out of bounds.
However, `inner_end` may be several elements out-of-bounds, depending on
the value of `size`. Therefore, we use `wrapping_offset` to avoid
undefined behavior.
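A minimal sketch of the pattern, assuming a flat strided layout and
illustrative names (this is not nalgebra's actual iterator): the
possibly out-of-bounds bound is formed with `wrapping_offset` and is
only ever compared against, never dereferenced or passed to `offset`.
```rust
use std::marker::PhantomData;

struct StridedIter<'a, T> {
    ptr: *const T,
    end: *const T, // may lie past the allocation; compared, never read
    stride: isize,
    _marker: PhantomData<&'a T>,
}

impl<'a, T> StridedIter<'a, T> {
    /// Safety: `start` must point to `len` readable elements spaced
    /// `stride` elements apart, all within a single allocation.
    unsafe fn new(start: *const T, len: usize, stride: isize) -> Self {
        StridedIter {
            ptr: start,
            // `wrapping_offset` has no in-bounds requirement, unlike `offset`.
            end: start.wrapping_offset(stride * len as isize),
            stride,
            _marker: PhantomData,
        }
    }
}

impl<'a, T> Iterator for StridedIter<'a, T> {
    type Item = &'a T;

    fn next(&mut self) -> Option<&'a T> {
        if self.ptr == self.end {
            return None;
        }
        let current = self.ptr;
        // After the final element this lands on `end`, which may be out of
        // bounds, so `wrapping_offset` is used here as well.
        self.ptr = self.ptr.wrapping_offset(self.stride);
        unsafe { Some(&*current) }
    }
}

fn main() {
    // 3 elements spaced 2 apart in a 5-element array: `end` is `start + 6`,
    // i.e. past the allocation, which is exactly the situation described.
    let data = [1, 2, 3, 4, 5];
    let iter = unsafe { StridedIter::new(data.as_ptr(), 3, 2) };
    assert_eq!(iter.copied().collect::<Vec<i32>>(), vec![1, 3, 5]);
}
```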
`./ci/test.sh` now passes locally.
Refactoring done via the following sed commands:
```bash
export RELEVANT_SOURCEFILES="$(find src -name '*.rs') $(find examples -name '*.rs')"
for f in $RELEVANT_SOURCEFILES; do sed -i 's/N\([0-9]\?\): *Scalar + \(Arbitrary\)/N\1: Scalar + Copy + \2/' $f; done
for f in $RELEVANT_SOURCEFILES; do sed -i 's/N\([0-9]\?\): *Scalar + \(Serialize\)/N\1: Scalar + Copy + \2/' $f; done
for f in $RELEVANT_SOURCEFILES; do sed -i 's/N\([0-9]\?\): *Scalar + \(Deserialize\)/N\1: Scalar + Copy + \2/' $f; done
export RELEVANT_SOURCEFILES="$(find nalgebra-glm -name '*.rs')"
for f in $RELEVANT_SOURCEFILES; do sed -i 's/N\([0-9]\?\): *Scalar,/N\1: Scalar + Copy,/' $f; done
for f in $RELEVANT_SOURCEFILES; do sed -i 's/N\([0-9]\?\): *Scalar>/N\1: Scalar + Copy>/' $f; done
for f in nalgebra-glm/src/traits.rs; do sed -i 's/Scalar + Ring/Scalar + Copy + Ring/' $f; done # Number trait definition
```
This should semantically be a no-op, but enables refactorings to use non-Copy scalars on a case-by-case basis.
Also, the only instance of a `One + Zero` trait bound was changed into a `Zero + One` bound to match the others.
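As a minimal sketch of what this means (illustrative trait definition
and functions, not nalgebra's exact ones): `Scalar` no longer implies
`Copy`, and items that genuinely copy elements by value now ask for
`Copy` explicitly, so existing code behaves identically while clone-only
scalars become possible elsewhere.
```rust
use std::fmt::Debug;

// Illustrative stand-in for a `Scalar` trait without a `Copy` supertrait.
pub trait Scalar: Clone + PartialEq + Debug {}
impl<T: Clone + PartialEq + Debug> Scalar for T {}

// Code that copies elements by value keeps an explicit `Copy` bound...
fn sum<N: Scalar + Copy + std::iter::Sum>(xs: &[N]) -> N {
    xs.iter().copied().sum()
}

// ...while clone-only scalars (e.g. arbitrary-precision numbers) can be
// used wherever only `Scalar` is required.
fn first<N: Scalar>(xs: &[N]) -> Option<N> {
    xs.first().cloned()
}

fn main() {
    assert_eq!(sum(&[1.0_f64, 2.0, 3.0]), 6.0);
    assert_eq!(first(&[String::from("a")]), Some(String::from("a")));
}
```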
The following sed scripts were used in the refactoring (with each clause added to reduce the error count of `cargo check`):
```bash
export RELEVANT_SOURCEFILES="$(find src -name '*.rs') $(find examples -name '*.rs')"
for f in $RELEVANT_SOURCEFILES; do sed -i 's/N: Scalar,/N: Scalar+Copy,/' $f; done
for f in $RELEVANT_SOURCEFILES; do sed -i 's/N: Scalar + Field/N: Scalar + Copy + Field/' $f; done
for f in $RELEVANT_SOURCEFILES; do sed -i 's/N: Scalar + Zero/N: Scalar + Copy + Zero/' $f; done
for f in $RELEVANT_SOURCEFILES; do sed -i 's/N: Scalar + Closed/N: Scalar + Copy + Closed/' $f; done
for f in $RELEVANT_SOURCEFILES; do sed -i 's/N: Scalar + Eq/N: Scalar + Copy + Eq/' $f; done
for f in $RELEVANT_SOURCEFILES; do sed -i 's/N: Scalar + PartialOrd/N: Scalar + Copy + PartialOrd/' $f; done
for f in $RELEVANT_SOURCEFILES; do sed -i 's/N: *Scalar + Zero/N: Scalar + Copy + Zero/' $f; done
for f in $RELEVANT_SOURCEFILES; do sed -i 's/N: Scalar + PartialEq/N: Scalar + Copy + PartialEq/' $f; done
for f in $RELEVANT_SOURCEFILES; do sed -i 's/N: Scalar>/N: Scalar+Copy>/' $f; done
for f in $RELEVANT_SOURCEFILES; do sed -i 's/N: Scalar + $bound/N: Scalar + Copy + $bound/' $f; done
for f in $RELEVANT_SOURCEFILES; do sed -i 's/N: *Scalar + $bound/N: Scalar + Copy + $bound/' $f; done
for f in $RELEVANT_SOURCEFILES; do sed -i 's/N\([0-9]\): *Scalar,/N\1: Scalar+Copy,/' $f; done
for f in $RELEVANT_SOURCEFILES; do sed -i 's/N: *Scalar + $trait/N: Scalar + Copy + $trait/' $f; done
for f in $RELEVANT_SOURCEFILES; do sed -i 's/N\([0-9]\): *Scalar + Superset/N\1: Scalar + Copy + Superset/' $f; done
for f in $RELEVANT_SOURCEFILES; do sed -i 's/N\([0-9]\): *Scalar + \([a-zA-Z]*Eq\)/N\1: Scalar + Copy + \2/' $f; done
for f in $RELEVANT_SOURCEFILES; do sed -i 's/N\([0-9]\?\): *Scalar + \([a-zA-Z]*Eq\)/N\1: Scalar + Copy + \2/' $f; done
for f in $RELEVANT_SOURCEFILES; do sed -i 's/N\([0-9]\?\): *Scalar + \(hash::\)/N\1: Scalar + Copy + \2/' $f; done
for f in $RELEVANT_SOURCEFILES; do sed -i 's/N\([0-9]\?\): *Scalar {/N\1: Scalar + Copy {/' $f; done
for f in $RELEVANT_SOURCEFILES; do sed -i 's/N\([0-9]\?\): *Scalar + \(Zero\)/N\1: Scalar + Copy + \2/' $f; done
for f in $RELEVANT_SOURCEFILES; do sed -i 's/N\([0-9]\?\): *Scalar + \(Bounded\)/N\1: Scalar + Copy + \2/' $f; done
for f in $RELEVANT_SOURCEFILES; do sed -i 's/N\([0-9]\?\): *Scalar + \(Lattice\)/N\1: Scalar + Copy + \2/' $f; done
for f in $RELEVANT_SOURCEFILES; do sed -i 's/N\([0-9]\?\): *Scalar + \(Meet\|Join\)/N\1: Scalar + Copy + \2/' $f; done
for f in $RELEVANT_SOURCEFILES; do sed -i 's/N\([0-9]\?\): *Scalar + \(fmt::\)/N\1: Scalar + Copy + \2/' $f; done
for f in $RELEVANT_SOURCEFILES; do sed -i 's/N\([0-9]\?\): *Scalar + \(Ring\)/N\1: Scalar + Copy + \2/' $f; done
for f in $RELEVANT_SOURCEFILES; do sed -i 's/N\([0-9]\?\): *Scalar + \(Hash\)/N\1: Scalar + Copy + \2/' $f; done
for f in $RELEVANT_SOURCEFILES; do sed -i 's/N\([0-9]\?\): *Scalar + \(Send\|Sync\)/N\1: Scalar + Copy + \2/' $f; done
for f in $RELEVANT_SOURCEFILES; do sed -i 's/One + Zero/Zero + One/' $f; done
for f in $RELEVANT_SOURCEFILES; do sed -i 's/N\([0-9]\?\): *Scalar + \(Zero\)/N\1: Scalar + Copy + \2/' $f; done
for f in $RELEVANT_SOURCEFILES; do sed -i 's/N\([0-9]\?\): *Scalar + \($marker\)/N\1: Scalar + Copy + \2/' $f; done
for f in $RELEVANT_SOURCEFILES; do sed -i 's/N\([0-9]\?\): *Scalar>/N\1: Scalar + Copy>/' $f; done
for f in $RELEVANT_SOURCEFILES; do sed -i 's/Scalar+Copy/Scalar + Copy/' $f; done
```
The added method `Vector::axcpy` generalises `Vector::gemv` to
noncommutative cases, since it lets us write, for `gemv`,
`self.axcpy(alpha, &col2, val, beta)` instead of the usual
`self.axpy(alpha * val, &col2, beta)`. Hence, `axcpy` preserves the
order of scalar multiplication, which is important for applications
where commutativity is not guaranteed (e.g., matrices of quaternions).
This commit also removes the helpers `array_axpy` and `array_ax`,
replacing them with `array_axcpy` and `array_axc` respectively, which
likewise preserve the order of scalar multiplication.
Finally, `Vector::axpy(a, x, b)` is preserved, but is now expressed in
terms of `Vector::axcpy` like so:
```rust
self.axcpy(a, x, N::one(), b)
```
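For reference, a hedged, element-wise sketch of the semantics implied
above (illustrative code, not the crate's generic implementation):
`axcpy(a, x, c, b)` computes `self = a * x * c + b * self`, keeping `a`
on the left and `c` on the right of each element of `x`; `axpy(a, x, b)`
is then the special case `c = 1`.
```rust
use std::ops::{Add, Mul};

// y[i] <- a * x[i] * c + b * y[i], without assuming that scalar
// multiplication commutes.
fn axcpy_slice<T>(y: &mut [T], a: T, x: &[T], c: T, b: T)
where
    T: Mul<Output = T> + Add<Output = T> + Clone,
{
    assert_eq!(y.len(), x.len());
    for (yi, xi) in y.iter_mut().zip(x) {
        *yi = a.clone() * xi.clone() * c.clone() + b.clone() * yi.clone();
    }
}

fn main() {
    let mut y = vec![1.0_f64, 2.0];
    // For commutative scalars this agrees with `axpy(a * c, x, b)`.
    axcpy_slice(&mut y, 2.0, &[10.0, 20.0], 3.0, 0.5);
    assert_eq!(y, vec![60.5, 121.0]);
}
```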
When creating a matrix with only one zero dimension, we end up with a
matrix with a total size of zero, but a non-zero stride for elements.
While such a matrix can never actually have any elements, we need to be
careful with how we use the pointer associated with it.
Since such a pointer will always be dangling, it can never be used with
`ptr.offset`, which requires that the pointer be in bounds or one past
the end of an allocation; violating this is undefined behavior.
This commit adds checks before the uses of `ptr.offset`. If we ever
need to offset from a pointer when our actual allocation size is zero,
we skip offsetting and return the original pointer. This is fine
because any actual use of the original or offset pointer would already
be undefined behavior - we should never be trying to dereference the
pointer associated with a zero-size matrix.
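A minimal sketch of the guard, with an illustrative helper name rather
than the crate's actual code:
```rust
/// Returns a pointer to the element at `index`, guarding the zero-size case.
///
/// # Safety
/// When `len > 0`, `base.offset(index)` must stay within (or one past the
/// end of) the allocation that `base` points into.
unsafe fn element_ptr<T>(base: *const T, len: usize, index: isize) -> *const T {
    if len == 0 {
        // `base` is dangling: offsetting it would already be UB, and no
        // element of a zero-size matrix can ever be read, so return it as-is.
        base
    } else {
        base.offset(index)
    }
}
```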
This issue was caught by running `cargo miri test` on the project.
The eigenvalue problem is solved in two different methods, each of
which computes the discriminant of the underlying quadratic equation in
a different way. Use the computation that is considered more stable.
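For instance, for a 2x2 matrix [[a, b], [c, d]] the eigenvalues solve
λ² − (a + d)λ + (ad − bc) = 0, and the discriminant can be computed in
two algebraically equal ways, one of which avoids subtracting two large,
nearly equal quantities. A hedged illustration (not the library's code):
```rust
// Discriminant of x^2 - (a + d)x + (a*d - b*c) for [[a, b], [c, d]].
fn discriminant_naive(a: f64, b: f64, c: f64, d: f64) -> f64 {
    let trace = a + d;
    let det = a * d - b * c;
    // Cancellation-prone when trace^2 and 4*det are close in magnitude.
    trace * trace - 4.0 * det
}

fn discriminant_expanded(a: f64, b: f64, c: f64, d: f64) -> f64 {
    // Algebraically equal: (a + d)^2 - 4*(a*d - b*c) = (a - d)^2 + 4*b*c.
    let diff = a - d;
    diff * diff + 4.0 * b * c
}

fn main() {
    // Exact answer is 5.0; the naive form loses it to cancellation.
    let (a, b, c, d) = (1.0e8, 1.0, 1.0, 1.0e8 + 1.0);
    println!("naive:    {}", discriminant_naive(a, b, c, d));
    println!("expanded: {}", discriminant_expanded(a, b, c, d));
}
```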