Compare commits

...

101 Commits

Author SHA1 Message Date
Sébastien Crozet
48c8f6a505
Release v0.33.0 (#1417) 2024-06-23 15:28:37 +02:00
Sébastien Crozet
5ad68f486d
Introduce Storage::forget_elements() to fix memory leak in Matrix::generic_resize() (#1416)
* Add Storage::forget

* Adjust implementations of Reallocator to use Storage::forget

* Fix formatting

* Rename forget to forget_elements and add safety comments

* Update comments in Reallocator implementations

---------

Co-authored-by: Nick Mertin <nickmertin@gmail.com>
2024-06-23 11:46:20 +02:00
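The fix concerns how element ownership is handed off between the old and new storage during a resize. As a hedged, plain-Rust analogy of that hand-off (this is not the nalgebra API; the names are illustrative):

```rust
// Plain-Rust analogy (not the nalgebra API): when elements move out of an old
// buffer into a new one, the old buffer must "forget" the elements it no
// longer owns while still freeing its own allocation. Draining does exactly
// that: ownership of each element moves to `dst`, and `src`'s now-empty
// allocation is dropped normally -- no leak, no double drop.
fn move_elements<T>(mut src: Vec<T>, extra_capacity: usize) -> Vec<T> {
    let mut dst = Vec::with_capacity(src.len() + extra_capacity);
    dst.extend(src.drain(..));
    dst
}

fn main() {
    let resized = move_elements(vec![String::from("a"), String::from("b")], 2);
    assert_eq!(resized.len(), 2);
    assert!(resized.capacity() >= 4);
}
```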
Andreas Borgen Longva
eb228faa2b
Improved stack! implementation, tests (#1375)
* Add macro for concatenating matrices

* Replace DimUnify with DimEq::representative

* Add some simple cat macro output generation tests

* Fix formatting in cat macro code

* Add random prefix to cat macro output

* Add simple quote_spanned for cat macro

* Use `generic_view_mut` in cat macro

* Fix clippy lints in cat macro

* Clean up documentation for cat macro

* Remove identity literal from cat macro

* Allow references in input to cat macro

* Rename cat macro to stack

* Add more stack macro tests

* Add comment to explain reason for prefix in stack! macro

* Refactor matrix!, stack! macros into separate modules

* Take all blocks by reference in stack! macro

* Make empty stack![] invocation well-defined

* Fix stack! macro incorrect reference to data

* More extensive tests for stack! macro

* Move nalgebra-macros tests to nalgebra tests

By testing matrix!, stack! macros etc. in nalgebra, we ensure that
these macros are used in the same way that users will be using them.

* Fix stack! code generation tests

* Add back nalgebra as dev-dependency of nalgebra-macros

* Fix accidental wrong matrix! macro references in docs

* Rewrite stack! documentation for clarity

* Formatting

* Skip formatting of macro, rustfmt messes it up

* Rewrite stack! impl for improved clarity, Span behavior

This improves error messages upon dimension mismatch, among other
things. I've also tried to make the implementation easier to understand,
adding some comments to help the reader understand the individual steps.

* Use SameNumberOfRows/Columns instead of DimEq in stack! macro

This gives more accurate compiler errors if matrix dimensions
are mismatched.

* Check that stack! panics at runtime for basic dimension mismatch

* Add suggested edge cases from initial PR to tests

* stack! impl: use fixed prefix everywhere

This ensures that the expected generated code in tests
is the actual generated code when used in the wild.

* nalgebra-macros: Remove clippy pedantic, fix clippy complaints

pedantic seems to be mostly intent on wasting the programmer's time

* Add stack! sanity tests for built-ins and Complex

* Fix formatting in test

* Improve readability of format_ident! calls in stack! impl

* fix trybuild tests

* chore: run tests with a specific rust version

* More trybuild fixes

---------

Co-authored-by: Birk Tjelmeland <git@birktj.no>
Co-authored-by: Sébastien Crozet <sebcrozet@dimforge.com>
2024-06-23 11:29:28 +02:00
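For reference, a hedged usage sketch of the macro described above (block layout and the `0` zero-block shorthand follow the commit descriptions; exact details may differ):

```rust
use nalgebra::{matrix, stack};

fn main() {
    let a = matrix![1.0, 2.0; 3.0, 4.0];
    let b = matrix![5.0];
    // Blocks are listed row by row, separated by `;`. A literal `0` stands for
    // a zero block whose dimensions are inferred from its neighbours.
    let block = stack![a, 0; 0, b];
    assert_eq!(block.nrows(), 3);
    assert_eq!(block.ncols(), 3);
}
```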
Christopher Durham
49906a35be
Fix glm::is_null epsilon test (#1350)
The existing implementation compares each component to zero with
an epsilon; effectively `glm::all(glm::is_comp_null(v, epsilon))`.
This probably isn't the desired semantics when calling `glm::is_null`;
rather, we want to determine if the magnitude of the vector is within
`epsilon` units of zero. It's the question of circle versus square.

This behavior matches that of OpenGL Mathematics.
2024-06-23 11:09:52 +02:00
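To illustrate the "circle versus square" distinction with standalone helpers (illustrative functions, not the nalgebra-glm API):

```rust
use nalgebra::Vector3;

// Component-wise test: true whenever every |component| <= eps (a square region).
fn is_comp_null(v: &Vector3<f32>, eps: f32) -> bool {
    v.iter().all(|c| c.abs() <= eps)
}

// Magnitude test: true only when the vector lies inside the ball of radius eps.
fn is_null(v: &Vector3<f32>, eps: f32) -> bool {
    v.norm_squared() <= eps * eps
}

fn main() {
    let v = Vector3::new(1.0e-3_f32, 1.0e-3, 1.0e-3);
    assert!(is_comp_null(&v, 1.0e-3)); // inside the square...
    assert!(!is_null(&v, 1.0e-3));     // ...but its norm (~1.73e-3) exceeds eps
}
```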
Sébastien Crozet
292abfbaa0
chore: update to simba 0.9 (#1415) 2024-06-22 19:06:20 +02:00
Bruce Mitchener
f812694959
deps: Update to itertools 0.13 (#1398) 2024-06-22 18:50:08 +02:00
Bruce Mitchener
9712aebd1f
Support conversion for glam 0.28 (#1409) 2024-06-22 18:49:46 +02:00
Yotam Ofek
5cb9dcbda1
Update syn to 2.0 (#1342) 2024-06-22 18:43:32 +02:00
Junfeng Liu
d1edb4fd7b Fix document of Rotation's transpose and inverse methods 2024-06-18 11:19:35 -07:00
Markus Mayer
42a9dbeb54 Fix typo in ArrayStorage documentation comment
Signed-off-by: Markus Mayer <widemeadows@gmail.com>
2024-06-17 12:27:10 -07:00
Sébastien Crozet
c23807ac5d
feat: use GAT to remove the scalar type T from the Allocator trait (#1397) 2024-06-12 11:16:06 +02:00
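This is a breaking change for downstream generic code: bounds of the form `Allocator<T, D>` become `Allocator<D>` (see the `reflect_wrt_hyperplane` example diff further down). A minimal sketch of the adjustment, with an illustrative function:

```rust
use nalgebra::{allocator::Allocator, DefaultAllocator, Dim, OVector, RealField};

// nalgebra <= 0.32 required `DefaultAllocator: Allocator<T, D>`; with the
// GAT-based trait, the scalar type T is no longer a trait parameter.
fn double<T: RealField, D: Dim>(v: &OVector<T, D>) -> OVector<T, D>
where
    DefaultAllocator: Allocator<D>,
{
    v.map(|x| x.clone() + x)
}
```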
Bruce Mitchener
28e993a4f5
Support conversion for glam 0.27 (#1390) 2024-06-12 10:58:39 +02:00
Andreas Borgen Longva
afc03cc403
Merge pull request #1384 from jnyfah/dev
Inverting a 4x4 matrix with `try_inverse_mut` doesn't leave `self` unchanged if the inversion fails.
2024-05-05 12:04:49 +02:00
Jennifer Chukwu
343eb214ef format 2024-05-05 09:39:20 +00:00
Jennifer Chukwu
914a7cf1fa add assert 2024-05-02 14:36:28 +00:00
Jennifer Chukwu
e3a08c9b60 test with a fixed input 2024-05-02 10:16:45 +00:00
Jennifer Chukwu
825d078294 add tests 2024-04-30 16:25:35 +00:00
Andreas Borgen Longva
9bc7f8b0d8
Merge pull request #1383 from derchr/column-major-fix
Fix false row-major comment in doc
2024-04-30 14:22:19 +02:00
Jennifer Chukwu
5e8779957e missed one cofactor 2024-04-28 16:02:46 +00:00
Adrian H
e84ea21c74
Fixed a typo in documentation in matrix_view.rs 2024-04-22 19:51:49 -07:00
Jennifer Chukwu
c21df4e1a3 remove else 2024-04-22 19:00:44 +00:00
Jennifer Chukwu
a0da49fb83 early return 2024-04-22 12:36:35 +00:00
Jennifer Chukwu
d331bbd7c1 inversion 2024-04-19 14:49:14 +00:00
Derek Christ
28d04b61cd Fix false row-major comment in doc
In the documentation of RawStorage::stride it was falsely claimed that
the storage of the matrix happens in a row-major fashion. In fact it is
column-major.
2024-04-18 18:07:21 +02:00
Sébastien Crozet
a803815bd1
chore: update changelog 2024-03-28 15:34:41 +01:00
Vollkornaffe
c475c4000c
Fix numerical issue on SVD with near-identity matrix (#1369)
* fix: Normalize the column once more

The column may not be normalized if the `factor` is on a scale of 1e-40.
Possibly, f32 just runs out of precision.

There is likely a better solution to the problem.

* chore: Add test that fails before fix

* chore: add comment providing details on the householder fix.

* chore: rename regression test

---------

Co-authored-by: Sébastien Crozet <sebcrozet@dimforge.com>
2024-03-28 15:26:11 +01:00
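A hedged sketch of the idea only (the actual change lives in nalgebra's Householder/SVD code; `stabilized_axis` is purely illustrative): dividing by a tiny factor can leave the result slightly non-unit in `f32`, so it is normalized once more.

```rust
use nalgebra::Vector3;

fn stabilized_axis(column: Vector3<f32>, factor: f32) -> Vector3<f32> {
    // `factor` can be on the order of 1e-40 for near-identity input.
    let axis = column / factor;
    // Normalize once more to absorb the rounding error introduced above.
    axis.try_normalize(0.0).unwrap_or(axis)
}
```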
tpdickso
749a9fee17
Merge pull request #1317 from yotamofek/swap-unchecked-ub
Fix UB in `RawStorageMut::swap_unchecked_linear`
2024-03-27 20:44:39 -04:00
Yotam Ofek
1cfc539a96 Fix type inference error in tests on rustc beta 2024-03-22 12:36:59 -07:00
Yotam Ofek
3651942670 Merge branch 'dev' of https://github.com/dimforge/nalgebra into swap-unchecked-ub 2024-03-22 17:56:35 +00:00
tpdickso
9948bf7e23
Merge pull request #1349 from CAD97/patch-3
Fix `glm::is_normalized` epsilon test
2024-03-20 23:01:15 -04:00
Yotam Ofek
095c561b60
Apply suggestions from code review
Co-authored-by: tpdickso <terence.dickson.prf@gmail.com>
2024-03-20 19:55:03 +02:00
Yotam Ofek
d1b5df480f Merge branch 'dev' of https://github.com/dimforge/nalgebra into swap-unchecked-ub 2024-03-20 16:37:44 +00:00
tpdickso
990afe6b26
Merge pull request #1371 from yotamofek/redundant-import
Fix redundant import errors in `nalgebra-glm`
2024-03-20 12:11:02 -04:00
Yotam Ofek
cf44429837 Fix redundant import errors in nalgebra-glm
More fallout from https://github.com/rust-lang/rust/issues/121708
Should make CI green again
2024-03-20 13:12:01 +00:00
Yotam Ofek
d884a7e2d0 fmt 2024-03-19 16:39:28 +00:00
Yotam Ofek
546d06b541 Update src/base/storage.rs
Co-authored-by: tpdickso <terence.dickson.prf@gmail.com>
2024-03-19 16:39:28 +00:00
Yotam Ofek
adc3a8103b Fix UB in RawStorageMut::swap_unchecked_linear 2024-03-19 16:39:28 +00:00
tpdickso
8c6f14fcd0
Merge pull request #1287 from mohe2015/triplet-iter-clone
Implement `Clone` for `CsrTripletIter` and `CscTripletIter`
2024-02-20 17:51:17 -05:00
tpdickso
ee486792a6
Merge pull request #1357 from Jondolf/glam025
Support Glam 0.25 type conversion
2024-02-13 22:25:42 -05:00
Joona Aalto
bbac38b2f5 Support Glam 0.25 type conversion 2024-01-27 21:11:47 +02:00
Moritz Hedtke
86bde5ff1d Implement Clone for CsrTripletIter and CscTripletIter 2024-01-14 22:37:58 +01:00
Christopher Durham
f578181351 Fix glm::is_normalized epsilon
The existing comparison bound of $\epsilon^2$ is improperly scaled for
testing an epsilon of the squared vector magnitude. Let $\epsilon$ be
our specified epsilon and $\delta$ be the permissible delta of the
squared magnitude. Thus, for a nearly-normalized vector, we have

$$\begin{align}
\sqrt{1 + \delta} &=  1 + \epsilon        \\
          \delta  &= (1 + \epsilon)^2 - 1 \\
          \delta  &=  \epsilon^2 + 2\epsilon
\text{ .}\end{align}$$

Since we only care about small epsilon, we can assume
that $\epsilon^2$ is small and just use $\delta = 2\epsilon$. And in
fact, [this is the bound used by GLM][GLM#isNormalized] (MIT license)
... except they're using `length` and not `length2` for some reason.

[GLM#isNormalized]: b06b775c1c/glm/gtx/vector_query.inl (L102)

If we stick an epsilon of `1.0e-6` into the current implementation,
this gives us a computed delta of `1.0e-12`: smaller than the `f32`
machine epsilon, and thus no different than direct float comparison
without epsilon. This also gives an effective epsilon of `5.0e-13`;
*much* less than the intended `1.0e-6` of permitted slack!
By doing a bit more algebra, we can find the effective epsilon is
$\sqrt{\texttt{epsilon}^2 + 1} - 1$. This patch makes the effective
epsilon $\sqrt{2\times\texttt{epsilon} + 1} - 1$ which still isn't
*perfect*, but it's effectively linear in the domain we care about,
only really making a practical difference above an epsilon of 10%.

TL;DR: the existing `is_normalized` considers a vector normalized if
the squared magnitude is within `epsilon*epsilon` of `1`. This is wrong
and it should be testing if it's within `2*epsilon`. This PR fixes it.

For absence of doubt, a comparison epsilon of $\texttt{epsilon}^2$ is
correct when comparing squared magnitude against zero, such as when
testing if a displacement vector is nearly zero.
2024-01-13 00:02:26 -05:00
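In code, the corrected check described above amounts to the following (an illustrative helper, not the nalgebra-glm signature):

```rust
use nalgebra::Vector3;

// A vector counts as normalized when |norm_squared - 1| <= 2*eps; the eps^2
// term from the derivation above is negligible for small eps.
fn is_normalized(v: &Vector3<f32>, eps: f32) -> bool {
    (v.norm_squared() - 1.0).abs() <= 2.0 * eps
}

fn main() {
    let v = Vector3::new(1.0 + 5.0e-7_f32, 0.0, 0.0);
    assert!(is_normalized(&v, 1.0e-6));   // within the intended slack
    assert!(!is_normalized(&v, 1.0e-12)); // a much tighter eps rejects it
}
```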
Ababwa
0b89950fca Correct less than or equal symbol in doc in vector_relational.rs
Less than or equal <=
2024-01-11 23:24:13 -08:00
Benjamin Saunders
7866bcee5c Remove CUDA support relying on abandoned toolchain 2024-01-11 23:09:17 -08:00
Benjamin De Roeck
a60870daf6
docs: Demonstrate correct function in to_homogeneous example (#1346) 2024-01-04 19:21:40 -08:00
Kurt Lawrence
6dce471297
Make OPoint call T::fmt to respect formatting modifiers (#1336) 2023-12-20 14:42:54 -08:00
Julian Knodt
1e0cb7bc09
Fix Clippy Warnings (#1300) 2023-12-16 13:54:38 -08:00
zachs18
a01fa48e33
Forward std feature to some deps. (#1321) 2023-12-10 19:35:54 -08:00
Bruce Mitchener
c3fe38b318 docs: Fix unbalanced backticks. 2023-12-10 14:04:53 -08:00
Sébastien Crozet
a91e3b0d89
Merge pull request #1315 from yotamofek/owned-view-iter
Allow creating matrix iter with an owned view
2023-11-12 23:29:41 +01:00
Sébastien Crozet
83eccc6b8f
Merge pull request #1312 from arscisca/dev-DefaultTrait
Implement Default trait for sparse matrix types
2023-11-12 23:27:37 +01:00
Sébastien Crozet
2e99320d01
Merge pull request #1314 from rasmusgo/fix-svd-near-zero
Fix bug in SVD related to values near zero
2023-11-12 23:18:59 +01:00
Sébastien Crozet
c5276c90e1 cargo fmt 2023-11-12 23:17:33 +01:00
Sébastien Crozet
06b8d38970 fix no-std builds. 2023-11-12 23:17:17 +01:00
Sébastien Crozet
469390f4b9 Check norm_squared instead of magnitude. 2023-11-12 23:12:52 +01:00
Sébastien Crozet
f8cd2d497d
Merge pull request #1308 from decathorpe/dev
Fix and clarify license in crate metadata and add missing license files
2023-11-12 23:04:32 +01:00
Yotam Ofek
1195eadd1a Allow creating matrix iter with an owned view 2023-11-12 08:19:29 +00:00
Rasmus Brönnegård
7ea9ecee08 Test for axes with zero magnitude 2023-11-09 01:20:44 +01:00
Rasmus Brönnegård
03c24fb369 Add unit tests for issue 1313 2023-11-09 00:53:42 +01:00
Rasmus Brönnegård
b6e094c82f Fix spelling in givens.rs 2023-11-09 00:48:13 +01:00
Alessandro Rocco Scisca
0887b875a5 Implement Default trait for sparse matrix types 2023-10-30 17:50:56 +00:00
Fabio Valentini
bad63b6423
Fix and clarify license in crate metadata and add missing license files 2023-10-24 18:33:53 +02:00
Sébastien Crozet
c6ff3eeb7e
Merge pull request #1265 from waywardmonkeys/fix-html-links
docs: Use intradoc links rather than HTML.
2023-09-30 18:24:28 +02:00
Sébastien Crozet
804e606c97
Merge pull request #1270 from waywardmonkeys/look-at-lh-is-left-handed
doc: Isometry's `look_at_lh` is left-handed.
2023-09-30 18:21:48 +02:00
Sébastien Crozet
d2d2571590
Merge pull request #1273 from waywardmonkeys/blackbox-criterion
Use std::hint::black_box consistently.
2023-09-30 18:20:09 +02:00
Sébastien Crozet
2043824058
Merge pull request #1281 from waywardmonkeys/clippy-single_component_path_imports
nalgebra-glm: Fix clippy single_component_path_imports.
2023-09-30 18:16:16 +02:00
Sébastien Crozet
1987ca29bb
Merge pull request #1282 from waywardmonkeys/clippy-needless-borrow
clippy: Fix needless_borrow warnings.
2023-09-30 18:15:42 +02:00
Sébastien Crozet
5a9d036226
Merge pull request #1283 from waywardmonkeys/dual-quaternion-wrong-self-convention
DualQuaternion: Fix to_vector self convention.
2023-09-30 18:13:23 +02:00
Sébastien Crozet
c9008ab859
Merge pull request #1284 from waywardmonkeys/glm-too-many-arguments
nalgebra-glm, clippy: Suppress too_many_arguments.
2023-09-30 17:56:16 +02:00
Sébastien Crozet
25749e3b89
Merge pull request #1288 from JulianKnodt/dev
Add `try_from_triplets_iter`
2023-09-30 17:55:41 +02:00
Sébastien Crozet
b39bd09eaa chore: swap test names 2023-09-30 17:55:04 +02:00
Sébastien Crozet
542f6cf437
Merge pull request #1302 from mgeier/doc-norm_squared
DOC: Use norm_squared() in its doctest
2023-09-30 17:44:41 +02:00
Matthias Geier
87796ace42 DOC: Use norm_squared() in its doctest 2023-09-30 13:01:11 +02:00
julianknodt
666b0fd2de Add try_from_triplets_iter
Calls `try_from_triplets` for now, and is mentioned in the documentation.
2023-08-22 22:19:42 -07:00
Bruce Mitchener
1b1d950f74 nalgebra-glm, clippy: Suppress too_many_arguments.
Matrix constructors need more args.
2023-08-19 01:00:01 +07:00
Bruce Mitchener
bfb84e8fc6 DualQuaternion: Fix to_vector self convention.
By taking a ref, we can avoid an extra copy on the caller side.
2023-08-19 00:52:59 +07:00
Bruce Mitchener
1d9a4bf6ec clippy: Fix needless_borrow warnings. 2023-08-19 00:36:24 +07:00
Bruce Mitchener
8609167053 nalgebra-glm: Fix clippy single_component_path_imports.
These imports are redundant and not needed.
2023-08-19 00:19:42 +07:00
Andreas Borgen Longva
f404bcbd6d
Merge pull request #1279 from waywardmonkeys/clippy-less-lazy
clippy: Don't need lazy eval for len calls.
2023-08-15 11:01:13 +02:00
Andreas Borgen Longva
1e40308118
Merge pull request #1278 from waywardmonkeys/clippy-needless-return
clippy: Fix needless_return warnings.
2023-08-15 11:00:24 +02:00
Bruce Mitchener
cb2ed212ed clippy: Don't need lazy eval for len calls. 2023-08-15 14:46:35 +07:00
Bruce Mitchener
76866ad878 clippy: Fix needless_return warnings. 2023-08-15 14:34:34 +07:00
Andreas Borgen Longva
6ac9d8995c
Merge pull request #1276 from waywardmonkeys/fix-rkyv-feature-doc-warnings
docs: Fix 2 warnings when building with rkyv.
2023-08-15 09:19:07 +02:00
Andreas Borgen Longva
02260161b1
Merge pull request #1277 from waywardmonkeys/unused-split
`split_at` is only used by Rayon code.
2023-08-15 09:18:54 +02:00
Bruce Mitchener
226761323f docs: Fix 2 warnings when building with rkyv. 2023-08-14 17:23:59 +07:00
Bruce Mitchener
a2fd72dfb9 split_at is only used by Rayon code.
This fixes an unused code warning. Since the code is `pub(crate)`,
it was only available within the crate and only used by the Rayon
code, so there is no functional change in compiling it only when
the right feature is enabled.

Also, fix a minor typo in some non-doc comments.
2023-08-14 17:21:45 +07:00
Andreas Borgen Longva
8d7763ab8f
Merge pull request #1275 from waywardmonkeys/no-default-features-unused-import-warnings
Fix import warnings when `--no-default-features`.
2023-08-14 08:57:45 +02:00
Andreas Borgen Longva
69b46cb7fa
Merge pull request #1260 from WarrenWeckesser/use-assert-macro
DOC: Fix compiler warning in the first example in lib.rs.
2023-08-14 08:53:24 +02:00
Andreas Borgen Longva
ec9a88c0ac
Merge pull request #1271 from waywardmonkeys/incorrect-usages-of-relative-eq
Use `assert_relative_eq!` instead of `relative_eq!`.
2023-08-14 08:51:22 +02:00
Andreas Borgen Longva
3fdeeca09c
Merge pull request #1272 from waywardmonkeys/unused-lifetimes
clippy: Remove unused lifetimes.
2023-08-14 08:36:50 +02:00
Bruce Mitchener
14b00f6bf6 Fix import warnings when --no-default-features. 2023-08-14 11:40:03 +07:00
Bruce Mitchener
9042d1424c Use std::hint::black_box consistently.
This also removes the `#![feature(bench_black_box)]`. This was
stabilized in Rust 1.66 and anyone building benchmarks will be
on that or later (as they previously would have been on nightly).

This also allows building `cargo build --all-targets` on stable
Rust as it no longer dies when hitting the feature addition in
the benchmarks.
2023-08-14 11:15:57 +07:00
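For reference, the stable replacement is a drop-in; a minimal sketch of the pattern used in the benchmarks (the data here is made up):

```rust
fn main() {
    let data = vec![1u64, 2, 3, 4];
    let mut sum = 0u64;
    for &x in &data {
        // std::hint::black_box (stable since Rust 1.66) keeps the optimizer
        // from eliminating the work being measured.
        sum += std::hint::black_box(x);
    }
    std::hint::black_box(sum);
}
```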
Bruce Mitchener
a51886ed3f clippy: Remove unused lifetimes. 2023-08-14 09:21:56 +07:00
Bruce Mitchener
8ba1459602 Use `assert_relative_eq!` instead of `relative_eq!`.
When testing for something, we need to use the assert form.
2023-08-12 22:48:16 +07:00
Bruce Mitchener
d0aa7f2090 doc: Isometry's look_at_lh is left-handed.
Fixes issue #734.
2023-08-12 22:00:17 +07:00
Andreas Borgen Longva
32a07aca3c
Merge pull request #1267 from waywardmonkeys/docs-constraints-to-be
docs: grammar: "Constrains ... to be"
2023-08-10 09:02:16 +02:00
Andreas Borgen Longva
cd450c9e27
Merge pull request #1266 from waywardmonkeys/improve-view-alias-docs
docs: Improve view alias docs.
2023-08-10 09:01:44 +02:00
Bruce Mitchener
8f59f4dcf6 docs: grammar: "Constrains ... to be"
(Also pick up a small typo in a non-doc comment in the same area
of code.)
2023-08-10 10:59:14 +07:00
Bruce Mitchener
c9c829c7a2 docs: Improve view alias docs.
* Indicate whether they are immutable/mutable clearly.
* Link to the other form (immutable link to mutable, mutable to
  immutable).
* Consistently include the text about it being an alias and to
  look elsewhere for the methods.
2023-08-10 09:26:15 +07:00
Bruce Mitchener
136a565579 docs: Use intradoc links rather than HTML.
This fixes almost all HTML links to be intradoc links that Rust
can verify during `cargo doc`. This will help prevent future
broken links.
2023-08-06 22:34:40 +07:00
warren
ec5d2eb4ae DOC: Fix compiler warning in the first example in lib.rs. 2023-07-09 09:40:32 -04:00
213 changed files with 5439 additions and 3489 deletions

View File

@ -13,52 +13,59 @@ jobs:
check-fmt:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v2
- name: Check formatting
run: cargo fmt -- --check
- uses: actions/checkout@v2
- name: Check formatting
run: cargo fmt -- --check
clippy:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v2
- name: Install clippy
run: rustup component add clippy
- name: Run clippy
run: cargo clippy
- uses: actions/checkout@v2
- name: Install clippy
run: rustup component add clippy
- name: Run clippy
run: cargo clippy
build-nalgebra:
runs-on: ubuntu-latest
# env:
# RUSTFLAGS: -D warnings
# env:
# RUSTFLAGS: -D warnings
steps:
- uses: actions/checkout@v2
- name: Build --no-default-feature
run: cargo build --no-default-features;
- name: Build (default features)
run: cargo build;
- name: Build --features serde-serialize
run: cargo build --features serde-serialize
- name: Build nalgebra-lapack
run: cd nalgebra-lapack; cargo build;
- name: Build nalgebra-sparse --no-default-features
run: cd nalgebra-sparse; cargo build --no-default-features;
- name: Build nalgebra-sparse (default features)
run: cd nalgebra-sparse; cargo build;
- name: Build nalgebra-sparse --all-features
run: cd nalgebra-sparse; cargo build --all-features;
- uses: actions/checkout@v2
- name: Build --no-default-feature
run: cargo build --no-default-features;
- name: Build (default features)
run: cargo build;
- name: Build --features serde-serialize
run: cargo build --features serde-serialize
- name: Build nalgebra-lapack
run: cd nalgebra-lapack; cargo build;
- name: Build nalgebra-sparse --no-default-features
run: cd nalgebra-sparse; cargo build --no-default-features;
- name: Build nalgebra-sparse (default features)
run: cd nalgebra-sparse; cargo build;
- name: Build nalgebra-sparse --all-features
run: cd nalgebra-sparse; cargo build --all-features;
# Run this on its own job because it alone takes a lot of time.
# So its best to let it run in parallel to the other jobs.
build-nalgebra-all-features:
runs-on: ubuntu-latest
steps:
# Needed because the --all-features build which enables cuda support.
- uses: Jimver/cuda-toolkit@v0.2.8
- uses: actions/checkout@v2
- run: cargo build --all-features;
- run: cargo build -p nalgebra-glm --all-features;
test-nalgebra:
runs-on: ubuntu-latest
# env:
# RUSTFLAGS: -D warnings
# env:
# RUSTFLAGS: -D warnings
steps:
# Tests are run with a specific version of the compiler to avoid
# trybuild errors when a new compiler version is out. This can be
# bumped as needed after running the tests with TRYBUILD=overwrite
# to re-generate the error reference.
- name: Select rustc version
uses: actions-rs/toolchain@v1
with:
toolchain: 1.79.0
override: true
- uses: actions/checkout@v2
- name: test
run: cargo test --features arbitrary,rand,serde-serialize,sparse,debug,io,compare,libm,proptest-support,slow-tests,rkyv-safe-deser,rayon;
@ -87,8 +94,8 @@ jobs:
run: cargo test -p nalgebra-macros
build-wasm:
runs-on: ubuntu-latest
# env:
# RUSTFLAGS: -D warnings
# env:
# RUSTFLAGS: -D warnings
steps:
- uses: actions/checkout@v2
- run: rustup target add wasm32-unknown-unknown
@ -120,26 +127,9 @@ jobs:
run: xargo build --verbose --no-default-features -p nalgebra-glm --target=x86_64-unknown-linux-gnu;
- name: build thumbv7em-none-eabihf nalgebra-glm
run: xargo build --verbose --no-default-features -p nalgebra-glm --target=thumbv7em-none-eabihf;
build-cuda:
runs-on: ubuntu-latest
steps:
- uses: Jimver/cuda-toolkit@v0.2.8
with:
cuda: '11.5.0'
- name: Install nightly-2021-12-04
uses: actions-rs/toolchain@v1
with:
toolchain: nightly-2021-12-04
override: true
- uses: actions/checkout@v2
- run: rustup target add nvptx64-nvidia-cuda
- run: cargo build --no-default-features --features cuda
- run: cargo build --no-default-features --features cuda --target=nvptx64-nvidia-cuda
env:
CUDA_ARCH: "350"
docs:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v2
- name: Generate documentation
run: cargo doc
- uses: actions/checkout@v2
- name: Generate documentation
run: cargo doc

File diff suppressed because it is too large.

View File

@ -1,16 +1,16 @@
[package]
name = "nalgebra"
version = "0.32.3"
authors = [ "Sébastien Crozet <developer@crozet.re>" ]
name = "nalgebra"
version = "0.33.0"
authors = ["Sébastien Crozet <developer@crozet.re>"]
description = "General-purpose linear algebra library with transformations and statically-sized or dynamically-sized matrices."
documentation = "https://www.nalgebra.org/docs"
homepage = "https://nalgebra.org"
repository = "https://github.com/dimforge/nalgebra"
readme = "README.md"
categories = [ "science", "mathematics", "wasm", "no-std" ]
keywords = [ "linear", "algebra", "matrix", "vector", "math" ]
license = "BSD-3-Clause"
categories = ["science", "mathematics", "wasm", "no-std"]
keywords = ["linear", "algebra", "matrix", "vector", "math"]
license = "Apache-2.0"
edition = "2018"
exclude = ["/ci/*", "/.travis.yml", "/Makefile"]
@ -22,104 +22,112 @@ name = "nalgebra"
path = "src/lib.rs"
[features]
default = [ "std", "macros" ]
std = [ "matrixmultiply", "simba/std" ]
sparse = [ ]
debug = [ "approx/num-complex", "rand" ]
alloc = [ ]
io = [ "pest", "pest_derive" ]
compare = [ "matrixcompare-core" ]
libm = [ "simba/libm" ]
libm-force = [ "simba/libm_force" ]
macros = [ "nalgebra-macros" ]
cuda = [ "cust_core", "simba/cuda" ]
default = ["std", "macros"]
std = ["matrixmultiply", "num-traits/std", "num-complex/std", "num-rational/std", "approx/std", "simba/std"]
sparse = []
debug = ["approx/num-complex", "rand"]
alloc = []
io = ["pest", "pest_derive"]
compare = ["matrixcompare-core"]
libm = ["simba/libm"]
libm-force = ["simba/libm_force"]
macros = ["nalgebra-macros"]
# Conversion
convert-mint = [ "mint" ]
convert-bytemuck = [ "bytemuck" ]
convert-glam014 = [ "glam014" ]
convert-glam015 = [ "glam015" ]
convert-glam016 = [ "glam016" ]
convert-glam017 = [ "glam017" ]
convert-glam018 = [ "glam018" ]
convert-glam019 = [ "glam019" ]
convert-glam020 = [ "glam020" ]
convert-glam021 = [ "glam021" ]
convert-glam022 = [ "glam022" ]
convert-glam023 = [ "glam023" ]
convert-glam024 = [ "glam024" ]
convert-mint = ["mint"]
convert-bytemuck = ["bytemuck"]
convert-glam014 = ["glam014"]
convert-glam015 = ["glam015"]
convert-glam016 = ["glam016"]
convert-glam017 = ["glam017"]
convert-glam018 = ["glam018"]
convert-glam019 = ["glam019"]
convert-glam020 = ["glam020"]
convert-glam021 = ["glam021"]
convert-glam022 = ["glam022"]
convert-glam023 = ["glam023"]
convert-glam024 = ["glam024"]
convert-glam025 = ["glam025"]
convert-glam027 = ["glam027"]
convert-glam028 = ["glam028"]
# Serialization
## To use serde in a #[no-std] environment, enable the
## `serde-serialize-no-std` feature instead of `serde-serialize`.
## Serialization of dynamically-sized matrices/vectors require
## `serde-serialize`.
serde-serialize-no-std = [ "serde", "num-complex/serde" ]
serde-serialize = [ "serde-serialize-no-std", "serde/std" ]
rkyv-serialize-no-std = [ "rkyv/size_32" ]
rkyv-serialize = [ "rkyv-serialize-no-std", "rkyv/std", "rkyv/validation" ]
serde-serialize-no-std = ["serde", "num-complex/serde"]
serde-serialize = ["serde-serialize-no-std", "serde/std"]
rkyv-serialize-no-std = ["rkyv/size_32"]
rkyv-serialize = ["rkyv-serialize-no-std", "rkyv/std", "rkyv/validation"]
# Randomness
## To use rand in a #[no-std] environment, enable the
## `rand-no-std` feature instead of `rand`.
rand-no-std = [ "rand-package" ]
rand = [ "rand-no-std", "rand-package/std", "rand-package/std_rng", "rand_distr" ]
rand-no-std = ["rand-package"]
rand = ["rand-no-std", "rand-package/std", "rand-package/std_rng", "rand_distr"]
# Tests
arbitrary = [ "quickcheck" ]
proptest-support = [ "proptest" ]
slow-tests = []
rkyv-safe-deser = [ "rkyv-serialize", "rkyv/validation" ]
arbitrary = ["quickcheck"]
proptest-support = ["proptest"]
slow-tests = []
rkyv-safe-deser = ["rkyv-serialize", "rkyv/validation"]
[dependencies]
nalgebra-macros = { version = "0.2.1", path = "nalgebra-macros", optional = true }
typenum = "1.12"
rand-package = { package = "rand", version = "0.8", optional = true, default-features = false }
num-traits = { version = "0.2", default-features = false }
num-complex = { version = "0.4", default-features = false }
num-rational = { version = "0.4", default-features = false }
approx = { version = "0.5", default-features = false }
simba = { version = "0.8", default-features = false }
alga = { version = "0.9", default-features = false, optional = true }
rand_distr = { version = "0.4", default-features = false, optional = true }
nalgebra-macros = { version = "0.2.2", path = "nalgebra-macros", optional = true }
typenum = "1.12"
rand-package = { package = "rand", version = "0.8", optional = true, default-features = false }
num-traits = { version = "0.2", default-features = false }
num-complex = { version = "0.4", default-features = false }
num-rational = { version = "0.4", default-features = false }
approx = { version = "0.5", default-features = false }
simba = { version = "0.9", default-features = false }
alga = { version = "0.9", default-features = false, optional = true }
rand_distr = { version = "0.4", default-features = false, optional = true }
matrixmultiply = { version = "0.3", optional = true }
serde = { version = "1.0", default-features = false, features = [ "derive" ], optional = true }
rkyv = { version = "0.7.41", default-features = false, optional = true }
mint = { version = "0.5", optional = true }
quickcheck = { version = "1", optional = true }
pest = { version = "2", optional = true }
pest_derive = { version = "2", optional = true }
bytemuck = { version = "1.5", optional = true }
serde = { version = "1.0", default-features = false, features = ["derive"], optional = true }
rkyv = { version = "0.7.41", default-features = false, optional = true }
mint = { version = "0.5", optional = true }
quickcheck = { version = "1", optional = true }
pest = { version = "2", optional = true }
pest_derive = { version = "2", optional = true }
bytemuck = { version = "1.5", optional = true }
matrixcompare-core = { version = "0.1", optional = true }
proptest = { version = "1", optional = true, default-features = false, features = ["std"] }
glam014 = { package = "glam", version = "0.14", optional = true }
glam015 = { package = "glam", version = "0.15", optional = true }
glam016 = { package = "glam", version = "0.16", optional = true }
glam017 = { package = "glam", version = "0.17", optional = true }
glam018 = { package = "glam", version = "0.18", optional = true }
glam019 = { package = "glam", version = "0.19", optional = true }
glam020 = { package = "glam", version = "0.20", optional = true }
glam021 = { package = "glam", version = "0.21", optional = true }
glam022 = { package = "glam", version = "0.22", optional = true }
glam023 = { package = "glam", version = "0.23", optional = true }
glam024 = { package = "glam", version = "0.24", optional = true }
cust_core = { version = "0.1", optional = true }
rayon = { version = "1.6", optional = true }
proptest = { version = "1", optional = true, default-features = false, features = ["std"] }
glam014 = { package = "glam", version = "0.14", optional = true }
glam015 = { package = "glam", version = "0.15", optional = true }
glam016 = { package = "glam", version = "0.16", optional = true }
glam017 = { package = "glam", version = "0.17", optional = true }
glam018 = { package = "glam", version = "0.18", optional = true }
glam019 = { package = "glam", version = "0.19", optional = true }
glam020 = { package = "glam", version = "0.20", optional = true }
glam021 = { package = "glam", version = "0.21", optional = true }
glam022 = { package = "glam", version = "0.22", optional = true }
glam023 = { package = "glam", version = "0.23", optional = true }
glam024 = { package = "glam", version = "0.24", optional = true }
glam025 = { package = "glam", version = "0.25", optional = true }
glam027 = { package = "glam", version = "0.27", optional = true }
glam028 = { package = "glam", version = "0.28", optional = true }
rayon = { version = "1.6", optional = true }
[dev-dependencies]
serde_json = "1.0"
rand_xorshift = "0.3"
rand_isaac = "0.3"
criterion = { version = "0.4", features = ["html_reports"] }
nalgebra = { path = ".", features = ["debug", "compare", "rand", "macros"]}
nalgebra = { path = ".", features = ["debug", "compare", "rand", "macros"] }
# For matrix comparison macro
matrixcompare = "0.3.0"
itertools = "0.10"
itertools = "0.13"
# For macro testing
trybuild = "1.0.90"
cool_asserts = "2.0.3"
[workspace]
members = [ "nalgebra-lapack", "nalgebra-glm", "nalgebra-sparse", "nalgebra-macros" ]
members = ["nalgebra-lapack", "nalgebra-glm", "nalgebra-sparse", "nalgebra-macros"]
resolver = "2"
[[example]]

View File

@ -5,9 +5,6 @@
<a href="https://discord.gg/vt9DJSW">
<img src="https://img.shields.io/discord/507548572338880513.svg?logo=discord&colorB=7289DA">
</a>
<a href="https://circleci.com/gh/dimforge/nalgebra">
<img src="https://circleci.com/gh/dimforge/nalgebra.svg?style=svg" alt="Build status">
</a>
<a href="https://crates.io/crates/nalgebra">
<img src="https://img.shields.io/crates/v/nalgebra.svg?style=flat-square" alt="crates.io">
</a>
@ -17,7 +14,7 @@
</p>
<p align = "center">
<strong>
<a href="https://nalgebra.org">Users guide</a> | <a href="https://docs.rs/nalgebra/latest/nalgebra/">Documentation</a> | <a href="https://discourse.nphysics.org/c/nalgebra">Forum</a>
<a href="https://nalgebra.org">Users guide</a> | <a href="https://docs.rs/nalgebra/latest/nalgebra/">Documentation</a>
</strong>
</p>

View File

@ -142,7 +142,7 @@ fn iter(bench: &mut criterion::Criterion) {
bench.bench_function("iter", move |bh| {
bh.iter(|| {
for value in a.iter() {
criterion::black_box(value);
std::hint::black_box(value);
}
})
});
@ -154,7 +154,7 @@ fn iter_rev(bench: &mut criterion::Criterion) {
bench.bench_function("iter_rev", move |bh| {
bh.iter(|| {
for value in a.iter().rev() {
criterion::black_box(value);
std::hint::black_box(value);
}
})
});

View File

@ -1,4 +1,3 @@
#![feature(bench_black_box)]
#![allow(unused_macros)]
extern crate nalgebra as na;

View File

@ -1,10 +1,10 @@
[package]
name = "example-using-nalgebra"
name = "example-using-nalgebra"
version = "0.0.0"
authors = [ "You" ]
authors = ["You"]
[dependencies]
nalgebra = "0.32.0"
nalgebra = "0.33.0"
[[bin]]
name = "example"

View File

@ -12,7 +12,7 @@ fn reflect_wrt_hyperplane_with_dimensional_genericity<T: RealField, D: Dim>(
where
T: RealField,
D: Dim,
DefaultAllocator: Allocator<T, D>,
DefaultAllocator: Allocator<D>,
{
let n = plane_normal.as_ref(); // Get the underlying V.
vector - n * (n.dot(vector) * na::convert(2.0))
@ -28,7 +28,7 @@ where
}
/// Reflects a 3D vector wrt. the 3D plane with normal `plane_normal`.
/// /!\ This is an exact replicate of `reflect_wrt_hyperplane2, but for 3D.
/// /!\ This is an exact replicate of `reflect_wrt_hyperplane2`, but for 3D.
fn reflect_wrt_hyperplane3<T>(plane_normal: &Unit<Vector3<T>>, vector: &Vector3<T>) -> Vector3<T>
where
T: RealField,

View File

@ -1,6 +1,6 @@
[package]
name = "nalgebra-glm"
version = "0.18.0"
version = "0.19.0"
authors = ["sebcrozet <developer@crozet.re>"]
description = "A computer-graphics oriented API for nalgebra, inspired by the C++ GLM library."
@ -8,32 +8,31 @@ documentation = "https://www.nalgebra.org/docs"
homepage = "https://nalgebra.org"
repository = "https://github.com/dimforge/nalgebra"
readme = "../README.md"
categories = [ "science", "mathematics", "wasm", "no standard library" ]
keywords = [ "linear", "algebra", "matrix", "vector", "math" ]
license = "BSD-3-Clause"
categories = ["science", "mathematics", "wasm", "no standard library"]
keywords = ["linear", "algebra", "matrix", "vector", "math"]
license = "Apache-2.0"
edition = "2018"
[badges]
maintenance = { status = "actively-developed" }
[features]
default = [ "std" ]
std = [ "nalgebra/std", "simba/std" ]
arbitrary = [ "nalgebra/arbitrary" ]
serde-serialize = [ "nalgebra/serde-serialize-no-std" ]
cuda = [ "nalgebra/cuda" ]
default = ["std"]
std = ["nalgebra/std", "simba/std"]
arbitrary = ["nalgebra/arbitrary"]
serde-serialize = ["nalgebra/serde-serialize-no-std"]
# Conversion
convert-mint = [ "nalgebra/mint" ]
convert-bytemuck = [ "nalgebra/bytemuck" ]
convert-glam014 = [ "nalgebra/glam014" ]
convert-glam015 = [ "nalgebra/glam015" ]
convert-glam016 = [ "nalgebra/glam016" ]
convert-glam017 = [ "nalgebra/glam017" ]
convert-glam018 = [ "nalgebra/glam018" ]
convert-mint = ["nalgebra/mint"]
convert-bytemuck = ["nalgebra/bytemuck"]
convert-glam014 = ["nalgebra/glam014"]
convert-glam015 = ["nalgebra/glam015"]
convert-glam016 = ["nalgebra/glam016"]
convert-glam017 = ["nalgebra/glam017"]
convert-glam018 = ["nalgebra/glam018"]
[dependencies]
num-traits = { version = "0.2", default-features = false }
approx = { version = "0.5", default-features = false }
simba = { version = "0.8", default-features = false }
nalgebra = { path = "..", version = "0.32", default-features = false }
simba = { version = "0.9", default-features = false }
nalgebra = { path = "..", version = "0.33", default-features = false }

nalgebra-glm/LICENSE (symbolic link)
View File

@ -0,0 +1 @@
../LICENSE

View File

@ -5,38 +5,38 @@ use na::{
/// A matrix with components of type `T`. It has `R` rows, and `C` columns.
///
/// In this library, vectors, represented as [`TVec`](type.TVec.html) and
/// In this library, vectors, represented as [`TVec`] and
/// friends, are also matrices. Operations that operate on a matrix will
/// also work on a vector.
///
/// # See also:
///
/// * [`TMat2`](type.TMat2.html)
/// * [`TMat2x2`](type.TMat2x2.html)
/// * [`TMat2x3`](type.TMat2x3.html)
/// * [`TMat2x4`](type.TMat2x4.html)
/// * [`TMat3`](type.TMat3.html)
/// * [`TMat3x2`](type.TMat3x2.html)
/// * [`TMat3x3`](type.TMat3x3.html)
/// * [`TMat3x4`](type.TMat3x4.html)
/// * [`TMat4`](type.TMat4.html)
/// * [`TMat4x2`](type.TMat4x2.html)
/// * [`TMat4x3`](type.TMat4x3.html)
/// * [`TMat4x4`](type.TMat4x4.html)
/// * [`TVec`](type.TVec.html)
/// * [`TMat2`]
/// * [`TMat2x2`]
/// * [`TMat2x3`]
/// * [`TMat2x4`]
/// * [`TMat3`]
/// * [`TMat3x2`]
/// * [`TMat3x3`]
/// * [`TMat3x4`]
/// * [`TMat4`]
/// * [`TMat4x2`]
/// * [`TMat4x3`]
/// * [`TMat4x4`]
/// * [`TVec`]
pub type TMat<T, const R: usize, const C: usize> = SMatrix<T, R, C>;
/// A column vector with components of type `T`. It has `D` rows (and one column).
///
/// In this library, vectors are represented as a single column matrix, so
/// operations on [`TMat`](type.TMat.html) are also valid on vectors.
/// operations on [`TMat`] are also valid on vectors.
///
/// # See also:
///
/// * [`TMat`](type.TMat.html)
/// * [`TVec1`](type.TVec1.html)
/// * [`TVec2`](type.TVec2.html)
/// * [`TVec3`](type.TVec3.html)
/// * [`TVec4`](type.TVec4.html)
/// * [`TMat`]
/// * [`TVec1`]
/// * [`TVec2`]
/// * [`TVec3`]
/// * [`TVec4`]
pub type TVec<T, const R: usize> = SVector<T, R>;
/// A quaternion with components of type `T`.
pub type Qua<T> = Quaternion<T>;
@ -47,28 +47,28 @@ pub type Qua<T> = Quaternion<T>;
///
/// ## Constructors:
///
/// * [`make_vec1`](fn.make_vec1.html)
/// * [`vec1`](fn.vec1.html)
/// * [`vec2_to_vec1`](fn.vec2_to_vec1.html)
/// * [`vec3_to_vec1`](fn.vec3_to_vec1.html)
/// * [`vec4_to_vec1`](fn.vec4_to_vec1.html)
/// * [`make_vec1()`](crate::make_vec1)
/// * [`vec1()`](crate::vec1)
/// * [`vec2_to_vec1()`](crate::vec2_to_vec1)
/// * [`vec3_to_vec1()`](crate::vec3_to_vec1)
/// * [`vec4_to_vec1()`](crate::vec4_to_vec1)
///
/// ## Related types:
///
/// * [`BVec1`](type.BVec1.html)
/// * [`DVec1`](type.DVec1.html)
/// * [`IVec1`](type.IVec1.html)
/// * [`I16Vec1`](type.I16Vec1.html)
/// * [`I32Vec1`](type.I32Vec1.html)
/// * [`I64Vec1`](type.I64Vec1.html)
/// * [`I8Vec1`](type.I8Vec1.html)
/// * [`TVec`](type.TVec.html)
/// * [`UVec1`](type.UVec1.html)
/// * [`U16Vec1`](type.U16Vec1.html)
/// * [`U32Vec1`](type.U32Vec1.html)
/// * [`U64Vec1`](type.U64Vec1.html)
/// * [`U8Vec1`](type.U8Vec1.html)
/// * [`Vec1`](type.Vec1.html)
/// * [`BVec1`]
/// * [`DVec1`]
/// * [`IVec1`]
/// * [`I16Vec1`]
/// * [`I32Vec1`]
/// * [`I64Vec1`]
/// * [`I8Vec1`]
/// * [`TVec`]
/// * [`UVec1`]
/// * [`U16Vec1`]
/// * [`U32Vec1`]
/// * [`U64Vec1`]
/// * [`U8Vec1`]
/// * [`Vec1`]
pub type TVec1<T> = TVec<T, 1>;
/// A 2D vector with components of type `T`.
///
@ -76,29 +76,28 @@ pub type TVec1<T> = TVec<T, 1>;
///
/// ## Constructors:
///
/// * [`make_vec2`](fn.make_vec2.html)
/// * [`vec2`](fn.vec2.html)
/// * [`vec1_to_vec2`](fn.vec1_to_vec2.html)
/// * [`vec3_to_vec2`](fn.vec3_to_vec2.html)
/// * [`vec4_to_vec2`](fn.vec4_to_vec2.html)
/// * [`make_vec2()`](crate::make_vec2)
/// * [`vec2()`](crate::vec2)
/// * [`vec1_to_vec2()`](crate::vec1_to_vec2)
/// * [`vec3_to_vec2()`](crate::vec3_to_vec2)
/// * [`vec4_to_vec2()`](crate::vec4_to_vec2)
///
/// ## Related types:
///
/// * [`vec2`](fn.vec2.html)
/// * [`BVec2`](type.BVec2.html)
/// * [`DVec2`](type.DVec2.html)
/// * [`IVec2`](type.IVec2.html)
/// * [`I16Vec2`](type.I16Vec2.html)
/// * [`I32Vec2`](type.I32Vec2.html)
/// * [`I64Vec2`](type.I64Vec2.html)
/// * [`I8Vec2`](type.I8Vec2.html)
/// * [`TVec`](type.TVec.html)
/// * [`UVec2`](type.UVec2.html)
/// * [`U16Vec2`](type.U16Vec2.html)
/// * [`U32Vec2`](type.U32Vec2.html)
/// * [`U64Vec2`](type.U64Vec2.html)
/// * [`U8Vec2`](type.U8Vec2.html)
/// * [`Vec2`](type.Vec2.html)
/// * [`BVec2`]
/// * [`DVec2`]
/// * [`IVec2`]
/// * [`I16Vec2`]
/// * [`I32Vec2`]
/// * [`I64Vec2`]
/// * [`I8Vec2`]
/// * [`TVec`]
/// * [`UVec2`]
/// * [`U16Vec2`]
/// * [`U32Vec2`]
/// * [`U64Vec2`]
/// * [`U8Vec2`]
/// * [`Vec2`]
pub type TVec2<T> = TVec<T, 2>;
/// A 3D vector with components of type `T`.
///
@ -106,29 +105,28 @@ pub type TVec2<T> = TVec<T, 2>;
///
/// ## Constructors:
///
/// * [`make_vec3`](fn.make_vec3.html)
/// * [`vec3`](fn.vec3.html)
/// * [`vec1_to_vec3`](fn.vec1_to_vec3.html)
/// * [`vec2_to_vec3`](fn.vec2_to_vec3.html)
/// * [`vec4_to_vec3`](fn.vec4_to_vec3.html)
/// * [`make_vec3()`](crate::make_vec3)
/// * [`vec3()`](crate::vec3)
/// * [`vec1_to_vec3()`](crate::vec1_to_vec3)
/// * [`vec2_to_vec3()`](crate::vec2_to_vec3)
/// * [`vec4_to_vec3()`](crate::vec4_to_vec3)
///
/// ## Related types:
///
/// * [`vec3`](fn.vec3.html)
/// * [`BVec3`](type.BVec3.html)
/// * [`DVec3`](type.DVec3.html)
/// * [`IVec3`](type.IVec3.html)
/// * [`I16Vec3`](type.I16Vec3.html)
/// * [`I32Vec3`](type.I32Vec3.html)
/// * [`I64Vec3`](type.I64Vec3.html)
/// * [`I8Vec3`](type.I8Vec3.html)
/// * [`TVec`](type.TVec.html)
/// * [`UVec3`](type.UVec3.html)
/// * [`U16Vec3`](type.U16Vec3.html)
/// * [`U32Vec3`](type.U32Vec3.html)
/// * [`U64Vec3`](type.U64Vec3.html)
/// * [`U8Vec3`](type.U8Vec3.html)
/// * [`Vec3`](type.Vec3.html)
/// * [`BVec3`]
/// * [`DVec3`]
/// * [`IVec3`]
/// * [`I16Vec3`]
/// * [`I32Vec3`]
/// * [`I64Vec3`]
/// * [`I8Vec3`]
/// * [`TVec`]
/// * [`UVec3`]
/// * [`U16Vec3`]
/// * [`U32Vec3`]
/// * [`U64Vec3`]
/// * [`U8Vec3`]
/// * [`Vec3`]
pub type TVec3<T> = TVec<T, 3>;
/// A 4D vector with components of type `T`.
///
@ -136,28 +134,27 @@ pub type TVec3<T> = TVec<T, 3>;
///
/// ## Constructors:
///
/// * [`make_vec4`](fn.make_vec4.html)
/// * [`vec4`](fn.vec4.html)
/// * [`vec1_to_vec4`](fn.vec1_to_vec4.html)
/// * [`vec2_to_vec4`](fn.vec2_to_vec4.html)
/// * [`vec3_to_vec4`](fn.vec3_to_vec4.html)
/// * [`make_vec4()`](crate::make_vec4)
/// * [`vec4()`](crate::vec4)
/// * [`vec1_to_vec4()`](crate::vec1_to_vec4)
/// * [`vec2_to_vec4()`](crate::vec2_to_vec4)
/// * [`vec3_to_vec4()`](crate::vec3_to_vec4)
///
/// ## Related types:
///
/// * [`vec4`](fn.vec4.html)
/// * [`BVec4`](type.BVec4.html)
/// * [`DVec4`](type.DVec4.html)
/// * [`IVec4`](type.IVec4.html)
/// * [`I16Vec4`](type.I16Vec4.html)
/// * [`I32Vec4`](type.I32Vec4.html)
/// * [`I64Vec4`](type.I64Vec4.html)
/// * [`I8Vec4`](type.I8Vec4.html)
/// * [`UVec4`](type.UVec4.html)
/// * [`U16Vec4`](type.U16Vec4.html)
/// * [`U32Vec4`](type.U32Vec4.html)
/// * [`U64Vec4`](type.U64Vec4.html)
/// * [`U8Vec4`](type.U8Vec4.html)
/// * [`Vec4`](type.Vec4.html)
/// * [`BVec4`]
/// * [`DVec4`]
/// * [`IVec4`]
/// * [`I16Vec4`]
/// * [`I32Vec4`]
/// * [`I64Vec4`]
/// * [`I8Vec4`]
/// * [`UVec4`]
/// * [`U16Vec4`]
/// * [`U32Vec4`]
/// * [`U64Vec4`]
/// * [`U8Vec4`]
/// * [`Vec4`]
pub type TVec4<T> = TVec<T, 4>;
/// A 1D vector with boolean components.
pub type BVec1 = TVec1<bool>;

View File

@ -1,5 +1,4 @@
use core::mem;
use na;
use crate::aliases::{TMat, TVec};
use crate::traits::Number;
@ -20,7 +19,7 @@ use crate::RealNumber;
///
/// # See also:
///
/// * [`sign`](fn.sign.html)
/// * [`sign()`]
pub fn abs<T: Number, const R: usize, const C: usize>(x: &TMat<T, R, C>) -> TMat<T, R, C> {
x.abs()
}
@ -37,11 +36,11 @@ pub fn abs<T: Number, const R: usize, const C: usize>(x: &TMat<T, R, C>) -> TMat
///
/// # See also:
///
/// * [`ceil`](fn.ceil.html)
/// * [`floor`](fn.floor.html)
/// * [`fract`](fn.fract.html)
/// * [`round`](fn.round.html)
/// * [`trunc`](fn.trunc.html)
/// * [`ceil()`]
/// * [`floor()`]
/// * [`fract()`]
/// * [`round()`]
/// * [`trunc()`]
pub fn ceil<T: RealNumber, const D: usize>(x: &TVec<T, D>) -> TVec<T, D> {
x.map(|x| x.ceil())
}
@ -65,8 +64,8 @@ pub fn ceil<T: RealNumber, const D: usize>(x: &TVec<T, D>) -> TVec<T, D> {
///
/// # See also:
///
/// * [`clamp`](fn.clamp.html)
/// * [`clamp_vec`](fn.clamp_vec.html)
/// * [`clamp()`]
/// * [`clamp_vec()`]
pub fn clamp_scalar<T: Number>(x: T, min_val: T, max_val: T) -> T {
na::clamp(x, min_val, max_val)
}
@ -89,8 +88,8 @@ pub fn clamp_scalar<T: Number>(x: T, min_val: T, max_val: T) -> T {
///
/// # See also:
///
/// * [`clamp_scalar`](fn.clamp_scalar.html)
/// * [`clamp_vec`](fn.clamp_vec.html)
/// * [`clamp_scalar()`]
/// * [`clamp_vec()`]
pub fn clamp<T: Number, const D: usize>(x: &TVec<T, D>, min_val: T, max_val: T) -> TVec<T, D> {
x.map(|x| na::clamp(x, min_val, max_val))
}
@ -120,8 +119,8 @@ pub fn clamp<T: Number, const D: usize>(x: &TVec<T, D>, min_val: T, max_val: T)
///
/// # See also:
///
/// * [`clamp_scalar`](fn.clamp_scalar.html)
/// * [`clamp`](fn.clamp.html)
/// * [`clamp_scalar()`]
/// * [`clamp()`]
pub fn clamp_vec<T: Number, const D: usize>(
x: &TVec<T, D>,
min_val: &TVec<T, D>,
@ -136,13 +135,13 @@ pub fn clamp_vec<T: Number, const D: usize>(
///
/// # See also:
///
/// * [`float_bits_to_int_vec`](fn.float_bits_to_int_vec.html)
/// * [`float_bits_to_uint`](fn.float_bits_to_uint.html)
/// * [`float_bits_to_uint_vec`](fn.float_bits_to_uint_vec.html)
/// * [`int_bits_to_float`](fn.int_bits_to_float.html)
/// * [`int_bits_to_float_vec`](fn.int_bits_to_float_vec.html)
/// * [`uint_bits_to_float`](fn.uint_bits_to_float.html)
/// * [`uint_bits_to_float_scalar`](fn.uint_bits_to_float_scalar.html)
/// * [`float_bits_to_int_vec()`]
/// * [`float_bits_to_uint()`]
/// * [`float_bits_to_uint_vec()`]
/// * [`int_bits_to_float()`]
/// * [`int_bits_to_float_vec()`]
/// * [`uint_bits_to_float()`]
/// * [`uint_bits_to_float_scalar()`]
pub fn float_bits_to_int(v: f32) -> i32 {
unsafe { mem::transmute(v) }
}
@ -153,13 +152,13 @@ pub fn float_bits_to_int(v: f32) -> i32 {
///
/// # See also:
///
/// * [`float_bits_to_int`](fn.float_bits_to_int.html)
/// * [`float_bits_to_uint`](fn.float_bits_to_uint.html)
/// * [`float_bits_to_uint_vec`](fn.float_bits_to_uint_vec.html)
/// * [`int_bits_to_float`](fn.int_bits_to_float.html)
/// * [`int_bits_to_float_vec`](fn.int_bits_to_float_vec.html)
/// * [`uint_bits_to_float`](fn.uint_bits_to_float.html)
/// * [`uint_bits_to_float_scalar`](fn.uint_bits_to_float_scalar.html)
/// * [`float_bits_to_int()`]
/// * [`float_bits_to_uint()`]
/// * [`float_bits_to_uint_vec()`]
/// * [`int_bits_to_float()`]
/// * [`int_bits_to_float_vec()`]
/// * [`uint_bits_to_float()`]
/// * [`uint_bits_to_float_scalar()`]
pub fn float_bits_to_int_vec<const D: usize>(v: &TVec<f32, D>) -> TVec<i32, D> {
v.map(float_bits_to_int)
}
@ -170,13 +169,13 @@ pub fn float_bits_to_int_vec<const D: usize>(v: &TVec<f32, D>) -> TVec<i32, D> {
///
/// # See also:
///
/// * [`float_bits_to_int`](fn.float_bits_to_int.html)
/// * [`float_bits_to_int_vec`](fn.float_bits_to_int_vec.html)
/// * [`float_bits_to_uint_vec`](fn.float_bits_to_uint_vec.html)
/// * [`int_bits_to_float`](fn.int_bits_to_float.html)
/// * [`int_bits_to_float_vec`](fn.int_bits_to_float_vec.html)
/// * [`uint_bits_to_float`](fn.uint_bits_to_float.html)
/// * [`uint_bits_to_float_scalar`](fn.uint_bits_to_float_scalar.html)
/// * [`float_bits_to_int()`]
/// * [`float_bits_to_int_vec()`]
/// * [`float_bits_to_uint_vec()`]
/// * [`int_bits_to_float()`]
/// * [`int_bits_to_float_vec()`]
/// * [`uint_bits_to_float()`]
/// * [`uint_bits_to_float_scalar()`]
pub fn float_bits_to_uint(v: f32) -> u32 {
unsafe { mem::transmute(v) }
}
@ -187,13 +186,13 @@ pub fn float_bits_to_uint(v: f32) -> u32 {
///
/// # See also:
///
/// * [`float_bits_to_int`](fn.float_bits_to_int.html)
/// * [`float_bits_to_int_vec`](fn.float_bits_to_int_vec.html)
/// * [`float_bits_to_uint`](fn.float_bits_to_uint.html)
/// * [`int_bits_to_float`](fn.int_bits_to_float.html)
/// * [`int_bits_to_float_vec`](fn.int_bits_to_float_vec.html)
/// * [`uint_bits_to_float`](fn.uint_bits_to_float.html)
/// * [`uint_bits_to_float_scalar`](fn.uint_bits_to_float_scalar.html)
/// * [`float_bits_to_int()`]
/// * [`float_bits_to_int_vec()`]
/// * [`float_bits_to_uint()`]
/// * [`int_bits_to_float()`]
/// * [`int_bits_to_float_vec()`]
/// * [`uint_bits_to_float()`]
/// * [`uint_bits_to_float_scalar()`]
pub fn float_bits_to_uint_vec<const D: usize>(v: &TVec<f32, D>) -> TVec<u32, D> {
v.map(float_bits_to_uint)
}
@ -210,10 +209,10 @@ pub fn float_bits_to_uint_vec<const D: usize>(v: &TVec<f32, D>) -> TVec<u32, D>
///
/// # See also:
///
/// * [`ceil`](fn.ceil.html)
/// * [`fract`](fn.fract.html)
/// * [`round`](fn.round.html)
/// * [`trunc`](fn.trunc.html)
/// * [`ceil()`]
/// * [`fract()`]
/// * [`round()`]
/// * [`trunc()`]
pub fn floor<T: RealNumber, const D: usize>(x: &TVec<T, D>) -> TVec<T, D> {
x.map(|x| x.floor())
}
@ -236,10 +235,10 @@ pub fn floor<T: RealNumber, const D: usize>(x: &TVec<T, D>) -> TVec<T, D> {
///
/// # See also:
///
/// * [`ceil`](fn.ceil.html)
/// * [`floor`](fn.floor.html)
/// * [`round`](fn.round.html)
/// * [`trunc`](fn.trunc.html)
/// * [`ceil()`]
/// * [`floor()`]
/// * [`round()`]
/// * [`trunc()`]
pub fn fract<T: RealNumber, const D: usize>(x: &TVec<T, D>) -> TVec<T, D> {
x.map(|x| x.fract())
}
@ -258,13 +257,13 @@ pub fn fract<T: RealNumber, const D: usize>(x: &TVec<T, D>) -> TVec<T, D> {
///
/// # See also:
///
/// * [`float_bits_to_int`](fn.float_bits_to_int.html)
/// * [`float_bits_to_int_vec`](fn.float_bits_to_int_vec.html)
/// * [`float_bits_to_uint`](fn.float_bits_to_uint.html)
/// * [`float_bits_to_uint_vec`](fn.float_bits_to_uint_vec.html)
/// * [`int_bits_to_float_vec`](fn.int_bits_to_float_vec.html)
/// * [`uint_bits_to_float`](fn.uint_bits_to_float.html)
/// * [`uint_bits_to_float_scalar`](fn.uint_bits_to_float_scalar.html)
/// * [`float_bits_to_int()`]
/// * [`float_bits_to_int_vec()`]
/// * [`float_bits_to_uint()`]
/// * [`float_bits_to_uint_vec()`]
/// * [`int_bits_to_float_vec()`]
/// * [`uint_bits_to_float()`]
/// * [`uint_bits_to_float_scalar()`]
pub fn int_bits_to_float(v: i32) -> f32 {
f32::from_bits(v as u32)
}
@ -275,13 +274,13 @@ pub fn int_bits_to_float(v: i32) -> f32 {
///
/// # See also:
///
/// * [`float_bits_to_int`](fn.float_bits_to_int.html)
/// * [`float_bits_to_int_vec`](fn.float_bits_to_int_vec.html)
/// * [`float_bits_to_uint`](fn.float_bits_to_uint.html)
/// * [`float_bits_to_uint_vec`](fn.float_bits_to_uint_vec.html)
/// * [`int_bits_to_float`](fn.int_bits_to_float.html)
/// * [`uint_bits_to_float`](fn.uint_bits_to_float.html)
/// * [`uint_bits_to_float_scalar`](fn.uint_bits_to_float_scalar.html)
/// * [`float_bits_to_int()`]
/// * [`float_bits_to_int_vec()`]
/// * [`float_bits_to_uint()`]
/// * [`float_bits_to_uint_vec()`]
/// * [`int_bits_to_float()`]
/// * [`uint_bits_to_float()`]
/// * [`uint_bits_to_float_scalar()`]
pub fn int_bits_to_float_vec<const D: usize>(v: &TVec<i32, D>) -> TVec<f32, D> {
v.map(int_bits_to_float)
}
@ -315,8 +314,8 @@ pub fn int_bits_to_float_vec<const D: usize>(v: &TVec<i32, D>) -> TVec<f32, D> {
///
/// # See also:
///
/// * [`mix`](fn.mix.html)
/// * [`mix_vec`](fn.mix_vec.html)
/// * [`mix()`]
/// * [`mix_vec()`]
pub fn mix_scalar<T: Number>(x: T, y: T, a: T) -> T {
x * (T::one() - a) + y * a
}
@ -336,8 +335,8 @@ pub fn mix_scalar<T: Number>(x: T, y: T, a: T) -> T {
///
/// # See also:
///
/// * [`mix_scalar`](fn.mix_scalar.html)
/// * [`mix_vec`](fn.mix_vec.html)
/// * [`mix_scalar()`]
/// * [`mix_vec()`]
pub fn mix<T: Number, const D: usize>(x: &TVec<T, D>, y: &TVec<T, D>, a: T) -> TVec<T, D> {
x * (T::one() - a) + y * a
}
@ -359,14 +358,14 @@ pub fn mix<T: Number, const D: usize>(x: &TVec<T, D>, y: &TVec<T, D>, a: T) -> T
///
/// # See also:
///
/// * [`mix_scalar`](fn.mix_scalar.html)
/// * [`mix`](fn.mix.html)
/// * [`mix_scalar()`]
/// * [`mix()`]
pub fn mix_vec<T: Number, const D: usize>(
x: &TVec<T, D>,
y: &TVec<T, D>,
a: &TVec<T, D>,
) -> TVec<T, D> {
x.component_mul(&(TVec::<T, D>::repeat(T::one()) - a)) + y.component_mul(&a)
x.component_mul(&(TVec::<T, D>::repeat(T::one()) - a)) + y.component_mul(a)
}
/// Returns `x * (1.0 - a) + y * a`, i.e., the linear blend of the scalars x and y using the scalar value a.
@ -383,8 +382,8 @@ pub fn mix_vec<T: Number, const D: usize>(
///
/// # See also:
///
/// * [`lerp`](fn.lerp.html)
/// * [`lerp_vec`](fn.lerp_vec.html)
/// * [`lerp()`]
/// * [`lerp_vec()`]
pub fn lerp_scalar<T: Number>(x: T, y: T, a: T) -> T {
mix_scalar(x, y, a)
}
@ -405,8 +404,8 @@ pub fn lerp_scalar<T: Number>(x: T, y: T, a: T) -> T {
///
/// # See also:
///
/// * [`lerp_scalar`](fn.lerp_scalar.html)
/// * [`lerp_vec`](fn.lerp_vec.html)
/// * [`lerp_scalar()`]
/// * [`lerp_vec()`]
pub fn lerp<T: Number, const D: usize>(x: &TVec<T, D>, y: &TVec<T, D>, a: T) -> TVec<T, D> {
mix(x, y, a)
}
@ -429,8 +428,8 @@ pub fn lerp<T: Number, const D: usize>(x: &TVec<T, D>, y: &TVec<T, D>, a: T) ->
///
/// # See also:
///
/// * [`lerp_scalar`](fn.lerp_scalar.html)
/// * [`lerp`](fn.lerp.html)
/// * [`lerp_scalar()`]
/// * [`lerp()`]
pub fn lerp_vec<T: Number, const D: usize>(
x: &TVec<T, D>,
y: &TVec<T, D>,
@ -445,7 +444,7 @@ pub fn lerp_vec<T: Number, const D: usize>(
///
/// # See also:
///
/// * [`modf`](fn.modf.html)
/// * [`modf()`]
pub fn modf_vec<T: Number, const D: usize>(x: &TVec<T, D>, y: &TVec<T, D>) -> TVec<T, D> {
x.zip_map(y, |x, y| x % y)
}
@ -454,7 +453,7 @@ pub fn modf_vec<T: Number, const D: usize>(x: &TVec<T, D>, y: &TVec<T, D>) -> TV
///
/// # See also:
///
/// * [`modf_vec`](fn.modf_vec.html)
/// * [`modf_vec()`]
pub fn modf<T: Number>(x: T, i: T) -> T {
x % i
}
@ -473,10 +472,10 @@ pub fn modf<T: Number>(x: T, i: T) -> T {
///
/// # See also:
///
/// * [`ceil`](fn.ceil.html)
/// * [`floor`](fn.floor.html)
/// * [`fract`](fn.fract.html)
/// * [`trunc`](fn.trunc.html)
/// * [`ceil()`]
/// * [`floor()`]
/// * [`fract()`]
/// * [`trunc()`]
pub fn round<T: RealNumber, const D: usize>(x: &TVec<T, D>) -> TVec<T, D> {
x.map(|x| x.round())
}
@ -497,7 +496,7 @@ pub fn round<T: RealNumber, const D: usize>(x: &TVec<T, D>) -> TVec<T, D> {
///
/// # See also:
///
/// * [`abs`](fn.abs.html)
/// * [`abs()`]
///
pub fn sign<T: Number, const D: usize>(x: &TVec<T, D>) -> TVec<T, D> {
x.map(|x| if x.is_zero() { T::zero() } else { x.signum() })
@ -545,10 +544,10 @@ pub fn step_vec<T: Number, const D: usize>(edge: &TVec<T, D>, x: &TVec<T, D>) ->
///
/// # See also:
///
/// * [`ceil`](fn.ceil.html)
/// * [`floor`](fn.floor.html)
/// * [`fract`](fn.fract.html)
/// * [`round`](fn.round.html)
/// * [`ceil()`]
/// * [`floor()`]
/// * [`fract()`]
/// * [`round()`]
pub fn trunc<T: RealNumber, const D: usize>(x: &TVec<T, D>) -> TVec<T, D> {
x.map(|x| x.trunc())
}
@ -559,13 +558,13 @@ pub fn trunc<T: RealNumber, const D: usize>(x: &TVec<T, D>) -> TVec<T, D> {
///
/// # See also:
///
/// * [`float_bits_to_int`](fn.float_bits_to_int.html)
/// * [`float_bits_to_int_vec`](fn.float_bits_to_int_vec.html)
/// * [`float_bits_to_uint`](fn.float_bits_to_uint.html)
/// * [`float_bits_to_uint_vec`](fn.float_bits_to_uint_vec.html)
/// * [`int_bits_to_float`](fn.int_bits_to_float.html)
/// * [`int_bits_to_float_vec`](fn.int_bits_to_float_vec.html)
/// * [`uint_bits_to_float`](fn.uint_bits_to_float.html)
/// * [`float_bits_to_int()`]
/// * [`float_bits_to_int_vec()`]
/// * [`float_bits_to_uint()`]
/// * [`float_bits_to_uint_vec()`]
/// * [`int_bits_to_float()`]
/// * [`int_bits_to_float_vec()`]
/// * [`uint_bits_to_float()`]
pub fn uint_bits_to_float_scalar(v: u32) -> f32 {
f32::from_bits(v)
}
@ -576,13 +575,13 @@ pub fn uint_bits_to_float_scalar(v: u32) -> f32 {
///
/// # See also:
///
/// * [`float_bits_to_int`](fn.float_bits_to_int.html)
/// * [`float_bits_to_int_vec`](fn.float_bits_to_int_vec.html)
/// * [`float_bits_to_uint`](fn.float_bits_to_uint.html)
/// * [`float_bits_to_uint_vec`](fn.float_bits_to_uint_vec.html)
/// * [`int_bits_to_float`](fn.int_bits_to_float.html)
/// * [`int_bits_to_float_vec`](fn.int_bits_to_float_vec.html)
/// * [`uint_bits_to_float_scalar`](fn.uint_bits_to_float_scalar.html)
/// * [`float_bits_to_int()`]
/// * [`float_bits_to_int_vec()`]
/// * [`float_bits_to_uint()`]
/// * [`float_bits_to_uint_vec()`]
/// * [`int_bits_to_float()`]
/// * [`int_bits_to_float_vec()`]
/// * [`uint_bits_to_float_scalar()`]
pub fn uint_bits_to_float<const D: usize>(v: &TVec<u32, D>) -> TVec<f32, D> {
v.map(uint_bits_to_float_scalar)
}

View File

@ -91,6 +91,7 @@ pub fn mat2x4<T: Scalar>(m11: T, m12: T, m13: T, m14: T,
/// );
/// ```
#[rustfmt::skip]
#[allow(clippy::too_many_arguments)]
pub fn mat3<T: Scalar>(m11: T, m12: T, m13: T,
m21: T, m22: T, m23: T,
m31: T, m32: T, m33: T) -> TMat3<T> {
@ -115,6 +116,7 @@ pub fn mat3x2<T: Scalar>(m11: T, m12: T,
/// Create a new 3x3 matrix.
#[rustfmt::skip]
#[allow(clippy::too_many_arguments)]
pub fn mat3x3<T: Scalar>(m11: T, m12: T, m13: T,
m21: T, m22: T, m23: T,
m31: T, m32: T, m33: T) -> TMat3<T> {
@ -127,6 +129,7 @@ pub fn mat3x3<T: Scalar>(m11: T, m12: T, m13: T,
/// Create a new 3x4 matrix.
#[rustfmt::skip]
#[allow(clippy::too_many_arguments)]
pub fn mat3x4<T: Scalar>(m11: T, m12: T, m13: T, m14: T,
m21: T, m22: T, m23: T, m24: T,
m31: T, m32: T, m33: T, m34: T) -> TMat3x4<T> {
@ -153,6 +156,7 @@ pub fn mat4x2<T: Scalar>(m11: T, m12: T,
/// Create a new 4x3 matrix.
#[rustfmt::skip]
#[allow(clippy::too_many_arguments)]
pub fn mat4x3<T: Scalar>(m11: T, m12: T, m13: T,
m21: T, m22: T, m23: T,
m31: T, m32: T, m33: T,
@ -167,6 +171,7 @@ pub fn mat4x3<T: Scalar>(m11: T, m12: T, m13: T,
/// Create a new 4x4 matrix.
#[rustfmt::skip]
#[allow(clippy::too_many_arguments)]
pub fn mat4x4<T: Scalar>(m11: T, m12: T, m13: T, m14: T,
m21: T, m22: T, m23: T, m24: T,
m31: T, m32: T, m33: T, m34: T,
@ -181,6 +186,7 @@ pub fn mat4x4<T: Scalar>(m11: T, m12: T, m13: T, m14: T,
/// Create a new 4x4 matrix.
#[rustfmt::skip]
#[allow(clippy::too_many_arguments)]
pub fn mat4<T: Scalar>(m11: T, m12: T, m13: T, m14: T,
m21: T, m22: T, m23: T, m24: T,
m31: T, m32: T, m33: T, m34: T,


@ -5,7 +5,7 @@ use crate::RealNumber;
///
/// # See also:
///
/// * [`exp2`](fn.exp2.html)
/// * [`exp2()`]
pub fn exp<T: RealNumber, const D: usize>(v: &TVec<T, D>) -> TVec<T, D> {
v.map(|x| x.exp())
}
@ -14,7 +14,7 @@ pub fn exp<T: RealNumber, const D: usize>(v: &TVec<T, D>) -> TVec<T, D> {
///
/// # See also:
///
/// * [`exp`](fn.exp.html)
/// * [`exp()`]
pub fn exp2<T: RealNumber, const D: usize>(v: &TVec<T, D>) -> TVec<T, D> {
v.map(|x| x.exp2())
}
@ -23,7 +23,7 @@ pub fn exp2<T: RealNumber, const D: usize>(v: &TVec<T, D>) -> TVec<T, D> {
///
/// # See also:
///
/// * [`sqrt`](fn.sqrt.html)
/// * [`sqrt()`]
pub fn inversesqrt<T: RealNumber, const D: usize>(v: &TVec<T, D>) -> TVec<T, D> {
v.map(|x| T::one() / x.sqrt())
}
@ -32,7 +32,7 @@ pub fn inversesqrt<T: RealNumber, const D: usize>(v: &TVec<T, D>) -> TVec<T, D>
///
/// # See also:
///
/// * [`log2`](fn.log2.html)
/// * [`log2()`]
pub fn log<T: RealNumber, const D: usize>(v: &TVec<T, D>) -> TVec<T, D> {
v.map(|x| x.ln())
}
@ -41,7 +41,7 @@ pub fn log<T: RealNumber, const D: usize>(v: &TVec<T, D>) -> TVec<T, D> {
///
/// # See also:
///
/// * [`log`](fn.log.html)
/// * [`log()`]
pub fn log2<T: RealNumber, const D: usize>(v: &TVec<T, D>) -> TVec<T, D> {
v.map(|x| x.log2())
}
@ -55,10 +55,10 @@ pub fn pow<T: RealNumber, const D: usize>(base: &TVec<T, D>, exponent: &TVec<T,
///
/// # See also:
///
/// * [`exp`](fn.exp.html)
/// * [`exp2`](fn.exp2.html)
/// * [`inversesqrt`](fn.inversesqrt.html)
/// * [`pow`](fn.pow.html)
/// * [`exp()`]
/// * [`exp2()`]
/// * [`inversesqrt()`]
/// * [`pow()`]
pub fn sqrt<T: RealNumber, const D: usize>(v: &TVec<T, D>) -> TVec<T, D> {
v.map(|x| x.sqrt())
}
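A componentwise sketch of `sqrt` and `inversesqrt` from this hunk (assumes `nalgebra_glm` as `glm`; the inputs are illustrative):

```rust
use nalgebra_glm as glm;

fn main() {
    let v = glm::vec2(4.0_f32, 9.0);
    assert_eq!(glm::sqrt(&v), glm::vec2(2.0, 3.0));
    assert_eq!(glm::inversesqrt(&v), glm::vec2(0.5, 1.0 / 3.0)); // 1 / sqrt(x), per component
}
```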


@ -1,5 +1,3 @@
use na;
use crate::aliases::{TMat4, TVec2, TVec3, TVec4};
use crate::RealNumber;
@ -41,11 +39,11 @@ pub fn pick_matrix<T: RealNumber>(
///
/// # See also:
///
/// * [`project_no`](fn.project_no.html)
/// * [`project_zo`](fn.project_zo.html)
/// * [`unproject`](fn.unproject.html)
/// * [`unproject_no`](fn.unproject_no.html)
/// * [`unproject_zo`](fn.unproject_zo.html)
/// * [`project_no()`]
/// * [`project_zo()`]
/// * [`unproject()`]
/// * [`unproject_no()`]
/// * [`unproject_zo()`]
pub fn project<T: RealNumber>(
obj: &TVec3<T>,
model: &TMat4<T>,
@ -68,11 +66,11 @@ pub fn project<T: RealNumber>(
///
/// # See also:
///
/// * [`project`](fn.project.html)
/// * [`project_zo`](fn.project_zo.html)
/// * [`unproject`](fn.unproject.html)
/// * [`unproject_no`](fn.unproject_no.html)
/// * [`unproject_zo`](fn.unproject_zo.html)
/// * [`project()`]
/// * [`project_zo()`]
/// * [`unproject()`]
/// * [`unproject_no()`]
/// * [`unproject_zo()`]
pub fn project_no<T: RealNumber>(
obj: &TVec3<T>,
model: &TMat4<T>,
@ -96,11 +94,11 @@ pub fn project_no<T: RealNumber>(
///
/// # See also:
///
/// * [`project`](fn.project.html)
/// * [`project_no`](fn.project_no.html)
/// * [`unproject`](fn.unproject.html)
/// * [`unproject_no`](fn.unproject_no.html)
/// * [`unproject_zo`](fn.unproject_zo.html)
/// * [`project()`]
/// * [`project_no()`]
/// * [`unproject()`]
/// * [`unproject_no()`]
/// * [`unproject_zo()`]
pub fn project_zo<T: RealNumber>(
obj: &TVec3<T>,
model: &TMat4<T>,
@ -129,11 +127,11 @@ pub fn project_zo<T: RealNumber>(
///
/// # See also:
///
/// * [`project`](fn.project.html)
/// * [`project_no`](fn.project_no.html)
/// * [`project_zo`](fn.project_zo.html)
/// * [`unproject_no`](fn.unproject_no.html)
/// * [`unproject_zo`](fn.unproject_zo.html)
/// * [`project()`]
/// * [`project_no()`]
/// * [`project_zo()`]
/// * [`unproject_no()`]
/// * [`unproject_zo()`]
pub fn unproject<T: RealNumber>(
win: &TVec3<T>,
model: &TMat4<T>,
@ -156,11 +154,11 @@ pub fn unproject<T: RealNumber>(
///
/// # See also:
///
/// * [`project`](fn.project.html)
/// * [`project_no`](fn.project_no.html)
/// * [`project_zo`](fn.project_zo.html)
/// * [`unproject`](fn.unproject.html)
/// * [`unproject_zo`](fn.unproject_zo.html)
/// * [`project()`]
/// * [`project_no()`]
/// * [`project_zo()`]
/// * [`unproject()`]
/// * [`unproject_zo()`]
pub fn unproject_no<T: RealNumber>(
win: &TVec3<T>,
model: &TMat4<T>,
@ -193,11 +191,11 @@ pub fn unproject_no<T: RealNumber>(
///
/// # See also:
///
/// * [`project`](fn.project.html)
/// * [`project_no`](fn.project_no.html)
/// * [`project_zo`](fn.project_zo.html)
/// * [`unproject`](fn.unproject.html)
/// * [`unproject_no`](fn.unproject_no.html)
/// * [`project()`]
/// * [`project_no()`]
/// * [`project_zo()`]
/// * [`unproject()`]
/// * [`unproject_no()`]
pub fn unproject_zo<T: RealNumber>(
win: &TVec3<T>,
model: &TMat4<T>,


@ -18,8 +18,8 @@ pub fn identity<T: Number, const D: usize>() -> TMat<T, D, D> {
///
/// # See also:
///
/// * [`look_at_lh`](fn.look_at_lh.html)
/// * [`look_at_rh`](fn.look_at_rh.html)
/// * [`look_at_lh()`]
/// * [`look_at_rh()`]
pub fn look_at<T: RealNumber>(eye: &TVec3<T>, center: &TVec3<T>, up: &TVec3<T>) -> TMat4<T> {
look_at_rh(eye, center, up)
}
@ -34,8 +34,8 @@ pub fn look_at<T: RealNumber>(eye: &TVec3<T>, center: &TVec3<T>, up: &TVec3<T>)
///
/// # See also:
///
/// * [`look_at`](fn.look_at.html)
/// * [`look_at_rh`](fn.look_at_rh.html)
/// * [`look_at()`]
/// * [`look_at_rh()`]
pub fn look_at_lh<T: RealNumber>(eye: &TVec3<T>, center: &TVec3<T>, up: &TVec3<T>) -> TMat4<T> {
TMat::look_at_lh(&Point3::from(*eye), &Point3::from(*center), up)
}
@ -50,8 +50,8 @@ pub fn look_at_lh<T: RealNumber>(eye: &TVec3<T>, center: &TVec3<T>, up: &TVec3<T
///
/// # See also:
///
/// * [`look_at`](fn.look_at.html)
/// * [`look_at_lh`](fn.look_at_lh.html)
/// * [`look_at()`]
/// * [`look_at_lh()`]
pub fn look_at_rh<T: RealNumber>(eye: &TVec3<T>, center: &TVec3<T>, up: &TVec3<T>) -> TMat4<T> {
TMat::look_at_rh(&Point3::from(*eye), &Point3::from(*center), up)
}
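A minimal sketch of building a right-handed view matrix with `look_at_rh`; the eye, center, and up vectors are arbitrary illustrative values (assumes `nalgebra_glm` as `glm`):

```rust
use nalgebra_glm as glm;

fn main() {
    let eye = glm::vec3(0.0_f32, 0.0, 5.0);
    let center = glm::vec3(0.0_f32, 0.0, 0.0);
    let up = glm::vec3(0.0_f32, 1.0, 0.0);
    let view = glm::look_at_rh(&eye, &center, &up);

    // The eye position maps (up to rounding) to the origin of view space.
    let p = view * glm::vec4(0.0_f32, 0.0, 5.0, 1.0);
    assert!(glm::length(&glm::vec4_to_vec3(&p)) < 1.0e-6);
}
```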
@ -66,11 +66,11 @@ pub fn look_at_rh<T: RealNumber>(eye: &TVec3<T>, center: &TVec3<T>, up: &TVec3<T
///
/// # See also:
///
/// * [`rotate_x`](fn.rotate_x.html)
/// * [`rotate_y`](fn.rotate_y.html)
/// * [`rotate_z`](fn.rotate_z.html)
/// * [`scale`](fn.scale.html)
/// * [`translate`](fn.translate.html)
/// * [`rotate_x()`]
/// * [`rotate_y()`]
/// * [`rotate_z()`]
/// * [`scale()`]
/// * [`translate()`]
pub fn rotate<T: RealNumber>(m: &TMat4<T>, angle: T, axis: &TVec3<T>) -> TMat4<T> {
m * Rotation3::from_axis_angle(&Unit::new_normalize(*axis), angle).to_homogeneous()
}
@ -84,11 +84,11 @@ pub fn rotate<T: RealNumber>(m: &TMat4<T>, angle: T, axis: &TVec3<T>) -> TMat4<T
///
/// # See also:
///
/// * [`rotate`](fn.rotate.html)
/// * [`rotate_y`](fn.rotate_y.html)
/// * [`rotate_z`](fn.rotate_z.html)
/// * [`scale`](fn.scale.html)
/// * [`translate`](fn.translate.html)
/// * [`rotate()`]
/// * [`rotate_y()`]
/// * [`rotate_z()`]
/// * [`scale()`]
/// * [`translate()`]
pub fn rotate_x<T: RealNumber>(m: &TMat4<T>, angle: T) -> TMat4<T> {
rotate(m, angle, &TVec::x())
}
@ -102,11 +102,11 @@ pub fn rotate_x<T: RealNumber>(m: &TMat4<T>, angle: T) -> TMat4<T> {
///
/// # See also:
///
/// * [`rotate`](fn.rotate.html)
/// * [`rotate_x`](fn.rotate_x.html)
/// * [`rotate_z`](fn.rotate_z.html)
/// * [`scale`](fn.scale.html)
/// * [`translate`](fn.translate.html)
/// * [`rotate()`]
/// * [`rotate_x()`]
/// * [`rotate_z()`]
/// * [`scale()`]
/// * [`translate()`]
pub fn rotate_y<T: RealNumber>(m: &TMat4<T>, angle: T) -> TMat4<T> {
rotate(m, angle, &TVec::y())
}
@ -120,11 +120,11 @@ pub fn rotate_y<T: RealNumber>(m: &TMat4<T>, angle: T) -> TMat4<T> {
///
/// # See also:
///
/// * [`rotate`](fn.rotate.html)
/// * [`rotate_x`](fn.rotate_x.html)
/// * [`rotate_y`](fn.rotate_y.html)
/// * [`scale`](fn.scale.html)
/// * [`translate`](fn.translate.html)
/// * [`rotate()`]
/// * [`rotate_x()`]
/// * [`rotate_y()`]
/// * [`scale()`]
/// * [`translate()`]
pub fn rotate_z<T: RealNumber>(m: &TMat4<T>, angle: T) -> TMat4<T> {
rotate(m, angle, &TVec::z())
}
@ -138,11 +138,11 @@ pub fn rotate_z<T: RealNumber>(m: &TMat4<T>, angle: T) -> TMat4<T> {
///
/// # See also:
///
/// * [`rotate`](fn.rotate.html)
/// * [`rotate_x`](fn.rotate_x.html)
/// * [`rotate_y`](fn.rotate_y.html)
/// * [`rotate_z`](fn.rotate_z.html)
/// * [`translate`](fn.translate.html)
/// * [`rotate()`]
/// * [`rotate_x()`]
/// * [`rotate_y()`]
/// * [`rotate_z()`]
/// * [`translate()`]
pub fn scale<T: Number>(m: &TMat4<T>, v: &TVec3<T>) -> TMat4<T> {
m.prepend_nonuniform_scaling(v)
}
@ -156,11 +156,11 @@ pub fn scale<T: Number>(m: &TMat4<T>, v: &TVec3<T>) -> TMat4<T> {
///
/// # See also:
///
/// * [`rotate`](fn.rotate.html)
/// * [`rotate_x`](fn.rotate_x.html)
/// * [`rotate_y`](fn.rotate_y.html)
/// * [`rotate_z`](fn.rotate_z.html)
/// * [`scale`](fn.scale.html)
/// * [`rotate()`]
/// * [`rotate_x()`]
/// * [`rotate_y()`]
/// * [`rotate_z()`]
/// * [`scale()`]
pub fn translate<T: Number>(m: &TMat4<T>, v: &TVec3<T>) -> TMat4<T> {
m.prepend_translation(v)
}
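A sketch composing a model matrix from `translate` and `scale` as documented above; the order and values are illustrative only (assumes `nalgebra_glm` as `glm`):

```rust
use nalgebra_glm as glm;

fn main() {
    let model = glm::identity::<f32, 4>();
    let model = glm::translate(&model, &glm::vec3(1.0, 2.0, 3.0));
    let model = glm::scale(&model, &glm::vec3(2.0, 2.0, 2.0));

    // A point at the origin lands on the translation component.
    let p = model * glm::vec4(0.0_f32, 0.0, 0.0, 1.0);
    assert_eq!(glm::vec4_to_vec3(&p), glm::vec3(1.0, 2.0, 3.0));
}
```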


@ -1,4 +1,4 @@
use na::{self, Unit};
use na::Unit;
use crate::aliases::Qua;
use crate::RealNumber;


@ -12,9 +12,9 @@ use crate::traits::Number;
///
/// # See also:
///
/// * [`max4_scalar`](fn.max4_scalar.html)
/// * [`min3_scalar`](fn.min3_scalar.html)
/// * [`min4_scalar`](fn.min4_scalar.html)
/// * [`max4_scalar()`]
/// * [`min3_scalar()`]
/// * [`min4_scalar()`]
pub fn max2_scalar<T: Number>(a: T, b: T) -> T {
if a >= b {
a
@ -35,9 +35,9 @@ pub fn max2_scalar<T: Number>(a: T, b: T) -> T {
///
/// # See also:
///
/// * [`max4_scalar`](fn.max4_scalar.html)
/// * [`min3_scalar`](fn.min3_scalar.html)
/// * [`min4_scalar`](fn.min4_scalar.html)
/// * [`max4_scalar()`]
/// * [`min3_scalar()`]
/// * [`min4_scalar()`]
pub fn min2_scalar<T: Number>(a: T, b: T) -> T {
if a <= b {
a
@ -58,9 +58,9 @@ pub fn min2_scalar<T: Number>(a: T, b: T) -> T {
///
/// # See also:
///
/// * [`max4_scalar`](fn.max4_scalar.html)
/// * [`min3_scalar`](fn.min3_scalar.html)
/// * [`min4_scalar`](fn.min4_scalar.html)
/// * [`max4_scalar()`]
/// * [`min3_scalar()`]
/// * [`min4_scalar()`]
pub fn max3_scalar<T: Number>(a: T, b: T, c: T) -> T {
max2_scalar(max2_scalar(a, b), c)
}
@ -77,9 +77,9 @@ pub fn max3_scalar<T: Number>(a: T, b: T, c: T) -> T {
///
/// # See also:
///
/// * [`max3_scalar`](fn.max3_scalar.html)
/// * [`min3_scalar`](fn.min3_scalar.html)
/// * [`min4_scalar`](fn.min4_scalar.html)
/// * [`max3_scalar()`]
/// * [`min3_scalar()`]
/// * [`min4_scalar()`]
pub fn max4_scalar<T: Number>(a: T, b: T, c: T, d: T) -> T {
max2_scalar(max2_scalar(a, b), max2_scalar(c, d))
}
@ -96,9 +96,9 @@ pub fn max4_scalar<T: Number>(a: T, b: T, c: T, d: T) -> T {
///
/// # See also:
///
/// * [`max3_scalar`](fn.max3_scalar.html)
/// * [`max4_scalar`](fn.max4_scalar.html)
/// * [`min4_scalar`](fn.min4_scalar.html)
/// * [`max3_scalar()`]
/// * [`max4_scalar()`]
/// * [`min4_scalar()`]
pub fn min3_scalar<T: Number>(a: T, b: T, c: T) -> T {
min2_scalar(min2_scalar(a, b), c)
}
@ -115,9 +115,9 @@ pub fn min3_scalar<T: Number>(a: T, b: T, c: T) -> T {
///
/// # See also:
///
/// * [`max3_scalar`](fn.max3_scalar.html)
/// * [`max4_scalar`](fn.max4_scalar.html)
/// * [`min3_scalar`](fn.min3_scalar.html)
/// * [`max3_scalar()`]
/// * [`max4_scalar()`]
/// * [`min3_scalar()`]
pub fn min4_scalar<T: Number>(a: T, b: T, c: T, d: T) -> T {
min2_scalar(min2_scalar(a, b), min2_scalar(c, d))
}
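Usage sketch for the scalar reductions above (assumes `nalgebra_glm` as `glm`):

```rust
use nalgebra_glm as glm;

fn main() {
    assert_eq!(glm::max4_scalar(1.0_f32, 7.0, 3.0, 5.0), 7.0); // largest of the four
    assert_eq!(glm::min4_scalar(1.0_f32, 7.0, 3.0, 5.0), 1.0); // smallest of the four
}
```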


@ -10,18 +10,18 @@ pub fn epsilon<T: AbsDiffEq<Epsilon = T>>() -> T {
///
/// # See also:
///
/// * [`four_over_pi`](fn.four_over_pi.html)
/// * [`half_pi`](fn.half_pi.html)
/// * [`one_over_pi`](fn.one_over_pi.html)
/// * [`one_over_two_pi`](fn.one_over_two_pi.html)
/// * [`quarter_pi`](fn.quarter_pi.html)
/// * [`root_half_pi`](fn.root_half_pi.html)
/// * [`root_pi`](fn.root_pi.html)
/// * [`root_two_pi`](fn.root_two_pi.html)
/// * [`three_over_two_pi`](fn.three_over_two_pi.html)
/// * [`two_over_pi`](fn.two_over_pi.html)
/// * [`two_over_root_pi`](fn.two_over_root_pi.html)
/// * [`two_pi`](fn.two_pi.html)
/// * [`four_over_pi()`](crate::four_over_pi)
/// * [`half_pi()`](crate::half_pi)
/// * [`one_over_pi()`](crate::one_over_pi)
/// * [`one_over_two_pi()`](crate::one_over_two_pi)
/// * [`quarter_pi()`](crate::quarter_pi)
/// * [`root_half_pi()`](crate::root_half_pi)
/// * [`root_pi()`](crate::root_pi)
/// * [`root_two_pi()`](crate::root_two_pi)
/// * [`three_over_two_pi()`](crate::three_over_two_pi)
/// * [`two_over_pi()`](crate::two_over_pi)
/// * [`two_over_root_pi()`](crate::two_over_root_pi)
/// * [`two_pi()`](crate::two_pi)
pub fn pi<T: RealNumber>() -> T {
T::pi()
}


@ -5,15 +5,15 @@ use crate::traits::Number;
///
/// # See also:
///
/// * [`comp_max`](fn.comp_max.html)
/// * [`comp_min`](fn.comp_min.html)
/// * [`max2`](fn.max2.html)
/// * [`max3`](fn.max3.html)
/// * [`max4`](fn.max4.html)
/// * [`min`](fn.min.html)
/// * [`min2`](fn.min2.html)
/// * [`min3`](fn.min3.html)
/// * [`min4`](fn.min4.html)
/// * [`comp_max()`](crate::comp_max)
/// * [`comp_min()`](crate::comp_min)
/// * [`max2()`]
/// * [`max3()`]
/// * [`max4()`]
/// * [`min()`]
/// * [`min2()`]
/// * [`min3()`]
/// * [`min4()`]
pub fn max<T: Number, const D: usize>(a: &TVec<T, D>, b: T) -> TVec<T, D> {
a.map(|a| crate::max2_scalar(a, b))
}
@ -22,15 +22,15 @@ pub fn max<T: Number, const D: usize>(a: &TVec<T, D>, b: T) -> TVec<T, D> {
///
/// # See also:
///
/// * [`comp_max`](fn.comp_max.html)
/// * [`comp_min`](fn.comp_min.html)
/// * [`max`](fn.max.html)
/// * [`max3`](fn.max3.html)
/// * [`max4`](fn.max4.html)
/// * [`min`](fn.min.html)
/// * [`min2`](fn.min2.html)
/// * [`min3`](fn.min3.html)
/// * [`min4`](fn.min4.html)
/// * [`comp_max()`](crate::comp_max)
/// * [`comp_min()`](crate::comp_min)
/// * [`max()`]
/// * [`max3()`]
/// * [`max4()`]
/// * [`min()`]
/// * [`min2()`]
/// * [`min3()`]
/// * [`min4()`]
pub fn max2<T: Number, const D: usize>(a: &TVec<T, D>, b: &TVec<T, D>) -> TVec<T, D> {
a.zip_map(b, |a, b| crate::max2_scalar(a, b))
}
@ -39,15 +39,15 @@ pub fn max2<T: Number, const D: usize>(a: &TVec<T, D>, b: &TVec<T, D>) -> TVec<T
///
/// # See also:
///
/// * [`comp_max`](fn.comp_max.html)
/// * [`comp_min`](fn.comp_min.html)
/// * [`max`](fn.max.html)
/// * [`max2`](fn.max2.html)
/// * [`max4`](fn.max4.html)
/// * [`min`](fn.min.html)
/// * [`min2`](fn.min2.html)
/// * [`min3`](fn.min3.html)
/// * [`min4`](fn.min4.html)
/// * [`comp_max()`](crate::comp_max)
/// * [`comp_min()`](crate::comp_min)
/// * [`max()`]
/// * [`max2()`]
/// * [`max4()`]
/// * [`min()`]
/// * [`min2()`]
/// * [`min3()`]
/// * [`min4()`]
pub fn max3<T: Number, const D: usize>(
a: &TVec<T, D>,
b: &TVec<T, D>,
@ -60,15 +60,15 @@ pub fn max3<T: Number, const D: usize>(
///
/// # See also:
///
/// * [`comp_max`](fn.comp_max.html)
/// * [`comp_min`](fn.comp_min.html)
/// * [`max`](fn.max.html)
/// * [`max2`](fn.max2.html)
/// * [`max3`](fn.max3.html)
/// * [`min`](fn.min.html)
/// * [`min2`](fn.min2.html)
/// * [`min3`](fn.min3.html)
/// * [`min4`](fn.min4.html)
/// * [`comp_max()`](crate::comp_max)
/// * [`comp_min()`](crate::comp_min)
/// * [`max()`]
/// * [`max2()`]
/// * [`max3()`]
/// * [`min()`]
/// * [`min2()`]
/// * [`min3()`]
/// * [`min4()`]
pub fn max4<T: Number, const D: usize>(
a: &TVec<T, D>,
b: &TVec<T, D>,
@ -82,15 +82,15 @@ pub fn max4<T: Number, const D: usize>(
///
/// # See also:
///
/// * [`comp_max`](fn.comp_max.html)
/// * [`comp_min`](fn.comp_min.html)
/// * [`max`](fn.max.html)
/// * [`max2`](fn.max2.html)
/// * [`max3`](fn.max3.html)
/// * [`max4`](fn.max4.html)
/// * [`min2`](fn.min2.html)
/// * [`min3`](fn.min3.html)
/// * [`min4`](fn.min4.html)
/// * [`comp_max()`](crate::comp_max)
/// * [`comp_min()`](crate::comp_min)
/// * [`max()`]
/// * [`max2()`]
/// * [`max3()`]
/// * [`max4()`]
/// * [`min2()`]
/// * [`min3()`]
/// * [`min4()`]
pub fn min<T: Number, const D: usize>(x: &TVec<T, D>, y: T) -> TVec<T, D> {
x.map(|x| crate::min2_scalar(x, y))
}
@ -99,15 +99,15 @@ pub fn min<T: Number, const D: usize>(x: &TVec<T, D>, y: T) -> TVec<T, D> {
///
/// # See also:
///
/// * [`comp_max`](fn.comp_max.html)
/// * [`comp_min`](fn.comp_min.html)
/// * [`max`](fn.max.html)
/// * [`max2`](fn.max2.html)
/// * [`max3`](fn.max3.html)
/// * [`max4`](fn.max4.html)
/// * [`min`](fn.min.html)
/// * [`min3`](fn.min3.html)
/// * [`min4`](fn.min4.html)
/// * [`comp_max()`](crate::comp_max)
/// * [`comp_min()`](crate::comp_min)
/// * [`max()`]
/// * [`max2()`]
/// * [`max3()`]
/// * [`max4()`]
/// * [`min()`]
/// * [`min3()`]
/// * [`min4()`]
pub fn min2<T: Number, const D: usize>(x: &TVec<T, D>, y: &TVec<T, D>) -> TVec<T, D> {
x.zip_map(y, |a, b| crate::min2_scalar(a, b))
}
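A componentwise sketch of `max2`/`min2` over whole vectors (assumes `nalgebra_glm` as `glm`; the inputs are illustrative):

```rust
use nalgebra_glm as glm;

fn main() {
    let a = glm::vec3(1.0_f32, 5.0, 3.0);
    let b = glm::vec3(4.0_f32, 2.0, 3.0);
    assert_eq!(glm::max2(&a, &b), glm::vec3(4.0, 5.0, 3.0)); // per-component maximum
    assert_eq!(glm::min2(&a, &b), glm::vec3(1.0, 2.0, 3.0)); // per-component minimum
}
```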
@ -116,15 +116,15 @@ pub fn min2<T: Number, const D: usize>(x: &TVec<T, D>, y: &TVec<T, D>) -> TVec<T
///
/// # See also:
///
/// * [`comp_max`](fn.comp_max.html)
/// * [`comp_min`](fn.comp_min.html)
/// * [`max`](fn.max.html)
/// * [`max2`](fn.max2.html)
/// * [`max3`](fn.max3.html)
/// * [`max4`](fn.max4.html)
/// * [`min`](fn.min.html)
/// * [`min2`](fn.min2.html)
/// * [`min4`](fn.min4.html)
/// * [`comp_max()`](crate::comp_max)
/// * [`comp_min()`](crate::comp_min)
/// * [`max()`]
/// * [`max2()`]
/// * [`max3()`]
/// * [`max4()`]
/// * [`min()`]
/// * [`min2()`]
/// * [`min4()`]
pub fn min3<T: Number, const D: usize>(
a: &TVec<T, D>,
b: &TVec<T, D>,
@ -137,15 +137,15 @@ pub fn min3<T: Number, const D: usize>(
///
/// # See also:
///
/// * [`comp_max`](fn.comp_max.html)
/// * [`comp_min`](fn.comp_min.html)
/// * [`max`](fn.max.html)
/// * [`max2`](fn.max2.html)
/// * [`max3`](fn.max3.html)
/// * [`max4`](fn.max4.html)
/// * [`min`](fn.min.html)
/// * [`min2`](fn.min2.html)
/// * [`min3`](fn.min3.html)
/// * [`comp_max()`](crate::comp_max)
/// * [`comp_min()`](crate::comp_min)
/// * [`max()`]
/// * [`max2()`]
/// * [`max3()`]
/// * [`max4()`]
/// * [`min()`]
/// * [`min2()`]
/// * [`min3()`]
pub fn min4<T: Number, const D: usize>(
a: &TVec<T, D>,
b: &TVec<T, D>,


@ -5,9 +5,9 @@ use crate::traits::Number;
///
/// # See also:
///
/// * [`equal_eps_vec`](fn.equal_eps_vec.html)
/// * [`not_equal_eps`](fn.not_equal_eps.html)
/// * [`not_equal_eps_vec`](fn.not_equal_eps_vec.html)
/// * [`equal_eps_vec()`]
/// * [`not_equal_eps()`]
/// * [`not_equal_eps_vec()`]
pub fn equal_eps<T: Number, const D: usize>(
x: &TVec<T, D>,
y: &TVec<T, D>,
@ -20,9 +20,9 @@ pub fn equal_eps<T: Number, const D: usize>(
///
/// # See also:
///
/// * [`equal_eps`](fn.equal_eps.html)
/// * [`not_equal_eps`](fn.not_equal_eps.html)
/// * [`not_equal_eps_vec`](fn.not_equal_eps_vec.html)
/// * [`equal_eps()`]
/// * [`not_equal_eps()`]
/// * [`not_equal_eps_vec()`]
pub fn equal_eps_vec<T: Number, const D: usize>(
x: &TVec<T, D>,
y: &TVec<T, D>,
@ -35,9 +35,9 @@ pub fn equal_eps_vec<T: Number, const D: usize>(
///
/// # See also:
///
/// * [`equal_eps`](fn.equal_eps.html)
/// * [`equal_eps_vec`](fn.equal_eps_vec.html)
/// * [`not_equal_eps_vec`](fn.not_equal_eps_vec.html)
/// * [`equal_eps()`]
/// * [`equal_eps_vec()`]
/// * [`not_equal_eps_vec()`]
pub fn not_equal_eps<T: Number, const D: usize>(
x: &TVec<T, D>,
y: &TVec<T, D>,
@ -50,9 +50,9 @@ pub fn not_equal_eps<T: Number, const D: usize>(
///
/// # See also:
///
/// * [`equal_eps`](fn.equal_eps.html)
/// * [`equal_eps_vec`](fn.equal_eps_vec.html)
/// * [`not_equal_eps`](fn.not_equal_eps.html)
/// * [`equal_eps()`]
/// * [`equal_eps_vec()`]
/// * [`not_equal_eps()`]
pub fn not_equal_eps_vec<T: Number, const D: usize>(
x: &TVec<T, D>,
y: &TVec<T, D>,


@ -12,7 +12,7 @@ pub fn cross<T: Number>(x: &TVec3<T>, y: &TVec3<T>) -> TVec3<T> {
///
/// # See also:
///
/// * [`distance2`](fn.distance2.html)
/// * [`distance2()`](crate::distance2)
pub fn distance<T: RealNumber, const D: usize>(p0: &TVec<T, D>, p1: &TVec<T, D>) -> T {
(p1 - p0).norm()
}
@ -37,13 +37,13 @@ pub fn faceforward<T: Number, const D: usize>(
/// The magnitude of a vector.
///
/// A synonym for [`magnitude`](fn.magnitude.html).
/// A synonym for [`magnitude()`].
///
/// # See also:
///
/// * [`length2`](fn.length2.html)
/// * [`magnitude`](fn.magnitude.html)
/// * [`magnitude2`](fn.magnitude2.html)
/// * [`length2()`](crate::length2)
/// * [`magnitude()`]
/// * [`magnitude2()`](crate::magnitude2)
pub fn length<T: RealNumber, const D: usize>(x: &TVec<T, D>) -> T {
x.norm()
}
@ -54,8 +54,8 @@ pub fn length<T: RealNumber, const D: usize>(x: &TVec<T, D>) -> T {
///
/// # See also:
///
/// * [`length`](fn.length.html)
/// * [`magnitude2`](fn.magnitude2.html)
/// * [`length()`]
/// * [`magnitude2()`](crate::magnitude2)
/// * [`nalgebra::norm`](../nalgebra/fn.norm.html)
pub fn magnitude<T: RealNumber, const D: usize>(x: &TVec<T, D>) -> T {
x.norm()


@ -1,9 +1,8 @@
use crate::RealNumber;
use na;
/// The Euler constant.
///
/// This is a shorthand alias for [`euler`](fn.euler.html).
/// This is a shorthand alias for [`euler()`].
pub fn e<T: RealNumber>() -> T {
T::e()
}
@ -17,18 +16,18 @@ pub fn euler<T: RealNumber>() -> T {
///
/// # See also:
///
/// * [`half_pi`](fn.half_pi.html)
/// * [`one_over_pi`](fn.one_over_pi.html)
/// * [`one_over_two_pi`](fn.one_over_two_pi.html)
/// * [`pi`](fn.pi.html)
/// * [`quarter_pi`](fn.quarter_pi.html)
/// * [`root_half_pi`](fn.root_half_pi.html)
/// * [`root_pi`](fn.root_pi.html)
/// * [`root_two_pi`](fn.root_two_pi.html)
/// * [`three_over_two_pi`](fn.three_over_two_pi.html)
/// * [`two_over_pi`](fn.two_over_pi.html)
/// * [`two_over_root_pi`](fn.two_over_root_pi.html)
/// * [`two_pi`](fn.two_pi.html)
/// * [`half_pi()`]
/// * [`one_over_pi()`]
/// * [`one_over_two_pi()`]
/// * [`pi()`](crate::pi)
/// * [`quarter_pi()`]
/// * [`root_half_pi()`]
/// * [`root_pi()`]
/// * [`root_two_pi()`]
/// * [`three_over_two_pi()`]
/// * [`two_over_pi()`]
/// * [`two_over_root_pi()`]
/// * [`two_pi()`]
pub fn four_over_pi<T: RealNumber>() -> T {
na::convert::<_, T>(4.0) / T::pi()
}
@ -42,18 +41,18 @@ pub fn golden_ratio<T: RealNumber>() -> T {
///
/// # See also:
///
/// * [`four_over_pi`](fn.four_over_pi.html)
/// * [`one_over_pi`](fn.one_over_pi.html)
/// * [`one_over_two_pi`](fn.one_over_two_pi.html)
/// * [`pi`](fn.pi.html)
/// * [`quarter_pi`](fn.quarter_pi.html)
/// * [`root_half_pi`](fn.root_half_pi.html)
/// * [`root_pi`](fn.root_pi.html)
/// * [`root_two_pi`](fn.root_two_pi.html)
/// * [`three_over_two_pi`](fn.three_over_two_pi.html)
/// * [`two_over_pi`](fn.two_over_pi.html)
/// * [`two_over_root_pi`](fn.two_over_root_pi.html)
/// * [`two_pi`](fn.two_pi.html)
/// * [`four_over_pi()`]
/// * [`one_over_pi()`]
/// * [`one_over_two_pi()`]
/// * [`pi()`](crate::pi)
/// * [`quarter_pi()`]
/// * [`root_half_pi()`]
/// * [`root_pi()`]
/// * [`root_two_pi()`]
/// * [`three_over_two_pi()`]
/// * [`two_over_pi()`]
/// * [`two_over_root_pi()`]
/// * [`two_pi()`]
pub fn half_pi<T: RealNumber>() -> T {
T::frac_pi_2()
}
@ -62,8 +61,8 @@ pub fn half_pi<T: RealNumber>() -> T {
///
/// # See also:
///
/// * [`ln_ten`](fn.ln_ten.html)
/// * [`ln_two`](fn.ln_two.html)
/// * [`ln_ten()`]
/// * [`ln_two()`]
pub fn ln_ln_two<T: RealNumber>() -> T {
T::ln_2().ln()
}
@ -72,8 +71,8 @@ pub fn ln_ln_two<T: RealNumber>() -> T {
///
/// # See also:
///
/// * [`ln_ln_two`](fn.ln_ln_two.html)
/// * [`ln_two`](fn.ln_two.html)
/// * [`ln_ln_two()`]
/// * [`ln_two()`]
pub fn ln_ten<T: RealNumber>() -> T {
T::ln_10()
}
@ -82,8 +81,8 @@ pub fn ln_ten<T: RealNumber>() -> T {
///
/// # See also:
///
/// * [`ln_ln_two`](fn.ln_ln_two.html)
/// * [`ln_ten`](fn.ln_ten.html)
/// * [`ln_ln_two()`]
/// * [`ln_ten()`]
pub fn ln_two<T: RealNumber>() -> T {
T::ln_2()
}
@ -95,18 +94,18 @@ pub use na::one;
///
/// # See also:
///
/// * [`four_over_pi`](fn.four_over_pi.html)
/// * [`half_pi`](fn.half_pi.html)
/// * [`one_over_two_pi`](fn.one_over_two_pi.html)
/// * [`pi`](fn.pi.html)
/// * [`quarter_pi`](fn.quarter_pi.html)
/// * [`root_half_pi`](fn.root_half_pi.html)
/// * [`root_pi`](fn.root_pi.html)
/// * [`root_two_pi`](fn.root_two_pi.html)
/// * [`three_over_two_pi`](fn.three_over_two_pi.html)
/// * [`two_over_pi`](fn.two_over_pi.html)
/// * [`two_over_root_pi`](fn.two_over_root_pi.html)
/// * [`two_pi`](fn.two_pi.html)
/// * [`four_over_pi()`]
/// * [`half_pi()`]
/// * [`one_over_two_pi()`]
/// * [`pi()`](crate::pi)
/// * [`quarter_pi()`]
/// * [`root_half_pi()`]
/// * [`root_pi()`]
/// * [`root_two_pi()`]
/// * [`three_over_two_pi()`]
/// * [`two_over_pi()`]
/// * [`two_over_root_pi()`]
/// * [`two_pi()`]
pub fn one_over_pi<T: RealNumber>() -> T {
T::frac_1_pi()
}
@ -120,18 +119,18 @@ pub fn one_over_root_two<T: RealNumber>() -> T {
///
/// # See also:
///
/// * [`four_over_pi`](fn.four_over_pi.html)
/// * [`half_pi`](fn.half_pi.html)
/// * [`one_over_pi`](fn.one_over_pi.html)
/// * [`pi`](fn.pi.html)
/// * [`quarter_pi`](fn.quarter_pi.html)
/// * [`root_half_pi`](fn.root_half_pi.html)
/// * [`root_pi`](fn.root_pi.html)
/// * [`root_two_pi`](fn.root_two_pi.html)
/// * [`three_over_two_pi`](fn.three_over_two_pi.html)
/// * [`two_over_pi`](fn.two_over_pi.html)
/// * [`two_over_root_pi`](fn.two_over_root_pi.html)
/// * [`two_pi`](fn.two_pi.html)
/// * [`four_over_pi()`]
/// * [`half_pi()`]
/// * [`one_over_pi()`]
/// * [`pi()`](crate::pi)
/// * [`quarter_pi()`]
/// * [`root_half_pi()`]
/// * [`root_pi()`]
/// * [`root_two_pi()`]
/// * [`three_over_two_pi()`]
/// * [`two_over_pi()`]
/// * [`two_over_root_pi()`]
/// * [`two_pi()`]
pub fn one_over_two_pi<T: RealNumber>() -> T {
T::frac_1_pi() * na::convert(0.5)
}
@ -140,18 +139,18 @@ pub fn one_over_two_pi<T: RealNumber>() -> T {
///
/// # See also:
///
/// * [`four_over_pi`](fn.four_over_pi.html)
/// * [`half_pi`](fn.half_pi.html)
/// * [`one_over_pi`](fn.one_over_pi.html)
/// * [`one_over_two_pi`](fn.one_over_two_pi.html)
/// * [`pi`](fn.pi.html)
/// * [`root_half_pi`](fn.root_half_pi.html)
/// * [`root_pi`](fn.root_pi.html)
/// * [`root_two_pi`](fn.root_two_pi.html)
/// * [`three_over_two_pi`](fn.three_over_two_pi.html)
/// * [`two_over_pi`](fn.two_over_pi.html)
/// * [`two_over_root_pi`](fn.two_over_root_pi.html)
/// * [`two_pi`](fn.two_pi.html)
/// * [`four_over_pi()`]
/// * [`half_pi()`]
/// * [`one_over_pi()`]
/// * [`one_over_two_pi()`]
/// * [`pi()`](crate::pi)
/// * [`root_half_pi()`]
/// * [`root_pi()`]
/// * [`root_two_pi()`]
/// * [`three_over_two_pi()`]
/// * [`two_over_pi()`]
/// * [`two_over_root_pi()`]
/// * [`two_pi()`]
pub fn quarter_pi<T: RealNumber>() -> T {
T::frac_pi_4()
}
@ -160,8 +159,8 @@ pub fn quarter_pi<T: RealNumber>() -> T {
///
/// # See also:
///
/// * [`root_three`](fn.root_three.html)
/// * [`root_two`](fn.root_two.html)
/// * [`root_three()`]
/// * [`root_two()`]
pub fn root_five<T: RealNumber>() -> T {
na::convert::<_, T>(5.0).sqrt()
}
@ -170,18 +169,18 @@ pub fn root_five<T: RealNumber>() -> T {
///
/// # See also:
///
/// * [`four_over_pi`](fn.four_over_pi.html)
/// * [`half_pi`](fn.half_pi.html)
/// * [`one_over_pi`](fn.one_over_pi.html)
/// * [`one_over_two_pi`](fn.one_over_two_pi.html)
/// * [`pi`](fn.pi.html)
/// * [`quarter_pi`](fn.quarter_pi.html)
/// * [`root_pi`](fn.root_pi.html)
/// * [`root_two_pi`](fn.root_two_pi.html)
/// * [`three_over_two_pi`](fn.three_over_two_pi.html)
/// * [`two_over_pi`](fn.two_over_pi.html)
/// * [`two_over_root_pi`](fn.two_over_root_pi.html)
/// * [`two_pi`](fn.two_pi.html)
/// * [`four_over_pi()`]
/// * [`half_pi()`]
/// * [`one_over_pi()`]
/// * [`one_over_two_pi()`]
/// * [`pi()`](crate::pi)
/// * [`quarter_pi()`]
/// * [`root_pi()`]
/// * [`root_two_pi()`]
/// * [`three_over_two_pi()`]
/// * [`two_over_pi()`]
/// * [`two_over_root_pi()`]
/// * [`two_pi()`]
pub fn root_half_pi<T: RealNumber>() -> T {
(T::pi() / na::convert(2.0)).sqrt()
}
@ -195,18 +194,18 @@ pub fn root_ln_four<T: RealNumber>() -> T {
///
/// # See also:
///
/// * [`four_over_pi`](fn.four_over_pi.html)
/// * [`half_pi`](fn.half_pi.html)
/// * [`one_over_pi`](fn.one_over_pi.html)
/// * [`one_over_two_pi`](fn.one_over_two_pi.html)
/// * [`pi`](fn.pi.html)
/// * [`quarter_pi`](fn.quarter_pi.html)
/// * [`root_half_pi`](fn.root_half_pi.html)
/// * [`root_two_pi`](fn.root_two_pi.html)
/// * [`three_over_two_pi`](fn.three_over_two_pi.html)
/// * [`two_over_pi`](fn.two_over_pi.html)
/// * [`two_over_root_pi`](fn.two_over_root_pi.html)
/// * [`two_pi`](fn.two_pi.html)
/// * [`four_over_pi()`]
/// * [`half_pi()`]
/// * [`one_over_pi()`]
/// * [`one_over_two_pi()`]
/// * [`pi()`](crate::pi)
/// * [`quarter_pi()`]
/// * [`root_half_pi()`]
/// * [`root_two_pi()`]
/// * [`three_over_two_pi()`]
/// * [`two_over_pi()`]
/// * [`two_over_root_pi()`]
/// * [`two_pi()`]
pub fn root_pi<T: RealNumber>() -> T {
T::pi().sqrt()
}
@ -215,8 +214,8 @@ pub fn root_pi<T: RealNumber>() -> T {
///
/// # See also:
///
/// * [`root_five`](fn.root_five.html)
/// * [`root_two`](fn.root_two.html)
/// * [`root_five()`]
/// * [`root_two()`]
pub fn root_three<T: RealNumber>() -> T {
na::convert::<_, T>(3.0).sqrt()
}
@ -225,8 +224,8 @@ pub fn root_three<T: RealNumber>() -> T {
///
/// # See also:
///
/// * [`root_five`](fn.root_five.html)
/// * [`root_three`](fn.root_three.html)
/// * [`root_five()`]
/// * [`root_three()`]
pub fn root_two<T: RealNumber>() -> T {
// TODO: there should be a crate::sqrt_2() on the RealNumber trait.
na::convert::<_, T>(2.0).sqrt()
@ -236,18 +235,18 @@ pub fn root_two<T: RealNumber>() -> T {
///
/// # See also:
///
/// * [`four_over_pi`](fn.four_over_pi.html)
/// * [`half_pi`](fn.half_pi.html)
/// * [`one_over_pi`](fn.one_over_pi.html)
/// * [`one_over_two_pi`](fn.one_over_two_pi.html)
/// * [`pi`](fn.pi.html)
/// * [`quarter_pi`](fn.quarter_pi.html)
/// * [`root_half_pi`](fn.root_half_pi.html)
/// * [`root_pi`](fn.root_pi.html)
/// * [`three_over_two_pi`](fn.three_over_two_pi.html)
/// * [`two_over_pi`](fn.two_over_pi.html)
/// * [`two_over_root_pi`](fn.two_over_root_pi.html)
/// * [`two_pi`](fn.two_pi.html)
/// * [`four_over_pi()`]
/// * [`half_pi()`]
/// * [`one_over_pi()`]
/// * [`one_over_two_pi()`]
/// * [`pi()`](crate::pi)
/// * [`quarter_pi()`]
/// * [`root_half_pi()`]
/// * [`root_pi()`]
/// * [`three_over_two_pi()`]
/// * [`two_over_pi()`]
/// * [`two_over_root_pi()`]
/// * [`two_pi()`]
pub fn root_two_pi<T: RealNumber>() -> T {
T::two_pi().sqrt()
}
@ -256,7 +255,7 @@ pub fn root_two_pi<T: RealNumber>() -> T {
///
/// # See also:
///
/// * [`two_thirds`](fn.two_thirds.html)
/// * [`two_thirds()`]
pub fn third<T: RealNumber>() -> T {
na::convert(1.0 / 3.0)
}
@ -265,18 +264,18 @@ pub fn third<T: RealNumber>() -> T {
///
/// # See also:
///
/// * [`four_over_pi`](fn.four_over_pi.html)
/// * [`half_pi`](fn.half_pi.html)
/// * [`one_over_pi`](fn.one_over_pi.html)
/// * [`one_over_two_pi`](fn.one_over_two_pi.html)
/// * [`pi`](fn.pi.html)
/// * [`quarter_pi`](fn.quarter_pi.html)
/// * [`root_half_pi`](fn.root_half_pi.html)
/// * [`root_pi`](fn.root_pi.html)
/// * [`root_two_pi`](fn.root_two_pi.html)
/// * [`two_over_pi`](fn.two_over_pi.html)
/// * [`two_over_root_pi`](fn.two_over_root_pi.html)
/// * [`two_pi`](fn.two_pi.html)
/// * [`four_over_pi()`]
/// * [`half_pi()`]
/// * [`one_over_pi()`]
/// * [`one_over_two_pi()`]
/// * [`pi()`](crate::pi)
/// * [`quarter_pi()`]
/// * [`root_half_pi()`]
/// * [`root_pi()`]
/// * [`root_two_pi()`]
/// * [`two_over_pi()`]
/// * [`two_over_root_pi()`]
/// * [`two_pi()`]
pub fn three_over_two_pi<T: RealNumber>() -> T {
na::convert::<_, T>(3.0) / T::two_pi()
}
@ -285,17 +284,18 @@ pub fn three_over_two_pi<T: RealNumber>() -> T {
///
/// # See also:
///
/// * [`four_over_pi`](fn.four_over_pi.html)
/// * [`half_pi`](fn.half_pi.html)
/// * [`one_over_pi`](fn.one_over_pi.html)
/// * [`one_over_two_pi`](fn.one_over_two_pi.html)
/// * [`quarter_pi`](fn.quarter_pi.html)
/// * [`root_half_pi`](fn.root_half_pi.html)
/// * [`root_pi`](fn.root_pi.html)
/// * [`root_two_pi`](fn.root_two_pi.html)
/// * [`three_over_two_pi`](fn.three_over_two_pi.html)
/// * [`two_over_root_pi`](fn.two_over_root_pi.html)
/// * [`two_pi`](fn.two_pi.html)
/// * [`four_over_pi()`]
/// * [`half_pi()`]
/// * [`one_over_pi()`]
/// * [`one_over_two_pi()`]
/// * [`pi()`](crate::pi)
/// * [`quarter_pi()`]
/// * [`root_half_pi()`]
/// * [`root_pi()`]
/// * [`root_two_pi()`]
/// * [`three_over_two_pi()`]
/// * [`two_over_root_pi()`]
/// * [`two_pi()`]
pub fn two_over_pi<T: RealNumber>() -> T {
T::frac_2_pi()
}
@ -304,18 +304,18 @@ pub fn two_over_pi<T: RealNumber>() -> T {
///
/// # See also:
///
/// * [`four_over_pi`](fn.four_over_pi.html)
/// * [`half_pi`](fn.half_pi.html)
/// * [`one_over_pi`](fn.one_over_pi.html)
/// * [`one_over_two_pi`](fn.one_over_two_pi.html)
/// * [`pi`](fn.pi.html)
/// * [`quarter_pi`](fn.quarter_pi.html)
/// * [`root_half_pi`](fn.root_half_pi.html)
/// * [`root_pi`](fn.root_pi.html)
/// * [`root_two_pi`](fn.root_two_pi.html)
/// * [`three_over_two_pi`](fn.three_over_two_pi.html)
/// * [`two_over_pi`](fn.two_over_pi.html)
/// * [`two_pi`](fn.two_pi.html)
/// * [`four_over_pi()`]
/// * [`half_pi()`]
/// * [`one_over_pi()`]
/// * [`one_over_two_pi()`]
/// * [`pi()`](crate::pi)
/// * [`quarter_pi()`]
/// * [`root_half_pi()`]
/// * [`root_pi()`]
/// * [`root_two_pi()`]
/// * [`three_over_two_pi()`]
/// * [`two_over_pi()`]
/// * [`two_pi()`]
pub fn two_over_root_pi<T: RealNumber>() -> T {
T::frac_2_sqrt_pi()
}
@ -324,18 +324,18 @@ pub fn two_over_root_pi<T: RealNumber>() -> T {
///
/// # See also:
///
/// * [`four_over_pi`](fn.four_over_pi.html)
/// * [`half_pi`](fn.half_pi.html)
/// * [`one_over_pi`](fn.one_over_pi.html)
/// * [`one_over_two_pi`](fn.one_over_two_pi.html)
/// * [`pi`](fn.pi.html)
/// * [`quarter_pi`](fn.quarter_pi.html)
/// * [`root_half_pi`](fn.root_half_pi.html)
/// * [`root_pi`](fn.root_pi.html)
/// * [`root_two_pi`](fn.root_two_pi.html)
/// * [`three_over_two_pi`](fn.three_over_two_pi.html)
/// * [`two_over_pi`](fn.two_over_pi.html)
/// * [`two_over_root_pi`](fn.two_over_root_pi.html)
/// * [`four_over_pi()`]
/// * [`half_pi()`]
/// * [`one_over_pi()`]
/// * [`one_over_two_pi()`]
/// * [`pi()`](crate::pi)
/// * [`quarter_pi()`]
/// * [`root_half_pi()`]
/// * [`root_pi()`]
/// * [`root_two_pi()`]
/// * [`three_over_two_pi()`]
/// * [`two_over_pi()`]
/// * [`two_over_root_pi()`]
pub fn two_pi<T: RealNumber>() -> T {
T::two_pi()
}
@ -344,7 +344,7 @@ pub fn two_pi<T: RealNumber>() -> T {
///
/// # See also:
///
/// * [`third`](fn.third.html)
/// * [`third()`]
pub fn two_thirds<T: RealNumber>() -> T {
na::convert(2.0 / 3.0)
}
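A quick numeric sanity sketch for a few of the constants above (assumes `nalgebra_glm` as `glm`):

```rust
use nalgebra_glm as glm;

fn main() {
    let pi = glm::pi::<f64>();
    assert!((glm::two_pi::<f64>() - 2.0 * pi).abs() < 1.0e-12);
    assert!((glm::half_pi::<f64>() - 0.5 * pi).abs() < 1.0e-12);
    assert!((glm::two_thirds::<f64>() - 2.0 / 3.0).abs() < 1.0e-12);
}
```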


@ -6,9 +6,9 @@ use crate::aliases::{TMat, TVec};
///
/// # See also:
///
/// * [`row`](fn.row.html)
/// * [`set_column`](fn.set_column.html)
/// * [`set_row`](fn.set_row.html)
/// * [`row()`]
/// * [`set_column()`]
/// * [`set_row()`]
pub fn column<T: Scalar, const R: usize, const C: usize>(
m: &TMat<T, R, C>,
index: usize,
@ -20,9 +20,9 @@ pub fn column<T: Scalar, const R: usize, const C: usize>(
///
/// # See also:
///
/// * [`column`](fn.column.html)
/// * [`row`](fn.row.html)
/// * [`set_row`](fn.set_row.html)
/// * [`column()`]
/// * [`row()`]
/// * [`set_row()`]
pub fn set_column<T: Scalar, const R: usize, const C: usize>(
m: &TMat<T, R, C>,
index: usize,
@ -37,9 +37,9 @@ pub fn set_column<T: Scalar, const R: usize, const C: usize>(
///
/// # See also:
///
/// * [`column`](fn.column.html)
/// * [`set_column`](fn.set_column.html)
/// * [`set_row`](fn.set_row.html)
/// * [`column()`]
/// * [`set_column()`]
/// * [`set_row()`]
pub fn row<T: Scalar, const R: usize, const C: usize>(
m: &TMat<T, R, C>,
index: usize,
@ -51,9 +51,9 @@ pub fn row<T: Scalar, const R: usize, const C: usize>(
///
/// # See also:
///
/// * [`column`](fn.column.html)
/// * [`row`](fn.row.html)
/// * [`set_column`](fn.set_column.html)
/// * [`column()`]
/// * [`row()`]
/// * [`set_column()`]
pub fn set_row<T: Scalar, const R: usize, const C: usize>(
m: &TMat<T, R, C>,
index: usize,


@ -128,9 +128,9 @@ pub fn make_quat<T: RealNumber>(ptr: &[T]) -> Qua<T> {
///
/// # See also:
///
/// * [`make_vec2`](fn.make_vec2.html)
/// * [`make_vec3`](fn.make_vec3.html)
/// * [`make_vec4`](fn.make_vec4.html)
/// * [`make_vec2()`]
/// * [`make_vec3()`]
/// * [`make_vec4()`]
pub fn make_vec1<T: Scalar>(v: &TVec1<T>) -> TVec1<T> {
v.clone()
}
@ -139,12 +139,11 @@ pub fn make_vec1<T: Scalar>(v: &TVec1<T>) -> TVec1<T> {
///
/// # See also:
///
/// * [`vec1_to_vec1`](fn.vec1_to_vec1.html)
/// * [`vec3_to_vec1`](fn.vec3_to_vec1.html)
/// * [`vec4_to_vec1`](fn.vec4_to_vec1.html)
/// * [`vec1_to_vec2`](fn.vec1_to_vec2.html)
/// * [`vec1_to_vec3`](fn.vec1_to_vec3.html)
/// * [`vec1_to_vec4`](fn.vec1_to_vec4.html)
/// * [`vec3_to_vec1()`]
/// * [`vec4_to_vec1()`]
/// * [`vec1_to_vec2()`]
/// * [`vec1_to_vec3()`]
/// * [`vec1_to_vec4()`]
pub fn vec2_to_vec1<T: Scalar>(v: &TVec2<T>) -> TVec1<T> {
TVec1::new(v.x.clone())
}
@ -153,12 +152,11 @@ pub fn vec2_to_vec1<T: Scalar>(v: &TVec2<T>) -> TVec1<T> {
///
/// # See also:
///
/// * [`vec1_to_vec1`](fn.vec1_to_vec1.html)
/// * [`vec2_to_vec1`](fn.vec2_to_vec1.html)
/// * [`vec4_to_vec1`](fn.vec4_to_vec1.html)
/// * [`vec1_to_vec2`](fn.vec1_to_vec2.html)
/// * [`vec1_to_vec3`](fn.vec1_to_vec3.html)
/// * [`vec1_to_vec4`](fn.vec1_to_vec4.html)
/// * [`vec2_to_vec1()`]
/// * [`vec4_to_vec1()`]
/// * [`vec1_to_vec2()`]
/// * [`vec1_to_vec3()`]
/// * [`vec1_to_vec4()`]
pub fn vec3_to_vec1<T: Scalar>(v: &TVec3<T>) -> TVec1<T> {
TVec1::new(v.x.clone())
}
@ -167,12 +165,11 @@ pub fn vec3_to_vec1<T: Scalar>(v: &TVec3<T>) -> TVec1<T> {
///
/// # See also:
///
/// * [`vec1_to_vec1`](fn.vec1_to_vec1.html)
/// * [`vec2_to_vec1`](fn.vec2_to_vec1.html)
/// * [`vec3_to_vec1`](fn.vec3_to_vec1.html)
/// * [`vec1_to_vec2`](fn.vec1_to_vec2.html)
/// * [`vec1_to_vec3`](fn.vec1_to_vec3.html)
/// * [`vec1_to_vec4`](fn.vec1_to_vec4.html)
/// * [`vec2_to_vec1()`]
/// * [`vec3_to_vec1()`]
/// * [`vec1_to_vec2()`]
/// * [`vec1_to_vec3()`]
/// * [`vec1_to_vec4()`]
pub fn vec4_to_vec1<T: Scalar>(v: &TVec4<T>) -> TVec1<T> {
TVec1::new(v.x.clone())
}
@ -183,12 +180,12 @@ pub fn vec4_to_vec1<T: Scalar>(v: &TVec4<T>) -> TVec1<T> {
///
/// # See also:
///
/// * [`vec3_to_vec2`](fn.vec3_to_vec2.html)
/// * [`vec4_to_vec2`](fn.vec4_to_vec2.html)
/// * [`vec2_to_vec1`](fn.vec2_to_vec1.html)
/// * [`vec2_to_vec2`](fn.vec2_to_vec2.html)
/// * [`vec2_to_vec3`](fn.vec2_to_vec3.html)
/// * [`vec2_to_vec4`](fn.vec2_to_vec4.html)
/// * [`vec3_to_vec2()`]
/// * [`vec4_to_vec2()`]
/// * [`vec2_to_vec1()`]
/// * [`vec2_to_vec2()`]
/// * [`vec2_to_vec3()`]
/// * [`vec2_to_vec4()`]
pub fn vec1_to_vec2<T: Number>(v: &TVec1<T>) -> TVec2<T> {
TVec2::new(v.x.clone(), T::zero())
}
@ -197,13 +194,13 @@ pub fn vec1_to_vec2<T: Number>(v: &TVec1<T>) -> TVec2<T> {
///
/// # See also:
///
/// * [`vec1_to_vec2`](fn.vec1_to_vec2.html)
/// * [`vec3_to_vec2`](fn.vec3_to_vec2.html)
/// * [`vec4_to_vec2`](fn.vec4_to_vec2.html)
/// * [`vec2_to_vec1`](fn.vec2_to_vec1.html)
/// * [`vec2_to_vec2`](fn.vec2_to_vec2.html)
/// * [`vec2_to_vec3`](fn.vec2_to_vec3.html)
/// * [`vec2_to_vec4`](fn.vec2_to_vec4.html)
/// * [`vec1_to_vec2()`]
/// * [`vec3_to_vec2()`]
/// * [`vec4_to_vec2()`]
/// * [`vec2_to_vec1()`]
/// * [`vec2_to_vec2()`]
/// * [`vec2_to_vec3()`]
/// * [`vec2_to_vec4()`]
pub fn vec2_to_vec2<T: Scalar>(v: &TVec2<T>) -> TVec2<T> {
v.clone()
}
@ -212,12 +209,12 @@ pub fn vec2_to_vec2<T: Scalar>(v: &TVec2<T>) -> TVec2<T> {
///
/// # See also:
///
/// * [`vec1_to_vec2`](fn.vec1_to_vec2.html)
/// * [`vec4_to_vec2`](fn.vec4_to_vec2.html)
/// * [`vec2_to_vec1`](fn.vec2_to_vec1.html)
/// * [`vec2_to_vec2`](fn.vec2_to_vec2.html)
/// * [`vec2_to_vec3`](fn.vec2_to_vec3.html)
/// * [`vec2_to_vec4`](fn.vec2_to_vec4.html)
/// * [`vec1_to_vec2()`]
/// * [`vec4_to_vec2()`]
/// * [`vec2_to_vec1()`]
/// * [`vec2_to_vec2()`]
/// * [`vec2_to_vec3()`]
/// * [`vec2_to_vec4()`]
pub fn vec3_to_vec2<T: Scalar>(v: &TVec3<T>) -> TVec2<T> {
TVec2::new(v.x.clone(), v.y.clone())
}
@ -226,12 +223,12 @@ pub fn vec3_to_vec2<T: Scalar>(v: &TVec3<T>) -> TVec2<T> {
///
/// # See also:
///
/// * [`vec1_to_vec2`](fn.vec1_to_vec2.html)
/// * [`vec3_to_vec2`](fn.vec4_to_vec2.html)
/// * [`vec2_to_vec1`](fn.vec2_to_vec1.html)
/// * [`vec2_to_vec2`](fn.vec2_to_vec2.html)
/// * [`vec2_to_vec3`](fn.vec2_to_vec3.html)
/// * [`vec2_to_vec4`](fn.vec2_to_vec4.html)
/// * [`vec1_to_vec2()`]
/// * [`vec3_to_vec2()`]
/// * [`vec2_to_vec1()`]
/// * [`vec2_to_vec2()`]
/// * [`vec2_to_vec3()`]
/// * [`vec2_to_vec4()`]
pub fn vec4_to_vec2<T: Scalar>(v: &TVec4<T>) -> TVec2<T> {
TVec2::new(v.x.clone(), v.y.clone())
}
@ -240,9 +237,9 @@ pub fn vec4_to_vec2<T: Scalar>(v: &TVec4<T>) -> TVec2<T> {
///
/// # See also:
///
/// * [`make_vec1`](fn.make_vec1.html)
/// * [`make_vec3`](fn.make_vec3.html)
/// * [`make_vec4`](fn.make_vec4.html)
/// * [`make_vec1()`]
/// * [`make_vec3()`]
/// * [`make_vec4()`]
pub fn make_vec2<T: Scalar>(ptr: &[T]) -> TVec2<T> {
TVec2::from_column_slice(ptr)
}
@ -253,11 +250,11 @@ pub fn make_vec2<T: Scalar>(ptr: &[T]) -> TVec2<T> {
///
/// # See also:
///
/// * [`vec2_to_vec3`](fn.vec2_to_vec3.html)
/// * [`vec3_to_vec3`](fn.vec3_to_vec3.html)
/// * [`vec4_to_vec3`](fn.vec4_to_vec3.html)
/// * [`vec1_to_vec2`](fn.vec1_to_vec2.html)
/// * [`vec1_to_vec4`](fn.vec1_to_vec4.html)
/// * [`vec2_to_vec3()`]
/// * [`vec3_to_vec3()`]
/// * [`vec4_to_vec3()`]
/// * [`vec1_to_vec2()`]
/// * [`vec1_to_vec4()`]
pub fn vec1_to_vec3<T: Number>(v: &TVec1<T>) -> TVec3<T> {
TVec3::new(v.x.clone(), T::zero(), T::zero())
}
@ -268,12 +265,12 @@ pub fn vec1_to_vec3<T: Number>(v: &TVec1<T>) -> TVec3<T> {
///
/// # See also:
///
/// * [`vec1_to_vec3`](fn.vec1_to_vec3.html)
/// * [`vec3_to_vec3`](fn.vec3_to_vec3.html)
/// * [`vec4_to_vec3`](fn.vec4_to_vec3.html)
/// * [`vec3_to_vec1`](fn.vec3_to_vec1.html)
/// * [`vec3_to_vec2`](fn.vec3_to_vec2.html)
/// * [`vec3_to_vec4`](fn.vec3_to_vec4.html)
/// * [`vec1_to_vec3()`]
/// * [`vec3_to_vec3()`]
/// * [`vec4_to_vec3()`]
/// * [`vec3_to_vec1()`]
/// * [`vec3_to_vec2()`]
/// * [`vec3_to_vec4()`]
pub fn vec2_to_vec3<T: Number>(v: &TVec2<T>) -> TVec3<T> {
TVec3::new(v.x.clone(), v.y.clone(), T::zero())
}
@ -282,12 +279,12 @@ pub fn vec2_to_vec3<T: Number>(v: &TVec2<T>) -> TVec3<T> {
///
/// # See also:
///
/// * [`vec1_to_vec3`](fn.vec1_to_vec3.html)
/// * [`vec2_to_vec3`](fn.vec2_to_vec3.html)
/// * [`vec4_to_vec3`](fn.vec4_to_vec3.html)
/// * [`vec3_to_vec1`](fn.vec3_to_vec1.html)
/// * [`vec3_to_vec2`](fn.vec3_to_vec2.html)
/// * [`vec3_to_vec4`](fn.vec3_to_vec4.html)
/// * [`vec1_to_vec3()`]
/// * [`vec2_to_vec3()`]
/// * [`vec4_to_vec3()`]
/// * [`vec3_to_vec1()`]
/// * [`vec3_to_vec2()`]
/// * [`vec3_to_vec4()`]
pub fn vec3_to_vec3<T: Scalar>(v: &TVec3<T>) -> TVec3<T> {
v.clone()
}
@ -296,12 +293,12 @@ pub fn vec3_to_vec3<T: Scalar>(v: &TVec3<T>) -> TVec3<T> {
///
/// # See also:
///
/// * [`vec1_to_vec3`](fn.vec1_to_vec3.html)
/// * [`vec2_to_vec3`](fn.vec2_to_vec3.html)
/// * [`vec3_to_vec3`](fn.vec3_to_vec3.html)
/// * [`vec3_to_vec1`](fn.vec3_to_vec1.html)
/// * [`vec3_to_vec2`](fn.vec3_to_vec2.html)
/// * [`vec3_to_vec4`](fn.vec3_to_vec4.html)
/// * [`vec1_to_vec3()`]
/// * [`vec2_to_vec3()`]
/// * [`vec3_to_vec3()`]
/// * [`vec3_to_vec1()`]
/// * [`vec3_to_vec2()`]
/// * [`vec3_to_vec4()`]
pub fn vec4_to_vec3<T: Scalar>(v: &TVec4<T>) -> TVec3<T> {
TVec3::new(v.x.clone(), v.y.clone(), v.z.clone())
}
@ -310,9 +307,9 @@ pub fn vec4_to_vec3<T: Scalar>(v: &TVec4<T>) -> TVec3<T> {
///
/// # See also:
///
/// * [`make_vec1`](fn.make_vec1.html)
/// * [`make_vec2`](fn.make_vec2.html)
/// * [`make_vec4`](fn.make_vec4.html)
/// * [`make_vec1()`]
/// * [`make_vec2()`]
/// * [`make_vec4()`]
pub fn make_vec3<T: Scalar>(ptr: &[T]) -> TVec3<T> {
TVec3::from_column_slice(ptr)
}
@ -323,12 +320,12 @@ pub fn make_vec3<T: Scalar>(ptr: &[T]) -> TVec3<T> {
///
/// # See also:
///
/// * [`vec2_to_vec4`](fn.vec2_to_vec4.html)
/// * [`vec3_to_vec4`](fn.vec3_to_vec4.html)
/// * [`vec4_to_vec4`](fn.vec4_to_vec4.html)
/// * [`vec1_to_vec2`](fn.vec1_to_vec2.html)
/// * [`vec1_to_vec3`](fn.vec1_to_vec3.html)
/// * [`vec1_to_vec4`](fn.vec1_to_vec4.html)
/// * [`vec2_to_vec4()`]
/// * [`vec3_to_vec4()`]
/// * [`vec4_to_vec4()`]
/// * [`vec1_to_vec2()`]
/// * [`vec1_to_vec3()`]
/// * [`vec1_to_vec4()`]
pub fn vec1_to_vec4<T: Number>(v: &TVec1<T>) -> TVec4<T> {
TVec4::new(v.x, T::zero(), T::zero(), T::zero())
}
@ -339,12 +336,12 @@ pub fn vec1_to_vec4<T: Number>(v: &TVec1<T>) -> TVec4<T> {
///
/// # See also:
///
/// * [`vec1_to_vec4`](fn.vec1_to_vec4.html)
/// * [`vec3_to_vec4`](fn.vec3_to_vec4.html)
/// * [`vec4_to_vec4`](fn.vec4_to_vec4.html)
/// * [`vec2_to_vec1`](fn.vec2_to_vec1.html)
/// * [`vec2_to_vec2`](fn.vec2_to_vec2.html)
/// * [`vec2_to_vec3`](fn.vec2_to_vec3.html)
/// * [`vec1_to_vec4()`]
/// * [`vec3_to_vec4()`]
/// * [`vec4_to_vec4()`]
/// * [`vec2_to_vec1()`]
/// * [`vec2_to_vec2()`]
/// * [`vec2_to_vec3()`]
pub fn vec2_to_vec4<T: Number>(v: &TVec2<T>) -> TVec4<T> {
TVec4::new(v.x, v.y, T::zero(), T::zero())
}
@ -355,12 +352,12 @@ pub fn vec2_to_vec4<T: Number>(v: &TVec2<T>) -> TVec4<T> {
///
/// # See also:
///
/// * [`vec1_to_vec4`](fn.vec1_to_vec4.html)
/// * [`vec2_to_vec4`](fn.vec2_to_vec4.html)
/// * [`vec4_to_vec4`](fn.vec4_to_vec4.html)
/// * [`vec3_to_vec1`](fn.vec3_to_vec1.html)
/// * [`vec3_to_vec2`](fn.vec3_to_vec2.html)
/// * [`vec3_to_vec3`](fn.vec3_to_vec3.html)
/// * [`vec1_to_vec4()`]
/// * [`vec2_to_vec4()`]
/// * [`vec4_to_vec4()`]
/// * [`vec3_to_vec1()`]
/// * [`vec3_to_vec2()`]
/// * [`vec3_to_vec3()`]
pub fn vec3_to_vec4<T: Number>(v: &TVec3<T>) -> TVec4<T> {
TVec4::new(v.x, v.y, v.z, T::zero())
}
@ -369,12 +366,12 @@ pub fn vec3_to_vec4<T: Number>(v: &TVec3<T>) -> TVec4<T> {
///
/// # See also:
///
/// * [`vec1_to_vec4`](fn.vec1_to_vec4.html)
/// * [`vec2_to_vec4`](fn.vec2_to_vec4.html)
/// * [`vec3_to_vec4`](fn.vec3_to_vec4.html)
/// * [`vec4_to_vec1`](fn.vec4_to_vec1.html)
/// * [`vec4_to_vec2`](fn.vec4_to_vec2.html)
/// * [`vec4_to_vec3`](fn.vec4_to_vec3.html)
/// * [`vec1_to_vec4()`]
/// * [`vec2_to_vec4()`]
/// * [`vec3_to_vec4()`]
/// * [`vec4_to_vec1()`]
/// * [`vec4_to_vec2()`]
/// * [`vec4_to_vec3()`]
pub fn vec4_to_vec4<T: Scalar>(v: &TVec4<T>) -> TVec4<T> {
v.clone()
}
@ -383,9 +380,9 @@ pub fn vec4_to_vec4<T: Scalar>(v: &TVec4<T>) -> TVec4<T> {
///
/// # See also:
///
/// * [`make_vec1`](fn.make_vec1.html)
/// * [`make_vec2`](fn.make_vec2.html)
/// * [`make_vec3`](fn.make_vec3.html)
/// * [`make_vec1()`]
/// * [`make_vec2()`]
/// * [`make_vec3()`]
pub fn make_vec4<T: Scalar>(ptr: &[T]) -> TVec4<T> {
TVec4::from_column_slice(ptr)
}
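A sketch of the slice constructors and the zero-padding/truncating conversions above (assumes `nalgebra_glm` as `glm`):

```rust
use nalgebra_glm as glm;

fn main() {
    let v4 = glm::make_vec4(&[1.0_f32, 2.0, 3.0, 4.0]);
    assert_eq!(glm::vec4_to_vec2(&v4), glm::vec2(1.0, 2.0)); // drops the extra components

    let v2 = glm::vec2(1.0_f32, 2.0);
    assert_eq!(glm::vec2_to_vec4(&v2), glm::vec4(1.0, 2.0, 0.0, 0.0)); // pads with zeros
}
```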


@ -16,9 +16,9 @@ use crate::traits::Number;
///
/// # See also:
///
/// * [`comp_max`](fn.comp_max.html)
/// * [`comp_min`](fn.comp_min.html)
/// * [`comp_mul`](fn.comp_mul.html)
/// * [`comp_max()`]
/// * [`comp_min()`]
/// * [`comp_mul()`]
pub fn comp_add<T: Number, const R: usize, const C: usize>(m: &TMat<T, R, C>) -> T {
m.iter().fold(T::zero(), |x, y| x + *y)
}
@ -38,13 +38,13 @@ pub fn comp_add<T: Number, const R: usize, const C: usize>(m: &TMat<T, R, C>) ->
///
/// # See also:
///
/// * [`comp_add`](fn.comp_add.html)
/// * [`comp_max`](fn.comp_max.html)
/// * [`comp_min`](fn.comp_min.html)
/// * [`max`](fn.max.html)
/// * [`max2`](fn.max2.html)
/// * [`max3`](fn.max3.html)
/// * [`max4`](fn.max4.html)
/// * [`comp_add()`]
/// * [`comp_max()`]
/// * [`comp_min()`]
/// * [`max()`](crate::max)
/// * [`max2()`](crate::max2)
/// * [`max3()`](crate::max3)
/// * [`max4()`](crate::max4)
pub fn comp_max<T: Number, const R: usize, const C: usize>(m: &TMat<T, R, C>) -> T {
m.iter()
.fold(T::min_value(), |x, y| crate::max2_scalar(x, *y))
@ -65,13 +65,13 @@ pub fn comp_max<T: Number, const R: usize, const C: usize>(m: &TMat<T, R, C>) ->
///
/// # See also:
///
/// * [`comp_add`](fn.comp_add.html)
/// * [`comp_max`](fn.comp_max.html)
/// * [`comp_mul`](fn.comp_mul.html)
/// * [`min`](fn.min.html)
/// * [`min2`](fn.min2.html)
/// * [`min3`](fn.min3.html)
/// * [`min4`](fn.min4.html)
/// * [`comp_add()`]
/// * [`comp_max()`]
/// * [`comp_mul()`]
/// * [`min()`](crate::min)
/// * [`min2()`](crate::min2)
/// * [`min3()`](crate::min3)
/// * [`min4()`](crate::min4)
pub fn comp_min<T: Number, const R: usize, const C: usize>(m: &TMat<T, R, C>) -> T {
m.iter()
.fold(T::max_value(), |x, y| crate::min2_scalar(x, *y))
@ -92,9 +92,9 @@ pub fn comp_min<T: Number, const R: usize, const C: usize>(m: &TMat<T, R, C>) ->
///
/// # See also:
///
/// * [`comp_add`](fn.comp_add.html)
/// * [`comp_max`](fn.comp_max.html)
/// * [`comp_min`](fn.comp_min.html)
/// * [`comp_add()`]
/// * [`comp_max()`]
/// * [`comp_min()`]
pub fn comp_mul<T: Number, const R: usize, const C: usize>(m: &TMat<T, R, C>) -> T {
m.iter().fold(T::one(), |x, y| x * *y)
}
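A small sketch of the component reductions above on a 2x2 matrix (assumes `nalgebra_glm` as `glm`):

```rust
use nalgebra_glm as glm;

fn main() {
    let m = glm::mat2(1.0_f32, 2.0,
                      3.0, 4.0);
    assert_eq!(glm::comp_add(&m), 10.0); // 1 + 2 + 3 + 4
    assert_eq!(glm::comp_mul(&m), 24.0); // 1 * 2 * 3 * 4
}
```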


@ -5,7 +5,7 @@ use crate::traits::Number;
///
/// # See also:
///
/// * [`right_handed`](fn.right_handed.html)
/// * [`right_handed()`]
pub fn left_handed<T: Number>(a: &TVec3<T>, b: &TVec3<T>, c: &TVec3<T>) -> bool {
a.cross(b).dot(c) < T::zero()
}
@ -14,7 +14,7 @@ pub fn left_handed<T: Number>(a: &TVec3<T>, b: &TVec3<T>, c: &TVec3<T>) -> bool
///
/// # See also:
///
/// * [`left_handed`](fn.left_handed.html)
/// * [`left_handed()`]
pub fn right_handed<T: Number>(a: &TVec3<T>, b: &TVec3<T>, c: &TVec3<T>) -> bool {
a.cross(b).dot(c) > T::zero()
}
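A sketch of the handedness predicates using the standard basis, where x × y = z (assumes `nalgebra_glm` as `glm`):

```rust
use nalgebra_glm as glm;

fn main() {
    let x = glm::vec3(1.0_f32, 0.0, 0.0);
    let y = glm::vec3(0.0_f32, 1.0, 0.0);
    let z = glm::vec3(0.0_f32, 0.0, 1.0);

    assert!(glm::right_handed(&x, &y, &z));   // (x × y) · z = 1 > 0
    assert!(glm::left_handed(&x, &y, &(-z))); // (x × y) · (-z) = -1 < 0
}
```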


@ -5,7 +5,7 @@ use crate::RealNumber;
///
/// # See also:
///
/// * [`matrix_cross`](fn.matrix_cross.html)
/// * [`matrix_cross()`]
pub fn matrix_cross3<T: RealNumber>(x: &TVec3<T>) -> TMat3<T> {
x.cross_matrix()
}
@ -14,7 +14,7 @@ pub fn matrix_cross3<T: RealNumber>(x: &TVec3<T>) -> TMat3<T> {
///
/// # See also:
///
/// * [`matrix_cross3`](fn.matrix_cross3.html)
/// * [`matrix_cross3()`]
pub fn matrix_cross<T: RealNumber>(x: &TVec3<T>) -> TMat4<T> {
crate::mat3_to_mat4(&x.cross_matrix())
}


@ -7,14 +7,14 @@ use crate::traits::Number;
///
/// # See also:
///
/// * [`diagonal2x3`](fn.diagonal2x3.html)
/// * [`diagonal2x4`](fn.diagonal2x4.html)
/// * [`diagonal3x2`](fn.diagonal3x2.html)
/// * [`diagonal3x3`](fn.diagonal3x3.html)
/// * [`diagonal3x4`](fn.diagonal3x4.html)
/// * [`diagonal4x2`](fn.diagonal4x2.html)
/// * [`diagonal4x3`](fn.diagonal4x3.html)
/// * [`diagonal4x4`](fn.diagonal4x4.html)
/// * [`diagonal2x3()`]
/// * [`diagonal2x4()`]
/// * [`diagonal3x2()`]
/// * [`diagonal3x3()`]
/// * [`diagonal3x4()`]
/// * [`diagonal4x2()`]
/// * [`diagonal4x3()`]
/// * [`diagonal4x4()`]
pub fn diagonal2x2<T: Number>(v: &TVec2<T>) -> TMat2<T> {
TMat2::from_diagonal(v)
}
@ -23,14 +23,14 @@ pub fn diagonal2x2<T: Number>(v: &TVec2<T>) -> TMat2<T> {
///
/// # See also:
///
/// * [`diagonal2x2`](fn.diagonal2x2.html)
/// * [`diagonal2x4`](fn.diagonal2x4.html)
/// * [`diagonal3x2`](fn.diagonal3x2.html)
/// * [`diagonal3x3`](fn.diagonal3x3.html)
/// * [`diagonal3x4`](fn.diagonal3x4.html)
/// * [`diagonal4x2`](fn.diagonal4x2.html)
/// * [`diagonal4x3`](fn.diagonal4x3.html)
/// * [`diagonal4x4`](fn.diagonal4x4.html)
/// * [`diagonal2x2()`]
/// * [`diagonal2x4()`]
/// * [`diagonal3x2()`]
/// * [`diagonal3x3()`]
/// * [`diagonal3x4()`]
/// * [`diagonal4x2()`]
/// * [`diagonal4x3()`]
/// * [`diagonal4x4()`]
pub fn diagonal2x3<T: Number>(v: &TVec2<T>) -> TMat2x3<T> {
TMat2x3::from_partial_diagonal(v.as_slice())
}
@ -39,14 +39,14 @@ pub fn diagonal2x3<T: Number>(v: &TVec2<T>) -> TMat2x3<T> {
///
/// # See also:
///
/// * [`diagonal2x2`](fn.diagonal2x2.html)
/// * [`diagonal2x3`](fn.diagonal2x3.html)
/// * [`diagonal3x2`](fn.diagonal3x2.html)
/// * [`diagonal3x3`](fn.diagonal3x3.html)
/// * [`diagonal3x4`](fn.diagonal3x4.html)
/// * [`diagonal4x2`](fn.diagonal4x2.html)
/// * [`diagonal4x3`](fn.diagonal4x3.html)
/// * [`diagonal4x4`](fn.diagonal4x4.html)
/// * [`diagonal2x2()`]
/// * [`diagonal2x3()`]
/// * [`diagonal3x2()`]
/// * [`diagonal3x3()`]
/// * [`diagonal3x4()`]
/// * [`diagonal4x2()`]
/// * [`diagonal4x3()`]
/// * [`diagonal4x4()`]
pub fn diagonal2x4<T: Number>(v: &TVec2<T>) -> TMat2x4<T> {
TMat2x4::from_partial_diagonal(v.as_slice())
}
@ -55,14 +55,14 @@ pub fn diagonal2x4<T: Number>(v: &TVec2<T>) -> TMat2x4<T> {
///
/// # See also:
///
/// * [`diagonal2x2`](fn.diagonal2x2.html)
/// * [`diagonal2x3`](fn.diagonal2x3.html)
/// * [`diagonal2x4`](fn.diagonal2x4.html)
/// * [`diagonal3x3`](fn.diagonal3x3.html)
/// * [`diagonal3x4`](fn.diagonal3x4.html)
/// * [`diagonal4x2`](fn.diagonal4x2.html)
/// * [`diagonal4x3`](fn.diagonal4x3.html)
/// * [`diagonal4x4`](fn.diagonal4x4.html)
/// * [`diagonal2x2()`]
/// * [`diagonal2x3()`]
/// * [`diagonal2x4()`]
/// * [`diagonal3x3()`]
/// * [`diagonal3x4()`]
/// * [`diagonal4x2()`]
/// * [`diagonal4x3()`]
/// * [`diagonal4x4()`]
pub fn diagonal3x2<T: Number>(v: &TVec2<T>) -> TMat3x2<T> {
TMat3x2::from_partial_diagonal(v.as_slice())
}
@ -71,14 +71,14 @@ pub fn diagonal3x2<T: Number>(v: &TVec2<T>) -> TMat3x2<T> {
///
/// # See also:
///
/// * [`diagonal2x2`](fn.diagonal2x2.html)
/// * [`diagonal2x3`](fn.diagonal2x3.html)
/// * [`diagonal2x4`](fn.diagonal2x4.html)
/// * [`diagonal3x2`](fn.diagonal3x2.html)
/// * [`diagonal3x4`](fn.diagonal3x4.html)
/// * [`diagonal4x2`](fn.diagonal4x2.html)
/// * [`diagonal4x3`](fn.diagonal4x3.html)
/// * [`diagonal4x4`](fn.diagonal4x4.html)
/// * [`diagonal2x2()`]
/// * [`diagonal2x3()`]
/// * [`diagonal2x4()`]
/// * [`diagonal3x2()`]
/// * [`diagonal3x4()`]
/// * [`diagonal4x2()`]
/// * [`diagonal4x3()`]
/// * [`diagonal4x4()`]
pub fn diagonal3x3<T: Number>(v: &TVec3<T>) -> TMat3<T> {
TMat3::from_diagonal(v)
}
@ -87,14 +87,14 @@ pub fn diagonal3x3<T: Number>(v: &TVec3<T>) -> TMat3<T> {
///
/// # See also:
///
/// * [`diagonal2x2`](fn.diagonal2x2.html)
/// * [`diagonal2x3`](fn.diagonal2x3.html)
/// * [`diagonal2x4`](fn.diagonal2x4.html)
/// * [`diagonal3x2`](fn.diagonal3x2.html)
/// * [`diagonal3x3`](fn.diagonal3x3.html)
/// * [`diagonal4x2`](fn.diagonal4x2.html)
/// * [`diagonal4x3`](fn.diagonal4x3.html)
/// * [`diagonal4x4`](fn.diagonal4x4.html)
/// * [`diagonal2x2()`]
/// * [`diagonal2x3()`]
/// * [`diagonal2x4()`]
/// * [`diagonal3x2()`]
/// * [`diagonal3x3()`]
/// * [`diagonal4x2()`]
/// * [`diagonal4x3()`]
/// * [`diagonal4x4()`]
pub fn diagonal3x4<T: Number>(v: &TVec3<T>) -> TMat3x4<T> {
TMat3x4::from_partial_diagonal(v.as_slice())
}
@ -103,14 +103,14 @@ pub fn diagonal3x4<T: Number>(v: &TVec3<T>) -> TMat3x4<T> {
///
/// # See also:
///
/// * [`diagonal2x2`](fn.diagonal2x2.html)
/// * [`diagonal2x3`](fn.diagonal2x3.html)
/// * [`diagonal2x4`](fn.diagonal2x4.html)
/// * [`diagonal3x2`](fn.diagonal3x2.html)
/// * [`diagonal3x3`](fn.diagonal3x3.html)
/// * [`diagonal3x4`](fn.diagonal3x4.html)
/// * [`diagonal4x3`](fn.diagonal4x3.html)
/// * [`diagonal4x4`](fn.diagonal4x4.html)
/// * [`diagonal2x2()`]
/// * [`diagonal2x3()`]
/// * [`diagonal2x4()`]
/// * [`diagonal3x2()`]
/// * [`diagonal3x3()`]
/// * [`diagonal3x4()`]
/// * [`diagonal4x3()`]
/// * [`diagonal4x4()`]
pub fn diagonal4x2<T: Number>(v: &TVec2<T>) -> TMat4x2<T> {
TMat4x2::from_partial_diagonal(v.as_slice())
}
@ -119,14 +119,14 @@ pub fn diagonal4x2<T: Number>(v: &TVec2<T>) -> TMat4x2<T> {
///
/// # See also:
///
/// * [`diagonal2x2`](fn.diagonal2x2.html)
/// * [`diagonal2x3`](fn.diagonal2x3.html)
/// * [`diagonal2x4`](fn.diagonal2x4.html)
/// * [`diagonal3x2`](fn.diagonal3x2.html)
/// * [`diagonal3x3`](fn.diagonal3x3.html)
/// * [`diagonal3x4`](fn.diagonal3x4.html)
/// * [`diagonal4x2`](fn.diagonal4x2.html)
/// * [`diagonal4x4`](fn.diagonal4x4.html)
/// * [`diagonal2x2()`]
/// * [`diagonal2x3()`]
/// * [`diagonal2x4()`]
/// * [`diagonal3x2()`]
/// * [`diagonal3x3()`]
/// * [`diagonal3x4()`]
/// * [`diagonal4x2()`]
/// * [`diagonal4x4()`]
pub fn diagonal4x3<T: Number>(v: &TVec3<T>) -> TMat4x3<T> {
TMat4x3::from_partial_diagonal(v.as_slice())
}
@ -135,14 +135,14 @@ pub fn diagonal4x3<T: Number>(v: &TVec3<T>) -> TMat4x3<T> {
///
/// # See also:
///
/// * [`diagonal2x2`](fn.diagonal2x2.html)
/// * [`diagonal2x3`](fn.diagonal2x3.html)
/// * [`diagonal2x4`](fn.diagonal2x4.html)
/// * [`diagonal3x2`](fn.diagonal3x2.html)
/// * [`diagonal3x3`](fn.diagonal3x3.html)
/// * [`diagonal3x4`](fn.diagonal3x4.html)
/// * [`diagonal4x2`](fn.diagonal4x2.html)
/// * [`diagonal4x3`](fn.diagonal4x3.html)
/// * [`diagonal2x2()`]
/// * [`diagonal2x3()`]
/// * [`diagonal2x4()`]
/// * [`diagonal3x2()`]
/// * [`diagonal3x3()`]
/// * [`diagonal3x4()`]
/// * [`diagonal4x2()`]
/// * [`diagonal4x3()`]
pub fn diagonal4x4<T: Number>(v: &TVec4<T>) -> TMat4<T> {
TMat4::from_diagonal(v)
}
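A minimal usage sketch (not part of the diff above) for the diagonal constructors documented here, assuming `nalgebra_glm` is imported as `glm` and `f64` scalars:

```rust
use nalgebra_glm as glm;

fn main() {
    let v = glm::vec3(1.0_f64, 2.0, 3.0);

    // Square matrix with `v` on its diagonal.
    let m3 = glm::diagonal3x3(&v);
    assert_eq!(m3[(1, 1)], 2.0);
    assert_eq!(m3[(0, 1)], 0.0);

    // Rectangular matrix: `v` fills the partial diagonal, everything else is zero.
    let m43 = glm::diagonal4x3(&v);
    assert_eq!(m43[(2, 2)], 3.0);
    assert_eq!(m43[(3, 0)], 0.0);
}
```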

View File

@ -5,7 +5,7 @@ use crate::RealNumber;
///
/// # See also:
///
/// * [`distance`](fn.distance.html)
/// * [`distance()`](crate::distance)
pub fn distance2<T: RealNumber, const D: usize>(p0: &TVec<T, D>, p1: &TVec<T, D>) -> T {
(p1 - p0).norm_squared()
}
@ -14,9 +14,9 @@ pub fn distance2<T: RealNumber, const D: usize>(p0: &TVec<T, D>, p1: &TVec<T, D>
///
/// # See also:
///
/// * [`l1_norm`](fn.l1_norm.html)
/// * [`l2_distance`](fn.l2_distance.html)
/// * [`l2_norm`](fn.l2_norm.html)
/// * [`l1_norm()`]
/// * [`l2_distance()`]
/// * [`l2_norm()`]
pub fn l1_distance<T: RealNumber, const D: usize>(x: &TVec<T, D>, y: &TVec<T, D>) -> T {
l1_norm(&(y - x))
}
@ -28,27 +28,27 @@ pub fn l1_distance<T: RealNumber, const D: usize>(x: &TVec<T, D>, y: &TVec<T, D>
///
/// # See also:
///
/// * [`l1_distance`](fn.l1_distance.html)
/// * [`l2_distance`](fn.l2_distance.html)
/// * [`l2_norm`](fn.l2_norm.html)
/// * [`l1_distance()`]
/// * [`l2_distance()`]
/// * [`l2_norm()`]
pub fn l1_norm<T: RealNumber, const D: usize>(v: &TVec<T, D>) -> T {
crate::comp_add(&v.abs())
}
/// The l2-norm of `x - y`.
///
/// This is the same value as returned by [`length2`](fn.length2.html) and
/// [`magnitude2`](fn.magnitude2.html).
/// This is the same value as returned by [`length2()`] and
/// [`magnitude2()`].
///
/// # See also:
///
/// * [`l1_distance`](fn.l1_distance.html)
/// * [`l1_norm`](fn.l1_norm.html)
/// * [`l2_norm`](fn.l2_norm.html)
/// * [`length`](fn.length.html)
/// * [`length2`](fn.length2.html)
/// * [`magnitude`](fn.magnitude.html)
/// * [`magnitude2`](fn.magnitude2.html)
/// * [`l1_distance()`]
/// * [`l1_norm()`]
/// * [`l2_norm()`]
/// * [`length()`](crate::length)
/// * [`length2()`]
/// * [`magnitude()`](crate::magnitude)
/// * [`magnitude2()`]
pub fn l2_distance<T: RealNumber, const D: usize>(x: &TVec<T, D>, y: &TVec<T, D>) -> T {
l2_norm(&(y - x))
}
@ -57,33 +57,33 @@ pub fn l2_distance<T: RealNumber, const D: usize>(x: &TVec<T, D>, y: &TVec<T, D>
///
/// This is also known as the Euclidean norm.
///
/// This is the same value as returned by [`length`](fn.length.html) and
/// [`magnitude`](fn.magnitude.html).
/// This is the same value as returned by [`length()`](crate::length) and
/// [`magnitude()`](crate::magnitude).
///
/// # See also:
///
/// * [`l1_distance`](fn.l1_distance.html)
/// * [`l1_norm`](fn.l1_norm.html)
/// * [`l2_distance`](fn.l2_distance.html)
/// * [`length`](fn.length.html)
/// * [`length2`](fn.length2.html)
/// * [`magnitude`](fn.magnitude.html)
/// * [`magnitude2`](fn.magnitude2.html)
/// * [`l1_distance()`]
/// * [`l1_norm()`]
/// * [`l2_distance()`]
/// * [`length()`](crate::length)
/// * [`length2()`]
/// * [`magnitude()`](crate::magnitude)
/// * [`magnitude2()`]
pub fn l2_norm<T: RealNumber, const D: usize>(x: &TVec<T, D>) -> T {
x.norm()
}
/// The squared magnitude of `x`.
///
/// A synonym for [`magnitude2`](fn.magnitude2.html).
/// A synonym for [`magnitude2()`].
///
/// # See also:
///
/// * [`distance`](fn.distance.html)
/// * [`distance2`](fn.distance2.html)
/// * [`length`](fn.length.html)
/// * [`magnitude`](fn.magnitude.html)
/// * [`magnitude2`](fn.magnitude2.html)
/// * [`distance()`](crate::distance)
/// * [`distance2()`]
/// * [`length()`](crate::length)
/// * [`magnitude()`](crate::magnitude)
/// * [`magnitude2()`]
pub fn length2<T: RealNumber, const D: usize>(x: &TVec<T, D>) -> T {
x.norm_squared()
}
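A small sketch (not part of the diff above) showing how the norm and distance helpers in this file relate to each other, assuming `nalgebra_glm` as `glm`; `magnitude2()`, defined just below, returns the same value as `length2()`:

```rust
use nalgebra_glm as glm;

fn main() {
    let x = glm::vec2(3.0_f64, 4.0);
    let origin = glm::vec2(0.0, 0.0);

    assert_eq!(glm::l1_distance(&origin, &x), 7.0); // |3| + |4|
    assert_eq!(glm::l2_distance(&origin, &x), 5.0); // Euclidean distance
    assert_eq!(glm::l2_norm(&x), 5.0);
    assert_eq!(glm::length2(&x), 25.0);             // squared magnitude
    assert_eq!(glm::magnitude2(&x), 25.0);          // same value, different name
}
```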
@ -94,10 +94,10 @@ pub fn length2<T: RealNumber, const D: usize>(x: &TVec<T, D>) -> T {
///
/// # See also:
///
/// * [`distance`](fn.distance.html)
/// * [`distance2`](fn.distance2.html)
/// * [`length2`](fn.length2.html)
/// * [`magnitude`](fn.magnitude.html)
/// * [`distance()`](crate::distance)
/// * [`distance2()`]
/// * [`length2()`]
/// * [`magnitude()`](crate::magnitude)
/// * [`nalgebra::norm_squared`](../nalgebra/fn.norm_squared.html)
pub fn magnitude2<T: RealNumber, const D: usize>(x: &TVec<T, D>) -> T {
x.norm_squared()

View File

@ -4,11 +4,11 @@ use crate::aliases::TVec;
/// The dot product of the normalized version of `x` and `y`.
///
/// This is currently the same as [`normalize_dot`](fn.normalize_dot.html).
/// This is currently the same as [`normalize_dot()`]
///
/// # See also:
///
/// * [`normalize_dot`](fn.normalize_dot.html`)
/// * [`normalize_dot()`]
pub fn fast_normalize_dot<T: RealNumber, const D: usize>(x: &TVec<T, D>, y: &TVec<T, D>) -> T {
// XXX: improve those.
x.normalize().dot(&y.normalize())
@ -18,7 +18,7 @@ pub fn fast_normalize_dot<T: RealNumber, const D: usize>(x: &TVec<T, D>, y: &TVe
///
/// # See also:
///
/// * [`fast_normalize_dot`](fn.fast_normalize_dot.html`)
/// * [`fast_normalize_dot()`]
pub fn normalize_dot<T: RealNumber, const D: usize>(x: &TVec<T, D>, y: &TVec<T, D>) -> T {
// XXX: improve those.
x.normalize().dot(&y.normalize())
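A usage sketch (not part of the diff above): `normalize_dot()` returns the cosine of the angle between its arguments, since both are normalized before the dot product; assumes `nalgebra_glm` as `glm`:

```rust
use nalgebra_glm as glm;

fn main() {
    let x = glm::vec2(3.0_f64, 0.0);
    let y = glm::vec2(0.0, 4.0);

    assert_eq!(glm::normalize_dot(&x, &y), 0.0); // orthogonal vectors
    assert_eq!(glm::normalize_dot(&x, &x), 1.0); // parallel vectors
}
```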

View File

@ -7,11 +7,11 @@ use crate::traits::{Number, RealNumber};
///
/// # See also:
///
/// * [`scaling`](fn.scaling.html)
/// * [`translation`](fn.translation.html)
/// * [`rotation2d`](fn.rotation2d.html)
/// * [`scaling2d`](fn.scaling2d.html)
/// * [`translation2d`](fn.translation2d.html)
/// * [`scaling()`]
/// * [`translation()`]
/// * [`rotation2d()`]
/// * [`scaling2d()`]
/// * [`translation2d()`]
pub fn rotation<T: RealNumber>(angle: T, v: &TVec3<T>) -> TMat4<T> {
Rotation3::from_axis_angle(&Unit::new_normalize(*v), angle).to_homogeneous()
}
@ -20,11 +20,11 @@ pub fn rotation<T: RealNumber>(angle: T, v: &TVec3<T>) -> TMat4<T> {
///
/// # See also:
///
/// * [`rotation`](fn.rotation.html)
/// * [`translation`](fn.translation.html)
/// * [`rotation2d`](fn.rotation2d.html)
/// * [`scaling2d`](fn.scaling2d.html)
/// * [`translation2d`](fn.translation2d.html)
/// * [`rotation()`]
/// * [`translation()`]
/// * [`rotation2d()`]
/// * [`scaling2d()`]
/// * [`translation2d()`]
pub fn scaling<T: Number>(v: &TVec3<T>) -> TMat4<T> {
TMat4::new_nonuniform_scaling(v)
}
@ -33,11 +33,11 @@ pub fn scaling<T: Number>(v: &TVec3<T>) -> TMat4<T> {
///
/// # See also:
///
/// * [`rotation`](fn.rotation.html)
/// * [`scaling`](fn.scaling.html)
/// * [`rotation2d`](fn.rotation2d.html)
/// * [`scaling2d`](fn.scaling2d.html)
/// * [`translation2d`](fn.translation2d.html)
/// * [`rotation()`]
/// * [`scaling()`]
/// * [`rotation2d()`]
/// * [`scaling2d()`]
/// * [`translation2d()`]
pub fn translation<T: Number>(v: &TVec3<T>) -> TMat4<T> {
TMat4::new_translation(v)
}
@ -46,11 +46,11 @@ pub fn translation<T: Number>(v: &TVec3<T>) -> TMat4<T> {
///
/// # See also:
///
/// * [`rotation`](fn.rotation.html)
/// * [`scaling`](fn.scaling.html)
/// * [`translation`](fn.translation.html)
/// * [`scaling2d`](fn.scaling2d.html)
/// * [`translation2d`](fn.translation2d.html)
/// * [`rotation()`]
/// * [`scaling()`]
/// * [`translation()`]
/// * [`scaling2d()`]
/// * [`translation2d()`]
pub fn rotation2d<T: RealNumber>(angle: T) -> TMat3<T> {
Rotation2::new(angle).to_homogeneous()
}
@ -59,11 +59,11 @@ pub fn rotation2d<T: RealNumber>(angle: T) -> TMat3<T> {
///
/// # See also:
///
/// * [`rotation`](fn.rotation.html)
/// * [`scaling`](fn.scaling.html)
/// * [`translation`](fn.translation.html)
/// * [`rotation2d`](fn.rotation2d.html)
/// * [`translation2d`](fn.translation2d.html)
/// * [`rotation()`]
/// * [`scaling()`]
/// * [`translation()`]
/// * [`rotation2d()`]
/// * [`translation2d()`]
pub fn scaling2d<T: Number>(v: &TVec2<T>) -> TMat3<T> {
TMat3::new_nonuniform_scaling(v)
}
@ -72,11 +72,11 @@ pub fn scaling2d<T: Number>(v: &TVec2<T>) -> TMat3<T> {
///
/// # See also:
///
/// * [`rotation`](fn.rotation.html)
/// * [`scaling`](fn.scaling.html)
/// * [`translation`](fn.translation.html)
/// * [`rotation2d`](fn.rotation2d.html)
/// * [`scaling2d`](fn.scaling2d.html)
/// * [`rotation()`]
/// * [`scaling()`]
/// * [`translation()`]
/// * [`rotation2d()`]
/// * [`scaling2d()`]
pub fn translation2d<T: Number>(v: &TVec2<T>) -> TMat3<T> {
TMat3::new_translation(v)
}
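A composition sketch (not part of the diff above) for the homogeneous transforms built by the functions in this file, assuming `nalgebra_glm` as `glm`; matrix products apply right-to-left to a point in homogeneous coordinates:

```rust
use nalgebra_glm as glm;

fn main() {
    let t = glm::translation(&glm::vec3(1.0_f64, 2.0, 3.0));
    let r = glm::rotation(std::f64::consts::FRAC_PI_2, &glm::vec3(0.0, 0.0, 1.0));
    let s = glm::scaling(&glm::vec3(2.0, 2.0, 2.0));

    // Scale, then rotate, then translate.
    let model = t * r * s;

    let p = glm::vec4(1.0_f64, 0.0, 0.0, 1.0);
    let q = model * p;
    assert!((q - glm::vec4(1.0, 4.0, 3.0, 1.0)).norm() < 1.0e-9);
}
```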

View File

@ -7,11 +7,11 @@ use crate::traits::{Number, RealNumber};
///
/// # See also:
///
/// * [`rotation2d`](fn.rotation2d.html)
/// * [`scale2d`](fn.scale2d.html)
/// * [`scaling2d`](fn.scaling2d.html)
/// * [`translate2d`](fn.translate2d.html)
/// * [`translation2d`](fn.translation2d.html)
/// * [`rotation2d()`](crate::rotation2d)
/// * [`scale2d()`]
/// * [`scaling2d()`](crate::scaling2d)
/// * [`translate2d()`]
/// * [`translation2d()`](crate::translation2d)
pub fn rotate2d<T: RealNumber>(m: &TMat3<T>, angle: T) -> TMat3<T> {
m * UnitComplex::new(angle).to_homogeneous()
}
@ -20,11 +20,11 @@ pub fn rotate2d<T: RealNumber>(m: &TMat3<T>, angle: T) -> TMat3<T> {
///
/// # See also:
///
/// * [`rotate2d`](fn.rotate2d.html)
/// * [`rotation2d`](fn.rotation2d.html)
/// * [`scaling2d`](fn.scaling2d.html)
/// * [`translate2d`](fn.translate2d.html)
/// * [`translation2d`](fn.translation2d.html)
/// * [`rotate2d()`]
/// * [`rotation2d()`](crate::rotation2d)
/// * [`scaling2d()`](crate::scaling2d)
/// * [`translate2d()`]
/// * [`translation2d()`](crate::translation2d)
pub fn scale2d<T: Number>(m: &TMat3<T>, v: &TVec2<T>) -> TMat3<T> {
m.prepend_nonuniform_scaling(v)
}
@ -33,11 +33,11 @@ pub fn scale2d<T: Number>(m: &TMat3<T>, v: &TVec2<T>) -> TMat3<T> {
///
/// # See also:
///
/// * [`rotate2d`](fn.rotate2d.html)
/// * [`rotation2d`](fn.rotation2d.html)
/// * [`scale2d`](fn.scale2d.html)
/// * [`scaling2d`](fn.scaling2d.html)
/// * [`translation2d`](fn.translation2d.html)
/// * [`rotate2d()`]
/// * [`rotation2d()`](crate::rotation2d)
/// * [`scale2d()`]
/// * [`scaling2d()`](crate::scaling2d)
/// * [`translation2d()`](crate::translation2d)
pub fn translate2d<T: Number>(m: &TMat3<T>, v: &TVec2<T>) -> TMat3<T> {
m.prepend_translation(v)
}
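A sketch (not part of the diff above) of the `*2d` functions in this file: each one prepends a transform to an existing homogeneous matrix, so the transform added last is applied to points first. Assumes `nalgebra_glm` as `glm`:

```rust
use nalgebra_glm as glm;

fn main() {
    let m = glm::TMat3::<f64>::identity();
    let m = glm::translate2d(&m, &glm::vec2(1.0, 0.0));
    let m = glm::rotate2d(&m, std::f64::consts::FRAC_PI_2);
    let m = glm::scale2d(&m, &glm::vec2(2.0, 2.0));

    // The point is scaled, then rotated, then translated.
    let p = glm::vec3(1.0_f64, 0.0, 1.0); // homogeneous 2D point
    let q = m * p;
    assert!((q - glm::vec3(1.0, 2.0, 1.0)).norm() < 1.0e-9);
}
```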

View File

@ -7,16 +7,16 @@ use crate::traits::Number;
///
/// # See also:
///
/// * [`are_collinear2d`](fn.are_collinear2d.html)
/// * [`are_collinear2d()`]
pub fn are_collinear<T: Number>(v0: &TVec3<T>, v1: &TVec3<T>, epsilon: T) -> bool {
is_null(&v0.cross(v1), epsilon)
abs_diff_eq!(v0.cross(v1), TVec3::<T>::zeros(), epsilon = epsilon)
}
/// Returns `true` if two 2D vectors are collinear (up to an epsilon).
///
/// # See also:
///
/// * [`are_collinear`](fn.are_collinear.html)
/// * [`are_collinear()`]
pub fn are_collinear2d<T: Number>(v0: &TVec2<T>, v1: &TVec2<T>, epsilon: T) -> bool {
abs_diff_eq!(v0.perp(v1), T::zero(), epsilon = epsilon)
}
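A quick sketch (not part of the diff above) of the collinearity checks, assuming `nalgebra_glm` as `glm`:

```rust
use nalgebra_glm as glm;

fn main() {
    let v0 = glm::vec3(1.0_f64, 2.0, 0.0);
    let v1 = glm::vec3(2.0, 4.0, 0.0); // v1 = 2 * v0
    assert!(glm::are_collinear(&v0, &v1, 1.0e-6));

    let u0 = glm::vec2(1.0_f64, 0.0);
    let u1 = glm::vec2(0.0, 1.0);
    assert!(!glm::are_collinear2d(&u0, &u1, 1.0e-6));
}
```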
@ -41,10 +41,13 @@ pub fn is_comp_null<T: Number, const D: usize>(v: &TVec<T, D>, epsilon: T) -> TV
/// Returns `true` if `v` has a magnitude of 1 (up to an epsilon).
pub fn is_normalized<T: RealNumber, const D: usize>(v: &TVec<T, D>, epsilon: T) -> bool {
abs_diff_eq!(v.norm_squared(), T::one(), epsilon = epsilon * epsilon)
// sqrt(1 + epsilon_{norm²}) = 1 + epsilon_{norm}
// ==> epsilon_{norm²} = epsilon_{norm}² + 2*epsilon_{norm}
// For small epsilon, epsilon² is basically zero, so use 2*epsilon.
abs_diff_eq!(v.norm_squared(), T::one(), epsilon = epsilon + epsilon)
}
/// Returns `true` if `v` is zero (up to an epsilon).
pub fn is_null<T: Number, const D: usize>(v: &TVec<T, D>, epsilon: T) -> bool {
abs_diff_eq!(*v, TVec::<T, D>::zeros(), epsilon = epsilon)
pub fn is_null<T: RealNumber, const D: usize>(v: &TVec<T, D>, epsilon: T) -> bool {
abs_diff_eq!(v.norm_squared(), T::zero(), epsilon = epsilon * epsilon)
}
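With the change above, `is_null()` tests the vector's magnitude against `epsilon` rather than each component, so a vector can pass the component-wise `is_comp_null()` test while failing `is_null()`. A sketch (not part of the diff), assuming `nalgebra_glm` as `glm` and the usual component-wise behavior of `is_comp_null()` (its body is not shown in this hunk):

```rust
use nalgebra_glm as glm;

fn main() {
    let eps = 0.1_f64;

    // Each component is within eps of zero, but the magnitude (~0.127) is not.
    let v = glm::vec2(0.09_f64, 0.09);
    assert!(glm::all(&glm::is_comp_null(&v, eps)));
    assert!(!glm::is_null(&v, eps));

    // A genuinely tiny vector passes both tests.
    let w = glm::vec2(1.0e-3_f64, 1.0e-3);
    assert!(glm::is_null(&w, eps));

    // Unit vectors pass the (also reworked) normalization test.
    assert!(glm::is_normalized(&glm::vec2(1.0_f64, 0.0), 1.0e-6));
}
```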

View File

@ -38,7 +38,7 @@
* All function names use `snake_case`, which is the Rust convention.
* All type names use `CamelCase`, which is the Rust convention.
* All function arguments, except for scalars, are all passed by-reference.
* The most generic vector and matrix types are [`TMat`](type.TMat.html) and [`TVec`](type.TVec.html) instead of `mat` and `vec`.
* The most generic vector and matrix types are [`TMat`] and [`TVec`] instead of `mat` and `vec`.
* Some feature are not yet implemented and should be added in the future. In particular, no packing
functions are available.
* A few features are not implemented and will never be. This includes functions related to color
@ -47,17 +47,17 @@
In addition, because Rust does not allow function overloading, all functions must be given a unique name.
Here are a few rules chosen arbitrarily for **nalgebra-glm**:
* Functions operating in 2d will usually end with the `2d` suffix, e.g., [`glm::rotate2d`](fn.rotate2d.html) is for 2D while [`glm::rotate`](fn.rotate.html) is for 3D.
* Functions operating on vectors will often end with the `_vec` suffix, possibly followed by the dimension of vector, e.g., [`glm::rotate_vec2`](fn.rotate_vec2.html).
* Every function related to quaternions start with the `quat_` prefix, e.g., [`glm::quat_dot(q1, q2)`](fn.quat_dot.html).
* Functions operating in 2d will usually end with the `2d` suffix, e.g., [`glm::rotate2d()`](crate::rotate2d) is for 2D while [`glm::rotate()`](crate::rotate) is for 3D.
* Functions operating on vectors will often end with the `_vec` suffix, possibly followed by the dimension of vector, e.g., [`glm::rotate_vec2()`](crate::rotate_vec2).
* Every function related to quaternions start with the `quat_` prefix, e.g., [`glm::quat_dot(q1, q2)`](crate::quat_dot).
* All the conversion functions have unique names as described [below](#conversions).
### Vector and matrix construction
Vectors, matrices, and quaternions can be constructed using several approaches:
* Using functions with the same name as their type in lower-case. For example [`glm::vec3(x, y, z)`](fn.vec3.html) will create a 3D vector.
* Using functions with the same name as their type in lower-case. For example [`glm::vec3(x, y, z)`](crate::vec3) will create a 3D vector.
* Using the `::new` constructor. For example [`Vec3::new(x, y, z)`](../nalgebra/base/type.OMatrix.html#method.new-27) will create a 3D vector.
* Using the functions prefixed by `make_` to build a vector or a matrix from a slice. For example [`glm::make_vec3(&[x, y, z])`](fn.make_vec3.html) will create a 3D vector.
* Using the functions prefixed by `make_` to build a vector or a matrix from a slice. For example [`glm::make_vec3(&[x, y, z])`](crate::make_vec3) will create a 3D vector.
Keep in mind that constructing a matrix using this type of function requires its components to be arranged in column-major order on the slice.
* Using a geometric construction function. For example [`glm::rotation(angle, axis)`](fn.rotation.html) will build a 4x4 homogeneous rotation matrix from an angle (in radians) and an axis.
* Using a geometric construction function. For example [`glm::rotation(angle, axis)`](crate::rotation) will build a 4x4 homogeneous rotation matrix from an angle (in radians) and an axis.
* Using swizzling and conversions as described in the next sections.
### Swizzling
Vector swizzling is a native feature of **nalgebra** itself. Therefore, you can use it with all
@ -75,9 +75,9 @@
It is often useful to convert one algebraic type to another. There are two main approaches for converting
between types in `nalgebra-glm`:
* Using function with the form `type1_to_type2` in order to convert an instance of `type1` into an instance of `type2`.
For example [`glm::mat3_to_mat4(m)`](fn.mat3_to_mat4.html) will convert the 3x3 matrix `m` to a 4x4 matrix by appending one column on the right
For example [`glm::mat3_to_mat4(m)`](crate::mat3_to_mat4) will convert the 3x3 matrix `m` to a 4x4 matrix by appending one column on the right
and one row on the left. The new row and column are filled with 0, except for the diagonal element, which is set to 1.
* Using one of the [`convert`](fn.convert.html), [`try_convert`](fn.try_convert.html), or [`convert_unchecked`](fn.convert_unchecked.html) functions.
* Using one of the [`convert`](crate::convert), [`try_convert`](crate::try_convert), or [`convert_unchecked`](crate::convert_unchecked) functions.
These functions are directly re-exported from nalgebra and are extremely versatile:
1. The `convert` function can convert any type (especially geometric types from nalgebra like `Isometry3`) into another algebraic type which is equivalent but more general. For example,
`let sim: Similarity3<_> = na::convert(isometry)` will convert an `Isometry3` into a `Similarity3`.
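A small sketch (not part of the diff above) of the construction and conversion conventions described in this section, assuming `nalgebra_glm` as `glm`:

```rust
use nalgebra_glm as glm;

fn main() {
    // Constructor function named after the type, and `make_*` from a slice.
    let v = glm::vec3(1.0_f64, 2.0, 3.0);
    let w = glm::make_vec3(&[1.0_f64, 2.0, 3.0]);
    assert_eq!(v, w);

    // `type1_to_type2` conversion: 3x3 -> 4x4 with an extra row/column,
    // zero everywhere except the new diagonal element, which is 1.
    let m3 = glm::mat3(1.0_f64, 0.0, 0.0,
                       0.0, 1.0, 0.0,
                       0.0, 0.0, 1.0);
    let m4 = glm::mat3_to_mat4(&m3);
    assert_eq!(m4[(3, 3)], 1.0);
    assert_eq!(m4[(3, 0)], 0.0);
}
```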

View File

@ -1,18 +1,17 @@
use approx::AbsDiffEq;
use num::{Bounded, Signed};
use core::cmp::PartialOrd;
use na::Scalar;
use simba::scalar::{ClosedAdd, ClosedMul, ClosedSub, RealField};
use simba::scalar::{ClosedAddAssign, ClosedMulAssign, ClosedSubAssign, RealField};
/// A number that can either be an integer or a float.
pub trait Number:
Scalar
+ Copy
+ PartialOrd
+ ClosedAdd
+ ClosedSub
+ ClosedMul
+ ClosedAddAssign
+ ClosedSubAssign
+ ClosedMulAssign
+ AbsDiffEq<Epsilon = Self>
+ Signed
+ Bounded
@ -23,9 +22,9 @@ impl<
T: Scalar
+ Copy
+ PartialOrd
+ ClosedAdd
+ ClosedSub
+ ClosedMul
+ ClosedAddAssign
+ ClosedSubAssign
+ ClosedMulAssign
+ AbsDiffEq<Epsilon = Self>
+ Signed
+ Bounded,

View File

@ -1,5 +1,3 @@
use na;
use crate::aliases::TVec;
use crate::RealNumber;

View File

@ -16,8 +16,8 @@ use crate::traits::Number;
///
/// # See also:
///
/// * [`any`](fn.any.html)
/// * [`not`](fn.not.html)
/// * [`any()`]
/// * [`not()`]
pub fn all<const D: usize>(v: &TVec<bool, D>) -> bool {
v.iter().all(|x| *x)
}
@ -40,8 +40,8 @@ pub fn all<const D: usize>(v: &TVec<bool, D>) -> bool {
///
/// # See also:
///
/// * [`all`](fn.all.html)
/// * [`not`](fn.not.html)
/// * [`all()`]
/// * [`not()`]
pub fn any<const D: usize>(v: &TVec<bool, D>) -> bool {
v.iter().any(|x| *x)
}
@ -59,12 +59,12 @@ pub fn any<const D: usize>(v: &TVec<bool, D>) -> bool {
///
/// # See also:
///
/// * [`greater_than`](fn.greater_than.html)
/// * [`greater_than_equal`](fn.greater_than_equal.html)
/// * [`less_than`](fn.less_than.html)
/// * [`less_than_equal`](fn.less_than_equal.html)
/// * [`not`](fn.not.html)
/// * [`not_equal`](fn.not_equal.html)
/// * [`greater_than()`]
/// * [`greater_than_equal()`]
/// * [`less_than()`]
/// * [`less_than_equal()`]
/// * [`not()`]
/// * [`not_equal()`]
pub fn equal<T: Number, const D: usize>(x: &TVec<T, D>, y: &TVec<T, D>) -> TVec<bool, D> {
x.zip_map(y, |x, y| x == y)
}
@ -82,12 +82,12 @@ pub fn equal<T: Number, const D: usize>(x: &TVec<T, D>, y: &TVec<T, D>) -> TVec<
///
/// # See also:
///
/// * [`equal`](fn.equal.html)
/// * [`greater_than_equal`](fn.greater_than_equal.html)
/// * [`less_than`](fn.less_than.html)
/// * [`less_than_equal`](fn.less_than_equal.html)
/// * [`not`](fn.not.html)
/// * [`not_equal`](fn.not_equal.html)
/// * [`equal()`]
/// * [`greater_than_equal()`]
/// * [`less_than()`]
/// * [`less_than_equal()`]
/// * [`not()`]
/// * [`not_equal()`]
pub fn greater_than<T: Number, const D: usize>(x: &TVec<T, D>, y: &TVec<T, D>) -> TVec<bool, D> {
x.zip_map(y, |x, y| x > y)
}
@ -105,12 +105,12 @@ pub fn greater_than<T: Number, const D: usize>(x: &TVec<T, D>, y: &TVec<T, D>) -
///
/// # See also:
///
/// * [`equal`](fn.equal.html)
/// * [`greater_than`](fn.greater_than.html)
/// * [`less_than`](fn.less_than.html)
/// * [`less_than_equal`](fn.less_than_equal.html)
/// * [`not`](fn.not.html)
/// * [`not_equal`](fn.not_equal.html)
/// * [`equal()`]
/// * [`greater_than()`]
/// * [`less_than()`]
/// * [`less_than_equal()`]
/// * [`not()`]
/// * [`not_equal()`]
pub fn greater_than_equal<T: Number, const D: usize>(
x: &TVec<T, D>,
y: &TVec<T, D>,
@ -131,17 +131,17 @@ pub fn greater_than_equal<T: Number, const D: usize>(
///
/// # See also:
///
/// * [`equal`](fn.equal.html)
/// * [`greater_than`](fn.greater_than.html)
/// * [`greater_than_equal`](fn.greater_than_equal.html)
/// * [`less_than_equal`](fn.less_than_equal.html)
/// * [`not`](fn.not.html)
/// * [`not_equal`](fn.not_equal.html)
/// * [`equal()`]
/// * [`greater_than()`]
/// * [`greater_than_equal()`]
/// * [`less_than_equal()`]
/// * [`not()`]
/// * [`not_equal()`]
pub fn less_than<T: Number, const D: usize>(x: &TVec<T, D>, y: &TVec<T, D>) -> TVec<bool, D> {
x.zip_map(y, |x, y| x < y)
}
/// Component-wise `>=` comparison.
/// Component-wise `<=` comparison.
///
/// # Examples:
///
@ -154,12 +154,12 @@ pub fn less_than<T: Number, const D: usize>(x: &TVec<T, D>, y: &TVec<T, D>) -> T
///
/// # See also:
///
/// * [`equal`](fn.equal.html)
/// * [`greater_than`](fn.greater_than.html)
/// * [`greater_than_equal`](fn.greater_than_equal.html)
/// * [`less_than`](fn.less_than.html)
/// * [`not`](fn.not.html)
/// * [`not_equal`](fn.not_equal.html)
/// * [`equal()`]
/// * [`greater_than()`]
/// * [`greater_than_equal()`]
/// * [`less_than()`]
/// * [`not()`]
/// * [`not_equal()`]
pub fn less_than_equal<T: Number, const D: usize>(x: &TVec<T, D>, y: &TVec<T, D>) -> TVec<bool, D> {
x.zip_map(y, |x, y| x <= y)
}
@ -176,14 +176,14 @@ pub fn less_than_equal<T: Number, const D: usize>(x: &TVec<T, D>, y: &TVec<T, D>
///
/// # See also:
///
/// * [`all`](fn.all.html)
/// * [`any`](fn.any.html)
/// * [`equal`](fn.equal.html)
/// * [`greater_than`](fn.greater_than.html)
/// * [`greater_than_equal`](fn.greater_than_equal.html)
/// * [`less_than`](fn.less_than.html)
/// * [`less_than_equal`](fn.less_than_equal.html)
/// * [`not_equal`](fn.not_equal.html)
/// * [`all()`]
/// * [`any()`]
/// * [`equal()`]
/// * [`greater_than()`]
/// * [`greater_than_equal()`]
/// * [`less_than()`]
/// * [`less_than_equal()`]
/// * [`not_equal()`]
pub fn not<const D: usize>(v: &TVec<bool, D>) -> TVec<bool, D> {
v.map(|x| !x)
}
@ -201,12 +201,12 @@ pub fn not<const D: usize>(v: &TVec<bool, D>) -> TVec<bool, D> {
///
/// # See also:
///
/// * [`equal`](fn.equal.html)
/// * [`greater_than`](fn.greater_than.html)
/// * [`greater_than_equal`](fn.greater_than_equal.html)
/// * [`less_than`](fn.less_than.html)
/// * [`less_than_equal`](fn.less_than_equal.html)
/// * [`not`](fn.not.html)
/// * [`equal()`]
/// * [`greater_than()`]
/// * [`greater_than_equal()`]
/// * [`less_than()`]
/// * [`less_than_equal()`]
/// * [`not()`]
pub fn not_equal<T: Number, const D: usize>(x: &TVec<T, D>, y: &TVec<T, D>) -> TVec<bool, D> {
x.zip_map(y, |x, y| x != y)
}
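A sketch (not part of the diff above) combining the component-wise comparisons in this file with the `all`/`any`/`not` reductions, assuming `nalgebra_glm` as `glm`:

```rust
use nalgebra_glm as glm;

fn main() {
    let a = glm::vec3(1.0_f64, 5.0, 3.0);
    let b = glm::vec3(2.0, 5.0, 1.0);

    let lt = glm::less_than(&a, &b); // [true, false, false]
    let eq = glm::equal(&a, &b);     // [false, true, false]

    assert!(glm::any(&lt));
    assert!(!glm::all(&lt));
    assert_eq!(glm::not(&eq), glm::not_equal(&a, &b));
}
```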

View File

@ -1,47 +1,47 @@
[package]
name = "nalgebra-lapack"
version = "0.24.0"
authors = [ "Sébastien Crozet <developer@crozet.re>", "Andrew Straw <strawman@astraw.com>" ]
name = "nalgebra-lapack"
version = "0.25.0"
authors = ["Sébastien Crozet <developer@crozet.re>", "Andrew Straw <strawman@astraw.com>"]
description = "Matrix decompositions using nalgebra matrices and Lapack bindings."
description = "Matrix decompositions using nalgebra matrices and Lapack bindings."
documentation = "https://www.nalgebra.org/docs"
homepage = "https://nalgebra.org"
repository = "https://github.com/dimforge/nalgebra"
readme = "../README.md"
categories = [ "science", "mathematics" ]
keywords = [ "linear", "algebra", "matrix", "vector", "lapack" ]
license = "BSD-3-Clause"
categories = ["science", "mathematics"]
keywords = ["linear", "algebra", "matrix", "vector", "lapack"]
license = "MIT"
edition = "2018"
[badges]
maintenance = { status = "actively-developed" }
[features]
serde-serialize = [ "serde", "nalgebra/serde-serialize" ]
proptest-support = [ "nalgebra/proptest-support" ]
arbitrary = [ "nalgebra/arbitrary" ]
serde-serialize = ["serde", "nalgebra/serde-serialize"]
proptest-support = ["nalgebra/proptest-support"]
arbitrary = ["nalgebra/arbitrary"]
# For BLAS/LAPACK
default = ["netlib"]
openblas = ["lapack-src/openblas"]
netlib = ["lapack-src/netlib"]
default = ["netlib"]
openblas = ["lapack-src/openblas"]
netlib = ["lapack-src/netlib"]
accelerate = ["lapack-src/accelerate"]
intel-mkl = ["lapack-src/intel-mkl"]
intel-mkl = ["lapack-src/intel-mkl"]
[dependencies]
nalgebra = { version = "0.32", path = ".." }
num-traits = "0.2"
num-complex = { version = "0.4", default-features = false }
simba = "0.8"
serde = { version = "1.0", features = [ "derive" ], optional = true }
lapack = { version = "0.19", default-features = false }
lapack-src = { version = "0.8", default-features = false }
nalgebra = { version = "0.33", path = ".." }
num-traits = "0.2"
num-complex = { version = "0.4", default-features = false }
simba = "0.9"
serde = { version = "1.0", features = ["derive"], optional = true }
lapack = { version = "0.19", default-features = false }
lapack-src = { version = "0.8", default-features = false }
# clippy = "*"
[dev-dependencies]
nalgebra = { version = "0.32", features = [ "arbitrary", "rand" ], path = ".." }
proptest = { version = "1", default-features = false, features = ["std"] }
nalgebra = { version = "0.33", features = ["arbitrary", "rand"], path = ".." }
proptest = { version = "1", default-features = false, features = ["std"] }
quickcheck = "1"
approx = "0.5"
rand = "0.8"
approx = "0.5"
rand = "0.8"

View File

@ -15,32 +15,32 @@ use lapack;
#[cfg_attr(feature = "serde-serialize", derive(Serialize, Deserialize))]
#[cfg_attr(
feature = "serde-serialize",
serde(bound(serialize = "DefaultAllocator: Allocator<T, D>,
serde(bound(serialize = "DefaultAllocator: Allocator<D>,
OMatrix<T, D, D>: Serialize"))
)]
#[cfg_attr(
feature = "serde-serialize",
serde(bound(deserialize = "DefaultAllocator: Allocator<T, D>,
serde(bound(deserialize = "DefaultAllocator: Allocator<D>,
OMatrix<T, D, D>: Deserialize<'de>"))
)]
#[derive(Clone, Debug)]
pub struct Cholesky<T: Scalar, D: Dim>
where
DefaultAllocator: Allocator<T, D, D>,
DefaultAllocator: Allocator<D, D>,
{
l: OMatrix<T, D, D>,
}
impl<T: Scalar + Copy, D: Dim> Copy for Cholesky<T, D>
where
DefaultAllocator: Allocator<T, D, D>,
DefaultAllocator: Allocator<D, D>,
OMatrix<T, D, D>: Copy,
{
}
impl<T: CholeskyScalar + Zero, D: Dim> Cholesky<T, D>
where
DefaultAllocator: Allocator<T, D, D>,
DefaultAllocator: Allocator<D, D>,
{
/// Computes the Cholesky decomposition of the given symmetric positive-definite square
/// matrix.
@ -105,7 +105,7 @@ where
) -> Option<OMatrix<T, R2, C2>>
where
S2: Storage<T, R2, C2>,
DefaultAllocator: Allocator<T, R2, C2>,
DefaultAllocator: Allocator<R2, C2>,
{
let mut res = b.clone_owned();
if self.solve_mut(&mut res) {
@ -119,7 +119,7 @@ where
/// the unknown to be determined.
pub fn solve_mut<R2: Dim, C2: Dim>(&self, b: &mut OMatrix<T, R2, C2>) -> bool
where
DefaultAllocator: Allocator<T, R2, C2>,
DefaultAllocator: Allocator<R2, C2>,
{
let dim = self.l.nrows();

View File

@ -16,24 +16,20 @@ use lapack;
#[cfg_attr(feature = "serde-serialize", derive(Serialize, Deserialize))]
#[cfg_attr(
feature = "serde-serialize",
serde(
bound(serialize = "DefaultAllocator: Allocator<T, D, D> + Allocator<T, D>,
serde(bound(serialize = "DefaultAllocator: Allocator<D, D> + Allocator<D>,
OVector<T, D>: Serialize,
OMatrix<T, D, D>: Serialize")
)
OMatrix<T, D, D>: Serialize"))
)]
#[cfg_attr(
feature = "serde-serialize",
serde(
bound(deserialize = "DefaultAllocator: Allocator<T, D, D> + Allocator<T, D>,
serde(bound(deserialize = "DefaultAllocator: Allocator<D, D> + Allocator<D>,
OVector<T, D>: Serialize,
OMatrix<T, D, D>: Deserialize<'de>")
)
OMatrix<T, D, D>: Deserialize<'de>"))
)]
#[derive(Clone, Debug)]
pub struct Eigen<T: Scalar, D: Dim>
where
DefaultAllocator: Allocator<T, D> + Allocator<T, D, D>,
DefaultAllocator: Allocator<D> + Allocator<D, D>,
{
/// The real parts of eigenvalues of the decomposed matrix.
pub eigenvalues_re: OVector<T, D>,
@ -47,7 +43,7 @@ where
impl<T: Scalar + Copy, D: Dim> Copy for Eigen<T, D>
where
DefaultAllocator: Allocator<T, D> + Allocator<T, D, D>,
DefaultAllocator: Allocator<D> + Allocator<D, D>,
OVector<T, D>: Copy,
OMatrix<T, D, D>: Copy,
{
@ -55,7 +51,7 @@ where
impl<T: EigenScalar + RealField, D: Dim> Eigen<T, D>
where
DefaultAllocator: Allocator<T, D, D> + Allocator<T, D>,
DefaultAllocator: Allocator<D, D> + Allocator<D>,
{
/// Computes the eigenvalues and eigenvectors of the square matrix `m`.
///
@ -177,7 +173,7 @@ where
Option<Vec<OVector<T, D>>>,
)
where
DefaultAllocator: Allocator<T, D>,
DefaultAllocator: Allocator<D>,
{
let (number_of_elements, _) = self.eigenvalues_re.shape_generic();
let number_of_elements_value = number_of_elements.value();
@ -200,17 +196,16 @@ where
eigenvalues.push(self.eigenvalues_re[c].clone());
if eigenvectors.is_some() {
eigenvectors.as_mut().unwrap().push(
(&self.eigenvectors.as_ref())
.unwrap()
.column(c)
.into_owned(),
);
eigenvectors
.as_mut()
.unwrap()
.push(self.eigenvectors.as_ref().unwrap().column(c).into_owned());
}
if left_eigenvectors.is_some() {
left_eigenvectors.as_mut().unwrap().push(
(&self.left_eigenvectors.as_ref())
self.left_eigenvectors
.as_ref()
.unwrap()
.column(c)
.into_owned(),
@ -235,7 +230,7 @@ where
Option<Vec<OVector<Complex<T>, D>>>,
)
where
DefaultAllocator: Allocator<Complex<T>, D>,
DefaultAllocator: Allocator<D>,
{
match self.eigenvalues_are_real() {
true => (None, None, None),
@ -285,12 +280,12 @@ where
for r in 0..number_of_elements_value {
vec[r] = Complex::<T>::new(
(&self.eigenvectors.as_ref()).unwrap()[(r, c)].clone(),
(&self.eigenvectors.as_ref()).unwrap()[(r, c + 1)].clone(),
self.eigenvectors.as_ref().unwrap()[(r, c)].clone(),
self.eigenvectors.as_ref().unwrap()[(r, c + 1)].clone(),
);
vec_conj[r] = Complex::<T>::new(
(&self.eigenvectors.as_ref()).unwrap()[(r, c)].clone(),
(&self.eigenvectors.as_ref()).unwrap()[(r, c + 1)].clone(),
self.eigenvectors.as_ref().unwrap()[(r, c)].clone(),
self.eigenvectors.as_ref().unwrap()[(r, c + 1)].clone(),
);
}
@ -310,12 +305,12 @@ where
for r in 0..number_of_elements_value {
vec[r] = Complex::<T>::new(
(&self.left_eigenvectors.as_ref()).unwrap()[(r, c)].clone(),
(&self.left_eigenvectors.as_ref()).unwrap()[(r, c + 1)].clone(),
self.left_eigenvectors.as_ref().unwrap()[(r, c)].clone(),
self.left_eigenvectors.as_ref().unwrap()[(r, c + 1)].clone(),
);
vec_conj[r] = Complex::<T>::new(
(&self.left_eigenvectors.as_ref()).unwrap()[(r, c)].clone(),
(&self.left_eigenvectors.as_ref()).unwrap()[(r, c + 1)].clone(),
self.left_eigenvectors.as_ref().unwrap()[(r, c)].clone(),
self.left_eigenvectors.as_ref().unwrap()[(r, c + 1)].clone(),
);
}

View File

@ -30,24 +30,20 @@ use lapack;
#[cfg_attr(feature = "serde-serialize", derive(Serialize, Deserialize))]
#[cfg_attr(
feature = "serde-serialize",
serde(
bound(serialize = "DefaultAllocator: Allocator<T, D, D> + Allocator<T, D>,
serde(bound(serialize = "DefaultAllocator: Allocator<D, D> + Allocator<D>,
OVector<T, D>: Serialize,
OMatrix<T, D, D>: Serialize")
)
OMatrix<T, D, D>: Serialize"))
)]
#[cfg_attr(
feature = "serde-serialize",
serde(
bound(deserialize = "DefaultAllocator: Allocator<T, D, D> + Allocator<T, D>,
serde(bound(deserialize = "DefaultAllocator: Allocator<D, D> + Allocator<D>,
OVector<T, D>: Deserialize<'de>,
OMatrix<T, D, D>: Deserialize<'de>")
)
OMatrix<T, D, D>: Deserialize<'de>"))
)]
#[derive(Clone, Debug)]
pub struct GeneralizedEigen<T: Scalar, D: Dim>
where
DefaultAllocator: Allocator<T, D> + Allocator<T, D, D>,
DefaultAllocator: Allocator<D> + Allocator<D, D>,
{
alphar: OVector<T, D>,
alphai: OVector<T, D>,
@ -58,7 +54,7 @@ where
impl<T: Scalar + Copy, D: Dim> Copy for GeneralizedEigen<T, D>
where
DefaultAllocator: Allocator<T, D, D> + Allocator<T, D>,
DefaultAllocator: Allocator<D, D> + Allocator<D>,
OMatrix<T, D, D>: Copy,
OVector<T, D>: Copy,
{
@ -66,7 +62,7 @@ where
impl<T: GeneralizedEigenScalar + RealField + Copy, D: Dim> GeneralizedEigen<T, D>
where
DefaultAllocator: Allocator<T, D, D> + Allocator<T, D>,
DefaultAllocator: Allocator<D, D> + Allocator<D>,
{
/// Attempts to compute the generalized eigenvalues, and left and right associated eigenvectors
/// via the raw returns from LAPACK's dggev and sggev routines
@ -162,8 +158,7 @@ where
/// as columns.
pub fn eigenvectors(&self) -> (OMatrix<Complex<T>, D, D>, OMatrix<Complex<T>, D, D>)
where
DefaultAllocator:
Allocator<Complex<T>, D, D> + Allocator<Complex<T>, D> + Allocator<(Complex<T>, T), D>,
DefaultAllocator: Allocator<D, D> + Allocator<D>,
{
/*
How the eigenvectors are built up:
@ -230,7 +225,7 @@ where
#[must_use]
pub fn raw_eigenvalues(&self) -> OVector<(Complex<T>, T), D>
where
DefaultAllocator: Allocator<(Complex<T>, T), D>,
DefaultAllocator: Allocator<D>,
{
let mut out = Matrix::from_element_generic(
self.vsl.shape_generic().0,

View File

@ -12,22 +12,22 @@ use lapack;
#[cfg_attr(feature = "serde-serialize", derive(Serialize, Deserialize))]
#[cfg_attr(
feature = "serde-serialize",
serde(bound(serialize = "DefaultAllocator: Allocator<T, D, D> +
Allocator<T, DimDiff<D, U1>>,
serde(bound(serialize = "DefaultAllocator: Allocator<D, D> +
Allocator<DimDiff<D, U1>>,
OMatrix<T, D, D>: Serialize,
OVector<T, DimDiff<D, U1>>: Serialize"))
)]
#[cfg_attr(
feature = "serde-serialize",
serde(bound(deserialize = "DefaultAllocator: Allocator<T, D, D> +
Allocator<T, DimDiff<D, U1>>,
serde(bound(deserialize = "DefaultAllocator: Allocator<D, D> +
Allocator<DimDiff<D, U1>>,
OMatrix<T, D, D>: Deserialize<'de>,
OVector<T, DimDiff<D, U1>>: Deserialize<'de>"))
)]
#[derive(Clone, Debug)]
pub struct Hessenberg<T: Scalar, D: DimSub<U1>>
where
DefaultAllocator: Allocator<T, D, D> + Allocator<T, DimDiff<D, U1>>,
DefaultAllocator: Allocator<D, D> + Allocator<DimDiff<D, U1>>,
{
h: OMatrix<T, D, D>,
tau: OVector<T, DimDiff<D, U1>>,
@ -35,7 +35,7 @@ where
impl<T: Scalar + Copy, D: DimSub<U1>> Copy for Hessenberg<T, D>
where
DefaultAllocator: Allocator<T, D, D> + Allocator<T, DimDiff<D, U1>>,
DefaultAllocator: Allocator<D, D> + Allocator<DimDiff<D, U1>>,
OMatrix<T, D, D>: Copy,
OVector<T, DimDiff<D, U1>>: Copy,
{
@ -43,7 +43,7 @@ where
impl<T: HessenbergScalar + Zero, D: DimSub<U1>> Hessenberg<T, D>
where
DefaultAllocator: Allocator<T, D, D> + Allocator<T, DimDiff<D, U1>>,
DefaultAllocator: Allocator<D, D> + Allocator<DimDiff<D, U1>>,
{
/// Computes the hessenberg decomposition of the matrix `m`.
pub fn new(mut m: OMatrix<T, D, D>) -> Self {
@ -97,7 +97,7 @@ where
impl<T: HessenbergReal + Zero, D: DimSub<U1>> Hessenberg<T, D>
where
DefaultAllocator: Allocator<T, D, D> + Allocator<T, DimDiff<D, U1>>,
DefaultAllocator: Allocator<D, D> + Allocator<DimDiff<D, U1>>,
{
/// Computes the matrices `(Q, H)` of this decomposition.
#[inline]

View File

@ -20,22 +20,22 @@ use lapack;
#[cfg_attr(feature = "serde-serialize", derive(Serialize, Deserialize))]
#[cfg_attr(
feature = "serde-serialize",
serde(bound(serialize = "DefaultAllocator: Allocator<T, R, C> +
Allocator<i32, DimMinimum<R, C>>,
serde(bound(serialize = "DefaultAllocator: Allocator<R, C> +
Allocator<DimMinimum<R, C>>,
OMatrix<T, R, C>: Serialize,
PermutationSequence<DimMinimum<R, C>>: Serialize"))
)]
#[cfg_attr(
feature = "serde-serialize",
serde(bound(deserialize = "DefaultAllocator: Allocator<T, R, C> +
Allocator<i32, DimMinimum<R, C>>,
serde(bound(deserialize = "DefaultAllocator: Allocator<R, C> +
Allocator<DimMinimum<R, C>>,
OMatrix<T, R, C>: Deserialize<'de>,
PermutationSequence<DimMinimum<R, C>>: Deserialize<'de>"))
)]
#[derive(Clone, Debug)]
pub struct LU<T: Scalar, R: DimMin<C>, C: Dim>
where
DefaultAllocator: Allocator<i32, DimMinimum<R, C>> + Allocator<T, R, C>,
DefaultAllocator: Allocator<DimMinimum<R, C>> + Allocator<R, C>,
{
lu: OMatrix<T, R, C>,
p: OVector<i32, DimMinimum<R, C>>,
@ -43,7 +43,7 @@ where
impl<T: Scalar + Copy, R: DimMin<C>, C: Dim> Copy for LU<T, R, C>
where
DefaultAllocator: Allocator<T, R, C> + Allocator<i32, DimMinimum<R, C>>,
DefaultAllocator: Allocator<R, C> + Allocator<DimMinimum<R, C>>,
OMatrix<T, R, C>: Copy,
OVector<i32, DimMinimum<R, C>>: Copy,
{
@ -53,11 +53,11 @@ impl<T: LUScalar, R: Dim, C: Dim> LU<T, R, C>
where
T: Zero + One,
R: DimMin<C>,
DefaultAllocator: Allocator<T, R, C>
+ Allocator<T, R, R>
+ Allocator<T, R, DimMinimum<R, C>>
+ Allocator<T, DimMinimum<R, C>, C>
+ Allocator<i32, DimMinimum<R, C>>,
DefaultAllocator: Allocator<R, C>
+ Allocator<R, R>
+ Allocator<R, DimMinimum<R, C>>
+ Allocator<DimMinimum<R, C>, C>
+ Allocator<DimMinimum<R, C>>,
{
/// Computes the LU decomposition with partial (row) pivoting of `matrix`.
pub fn new(mut m: OMatrix<T, R, C>) -> Self {
@ -136,7 +136,7 @@ where
#[inline]
pub fn permute<C2: Dim>(&self, rhs: &mut OMatrix<T, R, C2>)
where
DefaultAllocator: Allocator<T, R, C2>,
DefaultAllocator: Allocator<R, C2>,
{
let (nrows, ncols) = rhs.shape();
@ -153,7 +153,7 @@ where
fn generic_solve_mut<R2: Dim, C2: Dim>(&self, trans: u8, b: &mut OMatrix<T, R2, C2>) -> bool
where
DefaultAllocator: Allocator<T, R2, C2> + Allocator<i32, R2>,
DefaultAllocator: Allocator<R2, C2> + Allocator<R2>,
{
let dim = self.lu.nrows();
@ -192,7 +192,7 @@ where
) -> Option<OMatrix<T, R2, C2>>
where
S2: Storage<T, R2, C2>,
DefaultAllocator: Allocator<T, R2, C2> + Allocator<i32, R2>,
DefaultAllocator: Allocator<R2, C2> + Allocator<R2>,
{
let mut res = b.clone_owned();
if self.generic_solve_mut(b'T', &mut res) {
@ -210,7 +210,7 @@ where
) -> Option<OMatrix<T, R2, C2>>
where
S2: Storage<T, R2, C2>,
DefaultAllocator: Allocator<T, R2, C2> + Allocator<i32, R2>,
DefaultAllocator: Allocator<R2, C2> + Allocator<R2>,
{
let mut res = b.clone_owned();
if self.generic_solve_mut(b'T', &mut res) {
@ -228,7 +228,7 @@ where
) -> Option<OMatrix<T, R2, C2>>
where
S2: Storage<T, R2, C2>,
DefaultAllocator: Allocator<T, R2, C2> + Allocator<i32, R2>,
DefaultAllocator: Allocator<R2, C2> + Allocator<R2>,
{
let mut res = b.clone_owned();
if self.generic_solve_mut(b'T', &mut res) {
@ -243,7 +243,7 @@ where
/// Returns `false` if no solution was found (the decomposed matrix is singular).
pub fn solve_mut<R2: Dim, C2: Dim>(&self, b: &mut OMatrix<T, R2, C2>) -> bool
where
DefaultAllocator: Allocator<T, R2, C2> + Allocator<i32, R2>,
DefaultAllocator: Allocator<R2, C2> + Allocator<R2>,
{
self.generic_solve_mut(b'T', b)
}
@ -254,7 +254,7 @@ where
/// Returns `false` if no solution was found (the decomposed matrix is singular).
pub fn solve_transpose_mut<R2: Dim, C2: Dim>(&self, b: &mut OMatrix<T, R2, C2>) -> bool
where
DefaultAllocator: Allocator<T, R2, C2> + Allocator<i32, R2>,
DefaultAllocator: Allocator<R2, C2> + Allocator<R2>,
{
self.generic_solve_mut(b'T', b)
}
@ -265,7 +265,7 @@ where
/// Returns `false` if no solution was found (the decomposed matrix is singular).
pub fn solve_adjoint_mut<R2: Dim, C2: Dim>(&self, b: &mut OMatrix<T, R2, C2>) -> bool
where
DefaultAllocator: Allocator<T, R2, C2> + Allocator<i32, R2>,
DefaultAllocator: Allocator<R2, C2> + Allocator<R2>,
{
self.generic_solve_mut(b'T', b)
}
@ -275,7 +275,7 @@ impl<T: LUScalar, D: Dim> LU<T, D, D>
where
T: Zero + One,
D: DimMin<D, Output = D>,
DefaultAllocator: Allocator<T, D, D> + Allocator<i32, D>,
DefaultAllocator: Allocator<D, D> + Allocator<D>,
{
/// Computes the inverse of the decomposed matrix.
pub fn inverse(mut self) -> Option<OMatrix<T, D, D>> {

View File

@ -15,22 +15,22 @@ use lapack;
#[cfg_attr(feature = "serde-serialize", derive(Serialize, Deserialize))]
#[cfg_attr(
feature = "serde-serialize",
serde(bound(serialize = "DefaultAllocator: Allocator<T, R, C> +
Allocator<T, DimMinimum<R, C>>,
serde(bound(serialize = "DefaultAllocator: Allocator<R, C> +
Allocator<DimMinimum<R, C>>,
OMatrix<T, R, C>: Serialize,
OVector<T, DimMinimum<R, C>>: Serialize"))
)]
#[cfg_attr(
feature = "serde-serialize",
serde(bound(deserialize = "DefaultAllocator: Allocator<T, R, C> +
Allocator<T, DimMinimum<R, C>>,
serde(bound(deserialize = "DefaultAllocator: Allocator<R, C> +
Allocator<DimMinimum<R, C>>,
OMatrix<T, R, C>: Deserialize<'de>,
OVector<T, DimMinimum<R, C>>: Deserialize<'de>"))
)]
#[derive(Clone, Debug)]
pub struct QR<T: Scalar, R: DimMin<C>, C: Dim>
where
DefaultAllocator: Allocator<T, R, C> + Allocator<T, DimMinimum<R, C>>,
DefaultAllocator: Allocator<R, C> + Allocator<DimMinimum<R, C>>,
{
qr: OMatrix<T, R, C>,
tau: OVector<T, DimMinimum<R, C>>,
@ -38,7 +38,7 @@ where
impl<T: Scalar + Copy, R: DimMin<C>, C: Dim> Copy for QR<T, R, C>
where
DefaultAllocator: Allocator<T, R, C> + Allocator<T, DimMinimum<R, C>>,
DefaultAllocator: Allocator<R, C> + Allocator<DimMinimum<R, C>>,
OMatrix<T, R, C>: Copy,
OVector<T, DimMinimum<R, C>>: Copy,
{
@ -46,10 +46,10 @@ where
impl<T: QRScalar + Zero, R: DimMin<C>, C: Dim> QR<T, R, C>
where
DefaultAllocator: Allocator<T, R, C>
+ Allocator<T, R, DimMinimum<R, C>>
+ Allocator<T, DimMinimum<R, C>, C>
+ Allocator<T, DimMinimum<R, C>>,
DefaultAllocator: Allocator<R, C>
+ Allocator<R, DimMinimum<R, C>>
+ Allocator<DimMinimum<R, C>, C>
+ Allocator<DimMinimum<R, C>>,
{
/// Computes the QR decomposition of the matrix `m`.
pub fn new(mut m: OMatrix<T, R, C>) -> Self {
@ -98,10 +98,10 @@ where
impl<T: QRReal + Zero, R: DimMin<C>, C: Dim> QR<T, R, C>
where
DefaultAllocator: Allocator<T, R, C>
+ Allocator<T, R, DimMinimum<R, C>>
+ Allocator<T, DimMinimum<R, C>, C>
+ Allocator<T, DimMinimum<R, C>>,
DefaultAllocator: Allocator<R, C>
+ Allocator<R, DimMinimum<R, C>>
+ Allocator<DimMinimum<R, C>, C>
+ Allocator<DimMinimum<R, C>>,
{
/// Retrieves the matrices `(Q, R)` of this decomposition.
pub fn unpack(

View File

@ -22,24 +22,20 @@ use lapack;
#[cfg_attr(feature = "serde-serialize", derive(Serialize, Deserialize))]
#[cfg_attr(
feature = "serde-serialize",
serde(
bound(serialize = "DefaultAllocator: Allocator<T, D, D> + Allocator<T, D>,
serde(bound(serialize = "DefaultAllocator: Allocator<D, D> + Allocator<D>,
OVector<T, D>: Serialize,
OMatrix<T, D, D>: Serialize")
)
OMatrix<T, D, D>: Serialize"))
)]
#[cfg_attr(
feature = "serde-serialize",
serde(
bound(deserialize = "DefaultAllocator: Allocator<T, D, D> + Allocator<T, D>,
serde(bound(deserialize = "DefaultAllocator: Allocator<D, D> + Allocator<D>,
OVector<T, D>: Deserialize<'de>,
OMatrix<T, D, D>: Deserialize<'de>")
)
OMatrix<T, D, D>: Deserialize<'de>"))
)]
#[derive(Clone, Debug)]
pub struct QZ<T: Scalar, D: Dim>
where
DefaultAllocator: Allocator<T, D> + Allocator<T, D, D>,
DefaultAllocator: Allocator<D> + Allocator<D, D>,
{
alphar: OVector<T, D>,
alphai: OVector<T, D>,
@ -52,7 +48,7 @@ where
impl<T: Scalar + Copy, D: Dim> Copy for QZ<T, D>
where
DefaultAllocator: Allocator<T, D, D> + Allocator<T, D>,
DefaultAllocator: Allocator<D, D> + Allocator<D>,
OMatrix<T, D, D>: Copy,
OVector<T, D>: Copy,
{
@ -60,7 +56,7 @@ where
impl<T: QZScalar + RealField, D: Dim> QZ<T, D>
where
DefaultAllocator: Allocator<T, D, D> + Allocator<T, D>,
DefaultAllocator: Allocator<D, D> + Allocator<D>,
{
/// Attempts to compute the QZ decomposition of input real square matrices `a` and `b`.
///
@ -182,7 +178,7 @@ where
#[must_use]
pub fn raw_eigenvalues(&self) -> OVector<(Complex<T>, T), D>
where
DefaultAllocator: Allocator<(Complex<T>, T), D>,
DefaultAllocator: Allocator<D>,
{
let mut out = Matrix::from_element_generic(
self.vsl.shape_generic().0,

View File

@ -17,24 +17,20 @@ use lapack;
#[cfg_attr(feature = "serde-serialize", derive(Serialize, Deserialize))]
#[cfg_attr(
feature = "serde-serialize",
serde(
bound(serialize = "DefaultAllocator: Allocator<T, D, D> + Allocator<T, D>,
serde(bound(serialize = "DefaultAllocator: Allocator<D, D> + Allocator<D>,
OVector<T, D>: Serialize,
OMatrix<T, D, D>: Serialize")
)
OMatrix<T, D, D>: Serialize"))
)]
#[cfg_attr(
feature = "serde-serialize",
serde(
bound(deserialize = "DefaultAllocator: Allocator<T, D, D> + Allocator<T, D>,
serde(bound(deserialize = "DefaultAllocator: Allocator<D, D> + Allocator<D>,
OVector<T, D>: Serialize,
OMatrix<T, D, D>: Deserialize<'de>")
)
OMatrix<T, D, D>: Deserialize<'de>"))
)]
#[derive(Clone, Debug)]
pub struct Schur<T: Scalar, D: Dim>
where
DefaultAllocator: Allocator<T, D> + Allocator<T, D, D>,
DefaultAllocator: Allocator<D> + Allocator<D, D>,
{
re: OVector<T, D>,
im: OVector<T, D>,
@ -44,7 +40,7 @@ where
impl<T: Scalar + Copy, D: Dim> Copy for Schur<T, D>
where
DefaultAllocator: Allocator<T, D, D> + Allocator<T, D>,
DefaultAllocator: Allocator<D, D> + Allocator<D>,
OMatrix<T, D, D>: Copy,
OVector<T, D>: Copy,
{
@ -52,7 +48,7 @@ where
impl<T: SchurScalar + RealField, D: Dim> Schur<T, D>
where
DefaultAllocator: Allocator<T, D, D> + Allocator<T, D>,
DefaultAllocator: Allocator<D, D> + Allocator<D>,
{
/// Computes the eigenvalues and real Schur form of the matrix `m`.
///
@ -150,7 +146,7 @@ where
#[must_use]
pub fn complex_eigenvalues(&self) -> OVector<Complex<T>, D>
where
DefaultAllocator: Allocator<Complex<T>, D>,
DefaultAllocator: Allocator<D>,
{
let mut out = Matrix::zeros_generic(self.t.shape_generic().0, Const::<1>);

View File

@ -14,18 +14,18 @@ use lapack;
#[cfg_attr(feature = "serde-serialize", derive(Serialize, Deserialize))]
#[cfg_attr(
feature = "serde-serialize",
serde(bound(serialize = "DefaultAllocator: Allocator<T, DimMinimum<R, C>> +
Allocator<T, R, R> +
Allocator<T, C, C>,
serde(bound(serialize = "DefaultAllocator: Allocator<DimMinimum<R, C>> +
Allocator<R, R> +
Allocator<C, C>,
OMatrix<T, R>: Serialize,
OMatrix<T, C>: Serialize,
OVector<T, DimMinimum<R, C>>: Serialize"))
)]
#[cfg_attr(
feature = "serde-serialize",
serde(bound(serialize = "DefaultAllocator: Allocator<T, DimMinimum<R, C>> +
Allocator<T, R, R> +
Allocator<T, C, C>,
serde(bound(serialize = "DefaultAllocator: Allocator<DimMinimum<R, C>> +
Allocator<R, R> +
Allocator<C, C>,
OMatrix<T, R>: Deserialize<'de>,
OMatrix<T, C>: Deserialize<'de>,
OVector<T, DimMinimum<R, C>>: Deserialize<'de>"))
@ -33,7 +33,7 @@ use lapack;
#[derive(Clone, Debug)]
pub struct SVD<T: Scalar, R: DimMin<C>, C: Dim>
where
DefaultAllocator: Allocator<T, R, R> + Allocator<T, DimMinimum<R, C>> + Allocator<T, C, C>,
DefaultAllocator: Allocator<R, R> + Allocator<DimMinimum<R, C>> + Allocator<C, C>,
{
/// The left-singular vectors `U` of this SVD.
pub u: OMatrix<T, R, R>, // TODO: should be OMatrix<T, R, DimMinimum<R, C>>
@ -45,7 +45,7 @@ where
impl<T: Scalar + Copy, R: DimMin<C>, C: Dim> Copy for SVD<T, R, C>
where
DefaultAllocator: Allocator<T, C, C> + Allocator<T, R, R> + Allocator<T, DimMinimum<R, C>>,
DefaultAllocator: Allocator<C, C> + Allocator<R, R> + Allocator<DimMinimum<R, C>>,
OMatrix<T, R, R>: Copy,
OMatrix<T, C, C>: Copy,
OVector<T, DimMinimum<R, C>>: Copy,
@ -56,10 +56,8 @@ where
/// supported by the Singular Value Decomposition.
pub trait SVDScalar<R: DimMin<C>, C: Dim>: Scalar
where
DefaultAllocator: Allocator<Self, R, R>
+ Allocator<Self, R, C>
+ Allocator<Self, DimMinimum<R, C>>
+ Allocator<Self, C, C>,
DefaultAllocator:
Allocator<R, R> + Allocator<R, C> + Allocator<DimMinimum<R, C>> + Allocator<C, C>,
{
/// Computes the SVD decomposition of `m`.
fn compute(m: OMatrix<Self, R, C>) -> Option<SVD<Self, R, C>>;
@ -67,10 +65,8 @@ where
impl<T: SVDScalar<R, C>, R: DimMin<C>, C: Dim> SVD<T, R, C>
where
DefaultAllocator: Allocator<T, R, R>
+ Allocator<T, R, C>
+ Allocator<T, DimMinimum<R, C>>
+ Allocator<T, C, C>,
DefaultAllocator:
Allocator<R, R> + Allocator<R, C> + Allocator<DimMinimum<R, C>> + Allocator<C, C>,
{
/// Computes the Singular Value Decomposition of `matrix`.
pub fn new(m: OMatrix<T, R, C>) -> Option<Self> {
@ -82,10 +78,10 @@ macro_rules! svd_impl(
($t: ty, $lapack_func: path) => (
impl<R: Dim, C: Dim> SVDScalar<R, C> for $t
where R: DimMin<C>,
DefaultAllocator: Allocator<$t, R, C> +
Allocator<$t, R, R> +
Allocator<$t, C, C> +
Allocator<$t, DimMinimum<R, C>> {
DefaultAllocator: Allocator<R, C> +
Allocator<R, R> +
Allocator<C, C> +
Allocator<DimMinimum<R, C>> {
fn compute(mut m: OMatrix<$t, R, C>) -> Option<SVD<$t, R, C>> {
let (nrows, ncols) = m.shape_generic();
@ -134,16 +130,16 @@ macro_rules! svd_impl(
impl<R: DimMin<C>, C: Dim> SVD<$t, R, C>
// TODO: All those bounds…
where DefaultAllocator: Allocator<$t, R, C> +
Allocator<$t, C, R> +
Allocator<$t, U1, R> +
Allocator<$t, U1, C> +
Allocator<$t, R, R> +
Allocator<$t, DimMinimum<R, C>> +
Allocator<$t, DimMinimum<R, C>, R> +
Allocator<$t, DimMinimum<R, C>, C> +
Allocator<$t, R, DimMinimum<R, C>> +
Allocator<$t, C, C> {
where DefaultAllocator: Allocator<R, C> +
Allocator<C, R> +
Allocator<U1, R> +
Allocator<U1, C> +
Allocator<R, R> +
Allocator<DimMinimum<R, C>> +
Allocator<DimMinimum<R, C>, R> +
Allocator<DimMinimum<R, C>, C> +
Allocator<R, DimMinimum<R, C>> +
Allocator<C, C> {
/// Reconstructs the matrix from its decomposition.
///
/// Useful if some components (e.g. some singular values) of this decomposition have
@ -237,9 +233,9 @@ macro_rules! svd_complex_impl(
where R: DimMin<C>,
S: ContiguousStorage<Complex<$t>, R, C>,
S::Alloc: OwnedAllocator<Complex<$t>, R, C, S> +
Allocator<Complex<$t>, R, R> +
Allocator<Complex<$t>, C, C> +
Allocator<$t, DimMinimum<R, C>> {
Allocator<R, R> +
Allocator<C, C> +
Allocator<DimMinimum<R, C>> {
let (nrows, ncols) = m.shape_generic();
if nrows.value() == 0 || ncols.value() == 0 {

View File

@ -17,22 +17,22 @@ use lapack;
#[cfg_attr(feature = "serde-serialize", derive(Serialize, Deserialize))]
#[cfg_attr(
feature = "serde-serialize",
serde(bound(serialize = "DefaultAllocator: Allocator<T, D, D> +
Allocator<T, D>,
serde(bound(serialize = "DefaultAllocator: Allocator<D, D> +
Allocator<D>,
OVector<T, D>: Serialize,
OMatrix<T, D, D>: Serialize"))
)]
#[cfg_attr(
feature = "serde-serialize",
serde(bound(deserialize = "DefaultAllocator: Allocator<T, D, D> +
Allocator<T, D>,
serde(bound(deserialize = "DefaultAllocator: Allocator<D, D> +
Allocator<D>,
OVector<T, D>: Deserialize<'de>,
OMatrix<T, D, D>: Deserialize<'de>"))
)]
#[derive(Clone, Debug)]
pub struct SymmetricEigen<T: Scalar, D: Dim>
where
DefaultAllocator: Allocator<T, D> + Allocator<T, D, D>,
DefaultAllocator: Allocator<D> + Allocator<D, D>,
{
/// The eigenvectors of the decomposed matrix.
pub eigenvectors: OMatrix<T, D, D>,
@ -43,7 +43,7 @@ where
impl<T: Scalar + Copy, D: Dim> Copy for SymmetricEigen<T, D>
where
DefaultAllocator: Allocator<T, D, D> + Allocator<T, D>,
DefaultAllocator: Allocator<D, D> + Allocator<D>,
OMatrix<T, D, D>: Copy,
OVector<T, D>: Copy,
{
@ -51,7 +51,7 @@ where
impl<T: SymmetricEigenScalar + RealField, D: Dim> SymmetricEigen<T, D>
where
DefaultAllocator: Allocator<T, D, D> + Allocator<T, D>,
DefaultAllocator: Allocator<D, D> + Allocator<D>,
{
/// Computes the eigenvalues and eigenvectors of the symmetric matrix `m`.
///

View File

@ -1,25 +1,24 @@
[package]
name = "nalgebra-macros"
version = "0.2.1"
authors = [ "Andreas Longva", "Sébastien Crozet <developer@crozet.re>" ]
version = "0.2.2"
authors = ["Andreas Longva", "Sébastien Crozet <developer@crozet.re>"]
edition = "2018"
description = "Procedural macros for nalgebra"
documentation = "https://www.nalgebra.org/docs"
homepage = "https://nalgebra.org"
repository = "https://github.com/dimforge/nalgebra"
readme = "../README.md"
categories = [ "science", "mathematics" ]
keywords = [ "linear", "algebra", "matrix", "vector", "math" ]
categories = ["science", "mathematics"]
keywords = ["linear", "algebra", "matrix", "vector", "math"]
license = "Apache-2.0"
[lib]
proc-macro = true
[dependencies]
syn = { version="1.0", features = ["full"] }
syn = { version = "2.0", features = ["full"] }
quote = "1.0"
proc-macro2 = "1.0"
[dev-dependencies]
nalgebra = { version = "0.32.0", path = ".." }
trybuild = "1.0.42"
nalgebra = { version = "0.33", path = ".." }

1
nalgebra-macros/LICENSE Symbolic link
View File

@ -0,0 +1 @@
../LICENSE

View File

@ -12,102 +12,19 @@
future_incompatible,
missing_copy_implementations,
missing_debug_implementations,
clippy::all,
clippy::pedantic
clippy::all
)]
mod matrix_vector_impl;
mod stack_impl;
use matrix_vector_impl::{Matrix, Vector};
use crate::matrix_vector_impl::{dmatrix_impl, dvector_impl, matrix_impl, vector_impl};
use proc_macro::TokenStream;
use quote::{quote, ToTokens, TokenStreamExt};
use syn::parse::{Error, Parse, ParseStream, Result};
use syn::punctuated::Punctuated;
use syn::Expr;
use syn::{parse_macro_input, Token};
use proc_macro2::{Delimiter, Spacing, TokenStream as TokenStream2, TokenTree};
use proc_macro2::{Group, Punct};
struct Matrix {
// Represent the matrix as a row-major vector of vectors of expressions
rows: Vec<Vec<Expr>>,
ncols: usize,
}
impl Matrix {
fn nrows(&self) -> usize {
self.rows.len()
}
fn ncols(&self) -> usize {
self.ncols
}
/// Produces a stream of tokens representing this matrix as a column-major nested array.
fn to_col_major_nested_array_tokens(&self) -> TokenStream2 {
let mut result = TokenStream2::new();
for j in 0..self.ncols() {
let mut col = TokenStream2::new();
let col_iter = (0..self.nrows()).map(move |i| &self.rows[i][j]);
col.append_separated(col_iter, Punct::new(',', Spacing::Alone));
result.append(Group::new(Delimiter::Bracket, col));
result.append(Punct::new(',', Spacing::Alone));
}
TokenStream2::from(TokenTree::Group(Group::new(Delimiter::Bracket, result)))
}
/// Produces a stream of tokens representing this matrix as a column-major flat array
/// (suitable for representing e.g. a `DMatrix`).
fn to_col_major_flat_array_tokens(&self) -> TokenStream2 {
let mut data = TokenStream2::new();
for j in 0..self.ncols() {
for i in 0..self.nrows() {
self.rows[i][j].to_tokens(&mut data);
data.append(Punct::new(',', Spacing::Alone));
}
}
TokenStream2::from(TokenTree::Group(Group::new(Delimiter::Bracket, data)))
}
}
type MatrixRowSyntax = Punctuated<Expr, Token![,]>;
impl Parse for Matrix {
fn parse(input: ParseStream<'_>) -> Result<Self> {
let mut rows = Vec::new();
let mut ncols = None;
while !input.is_empty() {
let row_span = input.span();
let row = MatrixRowSyntax::parse_separated_nonempty(input)?;
if let Some(ncols) = ncols {
if row.len() != ncols {
let row_idx = rows.len();
let error_msg = format!(
"Unexpected number of entries in row {}. Expected {}, found {} entries.",
row_idx,
ncols,
row.len()
);
return Err(Error::new(row_span, error_msg));
}
} else {
ncols = Some(row.len());
}
rows.push(row.into_iter().collect());
// We've just read a row, so if there are more tokens, there must be a semi-colon,
// otherwise the input is malformed
if !input.is_empty() {
input.parse::<Token![;]>()?;
}
}
Ok(Self {
rows,
ncols: ncols.unwrap_or(0),
})
}
}
use quote::quote;
use stack_impl::stack_impl;
use syn::parse_macro_input;
/// Construct a fixed-size matrix directly from data.
///
@ -145,20 +62,7 @@ impl Parse for Matrix {
/// ```
#[proc_macro]
pub fn matrix(stream: TokenStream) -> TokenStream {
let matrix = parse_macro_input!(stream as Matrix);
let row_dim = matrix.nrows();
let col_dim = matrix.ncols();
let array_tokens = matrix.to_col_major_nested_array_tokens();
// TODO: Use quote_spanned instead??
let output = quote! {
nalgebra::SMatrix::<_, #row_dim, #col_dim>
::from_array_storage(nalgebra::ArrayStorage(#array_tokens))
};
proc_macro::TokenStream::from(output)
matrix_impl(stream)
}
/// Construct a dynamic matrix directly from data.
@ -180,55 +84,7 @@ pub fn matrix(stream: TokenStream) -> TokenStream {
/// ```
#[proc_macro]
pub fn dmatrix(stream: TokenStream) -> TokenStream {
let matrix = parse_macro_input!(stream as Matrix);
let row_dim = matrix.nrows();
let col_dim = matrix.ncols();
let array_tokens = matrix.to_col_major_flat_array_tokens();
// TODO: Use quote_spanned instead??
let output = quote! {
nalgebra::DMatrix::<_>
::from_vec_storage(nalgebra::VecStorage::new(
nalgebra::Dyn(#row_dim),
nalgebra::Dyn(#col_dim),
vec!#array_tokens))
};
proc_macro::TokenStream::from(output)
}
struct Vector {
elements: Vec<Expr>,
}
impl Vector {
fn to_array_tokens(&self) -> TokenStream2 {
let mut data = TokenStream2::new();
data.append_separated(&self.elements, Punct::new(',', Spacing::Alone));
TokenStream2::from(TokenTree::Group(Group::new(Delimiter::Bracket, data)))
}
fn len(&self) -> usize {
self.elements.len()
}
}
impl Parse for Vector {
fn parse(input: ParseStream<'_>) -> Result<Self> {
// The syntax of a vector is just the syntax of a single matrix row
if input.is_empty() {
Ok(Self {
elements: Vec::new(),
})
} else {
let elements = MatrixRowSyntax::parse_terminated(input)?
.into_iter()
.collect();
Ok(Self { elements })
}
}
dmatrix_impl(stream)
}
/// Construct a fixed-size column vector directly from data.
@ -252,14 +108,7 @@ impl Parse for Vector {
/// ```
#[proc_macro]
pub fn vector(stream: TokenStream) -> TokenStream {
let vector = parse_macro_input!(stream as Vector);
let len = vector.len();
let array_tokens = vector.to_array_tokens();
let output = quote! {
nalgebra::SVector::<_, #len>
::from_array_storage(nalgebra::ArrayStorage([#array_tokens]))
};
proc_macro::TokenStream::from(output)
vector_impl(stream)
}
/// Construct a dynamic column vector directly from data.
@ -279,17 +128,7 @@ pub fn vector(stream: TokenStream) -> TokenStream {
/// ```
#[proc_macro]
pub fn dvector(stream: TokenStream) -> TokenStream {
let vector = parse_macro_input!(stream as Vector);
let len = vector.len();
let array_tokens = vector.to_array_tokens();
let output = quote! {
nalgebra::DVector::<_>
::from_vec_storage(nalgebra::VecStorage::new(
nalgebra::Dyn(#len),
nalgebra::Const::<1>,
vec!#array_tokens))
};
proc_macro::TokenStream::from(output)
dvector_impl(stream)
}
/// Construct a fixed-size point directly from data.
@ -321,3 +160,100 @@ pub fn point(stream: TokenStream) -> TokenStream {
};
proc_macro::TokenStream::from(output)
}
/// Construct a new matrix by stacking matrices in a block matrix.
///
/// **Note: Requires the `macros` feature to be enabled (enabled by default)**.
///
/// This macro facilitates the construction of
/// [block matrices](https://en.wikipedia.org/wiki/Block_matrix)
/// by stacking blocks (matrices) using the same MATLAB-like syntax as the [`matrix!`] and
/// [`dmatrix!`] macros:
///
/// ```rust
/// # use nalgebra::stack;
/// #
/// # fn main() {
/// # let [a, b, c, d] = std::array::from_fn(|_| nalgebra::Matrix1::new(0));
/// // a, b, c and d are matrices
/// let block_matrix = stack![ a, b;
/// c, d ];
/// # }
/// ```
///
/// The resulting matrix is stack-allocated if the dimension of each block row and column
/// can be determined at compile-time, otherwise it is heap-allocated.
/// This is the case if, for every row, there is at least one matrix with a fixed number of rows,
/// and, for every column, there is at least one matrix with a fixed number of columns.
///
/// [`stack!`] also supports special syntax to indicate zero blocks in a matrix:
///
/// ```rust
/// # use nalgebra::stack;
/// #
/// # fn main() {
/// # let [a, b, c, d] = std::array::from_fn(|_| nalgebra::Matrix1::new(0));
/// // a and d are matrices
/// let block_matrix = stack![ a, 0;
/// 0, d ];
/// # }
/// ```
/// Here, the `0` literal indicates a zero matrix of implicitly defined size.
/// In order to infer the size of the zero blocks, every block row and every block column
/// must contain at least one explicit matrix block.
/// In other words, no row or column can consist entirely of implicit zero blocks.
///
/// # Panics
///
/// Panics if the block dimensions are inconsistent and the mismatch cannot be detected at compile-time.
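///
/// A minimal sketch of such a runtime panic (the matrices `a` and `b` below are purely illustrative):
///
/// ```should_panic
/// # use nalgebra::{dmatrix, stack};
/// let a = dmatrix![1, 2;
///                  3, 4];    // 2x2
/// let b = dmatrix![5, 6, 7]; // 1x3
///
/// // Both blocks are dynamically sized, so the row-count mismatch within the
/// // block row can only be detected at runtime, and stack! panics.
/// let _ = stack![a, b];
/// ```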
///
/// # Examples
///
/// ```
/// use nalgebra::{matrix, SMatrix, stack};
///
/// let a = matrix![1, 2;
/// 3, 4];
/// let b = matrix![5, 6;
/// 7, 8];
/// let c = matrix![9, 10];
///
/// let block_matrix = stack![ a, b;
/// c, 0 ];
///
/// assert_eq!(block_matrix, matrix![1, 2, 5, 6;
/// 3, 4, 7, 8;
/// 9, 10, 0, 0]);
///
/// // Verify that the resulting block matrix is stack-allocated
/// let _: SMatrix<_, 3, 4> = block_matrix;
/// ```
///
/// The example above shows how stacking stack-allocated matrices results in a stack-allocated
/// block matrix. If not all row and column dimensions can be determined at compile-time,
/// the result is instead a dynamically allocated matrix:
///
/// ```
/// use nalgebra::{dmatrix, DMatrix, Dyn, matrix, OMatrix, SMatrix, stack, U3};
///
/// # let a = matrix![1, 2; 3, 4]; let c = matrix![9, 10];
/// // a and c as before, but b is a dynamic matrix this time
/// let b = dmatrix![5, 6;
/// 7, 8];
///
/// // In this case, the number of rows can be statically inferred to be 3 (U3),
/// // but the number of columns cannot, hence it is dynamic
/// let block_matrix: OMatrix<_, U3, Dyn> = stack![ a, b;
/// c, 0 ];
///
/// // If necessary, a fully dynamic matrix (DMatrix) can be obtained by reshaping
/// let dyn_block_matrix: DMatrix<_> = block_matrix.reshape_generic(Dyn(3), Dyn(4));
/// ```
/// Note that the explicit type annotations of `block_matrix` and `dyn_block_matrix` above are
/// given only for illustrative purposes; they are not generally necessary.
///
#[proc_macro]
pub fn stack(stream: TokenStream) -> TokenStream {
let matrix = parse_macro_input!(stream as Matrix);
proc_macro::TokenStream::from(stack_impl(matrix).unwrap_or_else(syn::Error::into_compile_error))
}


@ -0,0 +1,201 @@
use proc_macro::TokenStream;
use quote::{quote, ToTokens, TokenStreamExt};
use std::ops::Index;
use syn::parse::{Error, Parse, ParseStream};
use syn::punctuated::Punctuated;
use syn::spanned::Spanned;
use syn::Expr;
use syn::{parse_macro_input, Token};
use proc_macro2::{Delimiter, Spacing, TokenStream as TokenStream2, TokenTree};
use proc_macro2::{Group, Punct};
/// A matrix of expressions
pub struct Matrix {
// Represent the matrix data in row-major format
data: Vec<Expr>,
nrows: usize,
ncols: usize,
}
impl Index<(usize, usize)> for Matrix {
type Output = Expr;
fn index(&self, (row, col): (usize, usize)) -> &Self::Output {
let linear_idx = self.ncols * row + col;
&self.data[linear_idx]
}
}
impl Matrix {
pub fn nrows(&self) -> usize {
self.nrows
}
pub fn ncols(&self) -> usize {
self.ncols
}
/// Produces a stream of tokens representing this matrix as a column-major nested array.
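/// For example, the input `1, 2; 3, 4` produces the nested array `[[1, 3], [2, 4]]`,
/// where each inner array holds one column.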
pub fn to_col_major_nested_array_tokens(&self) -> TokenStream2 {
let mut result = TokenStream2::new();
for j in 0..self.ncols() {
let mut col = TokenStream2::new();
let col_iter = (0..self.nrows()).map(|i| &self[(i, j)]);
col.append_separated(col_iter, Punct::new(',', Spacing::Alone));
result.append(Group::new(Delimiter::Bracket, col));
result.append(Punct::new(',', Spacing::Alone));
}
TokenStream2::from(TokenTree::Group(Group::new(Delimiter::Bracket, result)))
}
/// Produces a stream of tokens representing this matrix as a column-major flat array
/// (suitable for representing e.g. a `DMatrix`).
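/// For example, the input `1, 2; 3, 4` produces the flat array `[1, 3, 2, 4]`
/// (columns laid out consecutively).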
pub fn to_col_major_flat_array_tokens(&self) -> TokenStream2 {
let mut data = TokenStream2::new();
for j in 0..self.ncols() {
for i in 0..self.nrows() {
self[(i, j)].to_tokens(&mut data);
data.append(Punct::new(',', Spacing::Alone));
}
}
TokenStream2::from(TokenTree::Group(Group::new(Delimiter::Bracket, data)))
}
}
type MatrixRowSyntax = Punctuated<Expr, Token![,]>;
impl Parse for Matrix {
fn parse(input: ParseStream<'_>) -> syn::Result<Self> {
let mut data = Vec::new();
let mut ncols = None;
let mut nrows = 0;
while !input.is_empty() {
let row = MatrixRowSyntax::parse_separated_nonempty(input)?;
let row_span = row.span();
if let Some(ncols) = ncols {
if row.len() != ncols {
let error_msg = format!(
"Unexpected number of entries in row {}. Expected {}, found {} entries.",
nrows,
ncols,
row.len()
);
return Err(Error::new(row_span, error_msg));
}
} else {
ncols = Some(row.len());
}
data.extend(row.into_iter());
nrows += 1;
// We've just read a row, so if there are more tokens, there must be a semi-colon,
// otherwise the input is malformed
if !input.is_empty() {
input.parse::<Token![;]>()?;
}
}
Ok(Self {
data,
nrows,
ncols: ncols.unwrap_or(0),
})
}
}
pub struct Vector {
elements: Vec<Expr>,
}
impl Vector {
pub fn to_array_tokens(&self) -> TokenStream2 {
let mut data = TokenStream2::new();
data.append_separated(&self.elements, Punct::new(',', Spacing::Alone));
TokenStream2::from(TokenTree::Group(Group::new(Delimiter::Bracket, data)))
}
pub fn len(&self) -> usize {
self.elements.len()
}
}
impl Parse for Vector {
fn parse(input: ParseStream<'_>) -> syn::Result<Self> {
// The syntax of a vector is just the syntax of a single matrix row
if input.is_empty() {
Ok(Self {
elements: Vec::new(),
})
} else {
let elements = MatrixRowSyntax::parse_terminated(input)?
.into_iter()
.collect();
Ok(Self { elements })
}
}
}
pub fn matrix_impl(stream: TokenStream) -> TokenStream {
let matrix = parse_macro_input!(stream as Matrix);
let row_dim = matrix.nrows();
let col_dim = matrix.ncols();
let array_tokens = matrix.to_col_major_nested_array_tokens();
// TODO: Use quote_spanned instead??
let output = quote! {
nalgebra::SMatrix::<_, #row_dim, #col_dim>
::from_array_storage(nalgebra::ArrayStorage(#array_tokens))
};
proc_macro::TokenStream::from(output)
}
pub fn dmatrix_impl(stream: TokenStream) -> TokenStream {
let matrix = parse_macro_input!(stream as Matrix);
let row_dim = matrix.nrows();
let col_dim = matrix.ncols();
let array_tokens = matrix.to_col_major_flat_array_tokens();
// TODO: Use quote_spanned instead??
let output = quote! {
nalgebra::DMatrix::<_>
::from_vec_storage(nalgebra::VecStorage::new(
nalgebra::Dyn(#row_dim),
nalgebra::Dyn(#col_dim),
vec!#array_tokens))
};
proc_macro::TokenStream::from(output)
}
pub fn vector_impl(stream: TokenStream) -> TokenStream {
let vector = parse_macro_input!(stream as Vector);
let len = vector.len();
let array_tokens = vector.to_array_tokens();
let output = quote! {
nalgebra::SVector::<_, #len>
::from_array_storage(nalgebra::ArrayStorage([#array_tokens]))
};
proc_macro::TokenStream::from(output)
}
pub fn dvector_impl(stream: TokenStream) -> TokenStream {
let vector = parse_macro_input!(stream as Vector);
let len = vector.len();
let array_tokens = vector.to_array_tokens();
let output = quote! {
nalgebra::DVector::<_>
::from_vec_storage(nalgebra::VecStorage::new(
nalgebra::Dyn(#len),
nalgebra::Const::<1>,
vec!#array_tokens))
};
proc_macro::TokenStream::from(output)
}


@ -0,0 +1,302 @@
use crate::Matrix;
use proc_macro2::{Span, TokenStream as TokenStream2};
use quote::{format_ident, quote, quote_spanned};
use syn::spanned::Spanned;
use syn::{Error, Expr, Lit};
#[allow(clippy::too_many_lines)]
pub fn stack_impl(matrix: Matrix) -> syn::Result<TokenStream2> {
// The prefix is used to construct variable names that are extremely unlikely to collide
// with variable names used in e.g. user-supplied expressions. Although we could use a long,
// pseudo-random string, that would make the generated code very painful to parse, so we settle
// for something more semantic that is still very unlikely to collide.
let prefix = "___na";
let n_block_rows = matrix.nrows();
let n_block_cols = matrix.ncols();
let mut output = quote! {};
// First assign data and shape for each matrix entry to variables
// (this is important so that we, for example, don't evaluate an expression more than once)
for i in 0..n_block_rows {
for j in 0..n_block_cols {
let expr = &matrix[(i, j)];
if !is_literal_zero(expr) {
let ident_block = format_ident!("{prefix}_stack_{i}_{j}_block");
let ident_shape = format_ident!("{prefix}_stack_{i}_{j}_shape");
output.extend(std::iter::once(quote_spanned! {expr.span()=>
let ref #ident_block = #expr;
let #ident_shape = #ident_block.shape_generic();
}));
}
}
}
// Determine the number of rows (dimension) in each block row,
// and write out variables that define block row dimensions and offsets into the
// output matrix
for i in 0..n_block_rows {
// The dimension of the block row is the result of trying to unify the row shape of
// all blocks in the block row
let dim = (0 ..n_block_cols)
.filter_map(|j| {
let expr = &matrix[(i, j)];
if !is_literal_zero(expr) {
let mut ident_shape = format_ident!("{prefix}_stack_{i}_{j}_shape");
ident_shape.set_span(ident_shape.span().located_at(expr.span()));
Some(quote_spanned!{expr.span()=> #ident_shape.0 })
} else {
None
}
}).reduce(|a, b| {
let expect_msg = format!("All blocks in block row {i} must have the same number of rows");
quote_spanned!{b.span()=>
<nalgebra::constraint::ShapeConstraint as nalgebra::constraint::SameNumberOfRows<_, _>>::representative(#a, #b)
.expect(#expect_msg)
}
}).ok_or(Error::new(Span::call_site(), format!("Block row {i} cannot consist entirely of implicit zero blocks.")))?;
let dim_ident = format_ident!("{prefix}_stack_row_{i}_dim");
let offset_ident = format_ident!("{prefix}_stack_row_{i}_offset");
let offset = if i == 0 {
quote! { 0 }
} else {
let prev_offset_ident = format_ident!("{prefix}_stack_row_{}_offset", i - 1);
let prev_dim_ident = format_ident!("{prefix}_stack_row_{}_dim", i - 1);
quote! { #prev_offset_ident + <_ as nalgebra::Dim>::value(&#prev_dim_ident) }
};
output.extend(std::iter::once(quote! {
let #dim_ident = #dim;
let #offset_ident = #offset;
}));
}
// Do the same thing for the block columns
for j in 0..n_block_cols {
let dim = (0 ..n_block_rows)
.filter_map(|i| {
let expr = &matrix[(i, j)];
if !is_literal_zero(expr) {
let mut ident_shape = format_ident!("{prefix}_stack_{i}_{j}_shape");
ident_shape.set_span(ident_shape.span().located_at(expr.span()));
Some(quote_spanned!{expr.span()=> #ident_shape.1 })
} else {
None
}
}).reduce(|a, b| {
let expect_msg = format!("All blocks in block column {j} must have the same number of columns");
quote_spanned!{b.span()=>
<nalgebra::constraint::ShapeConstraint as nalgebra::constraint::SameNumberOfColumns<_, _>>::representative(#a, #b)
.expect(#expect_msg)
}
}).ok_or(Error::new(Span::call_site(), format!("Block column {j} cannot consist entirely of implicit zero blocks.")))?;
let dim_ident = format_ident!("{prefix}_stack_col_{j}_dim");
let offset_ident = format_ident!("{prefix}_stack_col_{j}_offset");
let offset = if j == 0 {
quote! { 0 }
} else {
let prev_offset_ident = format_ident!("{prefix}_stack_col_{}_offset", j - 1);
let prev_dim_ident = format_ident!("{prefix}_stack_col_{}_dim", j - 1);
quote! { #prev_offset_ident + <_ as nalgebra::Dim>::value(&#prev_dim_ident) }
};
output.extend(std::iter::once(quote! {
let #dim_ident = #dim;
let #offset_ident = #offset;
}));
}
// Determine number of rows and cols in output matrix,
// by adding together dimensions of all block rows/cols
let num_rows = (0..n_block_rows)
.map(|i| {
let ident = format_ident!("{prefix}_stack_row_{i}_dim");
quote! { #ident }
})
.reduce(|a, b| {
quote! {
<_ as nalgebra::DimAdd<_>>::add(#a, #b)
}
})
.unwrap_or(quote! { nalgebra::dimension::U0 });
let num_cols = (0..n_block_cols)
.map(|j| {
let ident = format_ident!("{prefix}_stack_col_{j}_dim");
quote! { #ident }
})
.reduce(|a, b| {
quote! {
<_ as nalgebra::DimAdd<_>>::add(#a, #b)
}
})
.unwrap_or(quote! { nalgebra::dimension::U0 });
// It should be possible to use `uninitialized_generic` here instead
// however that would mean that the macro needs to generate unsafe code
// which does not seem like a great idea.
output.extend(std::iter::once(quote! {
let mut matrix = nalgebra::Matrix::zeros_generic(#num_rows, #num_cols);
}));
for i in 0..n_block_rows {
for j in 0..n_block_cols {
let row_dim = format_ident!("{prefix}_stack_row_{i}_dim");
let col_dim = format_ident!("{prefix}_stack_col_{j}_dim");
let row_offset = format_ident!("{prefix}_stack_row_{i}_offset");
let col_offset = format_ident!("{prefix}_stack_col_{j}_offset");
let expr = &matrix[(i, j)];
if !is_literal_zero(expr) {
let expr_ident = format_ident!("{prefix}_stack_{i}_{j}_block");
output.extend(std::iter::once(quote! {
let start = (#row_offset, #col_offset);
let shape = (#row_dim, #col_dim);
let input_view = #expr_ident.generic_view((0, 0), shape);
let mut output_view = matrix.generic_view_mut(start, shape);
output_view.copy_from(&input_view);
}));
}
}
}
Ok(quote! {
{
#output
matrix
}
})
}
fn is_literal_zero(expr: &Expr) -> bool {
matches!(expr,
Expr::Lit(syn::ExprLit { lit: Lit::Int(integer_literal), .. })
if integer_literal.base10_digits() == "0")
}
#[cfg(test)]
mod tests {
use crate::stack_impl::stack_impl;
use crate::Matrix;
use quote::quote;
#[test]
fn stack_simple_generation() {
let input: Matrix = syn::parse_quote![
a, 0;
0, b;
];
let result = stack_impl(input).unwrap();
let expected = quote! {{
let ref ___na_stack_0_0_block = a;
let ___na_stack_0_0_shape = ___na_stack_0_0_block.shape_generic();
let ref ___na_stack_1_1_block = b;
let ___na_stack_1_1_shape = ___na_stack_1_1_block.shape_generic();
let ___na_stack_row_0_dim = ___na_stack_0_0_shape.0;
let ___na_stack_row_0_offset = 0;
let ___na_stack_row_1_dim = ___na_stack_1_1_shape.0;
let ___na_stack_row_1_offset = ___na_stack_row_0_offset + <_ as nalgebra::Dim>::value(&___na_stack_row_0_dim);
let ___na_stack_col_0_dim = ___na_stack_0_0_shape.1;
let ___na_stack_col_0_offset = 0;
let ___na_stack_col_1_dim = ___na_stack_1_1_shape.1;
let ___na_stack_col_1_offset = ___na_stack_col_0_offset + <_ as nalgebra::Dim>::value(&___na_stack_col_0_dim);
let mut matrix = nalgebra::Matrix::zeros_generic(
<_ as nalgebra::DimAdd<_>>::add(___na_stack_row_0_dim, ___na_stack_row_1_dim),
<_ as nalgebra::DimAdd<_>>::add(___na_stack_col_0_dim, ___na_stack_col_1_dim)
);
let start = (___na_stack_row_0_offset, ___na_stack_col_0_offset);
let shape = (___na_stack_row_0_dim, ___na_stack_col_0_dim);
let input_view = ___na_stack_0_0_block.generic_view((0,0), shape);
let mut output_view = matrix.generic_view_mut(start, shape);
output_view.copy_from(&input_view);
let start = (___na_stack_row_1_offset, ___na_stack_col_1_offset);
let shape = (___na_stack_row_1_dim, ___na_stack_col_1_dim);
let input_view = ___na_stack_1_1_block.generic_view((0,0), shape);
let mut output_view = matrix.generic_view_mut(start, shape);
output_view.copy_from(&input_view);
matrix
}};
assert_eq!(format!("{result}"), format!("{}", expected));
}
#[test]
fn stack_complex_generation() {
let input: Matrix = syn::parse_quote![
a, 0, b;
0, c, d;
e, 0, 0;
];
let result = stack_impl(input).unwrap();
let expected = quote! {{
let ref ___na_stack_0_0_block = a;
let ___na_stack_0_0_shape = ___na_stack_0_0_block.shape_generic();
let ref ___na_stack_0_2_block = b;
let ___na_stack_0_2_shape = ___na_stack_0_2_block.shape_generic();
let ref ___na_stack_1_1_block = c;
let ___na_stack_1_1_shape = ___na_stack_1_1_block.shape_generic();
let ref ___na_stack_1_2_block = d;
let ___na_stack_1_2_shape = ___na_stack_1_2_block.shape_generic();
let ref ___na_stack_2_0_block = e;
let ___na_stack_2_0_shape = ___na_stack_2_0_block.shape_generic();
let ___na_stack_row_0_dim = < nalgebra :: constraint :: ShapeConstraint as nalgebra :: constraint :: SameNumberOfRows < _ , _ >> :: representative (___na_stack_0_0_shape . 0 , ___na_stack_0_2_shape . 0) . expect ("All blocks in block row 0 must have the same number of rows") ;
let ___na_stack_row_0_offset = 0;
let ___na_stack_row_1_dim = < nalgebra :: constraint :: ShapeConstraint as nalgebra :: constraint :: SameNumberOfRows < _ , _ >> :: representative (___na_stack_1_1_shape . 0 , ___na_stack_1_2_shape . 0) . expect ("All blocks in block row 1 must have the same number of rows") ;
let ___na_stack_row_1_offset = ___na_stack_row_0_offset + <_ as nalgebra::Dim>::value(&___na_stack_row_0_dim);
let ___na_stack_row_2_dim = ___na_stack_2_0_shape.0;
let ___na_stack_row_2_offset = ___na_stack_row_1_offset + <_ as nalgebra::Dim>::value(&___na_stack_row_1_dim);
let ___na_stack_col_0_dim = < nalgebra :: constraint :: ShapeConstraint as nalgebra :: constraint :: SameNumberOfColumns < _ , _ >> :: representative (___na_stack_0_0_shape . 1 , ___na_stack_2_0_shape . 1) . expect ("All blocks in block column 0 must have the same number of columns") ;
let ___na_stack_col_0_offset = 0;
let ___na_stack_col_1_dim = ___na_stack_1_1_shape.1;
let ___na_stack_col_1_offset = ___na_stack_col_0_offset + <_ as nalgebra::Dim>::value(&___na_stack_col_0_dim);
let ___na_stack_col_2_dim = < nalgebra :: constraint :: ShapeConstraint as nalgebra :: constraint :: SameNumberOfColumns < _ , _ >> :: representative (___na_stack_0_2_shape . 1 , ___na_stack_1_2_shape . 1) . expect ("All blocks in block column 2 must have the same number of columns") ;
let ___na_stack_col_2_offset = ___na_stack_col_1_offset + <_ as nalgebra::Dim>::value(&___na_stack_col_1_dim);
let mut matrix = nalgebra::Matrix::zeros_generic(
<_ as nalgebra::DimAdd<_>>::add(
<_ as nalgebra::DimAdd<_>>::add(___na_stack_row_0_dim, ___na_stack_row_1_dim),
___na_stack_row_2_dim
),
<_ as nalgebra::DimAdd<_>>::add(
<_ as nalgebra::DimAdd<_>>::add(___na_stack_col_0_dim, ___na_stack_col_1_dim),
___na_stack_col_2_dim
)
);
let start = (___na_stack_row_0_offset, ___na_stack_col_0_offset);
let shape = (___na_stack_row_0_dim, ___na_stack_col_0_dim);
let input_view = ___na_stack_0_0_block.generic_view((0,0), shape);
let mut output_view = matrix.generic_view_mut(start, shape);
output_view.copy_from(&input_view);
let start = (___na_stack_row_0_offset, ___na_stack_col_2_offset);
let shape = (___na_stack_row_0_dim, ___na_stack_col_2_dim);
let input_view = ___na_stack_0_2_block.generic_view((0,0), shape);
let mut output_view = matrix.generic_view_mut(start, shape);
output_view.copy_from(&input_view);
let start = (___na_stack_row_1_offset, ___na_stack_col_1_offset);
let shape = (___na_stack_row_1_dim, ___na_stack_col_1_dim);
let input_view = ___na_stack_1_1_block.generic_view((0,0), shape);
let mut output_view = matrix.generic_view_mut(start, shape);
output_view.copy_from(&input_view);
let start = (___na_stack_row_1_offset, ___na_stack_col_2_offset);
let shape = (___na_stack_row_1_dim, ___na_stack_col_2_dim);
let input_view = ___na_stack_1_2_block.generic_view((0,0), shape);
let mut output_view = matrix.generic_view_mut(start, shape);
output_view.copy_from(&input_view);
let start = (___na_stack_row_2_offset, ___na_stack_col_0_offset);
let shape = (___na_stack_row_2_dim, ___na_stack_col_0_dim);
let input_view = ___na_stack_2_0_block.generic_view((0,0), shape);
let mut output_view = matrix.generic_view_mut(start, shape);
output_view.copy_from(&input_view);
matrix
}};
assert_eq!(format!("{result}"), format!("{}", expected));
}
}


@ -1,44 +1,44 @@
[package]
name = "nalgebra-sparse"
version = "0.9.0"
authors = [ "Andreas Longva", "Sébastien Crozet <developer@crozet.re>" ]
version = "0.10.0"
authors = ["Andreas Longva", "Sébastien Crozet <developer@crozet.re>"]
edition = "2018"
description = "Sparse matrix computation based on nalgebra."
documentation = "https://www.nalgebra.org/docs"
homepage = "https://nalgebra.org"
repository = "https://github.com/dimforge/nalgebra"
readme = "../README.md"
categories = [ "science", "mathematics", "wasm", "no-std" ]
keywords = [ "linear", "algebra", "matrix", "vector", "math" ]
categories = ["science", "mathematics", "wasm", "no-std"]
keywords = ["linear", "algebra", "matrix", "vector", "math"]
license = "Apache-2.0"
[features]
proptest-support = ["proptest", "nalgebra/proptest-support"]
compare = [ "matrixcompare-core" ]
serde-serialize = [ "serde/std" ]
compare = ["matrixcompare-core"]
serde-serialize = ["serde/std"]
# Enable matrix market I/O
io = [ "pest", "pest_derive" ]
io = ["pest", "pest_derive"]
# Enable to run some tests that take a long time to run
slow-tests = []
[dependencies]
nalgebra = { version="0.32", path = "../" }
nalgebra = { version = "0.33", path = "../" }
num-traits = { version = "0.2", default-features = false }
proptest = { version = "1.0", optional = true }
matrixcompare-core = { version = "0.1.0", optional = true }
pest = { version = "2", optional = true }
pest_derive = { version = "2", optional = true }
serde = { version = "1.0", default-features = false, features = [ "derive" ], optional = true }
pest = { version = "2", optional = true }
pest_derive = { version = "2", optional = true }
serde = { version = "1.0", default-features = false, features = ["derive"], optional = true }
[dev-dependencies]
itertools = "0.10"
matrixcompare = { version = "0.3.0", features = [ "proptest-support" ] }
nalgebra = { version="0.32", path = "../", features = ["compare"] }
itertools = "0.13"
matrixcompare = { version = "0.3.0", features = ["proptest-support"] }
nalgebra = { version = "0.33", path = "../", features = ["compare"] }
tempfile = "3.3"
serde_json = "1.0"
[package.metadata.docs.rs]
# Enable certain features when building docs for docs.rs
features = [ "proptest-support", "compare", "io"]
features = ["proptest-support", "compare", "io"]

nalgebra-sparse/LICENSE Symbolic link

@ -0,0 +1 @@
../LICENSE


@ -3,7 +3,7 @@ use crate::coo::CooMatrix;
use crate::csc::CscMatrix;
use crate::csr::CsrMatrix;
use nalgebra::storage::RawStorage;
use nalgebra::{ClosedAdd, DMatrix, Dim, Matrix, Scalar};
use nalgebra::{ClosedAddAssign, DMatrix, Dim, Matrix, Scalar};
use num_traits::Zero;
impl<'a, T, R, C, S> From<&'a Matrix<T, R, C, S>> for CooMatrix<T>
@ -20,7 +20,7 @@ where
impl<'a, T> From<&'a CooMatrix<T>> for DMatrix<T>
where
T: Scalar + Zero + ClosedAdd,
T: Scalar + Zero + ClosedAddAssign,
{
fn from(coo: &'a CooMatrix<T>) -> Self {
convert_coo_dense(coo)
@ -29,7 +29,7 @@ where
impl<'a, T> From<&'a CooMatrix<T>> for CsrMatrix<T>
where
T: Scalar + Zero + ClosedAdd,
T: Scalar + Zero + ClosedAddAssign,
{
fn from(matrix: &'a CooMatrix<T>) -> Self {
convert_coo_csr(matrix)
@ -38,7 +38,7 @@ where
impl<'a, T> From<&'a CsrMatrix<T>> for CooMatrix<T>
where
T: Scalar + Zero + ClosedAdd,
T: Scalar + Zero + ClosedAddAssign,
{
fn from(matrix: &'a CsrMatrix<T>) -> Self {
convert_csr_coo(matrix)
@ -59,7 +59,7 @@ where
impl<'a, T> From<&'a CsrMatrix<T>> for DMatrix<T>
where
T: Scalar + Zero + ClosedAdd,
T: Scalar + Zero + ClosedAddAssign,
{
fn from(matrix: &'a CsrMatrix<T>) -> Self {
convert_csr_dense(matrix)
@ -68,7 +68,7 @@ where
impl<'a, T> From<&'a CooMatrix<T>> for CscMatrix<T>
where
T: Scalar + Zero + ClosedAdd,
T: Scalar + Zero + ClosedAddAssign,
{
fn from(matrix: &'a CooMatrix<T>) -> Self {
convert_coo_csc(matrix)
@ -98,7 +98,7 @@ where
impl<'a, T> From<&'a CscMatrix<T>> for DMatrix<T>
where
T: Scalar + Zero + ClosedAdd,
T: Scalar + Zero + ClosedAddAssign,
{
fn from(matrix: &'a CscMatrix<T>) -> Self {
convert_csc_dense(matrix)


@ -8,7 +8,7 @@ use std::ops::Add;
use num_traits::Zero;
use nalgebra::storage::RawStorage;
use nalgebra::{ClosedAdd, DMatrix, Dim, Matrix, Scalar};
use nalgebra::{ClosedAddAssign, DMatrix, Dim, Matrix, Scalar};
use crate::coo::CooMatrix;
use crate::cs;
@ -41,7 +41,7 @@ where
/// Converts a [`CooMatrix`] to a dense matrix.
pub fn convert_coo_dense<T>(coo: &CooMatrix<T>) -> DMatrix<T>
where
T: Scalar + Zero + ClosedAdd,
T: Scalar + Zero + ClosedAddAssign,
{
let mut output = DMatrix::repeat(coo.nrows(), coo.ncols(), T::zero());
for (i, j, v) in coo.triplet_iter() {
@ -80,7 +80,7 @@ pub fn convert_csr_coo<T: Scalar>(csr: &CsrMatrix<T>) -> CooMatrix<T> {
/// Converts a [`CsrMatrix`] to a dense matrix.
pub fn convert_csr_dense<T>(csr: &CsrMatrix<T>) -> DMatrix<T>
where
T: Scalar + ClosedAdd + Zero,
T: Scalar + ClosedAddAssign + Zero,
{
let mut output = DMatrix::zeros(csr.nrows(), csr.ncols());
@ -157,7 +157,7 @@ where
/// Converts a [`CscMatrix`] to a dense matrix.
pub fn convert_csc_dense<T>(csc: &CscMatrix<T>) -> DMatrix<T>
where
T: Scalar + ClosedAdd + Zero,
T: Scalar + ClosedAddAssign + Zero,
{
let mut output = DMatrix::zeros(csc.nrows(), csc.ncols());
@ -306,7 +306,7 @@ where
|val| sorted_vals.push(val),
&idx_workspace[..count],
&values_workspace[..count],
&Add::add,
Add::add,
);
let new_col_count = sorted_minor_idx.len() - sorted_ja_current_len;


@ -160,6 +160,25 @@ impl<T> CooMatrix<T> {
}
}
/// Try to construct a COO matrix from the given dimensions and a finite iterator of
/// (i, j, v) triplets.
///
/// Returns an error if any row or column index is out of bounds.
/// Note that the COO format inherently supports duplicate entries, but they are not
/// eagerly summed.
///
/// Implementation note:
/// This delegates to `try_from_triplets`, so each value is scanned twice.
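///
/// # Example
///
/// A minimal sketch (the dimensions and values below are illustrative):
///
/// ```
/// # use nalgebra_sparse::coo::CooMatrix;
/// let triplets = vec![(0, 0, 1.0), (1, 2, 3.0), (2, 4, 5.0)];
/// let coo = CooMatrix::<f64>::try_from_triplets_iter(3, 5, triplets).unwrap();
/// assert_eq!(coo.nnz(), 3);
/// ```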
pub fn try_from_triplets_iter(
nrows: usize,
ncols: usize,
triplets: impl IntoIterator<Item = (usize, usize, T)>,
) -> Result<Self, SparseFormatError> {
let (row_indices, (col_indices, values)) =
triplets.into_iter().map(|(r, c, v)| (r, (c, v))).unzip();
Self::try_from_triplets(nrows, ncols, row_indices, col_indices, values)
}
/// An iterator over triplets (i, j, v).
// TODO: Consider giving the iterator a concrete type instead of impl trait...?
pub fn triplet_iter(&self) -> impl Iterator<Item = (usize, usize, &T)> {


@ -1,4 +1,3 @@
use std::mem::replace;
use std::ops::Range;
use num_traits::One;
@ -226,6 +225,15 @@ impl<T> CsMatrix<T> {
}
}
impl<T> Default for CsMatrix<T> {
fn default() -> Self {
Self {
sparsity_pattern: Default::default(),
values: vec![],
}
}
}
impl<T: Scalar + One> CsMatrix<T> {
#[inline]
pub fn identity(n: usize) -> Self {
@ -360,7 +368,7 @@ where
if let Some(minor_indices) = lane {
let count = minor_indices.len();
let remaining = replace(&mut self.remaining_values, &mut []);
let remaining = std::mem::take(&mut self.remaining_values);
let (values_in_lane, remaining) = remaining.split_at_mut(count);
self.remaining_values = remaining;
self.current_lane_idx += 1;
@ -569,7 +577,7 @@ where
} else if sort {
unreachable!("Internal error: Sorting currently not supported if no values are present.");
}
if major_offsets.len() == 0 {
if major_offsets.is_empty() {
return Err(SparseFormatError::from_kind_and_msg(
SparseFormatErrorKind::InvalidStructure,
"Number of offsets should be greater than 0.",
@ -615,12 +623,12 @@ where
));
}
let minor_idx_in_lane = minor_indices.get(range_start..range_end).ok_or(
let minor_idx_in_lane = minor_indices.get(range_start..range_end).ok_or_else(|| {
SparseFormatError::from_kind_and_msg(
SparseFormatErrorKind::IndexOutOfBounds,
"A major offset is out of bounds.",
),
)?;
)
})?;
// We test for in-bounds, uniqueness and monotonicity at the same time
// to ensure that we only visit each minor index once
@ -653,9 +661,9 @@ where
if !monotonic && sort {
let range_size = range_end - range_start;
minor_index_permutation.resize(range_size, 0);
compute_sort_permutation(&mut minor_index_permutation, &minor_idx_in_lane);
compute_sort_permutation(&mut minor_index_permutation, minor_idx_in_lane);
minor_idx_buffer.clear();
minor_idx_buffer.extend_from_slice(&minor_idx_in_lane);
minor_idx_buffer.extend_from_slice(minor_idx_in_lane);
apply_permutation(
&mut minor_indices[range_start..range_end],
&minor_idx_buffer,


@ -158,7 +158,7 @@ impl<T> CscMatrix<T> {
/// an error is returned to indicate the failure.
///
/// An error is returned if the data given does not conform to the CSC storage format.
/// See the documentation for [CscMatrix](struct.CscMatrix.html) for more information.
/// See the documentation for [`CscMatrix`] for more information.
pub fn try_from_csc_data(
num_rows: usize,
num_cols: usize,
@ -184,7 +184,7 @@ impl<T> CscMatrix<T> {
///
/// An error is returned if the data given does not conform to the CSC storage format
/// with the exception of having unsorted row indices and values.
/// See the documentation for [CscMatrix](struct.CscMatrix.html) for more information.
/// See the documentation for [`CscMatrix`] for more information.
pub fn try_from_unsorted_csc_data(
num_rows: usize,
num_cols: usize,
@ -574,6 +574,14 @@ impl<T> CscMatrix<T> {
}
}
impl<T> Default for CscMatrix<T> {
fn default() -> Self {
Self {
cs: Default::default(),
}
}
}
/// Convert pattern format errors into more meaningful CSC-specific errors.
///
/// This ensures that the terminology is consistent: we are talking about rows and columns,
@ -617,6 +625,15 @@ pub struct CscTripletIter<'a, T> {
values_iter: Iter<'a, T>,
}
impl<'a, T> Clone for CscTripletIter<'a, T> {
fn clone(&self) -> Self {
CscTripletIter {
pattern_iter: self.pattern_iter.clone(),
values_iter: self.values_iter.clone(),
}
}
}
impl<'a, T: Clone> CscTripletIter<'a, T> {
/// Adapts the triplet iterator to return owned values.
///
@ -748,7 +765,7 @@ impl<'a, T> CscColMut<'a, T> {
}
}
/// Column iterator for [CscMatrix](struct.CscMatrix.html).
/// Column iterator for [`CscMatrix`].
pub struct CscColIter<'a, T> {
lane_iter: CsLaneIter<'a, T>,
}
@ -761,7 +778,7 @@ impl<'a, T> Iterator for CscColIter<'a, T> {
}
}
/// Mutable column iterator for [CscMatrix](struct.CscMatrix.html).
/// Mutable column iterator for [`CscMatrix`].
pub struct CscColIterMut<'a, T> {
lane_iter: CsLaneIterMut<'a, T>,
}


@ -159,7 +159,7 @@ impl<T> CsrMatrix<T> {
/// an error is returned to indicate the failure.
///
/// An error is returned if the data given does not conform to the CSR storage format.
/// See the documentation for [CsrMatrix](struct.CsrMatrix.html) for more information.
/// See the documentation for [`CsrMatrix`] for more information.
pub fn try_from_csr_data(
num_rows: usize,
num_cols: usize,
@ -185,7 +185,7 @@ impl<T> CsrMatrix<T> {
///
/// An error is returned if the data given does not conform to the CSR storage format
/// with the exception of having unsorted column indices and values.
/// See the documentation for [CsrMatrix](struct.CsrMatrix.html) for more information.
/// See the documentation for [`CsrMatrix`] for more information.
pub fn try_from_unsorted_csr_data(
num_rows: usize,
num_cols: usize,
@ -575,6 +575,14 @@ impl<T> CsrMatrix<T> {
}
}
impl<T> Default for CsrMatrix<T> {
fn default() -> Self {
Self {
cs: Default::default(),
}
}
}
/// Convert pattern format errors into more meaningful CSR-specific errors.
///
/// This ensures that the terminology is consistent: we are talking about rows and columns,
@ -618,6 +626,15 @@ pub struct CsrTripletIter<'a, T> {
values_iter: Iter<'a, T>,
}
impl<'a, T> Clone for CsrTripletIter<'a, T> {
fn clone(&self) -> Self {
CsrTripletIter {
pattern_iter: self.pattern_iter.clone(),
values_iter: self.values_iter.clone(),
}
}
}
impl<'a, T: Clone> CsrTripletIter<'a, T> {
/// Adapts the triplet iterator to return owned values.
///
@ -753,7 +770,7 @@ impl<'a, T> CsrRowMut<'a, T> {
}
}
/// Row iterator for [CsrMatrix](struct.CsrMatrix.html).
/// Row iterator for [`CsrMatrix`].
pub struct CsrRowIter<'a, T> {
lane_iter: CsLaneIter<'a, T>,
}
@ -766,7 +783,7 @@ impl<'a, T> Iterator for CsrRowIter<'a, T> {
}
}
/// Mutable row iterator for [CsrMatrix](struct.CsrMatrix.html).
/// Mutable row iterator for [`CsrMatrix`].
pub struct CsrRowIterMut<'a, T> {
lane_iter: CsLaneIterMut<'a, T>,
}


@ -199,7 +199,7 @@ impl SparseFormatError {
}
}
/// The type of format error described by a [SparseFormatError](struct.SparseFormatError.html).
/// The type of format error described by a [`SparseFormatError`].
#[non_exhaustive]
#[derive(Debug, Copy, Clone, PartialEq, Eq)]
pub enum SparseFormatErrorKind {


@ -10,8 +10,8 @@ use nalgebra::allocator::Allocator;
use nalgebra::base::storage::RawStorage;
use nalgebra::constraint::{DimEq, ShapeConstraint};
use nalgebra::{
ClosedAdd, ClosedDiv, ClosedMul, ClosedSub, DefaultAllocator, Dim, Dyn, Matrix, OMatrix,
Scalar, U1,
ClosedAddAssign, ClosedDivAssign, ClosedMulAssign, ClosedSubAssign, DefaultAllocator, Dim, Dyn,
Matrix, OMatrix, Scalar, U1,
};
use num_traits::{One, Zero};
use std::ops::{Add, Div, DivAssign, Mul, MulAssign, Neg, Sub};
@ -28,7 +28,7 @@ macro_rules! impl_bin_op {
// Note: The Neg bound is currently required because we delegate e.g.
// Sub to SpAdd with negative coefficients. This is not well-defined for
// unsigned data types.
$($scalar_type: $($bounds + )? Scalar + ClosedAdd + ClosedSub + ClosedMul + Zero + One + Neg<Output=T>)?
$($scalar_type: $($bounds + )? Scalar + ClosedAddAssign + ClosedSubAssign + ClosedMulAssign + Zero + One + Neg<Output=T>)?
{
type Output = $ret;
fn $method(self, $b: $b_type) -> Self::Output {
@ -59,7 +59,7 @@ macro_rules! impl_sp_plus_minus {
let mut result = $matrix_type::try_from_pattern_and_values(pattern, values)
.unwrap();
$spadd_fn(T::zero(), &mut result, T::one(), Op::NoOp(&a)).unwrap();
$spadd_fn(T::one(), &mut result, $factor * T::one(), Op::NoOp(&b)).unwrap();
$spadd_fn(T::one(), &mut result, $factor, Op::NoOp(&b)).unwrap();
result
});
@ -164,7 +164,7 @@ macro_rules! impl_scalar_mul {
impl<T> MulAssign<T> for $matrix_type<T>
where
T: Scalar + ClosedAdd + ClosedMul + Zero + One
T: Scalar + ClosedAddAssign + ClosedMulAssign + Zero + One
{
fn mul_assign(&mut self, scalar: T) {
for val in self.values_mut() {
@ -175,7 +175,7 @@ macro_rules! impl_scalar_mul {
impl<'a, T> MulAssign<&'a T> for $matrix_type<T>
where
T: Scalar + ClosedAdd + ClosedMul + Zero + One
T: Scalar + ClosedAddAssign + ClosedMulAssign + Zero + One
{
fn mul_assign(&mut self, scalar: &'a T) {
for val in self.values_mut() {
@ -227,15 +227,15 @@ impl_neg!(CscMatrix);
macro_rules! impl_div {
($matrix_type:ident) => {
impl_bin_op!(Div, div, <T: ClosedDiv>(matrix: $matrix_type<T>, scalar: T) -> $matrix_type<T> {
impl_bin_op!(Div, div, <T: ClosedDivAssign>(matrix: $matrix_type<T>, scalar: T) -> $matrix_type<T> {
let mut matrix = matrix;
matrix /= scalar;
matrix
});
impl_bin_op!(Div, div, <'a, T: ClosedDiv>(matrix: $matrix_type<T>, scalar: &T) -> $matrix_type<T> {
impl_bin_op!(Div, div, <'a, T: ClosedDivAssign>(matrix: $matrix_type<T>, scalar: &T) -> $matrix_type<T> {
matrix / scalar.clone()
});
impl_bin_op!(Div, div, <'a, T: ClosedDiv>(matrix: &'a $matrix_type<T>, scalar: T) -> $matrix_type<T> {
impl_bin_op!(Div, div, <'a, T: ClosedDivAssign>(matrix: &'a $matrix_type<T>, scalar: T) -> $matrix_type<T> {
let new_values = matrix.values()
.iter()
.map(|v_i| v_i.clone() / scalar.clone())
@ -243,12 +243,12 @@ macro_rules! impl_div {
$matrix_type::try_from_pattern_and_values(matrix.pattern().clone(), new_values)
.unwrap()
});
impl_bin_op!(Div, div, <'a, T: ClosedDiv>(matrix: &'a $matrix_type<T>, scalar: &'a T) -> $matrix_type<T> {
impl_bin_op!(Div, div, <'a, T: ClosedDivAssign>(matrix: &'a $matrix_type<T>, scalar: &'a T) -> $matrix_type<T> {
matrix / scalar.clone()
});
impl<T> DivAssign<T> for $matrix_type<T>
where T : Scalar + ClosedAdd + ClosedMul + ClosedDiv + Zero + One
where T : Scalar + ClosedAddAssign + ClosedMulAssign + ClosedDivAssign + Zero + One
{
fn div_assign(&mut self, scalar: T) {
self.values_mut().iter_mut().for_each(|v_i| *v_i /= scalar.clone());
@ -256,7 +256,7 @@ macro_rules! impl_div {
}
impl<'a, T> DivAssign<&'a T> for $matrix_type<T>
where T : Scalar + ClosedAdd + ClosedMul + ClosedDiv + Zero + One
where T : Scalar + ClosedAddAssign + ClosedMulAssign + ClosedDivAssign + Zero + One
{
fn div_assign(&mut self, scalar: &'a T) {
*self /= scalar.clone();
@ -298,17 +298,17 @@ macro_rules! impl_spmm_cs_dense {
{
impl<'a, T, R, C, S> Mul<$dense_matrix_type> for $sparse_matrix_type
where
T: Scalar + ClosedMul + ClosedAdd + ClosedSub + ClosedDiv + Neg + Zero + One,
T: Scalar + ClosedMulAssign + ClosedAddAssign + ClosedSubAssign + ClosedDivAssign + Neg + Zero + One,
R: Dim,
C: Dim,
S: RawStorage<T, R, C>,
DefaultAllocator: Allocator<T, Dyn, C>,
DefaultAllocator: Allocator<Dyn, C>,
// TODO: Is it possible to simplify these bounds?
ShapeConstraint:
// Bounds so that we can turn OMatrix<T, Dyn, C> into a DMatrixSliceMut
DimEq<U1, <<DefaultAllocator as Allocator<T, Dyn, C>>::Buffer as RawStorage<T, Dyn, C>>::RStride>
DimEq<U1, <<DefaultAllocator as Allocator<Dyn, C>>::Buffer<T> as RawStorage<T, Dyn, C>>::RStride>
+ DimEq<C, Dyn>
+ DimEq<Dyn, <<DefaultAllocator as Allocator<T, Dyn, C>>::Buffer as RawStorage<T, Dyn, C>>::CStride>
+ DimEq<Dyn, <<DefaultAllocator as Allocator<Dyn, C>>::Buffer<T> as RawStorage<T, Dyn, C>>::CStride>
// Bounds so that we can turn &Matrix<T, R, C, S> into a DMatrixSlice
+ DimEq<U1, S::RStride>
+ DimEq<R, Dyn>


@ -149,8 +149,8 @@ impl<T> Op<T> {
#[must_use]
pub fn as_ref(&self) -> Op<&T> {
match self {
Op::NoOp(obj) => Op::NoOp(&obj),
Op::Transpose(obj) => Op::Transpose(&obj),
Op::NoOp(obj) => Op::NoOp(obj),
Op::Transpose(obj) => Op::Transpose(obj),
}
}


@ -2,7 +2,7 @@ use crate::cs::CsMatrix;
use crate::ops::serial::{OperationError, OperationErrorKind};
use crate::ops::Op;
use crate::SparseEntryMut;
use nalgebra::{ClosedAdd, ClosedMul, DMatrixView, DMatrixViewMut, Scalar};
use nalgebra::{ClosedAddAssign, ClosedMulAssign, DMatrixView, DMatrixViewMut, Scalar};
use num_traits::{One, Zero};
fn spmm_cs_unexpected_entry() -> OperationError {
@ -28,7 +28,7 @@ pub fn spmm_cs_prealloc_unchecked<T>(
b: &CsMatrix<T>,
) -> Result<(), OperationError>
where
T: Scalar + ClosedAdd + ClosedMul + Zero + One,
T: Scalar + ClosedAddAssign + ClosedMulAssign + Zero + One,
{
assert_eq!(c.pattern().major_dim(), a.pattern().major_dim());
assert_eq!(c.pattern().minor_dim(), b.pattern().minor_dim());
@ -73,7 +73,7 @@ pub fn spmm_cs_prealloc<T>(
b: &CsMatrix<T>,
) -> Result<(), OperationError>
where
T: Scalar + ClosedAdd + ClosedMul + Zero + One,
T: Scalar + ClosedAddAssign + ClosedMulAssign + Zero + One,
{
for i in 0..c.pattern().major_dim() {
let a_lane_i = a.get_lane(i).unwrap();
@ -119,7 +119,7 @@ pub fn spadd_cs_prealloc<T>(
a: Op<&CsMatrix<T>>,
) -> Result<(), OperationError>
where
T: Scalar + ClosedAdd + ClosedMul + Zero + One,
T: Scalar + ClosedAddAssign + ClosedMulAssign + Zero + One,
{
match a {
Op::NoOp(a) => {
@ -181,7 +181,7 @@ pub fn spmm_cs_dense<T>(
a: Op<&CsMatrix<T>>,
b: Op<DMatrixView<'_, T>>,
) where
T: Scalar + ClosedAdd + ClosedMul + Zero + One,
T: Scalar + ClosedAddAssign + ClosedMulAssign + Zero + One,
{
match a {
Op::NoOp(a) => {


@ -4,7 +4,7 @@ use crate::ops::serial::cs::{
};
use crate::ops::serial::{OperationError, OperationErrorKind};
use crate::ops::Op;
use nalgebra::{ClosedAdd, ClosedMul, DMatrixView, DMatrixViewMut, RealField, Scalar};
use nalgebra::{ClosedAddAssign, ClosedMulAssign, DMatrixView, DMatrixViewMut, RealField, Scalar};
use num_traits::{One, Zero};
use std::borrow::Cow;
@ -21,7 +21,7 @@ pub fn spmm_csc_dense<'a, T>(
a: Op<&CscMatrix<T>>,
b: Op<impl Into<DMatrixView<'a, T>>>,
) where
T: Scalar + ClosedAdd + ClosedMul + Zero + One,
T: Scalar + ClosedAddAssign + ClosedMulAssign + Zero + One,
{
let b = b.convert();
spmm_csc_dense_(beta, c.into(), alpha, a, b)
@ -34,7 +34,7 @@ fn spmm_csc_dense_<T>(
a: Op<&CscMatrix<T>>,
b: Op<DMatrixView<'_, T>>,
) where
T: Scalar + ClosedAdd + ClosedMul + Zero + One,
T: Scalar + ClosedAddAssign + ClosedMulAssign + Zero + One,
{
assert_compatible_spmm_dims!(c, a, b);
// Need to interpret matrix as transposed since the spmm_cs_dense function assumes CSR layout
@ -57,7 +57,7 @@ pub fn spadd_csc_prealloc<T>(
a: Op<&CscMatrix<T>>,
) -> Result<(), OperationError>
where
T: Scalar + ClosedAdd + ClosedMul + Zero + One,
T: Scalar + ClosedAddAssign + ClosedMulAssign + Zero + One,
{
assert_compatible_spadd_dims!(c, a);
spadd_cs_prealloc(beta, &mut c.cs, alpha, a.map_same_op(|a| &a.cs))
@ -81,14 +81,14 @@ pub fn spmm_csc_prealloc<T>(
b: Op<&CscMatrix<T>>,
) -> Result<(), OperationError>
where
T: Scalar + ClosedAdd + ClosedMul + Zero + One,
T: Scalar + ClosedAddAssign + ClosedMulAssign + Zero + One,
{
assert_compatible_spmm_dims!(c, a, b);
use Op::NoOp;
match (&a, &b) {
(NoOp(ref a), NoOp(ref b)) => {
(NoOp(a), NoOp(b)) => {
// Note: We have to reverse the order for CSC matrices
spmm_cs_prealloc(beta, &mut c.cs, alpha, &b.cs, &a.cs)
}
@ -109,14 +109,14 @@ pub fn spmm_csc_prealloc_unchecked<T>(
b: Op<&CscMatrix<T>>,
) -> Result<(), OperationError>
where
T: Scalar + ClosedAdd + ClosedMul + Zero + One,
T: Scalar + ClosedAddAssign + ClosedMulAssign + Zero + One,
{
assert_compatible_spmm_dims!(c, a, b);
use Op::NoOp;
match (&a, &b) {
(NoOp(ref a), NoOp(ref b)) => {
(NoOp(a), NoOp(b)) => {
// Note: We have to reverse the order for CSC matrices
spmm_cs_prealloc_unchecked(beta, &mut c.cs, alpha, &b.cs, &a.cs)
}
@ -133,7 +133,7 @@ fn spmm_csc_transposed<T, F>(
spmm_kernel: F,
) -> Result<(), OperationError>
where
T: Scalar + ClosedAdd + ClosedMul + Zero + One,
T: Scalar + ClosedAddAssign + ClosedMulAssign + Zero + One,
F: Fn(
T,
&mut CscMatrix<T>,
@ -152,9 +152,9 @@ where
use Cow::*;
match (&a, &b) {
(NoOp(_), NoOp(_)) => unreachable!(),
(Transpose(ref a), NoOp(_)) => (Owned(a.transpose()), Borrowed(b_ref)),
(NoOp(_), Transpose(ref b)) => (Borrowed(a_ref), Owned(b.transpose())),
(Transpose(ref a), Transpose(ref b)) => (Owned(a.transpose()), Owned(b.transpose())),
(Transpose(a), NoOp(_)) => (Owned(a.transpose()), Borrowed(b_ref)),
(NoOp(_), Transpose(b)) => (Borrowed(a_ref), Owned(b.transpose())),
(Transpose(a), Transpose(b)) => (Owned(a.transpose()), Owned(b.transpose())),
}
};
spmm_kernel(beta, c, alpha, NoOp(a.as_ref()), NoOp(b.as_ref()))


@ -4,7 +4,7 @@ use crate::ops::serial::cs::{
};
use crate::ops::serial::OperationError;
use crate::ops::Op;
use nalgebra::{ClosedAdd, ClosedMul, DMatrixView, DMatrixViewMut, Scalar};
use nalgebra::{ClosedAddAssign, ClosedMulAssign, DMatrixView, DMatrixViewMut, Scalar};
use num_traits::{One, Zero};
use std::borrow::Cow;
@ -16,7 +16,7 @@ pub fn spmm_csr_dense<'a, T>(
a: Op<&CsrMatrix<T>>,
b: Op<impl Into<DMatrixView<'a, T>>>,
) where
T: Scalar + ClosedAdd + ClosedMul + Zero + One,
T: Scalar + ClosedAddAssign + ClosedMulAssign + Zero + One,
{
let b = b.convert();
spmm_csr_dense_(beta, c.into(), alpha, a, b)
@ -29,7 +29,7 @@ fn spmm_csr_dense_<T>(
a: Op<&CsrMatrix<T>>,
b: Op<DMatrixView<'_, T>>,
) where
T: Scalar + ClosedAdd + ClosedMul + Zero + One,
T: Scalar + ClosedAddAssign + ClosedMulAssign + Zero + One,
{
assert_compatible_spmm_dims!(c, a, b);
spmm_cs_dense(beta, c, alpha, a.map_same_op(|a| &a.cs), b)
@ -52,7 +52,7 @@ pub fn spadd_csr_prealloc<T>(
a: Op<&CsrMatrix<T>>,
) -> Result<(), OperationError>
where
T: Scalar + ClosedAdd + ClosedMul + Zero + One,
T: Scalar + ClosedAddAssign + ClosedMulAssign + Zero + One,
{
assert_compatible_spadd_dims!(c, a);
spadd_cs_prealloc(beta, &mut c.cs, alpha, a.map_same_op(|a| &a.cs))
@ -75,14 +75,14 @@ pub fn spmm_csr_prealloc<T>(
b: Op<&CsrMatrix<T>>,
) -> Result<(), OperationError>
where
T: Scalar + ClosedAdd + ClosedMul + Zero + One,
T: Scalar + ClosedAddAssign + ClosedMulAssign + Zero + One,
{
assert_compatible_spmm_dims!(c, a, b);
use Op::NoOp;
match (&a, &b) {
(NoOp(ref a), NoOp(ref b)) => spmm_cs_prealloc(beta, &mut c.cs, alpha, &a.cs, &b.cs),
(NoOp(a), NoOp(b)) => spmm_cs_prealloc(beta, &mut c.cs, alpha, &a.cs, &b.cs),
_ => spmm_csr_transposed(beta, c, alpha, a, b, spmm_csr_prealloc),
}
}
@ -100,16 +100,14 @@ pub fn spmm_csr_prealloc_unchecked<T>(
b: Op<&CsrMatrix<T>>,
) -> Result<(), OperationError>
where
T: Scalar + ClosedAdd + ClosedMul + Zero + One,
T: Scalar + ClosedAddAssign + ClosedMulAssign + Zero + One,
{
assert_compatible_spmm_dims!(c, a, b);
use Op::NoOp;
match (&a, &b) {
(NoOp(ref a), NoOp(ref b)) => {
spmm_cs_prealloc_unchecked(beta, &mut c.cs, alpha, &a.cs, &b.cs)
}
(NoOp(a), NoOp(b)) => spmm_cs_prealloc_unchecked(beta, &mut c.cs, alpha, &a.cs, &b.cs),
_ => spmm_csr_transposed(beta, c, alpha, a, b, spmm_csr_prealloc_unchecked),
}
}
@ -123,7 +121,7 @@ fn spmm_csr_transposed<T, F>(
spmm_kernel: F,
) -> Result<(), OperationError>
where
T: Scalar + ClosedAdd + ClosedMul + Zero + One,
T: Scalar + ClosedAddAssign + ClosedMulAssign + Zero + One,
F: Fn(
T,
&mut CsrMatrix<T>,
@ -142,9 +140,9 @@ where
use Cow::*;
match (&a, &b) {
(NoOp(_), NoOp(_)) => unreachable!(),
(Transpose(ref a), NoOp(_)) => (Owned(a.transpose()), Borrowed(b_ref)),
(NoOp(_), Transpose(ref b)) => (Borrowed(a_ref), Owned(b.transpose())),
(Transpose(ref a), Transpose(ref b)) => (Owned(a.transpose()), Owned(b.transpose())),
(Transpose(a), NoOp(_)) => (Owned(a.transpose()), Borrowed(b_ref)),
(NoOp(_), Transpose(b)) => (Borrowed(a_ref), Owned(b.transpose())),
(Transpose(a), Transpose(b)) => (Owned(a.transpose()), Owned(b.transpose())),
}
};
spmm_kernel(beta, c, alpha, NoOp(a.as_ref()), NoOp(b.as_ref()))


@ -125,18 +125,22 @@ fn iterate_union<'a>(
) -> impl Iterator<Item = usize> + 'a {
iter::from_fn(move || {
if let (Some(a_item), Some(b_item)) = (sorted_a.first(), sorted_b.first()) {
let item = if a_item < b_item {
sorted_a = &sorted_a[1..];
a_item
} else if b_item < a_item {
sorted_b = &sorted_b[1..];
b_item
} else {
// Both lists contain the same element, advance both slices to avoid
// duplicate entries in the result
sorted_a = &sorted_a[1..];
sorted_b = &sorted_b[1..];
a_item
let item = match a_item.cmp(b_item) {
std::cmp::Ordering::Less => {
sorted_a = &sorted_a[1..];
a_item
}
std::cmp::Ordering::Greater => {
sorted_b = &sorted_b[1..];
b_item
}
std::cmp::Ordering::Equal => {
// Both lists contain the same element, advance both slices to avoid
// duplicate entries in the result
sorted_a = &sorted_a[1..];
sorted_b = &sorted_b[1..];
a_item
}
};
Some(*item)
} else if let Some(a_item) = sorted_a.first() {


@ -80,7 +80,7 @@ impl SparsityPattern {
#[inline]
#[must_use]
pub fn major_dim(&self) -> usize {
assert!(self.major_offsets.len() > 0);
assert!(!self.major_offsets.is_empty());
self.major_offsets.len() - 1
}
@ -162,7 +162,7 @@ impl SparsityPattern {
// We test for in-bounds, uniqueness and monotonicity at the same time
// to ensure that we only visit each minor index once
let mut iter = minor_indices.iter();
let mut prev = None;
let mut prev: Option<usize> = None;
while let Some(next) = iter.next().copied() {
if next >= minor_dim {
@ -170,10 +170,10 @@ impl SparsityPattern {
}
if let Some(prev) = prev {
if prev > next {
return Err(NonmonotonicMinorIndices);
} else if prev == next {
return Err(DuplicateEntry);
match prev.cmp(&next) {
std::cmp::Ordering::Greater => return Err(NonmonotonicMinorIndices),
std::cmp::Ordering::Equal => return Err(DuplicateEntry),
std::cmp::Ordering::Less => {}
}
}
prev = Some(next);
@ -195,6 +195,14 @@ impl SparsityPattern {
///
/// Panics if the number of major offsets is not exactly one greater than the major dimension
/// or if major offsets do not start with 0 and end with the number of minor indices.
///
/// # Safety
///
/// Assumes that the major offsets and minor indices adhere to the requirements of a valid
/// sparsity pattern.
/// Specifically, the major offsets must be monotonically increasing, with
/// `major_offsets[i]..major_offsets[i+1]` referring to a major lane in the sparsity pattern,
/// and `minor_indices[major_offsets[i]..major_offsets[i+1]]` must be monotonically increasing.
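///
/// For example (purely illustrative values), `major_offsets = [0, 2, 3]` together with
/// `minor_indices = [0, 2, 1]` describes two major lanes whose minor indices are `[0, 2]`
/// and `[1]`, each sorted in increasing order.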
pub unsafe fn from_offset_and_indices_unchecked(
major_dim: usize,
minor_dim: usize,
@ -291,6 +299,16 @@ impl SparsityPattern {
}
}
impl Default for SparsityPattern {
fn default() -> Self {
Self {
major_offsets: vec![0],
minor_indices: vec![],
minor_dim: 0,
}
}
}
/// Error type for `SparsityPattern` format errors.
#[non_exhaustive]
#[derive(Copy, Clone, Debug, PartialEq, Eq)]


@ -184,6 +184,31 @@ fn coo_try_from_triplets_reports_out_of_bounds_indices() {
}
}
#[test]
fn coo_try_from_triplets_iter() {
// Check that try_from_triplets_iter reports out-of-bounds indices and accepts duplicate entries
macro_rules! assert_errs {
($result:expr) => {
assert!(matches!(
$result.unwrap_err().kind(),
SparseFormatErrorKind::IndexOutOfBounds
))
};
}
assert_errs!(CooMatrix::<f32>::try_from_triplets_iter(
3,
5,
vec![(0, 6, 3.0)].into_iter(),
));
assert!(CooMatrix::<f32>::try_from_triplets_iter(
3,
5,
vec![(0, 3, 3.0), (1, 2, 2.0), (0, 3, 1.0),].into_iter(),
)
.is_ok());
}
#[test]
fn coo_try_from_triplets_panics_on_mismatched_vectors() {
// Check that try_from_triplets panics when the triplet vectors have different lengths


@ -12,6 +12,18 @@ use crate::common::csc_strategy;
use std::collections::HashSet;
#[test]
fn csc_matrix_default() {
let matrix: CscMatrix<f32> = CscMatrix::default();
assert_eq!(matrix.nrows(), 0);
assert_eq!(matrix.ncols(), 0);
assert_eq!(matrix.nnz(), 0);
assert_eq!(matrix.values(), &[]);
assert!(matrix.get_entry(0, 0).is_none());
}
#[test]
fn csc_matrix_valid_data() {
// Construct matrix from valid data and check that selected methods return results


@ -12,6 +12,18 @@ use crate::common::csr_strategy;
use std::collections::HashSet;
#[test]
fn csr_matrix_default() {
let matrix: CsrMatrix<f32> = CsrMatrix::default();
assert_eq!(matrix.nrows(), 0);
assert_eq!(matrix.ncols(), 0);
assert_eq!(matrix.nnz(), 0);
assert_eq!(matrix.values(), &[]);
assert!(matrix.get_entry(0, 0).is_none());
}
#[test]
fn csr_matrix_valid_data() {
// Construct matrix from valid data and check that selected methods return results


@ -1,5 +1,19 @@
use nalgebra_sparse::pattern::{SparsityPattern, SparsityPatternFormatError};
#[test]
fn sparsity_pattern_default() {
// Check that the pattern created with `Default::default()` is equivalent to a zero-sized pattern.
let pattern = SparsityPattern::default();
let zero = SparsityPattern::zeros(0, 0);
assert_eq!(pattern.major_dim(), zero.major_dim());
assert_eq!(pattern.minor_dim(), zero.minor_dim());
assert_eq!(pattern.major_offsets(), zero.major_offsets());
assert_eq!(pattern.minor_indices(), zero.minor_indices());
assert_eq!(pattern.nnz(), 0);
}
#[test]
fn sparsity_pattern_valid_data() {
// Construct pattern from valid data and check that selected methods return results


@ -10,285 +10,429 @@ use crate::base::{Const, Matrix};
*
*/
// NOTE: we can't provide defaults for the strides because it's not supported yet by min_const_generics.
/// A column-major matrix view with dimensions known at compile-time.
/// An immutable column-major matrix view with dimensions known at compile-time.
///
/// See [`SMatrixViewMut`] for a mutable version of this type.
///
/// **Because this is an alias, not all its methods are listed here. See the [`Matrix`](crate::base::Matrix) type too.**
pub type SMatrixView<'a, T, const R: usize, const C: usize> =
Matrix<T, Const<R>, Const<C>, ViewStorage<'a, T, Const<R>, Const<C>, Const<1>, Const<R>>>;
/// A column-major matrix view dynamic numbers of rows and columns.
/// An immutable column-major matrix view with dynamic numbers of rows and columns.
///
/// See [`DMatrixViewMut`] for a mutable version of this type.
///
/// **Because this is an alias, not all its methods are listed here. See the [`Matrix`](crate::base::Matrix) type too.**
pub type DMatrixView<'a, T, RStride = U1, CStride = Dyn> =
Matrix<T, Dyn, Dyn, ViewStorage<'a, T, Dyn, Dyn, RStride, CStride>>;
/// A column-major 1x1 matrix view.
/// An immutable column-major 1x1 matrix view.
///
/// See [`MatrixViewMut1`] for a mutable version of this type.
///
/// **Because this is an alias, not all its methods are listed here. See the [`Matrix`](crate::base::Matrix) type too.**
pub type MatrixView1<'a, T, RStride = U1, CStride = U1> =
Matrix<T, U1, U1, ViewStorage<'a, T, U1, U1, RStride, CStride>>;
/// A column-major 2x2 matrix view.
/// An immutable column-major 2x2 matrix view.
///
/// See [`MatrixViewMut2`] for a mutable version of this type.
///
/// **Because this is an alias, not all its methods are listed here. See the [`Matrix`](crate::base::Matrix) type too.**
pub type MatrixView2<'a, T, RStride = U1, CStride = U2> =
Matrix<T, U2, U2, ViewStorage<'a, T, U2, U2, RStride, CStride>>;
/// A column-major 3x3 matrix view.
/// An immutable column-major 3x3 matrix view.
///
/// See [`MatrixViewMut3`] for a mutable version of this type.
///
/// **Because this is an alias, not all its methods are listed here. See the [`Matrix`](crate::base::Matrix) type too.**
pub type MatrixView3<'a, T, RStride = U1, CStride = U3> =
Matrix<T, U3, U3, ViewStorage<'a, T, U3, U3, RStride, CStride>>;
/// A column-major 4x4 matrix view.
/// An immutable column-major 4x4 matrix view.
///
/// See [`MatrixViewMut4`] for a mutable version of this type.
///
/// **Because this is an alias, not all its methods are listed here. See the [`Matrix`](crate::base::Matrix) type too.**
pub type MatrixView4<'a, T, RStride = U1, CStride = U4> =
Matrix<T, U4, U4, ViewStorage<'a, T, U4, U4, RStride, CStride>>;
/// A column-major 5x5 matrix view.
/// An immutable column-major 5x5 matrix view.
///
/// See [`MatrixViewMut5`] for a mutable version of this type.
///
/// **Because this is an alias, not all its methods are listed here. See the [`Matrix`](crate::base::Matrix) type too.**
pub type MatrixView5<'a, T, RStride = U1, CStride = U5> =
Matrix<T, U5, U5, ViewStorage<'a, T, U5, U5, RStride, CStride>>;
/// A column-major 6x6 matrix view.
/// An immutable column-major 6x6 matrix view.
///
/// See [`MatrixViewMut6`] for a mutable version of this type.
///
/// **Because this is an alias, not all its methods are listed here. See the [`Matrix`](crate::base::Matrix) type too.**
pub type MatrixView6<'a, T, RStride = U1, CStride = U6> =
Matrix<T, U6, U6, ViewStorage<'a, T, U6, U6, RStride, CStride>>;
/// A column-major 1x2 matrix view.
/// An immutable column-major 1x2 matrix view.
///
/// See [`MatrixViewMut1x2`] for a mutable version of this type.
///
/// **Because this is an alias, not all its methods are listed here. See the [`Matrix`](crate::base::Matrix) type too.**
pub type MatrixView1x2<'a, T, RStride = U1, CStride = U1> =
Matrix<T, U1, U2, ViewStorage<'a, T, U1, U2, RStride, CStride>>;
/// A column-major 1x3 matrix view.
/// An immutable column-major 1x3 matrix view.
///
/// See [`MatrixViewMut1x3`] for a mutable version of this type.
///
/// **Because this is an alias, not all its methods are listed here. See the [`Matrix`](crate::base::Matrix) type too.**
pub type MatrixView1x3<'a, T, RStride = U1, CStride = U1> =
Matrix<T, U1, U3, ViewStorage<'a, T, U1, U3, RStride, CStride>>;
/// A column-major 1x4 matrix view.
/// An immutable column-major 1x4 matrix view.
///
/// See [`MatrixViewMut1x4`] for a mutable version of this type.
///
/// **Because this is an alias, not all its methods are listed here. See the [`Matrix`](crate::base::Matrix) type too.**
pub type MatrixView1x4<'a, T, RStride = U1, CStride = U1> =
Matrix<T, U1, U4, ViewStorage<'a, T, U1, U4, RStride, CStride>>;
/// A column-major 1x5 matrix view.
/// An immutable column-major 1x5 matrix view.
///
/// See [`MatrixViewMut1x5`] for a mutable version of this type.
///
/// **Because this is an alias, not all its methods are listed here. See the [`Matrix`](crate::base::Matrix) type too.**
pub type MatrixView1x5<'a, T, RStride = U1, CStride = U1> =
Matrix<T, U1, U5, ViewStorage<'a, T, U1, U5, RStride, CStride>>;
/// A column-major 1x6 matrix view.
/// An immutable column-major 1x6 matrix view.
///
/// See [`MatrixViewMut1x6`] for a mutable version of this type.
///
/// **Because this is an alias, not all its methods are listed here. See the [`Matrix`](crate::base::Matrix) type too.**
pub type MatrixView1x6<'a, T, RStride = U1, CStride = U1> =
Matrix<T, U1, U6, ViewStorage<'a, T, U1, U6, RStride, CStride>>;
/// A column-major 2x1 matrix view.
/// An immutable column-major 2x1 matrix view.
///
/// See [`MatrixViewMut2x1`] for a mutable version of this type.
///
/// **Because this is an alias, not all its methods are listed here. See the [`Matrix`](crate::base::Matrix) type too.**
pub type MatrixView2x1<'a, T, RStride = U1, CStride = U2> =
Matrix<T, U2, U1, ViewStorage<'a, T, U2, U1, RStride, CStride>>;
/// A column-major 2x3 matrix view.
/// An immutable column-major 2x3 matrix view.
///
/// See [`MatrixViewMut2x3`] for a mutable version of this type.
///
/// **Because this is an alias, not all its methods are listed here. See the [`Matrix`](crate::base::Matrix) type too.**
pub type MatrixView2x3<'a, T, RStride = U1, CStride = U2> =
Matrix<T, U2, U3, ViewStorage<'a, T, U2, U3, RStride, CStride>>;
/// A column-major 2x4 matrix view.
/// An immutable column-major 2x4 matrix view.
///
/// See [`MatrixViewMut2x4`] for a mutable version of this type.
///
/// **Because this is an alias, not all its methods are listed here. See the [`Matrix`](crate::base::Matrix) type too.**
pub type MatrixView2x4<'a, T, RStride = U1, CStride = U2> =
Matrix<T, U2, U4, ViewStorage<'a, T, U2, U4, RStride, CStride>>;
/// A column-major 2x5 matrix view.
/// An immutable column-major 2x5 matrix view.
///
/// See [`MatrixViewMut2x5`] for a mutable version of this type.
///
/// **Because this is an alias, not all its methods are listed here. See the [`Matrix`](crate::base::Matrix) type too.**
pub type MatrixView2x5<'a, T, RStride = U1, CStride = U2> =
Matrix<T, U2, U5, ViewStorage<'a, T, U2, U5, RStride, CStride>>;
/// A column-major 2x6 matrix view.
/// An immutable column-major 2x6 matrix view.
///
/// See [`MatrixViewMut2x6`] for a mutable version of this type.
///
/// **Because this is an alias, not all its methods are listed here. See the [`Matrix`](crate::base::Matrix) type too.**
pub type MatrixView2x6<'a, T, RStride = U1, CStride = U2> =
Matrix<T, U2, U6, ViewStorage<'a, T, U2, U6, RStride, CStride>>;
/// A column-major 3x1 matrix view.
/// An immutable column-major 3x1 matrix view.
///
/// See [`MatrixViewMut3x1`] for a mutable version of this type.
///
/// **Because this is an alias, not all its methods are listed here. See the [`Matrix`](crate::base::Matrix) type too.**
pub type MatrixView3x1<'a, T, RStride = U1, CStride = U3> =
Matrix<T, U3, U1, ViewStorage<'a, T, U3, U1, RStride, CStride>>;
/// A column-major 3x2 matrix view.
/// An immutable column-major 3x2 matrix view.
///
/// See [`MatrixViewMut3x2`] for a mutable version of this type.
///
/// **Because this is an alias, not all its methods are listed here. See the [`Matrix`](crate::base::Matrix) type too.**
pub type MatrixView3x2<'a, T, RStride = U1, CStride = U3> =
Matrix<T, U3, U2, ViewStorage<'a, T, U3, U2, RStride, CStride>>;
/// A column-major 3x4 matrix view.
/// An immutable column-major 3x4 matrix view.
///
/// See [`MatrixViewMut3x4`] for a mutable version of this type.
///
/// **Because this is an alias, not all its methods are listed here. See the [`Matrix`](crate::base::Matrix) type too.**
pub type MatrixView3x4<'a, T, RStride = U1, CStride = U3> =
Matrix<T, U3, U4, ViewStorage<'a, T, U3, U4, RStride, CStride>>;
/// A column-major 3x5 matrix view.
/// An immutable column-major 3x5 matrix view.
///
/// See [`MatrixViewMut3x5`] for a mutable version of this type.
///
/// **Because this is an alias, not all its methods are listed here. See the [`Matrix`](crate::base::Matrix) type too.**
pub type MatrixView3x5<'a, T, RStride = U1, CStride = U3> =
Matrix<T, U3, U5, ViewStorage<'a, T, U3, U5, RStride, CStride>>;
/// A column-major 3x6 matrix view.
/// An immutable column-major 3x6 matrix view.
///
/// See [`MatrixViewMut3x6`] for a mutable version of this type.
///
/// **Because this is an alias, not all its methods are listed here. See the [`Matrix`](crate::base::Matrix) type too.**
pub type MatrixView3x6<'a, T, RStride = U1, CStride = U3> =
Matrix<T, U3, U6, ViewStorage<'a, T, U3, U6, RStride, CStride>>;
/// A column-major 4x1 matrix view.
/// An immutable column-major 4x1 matrix view.
///
/// See [`MatrixViewMut4x1`] for a mutable version of this type.
///
/// **Because this is an alias, not all its methods are listed here. See the [`Matrix`](crate::base::Matrix) type too.**
pub type MatrixView4x1<'a, T, RStride = U1, CStride = U4> =
Matrix<T, U4, U1, ViewStorage<'a, T, U4, U1, RStride, CStride>>;
/// A column-major 4x2 matrix view.
/// An immutable column-major 4x2 matrix view.
///
/// See [`MatrixViewMut4x2`] for a mutable version of this type.
///
/// **Because this is an alias, not all its methods are listed here. See the [`Matrix`](crate::base::Matrix) type too.**
pub type MatrixView4x2<'a, T, RStride = U1, CStride = U4> =
Matrix<T, U4, U2, ViewStorage<'a, T, U4, U2, RStride, CStride>>;
/// A column-major 4x3 matrix view.
/// An immutable column-major 4x3 matrix view.
///
/// See [`MatrixViewMut4x3`] for a mutable version of this type.
///
/// **Because this is an alias, not all its methods are listed here. See the [`Matrix`](crate::base::Matrix) type too.**
pub type MatrixView4x3<'a, T, RStride = U1, CStride = U4> =
Matrix<T, U4, U3, ViewStorage<'a, T, U4, U3, RStride, CStride>>;
/// A column-major 4x5 matrix view.
/// An immutable column-major 4x5 matrix view.
///
/// See [`MatrixViewMut4x5`] for a mutable version of this type.
///
/// **Because this is an alias, not all its methods are listed here. See the [`Matrix`](crate::base::Matrix) type too.**
pub type MatrixView4x5<'a, T, RStride = U1, CStride = U4> =
Matrix<T, U4, U5, ViewStorage<'a, T, U4, U5, RStride, CStride>>;
/// A column-major 4x6 matrix view.
/// An immutable column-major 4x6 matrix view.
///
/// See [`MatrixViewMut4x6`] for a mutable version of this type.
///
/// **Because this is an alias, not all its methods are listed here. See the [`Matrix`](crate::base::Matrix) type too.**
pub type MatrixView4x6<'a, T, RStride = U1, CStride = U4> =
Matrix<T, U4, U6, ViewStorage<'a, T, U4, U6, RStride, CStride>>;
/// A column-major 5x1 matrix view.
/// An immutable column-major 5x1 matrix view.
///
/// See [`MatrixViewMut5x1`] for a mutable version of this type.
///
/// **Because this is an alias, not all its methods are listed here. See the [`Matrix`](crate::base::Matrix) type too.**
pub type MatrixView5x1<'a, T, RStride = U1, CStride = U5> =
Matrix<T, U5, U1, ViewStorage<'a, T, U5, U1, RStride, CStride>>;
/// A column-major 5x2 matrix view.
/// An immutable column-major 5x2 matrix view.
///
/// See [`MatrixViewMut5x2`] for a mutable version of this type.
///
/// **Because this is an alias, not all its methods are listed here. See the [`Matrix`](crate::base::Matrix) type too.**
pub type MatrixView5x2<'a, T, RStride = U1, CStride = U5> =
Matrix<T, U5, U2, ViewStorage<'a, T, U5, U2, RStride, CStride>>;
/// A column-major 5x3 matrix view.
/// An immutable column-major 5x3 matrix view.
///
/// See [`MatrixViewMut5x3`] for a mutable version of this type.
///
/// **Because this is an alias, not all its methods are listed here. See the [`Matrix`](crate::base::Matrix) type too.**
pub type MatrixView5x3<'a, T, RStride = U1, CStride = U5> =
Matrix<T, U5, U3, ViewStorage<'a, T, U5, U3, RStride, CStride>>;
/// A column-major 5x4 matrix view.
/// An immutable column-major 5x4 matrix view.
///
/// See [`MatrixViewMut5x4`] for a mutable version of this type.
///
/// **Because this is an alias, not all its methods are listed here. See the [`Matrix`](crate::base::Matrix) type too.**
pub type MatrixView5x4<'a, T, RStride = U1, CStride = U5> =
Matrix<T, U5, U4, ViewStorage<'a, T, U5, U4, RStride, CStride>>;
/// A column-major 5x6 matrix view.
/// An immutable column-major 5x6 matrix view.
///
/// See [`MatrixViewMut5x6`] for a mutable version of this type.
///
/// **Because this is an alias, not all its methods are listed here. See the [`Matrix`](crate::base::Matrix) type too.**
pub type MatrixView5x6<'a, T, RStride = U1, CStride = U5> =
Matrix<T, U5, U6, ViewStorage<'a, T, U5, U6, RStride, CStride>>;
/// A column-major 6x1 matrix view.
/// An immutable column-major 6x1 matrix view.
///
/// See [`MatrixViewMut6x1`] for a mutable version of this type.
///
/// **Because this is an alias, not all its methods are listed here. See the [`Matrix`](crate::base::Matrix) type too.**
pub type MatrixView6x1<'a, T, RStride = U1, CStride = U6> =
Matrix<T, U6, U1, ViewStorage<'a, T, U6, U1, RStride, CStride>>;
/// A column-major 6x2 matrix view.
/// An immutable column-major 6x2 matrix view.
///
/// See [`MatrixViewMut6x2`] for a mutable version of this type.
///
/// **Because this is an alias, not all its methods are listed here. See the [`Matrix`](crate::base::Matrix) type too.**
pub type MatrixView6x2<'a, T, RStride = U1, CStride = U6> =
Matrix<T, U6, U2, ViewStorage<'a, T, U6, U2, RStride, CStride>>;
/// A column-major 6x3 matrix view.
/// An immutable column-major 6x3 matrix view.
///
/// See [`MatrixViewMut6x3`] for a mutable version of this type.
///
/// **Because this is an alias, not all its methods are listed here. See the [`Matrix`](crate::base::Matrix) type too.**
pub type MatrixView6x3<'a, T, RStride = U1, CStride = U6> =
Matrix<T, U6, U3, ViewStorage<'a, T, U6, U3, RStride, CStride>>;
/// A column-major 6x4 matrix view.
/// An immutable column-major 6x4 matrix view.
///
/// See [`MatrixViewMut6x4`] for a mutable version of this type.
///
/// **Because this is an alias, not all its methods are listed here. See the [`Matrix`](crate::base::Matrix) type too.**
pub type MatrixView6x4<'a, T, RStride = U1, CStride = U6> =
Matrix<T, U6, U4, ViewStorage<'a, T, U6, U4, RStride, CStride>>;
/// A column-major 6x5 matrix view.
/// An immutable column-major 6x5 matrix view.
///
/// See [`MatrixViewMut6x5`] for a mutable version of this type.
///
/// **Because this is an alias, not all its methods are listed here. See the [`Matrix`](crate::base::Matrix) type too.**
pub type MatrixView6x5<'a, T, RStride = U1, CStride = U6> =
Matrix<T, U6, U5, ViewStorage<'a, T, U6, U5, RStride, CStride>>;
/// A column-major matrix view with 1 row and a number of columns chosen at runtime.
/// An immutable column-major matrix view with 1 row and a number of columns chosen at runtime.
///
/// See [`MatrixViewMut1xX`] for a mutable version of this type.
///
/// **Because this is an alias, not all its methods are listed here. See the [`Matrix`](crate::base::Matrix) type too.**
pub type MatrixView1xX<'a, T, RStride = U1, CStride = U1> =
Matrix<T, U1, Dyn, ViewStorage<'a, T, U1, Dyn, RStride, CStride>>;
/// A column-major matrix view with 2 rows and a number of columns chosen at runtime.
/// An immutable column-major matrix view with 2 rows and a number of columns chosen at runtime.
///
/// See [`MatrixViewMut2xX`] for a mutable version of this type.
///
/// **Because this is an alias, not all its methods are listed here. See the [`Matrix`](crate::base::Matrix) type too.**
pub type MatrixView2xX<'a, T, RStride = U1, CStride = U2> =
Matrix<T, U2, Dyn, ViewStorage<'a, T, U2, Dyn, RStride, CStride>>;
/// A column-major matrix view with 3 rows and a number of columns chosen at runtime.
/// An immutable column-major matrix view with 3 rows and a number of columns chosen at runtime.
///
/// See [`MatrixViewMut3xX`] for a mutable version of this type.
///
/// **Because this is an alias, not all its methods are listed here. See the [`Matrix`](crate::base::Matrix) type too.**
pub type MatrixView3xX<'a, T, RStride = U1, CStride = U3> =
Matrix<T, U3, Dyn, ViewStorage<'a, T, U3, Dyn, RStride, CStride>>;
/// A column-major matrix view with 4 rows and a number of columns chosen at runtime.
/// An immutable column-major matrix view with 4 rows and a number of columns chosen at runtime.
///
/// See [`MatrixViewMut4xX`] for a mutable version of this type.
///
/// **Because this is an alias, not all its methods are listed here. See the [`Matrix`](crate::base::Matrix) type too.**
pub type MatrixView4xX<'a, T, RStride = U1, CStride = U4> =
Matrix<T, U4, Dyn, ViewStorage<'a, T, U4, Dyn, RStride, CStride>>;
/// A column-major matrix view with 5 rows and a number of columns chosen at runtime.
/// An immutable column-major matrix view with 5 rows and a number of columns chosen at runtime.
///
/// See [`MatrixViewMut5xX`] for a mutable version of this type.
///
/// **Because this is an alias, not all its methods are listed here. See the [`Matrix`](crate::base::Matrix) type too.**
pub type MatrixView5xX<'a, T, RStride = U1, CStride = U5> =
Matrix<T, U5, Dyn, ViewStorage<'a, T, U5, Dyn, RStride, CStride>>;
/// A column-major matrix view with 6 rows and a number of columns chosen at runtime.
/// An immutable column-major matrix view with 6 rows and a number of columns chosen at runtime.
///
/// See [`MatrixViewMut6xX`] for a mutable version of this type.
///
/// **Because this is an alias, not all its methods are listed here. See the [`Matrix`](crate::base::Matrix) type too.**
pub type MatrixView6xX<'a, T, RStride = U1, CStride = U6> =
Matrix<T, U6, Dyn, ViewStorage<'a, T, U6, Dyn, RStride, CStride>>;
/// A column-major matrix view with a number of rows chosen at runtime and 1 column.
/// An immutable column-major matrix view with a number of rows chosen at runtime and 1 column.
///
/// See [`MatrixViewMutXx1`] for a mutable version of this type.
///
/// **Because this is an alias, not all its methods are listed here. See the [`Matrix`](crate::base::Matrix) type too.**
pub type MatrixViewXx1<'a, T, RStride = U1, CStride = Dyn> =
Matrix<T, Dyn, U1, ViewStorage<'a, T, Dyn, U1, RStride, CStride>>;
/// A column-major matrix view with a number of rows chosen at runtime and 2 columns.
/// An immutable column-major matrix view with a number of rows chosen at runtime and 2 columns.
///
/// See [`MatrixViewMutXx2`] for a mutable version of this type.
///
/// **Because this is an alias, not all its methods are listed here. See the [`Matrix`](crate::base::Matrix) type too.**
pub type MatrixViewXx2<'a, T, RStride = U1, CStride = Dyn> =
Matrix<T, Dyn, U2, ViewStorage<'a, T, Dyn, U2, RStride, CStride>>;
/// A column-major matrix view with a number of rows chosen at runtime and 3 columns.
/// An immutable column-major matrix view with a number of rows chosen at runtime and 3 columns.
///
/// See [`MatrixViewMutXx3`] for a mutable version of this type.
///
/// **Because this is an alias, not all its methods are listed here. See the [`Matrix`](crate::base::Matrix) type too.**
pub type MatrixViewXx3<'a, T, RStride = U1, CStride = Dyn> =
Matrix<T, Dyn, U3, ViewStorage<'a, T, Dyn, U3, RStride, CStride>>;
/// A column-major matrix view with a number of rows chosen at runtime and 4 columns.
/// An immutable column-major matrix view with a number of rows chosen at runtime and 4 columns.
///
/// See [`MatrixViewMutXx4`] for a mutable version of this type.
///
/// **Because this is an alias, not all its methods are listed here. See the [`Matrix`](crate::base::Matrix) type too.**
pub type MatrixViewXx4<'a, T, RStride = U1, CStride = Dyn> =
Matrix<T, Dyn, U4, ViewStorage<'a, T, Dyn, U4, RStride, CStride>>;
/// A column-major matrix view with a number of rows chosen at runtime and 5 columns.
/// An immutable column-major matrix view with a number of rows chosen at runtime and 5 columns.
///
/// See [`MatrixViewMutXx5`] for a mutable version of this type.
///
/// **Because this is an alias, not all its methods are listed here. See the [`Matrix`](crate::base::Matrix) type too.**
pub type MatrixViewXx5<'a, T, RStride = U1, CStride = Dyn> =
Matrix<T, Dyn, U5, ViewStorage<'a, T, Dyn, U5, RStride, CStride>>;
/// A column-major matrix view with a number of rows chosen at runtime and 6 columns.
/// An immutable column-major matrix view with a number of rows chosen at runtime and 6 columns.
///
/// See [`MatrixViewMutXx6`] for a mutable version of this type.
///
/// **Because this is an alias, not all its methods are listed here. See the [`Matrix`](crate::base::Matrix) type too.**
pub type MatrixViewXx6<'a, T, RStride = U1, CStride = Dyn> =
Matrix<T, Dyn, U6, ViewStorage<'a, T, Dyn, U6, RStride, CStride>>;
/// A column vector view with dimensions known at compile-time.
/// An immutable column vector view with dimensions known at compile-time.
///
/// See [`VectorViewMut`] for a mutable version of this type.
///
/// **Because this is an alias, not all its methods are listed here. See the [`Matrix`](crate::base::Matrix) type too.**
pub type VectorView<'a, T, D, RStride = U1, CStride = D> =
Matrix<T, D, U1, ViewStorage<'a, T, D, U1, RStride, CStride>>;
/// A column vector view with dimensions known at compile-time.
/// An immutable column vector view with dimensions known at compile-time.
///
/// See [`SVectorViewMut`] for a mutable version of this type.
///
/// **Because this is an alias, not all its methods are listed here. See the [`Matrix`](crate::base::Matrix) type too.**
pub type SVectorView<'a, T, const D: usize> =
Matrix<T, Const<D>, Const<1>, ViewStorage<'a, T, Const<D>, Const<1>, Const<1>, Const<D>>>;
/// A column vector view with a dynamic number of rows and columns.
/// An immutable column vector view with a dynamic number of rows and columns.
///
/// See [`DVectorViewMut`] for a mutable version of this type.
///
/// **Because this is an alias, not all its methods are listed here. See the [`Matrix`](crate::base::Matrix) type too.**
pub type DVectorView<'a, T, RStride = U1, CStride = Dyn> =
Matrix<T, Dyn, U1, ViewStorage<'a, T, Dyn, U1, RStride, CStride>>;
/// A 1D column vector view.
/// An immutable 1D column vector view.
///
/// See [`VectorViewMut1`] for a mutable version of this type.
///
/// **Because this is an alias, not all its methods are listed here. See the [`Matrix`](crate::base::Matrix) type too.**
pub type VectorView1<'a, T, RStride = U1, CStride = U1> =
Matrix<T, U1, U1, ViewStorage<'a, T, U1, U1, RStride, CStride>>;
/// A 2D column vector view.
/// An immutable 2D column vector view.
///
/// See [`VectorViewMut2`] for a mutable version of this type.
///
/// **Because this is an alias, not all its methods are listed here. See the [`Matrix`](crate::base::Matrix) type too.**
pub type VectorView2<'a, T, RStride = U1, CStride = U2> =
Matrix<T, U2, U1, ViewStorage<'a, T, U2, U1, RStride, CStride>>;
/// A 3D column vector view.
/// An immutable 3D column vector view.
///
/// See [`VectorViewMut3`] for a mutable version of this type.
///
/// **Because this is an alias, not all its methods are listed here. See the [`Matrix`](crate::base::Matrix) type too.**
pub type VectorView3<'a, T, RStride = U1, CStride = U3> =
Matrix<T, U3, U1, ViewStorage<'a, T, U3, U1, RStride, CStride>>;
/// A 4D column vector view.
/// An immutable 4D column vector view.
///
/// See [`VectorViewMut4`] for a mutable version of this type.
///
/// **Because this is an alias, not all its methods are listed here. See the [`Matrix`](crate::base::Matrix) type too.**
pub type VectorView4<'a, T, RStride = U1, CStride = U4> =
Matrix<T, U4, U1, ViewStorage<'a, T, U4, U1, RStride, CStride>>;
/// A 5D column vector view.
/// An immutable 5D column vector view.
///
/// See [`VectorViewMut5`] for a mutable version of this type.
///
/// **Because this is an alias, not all its methods are listed here. See the [`Matrix`](crate::base::Matrix) type too.**
pub type VectorView5<'a, T, RStride = U1, CStride = U5> =
Matrix<T, U5, U1, ViewStorage<'a, T, U5, U1, RStride, CStride>>;
/// A 6D column vector view.
/// An immutable 6D column vector view.
///
/// See [`VectorViewMut6`] for a mutable version of this type.
///
/// **Because this is an alias, not all its methods are listed here. See the [`Matrix`](crate::base::Matrix) type too.**
pub type VectorView6<'a, T, RStride = U1, CStride = U6> =
Matrix<T, U6, U1, ViewStorage<'a, T, U6, U1, RStride, CStride>>;
@ -302,287 +446,429 @@ pub type VectorView6<'a, T, RStride = U1, CStride = U6> =
*
*/
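For context (not part of the diff): the immutable aliases above are normally obtained from view-taking methods rather than written out by hand. A minimal usage sketch, assuming nalgebra 0.33 with f64 entries:

use nalgebra::Matrix3;

fn main() {
    let m = Matrix3::new(1.0, 2.0, 3.0,
                         4.0, 5.0, 6.0,
                         7.0, 8.0, 9.0);

    // 2x2 view of the top-right corner. The column stride comes from the
    // 3x3 parent, so the alias here is MatrixView2<'_, f64, U1, U3>.
    let corner = m.fixed_view::<2, 2>(0, 1);
    assert_eq!(corner[(1, 1)], 6.0);

    // A single column is a vector view (VectorView3<'_, f64, U1, U3> here).
    let last_col = m.column(2);
    assert_eq!(last_col[2], 9.0);
}

The mutable counterparts below are produced by the corresponding `_mut` methods.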
/// A column-major matrix view with dimensions known at compile-time.
/// A mutable column-major matrix view with dimensions known at compile-time.
///
/// See [`SMatrixView`] for an immutable version of this type.
///
/// **Because this is an alias, not all its methods are listed here. See the [`Matrix`](crate::base::Matrix) type too.**
pub type SMatrixViewMut<'a, T, const R: usize, const C: usize> =
Matrix<T, Const<R>, Const<C>, ViewStorageMut<'a, T, Const<R>, Const<C>, Const<1>, Const<R>>>;
/// A column-major matrix view with a dynamic number of rows and columns.
/// A mutable column-major matrix view with a dynamic number of rows and columns.
///
/// See [`DMatrixView`] for an immutable version of this type.
///
/// **Because this is an alias, not all its methods are listed here. See the [`Matrix`](crate::base::Matrix) type too.**
pub type DMatrixViewMut<'a, T, RStride = U1, CStride = Dyn> =
Matrix<T, Dyn, Dyn, ViewStorageMut<'a, T, Dyn, Dyn, RStride, CStride>>;
/// A column-major 1x1 matrix view.
/// A mutable column-major 1x1 matrix view.
///
/// See [`MatrixView1`] for an immutable version of this type.
///
/// **Because this is an alias, not all its methods are listed here. See the [`Matrix`](crate::base::Matrix) type too.**
pub type MatrixViewMut1<'a, T, RStride = U1, CStride = U1> =
Matrix<T, U1, U1, ViewStorageMut<'a, T, U1, U1, RStride, CStride>>;
/// A column-major 2x2 matrix view.
/// A mutable column-major 2x2 matrix view.
///
/// See [`MatrixView2`] for an immutable version of this type.
///
/// **Because this is an alias, not all its methods are listed here. See the [`Matrix`](crate::base::Matrix) type too.**
pub type MatrixViewMut2<'a, T, RStride = U1, CStride = U2> =
Matrix<T, U2, U2, ViewStorageMut<'a, T, U2, U2, RStride, CStride>>;
/// A column-major 3x3 matrix view.
/// A mutable column-major 3x3 matrix view.
///
/// See [`MatrixView3`] for an immutable version of this type.
///
/// **Because this is an alias, not all its methods are listed here. See the [`Matrix`](crate::base::Matrix) type too.**
pub type MatrixViewMut3<'a, T, RStride = U1, CStride = U3> =
Matrix<T, U3, U3, ViewStorageMut<'a, T, U3, U3, RStride, CStride>>;
/// A column-major 4x4 matrix view.
/// A mutable column-major 4x4 matrix view.
///
/// See [`MatrixView4`] for an immutable version of this type.
///
/// **Because this is an alias, not all its methods are listed here. See the [`Matrix`](crate::base::Matrix) type too.**
pub type MatrixViewMut4<'a, T, RStride = U1, CStride = U4> =
Matrix<T, U4, U4, ViewStorageMut<'a, T, U4, U4, RStride, CStride>>;
/// A column-major 5x5 matrix view.
/// A mutable column-major 5x5 matrix view.
///
/// See [`MatrixView5`] for an immutable version of this type.
///
/// **Because this is an alias, not all its methods are listed here. See the [`Matrix`](crate::base::Matrix) type too.**
pub type MatrixViewMut5<'a, T, RStride = U1, CStride = U5> =
Matrix<T, U5, U5, ViewStorageMut<'a, T, U5, U5, RStride, CStride>>;
/// A column-major 6x6 matrix view.
/// A mutable column-major 6x6 matrix view.
///
/// See [`MatrixView6`] for an immutable version of this type.
///
/// **Because this is an alias, not all its methods are listed here. See the [`Matrix`](crate::base::Matrix) type too.**
pub type MatrixViewMut6<'a, T, RStride = U1, CStride = U6> =
Matrix<T, U6, U6, ViewStorageMut<'a, T, U6, U6, RStride, CStride>>;
/// A column-major 1x2 matrix view.
/// A mutable column-major 1x2 matrix view.
///
/// See [`MatrixView1x2`] for an immutable version of this type.
///
/// **Because this is an alias, not all its methods are listed here. See the [`Matrix`](crate::base::Matrix) type too.**
pub type MatrixViewMut1x2<'a, T, RStride = U1, CStride = U1> =
Matrix<T, U1, U2, ViewStorageMut<'a, T, U1, U2, RStride, CStride>>;
/// A column-major 1x3 matrix view.
/// A mutable column-major 1x3 matrix view.
///
/// See [`MatrixView1x3`] for an immutable version of this type.
///
/// **Because this is an alias, not all its methods are listed here. See the [`Matrix`](crate::base::Matrix) type too.**
pub type MatrixViewMut1x3<'a, T, RStride = U1, CStride = U1> =
Matrix<T, U1, U3, ViewStorageMut<'a, T, U1, U3, RStride, CStride>>;
/// A column-major 1x4 matrix view.
/// A mutable column-major 1x4 matrix view.
///
/// See [`MatrixView1x4`] for an immutable version of this type.
///
/// **Because this is an alias, not all its methods are listed here. See the [`Matrix`](crate::base::Matrix) type too.**
pub type MatrixViewMut1x4<'a, T, RStride = U1, CStride = U1> =
Matrix<T, U1, U4, ViewStorageMut<'a, T, U1, U4, RStride, CStride>>;
/// A column-major 1x5 matrix view.
/// A mutable column-major 1x5 matrix view.
///
/// See [`MatrixView1x5`] for an immutable version of this type.
///
/// **Because this is an alias, not all its methods are listed here. See the [`Matrix`](crate::base::Matrix) type too.**
pub type MatrixViewMut1x5<'a, T, RStride = U1, CStride = U1> =
Matrix<T, U1, U5, ViewStorageMut<'a, T, U1, U5, RStride, CStride>>;
/// A column-major 1x6 matrix view.
/// A mutable column-major 1x6 matrix view.
///
/// See [`MatrixView1x6`] for an immutable version of this type.
///
/// **Because this is an alias, not all its methods are listed here. See the [`Matrix`](crate::base::Matrix) type too.**
pub type MatrixViewMut1x6<'a, T, RStride = U1, CStride = U1> =
Matrix<T, U1, U6, ViewStorageMut<'a, T, U1, U6, RStride, CStride>>;
/// A column-major 2x1 matrix view.
/// A mutable column-major 2x1 matrix view.
///
/// See [`MatrixView2x1`] for an immutable version of this type.
///
/// **Because this is an alias, not all its methods are listed here. See the [`Matrix`](crate::base::Matrix) type too.**
pub type MatrixViewMut2x1<'a, T, RStride = U1, CStride = U2> =
Matrix<T, U2, U1, ViewStorageMut<'a, T, U2, U1, RStride, CStride>>;
/// A column-major 2x3 matrix view.
/// A mutable column-major 2x3 matrix view.
///
/// See [`MatrixView2x3`] for an immutable version of this type.
///
/// **Because this is an alias, not all its methods are listed here. See the [`Matrix`](crate::base::Matrix) type too.**
pub type MatrixViewMut2x3<'a, T, RStride = U1, CStride = U2> =
Matrix<T, U2, U3, ViewStorageMut<'a, T, U2, U3, RStride, CStride>>;
/// A column-major 2x4 matrix view.
/// A mutable column-major 2x4 matrix view.
///
/// See [`MatrixView2x4`] for an immutable version of this type.
///
/// **Because this is an alias, not all its methods are listed here. See the [`Matrix`](crate::base::Matrix) type too.**
pub type MatrixViewMut2x4<'a, T, RStride = U1, CStride = U2> =
Matrix<T, U2, U4, ViewStorageMut<'a, T, U2, U4, RStride, CStride>>;
/// A column-major 2x5 matrix view.
/// A mutable column-major 2x5 matrix view.
///
/// See [`MatrixView2x5`] for an immutable version of this type.
///
/// **Because this is an alias, not all its methods are listed here. See the [`Matrix`](crate::base::Matrix) type too.**
pub type MatrixViewMut2x5<'a, T, RStride = U1, CStride = U2> =
Matrix<T, U2, U5, ViewStorageMut<'a, T, U2, U5, RStride, CStride>>;
/// A column-major 2x6 matrix view.
/// A mutable column-major 2x6 matrix view.
///
/// See [`MatrixView2x6`] for an immutable version of this type.
///
/// **Because this is an alias, not all its methods are listed here. See the [`Matrix`](crate::base::Matrix) type too.**
pub type MatrixViewMut2x6<'a, T, RStride = U1, CStride = U2> =
Matrix<T, U2, U6, ViewStorageMut<'a, T, U2, U6, RStride, CStride>>;
/// A column-major 3x1 matrix view.
/// A mutable column-major 3x1 matrix view.
///
/// See [`MatrixView3x1`] for an immutable version of this type.
///
/// **Because this is an alias, not all its methods are listed here. See the [`Matrix`](crate::base::Matrix) type too.**
pub type MatrixViewMut3x1<'a, T, RStride = U1, CStride = U3> =
Matrix<T, U3, U1, ViewStorageMut<'a, T, U3, U1, RStride, CStride>>;
/// A column-major 3x2 matrix view.
/// A mutable column-major 3x2 matrix view.
///
/// See [`MatrixView3x2`] for an immutable version of this type.
///
/// **Because this is an alias, not all its methods are listed here. See the [`Matrix`](crate::base::Matrix) type too.**
pub type MatrixViewMut3x2<'a, T, RStride = U1, CStride = U3> =
Matrix<T, U3, U2, ViewStorageMut<'a, T, U3, U2, RStride, CStride>>;
/// A column-major 3x4 matrix view.
/// A mutable column-major 3x4 matrix view.
///
/// See [`MatrixView3x4`] for an immutable version of this type.
///
/// **Because this is an alias, not all its methods are listed here. See the [`Matrix`](crate::base::Matrix) type too.**
pub type MatrixViewMut3x4<'a, T, RStride = U1, CStride = U3> =
Matrix<T, U3, U4, ViewStorageMut<'a, T, U3, U4, RStride, CStride>>;
/// A column-major 3x5 matrix view.
/// A mutable column-major 3x5 matrix view.
///
/// See [`MatrixView3x5`] for an immutable version of this type.
///
/// **Because this is an alias, not all its methods are listed here. See the [`Matrix`](crate::base::Matrix) type too.**
pub type MatrixViewMut3x5<'a, T, RStride = U1, CStride = U3> =
Matrix<T, U3, U5, ViewStorageMut<'a, T, U3, U5, RStride, CStride>>;
/// A column-major 3x6 matrix view.
/// A mutable column-major 3x6 matrix view.
///
/// See [`MatrixView3x6`] for an immutable version of this type.
///
/// **Because this is an alias, not all its methods are listed here. See the [`Matrix`](crate::base::Matrix) type too.**
pub type MatrixViewMut3x6<'a, T, RStride = U1, CStride = U3> =
Matrix<T, U3, U6, ViewStorageMut<'a, T, U3, U6, RStride, CStride>>;
/// A column-major 4x1 matrix view.
/// A mutable column-major 4x1 matrix view.
///
/// See [`MatrixView4x1`] for an immutable version of this type.
///
/// **Because this is an alias, not all its methods are listed here. See the [`Matrix`](crate::base::Matrix) type too.**
pub type MatrixViewMut4x1<'a, T, RStride = U1, CStride = U4> =
Matrix<T, U4, U1, ViewStorageMut<'a, T, U4, U1, RStride, CStride>>;
/// A column-major 4x2 matrix view.
/// A mutable column-major 4x2 matrix view.
///
/// See [`MatrixView4x2`] for an immutable version of this type.
///
/// **Because this is an alias, not all its methods are listed here. See the [`Matrix`](crate::base::Matrix) type too.**
pub type MatrixViewMut4x2<'a, T, RStride = U1, CStride = U4> =
Matrix<T, U4, U2, ViewStorageMut<'a, T, U4, U2, RStride, CStride>>;
/// A column-major 4x3 matrix view.
/// A mutable column-major 4x3 matrix view.
///
/// See [`MatrixView4x3`] for an immutable version of this type.
///
/// **Because this is an alias, not all its methods are listed here. See the [`Matrix`](crate::base::Matrix) type too.**
pub type MatrixViewMut4x3<'a, T, RStride = U1, CStride = U4> =
Matrix<T, U4, U3, ViewStorageMut<'a, T, U4, U3, RStride, CStride>>;
/// A column-major 4x5 matrix view.
/// A mutable column-major 4x5 matrix view.
///
/// See [`MatrixView4x5`] for an immutable version of this type.
///
/// **Because this is an alias, not all its methods are listed here. See the [`Matrix`](crate::base::Matrix) type too.**
pub type MatrixViewMut4x5<'a, T, RStride = U1, CStride = U4> =
Matrix<T, U4, U5, ViewStorageMut<'a, T, U4, U5, RStride, CStride>>;
/// A column-major 4x6 matrix view.
/// A mutable column-major 4x6 matrix view.
///
/// See [`MatrixView4x6`] for an immutable version of this type.
///
/// **Because this is an alias, not all its methods are listed here. See the [`Matrix`](crate::base::Matrix) type too.**
pub type MatrixViewMut4x6<'a, T, RStride = U1, CStride = U4> =
Matrix<T, U4, U6, ViewStorageMut<'a, T, U4, U6, RStride, CStride>>;
/// A column-major 5x1 matrix view.
/// A mutable column-major 5x1 matrix view.
///
/// See [`MatrixView5x1`] for an immutable version of this type.
///
/// **Because this is an alias, not all its methods are listed here. See the [`Matrix`](crate::base::Matrix) type too.**
pub type MatrixViewMut5x1<'a, T, RStride = U1, CStride = U5> =
Matrix<T, U5, U1, ViewStorageMut<'a, T, U5, U1, RStride, CStride>>;
/// A column-major 5x2 matrix view.
/// A mutable column-major 5x2 matrix view.
///
/// See [`MatrixView5x2`] for an immutable version of this type.
///
/// **Because this is an alias, not all its methods are listed here. See the [`Matrix`](crate::base::Matrix) type too.**
pub type MatrixViewMut5x2<'a, T, RStride = U1, CStride = U5> =
Matrix<T, U5, U2, ViewStorageMut<'a, T, U5, U2, RStride, CStride>>;
/// A column-major 5x3 matrix view.
/// A mutable column-major 5x3 matrix view.
///
/// See [`MatrixView5x3`] for an immutable version of this type.
///
/// **Because this is an alias, not all its methods are listed here. See the [`Matrix`](crate::base::Matrix) type too.**
pub type MatrixViewMut5x3<'a, T, RStride = U1, CStride = U5> =
Matrix<T, U5, U3, ViewStorageMut<'a, T, U5, U3, RStride, CStride>>;
/// A column-major 5x4 matrix view.
/// A mutable column-major 5x4 matrix view.
///
/// See [`MatrixView5x4`] for an immutable version of this type.
///
/// **Because this is an alias, not all its methods are listed here. See the [`Matrix`](crate::base::Matrix) type too.**
pub type MatrixViewMut5x4<'a, T, RStride = U1, CStride = U5> =
Matrix<T, U5, U4, ViewStorageMut<'a, T, U5, U4, RStride, CStride>>;
/// A column-major 5x6 matrix view.
/// A mutable column-major 5x6 matrix view.
///
/// See [`MatrixView5x6`] for an immutable version of this type.
///
/// **Because this is an alias, not all its methods are listed here. See the [`Matrix`](crate::base::Matrix) type too.**
pub type MatrixViewMut5x6<'a, T, RStride = U1, CStride = U5> =
Matrix<T, U5, U6, ViewStorageMut<'a, T, U5, U6, RStride, CStride>>;
/// A column-major 6x1 matrix view.
/// A mutable column-major 6x1 matrix view.
///
/// See [`MatrixView6x1`] for an immutable version of this type.
///
/// **Because this is an alias, not all its methods are listed here. See the [`Matrix`](crate::base::Matrix) type too.**
pub type MatrixViewMut6x1<'a, T, RStride = U1, CStride = U6> =
Matrix<T, U6, U1, ViewStorageMut<'a, T, U6, U1, RStride, CStride>>;
/// A column-major 6x2 matrix view.
/// A mutable column-major 6x2 matrix view.
///
/// See [`MatrixView6x2`] for an immutable version of this type.
///
/// **Because this is an alias, not all its methods are listed here. See the [`Matrix`](crate::base::Matrix) type too.**
pub type MatrixViewMut6x2<'a, T, RStride = U1, CStride = U6> =
Matrix<T, U6, U2, ViewStorageMut<'a, T, U6, U2, RStride, CStride>>;
/// A column-major 6x3 matrix view.
/// A mutable column-major 6x3 matrix view.
///
/// See [`MatrixView6x3`] for an immutable version of this type.
///
/// **Because this is an alias, not all its methods are listed here. See the [`Matrix`](crate::base::Matrix) type too.**
pub type MatrixViewMut6x3<'a, T, RStride = U1, CStride = U6> =
Matrix<T, U6, U3, ViewStorageMut<'a, T, U6, U3, RStride, CStride>>;
/// A column-major 6x4 matrix view.
/// A mutable column-major 6x4 matrix view.
///
/// See [`MatrixView6x4`] for an immutable version of this type.
///
/// **Because this is an alias, not all its methods are listed here. See the [`Matrix`](crate::base::Matrix) type too.**
pub type MatrixViewMut6x4<'a, T, RStride = U1, CStride = U6> =
Matrix<T, U6, U4, ViewStorageMut<'a, T, U6, U4, RStride, CStride>>;
/// A column-major 6x5 matrix view.
/// A mutable column-major 6x5 matrix view.
///
/// See [`MatrixView6x5`] for an immutable version of this type.
///
/// **Because this is an alias, not all its methods are listed here. See the [`Matrix`](crate::base::Matrix) type too.**
pub type MatrixViewMut6x5<'a, T, RStride = U1, CStride = U6> =
Matrix<T, U6, U5, ViewStorageMut<'a, T, U6, U5, RStride, CStride>>;
/// A column-major matrix view with 1 row and a number of columns chosen at runtime.
/// A mutable column-major matrix view with 1 row and a number of columns chosen at runtime.
///
/// See [`MatrixView1xX`] for an immutable version of this type.
///
/// **Because this is an alias, not all its methods are listed here. See the [`Matrix`](crate::base::Matrix) type too.**
pub type MatrixViewMut1xX<'a, T, RStride = U1, CStride = U1> =
Matrix<T, U1, Dyn, ViewStorageMut<'a, T, U1, Dyn, RStride, CStride>>;
/// A column-major matrix view with 2 rows and a number of columns chosen at runtime.
/// A mutable column-major matrix view with 2 rows and a number of columns chosen at runtime.
///
/// See [`MatrixView2xX`] for an immutable version of this type.
///
/// **Because this is an alias, not all its methods are listed here. See the [`Matrix`](crate::base::Matrix) type too.**
pub type MatrixViewMut2xX<'a, T, RStride = U1, CStride = U2> =
Matrix<T, U2, Dyn, ViewStorageMut<'a, T, U2, Dyn, RStride, CStride>>;
/// A column-major matrix view with 3 rows and a number of columns chosen at runtime.
/// A mutable column-major matrix view with 3 rows and a number of columns chosen at runtime.
///
/// See [`MatrixView3xX`] for an immutable version of this type.
///
/// **Because this is an alias, not all its methods are listed here. See the [`Matrix`](crate::base::Matrix) type too.**
pub type MatrixViewMut3xX<'a, T, RStride = U1, CStride = U3> =
Matrix<T, U3, Dyn, ViewStorageMut<'a, T, U3, Dyn, RStride, CStride>>;
/// A column-major matrix view with 4 rows and a number of columns chosen at runtime.
/// A mutable column-major matrix view with 4 rows and a number of columns chosen at runtime.
///
/// See [`MatrixView4xX`] for an immutable version of this type.
///
/// **Because this is an alias, not all its methods are listed here. See the [`Matrix`](crate::base::Matrix) type too.**
pub type MatrixViewMut4xX<'a, T, RStride = U1, CStride = U4> =
Matrix<T, U4, Dyn, ViewStorageMut<'a, T, U4, Dyn, RStride, CStride>>;
/// A column-major matrix view with 5 rows and a number of columns chosen at runtime.
/// A mutable column-major matrix view with 5 rows and a number of columns chosen at runtime.
///
/// See [`MatrixView5xX`] for an immutable version of this type.
///
/// **Because this is an alias, not all its methods are listed here. See the [`Matrix`](crate::base::Matrix) type too.**
pub type MatrixViewMut5xX<'a, T, RStride = U1, CStride = U5> =
Matrix<T, U5, Dyn, ViewStorageMut<'a, T, U5, Dyn, RStride, CStride>>;
/// A column-major matrix view with 6 rows and a number of columns chosen at runtime.
/// A mutable column-major matrix view with 6 rows and a number of columns chosen at runtime.
///
/// See [`MatrixView6xX`] for an immutable version of this type.
///
/// **Because this is an alias, not all its methods are listed here. See the [`Matrix`](crate::base::Matrix) type too.**
pub type MatrixViewMut6xX<'a, T, RStride = U1, CStride = U6> =
Matrix<T, U6, Dyn, ViewStorageMut<'a, T, U6, Dyn, RStride, CStride>>;
/// A column-major matrix view with a number of rows chosen at runtime and 1 column.
/// A mutable column-major matrix view with a number of rows chosen at runtime and 1 column.
///
/// See [`MatrixViewXx1`] for an immutable version of this type.
///
/// **Because this is an alias, not all its methods are listed here. See the [`Matrix`](crate::base::Matrix) type too.**
pub type MatrixViewMutXx1<'a, T, RStride = U1, CStride = Dyn> =
Matrix<T, Dyn, U1, ViewStorageMut<'a, T, Dyn, U1, RStride, CStride>>;
/// A column-major matrix view with a number of rows chosen at runtime and 2 columns.
/// A mutable column-major matrix view with a number of rows chosen at runtime and 2 columns.
///
/// See [`MatrixViewXx2`] for an immutable version of this type.
///
/// **Because this is an alias, not all its methods are listed here. See the [`Matrix`](crate::base::Matrix) type too.**
pub type MatrixViewMutXx2<'a, T, RStride = U1, CStride = Dyn> =
Matrix<T, Dyn, U2, ViewStorageMut<'a, T, Dyn, U2, RStride, CStride>>;
/// A column-major matrix view with a number of rows chosen at runtime and 3 columns.
/// A mutable column-major matrix view with a number of rows chosen at runtime and 3 columns.
///
/// See [`MatrixViewXx3`] for an immutable version of this type.
///
/// **Because this is an alias, not all its methods are listed here. See the [`Matrix`](crate::base::Matrix) type too.**
pub type MatrixViewMutXx3<'a, T, RStride = U1, CStride = Dyn> =
Matrix<T, Dyn, U3, ViewStorageMut<'a, T, Dyn, U3, RStride, CStride>>;
/// A column-major matrix view with a number of rows chosen at runtime and 4 columns.
/// A mutable column-major matrix view with a number of rows chosen at runtime and 4 columns.
///
/// See [`MatrixViewXx4`] for an immutable version of this type.
///
/// **Because this is an alias, not all its methods are listed here. See the [`Matrix`](crate::base::Matrix) type too.**
pub type MatrixViewMutXx4<'a, T, RStride = U1, CStride = Dyn> =
Matrix<T, Dyn, U4, ViewStorageMut<'a, T, Dyn, U4, RStride, CStride>>;
/// A column-major matrix view with a number of rows chosen at runtime and 5 columns.
/// A mutable column-major matrix view with a number of rows chosen at runtime and 5 columns.
///
/// See [`MatrixViewXx5`] for an immutable version of this type.
///
/// **Because this is an alias, not all its methods are listed here. See the [`Matrix`](crate::base::Matrix) type too.**
pub type MatrixViewMutXx5<'a, T, RStride = U1, CStride = Dyn> =
Matrix<T, Dyn, U5, ViewStorageMut<'a, T, Dyn, U5, RStride, CStride>>;
/// A column-major matrix view with a number of rows chosen at runtime and 6 columns.
/// A mutable column-major matrix view with a number of rows chosen at runtime and 6 columns.
///
/// See [`MatrixViewXx6`] for an immutable version of this type.
///
/// **Because this is an alias, not all its methods are listed here. See the [`Matrix`](crate::base::Matrix) type too.**
pub type MatrixViewMutXx6<'a, T, RStride = U1, CStride = Dyn> =
Matrix<T, Dyn, U6, ViewStorageMut<'a, T, Dyn, U6, RStride, CStride>>;
/// A column vector view with dimensions known at compile-time.
/// A mutable column vector view with dimensions known at compile-time.
///
/// See [`VectorView`] for an immutable version of this type.
///
/// **Because this is an alias, not all its methods are listed here. See the [`Matrix`](crate::base::Matrix) type too.**
pub type VectorViewMut<'a, T, D, RStride = U1, CStride = D> =
Matrix<T, D, U1, ViewStorageMut<'a, T, D, U1, RStride, CStride>>;
/// A column vector view with dimensions known at compile-time.
/// A mutable column vector view with dimensions known at compile-time.
///
/// See [`SVectorView`] for an immutable version of this type.
///
/// **Because this is an alias, not all its methods are listed here. See the [`Matrix`](crate::base::Matrix) type too.**
pub type SVectorViewMut<'a, T, const D: usize> =
Matrix<T, Const<D>, Const<1>, ViewStorageMut<'a, T, Const<D>, Const<1>, Const<1>, Const<D>>>;
/// A column vector view with a dynamic number of rows and columns.
/// A mutable column vector view with a dynamic number of rows and columns.
///
/// See [`DVectorView`] for an immutable version of this type.
///
/// **Because this is an alias, not all its methods are listed here. See the [`Matrix`](crate::base::Matrix) type too.**
pub type DVectorViewMut<'a, T, RStride = U1, CStride = Dyn> =
Matrix<T, Dyn, U1, ViewStorageMut<'a, T, Dyn, U1, RStride, CStride>>;
/// A 1D column vector view.
/// A mutable 1D column vector view.
///
/// See [`VectorView1`] for an immutable version of this type.
///
/// **Because this is an alias, not all its methods are listed here. See the [`Matrix`](crate::base::Matrix) type too.**
pub type VectorViewMut1<'a, T, RStride = U1, CStride = U1> =
Matrix<T, U1, U1, ViewStorageMut<'a, T, U1, U1, RStride, CStride>>;
/// A 2D column vector view.
/// A mutable 2D column vector view.
///
/// See [`VectorView2`] for an immutable version of this type.
///
/// **Because this is an alias, not all its methods are listed here. See the [`Matrix`](crate::base::Matrix) type too.**
pub type VectorViewMut2<'a, T, RStride = U1, CStride = U2> =
Matrix<T, U2, U1, ViewStorageMut<'a, T, U2, U1, RStride, CStride>>;
/// A 3D column vector view.
/// A mutable 3D column vector view.
///
/// See [`VectorView3`] for an immutable version of this type.
///
/// **Because this is an alias, not all its methods are listed here. See the [`Matrix`](crate::base::Matrix) type too.**
pub type VectorViewMut3<'a, T, RStride = U1, CStride = U3> =
Matrix<T, U3, U1, ViewStorageMut<'a, T, U3, U1, RStride, CStride>>;
/// A 4D column vector view.
/// A mutable 4D column vector view.
///
/// See [`VectorView4`] for an immutable version of this type.
///
/// **Because this is an alias, not all its methods are listed here. See the [`Matrix`](crate::base::Matrix) type too.**
pub type VectorViewMut4<'a, T, RStride = U1, CStride = U4> =
Matrix<T, U4, U1, ViewStorageMut<'a, T, U4, U1, RStride, CStride>>;
/// A 5D column vector view.
/// A mutable 5D column vector view.
///
/// See [`VectorView5`] for an immutable version of this type.
///
/// **Because this is an alias, not all its methods are listed here. See the [`Matrix`](crate::base::Matrix) type too.**
pub type VectorViewMut5<'a, T, RStride = U1, CStride = U5> =
Matrix<T, U5, U1, ViewStorageMut<'a, T, U5, U1, RStride, CStride>>;
/// A 6D column vector view.
/// A mutable 6D column vector view.
///
/// See [`VectorView6`] for an immutable version of this type.
///
/// **Because this is an alias, not all its methods are listed here. See the [`Matrix`](crate::base::Matrix) type too.**
pub type VectorViewMut6<'a, T, RStride = U1, CStride = U6> =
Matrix<T, U6, U1, ViewStorageMut<'a, T, U6, U1, RStride, CStride>>;
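For context (not part of the diff): the mutable aliases are returned by the `_mut` view methods and allow writing through to the parent matrix. A minimal sketch, assuming nalgebra 0.33:

use nalgebra::Matrix3;

fn main() {
    let mut m = Matrix3::<f64>::zeros();

    // Mutable 2x2 view of the lower-right block (a MatrixViewMut2 carrying
    // the parent's column stride); writes go straight into `m`.
    let mut block = m.fixed_view_mut::<2, 2>(1, 1);
    block.fill(1.0);

    assert_eq!(m[(2, 2)], 1.0);
    assert_eq!(m[(0, 0)], 0.0);
}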

View File

@ -19,36 +19,36 @@ use std::mem::MaybeUninit;
///
/// Every allocator must be both static and dynamic, though not all implementations may share the
/// same `Buffer` type.
pub trait Allocator<T, R: Dim, C: Dim = U1>: Any + Sized {
pub trait Allocator<R: Dim, C: Dim = U1>: Any + Sized {
/// The type of buffer this allocator can instantiate.
type Buffer: StorageMut<T, R, C> + IsContiguous + Clone + Debug;
type Buffer<T: Scalar>: StorageMut<T, R, C> + IsContiguous + Clone + Debug;
/// The type of buffer with uninitialized components this allocator can instantiate.
type BufferUninit: RawStorageMut<MaybeUninit<T>, R, C> + IsContiguous;
type BufferUninit<T: Scalar>: RawStorageMut<MaybeUninit<T>, R, C> + IsContiguous;
/// Allocates a buffer with the given number of rows and columns without initializing its content.
fn allocate_uninit(nrows: R, ncols: C) -> Self::BufferUninit;
fn allocate_uninit<T: Scalar>(nrows: R, ncols: C) -> Self::BufferUninit<T>;
/// Assumes a data buffer to be initialized.
///
/// # Safety
/// The user must make sure that every single entry of the buffer has been initialized,
/// or Undefined Behavior will immediately occur.
unsafe fn assume_init(uninit: Self::BufferUninit) -> Self::Buffer;
unsafe fn assume_init<T: Scalar>(uninit: Self::BufferUninit<T>) -> Self::Buffer<T>;
/// Allocates a buffer initialized with the content of the given iterator.
fn allocate_from_iterator<I: IntoIterator<Item = T>>(
fn allocate_from_iterator<T: Scalar, I: IntoIterator<Item = T>>(
nrows: R,
ncols: C,
iter: I,
) -> Self::Buffer;
) -> Self::Buffer<T>;
#[inline]
/// Allocates a buffer initialized with the content of the given row-major order iterator.
fn allocate_from_row_iterator<I: IntoIterator<Item = T>>(
fn allocate_from_row_iterator<T: Scalar, I: IntoIterator<Item = T>>(
nrows: R,
ncols: C,
iter: I,
) -> Self::Buffer {
) -> Self::Buffer<T> {
let mut res = Self::allocate_uninit(nrows, ncols);
let mut count = 0;
@ -73,7 +73,7 @@ pub trait Allocator<T, R: Dim, C: Dim = U1>: Any + Sized {
"Matrix init. from row iterator: iterator not long enough."
);
<Self as Allocator<T, R, C>>::assume_init(res)
<Self as Allocator<R, C>>::assume_init(res)
}
}
}
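For context (not part of the diff): with the scalar type moved from the trait to a generic associated type, downstream bounds drop the `T` parameter: `Allocator<T, R, C>` becomes `Allocator<R, C>` and buffers are named `Buffer<T>`. A hypothetical helper (the name `filled` is illustrative, not part of nalgebra), assuming nalgebra 0.33:

use nalgebra::allocator::Allocator;
use nalgebra::{DefaultAllocator, Dim, OMatrix, Scalar};

// Generic constructor that works for any shape the default allocator
// supports; note the scalar-free `Allocator<R, C>` bound.
fn filled<T: Scalar, R: Dim, C: Dim>(nrows: R, ncols: C, value: T) -> OMatrix<T, R, C>
where
    DefaultAllocator: Allocator<R, C>,
{
    OMatrix::from_element_generic(nrows, ncols, value)
}

Under 0.32 the same bound would have been written `Allocator<T, R, C>`, with `Buffer` as a plain associated type.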
@ -81,7 +81,7 @@ pub trait Allocator<T, R: Dim, C: Dim = U1>: Any + Sized {
/// A matrix reallocator. Changes the size of the memory buffer that initially contains (`RFrom` ×
/// `CFrom`) elements to a smaller or larger size (`RTo`, `CTo`).
pub trait Reallocator<T: Scalar, RFrom: Dim, CFrom: Dim, RTo: Dim, CTo: Dim>:
Allocator<T, RFrom, CFrom> + Allocator<T, RTo, CTo>
Allocator<RFrom, CFrom> + Allocator<RTo, CTo>
{
/// Reallocates a buffer of shape `(RTo, CTo)`, possibly reusing a previously allocated buffer
/// `buf`. Data stored by `buf` are linearly copied to the output:
@ -94,8 +94,8 @@ pub trait Reallocator<T: Scalar, RFrom: Dim, CFrom: Dim, RTo: Dim, CTo: Dim>:
unsafe fn reallocate_copy(
nrows: RTo,
ncols: CTo,
buf: <Self as Allocator<T, RFrom, CFrom>>::Buffer,
) -> <Self as Allocator<T, RTo, CTo>>::BufferUninit;
buf: <Self as Allocator<RFrom, CFrom>>::Buffer<T>,
) -> <Self as Allocator<RTo, CTo>>::BufferUninit<T>;
}
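For context (not part of the diff): `reallocate_copy` is what backs shape-changing operations such as `resize`, which carry the existing elements over into a buffer of the new shape. A small usage sketch, assuming nalgebra 0.33:

use nalgebra::DMatrix;

fn main() {
    let m = DMatrix::from_element(2, 2, 1.0f64);

    // Growing goes through Reallocator::reallocate_copy; entries that have
    // no counterpart in the old matrix are filled with the supplied value.
    let bigger = m.resize(3, 4, 0.0);

    assert_eq!(bigger.nrows(), 3);
    assert_eq!(bigger.ncols(), 4);
    assert_eq!(bigger[(0, 0)], 1.0);
    assert_eq!(bigger[(2, 3)], 0.0);
}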
/// The number of rows of the result of a componentwise operation on two matrices.
@ -106,8 +106,8 @@ pub type SameShapeC<C1, C2> = <ShapeConstraint as SameNumberOfColumns<C1, C2>>::
// TODO: Bad name.
/// Restricts the given number of rows and columns to be respectively the same.
pub trait SameShapeAllocator<T, R1, C1, R2, C2>:
Allocator<T, R1, C1> + Allocator<T, SameShapeR<R1, R2>, SameShapeC<C1, C2>>
pub trait SameShapeAllocator<R1, C1, R2, C2>:
Allocator<R1, C1> + Allocator<SameShapeR<R1, R2>, SameShapeC<C1, C2>>
where
R1: Dim,
R2: Dim,
@ -117,21 +117,21 @@ where
{
}
impl<T, R1, R2, C1, C2> SameShapeAllocator<T, R1, C1, R2, C2> for DefaultAllocator
impl<R1, R2, C1, C2> SameShapeAllocator<R1, C1, R2, C2> for DefaultAllocator
where
R1: Dim,
R2: Dim,
C1: Dim,
C2: Dim,
DefaultAllocator: Allocator<T, R1, C1> + Allocator<T, SameShapeR<R1, R2>, SameShapeC<C1, C2>>,
DefaultAllocator: Allocator<R1, C1> + Allocator<SameShapeR<R1, R2>, SameShapeC<C1, C2>>,
ShapeConstraint: SameNumberOfRows<R1, R2> + SameNumberOfColumns<C1, C2>,
{
}
// XXX: Bad name.
/// Restricts the given number of rows to be equal.
pub trait SameShapeVectorAllocator<T, R1, R2>:
Allocator<T, R1> + Allocator<T, SameShapeR<R1, R2>> + SameShapeAllocator<T, R1, U1, R2, U1>
pub trait SameShapeVectorAllocator<R1, R2>:
Allocator<R1> + Allocator<SameShapeR<R1, R2>> + SameShapeAllocator<R1, U1, R2, U1>
where
R1: Dim,
R2: Dim,
@ -139,11 +139,11 @@ where
{
}
impl<T, R1, R2> SameShapeVectorAllocator<T, R1, R2> for DefaultAllocator
impl<R1, R2> SameShapeVectorAllocator<R1, R2> for DefaultAllocator
where
R1: Dim,
R2: Dim,
DefaultAllocator: Allocator<T, R1, U1> + Allocator<T, SameShapeR<R1, R2>>,
DefaultAllocator: Allocator<R1, U1> + Allocator<SameShapeR<R1, R2>>,
ShapeConstraint: SameNumberOfRows<R1, R2>,
{
}
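For context (not part of the diff): `SameShapeAllocator` is what lets operands with different but compatible dimension types (e.g. `Const<2>` and `Dyn`) be combined, with the result taking the representative shape. A minimal sketch, assuming nalgebra 0.33:

use nalgebra::{DMatrix, Matrix2};

fn main() {
    let a = Matrix2::new(1.0, 2.0,
                         3.0, 4.0);
    let b = DMatrix::from_element(2, 2, 1.0);

    // Statically-sized + dynamically-sized: the shapes are reconciled through
    // SameShapeAllocator and the sum comes back as a statically-sized Matrix2.
    let c = a + b;
    assert_eq!(c[(1, 1)], 5.0);
}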

View File

@ -27,7 +27,7 @@ use std::mem;
* Static RawStorage.
*
*/
/// A array-based statically sized matrix data storage.
/// An array-based statically sized matrix data storage.
#[repr(transparent)]
#[derive(Copy, Clone, PartialEq, Eq, Hash)]
#[cfg_attr(
@ -42,7 +42,6 @@ use std::mem;
)
)]
#[cfg_attr(feature = "rkyv-serialize", derive(bytecheck::CheckBytes))]
#[cfg_attr(feature = "cuda", derive(cust_core::DeviceCopy))]
pub struct ArrayStorage<T, const R: usize, const C: usize>(pub [[T; R]; C]);
impl<T, const R: usize, const C: usize> ArrayStorage<T, R, C> {
@ -114,12 +113,12 @@ unsafe impl<T, const R: usize, const C: usize> RawStorage<T, Const<R>, Const<C>>
unsafe impl<T: Scalar, const R: usize, const C: usize> Storage<T, Const<R>, Const<C>>
for ArrayStorage<T, R, C>
where
DefaultAllocator: Allocator<T, Const<R>, Const<C>, Buffer = Self>,
DefaultAllocator: Allocator<Const<R>, Const<C>, Buffer<T> = Self>,
{
#[inline]
fn into_owned(self) -> Owned<T, Const<R>, Const<C>>
where
DefaultAllocator: Allocator<T, Const<R>, Const<C>>,
DefaultAllocator: Allocator<Const<R>, Const<C>>,
{
self
}
@ -127,10 +126,16 @@ where
#[inline]
fn clone_owned(&self) -> Owned<T, Const<R>, Const<C>>
where
DefaultAllocator: Allocator<T, Const<R>, Const<C>>,
DefaultAllocator: Allocator<Const<R>, Const<C>>,
{
self.clone()
}
#[inline]
fn forget_elements(self) {
// No additional cleanup required.
std::mem::forget(self);
}
}
unsafe impl<T, const R: usize, const C: usize> RawStorageMut<T, Const<R>, Const<C>>
@ -251,7 +256,7 @@ where
V: SeqAccess<'a>,
{
let mut out: ArrayStorage<core::mem::MaybeUninit<T>, R, C> =
DefaultAllocator::allocate_uninit(Const::<R>, Const::<C>);
<DefaultAllocator as Allocator<_, _>>::allocate_uninit(Const::<R>, Const::<C>);
let mut curr = 0;
while let Some(value) = visitor.next_element()? {
@ -264,7 +269,7 @@ where
if curr == R * C {
// Safety: all the elements have been initialized.
unsafe { Ok(<DefaultAllocator as Allocator<T, Const<R>, Const<C>>>::assume_init(out)) }
unsafe { Ok(<DefaultAllocator as Allocator<Const<R>, Const<C>>>::assume_init(out)) }
} else {
for i in 0..curr {
// Safety:

View File

@ -1,6 +1,6 @@
use crate::{RawStorage, SimdComplexField};
use num::{One, Zero};
use simba::scalar::{ClosedAdd, ClosedMul};
use simba::scalar::{ClosedAddAssign, ClosedMulAssign};
use crate::base::allocator::Allocator;
use crate::base::blas_uninit::{axcpy_uninit, gemm_uninit, gemv_uninit};
@ -17,7 +17,7 @@ use crate::base::{
/// # Dot/scalar product
impl<T, R: Dim, C: Dim, S: RawStorage<T, R, C>> Matrix<T, R, C, S>
where
T: Scalar + Zero + ClosedAdd + ClosedMul,
T: Scalar + Zero + ClosedAddAssign + ClosedMulAssign,
{
#[inline(always)]
fn dotx<R2: Dim, C2: Dim, SB>(
@ -275,7 +275,7 @@ where
/// # BLAS functions
impl<T, D: Dim, S> Vector<T, D, S>
where
T: Scalar + Zero + ClosedAdd + ClosedMul,
T: Scalar + Zero + ClosedAddAssign + ClosedMulAssign,
S: StorageMut<T, D>,
{
/// Computes `self = a * x * c + b * self`.
@ -609,7 +609,7 @@ where
impl<T, R1: Dim, C1: Dim, S: StorageMut<T, R1, C1>> Matrix<T, R1, C1, S>
where
T: Scalar + Zero + ClosedAdd + ClosedMul,
T: Scalar + Zero + ClosedAddAssign + ClosedMulAssign,
{
#[inline(always)]
fn gerx<D2: Dim, D3: Dim, SB, SC>(
@ -862,7 +862,7 @@ where
impl<T, R1: Dim, C1: Dim, S: StorageMut<T, R1, C1>> Matrix<T, R1, C1, S>
where
T: Scalar + Zero + ClosedAdd + ClosedMul,
T: Scalar + Zero + ClosedAddAssign + ClosedMulAssign,
{
#[inline(always)]
fn xxgerx<D2: Dim, D3: Dim, SB, SC>(
@ -1010,7 +1010,7 @@ where
impl<T, D1: Dim, S: StorageMut<T, D1, D1>> SquareMatrix<T, D1, S>
where
T: Scalar + Zero + One + ClosedAdd + ClosedMul,
T: Scalar + Zero + One + ClosedAddAssign + ClosedMulAssign,
{
/// Computes the quadratic form `self = alpha * lhs * mid * lhs.transpose() + beta * self`.
///
@ -1098,7 +1098,7 @@ where
S3: Storage<T, R3, C3>,
S4: Storage<T, D4, D4>,
ShapeConstraint: DimEq<D1, D1> + DimEq<D1, R3> + DimEq<C3, D4>,
DefaultAllocator: Allocator<T, D1>,
DefaultAllocator: Allocator<D1>,
{
// TODO: would it be useful to avoid the zero-initialization of the workspace data?
let mut work = Matrix::zeros_generic(self.shape_generic().0, Const::<1>);
@ -1196,7 +1196,7 @@ where
S2: Storage<T, D2, D2>,
S3: Storage<T, R3, C3>,
ShapeConstraint: DimEq<D2, R3> + DimEq<D1, C3> + AreMultipliable<C3, R3, D2, U1>,
DefaultAllocator: Allocator<T, D2>,
DefaultAllocator: Allocator<D2>,
{
// TODO: would it be useful to avoid the zero-initialization of the workspace data?
let mut work = Vector::zeros_generic(mid.shape_generic().0, Const::<1>);
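For context (not part of the diff): the `ClosedAddAssign` / `ClosedMulAssign` bounds gate the BLAS-like kernels in this file, such as `dot`, `axpy`, `gemv` and `gemm`. A small `gemv` sketch, assuming nalgebra 0.33 with f64 entries:

use nalgebra::{Matrix2, Vector2};

fn main() {
    let a = Matrix2::new(1.0, 2.0,
                         3.0, 4.0);
    let x = Vector2::new(1.0, 1.0);
    let mut y = Vector2::new(0.5, 0.5);

    // y = alpha * a * x + beta * y
    y.gemv(2.0, &a, &x, 1.0);
    assert_eq!(y, Vector2::new(6.5, 14.5));
}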

View File

@ -11,18 +11,19 @@
#[cfg(feature = "std")]
use matrixmultiply;
use num::{One, Zero};
use simba::scalar::{ClosedAdd, ClosedMul};
use simba::scalar::{ClosedAddAssign, ClosedMulAssign};
#[cfg(feature = "std")]
use std::mem;
use std::{any::TypeId, mem};
use crate::base::constraint::{
AreMultipliable, DimEq, SameNumberOfColumns, SameNumberOfRows, ShapeConstraint,
};
use crate::base::dimension::{Dim, Dyn, U1};
#[cfg(feature = "std")]
use crate::base::dimension::Dyn;
use crate::base::dimension::{Dim, U1};
use crate::base::storage::{RawStorage, RawStorageMut};
use crate::base::uninit::InitStatus;
use crate::base::{Matrix, Scalar, Vector};
use std::any::TypeId;
// # Safety
// The content of `y` must only contain values for which
@ -40,7 +41,7 @@ unsafe fn array_axcpy<Status, T>(
len: usize,
) where
Status: InitStatus<T>,
T: Scalar + Zero + ClosedAdd + ClosedMul,
T: Scalar + Zero + ClosedAddAssign + ClosedMulAssign,
{
for i in 0..len {
let y = Status::assume_init_mut(y.get_unchecked_mut(i * stride1));
@ -60,7 +61,7 @@ fn array_axc<Status, T>(
len: usize,
) where
Status: InitStatus<T>,
T: Scalar + Zero + ClosedAdd + ClosedMul,
T: Scalar + Zero + ClosedAddAssign + ClosedMulAssign,
{
for i in 0..len {
unsafe {
@ -88,7 +89,7 @@ pub unsafe fn axcpy_uninit<Status, T, D1: Dim, D2: Dim, SA, SB>(
c: T,
b: T,
) where
T: Scalar + Zero + ClosedAdd + ClosedMul,
T: Scalar + Zero + ClosedAddAssign + ClosedMulAssign,
SA: RawStorageMut<Status::Value, D1>,
SB: RawStorage<T, D2>,
ShapeConstraint: DimEq<D1, D2>,
@ -128,7 +129,7 @@ pub unsafe fn gemv_uninit<Status, T, D1: Dim, R2: Dim, C2: Dim, D3: Dim, SA, SB,
beta: T,
) where
Status: InitStatus<T>,
T: Scalar + Zero + One + ClosedAdd + ClosedMul,
T: Scalar + Zero + One + ClosedAddAssign + ClosedMulAssign,
SA: RawStorageMut<Status::Value, D1>,
SB: RawStorage<T, R2, C2>,
SC: RawStorage<T, D3>,
@ -198,7 +199,7 @@ pub unsafe fn gemm_uninit<
beta: T,
) where
Status: InitStatus<T>,
T: Scalar + Zero + One + ClosedAdd + ClosedMul,
T: Scalar + Zero + One + ClosedAddAssign + ClosedMulAssign,
SA: RawStorageMut<Status::Value, R1, C1>,
SB: RawStorage<T, R2, C2>,
SC: RawStorage<T, R3, C3>,

View File

@ -19,13 +19,13 @@ use crate::geometry::{
Rotation3,
};
use simba::scalar::{ClosedAdd, ClosedMul, RealField};
use simba::scalar::{ClosedAddAssign, ClosedMulAssign, RealField};
/// # Translation and scaling in any dimension
impl<T, D: DimName> OMatrix<T, D, D>
where
T: Scalar + Zero + One,
DefaultAllocator: Allocator<T, D, D>,
DefaultAllocator: Allocator<D, D>,
{
/// Creates a new homogeneous matrix that applies the same scaling factor on each dimension.
#[inline]
@ -207,8 +207,11 @@ impl<T: RealField> Matrix4<T> {
}
/// # Append/prepend translation and scaling
impl<T: Scalar + Zero + One + ClosedMul + ClosedAdd, D: DimName, S: Storage<T, D, D>>
SquareMatrix<T, D, S>
impl<
T: Scalar + Zero + One + ClosedMulAssign + ClosedAddAssign,
D: DimName,
S: Storage<T, D, D>,
> SquareMatrix<T, D, S>
{
/// Computes the transformation equal to `self` followed by a uniform scaling factor.
#[inline]
@ -216,7 +219,7 @@ impl<T: Scalar + Zero + One + ClosedMul + ClosedAdd, D: DimName, S: Storage<T, D
pub fn append_scaling(&self, scaling: T) -> OMatrix<T, D, D>
where
D: DimNameSub<U1>,
DefaultAllocator: Allocator<T, D, D>,
DefaultAllocator: Allocator<D, D>,
{
let mut res = self.clone_owned();
res.append_scaling_mut(scaling);
@ -229,7 +232,7 @@ impl<T: Scalar + Zero + One + ClosedMul + ClosedAdd, D: DimName, S: Storage<T, D
pub fn prepend_scaling(&self, scaling: T) -> OMatrix<T, D, D>
where
D: DimNameSub<U1>,
DefaultAllocator: Allocator<T, D, D>,
DefaultAllocator: Allocator<D, D>,
{
let mut res = self.clone_owned();
res.prepend_scaling_mut(scaling);
@ -246,7 +249,7 @@ impl<T: Scalar + Zero + One + ClosedMul + ClosedAdd, D: DimName, S: Storage<T, D
where
D: DimNameSub<U1>,
SB: Storage<T, DimNameDiff<D, U1>>,
DefaultAllocator: Allocator<T, D, D>,
DefaultAllocator: Allocator<D, D>,
{
let mut res = self.clone_owned();
res.append_nonuniform_scaling_mut(scaling);
@ -263,7 +266,7 @@ impl<T: Scalar + Zero + One + ClosedMul + ClosedAdd, D: DimName, S: Storage<T, D
where
D: DimNameSub<U1>,
SB: Storage<T, DimNameDiff<D, U1>>,
DefaultAllocator: Allocator<T, D, D>,
DefaultAllocator: Allocator<D, D>,
{
let mut res = self.clone_owned();
res.prepend_nonuniform_scaling_mut(scaling);
@ -280,7 +283,7 @@ impl<T: Scalar + Zero + One + ClosedMul + ClosedAdd, D: DimName, S: Storage<T, D
where
D: DimNameSub<U1>,
SB: Storage<T, DimNameDiff<D, U1>>,
DefaultAllocator: Allocator<T, D, D>,
DefaultAllocator: Allocator<D, D>,
{
let mut res = self.clone_owned();
res.append_translation_mut(shift);
@ -297,7 +300,7 @@ impl<T: Scalar + Zero + One + ClosedMul + ClosedAdd, D: DimName, S: Storage<T, D
where
D: DimNameSub<U1>,
SB: Storage<T, DimNameDiff<D, U1>>,
DefaultAllocator: Allocator<T, D, D> + Allocator<T, DimNameDiff<D, U1>>,
DefaultAllocator: Allocator<D, D> + Allocator<DimNameDiff<D, U1>>,
{
let mut res = self.clone_owned();
res.prepend_translation_mut(shift);
@ -379,7 +382,7 @@ impl<T: Scalar + Zero + One + ClosedMul + ClosedAdd, D: DimName, S: Storage<T, D
D: DimNameSub<U1>,
S: StorageMut<T, D, D>,
SB: Storage<T, DimNameDiff<D, U1>>,
DefaultAllocator: Allocator<T, DimNameDiff<D, U1>>,
DefaultAllocator: Allocator<DimNameDiff<D, U1>>,
{
let scale = self
.generic_view(
@ -405,9 +408,9 @@ impl<T: Scalar + Zero + One + ClosedMul + ClosedAdd, D: DimName, S: Storage<T, D
/// # Transformation of vectors and points
impl<T: RealField, D: DimNameSub<U1>, S: Storage<T, D, D>> SquareMatrix<T, D, S>
where
DefaultAllocator: Allocator<T, D, D>
+ Allocator<T, DimNameDiff<D, U1>>
+ Allocator<T, DimNameDiff<D, U1>, DimNameDiff<D, U1>>,
DefaultAllocator: Allocator<D, D>
+ Allocator<DimNameDiff<D, U1>>
+ Allocator<DimNameDiff<D, U1>, DimNameDiff<D, U1>>,
{
/// Transforms the given vector, assuming the matrix `self` uses homogeneous coordinates.
#[inline]
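For illustration (not part of the diff): a minimal sketch of `append_scaling`, whose allocator bounds change above; it scales the non-homogeneous rows and leaves the homogeneous row untouched.

```rust
// Sketch only: append a uniform scaling to a homogeneous 4x4 transform.
use nalgebra::Matrix4;

fn main() {
    let m = Matrix4::<f64>::identity();
    let scaled = m.append_scaling(2.0);
    assert_eq!(scaled[(0, 0)], 2.0);
    assert_eq!(scaled[(3, 3)], 1.0); // the homogeneous row is untouched
}
```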

View File

@ -3,7 +3,7 @@
use num::{Signed, Zero};
use std::ops::{Add, Mul};
use simba::scalar::{ClosedDiv, ClosedMul};
use simba::scalar::{ClosedDivAssign, ClosedMulAssign};
use simba::simd::SimdPartialOrd;
use crate::base::allocator::{Allocator, SameShapeAllocator};
@ -11,7 +11,7 @@ use crate::base::constraint::{SameNumberOfColumns, SameNumberOfRows, ShapeConstr
use crate::base::dimension::Dim;
use crate::base::storage::{Storage, StorageMut};
use crate::base::{DefaultAllocator, Matrix, MatrixSum, OMatrix, Scalar};
use crate::ClosedAdd;
use crate::ClosedAddAssign;
/// The type of the result of a matrix component-wise operation.
pub type MatrixComponentOp<T, R1, C1, R2, C2> = MatrixSum<T, R1, C1, R2, C2>;
@ -32,7 +32,7 @@ impl<T: Scalar, R: Dim, C: Dim, S: Storage<T, R, C>> Matrix<T, R, C, S> {
pub fn abs(&self) -> OMatrix<T, R, C>
where
T: Signed,
DefaultAllocator: Allocator<T, R, C>,
DefaultAllocator: Allocator<R, C>,
{
let mut res = self.clone_owned();
@ -55,7 +55,7 @@ macro_rules! component_binop_impl(
where T: $Trait,
R2: Dim, C2: Dim,
SB: Storage<T, R2, C2>,
DefaultAllocator: SameShapeAllocator<T, R1, C1, R2, C2>,
DefaultAllocator: SameShapeAllocator<R1, C1, R2, C2>,
ShapeConstraint: SameNumberOfRows<R1, R2> + SameNumberOfColumns<C1, C2> {
assert_eq!(self.shape(), rhs.shape(), "Componentwise mul/div: mismatched matrix dimensions.");
@ -148,7 +148,7 @@ macro_rules! component_binop_impl(
/// # Componentwise operations
impl<T: Scalar, R1: Dim, C1: Dim, SA: Storage<T, R1, C1>> Matrix<T, R1, C1, SA> {
component_binop_impl!(
component_mul, component_mul_mut, component_mul_assign, cmpy, ClosedMul.mul.mul_assign,
component_mul, component_mul_mut, component_mul_assign, cmpy, ClosedMulAssign.mul.mul_assign,
r"
Componentwise matrix or vector multiplication.
@ -193,7 +193,7 @@ impl<T: Scalar, R1: Dim, C1: Dim, SA: Storage<T, R1, C1>> Matrix<T, R1, C1, SA>
assert_eq!(a, expected);
```
";
component_div, component_div_mut, component_div_assign, cdpy, ClosedDiv.div.div_assign,
component_div, component_div_mut, component_div_assign, cdpy, ClosedDivAssign.div.div_assign,
r"
Componentwise matrix or vector division.
@ -257,7 +257,7 @@ impl<T: Scalar, R1: Dim, C1: Dim, SA: Storage<T, R1, C1>> Matrix<T, R1, C1, SA>
pub fn inf(&self, other: &Self) -> OMatrix<T, R1, C1>
where
T: SimdPartialOrd,
DefaultAllocator: Allocator<T, R1, C1>,
DefaultAllocator: Allocator<R1, C1>,
{
self.zip_map(other, |a, b| a.simd_min(b))
}
@ -278,7 +278,7 @@ impl<T: Scalar, R1: Dim, C1: Dim, SA: Storage<T, R1, C1>> Matrix<T, R1, C1, SA>
pub fn sup(&self, other: &Self) -> OMatrix<T, R1, C1>
where
T: SimdPartialOrd,
DefaultAllocator: Allocator<T, R1, C1>,
DefaultAllocator: Allocator<R1, C1>,
{
self.zip_map(other, |a, b| a.simd_max(b))
}
@ -299,7 +299,7 @@ impl<T: Scalar, R1: Dim, C1: Dim, SA: Storage<T, R1, C1>> Matrix<T, R1, C1, SA>
pub fn inf_sup(&self, other: &Self) -> (OMatrix<T, R1, C1>, OMatrix<T, R1, C1>)
where
T: SimdPartialOrd,
DefaultAllocator: Allocator<T, R1, C1>,
DefaultAllocator: Allocator<R1, C1>,
{
// TODO: can this be optimized?
(self.inf(other), self.sup(other))
@ -320,8 +320,8 @@ impl<T: Scalar, R1: Dim, C1: Dim, SA: Storage<T, R1, C1>> Matrix<T, R1, C1, SA>
#[must_use = "Did you mean to use add_scalar_mut()?"]
pub fn add_scalar(&self, rhs: T) -> OMatrix<T, R1, C1>
where
T: ClosedAdd,
DefaultAllocator: Allocator<T, R1, C1>,
T: ClosedAddAssign,
DefaultAllocator: Allocator<R1, C1>,
{
let mut res = self.clone_owned();
res.add_scalar_mut(rhs);
@ -343,7 +343,7 @@ impl<T: Scalar, R1: Dim, C1: Dim, SA: Storage<T, R1, C1>> Matrix<T, R1, C1, SA>
#[inline]
pub fn add_scalar_mut(&mut self, rhs: T)
where
T: ClosedAdd,
T: ClosedAddAssign,
SA: StorageMut<T, R1, C1>,
{
for e in self.iter_mut() {
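For illustration (not part of the diff): the macro-generated componentwise operations above keep their behavior; only the trait names in the bounds change. A minimal sketch of `component_mul`:

```rust
// Sketch only: componentwise multiplication.
use nalgebra::Matrix2;

fn main() {
    let a = Matrix2::new(1.0, 2.0, 3.0, 4.0);
    let b = Matrix2::new(10.0, 20.0, 30.0, 40.0);
    assert_eq!(a.component_mul(&b), Matrix2::new(10.0, 40.0, 90.0, 160.0));
}
```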

View File

@ -6,7 +6,7 @@ use crate::base::dimension::{Dim, DimName, Dyn};
#[derive(Copy, Clone, Debug)]
pub struct ShapeConstraint;
/// Constraints `C1` and `R2` to be equivalent.
/// Constrains `C1` and `R2` to be equivalent.
pub trait AreMultipliable<R1: Dim, C1: Dim, R2: Dim, C2: Dim>: DimEq<C1, R2> {}
impl<R1: Dim, C1: Dim, R2: Dim, C2: Dim> AreMultipliable<R1, C1, R2, C2> for ShapeConstraint where
@ -14,11 +14,21 @@ impl<R1: Dim, C1: Dim, R2: Dim, C2: Dim> AreMultipliable<R1, C1, R2, C2> for Sha
{
}
/// Constraints `D1` and `D2` to be equivalent.
/// Constrains `D1` and `D2` to be equivalent.
pub trait DimEq<D1: Dim, D2: Dim> {
/// This is either equal to `D1` or `D2`, always choosing the one (if any) which is a type-level
/// constant.
type Representative: Dim;
/// Constructs a value of type `Representative` holding the runtime value shared
/// by `d1` and `d2`.
fn representative(d1: D1, d2: D2) -> Option<Self::Representative> {
if d1.value() != d2.value() {
None
} else {
Some(Self::Representative::from_usize(d1.value()))
}
}
}
impl<D: Dim> DimEq<D, D> for ShapeConstraint {
@ -35,12 +45,19 @@ impl<D: DimName> DimEq<Dyn, D> for ShapeConstraint {
macro_rules! equality_trait_decl(
($($doc: expr, $Trait: ident),* $(,)*) => {$(
// XXX: we can't do something like `DimEq<D1> for D2` because we would require a blancket impl…
// XXX: we can't do something like `DimEq<D1> for D2` because we would require a blanket impl…
#[doc = $doc]
pub trait $Trait<D1: Dim, D2: Dim>: DimEq<D1, D2> + DimEq<D2, D1> {
/// This is either equal to `D1` or `D2`, always choosing the one (if any) which is a type-level
/// constant.
type Representative: Dim;
/// Returns a representative dimension instance if the two are equal,
/// otherwise `None`.
fn representative(d1: D1, d2: D2) -> Option<<Self as $Trait<D1, D2>>::Representative> {
<Self as DimEq<D1, D2>>::representative(d1, d2)
.map(|common_dim| <Self as $Trait<D1, D2>>::Representative::from_usize(common_dim.value()))
}
}
impl<D: Dim> $Trait<D, D> for ShapeConstraint {
@ -58,17 +75,17 @@ macro_rules! equality_trait_decl(
);
equality_trait_decl!(
"Constraints `D1` and `D2` to be equivalent. \
"Constrains `D1` and `D2` to be equivalent. \
They are both assumed to be the number of \
rows of a matrix.",
SameNumberOfRows,
"Constraints `D1` and `D2` to be equivalent. \
"Constrains `D1` and `D2` to be equivalent. \
They are both assumed to be the number of \
columns of a matrix.",
SameNumberOfColumns
);
/// Constraints D1 and D2 to be equivalent, where they both designate dimensions of algebraic
/// Constrains D1 and D2 to be equivalent, where they both designate dimensions of algebraic
/// entities (e.g. square matrices).
pub trait SameDimension<D1: Dim, D2: Dim>:
SameNumberOfRows<D1, D2> + SameNumberOfColumns<D1, D2>
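For illustration (not part of the diff): a hypothetical downstream helper using the new `DimEq::representative`, assuming the `nalgebra::base::constraint` path shown above; `common_dim` is not part of nalgebra.

```rust
// Sketch only: unify two dimensions at runtime via DimEq::representative.
use nalgebra::base::constraint::{DimEq, ShapeConstraint};
use nalgebra::{Const, Dim, Dyn};

fn common_dim<D1: Dim, D2: Dim>(
    d1: D1,
    d2: D2,
) -> Option<<ShapeConstraint as DimEq<D1, D2>>::Representative>
where
    ShapeConstraint: DimEq<D1, D2>,
{
    <ShapeConstraint as DimEq<D1, D2>>::representative(d1, d2)
}

fn main() {
    // Equal runtime values unify; the type-level constant is the representative.
    assert!(common_dim(Dyn(3), Const::<3>).is_some());
    // Mismatched values do not.
    assert!(common_dim(Dyn(2), Const::<3>).is_none());
}
```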

View File

@ -16,7 +16,7 @@ use rand::{
use std::iter;
use typenum::{self, Cmp, Greater};
use simba::scalar::{ClosedAdd, ClosedMul};
use simba::scalar::{ClosedAddAssign, ClosedMulAssign};
use crate::base::allocator::Allocator;
use crate::base::dimension::{Dim, DimName, Dyn, ToTypenum};
@ -29,7 +29,7 @@ use std::mem::MaybeUninit;
impl<T: Scalar, R: Dim, C: Dim> UninitMatrix<T, R, C>
where
DefaultAllocator: Allocator<T, R, C>,
DefaultAllocator: Allocator<R, C>,
{
/// Builds a matrix with uninitialized elements of type `MaybeUninit<T>`.
#[inline(always)]
@ -50,7 +50,7 @@ where
/// These functions should only be used when working on dimension-generic code.
impl<T: Scalar, R: Dim, C: Dim> OMatrix<T, R, C>
where
DefaultAllocator: Allocator<T, R, C>,
DefaultAllocator: Allocator<R, C>,
{
/// Creates a matrix with all its elements set to `elem`.
#[inline]
@ -226,7 +226,7 @@ where
SB: RawStorage<T, Const<1>, C>,
{
assert!(!rows.is_empty(), "At least one row must be given.");
let nrows = R::try_to_usize().unwrap_or_else(|| rows.len());
let nrows = R::try_to_usize().unwrap_or(rows.len());
let ncols = rows[0].len();
assert!(
rows.len() == nrows,
@ -268,7 +268,7 @@ where
SB: RawStorage<T, R>,
{
assert!(!columns.is_empty(), "At least one column must be given.");
let ncols = C::try_to_usize().unwrap_or_else(|| columns.len());
let ncols = C::try_to_usize().unwrap_or(columns.len());
let nrows = columns[0].len();
assert!(
columns.len() == ncols,
@ -338,7 +338,7 @@ where
impl<T, D: Dim> OMatrix<T, D, D>
where
T: Scalar,
DefaultAllocator: Allocator<T, D, D>,
DefaultAllocator: Allocator<D, D>,
{
/// Creates a square matrix with its diagonal set to `diag` and all other entries set to 0.
///
@ -646,7 +646,7 @@ macro_rules! impl_constructors(
/// # Constructors of statically-sized vectors or statically-sized matrices
impl<T: Scalar, R: DimName, C: DimName> OMatrix<T, R, C>
where
DefaultAllocator: Allocator<T, R, C>,
DefaultAllocator: Allocator<R, C>,
{
// TODO: this is not very pretty. We could find a better call syntax.
impl_constructors!(R, C; // Arguments for Matrix<T, ..., S>
@ -658,7 +658,7 @@ where
/// # Constructors of matrices with a dynamic number of columns
impl<T: Scalar, R: DimName> OMatrix<T, R, Dyn>
where
DefaultAllocator: Allocator<T, R, Dyn>,
DefaultAllocator: Allocator<R, Dyn>,
{
impl_constructors!(R, Dyn;
=> R: DimName;
@ -669,7 +669,7 @@ where
/// # Constructors of dynamic vectors and matrices with a dynamic number of rows
impl<T: Scalar, C: DimName> OMatrix<T, Dyn, C>
where
DefaultAllocator: Allocator<T, Dyn, C>,
DefaultAllocator: Allocator<Dyn, C>,
{
impl_constructors!(Dyn, C;
=> C: DimName;
@ -678,9 +678,10 @@ where
}
/// # Constructors of fully dynamic matrices
#[cfg(any(feature = "std", feature = "alloc"))]
impl<T: Scalar> OMatrix<T, Dyn, Dyn>
where
DefaultAllocator: Allocator<T, Dyn, Dyn>,
DefaultAllocator: Allocator<Dyn, Dyn>,
{
impl_constructors!(Dyn, Dyn;
;
@ -697,7 +698,7 @@ where
macro_rules! impl_constructors_from_data(
($data: ident; $($Dims: ty),*; $(=> $DimIdent: ident: $DimBound: ident),*; $($gargs: expr),*; $($args: ident),*) => {
impl<T: Scalar, $($DimIdent: $DimBound, )*> OMatrix<T $(, $Dims)*>
where DefaultAllocator: Allocator<T $(, $Dims)*> {
where DefaultAllocator: Allocator<$($Dims),*> {
/// Creates a matrix with its elements filled with the components provided by a slice
/// in row-major order.
///
@ -800,6 +801,7 @@ impl_constructors_from_data!(data; Dyn, C;
Dyn(data.len() / C::dim()), C::name();
);
#[cfg(any(feature = "std", feature = "alloc"))]
impl_constructors_from_data!(data; Dyn, Dyn;
;
Dyn(nrows), Dyn(ncols);
@ -812,8 +814,8 @@ impl_constructors_from_data!(data; Dyn, Dyn;
*/
impl<T, R: DimName, C: DimName> Zero for OMatrix<T, R, C>
where
T: Scalar + Zero + ClosedAdd,
DefaultAllocator: Allocator<T, R, C>,
T: Scalar + Zero + ClosedAddAssign,
DefaultAllocator: Allocator<R, C>,
{
#[inline]
fn zero() -> Self {
@ -828,8 +830,8 @@ where
impl<T, D: DimName> One for OMatrix<T, D, D>
where
T: Scalar + Zero + One + ClosedMul + ClosedAdd,
DefaultAllocator: Allocator<T, D, D>,
T: Scalar + Zero + One + ClosedMulAssign + ClosedAddAssign,
DefaultAllocator: Allocator<D, D>,
{
#[inline]
fn one() -> Self {
@ -840,7 +842,7 @@ where
impl<T, R: DimName, C: DimName> Bounded for OMatrix<T, R, C>
where
T: Scalar + Bounded,
DefaultAllocator: Allocator<T, R, C>,
DefaultAllocator: Allocator<R, C>,
{
#[inline]
fn max_value() -> Self {
@ -856,7 +858,7 @@ where
#[cfg(feature = "rand-no-std")]
impl<T: Scalar, R: Dim, C: Dim> Distribution<OMatrix<T, R, C>> for Standard
where
DefaultAllocator: Allocator<T, R, C>,
DefaultAllocator: Allocator<R, C>,
Standard: Distribution<T>,
{
#[inline]
@ -874,7 +876,7 @@ where
R: Dim,
C: Dim,
T: Scalar + Arbitrary + Send,
DefaultAllocator: Allocator<T, R, C>,
DefaultAllocator: Allocator<R, C>,
Owned<T, R, C>: Clone + Send,
{
#[inline]
@ -892,7 +894,7 @@ where
#[cfg(feature = "rand")]
impl<T: crate::RealField, D: DimName> Distribution<Unit<OVector<T, D>>> for Standard
where
DefaultAllocator: Allocator<T, D>,
DefaultAllocator: Allocator<D>,
rand_distr::StandardNormal: Distribution<T>,
{
/// Generate a uniformly distributed random unit vector.
@ -1111,7 +1113,7 @@ impl<T, R: DimName> OVector<T, R>
where
R: ToTypenum,
T: Scalar + Zero + One,
DefaultAllocator: Allocator<T, R>,
DefaultAllocator: Allocator<R>,
{
/// The column vector with `val` as its i-th component.
#[inline]
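For illustration (not part of the diff): the constructor impls above show how the allocator bound loses its scalar parameter. A hypothetical downstream generic helper adjusted for the new shape:

```rust
// Sketch only: `filled_like` is a hypothetical helper, not part of nalgebra.
use nalgebra::allocator::Allocator;
use nalgebra::{DefaultAllocator, Dim, OMatrix, Scalar};

fn filled_like<T, R, C>(nrows: R, ncols: C, elem: T) -> OMatrix<T, R, C>
where
    T: Scalar,
    R: Dim,
    C: Dim,
    DefaultAllocator: Allocator<R, C>, // previously `Allocator<T, R, C>`
{
    OMatrix::from_element_generic(nrows, ncols, elem)
}
```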

View File

@ -97,6 +97,8 @@ macro_rules! impl_constructors(
}
/// Creates, without bound checking, a new matrix view from the given data array.
/// # Safety
/// `data[start..start+rstride * cstride]` must be within bounds.
#[inline]
pub unsafe fn from_slice_unchecked(data: &'a [T], start: usize, $($args: usize),*) -> Self {
Self::from_slice_generic_unchecked(data, start, $($gargs),*)
@ -113,6 +115,11 @@ macro_rules! impl_constructors(
}
/// Creates, without bound checking, a new matrix view with the specified strides from the given data array.
///
/// # Safety
///
/// `start`, `rstride`, and `cstride`, combined with the given matrix size, must not
/// index outside of `data`.
#[inline]
pub unsafe fn from_slice_with_strides_unchecked(data: &'a [T], start: usize, $($args: usize,)* rstride: usize, cstride: usize) -> Self {
Self::from_slice_with_strides_generic_unchecked(data, start, $($gargs,)* Dyn(rstride), Dyn(cstride))
@ -257,6 +264,10 @@ macro_rules! impl_constructors_mut(
}
/// Creates, without bound checking, a new mutable matrix view from the given data array.
///
/// # Safety
///
/// `data[start..start+(R * C)]` must be within bounds.
#[inline]
pub unsafe fn from_slice_unchecked(data: &'a mut [T], start: usize, $($args: usize),*) -> Self {
Self::from_slice_generic_unchecked(data, start, $($gargs),*)
@ -274,6 +285,8 @@ macro_rules! impl_constructors_mut(
}
/// Creates, without bound checking, a new mutable matrix view with the specified strides from the given data array.
/// # Safety
/// `data[start..start+rstride * cstride]` must be within bounds.
#[inline]
pub unsafe fn from_slice_with_strides_unchecked(data: &'a mut [T], start: usize, $($args: usize,)* rstride: usize, cstride: usize) -> Self {
Self::from_slice_with_strides_generic_unchecked(
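For illustration (not part of the diff): the safe counterpart of the unchecked view constructors documented above, assuming the fully static `from_slice` takes only the slice:

```rust
// Sketch only: build an immutable 2x3 view over a column-major slice.
use nalgebra::MatrixView2x3;

fn main() {
    let data = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0];
    // Columns are [1, 2], [3, 4], [5, 6].
    let view = MatrixView2x3::from_slice(&data);
    assert_eq!(view[(1, 2)], 6.0);
}
```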

View File

@ -8,11 +8,11 @@ use simba::simd::{PrimitiveSimdValue, SimdValue};
use crate::base::allocator::{Allocator, SameShapeAllocator};
use crate::base::constraint::{SameNumberOfColumns, SameNumberOfRows, ShapeConstraint};
#[cfg(any(feature = "std", feature = "alloc"))]
use crate::base::dimension::Dyn;
use crate::base::dimension::{
Const, Dim, DimName, U1, U10, U11, U12, U13, U14, U15, U16, U2, U3, U4, U5, U6, U7, U8, U9,
Const, Dim, U1, U10, U11, U12, U13, U14, U15, U16, U2, U3, U4, U5, U6, U7, U8, U9,
};
#[cfg(any(feature = "std", feature = "alloc"))]
use crate::base::dimension::{DimName, Dyn};
use crate::base::iter::{MatrixIter, MatrixIterMut};
use crate::base::storage::{IsContiguous, RawStorage, RawStorageMut};
use crate::base::{
@ -35,8 +35,7 @@ where
C2: Dim,
T1: Scalar,
T2: Scalar + SupersetOf<T1>,
DefaultAllocator:
Allocator<T2, R2, C2> + Allocator<T1, R1, C1> + SameShapeAllocator<T1, R1, C1, R2, C2>,
DefaultAllocator: Allocator<R2, C2> + Allocator<R1, C1> + SameShapeAllocator<R1, C1, R2, C2>,
ShapeConstraint: SameNumberOfRows<R1, R2> + SameNumberOfColumns<C1, C2>,
{
#[inline]
@ -98,6 +97,18 @@ impl<'a, T: Scalar, R: Dim, C: Dim, S: RawStorage<T, R, C>> IntoIterator
}
}
impl<'a, T: Scalar, R: Dim, C: Dim, RStride: Dim, CStride: Dim> IntoIterator
for Matrix<T, R, C, ViewStorage<'a, T, R, C, RStride, CStride>>
{
type Item = &'a T;
type IntoIter = MatrixIter<'a, T, R, C, ViewStorage<'a, T, R, C, RStride, CStride>>;
#[inline]
fn into_iter(self) -> Self::IntoIter {
MatrixIter::new_owned(self.data)
}
}
impl<'a, T: Scalar, R: Dim, C: Dim, S: RawStorageMut<T, R, C>> IntoIterator
for &'a mut Matrix<T, R, C, S>
{
@ -110,6 +121,18 @@ impl<'a, T: Scalar, R: Dim, C: Dim, S: RawStorageMut<T, R, C>> IntoIterator
}
}
impl<'a, T: Scalar, R: Dim, C: Dim, RStride: Dim, CStride: Dim> IntoIterator
for Matrix<T, R, C, ViewStorageMut<'a, T, R, C, RStride, CStride>>
{
type Item = &'a mut T;
type IntoIter = MatrixIterMut<'a, T, R, C, ViewStorageMut<'a, T, R, C, RStride, CStride>>;
#[inline]
fn into_iter(self) -> Self::IntoIter {
MatrixIterMut::new_owned_mut(self.data)
}
}
impl<T: Scalar, const D: usize> From<[T; D]> for SVector<T, D> {
#[inline]
fn from(arr: [T; D]) -> Self {
@ -473,7 +496,7 @@ where
}
#[cfg(any(feature = "std", feature = "alloc"))]
impl<'a, T: Scalar> From<Vec<T>> for DVector<T> {
impl<T: Scalar> From<Vec<T>> for DVector<T> {
#[inline]
fn from(vec: Vec<T>) -> Self {
Self::from_vec(vec)
@ -481,7 +504,7 @@ impl<'a, T: Scalar> From<Vec<T>> for DVector<T> {
}
#[cfg(any(feature = "std", feature = "alloc"))]
impl<'a, T: Scalar> From<Vec<T>> for RowDVector<T> {
impl<T: Scalar> From<Vec<T>> for RowDVector<T> {
#[inline]
fn from(vec: Vec<T>) -> Self {
Self::from_vec(vec)
@ -537,7 +560,7 @@ impl<T: Scalar + PrimitiveSimdValue, R: Dim, C: Dim> From<[OMatrix<T::Element, R
where
T: From<[<T as SimdValue>::Element; 2]>,
T::Element: Scalar + SimdValue,
DefaultAllocator: Allocator<T, R, C> + Allocator<T::Element, R, C>,
DefaultAllocator: Allocator<R, C>,
{
#[inline]
fn from(arr: [OMatrix<T::Element, R, C>; 2]) -> Self {
@ -554,7 +577,7 @@ impl<T: Scalar + PrimitiveSimdValue, R: Dim, C: Dim> From<[OMatrix<T::Element, R
where
T: From<[<T as SimdValue>::Element; 4]>,
T::Element: Scalar + SimdValue,
DefaultAllocator: Allocator<T, R, C> + Allocator<T::Element, R, C>,
DefaultAllocator: Allocator<R, C>,
{
#[inline]
fn from(arr: [OMatrix<T::Element, R, C>; 4]) -> Self {
@ -577,7 +600,7 @@ impl<T: Scalar + PrimitiveSimdValue, R: Dim, C: Dim> From<[OMatrix<T::Element, R
where
T: From<[<T as SimdValue>::Element; 8]>,
T::Element: Scalar + SimdValue,
DefaultAllocator: Allocator<T, R, C> + Allocator<T::Element, R, C>,
DefaultAllocator: Allocator<R, C>,
{
#[inline]
fn from(arr: [OMatrix<T::Element, R, C>; 8]) -> Self {
@ -604,7 +627,7 @@ impl<T: Scalar + PrimitiveSimdValue, R: Dim, C: Dim> From<[OMatrix<T::Element, R
where
T: From<[<T as SimdValue>::Element; 16]>,
T::Element: Scalar + SimdValue,
DefaultAllocator: Allocator<T, R, C> + Allocator<T::Element, R, C>,
DefaultAllocator: Allocator<R, C>,
{
fn from(arr: [OMatrix<T::Element, R, C>; 16]) -> Self {
let (nrows, ncols) = arr[0].shape_generic();
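For illustration (not part of the diff): the new by-value `IntoIterator` impls for view-backed matrices added above allow looping over a view directly; a minimal sketch:

```rust
// Sketch only: iterate a matrix view by value (items are references).
use nalgebra::Matrix3;

fn main() {
    let m = Matrix3::<i32>::from_fn(|r, c| (r * 3 + c) as i32);
    let view = m.fixed_view::<2, 2>(0, 0); // elements 0, 1, 3, 4
    let mut sum = 0;
    for x in view {
        sum += *x;
    }
    assert_eq!(sum, 8);
}
```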

View File

@ -12,14 +12,16 @@ use alloc::vec::Vec;
use super::Const;
use crate::base::allocator::{Allocator, Reallocator};
use crate::base::array_storage::ArrayStorage;
use crate::base::dimension::Dim;
#[cfg(any(feature = "alloc", feature = "std"))]
use crate::base::dimension::Dyn;
use crate::base::dimension::{Dim, DimName};
use crate::base::storage::{RawStorage, RawStorageMut};
use crate::base::dimension::{DimName, Dyn};
use crate::base::storage::{RawStorage, RawStorageMut, Storage};
#[cfg(any(feature = "std", feature = "alloc"))]
use crate::base::vec_storage::VecStorage;
use crate::base::Scalar;
use std::mem::{ManuallyDrop, MaybeUninit};
#[cfg(any(feature = "std", feature = "alloc"))]
use std::mem::ManuallyDrop;
use std::mem::MaybeUninit;
/*
*
@ -32,21 +34,21 @@ use std::mem::{ManuallyDrop, MaybeUninit};
pub struct DefaultAllocator;
// Static - Static
impl<T: Scalar, const R: usize, const C: usize> Allocator<T, Const<R>, Const<C>>
for DefaultAllocator
{
type Buffer = ArrayStorage<T, R, C>;
type BufferUninit = ArrayStorage<MaybeUninit<T>, R, C>;
impl<const R: usize, const C: usize> Allocator<Const<R>, Const<C>> for DefaultAllocator {
type Buffer<T: Scalar> = ArrayStorage<T, R, C>;
type BufferUninit<T: Scalar> = ArrayStorage<MaybeUninit<T>, R, C>;
#[inline(always)]
fn allocate_uninit(_: Const<R>, _: Const<C>) -> ArrayStorage<MaybeUninit<T>, R, C> {
fn allocate_uninit<T: Scalar>(_: Const<R>, _: Const<C>) -> ArrayStorage<MaybeUninit<T>, R, C> {
// SAFETY: An uninitialized `[MaybeUninit<_>; _]` is valid.
let array: [[MaybeUninit<T>; R]; C] = unsafe { MaybeUninit::uninit().assume_init() };
ArrayStorage(array)
}
#[inline(always)]
unsafe fn assume_init(uninit: ArrayStorage<MaybeUninit<T>, R, C>) -> ArrayStorage<T, R, C> {
unsafe fn assume_init<T: Scalar>(
uninit: ArrayStorage<MaybeUninit<T>, R, C>,
) -> ArrayStorage<T, R, C> {
// Safety:
// * The caller guarantees that all elements of the array are initialized
// * `MaybeUninit<T>` and T are guaranteed to have the same layout
@ -56,11 +58,11 @@ impl<T: Scalar, const R: usize, const C: usize> Allocator<T, Const<R>, Const<C>>
}
#[inline]
fn allocate_from_iterator<I: IntoIterator<Item = T>>(
fn allocate_from_iterator<T: Scalar, I: IntoIterator<Item = T>>(
nrows: Const<R>,
ncols: Const<C>,
iter: I,
) -> Self::Buffer {
) -> Self::Buffer<T> {
let mut res = Self::allocate_uninit(nrows, ncols);
let mut count = 0;
@ -78,19 +80,19 @@ impl<T: Scalar, const R: usize, const C: usize> Allocator<T, Const<R>, Const<C>>
// Safety: the assertion above made sure that the iterator
// yielded enough elements to initialize our matrix.
unsafe { <Self as Allocator<T, Const<R>, Const<C>>>::assume_init(res) }
unsafe { <Self as Allocator<Const<R>, Const<C>>>::assume_init(res) }
}
}
// Dyn - Static
// Dyn - Dyn
#[cfg(any(feature = "std", feature = "alloc"))]
impl<T: Scalar, C: Dim> Allocator<T, Dyn, C> for DefaultAllocator {
type Buffer = VecStorage<T, Dyn, C>;
type BufferUninit = VecStorage<MaybeUninit<T>, Dyn, C>;
impl<C: Dim> Allocator<Dyn, C> for DefaultAllocator {
type Buffer<T: Scalar> = VecStorage<T, Dyn, C>;
type BufferUninit<T: Scalar> = VecStorage<MaybeUninit<T>, Dyn, C>;
#[inline]
fn allocate_uninit(nrows: Dyn, ncols: C) -> VecStorage<MaybeUninit<T>, Dyn, C> {
fn allocate_uninit<T: Scalar>(nrows: Dyn, ncols: C) -> VecStorage<MaybeUninit<T>, Dyn, C> {
let mut data = Vec::new();
let length = nrows.value() * ncols.value();
data.reserve_exact(length);
@ -99,7 +101,9 @@ impl<T: Scalar, C: Dim> Allocator<T, Dyn, C> for DefaultAllocator {
}
#[inline]
unsafe fn assume_init(uninit: VecStorage<MaybeUninit<T>, Dyn, C>) -> VecStorage<T, Dyn, C> {
unsafe fn assume_init<T: Scalar>(
uninit: VecStorage<MaybeUninit<T>, Dyn, C>,
) -> VecStorage<T, Dyn, C> {
// Avoids a double-drop.
let (nrows, ncols) = uninit.shape();
let vec: Vec<_> = uninit.into();
@ -114,11 +118,11 @@ impl<T: Scalar, C: Dim> Allocator<T, Dyn, C> for DefaultAllocator {
}
#[inline]
fn allocate_from_iterator<I: IntoIterator<Item = T>>(
fn allocate_from_iterator<T: Scalar, I: IntoIterator<Item = T>>(
nrows: Dyn,
ncols: C,
iter: I,
) -> Self::Buffer {
) -> Self::Buffer<T> {
let it = iter.into_iter();
let res: Vec<T> = it.collect();
assert!(res.len() == nrows.value() * ncols.value(),
@ -130,12 +134,12 @@ impl<T: Scalar, C: Dim> Allocator<T, Dyn, C> for DefaultAllocator {
// Static - Dyn
#[cfg(any(feature = "std", feature = "alloc"))]
impl<T: Scalar, R: DimName> Allocator<T, R, Dyn> for DefaultAllocator {
type Buffer = VecStorage<T, R, Dyn>;
type BufferUninit = VecStorage<MaybeUninit<T>, R, Dyn>;
impl<R: DimName> Allocator<R, Dyn> for DefaultAllocator {
type Buffer<T: Scalar> = VecStorage<T, R, Dyn>;
type BufferUninit<T: Scalar> = VecStorage<MaybeUninit<T>, R, Dyn>;
#[inline]
fn allocate_uninit(nrows: R, ncols: Dyn) -> VecStorage<MaybeUninit<T>, R, Dyn> {
fn allocate_uninit<T: Scalar>(nrows: R, ncols: Dyn) -> VecStorage<MaybeUninit<T>, R, Dyn> {
let mut data = Vec::new();
let length = nrows.value() * ncols.value();
data.reserve_exact(length);
@ -145,7 +149,9 @@ impl<T: Scalar, R: DimName> Allocator<T, R, Dyn> for DefaultAllocator {
}
#[inline]
unsafe fn assume_init(uninit: VecStorage<MaybeUninit<T>, R, Dyn>) -> VecStorage<T, R, Dyn> {
unsafe fn assume_init<T: Scalar>(
uninit: VecStorage<MaybeUninit<T>, R, Dyn>,
) -> VecStorage<T, R, Dyn> {
// Avoids a double-drop.
let (nrows, ncols) = uninit.shape();
let vec: Vec<_> = uninit.into();
@ -160,11 +166,11 @@ impl<T: Scalar, R: DimName> Allocator<T, R, Dyn> for DefaultAllocator {
}
#[inline]
fn allocate_from_iterator<I: IntoIterator<Item = T>>(
fn allocate_from_iterator<T: Scalar, I: IntoIterator<Item = T>>(
nrows: R,
ncols: Dyn,
iter: I,
) -> Self::Buffer {
) -> Self::Buffer<T> {
let it = iter.into_iter();
let res: Vec<T> = it.collect();
assert!(res.len() == nrows.value() * ncols.value(),
@ -185,15 +191,15 @@ impl<T: Scalar, RFrom, CFrom, const RTO: usize, const CTO: usize>
where
RFrom: Dim,
CFrom: Dim,
Self: Allocator<T, RFrom, CFrom>,
Self: Allocator<RFrom, CFrom>,
{
#[inline]
unsafe fn reallocate_copy(
rto: Const<RTO>,
cto: Const<CTO>,
buf: <Self as Allocator<T, RFrom, CFrom>>::Buffer,
buf: <Self as Allocator<RFrom, CFrom>>::Buffer<T>,
) -> ArrayStorage<MaybeUninit<T>, RTO, CTO> {
let mut res = <Self as Allocator<T, Const<RTO>, Const<CTO>>>::allocate_uninit(rto, cto);
let mut res = <Self as Allocator<Const<RTO>, Const<CTO>>>::allocate_uninit(rto, cto);
let (rfrom, cfrom) = buf.shape();
@ -204,8 +210,8 @@ where
// Safety:
// - We dont care about dropping elements because the caller is responsible for dropping things.
// - We forget `buf` so that we dont drop the other elements.
std::mem::forget(buf);
// - We forget `buf` so that we dont drop the other elements, but ensure the buffer itself is cleaned up.
buf.forget_elements();
res
}
@ -224,7 +230,7 @@ where
cto: CTo,
buf: ArrayStorage<T, RFROM, CFROM>,
) -> VecStorage<MaybeUninit<T>, Dyn, CTo> {
let mut res = <Self as Allocator<T, Dyn, CTo>>::allocate_uninit(rto, cto);
let mut res = <Self as Allocator<Dyn, CTo>>::allocate_uninit(rto, cto);
let (rfrom, cfrom) = buf.shape();
@ -235,8 +241,8 @@ where
// Safety:
// - We dont care about dropping elements because the caller is responsible for dropping things.
// - We forget `buf` so that we dont drop the other elements.
std::mem::forget(buf);
// - We forget `buf` so that we dont drop the other elements, but ensure the buffer itself is cleaned up.
buf.forget_elements();
res
}
@ -255,7 +261,7 @@ where
cto: Dyn,
buf: ArrayStorage<T, RFROM, CFROM>,
) -> VecStorage<MaybeUninit<T>, RTo, Dyn> {
let mut res = <Self as Allocator<T, RTo, Dyn>>::allocate_uninit(rto, cto);
let mut res = <Self as Allocator<RTo, Dyn>>::allocate_uninit(rto, cto);
let (rfrom, cfrom) = buf.shape();
@ -266,8 +272,8 @@ where
// Safety:
// - We dont care about dropping elements because the caller is responsible for dropping things.
// - We forget `buf` so that we dont drop the other elements.
std::mem::forget(buf);
// - We forget `buf` so that we dont drop the other elements, but ensure the buffer itself is cleaned up.
buf.forget_elements();
res
}
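For illustration (not part of the diff): with the generic associated types introduced above, the scalar now parameterizes the buffer rather than the allocator impl. A minimal type-level sketch, assuming the re-exports `nalgebra::{ArrayStorage, Const, DefaultAllocator}`:

```rust
// Sketch only: name the buffer type chosen by the allocator for a 2x2 f64 matrix.
use nalgebra::allocator::Allocator;
use nalgebra::{ArrayStorage, Const, DefaultAllocator};

type Buf2x2 = <DefaultAllocator as Allocator<Const<2>, Const<2>>>::Buffer<f64>;

fn main() {
    // For statically-sized matrices this is an ArrayStorage.
    let _buf: Buf2x2 = ArrayStorage([[0.0_f64; 2]; 2]);
}
```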

View File

@ -23,7 +23,6 @@ use serde::{Deserialize, Deserializer, Serialize, Serializer};
feature = "rkyv-serialize",
archive_attr(derive(bytecheck::CheckBytes))
)]
#[cfg_attr(feature = "cuda", derive(cust_core::DeviceCopy))]
pub struct Dyn(pub usize);
#[deprecated(note = "use Dyn instead.")]
@ -68,6 +67,10 @@ impl IsNotStaticOne for Dyn {}
/// Trait implemented by any type that can be used as a dimension. This includes type-level
/// integers and `Dyn` (for dimensions not known at compile-time).
///
/// # Safety
///
/// Hoists integers to the type level, including binary operations.
pub unsafe trait Dim: Any + Debug + Copy + PartialEq + Send + Sync {
#[inline(always)]
fn is<D: Dim>() -> bool {
@ -216,7 +219,6 @@ dim_ops!(
archive(as = "Self")
)]
#[cfg_attr(feature = "rkyv-serialize", derive(bytecheck::CheckBytes))]
#[cfg_attr(feature = "cuda", derive(cust_core::DeviceCopy))]
pub struct Const<const R: usize>;
/// Trait implemented exclusively by type-level integers.

View File

@ -21,7 +21,7 @@ impl<T: Scalar + Zero, R: Dim, C: Dim, S: Storage<T, R, C>> Matrix<T, R, C, S> {
#[must_use]
pub fn upper_triangle(&self) -> OMatrix<T, R, C>
where
DefaultAllocator: Allocator<T, R, C>,
DefaultAllocator: Allocator<R, C>,
{
let mut res = self.clone_owned();
res.fill_lower_triangle(T::zero(), 1);
@ -34,7 +34,7 @@ impl<T: Scalar + Zero, R: Dim, C: Dim, S: Storage<T, R, C>> Matrix<T, R, C, S> {
#[must_use]
pub fn lower_triangle(&self) -> OMatrix<T, R, C>
where
DefaultAllocator: Allocator<T, R, C>,
DefaultAllocator: Allocator<R, C>,
{
let mut res = self.clone_owned();
res.fill_upper_triangle(T::zero(), 1);
@ -52,7 +52,7 @@ impl<T: Scalar, R: Dim, C: Dim, S: Storage<T, R, C>> Matrix<T, R, C, S> {
where
I: IntoIterator<Item = &'a usize>,
I::IntoIter: ExactSizeIterator + Clone,
DefaultAllocator: Allocator<T, Dyn, C>,
DefaultAllocator: Allocator<Dyn, C>,
{
let irows = irows.into_iter();
let ncols = self.shape_generic().1;
@ -89,7 +89,7 @@ impl<T: Scalar, R: Dim, C: Dim, S: Storage<T, R, C>> Matrix<T, R, C, S> {
where
I: IntoIterator<Item = &'a usize>,
I::IntoIter: ExactSizeIterator,
DefaultAllocator: Allocator<T, R, Dyn>,
DefaultAllocator: Allocator<R, Dyn>,
{
let icols = icols.into_iter();
let nrows = self.shape_generic().0;
@ -598,7 +598,7 @@ impl<T: Scalar, R: Dim, C: Dim, S: Storage<T, R, C>> Matrix<T, R, C, S> {
if nremove.value() != 0 {
unsafe {
compress_rows(
&mut m.as_mut_slice(),
m.as_mut_slice(),
nrows.value(),
ncols.value(),
i,
@ -796,7 +796,7 @@ impl<T: Scalar, R: Dim, C: Dim, S: Storage<T, R, C>> Matrix<T, R, C, S> {
if ninsert.value() != 0 {
extend_rows(
&mut res.as_mut_slice(),
res.as_mut_slice(),
nrows.value(),
ncols.value(),
i,
@ -909,7 +909,7 @@ impl<T: Scalar, R: Dim, C: Dim, S: Storage<T, R, C>> Matrix<T, R, C, S> {
unsafe {
if new_nrows.value() < nrows {
compress_rows(
&mut data.as_mut_slice(),
data.as_mut_slice(),
nrows,
ncols,
new_nrows.value(),
@ -923,7 +923,7 @@ impl<T: Scalar, R: Dim, C: Dim, S: Storage<T, R, C>> Matrix<T, R, C, S> {
new_nrows, new_ncols, data.data,
));
extend_rows(
&mut res.as_mut_slice(),
res.as_mut_slice(),
nrows,
new_ncols.value(),
nrows,
@ -1037,7 +1037,7 @@ impl<T: Scalar> OMatrix<T, Dyn, Dyn> {
#[cfg(any(feature = "std", feature = "alloc"))]
impl<T: Scalar, C: Dim> OMatrix<T, Dyn, C>
where
DefaultAllocator: Allocator<T, Dyn, C>,
DefaultAllocator: Allocator<Dyn, C>,
{
/// Changes the number of rows of this matrix in-place.
///
@ -1058,7 +1058,7 @@ where
#[cfg(any(feature = "std", feature = "alloc"))]
impl<T: Scalar, R: Dim> OMatrix<T, R, Dyn>
where
DefaultAllocator: Allocator<T, R, Dyn>,
DefaultAllocator: Allocator<R, Dyn>,
{
/// Changes the number of columns of this matrix in-place.
///
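For illustration (not part of the diff): `upper_triangle`, whose allocator bound changes above, zeroes everything strictly below the diagonal:

```rust
// Sketch only: extract the upper triangle of a square matrix.
use nalgebra::Matrix3;

fn main() {
    let m = Matrix3::new(1, 2, 3,
                         4, 5, 6,
                         7, 8, 9);
    let expected = Matrix3::new(1, 2, 3,
                                0, 5, 6,
                                0, 0, 9);
    assert_eq!(m.upper_triangle(), expected);
}
```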

View File

@ -519,6 +519,10 @@ impl<T, R: Dim, C: Dim, S: RawStorage<T, R, C>> Matrix<T, R, C, S> {
/// Produces a view of the data at the given index, without doing
/// any bounds checking.
///
/// # Safety
///
/// `index` must be within bounds of the array.
#[inline]
#[must_use]
pub unsafe fn get_unchecked<'a, I>(&'a self, index: I) -> I::Output
@ -530,6 +534,9 @@ impl<T, R: Dim, C: Dim, S: RawStorage<T, R, C>> Matrix<T, R, C, S> {
/// Returns a mutable view of the data at the given index, without doing
/// any bounds checking.
/// # Safety
///
/// `index` must be within bounds of the array.
#[inline]
#[must_use]
pub unsafe fn get_unchecked_mut<'a, I>(&'a mut self, index: I) -> I::OutputMut

View File

@ -2,11 +2,14 @@ use crate::storage::Storage;
use crate::{
Allocator, DefaultAllocator, Dim, OVector, One, RealField, Scalar, Unit, Vector, Zero,
};
use simba::scalar::{ClosedAdd, ClosedMul, ClosedSub};
use simba::scalar::{ClosedAddAssign, ClosedMulAssign, ClosedSubAssign};
/// # Interpolation
impl<T: Scalar + Zero + One + ClosedAdd + ClosedSub + ClosedMul, D: Dim, S: Storage<T, D>>
Vector<T, D, S>
impl<
T: Scalar + Zero + One + ClosedAddAssign + ClosedSubAssign + ClosedMulAssign,
D: Dim,
S: Storage<T, D>,
> Vector<T, D, S>
{
/// Returns `self * (1.0 - t) + rhs * t`, i.e., the linear blend of the vectors `self` and `rhs` using the scalar value `t`.
///
@ -23,7 +26,7 @@ impl<T: Scalar + Zero + One + ClosedAdd + ClosedSub + ClosedMul, D: Dim, S: Stor
#[must_use]
pub fn lerp<S2: Storage<T, D>>(&self, rhs: &Vector<T, D, S2>, t: T) -> OVector<T, D>
where
DefaultAllocator: Allocator<T, D>,
DefaultAllocator: Allocator<D>,
{
let mut res = self.clone_owned();
res.axpy(t.clone(), rhs, T::one() - t);
@ -50,7 +53,7 @@ impl<T: Scalar + Zero + One + ClosedAdd + ClosedSub + ClosedMul, D: Dim, S: Stor
pub fn slerp<S2: Storage<T, D>>(&self, rhs: &Vector<T, D, S2>, t: T) -> OVector<T, D>
where
T: RealField,
DefaultAllocator: Allocator<T, D>,
DefaultAllocator: Allocator<D>,
{
let me = Unit::new_normalize(self.clone_owned());
let rhs = Unit::new_normalize(rhs.clone_owned());
@ -81,7 +84,7 @@ impl<T: RealField, D: Dim, S: Storage<T, D>> Unit<Vector<T, D, S>> {
t: T,
) -> Unit<OVector<T, D>>
where
DefaultAllocator: Allocator<T, D>,
DefaultAllocator: Allocator<D>,
{
// TODO: the result is wrong when self and rhs are collinear with opposite direction.
self.try_slerp(rhs, t, T::default_epsilon())
@ -100,7 +103,7 @@ impl<T: RealField, D: Dim, S: Storage<T, D>> Unit<Vector<T, D, S>> {
epsilon: T,
) -> Option<Unit<OVector<T, D>>>
where
DefaultAllocator: Allocator<T, D>,
DefaultAllocator: Allocator<D>,
{
let c_hang = self.dot(rhs);

View File

@ -12,26 +12,29 @@ use std::mem;
use crate::base::dimension::{Dim, U1};
use crate::base::storage::{RawStorage, RawStorageMut};
use crate::base::{Matrix, MatrixView, MatrixViewMut, Scalar};
use crate::base::{Matrix, MatrixView, MatrixViewMut, Scalar, ViewStorage, ViewStorageMut};
#[derive(Clone, Debug)]
struct RawIter<Ptr, T, R: Dim, C: Dim, RStride: Dim, CStride: Dim> {
ptr: Ptr,
inner_ptr: Ptr,
inner_end: Ptr,
size: usize,
strides: (RStride, CStride),
_phantoms: PhantomData<(fn() -> T, R, C)>,
}
macro_rules! iterator {
(struct $Name:ident for $Storage:ident.$ptr: ident -> $Ptr:ty, $Ref:ty, $SRef: ty, $($derives:ident),* $(,)?) => {
/// An iterator through a dense matrix with arbitrary strides.
#[derive($($derives),*)]
pub struct $Name<'a, T, R: Dim, C: Dim, S: 'a + $Storage<T, R, C>> {
ptr: $Ptr,
inner_ptr: $Ptr,
inner_end: $Ptr,
size: usize, // We can't use an end pointer here because a stride might be zero.
strides: (S::RStride, S::CStride),
_phantoms: PhantomData<($Ref, R, C, S)>,
}
// TODO: we need to specialize for the case where the matrix storage is owned (in which
// case the iterator is trivial because it does not have any stride).
impl<'a, T, R: Dim, C: Dim, S: 'a + $Storage<T, R, C>> $Name<'a, T, R, C, S> {
impl<T, R: Dim, C: Dim, RStride: Dim, CStride: Dim>
RawIter<$Ptr, T, R, C, RStride, CStride>
{
/// Creates a new iterator for the given matrix storage.
pub fn new(storage: $SRef) -> $Name<'a, T, R, C, S> {
fn new<'a, S: $Storage<T, R, C, RStride = RStride, CStride = CStride>>(
storage: $SRef,
) -> Self {
let shape = storage.shape();
let strides = storage.strides();
let inner_offset = shape.0.value() * strides.0.value();
@ -55,7 +58,7 @@ macro_rules! iterator {
unsafe { ptr.add(inner_offset) }
};
$Name {
RawIter {
ptr,
inner_ptr: ptr,
inner_end,
@ -66,11 +69,13 @@ macro_rules! iterator {
}
}
impl<'a, T, R: Dim, C: Dim, S: 'a + $Storage<T, R, C>> Iterator for $Name<'a, T, R, C, S> {
type Item = $Ref;
impl<T, R: Dim, C: Dim, RStride: Dim, CStride: Dim> Iterator
for RawIter<$Ptr, T, R, C, RStride, CStride>
{
type Item = $Ptr;
#[inline]
fn next(&mut self) -> Option<$Ref> {
fn next(&mut self) -> Option<Self::Item> {
unsafe {
if self.size == 0 {
None
@ -102,10 +107,7 @@ macro_rules! iterator {
self.ptr = self.ptr.add(stride);
}
// We want either `& *last` or `&mut *last` here, depending
// on the mutability of `$Ref`.
#[allow(clippy::transmute_ptr_to_ref)]
Some(mem::transmute(old))
Some(old)
}
}
}
@ -121,11 +123,11 @@ macro_rules! iterator {
}
}
impl<'a, T, R: Dim, C: Dim, S: 'a + $Storage<T, R, C>> DoubleEndedIterator
for $Name<'a, T, R, C, S>
impl<T, R: Dim, C: Dim, RStride: Dim, CStride: Dim> DoubleEndedIterator
for RawIter<$Ptr, T, R, C, RStride, CStride>
{
#[inline]
fn next_back(&mut self) -> Option<$Ref> {
fn next_back(&mut self) -> Option<Self::Item> {
unsafe {
if self.size == 0 {
None
@ -152,21 +154,85 @@ macro_rules! iterator {
.ptr
.add((outer_remaining * outer_stride + inner_remaining * inner_stride));
// We want either `& *last` or `&mut *last` here, depending
// on the mutability of `$Ref`.
#[allow(clippy::transmute_ptr_to_ref)]
Some(mem::transmute(last))
Some(last)
}
}
}
}
impl<T, R: Dim, C: Dim, RStride: Dim, CStride: Dim> ExactSizeIterator
for RawIter<$Ptr, T, R, C, RStride, CStride>
{
#[inline]
fn len(&self) -> usize {
self.size
}
}
impl<T, R: Dim, C: Dim, RStride: Dim, CStride: Dim> FusedIterator
for RawIter<$Ptr, T, R, C, RStride, CStride>
{
}
/// An iterator through a dense matrix with arbitrary strides.
#[derive($($derives),*)]
pub struct $Name<'a, T, R: Dim, C: Dim, S: 'a + $Storage<T, R, C>> {
inner: RawIter<$Ptr, T, R, C, S::RStride, S::CStride>,
_marker: PhantomData<$Ref>,
}
impl<'a, T, R: Dim, C: Dim, S: 'a + $Storage<T, R, C>> $Name<'a, T, R, C, S> {
/// Creates a new iterator for the given matrix storage.
pub fn new(storage: $SRef) -> Self {
Self {
inner: RawIter::<$Ptr, T, R, C, S::RStride, S::CStride>::new(storage),
_marker: PhantomData,
}
}
}
impl<'a, T, R: Dim, C: Dim, S: 'a + $Storage<T, R, C>> Iterator for $Name<'a, T, R, C, S> {
type Item = $Ref;
#[inline(always)]
fn next(&mut self) -> Option<Self::Item> {
// We want either `& *last` or `&mut *last` here, depending
// on the mutability of `$Ref`.
#[allow(clippy::transmute_ptr_to_ref)]
self.inner.next().map(|ptr| unsafe { mem::transmute(ptr) })
}
#[inline(always)]
fn size_hint(&self) -> (usize, Option<usize>) {
self.inner.size_hint()
}
#[inline(always)]
fn count(self) -> usize {
self.inner.count()
}
}
impl<'a, T, R: Dim, C: Dim, S: 'a + $Storage<T, R, C>> DoubleEndedIterator
for $Name<'a, T, R, C, S>
{
#[inline(always)]
fn next_back(&mut self) -> Option<Self::Item> {
// We want either `& *last` or `&mut *last` here, depending
// on the mutability of `$Ref`.
#[allow(clippy::transmute_ptr_to_ref)]
self.inner
.next_back()
.map(|ptr| unsafe { mem::transmute(ptr) })
}
}
impl<'a, T, R: Dim, C: Dim, S: 'a + $Storage<T, R, C>> ExactSizeIterator
for $Name<'a, T, R, C, S>
{
#[inline]
#[inline(always)]
fn len(&self) -> usize {
self.size
self.inner.len()
}
}
@ -180,6 +246,30 @@ macro_rules! iterator {
iterator!(struct MatrixIter for RawStorage.ptr -> *const T, &'a T, &'a S, Clone, Debug);
iterator!(struct MatrixIterMut for RawStorageMut.ptr_mut -> *mut T, &'a mut T, &'a mut S, Debug);
impl<'a, T, R: Dim, C: Dim, RStride: Dim, CStride: Dim>
MatrixIter<'a, T, R, C, ViewStorage<'a, T, R, C, RStride, CStride>>
{
/// Creates a new iterator for the given matrix storage view.
pub fn new_owned(storage: ViewStorage<'a, T, R, C, RStride, CStride>) -> Self {
Self {
inner: RawIter::<*const T, T, R, C, RStride, CStride>::new(&storage),
_marker: PhantomData,
}
}
}
impl<'a, T, R: Dim, C: Dim, RStride: Dim, CStride: Dim>
MatrixIterMut<'a, T, R, C, ViewStorageMut<'a, T, R, C, RStride, CStride>>
{
/// Creates a new iterator for the given matrix storage view.
pub fn new_owned_mut(mut storage: ViewStorageMut<'a, T, R, C, RStride, CStride>) -> Self {
Self {
inner: RawIter::<*mut T, T, R, C, RStride, CStride>::new(&mut storage),
_marker: PhantomData,
}
}
}
/*
*
* Row iterators.
@ -313,8 +403,9 @@ impl<'a, T, R: Dim, C: Dim, S: 'a + RawStorage<T, R, C>> ColumnIter<'a, T, R, C,
}
}
#[cfg(feature = "rayon")]
pub(crate) fn split_at(self, index: usize) -> (Self, Self) {
// SAFETY: this makes sur the generated ranges are valid.
// SAFETY: this makes sure the generated ranges are valid.
let split_pos = (self.range.start + index).min(self.range.end);
let left_iter = ColumnIter {
@ -401,8 +492,9 @@ impl<'a, T, R: Dim, C: Dim, S: 'a + RawStorageMut<T, R, C>> ColumnIterMut<'a, T,
}
}
#[cfg(feature = "rayon")]
pub(crate) fn split_at(self, index: usize) -> (Self, Self) {
// SAFETY: this makes sur the generated ranges are valid.
// SAFETY: this makes sure the generated ranges are valid.
let split_pos = (self.range.start + index).min(self.range.end);
let left_iter = ColumnIterMut {
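For illustration (not part of the diff): the `RawIter` refactor above changes only the internals; `Matrix::iter` still walks elements in column-major order:

```rust
// Sketch only: iteration order is column-major.
use nalgebra::Matrix2x3;

fn main() {
    let m = Matrix2x3::new(1, 2, 3,
                           4, 5, 6);
    let collected: Vec<i32> = m.iter().copied().collect();
    assert_eq!(collected, vec![1, 4, 2, 5, 3, 6]);
}
```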

View File

@ -18,7 +18,7 @@ use rkyv::bytecheck;
#[cfg(feature = "rkyv-serialize-no-std")]
use rkyv::{with::With, Archive, Archived};
use simba::scalar::{ClosedAdd, ClosedMul, ClosedSub, Field, SupersetOf};
use simba::scalar::{ClosedAddAssign, ClosedMulAssign, ClosedSubAssign, Field, SupersetOf};
use simba::simd::SimdPartialOrd;
use crate::base::allocator::{Allocator, SameShapeAllocator, SameShapeC, SameShapeR};
@ -171,7 +171,6 @@ pub type MatrixCross<T, R1, C1, R2, C2> =
)
)]
#[cfg_attr(feature = "rkyv-serialize", derive(bytecheck::CheckBytes))]
#[cfg_attr(feature = "cuda", derive(cust_core::DeviceCopy))]
pub struct Matrix<T, R, C, S> {
/// The data storage that contains all the matrix components. Disappointed?
///
@ -313,6 +312,10 @@ where
impl<T, R, C, S> Matrix<T, R, C, S> {
/// Creates a new matrix with the given data without statically checking that the matrix
/// dimension matches the storage dimension.
///
/// # Safety
///
/// The storage dimension must match the given dimensions.
#[inline(always)]
pub const unsafe fn from_data_statically_unchecked(data: S) -> Matrix<T, R, C, S> {
Matrix {
@ -380,9 +383,9 @@ impl<T> RowDVector<T> {
}
}
impl<T, R: Dim, C: Dim> UninitMatrix<T, R, C>
impl<T: Scalar, R: Dim, C: Dim> UninitMatrix<T, R, C>
where
DefaultAllocator: Allocator<T, R, C>,
DefaultAllocator: Allocator<R, C>,
{
/// Assumes a matrix's entries to be initialized. This operation should be near zero-cost.
///
@ -391,7 +394,7 @@ where
/// or Undefined Behavior will immediately occur.
#[inline(always)]
pub unsafe fn assume_init(self) -> OMatrix<T, R, C> {
OMatrix::from_data(<DefaultAllocator as Allocator<T, R, C>>::assume_init(
OMatrix::from_data(<DefaultAllocator as Allocator<R, C>>::assume_init(
self.data,
))
}
@ -530,7 +533,7 @@ impl<T, R: Dim, C: Dim, S: RawStorage<T, R, C>> Matrix<T, R, C, S> {
max_relative: T::Epsilon,
) -> bool
where
T: RelativeEq,
T: RelativeEq + Scalar,
R2: Dim,
C2: Dim,
SB: Storage<T, R2, C2>,
@ -565,7 +568,7 @@ impl<T, R: Dim, C: Dim, S: RawStorage<T, R, C>> Matrix<T, R, C, S> {
where
T: Scalar,
S: Storage<T, R, C>,
DefaultAllocator: Allocator<T, R, C>,
DefaultAllocator: Allocator<R, C>,
{
Matrix::from_data(self.data.into_owned())
}
@ -581,7 +584,7 @@ impl<T, R: Dim, C: Dim, S: RawStorage<T, R, C>> Matrix<T, R, C, S> {
S: Storage<T, R, C>,
R2: Dim,
C2: Dim,
DefaultAllocator: SameShapeAllocator<T, R, C, R2, C2>,
DefaultAllocator: SameShapeAllocator<R, C, R2, C2>,
ShapeConstraint: SameNumberOfRows<R, R2> + SameNumberOfColumns<C, C2>,
{
if TypeId::of::<SameShapeStorage<T, R, C, R2, C2>>() == TypeId::of::<Owned<T, R, C>>() {
@ -606,7 +609,7 @@ impl<T, R: Dim, C: Dim, S: RawStorage<T, R, C>> Matrix<T, R, C, S> {
where
T: Scalar,
S: Storage<T, R, C>,
DefaultAllocator: Allocator<T, R, C>,
DefaultAllocator: Allocator<R, C>,
{
Matrix::from_data(self.data.clone_owned())
}
@ -621,7 +624,7 @@ impl<T, R: Dim, C: Dim, S: RawStorage<T, R, C>> Matrix<T, R, C, S> {
S: Storage<T, R, C>,
R2: Dim,
C2: Dim,
DefaultAllocator: SameShapeAllocator<T, R, C, R2, C2>,
DefaultAllocator: SameShapeAllocator<R, C, R2, C2>,
ShapeConstraint: SameNumberOfRows<R, R2> + SameNumberOfColumns<C, C2>,
{
let (nrows, ncols) = self.shape();
@ -697,7 +700,7 @@ impl<T, R: Dim, C: Dim, S: RawStorage<T, R, C>> Matrix<T, R, C, S> {
pub fn transpose(&self) -> OMatrix<T, C, R>
where
T: Scalar,
DefaultAllocator: Allocator<T, C, R>,
DefaultAllocator: Allocator<C, R>,
{
let (nrows, ncols) = self.shape_generic();
@ -716,7 +719,7 @@ impl<T, R: Dim, C: Dim, S: RawStorage<T, R, C>> Matrix<T, R, C, S> {
pub fn map<T2: Scalar, F: FnMut(T) -> T2>(&self, mut f: F) -> OMatrix<T2, R, C>
where
T: Scalar,
DefaultAllocator: Allocator<T2, R, C>,
DefaultAllocator: Allocator<R, C>,
{
let (nrows, ncols) = self.shape_generic();
let mut res = Matrix::uninit(nrows, ncols);
@ -748,7 +751,7 @@ impl<T, R: Dim, C: Dim, S: RawStorage<T, R, C>> Matrix<T, R, C, S> {
where
T: Scalar,
OMatrix<T2, R, C>: SupersetOf<Self>,
DefaultAllocator: Allocator<T2, R, C>,
DefaultAllocator: Allocator<R, C>,
{
crate::convert(self)
}
@ -766,7 +769,7 @@ impl<T, R: Dim, C: Dim, S: RawStorage<T, R, C>> Matrix<T, R, C, S> {
where
T: Scalar,
Self: SupersetOf<OMatrix<T2, R, C>>,
DefaultAllocator: Allocator<T2, R, C>,
DefaultAllocator: Allocator<R, C>,
{
crate::try_convert(self)
}
@ -803,7 +806,7 @@ impl<T, R: Dim, C: Dim, S: RawStorage<T, R, C>> Matrix<T, R, C, S> {
) -> OMatrix<T2, R, C>
where
T: Scalar,
DefaultAllocator: Allocator<T2, R, C>,
DefaultAllocator: Allocator<R, C>,
{
let (nrows, ncols) = self.shape_generic();
let mut res = Matrix::uninit(nrows, ncols);
@ -833,7 +836,7 @@ impl<T, R: Dim, C: Dim, S: RawStorage<T, R, C>> Matrix<T, R, C, S> {
N3: Scalar,
S2: RawStorage<T2, R, C>,
F: FnMut(T, T2) -> N3,
DefaultAllocator: Allocator<N3, R, C>,
DefaultAllocator: Allocator<R, C>,
{
let (nrows, ncols) = self.shape_generic();
let mut res = Matrix::uninit(nrows, ncols);
@ -877,7 +880,7 @@ impl<T, R: Dim, C: Dim, S: RawStorage<T, R, C>> Matrix<T, R, C, S> {
S2: RawStorage<T2, R, C>,
S3: RawStorage<N3, R, C>,
F: FnMut(T, T2, N3) -> N4,
DefaultAllocator: Allocator<N4, R, C>,
DefaultAllocator: Allocator<R, C>,
{
let (nrows, ncols) = self.shape_generic();
let mut res = Matrix::uninit(nrows, ncols);
@ -1194,6 +1197,10 @@ impl<T, R: Dim, C: Dim, S: RawStorageMut<T, R, C>> Matrix<T, R, C, S> {
}
/// Swaps two entries without bound-checking.
///
/// # Safety
///
/// Both `(r, c)` index pairs must satisfy `r < nrows()` and `c < ncols()`.
#[inline]
pub unsafe fn swap_unchecked(&mut self, row_cols1: (usize, usize), row_cols2: (usize, usize)) {
debug_assert!(row_cols1.0 < self.nrows() && row_cols1.1 < self.ncols());
@ -1300,6 +1307,8 @@ impl<T, R: Dim, C: Dim, S: RawStorageMut<T, R, C>> Matrix<T, R, C, S> {
impl<T, D: Dim, S: RawStorage<T, D>> Vector<T, D, S> {
/// Gets a reference to the i-th element of this column vector without bound checking.
/// # Safety
/// `i` must be less than `D`.
#[inline]
#[must_use]
pub unsafe fn vget_unchecked(&self, i: usize) -> &T {
@ -1311,6 +1320,8 @@ impl<T, D: Dim, S: RawStorage<T, D>> Vector<T, D, S> {
impl<T, D: Dim, S: RawStorageMut<T, D>> Vector<T, D, S> {
/// Gets a mutable reference to the i-th element of this column vector without bound checking.
/// # Safety
/// `i` must be less than `D`.
#[inline]
#[must_use]
pub unsafe fn vget_unchecked_mut(&mut self, i: usize) -> &mut T {
@ -1409,7 +1420,7 @@ impl<T: SimdComplexField, R: Dim, C: Dim, S: RawStorage<T, R, C>> Matrix<T, R, C
#[must_use = "Did you mean to use adjoint_mut()?"]
pub fn adjoint(&self) -> OMatrix<T, C, R>
where
DefaultAllocator: Allocator<T, C, R>,
DefaultAllocator: Allocator<C, R>,
{
let (nrows, ncols) = self.shape_generic();
@ -1438,7 +1449,7 @@ impl<T: SimdComplexField, R: Dim, C: Dim, S: RawStorage<T, R, C>> Matrix<T, R, C
#[inline]
pub fn conjugate_transpose(&self) -> OMatrix<T, C, R>
where
DefaultAllocator: Allocator<T, C, R>,
DefaultAllocator: Allocator<C, R>,
{
self.adjoint()
}
@ -1448,7 +1459,7 @@ impl<T: SimdComplexField, R: Dim, C: Dim, S: RawStorage<T, R, C>> Matrix<T, R, C
#[must_use = "Did you mean to use conjugate_mut()?"]
pub fn conjugate(&self) -> OMatrix<T, R, C>
where
DefaultAllocator: Allocator<T, R, C>,
DefaultAllocator: Allocator<R, C>,
{
self.map(|e| e.simd_conjugate())
}
@ -1458,7 +1469,7 @@ impl<T: SimdComplexField, R: Dim, C: Dim, S: RawStorage<T, R, C>> Matrix<T, R, C
#[must_use = "Did you mean to use unscale_mut()?"]
pub fn unscale(&self, real: T::SimdRealField) -> OMatrix<T, R, C>
where
DefaultAllocator: Allocator<T, R, C>,
DefaultAllocator: Allocator<R, C>,
{
self.map(|e| e.simd_unscale(real.clone()))
}
@ -1468,7 +1479,7 @@ impl<T: SimdComplexField, R: Dim, C: Dim, S: RawStorage<T, R, C>> Matrix<T, R, C
#[must_use = "Did you mean to use scale_mut()?"]
pub fn scale(&self, real: T::SimdRealField) -> OMatrix<T, R, C>
where
DefaultAllocator: Allocator<T, R, C>,
DefaultAllocator: Allocator<R, C>,
{
self.map(|e| e.simd_scale(real.clone()))
}
@ -1536,7 +1547,7 @@ impl<T: Scalar, D: Dim, S: RawStorage<T, D, D>> SquareMatrix<T, D, S> {
#[must_use]
pub fn diagonal(&self) -> OVector<T, D>
where
DefaultAllocator: Allocator<T, D>,
DefaultAllocator: Allocator<D>,
{
self.map_diagonal(|e| e)
}
@ -1548,7 +1559,7 @@ impl<T: Scalar, D: Dim, S: RawStorage<T, D, D>> SquareMatrix<T, D, S> {
#[must_use]
pub fn map_diagonal<T2: Scalar>(&self, mut f: impl FnMut(T) -> T2) -> OVector<T2, D>
where
DefaultAllocator: Allocator<T2, D>,
DefaultAllocator: Allocator<D>,
{
assert!(
self.is_square(),
@ -1575,7 +1586,7 @@ impl<T: Scalar, D: Dim, S: RawStorage<T, D, D>> SquareMatrix<T, D, S> {
#[must_use]
pub fn trace(&self) -> T
where
T: Scalar + Zero + ClosedAdd,
T: Scalar + Zero + ClosedAddAssign,
{
assert!(
self.is_square(),
@ -1599,7 +1610,7 @@ impl<T: SimdComplexField, D: Dim, S: Storage<T, D, D>> SquareMatrix<T, D, S> {
#[must_use]
pub fn symmetric_part(&self) -> OMatrix<T, D, D>
where
DefaultAllocator: Allocator<T, D, D>,
DefaultAllocator: Allocator<D, D>,
{
assert!(
self.is_square(),
@ -1616,7 +1627,7 @@ impl<T: SimdComplexField, D: Dim, S: Storage<T, D, D>> SquareMatrix<T, D, S> {
#[must_use]
pub fn hermitian_part(&self) -> OMatrix<T, D, D>
where
DefaultAllocator: Allocator<T, D, D>,
DefaultAllocator: Allocator<D, D>,
{
assert!(
self.is_square(),
@ -1639,7 +1650,7 @@ impl<T: Scalar + Zero + One, D: DimAdd<U1> + IsNotStaticOne, S: RawStorage<T, D,
#[must_use]
pub fn to_homogeneous(&self) -> OMatrix<T, DimSum<D, U1>, DimSum<D, U1>>
where
DefaultAllocator: Allocator<T, DimSum<D, U1>, DimSum<D, U1>>,
DefaultAllocator: Allocator<DimSum<D, U1>, DimSum<D, U1>>,
{
assert!(
self.is_square(),
@ -1660,7 +1671,7 @@ impl<T: Scalar + Zero, D: DimAdd<U1>, S: RawStorage<T, D>> Vector<T, D, S> {
#[must_use]
pub fn to_homogeneous(&self) -> OVector<T, DimSum<D, U1>>
where
DefaultAllocator: Allocator<T, DimSum<D, U1>>,
DefaultAllocator: Allocator<DimSum<D, U1>>,
{
self.push(T::zero())
}
@ -1671,7 +1682,7 @@ impl<T: Scalar + Zero, D: DimAdd<U1>, S: RawStorage<T, D>> Vector<T, D, S> {
pub fn from_homogeneous<SB>(v: Vector<T, DimSum<D, U1>, SB>) -> Option<OVector<T, D>>
where
SB: RawStorage<T, DimSum<D, U1>>,
DefaultAllocator: Allocator<T, D>,
DefaultAllocator: Allocator<D>,
{
if v[v.len() - 1].is_zero() {
let nrows = D::from_usize(v.len() - 1);
@ -1688,7 +1699,7 @@ impl<T: Scalar, D: DimAdd<U1>, S: RawStorage<T, D>> Vector<T, D, S> {
#[must_use]
pub fn push(&self, element: T) -> OVector<T, DimSum<D, U1>>
where
DefaultAllocator: Allocator<T, DimSum<D, U1>>,
DefaultAllocator: Allocator<DimSum<D, U1>>,
{
let len = self.len();
let hnrows = DimSum::<D, U1>::from_usize(len + 1);
@ -1991,8 +2002,12 @@ mod tests {
}
/// # Cross product
impl<T: Scalar + ClosedAdd + ClosedSub + ClosedMul, R: Dim, C: Dim, S: RawStorage<T, R, C>>
Matrix<T, R, C, S>
impl<
T: Scalar + ClosedAddAssign + ClosedSubAssign + ClosedMulAssign,
R: Dim,
C: Dim,
S: RawStorage<T, R, C>,
> Matrix<T, R, C, S>
{
/// The perpendicular product between two 2D column vectors, i.e. `a.x * b.y - a.y * b.x`.
#[inline]
@ -2041,7 +2056,7 @@ impl<T: Scalar + ClosedAdd + ClosedSub + ClosedMul, R: Dim, C: Dim, S: RawStorag
R2: Dim,
C2: Dim,
SB: RawStorage<T, R2, C2>,
DefaultAllocator: SameShapeAllocator<T, R, C, R2, C2>,
DefaultAllocator: SameShapeAllocator<R, C, R2, C2>,
ShapeConstraint: SameNumberOfRows<R, R2> + SameNumberOfColumns<C, C2>,
{
let shape = self.shape();
@ -2241,7 +2256,7 @@ where
where
T: Scalar,
OVector<T2, D>: SupersetOf<Vector<T, D, S>>,
DefaultAllocator: Allocator<T2, D, U1>,
DefaultAllocator: Allocator<D, U1>,
{
Unit::new_unchecked(crate::convert_ref(self.as_ref()))
}
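For illustration (not part of the diff): the 2D perpendicular product documented in the cross-product impl above, `a.x * b.y - a.y * b.x`:

```rust
// Sketch only: perpendicular product of two 2D vectors.
use nalgebra::Vector2;

fn main() {
    let a = Vector2::new(1.0, 2.0);
    let b = Vector2::new(3.0, 4.0);
    // 1 * 4 - 2 * 3 = -2
    assert_eq!(a.perp(&b), -2.0);
}
```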

View File

@ -15,16 +15,12 @@ where
R: Dim,
C: Dim,
T::Element: Scalar,
DefaultAllocator: Allocator<T, R, C> + Allocator<T::Element, R, C>,
DefaultAllocator: Allocator<R, C>,
{
const LANES: usize = T::LANES;
type Element = OMatrix<T::Element, R, C>;
type SimdBool = T::SimdBool;
#[inline]
fn lanes() -> usize {
T::lanes()
}
#[inline]
fn splat(val: Self::Element) -> Self {
val.map(T::splat)

View File

@ -43,6 +43,10 @@ macro_rules! view_storage_impl (
impl<'a, T, R: Dim, C: Dim, RStride: Dim, CStride: Dim> $T<'a, T, R, C, RStride, CStride> {
/// Create a new matrix view without bounds checking and from a raw pointer.
///
/// # Safety
///
/// `ptr` must point to memory that is valid for a `[T; R * C]`.
#[inline]
pub unsafe fn from_raw_parts(ptr: $Ptr,
shape: (R, C),
@ -63,6 +67,11 @@ macro_rules! view_storage_impl (
// Dyn is arbitrary. It's just to be able to call the constructors with `Slice::`
impl<'a, T, R: Dim, C: Dim> $T<'a, T, R, C, Dyn, Dyn> {
/// Create a new matrix view without bounds checking.
///
/// # Safety
///
/// `storage` must contain sufficient elements beyond `start + R * C` such that all
/// accesses are within bounds.
#[inline]
pub unsafe fn new_unchecked<RStor, CStor, S>(storage: $SRef, start: (usize, usize), shape: (R, C))
-> $T<'a, T, R, C, S::RStride, S::CStride>
@ -75,6 +84,10 @@ macro_rules! view_storage_impl (
}
/// Create a new matrix view without bounds checking.
///
/// # Safety
///
/// `strides` must describe a valid stride indexing into `storage`.
#[inline]
pub unsafe fn new_with_strides_unchecked<S, RStor, CStor, RStride, CStride>(storage: $SRef,
start: (usize, usize),
@ -128,12 +141,7 @@ impl<'a, T: Scalar, R: Dim, C: Dim, RStride: Dim, CStride: Dim> Clone
{
#[inline]
fn clone(&self) -> Self {
Self {
ptr: self.ptr,
shape: self.shape,
strides: self.strides,
_phantoms: PhantomData,
}
*self
}
}
@ -210,17 +218,22 @@ macro_rules! storage_impl(
for $T<'a, T, R, C, RStride, CStride> {
#[inline]
fn into_owned(self) -> Owned<T, R, C>
where DefaultAllocator: Allocator<T, R, C> {
where DefaultAllocator: Allocator<R, C> {
self.clone_owned()
}
#[inline]
fn clone_owned(&self) -> Owned<T, R, C>
where DefaultAllocator: Allocator<T, R, C> {
where DefaultAllocator: Allocator<R, C> {
let (nrows, ncols) = self.shape();
let it = MatrixIter::new(self).cloned();
DefaultAllocator::allocate_from_iterator(nrows, ncols, it)
}
#[inline]
fn forget_elements(self) {
// No cleanup required.
}
}
)*}
);
@ -538,8 +551,8 @@ macro_rules! matrix_view_impl (
$me.$generic_view_with_steps(start, shape, steps)
}
/// Slices this matrix starting at its component `(irow, icol)` and with `(R::dim(),
/// CView::dim())` consecutive components.
/// Slices this matrix starting at its component `(irow, icol)` and with `(RVIEW, CVIEW)`
/// consecutive components.
#[inline]
#[deprecated = slice_deprecation_note!($fixed_view)]
pub fn $fixed_slice<const RVIEW: usize, const CVIEW: usize>($me: $Me, irow: usize, icol: usize)
@ -547,8 +560,8 @@ macro_rules! matrix_view_impl (
$me.$fixed_view(irow, icol)
}
/// Return a view of this matrix starting at its component `(irow, icol)` and with `(R::dim(),
/// CView::dim())` consecutive components.
/// Return a view of this matrix starting at its component `(irow, icol)` and with
/// `(RVIEW, CVIEW)` consecutive components.
#[inline]
pub fn $fixed_view<const RVIEW: usize, const CVIEW: usize>($me: $Me, irow: usize, icol: usize)
-> $MatrixView<'_, T, Const<RVIEW>, Const<CVIEW>, S::RStride, S::CStride> {

View File

@ -301,7 +301,7 @@ impl<T: Scalar, R: Dim, C: Dim, S: Storage<T, R, C>> Matrix<T, R, C, S> {
pub fn normalize(&self) -> OMatrix<T, R, C>
where
T: SimdComplexField,
DefaultAllocator: Allocator<T, R, C>,
DefaultAllocator: Allocator<R, C>,
{
self.unscale(self.norm())
}
@ -325,7 +325,7 @@ impl<T: Scalar, R: Dim, C: Dim, S: Storage<T, R, C>> Matrix<T, R, C, S> {
where
T: SimdComplexField,
T::Element: Scalar,
DefaultAllocator: Allocator<T, R, C> + Allocator<T::Element, R, C>,
DefaultAllocator: Allocator<R, C>,
{
let n = self.norm();
let le = n.clone().simd_le(min_norm);
@ -336,7 +336,7 @@ impl<T: Scalar, R: Dim, C: Dim, S: Storage<T, R, C>> Matrix<T, R, C, S> {
/// Sets the magnitude of this vector unless it is smaller than `min_magnitude`.
///
/// If `self.magnitude()` is smaller than `min_magnitude`, it will be left unchanged.
/// Otherwise this is equivalent to: `*self = self.normalize() * magnitude.
/// Otherwise this is equivalent to: `*self = self.normalize() * magnitude`.
#[inline]
pub fn try_set_magnitude(&mut self, magnitude: T::RealField, min_magnitude: T::RealField)
where
@ -356,7 +356,7 @@ impl<T: Scalar, R: Dim, C: Dim, S: Storage<T, R, C>> Matrix<T, R, C, S> {
pub fn cap_magnitude(&self, max: T::RealField) -> OMatrix<T, R, C>
where
T: ComplexField,
DefaultAllocator: Allocator<T, R, C>,
DefaultAllocator: Allocator<R, C>,
{
let n = self.norm();
@ -374,7 +374,7 @@ impl<T: Scalar, R: Dim, C: Dim, S: Storage<T, R, C>> Matrix<T, R, C, S> {
where
T: SimdComplexField,
T::Element: Scalar,
DefaultAllocator: Allocator<T, R, C> + Allocator<T::Element, R, C>,
DefaultAllocator: Allocator<R, C>,
{
let n = self.norm();
let scaled = self.scale(max.clone() / n.clone());
@ -390,7 +390,7 @@ impl<T: Scalar, R: Dim, C: Dim, S: Storage<T, R, C>> Matrix<T, R, C, S> {
pub fn try_normalize(&self, min_norm: T::RealField) -> Option<OMatrix<T, R, C>>
where
T: ComplexField,
DefaultAllocator: Allocator<T, R, C>,
DefaultAllocator: Allocator<R, C>,
{
let n = self.norm();
@ -430,7 +430,7 @@ impl<T: Scalar, R: Dim, C: Dim, S: StorageMut<T, R, C>> Matrix<T, R, C, S> {
where
T: SimdComplexField,
T::Element: Scalar,
DefaultAllocator: Allocator<T, R, C> + Allocator<T::Element, R, C>,
DefaultAllocator: Allocator<R, C>,
{
let n = self.norm();
let le = n.clone().simd_le(min_norm);
@ -459,7 +459,7 @@ impl<T: Scalar, R: Dim, C: Dim, S: StorageMut<T, R, C>> Matrix<T, R, C, S> {
impl<T: SimdComplexField, R: Dim, C: Dim> Normed for OMatrix<T, R, C>
where
DefaultAllocator: Allocator<T, R, C>,
DefaultAllocator: Allocator<R, C>,
{
type Norm = T::SimdRealField;
@ -486,7 +486,7 @@ where
impl<T: Scalar + ClosedNeg, R: Dim, C: Dim> Neg for Unit<OMatrix<T, R, C>>
where
DefaultAllocator: Allocator<T, R, C>,
DefaultAllocator: Allocator<R, C>,
{
type Output = Unit<OMatrix<T, R, C>>;
@ -503,7 +503,7 @@ where
/// # Basis and orthogonalization
impl<T: ComplexField, D: DimName> OVector<T, D>
where
DefaultAllocator: Allocator<T, D>,
DefaultAllocator: Allocator<D>,
{
/// The i-the canonical basis element.
#[inline]
@ -525,7 +525,7 @@ where
let (elt, basis) = vs[..i + 1].split_last_mut().unwrap();
for basis_element in &basis[..nbasis_elements] {
*elt -= &*basis_element * elt.dot(basis_element)
*elt -= basis_element * elt.dot(basis_element)
}
}
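These hunks drop the scalar type from the allocator bounds of the normalization helpers (`normalize`, `try_normalize`, `cap_magnitude`, ...). A hedged sketch of what a downstream generic helper looks like against the new dimension-only bound, plus concrete usage; `unit` is a hypothetical name, not nalgebra API:

```rust
use nalgebra::allocator::Allocator;
use nalgebra::storage::Storage;
use nalgebra::{DefaultAllocator, Dim, Matrix, OMatrix, Scalar, SimdComplexField, Vector3};

// Hypothetical downstream helper: the allocator bound now names the
// dimensions only, without the scalar type.
fn unit<T, R, C, S>(m: &Matrix<T, R, C, S>) -> OMatrix<T, R, C>
where
    T: Scalar + SimdComplexField,
    R: Dim,
    C: Dim,
    S: Storage<T, R, C>,
    DefaultAllocator: Allocator<R, C>, // was `Allocator<T, R, C>`
{
    m.normalize()
}

fn main() {
    let v = Vector3::new(3.0f64, 0.0, 4.0);
    assert!((unit(&v).norm() - 1.0).abs() < 1.0e-12);

    // `try_normalize` refuses vectors whose norm is not above the threshold,
    // and `cap_magnitude` rescales only when the norm exceeds the cap.
    assert!(Vector3::<f64>::zeros().try_normalize(1.0e-12).is_none());
    assert!((v.cap_magnitude(2.0).norm() - 2.0).abs() < 1.0e-12);
}
```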

View File

@ -4,7 +4,9 @@ use std::ops::{
Add, AddAssign, Div, DivAssign, Index, IndexMut, Mul, MulAssign, Neg, Sub, SubAssign,
};
use simba::scalar::{ClosedAdd, ClosedDiv, ClosedMul, ClosedNeg, ClosedSub};
use simba::scalar::{
ClosedAddAssign, ClosedDivAssign, ClosedMulAssign, ClosedNeg, ClosedSubAssign,
};
use crate::base::allocator::{Allocator, SameShapeAllocator, SameShapeC, SameShapeR};
use crate::base::blas_uninit::gemm_uninit;
@ -81,7 +83,7 @@ impl<T, R: Dim, C: Dim, S> Neg for Matrix<T, R, C, S>
where
T: Scalar + ClosedNeg,
S: Storage<T, R, C>,
DefaultAllocator: Allocator<T, R, C>,
DefaultAllocator: Allocator<R, C>,
{
type Output = OMatrix<T, R, C>;
@ -97,7 +99,7 @@ impl<'a, T, R: Dim, C: Dim, S> Neg for &'a Matrix<T, R, C, S>
where
T: Scalar + ClosedNeg,
S: Storage<T, R, C>,
DefaultAllocator: Allocator<T, R, C>,
DefaultAllocator: Allocator<R, C>,
{
type Output = OMatrix<T, R, C>;
@ -262,7 +264,7 @@ macro_rules! componentwise_binop_impl(
T: Scalar + $bound,
SA: Storage<T, R1, C1>,
SB: Storage<T, R2, C2>,
DefaultAllocator: SameShapeAllocator<T, R1, C1, R2, C2>,
DefaultAllocator: SameShapeAllocator<R1, C1, R2, C2>,
ShapeConstraint: SameNumberOfRows<R1, R2> + SameNumberOfColumns<C1, C2> {
type Output = MatrixSum<T, R1, C1, R2, C2>;
@ -280,7 +282,7 @@ macro_rules! componentwise_binop_impl(
T: Scalar + $bound,
SA: Storage<T, R1, C1>,
SB: Storage<T, R2, C2>,
DefaultAllocator: SameShapeAllocator<T, R2, C2, R1, C1>,
DefaultAllocator: SameShapeAllocator<R2, C2, R1, C1>,
ShapeConstraint: SameNumberOfRows<R2, R1> + SameNumberOfColumns<C2, C1> {
type Output = MatrixSum<T, R2, C2, R1, C1>;
@ -298,7 +300,7 @@ macro_rules! componentwise_binop_impl(
T: Scalar + $bound,
SA: Storage<T, R1, C1>,
SB: Storage<T, R2, C2>,
DefaultAllocator: SameShapeAllocator<T, R1, C1, R2, C2>,
DefaultAllocator: SameShapeAllocator<R1, C1, R2, C2>,
ShapeConstraint: SameNumberOfRows<R1, R2> + SameNumberOfColumns<C1, C2> {
type Output = MatrixSum<T, R1, C1, R2, C2>;
@ -313,7 +315,7 @@ macro_rules! componentwise_binop_impl(
T: Scalar + $bound,
SA: Storage<T, R1, C1>,
SB: Storage<T, R2, C2>,
DefaultAllocator: SameShapeAllocator<T, R1, C1, R2, C2>,
DefaultAllocator: SameShapeAllocator<R1, C1, R2, C2>,
ShapeConstraint: SameNumberOfRows<R1, R2> + SameNumberOfColumns<C1, C2> {
type Output = MatrixSum<T, R1, C1, R2, C2>;
@ -357,17 +359,17 @@ macro_rules! componentwise_binop_impl(
}
);
componentwise_binop_impl!(Add, add, ClosedAdd;
componentwise_binop_impl!(Add, add, ClosedAddAssign;
AddAssign, add_assign, add_assign_statically_unchecked, add_assign_statically_unchecked_mut;
add_to, add_to_statically_unchecked_uninit);
componentwise_binop_impl!(Sub, sub, ClosedSub;
componentwise_binop_impl!(Sub, sub, ClosedSubAssign;
SubAssign, sub_assign, sub_assign_statically_unchecked, sub_assign_statically_unchecked_mut;
sub_to, sub_to_statically_unchecked_uninit);
impl<T, R: DimName, C: DimName> iter::Sum for OMatrix<T, R, C>
where
T: Scalar + ClosedAdd + Zero,
DefaultAllocator: Allocator<T, R, C>,
T: Scalar + ClosedAddAssign + Zero,
DefaultAllocator: Allocator<R, C>,
{
fn sum<I: Iterator<Item = OMatrix<T, R, C>>>(iter: I) -> OMatrix<T, R, C> {
iter.fold(Matrix::zero(), |acc, x| acc + x)
@ -376,8 +378,8 @@ where
impl<T, C: Dim> iter::Sum for OMatrix<T, Dyn, C>
where
T: Scalar + ClosedAdd + Zero,
DefaultAllocator: Allocator<T, Dyn, C>,
T: Scalar + ClosedAddAssign + Zero,
DefaultAllocator: Allocator<Dyn, C>,
{
/// # Example
/// ```
@ -406,8 +408,8 @@ where
impl<'a, T, R: DimName, C: DimName> iter::Sum<&'a OMatrix<T, R, C>> for OMatrix<T, R, C>
where
T: Scalar + ClosedAdd + Zero,
DefaultAllocator: Allocator<T, R, C>,
T: Scalar + ClosedAddAssign + Zero,
DefaultAllocator: Allocator<R, C>,
{
fn sum<I: Iterator<Item = &'a OMatrix<T, R, C>>>(iter: I) -> OMatrix<T, R, C> {
iter.fold(Matrix::zero(), |acc, x| acc + x)
@ -416,8 +418,8 @@ where
impl<'a, T, C: Dim> iter::Sum<&'a OMatrix<T, Dyn, C>> for OMatrix<T, Dyn, C>
where
T: Scalar + ClosedAdd + Zero,
DefaultAllocator: Allocator<T, Dyn, C>,
T: Scalar + ClosedAddAssign + Zero,
DefaultAllocator: Allocator<Dyn, C>,
{
/// # Example
/// ```
@ -458,7 +460,7 @@ macro_rules! componentwise_scalarop_impl(
impl<T, R: Dim, C: Dim, S> $Trait<T> for Matrix<T, R, C, S>
where T: Scalar + $bound,
S: Storage<T, R, C>,
DefaultAllocator: Allocator<T, R, C> {
DefaultAllocator: Allocator<R, C> {
type Output = OMatrix<T, R, C>;
#[inline]
@ -482,7 +484,7 @@ macro_rules! componentwise_scalarop_impl(
impl<'a, T, R: Dim, C: Dim, S> $Trait<T> for &'a Matrix<T, R, C, S>
where T: Scalar + $bound,
S: Storage<T, R, C>,
DefaultAllocator: Allocator<T, R, C> {
DefaultAllocator: Allocator<R, C> {
type Output = OMatrix<T, R, C>;
#[inline]
@ -506,13 +508,13 @@ macro_rules! componentwise_scalarop_impl(
}
);
componentwise_scalarop_impl!(Mul, mul, ClosedMul; MulAssign, mul_assign);
componentwise_scalarop_impl!(Div, div, ClosedDiv; DivAssign, div_assign);
componentwise_scalarop_impl!(Mul, mul, ClosedMulAssign; MulAssign, mul_assign);
componentwise_scalarop_impl!(Div, div, ClosedDivAssign; DivAssign, div_assign);
macro_rules! left_scalar_mul_impl(
($($T: ty),* $(,)*) => {$(
impl<R: Dim, C: Dim, S: Storage<$T, R, C>> Mul<Matrix<$T, R, C, S>> for $T
where DefaultAllocator: Allocator<$T, R, C> {
where DefaultAllocator: Allocator<R, C> {
type Output = OMatrix<$T, R, C>;
#[inline]
@ -534,7 +536,7 @@ macro_rules! left_scalar_mul_impl(
}
impl<'b, R: Dim, C: Dim, S: Storage<$T, R, C>> Mul<&'b Matrix<$T, R, C, S>> for $T
where DefaultAllocator: Allocator<$T, R, C> {
where DefaultAllocator: Allocator<R, C> {
type Output = OMatrix<$T, R, C>;
#[inline]
@ -551,10 +553,10 @@ left_scalar_mul_impl!(u8, u16, u32, u64, usize, i8, i16, i32, i64, isize, f32, f
impl<'a, 'b, T, R1: Dim, C1: Dim, R2: Dim, C2: Dim, SA, SB> Mul<&'b Matrix<T, R2, C2, SB>>
for &'a Matrix<T, R1, C1, SA>
where
T: Scalar + Zero + One + ClosedAdd + ClosedMul,
T: Scalar + Zero + One + ClosedAddAssign + ClosedMulAssign,
SA: Storage<T, R1, C1>,
SB: Storage<T, R2, C2>,
DefaultAllocator: Allocator<T, R1, C2>,
DefaultAllocator: Allocator<R1, C2>,
ShapeConstraint: AreMultipliable<R1, C1, R2, C2>,
{
type Output = OMatrix<T, R1, C2>;
@ -573,10 +575,10 @@ where
impl<'a, T, R1: Dim, C1: Dim, R2: Dim, C2: Dim, SA, SB> Mul<Matrix<T, R2, C2, SB>>
for &'a Matrix<T, R1, C1, SA>
where
T: Scalar + Zero + One + ClosedAdd + ClosedMul,
T: Scalar + Zero + One + ClosedAddAssign + ClosedMulAssign,
SB: Storage<T, R2, C2>,
SA: Storage<T, R1, C1>,
DefaultAllocator: Allocator<T, R1, C2>,
DefaultAllocator: Allocator<R1, C2>,
ShapeConstraint: AreMultipliable<R1, C1, R2, C2>,
{
type Output = OMatrix<T, R1, C2>;
@ -590,10 +592,10 @@ where
impl<'b, T, R1: Dim, C1: Dim, R2: Dim, C2: Dim, SA, SB> Mul<&'b Matrix<T, R2, C2, SB>>
for Matrix<T, R1, C1, SA>
where
T: Scalar + Zero + One + ClosedAdd + ClosedMul,
T: Scalar + Zero + One + ClosedAddAssign + ClosedMulAssign,
SB: Storage<T, R2, C2>,
SA: Storage<T, R1, C1>,
DefaultAllocator: Allocator<T, R1, C2>,
DefaultAllocator: Allocator<R1, C2>,
ShapeConstraint: AreMultipliable<R1, C1, R2, C2>,
{
type Output = OMatrix<T, R1, C2>;
@ -607,10 +609,10 @@ where
impl<T, R1: Dim, C1: Dim, R2: Dim, C2: Dim, SA, SB> Mul<Matrix<T, R2, C2, SB>>
for Matrix<T, R1, C1, SA>
where
T: Scalar + Zero + One + ClosedAdd + ClosedMul,
T: Scalar + Zero + One + ClosedAddAssign + ClosedMulAssign,
SB: Storage<T, R2, C2>,
SA: Storage<T, R1, C1>,
DefaultAllocator: Allocator<T, R1, C2>,
DefaultAllocator: Allocator<R1, C2>,
ShapeConstraint: AreMultipliable<R1, C1, R2, C2>,
{
type Output = OMatrix<T, R1, C2>;
@ -629,11 +631,11 @@ where
R1: Dim,
C1: Dim,
R2: Dim,
T: Scalar + Zero + One + ClosedAdd + ClosedMul,
T: Scalar + Zero + One + ClosedAddAssign + ClosedMulAssign,
SB: Storage<T, R2, C1>,
SA: StorageMut<T, R1, C1> + IsContiguous + Clone, // TODO: get rid of the IsContiguous
ShapeConstraint: AreMultipliable<R1, C1, R2, C1>,
DefaultAllocator: Allocator<T, R1, C1, Buffer = SA>,
DefaultAllocator: Allocator<R1, C1, Buffer<T> = SA>,
{
#[inline]
fn mul_assign(&mut self, rhs: Matrix<T, R2, C1, SB>) {
@ -646,12 +648,12 @@ where
R1: Dim,
C1: Dim,
R2: Dim,
T: Scalar + Zero + One + ClosedAdd + ClosedMul,
T: Scalar + Zero + One + ClosedAddAssign + ClosedMulAssign,
SB: Storage<T, R2, C1>,
SA: StorageMut<T, R1, C1> + IsContiguous + Clone, // TODO: get rid of the IsContiguous
ShapeConstraint: AreMultipliable<R1, C1, R2, C1>,
// TODO: this is too restrictive. See comments for the non-ref version.
DefaultAllocator: Allocator<T, R1, C1, Buffer = SA>,
DefaultAllocator: Allocator<R1, C1, Buffer<T> = SA>,
{
#[inline]
fn mul_assign(&mut self, rhs: &'b Matrix<T, R2, C1, SB>) {
@ -662,7 +664,7 @@ where
/// # Special multiplications.
impl<T, R1: Dim, C1: Dim, SA> Matrix<T, R1, C1, SA>
where
T: Scalar + Zero + One + ClosedAdd + ClosedMul,
T: Scalar + Zero + One + ClosedAddAssign + ClosedMulAssign,
SA: Storage<T, R1, C1>,
{
/// Equivalent to `self.transpose() * rhs`.
@ -671,7 +673,7 @@ where
pub fn tr_mul<R2: Dim, C2: Dim, SB>(&self, rhs: &Matrix<T, R2, C2, SB>) -> OMatrix<T, C1, C2>
where
SB: Storage<T, R2, C2>,
DefaultAllocator: Allocator<T, C1, C2>,
DefaultAllocator: Allocator<C1, C2>,
ShapeConstraint: SameNumberOfRows<R1, R2>,
{
let mut res = Matrix::uninit(self.shape_generic().1, rhs.shape_generic().1);
@ -687,7 +689,7 @@ where
where
T: SimdComplexField,
SB: Storage<T, R2, C2>,
DefaultAllocator: Allocator<T, C1, C2>,
DefaultAllocator: Allocator<C1, C2>,
ShapeConstraint: SameNumberOfRows<R1, R2>,
{
let mut res = Matrix::uninit(self.shape_generic().1, rhs.shape_generic().1);
@ -799,11 +801,11 @@ where
rhs: &Matrix<T, R2, C2, SB>,
) -> OMatrix<T, DimProd<R1, R2>, DimProd<C1, C2>>
where
T: ClosedMul,
T: ClosedMulAssign,
R1: DimMul<R2>,
C1: DimMul<C2>,
SB: Storage<T, R2, C2>,
DefaultAllocator: Allocator<T, DimProd<R1, R2>, DimProd<C1, C2>>,
DefaultAllocator: Allocator<DimProd<R1, R2>, DimProd<C1, C2>>,
{
let (nrows1, ncols1) = self.shape_generic();
let (nrows2, ncols2) = rhs.shape_generic();
@ -835,8 +837,8 @@ where
impl<T, D: DimName> iter::Product for OMatrix<T, D, D>
where
T: Scalar + Zero + One + ClosedMul + ClosedAdd,
DefaultAllocator: Allocator<T, D, D>,
T: Scalar + Zero + One + ClosedMulAssign + ClosedAddAssign,
DefaultAllocator: Allocator<D, D>,
{
fn product<I: Iterator<Item = OMatrix<T, D, D>>>(iter: I) -> OMatrix<T, D, D> {
iter.fold(Matrix::one(), |acc, x| acc * x)
@ -845,8 +847,8 @@ where
impl<'a, T, D: DimName> iter::Product<&'a OMatrix<T, D, D>> for OMatrix<T, D, D>
where
T: Scalar + Zero + One + ClosedMul + ClosedAdd,
DefaultAllocator: Allocator<T, D, D>,
T: Scalar + Zero + One + ClosedMulAssign + ClosedAddAssign,
DefaultAllocator: Allocator<D, D>,
{
fn product<I: Iterator<Item = &'a OMatrix<T, D, D>>>(iter: I) -> OMatrix<T, D, D> {
iter.fold(Matrix::one(), |acc, x| acc * x)
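The arithmetic-operator hunks above are largely mechanical: simba renamed `ClosedAdd`/`ClosedSub`/`ClosedMul`/`ClosedDiv` to their `*Assign` forms and the allocator bounds lost the scalar parameter, while the operator surface itself is unchanged. A short sanity sketch, assuming the current nalgebra API, of the operations whose bounds changed here (products, left scalar multiplication, `tr_mul`, `kronecker`, `iter::Sum`):

```rust
use nalgebra::{DMatrix, Matrix2, Matrix2x3};

fn main() {
    let a = Matrix2x3::new(1.0, 2.0, 3.0,
                           4.0, 5.0, 6.0);
    let b = Matrix2x3::new(1.0, 0.0, 1.0,
                           0.0, 1.0, 1.0);

    // `tr_mul(rhs)` is documented above as `self.transpose() * rhs`, computed
    // without materializing the transpose.
    assert_eq!(a.tr_mul(&b), a.transpose() * b);

    // Left scalar multiplication is implemented for the primitive scalar types.
    let m = DMatrix::<f64>::from_element(2, 2, 1.5);
    assert_eq!(2.0 * &m, &m * 2.0);

    // The Kronecker product allocates a (R1*R2) x (C1*C2) result, which is
    // where the `DimProd` allocator bound above comes from.
    let k = Matrix2::new(1.0, 2.0, 3.0, 4.0).kronecker(&Matrix2::identity());
    assert_eq!(k.shape(), (4, 4));
    assert_eq!((k[(0, 0)], k[(2, 2)]), (1.0, 4.0));

    // `iter::Sum` folds owned matrices starting from zero.
    let total: Matrix2<f64> = [Matrix2::identity(); 3].into_iter().sum();
    assert_eq!(total, Matrix2::identity() * 3.0);
}
```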

View File

@ -2,7 +2,7 @@
use approx::RelativeEq;
use num::{One, Zero};
use simba::scalar::{ClosedAdd, ClosedMul, ComplexField, RealField};
use simba::scalar::{ClosedAddAssign, ClosedMulAssign, ComplexField, RealField};
use crate::base::allocator::Allocator;
use crate::base::dimension::{Dim, DimMin};
@ -88,10 +88,10 @@ impl<T: ComplexField, R: Dim, C: Dim, S: Storage<T, R, C>> Matrix<T, R, C, S> {
#[must_use]
pub fn is_orthogonal(&self, eps: T::Epsilon) -> bool
where
T: Zero + One + ClosedAdd + ClosedMul + RelativeEq,
T: Zero + One + ClosedAddAssign + ClosedMulAssign + RelativeEq,
S: Storage<T, R, C>,
T::Epsilon: Clone,
DefaultAllocator: Allocator<T, R, C> + Allocator<T, C, C>,
DefaultAllocator: Allocator<R, C> + Allocator<C, C>,
{
(self.ad_mul(self)).is_identity(eps)
}
@ -99,7 +99,7 @@ impl<T: ComplexField, R: Dim, C: Dim, S: Storage<T, R, C>> Matrix<T, R, C, S> {
impl<T: RealField, D: Dim, S: Storage<T, D, D>> SquareMatrix<T, D, S>
where
DefaultAllocator: Allocator<T, D, D>,
DefaultAllocator: Allocator<D, D>,
{
/// Checks that this matrix is orthogonal and has a determinant equal to 1.
#[inline]
@ -107,7 +107,7 @@ where
pub fn is_special_orthogonal(&self, eps: T) -> bool
where
D: DimMin<D, Output = D>,
DefaultAllocator: Allocator<(usize, usize), D>,
DefaultAllocator: Allocator<D>,
{
self.is_square() && self.is_orthogonal(eps) && self.determinant() > T::zero()
}
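The predicates touched above distinguish orthogonal matrices from special orthogonal ones (orthogonal with determinant +1). A small usage sketch, assuming the current nalgebra API:

```rust
use nalgebra::{Matrix3, Rotation3, Vector3};

fn main() {
    // A rotation matrix is orthogonal with determinant +1, so it passes both checks.
    let r: Matrix3<f64> = Rotation3::from_axis_angle(&Vector3::z_axis(), 0.7).into_inner();
    assert!(r.is_orthogonal(1.0e-12));
    assert!(r.is_special_orthogonal(1.0e-12));

    // A reflection is orthogonal but has determinant -1.
    let refl = Matrix3::new(1.0, 0.0, 0.0,
                            0.0, 1.0, 0.0,
                            0.0, 0.0, -1.0);
    assert!(refl.is_orthogonal(1.0e-12));
    assert!(!refl.is_special_orthogonal(1.0e-12));
}
```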

View File

@ -1,6 +1,6 @@
//! Wrapper that allows changing the generic type of a PhantomData<T>
//!
//! Copied from https://github.com/rkyv/rkyv_contrib (MIT-Apache2 licences) which isnt published yet.
//! Copied from <https://github.com/rkyv/rkyv_contrib> (MIT-Apache2 licences) which isnt published yet.
use rkyv::{
with::{ArchiveWith, DeserializeWith, SerializeWith},
@ -8,7 +8,7 @@ use rkyv::{
};
use std::marker::PhantomData;
/// A wrapper that allows for changing the generic type of a PhantomData<T>.
/// A wrapper that allows for changing the generic type of a `PhantomData<T>`.
pub struct CustomPhantom<NT: ?Sized> {
_data: PhantomData<*const NT>,
}

View File

@ -2,7 +2,7 @@ use crate::allocator::Allocator;
use crate::storage::RawStorage;
use crate::{Const, DefaultAllocator, Dim, Matrix, OVector, RowOVector, Scalar, VectorView, U1};
use num::{One, Zero};
use simba::scalar::{ClosedAdd, ClosedMul, Field, SupersetOf};
use simba::scalar::{ClosedAddAssign, ClosedMulAssign, Field, SupersetOf};
use std::mem::MaybeUninit;
/// # Folding on columns and rows
@ -16,7 +16,7 @@ impl<T: Scalar, R: Dim, C: Dim, S: RawStorage<T, R, C>> Matrix<T, R, C, S> {
f: impl Fn(VectorView<'_, T, R, S::RStride, S::CStride>) -> T,
) -> RowOVector<T, C>
where
DefaultAllocator: Allocator<T, U1, C>,
DefaultAllocator: Allocator<U1, C>,
{
let ncols = self.shape_generic().1;
let mut res = Matrix::uninit(Const::<1>, ncols);
@ -44,7 +44,7 @@ impl<T: Scalar, R: Dim, C: Dim, S: RawStorage<T, R, C>> Matrix<T, R, C, S> {
f: impl Fn(VectorView<'_, T, R, S::RStride, S::CStride>) -> T,
) -> OVector<T, C>
where
DefaultAllocator: Allocator<T, C>,
DefaultAllocator: Allocator<C>,
{
let ncols = self.shape_generic().1;
let mut res = Matrix::uninit(ncols, Const::<1>);
@ -70,7 +70,7 @@ impl<T: Scalar, R: Dim, C: Dim, S: RawStorage<T, R, C>> Matrix<T, R, C, S> {
f: impl Fn(&mut OVector<T, R>, VectorView<'_, T, R, S::RStride, S::CStride>),
) -> OVector<T, R>
where
DefaultAllocator: Allocator<T, R>,
DefaultAllocator: Allocator<R>,
{
let mut res = init;
@ -104,7 +104,7 @@ impl<T: Scalar, R: Dim, C: Dim, S: RawStorage<T, R, C>> Matrix<T, R, C, S> {
#[must_use]
pub fn sum(&self) -> T
where
T: ClosedAdd + Zero,
T: ClosedAddAssign + Zero,
{
self.iter().cloned().fold(T::zero(), |a, b| a + b)
}
@ -132,8 +132,8 @@ impl<T: Scalar, R: Dim, C: Dim, S: RawStorage<T, R, C>> Matrix<T, R, C, S> {
#[must_use]
pub fn row_sum(&self) -> RowOVector<T, C>
where
T: ClosedAdd + Zero,
DefaultAllocator: Allocator<T, U1, C>,
T: ClosedAddAssign + Zero,
DefaultAllocator: Allocator<U1, C>,
{
self.compress_rows(|col| col.sum())
}
@ -159,8 +159,8 @@ impl<T: Scalar, R: Dim, C: Dim, S: RawStorage<T, R, C>> Matrix<T, R, C, S> {
#[must_use]
pub fn row_sum_tr(&self) -> OVector<T, C>
where
T: ClosedAdd + Zero,
DefaultAllocator: Allocator<T, C>,
T: ClosedAddAssign + Zero,
DefaultAllocator: Allocator<C>,
{
self.compress_rows_tr(|col| col.sum())
}
@ -186,8 +186,8 @@ impl<T: Scalar, R: Dim, C: Dim, S: RawStorage<T, R, C>> Matrix<T, R, C, S> {
#[must_use]
pub fn column_sum(&self) -> OVector<T, R>
where
T: ClosedAdd + Zero,
DefaultAllocator: Allocator<T, R>,
T: ClosedAddAssign + Zero,
DefaultAllocator: Allocator<R>,
{
let nrows = self.shape_generic().0;
self.compress_columns(OVector::zeros_generic(nrows, Const::<1>), |out, col| {
@ -215,7 +215,7 @@ impl<T: Scalar, R: Dim, C: Dim, S: RawStorage<T, R, C>> Matrix<T, R, C, S> {
#[must_use]
pub fn product(&self) -> T
where
T: ClosedMul + One,
T: ClosedMulAssign + One,
{
self.iter().cloned().fold(T::one(), |a, b| a * b)
}
@ -243,8 +243,8 @@ impl<T: Scalar, R: Dim, C: Dim, S: RawStorage<T, R, C>> Matrix<T, R, C, S> {
#[must_use]
pub fn row_product(&self) -> RowOVector<T, C>
where
T: ClosedMul + One,
DefaultAllocator: Allocator<T, U1, C>,
T: ClosedMulAssign + One,
DefaultAllocator: Allocator<U1, C>,
{
self.compress_rows(|col| col.product())
}
@ -270,8 +270,8 @@ impl<T: Scalar, R: Dim, C: Dim, S: RawStorage<T, R, C>> Matrix<T, R, C, S> {
#[must_use]
pub fn row_product_tr(&self) -> OVector<T, C>
where
T: ClosedMul + One,
DefaultAllocator: Allocator<T, C>,
T: ClosedMulAssign + One,
DefaultAllocator: Allocator<C>,
{
self.compress_rows_tr(|col| col.product())
}
@ -297,8 +297,8 @@ impl<T: Scalar, R: Dim, C: Dim, S: RawStorage<T, R, C>> Matrix<T, R, C, S> {
#[must_use]
pub fn column_product(&self) -> OVector<T, R>
where
T: ClosedMul + One,
DefaultAllocator: Allocator<T, R>,
T: ClosedMulAssign + One,
DefaultAllocator: Allocator<R>,
{
let nrows = self.shape_generic().0;
self.compress_columns(
@ -339,7 +339,7 @@ impl<T: Scalar, R: Dim, C: Dim, S: RawStorage<T, R, C>> Matrix<T, R, C, S> {
let mean = self.mean();
self.iter().cloned().fold(T::zero(), |acc, x| {
acc + (x.clone() - mean.clone()) * (x.clone() - mean.clone())
acc + (x.clone() - mean.clone()) * (x - mean.clone())
}) / n_elements
}
}
@ -361,7 +361,7 @@ impl<T: Scalar, R: Dim, C: Dim, S: RawStorage<T, R, C>> Matrix<T, R, C, S> {
pub fn row_variance(&self) -> RowOVector<T, C>
where
T: Field + SupersetOf<f64>,
DefaultAllocator: Allocator<T, U1, C>,
DefaultAllocator: Allocator<U1, C>,
{
self.compress_rows(|col| col.variance())
}
@ -382,7 +382,7 @@ impl<T: Scalar, R: Dim, C: Dim, S: RawStorage<T, R, C>> Matrix<T, R, C, S> {
pub fn row_variance_tr(&self) -> OVector<T, C>
where
T: Field + SupersetOf<f64>,
DefaultAllocator: Allocator<T, C>,
DefaultAllocator: Allocator<C>,
{
self.compress_rows_tr(|col| col.variance())
}
@ -404,7 +404,7 @@ impl<T: Scalar, R: Dim, C: Dim, S: RawStorage<T, R, C>> Matrix<T, R, C, S> {
pub fn column_variance(&self) -> OVector<T, R>
where
T: Field + SupersetOf<f64>,
DefaultAllocator: Allocator<T, R>,
DefaultAllocator: Allocator<R>,
{
let (nrows, ncols) = self.shape_generic();
@ -469,7 +469,7 @@ impl<T: Scalar, R: Dim, C: Dim, S: RawStorage<T, R, C>> Matrix<T, R, C, S> {
pub fn row_mean(&self) -> RowOVector<T, C>
where
T: Field + SupersetOf<f64>,
DefaultAllocator: Allocator<T, U1, C>,
DefaultAllocator: Allocator<U1, C>,
{
self.compress_rows(|col| col.mean())
}
@ -490,7 +490,7 @@ impl<T: Scalar, R: Dim, C: Dim, S: RawStorage<T, R, C>> Matrix<T, R, C, S> {
pub fn row_mean_tr(&self) -> OVector<T, C>
where
T: Field + SupersetOf<f64>,
DefaultAllocator: Allocator<T, C>,
DefaultAllocator: Allocator<C>,
{
self.compress_rows_tr(|col| col.mean())
}
@ -511,7 +511,7 @@ impl<T: Scalar, R: Dim, C: Dim, S: RawStorage<T, R, C>> Matrix<T, R, C, S> {
pub fn column_mean(&self) -> OVector<T, R>
where
T: Field + SupersetOf<f64>,
DefaultAllocator: Allocator<T, R>,
DefaultAllocator: Allocator<R>,
{
let (nrows, ncols) = self.shape_generic();
let denom = T::one() / crate::convert::<_, T>(ncols.value() as f64);
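These hunks update the reduction and statistics helpers (`row_sum`, `column_sum`, `product`, `variance`, `mean`, ...) to the renamed trait bounds and the dimension-only allocator. A quick usage sketch of the affected reductions; the behavior itself is unchanged:

```rust
use nalgebra::{Matrix2x3, RowVector3, Vector2, Vector3};

fn main() {
    let m = Matrix2x3::new(1.0, 2.0, 3.0,
                           4.0, 5.0, 6.0);

    // Reductions collapse one dimension; the `*_tr` variants return a column vector.
    assert_eq!(m.row_sum(), RowVector3::new(5.0, 7.0, 9.0));
    assert_eq!(m.row_sum_tr(), Vector3::new(5.0, 7.0, 9.0));
    assert_eq!(m.column_sum(), Vector2::new(6.0, 15.0));

    // Statistics use the population convention (divide by the element count).
    assert_eq!(m.mean(), 3.5);
    assert!((m.variance() - 17.5 / 6.0).abs() < 1.0e-12);
}
```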

Some files were not shown because too many files have changed in this diff.