Some particularly bad inputs may cause severe loss of
significance in the triangular solves. This is exacerbated by
the fact that the way we test the (residual) error is itself
prone to loss of significance, so the error measure is also
problematic. We might improve this in the future by using
arbitrary-precision arithmetic to remove some sources of error
and by testing against appropriate bounds.
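
To make the concern concrete, here is a minimal dense sketch, assuming a
row-major `Vec<Vec<f64>>` lower-triangular matrix rather than the library's
sparse storage; the function names are illustrative only. Both the
subtraction inside the substitution and the one inside the residual are
where cancellation can occur.

```rust
/// Forward substitution on a dense, row-major lower-triangular matrix.
/// Purely illustrative; the actual solver works on sparse storage.
fn solve_lower_triangular(l: &[Vec<f64>], b: &[f64]) -> Vec<f64> {
    let n = b.len();
    let mut x = vec![0.0; n];
    for i in 0..n {
        // Contribution of the already-computed unknowns in row i.
        let s: f64 = (0..i).map(|j| l[i][j] * x[j]).sum();
        // `b[i] - s` is where severe cancellation can occur when s is
        // close to b[i].
        x[i] = (b[i] - s) / l[i][i];
    }
    x
}

/// Relative residual ||L*x - b||_inf / ||b||_inf. By construction L*x is
/// approximately b, so the subtraction here suffers from the same loss of
/// significance, which is why the error measure itself can be unreliable
/// for bad inputs.
fn relative_residual(l: &[Vec<f64>], x: &[f64], b: &[f64]) -> f64 {
    let mut max_r = 0.0_f64;
    let mut max_b = 0.0_f64;
    for i in 0..b.len() {
        let lx: f64 = (0..=i).map(|j| l[i][j] * x[j]).sum();
        max_r = max_r.max((lx - b[i]).abs());
        max_b = max_b.max(b[i].abs());
    }
    max_r / max_b
}
```

Evaluating the residual in higher precision before comparing it against a
bound would sidestep the second problem, along the lines suggested above.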
After much deliberation, I have come to the conclusion that the
benefits do not really outweigh the added complexity. Even though
the added complexity is relatively minor, it would make it somewhat
harder to interoperate with other sparse linear algebra libraries
in the future.
The suffix is intended to communicate that these methods
assume `preallocated` storage, i.e. they try to store the
result in a matrix which already has the correct sparsity
pattern for the operation.
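
The text above does not show a concrete signature, so the following is only
a sketch of what such a method might look like; the `Csr` struct and the
`spadd_prealloc`/`scatter_into` names are hypothetical, not the crate's
actual API. The point it illustrates is that the output's sparsity pattern
is fixed up front, and the routine fails rather than reallocating when an
entry is missing.

```rust
/// Minimal CSR representation, for illustration only.
struct Csr {
    row_offsets: Vec<usize>, // length = nrows + 1
    col_indices: Vec<usize>, // sorted within each row
    values: Vec<f64>,
}

/// Add `value` to the (row, col) entry of `c`, which must already exist in
/// the preallocated pattern; otherwise return an error instead of
/// allocating a new entry.
fn scatter_into(c: &mut Csr, row: usize, col: usize, value: f64) -> Result<(), &'static str> {
    let range = c.row_offsets[row]..c.row_offsets[row + 1];
    let pos = c.col_indices[range.clone()]
        .binary_search(&col)
        .map_err(|_| "entry missing from preallocated sparsity pattern")?;
    c.values[range.start + pos] += value;
    Ok(())
}

/// Hypothetical `*_prealloc`-style routine: compute `c = a + b`, reusing
/// the storage of `c`. No allocation happens; `c`'s pattern must already
/// cover the patterns of both `a` and `b`.
fn spadd_prealloc(c: &mut Csr, a: &Csr, b: &Csr) -> Result<(), &'static str> {
    // Reset the existing values in place.
    for v in c.values.iter_mut() {
        *v = 0.0;
    }
    for m in [a, b] {
        for row in 0..m.row_offsets.len() - 1 {
            for idx in m.row_offsets[row]..m.row_offsets[row + 1] {
                scatter_into(c, row, m.col_indices[idx], m.values[idx])?;
            }
        }
    }
    Ok(())
}
```

A usage sketch under these assumptions: construct `c` once with the target
pattern (e.g. the union of the patterns of `a` and `b`), then call
`spadd_prealloc` repeatedly without any further allocation.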
This mimics how `std` does it, e.g. `std::vec::Vec`. It avoids potential
problems down the road, where adding more types might otherwise clutter
the API and the generated documentation.
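
Assuming this refers to a module-per-format layout (the surrounding text
does not spell it out), the convention might look like the sketch below;
the module and type names are illustrative, not the crate's actual ones.

```rust
// Each matrix format gets its own module, named after the format, with the
// main type inside it, mirroring how `Vec` lives in `std::vec`.
pub mod csr {
    /// Illustrative CSR matrix type; not the crate's real definition.
    pub struct CsrMatrix {
        pub row_offsets: Vec<usize>,
        pub col_indices: Vec<usize>,
        pub values: Vec<f64>,
    }
}

pub mod coo {
    /// Illustrative COO (triplet) matrix type.
    pub struct CooMatrix {
        pub rows: Vec<usize>,
        pub cols: Vec<usize>,
        pub values: Vec<f64>,
    }
}

// Downstream code names types through their modules, e.g. `csr::CsrMatrix`,
// so format-specific helpers added later stay grouped with their format
// instead of crowding one flat namespace.
```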