When looking at how this would generalize to the other public-key
cryptosystems (ECDSA, Ed25519, etc.), I think having fewer submodules
involved makes more sense.
Take a step towards flattening (and simplifying) the public API of
the RSA submodule. This is done as a separate step from the rest of
the work so that the Git history will correctly reflect that signing.rs
gets renamed to keypair.rs, with only minimal modifications, in the
next commit. (If this were merged with the following commit, then Git
would report the new keypair.rs as a new file without any history from
signing.rs.)
Previously, `mgf1()` wrote the mask to the buffer, and then we XORed
the data onto the mask. Now, `mgf1()` XORs the mask onto the data
that is already in the `out` buffer.
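For illustration, here is a minimal sketch of the new shape, written
against *ring*'s public `digest` API; `mgf1_sha256_xor` is a hypothetical
stand-in for the internal function, not its actual implementation:

```rust
use ring::digest::{self, SHA256, SHA256_OUTPUT_LEN};

// Hypothetical, simplified MGF1 (SHA-256 only): the generated mask is XORed
// onto whatever is already in `out`, instead of overwriting `out` and leaving
// the XOR to the caller.
fn mgf1_sha256_xor(seed: &[u8], out: &mut [u8]) {
    for (counter, chunk) in out.chunks_mut(SHA256_OUTPUT_LEN).enumerate() {
        let mut input = Vec::with_capacity(seed.len() + 4);
        input.extend_from_slice(seed);
        input.extend_from_slice(&(counter as u32).to_be_bytes());
        let mask = digest::digest(&SHA256, &input);
        for (out_byte, mask_byte) in chunk.iter_mut().zip(mask.as_ref()) {
            *out_byte ^= *mask_byte; // XOR the mask onto the existing data
        }
    }
}
```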
When *ring* first started, BoringSSL and OpenSSL upstream were both
using an implementation of constant-time-ish exponentiation that took
shortcuts that made it clearly not constant-time. Long ago, that code
was replaced here and in BoringSSL (and probably OpenSSL upstream), so
this comment is no longer correct.
The tests of `bigint` were not doing CPU feature detection themselves.
Thus they were either depending on some other test that runs before them
to do it, or else they were not making use of all the possible CPU
optimizations, and thus not testing all the interesting code paths.
Also, as we are expanding the functionality of the RSA module, it has
become more difficult to track where CPU feature detection has been done
and where it needs to be done. Move the proof that CPU feature
detection has been done down into the callers of the `bn_` functions
that require it.
This will also be helpful if/when we expand the use of the `bigint`
module beyond RSA.
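As a sketch of the pattern this describes (the names are illustrative,
not *ring*'s actual internals), a zero-sized token that can only be
obtained from the detection function serves as the proof, and the
`bn_`-style functions demand it from their callers:

```rust
// Hypothetical "proof token": a `Features` value can only be obtained by
// calling `features()`, so any function that takes a `Features` argument is
// guaranteed that CPU feature detection has already been done.
#[derive(Clone, Copy)]
pub struct Features(());

pub fn features() -> Features {
    // One-time, idempotent CPU feature detection would happen here.
    Features(())
}

// A function that needs the detection takes the token instead of (re)doing
// the detection itself; its callers are responsible for obtaining it.
pub fn elem_mul(_a: &mut [u64], _b: &[u64], _cpu: Features) {
    // ... arithmetic that may dispatch to CPU-specific code ...
}
```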
The digest is never used after encoding, so move it instead of referencing it.
This is more correct since for signing (and soon encryption) the padded value
is only supposed to be used once.
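A minimal illustration of the difference (the function shown is
hypothetical, not the actual padding trait): taking the digest by value
documents that the encoding step consumes it, whereas a reference invites
reuse.

```rust
use ring::digest;

// After (illustrative): the digest is passed by value and consumed here.
pub fn pss_encode(m_hash: digest::Digest, m_out: &mut [u8]) {
    // ... padding computation would go here ...
    let _ = (m_hash, m_out);
}

// Before (illustrative), the digest was borrowed instead:
// pub fn pss_encode(m_hash: &digest::Digest, m_out: &mut [u8]) { ... }
```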
Idiomatic practice in Rust is to avoid type-level constraints in favor
of impl-level constraints so that things aren't over-constrained.
Derive `Clone` and `Copy` instead of explicitly implementing them,
which is now possible after implementing the type-level constraints.
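For illustration (these are not *ring*'s actual `bigint` types), the idiom
looks like this: the type definition carries no bound, the bound appears
only on the impls that need it, and `Clone`/`Copy` can simply be derived
once the marker types themselves implement them:

```rust
use core::marker::PhantomData;

pub trait Encoding {}

// Marker type; deriving `Clone`/`Copy` here is what makes the derives on
// `Elem` usable for `Elem<Unencoded>`.
#[derive(Clone, Copy)]
pub struct Unencoded;
impl Encoding for Unencoded {}

// No `E: Encoding` bound on the type definition itself...
#[derive(Clone, Copy)]
pub struct Elem<E> {
    value: u64, // stand-in for the real representation
    encoding: PhantomData<E>,
}

// ...the bound lives only on the impls that actually need it.
impl<E: Encoding> Elem<E> {
    pub fn is_zero(&self) -> bool {
        self.value == 0
    }
}
```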
This is a step towards removing the heap-allocated and usually-unnecessary
`public_key: RsaSubjectPublicKey` field. The new API allows the caller to
better control how it stores/allocates the component values. This also removes
a couple of infallible `unwrap()`s.
This is a step towards removing `io::Positive` from the public API.
This is a breaking API change.
Refactor `limb::big_endian_from_limbs` to use an approach based on
iterators. We will then be able to use the new `limb::be_bytes`
to implement `rsa::public::Exponent::be_bytes()` and
`rsa::public::Modulus::be_bytes()`, and eventually other similar functions.
We want those functions to return `ExactSizeIterator`s.
This is also part of an ongoing effort to replace all the
big-endian/little-endian encoding logic in *ring* with uses of core APIs.
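A sketch of the shape this enables (`be_bytes` here and the `Limb` alias
are illustrative, not the actual code); an index-based iterator keeps the
exact size known, which a `flat_map`-based approach would lose:

```rust
type Limb = u64; // stand-in for the limb type
const LIMB_BYTES: usize = core::mem::size_of::<Limb>();

// Yield the big-endian bytes of a least-significant-limb-first slice of limbs
// as an `ExactSizeIterator`, instead of writing into a caller-provided buffer.
fn be_bytes(limbs: &[Limb]) -> impl ExactSizeIterator<Item = u8> + '_ {
    let num_limbs = limbs.len();
    (0..num_limbs * LIMB_BYTES).map(move |i| {
        // Byte 0 is the most significant byte of the most significant limb.
        let limb = limbs[num_limbs - 1 - (i / LIMB_BYTES)];
        let shift = 8 * (LIMB_BYTES - 1 - (i % LIMB_BYTES));
        (limb >> shift) as u8
    })
}
```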
Allow the type for the public components to implement `Debug`
without requiring the type for the private components to
implement `Debug`, for the purpose of implementing `Debug` for the
`Components` type itself.
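A sketch of the resulting bound placement (the type and field names are
illustrative): the manual `Debug` impl constrains only the public-component
parameter and deliberately omits the private components from the output.

```rust
use core::fmt;

pub struct Components<Public, Private> {
    pub public: Public,
    pub private: Private,
}

// Only `Public: Debug` is required; `Private` stays unconstrained because the
// private components are intentionally never printed.
impl<Public: fmt::Debug, Private> fmt::Debug for Components<Public, Private> {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        f.debug_struct("Components")
            .field("public", &self.public)
            .finish_non_exhaustive()
    }
}
```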
All the callers of `elem_exp_vartime` call `into_unencoded()` on the result,
so just do that within `elem_exp_vartime`.
The default value of the `Encoding` type parameter is `Unencoded`, so elide
it.
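For illustration (not the actual `bigint` definitions), this is just Rust's
default type parameter mechanism:

```rust
use core::marker::PhantomData;

pub struct Unencoded;

// `Unencoded` is the default for `E`, so `Elem<M>` means `Elem<M, Unencoded>`.
pub struct Elem<M, E = Unencoded> {
    value: u64,
    markers: PhantomData<(M, E)>,
}

// Before: fn into_unencoded(self) -> Elem<M, Unencoded>
// After:  fn into_unencoded(self) -> Elem<M>
```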
The bounds checking that `bigint::PublicExponent`'s constructor is doing is
specific to RSA. The correctness of the exponentiation arithmetic doesn't
depend on those additional checks. Move all that bounds checking to RSA.
Soon, there will be `rsa::public::{Key, Modulus}` to complement `Exponent`.
Move `bigint::elem_exp_vartime` to `rsa`. The performance analysis is only
valid for RSA.
The assertion made sense when the function was only for the exponentiation
in RSA public key operations. However, the assertion is nonsensical for
the function's other use: constructing the Montgomery constant for the
modulus.
Add more documentation about the performance.
Rather than trying to "improve" the assertion, just remove it.
`PUBLIC_EXPONENT_MAX_VALUE` is just a bit smaller than what the type
naturally enforces.
The added documentation should help us reason about whether the assertion
could ever fail. Because we constrain the maximum modulus (bit) length,
the maximum value of the exponent for the Montgomery setup case is
less than `PUBLIC_EXPONENT_MAX_VALUE`.
Use `NonZeroU64` to encode the fact that the exponent is nonzero, so that
we can remove an assertion that would never fail.
This is a non-functional change.
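A minimal sketch of the idea (the constructor shown is hypothetical): with
`NonZeroU64`, the nonzero invariant is carried by the type, so there is
nothing left to assert.

```rust
use core::num::NonZeroU64;

// Illustrative: the exponent's "nonzero" invariant is encoded in the type.
pub struct PublicExponent(NonZeroU64);

impl PublicExponent {
    pub fn new(value: u64) -> Option<Self> {
        // `NonZeroU64::new` rejects zero, replacing a runtime assertion.
        NonZeroU64::new(value).map(Self)
    }

    pub fn value(&self) -> u64 {
        self.0.get()
    }
}
```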