1. variable names according to the Mbed TLS coding style;
2. add a comment explaining the safety of the optimization;
3. safer T2 initialization and memory zeroing on function exit;
Record checking fails if mbedtls_ssl_check_record() is called with an
external buffer. The received record's sequence number is available in the
incoming record, but not in the SSL context's `in_ctr` variable,
which is used when decoding the sequence number.
To fix the problem, temporarily update the SSL context's `in_ctr` to
point to the received record header and restore the original value afterwards.
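For illustration, the swap can be pictured like this (a sketch, not the
actual library code; the offset of the epoch/sequence-number field in the
DTLS record header is an assumption here):

    #include "mbedtls/ssl.h"

    /* Sketch: decode the sequence number from an external record buffer by
     * temporarily redirecting ssl->in_ctr, then restore the original pointer. */
    static void with_external_ctr( mbedtls_ssl_context *ssl,
                                   unsigned char *rec_buf,
                                   void (*process)( mbedtls_ssl_context * ) )
    {
        unsigned char *saved_in_ctr = ssl->in_ctr;

        ssl->in_ctr = rec_buf + 3;   /* epoch + sequence number of the record */
        process( ssl );              /* decode/check using the external record */
        ssl->in_ctr = saved_in_ctr;  /* restore the context's own pointer */
    }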
We want to explicitly disallow creating new transactions when a
transaction is already in progress. However, we were incorrectly
checking for the existence of the injected entropy file before
continuing with creating a transaction. This meant we could have a
transaction already in progress and would still be able to create a new
transaction. It also meant we couldn't start a new transaction if any
entropy had been injected. Check the transaction file instead of the
injected entropy file in order to prevent multiple concurrent
transactions.
Change the default entropy nonce length to be nonzero in some cases.
Specifically, the default nonce length is now set in such a way that
the entropy input during the initial seeding always contains enough
entropy to achieve the maximum possible security strength per
NIST SP 800-90A given the key size and entropy length.
If MBEDTLS_CTR_DRBG_ENTROPY_LEN is kept to its default value,
mbedtls_ctr_drbg_seed() now grabs extra entropy for a nonce if
MBEDTLS_CTR_DRBG_USE_128_BIT_KEY is disabled and either
MBEDTLS_ENTROPY_FORCE_SHA256 is enabled or MBEDTLS_SHA512_C is
disabled. If MBEDTLS_CTR_DRBG_USE_128_BIT_KEY is enabled, or if
the entropy module uses SHA-512, then the default value of
MBEDTLS_CTR_DRBG_ENTROPY_LEN does not require a second call to the
entropy function to achieve the maximum security strength.
This choice of default nonce size guarantees NIST compliance with the
maximum security strength while keeping backward compatibility and
performance high: in configurations that do not require grabbing more
entropy, the code will not grab more entropy than before.
Add a new function mbedtls_ctr_drbg_set_nonce_len() which configures
the DRBG instance to call f_entropy a second time during the initial
seeding to grab a nonce.
The default nonce length is 0, so there is no behavior change unless
the user calls the new function.
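For illustration, a caller could opt in like this (a sketch; the nonce
length of 32 bytes and the personalization string are only examples):

    #include "mbedtls/ctr_drbg.h"
    #include "mbedtls/entropy.h"

    /* Request a 32-byte nonce from the entropy source during seeding. */
    int seed_with_explicit_nonce( mbedtls_ctr_drbg_context *drbg,
                                  mbedtls_entropy_context *entropy )
    {
        int ret = mbedtls_ctr_drbg_set_nonce_len( drbg, 32 );
        if( ret != 0 )
            return( ret );
        return( mbedtls_ctr_drbg_seed( drbg, mbedtls_entropy_func, entropy,
                                       (const unsigned char *) "my app", 6 ) );
    }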
The blinding applied to the scalar before modular inversion is
inadequate. Bignum is not constant time/constant trace, so side channel
attacks can retrieve the blinded value and factor it (it is smaller than
RSA keys and not guaranteed to have only large prime factors); the
key can then be recovered by brute force.
Reducing the blinded value makes factoring useless because the adversary
can only recover pk*t+z*N instead of pk*t.
mbedtls_ctr_drbg_seed() always set the entropy length to the default,
so a call to mbedtls_ctr_drbg_set_entropy_len() before seed() had no
effect. Change this to the more intuitive behavior that
set_entropy_len() sets the entropy length and seed() respects that and
only uses the default entropy length if there was no call to
set_entropy_len().
This removes the need for the test-only function
mbedtls_ctr_drbg_seed_entropy_len(). Just call
mbedtls_ctr_drbg_set_entropy_len() followed by
mbedtls_ctr_drbg_seed(); this now works.
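For illustration, the new call sequence looks like this (a sketch; the
entropy length of 48 bytes is only an example):

    #include "mbedtls/ctr_drbg.h"
    #include "mbedtls/entropy.h"

    /* Configure a custom entropy length, then seed; seed() now respects it. */
    int seed_with_custom_entropy_len( mbedtls_ctr_drbg_context *drbg,
                                      mbedtls_entropy_context *entropy )
    {
        mbedtls_ctr_drbg_set_entropy_len( drbg, 48 );
        return( mbedtls_ctr_drbg_seed( drbg, mbedtls_entropy_func, entropy,
                                       NULL, 0 ) );
    }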
Move the definitions of mbedtls_ctr_drbg_seed_entropy_len() and
mbedtls_ctr_drbg_seed() to after they are used. This makes the code
easier to read and to maintain.
mbedtls_hmac_drbg_seed() always set the entropy length to the default,
so a call to mbedtls_hmac_drbg_set_entropy_len() before seed() had no
effect. Change this to the more intuitive behavior that
set_entropy_len() sets the entropy length and seed() respects that and
only uses the default entropy length if there was no call to
set_entropy_len().
Fix a signed int overflow in mbedtls_asn1_get_int() for numbers
between INT_MAX+1 and UINT_MAX (typically 0x80000000..0xffffffff).
This was undefined behavior which in practice would typically have
resulted in an incorrect value, but which may plausibly also have
caused the postcondition (*p == initial *p + len) to be violated.
Credit to OSS-Fuzz.
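For illustration, the kind of guard that avoids the overflow looks like
this (a sketch only; not the exact library code or error code):

    #include <limits.h>
    #include <stddef.h>

    /* Accumulate big-endian bytes into an int, refusing values above INT_MAX. */
    static int int_from_be_bytes( const unsigned char *p, size_t len, int *val )
    {
        *val = 0;
        while( len-- > 0 )
        {
            if( *val > INT_MAX >> 8 )
                return( -1 );           /* value does not fit in an int */
            *val = ( *val << 8 ) | *p++;
        }
        return( 0 );
    }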
mbedtls_entropy_func returns up to MBEDTLS_ENTROPY_BLOCK_SIZE bytes.
This is the output of a hash function and does not indicate how many
bytes of entropy went into the hash computation.
Enforce that mbedtls_entropy_func gathers a total of
MBEDTLS_ENTROPY_BLOCK_SIZE bytes or more from strong sources. Weak
sources don't count for this calculation. This is complementary to the
per-source threshold mechanism.
In particular, we define system sources with a threshold of 32. But
when using SHA-512 for the entropy accumulator,
MBEDTLS_ENTROPY_BLOCK_SIZE = 64, so users can expect 64 bytes' worth
of entropy. Before, you only got 64 bytes of entropy if there were two
sources. Now you get 64 bytes of entropy even with a single source
with a threshold of 32.
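For illustration, a single strong source registered with a 32-byte
threshold now suffices (my_hw_poll is a hypothetical platform callback):

    #include "mbedtls/entropy.h"

    int my_hw_poll( void *data, unsigned char *output, size_t len, size_t *olen );

    void setup_entropy( mbedtls_entropy_context *ctx )
    {
        mbedtls_entropy_init( ctx );
        /* Per-source threshold of 32 bytes; mbedtls_entropy_func() now keeps
         * polling until MBEDTLS_ENTROPY_BLOCK_SIZE bytes in total have come
         * from strong sources. */
        mbedtls_entropy_add_source( ctx, my_hw_poll, NULL,
                                    32, MBEDTLS_ENTROPY_SOURCE_STRONG );
    }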
This caused problems when running multiple jobs at once, since
there was no target matching libmbedcrypto.so with the path
prefix. It only worked if that file had already been built, since it was then found on disk.
Additionally, building of libmbedcrypto.so now waits for the static .a version.
Previously, recipes for both libmbedcrypto.a and libmbedcrypto.so could run
independently when running parallel jobs, which resulted in the .o files
being built twice. This could sometimes be a problem, since linking would start
while one of the object files was still being rebuilt (a previous version
already existed). This in turn resulted in reading (and trying to link) a malformed file.
The "|" character is followed by "order-only-prerequisites", and in this case,
makes linking of the shared version of the library wait for the .a file.
Since it's guaranteed to be always built in the "all" target, it's fine to do that.
All of the .o files are only built once thanks to this change.
Add a parameter to the p_validate_slot_number method to allow the
driver to modify the persistent data.
With the current structure of the core, the persistent data is already
updated. All it took was adding a way to modify it.
When registering a key in a secure element, go through the transaction
mechanism. This makes the code simpler, at the expense of a few extra
storage operations. Given that registering a key is typically very
rare over the lifetime of a device, this is an acceptable loss.
Drivers must now have a p_validate_slot_number method, otherwise
registering a key is not possible. This reduces the risk that due to a
mistake during the integration of a device, an application might claim
a slot in a way that is not supported by the driver.
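For illustration, a driver-side hook of the shape described above might
look as follows; the exact prototype and the psa_key_creation_method_t
parameter are assumptions based on this description, so check
psa/crypto_se_driver.h of the matching version for the real definition:

    #include <stdint.h>
    #include "psa/crypto_se_driver.h"

    static psa_status_t my_validate_slot_number(
        psa_drv_se_context_t *drv_context,
        void *persistent_data,
        const psa_key_attributes_t *attributes,
        psa_key_creation_method_t method,
        psa_key_slot_number_t slot_number )
    {
        (void) drv_context;
        (void) attributes;
        (void) method;

        if( slot_number >= 8 )                 /* hypothetical slot count */
            return( PSA_ERROR_INVALID_ARGUMENT );

        /* Record the claimed slot in the driver's persistent data
         * (a 1-byte bitmap, purely as an example). */
        ( (uint8_t *) persistent_data )[0] |= (uint8_t)( 1u << slot_number );

        return( PSA_SUCCESS );
    }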
In ssl_parse_hello_verify_request, we read 3 bytes (version and cookie
length) without checking that there are that many bytes left in
ssl->in_msg. This could potentially read from memory outside of the
receive buffer (which would be a remotely exploitable
crash).
In ssl_parse_hello_verify_request, we print cookie_len bytes without
checking that there are that many bytes left in ssl->in_msg. This
could potentially log data outside the received message (not a big
deal) and could potentially read from memory outside of the receive
buffer (which would be a remotely exploitable crash).
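For illustration, the added checks amount to something like this (a
sketch, not the exact code; the error code is illustrative):

    #include "mbedtls/ssl.h"

    /* Validate that the 2 version bytes, the cookie length byte and the
     * cookie itself all fit within the handshake message body. */
    static int check_hvr_bounds( const unsigned char *p, const unsigned char *end )
    {
        size_t cookie_len;

        if( end - p < 3 )
            return( MBEDTLS_ERR_SSL_BAD_HS_SERVER_HELLO );

        cookie_len = p[2];
        if( (size_t)( end - p - 3 ) < cookie_len )
            return( MBEDTLS_ERR_SSL_BAD_HS_SERVER_HELLO );

        return( 0 );
    }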
If none of the inputs to a key derivation is a
PSA_KEY_DERIVATION_INPUT_SECRET passed with
psa_key_derivation_input_key(), forbid
psa_key_derivation_output_key(). It usually doesn't make sense to
derive a key object if the secret isn't itself a proper key.
Allow a direct input as the SECRET input step in a key derivation, in
addition to allowing DERIVE keys. This makes it easier for
applications to run a key derivation where the "secret" input is
obtained from somewhere else. This makes it possible for the "secret"
input to be empty (keys cannot be empty), which some protocols do (for
example the IV derivation in EAP-TLS).
Conversely, allow a RAW_DATA key as the INFO/LABEL/SALT/SEED input to a key
derivation, in addition to allowing direct inputs. This doesn't
improve security, but removes a step when a personalization parameter
is stored in the key store, and allows this personalization parameter
to remain opaque.
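For illustration, an HKDF run where the "secret" comes in as raw bytes
(possibly empty, as in the EAP-TLS IV derivation) could look like this;
error handling is omitted for brevity:

    #include "psa/crypto.h"

    void derive_bytes_from_raw_secret( const uint8_t *secret, size_t secret_len,
                                       const uint8_t *info, size_t info_len,
                                       uint8_t *out, size_t out_len )
    {
        psa_key_derivation_operation_t op = PSA_KEY_DERIVATION_OPERATION_INIT;

        psa_key_derivation_setup( &op, PSA_ALG_HKDF( PSA_ALG_SHA_256 ) );
        /* Direct input for the SECRET step; no key object is involved,
         * so psa_key_derivation_output_key() would be refused. */
        psa_key_derivation_input_bytes( &op, PSA_KEY_DERIVATION_INPUT_SECRET,
                                        secret, secret_len );
        psa_key_derivation_input_bytes( &op, PSA_KEY_DERIVATION_INPUT_INFO,
                                        info, info_len );
        psa_key_derivation_output_bytes( &op, out, out_len );
        psa_key_derivation_abort( &op );
    }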
Add test cases that explore combinations of input step, key type, and
whether the input is passed as a key or as raw bytes.
In TLS, the master secret is always a key. But EAP-TLS uses the TLS
PRF to derive an IV with an empty string for the "secret" input. The
code always stored the secret into a key slot before calling the TLS
PRF, but this doesn't work when the secret is empty, since PSA Crypto
no longer supports empty keys. Add a special case for an empty secret.
The signature must have exactly the same length as the key; it can't
be longer. Fix #258
If the signature doesn't have the correct size, that's an invalid
signature, not a problem with an output buffer size. Fix the error code.
Add test cases.
In psa_asymmetric_sign, immediately reject an empty signature buffer.
This can never be right.
Add test cases (one RSA and one ECDSA).
Change the SE HAL mock tests not to use an empty signature buffer.
Zero-length keys are rejected at creation time, so we don't need any
special handling internally.
When exporting a key, we do need to take care of the case where the
output buffer is empty, but this is easy: an empty output buffer is
never valid.
Document how mbedtls_asn1_store_named_data allocates val.p in the new
or modified entry.
Change the behavior to be more regular, always setting the new length
to val_len. This does not affect the previous documented behavior
since this aspect was not documented. This does not affect current
usage in Mbed TLS's X.509 module where calls with the same OID always
use the same size for the associated value.
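For illustration, a typical call now looks like this (a sketch, assuming
the documented behaviour that passing NULL for val leaves a zeroed,
val_len-byte buffer for the caller to fill):

    #include <string.h>
    #include "mbedtls/asn1write.h"
    #include "mbedtls/oid.h"

    int set_common_name( mbedtls_asn1_named_data **head,
                         const unsigned char *name, size_t name_len )
    {
        mbedtls_asn1_named_data *cur =
            mbedtls_asn1_store_named_data( head, MBEDTLS_OID_AT_CN,
                                           MBEDTLS_OID_SIZE( MBEDTLS_OID_AT_CN ),
                                           NULL, name_len );
        if( cur == NULL )
            return( MBEDTLS_ERR_ASN1_ALLOC_FAILED );

        /* cur->val.p now has exactly name_len bytes. */
        memcpy( cur->val.p, name, name_len );
        return( 0 );
    }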
At the end of `psa_hmac_setup_internal()`, the ipad is cleared.
However, the size that was given to clear was `key_len` which is larger
than the size of `ipad`.
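For illustration, the shape of the fix is simply to clear the buffer with
its own size (a sketch; the struct is a simplified stand-in for the PSA
HMAC operation state):

    #include <stdint.h>
    #include "mbedtls/platform_util.h"

    typedef struct { uint8_t ipad[128]; } hmac_state_t;

    static void cleanup_hmac_state( hmac_state_t *hmac )
    {
        /* Clear the whole ipad, not key_len bytes (which can be larger). */
        mbedtls_platform_zeroize( hmac->ipad, sizeof( hmac->ipad ) );
    }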
* crypto/development: (77 commits)
all.sh: disable MEMORY_BUFFER_ALLOC in cmake asan build
Unify gcc and clang cmake flags to test with UBsan
Add an input check in psa_its_set
Remove storage errors from psa_generate_random
Update getting_started.md
Update based on Jaeden's comments.
Update getting_started.md
Fix return code warnings
Update getting_started.md
Fix warnings
Add PSA_ERROR_STORAGE_FAILURE to psa_cipher_generate_iv
Remove errorneous insert
Add STORAGE_FAILURE everywhere + add missing codes
Add storage failure to psa_mac_verify_finish
Add storage failure to psa_mac_sign_finish
Add PSA_ERROR_STORAGE_FAILURE to psa_aead_*_setup functions
Added PSA_ERROR_BAD_STATE to functions with operations
Added extra bad state case to psa_hash_setup
Add missing return codes to psa_generate_key
Add PSA_ERROR_BUFFER_TOO_SMALL to psa_mac_compute
...
We were still reusing the internal HMAC-DRBG of the deterministic ECDSA
for blinding. This meant that with cryptographically low likelihood the
result was not the same signature as the one the deterministic ECDSA
algorithm has to produce (however it is still a valid ECDSA signature).
To correct this we seed a second HMAC-DRBG with the same seed to restore
correct behavior. We also apply a label to avoid reusing the bits of the
ephemeral key for a different purpose and reduce the chance that they
leak.
This workaround can't be implemented in the restartable case without
penalising the case where an external RNG is available or completely
defeating the purpose of the restartable feature; therefore, in this case
the small chance of incorrect behavior remains.
The current interface does not allow passing an RNG, which is needed for
blinding. Using the scheme's internal HMAC-DRBG results in the same
blinding values for the same key and message, diminishing the
effectiveness of the countermeasure. A new function
`mbedtls_ecdsa_sign_det_ext` is available to address this problem.
`mbedtls_ecdsa_sign_det` reuses the internal HMAC-DRBG instance to
implement blinding. The advantage of this is that the algorithm is
deterministic too, not just the resulting signature. The drawback is
that the blinding is always the same for the same key and message.
This diminishes the efficiency of blinding and leaks information about
the private key.
A function that takes external randomness fixes this weakness.
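For illustration, such a call could look like this (a sketch; it assumes
the new function landed as mbedtls_ecdsa_sign_det_ext with an extra RNG
callback for blinding):

    #include "mbedtls/ecdsa.h"
    #include "mbedtls/ctr_drbg.h"

    int det_sign_with_blinding( mbedtls_ecdsa_context *ctx,
                                mbedtls_ctr_drbg_context *drbg,
                                const unsigned char *hash, size_t hash_len,
                                mbedtls_mpi *r, mbedtls_mpi *s )
    {
        /* Deterministic (RFC 6979) signature, but blinding values drawn
         * from an external, properly seeded DRBG. */
        return( mbedtls_ecdsa_sign_det_ext( &ctx->grp, r, s, &ctx->d,
                                            hash, hash_len, MBEDTLS_MD_SHA256,
                                            mbedtls_ctr_drbg_random, drbg ) );
    }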
* crypto/development: (863 commits)
crypto_platform: Fix typo
des: Reduce number of self-test iterations
Fix -O0 build for Aarch64 bignum multiplication.
Make GNUC-compatible compilers use the right mbedtls_t_udbl again on Aarch64 builds.
Add optimized bignum multiplication for Aarch64.
Enable 64-bit limbs for all Aarch64 builds.
HMAC DRBG: Split entropy-gathering requests to reduce request sizes
psa: Use application key ID where necessary
psa: Adapt set_key_id() for when owner is included
psa: Add PSA_KEY_ID_INIT
psa: Don't duplicate policy initializer
crypto_extra: Use const seed for entropy injection
getting_started: Update for PSA Crypto API 1.0b3
Editorial fixes.
Cross reference 'key handles' from INVALID_HANDLE
Update documentation for psa_destroy_key
Update documentation for psa_close_key
Update psa_open_key documentation
Remove duplicated information in psa_open_key
Initialize key bits to max size + 1 in psa_import_key
...
* origin/development:
Fix uninitialized variable in x509_crt
Add a ChangeLog entry for mbedtls_net_close()
Added mbedtls_net_close and use it in ssl_fork_server to correctly disassociate the client socket from the parent process and the server socket from the child process.
Add ChangeLog entry
fix memory leak in mpi_miller_rabin()
* origin/pr/2803:
Add a ChangeLog entry for mbedtls_net_close()
Added mbedtls_net_close and use it in ssl_fork_server to correctly disassociate the client socket from the parent process and the server socket from the child process.
* origin/development: (42 commits)
Handle deleting non-existant files on Windows
Update submodule
Use 3rdparty headers from the submodule
Add Everest components to all.sh
3rdparty: Add config checks for Everest
Fix macros in benchmark.c
Update generated files
3rdparty: Fix inclusion order of CMakeLists.txt
Fix trailing whitespace
ECDH: Fix inclusion of platform.h for proper use of MBEDTLS_ERR_PLATFORM_FEATURE_UNSUPPORTED
ECDH: Fix use of ECDH API in full handshake benchmark
ECDH: Removed unnecessary calls to mbedtls_ecp_group_load in ECDH benchmark
ECDH: Fix Everest x25519 make_public
Fix file permissions
3rdparty: Rename THIRDPARTY_OBJECTS
3rdparty: Update description of MBEDTLS_ECDH_VARIANT_EVEREST_ENABLED
3rdparty: Fix Makefile coding conventions
ECDSA: Refactor return value checks for mbedtls_ecdsa_can_do
Add a changelog entry for Everest ECDH (X25519)
Document that curve lists can include partially-supported curves
...
Manually edit ChangeLog to ensure correct placement of ChangeLog notes.
* origin/pr/2799: (42 commits)
Handle deleting non-existant files on Windows
Update submodule
Use 3rdparty headers from the submodule
Add Everest components to all.sh
3rdparty: Add config checks for Everest
Fix macros in benchmark.c
Update generated files
3rdparty: Fix inclusion order of CMakeLists.txt
Fix trailing whitespace
ECDH: Fix inclusion of platform.h for proper use of MBEDTLS_ERR_PLATFORM_FEATURE_UNSUPPORTED
ECDH: Fix use of ECDH API in full handshake benchmark
ECDH: Removed unnecessary calls to mbedtls_ecp_group_load in ECDH benchmark
ECDH: Fix Everest x25519 make_public
Fix file permissions
3rdparty: Rename THIRDPARTY_OBJECTS
3rdparty: Update description of MBEDTLS_ECDH_VARIANT_EVEREST_ENABLED
3rdparty: Fix Makefile coding conventions
ECDSA: Refactor return value checks for mbedtls_ecdsa_can_do
Add a changelog entry for Everest ECDH (X25519)
Document that curve lists can include partially-supported curves
...
If we try to delete a non-existent file using del on Windows, as
can happen when running make clean, del will throw an error. Make
the Makefiles more robust by only deleting files if they exist.
This patch fixes an issue we encountered with more stringent compiler
warnings. The signature_is_good variable has a possibility of being
used uninitialized. This patch moves the use of the variable to a
place where it cannot be used while uninitialized.
Signed-off-by: Andy Gross <andy.gross@linaro.org>
The SSL context maintains a set of 'out pointers' indicating the
address at which to write the header fields of the next outgoing
record. Some of these addresses have a static offset from the
beginning of the record header, while other offsets can vary
depending on the active record encryption mechanism: For example,
if an explicit IV is in use, there's an offset between the end
of the record header and the beginning of the encrypted data to
allow the explicit IV to be placed in between; also, if the DTLS
Connection ID (CID) feature is in use, the CID is part of the
record header, shifting all subsequent information (length, IV, data)
to the back.
When setting up an SSL context, the out pointers are initialized
according to the identity transform + no CID, and it is important
to keep them up to date whenever the record encryption mechanism
changes, which is done by the helper function ssl_update_out_pointers().
During context deserialization, updating the out pointers according
to the deserialized record transform went missing, leaving the out
pointers in their initial state. When attempting to encrypt a record in
this state, this led to failure if either a CID or an explicit IV
was in use. This wasn't caught in the tests because, by bad luck,
they didn't use CID _and_ used the default ciphersuite based on
ChaChaPoly, which doesn't have an explicit IV. Changing either of
these would have made the existing tests fail.
This commit fixes the bug by adding a call to ssl_update_out_pointers()
to ssl_context_load() implementing context deserialization.
Extending test coverage is left for a separate commit.
According to SP800-90A, the DRBG seeding process should use a nonce
of length `security_strength / 2` bits as part of the DRBG seed. It
further notes that this nonce may be drawn from the same source of
entropy that is used for the first `security_strength` bits of the
DRBG seed. The present HMAC DRBG implementation does that, requesting
`security_strength * 3 / 2` bits of entropy from the configured entropy
source in total to form the initial part of the DRBG seed.
However, some entropy sources may have thresholds in terms of how much
entropy they can provide in a single call to their entropy gathering
function which may be exceeded by the present HMAC DRBG implementation
even if the threshold is not smaller than `security_strength` bits.
Specifically, this is the case for our own entropy module implementation,
which only allows requesting at most 32 bytes of entropy at a time
in configurations disabling SHA-512, and this leads to runtime failure
of HMAC DRBG when used with Mbed Crypto's own entropy callbacks in such
configurations.
This commit fixes this by splitting the seed entropy acquisition into
two calls, one requesting `security_strength` bits first, and another
one requesting `security_strength / 2` bits for the nonce.
Fixes #237.
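For illustration, the split looks roughly like this (a sketch of the idea,
not the library code):

    #include <stddef.h>

    static int gather_seed_material( int (*f_entropy)( void *, unsigned char *, size_t ),
                                     void *p_entropy,
                                     unsigned char *seed, size_t entropy_len )
    {
        int ret;

        /* security_strength bits worth of entropy */
        if( ( ret = f_entropy( p_entropy, seed, entropy_len ) ) != 0 )
            return( ret );

        /* nonce: security_strength / 2 bits, requested separately */
        return( f_entropy( p_entropy, seed + entropy_len, entropy_len / 2 ) );
    }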
* origin/development:
Update the crypto submodule
Use multipart PSA key derivation API
platform: Include stdarg.h where needed
Update Mbed Crypto to contain mbed-crypto#152
CMake: Add a subdirectory build regression test
README: Enable builds as a CMake subproject
ChangeLog: Enable builds as a CMake subproject
Remove use of CMAKE_SOURCE_DIR
Update library version to 2.18.0
* origin/pr/2807:
platform: Include stdarg.h where needed
Update Mbed Crypto to contain mbed-crypto#152
CMake: Add a subdirectory build regression test
README: Enable builds as a CMake subproject
ChangeLog: Enable builds as a CMake subproject
Remove use of CMAKE_SOURCE_DIR
Update library version to 2.18.0
Avoid compiler errors when MBEDTLS_PSA_CRYPTO_KEY_FILE_ID_ENCODES_OWNER
is set by using the application ID type.
[Error] psa_crypto_slot_management.c@175,9: used type 'psa_key_id_t' (aka 'psa_key_file_id_t') where arithmetic or pointer type is required
Bring Mbed TLS 2.18.0 and 2.18.1 release changes back into the
development branch. We had branched to release 2.18.0 and 2.18.1 in
order to allow those releases to go out without having to block work on
the `development` branch.
Manually resolve conflicts in the ChangeLog by moving all freshly added
changes to a new, unreleased version entry.
Reject changes to include/mbedtls/platform.h made in the mbedtls-2.18
branch, as that file is now sourced from Mbed Crypto.
* mbedtls-2.18:
platform: Include stdarg.h where needed
Update Mbed Crypto to contain mbed-crypto#152
CMake: Add a subdirectory build regression test
README: Enable builds as a CMake subproject
ChangeLog: Enable builds as a CMake subproject
Remove use of CMAKE_SOURCE_DIR
Update library version to 2.18.0
* origin/development: (114 commits)
Don't redefine calloc and free
Add changelog entry to record checking
Fix compiler warning
Add debug messages
Remove duplicate entries from ChangeLog
Fix parameter name in doxygen
Add missing guards for mac usage
Improve reability and debugability of large if
Fix a typo in a comment
Fix MSVC warning
Fix compile error in reduced configurations
Avoid duplication of session format header
Implement config-checking header to context s11n
Provide serialisation API only if it's enabled
Fix compiler warning: comparing signed to unsigned
Actually reset the context on save as advertised
Re-use buffer allocated by handshake_init()
Enable serialisation tests in ssl-opt.sh
Change requirements for setting timer callback
Add setting of forced fields when deserializing
...
Breaking into a series of statements makes things easier when stepping through
the code in a debugger.
Previous comments were stating the opposite of what the code tested for (what we
want vs what we're erroring out on), which was confusing.
Also expand a bit on the reasons for these restrictions.
Modelled after the config-checking header from session s11n.
The list of relevant config flags was established by manually checking the
fields serialized in the format, and which config.h flags they depend on.
This probably deserves double-checking by reviewers.
Since the type of cid_len is unsigned but shorter than int, it gets
"promoted" to int (which is also the type of the result), unless we make the
other operand an unsigned int which then forces the expression to unsigned int
as well.
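For illustration (the names are purely hypothetical; the point is only the
promotion rule):

    #include <stdint.h>

    static unsigned int add_cid_len( uint8_t cid_len )
    {
        /* "2u" is unsigned int, so cid_len is converted to unsigned int and
         * the whole expression stays unsigned; "2 + cid_len" would instead
         * be computed in (signed) int after integer promotion. */
        return( 2u + cid_len );
    }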
The number and meaning of the flags will be determined later, when handling the
relevant struct members. For now three bytes are reserved as an example, but
this number may change later.
This mainly follows the design document (saving all fields marked "saved" in
the main structure and the transform sub-structure) with two exceptions:
- things related to renegotiation are excluded here (they weren't quite in
the design document, as the possibility of allowing renegotiation was still
on the table, which it no longer is) - also, ssl.secure_renegotiation (which
is not guarded by MBEDTLS_SSL_RENEGOTIATION because it's used in initial
handshakes even with renegotiation disabled) is still excluded, as we don't
need it after the handshake.
- things related to Connection ID are added, as they weren't present at the
time the design document was written.
The exact format of the header (value of the bitflag indicating compile-time
options, whether and how to merge it with the serialized session header) will
be determined later.
Enforce restrictions indicated in the documentation.
This allows us to make some simplifying assumptions (no need to worry about
saving IVs for CBC in TLS < 1.1, nor about saving handshake data) and
guarantees that all values marked as "forced" in the design document have the
intended values and can be skipped when serialising.
Some of the "forced" values are not checked because their value is a
consequence of other checks (for example, session_negotiate == NULL outside
handshakes). We do however check that session and transform are not NULL (even
if that's also a consequence of the initial handshake being over) as we're
going to dereference them and static analyzers may appreciate the info.
This is enabled by default as we generally enable things by default unless
there's a reason not to (experimental, deprecated, security risk).
We need a compile-time option because, even though the functions themselves
can be easily garbage-collected by the linker, implementing them will require
saving 64 bytes of Client/ServerHello.random values after the handshake, that
would otherwise not be needed, and people who don't need this feature
shouldn't have to pay the price of increased RAM usage.
This commit introduces a new SSL error code
`MBEDTLS_ERR_SSL_VERSION_MISMATCH`
which can be used to indicate operation failure due to a
mismatch of version or configuration.
It is put to use in the implementation of `mbedtls_ssl_session_load()`
to signal the attempt to de-serialize a session which has been serialized
in a build of Mbed TLS using a different version or configuration.
This commit makes use of the added space in the session header to
encode the state of those parts of the compile-time configuration
which influence the structure of the serialized session in the
present version of Mbed TLS. Specifically, these are
- the options which influence the presence/omission of fields
from mbedtls_ssl_session (which is currently shallow-copied
into the serialized session)
- the setting of MBEDTLS_X509_CRT_PARSE_C, which determines whether
the serialized session contains a CRT-length + CRT-value pair after
the shallow-copied mbedtls_ssl_session instance.
- the setting of MBEDTLS_SSL_SESSION_TICKETS, which determines whether
the serialized session contains a session ticket.
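For illustration, the flag computation could be sketched as follows (the
bit layout is illustrative only, not the actual wire format; the Mbed TLS
configuration header is assumed to be in effect):

    #include <stdint.h>

    static uint16_t ssl_session_config_flags( void )
    {
        uint16_t flags = 0;

    #if defined(MBEDTLS_HAVE_TIME)
        flags |= 1u << 0;
    #endif
    #if defined(MBEDTLS_X509_CRT_PARSE_C)
        flags |= 1u << 1;
    #endif
    #if defined(MBEDTLS_SSL_SESSION_TICKETS)
        flags |= 1u << 2;
    #endif

        return( flags );
    }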
This commit adds space for two bytes in the header of serialized
SSL sessions which can be used to determine the structure of the
remaining serialized session in the respective version of Mbed TLS.
Specifically, if parts of the session depend on whether specific
compile-time options are set or not, the setting of these options
can be encoded in the added space.
This commit doesn't yet make use of the fields.
The format of serialized SSL sessions depends on the version and the
configuration of Mbed TLS; attempts to restore sessions established
in different versions and/or configurations lead to undefined behaviour.
This commit adds a 3-byte version header to the generated serialized
session and cleanly fails ticket parsing in case a session from a
non-matching version of Mbed TLS is presented.
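For illustration, the version prefix can be pictured like this (the exact
byte layout used by the library may differ):

    #include "mbedtls/version.h"

    static void write_version_header( unsigned char out[3] )
    {
        out[0] = MBEDTLS_VERSION_MAJOR;
        out[1] = MBEDTLS_VERSION_MINOR;
        out[2] = MBEDTLS_VERSION_PATCH;
    }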
This bug was present since cert digest had been introduced, which highlights
the need for testing.
While at it, fix a bug in the comment explaining the format - this was
introduced by me copy-pasting too hastily from current baremetal, which has a
different format (see next PR in the series for the same in development).
We have explicit recommendations to use US spelling for technical writing, so
let's apply this to code as well for uniformity. (My fingers tend to prefer UK
spelling, so this needs to be fixed in many places.)
sed -i 's/\([Ss]eriali\)s/\1z/g' **/*.[ch] **/*.function **/*.data ChangeLog
This uncovered a bug that led to a double-free (in practice; in general it
could be free() on any invalid value): initially the session structure is
loaded with `memcpy()`, which copies in the previous values of the peer_cert
and ticket pointers, pointing to heap-allocated buffers (or to any other
value if the input is attacker-controlled). Now if we exit before we got a
chance to replace those
invalid values with valid ones (for example because the input buffer is too
small, or because the second malloc() failed), then the next call to
session_free() is going to call free() on invalid pointers.
This bug is fixed in this commit by always setting the pointers to NULL right
after they've been read from the serialised state, so that the invalid values
can never be used.
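For illustration, the fix amounts to this pattern (a sketch, not the exact
library code):

    #include <string.h>
    #include "mbedtls/ssl.h"

    static void shallow_copy_session( mbedtls_ssl_session *session,
                                      const unsigned char *p )
    {
        memcpy( session, p, sizeof( mbedtls_ssl_session ) );

        /* Drop the copied-in pointer values right away so that an early
         * error path can never call free() on them. */
    #if defined(MBEDTLS_X509_CRT_PARSE_C)
        session->peer_cert = NULL;
    #endif
    #if defined(MBEDTLS_SSL_SESSION_TICKETS) && defined(MBEDTLS_SSL_CLI_C)
        session->ticket = NULL;
    #endif
    }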
(An alternative would be to NULL-ify them when writing, which was rejected
mostly because we need to do it when reading anyway (as the consequences of
free(invalid) are too severe to take any risk), so doing it when writing as
well is redundant and a waste of code size.)
Also, while thinking about what happens in case of errors, it became apparent
to me that it was bad practice to leave the session structure in a
half-initialised state and rely on the caller to call session_free(), so this
commit also ensures we always clear the structure when loading failed.
This allows callers to discover what an appropriate size is. Otherwise they'd
have to either try repeatedly, or allocate an overly large buffer (or some
combination of those).
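For illustration, a typical two-pass call pattern could look like this
(a sketch; it assumes that a zero-length first call reports the required
size in olen):

    #include <stdlib.h>
    #include "mbedtls/ssl.h"

    int save_session( const mbedtls_ssl_session *session,
                      unsigned char **out, size_t *out_len )
    {
        size_t needed = 0;

        (void) mbedtls_ssl_session_save( session, NULL, 0, &needed );

        *out = malloc( needed );
        if( *out == NULL )
            return( MBEDTLS_ERR_SSL_ALLOC_FAILED );

        return( mbedtls_ssl_session_save( session, *out, needed, out_len ) );
    }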
Adapt documentation and example usage in ssl_client2.
Avoid useless copy with mbedtls_ssl_get_session() before serialising.
Used in ssl_client2 for testing and demonstrating usage, but unfortunately
that means mbedtls_ssl_get_session() is no longer tested, which will be fixed
in the next commit.
On the client side, this is required for the main use case of serialising a
session for later resumption, in case tickets are used.
On server side, this doesn't change much as ticket_len will always be 0.
This unblocks testing the functions by using them in ssl_client2, which will
be done in the next commit.
This finishes making these functions public. Next step is to get them tested,
but there's currently a blocker for that, see next commit (and the commit
after it for tests).
While 'session hash' is currently unique, so suitable to prove that the
intended code path has been taken, it's a generic enough phrase that in the
future we might add other debug messages containing it in completely unrelated
code paths. In order to future-proof the accuracy of the test, let's use a
more specific string.
The previous comment used "TLS" as a shortcut for "TLS 1.0/1.1", which was
confusing. This partially reflected the names of the calc_verify/finished variants,
which go ssl, tls (for 1.0/1.1), tls_shaxxx (for 1.2), but still it's clearer to be
explicit in the comment - and perhaps in the long term the function names
could be clarified instead.
When using this function to deserialize, it's not a problem to have a session
structure as input as we'll have one around anyway (most probably freshly
deserialised).
However for tests it's convenient to be able to build a transform without
having a session structure around.
Also, removing this structure from parameters makes the function signature
more uniform, the only exception left being the ssl param at the end that's
hard to avoid for now.
Configs with no DEBUG_C are used for example in test-ref-configs.pl, which also
runs parts of compat.sh or ssl-opt.sh on them, so the added 'ssl = NULL'
statements will be exercised in those tests at least.
Make it more explicit what's used. Unfortunately, we still need ssl as a
parameter for debugging, and because calc_verify wants it as a parameter (for
all TLS versions except SSL3 it would actually only need handshake, but SSL3
also accesses session_negotiate).
It's also because of calc_verify that we can't make it const yet, but see next
commit.
* origin/development: (51 commits)
Fix possibly-lossy conversion warning from MSVC
Reintroduce length 0 check for records
Don't use memcpy() for 2-byte copy operation
Remove integer parsing macro
Fix alignment in record header parsing routine
Don't disallow 'record from another epoch' log msg in proxy ref test
Make sure 'record from another epoch' is displayed for next epoch
Implement record checking API
Mark ssl_parse_record_header() as `const` in SSL context
Make mbedtls_ssl_in_hdr_len() CID-unaware
Remove duplicate setting of ssl->in_msgtype and ssl->in_msglen
Move update of in_xxx fields in ssl_get_next_record()
Move update of in_xxx fields outside of ssl_prepare_record_content()
Reduce dependency of ssl_prepare_record_content() on in_xxx fields
Move ssl_update_in_pointers() to after record hdr parsing
Mark DTLS replay check as `const` on the SSL context
Move updating the internal rec ptrs to outside of rec hdr parsing
Mark ssl_decrypt_buf() as `const in the input SSL context
Adapt ssl_prepare_record_content() to use SSL record structure
Use record length from record structure when fetching content in TLS
...
* origin/pr/2790: (40 commits)
Fix possibly-lossy conversion warning from MSVC
Reintroduce length 0 check for records
Don't use memcpy() for 2-byte copy operation
Remove integer parsing macro
Fix alignment in record header parsing routine
Don't disallow 'record from another epoch' log msg in proxy ref test
Make sure 'record from another epoch' is displayed for next epoch
Implement record checking API
Mark ssl_parse_record_header() as `const` in SSL context
Make mbedtls_ssl_in_hdr_len() CID-unaware
Remove duplicate setting of ssl->in_msgtype and ssl->in_msglen
Move update of in_xxx fields in ssl_get_next_record()
Move update of in_xxx fields outside of ssl_prepare_record_content()
Reduce dependency of ssl_prepare_record_content() on in_xxx fields
Move ssl_update_in_pointers() to after record hdr parsing
Mark DTLS replay check as `const` on the SSL context
Move updating the internal rec ptrs to outside of rec hdr parsing
Mark ssl_decrypt_buf() as `const in the input SSL context
Adapt ssl_prepare_record_content() to use SSL record structure
Use record length from record structure when fetching content in TLS
...
* origin/pr/2781:
Documentation fixes according to review
Remove unused label in ssl_client2/ssl_server2
Add missing word in documentation of mbedtls_ssl_check_record()
cli/srv ex: Add dbg msg if record checking gives inconsistent result
Fix minor issues in documentation of mbedtls_ssl_check_record()
State that record checking is DTLS only and doesn't check content type
Update version_features.c
Pass dgrams to mbedtls_ssl_check_record in ssl_client2/server2
Add IO wrappers to ssl_server2 as interm's between NET and SSL layer
Add IO wrappers to ssl_client2 as interm's between NET and SSL layer
Introduce configuration option and API for SSL record checking
In psa_import_key, the key bits value was uninitialized before
calling the secure element driver import function. There is a
potential issue if the driver returns PSA_SUCCESS without setting
the key bits. This shouldn't happen, but shouldn't be discounted
either, so we initialize the key bits to an invalid value.
* development:
Rename local variables
Update submodule
Update Visual studio project file
Move the examples to PSA 1.0
Use psa_raw_key_agreement
Remove calls to psa_allocate_key
Make variable naming consistent
Update psa_create_key to PSA 1.0
Update psa_import_key to PSA 1.0
Update psa_generator_abort to PSA 1.0
Update psa_generator_read to PSA 1.0
Update psa_crypto_generator_t to PSA 1.0
Update psa_key_agreement to PSA 1.0
Update GENERATOR_INIT macro to PSA 1.0
Update KEYPAIR macros to PSA 1.0
* crypto/pr/212: (337 commits)
Make TODO comments consistent
Fix PSA tests
Fix psa_generate_random for >1024 bytes
Add tests to generate more random than MBEDTLS_CTR_DRBG_MAX_REQUEST
Fix double free in psa_generate_key when psa_generate_random fails
Fix copypasta in test data
Avoid a lowercase letter in a macro name
Correct some comments
Fix PSA init/deinit in mbedtls_xxx tests when using PSA
Make psa_calculate_key_bits return psa_key_bits_t
Adjust secure element code to the new ITS interface
More refactoring: consolidate attribute validation
Fix policy validity check on key creation.
Add test function for import with a bad policy
Test key creation with an invalid type (0 and nonzero)
Remove "allocated" flag from key slots
Take advantage of psa_core_key_attributes_t internally #2
Store the key size in the slot in memory
Take advantage of psa_core_key_attributes_t internally: key loading
Switch storage functions over to psa_core_key_attributes_t
...
* development:
Update crypto to a repo with latest crypto
Update Mbed Crypto
tls: Remove duplicate psa_util.h include
Remove unused cryptography test files
Remove crypto C files
Remove files sourced from Mbed Crypto
config: Fix Doxygen link to MBEDTLS_PARAM_FAILED
Use mbedtls-based path for includes
check-names: Consider crypto-sourced header files
Resolve conflicts by performing the following actions:
- Reject changes to ChangeLog, as Mbed Crypto doesn't have one
- Reject changes to tests/compat.sh, as Mbed Crypto doesn't have it
- Reject changes to programs/fuzz/onefile.c, as Mbed Crypto doesn't have
it
- Resolve minor whitespace differences in library/ecdsa.c by taking the
version from Mbed TLS upstream.
* origin/development:
Honor MBEDTLS_CONFIG_FILE in fuzz tests
Test that a shared library build produces a dynamically linked executable
Test that the shared library build with CMake works
Add a test of MBEDTLS_CONFIG_FILE
Exclude DTLS 1.2 only with older OpenSSL
Document the rationale for the armel build
Switch armel build to -Os
Add a build on ARMv5TE in ARM mode
Add changelog entry for ARM assembly fix
bn_mul.h: require at least ARMv6 to enable the ARM DSP code
Adapt ChangeLog
ECP restart: Don't calculate address of sub ctx if ctx is NULL
This commit implements the record checking API
mbedtls_ssl_check_record()
on top of the restructured incoming record stack.
Specifically, it makes use of the fact that the core processing routines
ssl_parse_record_header()
mbedtls_ssl_decrypt_buf()
now operate on instances of the SSL record structure mbedtls_record
instead of the previous mbedtls_ssl_context::in_xxx fields.
ssl_get_next_record() updates the legacy in_xxx fields in two places,
once before record decryption and once after. Now that record decryption
doesn't use or affect the in_xxx fields anymore, setting up these
legacy fields can be moved entirely to the end of ssl_get_next_record(),
which is what this commit does.
This commit solely moves existing code, but doesn't yet simplify the
now partially redundant settings of the in_xxx fields. This will be
done in a separate commit.
Multiple record attributes such as content type and payload length
may change during record decryption, and the legacy in_xxx fields
in the SSL context therefore need to be updated after the record
decryption routine ssl_decrypt_buf() has been called.
After the previous commit has made ssl_prepare_record_content()
independent of the in_xxx fields, setting them can be moved
outside of ssl_prepare_record_content(), which is what this
commit does.
Previously, ssl_update_in_pointers() ensured that the in_xxx pointers
in the SSL context are set to their default state so that the record
header parsing function ssl_parse_record_header() could make use of them.
By now, the latter is independent of these pointers, so they don't need
to be setup before calling ssl_parse_record_header() anymore.
However, other parts of the messaging stack might still depend on it
(to be studied), and hence this commit does not yet remove
ssl_update_in_pointers() entirely.
The stack maintains pointers mbedtls_ssl_context::in_xxx pointing to
various parts of the [D]TLS record header. Originally, these fields
were determined and set in ssl_parse_record_header(). By now,
ssl_parse_record_header() has been modularized to setup an instance
of the internal SSL record structure mbedtls_record, and to derive
the old in_xxx fields from that.
This commit takes a further step towards removing the in_xxx fields
by deriving them from the established record structure _outside_ of
ssl_parse_record_header() after the latter has succeeded.
One exception is the handling of possible client reconnects,
which happens in the case where ssl_parse_record_header() returns
MBEDTLS_ERR_SSL_UNEXPECTED_RECORD; since ssl_check_client_reconnect()
so far uses the in_xxx fields, they need to be derived from the
record structure beforehand.
This commit makes a first step towards modularizing the incoming record
processing by having it operate on instances of the structure mbedtls_record
representing SSL records.
So far, only record encryption/decryption operate in terms of record
instances, but the rest of the parsing doesn't. In particular,
ssl_parse_record_header() operates directly on the fixed input buffer,
setting the various ssl->in_xxx pointers and fields, and only directly
before/after calling ssl_decrypt_buf() these fields a converted to/from
mbedtls_record instances.
This commit does not yet remove the ssl->in_xxx fields, but makes a step
towards extending the lifetime of mbedtls_record structure representing
incoming records, by modifying ssl_parse_record_header() to setup an
instance of mbedtls_record, and setting the ssl->in_xxx fields from that
instance. The instance so-constructed isn't used further so far, and in
particular it is not yet consolidated with the instance set up for use
in ssl_decrypt_record(). That's for a later commit.
Previously, ssl_parse_record_header() did not check whether the current
datagram is large enough to hold a record of the advertised size. This
could lead to records being silently skipped over or backed up on the
basis of an invalid record length. Concretely, the following would happen:
1) In the case of a record from an old epoch, the record would be
'skipped over' by setting next_record_offset according to the advertised
but non-validated length, and only in the subsequent mbedtls_ssl_fetch_input()
it would be noticed in an assertion failure if the record length is too
large for the current incoming datagram.
While not critical, this is fragile, and also contrary to the intent
that MBEDTLS_ERR_SSL_INTERNAL_ERROR should never be triggerable by
external input.
2) In the case of a future record being buffered, it might be that we
back up a record before we have validated its length, hence copying
parts of the input buffer that don't belong to the current record.
This is a bug, and it's by luck that it doesn't seem to have critical
consequences.
This commit fixes this by modifying ssl_parse_record_header() to check that
the current incoming datagram is large enough to hold a record of the
advertised length, returning MBEDTLS_ERR_SSL_INVALID_RECORD otherwise.
We don't send alerts on other instances of ill-formed records,
so why should we do it here? If we want to keep it, the alerts
should rather be sent from ssl_get_next_record().
As explained in the previous commit, if mbedtls_ssl_fetch_input()
is called multiple times, all but the first call are equivalent to
bounds checks in the incoming datagram.
In DTLS, if mbedtls_ssl_fetch_input() is called multiple times without
resetting the input buffer in between, the non-initial calls are functionally
equivalent to mere bounds checks ensuring that the incoming datagram is
large enough to hold the requested data. In the interest of code-size
and modularity (removing a call to a non-const function which is logically
const in this instance), this commit replaces such a call to
mbedtls_ssl_fetch_input() by an explicit bounds check in
ssl_parse_record_header().
Previously, `ssl_handle_possible_reconnect()` was part of
`ssl_parse_record_header()`, which was required to return a non-zero error
code to indicate a record which should not be further processed because it
was invalid, unexpected, duplicate, .... In this case, some error codes
would lead to some actions to be taken, e.g. `MBEDTLS_ERR_SSL_EARLY_MESSAGE`
to potential buffering of the record, but eventually, the record would be
dropped regardless of the precise value of the error code. The error code
`MBEDTLS_ERR_SSL_HELLO_VERIFY_REQUIRED` returned from
`ssl_handle_possible_reconnect()` did not receive any special treatment and
led to silent dropping of the record - in particular, it was never returned
to the user.
In the new logic this commit introduces, `ssl_handle_possible_reconnect()` is
part of `ssl_check_client_reconnect()` which is triggered _after_
`ssl_parse_record_header()` found an unexpected record, which is already in
the code-path eventually dropping the record; we want to leave this code-path
only if a valid cookie has been found and we want to reset, but do nothing
otherwise. That's why `ssl_handle_possible_reconnect()` now returns `0` unless
a valid cookie has been found or a fatal error occurred.
Availability of sufficient incoming data should be checked when
it is needed, which is in mbedtls_ssl_fetch_input(), and this
function has the necessary bounds checks in place.
The check is in terms of the internal input buffer length and is
hence likely to be originally intended to protect against overflow
of the input buffer when fetching data from the underlying
transport in mbedtls_ssl_fetch_input(). For locality of reasoning,
it's better to perform such a check close to where it's needed,
and in fact, mbedtls_ssl_fetch_input() _does_ contain an equivalent
bounds check, too, rendering the bounds check in question redundant.
mbedtls_ssl_decrypt_buf() asserts that the passed transform is not NULL,
but the function is only invoked in a single place, and this invocation
is clearly visible to be within a branch ensuring that the incoming
transform isn't NULL. Remove the assertion for the benefit of code-size.
The previous code performed architectural maximum record length checks
both before and after record decryption. Since MBEDTLS_SSL_IN_CONTENT_LEN
bounds the maximum length of the record plaintext, it suffices to check
only once after (potential) decryption.
This must not be confused with the internal check that the record
length is small enough to make the record fit into the internal input
buffer; this is done in mbedtls_ssl_fetch_input().
Adopt a simple method for tracking whether there was a failure: each
fallible operation sets overall_status, unless overall_status is
already non-successful. Thus in case of multiple failures, the
function always reports whatever failed first. This may not always be
the right thing, but it's simple.
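For illustration, the bookkeeping is as simple as:

    #include "psa/crypto.h"

    /* Each fallible step reports into overall_status, but only the first
     * failure is retained. */
    static void update_overall_status( psa_status_t *overall_status,
                                       psa_status_t status )
    {
        if( *overall_status == PSA_SUCCESS )
            *overall_status = status;
    }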
This revealed a bug whereby if the only failure was the call to
psa_destroy_se_key(), i.e. if the driver reported a failure or if the
driver lacked support for destroying keys, psa_destroy_key() would
ignore that failure.
For a key in a secure element, if creating a transaction file fails,
don't touch storage, but close the key in memory. This may not be
right, but it's no wronger than it was before. Tracked in
https://github.com/ARMmbed/mbed-crypto/issues/215
When a key slot is wiped, a copy of the key material may remain in
operations. This is undesirable, but does not violate the safety of
the code. Tracked in https://github.com/ARMmbed/mbed-crypto/issues/86
The methods to import and generate a key in a secure element drivers
were written for an earlier version of the application-side interface.
Now that there is a psa_key_attributes_t structure that combines all
key metadata including its lifetime (location), type, size, policy and
extra type-specific data (domain parameters), pass that to drivers
instead of separate arguments for each piece of metadata. This makes
the interface less cluttered.
Update parameter names and descriptions to follow general conventions.
Document the public-key output on key generation more precisely.
Explain that it is optional in a driver, and when a driver would
implement it. Declare that it is optional in the core, too (which
means that a crypto core might not support drivers for secure elements
that do need this feature).
Update the implementation and the tests accordingly.
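For illustration, a driver import hook taking the unified attributes
structure might look like this; the exact parameter list is an assumption
based on the description above:

    #include "psa/crypto_se_driver.h"

    static psa_status_t my_import_key( psa_drv_se_context_t *drv_context,
                                       psa_key_slot_number_t key_slot,
                                       const psa_key_attributes_t *attributes,
                                       const uint8_t *data, size_t data_length,
                                       size_t *bits )
    {
        (void) drv_context;

        if( psa_get_key_type( attributes ) != PSA_KEY_TYPE_AES )
            return( PSA_ERROR_NOT_SUPPORTED );   /* hypothetical: AES-only SE */

        /* ... program the key material into secure-element slot key_slot ... */
        *bits = PSA_BYTES_TO_BITS( data_length );
        return( PSA_SUCCESS );
    }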
Register an existing key in a secure element.
Minimal implementation that doesn't call any driver method and just
lets the application declare whatever it wants.
Pass the key creation method (import/generate/derive/copy) to the
driver methods to allocate or validate a slot number. This allows
drivers to enforce policies such as "this key slot can only be used
for keys generated inside the secure element".
Let psa_start_key_creation know what type of key creation this is. This
will be used at least for key registration in a secure element, which
is a peculiar kind of creation since it uses existing key material.
Allow the application to choose the slot number in a secure element,
rather than always letting the driver choose.
With this commit, any application may request any slot. In an
implementation with isolation, it's up to the service to filter key
creation requests and apply policies to limit which applications can
request which slot.
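For illustration, an application could declare a pre-provisioned key like
this (a sketch, assuming a default build where key IDs are plain integers;
the lifetime value stands for whatever lifetime the SE driver was
registered with):

    #include "psa/crypto.h"
    #include "psa/crypto_extra.h"

    psa_status_t register_preprovisioned_key( psa_key_lifetime_t se_lifetime,
                                              psa_key_id_t id )
    {
        psa_key_attributes_t attributes = PSA_KEY_ATTRIBUTES_INIT;

        psa_set_key_id( &attributes, id );
        psa_set_key_lifetime( &attributes, se_lifetime );
        psa_set_key_type( &attributes, PSA_KEY_TYPE_AES );
        psa_set_key_bits( &attributes, 128 );
        psa_set_key_usage_flags( &attributes, PSA_KEY_USAGE_ENCRYPT );
        psa_set_key_algorithm( &attributes, PSA_ALG_CTR );
        psa_set_key_slot_number( &attributes, 3 );   /* the slot the key lives in */

        return( mbedtls_psa_register_se_key( &attributes ) );
    }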
This function no longer modifies anything, so it doesn't actually
allocate the slot. Now, it just returns the empty key slot, and it's
up to the caller to cause the slot to be in use (or not).