There is a static dependency in the test system on this file. To
prevent the issue from occurring, move the definition to the end of
the file so that the last return statement in main() remains on the
same line.
As has been previously done for ciphersuites, this commit introduces
a zero-cost abstraction layer around the type `mbedtls_md_info const *`,
whose valid values represent implementations of message digest algorithms.
Access to a particular digest implementation can be requested by name or
digest ID through the API mbedtls_md_info_from_xxx(), which either returns
a valid implementation or NULL, representing failure.
This commit replaces such uses of `mbedtls_md_info const *` by an abstract
type `mbedtls_md_handle_t` whose valid values represent digest implementations,
and which has a designated invalid value MBEDTLS_MD_INVALID_HANDLE.
The purpose of this abstraction layer is to pave the way for builds which
support precisely one digest algorithm. In this case, mbedtls_md_handle_t
can be implemented as a two-valued type, with one value representing the
invalid handle, and the unique valid value representing the unique enabled
digest.
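As a sketch of the intended payoff (the MBEDTLS_MD_SINGLE_HASH option
name and the enum values below are illustrative assumptions, not the
actual implementation), the handle type can be defined in two ways:

```
#include "mbedtls/md.h"

#if !defined(MBEDTLS_MD_SINGLE_HASH) /* hypothetical single-digest option */
/* Full builds: the handle is just the pointer type used so far,
 * so the abstraction layer is zero-cost. */
typedef mbedtls_md_info_t const *mbedtls_md_handle_t;
#define MBEDTLS_MD_INVALID_HANDLE ( (mbedtls_md_handle_t) NULL )
#else
/* Single-digest builds: a two-valued type suffices -- one value for
 * "invalid", one for the unique enabled digest. */
typedef enum
{
    MBEDTLS_MD_HANDLE_INVALID = 0,
    MBEDTLS_MD_HANDLE_UNIQUE  = 1
} mbedtls_md_handle_t;
#define MBEDTLS_MD_INVALID_HANDLE MBEDTLS_MD_HANDLE_INVALID
#endif

/* Callers compare against the designated invalid value instead of NULL:
 *
 *     mbedtls_md_handle_t md = mbedtls_md_info_from_type( MBEDTLS_MD_SHA256 );
 *     if( md == MBEDTLS_MD_INVALID_HANDLE )
 *         return( MBEDTLS_ERR_MD_FEATURE_UNAVAILABLE );
 */
```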
All sample and test programs had a definition of mbedtls_param_failed.
This was necessary because we wanted to be able to build them in a
configuration with MBEDTLS_CHECK_PARAMS set but without a definition
of MBEDTLS_PARAM_FAILED. Now that we activate the sample definition of
MBEDTLS_PARAM_FAILED in config.h when testing with
MBEDTLS_CHECK_PARAMS set, this boilerplate code is no longer needed.
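For reference, the sample definition in question maps the failure hook
onto assert(); roughly (shown as in a user configuration, the exact form
is the one documented in config.h):

```
#include <assert.h>

/* Enable parameter validation and use the documented sample definition
 * of the failure hook, so that no per-program mbedtls_param_failed()
 * definition is needed. */
#define MBEDTLS_CHECK_PARAMS
#define MBEDTLS_PARAM_FAILED( cond )  assert( cond )
```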
To prevent dropping the same message over and over again, the UDP proxy
test application programs/test/udp_proxy _logically_ maintains a mapping
from records to the number of times the record has already been dropped,
and stops dropping once a configurable threshold (currently 2) is passed.
However, the actual implementation deviates from this logical view
in two crucial respects:
- To keep the implementation simple and independent of
implementations of suitable map interfaces, it only counts how
many times a record of a given _size_ has been dropped, and
stops dropping further records of that size once the configurable
threshold is passed. Of course, this is not foolproof, but it is a
good enough approximation for the proxy, and it allows the use of an
inefficient but simple array for the required map.
- The implementation mixes datagram lengths and record lengths:
When deciding whether it is allowed to drop a datagram, it
uses the total datagram size as a lookup index into the map
counting the number of times a packet has been dropped. However,
when updating this map, the UDP proxy traverses the datagram
record by record, and updates the mapping at the level of record
lengths.
Apart from this inconsistency, the current implementation suffers
from a lack of bounds checking for the parsed length of incoming
DTLS records, which can lead to a buffer overflow when facing
malformed records.
This commit removes the inconsistency in datagram vs. record length
and resolves the buffer overflow issue by not attempting any dissection
of datagrams into records, and instead only counting how often _datagrams_
of a particular size have been dropped.
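A sketch of the resulting datagram-level bookkeeping (array bound and
names are illustrative, not the proxy's actual code):

```
#include <stddef.h>

#define MAX_MSG_SIZE   ( 16384 + 2048 ) /* larger than any expected datagram */
#define DROP_THRESHOLD 2                /* the configurable threshold        */

/* dropped[len] counts how often a datagram of exactly len bytes
 * has been dropped so far. */
static unsigned int dropped[MAX_MSG_SIZE] = { 0 };

/* Return 1 and record the drop if a datagram of length len may still be
 * dropped; return 0 once the threshold for that size has been reached. */
static int try_drop( size_t len )
{
    if( len >= MAX_MSG_SIZE || dropped[len] >= DROP_THRESHOLD )
        return( 0 );

    ++dropped[len];
    return( 1 );
}
```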
There is only one practical situation where this makes a difference:
If datagram packing is used by default but disabled on retransmission
(which OpenSSL has been seen to do), it can happen that we drop a
datagram in its initial transmission, then also drop some of its records
when they are retransmitted one by one afterwards, yet still keep the
drop counter at 1 instead of 2. However, even in this situation, we'll
correctly count the drops from that point on and eventually stop
dropping, because the peer will not fall back to using packing and
will hence use stable record lengths.
To prevent dropping the same message over and over again, the UDP proxy
test application programs/test/udp_proxy _logically_ maintains a mapping
from records to the number of times the record has already been dropped,
and stops dropping once a configurable threshold (currently 2) is passed.
However, the actual implementation deviates from this logical view
in two crucial respects:
- To keep the implementation simple and independent of
implementations of suitable map interfaces, it only counts how
many times a record of a given _size_ has been dropped, and
stops dropping further records of that size once the configurable
threshold is passed. Of course, this is not foolproof, but it is a
good enough approximation for the proxy, and it allows the use of an
inefficient but simple array for the required map.
- The implementation mixes datagram lengths and record lengths:
When deciding whether it is allowed to drop a datagram, it
uses the total datagram size as a lookup index into the map
counting the number of times a packet has been dropped. However,
when updating this map, the UDP proxy traverses the datagram
record by record, and updates the mapping at the level of record
lengths.
Apart from this inconsistency, the introduction of the Connection ID
feature leads to yet another problem: The CID length is not part of
the record header but dynamically negotiated during (potentially
encrypted!) handshakes, and it is hence impossible for a passive traffic
analyzer (in this case, our UDP proxy) to reliably parse record headers;
in particular, it isn't possible to reliably infer the length of a record,
nor to dissect a datagram into records.
The previous implementation of the UDP proxy was not CID-aware and
assumed that the record length would always reside at offsets 11, 12
in the DTLS record header, which would allow it to iterate through
the datagram record by record. As mentioned, this is no longer possible
for CID-based records, and the current implementation can run into
a buffer overflow in this case (because it doesn't validate that
the record length is not larger than what remains in the datagram).
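For context, the fixed offsets the old code relied on come from the
standard (non-CID) DTLS 1.2 record header; the following sketch (not
the proxy's actual code) shows both the parse and the bounds check that
was missing, and why a CID of unknown length breaks the whole approach:

```
#include <stddef.h>

/* Standard DTLS 1.2 record header (13 bytes, no CID):
 *   offset  0     : content type         (1 byte)
 *   offsets 1-2   : protocol version     (2 bytes)
 *   offsets 3-4   : epoch                (2 bytes)
 *   offsets 5-10  : sequence number      (6 bytes)
 *   offsets 11-12 : payload length       (2 bytes)
 *
 * Parse the record length at the fixed offsets, with the bounds check
 * the old code lacked. With a CID, an unknown number of CID bytes sits
 * between the sequence number and the length field, so the fixed
 * offsets no longer hold and this parse cannot be done reliably --
 * which is why the proxy stops attempting it altogether. */
static int get_record_length( const unsigned char *data, size_t datagram_len,
                              size_t offset, size_t *rec_len )
{
    if( datagram_len < 13 || offset > datagram_len - 13 )
        return( -1 );

    *rec_len = ( (size_t) data[offset + 11] << 8 ) | data[offset + 12];

    /* The missing check: the parsed length must not exceed what
     * remains in the datagram after this record's header. */
    if( *rec_len > datagram_len - offset - 13 )
        return( -1 );

    return( 0 );
}
```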
This commit removes the inconsistency in datagram vs. record length
and resolves the buffer overflow issue by not attempting any dissection
of datagrams into records, and instead only counting how often _datagrams_
of a particular size have been dropped.
There is only one practical situation where this makes a difference:
If datagram packing is used by default but disabled on retransmission
(which OpenSSL has been seen to do), it can happen that we drop a
datagram in its initial transmission, then also drop some of its records
when they are retransmitted one by one afterwards, yet still keep the
drop counter at 1 instead of 2. However, even in this situation, we'll
correctly count the drops from that point on and eventually stop
dropping, because the peer will not fall back to using packing and
will hence use stable record lengths.
This commit adds the command line option 'bad_cid' to the UDP proxy
`./programs/test/udp_proxy`. It takes a non-negative integer value N
which, if non-zero, has the effect of duplicating roughly one in N
records that use a CID and modifying the CID in the first copy sent.
This is to exercise the stack's documented behaviour on receipt
of unexpected CIDs.
It is important to send the record with the unexpected CID first,
because otherwise the packet would be dropped already during
replay protection (the same holds for the implementation of the
existing 'bad_ad' option).
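A rough sketch of the intended behaviour (the names, the packet type
and the CID offset handling below are placeholders, not the actual
udp_proxy implementation):

```
#include <stddef.h>
#include <string.h>

typedef struct { unsigned char buf[2048]; size_t len; } packet_t;

/* Placeholder for the proxy's forwarding routine. */
static void send_packet( const packet_t *p ) { (void) p; }

/* In a DTLS record carrying a CID, the CID starts right after the
 * 11 bytes of type, version, epoch and sequence number. */
#define CID_OFFSET 11

static unsigned long cid_records_seen = 0;

/* For every N-th record carrying a CID, first send a copy with a
 * corrupted CID, then the unmodified original. The bad copy must go
 * first, since otherwise it would be discarded by replay protection
 * before the CID handling is ever exercised. */
static void maybe_send_bad_cid( packet_t *p, unsigned long bad_cid_ratio )
{
    if( bad_cid_ratio != 0 && ++cid_records_seen % bad_cid_ratio == 0 )
    {
        packet_t copy;
        memcpy( &copy, p, sizeof( copy ) );
        copy.buf[CID_OFFSET] ^= 0x5A;   /* flip some bits of the CID  */
        send_packet( &copy );           /* unexpected CID goes first  */
    }

    send_packet( p );                   /* then the genuine record    */
}
```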
ApplicationData records are not protected against loss by DTLS
and our test applications ssl_client2 and ssl_server2 don't
implement any retransmission scheme to deal with loss of the
data they exchange. Therefore, the UDP proxy programs/test/udp_proxy
does not drop ApplicationData records.
With the introduction of the Connection ID, encrypted ApplicationData
records cannot be recognized as such by inspecting the record content
type, as the latter is always set to the CID-specific content type for
protected records using CIDs, while the actual content type is hidden
in the plaintext.
To keep tests working, this commit adds CID records to the list of
content types which are protected against dropping by the UDP proxy.
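A sketch of the resulting check (the content-type constants follow the
mbed TLS names for ApplicationData and CID records; the function itself
is illustrative, not the proxy's exact code):

```
#include "mbedtls/ssl.h"

/* ApplicationData was already protected against dropping; records with
 * the CID content type are now protected as well, since a protected
 * record using a CID may hide ApplicationData inside its plaintext. */
static int record_must_not_be_dropped( unsigned char content_type )
{
    if( content_type == MBEDTLS_SSL_MSG_APPLICATION_DATA )
        return( 1 );
#if defined(MBEDTLS_SSL_DTLS_CONNECTION_ID)
    if( content_type == MBEDTLS_SSL_MSG_CID )
        return( 1 );
#endif
    return( 0 );
}
```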
This commit improves hygiene and formatting of macro definitions
throughout the library. Specifically:
- It adds brackets around parameters to avoid unintended
interpretation of arguments, e.g. due to operator precedence.
- It adds uses of the `do { ... } while( 0 )` idiom for macros that
are used as statements (see the sketch below).
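As an illustration of both points (made-up macro names, purely to show
the pattern):

```
/* Before: arguments are substituted unparenthesized, and the
 * multi-statement body misbehaves after an if without braces. */
#define BAD_DOUBLE( x )   x + x
#define BAD_SWAP( a, b )  tmp = a; a = b; b = tmp;

/* After: every use of a parameter is parenthesized, and command-like
 * macros are wrapped in do { ... } while( 0 ) so they act as a single
 * statement in every context. */
#define GOOD_DOUBLE( x )  ( ( x ) + ( x ) )
#define GOOD_SWAP( a, b )       \
    do                          \
    {                           \
        int tmp = ( a );        \
        ( a ) = ( b );          \
        ( b ) = tmp;            \
    } while( 0 )
```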
Remove the ssl_cert_test sample application, as it uses
hardcoded certificates that have moved, and is redundant with the x509
tests and applications. Fixes #1905.
The previous prototype gave warnings, as the strings produced by #cond and
__FILE__ are const, so we shouldn't implicitly cast them to non-const.
While at it, modify most example programs:
- include the header that has the function declaration, so that the definition
can be checked to match by the compiler
- fix whitespace
- make it work even if PLATFORM_C is not defined:
- CHECK_PARAMS is not documented as depending on PLATFORM_C and there is
no reason why it should
- so, remove the corresponding #if defined in each program...
- and add missing #defines for mbedtls_exit when needed
The result has been tested (make all test with -Werror) with the following
configurations:
- full with CHECK_PARAMS with PLATFORM_C
- full with CHECK_PARAMS without PLATFORM_C
- full without CHECK_PARAMS without PLATFORM_C
- full without CHECK_PARAMS with PLATFORM_C
Additionally, it has been manually tested that adding
mbedtls_aes_init( NULL );
near the normal call to mbedtls_aes_init() in programs/aes/aescrypt2.c has the
expected effect when running the program.
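A sketch of the const-correct handler a program provides when
MBEDTLS_CHECK_PARAMS is enabled (the declaration comes from
mbedtls/platform_util.h in that case; the printf/exit body here is
illustrative):

```
#include <stdio.h>
#include <stdlib.h>

#if defined(MBEDTLS_CHECK_PARAMS)
#include "mbedtls/platform_util.h"

void mbedtls_param_failed( const char *failure_condition,
                           const char *file, int line )
{
    /* #cond and __FILE__ expand to string literals, hence const char *. */
    printf( "%s:%i: Input param failed - %s\n",
            file, line, failure_condition );
    exit( EXIT_FAILURE );
}
#endif
```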
The sample programs require an additional handler function,
mbedtls_param_failed(), to handle any failed parameter validation checks
enabled by the MBEDTLS_CHECK_PARAMS config.h option.
Some sample programs access structure fields directly. Making these work is
desirable in the long term, but they are not essential for the core
functionality in non-legacy mode.
Previously, the UDP proxy could only remember one delayed message
for future transmission; if two messages were delayed in succession,
without another one being normally forwarded in between,
the message that got delayed first would be dropped.
This commit enhances the UDP proxy to allow an arbitrary
(compile-time fixed) number of messages to be delayed in succession.
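An illustrative sketch of the change (capacity and names are
placeholders, not the proxy's actual code): the single delayed-message
slot becomes a small compile-time fixed FIFO.

```
#include <stddef.h>

#define MAX_DELAYED_MSG 2   /* compile-time fixed capacity */

typedef struct { unsigned char buf[2048]; size_t len; } msg_t;

static msg_t  delayed_msgs[MAX_DELAYED_MSG];
static size_t num_delayed = 0;

/* Queue a message for later; return -1 if the queue is full, in which
 * case the caller forwards the message normally instead. */
static int delay_message( const msg_t *m )
{
    if( num_delayed == MAX_DELAYED_MSG )
        return( -1 );

    delayed_msgs[num_delayed++] = *m;
    return( 0 );
}

/* Flush the delayed messages in order of arrival. */
static void send_delayed( void (*send)( const msg_t * ) )
{
    size_t i;
    for( i = 0; i < num_delayed; i++ )
        send( &delayed_msgs[i] );
    num_delayed = 0;
}
```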
Functions timed with TIME_AND_TSC() didn't have their return values checked.
I'm not sure whether Coverity complained about the existing uses, but it did
complain about new ones, since we consistently check these functions' return
values everywhere but here, which it rightfully finds suspicious.
So, let's check return values. This probably adds a few cycles to existing
loop overhead, but on my machine (x86_64) the added overhead is less than the
random-looking variation between various runs, so it's acceptable.
Some calls had their own particular error checking; remove that in favour of
the new general solution.
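The pattern of the fix, roughly (this is not the actual TIME_AND_TSC()
macro from programs/test/benchmark.c, just an illustration of checking
the return value inside the timing loop):

```
#include <stdio.h>

static volatile int alarmed; /* set asynchronously when the timing period ends */

/* Run the timed function until the alarm fires, but stop early and flag
 * the failure if it returns an error, instead of ignoring the result. */
static unsigned long time_and_check( int (*timed_fn)( void ), const char *title )
{
    unsigned long i;
    int ret = 0;

    for( i = 1; ret == 0 && ! alarmed; i++ )
        ret = timed_fn();                 /* return value now checked */

    if( ret != 0 )
        printf( "FAILED: %s\n", title );  /* report error, not a bogus rate */

    return( i );
}
```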
* development: (182 commits)
Change the library version to 2.11.0
Fix version in ChangeLog for fix for #552
Add ChangeLog entry for clang version fix. Issue #1072
Compilation warning fixes on 32b platform with IAR
Revert "Turn on MBEDTLS_SSL_ASYNC_PRIVATE by default"
Fix for missing len var when XTS config'd and CTR not
ssl_server2: handle mbedtls_x509_dn_gets failure
Fix harmless use of uninitialized memory in ssl_parse_encrypted_pms
SSL async tests: add a few test cases for error in decrypt
Fix memory leak in ssl_server2 with SNI + async callback
SNI + SSL async callback: make all keys async
ssl_async_resume: free the operation context on error
ssl_server2: get op_name from context in ssl_async_resume as well
Clarify "as directed here" in SSL async callback documentation
SSL async callbacks documentation: clarify resource cleanup
Async callback: use mbedtls_pk_check_pair to compare keys
Rename mbedtls_ssl_async_{get,set}_data for clarity
Fix copypasta in the async callback documentation
SSL async callback: cert is not always from mbedtls_ssl_conf_own_cert
ssl_async_set_key: detect if ctx->slots overflows
...
Add a new context structure for XTS. Adjust the API for XTS to use the new
context structure, including tests suites and the benchmark program. Update
Doxygen documentation accordingly.
AES-XEX is a building block for other cryptographic standards and not yet a
standard in and of itself. We'll just provide the standardized AES-XTS
algorithm, and not AES-XEX. The AES-XTS algorithm and interface provided
can be used to perform the AES-XEX algorithm when the length of the input
is a multiple of the AES block size.
XTS mode is short for "xor-encrypt-xor with ciphertext stealing";
it is a generalization of XEX mode.
This implementation is limited to an 8-bit (1-byte) boundary, which
doesn't seem to be what was intended, judging by some test vectors [1].
This commit comes with tests, extracted from [1], and benchmarks.
Admittedly, the benchmarks aren't very meaningful here, as they work with
a buffer whose size is a multiple of 16 bytes, which isn't a challenge
for XTS compared to XEX.
[1] http://csrc.nist.gov/groups/STM/cavp/documents/aes/XTSTestVectors.zip
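A minimal usage sketch of the new context API (mbedtls_aes_xts_init,
mbedtls_aes_xts_setkey_enc, mbedtls_aes_crypt_xts, mbedtls_aes_xts_free,
as introduced by this commit; the key and data-unit values are arbitrary
examples):

```
#include "mbedtls/aes.h"

int xts_encrypt_example( void )
{
    mbedtls_aes_xts_context ctx;
    /* XTS takes a double-length key: here two AES-128 keys = 256 bits. */
    const unsigned char key[32] = { 0 };
    /* The 16-byte "data unit" (tweak), e.g. a sector number. */
    const unsigned char data_unit[16] = { 0 };
    unsigned char input[32]  = { 0 };   /* two AES blocks of plaintext */
    unsigned char output[32];
    int ret;

    mbedtls_aes_xts_init( &ctx );

    ret = mbedtls_aes_xts_setkey_enc( &ctx, key, 256 );
    if( ret == 0 )
        ret = mbedtls_aes_crypt_xts( &ctx, MBEDTLS_AES_ENCRYPT,
                                     sizeof( input ), data_unit,
                                     input, output );

    mbedtls_aes_xts_free( &ctx );
    return( ret );
}
```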
* development: (97 commits)
Updated version number to 2.10.0 for release
Add a disabled CMAC define in the no-entropy configuration
Adapt the ARIA test cases for new ECB function
Fix file permissions for ssl.h
Add ChangeLog entry for PR#1651
Fix MicroBlaze register typo.
Fix typo in doc and copy missing warning
Fix edit mistake in cipher_wrap.c
Update CTR doc for the 64-bit block cipher
Update CTR doc for other 128-bit block ciphers
Slightly tune ARIA CTR documentation
Remove double declaration of mbedtls_ssl_list_ciphersuites
Update CTR documentation
Use zeroize function from new platform_util
Move to new header style for ALT implementations
Add ifdef for selftest in header file
Fix typo in comments
Use more appropriate type for local variable
Remove useless parameter from function
Wipe sensitive info from the stack
...