This uncovered a bug that led to a double-free (in practice, it could be a
free() on any invalid value): initially the session structure is loaded
with `memcpy()`, which copies in the previous values of the pointers peer_cert
and ticket (pointers to heap-allocated buffers), or any other value if the
input is attacker-controlled. Now if we exit before we get a chance to replace
those invalid values with valid ones (for example because the input buffer is
too small, or because the second malloc() failed), then the next call to
session_free() is going to call free() on invalid pointers.
This bug is fixed in this commit by always setting the pointers to NULL right
after they've been read from the serialised state, so that the invalid values
can never be used.
(An alternative would be to NULL-ify them when writing. This was rejected
mostly because we need to do it when reading anyway, as the consequences of
free(invalid) are too severe to take any risk, so doing it when writing as
well would be redundant and a waste of code size.)
Also, while thinking about what happens in case of errors, it became apparent
to me that it was bad practice to leave the session structure in a
half-initialised state and rely on the caller to call session_free(), so this
commit also ensures we always clear the structure when loading fails.
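For illustration only (this is not the code from this commit; the helper name
and exact layout are hypothetical, only the struct fields and config guards
follow the public mbedtls_ssl_session definition), the shape of the fix looks
like this:

    #include <string.h>
    #include "mbedtls/ssl.h"

    static int session_load_sketch( mbedtls_ssl_session *session,
                                    const unsigned char *buf, size_t len )
    {
        if( len < sizeof( mbedtls_ssl_session ) )
        {
            /* Never leave a half-initialised session for the caller to free. */
            memset( session, 0, sizeof( mbedtls_ssl_session ) );
            return( MBEDTLS_ERR_SSL_BAD_INPUT_DATA );
        }

        memcpy( session, buf, sizeof( mbedtls_ssl_session ) );

        /* The serialised state contains whatever pointer values the saving
         * side had (or attacker-chosen bytes); they must never reach free(). */
    #if defined(MBEDTLS_X509_CRT_PARSE_C)
        session->peer_cert = NULL;
    #endif
    #if defined(MBEDTLS_SSL_SESSION_TICKETS) && defined(MBEDTLS_SSL_CLI_C)
        session->ticket = NULL;
    #endif

        /* ... then re-create peer_cert and ticket from the remaining bytes;
         * on any failure, free whatever was allocated and zeroize the whole
         * structure before returning. */

        return( 0 );
    }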
This test appeared to be passing for the wrong reason; it's actually not
appropriate for the current implementation. The serialised data contains
the values of pointers to heap-allocated buffers, and there is no reason these
should be identical after a load-save pair. They just happened to be identical
when I first ran the test, because of where session_free() was called in the
test code and the fact that the libc's malloc() reused the same buffers. The
test no longer passes if another malloc() implementation is used (for example,
when compiling with ASan, which avoids reusing buffers, probably for better
error detection).
So, disable this test for now (we can re-enable it when we change how
sessions are serialised, which will be done in a future PR, hence the name of
the dummy macro in depends_on). In the next commit we're going to add a test
that save-load is the identity instead; this will be more work in testing, as
it requires checking each field manually, but at least it is reliable.
This initial test ensures that a save-load pair is the identity. It is so
far incomplete in that it only tests sessions without tickets or certificates.
This will be improved in the next commits.
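A rough sketch of what such a save-load identity check can look like, assuming
entry points with the signatures of the (later) public
mbedtls_ssl_session_save()/mbedtls_ssl_session_load() API and comparing only a
few scalar fields by way of example; the actual test code differs:

    #include <string.h>
    #include "mbedtls/ssl.h"

    static int session_save_load_is_identity( const mbedtls_ssl_session *original )
    {
        unsigned char buf[1024];
        size_t len = 0;
        mbedtls_ssl_session restored;
        int ok = 0;

        mbedtls_ssl_session_init( &restored );

        if( mbedtls_ssl_session_save( original, buf, sizeof( buf ), &len ) != 0 )
            goto exit;
        if( mbedtls_ssl_session_load( &restored, buf, len ) != 0 )
            goto exit;

        /* Every field has to be checked by hand; the pointer fields
         * (peer_cert, ticket) would need a deep comparison and are left out
         * of this sketch, matching the "no tickets or certificates"
         * limitation mentioned above. */
        ok = original->ciphersuite == restored.ciphersuite &&
             original->id_len      == restored.id_len      &&
             memcmp( original->id, restored.id, original->id_len ) == 0 &&
             memcmp( original->master, restored.master,
                     sizeof( original->master ) ) == 0;

    exit:
        mbedtls_ssl_session_free( &restored );
        return( ok );
    }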
While 'session hash' is currently unique, so suitable to prove that the
intended code path has been taken, it's a generic enough phrase that in the
future we might add other debug messages containing it in completely unrelated
code paths. In order to future-proof the accuracy of the test, let's use a
more specific string.
* origin/pr/2790: (40 commits)
Fix possibly-lossy conversion warning from MSVC
Reintroduce length 0 check for records
Don't use memcpy() for 2-byte copy operation
Remove integer parsing macro
Fix alignment in record header parsing routine
Don't disallow 'record from another epoch' log msg in proxy ref test
Make sure 'record from another epoch' is displayed for next epoch
Implement record checking API
Mark ssl_parse_record_header() as `const` in SSL context
Make mbedtls_ssl_in_hdr_len() CID-unaware
Remove duplicate setting of ssl->in_msgtype and ssl->in_msglen
Move update of in_xxx fields in ssl_get_next_record()
Move update of in_xxx fields outside of ssl_prepare_record_content()
Reduce dependency of ssl_prepare_record_content() on in_xxx fields
Move ssl_update_in_pointers() to after record hdr parsing
Mark DTLS replay check as `const` on the SSL context
Move updating the internal rec ptrs to outside of rec hdr parsing
Mark ssl_decrypt_buf() as `const in the input SSL context
Adapt ssl_prepare_record_content() to use SSL record structure
Use record length from record structure when fetching content in TLS
...
It happens regularly in test runs that the server example application
shuts down a connection, goes into waiting mode for a new connection,
and then receives the encrypted ClosureAlert from the client. The only
reason why this currently does not trigger the 'record from another epoch'
message is that we handle ClientHello parsing outside of the main record
stack, because we want to be able to detect SSLv2 ClientHellos. However,
this is likely to go away, and once it does, we'll see the log message.
Further, when record checking is used, every record, including the mentioned
closure alert, is passed to the record checking API before being passed to
the rest of the stack, which leads to the log message being printed.
In summary, grepping for 'record from another epoch' is a fragile way
of checking whether a reordered message has arrived. A more reliable
way is to grep for 'Buffer record from epoch' which is printed when
a record from a future epoch is actually buffered, and 'ssl_buffer_message'
which is the function buffering a future handshake message.
compat.sh used to skip OpenSSL altogether for DTLS 1.2, because older
versions of OpenSSL didn't support it. But these days it is supported.
We don't want to use DTLS 1.2 with OpenSSL unconditionally, because we
still use legacy versions of OpenSSL to test with legacy ciphers. So
check whether the version we're using supports it.
Without any -O option, the default is -O0, and then the assembly code
is not used, so this would not be a non-regression test for the
assembly code that doesn't build.
* origin/pr/2660:
Fix parsing issue when int parameter is in base 16
Refactor receive_uint32()
Refactor get_byte function
Make the script portable to both pythons
Update the test encoding to support python3
update the test script
* origin/pr/1622: (29 commits)
Do not build fuzz on windows
No booleans and import config
Removing space before opening parenthesis
Style corrections
Syntax fix
Fixes warnings from MSVC
Add a linker flag to enable gcov in basic-build-test.sh
checks MBEDTLS_PEM_PARSE_C
Restore programs/fuzz/Makefile after in-tree cmake
Move fuzz directory to programs
Documentation for corpus generation
Restore tests/fuzz/Makefile after in-tree cmake
Adding ifdefs to avoid warnings for unused globals
Adds LDFLAGS fsanitize=address
Ignore compiled object files and executables
Also clean the fuzz subdirectory
copyediting README.md
Protecting client/server fuzz targts with ifdefs
Makefile support 1
Fuzz README and direct compilation
...
* origin/pr/2706:
Update Mbed Crypto to contain mbed-crypto#152
CMake: Add a subdirectory build regression test
README: Enable builds as a CMake subproject
ChangeLog: Enable builds as a CMake subproject
Remove use of CMAKE_SOURCE_DIR
We accidentally disabled testing with MBEDTLS_ECP_RESTARTABLE. Re-enable
testing with restartable ECP when MBEDTLS_USE_PSA_CRYPTO is not set.
Fixes 971dea3745 ("Enable USE_PSA_CRYPTO with config.pl full")
Limit log output in compat.sh and ssl-opt.sh, in case of failures where these
scripts may output seemingly unlimited error logs.
Note that ulimit -f uses units of 512 bytes, so we use 10 * 1024 * 1024 * 2 to
get 10 GiB.
Fix the error `ValueError: invalid literal for int() with base 10:` that
occurs when a parameter is given in base 16. Use the relevant base
when calling the `int()` function.
* Allow specifying MBEDTLS_DOCKER_REGISTRY for organizations that have
a mirroring proxy of Docker Hub
* Specify "--network host" during build to ensure use of the host's
DNS resolution.
Commit 117b8a4516 requires version 6+
of i686-w64-mingw32-gcc to run the mingw builds, but Ubuntu Xenial (16.04)
supplies 5.3.1. Change the Docker container to Ubuntu Bionic (18.04) to
pick up a version that will run the tests.
Enable running tests under Docker. This makes it easier to spin up an
environment with all dependencies (especially the multiple versions of
openssl and gnutls needed).
* tests/docker/xenial/Dockerfile: Definition for the docker image,
including local builds for openssl and gnutls.
* tests/scripts/docker_env.sh: New helper script that creates the Docker
image and has a function to run a command in the Docker container.
* tests/docker/all-in-docker.sh: Wrapper for all.sh under Docker.
* tests/docker/basic-in-docker.sh: Script that runs the same commands as
.travis.yml, in Docker.
* tests/ssl-opt-in-docker.sh: Wrapper to run ssl-opt.sh in Docker.
* tests/compat-in-docker.sh: Wrapper to run compat.sh in Docker.
* tests/make-in-docker.sh: Wrapper to run make in Docker.
Change-Id: Ie092b1deed24c24c3859754535589523ce1d0a58
Call `greentea_getc()` 8 times and then `unhexify` once, instead of
calling `receive_byte()`, which internally calls `greentea_getc()` twice
(once per hex digit).
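Sketch of the resulting shape of receive_uint32(), assuming greentea_getc()
delivers one character of the host-sent hex string and unhexify() is the usual
test helper; the actual code may differ:

    #include <stddef.h>
    #include <stdint.h>

    /* Assumed helpers: greentea_getc() from the greentea client, and the
     * test suite's unhexify(). */
    int greentea_getc( void );
    int unhexify( unsigned char *obuf, const char *ibuf );

    uint32_t receive_uint32( void )
    {
        unsigned char raw[4];
        char hex[9];
        size_t i;

        /* Read all 8 hex digits of the 32-bit value in one go... */
        for( i = 0; i < 8; i++ )
            hex[i] = (char) greentea_getc();
        hex[8] = '\0';

        /* ...and convert them with a single unhexify() call, instead of one
         * receive_byte() per byte, each costing two greentea_getc() calls. */
        unhexify( raw, hex );

        return( ( (uint32_t) raw[0] << 24 ) | ( (uint32_t) raw[1] << 16 ) |
                ( (uint32_t) raw[2] <<  8 ) |   (uint32_t) raw[3] );
    }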
Since Python 3 handles encoding differently from Python 2,
a change in the way the data is encoded and sent to the target is needed:
1. Change the test data to be sent as a hex string.
2. Convert the characters to binary bytes.
This is done because the mbed tools (mbed-greentea and mbed-htrunner)
translate the encoding differently.
Fuzz targets are X.509 CRT, CSR and CRL parsing,
as well as private and public key parsing,
and client and server communication.
Fuzz targets for DTLS
Simple corpus with valid DTLS connection
Deterministic behavior for fuzzing, i.e. no timestamps
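For reference, the general shape of such a target, here X.509 CRT parsing
written as a libFuzzer-style entry point (the actual files under programs/fuzz
may differ in detail):

    #include <stddef.h>
    #include <stdint.h>
    #include "mbedtls/x509_crt.h"

    int LLVMFuzzerTestOneInput( const uint8_t *data, size_t size )
    {
        mbedtls_x509_crt crt;

        mbedtls_x509_crt_init( &crt );
        /* The return value is deliberately ignored: both accepting and
         * rejecting the input are fine; the sanitizers catch memory issues. */
        mbedtls_x509_crt_parse( &crt, (const unsigned char *) data, size );
        mbedtls_x509_crt_free( &crt );

        return( 0 );
    }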
* origin/pr/2697:
Update crypto submodule
Add all.sh component that exercises invalid_param checks
Remove mbedtls_param_failed from programs
Make it easier to define MBEDTLS_PARAM_FAILED as assert
Make test suites compatible with #include <assert.h>
Pass -m32 to the linker as well
Don't systematically rebuild programs
* origin/pr/2053:
Clarify ChangeLog entry for fix to #1628
Add Changelog entry for clang test-ref-configs.pl fix
Enable more compiler warnings in tests/Makefile
Change file scoping of test helpers.function
* origin/pr/2443: (25 commits)
Fix documentation of X.509 parsing test
Add X.509 CRT parsing test for mixed time-encodings
Improve X.509 CRT parsing test names
Add negative X.509 parsing tests for v3Ext in v1/v2 CRT
Add negative X.509 parsing tests for IssuerID/SubjectID in v1 CRT
Improve name of X.509 CRT parsing test
Always use the same X.509 alg structure inside and outside of TBS
Fix test dependencies in X.509 CRT parsing suite
Fix non-DER length encoding in two X.509 CRT parsing tests
Fix test case name formatting in X.509 parsing suite
Use ASN.1 NULL TLVs when testing invalid tags
Shorten X.509 CRT parsing test names
Extend negative testing for X.509 Signature parsing
Extend negative testing for X.509 SignatureAlgorithm parsing
Extend negative testing for X.509 v3 Extension parsing
Extend negative testing for X.509 SubjectID parsing
Extend negative testing for X.509 IssuerID parsing
Extend negative testing for X.509 SubjectPublicKeyInfo parsing
Extend negative testing for X.509 Subject parsing
Extend negative testing for X.509 Validity parsing
...
With the change to the full config, there were no longer any tests
that exercised invalid-parameter behavior. The test suite exercises
invalid-parameter behavior by calling TEST_INVALID_PARAM and friends,
relying on the test suite's mbedtls_check_param function. This
function is only enabled if MBEDTLS_CHECK_PARAMS is defined but not
MBEDTLS_CHECK_PARAMS_ASSERT.
Add a component to all.sh that enables MBEDTLS_CHECK_PARAMS but
disables MBEDTLS_CHECK_PARAMS_ASSERT and doesn't define
MBEDTLS_PARAM_FAILED. This way, the xxx_invalid_param() tests do run.
Since sample programs don't provide a mbedtls_check_param function,
this component doesn't build the sample programs.
* origin/pr/2430:
Document support for MD2 and MD4 in programs/x509/cert_write
Correct name of X.509 parsing test for well-formed, ill-signed CRT
Add test cases exercising successful verification of MD2/MD4/MD5 CRT
Add test case exercising verification of valid MD2 CRT
Add MD[245] test CRTs to tree
Add instructions for MD[245] test CRTs to tests/data_files/Makefile
Add suppport for MD2 to CSR and CRT writing example programs
Remove use of MD2 in further x509parse tests
Convert further x509parse tests to use lower-case hex data
Correct placement of ChangeLog entry
Adapt ChangeLog
Use SHA-256 instead of MD2 in X.509 CRT parsing tests
Consistently use lower case hex data in X.509 parsing tests
* origin/pr/2260:
Update crypto submodule
Remove heading spaces in tests/data_files/Makefile
Re-generate library/certs.c from script
Add new line at the end of test-ca2.key.enc
Use strict syntax to annotate origin of test data in certs.c
Add run to all.sh exercising !MBEDTLS_PEM_PARSE_C + !MBEDTLS_FS_IO
Allow DHM self test to run without MBEDTLS_PEM_PARSE_C
ssl-opt.sh: Auto-skip tests that use files if MBEDTLS_FS_IO unset
Document origin of hardcoded certificates in library/certs.c
Adapt ChangeLog
Rename server1.der to server1.crt.der
Add DER encoded files to git tree
Add build instructions to generate DER versions of CRTs and keys
Document "none" value for ca_path/ca_file in ssl_client2/ssl_server2
ssl_server2: Skip CA setup if `ca_path` or `ca_file` argument "none"
ssl_client2: Skip CA setup if `ca_path` or `ca_file` argument "none"
Correct white spaces in ssl_server2 and ssl_client2
Adapt ssl_client2 to parse DER encoded test CRTs if PEM is disabled
Adapt ssl_server2 to parse DER encoded test CRTs if PEM is disabled
Don't use the macro name assert. It's technically permitted as long as
<assert.h> is not included, but it's fragile, because it means the
code and any header that it includes must not include <assert.h>.
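To illustrate the fragility (hypothetical example, not code from this tree):
<assert.h> redefines assert each time it is included, so with typical
implementations a project-local macro of that name is silently replaced by any
direct or transitive include of the standard header.

    /* Project-local "assert" (hypothetical): */
    void my_check_failed( const char *cond, const char *file, int line );
    #define assert( cond )                                              \
        do { if( !(cond) )                                              \
            my_check_failed( #cond, __FILE__, __LINE__ ); } while( 0 )

    #include <assert.h>   /* Re-defines assert(): the macro above is gone. */

    void validate( int x )
    {
        assert( x > 0 );  /* Now the libc assert, not my_check_failed(). */
    }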
Increase the SO versions of libmbedx509 and libmbedtls due to the
addition of fields in publicly visible (non-opaque) structs:
- mbedtls_ssl_config
- mbedtls_ssl_context
- mbedtls_x509_crt
For unit tests and sample programs, CFLAGS=-m32 is enough to get a
32-bit build, because these programs are all compiled directly
from *.c to the executable in one shot. But with makefile rules that
first build object files and then link them, LDFLAGS=-m32 is also
needed.
Remove the "Decrypt empty buffer" test, as ChaCha20 is a stream cipher
and 0 bytes encrypted is identical to a 0 length buffer. The "ChaCha20
Encrypt and decrypt 0 bytes" test will test decryption of a 0 length
buffer.
Previously, even in the Chacha20 and Chacha20-Poly1305 tests, we would
test that decryption of an empty buffer would work with
MBEDTLS_CIPHER_AES_128_CBC.
Make the cipher used with the dec_empty_buf() test configurable, so that
Chacha20 and Chacha20-Poly1305 empty buffer tests can use ciphers other
than AES CBC. Then, make the Chacha20 and Chacha20-Poly1305 empty buffer
tests use the MBEDTLS_CIPHER_CHACHA20 and
MBEDTLS_CIPHER_CHACHA20_POLY1305 cipher suites.
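A rough sketch of what the parametrised test exercises, using the generic
cipher API with an arbitrary all-zero key and nonce (the real dec_empty_buf()
in test_suite_cipher.function is structured differently and also covers
Chacha20-Poly1305):

    #include "mbedtls/cipher.h"

    int decrypt_zero_bytes_chacha20( void )
    {
        const mbedtls_cipher_info_t *info;
        mbedtls_cipher_context_t ctx;
        unsigned char key[32] = { 0 };   /* arbitrary illustration key */
        unsigned char iv[12]  = { 0 };   /* ChaCha20 uses a 12-byte nonce */
        unsigned char in[1], out[16];
        size_t olen = 0;
        int ret = -1;

        mbedtls_cipher_init( &ctx );

        if( ( info = mbedtls_cipher_info_from_type( MBEDTLS_CIPHER_CHACHA20 ) ) == NULL )
            goto exit;
        if( mbedtls_cipher_setup( &ctx, info ) != 0 )
            goto exit;
        if( mbedtls_cipher_setkey( &ctx, key, 256, MBEDTLS_DECRYPT ) != 0 )
            goto exit;
        if( mbedtls_cipher_set_iv( &ctx, iv, sizeof( iv ) ) != 0 )
            goto exit;
        if( mbedtls_cipher_reset( &ctx ) != 0 )
            goto exit;

        /* Stream cipher: decrypting 0 bytes must succeed and output 0 bytes. */
        if( mbedtls_cipher_update( &ctx, in, 0, out, &olen ) != 0 || olen != 0 )
            goto exit;

        ret = 0;

    exit:
        mbedtls_cipher_free( &ctx );
        return( ret );
    }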
When testing a configuration where no ciphersuites have a MAC, via
component_test_when_no_ciphersuites_have_mac(), run only the
encrypt-then-MAC tests within ssl-opt.sh as a targeted check.
When MBEDTLS_SSL_ENCRYPT_THEN_MAC is enabled, but not
MBEDTLS_SSL_SOME_MODES_USE_MAC, mbedtls_ssl_derive_keys() and
build_transforms() will attempt to use a non-existent `encrypt_then_mac`
field in the ssl_transform.
Compile [ 93.7%]: ssl_tls.c
[Error] ssl_tls.c@865,14: 'mbedtls_ssl_transform {aka struct mbedtls_ssl_transform}' has no member named 'encrypt_then_mac'
[ERROR] ./mbed-os/features/mbedtls/src/ssl_tls.c: In function 'mbedtls_ssl_derive_keys':
./mbed-os/features/mbedtls/src/ssl_tls.c:865:14: error: 'mbedtls_ssl_transform {aka struct mbedtls_ssl_transform}' has no member named 'encrypt_then_mac'
    transform->encrypt_then_mac = session->encrypt_then_mac;
              ^~
Change mbedtls_ssl_derive_keys() and build_transforms() to only access
`encrypt_then_mac` if `encrypt_then_mac` is actually present.
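The shape of the guard (a fragment, not the full surrounding code in
ssl_tls.c):

    #if defined(MBEDTLS_SSL_ENCRYPT_THEN_MAC) && \
        defined(MBEDTLS_SSL_SOME_MODES_USE_MAC)
        /* The transform field only exists when at least one ciphersuite
         * in the configuration uses a MAC. */
        transform->encrypt_then_mac = session->encrypt_then_mac;
    #endif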
Add a regression test to detect future breakage in configurations that do
not include any MAC ciphersuites.
Fixes d56ed2491b ("Reduce size of `ssl_transform` if no MAC ciphersuite is enabled")
Lengths below 128 bytes must be encoded as a single 'XX' byte in DER,
but two tests in the X.509 CRT parsing suite used the BER (but non-DER)
encoding '81 XX', where the first byte 10000001 indicates that the length
follows (high bit set) and occupies 1 byte (low bits).
Previously, a test exercising the X.509 CRT parser's behaviour
on unexpected tags would use a '00' byte in place of the tag
for the expected structure. This makes reviewing the examples
harder because the binary data isn't valid DER-encoded ASN.1.
This commit uses the ASN.1 NULL TLV '05 00' to test invalid
tags, and adapts surrounding structures' length values accordingly.
This eases reviewing because now the ASN.1 structures are still
well-formed at the place where the mismatch occurs.