Only effective together with --rom, the new --check option makes two changes:
- abort in case of build warnings
- skip writing statistics
The goal is to make sure we build cleanly in the configuration used for
measuring code size, with all the compilers we use, both because we care about
that configuration and those compilers, and because any warnings would cast a
shadow on the code size measurements.
Currently the build fails with armc5 due to a pre-existing warning in PK; this
will be fixed in the next commit.
The next commit will also add an all.sh component to make sure we have no
regression in the future. (Which is the motivation for --check skipping
statistics: an all.sh component should probably not leave files around.)
While at it, fix two things:
1. The call to gcc --version was redundant with the echo line below
2. WARNING_CFLAGS shouldn't be overridden with armclang, as it would remove the
-Wall -Wextra and any directory-specific warnings (such as
-Wdeclaration-after-statement in library). It's meant to be overridden only
with compilers that don't accept the default value (namely armc5 here).
Some TLS-only code paths were not protected by an #ifdef, and while some
compilers are happy to just silently remove them, armc5 complains:
Warning: #111-D: statement is unreachable
Let's make armc5 happy.
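
For illustration, here is a minimal standalone sketch of the pattern being
fixed (not actual library code; all macro and function names are made up):

    #include <stdio.h>

    #define SSL_PROTO_DTLS
    /* #define SSL_PROTO_TLS */           /* stream transport compiled out */

    #if defined(SSL_PROTO_TLS)
    #define TRANSPORT_IS_DTLS( t )   ( (t) != 0 )
    #else
    #define TRANSPORT_IS_DTLS( t )   1    /* DTLS-only build: always true  */
    #endif

    static void write_handshake_msg( int transport )
    {
        ((void) transport);

        if( TRANSPORT_IS_DTLS( transport ) )
        {
            printf( "add DTLS handshake header\n" );
            return;
        }

    #if defined(SSL_PROTO_TLS)            /* the kind of guard added here  */
        /* Without the guard, this statement is dead code in DTLS-only
         * builds: gcc/clang drop it silently, armc5 warns (#111-D). */
        printf( "add TLS handshake header\n" );
    #endif
    }

    int main( void )
    {
        write_handshake_msg( 1 );
        return( 0 );
    }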
This is enabled by default as we generally enable things by default unless
there's a reason not to (experimental, deprecated, security risk).
We need a compile-time option because, even though the functions themselves
can be easily garbage-collected by the linker, implementing them will require
saving 64 bytes of Client/ServerHello.random values after the handshake, which
would otherwise not be needed, and people who don't need this feature
shouldn't have to pay the price of increased RAM usage.
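
To make the cost concrete, here is a hedged sketch of what such a guarded
field could look like (placeholder names, not the library's actual
definitions):

    #include <stdint.h>

    #define SSL_KEEP_RANDBYTES            /* placeholder for the new option */

    typedef struct
    {
        uint8_t master_secret[48];        /* always needed                  */
    #if defined(SSL_KEEP_RANDBYTES)
        uint8_t randbytes[64];            /* ClientHello.random (32 bytes) +
                                             ServerHello.random (32 bytes),
                                             kept after the handshake only
                                             when the feature is enabled    */
    #endif
    } ssl_session_sketch_t;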
A positive option looks better, but comes with the following compatibility
issue: people using a custom config.h that is not based on the default
config.h and who need TLS support would have to manually change their config in
order to still get TLS.
Work around that by making the public option negative. Internally the positive
option is used, though.
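
The pattern could look roughly like this (again with placeholder names): the
public config.h knob is negative, and an internal header derives the positive
macro that the code actually tests:

    /* config.h (public): define this to disable the feature and save RAM.
     * Custom configs that don't know about it keep the feature enabled. */
    /* #define SSL_NO_KEEP_RANDBYTES */

    /* internal header: derive the positive option used by the code. */
    #if !defined(SSL_NO_KEEP_RANDBYTES)
    #define SSL_KEEP_RANDBYTES
    #endif

    /* library code then only ever checks the positive form: */
    #if defined(SSL_KEEP_RANDBYTES)
    /* ... keep the 64 bytes of random values ... */
    #endif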
In the future (when preparing the next major version), we might want to switch
back to a positive option as this would be more consistent with other options
we have.
* origin/pr/2481:
Document support for MD2 and MD4 in programs/x509/cert_write
Correct name of X.509 parsing test for well-formed, ill-signed CRT
Add test cases exercising successful verification of MD2/MD4/MD5 CRT
Add test case exercising verification of valid MD2 CRT
Add MD[245] test CRTs to tree
Add instructions for MD[245] test CRTs to tests/data_files/Makefile
Add support for MD2 to CSR and CRT writing example programs
Convert further x509parse tests to use lower-case hex data
Correct placement of ChangeLog entry
Adapt ChangeLog
Use SHA-256 instead of MD2 in X.509 CRT parsing tests
Consistently use lower case hex data in X.509 parsing tests
* origin/pr/2497:
Re-generate library/certs.c from script
Add new line at the end of test-ca2.key.enc
Use strict syntax to annotate origin of test data in certs.c
Add run to all.sh exercising !MBEDTLS_PEM_PARSE_C + !MBEDTLS_FS_IO
Allow DHM self test to run without MBEDTLS_PEM_PARSE_C
ssl-opt.sh: Auto-skip tests that use files if MBEDTLS_FS_IO unset
Document origin of hardcoded certificates in library/certs.c
Adapt ChangeLog
Rename server1.der to server1.crt.der
Add DER encoded files to git tree
Add build instructions to generate DER versions of CRTs and keys
Document "none" value for ca_path/ca_file in ssl_client2/ssl_server2
ssl_server2: Skip CA setup if `ca_path` or `ca_file` argument "none"
ssl_client2: Skip CA setup if `ca_path` or `ca_file` argument "none"
Correct white spaces in ssl_server2 and ssl_client2
Adapt ssl_client2 to parse DER encoded test CRTs if PEM is disabled
Adapt ssl_server2 to parse DER encoded test CRTs if PEM is disabled
To prevent dropping the same message over and over again, the UDP proxy
test application programs/test/udp_proxy _logically_ maintains a mapping
from records to the number of times the record has already been dropped,
and stops dropping once a configurable threshold (currently 2) is passed.
However, the actual implementation deviates from this logical view
in two crucial respects:
- To keep the implementation simple and independent of
implementations of suitable map interfaces, it only counts how
many times a record of a given _size_ has been dropped, and
stops dropping further records of that size once the configurable
threshold is passed. Of course, this is not fool-proof, but it is a
good enough approximation for the proxy, and it allows the use of
an inefficient but simple array for the required map.
- The implementation mixes datagram lengths and record lengths:
When deciding whether it is allowed to drop a datagram, it
uses the total datagram size as a lookup index into the map
counting the number of times a packet has been dropped. However,
when updating this map, the UDP proxy traverses the datagram
record by record, and updates the mapping at the level of record
lengths.
Apart from this inconsistency, the current implementation suffers
from a lack of bounds checking for the parsed length of incoming
DTLS records that can lead to a buffer overflow when facing
malformed records.
This commit removes the inconsistency in datagram vs. record length
and resolves the buffer overflow issue by not attempting any dissection
of datagrams into records, and instead only counting how often _datagrams_
of a particular size have been dropped.
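
A hedged sketch of the fixed counting scheme (names and sizes are
placeholders, not the actual udp_proxy code):

    #include <stddef.h>

    #define MAX_MSG_SIZE  ( 16384 + 2048 )  /* larger than any valid datagram */
    #define DROP_MAX      2                 /* configurable drop threshold    */

    static unsigned dropped[MAX_MSG_SIZE + 1];

    /* Return 1 and record the drop if a datagram of length len may still be
     * dropped; return 0 once datagrams of that length have already been
     * dropped DROP_MAX times. No dissection into records, and the length is
     * bounds-checked before it is used as an array index. */
    static int try_drop( size_t len )
    {
        if( len > MAX_MSG_SIZE )
            return( 0 );

        if( dropped[len] >= DROP_MAX )
            return( 0 );

        dropped[len]++;
        return( 1 );
    }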
There is only one practical situation where this makes a difference:
If datagram packing is used by default but disabled on retransmission
(which OpenSSL has been seen to do), it can happen that we drop a
datagram in its initial transmission, then also drop some of its records
when they are retransmitted one-by-one afterwards, yet still keep the
drop-counter at 1 instead of 2. However, even in this situation, we'll
correctly count the number of droppings from that point on and eventually
stop dropping, because the peer will not fall back to using packing
and hence use stable record lengths.