Apparently Travis has an old version of doxygen that doesn't know all the tags in
our config. That's not something we care about: we only want to know about
warnings in our doxygen content.
On my machine, that reduces running time from about 30 minutes to less than 10
minutes, while maintaining a good probability of catching the most likely
issues in practice.
* yanesca/iss309:
Improved on the previous fix and added a test case to cover both types of carries.
Removed recursion from the fix for #309.
Improved on the fix for #309 and extended the test to cover subroutines.
Tests and fix added for #309 (in-place MPI doubling).
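For context, a minimal sketch of the aliasing case these tests exercise (the
wrapper name below is made up; mbedtls_mpi_add_mpi() is the real bignum API):

    #include "mbedtls/bignum.h"

    /* Double X in place, i.e. X = X + X, with the output and both inputs
     * aliasing the same mbedtls_mpi. This is the aliasing pattern that the
     * fix and the new carry tests are about. */
    static int mpi_double_in_place( mbedtls_mpi *X )
    {
        return( mbedtls_mpi_add_mpi( X, X, X ) );
    }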
When the same documentation is shared by a list of #defines, we used to use a
generic name in the \def command. Use the first name of the list instead so
that doxygen stops complaining, and mention the generic name in the longer
description.
This is not entirely satisfactory as the full list of macros will not be
included in the generated doc, but it's still an improvement as at least the
first macro is documented now, with a hint that there are others.
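To illustrate the pattern (the macro names below are invented for illustration,
not the actual ones touched by this change):

    /**
     * \def MBEDTLS_EXAMPLE_FIRST_OPTION
     *
     * Shared documentation for MBEDTLS_EXAMPLE_FIRST_OPTION and the other
     * MBEDTLS_EXAMPLE_* options in this group; the generic name is only
     * mentioned here in the description, so that the \def command refers
     * to a macro that actually exists and doxygen stops complaining.
     */
    #define MBEDTLS_EXAMPLE_FIRST_OPTION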
Otherwise we get warnings that some documentation items don't have a
corresponding #define and, more importantly, the corresponding snippets are not
included in the output.
For that we need a modified version of the "full" argument for config.pl.
Also, the new CMakeLists.txt target only works on Unix (which was already the
case for the Makefile target). Hopefully this is not an issue, as people are
unlikely to need that target on Windows.
See for example page 8 of
http://csrc.nist.gov/publications/nistpubs/800-38D/SP-800-38D.pdf
The previous constant probably came from a typo: it was 2^26 - 2^5 instead
of 2^36 - 2^5. Clearly the intention was to allow a value bigger than
2^32, as the ull suffix and the cast to uint64_t show.
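For reference, a sketch of the corrected limit (illustrative names, not the
exact gcm.c code; the limit itself is the 2^39 - 256 bits, i.e. 2^36 - 2^5
bytes, allowed by SP 800-38D):

    #include <stdint.h>

    /* 2^36 - 2^5 bytes, i.e. 0xFFFFFFFE0; the buggy constant was 2^26 - 2^5. */
    #define GCM_MAX_INPUT_BYTES  ( ( 1ULL << 36 ) - ( 1ULL << 5 ) )

    /* Illustrative check on the running total of processed input. */
    static int gcm_total_length_ok( uint64_t total_bytes )
    {
        return( total_bytes <= GCM_MAX_INPUT_BYTES );
    }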
fixes #362
Looking just at that test, it looks like 2 + dn_size could overflow. In
fact that can't happen, as it would mean we've read a CA cert whose size is too
big to be represented by a size_t.
However, it's best for the code to be more obviously free of overflow without
having to reason about the bigger picture.
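A generic sketch of the pattern (variable names follow the description above;
this shows the shape of the fix, not the exact diff):

    #include <stddef.h>

    /* Check that the output buffer [p, end) has room for a 2-byte length
     * field plus dn_size bytes of data.
     *
     * Potentially overflowing form ('2 + dn_size' can wrap if dn_size is
     * close to SIZE_MAX):
     *     if( (size_t)( end - p ) < 2 + dn_size )
     *         return error;
     *
     * Obviously overflow-free form: */
    static int has_room( const unsigned char *p, const unsigned char *end,
                         size_t dn_size )
    {
        return( end >= p &&
                (size_t)( end - p ) >= 2 &&
                (size_t)( end - p ) - 2 >= dn_size );
    }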
In case an entry with the given OID already exists in the list passed to
mbedtls_asn1_store_named_data() and there is not enough memory to allocate
room for the new value, the existing entry will be freed but the preceding
entry in the list will still hold a pointer to it. (And the following entries
in the list are no longer reachable.) This results in a memory leak or a double
free.
The issue is that we want to leave the list in a consistent state on allocation
failure. (We could add a warning that the list is left in an inconsistent state
when the function returns NULL, but behaviour changes that require more care
from the user are undesirable, especially in a stable branch.)
The chosen solution is a bit inefficient in that there is a window during which
both blocks are allocated, but at least it's safe, and safety should trump
efficiency here: this code is only used for generating certificates, which is
unlikely to be done on very constrained devices or to be in the critical loop of
anything. Also, the sizes involved should be fairly small anyway.
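A simplified sketch of that allocate-before-free approach (not the actual body
of mbedtls_asn1_store_named_data(), but the same idea):

    #include <string.h>
    #include "mbedtls/asn1.h"
    #include "mbedtls/platform.h"   /* mbedtls_calloc(), mbedtls_free() */

    /* Replace cur->val with a copy of (val, val_len). Allocate the new block
     * before freeing the old one, so that on allocation failure the existing
     * entry is left untouched and the list stays consistent; both blocks
     * briefly coexist, which is the inefficiency mentioned above. */
    static int replace_value( mbedtls_asn1_named_data *cur,
                              const unsigned char *val, size_t val_len )
    {
        unsigned char *new_p = mbedtls_calloc( 1, val_len );
        if( new_p == NULL )
            return( -1 );               /* old value is still valid */

        memcpy( new_p, val, val_len );
        mbedtls_free( cur->val.p );     /* only now release the old block */
        cur->val.p = new_p;
        cur->val.len = val_len;
        return( 0 );
    }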
fixes #367
When the peer retransmits a flight with many records in the same datagram, and
we have already seen one of the records in that datagram, we used to drop the
whole datagram, resulting in interoperability failures (spurious handshake
timeouts, due to ignoring records retransmitted by the peer) with some
implementations (issues with Chrome were reported).
So in those cases, we want to drop only the current record and look at the
following records (if any) in the same datagram. OTOH, this is not something
we always want to do, as sometimes the header of the current record is not
reliable enough.
This commit introduces a new return code for ssl_parse_header() that makes it
possible to distinguish whether we should drop only the current record or the
whole datagram, and uses it in mbedtls_ssl_read_record().
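A hedged sketch of the resulting dispatch (the names below are placeholders,
not the identifiers actually introduced by this commit):

    /* Possible outcomes of parsing one record header in a datagram. */
    enum header_result
    {
        HEADER_OK,             /* record looks valid: process it            */
        HEADER_SKIP_RECORD,    /* e.g. already-seen retransmission: skip it  */
        HEADER_DROP_DATAGRAM   /* header not reliable: drop everything left  */
    };

    /* Caller-side decision: keep reading the rest of the datagram? */
    static int keep_reading_datagram( enum header_result r )
    {
        switch( r )
        {
            case HEADER_OK:            return( 1 );  /* handle, then continue */
            case HEADER_SKIP_RECORD:   return( 1 );  /* ignore, then continue */
            case HEADER_DROP_DATAGRAM: return( 0 );  /* discard the datagram  */
        }
        return( 0 );
    }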
fixes #345
Remove the check on the pathLenConstraint value when looking for a parent for
the EE cert, as the constraint is on the number of intermediate certs below the
parent, and that number is always 0 at that point, so the constraint is always
satisfied.
The check was actually off-by-one, which caused valid chains to be rejected
under the following conditions:
- the parent certificate is not a trusted root, and
- it has pathLenConstraint == 0 (max_pathlen == 1 in our representation)
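A worked illustration (the helper below is hypothetical; the representation
follows the description above, with max_pathlen == 0 taken to mean that no
constraint is present):

    /* pathLenConstraint == n is stored as max_pathlen == n + 1;
     * max_pathlen == 0 means no constraint. The constraint bounds the
     * number of intermediate CAs below the parent. */
    static int pathlen_allows( int max_pathlen, int intermediates_below )
    {
        return( max_pathlen == 0 ||
                intermediates_below <= max_pathlen - 1 );
    }

    /* When looking for the EE cert's parent there are 0 intermediate CAs
     * below that parent, so pathlen_allows( 1, 0 ) holds: a parent with
     * pathLenConstraint == 0 (max_pathlen == 1) is a valid issuer, yet the
     * old off-by-one check rejected exactly this case. */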
fixes #280
* iotssl-519-asn1write-overflows-restricted:
Fix other int casts in bounds checking
Fix other occurrences of same bounds check issue
Fix potential buffer overflow in asn1write
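For context, a sketch of the class of bug these commits address (a generic
pattern rather than the exact asn1write diff): the ASN.1 writer fills the
buffer backwards from *p down towards start, and the old room checks cast a
size_t length to int, so a huge length could become negative and slip past
the check.

    #include <stddef.h>

    /* Buggy pattern: if len > INT_MAX, (int) len can be negative, the check
     * passes, and the subsequent write overflows the buffer:
     *     if( *p - start < (int) len )
     *         return( MBEDTLS_ERR_ASN1_BUF_TOO_SMALL );
     *
     * Safer pattern, using only unsigned comparisons: */
    static int buf_has_room( unsigned char *p, const unsigned char *start,
                             size_t len )
    {
        return( p >= start && (size_t)( p - start ) >= len );
    }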