The check is already effectively performed later in the function, but
only implicitly, so Clang's analysis fails to notice that the functions
are in fact safe. Pulling the check up to the top helps Clang verify the
behaviour.
Since the buffer is used in a few places, it seems Clang isn't clever
enough to realise that the first byte is never touched. So even though
the function has a correct NULL check for ssl->handshake, Clang
complains. Pulling the handshake type out into its own variable is
enough for Clang's analysis to kick in, though.
The function appears to be safe, since grow() is called with sensible
arguments in previous functions. Ideally Clang would be clever enough to
realise this. Even if N has size MBEDTLS_MPI_MAX_LIMBS, which will
cause the grow to fail, the affected lines in montmul won't be reached.
Having this sanity check can hardly hurt though.
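For illustration, a hedged sketch of what such a sanity check could look like
at the top of the Montgomery multiplication (the exact condition is a guess,
not taken from the actual change; T plays the role of the scratch variable
that grow() is expected to have sized):

    /* Hypothetical sanity check: fail cleanly if the scratch MPI was not
     * grown large enough, instead of relying on every caller having called
     * grow() with sensible arguments. */
    if( T->n < N->n + 1 || T->p == NULL )
        return( MBEDTLS_ERR_MPI_BAD_INPUT_DATA );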
Fix an issue that caused valid certificates to be rejected whenever an
expired or not yet valid version of the trusted certificate appeared before
the valid version in the trusted certificate list.
The callback typedefs defined for mbedtls_ssl_set_bio() and
mbedtls_ssl_set_timer_cb() were not used consistently where the callbacks were
referenced in structures or in code.
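For illustration, a hedged sketch of what consistent use looks like when
storing the BIO callbacks; the holder struct is hypothetical, but the
typedefs are the ones mbedtls_ssl_set_bio() is declared with (assuming they
are plain function types, as in current ssl.h):

    #include "mbedtls/ssl.h"

    /* Hypothetical holder: the fields reuse the public callback typedefs
     * instead of spelling out the raw function-pointer types again. */
    struct bio_callbacks
    {
        void                       *p_bio;
        mbedtls_ssl_send_t         *f_send;
        mbedtls_ssl_recv_t         *f_recv;
        mbedtls_ssl_recv_timeout_t *f_recv_timeout;
    };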
- basicConstraints checks are done during verification
- there is no need to set extensions that are not present to default values,
as the code using the extension will check whether it was present using
ext_types, as illustrated in the sketch after this list. (And default values
would not make sense anyway.)
- document why we made that choice
- remove the two TODOs about checking hash and CA
- remove the code that parsed certificate_type: it did nothing except store
the selected type in handshake->cert_type, but that field was never accessed
afterwards. Since handshake_params is now an internal type, we can remove that
field without breaking the ABI.
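As an illustration of the ext_types point above, a hedged sketch of the
presence check (the surrounding policy code is hypothetical; the bitmask and
fields are from mbedtls_x509_crt):

    /* Only consult basicConstraints-derived fields if the extension was
     * actually present; absent extensions are simply never read, so no
     * default values are needed. */
    if( crt->ext_types & MBEDTLS_X509_EXT_BASIC_CONSTRAINTS )
    {
        if( crt->ca_istrue == 0 )
            return( -1 );   /* hypothetical policy decision */
    }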
We don't implement anonymous key exchanges, and we don't intend to, so it can
never happen that an unauthenticated server requests a certificate from us.
After the record contents are decompressed, in_len is no longer accessed
directly; only in_msglen is accessed. in_len is only read by
ssl_parse_record_header(), which happens before ssl_prepare_record_contents().
This is also made clear by the fact that in_len is not updated after
decryption anyway, so if it were accessed after that point it would give
wrong results whenever decryption is in use; since no such problems occur,
this shows in_len is not accessed there.
Previously it was failing with errors about headers not being found, which is
suboptimal in terms of clarity. Now give a clean error with a pointer to the
documentation.
Do the checks in the .c files rather than in check_config.h, as this keeps
them closer to the platform-specific implementations.
It is used only by `mbedtls_sha512_process()`, and in case `MBEDTLS_SHA512_PROCESS_ALT` is defined, it still cannot be reused because of its `static` declaration.
armar doesn't understand the syntax without a dash. OTOH, the syntax with a
dash is the only one specified by POSIX, and it's accepted by GNU ar, BSD ar
(as bundled with OS X) and armar, so it looks like the most portable syntax.
fixes #386
* yanesca/iss309:
Improved on the previous fix and added a test case to cover both types of carries.
Removed recursion from fix #309.
Improved on the fix of #309 and extended the test to cover subroutines.
Tests and fix added for #309 (inplace mpi doubling).
On x32 systems, pointers are 4 bytes wide and are therefore stored in %e?x
registers (instead of %r?x registers). These registers must be accessed using
"addl" instead of "addq"; however, the GNU assembler will accept the generic
"add" instruction and determine the correct opcode based on the registers
passed to it.
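For illustration, a minimal toy example (not taken from the bignum code) of
GNU extended asm where the suffix-less "add" lets the assembler pick addl or
addq based on the operands:

    #include <stddef.h>

    /* With size_t operands, the generic "add" assembles to addq on plain
     * x86-64 and to addl on x32, where size_t and pointers are 32 bits. */
    static size_t add_offset( size_t base, size_t offset )
    {
        __asm__( "add %1, %0"   /* no l/q suffix: assembler picks the opcode */
                 : "+r" ( base )
                 : "r" ( offset ) );
        return( base );
    }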
See for example page 8 of
http://csrc.nist.gov/publications/nistpubs/800-38D/SP-800-38D.pdf
The previous constant probably came from a typo, as it was 2^26 - 2^5 instead
of 2^36 - 2^5. Clearly the intention was to allow for a constant bigger than
2^32, as the ull suffix and cast to uint64_t show.
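For reference, a hedged sketch of the corrected limit (the macro name is
hypothetical): SP 800-38D caps the GCM plaintext at 2^39 - 256 bits, i.e.
2^36 - 2^5 bytes, which indeed needs more than 32 bits:

    #include <stdint.h>

    /* 2^36 - 2^5 bytes == 0xFFFFFFFE0: does not fit in 32 bits, hence the
     * ull suffix; the old value 2^26 - 2^5 would have fit. */
    #define GCM_MAX_INPUT_BYTES  ( ( 1ull << 36 ) - ( 1ull << 5 ) )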
fixes #362
By looking just at that test, it looks like 2 + dn_size could overflow. In
fact that can't happen, as that would mean we've read a CA cert whose size is
too big to be represented by a size_t.
However, it's best for code to be more obviously free of overflow without
having to reason about the bigger picture.
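A hedged sketch of a check that is obviously overflow-free (names
hypothetical, not the actual code): compare dn_size against the remaining
space instead of forming 2 + dn_size:

    #include <stddef.h>

    /* The remaining space is computed first, so no addition involving the
     * untrusted dn_size can wrap around. */
    static int dn_fits( const unsigned char *p, const unsigned char *end,
                        size_t dn_size )
    {
        size_t remaining = (size_t)( end - p );

        return( remaining >= 2 && dn_size <= remaining - 2 );
    }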
In case an entry with the given OID already exists in the list passed to
mbedtls_asn1_store_named_data() and there is not enough memory to allocate
room for the new value, the existing entry will be freed but the preceding
entry in the list will still hold a pointer to it. (And the following entries
in the list are no longer reachable.) This results in a memory leak or a
double free.
The issue is we want to leave the list in a consistent state on allocation
failure. (We could add a warning that the list is left in inconsistent state
when the function returns NULL, but behaviour changes that require more care
from the user are undesirable, especially in a stable branch.)
The chosen solution is a bit inefficient in that there is a window during
which both blocks are allocated, but at least it's safe and this should trump
efficiency here: this code is only used for generating certificates, which is
unlikely to be done on very constrained devices, or to be in the critical
loop of anything. Also, the sizes involved should be fairly small anyway.
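A hedged sketch of the allocate-before-free pattern described above
(simplified; the val fields are those of mbedtls_asn1_named_data, the helper
itself is hypothetical):

    #include <string.h>
    #include "mbedtls/asn1.h"
    #include "mbedtls/platform.h"

    /* Allocate the replacement buffer first: if that fails, the existing
     * entry is left untouched and the list stays consistent. The old block
     * is freed only once the new one is safely in place. */
    static int replace_value( mbedtls_asn1_named_data *cur,
                              const unsigned char *val, size_t val_len )
    {
        unsigned char *new_val = mbedtls_calloc( 1, val_len );

        if( new_val == NULL )
            return( -1 );

        memcpy( new_val, val, val_len );

        mbedtls_free( cur->val.p );
        cur->val.p   = new_val;
        cur->val.len = val_len;

        return( 0 );
    }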
fixes #367
When the peer retransmits a flight with many records in the same datagram, and
we have already seen one of the records in that datagram, we used to drop the
whole datagram, resulting in interoperability failures (spurious handshake
timeouts, due to ignoring records retransmitted by the peer) with some
implementations (issues with Chrome were reported).
So in those cases we want to drop only the current record, and look at the
following records (if any) in the same datagram. OTOH, this is not something
we always want to do, as sometimes the header of the current record is not
reliable enough.
This commit introduces a new return code for ssl_parse_header() that allows
distinguishing whether we should drop only the current record or the whole
datagram, and uses it in mbedtls_ssl_read_record()
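A hedged toy model of that policy (names hypothetical, not the actual
mbed TLS internals):

    #include <stddef.h>

    enum header_result { HDR_OK, HDR_UNEXPECTED_RECORD, HDR_UNRELIABLE };

    /* How much of the datagram to discard: nothing when the record should be
     * processed, just this record when its header is readable but the record
     * is unexpected, or everything when the header itself can't be trusted. */
    static size_t bytes_to_discard( enum header_result res,
                                    size_t record_len, size_t datagram_left )
    {
        switch( res )
        {
            case HDR_OK:                return( 0 );
            case HDR_UNEXPECTED_RECORD: return( record_len );
            default:                    return( datagram_left );
        }
    }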
fixes #345
Remove check on the pathLenConstraint value when looking for a parent to the
EE cert, as the constraint is on the number of intermediate certs below the
parent, and that number is always 0 at that point, so the constraint is always
satisfied.
The check was actually off-by-one, which caused valid chains to be rejected
under the following conditions:
- the parent certificate is not a trusted root, and
- it has pathLenConstraint == 0 (max_pathlen == 1 in our representation)
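A hedged sketch of the relation in question, using our representation
mentioned above (max_pathlen stores pathLenConstraint + 1, and 0 means no
constraint); for the EE cert's direct parent the number of intermediates
below it is 0, so the predicate is always true there:

    /* Non-zero when the parent's pathLenConstraint allows a chain with the
     * given number of intermediate certificates below that parent. */
    static int pathlen_ok( int max_pathlen, int intermediates_below )
    {
        return( max_pathlen == 0 ||          /* no constraint present */
                max_pathlen > intermediates_below );
    }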
fixes #280
* iotssl-519-asn1write-overflows-restricted:
Fix other int casts in bounds checking
Fix other occurrences of same bounds check issue
Fix potential buffer overflow in asn1write
* iotssl-515-max-pathlen:
Add Changelog entries for this branch
Fix a style issue
Fix whitespace at EOL issues
Use symbolic constants in test data
Fixed pathlen constraint enforcement.
Additional corner cases for testing pathlen constraints. Just in case.
Added test case for pathlen constraints in intermediate certificates
fixes #310
Actually all key exchanges that use a certificate use signatures too, and
there is no key exchange that uses signatures but no cert, so merge those two
flags.
Not a security issue as here we know the buffer is large enough (unless
something else is badly wrong in the code), and the value cast to int is less
than 2^16 (again, unless issues elsewhere).
Still changing to a more correct check as a matter of principle.
Two causes:
- the buffer is too short (missing 4 bytes for encoding id_len)
- the test was wrong
Would only happen when MBEDTLS_ECP_MAX_BITS is equal to the bit size of the
curve actually used (does not happen in the default config).
Could not be triggered remotely.
* development: (73 commits)
Bump yotta dependencies version
Fix typo in documentation
Corrected misleading fn description in ssl_cache.h
Corrected URL/reference to MPI library
Fix yotta dependencies
Fix minor spelling mistake in programs/pkey/gen_key.c
Bump version to 2.1.2
Fix CVE number in ChangeLog
Add 'inline' workaround where needed
Fix references to non-standard SIZE_T_MAX
Fix yotta version dependencies again
Upgrade yotta dependency versions
Fix compile error in net.c with musl libc
Add missing warning in doc
Remove inline workaround when not useful
Fix macroization of inline in C++
Changed attribution for Guido Vranken
Merge of IOTSSL-476 - Random malloc in pem_read()
Fix for IOTSSL-473 Double free error
Fix potential overflow in CertificateRequest
...
Conflicts:
include/mbedtls/ssl_internal.h
library/ssl_cli.c
- "master secret" is the usual name
- move key block arg closer to the related lengths
- document lengths
Also fix some trailing whitespace while at it
In BER encoding, any boolean with a non-zero value is considered TRUE.
However, DER encoding requires a value of 255 (0xFF) for TRUE.
This commit makes the `mbedtls_asn1_write_bool` function use `255` instead
of `1` for BOOLEAN values.
With this fix, boolean values are now recognized by the OS X keychain (tested
on OS X 10.11).
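A hedged sketch of the change (simplified; the asn1write module writes
backwards into the buffer):

    /* DER requires BOOLEAN TRUE to be the single content octet 0xFF; BER
     * accepts any non-zero octet, which is why 1 used to work with lenient
     * parsers but not with stricter ones. */
    static void write_bool_content( unsigned char **p, int boolean )
    {
        *--( *p ) = ( boolean ) ? 255 : 0;   /* was: ( boolean ) ? 1 : 0 */
    }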
Fixes #318.
Two possible integer overflows (during << 2 or addition in BITS_TO_LIMB())
could result in far too little memory being allocated, then overflowing the
buffer in the subsequent for loop.
Both integer overflows happen when slen is close to or greater than
SIZE_T_MAX >> 2 (i.e. 2^30 on a 32-bit system).
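A worked illustration of the wrap-around on a 32-bit system (toy code, not
the fix itself):

    #include <stdint.h>
    #include <stdio.h>

    int main( void )
    {
        /* slen = 2^30 hex digits should translate into 2^32 bits, but on a
         * 32-bit size_t the shift wraps around to 0, so
         * BITS_TO_LIMB( slen << 2 ) would request far too few limbs while
         * the copy loop still runs slen times. */
        uint32_t slen = 1ul << 30;
        uint32_t bits = slen << 2;   /* wraps to 0 in 32-bit arithmetic */

        printf( "bits after wrap-around: %lu\n", (unsigned long) bits );
        return( 0 );
    }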
Note: one could also avoid those overflows by changing BITS_TO_LIMB(s << 2) to
CHARS_TO_LIMB(s >> 1) but the solution implemented looks more robust with
respect to future code changes.
Found by Guido Vranken.
This extension is quite costly to generate, and we don't want to redo that
work when the server performs a DTLS HelloVerify. So, cache the result the
first time and re-use it if/when we build a new ClientHello.
Note: re-sends due to timeouts are different, as the whole message is cached
already, so they don't need any special support.
This bug becomes noticeable when the extension following the "supported point
formats" extension has a number starting with 0x01, which is the case of the
EC J-PAKE extension; this explains why I noticed the bug now.
This will be immediately backported to the stable branches,
see the corresponding commits for impact analysis.
This is more consistent, as it doesn't make any sense for a user to be able to
set up an EC J-PAKE password with TLS if the corresponding key exchange is
disabled.
Arguably this is what we should do for other key exchanges as well instead of
depending on ECDH_C etc, but this is an independent issue, so let's just do
the right thing with the new key exchange and fix the other ones later. (This
is a marginal issue anyway, since people who disable all ECDH key exchanges
are likely to also disable ECDH_C in order to minimize footprint.)
When we don't have a password, we want to skip the costly process of
generating the extension. So for consistency don't offer the ciphersuite
without the extension.
There is only one length byte but for some reason we skipped two, resulting in
reading one byte past the end of the extension. Fortunately, even if that
extension is at the very end of the ClientHello, it can't be at the end of the
buffer since the ClientHello length is at most SSL_MAX_CONTENT_LEN and the
buffer has some more room after that for MAC and so on. So there is no
buffer overread.
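A hedged sketch of the fix (simplified, not the full parsing function): the
extension body is a single length byte followed by the list, so the list
starts at buf + 1, not buf + 2:

    #include <stddef.h>

    /* Returns 0 if the uncompressed point format (value 0) is offered. */
    static int find_uncompressed_format( const unsigned char *buf, size_t len )
    {
        size_t list_size;
        const unsigned char *p;

        if( len < 1 || (size_t) buf[0] + 1 != len )
            return( -1 );

        list_size = buf[0];
        p = buf + 1;            /* was buf + 2: read one byte past the list */

        while( list_size > 0 )
        {
            if( *p == 0 )
                return( 0 );
            p++;
            list_size--;
        }

        return( -1 );
    }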
Possible consequences are:
- nothing, if the next byte is 0x00, which is a common first byte for other
extensions, which is why the bug remained unnoticed
- using a point format that was not offered by the peer if the next byte is
0x01. In that case the peer will reject our ServerKeyExchange message and the
handshake will fail.
- thinking that we don't have a common point format even if we do, which will
cause us to immediately abort the handshake.
None of these are a security issue.
The same bug was fixed client-side in fd35af15
The Thread spec says we need those for EC J-PAKE too.
However, we won't be using the information, so we can skip the parsing
functions in an EC J-PAKE only config; keep the writing functions in order to
comply with the spec.
Especially for resumed handshakes, it's entirely possible for an epoch=0
ClientHello to be retransmitted or to arrive so late that the server is
already at epoch=1. There is no good way to detect whether it's that or a
reconnect.
However:
- a late ClientHello seems more likely than the client going down and then up
again in the middle of a handshake
- even if that's the case, we'll time out on that handshake soon enough
- we don't want to break handshake flows that used to work
So the safest option is to not treat that as a reconnect.
Don't depend on srv.c in config.h, but add explicit checks. This is more
in line with other options that only make sense server-side, and it also
makes it easier to test the full config minus srv.c.
Use a custom function that minimally parses the message and creates a reply
without the overhead of a full SSL context.
Also fix dependencies: this needs DTLS_HELLO_VERIFY for the cookie types, and
let's also depend on SRV_C as it doesn't make sense on the client.
I'm not sure this is necessary, because it is only multiplied by xm2, which is
already random and secret, but OTOH xm2 is related to a public value, so
let's add blinding with a random value that's only used for blinding, just to
be extra sure.
- the reference handshake tests that we get the right values (not much now,
but much more later when we get to deriving the PMS)
- the random handshake additionally tests our generate/write functions
against our read functions, which are tested by the reference handshake and
will be further tested later in the test suite against invalid inputs
This helps in the case where an intermediate certificate is directly trusted.
In that case we want to ignore what comes after it in the chain, not only for
performance but also to avoid false negatives (eg an old root being no longer
trusted while the newer intermediate is directly trusted).
closes #220
Once the mutex is acquired, we must goto cleanup rather than return.
Since cleanup adjusts the return value, adjust that in the test cases.
Also, at cleanup we don't want to overwrite 'ret', or we'll lose track of
errors.
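A hedged toy of the pattern (the context and 'work' are placeholders;
requires MBEDTLS_THREADING_C):

    #include "mbedtls/threading.h"

    static int do_work_locked( mbedtls_threading_mutex_t *mutex,
                               int simulate_error )
    {
        int ret;

        if( ( ret = mbedtls_mutex_lock( mutex ) ) != 0 )
            return( ret );               /* nothing to clean up yet */

        if( simulate_error )             /* stands in for the real work */
        {
            ret = -1;
            goto cleanup;                /* NOT 'return': the mutex is held */
        }

        ret = 0;

    cleanup:
        /* don't clobber an earlier error in 'ret' while unlocking */
        if( mbedtls_mutex_unlock( mutex ) != 0 && ret == 0 )
            ret = MBEDTLS_ERR_THREADING_MUTEX_ERROR;

        return( ret );
    }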
see #257
- allow up to 12.5% security/error margin
- use larger delays
- this avoids the security/error margin being too low
The test used to fail about 1 out of 6 times on some buildbot VMs, but never
failed on the physical machines used for development.
This is neither required nor recommended by the protocol, and it's a layering
violation, but it's a known flaw in the protocol that you can't detect a PSK
auth error in any other way, so it is probably the right thing to do.
closes #227
This is particularly problematic when calling FD_SET( -1, ... ), but let's
check it in all functions.
This was introduced with the new API and the fact that net_free() now sets
the internal fd to -1 in order to mark it as closed: we now use this
information.
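A hedged sketch of the check (simplified; the error code is the one from
net.h):

    #include "mbedtls/net.h"

    /* Reject a closed or invalid descriptor up front, now that
     * mbedtls_net_free() marks the context with fd == -1, so -1 can never
     * reach FD_SET() further down. */
    static int check_fd( const mbedtls_net_context *ctx )
    {
        if( ctx->fd < 0 )
            return( MBEDTLS_ERR_NET_INVALID_CONTEXT );

        return( 0 );
    }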
Assume we have two trusted CAs with the same name, the first using ECDSA 256
bits, the second RSA 2048; the cert is signed by the second. If we do the key
size check before we check that the key types match, we'll raise the badkey
flag when checking the EC-256 CA, and it will remain up even when we finally
find the correct CA. So, move the check for the key size after signature
verification, which implicitly checks the key type.
When we build with Visual Studio in debug mode, the invalid parameter handler
aborts the application (and offers to debug it) when n is 0. We want to
just return -1 instead (as calls with n == 0 are expected and happen in our
tests).
We document that either of recv or recv_timeout may be NULL, but for TLS we
always used recv... Thanks to Coverity for catching that.
(Not remotely triggerable: local configuration.)
Also made me notice that net_recv_timeout didn't do its job properly.
May happen with a faulty configuration (eg no allowed curve but trying to use
an ECDHE key exchange), but not triggerable remotely.
(Found with Clang's scan-build.)
Due to the recent change regarding entropy source strength, it is no longer
acceptable to just disable the platform source. So, instead "fix" it so that
it is clear to MemSan that the memory is initialized.
I tried __attribute__((no_sanitize_memory)) and MemSan's blacklist file, but
couldn't seem to get them to work.
While at it, fix the following:
- on server with RSA_PSK, we don't want to set flags (client auth happens via
the PSK, no cert is expected).
- use safer tests (eg == OPTIONAL vs != REQUIRED)
Just applying rename.pl with this file:
mbedtls_cipher_get_key_size mbedtls_cipher_get_key_bitlen
mbedtls_pk_get_size mbedtls_pk_get_bitlen
MBEDTLS_BLOWFISH_MIN_KEY MBEDTLS_BLOWFISH_MIN_KEY_BITS
MBEDTLS_BLOWFISH_MAX_KEY MBEDTLS_BLOWFISH_MAX_KEY_BITS
- allows expressing 'none' or 'all' more easily than lists
- more compact and easier to declare statically
- easier to check too
Only drawback: if we ever have more than 32 curves, we'll need an ABI change to
make that field a uint64_t.
This causes a compile-time error:
platform.c(157): error: #147: declaration is incompatible with "void (*polarssl_exit)(int)" (declared at line 179 of "platform.h")
* mbedtls-1.3:
Mark unused constant as such
Update ChangeLog for recent external bugfix
Serious bug fix in entropy.c
Fix memleak with repeated [gc]cm_setkey()
fix minor bug in path_cnt checks
Conflicts:
include/mbedtls/cipher.h
library/ccm.c
library/entropy.c
library/gcm.c
library/x509_crt.c
* gmtime_r is not standard, so -std=c99 warns about it
* Anyway we need global mutexes in the threading layer, so better to depend
only on that, rather than on global mutexes + some _r functions
Adjust the size/performance trade-off:
* Reduces size of sha256_process() from 7.4KB to 2KB on ARMv7-M
* Reduces performance by less than 14% on Cortex-M4
* Seems to even improve performance on my Core i7
- DTLS_HELLO_VERIFY no longer depends on SRV_C
- SSL_COOKIE_C no longer depends on DTLS_HELLO_VERIFY
Not that much work for us, and easier on users (esp. since it allows
disabling SRV_C on its own).
- Only the server needs to generate/parse tickets
- Only the client needs to store them
Also adjust prototype of ssl_conf_session_tickets() while at it.
If the top certificate occurs twice in trust_ca (for example) it would
not be good for the second instance to be checked with check_path_cnt
reduced twice!
Initially I thought it would be per-connection, but since max_version is in
conf too, and you need to lower that for a fallback connection, the fallback
flag should be in the same place.