ECDH: Add Everest Curve25519 to 3rdparty/everest

These files are automatically generated by the Everest toolchain from F*
files. They do not respect the mbedTLS code style guidelines, because manual
modification would invalidate the verification guarantees. The files in
3rdparty/everest/include/kremli{n,b} are a customized (minimized) version of
the support headers expected by the code extracted using KreMLin.
Author: Christoph M. Wintersteiger, 2018-10-25 12:32:07 +01:00
Committed by: Janos Follath
parent 89e7655691
commit bee486146e
18 changed files with 2669 additions and 0 deletions

3rdparty/everest/README.md vendored Normal file (1 addition)

@@ -0,0 +1 @@
The files in this directory stem from [Project Everest](https://project-everest.github.io/) and are distributed under the Apache 2.0 license.

3rdparty/everest/apache-2.0.txt vendored Normal file (202 additions)

@@ -0,0 +1,202 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright [yyyy] [name of copyright owner]
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

@@ -0,0 +1,21 @@
/* Copyright (c) INRIA and Microsoft Corporation. All rights reserved.
Licensed under the Apache 2.0 License. */
/* This file was generated by KreMLin <https://github.com/FStarLang/kremlin>
* KreMLin invocation: /mnt/e/everest/verify/kremlin/krml -fc89 -fparentheses -fno-shadow -header /mnt/e/everest/verify/hdrcLh -minimal -fbuiltin-uint128 -fc89 -fparentheses -fno-shadow -header /mnt/e/everest/verify/hdrcLh -minimal -I /mnt/e/everest/verify/hacl-star/code/lib/kremlin -I /mnt/e/everest/verify/kremlin/kremlib/compat -I /mnt/e/everest/verify/hacl-star/specs -I /mnt/e/everest/verify/hacl-star/specs/old -I . -ccopt -march=native -verbose -ldopt -flto -tmpdir x25519-c -I ../bignum -bundle Hacl.Curve25519=* -minimal -add-include "kremlib.h" -skip-compilation x25519-c/out.krml -o x25519-c/Hacl_Curve25519.c
* F* version: 059db0c8
* KreMLin version: 916c37ac
*/
#ifndef __Hacl_Curve25519_H
#define __Hacl_Curve25519_H
#include "kremlib.h"
void Hacl_Curve25519_crypto_scalarmult(uint8_t *mypublic, uint8_t *secret, uint8_t *basepoint);
#define __Hacl_Curve25519_H_DEFINED
#endif
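
The exported API is a single X25519 scalar multiplication over 32-byte
buffers. A minimal usage sketch, assuming only the declaration above (the
wrapper function and its name are hypothetical; the base point u = 9 is from
RFC 7748):

#include <stdint.h>
#include "Hacl_Curve25519.h"

/* Hypothetical helper: derive a public key from a 32-byte secret. */
static void derive_public_key(uint8_t public_key[32], uint8_t secret[32])
{
    uint8_t basepoint[32] = { 9 }; /* X25519 base point, RFC 7748 */
    Hacl_Curve25519_crypto_scalarmult(public_key, secret, basepoint);
}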

@@ -0,0 +1,29 @@
/*
* Copyright 2016-2018 INRIA and Microsoft Corporation
*
* SPDX-License-Identifier: Apache-2.0
*
* Licensed under the Apache License, Version 2.0 (the "License"); you may
* not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
* WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*
* This file is part of Mbed TLS (https://tls.mbed.org) and
* originated from Project Everest (https://project-everest.github.io/)
*/
#ifndef __KREMLIB_H
#define __KREMLIB_H
#include "kremlin/internal/target.h"
#include "kremlin/internal/types.h"
#include "kremlin/c_endianness.h"
#endif /* __KREMLIB_H */

@@ -0,0 +1,124 @@
/* Copyright (c) INRIA and Microsoft Corporation. All rights reserved.
Licensed under the Apache 2.0 License. */
/* This file was generated by KreMLin <https://github.com/FStarLang/kremlin>
* KreMLin invocation: ../krml -fc89 -fparentheses -fno-shadow -header /mnt/e/everest/verify/hdrB9w -minimal -fparentheses -fcurly-braces -fno-shadow -header copyright-header.txt -minimal -tmpdir dist/uint128 -skip-compilation -extract-uints -add-include <inttypes.h> -add-include <stdbool.h> -add-include "kremlin/internal/types.h" -bundle FStar.UInt128=* extracted/prims.krml extracted/FStar_Pervasives_Native.krml extracted/FStar_Pervasives.krml extracted/FStar_Mul.krml extracted/FStar_Squash.krml extracted/FStar_Classical.krml extracted/FStar_StrongExcludedMiddle.krml extracted/FStar_FunctionalExtensionality.krml extracted/FStar_List_Tot_Base.krml extracted/FStar_List_Tot_Properties.krml extracted/FStar_List_Tot.krml extracted/FStar_Seq_Base.krml extracted/FStar_Seq_Properties.krml extracted/FStar_Seq.krml extracted/FStar_Math_Lib.krml extracted/FStar_Math_Lemmas.krml extracted/FStar_BitVector.krml extracted/FStar_UInt.krml extracted/FStar_UInt32.krml extracted/FStar_Int.krml extracted/FStar_Int16.krml extracted/FStar_Preorder.krml extracted/FStar_Ghost.krml extracted/FStar_ErasedLogic.krml extracted/FStar_UInt64.krml extracted/FStar_Set.krml extracted/FStar_PropositionalExtensionality.krml extracted/FStar_PredicateExtensionality.krml extracted/FStar_TSet.krml extracted/FStar_Monotonic_Heap.krml extracted/FStar_Heap.krml extracted/FStar_Map.krml extracted/FStar_Monotonic_HyperHeap.krml extracted/FStar_Monotonic_HyperStack.krml extracted/FStar_HyperStack.krml extracted/FStar_Monotonic_Witnessed.krml extracted/FStar_HyperStack_ST.krml extracted/FStar_HyperStack_All.krml extracted/FStar_Date.krml extracted/FStar_Universe.krml extracted/FStar_GSet.krml extracted/FStar_ModifiesGen.krml extracted/LowStar_Monotonic_Buffer.krml extracted/LowStar_Buffer.krml extracted/Spec_Loops.krml extracted/LowStar_BufferOps.krml extracted/C_Loops.krml extracted/FStar_UInt8.krml extracted/FStar_Kremlin_Endianness.krml extracted/FStar_UInt63.krml extracted/FStar_Exn.krml extracted/FStar_ST.krml extracted/FStar_All.krml extracted/FStar_Dyn.krml extracted/FStar_Int63.krml extracted/FStar_Int64.krml extracted/FStar_Int32.krml extracted/FStar_Int8.krml extracted/FStar_UInt16.krml extracted/FStar_Int_Cast.krml extracted/FStar_UInt128.krml extracted/C_Endianness.krml extracted/FStar_List.krml extracted/FStar_Float.krml extracted/FStar_IO.krml extracted/C.krml extracted/FStar_Char.krml extracted/FStar_String.krml extracted/LowStar_Modifies.krml extracted/C_String.krml extracted/FStar_Bytes.krml extracted/FStar_HyperStack_IO.krml extracted/C_Failure.krml extracted/TestLib.krml extracted/FStar_Int_Cast_Full.krml
* F* version: 059db0c8
* KreMLin version: 916c37ac
*/
#ifndef __FStar_UInt128_H
#define __FStar_UInt128_H
#include <inttypes.h>
#include <stdbool.h>
#include "kremlin/internal/types.h"
uint64_t FStar_UInt128___proj__Mkuint128__item__low(FStar_UInt128_uint128 projectee);
uint64_t FStar_UInt128___proj__Mkuint128__item__high(FStar_UInt128_uint128 projectee);
typedef FStar_UInt128_uint128 FStar_UInt128_t;
FStar_UInt128_uint128 FStar_UInt128_add(FStar_UInt128_uint128 a, FStar_UInt128_uint128 b);
FStar_UInt128_uint128
FStar_UInt128_add_underspec(FStar_UInt128_uint128 a, FStar_UInt128_uint128 b);
FStar_UInt128_uint128 FStar_UInt128_add_mod(FStar_UInt128_uint128 a, FStar_UInt128_uint128 b);
FStar_UInt128_uint128 FStar_UInt128_sub(FStar_UInt128_uint128 a, FStar_UInt128_uint128 b);
FStar_UInt128_uint128
FStar_UInt128_sub_underspec(FStar_UInt128_uint128 a, FStar_UInt128_uint128 b);
FStar_UInt128_uint128 FStar_UInt128_sub_mod(FStar_UInt128_uint128 a, FStar_UInt128_uint128 b);
FStar_UInt128_uint128 FStar_UInt128_logand(FStar_UInt128_uint128 a, FStar_UInt128_uint128 b);
FStar_UInt128_uint128 FStar_UInt128_logxor(FStar_UInt128_uint128 a, FStar_UInt128_uint128 b);
FStar_UInt128_uint128 FStar_UInt128_logor(FStar_UInt128_uint128 a, FStar_UInt128_uint128 b);
FStar_UInt128_uint128 FStar_UInt128_lognot(FStar_UInt128_uint128 a);
FStar_UInt128_uint128 FStar_UInt128_shift_left(FStar_UInt128_uint128 a, uint32_t s);
FStar_UInt128_uint128 FStar_UInt128_shift_right(FStar_UInt128_uint128 a, uint32_t s);
bool FStar_UInt128_eq(FStar_UInt128_uint128 a, FStar_UInt128_uint128 b);
bool FStar_UInt128_gt(FStar_UInt128_uint128 a, FStar_UInt128_uint128 b);
bool FStar_UInt128_lt(FStar_UInt128_uint128 a, FStar_UInt128_uint128 b);
bool FStar_UInt128_gte(FStar_UInt128_uint128 a, FStar_UInt128_uint128 b);
bool FStar_UInt128_lte(FStar_UInt128_uint128 a, FStar_UInt128_uint128 b);
FStar_UInt128_uint128 FStar_UInt128_eq_mask(FStar_UInt128_uint128 a, FStar_UInt128_uint128 b);
FStar_UInt128_uint128 FStar_UInt128_gte_mask(FStar_UInt128_uint128 a, FStar_UInt128_uint128 b);
FStar_UInt128_uint128 FStar_UInt128_uint64_to_uint128(uint64_t a);
uint64_t FStar_UInt128_uint128_to_uint64(FStar_UInt128_uint128 a);
extern FStar_UInt128_uint128
(*FStar_UInt128_op_Plus_Hat)(FStar_UInt128_uint128 x0, FStar_UInt128_uint128 x1);
extern FStar_UInt128_uint128
(*FStar_UInt128_op_Plus_Question_Hat)(FStar_UInt128_uint128 x0, FStar_UInt128_uint128 x1);
extern FStar_UInt128_uint128
(*FStar_UInt128_op_Plus_Percent_Hat)(FStar_UInt128_uint128 x0, FStar_UInt128_uint128 x1);
extern FStar_UInt128_uint128
(*FStar_UInt128_op_Subtraction_Hat)(FStar_UInt128_uint128 x0, FStar_UInt128_uint128 x1);
extern FStar_UInt128_uint128
(*FStar_UInt128_op_Subtraction_Question_Hat)(
FStar_UInt128_uint128 x0,
FStar_UInt128_uint128 x1
);
extern FStar_UInt128_uint128
(*FStar_UInt128_op_Subtraction_Percent_Hat)(FStar_UInt128_uint128 x0, FStar_UInt128_uint128 x1);
extern FStar_UInt128_uint128
(*FStar_UInt128_op_Amp_Hat)(FStar_UInt128_uint128 x0, FStar_UInt128_uint128 x1);
extern FStar_UInt128_uint128
(*FStar_UInt128_op_Hat_Hat)(FStar_UInt128_uint128 x0, FStar_UInt128_uint128 x1);
extern FStar_UInt128_uint128
(*FStar_UInt128_op_Bar_Hat)(FStar_UInt128_uint128 x0, FStar_UInt128_uint128 x1);
extern FStar_UInt128_uint128
(*FStar_UInt128_op_Less_Less_Hat)(FStar_UInt128_uint128 x0, uint32_t x1);
extern FStar_UInt128_uint128
(*FStar_UInt128_op_Greater_Greater_Hat)(FStar_UInt128_uint128 x0, uint32_t x1);
extern bool (*FStar_UInt128_op_Equals_Hat)(FStar_UInt128_uint128 x0, FStar_UInt128_uint128 x1);
extern bool
(*FStar_UInt128_op_Greater_Hat)(FStar_UInt128_uint128 x0, FStar_UInt128_uint128 x1);
extern bool (*FStar_UInt128_op_Less_Hat)(FStar_UInt128_uint128 x0, FStar_UInt128_uint128 x1);
extern bool
(*FStar_UInt128_op_Greater_Equals_Hat)(FStar_UInt128_uint128 x0, FStar_UInt128_uint128 x1);
extern bool
(*FStar_UInt128_op_Less_Equals_Hat)(FStar_UInt128_uint128 x0, FStar_UInt128_uint128 x1);
FStar_UInt128_uint128 FStar_UInt128_mul32(uint64_t x, uint32_t y);
FStar_UInt128_uint128 FStar_UInt128_mul_wide(uint64_t x, uint64_t y);
#define __FStar_UInt128_H_DEFINED
#endif
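
A small sketch of how a caller can use the wide-multiply helpers declared
above to split a 64x64-bit product (the function and variable names are
illustrative; only declarations from this header are used):

/* Multiply two 64-bit limbs and split the 128-bit product. */
static void mul64_split(uint64_t a, uint64_t b, uint64_t *hi, uint64_t *lo)
{
    FStar_UInt128_uint128 p = FStar_UInt128_mul_wide(a, b);
    *lo = FStar_UInt128_uint128_to_uint64(p);
    *hi = FStar_UInt128_uint128_to_uint64(
              FStar_UInt128_shift_right(p, (uint32_t)64U));
}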

@@ -0,0 +1,280 @@
/* Copyright (c) INRIA and Microsoft Corporation. All rights reserved.
Licensed under the Apache 2.0 License. */
/* This file was generated by KreMLin <https://github.com/FStarLang/kremlin>
* KreMLin invocation: ../krml -fc89 -fparentheses -fno-shadow -header /mnt/e/everest/verify/hdrB9w -minimal -fparentheses -fcurly-braces -fno-shadow -header copyright-header.txt -minimal -tmpdir dist/minimal -skip-compilation -extract-uints -add-include <inttypes.h> -add-include <stdbool.h> -add-include "kremlin/internal/compat.h" -add-include "kremlin/internal/types.h" -bundle FStar.UInt64+FStar.UInt32+FStar.UInt16+FStar.UInt8=* extracted/prims.krml extracted/FStar_Pervasives_Native.krml extracted/FStar_Pervasives.krml extracted/FStar_Mul.krml extracted/FStar_Squash.krml extracted/FStar_Classical.krml extracted/FStar_StrongExcludedMiddle.krml extracted/FStar_FunctionalExtensionality.krml extracted/FStar_List_Tot_Base.krml extracted/FStar_List_Tot_Properties.krml extracted/FStar_List_Tot.krml extracted/FStar_Seq_Base.krml extracted/FStar_Seq_Properties.krml extracted/FStar_Seq.krml extracted/FStar_Math_Lib.krml extracted/FStar_Math_Lemmas.krml extracted/FStar_BitVector.krml extracted/FStar_UInt.krml extracted/FStar_UInt32.krml extracted/FStar_Int.krml extracted/FStar_Int16.krml extracted/FStar_Preorder.krml extracted/FStar_Ghost.krml extracted/FStar_ErasedLogic.krml extracted/FStar_UInt64.krml extracted/FStar_Set.krml extracted/FStar_PropositionalExtensionality.krml extracted/FStar_PredicateExtensionality.krml extracted/FStar_TSet.krml extracted/FStar_Monotonic_Heap.krml extracted/FStar_Heap.krml extracted/FStar_Map.krml extracted/FStar_Monotonic_HyperHeap.krml extracted/FStar_Monotonic_HyperStack.krml extracted/FStar_HyperStack.krml extracted/FStar_Monotonic_Witnessed.krml extracted/FStar_HyperStack_ST.krml extracted/FStar_HyperStack_All.krml extracted/FStar_Date.krml extracted/FStar_Universe.krml extracted/FStar_GSet.krml extracted/FStar_ModifiesGen.krml extracted/LowStar_Monotonic_Buffer.krml extracted/LowStar_Buffer.krml extracted/Spec_Loops.krml extracted/LowStar_BufferOps.krml extracted/C_Loops.krml extracted/FStar_UInt8.krml extracted/FStar_Kremlin_Endianness.krml extracted/FStar_UInt63.krml extracted/FStar_Exn.krml extracted/FStar_ST.krml extracted/FStar_All.krml extracted/FStar_Dyn.krml extracted/FStar_Int63.krml extracted/FStar_Int64.krml extracted/FStar_Int32.krml extracted/FStar_Int8.krml extracted/FStar_UInt16.krml extracted/FStar_Int_Cast.krml extracted/FStar_UInt128.krml extracted/C_Endianness.krml extracted/FStar_List.krml extracted/FStar_Float.krml extracted/FStar_IO.krml extracted/C.krml extracted/FStar_Char.krml extracted/FStar_String.krml extracted/LowStar_Modifies.krml extracted/C_String.krml extracted/FStar_Bytes.krml extracted/FStar_HyperStack_IO.krml extracted/C_Failure.krml extracted/TestLib.krml extracted/FStar_Int_Cast_Full.krml
* F* version: 059db0c8
* KreMLin version: 916c37ac
*/
#ifndef __FStar_UInt64_FStar_UInt32_FStar_UInt16_FStar_UInt8_H
#define __FStar_UInt64_FStar_UInt32_FStar_UInt16_FStar_UInt8_H
#include <inttypes.h>
#include <stdbool.h>
#include "kremlin/internal/compat.h"
#include "kremlin/internal/types.h"
extern Prims_int FStar_UInt64_n;
extern Prims_int FStar_UInt64_v(uint64_t x0);
extern uint64_t FStar_UInt64_uint_to_t(Prims_int x0);
extern uint64_t FStar_UInt64_add(uint64_t x0, uint64_t x1);
extern uint64_t FStar_UInt64_add_underspec(uint64_t x0, uint64_t x1);
extern uint64_t FStar_UInt64_add_mod(uint64_t x0, uint64_t x1);
extern uint64_t FStar_UInt64_sub(uint64_t x0, uint64_t x1);
extern uint64_t FStar_UInt64_sub_underspec(uint64_t x0, uint64_t x1);
extern uint64_t FStar_UInt64_sub_mod(uint64_t x0, uint64_t x1);
extern uint64_t FStar_UInt64_mul(uint64_t x0, uint64_t x1);
extern uint64_t FStar_UInt64_mul_underspec(uint64_t x0, uint64_t x1);
extern uint64_t FStar_UInt64_mul_mod(uint64_t x0, uint64_t x1);
extern uint64_t FStar_UInt64_mul_div(uint64_t x0, uint64_t x1);
extern uint64_t FStar_UInt64_div(uint64_t x0, uint64_t x1);
extern uint64_t FStar_UInt64_rem(uint64_t x0, uint64_t x1);
extern uint64_t FStar_UInt64_logand(uint64_t x0, uint64_t x1);
extern uint64_t FStar_UInt64_logxor(uint64_t x0, uint64_t x1);
extern uint64_t FStar_UInt64_logor(uint64_t x0, uint64_t x1);
extern uint64_t FStar_UInt64_lognot(uint64_t x0);
extern uint64_t FStar_UInt64_shift_right(uint64_t x0, uint32_t x1);
extern uint64_t FStar_UInt64_shift_left(uint64_t x0, uint32_t x1);
extern bool FStar_UInt64_eq(uint64_t x0, uint64_t x1);
extern bool FStar_UInt64_gt(uint64_t x0, uint64_t x1);
extern bool FStar_UInt64_gte(uint64_t x0, uint64_t x1);
extern bool FStar_UInt64_lt(uint64_t x0, uint64_t x1);
extern bool FStar_UInt64_lte(uint64_t x0, uint64_t x1);
extern uint64_t FStar_UInt64_minus(uint64_t x0);
extern uint32_t FStar_UInt64_n_minus_one;
uint64_t FStar_UInt64_eq_mask(uint64_t a, uint64_t b);
uint64_t FStar_UInt64_gte_mask(uint64_t a, uint64_t b);
extern Prims_string FStar_UInt64_to_string(uint64_t x0);
extern uint64_t FStar_UInt64_of_string(Prims_string x0);
extern Prims_int FStar_UInt32_n;
extern Prims_int FStar_UInt32_v(uint32_t x0);
extern uint32_t FStar_UInt32_uint_to_t(Prims_int x0);
extern uint32_t FStar_UInt32_add(uint32_t x0, uint32_t x1);
extern uint32_t FStar_UInt32_add_underspec(uint32_t x0, uint32_t x1);
extern uint32_t FStar_UInt32_add_mod(uint32_t x0, uint32_t x1);
extern uint32_t FStar_UInt32_sub(uint32_t x0, uint32_t x1);
extern uint32_t FStar_UInt32_sub_underspec(uint32_t x0, uint32_t x1);
extern uint32_t FStar_UInt32_sub_mod(uint32_t x0, uint32_t x1);
extern uint32_t FStar_UInt32_mul(uint32_t x0, uint32_t x1);
extern uint32_t FStar_UInt32_mul_underspec(uint32_t x0, uint32_t x1);
extern uint32_t FStar_UInt32_mul_mod(uint32_t x0, uint32_t x1);
extern uint32_t FStar_UInt32_mul_div(uint32_t x0, uint32_t x1);
extern uint32_t FStar_UInt32_div(uint32_t x0, uint32_t x1);
extern uint32_t FStar_UInt32_rem(uint32_t x0, uint32_t x1);
extern uint32_t FStar_UInt32_logand(uint32_t x0, uint32_t x1);
extern uint32_t FStar_UInt32_logxor(uint32_t x0, uint32_t x1);
extern uint32_t FStar_UInt32_logor(uint32_t x0, uint32_t x1);
extern uint32_t FStar_UInt32_lognot(uint32_t x0);
extern uint32_t FStar_UInt32_shift_right(uint32_t x0, uint32_t x1);
extern uint32_t FStar_UInt32_shift_left(uint32_t x0, uint32_t x1);
extern bool FStar_UInt32_eq(uint32_t x0, uint32_t x1);
extern bool FStar_UInt32_gt(uint32_t x0, uint32_t x1);
extern bool FStar_UInt32_gte(uint32_t x0, uint32_t x1);
extern bool FStar_UInt32_lt(uint32_t x0, uint32_t x1);
extern bool FStar_UInt32_lte(uint32_t x0, uint32_t x1);
extern uint32_t FStar_UInt32_minus(uint32_t x0);
extern uint32_t FStar_UInt32_n_minus_one;
uint32_t FStar_UInt32_eq_mask(uint32_t a, uint32_t b);
uint32_t FStar_UInt32_gte_mask(uint32_t a, uint32_t b);
extern Prims_string FStar_UInt32_to_string(uint32_t x0);
extern uint32_t FStar_UInt32_of_string(Prims_string x0);
extern Prims_int FStar_UInt16_n;
extern Prims_int FStar_UInt16_v(uint16_t x0);
extern uint16_t FStar_UInt16_uint_to_t(Prims_int x0);
extern uint16_t FStar_UInt16_add(uint16_t x0, uint16_t x1);
extern uint16_t FStar_UInt16_add_underspec(uint16_t x0, uint16_t x1);
extern uint16_t FStar_UInt16_add_mod(uint16_t x0, uint16_t x1);
extern uint16_t FStar_UInt16_sub(uint16_t x0, uint16_t x1);
extern uint16_t FStar_UInt16_sub_underspec(uint16_t x0, uint16_t x1);
extern uint16_t FStar_UInt16_sub_mod(uint16_t x0, uint16_t x1);
extern uint16_t FStar_UInt16_mul(uint16_t x0, uint16_t x1);
extern uint16_t FStar_UInt16_mul_underspec(uint16_t x0, uint16_t x1);
extern uint16_t FStar_UInt16_mul_mod(uint16_t x0, uint16_t x1);
extern uint16_t FStar_UInt16_mul_div(uint16_t x0, uint16_t x1);
extern uint16_t FStar_UInt16_div(uint16_t x0, uint16_t x1);
extern uint16_t FStar_UInt16_rem(uint16_t x0, uint16_t x1);
extern uint16_t FStar_UInt16_logand(uint16_t x0, uint16_t x1);
extern uint16_t FStar_UInt16_logxor(uint16_t x0, uint16_t x1);
extern uint16_t FStar_UInt16_logor(uint16_t x0, uint16_t x1);
extern uint16_t FStar_UInt16_lognot(uint16_t x0);
extern uint16_t FStar_UInt16_shift_right(uint16_t x0, uint32_t x1);
extern uint16_t FStar_UInt16_shift_left(uint16_t x0, uint32_t x1);
extern bool FStar_UInt16_eq(uint16_t x0, uint16_t x1);
extern bool FStar_UInt16_gt(uint16_t x0, uint16_t x1);
extern bool FStar_UInt16_gte(uint16_t x0, uint16_t x1);
extern bool FStar_UInt16_lt(uint16_t x0, uint16_t x1);
extern bool FStar_UInt16_lte(uint16_t x0, uint16_t x1);
extern uint16_t FStar_UInt16_minus(uint16_t x0);
extern uint32_t FStar_UInt16_n_minus_one;
uint16_t FStar_UInt16_eq_mask(uint16_t a, uint16_t b);
uint16_t FStar_UInt16_gte_mask(uint16_t a, uint16_t b);
extern Prims_string FStar_UInt16_to_string(uint16_t x0);
extern uint16_t FStar_UInt16_of_string(Prims_string x0);
extern Prims_int FStar_UInt8_n;
extern Prims_int FStar_UInt8_v(uint8_t x0);
extern uint8_t FStar_UInt8_uint_to_t(Prims_int x0);
extern uint8_t FStar_UInt8_add(uint8_t x0, uint8_t x1);
extern uint8_t FStar_UInt8_add_underspec(uint8_t x0, uint8_t x1);
extern uint8_t FStar_UInt8_add_mod(uint8_t x0, uint8_t x1);
extern uint8_t FStar_UInt8_sub(uint8_t x0, uint8_t x1);
extern uint8_t FStar_UInt8_sub_underspec(uint8_t x0, uint8_t x1);
extern uint8_t FStar_UInt8_sub_mod(uint8_t x0, uint8_t x1);
extern uint8_t FStar_UInt8_mul(uint8_t x0, uint8_t x1);
extern uint8_t FStar_UInt8_mul_underspec(uint8_t x0, uint8_t x1);
extern uint8_t FStar_UInt8_mul_mod(uint8_t x0, uint8_t x1);
extern uint8_t FStar_UInt8_mul_div(uint8_t x0, uint8_t x1);
extern uint8_t FStar_UInt8_div(uint8_t x0, uint8_t x1);
extern uint8_t FStar_UInt8_rem(uint8_t x0, uint8_t x1);
extern uint8_t FStar_UInt8_logand(uint8_t x0, uint8_t x1);
extern uint8_t FStar_UInt8_logxor(uint8_t x0, uint8_t x1);
extern uint8_t FStar_UInt8_logor(uint8_t x0, uint8_t x1);
extern uint8_t FStar_UInt8_lognot(uint8_t x0);
extern uint8_t FStar_UInt8_shift_right(uint8_t x0, uint32_t x1);
extern uint8_t FStar_UInt8_shift_left(uint8_t x0, uint32_t x1);
extern bool FStar_UInt8_eq(uint8_t x0, uint8_t x1);
extern bool FStar_UInt8_gt(uint8_t x0, uint8_t x1);
extern bool FStar_UInt8_gte(uint8_t x0, uint8_t x1);
extern bool FStar_UInt8_lt(uint8_t x0, uint8_t x1);
extern bool FStar_UInt8_lte(uint8_t x0, uint8_t x1);
extern uint8_t FStar_UInt8_minus(uint8_t x0);
extern uint32_t FStar_UInt8_n_minus_one;
uint8_t FStar_UInt8_eq_mask(uint8_t a, uint8_t b);
uint8_t FStar_UInt8_gte_mask(uint8_t a, uint8_t b);
extern Prims_string FStar_UInt8_to_string(uint8_t x0);
extern uint8_t FStar_UInt8_of_string(Prims_string x0);
typedef uint8_t FStar_UInt8_byte;
#define __FStar_UInt64_FStar_UInt32_FStar_UInt16_FStar_UInt8_H_DEFINED
#endif
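
The *_eq_mask and *_gte_mask functions declared above are the constant-time
comparison primitives the Curve25519 code relies on: they return an all-ones
word when the predicate holds and zero otherwise, with no data-dependent
branches. A sketch of the usual branch-free construction (illustrative only;
the verified implementations live in the kremlib C files, not in this
header):

/* Returns all-ones if a == b, zero otherwise, in constant time. */
static uint64_t eq_mask_sketch(uint64_t a, uint64_t b)
{
    uint64_t x = a ^ b;                  /* zero exactly when a == b */
    uint64_t minus_x = ~x + 1U;          /* two's-complement negation */
    uint64_t sign = (x | minus_x) >> 63; /* 1 when x != 0, else 0 */
    return sign - 1U;                    /* wraps to all-ones when equal */
}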

@@ -0,0 +1,204 @@
/* Copyright (c) INRIA and Microsoft Corporation. All rights reserved.
Licensed under the Apache 2.0 License. */
#ifndef __KREMLIN_ENDIAN_H
#define __KREMLIN_ENDIAN_H
#include <string.h>
#include <inttypes.h>
/******************************************************************************/
/* Implementing C.fst (part 2: endian-ness macros) */
/******************************************************************************/
/* ... for Linux */
#if defined(__linux__) || defined(__CYGWIN__)
# include <endian.h>
/* ... for OSX */
#elif defined(__APPLE__)
# include <libkern/OSByteOrder.h>
# define htole64(x) OSSwapHostToLittleInt64(x)
# define le64toh(x) OSSwapLittleToHostInt64(x)
# define htobe64(x) OSSwapHostToBigInt64(x)
# define be64toh(x) OSSwapBigToHostInt64(x)
# define htole16(x) OSSwapHostToLittleInt16(x)
# define le16toh(x) OSSwapLittleToHostInt16(x)
# define htobe16(x) OSSwapHostToBigInt16(x)
# define be16toh(x) OSSwapBigToHostInt16(x)
# define htole32(x) OSSwapHostToLittleInt32(x)
# define le32toh(x) OSSwapLittleToHostInt32(x)
# define htobe32(x) OSSwapHostToBigInt32(x)
# define be32toh(x) OSSwapBigToHostInt32(x)
/* ... for Solaris */
#elif defined(__sun__)
# include <sys/byteorder.h>
# define htole64(x) LE_64(x)
# define le64toh(x) LE_64(x)
# define htobe64(x) BE_64(x)
# define be64toh(x) BE_64(x)
# define htole16(x) LE_16(x)
# define le16toh(x) LE_16(x)
# define htobe16(x) BE_16(x)
# define be16toh(x) BE_16(x)
# define htole32(x) LE_32(x)
# define le32toh(x) LE_32(x)
# define htobe32(x) BE_32(x)
# define be32toh(x) BE_32(x)
/* ... for the BSDs */
#elif defined(__FreeBSD__) || defined(__NetBSD__) || defined(__DragonFly__)
# include <sys/endian.h>
#elif defined(__OpenBSD__)
# include <endian.h>
/* ... for Windows (MSVC)... not targeting XBOX 360! */
#elif defined(_MSC_VER)
# include <stdlib.h>
# define htobe16(x) _byteswap_ushort(x)
# define htole16(x) (x)
# define be16toh(x) _byteswap_ushort(x)
# define le16toh(x) (x)
# define htobe32(x) _byteswap_ulong(x)
# define htole32(x) (x)
# define be32toh(x) _byteswap_ulong(x)
# define le32toh(x) (x)
# define htobe64(x) _byteswap_uint64(x)
# define htole64(x) (x)
# define be64toh(x) _byteswap_uint64(x)
# define le64toh(x) (x)
/* ... for Windows (GCC-like, e.g. mingw or clang) */
#elif (defined(_WIN32) || defined(_WIN64)) && \
(defined(__GNUC__) || defined(__clang__))
# define htobe16(x) __builtin_bswap16(x)
# define htole16(x) (x)
# define be16toh(x) __builtin_bswap16(x)
# define le16toh(x) (x)
# define htobe32(x) __builtin_bswap32(x)
# define htole32(x) (x)
# define be32toh(x) __builtin_bswap32(x)
# define le32toh(x) (x)
# define htobe64(x) __builtin_bswap64(x)
# define htole64(x) (x)
# define be64toh(x) __builtin_bswap64(x)
# define le64toh(x) (x)
/* ... generic big-endian fallback code */
#elif defined(__BYTE_ORDER__) && __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
/* byte swapping code inspired by:
* https://github.com/rweather/arduinolibs/blob/master/libraries/Crypto/utility/EndianUtil.h
* */
# define htobe32(x) (x)
# define be32toh(x) (x)
# define htole32(x) \
(__extension__({ \
uint32_t _temp = (x); \
((_temp >> 24) & 0x000000FF) | ((_temp >> 8) & 0x0000FF00) | \
((_temp << 8) & 0x00FF0000) | ((_temp << 24) & 0xFF000000); \
}))
# define le32toh(x) (htole32((x)))
# define htobe64(x) (x)
# define be64toh(x) (x)
# define htole64(x) \
(__extension__({ \
uint64_t __temp = (x); \
uint32_t __low = htobe32((uint32_t)__temp); \
uint32_t __high = htobe32((uint32_t)(__temp >> 32)); \
(((uint64_t)__low) << 32) | __high; \
}))
# define le64toh(x) (htole64((x)))
/* ... generic little-endian fallback code */
#elif defined(__BYTE_ORDER__) && __BYTE_ORDER__ == __ORDER_LITTLE_ENDIAN__
# define htole32(x) (x)
# define le32toh(x) (x)
# define htobe32(x) \
(__extension__({ \
uint32_t _temp = (x); \
((_temp >> 24) & 0x000000FF) | ((_temp >> 8) & 0x0000FF00) | \
((_temp << 8) & 0x00FF0000) | ((_temp << 24) & 0xFF000000); \
}))
# define be32toh(x) (htobe32((x)))
# define htole64(x) (x)
# define le64toh(x) (x)
# define htobe64(x) \
(__extension__({ \
uint64_t __temp = (x); \
uint32_t __low = htobe32((uint32_t)__temp); \
uint32_t __high = htobe32((uint32_t)(__temp >> 32)); \
(((uint64_t)__low) << 32) | __high; \
}))
# define be64toh(x) (htobe64((x)))
/* ... couldn't determine endian-ness of the target platform */
#else
# error "Please define __BYTE_ORDER__!"
#endif /* defined(__linux__) || ... */
/* Loads and stores. These avoid undefined behavior due to unaligned memory
* accesses, via memcpy. */
inline static uint16_t load16(uint8_t *b) {
uint16_t x;
memcpy(&x, b, 2);
return x;
}
inline static uint32_t load32(uint8_t *b) {
uint32_t x;
memcpy(&x, b, 4);
return x;
}
inline static uint64_t load64(uint8_t *b) {
uint64_t x;
memcpy(&x, b, 8);
return x;
}
inline static void store16(uint8_t *b, uint16_t i) {
memcpy(b, &i, 2);
}
inline static void store32(uint8_t *b, uint32_t i) {
memcpy(b, &i, 4);
}
inline static void store64(uint8_t *b, uint64_t i) {
memcpy(b, &i, 8);
}
#define load16_le(b) (le16toh(load16(b)))
#define store16_le(b, i) (store16(b, htole16(i)))
#define load16_be(b) (be16toh(load16(b)))
#define store16_be(b, i) (store16(b, htobe16(i)))
#define load32_le(b) (le32toh(load32(b)))
#define store32_le(b, i) (store32(b, htole32(i)))
#define load32_be(b) (be32toh(load32(b)))
#define store32_be(b, i) (store32(b, htobe32(i)))
#define load64_le(b) (le64toh(load64(b)))
#define store64_le(b, i) (store64(b, htole64(i)))
#define load64_be(b) (be64toh(load64(b)))
#define store64_be(b, i) (store64(b, htobe64(i)))
#endif
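
Because the loads and stores go through memcpy, they are well-defined even
for unaligned pointers; the htole/le...toh macros then fix the byte order. A
round-trip sketch using only the macros defined above (names illustrative):

static void endianness_demo(void)
{
    uint8_t buf[8];
    uint64_t x = 0x0102030405060708ULL;
    uint64_t y;
    store64_le(buf, x);  /* buf[0] == 0x08, ..., buf[7] == 0x01 */
    y = load64_be(buf);  /* same bytes, read back big-endian */
    /* y == 0x0807060504030201ULL, the byte-reversal of x. */
    (void)y;
}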

@@ -0,0 +1,16 @@
/* Copyright (c) INRIA and Microsoft Corporation. All rights reserved.
Licensed under the Apache 2.0 License. */
#ifndef __KREMLIN_BUILTIN_H
#define __KREMLIN_BUILTIN_H
/* For alloca, when using KreMLin's -falloca */
#if (defined(_WIN32) || defined(_WIN64))
# include <malloc.h>
#endif
/* If some globals need to be initialized before the main, then kremlin will
* generate and try to link last a function with this type: */
void kremlinit_globals(void);
#endif

@@ -0,0 +1,44 @@
/* Copyright (c) INRIA and Microsoft Corporation. All rights reserved.
Licensed under the Apache 2.0 License. */
#ifndef __KREMLIN_CALLCONV_H
#define __KREMLIN_CALLCONV_H
/******************************************************************************/
/* Some macros to ease compatibility */
/******************************************************************************/
/* We want to generate __cdecl safely without worrying about it being undefined.
* When using MSVC, these are always defined. When using MinGW, these are
* defined too. They have no meaning for other platforms, so we define them to
* be empty macros in other situations. */
#ifndef _MSC_VER
#ifndef __cdecl
#define __cdecl
#endif
#ifndef __stdcall
#define __stdcall
#endif
#ifndef __fastcall
#define __fastcall
#endif
#endif
/* TODO: review these two definitions and understand why they're needed. */
#ifdef __GNUC__
# define inline __inline__
#endif
/* GCC-specific attribute syntax; everyone else gets the standard C inline
* attribute. */
#ifdef __GNUC__
# ifndef __clang__
# define force_inline inline __attribute__((always_inline))
# else
# define force_inline inline
# endif
#else
# define force_inline inline
#endif
#endif

@@ -0,0 +1,34 @@
/* Copyright (c) INRIA and Microsoft Corporation. All rights reserved.
Licensed under the Apache 2.0 License. */
#ifndef KRML_COMPAT_H
#define KRML_COMPAT_H
#include <inttypes.h>
/* A series of macros that define C implementations of types that are not Low*,
* to facilitate porting programs to Low*. */
typedef const char *Prims_string;
typedef struct {
uint32_t length;
const char *data;
} FStar_Bytes_bytes;
typedef int32_t Prims_pos, Prims_nat, Prims_nonzero, Prims_int,
krml_checked_int_t;
#define RETURN_OR(x) \
do { \
int64_t __ret = x; \
if (__ret < INT32_MIN || INT32_MAX < __ret) { \
KRML_HOST_PRINTF( \
"Prims.{int,nat,pos} integer overflow at %s:%d\n", __FILE__, \
__LINE__); \
KRML_HOST_EXIT(252); \
} \
return (int32_t)__ret; \
} while (0)
#endif
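
RETURN_OR gives ported Low* code a checked narrowing from a 64-bit
intermediate back to the 32-bit Prims_int model, aborting on overflow
instead of truncating silently. A hedged usage sketch (the function is
hypothetical; KRML_HOST_PRINTF and KRML_HOST_EXIT come from
kremlin/internal/target.h):

/* Add two model integers, trapping if the sum leaves the int32 range. */
static Prims_int prims_add(Prims_int a, Prims_int b)
{
    RETURN_OR((int64_t)a + (int64_t)b);
}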

@@ -0,0 +1,57 @@
/* Copyright (c) INRIA and Microsoft Corporation. All rights reserved.
Licensed under the Apache 2.0 License. */
#ifndef __KREMLIN_DEBUG_H
#define __KREMLIN_DEBUG_H
#include <inttypes.h>
#include "kremlin/internal/target.h"
/******************************************************************************/
/* Debugging helpers - intended only for KreMLin developers */
/******************************************************************************/
/* In support of "-wasm -d force-c": we might need this function to be
* forward-declared, because the dependency on WasmSupport appears very late,
* after SimplifyWasm, and sadly, after the topological order has been done. */
void WasmSupport_check_buffer_size(uint32_t s);
/* A series of GCC atrocities to trace function calls (kremlin's [-d c-calls]
* option). Useful when trying to debug, say, Wasm, to compare traces. */
/* clang-format off */
#ifdef __GNUC__
#define KRML_FORMAT(X) _Generic((X), \
uint8_t : "0x%08" PRIx8, \
uint16_t: "0x%08" PRIx16, \
uint32_t: "0x%08" PRIx32, \
uint64_t: "0x%08" PRIx64, \
int8_t : "0x%08" PRIx8, \
int16_t : "0x%08" PRIx16, \
int32_t : "0x%08" PRIx32, \
int64_t : "0x%08" PRIx64, \
default : "%s")
#define KRML_FORMAT_ARG(X) _Generic((X), \
uint8_t : X, \
uint16_t: X, \
uint32_t: X, \
uint64_t: X, \
int8_t : X, \
int16_t : X, \
int32_t : X, \
int64_t : X, \
default : "unknown")
/* clang-format on */
# define KRML_DEBUG_RETURN(X) \
({ \
__auto_type _ret = (X); \
KRML_HOST_PRINTF("returning: "); \
KRML_HOST_PRINTF(KRML_FORMAT(_ret), KRML_FORMAT_ARG(_ret)); \
KRML_HOST_PRINTF(" \n"); \
_ret; \
})
#endif
#endif

@@ -0,0 +1,102 @@
/* Copyright (c) INRIA and Microsoft Corporation. All rights reserved.
Licensed under the Apache 2.0 License. */
#ifndef __KREMLIN_TARGET_H
#define __KREMLIN_TARGET_H
#include <stdlib.h>
#include <stdio.h>
#include <stdbool.h>
#include <inttypes.h>
#include <limits.h>
#include "kremlin/internal/callconv.h"
/******************************************************************************/
/* Macros that KreMLin will generate. */
/******************************************************************************/
/* For "bare" targets that do not have a C stdlib, the user might want to use
* [-add-early-include '"mydefinitions.h"'] and override these. */
#ifndef KRML_HOST_PRINTF
# define KRML_HOST_PRINTF printf
#endif
#if ( \
(defined __STDC_VERSION__) && (__STDC_VERSION__ >= 199901L) && \
(!(defined KRML_HOST_EPRINTF)))
# define KRML_HOST_EPRINTF(...) fprintf(stderr, __VA_ARGS__)
#endif
#ifndef KRML_HOST_EXIT
# define KRML_HOST_EXIT exit
#endif
#ifndef KRML_HOST_MALLOC
# define KRML_HOST_MALLOC malloc
#endif
#ifndef KRML_HOST_CALLOC
# define KRML_HOST_CALLOC calloc
#endif
#ifndef KRML_HOST_FREE
# define KRML_HOST_FREE free
#endif
#ifndef KRML_HOST_TIME
# include <time.h>
/* Prims_nat not yet in scope */
inline static int32_t krml_time() {
return (int32_t)time(NULL);
}
# define KRML_HOST_TIME krml_time
#endif
/* In statement position, exiting is easy. */
#define KRML_EXIT \
do { \
KRML_HOST_PRINTF("Unimplemented function at %s:%d\n", __FILE__, __LINE__); \
KRML_HOST_EXIT(254); \
} while (0)
/* In expression position, use the comma-operator and a malloc to return an
* expression of the right size. KreMLin passes t as the parameter to the macro.
*/
#define KRML_EABORT(t, msg) \
(KRML_HOST_PRINTF("KreMLin abort at %s:%d\n%s\n", __FILE__, __LINE__, msg), \
KRML_HOST_EXIT(255), *((t *)KRML_HOST_MALLOC(sizeof(t))))
/* In FStar.Buffer.fst, the size of arrays is uint32_t, but it's a number of
* *elements*. Do an ugly, run-time check (some of which KreMLin can eliminate).
*/
#ifdef __GNUC__
# define _KRML_CHECK_SIZE_PRAGMA \
_Pragma("GCC diagnostic ignored \"-Wtype-limits\"")
#else
# define _KRML_CHECK_SIZE_PRAGMA
#endif
#define KRML_CHECK_SIZE(size_elt, sz) \
do { \
_KRML_CHECK_SIZE_PRAGMA \
if (((size_t)(sz)) > ((size_t)(SIZE_MAX / (size_elt)))) { \
KRML_HOST_PRINTF( \
"Maximum allocatable size exceeded, aborting before overflow at " \
"%s:%d\n", \
__FILE__, __LINE__); \
KRML_HOST_EXIT(253); \
} \
} while (0)
#if defined(_MSC_VER) && _MSC_VER < 1900
# define KRML_HOST_SNPRINTF(buf, sz, fmt, arg) _snprintf_s(buf, sz, _TRUNCATE, fmt, arg)
#else
# define KRML_HOST_SNPRINTF(buf, sz, fmt, arg) snprintf(buf, sz, fmt, arg)
#endif
#endif
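
On a freestanding target the KRML_HOST_* hooks above can be overridden
before this header is seen, e.g. through KreMLin's -add-early-include. A
sketch of what such an override header might contain (all replacement names
are hypothetical):

/* mydefinitions.h -- hypothetical early include for a bare-metal build. */
#include <stddef.h>
extern void my_log(const char *fmt, ...); /* platform logger */
extern void my_abort(int code);           /* never returns */
extern void *my_alloc(size_t n);          /* pool allocator */
#define KRML_HOST_PRINTF my_log
#define KRML_HOST_EXIT   my_abort
#define KRML_HOST_MALLOC my_alloc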

@@ -0,0 +1,61 @@
/* Copyright (c) INRIA and Microsoft Corporation. All rights reserved.
Licensed under the Apache 2.0 License. */
#ifndef KRML_TYPES_H
#define KRML_TYPES_H
#include <inttypes.h>
#include <stdio.h>
#include <stdlib.h>
/* Types which are either abstract, meaning that they have to be implemented
 * in C, or which are models, meaning that they are swapped out at
 * compile-time for hand-written C types (in which case they're marked as
 * noextract). */
typedef uint64_t FStar_UInt64_t, FStar_UInt64_t_;
typedef int64_t FStar_Int64_t, FStar_Int64_t_;
typedef uint32_t FStar_UInt32_t, FStar_UInt32_t_;
typedef int32_t FStar_Int32_t, FStar_Int32_t_;
typedef uint16_t FStar_UInt16_t, FStar_UInt16_t_;
typedef int16_t FStar_Int16_t, FStar_Int16_t_;
typedef uint8_t FStar_UInt8_t, FStar_UInt8_t_;
typedef int8_t FStar_Int8_t, FStar_Int8_t_;
/* Only useful when building Kremlib, because it's in the dependency graph of
* FStar.Int.Cast. */
typedef uint64_t FStar_UInt63_t, FStar_UInt63_t_;
typedef int64_t FStar_Int63_t, FStar_Int63_t_;
typedef double FStar_Float_float;
typedef uint32_t FStar_Char_char;
typedef FILE *FStar_IO_fd_read, *FStar_IO_fd_write;
typedef void *FStar_Dyn_dyn;
typedef const char *C_String_t, *C_String_t_;
typedef int exit_code;
typedef FILE *channel;
typedef unsigned long long TestLib_cycles;
typedef uint64_t FStar_Date_dateTime, FStar_Date_timeSpan;
/* The uint128 type is a special case since we offer several implementations of
* it, depending on the compiler and whether the user wants the verified
* implementation or not. */
#if !defined(KRML_VERIFIED_UINT128) && defined(_MSC_VER) && defined(_M_X64)
# include <emmintrin.h>
typedef __m128i FStar_UInt128_uint128;
#elif !defined(KRML_VERIFIED_UINT128) && !defined(_MSC_VER)
typedef unsigned __int128 FStar_UInt128_uint128;
#else
typedef struct FStar_UInt128_uint128_s {
uint64_t low;
uint64_t high;
} FStar_UInt128_uint128;
#endif
typedef FStar_UInt128_uint128 FStar_UInt128_t, FStar_UInt128_t_, uint128_t;
#endif

@@ -0,0 +1,5 @@
/* Copyright (c) INRIA and Microsoft Corporation. All rights reserved.
Licensed under the Apache 2.0 License. */
/* This file is automatically included when compiling with -wasm -d force-c */
#define WasmSupport_check_buffer_size(X)

@@ -0,0 +1,760 @@
/* Copyright (c) INRIA and Microsoft Corporation. All rights reserved.
Licensed under the Apache 2.0 License. */
/* This file was generated by KreMLin <https://github.com/FStarLang/kremlin>
* KreMLin invocation: /mnt/e/everest/verify/kremlin/krml -fc89 -fparentheses -fno-shadow -header /mnt/e/everest/verify/hdrcLh -minimal -fbuiltin-uint128 -fc89 -fparentheses -fno-shadow -header /mnt/e/everest/verify/hdrcLh -minimal -I /mnt/e/everest/verify/hacl-star/code/lib/kremlin -I /mnt/e/everest/verify/kremlin/kremlib/compat -I /mnt/e/everest/verify/hacl-star/specs -I /mnt/e/everest/verify/hacl-star/specs/old -I . -ccopt -march=native -verbose -ldopt -flto -tmpdir x25519-c -I ../bignum -bundle Hacl.Curve25519=* -minimal -add-include "kremlib.h" -skip-compilation x25519-c/out.krml -o x25519-c/Hacl_Curve25519.c
* F* version: 059db0c8
* KreMLin version: 916c37ac
*/
#include "Hacl_Curve25519.h"
extern uint64_t FStar_UInt64_eq_mask(uint64_t x0, uint64_t x1);
extern uint64_t FStar_UInt64_gte_mask(uint64_t x0, uint64_t x1);
extern uint128_t FStar_UInt128_add(uint128_t x0, uint128_t x1);
extern uint128_t FStar_UInt128_add_mod(uint128_t x0, uint128_t x1);
extern uint128_t FStar_UInt128_logand(uint128_t x0, uint128_t x1);
extern uint128_t FStar_UInt128_shift_right(uint128_t x0, uint32_t x1);
extern uint128_t FStar_UInt128_uint64_to_uint128(uint64_t x0);
extern uint64_t FStar_UInt128_uint128_to_uint64(uint128_t x0);
extern uint128_t FStar_UInt128_mul_wide(uint64_t x0, uint64_t x1);
static void Hacl_Bignum_Modulo_carry_top(uint64_t *b)
{
uint64_t b4 = b[4U];
uint64_t b0 = b[0U];
uint64_t b4_ = b4 & (uint64_t)0x7ffffffffffffU;
uint64_t b0_ = b0 + (uint64_t)19U * (b4 >> (uint32_t)51U);
b[4U] = b4_;
b[0U] = b0_;
}
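/* Note: the field is GF(2^255 - 19), with elements held in five 51-bit
 * limbs, b = b0 + 2^51 b1 + 2^102 b2 + 2^153 b3 + 2^204 b4. Bits of b4 at
 * position 51 and above stand for multiples of 2^255, and 2^255 == 19
 * (mod p), so carry_top folds them into the low limb: b0 += 19*(b4 >> 51). */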
inline static void Hacl_Bignum_Fproduct_copy_from_wide_(uint64_t *output, uint128_t *input)
{
uint32_t i;
for (i = (uint32_t)0U; i < (uint32_t)5U; i = i + (uint32_t)1U)
{
uint128_t xi = input[i];
output[i] = (uint64_t)xi;
}
}
inline static void
Hacl_Bignum_Fproduct_sum_scalar_multiplication_(uint128_t *output, uint64_t *input, uint64_t s)
{
uint32_t i;
for (i = (uint32_t)0U; i < (uint32_t)5U; i = i + (uint32_t)1U)
{
uint128_t xi = output[i];
uint64_t yi = input[i];
output[i] = xi + (uint128_t)yi * s;
}
}
inline static void Hacl_Bignum_Fproduct_carry_wide_(uint128_t *tmp)
{
uint32_t i;
for (i = (uint32_t)0U; i < (uint32_t)4U; i = i + (uint32_t)1U)
{
uint32_t ctr = i;
uint128_t tctr = tmp[ctr];
uint128_t tctrp1 = tmp[ctr + (uint32_t)1U];
uint64_t r0 = (uint64_t)tctr & (uint64_t)0x7ffffffffffffU;
uint128_t c = tctr >> (uint32_t)51U;
tmp[ctr] = (uint128_t)r0;
tmp[ctr + (uint32_t)1U] = tctrp1 + c;
}
}
inline static void Hacl_Bignum_Fmul_shift_reduce(uint64_t *output)
{
uint64_t tmp = output[4U];
uint64_t b0;
{
uint32_t i;
for (i = (uint32_t)0U; i < (uint32_t)4U; i = i + (uint32_t)1U)
{
uint32_t ctr = (uint32_t)5U - i - (uint32_t)1U;
uint64_t z = output[ctr - (uint32_t)1U];
output[ctr] = z;
}
}
output[0U] = tmp;
b0 = output[0U];
output[0U] = (uint64_t)19U * b0;
}
static void
Hacl_Bignum_Fmul_mul_shift_reduce_(uint128_t *output, uint64_t *input, uint64_t *input2)
{
uint32_t i;
uint64_t input2i;
{
uint32_t i0;
for (i0 = (uint32_t)0U; i0 < (uint32_t)4U; i0 = i0 + (uint32_t)1U)
{
uint64_t input2i0 = input2[i0];
Hacl_Bignum_Fproduct_sum_scalar_multiplication_(output, input, input2i0);
Hacl_Bignum_Fmul_shift_reduce(input);
}
}
i = (uint32_t)4U;
input2i = input2[i];
Hacl_Bignum_Fproduct_sum_scalar_multiplication_(output, input, input2i);
}
inline static void Hacl_Bignum_Fmul_fmul(uint64_t *output, uint64_t *input, uint64_t *input2)
{
uint64_t tmp[5U] = { 0U };
memcpy(tmp, input, (uint32_t)5U * sizeof input[0U]);
KRML_CHECK_SIZE(sizeof (uint128_t), (uint32_t)5U);
{
uint128_t t[5U];
{
uint32_t _i;
for (_i = 0U; _i < (uint32_t)5U; ++_i)
t[_i] = (uint128_t)(uint64_t)0U;
}
{
uint128_t b4;
uint128_t b0;
uint128_t b4_;
uint128_t b0_;
uint64_t i0;
uint64_t i1;
uint64_t i0_;
uint64_t i1_;
Hacl_Bignum_Fmul_mul_shift_reduce_(t, tmp, input2);
Hacl_Bignum_Fproduct_carry_wide_(t);
b4 = t[4U];
b0 = t[0U];
b4_ = b4 & (uint128_t)(uint64_t)0x7ffffffffffffU;
b0_ = b0 + (uint128_t)(uint64_t)19U * (uint64_t)(b4 >> (uint32_t)51U);
t[4U] = b4_;
t[0U] = b0_;
Hacl_Bignum_Fproduct_copy_from_wide_(output, t);
i0 = output[0U];
i1 = output[1U];
i0_ = i0 & (uint64_t)0x7ffffffffffffU;
i1_ = i1 + (i0 >> (uint32_t)51U);
output[0U] = i0_;
output[1U] = i1_;
}
}
}
inline static void Hacl_Bignum_Fsquare_fsquare__(uint128_t *tmp, uint64_t *output)
{
uint64_t r0 = output[0U];
uint64_t r1 = output[1U];
uint64_t r2 = output[2U];
uint64_t r3 = output[3U];
uint64_t r4 = output[4U];
uint64_t d0 = r0 * (uint64_t)2U;
uint64_t d1 = r1 * (uint64_t)2U;
uint64_t d2 = r2 * (uint64_t)2U * (uint64_t)19U;
uint64_t d419 = r4 * (uint64_t)19U;
uint64_t d4 = d419 * (uint64_t)2U;
uint128_t s0 = (uint128_t)r0 * r0 + (uint128_t)d4 * r1 + (uint128_t)d2 * r3;
uint128_t s1 = (uint128_t)d0 * r1 + (uint128_t)d4 * r2 + (uint128_t)(r3 * (uint64_t)19U) * r3;
uint128_t s2 = (uint128_t)d0 * r2 + (uint128_t)r1 * r1 + (uint128_t)d4 * r3;
uint128_t s3 = (uint128_t)d0 * r3 + (uint128_t)d1 * r2 + (uint128_t)r4 * d419;
uint128_t s4 = (uint128_t)d0 * r4 + (uint128_t)d1 * r3 + (uint128_t)r2 * r2;
tmp[0U] = s0;
tmp[1U] = s1;
tmp[2U] = s2;
tmp[3U] = s3;
tmp[4U] = s4;
}
inline static void Hacl_Bignum_Fsquare_fsquare_(uint128_t *tmp, uint64_t *output)
{
uint128_t b4;
uint128_t b0;
uint128_t b4_;
uint128_t b0_;
uint64_t i0;
uint64_t i1;
uint64_t i0_;
uint64_t i1_;
Hacl_Bignum_Fsquare_fsquare__(tmp, output);
Hacl_Bignum_Fproduct_carry_wide_(tmp);
b4 = tmp[4U];
b0 = tmp[0U];
b4_ = b4 & (uint128_t)(uint64_t)0x7ffffffffffffU;
b0_ = b0 + (uint128_t)(uint64_t)19U * (uint64_t)(b4 >> (uint32_t)51U);
tmp[4U] = b4_;
tmp[0U] = b0_;
Hacl_Bignum_Fproduct_copy_from_wide_(output, tmp);
i0 = output[0U];
i1 = output[1U];
i0_ = i0 & (uint64_t)0x7ffffffffffffU;
i1_ = i1 + (i0 >> (uint32_t)51U);
output[0U] = i0_;
output[1U] = i1_;
}
static void
Hacl_Bignum_Fsquare_fsquare_times_(uint64_t *input, uint128_t *tmp, uint32_t count1)
{
uint32_t i;
Hacl_Bignum_Fsquare_fsquare_(tmp, input);
for (i = (uint32_t)1U; i < count1; i = i + (uint32_t)1U)
Hacl_Bignum_Fsquare_fsquare_(tmp, input);
}
inline static void
Hacl_Bignum_Fsquare_fsquare_times(uint64_t *output, uint64_t *input, uint32_t count1)
{
KRML_CHECK_SIZE(sizeof (uint128_t), (uint32_t)5U);
{
uint128_t t[5U];
{
uint32_t _i;
for (_i = 0U; _i < (uint32_t)5U; ++_i)
t[_i] = (uint128_t)(uint64_t)0U;
}
memcpy(output, input, (uint32_t)5U * sizeof input[0U]);
Hacl_Bignum_Fsquare_fsquare_times_(output, t, count1);
}
}
inline static void Hacl_Bignum_Fsquare_fsquare_times_inplace(uint64_t *output, uint32_t count1)
{
KRML_CHECK_SIZE(sizeof (uint128_t), (uint32_t)5U);
{
uint128_t t[5U];
{
uint32_t _i;
for (_i = 0U; _i < (uint32_t)5U; ++_i)
t[_i] = (uint128_t)(uint64_t)0U;
}
Hacl_Bignum_Fsquare_fsquare_times_(output, t, count1);
}
}
inline static void Hacl_Bignum_Crecip_crecip(uint64_t *out, uint64_t *z)
{
uint64_t buf[20U] = { 0U };
uint64_t *a0 = buf;
uint64_t *t00 = buf + (uint32_t)5U;
uint64_t *b0 = buf + (uint32_t)10U;
uint64_t *t01;
uint64_t *b1;
uint64_t *c0;
uint64_t *a;
uint64_t *t0;
uint64_t *b;
uint64_t *c;
Hacl_Bignum_Fsquare_fsquare_times(a0, z, (uint32_t)1U);
Hacl_Bignum_Fsquare_fsquare_times(t00, a0, (uint32_t)2U);
Hacl_Bignum_Fmul_fmul(b0, t00, z);
Hacl_Bignum_Fmul_fmul(a0, b0, a0);
Hacl_Bignum_Fsquare_fsquare_times(t00, a0, (uint32_t)1U);
Hacl_Bignum_Fmul_fmul(b0, t00, b0);
Hacl_Bignum_Fsquare_fsquare_times(t00, b0, (uint32_t)5U);
t01 = buf + (uint32_t)5U;
b1 = buf + (uint32_t)10U;
c0 = buf + (uint32_t)15U;
Hacl_Bignum_Fmul_fmul(b1, t01, b1);
Hacl_Bignum_Fsquare_fsquare_times(t01, b1, (uint32_t)10U);
Hacl_Bignum_Fmul_fmul(c0, t01, b1);
Hacl_Bignum_Fsquare_fsquare_times(t01, c0, (uint32_t)20U);
Hacl_Bignum_Fmul_fmul(t01, t01, c0);
Hacl_Bignum_Fsquare_fsquare_times_inplace(t01, (uint32_t)10U);
Hacl_Bignum_Fmul_fmul(b1, t01, b1);
Hacl_Bignum_Fsquare_fsquare_times(t01, b1, (uint32_t)50U);
a = buf;
t0 = buf + (uint32_t)5U;
b = buf + (uint32_t)10U;
c = buf + (uint32_t)15U;
Hacl_Bignum_Fmul_fmul(c, t0, b);
Hacl_Bignum_Fsquare_fsquare_times(t0, c, (uint32_t)100U);
Hacl_Bignum_Fmul_fmul(t0, t0, c);
Hacl_Bignum_Fsquare_fsquare_times_inplace(t0, (uint32_t)50U);
Hacl_Bignum_Fmul_fmul(t0, t0, b);
Hacl_Bignum_Fsquare_fsquare_times_inplace(t0, (uint32_t)5U);
Hacl_Bignum_Fmul_fmul(out, t0, a);
}
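/* Note: crecip computes a modular inverse via Fermat's little theorem,
 * out = z^(p-2) mod p with p = 2^255 - 19, i.e. exponent 2^255 - 21. The
 * fsquare_times/fmul sequence above is the standard Curve25519 addition
 * chain, building powers of the form z^(2^k - 1) and finishing with five
 * squarings and a final multiplication by z^11. */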
inline static void Hacl_Bignum_fsum(uint64_t *a, uint64_t *b)
{
uint32_t i;
for (i = (uint32_t)0U; i < (uint32_t)5U; i = i + (uint32_t)1U)
{
uint64_t xi = a[i];
uint64_t yi = b[i];
a[i] = xi + yi;
}
}
inline static void Hacl_Bignum_fdifference(uint64_t *a, uint64_t *b)
{
uint64_t tmp[5U] = { 0U };
uint64_t b0;
uint64_t b1;
uint64_t b2;
uint64_t b3;
uint64_t b4;
memcpy(tmp, b, (uint32_t)5U * sizeof b[0U]);
b0 = tmp[0U];
b1 = tmp[1U];
b2 = tmp[2U];
b3 = tmp[3U];
b4 = tmp[4U];
tmp[0U] = b0 + (uint64_t)0x3fffffffffff68U;
tmp[1U] = b1 + (uint64_t)0x3ffffffffffff8U;
tmp[2U] = b2 + (uint64_t)0x3ffffffffffff8U;
tmp[3U] = b3 + (uint64_t)0x3ffffffffffff8U;
tmp[4U] = b4 + (uint64_t)0x3ffffffffffff8U;
{
uint32_t i;
for (i = (uint32_t)0U; i < (uint32_t)5U; i = i + (uint32_t)1U)
{
uint64_t xi = a[i];
uint64_t yi = tmp[i];
a[i] = yi - xi;
}
}
}
inline static void Hacl_Bignum_fscalar(uint64_t *output, uint64_t *b, uint64_t s)
{
KRML_CHECK_SIZE(sizeof (uint128_t), (uint32_t)5U);
{
uint128_t tmp[5U];
{
uint32_t _i;
for (_i = 0U; _i < (uint32_t)5U; ++_i)
tmp[_i] = (uint128_t)(uint64_t)0U;
}
{
uint128_t b4;
uint128_t b0;
uint128_t b4_;
uint128_t b0_;
{
uint32_t i;
for (i = (uint32_t)0U; i < (uint32_t)5U; i = i + (uint32_t)1U)
{
uint64_t xi = b[i];
tmp[i] = (uint128_t)xi * s;
}
}
Hacl_Bignum_Fproduct_carry_wide_(tmp);
b4 = tmp[4U];
b0 = tmp[0U];
b4_ = b4 & (uint128_t)(uint64_t)0x7ffffffffffffU;
b0_ = b0 + (uint128_t)(uint64_t)19U * (uint64_t)(b4 >> (uint32_t)51U);
tmp[4U] = b4_;
tmp[0U] = b0_;
Hacl_Bignum_Fproduct_copy_from_wide_(output, tmp);
}
}
}
inline static void Hacl_Bignum_fmul(uint64_t *output, uint64_t *a, uint64_t *b)
{
Hacl_Bignum_Fmul_fmul(output, a, b);
}
inline static void Hacl_Bignum_crecip(uint64_t *output, uint64_t *input)
{
Hacl_Bignum_Crecip_crecip(output, input);
}
static void
Hacl_EC_Point_swap_conditional_step(uint64_t *a, uint64_t *b, uint64_t swap1, uint32_t ctr)
{
uint32_t i = ctr - (uint32_t)1U;
uint64_t ai = a[i];
uint64_t bi = b[i];
uint64_t x = swap1 & (ai ^ bi);
uint64_t ai1 = ai ^ x;
uint64_t bi1 = bi ^ x;
a[i] = ai1;
b[i] = bi1;
}
static void
Hacl_EC_Point_swap_conditional_(uint64_t *a, uint64_t *b, uint64_t swap1, uint32_t ctr)
{
if (!(ctr == (uint32_t)0U))
{
uint32_t i;
Hacl_EC_Point_swap_conditional_step(a, b, swap1, ctr);
i = ctr - (uint32_t)1U;
Hacl_EC_Point_swap_conditional_(a, b, swap1, i);
}
}
static void Hacl_EC_Point_swap_conditional(uint64_t *a, uint64_t *b, uint64_t iswap)
{
uint64_t swap1 = (uint64_t)0U - iswap;
Hacl_EC_Point_swap_conditional_(a, b, swap1, (uint32_t)5U);
Hacl_EC_Point_swap_conditional_(a + (uint32_t)5U, b + (uint32_t)5U, swap1, (uint32_t)5U);
}
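/* Note: iswap is 0 or 1; (uint64_t)0U - iswap expands it to an all-zeros or
 * all-ones mask. Each limb pair is then swapped branch-free via
 * x = mask & (ai ^ bi); ai ^= x; bi ^= x. The same memory accesses happen
 * whether or not the swap takes place, so the secret swap bit does not leak
 * through timing. */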
static void Hacl_EC_Point_copy(uint64_t *output, uint64_t *input)
{
memcpy(output, input, (uint32_t)5U * sizeof input[0U]);
memcpy(output + (uint32_t)5U,
input + (uint32_t)5U,
(uint32_t)5U * sizeof (input + (uint32_t)5U)[0U]);
}
static void Hacl_EC_Format_fexpand(uint64_t *output, uint8_t *input)
{
uint64_t i0 = load64_le(input);
uint8_t *x00 = input + (uint32_t)6U;
uint64_t i1 = load64_le(x00);
uint8_t *x01 = input + (uint32_t)12U;
uint64_t i2 = load64_le(x01);
uint8_t *x02 = input + (uint32_t)19U;
uint64_t i3 = load64_le(x02);
uint8_t *x0 = input + (uint32_t)24U;
uint64_t i4 = load64_le(x0);
uint64_t output0 = i0 & (uint64_t)0x7ffffffffffffU;
uint64_t output1 = i1 >> (uint32_t)3U & (uint64_t)0x7ffffffffffffU;
uint64_t output2 = i2 >> (uint32_t)6U & (uint64_t)0x7ffffffffffffU;
uint64_t output3 = i3 >> (uint32_t)1U & (uint64_t)0x7ffffffffffffU;
uint64_t output4 = i4 >> (uint32_t)12U & (uint64_t)0x7ffffffffffffU;
output[0U] = output0;
output[1U] = output1;
output[2U] = output2;
output[3U] = output3;
output[4U] = output4;
}
static void Hacl_EC_Format_fcontract_first_carry_pass(uint64_t *input)
{
uint64_t t0 = input[0U];
uint64_t t1 = input[1U];
uint64_t t2 = input[2U];
uint64_t t3 = input[3U];
uint64_t t4 = input[4U];
uint64_t t1_ = t1 + (t0 >> (uint32_t)51U);
uint64_t t0_ = t0 & (uint64_t)0x7ffffffffffffU;
uint64_t t2_ = t2 + (t1_ >> (uint32_t)51U);
uint64_t t1__ = t1_ & (uint64_t)0x7ffffffffffffU;
uint64_t t3_ = t3 + (t2_ >> (uint32_t)51U);
uint64_t t2__ = t2_ & (uint64_t)0x7ffffffffffffU;
uint64_t t4_ = t4 + (t3_ >> (uint32_t)51U);
uint64_t t3__ = t3_ & (uint64_t)0x7ffffffffffffU;
input[0U] = t0_;
input[1U] = t1__;
input[2U] = t2__;
input[3U] = t3__;
input[4U] = t4_;
}
static void Hacl_EC_Format_fcontract_first_carry_full(uint64_t *input)
{
Hacl_EC_Format_fcontract_first_carry_pass(input);
Hacl_Bignum_Modulo_carry_top(input);
}
static void Hacl_EC_Format_fcontract_second_carry_pass(uint64_t *input)
{
uint64_t t0 = input[0U];
uint64_t t1 = input[1U];
uint64_t t2 = input[2U];
uint64_t t3 = input[3U];
uint64_t t4 = input[4U];
uint64_t t1_ = t1 + (t0 >> (uint32_t)51U);
uint64_t t0_ = t0 & (uint64_t)0x7ffffffffffffU;
uint64_t t2_ = t2 + (t1_ >> (uint32_t)51U);
uint64_t t1__ = t1_ & (uint64_t)0x7ffffffffffffU;
uint64_t t3_ = t3 + (t2_ >> (uint32_t)51U);
uint64_t t2__ = t2_ & (uint64_t)0x7ffffffffffffU;
uint64_t t4_ = t4 + (t3_ >> (uint32_t)51U);
uint64_t t3__ = t3_ & (uint64_t)0x7ffffffffffffU;
input[0U] = t0_;
input[1U] = t1__;
input[2U] = t2__;
input[3U] = t3__;
input[4U] = t4_;
}
static void Hacl_EC_Format_fcontract_second_carry_full(uint64_t *input)
{
uint64_t i0;
uint64_t i1;
uint64_t i0_;
uint64_t i1_;
Hacl_EC_Format_fcontract_second_carry_pass(input);
Hacl_Bignum_Modulo_carry_top(input);
i0 = input[0U];
i1 = input[1U];
i0_ = i0 & (uint64_t)0x7ffffffffffffU;
i1_ = i1 + (i0 >> (uint32_t)51U);
input[0U] = i0_;
input[1U] = i1_;
}
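/* After both carry passes the value is below 2p; subtract p = 2^255 - 19
 * exactly when the element is >= p (all upper limbs maximal and the low
 * limb >= 2^51 - 19), using constant-time masks instead of a branch. */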
static void Hacl_EC_Format_fcontract_trim(uint64_t *input)
{
uint64_t a0 = input[0U];
uint64_t a1 = input[1U];
uint64_t a2 = input[2U];
uint64_t a3 = input[3U];
uint64_t a4 = input[4U];
uint64_t mask0 = FStar_UInt64_gte_mask(a0, (uint64_t)0x7ffffffffffedU);
uint64_t mask1 = FStar_UInt64_eq_mask(a1, (uint64_t)0x7ffffffffffffU);
uint64_t mask2 = FStar_UInt64_eq_mask(a2, (uint64_t)0x7ffffffffffffU);
uint64_t mask3 = FStar_UInt64_eq_mask(a3, (uint64_t)0x7ffffffffffffU);
uint64_t mask4 = FStar_UInt64_eq_mask(a4, (uint64_t)0x7ffffffffffffU);
uint64_t mask = (((mask0 & mask1) & mask2) & mask3) & mask4;
uint64_t a0_ = a0 - ((uint64_t)0x7ffffffffffedU & mask);
uint64_t a1_ = a1 - ((uint64_t)0x7ffffffffffffU & mask);
uint64_t a2_ = a2 - ((uint64_t)0x7ffffffffffffU & mask);
uint64_t a3_ = a3 - ((uint64_t)0x7ffffffffffffU & mask);
uint64_t a4_ = a4 - ((uint64_t)0x7ffffffffffffU & mask);
input[0U] = a0_;
input[1U] = a1_;
input[2U] = a2_;
input[3U] = a3_;
input[4U] = a4_;
}
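/* Packs five 51-bit limbs (255 bits) into four 64-bit words and stores
 * them little-endian; after trimming, the top bit is zero. */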
static void Hacl_EC_Format_fcontract_store(uint8_t *output, uint64_t *input)
{
uint64_t t0 = input[0U];
uint64_t t1 = input[1U];
uint64_t t2 = input[2U];
uint64_t t3 = input[3U];
uint64_t t4 = input[4U];
uint64_t o0 = t1 << (uint32_t)51U | t0;
uint64_t o1 = t2 << (uint32_t)38U | t1 >> (uint32_t)13U;
uint64_t o2 = t3 << (uint32_t)25U | t2 >> (uint32_t)26U;
uint64_t o3 = t4 << (uint32_t)12U | t3 >> (uint32_t)39U;
uint8_t *b0 = output;
uint8_t *b1 = output + (uint32_t)8U;
uint8_t *b2 = output + (uint32_t)16U;
uint8_t *b3 = output + (uint32_t)24U;
store64_le(b0, o0);
store64_le(b1, o1);
store64_le(b2, o2);
store64_le(b3, o3);
}
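/* Canonical serialization: two carry passes bring every limb below 2^51,
 * trim reduces into [0, p), and store emits the 32-byte little-endian form. */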
static void Hacl_EC_Format_fcontract(uint8_t *output, uint64_t *input)
{
Hacl_EC_Format_fcontract_first_carry_full(input);
Hacl_EC_Format_fcontract_second_carry_full(input);
Hacl_EC_Format_fcontract_trim(input);
Hacl_EC_Format_fcontract_store(output, input);
}
static void Hacl_EC_Format_scalar_of_point(uint8_t *scalar, uint64_t *point)
{
uint64_t *x = point;
uint64_t *z = point + (uint32_t)5U;
uint64_t buf[10U] = { 0U };
uint64_t *zmone = buf;
uint64_t *sc = buf + (uint32_t)5U;
Hacl_Bignum_crecip(zmone, z);
Hacl_Bignum_fmul(sc, x, zmone);
Hacl_EC_Format_fcontract(scalar, sc);
}
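/* One Montgomery-ladder step in projective (X:Z) coordinates: given P (p),
 * Q (pq) and the fixed difference Q - P (qmqp, X coordinate only), computes
 * 2P into pp and P + Q into ppq. The constant 121665 is (A - 2) / 4 for the
 * curve parameter A = 486662. */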
static void
Hacl_EC_AddAndDouble_fmonty(
uint64_t *pp,
uint64_t *ppq,
uint64_t *p,
uint64_t *pq,
uint64_t *qmqp
)
{
uint64_t *qx = qmqp;
uint64_t *x2 = pp;
uint64_t *z2 = pp + (uint32_t)5U;
uint64_t *x3 = ppq;
uint64_t *z3 = ppq + (uint32_t)5U;
uint64_t *x = p;
uint64_t *z = p + (uint32_t)5U;
uint64_t *xprime = pq;
uint64_t *zprime = pq + (uint32_t)5U;
uint64_t buf[40U] = { 0U };
uint64_t *origx = buf;
uint64_t *origxprime0 = buf + (uint32_t)5U;
uint64_t *xxprime0 = buf + (uint32_t)25U;
uint64_t *zzprime0 = buf + (uint32_t)30U;
uint64_t *origxprime;
uint64_t *xx0;
uint64_t *zz0;
uint64_t *xxprime;
uint64_t *zzprime;
uint64_t *zzzprime;
uint64_t *zzz;
uint64_t *xx;
uint64_t *zz;
uint64_t scalar;
memcpy(origx, x, (uint32_t)5U * sizeof x[0U]);
Hacl_Bignum_fsum(x, z);
Hacl_Bignum_fdifference(z, origx);
memcpy(origxprime0, xprime, (uint32_t)5U * sizeof xprime[0U]);
Hacl_Bignum_fsum(xprime, zprime);
Hacl_Bignum_fdifference(zprime, origxprime0);
Hacl_Bignum_fmul(xxprime0, xprime, z);
Hacl_Bignum_fmul(zzprime0, x, zprime);
origxprime = buf + (uint32_t)5U;
xx0 = buf + (uint32_t)15U;
zz0 = buf + (uint32_t)20U;
xxprime = buf + (uint32_t)25U;
zzprime = buf + (uint32_t)30U;
zzzprime = buf + (uint32_t)35U;
memcpy(origxprime, xxprime, (uint32_t)5U * sizeof xxprime[0U]);
Hacl_Bignum_fsum(xxprime, zzprime);
Hacl_Bignum_fdifference(zzprime, origxprime);
Hacl_Bignum_Fsquare_fsquare_times(x3, xxprime, (uint32_t)1U);
Hacl_Bignum_Fsquare_fsquare_times(zzzprime, zzprime, (uint32_t)1U);
Hacl_Bignum_fmul(z3, zzzprime, qx);
Hacl_Bignum_Fsquare_fsquare_times(xx0, x, (uint32_t)1U);
Hacl_Bignum_Fsquare_fsquare_times(zz0, z, (uint32_t)1U);
zzz = buf + (uint32_t)10U;
xx = buf + (uint32_t)15U;
zz = buf + (uint32_t)20U;
Hacl_Bignum_fmul(x2, xx, zz);
Hacl_Bignum_fdifference(zz, xx);
scalar = (uint64_t)121665U;
Hacl_Bignum_fscalar(zzz, zz, scalar);
Hacl_Bignum_fsum(zzz, xx);
Hacl_Bignum_fmul(z2, zzz, zz);
}
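/* Consumes the most significant bit of byt: conditionally swap the
 * accumulators on the secret bit, run the combined double-and-add, then
 * swap back with the same bit, keeping the ladder branch-free. */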
static void
Hacl_EC_Ladder_SmallLoop_cmult_small_loop_step(
uint64_t *nq,
uint64_t *nqpq,
uint64_t *nq2,
uint64_t *nqpq2,
uint64_t *q,
uint8_t byt
)
{
uint64_t bit0 = (uint64_t)(byt >> (uint32_t)7U);
uint64_t bit;
Hacl_EC_Point_swap_conditional(nq, nqpq, bit0);
Hacl_EC_AddAndDouble_fmonty(nq2, nqpq2, nq, nqpq, q);
bit = (uint64_t)(byt >> (uint32_t)7U);
Hacl_EC_Point_swap_conditional(nq2, nqpq2, bit);
}
static void
Hacl_EC_Ladder_SmallLoop_cmult_small_loop_double_step(
uint64_t *nq,
uint64_t *nqpq,
uint64_t *nq2,
uint64_t *nqpq2,
uint64_t *q,
uint8_t byt
)
{
uint8_t byt1;
Hacl_EC_Ladder_SmallLoop_cmult_small_loop_step(nq, nqpq, nq2, nqpq2, q, byt);
byt1 = byt << (uint32_t)1U;
Hacl_EC_Ladder_SmallLoop_cmult_small_loop_step(nq2, nqpq2, nq, nqpq, q, byt1);
}
static void
Hacl_EC_Ladder_SmallLoop_cmult_small_loop(
uint64_t *nq,
uint64_t *nqpq,
uint64_t *nq2,
uint64_t *nqpq2,
uint64_t *q,
uint8_t byt,
uint32_t i
)
{
if (!(i == (uint32_t)0U))
{
uint32_t i_ = i - (uint32_t)1U;
uint8_t byt_;
Hacl_EC_Ladder_SmallLoop_cmult_small_loop_double_step(nq, nqpq, nq2, nqpq2, q, byt);
byt_ = byt << (uint32_t)2U;
Hacl_EC_Ladder_SmallLoop_cmult_small_loop(nq, nqpq, nq2, nqpq2, q, byt_, i_);
}
}
static void
Hacl_EC_Ladder_BigLoop_cmult_big_loop(
uint8_t *n1,
uint64_t *nq,
uint64_t *nqpq,
uint64_t *nq2,
uint64_t *nqpq2,
uint64_t *q,
uint32_t i
)
{
if (!(i == (uint32_t)0U))
{
uint32_t i1 = i - (uint32_t)1U;
uint8_t byte = n1[i1];
Hacl_EC_Ladder_SmallLoop_cmult_small_loop(nq, nqpq, nq2, nqpq2, q, byte, (uint32_t)4U);
Hacl_EC_Ladder_BigLoop_cmult_big_loop(n1, nq, nqpq, nq2, nqpq2, q, i1);
}
}
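/* Full Montgomery ladder: walks the 32 scalar bytes from the most
 * significant (n1[31]) down, 4 double-steps (8 bits) per byte, starting
 * from nq = (1:0), the (X:Z) representation of the point at infinity. */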
static void Hacl_EC_Ladder_cmult(uint64_t *result, uint8_t *n1, uint64_t *q)
{
uint64_t point_buf[40U] = { 0U };
uint64_t *nq = point_buf;
uint64_t *nqpq = point_buf + (uint32_t)10U;
uint64_t *nq2 = point_buf + (uint32_t)20U;
uint64_t *nqpq2 = point_buf + (uint32_t)30U;
Hacl_EC_Point_copy(nqpq, q);
nq[0U] = (uint64_t)1U;
Hacl_EC_Ladder_BigLoop_cmult_big_loop(n1, nq, nqpq, nq2, nqpq2, q, (uint32_t)32U);
Hacl_EC_Point_copy(result, nq);
}
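/* X25519 entry point: expand the base point, set Z = 1, clamp the secret
 * scalar (clear the low 3 bits of byte 0, clear bit 255, set bit 254, as in
 * RFC 7748), run the ladder, then contract x = X/Z into mypublic. */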
void Hacl_Curve25519_crypto_scalarmult(uint8_t *mypublic, uint8_t *secret, uint8_t *basepoint)
{
uint64_t buf0[10U] = { 0U };
uint64_t *x0 = buf0;
uint64_t *z = buf0 + (uint32_t)5U;
uint64_t *q;
Hacl_EC_Format_fexpand(x0, basepoint);
z[0U] = (uint64_t)1U;
q = buf0;
{
uint8_t e[32U] = { 0U };
uint8_t e0;
uint8_t e31;
uint8_t e01;
uint8_t e311;
uint8_t e312;
uint8_t *scalar;
memcpy(e, secret, (uint32_t)32U * sizeof secret[0U]);
e0 = e[0U];
e31 = e[31U];
e01 = e0 & (uint8_t)248U;
e311 = e31 & (uint8_t)127U;
e312 = e311 | (uint8_t)64U;
e[0U] = e01;
e[31U] = e312;
scalar = e;
{
uint64_t buf[15U] = { 0U };
uint64_t *nq = buf;
uint64_t *x = nq;
x[0U] = (uint64_t)1U;
Hacl_EC_Ladder_cmult(nq, scalar, q);
Hacl_EC_Format_scalar_of_point(mypublic, nq);
}
}
}


@ -0,0 +1,413 @@
/* Copyright (c) INRIA and Microsoft Corporation. All rights reserved.
Licensed under the Apache 2.0 License. */
/* This file was generated by KreMLin <https://github.com/FStarLang/kremlin>
* KreMLin invocation: ../krml -fc89 -fparentheses -fno-shadow -header /mnt/e/everest/verify/hdrB9w -minimal -fparentheses -fcurly-braces -fno-shadow -header copyright-header.txt -minimal -tmpdir extracted -warn-error +9+11 -skip-compilation -extract-uints -add-include <inttypes.h> -add-include "kremlib.h" -add-include "kremlin/internal/compat.h" extracted/prims.krml extracted/FStar_Pervasives_Native.krml extracted/FStar_Pervasives.krml extracted/FStar_Mul.krml extracted/FStar_Squash.krml extracted/FStar_Classical.krml extracted/FStar_StrongExcludedMiddle.krml extracted/FStar_FunctionalExtensionality.krml extracted/FStar_List_Tot_Base.krml extracted/FStar_List_Tot_Properties.krml extracted/FStar_List_Tot.krml extracted/FStar_Seq_Base.krml extracted/FStar_Seq_Properties.krml extracted/FStar_Seq.krml extracted/FStar_Math_Lib.krml extracted/FStar_Math_Lemmas.krml extracted/FStar_BitVector.krml extracted/FStar_UInt.krml extracted/FStar_UInt32.krml extracted/FStar_Int.krml extracted/FStar_Int16.krml extracted/FStar_Preorder.krml extracted/FStar_Ghost.krml extracted/FStar_ErasedLogic.krml extracted/FStar_UInt64.krml extracted/FStar_Set.krml extracted/FStar_PropositionalExtensionality.krml extracted/FStar_PredicateExtensionality.krml extracted/FStar_TSet.krml extracted/FStar_Monotonic_Heap.krml extracted/FStar_Heap.krml extracted/FStar_Map.krml extracted/FStar_Monotonic_HyperHeap.krml extracted/FStar_Monotonic_HyperStack.krml extracted/FStar_HyperStack.krml extracted/FStar_Monotonic_Witnessed.krml extracted/FStar_HyperStack_ST.krml extracted/FStar_HyperStack_All.krml extracted/FStar_Date.krml extracted/FStar_Universe.krml extracted/FStar_GSet.krml extracted/FStar_ModifiesGen.krml extracted/LowStar_Monotonic_Buffer.krml extracted/LowStar_Buffer.krml extracted/Spec_Loops.krml extracted/LowStar_BufferOps.krml extracted/C_Loops.krml extracted/FStar_UInt8.krml extracted/FStar_Kremlin_Endianness.krml extracted/FStar_UInt63.krml extracted/FStar_Exn.krml extracted/FStar_ST.krml extracted/FStar_All.krml extracted/FStar_Dyn.krml extracted/FStar_Int63.krml extracted/FStar_Int64.krml extracted/FStar_Int32.krml extracted/FStar_Int8.krml extracted/FStar_UInt16.krml extracted/FStar_Int_Cast.krml extracted/FStar_UInt128.krml extracted/C_Endianness.krml extracted/FStar_List.krml extracted/FStar_Float.krml extracted/FStar_IO.krml extracted/C.krml extracted/FStar_Char.krml extracted/FStar_String.krml extracted/LowStar_Modifies.krml extracted/C_String.krml extracted/FStar_Bytes.krml extracted/FStar_HyperStack_IO.krml extracted/C_Failure.krml extracted/TestLib.krml extracted/FStar_Int_Cast_Full.krml
* F* version: 059db0c8
* KreMLin version: 916c37ac
*/
#include "FStar_UInt128.h"
#include "kremlin/c_endianness.h"
#include "FStar_UInt64_FStar_UInt32_FStar_UInt16_FStar_UInt8.h"
uint64_t FStar_UInt128___proj__Mkuint128__item__low(FStar_UInt128_uint128 projectee)
{
return projectee.low;
}
uint64_t FStar_UInt128___proj__Mkuint128__item__high(FStar_UInt128_uint128 projectee)
{
return projectee.high;
}
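/* Branch-free comparison: returns 1 when a < b and 0 otherwise; the top bit
 * of a ^ ((a ^ b) | ((a - b) ^ b)) is the borrow of a - b. The additions and
 * subtractions below use it to propagate carries without secret-dependent
 * branches. */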
static uint64_t FStar_UInt128_constant_time_carry(uint64_t a, uint64_t b)
{
return (a ^ ((a ^ b) | ((a - b) ^ b))) >> (uint32_t)63U;
}
static uint64_t FStar_UInt128_carry(uint64_t a, uint64_t b)
{
return FStar_UInt128_constant_time_carry(a, b);
}
FStar_UInt128_uint128 FStar_UInt128_add(FStar_UInt128_uint128 a, FStar_UInt128_uint128 b)
{
FStar_UInt128_uint128
flat = { a.low + b.low, a.high + b.high + FStar_UInt128_carry(a.low + b.low, b.low) };
return flat;
}
FStar_UInt128_uint128
FStar_UInt128_add_underspec(FStar_UInt128_uint128 a, FStar_UInt128_uint128 b)
{
FStar_UInt128_uint128
flat = { a.low + b.low, a.high + b.high + FStar_UInt128_carry(a.low + b.low, b.low) };
return flat;
}
FStar_UInt128_uint128 FStar_UInt128_add_mod(FStar_UInt128_uint128 a, FStar_UInt128_uint128 b)
{
FStar_UInt128_uint128
flat = { a.low + b.low, a.high + b.high + FStar_UInt128_carry(a.low + b.low, b.low) };
return flat;
}
FStar_UInt128_uint128 FStar_UInt128_sub(FStar_UInt128_uint128 a, FStar_UInt128_uint128 b)
{
FStar_UInt128_uint128
flat = { a.low - b.low, a.high - b.high - FStar_UInt128_carry(a.low, a.low - b.low) };
return flat;
}
FStar_UInt128_uint128
FStar_UInt128_sub_underspec(FStar_UInt128_uint128 a, FStar_UInt128_uint128 b)
{
FStar_UInt128_uint128
flat = { a.low - b.low, a.high - b.high - FStar_UInt128_carry(a.low, a.low - b.low) };
return flat;
}
static FStar_UInt128_uint128
FStar_UInt128_sub_mod_impl(FStar_UInt128_uint128 a, FStar_UInt128_uint128 b)
{
FStar_UInt128_uint128
flat = { a.low - b.low, a.high - b.high - FStar_UInt128_carry(a.low, a.low - b.low) };
return flat;
}
FStar_UInt128_uint128 FStar_UInt128_sub_mod(FStar_UInt128_uint128 a, FStar_UInt128_uint128 b)
{
return FStar_UInt128_sub_mod_impl(a, b);
}
FStar_UInt128_uint128 FStar_UInt128_logand(FStar_UInt128_uint128 a, FStar_UInt128_uint128 b)
{
FStar_UInt128_uint128 flat = { a.low & b.low, a.high & b.high };
return flat;
}
FStar_UInt128_uint128 FStar_UInt128_logxor(FStar_UInt128_uint128 a, FStar_UInt128_uint128 b)
{
FStar_UInt128_uint128 flat = { a.low ^ b.low, a.high ^ b.high };
return flat;
}
FStar_UInt128_uint128 FStar_UInt128_logor(FStar_UInt128_uint128 a, FStar_UInt128_uint128 b)
{
FStar_UInt128_uint128 flat = { a.low | b.low, a.high | b.high };
return flat;
}
FStar_UInt128_uint128 FStar_UInt128_lognot(FStar_UInt128_uint128 a)
{
FStar_UInt128_uint128 flat = { ~a.low, ~a.high };
return flat;
}
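/* 128-bit shifts are assembled from 64-bit ones: for 0 < s < 64 the bits
 * spilled across the word boundary are added into the other word; for
 * s >= 64 the shift reduces to moving one word and shifting by s - 64. */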
static uint32_t FStar_UInt128_u32_64 = (uint32_t)64U;
static uint64_t FStar_UInt128_add_u64_shift_left(uint64_t hi, uint64_t lo, uint32_t s)
{
return (hi << s) + (lo >> (FStar_UInt128_u32_64 - s));
}
static uint64_t FStar_UInt128_add_u64_shift_left_respec(uint64_t hi, uint64_t lo, uint32_t s)
{
return FStar_UInt128_add_u64_shift_left(hi, lo, s);
}
static FStar_UInt128_uint128
FStar_UInt128_shift_left_small(FStar_UInt128_uint128 a, uint32_t s)
{
if (s == (uint32_t)0U)
{
return a;
}
else
{
FStar_UInt128_uint128
flat = { a.low << s, FStar_UInt128_add_u64_shift_left_respec(a.high, a.low, s) };
return flat;
}
}
static FStar_UInt128_uint128
FStar_UInt128_shift_left_large(FStar_UInt128_uint128 a, uint32_t s)
{
FStar_UInt128_uint128 flat = { (uint64_t)0U, a.low << (s - FStar_UInt128_u32_64) };
return flat;
}
FStar_UInt128_uint128 FStar_UInt128_shift_left(FStar_UInt128_uint128 a, uint32_t s)
{
if (s < FStar_UInt128_u32_64)
{
return FStar_UInt128_shift_left_small(a, s);
}
else
{
return FStar_UInt128_shift_left_large(a, s);
}
}
static uint64_t FStar_UInt128_add_u64_shift_right(uint64_t hi, uint64_t lo, uint32_t s)
{
return (lo >> s) + (hi << (FStar_UInt128_u32_64 - s));
}
static uint64_t FStar_UInt128_add_u64_shift_right_respec(uint64_t hi, uint64_t lo, uint32_t s)
{
return FStar_UInt128_add_u64_shift_right(hi, lo, s);
}
static FStar_UInt128_uint128
FStar_UInt128_shift_right_small(FStar_UInt128_uint128 a, uint32_t s)
{
if (s == (uint32_t)0U)
{
return a;
}
else
{
FStar_UInt128_uint128
flat = { FStar_UInt128_add_u64_shift_right_respec(a.high, a.low, s), a.high >> s };
return flat;
}
}
static FStar_UInt128_uint128
FStar_UInt128_shift_right_large(FStar_UInt128_uint128 a, uint32_t s)
{
FStar_UInt128_uint128 flat = { a.high >> (s - FStar_UInt128_u32_64), (uint64_t)0U };
return flat;
}
FStar_UInt128_uint128 FStar_UInt128_shift_right(FStar_UInt128_uint128 a, uint32_t s)
{
if (s < FStar_UInt128_u32_64)
{
return FStar_UInt128_shift_right_small(a, s);
}
else
{
return FStar_UInt128_shift_right_large(a, s);
}
}
bool FStar_UInt128_eq(FStar_UInt128_uint128 a, FStar_UInt128_uint128 b)
{
return a.low == b.low && a.high == b.high;
}
bool FStar_UInt128_gt(FStar_UInt128_uint128 a, FStar_UInt128_uint128 b)
{
return a.high > b.high || (a.high == b.high && a.low > b.low);
}
bool FStar_UInt128_lt(FStar_UInt128_uint128 a, FStar_UInt128_uint128 b)
{
return a.high < b.high || (a.high == b.high && a.low < b.low);
}
bool FStar_UInt128_gte(FStar_UInt128_uint128 a, FStar_UInt128_uint128 b)
{
return a.high > b.high || (a.high == b.high && a.low >= b.low);
}
bool FStar_UInt128_lte(FStar_UInt128_uint128 a, FStar_UInt128_uint128 b)
{
return a.high < b.high || (a.high == b.high && a.low <= b.low);
}
FStar_UInt128_uint128 FStar_UInt128_eq_mask(FStar_UInt128_uint128 a, FStar_UInt128_uint128 b)
{
FStar_UInt128_uint128
flat =
{
FStar_UInt64_eq_mask(a.low,
b.low)
& FStar_UInt64_eq_mask(a.high, b.high),
FStar_UInt64_eq_mask(a.low,
b.low)
& FStar_UInt64_eq_mask(a.high, b.high)
};
return flat;
}
FStar_UInt128_uint128 FStar_UInt128_gte_mask(FStar_UInt128_uint128 a, FStar_UInt128_uint128 b)
{
FStar_UInt128_uint128
flat =
{
(FStar_UInt64_gte_mask(a.high, b.high) & ~FStar_UInt64_eq_mask(a.high, b.high))
| (FStar_UInt64_eq_mask(a.high, b.high) & FStar_UInt64_gte_mask(a.low, b.low)),
(FStar_UInt64_gte_mask(a.high, b.high) & ~FStar_UInt64_eq_mask(a.high, b.high))
| (FStar_UInt64_eq_mask(a.high, b.high) & FStar_UInt64_gte_mask(a.low, b.low))
};
return flat;
}
FStar_UInt128_uint128 FStar_UInt128_uint64_to_uint128(uint64_t a)
{
FStar_UInt128_uint128 flat = { a, (uint64_t)0U };
return flat;
}
uint64_t FStar_UInt128_uint128_to_uint64(FStar_UInt128_uint128 a)
{
return a.low;
}
FStar_UInt128_uint128
(*FStar_UInt128_op_Plus_Hat)(FStar_UInt128_uint128 x0, FStar_UInt128_uint128 x1) =
FStar_UInt128_add;
FStar_UInt128_uint128
(*FStar_UInt128_op_Plus_Question_Hat)(FStar_UInt128_uint128 x0, FStar_UInt128_uint128 x1) =
FStar_UInt128_add_underspec;
FStar_UInt128_uint128
(*FStar_UInt128_op_Plus_Percent_Hat)(FStar_UInt128_uint128 x0, FStar_UInt128_uint128 x1) =
FStar_UInt128_add_mod;
FStar_UInt128_uint128
(*FStar_UInt128_op_Subtraction_Hat)(FStar_UInt128_uint128 x0, FStar_UInt128_uint128 x1) =
FStar_UInt128_sub;
FStar_UInt128_uint128
(*FStar_UInt128_op_Subtraction_Question_Hat)(
FStar_UInt128_uint128 x0,
FStar_UInt128_uint128 x1
) = FStar_UInt128_sub_underspec;
FStar_UInt128_uint128
(*FStar_UInt128_op_Subtraction_Percent_Hat)(FStar_UInt128_uint128 x0, FStar_UInt128_uint128 x1) =
FStar_UInt128_sub_mod;
FStar_UInt128_uint128
(*FStar_UInt128_op_Amp_Hat)(FStar_UInt128_uint128 x0, FStar_UInt128_uint128 x1) =
FStar_UInt128_logand;
FStar_UInt128_uint128
(*FStar_UInt128_op_Hat_Hat)(FStar_UInt128_uint128 x0, FStar_UInt128_uint128 x1) =
FStar_UInt128_logxor;
FStar_UInt128_uint128
(*FStar_UInt128_op_Bar_Hat)(FStar_UInt128_uint128 x0, FStar_UInt128_uint128 x1) =
FStar_UInt128_logor;
FStar_UInt128_uint128
(*FStar_UInt128_op_Less_Less_Hat)(FStar_UInt128_uint128 x0, uint32_t x1) =
FStar_UInt128_shift_left;
FStar_UInt128_uint128
(*FStar_UInt128_op_Greater_Greater_Hat)(FStar_UInt128_uint128 x0, uint32_t x1) =
FStar_UInt128_shift_right;
bool
(*FStar_UInt128_op_Equals_Hat)(FStar_UInt128_uint128 x0, FStar_UInt128_uint128 x1) =
FStar_UInt128_eq;
bool
(*FStar_UInt128_op_Greater_Hat)(FStar_UInt128_uint128 x0, FStar_UInt128_uint128 x1) =
FStar_UInt128_gt;
bool
(*FStar_UInt128_op_Less_Hat)(FStar_UInt128_uint128 x0, FStar_UInt128_uint128 x1) =
FStar_UInt128_lt;
bool
(*FStar_UInt128_op_Greater_Equals_Hat)(FStar_UInt128_uint128 x0, FStar_UInt128_uint128 x1) =
FStar_UInt128_gte;
bool
(*FStar_UInt128_op_Less_Equals_Hat)(FStar_UInt128_uint128 x0, FStar_UInt128_uint128 x1) =
FStar_UInt128_lte;
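/* Portable 64x64 -> 128 multiplication: each operand is split into 32-bit
 * halves, the four partial products are recombined, and the cross-term
 * carries are propagated via u32_combine; mul32 is the 64x32 special case. */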
static uint64_t FStar_UInt128_u64_mod_32(uint64_t a)
{
return a & (uint64_t)0xffffffffU;
}
static uint32_t FStar_UInt128_u32_32 = (uint32_t)32U;
static uint64_t FStar_UInt128_u32_combine(uint64_t hi, uint64_t lo)
{
return lo + (hi << FStar_UInt128_u32_32);
}
FStar_UInt128_uint128 FStar_UInt128_mul32(uint64_t x, uint32_t y)
{
FStar_UInt128_uint128
flat =
{
FStar_UInt128_u32_combine((x >> FStar_UInt128_u32_32)
* (uint64_t)y
+ (FStar_UInt128_u64_mod_32(x) * (uint64_t)y >> FStar_UInt128_u32_32),
FStar_UInt128_u64_mod_32(FStar_UInt128_u64_mod_32(x) * (uint64_t)y)),
((x >> FStar_UInt128_u32_32)
* (uint64_t)y
+ (FStar_UInt128_u64_mod_32(x) * (uint64_t)y >> FStar_UInt128_u32_32))
>> FStar_UInt128_u32_32
};
return flat;
}
typedef struct K___uint64_t_uint64_t_uint64_t_uint64_t_s
{
uint64_t fst;
uint64_t snd;
uint64_t thd;
uint64_t f3;
}
K___uint64_t_uint64_t_uint64_t_uint64_t;
static K___uint64_t_uint64_t_uint64_t_uint64_t
FStar_UInt128_mul_wide_impl_t_(uint64_t x, uint64_t y)
{
K___uint64_t_uint64_t_uint64_t_uint64_t
flat =
{
FStar_UInt128_u64_mod_32(x),
FStar_UInt128_u64_mod_32(FStar_UInt128_u64_mod_32(x) * FStar_UInt128_u64_mod_32(y)),
x
>> FStar_UInt128_u32_32,
(x >> FStar_UInt128_u32_32)
* FStar_UInt128_u64_mod_32(y)
+ (FStar_UInt128_u64_mod_32(x) * FStar_UInt128_u64_mod_32(y) >> FStar_UInt128_u32_32)
};
return flat;
}
static uint64_t FStar_UInt128_u32_combine_(uint64_t hi, uint64_t lo)
{
return lo + (hi << FStar_UInt128_u32_32);
}
static FStar_UInt128_uint128 FStar_UInt128_mul_wide_impl(uint64_t x, uint64_t y)
{
K___uint64_t_uint64_t_uint64_t_uint64_t scrut = FStar_UInt128_mul_wide_impl_t_(x, y);
uint64_t u1 = scrut.fst;
uint64_t w3 = scrut.snd;
uint64_t x_ = scrut.thd;
uint64_t t_ = scrut.f3;
FStar_UInt128_uint128
flat =
{
FStar_UInt128_u32_combine_(u1 * (y >> FStar_UInt128_u32_32) + FStar_UInt128_u64_mod_32(t_),
w3),
x_
* (y >> FStar_UInt128_u32_32)
+ (t_ >> FStar_UInt128_u32_32)
+ ((u1 * (y >> FStar_UInt128_u32_32) + FStar_UInt128_u64_mod_32(t_)) >> FStar_UInt128_u32_32)
};
return flat;
}
FStar_UInt128_uint128 FStar_UInt128_mul_wide(uint64_t x, uint64_t y)
{
return FStar_UInt128_mul_wide_impl(x, y);
}


@ -0,0 +1,100 @@
/* Copyright (c) INRIA and Microsoft Corporation. All rights reserved.
Licensed under the Apache 2.0 License. */
/* This file was generated by KreMLin <https://github.com/FStarLang/kremlin>
* KreMLin invocation: ../krml -fc89 -fparentheses -fno-shadow -header /mnt/e/everest/verify/hdrB9w -minimal -fparentheses -fcurly-braces -fno-shadow -header copyright-header.txt -minimal -tmpdir dist/minimal -skip-compilation -extract-uints -add-include <inttypes.h> -add-include <stdbool.h> -add-include "kremlin/internal/compat.h" -add-include "kremlin/internal/types.h" -bundle FStar.UInt64+FStar.UInt32+FStar.UInt16+FStar.UInt8=* extracted/prims.krml extracted/FStar_Pervasives_Native.krml extracted/FStar_Pervasives.krml extracted/FStar_Mul.krml extracted/FStar_Squash.krml extracted/FStar_Classical.krml extracted/FStar_StrongExcludedMiddle.krml extracted/FStar_FunctionalExtensionality.krml extracted/FStar_List_Tot_Base.krml extracted/FStar_List_Tot_Properties.krml extracted/FStar_List_Tot.krml extracted/FStar_Seq_Base.krml extracted/FStar_Seq_Properties.krml extracted/FStar_Seq.krml extracted/FStar_Math_Lib.krml extracted/FStar_Math_Lemmas.krml extracted/FStar_BitVector.krml extracted/FStar_UInt.krml extracted/FStar_UInt32.krml extracted/FStar_Int.krml extracted/FStar_Int16.krml extracted/FStar_Preorder.krml extracted/FStar_Ghost.krml extracted/FStar_ErasedLogic.krml extracted/FStar_UInt64.krml extracted/FStar_Set.krml extracted/FStar_PropositionalExtensionality.krml extracted/FStar_PredicateExtensionality.krml extracted/FStar_TSet.krml extracted/FStar_Monotonic_Heap.krml extracted/FStar_Heap.krml extracted/FStar_Map.krml extracted/FStar_Monotonic_HyperHeap.krml extracted/FStar_Monotonic_HyperStack.krml extracted/FStar_HyperStack.krml extracted/FStar_Monotonic_Witnessed.krml extracted/FStar_HyperStack_ST.krml extracted/FStar_HyperStack_All.krml extracted/FStar_Date.krml extracted/FStar_Universe.krml extracted/FStar_GSet.krml extracted/FStar_ModifiesGen.krml extracted/LowStar_Monotonic_Buffer.krml extracted/LowStar_Buffer.krml extracted/Spec_Loops.krml extracted/LowStar_BufferOps.krml extracted/C_Loops.krml extracted/FStar_UInt8.krml extracted/FStar_Kremlin_Endianness.krml extracted/FStar_UInt63.krml extracted/FStar_Exn.krml extracted/FStar_ST.krml extracted/FStar_All.krml extracted/FStar_Dyn.krml extracted/FStar_Int63.krml extracted/FStar_Int64.krml extracted/FStar_Int32.krml extracted/FStar_Int8.krml extracted/FStar_UInt16.krml extracted/FStar_Int_Cast.krml extracted/FStar_UInt128.krml extracted/C_Endianness.krml extracted/FStar_List.krml extracted/FStar_Float.krml extracted/FStar_IO.krml extracted/C.krml extracted/FStar_Char.krml extracted/FStar_String.krml extracted/LowStar_Modifies.krml extracted/C_String.krml extracted/FStar_Bytes.krml extracted/FStar_HyperStack_IO.krml extracted/C_Failure.krml extracted/TestLib.krml extracted/FStar_Int_Cast_Full.krml
* F* version: 059db0c8
* KreMLin version: 916c37ac
*/
#include "FStar_UInt64_FStar_UInt32_FStar_UInt16_FStar_UInt8.h"
uint64_t FStar_UInt64_eq_mask(uint64_t a, uint64_t b)
{
uint64_t x = a ^ b;
uint64_t minus_x = ~x + (uint64_t)1U;
uint64_t x_or_minus_x = x | minus_x;
uint64_t xnx = x_or_minus_x >> (uint32_t)63U;
return xnx - (uint64_t)1U;
}
uint64_t FStar_UInt64_gte_mask(uint64_t a, uint64_t b)
{
uint64_t x = a;
uint64_t y = b;
uint64_t x_xor_y = x ^ y;
uint64_t x_sub_y = x - y;
uint64_t x_sub_y_xor_y = x_sub_y ^ y;
uint64_t q = x_xor_y | x_sub_y_xor_y;
uint64_t x_xor_q = x ^ q;
uint64_t x_xor_q_ = x_xor_q >> (uint32_t)63U;
return x_xor_q_ - (uint64_t)1U;
}
uint32_t FStar_UInt32_eq_mask(uint32_t a, uint32_t b)
{
uint32_t x = a ^ b;
uint32_t minus_x = ~x + (uint32_t)1U;
uint32_t x_or_minus_x = x | minus_x;
uint32_t xnx = x_or_minus_x >> (uint32_t)31U;
return xnx - (uint32_t)1U;
}
uint32_t FStar_UInt32_gte_mask(uint32_t a, uint32_t b)
{
uint32_t x = a;
uint32_t y = b;
uint32_t x_xor_y = x ^ y;
uint32_t x_sub_y = x - y;
uint32_t x_sub_y_xor_y = x_sub_y ^ y;
uint32_t q = x_xor_y | x_sub_y_xor_y;
uint32_t x_xor_q = x ^ q;
uint32_t x_xor_q_ = x_xor_q >> (uint32_t)31U;
return x_xor_q_ - (uint32_t)1U;
}
uint16_t FStar_UInt16_eq_mask(uint16_t a, uint16_t b)
{
uint16_t x = a ^ b;
uint16_t minus_x = ~x + (uint16_t)1U;
uint16_t x_or_minus_x = x | minus_x;
uint16_t xnx = x_or_minus_x >> (uint32_t)15U;
return xnx - (uint16_t)1U;
}
uint16_t FStar_UInt16_gte_mask(uint16_t a, uint16_t b)
{
uint16_t x = a;
uint16_t y = b;
uint16_t x_xor_y = x ^ y;
uint16_t x_sub_y = x - y;
uint16_t x_sub_y_xor_y = x_sub_y ^ y;
uint16_t q = x_xor_y | x_sub_y_xor_y;
uint16_t x_xor_q = x ^ q;
uint16_t x_xor_q_ = x_xor_q >> (uint32_t)15U;
return x_xor_q_ - (uint16_t)1U;
}
uint8_t FStar_UInt8_eq_mask(uint8_t a, uint8_t b)
{
uint8_t x = a ^ b;
uint8_t minus_x = ~x + (uint8_t)1U;
uint8_t x_or_minus_x = x | minus_x;
uint8_t xnx = x_or_minus_x >> (uint32_t)7U;
return xnx - (uint8_t)1U;
}
uint8_t FStar_UInt8_gte_mask(uint8_t a, uint8_t b)
{
uint8_t x = a;
uint8_t y = b;
uint8_t x_xor_y = x ^ y;
uint8_t x_sub_y = x - y;
uint8_t x_sub_y_xor_y = x_sub_y ^ y;
uint8_t q = x_xor_y | x_sub_y_xor_y;
uint8_t x_xor_q = x ^ q;
uint8_t x_xor_q_ = x_xor_q >> (uint32_t)7U;
return x_xor_q_ - (uint8_t)1U;
}


@ -0,0 +1,216 @@
/* Copyright (c) INRIA and Microsoft Corporation. All rights reserved.
Licensed under the Apache 2.0 License. */
/******************************************************************************/
/* Machine integers (128-bit arithmetic) */
/******************************************************************************/
/* This header makes KreMLin-generated C code work with:
* - the default setting where we assume the target compiler defines __int128
* - the setting where we use FStar.UInt128's implementation instead; in that
* case, generated C files must be compiled with -DKRML_VERIFIED_UINT128
* - a refinement of the case above, wherein all structures are passed by
 * reference, a.k.a. "-fnostruct-passing", meaning that the KreMLin-generated
 * code must be compiled with -DKRML_NOSTRUCT_PASSING
* Note: no MSVC support in this file.
*/
#include "FStar_UInt128.h"
#include "kremlin/c_endianness.h"
#include "FStar_UInt64_FStar_UInt32_FStar_UInt16_FStar_UInt8.h"
#if !defined(KRML_VERIFIED_UINT128) && !defined(_MSC_VER)
/* GCC/Clang, using native unsigned __int128 support */
uint128_t load128_le(uint8_t *b) {
uint128_t l = (uint128_t)load64_le(b);
uint128_t h = (uint128_t)load64_le(b + 8);
return (h << 64 | l);
}
void store128_le(uint8_t *b, uint128_t n) {
store64_le(b, (uint64_t)n);
store64_le(b + 8, (uint64_t)(n >> 64));
}
uint128_t load128_be(uint8_t *b) {
uint128_t h = (uint128_t)load64_be(b);
uint128_t l = (uint128_t)load64_be(b + 8);
return (h << 64 | l);
}
void store128_be(uint8_t *b, uint128_t n) {
store64_be(b, (uint64_t)(n >> 64));
store64_be(b + 8, (uint64_t)n);
}
uint128_t FStar_UInt128_add(uint128_t x, uint128_t y) {
return x + y;
}
uint128_t FStar_UInt128_mul(uint128_t x, uint128_t y) {
return x * y;
}
uint128_t FStar_UInt128_add_mod(uint128_t x, uint128_t y) {
return x + y;
}
uint128_t FStar_UInt128_sub(uint128_t x, uint128_t y) {
return x - y;
}
uint128_t FStar_UInt128_sub_mod(uint128_t x, uint128_t y) {
return x - y;
}
uint128_t FStar_UInt128_logand(uint128_t x, uint128_t y) {
return x & y;
}
uint128_t FStar_UInt128_logor(uint128_t x, uint128_t y) {
return x | y;
}
uint128_t FStar_UInt128_logxor(uint128_t x, uint128_t y) {
return x ^ y;
}
uint128_t FStar_UInt128_lognot(uint128_t x) {
return ~x;
}
uint128_t FStar_UInt128_shift_left(uint128_t x, uint32_t y) {
return x << y;
}
uint128_t FStar_UInt128_shift_right(uint128_t x, uint32_t y) {
return x >> y;
}
uint128_t FStar_UInt128_uint64_to_uint128(uint64_t x) {
return (uint128_t)x;
}
uint64_t FStar_UInt128_uint128_to_uint64(uint128_t x) {
return (uint64_t)x;
}
uint128_t FStar_UInt128_mul_wide(uint64_t x, uint64_t y) {
return ((uint128_t) x) * y;
}
uint128_t FStar_UInt128_eq_mask(uint128_t x, uint128_t y) {
uint64_t mask =
FStar_UInt64_eq_mask((uint64_t)(x >> 64), (uint64_t)(y >> 64)) &
FStar_UInt64_eq_mask(x, y);
return ((uint128_t)mask) << 64 | mask;
}
uint128_t FStar_UInt128_gte_mask(uint128_t x, uint128_t y) {
uint64_t mask =
(FStar_UInt64_gte_mask(x >> 64, y >> 64) &
~(FStar_UInt64_eq_mask(x >> 64, y >> 64))) |
(FStar_UInt64_eq_mask(x >> 64, y >> 64) & FStar_UInt64_gte_mask(x, y));
return ((uint128_t)mask) << 64 | mask;
}
uint128_t FStar_Int_Cast_Full_uint64_to_uint128(uint64_t x) {
return x;
}
uint64_t FStar_Int_Cast_Full_uint128_to_uint64(uint128_t x) {
return x;
}
#elif !defined(_MSC_VER) && defined(KRML_VERIFIED_UINT128)
/* Verified uint128 implementation. */
/* Access 64-bit fields within the int128. */
#define HIGH64_OF(x) ((x)->high)
#define LOW64_OF(x) ((x)->low)
typedef FStar_UInt128_uint128 FStar_UInt128_t_, uint128_t;
/* A series of definitions written using pointers. */
void load128_le_(uint8_t *b, uint128_t *r) {
LOW64_OF(r) = load64_le(b);
HIGH64_OF(r) = load64_le(b + 8);
}
void store128_le_(uint8_t *b, uint128_t *n) {
store64_le(b, LOW64_OF(n));
store64_le(b + 8, HIGH64_OF(n));
}
void load128_be_(uint8_t *b, uint128_t *r) {
HIGH64_OF(r) = load64_be(b);
LOW64_OF(r) = load64_be(b + 8);
}
void store128_be_(uint8_t *b, uint128_t *n) {
store64_be(b, HIGH64_OF(n));
store64_be(b + 8, LOW64_OF(n));
}
void
FStar_Int_Cast_Full_uint64_to_uint128_(uint64_t x, uint128_t *dst) {
/* C89 */
LOW64_OF(dst) = x;
HIGH64_OF(dst) = 0;
}
uint64_t FStar_Int_Cast_Full_uint128_to_uint64_(uint128_t *x) {
return LOW64_OF(x);
}
# ifndef KRML_NOSTRUCT_PASSING
uint128_t load128_le(uint8_t *b) {
uint128_t r;
load128_le_(b, &r);
return r;
}
void store128_le(uint8_t *b, uint128_t n) {
store128_le_(b, &n);
}
uint128_t load128_be(uint8_t *b) {
uint128_t r;
load128_be_(b, &r);
return r;
}
void store128_be(uint8_t *b, uint128_t n) {
store128_be_(b, &n);
}
uint128_t FStar_Int_Cast_Full_uint64_to_uint128(uint64_t x) {
uint128_t dst;
FStar_Int_Cast_Full_uint64_to_uint128_(x, &dst);
return dst;
}
uint64_t FStar_Int_Cast_Full_uint128_to_uint64(uint128_t x) {
return FStar_Int_Cast_Full_uint128_to_uint64_(&x);
}
# else /* defined(KRML_NOSTRUCT_PASSING) */
# define print128 print128_
# define load128_le load128_le_
# define store128_le store128_le_
# define load128_be load128_be_
# define store128_be store128_be_
# define FStar_Int_Cast_Full_uint128_to_uint64 \
FStar_Int_Cast_Full_uint128_to_uint64_
# define FStar_Int_Cast_Full_uint64_to_uint128 \
FStar_Int_Cast_Full_uint64_to_uint128_
# endif /* KRML_NOSTRUCT_PASSING */
#endif