Compare commits


203 Commits

Author SHA1 Message Date
ychenfo 263bc82434 nac3artiq: remove debug print 2022-03-21 04:23:40 +08:00
Sebastien Bourdeauducq 3f890f183c nac3standalone/demo: handle imports consistently 2022-03-19 09:14:27 +08:00
pca006132 234823c51a nac3standalone: added typevar test 2022-03-18 16:52:52 +08:00
pca006132 b97c016629 nac3core: fixed test breakage 2022-03-18 16:52:28 +08:00
Sebastien Bourdeauducq 14a5c7981e Revert "Revert "update dependencies""
This reverts commit 93af337ed3.
2022-03-18 08:06:13 +08:00
pca006132 35ac5cb6f6 nac3core: fixed typevar bug 2022-03-18 01:07:44 +08:00
pca006132 93af337ed3 Revert "update dependencies"
This reverts commit 9ccdc0180d.
2022-03-17 21:53:58 +08:00
Sebastien Bourdeauducq 0ca2797428 fix compilation warning 2022-03-17 21:31:45 +08:00
Sebastien Bourdeauducq 9ccdc0180d update dependencies 2022-03-17 21:18:07 +08:00
Sebastien Bourdeauducq c5993c2a58 composer: improve class field typevar error message 2022-03-17 21:04:42 +08:00
pca006132 fb8553311c nac3artiq: remove accidentally added print 2022-03-17 15:13:00 +08:00
pca006132 04e7a7eb4b nac3artiq: support more exceptions 2022-03-17 15:03:22 +08:00
pca006132 642e3b2bad nac3core: moved all builtin errors to nac3artiq code
This removes the need for hard-coding those definition IDs.
2022-03-17 00:04:49 +08:00
pca006132 e126fef012 nac3artiq: support more builtin errors 2022-03-16 23:42:08 +08:00
Sebastien Bourdeauducq 8fd868a673 update dependencies 2022-03-10 17:28:56 +08:00
pca006132 94aac16cc5 nac3artiq: fixed RPC codegen for lists 2022-03-10 16:48:28 +08:00
pca006132 2f85bb3837 nac3core: impl call attributes
sret for returning large structs, and byval for struct args in extern
function calls.
2022-03-09 22:09:36 +08:00
ychenfo e266d3c2b0 nac3parser: modify to handle UAdd in front of int constant 2022-03-09 10:46:58 +08:00
ychenfo 60b3807ab3 nac3standalone: add test for abs function 2022-03-08 23:26:01 +08:00
ychenfo 5006028e2d nac3core: abs builtin function 2022-03-08 23:23:36 +08:00
ychenfo 1cc276cb43 nac3standalone: add test for max function 2022-03-08 22:23:13 +08:00
ychenfo 8241a29908 nac3core: max builtin function 2022-03-08 22:22:00 +08:00
ychenfo e9a17cf8f8 nac3standalone: add test for min function 2022-03-08 21:59:42 +08:00
ychenfo adb5c69e67 nac3core: min builtin function 2022-03-08 21:59:37 +08:00
ychenfo d848c2284e nac3parser: fix broken tests 2022-03-08 18:21:19 +08:00
ychenfo f7e62ab5b7 nac3ast/parser/core: use i128 for u64 constants 2022-03-08 18:21:14 +08:00
ychenfo 9f6c7b3359 nac3core: type conversion to/from uint 2022-03-08 13:42:45 +08:00
ychenfo 142e99a0f1 nac3core: fix broken tests 2022-03-08 13:34:08 +08:00
ychenfo 79c469301a basic unsigned integer support 2022-03-08 13:34:02 +08:00
ychenfo 8602852241 nac3core: use signed extension to convert i32 to i64 2022-03-06 04:49:02 +08:00
ychenfo 42fbe8e383 nac3core: fix err msg of too many args 2022-03-05 03:59:45 +08:00
pca006132 63b0f29728 Fix broken tests 2022-03-05 00:27:51 +08:00
pca006132 a5e1da0b92 nac3artiq/demo/embedding_map: avoid key 0
Object key 0 is reserved for builtin exceptions.
2022-03-05 00:27:23 +08:00
pca006132 294943e303 nac3core: get exception ID from symbol resolver
We need to store the exception class somewhere in order to create them
back in the host. Fixes #200
2022-03-05 00:26:35 +08:00
ychenfo 84b4bd920b nac3artiq: remove cached pyid_to_type if error 2022-03-04 16:23:25 +08:00
Sebastien Bourdeauducq 317eb80005 update dependencies 2022-03-03 17:10:22 +08:00
Sebastien Bourdeauducq 59ac5aae8a fix error message string (2) 2022-03-02 08:33:13 +08:00
Sebastien Bourdeauducq da039e3acf fix error message string 2022-03-02 08:04:15 +08:00
pca006132 d1e172501d nac3artiq: remove debug messages 2022-02-28 23:10:05 +08:00
pca006132 323d77a455 nac3artiq: improve error message for out of range error 2022-02-28 23:09:14 +08:00
pca006132 d41c923cfd nac3artiq: handle recursive types properly 2022-02-28 23:08:42 +08:00
Sebastien Bourdeauducq 5d8e87d923 more readable type annotation error string 2022-02-28 16:24:03 +08:00
Sebastien Bourdeauducq a9c73a4915 fix some error strings 2022-02-28 11:10:33 +08:00
Sebastien Bourdeauducq 804d5db27e nac3artiq: make CompileError importable from Python 2022-02-26 17:29:13 +08:00
Sebastien Bourdeauducq cbc77dddb0 nac3artiq: raise specific exception on error 2022-02-26 17:17:06 +08:00
pca006132 846d1726ef nac3core: fixed keyword arguments handling 2022-02-26 16:34:30 +08:00
pca006132 0686e83f4c nac3core/typecheck: fixed incorrect rollback 2022-02-25 20:01:11 +08:00
pca006132 e710b6c320 nac3core: fix exception final branch handling
According to https://github.com/m-labs/artiq/pull/1855
Passed the test cases from 1855.
Fixes #196.
2022-02-25 17:42:47 +08:00
pca006132 cc769a7006 nac3core: reset unification table state before printing errors
Fixes nondeterministic error messages due to nondeterministic
unification order. As all unification operations will be restored, the
error messages should not be affected by the unification order before
the failure operation.
2022-02-25 14:47:19 +08:00
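The rollback idea in the commit message above can be sketched as a union-find table that logs every union so it can be undone before errors are printed. This is an illustrative minimal sketch, not NAC3's actual Rust implementation; the names `UnificationTable`, `snapshot`, and `rollback` are hypothetical.

```python
class UnificationTable:
    """Union-find with an undo log, so unifications can be rolled back."""

    def __init__(self):
        self.parent = {}
        self.log = []  # (key, previous parent) for every union performed

    def find(self, x):
        self.parent.setdefault(x, x)
        # no path compression: keeps the undo log simple and exact
        while self.parent[x] != x:
            x = self.parent[x]
        return x

    def unify(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra != rb:
            self.log.append((ra, self.parent[ra]))
            self.parent[ra] = rb

    def snapshot(self):
        return len(self.log)

    def rollback(self, mark):
        # undo every union performed after `mark`; after this, the table
        # no longer depends on the order of unifications since `mark`
        while len(self.log) > mark:
            key, old = self.log.pop()
            self.parent[key] = old
```

Restoring to a snapshot before reporting a failure means the error message sees the same table state regardless of which order the unifications happened to run in.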
Sebastien Bourdeauducq 5cd4fe6507 update tests 2022-02-23 11:50:03 +08:00
Sebastien Bourdeauducq aa79c8d8b7 rename exception symbols in host code 2022-02-23 11:43:41 +08:00
Sebastien Bourdeauducq 75fde1bbf7 update tests 2022-02-23 11:39:47 +08:00
Sebastien Bourdeauducq 17792b76b7 rename exception symbols 2022-02-23 11:04:35 +08:00
Sebastien Bourdeauducq 6ae770d5eb update dependencies 2022-02-23 10:59:13 +08:00
pca006132 d3cb5d6e52 Fixed type error messages 2022-02-22 17:22:15 +08:00
Sebastien Bourdeauducq bb7c0a2d79 nac3artiq: remove errors from demo 2022-02-22 16:00:37 +08:00
pca006132 3ad25c8f07 nac3core: sort error messages for determinism 2022-02-22 14:33:43 +08:00
pca006132 ede3706ca8 type_inferencer: special case tuple index error message 2022-02-21 18:41:42 +08:00
pca006132 f97f93d92c applied rustfmt and clippy auto fix 2022-02-21 18:27:46 +08:00
pca006132 d9cb506f6a nac3core: refactored for better error messages 2022-02-21 18:24:19 +08:00
pca006132 352831b2ca nac3core: removed legacy location definition 2022-02-13 22:39:24 +08:00
pca006132 21d9182ba2 nac3core: disallow methods/fields in Exception subclass
Fixes #192
2022-02-13 21:45:22 +08:00
Sebastien Bourdeauducq 91f41052fe test: remove outdated comment 2022-02-13 17:24:47 +08:00
pca006132 14d25b3b9d Fixed broken tests 2022-02-13 17:21:42 +08:00
Sebastien Bourdeauducq 265d234266 update LLVM 2022-02-13 13:20:08 +08:00
Sebastien Bourdeauducq 2e44745933 runkernel: add dummy artiq_personality function 2022-02-13 13:03:38 +08:00
Sebastien Bourdeauducq 4b8e70f746 nac3standalone: disable broken tests (#188) 2022-02-13 11:41:42 +08:00
Sebastien Bourdeauducq 31e76ca3b6 nac3standalone: add dummy support for artiq_personality
So existing tests can run again
2022-02-13 11:35:02 +08:00
Sebastien Bourdeauducq 343f6fd067 update dependencies 2022-02-13 10:51:03 +08:00
Sebastien Bourdeauducq f1ebf8f96e flake: update nixpkgs 2022-02-13 10:47:22 +08:00
pca006132 b18626b149 Fix compilation and test failures 2022-02-12 22:50:32 +08:00
pca006132 750d912eb4 nac3core: do list bound check and negative index handling
Raise error when index out of range. Note that we use llvm.expect to
tell the optimizer that we expect not to raise an exception, so the
normal path performance would be better. If this assumption is violated,
the exception overhead might be slightly larger, but the percentage
increase in overhead should not be high since exception unwinding is
already pretty slow.
2022-02-12 22:50:32 +08:00
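The bound check and negative-index handling described above can be sketched in Python; the real codegen emits this logic as LLVM IR and marks the error branch unlikely with llvm.expect. The helper name `checked_index` is hypothetical.

```python
def checked_index(length: int, index: int) -> int:
    """Normalize a (possibly negative) list index, raising when out of range."""
    if index < 0:
        index += length          # e.g. lst[-1] -> lst[length - 1]
    if index < 0 or index >= length:
        raise IndexError("index out of range")
    return index
```

On the fast path no exception is raised, which is the case llvm.expect tells the optimizer to favor.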
pca006132 bf52e294ee nac3artiq: RPC support 2022-02-12 22:50:32 +08:00
pca006132 e303248261 nac3core: exception type check and codegen 2022-02-12 22:50:32 +08:00
pca006132 7ea5a5f84d nac3core: codegen refactoring
- No longer check if the statement will return. Instead, we check if
  the current basic block is terminated, which is simpler and handles
  exception/break/continue better.
- Use invoke statement when unwind is needed.
- Moved codegen for a block of statements into a separate function.
2022-02-12 22:13:59 +08:00
pca006132 b267a656a8 nac3core: added exception type and fixed primitive representation
- Added `Exception` primitive type and some builtin exception types.
  Note that all exception types share the same layout, and should
  inherit from the base `Exception` type. There are some hacks in the
  toplevel module for handling exception types, we should revisit and
  fix them later.
- Added new primitive types to concrete type module, otherwise there
  would be some weird type errors.
- Changed the representation of strings to CSlice<u8>, instead of
  CString.
2022-02-12 22:13:59 +08:00
pca006132 050c862c1a nac3core: function codegen callback changes
Added code generator argument to the callback, so it would be easier to
write complicated codegen with that callback. To prepare for RPC
codegen.
2022-02-12 21:24:41 +08:00
Sebastien Bourdeauducq ffe89eec86 llvm: disable threads 2022-02-08 14:52:09 +08:00
ychenfo d6ab73afb0 nac3core: style 2022-02-07 02:18:56 +08:00
ychenfo 6f9f455152 nac3core: list slice irrt use one function to handle var size 2022-02-07 02:09:50 +08:00
ychenfo e50f1017fa nac3core: irrt list of tuple use struct value representation 2022-02-07 02:09:50 +08:00
ychenfo 77608346b1 nac3core: handle tuple by value 2022-02-07 02:09:50 +08:00
Sebastien Bourdeauducq f5ce7376e3 flake: fix Windows build 2022-02-05 16:53:47 +08:00
Sebastien Bourdeauducq 1288624218 lock insta version (#179) 2022-01-31 15:18:49 +08:00
Sebastien Bourdeauducq 0124bcd26c update dependencies (missing part of previous commit) 2022-01-31 14:15:05 +08:00
Sebastien Bourdeauducq de065cfa14 update dependencies 2022-01-31 12:28:40 +08:00
pca006132 304181fd8c Merge pull request 'fix errors of non-primitive host object when running multiple kernels' (#171) from multiple_kernel_err into master
Reviewed-on: M-Labs/nac3#171
2022-01-27 14:46:22 +08:00
ychenfo 43048eb8d8 nac3standalone: add tests for list slice and len 2022-01-26 03:58:27 +08:00
ychenfo ace0e2a2c6 nac3core: fix use of size_t in list comprehension, cleanup 2022-01-25 03:35:59 +08:00
Sebastien Bourdeauducq e891683f2e flake: hack-link libstdc++ statically on Windows. Closes #175 2022-01-24 16:54:05 +08:00
Sebastien Bourdeauducq 8e01a20ac3 README: add Windows instructions 2022-01-24 15:54:01 +08:00
Sebastien Bourdeauducq 465514ca7a flake: fix mcfgthread filename 2022-01-24 15:52:04 +08:00
Sebastien Bourdeauducq 9c34dd9c80 flake: distribute mcfgthreads-12.dll on hydra 2022-01-24 15:49:32 +08:00
Sebastien Bourdeauducq ced7acd871 check_demos: improve output 2022-01-24 11:38:43 +08:00
Sebastien Bourdeauducq 6ea40809b3 README: fix nix shell URL 2022-01-24 11:35:39 +08:00
Sebastien Bourdeauducq f8e3f7a4ca add some basic list tests 2022-01-23 14:28:08 +08:00
Sebastien Bourdeauducq ba997ae094 flake: run nac3standalone demo checks
also keep auxiliary projects in separate Nix outputs
2022-01-23 11:32:34 +08:00
Sebastien Bourdeauducq 2a0caf931f nac3standalone: work around bash mess with exit codes of substituted processes
https://unix.stackexchange.com/questions/376114/how-to-detect-an-error-using-process-substitution
2022-01-23 11:15:11 +08:00
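The bash problem referenced above is that the shell discards the exit status of commands inside `<(...)` process substitutions. A minimal sketch of the workaround's idea, expressed in Python rather than bash: run each producer explicitly, surface its exit status, then compare outputs.

```python
import subprocess
import sys

def run_checked(cmd):
    # unlike bash's <(...), this makes the producer's exit status visible
    res = subprocess.run(cmd, capture_output=True, text=True)
    if res.returncode != 0:
        raise RuntimeError(f"{cmd} exited with {res.returncode}")
    return res.stdout

out = run_checked([sys.executable, "-c", "print('ok')"])
```

A failing producer raises immediately instead of silently yielding empty output to the comparison.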
Sebastien Bourdeauducq 64b94955fe nac3standalone: reorganize demos, compare against cpython 2022-01-23 10:35:06 +08:00
Sebastien Bourdeauducq f478c6afcc update dependencies 2022-01-19 21:17:07 +08:00
ychenfo 0439bf6aef nac3artiq: fix errors of non-primitive object when running multiple kernels 2022-01-15 04:43:39 +08:00
Sebastien Bourdeauducq fd4bf12808 fix grammar of some type error messages 2022-01-14 16:56:23 +08:00
Sebastien Bourdeauducq d7b14dd705 update dependencies 2022-01-14 16:55:10 +08:00
ychenfo 9d342d9f0f nac3artiq: error msg improvement for synthesized __modinit__ 2022-01-14 16:28:37 +08:00
ychenfo ae8f82ccb0 nac3core: fix broken tests 2022-01-14 16:28:37 +08:00
ychenfo 4a1a4dc076 nac3core/artiq/standalone: symbol resolver return error msg for type error of host variables 2022-01-14 16:28:34 +08:00
ychenfo eba9fc8a69 nac3core: add missing location for type inference 2022-01-14 03:05:11 +08:00
ychenfo 4976e89ae2 nac3core: list slice support 2022-01-13 16:53:32 +08:00
Sebastien Bourdeauducq 82509d60ec remove obvious comment 2022-01-13 12:31:28 +08:00
ychenfo 2579ecbd19 nac3core: irrt module get attribute id using name instead of hard code 2022-01-11 17:25:07 +08:00
ychenfo 44f4c4f028 nac3core: build script use Path::join 2022-01-09 12:06:45 +08:00
Sebastien Bourdeauducq 8ef9e74aaf move rustfmt.toml upper 2022-01-09 11:31:06 +08:00
Sebastien Bourdeauducq 9c20e84c84 flake: fix/cleanup 2022-01-09 11:30:36 +08:00
Sebastien Bourdeauducq b88f17ed42 switch to clang-unwrapped, build IRRT with wasm32 2022-01-09 10:56:28 +08:00
Sebastien Bourdeauducq 096193f7ab demo: rewrite in Rust 2022-01-09 10:51:10 +08:00
ychenfo 4760851638 nac3standalone: link modules and load irrt like in nac3artiq 2022-01-09 02:17:58 +08:00
ychenfo 1ee857de6a nac3core: format, fix clippy warning 2022-01-09 01:12:18 +08:00
Sebastien Bourdeauducq 4a65d82db5 introduce IRRT, implement power
based on code by Yijia
M-Labs/nac3#160
2022-01-09 00:57:50 +08:00
Sebastien Bourdeauducq b638d1b4b0 nac3standalone: set up LLVM inliner like in nac3artiq 2022-01-08 21:03:58 +08:00
Sebastien Bourdeauducq 52ccf31bb1 update dependencies 2022-01-04 22:00:29 +08:00
Sebastien Bourdeauducq 4904610dc6 flake: provide mimalloc-enabled Python
The Linux linker and the libc are garbage, so there isn't much of an alternative to using the Nix wrapper and LD_PRELOAD.
2022-01-04 21:54:55 +08:00
ychenfo 7193e3f328 nac3core: codegen fix empty list llvm type 2021-12-30 05:09:21 +08:00
Sebastien Bourdeauducq 2822c613ef llvm: fix TLI-musl.patch 2021-12-29 20:52:59 +08:00
Sebastien Bourdeauducq a0bf6da6c2 update dependencies 2021-12-28 12:08:55 +08:00
Sebastien Bourdeauducq 9cc9a0284a nac3standalone: style 2021-12-28 10:59:17 +08:00
ychenfo 85e06d431a nac3core: improve some type annotation error messages (#87) 2021-12-28 10:49:14 +08:00
ychenfo 9b3b47ce50 fix broken tests 2021-12-28 01:38:16 +08:00
ychenfo 88f0da7bdd add file name to AST node location 2021-12-28 01:28:55 +08:00
pca006132 1bd966965e fixed M-Labs/nac3#146 2021-12-27 22:56:50 +08:00
pca006132 521f136f2e redo "nac3artiq: fixed compilation error"
This reverts commit 3b5328d3cd.
2021-12-27 22:56:30 +08:00
pca006132 fa04768a77 redo "nac3core: fix #84"
This reverts commit 86005da8e1.
2021-12-27 22:56:26 +08:00
Sebastien Bourdeauducq 6162d21a5b LLVM PGO support 2021-12-26 21:11:14 +08:00
Sebastien Bourdeauducq 8101483ebd flake: style 2021-12-26 18:57:02 +08:00
Sebastien Bourdeauducq dc5e42c5eb flake: use LLVM 13 throughout 2021-12-26 18:56:23 +08:00
Sebastien Bourdeauducq 86005da8e1 Revert "nac3core: fix #84"
This reverts commit 0902d8adf4.
2021-12-26 08:35:27 +08:00
Sebastien Bourdeauducq 3b5328d3cd Revert "nac3artiq: fixed compilation error"
This reverts commit 34cabe0e55.
2021-12-26 08:31:37 +08:00
Sebastien Bourdeauducq 5aa6749241 remove num-traits 2021-12-26 00:32:08 +08:00
Sebastien Bourdeauducq 80d3ab1b0f remove bigints 2021-12-26 00:23:54 +08:00
Sebastien Bourdeauducq ec986dfdf3 update dependencies 2021-12-25 23:03:53 +08:00
Sebastien Bourdeauducq d2a5cd6d57 update to LLVM 13 2021-12-25 22:49:47 +08:00
Sebastien Bourdeauducq 9e3f75255e update inkwell. Closes #67 2021-12-25 22:17:06 +08:00
Sebastien Bourdeauducq 53f13b44cf flake: update nixpkgs 2021-12-25 21:10:19 +08:00
pca006132 34cabe0e55 nac3artiq: fixed compilation error 2021-12-23 15:47:54 +08:00
pca006132 6e85f549f6 Merge pull request 'nac3core: fix #84' (#146) from fix_84 into master
Reviewed-on: M-Labs/nac3#146
2021-12-23 15:28:29 +08:00
pca006132 0902d8adf4 nac3core: fix #84 2021-12-23 15:26:48 +08:00
ychenfo 66320679be improve error messages
#112, #110, #108, #87

Reviewed-on: M-Labs/nac3#145
Co-authored-by: ychenfo <yc@m-labs.hk>
Co-committed-by: ychenfo <yc@m-labs.hk>
2021-12-22 08:52:19 +08:00
Sebastien Bourdeauducq 0ff995722c Revert "nac3core: add missing expr concrete type check"
This reverts commit cb450372d6.
2021-12-20 18:13:45 +08:00
Sebastien Bourdeauducq e2b44a066b return int32 in len(). Closes #141 2021-12-20 17:44:42 +08:00
Sebastien Bourdeauducq 2008db8097 nac3standalone: remove unused import 2021-12-20 17:39:16 +08:00
ychenfo cb450372d6 nac3core: add missing expr concrete type check 2021-12-19 18:01:49 +08:00
ychenfo ff27a1697e nac3core: fix for loop type inference 2021-12-19 18:01:49 +08:00
ychenfo 91625dd327 update kernel-only attribute annotation
Reviewed-on: M-Labs/nac3#127
Co-authored-by: ychenfo <yc@m-labs.hk>
Co-committed-by: ychenfo <yc@m-labs.hk>
2021-12-19 11:04:53 +08:00
Sebastien Bourdeauducq 7420ce185b README: update 2021-12-13 19:02:46 +08:00
Sebastien Bourdeauducq 69b9ac5152 nac3standalone: consistent naming 2021-12-13 11:19:11 +08:00
ychenfo ccfcba4066 nac3standalone: add output_long 2021-12-13 10:44:33 +08:00
ychenfo b5637a04e9 nac3core: use official implementation for len 2021-12-13 10:44:33 +08:00
ychenfo 2c6601d97c nac3core: fix len on range with step of different sign 2021-12-13 10:44:33 +08:00
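The rule the fix above implements matches CPython's range semantics: for a step of either sign, the length is ceil((stop - start) / step), clamped at zero. A sketch (the function name `range_len` is illustrative):

```python
def range_len(start: int, stop: int, step: int) -> int:
    """Length of range(start, stop, step) for a nonzero step of either sign."""
    assert step != 0
    if step > 0:
        diff = stop - start
    else:
        # flip both the distance and the step so one formula covers both signs
        diff = start - stop
        step = -step
    if diff <= 0:
        return 0
    return (diff + step - 1) // step  # integer ceiling division
```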
ychenfo 82359b81a2 nac3core: fix bool to int conversion 2021-12-13 04:13:43 +08:00
ychenfo 4d2fd9582a nac3core: fix broken tests 2021-12-09 01:37:05 +08:00
ychenfo b7892ce952 nac3core: add len support for list and range 2021-12-09 01:37:00 +08:00
ychenfo 01d3249646 nac3core: add missing llvm range type 2021-12-09 01:16:05 +08:00
Sebastien Bourdeauducq d2ffdeeb47 flake: update nixpkgs and work around openssh cross compilation breakage. Closes #123 2021-12-08 21:21:37 +08:00
Sebastien Bourdeauducq ae902aac2f remove devshell inputs from hydraJobs
We are not recompiling packages that depend on LLVM anymore, llvm-nac3 is only used for static linking within NAC3.
2021-12-08 17:43:05 +08:00
Sebastien Bourdeauducq 3f73896477 remove a small amount of LLVM bloat
Also avoids libffi.dll dependency on Windows.
2021-12-08 17:41:34 +08:00
Sebastien Bourdeauducq ddb4c548ae add and use local copy of LLVM Nix files
Modifications accumulate and many are not suitable for nixpkgs upstream.

Based on nixpkgs 3f629e3dd5293bd3c211c4950c418f7cfb4501af
2021-12-08 16:55:25 +08:00
pca006132 6d00d4dabb nac3artiq: cache python data if possible 2021-12-05 20:30:03 +08:00
Sebastien Bourdeauducq baa713a3ca flake: don't attempt to fixup Windows build 2021-12-05 14:40:10 +08:00
Sebastien Bourdeauducq d2919b9620 Revert "flake: better shells"
llvm-config/llvm-sys hates pkgs.buildEnv.

This reverts commit e4f35372d3.
2021-12-05 14:35:58 +08:00
Sebastien Bourdeauducq 9ee2168932 Revert "flake: fix hydraJobs"
This reverts commit e8e1499478.
2021-12-05 14:35:58 +08:00
pca006132 65bc1e5fa4 nac3artiq: handle name_to_pyid in compilation
python variables can change between kernel invocations
2021-12-05 13:10:54 +08:00
pca006132 2938eacd16 nac3artiq: supports running multiple kernels 2021-12-05 13:10:54 +08:00
Sebastien Bourdeauducq e8e1499478 flake: fix hydraJobs 2021-12-05 13:03:44 +08:00
Sebastien Bourdeauducq e4f35372d3 flake: better shells 2021-12-05 12:56:47 +08:00
Sebastien Bourdeauducq 41f88095a5 min_artiq: add round64, floor64, ceil64 2021-12-04 20:35:52 +08:00
pca006132 c98f367f90 nac3artiq: enables inlining 2021-12-04 17:52:03 +08:00
ychenfo 1f3aa48361 nac3parser: modify parser to handle negative integer edge cases 2021-12-03 16:35:58 +08:00
Sebastien Bourdeauducq 8c05d8431d flake: use upstream nixpkgs patch
https://github.com/NixOS/nixpkgs/pull/148367
2021-12-03 11:57:01 +08:00
Sebastien Bourdeauducq 0ae2aae645 flake: publish zipfile with Windows Python module on Hydra 2021-12-02 22:47:35 +08:00
Sebastien Bourdeauducq b0eb7815da flake: consistent naming 2021-12-02 22:37:41 +08:00
Sebastien Bourdeauducq 26e60fca6e flake: cleanup tarball unpacking 2021-12-02 22:37:32 +08:00
Sebastien Bourdeauducq 22a509e7ce flake: add Hydra job for Windows build
This is a proof-of-concept; it works but requires manual fiddling with DLLs
(e.g. copy them from the Nix store into the Windows environment), and LLD
is not available on Windows.
2021-12-02 22:29:44 +08:00
Sebastien Bourdeauducq 4526c28edb Merge branch 'windows' 2021-12-02 22:26:55 +08:00
Sebastien Bourdeauducq 25fc9db66d cargo: specify inkwell LLVM target explicitly
Windows LLVM linking otherwise breaks on the non-existing targets.
2021-12-02 22:24:33 +08:00
Sebastien Bourdeauducq 6315027a8b flake: use *.pyd for Windows Python module 2021-12-02 22:24:23 +08:00
Sebastien Bourdeauducq c0f8d5c602 flake: Windows libs working 2021-12-02 22:01:19 +08:00
Sebastien Bourdeauducq 998f49261d flake: fix Windows libs further 2021-12-02 21:02:48 +08:00
Sebastien Bourdeauducq aab43b1c07 flake: unbreak Windows library link (WIP) 2021-12-02 20:00:50 +08:00
Sebastien Bourdeauducq a6275fbb57 flake: add libffi on Windows 2021-12-02 19:08:20 +08:00
Sebastien Bourdeauducq 8a46032f4c flake: unbreak llvm-config for cross-compilation of static libs 2021-12-02 18:46:04 +08:00
Sebastien Bourdeauducq 1c31aa6e8e consistent naming 2021-12-02 10:45:46 +08:00
sb10q b030aec191 Merge pull request 'Add floor and ceil, move built-in functions to a separate file' (#120) from built_in_floor_ceil into master
Reviewed-on: M-Labs/nac3#120
2021-12-02 10:40:50 +08:00
ychenfo aa2d79fea6 Merge branch 'master' into built_in_floor_ceil 2021-12-02 01:08:55 +08:00
ychenfo 1e6848ab92 nac3core: distinguish i64 and i32 in bool conversion 2021-12-02 01:02:42 +08:00
Sebastien Bourdeauducq a91b2d602c flake: switch to nixos- branch 2021-12-01 22:49:43 +08:00
Sebastien Bourdeauducq c683958e4a nac3artiq: clarify comment about virtual class 2021-12-01 22:49:20 +08:00
Sebastien Bourdeauducq 142f82f987 remove debug prints 2021-12-01 22:48:06 +08:00
ychenfo dfd3548ed2 TypeVar and virtual support in Symbol Resolver (#99)
Add `TypeVar` and `virtual` support for Symbol Resolver in nac3artiq and nac3standalone

Reviewed-on: M-Labs/nac3#99
Co-authored-by: ychenfo <yc@m-labs.hk>
Co-committed-by: ychenfo <yc@m-labs.hk>
2021-12-01 22:44:53 +08:00
Sebastien Bourdeauducq 31fba04cee flake: fix Windows build, now finding LLVM and Python 2021-12-01 18:30:26 +08:00
ychenfo fa2fe8ed5d nac3core: add ceil and floor 2021-12-01 03:23:58 +08:00
ychenfo 7ede4f15b6 nac3core: move builtin definitions to another file 2021-12-01 02:52:00 +08:00
Sebastien Bourdeauducq 701ca36e99 flake: windows build WIP 2021-11-26 17:26:18 +08:00
Sebastien Bourdeauducq 5e1b0a10a0 flake: patch nixpkgs to fix mingw llvm_12 build 2021-11-26 17:01:44 +08:00
Sebastien Bourdeauducq 9f316a3294 flake: revert nixpkgs to unbreak rust cross-compilation 2021-11-26 17:00:20 +08:00
106 changed files with 10626 additions and 3840 deletions

Cargo.lock (generated, 291 changed lines)

@@ -4,9 +4,9 @@ version = 3
 [[package]]
 name = "ahash"
-version = "0.7.4"
+version = "0.7.6"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "43bb833f0bf979d8475d38fbf09ed3b8a55e1885fe93ad3f93239fc6a4f17b98"
+checksum = "fcb51a0695d8f838b1ee009b3fbf66bda078cd64590202a864a8f3e8c4315c47"
 dependencies = [
  "getrandom",
  "once_cell",
@@ -44,9 +44,9 @@ dependencies = [
 [[package]]
 name = "autocfg"
-version = "1.0.1"
+version = "1.1.0"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "cdb031dd78e28731d87d56cc8ffef4a8f36ca26c38fe2de700543e627f8a464a"
+checksum = "d468802bab17cbc0cc575e9b053f41e72aa36bfa6b7f55e3529ffa43161b97fa"

 [[package]]
 name = "bit-set"
@@ -77,9 +77,9 @@ checksum = "14c189c53d098945499cdfa7ecc63567cf3886b3332b312a5b4585d8d3a6a610"
 [[package]]
 name = "cc"
-version = "1.0.70"
+version = "1.0.73"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "d26a6ce4b6a484fa3edb70f7efa6fc430fd2b87285fe8b84304fd0936faa0dc0"
+checksum = "2fff2a6927b3bb87f9595d67196a70493f627687a71d87a0d692242c33f58c11"

 [[package]]
 name = "cfg-if"
@@ -89,13 +89,13 @@ checksum = "baf1de4339761588bc0619e3cbc0120ee582ebb74b53b4efbf79117bd2da40fd"
 [[package]]
 name = "console"
-version = "0.14.1"
+version = "0.15.0"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "3993e6445baa160675931ec041a5e03ca84b9c6e32a056150d3aa2bdda0a1f45"
+checksum = "a28b32d32ca44b70c3e4acd7db1babf555fa026e385fb95f18028f88848b3c31"
 dependencies = [
  "encode_unicode",
- "lazy_static",
  "libc",
+ "once_cell",
  "terminal_size",
  "winapi",
 ]
@@ -116,9 +116,9 @@ dependencies = [
 [[package]]
 name = "crossbeam-channel"
-version = "0.5.1"
+version = "0.5.3"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "06ed27e177f16d65f0f0c22a213e17c696ace5dd64b14258b52f9417ccb52db4"
+checksum = "fdbfe11fe19ff083c48923cf179540e8cd0535903dc35e178a1fdeeb59aef51f"
 dependencies = [
  "cfg-if",
  "crossbeam-utils",
@@ -137,10 +137,11 @@ dependencies = [
 [[package]]
 name = "crossbeam-epoch"
-version = "0.9.5"
+version = "0.9.8"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "4ec02e091aa634e2c3ada4a392989e7c3116673ef0ac5b72232439094d73b7fd"
+checksum = "1145cf131a2c6ba0615079ab6a638f7e1973ac9c2634fcbeaaad6114246efe8c"
 dependencies = [
+ "autocfg",
  "cfg-if",
  "crossbeam-utils",
  "lazy_static",
@@ -150,9 +151,9 @@ dependencies = [
 [[package]]
 name = "crossbeam-queue"
-version = "0.3.2"
+version = "0.3.5"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "9b10ddc024425c88c2ad148c1b0fd53f4c6d38db9697c9f1588381212fa657c9"
+checksum = "1f25d8400f4a7a5778f0e4e52384a48cbd9b5c495d110786187fc750075277a2"
 dependencies = [
  "cfg-if",
  "crossbeam-utils",
@@ -160,9 +161,9 @@ dependencies = [
 [[package]]
 name = "crossbeam-utils"
-version = "0.8.5"
+version = "0.8.8"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "d82cfc11ce7f2c3faef78d8a684447b40d503d9681acebed6cb728d45940c4db"
+checksum = "0bf124c720b7686e3c2663cf54062ab0f68a88af2fb6a030e87e30bf721fcb38"
 dependencies = [
  "cfg-if",
  "lazy_static",
@@ -201,12 +202,6 @@ dependencies = [
  "winapi",
 ]

-[[package]]
-name = "dtoa"
-version = "0.4.8"
-source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "56899898ce76aaf4a0f24d914c97ea6ed976d42fec6ad33fcbb0a1103e07b2b0"
-
 [[package]]
 name = "either"
 version = "1.6.1"
@@ -228,6 +223,15 @@ version = "0.3.6"
 source = "registry+https://github.com/rust-lang/crates.io-index"
 checksum = "a357d28ed41a50f9c765dbfe56cbc04a64e53e5fc58ba79fbc34c10ef3df831f"

+[[package]]
+name = "fastrand"
+version = "1.7.0"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "c3fcf0cee53519c866c09b5de1f6c56ff9d647101f81c1964fa632e148896cdf"
+dependencies = [
+ "instant",
+]
+
 [[package]]
 name = "fixedbitset"
 version = "0.2.0"
@@ -245,9 +249,9 @@ dependencies = [
 [[package]]
 name = "getrandom"
-version = "0.2.3"
+version = "0.2.5"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "7fcd999463524c52659517fe2cea98493cfe485d10565e7b0fb07dbba7ad2753"
+checksum = "d39cd93900197114fa1fcb7ae84ca742095eed9442088988ae74fa744e930e77"
 dependencies = [
  "cfg-if",
  "libc",
@@ -274,9 +278,9 @@ dependencies = [
 [[package]]
 name = "indexmap"
-version = "1.7.0"
+version = "1.8.0"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "bc633605454125dec4b66843673f01c7df2b89479b32e0ed634e43a91cff62a5"
+checksum = "282a6247722caba404c065016bbfa522806e51714c34f5dfc3e4a3a46fcb4223"
 dependencies = [
  "autocfg",
  "hashbrown",
@@ -294,9 +298,9 @@ dependencies = [
 [[package]]
 name = "indoc"
-version = "1.0.3"
+version = "1.0.4"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "e5a75aeaaef0ce18b58056d306c27b07436fbb34b8816c53094b76dd81803136"
+checksum = "e7906a9fababaeacb774f72410e497a1d18de916322e33797bb2cd29baa23c9e"
 dependencies = [
  "unindent",
 ]
@@ -316,8 +320,9 @@ dependencies = [
 [[package]]
 name = "inkwell"
-version = "0.1.0"
-source = "git+https://github.com/TheDan64/inkwell?branch=master#d018ee22e4b5241dec2bc32ca67f3d4caaecee47"
+version = "0.1.0-beta.4"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "2223d0eba0ae6d40a3e4680c6a3209143471e1f38b41746ea309aa36dde9f90b"
 dependencies = [
  "either",
  "inkwell_internals",
@@ -330,8 +335,9 @@ dependencies = [
 [[package]]
 name = "inkwell_internals"
-version = "0.3.0"
-source = "git+https://github.com/TheDan64/inkwell?branch=master#d018ee22e4b5241dec2bc32ca67f3d4caaecee47"
+version = "0.5.0"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "3c7090af3d300424caa81976b8c97bca41cd70e861272c072e188ae082fb49f9"
 dependencies = [
  "proc-macro2",
  "quote",
@@ -340,48 +346,47 @@ dependencies = [
 [[package]]
 name = "insta"
-version = "1.8.0"
+version = "1.11.0"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "15226a375927344c78d39dc6b49e2d5562a5b0705e26a589093c6792e52eed8e"
+checksum = "da408b722765c64aad796c666b756aa1dda2a6c1b44f98797f2d8ea8f197746f"
 dependencies = [
  "console",
- "lazy_static",
+ "once_cell",
  "serde",
  "serde_json",
  "serde_yaml",
  "similar",
- "uuid",
 ]

 [[package]]
 name = "instant"
-version = "0.1.10"
+version = "0.1.12"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "bee0328b1209d157ef001c94dd85b4f8f64139adb0eac2659f4b08382b2f474d"
+checksum = "7a5bbe824c507c5da5956355e86a746d82e0e1464f65d862cc5e71da70e94b2c"
 dependencies = [
  "cfg-if",
 ]

 [[package]]
 name = "itertools"
-version = "0.10.1"
+version = "0.10.3"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "69ddb889f9d0d08a67338271fa9b62996bc788c7796a5c18cf057420aaed5eaf"
+checksum = "a9a9d19fa1e79b6215ff29b9d6880b706147f16e9b1dbb1e4e5947b5b02bc5e3"
 dependencies = [
  "either",
 ]

 [[package]]
 name = "itoa"
-version = "0.4.8"
+version = "1.0.1"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "b71991ff56294aa922b450139ee08b3bfc70982c6b2c7562771375cf73542dd4"
+checksum = "1aab8fc367588b89dcee83ab0fd66b72b50b72fa1904d7095045ace2b0c81c35"

 [[package]]
 name = "lalrpop"
-version = "0.19.6"
+version = "0.19.7"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "b15174f1c529af5bf1283c3bc0058266b483a67156f79589fab2a25e23cf8988"
+checksum = "852b75a095da6b69da8c5557731c3afd06525d4f655a4fc1c799e2ec8bc4dce4"
 dependencies = [
  "ascii-canvas",
  "atty",
@@ -402,9 +407,9 @@ dependencies = [
 [[package]]
 name = "lalrpop-util"
-version = "0.19.6"
+version = "0.19.7"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "d3e58cce361efcc90ba8a0a5f982c741ff86b603495bb15a998412e957dcd278"
+checksum = "d6d265705249fe209280676d8f68887859fa42e1d34f342fc05bd47726a5e188"
 dependencies = [
  "regex",
 ]
@@ -417,15 +422,15 @@ checksum = "e2abad23fbc42b3700f2f279844dc832adb2b2eb069b2df918f455c4e18cc646"
 [[package]]
 name = "libc"
-version = "0.2.102"
+version = "0.2.120"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "a2a5ac8f984bfcf3a823267e5fde638acc3325f6496633a5da6bb6eb2171e103"
+checksum = "ad5c14e80759d0939d013e6ca49930e59fc53dd8e5009132f76240c179380c09"

 [[package]]
 name = "libloading"
-version = "0.7.1"
+version = "0.7.3"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "c0cf036d15402bea3c5d4de17b3fce76b3e4a56ebc1f577be0e7a72f7c607cf0"
+checksum = "efbc0f03f9a775e9f6aed295c6a1ba2253c5757a9e03d55c6caa46a681abcddd"
 dependencies = [
"cfg-if", "cfg-if",
"winapi", "winapi",
@ -439,9 +444,9 @@ checksum = "7fb9b38af92608140b86b693604b9ffcc5824240a484d1ecd4795bacb2fe88f3"
[[package]] [[package]]
name = "llvm-sys" name = "llvm-sys"
version = "120.2.1" version = "130.0.3"
source = "registry+https://github.com/rust-lang/crates.io-index" source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "b4a810627ac62b396f5fd2214ba9bbd8748d4d6efdc4d2c1c1303ea7a75763ce" checksum = "95eb03b4f7ae21f48ef7c565a3e3aa22c50616aea64645fb1fd7f6f56b51c274"
dependencies = [ dependencies = [
"cc", "cc",
"lazy_static", "lazy_static",
@ -452,9 +457,9 @@ dependencies = [
[[package]] [[package]]
name = "lock_api" name = "lock_api"
version = "0.4.5" version = "0.4.6"
source = "registry+https://github.com/rust-lang/crates.io-index" source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "712a4d093c9976e24e7dbca41db895dabcbac38eb5f4045393d17a95bdfb1109" checksum = "88943dd7ef4a2e5a4bfa2753aaab3013e34ce2533d1996fb18ef591e315e2b3b"
dependencies = [ dependencies = [
"scopeguard", "scopeguard",
] ]
@ -476,9 +481,9 @@ checksum = "308cc39be01b73d0d18f82a0e7b2a3df85245f84af96fdddc5d202d27e47b86a"
[[package]] [[package]]
name = "memoffset" name = "memoffset"
version = "0.6.4" version = "0.6.5"
source = "registry+https://github.com/rust-lang/crates.io-index" source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "59accc507f1338036a0477ef61afdae33cde60840f4dfe481319ce3ad116ddf9" checksum = "5aa361d4faea93603064a027415f07bd8e1d5c88c9fbf68bf56a285428fd79ce"
dependencies = [ dependencies = [
"autocfg", "autocfg",
] ]
@ -501,7 +506,6 @@ version = "0.1.0"
dependencies = [ dependencies = [
"fxhash", "fxhash",
"lazy_static", "lazy_static",
"num-bigint 0.4.2",
"parking_lot", "parking_lot",
"string-interner", "string-interner",
] ]
@ -511,15 +515,14 @@ name = "nac3core"
version = "0.1.0" version = "0.1.0"
dependencies = [ dependencies = [
"crossbeam", "crossbeam",
"indoc 1.0.3", "indoc 1.0.4",
"inkwell", "inkwell",
"insta", "insta",
"itertools", "itertools",
"nac3parser", "nac3parser",
"num-bigint 0.3.3",
"num-traits",
"parking_lot", "parking_lot",
"rayon", "rayon",
"regex",
"test-case", "test-case",
] ]
@ -533,8 +536,6 @@ dependencies = [
"lalrpop-util", "lalrpop-util",
"log", "log",
"nac3ast", "nac3ast",
"num-bigint 0.4.2",
"num-traits",
"phf", "phf",
"unic-emoji-char", "unic-emoji-char",
"unic-ucd-ident", "unic-ucd-ident",
@ -557,52 +558,11 @@ version = "1.0.4"
source = "registry+https://github.com/rust-lang/crates.io-index" source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "e4a24736216ec316047a1fc4252e27dabb04218aa4a3f37c6e7ddbf1f9782b54" checksum = "e4a24736216ec316047a1fc4252e27dabb04218aa4a3f37c6e7ddbf1f9782b54"
[[package]]
name = "num-bigint"
version = "0.3.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "5f6f7833f2cbf2360a6cfd58cd41a53aa7a90bd4c202f5b1c7dd2ed73c57b2c3"
dependencies = [
"autocfg",
"num-integer",
"num-traits",
]
[[package]]
name = "num-bigint"
version = "0.4.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "74e768dff5fb39a41b3bcd30bb25cf989706c90d028d1ad71971987aa309d535"
dependencies = [
"autocfg",
"num-integer",
"num-traits",
]
[[package]]
name = "num-integer"
version = "0.1.44"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "d2cc698a63b549a70bc047073d2949cce27cd1c7b0a4a862d08a8031bc2801db"
dependencies = [
"autocfg",
"num-traits",
]
[[package]]
name = "num-traits"
version = "0.2.14"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "9a64b1ec5cda2586e284722486d802acf1f7dbdc623e2bfc57e65ca1cd099290"
dependencies = [
"autocfg",
]
[[package]] [[package]]
name = "num_cpus" name = "num_cpus"
version = "1.13.0" version = "1.13.1"
source = "registry+https://github.com/rust-lang/crates.io-index" source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "05499f3756671c15885fee9034446956fff3f243d6077b91e5767df161f766b3" checksum = "19e64526ebdee182341572e50e9ad03965aa510cd94427a4549448f285e957a1"
dependencies = [ dependencies = [
"hermit-abi", "hermit-abi",
"libc", "libc",
@ -610,9 +570,9 @@ dependencies = [
[[package]] [[package]]
name = "once_cell" name = "once_cell"
version = "1.8.0" version = "1.10.0"
source = "registry+https://github.com/rust-lang/crates.io-index" source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "692fcb63b64b1758029e0a96ee63e049ce8c5948587f2f7208df04625e5f6b56" checksum = "87f3e037eac156d1775da914196f0f37741a274155e34a0b7e427c35d2a2ecb9"
[[package]] [[package]]
name = "parking_lot" name = "parking_lot"
@ -714,18 +674,18 @@ dependencies = [
[[package]] [[package]]
name = "phf_shared" name = "phf_shared"
version = "0.8.0" version = "0.9.0"
source = "registry+https://github.com/rust-lang/crates.io-index" source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "c00cf8b9eafe68dde5e9eaa2cef8ee84a9336a47d566ec55ca16589633b65af7" checksum = "a68318426de33640f02be62b4ae8eb1261be2efbc337b60c54d845bf4484e0d9"
dependencies = [ dependencies = [
"siphasher", "siphasher",
] ]
[[package]] [[package]]
name = "phf_shared" name = "phf_shared"
version = "0.9.0" version = "0.10.0"
source = "registry+https://github.com/rust-lang/crates.io-index" source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "a68318426de33640f02be62b4ae8eb1261be2efbc337b60c54d845bf4484e0d9" checksum = "b6796ad771acdc0123d2a88dc428b5e38ef24456743ddb1744ed628f9815c096"
dependencies = [ dependencies = [
"siphasher", "siphasher",
] ]
@ -738,9 +698,9 @@ checksum = "db8bcd96cb740d03149cbad5518db9fd87126a10ab519c011893b1754134c468"
[[package]] [[package]]
name = "ppv-lite86" name = "ppv-lite86"
version = "0.2.10" version = "0.2.16"
source = "registry+https://github.com/rust-lang/crates.io-index" source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "ac74c624d6b2d21f425f752262f42188365d7b8ff1aff74c82e45136510a4857" checksum = "eb9f9e6e233e5c4a35559a617bf40a4ec447db2e84c20b55a6f83167b7e57872"
[[package]] [[package]]
name = "precomputed-hash" name = "precomputed-hash"
@ -756,9 +716,9 @@ checksum = "dbf0c48bc1d91375ae5c3cd81e3722dff1abcf81a30960240640d223f59fe0e5"
[[package]] [[package]]
name = "proc-macro2" name = "proc-macro2"
version = "1.0.29" version = "1.0.36"
source = "registry+https://github.com/rust-lang/crates.io-index" source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "b9f5105d4fdaab20335ca9565e106a5d9b82b6219b5ba735731124ac6711d23d" checksum = "c7342d5883fbccae1cc37a2353b09c87c9b0f3afd73f5fb9bba687a1f733b029"
dependencies = [ dependencies = [
"unicode-xid", "unicode-xid",
] ]
@ -813,23 +773,22 @@ dependencies = [
[[package]] [[package]]
name = "quote" name = "quote"
version = "1.0.9" version = "1.0.15"
source = "registry+https://github.com/rust-lang/crates.io-index" source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "c3d0b9745dc2debf507c8422de05d7226cc1f0644216dfdfead988f9b1ab32a7" checksum = "864d3e96a899863136fc6e99f3d7cae289dafe43bf2c5ac19b70df7210c0a145"
dependencies = [ dependencies = [
"proc-macro2", "proc-macro2",
] ]
[[package]] [[package]]
name = "rand" name = "rand"
version = "0.8.4" version = "0.8.5"
source = "registry+https://github.com/rust-lang/crates.io-index" source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "2e7573632e6454cf6b99d7aac4ccca54be06da05aca2ef7423d22d27d4d4bcd8" checksum = "34af8d1a0e25924bc5b7c43c079c942339d8f0a8b57c39049bef581b46327404"
dependencies = [ dependencies = [
"libc", "libc",
"rand_chacha", "rand_chacha",
"rand_core", "rand_core",
"rand_hc",
] ]
[[package]] [[package]]
@ -851,15 +810,6 @@ dependencies = [
"getrandom", "getrandom",
] ]
[[package]]
name = "rand_hc"
version = "0.3.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "d51e9f596de227fda2ea6c84607f5558e196eeaf43c986b724ba4fb8fdf497e7"
dependencies = [
"rand_core",
]
[[package]] [[package]]
name = "rayon" name = "rayon"
version = "1.5.1" version = "1.5.1"
@ -887,9 +837,9 @@ dependencies = [
[[package]] [[package]]
name = "redox_syscall" name = "redox_syscall"
version = "0.2.10" version = "0.2.11"
source = "registry+https://github.com/rust-lang/crates.io-index" source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "8383f39639269cde97d255a32bdb68c047337295414940c68bdd30c2e13203ff" checksum = "8380fe0152551244f0747b1bf41737e0f8a74f97a14ccefd1148187271634f3c"
dependencies = [ dependencies = [
"bitflags", "bitflags",
] ]
@ -906,9 +856,9 @@ dependencies = [
[[package]] [[package]]
name = "regex" name = "regex"
version = "1.5.4" version = "1.5.5"
source = "registry+https://github.com/rust-lang/crates.io-index" source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "d07a8629359eb56f1e2fb1652bb04212c072a87ba68546a04065d525673ac461" checksum = "1a11647b6b25ff05a515cb92c365cec08801e83423a235b51e231e1808747286"
dependencies = [ dependencies = [
"aho-corasick", "aho-corasick",
"memchr", "memchr",
@ -939,15 +889,15 @@ dependencies = [
[[package]] [[package]]
name = "rustversion" name = "rustversion"
version = "1.0.5" version = "1.0.6"
source = "registry+https://github.com/rust-lang/crates.io-index" source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "61b3909d758bb75c79f23d4736fac9433868679d3ad2ea7a61e3c25cfda9a088" checksum = "f2cc38e8fa666e2de3c4aba7edeb5ffc5246c1c2ed0e3d17e560aeeba736b23f"
[[package]] [[package]]
name = "ryu" name = "ryu"
version = "1.0.5" version = "1.0.9"
source = "registry+https://github.com/rust-lang/crates.io-index" source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "71d301d4193d031abdd79ff7e3dd721168a9572ef3fe51a1517aba235bd8f86e" checksum = "73b4b750c782965c211b42f022f59af1fbceabdd026623714f104152f1ec149f"
[[package]] [[package]]
name = "scopeguard" name = "scopeguard"
@ -975,18 +925,18 @@ dependencies = [
[[package]] [[package]]
name = "serde" name = "serde"
version = "1.0.130" version = "1.0.136"
source = "registry+https://github.com/rust-lang/crates.io-index" source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "f12d06de37cf59146fbdecab66aa99f9fe4f78722e3607577a5375d66bd0c913" checksum = "ce31e24b01e1e524df96f1c2fdd054405f8d7376249a5110886fb4b658484789"
dependencies = [ dependencies = [
"serde_derive", "serde_derive",
] ]
[[package]] [[package]]
name = "serde_derive" name = "serde_derive"
version = "1.0.130" version = "1.0.136"
source = "registry+https://github.com/rust-lang/crates.io-index" source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "d7bc1a1ab1961464eae040d96713baa5a724a8152c1222492465b54322ec508b" checksum = "08597e7152fcd306f41838ed3e37be9eaeed2b61c42e2117266a554fab4662f9"
dependencies = [ dependencies = [
"proc-macro2", "proc-macro2",
"quote", "quote",
@ -995,9 +945,9 @@ dependencies = [
[[package]] [[package]]
name = "serde_json" name = "serde_json"
version = "1.0.68" version = "1.0.79"
source = "registry+https://github.com/rust-lang/crates.io-index" source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "0f690853975602e1bfe1ccbf50504d67174e3bcf340f23b5ea9992e0587a52d8" checksum = "8e8d9fa5c3b304765ce1fd9c4c8a3de2c8db365a5b91be52f186efc675681d95"
dependencies = [ dependencies = [
"itoa", "itoa",
"ryu", "ryu",
@ -1006,33 +956,33 @@ dependencies = [
[[package]] [[package]]
name = "serde_yaml" name = "serde_yaml"
version = "0.8.21" version = "0.8.23"
source = "registry+https://github.com/rust-lang/crates.io-index" source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "d8c608a35705a5d3cdc9fbe403147647ff34b921f8e833e49306df898f9b20af" checksum = "a4a521f2940385c165a24ee286aa8599633d162077a54bdcae2a6fd5a7bfa7a0"
dependencies = [ dependencies = [
"dtoa",
"indexmap", "indexmap",
"ryu",
"serde", "serde",
"yaml-rust", "yaml-rust",
] ]
[[package]] [[package]]
name = "similar" name = "similar"
version = "1.3.0" version = "2.1.0"
source = "registry+https://github.com/rust-lang/crates.io-index" source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "1ad1d488a557b235fc46dae55512ffbfc429d2482b08b4d9435ab07384ca8aec" checksum = "2e24979f63a11545f5f2c60141afe249d4f19f84581ea2138065e400941d83d3"
[[package]] [[package]]
name = "siphasher" name = "siphasher"
version = "0.3.7" version = "0.3.10"
source = "registry+https://github.com/rust-lang/crates.io-index" source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "533494a8f9b724d33625ab53c6c4800f7cc445895924a8ef649222dcb76e938b" checksum = "7bd3e3206899af3f8b12af284fafc038cc1dc2b41d1b89dd17297221c5d225de"
[[package]] [[package]]
name = "smallvec" name = "smallvec"
version = "1.6.1" version = "1.8.0"
source = "registry+https://github.com/rust-lang/crates.io-index" source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "fe0f37c9e8f3c5a4a66ad655a93c74daac4ad00c441533bf5c6e7990bb42604e" checksum = "f2dd574626839106c320a323308629dcb1acfc96e32a8cba364ddc61ac23ee83"
[[package]] [[package]]
name = "string-interner" name = "string-interner"
@ -1047,21 +997,22 @@ dependencies = [
[[package]] [[package]]
name = "string_cache" name = "string_cache"
version = "0.8.1" version = "0.8.3"
source = "registry+https://github.com/rust-lang/crates.io-index" source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "8ddb1139b5353f96e429e1a5e19fbaf663bddedaa06d1dbd49f82e352601209a" checksum = "33994d0838dc2d152d17a62adf608a869b5e846b65b389af7f3dbc1de45c5b26"
dependencies = [ dependencies = [
"lazy_static", "lazy_static",
"new_debug_unreachable", "new_debug_unreachable",
"phf_shared 0.8.0", "parking_lot",
"phf_shared 0.10.0",
"precomputed-hash", "precomputed-hash",
] ]
[[package]] [[package]]
name = "syn" name = "syn"
version = "1.0.76" version = "1.0.89"
source = "registry+https://github.com/rust-lang/crates.io-index" source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "c6f107db402c2c2055242dbf4d2af0e69197202e9faacbef9571bbe47f5a1b84" checksum = "ea297be220d52398dcc07ce15a209fce436d361735ac1db700cab3b6cdfb9f54"
dependencies = [ dependencies = [
"proc-macro2", "proc-macro2",
"quote", "quote",
@ -1070,13 +1021,13 @@ dependencies = [
[[package]] [[package]]
name = "tempfile" name = "tempfile"
version = "3.2.0" version = "3.3.0"
source = "registry+https://github.com/rust-lang/crates.io-index" source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "dac1c663cfc93810f88aed9b8941d48cabf856a1b111c29a40439018d870eb22" checksum = "5cdb1ef4eaeeaddc8fbd371e5017057064af0911902ef36b39801f67cc6d79e4"
dependencies = [ dependencies = [
"cfg-if", "cfg-if",
"fastrand",
"libc", "libc",
"rand",
"redox_syscall", "redox_syscall",
"remove_dir_all", "remove_dir_all",
"winapi", "winapi",
@ -1105,9 +1056,9 @@ dependencies = [
[[package]] [[package]]
name = "test-case" name = "test-case"
version = "1.2.0" version = "1.2.3"
source = "registry+https://github.com/rust-lang/crates.io-index" source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "3b114ece25254e97bf48dd4bfc2a12bad0647adacfe4cae1247a9ca6ad302cec" checksum = "e9e5f048404b43e8ae66dce036163515b6057024cf58c6377be501f250bd3c6a"
dependencies = [ dependencies = [
"cfg-if", "cfg-if",
"proc-macro2", "proc-macro2",
@ -1197,21 +1148,15 @@ checksum = "87d6678d7916394abad0d4b19df4d3802e1fd84abd7d701f39b75ee71b9e8cf1"
[[package]] [[package]]
name = "unindent" name = "unindent"
version = "0.1.7" version = "0.1.8"
source = "registry+https://github.com/rust-lang/crates.io-index" source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "f14ee04d9415b52b3aeab06258a3f07093182b88ba0f9b8d203f211a7a7d41c7" checksum = "514672a55d7380da379785a4d70ca8386c8883ff7eaae877be4d2081cebe73d8"
[[package]]
name = "uuid"
version = "0.8.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "bc5cf98d8186244414c848017f0e2676b3fcb46807f6668a97dfe67359a3c4b7"
[[package]] [[package]]
name = "version_check" name = "version_check"
version = "0.9.3" version = "0.9.4"
source = "registry+https://github.com/rust-lang/crates.io-index" source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "5fecdca9a5291cc2b8dcf7dc02453fee791a280f3743cb0905f8822ae463b3fe" checksum = "49874b5167b65d7193b8aba1567f5c7d93d001cafc34600cee003eda787e483f"
[[package]] [[package]]
name = "wasi" name = "wasi"


@@ -10,4 +10,3 @@ members = [
 [profile.release]
 debug = true


@@ -1,25 +1,69 @@
-# NAC3 compiler
+# NAC3
+
+NAC3 is a major, backward-incompatible rewrite of the compiler for the [ARTIQ](https://m-labs.hk/artiq) physics experiment control and data acquisition system. It features greatly improved compilation speeds, a much better type system, and more predictable and transparent operation.
+
+NAC3 has a modular design and its applicability reaches beyond ARTIQ. The ``nac3core`` module does not contain anything specific to ARTIQ, and can be used in any project that requires compiling Python to machine code.
+
+**WARNING: NAC3 is currently experimental software and several important features are not implemented yet.**
+
+## Packaging
+
+NAC3 is packaged using the [Nix](https://nixos.org) Flakes system. Install Nix 2.4+ and enable flakes by adding ``experimental-features = nix-command flakes`` to ``nix.conf`` (e.g. ``~/.config/nix/nix.conf``).
+
+## Try NAC3
+
+### Linux
+
+After setting up Nix as above, use ``nix shell git+https://github.com/m-labs/artiq.git?ref=nac3`` to get a shell with the NAC3 version of ARTIQ. See the ``examples`` directory in ARTIQ (``nac3`` Git branch) for some samples of NAC3 kernel code.
+
+### Windows (work in progress)
+
+NAC3 ARTIQ packaging for MSYS2/Windows is not yet complete so installation involves many manual steps. It is also less tested and you may encounter problems.
+
+Install [MSYS2](https://www.msys2.org/) and run the following commands:
+```
+pacman -S mingw-w64-x86_64-python-h5py mingw-w64-x86_64-python-pyqt5 mingw-w64-x86_64-python-scipy mingw-w64-x86_64-python-prettytable mingw-w64-x86_64-python-pygit2
+pacman -S mingw-w64-x86_64-python-pip
+pip install qasync
+pip install pyqtgraph
+pacman -S patch git
+git clone https://github.com/m-labs/sipyco
+cd sipyco
+git show 20c946aad78872fe60b78d9b57a624d69f3eea47 | patch -p1 -R
+python setup.py install
+cd ..
+git clone -b nac3 https://github.com/m-labs/artiq
+cd artiq
+python setup.py install
+```
+
+Locate a recent build of ``nac3artiq-mingw`` from [Hydra](https://nixbld.m-labs.hk) and download ``nac3artiq.zip``. Then extract the contents in the appropriate location:
+```
+pacman -S unzip
+wget https://nixbld.m-labs.hk/build/97899/download/1/nac3artiq.zip # edit the build number
+unzip nac3artiq.zip -d C:/msys64/mingw64/lib/python3.9/site-packages
+```
+
+Install additional NAC3 dependencies:
+```
+pacman -S mingw-w64-x86_64-lld
+wget https://nixbld.m-labs.hk/build/97899/download/1/mcfgthread-12.dll # edit the build number
+cp mcfgthread-12.dll C:/msys64/mingw64/bin
+```
+
+And you should be good to go.
+
+## For developers
 
 This repository contains:
-- nac3ast: Python abstract syntax tree definition (based on RustPython).
-- nac3parser: Python parser (based on RustPython).
-- nac3core: Core compiler library, containing type-checking and code
-  generation.
-- nac3standalone: Standalone compiler tool (core language only).
-- nac3artiq: Integration with ARTIQ and implementation of ARTIQ-specific
-  extensions to the core language.
-- runkernel: Simple program that runs compiled ARTIQ kernels on the host
-  and displays RTIO operations. Useful for testing without hardware.
+- ``nac3ast``: Python abstract syntax tree definition (based on RustPython).
+- ``nac3parser``: Python parser (based on RustPython).
+- ``nac3core``: Core compiler library, containing type-checking and code generation.
+- ``nac3standalone``: Standalone compiler tool (core language only).
+- ``nac3artiq``: Integration with ARTIQ and implementation of ARTIQ-specific extensions to the core language.
+- ``runkernel``: Simple program that runs compiled ARTIQ kernels on the host and displays RTIO operations. Useful for testing without hardware.
 
-The core compiler knows nothing about symbol resolution, host variables
-etc. nac3artiq and nac3standalone provide (implement) the
-symbol resolver to the core compiler for resolving the type and value for
-unknown symbols. The core compiler only type checks classes and functions
-requested by nac3artiq/nac3standalone (the API should allow the
-caller to specify which methods should be compiled). After type checking, the
-compiler analyses the set of functions/classes that are used and performs
-code generation.
-
-value could be integer values, boolean values, bytes (for memcpy), function ID
-(full name + concrete type)
+Use ``nix develop`` in this repository to enter a development shell.
+If you are using a different shell than bash you can use e.g. ``nix develop --command fish``.
+
+Build NAC3 with ``cargo build --release``. See the demonstrations in ``nac3artiq`` and ``nac3standalone``.
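The removed README text above describes the design split: the core compiler resolves unknown symbols through a resolver that nac3artiq/nac3standalone implement, with values such as integers, booleans, bytes, or a function ID. A minimal Rust sketch of what such an interface could look like — all names here are illustrative, not NAC3's actual API:

```rust
// Illustrative sketch only: a host-provided resolver supplying values for
// symbols the core compiler cannot resolve on its own.

/// Possible host values, per the description above (hypothetical variants).
#[derive(Debug, Clone, PartialEq)]
enum SymbolValue {
    Int(i64),
    Bool(bool),
    Bytes(Vec<u8>),                 // e.g. for memcpy-style constants
    Function { full_name: String }, // function ID: full name + concrete type
}

/// The core compiler would call back into this trait for unknown names.
trait SymbolResolver {
    fn resolve(&self, name: &str) -> Option<SymbolValue>;
}

/// A toy resolver standing in for nac3artiq/nac3standalone.
struct HostResolver;

impl SymbolResolver for HostResolver {
    fn resolve(&self, name: &str) -> Option<SymbolValue> {
        match name {
            "N_CHANNELS" => Some(SymbolValue::Int(8)),
            "DEBUG" => Some(SymbolValue::Bool(false)),
            _ => None,
        }
    }
}

fn main() {
    let resolver = HostResolver;
    // A known host variable resolves to a concrete value...
    println!("{:?}", resolver.resolve("N_CHANNELS")); // prints: Some(Int(8))
    // ...while a truly unknown name stays unresolved.
    println!("{:?}", resolver.resolve("nonexistent")); // prints: None
}
```

The point of the indirection is exactly what the text says: the core compiler stays host-agnostic, and each frontend supplies its own resolver.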


@@ -2,16 +2,16 @@
   "nodes": {
     "nixpkgs": {
       "locked": {
-        "lastModified": 1637636156,
-        "narHash": "sha256-E2ym4Vcpqu9JYoQDXJZR48gVD+LPPbaCoYveIk7Xu3Y=",
+        "lastModified": 1647282937,
+        "narHash": "sha256-K8Oo6QyFCfiEWTRpQVfzcwI3YNMKlz6Tu8rr+o3rzRQ=",
         "owner": "NixOS",
         "repo": "nixpkgs",
-        "rev": "b026e1cf87a108dd06fe521f224fdc72fd0b013d",
+        "rev": "64fc73bd74f04d3e10cb4e70e1c65b92337e76db",
         "type": "github"
       },
       "original": {
         "owner": "NixOS",
-        "ref": "release-21.11",
+        "ref": "nixos-21.11",
         "repo": "nixpkgs",
         "type": "github"
       }

flake.nix

@@ -1,29 +1,142 @@
 {
   description = "The third-generation ARTIQ compiler";
 
-  inputs.nixpkgs.url = github:NixOS/nixpkgs/release-21.11;
+  inputs.nixpkgs.url = github:NixOS/nixpkgs/nixos-21.11;
 
   outputs = { self, nixpkgs }:
     let
-      # We can't use overlays because llvm dependencies are handled internally in llvmPackages_xx
-      pkgs-orig = import nixpkgs { system = "x86_64-linux"; };
-      nixpkgs-patched = pkgs-orig.applyPatches {
-        name = "nixpkgs";
-        src = nixpkgs;
-        patches = [ ./llvm-future-riscv-abi.diff ./llvm-restrict-targets.diff ];
+      pkgs = import nixpkgs { system = "x86_64-linux"; };
+      pkgs-mingw = import nixpkgs {
+        system = "x86_64-linux";
+        crossSystem = { config = "x86_64-w64-mingw32"; libc = "msvcrt"; };
+        # work around https://github.com/NixOS/nixpkgs/issues/149593
+        overlays = [
+          (self: super: {
+            openssh = super.openssh.overrideAttrs(oa: { doCheck = false; });
+          })
+        ];
+      };
+      msys2-python-tar = pkgs.fetchurl {
+        url = "https://mirror.msys2.org/mingw/mingw64/mingw-w64-x86_64-python-3.9.7-4-any.pkg.tar.zst";
+        sha256 = "0iwlgbk4b457yn9djwqswid55xhyyi35qymz1lfh42xwdpxdm47c";
+      };
+      msys2-python = pkgs.stdenvNoCC.mkDerivation {
+        name = "msys2-python";
+        src = msys2-python-tar;
+        buildInputs = [ pkgs.gnutar pkgs.zstd ];
+        phases = [ "installPhase" ];
+        installPhase =
+          ''
+          mkdir $out
+          tar xf $src -C $out
+          '';
+      };
+      pyo3-mingw-config = pkgs.writeTextFile {
+        name = "pyo3-mingw-config";
+        text =
+          ''
+          implementation=CPython
+          version=3.9
+          shared=true
+          abi3=false
+          lib_name=python3.9
+          lib_dir=${msys2-python}/mingw64/lib
+          pointer_width=64
+          build_flags=WITH_THREAD
+          suppress_build_script_link_lines=false
+          '';
       };
-      pkgs = import nixpkgs-patched { system = "x86_64-linux"; };
     in rec {
-      inherit nixpkgs-patched;
-
-      packages.x86_64-linux = {
+      packages.x86_64-linux = rec {
+        llvm-nac3 = pkgs.callPackage "${self}/llvm" {};
         nac3artiq = pkgs.python3Packages.toPythonModule (
           pkgs.rustPlatform.buildRustPackage {
             name = "nac3artiq";
+            outputs = [ "out" "runkernel" "standalone" ];
             src = self;
-            cargoSha256 = "sha256-otKLhr58HYMjVXAof6AdObNpggPnvK6qOl7I+4LWIP8=";
-            nativeBuildInputs = [ pkgs.python3 pkgs.llvm_12 ];
-            buildInputs = [ pkgs.python3 pkgs.libffi pkgs.libxml2 pkgs.llvm_12 ];
+            cargoLock = { lockFile = ./Cargo.lock; };
+            nativeBuildInputs = [ pkgs.python3 pkgs.llvmPackages_13.clang-unwrapped llvm-nac3 ];
+            buildInputs = [ pkgs.python3 llvm-nac3 ];
+            checkInputs = [ (pkgs.python3.withPackages(ps: [ ps.numpy ])) ];
+            checkPhase =
+              ''
+              echo "Checking nac3standalone demos..."
+              pushd nac3standalone/demo
+              patchShebangs .
+              ./check_demos.sh
+              popd
+              echo "Running Cargo tests..."
+              cargoCheckHook
+              '';
+            installPhase =
+              ''
+              PYTHON_SITEPACKAGES=$out/${pkgs.python3Packages.python.sitePackages}
+              mkdir -p $PYTHON_SITEPACKAGES
+              cp target/x86_64-unknown-linux-gnu/release/libnac3artiq.so $PYTHON_SITEPACKAGES/nac3artiq.so
+              mkdir -p $runkernel/bin
+              cp target/x86_64-unknown-linux-gnu/release/runkernel $runkernel/bin
+              mkdir -p $standalone/bin
+              cp target/x86_64-unknown-linux-gnu/release/nac3standalone $standalone/bin
+              '';
+          }
+        );
+        python3-mimalloc = pkgs.python3 // rec {
+          withMimalloc = pkgs.python3.buildEnv.override({ makeWrapperArgs = [ "--set LD_PRELOAD ${pkgs.mimalloc}/lib/libmimalloc.so" ]; });
+          withPackages = f: let packages = f pkgs.python3.pkgs; in withMimalloc.override { extraLibs = packages; };
+        };
+
+        # LLVM PGO support
+        llvm-nac3-instrumented = pkgs.callPackage "${self}/llvm" {
+          stdenv = pkgs.llvmPackages_13.stdenv;
+          extraCmakeFlags = [ "-DLLVM_BUILD_INSTRUMENTED=IR" ];
+        };
+        nac3artiq-instrumented = pkgs.python3Packages.toPythonModule (
+          pkgs.rustPlatform.buildRustPackage {
+            name = "nac3artiq-instrumented";
+            src = self;
+            cargoLock = { lockFile = ./Cargo.lock; };
+            nativeBuildInputs = [ pkgs.python3 pkgs.llvmPackages_13.clang-unwrapped llvm-nac3-instrumented ];
+            buildInputs = [ pkgs.python3 llvm-nac3-instrumented ];
+            cargoBuildFlags = [ "--package" "nac3artiq" "--features" "init-llvm-profile" ];
+            doCheck = false;
+            configurePhase =
+              ''
+              export CARGO_TARGET_X86_64_UNKNOWN_LINUX_GNU_RUSTFLAGS="-C link-arg=-L${pkgs.llvmPackages_13.compiler-rt}/lib/linux -C link-arg=-lclang_rt.profile-x86_64"
+              '';
+            installPhase =
+              ''
+              TARGET_DIR=$out/${pkgs.python3Packages.python.sitePackages}
+              mkdir -p $TARGET_DIR
+              cp target/x86_64-unknown-linux-gnu/release/libnac3artiq.so $TARGET_DIR/nac3artiq.so
+              '';
+          }
+        );
+        nac3artiq-profile = pkgs.stdenvNoCC.mkDerivation {
+          name = "nac3artiq-profile";
+          src = self;
+          buildInputs = [ (python3-mimalloc.withPackages(ps: [ ps.numpy nac3artiq-instrumented ])) pkgs.lld_13 pkgs.llvmPackages_13.libllvm ];
+          phases = [ "buildPhase" "installPhase" ];
+          # TODO: get more representative code.
+          buildPhase = "python $src/nac3artiq/demo/demo.py";
+          installPhase =
+            ''
+            mkdir $out
+            llvm-profdata merge -o $out/llvm.profdata /build/llvm/build/profiles/*
+            '';
+        };
+        llvm-nac3-pgo = pkgs.callPackage "${self}/llvm" {
+          stdenv = pkgs.llvmPackages_13.stdenv;
+          extraCmakeFlags = [ "-DLLVM_PROFDATA_FILE=${nac3artiq-profile}/llvm.profdata" ];
+        };
+        nac3artiq-pgo = pkgs.python3Packages.toPythonModule (
+          pkgs.rustPlatform.buildRustPackage {
+            name = "nac3artiq-pgo";
+            src = self;
+            cargoLock = { lockFile = ./Cargo.lock; };
+            nativeBuildInputs = [ pkgs.python3 pkgs.llvmPackages_13.clang-unwrapped llvm-nac3-pgo ];
+            buildInputs = [ pkgs.python3 llvm-nac3-pgo ];
             cargoBuildFlags = [ "--package" "nac3artiq" ];
             cargoTestFlags = [ "--package" "nac3ast" "--package" "nac3parser" "--package" "nac3core" "--package" "nac3artiq" ];
             installPhase =
@@ -36,25 +149,87 @@
         );
       };
 
+      packages.x86_64-w64-mingw32 = rec {
+        llvm-nac3 = pkgs-mingw.callPackage "${self}/llvm" { inherit (pkgs) llvmPackages_13; };
+        nac3artiq = pkgs-mingw.python3Packages.toPythonModule (
+          pkgs-mingw.rustPlatform.buildRustPackage {
+            name = "nac3artiq";
+            src = self;
+            cargoLock = { lockFile = ./Cargo.lock; };
+            nativeBuildInputs = [ pkgs.llvmPackages_13.clang-unwrapped pkgs.llvmPackages_13.llvm pkgs.zip ];
+            buildInputs = [ pkgs-mingw.zlib ];
+            configurePhase =
+              ''
+              # Link libstdc++ statically. As usual with cargo, this is an adventure.
+              cp --no-preserve=mode,ownership -R $CARGO_HOME/cargo-vendor-dir/llvm-sys-130.0.3/ llvm-sys-130.0.3
+              substituteInPlace llvm-sys-130.0.3/build.rs --replace "cargo:rustc-link-lib=dylib=" "cargo:rustc-link-lib=static="
+              substituteInPlace llvm-sys-130.0.3/build.rs --replace "fn main() {" "fn main() { println!(\"cargo:rustc-link-search=native=${pkgs-mingw.stdenv.cc.cc}/x86_64-w64-mingw32/lib\");"
+              chmod 755 $CARGO_HOME/cargo-vendor-dir
+              rm $CARGO_HOME/cargo-vendor-dir/llvm-sys-130.0.3
+              mv llvm-sys-130.0.3 $CARGO_HOME/cargo-vendor-dir/llvm-sys-130.0.3
+
+              export PYO3_CONFIG_FILE=${pyo3-mingw-config}
+
+              mkdir llvm-cfg
+              cat << EOF > llvm-cfg/llvm-config
+              #!${pkgs.bash}/bin/bash
+              set -e
+              # Gross hack to work around llvm-config asking for the wrong system libraries.
+              exec ${llvm-nac3.dev}/bin/llvm-config-native \$@ | ${pkgs.gnused}/bin/sed s/-lrt\ -ldl\ -lpthread\ -lm//
+              EOF
+              chmod +x llvm-cfg/llvm-config
+              export PATH=`pwd`/llvm-cfg:$PATH
+
+              export CARGO_TARGET_X86_64_PC_WINDOWS_GNU_RUSTFLAGS="-C link-arg=-lz -C link-arg=-luuid -C link-arg=-lole32 -C link-arg=-lmcfgthread"
+              '';
+            cargoBuildFlags = [ "--package" "nac3artiq" ];
+            doCheck = false;
+            installPhase =
+              ''
+              mkdir -p $out $out/nix-support
+              ln -s target/x86_64-pc-windows-gnu/release/nac3artiq.dll nac3artiq.pyd
+              zip $out/nac3artiq.zip nac3artiq.pyd
+              echo file binary-dist $out/nac3artiq.zip >> $out/nix-support/hydra-build-products
+              '';
+            dontFixup = true;
+            meta.platforms = ["x86_64-windows"];
+          }
+        );
+      };
+
       devShell.x86_64-linux = pkgs.mkShell {
         name = "nac3-dev-shell";
         buildInputs = with pkgs; [
-          llvm_12
-          clang_12
-          lld_12
+          # build dependencies
+          packages.x86_64-linux.llvm-nac3
+          llvmPackages_13.clang-unwrapped # IRRT
           cargo
-          cargo-insta
           rustc
-          libffi
-          libxml2
+          # runtime dependencies
+          lld_13
+          (packages.x86_64-linux.python3-mimalloc.withPackages(ps: [ ps.numpy ]))
+          # development tools
+          cargo-insta
           clippy
-          (python3.withPackages(ps: [ ps.numpy ]))
+          rustfmt
         ];
       };
 
       hydraJobs = {
-        inherit (packages.x86_64-linux) nac3artiq;
-      } // (pkgs.lib.foldr (a: b: {"${pkgs.lib.strings.getName a}" = a;} // b) {} devShell.x86_64-linux.buildInputs);
+        inherit (packages.x86_64-linux) llvm-nac3 nac3artiq;
+        llvm-nac3-mingw = packages.x86_64-w64-mingw32.llvm-nac3;
+        nac3artiq-mingw = packages.x86_64-w64-mingw32.nac3artiq;
+        mcfgthreads = pkgs-mingw.stdenvNoCC.mkDerivation {
+          name = "mcfgthreads-hydra";
+          phases = [ "installPhase" ];
+          installPhase =
+            ''
+            mkdir -p $out $out/nix-support
+            ln -s ${pkgs-mingw.windows.mcfgthreads}/bin/mcfgthread-12.dll $out/
+            echo file binary-dist $out/mcfgthread-12.dll >> $out/nix-support/hydra-build-products
+            '';
+        };
+      };
     };
 
   nixConfig = {


@@ -1,61 +0,0 @@
commit 6e2dea56207b4e52ade9d1eee6a4f198336dd0a6
Author: Sebastien Bourdeauducq <sb@m-labs.hk>
Date: Thu Nov 11 23:32:13 2021 +0800
llvm: switch RISC-V ABI when FPU is present
diff --git a/pkgs/development/compilers/llvm/12/llvm/default.nix b/pkgs/development/compilers/llvm/12/llvm/default.nix
index 30a1a7a16df..41b7211b2a5 100644
--- a/pkgs/development/compilers/llvm/12/llvm/default.nix
+++ b/pkgs/development/compilers/llvm/12/llvm/default.nix
@@ -66,6 +66,7 @@ in stdenv.mkDerivation (rec {
sha256 = "sha256:12s8vr6ibri8b48h2z38f3afhwam10arfiqfy4yg37bmc054p5hi";
stripLen = 1;
})
+ ./llvm-future-riscv-abi.diff
] ++ lib.optional enablePolly ./gnu-install-dirs-polly.patch;
postPatch = optionalString stdenv.isDarwin ''
@@ -183,7 +184,7 @@ in stdenv.mkDerivation (rec {
cp NATIVE/bin/llvm-config $dev/bin/llvm-config-native
'';
- doCheck = stdenv.isLinux && (!stdenv.isx86_32) && (!stdenv.hostPlatform.isMusl);
+ doCheck = false; # the ABI change breaks RISC-V FP tests
checkTarget = "check-all";
diff --git a/pkgs/development/compilers/llvm/12/llvm/llvm-future-riscv-abi.diff b/pkgs/development/compilers/llvm/12/llvm/llvm-future-riscv-abi.diff
new file mode 100644
index 00000000000..2427ed0e02c
--- /dev/null
+++ b/pkgs/development/compilers/llvm/12/llvm/llvm-future-riscv-abi.diff
@@ -0,0 +1,28 @@
+diff --git a/lib/Target/RISCV/MCTargetDesc/RISCVBaseInfo.cpp b/lib/Target/RISCV/MCTargetDesc/RISCVBaseInfo.cpp
+index 0aba18b20..9bb75e7f4 100644
+--- a/lib/Target/RISCV/MCTargetDesc/RISCVBaseInfo.cpp
++++ b/lib/Target/RISCV/MCTargetDesc/RISCVBaseInfo.cpp
+@@ -33,6 +33,8 @@ ABI computeTargetABI(const Triple &TT, FeatureBitset FeatureBits,
+ auto TargetABI = getTargetABI(ABIName);
+ bool IsRV64 = TT.isArch64Bit();
+ bool IsRV32E = FeatureBits[RISCV::FeatureRV32E];
++ bool IsRV32D = FeatureBits[RISCV::FeatureStdExtD];
++ bool IsRV32F = FeatureBits[RISCV::FeatureStdExtF];
+
+ if (!ABIName.empty() && TargetABI == ABI_Unknown) {
+ errs()
+@@ -56,10 +58,10 @@ ABI computeTargetABI(const Triple &TT, FeatureBitset FeatureBits,
+ if (TargetABI != ABI_Unknown)
+ return TargetABI;
+
+- // For now, default to the ilp32/ilp32e/lp64 ABI if no explicit ABI is given
+- // or an invalid/unrecognised string is given. In the future, it might be
+- // worth changing this to default to ilp32f/lp64f and ilp32d/lp64d when
+- // hardware support for floating point is present.
++ if (IsRV32D)
++ return ABI_ILP32D;
++ if (IsRV32F)
++ return ABI_ILP32F;
+ if (IsRV32E)
+ return ABI_ILP32E;
+ if (IsRV64)


@@ -1,12 +0,0 @@
diff --git a/pkgs/development/compilers/llvm/12/llvm/default.nix b/pkgs/development/compilers/llvm/12/llvm/default.nix
index 41b7211b2a5..dfc707f034d 100644
--- a/pkgs/development/compilers/llvm/12/llvm/default.nix
+++ b/pkgs/development/compilers/llvm/12/llvm/default.nix
@@ -127,6 +127,7 @@ in stdenv.mkDerivation (rec {
"-DLLVM_HOST_TRIPLE=${stdenv.hostPlatform.config}"
"-DLLVM_DEFAULT_TARGET_TRIPLE=${stdenv.hostPlatform.config}"
"-DLLVM_ENABLE_DUMP=ON"
+ "-DLLVM_TARGETS_TO_BUILD=X86;ARM;RISCV"
] ++ optionals enableSharedLibraries [
"-DLLVM_LINK_LLVM_DYLIB=ON"
] ++ optionals enableManpages [

llvm/TLI-musl.patch

@@ -0,0 +1,35 @@
From 5c571082fdaf61f6df19d9b7137dc26d71334058 Mon Sep 17 00:00:00 2001
From: Natanael Copa <ncopa@alpinelinux.org>
Date: Thu, 18 Feb 2016 10:33:04 +0100
Subject: [PATCH 2/3] Fix build with musl libc
On musl libc the fopen64 and fopen are the same thing, but for
compatibility they have a `#define fopen64 fopen`. Same applies for
fseek64, fstat64, fstatvfs64, ftello64, lstat64, stat64 and tmpfile64.
---
include/llvm/Analysis/TargetLibraryInfo.h | 9 +++++++++
1 file changed, 9 insertions(+)
diff --git a/include/llvm/Analysis/TargetLibraryInfo.h b/include/llvm/Analysis/TargetLibraryInfo.h
index 7becdf0..7f14427 100644
--- a/include/llvm/Analysis/TargetLibraryInfo.h
+++ b/include/llvm/Analysis/TargetLibraryInfo.h
@@ -18,6 +18,15 @@
#include "llvm/IR/Module.h"
#include "llvm/Pass.h"
+#undef fopen64
+#undef fseeko64
+#undef fstat64
+#undef fstatvfs64
+#undef ftello64
+#undef lstat64
+#undef stat64
+#undef tmpfile64
+
namespace llvm {
/// VecDesc - Describes a possible vectorization of a function.
/// Function 'VectorFnName' is equivalent to 'ScalarFnName' vectorized
--
2.7.3

llvm/default.nix

@@ -0,0 +1,228 @@
{ lib, stdenv
, pkgsBuildBuild
, fetchurl
, fetchpatch
, cmake
, python3
, libbfd
, ncurses
, zlib
, which
, llvmPackages_13
, debugVersion ? false
, enableManpages ? false
, enableSharedLibraries ? false
, extraCmakeFlags ? []
}:
let
inherit (lib) optional optionals optionalString;
release_version = "13.0.1";
candidate = ""; # empty or "rcN"
dash-candidate = lib.optionalString (candidate != "") "-${candidate}";
version = "${release_version}${dash-candidate}"; # differentiating these (variables) is important for RCs
fetch = name: sha256: fetchurl {
url = "https://github.com/llvm/llvm-project/releases/download/llvmorg-${version}/${name}-${release_version}${candidate}.src.tar.xz";
inherit sha256;
};
# Used when creating a version-suffixed symlink of libLLVM.dylib
shortVersion = with lib;
concatStringsSep "." (take 1 (splitString "." release_version));
in stdenv.mkDerivation (rec {
pname = "llvm";
inherit version;
src = fetch pname "sha256-7GuA2Cw4SsrS3BkpA6bPLNuv+4ibhL+5janXHmMPyDQ=";
unpackPhase = ''
unpackFile $src
mv llvm-${release_version}* llvm
sourceRoot=$PWD/llvm
'';
outputs = [ "out" "lib" "dev" "python" ];
nativeBuildInputs = [ cmake python3 ]
++ optionals enableManpages [ python3.pkgs.sphinx python3.pkgs.recommonmark ];
buildInputs = [ ];
propagatedBuildInputs = optionals (stdenv.buildPlatform == stdenv.hostPlatform) [ ncurses ]
++ [ zlib ];
checkInputs = [ which ];
patches = [
./gnu-install-dirs.patch
# Fix random compiler crashes: https://bugs.llvm.org/show_bug.cgi?id=50611
(fetchpatch {
url = "https://raw.githubusercontent.com/archlinux/svntogit-packages/4764a4f8c920912a2bfd8b0eea57273acfe0d8a8/trunk/no-strict-aliasing-DwarfCompileUnit.patch";
sha256 = "18l6mrvm2vmwm77ckcnbjvh6ybvn72rhrb799d4qzwac4x2ifl7g";
stripLen = 1;
})
./llvm-future-riscv-abi.diff
];
postPatch = optionalString stdenv.isDarwin ''
substituteInPlace cmake/modules/AddLLVM.cmake \
--replace 'set(_install_name_dir INSTALL_NAME_DIR "@rpath")' "set(_install_name_dir)" \
--replace 'set(_install_rpath "@loader_path/../''${CMAKE_INSTALL_LIBDIR}''${LLVM_LIBDIR_SUFFIX}" ''${extra_libdir})' ""
''
# Patch llvm-config to return correct library path based on --link-{shared,static}.
+ ''
substitute '${./outputs.patch}' ./outputs.patch --subst-var lib
patch -p1 < ./outputs.patch
'' + ''
# FileSystem permissions tests fail with various special bits
substituteInPlace unittests/Support/CMakeLists.txt \
--replace "Path.cpp" ""
rm unittests/Support/Path.cpp
substituteInPlace unittests/IR/CMakeLists.txt \
--replace "PassBuilderCallbacksTest.cpp" ""
rm unittests/IR/PassBuilderCallbacksTest.cpp
rm test/tools/llvm-objcopy/ELF/mirror-permissions-unix.test
'' + optionalString stdenv.hostPlatform.isMusl ''
patch -p1 -i ${./TLI-musl.patch}
substituteInPlace unittests/Support/CMakeLists.txt \
--replace "add_subdirectory(DynamicLibrary)" ""
rm unittests/Support/DynamicLibrary/DynamicLibraryTest.cpp
# valgrind unhappy with musl or glibc, but fails w/musl only
rm test/CodeGen/AArch64/wineh4.mir
'' + optionalString stdenv.hostPlatform.isAarch32 ''
# skip failing X86 test cases on 32-bit ARM
rm test/DebugInfo/X86/convert-debugloc.ll
rm test/DebugInfo/X86/convert-inlined.ll
rm test/DebugInfo/X86/convert-linked.ll
rm test/tools/dsymutil/X86/op-convert.test
'' + optionalString (stdenv.hostPlatform.system == "armv6l-linux") ''
# Seems to require certain floating point hardware (NEON?)
rm test/ExecutionEngine/frem.ll
'' + ''
patchShebangs test/BugPoint/compile-custom.ll.py
'';
# hacky fix: created binaries need to be run before installation
preBuild = ''
mkdir -p $out/
ln -sv $PWD/lib $out
'';
# E.g. mesa.drivers use the build-id as a cache key (see #93946):
LDFLAGS = optionalString (enableSharedLibraries && !stdenv.isDarwin) "-Wl,--build-id=sha1";
cmakeFlags = with stdenv; [
"-DLLVM_INSTALL_CMAKE_DIR=${placeholder "dev"}/lib/cmake/llvm/"
"-DCMAKE_BUILD_TYPE=${if debugVersion then "Debug" else "Release"}"
"-DLLVM_BUILD_TESTS=${if stdenv.targetPlatform.isMinGW then "OFF" else "ON"}"
"-DLLVM_HOST_TRIPLE=${stdenv.hostPlatform.config}"
"-DLLVM_DEFAULT_TARGET_TRIPLE=${stdenv.hostPlatform.config}"
"-DLLVM_ENABLE_UNWIND_TABLES=OFF"
"-DLLVM_ENABLE_THREADS=OFF"
"-DLLVM_TARGETS_TO_BUILD=X86;ARM;RISCV"
] ++ optionals enableSharedLibraries [
"-DLLVM_LINK_LLVM_DYLIB=ON"
] ++ optionals enableManpages [
"-DLLVM_BUILD_DOCS=ON"
"-DLLVM_ENABLE_SPHINX=ON"
"-DSPHINX_OUTPUT_MAN=ON"
"-DSPHINX_OUTPUT_HTML=OFF"
"-DSPHINX_WARNINGS_AS_ERRORS=OFF"
] ++ optionals (!isDarwin && !stdenv.targetPlatform.isMinGW) [
"-DLLVM_BINUTILS_INCDIR=${libbfd.dev}/include"
] ++ optionals isDarwin [
"-DLLVM_ENABLE_LIBCXX=ON"
"-DCAN_TARGET_i386=false"
] ++ optionals (stdenv.hostPlatform != stdenv.buildPlatform) [
"-DCMAKE_CROSSCOMPILING=True"
"-DLLVM_TABLEGEN=${llvmPackages_13.tools.llvm}/bin/llvm-tblgen"
(
let
nativeCC = pkgsBuildBuild.targetPackages.stdenv.cc;
nativeBintools = nativeCC.bintools.bintools;
nativeToolchainFlags = [
"-DCMAKE_C_COMPILER=${nativeCC}/bin/${nativeCC.targetPrefix}cc"
"-DCMAKE_CXX_COMPILER=${nativeCC}/bin/${nativeCC.targetPrefix}c++"
"-DCMAKE_AR=${nativeBintools}/bin/${nativeBintools.targetPrefix}ar"
"-DCMAKE_STRIP=${nativeBintools}/bin/${nativeBintools.targetPrefix}strip"
"-DCMAKE_RANLIB=${nativeBintools}/bin/${nativeBintools.targetPrefix}ranlib"
];
in "-DCROSS_TOOLCHAIN_FLAGS_NATIVE:list=${lib.concatStringsSep ";" nativeToolchainFlags}"
)
] ++ extraCmakeFlags;
postBuild = ''
rm -fR $out
'';
preCheck = ''
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH''${LD_LIBRARY_PATH:+:}$PWD/lib
'';
postInstall = ''
mkdir -p $python/share
mv $out/share/opt-viewer $python/share/opt-viewer
moveToOutput "bin/llvm-config*" "$dev"
substituteInPlace "$dev/lib/cmake/llvm/LLVMExports-${if debugVersion then "debug" else "release"}.cmake" \
--replace "\''${_IMPORT_PREFIX}/lib/lib" "$lib/lib/lib" \
--replace "$out/bin/llvm-config" "$dev/bin/llvm-config"
substituteInPlace "$dev/lib/cmake/llvm/LLVMConfig.cmake" \
--replace 'set(LLVM_BINARY_DIR "''${LLVM_INSTALL_PREFIX}")' 'set(LLVM_BINARY_DIR "''${LLVM_INSTALL_PREFIX}'"$lib"'")'
''
+ optionalString (stdenv.isDarwin && enableSharedLibraries) ''
ln -s $lib/lib/libLLVM.dylib $lib/lib/libLLVM-${shortVersion}.dylib
ln -s $lib/lib/libLLVM.dylib $lib/lib/libLLVM-${release_version}.dylib
''
+ optionalString (stdenv.buildPlatform != stdenv.hostPlatform) ''
cp NATIVE/bin/llvm-config $dev/bin/llvm-config-native
'';
doCheck = false; # the ABI change breaks RISC-V FP tests
checkTarget = "check-all";
requiredSystemFeatures = [ "big-parallel" ];
meta = {
homepage = "https://llvm.org/";
description = "A collection of modular and reusable compiler and toolchain technologies";
longDescription = ''
The LLVM Project is a collection of modular and reusable compiler and
toolchain technologies. Despite its name, LLVM has little to do with
traditional virtual machines. The name "LLVM" itself is not an acronym; it
is the full name of the project.
LLVM began as a research project at the University of Illinois, with the
goal of providing a modern, SSA-based compilation strategy capable of
supporting both static and dynamic compilation of arbitrary programming
languages. Since then, LLVM has grown to be an umbrella project consisting
of a number of subprojects, many of which are being used in production by
a wide variety of commercial and open source projects as well as being
widely used in academic research. Code in the LLVM project is licensed
under the "Apache 2.0 License with LLVM exceptions".
'';
};
} // lib.optionalAttrs enableManpages {
pname = "llvm-manpages";
buildPhase = ''
make docs-llvm-man
'';
propagatedBuildInputs = [];
installPhase = ''
make -C docs install
'';
postPatch = null;
postInstall = null;
outputs = [ "out" ];
doCheck = false;
meta = {
description = "man pages for LLVM ${version}";
};
})

llvm/gnu-install-dirs.patch

@@ -0,0 +1,381 @@
diff --git a/CMakeLists.txt b/CMakeLists.txt
index 135036f509d2..265c36f8211b 100644
--- a/CMakeLists.txt
+++ b/CMakeLists.txt
@@ -270,15 +270,21 @@ if (CMAKE_BUILD_TYPE AND
message(FATAL_ERROR "Invalid value for CMAKE_BUILD_TYPE: ${CMAKE_BUILD_TYPE}")
endif()
+include(GNUInstallDirs)
+
set(LLVM_LIBDIR_SUFFIX "" CACHE STRING "Define suffix of library directory name (32/64)" )
-set(LLVM_TOOLS_INSTALL_DIR "bin" CACHE STRING "Path for binary subdirectory (defaults to 'bin')")
+set(LLVM_TOOLS_INSTALL_DIR "${CMAKE_INSTALL_BINDIR}" CACHE STRING
+ "Path for binary subdirectory (defaults to 'bin')")
mark_as_advanced(LLVM_TOOLS_INSTALL_DIR)
set(LLVM_UTILS_INSTALL_DIR "${LLVM_TOOLS_INSTALL_DIR}" CACHE STRING
"Path to install LLVM utilities (enabled by LLVM_INSTALL_UTILS=ON) (defaults to LLVM_TOOLS_INSTALL_DIR)")
mark_as_advanced(LLVM_UTILS_INSTALL_DIR)
+set(LLVM_INSTALL_CMAKE_DIR "${CMAKE_INSTALL_LIBDIR}${LLVM_LIBDIR_SUFFIX}/cmake/llvm" CACHE STRING
+ "Path for CMake subdirectory (defaults to lib/cmake/llvm)" )
+
# They are used as destination of target generators.
set(LLVM_RUNTIME_OUTPUT_INTDIR ${CMAKE_CURRENT_BINARY_DIR}/${CMAKE_CFG_INTDIR}/bin)
set(LLVM_LIBRARY_OUTPUT_INTDIR ${CMAKE_CURRENT_BINARY_DIR}/${CMAKE_CFG_INTDIR}/lib${LLVM_LIBDIR_SUFFIX})
@@ -581,9 +587,9 @@ option (LLVM_ENABLE_SPHINX "Use Sphinx to generate llvm documentation." OFF)
option (LLVM_ENABLE_OCAMLDOC "Build OCaml bindings documentation." ON)
option (LLVM_ENABLE_BINDINGS "Build bindings." ON)
-set(LLVM_INSTALL_DOXYGEN_HTML_DIR "share/doc/llvm/doxygen-html"
+set(LLVM_INSTALL_DOXYGEN_HTML_DIR "${CMAKE_INSTALL_DOCDIR}/${project}/doxygen-html"
CACHE STRING "Doxygen-generated HTML documentation install directory")
-set(LLVM_INSTALL_OCAMLDOC_HTML_DIR "share/doc/llvm/ocaml-html"
+set(LLVM_INSTALL_OCAMLDOC_HTML_DIR "${CMAKE_INSTALL_DOCDIR}/${project}/ocaml-html"
CACHE STRING "OCamldoc-generated HTML documentation install directory")
option (LLVM_BUILD_EXTERNAL_COMPILER_RT
@@ -1048,7 +1054,7 @@ endif()
if (NOT LLVM_INSTALL_TOOLCHAIN_ONLY)
install(DIRECTORY include/llvm include/llvm-c
- DESTINATION include
+ DESTINATION ${CMAKE_INSTALL_INCLUDEDIR}
COMPONENT llvm-headers
FILES_MATCHING
PATTERN "*.def"
@@ -1059,7 +1065,7 @@ if (NOT LLVM_INSTALL_TOOLCHAIN_ONLY)
)
install(DIRECTORY ${LLVM_INCLUDE_DIR}/llvm ${LLVM_INCLUDE_DIR}/llvm-c
- DESTINATION include
+ DESTINATION ${CMAKE_INSTALL_INCLUDEDIR}
COMPONENT llvm-headers
FILES_MATCHING
PATTERN "*.def"
@@ -1073,13 +1079,13 @@ if (NOT LLVM_INSTALL_TOOLCHAIN_ONLY)
if (LLVM_INSTALL_MODULEMAPS)
install(DIRECTORY include/llvm include/llvm-c
- DESTINATION include
+ DESTINATION ${CMAKE_INSTALL_INCLUDEDIR}
COMPONENT llvm-headers
FILES_MATCHING
PATTERN "module.modulemap"
)
install(FILES include/llvm/module.install.modulemap
- DESTINATION include/llvm
+ DESTINATION ${CMAKE_INSTALL_INCLUDEDIR}/llvm
COMPONENT llvm-headers
RENAME "module.extern.modulemap"
)
diff --git a/cmake/modules/AddLLVM.cmake b/cmake/modules/AddLLVM.cmake
index 9c2b85374307..5531ceeb2eeb 100644
--- a/cmake/modules/AddLLVM.cmake
+++ b/cmake/modules/AddLLVM.cmake
@@ -818,9 +818,9 @@ macro(add_llvm_library name)
get_target_export_arg(${name} LLVM export_to_llvmexports ${umbrella})
install(TARGETS ${name}
${export_to_llvmexports}
- LIBRARY DESTINATION lib${LLVM_LIBDIR_SUFFIX} COMPONENT ${name}
- ARCHIVE DESTINATION lib${LLVM_LIBDIR_SUFFIX} COMPONENT ${name}
- RUNTIME DESTINATION bin COMPONENT ${name})
+ LIBRARY DESTINATION ${CMAKE_INSTALL_LIBDIR}${LLVM_LIBDIR_SUFFIX} COMPONENT ${name}
+ ARCHIVE DESTINATION ${CMAKE_INSTALL_LIBDIR}${LLVM_LIBDIR_SUFFIX} COMPONENT ${name}
+ RUNTIME DESTINATION ${CMAKE_INSTALL_BINDIR} COMPONENT ${name})
if (NOT LLVM_ENABLE_IDE)
add_llvm_install_targets(install-${name}
@@ -1036,7 +1036,7 @@ function(process_llvm_pass_plugins)
"set(LLVM_STATIC_EXTENSIONS ${LLVM_STATIC_EXTENSIONS})")
install(FILES
${llvm_cmake_builddir}/LLVMConfigExtensions.cmake
- DESTINATION ${LLVM_INSTALL_PACKAGE_DIR}
+ DESTINATION ${LLVM_INSTALL_CMAKE_DIR}
COMPONENT cmake-exports)
set(ExtensionDef "${LLVM_BINARY_DIR}/include/llvm/Support/Extension.def")
@@ -1250,7 +1250,7 @@ macro(add_llvm_example name)
endif()
add_llvm_executable(${name} ${ARGN})
if( LLVM_BUILD_EXAMPLES )
- install(TARGETS ${name} RUNTIME DESTINATION examples)
+ install(TARGETS ${name} RUNTIME DESTINATION ${CMAKE_INSTALL_DOCDIR}/examples)
endif()
set_target_properties(${name} PROPERTIES FOLDER "Examples")
endmacro(add_llvm_example name)
@@ -1868,7 +1868,7 @@ function(llvm_install_library_symlink name dest type)
set(full_name ${CMAKE_${type}_LIBRARY_PREFIX}${name}${CMAKE_${type}_LIBRARY_SUFFIX})
set(full_dest ${CMAKE_${type}_LIBRARY_PREFIX}${dest}${CMAKE_${type}_LIBRARY_SUFFIX})
- set(output_dir lib${LLVM_LIBDIR_SUFFIX})
+ set(output_dir ${CMAKE_INSTALL_FULL_LIBDIR}${LLVM_LIBDIR_SUFFIX})
if(WIN32 AND "${type}" STREQUAL "SHARED")
set(output_dir bin)
endif()
@@ -1879,7 +1879,7 @@ function(llvm_install_library_symlink name dest type)
endfunction()
-function(llvm_install_symlink name dest)
+function(llvm_install_symlink name dest output_dir)
cmake_parse_arguments(ARG "ALWAYS_GENERATE" "COMPONENT" "" ${ARGN})
foreach(path ${CMAKE_MODULE_PATH})
if(EXISTS ${path}/LLVMInstallSymlink.cmake)
@@ -1902,7 +1902,7 @@ function(llvm_install_symlink name dest)
set(full_dest ${dest}${CMAKE_EXECUTABLE_SUFFIX})
install(SCRIPT ${INSTALL_SYMLINK}
- CODE "install_symlink(${full_name} ${full_dest} ${LLVM_TOOLS_INSTALL_DIR})"
+ CODE "install_symlink(${full_name} ${full_dest} ${output_dir})"
COMPONENT ${component})
if (NOT LLVM_ENABLE_IDE AND NOT ARG_ALWAYS_GENERATE)
@@ -1985,7 +1985,8 @@ function(add_llvm_tool_symlink link_name target)
endif()
if ((TOOL_IS_TOOLCHAIN OR NOT LLVM_INSTALL_TOOLCHAIN_ONLY) AND LLVM_BUILD_TOOLS)
- llvm_install_symlink(${link_name} ${target})
+ GNUInstallDirs_get_absolute_install_dir(output_dir LLVM_TOOLS_INSTALL_DIR)
+ llvm_install_symlink(${link_name} ${target} ${output_dir})
endif()
endif()
endfunction()
@@ -2114,9 +2115,9 @@ function(llvm_setup_rpath name)
# Since BUILD_SHARED_LIBS is only recommended for use by developers,
# hardcode the rpath to build/install lib dir first in this mode.
# FIXME: update this when there is better solution.
- set(_install_rpath "${LLVM_LIBRARY_OUTPUT_INTDIR}" "${CMAKE_INSTALL_PREFIX}/lib${LLVM_LIBDIR_SUFFIX}" ${extra_libdir})
+ set(_install_rpath "${LLVM_LIBRARY_OUTPUT_INTDIR}" "${CMAKE_INSTALL_PREFIX}/${CMAKE_INSTALL_LIBDIR}${LLVM_LIBDIR_SUFFIX}" ${extra_libdir})
elseif(UNIX)
- set(_install_rpath "\$ORIGIN/../lib${LLVM_LIBDIR_SUFFIX}" ${extra_libdir})
+ set(_install_rpath "\$ORIGIN/../${CMAKE_INSTALL_LIBDIR}${LLVM_LIBDIR_SUFFIX}" ${extra_libdir})
if(${CMAKE_SYSTEM_NAME} MATCHES "(FreeBSD|DragonFly)")
set_property(TARGET ${name} APPEND_STRING PROPERTY
LINK_FLAGS " -Wl,-z,origin ")
diff --git a/cmake/modules/AddOCaml.cmake b/cmake/modules/AddOCaml.cmake
index 554046b20edf..4d1ad980641e 100644
--- a/cmake/modules/AddOCaml.cmake
+++ b/cmake/modules/AddOCaml.cmake
@@ -144,9 +144,9 @@ function(add_ocaml_library name)
endforeach()
if( APPLE )
- set(ocaml_rpath "@executable_path/../../../lib${LLVM_LIBDIR_SUFFIX}")
+ set(ocaml_rpath "@executable_path/../../../${CMAKE_INSTALL_LIBDIR}${LLVM_LIBDIR_SUFFIX}")
elseif( UNIX )
- set(ocaml_rpath "\\$ORIGIN/../../../lib${LLVM_LIBDIR_SUFFIX}")
+ set(ocaml_rpath "\\$ORIGIN/../../../${CMAKE_INSTALL_LIBDIR}${LLVM_LIBDIR_SUFFIX}")
endif()
list(APPEND ocaml_flags "-ldopt" "-Wl,-rpath,${ocaml_rpath}")
diff --git a/cmake/modules/AddSphinxTarget.cmake b/cmake/modules/AddSphinxTarget.cmake
index e80c3b5c1cac..482f6d715ef5 100644
--- a/cmake/modules/AddSphinxTarget.cmake
+++ b/cmake/modules/AddSphinxTarget.cmake
@@ -90,7 +90,7 @@ function (add_sphinx_target builder project)
endif()
elseif (builder STREQUAL html)
string(TOUPPER "${project}" project_upper)
- set(${project_upper}_INSTALL_SPHINX_HTML_DIR "share/doc/${project}/html"
+ set(${project_upper}_INSTALL_SPHINX_HTML_DIR "${CMAKE_INSTALL_DOCDIR}/${project}/html"
CACHE STRING "HTML documentation install directory for ${project}")
# '/.' indicates: copy the contents of the directory directly into
diff --git a/cmake/modules/CMakeLists.txt b/cmake/modules/CMakeLists.txt
index 51b6a4fdc284..4adc2acfc074 100644
--- a/cmake/modules/CMakeLists.txt
+++ b/cmake/modules/CMakeLists.txt
@@ -1,6 +1,6 @@
include(LLVMDistributionSupport)
-set(LLVM_INSTALL_PACKAGE_DIR lib${LLVM_LIBDIR_SUFFIX}/cmake/llvm)
+set(LLVM_INSTALL_PACKAGE_DIR ${LLVM_INSTALL_CMAKE_DIR} CACHE STRING "Path for CMake subdirectory (defaults to 'cmake/llvm')")
set(llvm_cmake_builddir "${LLVM_BINARY_DIR}/${LLVM_INSTALL_PACKAGE_DIR}")
# First for users who use an installed LLVM, create the LLVMExports.cmake file.
@@ -109,13 +109,13 @@ foreach(p ${_count})
set(LLVM_CONFIG_CODE "${LLVM_CONFIG_CODE}
get_filename_component(LLVM_INSTALL_PREFIX \"\${LLVM_INSTALL_PREFIX}\" PATH)")
endforeach(p)
-set(LLVM_CONFIG_INCLUDE_DIRS "\${LLVM_INSTALL_PREFIX}/include")
+set(LLVM_CONFIG_INCLUDE_DIRS "\${LLVM_INSTALL_PREFIX}/${CMAKE_INSTALL_INCLUDEDIR}")
set(LLVM_CONFIG_INCLUDE_DIR "${LLVM_CONFIG_INCLUDE_DIRS}")
set(LLVM_CONFIG_MAIN_INCLUDE_DIR "${LLVM_CONFIG_INCLUDE_DIRS}")
-set(LLVM_CONFIG_LIBRARY_DIRS "\${LLVM_INSTALL_PREFIX}/lib\${LLVM_LIBDIR_SUFFIX}")
+set(LLVM_CONFIG_LIBRARY_DIRS "\${LLVM_INSTALL_PREFIX}/${CMAKE_INSTALL_LIBDIR}\${LLVM_LIBDIR_SUFFIX}")
set(LLVM_CONFIG_CMAKE_DIR "\${LLVM_INSTALL_PREFIX}/${LLVM_INSTALL_PACKAGE_DIR}")
set(LLVM_CONFIG_BINARY_DIR "\${LLVM_INSTALL_PREFIX}")
-set(LLVM_CONFIG_TOOLS_BINARY_DIR "\${LLVM_INSTALL_PREFIX}/bin")
+set(LLVM_CONFIG_TOOLS_BINARY_DIR "\${LLVM_INSTALL_PREFIX}/${CMAKE_INSTALL_BINDIR}")
# Generate a default location for lit
if (LLVM_INSTALL_UTILS AND LLVM_BUILD_UTILS)
diff --git a/cmake/modules/LLVMInstallSymlink.cmake b/cmake/modules/LLVMInstallSymlink.cmake
index 3e6a2c9a2648..52e14d955c60 100644
--- a/cmake/modules/LLVMInstallSymlink.cmake
+++ b/cmake/modules/LLVMInstallSymlink.cmake
@@ -4,7 +4,7 @@
function(install_symlink name target outdir)
set(DESTDIR $ENV{DESTDIR})
- set(bindir "${DESTDIR}${CMAKE_INSTALL_PREFIX}/${outdir}/")
+ set(bindir "${DESTDIR}${outdir}/")
message(STATUS "Creating ${name}")
diff --git a/docs/CMake.rst b/docs/CMake.rst
index f1ac2c7d4934..c6e1469b5e13 100644
--- a/docs/CMake.rst
+++ b/docs/CMake.rst
@@ -202,7 +202,7 @@ CMake manual, or execute ``cmake --help-variable VARIABLE_NAME``.
**LLVM_LIBDIR_SUFFIX**:STRING
Extra suffix to append to the directory where libraries are to be
installed. On a 64-bit architecture, one could use ``-DLLVM_LIBDIR_SUFFIX=64``
- to install libraries to ``/usr/lib64``.
+ to install libraries to ``/usr/lib64``. See also ``CMAKE_INSTALL_LIBDIR``.
Rarely-used CMake variables
---------------------------
@@ -551,8 +551,8 @@ LLVM-specific variables
**LLVM_INSTALL_DOXYGEN_HTML_DIR**:STRING
The path to install Doxygen-generated HTML documentation to. This path can
- either be absolute or relative to the CMAKE_INSTALL_PREFIX. Defaults to
- `share/doc/llvm/doxygen-html`.
+ either be absolute or relative to the ``CMAKE_INSTALL_PREFIX``. Defaults to
+ `${CMAKE_INSTALL_DOCDIR}/${project}/doxygen-html`.
**LLVM_LINK_LLVM_DYLIB**:BOOL
If enabled, tools will be linked with the libLLVM shared library. Defaults
@@ -792,9 +792,11 @@ the ``cmake`` command or by setting it directly in ``ccmake`` or ``cmake-gui``).
This file is available in two different locations.
-* ``<INSTALL_PREFIX>/lib/cmake/llvm/LLVMConfig.cmake`` where
- ``<INSTALL_PREFIX>`` is the install prefix of an installed version of LLVM.
- On Linux typically this is ``/usr/lib/cmake/llvm/LLVMConfig.cmake``.
+* ``<LLVM_INSTALL_PACKAGE_DIR>LLVMConfig.cmake`` where
+ ``<LLVM_INSTALL_PACKAGE_DIR>`` is the location where LLVM CMake modules are
+ installed as part of an installed version of LLVM. This is typically
+ ``cmake/llvm/`` within the lib directory. On Linux, this is typically
+ ``/usr/lib/cmake/llvm/LLVMConfig.cmake``.
* ``<LLVM_BUILD_ROOT>/lib/cmake/llvm/LLVMConfig.cmake`` where
``<LLVM_BUILD_ROOT>`` is the root of the LLVM build tree. **Note: this is only
diff --git a/examples/Bye/CMakeLists.txt b/examples/Bye/CMakeLists.txt
index bb96edb4b4bf..678c22fb43c8 100644
--- a/examples/Bye/CMakeLists.txt
+++ b/examples/Bye/CMakeLists.txt
@@ -14,6 +14,6 @@ if (NOT WIN32)
BUILDTREE_ONLY
)
- install(TARGETS ${name} RUNTIME DESTINATION examples)
+ install(TARGETS ${name} RUNTIME DESTINATION ${CMAKE_INSTALL_DOCDIR}/examples)
set_target_properties(${name} PROPERTIES FOLDER "Examples")
endif()
diff --git a/include/llvm/CMakeLists.txt b/include/llvm/CMakeLists.txt
index b46319f24fc8..2feabd1954e4 100644
--- a/include/llvm/CMakeLists.txt
+++ b/include/llvm/CMakeLists.txt
@@ -5,5 +5,5 @@ add_subdirectory(Frontend)
# If we're doing an out-of-tree build, copy a module map for generated
# header files into the build area.
if (NOT "${CMAKE_SOURCE_DIR}" STREQUAL "${CMAKE_BINARY_DIR}")
- configure_file(module.modulemap.build module.modulemap COPYONLY)
+ configure_file(module.modulemap.build ${LLVM_INCLUDE_DIR}/module.modulemap COPYONLY)
endif (NOT "${CMAKE_SOURCE_DIR}" STREQUAL "${CMAKE_BINARY_DIR}")
diff --git a/tools/llvm-config/BuildVariables.inc.in b/tools/llvm-config/BuildVariables.inc.in
index ebe5b73a5c65..70c497be12f5 100644
--- a/tools/llvm-config/BuildVariables.inc.in
+++ b/tools/llvm-config/BuildVariables.inc.in
@@ -23,6 +23,10 @@
#define LLVM_CXXFLAGS "@LLVM_CXXFLAGS@"
#define LLVM_BUILDMODE "@LLVM_BUILDMODE@"
#define LLVM_LIBDIR_SUFFIX "@LLVM_LIBDIR_SUFFIX@"
+#define LLVM_INSTALL_BINDIR "@CMAKE_INSTALL_BINDIR@"
+#define LLVM_INSTALL_LIBDIR "@CMAKE_INSTALL_LIBDIR@"
+#define LLVM_INSTALL_INCLUDEDIR "@CMAKE_INSTALL_INCLUDEDIR@"
+#define LLVM_INSTALL_CMAKEDIR "@LLVM_INSTALL_CMAKE_DIR@"
#define LLVM_TARGETS_BUILT "@LLVM_TARGETS_BUILT@"
#define LLVM_SYSTEM_LIBS "@LLVM_SYSTEM_LIBS@"
#define LLVM_BUILD_SYSTEM "@LLVM_BUILD_SYSTEM@"
diff --git a/tools/llvm-config/llvm-config.cpp b/tools/llvm-config/llvm-config.cpp
index 1a2f04552d13..44fa7d3eec6b 100644
--- a/tools/llvm-config/llvm-config.cpp
+++ b/tools/llvm-config/llvm-config.cpp
@@ -357,12 +357,26 @@ int main(int argc, char **argv) {
("-I" + ActiveIncludeDir + " " + "-I" + ActiveObjRoot + "/include");
} else {
ActivePrefix = CurrentExecPrefix;
- ActiveIncludeDir = ActivePrefix + "/include";
- SmallString<256> path(StringRef(LLVM_TOOLS_INSTALL_DIR));
- sys::fs::make_absolute(ActivePrefix, path);
- ActiveBinDir = std::string(path.str());
- ActiveLibDir = ActivePrefix + "/lib" + LLVM_LIBDIR_SUFFIX;
- ActiveCMakeDir = ActiveLibDir + "/cmake/llvm";
+ {
+ SmallString<256> path(StringRef(LLVM_INSTALL_INCLUDEDIR));
+ sys::fs::make_absolute(ActivePrefix, path);
+ ActiveIncludeDir = std::string(path.str());
+ }
+ {
+ SmallString<256> path(StringRef(LLVM_INSTALL_BINDIR));
+ sys::fs::make_absolute(ActivePrefix, path);
+ ActiveBinDir = std::string(path.str());
+ }
+ {
+ SmallString<256> path(StringRef(LLVM_INSTALL_LIBDIR LLVM_LIBDIR_SUFFIX));
+ sys::fs::make_absolute(ActivePrefix, path);
+ ActiveLibDir = std::string(path.str());
+ }
+ {
+ SmallString<256> path(StringRef(LLVM_INSTALL_CMAKEDIR));
+ sys::fs::make_absolute(ActivePrefix, path);
+ ActiveCMakeDir = std::string(path.str());
+ }
ActiveIncludeOption = "-I" + ActiveIncludeDir;
}
diff --git a/tools/lto/CMakeLists.txt b/tools/lto/CMakeLists.txt
index 0af29ad762c5..37b99b83e35c 100644
--- a/tools/lto/CMakeLists.txt
+++ b/tools/lto/CMakeLists.txt
@@ -33,7 +33,7 @@ add_llvm_library(${LTO_LIBRARY_NAME} ${LTO_LIBRARY_TYPE} INSTALL_WITH_TOOLCHAIN
${SOURCES} DEPENDS intrinsics_gen)
install(FILES ${LLVM_MAIN_INCLUDE_DIR}/llvm-c/lto.h
- DESTINATION include/llvm-c
+ DESTINATION ${CMAKE_INSTALL_INCLUDEDIR}/llvm-c
COMPONENT LTO)
if (APPLE)
diff --git a/tools/opt-viewer/CMakeLists.txt b/tools/opt-viewer/CMakeLists.txt
index ead73ec13a8f..250362021f17 100644
--- a/tools/opt-viewer/CMakeLists.txt
+++ b/tools/opt-viewer/CMakeLists.txt
@@ -8,7 +8,7 @@ set (files
foreach (file ${files})
install(PROGRAMS ${file}
- DESTINATION share/opt-viewer
+ DESTINATION ${CMAKE_INSTALL_DATADIR}/opt-viewer
COMPONENT opt-viewer)
endforeach (file)
diff --git a/tools/remarks-shlib/CMakeLists.txt b/tools/remarks-shlib/CMakeLists.txt
index 865436247270..ce1daa62f6ab 100644
--- a/tools/remarks-shlib/CMakeLists.txt
+++ b/tools/remarks-shlib/CMakeLists.txt
@@ -19,7 +19,7 @@ if(LLVM_ENABLE_PIC)
endif()
install(FILES ${LLVM_MAIN_INCLUDE_DIR}/llvm-c/Remarks.h
- DESTINATION include/llvm-c
+ DESTINATION ${CMAKE_INSTALL_INCLUDEDIR}/llvm-c
COMPONENT Remarks)
if (APPLE)


@@ -0,0 +1,28 @@
diff --git a/lib/Target/RISCV/MCTargetDesc/RISCVBaseInfo.cpp b/lib/Target/RISCV/MCTargetDesc/RISCVBaseInfo.cpp
index 0aba18b20..9bb75e7f4 100644
--- a/lib/Target/RISCV/MCTargetDesc/RISCVBaseInfo.cpp
+++ b/lib/Target/RISCV/MCTargetDesc/RISCVBaseInfo.cpp
@@ -33,6 +33,8 @@ ABI computeTargetABI(const Triple &TT, FeatureBitset FeatureBits,
auto TargetABI = getTargetABI(ABIName);
bool IsRV64 = TT.isArch64Bit();
bool IsRV32E = FeatureBits[RISCV::FeatureRV32E];
+ bool IsRV32D = FeatureBits[RISCV::FeatureStdExtD];
+ bool IsRV32F = FeatureBits[RISCV::FeatureStdExtF];
if (!ABIName.empty() && TargetABI == ABI_Unknown) {
errs()
@@ -56,10 +58,10 @@ ABI computeTargetABI(const Triple &TT, FeatureBitset FeatureBits,
if (TargetABI != ABI_Unknown)
return TargetABI;
- // For now, default to the ilp32/ilp32e/lp64 ABI if no explicit ABI is given
- // or an invalid/unrecognised string is given. In the future, it might be
- // worth changing this to default to ilp32f/lp64f and ilp32d/lp64d when
- // hardware support for floating point is present.
+ if (IsRV32D)
+ return ABI_ILP32D;
+ if (IsRV32F)
+ return ABI_ILP32F;
if (IsRV32E)
return ABI_ILP32E;
if (IsRV64)
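This patch replaces the unconditional ilp32/ilp32e/lp64 default with a floating-point-aware one: when no ABI is named explicitly, the D extension selects ilp32d, then F selects ilp32f, before the existing RV32E/RV64 fallbacks. A Python sketch of the patched decision order (function name illustrative, and the RV64 branch completed per the surrounding LLVM code):

```python
# Sketch of the patched computeTargetABI default from the diff above.
# Checks run in the same order as the C++: D before F before RV32E/RV64.

def default_riscv_abi(has_d: bool, has_f: bool, is_rv32e: bool, is_rv64: bool) -> str:
    if has_d:
        return "ilp32d"  # hard-float, doubles passed in FP registers
    if has_f:
        return "ilp32f"  # hard-float, floats passed in FP registers
    if is_rv32e:
        return "ilp32e"
    if is_rv64:
        return "lp64"
    return "ilp32"


print(default_riscv_abi(has_d=True, has_f=True, is_rv32e=False, is_rv64=False))
```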

llvm/outputs.patch

@@ -0,0 +1,16 @@
diff --git a/tools/llvm-config/llvm-config.cpp b/tools/llvm-config/llvm-config.cpp
index 94d426b..37f7794 100644
--- a/tools/llvm-config/llvm-config.cpp
+++ b/tools/llvm-config/llvm-config.cpp
@@ -333,6 +333,11 @@ int main(int argc, char **argv) {
ActiveIncludeOption = "-I" + ActiveIncludeDir;
}
+ /// Nix-specific multiple-output handling: override ActiveLibDir
+ if (!IsInDevelopmentTree) {
+ ActiveLibDir = std::string("@lib@") + "/lib" + LLVM_LIBDIR_SUFFIX;
+ }
+
/// We only use `shared library` mode in cases where the static library form
/// of the components provided are not available; note however that this is
/// skipped if we're run from within the build dir. However, once installed,


@@ -10,8 +10,15 @@ crate-type = ["cdylib"]
 [dependencies]
 pyo3 = { version = "0.14", features = ["extension-module"] }
-inkwell = { git = "https://github.com/TheDan64/inkwell", branch = "master", features = ["llvm12-0"] }
 parking_lot = "0.11"
 tempfile = "3"
 nac3parser = { path = "../nac3parser" }
 nac3core = { path = "../nac3core" }
+
+[dependencies.inkwell]
+version = "0.1.0-beta.4"
+default-features = false
+features = ["llvm13-0", "target-x86", "target-arm", "target-riscv", "no-libffi-linking"]
+
+[features]
+init-llvm-profile = []


@@ -1,4 +1,11 @@
 from min_artiq import *
+from numpy import int32, int64
+
+
+@extern
+def output_int(x: int32):
+    ...
+
+
 @nac3
 class Demo:

@@ -0,0 +1,57 @@
class EmbeddingMap:
    def __init__(self):
        self.object_inverse_map = {}
        self.object_map = {}
        self.string_map = {}
        self.string_reverse_map = {}
        self.function_map = {}
        # preallocate exception names
        self.preallocate_runtime_exception_names(["RuntimeError",
                                                  "RTIOUnderflow",
                                                  "RTIOOverflow",
                                                  "RTIODestinationUnreachable",
                                                  "DMAError",
                                                  "I2CError",
                                                  "CacheError",
                                                  "SPIError",
                                                  "0:ZeroDivisionError",
                                                  "0:IndexError"])

    def preallocate_runtime_exception_names(self, names):
        for i, name in enumerate(names):
            if ":" not in name:
                name = "0:artiq.coredevice.exceptions." + name
            exn_id = self.store_str(name)
            assert exn_id == i

    def store_function(self, key, fun):
        self.function_map[key] = fun
        return key

    def store_object(self, obj):
        obj_id = id(obj)
        if obj_id in self.object_inverse_map:
            return self.object_inverse_map[obj_id]
        key = len(self.object_map) + 1
        self.object_map[key] = obj
        self.object_inverse_map[obj_id] = key
        return key

    def store_str(self, s):
        if s in self.string_reverse_map:
            return self.string_reverse_map[s]
        key = len(self.string_map)
        self.string_map[key] = s
        self.string_reverse_map[s] = key
        return key

    def retrieve_function(self, key):
        return self.function_map[key]

    def retrieve_object(self, key):
        return self.object_map[key]

    def retrieve_str(self, key):
        return self.string_map[key]
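The interning contract of this class is worth making explicit: identical strings always get the same dense id counted from 0 (which is why the ten preallocated exception names land at ids 0–9), while objects are keyed by identity with keys counted from 1. A condensed re-statement, enough to exercise that contract:

```python
class EmbeddingMap:
    """Condensed copy of the class above, reduced to the interning logic."""
    def __init__(self):
        self.object_map, self.object_inverse_map = {}, {}
        self.string_map, self.string_reverse_map = {}, {}

    def store_str(self, s):
        # Interned: identical strings map to the same id, counted from 0.
        if s in self.string_reverse_map:
            return self.string_reverse_map[s]
        key = len(self.string_map)
        self.string_map[key] = s
        self.string_reverse_map[s] = key
        return key

    def store_object(self, obj):
        # Keyed by id(obj); keys start at 1 (presumably leaving 0 free
        # as a "no object" sentinel -- an assumption, not stated above).
        obj_id = id(obj)
        if obj_id in self.object_inverse_map:
            return self.object_inverse_map[obj_id]
        key = len(self.object_map) + 1
        self.object_map[key] = obj
        self.object_inverse_map[obj_id] = key
        return key

m = EmbeddingMap()
assert m.store_str("RuntimeError") == 0   # dense ids from 0
assert m.store_str("RuntimeError") == 0   # re-storing returns the same id
obj = object()
assert m.store_object(obj) == 1           # object keys start at 1
assert m.store_object(obj) == 1
```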


@@ -3,19 +3,45 @@ from functools import wraps
 from types import SimpleNamespace
 from numpy import int32, int64
 from typing import Generic, TypeVar
+from math import floor, ceil

 import nac3artiq
+from embedding_map import EmbeddingMap

-__all__ = ["KernelInvariant", "extern", "kernel", "portable", "nac3",
-           "ms", "us", "ns",
-           "print_int32", "print_int64",
-           "Core", "TTLOut", "parallel", "sequential"]
+__all__ = [
+    "Kernel", "KernelInvariant", "virtual",
+    "round64", "floor64", "ceil64",
+    "extern", "kernel", "portable", "nac3",
+    "rpc", "ms", "us", "ns",
+    "print_int32", "print_int64",
+    "Core", "TTLOut",
+    "parallel", "sequential"
+]


 T = TypeVar('T')

+class Kernel(Generic[T]):
+    pass
+
 class KernelInvariant(Generic[T]):
     pass

+# The virtual class must exist before nac3artiq.NAC3 is created.
+class virtual(Generic[T]):
+    pass
+
+def round64(x):
+    return round(x)
+
+def floor64(x):
+    return floor(x)
+
+def ceil64(x):
+    return ceil(x)
+

 import device_db
 core_arguments = device_db.device_db["core"]["arguments"]

@@ -40,6 +66,10 @@ def extern(function):
     register_function(function)
     return function

+def rpc(function):
+    """Decorates a function declaration defined by the core device runtime."""
+    register_function(function)
+    return function
+
 def kernel(function_or_method):
     """Decorates a function or method to be executed on the core device."""

@@ -121,6 +151,9 @@ class Core:
     def run(self, method, *args, **kwargs):
         global allow_registration
+
+        embedding = EmbeddingMap()
+
         if allow_registration:
             compiler.analyze(registered_functions, registered_classes)
             allow_registration = False

@@ -132,7 +165,7 @@ class Core:
             obj = method
             name = ""

-        compiler.compile_method_to_file(obj, name, args, "module.elf")
+        compiler.compile_method_to_file(obj, name, args, "module.elf", embedding)

     @kernel
     def reset(self):
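The new `round64`/`floor64`/`ceil64` helpers are thin host-side wrappers; the `64` suffix signals that in kernel code the result is an `int64` (a reading of the naming, not stated in the diff). Their host behaviour is just Python's, including banker's rounding:

```python
from math import floor, ceil

# Host-side fallbacks, as added in min_artiq.py above.
def round64(x):
    return round(x)

def floor64(x):
    return floor(x)

def ceil64(x):
    return ceil(x)

assert round64(2.5) == 2      # Python 3 rounds half to even
assert floor64(-0.5) == -1
assert ceil64(-0.5) == 0
```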


@@ -1,18 +1,32 @@
 use nac3core::{
-    codegen::{expr::gen_call, stmt::gen_with, CodeGenContext, CodeGenerator},
-    toplevel::DefinitionId,
-    typecheck::typedef::{FunSignature, Type},
+    codegen::{
+        expr::gen_call,
+        stmt::{gen_block, gen_with},
+        CodeGenContext, CodeGenerator,
+    },
     symbol_resolver::ValueEnum,
+    toplevel::{DefinitionId, GenCall},
+    typecheck::typedef::{FunSignature, Type},
 };
 use nac3parser::ast::{Expr, ExprKind, Located, Stmt, StmtKind, StrRef};
-use inkwell::values::BasicValueEnum;
+use inkwell::{
+    context::Context, module::Linkage, types::IntType, values::BasicValueEnum, AddressSpace,
+};
 use crate::timeline::TimeFns;
+use std::{
+    collections::hash_map::DefaultHasher,
+    collections::HashMap,
+    hash::{Hash, Hasher},
+    sync::Arc,
+};

 pub struct ArtiqCodeGenerator<'a> {
     name: String,
+    size_t: u32,
     name_counter: u32,
     start: Option<Expr<Option<Type>>>,
     end: Option<Expr<Option<Type>>>,
@@ -20,14 +34,13 @@ pub struct ArtiqCodeGenerator<'a> {
 }

 impl<'a> ArtiqCodeGenerator<'a> {
-    pub fn new(name: String, timeline: &'a (dyn TimeFns + Sync)) -> ArtiqCodeGenerator<'a> {
-        ArtiqCodeGenerator {
-            name,
-            name_counter: 0,
-            start: None,
-            end: None,
-            timeline,
-        }
+    pub fn new(
+        name: String,
+        size_t: u32,
+        timeline: &'a (dyn TimeFns + Sync),
+    ) -> ArtiqCodeGenerator<'a> {
+        assert!(size_t == 32 || size_t == 64);
+        ArtiqCodeGenerator { name, size_t, name_counter: 0, start: None, end: None, timeline }
     }
 }
@@ -36,16 +49,24 @@ impl<'b> CodeGenerator for ArtiqCodeGenerator<'b> {
         &self.name
     }

+    fn get_size_type<'ctx>(&self, ctx: &'ctx Context) -> IntType<'ctx> {
+        if self.size_t == 32 {
+            ctx.i32_type()
+        } else {
+            ctx.i64_type()
+        }
+    }
+
     fn gen_call<'ctx, 'a>(
         &mut self,
         ctx: &mut CodeGenContext<'ctx, 'a>,
         obj: Option<(Type, ValueEnum<'ctx>)>,
         fun: (&FunSignature, DefinitionId),
         params: Vec<(Option<StrRef>, ValueEnum<'ctx>)>,
-    ) -> Option<BasicValueEnum<'ctx>> {
-        let result = gen_call(self, ctx, obj, fun, params);
+    ) -> Result<Option<BasicValueEnum<'ctx>>, String> {
+        let result = gen_call(self, ctx, obj, fun, params)?;
         if let Some(end) = self.end.clone() {
-            let old_end = self.gen_expr(ctx, &end).unwrap().to_basic_value_enum(ctx);
+            let old_end = self.gen_expr(ctx, &end)?.unwrap().to_basic_value_enum(ctx, self)?;
             let now = self.timeline.emit_now_mu(ctx);
             let smax = ctx.module.get_function("llvm.smax.i64").unwrap_or_else(|| {
                 let i64 = ctx.ctx.i64_type();
@@ -57,25 +78,25 @@ impl<'b> CodeGenerator for ArtiqCodeGenerator<'b> {
             });
             let max = ctx
                 .builder
-                .build_call(smax, &[old_end, now], "smax")
+                .build_call(smax, &[old_end.into(), now.into()], "smax")
                 .try_as_basic_value()
                 .left()
                 .unwrap();
-            let end_store = self.gen_store_target(ctx, &end);
+            let end_store = self.gen_store_target(ctx, &end)?;
             ctx.builder.build_store(end_store, max);
         }
         if let Some(start) = self.start.clone() {
-            let start_val = self.gen_expr(ctx, &start).unwrap().to_basic_value_enum(ctx);
+            let start_val = self.gen_expr(ctx, &start)?.unwrap().to_basic_value_enum(ctx, self)?;
             self.timeline.emit_at_mu(ctx, start_val);
         }
-        result
+        Ok(result)
     }

     fn gen_with<'ctx, 'a>(
         &mut self,
         ctx: &mut CodeGenContext<'ctx, 'a>,
         stmt: &Stmt<Option<Type>>,
-    ) -> bool {
+    ) -> Result<(), String> {
         if let StmtKind::With { items, body, .. } = &stmt.node {
             if items.len() == 1 && items[0].optional_vars.is_none() {
                 let item = &items[0];
@@ -97,7 +118,7 @@ impl<'b> CodeGenerator for ArtiqCodeGenerator<'b> {
                 let old_start = self.start.take();
                 let old_end = self.end.take();
                 let now = if let Some(old_start) = &old_start {
-                    self.gen_expr(ctx, old_start).unwrap().to_basic_value_enum(ctx)
+                    self.gen_expr(ctx, old_start)?.unwrap().to_basic_value_enum(ctx, self)?
                 } else {
                     self.timeline.emit_now_mu(ctx)
                 };
@@ -108,83 +129,96 @@ impl<'b> CodeGenerator for ArtiqCodeGenerator<'b> {
                 // the LLVM Context.
                 // The name is guaranteed to be unique as users cannot use this as variable
                 // name.
-                self.start = old_start.clone().or_else(|| {
-                    let start = format!("with-{}-start", self.name_counter).into();
-                    let start_expr = Located {
-                        // location does not matter at this point
-                        location: stmt.location,
-                        node: ExprKind::Name {
-                            id: start,
-                            ctx: name_ctx.clone(),
-                        },
-                        custom: Some(ctx.primitives.int64),
-                    };
-                    let start = self.gen_store_target(ctx, &start_expr);
-                    ctx.builder.build_store(start, now);
-                    Some(start_expr)
-                });
+                self.start = old_start.clone().map_or_else(
+                    || {
+                        let start = format!("with-{}-start", self.name_counter).into();
+                        let start_expr = Located {
+                            // location does not matter at this point
+                            location: stmt.location,
+                            node: ExprKind::Name { id: start, ctx: name_ctx.clone() },
+                            custom: Some(ctx.primitives.int64),
+                        };
+                        let start = self.gen_store_target(ctx, &start_expr)?;
+                        ctx.builder.build_store(start, now);
+                        Ok(Some(start_expr)) as Result<_, String>
+                    },
+                    |v| Ok(Some(v)),
+                )?;
                 let end = format!("with-{}-end", self.name_counter).into();
                 let end_expr = Located {
                     // location does not matter at this point
                     location: stmt.location,
-                    node: ExprKind::Name {
-                        id: end,
-                        ctx: name_ctx.clone(),
-                    },
+                    node: ExprKind::Name { id: end, ctx: name_ctx.clone() },
                     custom: Some(ctx.primitives.int64),
                 };
-                let end = self.gen_store_target(ctx, &end_expr);
+                let end = self.gen_store_target(ctx, &end_expr)?;
                 ctx.builder.build_store(end, now);
                 self.end = Some(end_expr);
                 self.name_counter += 1;
-                let mut exited = false;
-                for stmt in body.iter() {
-                    if self.gen_stmt(ctx, stmt) {
-                        exited = true;
-                        break;
-                    }
-                }
+                gen_block(self, ctx, body.iter())?;
+                let current = ctx.builder.get_insert_block().unwrap();
+                // if the current block is terminated, move before the terminator
+                // we want to set the timeline before reaching the terminator
+                // TODO: This may be unsound if there are multiple exit paths in the
+                // block... e.g.
+                // if ...:
+                //     return
+                // Perhaps we can fix this by using actual with block?
+                let reset_position = if let Some(terminator) = current.get_terminator() {
+                    ctx.builder.position_before(&terminator);
+                    true
+                } else {
+                    false
+                };
                 // set duration
                 let end_expr = self.end.take().unwrap();
-                let end_val = self.gen_expr(ctx, &end_expr).unwrap().to_basic_value_enum(ctx);
+                let end_val =
+                    self.gen_expr(ctx, &end_expr)?.unwrap().to_basic_value_enum(ctx, self)?;

-                // inside an sequential block
+                // inside a sequential block
                 if old_start.is_none() {
                     self.timeline.emit_at_mu(ctx, end_val);
                 }
                 // inside a parallel block, should update the outer max now_mu
                 if let Some(old_end) = &old_end {
-                    let outer_end_val = self.gen_expr(ctx, old_end).unwrap().to_basic_value_enum(ctx);
-                    let smax = ctx.module.get_function("llvm.smax.i64").unwrap_or_else(|| {
-                        let i64 = ctx.ctx.i64_type();
-                        ctx.module.add_function(
-                            "llvm.smax.i64",
-                            i64.fn_type(&[i64.into(), i64.into()], false),
-                            None,
-                        )
-                    });
+                    let outer_end_val = self
+                        .gen_expr(ctx, old_end)?
+                        .unwrap()
+                        .to_basic_value_enum(ctx, self)?;
+                    let smax =
+                        ctx.module.get_function("llvm.smax.i64").unwrap_or_else(|| {
+                            let i64 = ctx.ctx.i64_type();
+                            ctx.module.add_function(
+                                "llvm.smax.i64",
+                                i64.fn_type(&[i64.into(), i64.into()], false),
+                                None,
+                            )
+                        });
                     let max = ctx
                         .builder
-                        .build_call(smax, &[end_val, outer_end_val], "smax")
+                        .build_call(smax, &[end_val.into(), outer_end_val.into()], "smax")
                         .try_as_basic_value()
                         .left()
                         .unwrap();
-                    let outer_end = self.gen_store_target(ctx, old_end);
+                    let outer_end = self.gen_store_target(ctx, old_end)?;
                     ctx.builder.build_store(outer_end, max);
                 }
                 self.start = old_start;
                 self.end = old_end;
-                return exited;
+                if reset_position {
+                    ctx.builder.position_at_end(current);
+                }
+                return Ok(());
             } else if id == &"sequential".into() {
                 let start = self.start.take();
                 for stmt in body.iter() {
-                    if self.gen_stmt(ctx, stmt) {
-                        self.start = start;
-                        return true;
-                    }
+                    self.gen_stmt(ctx, stmt)?;
+                    if ctx.is_terminated() {
+                        break;
+                    }
                 }
                 self.start = start;
-                return false;
+                return Ok(());
             }
             }
         }
@@ -195,3 +229,265 @@ impl<'b> CodeGenerator for ArtiqCodeGenerator<'b> {
         }
     }
 }
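The `llvm.smax` bookkeeping above encodes the ARTIQ timeline rules: inside a `parallel` block every statement starts at the block's start time and the block ends at the maximum of the branch end times, while `sequential` composition simply advances the cursor. A hedged Python model of that arithmetic (times in machine units, names illustrative):

```python
def parallel_end(start, durations):
    # Each branch begins at `start`; the block ends when the longest
    # branch does -- the running max the generated llvm.smax calls keep.
    end = start
    for d in durations:
        end = max(end, start + d)
    return end

# three pulses of 100, 250 and 50 mu issued in parallel at t=1000
assert parallel_end(1000, [100, 250, 50]) == 1250
# sequential composition, by contrast, just sums durations
assert 1000 + sum([100, 250, 50]) == 1400
```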
fn gen_rpc_tag<'ctx, 'a>(
    ctx: &mut CodeGenContext<'ctx, 'a>,
    ty: Type,
    buffer: &mut Vec<u8>,
) -> Result<(), String> {
    use nac3core::typecheck::typedef::TypeEnum::*;

    let int32 = ctx.primitives.int32;
    let int64 = ctx.primitives.int64;
    let float = ctx.primitives.float;
    let bool = ctx.primitives.bool;
    let str = ctx.primitives.str;
    let none = ctx.primitives.none;

    if ctx.unifier.unioned(ty, int32) {
        buffer.push(b'i');
    } else if ctx.unifier.unioned(ty, int64) {
        buffer.push(b'I');
    } else if ctx.unifier.unioned(ty, float) {
        buffer.push(b'f');
    } else if ctx.unifier.unioned(ty, bool) {
        buffer.push(b'b');
    } else if ctx.unifier.unioned(ty, str) {
        buffer.push(b's');
    } else if ctx.unifier.unioned(ty, none) {
        buffer.push(b'n');
    } else {
        let ty_enum = ctx.unifier.get_ty(ty);
        match &*ty_enum {
            TTuple { ty } => {
                buffer.push(b't');
                buffer.push(ty.len() as u8);
                for ty in ty {
                    gen_rpc_tag(ctx, *ty, buffer)?;
                }
            }
            TList { ty } => {
                buffer.push(b'l');
                gen_rpc_tag(ctx, *ty, buffer)?;
            }
            // we should return an error, this will be fixed after improving error message
            // as this requires returning an error during codegen
            _ => return Err(format!("Unsupported type: {:?}", ctx.unifier.stringify(ty))),
        }
    }
    Ok(())
}
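The tag grammar `gen_rpc_tag` emits can be reproduced on the host; a hedged sketch where plain strings stand in for nac3 `Type` values, with `'O'` marking a bound object and `':'` separating arguments from the return type, as in the callback below:

```python
PRIMS = {"int32": b"i", "int64": b"I", "float": b"f",
         "bool": b"b", "str": b"s", "none": b"n"}

def rpc_tag(ty):
    # ty is a primitive name, ("list", elem) or ("tuple", [elems])
    if isinstance(ty, str):
        return PRIMS[ty]
    kind, inner = ty
    if kind == "list":
        return b"l" + rpc_tag(inner)
    if kind == "tuple":
        # tuple tag carries the arity as a single byte
        return b"t" + bytes([len(inner)]) + b"".join(rpc_tag(t) for t in inner)
    raise ValueError(f"unsupported type: {ty}")

def signature_tag(args, ret, has_self=False):
    tag = b"O" if has_self else b""
    return tag + b"".join(rpc_tag(a) for a in args) + b":" + rpc_tag(ret)

assert signature_tag(["int32", ("list", "int64")], "none") == b"ilI:n"
```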
fn rpc_codegen_callback_fn<'ctx, 'a>(
    ctx: &mut CodeGenContext<'ctx, 'a>,
    obj: Option<(Type, ValueEnum<'ctx>)>,
    fun: (&FunSignature, DefinitionId),
    args: Vec<(Option<StrRef>, ValueEnum<'ctx>)>,
    generator: &mut dyn CodeGenerator,
) -> Result<Option<BasicValueEnum<'ctx>>, String> {
    let ptr_type = ctx.ctx.i8_type().ptr_type(inkwell::AddressSpace::Generic);
    let size_type = generator.get_size_type(ctx.ctx);
    let int8 = ctx.ctx.i8_type();
    let int32 = ctx.ctx.i32_type();
    let tag_ptr_type = ctx.ctx.struct_type(&[ptr_type.into(), size_type.into()], false);

    let service_id = int32.const_int(fun.1 .0 as u64, false);
    // -- setup rpc tags
    let mut tag = Vec::new();
    if obj.is_some() {
        tag.push(b'O');
    }
    for arg in fun.0.args.iter() {
        gen_rpc_tag(ctx, arg.ty, &mut tag)?;
    }
    tag.push(b':');
    gen_rpc_tag(ctx, fun.0.ret, &mut tag)?;

    let mut hasher = DefaultHasher::new();
    tag.hash(&mut hasher);
    let hash = format!("{}", hasher.finish());

    let tag_ptr = ctx
        .module
        .get_global(hash.as_str())
        .unwrap_or_else(|| {
            let tag_arr_ptr = ctx.module.add_global(
                int8.array_type(tag.len() as u32),
                None,
                format!("tagptr{}", fun.1 .0).as_str(),
            );
            tag_arr_ptr.set_initializer(&int8.const_array(
                &tag.iter().map(|v| int8.const_int(*v as u64, false)).collect::<Vec<_>>(),
            ));
            tag_arr_ptr.set_linkage(Linkage::Private);
            let tag_ptr = ctx.module.add_global(tag_ptr_type, None, &hash);
            tag_ptr.set_linkage(Linkage::Private);
            tag_ptr.set_initializer(&ctx.ctx.const_struct(
                &[
                    tag_arr_ptr.as_pointer_value().const_cast(ptr_type).into(),
                    size_type.const_int(tag.len() as u64, false).into(),
                ],
                false,
            ));
            tag_ptr
        })
        .as_pointer_value();

    let arg_length = args.len() + if obj.is_some() { 1 } else { 0 };

    let stacksave = ctx.module.get_function("llvm.stacksave").unwrap_or_else(|| {
        ctx.module.add_function("llvm.stacksave", ptr_type.fn_type(&[], false), None)
    });
    let stackrestore = ctx.module.get_function("llvm.stackrestore").unwrap_or_else(|| {
        ctx.module.add_function(
            "llvm.stackrestore",
            ctx.ctx.void_type().fn_type(&[ptr_type.into()], false),
            None,
        )
    });

    let stackptr = ctx.builder.build_call(stacksave, &[], "rpc.stack");
    let args_ptr = ctx.builder.build_array_alloca(
        ptr_type,
        ctx.ctx.i32_type().const_int(arg_length as u64, false),
        "argptr",
    );

    // -- rpc args handling
    let mut keys = fun.0.args.clone();
    let mut mapping = HashMap::new();
    for (key, value) in args.into_iter() {
        mapping.insert(key.unwrap_or_else(|| keys.remove(0).name), value);
    }
    // default value handling
    for k in keys.into_iter() {
        mapping.insert(k.name, ctx.gen_symbol_val(generator, &k.default_value.unwrap()).into());
    }
    // reorder the parameters
    let mut real_params = fun
        .0
        .args
        .iter()
        .map(|arg| mapping.remove(&arg.name).unwrap().to_basic_value_enum(ctx, generator))
        .collect::<Result<Vec<_>, _>>()?;
    if let Some(obj) = obj {
        if let ValueEnum::Static(obj) = obj.1 {
            real_params.insert(0, obj.get_const_obj(ctx, generator));
        } else {
            // should be an error here...
            panic!("only host object is allowed");
        }
    }

    for (i, arg) in real_params.iter().enumerate() {
        let arg_slot = ctx.builder.build_alloca(arg.get_type(), &format!("rpc.arg{}", i));
        ctx.builder.build_store(arg_slot, *arg);
        let arg_slot = ctx.builder.build_bitcast(arg_slot, ptr_type, "rpc.arg");
        let arg_ptr = unsafe {
            ctx.builder.build_gep(
                args_ptr,
                &[int32.const_int(i as u64, false)],
                &format!("rpc.arg{}", i),
            )
        };
        ctx.builder.build_store(arg_ptr, arg_slot);
    }

    // call
    let rpc_send = ctx.module.get_function("rpc_send").unwrap_or_else(|| {
        ctx.module.add_function(
            "rpc_send",
            ctx.ctx.void_type().fn_type(
                &[
                    int32.into(),
                    tag_ptr_type.ptr_type(AddressSpace::Generic).into(),
                    ptr_type.ptr_type(AddressSpace::Generic).into(),
                ],
                false,
            ),
            None,
        )
    });
    ctx.builder.build_call(
        rpc_send,
        &[service_id.into(), tag_ptr.into(), args_ptr.into()],
        "rpc.send",
    );

    // reclaim stack space used by arguments
    ctx.builder.build_call(
        stackrestore,
        &[stackptr.try_as_basic_value().unwrap_left().into()],
        "rpc.stackrestore",
    );

    // -- receive value:
    // T result = {
    //     void *ret_ptr = alloca(sizeof(T));
    //     void *ptr = ret_ptr;
    //     loop: int size = rpc_recv(ptr);
    //     // Non-zero: Provide `size` bytes of extra storage for variable-length data.
    //     if(size) { ptr = alloca(size); goto loop; }
    //     else *(T*)ret_ptr
    // }
    let rpc_recv = ctx.module.get_function("rpc_recv").unwrap_or_else(|| {
        ctx.module.add_function("rpc_recv", int32.fn_type(&[ptr_type.into()], false), None)
    });

    if ctx.unifier.unioned(fun.0.ret, ctx.primitives.none) {
        ctx.build_call_or_invoke(rpc_recv, &[ptr_type.const_null().into()], "rpc_recv");
        return Ok(None);
    }

    let prehead_bb = ctx.builder.get_insert_block().unwrap();
    let current_function = prehead_bb.get_parent().unwrap();
    let head_bb = ctx.ctx.append_basic_block(current_function, "rpc.head");
    let alloc_bb = ctx.ctx.append_basic_block(current_function, "rpc.continue");
    let tail_bb = ctx.ctx.append_basic_block(current_function, "rpc.tail");

    let ret_ty = ctx.get_llvm_type(generator, fun.0.ret);
    let need_load = !ret_ty.is_pointer_type();
    let slot = ctx.builder.build_alloca(ret_ty, "rpc.ret.slot");
    let slotgen = ctx.builder.build_bitcast(slot, ptr_type, "rpc.ret.ptr");
    ctx.builder.build_unconditional_branch(head_bb);
    ctx.builder.position_at_end(head_bb);

    let phi = ctx.builder.build_phi(ptr_type, "rpc.ptr");
    phi.add_incoming(&[(&slotgen, prehead_bb)]);
    let alloc_size = ctx
        .build_call_or_invoke(rpc_recv, &[phi.as_basic_value()], "rpc.size.next")
        .unwrap()
        .into_int_value();
    let is_done = ctx.builder.build_int_compare(
        inkwell::IntPredicate::EQ,
        int32.const_zero(),
        alloc_size,
        "rpc.done",
    );
    ctx.builder.build_conditional_branch(is_done, tail_bb, alloc_bb);
    ctx.builder.position_at_end(alloc_bb);

    let alloc_ptr = ctx.builder.build_array_alloca(ptr_type, alloc_size, "rpc.alloc");
    let alloc_ptr = ctx.builder.build_bitcast(alloc_ptr, ptr_type, "rpc.alloc.ptr");
    phi.add_incoming(&[(&alloc_ptr, alloc_bb)]);
    ctx.builder.build_unconditional_branch(head_bb);
    ctx.builder.position_at_end(tail_bb);

    let result = ctx.builder.build_load(slot, "rpc.result");
    if need_load {
        ctx.builder.build_call(
            stackrestore,
            &[stackptr.try_as_basic_value().unwrap_left().into()],
            "rpc.stackrestore",
        );
    }
    Ok(Some(result))
}
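The alloca loop in the receive path follows the protocol sketched in the comment above: `rpc_recv` returns 0 when the value is complete, or the number of extra bytes to allocate for variable-length data before looping. A hedged host-side model (it simplifies where the final value lands; the real code loads the result from the original return slot):

```python
def receive(rpc_recv):
    # rpc_recv(buf) -> extra bytes still needed (0 = done), filling buf.
    buf = bytearray(8)            # stand-in for the alloca of sizeof(T)
    extra = rpc_recv(buf)
    while extra:
        buf = bytearray(extra)    # stand-in for the alloca in rpc.continue
        extra = rpc_recv(buf)
    return bytes(buf)

# fake core-device side: first call asks for 4 more bytes, then finishes
calls = iter([4, 0])
def fake_recv(buf):
    n = next(calls)
    if n == 0:
        buf[:4] = b"done"
    return n

assert receive(fake_recv)[:4] == b"done"
```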
pub fn rpc_codegen_callback() -> Arc<GenCall> {
    Arc::new(GenCall::new(Box::new(|ctx, obj, fun, args, generator| {
        rpc_codegen_callback_fn(ctx, obj, fun, args, generator)
    })))
}


@@ -1,26 +1,35 @@
 use std::collections::{HashMap, HashSet};
 use std::fs;
 use std::process::Command;
+use std::rc::Rc;
 use std::sync::Arc;

 use inkwell::{
+    memory_buffer::MemoryBuffer,
     passes::{PassManager, PassManagerBuilder},
     targets::*,
     OptimizationLevel,
 };
+use nac3core::toplevel::builtins::get_exn_constructor;
+use nac3core::typecheck::typedef::{TypeEnum, Unifier};
 use nac3parser::{
-    ast::{self, StrRef},
+    ast::{self, ExprKind, Stmt, StmtKind, StrRef},
     parser::{self, parse_program},
 };
 use pyo3::prelude::*;
-use pyo3::{exceptions, types::PyBytes, types::PyList, types::PySet};
+use pyo3::{exceptions, types::PyBytes, types::PyDict, types::PySet};
+use pyo3::create_exception;
 use parking_lot::{Mutex, RwLock};
 use nac3core::{
+    codegen::irrt::load_irrt,
     codegen::{concrete_type::ConcreteTypeStore, CodeGenTask, WithCall, WorkerRegistry},
     symbol_resolver::SymbolResolver,
-    toplevel::{composer::TopLevelComposer, DefinitionId, GenCall, TopLevelContext, TopLevelDef},
+    toplevel::{
+        composer::{ComposerConfig, TopLevelComposer},
+        DefinitionId, GenCall, TopLevelDef,
+    },
     typecheck::typedef::{FunSignature, FuncArg},
     typecheck::{type_inferencer::PrimitiveStore, typedef::Type},
 };
@@ -28,7 +37,7 @@ use nac3core::{
 use tempfile::{self, TempDir};

 use crate::{
-    codegen::ArtiqCodeGenerator,
+    codegen::{rpc_codegen_callback, ArtiqCodeGenerator},
     symbol_resolver::{InnerResolver, PythonHelper, Resolver},
 };
@@ -51,12 +60,21 @@ pub struct PrimitivePythonId {
     int: u64,
     int32: u64,
     int64: u64,
+    uint32: u64,
+    uint64: u64,
     float: u64,
     bool: u64,
     list: u64,
     tuple: u64,
+    typevar: u64,
+    none: u64,
+    exception: u64,
+    generic_alias: (u64, u64),
+    virtual_id: u64,
 }

+type TopLevelComponent = (Stmt, String, PyObject);
+
 // TopLevelComposer is unsendable as it holds the unification table, which is
 // unsendable due to Rc. Arc would cause a performance hit.
 #[pyclass(unsendable, name = "NAC3")]
@@ -64,78 +82,38 @@ struct Nac3 {
     isa: Isa,
     time_fns: &'static (dyn TimeFns + Sync),
     primitive: PrimitiveStore,
-    builtins_ty: HashMap<StrRef, Type>,
-    builtins_def: HashMap<StrRef, DefinitionId>,
+    builtins: Vec<(StrRef, FunSignature, Arc<GenCall>)>,
     pyid_to_def: Arc<RwLock<HashMap<u64, DefinitionId>>>,
-    pyid_to_type: Arc<RwLock<HashMap<u64, Type>>>,
-    composer: TopLevelComposer,
-    top_level: Option<Arc<TopLevelContext>>,
     primitive_ids: PrimitivePythonId,
-    global_value_ids: Arc<Mutex<HashSet<u64>>>,
     working_directory: TempDir,
+    top_levels: Vec<TopLevelComponent>,
+    string_store: Arc<RwLock<HashMap<String, i32>>>,
+    exception_ids: Arc<RwLock<HashMap<usize, usize>>>,
 }

+create_exception!(nac3artiq, CompileError, exceptions::PyException);
+
 impl Nac3 {
     fn register_module(
         &mut self,
         module: PyObject,
         registered_class_ids: &HashSet<u64>,
     ) -> PyResult<()> {
-        let mut name_to_pyid: HashMap<StrRef, u64> = HashMap::new();
-        let (module_name, source_file, helper) =
-            Python::with_gil(|py| -> PyResult<(String, String, PythonHelper)> {
-                let module: &PyAny = module.extract(py)?;
-                let builtins = PyModule::import(py, "builtins")?;
-                let id_fn = builtins.getattr("id")?;
-                let members: &PyList = PyModule::import(py, "inspect")?
-                    .getattr("getmembers")?
-                    .call1((module,))?
-                    .cast_as()?;
-                for member in members.iter() {
-                    let key: &str = member.get_item(0)?.extract()?;
-                    let val = id_fn.call1((member.get_item(1)?,))?.extract()?;
-                    name_to_pyid.insert(key.into(), val);
-                }
-                let helper = PythonHelper {
-                    id_fn: builtins.getattr("id").unwrap().to_object(py),
-                    len_fn: builtins.getattr("len").unwrap().to_object(py),
-                    type_fn: builtins.getattr("type").unwrap().to_object(py),
-                };
-                Ok((
-                    module.getattr("__name__")?.extract()?,
-                    module.getattr("__file__")?.extract()?,
-                    helper,
-                ))
-            })?;
+        let (module_name, source_file) = Python::with_gil(|py| -> PyResult<(String, String)> {
+            let module: &PyAny = module.extract(py)?;
+            Ok((module.getattr("__name__")?.extract()?, module.getattr("__file__")?.extract()?))
+        })?;

-        let source = fs::read_to_string(source_file).map_err(|e| {
+        let source = fs::read_to_string(&source_file).map_err(|e| {
             exceptions::PyIOError::new_err(format!("failed to read input file: {}", e))
         })?;
-        let parser_result = parser::parse_program(&source)
+        let parser_result = parser::parse_program(&source, source_file.into())
             .map_err(|e| exceptions::PySyntaxError::new_err(format!("parse error: {}", e)))?;
-        let resolver = Arc::new(Resolver(Arc::new(InnerResolver {
-            id_to_type: self.builtins_ty.clone().into(),
-            id_to_def: self.builtins_def.clone().into(),
-            pyid_to_def: self.pyid_to_def.clone(),
-            pyid_to_type: self.pyid_to_type.clone(),
-            primitive_ids: self.primitive_ids.clone(),
-            global_value_ids: self.global_value_ids.clone(),
-            class_names: Default::default(),
-            name_to_pyid: name_to_pyid.clone(),
-            module: module.clone(),
-            helper,
-        }))) as Arc<dyn SymbolResolver + Send + Sync>;
-        let mut name_to_def = HashMap::new();
-        let mut name_to_type = HashMap::new();

         for mut stmt in parser_result.into_iter() {
             let include = match stmt.node {
                 ast::StmtKind::ClassDef {
-                    ref decorator_list,
-                    ref mut body,
-                    ref mut bases,
-                    ..
+                    ref decorator_list, ref mut body, ref mut bases, ..
                 } => {
                     let nac3_class = decorator_list.iter().any(|decorator| {
                         if let ast::ExprKind::Name { id, .. } = decorator.node {
@@ -153,9 +131,13 @@ impl Nac3 {
                             let id_fn = PyModule::import(py, "builtins")?.getattr("id")?;
                             match &base.node {
                                 ast::ExprKind::Name { id, .. } => {
-                                    let base_obj = module.getattr(py, id.to_string())?;
-                                    let base_id = id_fn.call1((base_obj,))?.extract()?;
-                                    Ok(registered_class_ids.contains(&base_id))
+                                    if *id == "Exception".into() {
+                                        Ok(true)
+                                    } else {
+                                        let base_obj = module.getattr(py, id.to_string())?;
+                                        let base_id = id_fn.call1((base_obj,))?.extract()?;
+                                        Ok(registered_class_ids.contains(&base_id))
+                                    }
                                 }
                                 _ => Ok(true),
                             }
@@ -163,13 +145,12 @@ impl Nac3 {
                         .unwrap()
                     });
                     body.retain(|stmt| {
-                        if let ast::StmtKind::FunctionDef {
-                            ref decorator_list, ..
-                        } = stmt.node
-                        {
+                        if let ast::StmtKind::FunctionDef { ref decorator_list, .. } = stmt.node {
                             decorator_list.iter().any(|decorator| {
                                 if let ast::ExprKind::Name { id, .. } = decorator.node {
-                                    id.to_string() == "kernel" || id.to_string() == "portable"
+                                    id.to_string() == "kernel"
+                                        || id.to_string() == "portable"
+                                        || id.to_string() == "rpc"
                                 } else {
                                     false
                                 }
@@ -180,40 +161,130 @@ impl Nac3 {
                     });
                     true
                 }
-                ast::StmtKind::FunctionDef {
-                    ref decorator_list, ..
-                } => decorator_list.iter().any(|decorator| {
-                    if let ast::ExprKind::Name { id, .. } = decorator.node {
-                        let id = id.to_string();
-                        id == "extern" || id == "portable" || id == "kernel"
-                    } else {
-                        false
-                    }
-                }),
+                ast::StmtKind::FunctionDef { ref decorator_list, .. } => {
+                    decorator_list.iter().any(|decorator| {
+                        if let ast::ExprKind::Name { id, .. } = decorator.node {
+                            let id = id.to_string();
+                            id == "extern" || id == "portable" || id == "kernel" || id == "rpc"
+                        } else {
+                            false
+                        }
+                    })
+                }
                 _ => false,
             };

             if include {
-                let (name, def_id, ty) = self
-                    .composer
-                    .register_top_level(stmt, Some(resolver.clone()), module_name.clone())
-                    .unwrap();
-                name_to_def.insert(name, def_id);
-                if let Some(ty) = ty {
-                    name_to_type.insert(name, ty);
-                }
+                self.top_levels.push((stmt, module_name.clone(), module.clone()));
             }
         }
-        let mut map = self.pyid_to_def.write();
-        for (name, def) in name_to_def.into_iter() {
-            map.insert(*name_to_pyid.get(&name).unwrap(), def);
-        }
-        let mut map = self.pyid_to_type.write();
-        for (name, ty) in name_to_type.into_iter() {
-            map.insert(*name_to_pyid.get(&name).unwrap(), ty);
-        }
         Ok(())
     }
    fn report_modinit(
        arg_names: &[String],
        method_name: &str,
        resolver: Arc<dyn SymbolResolver + Send + Sync>,
        top_level_defs: &[Arc<RwLock<TopLevelDef>>],
        unifier: &mut Unifier,
        primitives: &PrimitiveStore,
    ) -> Option<String> {
        let base_ty =
            match resolver.get_symbol_type(unifier, top_level_defs, primitives, "base".into()) {
                Ok(ty) => ty,
                Err(e) => return Some(format!("type error inside object launching kernel: {}", e)),
            };

        let fun_ty = if method_name.is_empty() {
            base_ty
        } else if let TypeEnum::TObj { fields, .. } = &*unifier.get_ty(base_ty) {
            match fields.get(&(*method_name).into()) {
                Some(t) => t.0,
                None => {
                    return Some(format!(
                        "object launching kernel does not have method `{}`",
                        method_name
                    ))
                }
            }
        } else {
            return Some("cannot launch kernel by calling a non-callable".into());
        };

        if let TypeEnum::TFunc(FunSignature { args, .. }) = &*unifier.get_ty(fun_ty) {
            if arg_names.len() > args.len() {
                return Some(format!(
                    "launching kernel function with too many arguments (expect {}, found {})",
                    args.len(),
                    arg_names.len(),
                ));
            }
            for (i, FuncArg { ty, default_value, name }) in args.iter().enumerate() {
                let in_name = match arg_names.get(i) {
                    Some(n) => n,
                    None if default_value.is_none() => {
                        return Some(format!(
                            "argument `{}` not provided when launching kernel function",
                            name
                        ))
                    }
                    _ => break,
                };
                let in_ty = match resolver.get_symbol_type(
                    unifier,
                    top_level_defs,
                    primitives,
                    in_name.clone().into(),
                ) {
                    Ok(t) => t,
                    Err(e) => {
                        return Some(format!(
                            "type error ({}) at parameter #{} when calling kernel function",
                            e, i
                        ))
                    }
                };
                if let Err(e) = unifier.unify(in_ty, *ty) {
                    return Some(format!(
                        "type error ({}) at parameter #{} when calling kernel function",
                        e.to_display(unifier).to_string(),
                        i
                    ));
                }
            }
        } else {
            return Some("cannot launch kernel by calling a non-callable".into());
        }
        None
    }
}
fn add_exceptions(
composer: &mut TopLevelComposer,
builtin_def: &mut HashMap<StrRef, DefinitionId>,
builtin_ty: &mut HashMap<StrRef, Type>,
error_names: &[&str]
) -> Vec<Type> {
let mut types = Vec::new();
// note: this is only for builtin exceptions, i.e. the exception name is "0:{exn}"
for name in error_names {
let def_id = composer.definition_ast_list.len();
let (exception_fn, exception_class, exception_cons, exception_type) = get_exn_constructor(
name,
// class id
def_id,
// constructor id
def_id + 1,
&mut composer.unifier,
&composer.primitives_ty
);
composer.definition_ast_list.push((Arc::new(RwLock::new(exception_class)), None));
composer.definition_ast_list.push((Arc::new(RwLock::new(exception_fn)), None));
builtin_ty.insert((*name).into(), exception_cons);
builtin_def.insert((*name).into(), DefinitionId(def_id));
types.push(exception_type);
}
types
}
#[pymethods]
@@ -237,13 +308,9 @@ impl Nac3 {
let builtins = vec![
(
"now_mu".into(),
FunSignature { args: vec![], ret: primitive.int64, vars: HashMap::new() },
Arc::new(GenCall::new(Box::new(move |ctx, _, _, _, _| {
Ok(Some(time_fns.emit_now_mu(ctx)))
}))),
),
(
@@ -257,9 +324,10 @@ impl Nac3 {
ret: primitive.none,
vars: HashMap::new(),
},
Arc::new(GenCall::new(Box::new(move |ctx, _, _, args, generator| {
let arg = args[0].1.clone().to_basic_value_enum(ctx, generator).unwrap();
time_fns.emit_at_mu(ctx, arg);
Ok(None)
}))),
),
(
@@ -273,75 +341,65 @@ impl Nac3 {
ret: primitive.none,
vars: HashMap::new(),
},
Arc::new(GenCall::new(Box::new(move |ctx, _, _, args, generator| {
let arg = args[0].1.clone().to_basic_value_enum(ctx, generator).unwrap();
time_fns.emit_delay_mu(ctx, arg);
Ok(None)
}))),
),
];
let builtins_mod = PyModule::import(py, "builtins").unwrap();
let id_fn = builtins_mod.getattr("id").unwrap();
let numpy_mod = PyModule::import(py, "numpy").unwrap();
let typing_mod = PyModule::import(py, "typing").unwrap();
let types_mod = PyModule::import(py, "types").unwrap();
let get_id = |x| id_fn.call1((x,)).unwrap().extract().unwrap();
let get_attr_id = |obj: &PyModule, attr| id_fn.call1((obj.getattr(attr).unwrap(),))
.unwrap().extract().unwrap();
let primitive_ids = PrimitivePythonId {
virtual_id: get_id(
builtins_mod
.getattr("globals")
.unwrap()
.call0()
.unwrap()
.get_item("virtual")
.unwrap(),
),
generic_alias: (
get_attr_id(typing_mod, "_GenericAlias"),
get_attr_id(types_mod, "GenericAlias"),
),
none: get_attr_id(builtins_mod, "None"),
typevar: get_attr_id(typing_mod, "TypeVar"),
int: get_attr_id(builtins_mod, "int"),
int32: get_attr_id(numpy_mod, "int32"),
int64: get_attr_id(numpy_mod, "int64"),
uint32: get_attr_id(numpy_mod, "uint32"),
uint64: get_attr_id(numpy_mod, "uint64"),
bool: get_attr_id(builtins_mod, "bool"),
float: get_attr_id(builtins_mod, "float"),
list: get_attr_id(builtins_mod, "list"),
tuple: get_attr_id(builtins_mod, "tuple"),
exception: get_attr_id(builtins_mod, "Exception"),
};
let working_directory = tempfile::Builder::new().prefix("nac3-").tempdir().unwrap();
fs::write(working_directory.path().join("kernel.ld"), include_bytes!("kernel.ld")).unwrap();
Ok(Nac3 {
isa,
time_fns,
primitive,
builtins,
primitive_ids,
top_levels: Default::default(),
pyid_to_def: Default::default(),
working_directory,
string_store: Default::default(),
exception_ids: Default::default(),
})
}
@@ -378,8 +436,142 @@ impl Nac3 {
method_name: &str,
args: Vec<&PyAny>,
filename: &str,
embedding_map: &PyAny,
py: Python,
) -> PyResult<()> {
let (mut composer, mut builtins_def, mut builtins_ty) = TopLevelComposer::new(
self.builtins.clone(),
ComposerConfig { kernel_ann: Some("Kernel"), kernel_invariant_ann: "KernelInvariant" },
);
let builtins = PyModule::import(py, "builtins")?;
let typings = PyModule::import(py, "typing")?;
let id_fn = builtins.getattr("id")?;
let issubclass = builtins.getattr("issubclass")?;
let exn_class = builtins.getattr("Exception")?;
let store_obj = embedding_map.getattr("store_object").unwrap().to_object(py);
let store_str = embedding_map.getattr("store_str").unwrap().to_object(py);
let store_fun = embedding_map.getattr("store_function").unwrap().to_object(py);
let helper = PythonHelper {
id_fn: builtins.getattr("id").unwrap().to_object(py),
len_fn: builtins.getattr("len").unwrap().to_object(py),
type_fn: builtins.getattr("type").unwrap().to_object(py),
origin_ty_fn: typings.getattr("get_origin").unwrap().to_object(py),
args_ty_fn: typings.getattr("get_args").unwrap().to_object(py),
store_obj: store_obj.clone(),
store_str,
};
let pyid_to_type = Arc::new(RwLock::new(HashMap::<u64, Type>::new()));
let exception_names = [
"ZeroDivisionError",
"IndexError",
"ValueError",
"RuntimeError",
"AssertionError",
"KeyError",
"NotImplementedError",
"OverflowError",
"IOError"
];
add_exceptions(&mut composer, &mut builtins_def, &mut builtins_ty, &exception_names);
let mut module_to_resolver_cache: HashMap<u64, _> = HashMap::new();
let global_value_ids = Arc::new(RwLock::new(HashSet::<u64>::new()));
let mut rpc_ids = vec![];
for (stmt, path, module) in self.top_levels.iter() {
let py_module: &PyAny = module.extract(py)?;
let module_id: u64 = id_fn.call1((py_module,))?.extract()?;
let helper = helper.clone();
let class_obj;
if let StmtKind::ClassDef { name, .. } = &stmt.node {
let class = py_module.getattr(name.to_string()).unwrap();
if issubclass.call1((class, exn_class)).unwrap().extract().unwrap() &&
class.getattr("artiq_builtin").is_err() {
class_obj = Some(class);
} else {
class_obj = None;
}
} else {
class_obj = None;
}
let (name_to_pyid, resolver) =
module_to_resolver_cache.get(&module_id).cloned().unwrap_or_else(|| {
let mut name_to_pyid: HashMap<StrRef, u64> = HashMap::new();
let members: &PyDict =
py_module.getattr("__dict__").unwrap().cast_as().unwrap();
for (key, val) in members.iter() {
let key: &str = key.extract().unwrap();
let val = id_fn.call1((val,)).unwrap().extract().unwrap();
name_to_pyid.insert(key.into(), val);
}
let resolver = Arc::new(Resolver(Arc::new(InnerResolver {
id_to_type: builtins_ty.clone().into(),
id_to_def: builtins_def.clone().into(),
pyid_to_def: self.pyid_to_def.clone(),
pyid_to_type: pyid_to_type.clone(),
primitive_ids: self.primitive_ids.clone(),
global_value_ids: global_value_ids.clone(),
class_names: Default::default(),
name_to_pyid: name_to_pyid.clone(),
module: module.clone(),
id_to_pyval: Default::default(),
id_to_primitive: Default::default(),
field_to_val: Default::default(),
helper,
string_store: self.string_store.clone(),
exception_ids: self.exception_ids.clone(),
})))
as Arc<dyn SymbolResolver + Send + Sync>;
let name_to_pyid = Rc::new(name_to_pyid);
module_to_resolver_cache
.insert(module_id, (name_to_pyid.clone(), resolver.clone()));
(name_to_pyid, resolver)
});
let (name, def_id, ty) = composer
.register_top_level(stmt.clone(), Some(resolver.clone()), path.clone())
.map_err(|e| {
CompileError::new_err(format!(
"compilation failed\n----------\n{}",
e
))
})?;
if let Some(class_obj) = class_obj {
self.exception_ids.write().insert(def_id.0, store_obj.call1(py, (class_obj, ))?.extract(py)?);
}
match &stmt.node {
StmtKind::FunctionDef { decorator_list, .. } => {
if decorator_list.iter().any(|decorator| matches!(decorator.node, ExprKind::Name { id, .. } if id == "rpc".into())) {
store_fun.call1(py, (def_id.0.into_py(py), module.getattr(py, name.to_string()).unwrap())).unwrap();
rpc_ids.push((None, def_id));
}
}
StmtKind::ClassDef { name, body, .. } => {
let class_obj = module.getattr(py, name.to_string()).unwrap();
for stmt in body.iter() {
if let StmtKind::FunctionDef { name, decorator_list, .. } = &stmt.node {
if decorator_list.iter().any(|decorator| matches!(decorator.node, ExprKind::Name { id, .. } if id == "rpc".into())) {
rpc_ids.push((Some((class_obj.clone(), *name)), def_id));
}
}
}
}
_ => ()
}
let id = *name_to_pyid.get(&name).unwrap();
self.pyid_to_def.write().insert(id, def_id);
{
let mut pyid_to_ty = pyid_to_type.write();
if let Some(ty) = ty {
pyid_to_ty.insert(id, ty);
}
}
}
let id_fun = PyModule::import(py, "builtins")?.getattr("id")?;
let mut name_to_pyid: HashMap<StrRef, u64> = HashMap::new();
let module = PyModule::new(py, "tmp")?;
@@ -395,66 +587,100 @@ impl Nac3 {
let synthesized = if method_name.is_empty() {
format!("def __modinit__():\n    base({})", arg_names.join(", "))
} else {
format!("def __modinit__():\n    base.{}({})", method_name, arg_names.join(", "))
};
let mut synthesized =
parse_program(&synthesized, "__nac3_synthesized_modinit__".to_string().into()).unwrap();
let resolver = Arc::new(Resolver(Arc::new(InnerResolver {
id_to_type: builtins_ty.clone().into(),
id_to_def: builtins_def.clone().into(),
pyid_to_def: self.pyid_to_def.clone(),
pyid_to_type: pyid_to_type.clone(),
primitive_ids: self.primitive_ids.clone(),
global_value_ids: global_value_ids.clone(),
class_names: Default::default(),
id_to_pyval: Default::default(),
id_to_primitive: Default::default(),
field_to_val: Default::default(),
name_to_pyid,
module: module.to_object(py),
helper,
string_store: self.string_store.clone(),
exception_ids: self.exception_ids.clone(),
}))) as Arc<dyn SymbolResolver + Send + Sync>;
let (_, def_id, _) = composer
.register_top_level(synthesized.pop().unwrap(), Some(resolver.clone()), "".into())
.unwrap();
let signature =
FunSignature { args: vec![], ret: self.primitive.none, vars: HashMap::new() };
let mut store = ConcreteTypeStore::new();
let mut cache = HashMap::new();
let signature =
store.from_signature(&mut composer.unifier, &self.primitive, &signature, &mut cache);
let signature = store.add_cty(signature);
if let Err(e) = composer.start_analysis(true) {
// report error of __modinit__ separately
if !e.contains("__nac3_synthesized_modinit__") {
return Err(CompileError::new_err(format!(
"compilation failed\n----------\n{}",
e
)));
} else {
let msg = Self::report_modinit(
&arg_names,
method_name,
resolver.clone(),
&composer.extract_def_list(),
&mut composer.unifier,
&self.primitive,
);
return Err(CompileError::new_err(msg.unwrap_or(e)));
}
}
let top_level = Arc::new(composer.make_top_level_context());
{
let rpc_codegen = rpc_codegen_callback();
let defs = top_level.definitions.read();
for (class_data, id) in rpc_ids.iter() {
let mut def = defs[id.0].write();
match &mut *def {
TopLevelDef::Function { codegen_callback, .. } => {
*codegen_callback = Some(rpc_codegen.clone());
}
TopLevelDef::Class { methods, .. } => {
let (class_def, method_name) = class_data.as_ref().unwrap();
for (name, _, id) in methods.iter() {
if name != method_name {
continue;
}
if let TopLevelDef::Function { codegen_callback, .. } =
&mut *defs[id.0].write()
{
*codegen_callback = Some(rpc_codegen.clone());
store_fun
.call1(
py,
(
id.0.into_py(py),
class_def.getattr(py, name.to_string()).unwrap(),
),
)
.unwrap();
}
}
}
}
}
}
let instance = {
let defs = top_level.definitions.read();
let mut definition = defs[def_id.0].write();
if let TopLevelDef::Function { instance_to_stmt, instance_to_symbol, .. } =
&mut *definition
{
instance_to_symbol.insert("".to_string(), "__modinit__".into());
instance_to_stmt[""].clone()
@@ -476,55 +702,21 @@ impl Nac3 {
};
let isa = self.isa;
let working_directory = self.working_directory.path().to_owned();
let membuffers: Arc<Mutex<Vec<Vec<u8>>>> = Default::default();
let membuffer = membuffers.clone();
let f = Arc::new(WithCall::new(Box::new(move |module| {
let buffer = module.write_bitcode_to_memory();
let buffer = buffer.as_slice().into();
membuffer.lock().push(buffer);
})));
let size_t = if self.isa == Isa::Host { 64 } else { 32 };
let thread_names: Vec<String> = (0..4).map(|_| "main".to_string()).collect();
let threads: Vec<_> = thread_names
.iter()
.map(|s| Box::new(ArtiqCodeGenerator::new(s.to_string(), size_t, self.time_fns)))
.collect();
py.allow_threads(|| {
@@ -533,41 +725,88 @@ impl Nac3 {
registry.wait_tasks_complete(handles);
});
let buffers = membuffers.lock();
let context = inkwell::context::Context::create();
let main = context
.create_module_from_ir(MemoryBuffer::create_from_memory_range(&buffers[0], "main"))
.unwrap();
for buffer in buffers.iter().skip(1) {
let other = context
.create_module_from_ir(MemoryBuffer::create_from_memory_range(buffer, "main"))
.unwrap();
main.link_in_module(other)
.map_err(|err| CompileError::new_err(err.to_string()))?;
}
main.link_in_module(load_irrt(&context))
.map_err(|err| CompileError::new_err(err.to_string()))?;
let mut function_iter = main.get_first_function();
while let Some(func) = function_iter {
if func.count_basic_blocks() > 0 && func.get_name().to_str().unwrap() != "__modinit__" {
func.set_linkage(inkwell::module::Linkage::Private);
}
function_iter = func.get_next_function();
}
let builder = PassManagerBuilder::create();
builder.set_optimization_level(OptimizationLevel::Aggressive);
let passes = PassManager::create(());
builder.set_inliner_with_threshold(255);
builder.populate_module_pass_manager(&passes);
passes.run_on(&main);
let (triple, features) = match isa {
Isa::Host => (
TargetMachine::get_default_triple(),
TargetMachine::get_host_cpu_features().to_string(),
),
Isa::RiscV32G => {
(TargetTriple::create("riscv32-unknown-linux"), "+a,+m,+f,+d".to_string())
}
Isa::RiscV32IMA => (TargetTriple::create("riscv32-unknown-linux"), "+a,+m".to_string()),
Isa::CortexA9 => (
TargetTriple::create("armv7-unknown-linux-gnueabihf"),
"+dsp,+fp16,+neon,+vfp3".to_string(),
),
};
let target =
Target::from_triple(&triple).expect("couldn't create target from target triple");
let target_machine = target
.create_target_machine(
&triple,
"",
&features,
OptimizationLevel::Default,
RelocMode::PIC,
CodeModel::Default,
)
.expect("couldn't create target machine");
target_machine
.write_to_file(&main, FileType::Object, &working_directory.join("module.o"))
.expect("couldn't write module to file");
let mut linker_args = vec![
"-shared".to_string(),
"--eh-frame-hdr".to_string(),
"-x".to_string(),
"-o".to_string(),
filename.to_string(),
working_directory.join("module.o").to_string_lossy().to_string(),
];
if isa != Isa::Host {
linker_args.push(
"-T".to_string()
+ self.working_directory.path().join("kernel.ld").to_str().unwrap(),
);
}
if let Ok(linker_status) = Command::new("ld.lld").args(linker_args).status() {
if !linker_status.success() {
return Err(CompileError::new_err("failed to start linker"));
}
} else {
return Err(CompileError::new_err(
"linker returned non-zero status code",
));
}
@@ -580,18 +819,30 @@ impl Nac3 {
obj: &PyAny,
method_name: &str,
args: Vec<&PyAny>,
embedding_map: &PyAny,
py: Python,
) -> PyResult<PyObject> {
let filename_path = self.working_directory.path().join("module.elf");
let filename = filename_path.to_str().unwrap();
self.compile_method_to_file(obj, method_name, args, filename, embedding_map, py)?;
Ok(PyBytes::new(py, &fs::read(filename).unwrap()).into())
}
}
#[cfg(feature = "init-llvm-profile")]
extern "C" {
fn __llvm_profile_initialize();
}
#[pymodule]
fn nac3artiq(py: Python, m: &PyModule) -> PyResult<()> {
#[cfg(feature = "init-llvm-profile")]
unsafe {
__llvm_profile_initialize();
}
Target::initialize_all(&InitializationConfig::default());
m.add("CompileError", py.get_type::<CompileError>())?;
m.add_class::<Nac3>()?;
Ok(())
}

File diff suppressed because it is too large

@@ -1,5 +1,5 @@
use inkwell::{values::BasicValueEnum, AddressSpace, AtomicOrdering};
use nac3core::codegen::CodeGenContext;
pub trait TimeFns {
fn emit_now_mu<'ctx, 'a>(&self, ctx: &mut CodeGenContext<'ctx, 'a>) -> BasicValueEnum<'ctx>;
@@ -19,41 +19,23 @@ impl TimeFns for NowPinningTimeFns64 {
.module
.get_global("now")
.unwrap_or_else(|| ctx.module.add_global(i64_type, None, "now"));
let now_hiptr =
ctx.builder.build_bitcast(now, i32_type.ptr_type(AddressSpace::Generic), "now_hiptr");
if let BasicValueEnum::PointerValue(now_hiptr) = now_hiptr {
let now_loptr = unsafe {
ctx.builder.build_gep(now_hiptr, &[i32_type.const_int(2, false)], "now_gep")
};
if let (BasicValueEnum::IntValue(now_hi), BasicValueEnum::IntValue(now_lo)) = (
ctx.builder.build_load(now_hiptr, "now_hi"),
ctx.builder.build_load(now_loptr, "now_lo"),
) {
let zext_hi = ctx.builder.build_int_z_extend(now_hi, i64_type, "now_zext_hi");
let shifted_hi = ctx.builder.build_left_shift(
zext_hi,
i64_type.const_int(32, false),
"now_shifted_zext_hi",
);
let zext_lo = ctx.builder.build_int_z_extend(now_lo, i64_type, "now_zext_lo");
ctx.builder.build_or(shifted_hi, zext_lo, "now_or").into()
} else {
unreachable!();
@@ -69,8 +51,7 @@ impl TimeFns for NowPinningTimeFns64 {
let i64_32 = i64_type.const_int(32, false);
if let BasicValueEnum::IntValue(time) = t {
let time_hi = ctx.builder.build_int_truncate(
ctx.builder.build_right_shift(time, i64_32, false, "now_lshr"),
i32_type,
"now_trunc",
);
@@ -86,11 +67,7 @@ impl TimeFns for NowPinningTimeFns64 {
);
if let BasicValueEnum::PointerValue(now_hiptr) = now_hiptr {
let now_loptr = unsafe {
ctx.builder.build_gep(now_hiptr, &[i32_type.const_int(2, false)], "now_gep")
};
ctx.builder
.build_store(now_hiptr, time_hi)
@@ -108,66 +85,54 @@ impl TimeFns for NowPinningTimeFns64 {
}
}
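On this target the 64-bit `now` timestamp is stored as a high and a low 32-bit word, so the IR emitted above zero-extends, shifts, and ORs the halves to read it, and truncates to write it back. The arithmetic corresponds to this plain-Rust sketch (names here are illustrative, not part of the real code):

```rust
// Recombine the two 32-bit halves into the 64-bit timestamp (as in emit_now_mu).
fn join(hi: u32, lo: u32) -> u64 {
    ((hi as u64) << 32) | lo as u64
}

// Split a 64-bit timestamp back into halves for the store (as in emit_at_mu).
fn split(t: u64) -> (u32, u32) {
    ((t >> 32) as u32, t as u32)
}

fn main() {
    let t: u64 = 0x1234_5678_9ABC_DEF0;
    let (hi, lo) = split(t);
    assert_eq!(join(hi, lo), t);

    // emit_delay_mu: read both halves, add dt, write the halves back.
    let dt: u64 = 1000;
    let advanced = join(hi, lo).wrapping_add(dt);
    assert_eq!(advanced, t + 1000);
}
```

The GEP offset of 2 (in i32 units) selects the low word two slots past `now_hiptr` on this layout; the stores are sequentially-consistent atomics so the kernel CPU never observes a torn timestamp.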
fn emit_delay_mu<'ctx, 'a>(
&self,
ctx: &mut CodeGenContext<'ctx, 'a>,
dt: BasicValueEnum<'ctx>,
) {
let i64_type = ctx.ctx.i64_type();
let i32_type = ctx.ctx.i32_type();
let now = ctx
.module
.get_global("now")
.unwrap_or_else(|| ctx.module.add_global(i64_type, None, "now"));
let now_hiptr =
ctx.builder.build_bitcast(now, i32_type.ptr_type(AddressSpace::Generic), "now_hiptr");
if let BasicValueEnum::PointerValue(now_hiptr) = now_hiptr {
let now_loptr = unsafe {
ctx.builder.build_gep(now_hiptr, &[i32_type.const_int(2, false)], "now_loptr")
};
if let (
BasicValueEnum::IntValue(now_hi),
BasicValueEnum::IntValue(now_lo),
BasicValueEnum::IntValue(dt),
) = (
ctx.builder.build_load(now_hiptr, "now_hi"),
ctx.builder.build_load(now_loptr, "now_lo"),
dt,
) {
let zext_hi = ctx.builder.build_int_z_extend(now_hi, i64_type, "now_zext_hi");
let shifted_hi = ctx.builder.build_left_shift(
zext_hi,
i64_type.const_int(32, false),
"now_shifted_zext_hi",
);
let zext_lo = ctx.builder.build_int_z_extend(now_lo, i64_type, "now_zext_lo");
let now_val = ctx.builder.build_or(shifted_hi, zext_lo, "now_or");
let time = ctx.builder.build_int_add(now_val, dt, "now_add");
let time_hi = ctx.builder.build_int_truncate(
ctx.builder.build_right_shift(
time,
i64_type.const_int(32, false),
false,
"now_lshr",
),
i32_type,
"now_trunc",
);
let time_lo = ctx.builder.build_int_truncate(time, i32_type, "now_trunc");
ctx.builder
.build_store(now_hiptr, time_hi)
.set_atomic_ordering(AtomicOrdering::SequentiallyConsistent)
@@ -200,9 +165,7 @@ impl TimeFns for NowPinningTimeFns {
if let BasicValueEnum::IntValue(now_raw) = now_raw {
let i64_32 = i64_type.const_int(32, false);
let now_lo = ctx.builder.build_left_shift(now_raw, i64_32, "now_shl");
let now_hi = ctx.builder.build_right_shift(now_raw, i64_32, false, "now_lshr");
ctx.builder.build_or(now_lo, now_hi, "now_or").into()
} else {
unreachable!();
@@ -215,8 +178,7 @@ impl TimeFns for NowPinningTimeFns {
let i64_32 = i64_type.const_int(32, false);
if let BasicValueEnum::IntValue(time) = t {
let time_hi = ctx.builder.build_int_truncate(
ctx.builder.build_right_shift(time, i64_32, false, "now_lshr"),
i32_type,
"now_trunc",
);
@@ -232,11 +194,7 @@ impl TimeFns for NowPinningTimeFns {
);
if let BasicValueEnum::PointerValue(now_hiptr) = now_hiptr {
let now_loptr = unsafe {
ctx.builder.build_gep(now_hiptr, &[i32_type.const_int(1, false)], "now_gep")
};
ctx.builder
.build_store(now_hiptr, time_hi)
@@ -254,7 +212,11 @@ impl TimeFns for NowPinningTimeFns {
}
}
fn emit_delay_mu<'ctx, 'a>(
&self,
ctx: &mut CodeGenContext<'ctx, 'a>,
dt: BasicValueEnum<'ctx>,
) {
let i32_type = ctx.ctx.i32_type();
let i64_type = ctx.ctx.i64_type();
let i64_32 = i64_type.const_int(32, false);
@@ -263,18 +225,13 @@ impl TimeFns for NowPinningTimeFns {
.get_global("now")
.unwrap_or_else(|| ctx.module.add_global(i64_type, None, "now"));
let now_raw = ctx.builder.build_load(now.as_pointer_value(), "now");
if let (BasicValueEnum::IntValue(now_raw), BasicValueEnum::IntValue(dt)) = (now_raw, dt) {
let now_lo = ctx.builder.build_left_shift(now_raw, i64_32, "now_shl");
let now_hi = ctx.builder.build_right_shift(now_raw, i64_32, false, "now_lshr");
let now_val = ctx.builder.build_or(now_lo, now_hi, "now_or");
let time = ctx.builder.build_int_add(now_val, dt, "now_add");
let time_hi = ctx.builder.build_int_truncate(
ctx.builder.build_right_shift(time, i64_32, false, "now_lshr"),
i32_type,
"now_trunc",
);
@ -286,11 +243,7 @@ impl TimeFns for NowPinningTimeFns {
); );
if let BasicValueEnum::PointerValue(now_hiptr) = now_hiptr { if let BasicValueEnum::PointerValue(now_hiptr) = now_hiptr {
let now_loptr = unsafe { let now_loptr = unsafe {
ctx.builder.build_gep( ctx.builder.build_gep(now_hiptr, &[i32_type.const_int(1, false)], "now_gep")
now_hiptr,
&[i32_type.const_int(1, false)],
"now_gep",
)
}; };
ctx.builder ctx.builder
.build_store(now_hiptr, time_hi) .build_store(now_hiptr, time_hi)
@ -315,33 +268,36 @@ pub struct ExternTimeFns {}
impl TimeFns for ExternTimeFns { impl TimeFns for ExternTimeFns {
fn emit_now_mu<'ctx, 'a>(&self, ctx: &mut CodeGenContext<'ctx, 'a>) -> BasicValueEnum<'ctx> { fn emit_now_mu<'ctx, 'a>(&self, ctx: &mut CodeGenContext<'ctx, 'a>) -> BasicValueEnum<'ctx> {
let now_mu = ctx let now_mu = ctx.module.get_function("now_mu").unwrap_or_else(|| {
.module ctx.module.add_function("now_mu", ctx.ctx.i64_type().fn_type(&[], false), None)
.get_function("now_mu") });
.unwrap_or_else(|| ctx.module.add_function("now_mu", ctx.ctx.i64_type().fn_type(&[], false), None)); ctx.builder.build_call(now_mu, &[], "now_mu").try_as_basic_value().left().unwrap()
ctx.builder
.build_call(now_mu, &[], "now_mu")
.try_as_basic_value()
.left()
.unwrap()
} }
fn emit_at_mu<'ctx, 'a>(&self, ctx: &mut CodeGenContext<'ctx, 'a>, t: BasicValueEnum<'ctx>) { fn emit_at_mu<'ctx, 'a>(&self, ctx: &mut CodeGenContext<'ctx, 'a>, t: BasicValueEnum<'ctx>) {
let at_mu = ctx let at_mu = ctx.module.get_function("at_mu").unwrap_or_else(|| {
.module ctx.module.add_function(
.get_function("at_mu") "at_mu",
.unwrap_or_else(|| ctx.module.add_function("at_mu", ctx.ctx.void_type().fn_type(&[ctx.ctx.i64_type().into()], false), None)); ctx.ctx.void_type().fn_type(&[ctx.ctx.i64_type().into()], false),
ctx.builder None,
.build_call(at_mu, &[t], "at_mu"); )
});
ctx.builder.build_call(at_mu, &[t.into()], "at_mu");
} }
fn emit_delay_mu<'ctx, 'a>(&self, ctx: &mut CodeGenContext<'ctx, 'a>, dt: BasicValueEnum<'ctx>) { fn emit_delay_mu<'ctx, 'a>(
let delay_mu = ctx &self,
.module ctx: &mut CodeGenContext<'ctx, 'a>,
.get_function("delay_mu") dt: BasicValueEnum<'ctx>,
.unwrap_or_else(|| ctx.module.add_function("delay_mu", ctx.ctx.void_type().fn_type(&[ctx.ctx.i64_type().into()], false), None)); ) {
ctx.builder let delay_mu = ctx.module.get_function("delay_mu").unwrap_or_else(|| {
.build_call(delay_mu, &[dt], "delay_mu"); ctx.module.add_function(
"delay_mu",
ctx.ctx.void_type().fn_type(&[ctx.ctx.i64_type().into()], false),
None,
)
});
ctx.builder.build_call(delay_mu, &[dt.into()], "delay_mu");
} }
} }


@@ -10,7 +10,6 @@ constant-optimization = ["fold"]
fold = []

[dependencies]
-num-bigint = "0.4.0"
lazy_static = "1.4.0"
parking_lot = "0.11.1"
string-interner = "0.13.0"


@@ -1,12 +1,10 @@
-use num_bigint::BigInt;

#[derive(Clone, Debug, PartialEq)]
pub enum Constant {
    None,
    Bool(bool),
    Str(String),
    Bytes(Vec<u8>),
-    Int(BigInt),
+    Int(i128),
    Tuple(Vec<Constant>),
    Float(f64),
    Complex { real: f64, imag: f64 },
@@ -28,9 +26,14 @@ impl From<bool> for Constant {
        Self::Bool(b)
    }
}
-impl From<BigInt> for Constant {
-    fn from(i: BigInt) -> Constant {
-        Self::Int(i)
+impl From<i32> for Constant {
+    fn from(i: i32) -> Constant {
+        Self::Int(i as i128)
+    }
+}
+impl From<i64> for Constant {
+    fn from(i: i64) -> Constant {
+        Self::Int(i as i128)
    }
}
@@ -124,7 +127,7 @@ mod tests {
        use crate::fold::Fold;
        use crate::*;

-        let location = Location::new(0, 0);
+        let location = Location::new(0, 0, Default::default());
        let custom = ();
        let ast = Located {
            location,
@@ -136,7 +139,7 @@ mod tests {
                        location,
                        custom,
                        node: ExprKind::Constant {
-                            value: BigInt::from(1).into(),
+                            value: 1.into(),
                            kind: None,
                        },
                    },
@@ -144,7 +147,7 @@ mod tests {
                        location,
                        custom,
                        node: ExprKind::Constant {
-                            value: BigInt::from(2).into(),
+                            value: 2.into(),
                            kind: None,
                        },
                    },
@@ -158,7 +161,7 @@ mod tests {
                        location,
                        custom,
                        node: ExprKind::Constant {
-                            value: BigInt::from(3).into(),
+                            value: 3.into(),
                            kind: None,
                        },
                    },
@@ -166,7 +169,7 @@ mod tests {
                        location,
                        custom,
                        node: ExprKind::Constant {
-                            value: BigInt::from(4).into(),
+                            value: 4.into(),
                            kind: None,
                        },
                    },
@@ -174,7 +177,7 @@ mod tests {
                        location,
                        custom,
                        node: ExprKind::Constant {
-                            value: BigInt::from(5).into(),
+                            value: 5.into(),
                            kind: None,
                        },
                    },
@@ -194,12 +197,12 @@ mod tests {
            custom,
            node: ExprKind::Constant {
                value: Constant::Tuple(vec![
-                    BigInt::from(1).into(),
-                    BigInt::from(2).into(),
+                    1.into(),
+                    2.into(),
                    Constant::Tuple(vec![
-                        BigInt::from(3).into(),
-                        BigInt::from(4).into(),
-                        BigInt::from(5).into(),
+                        3.into(),
+                        4.into(),
+                        5.into(),
                    ])
                ]),
                kind: None


@@ -9,6 +9,6 @@ mod impls;
mod location;

pub use ast_gen::*;
-pub use location::Location;
+pub use location::{Location, FileName};

pub type Suite<U = ()> = Vec<Stmt<U>>;


@@ -1,17 +1,32 @@
//! Datatypes to support source location information.
+use crate::ast_gen::StrRef;
use std::fmt;

+#[derive(Clone, Copy, Debug, PartialEq)]
+pub struct FileName(pub StrRef);
+impl Default for FileName {
+    fn default() -> Self {
+        FileName("unknown".into())
+    }
+}
+
+impl From<String> for FileName {
+    fn from(s: String) -> Self {
+        FileName(s.into())
+    }
+}
+
/// A location somewhere in the sourcecode.
#[derive(Clone, Copy, Debug, Default, PartialEq)]
pub struct Location {
-    row: usize,
-    column: usize,
+    pub row: usize,
+    pub column: usize,
+    pub file: FileName
}

impl fmt::Display for Location {
    fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
-        write!(f, "line {} column {}", self.row, self.column)
+        write!(f, "{}: line {} column {}", self.file.0, self.row, self.column)
    }
}
@@ -47,8 +62,8 @@ impl Location {
}

impl Location {
-    pub fn new(row: usize, column: usize) -> Self {
-        Location { row, column }
+    pub fn new(row: usize, column: usize, file: FileName) -> Self {
+        Location { row, column, file }
    }

    pub fn row(&self) -> usize {

@@ -5,16 +5,21 @@ authors = ["M-Labs"]
edition = "2018"

[dependencies]
-num-bigint = "0.3"
-num-traits = "0.2"
-inkwell = { git = "https://github.com/TheDan64/inkwell", branch = "master", features = ["llvm12-0"] }
itertools = "0.10.1"
crossbeam = "0.8.1"
parking_lot = "0.11.1"
rayon = "1.5.1"
nac3parser = { path = "../nac3parser" }

+[dependencies.inkwell]
+version = "0.1.0-beta.4"
+default-features = false
+features = ["llvm13-0", "target-x86", "target-arm", "target-riscv", "no-libffi-linking"]
+
[dev-dependencies]
test-case = "1.2.0"
indoc = "1.0"
-insta = "1.5"
+insta = "=1.11.0"
+
+[build-dependencies]
+regex = "1"

nac3core/build.rs (new file, 61 lines)

@ -0,0 +1,61 @@
use regex::Regex;
use std::{
env,
io::Write,
path::Path,
process::{Command, Stdio},
};
fn main() {
const FILE: &str = "src/codegen/irrt/irrt.c";
println!("cargo:rerun-if-changed={}", FILE);
/*
* HACK: Sadly, clang doesn't let us emit generic LLVM bitcode.
* Compiling for WASM32 and filtering the output with regex is the closest we can get.
*/
const FLAG: &[&str] = &[
"--target=wasm32",
FILE,
"-O3",
"-emit-llvm",
"-S",
"-Wall",
"-Wextra",
"-Wno-implicit-function-declaration",
"-o",
"-",
];
let output = Command::new("clang")
.args(FLAG)
.output()
.map(|o| {
assert!(o.status.success(), "{}", std::str::from_utf8(&o.stderr).unwrap());
o
})
.unwrap();
let output = std::str::from_utf8(&output.stdout).unwrap();
let mut filtered_output = String::with_capacity(output.len());
let regex_filter = regex::Regex::new(r"(?ms:^define.*?\}$)|(?m:^declare.*?$)").unwrap();
for f in regex_filter.captures_iter(output) {
assert!(f.len() == 1);
filtered_output.push_str(&f[0]);
filtered_output.push('\n');
}
let filtered_output = Regex::new("(#\\d+)|(, *![0-9A-Za-z.]+)|(![0-9A-Za-z.]+)|(!\".*?\")")
.unwrap()
.replace_all(&filtered_output, "");
let mut llvm_as = Command::new("llvm-as")
.stdin(Stdio::piped())
.arg("-o")
.arg(Path::new(&env::var("OUT_DIR").unwrap()).join("irrt.bc"))
.spawn()
.unwrap();
llvm_as.stdin.as_mut().unwrap().write_all(filtered_output.as_bytes()).unwrap();
assert!(llvm_as.wait().unwrap().success())
}


@@ -1 +0,0 @@
-use_small_heuristics = "Max"


@@ -28,9 +28,14 @@ pub struct ConcreteFuncArg {
pub enum Primitive {
    Int32,
    Int64,
+    UInt32,
+    UInt64,
    Float,
    Bool,
    None,
+    Range,
+    Str,
+    Exception,
}

#[derive(Debug)]
@@ -66,6 +71,11 @@ impl ConcreteTypeStore {
                ConcreteTypeEnum::TPrimitive(Primitive::Float),
                ConcreteTypeEnum::TPrimitive(Primitive::Bool),
                ConcreteTypeEnum::TPrimitive(Primitive::None),
+                ConcreteTypeEnum::TPrimitive(Primitive::Range),
+                ConcreteTypeEnum::TPrimitive(Primitive::Str),
+                ConcreteTypeEnum::TPrimitive(Primitive::Exception),
+                ConcreteTypeEnum::TPrimitive(Primitive::UInt32),
+                ConcreteTypeEnum::TPrimitive(Primitive::UInt64),
            ],
        }
    }
@@ -118,6 +128,16 @@ impl ConcreteTypeStore {
            ConcreteType(3)
        } else if unifier.unioned(ty, primitives.none) {
            ConcreteType(4)
+        } else if unifier.unioned(ty, primitives.range) {
+            ConcreteType(5)
+        } else if unifier.unioned(ty, primitives.str) {
+            ConcreteType(6)
+        } else if unifier.unioned(ty, primitives.exception) {
+            ConcreteType(7)
+        } else if unifier.unioned(ty, primitives.uint32) {
+            ConcreteType(8)
+        } else if unifier.unioned(ty, primitives.uint64) {
+            ConcreteType(9)
        } else if let Some(cty) = cache.get(&ty) {
            if let Some(cty) = cty {
                *cty
@@ -145,21 +165,25 @@ impl ConcreteTypeStore {
            TypeEnum::TObj { obj_id, fields, params } => ConcreteTypeEnum::TObj {
                obj_id: *obj_id,
                fields: fields
-                    .borrow()
                    .iter()
                    .filter_map(|(name, ty)| {
                        // here we should not have type vars, but some partial instantiated
                        // class methods can still have uninstantiated type vars, so
                        // filter out all the methods, as this will not affect codegen
-                        if let TypeEnum::TFunc( .. ) = &*unifier.get_ty(ty.0) {
+                        if let TypeEnum::TFunc(..) = &*unifier.get_ty(ty.0) {
                            None
                        } else {
-                            Some((*name, (self.from_unifier_type(unifier, primitives, ty.0, cache), ty.1)))
+                            Some((
+                                *name,
+                                (
+                                    self.from_unifier_type(unifier, primitives, ty.0, cache),
+                                    ty.1,
+                                ),
+                            ))
                        }
                    })
                    .collect(),
                params: params
-                    .borrow()
                    .iter()
                    .map(|(id, ty)| {
                        (*id, self.from_unifier_type(unifier, primitives, *ty, cache))
@@ -170,7 +194,6 @@ impl ConcreteTypeStore {
                ty: self.from_unifier_type(unifier, primitives, *ty, cache),
            },
            TypeEnum::TFunc(signature) => {
-                let signature = signature.borrow();
                self.from_signature(unifier, primitives, &*signature, cache)
            }
            _ => unreachable!(),
@@ -198,7 +221,7 @@ impl ConcreteTypeStore {
            return if let Some(ty) = ty {
                *ty
            } else {
-                *ty = Some(unifier.get_fresh_var().0);
+                *ty = Some(unifier.get_dummy_var().0);
                ty.unwrap()
            };
        }
@@ -208,9 +231,14 @@ impl ConcreteTypeStore {
        let ty = match primitive {
            Primitive::Int32 => primitives.int32,
            Primitive::Int64 => primitives.int64,
+            Primitive::UInt32 => primitives.uint32,
+            Primitive::UInt64 => primitives.uint64,
            Primitive::Float => primitives.float,
            Primitive::Bool => primitives.bool,
            Primitive::None => primitives.none,
+            Primitive::Range => primitives.range,
+            Primitive::Str => primitives.str,
+            Primitive::Exception => primitives.exception,
        };
        *cache.get_mut(&cty).unwrap() = Some(ty);
        return ty;
@@ -234,34 +262,27 @@ impl ConcreteTypeStore {
                    .map(|(name, cty)| {
                        (*name, (self.to_unifier_type(unifier, primitives, cty.0, cache), cty.1))
                    })
-                    .collect::<HashMap<_, _>>()
-                    .into(),
+                    .collect::<HashMap<_, _>>(),
                params: params
                    .iter()
                    .map(|(id, cty)| (*id, self.to_unifier_type(unifier, primitives, *cty, cache)))
-                    .collect::<HashMap<_, _>>()
-                    .into(),
+                    .collect::<HashMap<_, _>>(),
            },
-            ConcreteTypeEnum::TFunc { args, ret, vars } => TypeEnum::TFunc(
-                FunSignature {
-                    args: args
-                        .iter()
-                        .map(|arg| FuncArg {
-                            name: arg.name,
-                            ty: self.to_unifier_type(unifier, primitives, arg.ty, cache),
-                            default_value: arg.default_value.clone(),
-                        })
-                        .collect(),
-                    ret: self.to_unifier_type(unifier, primitives, *ret, cache),
-                    vars: vars
-                        .iter()
-                        .map(|(id, cty)| {
-                            (*id, self.to_unifier_type(unifier, primitives, *cty, cache))
-                        })
-                        .collect::<HashMap<_, _>>(),
-                }
-                .into(),
-            ),
+            ConcreteTypeEnum::TFunc { args, ret, vars } => TypeEnum::TFunc(FunSignature {
+                args: args
+                    .iter()
+                    .map(|arg| FuncArg {
+                        name: arg.name,
+                        ty: self.to_unifier_type(unifier, primitives, arg.ty, cache),
+                        default_value: arg.default_value.clone(),
+                    })
+                    .collect(),
+                ret: self.to_unifier_type(unifier, primitives, *ret, cache),
+                vars: vars
+                    .iter()
+                    .map(|(id, cty)| (*id, self.to_unifier_type(unifier, primitives, *cty, cache)))
+                    .collect::<HashMap<_, _>>(),
+            }),
        };
        let result = unifier.add_ty(result);
        if let Some(ty) = cache.get(&cty).unwrap() {

(File diff suppressed because it is too large.)


@@ -4,13 +4,19 @@ use crate::{
    toplevel::{DefinitionId, TopLevelDef},
    typecheck::typedef::{FunSignature, Type},
};
-use inkwell::values::{BasicValueEnum, PointerValue};
+use inkwell::{
+    context::Context,
+    types::{BasicTypeEnum, IntType},
+    values::{BasicValueEnum, PointerValue},
+};
use nac3parser::ast::{Expr, Stmt, StrRef};

pub trait CodeGenerator {
    /// Return the module name for the code generator.
    fn get_name(&self) -> &str;

+    fn get_size_type<'ctx>(&self, ctx: &'ctx Context) -> IntType<'ctx>;
+
    /// Generate function call and returns the function return value.
    /// - obj: Optional object for method call.
    /// - fun: Function signature and definition ID.
@@ -22,7 +28,10 @@ pub trait CodeGenerator {
        obj: Option<(Type, ValueEnum<'ctx>)>,
        fun: (&FunSignature, DefinitionId),
        params: Vec<(Option<StrRef>, ValueEnum<'ctx>)>,
-    ) -> Option<BasicValueEnum<'ctx>> {
+    ) -> Result<Option<BasicValueEnum<'ctx>>, String>
+    where
+        Self: Sized,
+    {
        gen_call(self, ctx, obj, fun, params)
    }
@@ -36,7 +45,10 @@ pub trait CodeGenerator {
        signature: &FunSignature,
        def: &TopLevelDef,
        params: Vec<(Option<StrRef>, ValueEnum<'ctx>)>,
-    ) -> BasicValueEnum<'ctx> {
+    ) -> Result<BasicValueEnum<'ctx>, String>
+    where
+        Self: Sized,
+    {
        gen_constructor(self, ctx, signature, def, params)
    }
@@ -53,7 +65,7 @@ pub trait CodeGenerator {
        obj: Option<(Type, ValueEnum<'ctx>)>,
        fun: (&FunSignature, &mut TopLevelDef, String),
        id: usize,
-    ) -> String {
+    ) -> Result<String, String> {
        gen_func_instance(ctx, obj, fun, id)
    }
@@ -62,7 +74,10 @@ pub trait CodeGenerator {
        &mut self,
        ctx: &mut CodeGenContext<'ctx, 'a>,
        expr: &Expr<Option<Type>>,
-    ) -> Option<ValueEnum<'ctx>> {
+    ) -> Result<Option<ValueEnum<'ctx>>, String>
+    where
+        Self: Sized,
+    {
        gen_expr(self, ctx, expr)
    }
@@ -71,8 +86,8 @@ pub trait CodeGenerator {
    fn gen_var_alloc<'ctx, 'a>(
        &mut self,
        ctx: &mut CodeGenContext<'ctx, 'a>,
-        ty: Type,
-    ) -> PointerValue<'ctx> {
+        ty: BasicTypeEnum<'ctx>,
+    ) -> Result<PointerValue<'ctx>, String> {
        gen_var(ctx, ty)
    }
@@ -81,7 +96,10 @@ pub trait CodeGenerator {
        &mut self,
        ctx: &mut CodeGenContext<'ctx, 'a>,
        pattern: &Expr<Option<Type>>,
-    ) -> PointerValue<'ctx> {
+    ) -> Result<PointerValue<'ctx>, String>
+    where
+        Self: Sized,
+    {
        gen_store_target(self, ctx, pattern)
    }
@@ -91,7 +109,10 @@ pub trait CodeGenerator {
        ctx: &mut CodeGenContext<'ctx, 'a>,
        target: &Expr<Option<Type>>,
        value: ValueEnum<'ctx>,
-    ) {
+    ) -> Result<(), String>
+    where
+        Self: Sized,
+    {
        gen_assign(self, ctx, target, value)
    }
@@ -101,9 +122,11 @@ pub trait CodeGenerator {
        &mut self,
        ctx: &mut CodeGenContext<'ctx, 'a>,
        stmt: &Stmt<Option<Type>>,
-    ) -> bool {
-        gen_while(self, ctx, stmt);
-        false
+    ) -> Result<(), String>
+    where
+        Self: Sized,
+    {
+        gen_while(self, ctx, stmt)
    }

    /// Generate code for a while expression.
@@ -112,9 +135,11 @@ pub trait CodeGenerator {
        &mut self,
        ctx: &mut CodeGenContext<'ctx, 'a>,
        stmt: &Stmt<Option<Type>>,
-    ) -> bool {
-        gen_for(self, ctx, stmt);
-        false
+    ) -> Result<(), String>
+    where
+        Self: Sized,
+    {
+        gen_for(self, ctx, stmt)
    }

    /// Generate code for an if expression.
@@ -123,7 +148,10 @@ pub trait CodeGenerator {
        &mut self,
        ctx: &mut CodeGenContext<'ctx, 'a>,
        stmt: &Stmt<Option<Type>>,
-    ) -> bool {
+    ) -> Result<(), String>
+    where
+        Self: Sized,
+    {
        gen_if(self, ctx, stmt)
    }
@@ -131,7 +159,10 @@ pub trait CodeGenerator {
        &mut self,
        ctx: &mut CodeGenContext<'ctx, 'a>,
        stmt: &Stmt<Option<Type>>,
-    ) -> bool {
+    ) -> Result<(), String>
+    where
+        Self: Sized,
+    {
        gen_with(self, ctx, stmt)
    }
@@ -141,18 +172,23 @@ pub trait CodeGenerator {
        &mut self,
        ctx: &mut CodeGenContext<'ctx, 'a>,
        stmt: &Stmt<Option<Type>>,
-    ) -> bool {
+    ) -> Result<(), String>
+    where
+        Self: Sized,
+    {
        gen_stmt(self, ctx, stmt)
    }
}

pub struct DefaultCodeGenerator {
    name: String,
+    size_t: u32,
}

impl DefaultCodeGenerator {
-    pub fn new(name: String) -> DefaultCodeGenerator {
-        DefaultCodeGenerator { name }
+    pub fn new(name: String, size_t: u32) -> DefaultCodeGenerator {
+        assert!(size_t == 32 || size_t == 64);
+        DefaultCodeGenerator { name, size_t }
    }
}

@@ -160,4 +196,14 @@ impl CodeGenerator for DefaultCodeGenerator {
    fn get_name(&self) -> &str {
        &self.name
    }
+
+    fn get_size_type<'ctx>(&self, ctx: &'ctx Context) -> IntType<'ctx> {
+        // it should be unsigned, but we don't really need unsigned and this could save us from
+        // having to do a bit cast...
+        if self.size_t == 32 {
+            ctx.i32_type()
+        } else {
+            ctx.i64_type()
+        }
+    }
}


@ -0,0 +1,140 @@
typedef _ExtInt(8) int8_t;
typedef unsigned _ExtInt(8) uint8_t;
typedef _ExtInt(32) int32_t;
typedef unsigned _ExtInt(32) uint32_t;
typedef _ExtInt(64) int64_t;
typedef unsigned _ExtInt(64) uint64_t;
# define MAX(a, b) (a > b ? a : b)
# define MIN(a, b) (a > b ? b : a)
// adapted from GNU Scientific Library: https://git.savannah.gnu.org/cgit/gsl.git/tree/sys/pow_int.c
// need to make sure `exp >= 0` before calling this function
#define DEF_INT_EXP(T) T __nac3_int_exp_##T( \
T base, \
T exp \
) { \
T res = (T)1; \
/* repeated squaring method */ \
do { \
if (exp & 1) res *= base; /* for n odd */ \
exp >>= 1; \
base *= base; \
} while (exp); \
return res; \
}

DEF_INT_EXP(int32_t)
DEF_INT_EXP(int64_t)
DEF_INT_EXP(uint32_t)
DEF_INT_EXP(uint64_t)
int32_t __nac3_slice_index_bound(int32_t i, const int32_t len) {
if (i < 0) {
i = len + i;
}
if (i < 0) {
return 0;
} else if (i > len) {
return len;
}
return i;
}
int32_t __nac3_range_slice_len(const int32_t start, const int32_t end, const int32_t step) {
int32_t diff = end - start;
if (diff > 0 && step > 0) {
return ((diff - 1) / step) + 1;
} else if (diff < 0 && step < 0) {
return ((diff + 1) / step) + 1;
} else {
return 0;
}
}
// Handle list assignment and dropping part of the list when
// both dest_step and src_step are +1.
// - All the index must *not* be out-of-bound or negative,
// - The end index is *inclusive*,
// - The length of src and dest slice size should already
// be checked: if dest.step == 1 then len(src) <= len(dest) else len(src) == len(dest)
int32_t __nac3_list_slice_assign_var_size(
int32_t dest_start,
int32_t dest_end,
int32_t dest_step,
uint8_t *dest_arr,
int32_t dest_arr_len,
int32_t src_start,
int32_t src_end,
int32_t src_step,
uint8_t *src_arr,
int32_t src_arr_len,
const int32_t size
) {
/* if dest_arr_len == 0, do nothing since we do not support extending list */
if (dest_arr_len == 0) return dest_arr_len;
/* if both step is 1, memmove directly, handle the dropping of the list, and shrink size */
if (src_step == dest_step && dest_step == 1) {
const int32_t src_len = (src_end >= src_start) ? (src_end - src_start + 1) : 0;
const int32_t dest_len = (dest_end >= dest_start) ? (dest_end - dest_start + 1) : 0;
if (src_len > 0) {
__builtin_memmove(
dest_arr + dest_start * size,
src_arr + src_start * size,
src_len * size
);
}
if (dest_len > 0) {
/* dropping */
__builtin_memmove(
dest_arr + (dest_start + src_len) * size,
dest_arr + (dest_end + 1) * size,
(dest_arr_len - dest_end - 1) * size
);
}
/* shrink size */
return dest_arr_len - (dest_len - src_len);
}
/* if two range overlaps, need alloca */
uint8_t need_alloca =
(dest_arr == src_arr)
&& !(
MAX(dest_start, dest_end) < MIN(src_start, src_end)
|| MAX(src_start, src_end) < MIN(dest_start, dest_end)
);
if (need_alloca) {
uint8_t *tmp = alloca(src_arr_len * size);
__builtin_memcpy(tmp, src_arr, src_arr_len * size);
src_arr = tmp;
}
int32_t src_ind = src_start;
int32_t dest_ind = dest_start;
for (;
(src_step > 0) ? (src_ind <= src_end) : (src_ind >= src_end);
src_ind += src_step, dest_ind += dest_step
) {
/* for constant optimization */
if (size == 1) {
__builtin_memcpy(dest_arr + dest_ind, src_arr + src_ind, 1);
} else if (size == 4) {
__builtin_memcpy(dest_arr + dest_ind * 4, src_arr + src_ind * 4, 4);
} else if (size == 8) {
__builtin_memcpy(dest_arr + dest_ind * 8, src_arr + src_ind * 8, 8);
} else {
/* memcpy for var size, cannot overlap after previous alloca */
__builtin_memcpy(dest_arr + dest_ind * size, src_arr + src_ind * size, size);
}
}
/* only dest_step == 1 can we shrink the dest list. */
/* size should be ensured prior to calling this function */
if (dest_step == 1 && dest_end >= dest_start) {
__builtin_memmove(
dest_arr + dest_ind * size,
dest_arr + (dest_end + 1) * size,
(dest_arr_len - dest_end - 1) * size
);
return dest_arr_len - (dest_end - dest_ind) - 1;
}
return dest_arr_len;
}


@ -0,0 +1,330 @@
use crate::typecheck::typedef::Type;
use super::{CodeGenContext, CodeGenerator};
use inkwell::{
attributes::{Attribute, AttributeLoc},
context::Context,
memory_buffer::MemoryBuffer,
module::Module,
types::{BasicTypeEnum, IntType},
values::{IntValue, PointerValue},
AddressSpace, IntPredicate,
};
use nac3parser::ast::Expr;
pub fn load_irrt(ctx: &Context) -> Module {
let bitcode_buf = MemoryBuffer::create_from_memory_range(
include_bytes!(concat!(env!("OUT_DIR"), "/irrt.bc")),
"irrt_bitcode_buffer",
);
let irrt_mod = Module::parse_bitcode_from_buffer(&bitcode_buf, ctx).unwrap();
let inline_attr = Attribute::get_named_enum_kind_id("alwaysinline");
for symbol in &[
"__nac3_int_exp_int32_t",
"__nac3_int_exp_int64_t",
"__nac3_range_slice_len",
"__nac3_slice_index_bound",
] {
let function = irrt_mod.get_function(symbol).unwrap();
function.add_attribute(AttributeLoc::Function, ctx.create_enum_attribute(inline_attr, 0));
}
irrt_mod
}
// repeated squaring method adapted from GNU Scientific Library:
// https://git.savannah.gnu.org/cgit/gsl.git/tree/sys/pow_int.c
pub fn integer_power<'ctx, 'a>(
ctx: &mut CodeGenContext<'ctx, 'a>,
base: IntValue<'ctx>,
exp: IntValue<'ctx>,
signed: bool,
) -> IntValue<'ctx> {
let symbol = match (base.get_type().get_bit_width(), exp.get_type().get_bit_width(), signed) {
(32, 32, true) => "__nac3_int_exp_int32_t",
(64, 64, true) => "__nac3_int_exp_int64_t",
(32, 32, false) => "__nac3_int_exp_uint32_t",
(64, 64, false) => "__nac3_int_exp_uint64_t",
_ => unreachable!(),
};
let base_type = base.get_type();
let pow_fun = ctx.module.get_function(symbol).unwrap_or_else(|| {
let fn_type = base_type.fn_type(&[base_type.into(), base_type.into()], false);
ctx.module.add_function(symbol, fn_type, None)
});
// TODO: throw exception when exp < 0
ctx.builder
.build_call(pow_fun, &[base.into(), exp.into()], "call_int_pow")
.try_as_basic_value()
.unwrap_left()
.into_int_value()
}
pub fn calculate_len_for_slice_range<'ctx, 'a>(
ctx: &mut CodeGenContext<'ctx, 'a>,
start: IntValue<'ctx>,
end: IntValue<'ctx>,
step: IntValue<'ctx>,
) -> IntValue<'ctx> {
const SYMBOL: &str = "__nac3_range_slice_len";
let len_func = ctx.module.get_function(SYMBOL).unwrap_or_else(|| {
let i32_t = ctx.ctx.i32_type();
let fn_t = i32_t.fn_type(&[i32_t.into(), i32_t.into(), i32_t.into()], false);
ctx.module.add_function(SYMBOL, fn_t, None)
});
// TODO: assert step != 0, throw exception if not
ctx.builder
.build_call(len_func, &[start.into(), end.into(), step.into()], "calc_len")
.try_as_basic_value()
.left()
.unwrap()
.into_int_value()
}
/// NOTE: the output end index of this function should be compared ***inclusively***:
/// python allows `a[2::-1]`, whose semantics is `[a[2], a[1], a[0]]`, a range that no
/// exclusive-end numeric slice in python can express.
///
/// equivalent code:
/// ```pseudo_code
/// match (start, end, step):
/// case (s, e, None | Some(step)) if step > 0:
/// return (
/// match s:
/// case None:
/// 0
/// case Some(s):
/// handle_in_bound(s)
/// ,match e:
/// case None:
/// length - 1
/// case Some(e):
/// handle_in_bound(e) - 1
/// ,step == None ? 1 : step
/// )
/// case (s, e, Some(step)) if step < 0:
/// return (
/// match s:
/// case None:
/// length - 1
/// case Some(s):
/// s = handle_in_bound(s)
/// if s == length:
/// s - 1
/// else:
/// s
/// ,match e:
/// case None:
/// 0
/// case Some(e):
/// handle_in_bound(e) + 1
/// ,step
/// )
/// ```
pub fn handle_slice_indices<'a, 'ctx, G: CodeGenerator>(
start: &Option<Box<Expr<Option<Type>>>>,
end: &Option<Box<Expr<Option<Type>>>>,
step: &Option<Box<Expr<Option<Type>>>>,
ctx: &mut CodeGenContext<'ctx, 'a>,
generator: &mut G,
list: PointerValue<'ctx>,
) -> Result<(IntValue<'ctx>, IntValue<'ctx>, IntValue<'ctx>), String> {
// TODO: throw exception when step is 0
let int32 = ctx.ctx.i32_type();
let zero = int32.const_zero();
let one = int32.const_int(1, false);
let length = ctx.build_gep_and_load(list, &[zero, one]).into_int_value();
let length = ctx.builder.build_int_truncate_or_bit_cast(length, int32, "leni32");
Ok(match (start, end, step) {
(s, e, None) => (
s.as_ref().map_or_else(
|| Ok(int32.const_zero()),
|s| handle_slice_index_bound(s, ctx, generator, length),
)?,
{
let e = e.as_ref().map_or_else(
|| Ok(length),
|e| handle_slice_index_bound(e, ctx, generator, length),
)?;
ctx.builder.build_int_sub(e, one, "final_end")
},
one,
),
(s, e, Some(step)) => {
let step = generator
.gen_expr(ctx, step)?
.unwrap()
.to_basic_value_enum(ctx, generator)?
.into_int_value();
let len_id = ctx.builder.build_int_sub(length, one, "lenmin1");
let neg = ctx.builder.build_int_compare(IntPredicate::SLT, step, zero, "step_is_neg");
(
match s {
Some(s) => {
let s = handle_slice_index_bound(s, ctx, generator, length)?;
ctx.builder
.build_select(
ctx.builder.build_and(
ctx.builder.build_int_compare(
IntPredicate::EQ,
s,
length,
"s_eq_len",
),
neg,
"should_minus_one",
),
ctx.builder.build_int_sub(s, one, "s_min"),
s,
"final_start",
)
.into_int_value()
}
None => ctx.builder.build_select(neg, len_id, zero, "stt").into_int_value(),
},
match e {
Some(e) => {
let e = handle_slice_index_bound(e, ctx, generator, length)?;
ctx.builder
.build_select(
neg,
ctx.builder.build_int_add(e, one, "end_add_one"),
ctx.builder.build_int_sub(e, one, "end_sub_one"),
"final_end",
)
.into_int_value()
}
None => ctx.builder.build_select(neg, zero, len_id, "end").into_int_value(),
},
step,
)
}
})
}
/// this function allows index out of range, since python
/// allows index out of range in slice (`a = [1,2,3]; a[1:10] == [2,3]`).
pub fn handle_slice_index_bound<'a, 'ctx, G: CodeGenerator>(
i: &Expr<Option<Type>>,
ctx: &mut CodeGenContext<'ctx, 'a>,
generator: &mut G,
length: IntValue<'ctx>,
) -> Result<IntValue<'ctx>, String> {
const SYMBOL: &str = "__nac3_slice_index_bound";
let func = ctx.module.get_function(SYMBOL).unwrap_or_else(|| {
let i32_t = ctx.ctx.i32_type();
let fn_t = i32_t.fn_type(&[i32_t.into(), i32_t.into()], false);
ctx.module.add_function(SYMBOL, fn_t, None)
});
let i = generator.gen_expr(ctx, i)?.unwrap().to_basic_value_enum(ctx, generator)?;
Ok(ctx
.builder
.build_call(func, &[i.into(), length.into()], "bounded_ind")
.try_as_basic_value()
.left()
.unwrap()
.into_int_value())
}
/// This function handles 'end' **inclusively**.
/// Order of tuples assign_idx and value_idx is ('start', 'end', 'step').
/// Negative index should be handled before entering this function
pub fn list_slice_assignment<'ctx, 'a>(
ctx: &mut CodeGenContext<'ctx, 'a>,
size_ty: IntType<'ctx>,
ty: BasicTypeEnum<'ctx>,
dest_arr: PointerValue<'ctx>,
dest_idx: (IntValue<'ctx>, IntValue<'ctx>, IntValue<'ctx>),
src_arr: PointerValue<'ctx>,
src_idx: (IntValue<'ctx>, IntValue<'ctx>, IntValue<'ctx>),
) {
let int8_ptr = ctx.ctx.i8_type().ptr_type(AddressSpace::Generic);
let int32 = ctx.ctx.i32_type();
let (fun_symbol, elem_ptr_type) = ("__nac3_list_slice_assign_var_size", int8_ptr);
let slice_assign_fun = {
let ty_vec = vec![
int32.into(), // dest start idx
int32.into(), // dest end idx
int32.into(), // dest step
elem_ptr_type.into(), // dest arr ptr
int32.into(), // dest arr len
int32.into(), // src start idx
int32.into(), // src end idx
int32.into(), // src step
elem_ptr_type.into(), // src arr ptr
int32.into(), // src arr len
int32.into(), // size
];
ctx.module.get_function(fun_symbol).unwrap_or_else(|| {
let fn_t = int32.fn_type(ty_vec.as_slice(), false);
ctx.module.add_function(fun_symbol, fn_t, None)
})
};
let zero = int32.const_zero();
let one = int32.const_int(1, false);
let dest_arr_ptr = ctx.build_gep_and_load(dest_arr, &[zero, zero]);
let dest_arr_ptr = ctx.builder.build_pointer_cast(
dest_arr_ptr.into_pointer_value(),
elem_ptr_type,
"dest_arr_ptr_cast",
);
let dest_len = ctx.build_gep_and_load(dest_arr, &[zero, one]).into_int_value();
    let dest_len = ctx.builder.build_int_truncate_or_bit_cast(dest_len, int32, "destlen32");
let src_arr_ptr = ctx.build_gep_and_load(src_arr, &[zero, zero]);
let src_arr_ptr = ctx.builder.build_pointer_cast(
src_arr_ptr.into_pointer_value(),
elem_ptr_type,
"src_arr_ptr_cast",
);
let src_len = ctx.build_gep_and_load(src_arr, &[zero, one]).into_int_value();
let src_len = ctx.builder.build_int_truncate_or_bit_cast(src_len, int32, "srclen32");
    // indices should already be bounds-checked and non-negative at this point
// TODO: assert if dest.step == 1 then len(src) <= len(dest) else len(src) == len(dest), and
// throw exception if not satisfied
let new_len = {
let args = vec![
dest_idx.0.into(), // dest start idx
dest_idx.1.into(), // dest end idx
dest_idx.2.into(), // dest step
dest_arr_ptr.into(), // dest arr ptr
dest_len.into(), // dest arr len
src_idx.0.into(), // src start idx
src_idx.1.into(), // src end idx
src_idx.2.into(), // src step
src_arr_ptr.into(), // src arr ptr
src_len.into(), // src arr len
{
let s = match ty {
BasicTypeEnum::FloatType(t) => t.size_of(),
BasicTypeEnum::IntType(t) => t.size_of(),
BasicTypeEnum::PointerType(t) => t.size_of(),
BasicTypeEnum::StructType(t) => t.size_of().unwrap(),
_ => unreachable!(),
};
ctx.builder.build_int_truncate_or_bit_cast(s, int32, "size")
}
.into(),
];
ctx.builder
.build_call(slice_assign_fun, args.as_slice(), "slice_assign")
.try_as_basic_value()
.unwrap_left()
.into_int_value()
};
// update length
let need_update =
ctx.builder.build_int_compare(IntPredicate::NE, new_len, dest_len, "need_update");
let current = ctx.builder.get_insert_block().unwrap().get_parent().unwrap();
let update_bb = ctx.ctx.append_basic_block(current, "update");
let cont_bb = ctx.ctx.append_basic_block(current, "cont");
ctx.builder.build_conditional_branch(need_update, update_bb, cont_bb);
ctx.builder.position_at_end(update_bb);
let dest_len_ptr = unsafe { ctx.builder.build_gep(dest_arr, &[zero, one], "dest_len_ptr") };
let new_len = ctx.builder.build_int_z_extend_or_bit_cast(new_len, size_ty, "new_len");
ctx.builder.build_store(dest_len_ptr, new_len);
ctx.builder.build_unconditional_branch(cont_bb);
ctx.builder.position_at_end(cont_bb);
}
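The TODO above defers the source/destination length check to a runtime assertion. A small standalone sketch of the intended rule, assuming exactly the semantics stated in the comment (the function name is illustrative, not part of the runtime):

```rust
// Illustrative restatement of the TODO's rule: when the destination step
// is 1 the destination list can be resized, so the source only has to
// fit; for any other step the element counts must match exactly.
fn slice_assign_len_ok(dest_len: usize, src_len: usize, dest_step: i32) -> bool {
    if dest_step == 1 {
        src_len <= dest_len
    } else {
        src_len == dest_len
    }
}

fn main() {
    assert!(slice_assign_len_ok(5, 3, 1)); // step 1: shorter source is fine
    assert!(!slice_assign_len_ok(5, 3, 2)); // step 2: counts must match
    assert!(slice_assign_len_ok(3, 3, 2));
    println!("ok");
}
```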


@@ -8,19 +8,21 @@ use crate::{
 };
 use crossbeam::channel::{unbounded, Receiver, Sender};
 use inkwell::{
+    AddressSpace,
+    OptimizationLevel,
+    attributes::{Attribute, AttributeLoc},
     basic_block::BasicBlock,
     builder::Builder,
     context::Context,
     module::Module,
     passes::{PassManager, PassManagerBuilder},
-    types::{BasicType, BasicTypeEnum},
-    values::{FunctionValue, PointerValue},
-    AddressSpace, OptimizationLevel,
+    types::{AnyType, BasicType, BasicTypeEnum},
+    values::{BasicValueEnum, FunctionValue, PhiValue, PointerValue},
 };
 use itertools::Itertools;
 use nac3parser::ast::{Stmt, StrRef};
 use parking_lot::{Condvar, Mutex};
-use std::collections::HashMap;
+use std::collections::{HashMap, HashSet};
 use std::sync::{
     atomic::{AtomicBool, Ordering},
     Arc,
@@ -30,6 +32,7 @@ use std::thread;
 pub mod concrete_type;
 pub mod expr;
 mod generator;
+pub mod irrt;
 pub mod stmt;
 
 #[cfg(test)]
@@ -59,11 +62,27 @@ pub struct CodeGenContext<'ctx, 'a> {
     pub primitives: PrimitiveStore,
     pub calls: Arc<HashMap<CodeLocation, CallId>>,
     pub registry: &'a WorkerRegistry,
+    // const string cache
+    pub const_strings: HashMap<String, BasicValueEnum<'ctx>>,
     // stores the alloca for variables
     pub init_bb: BasicBlock<'ctx>,
+    // where continue and break should go to respectively
     // the first one is the test_bb, and the second one is bb after the loop
-    pub loop_bb: Option<(BasicBlock<'ctx>, BasicBlock<'ctx>)>,
+    pub loop_target: Option<(BasicBlock<'ctx>, BasicBlock<'ctx>)>,
+    // unwind target bb
+    pub unwind_target: Option<BasicBlock<'ctx>>,
+    // return target bb, just emit ret if no such target
+    pub return_target: Option<BasicBlock<'ctx>>,
+    pub return_buffer: Option<PointerValue<'ctx>>,
+    // outer catch clauses
+    pub outer_catch_clauses:
+        Option<(Vec<Option<BasicValueEnum<'ctx>>>, BasicBlock<'ctx>, PhiValue<'ctx>)>,
+    pub need_sret: bool,
+}
+
+impl<'ctx, 'a> CodeGenContext<'ctx, 'a> {
+    pub fn is_terminated(&self) -> bool {
+        self.builder.get_insert_block().unwrap().get_terminator().is_some()
+    }
 }
 
 type Fp = Box<dyn Fn(&Module) + Send + Sync>;
@@ -181,21 +200,33 @@ impl WorkerRegistry {
     fn worker_thread<G: CodeGenerator>(&self, generator: &mut G, f: Arc<WithCall>) {
         let context = Context::create();
         let mut builder = context.create_builder();
-        let mut module = context.create_module(generator.get_name());
+        let module = context.create_module(generator.get_name());
 
         let pass_builder = PassManagerBuilder::create();
         pass_builder.set_optimization_level(OptimizationLevel::Default);
         let passes = PassManager::create(&module);
         pass_builder.populate_function_pass_manager(&passes);
 
+        let mut errors = HashSet::new();
         while let Some(task) = self.receiver.recv().unwrap() {
-            let result = gen_func(&context, generator, self, builder, module, task);
-            builder = result.0;
-            module = result.1;
-            passes.run_on(&result.2);
+            let tmp_module = context.create_module("tmp");
+            match gen_func(&context, generator, self, builder, tmp_module, task) {
+                Ok(result) => {
+                    builder = result.0;
+                    passes.run_on(&result.2);
+                    module.link_in_module(result.1).unwrap();
+                }
+                Err((old_builder, e)) => {
+                    builder = old_builder;
+                    errors.insert(e);
+                }
+            }
             *self.task_count.lock() -= 1;
             self.wait_condvar.notify_all();
         }
 
+        if !errors.is_empty() {
+            panic!("Codegen error: {}", errors.into_iter().sorted().join("\n----------\n"));
+        }
+
         let result = module.verify();
         if let Err(err) = result {
@@ -224,6 +255,7 @@ pub struct CodeGenTask {
 fn get_llvm_type<'ctx>(
     ctx: &'ctx Context,
+    generator: &mut dyn CodeGenerator,
     unifier: &mut Unifier,
     top_level: &TopLevelContext,
     type_cache: &mut HashMap<Type, BasicTypeEnum<'ctx>>,
@@ -233,54 +265,88 @@ fn get_llvm_type<'ctx>(
     // we assume the type cache should already contain primitive types,
     // and they should be passed by value instead of passing as pointer.
     type_cache.get(&unifier.get_representative(ty)).cloned().unwrap_or_else(|| {
-        let ty = unifier.get_ty(ty);
-        match &*ty {
+        let ty_enum = unifier.get_ty(ty);
+        let result = match &*ty_enum {
             TObj { obj_id, fields, .. } => {
+                // check to avoid treating primitives as classes
+                if obj_id.0 <= 7 {
+                    unreachable!();
+                }
                 // a struct with fields in the order of declaration
                 let top_level_defs = top_level.definitions.read();
                 let definition = top_level_defs.get(obj_id.0).unwrap();
-                let ty = if let TopLevelDef::Class { fields: fields_list, .. } = &*definition.read()
+                let ty = if let TopLevelDef::Class { name, fields: fields_list, .. } =
+                    &*definition.read()
                 {
-                    let fields = fields.borrow();
+                    let struct_type = ctx.opaque_struct_type(&name.to_string());
+                    type_cache.insert(unifier.get_representative(ty), struct_type.ptr_type(AddressSpace::Generic).into());
                     let fields = fields_list
                         .iter()
-                        .map(|f| get_llvm_type(ctx, unifier, top_level, type_cache, fields[&f.0].0))
+                        .map(|f| {
+                            get_llvm_type(
+                                ctx,
+                                generator,
+                                unifier,
+                                top_level,
+                                type_cache,
+                                fields[&f.0].0,
+                            )
+                        })
                         .collect_vec();
-                    ctx.struct_type(&fields, false).ptr_type(AddressSpace::Generic).into()
+                    struct_type.set_body(&fields, false);
+                    struct_type.ptr_type(AddressSpace::Generic).into()
                 } else {
                     unreachable!()
                 };
-                ty
+                return ty;
             }
             TTuple { ty } => {
                 // a struct with fields in the order present in the tuple
                 let fields = ty
                     .iter()
-                    .map(|ty| get_llvm_type(ctx, unifier, top_level, type_cache, *ty))
+                    .map(|ty| get_llvm_type(ctx, generator, unifier, top_level, type_cache, *ty))
                     .collect_vec();
-                ctx.struct_type(&fields, false).ptr_type(AddressSpace::Generic).into()
+                ctx.struct_type(&fields, false).into()
             }
             TList { ty } => {
                 // a struct with an integer and a pointer to an array
-                let element_type = get_llvm_type(ctx, unifier, top_level, type_cache, *ty);
-                let fields =
-                    [ctx.i32_type().into(), element_type.ptr_type(AddressSpace::Generic).into()];
+                let element_type =
+                    get_llvm_type(ctx, generator, unifier, top_level, type_cache, *ty);
+                let fields = [
+                    element_type.ptr_type(AddressSpace::Generic).into(),
+                    generator.get_size_type(ctx).into(),
+                ];
                 ctx.struct_type(&fields, false).ptr_type(AddressSpace::Generic).into()
             }
             TVirtual { .. } => unimplemented!(),
-            _ => unreachable!("{}", ty.get_type_name()),
-        }
+            _ => unreachable!("{}", ty_enum.get_type_name()),
+        };
+        type_cache.insert(unifier.get_representative(ty), result);
+        result
     })
 }
-pub fn gen_func<'ctx, G: CodeGenerator + ?Sized>(
+fn need_sret<'ctx>(ctx: &'ctx Context, ty: BasicTypeEnum<'ctx>) -> bool {
+    fn need_sret_impl<'ctx>(ctx: &'ctx Context, ty: BasicTypeEnum<'ctx>, maybe_large: bool) -> bool {
+        match ty {
+            BasicTypeEnum::IntType(_) | BasicTypeEnum::PointerType(_) => false,
+            BasicTypeEnum::FloatType(_) if maybe_large => false,
+            BasicTypeEnum::StructType(ty) if maybe_large && ty.count_fields() <= 2 =>
+                ty.get_field_types().iter().any(|ty| need_sret_impl(ctx, *ty, false)),
+            _ => true,
+        }
+    }
+    need_sret_impl(ctx, ty, true)
+}
+
+pub fn gen_func<'ctx, G: CodeGenerator>(
     context: &'ctx Context,
     generator: &mut G,
     registry: &WorkerRegistry,
     builder: Builder<'ctx>,
     module: Module<'ctx>,
     task: CodeGenTask,
-) -> (Builder<'ctx>, Module<'ctx>, FunctionValue<'ctx>) {
+) -> Result<(Builder<'ctx>, Module<'ctx>, FunctionValue<'ctx>), (Builder<'ctx>, String)> {
     let top_level_ctx = registry.top_level_ctx.clone();
     let static_value_store = registry.static_value_store.clone();
     let (mut unifier, primitives) = {
@@ -293,40 +359,63 @@ pub fn gen_func<'ctx, G: CodeGenerator + ?Sized>(
             // this should be unification between variables and concrete types
             // and should not cause any problem...
             let b = task.store.to_unifier_type(&mut unifier, &primitives, *b, &mut cache);
-            unifier.unify(*a, b).or_else(|err| {
-                if matches!(&*unifier.get_ty(*a), TypeEnum::TRigidVar { .. }) {
-                    unifier.replace_rigid_var(*a, b);
-                    Ok(())
-                } else {
-                    Err(err)
-                }
-            }).unwrap()
+            unifier
+                .unify(*a, b)
+                .or_else(|err| {
+                    if matches!(&*unifier.get_ty(*a), TypeEnum::TRigidVar { .. }) {
+                        unifier.replace_rigid_var(*a, b);
+                        Ok(())
+                    } else {
+                        Err(err)
+                    }
+                })
+                .unwrap()
     }
 
     // rebuild primitive store with unique representatives
     let primitives = PrimitiveStore {
         int32: unifier.get_representative(primitives.int32),
         int64: unifier.get_representative(primitives.int64),
+        uint32: unifier.get_representative(primitives.uint32),
+        uint64: unifier.get_representative(primitives.uint64),
         float: unifier.get_representative(primitives.float),
         bool: unifier.get_representative(primitives.bool),
         none: unifier.get_representative(primitives.none),
         range: unifier.get_representative(primitives.range),
         str: unifier.get_representative(primitives.str),
+        exception: unifier.get_representative(primitives.exception),
     };
 
     let mut type_cache: HashMap<_, _> = [
-        (unifier.get_representative(primitives.int32), context.i32_type().into()),
-        (unifier.get_representative(primitives.int64), context.i64_type().into()),
-        (unifier.get_representative(primitives.float), context.f64_type().into()),
-        (unifier.get_representative(primitives.bool), context.bool_type().into()),
-        (
-            unifier.get_representative(primitives.str),
-            context.i8_type().ptr_type(AddressSpace::Generic).into(),
-        ),
+        (primitives.int32, context.i32_type().into()),
+        (primitives.int64, context.i64_type().into()),
+        (primitives.uint32, context.i32_type().into()),
+        (primitives.uint64, context.i64_type().into()),
+        (primitives.float, context.f64_type().into()),
+        (primitives.bool, context.bool_type().into()),
+        (primitives.str, {
+            let str_type = context.opaque_struct_type("str");
+            let fields = [
+                context.i8_type().ptr_type(AddressSpace::Generic).into(),
+                generator.get_size_type(context).into(),
+            ];
+            str_type.set_body(&fields, false);
+            str_type.into()
+        }),
+        (primitives.range, context.i32_type().array_type(3).ptr_type(AddressSpace::Generic).into()),
     ]
     .iter()
     .cloned()
     .collect();
+
+    type_cache.insert(primitives.exception, {
+        let exception = context.opaque_struct_type("Exception");
+        let int32 = context.i32_type().into();
+        let int64 = context.i64_type().into();
+        let str_ty = *type_cache.get(&primitives.str).unwrap();
+        let fields = [int32, str_ty, int32, int32, str_ty, str_ty, int64, int64, int64];
+        exception.set_body(&fields, false);
+        exception.ptr_type(AddressSpace::Generic).into()
+    });
 
     let (args, ret) = if let ConcreteTypeEnum::TFunc { args, ret, .. } =
         task.store.get(task.signature)
@@ -344,18 +433,35 @@ pub fn gen_func<'ctx, G: CodeGenerator + ?Sized>(
     } else {
         unreachable!()
     };
-    let params = args
+    let ret_type = if unifier.unioned(ret, primitives.none) {
+        None
+    } else {
+        Some(get_llvm_type(context, generator, &mut unifier, top_level_ctx.as_ref(), &mut type_cache, ret))
+    };
+
+    let has_sret = ret_type.map_or(false, |ty| need_sret(context, ty));
+    let mut params = args
         .iter()
         .map(|arg| {
-            get_llvm_type(context, &mut unifier, top_level_ctx.as_ref(), &mut type_cache, arg.ty)
+            get_llvm_type(
+                context,
+                generator,
+                &mut unifier,
+                top_level_ctx.as_ref(),
+                &mut type_cache,
+                arg.ty,
+            )
+            .into()
         })
         .collect_vec();
-    let fn_type = if unifier.unioned(ret, primitives.none) {
-        context.void_type().fn_type(&params, false)
-    } else {
-        get_llvm_type(context, &mut unifier, top_level_ctx.as_ref(), &mut type_cache, ret)
-            .fn_type(&params, false)
+
+    if has_sret {
+        params.insert(0, ret_type.unwrap().ptr_type(AddressSpace::Generic).into());
+    }
+
+    let fn_type = match ret_type {
+        Some(ret_type) if !has_sret => ret_type.fn_type(&params, false),
+        _ => context.void_type().fn_type(&params, false)
     };
 
     let symbol = &task.symbol_name;
@@ -369,21 +475,41 @@ pub fn gen_func<'ctx, G: CodeGenerator + ?Sized>(
         });
         fn_val.set_personality_function(personality);
     }
+    if has_sret {
+        fn_val.add_attribute(AttributeLoc::Param(0),
+            context.create_type_attribute(Attribute::get_named_enum_kind_id("sret"),
+                ret_type.unwrap().as_any_type_enum()));
+    }
 
     let init_bb = context.append_basic_block(fn_val, "init");
     builder.position_at_end(init_bb);
     let body_bb = context.append_basic_block(fn_val, "body");
 
     let mut var_assignment = HashMap::new();
+    let offset = if has_sret { 1 } else { 0 };
     for (n, arg) in args.iter().enumerate() {
-        let param = fn_val.get_nth_param(n as u32).unwrap();
+        let param = fn_val.get_nth_param((n as u32) + offset).unwrap();
         let alloca = builder.build_alloca(
-            get_llvm_type(context, &mut unifier, top_level_ctx.as_ref(), &mut type_cache, arg.ty),
+            get_llvm_type(
+                context,
+                generator,
+                &mut unifier,
+                top_level_ctx.as_ref(),
+                &mut type_cache,
+                arg.ty,
+            ),
             &arg.name.to_string(),
         );
         builder.build_store(alloca, param);
         var_assignment.insert(arg.name, (alloca, None, 0));
     }
 
+    let return_buffer = if has_sret {
+        Some(fn_val.get_nth_param(0).unwrap().into_pointer_value())
+    } else {
+        fn_type.get_return_type().map(|v| builder.build_alloca(v, "$ret"))
+    };
+
     let static_values = {
         let store = registry.static_value_store.lock();
         store.store[task.id].clone()
@@ -401,7 +527,12 @@ pub fn gen_func<'ctx, G: CodeGenerator + ?Sized>(
         resolver: task.resolver,
         top_level: top_level_ctx.as_ref(),
         calls: task.calls,
-        loop_bb: None,
+        loop_target: None,
+        return_target: None,
+        return_buffer,
+        unwind_target: None,
+        outer_catch_clauses: None,
+        const_strings: Default::default(),
         registry,
         var_assignment,
         type_cache,
@@ -411,21 +542,28 @@ pub fn gen_func<'ctx, G: CodeGenerator + ?Sized>(
         module,
         unifier,
         static_value_store,
+        need_sret: has_sret
     };
 
-    let mut returned = false;
+    let mut err = None;
     for stmt in task.body.iter() {
-        returned = generator.gen_stmt(&mut code_gen_context, stmt);
-        if returned {
+        if let Err(e) = generator.gen_stmt(&mut code_gen_context, stmt) {
+            err = Some(e);
+            break;
+        }
+        if code_gen_context.is_terminated() {
             break;
         }
    }
 
     // after static analysis, only void functions can have no return at the end.
-    if !returned {
+    if !code_gen_context.is_terminated() {
         code_gen_context.builder.build_return(None);
     }
 
     let CodeGenContext { builder, module, .. } = code_gen_context;
+    if let Some(e) = err {
+        return Err((builder, e));
+    }
 
-    (builder, module, fn_val)
+    Ok((builder, module, fn_val))
 }
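The `need_sret` predicate introduced in this diff decides when a function's return value must travel through a hidden pointer parameter instead of being returned directly. A standalone restatement of the same rule, using a toy type enum rather than inkwell's `BasicTypeEnum` (so the types and names here are illustrative only):

```rust
// Toy stand-in for LLVM basic types, used only to illustrate the rule.
#[derive(Clone)]
enum Ty {
    Int,
    Ptr,
    Float,
    Struct(Vec<Ty>),
}

// Mirrors the need_sret/need_sret_impl pair in the diff above: ints and
// pointers are never sret; a float or a small (<= 2 field) struct is fine
// at the top level, but once nested inside a struct field it forces sret.
fn need_sret(ty: &Ty) -> bool {
    fn need_sret_impl(ty: &Ty, maybe_large: bool) -> bool {
        match ty {
            Ty::Int | Ty::Ptr => false,
            Ty::Float if maybe_large => false,
            Ty::Struct(fields) if maybe_large && fields.len() <= 2 => {
                fields.iter().any(|f| need_sret_impl(f, false))
            }
            _ => true,
        }
    }
    need_sret_impl(ty, true)
}

fn main() {
    assert!(!need_sret(&Ty::Int));
    assert!(!need_sret(&Ty::Float));
    // a pair of ints is returned directly...
    assert!(!need_sret(&Ty::Struct(vec![Ty::Int, Ty::Int])));
    // ...but a struct with a float field goes through the sret pointer
    assert!(need_sret(&Ty::Struct(vec![Ty::Float, Ty::Int])));
    println!("ok");
}
```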

File diff suppressed because it is too large.


@@ -3,7 +3,6 @@ use crate::{
         concrete_type::ConcreteTypeStore, CodeGenContext, CodeGenTask, DefaultCodeGenerator,
         WithCall, WorkerRegistry,
     },
-    location::Location,
     symbol_resolver::{SymbolResolver, ValueEnum},
     toplevel::{
         composer::TopLevelComposer, DefinitionId, FunInstance, TopLevelContext, TopLevelDef,
@@ -19,7 +18,6 @@ use nac3parser::{
     parser::parse_program,
 };
 use parking_lot::RwLock;
-use std::cell::RefCell;
 use std::collections::{HashMap, HashSet};
 use std::sync::Arc;
@@ -36,18 +34,21 @@ impl Resolver {
 }
 
 impl SymbolResolver for Resolver {
-    fn get_default_param_value(&self, _: &nac3parser::ast::Expr) -> Option<crate::symbol_resolver::SymbolValue> {
+    fn get_default_param_value(
+        &self,
+        _: &nac3parser::ast::Expr,
+    ) -> Option<crate::symbol_resolver::SymbolValue> {
         unimplemented!()
     }
 
     fn get_symbol_type(
         &self,
         _: &mut Unifier,
         _: &[Arc<RwLock<TopLevelDef>>],
         _: &PrimitiveStore,
         str: StrRef,
-    ) -> Option<Type> {
-        self.id_to_type.get(&str).cloned()
+    ) -> Result<Type, String> {
+        self.id_to_type.get(&str).cloned().ok_or_else(|| format!("cannot find symbol `{}`", str))
     }
 
     fn get_symbol_value<'ctx, 'a>(
@@ -58,12 +59,20 @@ impl SymbolResolver for Resolver {
         unimplemented!()
     }
 
-    fn get_symbol_location(&self, _: StrRef) -> Option<Location> {
+    fn get_identifier_def(&self, id: StrRef) -> Result<DefinitionId, String> {
+        self.id_to_def
+            .read()
+            .get(&id)
+            .cloned()
+            .ok_or_else(|| format!("cannot find symbol `{}`", id))
+    }
+
+    fn get_string_id(&self, _: &str) -> i32 {
         unimplemented!()
     }
 
-    fn get_identifier_def(&self, id: StrRef) -> Option<DefinitionId> {
-        self.id_to_def.read().get(&id).cloned()
+    fn get_exception_id(&self, tyid: usize) -> usize {
+        unimplemented!()
     }
 }
@@ -74,7 +83,7 @@ fn test_primitives() {
         d = a if c == 1 else 0
         return d
     "};
-    let statements = parse_program(source).unwrap();
+    let statements = parse_program(source, Default::default()).unwrap();
     let composer: TopLevelComposer = Default::default();
     let mut unifier = composer.unifier.clone();
@@ -88,7 +97,7 @@ fn test_primitives() {
         class_names: Default::default(),
     }) as Arc<dyn SymbolResolver + Send + Sync>;
 
-    let threads = vec![DefaultCodeGenerator::new("test".into()).into()];
+    let threads = vec![DefaultCodeGenerator::new("test".into(), 32).into()];
     let signature = FunSignature {
         args: vec![
             FuncArg { name: "a".into(), ty: primitives.int32, default_value: None },
@@ -120,6 +129,7 @@ fn test_primitives() {
         virtual_checks: &mut virtual_checks,
         calls: &mut calls,
         defined_identifiers: identifiers.clone(),
+        in_handler: false,
     };
     inferencer.variable_mapping.insert("a".into(), inferencer.primitives.int32);
     inferencer.variable_mapping.insert("b".into(), inferencer.primitives.int32);
@@ -193,12 +203,12 @@ fn test_simple_call() {
         a = foo(a)
         return a * 2
     "};
-    let statements_1 = parse_program(source_1).unwrap();
+    let statements_1 = parse_program(source_1, Default::default()).unwrap();
 
     let source_2 = indoc! { "
         return a + 1
     "};
-    let statements_2 = parse_program(source_2).unwrap();
+    let statements_2 = parse_program(source_2, Default::default()).unwrap();
 
     let composer: TopLevelComposer = Default::default();
     let mut unifier = composer.unifier.clone();
@@ -211,7 +221,7 @@ fn test_simple_call() {
         ret: primitives.int32,
         vars: HashMap::new(),
     };
-    let fun_ty = unifier.add_ty(TypeEnum::TFunc(RefCell::new(signature.clone())));
+    let fun_ty = unifier.add_ty(TypeEnum::TFunc(signature.clone()));
     let mut store = ConcreteTypeStore::new();
     let mut cache = HashMap::new();
     let signature = store.from_signature(&mut unifier, &primitives, &signature, &mut cache);
@@ -227,6 +237,7 @@ fn test_simple_call() {
         instance_to_symbol: HashMap::new(),
         resolver: None,
         codegen_callback: None,
+        loc: None,
     })));
 
     let resolver = Resolver {
@@ -245,7 +256,7 @@ fn test_simple_call() {
         unreachable!()
     }
 
-    let threads = vec![DefaultCodeGenerator::new("test".into()).into()];
+    let threads = vec![DefaultCodeGenerator::new("test".into(), 32).into()];
     let mut function_data = FunctionData {
         resolver: resolver.clone(),
         bound_variables: Vec::new(),
@@ -263,6 +274,7 @@ fn test_simple_call() {
         virtual_checks: &mut virtual_checks,
         calls: &mut calls,
         defined_identifiers: identifiers.clone(),
+        in_handler: false,
     };
     inferencer.variable_mapping.insert("a".into(), inferencer.primitives.int32);
     inferencer.variable_mapping.insert("foo".into(), fun_ty);


@@ -2,7 +2,6 @@
 #![allow(dead_code)]
 
 pub mod codegen;
-pub mod location;
 pub mod symbol_resolver;
 pub mod toplevel;
 pub mod typecheck;


@@ -1,28 +0,0 @@
-use nac3parser::ast;
-use std::vec::Vec;
-
-#[derive(Clone, Copy, PartialEq)]
-pub struct FileID(u32);
-
-#[derive(Clone, Copy, PartialEq)]
-pub enum Location {
-    CodeRange(FileID, ast::Location),
-    Builtin,
-}
-
-#[derive(Default)]
-pub struct FileRegistry {
-    files: Vec<String>,
-}
-
-impl FileRegistry {
-    pub fn add_file(&mut self, path: &str) -> FileID {
-        let index = self.files.len() as u32;
-        self.files.push(path.to_owned());
-        FileID(index)
-    }
-
-    pub fn query_file(&self, id: FileID) -> &str {
-        &self.files[id.0 as usize]
-    }
-}


@@ -1,37 +1,73 @@
-use std::collections::HashMap;
 use std::fmt::Debug;
-use std::{cell::RefCell, sync::Arc};
+use std::sync::Arc;
+use std::{collections::HashMap, fmt::Display};
 
-use crate::typecheck::{
-    type_inferencer::PrimitiveStore,
-    typedef::{Type, Unifier},
-};
+use crate::typecheck::typedef::TypeEnum;
 use crate::{
     codegen::CodeGenContext,
     toplevel::{DefinitionId, TopLevelDef},
 };
-use crate::{location::Location, typecheck::typedef::TypeEnum};
+use crate::{
+    codegen::CodeGenerator,
+    typecheck::{
+        type_inferencer::PrimitiveStore,
+        typedef::{Type, Unifier},
+    },
+};
 use inkwell::values::{BasicValueEnum, FloatValue, IntValue, PointerValue};
 use itertools::{chain, izip};
-use nac3parser::ast::{Expr, StrRef};
+use nac3parser::ast::{Expr, Location, StrRef};
 use parking_lot::RwLock;
 #[derive(Clone, PartialEq, Debug)]
 pub enum SymbolValue {
     I32(i32),
     I64(i64),
+    U32(u32),
+    U64(u64),
+    Str(String),
     Double(f64),
     Bool(bool),
     Tuple(Vec<SymbolValue>),
 }
+impl Display for SymbolValue {
+    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
+        match self {
+            SymbolValue::I32(i) => write!(f, "{}", i),
+            SymbolValue::I64(i) => write!(f, "int64({})", i),
+            SymbolValue::U32(i) => write!(f, "uint32({})", i),
+            SymbolValue::U64(i) => write!(f, "uint64({})", i),
+            SymbolValue::Str(s) => write!(f, "\"{}\"", s),
+            SymbolValue::Double(d) => write!(f, "{}", d),
+            SymbolValue::Bool(b) => {
+                if *b {
+                    write!(f, "True")
+                } else {
+                    write!(f, "False")
+                }
+            }
+            SymbolValue::Tuple(t) => {
+                write!(f, "({})", t.iter().map(|v| format!("{}", v)).collect::<Vec<_>>().join(", "))
+            }
+        }
+    }
+}
 pub trait StaticValue {
     fn get_unique_identifier(&self) -> u64;
 
+    fn get_const_obj<'ctx, 'a>(
+        &self,
+        ctx: &mut CodeGenContext<'ctx, 'a>,
+        generator: &mut dyn CodeGenerator,
+    ) -> BasicValueEnum<'ctx>;
+
     fn to_basic_value_enum<'ctx, 'a>(
         &self,
         ctx: &mut CodeGenContext<'ctx, 'a>,
-    ) -> BasicValueEnum<'ctx>;
+        generator: &mut dyn CodeGenerator,
+    ) -> Result<BasicValueEnum<'ctx>, String>;
 
     fn get_field<'ctx, 'a>(
         &self,
@@ -74,10 +110,11 @@ impl<'ctx> ValueEnum<'ctx> {
     pub fn to_basic_value_enum<'a>(
         self,
         ctx: &mut CodeGenContext<'ctx, 'a>,
-    ) -> BasicValueEnum<'ctx> {
+        generator: &mut dyn CodeGenerator,
+    ) -> Result<BasicValueEnum<'ctx>, String> {
         match self {
-            ValueEnum::Static(v) => v.to_basic_value_enum(ctx),
-            ValueEnum::Dynamic(v) => v,
+            ValueEnum::Static(v) => v.to_basic_value_enum(ctx, generator),
+            ValueEnum::Dynamic(v) => Ok(v),
         }
     }
 }
@@ -90,10 +127,10 @@ pub trait SymbolResolver {
         top_level_defs: &[Arc<RwLock<TopLevelDef>>],
         primitives: &PrimitiveStore,
         str: StrRef,
-    ) -> Option<Type>;
+    ) -> Result<Type, String>;
 
     // get the top-level definition of identifiers
-    fn get_identifier_def(&self, str: StrRef) -> Option<DefinitionId>;
+    fn get_identifier_def(&self, str: StrRef) -> Result<DefinitionId, String>;
 
     fn get_symbol_value<'ctx, 'a>(
         &self,
@ -101,13 +138,14 @@ pub trait SymbolResolver {
        ctx: &mut CodeGenContext<'ctx, 'a>,
    ) -> Option<ValueEnum<'ctx>>;

    fn get_symbol_location(&self, str: StrRef) -> Option<Location>;
    fn get_default_param_value(&self, expr: &nac3parser::ast::Expr) -> Option<SymbolValue>;
    fn get_string_id(&self, s: &str) -> i32;
    fn get_exception_id(&self, tyid: usize) -> usize;
    // handle function call etc.
}
thread_local! {
    static IDENTIFIER_ID: [StrRef; 12] = [
        "int32".into(),
        "int64".into(),
        "float".into(),
@@ -115,7 +153,11 @@ thread_local! {
        "None".into(),
        "virtual".into(),
        "list".into(),
        "tuple".into(),
        "str".into(),
        "Exception".into(),
        "uint32".into(),
        "uint64".into(),
    ];
}
@@ -137,22 +179,34 @@ pub fn parse_type_annotation<T>(
    let virtual_id = ids[5];
    let list_id = ids[6];
    let tuple_id = ids[7];
    let str_id = ids[8];
    let exn_id = ids[9];
    let uint32_id = ids[10];
    let uint64_id = ids[11];

    let name_handling = |id: &StrRef, loc: Location, unifier: &mut Unifier| {
        if *id == int32_id {
            Ok(primitives.int32)
        } else if *id == int64_id {
            Ok(primitives.int64)
        } else if *id == uint32_id {
            Ok(primitives.uint32)
        } else if *id == uint64_id {
            Ok(primitives.uint64)
        } else if *id == float_id {
            Ok(primitives.float)
        } else if *id == bool_id {
            Ok(primitives.bool)
        } else if *id == none_id {
            Ok(primitives.none)
        } else if *id == str_id {
            Ok(primitives.str)
        } else if *id == exn_id {
            Ok(primitives.exception)
        } else {
            let obj_id = resolver.get_identifier_def(*id);
            match obj_id {
                Ok(obj_id) => {
                    let def = top_level_defs[obj_id.0].read();
                    if let TopLevelDef::Class { fields, methods, type_vars, .. } = &*def {
                        if !type_vars.is_empty() {
@@ -161,141 +215,111 @@ pub fn parse_type_annotation<T>(
                                type_vars.len()
                            ));
                        }
                        let fields = chain(
                            fields.iter().map(|(k, v, m)| (*k, (*v, *m))),
                            methods.iter().map(|(k, v, _)| (*k, (*v, false))),
                        )
                        .collect();
                        Ok(unifier.add_ty(TypeEnum::TObj {
                            obj_id,
                            fields,
                            params: Default::default(),
                        }))
                    } else {
                        Err(format!("Cannot use function name as type at {}", loc))
                    }
                }
                Err(_) => {
                    // it could be a type variable
                    let ty = resolver
                        .get_symbol_type(unifier, top_level_defs, primitives, *id)
                        .map_err(|e| format!("Unknown type annotation at {}: {}", loc, e))?;
                    if let TypeEnum::TVar { .. } = &*unifier.get_ty(ty) {
                        Ok(ty)
                    } else {
                        Err(format!("Unknown type annotation {} at {}", id, loc))
                    }
                }
            }
        }
    };
    let subscript_name_handle = |id: &StrRef, slice: &Expr<T>, unifier: &mut Unifier| {
        if *id == virtual_id {
            let ty = parse_type_annotation(resolver, top_level_defs, unifier, primitives, slice)?;
            Ok(unifier.add_ty(TypeEnum::TVirtual { ty }))
        } else if *id == list_id {
            let ty = parse_type_annotation(resolver, top_level_defs, unifier, primitives, slice)?;
            Ok(unifier.add_ty(TypeEnum::TList { ty }))
        } else if *id == tuple_id {
            if let Tuple { elts, .. } = &slice.node {
                let ty = elts
                    .iter()
                    .map(|elt| {
                        parse_type_annotation(resolver, top_level_defs, unifier, primitives, elt)
                    })
                    .collect::<Result<Vec<_>, _>>()?;
                Ok(unifier.add_ty(TypeEnum::TTuple { ty }))
            } else {
                Err("Expected multiple elements for tuple".into())
            }
        } else {
            let types = if let Tuple { elts, .. } = &slice.node {
                elts.iter()
                    .map(|v| {
                        parse_type_annotation(resolver, top_level_defs, unifier, primitives, v)
                    })
                    .collect::<Result<Vec<_>, _>>()?
            } else {
                vec![parse_type_annotation(resolver, top_level_defs, unifier, primitives, slice)?]
            };
            let obj_id = resolver.get_identifier_def(*id)?;
            let def = top_level_defs[obj_id.0].read();
            if let TopLevelDef::Class { fields, methods, type_vars, .. } = &*def {
                if types.len() != type_vars.len() {
                    return Err(format!(
                        "Unexpected number of type parameters: expected {} but got {}",
                        type_vars.len(),
                        types.len()
                    ));
                }
                let mut subst = HashMap::new();
                for (var, ty) in izip!(type_vars.iter(), types.iter()) {
                    let id = if let TypeEnum::TVar { id, .. } = &*unifier.get_ty(*var) {
                        *id
                    } else {
                        unreachable!()
                    };
                    subst.insert(id, *ty);
                }
                let mut fields = fields
                    .iter()
                    .map(|(attr, ty, is_mutable)| {
                        let ty = unifier.subst(*ty, &subst).unwrap_or(*ty);
                        (*attr, (ty, *is_mutable))
                    })
                    .collect::<HashMap<_, _>>();
                fields.extend(methods.iter().map(|(attr, ty, _)| {
                    let ty = unifier.subst(*ty, &subst).unwrap_or(*ty);
                    (*attr, (ty, false))
                }));
                Ok(unifier.add_ty(TypeEnum::TObj { obj_id, fields, params: subst }))
            } else {
                Err("Cannot use function name as type".into())
            }
        }
    };
    match &expr.node {
        Name { id, .. } => name_handling(id, expr.location, unifier),
        Subscript { value, slice, .. } => {
            if let Name { id, .. } = &value.node {
                subscript_name_handle(id, slice, unifier)
            } else {
                Err(format!("unsupported type expression at {}", expr.location))
            }
        }
        _ => Err(format!("unsupported type expression at {}", expr.location)),
    }
}
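The refactor above splits annotation parsing into a `name_handling` closure for bare names and a `subscript_name_handle` closure for subscripted forms like `list[...]` and `tuple[...]`. A minimal sketch of that shape, over a hypothetical toy AST rather than nac3parser's real one:

```rust
// Sketch only: a toy annotation AST, not nac3parser's Expr.
#[derive(Debug, PartialEq)]
enum Ty {
    Int32,
    List(Box<Ty>),
    Tuple(Vec<Ty>),
}

enum Expr {
    Name(&'static str),
    Subscript(&'static str, Vec<Expr>),
}

fn parse(e: &Expr) -> Result<Ty, String> {
    // Bare names map directly to primitive types or fail.
    let name_handling = |id: &str| match id {
        "int32" => Ok(Ty::Int32),
        _ => Err(format!("unknown type annotation {}", id)),
    };
    match e {
        Expr::Name(id) => name_handling(id),
        // Subscripts recurse into their arguments.
        Expr::Subscript(id, args) => match *id {
            "list" => Ok(Ty::List(Box::new(parse(&args[0])?))),
            "tuple" => Ok(Ty::Tuple(
                args.iter().map(parse).collect::<Result<Vec<_>, _>>()?,
            )),
            _ => Err(format!("unsupported type expression {}", id)),
        },
    }
}

fn main() {
    let e = Expr::Subscript("list", vec![Expr::Name("int32")]);
    assert_eq!(parse(&e), Ok(Ty::List(Box::new(Ty::Int32))));
    assert!(parse(&Expr::Name("foo")).is_err());
}
```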
@@ -309,6 +333,26 @@ impl dyn SymbolResolver + Send + Sync {
    ) -> Result<Type, String> {
        parse_type_annotation(self, top_level_defs, unifier, primitives, expr)
    }

    pub fn get_type_name(
        &self,
        top_level_defs: &[Arc<RwLock<TopLevelDef>>],
        unifier: &mut Unifier,
        ty: Type,
    ) -> String {
        unifier.internal_stringify(
            ty,
            &mut |id| {
                if let TopLevelDef::Class { name, .. } = &*top_level_defs[id].read() {
                    name.to_string()
                } else {
                    unreachable!("expected class definition")
                }
            },
            &mut |id| format!("var{}", id),
            &mut None,
        )
    }
}

impl Debug for dyn SymbolResolver + Send + Sync {

File diff suppressed because it is too large

File diff suppressed because it is too large

@@ -1,32 +1,22 @@
use std::convert::TryInto;

use crate::symbol_resolver::SymbolValue;
use nac3parser::ast::{Constant, Location};

use super::*;

impl TopLevelDef {
    pub fn to_string(&self, unifier: &mut Unifier) -> String {
        match self {
            TopLevelDef::Class { name, ancestors, fields, methods, type_vars, .. } => {
                let fields_str = fields
                    .iter()
                    .map(|(n, ty, _)| (n.to_string(), unifier.stringify(*ty)))
                    .collect_vec();
                let methods_str = methods
                    .iter()
                    .map(|(n, ty, id)| (n.to_string(), unifier.stringify(*ty), *id))
                    .collect_vec();
                format!(
                    "Class {{\nname: {:?},\nancestors: {:?},\nfields: {:?},\nmethods: {:?},\ntype_vars: {:?}\n}}",
@@ -34,13 +24,13 @@ impl TopLevelDef {
                    ancestors.iter().map(|ancestor| ancestor.stringify(unifier)).collect_vec(),
                    fields_str.iter().map(|(a, _)| a).collect_vec(),
                    methods_str.iter().map(|(a, b, _)| (a, b)).collect_vec(),
                    type_vars.iter().map(|id| unifier.stringify(*id)).collect_vec(),
                )
            }
            TopLevelDef::Function { name, signature, var_id, .. } => format!(
                "Function {{\nname: {:?},\nsig: {:?},\nvar_id: {:?}\n}}",
                name,
                unifier.stringify(*signature),
                {
                    // preserve the order for debug output and test
                    let mut r = var_id.clone();
@@ -57,40 +47,67 @@ impl TopLevelComposer {
        let mut unifier = Unifier::new();
        let int32 = unifier.add_ty(TypeEnum::TObj {
            obj_id: DefinitionId(0),
            fields: HashMap::new(),
            params: HashMap::new(),
        });
        let int64 = unifier.add_ty(TypeEnum::TObj {
            obj_id: DefinitionId(1),
            fields: HashMap::new(),
            params: HashMap::new(),
        });
        let float = unifier.add_ty(TypeEnum::TObj {
            obj_id: DefinitionId(2),
            fields: HashMap::new(),
            params: HashMap::new(),
        });
        let bool = unifier.add_ty(TypeEnum::TObj {
            obj_id: DefinitionId(3),
            fields: HashMap::new(),
            params: HashMap::new(),
        });
        let none = unifier.add_ty(TypeEnum::TObj {
            obj_id: DefinitionId(4),
            fields: HashMap::new(),
            params: HashMap::new(),
        });
        let range = unifier.add_ty(TypeEnum::TObj {
            obj_id: DefinitionId(5),
            fields: HashMap::new(),
            params: HashMap::new(),
        });
        let str = unifier.add_ty(TypeEnum::TObj {
            obj_id: DefinitionId(6),
            fields: HashMap::new(),
            params: HashMap::new(),
        });
        let exception = unifier.add_ty(TypeEnum::TObj {
            obj_id: DefinitionId(7),
            fields: vec![
                ("__name__".into(), (int32, true)),
                ("__file__".into(), (int32, true)),
                ("__line__".into(), (int32, true)),
                ("__col__".into(), (int32, true)),
                ("__func__".into(), (str, true)),
                ("__message__".into(), (str, true)),
                ("__param0__".into(), (int64, true)),
                ("__param1__".into(), (int64, true)),
                ("__param2__".into(), (int64, true)),
            ]
            .into_iter()
            .collect::<HashMap<_, _>>(),
            params: HashMap::new(),
        });
        let uint32 = unifier.add_ty(TypeEnum::TObj {
            obj_id: DefinitionId(8),
            fields: HashMap::new(),
            params: HashMap::new(),
        });
        let uint64 = unifier.add_ty(TypeEnum::TObj {
            obj_id: DefinitionId(9),
            fields: HashMap::new(),
            params: HashMap::new(),
        });
        let primitives =
            PrimitiveStore { int32, int64, float, bool, none, range, str, exception, uint32, uint64 };
        crate::typecheck::magic_methods::set_primitives_magic_methods(&primitives, &mut unifier);
        (primitives, unifier)
    }
@@ -102,6 +119,7 @@ impl TopLevelComposer {
        resolver: Option<Arc<dyn SymbolResolver + Send + Sync>>,
        name: StrRef,
        constructor: Option<Type>,
        loc: Option<Location>,
    ) -> TopLevelDef {
        TopLevelDef::Class {
            name,
@@ -112,6 +130,7 @@ impl TopLevelComposer {
            ancestors: Default::default(),
            constructor,
            resolver,
            loc,
        }
    }
@@ -121,6 +140,7 @@ impl TopLevelComposer {
        simple_name: StrRef,
        ty: Type,
        resolver: Option<Arc<dyn SymbolResolver + Send + Sync>>,
        loc: Option<Location>,
    ) -> TopLevelDef {
        TopLevelDef::Function {
            name,
@@ -131,6 +151,7 @@ impl TopLevelComposer {
            instance_to_stmt: Default::default(),
            resolver,
            codegen_callback: None,
            loc,
        }
    }
@@ -229,12 +250,11 @@ impl TopLevelComposer {
        let this = this.as_ref();
        let other = unifier.get_ty(other);
        let other = other.as_ref();
        if let (
            TypeEnum::TFunc(FunSignature { args: this_args, ret: this_ret, .. }),
            TypeEnum::TFunc(FunSignature { args: other_args, ret: other_ret, .. }),
        ) = (this, other)
        {
            // check args
            let args_ok = this_args
                .iter()
@@ -347,11 +367,19 @@ impl TopLevelComposer {
        Ok(result)
    }

    pub fn parse_parameter_default_value(
        default: &ast::Expr,
        resolver: &(dyn SymbolResolver + Send + Sync),
    ) -> Result<SymbolValue, String> {
        parse_parameter_default_value(default, resolver)
    }

    pub fn check_default_param_type(
        val: &SymbolValue,
        ty: &TypeAnnotation,
        primitive: &PrimitiveStore,
        unifier: &mut Unifier,
    ) -> Result<(), String> {
        let res = match val {
            SymbolValue::Bool(..) => {
                if matches!(ty, TypeAnnotation::Primitive(t) if *t == primitive.bool) {
@@ -381,6 +409,27 @@ impl TopLevelComposer {
                    Some("int64".to_string())
                }
            }
            SymbolValue::U32(..) => {
                if matches!(ty, TypeAnnotation::Primitive(t) if *t == primitive.uint32) {
                    None
                } else {
                    Some("uint32".to_string())
                }
            }
            SymbolValue::U64(..) => {
                if matches!(ty, TypeAnnotation::Primitive(t) if *t == primitive.uint64) {
                    None
                } else {
                    Some("uint64".to_string())
                }
            }
            SymbolValue::Str(..) => {
                if matches!(ty, TypeAnnotation::Primitive(t) if *t == primitive.str) {
                    None
                } else {
                    Some("str".to_string())
                }
            }
            SymbolValue::Tuple(elts) => {
                if let TypeAnnotation::Tuple(elts_ty) = ty {
                    for (e, t) in elts.iter().zip(elts_ty.iter()) {
@@ -408,43 +457,62 @@ impl TopLevelComposer {
    }
}

pub fn parse_parameter_default_value(
    default: &ast::Expr,
    resolver: &(dyn SymbolResolver + Send + Sync),
) -> Result<SymbolValue, String> {
    fn handle_constant(val: &Constant, loc: &Location) -> Result<SymbolValue, String> {
        match val {
            Constant::Int(v) => {
                if let Ok(v) = (*v).try_into() {
                    Ok(SymbolValue::I32(v))
                } else {
                    Err(format!("integer value out of range at {}", loc))
                }
            }
            Constant::Float(v) => Ok(SymbolValue::Double(*v)),
            Constant::Bool(v) => Ok(SymbolValue::Bool(*v)),
            Constant::Tuple(tuple) => Ok(SymbolValue::Tuple(
                tuple.iter().map(|x| handle_constant(x, loc)).collect::<Result<Vec<_>, _>>()?,
            )),
            _ => unimplemented!("this constant is not supported at {}", loc),
        }
    }
    match &default.node {
        ast::ExprKind::Constant { value, .. } => handle_constant(value, &default.location),
        ast::ExprKind::Call { func, args, .. } if args.len() == 1 => {
            match &func.node {
                ast::ExprKind::Name { id, .. } if *id == "int64".into() => match &args[0].node {
                    ast::ExprKind::Constant { value: Constant::Int(v), .. } => {
                        let v: Result<i64, _> = (*v).try_into();
                        match v {
                            Ok(v) => Ok(SymbolValue::I64(v)),
                            _ => Err(format!("default param value out of range at {}", default.location)),
                        }
                    }
                    _ => Err(format!("only allow constant integer here at {}", default.location)),
                },
                ast::ExprKind::Name { id, .. } if *id == "uint32".into() => match &args[0].node {
                    ast::ExprKind::Constant { value: Constant::Int(v), .. } => {
                        let v: Result<u32, _> = (*v).try_into();
                        match v {
                            Ok(v) => Ok(SymbolValue::U32(v)),
                            _ => Err(format!("default param value out of range at {}", default.location)),
                        }
                    }
                    _ => Err(format!("only allow constant integer here at {}", default.location)),
                },
                ast::ExprKind::Name { id, .. } if *id == "uint64".into() => match &args[0].node {
                    ast::ExprKind::Constant { value: Constant::Int(v), .. } => {
                        let v: Result<u64, _> = (*v).try_into();
                        match v {
                            Ok(v) => Ok(SymbolValue::U64(v)),
                            _ => Err(format!("default param value out of range at {}", default.location)),
                        }
                    }
                    _ => Err(format!("only allow constant integer here at {}", default.location)),
                },
                _ => Err(format!("unsupported default parameter at {}", default.location)),
            }
        }
        ast::ExprKind::Tuple { elts, .. } => Ok(SymbolValue::Tuple(elts
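The rewritten default-parameter handling above replaces the earlier `try_into().unwrap()` with an explicit per-width range check, so an out-of-range `uint32(...)` or `uint64(...)` literal produces a "value out of range" error instead of a panic. A sketch of that pattern with a hypothetical helper name and `i128` standing in for the parser's big integer type:

```rust
use std::convert::TryInto;

// Sketch only: `parse_u32_default` is a hypothetical helper, not nac3's API.
// try_into() fails when the literal does not fit the target width, and the
// failure is surfaced as an error string rather than an unwrap panic.
fn parse_u32_default(v: i128) -> Result<u32, String> {
    let v: Result<u32, _> = v.try_into();
    v.map_err(|_| "default param value out of range".to_string())
}

fn main() {
    assert_eq!(parse_u32_default(42), Ok(42));
    assert!(parse_u32_default(-1).is_err());
    assert!(parse_u32_default(1 << 40).is_err());
}
```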


@@ -11,20 +11,22 @@ use super::codegen::CodeGenContext;
use super::typecheck::type_inferencer::PrimitiveStore;
use super::typecheck::typedef::{FunSignature, FuncArg, SharedUnifier, Type, TypeEnum, Unifier};
use crate::{
    codegen::CodeGenerator,
    symbol_resolver::{SymbolResolver, ValueEnum},
    typecheck::{type_inferencer::CodeLocation, typedef::CallId},
};
use inkwell::values::BasicValueEnum;
use itertools::{izip, Itertools};
use nac3parser::ast::{self, Location, Stmt, StrRef};
use parking_lot::RwLock;

#[derive(PartialEq, Eq, PartialOrd, Ord, Clone, Copy, Hash, Debug)]
pub struct DefinitionId(pub usize);

pub mod builtins;
pub mod composer;
pub mod helper;
pub mod type_annotation;

use composer::*;
use type_annotation::*;

#[cfg(test)]
@@ -33,10 +35,11 @@ mod test;
type GenCallCallback = Box<
    dyn for<'ctx, 'a> Fn(
            &mut CodeGenContext<'ctx, 'a>,
            Option<(Type, ValueEnum<'ctx>)>,
            (&FunSignature, DefinitionId),
            Vec<(Option<StrRef>, ValueEnum<'ctx>)>,
            &mut dyn CodeGenerator,
        ) -> Result<Option<BasicValueEnum<'ctx>>, String>
        + Send
        + Sync,
>;
@@ -53,11 +56,12 @@ impl GenCall {
    pub fn run<'ctx, 'a>(
        &self,
        ctx: &mut CodeGenContext<'ctx, 'a>,
        obj: Option<(Type, ValueEnum<'ctx>)>,
        fun: (&FunSignature, DefinitionId),
        args: Vec<(Option<StrRef>, ValueEnum<'ctx>)>,
        generator: &mut dyn CodeGenerator,
    ) -> Result<Option<BasicValueEnum<'ctx>>, String> {
        (self.fp)(ctx, obj, fun, args, generator)
    }
}
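`GenCall::run` now forwards a generator handle to the boxed callback and returns a `Result`, letting custom-codegen errors propagate instead of panicking. A self-contained sketch of that callback shape, with stand-in types in place of `CodeGenContext`, `ValueEnum`, and `dyn CodeGenerator`:

```rust
// Sketch only: Generator and i32 args stand in for the real codegen types.
struct Generator;

type Callback = Box<dyn Fn(&mut Generator, Vec<i32>) -> Result<Option<i32>, String>>;

struct GenCall {
    fp: Callback,
}

impl GenCall {
    fn new(fp: Callback) -> Self {
        GenCall { fp }
    }

    // Mirrors the diff: the generator is threaded through to the callback,
    // and failures surface as Err instead of a panic.
    fn run(&self, gen: &mut Generator, args: Vec<i32>) -> Result<Option<i32>, String> {
        (self.fp)(gen, args)
    }
}

fn main() {
    let call = GenCall::new(Box::new(|_gen, args| Ok(Some(args.iter().sum()))));
    let mut g = Generator;
    assert_eq!(call.run(&mut g, vec![1, 2, 3]), Ok(Some(6)));
}
```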
@@ -95,6 +99,8 @@ pub enum TopLevelDef {
        resolver: Option<Arc<dyn SymbolResolver + Send + Sync>>,
        // constructor type
        constructor: Option<Type>,
        // definition location
        loc: Option<Location>,
    },
    Function {
        // prefix for symbol, should be unique globally
@@ -119,7 +125,9 @@ pub enum TopLevelDef {
        // symbol resolver of the module defined the class
        resolver: Option<Arc<dyn SymbolResolver + Send + Sync>>,
        // custom codegen callback
        codegen_callback: Option<Arc<GenCall>>,
        // definition location
        loc: Option<Location>,
    },
}


@@ -1,13 +1,14 @@
---
source: nac3core/src/toplevel/test.rs
expression: res_vec
---
[
    "Class {\nname: \"Generic_A\",\nancestors: [\"{class: Generic_A, params: [\\\"V\\\"]}\", \"{class: B, params: []}\"],\nfields: [\"aa\", \"a\"],\nmethods: [(\"__init__\", \"fn[[], none]\"), (\"foo\", \"fn[[b:T], none]\"), (\"fun\", \"fn[[a:int32], V]\")],\ntype_vars: [\"V\"]\n}\n",
    "Function {\nname: \"Generic_A.__init__\",\nsig: \"fn[[], none]\",\nvar_id: [6]\n}\n",
    "Function {\nname: \"Generic_A.fun\",\nsig: \"fn[[a:int32], V]\",\nvar_id: [6, 17]\n}\n",
    "Class {\nname: \"B\",\nancestors: [\"{class: B, params: []}\"],\nfields: [\"aa\"],\nmethods: [(\"__init__\", \"fn[[], none]\"), (\"foo\", \"fn[[b:T], none]\")],\ntype_vars: []\n}\n",
    "Function {\nname: \"B.__init__\",\nsig: \"fn[[], none]\",\nvar_id: []\n}\n",
    "Function {\nname: \"B.foo\",\nsig: \"fn[[b:T], none]\",\nvar_id: []\n}\n",
]


@@ -1,16 +1,17 @@
---
source: nac3core/src/toplevel/test.rs
expression: res_vec
---
[
    "Class {\nname: \"A\",\nancestors: [\"{class: A, params: [\\\"T\\\"]}\"],\nfields: [\"a\", \"b\", \"c\"],\nmethods: [(\"__init__\", \"fn[[t:T], none]\"), (\"fun\", \"fn[[a:int32, b:T], list[virtual[B[bool]]]]\"), (\"foo\", \"fn[[c:C], none]\")],\ntype_vars: [\"T\"]\n}\n",
    "Function {\nname: \"A.__init__\",\nsig: \"fn[[t:T], none]\",\nvar_id: []\n}\n",
    "Function {\nname: \"A.fun\",\nsig: \"fn[[a:int32, b:T], list[virtual[B[bool]]]]\",\nvar_id: []\n}\n",
    "Function {\nname: \"A.foo\",\nsig: \"fn[[c:C], none]\",\nvar_id: []\n}\n",
    "Class {\nname: \"B\",\nancestors: [\"{class: B, params: [\\\"var6\\\"]}\", \"{class: A, params: [\\\"float\\\"]}\"],\nfields: [\"a\", \"b\", \"c\", \"d\"],\nmethods: [(\"__init__\", \"fn[[], none]\"), (\"fun\", \"fn[[a:int32, b:T], list[virtual[B[bool]]]]\"), (\"foo\", \"fn[[c:C], none]\")],\ntype_vars: [\"var6\"]\n}\n",
    "Function {\nname: \"B.__init__\",\nsig: \"fn[[], none]\",\nvar_id: [6]\n}\n",
    "Function {\nname: \"B.fun\",\nsig: \"fn[[a:int32, b:T], list[virtual[B[bool]]]]\",\nvar_id: [6]\n}\n",
    "Class {\nname: \"C\",\nancestors: [\"{class: C, params: []}\", \"{class: B, params: [\\\"bool\\\"]}\", \"{class: A, params: [\\\"float\\\"]}\"],\nfields: [\"a\", \"b\", \"c\", \"d\", \"e\"],\nmethods: [(\"__init__\", \"fn[[], none]\"), (\"fun\", \"fn[[a:int32, b:T], list[virtual[B[bool]]]]\"), (\"foo\", \"fn[[c:C], none]\")],\ntype_vars: []\n}\n",
    "Function {\nname: \"C.__init__\",\nsig: \"fn[[], none]\",\nvar_id: []\n}\n",
]


@@ -1,14 +1,15 @@
---
source: nac3core/src/toplevel/test.rs
expression: res_vec
---
[
    "Function {\nname: \"foo\",\nsig: \"fn[[a:list[int32], b:tuple[T, float]], A[B, bool]]\",\nvar_id: []\n}\n",
    "Class {\nname: \"A\",\nancestors: [\"{class: A, params: [\\\"T\\\", \\\"V\\\"]}\"],\nfields: [\"a\", \"b\"],\nmethods: [(\"__init__\", \"fn[[v:V], none]\"), (\"fun\", \"fn[[a:T], V]\")],\ntype_vars: [\"T\", \"V\"]\n}\n",
    "Function {\nname: \"A.__init__\",\nsig: \"fn[[v:V], none]\",\nvar_id: [18, 19]\n}\n",
    "Function {\nname: \"A.fun\",\nsig: \"fn[[a:T], V]\",\nvar_id: [19, 24]\n}\n",
    "Function {\nname: \"gfun\",\nsig: \"fn[[a:A[int32, list[float]]], none]\",\nvar_id: []\n}\n",
    "Class {\nname: \"B\",\nancestors: [\"{class: B, params: []}\"],\nfields: [],\nmethods: [(\"__init__\", \"fn[[], none]\")],\ntype_vars: []\n}\n",
    "Function {\nname: \"B.__init__\",\nsig: \"fn[[], none]\",\nvar_id: []\n}\n",
]


@@ -1,14 +1,15 @@
---
source: nac3core/src/toplevel/test.rs
expression: res_vec
---
[
    "Class {\nname: \"A\",\nancestors: [\"{class: A, params: [\\\"var5\\\", \\\"var6\\\"]}\"],\nfields: [\"a\", \"b\"],\nmethods: [(\"__init__\", \"fn[[a:A[bool, float], b:B], none]\"), (\"fun\", \"fn[[a:A[bool, float]], A[bool, int32]]\")],\ntype_vars: [\"var5\", \"var6\"]\n}\n",
    "Function {\nname: \"A.__init__\",\nsig: \"fn[[a:A[bool, float], b:B], none]\",\nvar_id: [6]\n}\n",
    "Function {\nname: \"A.fun\",\nsig: \"fn[[a:A[bool, float]], A[bool, int32]]\",\nvar_id: [6]\n}\n",
    "Class {\nname: \"B\",\nancestors: [\"{class: B, params: []}\", \"{class: A, params: [\\\"int64\\\", \\\"bool\\\"]}\"],\nfields: [\"a\", \"b\"],\nmethods: [(\"__init__\", \"fn[[], none]\"), (\"fun\", \"fn[[a:A[bool, float]], A[bool, int32]]\"), (\"foo\", \"fn[[b:B], B]\"), (\"bar\", \"fn[[a:A[int32, list[B]]], tuple[A[bool, virtual[A[B, int32]]], B]]\")],\ntype_vars: []\n}\n",
    "Function {\nname: \"B.__init__\",\nsig: \"fn[[], none]\",\nvar_id: []\n}\n",
    "Function {\nname: \"B.foo\",\nsig: \"fn[[b:B], B]\",\nvar_id: []\n}\n",
"Function {\nname: \"B.bar\",\nsig: \"fn[[a=A[3->list[B], 4->int32]], tuple[A[3->virtual[A[3->B, 4->int32]], 4->bool], B]]\",\nvar_id: []\n}\n", "Function {\nname: \"B.bar\",\nsig: \"fn[[a:A[int32, list[B]]], tuple[A[bool, virtual[A[B, int32]]], B]]\",\nvar_id: []\n}\n",
] ]


@@ -1,18 +1,19 @@
 ---
 source: nac3core/src/toplevel/test.rs
+assertion_line: 540
 expression: res_vec
 ---
 [
-"Class {\nname: \"A\",\nancestors: [\"{class: A, params: []}\"],\nfields: [\"a\"],\nmethods: [(\"__init__\", \"fn[[], none]\"), (\"fun\", \"fn[[b=B], none]\"), (\"foo\", \"fn[[a=var3, b=var4], none]\")],\ntype_vars: []\n}\n",
+"Class {\nname: \"A\",\nancestors: [\"{class: A, params: []}\"],\nfields: [\"a\"],\nmethods: [(\"__init__\", \"fn[[], none]\"), (\"fun\", \"fn[[b:B], none]\"), (\"foo\", \"fn[[a:T, b:V], none]\")],\ntype_vars: []\n}\n",
 "Function {\nname: \"A.__init__\",\nsig: \"fn[[], none]\",\nvar_id: []\n}\n",
-"Function {\nname: \"A.fun\",\nsig: \"fn[[b=B], none]\",\nvar_id: []\n}\n",
-"Function {\nname: \"A.foo\",\nsig: \"fn[[a=var3, b=var4], none]\",\nvar_id: [4]\n}\n",
-"Class {\nname: \"B\",\nancestors: [\"{class: B, params: []}\", \"{class: C, params: []}\", \"{class: A, params: []}\"],\nfields: [\"a\"],\nmethods: [(\"__init__\", \"fn[[], none]\"), (\"fun\", \"fn[[b=B], none]\"), (\"foo\", \"fn[[a=var3, b=var4], none]\")],\ntype_vars: []\n}\n",
+"Function {\nname: \"A.fun\",\nsig: \"fn[[b:B], none]\",\nvar_id: []\n}\n",
+"Function {\nname: \"A.foo\",\nsig: \"fn[[a:T, b:V], none]\",\nvar_id: [25]\n}\n",
+"Class {\nname: \"B\",\nancestors: [\"{class: B, params: []}\", \"{class: C, params: []}\", \"{class: A, params: []}\"],\nfields: [\"a\"],\nmethods: [(\"__init__\", \"fn[[], none]\"), (\"fun\", \"fn[[b:B], none]\"), (\"foo\", \"fn[[a:T, b:V], none]\")],\ntype_vars: []\n}\n",
 "Function {\nname: \"B.__init__\",\nsig: \"fn[[], none]\",\nvar_id: []\n}\n",
-"Class {\nname: \"C\",\nancestors: [\"{class: C, params: []}\", \"{class: A, params: []}\"],\nfields: [\"a\"],\nmethods: [(\"__init__\", \"fn[[], none]\"), (\"fun\", \"fn[[b=B], none]\"), (\"foo\", \"fn[[a=var3, b=var4], none]\")],\ntype_vars: []\n}\n",
+"Class {\nname: \"C\",\nancestors: [\"{class: C, params: []}\", \"{class: A, params: []}\"],\nfields: [\"a\"],\nmethods: [(\"__init__\", \"fn[[], none]\"), (\"fun\", \"fn[[b:B], none]\"), (\"foo\", \"fn[[a:T, b:V], none]\")],\ntype_vars: []\n}\n",
 "Function {\nname: \"C.__init__\",\nsig: \"fn[[], none]\",\nvar_id: []\n}\n",
-"Function {\nname: \"C.fun\",\nsig: \"fn[[b=B], none]\",\nvar_id: []\n}\n",
-"Function {\nname: \"foo\",\nsig: \"fn[[a=A], none]\",\nvar_id: []\n}\n",
-"Function {\nname: \"ff\",\nsig: \"fn[[a=var3], var4]\",\nvar_id: [4]\n}\n",
+"Function {\nname: \"C.fun\",\nsig: \"fn[[b:B], none]\",\nvar_id: []\n}\n",
+"Function {\nname: \"foo\",\nsig: \"fn[[a:A], none]\",\nvar_id: []\n}\n",
+"Function {\nname: \"ff\",\nsig: \"fn[[a:T], V]\",\nvar_id: [33]\n}\n",
 ]


@@ -1,5 +1,6 @@
 ---
 source: nac3core/src/toplevel/test.rs
+assertion_line: 541
 expression: res_vec
 ---


@ -1,6 +1,5 @@
use crate::{ use crate::{
codegen::CodeGenContext, codegen::CodeGenContext,
location::Location,
symbol_resolver::{SymbolResolver, ValueEnum}, symbol_resolver::{SymbolResolver, ValueEnum},
toplevel::DefinitionId, toplevel::DefinitionId,
typecheck::{ typecheck::{
@ -35,19 +34,26 @@ impl ResolverInternal {
struct Resolver(Arc<ResolverInternal>); struct Resolver(Arc<ResolverInternal>);
impl SymbolResolver for Resolver { impl SymbolResolver for Resolver {
fn get_default_param_value(&self, _: &nac3parser::ast::Expr) -> Option<crate::symbol_resolver::SymbolValue> { fn get_default_param_value(
&self,
_: &nac3parser::ast::Expr,
) -> Option<crate::symbol_resolver::SymbolValue> {
unimplemented!() unimplemented!()
} }
fn get_symbol_type( fn get_symbol_type(
&self, &self,
_: &mut Unifier, _: &mut Unifier,
_: &[Arc<RwLock<TopLevelDef>>], _: &[Arc<RwLock<TopLevelDef>>],
_: &PrimitiveStore, _: &PrimitiveStore,
str: StrRef, str: StrRef,
) -> Option<Type> { ) -> Result<Type, String> {
let ret = self.0.id_to_type.lock().get(&str).cloned(); self.0
ret .id_to_type
.lock()
.get(&str)
.cloned()
.ok_or_else(|| format!("cannot find symbol `{}`", str))
} }
fn get_symbol_value<'ctx, 'a>( fn get_symbol_value<'ctx, 'a>(
@ -58,12 +64,16 @@ impl SymbolResolver for Resolver {
unimplemented!() unimplemented!()
} }
fn get_symbol_location(&self, _: StrRef) -> Option<Location> { fn get_identifier_def(&self, id: StrRef) -> Result<DefinitionId, String> {
self.0.id_to_def.lock().get(&id).cloned().ok_or("Unknown identifier".to_string())
}
fn get_string_id(&self, _: &str) -> i32 {
unimplemented!() unimplemented!()
} }
fn get_identifier_def(&self, id: StrRef) -> Option<DefinitionId> { fn get_exception_id(&self, tyid: usize) -> usize {
self.0.id_to_def.lock().get(&id).cloned() unimplemented!()
} }
} }
@ -103,7 +113,7 @@ fn test_simple_register(source: Vec<&str>) {
let mut composer: TopLevelComposer = Default::default(); let mut composer: TopLevelComposer = Default::default();
for s in source { for s in source {
let ast = parse_program(s).unwrap(); let ast = parse_program(s, Default::default()).unwrap();
let ast = ast[0].clone(); let ast = ast[0].clone();
composer.register_top_level(ast, None, "".into()).unwrap(); composer.register_top_level(ast, None, "".into()).unwrap();
@ -126,9 +136,9 @@ fn test_simple_register(source: Vec<&str>) {
"}, "},
], ],
vec![ vec![
"fn[[a=0], 0]", "fn[[a:0], 0]",
"fn[[a=2], 4]", "fn[[a:2], 4]",
"fn[[b=1], 0]", "fn[[b:1], 0]",
], ],
vec![ vec![
"fun", "fun",
@ -149,7 +159,7 @@ fn test_simple_function_analyze(source: Vec<&str>, tys: Vec<&str>, names: Vec<&s
Arc::new(Resolver(internal_resolver.clone())) as Arc<dyn SymbolResolver + Send + Sync>; Arc::new(Resolver(internal_resolver.clone())) as Arc<dyn SymbolResolver + Send + Sync>;
for s in source { for s in source {
let ast = parse_program(s).unwrap(); let ast = parse_program(s, Default::default()).unwrap();
let ast = ast[0].clone(); let ast = ast[0].clone();
let (id, def_id, ty) = let (id, def_id, ty) =
@ -162,14 +172,16 @@ fn test_simple_function_analyze(source: Vec<&str>, tys: Vec<&str>, names: Vec<&s
composer.start_analysis(true).unwrap(); composer.start_analysis(true).unwrap();
for (i, (def, _)) in composer.definition_ast_list.iter().skip(composer.built_in_num).enumerate() for (i, (def, _)) in composer.definition_ast_list.iter().skip(composer.builtin_num).enumerate()
{ {
let def = &*def.read(); let def = &*def.read();
if let TopLevelDef::Function { signature, name, .. } = def { if let TopLevelDef::Function { signature, name, .. } = def {
let ty_str = let ty_str = composer.unifier.internal_stringify(
composer *signature,
.unifier &mut |id| id.to_string(),
.stringify(*signature, &mut |id| id.to_string(), &mut |id| id.to_string()); &mut |id| id.to_string(),
&mut None,
);
assert_eq!(ty_str, tys[i]); assert_eq!(ty_str, tys[i]);
assert_eq!(name, names[i]); assert_eq!(name, names[i]);
} }
@ -333,7 +345,7 @@ fn test_simple_function_analyze(source: Vec<&str>, tys: Vec<&str>, names: Vec<&s
pass pass
"} "}
], ],
vec!["application of type vars to generic class is not currently supported"]; vec!["application of type vars to generic class is not currently supported (at unknown: line 4 column 24)"];
"err no type var in generic app" "err no type var in generic app"
)] )]
#[test_case( #[test_case(
@ -389,7 +401,7 @@ fn test_simple_function_analyze(source: Vec<&str>, tys: Vec<&str>, names: Vec<&s
def __init__(): def __init__():
pass pass
"}], "}],
vec!["__init__ function must have a `self` parameter"]; vec!["__init__ method must have a `self` parameter (at unknown: line 2 column 5)"];
"err no self_1" "err no self_1"
)] )]
#[test_case( #[test_case(
@ -411,7 +423,7 @@ fn test_simple_function_analyze(source: Vec<&str>, tys: Vec<&str>, names: Vec<&s
"} "}
], ],
vec!["a class def can only have at most one base class declaration and one generic declaration"]; vec!["a class definition can only have at most one base class declaration and one generic declaration (at unknown: line 1 column 24)"];
"err multiple inheritance" "err multiple inheritance"
)] )]
#[test_case( #[test_case(
@ -436,7 +448,7 @@ fn test_simple_function_analyze(source: Vec<&str>, tys: Vec<&str>, names: Vec<&s
pass pass
"} "}
], ],
vec!["method has same name as ancestors' method, but incompatible type"]; vec!["method fun has same name as ancestors' method, but incompatible type"];
"err_incompatible_inheritance_method" "err_incompatible_inheritance_method"
)] )]
#[test_case( #[test_case(
@ -479,7 +491,7 @@ fn test_simple_function_analyze(source: Vec<&str>, tys: Vec<&str>, names: Vec<&s
pass pass
"} "}
], ],
vec!["duplicate definition of class"]; vec!["duplicate definition of class `A` (at unknown: line 1 column 1)"];
"class same name" "class same name"
)] )]
fn test_analyze(source: Vec<&str>, res: Vec<&str>) { fn test_analyze(source: Vec<&str>, res: Vec<&str>) {
@ -499,7 +511,7 @@ fn test_analyze(source: Vec<&str>, res: Vec<&str>) {
Arc::new(Resolver(internal_resolver.clone())) as Arc<dyn SymbolResolver + Send + Sync>; Arc::new(Resolver(internal_resolver.clone())) as Arc<dyn SymbolResolver + Send + Sync>;
for s in source { for s in source {
let ast = parse_program(s).unwrap(); let ast = parse_program(s, Default::default()).unwrap();
let ast = ast[0].clone(); let ast = ast[0].clone();
let (id, def_id, ty) = { let (id, def_id, ty) = {
@ -530,7 +542,7 @@ fn test_analyze(source: Vec<&str>, res: Vec<&str>) {
} else { } else {
// skip 5 to skip primitives // skip 5 to skip primitives
let mut res_vec: Vec<String> = Vec::new(); let mut res_vec: Vec<String> = Vec::new();
for (def, _) in composer.definition_ast_list.iter().skip(composer.built_in_num) { for (def, _) in composer.definition_ast_list.iter().skip(composer.builtin_num) {
let def = &*def.read(); let def = &*def.read();
res_vec.push(format!("{}\n", def.to_string(composer.unifier.borrow_mut()))); res_vec.push(format!("{}\n", def.to_string(composer.unifier.borrow_mut())));
} }
@ -683,7 +695,7 @@ fn test_inference(source: Vec<&str>, res: Vec<&str>) {
Arc::new(Resolver(internal_resolver.clone())) as Arc<dyn SymbolResolver + Send + Sync>; Arc::new(Resolver(internal_resolver.clone())) as Arc<dyn SymbolResolver + Send + Sync>;
for s in source { for s in source {
let ast = parse_program(s).unwrap(); let ast = parse_program(s, Default::default()).unwrap();
let ast = ast[0].clone(); let ast = ast[0].clone();
let (id, def_id, ty) = { let (id, def_id, ty) = {
@ -715,7 +727,7 @@ fn test_inference(source: Vec<&str>, res: Vec<&str>) {
// skip 5 to skip primitives // skip 5 to skip primitives
let mut stringify_folder = TypeToStringFolder { unifier: &mut composer.unifier }; let mut stringify_folder = TypeToStringFolder { unifier: &mut composer.unifier };
for (_i, (def, _)) in for (_i, (def, _)) in
composer.definition_ast_list.iter().skip(composer.built_in_num).enumerate() composer.definition_ast_list.iter().skip(composer.builtin_num).enumerate()
{ {
let def = &*def.read(); let def = &*def.read();
@ -749,7 +761,7 @@ fn make_internal_resolver_with_tvar(
.into_iter() .into_iter()
.map(|(name, range)| { .map(|(name, range)| {
(name, { (name, {
let (ty, id) = unifier.get_fresh_var_with_range(range.as_slice()); let (ty, id) = unifier.get_fresh_var_with_range(range.as_slice(), None, None);
if print { if print {
println!("{}: {:?}, tvar{}", name, ty, id); println!("{}: {:?}, tvar{}", name, ty, id);
} }
@ -776,9 +788,12 @@ impl<'a> Fold<Option<Type>> for TypeToStringFolder<'a> {
type Error = String; type Error = String;
fn map_user(&mut self, user: Option<Type>) -> Result<Self::TargetU, Self::Error> { fn map_user(&mut self, user: Option<Type>) -> Result<Self::TargetU, Self::Error> {
Ok(if let Some(ty) = user { Ok(if let Some(ty) = user {
self.unifier.stringify(ty, &mut |id| format!("class{}", id.to_string()), &mut |id| { self.unifier.internal_stringify(
format!("tvar{}", id.to_string()) ty,
}) &mut |id| format!("class{}", id.to_string()),
&mut |id| format!("tvar{}", id.to_string()),
&mut None,
)
} else { } else {
"None".into() "None".into()
}) })


@@ -1,7 +1,3 @@
-use std::cell::RefCell;
-use crate::typecheck::typedef::TypeVarMeta;
 use super::*;
 #[derive(Clone, Debug)]
@@ -24,20 +20,30 @@ impl TypeAnnotation {
     pub fn stringify(&self, unifier: &mut Unifier) -> String {
         use TypeAnnotation::*;
         match self {
-            Primitive(ty) | TypeVar(ty) => unifier.default_stringify(*ty),
+            Primitive(ty) | TypeVar(ty) => unifier.stringify(*ty),
             CustomClass { id, params } => {
                 let class_name = match unifier.top_level {
-                    Some(ref top) => if let TopLevelDef::Class { name, .. } = &*top.definitions.read()[id.0].read() {
-                        (*name).into()
-                    } else {
-                        format!("def_{}", id.0)
-                    }
-                    None => format!("def_{}", id.0)
+                    Some(ref top) => {
+                        if let TopLevelDef::Class { name, .. } =
+                            &*top.definitions.read()[id.0].read()
+                        {
+                            (*name).into()
+                        } else {
+                            format!("def_{}", id.0)
+                        }
+                    }
+                    None => format!("def_{}", id.0),
                 };
-                format!("{{class: {}, params: {:?}}}", class_name, params.iter().map(|p| p.stringify(unifier)).collect_vec())
+                format!(
+                    "{{class: {}, params: {:?}}}",
+                    class_name,
+                    params.iter().map(|p| p.stringify(unifier)).collect_vec()
+                )
             }
             Virtual(ty) | List(ty) => ty.stringify(unifier),
-            Tuple(types) => format!("({:?})", types.iter().map(|p| p.stringify(unifier)).collect_vec()),
+            Tuple(types) => {
+                format!("({:?})", types.iter().map(|p| p.stringify(unifier)).collect_vec())
+            }
         }
     }
 }
@@ -49,54 +55,137 @@ pub fn parse_ast_to_type_annotation_kinds<T>(
     primitives: &PrimitiveStore,
     expr: &ast::Expr<T>,
     // the key stores the type_var of this topleveldef::class, we only need this field here
-    mut locked: HashMap<DefinitionId, Vec<Type>>,
+    locked: HashMap<DefinitionId, Vec<Type>>,
 ) -> Result<TypeAnnotation, String> {
-    match &expr.node {
-        ast::ExprKind::Name { id, .. } => {
-            if id == &"int32".into() {
-                Ok(TypeAnnotation::Primitive(primitives.int32))
-            } else if id == &"int64".into() {
-                Ok(TypeAnnotation::Primitive(primitives.int64))
-            } else if id == &"float".into() {
-                Ok(TypeAnnotation::Primitive(primitives.float))
-            } else if id == &"bool".into() {
-                Ok(TypeAnnotation::Primitive(primitives.bool))
-            } else if id == &"None".into() {
-                Ok(TypeAnnotation::Primitive(primitives.none))
-            } else if id == &"str".into() {
-                Ok(TypeAnnotation::Primitive(primitives.str))
-            } else if let Some(obj_id) = resolver.get_identifier_def(*id) {
-                let type_vars = {
-                    let def_read = top_level_defs[obj_id.0].try_read();
-                    if let Some(def_read) = def_read {
-                        if let TopLevelDef::Class { type_vars, .. } = &*def_read {
-                            type_vars.clone()
-                        } else {
-                            return Err("function cannot be used as a type".into());
-                        }
-                    } else {
-                        locked.get(&obj_id).unwrap().clone()
-                    }
-                };
-                // check param number here
-                if !type_vars.is_empty() {
-                    return Err(format!(
-                        "expect {} type variable parameter but got 0",
-                        type_vars.len()
-                    ));
-                }
-                Ok(TypeAnnotation::CustomClass { id: obj_id, params: vec![] })
-            } else if let Some(ty) = resolver.get_symbol_type(unifier, top_level_defs, primitives, *id) {
-                if let TypeEnum::TVar { .. } = unifier.get_ty(ty).as_ref() {
-                    Ok(TypeAnnotation::TypeVar(ty))
-                } else {
-                    Err("not a type variable identifier".into())
-                }
-            } else {
-                Err("name cannot be parsed as a type annotation".into())
-            }
-        }
+    let name_handle = |id: &StrRef,
+                       unifier: &mut Unifier,
+                       locked: HashMap<DefinitionId, Vec<Type>>| {
+        if id == &"int32".into() {
+            Ok(TypeAnnotation::Primitive(primitives.int32))
+        } else if id == &"int64".into() {
+            Ok(TypeAnnotation::Primitive(primitives.int64))
+        } else if id == &"uint32".into() {
+            Ok(TypeAnnotation::Primitive(primitives.uint32))
+        } else if id == &"uint64".into() {
+            Ok(TypeAnnotation::Primitive(primitives.uint64))
+        } else if id == &"float".into() {
+            Ok(TypeAnnotation::Primitive(primitives.float))
+        } else if id == &"bool".into() {
+            Ok(TypeAnnotation::Primitive(primitives.bool))
+        } else if id == &"None".into() {
+            Ok(TypeAnnotation::Primitive(primitives.none))
+        } else if id == &"str".into() {
+            Ok(TypeAnnotation::Primitive(primitives.str))
+        } else if id == &"Exception".into() {
+            Ok(TypeAnnotation::CustomClass { id: DefinitionId(7), params: Default::default() })
+        } else if let Ok(obj_id) = resolver.get_identifier_def(*id) {
+            let type_vars = {
+                let def_read = top_level_defs[obj_id.0].try_read();
+                if let Some(def_read) = def_read {
+                    if let TopLevelDef::Class { type_vars, .. } = &*def_read {
+                        type_vars.clone()
+                    } else {
+                        return Err(format!(
+                            "function cannot be used as a type (at {})",
+                            expr.location
+                        ));
+                    }
+                } else {
+                    locked.get(&obj_id).unwrap().clone()
+                }
+            };
+            // check param number here
+            if !type_vars.is_empty() {
+                return Err(format!(
+                    "expect {} type variable parameter but got 0 (at {})",
+                    type_vars.len(),
+                    expr.location,
+                ));
+            }
+            Ok(TypeAnnotation::CustomClass { id: obj_id, params: vec![] })
+        } else if let Ok(ty) = resolver.get_symbol_type(unifier, top_level_defs, primitives, *id) {
+            if let TypeEnum::TVar { .. } = unifier.get_ty(ty).as_ref() {
+                let var = unifier.get_fresh_var(Some(*id), Some(expr.location)).0;
+                unifier.unify(var, ty).unwrap();
+                Ok(TypeAnnotation::TypeVar(ty))
+            } else {
+                Err(format!("`{}` is not a valid type annotation (at {})", id, expr.location))
+            }
+        } else {
+            Err(format!("`{}` is not a valid type annotation (at {})", id, expr.location))
+        }
+    };
+    let class_name_handle =
+        |id: &StrRef,
+         slice: &ast::Expr<T>,
+         unifier: &mut Unifier,
+         mut locked: HashMap<DefinitionId, Vec<Type>>| {
+            if vec!["virtual".into(), "Generic".into(), "list".into(), "tuple".into()].contains(id)
+            {
+                return Err(format!("keywords cannot be class name (at {})", expr.location));
+            }
+            let obj_id = resolver.get_identifier_def(*id)?;
+            let type_vars = {
+                let def_read = top_level_defs[obj_id.0].try_read();
+                if let Some(def_read) = def_read {
+                    if let TopLevelDef::Class { type_vars, .. } = &*def_read {
+                        type_vars.clone()
+                    } else {
+                        unreachable!("must be class here")
+                    }
+                } else {
+                    locked.get(&obj_id).unwrap().clone()
+                }
+            };
+            // we do not check whether the application of type variables are compatible here
+            let param_type_infos = {
+                let params_ast = if let ast::ExprKind::Tuple { elts, .. } = &slice.node {
+                    elts.iter().collect_vec()
+                } else {
+                    vec![slice]
+                };
+                if type_vars.len() != params_ast.len() {
+                    return Err(format!(
+                        "expect {} type parameters but got {} (at {})",
+                        type_vars.len(),
+                        params_ast.len(),
+                        params_ast[0].location,
+                    ));
+                }
+                let result = params_ast
+                    .iter()
+                    .map(|x| {
+                        parse_ast_to_type_annotation_kinds(
+                            resolver,
+                            top_level_defs,
+                            unifier,
+                            primitives,
+                            x,
+                            {
+                                locked.insert(obj_id, type_vars.clone());
+                                locked.clone()
+                            },
+                        )
+                    })
+                    .collect::<Result<Vec<_>, _>>()?;
+                // make sure the result do not contain any type vars
+                let no_type_var =
+                    result.iter().all(|x| get_type_var_contained_in_type_annotation(x).is_empty());
+                if no_type_var {
+                    result
+                } else {
+                    return Err(format!(
+                        "application of type vars to generic class \
+                        is not currently supported (at {})",
+                        params_ast[0].location
+                    ));
+                }
+            };
+            Ok(TypeAnnotation::CustomClass { id: obj_id, params: param_type_infos })
+        };
+    match &expr.node {
+        ast::ExprKind::Name { id, .. } => name_handle(id, unifier, locked),
         // virtual
         ast::ExprKind::Subscript { value, slice, .. }
             if {
@@ -140,100 +229,39 @@ pub fn parse_ast_to_type_annotation_kinds<T>(
             matches!(&value.node, ast::ExprKind::Name { id, .. } if id == &"tuple".into())
         } =>
         {
-            if let ast::ExprKind::Tuple { elts, .. } = &slice.node {
-                let type_annotations = elts
-                    .iter()
-                    .map(|e| {
-                        parse_ast_to_type_annotation_kinds(
-                            resolver,
-                            top_level_defs,
-                            unifier,
-                            primitives,
-                            e,
-                            locked.clone(),
-                        )
-                    })
-                    .collect::<Result<Vec<_>, _>>()?;
-                Ok(TypeAnnotation::Tuple(type_annotations))
-            } else {
-                Err("Expect multiple elements for tuple".into())
-            }
+            let tup_elts = {
+                if let ast::ExprKind::Tuple { elts, .. } = &slice.node {
+                    elts.as_slice()
+                } else {
+                    std::slice::from_ref(slice.as_ref())
+                }
+            };
+            let type_annotations = tup_elts
+                .iter()
+                .map(|e| {
+                    parse_ast_to_type_annotation_kinds(
+                        resolver,
+                        top_level_defs,
+                        unifier,
+                        primitives,
+                        e,
+                        locked.clone(),
+                    )
+                })
+                .collect::<Result<Vec<_>, _>>()?;
+            Ok(TypeAnnotation::Tuple(type_annotations))
         }
         // custom class
         ast::ExprKind::Subscript { value, slice, .. } => {
             if let ast::ExprKind::Name { id, .. } = &value.node {
-                if vec!["virtual".into(), "Generic".into(), "list".into(), "tuple".into()]
-                    .contains(id)
-                {
-                    return Err("keywords cannot be class name".into());
-                }
-                let obj_id = resolver
-                    .get_identifier_def(*id)
-                    .ok_or_else(|| "unknown class name".to_string())?;
-                let type_vars = {
-                    let def_read = top_level_defs[obj_id.0].try_read();
-                    if let Some(def_read) = def_read {
-                        if let TopLevelDef::Class { type_vars, .. } = &*def_read {
-                            type_vars.clone()
-                        } else {
-                            unreachable!("must be class here")
-                        }
-                    } else {
-                        locked.get(&obj_id).unwrap().clone()
-                    }
-                };
-                // we do not check whether the application of type variables are compatible here
-                let param_type_infos = {
-                    let params_ast = if let ast::ExprKind::Tuple { elts, .. } = &slice.node {
-                        elts.iter().collect_vec()
-                    } else {
-                        vec![slice.as_ref()]
-                    };
-                    if type_vars.len() != params_ast.len() {
-                        return Err(format!(
-                            "expect {} type parameters but got {}",
-                            type_vars.len(),
-                            params_ast.len()
-                        ));
-                    }
-                    let result = params_ast
-                        .into_iter()
-                        .map(|x| {
-                            parse_ast_to_type_annotation_kinds(
-                                resolver,
-                                top_level_defs,
-                                unifier,
-                                primitives,
-                                x,
-                                {
-                                    locked.insert(obj_id, type_vars.clone());
-                                    locked.clone()
-                                },
-                            )
-                        })
-                        .collect::<Result<Vec<_>, _>>()?;
-                    // make sure the result do not contain any type vars
-                    let no_type_var = result
-                        .iter()
-                        .all(|x| get_type_var_contained_in_type_annotation(x).is_empty());
-                    if no_type_var {
-                        result
-                    } else {
-                        return Err("application of type vars to generic class \
-                            is not currently supported"
-                            .into());
-                    }
-                };
-                Ok(TypeAnnotation::CustomClass { id: obj_id, params: param_type_infos })
+                class_name_handle(id, slice, unifier, locked)
             } else {
-                Err("unsupported expression type for class name".into())
+                Err(format!("unsupported expression type for class name (at {})", value.location))
             }
         }
-        _ => Err("unsupported expression for type annotation".into()),
+        _ => Err(format!("unsupported expression for type annotation (at {})", expr.location)),
     }
 }
@@ -275,14 +303,17 @@ pub fn get_type_from_type_annotation_kinds(
             // TODO: if allow type var to be applied(now this disallowed in the parse_to_type_annotation), need more check
             let mut result: HashMap<u32, Type> = HashMap::new();
             for (tvar, p) in type_vars.iter().zip(param_ty) {
-                if let TypeEnum::TVar { id, range, meta: TypeVarMeta::Generic } =
+                if let TypeEnum::TVar { id, range, fields: None, name, loc } =
                     unifier.get_ty(*tvar).as_ref()
                 {
                     let ok: bool = {
                         // create a temp type var and unify to check compatibility
                         p == *tvar || {
-                            let temp =
-                                unifier.get_fresh_var_with_range(range.borrow().as_slice());
+                            let temp = unifier.get_fresh_var_with_range(
+                                range.as_slice(),
+                                *name,
+                                *loc,
+                            );
                             unifier.unify(temp.0, p).is_ok()
                         }
                     };
@@ -291,10 +322,11 @@ pub fn get_type_from_type_annotation_kinds(
                 } else {
                     return Err(format!(
                         "cannot apply type {} to type variable with id {:?}",
-                        unifier.stringify(
+                        unifier.internal_stringify(
                             p,
                             &mut |id| format!("class{}", id),
-                            &mut |id| format!("tvar{}", id)
+                            &mut |id| format!("tvar{}", id),
+                            &mut None
                         ),
                         *id
                     ));
@@ -320,8 +352,8 @@ pub fn get_type_from_type_annotation_kinds(
             Ok(unifier.add_ty(TypeEnum::TObj {
                 obj_id: *obj_id,
-                fields: RefCell::new(tobj_fields),
-                params: subst.into(),
+                fields: tobj_fields,
+                params: subst,
             }))
         }
     } else {
@@ -370,13 +402,7 @@ pub fn get_type_from_type_annotation_kinds(
 /// But note that here we do not make a duplication of `T`, `V`, we direclty
 /// use them as they are in the TopLevelDef::Class since those in the
 /// TopLevelDef::Class.type_vars will be substitute later when seeing applications/instantiations
-/// the Type of their fields and methods will also be subst when application/instantiation \
-/// \
-/// Note this implicit self type is different with seeing `A[T, V]` explicitly outside
-/// the class def ast body, where it is a new instantiation of the generic class `A`,
-/// but equivalent to seeing `A[T, V]` inside the class def body ast, where although we
-/// create copies of `T` and `V`, we will find them out as occured type vars in the analyze_class()
-/// and unify them with the class generic `T`, `V`
+/// the Type of their fields and methods will also be subst when application/instantiation
 pub fn make_self_type_annotation(type_vars: &[Type], object_id: DefinitionId) -> TypeAnnotation {
     TypeAnnotation::CustomClass {
         id: object_id,
@@ -426,8 +452,8 @@ pub fn check_overload_type_annotation_compatible(
     let b = unifier.get_ty(*b);
     let b = b.deref();
     if let (
-        TypeEnum::TVar { id: a, meta: TypeVarMeta::Generic, .. },
-        TypeEnum::TVar { id: b, meta: TypeVarMeta::Generic, .. },
+        TypeEnum::TVar { id: a, fields: None, .. },
+        TypeEnum::TVar { id: b, fields: None, .. },
     ) = (a, b)
     {
         a == b


@@ -69,23 +69,21 @@ impl<'a> Inferencer<'a> {
             ExprKind::Name { id, .. } => {
                 self.should_have_value(expr)?;
                 if !defined_identifiers.contains(id) {
-                    if self
-                        .function_data
-                        .resolver
-                        .get_symbol_type(
-                            self.unifier,
-                            &self.top_level.definitions.read(),
-                            self.primitives,
-                            *id,
-                        )
-                        .is_some()
-                    {
-                        defined_identifiers.insert(*id);
-                    } else {
-                        return Err(format!(
-                            "unknown identifier {} (use before def?) at {}",
-                            id, expr.location
-                        ));
+                    match self.function_data.resolver.get_symbol_type(
+                        self.unifier,
+                        &self.top_level.definitions.read(),
+                        self.primitives,
+                        *id,
+                    ) {
+                        Ok(_) => {
+                            self.defined_identifiers.insert(*id);
+                        }
+                        Err(e) => {
+                            return Err(format!(
+                                "type error at identifier `{}` ({}) at {}",
+                                id, e, expr.location
+                            ));
+                        }
                     }
                 }
             }
@@ -167,7 +165,6 @@ impl<'a> Inferencer<'a> {
             }
             ExprKind::Constant { .. } => {}
             _ => {
-                println!("{:?}", expr.node);
                 unimplemented!()
             }
         }
@@ -230,6 +227,20 @@ impl<'a> Inferencer<'a> {
                 self.check_block(body, &mut new_defined_identifiers)?;
                 Ok(false)
             }
+            StmtKind::Try { body, handlers, orelse, finalbody, .. } => {
+                self.check_block(body, &mut defined_identifiers.clone())?;
+                self.check_block(orelse, &mut defined_identifiers.clone())?;
+                for handler in handlers.iter() {
+                    let mut defined_identifiers = defined_identifiers.clone();
+                    let ast::ExcepthandlerKind::ExceptHandler { name, body, .. } = &handler.node;
+                    if let Some(name) = name {
+                        defined_identifiers.insert(*name);
+                    }
+                    self.check_block(body, &mut defined_identifiers)?;
+                }
+                self.check_block(finalbody, defined_identifiers)?;
+                Ok(false)
+            }
             StmtKind::Expr { value, .. } => {
                 self.check_expr(value, defined_identifiers)?;
                 Ok(false)


@ -2,10 +2,10 @@ use crate::typecheck::{
type_inferencer::*, type_inferencer::*,
typedef::{FunSignature, FuncArg, Type, TypeEnum, Unifier}, typedef::{FunSignature, FuncArg, Type, TypeEnum, Unifier},
}; };
use nac3parser::ast; use nac3parser::ast::{self, StrRef};
use nac3parser::ast::{Cmpop, Operator, Unaryop}; use nac3parser::ast::{Cmpop, Operator, Unaryop};
use std::borrow::Borrow;
use std::collections::HashMap; use std::collections::HashMap;
use std::rc::Rc;
pub fn binop_name(op: &Operator) -> &'static str { pub fn binop_name(op: &Operator) -> &'static str {
match op { match op {
@ -64,6 +64,23 @@ pub fn comparison_name(op: &Cmpop) -> Option<&'static str> {
} }
} }
pub(super) fn with_fields<F>(unifier: &mut Unifier, ty: Type, f: F)
where
F: FnOnce(&mut Unifier, &mut HashMap<StrRef, (Type, bool)>),
{
let (id, mut fields, params) =
if let TypeEnum::TObj { obj_id, fields, params } = &*unifier.get_ty(ty) {
(*obj_id, fields.clone(), params.clone())
} else {
unreachable!()
};
f(unifier, &mut fields);
unsafe {
let unification_table = unifier.get_unification_table();
unification_table.set_value(ty, Rc::new(TypeEnum::TObj { obj_id: id, fields, params }));
}
}
pub fn impl_binop( pub fn impl_binop(
unifier: &mut Unifier, unifier: &mut Unifier,
store: &PrimitiveStore, store: &PrimitiveStore,
@ -72,11 +89,11 @@ pub fn impl_binop(
ret_ty: Type, ret_ty: Type,
ops: &[ast::Operator], ops: &[ast::Operator],
) { ) {
-if let TypeEnum::TObj { fields, .. } = unifier.get_ty(ty).borrow() {
+with_fields(unifier, ty, |unifier, fields| {
let (other_ty, other_var_id) = if other_ty.len() == 1 {
(other_ty[0], None)
} else {
-let (ty, var_id) = unifier.get_fresh_var_with_range(other_ty);
+let (ty, var_id) = unifier.get_fresh_var_with_range(other_ty, Some("N".into()), None);
(ty, Some(var_id))
};
let function_vars = if let Some(var_id) = other_var_id {
@@ -85,69 +102,55 @@ pub fn impl_binop(
HashMap::new()
};
for op in ops {
-fields.borrow_mut().insert(binop_name(op).into(), {
+fields.insert(binop_name(op).into(), {
(
-unifier.add_ty(TypeEnum::TFunc(
-FunSignature {
-ret: ret_ty,
-vars: function_vars.clone(),
-args: vec![FuncArg {
-ty: other_ty,
-default_value: None,
-name: "other".into(),
-}],
-}
-.into(),
-)),
+unifier.add_ty(TypeEnum::TFunc(FunSignature {
+ret: ret_ty,
+vars: function_vars.clone(),
+args: vec![FuncArg {
+ty: other_ty,
+default_value: None,
+name: "other".into(),
+}],
+})),
false,
)
});
-fields.borrow_mut().insert(binop_assign_name(op).into(), {
+fields.insert(binop_assign_name(op).into(), {
(
-unifier.add_ty(TypeEnum::TFunc(
-FunSignature {
-ret: store.none,
-vars: function_vars.clone(),
-args: vec![FuncArg {
-ty: other_ty,
-default_value: None,
-name: "other".into(),
-}],
-}
-.into(),
-)),
+unifier.add_ty(TypeEnum::TFunc(FunSignature {
+ret: store.none,
+vars: function_vars.clone(),
+args: vec![FuncArg {
+ty: other_ty,
+default_value: None,
+name: "other".into(),
+}],
+})),
false,
)
});
}
-} else {
-unreachable!("")
-}
+});
}
-pub fn impl_unaryop(
-unifier: &mut Unifier,
-_store: &PrimitiveStore,
-ty: Type,
-ret_ty: Type,
-ops: &[ast::Unaryop],
-) {
-if let TypeEnum::TObj { fields, .. } = unifier.get_ty(ty).borrow() {
+pub fn impl_unaryop(unifier: &mut Unifier, ty: Type, ret_ty: Type, ops: &[ast::Unaryop]) {
+with_fields(unifier, ty, |unifier, fields| {
for op in ops {
-fields.borrow_mut().insert(
+fields.insert(
unaryop_name(op).into(),
(
-unifier.add_ty(TypeEnum::TFunc(
-FunSignature { ret: ret_ty, vars: HashMap::new(), args: vec![] }.into(),
-)),
+unifier.add_ty(TypeEnum::TFunc(FunSignature {
+ret: ret_ty,
+vars: HashMap::new(),
+args: vec![],
+})),
false,
),
);
}
-} else {
-unreachable!()
-}
+});
}
pub fn impl_cmpop(
@@ -157,33 +160,28 @@ pub fn impl_cmpop(
other_ty: Type,
ops: &[ast::Cmpop],
) {
-if let TypeEnum::TObj { fields, .. } = unifier.get_ty(ty).borrow() {
+with_fields(unifier, ty, |unifier, fields| {
for op in ops {
-fields.borrow_mut().insert(
+fields.insert(
comparison_name(op).unwrap().into(),
(
-unifier.add_ty(TypeEnum::TFunc(
-FunSignature {
-ret: store.bool,
-vars: HashMap::new(),
-args: vec![FuncArg {
-ty: other_ty,
-default_value: None,
-name: "other".into(),
-}],
-}
-.into(),
-)),
+unifier.add_ty(TypeEnum::TFunc(FunSignature {
+ret: store.bool,
+vars: HashMap::new(),
+args: vec![FuncArg {
+ty: other_ty,
+default_value: None,
+name: "other".into(),
+}],
+})),
false,
),
);
}
-} else {
-unreachable!()
-}
+});
}
-/// Add, Sub, Mult, Pow
+/// Add, Sub, Mult
pub fn impl_basic_arithmetic(
unifier: &mut Unifier,
store: &PrimitiveStore,
@@ -201,6 +199,7 @@ pub fn impl_basic_arithmetic(
)
}
+/// Pow
pub fn impl_pow(
unifier: &mut Unifier,
store: &PrimitiveStore,
@@ -256,18 +255,18 @@ pub fn impl_mod(
}
/// UAdd, USub
-pub fn impl_sign(unifier: &mut Unifier, store: &PrimitiveStore, ty: Type) {
-impl_unaryop(unifier, store, ty, ty, &[ast::Unaryop::UAdd, ast::Unaryop::USub])
+pub fn impl_sign(unifier: &mut Unifier, _store: &PrimitiveStore, ty: Type) {
+impl_unaryop(unifier, ty, ty, &[ast::Unaryop::UAdd, ast::Unaryop::USub])
}
/// Invert
-pub fn impl_invert(unifier: &mut Unifier, store: &PrimitiveStore, ty: Type) {
-impl_unaryop(unifier, store, ty, ty, &[ast::Unaryop::Invert])
+pub fn impl_invert(unifier: &mut Unifier, _store: &PrimitiveStore, ty: Type) {
+impl_unaryop(unifier, ty, ty, &[ast::Unaryop::Invert])
}
/// Not
pub fn impl_not(unifier: &mut Unifier, store: &PrimitiveStore, ty: Type) {
-impl_unaryop(unifier, store, ty, store.bool, &[ast::Unaryop::Not])
+impl_unaryop(unifier, ty, store.bool, &[ast::Unaryop::Not])
}
/// Lt, LtE, Gt, GtE
@@ -287,35 +286,32 @@ pub fn impl_eq(unifier: &mut Unifier, store: &PrimitiveStore, ty: Type) {
}
pub fn set_primitives_magic_methods(store: &PrimitiveStore, unifier: &mut Unifier) {
-let PrimitiveStore { int32: int32_t, int64: int64_t, float: float_t, bool: bool_t, .. } =
-*store;
-/* int32 ======== */
-impl_basic_arithmetic(unifier, store, int32_t, &[int32_t], int32_t);
-impl_pow(unifier, store, int32_t, &[int32_t], int32_t);
-impl_bitwise_arithmetic(unifier, store, int32_t);
-impl_bitwise_shift(unifier, store, int32_t);
-impl_div(unifier, store, int32_t, &[int32_t]);
-impl_floordiv(unifier, store, int32_t, &[int32_t], int32_t);
-impl_mod(unifier, store, int32_t, &[int32_t], int32_t);
-impl_sign(unifier, store, int32_t);
-impl_invert(unifier, store, int32_t);
-impl_not(unifier, store, int32_t);
-impl_comparison(unifier, store, int32_t, int32_t);
-impl_eq(unifier, store, int32_t);
-/* int64 ======== */
-impl_basic_arithmetic(unifier, store, int64_t, &[int64_t], int64_t);
-impl_pow(unifier, store, int64_t, &[int64_t], int64_t);
-impl_bitwise_arithmetic(unifier, store, int64_t);
-impl_bitwise_shift(unifier, store, int64_t);
-impl_div(unifier, store, int64_t, &[int64_t]);
-impl_floordiv(unifier, store, int64_t, &[int64_t], int64_t);
-impl_mod(unifier, store, int64_t, &[int64_t], int64_t);
-impl_sign(unifier, store, int64_t);
-impl_invert(unifier, store, int64_t);
-impl_not(unifier, store, int64_t);
-impl_comparison(unifier, store, int64_t, int64_t);
-impl_eq(unifier, store, int64_t);
+let PrimitiveStore {
+int32: int32_t,
+int64: int64_t,
+float: float_t,
+bool: bool_t,
+uint32: uint32_t,
+uint64: uint64_t,
+..
+} = *store;
+/* int ======== */
+for t in [int32_t, int64_t, uint32_t, uint64_t] {
+impl_basic_arithmetic(unifier, store, t, &[t], t);
+impl_pow(unifier, store, t, &[t], t);
+impl_bitwise_arithmetic(unifier, store, t);
+impl_bitwise_shift(unifier, store, t);
+impl_div(unifier, store, t, &[t]);
+impl_floordiv(unifier, store, t, &[t], t);
+impl_mod(unifier, store, t, &[t], t);
+impl_invert(unifier, store, t);
+impl_not(unifier, store, t);
+impl_comparison(unifier, store, t, t);
+impl_eq(unifier, store, t);
+}
+for t in [int32_t, int64_t] {
+impl_sign(unifier, store, t);
+}
/* float ======== */
impl_basic_arithmetic(unifier, store, float_t, &[float_t], float_t);


@@ -1,5 +1,6 @@
mod function_check;
pub mod magic_methods;
+pub mod type_error;
pub mod type_inferencer;
pub mod typedef;
mod unification_table;


@@ -0,0 +1,186 @@
use std::collections::HashMap;
use std::fmt::Display;
use crate::typecheck::typedef::TypeEnum;
use super::typedef::{RecordKey, Type, Unifier};
use nac3parser::ast::{Location, StrRef};
#[derive(Debug, Clone)]
pub enum TypeErrorKind {
TooManyArguments {
expected: usize,
got: usize,
},
MissingArgs(String),
UnknownArgName(StrRef),
IncorrectArgType {
name: StrRef,
expected: Type,
got: Type,
},
FieldUnificationError {
field: RecordKey,
types: (Type, Type),
loc: (Option<Location>, Option<Location>),
},
IncompatibleRange(Type, Vec<Type>),
IncompatibleTypes(Type, Type),
MutationError(RecordKey, Type),
NoSuchField(RecordKey, Type),
TupleIndexOutOfBounds {
index: i32,
len: i32,
},
RequiresTypeAnn,
PolymorphicFunctionPointer,
}
#[derive(Debug, Clone)]
pub struct TypeError {
pub kind: TypeErrorKind,
pub loc: Option<Location>,
}
impl TypeError {
pub fn new(kind: TypeErrorKind, loc: Option<Location>) -> TypeError {
TypeError { kind, loc }
}
pub fn at(mut self, loc: Option<Location>) -> TypeError {
self.loc = self.loc.or(loc);
self
}
pub fn to_display(self, unifier: &Unifier) -> DisplayTypeError {
DisplayTypeError { err: self, unifier }
}
}
pub struct DisplayTypeError<'a> {
pub err: TypeError,
pub unifier: &'a Unifier,
}
fn loc_to_str(loc: Option<Location>) -> String {
match loc {
Some(loc) => format!("(in {})", loc),
None => "".to_string(),
}
}
impl<'a> Display for DisplayTypeError<'a> {
fn fmt(&self, f: &mut std::fmt::Formatter) -> std::fmt::Result {
use TypeErrorKind::*;
let mut notes = Some(HashMap::new());
match &self.err.kind {
TooManyArguments { expected, got } => {
write!(f, "Too many arguments. Expected {} but got {}", expected, got)
}
MissingArgs(args) => {
write!(f, "Missing arguments: {}", args)
}
UnknownArgName(name) => {
write!(f, "Unknown argument name: {}", name)
}
IncorrectArgType { name, expected, got } => {
let expected = self.unifier.stringify_with_notes(*expected, &mut notes);
let got = self.unifier.stringify_with_notes(*got, &mut notes);
write!(
f,
"Incorrect argument type for {}. Expected {}, but got {}",
name, expected, got
)
}
FieldUnificationError { field, types, loc } => {
let lhs = self.unifier.stringify_with_notes(types.0, &mut notes);
let rhs = self.unifier.stringify_with_notes(types.1, &mut notes);
write!(
f,
"Unable to unify field {}: Got types {}{} and {}{}",
field,
lhs,
loc_to_str(loc.0),
rhs,
loc_to_str(loc.1)
)
}
IncompatibleRange(t, ts) => {
let t = self.unifier.stringify_with_notes(*t, &mut notes);
let ts = ts
.iter()
.map(|t| self.unifier.stringify_with_notes(*t, &mut notes))
.collect::<Vec<_>>();
write!(f, "Expected any one of these types: {}, but got {}", ts.join(", "), t)
}
IncompatibleTypes(t1, t2) => {
let type1 = self.unifier.get_ty_immutable(*t1);
let type2 = self.unifier.get_ty_immutable(*t2);
match (&*type1, &*type2) {
(TypeEnum::TCall(calls), _) => {
let loc = self.unifier.calls[calls[0].0].loc;
let result = write!(
f,
"{} is not callable",
self.unifier.stringify_with_notes(*t2, &mut notes)
);
if let Some(loc) = loc {
result?;
write!(f, " (in {})", loc)?;
return Ok(());
}
result
}
(TypeEnum::TTuple { ty: ty1 }, TypeEnum::TTuple { ty: ty2 })
if ty1.len() != ty2.len() =>
{
let t1 = self.unifier.stringify_with_notes(*t1, &mut notes);
let t2 = self.unifier.stringify_with_notes(*t2, &mut notes);
write!(f, "Tuple length mismatch: got {} and {}", t1, t2)
}
_ => {
let t1 = self.unifier.stringify_with_notes(*t1, &mut notes);
let t2 = self.unifier.stringify_with_notes(*t2, &mut notes);
write!(f, "Incompatible types: {} and {}", t1, t2)
}
}
}
MutationError(name, t) => {
if let TypeEnum::TTuple { .. } = &*self.unifier.get_ty_immutable(*t) {
write!(f, "Cannot assign to an element of a tuple")
} else {
let t = self.unifier.stringify_with_notes(*t, &mut notes);
write!(f, "Cannot assign to field {} of {}, which is immutable", name, t)
}
}
NoSuchField(name, t) => {
let t = self.unifier.stringify_with_notes(*t, &mut notes);
write!(f, "`{}::{}` field/method does not exist", t, name)
}
TupleIndexOutOfBounds { index, len } => {
write!(
f,
"Tuple index out of bounds. Got {} but tuple has only {} elements",
index, len
)
}
RequiresTypeAnn => {
write!(f, "Unable to infer virtual object type: Type annotation required")
}
PolymorphicFunctionPointer => {
write!(f, "Polymorphic function pointers is not supported")
}
}?;
if let Some(loc) = self.err.loc {
write!(f, " at {}", loc)?;
}
let notes = notes.unwrap();
if !notes.is_empty() {
write!(f, "\n\nNotes:")?;
for line in notes.values() {
write!(f, "\n {}", line)?;
}
}
Ok(())
}
}
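The `at` combinator defined above uses `Option::or`, so the first location attached to an error wins and later, outer calls cannot overwrite it — the innermost (most precise) source position survives bubbling up. A small self-contained sketch of that behavior, with a toy error type and a line number standing in for a full `Location`:

```rust
// Toy error carrying an optional source location, mirroring TypeError::at.
#[derive(Debug, Clone, PartialEq)]
struct TypoError {
    msg: String,
    loc: Option<u32>, // a line number stands in for a full Location
}

impl TypoError {
    // Attach a location only if none is set yet: `self.loc.or(loc)` keeps
    // the first location that was ever attached and ignores later ones.
    fn at(mut self, loc: Option<u32>) -> TypoError {
        self.loc = self.loc.or(loc);
        self
    }
}
```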


@@ -3,7 +3,7 @@ use std::convert::{From, TryInto};
use std::iter::once;
use std::{cell::RefCell, sync::Arc};
-use super::typedef::{Call, FunSignature, FuncArg, Type, TypeEnum, Unifier};
+use super::typedef::{Call, FunSignature, FuncArg, RecordField, Type, TypeEnum, Unifier};
use super::{magic_methods::*, typedef::CallId};
use crate::{symbol_resolver::SymbolResolver, toplevel::TopLevelContext};
use itertools::izip;
@@ -32,11 +32,14 @@ impl From<Location> for CodeLocation {
pub struct PrimitiveStore {
pub int32: Type,
pub int64: Type,
+pub uint32: Type,
+pub uint64: Type,
pub float: Type,
pub bool: Type,
pub none: Type,
pub range: Type,
pub str: Type,
+pub exception: Type,
}
pub struct FunctionData {
@@ -51,9 +54,10 @@ pub struct Inferencer<'a> {
pub function_data: &'a mut FunctionData,
pub unifier: &'a mut Unifier,
pub primitives: &'a PrimitiveStore,
-pub virtual_checks: &'a mut Vec<(Type, Type)>,
+pub virtual_checks: &'a mut Vec<(Type, Type, Location)>,
pub variable_mapping: HashMap<StrRef, Type>,
pub calls: &'a mut HashMap<CodeLocation, CallId>,
+pub in_handler: bool,
}
struct NaiveFolder();
@@ -96,10 +100,10 @@ impl<'a> fold::Fold<()> for Inferencer<'a> {
self.unify(target.custom.unwrap(), ty.custom.unwrap(), &node.location)?;
Some(ty)
} else {
-return Err(format!(
-"declaration without definition is not yet supported, at {}",
-node.location
-));
+return report_error(
+"declaration without definition is not yet supported",
+node.location,
+);
};
let top_level_defs = self.top_level.definitions.read();
let annotation_type = self.function_data.resolver.parse_type_annotation(
@@ -122,9 +126,103 @@ impl<'a> fold::Fold<()> for Inferencer<'a> {
},
}
}
-ast::StmtKind::For { ref target, .. } => {
-self.infer_pattern(target)?;
-fold::fold_stmt(self, node)?
ast::StmtKind::Try { body, handlers, orelse, finalbody, config_comment } => {
let body = body
.into_iter()
.map(|stmt| self.fold_stmt(stmt))
.collect::<Result<Vec<_>, _>>()?;
let outer_in_handler = self.in_handler;
let mut exception_handlers = Vec::with_capacity(handlers.len());
self.in_handler = true;
{
let top_level_defs = self.top_level.definitions.read();
let mut naive_folder = NaiveFolder();
for handler in handlers.into_iter() {
let ast::ExcepthandlerKind::ExceptHandler { type_, name, body } =
handler.node;
let type_ = if let Some(type_) = type_ {
let typ = self.function_data.resolver.parse_type_annotation(
top_level_defs.as_slice(),
self.unifier,
self.primitives,
&type_,
)?;
self.virtual_checks.push((
typ,
self.primitives.exception,
handler.location,
));
if let Some(name) = name {
if !self.defined_identifiers.contains(&name) {
self.defined_identifiers.insert(name);
}
if let Some(old_typ) = self.variable_mapping.insert(name, typ) {
let loc = handler.location;
self.unifier.unify(old_typ, typ).map_err(|e| {
e.at(Some(loc)).to_display(self.unifier).to_string()
})?;
}
}
let mut type_ = naive_folder.fold_expr(*type_)?;
type_.custom = Some(typ);
Some(Box::new(type_))
} else {
None
};
let body = body
.into_iter()
.map(|stmt| self.fold_stmt(stmt))
.collect::<Result<Vec<_>, _>>()?;
exception_handlers.push(Located {
location: handler.location,
node: ast::ExcepthandlerKind::ExceptHandler { type_, name, body },
custom: None,
});
}
}
self.in_handler = outer_in_handler;
let handlers = exception_handlers;
let orelse = orelse.into_iter().map(|stmt| self.fold_stmt(stmt)).collect::<Result<
Vec<_>,
_,
>>(
)?;
let finalbody = finalbody
.into_iter()
.map(|stmt| self.fold_stmt(stmt))
.collect::<Result<Vec<_>, _>>()?;
Located {
location: node.location,
node: ast::StmtKind::Try { body, handlers, orelse, finalbody, config_comment },
custom: None,
}
}
ast::StmtKind::For { target, iter, body, orelse, config_comment, type_comment } => {
self.infer_pattern(&target)?;
let target = self.fold_expr(*target)?;
let iter = self.fold_expr(*iter)?;
if self.unifier.unioned(iter.custom.unwrap(), self.primitives.range) {
self.unify(self.primitives.int32, target.custom.unwrap(), &target.location)?;
} else {
let list = self.unifier.add_ty(TypeEnum::TList { ty: target.custom.unwrap() });
self.unify(list, iter.custom.unwrap(), &iter.location)?;
}
let body =
body.into_iter().map(|b| self.fold_stmt(b)).collect::<Result<Vec<_>, _>>()?;
let orelse =
orelse.into_iter().map(|o| self.fold_stmt(o)).collect::<Result<Vec<_>, _>>()?;
Located {
location: node.location,
node: ast::StmtKind::For {
target: Box::new(target),
iter: Box::new(iter),
body,
orelse,
config_comment,
type_comment,
},
custom: None,
}
}
ast::StmtKind::Assign { ref mut targets, ref config_comment, .. } => {
for target in targets.iter_mut() {
@@ -154,7 +252,7 @@ impl<'a> fold::Fold<()> for Inferencer<'a> {
self.primitives,
id,
)
-.unwrap_or_else(|| {
+.unwrap_or_else(|_| {
self.variable_mapping.insert(id, value_ty);
value_ty
})
@@ -170,7 +268,9 @@ impl<'a> fold::Fold<()> for Inferencer<'a> {
}
})
.collect();
-let targets = targets?;
+let loc = node.location;
+let targets = targets
+.map_err(|e| e.at(Some(loc)).to_display(self.unifier).to_string())?;
return Ok(Located {
location: node.location,
node: ast::StmtKind::Assign {
@@ -201,14 +301,8 @@ impl<'a> fold::Fold<()> for Inferencer<'a> {
_ => fold::fold_stmt(self, node)?,
};
match &stmt.node {
-ast::StmtKind::For { target, iter, .. } => {
-if self.unifier.unioned(iter.custom.unwrap(), self.primitives.range) {
-self.unify(self.primitives.int32, target.custom.unwrap(), &target.location)?;
-} else {
-let list = self.unifier.add_ty(TypeEnum::TList { ty: target.custom.unwrap() });
-self.unify(list, iter.custom.unwrap(), &iter.location)?;
-}
-}
+ast::StmtKind::For { .. } => {}
+ast::StmtKind::Try { .. } => {}
ast::StmtKind::If { test, .. } | ast::StmtKind::While { test, .. } => {
self.unify(test.custom.unwrap(), self.primitives.bool, &test.location)?;
}
@@ -221,17 +315,32 @@ impl<'a> fold::Fold<()> for Inferencer<'a> {
ast::StmtKind::Break { .. }
| ast::StmtKind::Continue { .. }
| ast::StmtKind::Pass { .. } => {}
ast::StmtKind::Raise { exc, cause, .. } => {
if let Some(cause) = cause {
return report_error("raise ... from cause is not supported", cause.location);
}
if let Some(exc) = exc {
self.virtual_checks.push((
exc.custom.unwrap(),
self.primitives.exception,
exc.location,
));
} else if !self.in_handler {
return report_error(
"cannot reraise outside exception handlers",
stmt.location,
);
}
}
ast::StmtKind::With { items, .. } => {
for item in items.iter() {
let ty = item.context_expr.custom.unwrap();
// if we can simply unify without creating new types...
let mut fast_path = false;
if let TypeEnum::TObj { fields, .. } = &*self.unifier.get_ty(ty) {
-let fields = fields.borrow();
fast_path = true;
if let Some(enter) = fields.get(&"__enter__".into()).cloned() {
if let TypeEnum::TFunc(signature) = &*self.unifier.get_ty(enter.0) {
-let signature = signature.borrow();
if !signature.args.is_empty() {
return report_error(
"__enter__ method should take no argument other than self",
@@ -260,7 +369,6 @@ impl<'a> fold::Fold<()> for Inferencer<'a> {
}
if let Some(exit) = fields.get(&"__exit__".into()).cloned() {
if let TypeEnum::TFunc(signature) = &*self.unifier.get_ty(exit.0) {
-let signature = signature.borrow();
if !signature.args.is_empty() {
return report_error(
"__exit__ method should take no argument other than self",
@@ -278,24 +386,24 @@ impl<'a> fold::Fold<()> for Inferencer<'a> {
}
}
if !fast_path {
-let enter = TypeEnum::TFunc(RefCell::new(FunSignature {
+let enter = TypeEnum::TFunc(FunSignature {
args: vec![],
ret: item.optional_vars.as_ref().map_or_else(
-|| self.unifier.get_fresh_var().0,
+|| self.unifier.get_dummy_var().0,
|var| var.custom.unwrap(),
),
vars: Default::default(),
-}));
+});
let enter = self.unifier.add_ty(enter);
-let exit = TypeEnum::TFunc(RefCell::new(FunSignature {
+let exit = TypeEnum::TFunc(FunSignature {
args: vec![],
-ret: self.unifier.get_fresh_var().0,
+ret: self.unifier.get_dummy_var().0,
vars: Default::default(),
-}));
+});
let exit = self.unifier.add_ty(exit);
let mut fields = HashMap::new();
-fields.insert("__enter__".into(), (enter, false));
-fields.insert("__exit__".into(), (exit, false));
+fields.insert("__enter__".into(), RecordField::new(enter, false, None));
+fields.insert("__exit__".into(), RecordField::new(exit, false, None));
let record = self.unifier.add_record(fields);
self.unify(ty, record, &stmt.location)?;
}
@@ -336,26 +444,26 @@ impl<'a> fold::Fold<()> for Inferencer<'a> {
_ => fold::fold_expr(self, node)?,
};
let custom = match &expr.node {
-ast::ExprKind::Constant { value, .. } => Some(self.infer_constant(value)?),
+ast::ExprKind::Constant { value, .. } => {
+Some(self.infer_constant(value, &expr.location)?)
+}
ast::ExprKind::Name { id, .. } => {
if !self.defined_identifiers.contains(id) {
-if self
-.function_data
-.resolver
-.get_symbol_type(
-self.unifier,
-&self.top_level.definitions.read(),
-self.primitives,
-*id,
-)
-.is_some()
-{
-self.defined_identifiers.insert(*id);
-} else {
-return Err(format!(
-"unknown identifier {} (use before def?) at {}",
-id, expr.location
-));
-}
+match self.function_data.resolver.get_symbol_type(
+self.unifier,
+&self.top_level.definitions.read(),
+self.primitives,
+*id,
+) {
+Ok(_) => {
+self.defined_identifiers.insert(*id);
+}
+Err(e) => {
+return report_error(
+&format!("type error at identifier `{}` ({})", id, e),
+expr.location,
+);
+}
+}
}
Some(self.infer_identifier(*id)?)
@@ -373,8 +481,8 @@ impl<'a> fold::Fold<()> for Inferencer<'a> {
ast::ExprKind::Compare { left, ops, comparators } => {
Some(self.infer_compare(left, ops, comparators)?)
}
-ast::ExprKind::Subscript { value, slice, .. } => {
-Some(self.infer_subscript(value.as_ref(), slice.as_ref())?)
+ast::ExprKind::Subscript { value, slice, ctx, .. } => {
+Some(self.infer_subscript(value.as_ref(), slice.as_ref(), ctx)?)
}
ast::ExprKind::IfExp { test, body, orelse } => {
Some(self.infer_if_expr(test, body.as_ref(), orelse.as_ref())?)
@@ -383,7 +491,7 @@ impl<'a> fold::Fold<()> for Inferencer<'a> {
| ast::ExprKind::Lambda { .. }
| ast::ExprKind::Call { .. } => expr.custom, // already computed
ast::ExprKind::Slice { .. } => None, // we don't need it for slice
-_ => return Err("not supported yet".into()),
+_ => return report_error("not supported", expr.location),
};
Ok(ast::Expr { custom, location: expr.location, node: expr.node })
}
@@ -395,11 +503,13 @@ impl<'a> Inferencer<'a> {
/// Constrain a <: b
/// Currently implemented as unification
fn constrain(&mut self, a: Type, b: Type, location: &Location) -> Result<(), String> {
-self.unifier.unify(a, b).map_err(|old| format!("{} at {}", old, location))
+self.unify(a, b, location)
}
fn unify(&mut self, a: Type, b: Type, location: &Location) -> Result<(), String> {
-self.unifier.unify(a, b).map_err(|old| format!("{} at {}", old, location))
+self.unifier
+.unify(a, b)
+.map_err(|e| e.at(Some(*location)).to_display(self.unifier).to_string())
}
fn infer_pattern(&mut self, pattern: &ast::Expr<()>) -> Result<(), String> {
@@ -429,17 +539,17 @@ impl<'a> Inferencer<'a> {
ret: Option<Type>,
) -> InferenceResult {
if let TypeEnum::TObj { params: class_params, fields, .. } = &*self.unifier.get_ty(obj) {
-if class_params.borrow().is_empty() {
-if let Some(ty) = fields.borrow().get(&method) {
+if class_params.is_empty() {
+if let Some(ty) = fields.get(&method) {
let ty = ty.0;
if let TypeEnum::TFunc(sign) = &*self.unifier.get_ty(ty) {
-let sign = sign.borrow();
if sign.vars.is_empty() {
let call = Call {
posargs: params,
kwargs: HashMap::new(),
ret: sign.ret,
fun: RefCell::new(None),
+loc: Some(location),
};
if let Some(ret) = ret {
self.unifier.unify(sign.ret, ret).unwrap();
@@ -451,26 +561,27 @@ impl<'a> Inferencer<'a> {
.map(|v| v.name)
.rev()
.collect();
-self.unifier
-.unify_call(&call, ty, &sign, &required)
-.map_err(|old| format!("{} at {}", old, location))?;
+self.unifier.unify_call(&call, ty, sign, &required).map_err(|e| {
+e.at(Some(location)).to_display(self.unifier).to_string()
+})?;
return Ok(sign.ret);
}
}
}
}
}
-let ret = ret.unwrap_or_else(|| self.unifier.get_fresh_var().0);
+let ret = ret.unwrap_or_else(|| self.unifier.get_dummy_var().0);
let call = self.unifier.add_call(Call {
posargs: params,
kwargs: HashMap::new(),
ret,
fun: RefCell::new(None),
+loc: Some(location),
});
self.calls.insert(location.into(), call);
-let call = self.unifier.add_ty(TypeEnum::TCall(vec![call].into()));
-let fields = once((method, (call, false))).collect();
+let call = self.unifier.add_ty(TypeEnum::TCall(vec![call]));
+let fields = once((method.into(), RecordField::new(call, false, Some(location)))).collect();
let record = self.unifier.add_record(fields);
self.constrain(obj, record, &location)?;
Ok(ret)
@@ -489,9 +600,9 @@ impl<'a> Inferencer<'a> {
|| !args.defaults.is_empty()
{
// actually I'm not sure whether programs violating this is a valid python program.
-return Err(
-"We only support positional or keyword arguments without defaults for lambdas."
-.to_string(),
+return report_error(
+"We only support positional or keyword arguments without defaults for lambdas",
+if args.args.is_empty() { body.location } else { args.args[0].location },
);
}
@@ -502,11 +613,14 @@ impl<'a> Inferencer<'a> {
defined_identifiers.insert(*name);
}
}
-let fn_args: Vec<_> =
-args.args.iter().map(|v| (v.node.arg, self.unifier.get_fresh_var().0)).collect();
+let fn_args: Vec<_> = args
+.args
+.iter()
+.map(|v| (v.node.arg, self.unifier.get_fresh_var(Some(v.node.arg), Some(v.location)).0))
+.collect();
let mut variable_mapping = self.variable_mapping.clone();
variable_mapping.extend(fn_args.iter().cloned());
-let ret = self.unifier.get_fresh_var().0;
+let ret = self.unifier.get_dummy_var().0;
let mut new_context = Inferencer {
function_data: self.function_data,
@@ -517,6 +631,8 @@ impl<'a> Inferencer<'a> {
top_level: self.top_level,
defined_identifiers,
variable_mapping,
+// lambda should not be considered in exception handler
+in_handler: false,
};
let fun = FunSignature {
args: fn_args
@@ -536,7 +652,7 @@ impl<'a> Inferencer<'a> {
Ok(Located {
location,
node: ExprKind::Lambda { args: args.into(), body: body.into() },
-custom: Some(self.unifier.add_ty(TypeEnum::TFunc(fun.into()))),
+custom: Some(self.unifier.add_ty(TypeEnum::TFunc(fun))),
})
}
@@ -547,8 +663,9 @@ impl<'a> Inferencer<'a> {
mut generators: Vec<Comprehension>,
) -> Result<ast::Expr<Option<Type>>, String> {
if generators.len() != 1 {
-return Err(
-"Only 1 generator statement for list comprehension is supported.".to_string()
+return report_error(
+"Only 1 generator statement for list comprehension is supported",
+generators[0].target.location,
);
}
let variable_mapping = self.variable_mapping.clone();
@@ -562,10 +679,12 @@ impl<'a> Inferencer<'a> {
primitives: self.primitives,
calls: self.calls,
defined_identifiers,
+// listcomp expr should not be considered as inside an exception handler...
+in_handler: false,
};
let generator = generators.pop().unwrap();
if generator.is_async {
-return Err("Async iterator not supported.".to_string());
+return report_error("Async iterator not supported", generator.target.location);
}
new_context.infer_pattern(&generator.target)?;
let target = new_context.fold_expr(*generator.target)?;
@@ -623,8 +742,9 @@ impl<'a> Inferencer<'a> {
// handle special functions that cannot be typed in the usual way...
if id == "virtual".into() {
if args.is_empty() || args.len() > 2 || !keywords.is_empty() {
-return Err(
-"`virtual` can only accept 1/2 positional arguments.".to_string()
+return report_error(
+"`virtual` can only accept 1/2 positional arguments",
+func_location,
);
}
let arg0 = self.fold_expr(args.remove(0))?;
@@ -637,9 +757,9 @@ impl<'a> Inferencer<'a> {
&arg,
)?
} else {
-self.unifier.get_fresh_var().0
+self.unifier.get_dummy_var().0
};
-self.virtual_checks.push((arg0.custom.unwrap(), ty));
+self.virtual_checks.push((arg0.custom.unwrap(), ty, func_location));
let custom = Some(self.unifier.add_ty(TypeEnum::TVirtual { ty }));
return Ok(Located {
location,
@@ -660,21 +780,60 @@ impl<'a> Inferencer<'a> {
 if let ExprKind::Constant { value: ast::Constant::Int(val), kind } =
     &args[0].node
 {
-    let int64: Result<i64, _> = val.try_into();
-    let custom;
-    if int64.is_ok() {
-        custom = Some(self.primitives.int64);
+    let custom = Some(self.primitives.int64);
+    let v: Result<i64, _> = (*val).try_into();
+    if v.is_ok() {
+        return Ok(Located {
+            location: args[0].location,
+            custom,
+            node: ExprKind::Constant {
+                value: ast::Constant::Int(*val),
+                kind: kind.clone(),
+            },
+        });
     } else {
-        return Err("Integer out of bound".into());
+        return report_error("Integer out of bound", args[0].location)
     }
-    return Ok(Located {
-        location: args[0].location,
-        custom,
-        node: ExprKind::Constant {
-            value: ast::Constant::Int(val.clone()),
-            kind: kind.clone(),
-        },
-    });
+}
+}
+if id == "uint32".into() && args.len() == 1 {
+    if let ExprKind::Constant { value: ast::Constant::Int(val), kind } =
+        &args[0].node
+    {
+        let custom = Some(self.primitives.uint32);
+        let v: Result<u32, _> = (*val).try_into();
+        if v.is_ok() {
+            return Ok(Located {
+                location: args[0].location,
+                custom,
+                node: ExprKind::Constant {
+                    value: ast::Constant::Int(*val),
+                    kind: kind.clone(),
+                },
+            });
+        } else {
+            return report_error("Integer out of bound", args[0].location)
+        }
+    }
+}
+if id == "uint64".into() && args.len() == 1 {
+    if let ExprKind::Constant { value: ast::Constant::Int(val), kind } =
+        &args[0].node
+    {
+        let custom = Some(self.primitives.uint64);
+        let v: Result<u64, _> = (*val).try_into();
+        if v.is_ok() {
+            return Ok(Located {
+                location: args[0].location,
+                custom,
+                node: ExprKind::Constant {
+                    value: ast::Constant::Int(*val),
+                    kind: kind.clone(),
+                },
+            });
+        } else {
+            return report_error("Integer out of bound", args[0].location)
+        }
+    }
 }
 Located { location: func_location, custom, node: ExprKind::Name { id, ctx } }
@@ -689,16 +848,16 @@ impl<'a> Inferencer<'a> {
     .collect::<Result<Vec<_>, _>>()?;
 if let TypeEnum::TFunc(sign) = &*self.unifier.get_ty(func.custom.unwrap()) {
-    let sign = sign.borrow();
     if sign.vars.is_empty() {
         let call = Call {
             posargs: args.iter().map(|v| v.custom.unwrap()).collect(),
             kwargs: keywords
                 .iter()
-                .map(|v| (*v.node.arg.as_ref().unwrap(), v.custom.unwrap()))
+                .map(|v| (*v.node.arg.as_ref().unwrap(), v.node.value.custom.unwrap()))
                 .collect(),
             fun: RefCell::new(None),
             ret: sign.ret,
+            loc: Some(location),
         };
         let required: Vec<_> = sign
             .args
@@ -708,8 +867,8 @@ impl<'a> Inferencer<'a> {
             .rev()
             .collect();
         self.unifier
-            .unify_call(&call, func.custom.unwrap(), &sign, &required)
-            .map_err(|old| format!("{} at {}", old, location))?;
+            .unify_call(&call, func.custom.unwrap(), sign, &required)
+            .map_err(|e| e.at(Some(location)).to_display(self.unifier).to_string())?;
         return Ok(Located {
             location,
             custom: Some(sign.ret),
@@ -718,7 +877,7 @@ impl<'a> Inferencer<'a> {
     }
 }
-let ret = self.unifier.get_fresh_var().0;
+let ret = self.unifier.get_dummy_var().0;
 let call = self.unifier.add_call(Call {
     posargs: args.iter().map(|v| v.custom.unwrap()).collect(),
     kwargs: keywords
@@ -727,9 +886,10 @@ impl<'a> Inferencer<'a> {
         .collect(),
     fun: RefCell::new(None),
     ret,
+    loc: Some(location),
 });
 self.calls.insert(location.into(), call);
-let call = self.unifier.add_ty(TypeEnum::TCall(vec![call].into()));
+let call = self.unifier.add_ty(TypeEnum::TCall(vec![call]));
 self.unify(func.custom.unwrap(), call, &func.location)?;
 Ok(Located { location, custom: Some(ret), node: ExprKind::Call { func, args, keywords } })
@@ -745,38 +905,39 @@ impl<'a> Inferencer<'a> {
     .function_data
     .resolver
     .get_symbol_type(unifier, &self.top_level.definitions.read(), self.primitives, id)
-    .unwrap_or_else(|| {
-        let ty = unifier.get_fresh_var().0;
+    .unwrap_or_else(|_| {
+        let ty = unifier.get_dummy_var().0;
         variable_mapping.insert(id, ty);
         ty
     }))
 }
 }
-fn infer_constant(&mut self, constant: &ast::Constant) -> InferenceResult {
+fn infer_constant(&mut self, constant: &ast::Constant, loc: &Location) -> InferenceResult {
     match constant {
         ast::Constant::Bool(_) => Ok(self.primitives.bool),
         ast::Constant::Int(val) => {
-            let int32: Result<i32, _> = val.try_into();
-            // int64 would be handled separately in functions
+            let int32: Result<i32, _> = (*val).try_into();
+            // int64 and unsigned integers are handled separately in functions
             if int32.is_ok() {
                 Ok(self.primitives.int32)
             } else {
-                Err("Integer out of bound".into())
+                report_error("Integer out of bound", *loc)
             }
         }
         ast::Constant::Float(_) => Ok(self.primitives.float),
         ast::Constant::Tuple(vals) => {
-            let ty: Result<Vec<_>, _> = vals.iter().map(|x| self.infer_constant(x)).collect();
+            let ty: Result<Vec<_>, _> =
+                vals.iter().map(|x| self.infer_constant(x, loc)).collect();
             Ok(self.unifier.add_ty(TypeEnum::TTuple { ty: ty? }))
         }
         ast::Constant::Str(_) => Ok(self.primitives.str),
-        _ => Err("not supported".into()),
+        _ => report_error("not supported", *loc),
     }
 }
 fn infer_list(&mut self, elts: &[ast::Expr<Option<Type>>]) -> InferenceResult {
-    let (ty, _) = self.unifier.get_fresh_var();
+    let ty = self.unifier.get_dummy_var().0;
     for t in elts.iter() {
         self.unify(ty, t.custom.unwrap(), &t.location)?;
     }
@@ -797,18 +958,24 @@ impl<'a> Inferencer<'a> {
 let ty = value.custom.unwrap();
 if let TypeEnum::TObj { fields, .. } = &*self.unifier.get_ty(ty) {
     // just a fast path
-    let fields = fields.borrow();
     match (fields.get(&attr), ctx == &ExprContext::Store) {
         (Some((ty, true)), _) => Ok(*ty),
         (Some((ty, false)), false) => Ok(*ty),
         (Some((_, false)), true) => {
-            report_error(&format!("Field {} should be immutable", attr), value.location)
+            report_error(&format!("Field `{}` is immutable", attr), value.location)
         }
-        (None, _) => report_error(&format!("No such field {}", attr), value.location),
+        (None, _) => {
+            let t = self.unifier.stringify(ty);
+            report_error(&format!("`{}::{}` field/method does not exist", t, attr), value.location)
+        },
     }
 } else {
-    let (attr_ty, _) = self.unifier.get_fresh_var();
-    let fields = once((attr, (attr_ty, ctx == &ExprContext::Store))).collect();
+    let attr_ty = self.unifier.get_dummy_var().0;
+    let fields = once((
+        attr.into(),
+        RecordField::new(attr_ty, ctx == &ExprContext::Store, Some(value.location)),
+    ))
+    .collect();
     let record = self.unifier.add_record(fields);
     self.constrain(value.custom.unwrap(), record, &value.location)?;
     Ok(attr_ty)
@@ -874,8 +1041,9 @@ impl<'a> Inferencer<'a> {
     &mut self,
     value: &ast::Expr<Option<Type>>,
     slice: &ast::Expr<Option<Type>>,
+    ctx: &ExprContext,
 ) -> InferenceResult {
-    let ty = self.unifier.get_fresh_var().0;
+    let ty = self.unifier.get_dummy_var().0;
     match &slice.node {
         ast::ExprKind::Slice { lower, upper, step } => {
             for v in [lower.as_ref(), upper.as_ref(), step.as_ref()].iter().flatten() {
@@ -887,13 +1055,22 @@ impl<'a> Inferencer<'a> {
         }
         ast::ExprKind::Constant { value: ast::Constant::Int(val), .. } => {
             // the index is a constant, so value can be a sequence.
-            let ind: i32 = val.try_into().map_err(|_| "Index must be int32".to_string())?;
-            let map = once((ind, ty)).collect();
-            let seq = self.unifier.add_sequence(map);
+            let ind: Option<i32> = (*val).try_into().ok();
+            let ind = ind.ok_or_else(|| "Index must be int32".to_string())?;
+            let map = once((
+                ind.into(),
+                RecordField::new(ty, ctx == &ExprContext::Store, Some(value.location)),
+            ))
+            .collect();
+            let seq = self.unifier.add_record(map);
             self.constrain(value.custom.unwrap(), seq, &value.location)?;
             Ok(ty)
         }
         _ => {
+            if let TypeEnum::TTuple { .. } = &*self.unifier.get_ty(value.custom.unwrap())
+            {
+                return report_error("Tuple index must be a constant (KernelInvariant is also not supported)", slice.location)
+            }
             // the index is not a constant, so value can only be a list
             self.constrain(slice.custom.unwrap(), self.primitives.int32, &slice.location)?;
             let list = self.unifier.add_ty(TypeEnum::TList { ty });
@@ -910,9 +1087,7 @@ impl<'a> Inferencer<'a> {
     orelse: &ast::Expr<Option<Type>>,
 ) -> InferenceResult {
     self.constrain(test.custom.unwrap(), self.primitives.bool, &test.location)?;
-    let ty = self.unifier.get_fresh_var().0;
-    self.constrain(body.custom.unwrap(), ty, &body.location)?;
-    self.constrain(orelse.custom.unwrap(), ty, &orelse.location)?;
-    Ok(ty)
+    self.constrain(body.custom.unwrap(), orelse.custom.unwrap(), &body.location)?;
+    Ok(body.custom.unwrap())
 }
 }
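The `int64`/`uint32`/`uint64` hunk above folds an integer literal into a fixed-width constant only when it fits, reporting "Integer out of bound" otherwise. A minimal, self-contained sketch of that pattern (the `Folded` enum and function names are hypothetical, not the nac3 API; the `try_into` range check is the point):

```rust
// Illustrative sketch: bounds-checked constant folding via `try_into`,
// mirroring how the inferencer handles int64/uint32/uint64 literal calls.
use std::convert::TryInto;

#[derive(Debug, PartialEq)]
enum Folded {
    UInt32(u32),
    Int64(i64),
    OutOfBound, // corresponds to the "Integer out of bound" error above
}

fn fold_uint32(val: i128) -> Folded {
    // `try_into` fails exactly when the literal is outside u32's range.
    match val.try_into() {
        Ok(v) => Folded::UInt32(v),
        Err(_) => Folded::OutOfBound,
    }
}

fn fold_int64(val: i128) -> Folded {
    match val.try_into() {
        Ok(v) => Folded::Int64(v),
        Err(_) => Folded::OutOfBound,
    }
}

fn main() {
    assert_eq!(fold_uint32(4_294_967_295), Folded::UInt32(u32::MAX));
    assert_eq!(fold_uint32(-1), Folded::OutOfBound);
    assert_eq!(fold_int64(-1), Folded::Int64(-1));
    assert_eq!(fold_int64(1_i128 << 70), Folded::OutOfBound);
}
```

Keeping the width check at fold time lets the error carry the literal's source location, as the `report_error(..., args[0].location)` calls in the diff do.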


@@ -1,8 +1,7 @@
-use super::super::typedef::*;
+use super::super::{magic_methods::with_fields, typedef::*};
 use super::*;
 use crate::{
     codegen::CodeGenContext,
-    location::Location,
     symbol_resolver::ValueEnum,
     toplevel::{DefinitionId, TopLevelDef},
 };
@@ -19,18 +18,21 @@ struct Resolver {
 }
 impl SymbolResolver for Resolver {
-    fn get_default_param_value(&self, _: &nac3parser::ast::Expr) -> Option<crate::symbol_resolver::SymbolValue> {
+    fn get_default_param_value(
+        &self,
+        _: &nac3parser::ast::Expr,
+    ) -> Option<crate::symbol_resolver::SymbolValue> {
         unimplemented!()
     }
     fn get_symbol_type(
         &self,
         _: &mut Unifier,
         _: &[Arc<RwLock<TopLevelDef>>],
         _: &PrimitiveStore,
         str: StrRef,
-    ) -> Option<Type> {
-        self.id_to_type.get(&str).cloned()
+    ) -> Result<Type, String> {
+        self.id_to_type.get(&str).cloned().ok_or_else(|| format!("cannot find symbol `{}`", str))
     }
     fn get_symbol_value<'ctx, 'a>(
@@ -41,12 +43,16 @@ impl SymbolResolver for Resolver {
         unimplemented!()
     }
-    fn get_symbol_location(&self, _: StrRef) -> Option<Location> {
+    fn get_identifier_def(&self, id: StrRef) -> Result<DefinitionId, String> {
+        self.id_to_def.get(&id).cloned().ok_or("Unknown identifier".to_string())
+    }
+    fn get_string_id(&self, _: &str) -> i32 {
         unimplemented!()
     }
-    fn get_identifier_def(&self, id: StrRef) -> Option<DefinitionId> {
-        self.id_to_def.get(&id).cloned()
+    fn get_exception_id(&self, tyid: usize) -> usize {
+        unimplemented!()
     }
 }
@@ -56,7 +62,7 @@ struct TestEnvironment {
     pub primitives: PrimitiveStore,
     pub id_to_name: HashMap<usize, StrRef>,
     pub identifier_mapping: HashMap<StrRef, Type>,
-    pub virtual_checks: Vec<(Type, Type)>,
+    pub virtual_checks: Vec<(Type, Type, nac3parser::ast::Location)>,
     pub calls: HashMap<CodeLocation, CallId>,
     pub top_level: TopLevelContext,
 }
@@ -67,51 +73,63 @@ impl TestEnvironment {
 let int32 = unifier.add_ty(TypeEnum::TObj {
     obj_id: DefinitionId(0),
-    fields: HashMap::new().into(),
-    params: HashMap::new().into(),
-});
-if let TypeEnum::TObj { fields, .. } = &*unifier.get_ty(int32) {
-    let add_ty = unifier.add_ty(TypeEnum::TFunc(
-        FunSignature {
-            args: vec![FuncArg { name: "other".into(), ty: int32, default_value: None }],
-            ret: int32,
-            vars: HashMap::new(),
-        }
-        .into(),
-    ));
-    fields.borrow_mut().insert("__add__".into(), (add_ty, false));
-}
+    fields: HashMap::new(),
+    params: HashMap::new(),
+});
+with_fields(&mut unifier, int32, |unifier, fields| {
+    let add_ty = unifier.add_ty(TypeEnum::TFunc(FunSignature {
+        args: vec![FuncArg { name: "other".into(), ty: int32, default_value: None }],
+        ret: int32,
+        vars: HashMap::new(),
+    }));
+    fields.insert("__add__".into(), (add_ty, false));
+});
 let int64 = unifier.add_ty(TypeEnum::TObj {
     obj_id: DefinitionId(1),
-    fields: HashMap::new().into(),
-    params: HashMap::new().into(),
+    fields: HashMap::new(),
+    params: HashMap::new(),
 });
 let float = unifier.add_ty(TypeEnum::TObj {
     obj_id: DefinitionId(2),
-    fields: HashMap::new().into(),
-    params: HashMap::new().into(),
+    fields: HashMap::new(),
+    params: HashMap::new(),
 });
 let bool = unifier.add_ty(TypeEnum::TObj {
     obj_id: DefinitionId(3),
-    fields: HashMap::new().into(),
-    params: HashMap::new().into(),
+    fields: HashMap::new(),
+    params: HashMap::new(),
 });
 let none = unifier.add_ty(TypeEnum::TObj {
     obj_id: DefinitionId(4),
-    fields: HashMap::new().into(),
-    params: HashMap::new().into(),
+    fields: HashMap::new(),
+    params: HashMap::new(),
 });
 let range = unifier.add_ty(TypeEnum::TObj {
     obj_id: DefinitionId(5),
-    fields: HashMap::new().into(),
-    params: HashMap::new().into(),
+    fields: HashMap::new(),
+    params: HashMap::new(),
 });
 let str = unifier.add_ty(TypeEnum::TObj {
     obj_id: DefinitionId(6),
-    fields: HashMap::new().into(),
-    params: HashMap::new().into(),
+    fields: HashMap::new(),
+    params: HashMap::new(),
 });
-let primitives = PrimitiveStore { int32, int64, float, bool, none, range, str };
+let exception = unifier.add_ty(TypeEnum::TObj {
+    obj_id: DefinitionId(7),
+    fields: HashMap::new(),
+    params: HashMap::new(),
+});
+let uint32 = unifier.add_ty(TypeEnum::TObj {
+    obj_id: DefinitionId(8),
+    fields: HashMap::new(),
+    params: HashMap::new(),
+});
+let uint64 = unifier.add_ty(TypeEnum::TObj {
+    obj_id: DefinitionId(9),
+    fields: HashMap::new(),
+    params: HashMap::new(),
+});
+let primitives = PrimitiveStore { int32, int64, float, bool, none, range, str, exception, uint32, uint64 };
 set_primitives_magic_methods(&primitives, &mut unifier);
 let id_to_name = [
@@ -122,6 +140,7 @@ impl TestEnvironment {
     (4, "none".into()),
     (5, "range".into()),
     (6, "str".into()),
+    (7, "exception".into()),
 ]
 .iter()
 .cloned()
@@ -162,53 +181,66 @@ impl TestEnvironment {
 let mut top_level_defs: Vec<Arc<RwLock<TopLevelDef>>> = Vec::new();
 let int32 = unifier.add_ty(TypeEnum::TObj {
     obj_id: DefinitionId(0),
-    fields: HashMap::new().into(),
-    params: HashMap::new().into(),
-});
-if let TypeEnum::TObj { fields, .. } = &*unifier.get_ty(int32) {
-    let add_ty = unifier.add_ty(TypeEnum::TFunc(
-        FunSignature {
-            args: vec![FuncArg { name: "other".into(), ty: int32, default_value: None }],
-            ret: int32,
-            vars: HashMap::new(),
-        }
-        .into(),
-    ));
-    fields.borrow_mut().insert("__add__".into(), (add_ty, false));
-}
+    fields: HashMap::new(),
+    params: HashMap::new(),
+});
+with_fields(&mut unifier, int32, |unifier, fields| {
+    let add_ty = unifier.add_ty(TypeEnum::TFunc(FunSignature {
+        args: vec![FuncArg { name: "other".into(), ty: int32, default_value: None }],
+        ret: int32,
+        vars: HashMap::new(),
+    }));
+    fields.insert("__add__".into(), (add_ty, false));
+});
 let int64 = unifier.add_ty(TypeEnum::TObj {
     obj_id: DefinitionId(1),
-    fields: HashMap::new().into(),
-    params: HashMap::new().into(),
+    fields: HashMap::new(),
+    params: HashMap::new(),
 });
 let float = unifier.add_ty(TypeEnum::TObj {
     obj_id: DefinitionId(2),
-    fields: HashMap::new().into(),
-    params: HashMap::new().into(),
+    fields: HashMap::new(),
+    params: HashMap::new(),
 });
 let bool = unifier.add_ty(TypeEnum::TObj {
     obj_id: DefinitionId(3),
-    fields: HashMap::new().into(),
-    params: HashMap::new().into(),
+    fields: HashMap::new(),
+    params: HashMap::new(),
 });
 let none = unifier.add_ty(TypeEnum::TObj {
     obj_id: DefinitionId(4),
-    fields: HashMap::new().into(),
-    params: HashMap::new().into(),
+    fields: HashMap::new(),
+    params: HashMap::new(),
 });
 let range = unifier.add_ty(TypeEnum::TObj {
     obj_id: DefinitionId(5),
-    fields: HashMap::new().into(),
-    params: HashMap::new().into(),
+    fields: HashMap::new(),
+    params: HashMap::new(),
 });
 let str = unifier.add_ty(TypeEnum::TObj {
     obj_id: DefinitionId(6),
-    fields: HashMap::new().into(),
-    params: HashMap::new().into(),
+    fields: HashMap::new(),
+    params: HashMap::new(),
 });
+let exception = unifier.add_ty(TypeEnum::TObj {
+    obj_id: DefinitionId(7),
+    fields: HashMap::new(),
+    params: HashMap::new(),
+});
+let uint32 = unifier.add_ty(TypeEnum::TObj {
+    obj_id: DefinitionId(8),
+    fields: HashMap::new(),
+    params: HashMap::new(),
+});
+let uint64 = unifier.add_ty(TypeEnum::TObj {
+    obj_id: DefinitionId(9),
+    fields: HashMap::new(),
+    params: HashMap::new(),
+});
 identifier_mapping.insert("None".into(), none);
-for (i, name) in
-    ["int32", "int64", "float", "bool", "none", "range", "str"].iter().enumerate()
+for (i, name) in ["int32", "int64", "float", "bool", "none", "range", "str", "Exception"]
+    .iter()
+    .enumerate()
 {
     top_level_defs.push(
         RwLock::new(TopLevelDef::Class {
@@ -220,122 +252,130 @@ impl TestEnvironment {
             ancestors: Default::default(),
             resolver: None,
             constructor: None,
+            loc: None,
         })
         .into(),
     );
 }
-let primitives = PrimitiveStore { int32, int64, float, bool, none, range, str };
+let defs = 7;
+let primitives = PrimitiveStore { int32, int64, float, bool, none, range, str, exception, uint32, uint64 };
-let (v0, id) = unifier.get_fresh_var();
+let (v0, id) = unifier.get_dummy_var();
 let foo_ty = unifier.add_ty(TypeEnum::TObj {
-    obj_id: DefinitionId(7),
-    fields: [("a".into(), (v0, true))].iter().cloned().collect::<HashMap<_, _>>().into(),
-    params: [(id, v0)].iter().cloned().collect::<HashMap<_, _>>().into(),
+    obj_id: DefinitionId(defs + 1),
+    fields: [("a".into(), (v0, true))].iter().cloned().collect::<HashMap<_, _>>(),
+    params: [(id, v0)].iter().cloned().collect::<HashMap<_, _>>(),
 });
 top_level_defs.push(
     RwLock::new(TopLevelDef::Class {
         name: "Foo".into(),
-        object_id: DefinitionId(7),
+        object_id: DefinitionId(defs + 1),
         type_vars: vec![v0],
         fields: [("a".into(), v0, true)].into(),
         methods: Default::default(),
         ancestors: Default::default(),
         resolver: None,
         constructor: None,
+        loc: None,
     })
     .into(),
 );
 identifier_mapping.insert(
     "Foo".into(),
-    unifier.add_ty(TypeEnum::TFunc(
-        FunSignature {
-            args: vec![],
-            ret: foo_ty,
-            vars: [(id, v0)].iter().cloned().collect(),
-        }
-        .into(),
-    )),
+    unifier.add_ty(TypeEnum::TFunc(FunSignature {
+        args: vec![],
+        ret: foo_ty,
+        vars: [(id, v0)].iter().cloned().collect(),
+    })),
 );
-let fun = unifier.add_ty(TypeEnum::TFunc(
-    FunSignature { args: vec![], ret: int32, vars: Default::default() }.into(),
-));
+let fun = unifier.add_ty(TypeEnum::TFunc(FunSignature {
+    args: vec![],
+    ret: int32,
+    vars: Default::default(),
+}));
 let bar = unifier.add_ty(TypeEnum::TObj {
-    obj_id: DefinitionId(8),
+    obj_id: DefinitionId(defs + 2),
     fields: [("a".into(), (int32, true)), ("b".into(), (fun, true))]
         .iter()
         .cloned()
-        .collect::<HashMap<_, _>>()
-        .into(),
+        .collect::<HashMap<_, _>>(),
     params: Default::default(),
 });
 top_level_defs.push(
     RwLock::new(TopLevelDef::Class {
         name: "Bar".into(),
-        object_id: DefinitionId(8),
+        object_id: DefinitionId(defs + 2),
         type_vars: Default::default(),
         fields: [("a".into(), int32, true), ("b".into(), fun, true)].into(),
         methods: Default::default(),
         ancestors: Default::default(),
         resolver: None,
         constructor: None,
+        loc: None,
     })
     .into(),
 );
 identifier_mapping.insert(
     "Bar".into(),
-    unifier.add_ty(TypeEnum::TFunc(
-        FunSignature { args: vec![], ret: bar, vars: Default::default() }.into(),
-    )),
+    unifier.add_ty(TypeEnum::TFunc(FunSignature {
+        args: vec![],
+        ret: bar,
+        vars: Default::default(),
+    })),
 );
 let bar2 = unifier.add_ty(TypeEnum::TObj {
-    obj_id: DefinitionId(9),
+    obj_id: DefinitionId(defs + 3),
     fields: [("a".into(), (bool, true)), ("b".into(), (fun, false))]
         .iter()
         .cloned()
-        .collect::<HashMap<_, _>>()
-        .into(),
+        .collect::<HashMap<_, _>>(),
     params: Default::default(),
 });
 top_level_defs.push(
     RwLock::new(TopLevelDef::Class {
         name: "Bar2".into(),
-        object_id: DefinitionId(9),
+        object_id: DefinitionId(defs + 3),
         type_vars: Default::default(),
         fields: [("a".into(), bool, true), ("b".into(), fun, false)].into(),
         methods: Default::default(),
         ancestors: Default::default(),
         resolver: None,
         constructor: None,
+        loc: None,
     })
     .into(),
 );
 identifier_mapping.insert(
     "Bar2".into(),
-    unifier.add_ty(TypeEnum::TFunc(
-        FunSignature { args: vec![], ret: bar2, vars: Default::default() }.into(),
-    )),
+    unifier.add_ty(TypeEnum::TFunc(FunSignature {
+        args: vec![],
+        ret: bar2,
+        vars: Default::default(),
+    })),
 );
 let class_names = [("Bar".into(), bar), ("Bar2".into(), bar2)].iter().cloned().collect();
 let id_to_name = [
-    (0, "int32".into()),
-    (1, "int64".into()),
-    (2, "float".into()),
-    (3, "bool".into()),
-    (4, "none".into()),
-    (5, "range".into()),
-    (6, "str".into()),
-    (7, "Foo".into()),
-    (8, "Bar".into()),
-    (9, "Bar2".into()),
+    "int32".into(),
+    "int64".into(),
+    "float".into(),
+    "bool".into(),
+    "none".into(),
+    "range".into(),
+    "str".into(),
+    "exception".into(),
+    "Foo".into(),
+    "Bar".into(),
+    "Bar2".into(),
 ]
 .iter()
-.cloned()
+.enumerate()
+.map(|(a, b)| (a, *b))
 .collect();
 let top_level = TopLevelContext {
@@ -347,9 +387,9 @@ impl TestEnvironment {
 let resolver = Arc::new(Resolver {
     id_to_type: identifier_mapping.clone(),
     id_to_def: [
-        ("Foo".into(), DefinitionId(7)),
-        ("Bar".into(), DefinitionId(8)),
-        ("Bar2".into(), DefinitionId(9)),
+        ("Foo".into(), DefinitionId(defs + 1)),
+        ("Bar".into(), DefinitionId(defs + 2)),
+        ("Bar2".into(), DefinitionId(defs + 3)),
     ]
     .iter()
    .cloned()
@@ -383,6 +423,7 @@ impl TestEnvironment {
     virtual_checks: &mut self.virtual_checks,
     calls: &mut self.calls,
     defined_identifiers: Default::default(),
+    in_handler: false,
 }
 }
 }
@@ -402,7 +443,7 @@ impl TestEnvironment {
     c = 1.234
     d = b(c)
 "},
-[("a", "fn[[x=float, y=float], float]"), ("b", "fn[[x=float], float]"), ("c", "float"), ("d", "float")].iter().cloned().collect(),
+[("a", "fn[[x:float, y:float], float]"), ("b", "fn[[x:float], float]"), ("c", "float"), ("d", "float")].iter().cloned().collect(),
 &[]
 ; "lambda test")]
 #[test_case(indoc! {"
@@ -411,7 +452,7 @@ impl TestEnvironment {
     a = b
     c = b(1)
 "},
-[("a", "fn[[x=int32], int32]"), ("b", "fn[[x=int32], int32]"), ("c", "int32")].iter().cloned().collect(),
+[("a", "fn[[x:int32], int32]"), ("b", "fn[[x:int32], int32]"), ("c", "int32")].iter().cloned().collect(),
 &[]
 ; "lambda test 2")]
 #[test_case(indoc! {"
@@ -427,8 +468,8 @@ impl TestEnvironment {
     b(123)
 "},
-[("a", "fn[[x=bool], bool]"), ("b", "fn[[x=int32], int32]"), ("c", "bool"),
-    ("d", "int32"), ("foo1", "Foo[1->bool]"), ("foo2", "Foo[1->int32]")].iter().cloned().collect(),
+[("a", "fn[[x:bool], bool]"), ("b", "fn[[x:int32], int32]"), ("c", "bool"),
+    ("d", "int32"), ("foo1", "Foo[bool]"), ("foo2", "Foo[int32]")].iter().cloned().collect(),
 &[]
 ; "obj test")]
 #[test_case(indoc! {"
@@ -461,7 +502,7 @@ fn test_basic(source: &str, mapping: HashMap<&str, &str>, virtuals: &[(&str, &str)]) {
 defined_identifiers.insert("virtual".into());
 let mut inferencer = env.get_inferencer();
 inferencer.defined_identifiers = defined_identifiers.clone();
-let statements = parse_program(source).unwrap();
+let statements = parse_program(source, Default::default()).unwrap();
 let statements = statements
     .into_iter()
     .map(|v| inferencer.fold_stmt(v))
@@ -471,33 +512,37 @@ fn test_basic(source: &str, mapping: HashMap<&str, &str>, virtuals: &[(&str, &str)]) {
 inferencer.check_block(&statements, &mut defined_identifiers).unwrap();
 for (k, v) in inferencer.variable_mapping.iter() {
-    let name = inferencer.unifier.stringify(
+    let name = inferencer.unifier.internal_stringify(
         *v,
         &mut |v| (*id_to_name.get(&v).unwrap()).into(),
         &mut |v| format!("v{}", v),
+        &mut None,
     );
     println!("{}: {}", k, name);
 }
 for (k, v) in mapping.iter() {
     let ty = inferencer.variable_mapping.get(&(*k).into()).unwrap();
-    let name = inferencer.unifier.stringify(
+    let name = inferencer.unifier.internal_stringify(
         *ty,
         &mut |v| (*id_to_name.get(&v).unwrap()).into(),
         &mut |v| format!("v{}", v),
+        &mut None,
     );
     assert_eq!(format!("{}: {}", k, v), format!("{}: {}", k, name));
 }
 assert_eq!(inferencer.virtual_checks.len(), virtuals.len());
-for ((a, b), (x, y)) in zip(inferencer.virtual_checks.iter(), virtuals) {
-    let a = inferencer.unifier.stringify(
+for ((a, b, _), (x, y)) in zip(inferencer.virtual_checks.iter(), virtuals) {
+    let a = inferencer.unifier.internal_stringify(
         *a,
         &mut |v| (*id_to_name.get(&v).unwrap()).into(),
         &mut |v| format!("v{}", v),
+        &mut None,
     );
-    let b = inferencer.unifier.stringify(
+    let b = inferencer.unifier.internal_stringify(
         *b,
         &mut |v| (*id_to_name.get(&v).unwrap()).into(),
         &mut |v| format!("v{}", v),
+        &mut None,
     );
     assert_eq!(&a, x);
@@ -603,7 +648,7 @@ fn test_primitive_magic_methods(source: &str, mapping: HashMap<&str, &str>) {
 defined_identifiers.insert("virtual".into());
 let mut inferencer = env.get_inferencer();
 inferencer.defined_identifiers = defined_identifiers.clone();
-let statements = parse_program(source).unwrap();
+let statements = parse_program(source, Default::default()).unwrap();
 let statements = statements
     .into_iter()
     .map(|v| inferencer.fold_stmt(v))
@@ -613,19 +658,21 @@ fn test_primitive_magic_methods(source: &str, mapping: HashMap<&str, &str>) {
 inferencer.check_block(&statements, &mut defined_identifiers).unwrap();
 for (k, v) in inferencer.variable_mapping.iter() {
-    let name = inferencer.unifier.stringify(
+    let name = inferencer.unifier.internal_stringify(
         *v,
         &mut |v| (*id_to_name.get(&v).unwrap()).into(),
         &mut |v| format!("v{}", v),
+        &mut None,
     );
     println!("{}: {}", k, name);
 }
 for (k, v) in mapping.iter() {
     let ty = inferencer.variable_mapping.get(&(*k).into()).unwrap();
-    let name = inferencer.unifier.stringify(
+    let name = inferencer.unifier.internal_stringify(
         *ty,
         &mut |v| (*id_to_name.get(&v).unwrap()).into(),
         &mut |v| format!("v{}", v),
+        &mut None,
     );
     assert_eq!(format!("{}: {}", k, v), format!("{}: {}", k, name));
 }

File diff suppressed because it is too large

View File

@@ -1,3 +1,4 @@
+use super::super::magic_methods::with_fields;
 use super::*;
 use indoc::indoc;
 use itertools::Itertools;
@@ -7,7 +8,6 @@ use test_case::test_case;
 impl Unifier {
     /// Check whether two types are equal.
     fn eq(&mut self, a: Type, b: Type) -> bool {
-        use TypeVarMeta::*;
         if a == b {
             return true;
         }
@@ -21,13 +21,13 @@ impl Unifier {
         match (&*ty_a, &*ty_b) {
             (
-                TypeEnum::TVar { meta: Generic, id: id1, .. },
-                TypeEnum::TVar { meta: Generic, id: id2, .. },
+                TypeEnum::TVar { fields: None, id: id1, .. },
+                TypeEnum::TVar { fields: None, id: id2, .. },
             ) => id1 == id2,
             (
-                TypeEnum::TVar { meta: Sequence(map1), .. },
-                TypeEnum::TVar { meta: Sequence(map2), .. },
-            ) => self.map_eq(&map1.borrow(), &map2.borrow()),
+                TypeEnum::TVar { fields: Some(map1), .. },
+                TypeEnum::TVar { fields: Some(map2), .. },
+            ) => self.map_eq2(map1, map2),
             (TypeEnum::TTuple { ty: ty1 }, TypeEnum::TTuple { ty: ty2 }) => {
                 ty1.len() == ty2.len()
                     && ty1.iter().zip(ty2.iter()).all(|(t1, t2)| self.eq(*t1, *t2))
@@ -36,14 +36,10 @@ impl Unifier {
             | (TypeEnum::TVirtual { ty: ty1 }, TypeEnum::TVirtual { ty: ty2 }) => {
                 self.eq(*ty1, *ty2)
             }
-            (
-                TypeEnum::TVar { meta: Record(fields1), .. },
-                TypeEnum::TVar { meta: Record(fields2), .. },
-            ) => self.map_eq2(&fields1.borrow(), &fields2.borrow()),
             (
                 TypeEnum::TObj { obj_id: id1, params: params1, .. },
                 TypeEnum::TObj { obj_id: id2, params: params2, .. },
-            ) => id1 == id2 && self.map_eq(&params1.borrow(), &params2.borrow()),
+            ) => id1 == id2 && self.map_eq(params1, params2),
             // TCall and TFunc are not yet implemented
             _ => false,
         }
@@ -64,19 +60,15 @@ impl Unifier {
         true
     }
-    fn map_eq2<K>(
-        &mut self,
-        map1: &Mapping<K, (Type, bool)>,
-        map2: &Mapping<K, (Type, bool)>,
-    ) -> bool
+    fn map_eq2<K>(&mut self, map1: &Mapping<K, RecordField>, map2: &Mapping<K, RecordField>) -> bool
     where
         K: std::hash::Hash + std::cmp::Eq + std::clone::Clone,
     {
         if map1.len() != map2.len() {
             return false;
         }
-        for (k, (ty1, m1)) in map1.iter() {
-            if !map2.get(k).map(|(ty2, m2)| m1 == m2 && self.eq(*ty1, *ty2)).unwrap_or(false) {
+        for (k, v) in map1.iter() {
+            if !map2.get(k).map(|v1| self.eq(v.ty, v1.ty)).unwrap_or(false) {
                 return false;
             }
         }
@@ -98,37 +90,33 @@ impl TestEnvironment {
             "int".into(),
             unifier.add_ty(TypeEnum::TObj {
                 obj_id: DefinitionId(0),
-                fields: HashMap::new().into(),
-                params: HashMap::new().into(),
+                fields: HashMap::new(),
+                params: HashMap::new(),
             }),
         );
         type_mapping.insert(
             "float".into(),
             unifier.add_ty(TypeEnum::TObj {
                 obj_id: DefinitionId(1),
-                fields: HashMap::new().into(),
-                params: HashMap::new().into(),
+                fields: HashMap::new(),
+                params: HashMap::new(),
             }),
         );
         type_mapping.insert(
             "bool".into(),
             unifier.add_ty(TypeEnum::TObj {
                 obj_id: DefinitionId(2),
-                fields: HashMap::new().into(),
-                params: HashMap::new().into(),
+                fields: HashMap::new(),
+                params: HashMap::new(),
             }),
         );
-        let (v0, id) = unifier.get_fresh_var();
+        let (v0, id) = unifier.get_dummy_var();
         type_mapping.insert(
             "Foo".into(),
             unifier.add_ty(TypeEnum::TObj {
                 obj_id: DefinitionId(3),
-                fields: [("a".into(), (v0, true))]
-                    .iter()
-                    .cloned()
-                    .collect::<HashMap<_, _>>()
-                    .into(),
-                params: [(id, v0)].iter().cloned().collect::<HashMap<_, _>>().into(),
+                fields: [("a".into(), (v0, true))].iter().cloned().collect::<HashMap<_, _>>(),
+                params: [(id, v0)].iter().cloned().collect::<HashMap<_, _>>(),
             }),
         );
@@ -174,7 +162,7 @@ impl TestEnvironment {
             let eq = s.find('=').unwrap();
             let key = s[1..eq].into();
             let result = self.internal_parse(&s[eq + 1..], mapping);
-            fields.insert(key, (result.0, true));
+            fields.insert(key, RecordField::new(result.0, true, None));
             s = result.1;
         }
         (self.unifier.add_record(fields), &s[1..])
@@ -187,7 +175,6 @@ impl TestEnvironment {
         let mut ty = *self.type_mapping.get(x).unwrap();
         let te = self.unifier.get_ty(ty);
         if let TypeEnum::TObj { params, .. } = &*te.as_ref() {
-            let params = params.borrow();
             if !params.is_empty() {
                 assert!(&s[0..1] == "[");
                 let mut p = Vec::new();
@@ -209,6 +196,10 @@ impl TestEnvironment {
             }
         }
     }
+
+    fn unify(&mut self, typ1: Type, typ2: Type) -> Result<(), String> {
+        self.unifier.unify(typ1, typ2).map_err(|e| e.to_display(&self.unifier).to_string())
+    }
 }
 #[test_case(2,
@@ -258,7 +249,7 @@ fn test_unify(
     let mut env = TestEnvironment::new();
     let mut mapping = HashMap::new();
     for i in 1..=variable_count {
-        let v = env.unifier.get_fresh_var();
+        let v = env.unifier.get_dummy_var();
         mapping.insert(format!("v{}", i), v.0);
     }
     // unification may have side effect when we do type resolution, so freeze the types
@@ -276,6 +267,7 @@ fn test_unify(
         println!("{} = {}", a, b);
         let t1 = env.parse(a, &mapping);
         let t2 = env.parse(b, &mapping);
+        println!("a = {}, b = {}", env.unifier.stringify(t1), env.unifier.stringify(t2));
         assert!(env.unifier.eq(t1, t2));
     }
 }
@@ -286,7 +278,7 @@ fn test_unify(
         ("v1", "tuple[int]"),
         ("v2", "list[int]"),
     ],
-    (("v1", "v2"), "Cannot unify list[0] with tuple[0]")
+    (("v1", "v2"), "Incompatible types: list[0] and tuple[0]")
     ; "type mismatch"
 )]
 #[test_case(2,
@@ -294,7 +286,7 @@ fn test_unify(
         ("v1", "tuple[int]"),
         ("v2", "tuple[float]"),
     ],
-    (("v1", "v2"), "Cannot unify 0 with 1")
+    (("v1", "v2"), "Incompatible types: 0 and 1")
     ; "tuple parameter mismatch"
 )]
 #[test_case(2,
@@ -302,7 +294,7 @@ fn test_unify(
         ("v1", "tuple[int,int]"),
         ("v2", "tuple[int]"),
     ],
-    (("v1", "v2"), "Cannot unify tuples with length 2 and 1")
+    (("v1", "v2"), "Tuple length mismatch: got tuple[0, 0] and tuple[0]")
     ; "tuple length mismatch"
 )]
 #[test_case(3,
@@ -310,7 +302,7 @@ fn test_unify(
         ("v1", "Record[a=float,b=int]"),
         ("v2", "Foo[v3]"),
     ],
-    (("v1", "v2"), "No such attribute b")
+    (("v1", "v2"), "`3[var4]::b` field/method does not exist")
     ; "record obj merge"
 )]
 /// Test cases for invalid unifications.
@@ -322,7 +314,7 @@ fn test_invalid_unification(
     let mut env = TestEnvironment::new();
     let mut mapping = HashMap::new();
     for i in 1..=variable_count {
-        let v = env.unifier.get_fresh_var();
+        let v = env.unifier.get_dummy_var();
         mapping.insert(format!("v{}", i), v.0);
     }
     // unification may have side effect when we do type resolution, so freeze the types
@@ -338,7 +330,7 @@ fn test_invalid_unification(
     for (a, b) in pairs {
         env.unifier.unify(a, b).unwrap();
     }
-    assert_eq!(env.unifier.unify(t1, t2), Err(errornous_pair.1.to_string()));
+    assert_eq!(env.unify(t1, t2), Err(errornous_pair.1.to_string()));
 }
 #[test]
@@ -348,16 +340,17 @@ fn test_recursive_subst() {
     let foo_id = *env.type_mapping.get("Foo").unwrap();
     let foo_ty = env.unifier.get_ty(foo_id);
     let mapping: HashMap<_, _>;
-    if let TypeEnum::TObj { fields, params, .. } = &*foo_ty {
-        fields.borrow_mut().insert("rec".into(), (foo_id, true));
-        mapping = params.borrow().iter().map(|(id, _)| (*id, int)).collect();
+    with_fields(&mut env.unifier, foo_id, |_unifier, fields| {
+        fields.insert("rec".into(), (foo_id, true));
+    });
+    if let TypeEnum::TObj { params, .. } = &*foo_ty {
+        mapping = params.iter().map(|(id, _)| (*id, int)).collect();
     } else {
         unreachable!()
     }
     let instantiated = env.unifier.subst(foo_id, &mapping).unwrap();
     let instantiated_ty = env.unifier.get_ty(instantiated);
     if let TypeEnum::TObj { fields, .. } = &*instantiated_ty {
-        let fields = fields.borrow();
         assert!(env.unifier.unioned(fields.get(&"a".into()).unwrap().0, int));
         assert!(env.unifier.unioned(fields.get(&"rec".into()).unwrap().0, instantiated));
     } else {
@@ -369,33 +362,40 @@ fn test_recursive_subst() {
 fn test_virtual() {
     let mut env = TestEnvironment::new();
     let int = env.parse("int", &HashMap::new());
-    let fun = env.unifier.add_ty(TypeEnum::TFunc(
-        FunSignature { args: vec![], ret: int, vars: HashMap::new() }.into(),
-    ));
+    let fun = env.unifier.add_ty(TypeEnum::TFunc(FunSignature {
+        args: vec![],
+        ret: int,
+        vars: HashMap::new(),
+    }));
     let bar = env.unifier.add_ty(TypeEnum::TObj {
         obj_id: DefinitionId(5),
         fields: [("f".into(), (fun, false)), ("a".into(), (int, false))]
             .iter()
             .cloned()
-            .collect::<HashMap<StrRef, _>>()
-            .into(),
-        params: HashMap::new().into(),
+            .collect::<HashMap<StrRef, _>>(),
+        params: HashMap::new(),
     });
-    let v0 = env.unifier.get_fresh_var().0;
-    let v1 = env.unifier.get_fresh_var().0;
+    let v0 = env.unifier.get_dummy_var().0;
+    let v1 = env.unifier.get_dummy_var().0;
     let a = env.unifier.add_ty(TypeEnum::TVirtual { ty: bar });
     let b = env.unifier.add_ty(TypeEnum::TVirtual { ty: v0 });
-    let c = env.unifier.add_record([("f".into(), (v1, false))].iter().cloned().collect());
+    let c = env
+        .unifier
+        .add_record([("f".into(), RecordField::new(v1, false, None))].iter().cloned().collect());
     env.unifier.unify(a, b).unwrap();
     env.unifier.unify(b, c).unwrap();
     assert!(env.unifier.eq(v1, fun));
-    let d = env.unifier.add_record([("a".into(), (v1, true))].iter().cloned().collect());
-    assert_eq!(env.unifier.unify(b, d), Err("Cannot access field a for virtual type".to_string()));
+    let d = env
+        .unifier
+        .add_record([("a".into(), RecordField::new(v1, true, None))].iter().cloned().collect());
+    assert_eq!(env.unify(b, d), Err("`virtual[5]::a` field/method does not exist".to_string()));
-    let d = env.unifier.add_record([("b".into(), (v1, true))].iter().cloned().collect());
-    assert_eq!(env.unifier.unify(b, d), Err("No such attribute b".to_string()));
+    let d = env
+        .unifier
+        .add_record([("b".into(), RecordField::new(v1, true, None))].iter().cloned().collect());
+    assert_eq!(env.unify(b, d), Err("`virtual[5]::b` field/method does not exist".to_string()));
 }
 #[test]
@@ -409,107 +409,104 @@ fn test_typevar_range() {
     // unification between v and int
     // where v in (int, bool)
-    let v = env.unifier.get_fresh_var_with_range(&[int, boolean]).0;
+    let v = env.unifier.get_fresh_var_with_range(&[int, boolean], None, None).0;
     env.unifier.unify(int, v).unwrap();
     // unification between v and list[int]
     // where v in (int, bool)
-    let v = env.unifier.get_fresh_var_with_range(&[int, boolean]).0;
+    let v = env.unifier.get_fresh_var_with_range(&[int, boolean], None, None).0;
     assert_eq!(
-        env.unifier.unify(int_list, v),
-        Err("Cannot unify variable 3 with list[0] due to incompatible value range".to_string())
+        env.unify(int_list, v),
+        Err("Expected any one of these types: 0, 2, but got list[0]".to_string())
     );
     // unification between v and float
     // where v in (int, bool)
-    let v = env.unifier.get_fresh_var_with_range(&[int, boolean]).0;
+    let v = env.unifier.get_fresh_var_with_range(&[int, boolean], None, None).0;
     assert_eq!(
-        env.unifier.unify(float, v),
-        Err("Cannot unify variable 4 with 1 due to incompatible value range".to_string())
+        env.unify(float, v),
+        Err("Expected any one of these types: 0, 2, but got 1".to_string())
     );
-    let v1 = env.unifier.get_fresh_var_with_range(&[int, boolean]).0;
+    let v1 = env.unifier.get_fresh_var_with_range(&[int, boolean], None, None).0;
     let v1_list = env.unifier.add_ty(TypeEnum::TList { ty: v1 });
-    let v = env.unifier.get_fresh_var_with_range(&[int, v1_list]).0;
+    let v = env.unifier.get_fresh_var_with_range(&[int, v1_list], None, None).0;
     // unification between v and int
     // where v in (int, list[v1]), v1 in (int, bool)
     env.unifier.unify(int, v).unwrap();
-    let v = env.unifier.get_fresh_var_with_range(&[int, v1_list]).0;
+    let v = env.unifier.get_fresh_var_with_range(&[int, v1_list], None, None).0;
     // unification between v and list[int]
     // where v in (int, list[v1]), v1 in (int, bool)
     env.unifier.unify(int_list, v).unwrap();
-    let v = env.unifier.get_fresh_var_with_range(&[int, v1_list]).0;
+    let v = env.unifier.get_fresh_var_with_range(&[int, v1_list], None, None).0;
     // unification between v and list[float]
     // where v in (int, list[v1]), v1 in (int, bool)
     assert_eq!(
-        env.unifier.unify(float_list, v),
-        Err("Cannot unify variable 8 with list[1] due to incompatible value range".to_string())
+        env.unify(float_list, v),
+        Err("Expected any one of these types: 0, list[var5], but got list[1]\n\nNotes:\n    var5 ∈ {0, 2}".to_string())
     );
-    let a = env.unifier.get_fresh_var_with_range(&[int, float]).0;
-    let b = env.unifier.get_fresh_var_with_range(&[boolean, float]).0;
+    let a = env.unifier.get_fresh_var_with_range(&[int, float], None, None).0;
+    let b = env.unifier.get_fresh_var_with_range(&[boolean, float], None, None).0;
     env.unifier.unify(a, b).unwrap();
     env.unifier.unify(a, float).unwrap();
-    let a = env.unifier.get_fresh_var_with_range(&[int, float]).0;
-    let b = env.unifier.get_fresh_var_with_range(&[boolean, float]).0;
+    let a = env.unifier.get_fresh_var_with_range(&[int, float], None, None).0;
+    let b = env.unifier.get_fresh_var_with_range(&[boolean, float], None, None).0;
     env.unifier.unify(a, b).unwrap();
-    assert_eq!(
-        env.unifier.unify(a, int),
-        Err("Cannot unify variable 12 with 0 due to incompatible value range".into())
-    );
+    assert_eq!(env.unify(a, int), Err("Expected any one of these types: 1, but got 0".into()));
-    let a = env.unifier.get_fresh_var_with_range(&[int, float]).0;
-    let b = env.unifier.get_fresh_var_with_range(&[boolean, float]).0;
+    let a = env.unifier.get_fresh_var_with_range(&[int, float], None, None).0;
+    let b = env.unifier.get_fresh_var_with_range(&[boolean, float], None, None).0;
     let a_list = env.unifier.add_ty(TypeEnum::TList { ty: a });
-    let a_list = env.unifier.get_fresh_var_with_range(&[a_list]).0;
+    let a_list = env.unifier.get_fresh_var_with_range(&[a_list], None, None).0;
     let b_list = env.unifier.add_ty(TypeEnum::TList { ty: b });
-    let b_list = env.unifier.get_fresh_var_with_range(&[b_list]).0;
+    let b_list = env.unifier.get_fresh_var_with_range(&[b_list], None, None).0;
     env.unifier.unify(a_list, b_list).unwrap();
     let float_list = env.unifier.add_ty(TypeEnum::TList { ty: float });
     env.unifier.unify(a_list, float_list).unwrap();
     // previous unifications should not affect a and b
     env.unifier.unify(a, int).unwrap();
-    let a = env.unifier.get_fresh_var_with_range(&[int, float]).0;
-    let b = env.unifier.get_fresh_var_with_range(&[boolean, float]).0;
+    let a = env.unifier.get_fresh_var_with_range(&[int, float], None, None).0;
+    let b = env.unifier.get_fresh_var_with_range(&[boolean, float], None, None).0;
     let a_list = env.unifier.add_ty(TypeEnum::TList { ty: a });
     let b_list = env.unifier.add_ty(TypeEnum::TList { ty: b });
     env.unifier.unify(a_list, b_list).unwrap();
     let int_list = env.unifier.add_ty(TypeEnum::TList { ty: int });
     assert_eq!(
-        env.unifier.unify(a_list, int_list),
-        Err("Cannot unify variable 19 with 0 due to incompatible value range".into())
+        env.unify(a_list, int_list),
+        Err("Expected any one of these types: 1, but got 0".into())
    );
-    let a = env.unifier.get_fresh_var_with_range(&[int, float]).0;
-    let b = env.unifier.get_fresh_var().0;
+    let a = env.unifier.get_fresh_var_with_range(&[int, float], None, None).0;
+    let b = env.unifier.get_dummy_var().0;
     let a_list = env.unifier.add_ty(TypeEnum::TList { ty: a });
-    let a_list = env.unifier.get_fresh_var_with_range(&[a_list]).0;
+    let a_list = env.unifier.get_fresh_var_with_range(&[a_list], None, None).0;
     let b_list = env.unifier.add_ty(TypeEnum::TList { ty: b });
     env.unifier.unify(a_list, b_list).unwrap();
     assert_eq!(
-        env.unifier.unify(b, boolean),
-        Err("Cannot unify variable 21 with 2 due to incompatible value range".into())
+        env.unify(b, boolean),
+        Err("Expected any one of these types: 0, 1, but got 2".into())
     );
 }
 #[test]
 fn test_rigid_var() {
     let mut env = TestEnvironment::new();
-    let a = env.unifier.get_fresh_rigid_var().0;
-    let b = env.unifier.get_fresh_rigid_var().0;
-    let x = env.unifier.get_fresh_var().0;
+    let a = env.unifier.get_fresh_rigid_var(None, None).0;
+    let b = env.unifier.get_fresh_rigid_var(None, None).0;
+    let x = env.unifier.get_dummy_var().0;
     let list_a = env.unifier.add_ty(TypeEnum::TList { ty: a });
     let list_x = env.unifier.add_ty(TypeEnum::TList { ty: x });
     let int = env.parse("int", &HashMap::new());
     let list_int = env.parse("list[int]", &HashMap::new());
-    assert_eq!(env.unifier.unify(a, b), Err("Cannot unify var3 with var2".to_string()));
+    assert_eq!(env.unify(a, b), Err("Incompatible types: var3 and var2".to_string()));
     env.unifier.unify(list_a, list_x).unwrap();
-    assert_eq!(env.unifier.unify(list_x, list_int), Err("Cannot unify 0 with var2".to_string()));
+    assert_eq!(env.unify(list_x, list_int), Err("Incompatible types: 0 and var2".to_string()));
     env.unifier.replace_rigid_var(a, int);
     env.unifier.unify(list_x, list_int).unwrap();
@@ -526,13 +523,13 @@ fn test_instantiation() {
     let obj_map: HashMap<_, _> =
         [(0usize, "int"), (1, "float"), (2, "bool")].iter().cloned().collect();
-    let v = env.unifier.get_fresh_var_with_range(&[int, boolean]).0;
+    let v = env.unifier.get_fresh_var_with_range(&[int, boolean], None, None).0;
     let list_v = env.unifier.add_ty(TypeEnum::TList { ty: v });
-    let v1 = env.unifier.get_fresh_var_with_range(&[list_v, int]).0;
-    let v2 = env.unifier.get_fresh_var_with_range(&[list_int, float]).0;
-    let t = env.unifier.get_fresh_rigid_var().0;
+    let v1 = env.unifier.get_fresh_var_with_range(&[list_v, int], None, None).0;
+    let v2 = env.unifier.get_fresh_var_with_range(&[list_int, float], None, None).0;
+    let t = env.unifier.get_dummy_var().0;
     let tuple = env.unifier.add_ty(TypeEnum::TTuple { ty: vec![v, v1, v2] });
-    let v3 = env.unifier.get_fresh_var_with_range(&[tuple, t]).0;
+    let v3 = env.unifier.get_fresh_var_with_range(&[tuple, t], None, None).0;
     // t = TypeVar('t')
     // v = TypeVar('v', int, bool)
     // v1 = TypeVar('v1', 'list[v]', int)
@@ -561,9 +558,12 @@ fn test_instantiation() {
     let types = types
         .iter()
         .map(|ty| {
-            env.unifier.stringify(*ty, &mut |i| obj_map.get(&i).unwrap().to_string(), &mut |i| {
-                format!("v{}", i)
-            })
+            env.unifier.internal_stringify(
+                *ty,
+                &mut |i| obj_map.get(&i).unwrap().to_string(),
+                &mut |i| format!("v{}", i),
+                &mut None,
+            )
         })
         .sorted()
         .collect_vec();

View File

@@ -10,6 +10,27 @@ pub struct UnificationTable<V> {
     parents: Vec<usize>,
     ranks: Vec<u32>,
     values: Vec<Option<V>>,
+    log: Vec<Action<V>>,
+    generation: u32,
+}
+
+#[derive(Clone, Debug)]
+enum Action<V> {
+    Parent {
+        key: usize,
+        original_parent: usize,
+    },
+    Value {
+        key: usize,
+        original_value: Option<V>,
+    },
+    Rank {
+        key: usize,
+        original_rank: u32,
+    },
+    Marker {
+        generation: u32,
+    },
 }
 impl<V> Default for UnificationTable<V> {
@@ -20,7 +41,7 @@ impl<V> Default for UnificationTable<V> {
 impl<V> UnificationTable<V> {
     pub fn new() -> UnificationTable<V> {
-        UnificationTable { parents: Vec::new(), ranks: Vec::new(), values: Vec::new() }
+        UnificationTable { parents: Vec::new(), ranks: Vec::new(), values: Vec::new(), log: Vec::new(), generation: 0 }
     }
     pub fn new_key(&mut self, v: V) -> UnificationKey {
@@ -40,12 +61,25 @@ impl<V> UnificationTable<V> {
         if self.ranks[a] < self.ranks[b] {
             std::mem::swap(&mut a, &mut b);
         }
+        self.log.push(Action::Parent { key: b, original_parent: a });
         self.parents[b] = a;
         if self.ranks[a] == self.ranks[b] {
+            self.log.push(Action::Rank { key: a, original_rank: self.ranks[a] });
             self.ranks[a] += 1;
         }
     }
+
+    pub fn probe_value_immutable(&self, key: UnificationKey) -> &V {
+        let mut root = key.0;
+        let mut parent = self.parents[root];
+        while root != parent {
+            root = parent;
+            // parent = root.parent
+            parent = self.parents[parent];
+        }
+        self.values[parent].as_ref().unwrap()
+    }
+
     pub fn probe_value(&mut self, a: UnificationKey) -> &V {
         let index = self.find(a);
         self.values[index].as_ref().unwrap()
@@ -53,7 +87,8 @@ impl<V> UnificationTable<V> {
     pub fn set_value(&mut self, a: UnificationKey, v: V) {
         let index = self.find(a);
-        self.values[index] = Some(v);
+        let original_value = self.values[index].replace(v);
+        self.log.push(Action::Value { key: index, original_value });
     }
     pub fn unioned(&mut self, a: UnificationKey, b: UnificationKey) -> bool {
@@ -71,6 +106,7 @@ impl<V> UnificationTable<V> {
             // a = parent.parent
             let a = self.parents[parent];
             // root.parent = parent.parent
+            self.log.push(Action::Parent { key: root, original_parent: a });
             self.parents[root] = a;
             root = parent;
             // parent = root.parent
@@ -78,6 +114,40 @@ impl<V> UnificationTable<V> {
         }
         parent
     }
+
+    pub fn get_snapshot(&mut self) -> (usize, u32) {
+        let generation = self.generation;
+        self.log.push(Action::Marker { generation });
+        self.generation += 1;
+        (self.log.len(), generation)
+    }
+
+    pub fn restore_snapshot(&mut self, snapshot: (usize, u32)) {
+        let (log_len, generation) = snapshot;
+        assert!(self.log.len() >= log_len, "snapshot restoration error");
+        assert!(
+            matches!(self.log[log_len - 1], Action::Marker { generation: gen } if gen == generation),
+            "snapshot restoration error"
+        );
+        for action in self.log.drain(log_len - 1..).rev() {
+            match action {
+                Action::Parent { key, original_parent } => {
+                    self.parents[key] = original_parent;
+                }
+                Action::Value { key, original_value } => {
+                    self.values[key] = original_value;
+                }
+                Action::Rank { key, original_rank } => {
+                    self.ranks[key] = original_rank;
+                }
+                Action::Marker { .. } => {}
+            }
+        }
+    }
+
+    pub fn discard_snapshot(&mut self, snapshot: (usize, u32)) {
+        let (log_len, generation) = snapshot;
+        assert!(self.log.len() >= log_len, "snapshot discard error");
+        assert!(
+            matches!(self.log[log_len - 1], Action::Marker { generation: gen } if generation == gen),
+            "snapshot discard error"
+        );
+        self.log.truncate(log_len - 1);
+    }
 }
 impl<V> UnificationTable<Rc<V>>
@@ -89,11 +159,11 @@ where
             .enumerate()
             .map(|(i, (v, p))| if *p == i { v.as_ref().map(|v| v.as_ref().clone()) } else { None })
             .collect();
-        UnificationTable { parents: self.parents.clone(), ranks: self.ranks.clone(), values }
+        UnificationTable { parents: self.parents.clone(), ranks: self.ranks.clone(), values, log: Vec::new(), generation: 0 }
     }
     pub fn from_send(table: &UnificationTable<V>) -> UnificationTable<Rc<V>> {
         let values = table.values.iter().cloned().map(|v| v.map(Rc::new)).collect();
-        UnificationTable { parents: table.parents.clone(), ranks: table.ranks.clone(), values }
+        UnificationTable { parents: table.parents.clone(), ranks: table.ranks.clone(), values, log: Vec::new(), generation: 0 }
     }
 }

View File

@@ -14,8 +14,6 @@ lalrpop = "0.19.6"
 nac3ast = { path = "../nac3ast" }
 lalrpop-util = "0.19.6"
 log = "0.4.1"
-num-bigint = "0.4.0"
-num-traits = "0.2"
 unic-emoji-char = "0.9"
 unic-ucd-ident = "0.9"
 unicode_names2 = "0.4"
@@ -23,4 +21,4 @@ phf = { version = "0.9", features = ["macros"] }
 ahash = "0.7.2"

 [dev-dependencies]
-insta = "1.5"
+insta = "=1.11.0"

View File

@@ -3,14 +3,12 @@
 //! This means source code is translated into separate tokens.
 pub use super::token::Tok;
-use crate::ast::Location;
+use crate::ast::{Location, FileName};
 use crate::error::{LexicalError, LexicalErrorType};
-use num_bigint::BigInt;
-use num_traits::identities::Zero;
-use num_traits::Num;
 use std::char;
 use std::cmp::Ordering;
 use std::str::FromStr;
+use std::num::IntErrorKind;
 use unic_emoji_char::is_emoji_presentation;
 use unic_ucd_ident::{is_xid_continue, is_xid_start};
@@ -113,8 +111,8 @@ pub type Spanned = (Location, Tok, Location);
 pub type LexResult = Result<Spanned, LexicalError>;
 #[inline]
-pub fn make_tokenizer(source: &str) -> impl Iterator<Item = LexResult> + '_ {
-    make_tokenizer_located(source, Location::new(0, 0))
+pub fn make_tokenizer(source: &str, file: FileName) -> impl Iterator<Item = LexResult> + '_ {
+    make_tokenizer_located(source, Location::new(0, 0, file))
 }
 pub fn make_tokenizer_located(
@@ -287,10 +285,18 @@ where
     fn lex_number_radix(&mut self, start_pos: Location, radix: u32) -> LexResult {
         let value_text = self.radix_run(radix);
         let end_pos = self.get_pos();
-        let value = BigInt::from_str_radix(&value_text, radix).map_err(|e| LexicalError {
-            error: LexicalErrorType::OtherError(format!("{:?}", e)),
-            location: start_pos,
-        })?;
+        let value = match i128::from_str_radix(&value_text, radix) {
+            Ok(value) => value,
+            Err(e) => {
+                match e.kind() {
+                    IntErrorKind::PosOverflow | IntErrorKind::NegOverflow => i128::MAX,
+                    _ => return Err(LexicalError {
+                        error: LexicalErrorType::OtherError(format!("{:?}", e)),
+                        location: start_pos,
+                    }),
+                }
+            }
+        };
         Ok((start_pos, Tok::Int { value }, end_pos))
     }
@@ -353,14 +359,20 @@ where
             Ok((start_pos, Tok::Complex { real: 0.0, imag }, end_pos))
         } else {
             let end_pos = self.get_pos();
-            let value = value_text.parse::<BigInt>().unwrap();
-            if start_is_zero && !value.is_zero() {
+            // assumption: value_text contains a valid integer.
+            // parse should only fail because of overflow.
+            let value = value_text.parse::<i128>().ok();
+            let nonzero = match value {
+                Some(value) => value != 0i128,
+                None => true
+            };
+            if start_is_zero && nonzero {
                 return Err(LexicalError {
                     error: LexicalErrorType::OtherError("Invalid Token".to_owned()),
                     location: self.get_pos(),
                 });
             }
-            Ok((start_pos, Tok::Int { value }, end_pos))
+            Ok((start_pos, Tok::Int { value: value.unwrap_or(i128::MAX) }, end_pos))
         }
     }
 }
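The decimal branch applies the same idea: a failed `parse::<i128>()` is assumed to be overflow, which still counts as a nonzero value for the leading-zero check, and the stored value is clamped to `i128::MAX`. A sketch under that assumption (the function name is hypothetical):

```rust
// Hypothetical mirror of the decimal path above: a failed parse is
// treated as an overflowed (hence nonzero) literal, so leading-zero
// forms like 0123 are still rejected, and overflow clamps to i128::MAX.
fn lex_decimal_value(value_text: &str, start_is_zero: bool) -> Result<i128, &'static str> {
    let value = value_text.parse::<i128>().ok();
    let nonzero = value.map_or(true, |v| v != 0i128);
    if start_is_zero && nonzero {
        return Err("Invalid Token");
    }
    Ok(value.unwrap_or(i128::MAX))
}
```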
@@ -1321,14 +1333,13 @@ where
 #[cfg(test)]
 mod tests {
     use super::{make_tokenizer, NewlineHandler, Tok};
-    use num_bigint::BigInt;
     const WINDOWS_EOL: &str = "\r\n";
     const MAC_EOL: &str = "\r";
     const UNIX_EOL: &str = "\n";
     pub fn lex_source(source: &str) -> Vec<Tok> {
-        let lexer = make_tokenizer(source);
+        let lexer = make_tokenizer(source, Default::default());
        lexer.map(|x| x.unwrap().1).collect()
     }
@@ -1449,16 +1460,16 @@ class Foo(A, B):
            tokens,
            vec![
                Tok::Int {
-                   value: BigInt::from(47),
+                   value: 47i128,
                },
                Tok::Int {
-                   value: BigInt::from(13),
+                   value: 13i128,
                },
                Tok::Int {
-                   value: BigInt::from(0),
+                   value: 0i128,
                },
                Tok::Int {
-                   value: BigInt::from(123),
+                   value: 123i128,
                },
                Tok::Float { value: 0.2 },
                Tok::Complex {
@@ -1481,7 +1492,7 @@ class Foo(A, B):
            fn $name() {
                let source = format!(r"99232 # {}", $eol);
                let tokens = lex_source(&source);
-               assert_eq!(tokens, vec![Tok::Int { value: BigInt::from(99232) }, Tok::Newline]);
+               assert_eq!(tokens, vec![Tok::Int { value: 99232i128 }, Tok::Newline]);
            }
        )*
    }
@@ -1504,9 +1515,9 @@ class Foo(A, B):
        assert_eq!(
            tokens,
            vec![
-               Tok::Int { value: BigInt::from(123) },
+               Tok::Int { value: 123i128 },
                Tok::Newline,
-               Tok::Int { value: BigInt::from(456) },
+               Tok::Int { value: 456i128 },
                Tok::Newline,
            ]
        )
@@ -1533,15 +1544,15 @@ class Foo(A, B):
            },
            Tok::Equal,
            Tok::Int {
-               value: BigInt::from(99)
+               value: 99i128
            },
            Tok::Plus,
            Tok::Int {
-               value: BigInt::from(2)
+               value: 2i128
            },
            Tok::Minus,
            Tok::Int {
-               value: BigInt::from(0)
+               value: 0i128
            },
            Tok::Newline,
        ]
@@ -1568,7 +1579,7 @@ class Foo(A, B):
            Tok::Newline,
            Tok::Indent,
            Tok::Return,
-           Tok::Int { value: BigInt::from(99) },
+           Tok::Int { value: 99i128 },
            Tok::Newline,
            Tok::Dedent,
        ]
@@ -1611,7 +1622,7 @@ class Foo(A, B):
            Tok::Newline,
            Tok::Indent,
            Tok::Return,
-           Tok::Int { value: BigInt::from(99) },
+           Tok::Int { value: 99i128 },
            Tok::Newline,
            Tok::Dedent,
            Tok::Dedent,
@@ -1649,7 +1660,7 @@ class Foo(A, B):
            Tok::Newline,
            Tok::Indent,
            Tok::Return,
-           Tok::Int { value: BigInt::from(99) },
+           Tok::Int { value: 99i128 },
            Tok::Newline,
            Tok::Dedent,
            Tok::Dedent,
@@ -1687,9 +1698,9 @@ class Foo(A, B):
            },
            Tok::Equal,
            Tok::Lsqb,
-           Tok::Int { value: BigInt::from(1) },
+           Tok::Int { value: 1i128 },
            Tok::Comma,
-           Tok::Int { value: BigInt::from(2) },
+           Tok::Int { value: 2i128 },
            Tok::Rsqb,
            Tok::Newline,
        ]


@@ -7,7 +7,7 @@
 use std::iter;
-use crate::ast;
+use crate::ast::{self, FileName};
 use crate::error::ParseError;
 use crate::lexer;
 pub use crate::mode::Mode;
@@ -20,8 +20,8 @@ use crate::python;
 */
 /// Parse a full python program, containing usually multiple lines.
-pub fn parse_program(source: &str) -> Result<ast::Suite, ParseError> {
-    parse(source, Mode::Module).map(|top| match top {
+pub fn parse_program(source: &str, file: FileName) -> Result<ast::Suite, ParseError> {
+    parse(source, Mode::Module, file).map(|top| match top {
         ast::Mod::Module { body, .. } => body,
         _ => unreachable!(),
     })
@@ -31,18 +31,17 @@ pub fn parse_program(source: &str) -> Result<ast::Suite, ParseError> {
 ///
 /// # Example
 /// ```
-/// extern crate num_bigint;
 /// use nac3parser::{parser, ast};
 /// let expr = parser::parse_expression("1 + 2").unwrap();
 ///
 /// assert_eq!(
 ///     expr,
 ///     ast::Expr {
-///         location: ast::Location::new(1, 3),
+///         location: ast::Location::new(1, 3, Default::default()),
 ///         custom: (),
 ///         node: ast::ExprKind::BinOp {
 ///             left: Box::new(ast::Expr {
-///                 location: ast::Location::new(1, 1),
+///                 location: ast::Location::new(1, 1, Default::default()),
 ///                 custom: (),
 ///                 node: ast::ExprKind::Constant {
 ///                     value: ast::Constant::Int(1.into()),
@@ -51,7 +50,7 @@ pub fn parse_program(source: &str) -> Result<ast::Suite, ParseError> {
 ///             }),
 ///             op: ast::Operator::Add,
 ///             right: Box::new(ast::Expr {
-///                 location: ast::Location::new(1, 5),
+///                 location: ast::Location::new(1, 5, Default::default()),
 ///                 custom: (),
 ///                 node: ast::ExprKind::Constant {
 ///                     value: ast::Constant::Int(2.into()),
@@ -64,15 +63,15 @@ pub fn parse_program(source: &str) -> Result<ast::Suite, ParseError> {
 ///
 /// ```
 pub fn parse_expression(source: &str) -> Result<ast::Expr, ParseError> {
-    parse(source, Mode::Expression).map(|top| match top {
+    parse(source, Mode::Expression, Default::default()).map(|top| match top {
         ast::Mod::Expression { body } => *body,
         _ => unreachable!(),
     })
 }
 // Parse a given source code
-pub fn parse(source: &str, mode: Mode) -> Result<ast::Mod, ParseError> {
-    let lxr = lexer::make_tokenizer(source);
+pub fn parse(source: &str, mode: Mode, file: FileName) -> Result<ast::Mod, ParseError> {
+    let lxr = lexer::make_tokenizer(source, file);
     let marker_token = (Default::default(), mode.to_marker(), Default::default());
     let tokenizer = iter::once(Ok(marker_token)).chain(lxr);
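The pattern in these hunks is to thread a `FileName` from every entry point (`parse_program`, `parse`, `make_tokenizer`) down into each `Location`, so diagnostics can name the source file. A sketch with simplified stand-in types (not the actual nac3parser definitions):

```rust
// Simplified stand-ins for nac3parser's FileName and Location, showing
// how the file name rides along from the tokenizer entry point into
// every Location the lexer produces.
#[derive(Clone, Debug, Default, PartialEq)]
struct FileName(&'static str);

#[derive(Clone, Debug, PartialEq)]
struct Location {
    row: u32,
    column: u32,
    file: FileName,
}

fn make_tokenizer(_source: &str, file: FileName) -> Location {
    // The real function returns a token iterator; here we only show
    // that the starting Location now carries the file name.
    Location { row: 0, column: 0, file }
}
```

Call sites that do not care about the file name pass `Default::default()`, which is exactly what the updated tests in this diff do.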
@@ -87,42 +86,42 @@ mod tests {
     #[test]
     fn test_parse_empty() {
-        let parse_ast = parse_program("").unwrap();
+        let parse_ast = parse_program("", Default::default()).unwrap();
         insta::assert_debug_snapshot!(parse_ast);
     }
     #[test]
     fn test_parse_print_hello() {
         let source = String::from("print('Hello world')");
-        let parse_ast = parse_program(&source).unwrap();
+        let parse_ast = parse_program(&source, Default::default()).unwrap();
         insta::assert_debug_snapshot!(parse_ast);
     }
     #[test]
     fn test_parse_print_2() {
         let source = String::from("print('Hello world', 2)");
-        let parse_ast = parse_program(&source).unwrap();
+        let parse_ast = parse_program(&source, Default::default()).unwrap();
         insta::assert_debug_snapshot!(parse_ast);
     }
     #[test]
     fn test_parse_kwargs() {
         let source = String::from("my_func('positional', keyword=2)");
-        let parse_ast = parse_program(&source).unwrap();
+        let parse_ast = parse_program(&source, Default::default()).unwrap();
         insta::assert_debug_snapshot!(parse_ast);
     }
     #[test]
     fn test_parse_if_elif_else() {
         let source = String::from("if 1: 10\nelif 2: 20\nelse: 30");
-        let parse_ast = parse_program(&source).unwrap();
+        let parse_ast = parse_program(&source, Default::default()).unwrap();
         insta::assert_debug_snapshot!(parse_ast);
     }
     #[test]
     fn test_parse_lambda() {
         let source = "lambda x, y: x * y"; // lambda(x, y): x * y";
-        let parse_ast = parse_program(&source).unwrap();
+        let parse_ast = parse_program(&source, Default::default()).unwrap();
         insta::assert_debug_snapshot!(parse_ast);
     }
@@ -130,7 +129,7 @@ mod tests {
     fn test_parse_tuples() {
         let source = "a, b = 4, 5";
-        insta::assert_debug_snapshot!(parse_program(&source).unwrap());
+        insta::assert_debug_snapshot!(parse_program(&source, Default::default()).unwrap());
     }
     #[test]
@@ -141,7 +140,7 @@ class Foo(A, B):
   pass
 def method_with_default(self, arg='default'):
   pass";
-        insta::assert_debug_snapshot!(parse_program(&source).unwrap());
+        insta::assert_debug_snapshot!(parse_program(&source, Default::default()).unwrap());
     }
     #[test]
@@ -184,7 +183,7 @@ while i < 2: # nac3: 4
 # nac3: if1
 if 1: # nac3: if2
     3";
-        insta::assert_debug_snapshot!(parse_program(&source).unwrap());
+        insta::assert_debug_snapshot!(parse_program(&source, Default::default()).unwrap());
     }
     #[test]
@@ -197,7 +196,7 @@ while test: # nac3: while3
     # nac3: simple assign0
     a = 3 # nac3: simple assign1
 ";
-        insta::assert_debug_snapshot!(parse_program(&source).unwrap());
+        insta::assert_debug_snapshot!(parse_program(&source, Default::default()).unwrap());
     }
     #[test]
@@ -216,7 +215,7 @@ if a: # nac3: small2
 for i in a: # nac3: for1
     pass
 ";
-        insta::assert_debug_snapshot!(parse_program(&source).unwrap());
+        insta::assert_debug_snapshot!(parse_program(&source, Default::default()).unwrap());
     }
     #[test]
@@ -225,6 +224,6 @@ for i in a: # nac3: for1
 if a: # nac3: something
     a = 3
 ";
-        assert!(parse_program(&source).is_err());
+        assert!(parse_program(&source, Default::default()).is_err());
     }
 }


@@ -5,7 +5,7 @@
 use std::iter::FromIterator;
-use crate::ast;
+use crate::ast::{self, Constant};
 use crate::fstring::parse_located_fstring;
 use crate::function::{ArgumentList, parse_args, parse_params};
 use crate::error::LexicalError;
@@ -14,7 +14,6 @@ use crate::lexer;
 use crate::config_comment_helper::*;
 use lalrpop_util::ParseError;
-use num_bigint::BigInt;
 grammar;
@@ -916,7 +915,20 @@ Factor: ast::Expr = {
     <location:@L> <op:UnaryOp> <e:Factor> => ast::Expr {
         location,
         custom: (),
-        node: ast::ExprKind::UnaryOp { operand: Box::new(e), op }
+        node: {
+            match (&op, &e.node) {
+                (ast::Unaryop::USub, ast::ExprKind::Constant { value: Constant::Int(val), kind }) => {
+                    ast::ExprKind::Constant {
+                        value: if i128::MAX == *val { Constant::Int(*val) } else { Constant::Int(-val) },
+                        kind: kind.clone()
+                    }
+                }
+                (ast::Unaryop::UAdd, ast::ExprKind::Constant { value: Constant::Int(val), kind }) => {
+                    ast::ExprKind::Constant { value: Constant::Int(*val), kind: kind.clone() }
+                }
+                _ => ast::ExprKind::UnaryOp { operand: Box::new(e), op }
+            }
+        }
     },
     Power,
 };
@@ -1352,7 +1364,7 @@ extern {
     "True" => lexer::Tok::True,
     "False" => lexer::Tok::False,
     "None" => lexer::Tok::None,
-    int => lexer::Tok::Int { value: <BigInt> },
+    int => lexer::Tok::Int { value: <i128> },
     float => lexer::Tok::Float { value: <f64> },
     complex => lexer::Tok::Complex { real: <f64>, imag: <f64> },
     string => lexer::Tok::String { value: <String>, is_fstring: <bool> },


@@ -1,11 +1,16 @@
 ---
-source: parser/src/fstring.rs
+source: nac3parser/src/fstring.rs
+assertion_line: 327
 expression: parse_ast
 ---
 Located {
 location: Location {
 row: 0,
 column: 0,
+file: FileName(
+"unknown",
+),
 },
 custom: (),
 node: JoinedStr {
@@ -14,6 +19,9 @@ Located {
 location: Location {
 row: 0,
 column: 0,
+file: FileName(
+"unknown",
+),
 },
 custom: (),
 node: Constant {
@@ -27,6 +35,9 @@ Located {
 location: Location {
 row: 0,
 column: 0,
+file: FileName(
+"unknown",
+),
 },
 custom: (),
 node: Constant {
@@ -40,6 +51,9 @@ Located {
 location: Location {
 row: 0,
 column: 0,
+file: FileName(
+"unknown",
+),
 },
 custom: (),
 node: FormattedValue {
@@ -47,6 +61,9 @@ Located {
 location: Location {
 row: 1,
 column: 2,
+file: FileName(
+"unknown",
+),
 },
 custom: (),
 node: Name {


@@ -1,11 +1,16 @@
 ---
-source: parser/src/fstring.rs
+source: nac3parser/src/fstring.rs
+assertion_line: 335
 expression: parse_ast
 ---
 Located {
 location: Location {
 row: 0,
 column: 0,
+file: FileName(
+"unknown",
+),
 },
 custom: (),
 node: JoinedStr {
@@ -14,6 +19,9 @@ Located {
 location: Location {
 row: 0,
 column: 0,
+file: FileName(
+"unknown",
+),
 },
 custom: (),
 node: Constant {
@@ -27,6 +35,9 @@ Located {
 location: Location {
 row: 0,
 column: 0,
+file: FileName(
+"unknown",
+),
 },
 custom: (),
 node: Constant {
@@ -40,6 +51,9 @@ Located {
 location: Location {
 row: 0,
 column: 0,
+file: FileName(
+"unknown",
+),
 },
 custom: (),
 node: Constant {
@@ -53,6 +67,9 @@ Located {
 location: Location {
 row: 0,
 column: 0,
+file: FileName(
+"unknown",
+),
 },
 custom: (),
 node: FormattedValue {
@@ -60,6 +77,9 @@ Located {
 location: Location {
 row: 1,
 column: 2,
+file: FileName(
+"unknown",
+),
 },
 custom: (),
 node: Name {
@@ -75,6 +95,9 @@ Located {
 location: Location {
 row: 0,
 column: 0,
+file: FileName(
+"unknown",
+),
 },
 custom: (),
 node: Constant {
@@ -88,6 +111,9 @@ Located {
 location: Location {
 row: 0,
 column: 0,
+file: FileName(
+"unknown",
+),
 },
 custom: (),
 node: Constant {
@@ -101,6 +127,9 @@ Located {
 location: Location {
 row: 0,
 column: 0,
+file: FileName(
+"unknown",
+),
 },
 custom: (),
 node: Constant {
@@ -114,6 +143,9 @@ Located {
 location: Location {
 row: 0,
 column: 0,
+file: FileName(
+"unknown",
+),
 },
 custom: (),
 node: FormattedValue {
@@ -121,6 +153,9 @@ Located {
 location: Location {
 row: 1,
 column: 2,
+file: FileName(
+"unknown",
+),
 },
 custom: (),
 node: Name {


@@ -1,11 +1,16 @@
 ---
-source: parser/src/fstring.rs
+source: nac3parser/src/fstring.rs
+assertion_line: 343
 expression: parse_ast
 ---
 Located {
 location: Location {
 row: 0,
 column: 0,
+file: FileName(
+"unknown",
+),
 },
 custom: (),
 node: JoinedStr {
@@ -14,6 +19,9 @@ Located {
 location: Location {
 row: 0,
 column: 0,
+file: FileName(
+"unknown",
+),
 },
 custom: (),
 node: Constant {
@@ -27,6 +35,9 @@ Located {
 location: Location {
 row: 0,
 column: 0,
+file: FileName(
+"unknown",
+),
 },
 custom: (),
 node: Constant {
@@ -40,6 +51,9 @@ Located {
 location: Location {
 row: 0,
 column: 0,
+file: FileName(
+"unknown",
+),
 },
 custom: (),
 node: FormattedValue {
@@ -47,6 +61,9 @@ Located {
 location: Location {
 row: 1,
 column: 2,
+file: FileName(
+"unknown",
+),
 },
 custom: (),
 node: Name {
@@ -60,6 +77,9 @@ Located {
 location: Location {
 row: 0,
 column: 0,
+file: FileName(
+"unknown",
+),
 },
 custom: (),
 node: Constant {


@@ -1,11 +1,16 @@
 ---
-source: parser/src/fstring.rs
+source: nac3parser/src/fstring.rs
+assertion_line: 319
 expression: "parse_fstring(\"\").unwrap()"
 ---
 Located {
 location: Location {
 row: 0,
 column: 0,
+file: FileName(
+"unknown",
+),
 },
 custom: (),
 node: Constant {


@@ -1,11 +1,16 @@
 ---
-source: parser/src/fstring.rs
+source: nac3parser/src/fstring.rs
+assertion_line: 298
 expression: parse_ast
 ---
 Located {
 location: Location {
 row: 0,
 column: 0,
+file: FileName(
+"unknown",
+),
 },
 custom: (),
 node: JoinedStr {
@@ -14,6 +19,9 @@ Located {
 location: Location {
 row: 0,
 column: 0,
+file: FileName(
+"unknown",
+),
 },
 custom: (),
 node: FormattedValue {
@@ -21,6 +29,9 @@ Located {
 location: Location {
 row: 1,
 column: 2,
+file: FileName(
+"unknown",
+),
 },
 custom: (),
 node: Name {
@@ -36,6 +47,9 @@ Located {
 location: Location {
 row: 0,
 column: 0,
+file: FileName(
+"unknown",
+),
 },
 custom: (),
 node: FormattedValue {
@@ -43,6 +57,9 @@ Located {
 location: Location {
 row: 1,
 column: 3,
+file: FileName(
+"unknown",
+),
 },
 custom: (),
 node: Name {
@@ -58,6 +75,9 @@ Located {
 location: Location {
 row: 0,
 column: 0,
+file: FileName(
+"unknown",
+),
 },
 custom: (),
 node: Constant {


@@ -1,5 +1,6 @@
 ---
-source: parser/src/fstring.rs
+source: nac3parser/src/fstring.rs
+assertion_line: 382
 expression: parse_ast
 ---
@@ -7,6 +8,9 @@ Located {
 location: Location {
 row: 0,
 column: 0,
+file: FileName(
+"unknown",
+),
 },
 custom: (),
 node: FormattedValue {
@@ -14,6 +18,9 @@ Located {
 location: Location {
 row: 1,
 column: 5,
+file: FileName(
+"unknown",
+),
 },
 custom: (),
 node: Compare {
@@ -21,6 +28,9 @@ Located {
 location: Location {
 row: 1,
 column: 2,
+file: FileName(
+"unknown",
+),
 },
 custom: (),
 node: Constant {
@@ -38,6 +48,9 @@ Located {
 location: Location {
 row: 1,
 column: 8,
+file: FileName(
+"unknown",
+),
 },
 custom: (),
 node: Constant {


@@ -1,11 +1,16 @@
 ---
-source: parser/src/fstring.rs
+source: nac3parser/src/fstring.rs
+assertion_line: 306
 expression: parse_ast
 ---
 Located {
 location: Location {
 row: 0,
 column: 0,
+file: FileName(
+"unknown",
+),
 },
 custom: (),
 node: FormattedValue {
@@ -13,6 +18,9 @@ Located {
 location: Location {
 row: 1,
 column: 2,
+file: FileName(
+"unknown",
+),
 },
 custom: (),
 node: Name {
@@ -26,6 +34,9 @@ Located {
 location: Location {
 row: 0,
 column: 0,
+file: FileName(
+"unknown",
+),
 },
 custom: (),
 node: FormattedValue {
@@ -33,6 +44,9 @@ Located {
 location: Location {
 row: 1,
 column: 2,
+file: FileName(
+"unknown",
+),
 },
 custom: (),
 node: Name {


@@ -1,5 +1,6 @@
 ---
-source: parser/src/fstring.rs
+source: nac3parser/src/fstring.rs
+assertion_line: 375
 expression: parse_ast
 ---
@@ -7,6 +8,9 @@ Located {
 location: Location {
 row: 0,
 column: 0,
+file: FileName(
+"unknown",
+),
 },
 custom: (),
 node: FormattedValue {
@@ -14,6 +18,9 @@ Located {
 location: Location {
 row: 1,
 column: 4,
+file: FileName(
+"unknown",
+),
 },
 custom: (),
 node: Compare {
@@ -21,6 +28,9 @@ Located {
 location: Location {
 row: 1,
 column: 2,
+file: FileName(
+"unknown",
+),
 },
 custom: (),
 node: Constant {
@@ -38,6 +48,9 @@ Located {
 location: Location {
 row: 1,
 column: 7,
+file: FileName(
+"unknown",
+),
 },
 custom: (),
 node: Constant {


@@ -1,11 +1,16 @@
 ---
-source: parser/src/fstring.rs
+source: nac3parser/src/fstring.rs
+assertion_line: 314
 expression: parse_ast
 ---
 Located {
 location: Location {
 row: 0,
 column: 0,
+file: FileName(
+"unknown",
+),
 },
 custom: (),
 node: FormattedValue {
@@ -13,6 +18,9 @@ Located {
 location: Location {
 row: 1,
 column: 2,
+file: FileName(
+"unknown",
+),
 },
 custom: (),
 node: Name {
@@ -26,6 +34,9 @@ Located {
 location: Location {
 row: 0,
 column: 0,
+file: FileName(
+"unknown",
+),
 },
 custom: (),
 node: Constant {


@@ -1,11 +1,16 @@
 ---
-source: parser/src/fstring.rs
+source: nac3parser/src/fstring.rs
+assertion_line: 389
 expression: parse_ast
 ---
 Located {
 location: Location {
 row: 0,
 column: 0,
+file: FileName(
+"unknown",
+),
 },
 custom: (),
 node: JoinedStr {
@@ -14,6 +19,9 @@ Located {
 location: Location {
 row: 0,
 column: 0,
+file: FileName(
+"unknown",
+),
 },
 custom: (),
 node: Constant {
@@ -27,6 +35,9 @@ Located {
 location: Location {
 row: 0,
 column: 0,
+file: FileName(
+"unknown",
+),
 },
 custom: (),
 node: Constant {
@@ -40,6 +51,9 @@ Located {
 location: Location {
 row: 0,
 column: 0,
+file: FileName(
+"unknown",
+),
 },
 custom: (),
 node: FormattedValue {
@@ -47,6 +61,9 @@ Located {
 location: Location {
 row: 1,
 column: 2,
+file: FileName(
+"unknown",
+),
 },
 custom: (),
 node: Name {


@@ -1,11 +1,16 @@
 ---
-source: parser/src/fstring.rs
+source: nac3parser/src/fstring.rs
+assertion_line: 396
 expression: parse_ast
 ---
 Located {
 location: Location {
 row: 0,
 column: 0,
+file: FileName(
+"unknown",
+),
 },
 custom: (),
 node: JoinedStr {
@@ -14,6 +19,9 @@ Located {
 location: Location {
 row: 0,
 column: 0,
+file: FileName(
+"unknown",
+),
 },
 custom: (),
 node: Constant {
@@ -27,6 +35,9 @@ Located {
 location: Location {
 row: 0,
 column: 0,
+file: FileName(
+"unknown",
+),
 },
 custom: (),
 node: Constant {
@@ -40,6 +51,9 @@ Located {
 location: Location {
 row: 0,
 column: 0,
+file: FileName(
+"unknown",
+),
 },
 custom: (),
 node: FormattedValue {
@@ -47,6 +61,9 @@ Located {
 location: Location {
 row: 1,
 column: 2,
+file: FileName(
+"unknown",
+),
 },
 custom: (),
 node: Name {


@@ -1,11 +1,16 @@
 ---
-source: parser/src/fstring.rs
+source: nac3parser/src/fstring.rs
+assertion_line: 403
 expression: parse_ast
 ---
 Located {
 location: Location {
 row: 0,
 column: 0,
+file: FileName(
+"unknown",
+),
 },
 custom: (),
 node: FormattedValue {
@@ -13,6 +18,9 @@ Located {
 location: Location {
 row: 1,
 column: 2,
+file: FileName(
+"unknown",
+),
 },
 custom: (),
 node: Yield {


@@ -1,6 +1,7 @@
 ---
 source: nac3parser/src/parser.rs
-expression: parse_program(&source).unwrap()
+assertion_line: 218
+expression: "parse_program(&source, Default::default()).unwrap()"
 ---
 [
@@ -8,6 +9,9 @@ expression: parse_program(&source).unwrap()
 location: Location {
 row: 1,
 column: 1,
+file: FileName(
+"unknown",
+),
 },
 custom: (),
 node: If {
@@ -15,6 +19,9 @@ expression: parse_program(&source).unwrap()
 location: Location {
 row: 1,
 column: 4,
+file: FileName(
+"unknown",
+),
 },
 custom: (),
 node: Name {
@@ -27,6 +34,9 @@ expression: parse_program(&source).unwrap()
 location: Location {
 row: 1,
 column: 7,
+file: FileName(
+"unknown",
+),
 },
 custom: (),
 node: Expr {
@@ -34,6 +44,9 @@ expression: parse_program(&source).unwrap()
 location: Location {
 row: 1,
 column: 7,
+file: FileName(
+"unknown",
+),
 },
 custom: (),
 node: Name {
@@ -55,6 +68,9 @@ expression: parse_program(&source).unwrap()
 location: Location {
 row: 2,
 column: 1,
+file: FileName(
+"unknown",
+),
 },
 custom: (),
 node: If {
@@ -62,6 +78,9 @@ expression: parse_program(&source).unwrap()
 location: Location {
 row: 2,
 column: 4,
+file: FileName(
+"unknown",
+),
 },
 custom: (),
 node: Name {
@@ -74,6 +93,9 @@ expression: parse_program(&source).unwrap()
 location: Location {
 row: 2,
 column: 7,
+file: FileName(
+"unknown",
+),
 },
 custom: (),
 node: Expr {
@@ -81,6 +103,9 @@ expression: parse_program(&source).unwrap()
 location: Location {
 row: 2,
 column: 7,
+file: FileName(
+"unknown",
+),
 },
 custom: (),
 node: Name {
@@ -102,6 +127,9 @@ expression: parse_program(&source).unwrap()
 location: Location {
 row: 3,
 column: 1,
+file: FileName(
+"unknown",
+),
 },
 custom: (),
 node: If {
@@ -109,6 +137,9 @@ expression: parse_program(&source).unwrap()
 location: Location {
 row: 3,
 column: 4,
+file: FileName(
+"unknown",
+),
 },
 custom: (),
 node: Name {
@@ -121,6 +152,9 @@ expression: parse_program(&source).unwrap()
 location: Location {
 row: 4,
 column: 5,
+file: FileName(
+"unknown",
+),
 },
 custom: (),
 node: Expr {
@@ -128,6 +162,9 @@ expression: parse_program(&source).unwrap()
 location: Location {
 row: 4,
 column: 5,
+file: FileName(
+"unknown",
+),
 },
 custom: (),
 node: Name {
@@ -142,6 +179,9 @@ expression: parse_program(&source).unwrap()
 location: Location {
 row: 4,
 column: 8,
+file: FileName(
+"unknown",
+),
 },
 custom: (),
 node: Expr {
@@ -149,6 +189,9 @@ expression: parse_program(&source).unwrap()
 location: Location {
 row: 4,
 column: 10,
+file: FileName(
+"unknown",
+),
 },
 custom: (),
 node: BinOp {
@@ -156,6 +199,9 @@ expression: parse_program(&source).unwrap()
 location: Location {
 row: 4,
 column: 8,
+file: FileName(
+"unknown",
+),
 },
 custom: (),
 node: Name {
@@ -168,6 +214,9 @@ expression: parse_program(&source).unwrap()
 location: Location {
 row: 4,
 column: 12,
+file: FileName(
+"unknown",
+),
 },
 custom: (),
 node: Constant {
@@ -195,6 +244,9 @@ expression: parse_program(&source).unwrap()
 location: Location {
 row: 5,
 column: 1,
+file: FileName(
+"unknown",
+),
 },
 custom: (),
 node: Assign {
@@ -203,6 +255,9 @@ expression: parse_program(&source).unwrap()
 location: Location {
 row: 5,
 column: 1,
+file: FileName(
+"unknown",
+),
 },
 custom: (),
 node: Name {
@@ -215,6 +270,9 @@ expression: parse_program(&source).unwrap()
 location: Location {
 row: 5,
 column: 5,
+file: FileName(
+"unknown",
+),
 },
 custom: (),
 node: Constant {
@@ -232,6 +290,9 @@ expression: parse_program(&source).unwrap()
 location: Location {
 row: 5,
 column: 8,
+file: FileName(
+"unknown",
+),
 },
 custom: (),
 node: Expr {
@@ -239,6 +300,9 @@ expression: parse_program(&source).unwrap()
 location: Location {
 row: 5,
 column: 10,
+file: FileName(
+"unknown",
+),
 },
 custom: (),
 node: BinOp {
@@ -246,6 +310,9 @@ expression: parse_program(&source).unwrap()
 location: Location {
 row: 5,
 column: 8,
+file: FileName(
+"unknown",
+),
 },
 custom: (),
 node: Name {
@@ -258,6 +325,9 @@ expression: parse_program(&source).unwrap()
 location: Location {
 row: 5,
 column: 12,
+file: FileName(
+"unknown",
+),
 },
 custom: (),
 node: Constant {
@@ -276,6 +346,9 @@ expression: parse_program(&source).unwrap()
 location: Location {
 row: 5,
 column: 15,
+file: FileName(
+"unknown",
+),
 },
 custom: (),
 node: Assign {
@@ -284,6 +357,9 @@ expression: parse_program(&source).unwrap()
 location: Location {
 row: 5,
 column: 15,
+file: FileName(
+"unknown",
+),
 },
 custom: (),
 node: Name {
@@ -296,6 +372,9 @@ expression: parse_program(&source).unwrap()
 location: Location {
 row: 5,
 column: 19,
+file: FileName(
+"unknown",
+),
 },
 custom: (),
 node: Name {
@@ -313,6 +392,9 @@ expression: parse_program(&source).unwrap()
 location: Location {
 row: 8,
 column: 1,
+file: FileName(
+"unknown",
+),
 },
 custom: (),
 node: Assign {
@@ -321,6 +403,9 @@ expression: parse_program(&source).unwrap()
 location: Location {
 row: 8,
 column: 1,
+file: FileName(
+"unknown",
+),
 },
 custom: (),
 node: Name {
@ -333,6 +418,9 @@ expression: parse_program(&source).unwrap()
location: Location { location: Location {
row: 8, row: 8,
column: 6, column: 6,
file: FileName(
"unknown",
),
}, },
custom: (), custom: (),
node: Constant { node: Constant {
@ -354,6 +442,9 @@ expression: parse_program(&source).unwrap()
location: Location { location: Location {
row: 9, row: 9,
column: 1, column: 1,
file: FileName(
"unknown",
),
}, },
custom: (), custom: (),
node: If { node: If {
@ -361,6 +452,9 @@ expression: parse_program(&source).unwrap()
location: Location { location: Location {
row: 9, row: 9,
column: 4, column: 4,
file: FileName(
"unknown",
),
}, },
custom: (), custom: (),
node: Name { node: Name {
@ -373,6 +467,9 @@ expression: parse_program(&source).unwrap()
location: Location { location: Location {
row: 10, row: 10,
column: 5, column: 5,
file: FileName(
"unknown",
),
}, },
custom: (), custom: (),
node: Expr { node: Expr {
@ -380,6 +477,9 @@ expression: parse_program(&source).unwrap()
location: Location { location: Location {
row: 10, row: 10,
column: 5, column: 5,
file: FileName(
"unknown",
),
}, },
custom: (), custom: (),
node: Name { node: Name {
@ -401,6 +501,9 @@ expression: parse_program(&source).unwrap()
location: Location { location: Location {
row: 11, row: 11,
column: 1, column: 1,
file: FileName(
"unknown",
),
}, },
custom: (), custom: (),
node: For { node: For {
@ -408,6 +511,9 @@ expression: parse_program(&source).unwrap()
location: Location { location: Location {
row: 11, row: 11,
column: 5, column: 5,
file: FileName(
"unknown",
),
}, },
custom: (), custom: (),
node: Name { node: Name {
@ -419,6 +525,9 @@ expression: parse_program(&source).unwrap()
location: Location { location: Location {
row: 11, row: 11,
column: 10, column: 10,
file: FileName(
"unknown",
),
}, },
custom: (), custom: (),
node: Name { node: Name {
@ -431,6 +540,9 @@ expression: parse_program(&source).unwrap()
location: Location { location: Location {
row: 12, row: 12,
column: 5, column: 5,
file: FileName(
"unknown",
),
}, },
custom: (), custom: (),
node: Pass { node: Pass {


@ -1,6 +1,7 @@
--- ---
source: nac3parser/src/parser.rs source: nac3parser/src/parser.rs
expression: parse_program(&source).unwrap() assertion_line: 186
expression: "parse_program(&source, Default::default()).unwrap()"
--- ---
[ [
@ -8,6 +9,9 @@ expression: parse_program(&source).unwrap()
location: Location { location: Location {
row: 1, row: 1,
column: 1, column: 1,
file: FileName(
"unknown",
),
}, },
custom: (), custom: (),
node: AnnAssign { node: AnnAssign {
@ -15,6 +19,9 @@ expression: parse_program(&source).unwrap()
location: Location { location: Location {
row: 1, row: 1,
column: 1, column: 1,
file: FileName(
"unknown",
),
}, },
custom: (), custom: (),
node: Name { node: Name {
@ -26,6 +33,9 @@ expression: parse_program(&source).unwrap()
location: Location { location: Location {
row: 1, row: 1,
column: 4, column: 4,
file: FileName(
"unknown",
),
}, },
custom: (), custom: (),
node: Name { node: Name {
@ -44,6 +54,9 @@ expression: parse_program(&source).unwrap()
location: Location { location: Location {
row: 2, row: 2,
column: 1, column: 1,
file: FileName(
"unknown",
),
}, },
custom: (), custom: (),
node: For { node: For {
@ -51,6 +64,9 @@ expression: parse_program(&source).unwrap()
location: Location { location: Location {
row: 3, row: 3,
column: 5, column: 5,
file: FileName(
"unknown",
),
}, },
custom: (), custom: (),
node: Name { node: Name {
@ -62,6 +78,9 @@ expression: parse_program(&source).unwrap()
location: Location { location: Location {
row: 3, row: 3,
column: 11, column: 11,
file: FileName(
"unknown",
),
}, },
custom: (), custom: (),
node: Tuple { node: Tuple {
@ -70,6 +89,9 @@ expression: parse_program(&source).unwrap()
location: Location { location: Location {
row: 3, row: 3,
column: 11, column: 11,
file: FileName(
"unknown",
),
}, },
custom: (), custom: (),
node: Constant { node: Constant {
@ -83,6 +105,9 @@ expression: parse_program(&source).unwrap()
location: Location { location: Location {
row: 3, row: 3,
column: 15, column: 15,
file: FileName(
"unknown",
),
}, },
custom: (), custom: (),
node: Constant { node: Constant {
@ -101,6 +126,9 @@ expression: parse_program(&source).unwrap()
location: Location { location: Location {
row: 4, row: 4,
column: 5, column: 5,
file: FileName(
"unknown",
),
}, },
custom: (), custom: (),
node: AnnAssign { node: AnnAssign {
@ -108,6 +136,9 @@ expression: parse_program(&source).unwrap()
location: Location { location: Location {
row: 4, row: 4,
column: 5, column: 5,
file: FileName(
"unknown",
),
}, },
custom: (), custom: (),
node: Name { node: Name {
@ -119,6 +150,9 @@ expression: parse_program(&source).unwrap()
location: Location { location: Location {
row: 4, row: 4,
column: 8, column: 8,
file: FileName(
"unknown",
),
}, },
custom: (), custom: (),
node: Name { node: Name {
@ -144,6 +178,9 @@ expression: parse_program(&source).unwrap()
location: Location { location: Location {
row: 5, row: 5,
column: 1, column: 1,
file: FileName(
"unknown",
),
}, },
custom: (), custom: (),
node: While { node: While {
@ -151,6 +188,9 @@ expression: parse_program(&source).unwrap()
location: Location { location: Location {
row: 7, row: 7,
column: 9, column: 9,
file: FileName(
"unknown",
),
}, },
custom: (), custom: (),
node: Compare { node: Compare {
@ -158,6 +198,9 @@ expression: parse_program(&source).unwrap()
location: Location { location: Location {
row: 7, row: 7,
column: 7, column: 7,
file: FileName(
"unknown",
),
}, },
custom: (), custom: (),
node: Name { node: Name {
@ -173,6 +216,9 @@ expression: parse_program(&source).unwrap()
location: Location { location: Location {
row: 7, row: 7,
column: 11, column: 11,
file: FileName(
"unknown",
),
}, },
custom: (), custom: (),
node: Constant { node: Constant {
@ -190,6 +236,9 @@ expression: parse_program(&source).unwrap()
location: Location { location: Location {
row: 9, row: 9,
column: 5, column: 5,
file: FileName(
"unknown",
),
}, },
custom: (), custom: (),
node: Pass { node: Pass {
@ -202,6 +251,9 @@ expression: parse_program(&source).unwrap()
location: Location { location: Location {
row: 12, row: 12,
column: 5, column: 5,
file: FileName(
"unknown",
),
}, },
custom: (), custom: (),
node: Expr { node: Expr {
@ -209,6 +261,9 @@ expression: parse_program(&source).unwrap()
location: Location { location: Location {
row: 12, row: 12,
column: 7, column: 7,
file: FileName(
"unknown",
),
}, },
custom: (), custom: (),
node: BinOp { node: BinOp {
@ -216,6 +271,9 @@ expression: parse_program(&source).unwrap()
location: Location { location: Location {
row: 12, row: 12,
column: 5, column: 5,
file: FileName(
"unknown",
),
}, },
custom: (), custom: (),
node: Constant { node: Constant {
@ -230,6 +288,9 @@ expression: parse_program(&source).unwrap()
location: Location { location: Location {
row: 12, row: 12,
column: 9, column: 9,
file: FileName(
"unknown",
),
}, },
custom: (), custom: (),
node: Constant { node: Constant {
@ -252,6 +313,9 @@ expression: parse_program(&source).unwrap()
location: Location { location: Location {
row: 13, row: 13,
column: 5, column: 5,
file: FileName(
"unknown",
),
}, },
custom: (), custom: (),
node: If { node: If {
@ -259,6 +323,9 @@ expression: parse_program(&source).unwrap()
location: Location { location: Location {
row: 15, row: 15,
column: 8, column: 8,
file: FileName(
"unknown",
),
}, },
custom: (), custom: (),
node: Constant { node: Constant {
@ -273,6 +340,9 @@ expression: parse_program(&source).unwrap()
location: Location { location: Location {
row: 16, row: 16,
column: 9, column: 9,
file: FileName(
"unknown",
),
}, },
custom: (), custom: (),
node: Expr { node: Expr {
@ -280,6 +350,9 @@ expression: parse_program(&source).unwrap()
location: Location { location: Location {
row: 16, row: 16,
column: 9, column: 9,
file: FileName(
"unknown",
),
}, },
custom: (), custom: (),
node: Constant { node: Constant {


@ -1,6 +1,7 @@
--- ---
source: nac3parser/src/parser.rs source: nac3parser/src/parser.rs
expression: parse_program(&source).unwrap() assertion_line: 143
expression: "parse_program(&source, Default::default()).unwrap()"
--- ---
[ [
@ -8,6 +9,9 @@ expression: parse_program(&source).unwrap()
location: Location { location: Location {
row: 1, row: 1,
column: 1, column: 1,
file: FileName(
"unknown",
),
}, },
custom: (), custom: (),
node: ClassDef { node: ClassDef {
@ -17,6 +21,9 @@ expression: parse_program(&source).unwrap()
location: Location { location: Location {
row: 1, row: 1,
column: 11, column: 11,
file: FileName(
"unknown",
),
}, },
custom: (), custom: (),
node: Name { node: Name {
@ -28,6 +35,9 @@ expression: parse_program(&source).unwrap()
location: Location { location: Location {
row: 1, row: 1,
column: 14, column: 14,
file: FileName(
"unknown",
),
}, },
custom: (), custom: (),
node: Name { node: Name {
@ -42,6 +52,9 @@ expression: parse_program(&source).unwrap()
location: Location { location: Location {
row: 2, row: 2,
column: 2, column: 2,
file: FileName(
"unknown",
),
}, },
custom: (), custom: (),
node: FunctionDef { node: FunctionDef {
@ -53,6 +66,9 @@ expression: parse_program(&source).unwrap()
location: Location { location: Location {
row: 2, row: 2,
column: 15, column: 15,
file: FileName(
"unknown",
),
}, },
custom: (), custom: (),
node: ArgData { node: ArgData {
@ -73,6 +89,9 @@ expression: parse_program(&source).unwrap()
location: Location { location: Location {
row: 3, row: 3,
column: 3, column: 3,
file: FileName(
"unknown",
),
}, },
custom: (), custom: (),
node: Pass { node: Pass {
@ -90,6 +109,9 @@ expression: parse_program(&source).unwrap()
location: Location { location: Location {
row: 4, row: 4,
column: 2, column: 2,
file: FileName(
"unknown",
),
}, },
custom: (), custom: (),
node: FunctionDef { node: FunctionDef {
@ -101,6 +123,9 @@ expression: parse_program(&source).unwrap()
location: Location { location: Location {
row: 4, row: 4,
column: 26, column: 26,
file: FileName(
"unknown",
),
}, },
custom: (), custom: (),
node: ArgData { node: ArgData {
@ -113,6 +138,9 @@ expression: parse_program(&source).unwrap()
location: Location { location: Location {
row: 4, row: 4,
column: 32, column: 32,
file: FileName(
"unknown",
),
}, },
custom: (), custom: (),
node: ArgData { node: ArgData {
@ -131,6 +159,9 @@ expression: parse_program(&source).unwrap()
location: Location { location: Location {
row: 4, row: 4,
column: 37, column: 37,
file: FileName(
"unknown",
),
}, },
custom: (), custom: (),
node: Constant { node: Constant {
@ -147,6 +178,9 @@ expression: parse_program(&source).unwrap()
location: Location { location: Location {
row: 5, row: 5,
column: 3, column: 3,
file: FileName(
"unknown",
),
}, },
custom: (), custom: (),
node: Pass { node: Pass {


@ -1,11 +1,16 @@
--- ---
source: parser/src/parser.rs source: nac3parser/src/parser.rs
assertion_line: 150
expression: parse_ast expression: parse_ast
--- ---
Located { Located {
location: Location { location: Location {
row: 1, row: 1,
column: 1, column: 1,
file: FileName(
"unknown",
),
}, },
custom: (), custom: (),
node: DictComp { node: DictComp {
@ -13,6 +18,9 @@ Located {
location: Location { location: Location {
row: 1, row: 1,
column: 2, column: 2,
file: FileName(
"unknown",
),
}, },
custom: (), custom: (),
node: Name { node: Name {
@ -24,6 +32,9 @@ Located {
location: Location { location: Location {
row: 1, row: 1,
column: 6, column: 6,
file: FileName(
"unknown",
),
}, },
custom: (), custom: (),
node: Name { node: Name {
@ -37,6 +48,9 @@ Located {
location: Location { location: Location {
row: 1, row: 1,
column: 13, column: 13,
file: FileName(
"unknown",
),
}, },
custom: (), custom: (),
node: Name { node: Name {
@ -48,6 +62,9 @@ Located {
location: Location { location: Location {
row: 1, row: 1,
column: 18, column: 18,
file: FileName(
"unknown",
),
}, },
custom: (), custom: (),
node: Name { node: Name {


@ -1,5 +1,6 @@
--- ---
source: parser/src/parser.rs source: nac3parser/src/parser.rs
assertion_line: 164
expression: parse_ast expression: parse_ast
--- ---
@ -7,6 +8,9 @@ Located {
location: Location { location: Location {
row: 1, row: 1,
column: 1, column: 1,
file: FileName(
"unknown",
),
}, },
custom: (), custom: (),
node: ListComp { node: ListComp {
@ -14,6 +18,9 @@ Located {
location: Location { location: Location {
row: 1, row: 1,
column: 2, column: 2,
file: FileName(
"unknown",
),
}, },
custom: (), custom: (),
node: Name { node: Name {
@ -27,6 +34,9 @@ Located {
location: Location { location: Location {
row: 1, row: 1,
column: 8, column: 8,
file: FileName(
"unknown",
),
}, },
custom: (), custom: (),
node: Tuple { node: Tuple {
@ -35,6 +45,9 @@ Located {
location: Location { location: Location {
row: 1, row: 1,
column: 8, column: 8,
file: FileName(
"unknown",
),
}, },
custom: (), custom: (),
node: Name { node: Name {
@ -46,6 +59,9 @@ Located {
location: Location { location: Location {
row: 1, row: 1,
column: 11, column: 11,
file: FileName(
"unknown",
),
}, },
custom: (), custom: (),
node: Name { node: Name {
@ -61,6 +77,9 @@ Located {
location: Location { location: Location {
row: 1, row: 1,
column: 17, column: 17,
file: FileName(
"unknown",
),
}, },
custom: (), custom: (),
node: Name { node: Name {
@ -76,6 +95,9 @@ Located {
location: Location { location: Location {
row: 1, row: 1,
column: 23, column: 23,
file: FileName(
"unknown",
),
}, },
custom: (), custom: (),
node: Name { node: Name {
@ -87,6 +109,9 @@ Located {
location: Location { location: Location {
row: 1, row: 1,
column: 28, column: 28,
file: FileName(
"unknown",
),
}, },
custom: (), custom: (),
node: Name { node: Name {
@ -99,6 +124,9 @@ Located {
location: Location { location: Location {
row: 1, row: 1,
column: 35, column: 35,
file: FileName(
"unknown",
),
}, },
custom: (), custom: (),
node: Compare { node: Compare {
@ -106,6 +134,9 @@ Located {
location: Location { location: Location {
row: 1, row: 1,
column: 33, column: 33,
file: FileName(
"unknown",
),
}, },
custom: (), custom: (),
node: Name { node: Name {
@ -121,6 +152,9 @@ Located {
location: Location { location: Location {
row: 1, row: 1,
column: 37, column: 37,
file: FileName(
"unknown",
),
}, },
custom: (), custom: (),
node: Constant { node: Constant {
@ -137,6 +171,9 @@ Located {
location: Location { location: Location {
row: 1, row: 1,
column: 44, column: 44,
file: FileName(
"unknown",
),
}, },
custom: (), custom: (),
node: Compare { node: Compare {
@ -144,6 +181,9 @@ Located {
location: Location { location: Location {
row: 1, row: 1,
column: 42, column: 42,
file: FileName(
"unknown",
),
}, },
custom: (), custom: (),
node: Name { node: Name {
@ -159,6 +199,9 @@ Located {
location: Location { location: Location {
row: 1, row: 1,
column: 46, column: 46,
file: FileName(
"unknown",
),
}, },
custom: (), custom: (),
node: Constant { node: Constant {


@ -1,5 +1,7 @@
--- ---
source: parser/src/parser.rs source: nac3parser/src/parser.rs
assertion_line: 90
expression: parse_ast expression: parse_ast
--- ---
[] []


@ -1,5 +1,6 @@
--- ---
source: nac3parser/src/parser.rs source: nac3parser/src/parser.rs
assertion_line: 118
expression: parse_ast expression: parse_ast
--- ---
@ -8,6 +9,9 @@ expression: parse_ast
location: Location { location: Location {
row: 1, row: 1,
column: 1, column: 1,
file: FileName(
"unknown",
),
}, },
custom: (), custom: (),
node: If { node: If {
@ -15,6 +19,9 @@ expression: parse_ast
location: Location { location: Location {
row: 1, row: 1,
column: 4, column: 4,
file: FileName(
"unknown",
),
}, },
custom: (), custom: (),
node: Constant { node: Constant {
@ -29,6 +36,9 @@ expression: parse_ast
location: Location { location: Location {
row: 1, row: 1,
column: 7, column: 7,
file: FileName(
"unknown",
),
}, },
custom: (), custom: (),
node: Expr { node: Expr {
@ -36,6 +46,9 @@ expression: parse_ast
location: Location { location: Location {
row: 1, row: 1,
column: 7, column: 7,
file: FileName(
"unknown",
),
}, },
custom: (), custom: (),
node: Constant { node: Constant {
@ -54,6 +67,9 @@ expression: parse_ast
location: Location { location: Location {
row: 2, row: 2,
column: 1, column: 1,
file: FileName(
"unknown",
),
}, },
custom: (), custom: (),
node: If { node: If {
@ -61,6 +77,9 @@ expression: parse_ast
location: Location { location: Location {
row: 2, row: 2,
column: 6, column: 6,
file: FileName(
"unknown",
),
}, },
custom: (), custom: (),
node: Constant { node: Constant {
@ -75,6 +94,9 @@ expression: parse_ast
location: Location { location: Location {
row: 2, row: 2,
column: 9, column: 9,
file: FileName(
"unknown",
),
}, },
custom: (), custom: (),
node: Expr { node: Expr {
@ -82,6 +104,9 @@ expression: parse_ast
location: Location { location: Location {
row: 2, row: 2,
column: 9, column: 9,
file: FileName(
"unknown",
),
}, },
custom: (), custom: (),
node: Constant { node: Constant {
@ -100,6 +125,9 @@ expression: parse_ast
location: Location { location: Location {
row: 3, row: 3,
column: 7, column: 7,
file: FileName(
"unknown",
),
}, },
custom: (), custom: (),
node: Expr { node: Expr {
@ -107,6 +135,9 @@ expression: parse_ast
location: Location { location: Location {
row: 3, row: 3,
column: 7, column: 7,
file: FileName(
"unknown",
),
}, },
custom: (), custom: (),
node: Constant { node: Constant {


@ -1,5 +1,6 @@
--- ---
source: nac3parser/src/parser.rs source: nac3parser/src/parser.rs
assertion_line: 111
expression: parse_ast expression: parse_ast
--- ---
@ -8,6 +9,9 @@ expression: parse_ast
location: Location { location: Location {
row: 1, row: 1,
column: 1, column: 1,
file: FileName(
"unknown",
),
}, },
custom: (), custom: (),
node: Expr { node: Expr {
@ -15,6 +19,9 @@ expression: parse_ast
location: Location { location: Location {
row: 1, row: 1,
column: 8, column: 8,
file: FileName(
"unknown",
),
}, },
custom: (), custom: (),
node: Call { node: Call {
@ -22,6 +29,9 @@ expression: parse_ast
location: Location { location: Location {
row: 1, row: 1,
column: 1, column: 1,
file: FileName(
"unknown",
),
}, },
custom: (), custom: (),
node: Name { node: Name {
@ -34,6 +44,9 @@ expression: parse_ast
location: Location { location: Location {
row: 1, row: 1,
column: 10, column: 10,
file: FileName(
"unknown",
),
}, },
custom: (), custom: (),
node: Constant { node: Constant {
@ -49,6 +62,9 @@ expression: parse_ast
location: Location { location: Location {
row: 1, row: 1,
column: 23, column: 23,
file: FileName(
"unknown",
),
}, },
custom: (), custom: (),
node: KeywordData { node: KeywordData {
@ -59,6 +75,9 @@ expression: parse_ast
location: Location { location: Location {
row: 1, row: 1,
column: 31, column: 31,
file: FileName(
"unknown",
),
}, },
custom: (), custom: (),
node: Constant { node: Constant {


@ -1,5 +1,6 @@
--- ---
source: nac3parser/src/parser.rs source: nac3parser/src/parser.rs
assertion_line: 125
expression: parse_ast expression: parse_ast
--- ---
@ -8,6 +9,9 @@ expression: parse_ast
location: Location { location: Location {
row: 1, row: 1,
column: 1, column: 1,
file: FileName(
"unknown",
),
}, },
custom: (), custom: (),
node: Expr { node: Expr {
@ -15,6 +19,9 @@ expression: parse_ast
location: Location { location: Location {
row: 1, row: 1,
column: 1, column: 1,
file: FileName(
"unknown",
),
}, },
custom: (), custom: (),
node: Lambda { node: Lambda {
@ -25,6 +32,9 @@ expression: parse_ast
location: Location { location: Location {
row: 1, row: 1,
column: 8, column: 8,
file: FileName(
"unknown",
),
}, },
custom: (), custom: (),
node: ArgData { node: ArgData {
@ -37,6 +47,9 @@ expression: parse_ast
location: Location { location: Location {
row: 1, row: 1,
column: 11, column: 11,
file: FileName(
"unknown",
),
}, },
custom: (), custom: (),
node: ArgData { node: ArgData {
@ -56,6 +69,9 @@ expression: parse_ast
location: Location { location: Location {
row: 1, row: 1,
column: 16, column: 16,
file: FileName(
"unknown",
),
}, },
custom: (), custom: (),
node: BinOp { node: BinOp {
@ -63,6 +79,9 @@ expression: parse_ast
location: Location { location: Location {
row: 1, row: 1,
column: 14, column: 14,
file: FileName(
"unknown",
),
}, },
custom: (), custom: (),
node: Name { node: Name {
@ -75,6 +94,9 @@ expression: parse_ast
location: Location { location: Location {
row: 1, row: 1,
column: 18, column: 18,
file: FileName(
"unknown",
),
}, },
custom: (), custom: (),
node: Name { node: Name {


@ -1,11 +1,16 @@
--- ---
source: parser/src/parser.rs source: nac3parser/src/parser.rs
assertion_line: 157
expression: parse_ast expression: parse_ast
--- ---
Located { Located {
location: Location { location: Location {
row: 1, row: 1,
column: 1, column: 1,
file: FileName(
"unknown",
),
}, },
custom: (), custom: (),
node: ListComp { node: ListComp {
@ -13,6 +18,9 @@ Located {
location: Location { location: Location {
row: 1, row: 1,
column: 2, column: 2,
file: FileName(
"unknown",
),
}, },
custom: (), custom: (),
node: Name { node: Name {
@ -26,6 +34,9 @@ Located {
location: Location { location: Location {
row: 1, row: 1,
column: 8, column: 8,
file: FileName(
"unknown",
),
}, },
custom: (), custom: (),
node: Name { node: Name {
@ -37,6 +48,9 @@ Located {
location: Location { location: Location {
row: 1, row: 1,
column: 13, column: 13,
file: FileName(
"unknown",
),
}, },
custom: (), custom: (),
node: Name { node: Name {


@ -1,5 +1,6 @@
--- ---
source: nac3parser/src/parser.rs source: nac3parser/src/parser.rs
assertion_line: 104
expression: parse_ast expression: parse_ast
--- ---
@ -8,6 +9,9 @@ expression: parse_ast
location: Location { location: Location {
row: 1, row: 1,
column: 1, column: 1,
file: FileName(
"unknown",
),
}, },
custom: (), custom: (),
node: Expr { node: Expr {
@ -15,6 +19,9 @@ expression: parse_ast
location: Location { location: Location {
row: 1, row: 1,
column: 6, column: 6,
file: FileName(
"unknown",
),
}, },
custom: (), custom: (),
node: Call { node: Call {
@ -22,6 +29,9 @@ expression: parse_ast
location: Location { location: Location {
row: 1, row: 1,
column: 1, column: 1,
file: FileName(
"unknown",
),
}, },
custom: (), custom: (),
node: Name { node: Name {
@ -34,6 +44,9 @@ expression: parse_ast
location: Location { location: Location {
row: 1, row: 1,
column: 8, column: 8,
file: FileName(
"unknown",
),
}, },
custom: (), custom: (),
node: Constant { node: Constant {
@ -47,6 +60,9 @@ expression: parse_ast
location: Location { location: Location {
row: 1, row: 1,
column: 22, column: 22,
file: FileName(
"unknown",
),
}, },
custom: (), custom: (),
node: Constant { node: Constant {


@ -1,5 +1,6 @@
--- ---
source: nac3parser/src/parser.rs source: nac3parser/src/parser.rs
assertion_line: 97
expression: parse_ast expression: parse_ast
--- ---
@ -8,6 +9,9 @@ expression: parse_ast
location: Location { location: Location {
row: 1, row: 1,
column: 1, column: 1,
file: FileName(
"unknown",
),
}, },
custom: (), custom: (),
node: Expr { node: Expr {
@ -15,6 +19,9 @@ expression: parse_ast
location: Location { location: Location {
row: 1, row: 1,
column: 6, column: 6,
file: FileName(
"unknown",
),
}, },
custom: (), custom: (),
node: Call { node: Call {
@ -22,6 +29,9 @@ expression: parse_ast
location: Location { location: Location {
row: 1, row: 1,
column: 1, column: 1,
file: FileName(
"unknown",
),
}, },
custom: (), custom: (),
node: Name { node: Name {
@ -34,6 +44,9 @@ expression: parse_ast
location: Location { location: Location {
row: 1, row: 1,
column: 8, column: 8,
file: FileName(
"unknown",
),
}, },
custom: (), custom: (),
node: Constant { node: Constant {


@ -1,6 +1,7 @@
--- ---
source: nac3parser/src/parser.rs source: nac3parser/src/parser.rs
expression: parse_program(&source).unwrap() assertion_line: 132
expression: "parse_program(&source, Default::default()).unwrap()"
--- ---
[ [
@ -8,6 +9,9 @@ expression: parse_program(&source).unwrap()
location: Location { location: Location {
row: 1, row: 1,
column: 1, column: 1,
file: FileName(
"unknown",
),
}, },
custom: (), custom: (),
node: Assign { node: Assign {
@ -16,6 +20,9 @@ expression: parse_program(&source).unwrap()
location: Location { location: Location {
row: 1, row: 1,
column: 1, column: 1,
file: FileName(
"unknown",
),
}, },
custom: (), custom: (),
node: Tuple { node: Tuple {
@ -24,6 +31,9 @@ expression: parse_program(&source).unwrap()
location: Location { location: Location {
row: 1, row: 1,
column: 1, column: 1,
file: FileName(
"unknown",
),
}, },
custom: (), custom: (),
node: Name { node: Name {
@ -35,6 +45,9 @@ expression: parse_program(&source).unwrap()
location: Location { location: Location {
row: 1, row: 1,
column: 4, column: 4,
file: FileName(
"unknown",
),
}, },
custom: (), custom: (),
node: Name { node: Name {
@ -51,6 +64,9 @@ expression: parse_program(&source).unwrap()
location: Location { location: Location {
row: 1, row: 1,
column: 8, column: 8,
file: FileName(
"unknown",
),
}, },
custom: (), custom: (),
node: Tuple { node: Tuple {
@ -59,6 +75,9 @@ expression: parse_program(&source).unwrap()
location: Location { location: Location {
row: 1, row: 1,
column: 8, column: 8,
file: FileName(
"unknown",
),
}, },
custom: (), custom: (),
node: Constant { node: Constant {
@ -72,6 +91,9 @@ expression: parse_program(&source).unwrap()
location: Location { location: Location {
row: 1, row: 1,
column: 11, column: 11,
file: FileName(
"unknown",
),
}, },
custom: (), custom: (),
node: Constant { node: Constant {


@ -1,6 +1,7 @@
--- ---
source: nac3parser/src/parser.rs source: nac3parser/src/parser.rs
expression: parse_program(&source).unwrap() assertion_line: 199
expression: "parse_program(&source, Default::default()).unwrap()"
--- ---
[ [
@ -8,6 +9,9 @@ expression: parse_program(&source).unwrap()
location: Location { location: Location {
row: 1, row: 1,
column: 1, column: 1,
file: FileName(
"unknown",
),
}, },
custom: (), custom: (),
node: While { node: While {
@ -15,6 +19,9 @@ expression: parse_program(&source).unwrap()
location: Location { location: Location {
row: 4, row: 4,
column: 7, column: 7,
file: FileName(
"unknown",
),
}, },
custom: (), custom: (),
node: Name { node: Name {
@ -27,6 +34,9 @@ expression: parse_program(&source).unwrap()
location: Location { location: Location {
row: 6, row: 6,
column: 5, column: 5,
file: FileName(
"unknown",
),
}, },
custom: (), custom: (),
node: Assign { node: Assign {
@ -35,6 +45,9 @@ expression: parse_program(&source).unwrap()
location: Location { location: Location {
row: 6, row: 6,
column: 5, column: 5,
file: FileName(
"unknown",
),
}, },
custom: (), custom: (),
node: Name { node: Name {
@ -47,6 +60,9 @@ expression: parse_program(&source).unwrap()
location: Location { location: Location {
row: 6, row: 6,
column: 9, column: 9,
file: FileName(
"unknown",
),
}, },
custom: (), custom: (),
node: Constant { node: Constant {


@ -1,6 +1,5 @@
//! Different token definitions. //! Different token definitions.
//! Loosely based on token.h from CPython source: //! Loosely based on token.h from CPython source:
use num_bigint::BigInt;
use std::fmt::{self, Write}; use std::fmt::{self, Write};
use crate::ast; use crate::ast;
@ -8,7 +7,7 @@ use crate::ast;
#[derive(Clone, Debug, PartialEq)] #[derive(Clone, Debug, PartialEq)]
pub enum Tok { pub enum Tok {
Name { name: ast::StrRef }, Name { name: ast::StrRef },
Int { value: BigInt }, Int { value: i128 },
Float { value: f64 }, Float { value: f64 },
Complex { real: f64, imag: f64 }, Complex { real: f64, imag: f64 },
String { value: String, is_fstring: bool }, String { value: String, is_fstring: bool },
@ -113,7 +112,7 @@ impl fmt::Display for Tok {
use Tok::*; use Tok::*;
match self { match self {
Name { name } => write!(f, "'{}'", ast::get_str_from_ref(&ast::get_str_ref_lock(), *name)), Name { name } => write!(f, "'{}'", ast::get_str_from_ref(&ast::get_str_ref_lock(), *name)),
Int { value } => write!(f, "'{}'", value), Int { value } => if *value != i128::MAX { write!(f, "'{}'", value) } else { write!(f, "'#OFL#'") },
Float { value } => write!(f, "'{}'", value), Float { value } => write!(f, "'{}'", value),
Complex { real, imag } => write!(f, "{}j{}", real, imag), Complex { real, imag } => write!(f, "{}j{}", real, imag),
String { value, is_fstring } => { String { value, is_fstring } => {
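The token diff above switches `Tok::Int` from `BigInt` to `i128` and reserves `i128::MAX` as an out-of-range sentinel that displays as `'#OFL#'`. A minimal standalone sketch of that display scheme (not the parser's actual code; `display_int` is a hypothetical helper for illustration):

```rust
// Sketch of the overflow-sentinel display introduced by the Tok diff:
// integer literals are stored as i128, and i128::MAX is reserved as an
// "overflowed literal" marker that renders as '#OFL#' instead of a number.
fn display_int(value: i128) -> String {
    if value != i128::MAX {
        format!("'{}'", value)
    } else {
        "'#OFL#'".to_string()
    }
}

fn main() {
    assert_eq!(display_int(42), "'42'");
    assert_eq!(display_int(i128::MAX), "'#OFL#'");
    println!("{}", display_int(42));
}
```

The trade-off is that `i128::MAX` itself can no longer be represented as a literal value, in exchange for dropping the `num_bigint` dependency.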


@ -5,7 +5,11 @@ authors = ["M-Labs"]
edition = "2018" edition = "2018"
[dependencies] [dependencies]
inkwell = { git = "https://github.com/TheDan64/inkwell", branch = "master", features = ["llvm12-0"] }
parking_lot = "0.11.1" parking_lot = "0.11.1"
nac3parser = { path = "../nac3parser" } nac3parser = { path = "../nac3parser" }
nac3core = { path = "../nac3core" } nac3core = { path = "../nac3core" }
[dependencies.inkwell]
version = "0.1.0-beta.4"
default-features = false
features = ["llvm13-0", "target-x86", "target-arm", "target-riscv", "no-libffi-linking"]


@ -0,0 +1,15 @@
#!/usr/bin/env bash
set -e
count=0
for demo in src/*.py; do
echo -n "checking $demo... "
./interpret_demo.py $demo > interpreted.log
./run_demo.sh $demo > run.log
diff -Nau interpreted.log run.log
echo "ok"
let "count+=1"
done
echo "Ran $count demo checks - PASSED"


@ -1,26 +0,0 @@
#include <stdio.h>
#include <string.h>
void output_int(int x) {
printf("%d\n", x);
}
void output_asciiart(int x) {
static char chars[] = " .,-:;i+hHM$*#@ ";
if(x < 0) {
putchar('\n');
} else {
if(x < strlen(chars)) {
putchar(chars[x]);
} else {
printf("ERROR\n");
}
}
}
extern int run();
int main() {
run();
return 0;
}


@@ -0,0 +1,85 @@
mod cslice {
    // copied from https://github.com/dherman/cslice
    use std::marker::PhantomData;
    use std::slice;

    #[repr(C)]
    #[derive(Clone, Copy)]
    pub struct CSlice<'a, T> {
        base: *const T,
        len: usize,
        marker: PhantomData<&'a ()>,
    }

    impl<'a, T> AsRef<[T]> for CSlice<'a, T> {
        fn as_ref(&self) -> &[T] {
            unsafe { slice::from_raw_parts(self.base, self.len) }
        }
    }
}

#[no_mangle]
pub extern "C" fn output_int32(x: i32) {
    println!("{}", x);
}

#[no_mangle]
pub extern "C" fn output_int64(x: i64) {
    println!("{}", x);
}

#[no_mangle]
pub extern "C" fn output_uint32(x: u32) {
    println!("{}", x);
}

#[no_mangle]
pub extern "C" fn output_uint64(x: u64) {
    println!("{}", x);
}

#[no_mangle]
pub extern "C" fn output_float64(x: f64) {
    // debug output to preserve the digits after the decimal points
    // to match python `print` function
    println!("{:?}", x);
}

#[no_mangle]
pub extern "C" fn output_asciiart(x: i32) {
    let chars = " .,-:;i+hHM$*#@ ";
    if x < 0 {
        println!("");
    } else {
        print!("{}", chars.chars().nth(x as usize).unwrap());
    }
}

#[no_mangle]
pub extern "C" fn output_int32_list(x: &cslice::CSlice<i32>) {
    print!("[");
    let mut it = x.as_ref().iter().peekable();
    while let Some(e) = it.next() {
        if it.peek().is_none() {
            print!("{}", e);
        } else {
            print!("{}, ", e);
        }
    }
    println!("]");
}

#[no_mangle]
pub extern "C" fn __nac3_personality(_state: u32, _exception_object: u32, _context: u32) -> u32 {
    unimplemented!();
}

extern "C" {
    fn run() -> i32;
}

fn main() {
    unsafe {
        run();
    }
}
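The `{:?}` formatter in `output_float64` exists because Rust's `Display` (`{}`) prints `1.0_f64` as `1`, whereas Python's `print` keeps the trailing `.0`; Rust's `Debug` (`{:?}`) matches Python. A quick check of the Python side of that contract:

```python
# Python's str()/print always keeps at least one digit after the decimal
# point for floats, which Rust's Debug formatting ({:?}) also does, while
# Rust's Display ({}) would print just "1" for 1.0_f64.
assert str(1.0) == "1.0"
assert str(2.5) == "2.5"
# repr and str agree for floats in Python 3, so print output is round-trippable.
assert repr(1.0) == str(1.0)
print(1.0)  # → 1.0
```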

View File

@@ -0,0 +1,71 @@
#!/usr/bin/env python3

import sys
import importlib.util
import importlib.machinery
import pathlib

from numpy import int32, int64, uint32, uint64
from typing import TypeVar, Generic


def patch(module):
    def output_asciiart(x):
        if x < 0:
            sys.stdout.write("\n")
        else:
            sys.stdout.write(" .,-:;i+hHM$*#@ "[x])

    def extern(fun):
        name = fun.__name__
        if name == "output_asciiart":
            return output_asciiart
        elif name in {
            "output_int32",
            "output_int64",
            "output_int32_list",
            "output_uint32",
            "output_uint64",
            "output_float64"
        }:
            return print
        else:
            raise NotImplementedError

    module.int32 = int32
    module.int64 = int64
    module.uint32 = uint32
    module.uint64 = uint64
    module.TypeVar = TypeVar
    module.Generic = Generic
    module.extern = extern


def file_import(filename, prefix="file_import_"):
    filename = pathlib.Path(filename)
    modname = prefix + filename.stem

    path = str(filename.resolve().parent)
    sys.path.insert(0, path)

    try:
        spec = importlib.util.spec_from_loader(
            modname,
            importlib.machinery.SourceFileLoader(modname, str(filename)),
        )
        module = importlib.util.module_from_spec(spec)
        patch(module)
        spec.loader.exec_module(module)
    finally:
        sys.path.remove(path)

    return module


def main():
    demo = file_import(sys.argv[1])
    demo.run()


if __name__ == "__main__":
    main()
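The `file_import` machinery can be exercised on its own. This sketch (the module name and body are hypothetical, for illustration) loads source from an explicit path via `spec_from_loader` the same way, bypassing the normal `sys.path` search:

```python
import importlib.machinery
import importlib.util
import pathlib
import tempfile

# Write a throwaway module to disk (hypothetical content for illustration).
with tempfile.TemporaryDirectory() as d:
    src = pathlib.Path(d) / "demo_mod.py"
    src.write_text("def run():\n    return 42\n")

    # Same pattern as file_import: build a spec from an explicit loader, so
    # the module name need not correspond to anything on sys.path.
    spec = importlib.util.spec_from_loader(
        "file_import_demo_mod",
        importlib.machinery.SourceFileLoader("file_import_demo_mod", str(src)),
    )
    module = importlib.util.module_from_spec(spec)
    # (interpret_demo.py would patch attributes like int32/extern here,
    # before the module body runs.)
    spec.loader.exec_module(module)

    assert module.run() == 42
```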

View File

@@ -7,7 +7,14 @@ if [ -z "$1" ]; then
    exit 1
fi

+if [ -e ../../target/release/nac3standalone ]; then
+    nac3standalone=../../target/release/nac3standalone
+else
+    # used by Nix builds
+    nac3standalone=../../target/x86_64-unknown-linux-gnu/release/nac3standalone
+fi
+
rm -f *.o
-../../target/release/nac3standalone $1
+$nac3standalone $1
-clang -Wall -O2 -o $1.elf demo.c module*.o -lm
+rustc -o demo demo.rs -Crelocation-model=static -Clink-arg=./module.o
-./$1.elf
+./demo

View File

@ -1,7 +1,17 @@
@extern @extern
def output_int(x: int32): def output_int32(x: int32):
... ...
@extern
def output_int64(x: int64):
...
class B:
b: int32
def __init__(self, a: int32):
self.b = a
class A: class A:
a: int32 a: int32
@ -12,25 +22,17 @@ class A:
def get_a(self) -> int32: def get_a(self) -> int32:
return self.a return self.a
def get_self(self) -> A:
return self
def get_b(self) -> B: def get_b(self) -> B:
return self.b return self.b
class B:
b: int32
def __init__(self, a: int32):
self.b = a
def run() -> int32: def run() -> int32:
a = A(10) a = A(10)
output_int(a.a) output_int32(a.a)
a = A(20) a = A(20)
output_int(a.a) output_int32(a.a)
output_int(a.get_a()) output_int32(a.get_a())
output_int(a.get_b().b) # output_int32(a.get_b().b) FIXME: NAC3 prints garbage
return 0 return 0

View File

@@ -0,0 +1,25 @@
@extern
def output_int32(x: int32):
    ...

def run() -> int32:
    for l in [[1, 2, 3, 4, 5], [1, 2, 3], [1], []]:
        output_int32(len(l))
    for r in [
        range(10),
        range(0, 0),
        range(2, 10),
        range(0, 10, 1),
        range(1, 10, 2),
        range(-1, 10, 2),
        range(2, 10, 3),
        range(-2, 12, 5),
        range(2, 5, -1),
        range(5, 2, -1),
        range(5, 2, -2),
        range(24, -3, -6),
        range(24, -3, 6)]:
        output_int32(len(r))
    return 0
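`len(range(...))` never materialises the sequence; the count has a closed form, which a `len(range)` implementation like NAC3's has to reproduce, including clamping empty ranges such as `range(2, 5, -1)` to zero. A sketch cross-checked against the cases above (`range_len` is a hypothetical helper, not NAC3 code):

```python
def range_len(start, stop, step):
    # Closed-form length of range(start, stop, step), clamped at zero
    # for empty ranges (e.g. range(2, 5, -1)). step must be nonzero.
    if step > 0:
        return max(0, (stop - start + step - 1) // step)
    return max(0, (stop - start + step + 1) // step)

# Cross-check against CPython for every stepped case in the test above.
cases = [(0, 10, 1), (0, 0, 1), (2, 10, 1), (1, 10, 2), (-1, 10, 2),
         (2, 10, 3), (-2, 12, 5), (2, 5, -1), (5, 2, -1), (5, 2, -2),
         (24, -3, -6), (24, -3, 6)]
for start, stop, step in cases:
    assert range_len(start, stop, step) == len(range(start, stop, step))
```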

View File

@@ -0,0 +1,323 @@
@extern
def output_int32_list(x: list[int32]):
    ...

@extern
def output_int32(x: int32):
    ...

class A:
    a: int32
    b: bool
    def __init__(self, a: int32, b: bool):
        self.a = a
        self.b = b

def run() -> int32:
    data = [0, 1, 2, 3]
    output_int32_list(data[2:3])
    output_int32_list(data[:])
    output_int32_list(data[1:])
    output_int32_list(data[:-1])
    m1 = -1
    output_int32_list(data[::m1])
    output_int32_list(data[:0:m1])
    m2 = -2
    output_int32_list(data[m2::m1])

    get_list_slice()
    list_slice_assignment()
    return 0

def get_list_slice():
    il = [i for i in range(15)]
    bl = [True, False, True, True, False, False, True, False, True, True]
    fl = [1.2, 2.3, 3.4, 4.5, 5.6, 6.7, 7.8, 8.9, 9.0, 10.1]
    al = [A(i, bl[i]) for i in range(len(bl))]
    tl = [(i, al[i], fl[i], (), (i, i + 1, bl[i])) for i in range(len(bl))]
    for l0 in [
        il[:],
        il[1:1],
        il[1:2],
        il[1:0],
        il[0:10:3],
        il[0::3],
        il[:-3:3],
        il[2:-3:3],
        il[:5:-1],
        il[-4:5:-1],
        il[-4:5:-2],
        il[-4:5:-3],
        il[-4::-3],
        il[::-3],
        il[::-1],
        il[3:5:-1],
        il[3:50:3],
        il[-20:-15:2],
        il[-20:-16:2],
        il[20:-13:-5],
        il[16:50:-1],
        il[:-13:-2],
        il[15:50:1],
    ]:
        output_int32_list(l0)
    for l1 in [
        bl[:],
        bl[1:1],
        bl[1:2],
        bl[1:0],
        bl[0:10:3],
        bl[0::3],
        bl[:-3:3],
        bl[2:-3:3],
        bl[:5:-1],
        bl[-4:5:-1],
        bl[-4:5:-2],
        bl[-4:5:-3],
        bl[-4::-3],
        bl[::-3],
        bl[::-1],
        bl[3:5:-1],
        bl[3:50:3],
        bl[-20:-15:2],
        bl[-20:-16:2],
        bl[20:-13:-5],
        bl[16:50:-1],
        bl[:-13:-2],
        bl[15:50:1],
    ]:
        output_int32_list([int32(b) for b in l1])
    for l2 in [
        fl[:],
        fl[1:1],
        fl[1:2],
        fl[1:0],
        fl[0:10:3],
        fl[0::3],
        fl[:-3:3],
        fl[2:-3:3],
        fl[:5:-1],
        fl[-4:5:-1],
        fl[-4:5:-2],
        fl[-4:5:-3],
        fl[-4::-3],
        fl[::-3],
        fl[::-1],
        fl[3:5:-1],
        fl[3:50:3],
        fl[-20:-15:2],
        fl[-20:-16:2],
        fl[20:-13:-5],
        fl[16:50:-1],
        fl[:-13:-2],
        fl[15:50:1],
    ]:
        output_int32_list([int32(f) for f in l2])
    for l3 in [
        al[:],
        al[1:1],
        al[1:2],
        al[1:0],
        al[0:10:3],
        al[0::3],
        al[:-3:3],
        al[2:-3:3],
        al[:5:-1],
        al[-4:5:-1],
        al[-4:5:-2],
        al[-4:5:-3],
        al[-4::-3],
        al[::-3],
        al[::-1],
        al[3:5:-1],
        al[3:50:3],
        al[-20:-15:2],
        al[-20:-16:2],
        al[20:-13:-5],
        al[16:50:-1],
        al[:-13:-2],
        al[15:50:1],
    ]:
        output_int32_list([a.a for a in l3])
        output_int32_list([int32(a.b) for a in l3])
    for l4 in [
        tl[:],
        tl[1:1],
        tl[1:2],
        tl[1:0],
        tl[0:10:3],
        tl[0::3],
        tl[:-3:3],
        tl[2:-3:3],
        tl[:5:-1],
        tl[-4:5:-1],
        tl[-4:5:-2],
        tl[-4:5:-3],
        tl[-4::-3],
        tl[::-3],
        tl[::-1],
        tl[3:5:-1],
        tl[3:50:3],
        tl[-20:-15:2],
        tl[-20:-16:2],
        tl[20:-13:-5],
        tl[16:50:-1],
        tl[:-13:-2],
        tl[15:50:1],
    ]:
        output_int32_list([t[0] for t in l4])
        output_int32_list([t[1].a for t in l4])
        output_int32_list([int32(t[1].b) for t in l4])
        output_int32_list([int32(t[2]) for t in l4])
        output_int32_list([t[4][0] for t in l4])
        output_int32_list([t[4][1] for t in l4])
        output_int32_list([int32(t[4][2]) for t in l4])

def list_slice_assignment():
    il = [i for i in range(15)]
    bl = [True, False, True, True, False, False, True, False, True, True]
    fl = [1.2, 2.3, 3.4, 4.5, 5.6, 6.7, 7.8, 8.9, 9.0, 10.1]
    al = [A(i, bl[i]) for i in range(len(bl))]
    tl = [(i, al[i], fl[i], (), (i, i + 1, bl[i])) for i in range(len(bl))]

    il1 = il[:]
    il1[2:5] = [99,98,97]
    output_int32_list(il1)
    il2 = il[:]
    il2[2:10:3] = [99,98,97]
    output_int32_list(il2)
    il3 = il[:]
    il3[12:4:-3] = [99,98,97]
    output_int32_list(il3)
    il4 = il[:]
    il4[4::-2] = [91,93,95]
    output_int32_list(il4)
    il5 = il[:]
    il5[3:-5] = []
    output_int32_list(il5)
    il6 = il[:]
    il6[3:-5] = [99,98,97]
    output_int32_list(il6)
    il7 = il[:]
    il7[:-2] = [99]
    output_int32_list(il7)
    il8 = il[:]
    il8[4:] = [99]
    output_int32_list(il8)

    bl1 = bl[:]
    bl1[2:5] = [False, True, True]
    output_int32_list([int32(b) for b in bl1])
    bl2 = bl[:]
    bl2[2:10:3] = [False, True, True]
    output_int32_list([int32(b) for b in bl2])
    bl3 = bl[:]
    bl3[12:4:-3] = [False, True]
    output_int32_list([int32(b) for b in bl3])
    bl4 = bl[:]
    bl4[4::-2] = [False, True, False]
    output_int32_list([int32(b) for b in bl4])
    bl5 = bl[:]
    bl5[3:-5] = []
    output_int32_list([int32(b) for b in bl5])
    bl6 = bl[:]
    bl6[3:-5] = [True, False, False]
    output_int32_list([int32(b) for b in bl6])
    bl7 = bl[:]
    bl7[:-2] = [False]
    output_int32_list([int32(b) for b in bl7])
    bl8 = bl[:]
    bl8[4:] = [True]
    output_int32_list([int32(b) for b in bl8])

    tl_3 = [
        (99, A(99, False), 99.88, (), (99, 100, True)),
        (98, A(98, False), 98.77, (), (98, 99, True)),
        (97, A(97, False), 97.66, (), (97, 98, True)),
    ]
    tl_2 = [
        (88, A(88, False), 88.77, (), (88, 89, True)),
        (87, A(87, False), 87.66, (), (87, 88, True)),
    ]
    tl_1 = [(78, A(78, False), 78.77, (), (78, 79, True)),]

    tl1 = tl[:]
    tl[2:5] = tl_3
    output_int32_list([t[0] for t in tl])
    output_int32_list([t[1].a for t in tl1])
    output_int32_list([int32(t[1].b) for t in tl1])
    output_int32_list([int32(t[2]) for t in tl1])
    output_int32_list([t[4][0] for t in tl1])
    output_int32_list([t[4][1] for t in tl1])
    output_int32_list([int32(t[4][2]) for t in tl1])
    tl2 = tl[:]
    tl2[2:10:3] = tl_3
    output_int32_list([t[0] for t in tl2])
    output_int32_list([t[1].a for t in tl2])
    output_int32_list([int32(t[1].b) for t in tl2])
    output_int32_list([int32(t[2]) for t in tl2])
    output_int32_list([t[4][0] for t in tl2])
    output_int32_list([t[4][1] for t in tl2])
    output_int32_list([int32(t[4][2]) for t in tl2])
    tl3 = tl[:]
    tl3[12:4:-3] = tl_2
    output_int32_list([t[0] for t in tl3])
    output_int32_list([t[1].a for t in tl3])
    output_int32_list([int32(t[1].b) for t in tl3])
    output_int32_list([int32(t[2]) for t in tl3])
    output_int32_list([t[4][0] for t in tl3])
    output_int32_list([t[4][1] for t in tl3])
    output_int32_list([int32(t[4][2]) for t in tl3])
    tl4 = tl[:]
    tl4[4::-2] = tl_3
    output_int32_list([t[0] for t in tl4])
    output_int32_list([t[1].a for t in tl4])
    output_int32_list([int32(t[1].b) for t in tl4])
    output_int32_list([int32(t[2]) for t in tl4])
    output_int32_list([t[4][0] for t in tl4])
    output_int32_list([t[4][1] for t in tl4])
    output_int32_list([int32(t[4][2]) for t in tl4])
    tl5 = tl[:]
    tl5[3:-5] = []
    output_int32_list([t[0] for t in tl5])
    output_int32_list([t[1].a for t in tl5])
    output_int32_list([int32(t[1].b) for t in tl5])
    output_int32_list([int32(t[2]) for t in tl5])
    output_int32_list([t[4][0] for t in tl5])
    output_int32_list([t[4][1] for t in tl5])
    output_int32_list([int32(t[4][2]) for t in tl5])
    tl6 = tl[:]
    output_int32(len(tl6))
    tl6[3:-5] = tl_2
    output_int32_list([t[0] for t in tl6])
    output_int32_list([t[1].a for t in tl6])
    output_int32_list([int32(t[1].b) for t in tl6])
    output_int32_list([int32(t[2]) for t in tl6])
    output_int32_list([t[4][0] for t in tl6])
    output_int32_list([t[4][1] for t in tl6])
    output_int32_list([int32(t[4][2]) for t in tl6])
    tl7 = tl[:]
    tl7[:-2] = tl_1
    output_int32_list([t[0] for t in tl7])
    output_int32_list([t[1].a for t in tl7])
    output_int32_list([int32(t[1].b) for t in tl7])
    output_int32_list([int32(t[2]) for t in tl7])
    output_int32_list([t[4][0] for t in tl7])
    output_int32_list([t[4][1] for t in tl7])
    output_int32_list([int32(t[4][2]) for t in tl7])
    tl8 = tl[:]
    tl8[4:] = tl_1
    output_int32_list([t[0] for t in tl8])
    output_int32_list([t[1].a for t in tl8])
    output_int32_list([int32(t[1].b) for t in tl8])
    output_int32_list([int32(t[2]) for t in tl8])
    output_int32_list([t[4][0] for t in tl8])
    output_int32_list([t[4][1] for t in tl8])
    output_int32_list([int32(t[4][2]) for t in tl8])
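The CPython semantics this test pins down: a plain slice (step 1 or omitted) may be assigned a list of any length and resizes the target, while an extended slice (explicit step other than 1) must receive exactly as many elements as it selects. A minimal recap of both cases:

```python
il = list(range(15))

# Plain slices resize the list freely.
a = il[:]
a[3:-5] = []            # deletes indices 3 through 9
assert a == [0, 1, 2, 10, 11, 12, 13, 14]

b = il[:]
b[:-2] = [99]           # replaces thirteen elements with one
assert b == [99, 13, 14]

# Extended slices must match the selected length exactly.
c = il[:]
c[2:10:3] = [99, 98, 97]   # selects indices 2, 5, 8
assert c[2] == 99 and c[5] == 98 and c[8] == 97

d = il[:]
try:
    d[2:10:3] = [99, 98]   # wrong length for an extended slice
except ValueError:
    pass
else:
    assert False, "extended-slice length mismatch must raise ValueError"
```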

Some files were not shown because too many files have changed in this diff.