forked from M-Labs/nac3

Compare commits: refactor_a...ndarray-st (1149 commits)
Author | SHA1 | Date |
---|---|---|
lyken | 87d2a4ed59 | |
lyken | 9aae290727 | |
lyken | d18c769cdc | |
lyken | f41f06aec7 | |
lyken | 1303265785 | |
lyken | e9cf6ce1e5 | |
lyken | bc91ab9b13 | |
lyken | 1e06a3d199 | |
lyken | 87511ac749 | |
Sebastien Bourdeauducq | d658d9b00e | |
abdul124 | eeb474f9e6 | |
abdul124 | 88b72af2d1 | |
abdul124 | b73f6c4d68 | |
David Mak | f47cdec650 | |
David Mak | d656880e44 | |
David Mak | a91602915a | |
David Mak | 1c56005a01 | |
David Mak | bc40a32524 | |
David Mak | c820daf5f8 | |
David Mak | 25d2de67f7 | |
David Mak | 2cfb7a7e10 | |
David Mak | 9238a5e86e | |
lyken | 76defac462 | |
lyken | 650f354b74 | |
abdul124 | f062ef5f59 | |
lyken | f52086b706 | |
lyken | 0a732691c9 | |
lyken | cbff356d50 | |
lyken | 24ac3820b2 | |
David Mak | ba32fab374 | |
David Mak | c4052b6342 | |
David Mak | 66c205275f | |
David Mak | c85e412206 | |
David Mak | 075536d7bd | |
David Mak | 13beeaa2bf | |
David Mak | 2194dbddd5 | |
David Mak | 94a1d547d6 | |
lyken | d6565feed3 | |
abdul124 | 83154ef8e1 | |
lyken | 0744b938b8 | |
lyken | 56fa2b6803 | |
lyken | d06c13f936 | |
lyken | 9808923258 | |
lyken | 5b11a1dbdd | |
lyken | b21df53e0d | |
lyken | 0ec967a468 | |
lyken | ca8459dc7b | |
abdul124 | b0b804051a | |
abdul124 | 134af79fd6 | |
abdul124 | 7fe2c3496c | |
lyken | 144a3fc426 | |
lyken | 74096eb9f6 | |
lyken | 06e9d90d57 | |
lyken | d89146aa02 | |
David Mak | 5bade81ddb | |
David Mak | 0452e6de78 | |
David Mak | 635c944c90 | |
lyken | e36af3b0a3 | |
Sebastien Bourdeauducq | 5b1aa812ed | |
David Mak | d3cd2a8d99 | |
David Mak | 202a63274d | |
David Mak | 76dd5191f5 | |
David Mak | 8d9df0a615 | |
David Mak | 07adfb2a18 | |
Sébastien Bourdeauducq | f00e458f60 | |
David Mak | 1bc95a7ba6 | |
lyken | e85f4f9bd2 | |
abdul124 | ce3e9bf4fe | |
David Mak | 82091b1be8 | |
David Mak | 32919949e2 | |
lyken | 2abe75d1f4 | |
lyken | 676412fe6d | |
lyken | 8b9df7252f | |
lyken | 6979843431 | |
lyken | fed1361c6a | |
lyken | aa94e0c8a4 | |
lyken | f523e26227 | |
lyken | f026b48e2a | |
lyken | dc874f2994 | |
lyken | 95de0800b4 | |
lyken | 3d71c6a850 | |
David Mak | be55e2ac80 | |
David Mak | 79c8b759ad | |
David Mak | 4798c53a21 | |
David Mak | 23974feae7 | |
David Mak | 40a3bded36 | |
lyken | c4420e6ab9 | |
lyken | fd36f78005 | |
lyken | 8168692cc3 | |
David Mak | 53d44b9595 | |
David Mak | 6153f94b05 | |
David Mak | 4730b595f3 | |
David Mak | c2fdb12397 | |
David Mak | 82bf14785b | |
David Mak | 2d4329e23c | |
David Mak | 679656f9e1 | |
David Mak | 210d9e2334 | |
David Mak | 181ac3ec1a | |
David Mak | 3acdfb304d | |
David Mak | 6e24da9cc5 | |
David Mak | f0ab1b858a | |
lyken | 08129cc635 | |
David Mak | ad4832dcf4 | |
lyken | 520bbb246b | |
lyken | b857f1e403 | |
Sebastien Bourdeauducq | fa8af37e84 | |
David Mak | 23b2fee4e7 | |
David Mak | ed79d5bb9e | |
David Mak | c35ad06949 | |
David Mak | 135ef557f9 | |
David Mak | a176c3eb70 | |
David Mak | 2cf79510c2 | |
David Mak | b6ff75dcaf | |
David Mak | 588c15f80d | |
David Mak | 82cc693b11 | |
David Mak | 520e1adc56 | |
David Mak | 73e81259f3 | |
David Mak | 7627acea41 | |
David Mak | a777099ea8 | |
David Mak | 876e6ea7b8 | |
David Mak | 30c6cffbad | |
David Mak | 51671800b6 | |
David Mak | 7195476edb | |
David Mak | eecba0b71d | |
David Mak | 7b4253ccd8 | |
David Mak | f58c3a11f8 | |
David Mak | d0766a116f | |
David Mak | 64a3751fc2 | |
David Mak | 9566047241 | |
David Mak | 062e318dd5 | |
David Mak | c4dc36ae99 | |
David Mak | baac348ee6 | |
David Mak | 847615fc2f | |
David Mak | 5dfcc63978 | |
David Mak | 025b3cd02f | |
David Mak | e0f440040c | |
David Mak | f0715e2b6d | |
David Mak | e7fca67786 | |
David Mak | 52c731c312 | |
David Mak | 00d1b9be9b | |
David Mak | 8404d4c4dc | |
David Mak | e614dd4257 | |
David Mak | 937a8b9698 | |
David Mak | 876ad6c59c | |
David Mak | a920fe0501 | |
David Mak | 727a1886b3 | |
David Mak | 6af13a8261 | |
David Mak | 3540d0ab29 | |
David Mak | 3a6c53d760 | |
David Mak | 87bc34f7ec | |
David Mak | f50a5f0345 | |
David Mak | a77fd213e0 | |
David Mak | 8f1497df83 | |
David Mak | 5ca2dbeec8 | |
David Mak | 9a98cde595 | |
David Mak | 5ba8601b39 | |
David Mak | 26a01b14d5 | |
David Mak | d5f4817134 | |
David Mak | 789bfb5a26 | |
David Mak | 4bb0e60981 | |
Sebastien Bourdeauducq | 623fcf85af | |
David Mak | 13f06f3e29 | |
David Mak | f0da9c0283 | |
David Mak | 2c4bf3ce59 | |
David Mak | e980f19c93 | |
David Mak | cfbc37c1ed | |
David Mak | 50264e8750 | |
David Mak | 1b77e62901 | |
David Mak | fd44ee6887 | |
David Mak | c8866b1534 | |
David Mak | 84a888758a | |
David Mak | 9d550725b7 | |
David Mak | 2edc1de0b6 | |
David Mak | c3b122acfc | |
David Mak | a94927a11d | |
David Mak | ebf86cd134 | |
David Mak | cccd8f2d00 | |
David Mak | 3292aed099 | |
David Mak | 96b7f29679 | |
David Mak | 3d2abf73c8 | |
David Mak | f682e9bf7a | |
David Mak | b26cb2b360 | |
David Mak | 2317516cf6 | |
David Mak | 77de24ef74 | |
David Mak | 234a6bde2a | |
David Mak | c3db6297d9 | |
David Mak | 82fdb02d13 | |
David Mak | 4efdd17513 | |
David Mak | 49de81ef1e | |
David Mak | 8492503af2 | |
Sebastien Bourdeauducq | e1dbe2526a | |
Sebastien Bourdeauducq | f37de381ce | |
Sebastien Bourdeauducq | 4452c8986a | |
David Mak | 22e831cb76 | |
David Mak | cc538d221a | |
David Mak | 0d5c53e60c | |
David Mak | 976a9512c1 | |
David Mak | 1eacaf9afa | |
David Mak | 8c7e44098a | |
David Mak | 282a3e1911 | |
David Mak | 5cecb2bb74 | |
David Mak | 1963c30744 | |
David Mak | 27011f385b | |
David Mak | d6302b6ec8 | |
David Mak | fef4b2a5ce | |
David Mak | b3736c3e99 | |
Sebastien Bourdeauducq | e328e44c9a | |
Sebastien Bourdeauducq | 9e4e90f8a0 | |
David Mak | 8470915809 | |
David Mak | 148900302e | |
David Mak | 5ee08b585f | |
David Mak | f1581299fc | |
David Mak | af95ba5012 | |
David Mak | 9c9756be33 | |
David Mak | 2a922c7480 | |
David Mak | e3e2c36ef4 | |
David Mak | 4f9a0110c4 | |
David Mak | 12c0eed0a3 | |
David Mak | c679474f5c | |
Sébastien Bourdeauducq | ab3fa05996 | |
David Mak | 140f8f8a08 | |
David Mak | 27fcf8926e | |
David Mak | afa7d9b100 | |
David Mak | c395472094 | |
David Mak | 03870f222d | |
David Mak | e435b25756 | |
David Mak | bd792904f9 | |
David Mak | 1c3a823670 | |
David Mak | f01d833d48 | |
David Mak | 9d64e606f4 | |
David Mak | 6dccb343bb | |
Sebastien Bourdeauducq | d47534e2ad | |
David Mak | 8886964776 | |
David Mak | f09f3c27a5 | |
David Mak | 0bbc9ce6f5 | |
David Mak | 457d3b6cd7 | |
David Mak | 5f692debd8 | |
David Mak | c7735d935b | |
David Mak | b47ac1b89b | |
David Mak | a19f1065e3 | |
Sebastien Bourdeauducq | 5bf05c6a69 | |
David Mak | 32746c37be | |
David Mak | 1d6291b9ba | |
David Mak | 16655959f2 | |
David Mak | beee3e1f7e | |
David Mak | d4c109b6ef | |
David Mak | 5ffd06dd61 | |
David Mak | 95d0c3c93c | |
David Mak | bd3d67f3d6 | |
David Mak | ddfb532b80 | |
David Mak | 02933753ca | |
David Mak | a1f244834f | |
David Mak | d304afd333 | |
David Mak | ef04696b02 | |
David Mak | 4dc5dbb856 | |
David Mak | fd9f66b8d9 | |
David Mak | 5182453bd9 | |
Sebastien Bourdeauducq | 68556da5fd | |
David Mak | 983f080ea7 | |
David Mak | 031e660f18 | |
David Mak | b6dfcfcc38 | |
David Mak | c93ad152d7 | |
David Mak | 68b97347b1 | |
David Mak | 875d534de4 | |
Sebastien Bourdeauducq | adadf56e2b | |
Sebastien Bourdeauducq | 9f610745b7 | |
Sebastien Bourdeauducq | 98199768e3 | |
Sebastien Bourdeauducq | bfa9ceaae3 | |
Sebastien Bourdeauducq | 120f8da5c7 | |
Sebastien Bourdeauducq | cee62aa6c5 | |
Sebastien Bourdeauducq | fcda360ad6 | |
Sebastien Bourdeauducq | 87c20ada48 | |
Sebastien Bourdeauducq | 38e968cff6 | |
David Mak | 5c5620692f | |
David Mak | 0af1e37e99 | |
David Mak | 854e33ed48 | |
Sebastien Bourdeauducq | f020d61cbb | |
David Mak | 10538b5296 | |
David Mak | d322c91697 | |
David Mak | 3231eb0d78 | |
Sebastien Bourdeauducq | 1ca4de99b9 | |
Sebastien Bourdeauducq | bf4b1aae47 | |
David Mak | 08a5050f9a | |
David Mak | c2ab6b58ff | |
David Mak | 0a84f7ac31 | |
David Mak | fd787ca3f5 | |
David Mak | 4dbe07a0c0 | |
David Mak | 2e055e8ab1 | |
David Mak | 9d737743c1 | |
David Mak | c6b9aefe00 | |
David Mak | 8ad09748d0 | |
David Mak | 7a5a2db842 | |
David Mak | 447eb9c387 | |
David Mak | 92d6f0a5d3 | |
David Mak | 7e4dab15ae | |
David Mak | ff1fed112c | |
David Mak | 36a6a7b8cd | |
David Mak | 2b635a0b97 | |
David Mak | 60ad100fbb | |
David Mak | 316f0824d8 | |
David Mak | 7cf7634985 | |
David Mak | 068f0d9faf | |
David Mak | 95810d4229 | |
David Mak | 630897b779 | |
Sebastien Bourdeauducq | e546535df0 | |
David Mak | 352f70b885 | |
David Mak | e95586f61e | |
David Mak | bb27e3d400 | |
David Mak | bb5147521f | |
David Mak | 9518d3fe14 | |
David Mak | cbd333ab10 | |
David Mak | 65d6104d00 | |
David Mak | 8373a6cb0f | |
David Mak | f75ae78677 | |
Sebastien Bourdeauducq | ea2ab0ef7c | |
David Mak | e49b760e34 | |
David Mak | aa92778363 | |
David Mak | e1487ed335 | |
David Mak | 73500c9081 | |
David Mak | 9ca34c714e | |
David Mak | 7fc2a30c14 | |
David Mak | 950f431483 | |
David Mak | a50c690428 | |
David Mak | 48eb64403f | |
David Mak | 2c44b58bb8 | |
David Mak | 50230e61f3 | |
David Mak | 0205161e35 | |
David Mak | a2fce49b26 | |
David Mak | 60a503a791 | |
David Mak | 0c49b30a90 | |
David Mak | c7de22287e | |
David Mak | 1a54aaa1c0 | |
David Mak | c5629d4eb5 | |
David Mak | a79286113e | |
Sebastien Bourdeauducq | 901e921e00 | |
Sebastien Bourdeauducq | 45a323e969 | |
Sebastien Bourdeauducq | 11759a722f | |
David Mak | 480a4bc0ad | |
Sebastien Bourdeauducq | a1d3093196 | |
Sebastien Bourdeauducq | 85c5f2c044 | |
David Mak | f34c6053d6 | |
David Mak | e8a5f0dfef | |
David Mak | 7140901261 | |
David Mak | 2a775d822e | |
David Mak | 1659c3e724 | |
David Mak | f53cb804ec | |
David Mak | 279376a373 | |
David Mak | b6afd1bfda | |
David Mak | be3e8f50a2 | |
David Mak | 059d3da58b | |
David Mak | 9b28f23d8c | |
Sebastien Bourdeauducq | 119f4d63e9 | |
Sebastien Bourdeauducq | 458fa12788 | |
David Mak | 48c6498d1f | |
David Mak | 2a38d5160e | |
David Mak | b39831b388 | |
David Mak | cb39f61e79 | |
David Mak | 176f250bdb | |
David Mak | acdb1de6fe | |
David Mak | 31dcd2dde9 | |
David Mak | fc93fc2f0e | |
David Mak | dd42022633 | |
David Mak | 6dfc43c8b0 | |
David Mak | ab2360d7a0 | |
David Mak | ee1ee4ab3b | |
David Mak | 3e430b9b40 | |
David Mak | 9e57498958 | |
David Mak | 769fd01df8 | |
David Mak | 411837cacd | |
David Mak | f59d45805f | |
David Mak | 048fcb0a69 | |
David Mak | 676d07657a | |
David Mak | 2482a1ef9b | |
David Mak | eb63f2ad48 | |
Sebastien Bourdeauducq | ff27e22ee6 | |
Sebastien Bourdeauducq | d672ef094b | |
Sebastien Bourdeauducq | d25921230e | |
Sebastien Bourdeauducq | 66f07b5bf4 | |
David Mak | 008d50995c | |
David Mak | 474f9050ce | |
David Mak | 3993a5cf3f | |
David Mak | 39724de598 | |
David Mak | e4940247f3 | |
David Mak | 4481d48709 | |
David Mak | b4983526bd | |
David Mak | b4a9616648 | |
David Mak | e0de82993f | |
David Mak | 6805253515 | |
David Mak | 19915bac79 | |
David Mak | 17b4686260 | |
David Mak | 6de0884dc1 | |
David Mak | f1b0e05b3d | |
David Mak | ff23968544 | |
Sebastien Bourdeauducq | 049908044a | |
David Mak | d37287a33d | |
Sebastien Bourdeauducq | 283bd7c69a | |
Sebastien Bourdeauducq | 3d73f5c129 | |
Sebastien Bourdeauducq | d824c5d8b5 | |
Sebastien Bourdeauducq | b8d637f5c4 | |
Sebastien Bourdeauducq | 3af287d1c4 | |
Sebastien Bourdeauducq | 5b53be0311 | |
Sebastien Bourdeauducq | aead36f0fd | |
Sebastien Bourdeauducq | c269444c0b | |
Sebastien Bourdeauducq | 52cec3c12f | |
Sebastien Bourdeauducq | 2927f2a1d0 | |
Sebastien Bourdeauducq | c1c45373a6 | |
Sebastien Bourdeauducq | 946ea155b8 | |
Sebastien Bourdeauducq | 085c6ee738 | |
Sebastien Bourdeauducq | cfa67c418a | |
Sebastien Bourdeauducq | 813bfa92a7 | |
Sebastien Bourdeauducq | fff4b65169 | |
Sebastien Bourdeauducq | c891fffd75 | |
Sebastien Bourdeauducq | 12acd35e15 | |
Sebastien Bourdeauducq | f66ca02b2d | |
z78078 | b514f91441 | |
z78078 | 8f95b79257 | |
z78078 | ebd25af38b | |
z78078 | 96b3a3bf5c | |
ychenfo | a18d095245 | |
Sebastien Bourdeauducq | b242463548 | |
Sebastien Bourdeauducq | 8e6e4d6715 | |
Sebastien Bourdeauducq | 73c2aefe4b | |
Sebastien Bourdeauducq | 892597cda4 | |
Sebastien Bourdeauducq | 33321c5e9c | |
occheung | 50ed04b787 | |
occheung | 7cb9be0f81 | |
occheung | ac560ba985 | |
occheung | a96371145d | |
ychenfo | 8addf2b55e | |
ychenfo | 5d5e9a5e02 | |
Sebastien Bourdeauducq | 4c39dd240f | |
occheung | 48fc5ceb8e | |
ychenfo | c4ab2855e5 | |
ychenfo | ffac37dc48 | |
ychenfo | 76473152e8 | |
Sebastien Bourdeauducq | b04631e935 | |
ychenfo | 09820e5aed | |
Sebastien Bourdeauducq | 0ec2ed4d91 | |
ychenfo | 2cb725b7ac | |
Sebastien Bourdeauducq | b9259b1907 | |
ychenfo | 096f4b03c0 | |
ychenfo | a022005183 | |
ychenfo | 325ba0a408 | |
ychenfo | ae6434696c | |
Sebastien Bourdeauducq | 3f327113b2 | |
Sebastien Bourdeauducq | 27d509d70e | |
Sebastien Bourdeauducq | a321b13bec | |
ychenfo | 48cb485b89 | |
Sebastien Bourdeauducq | 837aaa95f1 | |
Sebastien Bourdeauducq | a19e9c0bec | |
Sebastien Bourdeauducq | 5dbe1d3d7d | |
Sebastien Bourdeauducq | e9bca3c822 | |
Sebastien Bourdeauducq | 42d1aad507 | |
Sebastien Bourdeauducq | 2777a6e05f | |
Sebastien Bourdeauducq | 05be5e93c4 | |
Sebastien Bourdeauducq | 85f21060e4 | |
Sebastien Bourdeauducq | a308d24caa | |
Sebastien Bourdeauducq | 1eac111d4c | |
ychenfo | 44199781dc | |
ychenfo | 711c3d3303 | |
sb10q | 0975264482 | |
Sebastien Bourdeauducq | 087aded3a3 | |
ychenfo | f14b32be67 | |
David Nadlinger | 879c66cccf | |
wylited | 35b6459c58 | |
wylited | e94b25f544 | |
Sebastien Bourdeauducq | 6972689469 | |
Sebastien Bourdeauducq | 3fb22c9182 | |
Sebastien Bourdeauducq | 1e7abf0268 | |
Sebastien Bourdeauducq | f5a6d29106 | |
Sebastien Bourdeauducq | ca07cb66cd | |
Sebastien Bourdeauducq | 93e9a6a38a | |
ychenfo | 722e3df086 | |
ychenfo | ad9ad22cb8 | |
ychenfo | f66f66b3a4 | |
ychenfo | 6c485bc9dc | |
ychenfo | 089bba96a3 | |
ychenfo | 0e0871bc38 | |
ychenfo | 26187bff0b | |
ychenfo | 86ce513cb5 | |
ychenfo | c29cbf6ddd | |
ychenfo | 7443c5ea0f | |
Sebastien Bourdeauducq | f55b077e60 | |
Sebastien Bourdeauducq | e05b0bf5dc | |
Sebastien Bourdeauducq | 8eda0affc9 | |
Sebastien Bourdeauducq | 75c53b40a3 | |
pca006132 | 0d10044d66 | |
ychenfo | 23b7f4ef18 | |
ychenfo | 710904f975 | |
Sebastien Bourdeauducq | 4bf452ec5a | |
Sebastien Bourdeauducq | 9fdce11efe | |
Sebastien Bourdeauducq | f24ef85aed | |
Sebastien Bourdeauducq | 4a19787f10 | |
Sebastien Bourdeauducq | 8209c0a475 | |
pca006132 | 4f66bdeda9 | |
Sebastien Bourdeauducq | 57369896d7 | |
ychenfo | 2edeb31d21 | |
ychenfo | b8ef44d64e | |
ychenfo | c3156afebd | |
ychenfo | 388c9b7241 | |
ychenfo | e52d7fc97a | |
ychenfo | 6ab73a223c | |
ychenfo | a38cc04444 | |
ychenfo | 1f5826d352 | |
Sebastien Bourdeauducq | 94eebde4ea | |
Sebastien Bourdeauducq | 63ec382673 | |
Sebastien Bourdeauducq | 0ca1a7bedb | |
Sebastien Bourdeauducq | 201ca3f63d | |
Sebastien Bourdeauducq | 19182759cd | |
Sebastien Bourdeauducq | edd039abdc | |
Sebastien Bourdeauducq | 3852cc1058 | |
Sebastien Bourdeauducq | 0600ee8efa | |
ychenfo | bed33a7421 | |
ychenfo | 0d2b844a2e | |
ychenfo | 8d7e300a4a | |
ychenfo | 10d623e36f | |
ychenfo | 000b128551 | |
Sebastien Bourdeauducq | e4581a6d9b | |
pca006132 | 1a82d296e7 | |
pca006132 | bf067e2481 | |
ychenfo | ba8ed6c663 | |
ychenfo | 26a4834254 | |
Sebastien Bourdeauducq | 1ad4b0227c | |
Sebastien Bourdeauducq | 6288a66dc5 | |
Sebastien Bourdeauducq | de4320eefb | |
Sebastien Bourdeauducq | a380cd5010 | |
ychenfo | 80631fc92b | |
Sebastien Bourdeauducq | 55db05fdbb | |
pca006132 | 24a26b53ae | |
pca006132 | 1084ba2158 | |
ychenfo | be75fa7368 | |
Sebastien Bourdeauducq | ec52128a4a | |
Sebastien Bourdeauducq | b10b49e39a | |
Sebastien Bourdeauducq | d92ce201d3 | |
Sebastien Bourdeauducq | 8b485f552b | |
pca006132 | d9be8d3978 | |
pca006132 | 41d62f7325 | |
Sebastien Bourdeauducq | 4400d9b57d | |
Sebastien Bourdeauducq | 8ee5db7462 | |
Sebastien Bourdeauducq | 6d9b3abcd7 | |
Sebastien Bourdeauducq | f11a0776e7 | |
Sebastien Bourdeauducq | f2dc03dfa1 | |
Sebastien Bourdeauducq | 1c807ebe08 | |
Sebastien Bourdeauducq | 9e0b5187dd | |
Sebastien Bourdeauducq | 1887a337ff | |
Sebastien Bourdeauducq | 03f5b80153 | |
Sebastien Bourdeauducq | 1114d11b34 | |
Sebastien Bourdeauducq | a7a188da76 | |
Sebastien Bourdeauducq | eb6ceefdcd | |
Sebastien Bourdeauducq | 9332d1643c | |
Sebastien Bourdeauducq | 718b076e50 | |
Sebastien Bourdeauducq | 9d86b46e86 | |
ychenfo | 263bc82434 | |
Sebastien Bourdeauducq | 3f890f183c | |
pca006132 | 234823c51a | |
pca006132 | b97c016629 | |
Sebastien Bourdeauducq | 14a5c7981e | |
pca006132 | 35ac5cb6f6 | |
pca006132 | 93af337ed3 | |
Sebastien Bourdeauducq | 0ca2797428 | |
Sebastien Bourdeauducq | 9ccdc0180d | |
Sebastien Bourdeauducq | c5993c2a58 | |
pca006132 | fb8553311c | |
pca006132 | 04e7a7eb4b | |
pca006132 | 642e3b2bad | |
pca006132 | e126fef012 | |
Sebastien Bourdeauducq | 8fd868a673 | |
pca006132 | 94aac16cc5 | |
pca006132 | 2f85bb3837 | |
ychenfo | e266d3c2b0 | |
ychenfo | 60b3807ab3 | |
ychenfo | 5006028e2d | |
ychenfo | 1cc276cb43 | |
ychenfo | 8241a29908 | |
ychenfo | e9a17cf8f8 | |
ychenfo | adb5c69e67 | |
ychenfo | d848c2284e | |
ychenfo | f7e62ab5b7 | |
ychenfo | 9f6c7b3359 | |
ychenfo | 142e99a0f1 | |
ychenfo | 79c469301a | |
ychenfo | 8602852241 | |
ychenfo | 42fbe8e383 | |
pca006132 | 63b0f29728 | |
pca006132 | a5e1da0b92 | |
pca006132 | 294943e303 | |
ychenfo | 84b4bd920b | |
Sebastien Bourdeauducq | 317eb80005 | |
Sebastien Bourdeauducq | 59ac5aae8a | |
Sebastien Bourdeauducq | da039e3acf | |
pca006132 | d1e172501d | |
pca006132 | 323d77a455 | |
pca006132 | d41c923cfd | |
Sebastien Bourdeauducq | 5d8e87d923 | |
Sebastien Bourdeauducq | a9c73a4915 | |
Sebastien Bourdeauducq | 804d5db27e | |
Sebastien Bourdeauducq | cbc77dddb0 | |
pca006132 | 846d1726ef | |
pca006132 | 0686e83f4c | |
pca006132 | e710b6c320 | |
pca006132 | cc769a7006 | |
Sebastien Bourdeauducq | 5cd4fe6507 | |
Sebastien Bourdeauducq | aa79c8d8b7 | |
Sebastien Bourdeauducq | 75fde1bbf7 | |
Sebastien Bourdeauducq | 17792b76b7 | |
Sebastien Bourdeauducq | 6ae770d5eb | |
pca006132 | d3cb5d6e52 | |
Sebastien Bourdeauducq | bb7c0a2d79 | |
pca006132 | 3ad25c8f07 | |
pca006132 | ede3706ca8 | |
pca006132 | f97f93d92c | |
pca006132 | d9cb506f6a | |
pca006132 | 352831b2ca | |
pca006132 | 21d9182ba2 | |
Sebastien Bourdeauducq | 91f41052fe | |
pca006132 | 14d25b3b9d | |
Sebastien Bourdeauducq | 265d234266 | |
Sebastien Bourdeauducq | 2e44745933 | |
Sebastien Bourdeauducq | 4b8e70f746 | |
Sebastien Bourdeauducq | 31e76ca3b6 | |
Sebastien Bourdeauducq | 343f6fd067 | |
Sebastien Bourdeauducq | f1ebf8f96e | |
pca006132 | b18626b149 | |
pca006132 | 750d912eb4 | |
pca006132 | bf52e294ee | |
pca006132 | e303248261 | |
pca006132 | 7ea5a5f84d | |
pca006132 | b267a656a8 | |
pca006132 | 050c862c1a | |
Sebastien Bourdeauducq | ffe89eec86 | |
ychenfo | d6ab73afb0 | |
ychenfo | 6f9f455152 | |
ychenfo | e50f1017fa | |
ychenfo | 77608346b1 | |
Sebastien Bourdeauducq | f5ce7376e3 | |
Sebastien Bourdeauducq | 1288624218 | |
Sebastien Bourdeauducq | 0124bcd26c | |
Sebastien Bourdeauducq | de065cfa14 | |
pca006132 | 304181fd8c | |
ychenfo | 43048eb8d8 | |
ychenfo | ace0e2a2c6 | |
Sebastien Bourdeauducq | e891683f2e | |
Sebastien Bourdeauducq | 8e01a20ac3 | |
Sebastien Bourdeauducq | 465514ca7a | |
Sebastien Bourdeauducq | 9c34dd9c80 | |
Sebastien Bourdeauducq | ced7acd871 | |
Sebastien Bourdeauducq | 6ea40809b3 | |
Sebastien Bourdeauducq | f8e3f7a4ca | |
Sebastien Bourdeauducq | ba997ae094 | |
Sebastien Bourdeauducq | 2a0caf931f | |
Sebastien Bourdeauducq | 64b94955fe | |
Sebastien Bourdeauducq | f478c6afcc | |
ychenfo | 0439bf6aef | |
Sebastien Bourdeauducq | fd4bf12808 | |
Sebastien Bourdeauducq | d7b14dd705 | |
ychenfo | 9d342d9f0f | |
ychenfo | ae8f82ccb0 | |
ychenfo | 4a1a4dc076 | |
ychenfo | eba9fc8a69 | |
ychenfo | 4976e89ae2 | |
Sebastien Bourdeauducq | 82509d60ec | |
ychenfo | 2579ecbd19 | |
ychenfo | 44f4c4f028 | |
Sebastien Bourdeauducq | 8ef9e74aaf | |
Sebastien Bourdeauducq | 9c20e84c84 | |
Sebastien Bourdeauducq | b88f17ed42 | |
Sebastien Bourdeauducq | 096193f7ab | |
ychenfo | 4760851638 | |
ychenfo | 1ee857de6a | |
Sebastien Bourdeauducq | 4a65d82db5 | |
Sebastien Bourdeauducq | b638d1b4b0 | |
Sebastien Bourdeauducq | 52ccf31bb1 | |
Sebastien Bourdeauducq | 4904610dc6 | |
ychenfo | 7193e3f328 | |
Sebastien Bourdeauducq | 2822c613ef | |
Sebastien Bourdeauducq | a0bf6da6c2 | |
Sebastien Bourdeauducq | 9cc9a0284a | |
ychenfo | 85e06d431a | |
ychenfo | 9b3b47ce50 | |
ychenfo | 88f0da7bdd | |
pca006132 | 1bd966965e | |
pca006132 | 521f136f2e | |
pca006132 | fa04768a77 | |
Sebastien Bourdeauducq | 6162d21a5b | |
Sebastien Bourdeauducq | 8101483ebd | |
Sebastien Bourdeauducq | dc5e42c5eb | |
Sebastien Bourdeauducq | 86005da8e1 | |
Sebastien Bourdeauducq | 3b5328d3cd | |
Sebastien Bourdeauducq | 5aa6749241 | |
Sebastien Bourdeauducq | 80d3ab1b0f | |
Sebastien Bourdeauducq | ec986dfdf3 | |
Sebastien Bourdeauducq | d2a5cd6d57 | |
Sebastien Bourdeauducq | 9e3f75255e | |
Sebastien Bourdeauducq | 53f13b44cf | |
pca006132 | 34cabe0e55 | |
pca006132 | 6e85f549f6 | |
pca006132 | 0902d8adf4 | |
ychenfo | 66320679be | |
Sebastien Bourdeauducq | 0ff995722c | |
Sebastien Bourdeauducq | e2b44a066b | |
Sebastien Bourdeauducq | 2008db8097 | |
ychenfo | cb450372d6 | |
ychenfo | ff27a1697e | |
ychenfo | 91625dd327 | |
Sebastien Bourdeauducq | 7420ce185b | |
Sebastien Bourdeauducq | 69b9ac5152 | |
ychenfo | ccfcba4066 | |
ychenfo | b5637a04e9 | |
ychenfo | 2c6601d97c | |
ychenfo | 82359b81a2 | |
ychenfo | 4d2fd9582a | |
ychenfo | b7892ce952 | |
ychenfo | 01d3249646 | |
Sebastien Bourdeauducq | d2ffdeeb47 | |
Sebastien Bourdeauducq | ae902aac2f | |
Sebastien Bourdeauducq | 3f73896477 | |
Sebastien Bourdeauducq | ddb4c548ae | |
pca006132 | 6d00d4dabb | |
Sebastien Bourdeauducq | baa713a3ca | |
Sebastien Bourdeauducq | d2919b9620 | |
Sebastien Bourdeauducq | 9ee2168932 | |
pca006132 | 65bc1e5fa4 | |
pca006132 | 2938eacd16 | |
Sebastien Bourdeauducq | e8e1499478 | |
Sebastien Bourdeauducq | e4f35372d3 | |
Sebastien Bourdeauducq | 41f88095a5 | |
pca006132 | c98f367f90 | |
ychenfo | 1f3aa48361 | |
Sebastien Bourdeauducq | 8c05d8431d | |
Sebastien Bourdeauducq | 0ae2aae645 | |
Sebastien Bourdeauducq | b0eb7815da | |
Sebastien Bourdeauducq | 26e60fca6e | |
Sebastien Bourdeauducq | 22a509e7ce | |
Sebastien Bourdeauducq | 4526c28edb | |
Sebastien Bourdeauducq | 25fc9db66d | |
Sebastien Bourdeauducq | 6315027a8b | |
Sebastien Bourdeauducq | c0f8d5c602 | |
Sebastien Bourdeauducq | 998f49261d | |
Sebastien Bourdeauducq | aab43b1c07 | |
Sebastien Bourdeauducq | a6275fbb57 | |
Sebastien Bourdeauducq | 8a46032f4c | |
Sebastien Bourdeauducq | 1c31aa6e8e | |
sb10q | b030aec191 | |
ychenfo | aa2d79fea6 | |
ychenfo | 1e6848ab92 | |
Sebastien Bourdeauducq | a91b2d602c | |
Sebastien Bourdeauducq | c683958e4a | |
Sebastien Bourdeauducq | 142f82f987 | |
ychenfo | dfd3548ed2 | |
Sebastien Bourdeauducq | 31fba04cee | |
ychenfo | fa2fe8ed5d | |
ychenfo | 7ede4f15b6 | |
ychenfo | 0fe346106d | |
Sebastien Bourdeauducq | 681d85d3be | |
sb10q | 14119a2c80 | |
pca006132 | b35075245b | |
pca006132 | 4b17511b4a | |
pca006132 | 7ee82de312 | |
Sebastien Bourdeauducq | 701ca36e99 | |
Sebastien Bourdeauducq | 5e1b0a10a0 | |
Sebastien Bourdeauducq | 9f316a3294 | |
Sebastien Bourdeauducq | 0ae1fe82b4 | |
Sebastien Bourdeauducq | de8fc264d7 | |
Sebastien Bourdeauducq | 970f075490 | |
ychenfo | 4587088835 | |
ychenfo | 49476d06e1 | |
ychenfo | 664e02cec4 | |
ychenfo | c6f75c8bde | |
ychenfo | 01b51b62ee | |
ychenfo | aae9925014 | |
ychenfo | d336200bf4 | |
ychenfo | a50df6560e | |
ychenfo | a9635f0979 | |
ychenfo | c2706fa720 | |
pca006132 | f5ec103c82 | |
pca006132 | ba08deada6 | |
Sebastien Bourdeauducq | 439cef636f | |
ychenfo | 1e47b364c5 | |
ychenfo | 8ab3ee9cce | |
Sebastien Bourdeauducq | 9ae08d6e3d | |
Sebastien Bourdeauducq | d6b92adf70 | |
Sebastien Bourdeauducq | aa84fefa56 | |
Sebastien Bourdeauducq | 5ad7aa5a93 | |
Sebastien Bourdeauducq | b64d2399f2 | |
Sebastien Bourdeauducq | ebca596be6 | |
Sebastien Bourdeauducq | 4aeea87702 | |
Sebastien Bourdeauducq | e25a9bbcda | |
Sebastien Bourdeauducq | 978eaf16a4 | |
Sebastien Bourdeauducq | 4547eee82a | |
Sebastien Bourdeauducq | 96607432c1 | |
Sebastien Bourdeauducq | dba1a8b3d4 | |
Sebastien Bourdeauducq | 612b6768c0 | |
Sebastien Bourdeauducq | c004da85f7 | |
Sebastien Bourdeauducq | 7fc04936cb | |
Sebastien Bourdeauducq | b57b869c49 | |
Sebastien Bourdeauducq | 50f1aca1aa | |
pca006132 | ffa89e9308 | |
pca006132 | 34cf303e6c | |
pca006132 | b1e83a1fd4 | |
Sebastien Bourdeauducq | 7385b91113 | |
Sebastien Bourdeauducq | 016cbf2b90 | |
Sebastien Bourdeauducq | 37eae090e5 | |
Sebastien Bourdeauducq | 204baabfd2 | |
Sebastien Bourdeauducq | 597857ccd0 | |
ychenfo | efc9edbc14 | |
Sebastien Bourdeauducq | 7d66195eae | |
pca006132 | 1fea51d9b3 | |
pca006132 | 99b29d8ded | |
pca006132 | 3db95b120b | |
pca006132 | 8dbb4ad58a | |
ychenfo | ee67b22ebc | |
Sebastien Bourdeauducq | afb94dd299 | |
Sebastien Bourdeauducq | d6f0607ff0 | |
Sebastien Bourdeauducq | 610448fa73 | |
Sebastien Bourdeauducq | e8228710e7 | |
ychenfo | 032e1d84cf | |
ychenfo | b239806558 | |
ychenfo | 694c7e945c | |
ychenfo | 3b1cc02d06 | |
Sebastien Bourdeauducq | 32d1fe811b | |
Sebastien Bourdeauducq | 36e4028f5b | |
Sebastien Bourdeauducq | b6ff46c39e | |
Sebastien Bourdeauducq | bf7e2c295a | |
pca006132 | 48ce6bb6c5 | |
Sebastien Bourdeauducq | 80c7bc1cbd | |
Sebastien Bourdeauducq | e89bc93b5f | |
Sebastien Bourdeauducq | 47f563908a | |
Sebastien Bourdeauducq | 0e914ab7e9 | |
Sebastien Bourdeauducq | 613020a717 | |
Sebastien Bourdeauducq | ee2c0d8bab | |
Sebastien Bourdeauducq | 0d1e9262af | |
Sebastien Bourdeauducq | bc0f82cad8 | |
Sebastien Bourdeauducq | 624dfe8cd1 | |
pca006132 | e47597bb8a | |
pca006132 | 98d9f73afb | |
Sebastien Bourdeauducq | da2886565b | |
Sebastien Bourdeauducq | b37cf6de08 | |
pca006132 | 083eacc268 | |
Sebastien Bourdeauducq | 137efebb33 | |
Sebastien Bourdeauducq | 443b95d909 | |
Sebastien Bourdeauducq | 8b73a123cc | |
pca006132 | 84c5201243 | |
pca006132 | 558c3f03ef | |
pca006132 | 45673b0ecc | |
pca006132 | 181607008d | |
pca006132 | fb92b6d364 | |
pca006132 | 2f6ba69770 | |
pca006132 | cc83bbc63a | |
pca006132 | 279f47f633 | |
pca006132 | 9850cbe313 | |
pca006132 | 1f5bea2448 | |
pca006132 | c4259d14d1 | |
pca006132 | 26076c37ba | |
Sebastien Bourdeauducq | fd0b11087e | |
Sebastien Bourdeauducq | 3a1dd893a1 | |
pca006132 | a4ccac2329 | |
pca006132 | 77542170fd | |
pca006132 | a3ce5be10b | |
Sebastien Bourdeauducq | a22552a012 | |
Sebastien Bourdeauducq | 6ba74ed9f6 | |
Sebastien Bourdeauducq | 8b32c8270d | |
Sebastien Bourdeauducq | 5749141efb | |
Sebastien Bourdeauducq | 3b10172810 | |
Sebastien Bourdeauducq | 82efb0e720 | |
Sebastien Bourdeauducq | d3a21d75fa | |
pca006132 | a07674a042 | |
Sebastien Bourdeauducq | c5bcd352a5 | |
Sebastien Bourdeauducq | 79d3c5caae | |
pca006132 | c697e522d3 | |
pca006132 | 08947d20c2 | |
pca006132 | 62673cf608 | |
pca006132 | 11144301ca | |
ychenfo | 4fcb54e463 | |
ychenfo | 24b2111c64 | |
ychenfo | f5ce1afe0b | |
Sebastien Bourdeauducq | 915460ecb7 | |
Sebastien Bourdeauducq | b2c7f51d57 | |
Sebastien Bourdeauducq | 248d8cbece | |
Sebastien Bourdeauducq | c429a86586 | |
Sebastien Bourdeauducq | c5e731f16d | |
Sebastien Bourdeauducq | 0cbe4778d2 | |
Sebastien Bourdeauducq | c93305739d | |
Sebastien Bourdeauducq | ba93931758 | |
Sebastien Bourdeauducq | 3dd916b6ac | |
pca006132 | 8447aa3000 | |
pca006132 | 1d2a32b140 | |
pca006132 | 07a9229d52 | |
pca006132 | f0fdfe42cb | |
Sebastien Bourdeauducq | 928b5bafb5 | |
Sebastien Bourdeauducq | dceaf42500 | |
Sebastien Bourdeauducq | bfd041d361 | |
Sebastien Bourdeauducq | 6141f01180 | |
Sebastien Bourdeauducq | 8d839db553 | |
Sebastien Bourdeauducq | 316db42940 | |
Sebastien Bourdeauducq | 64404bba20 | |
pca006132 | d4ed76d76e | |
pca006132 | 3c121dfcda | |
pca006132 | 693ac7d336 | |
Sebastien Bourdeauducq | dd998c8afc | |
Sebastien Bourdeauducq | 7ab762a174 | |
Sebastien Bourdeauducq | 7ab2114882 | |
Sebastien Bourdeauducq | 4535b60fc0 | |
Sebastien Bourdeauducq | bf48151748 | |
Sebastien Bourdeauducq | bed8ce1f26 | |
Sebastien Bourdeauducq | c26689c7a7 | |
Sebastien Bourdeauducq | ac17bf50f8 | |
Sebastien Bourdeauducq | 13bd18bfcb | |
Sebastien Bourdeauducq | 5c236271c3 | |
Sebastien Bourdeauducq | 14662a66dc | |
pca006132 | c4fbfeaca9 | |
pca006132 | 20a752fd3a | |
pca006132 | 6a69211c55 | |
Sebastien Bourdeauducq | 59dac8bdf5 | |
Sebastien Bourdeauducq | edd60e3f9a | |
pca006132 | 799ed58d21 | |
pca006132 | 105d605e6d | |
pca006132 | 97f5b7c324 | |
pca006132 | 7d48883583 | |
pca006132 | 084efe92af | |
pca006132 | 891056631f | |
pca006132 | 1b5ac3cd25 | |
pca006132 | 5ed2b450d3 | |
pca006132 | a508baae20 | |
Sebastien Bourdeauducq | 013e7cfc2a | |
Sebastien Bourdeauducq | db14b4b635 | |
Sebastien Bourdeauducq | 8acb39f31f | |
Sebastien Bourdeauducq | d561450bf5 | |
Sebastien Bourdeauducq | 956cca6ac8 | |
pca006132 | 4a5f2d495e | |
pca006132 | 4fe643f45b | |
ychenfo | 1c170f5c41 | |
pca006132 | df6c9c8a35 | |
ychenfo | 20905a9b67 | |
ychenfo | e66693282c | |
ychenfo | dd1be541b8 | |
pca006132 | 3c930ae9ab | |
ychenfo | 35a94a8fc0 | |
pca006132 | 4939ff4dbd | |
ychenfo | bf1769cef6 | |
ychenfo | 2b74895b71 | |
ychenfo | 1b0f3d07cc | |
ychenfo | ed5dfd4100 | |
ychenfo | 41e63f24d0 | |
ychenfo | d0df705c5a | |
ychenfo | a0662c58e6 | |
ychenfo | 526c18bda0 | |
ychenfo | a10ab81ee7 | |
pca006132 | f5353419ac | |
pca006132 | 180392e2ab | |
ychenfo | 471547855e | |
ychenfo | 2ac3f9a176 | |
ychenfo | cb310965b8 | |
ychenfo | 118f19762a | |
ychenfo | b419634f8a | |
ychenfo | 147298ff40 | |
ychenfo | c7cb02b0f3 | |
ychenfo | 03b5e51822 | |
ychenfo | 4eacd1aa9e | |
ychenfo | 9eef51f29f | |
ychenfo | 917d447605 | |
ychenfo | f1013d9a17 | |
ychenfo | 2ce507964c | |
ychenfo | 5a1a8ecee3 | |
ychenfo | 1300b5ebdd | |
ychenfo | 87f25e1c5d | |
ychenfo | 55335fc05d | |
ychenfo | 247b364191 | |
ychenfo | bbcec6ae6f | |
ychenfo | 235b6e34d1 | |
ychenfo | 54b4572c5f | |
ychenfo | dc7c014b10 | |
ychenfo | 1ae6acc061 | |
ychenfo | 98d032b72a | |
ychenfo | 7bbd608492 | |
ychenfo | 4a9593efa3 | |
ychenfo | 098bd1e6e6 | |
ychenfo | 82c2edcf8d | |
ychenfo | 40e58d02ed | |
ychenfo | e2a9bdd8bc | |
ychenfo | 236989defc | |
pca006132 | 22a728833d | |
pca006132 | 2223c86d9b | |
pca006132 | 72aebed559 | |
pca006132 | 8c1c7fcfc3 | |
pca006132 | 6633eabb89 | |
pca006132 | d81249cabe | |
pca006132 | bf4e0009c0 | |
pca006132 | 52dd792b3e | |
pca006132 | a24e204824 | |
ychenfo | 35ef0386db | |
ychenfo | b9a580d271 | |
ychenfo | 018d6643e1 | |
ychenfo | 935e7410fd | |
ychenfo | 1a21fb1072 | |
ychenfo | 35a331552b | |
ychenfo | 0bab477ab0 | |
ychenfo | 862d205f67 | |
pca006132 | e2b11c3fee | |
pca006132 | 0608fd9659 | |
pca006132 | 173102fc56 | |
pca006132 | 93270d7227 | |
pca006132 | 1ffa1a8bb0 | |
ychenfo | 01f7a31aae | |
ychenfo | 32773c14e0 | |
pca006132 | c356062239 | |
ychenfo | 56f082ca7c | |
ychenfo | 39f300b62a | |
ychenfo | 7b1fe36e90 | |
ychenfo | fb5b4697a9 | |
ychenfo | 364054331c | |
ychenfo | 40b062ce0f | |
pca006132 | f5b8b58826 | |
pca006132 | c4d6b3691a | |
pca006132 | 957ceb74e4 | |
pca006132 | e47d063efc | |
pca006132 | 0e2da0d180 | |
pca006132 | 39545c0005 | |
pca006132 | 3279f7a776 | |
sb10q | f205a8282a | |
pca006132 | d1215bf5ac | |
pca006132 | 6e424a6a3e | |
pca006132 | 9a07ef3301 | |
ychenfo | c238c264e7 | |
Sebastien Bourdeauducq | f8a697e3d4 | |
ychenfo | 4b38fe66a2 | |
ychenfo | 9cb07e6f04 | |
ychenfo | 6279dbb589 | |
ychenfo | 529442590f | |
ychenfo | 4fcd48e4c8 | |
ychenfo | 619963dc8c | |
ychenfo | 276daa03f7 | |
ychenfo | a94145348a | |
ychenfo | fa40fd73c6 | |
ychenfo | 79ce13722a | |
ychenfo | eb814dd8c3 | |
ychenfo | 3734663188 | |
ychenfo | d8c3c063ec | |
pca006132 | d3ad894521 | |
pca006132 | 784111fdbe | |
pca006132 | d30918bea0 | |
pca006132 | e2adf82229 | |
ychenfo | 33391c55c2 | |
ychenfo | 3f65e1b133 | |
ychenfo | ba5bb78f11 | |
ychenfo | e176aa660d | |
pca006132 | cb01c79603 | |
pca006132 | 1db8378f60 | |
pca006132 | 8c7ccb626b | |
pca006132 | 1f6c16e08b | |
pca006132 | 77943a8117 | |
ychenfo | 3a93e2b048 | |
ychenfo | 824a5cb01a | |
ychenfo | 17ee8fe6d0 | |
pca006132 | d46a4b2d38 | |
pca006132 | de8b67b605 | |
pca006132 | 0af4e95914 | |
ychenfo | 99276c8f31 | |
ychenfo | 42a636b4ce | |
pca006132 | e112354d25 | |
ychenfo | 43236db9bd | |
ychenfo | 1bec6cf2db | |
pca006132 | a73ab922e2 | |
ychenfo | 82ce816177 | |
ychenfo | 6ad953f877 | |
pca006132 | 4db871c244 | |
pca006132 | cc0692a34c | |
pca006132 | 7a90ff5791 | |
pca006132 | d8c713ce3d | |
pca006132 | 1ffb792000 | |
pca006132 | 057fcfe3df | |
pca006132 | 86ca02796b | |
pca006132 | 711482d09c | |
pca006132 | 7a38ab3119 | |
pca006132 | 34d3317ea0 | |
pca006132 | c405e46b00 | |
ychenfo | 18db2ddd53 | |
ychenfo | fe26070364 | |
pca006132 | 095f28468b | |
pca006132 | 29286210b5 | |
pca006132 | b01d0f6fbb | |
pca006132 | 3dcd846302 | |
ychenfo | c0227210df | |
CrescentonC | 99c71687a6 | |
CrescentonC | d052f007fb | |
pca006132 | 8452579c67 | |
pca006132 | f00c1813e3 | |
pca006132 | d4d12a9d1d | |
pca006132 | a3acf09bda | |
pca006132 | 52dc112410 | |
CrescentonC | d4807293b0 | |
CrescentonC | d4721db4a3 | |
CrescentonC | a7e3eeea0d | |
CrescentonC | f7bbc3e10d | |
CrescentonC | 7e0d55443a | |
pca006132 | 197a72c658 | |
pca006132 | eba92ed8bd | |
CrescentonC | b87c627c41 | |
CrescentonC | ae79533cfd | |
CrescentonC | 9983aa62e6 | |
pca006132 | 7ad8e2d81d | |
pca006132 | 743a9384a3 | |
pca006132 | f2c5a9b352 | |
ychenfo | 09e76efcf7 | |
pca006132 | 832513e210 | |
pca006132 | f665ea358b | |
pca006132 | e15473d2c9 | |
pca006132 | 5f0490cd84 | |
pca006132 | 1d13b16f94 | |
pca006132 | 8d0856a58d | |
pca006132 | bf31c48bba | |
Sebastien Bourdeauducq | 0941de9ee1 | |
pca006132 | 8618837816 | |
Sebastien Bourdeauducq | 53ebe8d8b2 | |
pca006132 | d7df93bef1 | |
pca006132 | d140164a38 | |
pca006132 | ddcf4b7e39 | |
pca006132 | 88c45172b2 | |
pca006132 | c315227a28 | |
pca006132 | d484fa1e5c | |
pca006132 | 09c9218852 | |
pca006132 | 4f81690128 | |
pca006132 | b3d849ea7a | |
pca006132 | 3e03398d9b | |
pca006132 | 2f5c3b3cb7 | |
pca006132 | 25ff24a320 | |
pca006132 | 0296844d5f | |
pca006132 | e95bfe1d31 | |
pca006132 | bc9b453b3e | |
pca006132 | fa31e8f336 | |
pca006132 | 22455e43ac | |
pca006132 | 016166de46 | |
pca006132 | eb4b2bb7f6 | |
pca006132 | e732f7e089 | |
pca006132 | d4b85d0bac | |
pca006132 | c913fb28bd | |
pca006132 | f51603f6da | |
pca006132 | d67407716c | |
pca006132 | f4121b570d | |
pca006132 | 8b078dfa1b | |
pca006132 | 62736bd4bf | |
pca006132 | c2d00aa762 | |
pca006132 | d94f25583b | |
pca006132 | 1df3f4e757 | |
pca006132 | 97fe450a0b | |
pca006132 | e8c5189fce | |
pca006132 | 291e642699 | |
pca006132 | e554737b68 | |
pca006132 | 84c980fed3 | |
pca006132 | 2985b88351 |
@ -0,0 +1 @@
doc-valid-idents = ["CPython", "NumPy", ".."]

@ -1,2 +1,3 @@
 __pycache__
 /target
+nix/windows/msys2

@ -0,0 +1,24 @@
# See https://pre-commit.com for more information
# See https://pre-commit.com/hooks.html for more hooks

default_stages: [commit]

repos:
  - repo: local
    hooks:
      - id: nac3-cargo-fmt
        name: nac3 cargo format
        entry: cargo
        language: system
        types: [file, rust]
        pass_filenames: false
        description: Runs cargo fmt on the codebase.
        args: [fmt]
      - id: nac3-cargo-clippy
        name: nac3 cargo clippy
        entry: cargo
        language: system
        types: [file, rust]
        pass_filenames: false
        description: Runs cargo clippy on the codebase.
        args: [clippy, --tests]
File diff suppressed because it is too large

Cargo.toml (10 changes)
@ -1,6 +1,14 @@
 [workspace]
 members = [
+    "nac3ld",
+    "nac3ast",
+    "nac3parser",
     "nac3core",
     "nac3standalone",
-    "nac3embedded",
+    "nac3artiq",
+    "runkernel",
 ]
+resolver = "2"
+
+[profile.release]
+debug = true

README.md (80 changes)
@ -1,34 +1,62 @@
-# nac3 compiler
+<div align="center">
+
+![icon](https://git.m-labs.hk/M-Labs/nac3/raw/branch/master/nac3.svg)
+
+</div>
+
+# NAC3
+
+NAC3 is a major, backward-incompatible rewrite of the compiler for the [ARTIQ](https://m-labs.hk/artiq) physics experiment control and data acquisition system. It features greatly improved compilation speeds, a much better type system, and more predictable and transparent operation.
+
+NAC3 has a modular design and its applicability reaches beyond ARTIQ. The ``nac3core`` module does not contain anything specific to ARTIQ, and can be used in any project that requires compiling Python to machine code.
+
+**WARNING: NAC3 is currently experimental software and several important features are not implemented yet.**
+
+## Packaging
+
+NAC3 is packaged using the [Nix](https://nixos.org) Flakes system. Install Nix 2.8+ and enable flakes by adding ``experimental-features = nix-command flakes`` to ``nix.conf`` (e.g. ``~/.config/nix/nix.conf``).
+
+## Try NAC3
+
+### Linux
+
+After setting up Nix as above, use ``nix shell git+https://github.com/m-labs/artiq.git?ref=nac3`` to get a shell with the NAC3 version of ARTIQ. See the ``examples`` directory in ARTIQ (``nac3`` Git branch) for some samples of NAC3 kernel code.
+
+### Windows
+
+Install [MSYS2](https://www.msys2.org/), and open "MSYS2 CLANG64". Edit ``/etc/pacman.conf`` to add:
+```
+[artiq]
+SigLevel = Optional TrustAll
+Server = https://msys2.m-labs.hk/artiq-nac3
+```
+
+Then run the following commands:
+```
+pacman -Syu
+pacman -S mingw-w64-clang-x86_64-artiq
+```
+
+## For developers
+
 This repository contains:
-- nac3core: Core compiler library, containing type-checking, static analysis (in
-  the future) and code generation.
-- nac3embedded: Integration with CPython runtime.
-- nac3standalone: Standalone compiler tool.
+- ``nac3ast``: Python abstract syntax tree definition (based on RustPython).
+- ``nac3parser``: Python parser (based on RustPython).
+- ``nac3core``: Core compiler library, containing type-checking and code generation.
+- ``nac3standalone``: Standalone compiler tool (core language only).
+- ``nac3ld``: Minimalist RISC-V and ARM linker.
+- ``nac3artiq``: Integration with ARTIQ and implementation of ARTIQ-specific extensions to the core language.
+- ``runkernel``: Simple program that runs compiled ARTIQ kernels on the host and displays RTIO operations. Useful for testing without hardware.

-The core compiler would know nothing about symbol resolution, host variables
-etc. The nac3embedded/nac3standalone library would provide (implement) the
-symbol resolver to the core compiler for resolving the type and value for
-unknown symbols. The core compiler would only type check classes and functions
-requested by the nac3embedded/nac3standalone lib (the API should allow the
-caller to specify which methods should be compiled). After type checking, the
-compiler would analyse the set of functions/classes that are used and perform
-code generation.
+Use ``nix develop`` in this repository to enter a development shell.
+If you are using a different shell than bash you can use e.g. ``nix develop --command fish``.

-value could be integer values, boolean values, bytes (for memcpy), function ID
-(full name + concrete type)
+Build NAC3 with ``cargo build --release``. See the demonstrations in ``nac3artiq`` and ``nac3standalone``.

-## Current Plan
+### Pre-Commit Hooks

-Type checking:
-
-- [x] Basic interface for symbol resolver.
-- [x] Track location information in context object (for diagnostics).
-- [ ] Refactor old expression and statement type inference code. (anto)
-- [ ] Error diagnostics utilities. (pca)
-- [ ] Move tests to external files, write scripts for testing. (pca)
-- [ ] Implement function type checking (instantiate bounded type parameters),
-  loop unrolling, type inference for lists with virtual objects. (pca)
+You are strongly recommended to use the provided pre-commit hooks to automatically reformat files and check for non-optimal Rust practices using Clippy. Run `pre-commit install` to install the hook and `pre-commit` will automatically run `cargo fmt` and `cargo clippy` for you.
+
+Several things to note:
+
+- If `cargo fmt` or `cargo clippy` returns an error, the pre-commit hook will fail. You should fix all errors before trying to commit again.
+- If `cargo fmt` reformats some files, the pre-commit hook will also fail. You should review the changes and, if satisfied, try to commit again.

@ -0,0 +1,27 @@
{
  "nodes": {
    "nixpkgs": {
      "locked": {
        "lastModified": 1720418205,
        "narHash": "sha256-cPJoFPXU44GlhWg4pUk9oUPqurPlCFZ11ZQPk21GTPU=",
        "owner": "NixOS",
        "repo": "nixpkgs",
        "rev": "655a58a72a6601292512670343087c2d75d859c1",
        "type": "github"
      },
      "original": {
        "owner": "NixOS",
        "ref": "nixos-unstable",
        "repo": "nixpkgs",
        "type": "github"
      }
    },
    "root": {
      "inputs": {
        "nixpkgs": "nixpkgs"
      }
    }
  },
  "root": "root",
  "version": 7
}

@ -0,0 +1,193 @@
{
  description = "The third-generation ARTIQ compiler";

  inputs.nixpkgs.url = github:NixOS/nixpkgs/nixos-unstable;

  outputs = { self, nixpkgs }:
    let
      pkgs = import nixpkgs { system = "x86_64-linux"; };
    in rec {
      packages.x86_64-linux = rec {
        llvm-nac3 = pkgs.callPackage ./nix/llvm {};
        llvm-tools-irrt = pkgs.runCommandNoCC "llvm-tools-irrt" {}
          ''
          mkdir -p $out/bin
          ln -s ${pkgs.llvmPackages_14.clang-unwrapped}/bin/clang $out/bin/clang-irrt
          ln -s ${pkgs.llvmPackages_14.clang}/bin/clang $out/bin/clang-irrt-test
          ln -s ${pkgs.llvmPackages_14.llvm.out}/bin/llvm-as $out/bin/llvm-as-irrt
          '';
        nac3artiq = pkgs.python3Packages.toPythonModule (
          pkgs.rustPlatform.buildRustPackage rec {
            name = "nac3artiq";
            outputs = [ "out" "runkernel" "standalone" ];
            src = self;
            cargoLock = {
              lockFile = ./Cargo.lock;
            };
            cargoTestFlags = [ "--features" "test" ];
            passthru.cargoLock = cargoLock;
            nativeBuildInputs = [ pkgs.python3 pkgs.llvmPackages_14.clang llvm-tools-irrt pkgs.llvmPackages_14.llvm.out llvm-nac3 ];
            buildInputs = [ pkgs.python3 llvm-nac3 ];
            checkInputs = [ (pkgs.python3.withPackages(ps: [ ps.numpy ps.scipy ])) ];
            checkPhase =
              ''
              echo "Checking nac3standalone demos..."
              pushd nac3standalone/demo
              patchShebangs .
              ./check_demos.sh
              popd
              echo "Running Cargo tests..."
              cargoCheckHook
              '';
            installPhase =
              ''
              PYTHON_SITEPACKAGES=$out/${pkgs.python3Packages.python.sitePackages}
              mkdir -p $PYTHON_SITEPACKAGES
              cp target/x86_64-unknown-linux-gnu/release/libnac3artiq.so $PYTHON_SITEPACKAGES/nac3artiq.so

              mkdir -p $runkernel/bin
              cp target/x86_64-unknown-linux-gnu/release/runkernel $runkernel/bin

              mkdir -p $standalone/bin
              cp target/x86_64-unknown-linux-gnu/release/nac3standalone $standalone/bin
              '';
          }
        );
        python3-mimalloc = pkgs.python3 // rec {
          withMimalloc = pkgs.python3.buildEnv.override({ makeWrapperArgs = [ "--set LD_PRELOAD ${pkgs.mimalloc}/lib/libmimalloc.so" ]; });
          withPackages = f: let packages = f pkgs.python3.pkgs; in withMimalloc.override { extraLibs = packages; };
        };

        # LLVM PGO support
        llvm-nac3-instrumented = pkgs.callPackage ./nix/llvm {
          stdenv = pkgs.llvmPackages_14.stdenv;
          extraCmakeFlags = [ "-DLLVM_BUILD_INSTRUMENTED=IR" ];
        };
        nac3artiq-instrumented = pkgs.python3Packages.toPythonModule (
          pkgs.rustPlatform.buildRustPackage {
            name = "nac3artiq-instrumented";
            src = self;
            inherit (nac3artiq) cargoLock;
            nativeBuildInputs = [ pkgs.python3 packages.x86_64-linux.llvm-tools-irrt llvm-nac3-instrumented ];
            buildInputs = [ pkgs.python3 llvm-nac3-instrumented ];
            cargoBuildFlags = [ "--package" "nac3artiq" "--features" "init-llvm-profile" ];
            doCheck = false;
            configurePhase =
              ''
              export CARGO_TARGET_X86_64_UNKNOWN_LINUX_GNU_RUSTFLAGS="-C link-arg=-L${pkgs.llvmPackages_14.compiler-rt}/lib/linux -C link-arg=-lclang_rt.profile-x86_64"
              '';
            installPhase =
              ''
              TARGET_DIR=$out/${pkgs.python3Packages.python.sitePackages}
              mkdir -p $TARGET_DIR
              cp target/x86_64-unknown-linux-gnu/release/libnac3artiq.so $TARGET_DIR/nac3artiq.so
              '';
          }
        );
        nac3artiq-profile = pkgs.stdenvNoCC.mkDerivation {
          name = "nac3artiq-profile";
          srcs = [
            (pkgs.fetchFromGitHub {
              owner = "m-labs";
              repo = "sipyco";
              rev = "939f84f9b5eef7efbf7423c735d1834783b6140e";
              sha256 = "sha256-15Nun4EY35j+6SPZkjzZtyH/ncxLS60KuGJjFh5kSTc=";
            })
            (pkgs.fetchFromGitHub {
              owner = "m-labs";
              repo = "artiq";
              rev = "923ca3377d42c815f979983134ec549dc39d3ca0";
              sha256 = "sha256-oJoEeNEeNFSUyh6jXG8Tzp6qHVikeHS0CzfE+mODPgw=";
            })
          ];
          buildInputs = [
            (python3-mimalloc.withPackages(ps: [ ps.numpy ps.scipy ps.jsonschema ps.lmdb nac3artiq-instrumented ]))
            pkgs.llvmPackages_14.llvm.out
          ];
          phases = [ "buildPhase" "installPhase" ];
          buildPhase =
            ''
            srcs=($srcs)
            sipyco=''${srcs[0]}
            artiq=''${srcs[1]}
            export PYTHONPATH=$sipyco:$artiq
            python -m artiq.frontend.artiq_ddb_template $artiq/artiq/examples/nac3devices/nac3devices.json > device_db.py
            cp $artiq/artiq/examples/nac3devices/nac3devices.py .
            python -m artiq.frontend.artiq_compile nac3devices.py
            '';
          installPhase =
            ''
            mkdir $out
            llvm-profdata merge -o $out/llvm.profdata /build/llvm/build/profiles/*
            '';
        };
        llvm-nac3-pgo = pkgs.callPackage ./nix/llvm {
          stdenv = pkgs.llvmPackages_14.stdenv;
          extraCmakeFlags = [ "-DLLVM_PROFDATA_FILE=${nac3artiq-profile}/llvm.profdata" ];
        };
        nac3artiq-pgo = pkgs.python3Packages.toPythonModule (
          pkgs.rustPlatform.buildRustPackage {
            name = "nac3artiq-pgo";
            src = self;
            inherit (nac3artiq) cargoLock;
            nativeBuildInputs = [ pkgs.python3 packages.x86_64-linux.llvm-tools-irrt llvm-nac3-pgo ];
            buildInputs = [ pkgs.python3 llvm-nac3-pgo ];
            cargoBuildFlags = [ "--package" "nac3artiq" ];
            cargoTestFlags = [ "--package" "nac3ast" "--package" "nac3parser" "--package" "nac3core" "--package" "nac3artiq" ];
            installPhase =
              ''
              TARGET_DIR=$out/${pkgs.python3Packages.python.sitePackages}
              mkdir -p $TARGET_DIR
              cp target/x86_64-unknown-linux-gnu/release/libnac3artiq.so $TARGET_DIR/nac3artiq.so
              '';
          }
        );
      };

      packages.x86_64-w64-mingw32 = import ./nix/windows { inherit pkgs; };

      devShells.x86_64-linux.default = pkgs.mkShell {
        name = "nac3-dev-shell";
        buildInputs = with pkgs; [
          # build dependencies
          packages.x86_64-linux.llvm-nac3
          llvmPackages_14.clang llvmPackages_14.llvm.out # for running nac3standalone demos
          packages.x86_64-linux.llvm-tools-irrt
          cargo
          rustc
          # runtime dependencies
          lld_14 # for running kernels on the host
          (packages.x86_64-linux.python3-mimalloc.withPackages(ps: [ ps.numpy ps.scipy ]))
          # development tools
          cargo-insta
          clippy
          pre-commit
          rustfmt
          rust-analyzer
        ];
        # https://nixos.wiki/wiki/Rust#Shell.nix_example
        RUST_SRC_PATH = "${pkgs.rust.packages.stable.rustPlatform.rustLibSrc}";
      };
      devShells.x86_64-linux.msys2 = pkgs.mkShell {
        name = "nac3-dev-shell-msys2";
        buildInputs = with pkgs; [
          curl
          pacman
          fakeroot
          packages.x86_64-w64-mingw32.wine-msys2
        ];
      };

      hydraJobs = {
        inherit (packages.x86_64-linux) llvm-nac3 nac3artiq nac3artiq-pgo;
        llvm-nac3-msys2 = packages.x86_64-w64-mingw32.llvm-nac3;
        nac3artiq-msys2 = packages.x86_64-w64-mingw32.nac3artiq;
        nac3artiq-msys2-pkg = packages.x86_64-w64-mingw32.nac3artiq-pkg;
      };
    };

  nixConfig = {
    extra-trusted-public-keys = "nixbld.m-labs.hk-1:5aSRVA5b320xbNvu30tqxVPXpld73bhtOeH6uAjRyHc=";
    extra-substituters = "https://nixbld.m-labs.hk";
  };
}

@ -0,0 +1,56 @@
<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<svg
   id="a"
   width="128"
   height="128"
   viewBox="0 0 95.99999 95.99999"
   version="1.1"
   sodipodi:docname="nac3.svg"
   inkscape:version="1.1.1 (3bf5ae0d25, 2021-09-20)"
   xmlns:inkscape="http://www.inkscape.org/namespaces/inkscape"
   xmlns:sodipodi="http://sodipodi.sourceforge.net/DTD/sodipodi-0.dtd"
   xmlns="http://www.w3.org/2000/svg"
   xmlns:svg="http://www.w3.org/2000/svg">
  <defs
     id="defs11" />
  <sodipodi:namedview
     id="namedview9"
     pagecolor="#ffffff"
     bordercolor="#666666"
     borderopacity="1.0"
     inkscape:pageshadow="2"
     inkscape:pageopacity="0.0"
     inkscape:pagecheckerboard="0"
     inkscape:document-units="mm"
     showgrid="false"
     units="px"
     width="128px"
     inkscape:zoom="5.9448568"
     inkscape:cx="60.472441"
     inkscape:cy="60.556547"
     inkscape:window-width="2560"
     inkscape:window-height="1371"
     inkscape:window-x="0"
     inkscape:window-y="32"
     inkscape:window-maximized="1"
     inkscape:current-layer="a" />
  <rect
     x="40.072601"
     y="-26.776209"
     width="55.668747"
     height="55.668747"
     transform="matrix(0.71803815,0.69600374,-0.71803815,0.69600374,0,0)"
     style="fill:#be211e;stroke:#000000;stroke-width:4.37375px;stroke-linecap:round;stroke-linejoin:round"
     id="rect2" />
  <line
     x1="38.00692"
     y1="63.457153"
     x2="57.993061"
     y2="63.457153"
     style="fill:none;stroke:#000000;stroke-width:4.37269px;stroke-linecap:round;stroke-linejoin:round"
     id="line4" />
  <path
d="m 48.007301,57.843329 c -1.943097,0 -3.877522,-0.41727 -5.686157,-1.246007 -3.218257,-1.474616 -5.650382,-4.075418 -6.849639,-7.323671 -2.065624,-5.588921 -1.192751,-10.226647 2.575258,-13.827 0.611554,-0.584909 1.518048,-0.773041 2.323689,-0.488206 0.80673,0.286405 1.369495,0.998486 1.447563,1.827234 0.237469,2.549302 2.439719,5.917376 4.28414,6.55273 0.396859,0.13506 0.820953,-0.05859 1.097084,-0.35222 0.339254,-0.360754 0.451065,-0.961893 -1.013597,-3.191372 -2.089851,-3.181137 -4.638728,-8.754903 -0.262407,-15.069853 0.494457,-0.713491 1.384673,-1.068907 2.256469,-0.909156 0.871795,0.161332 1.583757,0.806404 1.752251,1.651189 0.716448,3.591862 2.962357,6.151755 5.199306,8.023138 1.935503,1.61861 4.344688,3.867387 5.435687,7.096643 2.283183,6.758017 -1.202511,14.114988 -8.060822,16.494025 -1.467083,0.509226 -2.98513,0.762536 -4.498836,0.762536 z M 39.358865,40.002192 c -0.304711,0.696206 -0.541636,2.080524 -0.56865,2.237454 -0.330316,1.918771 0.168305,3.803963 0.846157,5.539951 0.856828,2.19436 2.437543,3.942467 4.583411,4.925713 2.143691,0.981675 4.554131,1.097816 6.789992,0.322666 4.571485,-1.586549 6.977584,-6.532238 5.363036,-11.02597 v -5.27e-4 C 55.455481,39.447968 54.023463,38.162043 52.221335,36.65432 50.876945,35.529534 49.409662,33.987726 48.417983,32.135555 48.01343,31.37996 47.79547,30.34303 47.76669,29.413263 c -0.187481,0.669514 -0.212441,2.325923 -0.150396,2.93691 0.179209,1.764456 1.333476,3.644546 2.340611,5.171243 1.311568,1.988179 2.72058,6.037272 0.459681,8.367985 -1.54192,1.58953 -4.038511,2.052034 -5.839973,1.38492 -2.398314,-0.888147 -3.942744,-2.690627 -4.941118,-4.768029 -0.121194,-0.25217 -0.532464,-1.174187 -0.276619,-2.5041 z"
     id="path6"
     style="stroke-width:1.09317" />
</svg>

After Width: | Height: | Size: 3.3 KiB

@ -0,0 +1,26 @@
[package]
name = "nac3artiq"
version = "0.1.0"
authors = ["M-Labs"]
edition = "2021"

[lib]
name = "nac3artiq"
crate-type = ["cdylib"]

[dependencies]
itertools = "0.13"
pyo3 = { version = "0.21", features = ["extension-module", "gil-refs"] }
parking_lot = "0.12"
tempfile = "3.10"
nac3parser = { path = "../nac3parser" }
nac3core = { path = "../nac3core" }
nac3ld = { path = "../nac3ld" }

[dependencies.inkwell]
version = "0.4"
default-features = false
features = ["llvm14-0", "target-x86", "target-arm", "target-riscv", "no-libffi-linking"]

[features]
init-llvm-profile = []

@ -0,0 +1,26 @@
from min_artiq import *


@nac3
class Demo:
    core: KernelInvariant[Core]
    led0: KernelInvariant[TTLOut]
    led1: KernelInvariant[TTLOut]

    def __init__(self):
        self.core = Core()
        self.led0 = TTLOut(self.core, 18)
        self.led1 = TTLOut(self.core, 19)

    @kernel
    def run(self):
        self.core.reset()
        while True:
            with parallel:
                self.led0.pulse(100.*ms)
                self.led1.pulse(100.*ms)
            self.core.delay(100.*ms)


if __name__ == "__main__":
    Demo().run()

@ -0,0 +1,16 @@
# python demo.py
# artiq_run module.elf

device_db = {
    "core": {
        "type": "local",
        "module": "artiq.coredevice.core",
        "class": "Core",
        "arguments": {
            "host": "kc705",
            "ref_period": 1e-9,
            "ref_multiplier": 8,
            "target": "rv32g"
        }
    },
}

@ -0,0 +1,66 @@
class EmbeddingMap:
    def __init__(self):
        self.object_inverse_map = {}
        self.object_map = {}
        self.string_map = {}
        self.string_reverse_map = {}
        self.function_map = {}
        self.attributes_writeback = []

        # preallocate exception names
        self.preallocate_runtime_exception_names(["RuntimeError",
            "RTIOUnderflow",
            "RTIOOverflow",
            "RTIODestinationUnreachable",
            "DMAError",
            "I2CError",
            "CacheError",
            "SPIError",
            "0:ZeroDivisionError",
            "0:IndexError",
            "0:ValueError",
            "0:RuntimeError",
            "0:AssertionError",
            "0:KeyError",
            "0:NotImplementedError",
            "0:OverflowError",
            "0:IOError",
            "0:UnwrapNoneError"])

    def preallocate_runtime_exception_names(self, names):
        for i, name in enumerate(names):
            if ":" not in name:
                name = "0:artiq.coredevice.exceptions." + name
            exn_id = self.store_str(name)
            assert exn_id == i

    def store_function(self, key, fun):
        self.function_map[key] = fun
        return key

    def store_object(self, obj):
        obj_id = id(obj)
        if obj_id in self.object_inverse_map:
            return self.object_inverse_map[obj_id]
        key = len(self.object_map) + 1
        self.object_map[key] = obj
        self.object_inverse_map[obj_id] = key
        return key

    def store_str(self, s):
        if s in self.string_reverse_map:
            return self.string_reverse_map[s]
        key = len(self.string_map)
        self.string_map[key] = s
        self.string_reverse_map[s] = key
        return key

    def retrieve_function(self, key):
        return self.function_map[key]

    def retrieve_object(self, key):
        return self.object_map[key]

    def retrieve_str(self, key):
        return self.string_map[key]

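The `store_str`/`retrieve_str` pair above is a bidirectional interning table: a forward map keyed by dense integers plus a reverse map for deduplication, which is what lets `preallocate_runtime_exception_names` assert that each exception name receives a predictable index. A minimal standalone sketch of that pattern (the `StringInterner` name is hypothetical, not part of the module):

```python
# Standalone sketch of the interning pattern used by EmbeddingMap.store_str:
# storing the same string twice returns the same dense key.
class StringInterner:
    def __init__(self):
        self.forward = {}   # key -> string
        self.reverse = {}   # string -> key

    def store(self, s):
        if s in self.reverse:
            return self.reverse[s]
        key = len(self.forward)  # keys are dense, starting at 0
        self.forward[key] = s
        self.reverse[s] = key
        return key

    def retrieve(self, key):
        return self.forward[key]


interner = StringInterner()
k1 = interner.store("RuntimeError")
k2 = interner.store("RTIOUnderflow")
k3 = interner.store("RuntimeError")   # deduplicated, same key as k1
assert (k1, k2, k3) == (0, 1, 0)
assert interner.retrieve(1) == "RTIOUnderflow"
```

Note that `store_object` above uses the same reverse-map trick but keys objects by `id()` and starts numbering at 1 rather than 0.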
@ -0,0 +1,295 @@
from inspect import getfullargspec
from functools import wraps
from types import SimpleNamespace
from numpy import int32, int64
from typing import Generic, TypeVar
from math import floor, ceil

import nac3artiq
from embedding_map import EmbeddingMap


__all__ = [
    "Kernel", "KernelInvariant", "virtual", "ConstGeneric",
    "Option", "Some", "none", "UnwrapNoneError",
    "round64", "floor64", "ceil64",
    "extern", "kernel", "portable", "nac3",
    "rpc", "ms", "us", "ns",
    "print_int32", "print_int64",
    "Core", "TTLOut",
    "parallel", "sequential"
]


T = TypeVar('T')

class Kernel(Generic[T]):
    pass

class KernelInvariant(Generic[T]):
    pass

# The virtual class must exist before nac3artiq.NAC3 is created.
class virtual(Generic[T]):
    pass

class Option(Generic[T]):
    _nac3_option: T

    def __init__(self, v: T):
        self._nac3_option = v

    def is_none(self):
        return self._nac3_option is None

    def is_some(self):
        return not self.is_none()

    def unwrap(self):
        if self.is_none():
            raise UnwrapNoneError()
        return self._nac3_option

    def __repr__(self) -> str:
        if self.is_none():
            return "none"
        else:
            return "Some({})".format(repr(self._nac3_option))

    def __str__(self) -> str:
        if self.is_none():
            return "none"
        else:
            return "Some({})".format(str(self._nac3_option))

def Some(v: T) -> Option[T]:
    return Option(v)

none = Option(None)
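As a quick check of the semantics above, here is a standalone copy of the relevant `Option` methods (duplicated so it runs without importing `min_artiq` or defining `UnwrapNoneError` elsewhere):

```python
# Standalone copy of the Option shim above, for illustration only.
class UnwrapNoneError(Exception):
    pass

class Option:
    def __init__(self, v):
        self._nac3_option = v
    def is_none(self):
        return self._nac3_option is None
    def is_some(self):
        return not self.is_none()
    def unwrap(self):
        if self.is_none():
            raise UnwrapNoneError()
        return self._nac3_option

def Some(v):
    return Option(v)

none = Option(None)

assert Some(5).unwrap() == 5
assert none.is_none() and not none.is_some()
try:
    none.unwrap()          # raises UnwrapNoneError
except UnwrapNoneError:
    pass
```

One subtlety of this host-side shim: because `is_none` tests the wrapped value rather than a separate tag, `Some(None)` is indistinguishable from `none`.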

class _ConstGenericMarker:
    pass

def ConstGeneric(name, constraint):
    return TypeVar(name, _ConstGenericMarker, constraint)

def round64(x):
    return round(x)

def floor64(x):
    return floor(x)

def ceil64(x):
    return ceil(x)


import device_db
core_arguments = device_db.device_db["core"]["arguments"]

artiq_builtins = {
    "none": none,
    "virtual": virtual,
    "_ConstGenericMarker": _ConstGenericMarker,
    "Option": Option,
}
compiler = nac3artiq.NAC3(core_arguments["target"], artiq_builtins)
allow_registration = True
# Delay NAC3 analysis until all referenced variables are supposed to exist on the CPython side.
registered_functions = set()
registered_classes = set()

def register_function(fun):
    assert allow_registration
    registered_functions.add(fun)

def register_class(cls):
    assert allow_registration
    registered_classes.add(cls)
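The registration machinery above is a one-shot gate: decorators may register functions and classes only until the first analysis pass flips `allow_registration` to `False`, after which late registrations trip the assertion. A standalone sketch of that gate (`analyze_once` is a hypothetical stand-in for the `compiler.analyze` call performed in `Core.run`):

```python
# Sketch of the one-shot registration gate used by min_artiq.
allow_registration = True
registered_functions = set()

def register_function(fun):
    assert allow_registration
    registered_functions.add(fun)

def analyze_once():
    # In min_artiq this happens on the first Core.run() invocation:
    # compiler.analyze(registered_functions, registered_classes)
    global allow_registration
    if allow_registration:
        allow_registration = False

register_function(abs)     # fine: gate still open
analyze_once()             # gate closes here
try:
    register_function(max) # too late: assertion fires
    late_rejected = False
except AssertionError:
    late_rejected = True
assert late_rejected
```

This is what the comment about delaying NAC3 analysis refers to: nothing is analyzed until the first kernel invocation, so all module-level names can be defined first.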


def extern(function):
    """Decorates a function declaration defined by the core device runtime."""
    register_function(function)
    return function

def rpc(function):
    """Decorates a function declaration defined by the core device runtime."""
    register_function(function)
    return function

def kernel(function_or_method):
    """Decorates a function or method to be executed on the core device."""
    register_function(function_or_method)
    argspec = getfullargspec(function_or_method)
    if argspec.args and argspec.args[0] == "self":
        @wraps(function_or_method)
        def run_on_core(self, *args, **kwargs):
            fake_method = SimpleNamespace(__self__=self, __name__=function_or_method.__name__)
            self.core.run(fake_method, *args, **kwargs)
    else:
        @wraps(function_or_method)
        def run_on_core(*args, **kwargs):
            raise RuntimeError("Kernel functions need explicit core.run()")
    return run_on_core
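The `kernel` decorator distinguishes methods from free functions purely by the name of the first positional argument, which is why plain kernel functions raise "Kernel functions need explicit core.run()" when called directly. The check can be exercised standalone (`looks_like_method` is a hypothetical helper extracting just that test):

```python
from inspect import getfullargspec

def looks_like_method(fn):
    # Same test kernel() uses: a first positional argument literally
    # named "self" marks the callable as a method.
    argspec = getfullargspec(fn)
    return bool(argspec.args) and argspec.args[0] == "self"

class C:
    def m(self, x):
        return x

def f(x):
    return x

assert looks_like_method(C.m)
assert not looks_like_method(f)
```

Being purely name-based, the heuristic would misclassify a free function whose first parameter happens to be called `self`; for decorated methods it also relies on `SimpleNamespace(__self__=..., __name__=...)` to fake the bound-method attributes that `Core.run` inspects.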
||||||
|
|
||||||
|
|
||||||
|
def portable(function):
|
||||||
|
"""Decorates a function or method to be executed on the same device (host/core device) as the caller."""
|
||||||
|
register_function(function)
|
||||||
|
return function
|
||||||
|
|
||||||
|
|
||||||
|
def nac3(cls):
|
||||||
|
"""
|
||||||
|
Decorates a class to be analyzed by NAC3.
|
||||||
|
All classes containing kernels or portable methods must use this decorator.
|
||||||
|
"""
|
||||||
|
register_class(cls)
|
||||||
|
return cls
|
||||||
|
|
||||||
|
|
||||||
|
ms = 1e-3
|
||||||
|
us = 1e-6
|
||||||
|
ns = 1e-9
|
||||||
|
|
||||||
|
@extern
def rtio_init():
    raise NotImplementedError("syscall not simulated")


@extern
def rtio_get_counter() -> int64:
    raise NotImplementedError("syscall not simulated")


@extern
def rtio_output(target: int32, data: int32):
    raise NotImplementedError("syscall not simulated")


@extern
def rtio_input_timestamp(timeout_mu: int64, channel: int32) -> int64:
    raise NotImplementedError("syscall not simulated")


@extern
def rtio_input_data(channel: int32) -> int32:
    raise NotImplementedError("syscall not simulated")


# These are not part of ARTIQ and only available in runkernel. Defined here for convenience.
@extern
def print_int32(x: int32):
    raise NotImplementedError("syscall not simulated")


@extern
def print_int64(x: int64):
    raise NotImplementedError("syscall not simulated")


@nac3
class Core:
    ref_period: KernelInvariant[float]

    def __init__(self):
        self.ref_period = core_arguments["ref_period"]

    def run(self, method, *args, **kwargs):
        global allow_registration

        embedding = EmbeddingMap()

        if allow_registration:
            compiler.analyze(registered_functions, registered_classes)
            allow_registration = False

        if hasattr(method, "__self__"):
            obj = method.__self__
            name = method.__name__
        else:
            obj = method
            name = ""

        compiler.compile_method_to_file(obj, name, args, "module.elf", embedding)

    @kernel
    def reset(self):
        rtio_init()
        at_mu(rtio_get_counter() + int64(125000))

    @kernel
    def break_realtime(self):
        min_now = rtio_get_counter() + int64(125000)
        if now_mu() < min_now:
            at_mu(min_now)

    @portable
    def seconds_to_mu(self, seconds: float) -> int64:
        return int64(round(seconds/self.ref_period))

    @portable
    def mu_to_seconds(self, mu: int64) -> float:
        return float(mu)*self.ref_period

    @kernel
    def delay(self, dt: float):
        delay_mu(self.seconds_to_mu(dt))


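The `seconds_to_mu`/`mu_to_seconds` pair above is a plain scale by `ref_period`; a standalone host-side sketch of the same conversion (the 1 ns `ref_period` is an assumed value for illustration, not taken from this file):

```python
# Standalone sketch of the Core.seconds_to_mu / Core.mu_to_seconds conversion
# above. The 1 ns ref_period is an assumption for illustration.
ref_period = 1e-9  # assumed machine-unit period (1 ns)

def seconds_to_mu(seconds: float) -> int:
    # Same rounding behavior as Core.seconds_to_mu
    return round(seconds / ref_period)

def mu_to_seconds(mu: int) -> float:
    return mu * ref_period
```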
@nac3
class TTLOut:
    core: KernelInvariant[Core]
    channel: KernelInvariant[int32]
    target_o: KernelInvariant[int32]

    def __init__(self, core: Core, channel: int32):
        self.core = core
        self.channel = channel
        self.target_o = channel << 8

    @kernel
    def output(self):
        pass

    @kernel
    def set_o(self, o: bool):
        rtio_output(self.target_o, 1 if o else 0)

    @kernel
    def on(self):
        self.set_o(True)

    @kernel
    def off(self):
        self.set_o(False)

    @kernel
    def pulse_mu(self, duration: int64):
        self.on()
        delay_mu(duration)
        self.off()

    @kernel
    def pulse(self, duration: float):
        self.on()
        self.core.delay(duration)
        self.off()


@nac3
class KernelContextManager:
    @kernel
    def __enter__(self):
        pass

    @kernel
    def __exit__(self):
        pass


@nac3
class UnwrapNoneError(Exception):
    """raised when unwrapping a none value"""
    artiq_builtin = True


parallel = KernelContextManager()
sequential = KernelContextManager()

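For intuition, the `parallel` and `sequential` context managers defined above change how the timeline cursor advances; a host-side sketch of the intended semantics (plain Python for illustration, not kernel code):

```python
# Sketch of the timeline semantics of `with parallel:` vs `with sequential:`.
# Durations are in machine units; this is an illustration, not ARTIQ code.
def run_sequential(start: int, durations: list[int]) -> int:
    # Each statement starts where the previous one ended.
    for d in durations:
        start += d
    return start

def run_parallel(start: int, durations: list[int]) -> int:
    # Every statement starts at the block's start; the block as a whole ends
    # at the maximum end time across its statements.
    return max((start + d for d in durations), default=start)
```

A `pulse` inside a `parallel` block therefore overlaps its siblings, while the enclosing block still advances the cursor past the longest one.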
@@ -0,0 +1 @@
../../target/release/libnac3artiq.so

@@ -0,0 +1,24 @@
from min_artiq import *
from numpy import int32


@nac3
class Demo:
    core: KernelInvariant[Core]
    attr1: KernelInvariant[str]
    attr2: KernelInvariant[int32]

    def __init__(self):
        self.core = Core()
        self.attr2 = 32
        self.attr1 = "SAMPLE"

    @kernel
    def run(self):
        print_int32(self.attr2)
        self.attr1


if __name__ == "__main__":
    Demo().run()

@@ -0,0 +1,40 @@
from min_artiq import *
from numpy import int32


@nac3
class Demo:
    attr1: KernelInvariant[int32] = 2
    attr2: int32 = 4
    attr3: Kernel[int32]

    @kernel
    def __init__(self):
        self.attr3 = 8


@nac3
class NAC3Devices:
    core: KernelInvariant[Core]
    attr4: KernelInvariant[int32] = 16

    def __init__(self):
        self.core = Core()

    @kernel
    def run(self):
        Demo.attr1  # Supported
        # Demo.attr2 # Field not accessible on Kernel
        # Demo.attr3 # Only attributes can be accessed in this way
        # Demo.attr1 = 2 # Attributes are immutable

        self.attr4  # Attributes can be accessed within class

        obj = Demo()
        obj.attr1  # Attributes can be accessed by class objects

        NAC3Devices.attr4  # Attributes accessible for classes without __init__


if __name__ == "__main__":
    NAC3Devices().run()

@@ -0,0 +1,725 @@
use nac3core::{
    codegen::{
        expr::gen_call,
        llvm_intrinsics::{call_int_smax, call_stackrestore, call_stacksave},
        stmt::{gen_block, gen_with},
        CodeGenContext, CodeGenerator,
    },
    symbol_resolver::ValueEnum,
    toplevel::{helper::PrimDef, numpy::unpack_ndarray_var_tys, DefinitionId, GenCall},
    typecheck::typedef::{iter_type_vars, FunSignature, FuncArg, Type, TypeEnum, VarMap},
};

use nac3parser::ast::{Expr, ExprKind, Located, Stmt, StmtKind, StrRef};

use inkwell::{
    context::Context, module::Linkage, types::IntType, values::BasicValueEnum, AddressSpace,
};

use pyo3::{
    types::{PyDict, PyList},
    PyObject, PyResult, Python,
};

use crate::{symbol_resolver::InnerResolver, timeline::TimeFns};

use std::{
    collections::hash_map::DefaultHasher,
    collections::HashMap,
    hash::{Hash, Hasher},
    sync::Arc,
};

/// The parallelism mode within a block.
#[derive(Copy, Clone, Eq, PartialEq)]
enum ParallelMode {
    /// No parallelism is currently registered for this context.
    None,

    /// Legacy (or shallow) parallelism. Default before NAC3.
    ///
    /// Each statement within the `with` block is treated as a statement to be executed in
    /// parallel.
    Legacy,

    /// Deep parallelism. Default since NAC3.
    ///
    /// Each function call within the `with` block (except those within a nested `sequential`
    /// block) is treated as a call to be executed in parallel.
    Deep,
}

pub struct ArtiqCodeGenerator<'a> {
    name: String,

    /// The size of a `size_t` variable in bits.
    size_t: u32,

    /// Monotonic counter for naming `start`/`stop` variables used by `with parallel` blocks.
    name_counter: u32,

    /// Variable for tracking the start of a `with parallel` block.
    start: Option<Expr<Option<Type>>>,

    /// Variable for tracking the end of a `with parallel` block.
    end: Option<Expr<Option<Type>>>,

    timeline: &'a (dyn TimeFns + Sync),

    /// The [`ParallelMode`] of the current parallel context.
    ///
    /// The current parallel context refers to the nearest `with parallel` or `with legacy_parallel`
    /// statement, which is used to determine when and how the timeline should be updated.
    parallel_mode: ParallelMode,
}

impl<'a> ArtiqCodeGenerator<'a> {
    pub fn new(
        name: String,
        size_t: u32,
        timeline: &'a (dyn TimeFns + Sync),
    ) -> ArtiqCodeGenerator<'a> {
        assert!(size_t == 32 || size_t == 64);
        ArtiqCodeGenerator {
            name,
            size_t,
            name_counter: 0,
            start: None,
            end: None,
            timeline,
            parallel_mode: ParallelMode::None,
        }
    }

    /// If the generator is currently in a direct-`parallel` block context, emits IR that resets
    /// the position of the timeline to the initial timeline position before entering the
    /// `parallel` block.
    ///
    /// Direct-`parallel` block context refers to when the generator is generating statements whose
    /// closest parent `with` statement is a `with parallel` block.
    fn timeline_reset_start(&mut self, ctx: &mut CodeGenContext<'_, '_>) -> Result<(), String> {
        if let Some(start) = self.start.clone() {
            let start_val = self.gen_expr(ctx, &start)?.unwrap().to_basic_value_enum(
                ctx,
                self,
                start.custom.unwrap(),
            )?;
            self.timeline.emit_at_mu(ctx, start_val);
        }

        Ok(())
    }

    /// If the generator is currently in a `parallel` block context, emits IR that updates the
    /// maximum end position of the `parallel` block as specified by the timeline `end` value.
    ///
    /// In general the `end` parameter should be set to `self.end` for updating the maximum end
    /// position for the current `parallel` block. Other values can be passed in to update the
    /// maximum end position for other `parallel` blocks.
    ///
    /// `parallel`-block context refers to when the generator is generating statements within a
    /// (possibly indirect) `parallel` block.
    ///
    /// * `store_name` - The LLVM value name for the pointer to `end`. `.addr` will be appended to
    ///   the end of the provided value name.
    fn timeline_update_end_max(
        &mut self,
        ctx: &mut CodeGenContext<'_, '_>,
        end: Option<Expr<Option<Type>>>,
        store_name: Option<&str>,
    ) -> Result<(), String> {
        if let Some(end) = end {
            let old_end = self.gen_expr(ctx, &end)?.unwrap().to_basic_value_enum(
                ctx,
                self,
                end.custom.unwrap(),
            )?;
            let now = self.timeline.emit_now_mu(ctx);
            let max =
                call_int_smax(ctx, old_end.into_int_value(), now.into_int_value(), Some("smax"));
            let end_store = self
                .gen_store_target(
                    ctx,
                    &end,
                    store_name.map(|name| format!("{name}.addr")).as_deref(),
                )?
                .unwrap();
            ctx.builder.build_store(end_store, max).unwrap();
        }

        Ok(())
    }
}

impl<'b> CodeGenerator for ArtiqCodeGenerator<'b> {
    fn get_name(&self) -> &str {
        &self.name
    }

    fn get_size_type<'ctx>(&self, ctx: &'ctx Context) -> IntType<'ctx> {
        if self.size_t == 32 {
            ctx.i32_type()
        } else {
            ctx.i64_type()
        }
    }

    fn gen_block<'ctx, 'a, 'c, I: Iterator<Item = &'c Stmt<Option<Type>>>>(
        &mut self,
        ctx: &mut CodeGenContext<'ctx, 'a>,
        stmts: I,
    ) -> Result<(), String>
    where
        Self: Sized,
    {
        // Legacy parallel emits timeline end-update/timeline-reset after each top-level statement
        // in the parallel block
        if self.parallel_mode == ParallelMode::Legacy {
            for stmt in stmts {
                self.gen_stmt(ctx, stmt)?;

                if ctx.is_terminated() {
                    break;
                }

                self.timeline_update_end_max(ctx, self.end.clone(), Some("end"))?;
                self.timeline_reset_start(ctx)?;
            }

            Ok(())
        } else {
            gen_block(self, ctx, stmts)
        }
    }

    fn gen_call<'ctx>(
        &mut self,
        ctx: &mut CodeGenContext<'ctx, '_>,
        obj: Option<(Type, ValueEnum<'ctx>)>,
        fun: (&FunSignature, DefinitionId),
        params: Vec<(Option<StrRef>, ValueEnum<'ctx>)>,
    ) -> Result<Option<BasicValueEnum<'ctx>>, String> {
        let result = gen_call(self, ctx, obj, fun, params)?;

        // Deep parallel emits timeline end-update/timeline-reset after each function call
        if self.parallel_mode == ParallelMode::Deep {
            self.timeline_update_end_max(ctx, self.end.clone(), Some("end"))?;
            self.timeline_reset_start(ctx)?;
        }

        Ok(result)
    }

    fn gen_with(
        &mut self,
        ctx: &mut CodeGenContext<'_, '_>,
        stmt: &Stmt<Option<Type>>,
    ) -> Result<(), String> {
        let StmtKind::With { items, body, .. } = &stmt.node else { unreachable!() };

        if items.len() == 1 && items[0].optional_vars.is_none() {
            let item = &items[0];

            // Behavior of parallel and sequential:
            // Each function call (indirectly, can be inside a sequential block) within a parallel
            // block will update the end variable to the maximum now_mu in the block.
            // Each function call directly inside a parallel block will reset the timeline after
            // execution. A parallel block within a sequential block (or not within any block) will
            // set the timeline to the max now_mu within the block (and the outer max now_mu will
            // also be updated).
            //
            // Implementation: We track the start and end separately.
            // - If there is a start variable, it indicates that we are directly inside a
            //   parallel block and we have to reset the timeline after every function call.
            // - If there is an end variable, it indicates that we are (indirectly) inside a
            //   parallel block, and we should update the max end value.
            if let ExprKind::Name { id, ctx: name_ctx } = &item.context_expr.node {
                if id == &"parallel".into() || id == &"legacy_parallel".into() {
                    let old_start = self.start.take();
                    let old_end = self.end.take();
                    let old_parallel_mode = self.parallel_mode;

                    let now = if let Some(old_start) = &old_start {
                        self.gen_expr(ctx, old_start)?.unwrap().to_basic_value_enum(
                            ctx,
                            self,
                            old_start.custom.unwrap(),
                        )?
                    } else {
                        self.timeline.emit_now_mu(ctx)
                    };

                    // Emulate variable allocation, as we need to use the CodeGenContext
                    // HashMap to store our variable due to lifetime limitation
                    // Note: we should be able to store variables directly if a generic
                    // associated type is used by limiting the lifetime of CodeGenerator to
                    // the LLVM Context.
                    // The name is guaranteed to be unique as users cannot use this as a
                    // variable name.
                    self.start = old_start.clone().map_or_else(
                        || {
                            let start = format!("with-{}-start", self.name_counter).into();
                            let start_expr = Located {
                                // location does not matter at this point
                                location: stmt.location,
                                node: ExprKind::Name { id: start, ctx: *name_ctx },
                                custom: Some(ctx.primitives.int64),
                            };
                            let start = self
                                .gen_store_target(ctx, &start_expr, Some("start.addr"))?
                                .unwrap();
                            ctx.builder.build_store(start, now).unwrap();
                            Ok(Some(start_expr)) as Result<_, String>
                        },
                        |v| Ok(Some(v)),
                    )?;
                    let end = format!("with-{}-end", self.name_counter).into();
                    let end_expr = Located {
                        // location does not matter at this point
                        location: stmt.location,
                        node: ExprKind::Name { id: end, ctx: *name_ctx },
                        custom: Some(ctx.primitives.int64),
                    };
                    let end = self.gen_store_target(ctx, &end_expr, Some("end.addr"))?.unwrap();
                    ctx.builder.build_store(end, now).unwrap();
                    self.end = Some(end_expr);
                    self.name_counter += 1;
                    self.parallel_mode = match id.to_string().as_str() {
                        "parallel" => ParallelMode::Deep,
                        "legacy_parallel" => ParallelMode::Legacy,
                        _ => unreachable!(),
                    };

                    self.gen_block(ctx, body.iter())?;

                    let current = ctx.builder.get_insert_block().unwrap();

                    // if the current block is terminated, move before the terminator
                    // we want to set the timeline before reaching the terminator
                    // TODO: This may be unsound if there are multiple exit paths in the
                    // block... e.g.
                    // if ...:
                    //     return
                    // Perhaps we can fix this by using an actual with block?
                    let reset_position = if let Some(terminator) = current.get_terminator() {
                        ctx.builder.position_before(&terminator);
                        true
                    } else {
                        false
                    };

                    // set duration
                    let end_expr = self.end.take().unwrap();
                    let end_val = self.gen_expr(ctx, &end_expr)?.unwrap().to_basic_value_enum(
                        ctx,
                        self,
                        end_expr.custom.unwrap(),
                    )?;

                    // inside a sequential block
                    if old_start.is_none() {
                        self.timeline.emit_at_mu(ctx, end_val);
                    }

                    // inside a parallel block, should update the outer max now_mu
                    self.timeline_update_end_max(ctx, old_end.clone(), Some("outer.end"))?;

                    self.parallel_mode = old_parallel_mode;
                    self.end = old_end;
                    self.start = old_start;

                    if reset_position {
                        ctx.builder.position_at_end(current);
                    }

                    return Ok(());
                } else if id == &"sequential".into() {
                    // For deep parallel, temporarily take away start to avoid function calls in
                    // the block from resetting the timeline.
                    // This does not affect legacy parallel, as the timeline will be reset after
                    // this block finishes execution.
                    let start = self.start.take();
                    self.gen_block(ctx, body.iter())?;
                    self.start = start;

                    // Reset the timeline when we are exiting the sequential block
                    // Legacy parallel does not need this, since it will be reset after codegen
                    // for this statement is completed
                    if self.parallel_mode == ParallelMode::Deep {
                        self.timeline_reset_start(ctx)?;
                    }

                    return Ok(());
                }
            }
        }

        // not parallel/sequential
        gen_with(self, ctx, stmt)
    }
}

fn gen_rpc_tag(
    ctx: &mut CodeGenContext<'_, '_>,
    ty: Type,
    buffer: &mut Vec<u8>,
) -> Result<(), String> {
    use nac3core::typecheck::typedef::TypeEnum::*;

    let int32 = ctx.primitives.int32;
    let int64 = ctx.primitives.int64;
    let float = ctx.primitives.float;
    let bool = ctx.primitives.bool;
    let str = ctx.primitives.str;
    let none = ctx.primitives.none;

    if ctx.unifier.unioned(ty, int32) {
        buffer.push(b'i');
    } else if ctx.unifier.unioned(ty, int64) {
        buffer.push(b'I');
    } else if ctx.unifier.unioned(ty, float) {
        buffer.push(b'f');
    } else if ctx.unifier.unioned(ty, bool) {
        buffer.push(b'b');
    } else if ctx.unifier.unioned(ty, str) {
        buffer.push(b's');
    } else if ctx.unifier.unioned(ty, none) {
        buffer.push(b'n');
    } else {
        let ty_enum = ctx.unifier.get_ty(ty);
        match &*ty_enum {
            TTuple { ty } => {
                buffer.push(b't');
                buffer.push(ty.len() as u8);
                for ty in ty {
                    gen_rpc_tag(ctx, *ty, buffer)?;
                }
            }
            TObj { obj_id, params, .. } if *obj_id == PrimDef::List.id() => {
                let ty = iter_type_vars(params).next().unwrap().ty;

                buffer.push(b'l');
                gen_rpc_tag(ctx, ty, buffer)?;
            }
            TObj { obj_id, .. } if *obj_id == PrimDef::NDArray.id() => {
                let (ndarray_dtype, ndarray_ndims) = unpack_ndarray_var_tys(&mut ctx.unifier, ty);
                let ndarray_ndims = if let TLiteral { values, .. } =
                    &*ctx.unifier.get_ty_immutable(ndarray_ndims)
                {
                    if values.len() != 1 {
                        return Err(format!(
                            "NDArray types with multiple literal bounds for ndims is not supported: {}",
                            ctx.unifier.stringify(ty)
                        ));
                    }

                    let value = values[0].clone();
                    u64::try_from(value.clone())
                        .map_err(|()| format!("Expected u64 for ndarray.ndims, got {value}"))?
                } else {
                    unreachable!()
                };
                assert!((0u64..=u64::from(u8::MAX)).contains(&ndarray_ndims));

                buffer.push(b'a');
                buffer.push((ndarray_ndims & 0xFF) as u8);
                gen_rpc_tag(ctx, ndarray_dtype, buffer)?;
            }
            _ => return Err(format!("Unsupported type: {:?}", ctx.unifier.stringify(ty))),
        }
    }
    Ok(())
}

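The tag bytes emitted by `gen_rpc_tag` above (`i`, `I`, `f`, `b`, `s`, `n` for scalars; `t` plus arity for tuples, `l` for lists, `a` plus ndims for ndarrays) can be mimicked in a small Python sketch; the type representation used here is hypothetical, only the tag bytes come from the source:

```python
# Python sketch of the byte tags produced by gen_rpc_tag above.
# Types are modeled as strings for scalars and tuples for compound types;
# this representation is hypothetical, only the tag bytes match the source.
SCALARS = {"int32": b"i", "int64": b"I", "float": b"f",
           "bool": b"b", "str": b"s", "none": b"n"}

def encode_tag(ty) -> bytes:
    if isinstance(ty, str):
        return SCALARS[ty]
    kind = ty[0]
    if kind == "tuple":    # ("tuple", [elem, ...]) -> b't' + arity + element tags
        return b"t" + bytes([len(ty[1])]) + b"".join(encode_tag(e) for e in ty[1])
    if kind == "list":     # ("list", elem) -> b'l' + element tag
        return b"l" + encode_tag(ty[1])
    if kind == "ndarray":  # ("ndarray", dtype, ndims) -> b'a' + ndims + dtype tag
        return b"a" + bytes([ty[2]]) + encode_tag(ty[1])
    raise ValueError(f"Unsupported type: {ty}")
```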
fn rpc_codegen_callback_fn<'ctx>(
    ctx: &mut CodeGenContext<'ctx, '_>,
    obj: Option<(Type, ValueEnum<'ctx>)>,
    fun: (&FunSignature, DefinitionId),
    args: Vec<(Option<StrRef>, ValueEnum<'ctx>)>,
    generator: &mut dyn CodeGenerator,
) -> Result<Option<BasicValueEnum<'ctx>>, String> {
    let ptr_type = ctx.ctx.i8_type().ptr_type(AddressSpace::default());
    let size_type = generator.get_size_type(ctx.ctx);
    let int8 = ctx.ctx.i8_type();
    let int32 = ctx.ctx.i32_type();
    let tag_ptr_type = ctx.ctx.struct_type(&[ptr_type.into(), size_type.into()], false);

    let service_id = int32.const_int(fun.1 .0 as u64, false);
    // -- setup rpc tags
    let mut tag = Vec::new();
    if obj.is_some() {
        tag.push(b'O');
    }
    for arg in &fun.0.args {
        gen_rpc_tag(ctx, arg.ty, &mut tag)?;
    }
    tag.push(b':');
    gen_rpc_tag(ctx, fun.0.ret, &mut tag)?;

    let mut hasher = DefaultHasher::new();
    tag.hash(&mut hasher);
    let hash = format!("{}", hasher.finish());

    let tag_ptr = ctx
        .module
        .get_global(hash.as_str())
        .unwrap_or_else(|| {
            let tag_arr_ptr = ctx.module.add_global(
                int8.array_type(tag.len() as u32),
                None,
                format!("tagptr{}", fun.1 .0).as_str(),
            );
            tag_arr_ptr.set_initializer(&int8.const_array(
                &tag.iter().map(|v| int8.const_int(u64::from(*v), false)).collect::<Vec<_>>(),
            ));
            tag_arr_ptr.set_linkage(Linkage::Private);
            let tag_ptr = ctx.module.add_global(tag_ptr_type, None, &hash);
            tag_ptr.set_linkage(Linkage::Private);
            tag_ptr.set_initializer(&ctx.ctx.const_struct(
                &[
                    tag_arr_ptr.as_pointer_value().const_cast(ptr_type).into(),
                    size_type.const_int(tag.len() as u64, false).into(),
                ],
                false,
            ));
            tag_ptr
        })
        .as_pointer_value();

    let arg_length = args.len() + usize::from(obj.is_some());

    let stackptr = call_stacksave(ctx, Some("rpc.stack"));
    let args_ptr = ctx
        .builder
        .build_array_alloca(
            ptr_type,
            ctx.ctx.i32_type().const_int(arg_length as u64, false),
            "argptr",
        )
        .unwrap();

    // -- rpc args handling
    let mut keys = fun.0.args.clone();
    let mut mapping = HashMap::new();
    for (key, value) in args {
        mapping.insert(key.unwrap_or_else(|| keys.remove(0).name), value);
    }
    // default value handling
    for k in keys {
        mapping
            .insert(k.name, ctx.gen_symbol_val(generator, &k.default_value.unwrap(), k.ty).into());
    }
    // reorder the parameters
    let mut real_params = fun
        .0
        .args
        .iter()
        .map(|arg| mapping.remove(&arg.name).unwrap().to_basic_value_enum(ctx, generator, arg.ty))
        .collect::<Result<Vec<_>, _>>()?;
    if let Some(obj) = obj {
        if let ValueEnum::Static(obj) = obj.1 {
            real_params.insert(0, obj.get_const_obj(ctx, generator));
        } else {
            // should be an error here...
            panic!("only host object is allowed");
        }
    }

    for (i, arg) in real_params.iter().enumerate() {
        let arg_slot =
            generator.gen_var_alloc(ctx, arg.get_type(), Some(&format!("rpc.arg{i}"))).unwrap();
        ctx.builder.build_store(arg_slot, *arg).unwrap();
        let arg_slot = ctx.builder.build_bitcast(arg_slot, ptr_type, "rpc.arg").unwrap();
        let arg_ptr = unsafe {
            ctx.builder.build_gep(
                args_ptr,
                &[int32.const_int(i as u64, false)],
                &format!("rpc.arg{i}"),
            )
        }
        .unwrap();
        ctx.builder.build_store(arg_ptr, arg_slot).unwrap();
    }

    // call
    let rpc_send = ctx.module.get_function("rpc_send").unwrap_or_else(|| {
        ctx.module.add_function(
            "rpc_send",
            ctx.ctx.void_type().fn_type(
                &[
                    int32.into(),
                    tag_ptr_type.ptr_type(AddressSpace::default()).into(),
                    ptr_type.ptr_type(AddressSpace::default()).into(),
                ],
                false,
            ),
            None,
        )
    });
    ctx.builder
        .build_call(rpc_send, &[service_id.into(), tag_ptr.into(), args_ptr.into()], "rpc.send")
        .unwrap();

    // reclaim stack space used by arguments
    call_stackrestore(ctx, stackptr);

    // -- receive value:
    // T result = {
    //     void *ret_ptr = alloca(sizeof(T));
    //     void *ptr = ret_ptr;
    //     loop: int size = rpc_recv(ptr);
    //     // Non-zero: Provide `size` bytes of extra storage for variable-length data.
    //     if(size) { ptr = alloca(size); goto loop; }
    //     else *(T*)ret_ptr
    // }
    let rpc_recv = ctx.module.get_function("rpc_recv").unwrap_or_else(|| {
        ctx.module.add_function("rpc_recv", int32.fn_type(&[ptr_type.into()], false), None)
    });

    if ctx.unifier.unioned(fun.0.ret, ctx.primitives.none) {
        ctx.build_call_or_invoke(rpc_recv, &[ptr_type.const_null().into()], "rpc_recv");
        return Ok(None);
    }

    let prehead_bb = ctx.builder.get_insert_block().unwrap();
    let current_function = prehead_bb.get_parent().unwrap();
    let head_bb = ctx.ctx.append_basic_block(current_function, "rpc.head");
    let alloc_bb = ctx.ctx.append_basic_block(current_function, "rpc.continue");
    let tail_bb = ctx.ctx.append_basic_block(current_function, "rpc.tail");

    let ret_ty = ctx.get_llvm_abi_type(generator, fun.0.ret);
    let need_load = !ret_ty.is_pointer_type();
    let slot = ctx.builder.build_alloca(ret_ty, "rpc.ret.slot").unwrap();
    let slotgen = ctx.builder.build_bitcast(slot, ptr_type, "rpc.ret.ptr").unwrap();
    ctx.builder.build_unconditional_branch(head_bb).unwrap();
    ctx.builder.position_at_end(head_bb);

    let phi = ctx.builder.build_phi(ptr_type, "rpc.ptr").unwrap();
    phi.add_incoming(&[(&slotgen, prehead_bb)]);
    let alloc_size = ctx
        .build_call_or_invoke(rpc_recv, &[phi.as_basic_value()], "rpc.size.next")
        .unwrap()
        .into_int_value();
    let is_done = ctx
        .builder
        .build_int_compare(inkwell::IntPredicate::EQ, int32.const_zero(), alloc_size, "rpc.done")
        .unwrap();

    ctx.builder.build_conditional_branch(is_done, tail_bb, alloc_bb).unwrap();
    ctx.builder.position_at_end(alloc_bb);

    let alloc_ptr = ctx.builder.build_array_alloca(ptr_type, alloc_size, "rpc.alloc").unwrap();
    let alloc_ptr = ctx.builder.build_bitcast(alloc_ptr, ptr_type, "rpc.alloc.ptr").unwrap();
    phi.add_incoming(&[(&alloc_ptr, alloc_bb)]);
    ctx.builder.build_unconditional_branch(head_bb).unwrap();

    ctx.builder.position_at_end(tail_bb);

    let result = ctx.builder.build_load(slot, "rpc.result").unwrap();
    if need_load {
        call_stackrestore(ctx, stackptr);
    }
    Ok(Some(result))
}

pub fn attributes_writeback(
    ctx: &mut CodeGenContext<'_, '_>,
    generator: &mut dyn CodeGenerator,
    inner_resolver: &InnerResolver,
    host_attributes: &PyObject,
) -> Result<(), String> {
    Python::with_gil(|py| -> PyResult<Result<(), String>> {
        let host_attributes: &PyList = host_attributes.downcast(py)?;
        let top_levels = ctx.top_level.definitions.read();
        let globals = inner_resolver.global_value_ids.read();
        let int32 = ctx.ctx.i32_type();
        let zero = int32.const_zero();
        let mut values = Vec::new();
        let mut scratch_buffer = Vec::new();
        for val in (*globals).values() {
            let val = val.as_ref(py);
            let ty = inner_resolver.get_obj_type(
                py,
                val,
                &mut ctx.unifier,
                &top_levels,
                &ctx.primitives,
            )?;
            if let Err(ty) = ty {
                return Ok(Err(ty));
            }
            let ty = ty.unwrap();
            match &*ctx.unifier.get_ty(ty) {
                TypeEnum::TObj { fields, obj_id, .. }
                    if *obj_id != ctx.primitives.option.obj_id(&ctx.unifier).unwrap() =>
                {
                    // we only care about primitive attributes
                    // for non-primitive attributes, they should be in another global
                    let mut attributes = Vec::new();
                    let obj = inner_resolver.get_obj_value(py, val, ctx, generator, ty)?.unwrap();
                    for (name, (field_ty, is_mutable)) in fields {
                        if !is_mutable {
                            continue;
                        }
                        if gen_rpc_tag(ctx, *field_ty, &mut scratch_buffer).is_ok() {
                            attributes.push(name.to_string());
                            let (index, _) = ctx.get_attr_index(ty, *name);
                            values.push((
                                *field_ty,
                                ctx.build_gep_and_load(
                                    obj.into_pointer_value(),
                                    &[zero, int32.const_int(index as u64, false)],
                                    None,
                                ),
                            ));
                        }
                    }
                    if !attributes.is_empty() {
                        let pydict = PyDict::new(py);
                        pydict.set_item("obj", val)?;
                        pydict.set_item("fields", attributes)?;
                        host_attributes.append(pydict)?;
                    }
                }
                TypeEnum::TObj { obj_id, params, .. } if *obj_id == PrimDef::List.id() => {
                    let elem_ty = iter_type_vars(params).next().unwrap().ty;

                    if gen_rpc_tag(ctx, elem_ty, &mut scratch_buffer).is_ok() {
                        let pydict = PyDict::new(py);
                        pydict.set_item("obj", val)?;
                        host_attributes.append(pydict)?;
                        values.push((
                            ty,
                            inner_resolver.get_obj_value(py, val, ctx, generator, ty)?.unwrap(),
                        ));
                    }
                }
                _ => {}
            }
        }
        let fun = FunSignature {
            args: values
                .iter()
                .enumerate()
                .map(|(i, (ty, _))| FuncArg {
                    name: i.to_string().into(),
                    ty: *ty,
                    default_value: None,
                })
                .collect(),
            ret: ctx.primitives.none,
            vars: VarMap::default(),
        };
        let args: Vec<_> =
            values.into_iter().map(|(_, val)| (None, ValueEnum::Dynamic(val))).collect();
        if let Err(e) =
            rpc_codegen_callback_fn(ctx, None, (&fun, PrimDef::Int32.id()), args, generator)
|
||||||
|
{
|
||||||
|
return Ok(Err(e));
|
||||||
|
}
|
||||||
|
Ok(Ok(()))
|
||||||
|
})
|
||||||
|
.unwrap()?;
|
||||||
|
Ok(())
|
||||||
|
}
|
||||||
|
|
||||||
|
pub fn rpc_codegen_callback() -> Arc<GenCall> {
|
||||||
|
Arc::new(GenCall::new(Box::new(|ctx, obj, fun, args, generator| {
|
||||||
|
rpc_codegen_callback_fn(ctx, obj, fun, args, generator)
|
||||||
|
})))
|
||||||
|
}
|
|
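The shape of the `host_attributes` list that `attributes_writeback` builds can be sketched on the host side in Python (the class and values below are illustrative placeholders, not part of the runtime): each ordinary object contributes a dict with the object and its writeback-eligible field names, while a list contributes a dict carrying only the object.

```python
# Sketch of the host_attributes structure built above (hypothetical object).
class Obj:
    def __init__(self, x: int, y: float) -> None:
        self.x = x
        self.y = y

obj = Obj(1, 2.0)
host_attributes = []

# ordinary object: record which mutable primitive fields to write back
host_attributes.append({"obj": obj, "fields": ["x", "y"]})
# list object: the whole list is written back, so no field names are recorded
host_attributes.append({"obj": [1, 2, 3]})

assert host_attributes[0]["fields"] == ["x", "y"]
assert "fields" not in host_attributes[1]
```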
@ -0,0 +1,56 @@
/* Force ld to make the ELF header as loadable. */
PHDRS
{
    headers  PT_LOAD FILEHDR PHDRS ;
    text     PT_LOAD ;
    data     PT_LOAD ;
    dynamic  PT_DYNAMIC ;
    eh_frame PT_GNU_EH_FRAME ;
}

SECTIONS
{
    /* Push back the .text section enough so that ld.lld does not complain */
    . = SIZEOF_HEADERS;

    .text :
    {
        *(.text .text.*)
    } : text

    .rodata :
    {
        *(.rodata .rodata.*)
    }

    .eh_frame :
    {
        KEEP(*(.eh_frame))
    } : text

    .eh_frame_hdr :
    {
        KEEP(*(.eh_frame_hdr))
    } : text : eh_frame

    .data :
    {
        *(.data)
    } : data

    .dynamic :
    {
        *(.dynamic)
    } : data : dynamic

    .bss (NOLOAD) : ALIGN(4)
    {
        __bss_start = .;
        *(.sbss .sbss.* .bss .bss.*);
        . = ALIGN(4);
        _end = .;
    }

    . = ALIGN(0x1000);
    _sstack_guard = .;
}
File diff suppressed because it is too large
File diff suppressed because it is too large
@ -0,0 +1,321 @@
use inkwell::{
    values::{BasicValueEnum, CallSiteValue},
    AddressSpace, AtomicOrdering,
};
use itertools::Either;
use nac3core::codegen::CodeGenContext;

/// Functions for manipulating the timeline.
pub trait TimeFns {
    /// Emits LLVM IR for `now_mu`.
    fn emit_now_mu<'ctx>(&self, ctx: &mut CodeGenContext<'ctx, '_>) -> BasicValueEnum<'ctx>;

    /// Emits LLVM IR for `at_mu`.
    fn emit_at_mu<'ctx>(&self, ctx: &mut CodeGenContext<'ctx, '_>, t: BasicValueEnum<'ctx>);

    /// Emits LLVM IR for `delay_mu`.
    fn emit_delay_mu<'ctx>(&self, ctx: &mut CodeGenContext<'ctx, '_>, dt: BasicValueEnum<'ctx>);
}

pub struct NowPinningTimeFns64 {}

// For FPGA design reasons, on VexRiscv with 64-bit data bus, the "now" CSR is split into two
// 32-bit values that are each padded to 64-bits.
impl TimeFns for NowPinningTimeFns64 {
    fn emit_now_mu<'ctx>(&self, ctx: &mut CodeGenContext<'ctx, '_>) -> BasicValueEnum<'ctx> {
        let i64_type = ctx.ctx.i64_type();
        let i32_type = ctx.ctx.i32_type();
        let now = ctx
            .module
            .get_global("now")
            .unwrap_or_else(|| ctx.module.add_global(i64_type, None, "now"));
        let now_hiptr = ctx
            .builder
            .build_bitcast(now, i32_type.ptr_type(AddressSpace::default()), "now.hi.addr")
            .map(BasicValueEnum::into_pointer_value)
            .unwrap();

        let now_loptr = unsafe {
            ctx.builder.build_gep(now_hiptr, &[i32_type.const_int(2, false)], "now.lo.addr")
        }
        .unwrap();

        let now_hi = ctx
            .builder
            .build_load(now_hiptr, "now.hi")
            .map(BasicValueEnum::into_int_value)
            .unwrap();
        let now_lo = ctx
            .builder
            .build_load(now_loptr, "now.lo")
            .map(BasicValueEnum::into_int_value)
            .unwrap();

        let zext_hi = ctx.builder.build_int_z_extend(now_hi, i64_type, "").unwrap();
        let shifted_hi =
            ctx.builder.build_left_shift(zext_hi, i64_type.const_int(32, false), "").unwrap();
        let zext_lo = ctx.builder.build_int_z_extend(now_lo, i64_type, "").unwrap();
        ctx.builder.build_or(shifted_hi, zext_lo, "now_mu").map(Into::into).unwrap()
    }

    fn emit_at_mu<'ctx>(&self, ctx: &mut CodeGenContext<'ctx, '_>, t: BasicValueEnum<'ctx>) {
        let i32_type = ctx.ctx.i32_type();
        let i64_type = ctx.ctx.i64_type();

        let i64_32 = i64_type.const_int(32, false);
        let time = t.into_int_value();

        let time_hi = ctx
            .builder
            .build_int_truncate(
                ctx.builder.build_right_shift(time, i64_32, false, "time.hi").unwrap(),
                i32_type,
                "",
            )
            .unwrap();
        let time_lo = ctx.builder.build_int_truncate(time, i32_type, "time.lo").unwrap();
        let now = ctx
            .module
            .get_global("now")
            .unwrap_or_else(|| ctx.module.add_global(i64_type, None, "now"));
        let now_hiptr = ctx
            .builder
            .build_bitcast(now, i32_type.ptr_type(AddressSpace::default()), "now.hi.addr")
            .map(BasicValueEnum::into_pointer_value)
            .unwrap();

        let now_loptr = unsafe {
            ctx.builder.build_gep(now_hiptr, &[i32_type.const_int(2, false)], "now.lo.addr")
        }
        .unwrap();
        ctx.builder
            .build_store(now_hiptr, time_hi)
            .unwrap()
            .set_atomic_ordering(AtomicOrdering::SequentiallyConsistent)
            .unwrap();
        ctx.builder
            .build_store(now_loptr, time_lo)
            .unwrap()
            .set_atomic_ordering(AtomicOrdering::SequentiallyConsistent)
            .unwrap();
    }

    fn emit_delay_mu<'ctx>(&self, ctx: &mut CodeGenContext<'ctx, '_>, dt: BasicValueEnum<'ctx>) {
        let i64_type = ctx.ctx.i64_type();
        let i32_type = ctx.ctx.i32_type();
        let now = ctx
            .module
            .get_global("now")
            .unwrap_or_else(|| ctx.module.add_global(i64_type, None, "now"));
        let now_hiptr = ctx
            .builder
            .build_bitcast(now, i32_type.ptr_type(AddressSpace::default()), "now.hi.addr")
            .map(BasicValueEnum::into_pointer_value)
            .unwrap();

        let now_loptr = unsafe {
            ctx.builder.build_gep(now_hiptr, &[i32_type.const_int(2, false)], "now.lo.addr")
        }
        .unwrap();

        let now_hi = ctx
            .builder
            .build_load(now_hiptr, "now.hi")
            .map(BasicValueEnum::into_int_value)
            .unwrap();
        let now_lo = ctx
            .builder
            .build_load(now_loptr, "now.lo")
            .map(BasicValueEnum::into_int_value)
            .unwrap();
        let dt = dt.into_int_value();

        let zext_hi = ctx.builder.build_int_z_extend(now_hi, i64_type, "").unwrap();
        let shifted_hi =
            ctx.builder.build_left_shift(zext_hi, i64_type.const_int(32, false), "").unwrap();
        let zext_lo = ctx.builder.build_int_z_extend(now_lo, i64_type, "").unwrap();
        let now_val = ctx.builder.build_or(shifted_hi, zext_lo, "now").unwrap();

        let time = ctx.builder.build_int_add(now_val, dt, "time").unwrap();
        let time_hi = ctx
            .builder
            .build_int_truncate(
                ctx.builder
                    .build_right_shift(time, i64_type.const_int(32, false), false, "")
                    .unwrap(),
                i32_type,
                "time.hi",
            )
            .unwrap();
        let time_lo = ctx.builder.build_int_truncate(time, i32_type, "time.lo").unwrap();

        ctx.builder
            .build_store(now_hiptr, time_hi)
            .unwrap()
            .set_atomic_ordering(AtomicOrdering::SequentiallyConsistent)
            .unwrap();
        ctx.builder
            .build_store(now_loptr, time_lo)
            .unwrap()
            .set_atomic_ordering(AtomicOrdering::SequentiallyConsistent)
            .unwrap();
    }
}

pub static NOW_PINNING_TIME_FNS_64: NowPinningTimeFns64 = NowPinningTimeFns64 {};

pub struct NowPinningTimeFns {}

impl TimeFns for NowPinningTimeFns {
    fn emit_now_mu<'ctx>(&self, ctx: &mut CodeGenContext<'ctx, '_>) -> BasicValueEnum<'ctx> {
        let i64_type = ctx.ctx.i64_type();
        let now = ctx
            .module
            .get_global("now")
            .unwrap_or_else(|| ctx.module.add_global(i64_type, None, "now"));
        let now_raw = ctx
            .builder
            .build_load(now.as_pointer_value(), "now")
            .map(BasicValueEnum::into_int_value)
            .unwrap();

        let i64_32 = i64_type.const_int(32, false);
        let now_lo = ctx.builder.build_left_shift(now_raw, i64_32, "now.lo").unwrap();
        let now_hi = ctx.builder.build_right_shift(now_raw, i64_32, false, "now.hi").unwrap();
        ctx.builder.build_or(now_lo, now_hi, "now_mu").map(Into::into).unwrap()
    }

    fn emit_at_mu<'ctx>(&self, ctx: &mut CodeGenContext<'ctx, '_>, t: BasicValueEnum<'ctx>) {
        let i32_type = ctx.ctx.i32_type();
        let i64_type = ctx.ctx.i64_type();
        let i64_32 = i64_type.const_int(32, false);

        let time = t.into_int_value();

        let time_hi = ctx
            .builder
            .build_int_truncate(
                ctx.builder.build_right_shift(time, i64_32, false, "").unwrap(),
                i32_type,
                "time.hi",
            )
            .unwrap();
        let time_lo = ctx.builder.build_int_truncate(time, i32_type, "now_trunc").unwrap();
        let now = ctx
            .module
            .get_global("now")
            .unwrap_or_else(|| ctx.module.add_global(i64_type, None, "now"));
        let now_hiptr = ctx
            .builder
            .build_bitcast(now, i32_type.ptr_type(AddressSpace::default()), "now.hi.addr")
            .map(BasicValueEnum::into_pointer_value)
            .unwrap();

        let now_loptr = unsafe {
            ctx.builder.build_gep(now_hiptr, &[i32_type.const_int(1, false)], "now.lo.addr")
        }
        .unwrap();
        ctx.builder
            .build_store(now_hiptr, time_hi)
            .unwrap()
            .set_atomic_ordering(AtomicOrdering::SequentiallyConsistent)
            .unwrap();
        ctx.builder
            .build_store(now_loptr, time_lo)
            .unwrap()
            .set_atomic_ordering(AtomicOrdering::SequentiallyConsistent)
            .unwrap();
    }

    fn emit_delay_mu<'ctx>(&self, ctx: &mut CodeGenContext<'ctx, '_>, dt: BasicValueEnum<'ctx>) {
        let i32_type = ctx.ctx.i32_type();
        let i64_type = ctx.ctx.i64_type();
        let i64_32 = i64_type.const_int(32, false);
        let now = ctx
            .module
            .get_global("now")
            .unwrap_or_else(|| ctx.module.add_global(i64_type, None, "now"));
        let now_raw = ctx
            .builder
            .build_load(now.as_pointer_value(), "")
            .map(BasicValueEnum::into_int_value)
            .unwrap();

        let dt = dt.into_int_value();

        let now_lo = ctx.builder.build_left_shift(now_raw, i64_32, "now.lo").unwrap();
        let now_hi = ctx.builder.build_right_shift(now_raw, i64_32, false, "now.hi").unwrap();
        let now_val = ctx.builder.build_or(now_lo, now_hi, "now_val").unwrap();
        let time = ctx.builder.build_int_add(now_val, dt, "time").unwrap();
        let time_hi = ctx
            .builder
            .build_int_truncate(
                ctx.builder.build_right_shift(time, i64_32, false, "time.hi").unwrap(),
                i32_type,
                "now_trunc",
            )
            .unwrap();
        let time_lo = ctx.builder.build_int_truncate(time, i32_type, "time.lo").unwrap();
        let now_hiptr = ctx
            .builder
            .build_bitcast(now, i32_type.ptr_type(AddressSpace::default()), "now.hi.addr")
            .map(BasicValueEnum::into_pointer_value)
            .unwrap();

        let now_loptr = unsafe {
            ctx.builder.build_gep(now_hiptr, &[i32_type.const_int(1, false)], "now.lo.addr")
        }
        .unwrap();
        ctx.builder
            .build_store(now_hiptr, time_hi)
            .unwrap()
            .set_atomic_ordering(AtomicOrdering::SequentiallyConsistent)
            .unwrap();
        ctx.builder
            .build_store(now_loptr, time_lo)
            .unwrap()
            .set_atomic_ordering(AtomicOrdering::SequentiallyConsistent)
            .unwrap();
    }
}

pub static NOW_PINNING_TIME_FNS: NowPinningTimeFns = NowPinningTimeFns {};

pub struct ExternTimeFns {}

impl TimeFns for ExternTimeFns {
    fn emit_now_mu<'ctx>(&self, ctx: &mut CodeGenContext<'ctx, '_>) -> BasicValueEnum<'ctx> {
        let now_mu = ctx.module.get_function("now_mu").unwrap_or_else(|| {
            ctx.module.add_function("now_mu", ctx.ctx.i64_type().fn_type(&[], false), None)
        });
        ctx.builder
            .build_call(now_mu, &[], "now_mu")
            .map(CallSiteValue::try_as_basic_value)
            .map(Either::unwrap_left)
            .unwrap()
    }

    fn emit_at_mu<'ctx>(&self, ctx: &mut CodeGenContext<'ctx, '_>, t: BasicValueEnum<'ctx>) {
        let at_mu = ctx.module.get_function("at_mu").unwrap_or_else(|| {
            ctx.module.add_function(
                "at_mu",
                ctx.ctx.void_type().fn_type(&[ctx.ctx.i64_type().into()], false),
                None,
            )
        });
        ctx.builder.build_call(at_mu, &[t.into()], "at_mu").unwrap();
    }

    fn emit_delay_mu<'ctx>(&self, ctx: &mut CodeGenContext<'ctx, '_>, dt: BasicValueEnum<'ctx>) {
        let delay_mu = ctx.module.get_function("delay_mu").unwrap_or_else(|| {
            ctx.module.add_function(
                "delay_mu",
                ctx.ctx.void_type().fn_type(&[ctx.ctx.i64_type().into()], false),
                None,
            )
        });
        ctx.builder.build_call(delay_mu, &[dt.into()], "delay_mu").unwrap();
    }
}

pub static EXTERN_TIME_FNS: ExternTimeFns = ExternTimeFns {};
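The hi/lo recombination that `NowPinningTimeFns64::emit_now_mu` emits (zero-extend each 32-bit half, shift the high half left by 32, OR in the low half) can be sketched in plain Python; the sample timestamp below is hypothetical, not tied to any real CSR contents.

```python
# Sketch of the 64-bit "now" value split into two 32-bit halves and
# recombined, mirroring the zext/shl/or sequence emitted above.
MASK32 = 0xFFFF_FFFF

def split_now(now: int) -> tuple[int, int]:
    """Split a 64-bit timestamp into (hi, lo) 32-bit words."""
    return (now >> 32) & MASK32, now & MASK32

def recombine_now(hi: int, lo: int) -> int:
    """Mirror of the zext/shl/or sequence in emit_now_mu."""
    return ((hi & MASK32) << 32) | (lo & MASK32)

now = 0x0000_0012_3456_789A  # hypothetical machine-unit timestamp
hi, lo = split_now(now)
assert recombine_now(hi, lo) == now
```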
@ -0,0 +1,16 @@
[package]
name = "nac3ast"
version = "0.1.0"
authors = ["RustPython Team", "M-Labs"]
edition = "2021"

[features]
default = ["constant-optimization", "fold"]
constant-optimization = ["fold"]
fold = []

[dependencies]
lazy_static = "1.5"
parking_lot = "0.12"
string-interner = "0.17"
fxhash = "0.2"
@ -0,0 +1,127 @@
-- ASDL's 4 builtin types are:
-- identifier, int, string, constant

module Python
{
    mod = Module(stmt* body, type_ignore* type_ignores)
        | Interactive(stmt* body)
        | Expression(expr body)
        | FunctionType(expr* argtypes, expr returns)

    stmt = FunctionDef(identifier name, arguments args,
                       stmt* body, expr* decorator_list, expr? returns,
                       string? type_comment, identifier* config_comment)
          | AsyncFunctionDef(identifier name, arguments args,
                             stmt* body, expr* decorator_list, expr? returns,
                             string? type_comment, identifier* config_comment)

          | ClassDef(identifier name,
                     expr* bases,
                     keyword* keywords,
                     stmt* body,
                     expr* decorator_list, identifier* config_comment)
          | Return(expr? value, identifier* config_comment)

          | Delete(expr* targets, identifier* config_comment)
          | Assign(expr* targets, expr value, string? type_comment, identifier* config_comment)
          | AugAssign(expr target, operator op, expr value, identifier* config_comment)
          -- 'simple' indicates that we annotate simple name without parens
          | AnnAssign(expr target, expr annotation, expr? value, bool simple, identifier* config_comment)

          -- use 'orelse' because else is a keyword in target languages
          | For(expr target, expr iter, stmt* body, stmt* orelse, string? type_comment, identifier* config_comment)
          | AsyncFor(expr target, expr iter, stmt* body, stmt* orelse, string? type_comment, identifier* config_comment)
          | While(expr test, stmt* body, stmt* orelse, identifier* config_comment)
          | If(expr test, stmt* body, stmt* orelse, identifier* config_comment)
          | With(withitem* items, stmt* body, string? type_comment, identifier* config_comment)
          | AsyncWith(withitem* items, stmt* body, string? type_comment, identifier* config_comment)

          | Raise(expr? exc, expr? cause, identifier* config_comment)
          | Try(stmt* body, excepthandler* handlers, stmt* orelse, stmt* finalbody, identifier* config_comment)
          | Assert(expr test, expr? msg, identifier* config_comment)

          | Import(alias* names, identifier* config_comment)
          | ImportFrom(identifier? module, alias* names, int level, identifier* config_comment)

          | Global(identifier* names, identifier* config_comment)
          | Nonlocal(identifier* names, identifier* config_comment)
          | Expr(expr value, identifier* config_comment)
          | Pass(identifier* config_comment)
          | Break(identifier* config_comment)
          | Continue(identifier* config_comment)

          -- col_offset is the byte offset in the utf8 string the parser uses
          attributes (int lineno, int col_offset, int? end_lineno, int? end_col_offset)

          -- BoolOp() can use left & right?
    expr = BoolOp(boolop op, expr* values)
         | NamedExpr(expr target, expr value)
         | BinOp(expr left, operator op, expr right)
         | UnaryOp(unaryop op, expr operand)
         | Lambda(arguments args, expr body)
         | IfExp(expr test, expr body, expr orelse)
         | Dict(expr?* keys, expr* values)
         | Set(expr* elts)
         | ListComp(expr elt, comprehension* generators)
         | SetComp(expr elt, comprehension* generators)
         | DictComp(expr key, expr value, comprehension* generators)
         | GeneratorExp(expr elt, comprehension* generators)
         -- the grammar constrains where yield expressions can occur
         | Await(expr value)
         | Yield(expr? value)
         | YieldFrom(expr value)
         -- need sequences for compare to distinguish between
         -- x < 4 < 3 and (x < 4) < 3
         | Compare(expr left, cmpop* ops, expr* comparators)
         | Call(expr func, expr* args, keyword* keywords)
         | FormattedValue(expr value, conversion_flag? conversion, expr? format_spec)
         | JoinedStr(expr* values)
         | Constant(constant value, string? kind)

         -- the following expression can appear in assignment context
         | Attribute(expr value, identifier attr, expr_context ctx)
         | Subscript(expr value, expr slice, expr_context ctx)
         | Starred(expr value, expr_context ctx)
         | Name(identifier id, expr_context ctx)
         | List(expr* elts, expr_context ctx)
         | Tuple(expr* elts, expr_context ctx)

         -- can appear only in Subscript
         | Slice(expr? lower, expr? upper, expr? step)

         -- col_offset is the byte offset in the utf8 string the parser uses
         attributes (int lineno, int col_offset, int? end_lineno, int? end_col_offset)

    expr_context = Load | Store | Del

    boolop = And | Or

    operator = Add | Sub | Mult | MatMult | Div | Mod | Pow | LShift
             | RShift | BitOr | BitXor | BitAnd | FloorDiv

    unaryop = Invert | Not | UAdd | USub

    cmpop = Eq | NotEq | Lt | LtE | Gt | GtE | Is | IsNot | In | NotIn

    comprehension = (expr target, expr iter, expr* ifs, bool is_async)

    excepthandler = ExceptHandler(expr? type, identifier? name, stmt* body)
                    attributes (int lineno, int col_offset, int? end_lineno, int? end_col_offset)

    arguments = (arg* posonlyargs, arg* args, arg? vararg, arg* kwonlyargs,
                 expr?* kw_defaults, arg? kwarg, expr* defaults)

    arg = (identifier arg, expr? annotation, string? type_comment)
           attributes (int lineno, int col_offset, int? end_lineno, int? end_col_offset)

    -- keyword arguments supplied to call (NULL identifier for **kwargs)
    keyword = (identifier? arg, expr value)
               attributes (int lineno, int col_offset, int? end_lineno, int? end_col_offset)

    -- import name with optional 'as' alias.
    alias = (identifier name, identifier? asname)

    withitem = (expr context_expr, expr? optional_vars)

    type_ignore = TypeIgnore(int lineno, string tag)
}
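The `field ::= TypeId ["?" | "*"] [Id]` production used throughout the grammar above (e.g. `stmt* body`, `expr? returns`) can be illustrated with a minimal Python sketch; the `Field` tuple and `parse_field` helper below are a simplified stand-in for the meta-AST nodes built by the accompanying asdl.py parser, not the parser itself.

```python
# '*' marks a sequence field, '?' an optional one; a bare TypeId is a
# single required field. Names here are illustrative.
from collections import namedtuple

Field = namedtuple('Field', 'type name seq opt')

def parse_field(spec: str) -> Field:
    """Parse a single 'TypeId ["?" | "*"] [Id]' field declaration."""
    type_part, _, name = spec.partition(' ')
    seq = type_part.endswith('*')
    opt = type_part.endswith('?')
    return Field(type_part.rstrip('*?'), name or None, seq, opt)

assert parse_field('stmt* body') == Field('stmt', 'body', True, False)
assert parse_field('expr? returns') == Field('expr', 'returns', False, True)
assert parse_field('identifier name') == Field('identifier', 'name', False, False)
```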
@ -0,0 +1,385 @@
|
||||||
|
#-------------------------------------------------------------------------------
|
||||||
|
# Parser for ASDL [1] definition files. Reads in an ASDL description and parses
|
||||||
|
# it into an AST that describes it.
|
||||||
|
#
|
||||||
|
# The EBNF we're parsing here: Figure 1 of the paper [1]. Extended to support
|
||||||
|
# modules and attributes after a product. Words starting with Capital letters
|
||||||
|
# are terminals. Literal tokens are in "double quotes". Others are
|
||||||
|
# non-terminals. Id is either TokenId or ConstructorId.
|
||||||
|
#
|
||||||
|
# module ::= "module" Id "{" [definitions] "}"
|
||||||
|
# definitions ::= { TypeId "=" type }
|
||||||
|
# type ::= product | sum
|
||||||
|
# product ::= fields ["attributes" fields]
|
||||||
|
# fields ::= "(" { field, "," } field ")"
|
||||||
|
# field ::= TypeId ["?" | "*"] [Id]
|
||||||
|
# sum ::= constructor { "|" constructor } ["attributes" fields]
|
||||||
|
# constructor ::= ConstructorId [fields]
|
||||||
|
#
|
||||||
|
# [1] "The Zephyr Abstract Syntax Description Language" by Wang, et. al. See
|
||||||
|
# http://asdl.sourceforge.net/
|
||||||
|
#-------------------------------------------------------------------------------
|
||||||
|
from collections import namedtuple
|
||||||
|
import re
|
||||||
|
|
||||||
|
__all__ = [
|
||||||
|
'builtin_types', 'parse', 'AST', 'Module', 'Type', 'Constructor',
|
||||||
|
'Field', 'Sum', 'Product', 'VisitorBase', 'Check', 'check']
|
||||||
|
|
||||||
|
# The following classes define nodes into which the ASDL description is parsed.
|
||||||
|
# Note: this is a "meta-AST". ASDL files (such as Python.asdl) describe the AST
|
||||||
|
# structure used by a programming language. But ASDL files themselves need to be
|
||||||
|
# parsed. This module parses ASDL files and uses a simple AST to represent them.
|
||||||
|
# See the EBNF at the top of the file to understand the logical connection
|
||||||
|
# between the various node types.
|
||||||
|
|
||||||
|
builtin_types = {'identifier', 'string', 'int', 'constant', 'bool', 'conversion_flag'}
|
||||||
|
|
||||||
|
class AST:
|
||||||
|
def __repr__(self):
|
||||||
|
raise NotImplementedError
|
||||||
|
|
||||||
|
class Module(AST):
|
||||||
|
def __init__(self, name, dfns):
|
||||||
|
self.name = name
|
||||||
|
self.dfns = dfns
|
||||||
|
self.types = {type.name: type.value for type in dfns}
|
||||||
|
|
||||||
|
def __repr__(self):
|
||||||
|
return 'Module({0.name}, {0.dfns})'.format(self)
|
||||||
|
|
||||||
|
class Type(AST):
|
||||||
|
def __init__(self, name, value):
|
||||||
|
self.name = name
|
||||||
|
self.value = value
|
||||||
|
|
||||||
|
def __repr__(self):
|
||||||
|
return 'Type({0.name}, {0.value})'.format(self)
|
||||||
|
|
||||||
|
class Constructor(AST):
|
||||||
|
def __init__(self, name, fields=None):
|
||||||
|
self.name = name
|
||||||
|
self.fields = fields or []
|
||||||
|
|
||||||
|
def __repr__(self):
|
||||||
|
return 'Constructor({0.name}, {0.fields})'.format(self)
|
||||||
|
|
||||||
|
class Field(AST):
|
||||||
|
def __init__(self, type, name=None, seq=False, opt=False):
|
||||||
|
self.type = type
|
||||||
|
self.name = name
|
||||||
|
self.seq = seq
|
||||||
|
self.opt = opt
|
||||||
|
|
||||||
|
def __str__(self):
|
||||||
|
if self.seq:
|
||||||
|
extra = "*"
|
||||||
|
elif self.opt:
|
||||||
|
extra = "?"
|
||||||
|
else:
|
||||||
|
extra = ""
|
||||||
|
|
||||||
|
return "{}{} {}".format(self.type, extra, self.name)
|
||||||
|
|
||||||
|
def __repr__(self):
|
||||||
|
if self.seq:
|
||||||
|
extra = ", seq=True"
|
||||||
|
elif self.opt:
|
||||||
|
extra = ", opt=True"
|
||||||
|
else:
|
||||||
|
extra = ""
|
||||||
|
if self.name is None:
|
||||||
|
return 'Field({0.type}{1})'.format(self, extra)
|
||||||
|
else:
|
||||||
|
return 'Field({0.type}, {0.name}{1})'.format(self, extra)
|
||||||
|
|
||||||
|
class Sum(AST):
|
||||||
|
def __init__(self, types, attributes=None):
|
||||||
|
self.types = types
|
||||||
|
self.attributes = attributes or []
|
||||||
|
|
||||||
|
def __repr__(self):
|
||||||
|
if self.attributes:
|
||||||
|
return 'Sum({0.types}, {0.attributes})'.format(self)
|
||||||
|
else:
|
||||||
|
return 'Sum({0.types})'.format(self)
|
||||||
|
|
||||||
|
class Product(AST):
|
||||||
|
def __init__(self, fields, attributes=None):
|
||||||
|
self.fields = fields
|
||||||
|
self.attributes = attributes or []
|
||||||
|
|
||||||
|
def __repr__(self):
|
||||||
|
if self.attributes:
|
||||||
|
return 'Product({0.fields}, {0.attributes})'.format(self)
|
||||||
|
else:
|
||||||
|
return 'Product({0.fields})'.format(self)
|
||||||
|
|
||||||
|
# A generic visitor for the meta-AST that describes ASDL. This can be used by
|
||||||
|
# emitters. Note that this visitor does not provide a generic visit method, so a
|
||||||
|
# subclass needs to define visit methods from visitModule to as deep as the
|
||||||
|
# interesting node.
|
||||||
|
# We also define a Check visitor that makes sure the parsed ASDL is well-formed.
|
||||||
|
|
||||||
|
class VisitorBase(object):
|
||||||
|
"""Generic tree visitor for ASTs."""
|
||||||
|
def __init__(self):
|
||||||
|
self.cache = {}
|
||||||
|
|
||||||
|
def visit(self, obj, *args):
|
||||||
|
klass = obj.__class__
|
||||||
|
meth = self.cache.get(klass)
|
||||||
|
if meth is None:
|
||||||
|
methname = "visit" + klass.__name__
|
||||||
|
meth = getattr(self, methname, None)
|
||||||
|
self.cache[klass] = meth
|
||||||
|
if meth:
|
||||||
|
try:
|
||||||
|
meth(obj, *args)
|
||||||
|
except Exception as e:
|
||||||
|
print("Error visiting %r: %s" % (obj, e))
|
||||||
|
raise
|
||||||
|
|
||||||
|
class Check(VisitorBase):
|
||||||
|
"""A visitor that checks a parsed ASDL tree for correctness.
|
||||||
|
|
||||||
|
Errors are printed and accumulated.
|
||||||
|
"""
|
||||||
|
def __init__(self):
|
||||||
|
super(Check, self).__init__()
|
||||||
|
self.cons = {}
|
||||||
|
self.errors = 0
|
||||||
|
self.types = {}
|
||||||
|
|
||||||
|
def visitModule(self, mod):
|
||||||
|
for dfn in mod.dfns:
|
||||||
|
self.visit(dfn)
|
||||||
|
|
||||||
|
def visitType(self, type):
|
||||||
|
self.visit(type.value, str(type.name))
|
||||||
|
|
||||||
|
    def visitSum(self, sum, name):
        for t in sum.types:
            self.visit(t, name)

    def visitConstructor(self, cons, name):
        key = str(cons.name)
        conflict = self.cons.get(key)
        if conflict is None:
            self.cons[key] = name
        else:
            print('Redefinition of constructor {}'.format(key))
            print('Defined in {} and {}'.format(conflict, name))
            self.errors += 1
        for f in cons.fields:
            self.visit(f, key)

    def visitField(self, field, name):
        key = str(field.type)
        l = self.types.setdefault(key, [])
        l.append(name)

    def visitProduct(self, prod, name):
        for f in prod.fields:
            self.visit(f, name)


def check(mod):
    """Check the parsed ASDL tree for correctness.

    Return True if success. For failure, the errors are printed out and False
    is returned.
    """
    v = Check()
    v.visit(mod)

    for t in v.types:
        if t not in mod.types and t not in builtin_types:
            v.errors += 1
            uses = ", ".join(v.types[t])
            print('Undefined type {}, used in {}'.format(t, uses))
    return not v.errors


# The ASDL parser itself comes next. The only interesting external interface
# here is the top-level parse function.

def parse(filename):
    """Parse ASDL from the given file and return a Module node describing it."""
    with open(filename) as f:
        parser = ASDLParser()
        return parser.parse(f.read())


# Types for describing tokens in an ASDL specification.
class TokenKind:
    """TokenKind provides a scope for enumerated token kinds."""
    (ConstructorId, TypeId, Equals, Comma, Question, Pipe, Asterisk,
     LParen, RParen, LBrace, RBrace) = range(11)

    operator_table = {
        '=': Equals, ',': Comma, '?': Question, '|': Pipe, '(': LParen,
        ')': RParen, '*': Asterisk, '{': LBrace, '}': RBrace}


Token = namedtuple('Token', 'kind value lineno')


class ASDLSyntaxError(Exception):
    def __init__(self, msg, lineno=None):
        self.msg = msg
        self.lineno = lineno or '<unknown>'

    def __str__(self):
        return 'Syntax error on line {0.lineno}: {0.msg}'.format(self)


def tokenize_asdl(buf):
    """Tokenize the given buffer. Yield Token objects."""
    for lineno, line in enumerate(buf.splitlines(), 1):
        for m in re.finditer(r'\s*(\w+|--.*|.)', line.strip()):
            c = m.group(1)
            if c[0].isalpha():
                # Some kind of identifier
                if c[0].isupper():
                    yield Token(TokenKind.ConstructorId, c, lineno)
                else:
                    yield Token(TokenKind.TypeId, c, lineno)
            elif c[:2] == '--':
                # Comment
                break
            else:
                # Operators
                try:
                    op_kind = TokenKind.operator_table[c]
                except KeyError:
                    raise ASDLSyntaxError('Invalid operator %s' % c, lineno)
                yield Token(op_kind, c, lineno)
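The classification rules above can be exercised in isolation. This standalone sketch (hand-written, not part of the generator) reimplements just the token-kind logic — capitalized identifiers become constructor names, lowercase ones type names, and `--` cuts off the rest of the line as a comment:

```python
import re

# Standalone sketch of tokenize_asdl's classification rules.
def classify(line):
    kinds = []
    for m in re.finditer(r'\s*(\w+|--.*|.)', line.strip()):
        c = m.group(1)
        if c[0].isalpha():
            kinds.append('ConstructorId' if c[0].isupper() else 'TypeId')
        elif c[:2] == '--':
            break  # comment: the rest of the line is discarded
        else:
            kinds.append(c)  # single-character operator
    return kinds

print(classify('stmt = Pass | Expr(expr value) -- a comment'))
```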

class ASDLParser:
    """Parser for ASDL files.

    Create, then call the parse method on a buffer containing ASDL.
    This is a simple recursive descent parser that uses tokenize_asdl for the
    lexing.
    """
    def __init__(self):
        self._tokenizer = None
        self.cur_token = None

    def parse(self, buf):
        """Parse the ASDL in the buffer and return an AST with a Module root.
        """
        self._tokenizer = tokenize_asdl(buf)
        self._advance()
        return self._parse_module()

    def _parse_module(self):
        if self._at_keyword('module'):
            self._advance()
        else:
            raise ASDLSyntaxError(
                'Expected "module" (found {})'.format(self.cur_token.value),
                self.cur_token.lineno)
        name = self._match(self._id_kinds)
        self._match(TokenKind.LBrace)
        defs = self._parse_definitions()
        self._match(TokenKind.RBrace)
        return Module(name, defs)

    def _parse_definitions(self):
        defs = []
        while self.cur_token.kind == TokenKind.TypeId:
            typename = self._advance()
            self._match(TokenKind.Equals)
            type = self._parse_type()
            defs.append(Type(typename, type))
        return defs

    def _parse_type(self):
        if self.cur_token.kind == TokenKind.LParen:
            # If we see a (, it's a product
            return self._parse_product()
        else:
            # Otherwise it's a sum. Look for ConstructorId
            sumlist = [Constructor(self._match(TokenKind.ConstructorId),
                                   self._parse_optional_fields())]
            while self.cur_token.kind == TokenKind.Pipe:
                # More constructors
                self._advance()
                sumlist.append(Constructor(
                    self._match(TokenKind.ConstructorId),
                    self._parse_optional_fields()))
            return Sum(sumlist, self._parse_optional_attributes())

    def _parse_product(self):
        return Product(self._parse_fields(), self._parse_optional_attributes())

    def _parse_fields(self):
        fields = []
        self._match(TokenKind.LParen)
        while self.cur_token.kind == TokenKind.TypeId:
            typename = self._advance()
            is_seq, is_opt = self._parse_optional_field_quantifier()
            id = (self._advance() if self.cur_token.kind in self._id_kinds
                  else None)
            fields.append(Field(typename, id, seq=is_seq, opt=is_opt))
            if self.cur_token.kind == TokenKind.RParen:
                break
            elif self.cur_token.kind == TokenKind.Comma:
                self._advance()
        self._match(TokenKind.RParen)
        return fields

    def _parse_optional_fields(self):
        if self.cur_token.kind == TokenKind.LParen:
            return self._parse_fields()
        else:
            return None

    def _parse_optional_attributes(self):
        if self._at_keyword('attributes'):
            self._advance()
            return self._parse_fields()
        else:
            return None

    def _parse_optional_field_quantifier(self):
        is_seq, is_opt = False, False
        if self.cur_token.kind == TokenKind.Question:
            is_opt = True
            self._advance()
        if self.cur_token.kind == TokenKind.Asterisk:
            is_seq = True
            self._advance()
        return is_seq, is_opt

    def _advance(self):
        """Return the value of the current token and read the next one into
        self.cur_token.
        """
        cur_val = None if self.cur_token is None else self.cur_token.value
        try:
            self.cur_token = next(self._tokenizer)
        except StopIteration:
            self.cur_token = None
        return cur_val

    _id_kinds = (TokenKind.ConstructorId, TokenKind.TypeId)

    def _match(self, kind):
        """The 'match' primitive of RD parsers.

        * Verifies that the current token is of the given kind (kind can
          be a tuple, in which case the kind must match one of its members).
        * Returns the value of the current token
        * Reads in the next token
        """
        if (isinstance(kind, tuple) and self.cur_token.kind in kind or
                self.cur_token.kind == kind):
            value = self.cur_token.value
            self._advance()
            return value
        else:
            raise ASDLSyntaxError(
                'Unmatched {} (found {})'.format(kind, self.cur_token.kind),
                self.cur_token.lineno)

    def _at_keyword(self, keyword):
        return (self.cur_token.kind == TokenKind.TypeId and
                self.cur_token.value == keyword)

#! /usr/bin/env python
"""Generate Rust code from an ASDL description."""

import os
import sys
import textwrap

import json

from argparse import ArgumentParser
from pathlib import Path

import asdl

TABSIZE = 4
AUTOGEN_MESSAGE = "// File automatically generated by {}.\n\n"

builtin_type_mapping = {
    'identifier': 'Ident',
    'string': 'String',
    'int': 'usize',
    'constant': 'Constant',
    'bool': 'bool',
    'conversion_flag': 'ConversionFlag',
}
assert builtin_type_mapping.keys() == asdl.builtin_types

def get_rust_type(name):
    """Return a string for the Rust name of the type.

    This function special cases the default types provided by asdl.
    """
    if name in asdl.builtin_types:
        return builtin_type_mapping[name]
    else:
        return "".join(part.capitalize() for part in name.split("_"))
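For non-builtin names, the fallback branch converts ASDL's snake_case type names into Rust CamelCase. A standalone sketch of just that rule:

```python
# Sketch of get_rust_type's fallback branch: split on underscores and
# capitalize each part (builtin names are looked up in a table instead).
def snake_to_camel(name):
    return "".join(part.capitalize() for part in name.split("_"))

print(snake_to_camel("conversion_flag"))  # ConversionFlag
print(snake_to_camel("excepthandler"))    # Excepthandler
```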

def is_simple(sum):
    """Return True if a sum is simple.

    A sum is simple if its types have no fields, e.g.
    unaryop = Invert | Not | UAdd | USub
    """
    for t in sum.types:
        if t.fields:
            return False
    return True

def asdl_of(name, obj):
    if isinstance(obj, asdl.Product) or isinstance(obj, asdl.Constructor):
        fields = ", ".join(map(str, obj.fields))
        if fields:
            fields = "({})".format(fields)
        return "{}{}".format(name, fields)
    else:
        if is_simple(obj):
            types = " | ".join(type.name for type in obj.types)
        else:
            sep = "\n{}| ".format(" " * (len(name) + 1))
            types = sep.join(
                asdl_of(type.name, type) for type in obj.types
            )
        return "{} = {}".format(name, types)

class EmitVisitor(asdl.VisitorBase):
    """Visitor that emits lines."""

    def __init__(self, file):
        self.file = file
        self.identifiers = set()
        super(EmitVisitor, self).__init__()

    def emit_identifier(self, name):
        name = str(name)
        if name in self.identifiers:
            return
        self.emit("_Py_IDENTIFIER(%s);" % name, 0)
        self.identifiers.add(name)

    def emit(self, line, depth):
        if line:
            line = (" " * TABSIZE * depth) + line
        self.file.write(line + "\n")

class TypeInfo:
    def __init__(self, name):
        self.name = name
        self.has_userdata = None
        self.children = set()
        self.boxed = False

    def __repr__(self):
        return f"<TypeInfo: {self.name}>"

    def determine_userdata(self, typeinfo, stack):
        if self.name in stack:
            return None
        stack.add(self.name)
        for child, child_seq in self.children:
            if child in asdl.builtin_types:
                continue
            childinfo = typeinfo[child]
            child_has_userdata = childinfo.determine_userdata(typeinfo, stack)
            if self.has_userdata is None and child_has_userdata is True:
                self.has_userdata = True

        stack.remove(self.name)
        return self.has_userdata

class FindUserdataTypesVisitor(asdl.VisitorBase):
    def __init__(self, typeinfo):
        self.typeinfo = typeinfo
        super().__init__()

    def visitModule(self, mod):
        for dfn in mod.dfns:
            self.visit(dfn)
        stack = set()
        for info in self.typeinfo.values():
            info.determine_userdata(self.typeinfo, stack)

    def visitType(self, type):
        self.typeinfo[type.name] = TypeInfo(type.name)
        self.visit(type.value, type.name)

    def visitSum(self, sum, name):
        info = self.typeinfo[name]
        if is_simple(sum):
            info.has_userdata = False
        else:
            if len(sum.types) > 1:
                info.boxed = True
            if sum.attributes:
                # attributes means Located, which has the `custom: U` field
                info.has_userdata = True
        for variant in sum.types:
            self.add_children(name, variant.fields)

    def visitProduct(self, product, name):
        info = self.typeinfo[name]
        if product.attributes:
            # attributes means Located, which has the `custom: U` field
            info.has_userdata = True
        if len(product.fields) > 2:
            info.boxed = True
        self.add_children(name, product.fields)

    def add_children(self, name, fields):
        self.typeinfo[name].children.update((field.type, field.seq) for field in fields)

def rust_field(field_name):
    if field_name == 'type':
        return 'type_'
    else:
        return field_name

class TypeInfoEmitVisitor(EmitVisitor):
    def __init__(self, file, typeinfo):
        self.typeinfo = typeinfo
        super().__init__(file)

    def has_userdata(self, typ):
        return self.typeinfo[typ].has_userdata

    def get_generics(self, typ, *generics):
        if self.has_userdata(typ):
            return [f"<{g}>" for g in generics]
        else:
            return ["" for g in generics]

class StructVisitor(TypeInfoEmitVisitor):
    """Visitor to generate typedefs for AST."""

    def visitModule(self, mod):
        for dfn in mod.dfns:
            self.visit(dfn)

    def visitType(self, type, depth=0):
        self.visit(type.value, type.name, depth)

    def visitSum(self, sum, name, depth):
        if is_simple(sum):
            self.simple_sum(sum, name, depth)
        else:
            self.sum_with_constructors(sum, name, depth)

    def emit_attrs(self, depth):
        self.emit("#[derive(Clone, Debug, PartialEq)]", depth)

    def simple_sum(self, sum, name, depth):
        rustname = get_rust_type(name)
        self.emit_attrs(depth)
        self.emit(f"pub enum {rustname} {{", depth)
        for variant in sum.types:
            self.emit(f"{variant.name},", depth + 1)
        self.emit("}", depth)
        self.emit("", depth)

    def sum_with_constructors(self, sum, name, depth):
        typeinfo = self.typeinfo[name]
        generics, generics_applied = self.get_generics(name, "U = ()", "U")
        enumname = rustname = get_rust_type(name)
        # all the attributes right now are for location, so if it has attrs we
        # can just wrap it in Located<>
        if sum.attributes:
            enumname = rustname + "Kind"
        self.emit_attrs(depth)
        self.emit(f"pub enum {enumname}{generics} {{", depth)
        for t in sum.types:
            self.visit(t, typeinfo, depth + 1)
        self.emit("}", depth)
        if sum.attributes:
            self.emit(f"pub type {rustname}<U = ()> = Located<{enumname}{generics_applied}, U>;", depth)
        self.emit("", depth)
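The naming scheme above can be sketched on its own: for a sum with attributes, the variants enum gets a "Kind" suffix and the public name becomes a `Located<>` alias. This hand-written sketch (the name `Expr` is illustrative, not generator output) shows only the naming and alias emission, not the variant bodies:

```python
# Sketch of sum_with_constructors' naming: attributes imply a Located<>
# wrapper, so the enum is renamed <Type>Kind and <Type> becomes an alias.
def located_alias(name, has_attributes):
    rustname = name
    enumname = rustname + "Kind" if has_attributes else rustname
    lines = [f"pub enum {enumname}<U = ()> {{ /* variants */ }}"]
    if has_attributes:
        lines.append(f"pub type {rustname}<U = ()> = Located<{enumname}<U>, U>;")
    return lines

for line in located_alias("Expr", True):
    print(line)
```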

    def visitConstructor(self, cons, parent, depth):
        if cons.fields:
            self.emit(f"{cons.name} {{", depth)
            for f in cons.fields:
                self.visit(f, parent, "", depth + 1)
            self.emit("},", depth)
        else:
            self.emit(f"{cons.name},", depth)

    def visitField(self, field, parent, vis, depth):
        typ = get_rust_type(field.type)
        fieldtype = self.typeinfo.get(field.type)
        if fieldtype and fieldtype.has_userdata:
            typ = f"{typ}<U>"
        # don't box if we're doing Vec<T>, but do box if we're doing Vec<Option<Box<T>>>
        if fieldtype and fieldtype.boxed and (not field.seq or field.opt):
            typ = f"Box<{typ}>"
        if field.opt:
            typ = f"Option<{typ}>"
        if field.seq:
            typ = f"Vec<{typ}>"
        name = rust_field(field.name)
        self.emit(f"{vis}{name}: {typ},", depth)
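The quantifier handling in visitField composes wrappers in a fixed order — Box, then Option, then Vec — and skips the Box inside a plain Vec. A standalone sketch of just that composition (the type name `Expr` is illustrative):

```python
# Sketch of visitField's wrapper composition for a boxed field type:
# Box except inside a plain Vec, then Option for `?`, then Vec for `*`.
def wrap(typ, boxed, seq, opt):
    if boxed and (not seq or opt):
        typ = f"Box<{typ}>"
    if opt:
        typ = f"Option<{typ}>"
    if seq:
        typ = f"Vec<{typ}>"
    return typ

print(wrap("Expr", True, False, False))  # Box<Expr>
print(wrap("Expr", True, True, False))   # Vec<Expr>
print(wrap("Expr", True, True, True))    # Vec<Option<Box<Expr>>>
```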

    def visitProduct(self, product, name, depth):
        typeinfo = self.typeinfo[name]
        generics, generics_applied = self.get_generics(name, "U = ()", "U")
        dataname = rustname = get_rust_type(name)
        if product.attributes:
            dataname = rustname + "Data"
        self.emit_attrs(depth)
        self.emit(f"pub struct {dataname}{generics} {{", depth)
        for f in product.fields:
            self.visit(f, typeinfo, "pub ", depth + 1)
        self.emit("}", depth)
        if product.attributes:
            # attributes should just be location info
            self.emit(f"pub type {rustname}<U = ()> = Located<{dataname}{generics_applied}, U>;", depth)
        self.emit("", depth)


class FoldTraitDefVisitor(TypeInfoEmitVisitor):
    def visitModule(self, mod, depth):
        self.emit("pub trait Fold<U> {", depth)
        self.emit("type TargetU;", depth + 1)
        self.emit("type Error;", depth + 1)
        self.emit("fn map_user(&mut self, user: U) -> Result<Self::TargetU, Self::Error>;", depth + 2)
        for dfn in mod.dfns:
            self.visit(dfn, depth + 2)
        self.emit("}", depth)

    def visitType(self, type, depth):
        name = type.name
        apply_u, apply_target_u = self.get_generics(name, "U", "Self::TargetU")
        enumname = get_rust_type(name)
        self.emit(f"fn fold_{name}(&mut self, node: {enumname}{apply_u}) -> Result<{enumname}{apply_target_u}, Self::Error> {{", depth)
        self.emit(f"fold_{name}(self, node)", depth + 1)
        self.emit("}", depth)


class FoldImplVisitor(TypeInfoEmitVisitor):
    def visitModule(self, mod, depth):
        self.emit("fn fold_located<U, F: Fold<U> + ?Sized, T, MT>(folder: &mut F, node: Located<T, U>, f: impl FnOnce(&mut F, T) -> Result<MT, F::Error>) -> Result<Located<MT, F::TargetU>, F::Error> {", depth)
        self.emit("Ok(Located { custom: folder.map_user(node.custom)?, location: node.location, node: f(folder, node.node)? })", depth + 1)
        self.emit("}", depth)
        for dfn in mod.dfns:
            self.visit(dfn, depth)

    def visitType(self, type, depth=0):
        self.visit(type.value, type.name, depth)

    def visitSum(self, sum, name, depth):
        apply_t, apply_u, apply_target_u = self.get_generics(name, "T", "U", "F::TargetU")
        enumname = get_rust_type(name)
        is_located = bool(sum.attributes)

        self.emit(f"impl<T, U> Foldable<T, U> for {enumname}{apply_t} {{", depth)
        self.emit(f"type Mapped = {enumname}{apply_u};", depth + 1)
        self.emit("fn fold<F: Fold<T, TargetU = U> + ?Sized>(self, folder: &mut F) -> Result<Self::Mapped, F::Error> {", depth + 1)
        self.emit(f"folder.fold_{name}(self)", depth + 2)
        self.emit("}", depth + 1)
        self.emit("}", depth)

        self.emit(f"pub fn fold_{name}<U, F: Fold<U> + ?Sized>(#[allow(unused)] folder: &mut F, node: {enumname}{apply_u}) -> Result<{enumname}{apply_target_u}, F::Error> {{", depth)
        if is_located:
            self.emit("fold_located(folder, node, |folder, node| {", depth)
            enumname += "Kind"
        self.emit("match node {", depth + 1)
        for cons in sum.types:
            fields_pattern = self.make_pattern(cons.fields)
            self.emit(f"{enumname}::{cons.name} {{ {fields_pattern} }} => {{", depth + 2)
            self.gen_construction(f"{enumname}::{cons.name}", cons.fields, depth + 3)
            self.emit("}", depth + 2)
        self.emit("}", depth + 1)
        if is_located:
            self.emit("})", depth)
        self.emit("}", depth)

    def visitProduct(self, product, name, depth):
        apply_t, apply_u, apply_target_u = self.get_generics(name, "T", "U", "F::TargetU")
        structname = get_rust_type(name)
        is_located = bool(product.attributes)

        self.emit(f"impl<T, U> Foldable<T, U> for {structname}{apply_t} {{", depth)
        self.emit(f"type Mapped = {structname}{apply_u};", depth + 1)
        self.emit("fn fold<F: Fold<T, TargetU = U> + ?Sized>(self, folder: &mut F) -> Result<Self::Mapped, F::Error> {", depth + 1)
        self.emit(f"folder.fold_{name}(self)", depth + 2)
        self.emit("}", depth + 1)
        self.emit("}", depth)

        self.emit(f"pub fn fold_{name}<U, F: Fold<U> + ?Sized>(#[allow(unused)] folder: &mut F, node: {structname}{apply_u}) -> Result<{structname}{apply_target_u}, F::Error> {{", depth)
        if is_located:
            self.emit("fold_located(folder, node, |folder, node| {", depth)
            structname += "Data"
        fields_pattern = self.make_pattern(product.fields)
        self.emit(f"let {structname} {{ {fields_pattern} }} = node;", depth + 1)
        self.gen_construction(structname, product.fields, depth + 1)
        if is_located:
            self.emit("})", depth)
        self.emit("}", depth)

    def make_pattern(self, fields):
        return ",".join(rust_field(f.name) for f in fields)

    def gen_construction(self, cons_path, fields, depth):
        self.emit(f"Ok({cons_path} {{", depth)
        for field in fields:
            name = rust_field(field.name)
            self.emit(f"{name}: Foldable::fold({name}, folder)?,", depth + 1)
        self.emit("})", depth)


class FoldModuleVisitor(TypeInfoEmitVisitor):
    def visitModule(self, mod):
        depth = 0
        self.emit('#[cfg(feature = "fold")]', depth)
        self.emit("pub mod fold {", depth)
        self.emit("use super::*;", depth + 1)
        self.emit("use crate::fold_helpers::Foldable;", depth + 1)
        FoldTraitDefVisitor(self.file, self.typeinfo).visit(mod, depth + 1)
        FoldImplVisitor(self.file, self.typeinfo).visit(mod, depth + 1)
        self.emit("}", depth)


class ClassDefVisitor(EmitVisitor):

    def visitModule(self, mod):
        for dfn in mod.dfns:
            self.visit(dfn)

    def visitType(self, type, depth=0):
        self.visit(type.value, type.name, depth)

    def visitSum(self, sum, name, depth):
        for cons in sum.types:
            self.visit(cons, sum.attributes, depth)

    def visitConstructor(self, cons, attrs, depth):
        self.gen_classdef(cons.name, cons.fields, attrs, depth)

    def visitProduct(self, product, name, depth):
        self.gen_classdef(name, product.fields, product.attributes, depth)

    def gen_classdef(self, name, fields, attrs, depth):
        structname = "Node" + name
        self.emit(f'#[pyclass(module = "_ast", name = {json.dumps(name)}, base = "AstNode")]', depth)
        self.emit(f"struct {structname};", depth)
        self.emit("#[pyimpl(flags(HAS_DICT, BASETYPE))]", depth)
        self.emit(f"impl {structname} {{", depth)
        self.emit("#[extend_class]", depth + 1)
        self.emit("fn extend_class_with_fields(ctx: &PyContext, class: &PyTypeRef) {", depth + 1)
        fields = ",".join(f"ctx.new_str({json.dumps(f.name)})" for f in fields)
        self.emit(f'class.set_str_attr("_fields", ctx.new_list(vec![{fields}]));', depth + 2)
        attrs = ",".join(f"ctx.new_str({json.dumps(attr.name)})" for attr in attrs)
        self.emit(f'class.set_str_attr("_attributes", ctx.new_list(vec![{attrs}]));', depth + 2)
        self.emit("}", depth + 1)
        self.emit("}", depth)

class ExtendModuleVisitor(EmitVisitor):

    def visitModule(self, mod):
        depth = 0
        self.emit("pub fn extend_module_nodes(vm: &VirtualMachine, module: &PyObjectRef) {", depth)
        self.emit("extend_module!(vm, module, {", depth + 1)
        for dfn in mod.dfns:
            self.visit(dfn, depth + 2)
        self.emit("})", depth + 1)
        self.emit("}", depth)

    def visitType(self, type, depth):
        self.visit(type.value, type.name, depth)

    def visitSum(self, sum, name, depth):
        for cons in sum.types:
            self.visit(cons, depth)

    def visitConstructor(self, cons, depth):
        self.gen_extension(cons.name, depth)

    def visitProduct(self, product, name, depth):
        self.gen_extension(name, depth)

    def gen_extension(self, name, depth):
        self.emit(f"{json.dumps(name)} => Node{name}::make_class(&vm.ctx),", depth)


class TraitImplVisitor(EmitVisitor):

    def visitModule(self, mod):
        for dfn in mod.dfns:
            self.visit(dfn)

    def visitType(self, type, depth=0):
        self.visit(type.value, type.name, depth)

    def visitSum(self, sum, name, depth):
        enumname = get_rust_type(name)
        if sum.attributes:
            enumname += "Kind"

        self.emit(f"impl NamedNode for ast::{enumname} {{", depth)
        self.emit(f"const NAME: &'static str = {json.dumps(name)};", depth + 1)
        self.emit("}", depth)
        self.emit(f"impl Node for ast::{enumname} {{", depth)
        self.emit("fn ast_to_object(self, _vm: &VirtualMachine) -> PyObjectRef {", depth + 1)
        self.emit("match self {", depth + 2)
        for variant in sum.types:
            self.constructor_to_object(variant, enumname, depth + 3)
        self.emit("}", depth + 2)
        self.emit("}", depth + 1)
        self.emit("fn ast_from_object(_vm: &VirtualMachine, _object: PyObjectRef) -> PyResult<Self> {", depth + 1)
        self.gen_sum_fromobj(sum, name, enumname, depth + 2)
        self.emit("}", depth + 1)
        self.emit("}", depth)

    def constructor_to_object(self, cons, enumname, depth):
        fields_pattern = self.make_pattern(cons.fields)
        self.emit(f"ast::{enumname}::{cons.name} {{ {fields_pattern} }} => {{", depth)
        self.make_node(cons.name, cons.fields, depth + 1)
        self.emit("}", depth)

    def visitProduct(self, product, name, depth):
        structname = get_rust_type(name)
        if product.attributes:
            structname += "Data"

        self.emit(f"impl NamedNode for ast::{structname} {{", depth)
        self.emit(f"const NAME: &'static str = {json.dumps(name)};", depth + 1)
        self.emit("}", depth)
        self.emit(f"impl Node for ast::{structname} {{", depth)
        self.emit("fn ast_to_object(self, _vm: &VirtualMachine) -> PyObjectRef {", depth + 1)
        fields_pattern = self.make_pattern(product.fields)
        self.emit(f"let ast::{structname} {{ {fields_pattern} }} = self;", depth + 2)
        self.make_node(name, product.fields, depth + 2)
        self.emit("}", depth + 1)
        self.emit("fn ast_from_object(_vm: &VirtualMachine, _object: PyObjectRef) -> PyResult<Self> {", depth + 1)
        self.gen_product_fromobj(product, name, structname, depth + 2)
        self.emit("}", depth + 1)
        self.emit("}", depth)

    def make_node(self, variant, fields, depth):
        self.emit(f"let _node = AstNode.into_ref_with_type(_vm, Node{variant}::static_type().clone()).unwrap();", depth)
        if fields:
            self.emit("let _dict = _node.as_object().dict().unwrap();", depth)
        for f in fields:
            self.emit(f"_dict.set_item({json.dumps(f.name)}, {rust_field(f.name)}.ast_to_object(_vm), _vm).unwrap();", depth)
        self.emit("_node.into_object()", depth)

    def make_pattern(self, fields):
        return ",".join(rust_field(f.name) for f in fields)

    def gen_sum_fromobj(self, sum, sumname, enumname, depth):
        if sum.attributes:
            self.extract_location(sumname, depth)

        self.emit("let _cls = _object.class();", depth)
        self.emit("Ok(", depth)
        for cons in sum.types:
            self.emit(f"if _cls.is(Node{cons.name}::static_type()) {{", depth)
            self.gen_construction(f"{enumname}::{cons.name}", cons, sumname, depth + 1)
            self.emit("} else", depth)

        self.emit("{", depth)
        msg = f'format!("expected some sort of {sumname}, but got {{}}",_vm.to_repr(&_object)?)'
        self.emit(f"return Err(_vm.new_type_error({msg}));", depth + 1)
        self.emit("})", depth)

    def gen_product_fromobj(self, product, prodname, structname, depth):
        if product.attributes:
            self.extract_location(prodname, depth)

        self.emit("Ok(", depth)
        self.gen_construction(structname, product, prodname, depth + 1)
        self.emit(")", depth)

    def gen_construction(self, cons_path, cons, name, depth):
        self.emit(f"ast::{cons_path} {{", depth)
        for field in cons.fields:
            self.emit(f"{rust_field(field.name)}: {self.decode_field(field, name)},", depth + 1)
        self.emit("}", depth)

    def extract_location(self, typename, depth):
        row = self.decode_field(asdl.Field('int', 'lineno'), typename)
        column = self.decode_field(asdl.Field('int', 'col_offset'), typename)
        self.emit(f"let _location = ast::Location::new({row}, {column});", depth)

    def wrap_located_node(self, depth):
        self.emit("let node = ast::Located::new(_location, node);", depth)

    def decode_field(self, field, typename):
        name = json.dumps(field.name)
        if field.opt and not field.seq:
            return f"get_node_field_opt(_vm, &_object, {name})?.map(|obj| Node::ast_from_object(_vm, obj)).transpose()?"
        else:
            return f"Node::ast_from_object(_vm, get_node_field(_vm, &_object, {name}, {json.dumps(typename)})?)?"

class ChainOfVisitors:
    def __init__(self, *visitors):
        self.visitors = visitors

    def visit(self, object):
        for v in self.visitors:
            v.visit(object)
            v.emit("", 0)


def write_ast_def(mod, typeinfo, f):
    f.write('pub use crate::location::Location;\n')
    f.write('pub use crate::constant::*;\n')
    f.write('\n')
    f.write('type Ident = String;\n')
    f.write('\n')
    StructVisitor(f, typeinfo).emit_attrs(0)
    f.write('pub struct Located<T, U = ()> {\n')
    f.write('    pub location: Location,\n')
    f.write('    pub custom: U,\n')
    f.write('    pub node: T,\n')
    f.write('}\n')
    f.write('\n')
    f.write('impl<T> Located<T> {\n')
    f.write('    pub fn new(location: Location, node: T) -> Self {\n')
    f.write('        Self { location, custom: (), node }\n')
    f.write('    }\n')
    f.write('}\n')
    f.write('\n')

    c = ChainOfVisitors(StructVisitor(f, typeinfo),
                        FoldModuleVisitor(f, typeinfo))
    c.visit(mod)


def write_ast_mod(mod, f):
    f.write('use super::*;\n')
    f.write('\n')

    c = ChainOfVisitors(ClassDefVisitor(f),
                        TraitImplVisitor(f),
                        ExtendModuleVisitor(f))
    c.visit(mod)

def main(input_filename, ast_mod_filename, ast_def_filename, dump_module=False):
    auto_gen_msg = AUTOGEN_MESSAGE.format("/".join(Path(__file__).parts[-2:]))
    mod = asdl.parse(input_filename)
    if dump_module:
        print('Parsed Module:')
        print(mod)
    if not asdl.check(mod):
        sys.exit(1)

    typeinfo = {}
    FindUserdataTypesVisitor(typeinfo).visit(mod)

    with ast_def_filename.open("w") as def_file, \
         ast_mod_filename.open("w") as mod_file:
        def_file.write(auto_gen_msg)
        write_ast_def(mod, typeinfo, def_file)

        mod_file.write(auto_gen_msg)
        write_ast_mod(mod, mod_file)

    print(f"{ast_def_filename}, {ast_mod_filename} regenerated.")

if __name__ == "__main__":
    parser = ArgumentParser()
    parser.add_argument("input_file", type=Path)
    parser.add_argument("-M", "--mod-file", type=Path, required=True)
    parser.add_argument("-D", "--def-file", type=Path, required=True)
    parser.add_argument("-d", "--dump-module", action="store_true")

    args = parser.parse_args()
    main(args.input_file, args.mod_file, args.def_file, args.dump_module)
|
File diff suppressed because it is too large
@@ -0,0 +1,183 @@
#[derive(Clone, Debug, PartialEq)]
pub enum Constant {
    None,
    Bool(bool),
    Str(String),
    Bytes(Vec<u8>),
    Int(i128),
    Tuple(Vec<Constant>),
    Float(f64),
    Complex { real: f64, imag: f64 },
    Ellipsis,
}

impl From<String> for Constant {
    fn from(s: String) -> Constant {
        Self::Str(s)
    }
}
impl From<Vec<u8>> for Constant {
    fn from(b: Vec<u8>) -> Constant {
        Self::Bytes(b)
    }
}
impl From<bool> for Constant {
    fn from(b: bool) -> Constant {
        Self::Bool(b)
    }
}
impl From<i32> for Constant {
    fn from(i: i32) -> Constant {
        Self::Int(i128::from(i))
    }
}
impl From<i64> for Constant {
    fn from(i: i64) -> Constant {
        Self::Int(i128::from(i))
    }
}

/// Transforms a value prior to formatting it.
#[derive(Copy, Clone, Debug, PartialEq)]
#[repr(u8)]
pub enum ConversionFlag {
    /// Converts by calling `str(<value>)`.
    Str = b's',
    /// Converts by calling `ascii(<value>)`.
    Ascii = b'a',
    /// Converts by calling `repr(<value>)`.
    Repr = b'r',
}

impl ConversionFlag {
    #[must_use]
    pub fn try_from_byte(b: u8) -> Option<Self> {
        match b {
            b's' => Some(Self::Str),
            b'a' => Some(Self::Ascii),
            b'r' => Some(Self::Repr),
            _ => None,
        }
    }
}

#[cfg(feature = "constant-optimization")]
#[derive(Default)]
pub struct ConstantOptimizer {
    _priv: (),
}

#[cfg(feature = "constant-optimization")]
impl ConstantOptimizer {
    #[inline]
    #[must_use]
    pub fn new() -> Self {
        Self { _priv: () }
    }
}

#[cfg(feature = "constant-optimization")]
impl<U> crate::fold::Fold<U> for ConstantOptimizer {
    type TargetU = U;
    type Error = std::convert::Infallible;
    #[inline]
    fn map_user(&mut self, user: U) -> Result<Self::TargetU, Self::Error> {
        Ok(user)
    }
    fn fold_expr(&mut self, node: crate::Expr<U>) -> Result<crate::Expr<U>, Self::Error> {
        match node.node {
            crate::ExprKind::Tuple { elts, ctx } => {
                let elts =
                    elts.into_iter().map(|x| self.fold_expr(x)).collect::<Result<Vec<_>, _>>()?;
                let expr =
                    if elts.iter().all(|e| matches!(e.node, crate::ExprKind::Constant { .. })) {
                        let tuple = elts
                            .into_iter()
                            .map(|e| match e.node {
                                crate::ExprKind::Constant { value, .. } => value,
                                _ => unreachable!(),
                            })
                            .collect();
                        crate::ExprKind::Constant { value: Constant::Tuple(tuple), kind: None }
                    } else {
                        crate::ExprKind::Tuple { elts, ctx }
                    };
                Ok(crate::Expr { node: expr, custom: node.custom, location: node.location })
            }
            _ => crate::fold::fold_expr(self, node),
        }
    }
}

#[cfg(test)]
mod tests {
    #[cfg(feature = "constant-optimization")]
    #[test]
    fn test_constant_opt() {
        use super::*;
        use crate::fold::Fold;
        use crate::*;

        let location = Location::new(0, 0, FileName::default());
        let custom = ();
        let ast = Located {
            location,
            custom,
            node: ExprKind::Tuple {
                ctx: ExprContext::Load,
                elts: vec![
                    Located {
                        location,
                        custom,
                        node: ExprKind::Constant { value: 1.into(), kind: None },
                    },
                    Located {
                        location,
                        custom,
                        node: ExprKind::Constant { value: 2.into(), kind: None },
                    },
                    Located {
                        location,
                        custom,
                        node: ExprKind::Tuple {
                            ctx: ExprContext::Load,
                            elts: vec![
                                Located {
                                    location,
                                    custom,
                                    node: ExprKind::Constant { value: 3.into(), kind: None },
                                },
                                Located {
                                    location,
                                    custom,
                                    node: ExprKind::Constant { value: 4.into(), kind: None },
                                },
                                Located {
                                    location,
                                    custom,
                                    node: ExprKind::Constant { value: 5.into(), kind: None },
                                },
                            ],
                        },
                    },
                ],
            },
        };
        let new_ast = ConstantOptimizer::new().fold_expr(ast).unwrap_or_else(|e| match e {});
        assert_eq!(
            new_ast,
            Located {
                location,
                custom,
                node: ExprKind::Constant {
                    value: Constant::Tuple(vec![
                        1.into(),
                        2.into(),
                        Constant::Tuple(vec![3.into(), 4.into(), 5.into()])
                    ]),
                    kind: None
                },
            }
        );
    }
}
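The tuple-folding pass above can be sketched in Python on a toy AST. Here `(kind, payload)` pairs are a hypothetical stand-in for `ExprKind`, not the crate's actual types: a `Tuple` whose elements all fold to constants collapses into a single `Constant` node, exactly as `ConstantOptimizer::fold_expr` does.

```python
def fold_expr(node):
    """Recursively fold ("Tuple", elts) nodes whose elements are all
    ("Constant", value) into a single ("Constant", tuple) node."""
    kind, payload = node
    if kind != "Tuple":
        return node
    elts = [fold_expr(e) for e in payload]
    if all(e[0] == "Constant" for e in elts):
        return ("Constant", tuple(v for _, v in elts))
    return ("Tuple", elts)

ast = ("Tuple", [("Constant", 1), ("Constant", 2),
                 ("Tuple", [("Constant", 3), ("Constant", 4), ("Constant", 5)])])
print(fold_expr(ast))  # ('Constant', (1, 2, (3, 4, 5)))
```

The nested tuple folds first, which is what lets the outer tuple see only constants, mirroring the bottom-up recursion in the Rust `fold_expr`.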
@@ -0,0 +1,67 @@
use crate::constant;
use crate::fold::Fold;
use crate::StrRef;

pub(crate) trait Foldable<T, U> {
    type Mapped;
    fn fold<F: Fold<T, TargetU = U> + ?Sized>(
        self,
        folder: &mut F,
    ) -> Result<Self::Mapped, F::Error>;
}

impl<T, U, X> Foldable<T, U> for Vec<X>
where
    X: Foldable<T, U>,
{
    type Mapped = Vec<X::Mapped>;
    fn fold<F: Fold<T, TargetU = U> + ?Sized>(
        self,
        folder: &mut F,
    ) -> Result<Self::Mapped, F::Error> {
        self.into_iter().map(|x| x.fold(folder)).collect()
    }
}

impl<T, U, X> Foldable<T, U> for Option<X>
where
    X: Foldable<T, U>,
{
    type Mapped = Option<X::Mapped>;
    fn fold<F: Fold<T, TargetU = U> + ?Sized>(
        self,
        folder: &mut F,
    ) -> Result<Self::Mapped, F::Error> {
        self.map(|x| x.fold(folder)).transpose()
    }
}

impl<T, U, X> Foldable<T, U> for Box<X>
where
    X: Foldable<T, U>,
{
    type Mapped = Box<X::Mapped>;
    fn fold<F: Fold<T, TargetU = U> + ?Sized>(
        self,
        folder: &mut F,
    ) -> Result<Self::Mapped, F::Error> {
        (*self).fold(folder).map(Box::new)
    }
}

macro_rules! simple_fold {
    ($($t:ty),+$(,)?) => {
        $(impl<T, U> $crate::fold_helpers::Foldable<T, U> for $t {
            type Mapped = Self;
            #[inline]
            fn fold<F: Fold<T, TargetU = U> + ?Sized>(
                self,
                _folder: &mut F,
            ) -> Result<Self::Mapped, F::Error> {
                Ok(self)
            }
        })+
    };
}

simple_fold!(usize, String, bool, StrRef, constant::Constant, constant::ConversionFlag);
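A loose Python sketch of what the `Foldable` impls above accomplish: the fold distributes structurally over containers (here `list` stands in for `Vec` and `None` for an empty `Option`), while leaves pass through the folder, as the `simple_fold!` impls do for scalar types. This is an illustrative analogue, not the crate's API.

```python
def fold_all(value, folder):
    """Distribute `folder` over list/Optional containers; apply it to leaves."""
    if isinstance(value, list):
        return [fold_all(x, folder) for x in value]
    if value is None:
        return None
    return folder(value)

# Containers keep their shape; only the leaf values are transformed.
print(fold_all([1, None, [2, 3]], lambda x: x + 1))  # [2, None, [3, 4]]
```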
@@ -0,0 +1,51 @@
use crate::{Constant, ExprKind};

impl<U> ExprKind<U> {
    /// Returns a short name for the node suitable for use in error messages.
    #[must_use]
    pub fn name(&self) -> &'static str {
        match self {
            ExprKind::BoolOp { .. } | ExprKind::BinOp { .. } | ExprKind::UnaryOp { .. } => {
                "operator"
            }
            ExprKind::Subscript { .. } => "subscript",
            ExprKind::Await { .. } => "await expression",
            ExprKind::Yield { .. } | ExprKind::YieldFrom { .. } => "yield expression",
            ExprKind::Compare { .. } => "comparison",
            ExprKind::Attribute { .. } => "attribute",
            ExprKind::Call { .. } => "function call",
            ExprKind::Constant { value, .. } => match value {
                Constant::Str(_)
                | Constant::Int(_)
                | Constant::Float(_)
                | Constant::Complex { .. }
                | Constant::Bytes(_) => "literal",
                Constant::Tuple(_) => "tuple",
                Constant::Bool(_) | Constant::None => "keyword",
                Constant::Ellipsis => "ellipsis",
            },
            ExprKind::List { .. } => "list",
            ExprKind::Tuple { .. } => "tuple",
            ExprKind::Dict { .. } => "dict display",
            ExprKind::Set { .. } => "set display",
            ExprKind::ListComp { .. } => "list comprehension",
            ExprKind::DictComp { .. } => "dict comprehension",
            ExprKind::SetComp { .. } => "set comprehension",
            ExprKind::GeneratorExp { .. } => "generator expression",
            ExprKind::Starred { .. } => "starred",
            ExprKind::Slice { .. } => "slice",
            ExprKind::JoinedStr { values } => {
                if values.iter().any(|e| matches!(e.node, ExprKind::JoinedStr { .. })) {
                    "f-string expression"
                } else {
                    "literal"
                }
            }
            ExprKind::FormattedValue { .. } => "f-string expression",
            ExprKind::Name { .. } => "name",
            ExprKind::Lambda { .. } => "lambda",
            ExprKind::IfExp { .. } => "conditional expression",
            ExprKind::NamedExpr { .. } => "named expression",
        }
    }
}
@@ -0,0 +1,30 @@
#![deny(
    future_incompatible,
    let_underscore,
    nonstandard_style,
    rust_2024_compatibility,
    clippy::all
)]
#![warn(clippy::pedantic)]
#![allow(
    clippy::missing_errors_doc,
    clippy::missing_panics_doc,
    clippy::module_name_repetitions,
    clippy::too_many_lines,
    clippy::wildcard_imports
)]

#[macro_use]
extern crate lazy_static;

mod ast_gen;
mod constant;
#[cfg(feature = "fold")]
mod fold_helpers;
mod impls;
mod location;

pub use ast_gen::*;
pub use location::{FileName, Location};

pub type Suite<U = ()> = Vec<Stmt<U>>;
@@ -0,0 +1,116 @@
//! Datatypes to support source location information.
use crate::ast_gen::StrRef;
use std::cmp::Ordering;
use std::fmt;

#[derive(Clone, Copy, Debug, Eq, PartialEq)]
pub struct FileName(pub StrRef);
impl Default for FileName {
    fn default() -> Self {
        FileName("unknown".into())
    }
}

impl From<String> for FileName {
    fn from(s: String) -> Self {
        FileName(s.into())
    }
}

/// A location somewhere in the sourcecode.
#[derive(Clone, Copy, Debug, Default, Eq, PartialEq)]
pub struct Location {
    pub row: usize,
    pub column: usize,
    pub file: FileName,
}

impl fmt::Display for Location {
    fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
        write!(f, "{}:{}:{}", self.file.0, self.row, self.column)
    }
}

impl Ord for Location {
    fn cmp(&self, other: &Self) -> Ordering {
        let file_cmp = self.file.0.to_string().cmp(&other.file.0.to_string());
        if file_cmp != Ordering::Equal {
            return file_cmp;
        }

        let row_cmp = self.row.cmp(&other.row);
        if row_cmp != Ordering::Equal {
            return row_cmp;
        }

        self.column.cmp(&other.column)
    }
}

impl PartialOrd for Location {
    fn partial_cmp(&self, other: &Self) -> Option<Ordering> {
        Some(self.cmp(other))
    }
}

impl Location {
    pub fn visualize<'a>(
        &self,
        line: &'a str,
        desc: impl fmt::Display + 'a,
    ) -> impl fmt::Display + 'a {
        struct Visualize<'a, D: fmt::Display> {
            loc: Location,
            line: &'a str,
            desc: D,
        }
        impl<D: fmt::Display> fmt::Display for Visualize<'_, D> {
            fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
                write!(
                    f,
                    "{}\n{}\n{arrow:>pad$}",
                    self.desc,
                    self.line,
                    pad = self.loc.column,
                    arrow = "↑",
                )
            }
        }
        Visualize { loc: *self, line, desc }
    }
}

impl Location {
    #[must_use]
    pub fn new(row: usize, column: usize, file: FileName) -> Self {
        Location { row, column, file }
    }

    #[must_use]
    pub fn row(&self) -> usize {
        self.row
    }

    #[must_use]
    pub fn column(&self) -> usize {
        self.column
    }

    pub fn reset(&mut self) {
        self.row = 1;
        self.column = 1;
    }

    pub fn go_right(&mut self) {
        self.column += 1;
    }

    pub fn go_left(&mut self) {
        self.column -= 1;
    }

    pub fn newline(&mut self) {
        self.row += 1;
        self.column = 1;
    }
}
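The `Ord` impl above compares file name first, then row, then column. That sequential comparison is exactly Python's tuple ordering, so a minimal sketch (using bare `(file, row, column)` tuples, not the Rust struct) is:

```python
def location_sort_key(loc):
    """Order locations by file name, then row, then column,
    mirroring the sequential comparisons in the Ord impl."""
    file, row, column = loc
    return (file, row, column)

locs = [("b.py", 1, 1), ("a.py", 9, 9), ("a.py", 2, 5), ("a.py", 2, 3)]
locs.sort(key=location_sort_key)
print(locs)  # [('a.py', 2, 3), ('a.py', 2, 5), ('a.py', 9, 9), ('b.py', 1, 1)]
```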
@@ -1,13 +1,31 @@
+[features]
+test = []
+
 [package]
 name = "nac3core"
 version = "0.1.0"
 authors = ["M-Labs"]
-edition = "2018"
+edition = "2021"
 
 [dependencies]
-num-bigint = "0.3"
-num-traits = "0.2"
-inkwell = { git = "https://github.com/TheDan64/inkwell", branch = "master", features = ["llvm10-0"] }
-rustpython-parser = { git = "https://github.com/RustPython/RustPython", branch = "master" }
-indoc = "1.0"
+itertools = "0.13"
+crossbeam = "0.8"
+indexmap = "2.2"
+parking_lot = "0.12"
+rayon = "1.8"
+nac3parser = { path = "../nac3parser" }
+strum = "0.26.2"
+strum_macros = "0.26.4"
+
+[dependencies.inkwell]
+version = "0.4"
+default-features = false
+features = ["llvm14-0", "target-x86", "target-arm", "target-riscv", "no-libffi-linking"]
+
+[dev-dependencies]
+test-case = "1.2.0"
+indoc = "2.0"
+insta = "=1.11.0"
+
+[build-dependencies]
+regex = "1.10"
@@ -0,0 +1,134 @@
use regex::Regex;
use std::{
    env,
    fs::File,
    io::Write,
    path::Path,
    process::{Command, Stdio},
};

fn compile_irrt(irrt_dir: &Path, out_dir: &Path) {
    let irrt_cpp_path = irrt_dir.join("irrt.cpp");

    /*
     * HACK: Sadly, clang doesn't let us emit generic LLVM bitcode.
     * Compiling for WASM32 and filtering the output with regex is the closest we can get.
     */
    let flags: &[&str] = &[
        "--target=wasm32",
        irrt_cpp_path.to_str().unwrap(),
        "-x",
        "c++",
        "-fno-discard-value-names",
        "-fno-exceptions",
        "-fno-rtti",
        match env::var("PROFILE").as_deref() {
            Ok("debug") => "-O0",
            Ok("release") => "-O3",
            flavor => panic!("Unknown or missing build flavor {flavor:?}"),
        },
        "-emit-llvm",
        "-S",
        "-Wall",
        "-Wextra",
        "-Werror=return-type",
        "-I",
        irrt_dir.to_str().unwrap(),
        "-o",
        "-",
    ];

    println!("cargo:rerun-if-changed={}", out_dir.to_str().unwrap());

    let output = Command::new("clang-irrt")
        .args(flags)
        .output()
        .map(|o| {
            assert!(o.status.success(), "{}", std::str::from_utf8(&o.stderr).unwrap());
            o
        })
        .unwrap();

    // https://github.com/rust-lang/regex/issues/244
    let output = std::str::from_utf8(&output.stdout).unwrap().replace("\r\n", "\n");
    let mut filtered_output = String::with_capacity(output.len());

    // (?ms:^define.*?\}$) to capture `define` blocks
    // (?m:^declare.*?$) to capture `declare` blocks
    // (?m:^%.+?=\s*type\s*\{.+?\}$) to capture `type` declarations
    let regex_filter =
        Regex::new(r"(?ms:^define.*?\}$)|(?m:^declare.*?$)|(?m:^%.+?=\s*type\s*\{.+?\}$)").unwrap();
    for f in regex_filter.captures_iter(&output) {
        assert_eq!(f.len(), 1);
        filtered_output.push_str(&f[0]);
        filtered_output.push('\n');
    }

    let filtered_output = Regex::new("(#\\d+)|(, *![0-9A-Za-z.]+)|(![0-9A-Za-z.]+)|(!\".*?\")")
        .unwrap()
        .replace_all(&filtered_output, "");

    println!("cargo:rerun-if-env-changed=DEBUG_DUMP_IRRT");
    if env::var("DEBUG_DUMP_IRRT").is_ok() {
        let mut file = File::create(out_dir.join("irrt.ll")).unwrap();
        file.write_all(output.as_bytes()).unwrap();
        let mut file = File::create(out_dir.join("irrt-filtered.ll")).unwrap();
        file.write_all(filtered_output.as_bytes()).unwrap();
    }

    let mut llvm_as = Command::new("llvm-as-irrt")
        .stdin(Stdio::piped())
        .arg("-o")
        .arg(out_dir.join("irrt.bc"))
        .spawn()
        .unwrap();
    llvm_as.stdin.as_mut().unwrap().write_all(filtered_output.as_bytes()).unwrap();
    assert!(llvm_as.wait().unwrap().success());
}

fn compile_irrt_test(irrt_dir: &Path, out_dir: &Path) {
    let irrt_test_cpp_path = irrt_dir.join("irrt_test.cpp");
    let exe_path = out_dir.join("irrt_test.out");

    let flags: &[&str] = &[
        irrt_test_cpp_path.to_str().unwrap(),
        "-x",
        "c++",
        "-I",
        irrt_dir.to_str().unwrap(),
        "-g",
        "-fno-discard-value-names",
        "-O0",
        "-Wall",
        "-Wextra",
        "-Werror=return-type",
        "-lm", // for `tgamma()`, `lgamma()`
        "-o",
        exe_path.to_str().unwrap(),
    ];

    Command::new("clang-irrt-test")
        .args(flags)
        .output()
        .map(|o| {
            assert!(o.status.success(), "{}", std::str::from_utf8(&o.stderr).unwrap());
            o
        })
        .unwrap();
    println!("cargo:rerun-if-changed={}", out_dir.to_str().unwrap());
}

fn main() {
    let out_dir = env::var("OUT_DIR").unwrap();
    let out_dir = Path::new(&out_dir);

    let irrt_dir = Path::new("./irrt");

    compile_irrt(irrt_dir, out_dir);

    // https://github.com/rust-lang/cargo/issues/2549
    // `cargo test -F test` to also build `irrt_test.cpp`
    if cfg!(feature = "test") {
        compile_irrt_test(irrt_dir, out_dir);
    }
}
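The filtering step in `compile_irrt` above keeps only `define` blocks, `declare` lines, and named `type` declarations from clang's textual IR. A small Python sketch using the same three patterns (the IR snippet below is invented for illustration):

```python
import re

# The same alternation build.rs passes to the `regex` crate; Python's `re`
# also supports the scoped inline flags (?ms:...) and (?m:...).
FILTER = re.compile(
    r"(?ms:^define.*?\}$)|(?m:^declare.*?$)|(?m:^%.+?=\s*type\s*\{.+?\}$)")

ir = """; ModuleID = 'irrt.cpp'
target triple = "wasm32"
%struct.Slice = type { i32, i32 }
declare i32 @llvm.smax.i32(i32, i32)
define i32 @add(i32 %a, i32 %b) {
  %r = add i32 %a, %b
  ret i32 %r
}
!0 = !{i32 1}
"""
# Module headers and metadata lines are dropped; functions and types survive.
kept = "\n".join(m.group(0) for m in FILTER.finditer(ir))
```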
@@ -0,0 +1,5 @@
#include "irrt_everything.hpp"

/*
This file will be read by `clang-irrt` to conveniently produce LLVM IR for `nac3core/codegen`.
*/
@@ -0,0 +1,437 @@
#ifndef IRRT_DONT_TYPEDEF_INTS
typedef _BitInt(8) int8_t;
typedef unsigned _BitInt(8) uint8_t;
typedef _BitInt(32) int32_t;
typedef unsigned _BitInt(32) uint32_t;
typedef _BitInt(64) int64_t;
typedef unsigned _BitInt(64) uint64_t;
#endif

// NDArray indices are always `uint32_t`.
typedef uint32_t NDIndex;
// The type of an index or a value describing the length of a range/slice is
// always `int32_t`.
typedef int32_t SliceIndex;

template <typename T>
static T max(T a, T b) {
    return a > b ? a : b;
}

template <typename T>
static T min(T a, T b) {
    return a > b ? b : a;
}

// adapted from GNU Scientific Library: https://git.savannah.gnu.org/cgit/gsl.git/tree/sys/pow_int.c
// need to make sure `exp >= 0` before calling this function
template <typename T>
static T __nac3_int_exp_impl(T base, T exp) {
    T res = 1;
    /* repeated squaring method */
    do {
        if (exp & 1) {
            res *= base; /* for n odd */
        }
        exp >>= 1;
        base *= base;
    } while (exp);
    return res;
}
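The repeated-squaring loop in `__nac3_int_exp_impl` can be sketched in Python (Python's arbitrary-precision ints stand in for the fixed-width `T`, so overflow behavior is not modeled):

```python
def int_exp(base, exp):
    """Repeated-squaring integer power; caller must ensure exp >= 0,
    as with __nac3_int_exp_impl."""
    res = 1
    while True:
        if exp & 1:        # multiply in the current bit of the exponent
            res *= base
        exp >>= 1
        if not exp:        # the C version's do/while termination
            return res
        base *= base       # square for the next bit

print(int_exp(2, 10))  # 1024
```

Only O(log exp) multiplications are performed, which is the point of the squaring scheme.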
template <typename SizeT>
static SizeT __nac3_ndarray_calc_size_impl(
    const SizeT *list_data,
    SizeT list_len,
    SizeT begin_idx,
    SizeT end_idx
) {
    __builtin_assume(end_idx <= list_len);

    SizeT num_elems = 1;
    for (SizeT i = begin_idx; i < end_idx; ++i) {
        SizeT val = list_data[i];
        __builtin_assume(val > 0);
        num_elems *= val;
    }
    return num_elems;
}

template <typename SizeT>
static void __nac3_ndarray_calc_nd_indices_impl(
    SizeT index,
    const SizeT *dims,
    SizeT num_dims,
    NDIndex *idxs
) {
    SizeT stride = 1;
    for (SizeT dim = 0; dim < num_dims; dim++) {
        SizeT i = num_dims - dim - 1;
        __builtin_assume(dims[i] > 0);
        idxs[i] = (index / stride) % dims[i];
        stride *= dims[i];
    }
}

template <typename SizeT>
static SizeT __nac3_ndarray_flatten_index_impl(
    const SizeT *dims,
    SizeT num_dims,
    const NDIndex *indices,
    SizeT num_indices
) {
    SizeT idx = 0;
    SizeT stride = 1;
    for (SizeT i = 0; i < num_dims; ++i) {
        SizeT ri = num_dims - i - 1;
        if (ri < num_indices) {
            idx += stride * indices[ri];
        }

        __builtin_assume(dims[i] > 0);
        stride *= dims[ri];
    }
    return idx;
}
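The two functions above are inverses of each other: `calc_nd_indices` unravels a flat row-major index into per-dimension indices, and `flatten_index` folds them back. A Python sketch of the same arithmetic (full index vectors only; the `num_indices < num_dims` case of the C code is not modeled):

```python
def calc_nd_indices(index, dims):
    """Unravel a flat row-major index, as __nac3_ndarray_calc_nd_indices_impl."""
    idxs = [0] * len(dims)
    stride = 1
    for i in reversed(range(len(dims))):   # innermost dimension first
        idxs[i] = (index // stride) % dims[i]
        stride *= dims[i]
    return idxs

def flatten_index(dims, indices):
    """Inverse operation, as __nac3_ndarray_flatten_index_impl."""
    idx = 0
    stride = 1
    for ri in reversed(range(len(dims))):
        idx += stride * indices[ri]
        stride *= dims[ri]
    return idx

print(calc_nd_indices(17, (2, 3, 4)))  # [1, 1, 1]
```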
template <typename SizeT>
static void __nac3_ndarray_calc_broadcast_impl(
    const SizeT *lhs_dims,
    SizeT lhs_ndims,
    const SizeT *rhs_dims,
    SizeT rhs_ndims,
    SizeT *out_dims
) {
    SizeT max_ndims = lhs_ndims > rhs_ndims ? lhs_ndims : rhs_ndims;

    for (SizeT i = 0; i < max_ndims; ++i) {
        const SizeT *lhs_dim_sz = i < lhs_ndims ? &lhs_dims[lhs_ndims - i - 1] : nullptr;
        const SizeT *rhs_dim_sz = i < rhs_ndims ? &rhs_dims[rhs_ndims - i - 1] : nullptr;
        SizeT *out_dim = &out_dims[max_ndims - i - 1];

        if (lhs_dim_sz == nullptr) {
            *out_dim = *rhs_dim_sz;
        } else if (rhs_dim_sz == nullptr) {
            *out_dim = *lhs_dim_sz;
        } else if (*lhs_dim_sz == 1) {
            *out_dim = *rhs_dim_sz;
        } else if (*rhs_dim_sz == 1) {
            *out_dim = *lhs_dim_sz;
        } else if (*lhs_dim_sz == *rhs_dim_sz) {
            *out_dim = *lhs_dim_sz;
        } else {
            __builtin_unreachable();
        }
    }
}

template <typename SizeT>
static void __nac3_ndarray_calc_broadcast_idx_impl(
    const SizeT *src_dims,
    SizeT src_ndims,
    const NDIndex *in_idx,
    NDIndex *out_idx
) {
    for (SizeT i = 0; i < src_ndims; ++i) {
        SizeT src_i = src_ndims - i - 1;
        out_idx[src_i] = src_dims[src_i] == 1 ? 0 : in_idx[src_i];
    }
}

template<typename SizeT>
static void __nac3_ndarray_strides_from_shape_impl(
    SizeT ndims,
    SizeT *shape,
    SizeT *dst_strides
) {
    SizeT stride_product = 1;
    for (SizeT i = 0; i < ndims; i++) {
        int dim_i = ndims - i - 1;
        dst_strides[dim_i] = stride_product;
        stride_product *= shape[dim_i];
    }
}
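`__nac3_ndarray_calc_broadcast_impl` above applies NumPy-style broadcasting: shapes are aligned at their trailing dimensions, a size-1 (or missing) dimension stretches to match the other, and any other mismatch is declared unreachable. A Python sketch of the same rule:

```python
import itertools

def calc_broadcast(lhs_dims, rhs_dims):
    """Broadcast two shapes trailing-dimension first, mirroring
    __nac3_ndarray_calc_broadcast_impl. Mismatched non-1 sizes are
    assumed not to occur (the C code marks them __builtin_unreachable)."""
    out = []
    for l, r in itertools.zip_longest(reversed(lhs_dims), reversed(rhs_dims)):
        if l is None or l == 1:       # missing or stretchable lhs dimension
            out.append(r)
        elif r is None or r == 1:     # missing or stretchable rhs dimension
            out.append(l)
        else:
            assert l == r             # equal sizes pass through
            out.append(l)
    return list(reversed(out))

print(calc_broadcast([3, 1, 5], [4, 5]))  # [3, 4, 5]
```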
extern "C" {
#define DEF_nac3_int_exp_(T) \
    T __nac3_int_exp_##T(T base, T exp) {\
        return __nac3_int_exp_impl(base, exp);\
    }

DEF_nac3_int_exp_(int32_t)
DEF_nac3_int_exp_(int64_t)
DEF_nac3_int_exp_(uint32_t)
DEF_nac3_int_exp_(uint64_t)

SliceIndex __nac3_slice_index_bound(SliceIndex i, const SliceIndex len) {
    if (i < 0) {
        i = len + i;
    }
    if (i < 0) {
        return 0;
    } else if (i > len) {
        return len;
    }
    return i;
}

SliceIndex __nac3_range_slice_len(
    const SliceIndex start,
    const SliceIndex end,
    const SliceIndex step
) {
    SliceIndex diff = end - start;
    if (diff > 0 && step > 0) {
        return ((diff - 1) / step) + 1;
    } else if (diff < 0 && step < 0) {
        return ((diff + 1) / step) + 1;
    } else {
        return 0;
    }
}
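These two helpers implement Python's own slicing rules, so a direct sketch can be checked against the interpreter: `slice_index_bound` clamps a possibly negative index into `[0, len]`, and `range_slice_len` counts the elements a `range(start, end, step)` (exclusive end) would yield. Note the C code divides nonnegative quotients, where truncation and floor division agree, so Python's `//` is safe here.

```python
def slice_index_bound(i, length):
    """Clamp a possibly-negative index into [0, length],
    as __nac3_slice_index_bound."""
    if i < 0:
        i += length
    return min(max(i, 0), length)

def range_slice_len(start, end, step):
    """Element count of range(start, end, step), as __nac3_range_slice_len."""
    diff = end - start
    if diff > 0 and step > 0:
        return (diff - 1) // step + 1
    if diff < 0 and step < 0:
        return (diff + 1) // step + 1   # quotient >= 0, so // == C truncation
    return 0

print(range_slice_len(0, 10, 3))  # 4
```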
// Handle list assignment and dropping part of the list when
// both dest_step and src_step are +1.
// - All the index must *not* be out-of-bound or negative,
// - The end index is *inclusive*,
// - The length of src and dest slice size should already
//   be checked: if dest.step == 1 then len(src) <= len(dest) else len(src) == len(dest)
SliceIndex __nac3_list_slice_assign_var_size(
    SliceIndex dest_start,
    SliceIndex dest_end,
    SliceIndex dest_step,
    uint8_t *dest_arr,
    SliceIndex dest_arr_len,
    SliceIndex src_start,
    SliceIndex src_end,
    SliceIndex src_step,
    uint8_t *src_arr,
    SliceIndex src_arr_len,
    const SliceIndex size
) {
    /* if dest_arr_len == 0, do nothing since we do not support extending list */
    if (dest_arr_len == 0) return dest_arr_len;
    /* if both step is 1, memmove directly, handle the dropping of the list, and shrink size */
    if (src_step == dest_step && dest_step == 1) {
        const SliceIndex src_len = (src_end >= src_start) ? (src_end - src_start + 1) : 0;
        const SliceIndex dest_len = (dest_end >= dest_start) ? (dest_end - dest_start + 1) : 0;
        if (src_len > 0) {
            __builtin_memmove(
                dest_arr + dest_start * size,
                src_arr + src_start * size,
                src_len * size
            );
        }
        if (dest_len > 0) {
            /* dropping */
            __builtin_memmove(
                dest_arr + (dest_start + src_len) * size,
                dest_arr + (dest_end + 1) * size,
                (dest_arr_len - dest_end - 1) * size
            );
        }
        /* shrink size */
        return dest_arr_len - (dest_len - src_len);
    }
    /* if two range overlaps, need alloca */
    uint8_t need_alloca =
        (dest_arr == src_arr)
        && !(
            max(dest_start, dest_end) < min(src_start, src_end)
            || max(src_start, src_end) < min(dest_start, dest_end)
        );
    if (need_alloca) {
        uint8_t *tmp = reinterpret_cast<uint8_t *>(__builtin_alloca(src_arr_len * size));
        __builtin_memcpy(tmp, src_arr, src_arr_len * size);
        src_arr = tmp;
    }
    SliceIndex src_ind = src_start;
    SliceIndex dest_ind = dest_start;
    for (;
        (src_step > 0) ? (src_ind <= src_end) : (src_ind >= src_end);
        src_ind += src_step, dest_ind += dest_step
    ) {
        /* for constant optimization */
        if (size == 1) {
            __builtin_memcpy(dest_arr + dest_ind, src_arr + src_ind, 1);
        } else if (size == 4) {
            __builtin_memcpy(dest_arr + dest_ind * 4, src_arr + src_ind * 4, 4);
        } else if (size == 8) {
            __builtin_memcpy(dest_arr + dest_ind * 8, src_arr + src_ind * 8, 8);
        } else {
            /* memcpy for var size, cannot overlap after previous alloca */
            __builtin_memcpy(dest_arr + dest_ind * size, src_arr + src_ind * size, size);
        }
    }
    /* only dest_step == 1 can we shrink the dest list. */
    /* size should be ensured prior to calling this function */
    if (dest_step == 1 && dest_end >= dest_start) {
        __builtin_memmove(
            dest_arr + dest_ind * size,
            dest_arr + (dest_end + 1) * size,
            (dest_arr_len - dest_end - 1) * size
        );
        return dest_arr_len - (dest_end - dest_ind) - 1;
    }
    return dest_arr_len;
}
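The fast path of `__nac3_list_slice_assign_var_size` (both steps equal to 1) copies the source over `dest[dest_start..=dest_end]` and then shifts the tail left to close the gap, shrinking the destination. A Python sketch of just that contiguous case, using list concatenation in place of the two `memmove`s; it matches Python's own exclusive-end slice assignment:

```python
def list_slice_assign_contiguous(dest, dest_start, dest_end, src):
    """Step-1 path of __nac3_list_slice_assign_var_size: overwrite
    dest[dest_start..=dest_end] (end *inclusive*, as in the C code)
    with src and close the gap; the result may be shorter than dest."""
    assert len(src) <= dest_end - dest_start + 1   # precondition from the C comment
    return dest[:dest_start] + src + dest[dest_end + 1:]

d = [0, 1, 2, 3, 4, 5]
print(list_slice_assign_contiguous(d, 1, 4, [9]))  # [0, 9, 5]
```

This agrees with `d[1:5] = [9]` on a Python list, which is the semantics the runtime helper implements for the compiled code.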
int32_t __nac3_isinf(double x) {
    return __builtin_isinf(x);
}

int32_t __nac3_isnan(double x) {
    return __builtin_isnan(x);
}

double tgamma(double arg);

double __nac3_gamma(double z) {
    // Handling for denormals
    //     | x                 | Python gamma(x) | C tgamma(x) |
    // --- | ----------------- | --------------- | ----------- |
    // (1) | nan               | nan             | nan         |
    // (2) | -inf              | -inf            | inf         |
    // (3) | inf               | inf             | inf         |
    // (4) | 0.0               | inf             | inf         |
    // (5) | {-1.0, -2.0, ...} | inf             | nan         |

    // (1)-(3)
    if (__builtin_isinf(z) || __builtin_isnan(z)) {
        return z;
    }

    double v = tgamma(z);

    // (4)-(5)
    return __builtin_isinf(v) || __builtin_isnan(v) ? __builtin_inf() : v;
}

double lgamma(double arg);

double __nac3_gammaln(double x) {
    // libm's handling of value overflows differs from scipy:
    // - scipy: gammaln(-inf) -> -inf
    // - libm : lgamma(-inf) -> inf

    if (__builtin_isinf(x)) {
        return x;
    }

    return lgamma(x);
}

double j0(double x);

double __nac3_j0(double x) {
    // libm's handling of value overflows differs from scipy:
    // - scipy: j0(inf) -> nan
    // - libm : j0(inf) -> 0.0

    if (__builtin_isinf(x)) {
        return __builtin_nan("");
    }

    return j0(x);
}

uint32_t __nac3_ndarray_calc_size(
    const uint32_t *list_data,
    uint32_t list_len,
    uint32_t begin_idx,
    uint32_t end_idx
) {
    return __nac3_ndarray_calc_size_impl(list_data, list_len, begin_idx, end_idx);
}

uint64_t __nac3_ndarray_calc_size64(
    const uint64_t *list_data,
    uint64_t list_len,
    uint64_t begin_idx,
    uint64_t end_idx
) {
    return __nac3_ndarray_calc_size_impl(list_data, list_len, begin_idx, end_idx);
}

void __nac3_ndarray_calc_nd_indices(
    uint32_t index,
    const uint32_t* dims,
    uint32_t num_dims,
    NDIndex* idxs
) {
    __nac3_ndarray_calc_nd_indices_impl(index, dims, num_dims, idxs);
}

void __nac3_ndarray_calc_nd_indices64(
    uint64_t index,
    const uint64_t* dims,
    uint64_t num_dims,
    NDIndex* idxs
) {
    __nac3_ndarray_calc_nd_indices_impl(index, dims, num_dims, idxs);
}

uint32_t __nac3_ndarray_flatten_index(
    const uint32_t* dims,
    uint32_t num_dims,
    const NDIndex* indices,
    uint32_t num_indices
) {
    return __nac3_ndarray_flatten_index_impl(dims, num_dims, indices, num_indices);
}

uint64_t __nac3_ndarray_flatten_index64(
    const uint64_t* dims,
    uint64_t num_dims,
|
||||||
|
const NDIndex* indices,
|
||||||
|
uint64_t num_indices
|
||||||
|
) {
|
||||||
|
return __nac3_ndarray_flatten_index_impl(dims, num_dims, indices, num_indices);
|
||||||
|
}
|
||||||
|
|
||||||
|
void __nac3_ndarray_calc_broadcast(
|
||||||
|
const uint32_t *lhs_dims,
|
||||||
|
uint32_t lhs_ndims,
|
||||||
|
const uint32_t *rhs_dims,
|
||||||
|
uint32_t rhs_ndims,
|
||||||
|
uint32_t *out_dims
|
||||||
|
) {
|
||||||
|
return __nac3_ndarray_calc_broadcast_impl(lhs_dims, lhs_ndims, rhs_dims, rhs_ndims, out_dims);
|
||||||
|
}
|
||||||
|
|
||||||
|
void __nac3_ndarray_calc_broadcast64(
|
||||||
|
const uint64_t *lhs_dims,
|
||||||
|
uint64_t lhs_ndims,
|
||||||
|
const uint64_t *rhs_dims,
|
||||||
|
uint64_t rhs_ndims,
|
||||||
|
uint64_t *out_dims
|
||||||
|
) {
|
||||||
|
return __nac3_ndarray_calc_broadcast_impl(lhs_dims, lhs_ndims, rhs_dims, rhs_ndims, out_dims);
|
||||||
|
}
|
||||||
|
|
||||||
|
void __nac3_ndarray_calc_broadcast_idx(
|
||||||
|
const uint32_t *src_dims,
|
||||||
|
uint32_t src_ndims,
|
||||||
|
const NDIndex *in_idx,
|
||||||
|
NDIndex *out_idx
|
||||||
|
) {
|
||||||
|
__nac3_ndarray_calc_broadcast_idx_impl(src_dims, src_ndims, in_idx, out_idx);
|
||||||
|
}
|
||||||
|
|
||||||
|
void __nac3_ndarray_calc_broadcast_idx64(
|
||||||
|
const uint64_t *src_dims,
|
||||||
|
uint64_t src_ndims,
|
||||||
|
const NDIndex *in_idx,
|
||||||
|
NDIndex *out_idx
|
||||||
|
) {
|
||||||
|
__nac3_ndarray_calc_broadcast_idx_impl(src_dims, src_ndims, in_idx, out_idx);
|
||||||
|
}
|
||||||
|
|
||||||
|
void __nac3_ndarray_strides_from_shape(uint32_t ndims, uint32_t* shape, uint32_t* dst_strides) {
|
||||||
|
__nac3_ndarray_strides_from_shape_impl(ndims, shape, dst_strides);
|
||||||
|
}
|
||||||
|
|
||||||
|
void __nac3_ndarray_strides_from_shape64(uint64_t ndims, uint64_t* shape, uint64_t* dst_strides) {
|
||||||
|
__nac3_ndarray_strides_from_shape_impl(ndims, shape, dst_strides);
|
||||||
|
}
|
||||||
|
}
|
|
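The `dest_step == src_step == 1` fast path of `__nac3_list_slice_assign_var_size` (two `memmove`s plus a length shrink) can be exercised in isolation. A minimal standalone sketch for `size == 1`, with illustrative names that are not part of IRRT; `end` indices are inclusive, as in the IRRT function:

```cpp
#include <cassert>
#include <cstring>

// Simplified model of the contiguous fast path of list slice assignment
// (itemsize == 1): copy src over the start of the destination slice, then
// close the gap left by the longer destination slice, and return the new length.
int slice_assign_contiguous(unsigned char* dest, int dest_len,
                            int dest_start, int dest_end,
                            const unsigned char* src, int src_len) {
    // Copy the source elements into the start of the destination slice.
    std::memmove(dest + dest_start, src, src_len);
    // Move the tail of the list left to close the gap.
    int dest_slice_len = dest_end - dest_start + 1;
    std::memmove(dest + dest_start + src_len,
                 dest + dest_end + 1,
                 dest_len - dest_end - 1);
    // The list shrinks by the difference in slice lengths.
    return dest_len - (dest_slice_len - src_len);
}
```

This mirrors Python's `lst = [0, 1, 2, 3, 4]; lst[1:4] = [9]`, which yields `[0, 9, 4]`.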
@ -0,0 +1,216 @@
#pragma once

#include "irrt_utils.hpp"
#include "irrt_typedefs.hpp"

/*
    This header contains IRRT implementations
    that do not deserve to be categorized (e.g., into numpy, etc.)

    Check out the other *.hpp files before adding implementations here!
*/

// The type of an index, or of a value describing the length of a range/slice,
// is always `int32_t`.

namespace {
    // Adapted from the GNU Scientific Library: https://git.savannah.gnu.org/cgit/gsl.git/tree/sys/pow_int.c
    // The caller must ensure `exp >= 0` before calling this function.
    template <typename T>
    T __nac3_int_exp_impl(T base, T exp) {
        T res = 1;
        /* repeated squaring method */
        do {
            if (exp & 1) {
                res *= base; /* for odd exponents */
            }
            exp >>= 1;
            base *= base;
        } while (exp);
        return res;
    }
}

extern "C" {
    #define DEF_nac3_int_exp_(T) \
        T __nac3_int_exp_##T(T base, T exp) {\
            return __nac3_int_exp_impl(base, exp);\
        }

    DEF_nac3_int_exp_(int32_t)
    DEF_nac3_int_exp_(int64_t)
    DEF_nac3_int_exp_(uint32_t)
    DEF_nac3_int_exp_(uint64_t)

    SliceIndex __nac3_slice_index_bound(SliceIndex i, const SliceIndex len) {
        if (i < 0) {
            i = len + i;
        }
        if (i < 0) {
            return 0;
        } else if (i > len) {
            return len;
        }
        return i;
    }

    SliceIndex __nac3_range_slice_len(
        const SliceIndex start,
        const SliceIndex end,
        const SliceIndex step
    ) {
        SliceIndex diff = end - start;
        if (diff > 0 && step > 0) {
            return ((diff - 1) / step) + 1;
        } else if (diff < 0 && step < 0) {
            return ((diff + 1) / step) + 1;
        } else {
            return 0;
        }
    }

    // Handle list assignment, and dropping part of the list when
    // both dest_step and src_step are +1.
    // - All indices must be in-bounds and non-negative,
    // - The end indices are *inclusive*,
    // - The slice lengths must already have been checked:
    //   if dest.step == 1 then len(src) <= len(dest), else len(src) == len(dest)
    SliceIndex __nac3_list_slice_assign_var_size(
        SliceIndex dest_start,
        SliceIndex dest_end,
        SliceIndex dest_step,
        uint8_t *dest_arr,
        SliceIndex dest_arr_len,
        SliceIndex src_start,
        SliceIndex src_end,
        SliceIndex src_step,
        uint8_t *src_arr,
        SliceIndex src_arr_len,
        const SliceIndex size
    ) {
        /* if dest_arr_len == 0, do nothing, since we do not support extending the list */
        if (dest_arr_len == 0) return dest_arr_len;
        /* if both steps are 1, memmove directly, drop the leftover part of the destination slice, and shrink the size */
        if (src_step == dest_step && dest_step == 1) {
            const SliceIndex src_len = (src_end >= src_start) ? (src_end - src_start + 1) : 0;
            const SliceIndex dest_len = (dest_end >= dest_start) ? (dest_end - dest_start + 1) : 0;
            if (src_len > 0) {
                __builtin_memmove(
                    dest_arr + dest_start * size,
                    src_arr + src_start * size,
                    src_len * size
                );
            }
            if (dest_len > 0) {
                /* drop the remainder of the destination slice */
                __builtin_memmove(
                    dest_arr + (dest_start + src_len) * size,
                    dest_arr + (dest_end + 1) * size,
                    (dest_arr_len - dest_end - 1) * size
                );
            }
            /* shrink the size */
            return dest_arr_len - (dest_len - src_len);
        }
        /* if the two ranges overlap, we need an alloca'd copy of the source */
        uint8_t need_alloca =
            (dest_arr == src_arr)
            && !(
                max(dest_start, dest_end) < min(src_start, src_end)
                || max(src_start, src_end) < min(dest_start, dest_end)
            );
        if (need_alloca) {
            uint8_t *tmp = reinterpret_cast<uint8_t *>(__builtin_alloca(src_arr_len * size));
            __builtin_memcpy(tmp, src_arr, src_arr_len * size);
            src_arr = tmp;
        }
        SliceIndex src_ind = src_start;
        SliceIndex dest_ind = dest_start;
        for (;
            (src_step > 0) ? (src_ind <= src_end) : (src_ind >= src_end);
            src_ind += src_step, dest_ind += dest_step
        ) {
            /* constant sizes let the compiler optimize these memcpys away */
            if (size == 1) {
                __builtin_memcpy(dest_arr + dest_ind, src_arr + src_ind, 1);
            } else if (size == 4) {
                __builtin_memcpy(dest_arr + dest_ind * 4, src_arr + src_ind * 4, 4);
            } else if (size == 8) {
                __builtin_memcpy(dest_arr + dest_ind * 8, src_arr + src_ind * 8, 8);
            } else {
                /* memcpy for variable size; the alloca above ensures the regions cannot overlap */
                __builtin_memcpy(dest_arr + dest_ind * size, src_arr + src_ind * size, size);
            }
        }
        /* only when dest_step == 1 can we shrink the dest list */
        /* the slice lengths should have been validated before calling this function */
        if (dest_step == 1 && dest_end >= dest_start) {
            __builtin_memmove(
                dest_arr + dest_ind * size,
                dest_arr + (dest_end + 1) * size,
                (dest_arr_len - dest_end - 1) * size
            );
            return dest_arr_len - (dest_end - dest_ind) - 1;
        }
        return dest_arr_len;
    }

    int32_t __nac3_isinf(double x) {
        return __builtin_isinf(x);
    }

    int32_t __nac3_isnan(double x) {
        return __builtin_isnan(x);
    }

    double tgamma(double arg);

    double __nac3_gamma(double z) {
        // Handling for denormals
        //     | x                 | Python gamma(x) | C tgamma(x) |
        // --- | ----------------- | --------------- | ----------- |
        // (1) | nan               | nan             | nan         |
        // (2) | -inf              | -inf            | inf         |
        // (3) | inf               | inf             | inf         |
        // (4) | 0.0               | inf             | inf         |
        // (5) | {-1.0, -2.0, ...} | inf             | nan         |

        // (1)-(3)
        if (__builtin_isinf(z) || __builtin_isnan(z)) {
            return z;
        }

        double v = tgamma(z);

        // (4)-(5)
        return __builtin_isinf(v) || __builtin_isnan(v) ? __builtin_inf() : v;
    }

    double lgamma(double arg);

    double __nac3_gammaln(double x) {
        // libm's handling of value overflows differs from scipy:
        // - scipy: gammaln(-inf) -> -inf
        // - libm : lgamma(-inf) -> inf

        if (__builtin_isinf(x)) {
            return x;
        }

        return lgamma(x);
    }

    double j0(double x);

    double __nac3_j0(double x) {
        // libm's handling of value overflows differs from scipy:
        // - scipy: j0(inf) -> nan
        // - libm : j0(inf) -> 0.0

        if (__builtin_isinf(x)) {
            return __builtin_nan("");
        }

        return j0(x);
    }
}
@ -0,0 +1,14 @@
#pragma once

#include "irrt_utils.hpp"
#include "irrt_typedefs.hpp"
#include "irrt_basic.hpp"
#include "irrt_slice.hpp"
#include "irrt_numpy_ndarray.hpp"

/*
    All IRRT implementations.

    We don't have any pre-compiled objects, so we write all implementations in headers
    and concatenate them with `#include` into one massive source file that contains all
    the IRRT code.
*/
@ -0,0 +1,466 @@
#pragma once

#include "irrt_utils.hpp"
#include "irrt_typedefs.hpp"
#include "irrt_slice.hpp"

/*
    NDArray-related implementations.
*/

// NDArray indices are always `uint32_t`.
using NDIndex = uint32_t;

namespace {
    namespace ndarray_util {
        template <typename SizeT>
        static void set_indices_by_nth(SizeT ndims, const SizeT* shape, SizeT* indices, SizeT nth) {
            for (SizeT i = 0; i < ndims; i++) {
                SizeT dim_i = ndims - i - 1;
                SizeT dim = shape[dim_i];

                indices[dim_i] = nth % dim;
                nth /= dim;
            }
        }

        // Compute the strides of an ndarray given its `shape`,
        // assuming the ndarray is *fully C-contiguous*.
        //
        // You might want to read up on https://ajcr.net/stride-guide-part-1/.
        template <typename SizeT>
        static void set_strides_by_shape(SizeT itemsize, SizeT ndims, SizeT* dst_strides, const SizeT* shape) {
            SizeT stride_product = 1;
            for (SizeT i = 0; i < ndims; i++) {
                SizeT dim_i = ndims - i - 1;
                dst_strides[dim_i] = stride_product * itemsize;
                stride_product *= shape[dim_i];
            }
        }

        // Compute the size (i.e., number of elements) of an ndarray given its shape
        template <typename SizeT>
        static SizeT calc_size_from_shape(SizeT ndims, const SizeT* shape) {
            SizeT size = 1;
            for (SizeT dim_i = 0; dim_i < ndims; dim_i++) size *= shape[dim_i];
            return size;
        }

        // NOTE: `SizeT` must be signed here: `target_dim_i` and `src_dim_i`
        // go negative when one shape is shorter than the other.
        template <typename SizeT>
        static bool can_broadcast_shape_to(
            const SizeT target_ndims,
            const SizeT *target_shape,
            const SizeT src_ndims,
            const SizeT *src_shape
        ) {
            /*
            See https://numpy.org/doc/stable/user/basics.broadcasting.html

            This function handles this example:
            ```
            Image  (3d array): 256 x 256 x 3
            Scale  (1d array):             3
            Result (3d array): 256 x 256 x 3
            ```

            Other interesting examples to consider:
            - `can_broadcast_shape_to([3], [1, 1, 1, 1, 3]) == true`
            - `can_broadcast_shape_to([3], [3, 1]) == false`
            - `can_broadcast_shape_to([256, 256, 3], [256, 1, 3]) == true`

            In cases when the shapes contain zero(es):
            - `can_broadcast_shape_to([0], [1]) == true`
            - `can_broadcast_shape_to([0], [2]) == false`
            - `can_broadcast_shape_to([0, 4, 0, 0], [1]) == true`
            - `can_broadcast_shape_to([0, 4, 0, 0], [1, 1, 1, 1]) == true`
            - `can_broadcast_shape_to([0, 4, 0, 0], [1, 4, 1, 1]) == true`
            - `can_broadcast_shape_to([4, 3], [0, 3]) == false`
            - `can_broadcast_shape_to([4, 3], [0, 0]) == false`
            */

            // This is essentially doing the following in Python:
            // `for target_dim, src_dim in itertools.zip_longest(target_shape[::-1], src_shape[::-1], fillvalue=1)`
            for (SizeT i = 0; i < max(target_ndims, src_ndims); i++) {
                SizeT target_dim_i = target_ndims - i - 1;
                SizeT src_dim_i = src_ndims - i - 1;

                bool target_dim_exists = target_dim_i >= 0;
                bool src_dim_exists = src_dim_i >= 0;

                SizeT target_dim = target_dim_exists ? target_shape[target_dim_i] : 1;
                SizeT src_dim = src_dim_exists ? src_shape[src_dim_i] : 1;

                bool ok = src_dim == 1 || target_dim == src_dim;
                if (!ok) return false;
            }

            return true;
        }
    }
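The C-contiguous stride rule implemented by `set_strides_by_shape` is `strides[i] = itemsize * prod(shape[i+1:])`. A standalone sketch of the same computation (illustrative name, not the IRRT symbol):

```cpp
#include <cstdint>

// C-contiguous strides: the last axis moves by `itemsize` bytes,
// each earlier axis by the product of all later extents times `itemsize`.
void strides_from_shape(int64_t itemsize, int64_t ndims,
                        int64_t* strides, const int64_t* shape) {
    int64_t acc = itemsize;
    for (int64_t i = ndims - 1; i >= 0; i--) {
        strides[i] = acc;
        acc *= shape[i];
    }
}
```

For example, a `(2, 3, 4)` array of 8-byte elements gets byte strides `(96, 32, 8)`, matching `np.zeros((2, 3, 4)).strides`.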
    typedef uint8_t NDSliceType;
    extern "C" {
        const NDSliceType INPUT_SLICE_TYPE_INDEX = 0;
        const NDSliceType INPUT_SLICE_TYPE_SLICE = 1;
    }

    struct NDSlice {
        // A poor man's `std::variant<int, UserRange>`
        NDSliceType type;

        /*
        if type == INPUT_SLICE_TYPE_INDEX => `slice` points to a single `SizeT`
        if type == INPUT_SLICE_TYPE_SLICE => `slice` points to a single `UserRange`
        */
        uint8_t *slice;
    };

    namespace ndarray_util {
        template<typename SizeT>
        SizeT deduce_ndims_after_slicing(SizeT ndims, SizeT num_slices, const NDSlice *slices) {
            irrt_assert(num_slices <= ndims);

            SizeT final_ndims = ndims;
            for (SizeT i = 0; i < num_slices; i++) {
                if (slices[i].type == INPUT_SLICE_TYPE_INDEX) {
                    final_ndims--; // An integer index demotes the rank by 1
                }
            }
            return final_ndims;
        }
    }

    template <typename SizeT>
    struct NDArrayIndicesIter {
        SizeT ndims;
        const SizeT *shape;
        SizeT *indices;

        void set_indices_zero() {
            __builtin_memset(indices, 0, sizeof(SizeT) * ndims);
        }

        // Advance `indices` in C order (last axis fastest), wrapping
        // back to all zeros after the final index.
        void next() {
            for (SizeT i = 0; i < ndims; i++) {
                SizeT dim_i = ndims - i - 1;

                indices[dim_i]++;
                if (indices[dim_i] < shape[dim_i]) {
                    break;
                } else {
                    indices[dim_i] = 0;
                }
            }
        }
    };
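`NDArrayIndicesIter::next` advances a multi-dimensional index like an odometer: increment the last axis, and carry into earlier axes on overflow. A minimal standalone sketch of the same stepping rule (illustrative name):

```cpp
#include <cstdint>

// Advance a multi-dimensional index in C order (last axis fastest),
// wrapping back to all zeros after the final index, as
// NDArrayIndicesIter::next does.
void next_index(int32_t ndims, const int32_t* shape, int32_t* indices) {
    for (int32_t i = ndims - 1; i >= 0; i--) {
        if (++indices[i] < shape[i]) return;  // no carry needed
        indices[i] = 0;                       // carry into the next axis
    }
}
```

Over a `(2, 3)` shape this visits `(0,0), (0,1), (0,2), (1,0), (1,1), (1,2)` and then wraps to `(0,0)`.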
    // The NDArray object. `SizeT` is the *signed* size type of this ndarray.
    //
    // NOTE: The order of fields is IMPORTANT. DON'T TOUCH IT
    //
    // Some resources you might find helpful:
    // - The official numpy implementation:
    //   - https://github.com/numpy/numpy/blob/735a477f0bc2b5b84d0e72d92f224bde78d4e069/doc/source/reference/c-api/types-and-structures.rst
    // - On strides (about reshaping, slicing, C-contiguousness, etc.):
    //   - https://ajcr.net/stride-guide-part-1/
    //   - https://ajcr.net/stride-guide-part-2/
    //   - https://ajcr.net/stride-guide-part-3/
    template <typename SizeT>
    struct NDArray {
        // The underlying data this `ndarray` is pointing to.
        //
        // NOTE: Formally this should be of type `void *`, but clang
        // translates `void *` to `i8 *` when run with `-S -emit-llvm`,
        // so we put `uint8_t *` here for clarity.
        uint8_t *data;

        // The number of bytes of a single element in `data`.
        //
        // The `SizeT` is treated as `unsigned`.
        SizeT itemsize;

        // The number of dimensions of this shape.
        //
        // The `SizeT` is treated as `unsigned`.
        SizeT ndims;

        // Array shape, with length equal to `ndims`.
        //
        // The `SizeT` is treated as `unsigned`.
        //
        // NOTE: `shape` can contain 0.
        // (those appear when the user makes an out-of-bounds slice into an ndarray, e.g., `np.zeros((3, 3))[400:].shape == (0, 3)`)
        SizeT *shape;

        // Array strides (stride values are in number of bytes, NOT number of elements), with length equal to `ndims`.
        //
        // The `SizeT` is treated as `signed`.
        //
        // NOTE: `strides` can contain negative numbers.
        // (those appear when there is a slice with a negative step, e.g., `my_array[::-1]`)
        SizeT *strides;

        // Calculate the size (i.e., number of elements) of an `ndarray`.
        // This function corresponds to `np.size(<ndarray>)` or `ndarray.size`
        SizeT size() {
            return ndarray_util::calc_size_from_shape(ndims, shape);
        }

        // Calculate the number of bytes of the content of an `ndarray` *in its view*.
        // This function corresponds to `ndarray.nbytes`
        SizeT nbytes() {
            return this->size() * itemsize;
        }

        void set_value_at_pelement(uint8_t* pelement, const uint8_t* pvalue) {
            __builtin_memcpy(pelement, pvalue, itemsize);
        }

        uint8_t* get_pelement(const SizeT *indices) {
            uint8_t* element = data;
            for (SizeT dim_i = 0; dim_i < ndims; dim_i++)
                element += indices[dim_i] * strides[dim_i];
            return element;
        }

        uint8_t* get_nth_pelement(SizeT nth) {
            irrt_assert(0 <= nth);
            irrt_assert(nth < this->size());

            SizeT* indices = (SizeT*) __builtin_alloca(sizeof(SizeT) * this->ndims);
            ndarray_util::set_indices_by_nth(this->ndims, this->shape, indices, nth);
            return get_pelement(indices);
        }

        // Get a pointer to the first element of this ndarray, assuming
        // `this->size() > 0` (i.e., the view is not "degenerate" due to zeroes in `this->shape`).
        //
        // This is particularly useful when the ndarray holds just a single scalar.
        uint8_t* get_first_pelement() {
            irrt_assert(this->size() > 0);
            return this->data; // ...It is simply `this->data`
        }

        // Are the given `indices` valid/in-bounds?
        bool in_bounds(const SizeT *indices) {
            for (SizeT dim_i = 0; dim_i < ndims; dim_i++) {
                bool dim_ok = indices[dim_i] < shape[dim_i];
                if (!dim_ok) return false;
            }
            return true;
        }

        // Fill the ndarray with a value
        void fill_generic(const uint8_t* pvalue) {
            NDArrayIndicesIter<SizeT> iter;
            iter.ndims = this->ndims;
            iter.shape = this->shape;
            iter.indices = (SizeT*) __builtin_alloca(sizeof(SizeT) * ndims);
            iter.set_indices_zero();

            for (SizeT i = 0; i < this->size(); i++, iter.next()) {
                uint8_t* pelement = get_pelement(iter.indices);
                set_value_at_pelement(pelement, pvalue);
            }
        }

        // Set the strides of the ndarray with `ndarray_util::set_strides_by_shape`
        void set_strides_by_shape() {
            ndarray_util::set_strides_by_shape(itemsize, ndims, strides, shape);
        }

        // https://numpy.org/doc/stable/reference/generated/numpy.eye.html
        void set_to_eye(SizeT k, const uint8_t* zero_pvalue, const uint8_t* one_pvalue) {
            __builtin_assume(ndims == 2);

            // TODO: Better implementation

            fill_generic(zero_pvalue);
            for (SizeT i = 0; i < min(shape[0], shape[1]); i++) {
                SizeT row = i;
                SizeT col = i + k;
                SizeT indices[2] = { row, col };

                if (!in_bounds(indices)) continue;

                uint8_t* pelement = get_pelement(indices);
                set_value_at_pelement(pelement, one_pvalue);
            }
        }

        // To support numpy's complex slices (e.g., `my_array[:50:2, 4, :2:-1]`)
        //
        // Things assumed by this function:
        // - `dst_ndarray` is allocated by the caller
        // - `dst_ndarray.ndims` has the correct value (according to `ndarray_util::deduce_ndims_after_slicing`)
        // - ... and `dst_ndarray.shape` and `dst_ndarray.strides` have been allocated by the caller as well
        //
        // Other notes:
        // - `dst_ndarray->data` does not have to be set; it will be derived.
        // - `dst_ndarray->itemsize` does not have to be set; it will be set to `this->itemsize`.
        // - `dst_ndarray->shape` and `dst_ndarray->strides` may be left uninitialized by the caller.
        void slice(SizeT num_ndslices, NDSlice* ndslices, NDArray<SizeT>* dst_ndarray) {
            // REFERENCE CODE (check out `_index_helper` in `__getitem__`):
            // https://github.com/wadetb/tinynumpy/blob/0d23d22e07062ffab2afa287374c7b366eebdda1/tinynumpy/tinynumpy.py#L652

            irrt_assert(dst_ndarray->ndims == ndarray_util::deduce_ndims_after_slicing(this->ndims, num_ndslices, ndslices));

            dst_ndarray->data = this->data;
            dst_ndarray->itemsize = this->itemsize;

            SizeT this_axis = 0;
            SizeT dst_axis = 0;

            for (SizeT i = 0; i < num_ndslices; i++) {
                NDSlice *ndslice = &ndslices[i];
                if (ndslice->type == INPUT_SLICE_TYPE_INDEX) {
                    // Handle when the ndslice is just a single (possibly negative) integer
                    // e.g., `my_array[::2, -5, ::-1]`
                    //                       ^^------ like this
                    SizeT index_user = *((SizeT*) ndslice->slice);
                    SizeT index = resolve_index_in_length(this->shape[this_axis], index_user);
                    dst_ndarray->data += index * this->strides[this_axis]; // Add offset

                    // Next
                    this_axis++;
                } else if (ndslice->type == INPUT_SLICE_TYPE_SLICE) {
                    // Handle when the ndslice is a slice (represented by UserSlice in IRRT)
                    // e.g., `my_array[::2, -5, ::-1]`
                    //                 ^^^------^^^^----- like these
                    UserSlice<SizeT>* user_slice = (UserSlice<SizeT>*) ndslice->slice;
                    Slice<SizeT> slice = user_slice->indices(this->shape[this_axis]); // To resolve negative indices and other funny stuff written by the user

                    // NOTE: There is no need to write special code to handle negative steps/strides.
                    // This simple implementation meticulously handles both positive and negative steps/strides.
                    // Check out the tinynumpy and IRRT test cases if you are not convinced.
                    dst_ndarray->data += slice.start * this->strides[this_axis]; // Add offset (NOTE: no need to `* itemsize`; strides count in # of bytes)
                    dst_ndarray->strides[dst_axis] = slice.step * this->strides[this_axis]; // Determine stride
                    dst_ndarray->shape[dst_axis] = slice.len(); // Determine shape dimension

                    // Next
                    dst_axis++;
                    this_axis++;
                } else {
                    __builtin_unreachable();
                }
            }

            irrt_assert(dst_axis == dst_ndarray->ndims); // Sanity check on the implementation
        }

        // Similar to `np.broadcast_to(<ndarray>, <target_shape>)`
        // Assumptions:
        // - `this` has to be fully initialized.
        // - `dst_ndarray->ndims` has to be set.
        // - `dst_ndarray->shape` has to be set; this determines the shape `this` broadcasts to.
        //
        // Other notes:
        // - `dst_ndarray->data` does not have to be set; it will be set to `this->data`.
        // - `dst_ndarray->itemsize` does not have to be set; it will be set to `this->itemsize`.
        // - `dst_ndarray->strides` does not have to be set; it will be overwritten.
        //
        // Cautions:
        // ```
        // xs = np.zeros((4,))
        // ys = np.zeros((4, 1))
        // ys[:] = xs  # ok
        //
        // xs = np.zeros((1, 4))
        // ys = np.zeros((4,))
        // ys[:] = xs  # allowed
        // # However `np.broadcast_to(xs, (4,))` would fail, as per numpy's broadcasting rules.
        // # Apparently numpy will "deprecate" this? SEE https://github.com/numpy/numpy/issues/21744
        // # This implementation will NOT support this assignment.
        // ```
        void broadcast_to(NDArray<SizeT>* dst_ndarray) {
            dst_ndarray->data = this->data;
            dst_ndarray->itemsize = this->itemsize;

            irrt_assert(
                ndarray_util::can_broadcast_shape_to(
                    dst_ndarray->ndims,
                    dst_ndarray->shape,
                    this->ndims,
                    this->shape
                )
            );

            SizeT stride_product = 1;
            for (SizeT i = 0; i < max(this->ndims, dst_ndarray->ndims); i++) {
                SizeT this_dim_i = this->ndims - i - 1;
                SizeT dst_dim_i = dst_ndarray->ndims - i - 1;

                bool this_dim_exists = this_dim_i >= 0;
                bool dst_dim_exists = dst_dim_i >= 0;

                // A dimension is broadcast if it is missing from `this`, or has
                // extent 1 in `this` but a larger extent in `dst_ndarray`.
                bool c1 = this_dim_exists && this->shape[this_dim_i] == 1;
                bool c2 = dst_dim_exists && dst_ndarray->shape[dst_dim_i] != 1;
                if (!this_dim_exists || (c1 && c2)) {
                    dst_ndarray->strides[dst_dim_i] = 0; // Stride 0 "freezes" the data along this dimension
                } else {
                    dst_ndarray->strides[dst_dim_i] = stride_product * this->itemsize;
                    stride_product *= this->shape[this_dim_i]; // NOTE: this_dim_exists must be true here.
                }
            }
        }

        // Simulates `this_ndarray[:] = src_ndarray`, with automatic broadcasting.
        // Caution on https://github.com/numpy/numpy/issues/21744
        // Also see `NDArray::broadcast_to`
        void assign_with(NDArray<SizeT>* src_ndarray) {
            irrt_assert(
                ndarray_util::can_broadcast_shape_to(
                    this->ndims,
                    this->shape,
                    src_ndarray->ndims,
                    src_ndarray->shape
                )
            );

            // Broadcast `src_ndarray` to make the reading process *much* easier
            SizeT* broadcasted_src_ndarray_strides = (SizeT*) __builtin_alloca(sizeof(SizeT) * this->ndims); // Remember to allocate the strides beforehand
            NDArray<SizeT> broadcasted_src_ndarray = {
                .ndims = this->ndims,
                .shape = this->shape,
                .strides = broadcasted_src_ndarray_strides
            };
            src_ndarray->broadcast_to(&broadcasted_src_ndarray);

            // Using an iterator instead of `get_nth_pelement` because it is slightly faster
            SizeT* indices = (SizeT*) __builtin_alloca(sizeof(SizeT) * this->ndims);
            auto iter = NDArrayIndicesIter<SizeT> {
                .ndims = this->ndims,
                .shape = this->shape,
                .indices = indices
            };
            iter.set_indices_zero();

            const SizeT this_size = this->size();
            for (SizeT i = 0; i < this_size; i++, iter.next()) {
                uint8_t* src_pelement = broadcasted_src_ndarray.get_pelement(indices);
                uint8_t* this_pelement = this->get_pelement(indices);
                this->set_value_at_pelement(this_pelement, src_pelement);
            }
        }
    };
}

extern "C" {
    uint32_t __nac3_ndarray_size(NDArray<int32_t>* ndarray) {
        return ndarray->size();
    }

    uint64_t __nac3_ndarray_size64(NDArray<int64_t>* ndarray) {
        return ndarray->size();
    }

    void __nac3_ndarray_fill_generic(NDArray<int32_t>* ndarray, uint8_t* pvalue) {
        ndarray->fill_generic(pvalue);
    }

    void __nac3_ndarray_fill_generic64(NDArray<int64_t>* ndarray, uint8_t* pvalue) {
        ndarray->fill_generic(pvalue);
    }

    // void __nac3_ndarray_slice(NDArray<int32_t>* ndarray, int32_t num_slices, NDSlice<int32_t> *slices, NDArray<int32_t> *dst_ndarray) {
    //     // ndarray->slice(num_slices, slices, dst_ndarray);
    // }
}
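The central trick in `broadcast_to` above is that a broadcast dimension gets stride 0, so every index along it reads the same memory; `get_pelement` then works unchanged on the broadcast view. A minimal demonstration of stride-0 indexing, independent of the IRRT structs (illustrative names):

```cpp
#include <cstdint>
#include <cstring>

// Read element (i, j) of a 2D int32 view described by byte strides,
// the same pointer arithmetic NDArray::get_pelement performs.
int32_t read2d(const uint8_t* data, const int64_t strides[2],
               int64_t i, int64_t j) {
    const uint8_t* p = data + i * strides[0] + j * strides[1];
    int32_t v;
    std::memcpy(&v, p, sizeof v);  // itemsize == 4 here
    return v;
}
```

A 1x3 row broadcast to 4x3 uses byte strides `{0, 4}`: varying `i` never moves the pointer, so all four "rows" alias the same three values.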
@ -0,0 +1,80 @@
#pragma once

#include "irrt_utils.hpp"
#include "irrt_typedefs.hpp"

namespace {
    // A proper slice in IRRT; all negative indices have been resolved to absolute values.
    // Even though nac3core's slices are always `int32_t`, we template `Slice` anyway,
    // since this struct is used as a general utility.
    template <typename T>
    struct Slice {
        T start;
        T stop;
        T step;

        // The length of (i.e., the number of elements in) the slice if it were a range,
        // i.e., the value of `len(range(this->start, this->stop, this->step))`
        T len() {
            T diff = stop - start;
            if (diff > 0 && step > 0) {
                return ((diff - 1) / step) + 1;
            } else if (diff < 0 && step < 0) {
                return ((diff + 1) / step) + 1;
            } else {
                return 0;
            }
        }
    };
|
||||||
|
|
||||||
|
template<typename T>
|
||||||
|
T resolve_index_in_length(T length, T index) {
|
||||||
|
irrt_assert(length >= 0);
|
||||||
|
if (index < 0) {
|
||||||
|
// Remember that index is negative, so do a plus here
|
||||||
|
return max(length + index, 0);
|
||||||
|
} else {
|
||||||
|
return min(length, index);
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
// NOTE: using a bitfield for the `*_defined` is better, at the
|
||||||
|
// cost of a more annoying implementation in nac3core inkwell
|
||||||
|
template <typename T>
|
||||||
|
struct UserSlice {
|
||||||
|
uint8_t start_defined;
|
||||||
|
T start;
|
||||||
|
|
||||||
|
uint8_t stop_defined;
|
||||||
|
T stop;
|
||||||
|
|
||||||
|
uint8_t step_defined;
|
||||||
|
T step;
|
||||||
|
|
||||||
|
// Like Python's `slice(start, stop, step).indices(length)`
|
||||||
|
Slice<T> indices(T length) {
|
||||||
|
// NOTE: This function implements Python's `slice.indices` *FAITHFULLY*.
|
||||||
|
// SEE: https://github.com/python/cpython/blob/f62161837e68c1c77961435f1b954412dd5c2b65/Objects/sliceobject.c#L546
|
||||||
|
irrt_assert(length >= 0);
|
||||||
|
irrt_assert(!step_defined || step != 0); // step_defined -> step != 0; step cannot be zero if specified by user
|
||||||
|
|
||||||
|
Slice<T> result;
|
||||||
|
result.step = step_defined ? step : 1;
|
||||||
|
bool step_is_negative = result.step < 0;
|
||||||
|
|
||||||
|
if (start_defined) {
|
||||||
|
result.start = resolve_index_in_length(length, start);
|
||||||
|
} else {
|
||||||
|
result.start = step_is_negative ? length - 1 : 0;
|
||||||
|
}
|
||||||
|
|
||||||
|
if (stop_defined) {
|
||||||
|
result.stop = resolve_index_in_length(length, stop);
|
||||||
|
} else {
|
||||||
|
result.stop = step_is_negative ? -1 : length;
|
||||||
|
}
|
||||||
|
|
||||||
|
return result;
|
||||||
|
}
|
||||||
|
};
|
||||||
|
}
|
|
@ -0,0 +1,658 @@
// This file will be compiled like a real C++ program,
// and we do have the luxury to use the standard libraries.
// That is if the nix flakes do not have issues... especially on msys2...
#include <cstdint>
#include <cstdio>
#include <cstdlib>

// Set `IRRT_DONT_TYPEDEF_INTS` because `cstdint` defines them
#define IRRT_DONT_TYPEDEF_INTS
#include "irrt_everything.hpp"

void test_fail() {
    printf("[!] Test failed\n");
    exit(1);
}

void __begin_test(const char* function_name, const char* file, int line) {
    printf("######### Running %s @ %s:%d\n", function_name, file, line);
}

#define BEGIN_TEST() __begin_test(__FUNCTION__, __FILE__, __LINE__)

template <typename T>
void debug_print_array(const char* format, int len, T* as) {
    printf("[");
    for (int i = 0; i < len; i++) {
        if (i != 0) printf(", ");
        printf(format, as[i]);
    }
    printf("]");
}

template <typename T>
void assert_arrays_match(const char* label, const char* format, int len, T* expected, T* got) {
    if (!arrays_match(len, expected, got)) {
        printf(">>>>>>> %s\n", label);
        printf("  Expecting = ");
        debug_print_array(format, len, expected);
        printf("\n");
        printf("  Got = ");
        debug_print_array(format, len, got);
        printf("\n");
        test_fail();
    }
}

template <typename T>
void assert_values_match(const char* label, const char* format, T expected, T got) {
    if (expected != got) {
        printf(">>>>>>> %s\n", label);
        printf("  Expecting = ");
        printf(format, expected);
        printf("\n");
        printf("  Got = ");
        printf(format, got);
        printf("\n");
        test_fail();
    }
}

void print_repeated(const char *str, int count) {
    for (int i = 0; i < count; i++) {
        printf("%s", str);
    }
}

template<typename SizeT, typename ElementT>
void __print_ndarray_aux(const char *format, bool first, bool last, SizeT* cursor, SizeT depth, NDArray<SizeT>* ndarray) {
    // A really lazy recursive implementation

    // Add left padding unless it's the first entry (since there would be "[[[" before it)
    if (!first) {
        print_repeated(" ", depth);
    }

    const SizeT dim = ndarray->shape[depth];
    if (depth + 1 == ndarray->ndims) {
        // Recursed down to the last dimension; print the values in a nice list
        printf("[");

        SizeT* indices = (SizeT*) __builtin_alloca(sizeof(SizeT) * ndarray->ndims);
        for (SizeT i = 0; i < dim; i++) {
            ndarray_util::set_indices_by_nth(ndarray->ndims, ndarray->shape, indices, *cursor);
            ElementT* pelement = (ElementT*) ndarray->get_pelement(indices);
            ElementT element = *pelement;

            if (i != 0) printf(", "); // List delimiter
            printf(format, element);
            printf("(@");
            debug_print_array("%d", ndarray->ndims, indices);
            printf(")");

            (*cursor)++;
        }
        printf("]");
    } else {
        printf("[");
        for (SizeT i = 0; i < ndarray->shape[depth]; i++) {
            __print_ndarray_aux<SizeT, ElementT>(
                format,
                i == 0, // first?
                i + 1 == dim, // last?
                cursor,
                depth + 1,
                ndarray
            );
        }
        printf("]");
    }

    // Add a newline unless it's the last entry (since there will be "]]]" after it)
    if (!last) {
        print_repeated("\n", depth);
    }
}

template<typename SizeT, typename ElementT>
void print_ndarray(const char *format, NDArray<SizeT>* ndarray) {
    if (ndarray->ndims == 0) {
        printf("<empty ndarray>");
    } else {
        SizeT cursor = 0;
        __print_ndarray_aux<SizeT, ElementT>(format, true, true, &cursor, 0, ndarray);
    }
    printf("\n");
}

void test_calc_size_from_shape_normal() {
    // Test shapes with normal values
    BEGIN_TEST();

    int32_t shape[4] = { 2, 3, 5, 7 };
    assert_values_match("size", "%d", 210, ndarray_util::calc_size_from_shape<int32_t>(4, shape));
}

void test_calc_size_from_shape_has_zero() {
    // Test shapes with 0 in them
    BEGIN_TEST();

    int32_t shape[4] = { 2, 0, 5, 7 };
    assert_values_match("size", "%d", 0, ndarray_util::calc_size_from_shape<int32_t>(4, shape));
}

void test_set_strides_by_shape() {
    // Test `set_strides_by_shape()`
    BEGIN_TEST();

    int32_t shape[4] = { 99, 3, 5, 7 };
    int32_t strides[4] = { 0 };
    ndarray_util::set_strides_by_shape((int32_t) sizeof(int32_t), 4, strides, shape);

    int32_t expected_strides[4] = {
        105 * sizeof(int32_t),
        35 * sizeof(int32_t),
        7 * sizeof(int32_t),
        1 * sizeof(int32_t)
    };
    assert_arrays_match("strides", "%u", 4u, expected_strides, strides);
}

void test_ndarray_indices_iter_normal() {
    // Test NDArrayIndicesIter normal behavior
    BEGIN_TEST();

    int32_t shape[3] = { 1, 2, 3 };
    int32_t indices[3] = { 0, 0, 0 };
    auto iter = NDArrayIndicesIter<int32_t> {
        .ndims = 3,
        .shape = shape,
        .indices = indices
    };

    assert_arrays_match("indices #0", "%u", 3u, iter.indices, (int32_t[3]) { 0, 0, 0 });
    iter.next();
    assert_arrays_match("indices #1", "%u", 3u, iter.indices, (int32_t[3]) { 0, 0, 1 });
    iter.next();
    assert_arrays_match("indices #2", "%u", 3u, iter.indices, (int32_t[3]) { 0, 0, 2 });
    iter.next();
    assert_arrays_match("indices #3", "%u", 3u, iter.indices, (int32_t[3]) { 0, 1, 0 });
    iter.next();
    assert_arrays_match("indices #4", "%u", 3u, iter.indices, (int32_t[3]) { 0, 1, 1 });
    iter.next();
    assert_arrays_match("indices #5", "%u", 3u, iter.indices, (int32_t[3]) { 0, 1, 2 });
    iter.next();
    assert_arrays_match("indices #6", "%u", 3u, iter.indices, (int32_t[3]) { 0, 0, 0 }); // Loops back
    iter.next();
    assert_arrays_match("indices #7", "%u", 3u, iter.indices, (int32_t[3]) { 0, 0, 1 });
}

void test_ndarray_fill_generic() {
    // Test ndarray fill_generic
    BEGIN_TEST();

    // Choose a type that's neither int32_t nor uint64_t (the candidates for SizeT) to spice it up.
    // Also make all the octets non-zero, to see if `memcpy` in `fill_generic` is working perfectly.
    uint16_t fill_value = 0xFACE;

    uint16_t in_data[6] = { 100, 101, 102, 103, 104, 105 }; // Fill `data` with values that differ from `fill_value`
    int32_t in_itemsize = sizeof(uint16_t);
    const int32_t in_ndims = 2;
    int32_t in_shape[in_ndims] = { 2, 3 };
    int32_t in_strides[in_ndims] = {};
    NDArray<int32_t> ndarray = {
        .data = (uint8_t*) in_data,
        .itemsize = in_itemsize,
        .ndims = in_ndims,
        .shape = in_shape,
        .strides = in_strides,
    };
    ndarray.set_strides_by_shape();
    ndarray.fill_generic((uint8_t*) &fill_value); // `fill_generic` here

    uint16_t expected_data[6] = { fill_value, fill_value, fill_value, fill_value, fill_value, fill_value };
    assert_arrays_match("data", "0x%hX", 6, expected_data, in_data);
}

void test_ndarray_set_to_eye() {
    // Test `set_to_eye` behavior (helper function to implement `np.eye()`)
    BEGIN_TEST();

    double in_data[9] = { 99.0, 99.0, 99.0, 99.0, 99.0, 99.0, 99.0, 99.0, 99.0 };
    int32_t in_itemsize = sizeof(double);
    const int32_t in_ndims = 2;
    int32_t in_shape[in_ndims] = { 3, 3 };
    int32_t in_strides[in_ndims] = {};
    NDArray<int32_t> ndarray = {
        .data = (uint8_t*) in_data,
        .itemsize = in_itemsize,
        .ndims = in_ndims,
        .shape = in_shape,
        .strides = in_strides,
    };
    ndarray.set_strides_by_shape();

    double zero = 0.0;
    double one = 1.0;
    ndarray.set_to_eye(1, (uint8_t*) &zero, (uint8_t*) &one);

    assert_values_match("in_data[0]", "%f", 0.0, in_data[0]);
    assert_values_match("in_data[1]", "%f", 1.0, in_data[1]);
    assert_values_match("in_data[2]", "%f", 0.0, in_data[2]);
    assert_values_match("in_data[3]", "%f", 0.0, in_data[3]);
    assert_values_match("in_data[4]", "%f", 0.0, in_data[4]);
    assert_values_match("in_data[5]", "%f", 1.0, in_data[5]);
    assert_values_match("in_data[6]", "%f", 0.0, in_data[6]);
    assert_values_match("in_data[7]", "%f", 0.0, in_data[7]);
    assert_values_match("in_data[8]", "%f", 0.0, in_data[8]);
}

void test_slice_1() {
    // Test `slice(5, None, None).indices(100) == slice(5, 100, 1)`
    BEGIN_TEST();

    UserSlice<int> user_slice = {
        .start_defined = 1,
        .start = 5,
        .stop_defined = 0,
        .step_defined = 0,
    };

    auto slice = user_slice.indices(100);
    assert_values_match("start", "%d", 5, slice.start);
    assert_values_match("stop", "%d", 100, slice.stop);
    assert_values_match("step", "%d", 1, slice.step);
}

void test_slice_2() {
    // Test `slice(400, 999, None).indices(100) == slice(100, 100, 1)`
    BEGIN_TEST();

    UserSlice<int> user_slice = {
        .start_defined = 1,
        .start = 400,
        .stop_defined = 0,
        .step_defined = 0,
    };

    auto slice = user_slice.indices(100);
    assert_values_match("start", "%d", 100, slice.start);
    assert_values_match("stop", "%d", 100, slice.stop);
    assert_values_match("step", "%d", 1, slice.step);
}

void test_slice_3() {
    // Test `slice(-10, -5, None).indices(100) == slice(90, 95, 1)`
    BEGIN_TEST();

    UserSlice<int> user_slice = {
        .start_defined = 1,
        .start = -10,
        .stop_defined = 1,
        .stop = -5,
        .step_defined = 0,
    };

    auto slice = user_slice.indices(100);
    assert_values_match("start", "%d", 90, slice.start);
    assert_values_match("stop", "%d", 95, slice.stop);
    assert_values_match("step", "%d", 1, slice.step);
}

void test_slice_4() {
    // Test `slice(None, None, -5).indices(100) == (99, -1, -5)`
    BEGIN_TEST();

    UserSlice<int> user_slice = {
        .start_defined = 0,
        .stop_defined = 0,
        .step_defined = 1,
        .step = -5
    };

    auto slice = user_slice.indices(100);
    assert_values_match("start", "%d", 99, slice.start);
    assert_values_match("stop", "%d", -1, slice.stop);
    assert_values_match("step", "%d", -5, slice.step);
}

void test_ndslice_1() {
    /*
    Reference Python code:
    ```python
    ndarray = np.arange(12, dtype=np.float64).reshape((3, 4));
    # array([[ 0.,  1.,  2.,  3.],
    #        [ 4.,  5.,  6.,  7.],
    #        [ 8.,  9., 10., 11.]])

    dst_ndarray = ndarray[-2:, 1::2]
    # array([[ 5.,  7.],
    #        [ 9., 11.]])

    assert dst_ndarray.shape == (2, 2)
    assert dst_ndarray.strides == (32, 16)
    assert dst_ndarray[0, 0] == 5.0
    assert dst_ndarray[0, 1] == 7.0
    assert dst_ndarray[1, 0] == 9.0
    assert dst_ndarray[1, 1] == 11.0
    ```
    */
    BEGIN_TEST();

    double in_data[12] = { 0.0, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0, 10.0, 11.0 };
    int32_t in_itemsize = sizeof(double);
    const int32_t in_ndims = 2;
    int32_t in_shape[in_ndims] = { 3, 4 };
    int32_t in_strides[in_ndims] = {};
    NDArray<int32_t> ndarray = {
        .data = (uint8_t*) in_data,
        .itemsize = in_itemsize,
        .ndims = in_ndims,
        .shape = in_shape,
        .strides = in_strides
    };
    ndarray.set_strides_by_shape();

    // Destination ndarray
    // As documented, ndims, shape, and strides must be allocated and determined by the caller.
    const int32_t dst_ndims = 2;
    int32_t dst_shape[dst_ndims] = {999, 999}; // Placeholder values
    int32_t dst_strides[dst_ndims] = {999, 999}; // Placeholder values
    NDArray<int32_t> dst_ndarray = {
        .data = nullptr,
        .ndims = dst_ndims,
        .shape = dst_shape,
        .strides = dst_strides
    };

    // Create the slice in `ndarray[-2::, 1::2]`
    UserSlice<int32_t> user_slice_1 = {
        .start_defined = 1,
        .start = -2,
        .stop_defined = 0,
        .step_defined = 0
    };

    UserSlice<int32_t> user_slice_2 = {
        .start_defined = 1,
        .start = 1,
        .stop_defined = 0,
        .step_defined = 1,
        .step = 2
    };

    const int32_t num_ndslices = 2;
    NDSlice ndslices[num_ndslices] = {
        { .type = INPUT_SLICE_TYPE_SLICE, .slice = (uint8_t*) &user_slice_1 },
        { .type = INPUT_SLICE_TYPE_SLICE, .slice = (uint8_t*) &user_slice_2 }
    };

    ndarray.slice(num_ndslices, ndslices, &dst_ndarray);

    int32_t expected_shape[dst_ndims] = { 2, 2 };
    int32_t expected_strides[dst_ndims] = { 32, 16 };
    assert_arrays_match("shape", "%d", dst_ndims, expected_shape, dst_ndarray.shape);
    assert_arrays_match("strides", "%d", dst_ndims, expected_strides, dst_ndarray.strides);

    assert_values_match("dst_ndarray[0, 0]", "%f", 5.0, *((double *) dst_ndarray.get_pelement((int32_t[dst_ndims]) { 0, 0 })));
    assert_values_match("dst_ndarray[0, 1]", "%f", 7.0, *((double *) dst_ndarray.get_pelement((int32_t[dst_ndims]) { 0, 1 })));
    assert_values_match("dst_ndarray[1, 0]", "%f", 9.0, *((double *) dst_ndarray.get_pelement((int32_t[dst_ndims]) { 1, 0 })));
    assert_values_match("dst_ndarray[1, 1]", "%f", 11.0, *((double *) dst_ndarray.get_pelement((int32_t[dst_ndims]) { 1, 1 })));
}

void test_ndslice_2() {
    /*
    Reference Python code:
    ```python
    ndarray = np.arange(12, dtype=np.float64).reshape((3, 4))
    # array([[ 0.,  1.,  2.,  3.],
    #        [ 4.,  5.,  6.,  7.],
    #        [ 8.,  9., 10., 11.]])

    dst_ndarray = ndarray[2, ::-2]
    # array([11.,  9.])

    assert dst_ndarray.shape == (2,)
    assert dst_ndarray.strides == (-16,)
    assert dst_ndarray[0] == 11.0
    assert dst_ndarray[1] == 9.0

    dst_ndarray[1] = 99.0       # If you write to `dst_ndarray`,
    assert ndarray[2, 1] == 99  # `ndarray` also updates!!
    ```
    */
    BEGIN_TEST();

    double in_data[12] = { 0.0, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0, 10.0, 11.0 };
    int32_t in_itemsize = sizeof(double);
    const int32_t in_ndims = 2;
    int32_t in_shape[in_ndims] = { 3, 4 };
    int32_t in_strides[in_ndims] = {};
    NDArray<int32_t> ndarray = {
        .data = (uint8_t*) in_data,
        .itemsize = in_itemsize,
        .ndims = in_ndims,
        .shape = in_shape,
        .strides = in_strides
    };
    ndarray.set_strides_by_shape();

    // Destination ndarray
    // As documented, ndims, shape, and strides must be allocated and determined by the caller.
    const int32_t dst_ndims = 1;
    int32_t dst_shape[dst_ndims] = {999}; // Placeholder values
    int32_t dst_strides[dst_ndims] = {999}; // Placeholder values
    NDArray<int32_t> dst_ndarray = {
        .data = nullptr,
        .ndims = dst_ndims,
        .shape = dst_shape,
        .strides = dst_strides
    };

    // Create the slice in `ndarray[2, ::-2]`
    int32_t user_slice_1 = 2;
    UserSlice<int32_t> user_slice_2 = {
        .start_defined = 0,
        .stop_defined = 0,
        .step_defined = 1,
        .step = -2
    };

    const int32_t num_ndslices = 2;
    NDSlice ndslices[num_ndslices] = {
        { .type = INPUT_SLICE_TYPE_INDEX, .slice = (uint8_t*) &user_slice_1 },
        { .type = INPUT_SLICE_TYPE_SLICE, .slice = (uint8_t*) &user_slice_2 }
    };

    ndarray.slice(num_ndslices, ndslices, &dst_ndarray);

    int32_t expected_shape[dst_ndims] = { 2 };
    int32_t expected_strides[dst_ndims] = { -16 };
    assert_arrays_match("shape", "%d", dst_ndims, expected_shape, dst_ndarray.shape);
    assert_arrays_match("strides", "%d", dst_ndims, expected_strides, dst_ndarray.strides);

    // [11.0, 9.0]
    assert_values_match("dst_ndarray[0]", "%f", 11.0, *((double *) dst_ndarray.get_pelement((int32_t[dst_ndims]) { 0 })));
    assert_values_match("dst_ndarray[1]", "%f", 9.0, *((double *) dst_ndarray.get_pelement((int32_t[dst_ndims]) { 1 })));
}

void test_can_broadcast_shape() {
    BEGIN_TEST();

    assert_values_match(
        "can_broadcast_shape_to([3], [1, 1, 1, 1, 3]) == true",
        "%d",
        true,
        ndarray_util::can_broadcast_shape_to(1, (int32_t[]) { 3 }, 5, (int32_t[]) { 1, 1, 1, 1, 3 })
    );
    assert_values_match(
        "can_broadcast_shape_to([3], [3, 1]) == false",
        "%d",
        false,
        ndarray_util::can_broadcast_shape_to(1, (int32_t[]) { 3 }, 2, (int32_t[]) { 3, 1 }));
    assert_values_match(
        "can_broadcast_shape_to([3], [3]) == true",
        "%d",
        true,
        ndarray_util::can_broadcast_shape_to(1, (int32_t[]) { 3 }, 1, (int32_t[]) { 3 }));
    assert_values_match(
        "can_broadcast_shape_to([1], [3]) == false",
        "%d",
        false,
        ndarray_util::can_broadcast_shape_to(1, (int32_t[]) { 1 }, 1, (int32_t[]) { 3 }));
    assert_values_match(
        "can_broadcast_shape_to([1], [1]) == true",
        "%d",
        true,
        ndarray_util::can_broadcast_shape_to(1, (int32_t[]) { 1 }, 1, (int32_t[]) { 1 }));
    assert_values_match(
        "can_broadcast_shape_to([256, 256, 3], [256, 1, 3]) == true",
        "%d",
        true,
        ndarray_util::can_broadcast_shape_to(3, (int32_t[]) { 256, 256, 3 }, 3, (int32_t[]) { 256, 1, 3 })
    );
    assert_values_match(
        "can_broadcast_shape_to([256, 256, 3], [3]) == true",
        "%d",
        true,
        ndarray_util::can_broadcast_shape_to(3, (int32_t[]) { 256, 256, 3 }, 1, (int32_t[]) { 3 })
    );
    assert_values_match(
        "can_broadcast_shape_to([256, 256, 3], [2]) == false",
        "%d",
        false,
        ndarray_util::can_broadcast_shape_to(3, (int32_t[]) { 256, 256, 3 }, 1, (int32_t[]) { 2 })
    );
    assert_values_match(
        "can_broadcast_shape_to([256, 256, 3], [1]) == true",
        "%d",
        true,
        ndarray_util::can_broadcast_shape_to(3, (int32_t[]) { 256, 256, 3 }, 1, (int32_t[]) { 1 })
    );

    // In cases when the shapes contain zero(es)
    assert_values_match(
        "can_broadcast_shape_to([0], [1]) == true",
        "%d",
        true,
        ndarray_util::can_broadcast_shape_to(1, (int32_t[]) { 0 }, 1, (int32_t[]) { 1 })
    );
    assert_values_match(
        "can_broadcast_shape_to([0], [2]) == false",
        "%d",
        false,
        ndarray_util::can_broadcast_shape_to(1, (int32_t[]) { 0 }, 1, (int32_t[]) { 2 })
    );
    assert_values_match(
        "can_broadcast_shape_to([0, 4, 0, 0], [1]) == true",
        "%d",
        true,
        ndarray_util::can_broadcast_shape_to(4, (int32_t[]) { 0, 4, 0, 0 }, 1, (int32_t[]) { 1 })
    );
    assert_values_match(
        "can_broadcast_shape_to([0, 4, 0, 0], [1, 1, 1, 1]) == true",
        "%d",
        true,
        ndarray_util::can_broadcast_shape_to(4, (int32_t[]) { 0, 4, 0, 0 }, 4, (int32_t[]) { 1, 1, 1, 1 })
    );
    assert_values_match(
        "can_broadcast_shape_to([0, 4, 0, 0], [1, 4, 1, 1]) == true",
        "%d",
        true,
        ndarray_util::can_broadcast_shape_to(4, (int32_t[]) { 0, 4, 0, 0 }, 4, (int32_t[]) { 1, 4, 1, 1 })
    );
    assert_values_match(
        "can_broadcast_shape_to([4, 3], [0, 3]) == false",
        "%d",
        false,
        ndarray_util::can_broadcast_shape_to(2, (int32_t[]) { 4, 3 }, 2, (int32_t[]) { 0, 3 })
    );
    assert_values_match(
        "can_broadcast_shape_to([4, 3], [0, 0]) == false",
        "%d",
        false,
        ndarray_util::can_broadcast_shape_to(2, (int32_t[]) { 4, 3 }, 2, (int32_t[]) { 0, 0 })
    );
}

void test_ndarray_broadcast_1() {
    /*
    # array = np.array([[19.9, 29.9, 39.9, 49.9]], dtype=np.float64)
    # >>> [[19.9 29.9 39.9 49.9]]
    #
    # array = np.broadcast_to(array, (2, 3, 4))
    # >>> [[[19.9 29.9 39.9 49.9]
    # >>>   [19.9 29.9 39.9 49.9]
    # >>>   [19.9 29.9 39.9 49.9]]
    # >>>  [[19.9 29.9 39.9 49.9]
    # >>>   [19.9 29.9 39.9 49.9]
    # >>>   [19.9 29.9 39.9 49.9]]]
    #
    # assert array.strides == (0, 0, 8)
    */
    BEGIN_TEST();

    double in_data[4] = { 19.9, 29.9, 39.9, 49.9 };
    const int32_t in_ndims = 2;
    int32_t in_shape[in_ndims] = {1, 4};
    int32_t in_strides[in_ndims] = {};
    NDArray<int32_t> ndarray = {
        .data = (uint8_t*) in_data,
        .itemsize = sizeof(double),
        .ndims = in_ndims,
        .shape = in_shape,
        .strides = in_strides
    };
    ndarray.set_strides_by_shape();

    const int32_t dst_ndims = 3;
    int32_t dst_shape[dst_ndims] = {2, 3, 4};
    int32_t dst_strides[dst_ndims] = {};
    NDArray<int32_t> dst_ndarray = {
        .ndims = dst_ndims,
        .shape = dst_shape,
        .strides = dst_strides
    };

    ndarray.broadcast_to(&dst_ndarray);

    assert_arrays_match("dst_ndarray->strides", "%d", dst_ndims, (int32_t[]) { 0, 0, 8 }, dst_ndarray.strides);

    assert_values_match("dst_ndarray[0, 0, 0]", "%f", 19.9, *((double*) dst_ndarray.get_pelement((int32_t[]) {0, 0, 0})));
    assert_values_match("dst_ndarray[0, 0, 1]", "%f", 29.9, *((double*) dst_ndarray.get_pelement((int32_t[]) {0, 0, 1})));
    assert_values_match("dst_ndarray[0, 0, 2]", "%f", 39.9, *((double*) dst_ndarray.get_pelement((int32_t[]) {0, 0, 2})));
    assert_values_match("dst_ndarray[0, 0, 3]", "%f", 49.9, *((double*) dst_ndarray.get_pelement((int32_t[]) {0, 0, 3})));
    assert_values_match("dst_ndarray[0, 1, 0]", "%f", 19.9, *((double*) dst_ndarray.get_pelement((int32_t[]) {0, 1, 0})));
    assert_values_match("dst_ndarray[0, 1, 1]", "%f", 29.9, *((double*) dst_ndarray.get_pelement((int32_t[]) {0, 1, 1})));
    assert_values_match("dst_ndarray[0, 1, 2]", "%f", 39.9, *((double*) dst_ndarray.get_pelement((int32_t[]) {0, 1, 2})));
    assert_values_match("dst_ndarray[0, 1, 3]", "%f", 49.9, *((double*) dst_ndarray.get_pelement((int32_t[]) {0, 1, 3})));
    assert_values_match("dst_ndarray[1, 2, 3]", "%f", 49.9, *((double*) dst_ndarray.get_pelement((int32_t[]) {1, 2, 3})));
}

void test_assign_with() {
    /*
    ```
    xs = np.array([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0], [7.0, 8.0, 9.0]], dtype=np.float64)
    ys = xs.shape
    ```
    */
}

int main() {
    test_calc_size_from_shape_normal();
    test_calc_size_from_shape_has_zero();
    test_set_strides_by_shape();
    test_ndarray_indices_iter_normal();
    test_ndarray_fill_generic();
    test_ndarray_set_to_eye();
    test_slice_1();
    test_slice_2();
    test_slice_3();
    test_slice_4();
    test_ndslice_1();
    test_ndslice_2();
    test_can_broadcast_shape();
    test_ndarray_broadcast_1();
    test_assign_with();
    return 0;
}
@ -0,0 +1,14 @@
#pragma once

// This is made toggleable since `irrt_test.cpp` itself would include
// headers that define the `int_t` family.
#ifndef IRRT_DONT_TYPEDEF_INTS
typedef _BitInt(8) int8_t;
typedef unsigned _BitInt(8) uint8_t;
typedef _BitInt(32) int32_t;
typedef unsigned _BitInt(32) uint32_t;
typedef _BitInt(64) int64_t;
typedef unsigned _BitInt(64) uint64_t;
#endif

typedef int32_t SliceIndex;
@ -0,0 +1,37 @@
#pragma once

#include "irrt_typedefs.hpp"

namespace {
    template <typename T>
    T max(T a, T b) {
        return a > b ? a : b;
    }

    template <typename T>
    T min(T a, T b) {
        return a > b ? b : a;
    }

    template <typename T>
    bool arrays_match(int len, T *as, T *bs) {
        for (int i = 0; i < len; i++) {
            if (as[i] != bs[i]) return false;
        }
        return true;
    }

    void irrt_panic() {
        // Crash the program for now.
        // TODO: Don't crash the program
        // ... or at least produce a good message when testing IRRT

        uint8_t* death = nullptr;
        *death = 0; // TODO: address 0 on hardware might be writable?
    }

    // TODO: Make this a macro and allow it to be toggled on/off (e.g., debug vs release)
    void irrt_assert(bool condition) {
        if (!condition) irrt_panic();
    }
}
File diff suppressed because it is too large
File diff suppressed because it is too large
@ -0,0 +1,310 @@
use crate::{
    symbol_resolver::SymbolValue,
    toplevel::DefinitionId,
    typecheck::{
        type_inferencer::PrimitiveStore,
        typedef::{
            into_var_map, FunSignature, FuncArg, Type, TypeEnum, TypeVar, TypeVarId, Unifier,
        },
    },
};

use indexmap::IndexMap;
use nac3parser::ast::StrRef;
use std::collections::HashMap;

pub struct ConcreteTypeStore {
    store: Vec<ConcreteTypeEnum>,
}

#[derive(Copy, Clone, Eq, PartialEq, Hash, Debug)]
pub struct ConcreteType(usize);

#[derive(Clone, Debug)]
pub struct ConcreteFuncArg {
    pub name: StrRef,
    pub ty: ConcreteType,
    pub default_value: Option<SymbolValue>,
}

#[derive(Clone, Debug)]
pub enum Primitive {
    Int32,
    Int64,
    UInt32,
    UInt64,
    Float,
    Bool,
    None,
    Range,
    Str,
    Exception,
}

#[derive(Debug)]
pub enum ConcreteTypeEnum {
    TPrimitive(Primitive),
    TTuple {
        ty: Vec<ConcreteType>,
    },
    TObj {
        obj_id: DefinitionId,
        fields: HashMap<StrRef, (ConcreteType, bool)>,
        params: IndexMap<TypeVarId, ConcreteType>,
    },
    TVirtual {
        ty: ConcreteType,
    },
    TFunc {
        args: Vec<ConcreteFuncArg>,
        ret: ConcreteType,
        vars: HashMap<TypeVarId, ConcreteType>,
    },
    TLiteral {
        values: Vec<SymbolValue>,
    },
}

impl ConcreteTypeStore {
    #[must_use]
    pub fn new() -> ConcreteTypeStore {
        ConcreteTypeStore {
            store: vec![
                ConcreteTypeEnum::TPrimitive(Primitive::Int32),
                ConcreteTypeEnum::TPrimitive(Primitive::Int64),
                ConcreteTypeEnum::TPrimitive(Primitive::Float),
                ConcreteTypeEnum::TPrimitive(Primitive::Bool),
                ConcreteTypeEnum::TPrimitive(Primitive::None),
                ConcreteTypeEnum::TPrimitive(Primitive::Range),
                ConcreteTypeEnum::TPrimitive(Primitive::Str),
                ConcreteTypeEnum::TPrimitive(Primitive::Exception),
                ConcreteTypeEnum::TPrimitive(Primitive::UInt32),
                ConcreteTypeEnum::TPrimitive(Primitive::UInt64),
            ],
        }
    }

    #[must_use]
    pub fn get(&self, cty: ConcreteType) -> &ConcreteTypeEnum {
        &self.store[cty.0]
    }

    pub fn from_signature(
        &mut self,
        unifier: &mut Unifier,
        primitives: &PrimitiveStore,
        signature: &FunSignature,
        cache: &mut HashMap<Type, Option<ConcreteType>>,
    ) -> ConcreteTypeEnum {
        ConcreteTypeEnum::TFunc {
            args: signature
                .args
                .iter()
                .map(|arg| ConcreteFuncArg {
                    name: arg.name,
                    ty: self.from_unifier_type(unifier, primitives, arg.ty, cache),
                    default_value: arg.default_value.clone(),
                })
                .collect(),
            ret: self.from_unifier_type(unifier, primitives, signature.ret, cache),
            vars: signature
                .vars
                .iter()
                .map(|(id, ty)| (*id, self.from_unifier_type(unifier, primitives, *ty, cache)))
                .collect(),
        }
    }

    pub fn from_unifier_type(
        &mut self,
        unifier: &mut Unifier,
        primitives: &PrimitiveStore,
        ty: Type,
        cache: &mut HashMap<Type, Option<ConcreteType>>,
    ) -> ConcreteType {
        let ty = unifier.get_representative(ty);
        if unifier.unioned(ty, primitives.int32) {
            ConcreteType(0)
        } else if unifier.unioned(ty, primitives.int64) {
            ConcreteType(1)
        } else if unifier.unioned(ty, primitives.float) {
            ConcreteType(2)
        } else if unifier.unioned(ty, primitives.bool) {
            ConcreteType(3)
        } else if unifier.unioned(ty, primitives.none) {
            ConcreteType(4)
        } else if unifier.unioned(ty, primitives.range) {
            ConcreteType(5)
        } else if unifier.unioned(ty, primitives.str) {
            ConcreteType(6)
        } else if unifier.unioned(ty, primitives.exception) {
            ConcreteType(7)
        } else if unifier.unioned(ty, primitives.uint32) {
            ConcreteType(8)
        } else if unifier.unioned(ty, primitives.uint64) {
            ConcreteType(9)
        } else if let Some(cty) = cache.get(&ty) {
            if let Some(cty) = cty {
                *cty
            } else {
                let index = self.store.len();
                // placeholder
                self.store.push(ConcreteTypeEnum::TPrimitive(Primitive::Int32));
                let result = ConcreteType(index);
                cache.insert(ty, Some(result));
                result
            }
        } else {
            cache.insert(ty, None);
            let ty_enum = unifier.get_ty(ty);
            let result = match &*ty_enum {
                TypeEnum::TTuple { ty } => ConcreteTypeEnum::TTuple {
                    ty: ty
                        .iter()
                        .map(|t| self.from_unifier_type(unifier, primitives, *t, cache))
                        .collect(),
                },
                TypeEnum::TObj { obj_id, fields, params } => ConcreteTypeEnum::TObj {
                    obj_id: *obj_id,
                    fields: fields
                        .iter()
                        .filter_map(|(name, ty)| {
                            // We should not have type vars here, but partially instantiated
                            // class methods can still carry uninstantiated type vars, so
                            // filter out all the methods; this does not affect codegen.
                            if let TypeEnum::TFunc(..) = &*unifier.get_ty(ty.0) {
                                None
                            } else {
                                Some((
                                    *name,
                                    (
                                        self.from_unifier_type(unifier, primitives, ty.0, cache),
                                        ty.1,
                                    ),
                                ))
                            }
                        })
                        .collect(),
                    params: params
                        .iter()
                        .map(|(id, ty)| {
                            (*id, self.from_unifier_type(unifier, primitives, *ty, cache))
                        })
                        .collect(),
                },
                TypeEnum::TVirtual { ty } => ConcreteTypeEnum::TVirtual {
                    ty: self.from_unifier_type(unifier, primitives, *ty, cache),
                },
                TypeEnum::TFunc(signature) => {
                    self.from_signature(unifier, primitives, signature, cache)
                }
                TypeEnum::TLiteral { values, .. } => {
                    ConcreteTypeEnum::TLiteral { values: values.clone() }
                }
                _ => unreachable!("{:?}", ty_enum.get_type_name()),
            };
            let index = if let Some(ConcreteType(index)) = cache.get(&ty).unwrap() {
                self.store[*index] = result;
                *index
            } else {
                self.store.push(result);
                self.store.len() - 1
            };
            cache.insert(ty, Some(ConcreteType(index)));
            ConcreteType(index)
        }
    }

    pub fn to_unifier_type(
        &self,
        unifier: &mut Unifier,
        primitives: &PrimitiveStore,
        cty: ConcreteType,
        cache: &mut HashMap<ConcreteType, Option<Type>>,
    ) -> Type {
        if let Some(ty) = cache.get_mut(&cty) {
            return if let Some(ty) = ty {
                *ty
            } else {
                *ty = Some(unifier.get_dummy_var().ty);
                ty.unwrap()
            };
        }
        cache.insert(cty, None);
        let result = match &self.store[cty.0] {
            ConcreteTypeEnum::TPrimitive(primitive) => {
                let ty = match primitive {
                    Primitive::Int32 => primitives.int32,
                    Primitive::Int64 => primitives.int64,
                    Primitive::UInt32 => primitives.uint32,
                    Primitive::UInt64 => primitives.uint64,
                    Primitive::Float => primitives.float,
                    Primitive::Bool => primitives.bool,
                    Primitive::None => primitives.none,
                    Primitive::Range => primitives.range,
                    Primitive::Str => primitives.str,
                    Primitive::Exception => primitives.exception,
                };
                *cache.get_mut(&cty).unwrap() = Some(ty);
                return ty;
            }
            ConcreteTypeEnum::TTuple { ty } => TypeEnum::TTuple {
                ty: ty
                    .iter()
                    .map(|cty| self.to_unifier_type(unifier, primitives, *cty, cache))
                    .collect(),
            },
            ConcreteTypeEnum::TVirtual { ty } => {
                TypeEnum::TVirtual { ty: self.to_unifier_type(unifier, primitives, *ty, cache) }
            }
            ConcreteTypeEnum::TObj { obj_id, fields, params } => TypeEnum::TObj {
                obj_id: *obj_id,
                fields: fields
                    .iter()
                    .map(|(name, cty)| {
                        (*name, (self.to_unifier_type(unifier, primitives, cty.0, cache), cty.1))
                    })
                    .collect::<HashMap<_, _>>(),
                params: into_var_map(params.iter().map(|(&id, cty)| {
                    let ty = self.to_unifier_type(unifier, primitives, *cty, cache);
                    TypeVar { id, ty }
                })),
            },
            ConcreteTypeEnum::TFunc { args, ret, vars } => TypeEnum::TFunc(FunSignature {
                args: args
                    .iter()
                    .map(|arg| FuncArg {
                        name: arg.name,
                        ty: self.to_unifier_type(unifier, primitives, arg.ty, cache),
                        default_value: arg.default_value.clone(),
                    })
                    .collect(),
                ret: self.to_unifier_type(unifier, primitives, *ret, cache),
                vars: into_var_map(vars.iter().map(|(&id, cty)| {
                    let ty = self.to_unifier_type(unifier, primitives, *cty, cache);
                    TypeVar { id, ty }
                })),
            }),
            ConcreteTypeEnum::TLiteral { values, .. } => {
                TypeEnum::TLiteral { values: values.clone(), loc: None }
            }
        };
        let result = unifier.add_ty(result);
        if let Some(ty) = cache.get(&cty).unwrap() {
            unifier.unify(*ty, result).unwrap();
        }
        cache.insert(cty, Some(result));
        result
    }

    pub fn add_cty(&mut self, cty: ConcreteTypeEnum) -> ConcreteType {
        self.store.push(cty);
        ConcreteType(self.store.len() - 1)
    }
}

impl Default for ConcreteTypeStore {
    fn default() -> Self {
        Self::new()
    }
}
File diff suppressed because it is too large
@@ -0,0 +1,132 @@
use inkwell::attributes::{Attribute, AttributeLoc};
use inkwell::values::{BasicValueEnum, CallSiteValue, FloatValue, IntValue};
use itertools::Either;

use crate::codegen::CodeGenContext;

/// Macro to generate an extern function wrapper.
/// Both the function return type and the parameter types are `FloatValue`.
///
/// Arguments:
/// * `unary/binary`: Whether the extern function takes one (unary) or two (binary) operands
/// * `$fn_name:ident`: The identifier of the Rust function to be generated
/// * `$extern_fn:literal`: Name of the underlying extern function
///
/// Optional Arguments:
/// * `$(,$attributes:literal)*`: Attributes linked with the extern function.
///   The default attributes are "mustprogress", "nofree", "nounwind", "willreturn", and
///   "writeonly"; these are used unless other attributes are specified.
/// * `$(,$args:ident)*`: Operands of the extern function.
///   The data type of these operands will be set to `FloatValue`.
macro_rules! generate_extern_fn {
    ("unary", $fn_name:ident, $extern_fn:literal) => {
        generate_extern_fn!($fn_name, $extern_fn, arg, "mustprogress", "nofree", "nounwind", "willreturn", "writeonly");
    };
    ("unary", $fn_name:ident, $extern_fn:literal $(,$attributes:literal)*) => {
        generate_extern_fn!($fn_name, $extern_fn, arg $(,$attributes)*);
    };
    ("binary", $fn_name:ident, $extern_fn:literal) => {
        generate_extern_fn!($fn_name, $extern_fn, arg1, arg2, "mustprogress", "nofree", "nounwind", "willreturn", "writeonly");
    };
    ("binary", $fn_name:ident, $extern_fn:literal $(,$attributes:literal)*) => {
        generate_extern_fn!($fn_name, $extern_fn, arg1, arg2 $(,$attributes)*);
    };
    ($fn_name:ident, $extern_fn:literal $(,$args:ident)* $(,$attributes:literal)*) => {
        #[doc = concat!("Invokes the [`", stringify!($extern_fn), "`](https://en.cppreference.com/w/c/numeric/math/", stringify!($extern_fn), ") function." )]
        pub fn $fn_name<'ctx>(
            ctx: &CodeGenContext<'ctx, '_>
            $(,$args: FloatValue<'ctx>)*,
            name: Option<&str>,
        ) -> FloatValue<'ctx> {
            const FN_NAME: &str = $extern_fn;

            let llvm_f64 = ctx.ctx.f64_type();
            $(debug_assert_eq!($args.get_type(), llvm_f64);)*

            let extern_fn = ctx.module.get_function(FN_NAME).unwrap_or_else(|| {
                let fn_type = llvm_f64.fn_type(&[$($args.get_type().into()),*], false);
                let func = ctx.module.add_function(FN_NAME, fn_type, None);
                for attr in [$($attributes),*] {
                    func.add_attribute(
                        AttributeLoc::Function,
                        ctx.ctx.create_enum_attribute(Attribute::get_named_enum_kind_id(attr), 0),
                    );
                }
                func
            });

            ctx.builder
                .build_call(extern_fn, &[$($args.into()),*], name.unwrap_or_default())
                .map(CallSiteValue::try_as_basic_value)
                .map(|v| v.map_left(BasicValueEnum::into_float_value))
                .map(Either::unwrap_left)
                .unwrap()
        }
    };
}

generate_extern_fn!("unary", call_tan, "tan");
generate_extern_fn!("unary", call_asin, "asin");
generate_extern_fn!("unary", call_acos, "acos");
generate_extern_fn!("unary", call_atan, "atan");
generate_extern_fn!("unary", call_sinh, "sinh");
generate_extern_fn!("unary", call_cosh, "cosh");
generate_extern_fn!("unary", call_tanh, "tanh");
generate_extern_fn!("unary", call_asinh, "asinh");
generate_extern_fn!("unary", call_acosh, "acosh");
generate_extern_fn!("unary", call_atanh, "atanh");
generate_extern_fn!("unary", call_expm1, "expm1");
generate_extern_fn!(
    "unary",
    call_cbrt,
    "cbrt",
    "mustprogress",
    "nofree",
    "nosync",
    "nounwind",
    "readonly",
    "willreturn"
);
generate_extern_fn!("unary", call_erf, "erf", "nounwind");
generate_extern_fn!("unary", call_erfc, "erfc", "nounwind");
generate_extern_fn!("unary", call_j1, "j1", "nounwind");

generate_extern_fn!("binary", call_atan2, "atan2");
generate_extern_fn!("binary", call_hypot, "hypot", "nounwind");
generate_extern_fn!("binary", call_nextafter, "nextafter", "nounwind");

/// Invokes the [`ldexp`](https://en.cppreference.com/w/c/numeric/math/ldexp) function.
pub fn call_ldexp<'ctx>(
    ctx: &CodeGenContext<'ctx, '_>,
    arg: FloatValue<'ctx>,
    exp: IntValue<'ctx>,
    name: Option<&str>,
) -> FloatValue<'ctx> {
    const FN_NAME: &str = "ldexp";

    let llvm_f64 = ctx.ctx.f64_type();
    let llvm_i32 = ctx.ctx.i32_type();
    debug_assert_eq!(arg.get_type(), llvm_f64);
    debug_assert_eq!(exp.get_type(), llvm_i32);

    let extern_fn = ctx.module.get_function(FN_NAME).unwrap_or_else(|| {
        let fn_type = llvm_f64.fn_type(&[llvm_f64.into(), llvm_i32.into()], false);
        let func = ctx.module.add_function(FN_NAME, fn_type, None);
        for attr in ["mustprogress", "nofree", "nounwind", "willreturn"] {
            func.add_attribute(
                AttributeLoc::Function,
                ctx.ctx.create_enum_attribute(Attribute::get_named_enum_kind_id(attr), 0),
            );
        }

        func
    });

    ctx.builder
        .build_call(extern_fn, &[arg.into(), exp.into()], name.unwrap_or_default())
        .map(CallSiteValue::try_as_basic_value)
        .map(|v| v.map_left(BasicValueEnum::into_float_value))
        .map(Either::unwrap_left)
        .unwrap()
}
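The `call_ldexp` wrapper above emits an LLVM call into libm's `ldexp`, whose defined result is `x * 2^exp`. As a reference point for those semantics (not the generated code path), a sketch using plain `f64` arithmetic; note the naive product can differ from libm `ldexp` at the extremes of the exponent range:

```rust
// Reference semantics of ldexp: ldexp(x, exp) == x * 2^exp.
// This sketch is exact for moderate exponents; libm's ldexp additionally
// handles subnormals and overflow edge cases more carefully.
fn ldexp(x: f64, exp: i32) -> f64 {
    x * f64::powi(2.0, exp)
}

fn main() {
    assert_eq!(ldexp(1.5, 3), 12.0); // 1.5 * 2^3
    assert_eq!(ldexp(8.0, -2), 2.0); // 8.0 * 2^-2
    println!("ok");
}
```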
@@ -0,0 +1,257 @@
use crate::{
    codegen::{bool_to_i1, bool_to_i8, classes::ArraySliceValue, expr::*, stmt::*, CodeGenContext},
    symbol_resolver::ValueEnum,
    toplevel::{DefinitionId, TopLevelDef},
    typecheck::typedef::{FunSignature, Type},
};
use inkwell::{
    context::Context,
    types::{BasicTypeEnum, IntType},
    values::{BasicValueEnum, IntValue, PointerValue},
};
use nac3parser::ast::{Expr, Stmt, StrRef};

pub trait CodeGenerator {
    /// Returns the module name for the code generator.
    fn get_name(&self) -> &str;

    fn get_size_type<'ctx>(&self, ctx: &'ctx Context) -> IntType<'ctx>;

    /// Generates a function call and returns the function's return value.
    /// - obj: Optional object for a method call.
    /// - fun: Function signature and definition ID.
    /// - params: Function parameters. Note that this does not include the object even if the
    ///   function is a class method.
    fn gen_call<'ctx>(
        &mut self,
        ctx: &mut CodeGenContext<'ctx, '_>,
        obj: Option<(Type, ValueEnum<'ctx>)>,
        fun: (&FunSignature, DefinitionId),
        params: Vec<(Option<StrRef>, ValueEnum<'ctx>)>,
    ) -> Result<Option<BasicValueEnum<'ctx>>, String>
    where
        Self: Sized,
    {
        gen_call(self, ctx, obj, fun, params)
    }

    /// Generates an object constructor and returns the constructed object.
    /// - signature: Function signature of the constructor.
    /// - def: Class definition for the constructor's class.
    /// - params: Function parameters.
    fn gen_constructor<'ctx>(
        &mut self,
        ctx: &mut CodeGenContext<'ctx, '_>,
        signature: &FunSignature,
        def: &TopLevelDef,
        params: Vec<(Option<StrRef>, ValueEnum<'ctx>)>,
    ) -> Result<BasicValueEnum<'ctx>, String>
    where
        Self: Sized,
    {
        gen_constructor(self, ctx, signature, def, params)
    }

    /// Generates a function instance.
    /// - obj: Optional object for a method call.
    /// - fun: Function signature, definition ID and the substitution key.
    /// - params: Function parameters. Note that this does not include the object even if the
    ///   function is a class method.
    ///
    /// Note that this function should check whether the function has already been generated in
    /// another thread (due to a possible race condition); see the default implementation for an
    /// example.
    fn gen_func_instance<'ctx>(
        &mut self,
        ctx: &mut CodeGenContext<'ctx, '_>,
        obj: Option<(Type, ValueEnum<'ctx>)>,
        fun: (&FunSignature, &mut TopLevelDef, String),
        id: usize,
    ) -> Result<String, String> {
        gen_func_instance(ctx, &obj, fun, id)
    }

    /// Generates the code for an expression.
    fn gen_expr<'ctx>(
        &mut self,
        ctx: &mut CodeGenContext<'ctx, '_>,
        expr: &Expr<Option<Type>>,
    ) -> Result<Option<ValueEnum<'ctx>>, String>
    where
        Self: Sized,
    {
        gen_expr(self, ctx, expr)
    }

    /// Allocates memory for a variable and returns a pointer to it.
    /// The default implementation places the allocations at the start of the function.
    fn gen_var_alloc<'ctx>(
        &mut self,
        ctx: &mut CodeGenContext<'ctx, '_>,
        ty: BasicTypeEnum<'ctx>,
        name: Option<&str>,
    ) -> Result<PointerValue<'ctx>, String> {
        gen_var(ctx, ty, name)
    }

    /// Allocates memory for an array variable and returns a pointer to it.
    /// The default implementation places the allocations at the start of the function.
    fn gen_array_var_alloc<'ctx>(
        &mut self,
        ctx: &mut CodeGenContext<'ctx, '_>,
        ty: BasicTypeEnum<'ctx>,
        size: IntValue<'ctx>,
        name: Option<&'ctx str>,
    ) -> Result<ArraySliceValue<'ctx>, String> {
        gen_array_var(ctx, ty, size, name)
    }

    /// Returns a pointer to the target of the expression.
    fn gen_store_target<'ctx>(
        &mut self,
        ctx: &mut CodeGenContext<'ctx, '_>,
        pattern: &Expr<Option<Type>>,
        name: Option<&str>,
    ) -> Result<Option<PointerValue<'ctx>>, String>
    where
        Self: Sized,
    {
        gen_store_target(self, ctx, pattern, name)
    }

    /// Generates code for an assignment expression.
    fn gen_assign<'ctx>(
        &mut self,
        ctx: &mut CodeGenContext<'ctx, '_>,
        target: &Expr<Option<Type>>,
        value: ValueEnum<'ctx>,
    ) -> Result<(), String>
    where
        Self: Sized,
    {
        gen_assign(self, ctx, target, value)
    }

    /// Generates code for a while statement.
    /// Returns true if the while loop must early-return.
    fn gen_while(
        &mut self,
        ctx: &mut CodeGenContext<'_, '_>,
        stmt: &Stmt<Option<Type>>,
    ) -> Result<(), String>
    where
        Self: Sized,
    {
        gen_while(self, ctx, stmt)
    }

    /// Generates code for a for statement.
    /// Returns true if the for loop must early-return.
    fn gen_for(
        &mut self,
        ctx: &mut CodeGenContext<'_, '_>,
        stmt: &Stmt<Option<Type>>,
    ) -> Result<(), String>
    where
        Self: Sized,
    {
        gen_for(self, ctx, stmt)
    }

    /// Generates code for an if statement.
    /// Returns true if the statement must early-return.
    fn gen_if(
        &mut self,
        ctx: &mut CodeGenContext<'_, '_>,
        stmt: &Stmt<Option<Type>>,
    ) -> Result<(), String>
    where
        Self: Sized,
    {
        gen_if(self, ctx, stmt)
    }

    fn gen_with(
        &mut self,
        ctx: &mut CodeGenContext<'_, '_>,
        stmt: &Stmt<Option<Type>>,
    ) -> Result<(), String>
    where
        Self: Sized,
    {
        gen_with(self, ctx, stmt)
    }

    /// Generates code for a statement.
    ///
    /// Returns true if the statement must early-return.
    fn gen_stmt(
        &mut self,
        ctx: &mut CodeGenContext<'_, '_>,
        stmt: &Stmt<Option<Type>>,
    ) -> Result<(), String>
    where
        Self: Sized,
    {
        gen_stmt(self, ctx, stmt)
    }

    /// Generates code for a block statement.
    fn gen_block<'a, I: Iterator<Item = &'a Stmt<Option<Type>>>>(
        &mut self,
        ctx: &mut CodeGenContext<'_, '_>,
        stmts: I,
    ) -> Result<(), String>
    where
        Self: Sized,
    {
        gen_block(self, ctx, stmts)
    }

    /// See [`bool_to_i1`].
    fn bool_to_i1<'ctx>(
        &self,
        ctx: &CodeGenContext<'ctx, '_>,
        bool_value: IntValue<'ctx>,
    ) -> IntValue<'ctx> {
        bool_to_i1(&ctx.builder, bool_value)
    }

    /// See [`bool_to_i8`].
    fn bool_to_i8<'ctx>(
        &self,
        ctx: &CodeGenContext<'ctx, '_>,
        bool_value: IntValue<'ctx>,
    ) -> IntValue<'ctx> {
        bool_to_i8(&ctx.builder, ctx.ctx, bool_value)
    }
}

pub struct DefaultCodeGenerator {
    name: String,
    size_t: u32,
}

impl DefaultCodeGenerator {
    #[must_use]
    pub fn new(name: String, size_t: u32) -> DefaultCodeGenerator {
        assert!(matches!(size_t, 32 | 64));
        DefaultCodeGenerator { name, size_t }
    }
}

impl CodeGenerator for DefaultCodeGenerator {
    /// Returns the name for this [`CodeGenerator`].
    fn get_name(&self) -> &str {
        &self.name
    }

    /// Returns an LLVM integer type representing `size_t`.
    fn get_size_type<'ctx>(&self, ctx: &'ctx Context) -> IntType<'ctx> {
        // it should be unsigned, but we don't really need unsigned and this could save us from
        // having to do a bit cast...
        if self.size_t == 32 {
            ctx.i32_type()
        } else {
            ctx.i64_type()
        }
    }
}
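The `CodeGenerator` trait above follows a delegation pattern: each default method forwards to a free function that takes the generator as a parameter, so a custom backend overrides only the hooks it cares about while the free functions keep calling back through the trait. `DefaultCodeGenerator` overrides nothing but `get_name`/`get_size_type`. A stripped-down, self-contained sketch of the pattern (names here are illustrative, not nac3's):

```rust
// Free function holding the real logic; it receives the generator so it can
// call back into (possibly overridden) trait methods.
fn gen_stmt_impl<G: Gen + ?Sized>(g: &mut G, stmt: &str) -> String {
    format!("[{}] stmt: {}", g.name(), stmt)
}

trait Gen {
    fn name(&self) -> &str;

    // Default implementation just delegates; an implementor can override
    // this one method without reimplementing anything else.
    fn gen_stmt(&mut self, stmt: &str) -> String
    where
        Self: Sized,
    {
        gen_stmt_impl(self, stmt)
    }
}

struct DefaultGen;
impl Gen for DefaultGen {
    fn name(&self) -> &str {
        "default"
    }
}

fn main() {
    let mut g = DefaultGen;
    assert_eq!(g.gen_stmt("x = 1"), "[default] stmt: x = 1");
    println!("ok");
}
```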
@@ -0,0 +1,991 @@
use crate::{typecheck::typedef::Type, util::SizeVariant};

mod test;

use super::{
    classes::{
        ArrayLikeIndexer, ArrayLikeValue, ArraySliceValue, ListValue, NDArrayValue, NpArrayType,
        NpArrayValue, TypedArrayLikeAdapter, UntypedArrayLikeAccessor,
    },
    llvm_intrinsics, CodeGenContext, CodeGenerator,
};
use crate::codegen::classes::TypedArrayLikeAccessor;
use crate::codegen::stmt::gen_for_callback_incrementing;
use inkwell::{
    attributes::{Attribute, AttributeLoc},
    context::Context,
    memory_buffer::MemoryBuffer,
    module::Module,
    types::{BasicType, BasicTypeEnum, FunctionType, IntType, PointerType},
    values::{BasicValueEnum, CallSiteValue, FloatValue, FunctionValue, IntValue},
    AddressSpace, IntPredicate,
};
use itertools::Either;
use nac3parser::ast::Expr;

#[must_use]
pub fn load_irrt(ctx: &Context) -> Module {
    let bitcode_buf = MemoryBuffer::create_from_memory_range(
        include_bytes!(concat!(env!("OUT_DIR"), "/irrt.bc")),
        "irrt_bitcode_buffer",
    );
    let irrt_mod = Module::parse_bitcode_from_buffer(&bitcode_buf, ctx).unwrap();
    let inline_attr = Attribute::get_named_enum_kind_id("alwaysinline");
    for symbol in &[
        "__nac3_int_exp_int32_t",
        "__nac3_int_exp_int64_t",
        "__nac3_range_slice_len",
        "__nac3_slice_index_bound",
    ] {
        let function = irrt_mod.get_function(symbol).unwrap();
        function.add_attribute(AttributeLoc::Function, ctx.create_enum_attribute(inline_attr, 0));
    }
    irrt_mod
}

// Repeated-squaring method, adapted from the GNU Scientific Library:
// https://git.savannah.gnu.org/cgit/gsl.git/tree/sys/pow_int.c
pub fn integer_power<'ctx, G: CodeGenerator + ?Sized>(
    generator: &mut G,
    ctx: &mut CodeGenContext<'ctx, '_>,
    base: IntValue<'ctx>,
    exp: IntValue<'ctx>,
    signed: bool,
) -> IntValue<'ctx> {
    let symbol = match (base.get_type().get_bit_width(), exp.get_type().get_bit_width(), signed) {
        (32, 32, true) => "__nac3_int_exp_int32_t",
        (64, 64, true) => "__nac3_int_exp_int64_t",
        (32, 32, false) => "__nac3_int_exp_uint32_t",
        (64, 64, false) => "__nac3_int_exp_uint64_t",
        _ => unreachable!(),
    };
    let base_type = base.get_type();
    let pow_fun = ctx.module.get_function(symbol).unwrap_or_else(|| {
        let fn_type = base_type.fn_type(&[base_type.into(), base_type.into()], false);
        ctx.module.add_function(symbol, fn_type, None)
    });
    // Throw an exception when exp < 0.
    let ge_zero = ctx
        .builder
        .build_int_compare(
            IntPredicate::SGE,
            exp,
            exp.get_type().const_zero(),
            "assert_int_pow_ge_0",
        )
        .unwrap();
    ctx.make_assert(
        generator,
        ge_zero,
        "0:ValueError",
        "integer power must be positive or zero",
        [None, None, None],
        ctx.current_loc,
    );
    ctx.builder
        .build_call(pow_fun, &[base.into(), exp.into()], "call_int_pow")
        .map(CallSiteValue::try_as_basic_value)
        .map(|v| v.map_left(BasicValueEnum::into_int_value))
        .map(Either::unwrap_left)
        .unwrap()
}
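The `__nac3_int_exp_*` symbols called above implement exponentiation by repeated squaring (adapted in IRRT from the GNU Scientific Library), using O(log exp) multiplications instead of O(exp). A self-contained Rust sketch of that algorithm's semantics (the actual IRRT symbol is compiled C++, not this code):

```rust
// Exponentiation by squaring: multiply in the result whenever the current
// low bit of the exponent is set, and square the base each round.
fn int_exp(mut base: i64, mut exp: u32) -> i64 {
    let mut result: i64 = 1;
    while exp > 0 {
        if exp & 1 == 1 {
            result = result.wrapping_mul(base);
        }
        base = base.wrapping_mul(base);
        exp >>= 1;
    }
    result
}

fn main() {
    assert_eq!(int_exp(3, 5), 243);
    assert_eq!(int_exp(2, 10), 1024);
    assert_eq!(int_exp(7, 0), 1);
    assert_eq!(int_exp(-2, 3), -8);
    println!("ok");
}
```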
|
||||||
|
|
||||||
|
pub fn calculate_len_for_slice_range<'ctx, G: CodeGenerator + ?Sized>(
|
||||||
|
generator: &mut G,
|
||||||
|
ctx: &mut CodeGenContext<'ctx, '_>,
|
||||||
|
start: IntValue<'ctx>,
|
||||||
|
end: IntValue<'ctx>,
|
||||||
|
step: IntValue<'ctx>,
|
||||||
|
) -> IntValue<'ctx> {
|
||||||
|
const SYMBOL: &str = "__nac3_range_slice_len";
|
||||||
|
let len_func = ctx.module.get_function(SYMBOL).unwrap_or_else(|| {
|
||||||
|
let i32_t = ctx.ctx.i32_type();
|
||||||
|
let fn_t = i32_t.fn_type(&[i32_t.into(), i32_t.into(), i32_t.into()], false);
|
||||||
|
ctx.module.add_function(SYMBOL, fn_t, None)
|
||||||
|
});
|
||||||
|
|
||||||
|
// assert step != 0, throw exception if not
|
||||||
|
let not_zero = ctx
|
||||||
|
.builder
|
||||||
|
.build_int_compare(IntPredicate::NE, step, step.get_type().const_zero(), "range_step_ne")
|
||||||
|
.unwrap();
|
||||||
|
ctx.make_assert(
|
||||||
|
generator,
|
||||||
|
not_zero,
|
||||||
|
"0:ValueError",
|
||||||
|
"step must not be zero",
|
||||||
|
[None, None, None],
|
||||||
|
ctx.current_loc,
|
||||||
|
);
|
||||||
|
ctx.builder
|
||||||
|
.build_call(len_func, &[start.into(), end.into(), step.into()], "calc_len")
|
||||||
|
.map(CallSiteValue::try_as_basic_value)
|
||||||
|
.map(|v| v.map_left(BasicValueEnum::into_int_value))
|
||||||
|
.map(Either::unwrap_left)
|
||||||
|
.unwrap()
|
||||||
|
}
|
||||||
|
|
||||||
|
/// NOTE: the output value of the end index of this function should be compared ***inclusively***,
|
||||||
|
/// because python allows `a[2::-1]`, whose semantic is `[a[2], a[1], a[0]]`, which is equivalent to
|
||||||
|
/// NO numeric slice in python.
|
||||||
|
///
|
||||||
|
/// equivalent code:
|
||||||
|
/// ```pseudo_code
|
||||||
|
/// match (start, end, step):
|
||||||
|
/// case (s, e, None | Some(step)) if step > 0:
|
||||||
|
/// return (
|
||||||
|
/// match s:
|
||||||
|
/// case None:
|
||||||
|
/// 0
|
||||||
|
/// case Some(s):
|
||||||
|
/// handle_in_bound(s)
|
||||||
|
/// ,match e:
|
||||||
|
/// case None:
|
||||||
|
/// length - 1
|
||||||
|
/// case Some(e):
|
||||||
|
/// handle_in_bound(e) - 1
|
||||||
|
/// ,step == None ? 1 : step
|
||||||
|
/// )
|
||||||
|
/// case (s, e, Some(step)) if step < 0:
|
||||||
|
/// return (
|
||||||
|
/// match s:
|
||||||
|
/// case None:
|
||||||
|
/// length - 1
|
||||||
|
/// case Some(s):
|
||||||
|
/// s = handle_in_bound(s)
|
||||||
|
/// if s == length:
|
||||||
|
/// s - 1
|
||||||
|
/// else:
|
||||||
|
/// s
|
||||||
|
/// ,match e:
|
||||||
|
/// case None:
|
||||||
|
/// 0
|
||||||
|
/// case Some(e):
|
||||||
|
/// handle_in_bound(e) + 1
|
||||||
|
/// ,step
|
||||||
|
/// )
|
||||||
|
/// ```
|
||||||
|
pub fn handle_slice_indices<'ctx, G: CodeGenerator>(
    start: &Option<Box<Expr<Option<Type>>>>,
    end: &Option<Box<Expr<Option<Type>>>>,
    step: &Option<Box<Expr<Option<Type>>>>,
    ctx: &mut CodeGenContext<'ctx, '_>,
    generator: &mut G,
    length: IntValue<'ctx>,
) -> Result<Option<(IntValue<'ctx>, IntValue<'ctx>, IntValue<'ctx>)>, String> {
    let int32 = ctx.ctx.i32_type();
    let zero = int32.const_zero();
    let one = int32.const_int(1, false);
    let length = ctx.builder.build_int_truncate_or_bit_cast(length, int32, "leni32").unwrap();
    Ok(Some(match (start, end, step) {
        (s, e, None) => (
            if let Some(s) = s.as_ref() {
                match handle_slice_index_bound(s, ctx, generator, length)? {
                    Some(v) => v,
                    None => return Ok(None),
                }
            } else {
                int32.const_zero()
            },
            {
                let e = if let Some(s) = e.as_ref() {
                    match handle_slice_index_bound(s, ctx, generator, length)? {
                        Some(v) => v,
                        None => return Ok(None),
                    }
                } else {
                    length
                };
                ctx.builder.build_int_sub(e, one, "final_end").unwrap()
            },
            one,
        ),
        (s, e, Some(step)) => {
            let step = if let Some(v) = generator.gen_expr(ctx, step)? {
                v.to_basic_value_enum(ctx, generator, ctx.primitives.int32)?.into_int_value()
            } else {
                return Ok(None);
            };
            // Assert that step != 0, throwing an exception otherwise.
            let not_zero = ctx
                .builder
                .build_int_compare(
                    IntPredicate::NE,
                    step,
                    step.get_type().const_zero(),
                    "range_step_ne",
                )
                .unwrap();
            ctx.make_assert(
                generator,
                not_zero,
                "0:ValueError",
                "slice step cannot be zero",
                [None, None, None],
                ctx.current_loc,
            );
            let len_id = ctx.builder.build_int_sub(length, one, "lenmin1").unwrap();
            let neg = ctx
                .builder
                .build_int_compare(IntPredicate::SLT, step, zero, "step_is_neg")
                .unwrap();
            (
                match s {
                    Some(s) => {
                        let Some(s) = handle_slice_index_bound(s, ctx, generator, length)? else {
                            return Ok(None);
                        };
                        ctx.builder
                            .build_select(
                                ctx.builder
                                    .build_and(
                                        ctx.builder
                                            .build_int_compare(
                                                IntPredicate::EQ,
                                                s,
                                                length,
                                                "s_eq_len",
                                            )
                                            .unwrap(),
                                        neg,
                                        "should_minus_one",
                                    )
                                    .unwrap(),
                                ctx.builder.build_int_sub(s, one, "s_min").unwrap(),
                                s,
                                "final_start",
                            )
                            .map(BasicValueEnum::into_int_value)
                            .unwrap()
                    }
                    None => ctx
                        .builder
                        .build_select(neg, len_id, zero, "stt")
                        .map(BasicValueEnum::into_int_value)
                        .unwrap(),
                },
                match e {
                    Some(e) => {
                        let Some(e) = handle_slice_index_bound(e, ctx, generator, length)? else {
                            return Ok(None);
                        };
                        ctx.builder
                            .build_select(
                                neg,
                                ctx.builder.build_int_add(e, one, "end_add_one").unwrap(),
                                ctx.builder.build_int_sub(e, one, "end_sub_one").unwrap(),
                                "final_end",
                            )
                            .map(BasicValueEnum::into_int_value)
                            .unwrap()
                    }
                    None => ctx
                        .builder
                        .build_select(neg, zero, len_id, "end")
                        .map(BasicValueEnum::into_int_value)
                        .unwrap(),
                },
                step,
            )
        }
    }))
}

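The normalization implemented above (with an *inclusive* end index, as the doc comment notes) can be sketched in plain Rust. This is an illustrative model, not the codegen itself; `clamp_index` is a hypothetical stand-in for what `handle_slice_index_bound` computes at runtime:

```rust
// Sketch of Python-style slice normalization with an *inclusive* end index.
// `clamp_index` models handle_slice_index_bound: wrap a negative index once,
// then clamp into [0, length] (hypothetical helper, for illustration only).
fn clamp_index(i: i32, length: i32) -> i32 {
    let i = if i < 0 { i + length } else { i };
    i.clamp(0, length)
}

/// Returns `(start, inclusive_end, step)` for a slice over `length` elements.
fn normalize_slice(
    start: Option<i32>,
    end: Option<i32>,
    step: Option<i32>,
    length: i32,
) -> (i32, i32, i32) {
    let step = step.unwrap_or(1);
    assert_ne!(step, 0, "slice step cannot be zero");
    if step > 0 {
        (
            start.map_or(0, |s| clamp_index(s, length)),
            end.map_or(length - 1, |e| clamp_index(e, length) - 1),
            step,
        )
    } else {
        (
            start.map_or(length - 1, |s| {
                let s = clamp_index(s, length);
                if s == length { s - 1 } else { s }
            }),
            end.map_or(0, |e| clamp_index(e, length) + 1),
            step,
        )
    }
}

fn main() {
    // a[2::-1] over a length-3 list visits indices 2, 1, 0.
    assert_eq!(normalize_slice(Some(2), None, Some(-1), 3), (2, 0, -1));
    // a[1:10] over a length-3 list visits indices 1, 2.
    assert_eq!(normalize_slice(Some(1), Some(10), None, 3), (1, 2, 1));
}
```

The inclusive end makes `a[2::-1]` representable as `(2, 0, -1)`, which no exclusive end bound can express without going below index 0.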
/// This function allows out-of-range indices, since Python
/// allows them in slices (`a = [1, 2, 3]; a[1:10] == [2, 3]`).
pub fn handle_slice_index_bound<'ctx, G: CodeGenerator>(
    i: &Expr<Option<Type>>,
    ctx: &mut CodeGenContext<'ctx, '_>,
    generator: &mut G,
    length: IntValue<'ctx>,
) -> Result<Option<IntValue<'ctx>>, String> {
    const SYMBOL: &str = "__nac3_slice_index_bound";
    let func = ctx.module.get_function(SYMBOL).unwrap_or_else(|| {
        let i32_t = ctx.ctx.i32_type();
        let fn_t = i32_t.fn_type(&[i32_t.into(), i32_t.into()], false);
        ctx.module.add_function(SYMBOL, fn_t, None)
    });

    let i = if let Some(v) = generator.gen_expr(ctx, i)? {
        v.to_basic_value_enum(ctx, generator, i.custom.unwrap())?
    } else {
        return Ok(None);
    };
    Ok(Some(
        ctx.builder
            .build_call(func, &[i.into(), length.into()], "bounded_ind")
            .map(CallSiteValue::try_as_basic_value)
            .map(|v| v.map_left(BasicValueEnum::into_int_value))
            .map(Either::unwrap_left)
            .unwrap(),
    ))
}

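A minimal model of what the `__nac3_slice_index_bound` runtime symbol is expected to compute, inferred from how its callers use the result (this is an assumption for illustration, not the IRRT implementation):

```rust
// Hypothetical model of __nac3_slice_index_bound: wrap a negative index
// once, then clamp into [0, length] so out-of-range slice bounds are
// tolerated, matching Python's `a[1:10]` behavior on short lists.
fn slice_index_bound(i: i32, length: i32) -> i32 {
    let i = if i < 0 { i + length } else { i };
    i.clamp(0, length)
}

fn main() {
    assert_eq!(slice_index_bound(-1, 3), 2); // a[-1] refers to index 2
    assert_eq!(slice_index_bound(10, 3), 3); // clamped to length
    assert_eq!(slice_index_bound(-9, 3), 0); // clamped to 0
}
```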
/// This function handles `end` **inclusively**.
/// The tuples `dest_idx` and `src_idx` are ordered as `(start, end, step)`.
/// Negative indices must be handled before entering this function.
pub fn list_slice_assignment<'ctx, G: CodeGenerator + ?Sized>(
    generator: &mut G,
    ctx: &mut CodeGenContext<'ctx, '_>,
    ty: BasicTypeEnum<'ctx>,
    dest_arr: ListValue<'ctx>,
    dest_idx: (IntValue<'ctx>, IntValue<'ctx>, IntValue<'ctx>),
    src_arr: ListValue<'ctx>,
    src_idx: (IntValue<'ctx>, IntValue<'ctx>, IntValue<'ctx>),
) {
    let size_ty = generator.get_size_type(ctx.ctx);
    let int8_ptr = ctx.ctx.i8_type().ptr_type(AddressSpace::default());
    let int32 = ctx.ctx.i32_type();
    let (fun_symbol, elem_ptr_type) = ("__nac3_list_slice_assign_var_size", int8_ptr);
    let slice_assign_fun = {
        let ty_vec = vec![
            int32.into(),         // dest start idx
            int32.into(),         // dest end idx
            int32.into(),         // dest step
            elem_ptr_type.into(), // dest arr ptr
            int32.into(),         // dest arr len
            int32.into(),         // src start idx
            int32.into(),         // src end idx
            int32.into(),         // src step
            elem_ptr_type.into(), // src arr ptr
            int32.into(),         // src arr len
            int32.into(),         // size
        ];
        ctx.module.get_function(fun_symbol).unwrap_or_else(|| {
            let fn_t = int32.fn_type(ty_vec.as_slice(), false);
            ctx.module.add_function(fun_symbol, fn_t, None)
        })
    };

    let zero = int32.const_zero();
    let one = int32.const_int(1, false);
    let dest_arr_ptr = dest_arr.data().base_ptr(ctx, generator);
    let dest_arr_ptr =
        ctx.builder.build_pointer_cast(dest_arr_ptr, elem_ptr_type, "dest_arr_ptr_cast").unwrap();
    let dest_len = dest_arr.load_size(ctx, Some("dest.len"));
    let dest_len =
        ctx.builder.build_int_truncate_or_bit_cast(dest_len, int32, "destlen32").unwrap();
    let src_arr_ptr = src_arr.data().base_ptr(ctx, generator);
    let src_arr_ptr =
        ctx.builder.build_pointer_cast(src_arr_ptr, elem_ptr_type, "src_arr_ptr_cast").unwrap();
    let src_len = src_arr.load_size(ctx, Some("src.len"));
    let src_len = ctx.builder.build_int_truncate_or_bit_cast(src_len, int32, "srclen32").unwrap();

    // Indices are assumed to already be in-bounds and non-negative here.
    // Assert that if dest.step == 1 then len(src) <= len(dest), and otherwise
    // len(src) == len(dest); throw an exception if the condition does not hold.
    let src_end = ctx
        .builder
        .build_select(
            ctx.builder.build_int_compare(IntPredicate::SLT, src_idx.2, zero, "is_neg").unwrap(),
            ctx.builder.build_int_sub(src_idx.1, one, "e_min_one").unwrap(),
            ctx.builder.build_int_add(src_idx.1, one, "e_add_one").unwrap(),
            "final_e",
        )
        .map(BasicValueEnum::into_int_value)
        .unwrap();
    let dest_end = ctx
        .builder
        .build_select(
            ctx.builder.build_int_compare(IntPredicate::SLT, dest_idx.2, zero, "is_neg").unwrap(),
            ctx.builder.build_int_sub(dest_idx.1, one, "e_min_one").unwrap(),
            ctx.builder.build_int_add(dest_idx.1, one, "e_add_one").unwrap(),
            "final_e",
        )
        .map(BasicValueEnum::into_int_value)
        .unwrap();
    let src_slice_len =
        calculate_len_for_slice_range(generator, ctx, src_idx.0, src_end, src_idx.2);
    let dest_slice_len =
        calculate_len_for_slice_range(generator, ctx, dest_idx.0, dest_end, dest_idx.2);
    let src_eq_dest = ctx
        .builder
        .build_int_compare(IntPredicate::EQ, src_slice_len, dest_slice_len, "slice_src_eq_dest")
        .unwrap();
    let src_slt_dest = ctx
        .builder
        .build_int_compare(IntPredicate::SLT, src_slice_len, dest_slice_len, "slice_src_slt_dest")
        .unwrap();
    let dest_step_eq_one = ctx
        .builder
        .build_int_compare(
            IntPredicate::EQ,
            dest_idx.2,
            dest_idx.2.get_type().const_int(1, false),
            "slice_dest_step_eq_one",
        )
        .unwrap();
    let cond_1 = ctx.builder.build_and(dest_step_eq_one, src_slt_dest, "slice_cond_1").unwrap();
    let cond = ctx.builder.build_or(src_eq_dest, cond_1, "slice_cond").unwrap();
    ctx.make_assert(
        generator,
        cond,
        "0:ValueError",
        "attempt to assign sequence of size {0} to slice of size {1} with step size {2}",
        [Some(src_slice_len), Some(dest_slice_len), Some(dest_idx.2)],
        ctx.current_loc,
    );

    let new_len = {
        let args = vec![
            dest_idx.0.into(),   // dest start idx
            dest_idx.1.into(),   // dest end idx
            dest_idx.2.into(),   // dest step
            dest_arr_ptr.into(), // dest arr ptr
            dest_len.into(),     // dest arr len
            src_idx.0.into(),    // src start idx
            src_idx.1.into(),    // src end idx
            src_idx.2.into(),    // src step
            src_arr_ptr.into(),  // src arr ptr
            src_len.into(),      // src arr len
            {
                let s = match ty {
                    BasicTypeEnum::FloatType(t) => t.size_of(),
                    BasicTypeEnum::IntType(t) => t.size_of(),
                    BasicTypeEnum::PointerType(t) => t.size_of(),
                    BasicTypeEnum::StructType(t) => t.size_of().unwrap(),
                    _ => unreachable!(),
                };
                ctx.builder.build_int_truncate_or_bit_cast(s, int32, "size").unwrap()
            }
            .into(),
        ];
        ctx.builder
            .build_call(slice_assign_fun, args.as_slice(), "slice_assign")
            .map(CallSiteValue::try_as_basic_value)
            .map(|v| v.map_left(BasicValueEnum::into_int_value))
            .map(Either::unwrap_left)
            .unwrap()
    };
    // Update the list length if the assignment changed it.
    let need_update =
        ctx.builder.build_int_compare(IntPredicate::NE, new_len, dest_len, "need_update").unwrap();
    let current = ctx.builder.get_insert_block().unwrap().get_parent().unwrap();
    let update_bb = ctx.ctx.append_basic_block(current, "update");
    let cont_bb = ctx.ctx.append_basic_block(current, "cont");
    ctx.builder.build_conditional_branch(need_update, update_bb, cont_bb).unwrap();
    ctx.builder.position_at_end(update_bb);
    let new_len = ctx.builder.build_int_z_extend_or_bit_cast(new_len, size_ty, "new_len").unwrap();
    dest_arr.store_size(ctx, generator, new_len);
    ctx.builder.build_unconditional_branch(cont_bb).unwrap();
    ctx.builder.position_at_end(cont_bb);
}

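The size-compatibility condition asserted above (`src_eq_dest || (dest_step_eq_one && src_slt_dest)`) can be stated as a tiny predicate; a sketch of that rule in isolation:

```rust
// Sketch of the compatibility rule list_slice_assignment asserts: slice
// lengths must match exactly, except that a unit-step destination may
// also receive a *shorter* sequence (shrinking the list). Growing is
// rejected by this runtime check as written.
fn slice_assign_ok(src_len: i32, dest_len: i32, dest_step: i32) -> bool {
    src_len == dest_len || (dest_step == 1 && src_len < dest_len)
}

fn main() {
    assert!(slice_assign_ok(2, 3, 1));  // a[0:3] = [x, y]: unit step, shrink
    assert!(slice_assign_ok(2, 2, 2));  // extended slice, sizes match
    assert!(!slice_assign_ok(3, 2, 2)); // extended slice, size mismatch
}
```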
/// Generates a call to `isinf` in IR. Returns an `i1` representing the result.
pub fn call_isinf<'ctx, G: CodeGenerator + ?Sized>(
    generator: &mut G,
    ctx: &CodeGenContext<'ctx, '_>,
    v: FloatValue<'ctx>,
) -> IntValue<'ctx> {
    let intrinsic_fn = ctx.module.get_function("__nac3_isinf").unwrap_or_else(|| {
        let fn_type = ctx.ctx.i32_type().fn_type(&[ctx.ctx.f64_type().into()], false);
        ctx.module.add_function("__nac3_isinf", fn_type, None)
    });

    let ret = ctx
        .builder
        .build_call(intrinsic_fn, &[v.into()], "isinf")
        .map(CallSiteValue::try_as_basic_value)
        .map(|v| v.map_left(BasicValueEnum::into_int_value))
        .map(Either::unwrap_left)
        .unwrap();

    generator.bool_to_i1(ctx, ret)
}

/// Generates a call to `isnan` in IR. Returns an `i1` representing the result.
pub fn call_isnan<'ctx, G: CodeGenerator + ?Sized>(
    generator: &mut G,
    ctx: &CodeGenContext<'ctx, '_>,
    v: FloatValue<'ctx>,
) -> IntValue<'ctx> {
    let intrinsic_fn = ctx.module.get_function("__nac3_isnan").unwrap_or_else(|| {
        let fn_type = ctx.ctx.i32_type().fn_type(&[ctx.ctx.f64_type().into()], false);
        ctx.module.add_function("__nac3_isnan", fn_type, None)
    });

    let ret = ctx
        .builder
        .build_call(intrinsic_fn, &[v.into()], "isnan")
        .map(CallSiteValue::try_as_basic_value)
        .map(|v| v.map_left(BasicValueEnum::into_int_value))
        .map(Either::unwrap_left)
        .unwrap();

    generator.bool_to_i1(ctx, ret)
}

/// Generates a call to `gamma` in IR. Returns an `f64` representing the result.
pub fn call_gamma<'ctx>(ctx: &CodeGenContext<'ctx, '_>, v: FloatValue<'ctx>) -> FloatValue<'ctx> {
    let llvm_f64 = ctx.ctx.f64_type();

    let intrinsic_fn = ctx.module.get_function("__nac3_gamma").unwrap_or_else(|| {
        let fn_type = llvm_f64.fn_type(&[llvm_f64.into()], false);
        ctx.module.add_function("__nac3_gamma", fn_type, None)
    });

    ctx.builder
        .build_call(intrinsic_fn, &[v.into()], "gamma")
        .map(CallSiteValue::try_as_basic_value)
        .map(|v| v.map_left(BasicValueEnum::into_float_value))
        .map(Either::unwrap_left)
        .unwrap()
}

/// Generates a call to `gammaln` in IR. Returns an `f64` representing the result.
pub fn call_gammaln<'ctx>(ctx: &CodeGenContext<'ctx, '_>, v: FloatValue<'ctx>) -> FloatValue<'ctx> {
    let llvm_f64 = ctx.ctx.f64_type();

    let intrinsic_fn = ctx.module.get_function("__nac3_gammaln").unwrap_or_else(|| {
        let fn_type = llvm_f64.fn_type(&[llvm_f64.into()], false);
        ctx.module.add_function("__nac3_gammaln", fn_type, None)
    });

    ctx.builder
        .build_call(intrinsic_fn, &[v.into()], "gammaln")
        .map(CallSiteValue::try_as_basic_value)
        .map(|v| v.map_left(BasicValueEnum::into_float_value))
        .map(Either::unwrap_left)
        .unwrap()
}

/// Generates a call to `j0` in IR. Returns an `f64` representing the result.
pub fn call_j0<'ctx>(ctx: &CodeGenContext<'ctx, '_>, v: FloatValue<'ctx>) -> FloatValue<'ctx> {
    let llvm_f64 = ctx.ctx.f64_type();

    let intrinsic_fn = ctx.module.get_function("__nac3_j0").unwrap_or_else(|| {
        let fn_type = llvm_f64.fn_type(&[llvm_f64.into()], false);
        ctx.module.add_function("__nac3_j0", fn_type, None)
    });

    ctx.builder
        .build_call(intrinsic_fn, &[v.into()], "j0")
        .map(CallSiteValue::try_as_basic_value)
        .map(|v| v.map_left(BasicValueEnum::into_float_value))
        .map(Either::unwrap_left)
        .unwrap()
}

/// Generates a call to `__nac3_ndarray_calc_size`. Returns an [`IntValue`] representing the
/// calculated total size.
///
/// * `dims` - An [`ArrayLikeIndexer`] containing the size of each dimension.
/// * `(begin, end)` - The range of dimension indices (`end` exclusive) to compute the size
/// over; [`None`] defaults to the first and one-past-the-last dimension respectively.
pub fn call_ndarray_calc_size<'ctx, G, Dims>(
    generator: &G,
    ctx: &CodeGenContext<'ctx, '_>,
    dims: &Dims,
    (begin, end): (Option<IntValue<'ctx>>, Option<IntValue<'ctx>>),
) -> IntValue<'ctx>
where
    G: CodeGenerator + ?Sized,
    Dims: ArrayLikeIndexer<'ctx>,
{
    let llvm_usize = generator.get_size_type(ctx.ctx);
    let llvm_pusize = llvm_usize.ptr_type(AddressSpace::default());

    let ndarray_calc_size_fn_name = match llvm_usize.get_bit_width() {
        32 => "__nac3_ndarray_calc_size",
        64 => "__nac3_ndarray_calc_size64",
        bw => unreachable!("Unsupported size type bit width: {}", bw),
    };
    let ndarray_calc_size_fn_t = llvm_usize.fn_type(
        &[llvm_pusize.into(), llvm_usize.into(), llvm_usize.into(), llvm_usize.into()],
        false,
    );
    let ndarray_calc_size_fn =
        ctx.module.get_function(ndarray_calc_size_fn_name).unwrap_or_else(|| {
            ctx.module.add_function(ndarray_calc_size_fn_name, ndarray_calc_size_fn_t, None)
        });

    let begin = begin.unwrap_or_else(|| llvm_usize.const_zero());
    let end = end.unwrap_or_else(|| dims.size(ctx, generator));
    ctx.builder
        .build_call(
            ndarray_calc_size_fn,
            &[
                dims.base_ptr(ctx, generator).into(),
                dims.size(ctx, generator).into(),
                begin.into(),
                end.into(),
            ],
            "",
        )
        .map(CallSiteValue::try_as_basic_value)
        .map(|v| v.map_left(BasicValueEnum::into_int_value))
        .map(Either::unwrap_left)
        .unwrap()
}

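What `__nac3_ndarray_calc_size` computes can be sketched in a few lines (an assumption inferred from the call site: the product of the dimension sizes over `[begin, end)`):

```rust
// Sketch of __nac3_ndarray_calc_size: the element count is the product
// of the dimension sizes in the half-open range [begin, end).
fn ndarray_calc_size(dims: &[u64], begin: usize, end: usize) -> u64 {
    dims[begin..end].iter().product()
}

fn main() {
    assert_eq!(ndarray_calc_size(&[2, 3, 4], 0, 3), 24); // total element count
    assert_eq!(ndarray_calc_size(&[2, 3, 4], 1, 3), 12); // a sub-array's size
}
```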
/// Generates a call to `__nac3_ndarray_calc_nd_indices`. Returns a [`TypedArrayLikeAdapter`]
/// containing the `i32` indices of the flattened index.
///
/// * `index` - The flattened index to compute the multidimensional index for.
/// * `ndarray` - LLVM pointer to the `NDArray`. This value must be the LLVM representation of an
/// `NDArray`.
pub fn call_ndarray_calc_nd_indices<'ctx, G: CodeGenerator + ?Sized>(
    generator: &G,
    ctx: &mut CodeGenContext<'ctx, '_>,
    index: IntValue<'ctx>,
    ndarray: NDArrayValue<'ctx>,
) -> TypedArrayLikeAdapter<'ctx, IntValue<'ctx>> {
    let llvm_void = ctx.ctx.void_type();
    let llvm_i32 = ctx.ctx.i32_type();
    let llvm_usize = generator.get_size_type(ctx.ctx);
    let llvm_pi32 = llvm_i32.ptr_type(AddressSpace::default());
    let llvm_pusize = llvm_usize.ptr_type(AddressSpace::default());

    let ndarray_calc_nd_indices_fn_name = match llvm_usize.get_bit_width() {
        32 => "__nac3_ndarray_calc_nd_indices",
        64 => "__nac3_ndarray_calc_nd_indices64",
        bw => unreachable!("Unsupported size type bit width: {}", bw),
    };
    let ndarray_calc_nd_indices_fn =
        ctx.module.get_function(ndarray_calc_nd_indices_fn_name).unwrap_or_else(|| {
            let fn_type = llvm_void.fn_type(
                &[llvm_usize.into(), llvm_pusize.into(), llvm_usize.into(), llvm_pi32.into()],
                false,
            );

            ctx.module.add_function(ndarray_calc_nd_indices_fn_name, fn_type, None)
        });

    let ndarray_num_dims = ndarray.load_ndims(ctx);
    let ndarray_dims = ndarray.dim_sizes();

    let indices = ctx.builder.build_array_alloca(llvm_i32, ndarray_num_dims, "").unwrap();

    ctx.builder
        .build_call(
            ndarray_calc_nd_indices_fn,
            &[
                index.into(),
                ndarray_dims.base_ptr(ctx, generator).into(),
                ndarray_num_dims.into(),
                indices.into(),
            ],
            "",
        )
        .unwrap();

    TypedArrayLikeAdapter::from(
        ArraySliceValue::from_ptr_val(indices, ndarray_num_dims, None),
        Box::new(|_, v| v.into_int_value()),
        Box::new(|_, v| v.into()),
    )
}

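The decomposition performed by the runtime function can be sketched as follows (assuming row-major / C order, which is how the `i32` per-dimension indices are conventionally derived from a flat index):

```rust
// Sketch of __nac3_ndarray_calc_nd_indices (row-major assumption):
// peel off one index per dimension, starting from the innermost one.
fn calc_nd_indices(mut index: u64, dims: &[u64]) -> Vec<u64> {
    let mut out = vec![0; dims.len()];
    for i in (0..dims.len()).rev() {
        out[i] = index % dims[i];
        index /= dims[i];
    }
    out
}

fn main() {
    // For shape (2, 3), flat index 4 maps to (1, 1).
    assert_eq!(calc_nd_indices(4, &[2, 3]), vec![1, 1]);
}
```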
fn call_ndarray_flatten_index_impl<'ctx, G, Indices>(
    generator: &G,
    ctx: &CodeGenContext<'ctx, '_>,
    ndarray: NDArrayValue<'ctx>,
    indices: &Indices,
) -> IntValue<'ctx>
where
    G: CodeGenerator + ?Sized,
    Indices: ArrayLikeIndexer<'ctx>,
{
    let llvm_i32 = ctx.ctx.i32_type();
    let llvm_usize = generator.get_size_type(ctx.ctx);

    let llvm_pi32 = llvm_i32.ptr_type(AddressSpace::default());
    let llvm_pusize = llvm_usize.ptr_type(AddressSpace::default());

    debug_assert_eq!(
        IntType::try_from(indices.element_type(ctx, generator))
            .map(IntType::get_bit_width)
            .unwrap_or_default(),
        llvm_i32.get_bit_width(),
        "Expected i32 value for argument `indices` to `call_ndarray_flatten_index_impl`"
    );
    debug_assert_eq!(
        indices.size(ctx, generator).get_type().get_bit_width(),
        llvm_usize.get_bit_width(),
        "Expected usize integer value for argument `indices_size` to `call_ndarray_flatten_index_impl`"
    );

    let ndarray_flatten_index_fn_name = match llvm_usize.get_bit_width() {
        32 => "__nac3_ndarray_flatten_index",
        64 => "__nac3_ndarray_flatten_index64",
        bw => unreachable!("Unsupported size type bit width: {}", bw),
    };
    let ndarray_flatten_index_fn =
        ctx.module.get_function(ndarray_flatten_index_fn_name).unwrap_or_else(|| {
            let fn_type = llvm_usize.fn_type(
                &[llvm_pusize.into(), llvm_usize.into(), llvm_pi32.into(), llvm_usize.into()],
                false,
            );

            ctx.module.add_function(ndarray_flatten_index_fn_name, fn_type, None)
        });

    let ndarray_num_dims = ndarray.load_ndims(ctx);
    let ndarray_dims = ndarray.dim_sizes();

    ctx.builder
        .build_call(
            ndarray_flatten_index_fn,
            &[
                ndarray_dims.base_ptr(ctx, generator).into(),
                ndarray_num_dims.into(),
                indices.base_ptr(ctx, generator).into(),
                indices.size(ctx, generator).into(),
            ],
            "",
        )
        .map(CallSiteValue::try_as_basic_value)
        .map(|v| v.map_left(BasicValueEnum::into_int_value))
        .map(Either::unwrap_left)
        .unwrap()
}

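The flattening is the inverse of the per-dimension decomposition; a sketch of the arithmetic (assuming row-major order):

```rust
// Sketch of __nac3_ndarray_flatten_index (row-major assumption):
// fold the per-dimension indices into one flat offset, Horner-style.
fn flatten_index(dims: &[u64], indices: &[u64]) -> u64 {
    indices.iter().zip(dims).fold(0, |acc, (&idx, &dim)| acc * dim + idx)
}

fn main() {
    // For shape (2, 3), the nd index (1, 1) flattens back to 4.
    assert_eq!(flatten_index(&[2, 3], &[1, 1]), 4);
}
```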
/// Generates a call to `__nac3_ndarray_flatten_index`. Returns the flattened index for the
/// multidimensional index.
///
/// * `ndarray` - LLVM pointer to the `NDArray`. This value must be the LLVM representation of an
/// `NDArray`.
/// * `indices` - The multidimensional index to compute the flattened index for.
pub fn call_ndarray_flatten_index<'ctx, G, Index>(
    generator: &mut G,
    ctx: &mut CodeGenContext<'ctx, '_>,
    ndarray: NDArrayValue<'ctx>,
    indices: &Index,
) -> IntValue<'ctx>
where
    G: CodeGenerator + ?Sized,
    Index: ArrayLikeIndexer<'ctx>,
{
    call_ndarray_flatten_index_impl(generator, ctx, ndarray, indices)
}

/// Generates a call to `__nac3_ndarray_calc_broadcast`. Returns a [`TypedArrayLikeAdapter`]
/// containing the size of each dimension of the resultant `ndarray`.
pub fn call_ndarray_calc_broadcast<'ctx, G: CodeGenerator + ?Sized>(
    generator: &mut G,
    ctx: &mut CodeGenContext<'ctx, '_>,
    lhs: NDArrayValue<'ctx>,
    rhs: NDArrayValue<'ctx>,
) -> TypedArrayLikeAdapter<'ctx, IntValue<'ctx>> {
    let llvm_usize = generator.get_size_type(ctx.ctx);
    let llvm_pusize = llvm_usize.ptr_type(AddressSpace::default());

    let ndarray_calc_broadcast_fn_name = match llvm_usize.get_bit_width() {
        32 => "__nac3_ndarray_calc_broadcast",
        64 => "__nac3_ndarray_calc_broadcast64",
        bw => unreachable!("Unsupported size type bit width: {}", bw),
    };
    let ndarray_calc_broadcast_fn =
        ctx.module.get_function(ndarray_calc_broadcast_fn_name).unwrap_or_else(|| {
            let fn_type = llvm_usize.fn_type(
                &[
                    llvm_pusize.into(),
                    llvm_usize.into(),
                    llvm_pusize.into(),
                    llvm_usize.into(),
                    llvm_pusize.into(),
                ],
                false,
            );

            ctx.module.add_function(ndarray_calc_broadcast_fn_name, fn_type, None)
        });

    let lhs_ndims = lhs.load_ndims(ctx);
    let rhs_ndims = rhs.load_ndims(ctx);
    let min_ndims = llvm_intrinsics::call_int_umin(ctx, lhs_ndims, rhs_ndims, None);

    // Assert that the shapes are compatible, comparing dimensions aligned
    // from the trailing end.
    gen_for_callback_incrementing(
        generator,
        ctx,
        llvm_usize.const_zero(),
        (min_ndims, false),
        |generator, ctx, _, idx| {
            let idx = ctx.builder.build_int_sub(min_ndims, idx, "").unwrap();
            let (lhs_dim_sz, rhs_dim_sz) = unsafe {
                (
                    lhs.dim_sizes().get_typed_unchecked(ctx, generator, &idx, None),
                    rhs.dim_sizes().get_typed_unchecked(ctx, generator, &idx, None),
                )
            };

            let llvm_usize_const_one = llvm_usize.const_int(1, false);
            let lhs_eqz = ctx
                .builder
                .build_int_compare(IntPredicate::EQ, lhs_dim_sz, llvm_usize_const_one, "")
                .unwrap();
            let rhs_eqz = ctx
                .builder
                .build_int_compare(IntPredicate::EQ, rhs_dim_sz, llvm_usize_const_one, "")
                .unwrap();
            let lhs_or_rhs_eqz = ctx.builder.build_or(lhs_eqz, rhs_eqz, "").unwrap();

            let lhs_eq_rhs = ctx
                .builder
                .build_int_compare(IntPredicate::EQ, lhs_dim_sz, rhs_dim_sz, "")
                .unwrap();

            let is_compatible = ctx.builder.build_or(lhs_or_rhs_eqz, lhs_eq_rhs, "").unwrap();

            ctx.make_assert(
                generator,
                is_compatible,
                "0:ValueError",
                "operands could not be broadcast together",
                [None, None, None],
                ctx.current_loc,
            );

            Ok(())
        },
        llvm_usize.const_int(1, false),
    )
    .unwrap();

    let max_ndims = llvm_intrinsics::call_int_umax(ctx, lhs_ndims, rhs_ndims, None);
    let lhs_dims = lhs.dim_sizes().base_ptr(ctx, generator);
    let lhs_ndims = lhs.load_ndims(ctx);
    let rhs_dims = rhs.dim_sizes().base_ptr(ctx, generator);
    let rhs_ndims = rhs.load_ndims(ctx);
    let out_dims = ctx.builder.build_array_alloca(llvm_usize, max_ndims, "").unwrap();
    let out_dims = ArraySliceValue::from_ptr_val(out_dims, max_ndims, None);

    ctx.builder
        .build_call(
            ndarray_calc_broadcast_fn,
            &[
                lhs_dims.into(),
                lhs_ndims.into(),
                rhs_dims.into(),
                rhs_ndims.into(),
                out_dims.base_ptr(ctx, generator).into(),
            ],
            "",
        )
        .unwrap();

    TypedArrayLikeAdapter::from(
        out_dims,
        Box::new(|_, v| v.into_int_value()),
        Box::new(|_, v| v.into()),
    )
}

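The compatibility rule checked above follows NumPy-style broadcasting: dimensions are aligned from the trailing end, two sizes are compatible when they are equal or either is 1, and the result takes the larger of the two. A sketch of the whole shape computation (hypothetical helper, for illustration):

```rust
// NumPy-style broadcast-shape computation, aligned from the trailing
// dimension. Returns None when the operands cannot be broadcast together.
fn broadcast_shapes(lhs: &[u64], rhs: &[u64]) -> Option<Vec<u64>> {
    let mut out = Vec::new();
    let mut l = lhs.iter().rev();
    let mut r = rhs.iter().rev();
    loop {
        match (l.next(), r.next()) {
            (None, None) => break,
            // A missing dimension behaves like size 1.
            (Some(&a), None) | (None, Some(&a)) => out.push(a),
            (Some(&a), Some(&b)) if a == b || a == 1 || b == 1 => out.push(a.max(b)),
            _ => return None, // "operands could not be broadcast together"
        }
    }
    out.reverse();
    Some(out)
}

fn main() {
    assert_eq!(broadcast_shapes(&[2, 1, 4], &[3, 4]), Some(vec![2, 3, 4]));
    assert_eq!(broadcast_shapes(&[2, 3], &[4, 3]), None);
}
```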
/// Generates a call to `__nac3_ndarray_calc_broadcast_idx`. Returns a [`TypedArrayLikeAdapter`]
/// containing the indices used for accessing `array` corresponding to the index of the broadcasted
/// array `broadcast_idx`.
pub fn call_ndarray_calc_broadcast_index<
    'ctx,
    G: CodeGenerator + ?Sized,
    BroadcastIdx: UntypedArrayLikeAccessor<'ctx>,
>(
    generator: &mut G,
    ctx: &mut CodeGenContext<'ctx, '_>,
    array: NDArrayValue<'ctx>,
    broadcast_idx: &BroadcastIdx,
) -> TypedArrayLikeAdapter<'ctx, IntValue<'ctx>> {
    let llvm_i32 = ctx.ctx.i32_type();
    let llvm_usize = generator.get_size_type(ctx.ctx);
    let llvm_pi32 = llvm_i32.ptr_type(AddressSpace::default());
    let llvm_pusize = llvm_usize.ptr_type(AddressSpace::default());

    let ndarray_calc_broadcast_fn_name = match llvm_usize.get_bit_width() {
        32 => "__nac3_ndarray_calc_broadcast_idx",
        64 => "__nac3_ndarray_calc_broadcast_idx64",
        bw => unreachable!("Unsupported size type bit width: {}", bw),
    };
    let ndarray_calc_broadcast_fn =
        ctx.module.get_function(ndarray_calc_broadcast_fn_name).unwrap_or_else(|| {
            let fn_type = llvm_usize.fn_type(
                &[llvm_pusize.into(), llvm_usize.into(), llvm_pi32.into(), llvm_pi32.into()],
                false,
            );

            ctx.module.add_function(ndarray_calc_broadcast_fn_name, fn_type, None)
        });

    let broadcast_size = broadcast_idx.size(ctx, generator);
    let out_idx = ctx.builder.build_array_alloca(llvm_i32, broadcast_size, "").unwrap();

    let array_dims = array.dim_sizes().base_ptr(ctx, generator);
    let array_ndims = array.load_ndims(ctx);
    let broadcast_idx_ptr = unsafe {
        broadcast_idx.ptr_offset_unchecked(ctx, generator, &llvm_usize.const_zero(), None)
    };

    ctx.builder
        .build_call(
            ndarray_calc_broadcast_fn,
            &[array_dims.into(), array_ndims.into(), broadcast_idx_ptr.into(), out_idx.into()],
            "",
        )
        .unwrap();

    TypedArrayLikeAdapter::from(
        ArraySliceValue::from_ptr_val(out_idx, broadcast_size, None),
        Box::new(|_, v| v.into_int_value()),
        Box::new(|_, v| v.into()),
    )
}

fn get_size_variant<'ctx>(ty: IntType<'ctx>) -> SizeVariant {
    match ty.get_bit_width() {
        32 => SizeVariant::Bits32,
        64 => SizeVariant::Bits64,
        _ => unreachable!("Unsupported int type bit width {}", ty.get_bit_width()),
    }
}

fn get_size_type_dependent_function<'ctx, BuildFuncTypeFn>(
|
||||||
|
ctx: &CodeGenContext<'ctx, '_>,
|
||||||
|
size_type: IntType<'ctx>,
|
||||||
|
base_name: &str,
|
||||||
|
build_func_type: BuildFuncTypeFn,
|
||||||
|
) -> FunctionValue<'ctx>
|
||||||
|
where
|
||||||
|
BuildFuncTypeFn: Fn() -> FunctionType<'ctx>,
|
||||||
|
{
|
||||||
|
let mut fn_name = base_name.to_owned();
|
||||||
|
match get_size_variant(size_type) {
|
||||||
|
SizeVariant::Bits32 => {
|
||||||
|
// The original fn_name is the correct function name
|
||||||
|
}
|
||||||
|
SizeVariant::Bits64 => {
|
||||||
|
// Append "64" at the end, this is the naming convention for 64-bit
|
||||||
|
fn_name.push_str("64");
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
// Get (or declare then get if does not exist) the corresponding function
|
||||||
|
ctx.module.get_function(&fn_name).unwrap_or_else(|| {
|
||||||
|
let fn_type = build_func_type();
|
||||||
|
ctx.module.add_function(&fn_name, fn_type, None)
|
||||||
|
})
|
||||||
|
}
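The suffix rule above is small enough to model without inkwell. The following is a hypothetical, self-contained sketch of how `get_size_type_dependent_function` derives the IRRT symbol name from the size type's bit width; `size_dependent_name` is an illustrative helper, not part of the codebase.

```rust
// Sketch of the IRRT naming convention: symbols specialized for a 32-bit size
// type keep the base name, while 64-bit ones get a "64" suffix appended.
fn size_dependent_name(base_name: &str, size_bit_width: u32) -> String {
    let mut name = base_name.to_owned();
    match size_bit_width {
        32 => {}                   // base name is already correct
        64 => name.push_str("64"), // naming convention for the 64-bit variant
        w => unreachable!("Unsupported int type bit width {w}"),
    }
    name
}

fn main() {
    assert_eq!(size_dependent_name("__nac3_ndarray_size", 32), "__nac3_ndarray_size");
    assert_eq!(size_dependent_name("__nac3_ndarray_size", 64), "__nac3_ndarray_size64");
}
```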

fn get_ndarray_struct_ptr<'ctx>(ctx: &'ctx Context, size_type: IntType<'ctx>) -> PointerType<'ctx> {
    let i8_type = ctx.i8_type();

    let ndarray_ty = NpArrayType { size_type, elem_type: i8_type.as_basic_type_enum() };
    let struct_ty = ndarray_ty.fields().whole_struct.as_struct_type(ctx);
    struct_ty.ptr_type(AddressSpace::default())
}

pub fn call_nac3_ndarray_size<'ctx>(
    ctx: &CodeGenContext<'ctx, '_>,
    ndarray: NpArrayValue<'ctx>,
) -> IntValue<'ctx> {
    let size_type = ndarray.ty.size_type;
    let function = get_size_type_dependent_function(ctx, size_type, "__nac3_ndarray_size", || {
        size_type.fn_type(&[get_ndarray_struct_ptr(ctx.ctx, size_type).into()], false)
    });

    ctx.builder
        .build_call(function, &[ndarray.ptr.into()], "size")
        .unwrap()
        .try_as_basic_value()
        .unwrap_left()
        .into_int_value()
}

@ -0,0 +1,26 @@
#[cfg(test)]
mod tests {
    use std::{path::Path, process::Command};

    #[test]
    fn run_irrt_test() {
        assert!(
            cfg!(feature = "test"),
            "Please do `cargo test -F test` to compile `irrt_test.out` and run test"
        );

        let irrt_test_out_path = Path::new(concat!(env!("OUT_DIR"), "/irrt_test.out"));
        let output = Command::new(irrt_test_out_path.to_str().unwrap()).output().unwrap();

        if !output.status.success() {
            eprintln!("irrt_test failed with status {}:", output.status);
            eprintln!("====== stdout ======");
            eprintln!("{}", String::from_utf8(output.stdout).unwrap());
            eprintln!("====== stderr ======");
            eprintln!("{}", String::from_utf8(output.stderr).unwrap());
            eprintln!("====================");

            panic!("irrt_test failed");
        }
    }
}

@ -0,0 +1,308 @@
use crate::codegen::CodeGenContext;
use inkwell::context::Context;
use inkwell::intrinsics::Intrinsic;
use inkwell::types::AnyTypeEnum::IntType;
use inkwell::types::FloatType;
use inkwell::values::{BasicValueEnum, CallSiteValue, FloatValue, IntValue, PointerValue};
use inkwell::AddressSpace;
use itertools::Either;

/// Returns the string representation for the floating-point type `ft` when used in intrinsic
/// functions.
fn get_float_intrinsic_repr(ctx: &Context, ft: FloatType) -> &'static str {
    // Standard LLVM floating-point types
    if ft == ctx.f16_type() {
        return "f16";
    }
    if ft == ctx.f32_type() {
        return "f32";
    }
    if ft == ctx.f64_type() {
        return "f64";
    }
    if ft == ctx.f128_type() {
        return "f128";
    }

    // Non-standard floating-point types
    if ft == ctx.x86_f80_type() {
        return "f80";
    }
    if ft == ctx.ppc_f128_type() {
        return "ppcf128";
    }

    unreachable!()
}
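These repr strings exist because LLVM mangles overloaded intrinsics by appending the operand type to the name (e.g. `llvm.sqrt` over `double` becomes `llvm.sqrt.f64`). A hypothetical sketch of that composition, with `overloaded_intrinsic_name` as an illustrative helper rather than an actual function in this module:

```rust
// LLVM overloaded intrinsics are mangled as "llvm.<base>.<type-repr>", where
// <type-repr> is a string like the ones returned by get_float_intrinsic_repr.
fn overloaded_intrinsic_name(base: &str, float_repr: &str) -> String {
    format!("llvm.{base}.{float_repr}")
}

fn main() {
    assert_eq!(overloaded_intrinsic_name("sqrt", "f64"), "llvm.sqrt.f64");
    assert_eq!(overloaded_intrinsic_name("powi", "f32"), "llvm.powi.f32");
}
```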

/// Invokes the [`llvm.stacksave`](https://llvm.org/docs/LangRef.html#llvm-stacksave-intrinsic)
/// intrinsic.
pub fn call_stacksave<'ctx>(
    ctx: &CodeGenContext<'ctx, '_>,
    name: Option<&str>,
) -> PointerValue<'ctx> {
    const FN_NAME: &str = "llvm.stacksave";

    let intrinsic_fn = Intrinsic::find(FN_NAME)
        .and_then(|intrinsic| intrinsic.get_declaration(&ctx.module, &[]))
        .unwrap();

    ctx.builder
        .build_call(intrinsic_fn, &[], name.unwrap_or_default())
        .map(CallSiteValue::try_as_basic_value)
        .map(|v| v.map_left(BasicValueEnum::into_pointer_value))
        .map(Either::unwrap_left)
        .unwrap()
}

/// Invokes the
/// [`llvm.stackrestore`](https://llvm.org/docs/LangRef.html#llvm-stackrestore-intrinsic) intrinsic.
///
/// - `ptr`: The pointer storing the address to restore the stack to.
pub fn call_stackrestore<'ctx>(ctx: &CodeGenContext<'ctx, '_>, ptr: PointerValue<'ctx>) {
    const FN_NAME: &str = "llvm.stackrestore";

    /*
        SEE https://github.com/TheDan64/inkwell/issues/496

        We want `llvm.stackrestore`, but the following would generate `llvm.stackrestore.p0i8`.

        ```ignore
        let intrinsic_fn = Intrinsic::find(FN_NAME)
            .and_then(|intrinsic| intrinsic.get_declaration(&ctx.module, &[llvm_p0i8.into()]))
            .unwrap();
        ```

        Temp workaround by manually declaring the intrinsic with the correct function name instead.
    */
    let intrinsic_fn = ctx.module.get_function(FN_NAME).unwrap_or_else(|| {
        let llvm_void = ctx.ctx.void_type();
        let llvm_i8 = ctx.ctx.i8_type();
        let llvm_p0i8 = llvm_i8.ptr_type(AddressSpace::default());
        let fn_type = llvm_void.fn_type(&[llvm_p0i8.into()], false);

        ctx.module.add_function(FN_NAME, fn_type, None)
    });

    ctx.builder.build_call(intrinsic_fn, &[ptr.into()], "").unwrap();
}

/// Invokes the [`llvm.memcpy`](https://llvm.org/docs/LangRef.html#llvm-memcpy-intrinsic) intrinsic.
///
/// * `dest` - The pointer to the destination. Must be a pointer to an integer type.
/// * `src` - The pointer to the source. Must be a pointer to an integer type.
/// * `len` - The number of bytes to copy.
/// * `is_volatile` - Whether the `memcpy` operation should be `volatile`.
pub fn call_memcpy<'ctx>(
    ctx: &CodeGenContext<'ctx, '_>,
    dest: PointerValue<'ctx>,
    src: PointerValue<'ctx>,
    len: IntValue<'ctx>,
    is_volatile: IntValue<'ctx>,
) {
    const FN_NAME: &str = "llvm.memcpy";

    debug_assert!(dest.get_type().get_element_type().is_int_type());
    debug_assert!(src.get_type().get_element_type().is_int_type());
    debug_assert_eq!(
        dest.get_type().get_element_type().into_int_type().get_bit_width(),
        src.get_type().get_element_type().into_int_type().get_bit_width(),
    );
    debug_assert!(matches!(len.get_type().get_bit_width(), 32 | 64));
    debug_assert_eq!(is_volatile.get_type().get_bit_width(), 1);

    let llvm_dest_t = dest.get_type();
    let llvm_src_t = src.get_type();
    let llvm_len_t = len.get_type();

    let intrinsic_fn = Intrinsic::find(FN_NAME)
        .and_then(|intrinsic| {
            intrinsic.get_declaration(
                &ctx.module,
                &[llvm_dest_t.into(), llvm_src_t.into(), llvm_len_t.into()],
            )
        })
        .unwrap();

    ctx.builder
        .build_call(intrinsic_fn, &[dest.into(), src.into(), len.into(), is_volatile.into()], "")
        .unwrap();
}

/// Invokes the `llvm.memcpy` intrinsic.
///
/// Unlike [`call_memcpy`], this function accepts any type of pointer value. If `dest` or `src` is
/// not a pointer to an integer, the pointer(s) will be cast to `i8*` before invoking `memcpy`.
pub fn call_memcpy_generic<'ctx>(
    ctx: &CodeGenContext<'ctx, '_>,
    dest: PointerValue<'ctx>,
    src: PointerValue<'ctx>,
    len: IntValue<'ctx>,
    is_volatile: IntValue<'ctx>,
) {
    let llvm_i8 = ctx.ctx.i8_type();
    let llvm_p0i8 = llvm_i8.ptr_type(AddressSpace::default());

    let dest_elem_t = dest.get_type().get_element_type();
    let src_elem_t = src.get_type().get_element_type();

    let dest = if matches!(dest_elem_t, IntType(t) if t.get_bit_width() == 8) {
        dest
    } else {
        ctx.builder
            .build_bitcast(dest, llvm_p0i8, "")
            .map(BasicValueEnum::into_pointer_value)
            .unwrap()
    };
    let src = if matches!(src_elem_t, IntType(t) if t.get_bit_width() == 8) {
        src
    } else {
        ctx.builder
            .build_bitcast(src, llvm_p0i8, "")
            .map(BasicValueEnum::into_pointer_value)
            .unwrap()
    };

    call_memcpy(ctx, dest, src, len, is_volatile);
}

/// Macro to find an LLVM intrinsic and generate a call to it (the body of an intrinsic wrapper
/// function).
///
/// Arguments:
/// * `$ctx:ident`: Reference to the current Code Generation Context
/// * `$name:ident`: Optional name to be assigned to the LLVM build call (`Option<&str>`)
/// * `$llvm_name:literal`: Name of the underlying LLVM intrinsic function
/// * `$map_fn:expr`: Mapping function to be applied on `BasicValue` (`BasicValue` -> Function Return Type).
///   Use `BasicValueEnum::into_int_value` for an integer return type and
///   `BasicValueEnum::into_float_value` for a float return type
/// * `$llvm_ty:ident`: Type of the first operand
/// * `$(,$val:ident)*`: Comma-separated list of operands
macro_rules! generate_llvm_intrinsic_fn_body {
    ($ctx:ident, $name:ident, $llvm_name:literal, $map_fn:expr, $llvm_ty:ident $(,$val:ident)*) => {{
        const FN_NAME: &str = concat!("llvm.", $llvm_name);
        let intrinsic_fn = Intrinsic::find(FN_NAME)
            .and_then(|intrinsic| intrinsic.get_declaration(&$ctx.module, &[$llvm_ty.into()]))
            .unwrap();
        $ctx.builder
            .build_call(intrinsic_fn, &[$($val.into()),*], $name.unwrap_or_default())
            .map(CallSiteValue::try_as_basic_value)
            .map(|v| v.map_left($map_fn))
            .map(Either::unwrap_left)
            .unwrap()
    }};
}

/// Macro to generate the llvm intrinsic function using [`generate_llvm_intrinsic_fn_body`].
///
/// Arguments:
/// * `float`/`int`: Indicates the return and argument type of the function
/// * `$fn_name:ident`: The identifier of the rust function to be generated
/// * `$llvm_name:literal`: Name of underlying llvm intrinsic function.
///   Omit the "llvm." prefix from the function name, i.e. use "ceil" instead of "llvm.ceil"
/// * `$val:ident`: The operand for unary operations
/// * `$val1:ident`, `$val2:ident`: The operands for binary operations
macro_rules! generate_llvm_intrinsic_fn {
    ("float", $fn_name:ident, $llvm_name:literal, $val:ident) => {
        #[doc = concat!("Invokes the [`", stringify!($llvm_name), "`](https://llvm.org/docs/LangRef.html#llvm-", stringify!($llvm_name), "-intrinsic) intrinsic." )]
        pub fn $fn_name<'ctx>(
            ctx: &CodeGenContext<'ctx, '_>,
            $val: FloatValue<'ctx>,
            name: Option<&str>,
        ) -> FloatValue<'ctx> {
            let llvm_ty = $val.get_type();
            generate_llvm_intrinsic_fn_body!(ctx, name, $llvm_name, BasicValueEnum::into_float_value, llvm_ty, $val)
        }
    };
    ("float", $fn_name:ident, $llvm_name:literal, $val1:ident, $val2:ident) => {
        #[doc = concat!("Invokes the [`", stringify!($llvm_name), "`](https://llvm.org/docs/LangRef.html#llvm-", stringify!($llvm_name), "-intrinsic) intrinsic." )]
        pub fn $fn_name<'ctx>(
            ctx: &CodeGenContext<'ctx, '_>,
            $val1: FloatValue<'ctx>,
            $val2: FloatValue<'ctx>,
            name: Option<&str>,
        ) -> FloatValue<'ctx> {
            debug_assert_eq!($val1.get_type(), $val2.get_type());
            let llvm_ty = $val1.get_type();
            generate_llvm_intrinsic_fn_body!(ctx, name, $llvm_name, BasicValueEnum::into_float_value, llvm_ty, $val1, $val2)
        }
    };
    ("int", $fn_name:ident, $llvm_name:literal, $val1:ident, $val2:ident) => {
        #[doc = concat!("Invokes the [`", stringify!($llvm_name), "`](https://llvm.org/docs/LangRef.html#llvm-", stringify!($llvm_name), "-intrinsic) intrinsic." )]
        pub fn $fn_name<'ctx>(
            ctx: &CodeGenContext<'ctx, '_>,
            $val1: IntValue<'ctx>,
            $val2: IntValue<'ctx>,
            name: Option<&str>,
        ) -> IntValue<'ctx> {
            debug_assert_eq!($val1.get_type().get_bit_width(), $val2.get_type().get_bit_width());
            let llvm_ty = $val1.get_type();
            generate_llvm_intrinsic_fn_body!(ctx, name, $llvm_name, BasicValueEnum::into_int_value, llvm_ty, $val1, $val2)
        }
    };
}
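The macro above stamps out one thin wrapper function per intrinsic from a single template. A minimal, inkwell-free sketch of the same `macro_rules!` pattern, with hypothetical names (`generate_unary_wrapper`, `my_sqrt`, `my_floor`):

```rust
// One macro arm generates a wrapper function per (name, operation) pair,
// mirroring how generate_llvm_intrinsic_fn produces call_float_sqrt, etc.
macro_rules! generate_unary_wrapper {
    ($fn_name:ident, $op:expr) => {
        fn $fn_name(x: f64) -> f64 {
            $op(x)
        }
    };
}

generate_unary_wrapper!(my_sqrt, f64::sqrt);
generate_unary_wrapper!(my_floor, f64::floor);

fn main() {
    assert_eq!(my_sqrt(9.0), 3.0);
    assert_eq!(my_floor(2.7), 2.0);
}
```

The real macro additionally dispatches on a `"float"`/`"int"` selector literal to pick argument types and the `BasicValueEnum` mapping function.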

/// Invokes the [`llvm.abs`](https://llvm.org/docs/LangRef.html#llvm-abs-intrinsic) intrinsic.
///
/// * `src` - The value for which the absolute value is to be returned.
/// * `is_int_min_poison` - Whether `poison` is to be returned if `src` is `INT_MIN`.
pub fn call_int_abs<'ctx>(
    ctx: &CodeGenContext<'ctx, '_>,
    src: IntValue<'ctx>,
    is_int_min_poison: IntValue<'ctx>,
    name: Option<&str>,
) -> IntValue<'ctx> {
    debug_assert_eq!(is_int_min_poison.get_type().get_bit_width(), 1);
    debug_assert!(is_int_min_poison.is_const());

    let src_type = src.get_type();
    generate_llvm_intrinsic_fn_body!(
        ctx,
        name,
        "abs",
        BasicValueEnum::into_int_value,
        src_type,
        src,
        is_int_min_poison
    )
}

generate_llvm_intrinsic_fn!("int", call_int_smax, "smax", a, b);
generate_llvm_intrinsic_fn!("int", call_int_smin, "smin", a, b);
generate_llvm_intrinsic_fn!("int", call_int_umax, "umax", a, b);
generate_llvm_intrinsic_fn!("int", call_int_umin, "umin", a, b);
generate_llvm_intrinsic_fn!("int", call_expect, "expect", val, expected_val);

generate_llvm_intrinsic_fn!("float", call_float_sqrt, "sqrt", val);
generate_llvm_intrinsic_fn!("float", call_float_sin, "sin", val);
generate_llvm_intrinsic_fn!("float", call_float_cos, "cos", val);
generate_llvm_intrinsic_fn!("float", call_float_pow, "pow", val, power);
generate_llvm_intrinsic_fn!("float", call_float_exp, "exp", val);
generate_llvm_intrinsic_fn!("float", call_float_exp2, "exp2", val);
generate_llvm_intrinsic_fn!("float", call_float_log, "log", val);
generate_llvm_intrinsic_fn!("float", call_float_log10, "log10", val);
generate_llvm_intrinsic_fn!("float", call_float_log2, "log2", val);
generate_llvm_intrinsic_fn!("float", call_float_fabs, "fabs", src);
generate_llvm_intrinsic_fn!("float", call_float_minnum, "minnum", val, power);
generate_llvm_intrinsic_fn!("float", call_float_maxnum, "maxnum", val, power);
generate_llvm_intrinsic_fn!("float", call_float_copysign, "copysign", mag, sgn);
generate_llvm_intrinsic_fn!("float", call_float_floor, "floor", val);
generate_llvm_intrinsic_fn!("float", call_float_ceil, "ceil", val);
generate_llvm_intrinsic_fn!("float", call_float_round, "round", val);
generate_llvm_intrinsic_fn!("float", call_float_rint, "rint", val);

/// Invokes the [`llvm.powi`](https://llvm.org/docs/LangRef.html#llvm-powi-intrinsic) intrinsic.
pub fn call_float_powi<'ctx>(
    ctx: &CodeGenContext<'ctx, '_>,
    val: FloatValue<'ctx>,
    power: IntValue<'ctx>,
    name: Option<&str>,
) -> FloatValue<'ctx> {
    const FN_NAME: &str = "llvm.powi";

    let llvm_val_t = val.get_type();
    let llvm_power_t = power.get_type();

    let intrinsic_fn = Intrinsic::find(FN_NAME)
        .and_then(|intrinsic| {
            intrinsic.get_declaration(&ctx.module, &[llvm_val_t.into(), llvm_power_t.into()])
        })
        .unwrap();

    ctx.builder
        .build_call(intrinsic_fn, &[val.into(), power.into()], name.unwrap_or_default())
        .map(CallSiteValue::try_as_basic_value)
        .map(|v| v.map_left(BasicValueEnum::into_float_value))
        .map(Either::unwrap_left)
        .unwrap()
}

File diff suppressed because it is too large

@ -0,0 +1,451 @@
use crate::{
    codegen::{
        classes::{ListType, NDArrayType, ProxyType, RangeType},
        concrete_type::ConcreteTypeStore,
        CodeGenContext, CodeGenLLVMOptions, CodeGenTargetMachineOptions, CodeGenTask,
        CodeGenerator, DefaultCodeGenerator, WithCall, WorkerRegistry,
    },
    symbol_resolver::{SymbolResolver, ValueEnum},
    toplevel::{
        composer::{ComposerConfig, TopLevelComposer},
        DefinitionId, FunInstance, TopLevelContext, TopLevelDef,
    },
    typecheck::{
        type_inferencer::{FunctionData, Inferencer, PrimitiveStore},
        typedef::{FunSignature, FuncArg, Type, TypeEnum, Unifier, VarMap},
    },
};
use indexmap::IndexMap;
use indoc::indoc;
use inkwell::{
    targets::{InitializationConfig, Target},
    OptimizationLevel,
};
use nac3parser::ast::FileName;
use nac3parser::{
    ast::{fold::Fold, StrRef},
    parser::parse_program,
};
use parking_lot::RwLock;
use std::collections::{HashMap, HashSet};
use std::sync::Arc;

struct Resolver {
    id_to_type: HashMap<StrRef, Type>,
    id_to_def: RwLock<HashMap<StrRef, DefinitionId>>,
    class_names: HashMap<StrRef, Type>,
}

impl Resolver {
    pub fn add_id_def(&self, id: StrRef, def: DefinitionId) {
        self.id_to_def.write().insert(id, def);
    }
}

impl SymbolResolver for Resolver {
    fn get_default_param_value(
        &self,
        _: &nac3parser::ast::Expr,
    ) -> Option<crate::symbol_resolver::SymbolValue> {
        unimplemented!()
    }

    fn get_symbol_type(
        &self,
        _: &mut Unifier,
        _: &[Arc<RwLock<TopLevelDef>>],
        _: &PrimitiveStore,
        str: StrRef,
    ) -> Result<Type, String> {
        self.id_to_type.get(&str).copied().ok_or_else(|| format!("cannot find symbol `{str}`"))
    }

    fn get_symbol_value<'ctx>(
        &self,
        _: StrRef,
        _: &mut CodeGenContext<'ctx, '_>,
    ) -> Option<ValueEnum<'ctx>> {
        unimplemented!()
    }

    fn get_identifier_def(&self, id: StrRef) -> Result<DefinitionId, HashSet<String>> {
        self.id_to_def
            .read()
            .get(&id)
            .copied()
            .ok_or_else(|| HashSet::from([format!("cannot find symbol `{id}`")]))
    }

    fn get_string_id(&self, _: &str) -> i32 {
        unimplemented!()
    }

    fn get_exception_id(&self, _tyid: usize) -> usize {
        unimplemented!()
    }
}

#[test]
fn test_primitives() {
    let source = indoc! { "
        c = a + b
        d = a if c == 1 else 0
        return d
        "};
    let statements = parse_program(source, FileName::default()).unwrap();

    let composer = TopLevelComposer::new(Vec::new(), ComposerConfig::default(), 32).0;
    let mut unifier = composer.unifier.clone();
    let primitives = composer.primitives_ty;
    let top_level = Arc::new(composer.make_top_level_context());
    unifier.top_level = Some(top_level.clone());

    let resolver = Arc::new(Resolver {
        id_to_type: HashMap::new(),
        id_to_def: RwLock::new(HashMap::new()),
        class_names: HashMap::default(),
    }) as Arc<dyn SymbolResolver + Send + Sync>;

    let threads = vec![DefaultCodeGenerator::new("test".into(), 32).into()];
    let signature = FunSignature {
        args: vec![
            FuncArg { name: "a".into(), ty: primitives.int32, default_value: None },
            FuncArg { name: "b".into(), ty: primitives.int32, default_value: None },
        ],
        ret: primitives.int32,
        vars: VarMap::new(),
    };

    let mut store = ConcreteTypeStore::new();
    let mut cache = HashMap::new();
    let signature = store.from_signature(&mut unifier, &primitives, &signature, &mut cache);
    let signature = store.add_cty(signature);

    let mut function_data = FunctionData {
        resolver: resolver.clone(),
        bound_variables: Vec::new(),
        return_type: Some(primitives.int32),
    };
    let mut virtual_checks = Vec::new();
    let mut calls = HashMap::new();
    let mut identifiers: HashSet<_> = ["a".into(), "b".into()].into();
    let mut inferencer = Inferencer {
        top_level: &top_level,
        function_data: &mut function_data,
        unifier: &mut unifier,
        variable_mapping: HashMap::default(),
        primitives: &primitives,
        virtual_checks: &mut virtual_checks,
        calls: &mut calls,
        defined_identifiers: identifiers.clone(),
        in_handler: false,
    };
    inferencer.variable_mapping.insert("a".into(), inferencer.primitives.int32);
    inferencer.variable_mapping.insert("b".into(), inferencer.primitives.int32);

    let statements = statements
        .into_iter()
        .map(|v| inferencer.fold_stmt(v))
        .collect::<Result<Vec<_>, _>>()
        .unwrap();

    inferencer.check_block(&statements, &mut identifiers).unwrap();
    let top_level = Arc::new(TopLevelContext {
        definitions: Arc::new(RwLock::new(std::mem::take(&mut *top_level.definitions.write()))),
        unifiers: Arc::new(RwLock::new(vec![(unifier.get_shared_unifier(), primitives)])),
        personality_symbol: None,
    });

    let task = CodeGenTask {
        subst: Vec::default(),
        symbol_name: "testing".into(),
        body: Arc::new(statements),
        unifier_index: 0,
        calls: Arc::new(calls),
        resolver,
        store,
        signature,
        id: 0,
    };
    let f = Arc::new(WithCall::new(Box::new(|module| {
        // the following IR is equivalent to
        // ```
        // ; ModuleID = 'test.ll'
        // source_filename = "test"
        //
        // ; Function Attrs: norecurse nounwind readnone
        // define i32 @testing(i32 %0, i32 %1) local_unnamed_addr #0 {
        // init:
        //   %add = add i32 %1, %0
        //   %cmp = icmp eq i32 %add, 1
        //   %ifexpr = select i1 %cmp, i32 %0, i32 0
        //   ret i32 %ifexpr
        // }
        //
        // attributes #0 = { norecurse nounwind readnone }
        // ```
        // after O2 optimization

        let expected = indoc! {"
            ; ModuleID = 'test'
            source_filename = \"test\"

            ; Function Attrs: mustprogress nofree norecurse nosync nounwind readnone willreturn
            define i32 @testing(i32 %0, i32 %1) local_unnamed_addr #0 !dbg !4 {
            init:
              %add = add i32 %1, %0, !dbg !9
              %cmp = icmp eq i32 %add, 1, !dbg !10
              %. = select i1 %cmp, i32 %0, i32 0, !dbg !11
              ret i32 %., !dbg !12
            }

            attributes #0 = { mustprogress nofree norecurse nosync nounwind readnone willreturn }

            !llvm.module.flags = !{!0, !1}
            !llvm.dbg.cu = !{!2}

            !0 = !{i32 2, !\"Debug Info Version\", i32 3}
            !1 = !{i32 2, !\"Dwarf Version\", i32 4}
            !2 = distinct !DICompileUnit(language: DW_LANG_Python, file: !3, producer: \"NAC3\", isOptimized: true, runtimeVersion: 0, emissionKind: FullDebug)
            !3 = !DIFile(filename: \"unknown\", directory: \"\")
            !4 = distinct !DISubprogram(name: \"testing\", linkageName: \"testing\", scope: null, file: !3, line: 1, type: !5, scopeLine: 1, flags: DIFlagPublic, spFlags: DISPFlagDefinition | DISPFlagOptimized, unit: !2, retainedNodes: !8)
            !5 = !DISubroutineType(flags: DIFlagPublic, types: !6)
            !6 = !{!7}
            !7 = !DIBasicType(name: \"_\", flags: DIFlagPublic)
            !8 = !{}
            !9 = !DILocation(line: 1, column: 9, scope: !4)
            !10 = !DILocation(line: 2, column: 15, scope: !4)
            !11 = !DILocation(line: 0, scope: !4)
            !12 = !DILocation(line: 3, column: 8, scope: !4)
        "}
        .trim();
        assert_eq!(expected, module.print_to_string().to_str().unwrap().trim());
    })));

    Target::initialize_all(&InitializationConfig::default());

    let llvm_options = CodeGenLLVMOptions {
        opt_level: OptimizationLevel::Default,
        target: CodeGenTargetMachineOptions::from_host_triple(),
    };
    let (registry, handles) = WorkerRegistry::create_workers(threads, top_level, &llvm_options, &f);
    registry.add_task(task);
    registry.wait_tasks_complete(handles);
}

#[test]
fn test_simple_call() {
    let source_1 = indoc! { "
        a = foo(a)
        return a * 2
        "};
    let statements_1 = parse_program(source_1, FileName::default()).unwrap();

    let source_2 = indoc! { "
        return a + 1
        "};
    let statements_2 = parse_program(source_2, FileName::default()).unwrap();

    let composer = TopLevelComposer::new(Vec::new(), ComposerConfig::default(), 32).0;
    let mut unifier = composer.unifier.clone();
    let primitives = composer.primitives_ty;
    let top_level = Arc::new(composer.make_top_level_context());
    unifier.top_level = Some(top_level.clone());

    let signature = FunSignature {
        args: vec![FuncArg { name: "a".into(), ty: primitives.int32, default_value: None }],
        ret: primitives.int32,
        vars: VarMap::new(),
    };
    let fun_ty = unifier.add_ty(TypeEnum::TFunc(signature.clone()));
    let mut store = ConcreteTypeStore::new();
    let mut cache = HashMap::new();
    let signature = store.from_signature(&mut unifier, &primitives, &signature, &mut cache);
    let signature = store.add_cty(signature);

    let foo_id = top_level.definitions.read().len();
    top_level.definitions.write().push(Arc::new(RwLock::new(TopLevelDef::Function {
        name: "foo".to_string(),
        simple_name: "foo".into(),
        signature: fun_ty,
        var_id: vec![],
        instance_to_stmt: HashMap::new(),
        instance_to_symbol: HashMap::new(),
        resolver: None,
        codegen_callback: None,
        loc: None,
    })));

    let resolver = Resolver {
        id_to_type: HashMap::new(),
        id_to_def: RwLock::new(HashMap::new()),
        class_names: HashMap::default(),
    };
    resolver.add_id_def("foo".into(), DefinitionId(foo_id));
    let resolver = Arc::new(resolver) as Arc<dyn SymbolResolver + Send + Sync>;

    if let TopLevelDef::Function { resolver: r, .. } =
        &mut *top_level.definitions.read()[foo_id].write()
    {
        *r = Some(resolver.clone());
    } else {
        unreachable!()
    }

    let threads = vec![DefaultCodeGenerator::new("test".into(), 32).into()];
    let mut function_data = FunctionData {
        resolver: resolver.clone(),
        bound_variables: Vec::new(),
        return_type: Some(primitives.int32),
    };
    let mut virtual_checks = Vec::new();
    let mut calls = HashMap::new();
    let mut identifiers: HashSet<_> = ["a".into(), "foo".into()].into();
    let mut inferencer = Inferencer {
        top_level: &top_level,
        function_data: &mut function_data,
        unifier: &mut unifier,
        variable_mapping: HashMap::default(),
        primitives: &primitives,
        virtual_checks: &mut virtual_checks,
        calls: &mut calls,
        defined_identifiers: identifiers.clone(),
        in_handler: false,
    };
    inferencer.variable_mapping.insert("a".into(), inferencer.primitives.int32);
    inferencer.variable_mapping.insert("foo".into(), fun_ty);

    let statements_1 = statements_1
        .into_iter()
        .map(|v| inferencer.fold_stmt(v))
        .collect::<Result<Vec<_>, _>>()
        .unwrap();

    let calls1 = inferencer.calls.clone();
    inferencer.calls.clear();

    let statements_2 = statements_2
        .into_iter()
        .map(|v| inferencer.fold_stmt(v))
        .collect::<Result<Vec<_>, _>>()
        .unwrap();

    if let TopLevelDef::Function { instance_to_stmt, .. } =
        &mut *top_level.definitions.read()[foo_id].write()
    {
        instance_to_stmt.insert(
            String::new(),
            FunInstance {
                body: Arc::new(statements_2),
                calls: Arc::new(inferencer.calls.clone()),
                subst: IndexMap::default(),
                unifier_id: 0,
            },
        );
    } else {
        unreachable!()
    }

    inferencer.check_block(&statements_1, &mut identifiers).unwrap();
    let top_level = Arc::new(TopLevelContext {
        definitions: Arc::new(RwLock::new(std::mem::take(&mut *top_level.definitions.write()))),
        unifiers: Arc::new(RwLock::new(vec![(unifier.get_shared_unifier(), primitives)])),
        personality_symbol: None,
    });

    let task = CodeGenTask {
        subst: Vec::default(),
        symbol_name: "testing".to_string(),
        body: Arc::new(statements_1),
        calls: Arc::new(calls1),
        unifier_index: 0,
        resolver,
        signature,
        store,
        id: 0,
    };
    let f = Arc::new(WithCall::new(Box::new(|module| {
        let expected = indoc! {"
            ; ModuleID = 'test'
            source_filename = \"test\"

            ; Function Attrs: mustprogress nofree norecurse nosync nounwind readnone willreturn
            define i32 @testing(i32 %0) local_unnamed_addr #0 !dbg !5 {
            init:
              %add.i = shl i32 %0, 1, !dbg !10
              %mul = add i32 %add.i, 2, !dbg !10
              ret i32 %mul, !dbg !10
            }

            ; Function Attrs: mustprogress nofree norecurse nosync nounwind readnone willreturn
            define i32 @foo.0(i32 %0) local_unnamed_addr #0 !dbg !11 {
            init:
              %add = add i32 %0, 1, !dbg !12
              ret i32 %add, !dbg !12
            }

            attributes #0 = { mustprogress nofree norecurse nosync nounwind readnone willreturn }

            !llvm.module.flags = !{!0, !1}
            !llvm.dbg.cu = !{!2, !4}

            !0 = !{i32 2, !\"Debug Info Version\", i32 3}
            !1 = !{i32 2, !\"Dwarf Version\", i32 4}
            !2 = distinct !DICompileUnit(language: DW_LANG_Python, file: !3, producer: \"NAC3\", isOptimized: true, runtimeVersion: 0, emissionKind: FullDebug)
            !3 = !DIFile(filename: \"unknown\", directory: \"\")
            !4 = distinct !DICompileUnit(language: DW_LANG_Python, file: !3, producer: \"NAC3\", isOptimized: true, runtimeVersion: 0, emissionKind: FullDebug)
            !5 = distinct !DISubprogram(name: \"testing\", linkageName: \"testing\", scope: null, file: !3, line: 1, type: !6, scopeLine: 1, flags: DIFlagPublic, spFlags: DISPFlagDefinition | DISPFlagOptimized, unit: !2, retainedNodes: !9)
            !6 = !DISubroutineType(flags: DIFlagPublic, types: !7)
            !7 = !{!8}
            !8 = !DIBasicType(name: \"_\", flags: DIFlagPublic)
            !9 = !{}
            !10 = !DILocation(line: 2, column: 12, scope: !5)
|
||||||
|
!11 = distinct !DISubprogram(name: \"foo.0\", linkageName: \"foo.0\", scope: null, file: !3, line: 1, type: !6, scopeLine: 1, flags: DIFlagPublic, spFlags: DISPFlagDefinition | DISPFlagOptimized, unit: !4, retainedNodes: !9)
|
||||||
|
!12 = !DILocation(line: 1, column: 12, scope: !11)
|
||||||
|
"}
|
||||||
|
.trim();
|
||||||
|
assert_eq!(expected, module.print_to_string().to_str().unwrap().trim());
|
||||||
|
})));
|
||||||
|
|
||||||
|
Target::initialize_all(&InitializationConfig::default());
|
||||||
|
|
||||||
|
let llvm_options = CodeGenLLVMOptions {
|
||||||
|
opt_level: OptimizationLevel::Default,
|
||||||
|
target: CodeGenTargetMachineOptions::from_host_triple(),
|
||||||
|
};
|
||||||
|
let (registry, handles) = WorkerRegistry::create_workers(threads, top_level, &llvm_options, &f);
|
||||||
|
registry.add_task(task);
|
||||||
|
registry.wait_tasks_complete(handles);
|
||||||
|
}
|
||||||
|
|
||||||
|
#[test]
|
||||||
|
fn test_classes_list_type_new() {
|
||||||
|
let ctx = inkwell::context::Context::create();
|
||||||
|
let generator = DefaultCodeGenerator::new(String::new(), 64);
|
||||||
|
|
||||||
|
let llvm_i32 = ctx.i32_type();
|
||||||
|
let llvm_usize = generator.get_size_type(&ctx);
|
||||||
|
|
||||||
|
let llvm_list = ListType::new(&generator, &ctx, llvm_i32.into());
|
||||||
|
assert!(ListType::is_type(llvm_list.as_base_type(), llvm_usize).is_ok());
|
||||||
|
}
|
||||||
|
|
||||||
|
#[test]
|
||||||
|
fn test_classes_range_type_new() {
|
||||||
|
let ctx = inkwell::context::Context::create();
|
||||||
|
|
||||||
|
let llvm_range = RangeType::new(&ctx);
|
||||||
|
assert!(RangeType::is_type(llvm_range.as_base_type()).is_ok());
|
||||||
|
}
|
||||||
|
|
||||||
|
#[test]
|
||||||
|
fn test_classes_ndarray_type_new() {
|
||||||
|
let ctx = inkwell::context::Context::create();
|
||||||
|
let generator = DefaultCodeGenerator::new(String::new(), 64);
|
||||||
|
|
||||||
|
let llvm_i32 = ctx.i32_type();
|
||||||
|
let llvm_usize = generator.get_size_type(&ctx);
|
||||||
|
|
||||||
|
let llvm_ndarray = NDArrayType::new(&generator, &ctx, llvm_i32.into());
|
||||||
|
assert!(NDArrayType::is_type(llvm_ndarray.as_base_type(), llvm_usize).is_ok());
|
||||||
|
}
|
@@ -1,10 +1,26 @@
-#![warn(clippy::all)]
-#![allow(clippy::clone_double_ref)]
-
-extern crate num_bigint;
-extern crate inkwell;
-extern crate rustpython_parser;
-extern crate indoc;
-
-mod typecheck;
+#![deny(
+    future_incompatible,
+    let_underscore,
+    nonstandard_style,
+    rust_2024_compatibility,
+    clippy::all
+)]
+#![warn(clippy::pedantic)]
+#![allow(
+    dead_code,
+    clippy::cast_possible_truncation,
+    clippy::cast_sign_loss,
+    clippy::enum_glob_use,
+    clippy::missing_errors_doc,
+    clippy::missing_panics_doc,
+    clippy::module_name_repetitions,
+    clippy::similar_names,
+    clippy::too_many_lines,
+    clippy::wildcard_imports
+)]
+
+pub mod codegen;
+pub mod symbol_resolver;
+pub mod toplevel;
+pub mod typecheck;
+pub mod util;
@@ -0,0 +1,612 @@
use std::fmt::Debug;
use std::rc::Rc;
use std::sync::Arc;
use std::{collections::HashMap, collections::HashSet, fmt::Display};

use crate::{
    codegen::{CodeGenContext, CodeGenerator},
    toplevel::{type_annotation::TypeAnnotation, DefinitionId, TopLevelDef},
    typecheck::{
        type_inferencer::PrimitiveStore,
        typedef::{Type, TypeEnum, Unifier, VarMap},
    },
};
use inkwell::values::{BasicValueEnum, FloatValue, IntValue, PointerValue, StructValue};
use itertools::{chain, izip, Itertools};
use nac3parser::ast::{Constant, Expr, Location, StrRef};
use parking_lot::RwLock;

#[derive(Clone, PartialEq, Debug)]
pub enum SymbolValue {
    I32(i32),
    I64(i64),
    U32(u32),
    U64(u64),
    Str(String),
    Double(f64),
    Bool(bool),
    Tuple(Vec<SymbolValue>),
    OptionSome(Box<SymbolValue>),
    OptionNone,
}

impl SymbolValue {
    /// Creates a [`SymbolValue`] from a [`Constant`].
    ///
    /// * `constant` - The constant to create the value from.
    /// * `expected_ty` - The expected type of the [`SymbolValue`].
    pub fn from_constant(
        constant: &Constant,
        expected_ty: Type,
        primitives: &PrimitiveStore,
        unifier: &mut Unifier,
    ) -> Result<Self, String> {
        match constant {
            Constant::None => {
                if unifier.unioned(expected_ty, primitives.option) {
                    Ok(SymbolValue::OptionNone)
                } else {
                    Err(format!("Expected {expected_ty:?}, but got Option"))
                }
            }
            Constant::Bool(b) => {
                if unifier.unioned(expected_ty, primitives.bool) {
                    Ok(SymbolValue::Bool(*b))
                } else {
                    Err(format!("Expected {expected_ty:?}, but got bool"))
                }
            }
            Constant::Str(s) => {
                if unifier.unioned(expected_ty, primitives.str) {
                    Ok(SymbolValue::Str(s.to_string()))
                } else {
                    Err(format!("Expected {expected_ty:?}, but got str"))
                }
            }
            Constant::Int(i) => {
                if unifier.unioned(expected_ty, primitives.int32) {
                    i32::try_from(*i).map(SymbolValue::I32).map_err(|e| e.to_string())
                } else if unifier.unioned(expected_ty, primitives.int64) {
                    i64::try_from(*i).map(SymbolValue::I64).map_err(|e| e.to_string())
                } else if unifier.unioned(expected_ty, primitives.uint32) {
                    u32::try_from(*i).map(SymbolValue::U32).map_err(|e| e.to_string())
                } else if unifier.unioned(expected_ty, primitives.uint64) {
                    u64::try_from(*i).map(SymbolValue::U64).map_err(|e| e.to_string())
                } else {
                    Err(format!("Expected {}, but got int", unifier.stringify(expected_ty)))
                }
            }
            Constant::Tuple(t) => {
                let expected_ty = unifier.get_ty(expected_ty);
                let TypeEnum::TTuple { ty } = expected_ty.as_ref() else {
                    return Err(format!(
                        "Expected {:?}, but got Tuple",
                        expected_ty.get_type_name()
                    ));
                };

                assert_eq!(ty.len(), t.len());

                let elems = t
                    .iter()
                    .zip(ty)
                    .map(|(constant, ty)| Self::from_constant(constant, *ty, primitives, unifier))
                    .collect::<Result<Vec<SymbolValue>, _>>()?;
                Ok(SymbolValue::Tuple(elems))
            }
            Constant::Float(f) => {
                if unifier.unioned(expected_ty, primitives.float) {
                    Ok(SymbolValue::Double(*f))
                } else {
                    Err(format!("Expected {expected_ty:?}, but got float"))
                }
            }
            _ => Err(format!("Unsupported value type {constant:?}")),
        }
    }

    /// Creates a [`SymbolValue`] from a [`Constant`], with its type being inferred from the constant value.
    ///
    /// * `constant` - The constant to create the value from.
    pub fn from_constant_inferred(constant: &Constant) -> Result<Self, String> {
        match constant {
            Constant::None => Ok(SymbolValue::OptionNone),
            Constant::Bool(b) => Ok(SymbolValue::Bool(*b)),
            Constant::Str(s) => Ok(SymbolValue::Str(s.to_string())),
            Constant::Int(i) => {
                let i = *i;
                if i >= 0 {
                    i32::try_from(i)
                        .map(SymbolValue::I32)
                        .or_else(|_| i64::try_from(i).map(SymbolValue::I64))
                        .map_err(|_| {
                            format!("Literal cannot be expressed as any integral type: {i}")
                        })
                } else {
                    u32::try_from(i)
                        .map(SymbolValue::U32)
                        .or_else(|_| u64::try_from(i).map(SymbolValue::U64))
                        .map_err(|_| {
                            format!("Literal cannot be expressed as any integral type: {i}")
                        })
                }
            }
            Constant::Tuple(t) => {
                let elems = t
                    .iter()
                    .map(Self::from_constant_inferred)
                    .collect::<Result<Vec<SymbolValue>, _>>()?;
                Ok(SymbolValue::Tuple(elems))
            }
            Constant::Float(f) => Ok(SymbolValue::Double(*f)),
            _ => Err(format!("Unsupported value type {constant:?}")),
        }
    }

    /// Returns the [`Type`] representing the data type of this value.
    pub fn get_type(&self, primitives: &PrimitiveStore, unifier: &mut Unifier) -> Type {
        match self {
            SymbolValue::I32(_) => primitives.int32,
            SymbolValue::I64(_) => primitives.int64,
            SymbolValue::U32(_) => primitives.uint32,
            SymbolValue::U64(_) => primitives.uint64,
            SymbolValue::Str(_) => primitives.str,
            SymbolValue::Double(_) => primitives.float,
            SymbolValue::Bool(_) => primitives.bool,
            SymbolValue::Tuple(vs) => {
                let vs_tys = vs.iter().map(|v| v.get_type(primitives, unifier)).collect::<Vec<_>>();
                unifier.add_ty(TypeEnum::TTuple { ty: vs_tys })
            }
            SymbolValue::OptionSome(_) | SymbolValue::OptionNone => primitives.option,
        }
    }

    /// Returns the [`TypeAnnotation`] representing the data type of this value.
    pub fn get_type_annotation(
        &self,
        primitives: &PrimitiveStore,
        unifier: &mut Unifier,
    ) -> TypeAnnotation {
        match self {
            SymbolValue::Bool(..)
            | SymbolValue::Double(..)
            | SymbolValue::I32(..)
            | SymbolValue::I64(..)
            | SymbolValue::U32(..)
            | SymbolValue::U64(..)
            | SymbolValue::Str(..) => TypeAnnotation::Primitive(self.get_type(primitives, unifier)),
            SymbolValue::Tuple(vs) => {
                let vs_tys = vs
                    .iter()
                    .map(|v| v.get_type_annotation(primitives, unifier))
                    .collect::<Vec<_>>();
                TypeAnnotation::Tuple(vs_tys)
            }
            SymbolValue::OptionNone => TypeAnnotation::CustomClass {
                id: primitives.option.obj_id(unifier).unwrap(),
                params: Vec::default(),
            },
            SymbolValue::OptionSome(v) => {
                let ty = v.get_type_annotation(primitives, unifier);
                TypeAnnotation::CustomClass {
                    id: primitives.option.obj_id(unifier).unwrap(),
                    params: vec![ty],
                }
            }
        }
    }

    /// Returns the [`TypeEnum`] representing the data type of this value.
    pub fn get_type_enum(
        &self,
        primitives: &PrimitiveStore,
        unifier: &mut Unifier,
    ) -> Rc<TypeEnum> {
        let ty = self.get_type(primitives, unifier);
        unifier.get_ty(ty)
    }
}

impl Display for SymbolValue {
    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
        match self {
            SymbolValue::I32(i) => write!(f, "{i}"),
            SymbolValue::I64(i) => write!(f, "int64({i})"),
            SymbolValue::U32(i) => write!(f, "uint32({i})"),
            SymbolValue::U64(i) => write!(f, "uint64({i})"),
            SymbolValue::Str(s) => write!(f, "\"{s}\""),
            SymbolValue::Double(d) => write!(f, "{d}"),
            SymbolValue::Bool(b) => {
                if *b {
                    write!(f, "True")
                } else {
                    write!(f, "False")
                }
            }
            SymbolValue::Tuple(t) => {
                write!(f, "({})", t.iter().map(|v| format!("{v}")).collect::<Vec<_>>().join(", "))
            }
            SymbolValue::OptionSome(v) => write!(f, "Some({v})"),
            SymbolValue::OptionNone => write!(f, "none"),
        }
    }
}

impl TryFrom<SymbolValue> for u64 {
    type Error = ();

    /// Tries to convert a [`SymbolValue`] into a [`u64`], returning [`Err`] if the value is not
    /// numeric or if the value cannot be converted into a `u64` without overflow.
    fn try_from(value: SymbolValue) -> Result<Self, Self::Error> {
        match value {
            SymbolValue::I32(v) => u64::try_from(v).map_err(|_| ()),
            SymbolValue::I64(v) => u64::try_from(v).map_err(|_| ()),
            SymbolValue::U32(v) => Ok(u64::from(v)),
            SymbolValue::U64(v) => Ok(v),
            _ => Err(()),
        }
    }
}

impl TryFrom<SymbolValue> for i128 {
    type Error = ();

    /// Tries to convert a [`SymbolValue`] into a [`i128`], returning [`Err`] if the value is not
    /// numeric.
    fn try_from(value: SymbolValue) -> Result<Self, Self::Error> {
        match value {
            SymbolValue::I32(v) => Ok(i128::from(v)),
            SymbolValue::I64(v) => Ok(i128::from(v)),
            SymbolValue::U32(v) => Ok(i128::from(v)),
            SymbolValue::U64(v) => Ok(i128::from(v)),
            _ => Err(()),
        }
    }
}

pub trait StaticValue {
    /// Returns a unique identifier for this value.
    fn get_unique_identifier(&self) -> u64;

    /// Returns the constant object represented by this unique identifier.
    fn get_const_obj<'ctx>(
        &self,
        ctx: &mut CodeGenContext<'ctx, '_>,
        generator: &mut dyn CodeGenerator,
    ) -> BasicValueEnum<'ctx>;

    /// Converts this value to a LLVM [`BasicValueEnum`].
    fn to_basic_value_enum<'ctx>(
        &self,
        ctx: &mut CodeGenContext<'ctx, '_>,
        generator: &mut dyn CodeGenerator,
        expected_ty: Type,
    ) -> Result<BasicValueEnum<'ctx>, String>;

    /// Returns a field within this value.
    fn get_field<'ctx>(
        &self,
        name: StrRef,
        ctx: &mut CodeGenContext<'ctx, '_>,
    ) -> Option<ValueEnum<'ctx>>;

    /// Returns a single element of this tuple.
    fn get_tuple_element<'ctx>(&self, index: u32) -> Option<ValueEnum<'ctx>>;
}

#[derive(Clone)]
pub enum ValueEnum<'ctx> {
    /// [`ValueEnum`] representing a static value.
    Static(Arc<dyn StaticValue + Send + Sync>),

    /// [`ValueEnum`] representing a dynamic value.
    Dynamic(BasicValueEnum<'ctx>),
}

impl<'ctx> From<BasicValueEnum<'ctx>> for ValueEnum<'ctx> {
    fn from(v: BasicValueEnum<'ctx>) -> Self {
        ValueEnum::Dynamic(v)
    }
}

impl<'ctx> From<PointerValue<'ctx>> for ValueEnum<'ctx> {
    fn from(v: PointerValue<'ctx>) -> Self {
        ValueEnum::Dynamic(v.into())
    }
}

impl<'ctx> From<IntValue<'ctx>> for ValueEnum<'ctx> {
    fn from(v: IntValue<'ctx>) -> Self {
        ValueEnum::Dynamic(v.into())
    }
}

impl<'ctx> From<FloatValue<'ctx>> for ValueEnum<'ctx> {
    fn from(v: FloatValue<'ctx>) -> Self {
        ValueEnum::Dynamic(v.into())
    }
}

impl<'ctx> From<StructValue<'ctx>> for ValueEnum<'ctx> {
    fn from(v: StructValue<'ctx>) -> Self {
        ValueEnum::Dynamic(v.into())
    }
}

impl<'ctx> ValueEnum<'ctx> {
    /// Converts this [`ValueEnum`] to a [`BasicValueEnum`].
    pub fn to_basic_value_enum<'a>(
        self,
        ctx: &mut CodeGenContext<'ctx, 'a>,
        generator: &mut dyn CodeGenerator,
        expected_ty: Type,
    ) -> Result<BasicValueEnum<'ctx>, String> {
        match self {
            ValueEnum::Static(v) => v.to_basic_value_enum(ctx, generator, expected_ty),
            ValueEnum::Dynamic(v) => Ok(v),
        }
    }
}

pub trait SymbolResolver {
    /// Get the type of a type variable identifier or top-level function type.
    fn get_symbol_type(
        &self,
        unifier: &mut Unifier,
        top_level_defs: &[Arc<RwLock<TopLevelDef>>],
        primitives: &PrimitiveStore,
        str: StrRef,
    ) -> Result<Type, String>;

    /// Get the top-level definition of identifiers.
    fn get_identifier_def(&self, str: StrRef) -> Result<DefinitionId, HashSet<String>>;

    fn get_symbol_value<'ctx>(
        &self,
        str: StrRef,
        ctx: &mut CodeGenContext<'ctx, '_>,
    ) -> Option<ValueEnum<'ctx>>;

    fn get_default_param_value(&self, expr: &Expr) -> Option<SymbolValue>;
    fn get_string_id(&self, s: &str) -> i32;
    fn get_exception_id(&self, tyid: usize) -> usize;

    fn handle_deferred_eval(
        &self,
        _unifier: &mut Unifier,
        _top_level_defs: &[Arc<RwLock<TopLevelDef>>],
        _primitives: &PrimitiveStore,
    ) -> Result<(), String> {
        Ok(())
    }
}

thread_local! {
    static IDENTIFIER_ID: [StrRef; 11] = [
        "int32".into(),
        "int64".into(),
        "float".into(),
        "bool".into(),
        "virtual".into(),
        "tuple".into(),
        "str".into(),
        "Exception".into(),
        "uint32".into(),
        "uint64".into(),
        "Literal".into(),
    ];
}

/// Converts a type annotation into a [Type].
pub fn parse_type_annotation<T>(
    resolver: &dyn SymbolResolver,
    top_level_defs: &[Arc<RwLock<TopLevelDef>>],
    unifier: &mut Unifier,
    primitives: &PrimitiveStore,
    expr: &Expr<T>,
) -> Result<Type, HashSet<String>> {
    use nac3parser::ast::ExprKind::*;
    let ids = IDENTIFIER_ID.with(|ids| *ids);
    let int32_id = ids[0];
    let int64_id = ids[1];
    let float_id = ids[2];
    let bool_id = ids[3];
    let virtual_id = ids[4];
    let tuple_id = ids[5];
    let str_id = ids[6];
    let exn_id = ids[7];
    let uint32_id = ids[8];
    let uint64_id = ids[9];
    let literal_id = ids[10];

    let name_handling = |id: &StrRef, loc: Location, unifier: &mut Unifier| {
        if *id == int32_id {
            Ok(primitives.int32)
        } else if *id == int64_id {
            Ok(primitives.int64)
        } else if *id == uint32_id {
            Ok(primitives.uint32)
        } else if *id == uint64_id {
            Ok(primitives.uint64)
        } else if *id == float_id {
            Ok(primitives.float)
        } else if *id == bool_id {
            Ok(primitives.bool)
        } else if *id == str_id {
            Ok(primitives.str)
        } else if *id == exn_id {
            Ok(primitives.exception)
        } else {
            let obj_id = resolver.get_identifier_def(*id);
            if let Ok(obj_id) = obj_id {
                let def = top_level_defs[obj_id.0].read();
                if let TopLevelDef::Class { fields, methods, type_vars, .. } = &*def {
                    if !type_vars.is_empty() {
                        return Err(HashSet::from([format!(
                            "Unexpected number of type parameters: expected {} but got 0",
                            type_vars.len()
                        )]));
                    }
                    let fields = chain(
                        fields.iter().map(|(k, v, m)| (*k, (*v, *m))),
                        methods.iter().map(|(k, v, _)| (*k, (*v, false))),
                    )
                    .collect();
                    Ok(unifier.add_ty(TypeEnum::TObj { obj_id, fields, params: VarMap::default() }))
                } else {
                    Err(HashSet::from([format!("Cannot use function name as type at {loc}")]))
                }
            } else {
                let ty =
                    resolver.get_symbol_type(unifier, top_level_defs, primitives, *id).map_err(
                        |e| HashSet::from([format!("Unknown type annotation at {loc}: {e}")]),
                    )?;
                if let TypeEnum::TVar { .. } = &*unifier.get_ty(ty) {
                    Ok(ty)
                } else {
                    Err(HashSet::from([format!("Unknown type annotation {id} at {loc}")]))
                }
            }
        }
    };

    let subscript_name_handle = |id: &StrRef, slice: &Expr<T>, unifier: &mut Unifier| {
        if *id == virtual_id {
            let ty = parse_type_annotation(resolver, top_level_defs, unifier, primitives, slice)?;
            Ok(unifier.add_ty(TypeEnum::TVirtual { ty }))
        } else if *id == tuple_id {
            if let Tuple { elts, .. } = &slice.node {
                let ty = elts
                    .iter()
                    .map(|elt| {
                        parse_type_annotation(resolver, top_level_defs, unifier, primitives, elt)
                    })
                    .collect::<Result<Vec<_>, _>>()?;
                Ok(unifier.add_ty(TypeEnum::TTuple { ty }))
            } else {
                Err(HashSet::from(["Expected multiple elements for tuple".into()]))
            }
        } else if *id == literal_id {
            let mut parse_literal = |elt: &Expr<T>| {
                let ty = parse_type_annotation(resolver, top_level_defs, unifier, primitives, elt)?;
                let ty_enum = &*unifier.get_ty_immutable(ty);
                match ty_enum {
                    TypeEnum::TLiteral { values, .. } => Ok(values.clone()),
                    _ => Err(HashSet::from([format!(
                        "Expected literal in type argument for Literal at {}",
                        elt.location
                    )])),
                }
            };

            let values = if let Tuple { elts, .. } = &slice.node {
                elts.iter().map(&mut parse_literal).collect::<Result<Vec<_>, _>>()?
            } else {
                vec![parse_literal(slice)?]
            }
            .into_iter()
            .flatten()
            .collect_vec();

            Ok(unifier.get_fresh_literal(values, Some(slice.location)))
        } else {
            let types = if let Tuple { elts, .. } = &slice.node {
                elts.iter()
                    .map(|v| {
                        parse_type_annotation(resolver, top_level_defs, unifier, primitives, v)
                    })
                    .collect::<Result<Vec<_>, _>>()?
            } else {
                vec![parse_type_annotation(resolver, top_level_defs, unifier, primitives, slice)?]
            };

            let obj_id = resolver.get_identifier_def(*id)?;
            let def = top_level_defs[obj_id.0].read();
            if let TopLevelDef::Class { fields, methods, type_vars, .. } = &*def {
                if types.len() != type_vars.len() {
                    return Err(HashSet::from([format!(
                        "Unexpected number of type parameters: expected {} but got {}",
                        type_vars.len(),
                        types.len()
                    )]));
                }
                let mut subst = VarMap::new();
                for (var, ty) in izip!(type_vars.iter(), types.iter()) {
                    let id = if let TypeEnum::TVar { id, .. } = &*unifier.get_ty(*var) {
                        *id
                    } else {
                        unreachable!()
                    };
                    subst.insert(id, *ty);
                }
                let mut fields = fields
                    .iter()
                    .map(|(attr, ty, is_mutable)| {
                        let ty = unifier.subst(*ty, &subst).unwrap_or(*ty);
                        (*attr, (ty, *is_mutable))
                    })
                    .collect::<HashMap<_, _>>();
                fields.extend(methods.iter().map(|(attr, ty, _)| {
                    let ty = unifier.subst(*ty, &subst).unwrap_or(*ty);
                    (*attr, (ty, false))
                }));
                Ok(unifier.add_ty(TypeEnum::TObj { obj_id, fields, params: subst }))
            } else {
                Err(HashSet::from(["Cannot use function name as type".into()]))
            }
        }
    };

    match &expr.node {
        Name { id, .. } => name_handling(id, expr.location, unifier),
        Subscript { value, slice, .. } => {
            if let Name { id, .. } = &value.node {
                subscript_name_handle(id, slice, unifier)
            } else {
                Err(HashSet::from([format!("unsupported type expression at {}", expr.location)]))
            }
        }
        Constant { value, .. } => SymbolValue::from_constant_inferred(value)
            .map(|v| unifier.get_fresh_literal(vec![v], Some(expr.location)))
            .map_err(|err| HashSet::from([err])),
        _ => Err(HashSet::from([format!("unsupported type expression at {}", expr.location)])),
    }
}

impl dyn SymbolResolver + Send + Sync {
    pub fn parse_type_annotation<T>(
        &self,
        top_level_defs: &[Arc<RwLock<TopLevelDef>>],
        unifier: &mut Unifier,
        primitives: &PrimitiveStore,
        expr: &Expr<T>,
    ) -> Result<Type, HashSet<String>> {
        parse_type_annotation(self, top_level_defs, unifier, primitives, expr)
    }

    pub fn get_type_name(
        &self,
        top_level_defs: &[Arc<RwLock<TopLevelDef>>],
        unifier: &mut Unifier,
        ty: Type,
    ) -> String {
        unifier.internal_stringify(
            ty,
            &mut |id| {
                let TopLevelDef::Class { name, .. } = &*top_level_defs[id].read() else {
                    unreachable!("expected class definition")
                };

                name.to_string()
            },
            &mut |id| format!("typevar{id}"),
            &mut None,
        )
    }
}

impl Debug for dyn SymbolResolver + Send + Sync {
    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
        write!(f, "")
    }
}
File diff suppressed because it is too large
File diff suppressed because it is too large
|
@ -0,0 +1,940 @@
|
||||||
|
use std::convert::TryInto;
|
||||||
|
|
||||||
|
use crate::symbol_resolver::SymbolValue;
|
||||||
|
use crate::toplevel::numpy::unpack_ndarray_var_tys;
|
||||||
|
use crate::typecheck::typedef::{into_var_map, iter_type_vars, Mapping, TypeVarId, VarMap};
|
||||||
|
use nac3parser::ast::{Constant, Location};
|
||||||
|
use strum::IntoEnumIterator;
|
||||||
|
use strum_macros::EnumIter;
|
||||||
|
|
||||||
|
use super::*;
|
||||||
|
|
||||||
|
/// All primitive types and functions in nac3core.
|
||||||
|
#[derive(Clone, Copy, Debug, EnumIter, PartialEq, Eq)]
|
||||||
|
pub enum PrimDef {
|
||||||
|
// Classes
|
||||||
|
Int32,
|
||||||
|
Int64,
|
||||||
|
Float,
|
||||||
|
Bool,
|
||||||
|
None,
|
||||||
|
Range,
|
||||||
|
Str,
|
||||||
|
Exception,
|
||||||
|
UInt32,
|
||||||
|
UInt64,
|
||||||
|
Option,
|
||||||
|
List,
|
||||||
|
NDArray,
|
||||||
|
|
||||||
|
// Member Functions
|
||||||
|
OptionIsSome,
|
||||||
|
OptionIsNone,
|
||||||
|
OptionUnwrap,
|
||||||
|
NDArrayCopy,
|
||||||
|
NDArrayFill,
|
||||||
|
FunInt32,
|
||||||
|
FunInt64,
|
||||||
|
FunUInt32,
|
||||||
|
FunUInt64,
|
||||||
|
FunFloat,
|
||||||
|
FunNpNDArray,
|
||||||
|
FunNpEmpty,
|
||||||
|
FunNpZeros,
|
||||||
|
FunNpOnes,
|
||||||
|
FunNpFull,
|
||||||
|
FunNpArray,
|
||||||
|
FunNpEye,
|
||||||
|
FunNpIdentity,
|
||||||
|
FunRound,
|
||||||
|
FunRound64,
|
||||||
|
FunNpRound,
|
||||||
|
FunRangeInit,
|
||||||
|
FunStr,
|
||||||
|
FunBool,
|
||||||
|
FunFloor,
|
||||||
|
FunFloor64,
|
||||||
|
FunNpFloor,
|
||||||
|
FunCeil,
|
||||||
|
FunCeil64,
|
||||||
|
FunNpCeil,
|
||||||
|
FunLen,
|
||||||
|
FunMin,
|
||||||
|
FunNpMin,
|
||||||
|
FunNpMinimum,
|
||||||
|
FunMax,
|
||||||
|
FunNpMax,
|
||||||
|
FunNpMaximum,
|
||||||
|
FunAbs,
|
||||||
|
FunNpIsNan,
|
||||||
|
FunNpIsInf,
|
||||||
|
FunNpSin,
|
||||||
|
FunNpCos,
|
||||||
|
FunNpExp,
|
||||||
|
FunNpExp2,
|
||||||
|
FunNpLog,
|
||||||
|
FunNpLog10,
|
||||||
|
FunNpLog2,
|
||||||
|
FunNpFabs,
|
||||||
|
FunNpSqrt,
|
||||||
|
FunNpRint,
|
||||||
|
FunNpTan,
|
||||||
|
FunNpArcsin,
|
||||||
|
FunNpArccos,
|
||||||
|
FunNpArctan,
|
||||||
|
FunNpSinh,
|
||||||
|
FunNpCosh,
|
||||||
|
FunNpTanh,
|
||||||
|
FunNpArcsinh,
|
||||||
|
FunNpArccosh,
|
||||||
|
FunNpArctanh,
|
||||||
|
FunNpExpm1,
|
||||||
|
FunNpCbrt,
|
||||||
|
FunSpSpecErf,
|
||||||
|
FunSpSpecErfc,
|
||||||
|
FunSpSpecGamma,
|
||||||
|
FunSpSpecGammaln,
|
||||||
|
FunSpSpecJ0,
|
||||||
|
FunSpSpecJ1,
|
||||||
|
FunNpArctan2,
|
||||||
|
FunNpCopysign,
|
||||||
|
FunNpFmax,
|
||||||
|
FunNpFmin,
|
||||||
|
FunNpLdExp,
|
||||||
|
FunNpHypot,
|
||||||
|
FunNpNextAfter,
|
||||||
|
|
||||||
|
// Top-Level Functions
|
||||||
|
FunSome,
|
||||||
|
}
|
||||||
|
|
||||||
|
/// Associated details of a [`PrimDef`]
|
||||||
|
pub enum PrimDefDetails {
|
||||||
|
PrimFunction { name: &'static str, simple_name: &'static str },
|
||||||
|
PrimClass { name: &'static str },
|
||||||
|
}
|
||||||
|
|
||||||
|
impl PrimDef {
    /// Get the assigned [`DefinitionId`] of this [`PrimDef`].
    ///
    /// The assigned definition ID is determined by the position at which this [`PrimDef`] enum unit
    /// variant is declared, with the first `PrimDef`'s definition ID being `0`.
    #[must_use]
    pub fn id(&self) -> DefinitionId {
        DefinitionId(*self as usize)
    }

    /// Check if a definition ID is that of a [`PrimDef`].
    #[must_use]
    pub fn contains_id(id: DefinitionId) -> bool {
        Self::iter().any(|prim| prim.id() == id)
    }

    /// Get the definition "simple name" of this [`PrimDef`].
    ///
    /// If the [`PrimDef`] is a function, this corresponds to [`TopLevelDef::Function::simple_name`].
    ///
    /// If the [`PrimDef`] is a class, this function panics, as classes have no simple name.
    #[must_use]
    pub fn simple_name(&self) -> &'static str {
        match self.details() {
            PrimDefDetails::PrimFunction { simple_name, .. } => simple_name,
            PrimDefDetails::PrimClass { .. } => {
                panic!("PrimDef {self:?} has no simple_name as it is not a function.")
            }
        }
    }

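The `id` method above relies on a fieldless enum's default discriminants matching declaration order. A minimal standalone sketch of this pattern, using a hypothetical `MiniDef` enum (not part of nac3):

```rust
// Hypothetical mini version of the PrimDef pattern: a fieldless enum whose
// discriminants double as definition IDs, assigned by declaration order.
#[derive(Clone, Copy, Debug, PartialEq)]
enum MiniDef {
    Int32, // id 0
    Float, // id 1
    Bool,  // id 2
}

impl MiniDef {
    // `*self as usize` yields the declaration index because no explicit
    // discriminants are set on the enum.
    fn id(&self) -> usize {
        *self as usize
    }
}

fn main() {
    assert_eq!(MiniDef::Int32.id(), 0);
    assert_eq!(MiniDef::Bool.id(), 2);
    println!("ok");
}
```

The trade-off of this scheme is that reordering variants silently renumbers every definition, so variant order becomes part of the ABI between the composer and the rest of the compiler.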
    /// Get the definition "name" of this [`PrimDef`].
    ///
    /// If the [`PrimDef`] is a function, this corresponds to [`TopLevelDef::Function::name`].
    ///
    /// If the [`PrimDef`] is a class, this corresponds to [`TopLevelDef::Class::name`].
    #[must_use]
    pub fn name(&self) -> &'static str {
        match self.details() {
            PrimDefDetails::PrimFunction { name, .. } | PrimDefDetails::PrimClass { name } => name,
        }
    }

    /// Get the associated details of this [`PrimDef`]
    #[must_use]
    pub fn details(self) -> PrimDefDetails {
        fn class(name: &'static str) -> PrimDefDetails {
            PrimDefDetails::PrimClass { name }
        }

        fn fun(name: &'static str, simple_name: Option<&'static str>) -> PrimDefDetails {
            PrimDefDetails::PrimFunction { simple_name: simple_name.unwrap_or(name), name }
        }

        match self {
            PrimDef::Int32 => class("int32"),
            PrimDef::Int64 => class("int64"),
            PrimDef::Float => class("float"),
            PrimDef::Bool => class("bool"),
            PrimDef::None => class("none"),
            PrimDef::Range => class("range"),
            PrimDef::Str => class("str"),
            PrimDef::Exception => class("Exception"),
            PrimDef::UInt32 => class("uint32"),
            PrimDef::UInt64 => class("uint64"),
            PrimDef::Option => class("Option"),
            PrimDef::OptionIsSome => fun("Option.is_some", Some("is_some")),
            PrimDef::OptionIsNone => fun("Option.is_none", Some("is_none")),
            PrimDef::OptionUnwrap => fun("Option.unwrap", Some("unwrap")),
            PrimDef::List => class("list"),
            PrimDef::NDArray => class("ndarray"),
            PrimDef::NDArrayCopy => fun("ndarray.copy", Some("copy")),
            PrimDef::NDArrayFill => fun("ndarray.fill", Some("fill")),
            PrimDef::FunInt32 => fun("int32", None),
            PrimDef::FunInt64 => fun("int64", None),
            PrimDef::FunUInt32 => fun("uint32", None),
            PrimDef::FunUInt64 => fun("uint64", None),
            PrimDef::FunFloat => fun("float", None),
            PrimDef::FunNpNDArray => fun("np_ndarray", None),
            PrimDef::FunNpEmpty => fun("np_empty", None),
            PrimDef::FunNpZeros => fun("np_zeros", None),
            PrimDef::FunNpOnes => fun("np_ones", None),
            PrimDef::FunNpFull => fun("np_full", None),
            PrimDef::FunNpArray => fun("np_array", None),
            PrimDef::FunNpEye => fun("np_eye", None),
            PrimDef::FunNpIdentity => fun("np_identity", None),
            PrimDef::FunRound => fun("round", None),
            PrimDef::FunRound64 => fun("round64", None),
            PrimDef::FunNpRound => fun("np_round", None),
            PrimDef::FunRangeInit => fun("range.__init__", Some("__init__")),
            PrimDef::FunStr => fun("str", None),
            PrimDef::FunBool => fun("bool", None),
            PrimDef::FunFloor => fun("floor", None),
            PrimDef::FunFloor64 => fun("floor64", None),
            PrimDef::FunNpFloor => fun("np_floor", None),
            PrimDef::FunCeil => fun("ceil", None),
            PrimDef::FunCeil64 => fun("ceil64", None),
            PrimDef::FunNpCeil => fun("np_ceil", None),
            PrimDef::FunLen => fun("len", None),
            PrimDef::FunMin => fun("min", None),
            PrimDef::FunNpMin => fun("np_min", None),
            PrimDef::FunNpMinimum => fun("np_minimum", None),
            PrimDef::FunMax => fun("max", None),
            PrimDef::FunNpMax => fun("np_max", None),
            PrimDef::FunNpMaximum => fun("np_maximum", None),
            PrimDef::FunAbs => fun("abs", None),
            PrimDef::FunNpIsNan => fun("np_isnan", None),
            PrimDef::FunNpIsInf => fun("np_isinf", None),
            PrimDef::FunNpSin => fun("np_sin", None),
            PrimDef::FunNpCos => fun("np_cos", None),
            PrimDef::FunNpExp => fun("np_exp", None),
            PrimDef::FunNpExp2 => fun("np_exp2", None),
            PrimDef::FunNpLog => fun("np_log", None),
            PrimDef::FunNpLog10 => fun("np_log10", None),
            PrimDef::FunNpLog2 => fun("np_log2", None),
            PrimDef::FunNpFabs => fun("np_fabs", None),
            PrimDef::FunNpSqrt => fun("np_sqrt", None),
            PrimDef::FunNpRint => fun("np_rint", None),
            PrimDef::FunNpTan => fun("np_tan", None),
            PrimDef::FunNpArcsin => fun("np_arcsin", None),
            PrimDef::FunNpArccos => fun("np_arccos", None),
            PrimDef::FunNpArctan => fun("np_arctan", None),
            PrimDef::FunNpSinh => fun("np_sinh", None),
            PrimDef::FunNpCosh => fun("np_cosh", None),
            PrimDef::FunNpTanh => fun("np_tanh", None),
            PrimDef::FunNpArcsinh => fun("np_arcsinh", None),
            PrimDef::FunNpArccosh => fun("np_arccosh", None),
            PrimDef::FunNpArctanh => fun("np_arctanh", None),
            PrimDef::FunNpExpm1 => fun("np_expm1", None),
            PrimDef::FunNpCbrt => fun("np_cbrt", None),
            PrimDef::FunSpSpecErf => fun("sp_spec_erf", None),
            PrimDef::FunSpSpecErfc => fun("sp_spec_erfc", None),
            PrimDef::FunSpSpecGamma => fun("sp_spec_gamma", None),
            PrimDef::FunSpSpecGammaln => fun("sp_spec_gammaln", None),
            PrimDef::FunSpSpecJ0 => fun("sp_spec_j0", None),
            PrimDef::FunSpSpecJ1 => fun("sp_spec_j1", None),
            PrimDef::FunNpArctan2 => fun("np_arctan2", None),
            PrimDef::FunNpCopysign => fun("np_copysign", None),
            PrimDef::FunNpFmax => fun("np_fmax", None),
            PrimDef::FunNpFmin => fun("np_fmin", None),
            PrimDef::FunNpLdExp => fun("np_ldexp", None),
            PrimDef::FunNpHypot => fun("np_hypot", None),
            PrimDef::FunNpNextAfter => fun("np_nextafter", None),
            PrimDef::FunSome => fun("Some", None),
        }
    }
}

/// Asserts that a [`PrimDef`] is in an allowlist.
///
/// Like `debug_assert!`, the check in this function is only
/// enabled if `cfg!(debug_assertions)` is true.
pub fn debug_assert_prim_is_allowed(prim: PrimDef, allowlist: &[PrimDef]) {
    if cfg!(debug_assertions) {
        let allowed = allowlist.iter().any(|p| *p == prim);
        assert!(
            allowed,
            "Disallowed primitive definition. Got {prim:?}, but expected it to be in {allowlist:?}"
        );
    }
}

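The same debug-only gating can be sketched in isolation. This is a simplified stand-in with `usize` in place of `PrimDef`; since `cfg!(debug_assertions)` expands to a compile-time boolean, the whole branch is compiled out of release builds:

```rust
// Sketch of a debug-only allowlist check: the assertion runs in debug
// builds and is eliminated entirely when debug_assertions is off.
fn debug_assert_in_allowlist(item: usize, allowlist: &[usize]) {
    if cfg!(debug_assertions) {
        assert!(
            allowlist.contains(&item),
            "Disallowed item. Got {item}, but expected it to be in {allowlist:?}"
        );
    }
}

fn main() {
    // Passes in both debug and release builds.
    debug_assert_in_allowlist(2, &[1, 2, 3]);
    println!("ok");
}
```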
/// Construct the fields of class `Exception`.
/// See [`TypeEnum::TObj::fields`] and [`TopLevelDef::Class::fields`].
#[must_use]
pub fn make_exception_fields(int32: Type, int64: Type, str: Type) -> Vec<(StrRef, Type, bool)> {
    vec![
        ("__name__".into(), int32, true),
        ("__file__".into(), str, true),
        ("__line__".into(), int32, true),
        ("__col__".into(), int32, true),
        ("__func__".into(), str, true),
        ("__message__".into(), str, true),
        ("__param0__".into(), int64, true),
        ("__param1__".into(), int64, true),
        ("__param2__".into(), int64, true),
    ]
}

impl TopLevelDef {
    pub fn to_string(&self, unifier: &mut Unifier) -> String {
        match self {
            TopLevelDef::Class { name, ancestors, fields, methods, type_vars, .. } => {
                let fields_str = fields
                    .iter()
                    .map(|(n, ty, _)| (n.to_string(), unifier.stringify(*ty)))
                    .collect_vec();

                let methods_str = methods
                    .iter()
                    .map(|(n, ty, id)| (n.to_string(), unifier.stringify(*ty), *id))
                    .collect_vec();
                format!(
                    "Class {{\nname: {:?},\nancestors: {:?},\nfields: {:?},\nmethods: {:?},\ntype_vars: {:?}\n}}",
                    name,
                    ancestors.iter().map(|ancestor| ancestor.stringify(unifier)).collect_vec(),
                    fields_str.iter().map(|(a, _)| a).collect_vec(),
                    methods_str.iter().map(|(a, b, _)| (a, b)).collect_vec(),
                    type_vars.iter().map(|id| unifier.stringify(*id)).collect_vec(),
                )
            }
            TopLevelDef::Function { name, signature, var_id, .. } => format!(
                "Function {{\nname: {:?},\nsig: {:?},\nvar_id: {:?}\n}}",
                name,
                unifier.stringify(*signature),
                {
                    // preserve the order for debug output and test
                    let mut r = var_id.clone();
                    r.sort_unstable();
                    r
                }
            ),
        }
    }
}

impl TopLevelComposer {
    #[must_use]
    pub fn make_primitives(size_t: u32) -> (PrimitiveStore, Unifier) {
        let mut unifier = Unifier::new();
        let int32 = unifier.add_ty(TypeEnum::TObj {
            obj_id: PrimDef::Int32.id(),
            fields: HashMap::new(),
            params: VarMap::new(),
        });
        let int64 = unifier.add_ty(TypeEnum::TObj {
            obj_id: PrimDef::Int64.id(),
            fields: HashMap::new(),
            params: VarMap::new(),
        });
        let float = unifier.add_ty(TypeEnum::TObj {
            obj_id: PrimDef::Float.id(),
            fields: HashMap::new(),
            params: VarMap::new(),
        });
        let bool = unifier.add_ty(TypeEnum::TObj {
            obj_id: PrimDef::Bool.id(),
            fields: HashMap::new(),
            params: VarMap::new(),
        });
        let none = unifier.add_ty(TypeEnum::TObj {
            obj_id: PrimDef::None.id(),
            fields: HashMap::new(),
            params: VarMap::new(),
        });
        let range = unifier.add_ty(TypeEnum::TObj {
            obj_id: PrimDef::Range.id(),
            fields: [
                ("start".into(), (int32, true)),
                ("stop".into(), (int32, true)),
                ("step".into(), (int32, true)),
            ]
            .into_iter()
            .collect(),
            params: VarMap::new(),
        });
        let str = unifier.add_ty(TypeEnum::TObj {
            obj_id: PrimDef::Str.id(),
            fields: HashMap::new(),
            params: VarMap::new(),
        });
        let exception = unifier.add_ty(TypeEnum::TObj {
            obj_id: PrimDef::Exception.id(),
            fields: make_exception_fields(int32, int64, str)
                .into_iter()
                .map(|(name, ty, mutable)| (name, (ty, mutable)))
                .collect(),
            params: VarMap::new(),
        });
        let uint32 = unifier.add_ty(TypeEnum::TObj {
            obj_id: PrimDef::UInt32.id(),
            fields: HashMap::new(),
            params: VarMap::new(),
        });
        let uint64 = unifier.add_ty(TypeEnum::TObj {
            obj_id: PrimDef::UInt64.id(),
            fields: HashMap::new(),
            params: VarMap::new(),
        });

        let option_type_var = unifier.get_fresh_var(Some("option_type_var".into()), None);
        let is_some_type_fun_ty = unifier.add_ty(TypeEnum::TFunc(FunSignature {
            args: vec![],
            ret: bool,
            vars: into_var_map([option_type_var]),
        }));
        let unwrap_fun_ty = unifier.add_ty(TypeEnum::TFunc(FunSignature {
            args: vec![],
            ret: option_type_var.ty,
            vars: into_var_map([option_type_var]),
        }));
        let option = unifier.add_ty(TypeEnum::TObj {
            obj_id: PrimDef::Option.id(),
            fields: vec![
                (PrimDef::OptionIsSome.simple_name().into(), (is_some_type_fun_ty, true)),
                (PrimDef::OptionIsNone.simple_name().into(), (is_some_type_fun_ty, true)),
                (PrimDef::OptionUnwrap.simple_name().into(), (unwrap_fun_ty, true)),
            ]
            .into_iter()
            .collect::<HashMap<_, _>>(),
            params: into_var_map([option_type_var]),
        });

        let size_t_ty = match size_t {
            32 => uint32,
            64 => uint64,
            _ => unreachable!(),
        };

        let list_elem_tvar = unifier.get_fresh_var(Some("list_elem".into()), None);
        let list = unifier.add_ty(TypeEnum::TObj {
            obj_id: PrimDef::List.id(),
            fields: Mapping::new(),
            params: into_var_map([list_elem_tvar]),
        });

        let ndarray_dtype_tvar = unifier.get_fresh_var(Some("ndarray_dtype".into()), None);
        let ndarray_ndims_tvar =
            unifier.get_fresh_const_generic_var(size_t_ty, Some("ndarray_ndims".into()), None);
        let ndarray_copy_fun_ret_ty = unifier.get_fresh_var(None, None);
        let ndarray_copy_fun_ty = unifier.add_ty(TypeEnum::TFunc(FunSignature {
            args: vec![],
            ret: ndarray_copy_fun_ret_ty.ty,
            vars: into_var_map([ndarray_dtype_tvar, ndarray_ndims_tvar]),
        }));
        let ndarray_fill_fun_ty = unifier.add_ty(TypeEnum::TFunc(FunSignature {
            args: vec![FuncArg {
                name: "value".into(),
                ty: ndarray_dtype_tvar.ty,
                default_value: None,
            }],
            ret: none,
            vars: into_var_map([ndarray_dtype_tvar, ndarray_ndims_tvar]),
        }));
        let ndarray = unifier.add_ty(TypeEnum::TObj {
            obj_id: PrimDef::NDArray.id(),
            fields: Mapping::from([
                (PrimDef::NDArrayCopy.simple_name().into(), (ndarray_copy_fun_ty, true)),
                (PrimDef::NDArrayFill.simple_name().into(), (ndarray_fill_fun_ty, true)),
            ]),
            params: into_var_map([ndarray_dtype_tvar, ndarray_ndims_tvar]),
        });

        unifier.unify(ndarray_copy_fun_ret_ty.ty, ndarray).unwrap();

        let primitives = PrimitiveStore {
            int32,
            int64,
            uint32,
            uint64,
            float,
            bool,
            none,
            range,
            str,
            exception,
            option,
            list,
            ndarray,
            size_t,
        };
        unifier.put_primitive_store(&primitives);
        crate::typecheck::magic_methods::set_primitives_magic_methods(&primitives, &mut unifier);
        (primitives, unifier)
    }

    /// The returned class definition's ancestors vector already includes the `definition_id`
    /// of the class itself.
    /// When first registered, the `type_vars`, fields, methods, and ancestors are not yet valid.
    #[must_use]
    pub fn make_top_level_class_def(
        obj_id: DefinitionId,
        resolver: Option<Arc<dyn SymbolResolver + Send + Sync>>,
        name: StrRef,
        constructor: Option<Type>,
        loc: Option<Location>,
    ) -> TopLevelDef {
        TopLevelDef::Class {
            name,
            object_id: obj_id,
            type_vars: Vec::default(),
            fields: Vec::default(),
            attributes: Vec::default(),
            methods: Vec::default(),
            ancestors: Vec::default(),
            constructor,
            resolver,
            loc,
        }
    }

    /// When first registered, the type is an invalid placeholder value.
    #[must_use]
    pub fn make_top_level_function_def(
        name: String,
        simple_name: StrRef,
        ty: Type,
        resolver: Option<Arc<dyn SymbolResolver + Send + Sync>>,
        loc: Option<Location>,
    ) -> TopLevelDef {
        TopLevelDef::Function {
            name,
            simple_name,
            signature: ty,
            var_id: Vec::default(),
            instance_to_symbol: HashMap::default(),
            instance_to_stmt: HashMap::default(),
            resolver,
            codegen_callback: None,
            loc,
        }
    }

    #[must_use]
    pub fn make_class_method_name(mut class_name: String, method_name: &str) -> String {
        class_name.push('.');
        class_name.push_str(method_name);
        class_name
    }

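The helper above encodes the `"Class.method"` qualified-name convention used throughout the top-level definitions (e.g. `"Option.unwrap"`). A standalone usage sketch:

```rust
// Standalone copy of the qualified-name builder: appends ".method" to the
// class name in place, reusing the String's allocation where possible.
fn make_class_method_name(mut class_name: String, method_name: &str) -> String {
    class_name.push('.');
    class_name.push_str(method_name);
    class_name
}

fn main() {
    assert_eq!(make_class_method_name("Option".to_string(), "unwrap"), "Option.unwrap");
    println!("ok");
}
```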
    pub fn get_class_method_def_info(
        class_methods_def: &[(StrRef, Type, DefinitionId)],
        method_name: StrRef,
    ) -> Result<(Type, DefinitionId), HashSet<String>> {
        for (name, ty, def_id) in class_methods_def {
            if name == &method_name {
                return Ok((*ty, *def_id));
            }
        }
        Err(HashSet::from([format!("no method {method_name} in the current class")]))
    }

    /// Get all base class definition IDs of a class, excluding itself.
    ///
    /// This function should be called only after the direct parent is set and
    /// before all the ancestors are set, and it assumes single inheritance.
    /// The returned list is ordered from the child to the deepest ancestor.
    pub fn get_all_ancestors_helper(
        child: &TypeAnnotation,
        temp_def_list: &[Arc<RwLock<TopLevelDef>>],
    ) -> Result<Vec<TypeAnnotation>, HashSet<String>> {
        let mut result: Vec<TypeAnnotation> = Vec::new();
        let mut parent = Self::get_parent(child, temp_def_list);
        while let Some(p) = parent {
            parent = Self::get_parent(&p, temp_def_list);
            let p_id = if let TypeAnnotation::CustomClass { id, .. } = &p {
                *id
            } else {
                unreachable!("must be class kind annotation")
            };
            // check for an inheritance cycle: the parent must not already be collected
            let no_cycle = result.iter().all(|x| {
                let TypeAnnotation::CustomClass { id, .. } = x else {
                    unreachable!("must be class kind annotation")
                };

                id.0 != p_id.0
            });
            if no_cycle {
                result.push(p);
            } else {
                return Err(HashSet::from(["cyclic inheritance detected".into()]));
            }
        }
        Ok(result)
    }

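The ancestor walk above can be sketched on a simplified model: classes are plain ids with an optional direct parent, and any parent that was already collected signals a cycle. Names here (`all_ancestors`, `parent_of`) are illustrative, not nac3 API:

```rust
// Sketch of the single-inheritance ancestor walk with cycle detection:
// follow the parent chain from the child, refusing any id seen before.
fn all_ancestors(
    mut parent_of: impl FnMut(usize) -> Option<usize>,
    child: usize,
) -> Result<Vec<usize>, String> {
    let mut result = Vec::new();
    let mut parent = parent_of(child);
    while let Some(p) = parent {
        if result.contains(&p) {
            return Err("cyclic inheritance detected".into());
        }
        parent = parent_of(p);
        result.push(p); // ordered from child to deepest ancestor
    }
    Ok(result)
}

fn main() {
    // 2 -> 1 -> 0, and 0 has no parent.
    let parents = [None, Some(0), Some(1)];
    assert_eq!(all_ancestors(|c| parents[c], 2), Ok(vec![1, 0]));
    println!("ok");
}
```

Like the real code, the sketch only checks collected ancestors, so a linear chain terminates naturally and a back-edge is caught on the second visit.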
    /// Get the direct parent of a class. This should only be called while finding
    /// all ancestors, so it panics on malformed input.
    fn get_parent(
        child: &TypeAnnotation,
        temp_def_list: &[Arc<RwLock<TopLevelDef>>],
    ) -> Option<TypeAnnotation> {
        let child_id = if let TypeAnnotation::CustomClass { id, .. } = child {
            *id
        } else {
            unreachable!("should be class type annotation")
        };
        let child_def = temp_def_list.get(child_id.0).unwrap();
        let child_def = child_def.read();
        let TopLevelDef::Class { ancestors, .. } = &*child_def else {
            unreachable!("child must be top level class def")
        };

        if ancestors.is_empty() {
            None
        } else {
            Some(ancestors[0].clone())
        }
    }

    /// Get the `var_id` of a given `TVar` type.
    pub fn get_var_id(var_ty: Type, unifier: &mut Unifier) -> Result<TypeVarId, HashSet<String>> {
        if let TypeEnum::TVar { id, .. } = unifier.get_ty(var_ty).as_ref() {
            Ok(*id)
        } else {
            Err(HashSet::from(["not type var".to_string()]))
        }
    }

    pub fn check_overload_function_type(
        this: Type,
        other: Type,
        unifier: &mut Unifier,
        type_var_to_concrete_def: &HashMap<Type, TypeAnnotation>,
    ) -> bool {
        let this = unifier.get_ty(this);
        let this = this.as_ref();
        let other = unifier.get_ty(other);
        let other = other.as_ref();
        let (
            TypeEnum::TFunc(FunSignature { args: this_args, ret: this_ret, .. }),
            TypeEnum::TFunc(FunSignature { args: other_args, ret: other_ret, .. }),
        ) = (this, other)
        else {
            unreachable!("this function must be called with function type")
        };

        // check args
        let args_ok =
            this_args
                .iter()
                .map(|FuncArg { name, ty, .. }| (name, type_var_to_concrete_def.get(ty).unwrap()))
                .zip(other_args.iter().map(|FuncArg { name, ty, .. }| {
                    (name, type_var_to_concrete_def.get(ty).unwrap())
                }))
                .all(|(this, other)| {
                    if this.0 == &"self".into() && this.0 == other.0 {
                        true
                    } else {
                        this.0 == other.0
                            && check_overload_type_annotation_compatible(this.1, other.1, unifier)
                    }
                });

        // check rets
        let ret_ok = check_overload_type_annotation_compatible(
            type_var_to_concrete_def.get(this_ret).unwrap(),
            type_var_to_concrete_def.get(other_ret).unwrap(),
            unifier,
        );

        args_ok && ret_ok
    }

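The argument check above can be sketched with plain strings standing in for type annotations: two signatures are overload-compatible only when argument names line up pairwise and each pair's types are compatible, with `self` exempt from the type check. Unlike the `zip` in the code above, this sketch also requires equal argument counts:

```rust
// Sketch of pairwise signature comparison: (name, type) pairs are zipped
// and every pair must match; `self` is exempt from the type comparison.
fn args_compatible(this: &[(&str, &str)], other: &[(&str, &str)]) -> bool {
    this.len() == other.len()
        && this.iter().zip(other.iter()).all(|(a, b)| {
            a.0 == b.0 && (a.0 == "self" || a.1 == b.1)
        })
}

fn main() {
    // `self` may differ in type between the base and overriding method.
    assert!(args_compatible(
        &[("self", "A"), ("x", "int32")],
        &[("self", "B"), ("x", "int32")],
    ));
    // Different argument names are never compatible.
    assert!(!args_compatible(&[("x", "int32")], &[("y", "int32")]));
    println!("ok");
}
```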
    pub fn check_overload_field_type(
        this: Type,
        other: Type,
        unifier: &mut Unifier,
        type_var_to_concrete_def: &HashMap<Type, TypeAnnotation>,
    ) -> bool {
        check_overload_type_annotation_compatible(
            type_var_to_concrete_def.get(&this).unwrap(),
            type_var_to_concrete_def.get(&other).unwrap(),
            unifier,
        )
    }

    pub fn get_all_assigned_field(stmts: &[Stmt<()>]) -> Result<HashSet<StrRef>, HashSet<String>> {
        let mut result = HashSet::new();
        for s in stmts {
            match &s.node {
                ast::StmtKind::AnnAssign { target, .. }
                    if {
                        if let ast::ExprKind::Attribute { value, .. } = &target.node {
                            if let ast::ExprKind::Name { id, .. } = &value.node {
                                id == &"self".into()
                            } else {
                                false
                            }
                        } else {
                            false
                        }
                    } =>
                {
                    return Err(HashSet::from([format!(
                        "redundant type annotation for class fields at {}",
                        s.location
                    )]))
                }
                ast::StmtKind::Assign { targets, .. } => {
                    for t in targets {
                        if let ast::ExprKind::Attribute { value, attr, .. } = &t.node {
                            if let ast::ExprKind::Name { id, .. } = &value.node {
                                if id == &"self".into() {
                                    result.insert(*attr);
                                }
                            }
                        }
                    }
                }
                // TODO: do not check for For and While?
                ast::StmtKind::For { body, orelse, .. }
                | ast::StmtKind::While { body, orelse, .. } => {
                    result.extend(Self::get_all_assigned_field(body.as_slice())?);
                    result.extend(Self::get_all_assigned_field(orelse.as_slice())?);
                }
                ast::StmtKind::If { body, orelse, .. } => {
                    let inited_for_sure = Self::get_all_assigned_field(body.as_slice())?
                        .intersection(&Self::get_all_assigned_field(orelse.as_slice())?)
                        .copied()
                        .collect::<HashSet<_>>();
                    result.extend(inited_for_sure);
                }
                ast::StmtKind::Try { body, orelse, finalbody, .. } => {
                    let inited_for_sure = Self::get_all_assigned_field(body.as_slice())?
                        .intersection(&Self::get_all_assigned_field(orelse.as_slice())?)
                        .copied()
                        .collect::<HashSet<_>>();
                    result.extend(inited_for_sure);
                    result.extend(Self::get_all_assigned_field(finalbody.as_slice())?);
                }
                ast::StmtKind::With { body, .. } => {
                    result.extend(Self::get_all_assigned_field(body.as_slice())?);
                }
                ast::StmtKind::Pass { .. }
                | ast::StmtKind::Assert { .. }
                | ast::StmtKind::Expr { .. } => {}

                _ => {
                    unimplemented!()
                }
            }
        }
        Ok(result)
    }

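The `If`/`Try` arms above implement a "definitely assigned" rule: a field assigned inside a branch counts only if it is assigned on every path, hence the set intersection of the two branches. A minimal sketch of that rule, with field names as plain strings:

```rust
use std::collections::HashSet;

// Sketch of the "initialized for sure" rule: a field assigned in an `if`
// is guaranteed only when both branches assign it, so the result is the
// intersection of the per-branch assignment sets.
fn surely_assigned(then_branch: &[&str], else_branch: &[&str]) -> HashSet<String> {
    let then_set: HashSet<_> = then_branch.iter().map(|s| s.to_string()).collect();
    let else_set: HashSet<_> = else_branch.iter().map(|s| s.to_string()).collect();
    then_set.intersection(&else_set).cloned().collect()
}

fn main() {
    let got = surely_assigned(&["x", "y"], &["y", "z"]);
    assert_eq!(got, HashSet::from(["y".to_string()]));
    println!("ok");
}
```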
    pub fn parse_parameter_default_value(
        default: &ast::Expr,
        resolver: &(dyn SymbolResolver + Send + Sync),
    ) -> Result<SymbolValue, HashSet<String>> {
        parse_parameter_default_value(default, resolver)
    }

    pub fn check_default_param_type(
        val: &SymbolValue,
        ty: &TypeAnnotation,
        primitive: &PrimitiveStore,
        unifier: &mut Unifier,
    ) -> Result<(), String> {
        fn is_compatible(
            found: &TypeAnnotation,
            expect: &TypeAnnotation,
            unifier: &mut Unifier,
            primitive: &PrimitiveStore,
        ) -> bool {
            match (found, expect) {
                (TypeAnnotation::Primitive(f), TypeAnnotation::Primitive(e)) => {
                    unifier.unioned(*f, *e)
                }
                (
                    TypeAnnotation::CustomClass { id: f_id, params: f_param },
                    TypeAnnotation::CustomClass { id: e_id, params: e_param },
                ) => {
                    *f_id == *e_id
                        && *f_id == primitive.option.obj_id(unifier).unwrap()
                        && (f_param.is_empty()
                            || (f_param.len() == 1
                                && e_param.len() == 1
                                && is_compatible(&f_param[0], &e_param[0], unifier, primitive)))
                }
                (TypeAnnotation::Tuple(f), TypeAnnotation::Tuple(e)) => {
                    f.len() == e.len()
                        && f.iter()
                            .zip(e.iter())
                            .all(|(f, e)| is_compatible(f, e, unifier, primitive))
                }
                _ => false,
            }
        }

        let found = val.get_type_annotation(primitive, unifier);
        if is_compatible(&found, ty, unifier, primitive) {
            Ok(())
        } else {
            Err(format!(
                "incompatible default parameter type, expect {}, found {}",
                ty.stringify(unifier),
                found.stringify(unifier),
            ))
        }
    }
}

pub fn parse_parameter_default_value(
    default: &ast::Expr,
    resolver: &(dyn SymbolResolver + Send + Sync),
) -> Result<SymbolValue, HashSet<String>> {
    fn handle_constant(val: &Constant, loc: &Location) -> Result<SymbolValue, HashSet<String>> {
        match val {
            Constant::Int(v) => {
                if let Ok(v) = (*v).try_into() {
                    Ok(SymbolValue::I32(v))
                } else {
                    Err(HashSet::from([format!("integer value out of range at {loc}")]))
                }
            }
            Constant::Float(v) => Ok(SymbolValue::Double(*v)),
            Constant::Bool(v) => Ok(SymbolValue::Bool(*v)),
            Constant::Tuple(tuple) => Ok(SymbolValue::Tuple(
                tuple.iter().map(|x| handle_constant(x, loc)).collect::<Result<Vec<_>, _>>()?,
            )),
            Constant::None => Err(HashSet::from([format!(
                "`None` is not supported, use `none` for option type instead ({loc})"
            )])),
            _ => unimplemented!("this constant is not supported at {}", loc),
        }
    }
    match &default.node {
        ast::ExprKind::Constant { value, .. } => handle_constant(value, &default.location),
        ast::ExprKind::Call { func, args, .. } if args.len() == 1 => match &func.node {
            ast::ExprKind::Name { id, .. } if *id == "int64".into() => match &args[0].node {
                ast::ExprKind::Constant { value: Constant::Int(v), .. } => {
                    let v: Result<i64, _> = (*v).try_into();
                    match v {
                        Ok(v) => Ok(SymbolValue::I64(v)),
                        _ => Err(HashSet::from([format!(
                            "default param value out of range at {}",
                            default.location
                        )])),
                    }
                }
                _ => Err(HashSet::from([format!(
                    "only allow constant integer here at {}",
                    default.location
                )])),
            },
            ast::ExprKind::Name { id, .. } if *id == "uint32".into() => match &args[0].node {
                ast::ExprKind::Constant { value: Constant::Int(v), .. } => {
                    let v: Result<u32, _> = (*v).try_into();
                    match v {
                        Ok(v) => Ok(SymbolValue::U32(v)),
                        _ => Err(HashSet::from([format!(
                            "default param value out of range at {}",
                            default.location
                        )])),
                    }
                }
                _ => Err(HashSet::from([format!(
                    "only allow constant integer here at {}",
                    default.location
                )])),
            },
            ast::ExprKind::Name { id, .. } if *id == "uint64".into() => match &args[0].node {
                ast::ExprKind::Constant { value: Constant::Int(v), .. } => {
                    let v: Result<u64, _> = (*v).try_into();
                    match v {
                        Ok(v) => Ok(SymbolValue::U64(v)),
                        _ => Err(HashSet::from([format!(
                            "default param value out of range at {}",
                            default.location
                        )])),
                    }
                }
                _ => Err(HashSet::from([format!(
                    "only allow constant integer here at {}",
                    default.location
                )])),
            },
            ast::ExprKind::Name { id, .. } if *id == "Some".into() => Ok(SymbolValue::OptionSome(
                Box::new(parse_parameter_default_value(&args[0], resolver)?),
            )),
            _ => Err(HashSet::from([format!(
                "unsupported default parameter at {}",
                default.location
            )])),
        },
        ast::ExprKind::Tuple { elts, .. } => Ok(SymbolValue::Tuple(
            elts.iter()
                .map(|x| parse_parameter_default_value(x, resolver))
                .collect::<Result<Vec<_>, _>>()?,
        )),
        ast::ExprKind::Name { id, .. } if id == &"none".into() => Ok(SymbolValue::OptionNone),
        ast::ExprKind::Name { id, .. } => {
            resolver.get_default_param_value(default).ok_or_else(|| {
                HashSet::from([format!(
                    "`{}` cannot be used as a default parameter at {} \
                    (not primitive type, option or tuple / not defined?)",
                    id, default.location
                )])
            })
        }
        _ => Err(HashSet::from([format!(
            "unsupported default parameter (not primitive type, option or tuple) at {}",
            default.location
        )])),
    }
}

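The integer arms above all follow the same shape: the parser's wide integer constant is narrowed to the target width with `try_into`, and an out-of-range value becomes an error rather than silently wrapping. A minimal sketch of that narrowing (the `i128` source width is an assumption for illustration):

```rust
// Sketch of range-checked narrowing for default parameter constants: a wide
// parser-side integer is converted with `try_into`, and failure is mapped
// to a descriptive error instead of wrapping.
fn parse_i32_default(v: i128) -> Result<i32, String> {
    v.try_into().map_err(|_| "integer value out of range".to_string())
}

fn main() {
    assert_eq!(parse_i32_default(42), Ok(42));
    assert!(parse_i32_default(1i128 << 40).is_err());
    println!("ok");
}
```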
/// Obtains the element type of an array-like type.
pub fn arraylike_flatten_element_type(unifier: &mut Unifier, ty: Type) -> Type {
    match &*unifier.get_ty(ty) {
        TypeEnum::TObj { obj_id, .. } if *obj_id == PrimDef::NDArray.id() => {
            unpack_ndarray_var_tys(unifier, ty).0
        }

        TypeEnum::TObj { obj_id, params, .. } if *obj_id == PrimDef::List.id() => {
            arraylike_flatten_element_type(unifier, iter_type_vars(params).next().unwrap().ty)
        }
        _ => ty,
    }
}

/// Obtains the number of dimensions of an array-like type.
|
||||||
|
pub fn arraylike_get_ndims(unifier: &mut Unifier, ty: Type) -> u64 {
|
||||||
|
match &*unifier.get_ty(ty) {
|
||||||
|
TypeEnum::TObj { obj_id, .. } if *obj_id == PrimDef::NDArray.id() => {
|
||||||
|
let ndims = unpack_ndarray_var_tys(unifier, ty).1;
|
||||||
|
let TypeEnum::TLiteral { values, .. } = &*unifier.get_ty_immutable(ndims) else {
|
||||||
|
panic!("Expected TLiteral for ndarray.ndims, got {}", unifier.stringify(ndims))
|
||||||
|
};
|
||||||
|
|
||||||
|
if values.len() > 1 {
|
||||||
|
todo!("Getting num of dimensions for ndarray with more than one ndim bound is unimplemented")
|
||||||
|
}
|
||||||
|
|
||||||
|
u64::try_from(values[0].clone()).unwrap()
|
||||||
|
}
|
||||||
|
|
||||||
|
TypeEnum::TObj { obj_id, params, .. } if *obj_id == PrimDef::List.id() => {
|
||||||
|
arraylike_get_ndims(unifier, iter_type_vars(params).next().unwrap().ty) + 1
|
||||||
|
}
|
||||||
|
_ => 0,
|
||||||
|
}
|
||||||
|
}
|
|
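As a hedged illustration of the recursion in `arraylike_get_ndims` above, the same logic can be sketched in plain Python, with tuples standing in for nac3core's `Type` (the `("ndarray", ndims)` / `("list", elem)` descriptor shapes are hypothetical, not nac3core API):

```python
# Plain-Python mirror of the array-like traversal above; ("ndarray", n),
# ("list", elem) and ("scalar",) are hypothetical stand-ins for TypeEnum.
def get_ndims(ty):
    kind = ty[0]
    if kind == "ndarray":
        return ty[1]                  # the ndarray carries its ndims literal
    if kind == "list":
        return get_ndims(ty[1]) + 1   # each list layer adds one dimension
    return 0                          # scalars contribute no dimensions

# list[list[ndarray with ndims=2]] has 2 + 2 = 4 dimensions in total
assert get_ndims(("list", ("list", ("ndarray", 2)))) == 4
assert get_ndims(("scalar",)) == 0
```

The Rust version bottoms out the same way: `ndarray` contributes its `TLiteral` ndims bound, each `list` wrapper adds one, and anything else is dimensionless.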
@@ -0,0 +1,157 @@
use std::{
    borrow::BorrowMut,
    collections::{HashMap, HashSet},
    fmt::Debug,
    iter::FromIterator,
    sync::Arc,
};

use super::codegen::CodeGenContext;
use super::typecheck::type_inferencer::PrimitiveStore;
use super::typecheck::typedef::{
    FunSignature, FuncArg, SharedUnifier, Type, TypeEnum, Unifier, VarMap,
};
use crate::{
    codegen::CodeGenerator,
    symbol_resolver::{SymbolResolver, ValueEnum},
    typecheck::{
        type_inferencer::CodeLocation,
        typedef::{CallId, TypeVarId},
    },
};
use inkwell::values::BasicValueEnum;
use itertools::Itertools;
use nac3parser::ast::{self, Location, Stmt, StrRef};
use parking_lot::RwLock;

#[derive(PartialEq, Eq, PartialOrd, Ord, Clone, Copy, Hash, Debug)]
pub struct DefinitionId(pub usize);

pub mod builtins;
pub mod composer;
pub mod helper;
pub mod numpy;
pub mod type_annotation;
use composer::*;
use type_annotation::*;
#[cfg(test)]
mod test;

type GenCallCallback = dyn for<'ctx, 'a> Fn(
        &mut CodeGenContext<'ctx, 'a>,
        Option<(Type, ValueEnum<'ctx>)>,
        (&FunSignature, DefinitionId),
        Vec<(Option<StrRef>, ValueEnum<'ctx>)>,
        &mut dyn CodeGenerator,
    ) -> Result<Option<BasicValueEnum<'ctx>>, String>
    + Send
    + Sync;

pub struct GenCall {
    fp: Box<GenCallCallback>,
}

impl GenCall {
    #[must_use]
    pub fn new(fp: Box<GenCallCallback>) -> GenCall {
        GenCall { fp }
    }

    /// Creates a dummy instance of [`GenCall`], which invokes [`unreachable!()`] with the given
    /// `reason`.
    #[must_use]
    pub fn create_dummy(reason: String) -> GenCall {
        Self::new(Box::new(move |_, _, _, _, _| unreachable!("{reason}")))
    }

    pub fn run<'ctx>(
        &self,
        ctx: &mut CodeGenContext<'ctx, '_>,
        obj: Option<(Type, ValueEnum<'ctx>)>,
        fun: (&FunSignature, DefinitionId),
        args: Vec<(Option<StrRef>, ValueEnum<'ctx>)>,
        generator: &mut dyn CodeGenerator,
    ) -> Result<Option<BasicValueEnum<'ctx>>, String> {
        (self.fp)(ctx, obj, fun, args, generator)
    }
}

impl Debug for GenCall {
    fn fmt(&self, _: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
        Ok(())
    }
}

#[derive(Clone, Debug)]
pub struct FunInstance {
    pub body: Arc<Vec<Stmt<Option<Type>>>>,
    pub calls: Arc<HashMap<CodeLocation, CallId>>,
    pub subst: VarMap,
    pub unifier_id: usize,
}

#[derive(Debug, Clone)]
pub enum TopLevelDef {
    Class {
        /// Name for error messages and symbols.
        name: StrRef,
        /// Object ID used for [`TypeEnum`].
        object_id: DefinitionId,
        /// Type variables bound to the class.
        type_vars: Vec<Type>,
        /// Class fields.
        ///
        /// Name and type are mutable.
        fields: Vec<(StrRef, Type, bool)>,
        /// Class attributes.
        ///
        /// Name, type, value.
        attributes: Vec<(StrRef, Type, ast::Constant)>,
        /// Class methods, pointing to the corresponding function definition.
        methods: Vec<(StrRef, Type, DefinitionId)>,
        /// Ancestor classes, including itself.
        ancestors: Vec<TypeAnnotation>,
        /// Symbol resolver of the module that defined the class; [`None`] if it is a built-in
        /// type.
        resolver: Option<Arc<dyn SymbolResolver + Send + Sync>>,
        /// Constructor type.
        constructor: Option<Type>,
        /// Definition location.
        loc: Option<Location>,
    },
    Function {
        /// Prefix for the symbol; should be unique globally.
        name: String,
        /// Simple name, the same as in the method/function definition.
        simple_name: StrRef,
        /// Function signature.
        signature: Type,
        /// Instantiated type variable IDs.
        var_id: Vec<TypeVarId>,
        /// Function instance to symbol mapping.
        ///
        /// * Key: String representation of type variable values, sorted by variable ID in
        ///   ascending order, including type variables associated with the class.
        /// * Value: Function symbol name.
        instance_to_symbol: HashMap<String, String>,
        /// Function instances to annotated AST mapping.
        ///
        /// * Key: String representation of type variable values, sorted by variable ID in
        ///   ascending order, including type variables associated with the class but excluding
        ///   rigid type variables.
        ///
        /// Rigid type variables are substituted when the function is instantiated.
        instance_to_stmt: HashMap<String, FunInstance>,
        /// Symbol resolver of the module that defined the function.
        resolver: Option<Arc<dyn SymbolResolver + Send + Sync>>,
        /// Custom code generation callback.
        codegen_callback: Option<Arc<GenCall>>,
        /// Definition location.
        loc: Option<Location>,
    },
}

pub struct TopLevelContext {
    pub definitions: Arc<RwLock<Vec<Arc<RwLock<TopLevelDef>>>>>,
    pub unifiers: Arc<RwLock<Vec<(SharedUnifier, PrimitiveStore)>>>,
    pub personality_symbol: Option<String>,
}
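The `instance_to_symbol` keying scheme documented on `TopLevelDef::Function` above (type-variable values rendered as a string, sorted by variable ID, mapping to a symbol name) can be sketched in Python; note that `instance_key` and the dict shapes here are hypothetical illustrations, not nac3core API:

```python
# Hypothetical sketch of the instance_to_symbol keying scheme: the key is a
# string rendering of concrete type-variable assignments, sorted by the
# numeric type-variable ID so that equal substitutions produce equal keys.
def instance_key(subst):
    # subst: {type_var_id: concrete_type_name}
    return ", ".join(f"{vid}:{ty}" for vid, ty in sorted(subst.items()))

instance_to_symbol = {}
key = instance_key({252: "bool", 245: "int32"})  # insertion order irrelevant
instance_to_symbol[key] = "A.fun.0"              # hypothetical symbol name

assert key == "245:int32, 252:bool"
```

Sorting by variable ID is what makes the key canonical: two call sites that instantiate the function with the same concrete types always hit the same symbol, regardless of the order in which the substitution was built.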
@@ -0,0 +1,85 @@
use crate::{
    toplevel::helper::PrimDef,
    typecheck::{
        type_inferencer::PrimitiveStore,
        typedef::{Type, TypeEnum, TypeVarId, Unifier, VarMap},
    },
};
use itertools::Itertools;

/// Creates an `ndarray` [`Type`] with the given type arguments.
///
/// * `dtype` - The element type of the `ndarray`, or [`None`] if the type variable is not
///   specialized.
/// * `ndims` - The number of dimensions of the `ndarray`, or [`None`] if the type variable is not
///   specialized.
pub fn make_ndarray_ty(
    unifier: &mut Unifier,
    primitives: &PrimitiveStore,
    dtype: Option<Type>,
    ndims: Option<Type>,
) -> Type {
    subst_ndarray_tvars(unifier, primitives.ndarray, dtype, ndims)
}

/// Substitutes type variables in `ndarray`.
///
/// * `dtype` - The element type of the `ndarray`, or [`None`] if the type variable is not
///   specialized.
/// * `ndims` - The number of dimensions of the `ndarray`, or [`None`] if the type variable is not
///   specialized.
pub fn subst_ndarray_tvars(
    unifier: &mut Unifier,
    ndarray: Type,
    dtype: Option<Type>,
    ndims: Option<Type>,
) -> Type {
    let TypeEnum::TObj { obj_id, params, .. } = &*unifier.get_ty_immutable(ndarray) else {
        panic!("Expected `ndarray` to be TObj, but got {}", unifier.stringify(ndarray))
    };
    debug_assert_eq!(*obj_id, PrimDef::NDArray.id());

    if dtype.is_none() && ndims.is_none() {
        return ndarray;
    }

    let tvar_ids = params.iter().map(|(obj_id, _)| *obj_id).collect_vec();
    debug_assert_eq!(tvar_ids.len(), 2);

    let mut tvar_subst = VarMap::new();
    if let Some(dtype) = dtype {
        tvar_subst.insert(tvar_ids[0], dtype);
    }
    if let Some(ndims) = ndims {
        tvar_subst.insert(tvar_ids[1], ndims);
    }

    unifier.subst(ndarray, &tvar_subst).unwrap_or(ndarray)
}

fn unpack_ndarray_tvars(unifier: &mut Unifier, ndarray: Type) -> Vec<(TypeVarId, Type)> {
    let TypeEnum::TObj { obj_id, params, .. } = &*unifier.get_ty_immutable(ndarray) else {
        panic!("Expected `ndarray` to be TObj, but got {}", unifier.stringify(ndarray))
    };
    debug_assert_eq!(*obj_id, PrimDef::NDArray.id());
    debug_assert_eq!(params.len(), 2);

    params
        .iter()
        .sorted_by_key(|(obj_id, _)| *obj_id)
        .map(|(var_id, ty)| (*var_id, *ty))
        .collect_vec()
}

/// Unpacks the type variable IDs of `ndarray` into a tuple. The elements of the tuple correspond
/// to `dtype` (the element type) and `ndims` (the number of dimensions) of the `ndarray`
/// respectively.
pub fn unpack_ndarray_var_ids(unifier: &mut Unifier, ndarray: Type) -> (TypeVarId, TypeVarId) {
    unpack_ndarray_tvars(unifier, ndarray).into_iter().map(|v| v.0).collect_tuple().unwrap()
}

/// Unpacks the type variables of `ndarray` into a tuple. The elements of the tuple correspond to
/// `dtype` (the element type) and `ndims` (the number of dimensions) of the `ndarray` respectively.
pub fn unpack_ndarray_var_tys(unifier: &mut Unifier, ndarray: Type) -> (Type, Type) {
    unpack_ndarray_tvars(unifier, ndarray).into_iter().map(|v| v.1).collect_tuple().unwrap()
}
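The substitution performed by `subst_ndarray_tvars` above — replace only the type variables that were actually supplied and leave the rest untouched — can be sketched in Python. This is a hedged stand-in, not nac3core API: the `(dtype, ndims)` pair models the two type variables of the `ndarray` object type.

```python
# Hypothetical stand-in for subst_ndarray_tvars: an ndarray type is modeled
# as a (dtype, ndims) pair; None arguments leave that slot unchanged, and
# supplying neither returns the type as-is (mirroring the early return).
def subst_ndarray(ndarray, dtype=None, ndims=None):
    cur_dtype, cur_ndims = ndarray
    if dtype is None and ndims is None:
        return ndarray  # nothing to substitute
    return (
        dtype if dtype is not None else cur_dtype,
        ndims if ndims is not None else cur_ndims,
    )

assert subst_ndarray(("T", "N"), dtype="float64") == ("float64", "N")
assert subst_ndarray(("T", "N")) == ("T", "N")
```

The real function does the same thing through the unifier: it builds a `VarMap` containing only the supplied variables and calls `Unifier::subst`, falling back to the original type when no substitution applies.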
@@ -0,0 +1,12 @@
---
source: nac3core/src/toplevel/test.rs
expression: res_vec
---
[
    "Class {\nname: \"Generic_A\",\nancestors: [\"Generic_A[V]\", \"B\"],\nfields: [\"aa\", \"a\"],\nmethods: [(\"__init__\", \"fn[[], none]\"), (\"foo\", \"fn[[b:T], none]\"), (\"fun\", \"fn[[a:int32], V]\")],\ntype_vars: [\"V\"]\n}\n",
    "Function {\nname: \"Generic_A.__init__\",\nsig: \"fn[[], none]\",\nvar_id: []\n}\n",
    "Function {\nname: \"Generic_A.fun\",\nsig: \"fn[[a:int32], V]\",\nvar_id: [TypeVarId(245)]\n}\n",
    "Class {\nname: \"B\",\nancestors: [\"B\"],\nfields: [\"aa\"],\nmethods: [(\"__init__\", \"fn[[], none]\"), (\"foo\", \"fn[[b:T], none]\")],\ntype_vars: []\n}\n",
    "Function {\nname: \"B.__init__\",\nsig: \"fn[[], none]\",\nvar_id: []\n}\n",
    "Function {\nname: \"B.foo\",\nsig: \"fn[[b:T], none]\",\nvar_id: []\n}\n",
]
@@ -0,0 +1,15 @@
---
source: nac3core/src/toplevel/test.rs
expression: res_vec
---
[
    "Class {\nname: \"A\",\nancestors: [\"A[T]\"],\nfields: [\"a\", \"b\", \"c\"],\nmethods: [(\"__init__\", \"fn[[t:T], none]\"), (\"fun\", \"fn[[a:int32, b:T], list[virtual[B[bool]]]]\"), (\"foo\", \"fn[[c:C], none]\")],\ntype_vars: [\"T\"]\n}\n",
    "Function {\nname: \"A.__init__\",\nsig: \"fn[[t:T], none]\",\nvar_id: []\n}\n",
    "Function {\nname: \"A.fun\",\nsig: \"fn[[a:int32, b:T], list[virtual[B[bool]]]]\",\nvar_id: []\n}\n",
    "Function {\nname: \"A.foo\",\nsig: \"fn[[c:C], none]\",\nvar_id: []\n}\n",
    "Class {\nname: \"B\",\nancestors: [\"B[typevar234]\", \"A[float]\"],\nfields: [\"a\", \"b\", \"c\", \"d\"],\nmethods: [(\"__init__\", \"fn[[], none]\"), (\"fun\", \"fn[[a:int32, b:T], list[virtual[B[bool]]]]\"), (\"foo\", \"fn[[c:C], none]\")],\ntype_vars: [\"typevar234\"]\n}\n",
    "Function {\nname: \"B.__init__\",\nsig: \"fn[[], none]\",\nvar_id: []\n}\n",
    "Function {\nname: \"B.fun\",\nsig: \"fn[[a:int32, b:T], list[virtual[B[bool]]]]\",\nvar_id: []\n}\n",
    "Class {\nname: \"C\",\nancestors: [\"C\", \"B[bool]\", \"A[float]\"],\nfields: [\"a\", \"b\", \"c\", \"d\", \"e\"],\nmethods: [(\"__init__\", \"fn[[], none]\"), (\"fun\", \"fn[[a:int32, b:T], list[virtual[B[bool]]]]\"), (\"foo\", \"fn[[c:C], none]\")],\ntype_vars: []\n}\n",
    "Function {\nname: \"C.__init__\",\nsig: \"fn[[], none]\",\nvar_id: []\n}\n",
]
@@ -0,0 +1,13 @@
---
source: nac3core/src/toplevel/test.rs
expression: res_vec
---
[
    "Function {\nname: \"foo\",\nsig: \"fn[[a:list[int32], b:tuple[T, float]], A[B, bool]]\",\nvar_id: []\n}\n",
    "Class {\nname: \"A\",\nancestors: [\"A[T, V]\"],\nfields: [\"a\", \"b\"],\nmethods: [(\"__init__\", \"fn[[v:V], none]\"), (\"fun\", \"fn[[a:T], V]\")],\ntype_vars: [\"T\", \"V\"]\n}\n",
    "Function {\nname: \"A.__init__\",\nsig: \"fn[[v:V], none]\",\nvar_id: [TypeVarId(247)]\n}\n",
    "Function {\nname: \"A.fun\",\nsig: \"fn[[a:T], V]\",\nvar_id: [TypeVarId(252)]\n}\n",
    "Function {\nname: \"gfun\",\nsig: \"fn[[a:A[list[float], int32]], none]\",\nvar_id: []\n}\n",
    "Class {\nname: \"B\",\nancestors: [\"B\"],\nfields: [],\nmethods: [(\"__init__\", \"fn[[], none]\")],\ntype_vars: []\n}\n",
    "Function {\nname: \"B.__init__\",\nsig: \"fn[[], none]\",\nvar_id: []\n}\n",
]
@@ -0,0 +1,13 @@
---
source: nac3core/src/toplevel/test.rs
expression: res_vec
---
[
    "Class {\nname: \"A\",\nancestors: [\"A[typevar233, typevar234]\"],\nfields: [\"a\", \"b\"],\nmethods: [(\"__init__\", \"fn[[a:A[float, bool], b:B], none]\"), (\"fun\", \"fn[[a:A[float, bool]], A[bool, int32]]\")],\ntype_vars: [\"typevar233\", \"typevar234\"]\n}\n",
    "Function {\nname: \"A.__init__\",\nsig: \"fn[[a:A[float, bool], b:B], none]\",\nvar_id: []\n}\n",
    "Function {\nname: \"A.fun\",\nsig: \"fn[[a:A[float, bool]], A[bool, int32]]\",\nvar_id: []\n}\n",
    "Class {\nname: \"B\",\nancestors: [\"B\", \"A[int64, bool]\"],\nfields: [\"a\", \"b\"],\nmethods: [(\"__init__\", \"fn[[], none]\"), (\"fun\", \"fn[[a:A[float, bool]], A[bool, int32]]\"), (\"foo\", \"fn[[b:B], B]\"), (\"bar\", \"fn[[a:A[list[B], int32]], tuple[A[virtual[A[B, int32]], bool], B]]\")],\ntype_vars: []\n}\n",
    "Function {\nname: \"B.__init__\",\nsig: \"fn[[], none]\",\nvar_id: []\n}\n",
    "Function {\nname: \"B.foo\",\nsig: \"fn[[b:B], B]\",\nvar_id: []\n}\n",
    "Function {\nname: \"B.bar\",\nsig: \"fn[[a:A[list[B], int32]], tuple[A[virtual[A[B, int32]], bool], B]]\",\nvar_id: []\n}\n",
]
@@ -0,0 +1,17 @@
---
source: nac3core/src/toplevel/test.rs
expression: res_vec
---
[
    "Class {\nname: \"A\",\nancestors: [\"A\"],\nfields: [\"a\"],\nmethods: [(\"__init__\", \"fn[[], none]\"), (\"fun\", \"fn[[b:B], none]\"), (\"foo\", \"fn[[a:T, b:V], none]\")],\ntype_vars: []\n}\n",
    "Function {\nname: \"A.__init__\",\nsig: \"fn[[], none]\",\nvar_id: []\n}\n",
    "Function {\nname: \"A.fun\",\nsig: \"fn[[b:B], none]\",\nvar_id: []\n}\n",
    "Function {\nname: \"A.foo\",\nsig: \"fn[[a:T, b:V], none]\",\nvar_id: [TypeVarId(253)]\n}\n",
    "Class {\nname: \"B\",\nancestors: [\"B\", \"C\", \"A\"],\nfields: [\"a\"],\nmethods: [(\"__init__\", \"fn[[], none]\"), (\"fun\", \"fn[[b:B], none]\"), (\"foo\", \"fn[[a:T, b:V], none]\")],\ntype_vars: []\n}\n",
    "Function {\nname: \"B.__init__\",\nsig: \"fn[[], none]\",\nvar_id: []\n}\n",
    "Class {\nname: \"C\",\nancestors: [\"C\", \"A\"],\nfields: [\"a\"],\nmethods: [(\"__init__\", \"fn[[], none]\"), (\"fun\", \"fn[[b:B], none]\"), (\"foo\", \"fn[[a:T, b:V], none]\")],\ntype_vars: []\n}\n",
    "Function {\nname: \"C.__init__\",\nsig: \"fn[[], none]\",\nvar_id: []\n}\n",
    "Function {\nname: \"C.fun\",\nsig: \"fn[[b:B], none]\",\nvar_id: []\n}\n",
    "Function {\nname: \"foo\",\nsig: \"fn[[a:A], none]\",\nvar_id: []\n}\n",
    "Function {\nname: \"ff\",\nsig: \"fn[[a:T], V]\",\nvar_id: [TypeVarId(261)]\n}\n",
]
@@ -0,0 +1,9 @@
---
source: nac3core/src/toplevel/test.rs
assertion_line: 549
expression: res_vec

---
[
    "Class {\nname: \"A\",\nancestors: [\"A\"],\nfields: [],\nmethods: [],\ntype_vars: []\n}\n",
]
@@ -0,0 +1,829 @@
use super::*;
use crate::toplevel::helper::PrimDef;
use crate::typecheck::typedef::into_var_map;
use crate::{
    codegen::CodeGenContext,
    symbol_resolver::{SymbolResolver, ValueEnum},
    toplevel::DefinitionId,
    typecheck::{
        type_inferencer::PrimitiveStore,
        typedef::{Type, Unifier},
    },
};
use indoc::indoc;
use nac3parser::ast::FileName;
use nac3parser::{ast::fold::Fold, parser::parse_program};
use parking_lot::Mutex;
use std::{collections::HashMap, sync::Arc};
use test_case::test_case;

struct ResolverInternal {
    id_to_type: Mutex<HashMap<StrRef, Type>>,
    id_to_def: Mutex<HashMap<StrRef, DefinitionId>>,
    class_names: Mutex<HashMap<StrRef, Type>>,
}

impl ResolverInternal {
    fn add_id_def(&self, id: StrRef, def: DefinitionId) {
        self.id_to_def.lock().insert(id, def);
    }

    fn add_id_type(&self, id: StrRef, ty: Type) {
        self.id_to_type.lock().insert(id, ty);
    }
}

struct Resolver(Arc<ResolverInternal>);

impl SymbolResolver for Resolver {
    fn get_default_param_value(
        &self,
        _: &ast::Expr,
    ) -> Option<crate::symbol_resolver::SymbolValue> {
        unimplemented!()
    }

    fn get_symbol_type(
        &self,
        _: &mut Unifier,
        _: &[Arc<RwLock<TopLevelDef>>],
        _: &PrimitiveStore,
        str: StrRef,
    ) -> Result<Type, String> {
        self.0
            .id_to_type
            .lock()
            .get(&str)
            .copied()
            .ok_or_else(|| format!("cannot find symbol `{str}`"))
    }

    fn get_symbol_value<'ctx>(
        &self,
        _: StrRef,
        _: &mut CodeGenContext<'ctx, '_>,
    ) -> Option<ValueEnum<'ctx>> {
        unimplemented!()
    }

    fn get_identifier_def(&self, id: StrRef) -> Result<DefinitionId, HashSet<String>> {
        self.0
            .id_to_def
            .lock()
            .get(&id)
            .copied()
            .ok_or_else(|| HashSet::from(["Unknown identifier".to_string()]))
    }

    fn get_string_id(&self, _: &str) -> i32 {
        unimplemented!()
    }

    fn get_exception_id(&self, _tyid: usize) -> usize {
        unimplemented!()
    }
}
#[test_case(
    vec![
        indoc! {"
            def fun(a: int32) -> int32:
                return a
        "},
        indoc! {"
            class A:
                def __init__(self):
                    self.a: int32 = 3
        "},
        indoc! {"
            class B:
                def __init__(self):
                    self.b: float = 4.3

                def fun(self):
                    self.b = self.b + 3.0
        "},
        indoc! {"
            def foo(a: float):
                a + 1.0
        "},
        indoc! {"
            class C(B):
                def __init__(self):
                    self.c: int32 = 4
                    self.a: bool = True
        "},
    ];
    "register"
)]
fn test_simple_register(source: Vec<&str>) {
    let mut composer = TopLevelComposer::new(Vec::new(), ComposerConfig::default(), 64).0;

    for s in source {
        let ast = parse_program(s, FileName::default()).unwrap();
        let ast = ast[0].clone();

        composer.register_top_level(ast, None, "", false).unwrap();
    }
}
#[test_case(
    indoc! {"
        class A:
            def foo(self):
                pass
        a = A()
    "};
    "register"
)]
fn test_simple_register_without_constructor(source: &str) {
    let mut composer = TopLevelComposer::new(Vec::new(), ComposerConfig::default(), 64).0;
    let ast = parse_program(source, FileName::default()).unwrap();
    let ast = ast[0].clone();
    composer.register_top_level(ast, None, "", true).unwrap();
}
#[test_case(
    &[
        indoc! {"
            def fun(a: int32) -> int32:
                return a
        "},
        indoc! {"
            def foo(a: float):
                a + 1.0
        "},
        indoc! {"
            def f(b: int64) -> int32:
                return 3
        "},
    ],
    &[
        "fn[[a:0], 0]",
        "fn[[a:2], 4]",
        "fn[[b:1], 0]",
    ],
    &[
        "fun",
        "foo",
        "f"
    ];
    "function compose"
)]
fn test_simple_function_analyze(source: &[&str], tys: &[&str], names: &[&str]) {
    let mut composer = TopLevelComposer::new(Vec::new(), ComposerConfig::default(), 64).0;

    let internal_resolver = Arc::new(ResolverInternal {
        id_to_def: Mutex::default(),
        id_to_type: Mutex::default(),
        class_names: Mutex::default(),
    });
    let resolver =
        Arc::new(Resolver(internal_resolver.clone())) as Arc<dyn SymbolResolver + Send + Sync>;

    for s in source {
        let ast = parse_program(s, FileName::default()).unwrap();
        let ast = ast[0].clone();

        let (id, def_id, ty) =
            composer.register_top_level(ast, Some(resolver.clone()), "", false).unwrap();
        internal_resolver.add_id_def(id, def_id);
        if let Some(ty) = ty {
            internal_resolver.add_id_type(id, ty);
        }
    }

    composer.start_analysis(true).unwrap();

    for (i, (def, _)) in composer.definition_ast_list.iter().skip(composer.builtin_num).enumerate()
    {
        let def = &*def.read();
        if let TopLevelDef::Function { signature, name, .. } = def {
            let ty_str = composer.unifier.internal_stringify(
                *signature,
                &mut |id| id.to_string(),
                &mut |id| id.to_string(),
                &mut None,
            );
            assert_eq!(ty_str, tys[i]);
            assert_eq!(name, names[i]);
        }
    }
}
#[test_case(
    &[
        indoc! {"
            class A():
                a: int32
                def __init__(self):
                    self.a = 3
                def fun(self, b: B):
                    pass
                def foo(self, a: T, b: V):
                    pass
        "},
        indoc! {"
            class B(C):
                def __init__(self):
                    pass
        "},
        indoc! {"
            class C(A):
                def __init__(self):
                    pass
                def fun(self, b: B):
                    a = 1
                    pass
        "},
        indoc! {"
            def foo(a: A):
                pass
        "},
        indoc! {"
            def ff(a: T) -> V:
                pass
        "}
    ],
    &[];
    "simple class compose"
)]
#[test_case(
    &[
        indoc! {"
            class Generic_A(Generic[V], B):
                a: int64
                def __init__(self):
                    self.a = 123123123123
                def fun(self, a: int32) -> V:
                    pass
        "},
        indoc! {"
            class B:
                aa: bool
                def __init__(self):
                    self.aa = False
                def foo(self, b: T):
                    pass
        "}
    ],
    &[];
    "generic class"
)]
#[test_case(
    &[
        indoc! {"
            def foo(a: list[int32], b: tuple[T, float]) -> A[B, bool]:
                pass
        "},
        indoc! {"
            class A(Generic[T, V]):
                a: T
                b: V
                def __init__(self, v: V):
                    self.a = 1
                    self.b = v
                def fun(self, a: T) -> V:
                    pass
        "},
        indoc! {"
            def gfun(a: A[list[float], int32]):
                pass
        "},
        indoc! {"
            class B:
                def __init__(self):
                    pass
        "}
    ],
    &[];
    "list tuple generic"
)]
#[test_case(
    &[
        indoc! {"
            class A(Generic[T, V]):
                a: A[float, bool]
                b: B
                def __init__(self, a: A[float, bool], b: B):
                    self.a = a
                    self.b = b
                def fun(self, a: A[float, bool]) -> A[bool, int32]:
                    pass
        "},
        indoc! {"
            class B(A[int64, bool]):
                def __init__(self):
                    pass
                def foo(self, b: B) -> B:
                    pass
                def bar(self, a: A[list[B], int32]) -> tuple[A[virtual[A[B, int32]], bool], B]:
                    pass
        "}
    ],
    &[];
    "self1"
)]
#[test_case(
    &[
        indoc! {"
            class A(Generic[T]):
                a: int32
                b: T
                c: A[int64]
                def __init__(self, t: T):
                    self.a = 3
                    self.b = T
                def fun(self, a: int32, b: T) -> list[virtual[B[bool]]]:
                    pass
                def foo(self, c: C):
                    pass
        "},
        indoc! {"
            class B(Generic[V], A[float]):
                d: C
                def __init__(self):
                    pass
                def fun(self, a: int32, b: T) -> list[virtual[B[bool]]]:
                    # override
                    pass
        "},
        indoc! {"
            class C(B[bool]):
                e: int64
                def __init__(self):
                    pass
        "}
    ],
    &[];
    "inheritance_override"
)]
#[test_case(
    &[
        indoc! {"
            class A(Generic[T]):
                def __init__(self):
                    pass
                def fun(self, a: A[T]) -> A[T]:
                    pass
        "}
    ],
    &["application of type vars to generic class is not currently supported (at unknown:4:24)"];
    "err no type var in generic app"
)]
#[test_case(
    &[
        indoc! {"
            class A(B):
                def __init__(self):
                    pass
        "},
        indoc! {"
            class B(A):
                def __init__(self):
                    pass
        "}
    ],
    &["cyclic inheritance detected"];
    "cyclic1"
)]
#[test_case(
    &[
        indoc! {"
            class A(B[bool, int64]):
                def __init__(self):
                    pass
        "},
        indoc! {"
            class B(Generic[V, T], C[int32]):
                def __init__(self):
                    pass
        "},
        indoc! {"
            class C(Generic[T], A):
                def __init__(self):
                    pass
        "},
    ],
    &["cyclic inheritance detected"];
    "cyclic2"
)]
#[test_case(
    &[
        indoc! {"
            class A:
                pass
        "}
    ],
    &["5: Class {\nname: \"A\",\ndef_id: DefinitionId(5),\nancestors: [CustomClassKind { id: DefinitionId(5), params: [] }],\nfields: [],\nmethods: [],\ntype_vars: []\n}"];
    "simple pass in class"
)]
#[test_case(
    &[indoc! {"
        class A:
            def __init__():
                pass
    "}],
    &["__init__ method must have a `self` parameter (at unknown:2:5)"];
    "err no self_1"
)]
#[test_case(
    &[
        indoc! {"
            class A(B, Generic[T], C):
                def __init__(self):
                    pass
        "},
        indoc! {"
            class B:
                def __init__(self):
                    pass
        "},
        indoc! {"
            class C:
                def __init__(self):
                    pass
        "}
    ],
    &["a class definition can only have at most one base class declaration and one generic declaration (at unknown:1:24)"];
    "err multiple inheritance"
)]
#[test_case(
    &[
        indoc! {"
            class A(Generic[T]):
                a: int32
                b: T
                c: A[int64]
                def __init__(self, t: T):
                    self.a = 3
                    self.b = T
                def fun(self, a: int32, b: T) -> list[virtual[B[bool]]]:
                    pass
        "},
        indoc! {"
            class B(Generic[V], A[float]):
                def __init__(self):
                    pass
                def fun(self, a: int32, b: T) -> list[virtual[B[int32]]]:
                    # override
                    pass
        "}
    ],
    &["method fun has same name as ancestors' method, but incompatible type"];
    "err_incompatible_inheritance_method"
)]
#[test_case(
    &[
        indoc! {"
            class A(Generic[T]):
                a: int32
                b: T
                c: A[int64]
                def __init__(self, t: T):
                    self.a = 3
                    self.b = T
                def fun(self, a: int32, b: T) -> list[virtual[B[bool]]]:
                    pass
        "},
        indoc! {"
            class B(Generic[V], A[float]):
                a: int32
                def __init__(self):
                    pass
                def fun(self, a: int32, b: T) -> list[virtual[B[bool]]]:
                    # override
                    pass
        "}
    ],
    &["field `a` has already declared in the ancestor classes"];
    "err_incompatible_inheritance_field"
)]
#[test_case(
    &[
        indoc! {"
            class A:
                def __init__(self):
                    pass
        "},
        indoc! {"
            class A:
                a: int32
                def __init__(self):
                    pass
        "}
    ],
    &["duplicate definition of class `A` (at unknown:1:1)"];
    "class same name"
)]
fn test_analyze(source: &[&str], res: &[&str]) {
    let print = false;
    let mut composer = TopLevelComposer::new(Vec::new(), ComposerConfig::default(), 64).0;

    let internal_resolver = make_internal_resolver_with_tvar(
        vec![
            ("T".into(), vec![]),
            ("V".into(), vec![composer.primitives_ty.bool, composer.primitives_ty.int32]),
            ("G".into(), vec![composer.primitives_ty.bool, composer.primitives_ty.int64]),
        ],
        &mut composer.unifier,
        print,
    );
    let resolver =
        Arc::new(Resolver(internal_resolver.clone())) as Arc<dyn SymbolResolver + Send + Sync>;

    for s in source {
        let ast = parse_program(s, FileName::default()).unwrap();
        let ast = ast[0].clone();

        let (id, def_id, ty) = {
            match composer.register_top_level(ast, Some(resolver.clone()), "", false) {
                Ok(x) => x,
                Err(msg) => {
                    if print {
                        println!("{msg}");
                    } else {
                        assert_eq!(res[0], msg);
                    }
                    return;
                }
            }
        };
        internal_resolver.add_id_def(id, def_id);
        if let Some(ty) = ty {
            internal_resolver.add_id_type(id, ty);
        }
    }

    if let Err(msg) = composer.start_analysis(false) {
        if print {
            println!("{}", msg.iter().sorted().join("\n----------\n"));
        } else {
            assert_eq!(res[0], msg.iter().next().unwrap());
        }
    } else {
        // skip the first `builtin_num` definitions to skip primitives
        let mut res_vec: Vec<String> = Vec::new();
        for (def, _) in composer.definition_ast_list.iter().skip(composer.builtin_num) {
            let def = &*def.read();
            res_vec.push(format!("{}\n", def.to_string(composer.unifier.borrow_mut())));
        }
        insta::assert_debug_snapshot!(res_vec);
    }
}
#[test_case(
    vec![
        indoc! {"
            def fun(a: int32, b: int32) -> int32:
                return a + b
        "},
        indoc! {"
            def fib(n: int32) -> int32:
                if n <= 2:
                    return 1
                a = fib(n - 1)
                b = fib(n - 2)
                return fib(n - 1)
        "}
    ],
    &[];
    "simple function"
)]
#[test_case(
    vec![
        indoc! {"
            class A:
                a: int32
                def __init__(self):
                    self.a = 3
                def fun(self) -> int32:
                    b = self.a + 3
                    return b * self.a
                def clone(self) -> A:
                    SELF = self
                    return SELF
                def sum(self) -> int32:
                    if self.a == 0:
                        return self.a
                    else:
                        a = self.a
                        self.a = self.a - 1
                        return a + self.sum()
                def fib(self, a: int32) -> int32:
                    if a <= 2:
                        return 1
                    return self.fib(a - 1) + self.fib(a - 2)
        "},
        indoc! {"
            def fun(a: A) -> int32:
                return a.fun() + 2
        "}
    ],
    &[];
    "simple class body"
)]
#[test_case(
    vec![
        indoc! {"
            def fun(a: V, c: G, t: T) -> V:
                b = a
                cc = c
                ret = fun(b, cc, t)
                return ret * ret
        "},
        indoc! {"
            def sum_three(l: list[V]) -> V:
                return l[0] + l[1] + l[2]
        "},
        indoc! {"
            def sum_sq_pair(p: tuple[V, V]) -> list[V]:
                a = p[0]
                b = p[1]
                a = a**a
                b = b**b
                return [a, b]
        "}
    ],
    &[];
    "type var fun"
)]
#[test_case(
    vec![
        indoc! {"
            class A(Generic[G]):
                a: G
                b: bool
                def __init__(self, aa: G):
                    self.a = aa
                    if 2 > 1:
                        self.b = True
                    else:
                        # self.b = False
                        pass
                def fun(self, a: G) -> list[G]:
                    ret = [a, self.a]
                    return ret if self.b else self.fun(self.a)
        "}
    ],
    &[];
    "type var class"
)]
#[test_case(
    vec![
        indoc! {"
            class A:
                def fun(self):
                    pass
        "},
        indoc! {"
            class B:
                a: int32
                b: bool
                def __init__(self):
                    # self.b = False
                    if 3 > 2:
                        self.a = 3
                        self.b = False
                    else:
                        self.a = 4
                        self.b = True
        "}
    ],
    &[];
    "no_init_inst_check"
)]
fn test_inference(source: Vec<&str>, res: &[&str]) {
    let print = true;
    let mut composer = TopLevelComposer::new(Vec::new(), ComposerConfig::default(), 64).0;

    let internal_resolver = make_internal_resolver_with_tvar(
        vec![
            ("T".into(), vec![]),
            (
                "V".into(),
                vec![
                    composer.primitives_ty.float,
                    composer.primitives_ty.int32,
                    composer.primitives_ty.int64,
                ],
            ),
            ("G".into(), vec![composer.primitives_ty.bool, composer.primitives_ty.int64]),
        ],
        &mut composer.unifier,
        print,
    );
    let resolver =
        Arc::new(Resolver(internal_resolver.clone())) as Arc<dyn SymbolResolver + Send + Sync>;

    for s in source {
        let ast = parse_program(s, FileName::default()).unwrap();
        let ast = ast[0].clone();

        let (id, def_id, ty) = {
            match composer.register_top_level(ast, Some(resolver.clone()), "", false) {
                Ok(x) => x,
                Err(msg) => {
                    if print {
                        println!("{msg}");
                    } else {
                        assert_eq!(res[0], msg);
                    }
                    return;
                }
            }
        };
        internal_resolver.add_id_def(id, def_id);
        if let Some(ty) = ty {
            internal_resolver.add_id_type(id, ty);
        }
    }

    if let Err(msg) = composer.start_analysis(true) {
        if print {
            println!("{}", msg.iter().sorted().join("\n----------\n"));
        } else {
            assert_eq!(res[0], msg.iter().next().unwrap());
        }
    } else {
        // skip the builtin definitions (primitives)
        let mut stringify_folder = TypeToStringFolder { unifier: &mut composer.unifier };
        for (def, _) in composer.definition_ast_list.iter().skip(composer.builtin_num) {
            let def = &*def.read();

            if let TopLevelDef::Function { instance_to_stmt, name, .. } = def {
                println!(
                    "=========`{}`: number of instances: {}===========",
                    name,
                    instance_to_stmt.len()
                );
                for inst in instance_to_stmt {
                    let ast = &inst.1.body;
                    for b in ast.iter() {
                        println!("{:?}", stringify_folder.fold_stmt(b.clone()).unwrap());
                        println!("--------------------");
                    }
                    println!("\n");
                }
            }
        }
    }
}

fn make_internal_resolver_with_tvar(
    tvars: Vec<(StrRef, Vec<Type>)>,
    unifier: &mut Unifier,
    print: bool,
) -> Arc<ResolverInternal> {
    let list_elem_tvar = unifier.get_fresh_var(Some("list_elem".into()), None);
    let list = unifier.add_ty(TypeEnum::TObj {
        obj_id: PrimDef::List.id(),
        fields: HashMap::new(),
        params: into_var_map([list_elem_tvar]),
    });

    let res: Arc<ResolverInternal> = ResolverInternal {
        id_to_def: Mutex::new(HashMap::from([("list".into(), PrimDef::List.id())])),
        id_to_type: tvars
            .into_iter()
            .map(|(name, range)| {
                (name, {
                    let tvar = unifier.get_fresh_var_with_range(range.as_slice(), None, None);
                    if print {
                        println!("{}: {:?}, typevar{}", name, tvar.ty, tvar.id);
                    }
                    tvar.ty
                })
            })
            .collect::<HashMap<_, _>>()
            .into(),
        class_names: Mutex::new(HashMap::from([("list".into(), list)])),
    }
    .into();
    if print {
        println!();
    }
    res
}

struct TypeToStringFolder<'a> {
    unifier: &'a mut Unifier,
}

impl<'a> Fold<Option<Type>> for TypeToStringFolder<'a> {
    type TargetU = String;
    type Error = String;
    fn map_user(&mut self, user: Option<Type>) -> Result<Self::TargetU, Self::Error> {
        Ok(if let Some(ty) = user {
            self.unifier.internal_stringify(
                ty,
                &mut |id| format!("class{id}"),
                &mut |id| format!("typevar{id}"),
                &mut None,
            )
        } else {
            "None".into()
        })
    }
}
@@ -0,0 +1,606 @@
use super::*;
use crate::symbol_resolver::SymbolValue;
use crate::toplevel::helper::PrimDef;
use crate::typecheck::typedef::VarMap;
use nac3parser::ast::Constant;

#[derive(Clone, Debug)]
pub enum TypeAnnotation {
    Primitive(Type),
    // type variables in `params` are used to represent the self type
    CustomClass {
        id: DefinitionId,
        // params may also be type variables
        params: Vec<TypeAnnotation>,
    },
    // the inner annotation can only be a `CustomClass` kind
    Virtual(Box<TypeAnnotation>),
    TypeVar(Type),
    /// A `Literal` allowing a subset of literals.
    Literal(Vec<Constant>),
    Tuple(Vec<TypeAnnotation>),
}

impl TypeAnnotation {
    pub fn stringify(&self, unifier: &mut Unifier) -> String {
        use TypeAnnotation::*;
        match self {
            Primitive(ty) | TypeVar(ty) => unifier.stringify(*ty),
            CustomClass { id, params } => {
                let class_name = if let Some(ref top) = unifier.top_level {
                    if let TopLevelDef::Class { name, .. } = &*top.definitions.read()[id.0].read() {
                        (*name).into()
                    } else {
                        unreachable!()
                    }
                } else {
                    format!("class_def_{}", id.0)
                };
                format!("{}{}", class_name, {
                    let param_list =
                        params.iter().map(|p| p.stringify(unifier)).collect_vec().join(", ");
                    if param_list.is_empty() {
                        String::new()
                    } else {
                        format!("[{param_list}]")
                    }
                })
            }
            Literal(values) => {
                format!("Literal({})", values.iter().map(|v| format!("{v:?}")).join(", "))
            }
            Virtual(ty) => format!("virtual[{}]", ty.stringify(unifier)),
            Tuple(types) => {
                format!(
                    "tuple[{}]",
                    types.iter().map(|p| p.stringify(unifier)).collect_vec().join(", ")
                )
            }
        }
    }
}
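As an aside (not part of the diff), the recursive pretty-printing strategy of `stringify` above can be illustrated with a minimal standalone sketch. The `Ann` enum, its variant names, and `stringify` below are hypothetical stand-ins, not the nac3 API: leaves print their own name, and composite nodes recurse over their children and join the results with `", "`.

```rust
// Hypothetical miniature of TypeAnnotation::stringify: a recursive
// tree-to-string conversion over a small annotation enum.
enum Ann {
    Prim(&'static str),
    Class { name: &'static str, params: Vec<Ann> },
    Virtual(Box<Ann>),
    Tuple(Vec<Ann>),
}

fn stringify(a: &Ann) -> String {
    match a {
        // leaf: print the primitive's name directly
        Ann::Prim(n) => (*n).to_string(),
        // class: omit the bracket list entirely when there are no parameters
        Ann::Class { name, params } => {
            if params.is_empty() {
                (*name).to_string()
            } else {
                let p: Vec<String> = params.iter().map(stringify).collect();
                format!("{}[{}]", name, p.join(", "))
            }
        }
        // wrappers recurse into their children
        Ann::Virtual(inner) => format!("virtual[{}]", stringify(inner)),
        Ann::Tuple(tys) => {
            let p: Vec<String> = tys.iter().map(stringify).collect();
            format!("tuple[{}]", p.join(", "))
        }
    }
}

fn main() {
    let ann = Ann::Virtual(Box::new(Ann::Class {
        name: "A",
        params: vec![Ann::Prim("int32"), Ann::Tuple(vec![Ann::Prim("bool")])],
    }));
    println!("{}", stringify(&ann)); // virtual[A[int32, tuple[bool]]]
}
```

The real implementation differs mainly in that class names and type variables are resolved through the `Unifier` and the top-level definition list rather than stored inline.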

/// Parses an AST expression `expr` into a [`TypeAnnotation`].
///
/// * `locked` - A [`HashMap`] containing the IDs of known definitions, mapped to a [`Vec`] of all
///   generic variables associated with the definition.
/// * `type_var` - The type variable associated with the type argument currently being parsed. Pass
///   [`None`] when this function is invoked externally.
pub fn parse_ast_to_type_annotation_kinds<T, S: std::hash::BuildHasher + Clone>(
    resolver: &(dyn SymbolResolver + Send + Sync),
    top_level_defs: &[Arc<RwLock<TopLevelDef>>],
    unifier: &mut Unifier,
    primitives: &PrimitiveStore,
    expr: &ast::Expr<T>,
    // the key stores the type_var of this topleveldef::class, we only need this field here
    locked: HashMap<DefinitionId, Vec<Type>, S>,
) -> Result<TypeAnnotation, HashSet<String>> {
    let name_handle = |id: &StrRef,
                       unifier: &mut Unifier,
                       locked: HashMap<DefinitionId, Vec<Type>, S>| {
        if id == &"int32".into() {
            Ok(TypeAnnotation::Primitive(primitives.int32))
        } else if id == &"int64".into() {
            Ok(TypeAnnotation::Primitive(primitives.int64))
        } else if id == &"uint32".into() {
            Ok(TypeAnnotation::Primitive(primitives.uint32))
        } else if id == &"uint64".into() {
            Ok(TypeAnnotation::Primitive(primitives.uint64))
        } else if id == &"float".into() {
            Ok(TypeAnnotation::Primitive(primitives.float))
        } else if id == &"bool".into() {
            Ok(TypeAnnotation::Primitive(primitives.bool))
        } else if id == &"str".into() {
            Ok(TypeAnnotation::Primitive(primitives.str))
        } else if id == &"Exception".into() {
            Ok(TypeAnnotation::CustomClass { id: PrimDef::Exception.id(), params: Vec::default() })
        } else if let Ok(obj_id) = resolver.get_identifier_def(*id) {
            let type_vars = {
                let def_read = top_level_defs[obj_id.0].try_read();
                if let Some(def_read) = def_read {
                    if let TopLevelDef::Class { type_vars, .. } = &*def_read {
                        type_vars.clone()
                    } else {
                        return Err(HashSet::from([format!(
                            "function cannot be used as a type (at {})",
                            expr.location
                        )]));
                    }
                } else {
                    locked.get(&obj_id).unwrap().clone()
                }
            };
            // check the parameter count here
            if !type_vars.is_empty() {
                return Err(HashSet::from([format!(
                    "expect {} type variable parameter but got 0 (at {})",
                    type_vars.len(),
                    expr.location,
                )]));
            }
            Ok(TypeAnnotation::CustomClass { id: obj_id, params: vec![] })
        } else if let Ok(ty) = resolver.get_symbol_type(unifier, top_level_defs, primitives, *id) {
            if let TypeEnum::TVar { .. } = unifier.get_ty(ty).as_ref() {
                let var = unifier.get_fresh_var(Some(*id), Some(expr.location)).ty;
                unifier.unify(var, ty).unwrap();
                Ok(TypeAnnotation::TypeVar(ty))
            } else {
                Err(HashSet::from([format!(
                    "`{}` is not a valid type annotation (at {})",
                    id, expr.location
                )]))
            }
        } else {
            Err(HashSet::from([format!(
                "`{}` is not a valid type annotation (at {})",
                id, expr.location
            )]))
        }
    };

    let class_name_handle =
        |id: &StrRef,
         slice: &ast::Expr<T>,
         unifier: &mut Unifier,
         mut locked: HashMap<DefinitionId, Vec<Type>, S>| {
            if ["virtual".into(), "Generic".into(), "tuple".into(), "Option".into()].contains(id) {
                return Err(HashSet::from([format!(
                    "keywords cannot be class name (at {})",
                    expr.location
                )]));
            }
            let obj_id = resolver.get_identifier_def(*id)?;
            let type_vars = {
                let def_read = top_level_defs[obj_id.0].try_read();
                if let Some(def_read) = def_read {
                    let TopLevelDef::Class { type_vars, .. } = &*def_read else {
                        unreachable!("must be class here")
                    };

                    type_vars.clone()
                } else {
                    locked.get(&obj_id).unwrap().clone()
                }
            };
            // we do not check here whether the applied type variables are compatible
            let param_type_infos = {
                let params_ast = if let ast::ExprKind::Tuple { elts, .. } = &slice.node {
                    elts.iter().collect_vec()
                } else {
                    vec![slice]
                };
                if type_vars.len() != params_ast.len() {
                    return Err(HashSet::from([format!(
                        "expect {} type parameters but got {} (at {})",
                        type_vars.len(),
                        params_ast.len(),
                        params_ast[0].location,
                    )]));
                }
                let result = params_ast
                    .iter()
                    .map(|x| {
                        parse_ast_to_type_annotation_kinds(
                            resolver,
                            top_level_defs,
                            unifier,
                            primitives,
                            x,
                            {
                                locked.insert(obj_id, type_vars.clone());
                                locked.clone()
                            },
                        )
                    })
                    .collect::<Result<Vec<_>, _>>()?;
                // make sure the result does not contain any type variables
                let no_type_var =
                    result.iter().all(|x| get_type_var_contained_in_type_annotation(x).is_empty());
                if no_type_var {
                    result
                } else {
                    return Err(HashSet::from([
                        format!(
                            "application of type vars to generic class is not currently supported (at {})",
                            params_ast[0].location
                        ),
                    ]));
                }
            };
            Ok(TypeAnnotation::CustomClass { id: obj_id, params: param_type_infos })
        };

    match &expr.node {
        ast::ExprKind::Name { id, .. } => name_handle(id, unifier, locked),
        // virtual
        ast::ExprKind::Subscript { value, slice, .. }
            if {
                matches!(&value.node, ast::ExprKind::Name { id, .. } if id == &"virtual".into())
            } =>
        {
            let def = parse_ast_to_type_annotation_kinds(
                resolver,
                top_level_defs,
                unifier,
                primitives,
                slice.as_ref(),
                locked,
            )?;
            if !matches!(def, TypeAnnotation::CustomClass { .. }) {
                unreachable!("must be concretized custom class kind in the virtual")
            }
            Ok(TypeAnnotation::Virtual(def.into()))
        }

        // option
        ast::ExprKind::Subscript { value, slice, .. }
            if {
                matches!(&value.node, ast::ExprKind::Name { id, .. } if id == &"Option".into())
            } =>
        {
            let def_ann = parse_ast_to_type_annotation_kinds(
                resolver,
                top_level_defs,
                unifier,
                primitives,
                slice.as_ref(),
                locked,
            )?;
            let id =
                if let TypeEnum::TObj { obj_id, .. } = unifier.get_ty(primitives.option).as_ref() {
                    *obj_id
                } else {
                    unreachable!()
                };
            Ok(TypeAnnotation::CustomClass { id, params: vec![def_ann] })
        }

        // tuple
        ast::ExprKind::Subscript { value, slice, .. }
            if {
                matches!(&value.node, ast::ExprKind::Name { id, .. } if id == &"tuple".into())
            } =>
        {
            let tup_elts = {
                if let ast::ExprKind::Tuple { elts, .. } = &slice.node {
                    elts.as_slice()
                } else {
                    std::slice::from_ref(slice.as_ref())
                }
            };
            let type_annotations = tup_elts
                .iter()
                .map(|e| {
                    parse_ast_to_type_annotation_kinds(
                        resolver,
                        top_level_defs,
                        unifier,
                        primitives,
                        e,
                        locked.clone(),
                    )
                })
                .collect::<Result<Vec<_>, _>>()?;
            Ok(TypeAnnotation::Tuple(type_annotations))
        }

        // Literal
        ast::ExprKind::Subscript { value, slice, .. }
            if {
                matches!(&value.node, ast::ExprKind::Name { id, .. } if id == &"Literal".into())
            } =>
        {
            let tup_elts = {
                if let ast::ExprKind::Tuple { elts, .. } = &slice.node {
                    elts.as_slice()
                } else {
                    std::slice::from_ref(slice.as_ref())
                }
            };
            let type_annotations = tup_elts
                .iter()
                .map(|e| match &e.node {
                    ast::ExprKind::Constant { value, .. } => {
                        Ok(TypeAnnotation::Literal(vec![value.clone()]))
                    }
                    _ => parse_ast_to_type_annotation_kinds(
                        resolver,
                        top_level_defs,
                        unifier,
                        primitives,
                        e,
                        locked.clone(),
                    ),
                })
                .collect::<Result<Vec<_>, _>>()?
                .into_iter()
                .flat_map(|type_ann| match type_ann {
                    TypeAnnotation::Literal(values) => values,
                    _ => unreachable!(),
                })
                .collect_vec();

            if type_annotations.len() == 1 {
                Ok(TypeAnnotation::Literal(type_annotations))
            } else {
                Err(HashSet::from([format!(
                    "multiple literal bounds are currently unsupported (at {})",
                    value.location
                )]))
            }
        }

        // custom class
        ast::ExprKind::Subscript { value, slice, .. } => {
            if let ast::ExprKind::Name { id, .. } = &value.node {
                class_name_handle(id, slice, unifier, locked)
            } else {
                Err(HashSet::from([format!(
                    "unsupported expression type for class name (at {})",
                    value.location
                )]))
            }
        }

        ast::ExprKind::Constant { value, .. } => Ok(TypeAnnotation::Literal(vec![value.clone()])),

        _ => Err(HashSet::from([format!(
            "unsupported expression for type annotation (at {})",
            expr.location
        )])),
    }
}

// Unlike `parse_ast_to_type_annotation_kinds`, this function needs no `locked` parameter:
// by the time it is called, no `TopLevelDef::Class` is being written, and this function
// only reads the top-level defs.
pub fn get_type_from_type_annotation_kinds(
    top_level_defs: &[Arc<RwLock<TopLevelDef>>],
    unifier: &mut Unifier,
    ann: &TypeAnnotation,
    subst_list: &mut Option<Vec<Type>>,
) -> Result<Type, HashSet<String>> {
    match ann {
        TypeAnnotation::CustomClass { id: obj_id, params } => {
            let def_read = top_level_defs[obj_id.0].read();
            let class_def: &TopLevelDef = &def_read;
            let TopLevelDef::Class { fields, methods, type_vars, .. } = class_def else {
                unreachable!("should be class def here")
            };

            if type_vars.len() != params.len() {
                return Err(HashSet::from([format!(
                    "unexpected number of type parameters: expected {} but got {}",
                    type_vars.len(),
                    params.len()
                )]));
            }

            let param_ty = params
                .iter()
                .map(|x| {
                    get_type_from_type_annotation_kinds(top_level_defs, unifier, x, subst_list)
                })
                .collect::<Result<Vec<_>, _>>()?;

            let subst = {
                // check for compatible range
                // TODO: if type variables are allowed to be applied (currently disallowed in
                // parse_ast_to_type_annotation_kinds), more checks are needed
                let mut result = VarMap::new();
                for (tvar, p) in type_vars.iter().zip(param_ty) {
                    match unifier.get_ty(*tvar).as_ref() {
                        TypeEnum::TVar {
                            id,
                            range,
                            fields: None,
                            name,
                            loc,
                            is_const_generic: false,
                        } => {
                            let ok: bool = {
                                // create a temp type var and unify to check compatibility
                                p == *tvar || {
                                    let temp = unifier.get_fresh_var_with_range(
                                        range.as_slice(),
                                        *name,
                                        *loc,
                                    );
                                    unifier.unify(temp.ty, p).is_ok()
                                }
                            };
                            if ok {
                                result.insert(*id, p);
                            } else {
                                return Err(HashSet::from([format!(
                                    "cannot apply type {} to type variable with id {:?}",
                                    unifier.internal_stringify(
                                        p,
                                        &mut |id| format!("class{id}"),
                                        &mut |id| format!("typevar{id}"),
                                        &mut None
                                    ),
                                    *id
                                )]));
                            }
                        }

                        TypeEnum::TVar { id, range, name, loc, is_const_generic: true, .. } => {
                            let ty = range[0];
                            let ok: bool = {
                                // create a temp type var and unify to check compatibility
                                p == *tvar || {
                                    let temp = unifier.get_fresh_const_generic_var(ty, *name, *loc);
                                    unifier.unify(temp.ty, p).is_ok()
                                }
                            };
                            if ok {
                                result.insert(*id, p);
                            } else {
                                return Err(HashSet::from([format!(
                                    "cannot apply type {} to type variable {}",
                                    unifier.stringify(p),
                                    name.unwrap_or_else(|| format!("typevar{id}").into()),
                                )]));
                            }
                        }

                        _ => unreachable!("must be generic type var"),
                    }
                }
                result
            };
            // Class attributes keep a copy with the class definition and are not added to objects
            let mut tobj_fields = methods
                .iter()
                .map(|(name, ty, _)| {
                    let subst_ty = unifier.subst(*ty, &subst).unwrap_or(*ty);
                    // methods are immutable
                    (*name, (subst_ty, false))
                })
                .collect::<HashMap<_, _>>();
            tobj_fields.extend(fields.iter().map(|(name, ty, mutability)| {
                let subst_ty = unifier.subst(*ty, &subst).unwrap_or(*ty);
                (*name, (subst_ty, *mutability))
            }));
            let need_subst = !subst.is_empty();
            let ty = unifier.add_ty(TypeEnum::TObj {
                obj_id: *obj_id,
                fields: tobj_fields,
                params: subst,
            });
            if need_subst {
                if let Some(wl) = subst_list.as_mut() {
                    wl.push(ty);
                }
            }
            Ok(ty)
        }
        TypeAnnotation::Primitive(ty) | TypeAnnotation::TypeVar(ty) => Ok(*ty),
        TypeAnnotation::Literal(values) => {
            let values = values
                .iter()
                .map(SymbolValue::from_constant_inferred)
                .collect::<Result<Vec<_>, _>>()
                .map_err(|err| HashSet::from([err]))?;

            let var = unifier.get_fresh_literal(values, None);
            Ok(var)
        }
        TypeAnnotation::Virtual(ty) => {
            let ty = get_type_from_type_annotation_kinds(
                top_level_defs,
                unifier,
                ty.as_ref(),
                subst_list,
            )?;
            Ok(unifier.add_ty(TypeEnum::TVirtual { ty }))
        }
        TypeAnnotation::Tuple(tys) => {
            let tys = tys
                .iter()
                .map(|x| {
                    get_type_from_type_annotation_kinds(top_level_defs, unifier, x, subst_list)
                })
                .collect::<Result<Vec<_>, _>>()?;
            Ok(unifier.add_ty(TypeEnum::TTuple { ty: tys }))
        }
    }
}

/// Given a class definition ID, return the type annotation of `self`. \
/// ```python
/// class A(Generic[T, V]):
///     def fun(self):
/// ```
/// The type of `self` should be similar to `A[T, V]`, where `T` and `V` are
/// the type variables associated with the class. \
/// \
/// Note that `T` and `V` are not duplicated here; they are used directly as
/// they appear in the [`TopLevelDef::Class`], since those in
/// `TopLevelDef::Class.type_vars` will be substituted later upon application/instantiation,
/// and the types of the class's fields and methods will likewise be substituted then.
#[must_use]
pub fn make_self_type_annotation(type_vars: &[Type], object_id: DefinitionId) -> TypeAnnotation {
    TypeAnnotation::CustomClass {
        id: object_id,
        params: type_vars.iter().map(|ty| TypeAnnotation::TypeVar(*ty)).collect_vec(),
    }
}

/// Get all occurrences of type variables contained in a type annotation,
/// e.g. `A[int, B[T], V, virtual[C[G]]]` => [T, V, G].
/// This function does not duplicate type variables.
#[must_use]
pub fn get_type_var_contained_in_type_annotation(ann: &TypeAnnotation) -> Vec<TypeAnnotation> {
    let mut result: Vec<TypeAnnotation> = Vec::new();
    match ann {
        TypeAnnotation::TypeVar(..) => result.push(ann.clone()),
        TypeAnnotation::Virtual(ann) => {
            result.extend(get_type_var_contained_in_type_annotation(ann.as_ref()));
        }
        TypeAnnotation::CustomClass { params, .. } => {
            for p in params {
                result.extend(get_type_var_contained_in_type_annotation(p));
            }
        }
        TypeAnnotation::Tuple(anns) => {
            for a in anns {
                result.extend(get_type_var_contained_in_type_annotation(a));
            }
        }
        TypeAnnotation::Primitive(..) | TypeAnnotation::Literal { .. } => {}
    }
    result
}
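As an aside (not part of the diff), the traversal above is a plain recursive collection over the annotation tree. A standalone sketch with a hypothetical `Ann` enum (stand-in names, not the nac3 types) makes the pattern explicit: type-variable leaves are collected in order of occurrence, all other nodes either recurse or contribute nothing.

```rust
// Hypothetical miniature of get_type_var_contained_in_type_annotation:
// walk the annotation tree and collect every type-variable leaf.
#[derive(Clone, Debug, PartialEq)]
enum Ann {
    Prim(&'static str),
    Var(&'static str),
    Class { params: Vec<Ann> },
    Virtual(Box<Ann>),
    Tuple(Vec<Ann>),
}

fn collect_vars(ann: &Ann) -> Vec<Ann> {
    let mut result = Vec::new();
    match ann {
        // a type variable is a leaf: record it
        Ann::Var(_) => result.push(ann.clone()),
        // wrappers and containers recurse into their children
        Ann::Virtual(inner) => result.extend(collect_vars(inner)),
        Ann::Class { params } => {
            for p in params {
                result.extend(collect_vars(p));
            }
        }
        Ann::Tuple(anns) => {
            for a in anns {
                result.extend(collect_vars(a));
            }
        }
        // primitives contain no type variables
        Ann::Prim(_) => {}
    }
    result
}

fn main() {
    // mirrors the doc example: A[int, B[T], V, virtual[C[G]]]  =>  [T, V, G]
    let ann = Ann::Class {
        params: vec![
            Ann::Prim("int"),
            Ann::Class { params: vec![Ann::Var("T")] },
            Ann::Var("V"),
            Ann::Virtual(Box::new(Ann::Class { params: vec![Ann::Var("G")] })),
        ],
    };
    println!("{:?}", collect_vars(&ann)); // [Var("T"), Var("V"), Var("G")]
}
```

Note that, like the real function, this sketch does not deduplicate: a variable appearing twice is collected twice, which is what lets the caller in `class_name_handle` simply test `is_empty()`.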

/// Checks type compatibility of two annotations for overloading.
pub fn check_overload_type_annotation_compatible(
    this: &TypeAnnotation,
    other: &TypeAnnotation,
    unifier: &mut Unifier,
) -> bool {
    match (this, other) {
        (TypeAnnotation::Primitive(a), TypeAnnotation::Primitive(b)) => a == b,
        (TypeAnnotation::TypeVar(a), TypeAnnotation::TypeVar(b)) => {
            let a = unifier.get_ty(*a);
            let a = &*a;
            let b = unifier.get_ty(*b);
            let b = &*b;
            let (
                TypeEnum::TVar { id: a, fields: None, .. },
                TypeEnum::TVar { id: b, fields: None, .. },
            ) = (a, b)
            else {
                unreachable!("must be type var")
            };

            a == b
        }
        (TypeAnnotation::Virtual(a), TypeAnnotation::Virtual(b)) => {
            check_overload_type_annotation_compatible(a.as_ref(), b.as_ref(), unifier)
        }

        (TypeAnnotation::Tuple(a), TypeAnnotation::Tuple(b)) => {
            a.len() == b.len() && {
                a.iter()
                    .zip(b)
                    .all(|(a, b)| check_overload_type_annotation_compatible(a, b, unifier))
            }
        }

        (
            TypeAnnotation::CustomClass { id: a, params: a_p },
            TypeAnnotation::CustomClass { id: b, params: b_p },
        ) => {
            a.0 == b.0 && {
                a_p.len() == b_p.len() && {
                    a_p.iter()
                        .zip(b_p)
                        .all(|(a, b)| check_overload_type_annotation_compatible(a, b, unifier))
                }
            }
        }

        _ => false,
    }
}
@@ -1,109 +0,0 @@
use super::super::typedef::*;
use std::collections::HashMap;
use std::rc::Rc;

/// Structure for storing top-level type definitions.
/// Used for collecting type signatures from source code.
/// Can be converted to `InferenceContext` for type inference in functions.
pub struct GlobalContext<'a> {
    /// List of primitive definitions.
    pub(super) primitive_defs: Vec<TypeDef<'a>>,
    /// List of class definitions.
    pub(super) class_defs: Vec<ClassDef<'a>>,
    /// List of parametric type definitions.
    pub(super) parametric_defs: Vec<ParametricDef<'a>>,
    /// List of type variable definitions.
    pub(super) var_defs: Vec<VarDef<'a>>,
    /// Function name to signature mapping.
    pub(super) fn_table: HashMap<&'a str, FnDef>,

    primitives: Vec<Type>,
    variables: Vec<Type>,
}

impl<'a> GlobalContext<'a> {
    pub fn new(primitive_defs: Vec<TypeDef<'a>>) -> GlobalContext {
        let mut primitives = Vec::new();
        for (i, t) in primitive_defs.iter().enumerate() {
            primitives.push(TypeEnum::PrimitiveType(PrimitiveId(i)).into());
        }
        GlobalContext {
            primitive_defs,
            class_defs: Vec::new(),
            parametric_defs: Vec::new(),
            var_defs: Vec::new(),
            fn_table: HashMap::new(),
            primitives,
            variables: Vec::new(),
        }
    }

    pub fn add_class(&mut self, def: ClassDef<'a>) -> ClassId {
        self.class_defs.push(def);
        ClassId(self.class_defs.len() - 1)
    }

    pub fn add_parametric(&mut self, def: ParametricDef<'a>) -> ParamId {
        self.parametric_defs.push(def);
        ParamId(self.parametric_defs.len() - 1)
    }

    pub fn add_variable(&mut self, def: VarDef<'a>) -> VariableId {
        self.add_variable_private(def)
    }

    pub fn add_variable_private(&mut self, def: VarDef<'a>) -> VariableId {
        self.var_defs.push(def);
        self.variables
            .push(TypeEnum::TypeVariable(VariableId(self.var_defs.len() - 1)).into());
        VariableId(self.var_defs.len() - 1)
    }

    pub fn add_fn(&mut self, name: &'a str, def: FnDef) {
        self.fn_table.insert(name, def);
    }

    pub fn get_fn_def(&self, name: &str) -> Option<&FnDef> {
        self.fn_table.get(name)
    }
pub fn get_primitive_def_mut(&mut self, id: PrimitiveId) -> &mut TypeDef<'a> {
|
|
||||||
self.primitive_defs.get_mut(id.0).unwrap()
|
|
||||||
}
|
|
||||||
|
|
||||||
pub fn get_primitive_def(&self, id: PrimitiveId) -> &TypeDef {
|
|
||||||
self.primitive_defs.get(id.0).unwrap()
|
|
||||||
}
|
|
||||||
|
|
||||||
pub fn get_class_def_mut(&mut self, id: ClassId) -> &mut ClassDef<'a> {
|
|
||||||
self.class_defs.get_mut(id.0).unwrap()
|
|
||||||
}
|
|
||||||
|
|
||||||
pub fn get_class_def(&self, id: ClassId) -> &ClassDef {
|
|
||||||
self.class_defs.get(id.0).unwrap()
|
|
||||||
}
|
|
||||||
|
|
||||||
pub fn get_parametric_def_mut(&mut self, id: ParamId) -> &mut ParametricDef<'a> {
|
|
||||||
self.parametric_defs.get_mut(id.0).unwrap()
|
|
||||||
}
|
|
||||||
|
|
||||||
pub fn get_parametric_def(&self, id: ParamId) -> &ParametricDef {
|
|
||||||
self.parametric_defs.get(id.0).unwrap()
|
|
||||||
}
|
|
||||||
|
|
||||||
pub fn get_variable_def_mut(&mut self, id: VariableId) -> &mut VarDef<'a> {
|
|
||||||
self.var_defs.get_mut(id.0).unwrap()
|
|
||||||
}
|
|
||||||
|
|
||||||
pub fn get_variable_def(&self, id: VariableId) -> &VarDef {
|
|
||||||
self.var_defs.get(id.0).unwrap()
|
|
||||||
}
|
|
||||||
|
|
||||||
pub fn get_primitive(&self, id: PrimitiveId) -> Type {
|
|
||||||
self.primitives.get(id.0).unwrap().clone()
|
|
||||||
}
|
|
||||||
|
|
||||||
pub fn get_variable(&self, id: VariableId) -> Type {
|
|
||||||
self.variables.get(id.0).unwrap().clone()
|
|
||||||
}
|
|
||||||
}
|
|
@ -1,202 +0,0 @@
use super::super::location::{FileID, Location};
use super::super::symbol_resolver::*;
use super::super::typedef::*;
use super::GlobalContext;
use rustpython_parser::ast;
use std::boxed::Box;
use std::collections::HashMap;

struct ContextStack<'a> {
    /// Stack level, starting from 0.
    level: u32,
    /// Stack of symbol definitions containing (name, level), where `level` is the smallest level
    /// at which the name is assigned a value.
    sym_def: Vec<(&'a str, u32)>,
}

pub struct InferenceContext<'a> {
    /// Global context.
    global: GlobalContext<'a>,
    /// Per-source symbol resolver.
    resolver: Box<dyn SymbolResolver>,
    /// File ID.
    file: FileID,

    /// Identifier to (type, readable, location) mapping.
    /// An identifier might be defined earlier but have no value on some code paths, and is thus
    /// not readable.
    sym_table: HashMap<&'a str, (Type, bool, Location)>,
    /// Scope stack.
    stack: ContextStack<'a>,
}

// non-trivial implementations here
impl<'a> InferenceContext<'a> {
    pub fn new(
        global: GlobalContext,
        resolver: Box<dyn SymbolResolver>,
        file: FileID,
    ) -> InferenceContext {
        InferenceContext {
            global,
            resolver,
            file,
            sym_table: HashMap::new(),
            stack: ContextStack { level: 0, sym_def: Vec::new() },
        }
    }

    /// Execute the function within a new scope.
    /// Variable assignments are limited to the scope (not readable outside).
    /// Returns the list of variables assigned within the scope, and the result of the function.
    pub fn with_scope<F, R>(&mut self, f: F) -> (Vec<(&'a str, Type, Location)>, R)
    where
        F: FnOnce(&mut Self) -> R,
    {
        self.stack.level += 1;
        let result = f(self);
        self.stack.level -= 1;
        let mut popped_names = Vec::new();
        while !self.stack.sym_def.is_empty() {
            let (_, level) = self.stack.sym_def.last().unwrap();
            if *level > self.stack.level {
                let (name, _) = self.stack.sym_def.pop().unwrap();
                let (t, b, l) = self.sym_table.get_mut(name).unwrap();
                // set it to be unreadable
                *b = false;
                popped_names.push((name, t.clone(), *l));
            } else {
                break;
            }
        }
        (popped_names, result)
    }
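The `with_scope` bookkeeping above can be illustrated in isolation: each assigned name records the level at which it was defined, and leaving a scope pops (and invalidates) every name whose level is deeper than the new current level. A standalone sketch with plain strings (a toy model, not the real `ContextStack`):

```rust
// Minimal model of ContextStack: (name, level) pairs, popped on scope exit.
struct Scopes {
    level: u32,
    sym_def: Vec<(String, u32)>,
}

impl Scopes {
    fn new() -> Self {
        Scopes { level: 0, sym_def: Vec::new() }
    }

    fn assign(&mut self, name: &str) {
        self.sym_def.push((name.to_string(), self.level));
    }

    /// Run `f` one level deeper; return the names that do not survive the scope.
    fn with_scope<R>(&mut self, f: impl FnOnce(&mut Self) -> R) -> (Vec<String>, R) {
        self.level += 1;
        let result = f(self);
        self.level -= 1;
        let mut popped = Vec::new();
        // pop every name assigned strictly deeper than the restored level
        while let Some((_, lvl)) = self.sym_def.last() {
            if *lvl > self.level {
                popped.push(self.sym_def.pop().unwrap().0);
            } else {
                break;
            }
        }
        (popped, result)
    }
}

fn main() {
    let mut s = Scopes::new();
    s.assign("outer");
    let (popped, _) = s.with_scope(|s| s.assign("inner"));
    assert_eq!(popped, vec!["inner".to_string()]);
    assert_eq!(s.sym_def.len(), 1); // "outer" survives
    println!("ok");
}
```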
|
||||||
|
|
||||||
/// assign a type to an identifier.
|
|
||||||
/// may return error if the identifier was defined but with different type
|
|
||||||
pub fn assign(&mut self, name: &'a str, ty: Type, loc: ast::Location) -> Result<Type, String> {
|
|
||||||
if let Some((t, x, _)) = self.sym_table.get_mut(name) {
|
|
||||||
if t == &ty {
|
|
||||||
if !*x {
|
|
||||||
self.stack.sym_def.push((name, self.stack.level));
|
|
||||||
}
|
|
||||||
*x = true;
|
|
||||||
Ok(ty)
|
|
||||||
} else {
|
|
||||||
Err("different types".into())
|
|
||||||
}
|
|
||||||
} else {
|
|
||||||
self.stack.sym_def.push((name, self.stack.level));
|
|
||||||
self.sym_table.insert(
|
|
||||||
name,
|
|
||||||
(ty.clone(), true, Location::CodeRange(self.file, loc)),
|
|
||||||
);
|
|
||||||
Ok(ty)
|
|
||||||
}
|
|
||||||
}
|
|
||||||
|
|
||||||
/// get the type of an identifier
|
|
||||||
/// may return error if the identifier is not defined, and cannot be resolved with the
|
|
||||||
/// resolution function.
|
|
||||||
pub fn resolve(&self, name: &str) -> Result<Type, String> {
|
|
||||||
if let Some((t, x, _)) = self.sym_table.get(name) {
|
|
||||||
if *x {
|
|
||||||
Ok(t.clone())
|
|
||||||
} else {
|
|
||||||
Err("may not be defined".into())
|
|
||||||
}
|
|
||||||
} else {
|
|
||||||
match self.resolver.get_symbol_type(name) {
|
|
||||||
Some(SymbolType::Identifier(t)) => Ok(t),
|
|
||||||
Some(SymbolType::TypeName(_)) => Err("is not a value".into()),
|
|
||||||
_ => Err("unbounded identifier".into()),
|
|
||||||
}
|
|
||||||
}
|
|
||||||
}
|
|
||||||
|
|
||||||
pub fn get_location(&self, name: &str) -> Option<Location> {
|
|
||||||
if let Some((_, _, l)) = self.sym_table.get(name) {
|
|
||||||
Some(*l)
|
|
||||||
} else {
|
|
||||||
self.resolver.get_symbol_location(name)
|
|
||||||
}
|
|
||||||
}
|
|
||||||
}
|
|
||||||
|
|
||||||
// trivial getters:
|
|
||||||
impl<'a> InferenceContext<'a> {
|
|
||||||
pub fn get_primitive(&self, id: PrimitiveId) -> Type {
|
|
||||||
TypeEnum::PrimitiveType(id).into()
|
|
||||||
}
|
|
||||||
|
|
||||||
pub fn get_variable(&self, id: VariableId) -> Type {
|
|
||||||
TypeEnum::TypeVariable(id).into()
|
|
||||||
}
|
|
||||||
|
|
||||||
pub fn get_fn_def(&self, name: &str) -> Option<&FnDef> {
|
|
||||||
self.global.fn_table.get(name)
|
|
||||||
}
|
|
||||||
pub fn get_primitive_def(&self, id: PrimitiveId) -> &TypeDef {
|
|
||||||
self.global.primitive_defs.get(id.0).unwrap()
|
|
||||||
}
|
|
||||||
pub fn get_class_def(&self, id: ClassId) -> &ClassDef {
|
|
||||||
self.global.class_defs.get(id.0).unwrap()
|
|
||||||
}
|
|
||||||
pub fn get_parametric_def(&self, id: ParamId) -> &ParametricDef {
|
|
||||||
self.global.parametric_defs.get(id.0).unwrap()
|
|
||||||
}
|
|
||||||
pub fn get_variable_def(&self, id: VariableId) -> &VarDef {
|
|
||||||
self.global.var_defs.get(id.0).unwrap()
|
|
||||||
}
|
|
||||||
pub fn get_type(&self, name: &str) -> Result<Type, String> {
|
|
||||||
match self.resolver.get_symbol_type(name) {
|
|
||||||
Some(SymbolType::TypeName(t)) => Ok(t),
|
|
||||||
Some(SymbolType::Identifier(_)) => Err("not a type".into()),
|
|
||||||
_ => Err("unbounded identifier".into()),
|
|
||||||
}
|
|
||||||
}
|
|
||||||
}
|
|
||||||
|
|
||||||
impl TypeEnum {
|
|
||||||
pub fn subst(&self, map: &HashMap<VariableId, Type>) -> TypeEnum {
|
|
||||||
match self {
|
|
||||||
TypeEnum::TypeVariable(id) => map.get(id).map(|v| v.as_ref()).unwrap_or(self).clone(),
|
|
||||||
TypeEnum::ParametricType(id, params) => TypeEnum::ParametricType(
|
|
||||||
*id,
|
|
||||||
params
|
|
||||||
.iter()
|
|
||||||
.map(|v| v.as_ref().subst(map).into())
|
|
||||||
.collect(),
|
|
||||||
),
|
|
||||||
_ => self.clone(),
|
|
||||||
}
|
|
||||||
}
|
|
||||||
|
|
||||||
pub fn get_subst(&self, ctx: &InferenceContext) -> HashMap<VariableId, Type> {
|
|
||||||
match self {
|
|
||||||
TypeEnum::ParametricType(id, params) => {
|
|
||||||
let vars = &ctx.get_parametric_def(*id).params;
|
|
||||||
vars.iter()
|
|
||||||
.zip(params)
|
|
||||||
.map(|(v, p)| (*v, p.as_ref().clone().into()))
|
|
||||||
.collect()
|
|
||||||
}
|
|
||||||
// if this proves to be slow, we can use option type
|
|
||||||
_ => HashMap::new(),
|
|
||||||
}
|
|
||||||
}
|
|
||||||
|
|
||||||
pub fn get_base<'b: 'a, 'a>(&'a self, ctx: &'b InferenceContext) -> Option<&'b TypeDef> {
|
|
||||||
match self {
|
|
||||||
TypeEnum::PrimitiveType(id) => Some(ctx.get_primitive_def(*id)),
|
|
||||||
TypeEnum::ClassType(id) | TypeEnum::VirtualClassType(id) => {
|
|
||||||
Some(&ctx.get_class_def(*id).base)
|
|
||||||
}
|
|
||||||
TypeEnum::ParametricType(id, _) => Some(&ctx.get_parametric_def(*id).base),
|
|
||||||
_ => None,
|
|
||||||
}
|
|
||||||
}
|
|
||||||
}
|
|
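`subst` above replaces type variables according to a map and recurses through the parameters of parametric types; everything else is returned unchanged. A self-contained sketch with a toy type enum (illustrative names, not the real `TypeEnum`):

```rust
use std::collections::HashMap;
use std::rc::Rc;

// Toy type representation: variables, primitives, and parametric types.
#[derive(Debug, PartialEq, Clone)]
enum Ty {
    Var(u32),
    Prim(&'static str),
    Parametric(&'static str, Vec<Rc<Ty>>),
}

impl Ty {
    fn subst(&self, map: &HashMap<u32, Rc<Ty>>) -> Ty {
        match self {
            // replace the variable if the map has a binding for it
            Ty::Var(id) => map.get(id).map(|v| (**v).clone()).unwrap_or_else(|| self.clone()),
            // recurse into the parameters of a parametric type
            Ty::Parametric(name, params) => Ty::Parametric(
                *name,
                params.iter().map(|p| Rc::new(p.subst(map))).collect(),
            ),
            // primitives are left untouched
            _ => self.clone(),
        }
    }
}

fn main() {
    let list_of_v0 = Ty::Parametric("list", vec![Rc::new(Ty::Var(0))]);
    let mut map = HashMap::new();
    map.insert(0, Rc::new(Ty::Prim("int32")));
    assert_eq!(
        list_of_v0.subst(&map),
        Ty::Parametric("list", vec![Rc::new(Ty::Prim("int32"))])
    );
    println!("ok");
}
```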
@ -1,4 +0,0 @@
mod inference_context;
mod global_context;
pub use inference_context::InferenceContext;
pub use global_context::GlobalContext;
@ -0,0 +1,374 @@
use crate::toplevel::helper::PrimDef;

use super::type_inferencer::Inferencer;
use super::typedef::{Type, TypeEnum};
use nac3parser::ast::{
    self, Constant, Expr, ExprKind,
    Operator::{LShift, RShift},
    Stmt, StmtKind, StrRef,
};
use std::{collections::HashSet, iter::once};

impl<'a> Inferencer<'a> {
    fn should_have_value(&mut self, expr: &Expr<Option<Type>>) -> Result<(), HashSet<String>> {
        if matches!(expr.custom, Some(ty) if self.unifier.unioned(ty, self.primitives.none)) {
            Err(HashSet::from([format!("Error at {}: cannot have value none", expr.location)]))
        } else {
            Ok(())
        }
    }

    fn check_pattern(
        &mut self,
        pattern: &Expr<Option<Type>>,
        defined_identifiers: &mut HashSet<StrRef>,
    ) -> Result<(), HashSet<String>> {
        match &pattern.node {
            ExprKind::Name { id, .. } if id == &"none".into() => {
                Err(HashSet::from([format!("cannot assign to a `none` (at {})", pattern.location)]))
            }
            ExprKind::Name { id, .. } => {
                if !defined_identifiers.contains(id) {
                    defined_identifiers.insert(*id);
                }
                self.should_have_value(pattern)?;
                Ok(())
            }
            ExprKind::Tuple { elts, .. } => {
                for elt in elts {
                    self.check_pattern(elt, defined_identifiers)?;
                    self.should_have_value(elt)?;
                }
                Ok(())
            }
            ExprKind::Subscript { value, slice, .. } => {
                self.check_expr(value, defined_identifiers)?;
                self.should_have_value(value)?;
                self.check_expr(slice, defined_identifiers)?;
                if let TypeEnum::TTuple { .. } = &*self.unifier.get_ty(value.custom.unwrap()) {
                    return Err(HashSet::from([format!(
                        "Error at {}: cannot assign to tuple element",
                        value.location
                    )]));
                }
                Ok(())
            }
            ExprKind::Constant { .. } => Err(HashSet::from([format!(
                "cannot assign to a constant (at {})",
                pattern.location
            )])),
            _ => self.check_expr(pattern, defined_identifiers),
        }
    }

    fn check_expr(
        &mut self,
        expr: &Expr<Option<Type>>,
        defined_identifiers: &mut HashSet<StrRef>,
    ) -> Result<(), HashSet<String>> {
        // there are some cases where the custom field is None
        if let Some(ty) = &expr.custom {
            if !matches!(&expr.node, ExprKind::Constant { value: Constant::Ellipsis, .. })
                && !ty.obj_id(self.unifier).is_some_and(|id| id == PrimDef::List.id())
                && !self.unifier.is_concrete(*ty, &self.function_data.bound_variables)
            {
                return Err(HashSet::from([format!(
                    "expected concrete type at {} but got {}",
                    expr.location,
                    self.unifier.get_ty(*ty).get_type_name()
                )]));
            }
        }
        match &expr.node {
            ExprKind::Name { id, .. } => {
                if id == &"none".into() {
                    return Ok(());
                }
                self.should_have_value(expr)?;
                if !defined_identifiers.contains(id) {
                    match self.function_data.resolver.get_symbol_type(
                        self.unifier,
                        &self.top_level.definitions.read(),
                        self.primitives,
                        *id,
                    ) {
                        Ok(_) => {
                            self.defined_identifiers.insert(*id);
                        }
                        Err(e) => {
                            return Err(HashSet::from([format!(
                                "type error at identifier `{}` ({}) at {}",
                                id, e, expr.location
                            )]))
                        }
                    }
                }
            }
            ExprKind::List { elts, .. }
            | ExprKind::Tuple { elts, .. }
            | ExprKind::BoolOp { values: elts, .. } => {
                for elt in elts {
                    self.check_expr(elt, defined_identifiers)?;
                    self.should_have_value(elt)?;
                }
            }
            ExprKind::Attribute { value, .. } => {
                self.check_expr(value, defined_identifiers)?;
                self.should_have_value(value)?;
            }
            ExprKind::BinOp { left, op, right } => {
                self.check_expr(left, defined_identifiers)?;
                self.check_expr(right, defined_identifiers)?;
                self.should_have_value(left)?;
                self.should_have_value(right)?;

                // Check whether a bitwise shift has a negative RHS constant value
                if *op == LShift || *op == RShift {
                    if let ExprKind::Constant { value, .. } = &right.node {
                        let Constant::Int(rhs_val) = value else { unreachable!() };

                        if *rhs_val < 0 {
                            return Err(HashSet::from([format!(
                                "shift count is negative at {}",
                                right.location
                            )]));
                        }
                    }
                }
            }
            ExprKind::UnaryOp { operand, .. } => {
                self.check_expr(operand, defined_identifiers)?;
                self.should_have_value(operand)?;
            }
            ExprKind::Compare { left, comparators, .. } => {
                for elt in once(left.as_ref()).chain(comparators.iter()) {
                    self.check_expr(elt, defined_identifiers)?;
                    self.should_have_value(elt)?;
                }
            }
            ExprKind::Subscript { value, slice, .. } => {
                self.should_have_value(value)?;
                self.check_expr(value, defined_identifiers)?;
                self.check_expr(slice, defined_identifiers)?;
            }
            ExprKind::IfExp { test, body, orelse } => {
                self.check_expr(test, defined_identifiers)?;
                self.check_expr(body, defined_identifiers)?;
                self.check_expr(orelse, defined_identifiers)?;
            }
            ExprKind::Slice { lower, upper, step } => {
                for elt in [lower.as_ref(), upper.as_ref(), step.as_ref()].iter().flatten() {
                    self.should_have_value(elt)?;
                    self.check_expr(elt, defined_identifiers)?;
                }
            }
            ExprKind::Lambda { args, body } => {
                let mut defined_identifiers = defined_identifiers.clone();
                for arg in &args.args {
                    // TODO: should we check the types here?
                    if !defined_identifiers.contains(&arg.node.arg) {
                        defined_identifiers.insert(arg.node.arg);
                    }
                }
                self.check_expr(body, &mut defined_identifiers)?;
            }
            ExprKind::ListComp { elt, generators, .. } => {
                // in our type inference stage, we already make sure that there is only 1 generator
                let ast::Comprehension { target, iter, ifs, .. } = &generators[0];
                self.check_expr(iter, defined_identifiers)?;
                self.should_have_value(iter)?;
                let mut defined_identifiers = defined_identifiers.clone();
                self.check_pattern(target, &mut defined_identifiers)?;
                self.should_have_value(target)?;
                for term in once(elt.as_ref()).chain(ifs.iter()) {
                    self.check_expr(term, &mut defined_identifiers)?;
                    self.should_have_value(term)?;
                }
            }
            ExprKind::Call { func, args, keywords } => {
                for expr in once(func.as_ref())
                    .chain(args.iter())
                    .chain(keywords.iter().map(|v| v.node.value.as_ref()))
                {
                    self.check_expr(expr, defined_identifiers)?;
                    self.should_have_value(expr)?;
                }
            }
            ExprKind::Constant { .. } => {}
            _ => {
                unimplemented!()
            }
        }
        Ok(())
    }

    /// Check that the return value is a non-`alloca` type, effectively only allowing primitive types.
    ///
    /// This is a workaround preventing the caller from using a variable `alloca`-ed in the body, which
    /// is freed when the function returns.
    fn check_return_value_ty(&mut self, ret_ty: Type) -> bool {
        match &*self.unifier.get_ty_immutable(ret_ty) {
            TypeEnum::TObj { .. } => [
                self.primitives.int32,
                self.primitives.int64,
                self.primitives.uint32,
                self.primitives.uint64,
                self.primitives.float,
                self.primitives.bool,
            ]
            .iter()
            .any(|allowed_ty| self.unifier.unioned(ret_ty, *allowed_ty)),
            TypeEnum::TTuple { ty } => ty.iter().all(|t| self.check_return_value_ty(*t)),
            _ => false,
        }
    }

    // check statements for proper identifier def-use and return on all paths
    fn check_stmt(
        &mut self,
        stmt: &Stmt<Option<Type>>,
        defined_identifiers: &mut HashSet<StrRef>,
    ) -> Result<bool, HashSet<String>> {
        match &stmt.node {
            StmtKind::For { target, iter, body, orelse, .. } => {
                self.check_expr(iter, defined_identifiers)?;
                self.should_have_value(iter)?;
                let mut local_defined_identifiers = defined_identifiers.clone();
                for stmt in orelse {
                    self.check_stmt(stmt, &mut local_defined_identifiers)?;
                }
                let mut local_defined_identifiers = defined_identifiers.clone();
                self.check_pattern(target, &mut local_defined_identifiers)?;
                self.should_have_value(target)?;
                for stmt in body {
                    self.check_stmt(stmt, &mut local_defined_identifiers)?;
                }
                Ok(false)
            }
            StmtKind::If { test, body, orelse, .. } => {
                self.check_expr(test, defined_identifiers)?;
                self.should_have_value(test)?;
                let mut body_identifiers = defined_identifiers.clone();
                let mut orelse_identifiers = defined_identifiers.clone();
                let body_returned = self.check_block(body, &mut body_identifiers)?;
                let orelse_returned = self.check_block(orelse, &mut orelse_identifiers)?;

                for ident in &body_identifiers {
                    if !defined_identifiers.contains(ident) && orelse_identifiers.contains(ident) {
                        defined_identifiers.insert(*ident);
                    }
                }
                Ok(body_returned && orelse_returned)
            }
            StmtKind::While { test, body, orelse, .. } => {
                self.check_expr(test, defined_identifiers)?;
                self.should_have_value(test)?;
                let mut defined_identifiers = defined_identifiers.clone();
                self.check_block(body, &mut defined_identifiers)?;
                self.check_block(orelse, &mut defined_identifiers)?;
                Ok(false)
            }
            StmtKind::With { items, body, .. } => {
                let mut new_defined_identifiers = defined_identifiers.clone();
                for item in items {
                    self.check_expr(&item.context_expr, defined_identifiers)?;
                    if let Some(var) = item.optional_vars.as_ref() {
                        self.check_pattern(var, &mut new_defined_identifiers)?;
                    }
                }
                self.check_block(body, &mut new_defined_identifiers)?;
                Ok(false)
            }
            StmtKind::Try { body, handlers, orelse, finalbody, .. } => {
                self.check_block(body, &mut defined_identifiers.clone())?;
                self.check_block(orelse, &mut defined_identifiers.clone())?;
                for handler in handlers {
                    let mut defined_identifiers = defined_identifiers.clone();
                    let ast::ExcepthandlerKind::ExceptHandler { name, body, .. } = &handler.node;
                    if let Some(name) = name {
                        defined_identifiers.insert(*name);
                    }
                    self.check_block(body, &mut defined_identifiers)?;
                }
                self.check_block(finalbody, defined_identifiers)?;
                Ok(false)
            }
            StmtKind::Expr { value, .. } => {
                self.check_expr(value, defined_identifiers)?;
                Ok(false)
            }
            StmtKind::Assign { targets, value, .. } => {
                self.check_expr(value, defined_identifiers)?;
                self.should_have_value(value)?;
                for target in targets {
                    self.check_pattern(target, defined_identifiers)?;
                }
                Ok(false)
            }
            StmtKind::AnnAssign { target, value, .. } => {
                if let Some(value) = value {
                    self.check_expr(value, defined_identifiers)?;
                    self.should_have_value(value)?;
                    self.check_pattern(target, defined_identifiers)?;
                }
                Ok(false)
            }
            StmtKind::Return { value, .. } => {
                if let Some(value) = value {
                    self.check_expr(value, defined_identifiers)?;
                    self.should_have_value(value)?;

                    // Check that the return value is a non-`alloca` type, effectively only allowing primitive types.
                    // This is a workaround preventing the caller from using a variable `alloca`-ed in the body, which
                    // is freed when the function returns.
                    if let Some(ret_ty) = value.custom {
                        // Explicitly allow ellipsis as a return value, as the type of the ellipsis is contextually
                        // inferred and just generates an unconditional assertion
                        if matches!(
                            value.node,
                            ExprKind::Constant { value: Constant::Ellipsis, .. }
                        ) {
                            return Ok(true);
                        }

                        if !self.check_return_value_ty(ret_ty) {
                            return Err(HashSet::from([
                                format!(
                                    "return value of type {} must be a primitive or a tuple of primitives at {}",
                                    self.unifier.stringify(ret_ty),
                                    value.location,
                                ),
                            ]));
                        }
                    }
                }
                Ok(true)
            }
            StmtKind::Raise { exc, .. } => {
                if let Some(value) = exc {
                    self.check_expr(value, defined_identifiers)?;
                }
                Ok(true)
            }
            // break, continue, pass, etc.
            _ => Ok(false),
        }
    }

    pub fn check_block(
        &mut self,
        block: &[Stmt<Option<Type>>],
        defined_identifiers: &mut HashSet<StrRef>,
    ) -> Result<bool, HashSet<String>> {
        let mut ret = false;
        for stmt in block {
            if ret {
                eprintln!("warning: dead code at {}\n", stmt.location);
            }
            if self.check_stmt(stmt, defined_identifiers)? {
                ret = true;
            }
        }
        Ok(ret)
    }
}
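`check_block` above threads a single boolean through the statements: once a statement is known to return on all paths, everything after it is dead code, and the block as a whole returns iff some statement in it does. The same fold in isolation (boolean stand-ins for statements, an illustrative sketch):

```rust
/// Each element says whether the corresponding statement returns on all paths.
/// Returns (block_returns, indices of dead statements).
fn check_block(stmt_returns: &[bool]) -> (bool, Vec<usize>) {
    let mut ret = false;
    let mut dead = Vec::new();
    for (i, returns) in stmt_returns.iter().enumerate() {
        if ret {
            // anything after a guaranteed return is unreachable
            dead.push(i);
        }
        if *returns {
            ret = true;
        }
    }
    (ret, dead)
}

fn main() {
    // stmt 0 falls through, stmt 1 returns, stmt 2 is dead
    let (returns, dead) = check_block(&[false, true, false]);
    assert!(returns);
    assert_eq!(dead, vec![2]);
    println!("ok");
}
```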
@ -1,525 +0,0 @@
use super::context::InferenceContext;
use super::typedef::{TypeEnum::*, *};
use std::collections::HashMap;

fn find_subst(
    ctx: &InferenceContext,
    valuation: &Option<(VariableId, Type)>,
    sub: &mut HashMap<VariableId, Type>,
    mut a: Type,
    mut b: Type,
) -> Result<(), String> {
    // TODO: fix error messages later
    if let TypeVariable(id) = a.as_ref() {
        if let Some((assumption_id, t)) = valuation {
            if assumption_id == id {
                a = t.clone();
            }
        }
    }

    let mut substituted = false;
    if let TypeVariable(id) = b.as_ref() {
        if let Some(c) = sub.get(id) {
            b = c.clone();
            substituted = true;
        }
    }

    match (a.as_ref(), b.as_ref()) {
        (BotType, _) => Ok(()),
        (TypeVariable(id_a), TypeVariable(id_b)) => {
            if substituted {
                return if id_a == id_b {
                    Ok(())
                } else {
                    Err("different variables".to_string())
                };
            }
            let v_a = ctx.get_variable_def(*id_a);
            let v_b = ctx.get_variable_def(*id_b);
            if !v_b.bound.is_empty() {
                if v_a.bound.is_empty() {
                    return Err("unbounded a".to_string());
                } else {
                    let diff: Vec<_> = v_a
                        .bound
                        .iter()
                        .filter(|x| !v_b.bound.contains(x))
                        .collect();
                    if !diff.is_empty() {
                        return Err("different domain".to_string());
                    }
                }
            }
            sub.insert(*id_b, a.clone());
            Ok(())
        }
        (TypeVariable(id_a), _) => {
            let v_a = ctx.get_variable_def(*id_a);
            if v_a.bound.len() == 1 && v_a.bound[0].as_ref() == b.as_ref() {
                Ok(())
            } else {
                Err("different domain".to_string())
            }
        }
        (_, TypeVariable(id_b)) => {
            let v_b = ctx.get_variable_def(*id_b);
            if v_b.bound.is_empty() || v_b.bound.contains(&a) {
                sub.insert(*id_b, a.clone());
                Ok(())
            } else {
                Err("different domain".to_string())
            }
        }
        (_, VirtualClassType(id_b)) => {
            let mut parents = match a.as_ref() {
                ClassType(id_a) | VirtualClassType(id_a) => vec![*id_a],
                _ => {
                    return Err("cannot substitute non-class type into virtual class".to_string());
                }
            };
            while !parents.is_empty() {
                if *id_b == parents[0] {
                    return Ok(());
                }
                let c = ctx.get_class_def(parents.remove(0));
                parents.extend_from_slice(&c.parents);
            }
            Err("not subtype".to_string())
        }
        (ParametricType(id_a, param_a), ParametricType(id_b, param_b)) => {
            if id_a != id_b || param_a.len() != param_b.len() {
                Err("different parametric types".to_string())
            } else {
                for (x, y) in param_a.iter().zip(param_b.iter()) {
                    find_subst(ctx, valuation, sub, x.clone(), y.clone())?;
                }
                Ok(())
            }
        }
        (_, _) => {
            if a == b {
                Ok(())
            } else {
                Err("not equal".to_string())
            }
        }
    }
}

fn resolve_call_rec(
    ctx: &InferenceContext,
    valuation: &Option<(VariableId, Type)>,
    obj: Option<Type>,
    func: &str,
    args: &[Type],
) -> Result<Option<Type>, String> {
    let mut subst = obj
        .as_ref()
        .map(|v| v.get_subst(ctx))
        .unwrap_or_else(HashMap::new);

    let fun = match &obj {
        Some(obj) => {
            let base = match obj.as_ref() {
                PrimitiveType(id) => ctx.get_primitive_def(*id),
                ClassType(id) | VirtualClassType(id) => &ctx.get_class_def(*id).base,
                ParametricType(id, _) => &ctx.get_parametric_def(*id).base,
                _ => return Err("not supported".to_string()),
            };
            base.methods.get(func)
        }
        None => ctx.get_fn_def(func),
    }
    .ok_or_else(|| "no such function".to_string())?;

    if args.len() != fun.args.len() {
        return Err("incorrect parameter number".to_string());
    }
    for (a, b) in args.iter().zip(fun.args.iter()) {
        find_subst(ctx, valuation, &mut subst, a.clone(), b.clone())?;
    }
    let result = fun.result.as_ref().map(|v| v.subst(&subst));
    Ok(result.map(|result| {
        if let SelfType = result {
            obj.unwrap()
        } else {
            result.into()
        }
    }))
}

pub fn resolve_call(
    ctx: &InferenceContext,
    obj: Option<Type>,
    func: &str,
    args: &[Type],
) -> Result<Option<Type>, String> {
    resolve_call_rec(ctx, &None, obj, func, args)
}
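`find_subst` above builds the substitution one argument pair at a time: a concrete type may replace a type variable only if the variable's bound is empty (unbounded) or contains that type, otherwise resolution fails with "different domain". The core rule in isolation (type names as plain strings, an illustrative sketch mirroring `VarDef::bound`):

```rust
use std::collections::HashMap;

/// bounds[var] lists the allowed concrete types for `var` (empty = unbounded);
/// on success the binding is recorded in `sub`.
fn bind(
    bounds: &HashMap<u32, Vec<&'static str>>,
    sub: &mut HashMap<u32, &'static str>,
    concrete: &'static str,
    var: u32,
) -> Result<(), String> {
    let bound = bounds.get(&var).map(Vec::as_slice).unwrap_or(&[]);
    if bound.is_empty() || bound.contains(&concrete) {
        sub.insert(var, concrete);
        Ok(())
    } else {
        Err("different domain".to_string())
    }
}

fn main() {
    let mut bounds = HashMap::new();
    bounds.insert(0, vec!["int32", "float"]);
    let mut sub = HashMap::new();
    assert!(bind(&bounds, &mut sub, "int32", 0).is_ok()); // within bound
    assert!(bind(&bounds, &mut sub, "bool", 1).is_ok()); // var 1 is unbounded
    assert_eq!(
        bind(&bounds, &mut sub, "bool", 0),
        Err("different domain".to_string())
    );
    println!("ok");
}
```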
|
|
||||||
#[cfg(test)]
|
|
||||||
mod tests {
|
|
||||||
use super::*;
|
|
||||||
use super::super::context::GlobalContext;
|
|
||||||
use super::super::primitives::*;
|
|
||||||
use std::rc::Rc;
|
|
||||||
|
|
||||||
fn get_inference_context(ctx: GlobalContext) -> InferenceContext {
|
|
||||||
InferenceContext::new(ctx, Box::new(|_| Err("unbounded identifier".into())))
|
|
||||||
}
|
|
||||||
|
|
||||||
    #[test]
    fn test_simple_generic() {
        let mut ctx = basic_ctx();
        let v1 = ctx.add_variable(VarDef {
            name: "V1",
            bound: vec![ctx.get_primitive(INT32_TYPE), ctx.get_primitive(FLOAT_TYPE)],
        });
        let v1 = ctx.get_variable(v1);
        let v2 = ctx.add_variable(VarDef {
            name: "V2",
            bound: vec![
                ctx.get_primitive(BOOL_TYPE),
                ctx.get_primitive(INT32_TYPE),
                ctx.get_primitive(FLOAT_TYPE),
            ],
        });
        let v2 = ctx.get_variable(v2);
        let ctx = get_inference_context(ctx);

        assert_eq!(
            resolve_call(&ctx, None, "int32", &[ctx.get_primitive(FLOAT_TYPE)]),
            Ok(Some(ctx.get_primitive(INT32_TYPE)))
        );

        assert_eq!(
            resolve_call(&ctx, None, "int32", &[ctx.get_primitive(INT32_TYPE)]),
            Ok(Some(ctx.get_primitive(INT32_TYPE)))
        );

        assert_eq!(
            resolve_call(&ctx, None, "float", &[ctx.get_primitive(INT32_TYPE)]),
            Ok(Some(ctx.get_primitive(FLOAT_TYPE)))
        );

        assert_eq!(
            resolve_call(&ctx, None, "float", &[ctx.get_primitive(BOOL_TYPE)]),
            Err("different domain".to_string())
        );

        assert_eq!(
            resolve_call(&ctx, None, "float", &[]),
            Err("incorrect parameter number".to_string())
        );

        assert_eq!(
            resolve_call(&ctx, None, "float", &[v1]),
            Ok(Some(ctx.get_primitive(FLOAT_TYPE)))
        );

        assert_eq!(
            resolve_call(&ctx, None, "float", &[v2]),
            Err("different domain".to_string())
        );
    }

    #[test]
    fn test_methods() {
        let mut ctx = basic_ctx();

        let v0 = ctx.add_variable(VarDef {
            name: "V0",
            bound: vec![],
        });
        let v0 = ctx.get_variable(v0);

        let int32 = ctx.get_primitive(INT32_TYPE);
        let int64 = ctx.get_primitive(INT64_TYPE);
        let ctx = get_inference_context(ctx);

        // simple cases
        assert_eq!(
            resolve_call(&ctx, Some(int32.clone()), "__add__", &[int32.clone()]),
            Ok(Some(int32.clone()))
        );

        assert_ne!(
            resolve_call(&ctx, Some(int32.clone()), "__add__", &[int32.clone()]),
            Ok(Some(int64.clone()))
        );

        assert_eq!(
            resolve_call(&ctx, Some(int32), "__add__", &[int64]),
            Err("not equal".to_string())
        );

        // with type variables
        assert_eq!(
            resolve_call(&ctx, Some(v0.clone()), "__add__", &[v0.clone()]),
            Err("not supported".into())
        );
    }

    #[test]
    fn test_multi_generic() {
        let mut ctx = basic_ctx();
        let v0 = ctx.add_variable(VarDef {
            name: "V0",
            bound: vec![],
        });
        let v0 = ctx.get_variable(v0);
        let v1 = ctx.add_variable(VarDef {
            name: "V1",
            bound: vec![],
        });
        let v1 = ctx.get_variable(v1);
        let v2 = ctx.add_variable(VarDef {
            name: "V2",
            bound: vec![],
        });
        let v2 = ctx.get_variable(v2);
        let v3 = ctx.add_variable(VarDef {
            name: "V3",
            bound: vec![],
        });
        let v3 = ctx.get_variable(v3);

        ctx.add_fn(
            "foo",
            FnDef {
                args: vec![v0.clone(), v0.clone(), v1.clone()],
                result: Some(v0.clone()),
            },
        );

        ctx.add_fn(
            "foo1",
            FnDef {
                args: vec![ParametricType(TUPLE_TYPE, vec![v0.clone(), v0.clone(), v1]).into()],
                result: Some(v0),
            },
        );
        let ctx = get_inference_context(ctx);

        assert_eq!(
            resolve_call(&ctx, None, "foo", &[v2.clone(), v2.clone(), v2.clone()]),
            Ok(Some(v2.clone()))
        );
        assert_eq!(
            resolve_call(&ctx, None, "foo", &[v2.clone(), v2.clone(), v3.clone()]),
            Ok(Some(v2.clone()))
        );
        assert_eq!(
            resolve_call(&ctx, None, "foo", &[v2.clone(), v3.clone(), v3.clone()]),
            Err("different variables".to_string())
        );

        assert_eq!(
            resolve_call(
                &ctx,
                None,
                "foo1",
                &[ParametricType(TUPLE_TYPE, vec![v2.clone(), v2.clone(), v2.clone()]).into()]
            ),
            Ok(Some(v2.clone()))
        );
        assert_eq!(
            resolve_call(
                &ctx,
                None,
                "foo1",
                &[ParametricType(TUPLE_TYPE, vec![v2.clone(), v2.clone(), v3.clone()]).into()]
            ),
            Ok(Some(v2.clone()))
        );
        assert_eq!(
            resolve_call(
                &ctx,
                None,
                "foo1",
                &[ParametricType(TUPLE_TYPE, vec![v2, v3.clone(), v3]).into()]
            ),
            Err("different variables".to_string())
        );
    }

    #[test]
    fn test_class_generics() {
        let mut ctx = basic_ctx();

        let list = ctx.get_parametric_def_mut(LIST_TYPE);
        let t = Rc::new(TypeVariable(list.params[0]));
        list.base.methods.insert(
            "head",
            FnDef {
                args: vec![],
                result: Some(t.clone()),
            },
        );
        list.base.methods.insert(
            "append",
            FnDef {
                args: vec![t],
                result: None,
            },
        );

        let v0 = ctx.add_variable(VarDef {
            name: "V0",
            bound: vec![],
        });
        let v0 = ctx.get_variable(v0);
        let v1 = ctx.add_variable(VarDef {
            name: "V1",
            bound: vec![],
        });
        let v1 = ctx.get_variable(v1);
        let ctx = get_inference_context(ctx);

        assert_eq!(
            resolve_call(
                &ctx,
                Some(ParametricType(LIST_TYPE, vec![v0.clone()]).into()),
                "head",
                &[]
            ),
            Ok(Some(v0.clone()))
        );
        assert_eq!(
            resolve_call(
                &ctx,
                Some(ParametricType(LIST_TYPE, vec![v0.clone()]).into()),
                "append",
                &[v0.clone()]
            ),
            Ok(None)
        );
        assert_eq!(
            resolve_call(
                &ctx,
                Some(ParametricType(LIST_TYPE, vec![v0]).into()),
                "append",
                &[v1]
            ),
            Err("different variables".to_string())
        );
    }

    #[test]
    fn test_virtual_class() {
        let mut ctx = basic_ctx();

        let foo = ctx.add_class(ClassDef {
            base: TypeDef {
                name: "Foo",
                methods: HashMap::new(),
                fields: HashMap::new(),
            },
            parents: vec![],
        });

        let foo1 = ctx.add_class(ClassDef {
            base: TypeDef {
                name: "Foo1",
                methods: HashMap::new(),
                fields: HashMap::new(),
            },
            parents: vec![foo],
        });

        let foo2 = ctx.add_class(ClassDef {
            base: TypeDef {
                name: "Foo2",
                methods: HashMap::new(),
                fields: HashMap::new(),
            },
            parents: vec![foo1],
        });

        let bar = ctx.add_class(ClassDef {
            base: TypeDef {
                name: "bar",
                methods: HashMap::new(),
                fields: HashMap::new(),
            },
            parents: vec![],
        });

        ctx.add_fn(
            "foo",
            FnDef {
                args: vec![VirtualClassType(foo).into()],
                result: None,
            },
        );
        ctx.add_fn(
            "foo1",
            FnDef {
                args: vec![VirtualClassType(foo1).into()],
                result: None,
            },
        );
        let ctx = get_inference_context(ctx);

        assert_eq!(
            resolve_call(&ctx, None, "foo", &[ClassType(foo).into()]),
            Ok(None)
        );

        assert_eq!(
            resolve_call(&ctx, None, "foo", &[ClassType(foo1).into()]),
            Ok(None)
        );

        assert_eq!(
            resolve_call(&ctx, None, "foo", &[ClassType(foo2).into()]),
            Ok(None)
        );

        assert_eq!(
            resolve_call(&ctx, None, "foo", &[ClassType(bar).into()]),
            Err("not subtype".to_string())
        );

        assert_eq!(
            resolve_call(&ctx, None, "foo1", &[ClassType(foo1).into()]),
            Ok(None)
        );

        assert_eq!(
            resolve_call(&ctx, None, "foo1", &[ClassType(foo2).into()]),
            Ok(None)
        );

        assert_eq!(
            resolve_call(&ctx, None, "foo1", &[ClassType(foo).into()]),
            Err("not subtype".to_string())
        );

        // virtual class substitution
        assert_eq!(
            resolve_call(&ctx, None, "foo", &[VirtualClassType(foo).into()]),
            Ok(None)
        );
        assert_eq!(
            resolve_call(&ctx, None, "foo", &[VirtualClassType(foo1).into()]),
            Ok(None)
        );
        assert_eq!(
            resolve_call(&ctx, None, "foo", &[VirtualClassType(foo2).into()]),
            Ok(None)
        );
        assert_eq!(
            resolve_call(&ctx, None, "foo", &[VirtualClassType(bar).into()]),
            Err("not subtype".to_string())
        );
    }
}
@ -1,31 +0,0 @@
use rustpython_parser::ast;
use std::vec::Vec;

#[derive(Clone, Copy, PartialEq)]
pub struct FileID(u32);

#[derive(Clone, Copy, PartialEq)]
pub enum Location {
    CodeRange(FileID, ast::Location),
    Builtin,
}

pub struct FileRegistry {
    files: Vec<String>,
}

impl FileRegistry {
    pub fn new() -> FileRegistry {
        FileRegistry { files: Vec::new() }
    }

    pub fn add_file(&mut self, path: &str) -> FileID {
        let index = self.files.len() as u32;
        self.files.push(path.to_owned());
        FileID(index)
    }

    pub fn query_file(&self, id: FileID) -> &str {
        &self.files[id.0 as usize]
    }
}
@ -1,58 +1,754 @@
-use rustpython_parser::ast::{Cmpop, Operator, Unaryop};
-
-pub fn binop_name(op: &Operator) -> &'static str {
-    match op {
-        Operator::Add => "__add__",
-        Operator::Sub => "__sub__",
-        Operator::Div => "__truediv__",
-        Operator::Mod => "__mod__",
-        Operator::Mult => "__mul__",
-        Operator::Pow => "__pow__",
-        Operator::BitOr => "__or__",
-        Operator::BitXor => "__xor__",
-        Operator::BitAnd => "__and__",
-        Operator::LShift => "__lshift__",
-        Operator::RShift => "__rshift__",
-        Operator::FloorDiv => "__floordiv__",
-        Operator::MatMult => "__matmul__",
-    }
-}
-
-pub fn binop_assign_name(op: &Operator) -> &'static str {
-    match op {
-        Operator::Add => "__iadd__",
-        Operator::Sub => "__isub__",
-        Operator::Div => "__itruediv__",
-        Operator::Mod => "__imod__",
-        Operator::Mult => "__imul__",
-        Operator::Pow => "__ipow__",
-        Operator::BitOr => "__ior__",
-        Operator::BitXor => "__ixor__",
-        Operator::BitAnd => "__iand__",
-        Operator::LShift => "__ilshift__",
-        Operator::RShift => "__irshift__",
-        Operator::FloorDiv => "__ifloordiv__",
-        Operator::MatMult => "__imatmul__",
-    }
-}
-
-pub fn unaryop_name(op: &Unaryop) -> &'static str {
-    match op {
-        Unaryop::UAdd => "__pos__",
-        Unaryop::USub => "__neg__",
-        Unaryop::Not => "__not__",
-        Unaryop::Invert => "__inv__",
-    }
-}
-
-pub fn comparison_name(op: &Cmpop) -> Option<&'static str> {
-    match op {
-        Cmpop::Lt => Some("__lt__"),
-        Cmpop::LtE => Some("__le__"),
-        Cmpop::Gt => Some("__gt__"),
-        Cmpop::GtE => Some("__ge__"),
-        Cmpop::Eq => Some("__eq__"),
-        Cmpop::NotEq => Some("__ne__"),
-        _ => None,
-    }
-}
use crate::symbol_resolver::SymbolValue;
use crate::toplevel::helper::PrimDef;
use crate::toplevel::numpy::{make_ndarray_ty, unpack_ndarray_var_tys};
use crate::typecheck::{
    type_inferencer::*,
    typedef::{FunSignature, FuncArg, Type, TypeEnum, Unifier, VarMap},
};
use itertools::{iproduct, Itertools};
use nac3parser::ast::StrRef;
use nac3parser::ast::{Cmpop, Operator, Unaryop};
use std::cmp::max;
use std::collections::HashMap;
use std::rc::Rc;
use strum::IntoEnumIterator;

/// The variant of a binary operator.
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
pub enum BinopVariant {
    /// The normal variant.
    /// For addition, it would be `+`.
    Normal,
    /// The "Augmented Assigning Operator" variant.
    /// For addition, it would be `+=`.
    AugAssign,
}

/// A binary operator with its variant.
#[derive(Debug, Clone, Copy)]
pub struct Binop {
    /// The base [`Operator`] of this binary operator.
    pub base: Operator,
    /// The variant of this binary operator.
    pub variant: BinopVariant,
}

impl Binop {
    /// Make a [`Binop`] of the normal variant from an [`Operator`].
    #[must_use]
    pub fn normal(base: Operator) -> Self {
        Binop { base, variant: BinopVariant::Normal }
    }

    /// Make a [`Binop`] of the aug assign variant from an [`Operator`].
    #[must_use]
    pub fn aug_assign(base: Operator) -> Self {
        Binop { base, variant: BinopVariant::AugAssign }
    }
}

/// Details about an operator (unary, binary, etc...) in Python.
#[derive(Debug, Clone, Copy)]
pub struct OpInfo {
    /// The method name of the binary operator.
    /// For addition, this would be `__add__`, and `__iadd__` if
    /// it is the augmented assigning variant.
    pub method_name: &'static str,
    /// The symbol of the binary operator.
    /// For addition, this would be `+`, and `+=` if
    /// it is the augmented assigning variant.
    pub symbol: &'static str,
}

/// Helper macro to conveniently build an [`OpInfo`].
///
/// Example usage: `make_op_info!("add", "+")` generates
/// `OpInfo { method_name: "__add__", symbol: "+" }`.
macro_rules! make_op_info {
    ($name:expr, $symbol:expr) => {
        OpInfo { method_name: concat!("__", $name, "__"), symbol: $symbol }
    };
}

pub trait HasOpInfo {
    fn op_info(&self) -> OpInfo;
}

fn try_get_cmpop_info(op: Cmpop) -> Option<OpInfo> {
    match op {
        Cmpop::Lt => Some(make_op_info!("lt", "<")),
        Cmpop::LtE => Some(make_op_info!("le", "<=")),
        Cmpop::Gt => Some(make_op_info!("gt", ">")),
        Cmpop::GtE => Some(make_op_info!("ge", ">=")),
        Cmpop::Eq => Some(make_op_info!("eq", "==")),
        Cmpop::NotEq => Some(make_op_info!("ne", "!=")),
        _ => None,
    }
}
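The `make_op_info!` macro above builds the dunder method name at compile time with `concat!`. A minimal, self-contained sketch of that naming scheme (the macro name `dunder!` is illustrative, not from the codebase):

```rust
// Minimal sketch of the compile-time dunder-name scheme used by
// `make_op_info!`: `concat!` joins string literals into one `&'static str`,
// and the augmented-assign variant is formed by prepending "i" to the name.
macro_rules! dunder {
    ($name:expr) => {
        concat!("__", $name, "__")
    };
}

fn main() {
    // Normal variant, and the "i"-prefixed augmented-assign variant.
    assert_eq!(dunder!("add"), "__add__");
    assert_eq!(dunder!(concat!("i", "add")), "__iadd__");
    println!("ok");
}
```

Nested `concat!` calls expand eagerly, which is what lets the real code write `make_op_info!(concat!("i", $name), concat!($symbol, "="))`.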

impl OpInfo {
    #[must_use]
    pub fn supports_cmpop(op: Cmpop) -> bool {
        try_get_cmpop_info(op).is_some()
    }
}

impl HasOpInfo for Cmpop {
    fn op_info(&self) -> OpInfo {
        try_get_cmpop_info(*self).unwrap_or_else(|| panic!("{self:?} is not supported"))
    }
}

impl HasOpInfo for Binop {
    fn op_info(&self) -> OpInfo {
        // Helper macro to generate both the normal variant [`OpInfo`] and the
        // augmented assigning variant [`OpInfo`] for a binary operator conveniently.
        macro_rules! info {
            ($name:literal, $symbol:literal) => {
                (
                    make_op_info!($name, $symbol),
                    make_op_info!(concat!("i", $name), concat!($symbol, "=")),
                )
            };
        }

        let (normal_variant, aug_assign_variant) = match self.base {
            Operator::Add => info!("add", "+"),
            Operator::Sub => info!("sub", "-"),
            Operator::Div => info!("truediv", "/"),
            Operator::Mod => info!("mod", "%"),
            Operator::Mult => info!("mul", "*"),
            Operator::Pow => info!("pow", "**"),
            Operator::BitOr => info!("or", "|"),
            Operator::BitXor => info!("xor", "^"),
            Operator::BitAnd => info!("and", "&"),
            Operator::LShift => info!("lshift", "<<"),
            Operator::RShift => info!("rshift", ">>"),
            Operator::FloorDiv => info!("floordiv", "//"),
            Operator::MatMult => info!("matmul", "@"),
        };

        match self.variant {
            BinopVariant::Normal => normal_variant,
            BinopVariant::AugAssign => aug_assign_variant,
        }
    }
}

impl HasOpInfo for Unaryop {
    fn op_info(&self) -> OpInfo {
        match self {
            Unaryop::UAdd => make_op_info!("pos", "+"),
            Unaryop::USub => make_op_info!("neg", "-"),
            Unaryop::Not => make_op_info!("not", "not"), // i.e., `not False`, so the symbol is just `not`.
            Unaryop::Invert => make_op_info!("inv", "~"),
        }
    }
}

pub(super) fn with_fields<F>(unifier: &mut Unifier, ty: Type, f: F)
where
    F: FnOnce(&mut Unifier, &mut HashMap<StrRef, (Type, bool)>),
{
    let (id, mut fields, params) =
        if let TypeEnum::TObj { obj_id, fields, params } = &*unifier.get_ty(ty) {
            (*obj_id, fields.clone(), params.clone())
        } else {
            unreachable!()
        };
    f(unifier, &mut fields);
    unsafe {
        let unification_table = unifier.get_unification_table();
        unification_table.set_value(ty, Rc::new(TypeEnum::TObj { obj_id: id, fields, params }));
    }
}

pub fn impl_binop(
    unifier: &mut Unifier,
    _store: &PrimitiveStore,
    ty: Type,
    other_ty: &[Type],
    ret_ty: Option<Type>,
    ops: &[Operator],
) {
    with_fields(unifier, ty, |unifier, fields| {
        let (other_ty, other_var_id) = if other_ty.len() == 1 {
            (other_ty[0], None)
        } else {
            let tvar = unifier.get_fresh_var_with_range(other_ty, Some("N".into()), None);
            (tvar.ty, Some(tvar.id))
        };

        let function_vars = if let Some(var_id) = other_var_id {
            vec![(var_id, other_ty)].into_iter().collect::<VarMap>()
        } else {
            VarMap::new()
        };

        let ret_ty = ret_ty.unwrap_or_else(|| unifier.get_fresh_var(None, None).ty);

        for (base_op, variant) in iproduct!(ops, [BinopVariant::Normal, BinopVariant::AugAssign]) {
            let op = Binop { base: *base_op, variant };
            fields.insert(op.op_info().method_name.into(), {
                (
                    unifier.add_ty(TypeEnum::TFunc(FunSignature {
                        ret: ret_ty,
                        vars: function_vars.clone(),
                        args: vec![FuncArg {
                            ty: other_ty,
                            default_value: None,
                            name: "other".into(),
                        }],
                    })),
                    false,
                )
            });
        }
    });
}
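The registration loop in `impl_binop` iterates the cross product of operators and the two variants, so each operator contributes both its normal and its augmented-assign method. A hypothetical, self-contained sketch of that shape (plain nested loops and string keys stand in for `iproduct!`, `Binop`, and the unifier's field map):

```rust
// Hypothetical sketch of impl_binop's registration loop: every operator is
// paired with both variants, producing e.g. "__add__" and "__iadd__" entries
// that each map to a one-argument "other -> ret" method signature.
use std::collections::HashMap;

fn register_binop_methods(ops: &[&str]) -> HashMap<String, &'static str> {
    let mut fields = HashMap::new();
    for op in ops {
        for prefix in ["", "i"] {
            // "" mirrors BinopVariant::Normal, "i" mirrors BinopVariant::AugAssign.
            fields.insert(format!("__{prefix}{op}__"), "fn(other) -> ret");
        }
    }
    fields
}

fn main() {
    let fields = register_binop_methods(&["add", "sub"]);
    assert!(fields.contains_key("__add__"));
    assert!(fields.contains_key("__iadd__"));
    assert_eq!(fields.len(), 4);
    println!("ok");
}
```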

pub fn impl_unaryop(unifier: &mut Unifier, ty: Type, ret_ty: Option<Type>, ops: &[Unaryop]) {
    with_fields(unifier, ty, |unifier, fields| {
        let ret_ty = ret_ty.unwrap_or_else(|| unifier.get_fresh_var(None, None).ty);

        for op in ops {
            fields.insert(
                op.op_info().method_name.into(),
                (
                    unifier.add_ty(TypeEnum::TFunc(FunSignature {
                        ret: ret_ty,
                        vars: VarMap::new(),
                        args: vec![],
                    })),
                    false,
                ),
            );
        }
    });
}

pub fn impl_cmpop(
    unifier: &mut Unifier,
    _store: &PrimitiveStore,
    ty: Type,
    other_ty: &[Type],
    ops: &[Cmpop],
    ret_ty: Option<Type>,
) {
    with_fields(unifier, ty, |unifier, fields| {
        let (other_ty, other_var_id) = if other_ty.len() == 1 {
            (other_ty[0], None)
        } else {
            let tvar = unifier.get_fresh_var_with_range(other_ty, Some("N".into()), None);
            (tvar.ty, Some(tvar.id))
        };

        let function_vars = if let Some(var_id) = other_var_id {
            vec![(var_id, other_ty)].into_iter().collect::<VarMap>()
        } else {
            VarMap::new()
        };

        let ret_ty = ret_ty.unwrap_or_else(|| unifier.get_fresh_var(None, None).ty);

        for op in ops {
            fields.insert(
                op.op_info().method_name.into(),
                (
                    unifier.add_ty(TypeEnum::TFunc(FunSignature {
                        ret: ret_ty,
                        vars: function_vars.clone(),
                        args: vec![FuncArg {
                            ty: other_ty,
                            default_value: None,
                            name: "other".into(),
                        }],
                    })),
                    false,
                ),
            );
        }
    });
}

/// `Add`, `Sub`, `Mult`
pub fn impl_basic_arithmetic(
    unifier: &mut Unifier,
    store: &PrimitiveStore,
    ty: Type,
    other_ty: &[Type],
    ret_ty: Option<Type>,
) {
    impl_binop(
        unifier,
        store,
        ty,
        other_ty,
        ret_ty,
        &[Operator::Add, Operator::Sub, Operator::Mult],
    );
}

/// `Pow`
pub fn impl_pow(
    unifier: &mut Unifier,
    store: &PrimitiveStore,
    ty: Type,
    other_ty: &[Type],
    ret_ty: Option<Type>,
) {
    impl_binop(unifier, store, ty, other_ty, ret_ty, &[Operator::Pow]);
}

/// `BitOr`, `BitXor`, `BitAnd`
pub fn impl_bitwise_arithmetic(unifier: &mut Unifier, store: &PrimitiveStore, ty: Type) {
    impl_binop(
        unifier,
        store,
        ty,
        &[ty],
        Some(ty),
        &[Operator::BitAnd, Operator::BitOr, Operator::BitXor],
    );
}

/// `LShift`, `RShift`
pub fn impl_bitwise_shift(unifier: &mut Unifier, store: &PrimitiveStore, ty: Type) {
    impl_binop(
        unifier,
        store,
        ty,
        &[store.int32, store.uint32],
        Some(ty),
        &[Operator::LShift, Operator::RShift],
    );
}

/// `Div`
pub fn impl_div(
    unifier: &mut Unifier,
    store: &PrimitiveStore,
    ty: Type,
    other_ty: &[Type],
    ret_ty: Option<Type>,
) {
    impl_binop(unifier, store, ty, other_ty, ret_ty, &[Operator::Div]);
}

/// `FloorDiv`
pub fn impl_floordiv(
    unifier: &mut Unifier,
    store: &PrimitiveStore,
    ty: Type,
    other_ty: &[Type],
    ret_ty: Option<Type>,
) {
    impl_binop(unifier, store, ty, other_ty, ret_ty, &[Operator::FloorDiv]);
}

/// `Mod`
pub fn impl_mod(
    unifier: &mut Unifier,
    store: &PrimitiveStore,
    ty: Type,
    other_ty: &[Type],
    ret_ty: Option<Type>,
) {
    impl_binop(unifier, store, ty, other_ty, ret_ty, &[Operator::Mod]);
}

/// [`Operator::MatMult`]
pub fn impl_matmul(
    unifier: &mut Unifier,
    store: &PrimitiveStore,
    ty: Type,
    other_ty: &[Type],
    ret_ty: Option<Type>,
) {
    impl_binop(unifier, store, ty, other_ty, ret_ty, &[Operator::MatMult]);
}

/// `UAdd`, `USub`
pub fn impl_sign(unifier: &mut Unifier, _store: &PrimitiveStore, ty: Type, ret_ty: Option<Type>) {
    impl_unaryop(unifier, ty, ret_ty, &[Unaryop::UAdd, Unaryop::USub]);
}

/// `Invert`
pub fn impl_invert(unifier: &mut Unifier, _store: &PrimitiveStore, ty: Type, ret_ty: Option<Type>) {
    impl_unaryop(unifier, ty, ret_ty, &[Unaryop::Invert]);
}

/// `Not`
pub fn impl_not(unifier: &mut Unifier, _store: &PrimitiveStore, ty: Type, ret_ty: Option<Type>) {
    impl_unaryop(unifier, ty, ret_ty, &[Unaryop::Not]);
}

/// `Lt`, `LtE`, `Gt`, `GtE`
pub fn impl_comparison(
    unifier: &mut Unifier,
    store: &PrimitiveStore,
    ty: Type,
    other_ty: &[Type],
    ret_ty: Option<Type>,
) {
    impl_cmpop(
        unifier,
        store,
        ty,
        other_ty,
        &[Cmpop::Lt, Cmpop::Gt, Cmpop::LtE, Cmpop::GtE],
        ret_ty,
    );
}

/// `Eq`, `NotEq`
pub fn impl_eq(
    unifier: &mut Unifier,
    store: &PrimitiveStore,
    ty: Type,
    other_ty: &[Type],
    ret_ty: Option<Type>,
) {
    impl_cmpop(unifier, store, ty, other_ty, &[Cmpop::Eq, Cmpop::NotEq], ret_ty);
}

/// Returns the expected return type of binary operations with at least one `ndarray` operand.
pub fn typeof_ndarray_broadcast(
    unifier: &mut Unifier,
    primitives: &PrimitiveStore,
    left: Type,
    right: Type,
) -> Result<Type, String> {
    let is_left_ndarray = left.obj_id(unifier).is_some_and(|id| id == PrimDef::NDArray.id());
    let is_right_ndarray = right.obj_id(unifier).is_some_and(|id| id == PrimDef::NDArray.id());

    assert!(is_left_ndarray || is_right_ndarray);

    if is_left_ndarray && is_right_ndarray {
        // Perform broadcasting on two ndarray operands.

        let (left_ty_dtype, left_ty_ndims) = unpack_ndarray_var_tys(unifier, left);
        let (right_ty_dtype, right_ty_ndims) = unpack_ndarray_var_tys(unifier, right);

        assert!(unifier.unioned(left_ty_dtype, right_ty_dtype));

        let left_ty_ndims = match &*unifier.get_ty_immutable(left_ty_ndims) {
            TypeEnum::TLiteral { values, .. } => values.clone(),
            _ => unreachable!(),
        };
        let right_ty_ndims = match &*unifier.get_ty_immutable(right_ty_ndims) {
            TypeEnum::TLiteral { values, .. } => values.clone(),
            _ => unreachable!(),
        };

        let res_ndims = left_ty_ndims
            .into_iter()
            .cartesian_product(right_ty_ndims)
            .map(|(left, right)| {
                let left_val = u64::try_from(left).unwrap();
                let right_val = u64::try_from(right).unwrap();

                max(left_val, right_val)
            })
            .unique()
            .map(SymbolValue::U64)
            .collect_vec();
        let res_ndims = unifier.get_fresh_literal(res_ndims, None);

        Ok(make_ndarray_ty(unifier, primitives, Some(left_ty_dtype), Some(res_ndims)))
    } else {
        let (ndarray_ty, scalar_ty) = if is_left_ndarray { (left, right) } else { (right, left) };

        let (ndarray_ty_dtype, _) = unpack_ndarray_var_tys(unifier, ndarray_ty);

        if unifier.unioned(ndarray_ty_dtype, scalar_ty) {
            Ok(ndarray_ty)
        } else {
            let (expected_ty, actual_ty) = if is_left_ndarray {
                (ndarray_ty_dtype, scalar_ty)
            } else {
                (scalar_ty, ndarray_ty_dtype)
            };

            Err(format!(
                "Expected right-hand side operand to be {}, got {}",
                unifier.stringify(expected_ty),
                unifier.stringify(actual_ty),
            ))
        }
    }
}
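The ndims computation above can be sketched without the unifier machinery: each operand's `ndims` type variable is a set of candidate ranks, and every pairing broadcasts to the larger rank. A self-contained sketch (the helper name `broadcast_ndims` is illustrative, not a nac3 API):

```rust
// Standalone sketch of the rank computation in `typeof_ndarray_broadcast`:
// take the cartesian product of both operands' candidate ndims, keep the
// max of each pair, and deduplicate the results.
fn broadcast_ndims(left: &[u64], right: &[u64]) -> Vec<u64> {
    let mut out: Vec<u64> = left
        .iter()
        .flat_map(|&l| right.iter().map(move |&r| l.max(r)))
        .collect();
    out.sort_unstable();
    out.dedup();
    out
}

fn main() {
    // A {1,2}-D operand against a {2,3}-D operand yields a {2,3}-D result.
    assert_eq!(broadcast_ndims(&[1, 2], &[2, 3]), vec![2, 3]);
    println!("ok");
}
```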

/// Returns the return type given a binary operator and its primitive operands.
pub fn typeof_binop(
    unifier: &mut Unifier,
    primitives: &PrimitiveStore,
    op: Operator,
    lhs: Type,
    rhs: Type,
) -> Result<Option<Type>, String> {
    let op = Binop { base: op, variant: BinopVariant::Normal };

    let is_left_list = lhs.obj_id(unifier).is_some_and(|id| id == PrimDef::List.id());
    let is_right_list = rhs.obj_id(unifier).is_some_and(|id| id == PrimDef::List.id());
    let is_left_ndarray = lhs.obj_id(unifier).is_some_and(|id| id == PrimDef::NDArray.id());
    let is_right_ndarray = rhs.obj_id(unifier).is_some_and(|id| id == PrimDef::NDArray.id());

    Ok(Some(match op.base {
        Operator::Add | Operator::Sub | Operator::Mult | Operator::Mod | Operator::FloorDiv => {
            if is_left_list || is_right_list {
                if ![Operator::Add, Operator::Mult].contains(&op.base) {
                    return Err(format!(
                        "Binary operator {} not supported for list",
                        op.op_info().symbol
                    ));
                }

                if is_left_list {
                    lhs
                } else {
                    rhs
                }
            } else if is_left_ndarray || is_right_ndarray {
                typeof_ndarray_broadcast(unifier, primitives, lhs, rhs)?
            } else if unifier.unioned(lhs, rhs) {
                lhs
            } else {
                return Ok(None);
            }
        }

        Operator::MatMult => {
            let (_, lhs_ndims) = unpack_ndarray_var_tys(unifier, lhs);
            let lhs_ndims = match &*unifier.get_ty_immutable(lhs_ndims) {
                TypeEnum::TLiteral { values, .. } => {
                    assert_eq!(values.len(), 1);
                    u64::try_from(values[0].clone()).unwrap()
                }
                _ => unreachable!(),
            };
            let (_, rhs_ndims) = unpack_ndarray_var_tys(unifier, rhs);
            let rhs_ndims = match &*unifier.get_ty_immutable(rhs_ndims) {
                TypeEnum::TLiteral { values, .. } => {
                    assert_eq!(values.len(), 1);
                    u64::try_from(values[0].clone()).unwrap()
                }
                _ => unreachable!(),
            };

            match (lhs_ndims, rhs_ndims) {
                (2, 2) => typeof_ndarray_broadcast(unifier, primitives, lhs, rhs)?,
                (lhs, rhs) if lhs == 0 || rhs == 0 => {
                    return Err(format!(
                        "Input operand {} does not have enough dimensions (has {lhs}, requires {rhs})",
                        u8::from(rhs == 0)
                    ))
                }
                (lhs, rhs) => {
                    return Err(format!(
                        "ndarray.__matmul__ on {lhs}D and {rhs}D operands not supported"
                    ))
                }
            }
        }

        Operator::Div => {
            if is_left_ndarray || is_right_ndarray {
                typeof_ndarray_broadcast(unifier, primitives, lhs, rhs)?
            } else if unifier.unioned(lhs, rhs) {
                primitives.float
            } else {
                return Ok(None);
            }
        }

        Operator::Pow => {
            if is_left_ndarray || is_right_ndarray {
                typeof_ndarray_broadcast(unifier, primitives, lhs, rhs)?
            } else if [
                primitives.int32,
                primitives.int64,
                primitives.uint32,
                primitives.uint64,
                primitives.float,
            ]
            .into_iter()
            .any(|ty| unifier.unioned(lhs, ty))
            {
                lhs
            } else {
                return Ok(None);
            }
        }

        Operator::LShift | Operator::RShift => lhs,
        Operator::BitOr | Operator::BitXor | Operator::BitAnd => {
            if unifier.unioned(lhs, rhs) {
                lhs
            } else {
                return Ok(None);
            }
        }
    }))
}
||||||
|
|
||||||
|
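The `MatMult` arm above boils down to a small decision table over the two operands' ndims. A minimal, self-contained sketch of just that rule (the helper name is hypothetical; the real code extracts the ndims from `TLiteral` type variables via the unifier):

```rust
// Sketch of the ndims rule in the MatMult arm: only 2-D @ 2-D is accepted;
// 0-D operands get the "not enough dimensions" error, and any other rank
// combination is reported as unsupported.
fn matmul_ndims_check(lhs_ndims: u64, rhs_ndims: u64) -> Result<(), String> {
    match (lhs_ndims, rhs_ndims) {
        (2, 2) => Ok(()),
        (l, r) if l == 0 || r == 0 => Err(format!(
            "Input operand {} does not have enough dimensions (has {l}, requires {r})",
            u8::from(r == 0)
        )),
        (l, r) => Err(format!("ndarray.__matmul__ on {l}D and {r}D operands not supported")),
    }
}

fn main() {
    assert!(matmul_ndims_check(2, 2).is_ok());
    assert!(matmul_ndims_check(0, 2).is_err());
    assert!(matmul_ndims_check(2, 3).is_err());
}
```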
pub fn typeof_unaryop(
    unifier: &mut Unifier,
    primitives: &PrimitiveStore,
    op: Unaryop,
    operand: Type,
) -> Result<Option<Type>, String> {
    let operand_obj_id = operand.obj_id(unifier);

    if op == Unaryop::Not
        && operand_obj_id.is_some_and(|id| id == primitives.ndarray.obj_id(unifier).unwrap())
    {
        return Err(
            "The truth value of an array with more than one element is ambiguous".to_string()
        );
    }

    Ok(match op {
        Unaryop::Not => match operand_obj_id {
            Some(v) if v == PrimDef::NDArray.id() => Some(operand),
            Some(_) => Some(primitives.bool),
            _ => None,
        },

        Unaryop::Invert => {
            if operand_obj_id.is_some_and(|id| id == PrimDef::Bool.id()) {
                Some(primitives.int32)
            } else if operand_obj_id.is_some_and(|id| PrimDef::iter().any(|prim| id == prim.id())) {
                Some(operand)
            } else {
                None
            }
        }

        Unaryop::UAdd | Unaryop::USub => {
            if operand_obj_id.is_some_and(|id| id == PrimDef::NDArray.id()) {
                let (dtype, _) = unpack_ndarray_var_tys(unifier, operand);
                if dtype.obj_id(unifier).is_some_and(|id| id == PrimDef::Bool.id()) {
                    return Err(if op == Unaryop::UAdd {
                        "The ufunc 'positive' cannot be applied to ndarray[bool, N]".to_string()
                    } else {
                        "The numpy boolean negative, the `-` operator, is not supported, use the `~` operator function instead.".to_string()
                    });
                }

                Some(operand)
            } else if operand_obj_id.is_some_and(|id| id == PrimDef::Bool.id()) {
                Some(primitives.int32)
            } else if operand_obj_id.is_some_and(|id| PrimDef::iter().any(|prim| id == prim.id())) {
                Some(operand)
            } else {
                None
            }
        }
    })
}
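The scalar half of `typeof_unaryop` follows Python's promotion rule: `not` always yields `bool`, while `~` and unary `+`/`-` promote `bool` to `int32` and otherwise preserve the primitive type. A self-contained sketch of that rule (the `Ty` and `UnaryOp` enums are illustrative stand-ins; the real code compares `PrimDef` object ids):

```rust
#[derive(Clone, Copy, PartialEq, Eq, Debug)]
enum Ty {
    Bool,
    Int32,
    Int64,
    Float,
}

#[derive(Clone, Copy, PartialEq, Eq)]
enum UnaryOp {
    Not,
    Invert,
    UAdd,
    USub,
}

// `not` maps every primitive to bool; the other unary ops promote bool to
// int32 (as in Python, where `-True == -1`) and are type-preserving on the
// remaining primitives.
fn unary_result(op: UnaryOp, operand: Ty) -> Ty {
    match op {
        UnaryOp::Not => Ty::Bool,
        UnaryOp::Invert | UnaryOp::UAdd | UnaryOp::USub => {
            if operand == Ty::Bool { Ty::Int32 } else { operand }
        }
    }
}

fn main() {
    assert_eq!(unary_result(UnaryOp::USub, Ty::Bool), Ty::Int32);
    assert_eq!(unary_result(UnaryOp::Invert, Ty::Int64), Ty::Int64);
    assert_eq!(unary_result(UnaryOp::Not, Ty::Float), Ty::Bool);
}
```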
/// Returns the return type given a comparison operator and its primitive operands.
pub fn typeof_cmpop(
    unifier: &mut Unifier,
    primitives: &PrimitiveStore,
    _op: Cmpop,
    lhs: Type,
    rhs: Type,
) -> Result<Option<Type>, String> {
    let is_left_ndarray = lhs.obj_id(unifier).is_some_and(|id| id == PrimDef::NDArray.id());
    let is_right_ndarray = rhs.obj_id(unifier).is_some_and(|id| id == PrimDef::NDArray.id());

    Ok(Some(if is_left_ndarray || is_right_ndarray {
        let brd = typeof_ndarray_broadcast(unifier, primitives, lhs, rhs)?;
        let (_, ndims) = unpack_ndarray_var_tys(unifier, brd);

        make_ndarray_ty(unifier, primitives, Some(primitives.bool), Some(ndims))
    } else if unifier.unioned(lhs, rhs) {
        primitives.bool
    } else {
        return Ok(None);
    }))
}
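`typeof_cmpop` has one interesting case: if either side is an ndarray, the result is an ndarray of `bool` with the broadcast ndims, the element dtype is always dropped. A simplified sketch (taking the max of the two ranks stands in for `typeof_ndarray_broadcast`, which also checks compatibility):

```rust
#[derive(Clone, PartialEq, Debug)]
enum Ty {
    Bool,
    Int32,
    NDArray { ndims: u64 },
}

// If either operand is an ndarray, the comparison yields a bool ndarray
// with the broadcast rank (simplified here to max(ndims)); otherwise it
// yields a plain bool.
fn cmp_result(lhs: &Ty, rhs: &Ty) -> Ty {
    let ndims = |t: &Ty| match t {
        Ty::NDArray { ndims } => Some(*ndims),
        _ => None,
    };
    match (ndims(lhs), ndims(rhs)) {
        (None, None) => Ty::Bool,
        (l, r) => Ty::NDArray { ndims: l.unwrap_or(0).max(r.unwrap_or(0)) },
    }
}

fn main() {
    assert_eq!(cmp_result(&Ty::Int32, &Ty::Int32), Ty::Bool);
    assert_eq!(
        cmp_result(&Ty::NDArray { ndims: 2 }, &Ty::Int32),
        Ty::NDArray { ndims: 2 }
    );
}
```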
pub fn set_primitives_magic_methods(store: &PrimitiveStore, unifier: &mut Unifier) {
    let PrimitiveStore {
        int32: int32_t,
        int64: int64_t,
        float: float_t,
        bool: bool_t,
        uint32: uint32_t,
        uint64: uint64_t,
        list: list_t,
        ndarray: ndarray_t,
        ..
    } = *store;
    let size_t = store.usize();

    /* int ======== */
    for t in [int32_t, int64_t, uint32_t, uint64_t] {
        let ndarray_int_t = make_ndarray_ty(unifier, store, Some(t), None);
        impl_basic_arithmetic(unifier, store, t, &[t, ndarray_int_t], None);
        impl_pow(unifier, store, t, &[t, ndarray_int_t], None);
        impl_bitwise_arithmetic(unifier, store, t);
        impl_bitwise_shift(unifier, store, t);
        impl_div(unifier, store, t, &[t, ndarray_int_t], None);
        impl_floordiv(unifier, store, t, &[t, ndarray_int_t], None);
        impl_mod(unifier, store, t, &[t, ndarray_int_t], None);
        impl_invert(unifier, store, t, Some(t));
        impl_not(unifier, store, t, Some(bool_t));
        impl_comparison(unifier, store, t, &[t, ndarray_int_t], None);
        impl_eq(unifier, store, t, &[t, ndarray_int_t], None);
    }
    for t in [int32_t, int64_t] {
        impl_sign(unifier, store, t, Some(t));
    }

    /* float ======== */
    let ndarray_float_t = make_ndarray_ty(unifier, store, Some(float_t), None);
    let ndarray_int32_t = make_ndarray_ty(unifier, store, Some(int32_t), None);
    impl_basic_arithmetic(unifier, store, float_t, &[float_t, ndarray_float_t], None);
    impl_pow(unifier, store, float_t, &[int32_t, float_t, ndarray_int32_t, ndarray_float_t], None);
    impl_div(unifier, store, float_t, &[float_t, ndarray_float_t], None);
    impl_floordiv(unifier, store, float_t, &[float_t, ndarray_float_t], None);
    impl_mod(unifier, store, float_t, &[float_t, ndarray_float_t], None);
    impl_sign(unifier, store, float_t, Some(float_t));
    impl_not(unifier, store, float_t, Some(bool_t));
    impl_comparison(unifier, store, float_t, &[float_t, ndarray_float_t], None);
    impl_eq(unifier, store, float_t, &[float_t, ndarray_float_t], None);

    /* bool ======== */
    let ndarray_bool_t = make_ndarray_ty(unifier, store, Some(bool_t), None);
    impl_invert(unifier, store, bool_t, Some(int32_t));
    impl_not(unifier, store, bool_t, Some(bool_t));
    impl_sign(unifier, store, bool_t, Some(int32_t));
    impl_eq(unifier, store, bool_t, &[bool_t, ndarray_bool_t], None);

    /* list ======== */
    impl_binop(unifier, store, list_t, &[list_t], Some(list_t), &[Operator::Add]);
    impl_binop(unifier, store, list_t, &[int32_t, int64_t], Some(list_t), &[Operator::Mult]);
    impl_cmpop(unifier, store, list_t, &[list_t], &[Cmpop::Eq, Cmpop::NotEq], Some(bool_t));

    /* ndarray ===== */
    let ndarray_usized_ndims_tvar =
        unifier.get_fresh_const_generic_var(size_t, Some("ndarray_ndims".into()), None);
    let ndarray_unsized_t =
        make_ndarray_ty(unifier, store, None, Some(ndarray_usized_ndims_tvar.ty));
    let (ndarray_dtype_t, _) = unpack_ndarray_var_tys(unifier, ndarray_t);
    let (ndarray_unsized_dtype_t, _) = unpack_ndarray_var_tys(unifier, ndarray_unsized_t);
    impl_basic_arithmetic(
        unifier,
        store,
        ndarray_t,
        &[ndarray_unsized_t, ndarray_unsized_dtype_t],
        None,
    );
    impl_pow(unifier, store, ndarray_t, &[ndarray_unsized_t, ndarray_unsized_dtype_t], None);
    impl_div(unifier, store, ndarray_t, &[ndarray_t, ndarray_dtype_t], None);
    impl_floordiv(unifier, store, ndarray_t, &[ndarray_unsized_t, ndarray_unsized_dtype_t], None);
    impl_mod(unifier, store, ndarray_t, &[ndarray_unsized_t, ndarray_unsized_dtype_t], None);
    impl_matmul(unifier, store, ndarray_t, &[ndarray_t], Some(ndarray_t));
    impl_sign(unifier, store, ndarray_t, Some(ndarray_t));
    impl_invert(unifier, store, ndarray_t, Some(ndarray_t));
    impl_eq(unifier, store, ndarray_t, &[ndarray_unsized_t, ndarray_unsized_dtype_t], None);
    impl_comparison(unifier, store, ndarray_t, &[ndarray_unsized_t, ndarray_unsized_dtype_t], None);
}
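`set_primitives_magic_methods` is table-driven: the same `impl_*` helpers are applied to each type, with the operand slice controlling which right-hand sides are accepted. A toy sketch of that registration shape (the names and the string "signatures" are illustrative only; the real helpers register full function signatures with the unifier):

```rust
use std::collections::HashMap;

// Toy stand-in for a per-type magic-method table; the real impl_* helpers
// record operand and return types instead of strings.
fn impl_arith(
    table: &mut HashMap<String, Vec<&'static str>>,
    ty: &'static str,
    operands: &[&'static str],
) {
    for m in ["__add__", "__sub__", "__mul__", "__mod__", "__floordiv__"] {
        table.insert(format!("{ty}.{m}"), operands.to_vec());
    }
}

fn main() {
    let mut table = HashMap::new();
    // Mirrors the int loop above: each int type accepts itself and an
    // ndarray of itself as the right-hand operand.
    for t in ["int32", "int64", "uint32", "uint64"] {
        impl_arith(&mut table, t, &[t, "ndarray[int]"]);
    }
    assert_eq!(table["int32.__add__"], vec!["int32", "ndarray[int]"]);
    assert_eq!(table.len(), 20); // 4 types x 5 methods
}
```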
@@ -1,7 +1,6 @@
pub mod context;
mod function_check;
pub mod inference_core;
pub mod location;
pub mod magic_methods;
pub mod primitives;
pub mod type_error;
pub mod symbol_resolver;
pub mod type_inferencer;
pub mod typedef;
mod unification_table;
@@ -1,184 +0,0 @@
use super::typedef::{TypeEnum::*, *};
use super::context::*;
use std::collections::HashMap;

pub const TUPLE_TYPE: ParamId = ParamId(0);
pub const LIST_TYPE: ParamId = ParamId(1);

pub const BOOL_TYPE: PrimitiveId = PrimitiveId(0);
pub const INT32_TYPE: PrimitiveId = PrimitiveId(1);
pub const INT64_TYPE: PrimitiveId = PrimitiveId(2);
pub const FLOAT_TYPE: PrimitiveId = PrimitiveId(3);

fn impl_math(def: &mut TypeDef, ty: &Type) {
    let result = Some(ty.clone());
    let fun = FnDef { args: vec![ty.clone()], result: result.clone() };
    def.methods.insert("__add__", fun.clone());
    def.methods.insert("__sub__", fun.clone());
    def.methods.insert("__mul__", fun.clone());
    def.methods.insert("__neg__", FnDef { args: vec![], result });
    def.methods.insert(
        "__truediv__",
        FnDef { args: vec![ty.clone()], result: Some(PrimitiveType(FLOAT_TYPE).into()) },
    );
    def.methods.insert("__floordiv__", fun.clone());
    def.methods.insert("__mod__", fun.clone());
    def.methods.insert("__pow__", fun);
}

fn impl_bits(def: &mut TypeDef, ty: &Type) {
    let result = Some(ty.clone());
    let fun = FnDef { args: vec![PrimitiveType(INT32_TYPE).into()], result };

    def.methods.insert("__lshift__", fun.clone());
    def.methods.insert("__rshift__", fun);
    def.methods.insert(
        "__xor__",
        FnDef { args: vec![ty.clone()], result: Some(ty.clone()) },
    );
}

fn impl_eq(def: &mut TypeDef, ty: &Type) {
    let fun = FnDef { args: vec![ty.clone()], result: Some(PrimitiveType(BOOL_TYPE).into()) };

    def.methods.insert("__eq__", fun.clone());
    def.methods.insert("__ne__", fun);
}

fn impl_order(def: &mut TypeDef, ty: &Type) {
    let fun = FnDef { args: vec![ty.clone()], result: Some(PrimitiveType(BOOL_TYPE).into()) };

    def.methods.insert("__lt__", fun.clone());
    def.methods.insert("__gt__", fun.clone());
    def.methods.insert("__le__", fun.clone());
    def.methods.insert("__ge__", fun);
}

pub fn basic_ctx() -> GlobalContext<'static> {
    let primitives = [
        TypeDef { name: "bool", fields: HashMap::new(), methods: HashMap::new() },
        TypeDef { name: "int32", fields: HashMap::new(), methods: HashMap::new() },
        TypeDef { name: "int64", fields: HashMap::new(), methods: HashMap::new() },
        TypeDef { name: "float", fields: HashMap::new(), methods: HashMap::new() },
    ]
    .to_vec();
    let mut ctx = GlobalContext::new(primitives);

    let b = ctx.get_primitive(BOOL_TYPE);
    let b_def = ctx.get_primitive_def_mut(BOOL_TYPE);
    impl_eq(b_def, &b);
    let int32 = ctx.get_primitive(INT32_TYPE);
    let int32_def = ctx.get_primitive_def_mut(INT32_TYPE);
    impl_math(int32_def, &int32);
    impl_bits(int32_def, &int32);
    impl_order(int32_def, &int32);
    impl_eq(int32_def, &int32);
    let int64 = ctx.get_primitive(INT64_TYPE);
    let int64_def = ctx.get_primitive_def_mut(INT64_TYPE);
    impl_math(int64_def, &int64);
    impl_bits(int64_def, &int64);
    impl_order(int64_def, &int64);
    impl_eq(int64_def, &int64);
    let float = ctx.get_primitive(FLOAT_TYPE);
    let float_def = ctx.get_primitive_def_mut(FLOAT_TYPE);
    impl_math(float_def, &float);
    impl_order(float_def, &float);
    impl_eq(float_def, &float);

    let t = ctx.add_variable_private(VarDef { name: "T", bound: vec![] });

    ctx.add_parametric(ParametricDef {
        base: TypeDef { name: "tuple", fields: HashMap::new(), methods: HashMap::new() },
        // we have nothing for tuple, so no param def
        params: vec![],
    });

    ctx.add_parametric(ParametricDef {
        base: TypeDef { name: "list", fields: HashMap::new(), methods: HashMap::new() },
        params: vec![t],
    });

    let i = ctx.add_variable_private(VarDef {
        name: "I",
        bound: vec![
            PrimitiveType(INT32_TYPE).into(),
            PrimitiveType(INT64_TYPE).into(),
            PrimitiveType(FLOAT_TYPE).into(),
        ],
    });
    let args = vec![TypeVariable(i).into()];
    ctx.add_fn(
        "int32",
        FnDef { args: args.clone(), result: Some(PrimitiveType(INT32_TYPE).into()) },
    );
    ctx.add_fn(
        "int64",
        FnDef { args: args.clone(), result: Some(PrimitiveType(INT64_TYPE).into()) },
    );
    ctx.add_fn(
        "float",
        FnDef { args, result: Some(PrimitiveType(FLOAT_TYPE).into()) },
    );

    ctx
}
@@ -1,23 +0,0 @@
use super::typedef::Type;
use super::location::Location;

pub enum SymbolType {
    TypeName(Type),
    Identifier(Type),
}

pub enum SymbolValue<'a> {
    I32(i32),
    I64(i64),
    Double(f64),
    Bool(bool),
    Tuple(&'a [SymbolValue<'a>]),
    Bytes(&'a [u8]),
}

pub trait SymbolResolver {
    fn get_symbol_type(&self, str: &str) -> Option<SymbolType>;
    fn get_symbol_value(&self, str: &str) -> Option<SymbolValue>;
    fn get_symbol_location(&self, str: &str) -> Option<Location>;
    // handle function call etc.
}
@@ -0,0 +1,241 @@
use std::collections::HashMap;
use std::fmt::Display;

use crate::typecheck::{magic_methods::HasOpInfo, typedef::TypeEnum};

use super::{
    magic_methods::Binop,
    typedef::{RecordKey, Type, Unifier},
};
use itertools::Itertools;
use nac3parser::ast::{Cmpop, Location, StrRef};

#[derive(Debug, Clone)]
pub enum TypeErrorKind {
    GotMultipleValues {
        name: StrRef,
    },
    TooManyArguments {
        expected_min_count: usize,
        expected_max_count: usize,
        got_count: usize,
    },
    MissingArgs {
        missing_arg_names: Vec<StrRef>,
    },
    UnknownArgName(StrRef),
    IncorrectArgType {
        name: StrRef,
        expected: Type,
        got: Type,
    },
    UnsupportedBinaryOpTypes {
        operator: Binop,
        lhs_type: Type,
        rhs_type: Type,
        expected_rhs_type: Type,
    },
    UnsupportedComparsionOpTypes {
        operator: Cmpop,
        lhs_type: Type,
        rhs_type: Type,
        expected_rhs_type: Type,
    },
    FieldUnificationError {
        field: RecordKey,
        types: (Type, Type),
        loc: (Option<Location>, Option<Location>),
    },
    IncompatibleRange(Type, Vec<Type>),
    IncompatibleTypes(Type, Type),
    MutationError(RecordKey, Type),
    NoSuchField(RecordKey, Type),
    TupleIndexOutOfBounds {
        index: i32,
        len: i32,
    },
    RequiresTypeAnn,
    PolymorphicFunctionPointer,
    NoSuchAttribute(RecordKey, Type),
}

#[derive(Debug, Clone)]
pub struct TypeError {
    pub kind: TypeErrorKind,
    pub loc: Option<Location>,
}

impl TypeError {
    #[must_use]
    pub fn new(kind: TypeErrorKind, loc: Option<Location>) -> TypeError {
        TypeError { kind, loc }
    }

    #[must_use]
    pub fn at(mut self, loc: Option<Location>) -> TypeError {
        self.loc = self.loc.or(loc);
        self
    }

    #[must_use]
    pub fn to_display(self, unifier: &Unifier) -> DisplayTypeError {
        DisplayTypeError { err: self, unifier }
    }
}

pub struct DisplayTypeError<'a> {
    pub err: TypeError,
    pub unifier: &'a Unifier,
}

fn loc_to_str(loc: Option<Location>) -> String {
    match loc {
        Some(loc) => format!("(in {loc})"),
        None => String::new(),
    }
}

impl<'a> Display for DisplayTypeError<'a> {
    fn fmt(&self, f: &mut std::fmt::Formatter) -> std::fmt::Result {
        use TypeErrorKind::*;
        let mut notes = Some(HashMap::new());
        match &self.err.kind {
            GotMultipleValues { name } => {
                write!(f, "Got multiple values for parameter {name}")
            }
            TooManyArguments { expected_min_count, expected_max_count, got_count } => {
                debug_assert!(expected_min_count <= expected_max_count);
                if expected_min_count == expected_max_count {
                    let expected_count = expected_min_count; // or expected_max_count
                    write!(f, "Too many arguments. Expected {expected_count} but got {got_count}")
                } else {
                    write!(f, "Too many arguments. Expected {expected_min_count} to {expected_max_count} arguments but got {got_count}")
                }
            }
            MissingArgs { missing_arg_names } => {
                let args = missing_arg_names.iter().join(", ");
                write!(f, "Missing arguments: {args}")
            }
            UnsupportedBinaryOpTypes { operator, lhs_type, rhs_type, expected_rhs_type } => {
                let op_symbol = operator.op_info().symbol;

                let lhs_type_str = self.unifier.stringify_with_notes(*lhs_type, &mut notes);
                let rhs_type_str = self.unifier.stringify_with_notes(*rhs_type, &mut notes);
                let expected_rhs_type_str =
                    self.unifier.stringify_with_notes(*expected_rhs_type, &mut notes);

                write!(f, "Unsupported operand type(s) for {op_symbol}: '{lhs_type_str}' and '{rhs_type_str}' (right operand should have type {expected_rhs_type_str})")
            }
            UnsupportedComparsionOpTypes { operator, lhs_type, rhs_type, expected_rhs_type } => {
                let op_symbol = operator.op_info().symbol;

                let lhs_type_str = self.unifier.stringify_with_notes(*lhs_type, &mut notes);
                let rhs_type_str = self.unifier.stringify_with_notes(*rhs_type, &mut notes);
                let expected_rhs_type_str =
                    self.unifier.stringify_with_notes(*expected_rhs_type, &mut notes);

                write!(f, "'{op_symbol}' not supported between instances of '{lhs_type_str}' and '{rhs_type_str}' (right operand should have type {expected_rhs_type_str})")
            }
            UnknownArgName(name) => {
                write!(f, "Unknown argument name: {name}")
            }
            IncorrectArgType { name, expected, got } => {
                let expected = self.unifier.stringify_with_notes(*expected, &mut notes);
                let got = self.unifier.stringify_with_notes(*got, &mut notes);
                write!(f, "Incorrect argument type for parameter {name}. Expected {expected}, but got {got}")
            }
            FieldUnificationError { field, types, loc } => {
                let lhs = self.unifier.stringify_with_notes(types.0, &mut notes);
                let rhs = self.unifier.stringify_with_notes(types.1, &mut notes);
                write!(
                    f,
                    "Unable to unify field {}: Got types {}{} and {}{}",
                    field,
                    lhs,
                    loc_to_str(loc.0),
                    rhs,
                    loc_to_str(loc.1)
                )
            }
            IncompatibleRange(t, ts) => {
                let t = self.unifier.stringify_with_notes(*t, &mut notes);
                let ts = ts
                    .iter()
                    .map(|t| self.unifier.stringify_with_notes(*t, &mut notes))
                    .collect::<Vec<_>>();
                write!(f, "Expected any one of these types: {}, but got {}", ts.join(", "), t)
            }
            IncompatibleTypes(t1, t2) => {
                let type1 = self.unifier.get_ty_immutable(*t1);
                let type2 = self.unifier.get_ty_immutable(*t2);
                match (&*type1, &*type2) {
                    (TypeEnum::TCall(calls), _) => {
                        let loc = self.unifier.calls[calls[0].0].loc;
                        let result = write!(
                            f,
                            "{} is not callable",
                            self.unifier.stringify_with_notes(*t2, &mut notes)
                        );
                        if let Some(loc) = loc {
                            result?;
                            write!(f, " (in {loc})")?;
                            return Ok(());
                        }
                        result
                    }
                    (TypeEnum::TTuple { ty: ty1 }, TypeEnum::TTuple { ty: ty2 })
                        if ty1.len() != ty2.len() =>
                    {
                        let t1 = self.unifier.stringify_with_notes(*t1, &mut notes);
                        let t2 = self.unifier.stringify_with_notes(*t2, &mut notes);
                        write!(f, "Tuple length mismatch: got {t1} and {t2}")
                    }
                    _ => {
                        let t1 = self.unifier.stringify_with_notes(*t1, &mut notes);
                        let t2 = self.unifier.stringify_with_notes(*t2, &mut notes);
                        write!(f, "Incompatible types: {t1} and {t2}")
                    }
                }
            }
            MutationError(name, t) => {
                if let TypeEnum::TTuple { .. } = &*self.unifier.get_ty_immutable(*t) {
                    write!(f, "Cannot assign to an element of a tuple")
                } else {
                    let t = self.unifier.stringify_with_notes(*t, &mut notes);
                    write!(f, "Cannot assign to field {name} of {t}, which is immutable")
                }
            }
            NoSuchField(name, t) => {
                let t = self.unifier.stringify_with_notes(*t, &mut notes);
                write!(f, "`{t}::{name}` field/method does not exist")
            }
            NoSuchAttribute(name, t) => {
                let t = self.unifier.stringify_with_notes(*t, &mut notes);
                write!(f, "`{t}::{name}` is not a class attribute")
            }
            TupleIndexOutOfBounds { index, len } => {
                write!(
                    f,
                    "Tuple index out of bounds. Got {index} but tuple has only {len} elements"
                )
            }
            RequiresTypeAnn => {
                write!(f, "Unable to infer virtual object type: Type annotation required")
            }
            PolymorphicFunctionPointer => {
                write!(f, "Polymorphic function pointers are not supported")
            }
        }?;
        if let Some(loc) = self.err.loc {
            write!(f, " at {loc}")?;
        }
        let notes = notes.unwrap();
        if !notes.is_empty() {
            write!(f, "\n\nNotes:")?;
            for line in notes.values() {
                write!(f, "\n    {line}")?;
            }
        }
        Ok(())
    }
}
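The `TypeError`/`DisplayTypeError` split above follows a useful pattern: the error stores only type handles, and formatting is deferred via `to_display` until a context (the `Unifier` in the real code) is available to stringify them. A minimal, self-contained sketch of that pattern (all names here are illustrative; a `Vec` of names stands in for the unifier):

```rust
use std::fmt;

// The error holds only opaque type handles (indices here).
struct Error {
    t1: usize,
    t2: usize,
}

// A table of type names stands in for the Unifier.
struct Ctx {
    names: Vec<&'static str>,
}

// The Display wrapper borrows the context, so formatting can resolve
// handles to names without the error itself owning any strings.
struct DisplayError<'a> {
    err: Error,
    ctx: &'a Ctx,
}

impl Error {
    fn to_display(self, ctx: &Ctx) -> DisplayError<'_> {
        DisplayError { err: self, ctx }
    }
}

impl fmt::Display for DisplayError<'_> {
    fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
        write!(
            f,
            "Incompatible types: {} and {}",
            self.ctx.names[self.err.t1], self.ctx.names[self.err.t2]
        )
    }
}

fn main() {
    let ctx = Ctx { names: vec!["int32", "float"] };
    let msg = Error { t1: 0, t2: 1 }.to_display(&ctx).to_string();
    assert_eq!(msg, "Incompatible types: int32 and float");
}
```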
File diff suppressed because it is too large
@ -0,0 +1,764 @@
|
||||||
|
use super::super::{magic_methods::with_fields, typedef::*};
|
||||||
|
use super::*;
|
||||||
|
use crate::{
|
||||||
|
codegen::CodeGenContext,
|
||||||
|
symbol_resolver::ValueEnum,
|
||||||
|
toplevel::{helper::PrimDef, DefinitionId, TopLevelDef},
|
||||||
|
};
|
||||||
|
use indexmap::IndexMap;
|
||||||
|
use indoc::indoc;
|
||||||
|
use nac3parser::ast::FileName;
|
||||||
|
use nac3parser::parser::parse_program;
|
||||||
|
use parking_lot::RwLock;
|
||||||
|
use std::iter::zip;
|
||||||
|
use test_case::test_case;
|
||||||
|
|
||||||
|
struct Resolver {
|
||||||
|
id_to_type: HashMap<StrRef, Type>,
|
||||||
|
id_to_def: HashMap<StrRef, DefinitionId>,
|
||||||
|
class_names: HashMap<StrRef, Type>,
|
||||||
|
}
|
||||||
|
|
||||||
|
impl SymbolResolver for Resolver {
|
||||||
|
fn get_default_param_value(
|
||||||
|
&self,
|
||||||
|
_: &ast::Expr,
|
||||||
|
) -> Option<crate::symbol_resolver::SymbolValue> {
|
||||||
|
unimplemented!()
|
||||||
|
}
|
||||||
|
|
||||||
|
fn get_symbol_type(
|
||||||
|
&self,
|
||||||
|
_: &mut Unifier,
|
||||||
|
_: &[Arc<RwLock<TopLevelDef>>],
|
||||||
|
_: &PrimitiveStore,
|
||||||
|
str: StrRef,
|
||||||
|
) -> Result<Type, String> {
|
||||||
|
self.id_to_type.get(&str).copied().ok_or_else(|| format!("cannot find symbol `{str}`"))
|
||||||
|
}
|
||||||
|
|
||||||
|
fn get_symbol_value<'ctx>(
|
||||||
|
&self,
|
||||||
|
_: StrRef,
|
||||||
|
_: &mut CodeGenContext<'ctx, '_>,
|
||||||
|
) -> Option<ValueEnum<'ctx>> {
|
||||||
|
unimplemented!()
|
||||||
|
}
|
||||||
|
|
||||||
|
fn get_identifier_def(&self, id: StrRef) -> Result<DefinitionId, HashSet<String>> {
|
||||||
|
self.id_to_def
|
||||||
|
.get(&id)
|
||||||
|
.copied()
|
||||||
|
.ok_or_else(|| HashSet::from(["Unknown identifier".to_string()]))
|
||||||
|
}
|
||||||
|
|
||||||
|
fn get_string_id(&self, _: &str) -> i32 {
|
||||||
|
unimplemented!()
|
||||||
|
}
|
||||||
|
|
||||||
|
fn get_exception_id(&self, _tyid: usize) -> usize {
|
||||||
|
unimplemented!()
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
struct TestEnvironment {
|
||||||
|
pub unifier: Unifier,
|
||||||
|
pub function_data: FunctionData,
|
||||||
|
pub primitives: PrimitiveStore,
|
||||||
|
pub id_to_name: HashMap<usize, StrRef>,
|
||||||
|
pub identifier_mapping: HashMap<StrRef, Type>,
|
||||||
|
pub virtual_checks: Vec<(Type, Type, Location)>,
|
||||||
|
pub calls: HashMap<CodeLocation, CallId>,
|
||||||
|
pub top_level: TopLevelContext,
|
||||||
|
}
|
||||||
|
|
||||||
|
impl TestEnvironment {
|
||||||
|
pub fn basic_test_env() -> TestEnvironment {
|
||||||
|
let mut unifier = Unifier::new();
|
||||||
|
|
||||||
|
let int32 = unifier.add_ty(TypeEnum::TObj {
|
||||||
|
obj_id: PrimDef::Int32.id(),
|
||||||
|
fields: HashMap::new(),
|
||||||
|
params: VarMap::new(),
|
||||||
|
});
|
||||||
|
with_fields(&mut unifier, int32, |unifier, fields| {
|
||||||
|
let add_ty = unifier.add_ty(TypeEnum::TFunc(FunSignature {
|
||||||
|
args: vec![FuncArg { name: "other".into(), ty: int32, default_value: None }],
|
||||||
|
ret: int32,
|
||||||
|
vars: VarMap::new(),
|
||||||
|
}));
|
||||||
|
fields.insert("__add__".into(), (add_ty, false));
|
||||||
|
});
|
||||||
|
let int64 = unifier.add_ty(TypeEnum::TObj {
|
||||||
|
obj_id: PrimDef::Int64.id(),
|
||||||
|
fields: HashMap::new(),
|
||||||
|
params: VarMap::new(),
|
||||||
|
});
|
||||||
|
let float = unifier.add_ty(TypeEnum::TObj {
|
||||||
|
obj_id: PrimDef::Float.id(),
|
||||||
|
fields: HashMap::new(),
|
||||||
|
params: VarMap::new(),
|
||||||
|
});
|
||||||
|
let bool = unifier.add_ty(TypeEnum::TObj {
|
||||||
|
obj_id: PrimDef::Bool.id(),
|
||||||
|
fields: HashMap::new(),
|
||||||
|
params: VarMap::new(),
|
||||||
|
});
|
||||||
|
let none = unifier.add_ty(TypeEnum::TObj {
|
||||||
|
obj_id: PrimDef::None.id(),
|
||||||
|
fields: HashMap::new(),
|
||||||
|
params: VarMap::new(),
|
||||||
|
});
|
||||||
|
let range = unifier.add_ty(TypeEnum::TObj {
|
||||||
|
obj_id: PrimDef::Range.id(),
|
||||||
|
fields: HashMap::new(),
|
||||||
|
params: VarMap::new(),
|
||||||
|
});
|
||||||
|
let str = unifier.add_ty(TypeEnum::TObj {
|
||||||
|
obj_id: PrimDef::Str.id(),
|
||||||
|
fields: HashMap::new(),
|
||||||
|
params: VarMap::new(),
|
||||||
|
});
|
||||||
|
let exception = unifier.add_ty(TypeEnum::TObj {
|
||||||
|
obj_id: PrimDef::Exception.id(),
|
||||||
|
fields: HashMap::new(),
|
||||||
|
params: VarMap::new(),
|
||||||
|
});
|
||||||
|
let uint32 = unifier.add_ty(TypeEnum::TObj {
|
||||||
|
            obj_id: PrimDef::UInt32.id(),
            fields: HashMap::new(),
            params: VarMap::new(),
        });
        let uint64 = unifier.add_ty(TypeEnum::TObj {
            obj_id: PrimDef::UInt64.id(),
            fields: HashMap::new(),
            params: VarMap::new(),
        });
        let option = unifier.add_ty(TypeEnum::TObj {
            obj_id: PrimDef::Option.id(),
            fields: HashMap::new(),
            params: VarMap::new(),
        });
        let list_elem_tvar = unifier.get_fresh_var(Some("list_elem".into()), None);
        let list = unifier.add_ty(TypeEnum::TObj {
            obj_id: PrimDef::List.id(),
            fields: HashMap::new(),
            params: into_var_map([list_elem_tvar]),
        });
        let ndarray_dtype_tvar = unifier.get_fresh_var(Some("ndarray_dtype".into()), None);
        let ndarray_ndims_tvar =
            unifier.get_fresh_const_generic_var(uint64, Some("ndarray_ndims".into()), None);
        let ndarray = unifier.add_ty(TypeEnum::TObj {
            obj_id: PrimDef::NDArray.id(),
            fields: HashMap::new(),
            params: into_var_map([ndarray_dtype_tvar, ndarray_ndims_tvar]),
        });
        let primitives = PrimitiveStore {
            int32,
            int64,
            float,
            bool,
            none,
            range,
            str,
            exception,
            uint32,
            uint64,
            option,
            list,
            ndarray,
            size_t: 64,
        };
        unifier.put_primitive_store(&primitives);
        set_primitives_magic_methods(&primitives, &mut unifier);

        let id_to_name: HashMap<_, _> = [
            (0, "int32".into()),
            (1, "int64".into()),
            (2, "float".into()),
            (3, "bool".into()),
            (4, "none".into()),
            (5, "range".into()),
            (6, "str".into()),
            (7, "exception".into()),
        ]
        .into();

        let mut identifier_mapping = HashMap::new();
        identifier_mapping.insert("None".into(), none);

        let resolver = Arc::new(Resolver {
            id_to_type: identifier_mapping.clone(),
            id_to_def: HashMap::default(),
            class_names: HashMap::default(),
        }) as Arc<dyn SymbolResolver + Send + Sync>;

        TestEnvironment {
            top_level: TopLevelContext {
                definitions: Arc::default(),
                unifiers: Arc::default(),
                personality_symbol: None,
            },
            unifier,
            function_data: FunctionData {
                resolver,
                bound_variables: Vec::new(),
                return_type: None,
            },
            primitives,
            id_to_name,
            identifier_mapping,
            virtual_checks: Vec::new(),
            calls: HashMap::new(),
        }
    }

    fn new() -> TestEnvironment {
        let mut unifier = Unifier::new();
        let mut identifier_mapping = HashMap::new();
        let mut top_level_defs: Vec<Arc<RwLock<TopLevelDef>>> = Vec::new();
        let int32 = unifier.add_ty(TypeEnum::TObj {
            obj_id: PrimDef::Int32.id(),
            fields: HashMap::new(),
            params: VarMap::new(),
        });
        with_fields(&mut unifier, int32, |unifier, fields| {
            let add_ty = unifier.add_ty(TypeEnum::TFunc(FunSignature {
                args: vec![FuncArg { name: "other".into(), ty: int32, default_value: None }],
                ret: int32,
                vars: VarMap::new(),
            }));
            fields.insert("__add__".into(), (add_ty, false));
        });
        let int64 = unifier.add_ty(TypeEnum::TObj {
            obj_id: PrimDef::Int64.id(),
            fields: HashMap::new(),
            params: VarMap::new(),
        });
        let float = unifier.add_ty(TypeEnum::TObj {
            obj_id: PrimDef::Float.id(),
            fields: HashMap::new(),
            params: VarMap::new(),
        });
        let bool = unifier.add_ty(TypeEnum::TObj {
            obj_id: PrimDef::Bool.id(),
            fields: HashMap::new(),
            params: VarMap::new(),
        });
        let none = unifier.add_ty(TypeEnum::TObj {
            obj_id: PrimDef::None.id(),
            fields: HashMap::new(),
            params: VarMap::new(),
        });
        let range = unifier.add_ty(TypeEnum::TObj {
            obj_id: PrimDef::Range.id(),
            fields: HashMap::new(),
            params: VarMap::new(),
        });
        let str = unifier.add_ty(TypeEnum::TObj {
            obj_id: PrimDef::Str.id(),
            fields: HashMap::new(),
            params: VarMap::new(),
        });
        let exception = unifier.add_ty(TypeEnum::TObj {
            obj_id: PrimDef::Exception.id(),
            fields: HashMap::new(),
            params: VarMap::new(),
        });
        let uint32 = unifier.add_ty(TypeEnum::TObj {
            obj_id: PrimDef::UInt32.id(),
            fields: HashMap::new(),
            params: VarMap::new(),
        });
        let uint64 = unifier.add_ty(TypeEnum::TObj {
            obj_id: PrimDef::UInt64.id(),
            fields: HashMap::new(),
            params: VarMap::new(),
        });
        let option = unifier.add_ty(TypeEnum::TObj {
            obj_id: PrimDef::Option.id(),
            fields: HashMap::new(),
            params: VarMap::new(),
        });
        let list_elem_tvar = unifier.get_fresh_var(Some("list_elem".into()), None);
        let list = unifier.add_ty(TypeEnum::TObj {
            obj_id: PrimDef::List.id(),
            fields: HashMap::new(),
            params: into_var_map([list_elem_tvar]),
        });
        let ndarray = unifier.add_ty(TypeEnum::TObj {
            obj_id: PrimDef::NDArray.id(),
            fields: HashMap::new(),
            params: VarMap::new(),
        });
        identifier_mapping.insert("None".into(), none);
        for (i, name) in [
            "int32",
            "int64",
            "float",
            "bool",
            "none",
            "range",
            "str",
            "Exception",
            "uint32",
            "uint64",
            "Option",
            "list",
            "ndarray",
        ]
        .iter()
        .enumerate()
        {
            top_level_defs.push(
                RwLock::new(TopLevelDef::Class {
                    name: (*name).into(),
                    object_id: DefinitionId(i),
                    type_vars: Vec::default(),
                    fields: Vec::default(),
                    attributes: Vec::default(),
                    methods: Vec::default(),
                    ancestors: Vec::default(),
                    resolver: None,
                    constructor: None,
                    loc: None,
                })
                .into(),
            );
        }
        let defs = 12;

        let primitives = PrimitiveStore {
            int32,
            int64,
            float,
            bool,
            none,
            range,
            str,
            exception,
            uint32,
            uint64,
            option,
            list,
            ndarray,
            size_t: 64,
        };

        unifier.put_primitive_store(&primitives);

        let tvar = unifier.get_dummy_var();

        let foo_ty = unifier.add_ty(TypeEnum::TObj {
            obj_id: DefinitionId(defs + 1),
            fields: [("a".into(), (tvar.ty, true))].into(),
            params: into_var_map([tvar]),
        });
        top_level_defs.push(
            RwLock::new(TopLevelDef::Class {
                name: "Foo".into(),
                object_id: DefinitionId(defs + 1),
                type_vars: vec![tvar.ty],
                fields: [("a".into(), tvar.ty, true)].into(),
                attributes: Vec::default(),
                methods: Vec::default(),
                ancestors: Vec::default(),
                resolver: None,
                constructor: None,
                loc: None,
            })
            .into(),
        );

        identifier_mapping.insert(
            "Foo".into(),
            unifier.add_ty(TypeEnum::TFunc(FunSignature {
                args: vec![],
                ret: foo_ty,
                vars: into_var_map([tvar]),
            })),
        );

        let fun = unifier.add_ty(TypeEnum::TFunc(FunSignature {
            args: vec![],
            ret: int32,
            vars: IndexMap::default(),
        }));
        let bar = unifier.add_ty(TypeEnum::TObj {
            obj_id: DefinitionId(defs + 2),
            fields: [("a".into(), (int32, true)), ("b".into(), (fun, true))].into(),
            params: IndexMap::default(),
        });
        top_level_defs.push(
            RwLock::new(TopLevelDef::Class {
                name: "Bar".into(),
                object_id: DefinitionId(defs + 2),
                type_vars: Vec::default(),
                fields: [("a".into(), int32, true), ("b".into(), fun, true)].into(),
                attributes: Vec::default(),
                methods: Vec::default(),
                ancestors: Vec::default(),
                resolver: None,
                constructor: None,
                loc: None,
            })
            .into(),
        );
        identifier_mapping.insert(
            "Bar".into(),
            unifier.add_ty(TypeEnum::TFunc(FunSignature {
                args: vec![],
                ret: bar,
                vars: IndexMap::default(),
            })),
        );

        let bar2 = unifier.add_ty(TypeEnum::TObj {
            obj_id: DefinitionId(defs + 3),
            fields: [("a".into(), (bool, true)), ("b".into(), (fun, false))].into(),
            params: IndexMap::default(),
        });
        top_level_defs.push(
            RwLock::new(TopLevelDef::Class {
                name: "Bar2".into(),
                object_id: DefinitionId(defs + 3),
                type_vars: Vec::default(),
                fields: [("a".into(), bool, true), ("b".into(), fun, false)].into(),
                attributes: Vec::default(),
                methods: Vec::default(),
                ancestors: Vec::default(),
                resolver: None,
                constructor: None,
                loc: None,
            })
            .into(),
        );
        identifier_mapping.insert(
            "Bar2".into(),
            unifier.add_ty(TypeEnum::TFunc(FunSignature {
                args: vec![],
                ret: bar2,
                vars: IndexMap::default(),
            })),
        );
        let class_names: HashMap<_, _> = [("Bar".into(), bar), ("Bar2".into(), bar2)].into();

        let id_to_name = [
            "int32".into(),
            "int64".into(),
            "float".into(),
            "bool".into(),
            "none".into(),
            "range".into(),
            "str".into(),
            "exception".into(),
            "uint32".into(),
            "uint64".into(),
            "option".into(),
            "list".into(),
            "ndarray".into(),
            "Foo".into(),
            "Bar".into(),
            "Bar2".into(),
        ]
        .into_iter()
        .enumerate()
        .collect();

        let top_level = TopLevelContext {
            definitions: Arc::new(top_level_defs.into()),
            unifiers: Arc::default(),
            personality_symbol: None,
        };

        let resolver = Arc::new(Resolver {
            id_to_type: identifier_mapping.clone(),
            id_to_def: [
                ("Foo".into(), DefinitionId(defs + 1)),
                ("Bar".into(), DefinitionId(defs + 2)),
                ("Bar2".into(), DefinitionId(defs + 3)),
            ]
            .into(),
            class_names,
        }) as Arc<dyn SymbolResolver + Send + Sync>;

        TestEnvironment {
            unifier,
            top_level,
            function_data: FunctionData {
                resolver,
                bound_variables: Vec::new(),
                return_type: None,
            },
            primitives,
            id_to_name,
            identifier_mapping,
            virtual_checks: Vec::new(),
            calls: HashMap::new(),
        }
    }

    fn get_inferencer(&mut self) -> Inferencer {
        Inferencer {
            top_level: &self.top_level,
            function_data: &mut self.function_data,
            unifier: &mut self.unifier,
            variable_mapping: HashMap::default(),
            primitives: &mut self.primitives,
            virtual_checks: &mut self.virtual_checks,
            calls: &mut self.calls,
            defined_identifiers: HashSet::default(),
            in_handler: false,
        }
    }
}

#[test_case(indoc! {"
        a = 1234
        b = int64(2147483648)
        c = 1.234
        d = True
    "},
    &[("a", "int32"), ("b", "int64"), ("c", "float"), ("d", "bool")].into(),
    &[]
    ; "primitives test")]
#[test_case(indoc! {"
        a = lambda x, y: x
        b = lambda x: a(x, x)
        c = 1.234
        d = b(c)
    "},
    &[("a", "fn[[x:float, y:float], float]"), ("b", "fn[[x:float], float]"), ("c", "float"), ("d", "float")].into(),
    &[]
    ; "lambda test")]
#[test_case(indoc! {"
        a = lambda x: x + x
        b = lambda x: a(x) + x
        a = b
        c = b(1)
    "},
    &[("a", "fn[[x:int32], int32]"), ("b", "fn[[x:int32], int32]"), ("c", "int32")].into(),
    &[]
    ; "lambda test 2")]
#[test_case(indoc! {"
        a = lambda x: x
        b = lambda x: x

        foo1 = Foo()
        foo2 = Foo()
        c = a(foo1.a)
        d = b(foo2.a)

        a(True)
        b(123)

    "},
    &[("a", "fn[[x:bool], bool]"), ("b", "fn[[x:int32], int32]"), ("c", "bool"),
      ("d", "int32"), ("foo1", "Foo[bool]"), ("foo2", "Foo[int32]")].into(),
    &[]
    ; "obj test")]
#[test_case(indoc! {"
        a = [1, 2, 3]
        b = [x + x for x in a]
    "},
    &[("a", "list[int32]"), ("b", "list[int32]")].into(),
    &[]
    ; "listcomp test")]
#[test_case(indoc! {"
        a = virtual(Bar(), Bar)
        b = a.b()
        a = virtual(Bar2())
    "},
    &[("a", "virtual[Bar]"), ("b", "int32")].into(),
    &[("Bar", "Bar"), ("Bar2", "Bar")]
    ; "virtual test")]
#[test_case(indoc! {"
        a = [virtual(Bar(), Bar), virtual(Bar2())]
        b = [x.b() for x in a]
    "},
    &[("a", "list[virtual[Bar]]"), ("b", "list[int32]")].into(),
    &[("Bar", "Bar"), ("Bar2", "Bar")]
    ; "virtual list test")]
fn test_basic(source: &str, mapping: &HashMap<&str, &str>, virtuals: &[(&str, &str)]) {
    println!("source:\n{source}");
    let mut env = TestEnvironment::new();
    let id_to_name = std::mem::take(&mut env.id_to_name);
    let mut defined_identifiers: HashSet<_> = env.identifier_mapping.keys().copied().collect();
    defined_identifiers.insert("virtual".into());
    let mut inferencer = env.get_inferencer();
    inferencer.defined_identifiers.clone_from(&defined_identifiers);
    let statements = parse_program(source, FileName::default()).unwrap();
    let statements = statements
        .into_iter()
        .map(|v| inferencer.fold_stmt(v))
        .collect::<Result<Vec<_>, _>>()
        .unwrap();

    inferencer.check_block(&statements, &mut defined_identifiers).unwrap();

    for (k, v) in &inferencer.variable_mapping {
        let name = inferencer.unifier.internal_stringify(
            *v,
            &mut |v| (*id_to_name.get(&v).unwrap()).into(),
            &mut |v| format!("v{v}"),
            &mut None,
        );
        println!("{k}: {name}");
    }
    for (k, v) in mapping {
        let ty = inferencer.variable_mapping.get(&(*k).into()).unwrap();
        let name = inferencer.unifier.internal_stringify(
            *ty,
            &mut |v| (*id_to_name.get(&v).unwrap()).into(),
            &mut |v| format!("v{v}"),
            &mut None,
        );
        assert_eq!(format!("{k}: {v}"), format!("{k}: {name}"));
    }
    assert_eq!(inferencer.virtual_checks.len(), virtuals.len());
    for ((a, b, _), (x, y)) in zip(inferencer.virtual_checks.iter(), virtuals) {
        let a = inferencer.unifier.internal_stringify(
            *a,
            &mut |v| (*id_to_name.get(&v).unwrap()).into(),
            &mut |v| format!("v{v}"),
            &mut None,
        );
        let b = inferencer.unifier.internal_stringify(
            *b,
            &mut |v| (*id_to_name.get(&v).unwrap()).into(),
            &mut |v| format!("v{v}"),
            &mut None,
        );

        assert_eq!(&a, x);
        assert_eq!(&b, y);
    }
}

#[test_case(indoc! {"
        a = 2
        b = 2
        c = a + b
        d = a - b
        e = a * b
        f = a / b
        g = a // b
        h = a % b
    "},
    &[("a", "int32"),
      ("b", "int32"),
      ("c", "int32"),
      ("d", "int32"),
      ("e", "int32"),
      ("f", "float"),
      ("g", "int32"),
      ("h", "int32")].into()
    ; "int32")]
#[test_case(
    indoc! {"
        a = 2.4
        b = 3.6
        c = a + b
        d = a - b
        e = a * b
        f = a / b
        g = a // b
        h = a % b
        i = a ** b
        ii = 3
        j = a ** b
    "},
    &[("a", "float"),
      ("b", "float"),
      ("c", "float"),
      ("d", "float"),
      ("e", "float"),
      ("f", "float"),
      ("g", "float"),
      ("h", "float"),
      ("i", "float"),
      ("ii", "int32"),
      ("j", "float")].into()
    ; "float"
)]
#[test_case(
    indoc! {"
        a = int64(12312312312)
        b = int64(24242424424)
        c = a + b
        d = a - b
        e = a * b
        f = a / b
        g = a // b
        h = a % b
        i = a == b
        j = a > b
        k = a < b
        l = a != b
    "},
    &[("a", "int64"),
      ("b", "int64"),
      ("c", "int64"),
      ("d", "int64"),
      ("e", "int64"),
      ("f", "float"),
      ("g", "int64"),
      ("h", "int64"),
      ("i", "bool"),
      ("j", "bool"),
      ("k", "bool"),
      ("l", "bool")].into()
    ; "int64"
)]
#[test_case(
    indoc! {"
        a = True
        b = False
        c = a == b
        d = not a
        e = a != b
    "},
    &[("a", "bool"),
      ("b", "bool"),
      ("c", "bool"),
      ("d", "bool"),
      ("e", "bool")].into()
    ; "boolean"
)]
fn test_primitive_magic_methods(source: &str, mapping: &HashMap<&str, &str>) {
    println!("source:\n{source}");
    let mut env = TestEnvironment::basic_test_env();
    let id_to_name = std::mem::take(&mut env.id_to_name);
    let mut defined_identifiers: HashSet<_> = env.identifier_mapping.keys().copied().collect();
    defined_identifiers.insert("virtual".into());
    let mut inferencer = env.get_inferencer();
    inferencer.defined_identifiers.clone_from(&defined_identifiers);
    let statements = parse_program(source, FileName::default()).unwrap();
    let statements = statements
        .into_iter()
        .map(|v| inferencer.fold_stmt(v))
        .collect::<Result<Vec<_>, _>>()
        .unwrap();

    inferencer.check_block(&statements, &mut defined_identifiers).unwrap();

    for (k, v) in &inferencer.variable_mapping {
        let name = inferencer.unifier.internal_stringify(
            *v,
            &mut |v| (*id_to_name.get(&v).unwrap()).into(),
            &mut |v| format!("v{v}"),
            &mut None,
        );
        println!("{k}: {name}");
    }
    for (k, v) in mapping {
        let ty = inferencer.variable_mapping.get(&(*k).into()).unwrap();
        let name = inferencer.unifier.internal_stringify(
            *ty,
            &mut |v| (*id_to_name.get(&v).unwrap()).into(),
            &mut |v| format!("v{v}"),
            &mut None,
        );
        assert_eq!(format!("{k}: {v}"), format!("{k}: {name}"));
    }
}

@@ -1,60 +0,0 @@
use std::collections::HashMap;
use std::rc::Rc;

#[derive(PartialEq, Eq, Copy, Clone, Hash, Debug)]
pub struct PrimitiveId(pub(crate) usize);

#[derive(PartialEq, Eq, Copy, Clone, Hash, Debug)]
pub struct ClassId(pub(crate) usize);

#[derive(PartialEq, Eq, Copy, Clone, Hash, Debug)]
pub struct ParamId(pub(crate) usize);

#[derive(PartialEq, Eq, Copy, Clone, Hash, Debug)]
pub struct VariableId(pub(crate) usize);

#[derive(PartialEq, Eq, Clone, Hash, Debug)]
pub enum TypeEnum {
    BotType,
    SelfType,
    PrimitiveType(PrimitiveId),
    ClassType(ClassId),
    VirtualClassType(ClassId),
    ParametricType(ParamId, Vec<Rc<TypeEnum>>),
    TypeVariable(VariableId),
}

pub type Type = Rc<TypeEnum>;

#[derive(Clone)]
pub struct FnDef {
    // we assume methods first argument to be SelfType,
    // so the first argument is not contained here
    pub args: Vec<Type>,
    pub result: Option<Type>,
}

#[derive(Clone)]
pub struct TypeDef<'a> {
    pub name: &'a str,
    pub fields: HashMap<&'a str, Type>,
    pub methods: HashMap<&'a str, FnDef>,
}

#[derive(Clone)]
pub struct ClassDef<'a> {
    pub base: TypeDef<'a>,
    pub parents: Vec<ClassId>,
}

#[derive(Clone)]
pub struct ParametricDef<'a> {
    pub base: TypeDef<'a>,
    pub params: Vec<VariableId>,
}

#[derive(Clone)]
pub struct VarDef<'a> {
    pub name: &'a str,
    pub bound: Vec<Type>,
}

File diff suppressed because it is too large
@@ -0,0 +1,651 @@
use super::super::magic_methods::with_fields;
use super::*;
use indoc::indoc;
use itertools::Itertools;
use std::collections::HashMap;
use test_case::test_case;

impl Unifier {
    /// Check whether two types are equal.
    fn eq(&mut self, a: Type, b: Type) -> bool {
        if a == b {
            return true;
        }
        let (ty_a, ty_b) = {
            let table = &mut self.unification_table;
            if table.unioned(a, b) {
                return true;
            }
            (table.probe_value(a).clone(), table.probe_value(b).clone())
        };

        match (&*ty_a, &*ty_b) {
            (
                TypeEnum::TVar { fields: None, id: id1, .. },
                TypeEnum::TVar { fields: None, id: id2, .. },
            ) => id1 == id2,
            (
                TypeEnum::TVar { fields: Some(map1), .. },
                TypeEnum::TVar { fields: Some(map2), .. },
            ) => self.map_eq2(map1, map2),
            (TypeEnum::TTuple { ty: ty1 }, TypeEnum::TTuple { ty: ty2 }) => {
                ty1.len() == ty2.len()
                    && ty1.iter().zip(ty2.iter()).all(|(t1, t2)| self.eq(*t1, *t2))
            }
            (TypeEnum::TVirtual { ty: ty1 }, TypeEnum::TVirtual { ty: ty2 }) => self.eq(*ty1, *ty2),
            (
                TypeEnum::TObj { obj_id: id1, params: params1, .. },
                TypeEnum::TObj { obj_id: id2, params: params2, .. },
            ) => id1 == id2 && self.map_eq(params1, params2),
            // TLiteral, TCall and TFunc are not yet implemented
            _ => false,
        }
    }

    fn map_eq<K>(&mut self, map1: &IndexMapping<K>, map2: &IndexMapping<K>) -> bool
    where
        K: std::hash::Hash + Eq + Clone,
    {
        if map1.len() != map2.len() {
            return false;
        }
        for (k, v) in map1 {
            if !map2.get(k).is_some_and(|v1| self.eq(*v, *v1)) {
                return false;
            }
        }
        true
    }

    fn map_eq2<K>(&mut self, map1: &Mapping<K, RecordField>, map2: &Mapping<K, RecordField>) -> bool
    where
        K: std::hash::Hash + Eq + Clone,
    {
        if map1.len() != map2.len() {
            return false;
        }
        for (k, v) in map1 {
            if !map2.get(k).is_some_and(|v1| self.eq(v.ty, v1.ty)) {
                return false;
            }
        }
        true
    }
}

struct TestEnvironment {
    pub unifier: Unifier,
    pub type_mapping: HashMap<String, Type>,
}

impl TestEnvironment {
    fn new() -> TestEnvironment {
        let mut unifier = Unifier::new();
        let mut type_mapping = HashMap::new();

        type_mapping.insert(
            "int".into(),
            unifier.add_ty(TypeEnum::TObj {
                obj_id: DefinitionId(0),
                fields: HashMap::new(),
                params: VarMap::new(),
            }),
        );
        type_mapping.insert(
            "float".into(),
            unifier.add_ty(TypeEnum::TObj {
                obj_id: DefinitionId(1),
                fields: HashMap::new(),
                params: VarMap::new(),
            }),
        );
        type_mapping.insert(
            "bool".into(),
            unifier.add_ty(TypeEnum::TObj {
                obj_id: DefinitionId(2),
                fields: HashMap::new(),
                params: VarMap::new(),
            }),
        );
        let tvar = unifier.get_dummy_var();
        type_mapping.insert(
            "Foo".into(),
            unifier.add_ty(TypeEnum::TObj {
                obj_id: DefinitionId(3),
                fields: [("a".into(), (tvar.ty, true))].into(),
                params: into_var_map([tvar]),
            }),
        );
        let tvar = unifier.get_dummy_var();
        type_mapping.insert(
            "list".into(),
            unifier.add_ty(TypeEnum::TObj {
                obj_id: PrimDef::List.id(),
                fields: HashMap::new(),
                params: into_var_map([tvar]),
            }),
        );

        TestEnvironment { unifier, type_mapping }
    }

    fn parse(&mut self, typ: &str, mapping: &Mapping<String>) -> Type {
        let result = self.internal_parse(typ, mapping);
        assert!(result.1.is_empty());
        result.0
    }

    fn internal_parse<'b>(&mut self, typ: &'b str, mapping: &Mapping<String>) -> (Type, &'b str) {
        // for testing only, so we can just panic when the input is malformed
        let end = typ.find(|c| ['[', ',', ']', '='].contains(&c)).unwrap_or(typ.len());
        match &typ[..end] {
            "list" => {
                let mut s = &typ[end..];
                assert_eq!(&s[0..1], "[");
                let mut ty = Vec::new();
                while &s[0..1] != "]" {
                    let result = self.internal_parse(&s[1..], mapping);
                    ty.push(result.0);
                    s = result.1;
                }

                assert_eq!(ty.len(), 1);

                let list_elem_tvar = if let TypeEnum::TObj { params, .. } =
                    &*self.unifier.get_ty_immutable(self.type_mapping["list"])
                {
                    iter_type_vars(params).next().unwrap()
                } else {
                    unreachable!()
                };

                (
                    self.unifier
                        .subst(
                            self.type_mapping["list"],
                            &into_var_map([TypeVar { id: list_elem_tvar.id, ty: ty[0] }]),
                        )
                        .unwrap(),
                    &s[1..],
                )
            }
            "tuple" => {
                let mut s = &typ[end..];
                assert_eq!(&s[0..1], "[");
                let mut ty = Vec::new();
                while &s[0..1] != "]" {
                    let result = self.internal_parse(&s[1..], mapping);
                    ty.push(result.0);
                    s = result.1;
                }
                (self.unifier.add_ty(TypeEnum::TTuple { ty }), &s[1..])
            }
            "Record" => {
                let mut s = &typ[end..];
                assert_eq!(&s[0..1], "[");
                let mut fields = HashMap::new();
                while &s[0..1] != "]" {
                    let eq = s.find('=').unwrap();
                    let key = s[1..eq].into();
                    let result = self.internal_parse(&s[eq + 1..], mapping);
                    fields.insert(key, RecordField::new(result.0, true, None));
                    s = result.1;
                }
                (self.unifier.add_record(fields), &s[1..])
            }
            x => {
                let mut s = &typ[end..];
                let ty = mapping.get(x).copied().unwrap_or_else(|| {
                    // mapping should be type variables, type_mapping should be concrete types
                    // we should not resolve the type of type variables.
                    let mut ty = *self.type_mapping.get(x).unwrap();
                    let te = self.unifier.get_ty(ty);
                    if let TypeEnum::TObj { params, .. } = &*te {
                        if !params.is_empty() {
                            assert_eq!(&s[0..1], "[");
                            let mut p = Vec::new();
                            while &s[0..1] != "]" {
                                let result = self.internal_parse(&s[1..], mapping);
                                p.push(result.0);
                                s = result.1;
                            }
                            s = &s[1..];
                            ty = self
                                .unifier
                                .subst(ty, &params.keys().copied().zip(p).collect())
                                .unwrap_or(ty);
                        }
                    }
                    ty
                });
                (ty, s)
            }
        }
    }

    fn unify(&mut self, typ1: Type, typ2: Type) -> Result<(), String> {
        self.unifier.unify(typ1, typ2).map_err(|e| e.to_display(&self.unifier).to_string())
    }
}

#[test_case(2,
    &[("v1", "v2"), ("v2", "float")],
    &[("v1", "float"), ("v2", "float")]
    ; "simple variable"
)]
#[test_case(2,
    &[("v1", "list[v2]"), ("v1", "list[float]")],
    &[("v1", "list[float]"), ("v2", "float")]
    ; "list element"
)]
#[test_case(3,
    &[
        ("v1", "Record[a=v3,b=v3]"),
        ("v2", "Record[b=float,c=v3]"),
        ("v1", "v2")
    ],
    &[
        ("v1", "Record[a=float,b=float,c=float]"),
        ("v2", "Record[a=float,b=float,c=float]"),
        ("v3", "float")
    ]
    ; "record merge"
)]
#[test_case(3,
    &[
        ("v1", "Record[a=float]"),
        ("v2", "Foo[v3]"),
        ("v1", "v2")
    ],
    &[
        ("v1", "Foo[float]"),
        ("v3", "float")
    ]
    ; "record obj merge"
)]
/// Test cases for valid unifications.
fn test_unify(
    variable_count: u32,
    unify_pairs: &[(&'static str, &'static str)],
    verify_pairs: &[(&'static str, &'static str)],
) {
    let unify_count = unify_pairs.len();
    // test all permutations...
    for perm in unify_pairs.iter().permutations(unify_count) {
        let mut env = TestEnvironment::new();
        let mut mapping = HashMap::new();
        for i in 1..=variable_count {
            let v = env.unifier.get_dummy_var();
            mapping.insert(format!("v{i}"), v.ty);
        }
        // unification may have side effect when we do type resolution, so freeze the types
        // before doing unification.
        let mut pairs = Vec::new();
        for (a, b) in &perm {
            let t1 = env.parse(a, &mapping);
            let t2 = env.parse(b, &mapping);
            pairs.push((t1, t2));
        }
        for (t1, t2) in pairs {
            env.unifier.unify(t1, t2).unwrap();
        }
        for (a, b) in verify_pairs {
            println!("{a} = {b}");
            let t1 = env.parse(a, &mapping);
            let t2 = env.parse(b, &mapping);
            println!("a = {}, b = {}", env.unifier.stringify(t1), env.unifier.stringify(t2));
            assert!(env.unifier.eq(t1, t2));
        }
    }
}

#[test_case(2,
    &[
        ("v1", "tuple[int]"),
        ("v2", "list[int]"),
    ],
    (("v1", "v2"), "Incompatible types: 11[0] and tuple[0]")
    ; "type mismatch"
)]
#[test_case(2,
    &[
        ("v1", "tuple[int]"),
        ("v2", "tuple[float]"),
    ],
    (("v1", "v2"), "Incompatible types: tuple[0] and tuple[1]")
    ; "tuple parameter mismatch"
)]
#[test_case(2,
    &[
        ("v1", "tuple[int,int]"),
        ("v2", "tuple[int]"),
    ],
    (("v1", "v2"), "Tuple length mismatch: got tuple[0, 0] and tuple[0]")
    ; "tuple length mismatch"
)]
#[test_case(3,
    &[
        ("v1", "Record[a=float,b=int]"),
        ("v2", "Foo[v3]"),
    ],
    (("v1", "v2"), "`3[typevar5]::b` field/method does not exist")
    ; "record obj merge"
)]
/// Test cases for invalid unifications.
fn test_invalid_unification(
    variable_count: u32,
    unify_pairs: &[(&'static str, &'static str)],
    erroneous_pair: ((&'static str, &'static str), &'static str),
) {
    let mut env = TestEnvironment::new();
    let mut mapping = HashMap::new();
    for i in 1..=variable_count {
        let v = env.unifier.get_dummy_var();
        mapping.insert(format!("v{i}"), v.ty);
    }
    // unification may have side effect when we do type resolution, so freeze the types
    // before doing unification.
    let mut pairs = Vec::new();
    for (a, b) in unify_pairs {
        let t1 = env.parse(a, &mapping);
        let t2 = env.parse(b, &mapping);
        pairs.push((t1, t2));
    }
    let (t1, t2) =
        (env.parse(erroneous_pair.0 .0, &mapping), env.parse(erroneous_pair.0 .1, &mapping));
    for (a, b) in pairs {
        env.unifier.unify(a, b).unwrap();
    }
    assert_eq!(env.unify(t1, t2), Err(erroneous_pair.1.to_string()));
}

#[test]
fn test_recursive_subst() {
    let mut env = TestEnvironment::new();
    let int = *env.type_mapping.get("int").unwrap();
    let foo_id = *env.type_mapping.get("Foo").unwrap();
    let foo_ty = env.unifier.get_ty(foo_id);
    with_fields(&mut env.unifier, foo_id, |_unifier, fields| {
        fields.insert("rec".into(), (foo_id, true));
    });
    let TypeEnum::TObj { params, .. } = &*foo_ty else { unreachable!() };
    let mapping = params.iter().map(|(id, _)| (*id, int)).collect();
    let instantiated = env.unifier.subst(foo_id, &mapping).unwrap();
    let instantiated_ty = env.unifier.get_ty(instantiated);

    let TypeEnum::TObj { fields, .. } = &*instantiated_ty else { unreachable!() };
|
||||||
|
assert!(env.unifier.unioned(fields.get(&"a".into()).unwrap().0, int));
|
||||||
|
assert!(env.unifier.unioned(fields.get(&"rec".into()).unwrap().0, instantiated));
|
||||||
|
}
|
||||||
|
|
||||||
|
#[test]
|
||||||
|
fn test_virtual() {
|
||||||
|
let mut env = TestEnvironment::new();
|
||||||
|
let int = env.parse("int", &HashMap::new());
|
||||||
|
let fun = env.unifier.add_ty(TypeEnum::TFunc(FunSignature {
|
||||||
|
args: vec![],
|
||||||
|
ret: int,
|
||||||
|
vars: VarMap::new(),
|
||||||
|
}));
|
||||||
|
let bar = env.unifier.add_ty(TypeEnum::TObj {
|
||||||
|
obj_id: DefinitionId(5),
|
||||||
|
fields: [("f".into(), (fun, false)), ("a".into(), (int, false))].into(),
|
||||||
|
params: VarMap::new(),
|
||||||
|
});
|
||||||
|
let v0 = env.unifier.get_dummy_var().ty;
|
||||||
|
let v1 = env.unifier.get_dummy_var().ty;
|
||||||
|
|
||||||
|
let a = env.unifier.add_ty(TypeEnum::TVirtual { ty: bar });
|
||||||
|
let b = env.unifier.add_ty(TypeEnum::TVirtual { ty: v0 });
|
||||||
|
let c = env.unifier.add_record([("f".into(), RecordField::new(v1, false, None))].into());
|
||||||
|
env.unifier.unify(a, b).unwrap();
|
||||||
|
env.unifier.unify(b, c).unwrap();
|
||||||
|
assert!(env.unifier.eq(v1, fun));
|
||||||
|
|
||||||
|
let d = env.unifier.add_record([("a".into(), RecordField::new(v1, true, None))].into());
|
||||||
|
assert_eq!(env.unify(b, d), Err("`virtual[5]::a` field/method does not exist".to_string()));
|
||||||
|
|
||||||
|
let d = env.unifier.add_record([("b".into(), RecordField::new(v1, true, None))].into());
|
||||||
|
assert_eq!(env.unify(b, d), Err("`virtual[5]::b` field/method does not exist".to_string()));
|
||||||
|
}
|
||||||
|
|
||||||
|
#[test]
|
||||||
|
fn test_typevar_range() {
|
||||||
|
let mut env = TestEnvironment::new();
|
||||||
|
let int = env.parse("int", &HashMap::new());
|
||||||
|
let boolean = env.parse("bool", &HashMap::new());
|
||||||
|
let float = env.parse("float", &HashMap::new());
|
||||||
|
let int_list = env.parse("list[int]", &HashMap::new());
|
||||||
|
let float_list = env.parse("list[float]", &HashMap::new());
|
||||||
|
|
||||||
|
let list_elem_tvar = if let TypeEnum::TObj { params, .. } =
|
||||||
|
&*env.unifier.get_ty_immutable(env.type_mapping["list"])
|
||||||
|
{
|
||||||
|
iter_type_vars(params).next().unwrap()
|
||||||
|
} else {
|
||||||
|
unreachable!()
|
||||||
|
};
|
||||||
|
|
||||||
|
// unification between v and int
|
||||||
|
// where v in (int, bool)
|
||||||
|
let v = env.unifier.get_fresh_var_with_range(&[int, boolean], None, None).ty;
|
||||||
|
env.unifier.unify(int, v).unwrap();
|
||||||
|
|
||||||
|
// unification between v and list[int]
|
||||||
|
// where v in (int, bool)
|
||||||
|
let v = env.unifier.get_fresh_var_with_range(&[int, boolean], None, None).ty;
|
||||||
|
assert_eq!(
|
||||||
|
env.unify(int_list, v),
|
||||||
|
Err("Expected any one of these types: 0, 2, but got 11[0]".to_string())
|
||||||
|
);
|
||||||
|
|
||||||
|
// unification between v and float
|
||||||
|
// where v in (int, bool)
|
||||||
|
let v = env.unifier.get_fresh_var_with_range(&[int, boolean], None, None).ty;
|
||||||
|
assert_eq!(
|
||||||
|
env.unify(float, v),
|
||||||
|
Err("Expected any one of these types: 0, 2, but got 1".to_string())
|
||||||
|
);
|
||||||
|
|
||||||
|
let v1 = env.unifier.get_fresh_var_with_range(&[int, boolean], None, None).ty;
|
||||||
|
let v1_list = env.unifier.add_ty(TypeEnum::TObj {
|
||||||
|
obj_id: env.type_mapping["list"].obj_id(&env.unifier).unwrap(),
|
||||||
|
fields: Mapping::default(),
|
||||||
|
params: into_var_map([TypeVar { id: list_elem_tvar.id, ty: v1 }]),
|
||||||
|
});
|
||||||
|
let v = env.unifier.get_fresh_var_with_range(&[int, v1_list], None, None).ty;
|
||||||
|
// unification between v and int
|
||||||
|
// where v in (int, list[v1]), v1 in (int, bool)
|
||||||
|
env.unifier.unify(int, v).unwrap();
|
||||||
|
|
||||||
|
let v = env.unifier.get_fresh_var_with_range(&[int, v1_list], None, None).ty;
|
||||||
|
// unification between v and list[int]
|
||||||
|
// where v in (int, list[v1]), v1 in (int, bool)
|
||||||
|
env.unifier.unify(int_list, v).unwrap();
|
||||||
|
|
||||||
|
let v = env.unifier.get_fresh_var_with_range(&[int, v1_list], None, None).ty;
|
||||||
|
// unification between v and list[float]
|
||||||
|
// where v in (int, list[v1]), v1 in (int, bool)
|
||||||
|
println!("float_list: {}, v: {}", env.unifier.stringify(float_list), env.unifier.stringify(v));
|
||||||
|
assert_eq!(
|
||||||
|
env.unify(float_list, v),
|
||||||
|
Err("Expected any one of these types: 0, 11[typevar6], but got 11[1]\n\nNotes:\n typevar6 ∈ {0, 2}".to_string())
|
||||||
|
);
|
||||||
|
|
||||||
|
let a = env.unifier.get_fresh_var_with_range(&[int, float], None, None).ty;
|
||||||
|
let b = env.unifier.get_fresh_var_with_range(&[boolean, float], None, None).ty;
|
||||||
|
env.unifier.unify(a, b).unwrap();
|
||||||
|
env.unifier.unify(a, float).unwrap();
|
||||||
|
|
||||||
|
let a = env.unifier.get_fresh_var_with_range(&[int, float], None, None).ty;
|
||||||
|
let b = env.unifier.get_fresh_var_with_range(&[boolean, float], None, None).ty;
|
||||||
|
env.unifier.unify(a, b).unwrap();
|
||||||
|
assert_eq!(env.unify(a, int), Err("Expected any one of these types: 1, but got 0".into()));
|
||||||
|
|
||||||
|
let a = env.unifier.get_fresh_var_with_range(&[int, float], None, None).ty;
|
||||||
|
let b = env.unifier.get_fresh_var_with_range(&[boolean, float], None, None).ty;
|
||||||
|
let a_list = env.unifier.add_ty(TypeEnum::TObj {
|
||||||
|
obj_id: env.type_mapping["list"].obj_id(&env.unifier).unwrap(),
|
||||||
|
fields: Mapping::default(),
|
||||||
|
params: into_var_map([TypeVar { id: list_elem_tvar.id, ty: a }]),
|
||||||
|
});
|
||||||
|
let a_list = env.unifier.get_fresh_var_with_range(&[a_list], None, None).ty;
|
||||||
|
let b_list = env.unifier.add_ty(TypeEnum::TObj {
|
||||||
|
obj_id: env.type_mapping["list"].obj_id(&env.unifier).unwrap(),
|
||||||
|
fields: Mapping::default(),
|
||||||
|
params: into_var_map([TypeVar { id: list_elem_tvar.id, ty: b }]),
|
||||||
|
});
|
||||||
|
let b_list = env.unifier.get_fresh_var_with_range(&[b_list], None, None).ty;
|
||||||
|
env.unifier.unify(a_list, b_list).unwrap();
|
||||||
|
let float_list = env.unifier.add_ty(TypeEnum::TObj {
|
||||||
|
obj_id: env.type_mapping["list"].obj_id(&env.unifier).unwrap(),
|
||||||
|
fields: Mapping::default(),
|
||||||
|
params: into_var_map([TypeVar { id: list_elem_tvar.id, ty: float }]),
|
||||||
|
});
|
||||||
|
env.unifier.unify(a_list, float_list).unwrap();
|
||||||
|
// previous unifications should not affect a and b
|
||||||
|
env.unifier.unify(a, int).unwrap();
|
||||||
|
|
||||||
|
let a = env.unifier.get_fresh_var_with_range(&[int, float], None, None).ty;
|
||||||
|
let b = env.unifier.get_fresh_var_with_range(&[boolean, float], None, None).ty;
|
||||||
|
let a_list = env.unifier.add_ty(TypeEnum::TObj {
|
||||||
|
obj_id: env.type_mapping["list"].obj_id(&env.unifier).unwrap(),
|
||||||
|
fields: Mapping::default(),
|
||||||
|
params: into_var_map([TypeVar { id: list_elem_tvar.id, ty: a }]),
|
||||||
|
});
|
||||||
|
let b_list = env.unifier.add_ty(TypeEnum::TObj {
|
||||||
|
obj_id: env.type_mapping["list"].obj_id(&env.unifier).unwrap(),
|
||||||
|
fields: Mapping::default(),
|
||||||
|
params: into_var_map([TypeVar { id: list_elem_tvar.id, ty: b }]),
|
||||||
|
});
|
||||||
|
env.unifier.unify(a_list, b_list).unwrap();
|
||||||
|
let int_list = env.unifier.add_ty(TypeEnum::TObj {
|
||||||
|
obj_id: env.type_mapping["list"].obj_id(&env.unifier).unwrap(),
|
||||||
|
fields: Mapping::default(),
|
||||||
|
params: into_var_map([TypeVar { id: list_elem_tvar.id, ty: int }]),
|
||||||
|
});
|
||||||
|
assert_eq!(
|
||||||
|
env.unify(a_list, int_list),
|
||||||
|
Err("Incompatible types: 11[typevar23] and 11[0]\
|
||||||
|
\n\nNotes:\n typevar23 ∈ {1}"
|
||||||
|
.into())
|
||||||
|
);
|
||||||
|
|
||||||
|
let a = env.unifier.get_fresh_var_with_range(&[int, float], None, None).ty;
|
||||||
|
let b = env.unifier.get_dummy_var().ty;
|
||||||
|
let a_list = env.unifier.add_ty(TypeEnum::TObj {
|
||||||
|
obj_id: env.type_mapping["list"].obj_id(&env.unifier).unwrap(),
|
||||||
|
fields: Mapping::default(),
|
||||||
|
params: into_var_map([TypeVar { id: list_elem_tvar.id, ty: a }]),
|
||||||
|
});
|
||||||
|
let a_list = env.unifier.get_fresh_var_with_range(&[a_list], None, None).ty;
|
||||||
|
let b_list = env.unifier.add_ty(TypeEnum::TObj {
|
||||||
|
obj_id: env.type_mapping["list"].obj_id(&env.unifier).unwrap(),
|
||||||
|
fields: Mapping::default(),
|
||||||
|
params: into_var_map([TypeVar { id: list_elem_tvar.id, ty: b }]),
|
||||||
|
});
|
||||||
|
env.unifier.unify(a_list, b_list).unwrap();
|
||||||
|
assert_eq!(
|
||||||
|
env.unify(b, boolean),
|
||||||
|
Err("Expected any one of these types: 0, 1, but got 2".into())
|
||||||
|
);
|
||||||
|
}
|
||||||
|
|
||||||
|
#[test]
|
||||||
|
fn test_rigid_var() {
|
||||||
|
let mut env = TestEnvironment::new();
|
||||||
|
let a = env.unifier.get_fresh_rigid_var(None, None).ty;
|
||||||
|
let b = env.unifier.get_fresh_rigid_var(None, None).ty;
|
||||||
|
let x = env.unifier.get_dummy_var().ty;
|
||||||
|
let list_elem_tvar = env.unifier.get_fresh_var(Some("list_elem".into()), None);
|
||||||
|
let list_a = env.unifier.add_ty(TypeEnum::TObj {
|
||||||
|
obj_id: env.type_mapping["list"].obj_id(&env.unifier).unwrap(),
|
||||||
|
fields: Mapping::default(),
|
||||||
|
params: into_var_map([TypeVar { id: list_elem_tvar.id, ty: a }]),
|
||||||
|
});
|
||||||
|
let list_x = env.unifier.add_ty(TypeEnum::TObj {
|
||||||
|
obj_id: env.type_mapping["list"].obj_id(&env.unifier).unwrap(),
|
||||||
|
fields: Mapping::default(),
|
||||||
|
params: into_var_map([TypeVar { id: list_elem_tvar.id, ty: x }]),
|
||||||
|
});
|
||||||
|
let int = env.parse("int", &HashMap::new());
|
||||||
|
let list_int = env.parse("list[int]", &HashMap::new());
|
||||||
|
|
||||||
|
assert_eq!(env.unify(a, b), Err("Incompatible types: typevar4 and typevar3".to_string()));
|
||||||
|
env.unifier.unify(list_a, list_x).unwrap();
|
||||||
|
assert_eq!(
|
||||||
|
env.unify(list_x, list_int),
|
||||||
|
Err("Incompatible types: 11[typevar3] and 11[0]".to_string())
|
||||||
|
);
|
||||||
|
|
||||||
|
env.unifier.replace_rigid_var(a, int);
|
||||||
|
env.unifier.unify(list_x, list_int).unwrap();
|
||||||
|
}
|
||||||
|
|
||||||
|
#[test]
|
||||||
|
fn test_instantiation() {
|
||||||
|
let mut env = TestEnvironment::new();
|
||||||
|
let int = env.parse("int", &HashMap::new());
|
||||||
|
let boolean = env.parse("bool", &HashMap::new());
|
||||||
|
let float = env.parse("float", &HashMap::new());
|
||||||
|
let list_int = env.parse("list[int]", &HashMap::new());
|
||||||
|
|
||||||
|
let list_elem_tvar = if let TypeEnum::TObj { params, .. } =
|
||||||
|
&*env.unifier.get_ty_immutable(env.type_mapping["list"])
|
||||||
|
{
|
||||||
|
iter_type_vars(params).next().unwrap()
|
||||||
|
} else {
|
||||||
|
unreachable!()
|
||||||
|
};
|
||||||
|
|
||||||
|
let obj_map: HashMap<_, _> = [(0usize, "int"), (1, "float"), (2, "bool"), (11, "list")].into();
|
||||||
|
|
||||||
|
let v = env.unifier.get_fresh_var_with_range(&[int, boolean], None, None).ty;
|
||||||
|
let list_v = env
|
||||||
|
.unifier
|
||||||
|
.subst(env.type_mapping["list"], &into_var_map([TypeVar { id: list_elem_tvar.id, ty: v }]))
|
||||||
|
.unwrap();
|
||||||
|
let v1 = env.unifier.get_fresh_var_with_range(&[list_v, int], None, None).ty;
|
||||||
|
let v2 = env.unifier.get_fresh_var_with_range(&[list_int, float], None, None).ty;
|
||||||
|
let t = env.unifier.get_dummy_var().ty;
|
||||||
|
let tuple = env.unifier.add_ty(TypeEnum::TTuple { ty: vec![v, v1, v2] });
|
||||||
|
let v3 = env.unifier.get_fresh_var_with_range(&[tuple, t], None, None).ty;
|
||||||
|
// t = TypeVar('t')
|
||||||
|
// v = TypeVar('v', int, bool)
|
||||||
|
// v1 = TypeVar('v1', 'list[v]', int)
|
||||||
|
// v2 = TypeVar('v2', 'list[int]', float)
|
||||||
|
// v3 = TypeVar('v3', tuple[v, v1, v2], t)
|
||||||
|
// what values can v3 take?
|
||||||
|
|
||||||
|
let types = env.unifier.get_instantiations(v3).unwrap();
|
||||||
|
let expected_types = indoc! {"
|
||||||
|
tuple[bool, int, float]
|
||||||
|
tuple[bool, int, list[int]]
|
||||||
|
tuple[bool, list[bool], float]
|
||||||
|
tuple[bool, list[bool], list[int]]
|
||||||
|
tuple[bool, list[int], float]
|
||||||
|
tuple[bool, list[int], list[int]]
|
||||||
|
tuple[int, int, float]
|
||||||
|
tuple[int, int, list[int]]
|
||||||
|
tuple[int, list[bool], float]
|
||||||
|
tuple[int, list[bool], list[int]]
|
||||||
|
tuple[int, list[int], float]
|
||||||
|
tuple[int, list[int], list[int]]
|
||||||
|
v6"
|
||||||
|
}
|
||||||
|
.split('\n')
|
||||||
|
.collect_vec();
|
||||||
|
let types = types
|
||||||
|
.iter()
|
||||||
|
.map(|ty| {
|
||||||
|
env.unifier.internal_stringify(
|
||||||
|
*ty,
|
||||||
|
&mut |i| (*obj_map.get(&i).unwrap()).to_string(),
|
||||||
|
&mut |i| format!("v{i}"),
|
||||||
|
&mut None,
|
||||||
|
)
|
||||||
|
})
|
||||||
|
.sorted()
|
||||||
|
.collect_vec();
|
||||||
|
assert_eq!(expected_types, types);
|
||||||
|
}
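The tests above exercise nac3core's unifier through its own `TestEnvironment`. As a standalone illustration of the underlying idea (not part of the diff), here is a minimal substitution-based unifier over a toy term language with variables, concrete types, and a `list` constructor; names and the error format are invented for the sketch, and the occurs check is deliberately omitted for brevity:

```rust
use std::collections::HashMap;

// A tiny term language: type variables and concrete constructors.
#[derive(Clone, Debug, PartialEq)]
enum Ty {
    Var(u32),
    Int,
    Bool,
    List(Box<Ty>),
}

// Resolve a type under the current substitution, following variable bindings.
fn resolve(ty: &Ty, subst: &HashMap<u32, Ty>) -> Ty {
    match ty {
        Ty::Var(v) => match subst.get(v) {
            Some(t) => resolve(t, subst),
            None => ty.clone(),
        },
        Ty::List(elem) => Ty::List(Box::new(resolve(elem, subst))),
        other => other.clone(),
    }
}

// Unify two types, extending the substitution or reporting a mismatch.
// NOTE: no occurs check, so cyclic bindings are not rejected (a real unifier must handle this).
fn unify(a: &Ty, b: &Ty, subst: &mut HashMap<u32, Ty>) -> Result<(), String> {
    let (a, b) = (resolve(a, subst), resolve(b, subst));
    match (a, b) {
        (Ty::Var(v), t) | (t, Ty::Var(v)) => {
            subst.insert(v, t);
            Ok(())
        }
        (Ty::Int, Ty::Int) | (Ty::Bool, Ty::Bool) => Ok(()),
        (Ty::List(x), Ty::List(y)) => unify(&x, &y, subst),
        (a, b) => Err(format!("Incompatible types: {a:?} and {b:?}")),
    }
}

fn main() {
    let mut subst = HashMap::new();
    // v0 = list[v1], then v0 = list[int]  =>  v1 = int
    unify(&Ty::Var(0), &Ty::List(Box::new(Ty::Var(1))), &mut subst).unwrap();
    unify(&Ty::Var(0), &Ty::List(Box::new(Ty::Int)), &mut subst).unwrap();
    assert_eq!(resolve(&Ty::Var(1), &subst), Ty::Int);
    // int vs bool cannot be unified
    assert!(unify(&Ty::Int, &Ty::Bool, &mut subst).is_err());
}
```

This mirrors why the tests freeze parsed types before unifying: unification mutates the substitution, so later lookups see earlier bindings.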
@ -0,0 +1,182 @@
use std::rc::Rc;

use itertools::izip;

#[derive(Copy, Clone, PartialEq, Eq, Debug, Hash)]
pub struct UnificationKey(usize);

#[derive(Clone)]
pub struct UnificationTable<V> {
    parents: Vec<usize>,
    ranks: Vec<u32>,
    values: Vec<Option<V>>,
    log: Vec<Action<V>>,
    generation: u32,
}

#[derive(Clone, Debug)]
enum Action<V> {
    Parent { key: usize, original_parent: usize },
    Value { key: usize, original_value: Option<V> },
    Rank { key: usize, original_rank: u32 },
    Marker { generation: u32 },
}

impl<V> Default for UnificationTable<V> {
    fn default() -> Self {
        Self::new()
    }
}

impl<V> UnificationTable<V> {
    pub fn new() -> UnificationTable<V> {
        UnificationTable {
            parents: Vec::new(),
            ranks: Vec::new(),
            values: Vec::new(),
            log: Vec::new(),
            generation: 0,
        }
    }

    pub fn new_key(&mut self, v: V) -> UnificationKey {
        let index = self.parents.len();
        self.parents.push(index);
        self.ranks.push(0);
        self.values.push(Some(v));
        UnificationKey(index)
    }

    pub fn unify(&mut self, a: UnificationKey, b: UnificationKey) {
        let mut a = self.find(a);
        let mut b = self.find(b);
        if a == b {
            return;
        }
        if self.ranks[a] < self.ranks[b] {
            std::mem::swap(&mut a, &mut b);
        }
        self.log.push(Action::Parent { key: b, original_parent: self.parents[b] });
        self.parents[b] = a;
        if self.ranks[a] == self.ranks[b] {
            self.log.push(Action::Rank { key: a, original_rank: self.ranks[a] });
            self.ranks[a] += 1;
        }
    }

    pub fn probe_value_immutable(&self, key: UnificationKey) -> &V {
        let mut root = key.0;
        let mut parent = self.parents[root];
        while root != parent {
            root = parent;
            // parent = root.parent
            parent = self.parents[parent];
        }
        self.values[parent].as_ref().unwrap()
    }

    pub fn probe_value(&mut self, a: UnificationKey) -> &V {
        let index = self.find(a);
        self.values[index].as_ref().unwrap()
    }

    pub fn set_value(&mut self, a: UnificationKey, v: V) {
        let index = self.find(a);
        let original_value = self.values[index].replace(v);
        self.log.push(Action::Value { key: index, original_value });
    }

    pub fn unioned(&mut self, a: UnificationKey, b: UnificationKey) -> bool {
        self.find(a) == self.find(b)
    }

    pub fn get_representative(&mut self, key: UnificationKey) -> UnificationKey {
        UnificationKey(self.find(key))
    }

    fn find(&mut self, key: UnificationKey) -> usize {
        let mut root = key.0;
        let mut parent = self.parents[root];
        while root != parent {
            // a = parent.parent
            let a = self.parents[parent];
            // root.parent = parent.parent
            self.log.push(Action::Parent { key: root, original_parent: self.parents[root] });
            self.parents[root] = a;
            root = parent;
            // parent = root.parent
            parent = a;
        }
        parent
    }

    pub fn get_snapshot(&mut self) -> (usize, u32) {
        let generation = self.generation;
        self.log.push(Action::Marker { generation });
        self.generation += 1;
        (self.log.len(), generation)
    }

    pub fn restore_snapshot(&mut self, snapshot: (usize, u32)) {
        let (log_len, generation) = snapshot;
        assert!(self.log.len() >= log_len, "snapshot restoration error");
        assert!(
            matches!(self.log[log_len - 1], Action::Marker { generation: gen } if gen == generation),
            "snapshot restoration error"
        );
        for action in self.log.drain(log_len - 1..).rev() {
            match action {
                Action::Parent { key, original_parent } => {
                    self.parents[key] = original_parent;
                }
                Action::Value { key, original_value } => {
                    self.values[key] = original_value;
                }
                Action::Rank { key, original_rank } => {
                    self.ranks[key] = original_rank;
                }
                Action::Marker { .. } => {}
            }
        }
    }

    pub fn discard_snapshot(&mut self, snapshot: (usize, u32)) {
        let (log_len, generation) = snapshot;
        assert!(self.log.len() >= log_len, "snapshot discard error");
        assert!(
            matches!(self.log[log_len - 1], Action::Marker { generation: gen } if gen == generation),
            "snapshot discard error"
        );
        self.log.clear();
    }
}

impl<V> UnificationTable<Rc<V>>
where
    V: Clone,
{
    pub fn get_send(&self) -> UnificationTable<V> {
        let values = izip!(self.values.iter(), self.parents.iter())
            .enumerate()
            .map(|(i, (v, p))| if *p == i { v.as_ref().map(|v| v.as_ref().clone()) } else { None })
            .collect();
        UnificationTable {
            parents: self.parents.clone(),
            ranks: self.ranks.clone(),
            values,
            log: Vec::new(),
            generation: 0,
        }
    }

    pub fn from_send(table: &UnificationTable<V>) -> UnificationTable<Rc<V>> {
        let values = table.values.iter().cloned().map(|v| v.map(Rc::new)).collect();
        UnificationTable {
            parents: table.parents.clone(),
            ranks: table.ranks.clone(),
            values,
            log: Vec::new(),
            generation: 0,
        }
    }
}
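Every mutating operation in `UnificationTable` (reparenting in `unify`/`find`, `set_value`, rank bumps) first pushes the overwritten state onto `log`, so `restore_snapshot` can replay the log in reverse back to a `Marker`. A minimal, self-contained sketch of this undo-log pattern (the `Undoable` type is invented for illustration, not part of the file above):

```rust
// Sketch of the undo-log idea behind UnificationTable::restore_snapshot:
// each mutation records the value it overwrote; rollback replays the log in reverse.
struct Undoable {
    data: Vec<i32>,
    log: Vec<(usize, i32)>, // (index, original value)
}

impl Undoable {
    fn set(&mut self, i: usize, v: i32) {
        self.log.push((i, self.data[i])); // remember what we overwrite
        self.data[i] = v;
    }
    fn snapshot(&self) -> usize {
        self.log.len() // a snapshot is just a position in the log
    }
    fn restore(&mut self, snap: usize) {
        // undo entries newest-first so earlier originals win
        for (i, old) in self.log.drain(snap..).rev() {
            self.data[i] = old;
        }
    }
}

fn main() {
    let mut u = Undoable { data: vec![1, 2, 3], log: Vec::new() };
    let snap = u.snapshot();
    u.set(0, 10);
    u.set(0, 20);
    assert_eq!(u.data[0], 20);
    u.restore(snap);
    assert_eq!(u.data, vec![1, 2, 3]);
}
```

The real table additionally tags snapshots with a generation counter so that restoring against the wrong marker is caught by the `assert!`s rather than silently corrupting state.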
@ -1,15 +0,0 @@
[package]
name = "nac3embedded"
version = "0.1.0"
authors = ["M-Labs"]
edition = "2018"

[lib]
name = "nac3embedded"
crate-type = ["cdylib"]

[dependencies]
pyo3 = { version = "0.12.4", features = ["extension-module"] }
inkwell = { git = "https://github.com/TheDan64/inkwell", branch = "master", features = ["llvm10-0"] }
rustpython-parser = { git = "https://github.com/RustPython/RustPython", branch = "master" }
nac3core = { path = "../nac3core" }
@ -1,11 +0,0 @@
from language import *


class Demo:
    @kernel
    def run(self: bool) -> bool:
        return False


if __name__ == "__main__":
    Demo().run()
@ -1,19 +0,0 @@
from functools import wraps

import nac3embedded


__all__ = ["kernel", "portable"]


def kernel(function):
    @wraps(function)
    def run_on_core(self, *args, **kwargs):
        nac3 = nac3embedded.NAC3()
        nac3.register_host_object(self)
        nac3.compile_method(self, function.__name__)
    return run_on_core


def portable(function):
    return function
@ -1 +0,0 @@
../target/release/libnac3embedded.so
@ -1,116 +0,0 @@
use std::collections::HashMap;
use std::collections::hash_map::Entry;

use pyo3::prelude::*;
use pyo3::exceptions;
use rustpython_parser::{ast, parser};
use inkwell::context::Context;
use inkwell::targets::*;

use nac3core::CodeGen;

fn runs_on_core(decorator_list: &[ast::Expression]) -> bool {
    for decorator in decorator_list.iter() {
        if let ast::ExpressionType::Identifier { name } = &decorator.node {
            if name == "kernel" || name == "portable" {
                return true
            }
        }
    }
    false
}

#[pyclass(name=NAC3)]
struct Nac3 {
    type_definitions: HashMap<i64, ast::Program>,
    host_objects: HashMap<i64, i64>,
}

#[pymethods]
impl Nac3 {
    #[new]
    fn new() -> Self {
        Nac3 {
            type_definitions: HashMap::new(),
            host_objects: HashMap::new(),
        }
    }

    fn register_host_object(&mut self, obj: PyObject) -> PyResult<()> {
        Python::with_gil(|py| -> PyResult<()> {
            let obj: &PyAny = obj.extract(py)?;
            let obj_type = obj.get_type();

            let builtins = PyModule::import(py, "builtins")?;
            let type_id = builtins.call1("id", (obj_type, ))?.extract()?;

            let entry = self.type_definitions.entry(type_id);
            if let Entry::Vacant(entry) = entry {
                let source = PyModule::import(py, "inspect")?.call1("getsource", (obj_type, ))?;
                let ast = parser::parse_program(source.extract()?).map_err(|e|
                    exceptions::PySyntaxError::new_err(format!("failed to parse host object source: {}", e)))?;
                entry.insert(ast);
                // TODO: examine AST and recursively register dependencies
            };

            let obj_id = builtins.call1("id", (obj, ))?.extract()?;
            match self.host_objects.entry(obj_id) {
                Entry::Vacant(entry) => entry.insert(type_id),
                Entry::Occupied(_) => return Err(
                    exceptions::PyValueError::new_err("host object registered twice")),
            };
            // TODO: collect other information about host object, e.g. value of fields

            Ok(())
        })
    }

    fn compile_method(&self, obj: PyObject, name: String) -> PyResult<()> {
        Python::with_gil(|py| -> PyResult<()> {
            let obj: &PyAny = obj.extract(py)?;
            let builtins = PyModule::import(py, "builtins")?;
            let obj_id = builtins.call1("id", (obj, ))?.extract()?;

            let type_id = self.host_objects.get(&obj_id).ok_or_else(||
                exceptions::PyKeyError::new_err("type of host object not found"))?;
            let ast = self.type_definitions.get(&type_id).ok_or_else(||
                exceptions::PyKeyError::new_err("type definition not found"))?;

            if let ast::StatementType::ClassDef {
                    name: _,
                    body,
                    bases: _,
                    keywords: _,
                    decorator_list: _ } = &ast.statements[0].node {
                for statement in body.iter() {
                    if let ast::StatementType::FunctionDef {
                            is_async: _,
                            name: funcdef_name,
                            args: _,
                            body: _,
                            decorator_list,
                            returns: _ } = &statement.node {
                        if runs_on_core(decorator_list) && funcdef_name == &name {
                            let context = Context::create();
                            let mut codegen = CodeGen::new(&context);
                            codegen.compile_toplevel(&body[0]).map_err(|e|
                                exceptions::PyRuntimeError::new_err(format!("compilation failed: {}", e)))?;
                            codegen.print_ir();
                        }
                    }
                }
            } else {
                return Err(exceptions::PyValueError::new_err("expected ClassDef for type definition"));
            }

            Ok(())
        })
    }
}

#[pymodule]
fn nac3embedded(_py: Python, m: &PyModule) -> PyResult<()> {
    Target::initialize_all(&InitializationConfig::default());
    m.add_class::<Nac3>()?;
    Ok(())
}
@ -0,0 +1,8 @@
[package]
name = "nac3ld"
version = "0.1.0"
authors = ["M-Labs"]
edition = "2021"

[dependencies]
byteorder = { version = "1.5", default-features = false }
@ -0,0 +1,509 @@
#![allow(non_camel_case_types, non_upper_case_globals)]

use std::mem;

use byteorder::{ByteOrder, LittleEndian};

pub const DW_EH_PE_omit: u8 = 0xFF;
pub const DW_EH_PE_absptr: u8 = 0x00;

pub const DW_EH_PE_uleb128: u8 = 0x01;
pub const DW_EH_PE_udata2: u8 = 0x02;
pub const DW_EH_PE_udata4: u8 = 0x03;
pub const DW_EH_PE_udata8: u8 = 0x04;
pub const DW_EH_PE_sleb128: u8 = 0x09;
pub const DW_EH_PE_sdata2: u8 = 0x0A;
pub const DW_EH_PE_sdata4: u8 = 0x0B;
pub const DW_EH_PE_sdata8: u8 = 0x0C;

pub const DW_EH_PE_pcrel: u8 = 0x10;
pub const DW_EH_PE_textrel: u8 = 0x20;
pub const DW_EH_PE_datarel: u8 = 0x30;
pub const DW_EH_PE_funcrel: u8 = 0x40;
pub const DW_EH_PE_aligned: u8 = 0x50;

pub const DW_EH_PE_indirect: u8 = 0x80;

pub struct DwarfReader<'a> {
    pub slice: &'a [u8],
    pub virt_addr: u32,
    base_slice: &'a [u8],
    base_virt_addr: u32,
}

impl<'a> DwarfReader<'a> {
    pub fn new(slice: &[u8], virt_addr: u32) -> DwarfReader {
        DwarfReader { slice, virt_addr, base_slice: slice, base_virt_addr: virt_addr }
    }

    /// Creates a new instance from another instance of [DwarfReader], optionally removing any
    /// offsets previously applied to the other instance.
    pub fn from_reader(other: &DwarfReader<'a>, reset_offset: bool) -> DwarfReader<'a> {
        if reset_offset {
            DwarfReader::new(other.base_slice, other.base_virt_addr)
        } else {
            DwarfReader::new(other.slice, other.virt_addr)
        }
    }

    pub fn offset(&mut self, offset: u32) {
        self.slice = &self.slice[offset as usize..];
        self.virt_addr = self.virt_addr.wrapping_add(offset);
    }

    /// ULEB128 and SLEB128 encodings are defined in Section 7.6 - "Variable Length Data" of the
    /// [DWARF-4 Manual](https://dwarfstd.org/doc/DWARF4.pdf).
    pub fn read_uleb128(&mut self) -> u64 {
        let mut shift: usize = 0;
        let mut result: u64 = 0;
        let mut byte: u8;
        loop {
            byte = self.read_u8();
            result |= u64::from(byte & 0x7F) << shift;
            shift += 7;
            if byte & 0x80 == 0 {
                break;
            }
        }
        result
    }

    pub fn read_sleb128(&mut self) -> i64 {
        let mut shift: u32 = 0;
        let mut result: u64 = 0;
        let mut byte: u8;
        loop {
            byte = self.read_u8();
            result |= u64::from(byte & 0x7F) << shift;
            shift += 7;
            if byte & 0x80 == 0 {
                break;
            }
        }
        // sign-extend
        if shift < u64::BITS && (byte & 0x40) != 0 {
            result |= (!0u64) << shift;
        }
        result as i64
    }

    pub fn read_u8(&mut self) -> u8 {
        let val = self.slice[0];
        self.slice = &self.slice[1..];
        val
    }
}

macro_rules! impl_read_fn {
    ( $($type: ty, $byteorder_fn: ident);* ) => {
        impl<'a> DwarfReader<'a> {
            $(
                pub fn $byteorder_fn(&mut self) -> $type {
                    let val = LittleEndian::$byteorder_fn(self.slice);
                    self.slice = &self.slice[mem::size_of::<$type>()..];
                    val
                }
            )*
        }
    }
}

impl_read_fn!(
    u16, read_u16;
    u32, read_u32;
    u64, read_u64;
    i16, read_i16;
    i32, read_i32;
    i64, read_i64
);
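The LEB128 decode loops above can be checked against the worked examples in the DWARF specification (Section 7.6). A standalone sketch with free functions mirroring `read_uleb128`/`read_sleb128` (the function names here are invented for the example):

```rust
// Decode an unsigned LEB128 value: 7 payload bits per byte, LSB group first,
// high bit set on every byte except the last.
fn uleb128(bytes: &[u8]) -> u64 {
    let mut result = 0u64;
    let mut shift = 0;
    for &b in bytes {
        result |= u64::from(b & 0x7F) << shift;
        shift += 7;
        if b & 0x80 == 0 { break; }
    }
    result
}

// Decode a signed LEB128 value: same framing, plus sign extension from bit 6
// of the final byte.
fn sleb128(bytes: &[u8]) -> i64 {
    let mut result = 0u64;
    let mut shift = 0u32;
    let mut byte = 0u8;
    for &b in bytes {
        byte = b;
        result |= u64::from(b & 0x7F) << shift;
        shift += 7;
        if b & 0x80 == 0 { break; }
    }
    if shift < u64::BITS && (byte & 0x40) != 0 {
        result |= (!0u64) << shift; // sign-extend
    }
    result as i64
}

fn main() {
    // 624485 encodes as 0xE5 0x8E 0x26 (DWARF spec worked example)
    assert_eq!(uleb128(&[0xE5, 0x8E, 0x26]), 624485);
    // -1 is a single 0x7F byte: all payload bits set, sign bit set
    assert_eq!(sleb128(&[0x7F]), -1);
}
```

The variable-length framing is why `read_encoded_pointer` below must dispatch on the encoding byte before it knows how many bytes to consume.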
|
||||||
|
|
||||||
|
pub struct DwarfWriter<'a> {
|
||||||
|
pub slice: &'a mut [u8],
|
||||||
|
pub offset: usize,
|
||||||
|
}
|
||||||
|
|
||||||
|
impl<'a> DwarfWriter<'a> {
    pub fn new(slice: &mut [u8]) -> DwarfWriter {
        DwarfWriter { slice, offset: 0 }
    }

    pub fn write_u8(&mut self, data: u8) {
        self.slice[self.offset] = data;
        self.offset += 1;
    }

    pub fn write_u32(&mut self, data: u32) {
        LittleEndian::write_u32(&mut self.slice[self.offset..], data);
        self.offset += 4;
    }
}
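`write_u32` delegates to the `byteorder` crate's `LittleEndian::write_u32`; its effect on the buffer is the same as copying `u32::to_le_bytes` into place. A std-only sketch of the equivalent operation (the helper name is ours, not part of this crate):

```rust
// std-only equivalent of the LittleEndian::write_u32 call above:
// store `data` as 4 little-endian bytes at `offset` within `buf`.
fn write_u32_le(buf: &mut [u8], offset: usize, data: u32) {
    buf[offset..offset + 4].copy_from_slice(&data.to_le_bytes());
}
```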
fn read_encoded_pointer(reader: &mut DwarfReader, encoding: u8) -> Result<usize, ()> {
    if encoding == DW_EH_PE_omit {
        return Err(());
    }

    // DW_EH_PE_aligned implies an absolute pointer value.
    // However, we are linking a library for a 32-bit architecture,
    // so the size of the value should be 4 bytes instead.
    if encoding == DW_EH_PE_aligned {
        let shifted_virt_addr = round_up(reader.virt_addr as usize, mem::size_of::<u32>())?;
        let addr_inc = shifted_virt_addr - reader.virt_addr as usize;

        reader.slice = &reader.slice[addr_inc..];
        reader.virt_addr = shifted_virt_addr as u32;
        return Ok(reader.read_u32() as usize);
    }

    match encoding & 0x0F {
        DW_EH_PE_absptr | DW_EH_PE_udata4 => Ok(reader.read_u32() as usize),
        DW_EH_PE_uleb128 => Ok(reader.read_uleb128() as usize),
        DW_EH_PE_udata2 => Ok(reader.read_u16() as usize),
        DW_EH_PE_udata8 => Ok(reader.read_u64() as usize),
        DW_EH_PE_sleb128 => Ok(reader.read_sleb128() as usize),
        DW_EH_PE_sdata2 => Ok(reader.read_i16() as usize),
        DW_EH_PE_sdata4 => Ok(reader.read_i32() as usize),
        DW_EH_PE_sdata8 => Ok(reader.read_i64() as usize),
        _ => Err(()),
    }
}
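The `encoding & 0x0F` match above relies on the DWARF EH pointer-encoding byte layout: the low nibble selects the value format, bits `0x70` select how the value is applied (absolute, PC-relative, ...), and `0x80` flags indirection. A small sketch of that split (constant values taken from the LSB `.eh_frame` specification; the uppercase names are ours):

```rust
// A few DW_EH_PE encoding constants (values from the LSB specification)
const DW_EH_PE_SDATA4: u8 = 0x0B;
const DW_EH_PE_PCREL: u8 = 0x10;
const DW_EH_PE_INDIRECT: u8 = 0x80;

/// Split an encoding byte into (value format, application, indirect flag).
fn split_encoding(encoding: u8) -> (u8, u8, bool) {
    (encoding & 0x0F, encoding & 0x70, encoding & DW_EH_PE_INDIRECT != 0)
}
```

For example `0x1B` (PC-relative 4-byte signed, the value written as `eh_frame_ptr_enc` later in this file) splits into format `0x0B` and application `0x10`.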
fn read_encoded_pointer_with_pc(reader: &mut DwarfReader, encoding: u8) -> Result<usize, ()> {
    let entry_virt_addr = reader.virt_addr;
    let mut result = read_encoded_pointer(reader, encoding)?;

    // DW_EH_PE_aligned implies an absolute pointer value
    if encoding == DW_EH_PE_aligned {
        return Ok(result);
    }

    result = match encoding & 0x70 {
        DW_EH_PE_pcrel => result.wrapping_add(entry_virt_addr as usize),

        // .eh_frame would not normally contain these kinds of relocations,
        // and they are not supported by the dedicated linker relocation schemes for RISC-V
        DW_EH_PE_textrel | DW_EH_PE_datarel | DW_EH_PE_funcrel | DW_EH_PE_aligned => {
            unimplemented!()
        }

        // Other values should be impossible
        _ => unreachable!(),
    };

    if encoding & DW_EH_PE_indirect != 0 {
        // There should not be a need for indirect addressing, as assembly code from
        // the dynamic library should not be freely moved relative to the EH frame.
        unreachable!()
    }

    Ok(result)
}
#[inline]
fn round_up(unrounded: usize, align: usize) -> Result<usize, ()> {
    if align.is_power_of_two() {
        Ok((unrounded + align - 1) & !(align - 1))
    } else {
        Err(())
    }
}
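`round_up` uses the usual power-of-two mask trick: adding `align - 1` then clearing the low bits. Reproduced below as a standalone copy so its boundary cases can be checked in isolation:

```rust
// Same function as round_up above: the mask trick is only valid
// for power-of-two alignments, so other alignments return Err.
fn round_up(unrounded: usize, align: usize) -> Result<usize, ()> {
    if align.is_power_of_two() {
        Ok((unrounded + align - 1) & !(align - 1))
    } else {
        Err(())
    }
}
```

Already-aligned inputs are returned unchanged, e.g. `round_up(16, 4)` stays 16, while `round_up(13, 4)` rounds to 16.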
/// Minimalistic structure to store everything needed for parsing FDEs to synthesize the
/// `.eh_frame_hdr` section.
///
/// Refer to [The Linux Standard Base Core Specification, Generic Part](https://refspecs.linuxfoundation.org/LSB_5.0.0/LSB-Core-generic/LSB-Core-generic/ehframechpt.html)
/// for more information.
pub struct EH_Frame<'a> {
    reader: DwarfReader<'a>,
}

impl<'a> EH_Frame<'a> {
    /// Creates an [EH_Frame] using the bytes in the `.eh_frame` section and its address in the ELF
    /// file.
    pub fn new(eh_frame_slice: &[u8], eh_frame_addr: u32) -> EH_Frame {
        EH_Frame { reader: DwarfReader::new(eh_frame_slice, eh_frame_addr) }
    }

    /// Returns an [Iterator] over all Call Frame Information (CFI) records.
    pub fn cfi_records(&self) -> CFI_Records<'a> {
        let reader = DwarfReader::from_reader(&self.reader, true);
        let len = reader.slice.len();

        CFI_Records { reader, available: len }
    }
}
/// A single Call Frame Information (CFI) record.
///
/// From the [specification](https://refspecs.linuxfoundation.org/LSB_5.0.0/LSB-Core-generic/LSB-Core-generic/ehframechpt.html):
///
/// > Each CFI record contains a Common Information Entry (CIE) record followed by 1 or more Frame
/// > Description Entry (FDE) records.
pub struct CFI_Record<'a> {
    // The augmentation data that corresponds to 'R' in the augmentation string
    fde_pointer_encoding: u8,
    fde_reader: DwarfReader<'a>,
}
impl<'a> CFI_Record<'a> {
    pub fn from_reader(cie_reader: &mut DwarfReader<'a>) -> Result<CFI_Record<'a>, ()> {
        let length = cie_reader.read_u32();
        let fde_reader = match length {
            // A length of 0 marks a terminating CIE
            0 => panic!("Cannot create an EH_Frame from a termination CIE"),

            // length == u32::MAX means the length is only representable with 64 bits,
            // which does not make sense in a system with 32-bit addresses
            0xFFFF_FFFF => unimplemented!(),

            _ => {
                let mut fde_reader = DwarfReader::from_reader(cie_reader, false);
                fde_reader.offset(length);
                fde_reader
            }
        };

        // Routine check of .eh_frame well-formedness, in terms of the CIE ID & version fields
        let cie_ptr = cie_reader.read_u32();
        assert_eq!(cie_ptr, 0);
        assert_eq!(cie_reader.read_u8(), 1);

        // Parse the augmentation string.
        // The first character must be 'z'; there is no way to proceed otherwise.
        assert_eq!(cie_reader.read_u8(), b'z');

        // Establish a pointer that skips ahead of the string,
        // skipping the code/data alignment factors & return address register along the way.
        // We only tackle the case where 'z' and 'R' are part of the augmentation string; otherwise
        // we cannot get the addresses to make .eh_frame_hdr.
        let mut aug_data_reader = DwarfReader::from_reader(cie_reader, false);
        let mut aug_str_len = 0;
        loop {
            if aug_data_reader.read_u8() == b'\0' {
                break;
            }
            aug_str_len += 1;
        }
        if aug_str_len == 0 {
            unimplemented!();
        }
        aug_data_reader.read_uleb128(); // Code alignment factor
        aug_data_reader.read_sleb128(); // Data alignment factor
        aug_data_reader.read_uleb128(); // Return address register
        aug_data_reader.read_uleb128(); // Augmentation data length
        let mut fde_pointer_encoding = DW_EH_PE_omit;
        for _ in 0..aug_str_len {
            match cie_reader.read_u8() {
                b'L' => {
                    aug_data_reader.read_u8();
                }

                b'P' => {
                    let encoding = aug_data_reader.read_u8();
                    read_encoded_pointer(&mut aug_data_reader, encoding)?;
                }

                b'R' => {
                    fde_pointer_encoding = aug_data_reader.read_u8();
                }

                // Other characters are not supported
                _ => unimplemented!(),
            }
        }
        assert_ne!(fde_pointer_encoding, DW_EH_PE_omit);

        Ok(CFI_Record { fde_pointer_encoding, fde_reader })
    }

    /// Returns a [DwarfReader] initialized to the first Frame Description Entry (FDE) of this CFI
    /// record.
    pub fn get_fde_reader(&self) -> DwarfReader<'a> {
        DwarfReader::from_reader(&self.fde_reader, true)
    }

    /// Returns an [Iterator] over all Frame Description Entries (FDEs).
    pub fn fde_records(&self) -> FDE_Records<'a> {
        let reader = self.get_fde_reader();
        let len = reader.slice.len();

        FDE_Records { pointer_encoding: self.fde_pointer_encoding, reader, available: len }
    }
}
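The loop in `from_reader` walks the augmentation string character by character; for the common `"zR"` case the only datum read is the FDE pointer encoding. A simplified, hypothetical sketch of that scan (it takes the augmentation data as a flat byte slice and omits the `'P'` personality-pointer handling, so it is not a drop-in for the code above):

```rust
/// Find the FDE pointer encoding ('R') in an augmentation string, given the
/// relevant augmentation data bytes. Returns None if an unsupported character
/// is seen or no 'R' is present. ('P' handling is omitted in this sketch.)
fn fde_pointer_encoding(aug_str: &str, aug_data: &[u8]) -> Option<u8> {
    let mut idx = 0;
    let mut encoding = None;
    for ch in aug_str.chars().skip(1) { // skip the leading 'z'
        match ch {
            'L' => idx += 1, // LSDA encoding byte, skipped
            'R' => {
                encoding = Some(aug_data[idx]);
                idx += 1;
            }
            _ => return None, // 'P' and others unsupported in this sketch
        }
    }
    encoding
}
```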
/// [Iterator] over Call Frame Information (CFI) records in an
/// [Exception Handling (EH) frame][EH_Frame].
pub struct CFI_Records<'a> {
    reader: DwarfReader<'a>,
    available: usize,
}
impl<'a> Iterator for CFI_Records<'a> {
    type Item = CFI_Record<'a>;

    fn next(&mut self) -> Option<Self::Item> {
        loop {
            if self.available == 0 {
                return None;
            }

            // Keep a reader pointing at the start of this record
            let mut this_reader = DwarfReader::from_reader(&self.reader, false);

            let length = self.reader.read_u32();
            let length = match length {
                // A 0-length record marks a terminating CIE
                0 => return None,
                0xFFFF_FFFF => unimplemented!("CIE entries larger than 4 bytes not supported"),
                other => other,
            } as usize;

            // Remove the length of the header and the content from the counter
            self.available -= length + mem::size_of::<u32>();
            let mut next_reader = DwarfReader::from_reader(&self.reader, false);
            next_reader.offset(length as u32);

            let cie_ptr = self.reader.read_u32();

            self.reader = next_reader;

            // A zero CIE pointer marks a CIE; any other record is an FDE and is skipped
            if cie_ptr == 0 {
                // Rewind back to the start of the CFI record
                return Some(CFI_Record::from_reader(&mut this_reader).ok().unwrap());
            }
        }
    }
}
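The iterator above walks `.eh_frame` record by record: each record starts with a 4-byte length (excluding the length field itself), followed by a 4-byte slot holding the CIE ID (zero for a CIE) or the CIE pointer (non-zero for an FDE). A minimal sketch of that scan over a raw little-endian byte buffer (a hypothetical helper, not part of this crate):

```rust
/// Count (CIE, FDE) records in a raw little-endian .eh_frame byte buffer.
/// Record layout: [length: u32][CIE ID (0) or CIE pointer (non-zero)][body].
fn count_entries(mut eh_frame: &[u8]) -> (usize, usize) {
    let (mut cies, mut fdes) = (0, 0);
    while eh_frame.len() >= 8 {
        let length =
            u32::from_le_bytes([eh_frame[0], eh_frame[1], eh_frame[2], eh_frame[3]]) as usize;
        if length == 0 {
            break; // a zero length marks a terminating record
        }
        let cie_ptr = u32::from_le_bytes([eh_frame[4], eh_frame[5], eh_frame[6], eh_frame[7]]);
        if cie_ptr == 0 {
            cies += 1;
        } else {
            fdes += 1;
        }
        // Skip the 4-byte length field plus `length` bytes of record body
        eh_frame = &eh_frame[4 + length..];
    }
    (cies, fdes)
}
```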
/// [Iterator] over Frame Description Entries (FDEs) in an
/// [Exception Handling (EH) frame][EH_Frame].
pub struct FDE_Records<'a> {
    pointer_encoding: u8,
    reader: DwarfReader<'a>,
    available: usize,
}
impl<'a> Iterator for FDE_Records<'a> {
    type Item = (u32, u32);

    fn next(&mut self) -> Option<Self::Item> {
        // Parse each FDE to obtain the starting address that the FDE applies to.
        // Send the FDE offset and the mentioned address to a callback that writes out the
        // .eh_frame_hdr section.

        if self.available == 0 {
            return None;
        }

        let length = match self.reader.read_u32() {
            // A 0-length record marks a terminating CIE
            0 => return None,
            0xFFFF_FFFF => unimplemented!("CIE entries larger than 4 bytes not supported"),
            other => other,
        } as usize;

        // Remove the length of the header and the content from the counter
        self.available -= length + mem::size_of::<u32>();
        let mut next_fde_reader = DwarfReader::from_reader(&self.reader, false);
        next_fde_reader.offset(length as u32);

        let cie_ptr = self.reader.read_u32();
        let next_val = if cie_ptr != 0 {
            let pc_begin = read_encoded_pointer_with_pc(&mut self.reader, self.pointer_encoding)
                .expect("Failed to read PC Begin");
            Some((pc_begin as u32, self.reader.virt_addr))
        } else {
            None
        };

        self.reader = next_fde_reader;

        next_val
    }
}
pub struct EH_Frame_Hdr<'a> {
    fde_writer: DwarfWriter<'a>,
    eh_frame_hdr_addr: u32,
    fdes: Vec<(u32, u32)>,
}
impl<'a> EH_Frame_Hdr<'a> {
    /// Creates an [EH_Frame_Hdr] object, and writes out the fixed fields of `.eh_frame_hdr` to
    /// memory.
    ///
    /// The load address is not known at this point.
    pub fn new(
        eh_frame_hdr_slice: &mut [u8],
        eh_frame_hdr_addr: u32,
        eh_frame_addr: u32,
    ) -> EH_Frame_Hdr {
        let mut writer = DwarfWriter::new(eh_frame_hdr_slice);

        writer.write_u8(1); // version
        writer.write_u8(0x1B); // eh_frame_ptr_enc - PC-relative 4-byte signed value
        writer.write_u8(0x03); // fde_count_enc - 4-byte unsigned value
        writer.write_u8(0x3B); // table_enc - .eh_frame_hdr section-relative 4-byte signed value

        let eh_frame_offset = eh_frame_addr.wrapping_sub(
            eh_frame_hdr_addr + writer.offset as u32 + ((mem::size_of::<u8>() as u32) * 4),
        );
        writer.write_u32(eh_frame_offset); // eh_frame_ptr
        writer.write_u32(0); // `fde_count`, will be written in finalize_fde

        EH_Frame_Hdr { fde_writer: writer, eh_frame_hdr_addr, fdes: Vec::new() }
    }

    /// The offset of the `fde_count` value relative to the start of the `.eh_frame_hdr` section in
    /// bytes.
    fn fde_count_offset() -> usize {
        8
    }

    pub fn add_fde(&mut self, init_loc: u32, addr: u32) {
        self.fdes.push((
            init_loc.wrapping_sub(self.eh_frame_hdr_addr),
            addr.wrapping_sub(self.eh_frame_hdr_addr),
        ));
    }

    pub fn finalize_fde(mut self) {
        self.fdes
            .sort_by(|(left_init_loc, _), (right_init_loc, _)| left_init_loc.cmp(right_init_loc));
        for (init_loc, addr) in &self.fdes {
            self.fde_writer.write_u32(*init_loc);
            self.fde_writer.write_u32(*addr);
        }
        LittleEndian::write_u32(
            &mut self.fde_writer.slice[Self::fde_count_offset()..],
            self.fdes.len() as u32,
        );
    }

    pub fn size_from_eh_frame(eh_frame: &[u8]) -> usize {
        // The virtual address of the EH frame does not matter in this case,
        // as calculating the size does not involve modifying any headers.
        let mut reader = DwarfReader::new(eh_frame, 0);
        let mut fde_count = 0;
        while !reader.slice.is_empty() {
            // The original length field should be able to hold the entire value.
            // The device memory space is limited to 32-bit addresses anyway.
            let entry_length = reader.read_u32();
            if entry_length == 0 || entry_length == 0xFFFF_FFFF {
                unimplemented!()
            }

            // This slot stores the CIE ID (for a CIE) or the CIE pointer (for an FDE).
            // This value must be non-zero for FDEs.
            let cie_ptr = reader.read_u32();
            if cie_ptr != 0 {
                fde_count += 1;
            }

            reader.offset(entry_length - mem::size_of::<u32>() as u32);
        }

        12 + fde_count * 8
    }
}
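`size_from_eh_frame` returns `12 + fde_count * 8`: four single-byte fields (`version` plus the three encoding bytes), the 4-byte `eh_frame_ptr`, the 4-byte `fde_count`, and then one 8-byte `(initial location, address)` table entry per FDE. A sketch of the same arithmetic:

```rust
// .eh_frame_hdr size, per the layout written by EH_Frame_Hdr::new above:
//   version(1) + eh_frame_ptr_enc(1) + fde_count_enc(1) + table_enc(1) = 4 bytes
//   eh_frame_ptr(4) + fde_count(4)                                    = 8 bytes
//   search table: fde_count entries of (init_loc: u32, addr: u32)     = 8 bytes each
fn eh_frame_hdr_size(fde_count: usize) -> usize {
    12 + fde_count * 8
}
```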
@ -0,0 +1,24 @@
[package]
name = "nac3parser"
version = "0.1.2"
description = "Parser for python code."
authors = [ "RustPython Team", "M-Labs" ]
build = "build.rs"
license = "MIT"
edition = "2021"

[build-dependencies]
lalrpop = "0.20"

[dependencies]
nac3ast = { path = "../nac3ast" }
lalrpop-util = "0.20"
log = "0.4"
unic-emoji-char = "0.9"
unic-ucd-ident = "0.9"
unicode_names2 = "1.2"
phf = { version = "0.11", features = ["macros"] }
ahash = "0.8"

[dev-dependencies]
insta = "=1.11.0"
@ -0,0 +1,56 @@
# nac3parser

This directory contains the code for lexing and parsing Python and generating abstract syntax trees (ASTs).

This is the RustPython parser with modifications for NAC3.

The steps are:
- Lexical analysis: splits the source code into tokens.
- Parsing and generating the AST: transforms those tokens into an AST. Uses `LALRPOP`, a Rust parser generator framework.

The RustPython team wrote [a blog post](https://rustpython.github.io/2020/04/02/thing-explainer-parser.html) with screenshots and an explanation to help you understand the steps by seeing them in action.

For more information on LALRPOP, see the [LALRPOP book](https://github.com/lalrpop/lalrpop).

There is a readme in the `src` folder with the details of each file.

## Directory content

`build.rs`: The build script.
`Cargo.toml`: The crate configuration.

The `src` directory contains:

**lib.rs**
This is the crate's root.

**lexer.rs**
This module takes care of lexing Python source text, translating source code into separate tokens.

**parser.rs**
A Python parsing module. Use this module to parse Python code into an AST. There are three ways to parse: a whole program, a single statement, or a single expression.

**ast.rs**
Implements abstract syntax tree (AST) nodes for the Python language. Roughly equivalent to [the Python AST](https://docs.python.org/3/library/ast.html).

**python.lalrpop**
The Python grammar.

**token.rs**
Token definitions. Loosely based on token.h from the CPython source.

**errors.rs**
Defines internal parse error types. The goal is to provide a consistent and safe error API, masking errors from LALRPOP.

**fstring.rs**
Format string parsing.

**function.rs**
A collection of functions for parsing parameters and arguments.

**location.rs**
Datatypes to support source location information.

**mode.rs**
Execution mode check. Allowed modes are `exec`, `eval` or `single`.
@ -0,0 +1,3 @@
fn main() {
    lalrpop::process_root().unwrap()
}