Phaser: "Call parameter does not match function signature!" #276
^ or similar, with the entire code dump.
I thought it was because of some `i32` shenanigans (mentioned in #117), so I put `int32` in the phaser coredevice driver around every operator involved in an arithmetic or logical operation, and even around constants, sometimes two or three times, and around variables too. I thought I was getting somewhere because the error message would get smaller over time, but then it would get bigger again. It looks like I was a victim of #190: running the same code gives me different results. Either way, I had to give up before I lost all my hair.

Tried to reproduce this bug by making a call to `set_duc_phase_mu`, via `self.phaser0.channel[0].set_duc_phase_mu(0)` (guessed from the IR).
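For reference, a minimal experiment around that call might look like the sketch below. Only the `set_duc_phase_mu` call itself comes from this issue; the surrounding boilerplate and device names are assumptions on my part, and NAC3 may require additional type annotations that are omitted here.

```python
from artiq.experiment import *


class PhaserSignatureRepro(EnvExperiment):
    def build(self):
        self.setattr_device("core")
        self.setattr_device("phaser0")

    @kernel
    def run(self):
        self.core.reset()
        # The call used to try to reproduce the
        # "Call parameter does not match function signature!" error.
        self.phaser0.channel[0].set_duc_phase_mu(0)
```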
It seems that this is caused by how LLVM handles opaque types of the same name when linking different modules, as discussed here. I am thinking about the best way to solve this, since the LLVM `Context` is not `Send`. Maybe use a single LLVM `Context` with a lock, plus a global type cache for opaque types during codegen (a rough sketch of this idea is below)?

A temporary workaround is surely to just use a single thread for codegen (then everything ends up in a single LLVM module). I am also guessing that the random segfault in #275 is caused by the same inconsistency between LLVM modules, since using a single thread seems to solve that problem too.
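A rough sketch of the "single context plus lock plus global type cache" idea, in plain Python rather than the actual Rust/inkwell code. The `factory` callable stands in for whatever the LLVM binding exposes for creating a named opaque struct type on the shared context (something like `Context::opaque_struct_type`); none of the names here are nac3's real API.

```python
import threading
from typing import Callable, Dict, Generic, TypeVar

T = TypeVar("T")


class OpaqueTypeCache(Generic[T]):
    """Create each named opaque type at most once, behind a lock.

    All codegen threads would share one LLVM context, and every module
    would refer to the same opaque struct type object for a given class,
    so the linker never sees two distinct types with the same name.
    """

    def __init__(self, factory: Callable[[str], T]):
        self._factory = factory            # stand-in for the context's opaque-type constructor
        self._lock = threading.Lock()      # the shared context is not thread-safe
        self._types: Dict[str, T] = {}     # class name -> opaque type

    def get_or_create(self, name: str) -> T:
        with self._lock:
            if name not in self._types:
                self._types[name] = self._factory(name)
            return self._types[name]
```

Every per-thread codegen worker would then go through `get_or_create` instead of creating opaque types on its own module.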
Let's measure how much time parallel codegen really saves on a representative example. We keep having issues with it (e.g. non-deterministic behavior still remains), many parts ended up not being parallelized, it makes the codebase more complex and harder to work with, and nobody has time to work on those problems. If it's not much faster let's get rid of all the thread stuff.
For benchmarking the speed improvement from multithreading, I wrote a simple script that randomly generates code with a given number of functions and a given number of lines per function, and ran some timing tests on it. I have also tried compiling `nac3devices.py` with different numbers of threads. The results are below (50 LOC per function; compilation stops after linking all of the LLVM modules generated by nac3).

So it seems that multithreading does make things noticeably faster.
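The actual generator script is attached to the issue; the sketch below just illustrates the kind of thing it does (the statement shapes and defaults are made up), emitting N functions of a fixed number of lines each:

```python
import random


def gen_function(name: str, n_lines: int) -> str:
    # Each function is just a chain of arithmetic statements.
    body = ["    x = 0"]
    for _ in range(n_lines - 2):
        body.append(f"    x = x + {random.randint(1, 100)}")
    body.append("    return x")
    return f"def {name}() -> int:\n" + "\n".join(body) + "\n"


def gen_module(n_functions: int, lines_per_function: int = 50) -> str:
    return "\n\n".join(
        gen_function(f"f{i}", lines_per_function) for i in range(n_functions)
    )


if __name__ == "__main__":
    # 50 lines per function matches the benchmark; the function count here is arbitrary.
    print(gen_module(100))
```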
Looking into the details of the `get_llvm_type` function, I also found that even with a single thread, the LLVM type of a non-polymorphic class sometimes gets evaluated multiple times, resulting in different opaque types for the same class. I need to look into this more.

The detailed benchmark data is attached.
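To illustrate the `get_llvm_type` failure mode described above (not nac3 code, just the shape of the problem): if the type for a class is rebuilt on every lookup instead of being cached, two lookups for the same class yield two distinct opaque type objects, which is exactly what makes the linked IR disagree about function signatures.

```python
class OpaqueType:
    """Stand-in for an LLVM named opaque struct type."""
    def __init__(self, name: str):
        self.name = name


def get_type_uncached(class_name: str) -> OpaqueType:
    return OpaqueType(class_name)          # fresh object on every call


_type_cache: dict[str, OpaqueType] = {}


def get_type_cached(class_name: str) -> OpaqueType:
    if class_name not in _type_cache:
        _type_cache[class_name] = OpaqueType(class_name)
    return _type_cache[class_name]


# Same name, but different type objects -> mismatched signatures after linking.
assert get_type_uncached("Phaser") is not get_type_uncached("Phaser")
# With a cache, repeated lookups return the identical type object.
assert get_type_cached("Phaser") is get_type_cached("Phaser")
```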