I've hit this with Docker (explained to me by @avsm): the default limit on shared memory is quite restrictive for the datasets we work with. I've been chasing some random SIGBUS errors in shark (process exits with status 135, i.e. 128 + SIGBUS) and I believe this is the same root cause.
This ticket is to remind me to look at making this configurable, or at least have `run` use a higher bound. For now I can work around it by disabling parallelism in yirgacheffe.
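For reference, a sketch of how the limit could be raised when launching the container directly (the image name below is a placeholder, and `2g` is an arbitrary example value, not a recommendation; Docker's default for `/dev/shm` is 64MB):

```shell
# Docker caps /dev/shm at 64MB by default; --shm-size raises it per container.
docker run --shm-size=2g some-image

# Equivalent setting in a docker-compose file:
#   services:
#     shark:
#       shm_size: "2gb"
```

Whatever we end up doing in `run` itself would presumably set the same knob programmatically rather than asking users to pass it by hand.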