I have a formula that takes about 0.5 s to calculate. However, I need this calculation to be executed 1 million times with different values. An example formula (simplified):
y = a + b
where I have 1 million combinations of a and b, all of which should be calculated. These 1 million combinations are saved in a list called combinations. I work with Python.
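For illustration, the setup could look like this; `calculate` stands in for the real formula, and the (a, b) values are made up:

```python
import time

def calculate(a, b):
    """Stand-in for the real formula; assume each call needs ~0.5 s."""
    time.sleep(0.5)  # simulate the expensive part
    return a + b

# 1 million (a, b) pairs -- illustrative values only
combinations = [(a, b) for a in range(1000) for b in range(1000)]
```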
My idea is to spin up one AWS instance per 100,000 calculations, so in this case I would need 10. The idea is then to divide the combinations list into 10 pieces (part1 = combinations[:100000] etc.) and send each AWS instance its own subset of the combinations.
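A small helper can produce those ten slices without writing each one out by hand (plain Python, nothing assumed beyond the `combinations` list above):

```python
def chunks(seq, size):
    """Yield consecutive slices of seq with the given size."""
    for i in range(0, len(seq), size):
        yield seq[i:i + size]

parts = list(chunks(combinations, 100000))  # 10 parts of 100,000 each
```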
But what is the best way to do this? My thought was a shared volume that is accessible to all instances; on that volume I put the code calculate.py, which I then call via SSH:

```
ssh user@instance python calculate.py
```
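As a sketch of that idea (the hostnames, user, and /shared path are assumptions), the instances could be driven from one machine with subprocess, each getting the index of its slice:

```python
import subprocess

# hypothetical instance hostnames; in practice these would come from EC2
instances = ["worker%d.example.com" % i for i in range(10)]

procs = []
for i, host in enumerate(instances):
    # calculate.py on the shared volume loads slice i of combinations itself
    cmd = ["ssh", "user@" + host, "python", "/shared/calculate.py", str(i)]
    procs.append(subprocess.Popen(cmd))

for p in procs:
    p.wait()  # block until every instance has finished its slice
```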
Or is celery perhaps a better way of doing this? Or is there yet another way?
Edit: I did some tests and found celery to be a good way to go.
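For reference, a minimal celery sketch; the Redis broker URL and the `tasks.py` module name are assumptions, not part of the original setup:

```python
# tasks.py
from celery import Celery

app = Celery("tasks",
             broker="redis://broker-host:6379/0",
             backend="redis://broker-host:6379/0")

@app.task
def calculate(a, b):
    # the real ~0.5 s formula goes here
    return a + b
```

With a worker started on each instance (`celery -A tasks worker`), the driver fans the work out as a group:

```python
from celery import group
from tasks import calculate

job = group(calculate.s(a, b) for a, b in combinations)
results = job.apply_async().get()  # list of 1 million results
```

With a million tiny tasks, celery's built-in `chunks()` helper (`calculate.chunks(iterable, n)`) is worth a look to cut down on messaging overhead.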
You can set up an ssh-tunnel with pathos, then submit the function to multiple servers with pathos using its parallelpython backend (or simply pathos itself) over that tunnel.
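A rough pathos sketch of that, assuming a ppserver is already running on each remote machine (started there with something like `ppserver.py -p 5653`) and the tunnel forwards those ports; the hostnames and port are made up:

```python
from pathos.pools import ParallelPool

def calculate(a, b):
    return a + b  # the real formula

# hypothetical remote ppserver addresses, reached through the tunnel
servers = ("server1:5653", "server2:5653")
pool = ParallelPool(nodes=10, servers=servers)

a_values = [a for a, b in combinations]
b_values = [b for a, b in combinations]
results = pool.map(calculate, a_values, b_values)
```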
Alternatively, use something like rpyc or zmq to talk to the different servers through the tunnel.
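And a rough zmq sketch of the same fan-out (the port is made up; each worker would connect through its end of the tunnel):

```python
import zmq

context = zmq.Context()

# distributor: PUSH the (a, b) pairs out; connected workers PULL them
sender = context.socket(zmq.PUSH)
sender.bind("tcp://*:5557")

for a, b in combinations:
    sender.send_pyobj((a, b))

# each worker would run roughly:
#   receiver = zmq.Context().socket(zmq.PULL)
#   receiver.connect("tcp://localhost:5557")  # local end of the tunnel
#   a, b = receiver.recv_pyobj()
#   ... compute calculate(a, b) and push the result to a collector ...
```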