yup, that’s true. most meaningful tasks are io-bound, so “parallel” basically means “whatever allows multiple threads of execution to keep going”. if you’re doing number crunching in python without a proper library like pandas that can parallelize your calculations, you’re doing it wrong.
I’ve used multiprocessing to squeeze more performance out of numpy and scipy. But yeah, resorting to multiprocessing is a sign that you should be dropping into something like Rust or a C variant.
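Roughly what that pattern looks like, as a minimal sketch (the worker count, the chunk shape, and the FFT standing in for the real computation are illustrative assumptions, not anything from the comment above):

```python
import numpy as np
from multiprocessing import Pool

def heavy_transform(chunk: np.ndarray) -> np.ndarray:
    # Placeholder for a CPU-bound numpy/scipy computation on one chunk.
    return np.fft.fft(chunk).real

if __name__ == "__main__":
    data = np.random.rand(8, 1_000_000)      # 8 independent chunks of work
    with Pool(processes=4) as pool:          # separate processes sidestep the GIL
        results = pool.map(heavy_transform, data)
    combined = np.vstack(results)
    print(combined.shape)
```

The catch is that every chunk gets pickled and copied into the worker processes, which is part of why dropping into Rust or C starts to look attractive once the arrays get big.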
I’ve always hated object-oriented multithreading. Goroutines (green threads) are just the best way 90% of the time. If I need to control where threads go, I’ll write it in Rust.
python has way too many ways to do that: asyncio, concurrent.futures, threading, multiprocessing…
Of the ways you listed, the only one that will actually take advantage of a multi-core CPU is multiprocessing.
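A rough sketch of why, assuming a toy pure-Python CPU-bound function (the function and the timing harness are made up for illustration): the thread pool ends up serialized by the GIL, while the process pool actually scales across cores.

```python
import time
from concurrent.futures import ThreadPoolExecutor, ProcessPoolExecutor

def cpu_bound(n: int) -> int:
    # Pure-Python loop: it holds the GIL, so threads cannot run it in parallel.
    return sum(i * i for i in range(n))

def timed(executor_cls, label: str) -> None:
    start = time.perf_counter()
    with executor_cls(max_workers=4) as ex:
        list(ex.map(cpu_bound, [5_000_000] * 4))
    print(f"{label}: {time.perf_counter() - start:.2f}s")

if __name__ == "__main__":
    timed(ThreadPoolExecutor, "threads")      # roughly serial due to the GIL
    timed(ProcessPoolExecutor, "processes")   # spreads the work across cores
```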
Most numpy array operations already release the GIL, and the BLAS-backed ones (matrix products and other linear algebra) typically use multiple cores on their own, because they’re implemented in optimized C.
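A quick way to see that, assuming your numpy is linked against a multithreaded BLAS such as OpenBLAS or MKL (typical, but not guaranteed): watch your CPU usage while a single-threaded script does a large matrix product.

```python
import time
import numpy as np

a = np.random.rand(3000, 3000)
b = np.random.rand(3000, 3000)

start = time.perf_counter()
c = a @ b   # dispatched to the BLAS library numpy was built against;
            # OpenBLAS/MKL usually spread this across cores automatically
print(f"matmul took {time.perf_counter() - start:.2f}s")
```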