I am trying to learn multiprocessing and created an example, but it is behaving unexpectedly: the parent process runs, then creates a child process, but control doesn't return to the parent until the child is done.
code:
from multiprocessing import Process
import time

def f():
    newTime = time.time() + 7
    while(time.time() < newTime):
        print("inside child process")
        time.sleep(int(5))

if __name__ == '__main__':
    bln = True
    while(True):
        newTime = time.time() + 4
        while(time.time() < newTime):
            print("printing fillers")
        if(bln):
            p = Process(target=f)
            p.start()
            p.join()
            bln = False
Result:
“inside child process”
(wait for 5 sec)
“inside child process”
“printing fillers”
“printing fillers”
[…]
If I remove p.join() then it works as I expect. But my understanding was that p.join() tells the program to wait for this thread/process to finish before ending the program. Can someone tell me why this is happening?
Answer
But from my understanding, p.join() is to tell the program to wait for this thread/process to finish before ending the program.
Nope. It blocks the calling (parent) process right then and there until the child process finishes. By calling it immediately after you start the process, you don't let your loop continue until that process completes.
It would be better to collect all the Process objects you create into a list, so they can be accessed after the loop that creates them. Then, in a second loop, wait for them to finish only after they have all been created and started.
# for example
processes = []
for i in whatever:
    p = Process(target=foo)
    p.start()
    processes.append(p)

for p in processes:
    p.join()
If you want to be able to do things in the meantime (while waiting for join), it is most common to use yet another thread or process. You can also choose to wait only a short time on join by giving it a timeout value; if the process doesn't finish in that amount of time, join simply returns, and you can check p.is_alive() to decide whether to go do something else before calling join again.
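For example, a rough sketch of that pattern applied to your setup (the child function f is the one from your question; the 0.5-second timeout is an arbitrary choice for illustration): the parent keeps printing fillers and only waits on the child briefly on each pass.

from multiprocessing import Process
import time

def f():
    newTime = time.time() + 7
    while time.time() < newTime:
        print("inside child process")
        time.sleep(5)

if __name__ == '__main__':
    p = Process(target=f)
    p.start()

    # keep doing the parent's own work while the child runs
    while p.is_alive():
        print("printing fillers")
        p.join(timeout=0.5)   # wait at most 0.5 s, then resume the parent loop

    p.join()   # the child has already exited, so this returns immediately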