Commit 8df61ad

Fill out the section about Python threads.
1 parent 0586c72 commit 8df61ad

Doc/howto/concurrency.rst

Lines changed: 54 additions & 20 deletions
@@ -470,40 +470,74 @@ Free-threading
.. currentmodule:: threading

Threads, through the :mod:`threading` module, have been the dominant
tool in Python concurrency for decades, which mirrors the general state
of software. Threads are very light-weight and efficient.
Most importantly, they are the most direct route to taking advantage
of multi-core parallelism (more on that in a moment).

The main downside to using threads is that each one shares the full
memory of the process with all the others. That exposes programs
to a significant risk of `races <concurrency-downsides_>`_.

The other potential problem with using threads is that the conceptual
model has no inherent synchronization, so it can be hard to follow
what is going on in the program at any given moment. That is
especially challenging for testing and debugging.

Using threads for concurrency boils down to:

1. create a thread object to run a function
2. start the thread
3. (optionally) wait for it to finish

Here's how that looks::

    import threading

    def task():
        # Do something.
        ...

    t = threading.Thread(target=task)
    t.start()

    # Do other stuff.

    t.join()
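
The race risk described earlier deserves a concrete mitigation. As a
sketch (the counter, thread count, and loop bound here are illustrative,
not part of the original example), a :class:`~threading.Lock` can guard
shared state so concurrent updates don't interleave::

    import threading

    counter = 0
    lock = threading.Lock()

    def bump(n):
        global counter
        for _ in range(n):
            # Guard the read-modify-write so updates don't race.
            with lock:
                counter += 1

    threads = [threading.Thread(target=bump, args=(100_000,))
               for _ in range(4)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print(counter)  # 400000

Without the lock, ``counter += 1`` is a read-modify-write that two
threads can interleave, silently losing increments.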

.. _python-gil:

The Global Interpreter Lock (GIL)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

While physical threads are the most direct route to multi-core
parallelism, Python's threads have always had an extra wrinkle that
gets in the way: the :term:`global interpreter lock` (GIL).

The :term:`!GIL` is a very efficient tool for keeping the Python
implementation simple, which is an important constraint for the project.
In fact, it protects Python's maintainers and users from a large
category of concurrency problems that one must normally face when
threads are involved.

The big tradeoff is that the bytecode interpreter, which executes your
Python code, only runs while holding the :term:`!GIL`. That means only
one thread can be running Python code at a time. Threads take short
turns, so none have to wait too long, but this still prevents any
actual parallelism.

At the same time, the Python runtime (and extension modules) can
release the :term:`!GIL` when a thread is going to be doing something
unrelated to Python, particularly something slow or long-running,
like a blocking IO operation.
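
To make that concrete, here's a sketch (the durations and thread count
are illustrative) in which several blocking waits overlap because
:func:`time.sleep` releases the :term:`!GIL` while waiting::

    import threading
    import time

    def blocking_io():
        # time.sleep() releases the GIL for the duration of the wait.
        time.sleep(0.5)

    start = time.monotonic()
    threads = [threading.Thread(target=blocking_io) for _ in range(4)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    elapsed = time.monotonic() - start
    # The four half-second waits overlap, so elapsed stays near 0.5s
    # rather than the 2s a serial run would take.
    print(f"{elapsed:.2f}")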

There is also an ongoing effort to eliminate the :term:`!GIL`:
:pep:`703`. Any attempt to remove the :term:`!GIL` necessarily involves
some slowdown to single-threaded performance and extra maintenance
burden for the Python project and extension module maintainers.
However, there is sufficient interest in unlocking full multi-core
parallelism to justify the current experiment.
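
A script can check whether it is running on such an experimental
free-threaded build. This sketch relies on the ``Py_GIL_DISABLED``
build-configuration variable (set on CPython 3.13+ free-threaded
builds; absent elsewhere, in which case this reports ``False``)::

    import sysconfig

    # Truthy only when CPython was configured with --disable-gil.
    free_threaded = bool(sysconfig.get_config_var("Py_GIL_DISABLED"))
    print(free_threaded)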

.. currentmodule:: None