Overview of Race Conditions
In computer science, a race condition occurs when the behavior or correctness of a program depends on the timing or interleaving of multiple threads or processes. In other words, if events happen in an unexpected order, the outcome may be wrong. A common definition is that a race condition happens when the timing or order of events affects the correctness of a piece of code. This often occurs in concurrent programs where two or more operations run in parallel without proper coordination. The result is nondeterministic – the program might produce different outputs or behaviors from run to run, depending on which “races” (operations) finish first.
To visualize this, imagine a high-speed printing press with many parts working at once: paper feeders, ink rollers, drying fans, and binding machines all operating concurrently to print books faster. If these parts are not perfectly synchronized, problems can arise. For example, if the binder starts binding pages before the ink is dry, the pages will smear; if two paper feeders push paper at the same time, a jam occurs. These printing press mishaps are analogous to race conditions in software – they result from steps happening in the wrong order or at the same time without coordination.
In the sections below, we’ll break down three common types of race conditions and use the printing press analogy to explain each:
- Data Races – when two processes try to modify the same data at the same time (like two ink rollers printing on the same page simultaneously).
- Ordering Issues – when operations happen out of the required order (like binding a book before all pages are printed).
- Deadlocks – when processes wait on each other indefinitely (like two parts of the press each waiting for the other to finish, causing a standstill).
For each type, we’ll provide a code example in Python (using threads or processes) to simulate the issue, explain what’s happening, relate it back to the printing press, and then suggest how to fix or avoid the problem. The goal is to build a strong conceptual understanding of race conditions and their practical implications.
Data Races: Concurrent Writing Without Coordination
What is a Data Race? A data race is a specific kind of race condition where two or more threads access the same memory at the same time, and at least one thread is modifying (writing) that memory, without proper synchronization. This can lead to corrupted or unexpected values. In formal terms, if one thread could write to a variable while another thread is reading or writing that same variable at the same time (with no lock or synchronization), you have a data race.
In a data race, each thread “races” to access the data first, and the final result depends on who wins. This is dangerous because the operations can interleave unpredictably. For example, if two threads both try to increment a shared counter, one increment may overwrite the other, causing the counter to increase only once instead of twice.
Printing Press Analogy: Imagine a high-speed printing press with two ink rollers (two printing heads) trying to print on the same piece of paper at nearly the same time. If they aren’t coordinated, one roller might print “Hello” and the other prints “World” on the same line simultaneously. The final output on the paper could become garbled – perhaps only “World” appears because the second roller overwrote the first, or a messy mix of both if the inks smudge together. Inconsistent or corrupted text on the page is just like a data race in memory: the shared “resource” (the paper or the memory variable) ends up in an unpredictable state because two agents updated it concurrently.
Code Example – Data Race with a Shared Counter: Below is a Python example using threads that increment a shared counter. Ideally, if we increment the counter 100,000 times in two threads, the result should be 200,000. However, without proper locking, a data race may occur. (Note: Python’s Global Interpreter Lock (GIL) can make pure Python threading behavior a bit serialized, but conceptually this code demonstrates a race condition. For a true simultaneous update effect, one could use multiprocessing or lower-level atomic operations. Here we’ll assume a scenario or language where threads truly run in parallel.)
import threading

counter = 0  # shared resource

def increment():
    global counter
    for _ in range(100000):
        counter += 1  # read-modify-write on shared data (not atomic)

# Launch two threads that increment the counter
t1 = threading.Thread(target=increment)
t2 = threading.Thread(target=increment)
t1.start()
t2.start()
t1.join()
t2.join()
print("Final counter value:", counter)
In a perfect world (and if the threads were synchronized), counter should end up as 200000. But if these two threads interfere with each other, you might see a smaller number. For example, on a system allowing true parallel thread execution, one possible outcome is:
Final counter value: 153278 # (example of an incorrect result due to a data race)
This happens because the operations counter += 1 from the two threads can overlap. Internally, counter += 1 involves three steps: (1) read the value of counter, (2) add 1 to it, and (3) write it back. If thread A reads the value (say 42) and thread B reads the value (also 42) before either writes back, both will compute 43 and write it back. The final result is 43, incremented only once instead of twice – one increment was lost. In the code above, when run under true concurrent conditions, many such lost increments can occur, so the final count is less than expected. The exact outcome is nondeterministic; if you run it multiple times, you might get different results depending on how the threads interleave.
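You can make these three steps visible with Python’s standard dis module, which disassembles a function into its bytecode. The separate load, add, and store instructions (exact opcode names vary between Python versions) mark the gaps where a thread switch can occur:

```python
import dis

counter = 0  # shared resource, as in the example above

def increment():
    global counter
    counter += 1  # one statement, but several bytecode instructions

# Disassemble to show the separate load, add, and store steps;
# a thread switch between any two of them can lose an increment
dis.dis(increment)
```

The output lists a load of the global, an addition, and a store as distinct instructions – there is no single “increment” step to rely on.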
Link to Analogy: This is just like our two ink rollers printing on the same page. If roller A and roller B both look at the blank line (perceiving it as empty), and then both print their text, one text will override the other. The final printed line might show only one of the texts or a mixture that doesn’t make sense. The shared counter in the code is analogous to the shared paper in the press. Both threads (like two print rollers) tried to update it at once, and one thread’s work partly obliterated the other’s. The end state (counter value or printed text) is corrupted or incorrect.
How to Prevent Data Races: The key to avoiding data races is to ensure mutual exclusion when accessing shared data – in other words, allow only one thread at a time to modify the critical resource. Here are some best practices and solutions:
- Use Locks (Mutexes): Surround the critical section (the code that reads/writes shared data) with a lock. In Python, one would use a threading.Lock. Only one thread can hold the lock at a time, so this serializes the access to counter. For example, if we put a lock around the counter += 1 operation, thread A will lock, update and unlock, then thread B will do the same, preventing overlap. This ensures the final result is correct (just like a printing press might enforce that only one ink roller prints at a time on a given page).
- Atomic Operations: Use atomic data types or operations if available. Some languages offer atomic counters or atomic increment instructions which guarantee the read-modify-write happens as one uninterruptible step. In our analogy, this is like having a single combined printing mechanism that prints “HelloWorld” as one operation instead of two separate rollers – there’s no chance for interference in the middle.
- Avoid Shared State when Possible: Another strategy is to redesign the program to minimize shared data. For example, have each thread work on separate data and then combine results, or use message-passing concurrency models. In a printing press, this would be akin to giving each roller its own separate page to work on, so they never conflict on the same page. As one source notes, avoiding shared states can significantly reduce the risk of race conditions.
- Proper Synchronization Primitives: Higher-level primitives like semaphores, monitors, or synchronized collections can encapsulate the locking for you. Use these to enforce correct access patterns.
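As a minimal sketch of the “avoid shared state” strategy, each thread below accumulates into its own local variable and writes once to its own dedicated slot; results are combined only after both threads finish (the name partial_counts is illustrative):

```python
import threading

partial_counts = [0, 0]  # one slot per thread; the slots are never shared

def count_local(slot):
    local = 0
    for _ in range(100000):
        local += 1  # purely thread-local work, no contention
    partial_counts[slot] = local  # single write to this thread's own slot

threads = [threading.Thread(target=count_local, args=(i,)) for i in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

total = sum(partial_counts)  # combine the results after all threads finish
print("Final counter value:", total)  # reliably 200000
```

Because no memory location is written by more than one thread, there is nothing to race on and no lock is needed.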
By applying these practices, the data race in the code above can be fixed. For instance, using a lock in the Python example ensures each increment is done exclusively by one thread at a time:
lock = threading.Lock()

def safe_increment():
    global counter
    for _ in range(100000):
        with lock:  # acquire lock
            counter += 1  # critical section
With this change, running two threads will reliably result in counter being 200000 at the end, because we’ve eliminated the simultaneous access. This is analogous to synchronizing the two ink rollers so they take turns printing without interfering, yielding correct combined output.
Ordering Issues: Operations Out of Sequence
What are Ordering Issues? Not all race conditions involve two threads writing to the same variable. Sometimes the race is about the order of events. An ordering issue (or ordering race condition) occurs when a program assumes things happen in a certain sequence, but due to concurrency, that sequence isn’t guaranteed. In other words, one operation that should logically happen before another might actually happen after, leading to incorrect behavior. Even if no data is directly corrupted, the logic breaks. A simple description: a race condition occurs when the timing or order of events affects correctness – here we emphasize the order aspect.
Such issues are common in producer-consumer scenarios, initialization vs usage, or any situation where Task B relies on Task A having done something first. If the timing is wrong, Task B runs too early and fails or produces wrong results.
Printing Press Analogy: Think of a book printing pipeline with multiple stages: printing pages, drying ink, and binding pages into a book. These must happen in order. If the binder (which staples or binds pages together) starts working before the printing stage has finished with all pages, you might end up binding empty or half-printed pages. Or consider double-sided printing: you must print side A before side B. If side B gets printed first due to some glitch in timing, it will either print on a blank sheet (wasting ink) or on the wrong side. Another scenario: the drying fan should operate after ink is applied; if it turns on too late or too early, the ink may not dry properly, causing smudges or delays. All these are ordering problems — each stage individually works, but if they happen out of order, the product is wrong (a book with missing content, mis-ordered pages, or smeared ink).
In summary, the printing press requires a proper sequence: feed paper → print → dry → bind. If any of these go out of sequence due to a race, you get a defective output. Similarly, in software, if one thread doesn’t see the latest state because it ran too early, or two operations happen in reverse order, the program’s logic breaks.
Code Example – Ordering Problem (Producer/Consumer Mismatch): Below is a Python example with two threads: one producer thread that prepares a piece of data (like printing a page), and one consumer thread that uses that data (like binding the page into a book). We intentionally do not synchronize them properly, to illustrate an ordering bug. The consumer will attempt to use the data before the producer has finished producing it.
import threading, time

data = None  # shared resource representing a printed page or result

def producer():
    global data
    print("Producer: Starting to print the page...")
    time.sleep(0.1)  # simulate time taken to print
    data = "Page content"  # the page is now printed
    print("Producer: Page is printed.")

def consumer():
    global data
    print("Consumer: Waiting to bind the page...")
    time.sleep(0.05)  # simulate slight wait (not long enough for producer to finish)
    if data is None:
        print("Consumer: ERROR - No page to bind (page not ready)!")
    else:
        print(f"Consumer: Binding the page with content: '{data}'")

# Start consumer first, then producer
t_cons = threading.Thread(target=consumer)
t_prod = threading.Thread(target=producer)
t_cons.start()
t_prod.start()
t_cons.join()
t_prod.join()
Expected outcome: In a correct scenario, the producer would finish printing the page, and then the consumer would bind it. But here we started the consumer slightly earlier and it only waits 0.05 seconds, whereas the producer takes 0.1 seconds to produce the page. This means the consumer will likely check data before the producer has set it. A possible output from running this code is:
Consumer: Waiting to bind the page...
Producer: Starting to print the page...
Consumer: ERROR - No page to bind (page not ready)!
Producer: Page is printed.
What happened? The consumer ran too soon, found data still None (the page wasn’t ready), and reported an error. The producer then finished printing, but it was too late — the binding step already failed. We have an ordering issue: we assumed the print would happen before bind, but the opposite happened due to the race between threads.
If you tweak the timing (say, change the sleep durations or thread start order), you might sometimes get the correct order by luck. For example, if the producer manages to set data before the consumer checks it, the consumer would bind successfully. The point is that without explicit coordination, we cannot guarantee the correct order. This nondeterminism is typical in race conditions – the outcome changes based on subtle timing differences.
Link to Analogy: This mirrors the printing press sequence problem. The consumer thread (binder) acted like a binding machine that started binding before the printing press finished printing the page. The result was a bound book with a missing page (an error saying no page to bind). In another run, perhaps the binder might occasionally get it right if the printing happened to complete just in time. But we clearly want a reliable process, not one that sometimes works. Just as a printing press must enforce that printing is done before binding, our program must enforce that the producer finishes before the consumer proceeds.
How to Prevent Ordering Issues: The solution is to introduce proper synchronization so that events happen in the correct order (also known as establishing a happens-before relationship between the operations). Here are ways to do that:
- Use Join or Wait/Notify: If one thread must finish before another begins, you can simply join the first thread (wait for it to complete) before starting the second. In our code, if we started the producer first and did t_prod.join() before starting the consumer, the consumer would only run after production is done. More generally, use higher-level synchronization like condition variables or events. For example, Python’s threading.Event can be used: the producer would event.set() when done, and the consumer would event.wait() until that signal is set. This guarantees ordering.
- Condition Variables / Semaphores: These allow a thread to signal another thread that something has happened. A condition variable could be used so that the consumer waits (“is there a page ready?”) and the producer notifies when the page is ready. This is analogous to the printing press using sensors – e.g., a sensor that detects a page is printed (and dry) and then triggers the binder to start.
- Designing for Dependencies: Recognize when one task logically depends on another. In such cases, design the system to reflect that dependency explicitly instead of relying on timing. For example, in a pipeline, break the work into stages and use a queue: the producer adds completed pages to a queue, and the consumer takes from that queue. If the queue is empty, the consumer naturally waits (or blocks) until an item is available. This way, binding can never happen on an empty queue. In printing press terms, this is like having a buffer stack where printed pages go, and the binder only takes from the stack when there’s at least one page present.
- Avoid Assumptions of Implicit Timing: Do not assume that just because you started thread A before thread B, A will finish first. Thread scheduling is managed by the operating system and can be unpredictable. Always explicitly synchronize if order matters. In our analogy, just because the printing press started printing doesn’t mean the binder will politely wait – we must explicitly signal or lockout the binder until ready.
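The queue-based pipeline described above can be sketched with Python’s queue.Queue, whose blocking get() naturally makes the binder wait until a page exists (the page strings and the None sentinel are illustrative choices):

```python
import queue
import threading

pages = queue.Queue()  # buffer stack between the printer and the binder
bound = []             # record of what got bound, in order

def producer():
    for i in range(1, 4):
        pages.put(f"Page {i} content")  # a printed page goes into the buffer
    pages.put(None)                     # sentinel: nothing left to print

def consumer():
    while True:
        page = pages.get()  # blocks until a page is available
        if page is None:    # sentinel received: stop binding
            break
        bound.append(page)
        print(f"Consumer: binding '{page}'")

t_prod = threading.Thread(target=producer)
t_cons = threading.Thread(target=consumer)
t_cons.start(); t_prod.start()
t_prod.join(); t_cons.join()
```

Even though the consumer starts first, it simply blocks on the empty queue instead of failing, so the ordering bug cannot occur.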
By adding proper synchronization to the above code, we can fix the issue. For example, using an Event:
ready_event = threading.Event()

def producer():
    global data
    # ... print page ...
    data = "Page content"
    ready_event.set()  # signal that data is ready

def consumer():
    ready_event.wait()  # wait until the page is printed
    # ... now bind the page ...
This way, the consumer will not proceed to bind until the producer signals that the page is printed. The error goes away, and the output is always in the correct order:
Producer: Starting to print the page...
Producer: Page is printed.
Consumer: Binding the page with content: 'Page content'
The steps are now properly sequenced, just as a well-designed printing press pipeline ensures each stage happens in the right order.
Deadlocks: Two Processes Waiting Forever
What is a Deadlock? A deadlock is a situation in concurrent programming where two or more threads are each waiting for the other to release a resource, and thus none of them can proceed. It’s like an impasse – each thread holds some resource(s) and needs another resource to continue, but that other resource is held by another thread which is also waiting. As a result, all threads involved are blocked forever. A classic definition: deadlock is a situation in computing where two or more processes are unable to proceed because each is waiting for the other to release resources. Deadlocks are a bit different from the other race conditions: here the issue isn’t corrupted data or wrong order, but rather that the program freezes and makes no progress, typically requiring external intervention to resolve.
Printing Press Analogy: Consider a scenario with a complex high-speed press that has multiple subsystems. For instance, one robotic arm (Subsystem A) is responsible for feeding blank paper into the printer, and another arm (Subsystem B) is responsible for collecting printed pages and stacking them. Now imagine:
- Subsystem A has picked up a blank sheet (holds the “paper resource”) and wants to place it into the printer, but the printer is currently full. It decides to wait until some paper is removed.
- Subsystem B has grabbed a printed page from the printer (holds the “printed page resource”) and wants to put it on the output stack, but the output stack is full because a new blank sheet hasn’t come in to move the process forward. Perhaps it’s waiting for space or a signal from A.
In this contrived scenario, A won’t put down the blank paper until B clears the printer, but B won’t clear the printer until A does something (like maybe move the conveyor). Each part is waiting for the other to act. The entire press grinds to a halt – a deadlock.
Another simpler analogy: imagine two workers on an assembly line. Worker 1 is holding a box of ink cartridges and needs a stack of paper from Worker 2 to proceed. Worker 2 is holding a stack of paper and needs an ink cartridge from Worker 1 to proceed. Both refuse to give up what they have until they get what they need. They’ll stand there waiting forever, halting the whole printing process.
In short, a deadlock in the printing press is like a paper jam of control – not a physical jam, but a logical one where each component is stuck waiting for resources locked by another. Nothing moves forward, and the press stands idle until someone intervenes (like an engineer manually resetting one part).
Code Example – Deadlock with Two Locks: Deadlocks are often illustrated with multiple locks. If Thread 1 locks Resource A and then waits to lock Resource B, while Thread 2 has locked Resource B and waits to lock Resource A, neither can continue. Here’s a Python example using threading.Lock to demonstrate a deadlock situation:
import threading, time

lock_a = threading.Lock()
lock_b = threading.Lock()

def task1():
    print("Task1: acquiring lock A")
    lock_a.acquire()
    time.sleep(0.1)  # hold A for a bit, then try for B
    print("Task1: acquired lock A, now waiting for lock B...")
    lock_b.acquire()
    print("Task1: acquired lock B as well (both locks) -- performing operation")
    # ... (perform some operation requiring both A and B) ...
    lock_b.release()
    lock_a.release()

def task2():
    print("Task2: acquiring lock B")
    lock_b.acquire()
    time.sleep(0.1)  # hold B for a bit, then try for A
    print("Task2: acquired lock B, now waiting for lock A...")
    lock_a.acquire()
    print("Task2: acquired lock A as well -- performing operation")
    # ... (perform some operation requiring both A and B) ...
    lock_a.release()
    lock_b.release()

# Start both tasks
t1 = threading.Thread(target=task1)
t2 = threading.Thread(target=task2)
t1.start(); t2.start()
t1.join(); t2.join()
print("Both tasks finished.")
Let’s analyze what happens here:
- Task1 locks A first, then (after a delay) tries to lock B.
- Task2 locks B first, then (after a delay) tries to lock A.
- If the timing aligns (which it likely will, given the sleeps), Task1 will hold A and wait for B, while Task2 holds B and waits for A. Each thread is now waiting for a lock held by the other thread. Neither thread can continue, so they will wait forever.
A possible output up to the deadlock point could be:
Task1: acquiring lock A
Task2: acquiring lock B
Task1: acquired lock A, now waiting for lock B...
Task2: acquired lock B, now waiting for lock A...
At this stage, both threads are stuck. The final print "Both tasks finished." will never be reached because neither task1 nor task2 ever completes – this program will hang indefinitely due to a deadlock. (If you run it, you’d have to terminate the program manually.) In a real system, this kind of standstill means the part of the program (or the whole system) is unresponsive and not doing any work, much like a frozen printing press.
Link to Analogy: In this code, lock_a and lock_b represent two resources needed for tasks. You can imagine lock_a is the “paper feeder access” and lock_b is the “paper output tray access” in the printing press. Task1 (perhaps representing Subsystem A) got the feeder (so it’s holding a blank page, analogously) and needs the output tray to place something there, while Task2 (Subsystem B) got the output tray (holding a printed page) and needs the feeder. Because neither will release what they have until they get the other, they wait forever. This is exactly like the two workers with ink and paper waiting on each other, or two parts of the machine in a cyclic wait. Deadlock = complete standstill, caused by each part of a cycle waiting for the next.
How to Prevent Deadlocks: Deadlocks are prevented or resolved by breaking the conditions that lead to them. Classic deadlock theory (Coffman conditions) says four things are needed for a deadlock: mutual exclusion, hold-and-wait, no preemption, and circular wait. To avoid deadlock, you need to break one of those conditions. Here are practical tips:
- Consistent Lock Ordering: The most common way to avoid deadlock in code is to establish an ordering for locks (or resources) and always acquire them in that order. For example, if we decide that in our program, we always must lock A before locking B (and never the other way around), then the scenario where one thread has B and the other has A waiting for each other cannot happen. In the printing press analogy, this is like having a rule that says “Always put down what you’re holding before picking up a new item” or “Always pick up paper before ink” — some global order that everyone follows, so you don’t get circular waiting.
- Try-Lock and Timeout: Sometimes you can detect or break deadlocks by not waiting forever. For instance, use a non-blocking or time-limited attempt to acquire a lock (in Python, lock.acquire(blocking=False) or lock.acquire(timeout=...)), and if you fail to get it, release what you have and retry later. In our code, each task could try to get the second lock and, if it can’t within a short time, release the first lock and back off. This is like a worker realizing “I can’t get the resource I need, so I’ll put down what I have and try again later,” thus avoiding the impasse.
- One Big Lock (Simplification): Although it reduces concurrency, one straightforward solution is to avoid fine-grained multiple locks when not necessary. If we used a single lock for both resources A and B in our example, there’s no possibility of deadlock (because a thread holds all needed resources at once or nothing). In the printing press, this is analogous to having one master switch that controls both feeder and output together – only one subsystem can operate at a time in a big critical section. This sacrifices parallelism for safety but avoids deadlock. (This is not always efficient, but it’s simple.)
- Deadlock Detection and Recovery: In more advanced systems, a watchdog can detect deadlock (e.g., by noticing cyclic wait or threads stuck for too long) and then take action, such as killing one of the tasks or forcing a resource release. In an operating system or database, there are algorithms to detect deadlocks and recover. In a printing press, an operator might notice the standstill and manually intervene by resetting one machine or moving resources around to break the circular wait.
- Resource Hierarchy or Allocation Strategy: Design your system such that circular waits are structurally impossible. For example, have a strict hierarchy of resources (rank them), or use a system where one thread requests all it will need upfront (preventing hold-and-wait).
- Timeouts on waiting: If it makes sense, program threads to not wait indefinitely for a resource. If a wait exceeds some threshold, have them release what they hold and try a different strategy (or report an error). This at least prevents hanging forever.
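The try-lock-and-back-off idea can be sketched with Lock.acquire(timeout=...), which returns False instead of blocking forever; the timeout value, jitter duration, and the done list are illustrative choices:

```python
import threading
import time
import random

lock_a = threading.Lock()
lock_b = threading.Lock()
done = []  # records which tasks completed (for illustration)

def task(first, second, name):
    while True:
        first.acquire()
        # Try for the second lock, but give up instead of blocking forever
        if second.acquire(timeout=0.05):
            print(f"{name}: got both locks -- performing operation")
            second.release()
            first.release()
            done.append(name)
            return
        # Could not get the second lock: release the first and back off
        first.release()
        time.sleep(random.uniform(0, 0.01))  # random jitter breaks repeated collisions

t1 = threading.Thread(target=task, args=(lock_a, lock_b, "Task1"))
t2 = threading.Thread(target=task, args=(lock_b, lock_a, "Task2"))
t1.start(); t2.start()
t1.join(); t2.join()
print("Both tasks finished.")
```

This trades the risk of deadlock for a small risk of livelock, which the random back-off makes very unlikely in practice.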
In our two-lock example, a simple global ordering would solve the problem. If both tasks were coded to always acquire lock A before lock B, then Task2 would no longer grab B first. Either Task1 locks A first and Task2 blocks trying to lock A (instead of locking B out of order), or vice versa. They cannot deadlock: one thread acquires both locks, does its work, and releases them, and then the other proceeds – one may wait, but never forever, so the system keeps making progress. This is akin to adding a rule in the printing press: subsystems must claim shared resources in a fixed, agreed order. If the feeder and stacker both require coordination, they always handshake in the same sequence (feed, then stack, then feed, and so on) so they never seize resources in conflicting order.
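As a sketch, the fix amounts to rewriting task2 so that it acquires the locks in the same A-then-B order as task1 (the finished list is just for illustration):

```python
import threading
import time

lock_a = threading.Lock()
lock_b = threading.Lock()
finished = []  # records completed tasks (for illustration)

def task1():
    with lock_a:          # always acquire A first...
        time.sleep(0.1)
        with lock_b:      # ...then B
            print("Task1: holding both locks -- performing operation")
    finished.append("Task1")

def task2():
    with lock_a:          # same global order: A before B
        time.sleep(0.1)
        with lock_b:
            print("Task2: holding both locks -- performing operation")
    finished.append("Task2")

t1 = threading.Thread(target=task1)
t2 = threading.Thread(target=task2)
t1.start(); t2.start()
t1.join(); t2.join()
print("Both tasks finished.")  # always reached: the circular wait is impossible
```

One thread may briefly wait at lock A while the other works, but the program always runs to completion.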
By anticipating and preventing deadlocks in design, we ensure the printing press (or program) never gets stuck in a stalemate.
Best Practices Summary
To recap the strategies for each type of race condition:
- Data Races: Protect shared data with locks or other synchronization. Only one thread updates shared variables at a time. Alternatively, avoid sharing mutable data by design (use immutable data or worker-local data). In essence, don’t let two “ink rollers” write on the same page at once.
- Ordering Issues: Use synchronization to enforce the correct sequence (joins, events, condition variables). Design producer-consumer workflows with proper signaling. Ensure each stage of the “assembly line” waits for the signal that the previous stage is finished.
- Deadlocks: Avoid circular wait by careful resource ordering or use try-lock approaches. Keep locking simple and minimal. Plan resource usage so that your system never ends up like two machines waiting on each other. If using multiple locks, document an order to acquire them. Also consider using higher-level concurrency libraries that handle these issues.
Concurrency is hard, but thinking in terms of real-world analogies like a printing press can help. Just as engineers must coordinate multiple parts of a machine to work in harmony, software developers must coordinate threads and processes to avoid chaos.
Idea for a Live Demo Setup
To really solidify understanding, a live demo with your interns can be very effective. Here’s one idea for an interactive Python demo that ties everything together and lets them observe race conditions in action:
Printing Press Simulation Demo: Build a simple simulation of a printing press pipeline using threads and a shared data structure (like a queue or just shared variables), then intentionally introduce race conditions and fix them:
- Setup: Create three threads to simulate three stages of a printing press:
- Feeder Thread: Simulates feeding blank pages (e.g., increments a page count or puts a “blank page” object into a queue).
- Printer Thread: Simulates printing on a page (takes a blank page and marks it printed).
- Binder Thread: Simulates binding pages (takes printed pages and binds them into a book or increments a bound count).
- Introduce Race Conditions: Initially, do not use any synchronization. Let all threads run as fast as possible in parallel. Since they share data (the pool of pages), you will likely see things like the binder trying to bind a page that hasn’t been printed yet, or the printer trying to print on a page that isn’t fed (depending on how you structure it). The logs will show out-of-order operations or even mistakes (you can program in checks – e.g., if binder finds an unprinted page, log an error “Tried to bind unprinted page!”).
- For example, without coordination, the output log might look jumbled: the binder might announce binding page 2 before the printer says page 2 was printed, etc. This demonstrates an ordering issue (and potentially data races if shared counters are mishandled).
- You could also see inconsistent counts if two threads try to update a shared counter at once (data race). Or even a deadlock if you intentionally have them wait for each other in a wrong way (though a deadlock might just freeze the demo, so perhaps just demonstrate that separately).
- Observation: Have the interns run this unsynchronized version multiple times. Because of the inherent race conditions, they might observe different outcomes each run – sometimes one type of error, sometimes another, sometimes just scrambled ordering. This highlights nondeterminism. Encourage them to identify which problems correspond to data races or ordering issues in the analogy (e.g., “We saw a binder error – that’s like binding before printing, an ordering issue”).
- Introduce Fixes Incrementally: Now, turn the demo into a learning workshop:
- First, add a lock around any shared counter or shared data structure modifications to fix data races. Run again and see if the counts now always match expected values (no missing pages, etc.).
- Next, add synchronization for ordering: perhaps use events or a queue to ensure the feeder, printer, binder operate in the correct sequence (feeder puts pages in a queue, printer thread consumes that queue, binder thread waits for printer’s output). Run again and observe that the log now always shows the proper order (feeder -> printer -> binder for each page). No more error messages or out-of-sequence logs.
- If a deadlock was part of the demo (say you had a scenario with two locks representing two resources like ink and paper), demonstrate how that caused a freeze, then fix it by applying an ordering or removing one lock. For example, show them the two-lock deadlock code we discussed, perhaps with smaller sleeps, and let them see it hang. Then change the code so both threads acquire locks in the same order (A then B) and show that it runs to completion.
- Interactive Experimentation: Let the interns play with the timing (insert time.sleep calls of different lengths in threads to simulate different speeds) and see how sensitive the system was without synchronization versus how stable it is with proper synchronization. They can also try removing the locks or signals to re-introduce bugs and watch them reappear. This trial-and-error cements the concept that race conditions are timing-dependent and that proper coordination cures the problem.
Throughout the demo, the threads should print log messages like “Feeder: added page 5”, “Printer: printing page 5”, and “Binder: binding page 5”, so the interleaving (correct or incorrect) is easy to follow.
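A minimal synchronized version of the demo might look like the following sketch, which chains the three stages with two queues (the stage functions, page count, and None sentinels are all illustrative choices):

```python
import queue
import threading

NUM_PAGES = 3
blank_pages = queue.Queue()    # feeder -> printer
printed_pages = queue.Queue()  # printer -> binder
bound = []                     # pages the binder has bound, in order

def feeder():
    for i in range(1, NUM_PAGES + 1):
        print(f"Feeder: added page {i}")
        blank_pages.put(i)
    blank_pages.put(None)  # sentinel: no more pages to feed

def printer():
    while (page := blank_pages.get()) is not None:
        print(f"Printer: printing page {page}")
        printed_pages.put(page)
    printed_pages.put(None)  # forward the sentinel to the binder

def binder():
    while (page := printed_pages.get()) is not None:
        print(f"Binder: binding page {page}")
        bound.append(page)

threads = [threading.Thread(target=stage) for stage in (feeder, printer, binder)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

To re-introduce the race conditions for the unsynchronized version, replace the queues with plain shared lists and have each stage poll them without blocking – the out-of-order and error cases then reappear.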
Through this live demo, the interns will see a concrete manifestation of race conditions and their fixes:
- They’ll witness data corruption or wrong behavior happening randomly when things are unsynchronized (like the printing press producing flawed output when not tuned).
- They’ll appreciate that adding coordination (locks, events) enforces a proper order and exclusive access, making the behavior reliable (the printing press now runs smoothly without jams or smudges).
This kind of hands-on simulation is not only instructive but also memorable. It connects the abstract concepts of “race conditions” to a physical analogy and actual program output. By the end, your team should have a deeper understanding of why race conditions occur, how to recognize them, and how to prevent them — ensuring that both their software and your hypothetical printing press run like a well-oiled machine!