Concurrency vs Parallelism
“Concurrency is about dealing with lots of things at once. Parallelism is about doing lots of things at once.” - Rob Pike
Concurrency is a property of the code; parallelism is a property of the running program.
Concurrency is a semantic property of a program or system: multiple tasks are in progress during overlapping periods of time. It is a conceptual, design-level property; it's about how the program or system has been structured. Long story short, concurrency happens when you context-switch between sequential tasks.
Borrowing the example Kirill Bobrov uses in Grokking Concurrency, imagine one cook chopping a salad while occasionally stirring the soup on the stove. He has to stop chopping, check the stove top, then start chopping again, repeating this process until everything is done.
As you can see, there is only one processing resource here, the chef, and his concurrency is mostly about logistics: without it, he would have to wait until the soup on the stove was ready before he could start chopping the salad.
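A minimal Go sketch of the single-cook scenario (the function names chopSalad and stirSoup are illustrative): pinning the runtime to a single processor with runtime.GOMAXPROCS(1) means the two goroutines can only interleave, never run at the same instant, yet the program is still concurrent.

```go
package main

import (
	"fmt"
	"runtime"
	"time"
)

// chopSalad and stirSoup are illustrative stand-ins for the cook's two tasks.
func chopSalad(done chan<- struct{}) {
	for i := 1; i <= 3; i++ {
		fmt.Println("chopping salad, piece", i)
		time.Sleep(10 * time.Millisecond) // pause so the other task gets a turn
	}
	done <- struct{}{}
}

func stirSoup(done chan<- struct{}) {
	for i := 1; i <= 3; i++ {
		fmt.Println("stirring soup, round", i)
		time.Sleep(10 * time.Millisecond)
	}
	done <- struct{}{}
}

func main() {
	runtime.GOMAXPROCS(1) // one "cook": tasks interleave but never run simultaneously

	done := make(chan struct{})
	go chopSalad(done)
	go stirSoup(done)

	<-done
	<-done
	fmt.Println("dinner is ready")
}
```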
Parallelism is an implementation property. It resides on the hardware layer.
Parallelism is about multiple tasks, or subtasks of the same task, literally running at the same time on hardware with multiple computing resources, such as a multi-core processor.
Back in the kitchen, we now have two chefs: one who stirs the soup and one who chops the salad. We've divided the work by adding another processing resource, a second chef.
Concurrent code can be parallelised, but concurrency does not imply parallelism: on a single-core CPU, you can have concurrency but not parallelism.
=> We don't write parallel code, only concurrent code that we hope will be run in parallel.
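In Go this shows up directly: whether the goroutines above actually run in parallel depends on how many processors the runtime is allowed to use, not on the code itself. A small hedged sketch of how to inspect that:

```go
package main

import (
	"fmt"
	"runtime"
)

func main() {
	// By default GOMAXPROCS equals the number of CPU cores, so goroutines *may*
	// run in parallel; on a single-core machine they can only interleave.
	// We write concurrent code and let the runtime decide.
	fmt.Println("cores available:", runtime.NumCPU())
	fmt.Println("max parallelism:", runtime.GOMAXPROCS(0)) // 0 queries the setting without changing it
}
```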
Concept | Go | Java |
---|---|---|
Multithreading | through goroutines | through threads, via the Thread class or the Runnable interface |
Memory footprint | a goroutine starts with only about 2 KB of stack space | a thread takes about 2 MB of memory for its stack |
Communication & coordination | through built-in channels, which are designed to handle race conditions safely and avoid explicit locking; a data structure shared between goroutines via a channel doesn't have to be locked (see the channel sketch after this table) | through shared memory, which requires explicit synchronization (locks, synchronized blocks, java.util.concurrent utilities) |
Scheduling | goroutines are scheduled by the Go runtime, so context switching is fast | threads are scheduled by the OS, so context switching is slower |
Garbage collection | goroutines are not automatically garbage collected; a goroutine must return (or be signalled to stop) before its resources are freed | once a thread dies, its native memory and stack are freed immediately without needing to be GC'd; the Thread object itself is reclaimed like any other Java object |
Scalability | thousands of goroutines are multiplexed onto just one or two OS threads | launching 1,000 threads consumes a lot of resources that the OS has to manage, and each thread takes more than 1 MB of memory |
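A minimal sketch of the channel-based coordination mentioned in the table, with illustrative worker/job names: several goroutines consume from the same channel, and the channel itself serialises access, so no explicit lock is needed around the shared work queue.

```go
package main

import (
	"fmt"
	"sync"
)

func main() {
	jobs := make(chan int)
	var wg sync.WaitGroup

	// Three workers receive jobs from the same channel; the channel handles
	// the coordination, so the shared "queue" never needs explicit locking.
	for w := 1; w <= 3; w++ {
		wg.Add(1)
		go func(id int) {
			defer wg.Done()
			for j := range jobs {
				fmt.Printf("worker %d processed job %d\n", id, j)
			}
		}(w)
	}

	for j := 1; j <= 9; j++ {
		jobs <- j
	}
	close(jobs) // each worker's range loop exits once the channel is drained

	wg.Wait()
}
```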