concurrency, parallelism and multithreading

concurrent programming:

it is the management of multiple processes or tasks over shared resources. it enables multiple processes to make progress in real time on a single core: the CPU does a tiny bit of computation for one process, context-switches to another, does a tiny bit there, and so on. by switching very fast and very often it creates the illusion of parallelism; from the user's point of view the programs run at the same time, but at the hardware level they actually don't.

concurrency can be an amazing optimization for programs with heavy Input/Output (I/O) wait times: while one task is blocked waiting on I/O, another task can run.
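a minimal sketch of that idea, using Python threads and `time.sleep` as a stand-in for an I/O wait (the names `fake_io_task` and the 0.2 s delay are just for illustration):

```python
import threading
import time

def fake_io_task(name, results):
    # simulate an I/O wait (e.g. a network request) with sleep
    time.sleep(0.2)
    results.append(name)

results = []
threads = [threading.Thread(target=fake_io_task, args=(i, results)) for i in range(5)]

start = time.perf_counter()
for t in threads:
    t.start()
for t in threads:
    t.join()
elapsed = time.perf_counter() - start

# the five 0.2 s waits overlap, so the total is roughly 0.2 s, not 1 s
print(f"{len(results)} tasks in {elapsed:.2f}s")
```

run sequentially, the same five tasks would take about 1 second; concurrency overlaps the waits.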

parallel programming

consists of dividing one big task into smaller subtasks and assigning them to different processors to execute simultaneously.
the hardware obviously needs a multicore processor.

is multithreading concurrency?

threads are a software construct: independent streams of execution within a running process. I can start as many threads (e.g. pthreads) as I want on a single core, just like I can run a browser and a video player at the same time on one core, and they execute concurrently. multithreading becomes parallel only if the hardware supports it (multiple cores).
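in Python this distinction is easy to observe: the GIL lets only one thread run Python bytecode at a time, so threads on CPU-bound work are concurrent but never parallel. a rough sketch (timings will vary by machine; the point is that two threads give no real speedup here):

```python
import threading
import time

def count_down(n):
    # pure CPU work: no I/O waits for another thread to overlap with
    while n > 0:
        n -= 1

N = 5_000_000

start = time.perf_counter()
count_down(N)
count_down(N)
sequential = time.perf_counter() - start

start = time.perf_counter()
t1 = threading.Thread(target=count_down, args=(N,))
t2 = threading.Thread(target=count_down, args=(N,))
t1.start(); t2.start()
t1.join(); t2.join()
threaded = time.perf_counter() - start

# under the GIL the threaded run is not ~2x faster: the two threads
# interleave on one core instead of running in parallel
print(f"sequential {sequential:.2f}s, threaded {threaded:.2f}s")
```

in a language without a GIL (C with pthreads, Java, Go), the threaded version on a multicore machine would genuinely be close to 2x faster.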

Amdahl's law evaluates the speedup gained from implementing parallelism or concurrency.
if a program has a 50% perfectly parallelizable section, the maximum achievable speedup is 2.
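the law says that with parallel fraction P and S processors, speedup = 1 / ((1 - P) + P / S). a quick sketch to check the claim above:

```python
def amdahl_speedup(p, s):
    """Amdahl's law: speedup with parallel fraction p on s processors."""
    return 1 / ((1 - p) + p / s)

# 50% parallelizable on 2 processors: modest gain
print(amdahl_speedup(0.5, 2))

# even with an absurd number of processors, the serial half caps
# the speedup at 1 / (1 - p) = 2
print(amdahl_speedup(0.5, 1_000_000))
```

the serial portion always dominates in the limit, which is why profiling the serial part matters more than adding cores.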

when to use concurrency or parallelism

in python:
For I/O-bound problems, there’s a general rule of thumb in the Python community: “Use asyncio when you can, threading when you must.” asyncio can provide the best speed up for this type of program, but sometimes you will require critical libraries that have not been ported to take advantage of asyncio. Remember that any task that doesn’t give up control to the event loop will block all of the other tasks.
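a minimal asyncio sketch of the same I/O-bound pattern, with `asyncio.sleep` standing in for a real network call (the name `fetch` is just for illustration). the `await` is the moment a task gives up control to the event loop; a task that computes without awaiting would block everything else:

```python
import asyncio
import time

async def fetch(name):
    # stand-in for an I/O-bound call (e.g. an HTTP request);
    # await hands control back to the event loop while we "wait"
    await asyncio.sleep(0.2)
    return name

async def main():
    start = time.perf_counter()
    # gather schedules all five coroutines on one thread, one event loop
    results = await asyncio.gather(*(fetch(i) for i in range(5)))
    elapsed = time.perf_counter() - start
    return results, elapsed

results, elapsed = asyncio.run(main())
# the waits overlap on a single thread: roughly 0.2 s total, not 1 s
print(results, f"{elapsed:.2f}s")
```

unlike the threading version, this uses no extra threads at all: the overlap comes purely from cooperative task switching at each `await`.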