The industrial revolution changed the world because of automation. We were able to build machines, very specific machines, that could do jobs quickly which might otherwise take many people to complete. I witnessed an example in an automated bakery on the outskirts of Riyadh. Workers first manually added the flour and other ingredients to the mixer. A huge machine mixed everything, created a dough, and poured it into containers. The next manual step was to put the containers in the oven, and then empty them onto an assembly line, which dropped one cupcake (without the cup) at regular intervals. Further along, the line wrapped a printed plastic sheet around the cakes and sealed the top. Next was a cutter that stamped at each gap between cakes, packing them into separate pieces. Finally, a person was assigned to collect the falling packets into cartons and pass them on for sealing.
What an assembly line consisted of was a series of somewhat generic sealers, rotors, wrappers, and mixers, assembled in a specific order to create the automated process. But assembling and standardizing these individual pieces was hard. Every automation had a huge setup cost.
When the computer revolution occurred, we ended up making a common compute(r) that could process instructions. A generic, standardized, automated process. The biggest advantage was that this digital, precise machine could be connected to analog devices and used to make any machine work: robots, scanners, etc. The generic nature of this machine allowed it to raise the efficiency bar (already raised by the industrial revolution). Then Moore's law came into action: the power of computation doubled roughly every two years, with decreasing costs. A comparison many made was that had such a revolution occurred in the transport industry, we would all be using personal jet skis for travel by now.
Anyway, even though a single assembly line (or computational core) can process a lot of stuff, the speed of the other components didn't increase as fast as the speed of processing did.
So a lot of time was wasted waiting on these slower components: accessing data from memory or storage, or fetching information from some other server. Seeing this wastage of a precious computation resource, and realizing the same line could serve another process while waiting for the outside interaction to complete, threads were introduced. Each thread is an assembly line passing through the same processor, and whenever one is waiting for a slower component to complete, the next thread gets a chance to use the processor, increasing the overall efficiency of the processor.
Hence the name multi-threading: processing multiple threads at the same time. Now even though these threads were doing potentially different work, their overall goals were related, and they depended on each other to complete some of their requirements. This led to another great concept: thread synchronization. It allows a thread (or assembly line) to take a hard dependency on another thread completing its job. Say I need to get information from another server before I can proceed to use that information.
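To make this concrete, here is a minimal sketch in Python (the names `fetch`, `server-data`, and `disk-data` are illustrative, not from any real system). While one thread sleeps waiting for its slow component, the other gets the processor:

```python
import threading
import time

results = []

def fetch(name, delay):
    # Simulate waiting on a slow external component (disk, network, ...)
    time.sleep(delay)
    results.append(name)

# Two "assembly lines" sharing the same processor: while one waits,
# the other runs, so the total time is close to the longest wait,
# not the sum of both waits.
t1 = threading.Thread(target=fetch, args=("server-data", 0.2))
t2 = threading.Thread(target=fetch, args=("disk-data", 0.1))
t1.start(); t2.start()
t1.join(); t2.join()

print(results)  # the shorter wait finishes first
```

Run sequentially, the two fetches would take 0.3 s; threaded, they overlap and finish in about 0.2 s.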
Thread synchronization was achieved using what we call locks. One thread takes a lock on a marker, saying "I am using this marker." The other thread waits until the marker is released, signalling that the first thread has finished processing. If locks were held for minutes, it would be cheaper for the marker to inform the second thread when it is released. However, if the lock is held for sub-second durations, more time would be wasted on the informing machinery; instead, the second thread can simply keep asking whether the lock is free yet.
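The "inform the second thread" style maps to a blocking lock. A minimal sketch with Python's `threading.Lock` (the worker names and log messages are made up for illustration): the waiting thread blocks inside `acquire` and is woken by the runtime when the marker is released, with no polling on its part.

```python
import threading
import time

marker = threading.Lock()
log = []

def second_thread():
    # Blocks here until the first thread releases the marker;
    # the waiting thread is woken up -- no busy asking needed.
    with marker:
        log.append("second: marker acquired, proceeding")

# The first thread (here: the main thread) takes the lock on the marker.
marker.acquire()
log.append("first: processing while holding the marker")

waiter = threading.Thread(target=second_thread)
waiter.start()
time.sleep(0.1)    # simulate the long processing
marker.release()   # signals the waiting thread that we are done
waiter.join()
print(log)
```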
This is called spin locking. The next question is the interval of asking: how long to wait before asking again. If the interval is small (sub-second) and the lock holder is processing a task for minutes, a lot of processing will be wasted; the thread will keep asking without doing anything else. If the interval is large, say multiple seconds, and the lock holder finishes in under a second, there is an unnecessary penalty for using the lock.
The solution people settled on is an exponential growth model: the thread keeps doubling its waiting interval. It starts with 1 ms, then 2 ms, 4 ms, 8 ms, and so on. This provides a balance between waiting too long and asking too often.
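The spin-and-back-off loop above can be sketched as follows. This is a simplified illustration, not a production spin lock: a `threading.Event` stands in for "is the lock free yet?", and the `start_ms`/`max_ms` parameters are assumptions of this sketch.

```python
import threading
import time

flag = threading.Event()   # stands in for "the lock has been released"
intervals = []             # record the doubling wait intervals

def spin_wait(start_ms=1, max_ms=64):
    """Spin until the flag is set, doubling the wait between checks."""
    wait = start_ms / 1000.0
    while not flag.is_set():
        intervals.append(wait)
        time.sleep(wait)                        # back off before asking again
        wait = min(wait * 2, max_ms / 1000.0)   # 1 ms, 2 ms, 4 ms, 8 ms, ...

# Another thread "holds the lock" for ~50 ms, then releases it.
holder = threading.Thread(target=lambda: (time.sleep(0.05), flag.set()))
holder.start()
spin_wait()
holder.join()
print([round(w * 1000) for w in intervals])  # doubling intervals in ms
```

With a 50 ms hold, the waiter asks only a handful of times instead of fifty times at a fixed 1 ms interval, while never overshooting the release by more than roughly the time already waited.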
This whole process was generic, and completely dependent on the program that uses it. So the question becomes one of optimization: how small can you break the tasks so that they remain efficient, and the interaction cost does not exceed the improvement obtained by splitting? As each task is split into smaller tasks, the thread processing it can be optimized for its specific job, making the task efficient and exact.
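A small sketch of that granularity trade-off, using a chunked sum as a stand-in task (the function `chunked_sum` and the chunk sizes are invented for illustration): the answer is the same at any split, but each extra chunk adds per-task interaction cost, so the chunk size is what you tune.

```python
from concurrent.futures import ThreadPoolExecutor

def chunked_sum(data, chunk_size):
    """Split one big task into smaller subtasks and combine the results."""
    chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]
    with ThreadPoolExecutor() as pool:
        # Each chunk is a small, exact task handed to a worker thread.
        return sum(pool.map(sum, chunks))

data = list(range(1000))
# 4 subtasks: interaction cost is amortized over 250 items each.
print(chunked_sum(data, 250))   # 499500
# 1000 subtasks: same answer, but per-task overhead now dominates.
print(chunked_sum(data, 1))     # 499500
```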
So here is the final comparison.
If a single person works to complete the assembly line, it is going to take time, and the result is going to be unique. It will be called a unique piece of art.
If many people's contributions go into the assembly of the common work, each working efficiently on independent tasks, it will be called a generic product.