Threads & Locks


Threads

Threads are subunits of a process that can be scheduled independently by the operating system scheduler.

Multiple threads can share a single CPU core: the operating system assigns small slices of computing time to each thread, so that it seems to the user as if multiple tasks are executed at the same time. If multiple CPU cores are available, then multiple threads can be executed truly in parallel.

We can use either the POSIX thread API directly, or NSThread, a simple Objective-C wrapper around pthreads.

Grand Central Dispatch

With GCD you don’t interact with threads directly anymore. Instead you add blocks of code to queues, and GCD manages a thread pool behind the scenes. GCD decides on which particular thread your code blocks are going to be executed on, and it manages these threads according to the available system resources. This alleviates the problem of too many threads being created, because the threads are now centrally managed and abstracted away from application developers.
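For example, a minimal Swift sketch of this model (expensiveComputation is a placeholder for real work):

import Dispatch

// Submit a block to one of the global concurrent queues; GCD decides
// which thread from its pool the block runs on.
DispatchQueue.global(qos: .utility).async {
    let result = expensiveComputation() // placeholder
    // Hop back to the main queue for UI work.
    DispatchQueue.main.async {
        print("Done: \(result)")
    }
}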


GCD exposes five different queues:
  • the main queue running on the main thread, 
  • three background queues with different priorities, and 
  • one background queue with an even lower priority, which is I/O throttled. 

Furthermore, we can create custom queues, which can either be serial or concurrent queues. While custom queues are a powerful abstraction, all blocks we schedule on them will ultimately trickle down to one of the system’s global queues and its thread pool(s).
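A rough sketch of creating custom queues in Swift (the labels are illustrative):

import Dispatch

// Custom queues are serial by default...
let serialQueue = DispatchQueue(label: "com.example.serial")

// ...or concurrent when created with the .concurrent attribute.
let concurrentQueue = DispatchQueue(label: "com.example.concurrent",
                                    attributes: .concurrent)

// Blocks submitted to a custom queue ultimately run on its target
// queue, one of the global (root) queues.
let targeted = DispatchQueue(label: "com.example.targeted",
                             target: DispatchQueue.global(qos: .background))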

In fact, creating a custom queue just looks up the root queue with the corresponding priority in the root queue list and sets it as the new queue's target queue. A block submitted to a custom queue is actually executed on its target queue.

The "root queue" in the code is what the GCD documentation calls a global queue. At initialization, a table of 16 root queue entries (#0-15) is created, 15 of which are actual queues. Among them:
  1. #0 is not used.
  2. #1 is associated with the main thread and is defined in init.c.
  3. #2-3 are used by the internal management queues, defined in queue.c.
  4. #4-15 are defined in the _dispatch_root_queues array in queue.c.

A root queue carrying the overcommit attribute will open a new thread to execute its work, no matter how busy the system is.

The thread pool is managed uniformly across all queues, so even a serial queue draws its threads from the shared pool; whether tasks are executed concurrently is decided by the queue itself.
The dq_width attribute holds the maximum number of tasks a queue may execute concurrently: for a concurrent queue it is DISPATCH_QUEUE_WIDTH_MAX ( #define DISPATCH_QUEUE_WIDTH_MAX UINT16_MAX ), and for a serial queue it is 1. The queue schedules the execution of tasks based on its own dq_width value.

https://translate.google.com/translate?hl=en&sl=zh-CN&tl=en&u=http%3A%2F%2Fblog.zorro.im%2Fposts%2Fgcd-dispatch-queue-and-thread.html


Quality of Service Classes

A quality of service (QoS) class allows you to categorize work to be performed by NSOperation, NSOperationQueue, NSThread objects, dispatch queues, and pthreads (POSIX threads). By assigning a QoS to work, you indicate its importance, and the system prioritizes it and schedules it accordingly. 

QoS also helps address the priority inversion problem: the system can raise a lower-priority thread to the priority of a higher-priority thread that is waiting on a lock held by it.

The system uses QoS information to adjust priorities such as scheduling, CPU and I/O throughput, and timer latency. As a result, the work performed maintains a balance between performance and energy efficiency.



Global Concurrent Queues

In the past, GCD provided high-, default-, low-, and background-priority global concurrent queues for prioritizing work. The corresponding QoS classes should now be used in place of these queues, as in the sketch below.
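A sketch of the old and new styles side by side (the priority-to-QoS mapping in the comments is the commonly used correspondence):

import Dispatch

// New style: request a global queue by QoS class
// (.userInteractive, .userInitiated, .default, .utility, .background)
let queue = DispatchQueue.global(qos: .userInitiated)

// instead of the old priority-based API:
// let old = DispatchQueue.global(priority: .high) // deprecated

// QoS can also be attached to individual work items:
queue.async(qos: .utility) {
    print("utility-class work")
}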


Groups

To run a task after a group of tasks has completed, use DispatchGroup.


let dispatchGroup = DispatchGroup()

dispatchGroup.enter() // increment the group's task count
longRunningFunction {
    dispatchGroup.leave() // decrement the group's task count
}

dispatchGroup.enter()
longRunningFunctionTwo {
    dispatchGroup.leave()
}

// Run the task that needs to wait for the two tasks above.
dispatchGroup.notify(queue: .main) {
    print("Both functions complete 👍")
}

  • Instead of notify(), we can call wait(). This blocks the current thread until the group's tasks have completed.
  • A further option is wait(timeout:). This blocks the current thread, but after the specified timeout it continues anyway. Both are sketched below.
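A sketch continuing the example above:

// Alternative 1: block until all tasks in the group have left.
dispatchGroup.wait() // never call this on the main thread

// Alternative 2: block, but give up after two seconds.
if dispatchGroup.wait(timeout: .now() + 2) == .timedOut {
    print("Gave up waiting after two seconds")
}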

DispatchSemaphore

While DispatchGroup provides a way to synchronize a group of asynchronous operations while still remaining asynchronous, DispatchSemaphore provides a way to synchronously wait for a group of asynchronous tasks. This is very useful in command line tools or scripts, where you don’t have an application run loop, and instead just execute synchronously in a global context until done.

let semaphore = DispatchSemaphore(value: 1) // the value parameter is the number of threads that can access the resource

The Structure

A semaphore is composed of:
  • a counter that lets the semaphore know how many threads can use its resource(s);
  • a FIFO queue for tracking the threads waiting for the resource.

Resource Request: wait()

When the semaphore receives a wait() request, it checks whether its counter is above zero:
  • if so, the semaphore decrements it and gives the thread the green light;
  • otherwise, it pushes the thread onto the end of its queue.

Resource Release: signal()

Once the semaphore receives a signal, it checks whether its FIFO queue has threads in it:
  • if so, the semaphore pulls the first thread out and gives it the green light;
  • otherwise, it increments its counter.
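A minimal sketch of using DispatchSemaphore to limit concurrency (urls and download are placeholders):

import Dispatch

// value: 2 means two "resources": at most two downloads run at once.
let semaphore = DispatchSemaphore(value: 2)

for url in urls {
    DispatchQueue.global().async {
        semaphore.wait()              // request a slot: counter - 1, or park
        defer { semaphore.signal() }  // release the slot: counter + 1
        download(url)                 // placeholder for the real work
    }
}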

Warning: Blocking Wait

When a thread calls wait() on the semaphore, the thread is blocked until the semaphore gives it the green light.
⚠️ If you do this on the main thread, the whole app will freeze.

Sources

An NSTimer must run in an NSRunLoop; when the timer is scheduled on the main thread, it is added to the main thread's NSRunLoop automatically.

A dispatch source can basically be used to monitor for some type of event. Events include timers, Unix signals, file descriptors, Mach ports, and VFS nodes.

Does a GCD timer use a run loop implicitly? No: a GCD timer is a dispatch source serviced by GCD itself, so it fires on whatever queue you give it, with no run loop required.
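A sketch of a GCD timer source (the queue choice is arbitrary):

import Dispatch

// Dispatch sources start out suspended; keep a strong reference to
// `timer`, otherwise it is deallocated and never fires.
let timer = DispatchSource.makeTimerSource(queue: .global())
timer.schedule(deadline: .now() + 1, repeating: 1.0)
timer.setEventHandler {
    print("tick")
}
timer.resume()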

Mutual Exclusion


Mutually exclusive access means that only one thread at a time gets access to a certain resource. To ensure this, each thread that wants to access a resource first needs to acquire a mutex lock on it. Once it has finished its operation, it releases the lock, so that other threads get a chance to access it.

In addition to ensuring mutually exclusive access, locks must also handle the problem caused by out-of-order execution: acquiring and releasing a lock acts as a memory barrier, so that memory accesses inside the critical section are not reordered past its boundaries.

Objective-C properties are even atomic by default: declaring a property as atomic results in implicit locking/unlocking around each access of this property.
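A minimal sketch of mutual exclusion in Swift using NSLock (the Counter type and names are illustrative):

import Foundation

final class Counter {
    private let lock = NSLock()
    private var value = 0

    func increment() {
        lock.lock()    // acquire the mutex; other threads now block here
        value += 1     // critical section: one thread at a time
        lock.unlock()  // release, so waiting threads get a chance
    }
}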

Dead Locks

Mutex locks solve the problem of race conditions, but unfortunately they also introduce a new problem at the same time: deadlocks. A deadlock occurs when multiple threads wait on each other to finish and get stuck.
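A minimal sketch of one classic way to deadlock with GCD (assuming this code runs on the main thread):

import Dispatch

// Dispatching synchronously onto the queue you are already running on:
// the main thread blocks waiting for the block, but the block can only
// run on the (now blocked) main thread.
DispatchQueue.main.sync {
    print("never reached")
}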

Starvation


In a case of priority inversion, the high-priority thread must wait for the low-priority thread; but most of the time, the processor will execute the medium-priority threads, as they have higher priority than our low-priority one. In this scenario our high-priority thread is being starved of CPU time.

Priority Inversion

Consider two tasks H and L, of high and low priority respectively, either of which can acquire exclusive use of a shared resource R. If H attempts to acquire R after L has acquired it, then H becomes blocked until L relinquishes the resource. Sharing an exclusive-use resource (R in this case) in a well-designed system typically involves L relinquishing R promptly so that H (a higher priority task) does not stay blocked for excessive periods of time. Despite good design, however, it is possible that a third task M of medium priority (p(L) < p(M) < p(H), where p(x) represents the priority for task (x)) becomes runnable during L's use of R. At this point, M being higher in priority than L, preempts L, causing L to not be able to relinquish R promptly, in turn causing H—the highest priority process—to be unable to run. This is called priority inversion where a higher priority task is preempted by a lower priority one.

The root problem is that the low-priority thread keeps its low priority even while a high-priority thread is waiting on it: this is called priority inversion.
The trouble experienced by the Mars lander "Mars Pathfinder"[1][2] is a classic example of problems caused by priority inversion in real-time systems.

The fix is to have the low-priority thread temporarily inherit the priority of the highest-priority thread waiting on it: this is called priority inheritance.


Consider three tasks: Task 1 with the highest priority, Task 2 with medium priority, and Task 3 with the lowest priority. The sequence of events is as follows:
  1. Task 1 and Task 2 are both waiting for an event to occur and Task 3 is executing.
  2. At some point, Task 3 acquires a semaphore, which the task needs before it can access a shared resource.
  3. Task 3 performs some operations on the acquired resource.
  4. The event for which Task 1 was waiting occurs, and thus the kernel suspends Task 3 and starts executing Task 1 because Task 1 has a higher priority.
  5. Task 1 continues execution 
  6. Task 1 executes for a while until it also wants to access the resource (i.e., it attempts to get the semaphore that Task 3 owns). Because Task 3 owns the resource, Task 1 is placed in a list of tasks waiting for the kernel to free the semaphore.
  7. Task 3 resumes execution, since CPU control is now transferred from Task 1 back to Task 3.
  8. Task 3 resumes and continues execution until it is preempted by Task 2 because the event for which Task 2 was waiting occurred.
  9. Task 2 continues execution 
  10. Task 2 handles the event for which it was waiting, and, when it’s done, the kernel relinquishes the CPU back to Task 3.
  11. Task 3 continues execution 
  12. Task 3 finishes working with the resource and releases the semaphore. At this point, the kernel knows that a higher priority task is waiting for the semaphore and performs a context switch to resume Task 1.
  13. At this point, Task 1 has the semaphore and can access the shared resource.
The priority of Task 1 has been virtually reduced to that of Task 3, because Task 1 was waiting for the resource that Task 3 owned. The situation was aggravated when Task 2 preempted Task 3, which further delayed the execution of Task 1.
Solving the problem of priority inversion:
  1. In the above example, priority inversion was observed because Task 3 was stopped from executing by Task 2 at step 8, due to its lower priority.
  2. The problem can therefore be solved by raising the priority of Task 3 to that of Task 1 from the moment Task 1 starts waiting for the semaphore, i.e. at step 6.
  3. Task 3 can now execute without any interruption from Task 2, because of its inherited higher priority.
  4. On completing execution, Task 3 releases the semaphore, which is then acquired by Task 1, and Task 1 continues its execution. This workaround for the priority inversion problem is known as priority inheritance.
  5. Thus priority inheritance is a mechanism whereby a low-priority task that is currently accessing a shared resource requested by a high-priority task temporarily inherits the priority of that high-priority task, from the moment the high-priority task raises the request.

This is exactly what happened on the Mars Pathfinder mission, where priority inversion left the spacecraft repeatedly resetting and unable to operate until priority inheritance was enabled.

Thread Locks


C, C++ (pthreads)


Mutex

  • A mutex is implemented via pthread_mutex_t. Strictly speaking, a mutex is a locking mechanism used to synchronize access to a resource.
  • Only one task (a thread or process, depending on the OS abstraction) can acquire the mutex. This means there is ownership associated with a mutex, and only the owner can release the lock.
  • When a thread attempts to acquire a mutex, it blocks and returns only once the lock has been granted to it. The lock is managed by the OS: until the lock becomes available, the thread has nothing to do, so it is parked inside the OS, which wakes it up once it gains ownership.

  • Deadlock: if a thread that has already locked a mutex tries to lock it again, it will enter the waiting list of that mutex, which results in a deadlock, because no other thread can unlock the mutex. A sketch of the pthread calls follows this list.
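A rough Swift sketch of the pthread mutex calls (on Apple platforms the pthread API is imported via Darwin; production code usually reaches for a wrapper such as NSLock instead):

import Darwin

var mutex = pthread_mutex_t()
pthread_mutex_init(&mutex, nil)   // default attributes: non-recursive

pthread_mutex_lock(&mutex)        // blocks until ownership is granted
// ... critical section ...
pthread_mutex_unlock(&mutex)      // only the owner may unlock

// Calling pthread_mutex_lock(&mutex) again from the same thread
// before unlocking would deadlock, because the mutex is not recursive.
pthread_mutex_destroy(&mutex)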

Semaphore
  • A semaphore is implemented via sem_t. A semaphore is a signaling mechanism ("I am done, you can carry on" kind of signal).
  • A semaphore is a generalized mutex. For example, instead of a single 4 KB buffer, we can split it into four 1 KB buffers (identical resources) and associate a semaphore with them; the consumer and producer can then work on different buffers at the same time.
  • A semaphore controls access to a shared pool of resources. It provides operations to wait() until one of the resources in the pool becomes available, and signal() when a resource is given back to the pool.
  • Semaphores can be signalled (posted) even by a thread that is not the current owner: any thread may post, even though it did not take the semaphore.
  • A semaphore restricts the number of simultaneous users of a shared resource up to a maximum number. Threads can request access to the resource (decrementing the semaphore) and can signal that they have finished using it (incrementing the semaphore).


SpinLock

A spinlock is a lock that causes the acquiring thread to wait in a loop, or “spin”, checking the lock repeatedly until it becomes available. Most locks cause the acquiring thread to sleep if the lock is unavailable, and the act of releasing a lock wakes up one of the sleeping threads.
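A minimal spinlock sketch in Swift, assuming the swift-atomics package as a dependency (note that Darwin's OSSpinLock is deprecated because user-space spinning interacts badly with priority-based scheduling):

import Atomics

final class SpinLock {
    private let flag = ManagedAtomic<Bool>(false)

    func lock() {
        // Spin, repeatedly trying to flip the flag from false to true.
        while !flag.compareExchange(expected: false,
                                    desired: true,
                                    ordering: .acquiring).exchanged {
            // busy-wait: the thread burns CPU instead of sleeping
        }
    }

    func unlock() {
        flag.store(false, ordering: .releasing)
    }
}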

https://www.geeksforgeeks.org/mutex-vs-semaphore/

https://stackoverflow.com/questions/9427276/what-does-mutex-and-semaphore-actually-do

"Shared mutexes are usually used in situations when multiple readers can access the same resource at the same time without causing data races, but only one writer can do so." A shared mutex has two levels of access 'shared' and 'exclusive'. Multiple threads can acquire shared access but only one can hold 'exclusive' access.


std::shared_lock<std::shared_mutex> lock(mutex_); // shared (read) lock: many readers at once
std::unique_lock<std::shared_mutex> lock(mutex_); // exclusive (write) lock: one writer alone
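The closest GCD analogue in Swift is a concurrent queue with barrier writes: a rough sketch (the type and label are illustrative):

import Dispatch

final class SharedState {
    private let queue = DispatchQueue(label: "com.example.rw",
                                      attributes: .concurrent)
    private var value = 0

    // Readers run concurrently ("shared" access).
    func read() -> Int {
        queue.sync { value }
    }

    // A barrier block waits for in-flight readers and then runs alone
    // ("exclusive" access), like a unique_lock on a shared_mutex.
    func write(_ newValue: Int) {
        queue.async(flags: .barrier) {
            self.value = newValue
        }
    }
}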

Run Loops


Run loops are not technically a concurrency mechanism like GCD or operation queues, because they don’t enable the parallel execution of tasks.


A run loop is always bound to one particular thread. The main run loop, associated with the main thread, has a central role in every Cocoa and Cocoa Touch application, because it handles UI events, timers, and other kernel events.
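For example, a sketch of why the run loop matters for timers on background threads:

import Foundation

let thread = Thread {
    // A timer scheduled on a background thread only fires if that
    // thread's run loop is actually running.
    let timer = Timer(timeInterval: 1, repeats: true) { _ in
        print("tick")
    }
    RunLoop.current.add(timer, forMode: .default)
    RunLoop.current.run() // services the run loop; blocks this thread
}
thread.start()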
