RTOS Basics Concepts – Part 2

Hi all! In the previous post we covered RTOS Basics – Part 1. If you haven’t read it yet, please click Here and read it first. This post is the continuation of that part.

RTOS Advanced Tutorial

Real Time Operating Systems

In a Real Time Operating System, each activity is set up as its own task, which runs independently under the supervision of the kernel. For example, in Fig 1.5, one task updates the screen, another task handles the communications (TCP/IP), and a third processes the data. All three tasks run under the supervision of the kernel.

When an interrupt occurs from an external source, the interrupt handler services that particular interrupt and passes the information to the appropriate task by making a call to the kernel.

When should we opt for an RTOS?

An RTOS is really needed to simplify the code and make it more robust. For example, if the system has to accept inputs from multiple sources, handle various outputs, and also do some sort of calculation or processing, an RTOS makes a lot of sense.


Advantages of using an RTOS

  • An RTOS can run multiple independent activities.
  • Support for complex communication protocols (TCP/IP, I2C, CAN, USB, etc.). These protocols often come with the RTOS as libraries provided by the RTOS vendor.
  • File System.
  • GUI (Graphical User Interface).

RTOS Tracking Mechanisms

  • Task Control Block (TCB)
    • Tracks the status of individual tasks
  • Device Control Block (DCB)
    • Tracks status of system associated devices
  • Dispatcher/Scheduler
    • Primary function is to determine which task executes next

Kernel

The heart of every operating system is called the ‘kernel’. Tasks are relieved of monitoring the hardware; it is the responsibility of the kernel to manage and allocate resources. As tasks cannot have the CPU’s attention all the time, the kernel must also provide some more services. These include:

  • Interrupt handling services
  • Time services
  • Device management services
  • Memory management services
  • Input-output services

The kernel also takes care of task management. This involves the following:

  • Creating a task
  • Deleting a task
  • Changing the priority of the task
  • Changing the state of the task

Functioning of RTOS

  • Decides which task to be executed – task switching
  • Maintains information about the state of each task – task context
  • Maintains task’s context in a block – called the task control block

Possible states of Tasks

  • The task under execution – running state
  • Tasks ready for execution – ready state
  • Tasks waiting for an external event – waiting state or blocked
  • “The scheduler decides which task to run”

Basic elements of RTOS

  • Scheduler
  • Scheduling Points
  • Context Switch Routine
  • Definition of a Task
  • Synchronization
  • Mechanism for inter task communication

Scheduler

  • Decides which task in ready state queue has to be moved to running state
  • The scheduler uses a data structure called the ready list to track the tasks in ready state

Task Control Block (TCB)

The TCB stores each task’s details: its ID, priority, current state, and saved context (stack pointer and register values).


To schedule tasks, three techniques are commonly adopted:

  • Co-operative scheduling: In this scheme, a task runs until it completes its execution (or voluntarily yields the CPU).
  • Round Robin scheduling: Each task is assigned a fixed time slot in this scheme. The task needs to complete its work within that slot; otherwise it loses the CPU, and any unfinished work has to wait for the task’s next turn.
  • Preemptive scheduling: This scheduling scheme adds priority-dependent time allocation. Typically, 256 priority levels are used, and each task is assigned a unique priority level; some systems support more priority levels, or allow multiple tasks to share the same priority.

Idle Task

  • An infinite wait loop
  • Executed when no other task is ready
  • Has a valid task ID and the lowest priority

Context Switch

  • The process of storing and restoring the state of a process or thread when switching from one task to another, so that execution can be resumed from the same point at a later time.
  • Context switching is the mechanism by which an OS can take a running process, save its state and bring another process into execution. It does this by saving the process “context” and restoring the context of the next process in line to the CPU.

Preemption Vs Context switch

  • Preemption is when a process is taken off the CPU because a higher-priority process needs to run. Context switching is when the memory map and registers are changed.
  • Context switching happens whenever the process changes, which may happen because of preemption, but also for other reasons: the process blocks, its quantum runs out, etc. Context switching also happens when a process makes a system call or an interrupt or fault is serviced.
  • So preemption requires a context switch, but not all context switches are due to preemption.

Starvation

  • A task is starved when the scheduler gives it no CPU time
  • This could occur when a high-priority task is stuck in an infinite loop:
    • lower-priority tasks will be starved unless the OS terminates the looping task
    • other tasks of the same high priority will round-robin with that task

Task Synchronization

  • In a multi tasking system, tasks can interact with one another
    • directly
    • indirectly through common resources
  • These interactions must be coordinated or synchronized
  • All tasks should be able to communicate with one another to synchronize their activities
  • Mechanisms like mutexes (mutual exclusion), semaphores, message queues, and monitors are used for this purpose
  • A mutex object can be in any one of two states: owned or free

Race Condition

  • Occurs when the result of two or more tasks depends on the order (timing) in which they execute

Example

  • Consider tasks A and B, and memory locations A1 and B1
  • A race condition will arise if the result differs depending on whether A accesses A1 before B accesses B1, or B accesses B1 before A accesses A1

Real Example

  • Let us assume that two tasks, T1 and T2, each want to increment the value of a global integer by one. Ideally, the following sequence of operations would take place:

Original Scenario 

  • Integer i = 0;
  • T1 reads the value of i from memory into a register : 0
  • T1 increments the value of i in the register: (register contents) + 1 = 1
  • T1 stores the value of the register in memory : 1
  • T2 reads the value of i from memory into a register : 1
  • T2 increments the value of i in the register: (register contents) + 1 = 2
  • T2 stores the value of the register in memory : 2
  • Integer i = 2

Race Condition Scenario

  • Integer i = 0;
  • T1 reads the value of i from memory into a register : 0
  • T2 reads the value of i from memory into a register : 0
  • T1 increments the value of i in the register: (register contents) + 1 = 1
  • T2 increments the value of i in the register: (register contents) + 1 = 1
  • T1 stores the value of the register in memory : 1
  • T2 stores the value of the register in memory : 1
  • Integer i = 1
  • The final value of i is 1 instead of the expected result of 2

Semaphore

A semaphore (sometimes called a semaphore token) is a kernel object that one or more threads of execution can acquire or release for the purposes of synchronization or mutual exclusion.

A kernel can support many different types of semaphores, including

  • Binary Semaphore,
  • Counting Semaphore, and
  • Mutual‐exclusion (Mutex) semaphores.

Binary Semaphore

  • Similar to mutex
  • Can have a value 1  or  0
  • Whenever a task asks for the semaphore, the OS checks if the semaphore’s value is 1
  • If so, the call succeeds and the value is set to 0
  • Else, the task is blocked

Binary semaphores are treated as global resources:

  • They are shared among all tasks that need them.
  • Making the semaphore a global resource allows any task to release it, even if that task did not initially acquire it.

Counting Semaphores

  • Semaphores with an initial value greater than 1
  • can give multiple tasks simultaneous access to a shared resource, unlike a mutex
  • Priority inheritance, therefore, cannot be implemented

Mutexes

  • Are powerful tools for synchronizing access to shared resources
  • A mutual exclusion (mutex) semaphore is a special binary semaphore that supports
    • ownership,
    • recursive access,
    • task deletion safety, and
    • one or more protocols for avoiding problems inherent to mutual exclusion.
  • Problems that may arise with mutexes
    • Deadlock
    • priority Inversion

Deadlock

  • Can occur whenever there is a circular dependency between tasks and resources
    • E.g., consider two tasks A and B, each requiring two mutexes, X and Y;
    • task A takes mutex X and waits for Y, while task B takes mutex Y and waits for X
    • both tasks wait forever, deadlocked


Priority Inversion

Priority inversion occurs when a higher priority task is blocked and is waiting for a resource being used by a lower priority task, which has itself been preempted by an unrelated medium-priority task. In this situation, the higher priority task’s priority level has effectively been inverted to the lower priority task’s level.


Two common protocols used for avoiding priority inversion include:

  • Priority inheritance protocol
  • Ceiling priority protocol

Both protocols apply to the task that owns the mutex.

Priority inheritance protocol

  • It ensures that the priority level of the lower-priority task that has acquired the mutex is raised to that of the higher-priority task that has requested the mutex when inversion happens.
  • The priority of the raised task is lowered to its original value after the task releases the mutex that the higher-priority task requires.

Ceiling priority protocol

  • It ensures that the priority level of the task that acquires the mutex is automatically set to the highest priority of all possible tasks that might request that mutex when it is first acquired until it is released.

Mutex vs Semaphore

Consider the standard producer-consumer problem. Assume we have a buffer 4096 bytes long. A producer thread collects data and writes it to the buffer. A consumer thread processes the collected data from the buffer. The objective is that both threads should not work on the buffer at the same time.

Using Mutex

  • A mutex provides mutual exclusion: either the producer or the consumer can hold the key (mutex) and proceed with its work. As long as the buffer is being filled by the producer, the consumer needs to wait, and vice versa.
  • At any point in time, only one thread can work with the entire buffer. The concept can be generalized using a semaphore.

Using Semaphore

A semaphore is a generalized mutex. In lieu of a single buffer, we can split the 4 KB buffer into four 1 KB buffers (identical resources). A semaphore can be associated with these four buffers. The consumer and producer can then work on different buffers at the same time.

Misconception

  • There is some ambiguity between a binary semaphore and a mutex. We might have come across the claim that a mutex is a binary semaphore. But it is not! The purposes of a mutex and a semaphore are different. Perhaps, due to the similarity in their implementation, a mutex is sometimes referred to as a binary semaphore.
  • Strictly speaking, a mutex is a locking mechanism used to synchronize access to a resource. Only one task (which can be a thread or a process, based on the OS abstraction) can acquire the mutex. This means there is ownership associated with a mutex, and only the owner can release the lock (mutex).
  • A semaphore is a signaling mechanism (an “I am done, you can carry on” kind of signal). For example, if you are listening to songs (assume that is one task) on your mobile and your friend calls you at the same time, an interrupt is triggered, upon which an interrupt service routine (ISR) signals the call-processing task to wake up.

Messaging

Messaging provides a means of communication with other systems and between tasks. The messaging services include:

  • Semaphores
  • Event flags
  • Mailboxes
  • Pipes
  • Message queues

Semaphores are used to synchronize access to shared resources, such as common data areas. Event flags are used to synchronize the inter-task activities. Mailboxes, pipes, and message queues are used to send messages among tasks.

Pipe

  • Communication channel used to send data between tasks
  • Can be opened, closed, written to and read from using file I/O functions in C
  • Unlike a file, is unidirectional
  • Has a source end and a destination end
  • The source end task can only write to the pipe; the destination end task can only read from it
  • Acts as a queue
  • Depending on the pipe’s length, the task on the source end can write data into the pipe until the pipe fills up
  • Acts like a FIFO
  • A task attempting to write to a full pipe or trying to read from an empty pipe will be blocked

Message Queue

  • Allows transmission of arbitrary structures (messages) from task to task
  • They are bi-directional;
  • The tasks are blocked when they are trying to write to a full queue or read from an empty queue
  • Some implementations support the notion of message type
  • When a task places a message in the queue, it can associate a type identifier with the message
  • The task receiving the message can determine the nature of the message’s contents by examining the type field instead of the entire message

Interrupt Handling

  • A decision has to be made on whether an ISR can be preempted by another task
  • Some OSes do not allow non-interrupt tasks to be scheduled while an ISR is running
  • Some OSes allow a higher-priority task to preempt the ISR
