Linux Device Driver Tutorial Part 16 – Workqueue in Linux Kernel Part 3

This is the Linux Device Driver series. The aim of this series is to provide easy and practical examples that anyone can understand. In our previous tutorials we used the global workqueue, but in this tutorial we are going to create and use our own workqueue in a Linux device driver.

Work queue in Linux Device Driver

In our previous (Part 1, Part 2) tutorials we did not create any workqueue of our own. We simply created work and scheduled it on the global workqueue. Now we are going to create our own workqueue. Let’s get into the tutorial.

The core workqueue is represented by the structure struct workqueue_struct, which is the structure onto which work is placed. The work is typically added to the queue in the top half (interrupt context) and executed in the bottom half (process context, in a kernel worker thread).
The work itself is represented by the structure struct work_struct, which identifies the work item and its deferral function.

Create and destroy work queue structure

Work queues are created through a macro called create_workqueue, which returns a workqueue_struct pointer. You can destroy this workqueue later (if needed) through a call to the destroy_workqueue function.

struct workqueue_struct *create_workqueue( name );

void destroy_workqueue( struct workqueue_struct * );
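As a minimal sketch of how these pair up in a module (the workqueue name "own_wq" and the function names are assumptions for illustration), creation normally happens in the init path and destruction in the exit path:

```c
#include <linux/module.h>
#include <linux/workqueue.h>

static struct workqueue_struct *own_workqueue;  /* our dedicated workqueue */

static int __init etx_driver_init(void)
{
        /* Create a dedicated workqueue named "own_wq" */
        own_workqueue = create_workqueue("own_wq");
        if (own_workqueue == NULL)
                return -ENOMEM;
        return 0;
}

static void __exit etx_driver_exit(void)
{
        /* Drains pending work items, then frees the workqueue */
        destroy_workqueue(own_workqueue);
}

module_init(etx_driver_init);
module_exit(etx_driver_exit);
MODULE_LICENSE("GPL");
```

destroy_workqueue waits for all queued work to finish, so make sure nothing can queue new work (for example, an interrupt handler) after this point.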

You should use create_singlethread_workqueue() instead when you want a workqueue backed by a single thread shared across all processors.

create_workqueue and create_singlethread_workqueue() are both macros; internally, both use the alloc_workqueue function.


Allocate a workqueue with the specified parameters.

alloc_workqueue ( fmt, flags, max_active );

fmt – printf format for the name of the workqueue

flags – WQ_* flags

max_active – max in-flight work items, 0 for default

On success this returns a pointer to the allocated workqueue; on failure it returns NULL.
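Since the create_* macros boil down to alloc_workqueue, you can also call it directly. A hedged sketch (the name and flag choice are illustrative, and the claimed equivalence to create_workqueue is approximate):

```c
#include <linux/workqueue.h>

static struct workqueue_struct *own_workqueue;

/*
 * Roughly what create_workqueue("own_wq") expands to:
 * WQ_MEM_RECLAIM guarantees a rescuer thread under memory pressure,
 * and max_active = 1 limits the queue to one in-flight work item per CPU.
 */
static int setup_queue(void)
{
        own_workqueue = alloc_workqueue("own_wq", WQ_MEM_RECLAIM, 1);
        if (!own_workqueue)
                return -ENOMEM;
        return 0;
}
```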

WQ_* flags

This is the second argument of alloc_workqueue.


WQ_UNBOUND

Work items queued to an unbound wq are served by special worker-pools whose workers are not bound to any specific CPU. This makes the wq behave as a simple execution context provider without concurrency management. The unbound worker-pools try to start execution of work items as soon as possible. An unbound wq sacrifices locality but is useful in the following cases.

  • Wide fluctuation in the concurrency level requirement is expected and using bound wq may end up creating large number of mostly unused workers across different CPUs as the issuer hops through different CPUs.
  • Long running CPU intensive workloads which can be better managed by the system scheduler.


WQ_FREEZABLE

A freezable wq participates in the freeze phase of the system suspend operations. Work items on the wq are drained and no new work item starts execution until thawed.


WQ_MEM_RECLAIM

All wq which might be used in the memory reclaim paths MUST have this flag set. The wq is guaranteed to have at least one execution context regardless of memory pressure.


WQ_HIGHPRI

Work items of a highpri wq are queued to the highpri worker-pool of the target CPU. Highpri worker-pools are served by worker threads with an elevated nice level.

Note that normal and highpri worker-pools don’t interact with each other; each maintains its separate pool of workers and implements concurrency management among its workers.


WQ_CPU_INTENSIVE

Work items of a CPU-intensive wq do not contribute to the concurrency level. In other words, runnable CPU-intensive work items will not prevent other work items in the same worker-pool from starting execution. This is useful for bound work items that are expected to hog CPU cycles, so that their execution is regulated by the system scheduler.

Although CPU-intensive work items don’t contribute to the concurrency level, the start of their execution is still regulated by concurrency management, and runnable non-CPU-intensive work items can delay the execution of CPU-intensive work items.

This flag is meaningless for an unbound wq.

Queuing Work to workqueue

With the work structure initialized, the next step is enqueuing the work on a work queue. You can do this in a few ways.


queue_work

This will queue the work to the CPU on which it was submitted; if that CPU dies, the work can be processed by another CPU.

int queue_work( struct workqueue_struct *wq, struct work_struct *work );


wq – workqueue to use

work – work to queue

It returns false if work was already on a queue, true otherwise.
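A minimal sketch of queuing work from a top half (the handler names and the work item are illustrative assumptions; INIT_WORK(&workqueue, workqueue_fn) is assumed to have been called during init):

```c
#include <linux/interrupt.h>
#include <linux/workqueue.h>

static struct workqueue_struct *own_workqueue;
static struct work_struct workqueue;

/* Bottom half: runs later in a worker thread (process context) */
static void workqueue_fn(struct work_struct *work)
{
        pr_info("Executing Workqueue Function\n");
}

/* Top half: keep it short, just hand the work off and return */
static irqreturn_t irq_handler(int irq, void *dev_id)
{
        pr_info("Shared IRQ: Interrupt Occurred\n");
        queue_work(own_workqueue, &workqueue);
        return IRQ_HANDLED;
}
```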


queue_work_on

This puts work on a specific CPU.

int queue_work_on( int cpu, struct workqueue_struct *wq,
            struct work_struct *work );


cpu – cpu to put the work task on

wq – workqueue to use

work – job to be done


queue_delayed_work

After waiting for the given time, this function puts the work on the workqueue.

int queue_delayed_work( struct workqueue_struct *wq,
            struct delayed_work *dwork, unsigned long delay );

wq – workqueue to use

dwork – work to queue

delay – number of jiffies to wait before queueing or 0 for immediate execution


queue_delayed_work_on

After waiting for the given time, this puts the work on the workqueue on the specified CPU.

int queue_delayed_work_on( int cpu, struct workqueue_struct *wq,
            struct delayed_work *dwork, unsigned long delay );

cpu – cpu to put the work task on

wq – workqueue to use

dwork – work to queue

delay – number of jiffies to wait before queueing or 0 for immediate execution
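The delayed variants take a struct delayed_work rather than a plain work_struct. A sketch of their use (the names and the 2-second delay are illustrative assumptions):

```c
#include <linux/workqueue.h>
#include <linux/jiffies.h>

static struct workqueue_struct *own_workqueue;
static struct delayed_work delayed_wq;

static void delayed_fn(struct work_struct *work)
{
        pr_info("Delayed workqueue function executed\n");
}

static void schedule_it(void)
{
        INIT_DELAYED_WORK(&delayed_wq, delayed_fn);
        /* Run the work roughly 2 seconds from now */
        queue_delayed_work(own_workqueue, &delayed_wq,
                           msecs_to_jiffies(2000));
}
```

msecs_to_jiffies converts milliseconds to the jiffies count that the delay argument expects, which keeps the code independent of the kernel's HZ setting.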


Driver Source Code

In this source code, an interrupt fires whenever we read /dev/etx_device (to understand interrupts in Linux, see this tutorial). Whenever the interrupt hits, I schedule the work to the workqueue. I’m not doing any real job in either the interrupt handler or the workqueue function, since this is a tutorial post. But in real drivers, the workqueue function can carry out any operation that needs to be deferred.

We have created workqueue “own_wq” in init function.

Let’s go through the code.
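The full driver from this series is not reproduced here; the following is a condensed sketch of the relevant parts (character-device registration, the simulated interrupt, and the workqueue handoff). The IRQ number 11, the x86-only int instruction trick, and the device names follow this tutorial series; device-node creation (class_create/device_create) and some error handling are trimmed for brevity:

```c
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/fs.h>
#include <linux/cdev.h>
#include <linux/interrupt.h>
#include <linux/workqueue.h>

#define IRQ_NO 11  /* simulated IRQ line used in this series */

static dev_t dev;
static struct cdev etx_cdev;
static struct workqueue_struct *own_workqueue;
static struct work_struct workqueue;

/* Bottom half: executed later by a worker thread of own_wq */
static void workqueue_fn(struct work_struct *work)
{
        pr_info("Executing Workqueue Function\n");
}

/* Top half: schedule the deferred work and return quickly */
static irqreturn_t irq_handler(int irq, void *dev_id)
{
        pr_info("Shared IRQ: Interrupt Occurred\n");
        queue_work(own_workqueue, &workqueue);
        return IRQ_HANDLED;
}

/* Reading the device file triggers the (simulated) interrupt */
static ssize_t etx_read(struct file *filp, char __user *buf,
                        size_t len, loff_t *off)
{
        pr_info("Read function\n");
        asm("int $0x3B");  /* x86-only: raise the vector mapped to IRQ 11 */
        return 0;
}

static const struct file_operations fops = {
        .owner = THIS_MODULE,
        .read  = etx_read,
};

static int __init etx_driver_init(void)
{
        if (alloc_chrdev_region(&dev, 0, 1, "etx_Dev") < 0)
                return -1;
        cdev_init(&etx_cdev, &fops);
        if (cdev_add(&etx_cdev, dev, 1) < 0)
                goto r_chrdev;
        if (request_irq(IRQ_NO, irq_handler, IRQF_SHARED,
                        "etx_device", (void *)irq_handler))
                goto r_cdev;

        /* Create our own workqueue and the work item it will run */
        own_workqueue = create_workqueue("own_wq");
        INIT_WORK(&workqueue, workqueue_fn);

        pr_info("Device Driver Insert...Done!!!\n");
        return 0;

r_cdev:
        cdev_del(&etx_cdev);
r_chrdev:
        unregister_chrdev_region(dev, 1);
        return -1;
}

static void __exit etx_driver_exit(void)
{
        destroy_workqueue(own_workqueue);
        free_irq(IRQ_NO, (void *)irq_handler);
        cdev_del(&etx_cdev);
        unregister_chrdev_region(dev, 1);
        pr_info("Device Driver Remove...Done!!!\n");
}

module_init(etx_driver_init);
module_exit(etx_driver_exit);
MODULE_LICENSE("GPL");
```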


Building and Testing Driver

  • Build the driver by using Makefile (sudo make)
  • Load the driver using sudo insmod driver.ko
  • To trigger interrupt read device file (sudo cat /dev/etx_device)
  • Now see the Dmesg (dmesg)

[ 2562.609446] Major = 246 Minor = 0
[ 2562.649362] Device Driver Insert…Done!!!
[ 2565.133204] Device File Opened…!!!
[ 2565.133225] Read function
[ 2565.133248] Shared IRQ: Interrupt Occurred
[ 2565.133267] Executing Workqueue Function
[ 2565.140284] Device File Closed…!!!

  • We can see the prints “Shared IRQ: Interrupt Occurred“ and “Executing Workqueue Function“
  • Use the “ps -aef” command to see our workqueue. You will be able to see our workqueue, which is “own_wq“

UID    PID    PPID   C   STIME   TTY      TIME       CMD

root   3516   2      0   21:35   ?        00:00:00   [own_wq]

  • Unload the module using sudo rmmod driver


Difference between schedule_work and queue_work

  • If you want your own dedicated workqueue, create it using create_workqueue. In that case, put work on your workqueue using the queue_work function.
  • If you don’t want to create your own workqueue, you can use the kernel’s global workqueue. In that case, use the schedule_work function to put your work on the global workqueue.
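Side by side, the two approaches look like this (the work item and function names are illustrative; in a real driver you would pick one path, not both):

```c
#include <linux/workqueue.h>

static struct workqueue_struct *own_workqueue;
static struct work_struct my_work;

static void my_work_fn(struct work_struct *work)
{
        pr_info("Work handler ran\n");
}

static void demo(void)
{
        INIT_WORK(&my_work, my_work_fn);

        /* Option 1: the kernel's global workqueue */
        schedule_work(&my_work);

        /* Option 2: our own dedicated workqueue */
        own_workqueue = create_workqueue("own_wq");
        queue_work(own_workqueue, &my_work);
}
```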