an interrupt causes the execution of an interrupt handler or
interrupt service routine (ISR)
the operating system ensures that the interrupt
handler called is from the driver for the interrupting device
Identifying handlers and devices: IRQ
on the x86 system, each device is assigned a unique
Interrupt ReQuest (IRQ) number, either statically, or at boot time
see /proc/interrupts for a typical list
the IRQ may be shared among multiple handlers
if an IRQ is shared, all handlers are called, and each handler
must decide whether it needs to do anything
it does so by checking its own device to see whether it actually interrupted
the interrupt handler can check the device ID (passed in as
parameter) so a single function can be used to handle multiple interrupts
Interrupt Context
a reentrant function can be called by several threads
at the same time without error
Linux interrupt handlers need not be re-entrant -- they are
called one at a time, with their interrupt line disabled (or, for a
fast interrupt handler, with all interrupts disabled, but
only on the current processor)
the current task is not relevant to the interrupt
handler: the handler CANNOT suspend
the handler should not take too long
the handler should not have too many local variables (limited stack)
Interrupt Enabling
Code may need to disable interrupts, do something (critical region),
then re-enable interrupts
but if interrupts were already disabled, re-enabling them might
be incorrect
use:
unsigned long flags;
local_irq_save(flags);    /* disable interrupts, saving prior state */
... /* critical region */
local_irq_restore(flags); /* restore prior state, whatever it was */
Bottom Half Processing
interrupt handler may have to initiate an action that it
cannot afford to complete -- because it may take too long, or require
too many resources, e.g. locks
instead, interrupt handler schedules execution of a bottom half
the bottom half runs with interrupts enabled, and may sleep
linux bottom halves:
softirqs: fixed at kernel compile time, limited number, more time critical
tasklets: more flexible, created statically or dynamically
work queues: run in process context, may sleep
softirqs and work queues must be reentrant, tasklets are serialized
Bottom Halves in Linux
softirqs: fixed at kernel compile time, limited number, more time critical
tasklets: more flexible, created statically or dynamically
kernel timers: execute at a given time (or later)
work queues: run in process context, may sleep
SoftIRQs
currently 6:
high priority tasklets
timer
network transmit
network receive
SCSI
regular tasklets
raise_softirq(SOFTIRQ_ID) called by an interrupt handler (or
any other code, including the soft IRQ itself) marks the soft IRQ
pending
pending softIRQs executed:
after a hardware interrupt
by the kernel thread ksoftirqd
by any code (e.g. networking code) that calls do_softirq()
only interrupt handlers can pre-empt a soft IRQ, but multiple soft IRQ
handlers can run concurrently on separate processors
ksoftirqd runs with low priority (nice 19), so gives fast
response on a lightly-loaded system, but does not starve user processes
on a heavily loaded system
Tasklets
tasklet_schedule(&tasklet) called by interrupt handler (or
any other code) marks the tasklet as pending
pending tasklets executed whenever softIRQs are executed
like soft IRQs, a tasklet executes at most once for each call to
do_softirq()
cannot sleep, because called by a soft IRQ handler
unlike soft IRQs, a tasklet never runs concurrently with itself: not
reentrant, so less synchronization may be needed
Kernel Timers
also executed by a soft IRQ handler, so similar to a tasklet
TIMER_SOFTIRQ raised by the timer interrupt handler
add_timer() to start a timer
mod_timer() to change the expiration date of a timer
del_timer() to delete a timer
del_timer_sync() to delete a timer and wait until any
currently executing timer handler for that timer has completed
kernel keeps timers approximately sorted (in 5 bins) to make
it easier to determine which timer(s) has/have expired
Work Queues
potentially many work queue kernel threads, including the
default events thread
work is created by specifying a function and data for that
function to work on
work is then assigned to a kernel thread (via schedule_work()
for the default events thread, or queue_work() for a specific
work queue's thread)
these threads execute the work on their queue in order (and sleep
when idle -- see the Sleeping Barber model in Tanenbaum)
schedule_delayed_work() and
queue_delayed_work() can also specify a (minimum) delay (in
timer ticks)
kernel thread: has a current process, but has no user space, can sleep
(but probably unwise to make the events thread sleep too long)