%----------------------------------------------------------------------------
\chapter{Examples}
%----------------------------------------------------------------------------
This chapter presents the application of the proposed approach to
three meaningful examples, showing the code of the scheduling and
resource modules. The examples are actually implemented in the
Kernel, so these remarks can also be used as documentation.
%----------------------------------------------------------------------------
\section{SIMPLEEDF (Earliest Deadline First) Scheduling Module}
%----------------------------------------------------------------------------
This section describes an implementation of EDF with the following
characteristics:
\begin{itemize}
\item Support for periodic and sporadic tasks;
\item Support for release offsets and relative deadlines less than the period;
\item On-line guarantee using the utilization factor paradigm;
\item Temporal isolation implemented using the \texttt{CONTROL\_CAP} flag;
\item A number of different deadline/WCET/activation violation options.
\end{itemize}
Typically this module is registered at Level 0; in fact, the
guarantee algorithm works only if the Module can use all the
bandwidth of the system. To schedule background periodic
tasks, a soft task model should be used with a Scheduling Module
such as the CBS Scheduling Module. The Module described in this
section is contained in \texttt{modules/edf.c}.
%----------------------------------------------------------------------------
\subsection{State transition diagram}
%----------------------------------------------------------------------------
The state transition diagram for an EDF task is shown in Figure
\ref{Examples_EDF_Stati}.
\begin{figure}
\begin{center}
\includegraphics[width=1.0\columnwidth]{images/example_EDF_status.eps}
\end{center}
\caption{State transition diagram for an EDF task. (For simplicity, some
transitions have been excluded.)}
\label{Examples_EDF_Stati}
\end{figure}
The states whose names start with \texttt{EDF\_} are internal
Module states. The event names are reported near the arcs; in
particular, the names of the timer events are written within
parentheses. The different states have the following meanings:
\begin{description}
\item [\texttt{FREE}]Before creation, and after destruction, the
task is in this state.
\item [\texttt{SLEEP}]The task waits in this state for an explicit activation.
\item [\texttt{EDF\_READY}]This is the classic ready state where the task has to
wait until it has the highest priority (i.e., the earliest deadline).
\item [\texttt{EXE}]This is the state when the task is executing.
\item [\texttt{EDF\_WAIT}]This is the state of a sporadic task after finishing
an instance, waiting for the endperiod event to arrive.
\item [\texttt{EDF\_IDLE}]This is the
state of a periodic task after finishing an instance, waiting for
the endperiod event to arrive. An activated task with a release
offset is also put in this state, waiting for the first period to
arrive.
\item [\texttt{EDF\_ZOMBIE}]This is the state where a task
is put when it terminates correctly. The allocated bandwidth is
freed when the endperiod event is fired.
\end{description}
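As a reference, the following sketch shows one possible way to
define the module-internal states; the name
\texttt{MODULE\_STATUS\_BASE} (the first value that the Generic
Kernel reserves for module-defined states) is an assumption, and the
exact numbering is only indicative:
\begin{verbatim}
/* Sketch: module-internal task states of the EDF Module.
   FREE, SLEEP and EXE are generic kernel states; the EDF_* values
   are private to the Module. MODULE_STATUS_BASE is assumed to be
   the first value reserved for module-defined states. */
#define EDF_READY          MODULE_STATUS_BASE
#define EDF_WCET_VIOLATED  (MODULE_STATUS_BASE + 1)
#define EDF_WAIT           (MODULE_STATUS_BASE + 2)
#define EDF_IDLE           (MODULE_STATUS_BASE + 3)
#define EDF_ZOMBIE         (MODULE_STATUS_BASE + 4)
\end{verbatim}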
%----------------------------------------------------------------------------
\subsection{Level
descriptor}
%----------------------------------------------------------------------------
The EDF Module extends the
\texttt{level\_des} structure that
contains the interface exported by a generic Scheduling Module.
The extended data structure, \texttt{EDF\_level\_des}, contains
the following fields:
\begin{description}
\item [\texttt{flags}]This variable stores the flags passed to the
module at registration time. The following flags are defined:
\begin{description}
\item [\texttt{EDF\_ENABLE\_DL\_CHECK}]If set, the module will
keep track of task deadlines by internal deadline timers. The
behavior in case of a deadline overrun depends on the
\texttt{EDF\_ENABLE\_DL\_EXCEPTION} flag, see below.
\item
[\texttt{EDF\_ENABLE\_WCET\_CHECK}]If set, the module will keep
track of task execution times by enabling the CONTROL\_CAP flag
for the tasks in the generic kernel.
\item
[\texttt{EDF\_ENABLE\_DL\_EXCEPTION}]If set, the module will raise
an exception if a deadline overrun occurs. If not set, the
\texttt{dl\_miss} counter for the task is increased every time a
deadline overrun occurs.
\item
[\texttt{EDF\_ENABLE\_WCET\_EXCEPTION}]If set, the module will
raise an exception if an execution-time overrun occurs. If not
set, the \texttt{wcet\_miss} counter for the task is increased
every time an execution-time overrun occurs.
\item
[\texttt{EDF\_ENABLE\_ACT\_EXCEPTION}]If set, the module will
raise an exception if a task is activated more often than its
declared minimum interarrival time. If not set, the \texttt{nskip}
counter for the task is increased instead.
\end{description}
\item [\texttt{ready}]This is an \texttt{IQUEUE} variable used to
handle the ready queue. \item [\texttt{U}]This variable is used to
store the sum of the reserved bandwidth for the tasks owned by the
Module. \item [\texttt{tvec}]A vector of EDF task descriptors,
\texttt{EDF\_task\_des}, that defines a number of additional
variables for each task, see below.
\end{description}
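A minimal sketch of the extended descriptor follows; the field types
(\texttt{IQUEUE}, \texttt{bandwidth\_t}) are taken from the text,
while the array size \texttt{MAX\_PROC} and the exact layout are
assumptions. The type \texttt{EDF\_task\_des} is described in the
next subsection.
\begin{verbatim}
/* Sketch of the extended level descriptor (layout is indicative). */
typedef struct {
  level_des l;                 /* interface of a generic Scheduling Module */
  int flags;                   /* EDF_ENABLE_* flags given at registration */
  IQUEUE ready;                /* ready queue, ordered by absolute deadline*/
  bandwidth_t U;               /* bandwidth reserved by the Module's tasks */
  EDF_task_des tvec[MAX_PROC]; /* per-task private descriptors (see below) */
} EDF_level_des;
\end{verbatim}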
%----------------------------------------------------------------------------
\subsection{Task
descriptor}
%----------------------------------------------------------------------------
The EDF module introduces a number of additional task variables.
They are collected in a
\texttt{EDF\_task\_des} structure for each
task:
\begin{description}
\item [\texttt{flags}]Flags that store some additional type/status
information about the task. The following flags are defined:
\begin{description}
\item [\texttt{EDF\_FLAG\_SPORADIC}]The task is sporadic. This
influences some state transitions, see Figure
\ref{Examples_EDF_Stati}.
\item [\texttt{EDF\_FLAG\_SPOR\_LATE}]The task is sporadic and has
experienced a period overrun. (This is only possible if the
\texttt{EDF\_ENABLE\_DL\_EXCEPTION} level flag is not set.) When it
finishes, the task should go directly to the \texttt{SLEEP} state.
\end{description}
\item [\texttt{period}]The period or minimum interarrival time of
the task.
\item [\texttt{rdeadline}]The relative deadline of the
task. Currently, only $D\leq T$ is allowed.
\item
[\texttt{offset}]The release offset, relative to the activation
time of the task. \item [\texttt{release}]This variable stores the
release time of the current instance.
\item
[\texttt{adeadline}]This variable stores the absolute deadline
associated with the most recent task activation.
\item
[\texttt{dltimer}]A handle to the task deadline timer.
\item
[\texttt{eop\_timer}]A handle to the task end-of-period timer.
\item [\texttt{dl\_miss}]Counter for the number of missed
deadlines.
\item [\texttt{wcet\_miss}]Counter for the number of
execution-time overruns.
\item [\texttt{act\_miss}]Counter for the
number of skipped activations.
\item [\texttt{nact}]The current
number of queued activations (periodic tasks only).
\end{description}
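The per-task descriptor can be sketched as follows; the concrete
types (\texttt{TIME} for relative times, \texttt{struct timespec} for
absolute times, an integer handle for the timers) are assumptions:
\begin{verbatim}
/* Sketch of the per-task descriptor (types are indicative). */
typedef struct {
  int    flags;              /* EDF_FLAG_SPORADIC, EDF_FLAG_SPOR_LATE     */
  TIME   period;             /* period or minimum interarrival time       */
  TIME   rdeadline;          /* relative deadline (D <= T)                */
  TIME   offset;             /* release offset from the activation time   */
  struct timespec release;   /* release time of the current instance      */
  struct timespec adeadline; /* absolute deadline of the last activation  */
  int    dltimer;            /* handle of the deadline timer              */
  int    eop_timer;          /* handle of the end-of-period timer         */
  int    dl_miss;            /* number of missed deadlines                */
  int    wcet_miss;          /* number of execution-time overruns         */
  int    act_miss;           /* number of skipped activations             */
  int    nact;               /* queued activations (periodic tasks only)  */
} EDF_task_des;
\end{verbatim}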
%----------------------------------------------------------------------------
\subsection{Module
internal event handlers}
%----------------------------------------------------------------------------
The module uses internal kernel events (one-shot timers) to handle
the release of tasks, deadline overruns, etc. When an event is
posted by the module, an event handler is specified, and the PID
of the relevant task is passed as a parameter. For instance, the
\texttt{endperiod} event is handled by the following function:
%----------------------------------------------------------------------------
\subsubsection{\texttt{static
void EDF\_timer\_endperiod(void *par) }}
%----------------------------------------------------------------------------
The first thing to do when handling an event is to recover the
information about the Module that posted the event. This can be
done with the following statements:
\begin{verbatim}
PID p = (PID) par;
EDF_level_des *lev =
    (EDF_level_des *)level_table[proc_table[p].task_level];
\end{verbatim}
The generic kernel task fields can be referenced as
\texttt{proc\_table{[}p{]}.name}; the internal data of the Module
can be referenced as \texttt{lev->field}.
The EDF task descriptor can be referenced as
\begin{verbatim}
EDF_task_des *td = &lev->tvec[p];
\end{verbatim}
and the EDF task fields can then be referenced as
\texttt{td->field}.
By studying the state transition diagram, we see that different
actions should be taken depending on the state of the task:
\begin{itemize}
\item If the task state is \texttt{EDF\_ZOMBIE,} the task is
inserted into the \texttt{freedesc} queue and the allocated
bandwidth is freed;
\item If the state is \texttt{EDF\_WAIT,} the
task state is set to \texttt{SLEEP}, so the sporadic task can be
reactivated;
\item If the state is \texttt{EDF\_IDLE} and the task
is periodic, it is reactivated by a call to the
\texttt{EDF\_intern\_release} function. This involves posting a
deadline event (if \texttt{EDF\_ENABLE\_DL\_CHECK} is set);
inserting the task in the ready queue; increasing the absolute
deadline; and telling the kernel that it may need to reschedule by
calling \texttt{event\_need\_reschedule.}
\item If the state is
\texttt{EDF\_IDLE} and the task is sporadic, it is marked as late.
\end{itemize}
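The handler can be sketched as follows; the queue helper
\texttt{iq\_insertlast}, the global \texttt{freedesc} queue, the
signature of \texttt{EDF\_intern\_release} and the exact way the
bandwidth is given back are assumptions about the kernel API:
\begin{verbatim}
/* Sketch of the end-of-period handler, following the steps above. */
static void EDF_timer_endperiod(void *par)
{
  PID p = (PID) par;
  EDF_level_des *lev =
      (EDF_level_des *)level_table[proc_table[p].task_level];
  EDF_task_des *td = &lev->tvec[p];

  switch (proc_table[p].status) {
  case EDF_ZOMBIE:
    /* correct termination: free the descriptor and the bandwidth */
    proc_table[p].status = FREE;
    iq_insertlast(p, &freedesc);
    /* lev->U -= bandwidth reserved for p (from wcet and period)  */
    break;
  case EDF_WAIT:
    /* sporadic task: it can now be activated again               */
    proc_table[p].status = SLEEP;
    break;
  case EDF_IDLE:
    if (td->flags & EDF_FLAG_SPORADIC)
      td->flags |= EDF_FLAG_SPOR_LATE;  /* sporadic task: mark it late */
    else
      EDF_intern_release(p, lev);       /* periodic task: new release  */
    break;
  }
}
\end{verbatim}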
The module also contains the following event handlers:
\begin{itemize}
\item \texttt{static void EDF\_timer\_deadline(void *par)}
\item \texttt{static void EDF\_timer\_offset(void *par)}
\item \texttt{static void EDF\_timer\_guest\_deadline(void *par)}
\end{itemize}
%----------------------------------------------------------------------------
\subsection{Public Functions}
%----------------------------------------------------------------------------
The Module redefines the Level Calls interface. In the following
paragraphs the implementation of these functions is described. All
the functions of the interface receive a parameter of type
\texttt{LEVEL} that can be used, in a way similar to the parameter
passed to the event handlers, to find all the needed data
structures.
%----------------------------------------------------------------------------
\subsubsection{\texttt{PID
EDF\_public\_scheduler(LEVEL l);}}
%----------------------------------------------------------------------------
This is the Module scheduler that, as all good schedulers, simply
returns the first task in the ready queue without extracting it.
%----------------------------------------------------------------------------
\subsubsection{\texttt{int
EDF\_public\_guarantee(LEVEL l, bandwidth\_t *freebandwidth);}}
%----------------------------------------------------------------------------
The on-line guarantee function simply verifies whether there is
enough free bandwidth to schedule the tasks of the Module. If so,
the free bandwidth is decremented by the amount used by the Module,
and the function reports that the task set can be guaranteed.
If the guarantee is called after a task creation in the Module, it
may happen that the new task, together with all the tasks already
guaranteed by the Module, requires a bandwidth greater than 1 (note
that the \texttt{U} field can only store numbers in the
{[}0\ldots{}1{]} interval). In this case, the function
\texttt{EDF\_public\_create} sets a flag that forces the guarantee
algorithm to fail.
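A sketch of the guarantee function is given below; the name of the
failure flag (\texttt{EDF\_FAILED\_GUARANTEE}) and the convention
that a non-zero return value means ``guaranteed'' are assumptions:
\begin{verbatim}
/* Sketch: bandwidth_t is assumed to be a fixed-point type where
   MAX_BANDWIDTH represents a utilization of 1. */
static int EDF_public_guarantee(LEVEL l, bandwidth_t *freebandwidth)
{
  EDF_level_des *lev = (EDF_level_des *)level_table[l];

  if (lev->flags & EDF_FAILED_GUARANTEE)
    return 0;                    /* a creation already exceeded U = 1   */

  if (*freebandwidth >= lev->U) {
    *freebandwidth -= lev->U;    /* reserve the bandwidth of this level */
    return 1;                    /* the task set can be guaranteed      */
  }
  return 0;                      /* not enough free bandwidth           */
}
\end{verbatim}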
%----------------------------------------------------------------------------
\subsubsection{\texttt{int EDF\_public\_create(LEVEL l, PID p, TASK\_MODEL *m);}}
%----------------------------------------------------------------------------
The function checks whether the Model passed as second parameter can
be handled. In this case, the Module handles all the
\texttt{HARD\_TASK\_MODEL}s that have a correct \texttt{pclass}
value and a wcet and a period different from 0. This function sets
the \texttt{period}, \texttt{flag}, and \texttt{wcet} internal
fields of the newly created task. The function also sets the
\texttt{CONTROL\_CAP} flag to inform the Generic Kernel that the
execution time has to be controlled. Finally, the function allocates
the system bandwidth so that it can be checked by the guarantee
algorithm. If the bandwidth allocated for the already guaranteed
tasks plus that of the new task is greater than 1, a flag is set to
signal that the guarantee algorithm must fail.
%----------------------------------------------------------------------------
\subsubsection{\texttt{void EDF\_public\_detach(LEVEL l, PID p);}}
%----------------------------------------------------------------------------
This function simply reclaims the bandwidth allocated for the task
by \texttt{EDF\_public\_create}, and clears the flag set by
\texttt{EDF\_public\_create} when the guarantee was impossible
($U>1$).
%----------------------------------------------------------------------------
\subsubsection{\texttt{int EDF\_public\_eligible(LEVEL l, PID p);}}
%----------------------------------------------------------------------------
This function simply returns 0 because EDF tasks are always
eligible. In fact, the EDF Module does not use the guest
functions of another Module to handle its tasks.
%----------------------------------------------------------------------------
\subsubsection{\texttt{void EDF\_public\_dispatch(LEVEL l, PID p, int nostop);}}
%----------------------------------------------------------------------------
To dispatch an EDF task, the task itself must be removed from the
ready queue. The capacity handling (such as posting the capacity
event) is done automatically by the Generic Kernel.
%----------------------------------------------------------------------------
\subsubsection{\texttt{void EDF\_public\_epilogue(LEVEL l, PID p);}}
%----------------------------------------------------------------------------
The function must suspend the running task, either because it has
been preempted or because it has exhausted its capacity. Therefore,
the first thing to do is to check the available capacity. If it is
exhausted, an exception is raised and the task is put into the
\texttt{EDF\_WCET\_VIOLATED} state\footnote{The task does not need
to be extracted from any queue, because it was already extracted by
the \texttt{EDF\_public\_dispatch} function.}.
If the task has not consumed all of its capacity, it is inserted
back into the ready queue.
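The following sketch illustrates the epilogue; the capacity field
\texttt{proc\_table{[}p{]}.avail\_time}, the exception primitive
\texttt{kern\_raise} with the code \texttt{XWCET\_VIOLATION}, and
the deadline-ordered insertion \texttt{iq\_timespec\_insert} are
assumptions about the Generic Kernel API:
\begin{verbatim}
/* Sketch of the epilogue, following the description above. */
static void EDF_public_epilogue(LEVEL l, PID p)
{
  EDF_level_des *lev = (EDF_level_des *)level_table[l];

  if ((lev->flags & EDF_ENABLE_WCET_CHECK) &&
      proc_table[p].avail_time <= 0) {
    /* capacity exhausted: raise the exception and park the task;
       no queue extraction is needed (see the footnote above)     */
    kern_raise(XWCET_VIOLATION, p);
    proc_table[p].status = EDF_WCET_VIOLATED;
  } else {
    /* preemption: back into the ready queue, ordered by deadline */
    iq_timespec_insert(p, &lev->ready);
    proc_table[p].status = EDF_READY;
  }
}
\end{verbatim}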
%----------------------------------------------------------------------------
\subsubsection{\texttt{void EDF\_public\_activate(LEVEL l, PID p, struct timespec *t);}}
%----------------------------------------------------------------------------
This function simply activates the task, inserting it into the
ready queue. A task can be activated only if it is in the
\texttt{SLEEP} or in the \texttt{EDF\_WCET\_VIOLATED} state. If the
task is in the \texttt{EDF\_WAIT} state, it means that it is a
sporadic task activated too early, so an exception is raised.
The function executes the following steps:
\begin{itemize}
\item A suitable deadline is computed for the task;
\item The task
is inserted into the ready queue;
\item A deadline event is posted
for the task.
\end{itemize}
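A sketch of the activation follows; \texttt{ADDUSEC2TIMESPEC},
\texttt{kern\_event\_post}, \texttt{kern\_raise} (with an assumed
exception code) and \texttt{iq\_timespec\_insert} are assumptions
about the kernel API:
\begin{verbatim}
/* Sketch of the activation, following the three steps above. */
static void EDF_public_activate(LEVEL l, PID p, struct timespec *t)
{
  EDF_level_des *lev = (EDF_level_des *)level_table[l];
  EDF_task_des *td = &lev->tvec[p];

  if (proc_table[p].status != SLEEP &&
      proc_table[p].status != EDF_WCET_VIOLATED) {
    if (proc_table[p].status == EDF_WAIT)
      kern_raise(XACTIVATION, p);  /* sporadic task activated too early */
    return;
  }

  /* 1. compute the absolute deadline of the new instance */
  td->release   = *t;
  td->adeadline = *t;
  ADDUSEC2TIMESPEC(td->rdeadline, &td->adeadline);

  /* 2. insert the task into the ready queue */
  proc_table[p].status = EDF_READY;
  iq_timespec_insert(p, &lev->ready);

  /* 3. post the deadline event, if deadline checking is enabled */
  if (lev->flags & EDF_ENABLE_DL_CHECK)
    td->dltimer = kern_event_post(&td->adeadline,
                                  EDF_timer_deadline, (void *)p);
}
\end{verbatim}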
%----------------------------------------------------------------------------
\subsubsection{\texttt{void EDF\_public\_unblock(LEVEL l, PID p);}}
%----------------------------------------------------------------------------
The function simply inserts the task into the ready queue.
The task was blocked on a synchronization by a call to the
function \texttt{EDF\_public\_block}.
%----------------------------------------------------------------------------
\subsubsection{\texttt{void EDF\_public\_block(LEVEL l, PID p);}}
%----------------------------------------------------------------------------
The function implements a synchronization block. It simply does
nothing, because:
\begin{itemize}
\item The task was already extracted from the ready queue;
\item
The capacity event is handled by the Generic Kernel;
\item The
task state is set by the calling primitive;
\item The deadline
does not need modifications.
\end{itemize}
%----------------------------------------------------------------------------
\subsubsection{\texttt{int EDF\_public\_message(LEVEL l, PID p, void *m);}}
%----------------------------------------------------------------------------
This function implements only the \texttt{task\_endcycle} behavior,
doing two things:
\begin{itemize}
\item The \texttt{wcet} of the task is refilled, since the task
has finished its instance;
\item The task state is set to
\texttt{EDF\_IDLE} or \texttt{EDF\_WAIT}, waiting for the endperiod
event to arrive.
\end{itemize}
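A sketch of the endcycle handling inside this function is shown
below; the \texttt{avail\_time} and \texttt{wcet} fields of
\texttt{proc\_table} are assumptions about where the capacity is
stored:
\begin{verbatim}
/* Sketch of the endcycle handling, following the two steps above. */
static int EDF_public_message(LEVEL l, PID p, void *m)
{
  EDF_level_des *lev = (EDF_level_des *)level_table[l];
  EDF_task_des *td = &lev->tvec[p];

  /* 1. refill the capacity: the instance is finished */
  proc_table[p].avail_time = proc_table[p].wcet;

  /* 2. wait for the endperiod event in the proper state */
  if (td->flags & EDF_FLAG_SPORADIC)
    proc_table[p].status = EDF_WAIT;
  else
    proc_table[p].status = EDF_IDLE;

  return 0;
}
\end{verbatim}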
%----------------------------------------------------------------------------
\subsubsection{\texttt{void EDF\_public\_end(LEVEL l, PID p);}}
%----------------------------------------------------------------------------
The function should erase all the information about the task from
the data structures of the Module. It simply sets the task state to
\texttt{EDF\_ZOMBIE}. The deallocation of the bandwidth used by the
task and the freeing of the task descriptor are performed when the
endperiod event is handled.
%----------------------------------------------------------------------------
\subsection{Private Functions}
%----------------------------------------------------------------------------
The EDF Module can accept a \texttt{JOB\_TASK\_MODEL} as the Model
for the Guest Tasks. This Model does not provide information about
the time that the task will execute. This means that the EDF Module
does not check the execution time of the task; it must be checked
by the Module that inserts the tasks as guest tasks. In the
following paragraphs the guest calls are described.
%----------------------------------------------------------------------------
\subsubsection{\texttt{int EDF\_private\_insert(LEVEL l, PID p, TASK\_MODEL *m);}}
%----------------------------------------------------------------------------
This function is called by a generic Aperiodic Server Module to
insert a task into the EDF Module.
The function simply fills the private data structures of the EDF
Module with the task parameters passed through the Task Model. No
guarantee is performed on the guest tasks (guaranteeing a guest
task is the responsibility of the Module that calls this function).
%----------------------------------------------------------------------------
\subsubsection{\texttt{void EDF\_private\_dispatch(LEVEL l, PID p, int nostop);}}
%----------------------------------------------------------------------------
This function is typically called by the
\texttt{task\_dispatch}
Task Call of the Module that inserts the task as guest task. The
effect of the call is to extract a task from the ready queue, so
it is identical to the \texttt{task\_dispatch} Task Call.
%----------------------------------------------------------------------------
\subsubsection{\texttt{void EDF\_private\_epilogue(LEVEL l, PID p);}}
%----------------------------------------------------------------------------
This function is called when a task is preempted (no capacities are
handled by the Module for guest tasks). The function simply inserts
the task into the ready queue.
%----------------------------------------------------------------------------
\subsubsection{\texttt{void EDF\_guest\_activate(LEVEL l, PID p);}}
%----------------------------------------------------------------------------
This function is called when the Module that inserted the task into
the system through \texttt{EDF\_private\_insert} wants to activate
it. The effect of this function is to insert the task into the
ready queue and to post a deadline event if deadline misses should
be detected for the task (a flag specifying whether the Module
should generate a deadline event is given in the
\texttt{JOB\_TASK\_MODEL}).
%----------------------------------------------------------------------------
\subsubsection{\texttt{void EDF\_private\_extract(LEVEL l, PID p);}}
%----------------------------------------------------------------------------
This function is called by a Module when it wants to terminate a
task inserted as a guest. The function is not only called at task
termination, but also when the Module has to change some parameters
of the task (for example, when a deadline should be postponed).
The function has to erase all references to the task from the
private data structures of the EDF Module. Note that this function
can also be called when the task is not in the \texttt{EXE} state,
so all task states should be checked for a correct behavior.
%----------------------------------------------------------------------------
\subsection{Additional Functions exported by the Module}
%----------------------------------------------------------------------------
Finally, the EDF Module exports two functions that can be used by
the user.
The first function is the registration function, used to register
the Module in the system and to initialize its internal data
structures.
The second function, \texttt{EDF\_usedbandwidth}, can be used to
get the bandwidth currently used by the Module. The function takes
as a parameter the level at which an EDF Module is registered, so
that its private data structures can be read. Note that passing the
level of a Module in an application should not be considered a
violation of the independence of an Application from the Registered
Modules: if an application wants to access specific data of a
Module, it has to know at which level the Module is registered.
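A hypothetical usage sketch follows; the name and the argument of
the registration function (here \texttt{EDF\_register\_level} with
an \texttt{EDF\_ENABLE\_ALL} flag) and the printing primitive are
assumptions, since the text only states that a registration function
and \texttt{EDF\_usedbandwidth} are exported:
\begin{verbatim}
/* Hypothetical usage sketch (function and flag names are assumed). */
LEVEL edf_level;

void register_my_modules(void)
{
  /* register the EDF Module, typically at Level 0 */
  edf_level = EDF_register_level(EDF_ENABLE_ALL);
}

void print_edf_load(void)
{
  /* read the bandwidth currently reserved by the EDF Module */
  bandwidth_t u = EDF_usedbandwidth(edf_level);
  kern_printf("EDF reserved bandwidth: %u\n", (unsigned)u);
}
\end{verbatim}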
%----------------------------------------------------------------------------
\section{PS (Polling Server) Scheduling Module}
%----------------------------------------------------------------------------
This Section describes an implementation of a PS Module
with the following characteristics:
\begin{itemize}
\item Soft Aperiodic Task support;
\item On-line guarantee using
the utilization factor paradigm;
\item Temporal isolation
implemented without the \texttt{CONTROL\_CAP} flag.
\item The ability to use the idle time left by the Modules
registered at lower level numbers;
\item Support for pending task activations;
\item Compatibility with static (RM) and dynamic (EDF) algorithms.
\end{itemize}
This Module implements an aperiodic server that inserts its tasks
into another Scheduling Module, without having any information on
how the Master Module is implemented. The module described in this
section is contained in \texttt{modules/ps}.
%----------------------------------------------------------------------------
\subsection{Transition state diagram}
%----------------------------------------------------------------------------
Figure \ref{Examples_PS_Status} shows the state diagram of a task.
\begin{figure}
\begin{center}
\includegraphics[width=1.0\columnwidth]{images/example_PS_status.eps}
\end{center}
\caption{State Diagram of a Task scheduled with the PS Module.}
\label{Examples_PS_Status}
\end{figure}
The Module introduces only one state, \texttt{PS\_WAIT}. This is
the state in which a task waits to be inserted into the ``ready
queue''. The server queue is handled in a FIFO way. The other
states that a task can be in are internal states of the Master
Module.
%----------------------------------------------------------------------------
\subsection{Private Data structures}
%----------------------------------------------------------------------------
The PS Module extends the \texttt{level\_des} structure, adding
some private data structures, listed below:
\begin{description}
\item [\texttt{nact}]This array is used to track the pending
activations of the tasks handled by the Module. Each element has
the value \texttt{-1} (if the corresponding task skips pending
activations) or a value \texttt{>=0} (that is, the number of
pending activations for the task);
\item [\texttt{lastdline}]This field is used to
store the deadline used by the Polling Server;
\item
[\texttt{period}]This field stores the reactivation period of the
server;
\item [\texttt{Cs}]This field stores the Maximum capacity
of the server;
\item [\texttt{availCs}]This field stores the
computation time available for the server at a given time. The
capacity is updated when the task handled by the Module is
scheduled by the Master Module, or when a task handled by the
Module is dispatched following the shadow chain and the server is
not scheduling in background;
\item [\texttt{wait}]This field is the wait queue, where the tasks
wait for their turn to be served by the Polling Server;
\item [\texttt{activated}]This field is the PID of the task
currently inserted into the Master Module. The server inserts at
most one task at a time into the Master Module;
\item [\texttt{flags}]The flag field is used to store some
information passed when the Module was registered, such as the
guarantee algorithm to be used (RM or EDF). This field is also used
to know whether the server is executing a task in background;
\item [\texttt{U}]This field is used to store the bandwidth used by
the tasks handled by the Module.
\item [\texttt{scheduling\_level}]This field stores the level of
the Master Module.
\end{description}
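A minimal sketch of the private level descriptor follows; the field
types are assumptions based on the descriptions above (for instance,
\texttt{availCs} must be a signed quantity because the capacity can
become negative):
\begin{verbatim}
/* Sketch of the PS private level descriptor (layout is indicative). */
typedef struct {
  level_des l;               /* interface of a generic Scheduling Module */
  int nact[MAX_PROC];        /* pending activations, -1 = skip them      */
  struct timespec lastdline; /* current deadline of the Polling Server   */
  TIME period;               /* reactivation period of the server        */
  TIME Cs;                   /* maximum server capacity                  */
  int availCs;               /* available capacity (can become negative) */
  IQUEUE wait;               /* FIFO queue of the tasks to be served     */
  PID activated;             /* task currently inserted in the Master    */
  int flags;                 /* registration flags + background status   */
  bandwidth_t U;             /* bandwidth used by the server             */
  LEVEL scheduling_level;    /* level of the Master Module               */
} PS_level_des;
\end{verbatim}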
%----------------------------------------------------------------------------
\subsection{Internal Module Functions}
%----------------------------------------------------------------------------
The PS Module needs to define some internal functions. These
functions are called to handle the internal events posted by the
Module. Many of these functions are declared \texttt{static}, so
they are not visible outside the Module.
%----------------------------------------------------------------------------
\subsubsection{\texttt{void PS\_activation(PS\_level\_des *lev);}}
%----------------------------------------------------------------------------
This function is an inline function that handles the activation of
a task in the Master Module. In particular:
\begin{itemize}
\item It initializes a Job Task Model, to be passed to the Master
Module, with the period and the deadline of the Polling Server;
\item It creates and activates, through the guest calls, the task
indexed by the private field \texttt{lev->activated}.
\end{itemize}
%----------------------------------------------------------------------------
\subsubsection{\texttt{void PS\_deadline\_timer(void *a);}}
%----------------------------------------------------------------------------
This function implements the periodic reactivation of the Polling
Server. In particular:
\begin{itemize}
\item a new deadline is computed and a new deadline event is
posted;
\item the server capacity is recharged: it is reloaded to the
maximum value if the available capacity was positive, or to the
maximum value plus the (negative) residual capacity if it was
negative;
\item if, after the recharge, the available capacity is positive, a
waiting task is activated; if there is no task to serve, the
available capacity is discarded.
\end{itemize}
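The recharge logic can be sketched as follows; the helper
\texttt{iq\_query\_first} and the interpretation of the last step
(the capacity is discarded when there is nothing to serve, as in a
classic Polling Server) are assumptions:
\begin{verbatim}
/* Sketch of the capacity recharge done by PS_deadline_timer. */
static void PS_recharge(PS_level_des *lev)
{
  if (lev->availCs > 0)
    lev->availCs = lev->Cs;        /* full recharge                     */
  else
    lev->availCs += lev->Cs;       /* partial recharge: pay the debt    */

  if (lev->availCs > 0 && lev->activated == NIL) {
    if (iq_query_first(&lev->wait) != NIL)
      PS_activation(lev);          /* serve the first waiting task      */
    else
      lev->availCs = 0;            /* nothing to serve: capacity lost   */
  }
}
\end{verbatim}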
%----------------------------------------------------------------------------
\subsection{Public Functions}
%----------------------------------------------------------------------------
All the functions of the interface receive a \texttt{LEVEL}
parameter that can be used, in a way similar to the parameter
passed to the event handlers, to get all the private data of the
Module.
%----------------------------------------------------------------------------
\subsubsection{\texttt{PID PS\_level\_scheduler(LEVEL l);}}
%----------------------------------------------------------------------------
This scheduler is used by the Module if the Module is registered
specifying that it cannot use the idle time left by other Modules.
It always returns \texttt{NIL} (meaning that the module has nothing
to schedule).
%----------------------------------------------------------------------------
\subsubsection{\texttt{PID PS\_level\_schedulerbackground(LEVEL l);}}
%----------------------------------------------------------------------------
This scheduler is used by the Module if the Module is registered
specifying that it can use the idle time left by other Modules. It
sets a flag in the \texttt{flags} field to remember that the Module
is scheduling a task in background, and then it returns the first
task in the wait queue. The scheduler is disabled when the task
scheduled by the server blocks on a synchronization (see the
function \texttt{PS\_public\_block}).
%----------------------------------------------------------------------------
\subsubsection{\texttt{int PS\_level\_guaranteeRM(LEVEL l, bandwidth\_t *freebandwidth);}}
%----------------------------------------------------------------------------
%----------------------------------------------------------------------------
\subsubsection{\texttt{int PS\_level\_guaranteeEDF(LEVEL l, bandwidth\_t *freebandwidth);}}
%----------------------------------------------------------------------------
These two functions implement the internal guarantee of the Module,
simply decrementing the available bandwidth when possible. The
difference between the two functions is that the first one allows
scheduling up to a used bandwidth of 0.69 (the $\ln 2$ bound of
RM), while the second one allows a scheduling limit of 1. The
structure of these functions is similar to that of
\texttt{EDF\_public\_guarantee}, except that the guarantee is done
on the server and not on each task handled by the server.
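The two variants can be sketched as follows; \texttt{MAX\_BANDWIDTH}
(the fixed-point value representing a utilization of 1) and the
return convention are assumptions:
\begin{verbatim}
/* Sketch of the two guarantee variants. */
static int PS_level_guaranteeEDF(LEVEL l, bandwidth_t *freebandwidth)
{
  PS_level_des *lev = (PS_level_des *)level_table[l];

  if (*freebandwidth >= lev->U) {      /* schedulable up to U = 1 */
    *freebandwidth -= lev->U;
    return 1;
  }
  return 0;
}

static int PS_level_guaranteeRM(LEVEL l, bandwidth_t *freebandwidth)
{
  PS_level_des *lev = (PS_level_des *)level_table[l];
  /* under RM about 0.31 of the total bandwidth must stay unused,
     so that the total utilization remains below ln 2 (about 0.69) */
  bandwidth_t unusable = (bandwidth_t)(0.31 * MAX_BANDWIDTH);

  if (*freebandwidth >= lev->U + unusable) {
    *freebandwidth -= lev->U;
    return 1;
  }
  return 0;
}
\end{verbatim}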
%----------------------------------------------------------------------------
\subsection{Task Calls}
%----------------------------------------------------------------------------
%----------------------------------------------------------------------------
\subsubsection{\texttt{int PS\_public\_create(LEVEL l, PID p, TASK\_MODEL *m);}}
%----------------------------------------------------------------------------
This function checks whether the Task Model passed can be handled
by the Module. The Module simply handles all the Soft Task Models
that specify a periodicity equal to \texttt{APERIODIC}, ignoring
any additional parameters. This function sets the \texttt{nact}
field of the new task according to the requirements of the task
model. The function does not set the \texttt{CONTROL\_CAP} flag.
%----------------------------------------------------------------------------
\subsubsection{\texttt{void PS\_public\_detach(LEVEL l, PID p);}}
%----------------------------------------------------------------------------
This function does nothing, because no dynamic data structures were
allocated in the \texttt{PS\_public\_create} function, and because
the tasks served by the Module do not require an individual
guarantee (the guarantee is done for the whole Server).
%----------------------------------------------------------------------------
\subsubsection{\texttt{int PS\_public\_eligible(LEVEL l, PID p);}}
%----------------------------------------------------------------------------
The function always returns 0 because the PS tasks are always
eligible.
%----------------------------------------------------------------------------
\subsubsection{\texttt{void PS\_public\_dispatch(LEVEL l, PID p, int nostop);}}
%----------------------------------------------------------------------------
A task can be dispatched if it is in the wait queue or if it is
inserted into the Master Module.
In the first case the task is extracted from the
\texttt{wait}
queue, in the latter case the private function
\texttt{private\_dispatch} is called to signal to the Master
Module to dispatch the task.
Then, a capacity event is posted, but only if the parameter
\texttt{nostop} is \texttt{0} (the parameter is 0 when there is no
substitution due to the shadow chain mechanism). The capacity event
is created using the \texttt{cap\_timer} field; in this way the
event will be removed by the Generic Kernel.
%----------------------------------------------------------------------------
\subsubsection{\texttt{void PS\_public\_epilogue(LEVEL l, PID p);}}
%----------------------------------------------------------------------------
This function implements the suspension of the running task due to
a preemption or to a capacity exhaustion.
First, the function updates the server capacity using the
\texttt{cap\_lasttime} and \texttt{schedule\_time} variables. If
the server capacity is exhausted, the task inserted into the Master
Module is removed with a \texttt{private\_extract} (unless the task
is being scheduled in background), and then it is inserted into the
wait queue.
If the server capacity is not exhausted, there are two cases: if
the Module is scheduling a task in background, the task is inserted
at the head of the wait queue; otherwise the
\texttt{private\_epilogue} function is called to signal a task
preemption to the Master Module.
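The epilogue can be sketched as follows; the time-accounting helpers
(\texttt{SUBTIMESPEC}, \texttt{TIMESPEC2USEC}), the globals
\texttt{schedule\_time} and \texttt{cap\_lasttime}, the queue helpers
\texttt{iq\_insertlast}/\texttt{iq\_insertfirst}, the background flag
name \texttt{PS\_BACKGROUND} and the calling convention of the
private calls on the Master Module are assumptions:
\begin{verbatim}
/* Sketch of the epilogue, following the description above. */
static void PS_public_epilogue(LEVEL l, PID p)
{
  PS_level_des *lev = (PS_level_des *)level_table[l];
  struct timespec ty;

  /* charge the time consumed since the last dispatch to the server */
  SUBTIMESPEC(&schedule_time, &cap_lasttime, &ty);
  lev->availCs -= TIMESPEC2USEC(&ty);

  if (lev->availCs <= 0) {
    /* capacity exhausted: remove the task from the Master Module
       (unless it was running in background) and make it wait      */
    if (!(lev->flags & PS_BACKGROUND))
      level_table[lev->scheduling_level]->
        private_extract(lev->scheduling_level, p);
    iq_insertlast(p, &lev->wait);
    proc_table[p].status = PS_WAIT;
  } else {
    if (lev->flags & PS_BACKGROUND)
      iq_insertfirst(p, &lev->wait);   /* preempted in background   */
    else
      level_table[lev->scheduling_level]->
        private_epilogue(lev->scheduling_level, p);  /* preemption  */
  }
}
\end{verbatim}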
%----------------------------------------------------------------------------
\subsubsection{\texttt{void PS\_public\_activate(LEVEL l, PID p);}}
%----------------------------------------------------------------------------
This function activates a task. If the task passed as parameter is
the task currently inserted in the Master Module, or if the task is
already inserted into the \texttt{wait} queue, the activation is
saved and the \texttt{nact} field is incremented (but only if the
task specified, in the task model passed at its creation, that
pending activations should be saved).
Otherwise, the normal activation actions are performed:
\begin{itemize}
\item the field \texttt{request\_time} of the task descriptor is
updated;
\item the task is activated using the internal
\texttt{PS\_activation} function if there are no other tasks in the
Module and the server capacity is positive; otherwise the task is
queued into the wait queue.
\end{itemize}
%----------------------------------------------------------------------------
\subsubsection{\texttt{void PS\_public\_block(LEVEL l, PID p);}}
%----------------------------------------------------------------------------
The function should implement the synchronization blocking. It
should block the whole server, calling the \texttt{private\_extract}
guest call on the Master Module if necessary and setting the
\texttt{PS\_BACKGROUND\_BLOCK} flag to disable any background
scheduling.
%----------------------------------------------------------------------------
\subsubsection{\texttt{void PS\_public\_unblock(LEVEL l, PID p);}}
%----------------------------------------------------------------------------
The function should reactivate a blocked task. The task
reactivation consists of inserting it into the \texttt{wait} queue
and resetting the flags set in the \texttt{public\_block} function.
%----------------------------------------------------------------------------
\subsubsection{\texttt{int PS\_public\_endcycle(LEVEL l, PID p, void *m);}}
%----------------------------------------------------------------------------
The function implements the \texttt{task\_endcycle} behavior and
performs the following steps:
\begin{itemize}
\item The server capacity is updated, or the background scheduling
feature is disabled if it was active;
\item The task is extracted from the Master Module through a call
to the private function \texttt{private\_extract}, or otherwise
from the \texttt{wait} queue;
\item If the task has some pending activations, it is inserted at
the end of the \texttt{wait} queue; otherwise the task state is set
to \texttt{SLEEP};
\item If possible, a new task is extracted from the head of the
\texttt{wait} queue, through a call to the function
\texttt{PS\_activation}.
\end{itemize}
%----------------------------------------------------------------------------
\subsubsection{\texttt{void PS\_public\_end(LEVEL l, PID p);}}
%----------------------------------------------------------------------------
The function directly inserts the task into the \texttt{freedesc}
queue.
%----------------------------------------------------------------------------
\subsection{Private functions}
%----------------------------------------------------------------------------
The private functions are not defined (the defaults are used).
%----------------------------------------------------------------------------
\subsection{Functions exported by the Module}
%----------------------------------------------------------------------------
Finally, the PS Module exports two functions that can be used by
the user.
The first function is the registration function, used to register
the Module in the system and to initialize its internal data
structures. This function also registers an initialization function
that posts the first server deadline event. This event cannot be
posted in the registration function itself, because the
registration function is called before the OS Lib is initialized.
The second function,
\texttt{PS\_usedbandwidth}, can be used to
obtain the allocated bandwidth of the Module, and it is similar to
the function \texttt{EDF\_usedbandwidth}.
%----------------------------------------------------------------------------
\section{PI (Priority Inheritance) Resource Module}
%----------------------------------------------------------------------------
This Section describes an implementation of the Priority
Inheritance (PI) shared resource access protocol \cite{Sha90}; it
has the following characteristics:
\begin{itemize}
\item support for the Generic Kernel mutex interface;
\item use of
the shadow mechanism provided by the Generic Kernel;
\item
independence from the data structures used internally by the
Scheduling Modules;
\item possibility of a static initialization
of the used mutexes.
\end{itemize}
The Module is contained in \texttt{modules/pi}.
%----------------------------------------------------------------------------
\subsection{Used approach}
%----------------------------------------------------------------------------
The key ideas of the implementation are the following:
\begin{itemize}
\item when a task enters a critical section by locking a mutex, the
Module records which task owns the mutex, because this information
is needed when other tasks try to lock the same mutex, and because
only the owner can unlock it;
\item the Module records the number of mutexes owned by each task,
to check that a task does not die while still holding some locked
mutexes;
\item when a task tries to lock a busy mutex, its shadow pointer is
set to the blocking task (the owner of the mutex);
\item when a mutex is unlocked, all the tasks blocked on it are
freed, so that all of them can try again to acquire the mutex (the
mutex will be locked by the first blocked task that is scheduled,
usually the highest priority task among those that were blocked).
\end{itemize}
%----------------------------------------------------------------------------
\subsection{Private data structures}
%----------------------------------------------------------------------------
The PI Module defines the structure \texttt{mutex\_resource\_des},
which implements the interface exported by the Resource Modules.
The private data structures added by the Module to the interface
are the following:
\begin{description}
\item [\texttt{nlocked}]this array stores the number of mutexes
that each task currently locks;
\item [\texttt{blocked}]this array is used to track the tasks
blocked on a mutex. Each PI mutex has a pointer to the first
blocked task; the other tasks are queued in this structure (see
Figure~\ref{Examples_PI_blocked}). The data structure can be
allocated locally to the Module because a task can be blocked on
only one mutex.
\begin{figure}
\begin{center}
\includegraphics[width=1.0\columnwidth]{images/example_PI_blocked.eps}
\end{center}
\caption{Use of the \texttt{blocked} array. The example describes a structure
\texttt{mutex\_t} initialized with the Priority Inheritance protocol. The field
\texttt{firstblocked} is the first element of the blocked task queue on a
specific mutex.}
\label{Examples_PI_blocked}
\end{figure}
\end{description}
Each mutex handled by the Priority Inheritance protocol uses a
dynamically allocated internal data structure called
\texttt{PI\_mutex\_t}, that has the following fields:
\begin{description}
\item [\texttt{owner}]When the mutex is free this field is
\texttt{NIL}; otherwise it is the PID of the task that owns the
mutex;
\item [\texttt{nblocked}]This is the number of tasks currently
blocked on the mutex;
\item [\texttt{firstblocked}]When the field \texttt{nblocked} is
different from 0, this field is the first task blocked on the mutex
(the other tasks can be found by following the blocked list).
\end{description}
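A sketch of these data structures follows; the exact layout, the
name of the extended descriptor and the use of the standard
\texttt{mutex\_resource\_des} interface as its first field are
assumptions:
\begin{verbatim}
/* Sketch of the PI private data structures (layout is indicative). */
typedef struct {
  mutex_resource_des m;   /* interface exported by the Resource Modules */
  int nlocked[MAX_PROC];  /* mutexes currently locked by each task      */
  PID blocked[MAX_PROC];  /* "next" links of the per-mutex blocked lists*/
} PI_mutex_resource_des;

/* per-mutex descriptor, dynamically allocated by PI_init() */
typedef struct {
  PID owner;              /* NIL when the mutex is free                 */
  int nblocked;           /* tasks currently blocked on this mutex      */
  PID firstblocked;       /* head of the blocked queue (see the figure) */
} PI_mutex_t;
\end{verbatim}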
Finally, to initialize a PI mutex, a correct attribute must be
passed to \texttt{mutex\_init}. That attribute must be of type
\texttt{PI\_mutexattr\_t}, and it does not add any other parameter
to the default attribute.
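A hypothetical initialization sketch is shown below; the macro
\texttt{PI\_MUTEXATTR\_DEFAULT} used to obtain a default attribute
is an assumed name, since the text only states that the attribute
must be of type \texttt{PI\_mutexattr\_t}:
\begin{verbatim}
/* Hypothetical usage sketch (the attribute initializer is assumed). */
mutex_t my_mutex;

void init_my_mutex(void)
{
  PI_mutexattr_t a = PI_MUTEXATTR_DEFAULT;

  mutex_init(&my_mutex, (mutexattr_t *)&a);
}
\end{verbatim}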
%----------------------------------------------------------------------------
\subsection{Internal and Interface Functions}
%----------------------------------------------------------------------------
%----------------------------------------------------------------------------
\subsubsection{\texttt{int PI\_res\_register(RLEVEL l, PID p, RES\_MODEL *r);}}
%----------------------------------------------------------------------------
This function always returns -1, because it will never be called by
the Generic Kernel (the Module does not accept any Resource Model).
%----------------------------------------------------------------------------
\subsubsection{\texttt{void PI\_res\_detach(RLEVEL l, PID p);}}
%----------------------------------------------------------------------------
This function simply checks that the task being terminated does not
hold any mutexes. If it does, an exception is raised. Such a
situation is very dangerous because, when a task dies while holding
a mutex, the shadow data structures are left inconsistent, and this
will probably cause a system crash.
%----------------------------------------------------------------------------
\subsubsection{\texttt{int PI\_init(RLEVEL l, mutex\_t *m, const mutexattr\_t *a);}}
%----------------------------------------------------------------------------
This function initializes a mutex to be used with the PI protocol.
A structure of type \texttt{PI\_mutex\_t} is allocated and all its
fields are initialized.
%----------------------------------------------------------------------------
\subsubsection{\texttt{int PI\_destroy(RLEVEL l, mutex\_t *m);}}
%----------------------------------------------------------------------------
This function should destroy a mutex. The mutex has to be
correctly initialized and it must be free (not locked by a task).
%----------------------------------------------------------------------------
\subsubsection{\texttt{int PI\_lock(RLEVEL l, mutex\_t *m);}}
%----------------------------------------------------------------------------
This function should implement a mutex lock. First, a check is done
to see whether the mutex was initialized statically. In that case,
the initialization of the data structures is completed.
At this point, a check is done to see whether the task already owns
the mutex. If not, a loop is entered: in its body the shadow field
is set and the system is rescheduled. The loop is needed because,
when the mutex is unlocked, all the blocked tasks are woken up to
compete again for the mutex.
Finally, when the mutex is found free, it is locked.
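The core of the locking loop can be sketched as follows; the global
\texttt{exec\_shadow} (the running task), the \texttt{opt} field of
\texttt{mutex\_t} used to reach the per-mutex data, the
\texttt{shadow} field of \texttt{proc\_table} and the rescheduling
call are assumptions about the Generic Kernel API:
\begin{verbatim}
/* Sketch of the core of PI_lock (static-init and ownership checks,
   and the handling of the blocked list, are omitted). */
static int PI_lock(RLEVEL l, mutex_t *m)
{
  PI_mutex_t *pm = (PI_mutex_t *)m->opt;  /* assumed per-mutex data  */
  PID me = exec_shadow;                   /* the running task        */

  while (pm->owner != NIL) {
    /* the mutex is busy: point our shadow to the owner so that it
       inherits our priority, add ourselves to the blocked list
       (nblocked, firstblocked/blocked[] update), then reschedule;
       at every unlock the blocked tasks are woken up and retry     */
    proc_table[me].shadow = pm->owner;
    /* ... enqueue "me" in the blocked list of pm ... */
    kern_schedule();                      /* assumed rescheduling    */
  }

  pm->owner = me;                         /* the mutex is now ours   */
  return 0;
}
\end{verbatim}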
%----------------------------------------------------------------------------
\subsubsection{\texttt{int PI\_trylock(RLEVEL l, mutex\_t *m);}}
%----------------------------------------------------------------------------
This function is similar to the previous one, except that the task
is never blocked if the mutex is busy.
%----------------------------------------------------------------------------
\subsubsection{\texttt{int PI\_unlock(RLEVEL l, mutex\_t *m);}}
%----------------------------------------------------------------------------
This function should free the mutex. First, a check is done to
verify that the task that unlocks the mutex really owns it. Then,
the mutex is unlocked and all the blocked tasks are woken up (their
shadow fields are reset to point to the blocked tasks themselves).
Finally, the system is rescheduled to see if a preemption should
occur (the Module does not know whether a preemption will occur or
not, because it does not know which Modules are registered!).
%----------------------------------------------------------------------------
\subsubsection{\texttt{void PI\_register\_module(void);}}
%----------------------------------------------------------------------------
This function registers the Module in the system. It is very
similar to the registration function of a Scheduling Module.