%----------------------------------------------------------------------------
\chapter{Kernel Overview}
%----------------------------------------------------------------------------

%----------------------------------------------------------------------------
\section{Scheduling Architecture}
%----------------------------------------------------------------------------

S.Ha.R.K. is a research kernel expressly designed to support the
implementation and testing of new scheduling algorithms, both for
the CPU and for shared resources. In order to achieve independence
between applications and scheduling algorithms (and between the
schedulers and the kernel), S.Ha.R.K. is based on a \emph{Generic
Kernel}, which does not implement any particular scheduling
algorithm, but postpones scheduling decisions to external
entities, the \emph{scheduling modules}. In a similar fashion, the
access to shared resources is coordinated by \emph{resource
modules}. External modules can implement periodic scheduling
algorithms, soft task management through real-time servers,
semaphore protocols, and resource management policies. The modules
implementing the most common algorithms (such as RM, EDF, Round
Robin, and so on) are already provided, and it is easy to develop
new modules. A scheme of the kernel architecture is depicted in
Figure \ref{oview_Fig_KernelArchitecture}.

\begin{figure}
\begin{center}
\includegraphics[width=8cm]{images/oview_Architecture.eps}
\end{center}
\caption{The S.Ha.R.K. Architecture.}
\label{oview_Fig_KernelArchitecture}
\end{figure}

The OS Lib layer implements an abstraction of a generic machine
capable of exporting some services, such as context switching,
time management, interrupt handling, and a subset of the run-time
C library. For detailed information about the OS Lib, see
\cite{Abe00-1,Abe00-2}.

The Generic Kernel provides the mechanisms used by the modules to
perform scheduling and resource management, thus allowing the
system to abstract from the specific algorithms that can be
implemented. The Generic Kernel simply provides the primitives
without specifying any algorithm, whose implementation resides in
external modules, configured at run-time with the support of the
\emph{Model Mapper} (see Section \ref{oview_Models}).

Another important component of the Generic Kernel is the Job
Execution Time (JET) estimator, which monitors the computation
time actually consumed by each job. This is a generic mechanism,
independent of the scheduling algorithms, that can be used for
statistical measurements, for enforcing temporal protection, or
for resource accounting (see Section \ref{Kernel_Jet}).
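
As a concrete illustration, the following sketch queries the JET
estimator for a task. This is a hedged example: the shape of
\texttt{jet\_getstat()} (total, maximum and current job time, plus
the number of completed jobs) and the \texttt{cprintf()} console
output routine are assumed here; the actual interface is described
in Section \ref{Kernel_Jet}.

\begin{verbatim}
/* Sketch of JET usage (assumed signatures: jet_getstat() as
   below, cprintf() from the OS Lib).  Prints the mean and
   maximum computation time consumed by the jobs of task p. */
void print_jet_stats(PID p)
{
  TIME sum, max, curr;      /* total, max and current job time */
  int  njobs;               /* number of completed jobs        */

  if (jet_getstat(p, &sum, &max, &njobs, &curr) != -1 && njobs > 0)
    cprintf("task %d: mean %d us, max %d us over %d jobs\n",
            p, (int)(sum / njobs), (int)max, njobs);
}
\end{verbatim}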

The API is exported through the \emph{Libraries}, which use the
Generic Kernel to support some common hardware devices (e.g.,
keyboard, sound cards, network cards, graphic cards) and provide a
compatibility layer with the POSIX Real-time Controller System
Profile \cite{POSIX1003.13}.

An Application can be considered as a set of cooperating tasks\footnote{In this
document, a process is a set of computations performed in a private address
space. A thread is a single execution flow in a process. Each thread is
identified by a unique ID, plus some specific parameters. The term Task is used
as a synonym for the term thread.} that share a common address space. There is
no memory protection implemented in the Kernel. Intertask communication is
performed using shared memory buffers accessed through some synchronization
mechanisms (such as mutexes, condition variables, semaphores, CABs, message
queues). Each task is characterized by a Task Model, some optional Resource
Models, and a body. The body of a task is simply a C function with the
prototype \texttt{void *body(void *arg)}.
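
For example, the body of a simple periodic task could be
structured as sketched below; \texttt{do\_work()} is a placeholder
for the application computation, and \texttt{task\_endcycle()} is
assumed to be the kernel primitive that ends the current job and
suspends the task until its next activation.

\begin{verbatim}
/* Sketch of a periodic task body: an ordinary C function with
   the prototype described above. */
void *periodic_body(void *arg)
{
  int id = (int)(long)arg;   /* argument given at creation time */

  for (;;) {
    do_work(id);             /* one job of the task   */
    task_endcycle();         /* wait for reactivation */
  }
  return NULL;               /* never reached */
}
\end{verbatim}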

An Application can use the following sets of functions:

\begin{itemize}
\item The functions exported by the OS Lib;
\item The Generic Kernel primitives;
\item Some Module-dependent functions;
\item Some functions exported by libraries, device drivers, or the standard C
library;
\item The library implementing the POSIX standard interface.
\end{itemize}

Each Module consists of a set of data and functions used for
implementing a specific algorithm, whose implementation is
independent from the other modules in the system, thus realizing a
trade-off between user-level and in-kernel schedulers. In this
way, many different module configurations are possible. For
example, a Polling Server can either work with a RM or an EDF
scheduling Module without any modification.

Currently, S.Ha.R.K. provides two basic kinds of modules:

\begin{itemize}
\item modules that implement scheduling algorithms and aperiodic service
policies (Scheduling Modules);
\item modules that manage shared (hardware or software) resources (Resource
Modules).
\end{itemize}

All resource access protocols, such as Priority Inheritance, are
implemented as a mutex module whose interface is derived from the
resource module interface. A POSIX mutex interface is also
provided on top of the implemented protocols.

Each type of Module provides a well defined interface to
communicate with the Generic Kernel (user programs do not directly
interact with the modules). The interface functions are called by
the Generic Kernel to implement the kernel primitives. When
modules need to interact with the hardware (for example, the
timer), they can use the service calls provided by the Generic
Kernel.

Finally, each Module has some unique identifiers to allow the
implementation of some consistency checks (for example, a module
that implements a Total Bandwidth Server cannot work with Rate
Monotonic).

%----------------------------------------------------------------------------
\section{QoS Specification}
%----------------------------------------------------------------------------

\label{oview_Models} One of the goals of S.Ha.R.K. is to allow the
user to easily implement and test novel scheduling algorithms. In
particular, the kernel has been designed with the following
objectives:

\begin{itemize}
\item achieve independence between the kernel mechanisms and the
scheduling policies for tasks and resource management;
\item configure the system at run-time by specifying the algorithms to
be used for task scheduling and resource access;
\item achieve independence between applications and scheduling algorithms.
\end{itemize}

These requirements are particularly useful for comparing the
performance of similar algorithms on the same application. In
fact, module independence allows the user to configure and test
applications without recompiling them, but simply by relinking them.

Independence between applications and scheduling algorithms is
achieved by introducing the concept of \emph{model}. Each task
asks the system to be scheduled according to a given Quality of
Service (QoS) specified by a model. In other words, a model is the
entity used by S.Ha.R.K. to separate the scheduling parameters
from the QoS parameters required by each task. In this way, the
kernel provides a common interface to isolate the task QoS
requirements from the real scheduler implementation.

Models are descriptions of the scheduling requirements expressed
by tasks. S.Ha.R.K. provides three different kinds of models:

\begin{description}
\item [Task~Models.]A task model expresses the QoS requirements of
a task for the CPU scheduling. Requirements are specified through
a set of parameters at task creation time. Some of the task
requirements are mandatory (e.g., the stack size of a task), while
others depend on the specific task model (e.g., a deadline). For
this reason, all task models are characterized by a general common
part, which can be extended by a model-dependent part. Usually,
the model-dependent part abstracts from a specific scheduling
algorithm (for instance, the concept of period or deadline is
independent of a specific algorithm like EDF or RM). The task
models have a function similar to the \texttt{pthread\_attr\_t}
structure defined in the POSIX standard.
\item [Resource~Models.]A resource model is used to define the QoS
parameters relative to a set of shared resources used by a task.
For example, the resource model can be used to specify the
semaphore protocol to be used for protecting critical sections
(e.g., Priority Inheritance, Priority Ceiling, or SRP). In other
cases, the resource model can be used to specify a hardware
resource scheduling algorithm (e.g., a File System Scheduling
Algorithm).
\item [Mutex~Models.]When a mutex semaphore is created, these
Models are used to specify the resource access protocol to be
used, in a way similar to that done with Task Models. The mutex
models have a function similar to the \texttt{pthread\_mutexattr\_t}
structure defined in the POSIX standard.
\end{description}
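
As a sketch of how a Mutex Model is used, the fragment below
creates a mutex managed by a Priority Inheritance module. The
\texttt{PI\_MUTEXATTR} type and the
\texttt{PI\_mutexattr\_default()} initializer are illustrative
names for the model and its default initialization, and
\texttt{mutex\_init()} is assumed to follow the POSIX-like
interface mentioned above.

\begin{verbatim}
/* Hedged sketch: creating a mutex with a Priority Inheritance
   mutex model.  PI_MUTEXATTR and PI_mutexattr_default() are
   illustrative names; the real identifiers depend on the PI
   module's header. */
mutex_t buffer_mutex;

void init_buffer_mutex(void)
{
  PI_MUTEXATTR a;            /* mutex model: PI protocol   */

  PI_mutexattr_default(a);   /* fill in default parameters */
  mutex_init(&buffer_mutex, (mutexattr_t *)&a);
}
\end{verbatim}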

Each task is characterized by a single mandatory QoS parameter,
the \emph{task criticality} (hard, soft, firm, non real-time, and
so on). This parameter belongs to the common part of the task
model, together with a \emph{model identifier} and some other
parameters, such as the stack size.

Each task model is implemented as a C structure, in which the
first field is the model identifier, the next fields are the
mandatory parameters, and the last field is a sequence of bytes
containing the model-dependent parameters, which only the specific
module can interpret. Resource models are completely generic, and
depend on the resource they describe: the only mandatory parameter
is the model identifier.
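
The layout described above can be sketched in C as follows; the
field names are illustrative, not the actual kernel definitions.

\begin{verbatim}
/* Sketch of a task model layout (illustrative field names). */
typedef struct {
  int  model_id;      /* model identifier (first field)          */
  int  criticality;   /* mandatory: hard, soft, firm, ...        */
  int  stack_size;    /* mandatory: task stack size              */
  char private[64];   /* model-dependent parameters: opaque to
                         the Generic Kernel, interpreted only by
                         the module that accepts the model       */
} TASK_MODEL_SKETCH;
\end{verbatim}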

Models are required to make the generic kernel independent of
the implemented scheduling algorithms: since the generic kernel
does not implement any algorithm, it does not know how to serve a
task but invokes a service request to scheduling entities realized
as external \emph{modules}. Hence, the generic kernel does not
interpret the models, but just passes them to the modules; each
module, by reading the common part of the model, can verify
whether the task can be served or not.

Using models, an application is able to specify the requested QoS
independently of the modules used in the system. For example, an
application that creates a task using a Hard Task Model can be
executed on an EDF, an RM, or a Deadline Monotonic Module.
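
For instance, a hard periodic task could be created as sketched
below. The \texttt{hard\_task\_*} macros and the
\texttt{task\_create()}/\texttt{task\_activate()} primitives
follow the style of the S.Ha.R.K. API, but the exact names and
parameters should be taken from the corresponding headers.

\begin{verbatim}
/* Hedged sketch: creating a hard periodic task through a Hard
   Task Model (times in microseconds). */
void create_hard_task(void)
{
  HARD_TASK_MODEL m;
  PID p;

  hard_task_default_model(m);   /* fill in the defaults       */
  hard_task_def_wcet(m, 2000);  /* worst-case execution time  */
  hard_task_def_mit(m, 10000);  /* period (min. interarrival) */

  p = task_create("mytask", periodic_body, (TASK_MODEL *)&m, NULL);
  if (p != NIL)
    task_activate(p);           /* start the periodic task    */
}
\end{verbatim}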

\begin{figure}
\begin{center}
\includegraphics[width=11cm]{images/oview_QOSMapper.eps}
\end{center}
\caption{The interaction between the Model Mapper and the QoS Mapper.}
\label{oview_Fig_QOSMapper}
\end{figure}

Task creation works as follows (see Figure
\ref{oview_Fig_QOSMapper}): when an application issues a request
to the kernel for creating a new task, it also sends the model
describing the requested QoS. A kernel component, namely the
\emph{model mapper}, passes the model to a module, selected
according to an internal policy, and the module checks whether it
can provide the requested QoS; if the selected module cannot serve
the task, the model mapper selects a different module. When a
module agrees to manage the task described by the specified
model, it converts the model's QoS parameters into the appropriate
scheduling parameters. Such a conversion is performed by a module
component, called the \emph{QoS Mapper}. For example, a hard
periodic task may have a model consisting of a period and a
worst-case execution time (WCET); when a task is created with that
model, the EDF module will convert such parameters into deadlines,
reactivation times, and so on. In general, a module can manage
only a subset of the models, and the set of models is not limited
by the kernel. This is possible because the kernel does not handle
the models, but simply passes them to the Model Mapper, which
selects a module and passes the model to that module. Currently,
the Model Mapper uses a simple strategy, according to which
modules are selected based on fixed priorities (see Section
\ref{oview_Module_Organization} for more details).
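
The fixed priority search performed at task creation can be
sketched as follows; \texttt{level\_table[]},
\texttt{registered\_levels}, \texttt{proc\_table[]} and the
convention that \texttt{public\_create()} returns $-1$ when a
module cannot serve the model are illustrative assumptions, not
the actual kernel definitions.

\begin{verbatim}
/* Sketch of the Model Mapper's fixed priority search. */
int model_mapper_assign(PID p, TASK_MODEL *m)
{
  int l;

  for (l = 0; l < registered_levels; l++)
    if (level_table[l]->public_create(l, p, m) != -1) {
      proc_table[p].task_level = l;   /* the module now owns p */
      return l;
    }
  return -1;   /* no registered module accepts the model */
}
\end{verbatim}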

%----------------------------------------------------------------------------
\section{Scheduling Modules}
%----------------------------------------------------------------------------

Scheduling Modules are used by the Generic Kernel to schedule
tasks or serve aperiodic requests using an aperiodic server. In
general, the implementation of a scheduling algorithm should be
independent of resource access protocols, and should handle only
the scheduling behavior. Nevertheless, the implementation of an
aperiodic server relies on the presence of another scheduling
module, called the Master Module (for example, a Deferrable Server
can be used if the base scheduling algorithm is RM or EDF, but not
Round Robin; see Figure \ref{oview_Fig_MasterModule}). Such a
design choice reflects the traditional approach followed in the
literature, where most aperiodic servers insert their tasks
directly into the scheduling queues of the base scheduling
algorithm. Again, our modular approach masks this mechanism with
the task models: an aperiodic server must use a task model to
insert its tasks into the Master Module.

\begin{figure}
\begin{center}
\includegraphics[width=4cm]{images/oview_Server_e_Master.eps}
\end{center}
\caption{An aperiodic Server that inserts its tasks into a Master Module.}
\label{oview_Fig_MasterModule}
\end{figure}

The Model Mapper distributes the tasks to the registered modules
according to the task models the set of modules can handle. For
this reason, the task descriptor includes an additional field
(\texttt{task\_level}), which points to the module that is
handling the task.

When the Generic Kernel has to make a scheduling decision, it
asks the modules for the task to schedule, according to fixed
priorities: first, it asks the highest priority module for a
scheduling decision; then, if that module does not have any task
ready to run, it asks the next priority module, and so on. In
this way, each module manages its private ready task list, and
the Generic Kernel schedules the first task in the queue of the
highest priority non-empty module.
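
This decision procedure can be sketched as follows, using the
same illustrative names introduced above; \texttt{NIL} denotes
``no task''.

\begin{verbatim}
/* Sketch of the Generic Kernel scheduling step: ask each level's
   public_scheduler in priority order; the first level that
   returns a task (not NIL) wins. */
PID generic_scheduler(void)
{
  PID p = NIL;
  int l;

  for (l = 0; l < registered_levels && p == NIL; l++)
    p = level_table[l]->public_scheduler(l);

  return p;   /* task of the highest priority non-empty level */
}
\end{verbatim}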

A Scheduling Module must include all the data structures it
needs. It can be thought of as an object in an object-oriented
language; this implies that many instances of a module can be
created (for example, many TBS servers with different bandwidths).

%----------------------------------------------------------------------------
\subsection{Module Organization}
\label{oview_Module_Organization}
%----------------------------------------------------------------------------

The Scheduling Modules are organized into levels, one Module for
each level. These levels can be thought of as priority scheduling
levels (index 0 has the maximum priority).

Modules are selected for scheduling by the Model Mapper according
to a fixed priority strategy. When a task is given to a module, the module
\emph{owns} the task. The \texttt{task\_level} field of the
generic task descriptor is used to save the level index of the
Module that handles the task (see Section
\ref{KernSupport_Task_descriptor}).

Each Scheduling Module handles all the events that belong to its
owned tasks. A task owned by a module is scheduled in background
with respect to the tasks owned by a Module with a lower level
index. For example, in Figure \ref{oview_Levels}, the tasks
\emph{scheduled}%
\footnote{Note that the word \emph{scheduled} is emphasized: the
tasks \emph{scheduled} by a Module are the tasks owned by the
Module itself and the tasks that other modules have inserted in
it.%
} by the XXX Scheduling Module are run in foreground; the tasks
scheduled by the WWW Module run in background with respect to
those of the XXX and YYY Modules, and in foreground with respect
to the tasks scheduled by the ZZZ Module.

\begin{figure}
\begin{center}
\includegraphics[width=6cm]{images/oview_Levels.eps}
\end{center}
\caption{Kernel Level Organization.}
\label{oview_Levels}
\end{figure}

%----------------------------------------------------------------------------
\subsection{Sample scheduling architectures}
%----------------------------------------------------------------------------

The approach followed in the organization of the Scheduling
Modules is very versatile and allows the implementation of many
different Kernel configurations. In the following, some typical
scheduling architectures are described.

\begin{description}
\item [Fixed~Priority~Scheme.]This is the most typical approach
used in real-time systems (see Figure
\ref{oview_Power_FixedPriority}). In this example, hard tasks
(periodic and sporadic) are served at the highest priority,
whereas aperiodic tasks are handled by a Sporadic Server. At the
lowest priority level, non-realtime tasks can be handled by a
Round Robin scheduling policy.

\begin{figure}
\begin{center}
\includegraphics[width=7cm]{images/oview_Power_FixedPriority.eps}
\end{center}
\caption{A fixed priority Module configuration.}
\label{oview_Power_FixedPriority}
\end{figure}

\item [Dual~Priority~Scheme.]This configuration (depicted in
Figure \ref{oview_Power_DualPriority}) combines modules that were
not developed to work together at implementation time. In this
example, the highest priority tasks are scheduled by RM with a
Deferrable Server linked to it. Other tasks are scheduled at
medium priority using EDF with a Total Bandwidth Server. At the
lowest priority level, non-realtime tasks can be handled by a
Round Robin scheduling scheme. This configuration can be used to
reduce the jitter of some important tasks \cite{Dav95}.

\begin{figure}
\begin{center}
\includegraphics[width=7cm]{images/oview_Power_DualPriority.eps}
\end{center}
\caption{A Dual Priority Module configuration. Note that the TBS Module is put
at the lowest priority level to prevent the TBS algorithm from using the
background time left by other modules.}
\label{oview_Power_DualPriority}
\end{figure}

\item [Dynamic~Multiserver.]This example (depicted in Figure
\ref{oview_Power_Dynamic}) shows how to create a complex
scheduling architecture. In the example, some Hard tasks are
scheduled together with a set of aperiodic tasks, each one handled
by a different server. Note that the EDF Module is designed to
accept tasks from a generic Module, independently of the algorithm
implemented by that Module. Note also that the system contains
many instances of a single module, and each instance can only
manage the tasks that it owns.

\begin{figure}
\begin{center}
\includegraphics[width=8cm]{images/oview_Power_Dynamic.eps}
\end{center}
\caption{Dynamic Module Configuration.}
\label{oview_Power_Dynamic}
\end{figure}

\item [Timeline~Scheduling.]The example illustrated in Figure
\ref{oview_Power_Timeline} shows a Timeline scheduler integrated
with a fixed priority algorithm, such as RM. Note that S.Ha.R.K.
does not force the developer to use classic approaches, like
priority-ordered task queues. The Generic Kernel does not impose
any restrictions on developers.

\begin{figure}
\begin{center}
\includegraphics[width=7cm]{images/oview_Power_Timeline.eps}
\end{center}
\caption{A hybrid Timeline-RM approach.}
\label{oview_Power_Timeline}
\end{figure}

\end{description}

%----------------------------------------------------------------------------
\subsection{Module Interface}
\label{oview_Module_Interface}
%----------------------------------------------------------------------------

The interface functions provided by a scheduling module can be
grouped in two classes: public and private functions. In general,
a scheduling module has an interface that implements a specific
behavior for each event in the system generated by a task.

In the following paragraphs, the two classes of functions are
explained in a general way. Each function will then be described
in detail in Chapter \ref{CapSchedulingModules}.

%----------------------------------------------------------------------------
\subsection{Public Functions}
%----------------------------------------------------------------------------

The public functions are those functions that are directly called
by the Generic Kernel to implement the behavior of the primitives.
Some of the functions are directly related to the life of a single
task (e.g., task creation, task end), whereas other functions are
related to the module as a whole (the scheduler, and the
acceptance test).
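
A minimal sketch of the public part of a Scheduling Module, seen
as a table of function pointers, is shown below; the real level
descriptor contains more entries, and the exact signatures are
given in Chapter \ref{CapSchedulingModules}.

\begin{verbatim}
/* Sketch of the public part of a Scheduling Module interface
   (one instance per registered level; illustrative signatures). */
typedef struct level_des {
  PID  (*public_scheduler)(LEVEL l);         /* choose a task */
  int  (*public_create)(LEVEL l, PID p,
                        TASK_MODEL *m);      /* task creation */
  void (*public_dispatch)(LEVEL l, PID p);   /* give the CPU  */
  void (*public_epilogue)(LEVEL l, PID p);   /* preemption    */
  void (*public_end)(LEVEL l, PID p);        /* task end      */
  /* ... private functions and module data follow ... */
} level_des;
\end{verbatim}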

%----------------------------------------------------------------------------
\subsection{Private Functions}
%----------------------------------------------------------------------------

On the other hand, a module can export an interface to the public
part of the same or of another module. For example, an EDF Module
can export an interface (smaller than the Public Functions) that
allows a generic aperiodic server to insert tasks in the EDF ready
queue.

That is, Private Functions are called ONLY by other Public and
Private Functions. They are NEVER called by the Generic Kernel.

%----------------------------------------------------------------------------
\subsection{An Example}
\label{oview_Example}
%----------------------------------------------------------------------------

In this section, an example is presented to show the use of the
various functions of a Scheduling Module interface. The interface
will be described in detail in Chapter
\ref{CapSchedulingModules}.

\begin{figure}
\begin{center}
\includegraphics[width=8cm]{images/oview_LevelExample.eps}
\end{center}
\caption{Configuration of the example in Section \ref{oview_Example}.}
\label{oview_Fig_Example}
\end{figure}

The example (see Figure \ref{oview_Fig_Example}) considers a
configuration made using two scheduling modules, registered in the
first two levels:

\begin{itemize}
\item at Level 0 there is a Module that implements a generic
scheduling algorithm;
\item at Level 1 there is a Module that implements a generic
aperiodic server (which inserts its tasks into the first Module).
\end{itemize}
Then, we consider two tasks in the system:

\begin{itemize}
\item Task A is a task created in the Module that implements the
scheduling algorithm (registered at level 0); therefore its
\texttt{task\_level} = 0.
\item Task B is a task created in the Module that implements the
aperiodic server (registered at level 1); therefore its
\texttt{task\_level} = 1. Moreover, Task B is inserted into level
0 using level 0's private functions.
\end{itemize}
Both tasks are scheduled by the level 0 public\_scheduler: Task A
because it was created at that level, and Task B because it was
inserted into level 0 through level 0's private functions.

When the scheduling procedure is called, the Generic Kernel
scheduler will call the public\_scheduler, starting from that of
the Module registered at level 0. In this case, the level 0
public\_scheduler will choose which of the two tasks to schedule.
If Task A is selected, the Generic Kernel will call the
public\_dispatch of level 0 (because Task A's
\texttt{task\_level} is 0). If Task B is selected, the Generic
Kernel will call the public\_dispatch of level 1 (because Task
B's \texttt{task\_level} is 1). To handle the public\_dispatch
event, that function will in turn call the private\_dispatch of
level 0.
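
In terms of the sketches above, the dispatch step reduces to an
indirect call through the owner module (again with illustrative
names):

\begin{verbatim}
/* Sketch of the dispatch step of the example: the kernel
   dispatches the selected task through the module recorded in
   its task_level field. */
void generic_dispatch(PID p)
{
  LEVEL l = proc_table[p].task_level;   /* owner module */

  level_table[l]->public_dispatch(l, p);
}
\end{verbatim}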


%----------------------------------------------------------------------------
\section{Resource Modules}
\label{oview_Moduli_GestioneRisorse}
%----------------------------------------------------------------------------

Resource Modules are normally used to implement components that
do not directly interact with task scheduling, but need some
information that has to be provided at task creation and
termination time.

Such Modules are, for example, those that implement shared
resource access protocols (they require some parameters like the
system ceiling, the set of mutexes used by a task, etc.), or the
Modules that implement a file system that supports the
specification of a disk bandwidth to be guaranteed to each task.

Like Scheduling Modules, Resource Modules are organized into
levels, one module for each level. The level number influences
only the order in which events are notified to the modules.

The only events that can be notified to a Resource Module are the
creation and the termination of a task. At creation time an
application can specify one or more Resource Models, which will be
handled by the modules registered in the Kernel.

Note that the Resource Module interface is not complete, because
in general a Module will need a larger interface, which depends on
the resource that the module itself handles. Usually, a Module
extends the interface used by the Generic Kernel by adding a set
of new functions, in a way similar to that used in an
object-oriented language when a class inherits from a base
class.

%----------------------------------------------------------------------------
\section{Shared Resource Access Protocols}
\label{oview_Shadows}
%----------------------------------------------------------------------------

S.Ha.R.K. is based on a shared memory programming paradigm, so
communication among tasks is performed by accessing shared
buffers. In this case, tasks that concurrently access the same
shared resource must be synchronized through \emph{mutual
exclusion}: real-time theory \cite{Sha90} teaches that mutual
exclusion through classical semaphores is prone to \emph{priority
inversion}. In order to avoid or limit priority inversion,
suitable shared resource access protocols must be used.

As for scheduling, S.Ha.R.K. achieves modularity also in the
implementation of shared resource access protocols. Resource
modules are used to make resource protocols modular and almost
independent of the scheduling policy and of the other resource
protocols. Each resource module exports a common interface,
similar to the one provided by POSIX for mutexes, and implements
a specific resource access protocol. A task may also request the
use of a specific protocol through a resource model.

Some protocols (like Priority Inheritance or Priority Ceiling)
directly interact with the scheduler (since a low-priority task
can inherit the priority of a high-priority task), making the
protocol dependent on the particular scheduling algorithm (see
Figure \ref{oview_Fig_Idea1}).

\begin{figure}
\begin{center}
\includegraphics[width=5cm]{images/oview_Fig_Idea1.eps}
\end{center}
\caption{Priority inheritance implemented with an out-of-order queue insertion:
(1) Ta blocks when it tries to access a resource; (2) Ta goes in the blocked
queue; (3) Tb takes the position of the high priority task.}
\label{oview_Fig_Idea1}
\end{figure}

Although a solution based on a direct interaction between the
scheduler and the resource protocol is efficient in terms of
runtime overhead, it limits the full modularity of the kernel,
preventing the substitution of a scheduling algorithm with another
one handling the same task models (for example, Rate Monotonic
could be replaced by the more general Deadline Monotonic
algorithm).

To achieve full modularity, the S.Ha.R.K. Generic Kernel supports
a generic priority inheritance mechanism independent of the
scheduling modules. Such a mechanism is based on the concept of
\emph{shadow tasks}. A shadow task is a task that is scheduled in
place of the task selected by the scheduler. When a task is
blocked by the protocol, it is kept in the ready queue, and a
shadow task is bound to it; when the blocked task becomes the
first task in the ready queue, the shadow task bound to it is
scheduled instead. In this way, the shadow task executes as if it
{}``inherited'' the priority of the blocked task, but no real
inheritance takes place, thus decoupling the implementation of the
scheduling algorithm from that of the shared resource access
protocol (see Figure \ref{oview_Fig_Idea2}).

\begin{figure}
\begin{center}
\includegraphics[width=5cm]{images/oview_Fig_Idea2.eps}
\end{center}
\caption{Priority Inheritance implemented through shadows: (1) Ta blocks when it
tries to access a resource; (2) Ta indicates a shadow task and remains in the
ready queue; (3) Tb is scheduled in place of Ta.}
\label{oview_Fig_Idea2}
\end{figure}

To implement this solution, a new field \texttt{shadow} is added
to the generic part of the task descriptor. This field points to
the shadow task (see Figure \ref{oview_Fig_Shadow_Significato}).

\begin{figure}
\begin{center}
\includegraphics[width=5cm]{images/oview_Fig_Shadow1.eps}
\end{center}
\caption{Meaning of the Task Descriptor's \texttt{shadow} field.}
\label{oview_Fig_Shadow_Significato}
\end{figure}

Initially, the shadow field is equal to the task ID (no
substitution; see Figure \ref{oview_Fig_Shadow_Typical}).

\begin{figure}
\begin{center}
\includegraphics[width=2cm]{images/oview_Fig_Shadow2.eps}
\end{center}
\caption{Typical scenario with no substitution.}
\label{oview_Fig_Shadow_Typical}
\end{figure}

When a task blocks on a shared resource, its shadow field is set
to the task ID of the task holding that resource (see Figure
\ref{oview_Fig_Shadow_Blocking}).

\begin{figure}
\begin{center}
\includegraphics[width=5cm]{images/oview_Fig_Shadow3.eps}
\end{center}
\caption{In this scenario a blocked task waits for a resource; the blocking task
inherits its priority.}
\label{oview_Fig_Shadow_Blocking}
\end{figure}

In general, a graph of blocked tasks can grow from a blocking
task (see Figure \ref{oview_Fig_Shadow_DAG}).

\begin{figure}
\begin{center}
\includegraphics[width=5cm]{images/oview_Fig_Shadow4.eps}
\end{center}
\caption{Using the shadow mechanism, a graph can grow...}
\label{oview_Fig_Shadow_DAG}
\end{figure}

In this way, when the blocked task is selected, the blocking
(shadow) task is scheduled in its place, thus allowing the
scheduler to abstract from the resource protocol. This approach
also has the benefit of allowing a classical deadlock detection
strategy: cycles have to be searched for in the shadow graph
whenever a \texttt{shadow} field is set.
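
Both operations can be sketched in a few lines;
\texttt{proc\_table[]} is the illustrative task descriptor table
used in the previous sketches, and \texttt{shadow} is the
descriptor field described above (a task with no substitution
points to itself).

\begin{verbatim}
/* resolve_shadow(): follow the shadow chain from the task
   selected by the scheduler to the task that must actually run.
   shadow_creates_cycle(): deadlock test performed before setting
   proc_table[blocked].shadow = blocking; a cycle in the shadow
   graph means deadlock. */
PID resolve_shadow(PID selected)
{
  PID p = selected;

  while (proc_table[p].shadow != p)   /* follow substitutions */
    p = proc_table[p].shadow;
  return p;
}

int shadow_creates_cycle(PID blocked, PID blocking)
{
  PID p = blocking;

  while (proc_table[p].shadow != p) {
    if (p == blocked) return 1;       /* cycle: deadlock */
    p = proc_table[p].shadow;
  }
  return p == blocked;
}
\end{verbatim}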

Using this approach, a large number of shared resource protocols
can be implemented independently of the scheduling
implementation. The classical approach, however, can also be
pursued. In addition to this method, a classical synchronization
mechanism is available (see the User Manual for information about
the semaphore interfaces).

%----------------------------------------------------------------------------
\section{The Generic Kernel}
%----------------------------------------------------------------------------

The \textit{Generic Kernel} implements the primitives that form
the interface with the Libraries and the User Applications.

The term Generic is used because the kernel is developed by
abstracting from the implementation of a particular scheduling
algorithm. This means that it does not provide the concept of
priority, deadline, task queues, and so on. All the peculiarities
of an algorithm are encapsulated in the Modules, which interact
with the Kernel.

In general, Applications use the Kernel through the generic
primitives, asking the Kernel to handle a task set. The Kernel
handles the tasks by passing them to the modules, according to
the Task Models specified for the tasks. Any module modification
that does not affect the Model interface requires no modification
to the Kernel or to the Applications.

The generic kernel divides the task set into two parts: the
system tasks and the user tasks. System tasks are used to manage
external devices (like the keyboard, the mouse, the file system,
etc.). User tasks are tasks that belong to the Application.

Moreover, it distinguishes between scheduling and dispatching:

\begin{description}
\item [scheduling]is the activity by which the Generic Kernel asks
a module to indicate the task to be executed;
\item [dispatching]is the activity by which the Generic Kernel
orders a module to execute a task.
\end{description}
The following sections describe some additions implemented in
the kernel to achieve modularity. They mainly concern the task
descriptor and the task states.

%----------------------------------------------------------------------------
\subsection{Task descriptor}
\label{oview_task_descriptor}
%----------------------------------------------------------------------------

To have full modularity, the Generic Kernel should not be modified
when implementing a new algorithm.

For this reason, the task descriptor is split into several parts:
one generic part, common to all the modules, and a set of parts
that depend on the Modules used and are local to those modules.

The generic descriptor part contains all the data used by the
Generic Kernel to implement the generic primitives, like context,
stack address and size, error codes, statistical information, etc.

The generic part does not contain any information like deadlines,
priorities, time slices, reactivation times, etc. These data are
used by specific scheduling algorithms and must be contained in
the corresponding scheduling modules. In particular:

\begin{itemize}
\item The Generic Kernel does not implement a specific scheduling
policy.
\item Each instance of a module has a private memory area where
it can store information about the tasks it owns.
\item Each Module uses the information stored in the generic task
descriptor plus its private information, but not the information
of other modules.
\item Typically, an Aperiodic Server does not use the internal
structures of the Master Module, but interacts with it using the
available interface functions.
\end{itemize}
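
The split can be sketched as follows: the generic descriptor part
holds only algorithm-independent data, while a module (here, an
EDF-like one) keeps its per-task parameters in a private table
indexed by the task ID. All names and types are illustrative.

\begin{verbatim}
/* Sketch of the descriptor split (illustrative definitions). */
typedef struct {
  CONTEXT context;     /* saved machine context (from OS Lib)  */
  BYTE   *stack;       /* stack address                        */
  DWORD   stacksize;   /* stack size                           */
  LEVEL   task_level;  /* module that owns the task            */
  PID     shadow;      /* shadow task (resource protocols)     */
  /* ... error codes, statistics, ...                          */
} generic_task_des;

typedef struct {
  TIME period;         /* EDF-private scheduling data:         */
  TIME deadline;       /* period, absolute deadline, WCET      */
  TIME wcet;
} edf_task_des;

static edf_task_des edf_private[MAX_PROC];  /* one slot per PID */
\end{verbatim}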

%----------------------------------------------------------------------------
\subsection{Task States}
\label{oview_Status}
%----------------------------------------------------------------------------

One of the consequences of the modular approach is that many task
states are local to the modules. The Generic Kernel defines only
a subset of the states in which a task can be, leaving the
Modules free to define their own private states.

In particular, the Generic Kernel defines only the states
described in Table \ref{oview_Table_Kernel_States}.

\begin{table}
\begin{center}\begin{tabular}{|p{2cm}|p{10cm}|}
\hline
\multicolumn{1}{|c|}{State} & Description\\ \hline \hline

\multicolumn{1}{|c|}{FREE} & Descriptor not used\\ \hline

\multicolumn{1}{|c|}{SLEEP} & Default state in which a task goes
after creation; this is also the state in which the task returns
after a call to the \texttt{task\_sleep} primitive\\ \hline

\multicolumn{1}{|c|}{EXE} & State of the task currently being
executed\\ \hline

\multicolumn{1}{|c|}{WAIT\_*} & State in which a task is put after
blocking on a synchronization primitive\\ \hline
\end{tabular}\end{center}

\caption{Task states defined by the Generic Kernel.}
\label{oview_Table_Kernel_States}
\end{table}

These states must be integrated with other internal states defined
in the Modules. Some typical states implemented in modules are
reported in Table \ref{oview_Table_Modules_States}.

\begin{table}
\begin{center}\begin{tabular}{|p{3cm}|p{10cm}|}
\hline
\multicolumn{1}{|c|}{State name} & Description\\ \hline \hline

\multicolumn{1}{|c|}{READY} & This is the classical ready state
in which a task waits to be executed.\\ \hline

\multicolumn{1}{|c|}{BLOCKED} & This is the state of a task
blocked on a semaphore.\\ \hline

\multicolumn{1}{|c|}{ZOMBIE} & This is the state of a Hard
Periodic Task after its termination, when it waits for the end of
the period to free its reserved bandwidth.\\ \hline

\multicolumn{1}{|c|}{LOBBY} & In some implementations of the SRP
protocol, this state is used to postpone the activation of a task
while the system waits for the system ceiling to become
zero.\\ \hline

\multicolumn{1}{|c|}{IDLE} & This is the typical state in which a
periodic task is put at the end of an instance, to wait for
reactivation at the beginning of the next period.\\ \hline
\end{tabular}\end{center}

\caption{\label{oview_Table_Modules_States}Examples of states
defined in the Modules.}
\end{table}

%----------------------------------------------------------------------------
\section{Initialization and termination of the system}
\label{oview_Initialization}
%----------------------------------------------------------------------------

Before an Application can be executed, the user has to tell the
Generic Kernel which modules the system will use. This is done
through an initialization function in which the specified modules
are registered and initialized.

Normally, the registration creates a non-realtime task, which
initializes some devices and calls the standard C startup function
\texttt{main()}.
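
A registration function might look like the following sketch. The
\texttt{*\_register\_level()} calls are illustrative (each module
exports its own registration function with module-specific
parameters), and the Round Robin level is the one that creates the
non-realtime task calling \texttt{main()}.

\begin{verbatim}
/* Hedged sketch of module registration, in priority order
   (level 0 first).  The registration functions and their
   parameters are illustrative; each module defines its own. */
void register_modules(void)
{
  EDF_register_level(0);      /* level 0: EDF scheduler         */
  CBS_register_level(0, 0);   /* level 1: CBS server on EDF     */
  RR_register_level(10, 1);   /* level 2: Round Robin; creates
                                 the task that calls main()     */

  PI_register_module();       /* Priority Inheritance mutexes   */
}
\end{verbatim}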

When the system ends (because a function like \texttt{sys\_end()}
or \texttt{sys\_abort()} is called), or when the last user task
terminates, all the devices are closed and all the system tasks
are terminated in a clean way.