
%----------------------------------------------------------------------------
\chapter{The Generic Kernel Internals}
%----------------------------------------------------------------------------

In this chapter some information is given on the implementation
of the Generic Kernel. The objective of this chapter is to give
the user enough information about the internals of the kernel to
allow an analysis of the source code. The code described in this
chapter is contained in the \texttt{kernel} directory.

%----------------------------------------------------------------------------
\section{System Tasks and User Tasks}
\label{Kernel_TaskUtente}
%----------------------------------------------------------------------------

The Generic Kernel classifies the tasks in the system using two
flags of the control field of the task descriptor. The Programming
Model of the kernel is a monoprocess multithread model, so all
tasks share a common memory without any kind of address
protection.

The two flags of interest are the \texttt{SYSTEM\_TASK} and the
\texttt{NO\_KILL} flags:

\begin{itemize}
\item If the \texttt{SYSTEM\_TASK} flag is set, the task is used
internally by the Kernel; otherwise the task is considered a user
task;
\item If the \texttt{NO\_KILL} flag is set, the task cannot be
killed by a \texttt{task\_kill} or \texttt{pthread\_cancel}
primitive.
\end{itemize}
These two flags divide the task universe into four sets (look at
Figure \ref{Kernel_4Insiemi} and at Table
\ref{Kernel_Tab_4Insiemi}):%
\begin{figure}
\begin{center}\includegraphics[%
  width=8cm]{images/kernel_quattro_insiemi.eps}\end{center}


\caption{\label{Kernel_4Insiemi}The four sets in which the tasks
are divided; the values of the two flags \texttt{SYSTEM\_TASK}
(ST) and \texttt{NO\_KILL} (NK) are shown.}
\end{figure}
%
\begin{table}
\begin{center}\begin{tabular}{|c||c|c|}
\hline & \texttt{SYSTEM\_TASK=0}&
\texttt{SYSTEM\_TASK=1}\\ \hline \hline
\texttt{NO\_KILL=0}& User tasks& System Drivers\\
\hline \texttt{NO\_KILL=1}& Immortal User Tasks& System
Tasks\\ \hline
\end{tabular}\end{center}


\caption{\label{Kernel_Tab_4Insiemi}The four sets in which the
tasks are divided.}
\end{table}

\begin{description}
\item [User~Tasks]These are the tasks usually created by the user;
\item [Immortal~User~Tasks]These are tasks that the user wants to
protect against uncontrolled cancellations. Usually the life of
these tasks is not important for the termination of the system; in
other words, the system can shut down even if these tasks have not
ended;
\item [System~Drivers]These tasks are handled directly by the
Kernel or by some libraries that implement important services (for
example, the file system controls the hard disks with tasks of
this type);
\item [System~Tasks]These are non-critical tasks that have to be
always present in the system; the life of the system depends on
the life of these tasks. An example of such a task is the dummy
task.
\end{description}

System termination can be started automatically by the Generic Kernel or it
can be forced if the user calls the \texttt{exit()} function.

The Generic Kernel starts the system termination when all User
Tasks end or when all the System Drivers end.

To perform the shutdown correctly, the libraries that are
implemented using the System Drivers should also terminate in a
correct way. Look at Section \ref{Kernel_Inizializzazione} for more
information.

%----------------------------------------------------------------------------
\section{Initialization and Termination}
\label{Kernel_Inizializzazione}
%----------------------------------------------------------------------------

In this section the structure of the function
\texttt{\_\_kernel\_init\_\_} is described in more detail. This
function is called by the OS Lib at system startup.
%
% Tool: such section does not exists.
%
% The interface between the Generic Kernel and the OS Lib is not
% described here but in section \ref{OSLib_SezInizializzazione}.

%----------------------------------------------------------------------------
\subsection{Interrupt Disabling}
%----------------------------------------------------------------------------

The first thing done in the function is disabling the interrupts.
When the system starts, the OS Lib allocates a context, whose
number is stored by the Generic Kernel in the global variable
\texttt{global\_context}. In this startup context the function
\texttt{\_\_kernel\_init\_\_} is called; the functions that it
calls also run in that context.

The contexts used by the tasks will be allocated later, with calls
to the OS Lib function \texttt{ll\_context\_create} (this function
is called by the Generic Kernel inside the primitive
\texttt{task\_create}).

The interrupts will be enabled automatically at the first context
change.

%----------------------------------------------------------------------------
\subsection{Initialization of the Memory Management}
%----------------------------------------------------------------------------

After disabling the interrupts, the dynamic memory manager can be
initialized. It must be the first thing initialized because
dynamic memory is used extensively throughout the Kernel (and the
first place where it is used is the Module registration).

%----------------------------------------------------------------------------
\subsection{Initialization of the static data structures}
%----------------------------------------------------------------------------

The next step in the Kernel startup is the initialization of the
static data structures. In particular, the following are
initialized:

\begin{itemize}
\item The task descriptors and the task-specific data;
\item The free descriptor queue;
\item The arrays that contain the pointers to the Module
descriptors;
\item The data structures used to implement POSIX signals;
\item The data structures used to call the init functions posted
through the function \texttt{sys\_atrunlevel}.
\end{itemize}

%----------------------------------------------------------------------------
\subsection{Registration of the Modules in the system}
\label{Kernel_kernel_register_levels}
%----------------------------------------------------------------------------

At this point, the system has the interrupts disabled and all
static data structures initialized. Now, the Generic Kernel needs
to know the real Module configuration of the system. To do that,
the following function is called:

\bigskip{}
\begin{center}\texttt{TIME
\_\_kernel\_register\_levels\_\_(void {*}arg)}\end{center}
\bigskip{}

That function is a user-defined function that must call the
registration functions of the Scheduling Modules and of the
Resource Modules.

It has a parameter that contains a pointer to a
\texttt{multiboot} structure, which can be used to obtain some
information about the system and about the command line
arguments.%
\footnote{The function \texttt{\_\_compute\_args\_\_} described in
the file \texttt{include/kernel/func.h} can be
used.%
} This information can be useful to modify the Module
registration dynamically at run time.

The value returned by the function is the system tick that the
system will use for the periodic timer initialization. If the
returned value is 0, the generic kernel will use the one-shot
timer instead.
%
% Tool: such section does not exists.
%
% (look at Section \ref{OSLib_Inizializzazione})

To simplify the development of applications, the kernel
distribution contains some init examples in the directory
\texttt{kernel/init}.

In the initialization function only these functions can be used:

\begin{itemize}
\item The functions that allocate and free dynamic memory
(described in Section \ref{KernSupport_GestioneMemoria});
\item The function \texttt{sys\_atrunlevel}, which can be used to
register the initialization and termination functions;
\item The functions of the C library exported by the OS Lib;
\item The functions that print messages on the console, such as
\texttt{printk} and \texttt{kern\_printf}.
\end{itemize}
\textbf{The other functions of the Generic Kernel and of the OS
Lib cannot be used}.

For developers who know the earlier versions of the Hartik Kernel,
the body of the startup function
\texttt{\_\_kernel\_register\_levels\_\_} can be thought of as the
first part of the \texttt{main()} function (up to
\texttt{sys\_init()}).

%----------------------------------------------------------------------------
\subsection{OS Lib initialization}
%----------------------------------------------------------------------------

At this point all the data structures are initialized, so the
system can go into multitasking mode by calling the OS Lib
functions \texttt{ll\_init} and \texttt{event\_init}.

%----------------------------------------------------------------------------
\subsection{Initialization functions call}
\label{Kernel_Runlevel_init}
%----------------------------------------------------------------------------

At this point the system has only one valid context (the
\texttt{global\_context}); there are no tasks yet (because nobody
could have created them before).

To end the initialization phase the Generic Kernel calls the
initialization functions registered by the Modules registered in
the system. Because the OS Lib is initialized, all the functions
exported by it can be called. Moreover, the primitives
\texttt{task\_create} and \texttt{task\_activate} can be called.

The initialization functions can be used by the Scheduling Modules
to create a startup task set. Typically the startup task set is
composed of two tasks: the \texttt{dummy} task (usually created by
the \texttt{dummy} Scheduling Module%
\footnote{This module simply ends the Scheduling Module levels, so
that there will always be a task to
schedule.%
}) and the task that has the function \texttt{\_\_init\_\_} as
body (that task is usually created by the Round Robin
(\texttt{RR}) Scheduling Module). This choice is the typical
situation used in most cases, but it is not mandatory. The only
really important thing is that there must be a task to schedule
after this step.

The function \texttt{\_\_init\_\_} is usually contained in the
initialization files, just after the
\texttt{\_\_kernel\_register\_levels\_\_} function. That function
is the body of the first task executed in the system. The actions
done by that function are typically two:

\begin{itemize}
\item the initialization of some devices and libraries that use
some system primitives (for example, the semaphores) and for this
reason must be initialized in the context of a {}``real'' task
(and not in the \texttt{global\_context} where the initialization
functions are called). Examples of these libraries are the
communication ports, and the keyboard and mouse drivers (all these
libraries use the semaphores);
\item the call to the \texttt{main()} function (in this way the
applications can be written using the standard C structure)%
\footnote{The function \texttt{\_\_call\_main\_\_} described in
the file \texttt{include/kernel/func.h} can be
used.%
}.
\end{itemize}
For developers who know the earlier versions of the Hartik Kernel,
the body of the \texttt{\_\_init\_\_} function can be thought of
as the part of the \texttt{main()} function from
\texttt{sys\_init} to \texttt{sys\_end()}.
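A corresponding \texttt{\_\_init\_\_} body could look like the
following sketch; the device initialization calls are only
placeholders for whatever libraries the application needs.

\begin{verbatim}
/* Sketch of the body of the first task.  The device and
   library initialization calls are placeholders.        */

TASK __init__(void *arg)
{
  struct multiboot_info *mb = (struct multiboot_info *)arg;

  /* Initialize the libraries that need the context of a
     "real" task (they typically use semaphores).        */
  HARTPORT_init();          /* communication ports (placeholder) */
  KEYB_init(NULL);          /* keyboard driver     (placeholder) */

  /* Jump to the standard C entry point of the application. */
  __call_main__(mb);

  return (void *)0;
}
\end{verbatim}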


%----------------------------------------------------------------------------
\subsection{First context change}
%----------------------------------------------------------------------------

Now, the data structures are initialized, and the first tasks are
created. At this point the Generic Kernel simply schedules the
first task and dispatches it.

When the first task is scheduled the global context is also saved.
Because the global context is not the context of a task, the
system will never schedule the global context again until all the
user tasks have finished.

The system will also change the context to the global context when
the \texttt{sys\_end} or the \texttt{sys\_abort} functions are
called to shut down the system. Note that the function
\texttt{ll\_abort} does not change the context to the global
context; it simply changes to a safe stack and shuts down the OS
Lib, without shutting down the Generic Kernel correctly.

%----------------------------------------------------------------------------
\subsection{The shutdown functions}
%----------------------------------------------------------------------------

When the last user task ends, or when the \texttt{sys\_end} or
\texttt{sys\_abort} function is called, the current context
changes to the global context.

At this point the system has to perform some operations to shut
down in a correct
way%
\footnote{For example, if the File System is used there may be
some data that have to be written to the
disks.%
}.

The operations that have to be performed depend on the registered
Modules, so the Generic Kernel allows a set of functions to be
registered, using the function \texttt{sys\_atrunlevel}, that will
be called at this time.

Usually these functions activate some recovery tasks that will
shut down the system correctly. These functions should be short,
because just after the calls the system will be scheduled again to
allow the libraries to shut down using the newly activated
threads.

If the shutdown functions are too long and use a lot of computation time, there
can be some undesirable effects that can put the system in an unstable state%
\footnote{A typical situation: when the system is rescheduled, a task that
used a resource misses a deadline; then the Scheduling Module disables its
schedulability, so the operation on the device cannot end, and with it the whole
system!
}.
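For example, a library could register a short shutdown function
that simply activates its recovery task. The sketch below assumes
a prototype \texttt{sys\_atrunlevel(void (*f)(void *), void *arg,
BYTE when)} and a \texttt{RUNLEVEL\_SHUTDOWN} constant; both
should be checked against \texttt{include/kernel/func.h}.

\begin{verbatim}
/* Sketch: registering a shutdown function for a library.
   The sys_atrunlevel prototype and the runlevel constant
   are assumptions (check include/kernel/func.h).         */

static PID recovery_pid;   /* created during library init */

static void my_lib_shutdown(void *arg)
{
  /* Keep this short: just wake up the recovery task that
     will flush buffers, park the device heads, and so on. */
  task_activate(recovery_pid);
}

void my_lib_init(void)
{
  /* ... create recovery_pid, initialize the device ... */

  sys_atrunlevel(my_lib_shutdown, NULL, RUNLEVEL_SHUTDOWN);
}
\end{verbatim}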

%----------------------------------------------------------------------------
\subsection{Termination request for all user tasks}
%----------------------------------------------------------------------------

To speed up the system termination, the system tries to kill all
the user tasks. Because the cancellation is usually deferred (as
specified by the POSIX standard), this should not cause the
instantaneous death of all tasks when the system returns to
multitasking mode.

%----------------------------------------------------------------------------
\subsection{Second context change}
%----------------------------------------------------------------------------

To shut down the system correctly the scheduler must be called
again. For the second time the system exits from the global
context. The system usually evolves as follows:

\begin{itemize}
\item The user tasks should die (slowly\ldots{});
\item The shutdown functions should give some information to the
system tasks so they can finish their work and end.
\end{itemize}
The system will return to the global context when all system tasks
have ended or \texttt{sys\_abort} is called. The \texttt{sys\_end}
function does not have any effect in this phase.

%----------------------------------------------------------------------------
\subsection{Exit functions called before OS Lib termination}
%----------------------------------------------------------------------------

When all the tasks end or \texttt{sys\_abort} is called, the
execution returns to the global context. At this point the
functions registered through the \texttt{sys\_atrunlevel} function
with the parameter \texttt{RUNLEVEL\_BEFORE\_EXIT} are called.

The purpose of these functions is to complete the cleanup of the
system (for example, it may be useful to set the display back to
text mode if the application uses the graphics modes).

%----------------------------------------------------------------------------
\subsection{Termination of the OS Lib}
%----------------------------------------------------------------------------

At this point the system can end its work. For this reason the
function \texttt{ll\_end} is called; this function frees all the
data structures allocated with \texttt{ll\_init}. After this call
only the functions specified in Section
\ref{Kernel_kernel_register_levels} can be called.

%----------------------------------------------------------------------------
\subsection{Exit functions called after OS Lib termination}
%----------------------------------------------------------------------------

Finally, the Kernel calls the last functions registered with
\texttt{sys\_atrunlevel}, which typically print some final
messages or simply reboot the computer.

%----------------------------------------------------------------------------
\section{Task creation and on-line guarantee}
\label{Kernel_Garanzia}
%----------------------------------------------------------------------------

The Generic Kernel primitive that creates and guarantees a new
task is called \texttt{task\_createn}. The prototype of that
function is the following (the code of the primitive is contained
in the file \texttt{kernel/create.c}):

\bigskip{}
\noindent \texttt{PID task\_createn(char {*}name, TASK
({*}body)(), TASK\_MODEL {*}m, \ldots{});}
\bigskip{}

\noindent The parameters passed to that function are the
following:

\begin{description}
\item [\texttt{name}]Symbolic name for the task, used for
statistical purposes;
\item [\texttt{body}]Pointer to the first instruction of the task;
\item [\texttt{m}]Pointer to a Task Model for the new task to be
created;
\item [\texttt{\ldots{}}]List of Resource Model pointers
terminated with a \texttt{NULL} pointer.
\end{description}
The primitive returns the descriptor number associated with the
newly created task, or \texttt{NIL} if the task cannot be created
in the system. In the latter case the variable \texttt{errno} is
set to a value that explains the type of the
error%
\footnote{The error codes are listed in the file
\texttt{include/bits/errno.h}.%
}.

There is also a variant of the primitive called
\texttt{task\_create} that accepts only one Resource Model instead
of the variable argument list. This variant may be useful because
usually only a few tasks need more than one Resource Model.
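As a usage sketch, a periodic hard task could be created and
guaranteed as follows; the \texttt{HARD\_TASK\_MODEL} macro names
and their parameters are indicative and should be checked against
the Task Model headers (look at Section \ref{Modelli_TASK_MODEL}).

\begin{verbatim}
/* Sketch: creation of a periodic hard task.  The Task Model
   macros and their parameters are indicative.              */

TASK my_body(void *arg)
{
  for (;;) {
    /* ... periodic work ... */
    task_endcycle();        /* end of the current instance */
  }
}

PID create_my_task(void)
{
  HARD_TASK_MODEL m;
  PID p;

  hard_task_default_model(m);
  hard_task_def_wcet(m, 2000);    /* WCET, in microseconds   */
  hard_task_def_mit(m, 10000);    /* period, in microseconds */

  /* No Resource Models: the list is terminated by NULL. */
  p = task_createn("my_task", my_body, (TASK_MODEL *)&m, NULL);
  if (p == NIL)
    kern_printf("task_createn failed (errno=%d)\n", errno);

  return p;
}
\end{verbatim}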

The steps followed to correctly create and guarantee a new task
are described in the following paragraphs.

The first thing to do is to find an unused task descriptor.
Typically the free descriptors are queued in the \texttt{freedesc}
queue. During the selection, the tasks that are in the freedesc
queue but are waiting for a synchronization with a
\texttt{task\_join} primitive are skipped (look at Section
\ref{Kernel_Join}).

At this point the chosen descriptor is removed from the freedesc
queue and initialized with some default values.

Then, a Scheduling Module that can handle the Task Model passed as
parameter has to be found. The search starts from level 0 and
calls the \texttt{public\_create} function; when a suitable Module
is found the task is created in that Module.

The next step in task creation is the handling of the Resource
Models passed. This initialization is done by calling the
\texttt{res\_register} function on the Resource Modules registered
in the system.

At this point all system components are informed of the Quality of
Service required by the new task, and the on-line guarantee can
start. The guarantee algorithm cannot be called before registering
the Resource Models because in general {}``hybrid'' Modules can be
developed (for example, a Module can register itself both as a
Scheduling Module and as a Resource Module, using the two
descriptors).

Finally, if the task can be guaranteed, the stack memory for the
task is allocated (only if needed), the task context is created
using the OS Lib function \texttt{ll\_context\_create}, the
creation event is registered for the tracer, and the task is
counted in the user or system task counter.

If one of these steps fails, the system is put back in the state
preceding the call of the primitive (the functions
\texttt{public\_detach} and \texttt{res\_detach} are also called).

Regarding the on-line system guarantee, the generic kernel
supports a distributed guarantee on all the Scheduling Modules
based on the utilization factor paradigm. The system will call the
\texttt{public\_guarantee} function starting from level 0, and
passing each time the free bandwidth left by the upper levels.
This algorithm is implemented in the \texttt{guarantee()} function
stored in the file \texttt{kernel/kern.c}. That function returns
-1 if the task set cannot be guaranteed, 0 otherwise.
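A sketch of this utilization-based loop is shown below;
\texttt{level\_table}, \texttt{sched\_levels} and the
\texttt{public\_guarantee} field are indicative names for the
array of registered Scheduling Module descriptors, not necessarily
the ones used in \texttt{kernel/kern.c}.

\begin{verbatim}
/* Sketch of the distributed guarantee: each level consumes part
   of the free bandwidth left by the previous levels.  The names
   are indicative; public_guarantee is assumed to return 0 when
   the task set cannot be accepted.                              */

static int guarantee(void)
{
  bandwidth_t num = MAX_BANDWIDTH;   /* free bandwidth left */
  int l;

  for (l = 0; l < sched_levels; l++)
    if (level_table[l]->public_guarantee != NULL &&
        !level_table[l]->public_guarantee(l, &num))
      return -1;    /* the task set cannot be guaranteed */

  return 0;         /* the task set is guaranteed */
}
\end{verbatim}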

This approach allows the on-line guarantee of many algorithms to
be implemented in a simple way. However, it is not suitable for
implementing more complex algorithms, such as the Deferrable
Server guarantee, the TB{*} \cite{But97} guarantee and others. In
these cases two strategies can be used:

\begin{itemize}
\item All the system tasks are guaranteed off-line, so the
guarantee procedure can be disabled at run-time.
\item All the algorithms that need a guarantee are developed in a
single Scheduling Module, placed at level 0. In this way it can
control all the system bandwidth, and a guarantee can be done
because the Module knows all the data needed. However, in this way
all the advantages of modularity are lost.
\end{itemize}

%----------------------------------------------------------------------------
\section{Task activation}
\label{Kernel_Attivazione}
%----------------------------------------------------------------------------

The Generic Kernel, unlike the POSIX standard, decouples task
creation and guarantee from task activation. This is done because
in the literature many proofs are given for tasks that are
activated at the start of the major cycle. Also, the guarantee
function can be heavy and long, unlike the activation, which is
typically shorter.

The generic kernel provides two primitives to activate a task:

\begin{description}
\item [\texttt{task\_activate}]Activation of a single
task%
\footnote{This primitive can also be called inside an OS Lib event
and in the global\_context (in other words, in the functions
posted with the primitive \texttt{sys\_atrunlevel}).%
};
\item [\texttt{group\_activate}]Activation of a group of tasks in
an atomic
way%
\footnote{The system is rescheduled only once, so this can speed
up the activation of many
tasks.%
}.
\end{description}
The Generic Kernel provides a mechanism that allows task
activations to be frozen. This mechanism is included in the
generic kernel to allow the modular implementation of some shared
resource protocols like SRP and similar ones.

The mechanism uses the \texttt{FREEZE\_ACTIVATION} flag of the
task descriptor control field, which stores the freeze state of
the activations, and the task descriptor field
\texttt{frozen\_activations}, which stores the number of frozen
activations for the Generic Kernel.

The following primitives are also defined:

\begin{description}
\item [\texttt{task\_block\_activation}]blocks explicit task
activations and starts counting them. The function usually returns
0, or -1 if the task index is not correct.
\item [\texttt{task\_unblock\_activation}]enables explicit task
activations again. It returns -1 if the task had the
\texttt{FREEZE\_ACTIVATION} flag disabled, otherwise the number of
frozen activations. If there were frozen activations, the
primitive does not perform the activations itself.
\end{description}
The prototypes presented in this Section are shown in Figure
\ref{Kernel_Fig_activate}; they are stored in the files
\texttt{kernel/activate.c} and
\texttt{kernel/blkact.c}.%
\begin{figure}
\begin{center} \fbox{\tt{ \begin{minipage}{6cm} \begin{tabbing}
123\=123\=123\=\kill
int task\_activate(PID p);\\
int group\_activate(WORD g);\\
int task\_block\_activation(PID p);\\
int task\_unblock\_activation(PID p);\\
\end{tabbing} \end{minipage} }} \end{center}
\caption{\label{Kernel_Fig_activate}Prototypes of the activation
functions.}
\end{figure}
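A possible usage sketch of the freeze mechanism is the following;
whether the deferred activations must then be issued explicitly
(and how many times) depends on the protocol being implemented.

\begin{verbatim}
/* Sketch: freeze the explicit activations of a task while a
   protocol-specific critical setup is performed.            */

void setup_with_frozen_activations(PID p)
{
  int frozen;

  task_block_activation(p);     /* start counting activations */

  /* ... setup during which explicit activations of p
         must not take effect ...                       */

  frozen = task_unblock_activation(p);
  if (frozen > 0) {
    /* The kernel only counted the activations: issue the
       deferred activation explicitly.                    */
    task_activate(p);
  }
}
\end{verbatim}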

%----------------------------------------------------------------------------
\section{The Scheduler}
\label{Kernel_Scheduler}
%----------------------------------------------------------------------------

The Generic Kernel performs three steps when the system is
rescheduled:

\begin{itemize}
\item If a task is running when the system is rescheduled, the end
of the slice must be handled for that
task%
\footnote{A task slice is the time interval that starts when the
running task is dispatched and ends when the system is scheduled
again.%
};
\item Then, a new task to run must be found (scheduling);
\item Finally, the chosen task must be run (dispatching).
\end{itemize}
These steps are implemented in the system primitives and in the
\texttt{scheduler()} function stored in the file
\texttt{kernel/kern.c}. In the following sections the three steps
are described in detail.

%----------------------------------------------------------------------------
\subsection{Current slice end for the running task}
%----------------------------------------------------------------------------

To specify the actions to take at the end of a slice of the
running task, the reason for the slice end must be known.
Depending on why the slice ended, different actions should be
taken.

For this reason the Generic Kernel provides different functions
that terminate a slice. The cases in which a slice must be ended
are the following (the related Task Calls are listed in
parentheses):

\begin{enumerate}
\item A new task becomes active in the system, so the Generic
Kernel wants to check if a preemption (\texttt{public\_epilogue})
must be done. This can happen in many situations, for example:

\begin{itemize}
\item a new task is activated with a \texttt{task\_activate} or
\texttt{group\_activate} primitive;
\item a resource or a mutex is freed, so a task blocked on it is
unblocked;
\item a periodic task is reactivated at the beginning of its
period;
\item a System Driver is activated because an interrupt has
arrived;
\end{itemize}
\item The running task finishes its available capacity
(\texttt{public\_epilogue});
\item The running task blocks itself on a synchronization
primitive (\texttt{public\_block});
\item The running task ends its instance and suspends itself with
a \texttt{task\_endcycle} primitive (\texttt{public\_message});
\item The running task ends or it is killed by a
\texttt{task\_kill} or \texttt{pthread\_cancel} primitive
(\texttt{public\_end}).
\end{enumerate}
In general the sequence of functions that has to be called is the
following:

\begin{enumerate}
\item The current time is read into the global variable
\texttt{schedule\_time}%
\footnote{That variable is used as the temporal reference for the
scheduling time. Note that the Generic Kernel does not separate
the CPU time spent executing user code from that spent executing
system code; all the CPU time is charged to the user
task.%
};
\item The length of the just-terminated slice is computed using
the variables \texttt{schedule\_time} and \texttt{cap\_lasttime};
\item The computation time of the slice is accounted to the task
(look at Section \ref{Kernel_Jet});
\item The capacity event (if one is pending) is erased;
\item The Scheduling Module function that handles the termination
of the slice for the task is called.
\end{enumerate}
To simplify the writing of the primitives the following approach
is used: because the preemption rescheduling is the most common
situation, the sequence given above, which terminates with a call
to \texttt{public\_epilogue}, is included as a prologue in the
\texttt{scheduler()} function. That prologue is not executed if
the variables \texttt{exec} and \texttt{exec\_shadow} have a
\texttt{NIL} (\texttt{-1}) value when the function is called.
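The prologue can therefore be sketched as follows; the helper and
variable names (\texttt{kern\_gettime}, \texttt{SUBTIMESPEC},
\texttt{proc\_table}, \texttt{level\_table}, \texttt{cap\_timer})
are indicative, and the sketch is not the actual code of
\texttt{scheduler()}.

\begin{verbatim}
/* Sketch (not the actual code) of the slice-end prologue
   executed by scheduler() when exec_shadow != NIL.
   Helper and variable names are indicative.              */

static void end_current_slice(void)
{
  struct timespec slice;

  kern_gettime(&schedule_time);                  /* 1: current time    */
  SUBTIMESPEC(&schedule_time, &cap_lasttime,
              &slice);                           /* 2: slice length    */
  jet_update_slice(TIMESPEC2USEC(&slice));       /* 3: account to task */

  if (cap_timer != NIL) {                        /* 4: erase capacity  */
    kern_event_delete(cap_timer);                /*    event if pending*/
    cap_timer = NIL;
  }

  level_table[proc_table[exec_shadow].task_level]
    ->public_epilogue(exec_shadow);              /* 5: notify Module   */

  cap_lasttime = schedule_time;
}
\end{verbatim}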

%----------------------------------------------------------------------------
\subsection{Scheduling}
%----------------------------------------------------------------------------

When the previous slice is terminated a new task to schedule must
be chosen. The generic scheduling algorithm starts from the
Scheduling Module at level 0, calling the function
\texttt{public\_scheduler}, and going through the levels when a
Module does not have any task to schedule. The Generic Kernel
assumes that there is always a task to
schedule%
\footnote{To ensure that there is always a task to schedule, a
Scheduling Module called \texttt{dummy} is provided, which always
guarantees the existence of a task to
schedule.%
}.

When a task to schedule is found, the function
\texttt{public\_eligible} is called to verify if the task chosen
by the scheduler is correct. If the task is not correct, the
generic scheduling algorithm restarts from the level that gave the
wrong task before
%
% Tool: such section does not exists.
%
% (look at Section \ref{SchedModules_TaskCalls})
.
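The scheduling loop can therefore be sketched as follows (again
with indicative names for the level table and for the return
conventions of the Task Calls):

\begin{verbatim}
/* Sketch of the generic scheduling algorithm: scan the levels
   from 0 until a Module returns a task, then validate it with
   public_eligible; on a negative answer, retry from the level
   that gave the wrong task.  Names are indicative.             */

static PID generic_scheduler(void)
{
  PID p;
  int l = 0;

  while (l < sched_levels) {
    p = level_table[l]->public_scheduler(l);
    if (p == NIL) {          /* nothing to schedule at this level */
      l++;
      continue;
    }
    if (level_table[l]->public_eligible(l, p) < 0)
      continue;              /* wrong task: retry the same level  */
    return p;                /* valid task found                  */
  }
  return NIL;  /* unreachable if the dummy module is registered */
}
\end{verbatim}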

%----------------------------------------------------------------------------
\subsection{Dispatching}
%----------------------------------------------------------------------------

To find the task that should really be executed, another step has
to be done: the shadow chain of the scheduled task must be
followed
%
% Tool: such section does not exists.
%
% (look at Section \ref{ArchDetail_prot_ris_condiv})
. When the tail of that chain is found, the function
\texttt{public\_dispatch} is called on that task.

Finally, if the task has the \texttt{CONTROL\_CAP} bit of the task
descriptor control field set, a capacity event is posted.
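The dispatching step can be sketched as follows;
\texttt{proc\_table} and the other names are indicative, and the
capacity event handling is only hinted at.

\begin{verbatim}
/* Sketch of the dispatching step: follow the shadow chain and
   dispatch the task found at its tail.  Names are indicative. */

static void generic_dispatch(PID p)
{
  exec = p;                             /* task chosen by the scheduler */

  while (proc_table[p].shadow != p)     /* follow the shadow chain      */
    p = proc_table[p].shadow;
  exec_shadow = p;                      /* task that will really run    */

  level_table[proc_table[p].task_level]->public_dispatch(p, 0);

  if (proc_table[p].control & CONTROL_CAP) {
    /* post a capacity event that will fire when the remaining
       capacity of the task is exhausted (details omitted)      */
  }
}
\end{verbatim}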

%----------------------------------------------------------------------------
\section{Execution Time statistics}
\label{Kernel_Jet}
%----------------------------------------------------------------------------

The Generic Kernel supports the accounting of the task execution
times. This is useful because the behaviour of many algorithms
proposed in the literature depends heavily on the accuracy with
which the task capacities are managed.

To enable the Generic Kernel to account the execution time of a
task, the user should use the macros provided for the Task Models
(look at Section \ref{Modelli_TASK_MODEL}). These macros modify
the \texttt{JET\_ENABLE} flag in the \texttt{control} field of the
task descriptor.

The Generic Kernel can store some data about a task: in
particular, the mean and the maximum execution time of a task, the
time consumed by the current instance, and the execution times of
the last \texttt{JET\_TABLE\_DIM} instances.

The prototypes of the Generic Kernel functions are shown in
Figure \ref{Kernel_Fig_Jet}.%
\begin{figure}
\begin{center} \fbox{\tt{ \begin{minipage}{6cm} \begin{tabbing}
123\=123\=123\=\kill
int jet\_getstat(PID p, TIME *sum, TIME *max,\\
\>\>int *n, TIME *curr);\\
 int jet\_delstat(PID p); \\
 int jet\_gettable(PID p, TIME *table, int n);\\
 void jet\_update\_slice(TIME t);\\
 void jet\_update\_endcycle();
 \end{tabbing} \end{minipage} }} \end{center}

\caption{\label{Kernel_Fig_Jet}Primitives for execution time
handling and related functions used internally by the Generic
Kernel.}
\end{figure}

In the following paragraphs these functions are described:

\begin{description}
\item [\texttt{jet\_getstat}]This primitive returns some
statistical information; in particular, the information is stored
in the following parameters:

\begin{description}
\item [\texttt{sum}]the total execution time of the task since it
was created or since the last call to the \texttt{jet\_delstat}
function;
\item [\texttt{max}]the maximum time used by a task instance since
the task was created or since the last call to the
\texttt{jet\_delstat} function;
\item [\texttt{n}]the number of terminated instances which
\texttt{sum} and \texttt{max} refer to;
\item [\texttt{curr}]the total execution time of the current
instance.
\end{description}
If a parameter is passed as \texttt{NULL} the corresponding
information is not returned. The function returns 0 if the PID
passed is correct, \texttt{-1} if the PID passed does not
correspond to a valid PID or the task does not have the
\texttt{JET\_ENABLE} bit set.

\item [\texttt{jet\_delstat}]This primitive resets the task
execution time data maintained by the Generic Kernel. The function
returns 0 if the PID passed is correct, \texttt{-1} if the PID
passed does not correspond to a valid PID or the task does not
have the \texttt{JET\_ENABLE} bit set.
\item [\texttt{jet\_gettable}]This primitive returns the last n
execution times of the task passed as parameter. If the parameter
n is less than 0, it returns only the values stored since the last
call to \texttt{jet\_gettable}. If the value is greater than 0,
the function returns the last \texttt{min(n,~JET\_TABLE\_DIM)}
values registered. The return value is \texttt{-1} if the task
passed as parameter does not exist or the task does not have the
\texttt{JET\_ENABLE} bit set; otherwise the number of values
stored into the array is returned. The table passed as parameter
should hold at least \texttt{JET\_TABLE\_DIM} elements. A usage
sketch of these primitives is given after this list.
\end{description}
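As a usage sketch, the statistics of a task could be printed as
follows (times are assumed to be expressed in microseconds):

\begin{verbatim}
/* Usage sketch: print the execution time statistics of a task. */

void print_jet(PID p)
{
  TIME sum, max, curr;
  int  n;

  if (jet_getstat(p, &sum, &max, &n, &curr) == -1) {
    kern_printf("no JET data for task %d\n", p);
    return;
  }

  if (n > 0)
    kern_printf("task %d: mean %d us, max %d us over %d instances\n",
                p, (int)(sum / n), (int)max, n);
  kern_printf("task %d: current instance %d us\n", p, (int)curr);
}
\end{verbatim}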
The functions used in the Generic Kernel implementation are the
following:

\begin{description}
\item [\texttt{jet\_update\_slice}]updates the current slice of
the running task (pointed to by the \texttt{exec\_shadow}
variable) by t microseconds;
\item [\texttt{jet\_update\_endcycle}]updates the execution time
of the last instance. When this function is called the last
instance has just terminated.
\end{description}

%----------------------------------------------------------------------------
\section{Cancellation}
\label{Kernel_Cancellazione}
%----------------------------------------------------------------------------

The POSIX standard provides some mechanisms to enable and disable
the cancellation, and to set the cancellation as deferred or
asynchronous.

For more information about the cancellation functions refer to the
POSIX standard.
%
% Tool: table does not exist.
%
% In Table \ref{Posix_Tab_Funzioni} there are some
% primitives that are very similar to these of the standard.

The biggest problems in implementing task cancellation in the
Generic Kernel are the following:

\begin{itemize}
\item The kernel does not have a private stack, and works by
simply disabling the interrupts in the contexts of the tasks in
the system;
\item The cancellation functions for a task should be called on
the stack of the task, so it is not possible to kill another task
immediately without changing context;
\item The Generic Kernel should abstract from the cancellation
points present in the system, because in general it is not
possible to handle all the internal structures introduced by a
particular cancellation point.
\end{itemize}
The solution to these problems is proposed in the following
Sections.

%----------------------------------------------------------------------------
\subsection{The task\_makefree function}
%----------------------------------------------------------------------------

When a task dies, its control flow is switched to the
\texttt{task\_makefree} function. This function has to call all
the cancellation point functions and the key destructors.

The function can be called at the cancellation points (where the
\texttt{task\_testcancel} function is called, look at the file
\texttt{kernel/cancel.c}), at task termination (look at
\texttt{task\_create\_stub} in the file \texttt{kernel/create.c}),
and each time a task is scheduled (to test asynchronous
cancellation).

The function does the following steps:

\begin{itemize}
\item It checks if someone is waiting for the task termination
(with a \texttt{task\_join} primitive);
\item It verifies if the task that is terminating is currently
using a resource handled with the shadow mechanism (if so, an
exception is raised);
\item It calls the cleanup functions;
\item It calls the thread-specific data destructors;
\item It frees the context%
\footnote{Note that the freed context is the running context. This
is not a problem because \texttt{task\_makefree} is executed with
the interrupts disabled, so nobody can use the freed memory
areas.%
} and the memory allocated for the stack;
\item It calls \texttt{public\_end} on the Scheduling Module that
owns the task, and the \texttt{res\_detach} function on the
Resource Modules registered in the system;
\item It verifies if the end of the task should cause the whole
system termination (look at Section \ref{Kernel_TaskUtente}).
\end{itemize}

%----------------------------------------------------------------------------
\subsection{Cancellation point registration}
%----------------------------------------------------------------------------

The last problem to solve is the independence of the Generic
Kernel from the Cancellation Points. The objective of the
cancellation point registration is to allow the code for a
cancellation point to be written without modifying the primitives
that actually kill a task. The implementation can be summarized in
the following points:

\begin{itemize}
\item The blocking of a task on a cancellation point is
implemented through the \texttt{public\_block} function;
\item The task state of a task blocked on a cancellation point is
set to a value visible by the Generic Kernel (usually these names
start with the prefix WAIT\_);
\item The functions that implement the cancellation points
register themselves at their first execution by calling the
\texttt{register\_cancellation\_point} primitive (this function is
defined in the file \texttt{kernel/kill.c}). The primitive accepts
a function pointer that returns 1 if the task passed as parameter
is blocked on the cancellation point handled by the function.
\item First, the function that should kill a task sets the
\texttt{KILL\_REQUEST} flag of the control field of the task
descriptor; then, it calls the registered cancellation point
functions to check if the task is blocked on a cancellation point.
If so, the registered function reactivates the blocked task by
calling the \texttt{public\_unblock} function.
\item The architecture of a cancellation point should guarantee
that when a task is woken up a check is made to see if the task
has been killed. If so, the function internally calls the
primitive \texttt{task\_testcancel} to kill the task.
\end{itemize}

%----------------------------------------------------------------------------
\subsection{Cleanups and Thread Specific Data}
\label{Kernel_Cleanups}
\label{Kernel_pthread_keys}
%----------------------------------------------------------------------------

The POSIX standard provides two primitives,
\texttt{pthread\_cleanup\_push} and
\texttt{pthread\_cleanup\_pop}, that allow the specification of
functions to be executed in case a task is killed during the
section of code delimited by these two calls.

The implementation of these two functions has been done through
macros similar to those contained in the rationale of the POSIX
standard.

Their implementation is contained in the files
\texttt{include/pthread.h} and \texttt{include/kernel/func.h}.

The Generic Kernel also provides support for the Thread Specific
Data of the POSIX standard. The implementation of these primitives
is not complex and can be found in the file
\texttt{kernel/keys.c}.

%----------------------------------------------------------------------------
\section{Signals}
\label{Kernel_Segnali}
%----------------------------------------------------------------------------

The Generic kernel provides a POSIX signal implementation derived
from the Flux OSKit \cite{Bry97}.

Two aspects need to be described:

\begin{itemize}
\item the implementation of the signal-interruptible functions:


To implement these functions a registration mechanism is provided,
similar to the one used for the cancellation points. Each time a
signal is generated, a check is done to see if some task is
blocked on a signal-interruptible function. The registration
function is called \texttt{register\_interruptable\_point} and it
is contained in the file \texttt{kernel/signal.c};

\item the correct delivery of the signals:


A function called \texttt{kern\_deliver\_pending\_signals}
(defined in the file \texttt{kernel/signal.c}) is provided; this
function is called in the macro that changes context (the macro
\texttt{kern\_context\_load}, defined in the file
\texttt{include/kernel/func.h}). That function is usually called
after a context change, so when a task is rescheduled the pending
signals for that task are delivered. Note that in the current
version, if a task is preempted by a task activated in an
interrupt, when the task is rescheduled there will not be any
signal dispatching. This IS a bug, and it will be fixed in the
next releases of the OS Lib.

\end{itemize}
Moreover, the OSKit signal implementation is slightly modified to
handle the POSIX message queues and the POSIX realtime timers.

%----------------------------------------------------------------------------
\section{Task Join}
\label{Kernel_Join}
%----------------------------------------------------------------------------

The POSIX standard specifies that a thread return value can be
read, if the task is \emph{joinable}, through a call to the
primitive \texttt{pthread\_join} or \texttt{task\_join}.

In this section the implementation of the primitive
\texttt{task\_join} is described, with all the modifications that
it required in the Generic Kernel.

First, the information about the task type (joinable or detached)
is stored into the flag \texttt{TASK\_JOINABLE} of the
\texttt{control} field of the task descriptor.

Usually POSIX threads start in a joinable state and then they can
be detached. The Generic Kernel follows this line when
implementing \texttt{pthread\_create}, but with a difference: the
default attribute for the task models is
detached%
\footnote{Note that this does not impact the standard POSIX
implementation, since task\_create is a non-standard
function.%
}.

The \texttt{task\_join} primitive implements the POSIX primitive
\texttt{pthread\_join}. It is a cancellation point and it
registers itself in the Generic Kernel the first time it executes.

The main problem in the implementation of this primitive is that
the descriptor of a correctly terminated task cannot be reused
until a join is executed on it. The problem is that in this way
the Scheduling Modules would have to know the internal
implementation of the primitive, and this fact may complicate the
writing of a Scheduling Module if special task guarantees are
implemented.

The implementation tries to avoid these problems in the following
way:

\begin{itemize}
\item The Scheduling Modules ignore the task typology (joinable or
detached) and simply insert a terminating task in the free queue
when the descriptor is no longer needed;
\item The \texttt{task\_makefree} function checks if the task is
joinable, and if it is, the flag \texttt{WAIT\_FOR\_JOIN} in the
control field of the task descriptor is set. In any case the
context and the stack of the dead process are released;
\item A call to \texttt{task\_create} that tries to allocate a
task descriptor that is inserted in the freedesc queue but waits
for a join simply discards it, setting the
\texttt{DESCRIPTOR\_DISCARDED} bit in the \texttt{control} field
of the task descriptor.
\item A call to \texttt{task\_join} on a task that has already
terminated, was inserted in the freedesc queue and was discarded
by the primitive \texttt{task\_create}, reinserts the descriptor
in the \texttt{freedesc} queue.
\end{itemize}
This approach allows the Scheduling Modules to abstract from and
remain independent of the implementation of the join primitive.

%----------------------------------------------------------------------------
\section{Pause and Nanosleep}
\label{Kernel_nanosleep}
%----------------------------------------------------------------------------

The Generic Kernel supports a set of primitives to implement a
task suspension. The differences between them are the following:

\begin{description}
\item [\texttt{sleep}]This primitive suspends the executing task
for a number of seconds. The task can be woken up by a signal
delivery;
\item [\texttt{pause}]This function suspends the task until a
signal is delivered to it;
\item [\texttt{nanosleep}]This function suspends the running task
for at least the time passed as parameter. The task can be woken
up by the delivery of a signal; in that case the residual time is
returned.
\end{description}
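For example, a task that wants to suspend itself for 50
milliseconds and detect an early wake-up caused by a signal could
do the following:

\begin{verbatim}
/* Usage sketch: suspend for 50 ms and check whether the sleep
   was interrupted by the delivery of a signal.                */

#include <time.h>

void wait_50ms(void)
{
  struct timespec req, rem;

  req.tv_sec  = 0;
  req.tv_nsec = 50 * 1000 * 1000;   /* 50 ms */

  if (nanosleep(&req, &rem) == -1) {
    /* woken up early: rem contains the residual time */
    kern_printf("interrupted, %ld ns left\n", (long)rem.tv_nsec);
  }
}
\end{verbatim}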

%----------------------------------------------------------------------------
\section{Mutex and condition variables}
%----------------------------------------------------------------------------

The Generic Kernel provides a set of functions that are similar in
interface to the corresponding POSIX functions that handle mutexes
and condition variables.

The extensions to the interface of the Resource Modules described
in the previous chapter are used by these primitives to handle
different shared resource access protocols in a general way.


In particular, the proposed interfaces are the following (for a
better description look at the POSIX standard):

%----------------------------------------------------------------------------
\subsubsection{\texttt{int mutex\_init(mutex\_t {*}mutex, const mutexattr\_t {*}attr);}}
%----------------------------------------------------------------------------

This primitive can be used to initialize a mutex. The attr
parameter should be correctly initialized before the call. It
cannot be \texttt{NULL}.

%----------------------------------------------------------------------------
\subsubsection{\texttt{int mutex\_destroy(mutex\_t {*}mutex);}}
%----------------------------------------------------------------------------

This function deallocates a mutex.

%----------------------------------------------------------------------------
\subsubsection{\texttt{int mutex\_lock(mutex\_t {*}mutex);}}
%----------------------------------------------------------------------------

This function implements a blocking lock.

%----------------------------------------------------------------------------
\subsubsection{\texttt{int mutex\_trylock(mutex\_t {*}mutex);}}
%----------------------------------------------------------------------------

This function implements a non-blocking lock.

%----------------------------------------------------------------------------
\subsubsection{\texttt{int mutex\_unlock(mutex\_t {*}mutex);}}
%----------------------------------------------------------------------------

This function unlocks a mutex.

%----------------------------------------------------------------------------
\subsubsection{\texttt{int cond\_init(cond\_t {*}cond);}}
%----------------------------------------------------------------------------

This function initializes a condition variable.

%----------------------------------------------------------------------------
\subsubsection{\texttt{int cond\_destroy(cond\_t {*}cond);}}
%----------------------------------------------------------------------------

This function destroys a condition variable.

%----------------------------------------------------------------------------
\subsubsection{\texttt{int cond\_signal(cond\_t {*}cond);}}
%----------------------------------------------------------------------------

This function signals on a condition variable. Only one task is
unblocked.

%----------------------------------------------------------------------------
\subsubsection{\texttt{int cond\_broadcast(cond\_t {*}cond);}}
%----------------------------------------------------------------------------

This function signals on a condition variable, unblocking all the
tasks blocked on it.

%----------------------------------------------------------------------------
\subsubsection{\texttt{int cond\_wait(cond\_t {*}cond, mutex\_t {*}mutex);}}
%----------------------------------------------------------------------------

%----------------------------------------------------------------------------
\subsubsection{\texttt{int cond\_timedwait(cond\_t {*}cond, mutex\_t {*}mutex, const struct timespec {*}abstime);}}
%----------------------------------------------------------------------------

The task that executes this primitive blocks, and the mutex passed
as parameter is unlocked, to be reacquired when the task restarts.
There are two versions of the primitive; one has a timeout to
limit the blocking time. These functions are cancellation points.
If a cancellation request is generated for a task blocked on a
condition variable, the task will end after reacquiring the mutex.
This implies that each call has to be protected by cleanup
functions that release the mutex in a correct way.
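The typical pattern is therefore the one sketched below, where the
mutex and the condition variable are assumed to have been
initialized elsewhere with \texttt{mutex\_init} and
\texttt{cond\_init}:

\begin{verbatim}
/* Sketch: protecting cond_wait with a cleanup handler so that
   the mutex is released if the task is killed while blocked.
   m, c and resource_ready are assumed to be initialized
   elsewhere (mutex_init, cond_init).                          */

static mutex_t m;
static cond_t  c;
static int     resource_ready;

static void unlock_mutex(void *arg)
{
  mutex_unlock((mutex_t *)arg);
}

void wait_for_resource(void)
{
  mutex_lock(&m);
  pthread_cleanup_push(unlock_mutex, &m);

  while (!resource_ready)
    cond_wait(&c, &m);        /* cancellation point */

  /* ... use the resource ... */

  pthread_cleanup_pop(1);     /* 1: also run unlock_mutex */
}
\end{verbatim}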

%----------------------------------------------------------------------------
\section{Other primitives}
\label{Kernel_Altreprimitive}
%----------------------------------------------------------------------------

In this section a set of other primitives is briefly described.
They are implemented in the source files contained in the
\texttt{kernel} directory.

%----------------------------------------------------------------------------
\subsubsection{\texttt{void task\_endcycle(void);}}
%----------------------------------------------------------------------------

This primitive terminates the current instance of a task (look at
Section \ref{SchedModules_Lifecycle}).

%----------------------------------------------------------------------------
\subsubsection{\texttt{void task\_abort(void);}}
%----------------------------------------------------------------------------

This primitive ends the task.

%----------------------------------------------------------------------------
\subsubsection{\texttt{void group\_kill(WORD g);}}
%----------------------------------------------------------------------------

This primitive sends a kill request to all the tasks that belong
to the group g.

%----------------------------------------------------------------------------
\subsubsection{\texttt{TIME sys\_time(struct timespec {*}t);}}
%----------------------------------------------------------------------------

This primitive can be used in applications to read the system
time. Its behaviour is equivalent to that of \texttt{ll\_gettime},
but it is executed with interrupts disabled.