%----------------------------------------------------------------------------
\chapter{Kernel Overview}
%----------------------------------------------------------------------------

%----------------------------------------------------------------------------
\section{Scheduling Architecture}
%----------------------------------------------------------------------------

S.Ha.R.K. is a research kernel expressly designed to support the
implementation and testing of new scheduling algorithms, both for the
CPU and for shared resources. In order to achieve independence between
applications and scheduling algorithms (and between the schedulers and
the kernel), S.Ha.R.K. is based on a \emph{Generic Kernel}, which does
not implement any particular scheduling algorithm, but postpones
scheduling decisions to external entities, the \emph{scheduling
modules}. In a similar fashion, access to shared resources is
coordinated by \emph{resource modules}. External modules can implement
periodic scheduling algorithms, soft task management through real-time
servers, semaphore protocols, and resource management policies. The
modules implementing the most common algorithms (such as RM, EDF, and
Round Robin) are already provided, and new modules are easy to
develop. The kernel architecture is depicted in Figure
\ref{oview_Fig_KernelArchitecture}.

\begin{figure}
\begin{center}
\includegraphics[width=8cm]{images/oview_Architecture.eps}
\end{center}
\caption{The S.Ha.R.K. architecture.}
\label{oview_Fig_KernelArchitecture}
\end{figure}

The OS Lib layer implements an abstraction of a generic machine
capable of exporting some services, such as context switching, time
management, interrupt handling, and a subset of the run-time C
library. For detailed information about the OS Lib, see
\cite{Abe00-1}\cite{Abe00-2}.

The Generic Kernel provides the mechanisms used by the modules to
perform scheduling and resource management, thus allowing the system
to abstract from the specific algorithms being implemented. The
Generic Kernel simply provides the primitives without specifying any
algorithm; the implementations reside in external modules, configured
at run-time with the support of the \emph{Model Mapper} (see Section
\ref{oview_Models}).

Another important component of the Generic Kernel is the Job Execution
Time (JET) estimator, which monitors the computation time actually
consumed by each job. This is a generic mechanism, independent of the
scheduling algorithms, that can be used for statistical measurements,
for enforcing temporal protection, or for resource accounting (see
Section \ref{Kernel_Jet}).

The API is exported through the \emph{Libraries}, which use the
Generic Kernel to support some common hardware devices (e.g.,
keyboard, sound cards, network cards, graphic cards) and provide a
compatibility layer with the POSIX Real-time Controller System Profile
\cite{POSIX1003.13}.

An Application can be considered as a set of cooperating
tasks\footnote{In this document, a process is a set of computations
performed in a private address space. A thread is a single execution
flow in a process. Each thread is identified by a unique ID, plus some
specific parameters. The term Task is used as a synonym for the term
thread.} that share a common address space. There is no memory
protection implemented in the Kernel. Intertask communication is
performed using shared memory buffers accessed through synchronization
mechanisms (such as mutexes, condition variables, semaphores, CABs,
message queues). Each task is characterized by a Task Model, some
optional Resource Models, and a body. The body of a task is simply a C
function with the prototype \texttt{void *body(void *arg)}.
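As a sketch of what such a body can look like, the fragment below runs a hypothetical job loop. The counter argument and the helper are illustrative only; a real S.Ha.R.K. periodic task would typically end each instance with a kernel primitive such as \texttt{task\_endcycle()}, omitted here to keep the sketch portable:

```c
#include <stddef.h>

/* Minimal sketch of a task body following the prototype above.
   The argument and the work done are illustrative. */
void *body(void *arg)
{
    int *counter = (int *)arg;   /* per-task parameter passed at creation */
    for (int i = 0; i < 3; i++) {
        /* one job's worth of work would go here */
        (*counter)++;
    }
    return NULL;                 /* return value is unused in this sketch */
}

/* Hypothetical helper showing how a creation primitive would receive
   the body together with its argument. */
int run_body_once(void)
{
    int count = 0;
    body(&count);
    return count;                /* 3 after the loop above */
}
```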

An Application can use the following sets of functions:

\begin{itemize}
\item the functions exported by the OS Lib;
\item the Generic Kernel primitives;
\item some Module-dependent functions;
\item some functions exported by libraries, device drivers, or the
standard C library;
\item the library implementing the POSIX standard interface.
\end{itemize}

Each Module consists of a set of data and functions used for
implementing a specific algorithm, whose implementation is independent
of the other modules in the system, thus realizing a trade-off between
user-level and in-kernel schedulers. In this way, many different
module configurations are possible. For example, a Polling Server can
work with either an RM or an EDF scheduling Module without any
modification.

Currently, S.Ha.R.K. provides two basic kinds of modules:

\begin{itemize}
\item modules that implement scheduling algorithms and aperiodic
service policies (Scheduling Modules);
\item modules that manage shared (hardware or software) resources
(Resource Modules).
\end{itemize}

All resource access protocols, such as Priority Inheritance, are
implemented as mutex modules whose interface is derived from the
resource module interface. A POSIX mutex interface is also provided on
top of the implemented protocols.

Each type of Module provides a well-defined interface to communicate
with the Generic Kernel (user programs do not directly interact with
the modules). The interface functions are called by the Generic Kernel
to implement the kernel primitives. When modules need to interact with
the hardware (for example, the timer), they can use the service calls
provided by the Generic Kernel.

Finally, each Module has some unique identifiers to allow the
implementation of some consistency checks (for example, a module that
implements a Total Bandwidth Server cannot work with Rate Monotonic).

%----------------------------------------------------------------------------
\section{QoS Specification}
\label{oview_Models}
%----------------------------------------------------------------------------

One of the goals of S.Ha.R.K. is to allow the user to easily implement
and test novel scheduling algorithms. In particular, the kernel has
been designed with the following objectives:

\begin{itemize}
\item achieve independence between the kernel mechanisms and the
scheduling policies for tasks and resource management;
\item configure the system at run-time by specifying the algorithms to
be used for task scheduling and resource access;
\item achieve independence between applications and scheduling
algorithms.
\end{itemize}

These requirements are particularly useful for comparing the
performance of similar algorithms on the same application. In fact,
module independence allows the user to configure and test applications
without recompiling them; only relinking is needed.

Independence between applications and scheduling algorithms is
achieved by introducing the concept of a \emph{model}. Each task asks
the system to be scheduled according to a given Quality of Service
(QoS) specified by a model. In other words, a model is the entity used
by S.Ha.R.K. to separate the scheduling parameters from the QoS
parameters required by each task. In this way, the kernel provides a
common interface to isolate the task QoS requirements from the actual
scheduler implementation.

Models are descriptions of the scheduling requirements expressed by
tasks. S.Ha.R.K. provides three different kinds of models:

\begin{description}
\item [Task~Models.] A task model expresses the QoS requirements of a
task for CPU scheduling. Requirements are specified through a set of
parameters at task creation time. Some of the task requirements are
mandatory (e.g., the stack size of a task), while others depend on the
specific task model (e.g., a deadline). For this reason, all task
models are characterized by a general common part, which can be
extended by a model-dependent part. Usually, the model-dependent part
abstracts from a specific scheduling algorithm (for instance, the
concept of period or deadline is independent of a specific algorithm
like EDF or RM). The task models have a function similar to the
\texttt{pthread\_attr\_t} structure defined in the POSIX standard.
\item [Resource~Models.] A resource model is used to define the QoS
parameters relative to a set of shared resources used by a task. For
example, the resource model can be used to specify the semaphore
protocol to be used for protecting critical sections (e.g., Priority
Inheritance, Priority Ceiling, or SRP). In other cases, the resource
model can be used to specify a hardware resource scheduling algorithm
(e.g., a File System Scheduling Algorithm).
\item [Mutex~Models.] When a mutex semaphore is created, these models
are used to specify the resource access protocol to be used, in a way
similar to that done with Task Models. The mutex models have a
function similar to the \texttt{pthread\_mutexattr\_t} structure
defined in the POSIX standard.
\end{description}

Each task is characterized by a single mandatory QoS parameter, the
\emph{task criticality} (hard, soft, firm, non real-time, and so on).
This parameter belongs to the common part of the task model, together
with a \emph{model identifier} and some other parameters, such as the
stack size.

Each task model is implemented as a C structure, in which the first
field is the model identifier, the next fields are the mandatory
parameters, and the last field is a sequence of bytes containing the
model-dependent parameters, which only the specific module can
interpret. Resource models are completely generic, and depend on the
resource they describe: the only mandatory parameter is the model
identifier.
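The layout just described can be sketched as follows. The field names are hypothetical, not the actual S.Ha.R.K. definitions; the point is only the ordering: identifier first, then the mandatory common part, then an opaque model-dependent part that only the owning module interprets:

```c
#include <stddef.h>

/* Hypothetical sketch of a task model structure (illustrative names). */
typedef struct {
    int    pclass;                  /* model identifier, always first */
    size_t stack_size;              /* mandatory common parameter */
    int    criticality;             /* hard, soft, firm, non real-time */
    unsigned char private_data[64]; /* model-dependent bytes, opaque to the kernel */
} generic_task_model;

/* What a hard-task module might store inside the opaque part,
   e.g. a period and a worst-case execution time. */
typedef struct {
    long period_us;
    long wcet_us;
} hard_model_part;
```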

Models are required to make the Generic Kernel independent of the
implemented scheduling algorithms: since the Generic Kernel does not
implement any algorithm, it does not know how to serve a task, but
issues a service request to scheduling entities realized as external
\emph{modules}. Hence, the Generic Kernel does not interpret the
models, but just passes them to the modules; each module, by reading
the common part of the model, can verify whether the task can be
served or not.

Using models, an application is able to specify the requested QoS
independently of the modules used in the system. For example, an
application that creates a task using a Hard Task Model can be
executed on an EDF, an RM, or a Deadline Monotonic Module.

\begin{figure}
\begin{center}
\includegraphics[width=11cm]{images/oview_QOSMapper.eps}
\end{center}
\caption{The interaction between the Model Mapper and the QoS Mapper.}
\label{oview_Fig_QOSMapper}
\end{figure}

Task creation works as follows (see Figure \ref{oview_Fig_QOSMapper}):
when an application issues a request to the kernel for creating a new
task, it also sends the model describing the requested QoS. A kernel
component, namely the \emph{Model Mapper}, passes the model to a
module, selected according to an internal policy, and the module
checks whether it can provide the requested QoS; if the selected
module cannot serve the task, the Model Mapper selects a different
module. When a module accepts to manage the task described by the
specified model, it converts the model's QoS parameters into the
appropriate scheduling parameters. Such a conversion is performed by a
module component, called the \emph{QoS Mapper}. For example, a hard
periodic task may have a model consisting of a period and a worst-case
execution time (WCET); when a task is created with that model, the EDF
module will convert such parameters into deadlines, reactivation
times, and so on. In general, a module can manage only a subset of the
models, and the set of models is not limited by the kernel. This is
possible because the kernel does not handle the models, but simply
passes them to the Model Mapper, which selects a module and passes the
model to that module. Currently, the Model Mapper uses a simple
strategy, according to which modules are selected based on fixed
priorities (see Section \ref{oview_Module_Organization} for more
details).
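The fixed-priority selection just described can be sketched as a loop that offers the model to each registered module in level order until one accepts. The module table and the \texttt{accepts\_model} callback below are hypothetical stand-ins for the real registration machinery:

```c
#include <stddef.h>

#define MAX_LEVELS 4

/* Hypothetical module descriptor: one predicate per module. */
typedef struct {
    int (*accepts_model)(int model_id);  /* nonzero if the module can serve it */
} module_t;

static module_t *levels[MAX_LEVELS];

/* Returns the level index of the module that accepts the task, or -1
   if no registered module can provide the requested QoS. */
int model_mapper_assign(int model_id)
{
    for (int l = 0; l < MAX_LEVELS; l++)
        if (levels[l] && levels[l]->accepts_model(model_id))
            return l;
    return -1;
}

/* Demo wiring: level 0 refuses everything, level 1 accepts model 7. */
static int refuse(int m)  { (void)m; return 0; }
static int accept7(int m) { return m == 7; }
static module_t m0 = { refuse }, m1 = { accept7 };

int demo_assign(int model_id)
{
    levels[0] = &m0;
    levels[1] = &m1;
    return model_mapper_assign(model_id);
}
```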

%----------------------------------------------------------------------------
\section{Scheduling Modules}
%----------------------------------------------------------------------------

Scheduling Modules are used by the Generic Kernel to schedule tasks,
or to serve aperiodic requests using an aperiodic server. In general,
the implementation of a scheduling algorithm should be independent of
resource access protocols, and handle only the scheduling behavior.
Nevertheless, the implementation of an aperiodic server relies on the
presence of another scheduling module, called the Master Module

\begin{figure}
\begin{center}
\includegraphics[width=4cm]{images/oview_Server_e_Master.eps}
\end{center}
\caption{An aperiodic Server that inserts its tasks into a Master
Module.}
\label{oview_Fig_MasterModule}
\end{figure}

(for example, a Deferrable Server can be used if the base scheduling
algorithm is RM or EDF, but not Round Robin; see Figure
\ref{oview_Fig_MasterModule}). Such a design choice reflects the
traditional approach followed in the literature, where most aperiodic
servers insert their tasks directly into the scheduling queues of the
base scheduling algorithm. Again, our modular approach masks this
mechanism with the task models: an aperiodic server must use a task
model to insert its tasks into the Master Module.

The Model Mapper distributes the tasks to the registered modules
according to the task models the set of modules can handle. For this
reason, the task descriptor includes an additional field
(\texttt{task\_level}), which points to the module that is handling
the task.

When the Generic Kernel has to make a scheduling decision, it asks the
modules for the task to schedule, according to fixed priorities:
first, it requests a scheduling decision from the highest priority
module; then, if that module does not manage any task ready to run, it
asks the next highest priority module, and so on. In this way, each
module manages its private ready task list, and the Generic Kernel
schedules the first task of the highest priority non-empty module's
queue.
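The decision loop just described can be sketched as follows. The \texttt{NIL} value and the function-pointer layout are illustrative, not the real kernel interface:

```c
#define LEVELS 3
#define NIL (-1)

/* One scheduler entry point per level; returns a task index or NIL. */
typedef int (*public_scheduler_t)(void);

static public_scheduler_t level_sched[LEVELS];

int generic_schedule(void)
{
    for (int l = 0; l < LEVELS; l++) {
        if (!level_sched[l])
            continue;
        int t = level_sched[l]();   /* first ready task at this level, or NIL */
        if (t != NIL)
            return t;               /* lower level index wins: foreground */
    }
    return NIL;                     /* all queues empty: idle */
}

/* Demo: level 0 has nothing ready, level 1 offers task 42. */
static int empty_queue(void) { return NIL; }
static int offers_42(void)   { return 42; }

int demo_schedule(void)
{
    level_sched[0] = empty_queue;
    level_sched[1] = offers_42;
    return generic_schedule();
}
```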

A Scheduling Module must include all the data structures it needs. It
can be thought of as an object in an object-oriented language; this
implies that many instances of a module can be created (for example,
many TBS servers with different bandwidths).

%----------------------------------------------------------------------------
\subsection{Module Organization}
\label{oview_Module_Organization}
%----------------------------------------------------------------------------

The Scheduling Modules are organized into levels, one Module for each
level. These levels can be thought of as priority scheduling levels
(index 0 has the maximum priority).

Modules are selected for scheduling by the Model Mapper using a fixed
priority strategy. When a task is given to a module, the module
\emph{owns} the task. The \texttt{task\_level} field of the generic
task descriptor is used to save the level index of the Module that
handles the task (see Section \ref{KernSupport_Task_descriptor}).

Each Scheduling Module handles all the events that belong to its owned
tasks. A task owned by a module is scheduled in background with
respect to the tasks owned by a Module with a lower level index. For
example, in Figure \ref{oview_Levels}, the tasks
\emph{scheduled}\footnote{Note that the word \emph{scheduled} is
emphasized: the tasks \emph{scheduled} by a Module are the tasks owned
by the Module itself plus the tasks that other modules have inserted
into it.} by the XXX Scheduling Module run in foreground; the tasks
scheduled by the WWW Module run in background with respect to those of
the XXX and YYY Modules, and in foreground with respect to the tasks
scheduled by the ZZZ Module.

\begin{figure}
\begin{center}
\includegraphics[width=6cm]{images/oview_Levels.eps}
\end{center}
\caption{Kernel Level Organization.}
\label{oview_Levels}
\end{figure}

%----------------------------------------------------------------------------
\subsection{Sample scheduling architectures}
%----------------------------------------------------------------------------

The approach followed in the organization of the Scheduling Modules is
very versatile and allows different Kernel configurations to be
implemented. In the following, some typical scheduling architectures
are described.

\begin{description}
\item [Fixed~Priority~Scheme.] This is the most typical approach used
in real-time systems (see Figure \ref{oview_Power_FixedPriority}). In
this example, hard tasks (periodic and sporadic) are served at the
highest priority, whereas aperiodic tasks are handled by a Sporadic
Server. At the lowest priority level, non-realtime tasks can be
handled by a Round Robin scheduling policy.

\begin{figure}
\begin{center}
\includegraphics[width=7cm]{images/oview_Power_FixedPriority.eps}
\end{center}
\caption{A fixed priority Module configuration.}
\label{oview_Power_FixedPriority}
\end{figure}

\item [Dual~Priority~Scheme.] This configuration (depicted in Figure
\ref{oview_Power_DualPriority}) combines modules that were not
developed to work together at implementation time. In this example,
the highest priority tasks are scheduled by RM with a Deferrable
Server linked to it. Other tasks are scheduled at medium priority
using EDF with a Total Bandwidth Server. At the lowest priority level,
non-realtime tasks can be handled by a Round Robin scheduling scheme.
This configuration can be used to reduce the jitter of some important
tasks \cite{Dav95}.

\begin{figure}
\begin{center}
\includegraphics[width=7cm]{images/oview_Power_DualPriority.eps}
\end{center}
\caption{A Dual Priority Module configuration. Note that the TBS
Module is put at the lowest priority level to prevent the TBS
algorithm from using the background time left by other modules.}
\label{oview_Power_DualPriority}
\end{figure}

\item [Dynamic~Multiserver.] This example (depicted in Figure
\ref{oview_Power_Dynamic}) shows how to create a complex scheduling
architecture. In the example, some Hard tasks are scheduled together
with a set of aperiodic tasks, each one handled by a different server.
Note that the EDF Module is designed to accept tasks from a generic
Module, independently of the algorithm implemented by that Module.
Note also that the system contains many instances of a single module,
and each instance can only manage the tasks that it owns.

\begin{figure}
\begin{center}
\includegraphics[width=8cm]{images/oview_Power_Dynamic.eps}
\end{center}
\caption{Dynamic Module Configuration.}
\label{oview_Power_Dynamic}
\end{figure}

\item [Timeline~Scheduling.] The example illustrated in Figure
\ref{oview_Power_Timeline} shows a Timeline scheduler integrated with
a fixed priority algorithm, such as RM. Note that S.Ha.R.K. does not
force the developer to use classic approaches, like priority task
queues. The Generic Kernel does not impose any restrictions on
developers.

\begin{figure}
\begin{center}
\includegraphics[width=7cm]{images/oview_Power_Timeline.eps}
\end{center}
\caption{A hybrid Timeline-RM approach.}
\label{oview_Power_Timeline}
\end{figure}

\end{description}

%----------------------------------------------------------------------------
\subsection{Module Interface}
\label{oview_Module_Interface}
%----------------------------------------------------------------------------

The interface functions provided by a scheduling module can be grouped
into two classes: public and private functions. In general, a
scheduling module has an interface that implements a specific behavior
for each event in the system generated by a task.

In the following paragraphs the various classes of functions are
explained in a general way. Each function will then be described in
detail in Chapter \ref{CapSchedulingModules}.

%----------------------------------------------------------------------------
\subsection{Public Functions}
%----------------------------------------------------------------------------

The public functions are those functions that are directly called by
the Generic Kernel to implement the behavior of the primitives. Some
of the functions are directly related to the life of a single task
(e.g., task creation, task end), whereas other functions are related
to the module as a whole (the scheduler, and the acceptance test).

%----------------------------------------------------------------------------
\subsection{Private Functions}
%----------------------------------------------------------------------------

On the other hand, a module can export an interface to the public part
of the same or of another module. For example, an EDF Module can
export an interface (smaller than the Public Functions) that allows a
generic aperiodic server to insert tasks into the EDF ready queue.

That is, Private Functions are called ONLY by other Public and Private
Functions. They are NEVER called by the Generic Kernel.
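The split between the two classes can be sketched as a table of function pointers: the Generic Kernel invokes only the public part, while the private part is invoked by the public/private code of other modules (e.g. an aperiodic server pushing a task into the EDF ready queue). All names below are hypothetical, not the real S.Ha.R.K. interface:

```c
#define NIL (-1)

typedef struct level_des {
    /* public part: called by the Generic Kernel on task events */
    int  (*public_create)(void *model);
    int  (*public_scheduler)(void);
    void (*public_dispatch)(int task);

    /* private part: called ONLY by other modules, never by the kernel */
    void (*private_insert)(int task);
    void (*private_extract)(int task);
} level_des;

/* Demo: a toy "EDF-like" module whose private_insert feeds its queue. */
static int queue[8], qlen;
static void toy_insert(int task) { queue[qlen++] = task; }
static int  toy_scheduler(void)  { return qlen ? queue[0] : NIL; }

int demo_private_insert(void)
{
    level_des edf = {0};
    edf.private_insert = toy_insert;
    edf.public_scheduler = toy_scheduler;
    edf.private_insert(5);          /* an aperiodic server inserting task 5 */
    return edf.public_scheduler();  /* the kernel now sees task 5 ready */
}
```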

%----------------------------------------------------------------------------
\subsection{An Example}
\label{oview_Example}
%----------------------------------------------------------------------------

In this paragraph a useful example is presented to show the use of the
various functions of a Scheduling Module interface. The interface will
be described in detail in Chapter \ref{CapSchedulingModules}.

\begin{figure}
\begin{center}
\includegraphics[width=8cm]{images/oview_LevelExample.eps}
\end{center}
\caption{Configuration of the example in Section \ref{oview_Example}.}
\label{oview_Fig_Example}
\end{figure}

The example (see Figure \ref{oview_Fig_Example}) considers a
configuration made using two scheduling modules, registered in the
first two levels:

\begin{itemize}
\item at Level 0 there is a Module that implements a generic
Scheduling Algorithm;
\item at Level 1 there is a Module that implements a generic Aperiodic
Server (which inserts its tasks into the first Module).
\end{itemize}
Then, we consider two tasks in the system:

\begin{itemize}
\item Task A is a task created in the Module that implements the
scheduling algorithm (registered at level 0); therefore its
\texttt{task\_level} = 0.
\item Task B is a task created in the Module that implements the
aperiodic server (registered at level 1); therefore its
\texttt{task\_level} = 1. Moreover, task B is inserted into level 0
using level 0's private functions.
\end{itemize}
Both tasks are scheduled by the level 0 public\_scheduler: task A
because it was created at that level; task B because it was inserted
into level 0 through level 0's private functions.

When the scheduling procedure is called, the Generic Kernel scheduler
will call the public\_scheduler of each module, starting from that of
the Module registered at level 0. In this case, the level 0
public\_scheduler will choose which task to actually schedule between
A and B. If task A is selected, the Generic Kernel will call the
public\_dispatch of level 0 (because Task A's \texttt{task\_level} is
0). If task B is selected, the Generic Kernel will call the
public\_dispatch of level 1 (because Task B's \texttt{task\_level} is
1). To handle the public\_dispatch event, that function will call the
private\_dispatch of Level 0.
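The dispatch step in this example can be sketched as follows: scheduling always starts from level 0, but the \texttt{public\_dispatch} event is delivered to the module whose index is stored in the task's \texttt{task\_level} field. The counters below stand in for the modules' dispatch handlers; the data structures are illustrative:

```c
#define NLEVELS 2

static int dispatch_count[NLEVELS];

typedef struct { int task_level; } task_desc;

void generic_dispatch(const task_desc *t)
{
    /* Task A (task_level 0) -> level 0's public_dispatch.
       Task B (task_level 1) -> level 1's public_dispatch, which would
       in turn call level 0's private_dispatch (not shown). */
    dispatch_count[t->task_level]++;
}

int demo_dispatch(void)
{
    task_desc a = {0}, b = {1};
    generic_dispatch(&a);
    generic_dispatch(&b);
    generic_dispatch(&b);
    return dispatch_count[0] * 10 + dispatch_count[1];  /* 1 and 2 -> 12 */
}
```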


%----------------------------------------------------------------------------
\section{Resource Modules}
\label{oview_Moduli_GestioneRisorse}
%----------------------------------------------------------------------------

Resource Modules are normally used to implement some parts that do not
directly interact with task scheduling, but need some information that
has to be provided at task creation and termination time.

Such Modules are, for example, those that implement shared resource
access protocols (they require some parameters like the system
ceiling, the set of mutexes used by a task, etc.), or the Modules that
implement a file system supporting the specification of a disk
bandwidth to be guaranteed to each task.

Like Scheduling Modules, Resource Modules are also organized into
levels, one module for each level. The level number influences only
the order in which events are notified to modules.

The only events that can be notified to a Resource Module are the
creation and the termination of a task. At creation time an
application can specify one or more Resource Models, which will be
handled by the modules registered in the Kernel.

Note that the Resource Module interface is not complete: in general a
Module will need a larger interface, which depends on the resource
that the module itself handles. Usually, a Module extends the
interface used by the Generic Kernel by adding a set of new functions,
in a way similar to that used in an Object Oriented Language when a
class is derived from a base class.
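In C this inheritance idiom is usually rendered by struct embedding: the generic resource interface is placed as the first member, so a pointer to the derived module can also be treated as a pointer to the base part. The sketch below uses hypothetical names, not the real S.Ha.R.K. definitions:

```c
#include <stddef.h>

/* Generic resource module interface, as seen by the Generic Kernel. */
typedef struct resource_des {
    void (*res_register)(int task, void *resource_model); /* task creation */
    void (*res_detach)(int task);                         /* task termination */
} resource_des;

/* A ceiling-based protocol module "inherits" the base interface by
   embedding it first, then adds protocol-specific state and functions. */
typedef struct {
    resource_des base;            /* generic part, first member */
    int system_ceiling;           /* protocol-specific state */
    void (*set_ceiling)(int c);   /* protocol-specific extension */
} ceiling_resource_des;
```

Because `base` is the first member, `(resource_des *)&some_ceiling_module` is a valid pointer to the generic part, which is what makes the extension transparent to the Generic Kernel.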

%----------------------------------------------------------------------------
\section{Shared Resource Access Protocols}
\label{oview_Shadows}
%----------------------------------------------------------------------------

S.Ha.R.K. is based on a shared memory programming paradigm, so
communication among tasks is performed by accessing shared buffers. In
this case, tasks that concurrently access the same shared resource
must be synchronized through \emph{mutual exclusion}: real-time theory
\cite{Sha90} teaches that mutual exclusion through classical
semaphores is prone to \emph{priority inversion}. In order to avoid or
limit priority inversion, suitable shared resource access protocols
must be used.

As for scheduling, S.Ha.R.K. achieves modularity also in the
implementation of shared resource access protocols. Resource modules
are used to make resource protocols modular and almost independent of
the scheduling policy and of the other resource protocols. Each
resource module exports a common interface, similar to the one
provided by POSIX for mutexes, and implements a specific resource
access protocol. A task may also request a specific protocol through a
resource model.

Some protocols (like Priority Inheritance or Priority Ceiling)
directly interact with the scheduler (since a low-priority task can
inherit the priority of a high-priority task), making the protocol
dependent on the particular scheduling algorithm (see Figure
\ref{oview_Fig_Idea1}).

\begin{figure}
\begin{center}
\includegraphics[width=5cm]{images/oview_Fig_Idea1.eps}
\end{center}
\caption{Priority inheritance implemented with an out-of-order queue insertion:
(1) Ta blocks when it tries to access a resource; (2) Ta goes in the blocked
queue; (3) Tb takes the place of the high-priority task.}
\label{oview_Fig_Idea1}
\end{figure}

Although a solution based on a direct interaction between the
scheduler and the resource protocol is efficient in terms of
runtime overhead, it limits the full modularity of the kernel,
preventing the substitution of a scheduling algorithm with another
one handling the same task models (for example, replacing Rate
Monotonic with the more general Deadline Monotonic algorithm).

To achieve full modularity, the S.Ha.R.K. Generic Kernel supports
a generic priority inheritance mechanism independent from the
scheduling modules. Such a mechanism is based on the concept of
\emph{shadow tasks}. A shadow task is a task that is scheduled in
place of the task selected by the scheduler. When a task is
blocked by the protocol, it is kept in the ready queue, and a
shadow task is bound to it; when the blocked task becomes the
first task in the ready queue, its bound shadow task is scheduled
instead. In this way, the shadow task executes as if it
{}``inherited'' the priority of the blocked task, but no real
inheritance takes place, thus decoupling the implementation of the
scheduling algorithm from that of the shared resource access
protocol (see Figure \ref{oview_Fig_Idea2}).

\begin{figure}
\begin{center}
\includegraphics[width=5cm]{images/oview_Fig_Idea2.eps}
\end{center}
\caption{Priority Inheritance implemented through shadows: (1) Ta blocks when it
tries to access a resource; (2) Ta indicates a shadow task and remains in the
ready queue; (3) Tb is scheduled in place of Ta.}
\label{oview_Fig_Idea2}
\end{figure}

To implement this solution, a new field \texttt{shadow} is added
to the generic part of the task descriptor. This field points to
the shadow task (see Figure \ref{oview_Fig_Shadow_Significato}).

\begin{figure}
\begin{center}
\includegraphics[width=5cm]{images/oview_Fig_Shadow1.eps}
\end{center}
\caption{Meaning of the Task Descriptor's \texttt{shadow} field.}
\label{oview_Fig_Shadow_Significato}
\end{figure}

Initially, the shadow field is equal to the task ID (no
substitution; see Figure \ref{oview_Fig_Shadow_Typical}).

\begin{figure}
\begin{center}
\includegraphics[width=2cm]{images/oview_Fig_Shadow2.eps}
\end{center}
\caption{Typical scenario with no substitution.}
\label{oview_Fig_Shadow_Typical}
\end{figure}

When a task blocks on a shared resource, its shadow field is set
to the task ID of the task holding that resource (see Figure
\ref{oview_Fig_Shadow_Blocking}).

\begin{figure}
\begin{center}
\includegraphics[width=5cm]{images/oview_Fig_Shadow3.eps}
\end{center}
\caption{In this scenario a blocked task waits for a resource; the blocking task
inherits its priority.}
\label{oview_Fig_Shadow_Blocking}
\end{figure}

In general, a graph can grow from a blocking task (see Figure
\ref{oview_Fig_Shadow_DAG}).

\begin{figure}
\begin{center}
\includegraphics[width=5cm]{images/oview_Fig_Shadow4.eps}
\end{center}
\caption{Using the shadow mechanism, a graph can grow from a blocking task.}
\label{oview_Fig_Shadow_DAG}
\end{figure}

In this way, when the blocked task is scheduled, the blocking
(shadow) task is executed instead, thus allowing the scheduler to
abstract from the resource protocol. This approach also has the
benefit of allowing a classical deadlock detection strategy:
cycles have to be searched for in the shadow graph whenever a
\texttt{shadow} field is set.
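
The mechanism above can be sketched in C. This is a minimal sketch,
not the real S.Ha.R.K. code: only the \texttt{shadow} field comes from
the text, while the table layout and the function names are
hypothetical.

```c
#define MAX_PROC 8

/* Hypothetical, simplified descriptor table: only the generic
   `shadow' field from the text is modeled.  shadow[t] == t means
   "no substitution". */
static int shadow[MAX_PROC];

/* Follow the shadow chain from the task chosen by the scheduler
   to the task that must actually run. */
int shadow_resolve(int task)
{
    while (shadow[task] != task)
        task = shadow[task];
    return task;
}

/* Deadlock check, to be run whenever a shadow field is about to be
   set: blocking `blocked' on `blocking' would close a cycle iff the
   shadow chain starting at `blocking' already leads to `blocked'. */
int shadow_would_deadlock(int blocked, int blocking)
{
    int t = blocking;
    while (shadow[t] != t) {
        if (t == blocked)
            return 1;
        t = shadow[t];
    }
    return t == blocked;
}
```

Note that the scheduler only calls \texttt{shadow\_resolve()}: it never
needs to know which resource protocol set the field, which is exactly
the decoupling described above.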

Using this approach, a large number of shared resource protocols
can be implemented in a way independent from the scheduling
implementation. The classical approach, however, can also be
pursued: in addition to this method, a classical synchronization
mechanism is available (see the User Manual for information about
the semaphore interfaces).

%----------------------------------------------------------------------------
\section{The Generic Kernel}
%----------------------------------------------------------------------------

The \textit{Generic Kernel} implements the primitives which form
the interface with the Libraries and the User Applications.

The term Generic is used because the kernel is developed
abstracting from the implementation of any particular scheduling
algorithm. This means that it does not provide the concept of
priority, deadline, task queues, and so on. All the peculiarities
of an algorithm are encapsulated in the Modules interacting with
the Kernel.

In general, Applications use the Kernel through the generic
primitives, asking the Kernel to handle a task set. The Kernel
handles the tasks by passing them to the modules according to the
Task Models specified for the tasks. Any module modification that
does not affect the Model interface requires no modification to
the Kernel or the Applications.

The Generic Kernel divides the task set in two parts: the system
tasks and the user tasks. System tasks are used to manage external
devices (like the keyboard, the mouse, the file system, etc.).
User tasks are tasks that belong to the Application.

Moreover, it distinguishes between scheduling and dispatching:

\begin{description}
\item [scheduling] is the activity by which the Generic Kernel asks
a module to indicate the task to be executed;
\item [dispatching] is the activity by which the Generic Kernel orders
a module to execute a task.
\end{description}
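
The scheduling/dispatching split can be sketched as a C interface.
This is a hypothetical sketch: the structure and function names are
illustrative and are not the real S.Ha.R.K. module interface.

```c
#define NIL (-1)

/* A scheduling module exposes separate hooks for scheduling and
   dispatching; the Generic Kernel knows nothing about the policy
   implemented behind them. */
typedef struct scheduling_module {
    /* scheduling: asked for an indication of the task to execute */
    int  (*scheduler)(struct scheduling_module *m);
    /* dispatching: ordered to execute the indicated task */
    void (*dispatch)(struct scheduling_module *m, int task);
    void *private_data;   /* module-local state (queues, deadlines, ...) */
} scheduling_module;

/* The Generic Kernel's policy-independent scheduling step. */
int generic_schedule(scheduling_module *m)
{
    int task = m->scheduler(m);   /* ask for an indication */
    if (task != NIL)
        m->dispatch(m, task);     /* order the execution */
    return task;
}
```

Replacing one policy with another then amounts to plugging in a
different pair of hooks, leaving \texttt{generic\_schedule()} untouched.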
The following subsections describe some new additions implemented in
the kernel to achieve modularity. They mainly concern the task
descriptor and the task states.

%----------------------------------------------------------------------------
\subsection{Task descriptor}
\label{oview_task_descriptor}
%----------------------------------------------------------------------------

To achieve full modularity, the Generic Kernel should not be
modified when implementing a new algorithm.

For this reason the task descriptor is split in several parts: a
generic part, common to all the modules, and a set of parts that
depend on the Modules being used and are local to those modules.

The generic descriptor part contains all the data used by the
Generic Kernel to implement the generic primitives, like context,
stack address and size, error codes, statistical information, etc.

The generic part does not contain any information like deadlines,
priorities, time slices, reactivation times, etc. These data are
used by specific scheduling algorithms and must be contained in
the corresponding scheduling modules. In particular:

\begin{itemize}
\item The Generic Kernel does not implement a specific scheduling
policy.
\item Each instance of a module has a private memory where it can
store information about the tasks it owns.
\item Each Module uses the information stored in the generic task
descriptor plus its private information, but not the information
of other modules.
\item Typically, an Aperiodic Server does not use the internal
structures of the Master Module, but interacts with it using the
available interface functions.
\end{itemize}
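
The split between the generic and the module-private descriptor data
can be sketched in C. This is an illustrative sketch: only the idea of
the split and the \texttt{shadow} field come from the text; all other
names and fields are hypothetical.

```c
#define MAX_PROC 4

/* Generic part: data the Generic Kernel itself needs (context,
   stack, error codes, ...) plus the shadow field.  No priority,
   deadline or queue link lives here. */
typedef struct {
    void *context;     /* saved CPU context                  */
    void *stack;       /* stack address                      */
    int   shadow;      /* shadow-task field (see above)      */
    int   error;       /* last error code                    */
} generic_descriptor;

static generic_descriptor proc_table[MAX_PROC];

/* An EDF-like module keeps its own per-task data privately;
   the Generic Kernel never looks inside this structure. */
typedef struct {
    long deadline[MAX_PROC];
    long period[MAX_PROC];
} edf_private_data;
```

A new scheduling algorithm only adds its own private structure; the
generic part, and therefore the Generic Kernel, stays unchanged.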

%----------------------------------------------------------------------------
\subsection{Task States}
\label{oview_Status}
%----------------------------------------------------------------------------

One of the consequences of the modular approach is that many task
states are local to the modules. The Generic Kernel defines only a
subset of the states in which a task can be, giving the Modules
the freedom to create their own private states.

In particular, the Generic Kernel defines only the states
described in Table \ref{oview_Table_Kernel_States}.

\begin{table}
\begin{center}\begin{tabular}{|p{2cm}|p{10cm}|}
\hline
\multicolumn{1}{|c|}{State} & Description\\ \hline \hline

\multicolumn{1}{|c|}{FREE} & Descriptor not used\\ \hline

\multicolumn{1}{|c|}{SLEEP} & Default state in which a task goes
after creation; this is also the state to which the task returns
after a call to the \texttt{task\_sleep} primitive\\ \hline

\multicolumn{1}{|c|}{EXE} & State of the task currently being
executed\\ \hline

\multicolumn{1}{|c|}{WAIT\_*} & State in which a task is put after
blocking on a synchronization primitive\\ \hline
\end{tabular}\end{center}

\caption{Task states defined by the Generic Kernel.}
\label{oview_Table_Kernel_States}
\end{table}

These states must be integrated with other internal states defined
in the Modules. Some typical states implemented in modules are
reported in Table \ref{oview_Table_Modules_States}.

\begin{table}
\begin{center}\begin{tabular}{|p{3cm}|p{10cm}|}
\hline
\multicolumn{1}{|c|}{State name} & Description\\ \hline \hline

\multicolumn{1}{|c|}{READY} & This is the classical ready state in
which a task waits to be executed.\\ \hline

\multicolumn{1}{|c|}{BLOCKED} & This is the state of a task
blocked on a semaphore.\\ \hline

\multicolumn{1}{|c|}{ZOMBIE} & This is the state of a Hard
Periodic Task after its termination, when it waits for the end of
the period to free its used bandwidth.\\ \hline

\multicolumn{1}{|c|}{LOBBY} & In some implementations of the SRP
protocol, this state is used to postpone the activation of a task
while the system waits for the system ceiling to become
zero.\\ \hline

\multicolumn{1}{|c|}{IDLE} & This is the typical state in which a
periodic task is put at the end of an instance, to wait for
reactivation at the beginning of the next period.\\ \hline
\end{tabular}\end{center}

\caption{\label{oview_Table_Modules_States}Examples of states
defined in the Modules.}
\end{table}
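
The coexistence of the two state sets can be sketched in C. This is an
illustrative sketch with a hypothetical numbering scheme; only the
state names come from the two tables.

```c
/* States defined by the Generic Kernel: a small, fixed set. */
enum {
    TASK_FREE,        /* descriptor not used                 */
    TASK_SLEEP,       /* default state after creation        */
    TASK_EXE,         /* task currently being executed       */
    TASK_WAIT_SEM,    /* one of the WAIT_* family            */
    KERNEL_STATES     /* first value free for module states  */
};

/* States private to, e.g., a periodic scheduling module are numbered
   after the generic ones, so the two sets never clash. */
enum {
    TASK_READY = KERNEL_STATES,
    TASK_BLOCKED,
    TASK_ZOMBIE,
    TASK_IDLE
};
```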

%----------------------------------------------------------------------------
\section{Initialization and termination of the system}
\label{oview_Initialization}
%----------------------------------------------------------------------------

Before an Application can be executed, the user has to tell the
Generic Kernel which modules are used by the system. This is done
through an initialization function in which the specified modules
are registered and initialized.

Normally, the registration creates a non-real-time task that
initializes some devices and calls the standard C startup function
\texttt{main()}.
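
The startup flow just described can be sketched in C. This is a
hypothetical sketch: the function and module names below are
illustrative and are not the real S.Ha.R.K. registration API.

```c
static int registered_modules = 0;

/* Called by the user's initialization function, once for each
   module the system will use. */
static void register_module(const char *name)
{
    (void)name;               /* a real kernel would store the module */
    registered_modules++;
}

/* User-supplied initialization function: declare the modules before
   any Application code runs. */
static void modules_init(void)
{
    register_module("EDF");   /* CPU scheduling module   */
    register_module("PI");    /* resource (mutex) module */
}

/* The registration then creates a non-real-time startup task that
   initializes some devices and finally calls the user's main(). */
static int startup_task(int (*user_main)(void))
{
    modules_init();
    /* ... device initialization would happen here ... */
    return user_main();
}
```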

When the system ends (because a function like \texttt{sys\_end()}
or \texttt{sys\_abort()} is called) or when the last user task
terminates, all the devices are closed and all the system tasks
are terminated in a clean way.