Partners

Mälardalen University, SWEDEN
University of York, UK
Universidad de Cantabria, SPAIN
Scuola Superiore S.Anna, ITALY

Introduction

In this project, we aim to bring together much of the valuable real-time research that has been done around the world, in order to provide a common framework that anyone who builds a real-time system may use or refer to. This page contains a brief description of the FIRST Scheduling Framework (FSF) and of the Shark hierarchical scheduling interface; a separate document describes the architectural aspects of FSF. That interface was developed in the context of the FIRST Project, and it consists of a set of interfaces and Shark scheduling modules.

See the FIRST Project webpage for more information.

Shark and FSF

The application programming interface (API) to the scheduling framework of the FIRST project consists of a software library, the FSF (FIRST Scheduling Framework) library. The library provides a set of include files that contain the data definitions and the function prototypes of the service contract interface. A complete description of the FSF API is given in deliverable D-SI.1v3. In this section we briefly describe the modules that compose the FSF library, specifying which ones have been implemented in Shark; the structure of the implementation is described in the next sections.

The FSF library provides many different and complex services, from simple budget control to synchronization primitives, hierarchical scheduling, reclamation, shared objects, and distribution. To simplify the structure of the library and its implementation, and to make the provided services more accessible to the final user, the FSF library has been divided into a set of modules, each one providing a specific set of services.

The Core module is essential for the framework because it provides the basic concept of the service contract, upon which the entire library is built. The service contract is the mechanism that the application uses to dynamically specify its own set of complex and flexible execution requirements. A contract is specified through an opaque structure of type fsf_contract_parameters_t, whose parameters can be set through a set of functions. The contract can then be negotiated with fsf_negotiate_contract() or similar functions. If the negotiation is successful, a server is created for the specified thread. The Core module also includes functions to obtain information from the scheduler and to synchronize a thread with its server.
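As a rough illustration, negotiating a contract and binding the calling thread to the resulting server might look like the sketch below. Only fsf_contract_parameters_t and fsf_negotiate_contract() are taken from the text above; the initialization, parameter-setting, and binding calls are assumptions about the general shape of the API, whose real names and signatures are given in deliverable D-SI.1v3.

    #include <pthread.h>
    #include <time.h>
    #include "fsf_core.h"                      /* assumed Core module header */

    int attach_me_to_a_server(void)
    {
        fsf_contract_parameters_t contract;    /* opaque contract (see text) */
        fsf_server_id_t server;                /* assumed server-handle type */

        struct timespec budget = { 0, 2000000 };   /* e.g. 2 ms minimum budget  */
        struct timespec period = { 0, 10000000 };  /* e.g. 10 ms maximum period */

        /* Assumed helpers that fill in the opaque contract structure. */
        fsf_initialize_contract(&contract);
        fsf_set_contract_basic_parameters(&contract, &budget, &period);

        /* Named in the text: if negotiation succeeds, a server is created. */
        if (fsf_negotiate_contract(&contract, &server) != 0)
            return -1;                         /* not enough capacity */

        /* Assumed call binding the calling thread to the new server. */
        return fsf_bind_thread_to_server(server, pthread_self());
    }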

The Spare Capacity module is optional. It is useful for distributing extra capacity available in the system to the applications that need it. This module adds parameters to the contract, together with functions to set them; the negotiation algorithm then takes the extra capacity into account when assigning budgets to the servers, as sketched below.
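For instance, a contract might be extended with spare-capacity parameters along these lines. This is a sketch only: since the text above says the module merely "adds parameters to the contract", the function and parameter names below are illustrative assumptions, not the exact FSF signatures.

    #include <time.h>
    #include "fsf_spare_capacity.h"            /* assumed module header */

    /* Extend an already-initialized contract with a range of acceptable
       budgets and periods, so negotiation can hand out spare capacity.
       All names below are illustrative placeholders. */
    void ask_for_spare_capacity(fsf_contract_parameters_t *contract)
    {
        struct timespec budget_max = { 0, 5000000 };  /* accept up to 5 ms   */
        struct timespec period_min = { 0, 5000000 };  /* accept down to 5 ms */

        fsf_set_contract_spare_capacity_parameters(contract,
                                                   &budget_max, &period_min,
                                                   /* importance = */ 1);
    }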

The Shared Objects module is optional. It is used when two applications, under two different contracts, share a common data structure in memory protected by a mutual exclusion semaphore. The module takes into account the extra budget that may be needed if the server's normal budget is exhausted while a task is inside a critical section; for this reason, the length of the critical sections must be specified in the contract. A new object type, fsf_shared_obj_id_t, has also been defined to identify a shared object, to specify the length of its associated critical sections, and to create a specific pthread_mutex_t variable for it.
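A sketch of how this might be used follows. The type fsf_shared_obj_id_t and the use of pthread_mutex_t come from the text above, while the two function calls are illustrative placeholders for the module's actual functions.

    #include <pthread.h>
    #include "fsf_shared_objects.h"            /* assumed module header */

    fsf_shared_obj_id_t obj_id;                /* shared-object id (see text) */
    pthread_mutex_t    *obj_mutex;             /* mutex created for the object */

    void declare_critical_section(fsf_contract_parameters_t *contract)
    {
        /* Worst-case length of the critical section on this object. */
        struct timespec cs_wcet = { 0, 500000 };   /* e.g. 0.5 ms */

        /* Illustrative: create the shared object and its pthread mutex. */
        fsf_init_shared_object(&obj_id, &obj_mutex);

        /* Illustrative: record the critical-section length in the contract,
           so negotiation can reserve the extra budget described above. */
        fsf_set_contract_synchronization_parameters(contract, obj_id, &cs_wcet);
    }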

The Dynamic Reclamation module is optional. It adds the possibility of dynamically reclaiming extra capacity that becomes available in the system because some servers are inactive or because some threads do not consume their entire server budget. This extra capacity is distributed to the applications that need it in a ``best effort'' way, in the sense that it is not possible to control how much extra budget an application will receive. This module has no specific functions, because there are no parameters to set.

The Hierarchical Scheduling module is optional. It allows several threads belonging to one application to share the same server under a local scheduling algorithm. The corresponding API makes it possible to specify the local scheduler and its parameters, and to add threads to servers with their own local scheduling parameters.
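In rough terms, an application might request a local scheduler and add its threads as shown below. Again a sketch only: the policy constant and both function names are assumptions about the module's API, chosen to match the description above.

    #include <pthread.h>
    #include <time.h>
    #include "fsf_hierarchical.h"              /* assumed module header */

    /* Run several threads of one application inside a single server,
       scheduled locally by EDF.  Names below are illustrative only. */
    void build_hierarchical_app(fsf_contract_parameters_t *contract,
                                fsf_server_id_t server, pthread_t worker)
    {
        /* Illustrative: select the local scheduling policy in the
           contract before negotiation. */
        fsf_set_contract_scheduling_policy(contract, FSF_SCHED_EDF);

        /* Illustrative: after negotiation, add a thread to the server,
           passing its local (EDF) scheduling parameters. */
        struct timespec deadline = { 0, 10000000 };  /* 10 ms relative deadline */
        fsf_add_thread_to_server(server, worker, &deadline);
    }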

All the modules described so far have been implemented in the Shark OS, and their implementation will be discussed in the following sections. Shark does not implement the Distributed and Distributed Spare Capacity modules.

The complete description of the API is given in the FIRST API documentation distributed with the latest Shark source code.

Hierarchical scheduling

The whole hierarchical scheduling package is currently included in Shark. This feature is important for composing different applications, with two obvious advantages: each application can use the scheduler that best fits its needs, and legacy applications designed for a particular scheduler can be reused by simply recompiling or, in the worst case, with some simple modifications. Shark meets the goals of isolating the behavior of each application from the others and of providing a real-time service to each group. Hierarchical scheduling has been implemented as a composition of scheduling modules, which allowed us to separate the various concerns of the FIRST scheduling framework (servers, local schedulers, ...). Bandwidth reclaiming and isolation are obtained with a dedicated server scheduling algorithm called GRUB (implemented by the Shark module named GRUBSTAR), following the BWI (BandWidth Inheritance) specification; kernel overhead is kept limited by placing all the layers of the hierarchical scheduler inside kernel space.
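To give an idea of the composition, the kernel's module registration might stack a global server scheduler under one or more local schedulers, roughly as follows. This is a hypothetical sketch in the general style of Shark module registration: apart from GRUBSTAR, which the text names, the registration calls, their arguments, and the header are illustrative placeholders, not the actual Shark distribution code.

    #include <kernel/kern.h>               /* assumed Shark kernel header */

    /* Hypothetical sketch of composing the FSF scheduling hierarchy:
       a global GRUB server scheduler with local schedulers on top. */
    TIME __kernel_register_levels__(void *arg)
    {
        (void)arg;                         /* boot information, unused here */

        GRUBSTAR_register_level();         /* global GRUB server scheduling    */
        POSIX_register_level();            /* a local scheduler for one server */
        EDF_register_level();              /* a local EDF scheduler            */
        dummy_register_level();            /* idle task                        */

        return 300;                        /* assumed system tick, microseconds */
    }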

The resulting organization of the modules in Shark is explained in more detail in the Shark documentation; in the Shark distribution, the relevant code can be found under the directory shark/ports/first.


Contacts:

Paolo Gai, Giuseppe Lipari,
Michael Trimarchi and Giacomo Guidi