%----------------------------------------------------------------------------
\subsection{Introduction}
%----------------------------------------------------------------------------

Low-level support of peripheral devices is one of the most
demanding activities in a real-time system for control
applications. In fact, the rapid development of new interface
boards and the continuous upgrade of hardware platforms require a
tremendous effort at the operating system level for writing and
testing the low-level drivers that support the new hardware. The
possibility of reusing legacy drivers in real-time systems would
offer the great advantage of keeping pace with such changes with a
small programming effort. Since typical legacy drivers are written
to execute in a non-preemptive fashion, a suitable operating
system mechanism is needed to protect real-time application tasks
from unpredictable bursty interrupt requests.

%----------------------------------------------------------------------------
\subsection{Why do we need a new server technology?}
%----------------------------------------------------------------------------

The rapid development of new interface boards and the continuous
upgrade of hardware platforms require a tremendous effort at the
operating system level for writing and testing the low-level
software that supports new peripheral devices. The problem of device
management is particularly significant in those embedded systems
dedicated to control applications, where the number of I/O
peripherals is usually large. In these systems, the code layer
dedicated to device drivers is one of the most delicate
components. The possibility of reusing legacy drivers in real-time
systems would offer the great advantage of keeping pace with such
changes with a small programming effort.

One of the main problems of reusing legacy drivers in a real-time
system, however, is that most interrupt handlers disable the
interrupt capability of the processor, executing long portions
of code at the highest priority in a non-preemptive fashion. As a
consequence, a bursty sequence of interrupts may introduce long
blocking delays in application tasks, which would cause hard tasks
to miss their deadlines and soft tasks to increase their response
times. Under such an execution model for the interrupts, an
off-line guarantee of real-time constraints could require the
system to run at a very low utilization.

Inside S.Ha.R.K. such a protection mechanism is possible. The module
stack is intrinsically hierarchical, and the idle time of the first
kernel module becomes the execution time of the next one (see
Figure~\ref{f:intserver3}).

\begin{figure}
\centering
\includegraphics[width=0.5\columnwidth]{images/int-server3}
\caption{\small S.Ha.R.K. module stack}
\label{f:intserver3}
\end{figure}

By splitting the scheduler in this way, the server algorithm becomes
system independent, which makes it possible to transparently use both
FP and EDF scheduling algorithms for the application tasks.

This approach also keeps the server algorithm simple, just fitting our
requirements for IRQ and timer handling, while keeping the main
scheduler out of this specific problem.

%----------------------------------------------------------------------------
\subsection{Server features}
%----------------------------------------------------------------------------

The main features of the server are the following:

\begin{itemize}

\item
The handler is always executed in a non-preemptive fashion, but
the server limits its bandwidth consumption through a suitable
budget management policy that allows the other real-time
activities to be guaranteed.

\item
A hierarchical scheduling approach \cite{Lip03} is employed
to make the interrupt service mechanism independent of the
scheduling policy, so that either fixed or dynamic priority
assignments can be used for the application tasks.

\item
The server can be tuned to balance its responsiveness against its
bandwidth consumption.

\item
The mechanism can be efficiently implemented to reduce the extra
overhead typically required in capacity-based servers to set the
timers for the budget management.

\item
Finally, the context-switch overhead introduced by the interrupt
requests can be easily taken into account in the guarantee test
for the application tasks.

\end{itemize}

%------------------------------------------------------------------
\subsection{Server description}
\label{s:desc}
%------------------------------------------------------------------

The server mechanism described here aims at executing interrupt
requests coming from complex legacy drivers imported into a
real-time kernel from widely available open source operating
systems. Since a device driver may be imported as it is, with very
few or even no modifications, it is important to provide a method
to safely schedule the requests without jeopardizing the internal
driver timing, keeping the developer away from the low-level
details of the driver implementation and saving a lot of
programming effort. To achieve this goal, the interrupt service
routines are always executed in a non-preemptive fashion, but a
budget management mechanism is used in the server to protect the
application tasks from unbounded interference in the case of long
bursty interrupt requests. As a consequence of such a budget
management mechanism, an ISR can experience a bounded activation
delay, which can be tuned through the server parameters to balance
application predictability against interrupt responsiveness.

The server is defined by three parameters: a maximum budget $Q_{max}$,
a bandwidth $U$, and a budget threshold $Q_\theta$. The server
also keeps two state variables: its current budget $Q(t) \leq
Q_{max}$ and an activity state $\Phi(t)$, which can take three
values:

\begin{itemize}

\item
$exe$. The server is in this state when it executes an ISR;

\item
$ready$. The server is ready when there are no pending interrupt
requests and a new incoming request can be executed immediately
without any activation delay;

\item
$idle$. The server is idle when a new request cannot be immediately
executed because the previous requests consumed the available budget
below the threshold $Q_\theta$. In this state, the budget is
recharged according to a given replenishment rule, until the maximum
level $Q_{max}$ is reached or a new request arrives.

\end{itemize}
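
As a reference, the following C fragment sketches one possible data
structure for the server parameters and state variables; the type and
field names are illustrative only and are not taken from the actual
S.Ha.R.K. sources.

\begin{verbatim}
/* Possible values of the activity state Phi(t). */
typedef enum { SRV_IDLE, SRV_READY, SRV_EXE } srv_state_t;

/* Server descriptor: static parameters and dynamic state.
 * Budgets and times are in abstract time units; floating point
 * is used only to keep the sketch close to the formulas below. */
typedef struct {
    double Q_max;        /* maximum budget Q_max                       */
    double Q_theta;      /* budget threshold (0 <= Q_theta <= Q_max)   */
    double U;            /* fraction of the CPU reserved to the server */
    double Q;            /* current budget Q(t), may become negative   */
    srv_state_t state;   /* activity state Phi(t)                      */
    double last_update;  /* time of the last budget update             */
} int_server_t;
\end{verbatim}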

The maximum budget ($Q_{max}$) is the upper bound for the current
budget and limits the number of ISRs that can be consecutively
executed by the server. The budget $Q(t)$ is decreased while an
ISR is executing to keep track of the remaining budget that can be
allocated to other requests. To prevent any preemption of the
server, the budget is allowed to be negative. When no request is
executing, $Q(t)$ is recharged at a constant rate.

The $U$ parameter specifies the percentage of processor time allocated
to the server, which leaves a bandwidth $1-U$ to the application
tasks. The value of $U$ directly influences the server budget
$Q(t)$, which increases at rate $U$ when the server is ready or
idle, and decreases at rate $1-U$ when the server is executing.
A higher value of $U$ makes the budget decrease more slowly (see
Section~\ref{s:rules} for the detailed budget variation rules),
thus allowing the execution of a higher number of ISRs before
starting the recharge. On the contrary, decreasing the value of
$U$ makes the budget increase more slowly, thus leaving more
room for the application tasks.

The budget threshold $Q_\theta$ ($0 \leq Q_\theta \leq Q_{max}$)
defines the budget level above which the server can start
executing pending requests after an idle period. In other words,
when the budget is exhausted ($Q < 0$) a new request can only be
started when the budget is replenished up to $Q_\theta$. However,
if $Q > 0$ and the server is ready, an ISR can be executed even
though $Q \leq Q_\theta$.

Decreasing the value of $Q_\theta$ decreases the latency of the
ISR, while increasing $Q_\theta$ decreases the overhead introduced
by the server during IRQ bursts.

While the server is $idle$, the ISRs that cannot be executed due to
the bandwidth limitation are sent to a ready queue, which can be
handled by an arbitrary queueing discipline. They are then fetched
from the queue when the processor can be safely assigned to the
server, that is, when the execution of an interrupt service routine
does not jeopardize the temporal requirements of the application
tasks.

Two examples of server execution are reported in Figure
\ref{f:budget} to better illustrate the budget management
mechanism and the server state transitions.

\begin{figure}
\centering
\includegraphics[width=\columnwidth]{images/budget1}
\includegraphics[width=\columnwidth]{images/budget2}
\caption{Examples of server budget behavior.}
\label{f:budget}
\end{figure}

%------------------------------------------------------------------
\subsection{Server rules}
\label{s:rules}
%------------------------------------------------------------------

This section presents the rules that regulate the variation of the
budget and the state transitions.

Budget consumption and recharging are regulated by the following
rules:

\begin{enumerate}

\item
At system start-up, $\Phi(0) = idle$ and the initial budget is
set to $0$, i.e., $Q(0) = 0$.

\item
While $\Phi=idle$ or $\Phi=ready$, the budget increases at a
constant rate $U$ up to its maximum value. If $Q(t_1)$ is the
budget at time $t_1 < t_2$, then

\begin{equation}
\label{equ:inc}
Q(t_2) = \min \{Q_{max}, \; Q(t_1) + (t_2 - t_1)U\}.
\end{equation}

\item
While $\Phi=exe$, the budget decreases at a constant rate equal
to $1-U$. If $Q(t_1)$ is the budget at time $t_1 < t_2$, then
\begin{equation}
\label{equ:dec}
Q(t_2) = Q(t_1) - (t_2 - t_1)(1 - U).
\end{equation}

\end{enumerate}
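
As an illustration, the following C fragment shows how these rules
could translate into an initialization routine and a budget update
routine. It relies on the illustrative \texttt{int\_server\_t}
descriptor sketched in Section~\ref{s:desc} and is only a sketch of
the rules above, not the actual S.Ha.R.K. implementation.

\begin{verbatim}
/* Rule 1: state and budget at system start-up. */
void srv_init(int_server_t *s, double Q_max, double Q_theta, double U)
{
    s->Q_max   = Q_max;
    s->Q_theta = Q_theta;
    s->U       = U;
    s->Q       = 0.0;          /* Q(0) = 0      */
    s->state   = SRV_IDLE;     /* Phi(0) = idle */
    s->last_update = 0.0;
}

/* Rules 2 and 3: bring Q(t) up to date at time 'now'.
 * While idle or ready the budget grows at rate U and saturates at
 * Q_max (rule 2); while executing it decreases at rate 1-U and may
 * become negative (rule 3). */
void srv_update_budget(int_server_t *s, double now)
{
    double dt = now - s->last_update;

    if (s->state == SRV_EXE)
        s->Q = s->Q - dt * (1.0 - s->U);
    else
        s->Q = (s->Q + dt * s->U < s->Q_max) ? s->Q + dt * s->U
                                             : s->Q_max;
    s->last_update = now;
}
\end{verbatim}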

The activity state of the server is determined by the currently
available budget, by the previous server state, and by the presence
or absence of pending ISRs in the ready queue. The state changes
according to the following rules (a C sketch of these transitions
follows the list):

\begin{itemize}

\item
The initial state of the server is $idle$;

\item
When an IRQ arrives, if $\Phi$ is $exe$ or $idle$, the ISR is sent to
the ready queue and the server maintains its current state;

\item
When an IRQ arrives, if $\Phi=ready$, the server starts executing the
handler and $\Phi$ becomes $exe$;

\item
When an interrupt handler terminates its execution, if $Q(t) < 0$
then $\Phi$ switches from $exe$ to $idle$; if $Q(t) \geq 0$ and the
ready queue is empty, the server switches to $ready$; otherwise,
if an ISR is waiting in the queue, the server keeps the $exe$ state
and starts executing the next ISR;

\item
When $Q(t)$ increases, $\Phi$ can only be $idle$ or $ready$. If $\Phi
= idle$, when $Q(t)$ reaches $Q_\theta$ and the ready queue is
empty, the server switches to $ready$; if the queue is not
empty, it switches to $exe$ and starts executing the first pending
request. If $\Phi = ready$, when $Q(t)$ reaches $Q_\theta$
the server keeps its current state and keeps recharging up to
$Q_{max}$ if there are no IRQs to execute.

\end{itemize}
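
The following C fragment restates the same transition rules in code
form. It reuses the illustrative \texttt{int\_server\_t} descriptor
and \texttt{srv\_update\_budget()} routine sketched earlier, together
with a hypothetical ready-queue API (\texttt{isr\_queue\_*}); it is a
sketch of the rules above, not the actual S.Ha.R.K. implementation.

\begin{verbatim}
/* Hypothetical ready-queue primitives for pending ISRs. */
typedef void (*isr_t)(void);
int   isr_queue_empty(void);
void  isr_queue_push(isr_t h);
isr_t isr_queue_pop(void);
void  run_isr(isr_t h);   /* runs the handler non-preemptively; the
                             kernel calls srv_on_isr_end() when the
                             handler returns */

/* An IRQ with handler 'h' arrives at time 'now'. */
void srv_on_irq(int_server_t *s, isr_t h, double now)
{
    srv_update_budget(s, now);
    if (s->state == SRV_READY) {     /* execute immediately          */
        s->state = SRV_EXE;
        run_isr(h);
    } else {                         /* exe or idle: enqueue the ISR */
        isr_queue_push(h);
    }
}

/* An interrupt handler has just terminated at time 'now'. */
void srv_on_isr_end(int_server_t *s, double now)
{
    srv_update_budget(s, now);
    if (s->Q < 0.0)
        s->state = SRV_IDLE;         /* budget exhausted: recharge   */
    else if (!isr_queue_empty())
        run_isr(isr_queue_pop());    /* stay in exe, serve next ISR  */
    else
        s->state = SRV_READY;
}

/* The recharging budget reaches Q_theta while the server is idle. */
void srv_on_threshold(int_server_t *s)
{
    if (!isr_queue_empty()) {
        s->state = SRV_EXE;
        run_isr(isr_queue_pop());
    } else {
        s->state = SRV_READY;        /* keep recharging up to Q_max  */
    }
}
\end{verbatim}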

\begin{figure}
\centering
\includegraphics[width=\columnwidth]{images/fsm}
\caption{Server finite-state machine.}
\label{f:fsm}
\end{figure}

These rules are illustrated in Figure~\ref{f:fsm} as a finite-state
machine.

%------------------------------------------------------------------
\subsection{Server Properties}
\label{s:prop}
%------------------------------------------------------------------

The proposed interrupt server is characterized by the following
interesting properties:

\begin{itemize}

\item
the response time of every single ISR can be predicted in order to
perform an online guarantee of incoming requests;

\item
the implementation overhead can be traded for the ISR latency by
acting on the budget threshold $Q_\theta$;

\item
the server parameters can be directly used to specify the bandwidth
allocation within a hierarchical framework.

\end{itemize}