Rate monotonic scheduling

Rate-monotonic scheduling (RMS)[1] is a scheduling algorithm used in real-time operating systems. It is an optimal preemptive scheduling algorithm among those that use fixed priorities.

Characteristics

The preconditions on processes for using this algorithm are as follows:

  • Processes do not share resources (including hardware resources, queues, and semaphores).
  • Deadlines (execution due times) coincide exactly with the execution periods (in other words, each execution must finish before the next release).
  • Priorities are fixed (whenever the processor becomes free or a new task period begins, the process with the highest fixed priority preempts all other tasks and runs).
  • The fixed priorities are assigned according to the rate-monotonic principle: the shorter a task's period (and hence its deadline), the higher its priority. A minimal sketch of this assignment follows this list.
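
As a minimal sketch of the rate-monotonic assignment (plain Python; the task names and numbers below are hypothetical and not from the article), the priorities follow directly from sorting the tasks by period:

    # Hypothetical task set: (name, execution time, period) in arbitrary time units.
    tasks = [("control", 1, 8), ("sensor", 2, 5), ("logging", 2, 10)]

    # Rate-monotonic assignment: the shorter the period, the higher the priority.
    # Priority 1 is the highest, and priorities never change at run time.
    for prio, (name, c, period) in enumerate(sorted(tasks, key=lambda t: t[2]), start=1):
        print(f"priority {prio}: {name} (C={c}, T={period})")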

RMS is a statistical model that keeps a record of events. It is used only in closed environments where the number of inputs is predictable; round-robin and time-sharing scheduling do not work well in real-time systems. RMS determines the processing time and the start time of each process with reference to this history.

In 1973, Liu and Layland proved that a set of n periodic tasks with unique periods is always schedulable (all deadlines are met) if the CPU utilization satisfies the following bound:

U = \sum_{i=1}^{n} C_i / T_i \le n(2^{1/n} - 1)

where C_i is the computation time and T_i is the period of task i. For example, for n = 2 the bound is 2(\sqrt{2} - 1) \approx 0.8284. As the number of processes approaches infinity, this expression tends to the following limit:

\lim_{n \to \infty} n(2^{1/n} - 1) = \ln 2 \approx 0.693147

Therefore, the limit on CPU utilization below which RMS is guaranteed to meet all deadlines (i.e., to be schedulable) is in general ln 2 ≈ 69.32%. The remaining roughly 31% of CPU time can be used for low-priority, non-real-time tasks. However, this is only a sufficient condition for schedulability that does not depend on the particular combination of tasks, and it is a rather conservative value: combinations of tasks that actually miss a deadline are extremely rare as long as the real CPU utilization stays around 85% or below. As a simple illustration, randomly generated periodic task sets are known to meet all deadlines when CPU utilization is about 85% or less.[2] However, because this depends on knowing the statistics of the tasks (periods, deadlines, and so on) exactly, it is not guaranteed for every combination of tasks.
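
The sufficient condition above is easy to check mechanically. The sketch below (plain Python, not from the original article) computes the total utilization and compares it with the Liu and Layland bound n(2^{1/n} - 1); passing the test guarantees schedulability under RMS, while failing it is inconclusive and only means a more exact analysis is needed.

    def liu_layland_bound(n):
        """Sufficient RMS schedulability bound n(2^(1/n) - 1) for n periodic tasks."""
        return n * (2 ** (1.0 / n) - 1)

    def rms_sufficient_test(tasks):
        """tasks: list of (execution_time, period) pairs. True means guaranteed schedulable."""
        utilization = sum(c / t for c, t in tasks)
        return utilization <= liu_layland_bound(len(tasks))

    # The task set from the Example section below: utilization 0.725 vs. a bound of about 0.7798.
    print(rms_sufficient_test([(1, 8), (2, 5), (2, 10)]))  # True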

"Optimal" here means that if any fixed-priority scheduling algorithm can meet all deadlines, then the rate-monotonic priority assignment can as well; the same is meant when the RMS algorithm is called optimal. When periods differ from deadlines, deadline-monotonic scheduling (DMS) is the optimal algorithm. When deadlines are equal to periods, RMS and DMS are equivalent, and when deadlines are shorter than periods, DMS is optimal.[3]

Optimal fixed-priority scheduling for the case where deadlines are longer than periods is an open problem.

Resource sharing and priority inheritance

In many practical applications, resources are shared, so RMS must deal with priority inversion and deadlock. Priority inheritance was introduced as a way to solve these problems.

Examples of priority inheritance algorithms include the following:

  • In a real-time kernel (uC-OS II), the primitives OSIntEnter() and OSIntExit() are used to lock out CPU interrupts.
  • Primitives such as splx() are used to lock out device interrupts (Linux kernel).
  • The Highest Locker Protocol requires an offline analysis of all possible conflicts between tasks in order to find the "ceiling priority" (described below).
  • The basic priority inheritance protocol[4] raises the priority of a low-priority task to that of a high-priority task whenever the high-priority task requests a lock held by the low-priority task, until the lock is released (a minimal sketch of this idea follows this list).
  • The priority ceiling protocol[5] improves on the basic priority inheritance protocol by assigning a "ceiling priority" to each semaphore; the ceiling priority is the highest priority of any task that may access that semaphore. While a low-priority task holds the semaphore, tasks with priorities lower than the ceiling priority cannot preempt it.
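
The following is a rough, simplified sketch of the basic priority inheritance idea referred to above (plain Python, not the API of any particular kernel; all names are hypothetical). It bumps the lock holder's priority to that of a higher-priority requester and restores it on release.

    class Task:
        """Hypothetical task record: lower numbers mean higher priority."""
        def __init__(self, name, priority):
            self.name = name
            self.base_priority = priority
            self.priority = priority

    class PiLock:
        """Toy lock with basic priority inheritance (no real blocking or threads)."""
        def __init__(self):
            self.holder = None

        def acquire(self, task):
            if self.holder is None:
                self.holder = task
                return True
            # A higher-priority requester found the lock held: the holder
            # temporarily inherits the requester's (higher) priority.
            if task.priority < self.holder.priority:
                self.holder.priority = task.priority
            return False  # in a real kernel the requester would now block

        def release(self, task):
            task.priority = task.base_priority  # drop any inherited priority
            self.holder = None

    low, high = Task("logger", priority=5), Task("control", priority=1)
    lock = PiLock()
    lock.acquire(low)    # low-priority task takes the lock
    lock.acquire(high)   # high-priority task is refused...
    print(low.priority)  # ...and "logger" now runs at priority 1 (inherited)
    lock.release(low)
    print(low.priority)  # back to its base priority, 5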

Priority inheritance algorithms can be classified by two parameters. First, is the inheritance delayed (applied only when really necessary) or immediate (applied before a conflict occurs)? Second, is the inheritance optimistic (raising priority only by the minimum needed) or pessimistic (raising it by more than required)?

            Pessimistic                  Optimistic
Immediate   OSIntEnter/Exit()            splx(), Highest Locker
Delayed     Priority ceiling protocol    Basic priority inheritance protocol

In practice there is no mathematical difference between the delayed and the immediate algorithms, and the immediate algorithms are used in most practical systems because they are more efficient to implement.

The basic priority inheritance protocol can produce "chained blocking": a high-priority task may be kept waiting for a long time before it can actually enter its critical section. The other protocols guarantee that, before a high-priority task enters a critical section, at most one critical section of a lower-priority task is executed.

The priority ceiling protocol is available in the VxWorks real-time kernel, but it is rarely used.

A well-known use of basic priority inheritance is the Mars Pathfinder bug: enabling priority inheritance was the means used to work around the bug while the spacecraft was on Mars.

Example

Consider a system in which three periodic processes run.

Process  Execution time  Period
P1       1               8
P2       2               5
P3       2               10

The execution times and periods are expressed in the same time unit (for example, 10 milliseconds); thus P1 starts every period of 8 units (80 milliseconds) and runs for only 1 unit (10 milliseconds). The CPU utilization is as follows:

U = 1/8 + 2/5 + 2/10 = 0.725

The theoretical bound for three processes is as follows:

n(2^{1/n} - 1) = 3(2^{1/3} - 1) \approx 0.7798

Since 0.725 ≤ 0.7798, the system is schedulable.
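
As a cross-check on this calculation, the following short simulation (a sketch in plain Python; it assumes deadlines equal to periods, unit-length time slices, and Python 3.9+ for math.lcm) runs the three tasks with rate-monotonic priorities over one hyperperiod and reports whether any deadline is missed.

    from math import lcm

    # (name, execution time, period), sorted by period, i.e. in RM priority order.
    tasks = [("P2", 2, 5), ("P1", 1, 8), ("P3", 2, 10)]

    hyperperiod = lcm(*(period for _, _, period in tasks))  # 40 time units
    remaining = {name: 0 for name, _, _ in tasks}
    missed = False

    for t in range(hyperperiod):
        for name, c, period in tasks:
            if t % period == 0:
                if remaining[name] > 0:   # previous job still unfinished at its deadline
                    missed = True
                remaining[name] = c       # release a new job
        for name, _, _ in tasks:          # run the pending job with the shortest period
            if remaining[name] > 0:
                remaining[name] -= 1
                break

    missed = missed or any(remaining.values())  # jobs due exactly at the hyperperiod
    print("deadline missed" if missed else "all deadlines met over one hyperperiod")

Because the tasks are strictly periodic and all released at time 0, simulating a single hyperperiod (40 units here) is enough to cover every release pattern in this example.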

References

  1. ^ C. L. Liu and J. Layland. 'Scheduling algorithms for multiprogramming in a hard real-time environment', Journal of the ACM, 20(1): 46-61, January 1973.
  2. ^ J. Lehoczky, L. Sha and Y. Ding, 'The Rate monotonic scheduling algorithm: exact characterization and average case behavior', IEEE Real-Time Systems Symposium, pp. 166-171, December 1989.
  3. ^ J. Y. Leung and J. Whitehead. 'On the complexity of fixed-priority scheduling of periodic, real-time tasks'. Performance Evaluation, 2(4): 237--250, December 1982.
  4. ^ B.W. Lampson, and D. D. Redell. 'Experience with Processes and Monitors in Mesa'. Communications of the ACM, Vol. 23, No. 2 (Feb 1980), pp. 105-117.
  5. ^ L. Sha, R. Rajkumar and J. P. Lehoczky, 'Priority inheritance protocols: an approach to real-time synchronization', IEEE Transactions on Computers, vol. 39, no. 9, pp. 1175-1185, September 1990.
  • M. Joseph and P. Pandya. 'Finding Response times in Real-time systems', BCS Computer Journal, 29(5), pp. 390-395, October 1986.

This article is based on the Japanese Wikipedia article "Rate monotonic scheduling" and is distributed under the CC-BY-SA or GFDL license in accordance with Wikipedia's provisions.

