
--> This is JAN AUSTIN EVANGELISTA here, signing in..

Showing posts with label OS 6.

Thread Scheduling


The thread scheduler is the part of the OS (usually) that is responsible for sharing the available CPUs out between the various threads. How exactly the scheduler works depends on the individual platform, but various modern operating systems (notably Windows and Linux) use largely similar techniques that we'll describe here. We'll also mention some key variations between the platforms.

Note that we'll continue to talk about a single thread scheduler. On multiprocessor systems, there is generally some kind of scheduler per processor, and these per-processor schedulers then need to be coordinated in some way. (On some systems, switching on different processors is staggered to avoid contention on shared scheduling tables.) Unless otherwise specified, we'll use the term thread scheduler to refer to this overall system of coordinated per-CPU schedulers.

Across platforms, thread scheduling tends to be based on at least the following criteria:

* a priority, or in fact usually multiple "priority" settings that we'll discuss below;

* a quantum, or number of allocated timeslices of CPU, which essentially determines the amount of CPU time a thread is allotted before it is forced to yield the CPU to another thread of the same or lower priority (the system keeps track of each thread's remaining quantum at any given time, plus its default quantum, which could depend on thread type and/or system configuration);

* a state, notably "runnable" vs "waiting";

* metrics about the behaviour of threads, such as recent CPU usage, the time since a thread last ran (i.e. last had a share of CPU), or the fact that a thread has "just received an event it was waiting for".

Most systems use what we might dub priority-based round-robin scheduling to some extent. The general principles are:

* a thread of higher priority (which is a function of base and local priorities) will preempt a thread of lower priority;

* otherwise, threads of equal priority will essentially take turns at getting an allocated slice or quantum of CPU;

* there are a few extra "tweaks" to make things work smoothly.
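The principles above can be sketched in a few lines. This is a hypothetical illustration (not any real kernel's code, and the thread names and priority levels are made up): one ready queue per priority level, the highest non-empty level always runs, and threads within a level take turns.

```python
from collections import deque

# Hypothetical sketch of priority-based round-robin: one ready queue per
# priority level.  The scheduler always serves the highest non-empty
# level; threads within a level take turns, each going to the back of
# its queue when its quantum expires.
class Scheduler:
    def __init__(self, levels=8):
        # index 0 = highest priority
        self.queues = [deque() for _ in range(levels)]

    def make_ready(self, priority, thread):
        self.queues[priority].append(thread)

    def pick_next(self):
        # Checking higher-priority queues first is what makes a newly
        # runnable high-priority thread "preempt" lower-priority ones.
        for q in self.queues:
            if q:
                return q.popleft()
        return None  # nothing runnable: idle

sched = Scheduler()
sched.make_ready(2, "worker-A")
sched.make_ready(2, "worker-B")
sched.make_ready(0, "event-handler")

first = sched.pick_next()    # "event-handler": highest priority wins
# Suppose it now blocks waiting for its next event, so it is not re-queued.
second = sched.pick_next()   # "worker-A"
sched.make_ready(2, second)  # quantum used up, but still runnable
third = sched.pick_next()    # "worker-B": equal priorities take turns
```

A real scheduler would also apply the "tweaks" mentioned above, such as priority boosts for threads that just received an event, but the queue-per-level shape is the core idea.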

Multi-processor Scheduling


*We will consider only shared-memory multiprocessors

*Salient features:
-One or more caches: cache affinity is important
-Semaphores/locks are typically implemented as spin-locks: preemption during critical sections is a hazard

Real-Time Scheduling


Distinctive Embedded System Attributes
● Reactive: computations occur in response to external events
– Periodic events (e.g. rotating machinery and control loops)
– Aperiodic events (e.g. a button click)
● Real time: correctness is partially a function of time
Hard real time
– Absolute deadline, beyond which the answer is useless
– May include a minimum time as well as a maximum time
Soft real time
– Approximate deadline
– Utility of the answer degrades with its time difference from the deadline

Real-Time Review
Real time is not just “real fast”
Real time means that correctness of result depends on both functional correctness and time that the result is delivered.
Soft real time
Utility degrades with distance from deadline.
Hard real time
System fails if deadline window is missed.
Firm real time
Result has no utility outside deadline window, but system can withstand a few missed results.

Types of Real-Time Scheduling

Taxonomy of Real-Time Scheduling

Dynamic vs. static
Dynamic: the schedule is computed at run-time, based on the tasks actually executing.
Static: the schedule is computed at compile time, for all possible tasks.
Preemptive vs. non-preemptive
Preemptive: permits one task to preempt another of lower priority.

Schedulability
● NP-hard if there are any resource dependencies
Prove it definitely cannot be scheduled.
Find a schedule if it is easy to do.
Stuck in the middle somewhere.

Necessary and sufficient schedulability test


Scheduling Parameters
Assume N CPUs available for execution of a single task set
Set of tasks {Ti}
Periods pi.
Deadline di (completion deadline after task is queued).
Execution time ci (amount of CPU time to complete).
Handy values:
Laxity li = di – ci (amount of slack time before Ti must begin execution).
Utilization factor ui = ci/pi (portion of CPU used).
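To make the handy values concrete, here is a small worked example over an assumed two-task set (the periods, deadlines, and execution times are invented for illustration):

```python
# Worked example of the scheduling parameters above, for an assumed
# two-task set: period p, completion deadline d, execution time c.
tasks = {
    "T1": {"p": 50, "d": 50, "c": 10},
    "T2": {"p": 100, "d": 80, "c": 30},
}

def laxity(t):
    # li = di - ci: slack time before Ti must begin execution
    return t["d"] - t["c"]

def utilization(t):
    # ui = ci / pi: portion of the CPU the task consumes
    return t["c"] / t["p"]

total_u = sum(utilization(t) for t in tasks.values())
# laxity: T1 -> 40, T2 -> 50; total utilization = 0.2 + 0.3 = 0.5
```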

Static Schedule
Assume non-preemptive system with 5 Restrictions:
1. Tasks {Ti} are periodic, with hard deadlines and no jitter
2. Tasks are completely independent
3. Deadline = period (pi = di)
4. Computation time ci is known and constant
5. Context switching is free (zero cost) INCLUDING network messages to send context to another CPU(!)
Consider the least common multiple of the periods pi
This covers all possible cases of period phase differences.
The worst case is a hyperperiod equal to the product of all the periods; usually it's not that bad.
If you can figure out (somehow) how to schedule this, you win.
Performance
Optimal if all tasks always run; can get up to 100% utilization.
If it runs once, it will always work.
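The hyperperiod argument is easy to check numerically. A quick sketch, using assumed example periods: the static schedule only has to be worked out over the least common multiple of the periods, because every phase relationship between the tasks repeats after that.

```python
from math import lcm  # Python 3.9+

# Assumed example periods for three periodic tasks.
periods = [10, 20, 50]

# The schedule repeats after the LCM of the periods (the hyperperiod);
# every phase difference between the tasks occurs within it.
hyperperiod = lcm(*periods)      # 100

# The pessimistic bound is the product of all the periods...
worst_case = 1
for p in periods:
    worst_case *= p              # 10000
# ...but the LCM is usually far smaller, as here.
```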

EDF: Earliest Deadline First
Assume a preemptive system with dynamic priorities, and (same 5 restrictions)
Scheduling policy:
Always execute the task with the nearest deadline
Performance
Optimal for uniprocessor (supports up to 100% of CPU usage in all situations)
If you’re overloaded, EDF ensures that a lot of tasks don’t complete
– It gives every task a chance to fail, at the expense of the later tasks
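The policy itself is one line. A minimal sketch, with an assumed job format (each ready job as a name / absolute deadline / remaining-time tuple; the job names and numbers are invented): always pick the nearest deadline, and re-evaluate whenever a job arrives or completes, which is where the dynamic priorities come from.

```python
# Minimal EDF sketch: run whichever ready job has the nearest absolute
# deadline.  Job format (assumed for illustration):
#   (name, absolute_deadline, remaining_time)
def edf_pick(ready_jobs):
    return min(ready_jobs, key=lambda job: job[1])

ready = [
    ("audio",    8, 2),   # deadline at t=8
    ("network",  5, 1),   # deadline at t=5: nearest, so it runs first
    ("logging", 30, 4),
]
chosen = edf_pick(ready)[0]   # "network"
```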

Least Laxity
Assume a preemptive system with dynamic priorities, and (same 5 restrictions)
Scheduling policy:
Always execute the task with the smallest laxity.
Performance:
Optimal for uniprocessor (supports up to 100% of CPU usage in all situations).
– Similar in properties to EDF
A little more general than EDF for multiprocessors.
– Takes into account that slack time is more meaningful than deadline for tasks of mixed computing sizes
Probably degrades more gracefully.
– Laxity measure permits dumping tasks that are hopeless causes
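A sketch of least laxity, using the same assumed job-tuple format as before (names and numbers invented): laxity at time `now` is (deadline − now) − remaining work, so unlike EDF it accounts for job size, and a negative laxity flags a hopeless job that can be dumped.

```python
# Least-laxity sketch.  Job format (assumed for illustration):
#   (name, absolute_deadline, remaining_time)
def llf_pick(ready_jobs, now):
    def laxity(job):
        _name, deadline, remaining = job
        # Slack left before the job must run continuously to finish.
        return (deadline - now) - remaining
    # Dump jobs with negative laxity: they cannot meet their deadline.
    feasible = [j for j in ready_jobs if laxity(j) >= 0]
    return min(feasible, key=laxity) if feasible else None

ready = [("small", 10, 1), ("big", 12, 8)]
# At t=0: laxity(small) = 9, laxity(big) = 4, so "big" runs first --
# the opposite of EDF, which would favour small's nearer deadline.
chosen = llf_pick(ready, now=0)[0]
```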

EDF/Least Laxity Tradeoffs
Pro:
If it works, it can get 100% efficiency (on a uniprocessor)
Con:
It is not always feasible to prove that it will work in all cases.
– And having it work for a while doesn’t mean it will always work
Requires dynamic prioritization.
The laxity time hack for global priority has limits.
– May take too many bits to achieve fine-grain temporal ordering
– May take too many bits to achieve a long enough time horizon

Rate Monotonic
Assume a preemptive system with static priorities, and (same 5 restrictions)

Scheduling policy:
Highest static priority goes to shortest period; always execute highest priority.
Performance:
Provides a guarantee for schedulability with CPU load of ~70%.
– Even with arbitrarily selected task periods
– Can do better if you know about periods & offsets
If all periods are multiple of shortest period, works for CPU load of 100%.
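The ~70% guarantee above comes from the classic Liu & Layland sufficient (not necessary) test: n tasks with arbitrary periods are schedulable under rate monotonic if total utilization does not exceed n·(2^(1/n) − 1), which tends to ln 2 ≈ 0.693 as n grows. A sketch, with assumed example utilizations:

```python
# Sufficient rate-monotonic schedulability test (Liu & Layland bound).
def rm_bound(n):
    # n * (2^(1/n) - 1): ~0.7798 for n=3, tends to ln 2 ~= 0.693.
    return n * (2 ** (1.0 / n) - 1)

def rm_schedulable(utilizations):
    # Passing the test guarantees schedulability; failing it is
    # inconclusive (the task set may still happen to be schedulable).
    return sum(utilizations) <= rm_bound(len(utilizations))

# Assumed example utilizations (ci/pi for each task):
ok = rm_schedulable([0.20, 0.25, 0.25])     # 0.70 <= ~0.7798: guaranteed
tight = rm_schedulable([0.40, 0.40, 0.20])  # 1.00 > bound: test fails
```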
