
--> This is JAN AUSTIN EVANGELISTA, signing in.

2.) All Graphs in Chapter 8 (Deadlock) with their explanations


Resource-Allocation Graph Algorithm
Claim edge Pi → Rj indicates that process Pi may request resource Rj; represented by a dashed line.
● Claim edge converts to request edge when a process requests a resource.
● When a resource is released by a process, assignment edge reconverts to a claim edge.
● Resources must be claimed a priori in the system.


Resource-Allocation Graph


EXPLANATION:
A set of vertices V and a set of edges E.
● V is partitioned into two types:
- P = {P1, P2, …, Pn}, the set consisting of all the processes in the system.
- R = {R1, R2, …, Rm}, the set consisting of all resource types in the system.
request edge – directed edge Pi → Rj
assignment edge – directed edge Rj → Pi
----
● Process
● Resource Type with 4 instances
● Pi requests instance of Rj
● Pi is holding an instance of Rj

● The Resource Allocation Graph (RAG) above is composed of 3 processes and 4 resource types.
● R1 (resource 1) has only one instance
● R2 has 2 instances
● R3 has one instance
● R4 has 3 instances
● P1 requests instance of R1
● P2 is holding an instance of R1
● P2 requests instance of R3
● P3 holds an instance of R3
● There is no deadlock found
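The "no deadlock" conclusion can be checked mechanically by looking for a cycle in the graph. Below is a minimal sketch (the edge list simply transcribes the request and assignment edges from the bullets above) using a depth-first search:

```python
# Edges of the RAG described above: a request edge goes Pi -> Rj,
# an assignment edge goes Rj -> Pi.
edges = {
    "P1": ["R1"],   # P1 requests an instance of R1
    "R1": ["P2"],   # P2 is holding an instance of R1
    "P2": ["R3"],   # P2 requests an instance of R3
    "R3": ["P3"],   # P3 holds an instance of R3
    "P3": [],
}

def has_cycle(graph):
    """Depth-first search; reaching a node already on the stack means a cycle."""
    visiting, done = set(), set()

    def dfs(node):
        if node in visiting:
            return True          # back edge -> cycle found
        if node in done:
            return False
        visiting.add(node)
        if any(dfs(nxt) for nxt in graph.get(node, [])):
            return True
        visiting.remove(node)
        done.add(node)
        return False

    return any(dfs(n) for n in list(graph))

print(has_cycle(edges))  # False -> no cycle, hence no deadlock
```

Adding an edge that closes the chain back to R1 (as in the deadlocked graph discussed next) makes the same function return True.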

------------------------


Resource Allocation Graph With A Deadlock


EXPLANATION:
If graph contains a cycle
=> if only one instance per resource type, then deadlock.
=> if several instances per resource type, possibility of deadlock.

● The Resource Allocation Graph (RAG) above is composed of 3 processes and 4 resource types.
● R1 (resource 1) has only one instance
● R2 has 2 instances
● R3 has one instance
● R4 has 3 instances
● P1 requests instance of R1
● P2 is holding an instance of R1
● P2 requests instance of R3
● P3 holds an instance of R3
● P3 requests an instance of R1
● There is a deadlock in RAG no. 2 because all of the instances of R2 are held by P1 and P2, while P3 is requesting an instance of R2. R2 cannot grant an instance to P3, which is why a deadlock occurs.

------------------------


Resource Allocation Graph With A Cycle But No Deadlock


EXPLANATION:
If graph contains no cycles => no deadlock. The graph above does contain a cycle, but because each resource type in the cycle has several instances, deadlock is only a possibility, and here it does not occur.

● The Resource Allocation Graph (RAG) above is composed of 4 processes and 2 resource types.
● R1 and R2 each have 2 instances
● P1 requests instance of R1
● P2 is holding an instance of R1
● P3 also holds an instance of R1
● P3 requests an instance of R2
● P4 holds an instance of R2
● P1 also holds an instance of R2
● There is no deadlock: once P2 or P4 releases an instance, the cycle is broken.

------------------------


Safe, Unsafe, Deadlock State


EXPLANATION:
If a system is in safe state => no deadlocks.
If a system is in unsafe state => possibility of deadlock.
Avoidance => ensure that a system will never enter an unsafe state.

------------------------


Resource-Allocation Graph For Deadlock Avoidance


EXPLANATION:
Requires that the system has some additional a priori information available.
● Simplest and most useful model requires that each process declare the maximum number of resources of each type that it may need.
● The deadlock-avoidance algorithm dynamically examines the resource-allocation state to ensure that there can never be a circular-wait condition.
● Resource-allocation state is defined by the number of available and allocated resources, and the maximum demands of the processes.

● The Resource Allocation Graph (RAG) above is composed of 2 processes and 2 resources.
● P1 holds an instance of R1
● P2 requests an instance of R1
● P1 and P2 may request an instance of R2
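The safety check that such avoidance algorithms perform can be sketched as follows. This is an illustration of a Banker's-style safety test; the matrices below are invented for the example and are not taken from the graph above:

```python
def is_safe(available, allocation, maximum):
    """Return (True, safe_sequence) if every process can finish in some order."""
    n = len(allocation)
    # Need = Maximum - Allocation, per process and resource type.
    need = [[m - a for m, a in zip(maximum[i], allocation[i])] for i in range(n)]
    work = list(available)
    finished = [False] * n
    order = []
    while len(order) < n:
        progressed = False
        for i in range(n):
            if not finished[i] and all(need[i][j] <= work[j] for j in range(len(work))):
                # Pi can run to completion, then returns everything it holds.
                work = [w + a for w, a in zip(work, allocation[i])]
                finished[i] = True
                order.append(i)
                progressed = True
        if not progressed:
            return False, []      # no safe sequence exists: unsafe state
    return True, order

safe, seq = is_safe([3, 3, 2],
                    [[0, 1, 0], [2, 0, 0], [3, 0, 2]],   # Allocation
                    [[7, 4, 3], [3, 2, 2], [5, 0, 2]])   # Maximum
print(safe, seq)  # True [1, 2, 0]
```

A request is granted only if the state that would result still passes this check; otherwise the process waits, keeping the system in a safe state.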

------------------------


Unsafe State In Resource-Allocation Graph


EXPLANATION:
● Basic fact:
If a system is in unsafe state => possibility of deadlock.
● The RAG above is composed of 2 resources and 2 processes
● P1 holds an instance of R1
● P2 is requesting an instance of R1
● P2 holds an instance of R2
● P1 may request an instance of R2

------------------------


Resource-Allocation Graph and Wait-for Graph


EXPLANATION:
Single Instance of Each Resource Type
● Maintain wait-for graph
- Nodes are processes.
- Pi → Pj if Pi is waiting for Pj.
● Periodically invoke an algorithm that searches for a cycle in the graph.
● An algorithm to detect a cycle in a graph requires an order of n^2 operations, where n is the number of vertices in the graph.
Several Instances of a Resource Type
Available: A vector of length m indicates the number of available resources of each type.
Allocation: An n x m matrix defines the number of resources of each type currently allocated to each process.
Request: An n x m matrix indicates the current request of each process. If Request[i,j] = k, then process Pi is requesting k more instances of resource type Rj.

additional...
A Wait-For Graph in computer science is a directed graph used for deadlock detection in operating systems and relational database systems.
In Computer Science, a system that allows concurrent operation of multiple processes and locking of resources and which does not provide mechanisms to avoid or prevent deadlock must support a mechanism to detect deadlocks and an algorithm for recovering from them.
One such deadlock detection algorithm makes use of a Wait-For Graph to track which other processes a process is currently blocking on. In a Wait-for Graph, processes are represented as nodes, and an edge from process Pi to Pj implies Pj is holding a resource that Pi needs and thus Pi is waiting for Pj to release its lock on that resource.

1.) Resource Allocation Graph


Example of a Resource Allocation Graph



Resource Allocation Graph With A Deadlock



Resource Allocation Graph With A Cycle But No Deadlock



Resource-Allocation Graph For Deadlock Avoidance



Unsafe State In Resource-Allocation Graph



Resource-Allocation Graph and Wait-for Graph



Resource-Allocation Graph

● Process


● Resource Type with 4 instances


● Pi requests instance of Rj


● Pi is holding an instance of Rj


---------------------------------


QUESTION:
=>How would you know if there's a DEADLOCK based on the resource allocation graph?

ANSWER:
If graph contains no cycles => no deadlock.
If graph contains a cycle
=> if only one instance per resource type, then deadlock (a cycle through single-instance resources necessarily results in a DEADLOCK).
=> if several instances per resource type, possibility of deadlock (a cycle through multi-instance resources creates only a POSSIBILITY of a DEADLOCK).

Recovery from Deadlock


Recovery from Deadlock: Process Termination
• Abort all deadlocked processes.
• Abort one process at a time until the deadlock cycle is eliminated.
• In which order should we choose to abort?
– Priority of the process.
– How long process has computed, and how much longer to completion.
– Resources the process has used.
– Resources process needs to complete.
– How many processes will need to be terminated.
– Is process interactive or batch?

Recovery from Deadlock: Resource Preemption
Selecting a victim – minimize cost
Rollback – return to some safe state, restart process from that state
– Require the system to keep more information about the state of all the running processes
Starvation – the same process may always be picked as victim; include the number of rollbacks in the cost factor


Deadlock Detection


Overview
• Allow system to enter a deadlock state
• Detection algorithm: examine the system state to determine whether a deadlock has occurred
• Recovery scheme: recover from the deadlock

Single Instance of Each Resource Type
• Maintain wait-for graph
– Nodes are processes.
– This graph is obtained from the resource-allocation graph by removing the nodes of type resource and collapsing the appropriate edges
– Pi  Pj if Pi is waiting for Pj.
• Periodically invoke an algorithm that searches for a cycle in the graph.
• An algorithm to detect a cycle in a graph requires an order of n^2 operations, where n is the number of vertices in the graph.

Resource-Allocation Graph And Wait-for Graph


Several Instances of a Resource Type
Available: A vector of length m indicates the number of available resources of each type
Allocation: An n x m matrix defines the number of resources of each type currently allocated to each process
Request: An n x m matrix indicates the current request of each process. If Request [i,j] = k, then process Pi is requesting k more instances of resource type Rj
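A sketch of the detection algorithm built from these three structures. The idea is to repeatedly find a process whose current Request can be satisfied from Work, assume it finishes and releases its Allocation, and mark it finished; any process that can never be marked is deadlocked (the numbers below are illustrative):

```python
def detect_deadlock(available, allocation, request):
    """Mark every process that can eventually finish; the rest are deadlocked."""
    n, m = len(allocation), len(available)
    work = list(available)
    # A process holding no resources cannot be part of a deadlock cycle.
    finished = [all(a == 0 for a in allocation[i]) for i in range(n)]
    changed = True
    while changed:
        changed = False
        for i in range(n):
            if not finished[i] and all(request[i][j] <= work[j] for j in range(m)):
                # Pi's request can be met, so assume it finishes and
                # releases everything it currently holds.
                work = [w + a for w, a in zip(work, allocation[i])]
                finished[i] = True
                changed = True
    return [i for i in range(n) if not finished[i]]

print(detect_deadlock([0, 0, 0],
                      [[0, 1, 0], [2, 0, 0], [3, 0, 3]],    # Allocation
                      [[0, 0, 0], [2, 0, 2], [0, 0, 1]]))   # Request -> [1, 2]
```

In this example P0 can finish immediately, but P1 and P2 each wait on instances the other holds, so the algorithm reports them as the deadlocked set.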

Combined Approach to Deadlock Handling
• Combine the three basic approaches
Prevention
Avoidance
Detection
• Allowing the use of the optimal approach for each resource in the system
• Partition resources into hierarchically ordered classes.
• Use most appropriate technique for handling deadlocks within each class
• Group resources into a number of different resource classes
• Use the linear ordering strategy defined previously for the prevention of circular wait to prevent deadlocks between resource classes
• Within a resource class, use the algorithm that is most appropriate for that class
• Swappable space
Hold-and-Wait prevention strategy + avoidance
• Process resources: Assignable devices such as tape drives, files…
Avoidance + Resource ordering
• Main memory: assignable in pages or segments
Preemption
• Internal resources: PCB, I/O channels…
Resource ordering

Deadlock Prevention


Ensure that at least one of the necessary conditions cannot hold
Prevent deadlocks by constraining how requests for resources can be made


Mutual Exclusion – not required for sharable resources; must hold for non-sharable resources
Hold and Wait – must guarantee that whenever a process requests a resource, it does not hold any other resources.
– Method 1: require each process to request and be allocated all its resources before it begins execution
– Method 2: allow a process to request resources only when the process has none
– Example: copy data from tape drive to disk file, sort disk file, print
Disadvantage
• Low resource utilization
• Starvation possible
No Preemption – if process A holding resources requests another resource that cannot be immediately allocated to it
– Method 1: All resources currently being held by A are preempted
• Preempted resources are added to A’s waiting resource list
• A will be restarted only when it can regain its old resources, as well as the new ones that it is requesting.
– Method 2: Check the requested resources for following conditions
• If it is allocated to a process waiting for additional resources, preempt it from the waiting process and allocate it to A
• If it is held by a process not waiting, A must wait
– A’s resources may be preempted, but only if another process requests them

Often applied to resources whose state can be easily saved and restored later, such as CPU registers and memory space

Circular Wait – impose a total ordering of all resource types
– Example: F(tape drive) = 1, F(disk drive) = 5, F(Printer) = 12
• F is defined according to the normal order of resource usage
– Method 1: require that each process requests resources in an increasing order of enumeration
• OK if tape drive → disk drive → Printer
• Not OK if disk drive → tape drive → Printer
– Method 2: Whenever a process requests an instance of Rj, it has released any resources Ri such that F(Ri) > F(Rj)
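Method 1 can be sketched in code. The resource names and F values below come from the example above; the helper `acquire_in_order` is hypothetical, not part of any real API:

```python
import threading

# F assigns each resource type a number; requests must follow increasing F.
F = {"tape drive": 1, "disk drive": 5, "printer": 12}
locks = {name: threading.Lock() for name in F}

def acquire_in_order(*names):
    """Sort the request by F before locking, so every process acquires
    resources in the same global order and no circular wait can form."""
    ordered = sorted(names, key=lambda n: F[n])
    for name in ordered:
        locks[name].acquire()
    return ordered

held = acquire_in_order("printer", "tape drive", "disk drive")
print(held)  # ['tape drive', 'disk drive', 'printer']
for name in held:
    locks[name].release()
```

Because every process locks in increasing F order, no process holding a high-numbered resource can ever wait on a lower-numbered one, so the circular-wait condition can never hold.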

Methods for Handling Deadlock


Methods for Handling Deadlocks
• Use a protocol to prevent or avoid deadlocks, ensuring that the system will never enter a deadlock state
– Deadlock prevention
– Deadlock avoidance
• Allow the system to enter a deadlock state, detect it and then recover
• Ignore the problem and pretend that deadlocks never occur in the system
– Used by most operating systems, including UNIX
– The undetected deadlock will result in the deterioration of the system performance. Eventually, the system will stop functioning and will need to be restarted manually

Deadlock Characterization


Necessary Conditions
Mutual exclusion: At least one resource must be held in a non-sharable mode
Hold and wait: a process holding at least one resource is waiting to acquire additional resources held by others
No preemption: a resource can be released only voluntarily by the process holding it, after it has completed its task
Circular wait: there exists a set {P0, P1, …, Pn} of waiting processes such that P0 is waiting for a resource that is held by P1, P1 is waiting for a resource that is held by P2, …, Pn–1 is waiting for a resource that is held by Pn, and Pn is waiting for a resource that is held by P0.

Deadlock can arise if four conditions hold simultaneously.

Resource-Allocation Graph
• A set of vertices V and a set of edges E.
• V is partitioned into two types:
– P = {P1, P2, …, Pn} (All processes in the system)
– R = {R1, R2, …, Rm} (All resources in the system)
• Two kinds of edges
Request edge – directed edge Pi → Rj
Assignment edge – directed edge Rj → Pi

continuation..
• Process

• Resource Type with 4 instances

• Pi requests instance of Rj

• Pi is holding an instance of Rj

Example of a Resource Allocation Graph


Resource Allocation Graph With A Cycle But No Deadlock


RAG And Deadlock
• If graph contains no cycles => no deadlock.
• If graph contains a cycle =>
– If only one instance per resource type, then deadlock.
– If several instances per resource type, possibility of deadlock.

Thread Scheduling


The thread scheduler is the part of the OS (usually) that is responsible for sharing the available CPUs out between the various threads. How exactly the scheduler works depends on the individual platform, but various modern operating systems (notably Windows and Linux) use largely similar techniques that we'll describe here. We'll also mention some key variations between the platforms.

Note that we'll continue to talk about a single thread scheduler. On multiprocessor systems, there is generally some kind of scheduler per processor, and these then need to be coordinated in some way. (On some systems, switching on different processors is staggered to avoid contention on shared scheduling tables.) Unless otherwise specified, we'll use the term thread scheduler to refer to this overall system of coordinated per-CPU schedulers.

Across platforms, thread scheduling tends to be based on at least the following criteria:

● a priority, or in fact usually multiple "priority" settings that we'll discuss below;

● a quantum, or number of allocated timeslices of CPU, which essentially determines the amount of CPU time a thread is allotted before it is forced to yield the CPU to another thread of the same or lower priority (the system will keep track of the remaining quantum at any given time, plus its default quantum, which could depend on thread type and/or system configuration);

● a state, notably "runnable" vs "waiting";

● metrics about the behaviour of threads, such as recent CPU usage or the time since it last ran (i.e. had a share of CPU), or the fact that it has "just received an event it was waiting for".

Most systems use what we might dub priority-based round-robin scheduling to some extent. The general principles are:

● a thread of higher priority (which is a function of base and local priorities) will preempt a thread of lower priority;

● otherwise, threads of equal priority will essentially take turns at getting an allocated slice or quantum of CPU;

● there are a few extra "tweaks" to make things work.
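The first two principles can be illustrated with a toy model; all names, priorities, and quantum counts below are invented for the example, and real schedulers are far more elaborate:

```python
from collections import deque

def schedule(threads, slices):
    """Toy priority-based round-robin.
    threads: {name: (priority, quanta_needed)}; returns the run order."""
    queues = {}                        # priority -> round-robin queue
    remaining = {}
    for name, (prio, need) in threads.items():
        queues.setdefault(prio, deque()).append(name)
        remaining[name] = need
    order = []
    for _ in range(slices):
        live = [p for p, q in queues.items() if q]
        if not live:
            break
        top = max(live)                # highest priority always wins
        name = queues[top].popleft()
        order.append(name)
        remaining[name] -= 1
        if remaining[name] > 0:
            queues[top].append(name)   # back of its queue: equal priorities take turns
    return order

# Two priority-2 threads alternate; the priority-1 thread runs only after both finish.
print(schedule({"A": (2, 2), "B": (2, 2), "C": (1, 1)}, 10))
# ['A', 'B', 'A', 'B', 'C']
```

The omitted "tweaks" (priority boosts on events, quantum stretching for the foreground window, starvation relief) are exactly the metrics-driven adjustments listed above.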

Multi-processor Scheduling


● Will consider only shared-memory multiprocessors

● Salient features:
- One or more caches: cache affinity is important
- Semaphores/locks typically implemented as spin-locks: preemption during critical sections

Real-Time Scheduling


Distinctive Embedded System Attributes
● Reactive: computations occur in response to external events
Periodic events (e.g. rotating machinery and control loops).
Aperiodic events (e.g. button click).
● Real Time: correctness is partially a function of time
Hard real time.
– Absolute deadline, beyond which answer is useless
– May include minimum time as well as maximum time
Soft real time.
– Approximate deadline
– Utility of answer degrades with time difference from deadline

Real-Time Review
Real time is not just “real fast”
Real time means that correctness of result depends on both functional correctness and time that the result is delivered.
Soft real time
Utility degrades with distance from deadline.
Hard real time
System fails if deadline window is missed.
Firm real time
Result has no utility outside deadline window, but system can withstand a few missed results.

Type of Real-Time Scheduling

Taxonomy of Real-Time Scheduling

Dynamic vs. Static
Dynamic schedule computed at run-time based on tasks really executing.
Static schedule done at compile time for all possible tasks.
Preemptive permits one task to preempt another one of lower priority

Schedulability
● NP-hard if there are any resource dependencies
Prove it definitely cannot be scheduled.
Find a schedule if it is easy to do.
Stuck in the middle somewhere.

Necessary and sufficient schedulability test


Scheduling Parameters
Assume N CPUs available for execution of a single task set
Set of tasks {Ti}
Periods pi.
Deadline di (completion deadline after task is queued).
Execution time ci (amount of CPU time to complete).
Handy values:
Laxity li = di – ci (amount of slack time before Ti must begin execution).
Utilization factor ui = ci/pi (portion of CPU used).

Static Schedule
Assume non-preemptive system with 5 Restrictions:
1. Tasks {Ti} are periodic, with hard deadlines and no jitter
2. Tasks are completely independent
3. Deadline = period pi = di
4. Computation time ci is known and constant
5. Context switching is free (zero cost) INCLUDING network messages to send context to another CPU(!)
Consider least common multiple of periods pi
This considers all possible cases of period phase differences.
Worst case, this time is the product of all the periods; usually it is not that bad.
If you can figure out (somehow) how to schedule this, you win.
Performance
Optimal if all tasks always run; can get up to 100% utilization.
If it runs once, it will always work.
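The least-common-multiple point can be shown directly; the periods below are illustrative. One static table of length lcm(periods), the hyperperiod, covers every possible phase alignment of the periodic tasks:

```python
from math import lcm  # math.lcm is available in Python 3.9+

periods = [4, 5, 10]               # illustrative task periods pi
print(lcm(*periods))               # 20: the static schedule table length
```

Note that 20 is much smaller than the worst-case product 4 × 5 × 10 = 200, which is why the worst case is "usually not that bad".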

EDF: Earliest Deadline First
Assume a preemptive system with dynamic priorities, and (same 5 restrictions)
Scheduling policy:
Always execute the task with the nearest deadline
Performance
Optimal for uniprocessor (supports up to 100% of CPU usage in all situations)
If you’re overloaded, ensures that a lot of tasks don’t complete
– Gives everyone a chance to fail at the expense of the later tasks
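A toy simulation of the policy, one time unit per step; the job tuples are invented for the example:

```python
import heapq

def edf(jobs, horizon):
    """Toy preemptive EDF. jobs: (release, deadline, exec_time, name).
    At every time unit, run the ready job with the nearest deadline."""
    timeline = []
    ready = []                                     # heap of (deadline, name, work_left)
    pending = sorted(jobs, key=lambda j: j[0])     # by release time
    t, i = 0, 0
    while t < horizon:
        while i < len(pending) and pending[i][0] <= t:
            release, deadline, exec_time, name = pending[i]
            heapq.heappush(ready, (deadline, name, exec_time))
            i += 1
        if ready:
            deadline, name, left = heapq.heappop(ready)
            timeline.append(name)                  # run nearest-deadline job
            if left > 1:
                heapq.heappush(ready, (deadline, name, left - 1))
        else:
            timeline.append(None)                  # CPU idle
        t += 1
    return timeline

print(edf([(0, 4, 2, "T1"), (0, 10, 3, "T2"), (2, 5, 1, "T3")], 6))
# ['T1', 'T1', 'T3', 'T2', 'T2', 'T2']
```

Notice how T3, arriving at t=2 with the then-nearest deadline, preempts T2 entirely: the far-deadline task absorbs all the delay, which is also why overload makes the latest tasks fail.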

Least Laxity
Assume a preemptive system with dynamic priorities, and (same 5 restrictions)
Scheduling policy:
Always execute the task with the smallest laxity.
Performance:
Optimal for uniprocessor (supports up to 100% of CPU usage in all situations).
– Similar in properties to EDF
A little more general than EDF for multiprocessors.
– Takes into account that slack time is more meaningful than deadline for tasks of mixed computing sizes
Probably more graceful degradations.
– Laxity measure permits dumping tasks that are hopeless causes

EDF/Least Laxity Tradeoffs
Pro:
If it works, it can get 100% efficiency (on a uniprocessor)
Con:
It is not always feasible to prove that it will work in all cases.
– And having it work for a while doesn’t mean it will always work
Requires dynamic prioritization.
The laxity time hack for global priority has limits.
– May take too many bits to achieve fine-grain temporal ordering
– May take too many bits to achieve a long enough time horizon

Rate Monotonic
Assume a preemptive system with static priorities (same 5 restrictions)

Scheduling policy:
Highest static priority goes to shortest period; always execute highest priority.
Performance:
Provides a guarantee for schedulability with CPU load of ~70%.
– Even with arbitrarily selected task periods
– Can do better if you know about periods & offsets
If all periods are multiple of shortest period, works for CPU load of 100%.
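The ~70% figure is the Liu & Layland utilization bound, n(2^(1/n) − 1), which tends to ln 2 ≈ 0.693 as n grows. It can be computed directly; the task set below is illustrative:

```python
def rm_guaranteed(tasks):
    """tasks: list of (ci, pi). Returns (utilization, bound, guaranteed?).
    Schedulability is guaranteed under rate monotonic if U <= n*(2^(1/n)-1)."""
    n = len(tasks)
    u = sum(c / p for c, p in tasks)
    bound = n * (2 ** (1 / n) - 1)
    return u, bound, u <= bound

u, bound, ok = rm_guaranteed([(1, 4), (1, 5), (2, 10)])
print(round(u, 3), round(bound, 3), ok)  # 0.65 0.78 True
```

The test is sufficient but not necessary: a task set that fails the bound may still be schedulable (e.g. when the periods are harmonically related, up to 100% utilization works).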

SUBSTANTIAL INFORMATION ABOUT THREADS IN AT LEAST THREE OSes


LINUX THREADS

- Linux refers to them as tasks rather than threads
- Thread creation is done through clone() system call
- clone() allows a child task to share the address space of the parent task (process)

----------

WINDOWS NT’s Threads

Processes in NT can consist of one or more threads.

- Primary thread - When a process is created, one thread is generated along with it.

This object is then scheduled on a system-wide basis by the kernel to execute on a processor.
After the primary thread has started, it can create other threads that share its address space and system resources but have independent contexts, which include execution stacks and thread-specific data. A thread can execute any part of a process' code, including a part currently being executed by another thread.

It is through threads, provided in the Win32 application programmer interface (API), that Windows NT allows programmers to exploit the benefits of concurrency and parallelism.

- Fiber - is NT's smallest user-level object of execution. It executes in the context of a thread and is unknown to the operating system kernel. A thread can consist of one or more fibers, as determined by the application programmer. Some literature [1,11] assumes that there is a one-to-one mapping of user-level objects to kernel-level objects; this is inaccurate. Windows NT does provide the means for many-to-many scheduling. However, NT's design is poorly documented, and the application programmer is responsible for the control of fibers, such as allocating memory, scheduling them on threads, and preemption.

----------

WINDOWS XP THREAD

Implements the one-to-one mapping
Each thread contains
- A thread id
- Register set
- Separate user and kernel stacks
- Private data storage area
The register set, stacks, and private storage area are known as the context of the thread
The primary data structures of a thread include:
- ETHREAD (executive thread block)
- KTHREAD (kernel thread block)
- TEB (thread environment block)

CPU SCHEDULING ALGORITHMS


Scheduling Algorithms

1. First-come, first-served (FCFS) scheduling
2. Shortest-job-first (SJF) scheduling
3. Priority scheduling
4. Round-robin (RR) scheduling
5. Multilevel queue scheduling
6. Multilevel feedback queue scheduling

First-come, first-served (FCFS) scheduling
-is the simplest scheduling algorithm, but it can cause short processes to wait for very long processes.

Shortest-job-first (SJF) scheduling
-is provably optimal, providing the shortest average waiting time. Implementing SJF scheduling is difficult because predicting the length of the next CPU burst is difficult. The SJF algorithm is a special case of the general priority-scheduling algorithm.

Priority scheduling
- simply allocates the CPU to the highest-priority process. Both priority and SJF scheduling may suffer from starvation. Aging is a technique to prevent starvation.
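The waiting-time difference between FCFS and SJF can be illustrated with a quick calculation; the burst lengths are illustrative and all processes arrive at time 0:

```python
def avg_wait(bursts):
    """Average waiting time when processes run in the given order."""
    wait, t = 0, 0
    for b in bursts:
        wait += t            # this process waited for everything before it
        t += b
    return wait / len(bursts)

bursts = [24, 3, 3]                       # illustrative CPU bursts
print(avg_wait(bursts))                   # FCFS order: 17.0
print(avg_wait(sorted(bursts)))           # SJF order (shortest first): 3.0
```

Running the short jobs first drops the average wait from 17 to 3 time units: moving a short burst ahead of a long one helps the short job far more than it hurts the long one, which is the intuition behind SJF's optimality.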

Round-robin (RR) scheduling
- is more appropriate for a time-shared (interactive) system. RR scheduling allocates the CPU to the first process in the ready queue for q time units, where q is the time quantum. After q time units, if the process has not relinquished the CPU, it is preempted and the process is put at the tail of the ready queue. The major problem is the selection of the time quantum. If the quantum is too large, RR scheduling degenerates to FCFS scheduling; if the quantum is too small, scheduling overhead in the form of context-switch time becomes excessive.
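A minimal sketch of the RR mechanism just described, ignoring context-switch cost; the quantum and burst lengths are illustrative and all processes arrive at time 0:

```python
from collections import deque

def round_robin(bursts, q):
    """Each process runs for at most q time units, then goes to the
    tail of the ready queue. Returns {pid: completion_time}."""
    queue = deque(enumerate(bursts))
    t, finish = 0, {}
    while queue:
        pid, left = queue.popleft()
        run = min(q, left)
        t += run
        if left > run:
            queue.append((pid, left - run))   # preempted: back of the queue
        else:
            finish[pid] = t                   # relinquished the CPU for good
    return finish

print(round_robin([24, 3, 3], 4))  # {1: 7, 2: 10, 0: 30}
```

With q = 4 the short processes finish at t = 7 and t = 10 instead of waiting behind the 24-unit burst; with q = 24 the same run degenerates to FCFS, matching the quantum-selection trade-off described above.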

The FCFS algorithm is nonpreemptive; the RR algorithm is preemptive. The SJF and priority algorithms may be either preemptive or nonpreemptive.

Multilevel queue algorithms
-allow different algorithms to be used for various classes of processes. The most common division is a foreground (interactive) queue, which uses RR scheduling, and a background (batch) queue, which uses FCFS scheduling.

Multilevel feedback queues
-allow processes to move from one queue to another.

Because such a wide variety of scheduling algorithms are available, we need methods to select among them. Analytic methods use mathematical analysis to determine the performance of an algorithm. Simulation methods determine performance by imitating the scheduling algorithm on a “representative” sample of processes, and computing the resulting performance.

Operating systems supporting threads at the kernel level must schedule threads - not processes - for execution. This is the case with Solaris 2 and Windows 2000, where both systems schedule threads using a preemptive priority-based scheduling algorithm, including support for real-time threads. The Linux process scheduler also uses a priority-based algorithm with real-time support as well. The scheduling algorithms for these three operating systems typically favor interactive over batch and CPU-bound processes.

