
Swapping Policies

_mojo_ 2021. 10. 29. 23:16

The average memory access time (AMAT):

 

AMAT = (P(hit) * T(M)) + (P(miss) * T(D))

 

T(M) << T(D); that is, the cost of accessing the disk is far greater than the cost of accessing memory.

 

To reduce AMAT, we therefore need to increase the hit rate so that data can be found in memory (the cache).
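A minimal numeric sketch of why the hit rate dominates AMAT (the latency values of 100 ns for memory and 10 ms for disk are assumptions for illustration, not numbers from the lecture):

```python
# Assumed latencies: ~100 ns for a DRAM access, ~10 ms for a disk access.
T_M = 100e-9
T_D = 10e-3

for p_hit in (0.90, 0.99, 0.999):
    p_miss = 1.0 - p_hit
    amat = p_hit * T_M + p_miss * T_D     # AMAT = P(hit)*T(M) + P(miss)*T(D)
    print(f"P(hit) = {p_hit:.3f} -> AMAT = {amat * 1e3:.4f} ms")
```

Even at a 99% hit rate, AMAT is still dominated by the disk term, which is why increasing hits matters so much.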

 

 

DRAM - volatile, Byte-addressable

 

HDD/SSD - non-volatile block devices; the CPU cannot load/store to them directly.

 

 

Workload Example : The No-Locality Workload

 

 

Each reference is to a random page within the set of accessed pages.

 

- Workload accesses 100 unique pages over time.

- The next page to reference is chosen at random.

 

 

When the cache is large enough to fit the entire workload,

it also doesn't matter which policy you use.

 

No-locality means there is no reuse pattern to exploit; every page is equally likely to be the next reference.

 

The hit rates of LRU, FIFO, and RAND all increase linearly with the cache size.
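A minimal simulation sketch of this workload (the helper below and its parameters are assumptions chosen to mirror the description: 100 unique pages, 10,000 random references):

```python
import random
from collections import OrderedDict, deque

def hit_rate(policy, cache_size, refs):
    """Hit rate of an LRU / FIFO / RAND page cache on a reference string."""
    hits = 0
    if policy == "LRU":
        cache = OrderedDict()
        for p in refs:
            if p in cache:
                hits += 1
                cache.move_to_end(p)           # most recently used moves to the back
            else:
                if len(cache) >= cache_size:
                    cache.popitem(last=False)  # evict the least recently used page
                cache[p] = True
    elif policy == "FIFO":
        cache, order = set(), deque()
        for p in refs:
            if p in cache:
                hits += 1
            else:
                if len(cache) >= cache_size:
                    cache.discard(order.popleft())   # evict the oldest page
                cache.add(p)
                order.append(p)
    else:  # RAND
        cache = []
        for p in refs:
            if p in cache:
                hits += 1
            else:
                if len(cache) >= cache_size:
                    cache.pop(random.randrange(len(cache)))  # evict a random page
                cache.append(p)
    return hits / len(refs)

# No-locality workload: every reference picks one of 100 pages uniformly at random.
refs = [random.randrange(100) for _ in range(10_000)]
for size in (20, 40, 60, 80, 100):
    print(size, {pol: round(hit_rate(pol, size, refs), 2)
                 for pol in ("LRU", "FIFO", "RAND")})
```

Each policy's hit rate should come out close to cache_size / 100, and once the cache holds all 100 pages the choice of policy no longer matters.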

 

 

Workload Example : The 80-20 Workload

 

 

Exhibits locality : 80% of the references are made to 20% of the pages (the hot pages).

 

The remaining 20% of the references are made to the remaining 80% of the pages.

 

 

LRU is more likely to hold onto the hot pages.

 

Because the 80-20 workload has locality, we can see that LRU is effective when locality is present.
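A minimal sketch of generating such a reference string (the function name and parameters are assumptions; the 80/20 split follows the description above):

```python
import random

def ref_80_20(n_refs=10_000, n_pages=100, hot_frac=0.20, hot_prob=0.80):
    """80% of references go to the hot 20% of pages, 20% go to the cold 80%."""
    n_hot = int(n_pages * hot_frac)
    refs = []
    for _ in range(n_refs):
        if random.random() < hot_prob:
            refs.append(random.randrange(n_hot))           # a hot page
        else:
            refs.append(random.randrange(n_hot, n_pages))  # a cold page
    return refs

refs = ref_80_20()
print(f"share of references to the 20 hot pages: {sum(p < 20 for p in refs) / len(refs):.2f}")
```

Feeding this reference string into the hit_rate sketch above, LRU should beat FIFO and RAND at intermediate cache sizes, since it tends to keep the hot pages resident.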

 

 

Workload Example : The Looping Sequential

 

 

Refer to 50 pages in sequence.

 

- Starting at 0, then 1, ... up to page 49, and then we loop, repeating those accesses,
  for a total of 10,000 accesses to 50 unique pages.

 

 

Pages 0 through 49 (the working set) are read sequentially, over and over.

 

Up to a cache size of 49, the working set never fits in the cache, so no hits occur at all (under LRU or FIFO).

 

That is, when cache size < working-set size, misses occur continuously.
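A minimal LRU-only sketch of this worst case (the cache sizes tried are assumptions; the workload follows the description above):

```python
from collections import OrderedDict

def lru_hit_rate(cache_size, refs):
    cache, hits = OrderedDict(), 0
    for p in refs:
        if p in cache:
            hits += 1
            cache.move_to_end(p)
        else:
            if len(cache) >= cache_size:
                cache.popitem(last=False)   # evict the least recently used page
            cache[p] = True
    return hits / len(refs)

# Looping-sequential workload: pages 0..49 repeated, 10,000 accesses in total.
refs = [i % 50 for i in range(10_000)]
for size in (25, 49, 50, 100):
    print(size, lru_hit_rate(size, refs))
```

With 49 or fewer frames the hit rate is 0.0: the page about to be reused is always the one LRU just evicted. With 50 frames nearly every access hits (only the first pass misses).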

 

 

Approximating LRU : Clock Algorithm

 

 

Requires hardware support : a use bit

 

- Whenever a page is referenced, the use bit is set by hardware to 1.
- Hardware never clears the bit, though; that is the responsibility of the OS.

 

Clock Algorithm

 

- All pages of the system are arranged in a circular list.
- A clock hand points to some particular page to begin with.
- The algorithm continues until it finds a page whose use bit is set to 0.
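A minimal sketch of the eviction step (representing the circular list as a Python list plus a dict of use bits is an assumption for illustration):

```python
def clock_evict(pages, use_bit, hand):
    """Advance the clock hand until a page with use bit 0 is found.

    pages   : list of resident page numbers (treated as a circular list)
    use_bit : dict mapping page -> 0 or 1 (set to 1 by 'hardware' on reference)
    hand    : index the clock hand currently points at
    Returns (index_of_victim, new_hand_position).
    """
    while True:
        page = pages[hand]
        if use_bit[page] == 0:
            return hand, (hand + 1) % len(pages)   # found a victim
        use_bit[page] = 0                          # clearing the bit is the OS's job
        hand = (hand + 1) % len(pages)             # move on to the next page
```

On every reference (and on load), the caller would set use_bit[page] = 1, playing the role of the hardware.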

 

 

 

 

The clock algorithm doesn't do as well as perfect LRU, but it does better than approaches that don't consider history at all.

 

 

It is almost on par with LRU, but LRU still looks the best.

 

 

Considering Dirty Pages

 

 

The hardware includes a modified bit (a.k.a dirty bit)

 

- If a page has been modified and is thus dirty, it must be written back to disk to evict it.

- If a page has not been modified, the eviction is free.
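A minimal sketch of what the dirty bit changes about eviction cost (counting disk I/Os only; the helper is an assumption for illustration):

```python
def eviction_io_count(dirty):
    """A clean victim costs only the read of the incoming page;
    a dirty victim must be written back first, so one extra disk I/O."""
    write_back = 1 if dirty else 0
    read_new_page = 1
    return write_back + read_new_page

print(eviction_io_count(dirty=False))  # 1 I/O
print(eviction_io_count(dirty=True))   # 2 I/Os
```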

 

 

Prefetching : The OS guesses that a page is about to be used, and thus brings it in ahead of time.

 

 

 

Clustering, Grouping

=> Collect a number of pending writes together in memory and write them to disk in one write.

 

- Perform a single large write more efficiently than many small ones.
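A minimal sketch of clustering pending writes (the file name, page size, and page contents are assumptions; the point is the single large write):

```python
import os

PAGE_SIZE = 4096
pending = [bytes([i]) * PAGE_SIZE for i in range(8)]   # 8 dirty pages waiting to go out

with open("swapfile.bin", "wb") as swap:
    swap.write(b"".join(pending))   # one large write instead of eight small ones
    os.fsync(swap.fileno())         # push it out to the device
```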

 

Thrashing

=> Memory is oversubscribed : the memory demands of the set of running processes exceed the available physical memory.

 

- Decide not to run a subset of the processes.

- The reduced set of processes' working sets then fit in memory.

 

 

If too many programs are running at once, the cost of swapping becomes excessive.

 

Past a certain point, CPU utilization drops sharply, which means the swapping cost has become very high.

 

As the degree of multiprogramming increases, CPU utilization climbs for a while, but at some point every process ends up blocked on disk operations.

 

In this state, memory accesses frequently miss, i.e., the page-fault rate is high, which causes severe performance degradation.

 

 

Implementing LRU Algorithm

 

 

1. Counter implementation

 

- Every page table entry has a counter.

- Every time a page is referenced through this entry, copy the (logical) clock into the counter.

- The clock is incremented for every memory reference.

- When a page needs to be replaced, look at the counters to determine which page is to be replaced.
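A minimal sketch of the counter idea (done over a Python dict standing in for page-table entries):

```python
class CounterLRU:
    """Stamp each page with a logical clock on every reference;
    on replacement, evict the page with the smallest stamp."""

    def __init__(self, n_frames):
        self.n_frames = n_frames
        self.clock = 0
        self.stamp = {}                    # page -> time of last reference

    def access(self, page):
        self.clock += 1                    # the clock ticks on every memory reference
        if page not in self.stamp and len(self.stamp) >= self.n_frames:
            victim = min(self.stamp, key=self.stamp.get)   # search all counters
            del self.stamp[victim]
        self.stamp[page] = self.clock
```

The weakness shows up in the replacement path: it has to search every counter to find the minimum.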

 

2. Stack implementation

 

- Keep a stack of page numbers in a doubly linked list form

- Whenever a page is referenced : move it to the top; the top of the stack is always the most
   recently used page and the bottom is the LRU page.

- Each update is a little more expensive.

- No search for a replacement.
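A minimal sketch of the stack idea (an OrderedDict stands in for the doubly linked list described above):

```python
from collections import OrderedDict

class StackLRU:
    """Most recently used page at one end, least recently used at the other,
    so eviction needs no search."""

    def __init__(self, n_frames):
        self.n_frames = n_frames
        self.stack = OrderedDict()

    def access(self, page):
        if page in self.stack:
            self.stack.move_to_end(page)        # move to the top on every reference
        else:
            if len(self.stack) >= self.n_frames:
                self.stack.popitem(last=False)  # the bottom of the stack is the LRU victim
            self.stack[page] = True
```

Compared with the counter version, each access pays a small constant update cost, but replacement is immediate.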

 

 

 

LRU Approximation Algorithms

 

 

 

Reference Bit Algorithm

 

- With each page, associate a reference bit (initially = 0)

- When a page is referenced, set the reference bit to 1 (by hardware)

- Replace the one which is 0 (if one exists).

- We know which pages were used and which were not used, but we do not know the order.

 

Additional Reference Bits Algorithm (Reference Byte algorithm)

 

- Use multiple bits.

- Record the reference bits at regular intervals.

 

ex)

Keep an 8-bit byte for each page in a table in memory.

 

At regular intervals, shift the reference bit into the high-order bit of its 8-bit byte,
shifting the other bits right by 1 bit and discarding the low-order bit.

 

These 8-bit shift registers contain the history of page use for the last eight time periods.

 

When a page fault occurs, we interpret these 8-bit bytes as unsigned integers; the page
with the lowest number is the LRU page and can be replaced.
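A minimal sketch of one aging tick (page names and the way the referenced set is passed in are assumptions; the shift itself follows the description above):

```python
def age_pages(history, referenced):
    """Shift each page's 8-bit history right by one and place the current
    reference bit into the high-order bit."""
    for page in history:
        r = 1 if page in referenced else 0
        history[page] = (r << 7) | (history[page] >> 1)

history = {"A": 0, "B": 0, "C": 0}
age_pages(history, referenced={"A", "C"})   # interval 1: A and C were referenced
age_pages(history, referenced={"A"})        # interval 2: only A was referenced
# A = 0b11000000, C = 0b01000000, B = 0b00000000
victim = min(history, key=history.get)      # lowest value = least recently used
print(victim)                               # B
```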

 

 

Second Chance (Clock) Algorithm

 

- Keep a circular list of the pages loaded in memory

- The reference bit is set to 1 when a page is initially loaded

- When it's time to replace a page in the page fault handler, we inspect the reference bits of the pages

 

If the value is 0, replace

If the value is 1, then :

   - Set reference bit to 0 (give second chance to see if it will be referenced again)

   - Leave the page in memory

   - Inspect next page in clock order

 

Reference string : a b g a d e a b a d e g d e

Number of frames = 4

 

 

Number of page faults = 10
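A minimal simulation sketch that replays the reference string above, with the reference bit set to 1 on load and on every hit as described:

```python
def second_chance_faults(refs, n_frames):
    frames, ref_bit, hand, faults = [], {}, 0, 0
    for p in refs:
        if p in frames:
            ref_bit[p] = 1                       # hit: 'hardware' sets the reference bit
            continue
        faults += 1
        if len(frames) < n_frames:
            frames.append(p)                     # a free frame is still available
        else:
            while ref_bit[frames[hand]] == 1:    # second chance: clear the bit and move on
                ref_bit[frames[hand]] = 0
                hand = (hand + 1) % n_frames
            del ref_bit[frames[hand]]
            frames[hand] = p                     # replace the victim
            hand = (hand + 1) % n_frames
        ref_bit[p] = 1                           # bit is 1 when the page is first loaded
    return faults

refs = list("abgadeabadegde")                    # a b g a d e a b a d e g d e
print(second_chance_faults(refs, n_frames=4))    # 10
```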

 

 

Enhanced Second Chance Algorithm

 

- Enhance the second chance algorithm and consider both reference and modify bits

- Four classes

   (0, 0) : neither recently used nor modified - best candidate!

   (0, 1) : not recently used but modified

   (1, 0) : recently used but clean

   (1, 1) : recently used and modified

 

- When page replacement is called for, examine the class to which that page

   belongs, and replace the first page encountered in the lowest nonempty class

- May have to scan the circular queue several times
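A minimal sketch of how the four (reference, modify) classes order eviction candidates (this collapses the repeated clock scans into a single pass, which is a simplification):

```python
def page_class(ref_bit, mod_bit):
    # (0,0) -> 0, (0,1) -> 1, (1,0) -> 2, (1,1) -> 3; lower means a better victim
    return 2 * ref_bit + mod_bit

def pick_victim(pages):
    """pages: list of (page, ref_bit, mod_bit); pick one from the lowest nonempty class."""
    return min(pages, key=lambda t: page_class(t[1], t[2]))[0]

print(pick_victim([("A", 1, 1), ("B", 0, 1), ("C", 1, 0)]))   # B: not recently used, but dirty
```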

 
