
Swapping Mechanisms

_mojo_ 2021. 10. 27. 20:04

Beyond Physical Memory : Mechanisms

 

 

 

Use part of the disk as memory.

 

The OS needs a place to stash away portions of address spaces that currently aren't in great demand.

 

In modern systems, this role is usually served by a hard disk drive.

 

 

The memory hierarchy can be divided into registers, cache, main memory, and mass storage.

 

Registers, cache, and main memory are volatile, so their contents disappear when power is lost (e.g., DRAM),

whereas mass storage is non-volatile, so its contents survive power loss (e.g., SSDs).

 

A hard disk drive serves as the place to hold the data that overflows main memory.

 

 

Swap Space

 

 

Reserve some space on the disk for moving pages back and forth.

 

The OS manages swap space in page-sized units and needs to remember where each swapped-out page lives on disk.

 

 

 

 

Present Bit

 

 

Add some machinery higher up in the system in order to support swapping the pages to and from the disk.

 

When the hardware looks in the PTE, it may find that the page is not present in physical memory.
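To make this concrete, here is a minimal sketch in C of a page-table entry that carries a present bit along with either a PFN or a swap location. The field names and layout are illustrative assumptions, not a real architecture's PTE format.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Illustrative PTE sketch: field names/layout are assumptions,
 * not a real architecture's format. */
typedef struct {
    bool     present;    /* true  = page is in physical memory        */
    unsigned protect;    /* protection bits (read/write/execute)      */
    uint64_t pfn;        /* physical frame number, valid when present */
    uint64_t disk_addr;  /* swap-space location, valid when !present  */
} PTE;

int main(void) {
    PTE pte = { .present = false, .protect = 0x3, .disk_addr = 1234 };

    if (!pte.present) {
        /* Hardware would raise PAGE_FAULT here; the OS handler reads
         * the page back from disk_addr, picks a free frame, then
         * flips present and records the PFN. */
        printf("page fault: page lives at swap offset %llu\n",
               (unsigned long long)pte.disk_addr);
        pte.pfn = 42;              /* hypothetical free frame */
        pte.present = true;
    }
    printf("present=%d pfn=%llu\n", pte.present,
           (unsigned long long)pte.pfn);
    return 0;
}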

 

 

Page fault 

 

- Accessing a page that is not in physical memory.

- If a page is not present and has been swapped to disk, the OS needs to swap the page back into memory in order to service the page fault.

 

Page replacement

 

- The OS pages out pages to make room for the new pages it is about to bring in.

- The process of picking a page to kick out (replace) is known as the page-replacement policy.

 

 

 

Looking at the figure above, you can see that the TLB step is omitted (the flow is still easy to follow).

 

1. Reference: the VPN has no entry in the TLB, so the page table is referenced.

 

2. Trap: the hardware traps into the OS, which handles the fault in kernel mode (much like a system call is handled).

 

3. Check storage for the page: this step implies the present bit was 0.

 

4. Get the page: the page for that VPN is found on disk and brought into a physical frame (PFN).

 

5. Reset the page table entry: now that the page is in physical memory, the present bit must be set to 1 (and the PFN recorded).

 

6. Retry instruction: the translation is inserted into the TLB and the faulting instruction is retried.

 

 

Page Fault Control Flow - Hardware

 

 

VPN = (VirtualAddress & VPN_MASK) >> SHIFT
(Success, TlbEntry) = TLB_Lookup(VPN)
if(Success == True){
    if(CanAccess(TlbEntry.ProtectBits) == True){
        Offset = VirtualAddress & OFFSET_MASK
        PhysAddr = (TlbEntry.PFN << SHIFT) | Offset
        Register = AccessMemory(PhysAddr)
    }
    else RaiseException(PROTECTION_FAULT)
}
else{
    PTEAddr = PTBR + (VPN * sizeof(PTE))
    PTE = AccessMemory(PTEAddr)
    if(PTE.Valid == False)
        RaiseException(SEGMENTATION_FAULT)
    else{
        if(CanAccess(PTE.ProtectBits) == False)
            RaiseException(PROTECTION_FAULT)
        else if(PTE.Present == True){
            TLB_Insert(VPN, PTE.PFN, PTE.ProtectBits)
            RetryInstruction()
        }
        else if(PTE.Present == False)
            RaiseException(PAGE_FAULT)
    }
}

 

 

This code should look very familiar by now.

 

Let's look at the PTE.Present == False case.

 

When the hardware finds that the page is not in memory, i.e., it is on disk, it raises a PAGE_FAULT exception.

 

 

Page Fault Control Flow - Software

 

 

PFN = FindFreePhysicalPage()
if(PFN == -1) // no free page found
    PFN = EvictPage() // run replacement algorithm
DiskRead(PTE.DiskAddr, PFN) // sleep (waiting for I/O)
PTE.Present = True // update page table: set present bit
PTE.PFN = PFN // and record the translation (PFN)
RetryInstruction() // retry instruction

 

The OS must find a physical frame for the soon-to-be-faulted-in page to reside within.

 

If there is no free page, the OS has to wait for the replacement algorithm to run and kick some pages out of memory, freeing them up for use.

 

 

When to perform replacement

 

 

Lazy approach

 

- The OS waits until memory is entirely full, and only then replaces a page to make room for some other page.

- This is unrealistic.

 

Swap Daemon, Page Daemon

 

- When there are fewer than LW (low watermark) pages available, a background thread that is responsible for freeing memory runs (a toy sketch follows below).

- The thread evicts pages until there are HW (high watermark) pages available.
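A toy sketch of the watermark idea, not real kernel code: LW, HW, the starting free-frame count, and evict_one_page() are all made-up stand-ins for the real mechanisms.

#include <stdio.h>

#define LW 64     /* low watermark: background reclaim kicks in (assumed value) */
#define HW 256    /* high watermark: reclaim stops (assumed value)              */

static int free_frames = 40;               /* pretend memory is nearly full */

static void evict_one_page(void) { free_frames++; }  /* stand-in for the replacement policy */

int main(void) {
    /* One pass of the daemon. In a real system this body would run in a
     * background thread that sleeps until free memory drops below LW. */
    if (free_frames < LW) {
        int evicted = 0;
        while (free_frames < HW) {
            evict_one_page();              /* write dirty pages out, then free them */
            evicted++;
        }
        printf("daemon evicted %d pages; %d frames now free\n",
               evicted, free_frames);
    }
    return 0;
}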

 

 


 

Goal of Cache Management

 

 

To minimize the number of cache misses.

 

Performance is summarized by the average memory access time (AMAT): AMAT = T_M + P_Miss * T_D, where T_M is the time to access memory, T_D the time to access disk, and P_Miss the probability of missing in memory.
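A quick back-of-the-envelope calculation, with assumed latencies, shows why even a tiny miss rate dominates AMAT:

#include <stdio.h>

/* AMAT = T_M + P_Miss * T_D. The latencies below are rough, assumed values. */
int main(void) {
    double t_mem  = 100e-9;   /* ~100 ns to access DRAM (assumed)  */
    double t_disk = 10e-3;    /* ~10 ms to access disk (assumed)   */
    double p_miss = 0.001;    /* 0.1% of accesses go to disk       */

    double amat = t_mem + p_miss * t_disk;
    printf("AMAT = %.3f microseconds\n", amat * 1e6);
    /* Even a 0.1% miss rate makes AMAT ~10 us, about 100x slower
     * than a pure DRAM access. */
    return 0;
}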

 

 

The Optimal Replacement Policy

 

 

Leads to the fewest number of misses overall.

 

- Replace the page that will be accessed furthest in the future.

- Results in the fewest possible cache misses.

 

Serves only as a comparison point, to know how close we are to perfect.
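A small simulation of the furthest-in-the-future rule, assuming the textbook reference trace 0 1 2 0 1 3 0 3 1 2 1 and a 3-page cache (an illustrative sketch, not code from the course); it reproduces the 6/11 = 54.5% hit rate traced below.

#include <stdio.h>

/* Return the index (within cache[]) of the page whose next use is
 * furthest in the future, scanning the trace from position pos. */
static int pick_victim(const int cache[], int cached, const int trace[],
                       int n, int pos) {
    int victim = 0, best_dist = -1;
    for (int i = 0; i < cached; i++) {
        int dist = n - pos;                     /* "never used again" */
        for (int j = pos; j < n; j++)
            if (trace[j] == cache[i]) { dist = j - pos; break; }
        if (dist > best_dist) { best_dist = dist; victim = i; }
    }
    return victim;
}

int main(void) {
    int trace[] = {0, 1, 2, 0, 1, 3, 0, 3, 1, 2, 1};   /* assumed trace */
    int n = sizeof(trace) / sizeof(trace[0]);
    int cache[3], cached = 0, hits = 0;

    for (int pos = 0; pos < n; pos++) {
        int page = trace[pos], found = 0;
        for (int i = 0; i < cached; i++)
            if (cache[i] == page) { found = 1; break; }
        if (found) { hits++; continue; }
        if (cached < 3) { cache[cached++] = page; continue; }
        cache[pick_victim(cache, cached, trace, n, pos + 1)] = page;
    }
    printf("optimal: %d/%d hits (%.1f%%)\n", hits, n, 100.0 * hits / n);
    return 0;
}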

 

 

Tracing the Optimal Policy

 

 

 

Hit rate is Hits / (Hits + Misses) * 100 = 6 / 11 * 100 = 54.5 %

 

This is the best possible policy, but since the future cannot be known, it cannot actually be implemented.

So we will look at several practical policies that try to approximate it.

 

 

A Simple Policy : FIFO

 

 

 

Pages are placed in a queue when they enter the system.

 

When a replacement occurs, the page on the tail of the queue (the "first-in" page) is evicted.

 

It is simple to implement, but can't determine the importance of blocks.
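The same assumed trace run through a FIFO queue (again just a sketch under the same assumptions) reproduces the 4/11 = 36.4% hit rate shown below.

#include <stdio.h>

/* FIFO replacement on the assumed trace with a 3-frame cache. The
 * "queue" is a circular index: the slot filled longest ago (the
 * first-in page) is the one overwritten. */
int main(void) {
    int trace[] = {0, 1, 2, 0, 1, 3, 0, 3, 1, 2, 1};   /* assumed trace */
    int n = sizeof(trace) / sizeof(trace[0]);
    int cache[3], cached = 0, oldest = 0, hits = 0;

    for (int pos = 0; pos < n; pos++) {
        int page = trace[pos], found = 0;
        for (int i = 0; i < cached; i++)
            if (cache[i] == page) { found = 1; break; }
        if (found) { hits++; continue; }
        if (cached < 3) {
            cache[cached++] = page;          /* free frame available   */
        } else {
            cache[oldest] = page;            /* evict the first-in page */
            oldest = (oldest + 1) % 3;
        }
    }
    printf("FIFO: %d/%d hits (%.1f%%)\n", hits, n, 100.0 * hits / n);
    return 0;
}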

 

 

 

 

Hit rate is Hits / (Hits + Misses) * 100 = 4 / 11 * 100 = 36.4 %

 

Even though page 0 had been accessed a number of times, FIFO still kicks it out.

 

 

BELADY'S ANOMALY

 

 

We would expect the cache hit rate to increase when the cache gets larger.

 

But in this case, with FIFO, it gets worse.

 

 

 

With FIFO, as the cache grows to hold more page frames, the hit rate sometimes improves (2 => 3 frames, 4 => 5 frames) and sometimes gets worse (3 => 4 frames).
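The classic demonstration uses the trace 1 2 3 4 1 2 5 1 2 3 4 5 (the standard textbook example, not necessarily the one in the figure above): under FIFO it gets 3 hits with 3 frames but only 2 hits with 4 frames.

#include <stdio.h>

#define MAXF 8   /* upper bound on frames for this demo */

/* Count FIFO hits for a given number of frames. */
static int fifo_hits(const int trace[], int n, int frames) {
    int cache[MAXF], cached = 0, oldest = 0, hits = 0;
    for (int pos = 0; pos < n; pos++) {
        int found = 0;
        for (int i = 0; i < cached; i++)
            if (cache[i] == trace[pos]) { found = 1; break; }
        if (found) { hits++; continue; }
        if (cached < frames) cache[cached++] = trace[pos];
        else { cache[oldest] = trace[pos]; oldest = (oldest + 1) % frames; }
    }
    return hits;
}

int main(void) {
    int trace[] = {1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5};
    int n = sizeof(trace) / sizeof(trace[0]);
    printf("3 frames: %d hits\n", fifo_hits(trace, n, 3));  /* 3 hits  */
    printf("4 frames: %d hits\n", fifo_hits(trace, n, 4));  /* 2 hits! */
    return 0;
}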

 

FIFO falls far short of the optimal policy.

Let's look at the next policy.

 

 

Another Simple Policy : Random

 

 

Picks a random page to replace under memory pressure.

 

It doesn't really try to be too intelligent in picking which blocks to evict.

 

How Random does depends entirely upon how lucky it gets in its choices.
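A sketch of Random on the same assumed trace and cache size; running it with a few different seeds shows how much the result swings with luck.

#include <stdio.h>
#include <stdlib.h>

/* Random replacement on the assumed trace, 3 frames, several seeds. */
int main(void) {
    int trace[] = {0, 1, 2, 0, 1, 3, 0, 3, 1, 2, 1};   /* assumed trace */
    int n = sizeof(trace) / sizeof(trace[0]);

    for (unsigned seed = 0; seed < 5; seed++) {
        srand(seed);
        int cache[3], cached = 0, hits = 0;
        for (int pos = 0; pos < n; pos++) {
            int found = 0;
            for (int i = 0; i < cached; i++)
                if (cache[i] == trace[pos]) { found = 1; break; }
            if (found) { hits++; continue; }
            if (cached < 3) cache[cached++] = trace[pos];
            else cache[rand() % 3] = trace[pos];     /* evict a random frame */
        }
        printf("seed %u: %d/%d hits\n", seed, hits, n);
    }
    return 0;
}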

 

 

Hit rate is Hits / (Hits + Misses) * 100 = 5 / 11 * 100 = 45.5 %

 

Sometimes, Random is as good as optimal, achieving 6 hits on the example trace.

 

 

 

Using History

 

 

Learn from the past and use history.

 

Two types of historical information: frequency (how often a page has been accessed) and recency (how recently it was accessed).

 

 

LRU: replace the least-recently-used page.

 

 

Hit rate is Hits / (Hits + Misses) * 100 = 6 / 11 * 100 = 54.5 %

 

This is the most impressive policy we have seen so far (matching the optimal policy's hit rate on this trace).
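A sketch of LRU using last-use timestamps, on the same assumed trace and cache size; it reproduces the 6/11 = 54.5% result.

#include <stdio.h>

/* LRU: each cached page records when it was last used; the page with
 * the oldest timestamp is evicted. */
int main(void) {
    int trace[] = {0, 1, 2, 0, 1, 3, 0, 3, 1, 2, 1};   /* assumed trace */
    int n = sizeof(trace) / sizeof(trace[0]);
    int cache[3], last_use[3], cached = 0, hits = 0;

    for (int pos = 0; pos < n; pos++) {
        int page = trace[pos], found = -1;
        for (int i = 0; i < cached; i++)
            if (cache[i] == page) { found = i; break; }
        if (found >= 0) {
            hits++;
            last_use[found] = pos;             /* refresh recency on a hit */
            continue;
        }
        if (cached < 3) {
            cache[cached] = page;
            last_use[cached++] = pos;
        } else {
            int lru = 0;                        /* evict least recently used */
            for (int i = 1; i < 3; i++)
                if (last_use[i] < last_use[lru]) lru = i;
            cache[lru] = page;
            last_use[lru] = pos;
        }
    }
    printf("LRU: %d/%d hits (%.1f%%)\n", hits, n, 100.0 * hits / n);
    return 0;
}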
