Memory Management


Taxonomy Path

https://en.wikipedia.org/wiki/Memory_management

"Memory allocation" redirects here. For memory allocation in the brain, see Neuronal memory allocation.

This article is about memory management in an address space. For management of physical memory, see Memory management (operating systems).


Memory management is a form of resource management applied to computer memory. The essential requirement of memory management is to provide ways to dynamically allocate portions of memory to programs at their request, and to free them for reuse when they are no longer needed. This is critical to any advanced computer system where more than a single process might be underway at any time.[1]

Several methods have been devised that increase the effectiveness of memory management. Virtual memory systems separate the memory addresses used by a process from actual physical addresses, allowing separation of processes and increasing the size of the virtual address space beyond the available amount of RAM using paging or swapping to secondary storage. The quality of the virtual memory manager can have an extensive effect on overall system performance.

In some operating systems, e.g. OS/360 and successors,[2] memory is managed by the operating system.[note 1] In other operating systems, e.g. Unix-like operating systems, memory is managed at the application level.

Memory management within an address space is generally categorized as either manual memory management or automatic memory management.


Manual memory management

(Figure: an example of external fragmentation)

Main article: Manual memory management

The task of fulfilling an allocation request consists of locating a block of unused memory of sufficient size. Memory requests are satisfied by allocating portions from a large pool[note 2] of memory called the heap or free store.[note 3] At any given time, some parts of the heap are in use, while some are "free" (unused) and thus available for future allocations.
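
For concreteness, the C standard library exposes the heap through malloc and free; the sketch below is a minimal illustration of requesting a block from the free store, using it, and returning it for reuse.

```c
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    /* Request a block of heap memory large enough for 100 integers. */
    int *values = malloc(100 * sizeof *values);
    if (values == NULL) {
        /* The allocator could not find a sufficiently large free block. */
        return EXIT_FAILURE;
    }

    for (int i = 0; i < 100; i++)
        values[i] = i * i;
    printf("values[99] = %d\n", values[99]);

    /* Return the block to the heap; omitting this call is a memory leak. */
    free(values);
    return EXIT_SUCCESS;
}
```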

Several issues complicate the implementation, such as external fragmentation, which arises when there are many small gaps between allocated memory blocks, each too small on its own to satisfy an allocation request. The allocator's metadata can also inflate the size of (individually) small allocations; this is often managed by chunking. The memory management system must track outstanding allocations to ensure that they do not overlap and that no memory is ever "lost" (i.e. that there are no "memory leaks").

Efficiency

The specific dynamic memory allocation algorithm implemented can impact performance significantly. A study conducted in 1994 by Digital Equipment Corporation illustrates the overheads involved for a variety of allocators. The lowest average instruction path length required to allocate a single memory slot was 52 (as measured with an instruction level profiler on a variety of software).[1]

Implementations

Since the precise location of the allocation is not known in advance, the memory is accessed indirectly, usually through a pointer reference. The specific algorithm used to organize the memory area and allocate and deallocate chunks is interlinked with the kernel, and may use any of the following methods:

Fixed-size blocks allocation

Main article: Memory pool

Fixed-size blocks allocation, also called memory pool allocation, uses a free list of fixed-size blocks of memory (often all of the same size). This works well for simple embedded systems where no large objects need to be allocated, but suffers from fragmentation, especially with long memory addresses. However, due to the significantly reduced overhead this method can substantially improve performance for objects that need frequent allocation / de-allocation and is often used in video games.
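
The sketch below outlines one way such a pool can be implemented in C, threading the free blocks into a singly linked free list; the pool size, block size, and names are illustrative assumptions, not taken from any particular system.

```c
#include <stddef.h>

#define BLOCK_SIZE  64        /* every allocation returns exactly 64 bytes */
#define BLOCK_COUNT 128       /* illustrative pool of 128 blocks           */

/* A free block stores the pointer to the next free block inside itself. */
typedef union block {
    union block *next;
    unsigned char payload[BLOCK_SIZE];
} block_t;

static block_t pool[BLOCK_COUNT];
static block_t *free_list;

/* Thread every block onto the free list once, at start-up. */
void pool_init(void)
{
    for (size_t i = 0; i < BLOCK_COUNT - 1; i++)
        pool[i].next = &pool[i + 1];
    pool[BLOCK_COUNT - 1].next = NULL;
    free_list = &pool[0];
}

/* Allocation and deallocation are both O(1): pop or push the free list. */
void *pool_alloc(void)
{
    block_t *b = free_list;
    if (b != NULL)
        free_list = b->next;
    return b;                 /* NULL when the pool is exhausted */
}

void pool_free(void *p)
{
    block_t *b = p;
    b->next = free_list;
    free_list = b;
}
```

Because every block has the same size, no search is needed and no gaps form between blocks within the pool, which is where the reduced overhead comes from.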

Buddy blocks

Further information: Buddy memory allocation

In this system, memory is allocated into several pools of memory instead of just one, where each pool represents blocks of memory of a certain power of two in size, or blocks of some other convenient size progression. All blocks of a particular size are kept in a sorted linked list or tree and all new blocks that are formed during allocation are added to their respective memory pools for later use. If a smaller size is requested than is available, the smallest available size is selected and split. One of the resulting parts is selected, and the process repeats until the request is complete. When a block is allocated, the allocator will start with the smallest sufficiently large block to avoid needlessly breaking blocks. When a block is freed, it is compared to its buddy. If they are both free, they are combined and placed in the correspondingly larger-sized buddy-block list.
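
As a rough illustration of the power-of-two arithmetic involved, the sketch below rounds a request up to the next block size and computes a block's buddy from its offset within the pool; the minimum block size and the worked numbers are assumptions for the example, and real buddy allocators keep considerably more bookkeeping.

```c
#include <stddef.h>

/* Round a request up to the next power-of-two block size (illustrative). */
static size_t buddy_round_up(size_t request, size_t min_block)
{
    size_t size = min_block;
    while (size < request)
        size *= 2;
    return size;
}

/* For a block of a given size at a given offset from the start of the pool,
 * the buddy is the block whose offset differs only in that size bit. */
static size_t buddy_offset(size_t offset, size_t size)
{
    return offset ^ size;
}

/* Example with a 1 KiB minimum block size: a 3000-byte request is rounded
 * up to 4096 bytes; the buddy of the 4096-byte block at offset 8192 is the
 * block at offset 12288, and if both are free they merge into one
 * 8192-byte block at offset 8192. */
```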

Slab allocation

Main article: Slab allocation

This memory allocation mechanism preallocates memory chunks suitable to fit objects of a certain type or size.[3] These chunks are called caches and the allocator only has to keep track of a list of free cache slots. Constructing an object will use any one of the free cache slots and destructing an object will add a slot back to the free cache slot list. This technique alleviates memory fragmentation and is efficient as there is no need to search for a suitable portion of memory, as any open slot will suffice.
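
A minimal sketch of the idea in C follows, using a bitmap of free slots for brevity where a real slab allocator would keep a free list; the object type and sizes are hypothetical.

```c
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical object type managed by this cache. */
typedef struct conn {
    int  fd;
    char state;
} conn_t;

#define SLAB_OBJECTS 64

static conn_t slab[SLAB_OBJECTS];       /* preallocated chunk of objects */
static bool   slot_free[SLAB_OBJECTS];

void slab_init(void)
{
    for (size_t i = 0; i < SLAB_OBJECTS; i++) {
        slab[i].fd = -1;                /* "construct" each object once  */
        slab[i].state = 'I';
        slot_free[i] = true;
    }
}

conn_t *slab_get(void)
{
    for (size_t i = 0; i < SLAB_OBJECTS; i++) {
        if (slot_free[i]) {             /* any open slot will suffice */
            slot_free[i] = false;
            return &slab[i];
        }
    }
    return NULL;                        /* cache exhausted */
}

void slab_put(conn_t *obj)
{
    obj->fd = -1;                       /* restore the constructed state */
    obj->state = 'I';
    slot_free[obj - slab] = true;
}
```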

Stack allocation

Main article: Stack-based memory allocation

Many Unix-like systems as well as Microsoft Windows implement a function called alloca for dynamically allocating stack memory in a way similar to the heap-based malloc. A compiler typically translates it to inlined instructions manipulating the stack pointer.[4] Although memory allocated this way does not need to be freed manually, since it is automatically released when the function that called alloca returns, there is a risk of overflowing the stack. Moreover, since alloca is an ad hoc extension found on many systems but never standardized in POSIX or the C standard, its behavior in the case of a stack overflow is undefined.

A safer version of alloca called _malloca, which reports errors, exists on Microsoft Windows. It requires the use of _freea.[5] gnulib provides an equivalent interface, albeit instead of throwing an SEH exception on overflow, it delegates to malloc when an overlarge size is detected.[6] A similar feature can be emulated using manual accounting and size-checking, such as in the uses of alloca_account in glibc.[7]
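
The fragment below illustrates the kind of manual size-checking mentioned above: small scratch buffers come from the stack via alloca, while sizes above an arbitrary threshold fall back to the heap. The threshold and function names are assumptions for the example; only alloca itself, declared in the non-standard <alloca.h> header on many Unix-like systems, is taken from the text.

```c
#include <alloca.h>   /* non-standard header; present on many Unix-like systems */
#include <ctype.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define STACK_LIMIT 1024     /* arbitrary cut-off for stack allocation */

static void print_upper(const char *s)
{
    size_t len = strlen(s) + 1;
    int on_heap = len > STACK_LIMIT;
    char *buf;

    if (on_heap)
        buf = malloc(len);   /* large: take it from the heap         */
    else
        buf = alloca(len);   /* small: freed automatically on return */
    if (buf == NULL)
        return;

    for (size_t i = 0; i < len; i++)
        buf[i] = (char)toupper((unsigned char)s[i]);
    printf("%s\n", buf);

    if (on_heap)
        free(buf);           /* only the heap path needs an explicit free */
}

int main(void)
{
    print_upper("stack allocation example");
    return 0;
}
```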

Automatic memory management

See also: Automatic variable and Call stack

In many programming language implementations, the runtime environment for the program automatically allocates memory in the call stack for non-static local variables of a subroutine, called automatic variables, when the subroutine is called, and automatically releases that memory when the subroutine is exited. Special declarations may allow local variables to retain values between invocations of the procedure, or may allow local variables to be accessed by other subroutines. The automatic allocation of local variables makes recursion possible, to a depth limited by available memory.
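
The C fragment below is a small illustration of the distinction: count_auto is re-created on the call stack for every invocation, while the static declaration lets count_static retain its value between calls.

```c
#include <stdio.h>

void visit(void)
{
    int count_auto = 0;          /* automatic: allocated on the call stack  */
    static int count_static = 0; /* static: retains its value between calls */

    count_auto++;
    count_static++;
    printf("auto=%d static=%d\n", count_auto, count_static);
}

int main(void)
{
    visit();    /* prints: auto=1 static=1 */
    visit();    /* prints: auto=1 static=2 */
    visit();    /* prints: auto=1 static=3 */
    return 0;
}
```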

Garbage collection

Main article: Garbage collection (computer science)

Garbage collection is a strategy for automatically detecting memory allocated to objects that are no longer usable in a program, and returning that allocated memory to a pool of free memory locations. This method is in contrast to "manual" memory management where a programmer explicitly codes memory requests and memory releases in the program. While automatic garbage collection has the advantages of reducing programmer workload and preventing certain kinds of memory allocation bugs, garbage collection does require memory resources of its own, and can compete with the application program for processor time.
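
To illustrate only the reachability idea, the following is a toy mark-and-sweep collector over a fixed pool of objects; production collectors use very different data structures, and everything here (the object layout, pool size, and single root) is an assumption made for the example.

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

/* A toy object graph: each object may reference up to two others. */
typedef struct obj {
    struct obj *a, *b;
    bool marked;
    bool in_use;                /* slot currently allocated */
} obj_t;

#define HEAP_OBJECTS 8
static obj_t heap[HEAP_OBJECTS];

/* Mark phase: follow references from a root, flagging everything reachable. */
static void mark(obj_t *o)
{
    if (o == NULL || o->marked)
        return;
    o->marked = true;
    mark(o->a);
    mark(o->b);
}

/* Sweep phase: reclaim every allocated object that was not marked. */
static void sweep(void)
{
    for (size_t i = 0; i < HEAP_OBJECTS; i++) {
        if (heap[i].in_use && !heap[i].marked)
            heap[i].in_use = false;     /* returned to the free pool */
        heap[i].marked = false;         /* reset for the next cycle  */
    }
}

int main(void)
{
    /* Build a small graph: heap[0] -> heap[1] -> heap[2]; heap[3] is garbage. */
    for (size_t i = 0; i < 4; i++)
        heap[i].in_use = true;
    heap[0].a = &heap[1];
    heap[1].a = &heap[2];

    mark(&heap[0]);             /* heap[0] is the only root */
    sweep();
    printf("heap[3] reclaimed: %s\n", heap[3].in_use ? "no" : "yes");
    return 0;
}
```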

Systems with virtual memory

Main articles: Memory protection and Shared memory (interprocess communication)

Virtual memory is a method of decoupling the memory organization from the physical hardware. The applications operate on memory via virtual addresses. Each attempt by the application to access a particular virtual memory address results in the virtual memory address being translated to an actual physical address.[8] In this way the addition of virtual memory enables granular control over memory systems and methods of access.
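
A sketch of the translation step for a hypothetical machine with 4 KiB pages and a single-level page table is shown below; real hardware uses multi-level tables, valid and protection bits, and a TLB, and all of the constants here are illustrative.

```c
#include <stdint.h>
#include <stdio.h>

#define PAGE_SIZE  4096u      /* 4 KiB pages (illustrative) */
#define PAGE_SHIFT 12         /* log2(PAGE_SIZE)            */
#define NUM_PAGES  16         /* tiny single-level table    */

/* Hypothetical page table: virtual page number -> physical frame number.
 * A real table also carries valid, dirty, and protection bits. */
static const int page_table[NUM_PAGES] = {
    [0] = 5, [1] = 2, [2] = 9,
};

/* Translate a virtual address to a physical address, or -1 if it falls
 * outside this toy address space. */
static int64_t translate(uint32_t vaddr)
{
    uint32_t vpn    = vaddr >> PAGE_SHIFT;      /* virtual page number */
    uint32_t offset = vaddr & (PAGE_SIZE - 1);  /* offset within page  */

    if (vpn >= NUM_PAGES)
        return -1;
    return ((int64_t)page_table[vpn] << PAGE_SHIFT) | offset;
}

int main(void)
{
    /* Virtual address 0x1234 lies in page 1 at offset 0x234; page 1 maps
     * to frame 2, so the physical address is 0x2234. */
    printf("0x%llx\n", (unsigned long long)translate(0x1234));
    return 0;
}
```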

In virtual memory systems the operating system limits how a process can access the memory. This feature, called memory protection, can be used to disallow a process to read or write to memory that is not allocated to it, preventing malicious or malfunctioning code in one program from interfering with the operation of another.
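
On POSIX systems the same protection machinery is also exposed to applications through mprotect; the sketch below maps one anonymous page, writes to it, and then makes it read-only, after which a write would be reported as a fault (typically SIGSEGV). The example assumes a Linux-style MAP_ANONYMOUS flag and omits most error handling.

```c
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    size_t page = (size_t)sysconf(_SC_PAGESIZE);

    /* Map one anonymous, writable page of virtual memory. */
    char *p = mmap(NULL, page, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (p == MAP_FAILED)
        return 1;

    strcpy(p, "hello");                  /* allowed: the page is writable  */

    /* Ask the operating system to drop write permission for the page. */
    if (mprotect(p, page, PROT_READ) != 0)
        return 1;

    printf("%s\n", p);                   /* reading is still permitted     */
    /* p[0] = 'H'; */                    /* would now fault (e.g. SIGSEGV) */

    munmap(p, page);
    return 0;
}
```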

Even though the memory allocated for specific processes is normally isolated, processes sometimes need to be able to share information. Shared memory is one of the fastest techniques for inter-process communication.
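
As an example of the POSIX shared-memory interface, the sketch below creates a named shared memory object and maps it into the calling process's address space; a second process that opens the same (hypothetical) name "/demo_shm" and maps it with MAP_SHARED sees the same bytes. On older glibc systems this needs to be linked with -lrt.

```c
#include <fcntl.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/types.h>
#include <unistd.h>

int main(void)
{
    const char *name = "/demo_shm";     /* hypothetical object name */
    const size_t size = 4096;

    /* Create (or open) a named shared memory object and set its size. */
    int fd = shm_open(name, O_CREAT | O_RDWR, 0600);
    if (fd == -1 || ftruncate(fd, (off_t)size) == -1)
        return 1;

    /* Map it into this process's address space. */
    char *region = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (region == MAP_FAILED)
        return 1;

    /* Any other process that shm_open()s the same name and mmap()s it
     * with MAP_SHARED observes this write. */
    strcpy(region, "message visible to other processes");

    munmap(region, size);
    close(fd);
    shm_unlink(name);                   /* remove the name when finished */
    return 0;
}
```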

Memory is usually classified by access rate into primary storage and secondary storage. Memory management systems, among other operations, also handle the moving of information between these two levels of memory.

Memory management in OS/360 and successors

IBM System/360 does not support virtual memory.[note 4] Memory isolation of jobs is optionally accomplished using protection keys, assigning the storage for each job a different key: 0 for the supervisor or 1–15 for user jobs. Memory management in OS/360 is a supervisor function. Storage is requested using the GETMAIN macro and freed using the FREEMAIN macro, which result in a call to the supervisor (SVC) to perform the operation.

In OS/360 the details vary depending on how the system is generated, e.g., for PCP, MFT, MVT.

In OS/360 MVT, suballocation within a job's region or the shared System Queue Area (SQA) is based on subpools, areas a multiple of 2 KB in size—the size of an area protected by a protection key. Subpools are numbered 0–255.[9] Within a region subpools are assigned either the job's storage protection or the supervisor's key, key 0. Subpools 0–127 receive the job's key. Initially only subpool zero is created, and all user storage requests are satisfied from subpool 0, unless another is specified in the memory request. Subpools 250–255 are created by memory requests by the supervisor on behalf of the job. Most of these are assigned key 0, although a few get the key of the job. Subpool numbers are also relevant in MFT, although the details are much simpler.[10] MFT uses fixed partitions redefinable by the operator instead of dynamic regions and PCP has only a single partition.

Each subpool is mapped by a list of control blocks identifying allocated and free memory blocks within the subpool. Memory is allocated by finding a free area of sufficient size, or by allocating additional blocks in the subpool, up to the region size of the job. It is possible to free all or part of an allocated memory area.[11]
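
One simple way to find "a free area of sufficient size" in such a list of control blocks is a first-fit scan. The sketch below is a generic illustration of that idea in C, not IBM's actual data structures or policy; the extent layout and names are assumptions.

```c
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical descriptor for one area within a region (not IBM's layout). */
typedef struct extent {
    size_t offset;              /* start of the area within the region */
    size_t length;              /* size of the area in bytes           */
    bool   free;
    struct extent *next;
} extent_t;

/* First-fit search: return the offset of a free area of at least `need`
 * bytes, or (size_t)-1 if none exists (a real system would then try to
 * obtain additional blocks, up to the region size). */
size_t region_alloc(extent_t *list, size_t need)
{
    for (extent_t *e = list; e != NULL; e = e->next) {
        if (e->free && e->length >= need) {
            e->free = false;
            /* Splitting off the unused remainder into a new control block
             * is omitted here for brevity. */
            return e->offset;
        }
    }
    return (size_t)-1;
}
```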

The details for OS/VS1 are similar[12] to those for MFT and for MVT; the details for OS/VS2 are similar to those for MVT, except that the page size is 4 KiB. For both OS/VS1 and OS/VS2 the shared System Queue Area (SQA) is nonpageable.

In MVS the address space includes an additional pageable shared area, the Common Storage Area (CSA), and an additional private area, the System Work area (SWA). Also, the storage keys 0-7 are all reserved for use by privileged code.


Notes

  1. Except on the Model 67

References

  1. OSVS1Dig, pp. 2.37-2.39, VS1 Storage Subpools.

OS360Sup: OS Release 21 IBM System/360 Operating System Supervisor Services and Macro Instructions (PDF). Systems Reference Library (Eighth ed.). IBM. September 1974. GC28-6646-7.

OSVS1Dig: OS/VS1 Programmer's Reference Digest Release 6 (PDF). Systems (Sixth ed.). IBM. November 1975. GC24-5091-5.


External links

Wikibooks has more on the topic of: Memory management



Links  

https://en.wikipedia.org/wiki/Category:Memory_management

This category covers software which manages the access and allocation of main memory. This includes real memory and the subcategory for virtual memory.

Subcategories

C

M

► Memory management algorithms (1 C, 14 P)

S

V

► Virtual memory (31 P)

X

► X86 memory management (1 C, 19 P)

Pages

Memory management

A

Antifragmentation

Automatic Reference Counting

Automatic variable

B

Bank switching

Base and bounds

Bélády's anomaly

.bss

Bus error

C

C dynamic memory allocation

Call stack

Capability-based addressing

Chunking (computing)

Code segment

Commit charge

D

Data segment

Deallocation

Memory deallocation

Dispose pattern

DOS memory management

Dynamic video memory technology

F

Finalization

Finalizer

Free list

G

Garbage collection (computer science)

Global Descriptor Table

H

Hoard memory allocator

I

Interleaved memory

Interrupt descriptor table

Input–output memory management unit

L

Local Descriptor Table

M

Mac OS memory management

Manual memory management

Memory management (operating systems)

Memory management controller

Memory management unit

Memory pool

Memory protection

Memory segmentation

O

Object resurrection

Out of memory

P

Page (computer memory)

Page cache

Page table

Paging

Phantom reference

Physical Address Extension

Pointer swizzling

Pool (computer science)

Proportional set size

R

Reference counting

Region inference

Region-based memory management

Resident set size

Resource allocation (computer)

S

Sbrk

Segment descriptor

Segmentation fault

Shadow memory

Shadow RAM (Acorn)

Shadow stack

Shared graphics memory

Shared memory

Sideways address space

Soft reference

Stack-based memory allocation

Static variable

Storage violation

T

Transcendent memory

U

Unique set size

W

Weak reference

Write barrier

Z

Zero page

Zombie object

Zram

Zswap


Categories:

Operating system technology

Programming language implementation
