Memory manager
A memory manager is the part of a computer program that accepts requests from the program to allocate and deallocate chunks of memory.
The objective of a memory manager is generally to allow dynamic memory allocation. In the C programming language, for example, without a memory allocation library, memory can only be allocated statically or on the stack; a memory manager allows for dynamic allocation on the heap. A memory manager can also make allocation and deallocation more efficient, group allocated blocks according to particular conditions, collect statistics, and trace memory access violations.
The interface to the memory manager is a set of functions which the program uses to allocate and release memory. In C, the memory manager is primarily accessed through malloc and free; some additional functions, such as realloc and calloc, are closely related to malloc. In C++ there are 12 global constructs for managing memory: new, new[], delete, delete[], new(nothrow), new(nothrow)[], delete(nothrow), delete(nothrow)[], malloc, calloc, realloc, and free.
PC "Memory Managers" of the LIM EMS era
The DOS community once used the term "memory manager" in a completely different way. A PC "memory manager" was a software system which employed a variety of clever tricks to give applications access to more than 640K of "conventional memory". The best-known examples were Quarterdeck's QEMM product and Microsoft's EMM386.
During the period roughly spanning the mid-1980s through the mid-1990s, PCs were typically limited to 640K of address space for accessing RAM. Software gradually outgrew this constraint as RAM became cheaper. To allow software to use more than 640K, a mechanism called expanded memory appeared. It integrated bank-switching hardware with software support, allowing applications to exploit additional RAM through a window in the address space. The driver servicing the bank-switching hardware was called an Expanded Memory Manager.
Although applications could use expanded memory with relative freedom, many other software components such as drivers and TSRs were still normally constrained to reside within the 640K "conventional memory" area, which soon became a critically scarce resource.
Clever tricks appeared, notably with QRAM and the LOADHI command, which allowed device drivers and TSRs to be placed in expanded memory. QRAM did not require an 80386 processor.
The biggest breakthrough came with the Intel 80386 processor, with its virtual memory support and virtual 8086 mode. This combination made it possible to freely remodel the address space seen by real mode programs, and a wide variety of software-only approaches were invented to move as much code as possible out of the 640K area.
Memory managers like QEMM might move the bulk of the code for a driver or TSR into extended memory and replace it with a small stub capable of reaching the extended-memory-resident code. They might analyze memory usage to detect drivers that required more RAM during startup than they did subsequently, and recover and reuse the memory that was no longer needed after startup. They might even remap areas of memory normally used for memory-mapped I/O. Many of these tricks involved assumptions about the functioning of drivers and other components; in effect, memory managers might reverse-engineer and modify other vendors' code on the fly. As might be expected, such tricks did not always work. Therefore, memory managers also incorporated very elaborate systems of configurable options, and provisions for recovery should a selected option render the PC unbootable (a frequent occurrence).
Installing and configuring a memory manager might involve hours of experimentation with options, repeatedly rebooting the machine, and testing the results. But conventional memory was so valuable that PC owners felt that such time was well spent if the result was to free up 30K or 40K of conventional memory space.