7 Storage Management – Introduction to Data Structures in C

Chapter 7

Storage Management

Chapter Outline


The fundamental purpose of any program is to manipulate data, which must be stored in the computer's memory. Storage management consists of the techniques used to manage the heap. Two memory management techniques are used for this purpose. They are:

  1. Static storage management
  2. Dynamic storage management.

Static Storage Management A program must be loaded into memory before it can be executed, so the system must have enough memory available beforehand. When program execution starts, the program obtains memory through the operating system. Once memory is allocated to the program, the allocation can be neither increased nor decreased during execution, and the same memory cannot be used by other programs. All of this is taken care of by the operating system. An array is the best example of static implementation, in which the memory allocated is fixed in size.

Dynamic Storage Management Dynamic storage management allows memory to be managed during program execution: memory is allocated according to the requests made by the program, so wastage of memory can be avoided. This is efficient in a multiprogramming environment, in which more than one program executes concurrently, because the exact amount of memory a program needs can be determined only during execution. The operating system performs dynamic storage management. The linked list is an example: we have seen that memory for a new node is allocated during program execution whenever one is created, so any number of nodes can be created, depending on the user's need. The techniques of dynamic memory management are:

  1. Fixed block allocation
  2. Variable block allocation.

The variable block allocation is further divided into following types:

  1. First fit
  2. Next fit
  3. Best fit
  4. Worst fit.

De-allocation Techniques Memory can be de-allocated in the following ways:

  1. Ordered de-allocation
  2. Random de-allocation.

The free storage is a collection of non-contiguous memory blocks, linked in sequence by a pointer from one block to the next. The size of each block depends on the method applied, as described below.

7.3.1 Fixed Block Storage

This is a straightforward method in which all the blocks are of identical size; the user can decide the block size. The operating system keeps a pointer called AVAIL, which points to the first free block, as shown in Fig. 7.1.

Figure 7.1 Free storage with fixed block memory

The user program communicates with the operating system through two functions, getnode() and returnnode(). The getnode() function is used to obtain memory from the heap for the storage of data; the argument passed denotes the amount of memory required. When the user program calls this function, a pointer to a block in the heap is returned on successful allocation; otherwise it returns NULL, which indicates that not enough memory is available. The returnnode() function is the opposite of getnode(): when a node is no longer needed, its allocated memory is released and returned to the pool, as shown in Fig. 7.3.

Figure 7.2 Obtaining memory block

Figure 7.3 Mechanism of returnnode()

Though the above method is straightforward, it wastes memory. For example, if the size of each block is 1 KB (1024 bytes) and a program requests 1.2 KB of memory, then 2 KB is allocated, and the extra memory serves no use. Decreasing the block size reduces this waste but degrades program performance.

7.3.2 Variable Block Storage

We know that the fixed-size block method has a limitation, and this can be overcome by the use of variable block storage. The functions getnode() and returnnode() assume that the blocks of memory are ordered in ascending order of size. Each node has an extra field that holds the size of the block, as per Fig. 7.3. The function getnode() returns a memory block whose size is greater than or equal to the memory requested by the caller.

In static storage management, memory blocks of fixed size are allocated for storing programs, whereas many applications need memory of different sizes. In the discussion of getnode() and returnnode(), we assumed that memory blocks of different sizes are present and linked with each other. In reality, however, when no program is in memory, the whole memory is one block. Blocks of the required sizes are then created automatically as the system calls these functions repeatedly, as shown in Fig. 7.4 (a to d). Suppose the programs loaded are M1, M2, M3 and M4.

Figure 7.4 (a) Total block memory without any program

In the above Fig. 7.4 (a), no program is loaded and hence total block of memory is available.

Figure 7.4 (b) Four programs are loaded in memory

As per the above Fig. 7.4 (b), four programs M1 to M4 are loaded and memory is allocated to them in the order of their arrival. The figure also indicates the unused memory, which is available for use.

Figure 7.4 (c) Three programs and unused memory

When the execution of program M3 is completed and it quits from memory, its space is returned to the main storage. Here, blocks of variable sizes are generated.

Figure 7.4 (d) Program and memory

Another program (M5) requests memory and the memory needed is allocated to it. Thus, dynamic memory management performs the following tasks:

  1. It searches for a block of the requested size and allocates it.
  2. It supervises the released blocks of memory.
  3. It combines tiny blocks of memory into bigger blocks.

In order to provide the above services two memory management systems are used. They are (i) boundary tag system and (ii) buddy system.


The boundary tag system is an efficient method of dynamic memory management. It works on the node structure shown in Fig. 7.5. In the node structure for a given block, the first and last words are set aside for block information. LL and RL are pointers to the predecessor and successor blocks of the current block. The field S contains the amount of free memory available in the block. TAG is a flag field storing 0 (free) or 1 (engaged); there is one flag field on each of the two boundaries of the block, hence the name boundary tag system. The field UPLINK points back to the beginning of the same block. The uses of all these fields are explained in the appropriate contexts. The system maintains a circular doubly linked list of free memory blocks; with this circular doubly linked implementation a block can be released easily. A header node is also used in the boundary tag system, holding the value zero in the fields S and TAG.

Figure 7.5 Node structure


As discussed for the list structure, a request for a memory block can be made to the system in the following ways:

  1. First-fit storage allocation
  2. Best-fit storage allocation
  3. Worst-fit storage allocation
  4. Next-fit storage allocation.

7.5.1 First-fit Allocation

This is a straightforward way of allocating memory. When a request is made, the free list is searched from the beginning and the first block large enough is allocated; a pointer to that block is returned to the calling function. The memory allocated is therefore greater than or equal to the memory required. This method requires a minimal search for the needed size of memory, and it tends to allocate from the lowest memory addresses.

7.5.2 Best-fit Storage Allocation

In this method, the requested memory and the allocated memory are compared, and memory is allocated so that the two are as nearly equal as possible: if the required memory is R, the allocated memory is A and the difference is D = A − R, then the block chosen minimizes D. In the examples so far, whenever memory is needed for a single node the program requests it with the malloc() function, so our unit of memory is the node. This technique reduces memory consumption, since the required and allocated memory are very close.

7.5.3 Worst-fit Storage Allocation

This method is the opposite of the best-fit method. Instead of finding a small block close to the requested size, it searches for the largest block and allocates it. Worst-fit allocation reduces the creation of the very small leftover blocks that best-fit tends to produce.

7.5.4 Next-fit Storage Allocation

We have already learnt that in the first-fit allocation method the search starts from the beginning of the free list. In next-fit storage allocation, the search starts from the point of the last allocation: a pointer into the free list is stored after each allocation and used to start the next search. The main goal of this method is to trim down the search by avoiding the repeated examination of small blocks at the head of the list, which is time consuming.

7.5.5 External Fragmentation

Suppose requests and releases of storage blocks cause the linked list of available blocks to be searched repeatedly while adjacent free blocks are not merged into one. The average size of an available block then becomes small, and larger blocks are not available to satisfy requests. A request for a big block may then have to be refused for lack of a single block on the free list that is large enough, even though the total amount of free storage is much greater than the requested amount. This decomposition of the total available storage into a large number of comparatively small blocks is known as external fragmentation.

7.5.6 Internal Fragmentation

It is possible to reduce external fragmentation to some extent by allocating a block larger than the requested size, thereby avoiding the partition of a block into several parts. With this method, however, a request for storage may be rejected for lack of a block of the desired size even though the amount of storage being wasted is more than enough to fulfil it. Internal fragmentation is this waste: blocks are allocated with some part of them remaining unused, yet that unused part is not made available to other programs. Fragmentation must be handled cautiously, as it is a main problem in dynamic memory management.


De-allocation of memory is a very essential process, and the programmer must take the utmost care to avoid wastage of memory. In a static implementation of a data structure, the wastage of memory is greater than in a dynamic implementation. This topic is devoted to the de-allocation of allocated memory, which is returned to the free store; it is a kind of recycling. System resources allocated to a program must be returned to the system when they are no longer in use. In this way the performance of the system is maintained.

We have observed that the boundary tag system inserts a released block into the list of free blocks and merges the newly inserted block with its neighbouring blocks.


The buddy system is an alternative storage management system. It limits block sizes to a fixed set of sizes, whereas so far block sizes have been arbitrary. The linked list of free blocks in a buddy system thus contains blocks of restricted sizes only. When a request is made for size S, the size X that is the minimum of the fixed sizes equal to or larger than S is found, and a block of size X is allocated.

If a block of size X is not available, a larger block is split into two sub-blocks, called buddies, also of fixed sizes. This procedure is repeated until a block of size X is obtained. The permissible sizes in a buddy system are F0, F1, …, FM, where M indexes the maximum size. The sizes are defined by the recurrence

    Fn = Fn−1 + Fn−g,   g ≤ n ≤ M     (7.1)

for a given value of g and given initial sizes

    F0 = V0, F1 = V1, …, Fg−1 = Vg−1.

If g = 1 and F0 = 8, the block sizes obtained from Eq. 7.1 are 8, 16, 32, 64, 128, …; each size is double the previous one, so the sizes are successive powers of two. A buddy system based on this condition is called a binary buddy system.

Another buddy system, which restricts the fixed sizes to a Fibonacci sequence, is known as the Fibonacci buddy system. Here g = 2, and with the initial values F0 = 8 and F1 = 13 the block sizes are 8, 13, 21, 34, 55, 89, …. The initial values F0 and F1 are chosen according to the application.

When a request for memory is made to the system, the system searches for a block closely equal to, but not less than, the requested size. If such a block is found, it is allocated immediately; otherwise the next larger block in the list is divided repeatedly until a block of the nearest size is produced. Conversely, when a block of memory is returned to the system (de-allocated), the returned block is merged with its free buddy into a larger free block.

To illustrate the above allocation and de-allocation methods, consider a Fibonacci buddy system in which the restricted block sizes are 8, 13, 21, 34, 55, 89 and 144, where 144 is the biggest possible block of the memory available. Fig. 7.6 illustrates the allocation and de-allocation.

From Fig. 7.6 the following points are observed. In the beginning, the system holds one free block of size 144. When a request for memory of size 25 arrives, the system splits this block into two buddies of sizes 89 and 55. The second buddy, of size 55, is split further into two buddies of sizes 21 and 34. The block of size 34 is the closest fit to the request, and hence it is allocated, as per Fig. 7.6.

Figure 7.6 Allocation procedure for size 25

For a subsequent request for a block of size, say, 40, the system searches for a matching block and finds that the block of size 89 is available but too large for the request. It is therefore partitioned into buddies of sizes 55 and 34; the block of size 55 is allocated, and the request is fulfilled. Fig. 7.7 illustrates this point.

Figure 7.7 Allocation procedure for size 40

Figure 7.8 Releasing of block of size 55

Consider the situation in which the block of size 55 is de-allocated and returned to the system, as shown in Fig. 7.8. The two buddies of sizes 55 and 34 are now both free. These two free buddies are combined, and a block of size 89 is formed. The block of size 89 is itself a free buddy, and with this combination the procedure comes to an end.

Figure 7.9 De-allocation of block size (34)

So far we have seen the theoretical representation. A node structure to implement the above operations can be defined as follows. It has two link fields, the left and right links, which store the addresses of the predecessor and successor of the block. The size is stored in the SIZE field; this field does not store the absolute size but only the index value n according to the recurrence relation. If the Fibonacci system is applied, the relation is

    Fn = Fn−1 + Fn−2,   n ≥ 2, with F0 = 8 and F1 = 13.

The values of the SIZE field are as under:

    Absolute size (Fn)    SIZE (n)
    8                     0
    13                    1
    21                    2
    34                    3
    55                    4
    89                    5
    144                   6
The field CODE is used to obtain information about the blocks, that is, whether two blocks are buddies or not. The buddy property can be determined by the solution given by Hinds as follows.

Initially, when the block is the largest (undivided) block, CODE = 0.

When a parent block is partitioned, two blocks, left and right, are created, with

    CODE(left buddy) = CODE(parent) + 1
    CODE(right buddy) = 0

When the two buddies are combined, the resulting block's CODE field is one less than that of its left buddy:

    CODE(parent) = CODE(left buddy) − 1

By this method, the various partitions from the biggest block down to the smallest block can be created and tracked.

Depending upon the value of the CODE field, we can determine whether a block is a left or a right buddy: when CODE = 0 it is a right buddy, and when CODE > 0 it is a left buddy. The CODE value denotes the number of splits of some larger block that have so far produced the buddy.

Consider two cases of allocation,

  1. Best-fit strategy
  2. Fibonacci buddy system.

According to the Fibonacci buddy system,

    Fn = Fn−1 + Fn−2,   2 ≤ n ≤ M = 6, with F0 = 8 and F1 = 13.

At the start, a block of size 34 is allocated, as shown in Fig. 7.10 (a).

Figure 7.10(a)

Assume that a subsequent request is made for a block of size 50. The system searches from the header for the closest suitable block. The block of size 89 is found, which is larger than the requested size, so a partition is carried out, producing two buddies of sizes 55 and 34. The block of size 55 is allocated, as per Fig. 7.10 (b).

Figure 7.10(b)

When memory is no longer required it is released, i.e. the allocated memory block is returned to the heap. Consider Fig. 7.10 (c).

Figure 7.10(c)

When the block of size 34 is returned, as shown in Fig. 7.10 (c), the CODE of the freed block is one, and hence it is the left buddy of some larger block. Its corresponding right buddy is free, so the two can be combined, as shown in Fig. 7.10 (c). The newly formed block has size 55 and is a free buddy; we can notice that it too is a left buddy, so one more combination can be done once its corresponding right buddy is found. Figs 7.10 (d) and 7.10 (e) show this block. The result is the largest block, no further combinations are possible, and de-allocation is complete.

Figure 7.10(d)

Figure 7.10(e)


After studying the buddy system, it is now time to turn to the binary buddy system (BBS). The BBS is based on the recurrence relation

    Fn = Fn−1 + Fn−1 = 2Fn−1,   1 ≤ n ≤ M.

In the binary system, the block sizes are powers of two, and a block of memory is partitioned into exactly two halves. This system has the additional advantage that the base address of a block's buddy can be computed easily, given the size and base address of one buddy of a block of size 2^k. Thus, when a block of memory is released, merging it with its matching buddy is straightforward.


One of the critical problems in dynamic storage allocation is that free memory becomes intermixed with the dynamically used partitions throughout memory. For example, consider a system whose memory picture is as per Fig. 7.11. When all storage requests are of a fixed size, it is sufficient to link all unused blocks into an available-space list, from which each request can be satisfied. When the requests received are of different sizes, however, this can fail. For example, a request for a block of size 256 KB may arrive when the total free space available is 360 KB: the space available is more than the storage amount needed, yet the needed storage cannot be allocated, because no single free block is large enough. This is external fragmentation, and it is usually seen in dynamic memory management systems with different-sized blocks.

Figure 7.11 Dynamic partitioning of memory

We have already learnt in previous sections that coalescing contiguous free portions when blocks are returned is a technique frequently used to decrease fragmentation and thus limit the amount of wasted memory. Nevertheless, such techniques tend to postpone the impact of the problem rather than avert it. Detailed study shows that after some time of operation the system is likely to attain a steady state described by the so-called fifty per cent rule, which states that the numbers of used and free blocks become equal, i.e. 50% each. The study also reveals that roughly one-third of memory is unused and wasted because of fragmentation, even if contiguous free areas are recombined every time a block is returned. Furthermore, this kind of implementation slows down program execution, since the system must release and coalesce blocks during program execution.

When memory is fragmented, the used blocks can be re-located into one part of memory, coalescing the small holes into one big free part. This practice of recovering memory storage is known as de-fragmentation or compaction. The compaction mechanism shifts used blocks from one place in memory to another; to accomplish this, all active processes must be suspended and their memory actually copied from one part of memory to another.

The process of memory compaction can be carried out whenever required. In some memory management methods, compaction is performed whenever a portion of memory is freed, so that most of the free memory is gathered into one large portion. An alternative is to compact only when a request cannot be fulfilled and the system fails to allocate memory. Compaction helps only if the combined size of the free portions exceeds the requested amount; otherwise the free memory cannot satisfy the pending request in any case, and compaction is not useful.

There are two approaches:

  1. Incremental compaction
  2. Selective compaction.

Figure 7.12 Actual memory

Incremental Compaction In this type, blocks are shifted so that all the free space moves to one end of the memory, composing one big hole. This procedure is very straightforward to apply, but costly. Fig. 7.13 shows incremental compaction.

Selective Compaction Selective compaction seeks for least number of free blocks, movement of which produces a larger free hole. The obtained hole may be at any place in the memory and need not be at the end of the memory. Fig. 7.14 shows a memory picture of selective compaction.

With selective compaction, moving only block M3, of size 200 KB, from location 500 KB to location 700 KB creates the largest possible free hole, of size 360 KB, at location 340 KB.

The time for shifting data is therefore minimal. With the incremental method, by contrast, we must first move the memory portion of block M3 from location 500 KB to location 340 KB, and then the memory contents of block M4 to location 540 KB, so the total movement is 200 KB + 100 KB = 300 KB instead of the 200 KB of the selective method. Even so, the selective compaction approach is seldom applied, because of the overhead of evaluating the options when selecting the most advantageous move: the time spent selecting such a block may exceed the time required simply to shift the blocks one by one to the end of memory. Consequently, the more general approach to compaction is to relocate all free space to one end of memory.

Figure 7.13 Incremental compaction

Figure 7.14 Selective compaction

7.9.1 Memory (Storage) Allocation

The operating system of the computer controls the entire system's resources and permits or denies their use by other programs. Programs must request the operating system for any memory they require; the operating system manages the entire process and allocates the memory to them. It performs memory allocation using the following two systems:

Linked Allocation We know that a linked list is a sequence of nodes connected by pointers. Every node has two parts, a data field and a link field: the data is stored in the data field, and the link field stores the address of the next node.

Fixed Sized Partition Technique In this technique, the memory space is partitioned into fixed sections, each block holding a given amount of memory. The size of a block is predetermined and cannot be modified at run time. If a program is smaller than its block of memory, the excess memory remains unused; if the block is smaller than the program, the program cannot be executed.

7.9.2 Storage Pools

A storage pool is the collection of all the free nodes present in memory. There are two ways to manage pools:

Bit Tables/Paging Technique This method is applicable when all the nodes of the data structure are of identical size. Every program is split into fixed-size pages, and the memory is divided into fixed-size blocks of nodes known as page frames. For example, suppose the memory size is 2,000 KB and each page frame is 200 KB, giving ten page frames. The memory space is then divided as in the following table.

    Page frame    0  1  2  3  4  5  6  7  8  9
    Bit           0  0  0  0  0  0  0  0  0  0

In the above table, a bit value of 0 indicates free space; initially, the status bit of every page frame 0 to 9 is zero. Now suppose five programs are to be executed and the memory needed is as follows:

    Program       Memory needed
    Program 1     60 KB
    Program 2     80 KB
    Program 3     80 KB
    Program 4     40 KB
    Program 5     40 KB

For the preceding table the memory allocation will be as follows:

In the above table, a bit status of 1 specifies that the corresponding frames have already been allocated to programs 1 to 5. Now assume programs 1 and 3 complete their execution: the memory they occupied is released, and their status bits are reset, giving the altered bit status. The paging technique has some drawbacks, chief among them the difficulty of choosing the best possible page-frame size. If the frames are large, memory is wasted when programs do not fill them; if the frames are small, a program undergoes many partitions and matching frames to the required size becomes very tedious. In this paging technique, the entire program must be memory resident, so if the program is larger than the available memory it cannot be executed.

Demand Paging Method To overcome the limitations of the paging technique, the demand paging method is followed. A large program is divided into small program segments (pages), and at first only the first page is loaded into memory. When statements of the pages currently executing in memory refer to statements on a page that is not yet loaded, the required page is brought from disk into memory; the operating system may transfer a page that is no longer needed back to disk to make room. Excessive page transfer of this kind, in which the system spends more time moving pages than executing the program, is known as thrashing.


Garbage collection means the de-allocation of memory that is no longer in use. With explicit de-allocation, memory is released when the programmer requests it by calling a specific function: free() in C and the delete operator in C++. Languages like Java instead have a built-in garbage collection system, which examines the entire memory, marks the nodes still in use, and releases the memory associated with the unmarked ones.

The garbage collection consists of two steps:

Marking In this step, all accessible nodes are marked; one field is kept in each node for marking, set to true or false.

Collection In this step, the memory allocated to all nodes that are not marked is released.

The garbage collection is a complex procedure and has the following disadvantages:

  1. The garbage collection technique is applied only when there is not much space left.
  2. The garbage collection may not be helpful if the pointers associated with the list do not point where they should.

However, memory de-allocation is not as easy as it appears. Memory allocated at compile time can be freed easily, and memory allocated dynamically at run time can also be de-allocated, but some care must be taken: a dynamic structure such as a linked list contains nodes that are created dynamically and hold references to the next nodes.


In this chapter, the reader is exposed to the allocation and release of memory. Different techniques of allocation of memory are discussed. Memory is one of the precious resources and it is essential to manage it efficiently. Two methods are used for management of memory.

Static storage management When program execution starts, the program obtains memory through the operating system. Once memory is allocated to the program, the allocation cannot be increased or decreased during program execution.

Dynamic storage management The dynamic storage management allows the user to manage the memory during program execution. According to the request made by the program, memory can be allocated. Under dynamic memory management various techniques have been discussed in this chapter. The best-fit, first-fit and worst-fit memory management techniques have been briefed.

  1. Answer the following questions:
    1. What do you mean by static memory management? Explain in detail.
    2. Distinguish between the static and dynamic memory management techniques.
    3. Explain the various methods used to allocate memory under dynamic memory management.
    4. Explain the demand paging method.
    5. What is compaction?