Optimization of software

The front end of a compiler is in charge of constructing an intermediate representation of the source program, while the back end generates the intended target program from the intermediate representation and the symbol table information. Before the intermediate code is passed to the compiler’s back end, it can be improved so that better target code is produced. A compiler’s code optimization phase aims to improve the target code without altering its output or causing side effects.

Today, the majority of compiler research is carried out on the optimization phase. A variety of traditional techniques exist, such as common sub-expression elimination, dead-code removal, and constant folding. However, as software products grow in size and complexity, and as they are used in embedded, web-based, and mobile systems, leaner versions of the source code are required. This research study discusses the issues of code optimization for such systems, along with some recently developed code optimization strategies.

The practice of transforming a piece of source code into more efficient target code is known as code optimization. Efficiency is measured in terms of both time and space. Optimization is most commonly accomplished through a set of optimizing transformations: algorithms that take a piece of code and change it into semantically equivalent output code that uses fewer resources. Most optimization approaches aim to improve the target code by removing unneeded instructions from the object code or by replacing one sequence of instructions with a faster one.
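As a minimal sketch of one such transformation, the pass below replaces a repeated computation with a reuse of the earlier result (common sub-expression elimination). The tuple-based three-address instruction format and the function name are illustrative, not taken from any particular compiler, and the sketch assumes each temporary is assigned only once, so table entries never need invalidating:

```python
def eliminate_common_subexpressions(instructions):
    """Replace recomputed (op, arg1, arg2) expressions with copies.

    Assumes single assignment per destination; a real pass would
    invalidate table entries when an operand is redefined.
    """
    seen = {}       # (op, arg1, arg2) -> temp that already holds the value
    optimized = []
    for dest, op, a1, a2 in instructions:
        key = (op, a1, a2)
        if key in seen:
            # Reuse the earlier result instead of recomputing it.
            optimized.append((dest, "copy", seen[key], None))
        else:
            seen[key] = dest
            optimized.append((dest, op, a1, a2))
    return optimized

code = [
    ("t1", "+", "a", "b"),
    ("t2", "+", "a", "b"),    # same expression: becomes a copy of t1
    ("t3", "*", "t1", "t2"),
]
optimized = eliminate_common_subexpressions(code)
```

The second instruction is rewritten into a cheap copy of `t1`, which a later copy-propagation pass could remove entirely.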

One of the most crucial aspects of a compiler is optimization. Code optimization aims to transform the source code to produce better target code. A better target code is usually one that is more efficient in terms of time and space. Other objectives, such as target code that consumes less power, may also be used to measure the goodness of code. Processor architectures are becoming more sophisticated, and with the emergence of multicore and embedded devices, faster target code that uses less space and power has become necessary. A compiler’s code optimization phase tries to tackle these problems by producing better target code while maintaining the desired output.


1.3 The Optimization Step in the Compiler Architecture

Code optimization can be applied either to the intermediate representation of the source program or to the un-optimized target machine code. When applied to the intermediate form, the code optimization step reduces the size of the Abstract Syntax Tree or the sequence of Three Address Code instructions. When applied as part of final code generation, the optimization phase instead decides which instructions to emit, how to allocate registers, and when to spill, among other things.
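As a sketch of optimization on the intermediate form, the pass below shrinks an abstract syntax tree by folding constant sub-expressions bottom-up. Python’s `ast` module is used here purely as a convenient stand-in for a compiler’s AST; the class name and the choice of language are illustrative:

```python
import ast

class ConstantFolder(ast.NodeTransformer):
    """Replace BinOp nodes whose operands are both constants."""

    def visit_BinOp(self, node):
        self.generic_visit(node)  # fold children first, bottom-up
        if isinstance(node.left, ast.Constant) and isinstance(node.right, ast.Constant):
            expr = ast.fix_missing_locations(ast.Expression(body=node))
            try:
                value = eval(compile(expr, "<fold>", "eval"))
            except Exception:
                return node  # e.g. division by zero: leave it to run time
            return ast.copy_location(ast.Constant(value), node)
        return node

tree = ast.parse("x = 2 * 3 + y")
folded = ast.fix_missing_locations(ConstantFolder().visit(tree))
print(ast.unparse(folded))  # x = 6 + y
```

The subtree for `2 * 3` collapses into the single constant `6`, making the tree smaller before the back end ever sees it.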



Over the past decade, a variety of traditional optimization techniques have been applied to code optimization. Some of these strategies operate on the basic blocks of the source code, while others operate across an entire function. Recent research has produced numerous new optimization strategies. This research article focuses on novel code optimization approaches, but also summarizes the traditional techniques.

2.1 Classical Optimization Techniques

The following are some examples of traditional code optimization techniques:


  1. Local Optimization

  2. Global Optimization

  3. Inter-Procedural Optimization


2.1.1 Local Optimization

The code optimization phase of a compiler begins by dividing the sequence of three-address instructions into basic blocks. These basic blocks form the nodes of a flow graph. Local optimization is carried out within each basic block. By performing local optimization within each basic block on its own, we can frequently achieve a significant improvement in the running time of the code. Because basic blocks contain no control flow, these optimizations require less analysis.

The following strategies can be used to accomplish local optimization:

(i) Local common-subexpression elimination,

(ii) Dead-code elimination,

(iii) Use of algebraic identities:

(a) use of arithmetic identities,

(b) local reduction in strength, i.e., replacing a more expensive operator with a cheaper one,

(c) constant folding,

(iv) Reordering of independent statements.
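The algebraic techniques above can be sketched as a single peephole rule over three-address tuples `(dest, op, arg1, arg2)`. The instruction format and the particular rewrites chosen are illustrative:

```python
def simplify(inst):
    """Apply one local algebraic rewrite to a three-address instruction."""
    dest, op, a1, a2 = inst
    # (a) arithmetic identities: x + 0 -> x, x * 1 -> x
    if op == "+" and a2 == 0:
        return (dest, "copy", a1, None)
    if op == "*" and a2 == 1:
        return (dest, "copy", a1, None)
    # (b) strength reduction: x * 2 -> x + x (add is cheaper than multiply)
    if op == "*" and a2 == 2:
        return (dest, "+", a1, a1)
    # (c) constant folding: both operands known at compile time
    if op == "+" and isinstance(a1, int) and isinstance(a2, int):
        return (dest, "copy", a1 + a2, None)
    return inst

print(simplify(("t1", "*", "x", 2)))   # ('t1', '+', 'x', 'x')
print(simplify(("t2", "+", "x", 0)))   # ('t2', 'copy', 'x', None)
print(simplify(("t3", "+", 3, 4)))     # ('t3', 'copy', 7, None)
```

A real peephole optimizer would iterate such rules to a fixed point over each basic block; this sketch applies one rewrite at a time.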

2.1.2 Global Optimization (Intra-Procedural Methods)

Global optimization techniques operate on entire functions. Unlike local optimization, global optimization takes into account what happens across basic blocks.


Data-flow analysis is the foundation of the majority of global optimization strategies. Data-flow analyses all produce the same kind of result: for each instruction in the program, they identify some property that must hold every time that instruction is executed.
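As a minimal sketch of one such analysis, the backward pass below computes, for each instruction of a basic block, the set of variables that are live (still needed later) immediately after it; the instruction format and names are illustrative, and a whole-program version would iterate this over the flow graph to a fixed point:

```python
def liveness(block, live_out):
    """Liveness over one basic block of (dest, op, arg1, arg2) tuples.

    Returns, for each instruction, the set of variables live just
    after it, computed by a single backward pass.
    """
    live = set(live_out)
    result = []
    for dest, op, a1, a2 in reversed(block):
        result.append((dest, frozenset(live)))  # live-after this instruction
        live.discard(dest)   # dest is defined here, so not live before it
        for arg in (a1, a2):
            if isinstance(arg, str):
                live.add(arg)  # operands are used, hence live before
    return list(reversed(result))

block = [
    ("t1", "+", "a", "b"),
    ("t2", "*", "t1", 2),
]
info = liveness(block, live_out={"t2"})
```

An instruction whose destination is not in its live-after set is dead code and can be removed, which is how this analysis feeds dead-code elimination.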
