Optimization (computer science)
In computing, optimization is the process of modifying a system to improve its efficiency. The system can be a single computer program, a collection of computers or even an entire network such as the Internet.
Although the word "optimization" shares the same root as "optimal," it is rare for the process of optimization to produce a truly optimal system for all purposes. There will always be tradeoffs.
Optimization must be approached with caution. Tony Hoare stated, and Donald Knuth famously restated, that "Premature optimization is the root of all evil." It is important to first have sound algorithms and a working prototype.
Basis
Tasks can often be performed more efficiently. For example, consider the following C code snippet:
int i, sum = 0;
for (i = 1; i <= N; i++)
    sum += i;
printf("sum: %d\n", sum);
This code can (assuming no overflow) be rewritten using a mathematical formula like:
int sum = (N * (N + 1)) / 2;
printf("sum: %d\n", sum);
The term "optimization" usually presumes the system retains the same functionality. However, a significant improvement in performance can often be achieved by solving only the actual problem and removing extraneous functionality. For example, if it were reasonable to assume the program does not need to handle more than (say) 100 items of input, one could use static rather than dynamic memory allocation.
Tradeoff
Optimization generally focuses on one or two resources, such as execution time, memory usage, disk space, or bandwidth. Improving one usually requires a tradeoff, where that resource is optimized at the expense of the others. For example, increasing the size of a cache improves run-time performance but also increases memory consumption. Less tangibly, code clarity and conciseness are often traded away as well.
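One common form of this tradeoff is a lookup table. The sketch below (illustrative, not drawn from any particular program) spends 256 bytes on a precomputed table so that counting the set bits of a 32-bit word becomes four array lookups instead of a loop over every bit:

#include <stdint.h>

/* 256 bytes of extra memory in exchange for answering each query
   with a handful of array lookups.  init_table() must run once
   before popcount32() is used. */
static uint8_t popcount_table[256];

static void init_table(void)
{
    for (int i = 0; i < 256; i++) {
        int bits = 0;
        for (int v = i; v != 0; v >>= 1)
            bits += v & 1;
        popcount_table[i] = (uint8_t)bits;
    }
}

/* Count the set bits of a 32-bit word, one table lookup per byte. */
static int popcount32(uint32_t x)
{
    return popcount_table[x & 0xff]
         + popcount_table[(x >> 8) & 0xff]
         + popcount_table[(x >> 16) & 0xff]
         + popcount_table[(x >> 24) & 0xff];
}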
Different fields
In operations research, optimization is the problem of determining the inputs of a function that minimize or maximize its value. Sometimes constraints are imposed on the values that the inputs can take; this problem is known as constrained optimization.
In computer programming, optimization usually specifically means to modify code and its compilation settings on a given computer architecture to produce more efficient software.
Typical problems have such a large number of possibilities that a programming organization can only afford a "good enough" solution.
Bottlenecks
Optimization requires finding a bottleneck: the critical part of the code that is the primary consumer of the needed resource. Often, roughly 20% of the code accounts for 80% of the resource consumption, so improving that portion yields most of the gains (see also the Pareto principle).
The architectural design of a system overwhelmingly affects its performance. The choice of algorithm affects efficiency more than any other item of the design. More complex algorithms and data structures perform well with many items, while simple algorithms are more suitable for small amounts of data — the setup and initialization time of the more complex algorithm can outweigh the benefit.
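Searching a small array illustrates the point. A linear scan (sketched below; the function names are illustrative) needs no preparation at all, whereas binary search is asymptotically faster but only works on sorted data, so the cost of sorting or otherwise keeping the array ordered must be paid first and can dominate for small inputs:

#include <stdlib.h>

/* For a handful of elements, a plain linear scan is usually the right
   choice: no sorting, hashing or other setup is needed beforehand. */
static int linear_find(const int *a, int n, int key)
{
    for (int i = 0; i < n; i++)
        if (a[i] == key)
            return i;
    return -1;
}

static int cmp_int(const void *p, const void *q)
{
    int x = *(const int *)p, y = *(const int *)q;
    return (x > y) - (x < y);
}

/* Binary search wins for large arrays, but it requires the data to be
   sorted already; producing and maintaining that order is the setup
   cost that can outweigh the benefit when n is small. */
static int binary_find(const int *a, int n, int key)
{
    const int *hit = bsearch(&key, a, (size_t)n, sizeof *a, cmp_int);
    return hit != NULL ? (int)(hit - a) : -1;
}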
A program can often run faster if it is allowed to use more memory. For example, a filtering program commonly reads each line, filters it, and outputs that line immediately. This uses only enough memory for one line, but performance is typically poor because of the overhead of many small I/O operations. Performance can be greatly improved by reading the entire file and then writing the filtered result, though this uses much more memory. Caching results is similarly effective, though it also requires more memory.
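A rough sketch of the buffered approach follows; the function name, the file path argument and the substring filter are illustrative assumptions. The whole file is loaded into one buffer, filtered in memory and the matching lines are written out, trading memory proportional to the file size for large, sequential I/O operations:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Buffered variant: load the whole file, filter in memory, then write.
   Memory use grows with the file size, but all I/O happens in big blocks. */
static int filter_whole_file(const char *path, const char *needle)
{
    FILE *in = fopen(path, "rb");
    if (in == NULL)
        return -1;

    fseek(in, 0, SEEK_END);
    long size = ftell(in);
    rewind(in);
    if (size < 0) {
        fclose(in);
        return -1;
    }

    char *buf = malloc((size_t)size + 1);
    if (buf == NULL) {
        fclose(in);
        return -1;
    }
    size_t len = fread(buf, 1, (size_t)size, in);
    fclose(in);
    buf[len] = '\0';

    /* Scan line by line inside the buffer and emit matching lines. */
    for (char *line = strtok(buf, "\n"); line != NULL;
         line = strtok(NULL, "\n"))
        if (strstr(line, needle) != NULL)
            puts(line);

    free(buf);
    return 0;
}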
When to optimize
Optimization can reduce readability and add code that is used only to improve performance. This may complicate programs or systems, making them harder to maintain and debug. Compiler optimization, for example, may introduce odd behavior because of compiler bugs. For these reasons, optimization and performance tuning are usually left to the end of the development stage.
Premature optimization occurs when a programmer lets performance considerations affect how a piece of code is designed before the design itself is right. This can result in a design that is not as clean as it could have been.
A better approach is to design first, code from the design, and then profile or benchmark the resulting code to see which parts should be optimized. A simple and elegant design is usually easier to optimize at that stage anyway, and the parts of the code that turn out to be slow are often not the ones that were expected, so optimizing too early risks adding complexity needlessly.
In practice, it is often necessary to keep performance goals in mind when first designing software, but the programmer should still take care not to confuse design and optimization.
Also, before deploying a piece of code, especially in interpreted languages like PHP and JavaScript, it is considered good practice in some circles to remove comments, whitespace, and unused method or function definitions. Tools may also be used to condense or refactor the code in ways that make it run faster on a given platform, for example by replacing for loops with while loops; such transformations can make the original intent of the author much harder to recognize.
If the programmer discards the original commented and formatted code, maintainability is sacrificed. It becomes harder for a human to read, debug and subsequently modify or extend the code once it has been optimized.
It is bad practice to make those who will come along after you work harder. This anti-pattern can be avoided by keeping a copy of the original code, and by working from the original code when making changes to optimized software.
Automated and manual optimization
Optimization can be automated by compilers or performed by programmers. Gains are usually limited for local optimizations and larger for global optimizations. Usually, the most powerful optimization is to find a superior algorithm.
Optimizing a whole system is usually done by human beings because the system is too complex for automated optimizers. Grid computing or distributed computing aims to optimize the whole system, by moving tasks from computers with high usage to computers with idle time.
With manual optimization, programmers or system administrators explicitly change code so that the system performs better. Although it can produce better efficiency, it is far more expensive than automated optimization.
First of all, it is extremely important to use a profiler to locate the bottleneck. Programmers usually think they have a clear idea of what the bottleneck is, but they are frequently completely wrong. Optimizing an unimportant piece of code will not help the overall program speed.
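A dedicated profiler reports time spent per function automatically. As a minimal illustration of measuring rather than guessing, the sketch below times one suspected hot spot with the standard C clock() function; the workload shown is hypothetical:

#include <stdio.h>
#include <time.h>

/* Time a single candidate hot spot.  A profiler does this for every
   function at once; this only checks one suspect at a time. */
static double seconds_spent(void (*fn)(void))
{
    clock_t start = clock();
    fn();
    return (double)(clock() - start) / CLOCKS_PER_SEC;
}

static void suspected_bottleneck(void)   /* hypothetical workload */
{
    volatile double x = 0.0;
    for (long i = 0; i < 50000000L; i++)
        x += (double)i * 0.5;
}

int main(void)
{
    printf("suspected_bottleneck: %.3f s\n",
           seconds_spent(suspected_bottleneck));
    return 0;
}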
When the bottleneck is localized, optimization usually starts with a rethinking of the algorithm used in the program: more often than not, a particular algorithm can be specifically tailored to a particular problem, yielding better performance than a generic algorithm. For example, the task of sorting a huge list of items is usually done with a quicksort routine, which is one of the most efficient generic algorithms. But if some characteristic of the items is exploitable (for example, they are already arranged in some particular order), a different method can be used, or even a custom-made sort routine.
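For instance, if the list is known to be almost sorted already, a simple insertion sort (sketched below) runs in close to linear time on such input, something a generic comparison sort typically cannot exploit:

#include <stddef.h>

/* Insertion sort: quadratic in the worst case, but close to linear
   when the input is already almost in order, which is exactly the
   exploitable characteristic described above. */
static void insertion_sort(int *a, size_t n)
{
    for (size_t i = 1; i < n; i++) {
        int key = a[i];
        size_t j = i;
        while (j > 0 && a[j - 1] > key) {
            a[j] = a[j - 1];
            j--;
        }
        a[j] = key;
    }
}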
After one is reasonably sure that the best algorithm is selected, code optimization can start: loops can be unrolled for lower loop overhead (although this can sometimes reduce speed by overloading the processor's instruction cache), the smallest adequate data types can be used, integer arithmetic can be substituted for floating-point arithmetic, and so on.
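As an example of the first technique, here is a summation loop unrolled by a factor of four; this is only a sketch, and whether it helps or hurts on a given compiler and processor has to be measured:

#include <stddef.h>

/* Sum an array with the loop unrolled four times: fewer branches and
   counter updates per element processed. */
static long sum_unrolled(const int *a, size_t n)
{
    long sum = 0;
    size_t i = 0;

    for (; i + 4 <= n; i += 4)
        sum += a[i] + a[i + 1] + a[i + 2] + a[i + 3];
    for (; i < n; i++)          /* handle the leftover elements */
        sum += a[i];

    return sum;
}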
Performance bottlenecks can be due to the language rather than algorithms or data structures used in the program. Sometimes, a critical part of the program can be re-written in a different, faster programming language. For example, it is common for very high-level languages like Python to have modules written in C for greater speed. Programs already written in C can have modules written in assembly. Programs written in D can use the inline assembler.
Rewriting pays off because of a law known as the 90/10 law, which states that 90% of the time is spent in 10% of the code, and only 10% of the time in the remaining 90% of the code. So optimizing just a small part of the program can have a huge effect on the overall speed.
Manual optimization often has the side-effect of undermining readability. Thus code optimizations should be carefully documented and their effect on future development evaluated.
The program that does the automated optimization is called an optimizer. Most optimizers are embedded in compilers and operate during compilation. Optimizers often can tailor the generated code to specific processors.
Today, automated optimizations are almost exclusively limited to compiler optimization.
Techniques
Load balancing spreads the load over a large number of servers. Often load balancing is done transparently (i.e., without users noticing it), using a so-called layer 4 router.
Caching stores intermediate products of computation to avoid duplicate computations.
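Within a single program the same idea appears as memoization. The sketch below caches Fibonacci numbers as they are computed, so each value is calculated only once; the table size and the choice of Fibonacci are purely illustrative:

#include <stdint.h>

#define CACHE_SIZE 64

static uint64_t cache[CACHE_SIZE];   /* 0 means "not computed yet" */

/* Memoized Fibonacci: results are stored on first computation and
   served from the cache afterwards, avoiding the exponential number
   of calls made by the naive recursive version. */
static uint64_t fib(unsigned n)
{
    if (n < 2)
        return n;
    if (n < CACHE_SIZE && cache[n] != 0)
        return cache[n];

    uint64_t result = fib(n - 1) + fib(n - 2);
    if (n < CACHE_SIZE)
        cache[n] = result;
    return result;
}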
Related terms
- Abstract interpretation
- Control flow graph
- SSA form
- Queueing theory
- Simulation
- Lazy evaluation
- Speculative execution
- Memoization
- Caching
- FX!32
- Low level virtual machine
- Profiling
External links
- Software Optimization at Link-time And Run-time (http://www.cs.arizona.edu/solar/)