Essay Sample on Basic Principles of the Algorithms
Introduction to Algorithms
The most basic techniques in developing an algorithm are as follows:
- Divide and conquer
- Dynamic programming
- Greedy algorithms
A solvable problem can usually be solved in several ways: the obvious way, the methodical way, the clever way, and the miraculous way.
Date Added: October 14, 2022
To understand algorithms, the reader must know at least one programming language well enough to translate code into solutions to a problem. A working knowledge of data structures is also necessary: stacks, arrays, linked lists, queues, trees, disjoint sets, heaps, and graphs (Wikibooks, 2004). The reader must also know basic algorithms such as sorting, binary search, depth-first search, and breadth-first search. Anyone unfamiliar with these topics should consult further references on data structures before studying algorithms.
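As a concrete instance of the basic algorithms mentioned above, here is a minimal sketch of iterative binary search over a sorted list; the function name and interface are illustrative, not taken from the source.

```python
def binary_search(items, target):
    """Return the index of target in sorted items, or -1 if absent."""
    lo, hi = 0, len(items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2          # probe the middle element
        if items[mid] == target:
            return mid
        elif items[mid] < target:
            lo = mid + 1              # discard the lower half
        else:
            hi = mid - 1              # discard the upper half
    return -1
```

Each comparison halves the remaining search range, which is why binary search runs in logarithmic time on sorted input.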
The Importance of Efficiency
Not every problem requires an efficient solution. Algorithmic efficiency is concerned with the space and time needed to execute a task (Wikibooks, 2004). When space or time is inexpensive or abundant, the programmer can focus on a correct solution without worrying about making the code compile and run faster.
There are particular cases where efficiency matters:
- Limited resources
- A large set of data
- Real-time applications (where latency matters most)
- Computationally costly jobs
- Subroutines that are called frequently during a program run
Brief Discussion of Common Algorithmic Techniques
Divide and Conquer
In certain problems where the program input is already in an array, a solution can be built by cutting the problem into smaller chunks (divide), recursively tackling these small parts (conquer), and then combining the individual solutions into one result. Good examples of divide-and-conquer algorithms are quicksort and merge sort.
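The divide-conquer-combine pattern described above can be sketched with merge sort; the names are illustrative, not taken from the source.

```python
def merge_sort(arr):
    if len(arr) <= 1:                  # base case: trivially sorted
        return arr
    mid = len(arr) // 2
    left = merge_sort(arr[:mid])       # divide and conquer left half
    right = merge_sort(arr[mid:])      # divide and conquer right half
    # combine: merge the two sorted halves into one sorted result
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    merged.extend(left[i:])            # append any leftover elements
    merged.extend(right[j:])
    return merged
```

The recursion does the dividing; the merge loop is the combining step that turns two small sorted solutions into one larger one.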
Backtracking
Backtracking is a brute-force strategy, so it is not the most efficient technique; however, optimizations can be made to reduce the number of branches explored. The algorithm works through a tree of choices: when one leaf is visited and fails, the algorithm goes back up the stack to undo the choices that led there, then proceeds to other branches of the tree. Backtracking works best on problems that exhibit self-similarity, meaning that smaller problem instances resemble the entirety of the problem.
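A small sketch of backtracking is the subset-sum question: does any subset of a list of numbers add up to a target? Each call tries including or excluding the next number; a failed branch returns up the call stack so the other choice can be tried. The function name and interface are illustrative assumptions.

```python
def subset_sum(nums, target, i=0):
    if target == 0:
        return True                    # found a subset that works
    if i == len(nums) or target < 0:
        return False                   # dead end: backtrack
    # branch 1: include nums[i]; branch 2: exclude it
    return (subset_sum(nums, target - nums[i], i + 1)
            or subset_sum(nums, target, i + 1))
```

The self-similarity the text mentions is visible here: after one include/exclude decision, what remains is a smaller instance of the same problem.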
Randomization
Randomization is becoming increasingly important in countless applications. This technique generates and uses random numbers to tailor a solution to one instance of a problem.
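One common randomized algorithm, offered here as an illustrative sketch, is quickselect: it finds the k-th smallest element by partitioning around a randomly chosen pivot, and the random choice keeps the expected running time linear regardless of input order.

```python
import random

def quickselect(items, k):
    """Return the k-th smallest element of items (k is 0-based)."""
    pivot = random.choice(items)       # random pivot drives the method
    lower = [x for x in items if x < pivot]
    equal = [x for x in items if x == pivot]
    upper = [x for x in items if x > pivot]
    if k < len(lower):
        return quickselect(lower, k)   # answer lies below the pivot
    if k < len(lower) + len(equal):
        return pivot                   # the pivot itself is the answer
    return quickselect(upper, k - len(lower) - len(equal))
```

Different runs may take different recursion paths, but every run returns the same answer; only the running time is random.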
Hill Climbing
The fundamental idea of hill climbing is to begin with an inefficient solution to the problem and then repeatedly apply optimization steps to it until the solution becomes more optimal or a specific stopping criterion is met. Hill climbing works especially well on network flow; because flows can model many kinds of relationships, the technique can also solve problems such as matching outside of computer networks.
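The start-poor-then-improve loop can be sketched on a toy objective; the function below maximizes any single-variable function by stepping to the best neighbor until no neighbor improves. The names, step size, and objective are illustrative assumptions, not from the source.

```python
def hill_climb(f, x, step=1, max_iters=1000):
    """Greedily improve x until no neighbor scores higher under f."""
    for _ in range(max_iters):
        # look at the two neighbors and keep the best of the three
        best = max((x - step, x, x + step), key=f)
        if best == x:              # no neighbor improves: local optimum
            return x
        x = best
    return x
```

Starting from a deliberately poor guess, e.g. `hill_climb(lambda x: -(x - 3) ** 2, -10)`, the loop walks uphill one step at a time and stops at the peak at x = 3. Note that plain hill climbing only guarantees a local optimum, which is why the stopping criterion matters.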
Dynamic Programming
Dynamic programming is an optimization technique for backtracking algorithms. When the same subproblems must be solved repeatedly, precious time can be saved by dealing with the small subproblems (or leaves) first, working from smallest to largest in a bottom-up strategy, and storing each subproblem's solution in a table for reuse.
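The bottom-up, table-filling strategy can be sketched with the classic Fibonacci example: each entry is computed once from the two smaller entries already stored, instead of being recomputed by overlapping recursive calls.

```python
def fib(n):
    """Compute the n-th Fibonacci number bottom-up with a table."""
    table = [0, 1] + [0] * max(0, n - 1)   # smallest subproblems first
    for i in range(2, n + 1):
        table[i] = table[i - 1] + table[i - 2]  # reuse stored results
    return table[n]
```

A naive recursive version solves the same subproblem exponentially many times; the table reduces the work to one pass from the leaves up.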
Wikibooks. (2004, September 28). Algorithms/Introduction. Retrieved August 16, 2022, from https://en.wikibooks.org/wiki/Algorithms/Introduction