Greedy Algorithm in real life
Why is it called a Greedy Algorithm?
A greedy algorithm is a computational technique that aims to reach a globally optimal solution by making the best choice at each small step. In other words, the algorithm picks the best option available at the moment without considering the consequences. It is called greedy because it grabs the best immediate result while ignoring the bigger picture.
Let's look a little more closely at the greedy algorithm
Rather than considering the whole solution, a greedy algorithm selects the best option at each step and moves on to the next, repeating until it reaches the end. It simply hopes that the path it picks turns out to be globally optimal, but as has been shown repeatedly, this approach often fails to produce a globally optimal result. In fact, it is entirely possible for the best immediate choices to lead to the worst overall outcome.
Think of it as cutting corners in manufacturing: in the short term, production costs drop significantly, but the compromise in quality leads to returns and weak sales as consumers come to see the product as "cheap", and the business ultimately fails. Yet this is not always the case; in many situations, such as building a Huffman tree or a decision tree, a greedy algorithm is the best way to find, or closely approximate, the globally optimal answer.
Consider, for example, choosing the path with the largest total sum. Because of its shortsightedness, a greedy algorithm would follow the purple path rather than the red one, even though the red path yields the larger sum.
For instance,
Suppose we must travel from the starting point to the destination as cheaply as possible. The three candidate paths cost 50 (20 + 30) through A, 8 (3 + 5) through B, and 80 (79 + 1) through C. The path through B, costing 8, is the cheapest and therefore the best choice. This is the local optimum; to build the global solution, we find the local optimum at each stage in this way.
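The greedy choice in this example can be sketched in a few lines of Python; the path names and edge costs below simply mirror the hypothetical A, B, C routes above:

```python
# Candidate paths from start to destination, each a list of edge costs
# (hypothetical data matching the example in the text).
paths = {"A": [20, 30], "B": [3, 5], "C": [79, 1]}

# The greedy step: pick the candidate with the smallest total cost.
costs = {name: sum(edges) for name, edges in paths.items()}
best = min(costs, key=costs.get)
print(best, costs[best])  # → B 8
```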
Assuming all of its requirements are satisfied, solving a problem is simple. The problem categories we face are:
- Minimization problem: the situation calls for a minimum result.
- Maximization problem: the best possible (maximum) result is required.
- Optimization problem: either a minimum or a maximum result is required.
Kinds of Solutions
- Feasible Solution: when a problem arises, there are usually several workable solutions. Any solution that satisfies the conditions attached to the problem and produces the desired outcome is called a feasible solution.
- Optimal Solution: a feasible solution that yields the best result, which may be a minimum or a maximum. The key point is that for any given problem there is only one optimal solution.
What benefits does the greedy algorithm offer?
The main benefit of greedy algorithms is that they can be applied to a wide range of problems: for example, they can be used to find the shortest path through a network or to suggest the most efficient route for a vehicle. They are also simple to use, and they are fast and efficient at solving many different problems. One drawback, however, is that designing a correct greedy strategy for a particular problem can be challenging.
What are the drawbacks of the greedy algorithm?
A greedy algorithm assembles a solution piece by piece, always selecting the next piece that gives the best immediate answer to the situation at hand. The future effects of the current decision are not considered, and earlier decisions are never revisited; as a result, it often produces a nearly optimal answer that falls short of the true optimum. The 0/1 Knapsack Problem and the Travelling Salesman Problem are examples for which it cannot guarantee an optimal solution.
The greedy algorithm is utilised in what places?
A greedy algorithm makes many small moves towards a bigger goal. Artificial intelligence (AI) systems use greedy strategies to learn from data. The main aim of a greedy algorithm is to achieve the best possible result with the least amount of work: begin with a few workable options and work step by step towards the best one. Because an exhaustive search could take a very long time, greedy algorithms are frequently used in real-world settings to learn from data.
The Greedy Algorithm's Fundamental Elements
Now that we are more familiar with this mechanism, let's look at the elements that distinguish a greedy algorithm from other approaches:
- Candidate set
- Selection function
- Feasibility function
- Objective function
- Solution function
- Job Sequencing
- Arrange the jobs in order of decreasing profit.
- Starting with the job that brings in the maximum profit, try to assign each job to the latest free time slot that is no later than its deadline. Skip the job if no such time slot is available.
- Keep assigning jobs in this way until every job has been considered or all of the available time slots have been filled.
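The steps above can be sketched in Python. The job tuples and the name `job_sequencing` are illustrative; each job takes one unit of time, and the greedy places it in the latest free slot on or before its deadline, the slot choice that makes the profit-sorted greedy work:

```python
def job_sequencing(jobs):
    """Greedy job sequencing: jobs are (id, deadline, profit) tuples,
    each taking one unit of time on a single machine."""
    # Step 1: sort jobs by decreasing profit.
    jobs = sorted(jobs, key=lambda j: j[2], reverse=True)
    max_deadline = max(j[1] for j in jobs)
    slots = [None] * max_deadline  # slots[t] holds the job run in hour t+1

    # Step 2: place each job in the latest free slot before its deadline.
    for job_id, deadline, profit in jobs:
        for t in range(min(deadline, max_deadline) - 1, -1, -1):
            if slots[t] is None:
                slots[t] = job_id
                break  # job scheduled; otherwise it is skipped

    scheduled = [j for j in slots if j is not None]
    total = sum(p for i, d, p in jobs if i in scheduled)
    return scheduled, total

# Hypothetical data: four jobs as (id, deadline, profit).
jobs = [("a", 2, 100), ("b", 1, 19), ("c", 2, 27), ("d", 1, 25)]
print(job_sequencing(jobs))  # → (['c', 'a'], 127)
```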
- Knapsack problem
- Determine the value-to-weight ratio for each item.
- Arrange the items according to their value-to-weight ratios in decreasing order.
- Starting with the item that has the highest value-to-weight ratio, fill the knapsack to its maximum capacity without going over. Repeat with the item that has the next-highest ratio until the knapsack is full or there are no more items to add.
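A minimal sketch of this ratio-based greedy in Python; the item data and the name `greedy_knapsack` are hypothetical, and for whole (0/1) items this is only a heuristic:

```python
def greedy_knapsack(items, capacity):
    """Greedy heuristic for the 0/1 knapsack: take whole items in
    decreasing value-to-weight order while they still fit. This does
    NOT always find the optimum."""
    # Steps 1-2: compute ratios and sort by ratio, highest first.
    items = sorted(items, key=lambda it: it[1] / it[0], reverse=True)
    total_value, remaining = 0, capacity
    taken = []
    # Step 3: add each item that still fits in the remaining capacity.
    for weight, value in items:
        if weight <= remaining:
            taken.append((weight, value))
            remaining -= weight
            total_value += value
    return taken, total_value

# Hypothetical items as (weight, value) pairs and a capacity of 50.
items = [(10, 60), (20, 100), (30, 120)]
print(greedy_knapsack(items, 50))  # → ([(10, 60), (20, 100)], 160)
```

For these numbers the greedy returns a value of 160, while taking the 20- and 30-unit items instead would yield 220, illustrating why this heuristic can miss the optimum.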
- Minimum Spanning Tree
- Kruskal's Algorithm:
- Prim's Algorithm:
- Optimal Merge Pattern
- Huffman Coding
- Ascertain how frequently each character appears in the message.
- Construct a binary tree with each character as a leaf node and the node weight being the frequency of occurrence.
- Create a parent node out of the two nodes with the two smallest weights, and set the parent node's weight equal to the sum of their weights.
- Continue performing step 3 until the binary tree consists of just one root node.
- Beginning at the root node, assign a "0" bit to each left branch and a "1" bit to each right branch of the binary tree.
- Use the resulting prefix code to encode the message, replacing each character with its bit string.
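The steps above can be sketched in Python with a min-heap; the function name `huffman_codes` and the sample message are illustrative:

```python
import heapq
from collections import Counter

def huffman_codes(message):
    """Build a Huffman prefix code for `message` (steps 1-6 above)."""
    # Step 1: character frequencies.
    freq = Counter(message)
    # Step 2: one leaf per character; a heap entry is (weight, tiebreak, tree).
    heap = [(w, i, ch) for i, (ch, w) in enumerate(freq.items())]
    heapq.heapify(heap)
    count = len(heap)
    # Steps 3-4: repeatedly merge the two lightest nodes into a parent
    # whose weight is the sum of their weights.
    while len(heap) > 1:
        w1, _, left = heapq.heappop(heap)
        w2, _, right = heapq.heappop(heap)
        heapq.heappush(heap, (w1 + w2, count, (left, right)))
        count += 1
    # Step 5: walk the tree, appending "0" for left and "1" for right.
    codes = {}
    def walk(node, prefix):
        if isinstance(node, str):
            codes[node] = prefix or "0"  # single-character edge case
        else:
            walk(node[0], prefix + "0")
            walk(node[1], prefix + "1")
    walk(heap[0][2], "")
    return codes

# Step 6: encode a hypothetical message with the resulting code.
codes = huffman_codes("abracadabra")
encoded = "".join(codes[ch] for ch in "abracadabra")
print(len(encoded))  # 23 bits for the 11-character message
```

The integer tie-breaker in each heap entry keeps comparisons well defined when two subtrees have equal weight.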
- Dijkstra Algorithm
- Initialise the distance of the source vertex to zero and the distance of every other vertex to infinity.
- From the set of vertices whose shortest distance is not yet known, choose the one with the smallest tentative distance and move it into the set of vertices whose distance is known.
- For each neighbouring vertex of the selected vertex whose distance is not yet known, calculate the distance as the sum of the distance to the selected vertex and the weight of the edge connecting them. If this distance is smaller than the neighbour's current distance, update it.
- Keep repeating steps 2 and 3 until every vertex has been added to the set of vertices whose distance is known.
- Candidate set: the set from which a solution is produced.
- Selection function: chooses the most promising candidate to add to the solution.
- Feasibility function: determines whether a candidate can contribute to the solution.
- Objective function: assigns a value to a complete or partial solution.
- Solution function: indicates whether a complete solution has been reached.
How to employ a greedy algorithm?
Job sequencing is a common computer science problem in which a group of tasks must be allocated to a single machine with the aim of maximising the total profit realised from the tasks completed. In the context of greedy algorithms, job sequencing can be solved with the straightforward method whose steps were listed above.
This method is called a greedy algorithm because, at each stage, it selects the highest-paying job that can still be scheduled before its deadline, in the hope that these choices will lead to a globally optimal schedule.
Note that this technique assumes that every job takes one unit of time and must be finished by its deadline. If jobs have varying lengths, or may finish after their deadlines, an alternative approach needs to be used.
The Knapsack problem is a well-known optimization problem in computer science: given a knapsack with a finite amount of space and a set of items, each with its own weight and value, how should the knapsack be packed so that the total value it carries is as large as possible?
One possible greedy solution to the Knapsack problem selects items by their value-to-weight ratio, following the steps listed above.
This approach performs well when the knapsack capacity is constrained and the items have high value-to-weight ratios. Unfortunately, it does not always produce the best answer: for instance, when items have similar value-to-weight ratios but different weights, adding a lighter item first may leave room for more total value in the knapsack.
The Fractional Knapsack algorithm, which allows items to be divided into fractional amounts, and hybrid heuristics that combine greedy choices with dynamic programming are other ways to tackle the Knapsack problem.
The Minimum Spanning Tree (MST) is a classic computer science problem: find the subset of edges that connects every vertex in a given graph while minimising the total weight of those edges. Kruskal's algorithm and Prim's algorithm are two well-known greedy methods for solving it.
Kruskal's algorithm first sorts all of the graph's edges in non-decreasing order of weight. It then chooses the edge with the least weight and adds it to the MST if doing so does not create a cycle, repeating the procedure with the next-smallest edge. The algorithm terminates when all of the edges have been considered or all of the vertices are connected.
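A compact sketch of Kruskal's algorithm in Python, using a union-find structure to detect cycles; the vertex count and edge list below are hypothetical:

```python
def kruskal(n, edges):
    """Kruskal's MST: sort edges by weight, then add each edge that does
    not close a cycle. Vertices are 0..n-1; edges are (weight, u, v)."""
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x
    mst, total = [], 0
    for w, u, v in sorted(edges):      # non-decreasing weight order
        ru, rv = find(u), find(v)
        if ru != rv:                   # edge joins two components: no cycle
            parent[ru] = rv
            mst.append((u, v, w))
            total += w
    return mst, total

# Hypothetical 4-vertex graph.
edges = [(1, 0, 1), (4, 0, 2), (3, 1, 2), (2, 1, 3), (5, 2, 3)]
print(kruskal(4, edges))  # MST weight: 6
```

The union-find `find` uses path halving so repeated cycle checks stay near-constant time.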
Prim's algorithm starts by choosing any vertex and adding it to the MST. It then repeatedly selects the minimum-weight edge connecting a vertex inside the MST to a vertex outside it, adding that edge and the new vertex to the tree, until all the vertices are connected.
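Prim's algorithm can be sketched with a heap of frontier edges; the adjacency-list data and the function name `prim` are illustrative:

```python
import heapq

def prim(graph, start=0):
    """Prim's MST: grow a tree from `start`, always taking the lightest
    edge leaving the tree. `graph[u]` is a list of (weight, v) pairs."""
    visited = {start}
    frontier = list(graph[start])          # candidate edges out of the tree
    heapq.heapify(frontier)
    total = 0
    while frontier and len(visited) < len(graph):
        w, v = heapq.heappop(frontier)
        if v in visited:
            continue                       # edge no longer leaves the tree
        visited.add(v)
        total += w
        for edge in graph[v]:
            if edge[1] not in visited:
                heapq.heappush(frontier, edge)
    return total

# Hypothetical undirected graph as adjacency lists of (weight, neighbour).
graph = {
    0: [(1, 1), (4, 2)],
    1: [(1, 0), (3, 2), (2, 3)],
    2: [(4, 0), (3, 1), (5, 3)],
    3: [(2, 1), (5, 2)],
}
print(prim(graph))  # → 6
```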
Both Kruskal's and Prim's algorithms always make the locally optimal choice: Kruskal's picks the smallest remaining edge overall, while Prim's picks the smallest edge leaving the current tree. For the MST problem these greedy choices are provably sufficient to reach a globally optimal tree, although other techniques, such as Borůvka's algorithm or the Reverse-Delete algorithm, may be better suited in some circumstances.
The Optimal Merge Pattern is a well-known problem in computer science: combine several sorted arrays into a single sorted array with the minimum total cost, where merging two arrays of sizes a and b costs a + b operations. In the context of greedy algorithms, it is solved with a simple greedy strategy rather than by divide-and-conquer methods such as Merge Sort.
At each step, the algorithm picks the two smallest arrays, merges them into one, and returns the merged array to the pool. This is repeated until only one array remains. Because the smallest arrays are merged first, they take part in the most merge operations and the largest arrays in the fewest, which minimises the total cost.
This greedy choice is the same one used when building a Huffman tree, and by keeping the array sizes in a min-heap the algorithm runs in O(n log n) time for n arrays.
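A sketch of the greedy cost computation for the optimal merge pattern, which repeatedly merges the two smallest arrays; only the sizes matter for the cost, and the sizes below are hypothetical:

```python
import heapq

def optimal_merge_cost(sizes):
    """Total cost of merging sorted arrays with the given sizes, always
    merging the two smallest first (a greedy choice, analogous to
    building a Huffman tree)."""
    heap = list(sizes)
    heapq.heapify(heap)
    cost = 0
    while len(heap) > 1:
        a = heapq.heappop(heap)
        b = heapq.heappop(heap)
        cost += a + b          # merging arrays of sizes a and b costs a + b
        heapq.heappush(heap, a + b)
    return cost

print(optimal_merge_cost([2, 3, 4]))  # merge 2+3 (cost 5), then 5+4 (cost 9): total 14
```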
Huffman coding is a lossless data compression technique that represents the characters or symbols in a message using a variable-length prefix code. Based on how often each character occurs in the message, it uses shorter bit strings for the more common characters and longer bit strings for the less frequent ones.
In the context of greedy algorithms, the Huffman coding algorithm employs a greedy strategy to build an optimal prefix code for the given message. Here is how the algorithm operates:
The prefix code created by the Huffman coding algorithm is optimal in terms of the total number of bits needed to encode the message: giving the more frequent characters shorter bit strings shortens the message's total length. Building the code takes O(n log n) time, where n is the number of distinct characters in the message.
Dijkstra's algorithm is a greedy algorithm that determines the shortest path from a source vertex to every other vertex in a weighted graph. It operates by maintaining one set of vertices whose shortest distance from the source is already known and another set for which it is not yet known.
Here is how the algorithm operates:
The algorithm's output is a list of the shortest distances from the source vertex to every other vertex in the graph.
Dijkstra's algorithm always selects the vertex with the smallest distance from the source vertex in each iteration, ensuring that the shortest path is found in a greedy manner. However, it does not work for graphs with negative weight edges. For such graphs, the Bellman-Ford algorithm or the Floyd-Warshall algorithm can be used. Additionally, if the destination vertex is known in advance, a modified version of Dijkstra's algorithm called the A* algorithm can be used to further optimize the search.
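Dijkstra's algorithm as described can be sketched in Python with a priority queue; the example graph below (with non-negative weights, as the algorithm requires) is hypothetical:

```python
import heapq

def dijkstra(graph, source):
    """Dijkstra's shortest paths: `graph[u]` maps each neighbour of u to
    a non-negative edge weight; returns distances from `source`."""
    # Step 1: source distance 0, everything else unknown (infinity).
    dist = {v: float("inf") for v in graph}
    dist[source] = 0
    heap = [(0, source)]
    while heap:
        # Step 2: take the unsettled vertex with the smallest distance.
        d, u = heapq.heappop(heap)
        if d > dist[u]:
            continue  # stale entry: u was already settled
        # Step 3: relax each neighbour (distance to u plus edge weight).
        for v, w in graph[u].items():
            if d + w < dist[v]:
                dist[v] = d + w
                heapq.heappush(heap, (dist[v], v))
    return dist

# Hypothetical weighted directed graph.
graph = {
    "s": {"a": 3, "b": 5},
    "a": {"b": 1, "t": 6},
    "b": {"t": 2},
    "t": {},
}
print(dijkstra(graph, "s"))  # → {'s': 0, 'a': 3, 'b': 4, 't': 6}
```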