Chapter 19: Advanced Graph Algorithms

Welcome to Chapter 19, where we’ll explore some of the more advanced algorithms used in graph theory. These algorithms are crucial for solving complex problems related to networks, paths, and flows. In this chapter, we will cover:

Shortest Path Algorithms: Dijkstra’s Algorithm, Bellman-Ford Algorithm
Maximum Flow Algorithms: Ford-Fulkerson Method, Edmonds-Karp Algorithm

19.1 Shortest Path Algorithms

In graph theory, finding the shortest path between two nodes is a common problem. Depending on the type of graph (directed, undirected, weighted, unweighted), different algorithms can be used to find the shortest paths efficiently.

19.1.1 Dijkstra’s Algorithm

Definition: Dijkstra’s Algorithm finds the shortest path from a source node to all other nodes in a graph with non-negative edge weights. It follows a greedy approach, always processing the closest unvisited node next.

How Dijkstra’s Algorithm Works:
1. Initialize Distances: Assign an infinite distance to all nodes except the source node, which gets a distance of 0.
2. Priority Queue: Use a priority queue to select the node with the smallest known distance.
3. Relaxation: For each neighboring node, check whether a shorter path can be found through the current node. If so, update the distance.
4. Repeat: Continue until all nodes have been visited and their shortest paths are known.

Dijkstra’s Algorithm Example:

Consider the graph below:

A –1–> B
A –4–> C
B –2–> C
B –6–> D
C –3–> D

To find the shortest path from A to all other nodes:

1. Initialization: Set the distance to A as 0, and all others as infinity.
   Distances: A = 0, B = ∞, C = ∞, D = ∞
2. Priority Queue: Start with A (0).
3. Relaxation: From A, update B (0 + 1 = 1) and C (0 + 4 = 4).
   New distances: A = 0, B = 1, C = 4, D = ∞
4. Next Node: Select B (1), update C (1 + 2 = 3) and D (1 + 6 = 7).
   New distances: A = 0, B = 1, C = 3, D = 7
5. Next Node: Select C (3), update D (3 + 3 = 6).
   Final distances: A = 0, B = 1, C = 3, D = 6

The shortest paths from A are: A to B: 1, A to C: 3, A to D: 6.

Dijkstra’s Algorithm Implementation in C++:

#include <iostream>
#include <vector>
#include <queue>
#include <limits.h>
using namespace std;

#define INF INT_MAX

// Structure for a graph edge
struct Edge {
    int dest, weight;
};

// Dijkstra’s algorithm to find the shortest paths from src
void dijkstra(vector<vector<Edge>>& graph, int src) {
    int V = graph.size();
    vector<int> dist(V, INF);
    dist[src] = 0;

    // Min-heap ordered by (distance, vertex)
    priority_queue<pair<int, int>, vector<pair<int, int>>, greater<pair<int, int>>> pq;
    pq.push({0, src});

    while (!pq.empty()) {
        int u = pq.top().second;
        pq.pop();
        for (auto edge : graph[u]) {
            int v = edge.dest;
            int weight = edge.weight;
            // Relax the edge u -> v
            if (dist[u] + weight < dist[v]) {
                dist[v] = dist[u] + weight;
                pq.push({dist[v], v});
            }
        }
    }

    cout << "Vertex\tDistance from Source\n";
    for (int i = 0; i < V; ++i)
        cout << i << "\t\t" << dist[i] << "\n";
}

int main() {
    int V = 5;
    vector<vector<Edge>> graph(V);
    graph[0].push_back({1, 10});
    graph[0].push_back({4, 5});
    graph[1].push_back({2, 1});
    graph[2].push_back({3, 4});
    graph[4].push_back({1, 3});
    graph[4].push_back({2, 9});

    dijkstra(graph, 0);
    return 0;
}

This code illustrates Dijkstra’s algorithm for a graph with 5 vertices.

19.1.2 Bellman-Ford Algorithm

Definition: The Bellman-Ford Algorithm is another method for finding the shortest path from a source node to all other nodes, but unlike Dijkstra’s algorithm, it works with graphs that may contain negative edge weights.
How Bellman-Ford Algorithm Works:
1. Initialize Distances: Set the distance of the source node to 0 and all other nodes to infinity.
2. Relax All Edges: For each edge, update the distance to its destination if a shorter path is found through its source.
3. Repeat: Perform the relaxation process V - 1 times, where V is the number of vertices.
4. Negative Cycle Detection: In a final pass, check for any further distance reductions. If one is found, a negative weight cycle exists.

Example:

Consider the graph:

A –1–> B
B –2–> C
C –(-1)–> A
B –3–> D

To find the shortest path from A:

Initialize distances: A = 0, B = ∞, C = ∞, D = ∞.

After the first pass:
A to B: 1
B to C: 1 + 2 = 3
B to D: 1 + 3 = 4

After the second pass, no changes occur, confirming the final shortest paths.

Bellman-Ford Algorithm Implementation in C++:

#include <iostream>
#include <vector>
#include <climits>
using namespace std;

struct Edge {
    int u, v, weight;
};

void bellmanFord(int V, int E, vector<Edge>& edges, int src) {
    vector<int> dist(V, INT_MAX);
    dist[src] = 0;

    // Relax all edges V - 1 times
    for (int i = 1; i <= V - 1; i++) {
        for (int j = 0; j < E; j++) {
            int u = edges[j].u;
            int v = edges[j].v;
            int weight = edges[j].weight;
            if (dist[u] != INT_MAX && dist[u] + weight < dist[v]) {
                dist[v] = dist[u] + weight;
            }
        }
    }

    // Check for negative-weight cycles
    for (int i = 0; i < E; i++) {
        int u = edges[i].u;
        int v = edges[i].v;
        int weight = edges[i].weight;
        if (dist[u] != INT_MAX && dist[u] + weight < dist[v]) {
            cout << "Graph contains a negative-weight cycle\n";
            return;
        }
    }

    cout << "Vertex\tDistance from Source\n";
    for (int i = 0; i < V; i++)
        cout << i << "\t\t" << dist[i] << "\n";
}

int main() {
    int V = 5, E = 8;
    vector<Edge> edges = {{0, 1, -1}, {0, 2, 4}, {1, 2, 3}, {1, 3, 2},
                          {1, 4, 2}, {3, 2, 5}, {3, 1, 1}, {4, 3, -3}};
    bellmanFord(V, E, edges, 0);
    return 0;
}

The Bellman-Ford algorithm is more versatile than Dijkstra’s because it can handle negative weights, but it is less efficient in practice: O(V·E) versus Dijkstra’s O((V + E) log V) with a binary heap.

19.2 Maximum Flow Algorithms

In network flow problems, we aim to send as much flow as possible from a source node to a destination node, subject to capacity limits on the edges. Two classic algorithms for finding the maximum flow are the Ford-Fulkerson method and the Edmonds-Karp algorithm.


Chapter 18: Greedy Algorithms

Welcome to Chapter 18, where we’ll dive into the fascinating world of Greedy Algorithms. A Greedy Algorithm works by making the best decision at each step, aiming to reach an optimal solution. The idea is simple: take the immediate advantage without worrying about the future. While this may sound risky, greedy algorithms often lead to optimal solutions in many real-world problems.

In this chapter, we will explore two popular applications of greedy algorithms:

Huffman Coding — for data compression.
Minimum Spanning Trees — using Kruskal’s and Prim’s algorithms for efficient network design.

Let’s get started with Huffman Coding!

18.1 Huffman Coding

Definition: Huffman Coding is a greedy algorithm used for lossless data compression. It assigns variable-length codes to input characters, with shorter codes assigned to more frequent characters and longer codes to less frequent ones. This minimizes the overall size of the encoded data.

How Huffman Coding Works:
1. Frequency Calculation: Start by calculating the frequency of each character in the input data.
2. Build a Priority Queue: Insert all characters (or symbols) into a priority queue, where the element with the lowest frequency has the highest priority.
3. Construct the Huffman Tree: Repeatedly extract the two nodes with the smallest frequencies from the queue, combine them into a new node (with the combined frequency), and reinsert this node back into the queue. Keep doing this until there’s only one node left – the root of the Huffman tree.
4. Generate Codes: Starting from the root of the tree, assign 0 to the left edge and 1 to the right edge of each node. Traverse the tree to generate the variable-length codes for each character.

Example: Let’s say we want to encode the string "AABACD".

Character frequencies: A: 3, B: 1, C: 1, D: 1

Create a priority queue: Insert A(3), B(1), C(1), D(1).

Building the tree:
Combine B and C (1 + 1 = 2); now we have A(3), BC(2), D(1).
Combine D and BC (1 + 2 = 3); now we have A(3), DBC(3).
Combine A and DBC (3 + 3 = 6); this becomes the root.

Generate Huffman Codes:
A = 0
B = 101
C = 100
D = 11

So, the encoded string for "AABACD" becomes: 0 0 101 0 100 11.
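Before the chapter’s simplified demonstration code below, here is a hedged sketch of the full greedy build for the "AABACD" example. To keep it short, it finds the two minimum-frequency nodes with a repeated linear scan instead of the min-heap a real implementation would use (so the heap-based version remains a good exercise), and the helper names are illustrative. The exact codes it prints can differ from the worked example depending on how ties are broken, but the total encoded length is the same.

#include <stdio.h>
#include <stdlib.h>

struct HNode {
    char data;
    unsigned freq;
    struct HNode *left, *right;
};

struct HNode* makeNode(char data, unsigned freq, struct HNode* l, struct HNode* r) {
    struct HNode* n = malloc(sizeof(struct HNode));
    n->data = data; n->freq = freq; n->left = l; n->right = r;
    return n;
}

// Index of the node with the smallest frequency in nodes[0..count)
int minIndex(struct HNode* nodes[], int count) {
    int best = 0;
    for (int i = 1; i < count; i++)
        if (nodes[i]->freq < nodes[best]->freq) best = i;
    return best;
}

// Walk the tree: left edge = '0', right edge = '1'; print codes at leaves
void printCodes(struct HNode* root, char* code, int depth) {
    if (!root->left && !root->right) {
        code[depth] = '\0';
        printf("%c = %s\n", root->data, code);
        return;
    }
    code[depth] = '0'; printCodes(root->left, code, depth + 1);
    code[depth] = '1'; printCodes(root->right, code, depth + 1);
}

int main() {
    // Frequencies for the "AABACD" example
    char chars[] = {'A', 'B', 'C', 'D'};
    unsigned freqs[] = {3, 1, 1, 1};
    struct HNode* nodes[4];
    int count = 4;
    for (int i = 0; i < count; i++)
        nodes[i] = makeNode(chars[i], freqs[i], NULL, NULL);

    // Greedy build: repeatedly merge the two lowest-frequency nodes
    while (count > 1) {
        int a = minIndex(nodes, count);
        struct HNode* first = nodes[a];
        nodes[a] = nodes[--count];       // Remove the first minimum
        int b = minIndex(nodes, count);
        struct HNode* second = nodes[b];
        nodes[b] = makeNode('$', first->freq + second->freq, first, second);
    }

    char code[16];
    printCodes(nodes[0], code, 0);
    return 0;
}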
Huffman Coding Implementation in C:

#include <stdio.h>
#include <stdlib.h>

// A Huffman tree node
struct MinHeapNode {
    char data;
    unsigned freq;
    struct MinHeapNode *left, *right;
};

// Function to create a new min heap node
struct MinHeapNode* newNode(char data, unsigned freq) {
    struct MinHeapNode* temp = (struct MinHeapNode*) malloc(sizeof(struct MinHeapNode));
    temp->data = data;
    temp->freq = freq;
    temp->left = temp->right = NULL;
    return temp;
}

// A utility function to print an array of size n
void printArray(int arr[], int n) {
    for (int i = 0; i < n; ++i)
        printf("%d", arr[i]);
    printf("\n");
}

// Print Huffman codes from the root of the tree
void printHuffmanCodes(struct MinHeapNode* root, int arr[], int top) {
    if (root->left) {
        arr[top] = 0;
        printHuffmanCodes(root->left, arr, top + 1);
    }
    if (root->right) {
        arr[top] = 1;
        printHuffmanCodes(root->right, arr, top + 1);
    }
    // A leaf node carries a character; print its code
    if (!(root->left && root->right)) {
        printf("%c: ", root->data);
        printArray(arr, top);
    }
}

// This function builds a small example tree and prints the codes by traversing it
void HuffmanCodes(char data[], int freq[], int size) {
    // Create a simple example of a Huffman Tree (for demonstration purposes)
    struct MinHeapNode* root = newNode('$', 0);
    root->left = newNode('A', freq[0]);
    root->right = newNode('B', freq[1]);

    int arr[100], top = 0;
    printHuffmanCodes(root, arr, top);
}

int main() {
    char arr[] = {'A', 'B'};
    int freq[] = {5, 9};
    int size = sizeof(arr) / sizeof(arr[0]);
    HuffmanCodes(arr, freq, size);
    return 0;
}

This code gives a basic illustration of Huffman Coding. You would need a priority queue or min-heap to fully implement the algorithm, which can be a great exercise for you to try!

Pros and Cons of Huffman Coding:

Pros:
Reduces the size of data significantly, leading to efficient storage.
Works well for characters with varying frequencies.

Cons:
Requires prior knowledge of character frequencies.
Can be time-consuming to build for very large datasets.

18.2 Minimum Spanning Trees

Now let’s move to the Minimum Spanning Tree (MST) problem. An MST is a subgraph of a graph that connects all vertices with the minimum possible total edge weight, without forming any cycles. There are two popular algorithms for finding the MST:

Kruskal’s Algorithm
Prim’s Algorithm

Let’s start with Kruskal’s Algorithm.

Kruskal’s Algorithm

Kruskal’s Algorithm is a greedy approach that treats the graph as a forest, where each vertex is its own tree. It keeps adding the smallest edge that connects two separate trees until all vertices are connected.

How Kruskal’s Algorithm Works:
1. Sort all the edges in increasing order based on their weights.
2. Initialize an empty forest (collection of trees), where each vertex is its own tree.
3. Pick the smallest edge. If it connects two different trees, add it to the MST and merge the trees.
4. Repeat this until all vertices are connected in a single tree.

Kruskal’s Algorithm Example:

Let’s say we have the following graph with edges:

Edge    Weight
A-B     1
A-C     4
B-C     3
B-D     2
C-D     5

Sort edges: A-B (1), B-D (2), B-C (3), A-C (4), C-D (5).
Start by adding A-B to the MST.
Add B-D next.
Add B-C.
Skip A-C and C-D (each would create a cycle).

The MST includes edges A-B, B-D, and B-C, with a total weight of 6.
Kruskal’s Algorithm Implementation in C:

#include <stdio.h>
#include <stdlib.h>

// Structure to represent an edge
struct Edge {
    int src, dest, weight;
};

// Structure to represent a graph
struct Graph {
    int V, E;
    struct Edge* edges;
};

// Create a new graph
struct Graph* createGraph(int V, int E) {
    struct Graph* graph = (struct Graph*) malloc(sizeof(struct Graph));
    graph->V = V;
    graph->E = E;
    graph->edges = (struct Edge*) malloc(graph->E * sizeof(struct Edge));
    return graph;
}

// A utility function to find the subset of an element (with path compression)
int find(int parent[], int i) {
    if (parent[i] != i)
        parent[i] = find(parent, parent[i]);
    return parent[i];
}
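The excerpt breaks off at the find() helper. For reference, here is a hedged sketch of how the remaining pieces of Kruskal’s algorithm might look in the same style, assuming the includes, Edge, Graph, createGraph(), and find() definitions above. The names unionSets(), compareEdges(), and kruskalMST() are illustrative, not from the original chapter.

// Merge the subsets containing x and y (simple union; union by rank is a common refinement)
void unionSets(int parent[], int x, int y) {
    parent[find(parent, x)] = find(parent, y);
}

// Comparator for qsort: ascending edge weight
int compareEdges(const void* a, const void* b) {
    return ((struct Edge*)a)->weight - ((struct Edge*)b)->weight;
}

void kruskalMST(struct Graph* graph) {
    int V = graph->V;
    int* parent = (int*) malloc(V * sizeof(int));
    for (int v = 0; v < V; v++)
        parent[v] = v; // Each vertex starts as its own tree

    // Step 1: sort all edges in increasing order of weight
    qsort(graph->edges, graph->E, sizeof(struct Edge), compareEdges);

    int taken = 0, total = 0;
    for (int i = 0; i < graph->E && taken < V - 1; i++) {
        struct Edge e = graph->edges[i];
        // Step 3: add the edge only if it connects two different trees
        if (find(parent, e.src) != find(parent, e.dest)) {
            unionSets(parent, e.src, e.dest);
            printf("%d - %d (weight %d)\n", e.src, e.dest, e.weight);
            total += e.weight;
            taken++;
        }
    }
    printf("Total MST weight: %d\n", total);
    free(parent);
}

For the example graph above (mapping A=0, B=1, C=2, D=3), filling graph->edges with the five weighted edges and calling kruskalMST(graph) would print the three MST edges with a total weight of 6.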


Chapter 17: Advanced Sorting Algorithms

Welcome back, dear reader! Now that we’ve dipped our toes into the world of simple sorting algorithms, it’s time to dive a little deeper. In this chapter, we’ll explore some more powerful sorting algorithms that are designed to handle larger datasets efficiently.

Before we jump in, take a moment to appreciate how far you’ve come! Sorting is one of the most important concepts in data structures, and understanding it thoroughly gives you a solid foundation for tackling more complex problems. So, what’s next? Let’s begin with a sorting algorithm that’s fast, efficient, and widely used in real-world applications — Quick Sort.

17.1 Quick Sort

Definition: Quick Sort is a divide-and-conquer algorithm. It works by selecting a "pivot" element from the array and partitioning the other elements into two sub-arrays: those smaller than the pivot and those larger. The pivot is then placed in its correct position, and the process is recursively applied to the sub-arrays.

How It Works: Quick Sort might sound a little tricky, but let’s break it down step-by-step:
1. Pick an element from the array. This is called the pivot.
2. Partition the array into two parts: elements less than the pivot go to the left, and elements greater than the pivot go to the right.
3. Recursively apply Quick Sort to the left and right sub-arrays.
4. Combine the results, and you have a sorted array!

Let’s walk through an example to see how Quick Sort works in action.

Quick Sort Example:

Imagine we have the following array: [30, 10, 50, 20, 60, 40]

We choose 30 as our pivot (you can choose any element, but for simplicity, we’ll take the first one).
Elements less than 30: [10, 20]
Elements greater than 30: [50, 60, 40]

Now we recursively sort these two sub-arrays. The left sub-array [10, 20] is already sorted, so no further action is needed there. But the right sub-array [50, 60, 40] needs sorting. We choose 50 as the pivot for this sub-array:
Elements less than 50: [40]
Elements greater than 50: [60]

Now, both [40] and [60] are sorted individually. We combine everything, and voila! The final sorted array is: [10, 20, 30, 40, 50, 60]

See how it works? It’s like dividing the problem into smaller pieces until it’s easy to solve, and then putting everything back together.

Time Complexity:
Best case: O(n log n) — This happens when the pivot divides the array into two nearly equal parts each time.
Worst case: O(n²) — This occurs when the pivot is always the smallest or largest element, leading to unbalanced partitions.
Average case: O(n log n) — Generally, Quick Sort is quite efficient, and this is the expected performance for most datasets.

Quick Sort Implementation in C: Let’s write a Quick Sort program in C, but instead of using just arrays, we’ll also show you how to use pointers. Pointers are powerful tools in C, and they make managing memory much easier.
#include <stdio.h>

// Function to swap two elements using pointers
void swap(int *a, int *b) {
    int temp = *a;
    *a = *b;
    *b = temp;
}

// Partition function that returns the index of the pivot
int partition(int arr[], int low, int high) {
    int pivot = arr[high]; // Choosing the last element as the pivot
    int i = (low - 1);     // Index of the smaller element

    for (int j = low; j <= high - 1; j++) {
        if (arr[j] < pivot) {
            i++; // Increment the index of the smaller element
            swap(&arr[i], &arr[j]);
        }
    }
    swap(&arr[i + 1], &arr[high]);
    return (i + 1);
}

// Quick Sort function that uses recursion
void quickSort(int arr[], int low, int high) {
    if (low < high) {
        // Get the pivot element in its correct position
        int pi = partition(arr, low, high);

        // Recursively sort the left and right sub-arrays
        quickSort(arr, low, pi - 1);
        quickSort(arr, pi + 1, high);
    }
}

// Function to print the array
void printArray(int arr[], int size) {
    for (int i = 0; i < size; i++) {
        printf("%d ", arr[i]);
    }
    printf("\n");
}

int main() {
    int arr[] = {30, 10, 50, 20, 60, 40};
    int n = sizeof(arr) / sizeof(arr[0]);

    printf("Original array: \n");
    printArray(arr, n);

    quickSort(arr, 0, n - 1);

    printf("Sorted array: \n");
    printArray(arr, n);
    return 0;
}

Explanation:
The partition() function is the heart of the Quick Sort algorithm. It rearranges the array by moving smaller elements to the left of the pivot and larger elements to the right.
We use pointers in the swap() function to directly modify the elements of the array in memory.
The quickSort() function is recursive and sorts the array by dividing it into smaller sub-arrays around the pivot.

Why Use Quick Sort? Quick Sort is fast and works well for most real-world data. It’s not only good for small datasets but also scales efficiently for larger ones. And since it’s a divide-and-conquer algorithm, it can be optimized for parallel processing.

But wait! We’re not done yet. Next, we’ll look at another very important sorting algorithm — Merge Sort. This one is also based on the divide-and-conquer approach, but it works in a slightly different way. Should we continue with Merge Sort? I promise it’s super interesting!

Alright! Let’s keep the momentum going and dive into Merge Sort — another powerful sorting algorithm that you’ll often encounter in the real world. If you liked the divide-and-conquer strategy of Quick Sort, you’re going to love Merge Sort because it takes this strategy to the next level with guaranteed O(n log n) performance.

17.2 Merge Sort

Definition: Merge Sort is a divide-and-conquer algorithm that breaks the array into smaller sub-arrays, sorts those sub-arrays, and then merges them back together in the correct order. The key idea in Merge Sort is that merging two sorted arrays is easier than sorting a large unsorted array. So instead of sorting the entire array in one go, we:
1. Break the array into two halves.
2. Recursively sort both halves.
3. Merge the two sorted halves back into one sorted array (a sketch of this is given below).
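The excerpt ends before the chapter’s own Merge Sort code. As a hedged sketch consistent with the three steps just listed (the helper names merge() and mergeSort() are my own illustrative choices, not the chapter’s):

#include <stdio.h>

// Merge two sorted runs arr[l..m] and arr[m+1..r] into one sorted run
void merge(int arr[], int l, int m, int r) {
    int n1 = m - l + 1, n2 = r - m;
    int L[n1], R[n2]; // Temporary copies of the two runs (C99 VLAs)

    for (int i = 0; i < n1; i++) L[i] = arr[l + i];
    for (int j = 0; j < n2; j++) R[j] = arr[m + 1 + j];

    int i = 0, j = 0, k = l;
    while (i < n1 && j < n2)          // Pick the smaller head element each time
        arr[k++] = (L[i] <= R[j]) ? L[i++] : R[j++];
    while (i < n1) arr[k++] = L[i++]; // Copy any leftovers from the left run
    while (j < n2) arr[k++] = R[j++]; // Copy any leftovers from the right run
}

// Recursively split, sort, and merge
void mergeSort(int arr[], int l, int r) {
    if (l < r) {
        int m = l + (r - l) / 2;
        mergeSort(arr, l, m);
        mergeSort(arr, m + 1, r);
        merge(arr, l, m, r);
    }
}

int main() {
    int arr[] = {38, 27, 43, 3, 9, 82, 10};
    int n = sizeof(arr) / sizeof(arr[0]);

    mergeSort(arr, 0, n - 1);

    for (int i = 0; i < n; i++) printf("%d ", arr[i]);
    printf("\n"); // Output: 3 9 10 27 38 43 82
    return 0;
}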


Chapter 16: Sorting Algorithms

Sorting algorithms are fundamental in computer science and are used in nearly every application that involves data. Sorting helps in arranging data in a specific order, usually ascending or descending, making data retrieval and manipulation easier. In this chapter, we will explore various sorting algorithms, their logic, code implementations, and analyze their efficiency.

Let’s get started with one of the simplest sorting algorithms — Bubble Sort.

16.1 Bubble Sort

Definition: Bubble Sort is a simple sorting algorithm that repeatedly steps through the list, compares adjacent elements, and swaps them if they are in the wrong order. The process is repeated until the list is sorted.

How It Works:
1. Compare the first two elements. If the first element is larger, swap them.
2. Move to the next pair of elements and repeat the comparison and swapping process.
3. Continue doing this for the entire list. At the end of the first pass, the largest element will "bubble up" to its correct position at the end of the list.
4. Repeat the process for the rest of the list, ignoring the last sorted elements.
5. Continue until no more swaps are needed.

Time Complexity:
Worst case: O(n²) — This happens when the list is in reverse order.
Best case: O(n) — This occurs when the list is already sorted. Note that reaching O(n) requires an early-exit check (stop when a full pass makes no swaps); the basic version below always performs O(n²) comparisons.

Bubble Sort Implementation in C:

#include <stdio.h>

void bubbleSort(int arr[], int n) {
    for (int i = 0; i < n-1; i++) {
        for (int j = 0; j < n-i-1; j++) {
            if (arr[j] > arr[j+1]) {
                // Swap arr[j] and arr[j+1]
                int temp = arr[j];
                arr[j] = arr[j+1];
                arr[j+1] = temp;
            }
        }
    }
}

void printArray(int arr[], int size) {
    for (int i = 0; i < size; i++) {
        printf("%d ", arr[i]);
    }
    printf("\n");
}

int main() {
    int arr[] = {64, 34, 25, 12, 22, 11, 90};
    int n = sizeof(arr)/sizeof(arr[0]);
    bubbleSort(arr, n);
    printf("Sorted array: \n");
    printArray(arr, n);
    return 0;
}

Explanation:
The bubbleSort() function performs repeated passes through the array, comparing adjacent elements and swapping them if necessary. The largest element "bubbles" up to the correct position after each pass.
The printArray() function is used to print the sorted array after the algorithm finishes.

Pros and Cons of Bubble Sort:

Pros:
Easy to understand and implement.
Good for small datasets or nearly sorted data.

Cons:
Not efficient for large datasets due to its O(n²) time complexity.
Performs unnecessary comparisons even when the array is already sorted.

16.2 Selection Sort

Definition: Selection Sort is another simple sorting algorithm. It divides the input list into two parts: a sorted part and an unsorted part. Initially, the sorted part is empty, and the unsorted part contains the entire list. The algorithm proceeds by repeatedly selecting the smallest element from the unsorted part and swapping it with the leftmost unsorted element.

How It Works:
1. Start with the first element and assume it’s the minimum.
2. Compare it with every other element in the array. If a smaller element is found, mark that as the new minimum.
3. At the end of the first pass, swap the minimum element with the first element.
4. Move to the next element and repeat the process.
5. Continue until the entire list is sorted.
Time Complexity:
Worst, Average, and Best case: O(n²)

Selection Sort Implementation in C++:

#include <iostream>
using namespace std;

void selectionSort(int arr[], int n) {
    for (int i = 0; i < n-1; i++) {
        int minIdx = i;
        for (int j = i+1; j < n; j++) {
            if (arr[j] < arr[minIdx]) {
                minIdx = j;
            }
        }
        // Swap the found minimum element with the first unsorted element
        int temp = arr[minIdx];
        arr[minIdx] = arr[i];
        arr[i] = temp;
    }
}

void printArray(int arr[], int size) {
    for (int i = 0; i < size; i++) {
        cout << arr[i] << " ";
    }
    cout << endl;
}

int main() {
    int arr[] = {64, 25, 12, 22, 11};
    int n = sizeof(arr)/sizeof(arr[0]);
    selectionSort(arr, n);
    cout << "Sorted array: \n";
    printArray(arr, n);
    return 0;
}

Explanation:
The selectionSort() function selects the smallest element from the unsorted portion of the array and swaps it with the leftmost unsorted element.
The printArray() function prints the sorted array.

Pros and Cons of Selection Sort:

Pros:
Simple and easy to understand.
Reduces the number of swaps compared to Bubble Sort.

Cons:
Like Bubble Sort, it’s inefficient for large datasets.
Time complexity remains O(n²) even if the array is partially sorted.

16.3 Insertion Sort

Definition: Insertion Sort works by dividing the array into a sorted and an unsorted section. The sorted section starts with the first element. The algorithm picks elements from the unsorted section and places them into the correct position in the sorted section.

How It Works:
1. Start with the second element as the key.
2. Compare the key with the elements in the sorted section.
3. Shift larger elements one position to the right to make space for the key.
4. Insert the key into its correct position.
5. Repeat this process for each element in the unsorted section.

Time Complexity:
Worst and Average case: O(n²)
Best case: O(n) (when the array is already sorted)

Insertion Sort Implementation in C++:

#include <iostream>
using namespace std;

void insertionSort(int arr[], int n) {
    for (int i = 1; i < n; i++) {
        int key = arr[i];
        int j = i - 1;
        // Shift elements that are greater than key one position ahead
        while (j >= 0 && arr[j] > key) {
            arr[j + 1] = arr[j];
            j--;
        }
        arr[j + 1] = key;
    }
}

void printArray(int arr[], int size) {
    for (int i = 0; i < size; i++) {
        cout << arr[i] << " ";
    }
    cout << endl;
}

int main() {
    int arr[] = {12, 11, 13, 5, 6};
    int n = sizeof(arr) / sizeof(arr[0]);
    insertionSort(arr, n);
    cout << "Sorted array: \n";
    printArray(arr, n);
    return 0;
}

Explanation: The insertionSort() function inserts each element into its correct position by shifting larger elements one step to the right.


Chapter 15: Searching Algorithms

Introduction

Searching and sorting algorithms are crucial in Data Structures and Algorithms (DSA) because they help us organize and access data efficiently. Whether you’re building a search engine, a database, or even a simple contact list, you’ll need to understand these algorithms deeply. This chapter focuses on two important tasks:

Searching: Finding a specific element within a dataset.
Sorting: Arranging data in a certain order (ascending, descending, etc.) for better management.

To keep things interesting, we’ll break the chapter into digestible sections. First, we’ll focus on searching algorithms, and then in subsequent parts, we’ll cover sorting algorithms. As we proceed, remember to try the code examples in your IDE or notebook — this practice will cement your understanding.

Part 1: Searching Algorithms

1.1 Linear Search

Let’s start with one of the simplest searching algorithms: Linear Search. Though not the most efficient for large datasets, linear search is straightforward and can be used on unsorted data.

Definition: Linear search traverses the entire data structure from the beginning to the end, comparing each element with the target value. If the target is found, the algorithm returns its position; otherwise, it returns a "not found" result.

Use Case: Imagine you have a list of unsorted contact names. If you’re looking for someone, linear search is the only choice unless the list is sorted. Linear search is often used on small datasets where performance isn’t a concern.

Algorithm: Here’s a simple breakdown of how linear search works:
1. Start at the first element of the data structure.
2. Compare the current element with the target value.
3. If they match, return the index (position).
4. If not, move to the next element.
5. Repeat this process until you find the target or reach the end.

Time Complexity: O(n) — because in the worst case, you might have to check every element.

Let’s implement this in C and C++!

C Programming: Linear Search in an Array

#include <stdio.h>

int linearSearch(int arr[], int n, int target) {
    for (int i = 0; i < n; i++) {
        if (arr[i] == target) {
            return i; // Target found, return the index
        }
    }
    return -1; // Target not found
}

int main() {
    int arr[] = {10, 23, 45, 70, 11, 15};
    int n = sizeof(arr) / sizeof(arr[0]);
    int target = 70;
    int result = linearSearch(arr, n, target);
    if (result != -1) {
        printf("Element found at index: %d\n", result);
    } else {
        printf("Element not found.\n");
    }
    return 0;
}

C++ Programming: Linear Search in an Array

#include <iostream>
using namespace std;

int linearSearch(int arr[], int n, int target) {
    for (int i = 0; i < n; i++) {
        if (arr[i] == target) {
            return i; // Target found, return the index
        }
    }
    return -1; // Target not found
}

int main() {
    int arr[] = {2, 4, 0, 1, 9};
    int n = sizeof(arr) / sizeof(arr[0]);
    int target = 1;
    int result = linearSearch(arr, n, target);
    if (result != -1) {
        cout << "Element found at index: " << result << endl;
    } else {
        cout << "Element not found." << endl;
    }
    return 0;
}

Explanation: In both programs, we create an array of integers and then implement the linear search function. The function traverses the array, comparing each element with the target. If it finds the target, it returns the index of the element. If not, it returns -1, indicating that the element was not found.

Discussion on Linked Lists: Linear search works similarly in Linked Lists, but instead of indexing like we do in arrays, we traverse the nodes.
Since a linked list is a sequential structure without random access, we need to move from node to node.

C Programming: Linear Search in a Singly Linked List

#include <stdio.h>
#include <stdlib.h>

struct Node {
    int data;
    struct Node* next;
};

int linearSearch(struct Node* head, int target) {
    struct Node* current = head;
    int index = 0;
    while (current != NULL) {
        if (current->data == target) {
            return index;
        }
        current = current->next;
        index++;
    }
    return -1; // Target not found
}

// Helper function to create a new node
struct Node* createNode(int data) {
    struct Node* newNode = (struct Node*)malloc(sizeof(struct Node));
    newNode->data = data;
    newNode->next = NULL;
    return newNode;
}

int main() {
    struct Node* head = createNode(10);
    head->next = createNode(20);
    head->next->next = createNode(30);
    head->next->next->next = createNode(40);

    int target = 30;
    int result = linearSearch(head, target);
    if (result != -1) {
        printf("Element found at position: %d\n", result);
    } else {
        printf("Element not found.\n");
    }
    return 0;
}

Explanation: Here, we create a singly linked list and use a linear search algorithm to find the target value. We traverse the list node by node, checking each node’s data. Since there’s no concept of an index in a linked list, we maintain a manual counter (index). If we find the target, we return its position. Otherwise, we return -1.

Advantages and Disadvantages of Linear Search:

Advantages:
Simple and easy to implement.
Works on both sorted and unsorted datasets.

Disadvantages:
Inefficient for large datasets, especially when compared to more advanced algorithms like binary search.
Takes linear time in the worst case (O(n)).

Where to Use Linear Search:
When dealing with small datasets or unsorted data.
For data structures like linked lists, where other search algorithms might be less efficient due to the lack of direct indexing.

This concludes our detailed discussion of Linear Search across different data structures.

Next Part Preview: Binary Search

In the next part of the chapter, we will dive into the more efficient Binary Search, where we optimize the searching process by dividing the dataset in half. But remember, binary search only works on sorted datasets, so sorting comes into play! You’ll also see how binary search works not just with arrays but with Binary Search Trees (BSTs), a crucial data structure in DSA. And as always, feel free to check out resources on digilearn.cloud from Emancipation Edutech Private Limited, where this book is available for free reading!

Part 2: Binary Search

1.2 Binary Search
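The excerpt cuts off at the start of the binary search section. As a hedged preview of the idea described above (repeatedly halving a sorted array), here is a minimal iterative binary search in the same C style as the linear search examples; it is a sketch, not the chapter’s own implementation:

#include <stdio.h>

// Iterative binary search on a sorted array; returns the index of target or -1
int binarySearch(int arr[], int n, int target) {
    int low = 0, high = n - 1;
    while (low <= high) {
        int mid = low + (high - low) / 2; // Avoids overflow of (low + high)
        if (arr[mid] == target)
            return mid;        // Found the target
        else if (arr[mid] < target)
            low = mid + 1;     // Search the right half
        else
            high = mid - 1;    // Search the left half
    }
    return -1; // Target not found
}

int main() {
    int arr[] = {10, 11, 15, 23, 45, 70}; // Must be sorted
    int n = sizeof(arr) / sizeof(arr[0]);
    int target = 23;

    int result = binarySearch(arr, n, target);
    if (result != -1)
        printf("Element found at index: %d\n", result);
    else
        printf("Element not found.\n");
    return 0;
}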


Chapter 14: Tree Traversal Methods

In this chapter, we will explore Tree Traversal Methods, which are vital for accessing and processing the data stored in tree structures. Trees are non-linear data structures that represent hierarchical relationships, making traversing them crucial for various applications, such as searching, sorting, and modifying data.

Traversing a tree means visiting all its nodes in a specified order. The choice of traversal method can significantly impact the efficiency and outcome of operations performed on the tree. There are two main categories of tree traversal methods:

Depth-First Traversal (DFT)
Breadth-First Traversal (BFT)

1. Depth-First Traversal (DFT)

Depth-First Traversal explores as far down one branch of the tree as possible before backtracking to explore other branches. This method is typically implemented using recursion or an explicit stack data structure. There are three primary types of Depth-First Traversal:

1.1 Pre-Order Traversal

In Pre-Order Traversal, nodes are visited in the following order:
1. Visit the current node.
2. Traverse the left subtree.
3. Traverse the right subtree.

Use Cases for Pre-Order Traversal:
Copying a Tree: If you need to create a duplicate of a tree, pre-order traversal allows you to visit each node, storing its value in a new tree structure.
Prefix Expression Evaluation: In expressions written in prefix notation (also known as Polish notation), pre-order traversal is essential for evaluating the expression.

Example: Given the binary tree:

        A
       / \
      B   C
     / \
    D   E

The Pre-Order Traversal would yield: A, B, D, E, C.

Implementation:

class Node:
    def __init__(self, key):
        self.left = None
        self.right = None
        self.val = key

def pre_order_traversal(root):
    if root:
        print(root.val, end=' ')         # Visit the node
        pre_order_traversal(root.left)   # Traverse left subtree
        pre_order_traversal(root.right)  # Traverse right subtree

# Create the tree
root = Node('A')
root.left = Node('B')
root.right = Node('C')
root.left.left = Node('D')
root.left.right = Node('E')

print("Pre-Order Traversal:")
pre_order_traversal(root)  # Output: A B D E C

1.2 In-Order Traversal

In In-Order Traversal, nodes are visited in this order:
1. Traverse the left subtree.
2. Visit the current node.
3. Traverse the right subtree.

Use Cases for In-Order Traversal:
Binary Search Trees (BST): In-order traversal is used to retrieve the elements of a BST in sorted order.
Expression Tree Evaluation: For expression trees, in-order traversal can be useful for generating infix notation.

Example: Using the same binary tree, the In-Order Traversal yields: D, B, E, A, C.

Implementation:

def in_order_traversal(root):
    if root:
        in_order_traversal(root.left)   # Traverse left subtree
        print(root.val, end=' ')        # Visit the node
        in_order_traversal(root.right)  # Traverse right subtree

print("\nIn-Order Traversal:")
in_order_traversal(root)  # Output: D B E A C

1.3 Post-Order Traversal

In Post-Order Traversal, nodes are visited in this order:
1. Traverse the left subtree.
2. Traverse the right subtree.
3. Visit the current node.

Use Cases for Post-Order Traversal:
Deleting a Tree: When deallocating memory for tree nodes, post-order traversal ensures that child nodes are deleted before the parent node.
Postfix Expression Evaluation: In postfix notation (Reverse Polish notation), post-order traversal is used for evaluation.

Example: Again, using the same binary tree, the Post-Order Traversal yields: D, E, B, C, A.
Implementation:

def post_order_traversal(root):
    if root:
        post_order_traversal(root.left)   # Traverse left subtree
        post_order_traversal(root.right)  # Traverse right subtree
        print(root.val, end=' ')          # Visit the node

print("\nPost-Order Traversal:")
post_order_traversal(root)  # Output: D E B C A

2. Breadth-First Traversal (BFT)

Breadth-First Traversal explores all the nodes at the present depth level before moving on to the nodes at the next depth level. This traversal method is typically implemented using a queue data structure.

2.1 Level Order Traversal

In Level Order Traversal, the nodes are visited level by level from top to bottom. Starting from the root, it visits all nodes at the present depth level before moving on to the nodes at the next depth level.

Use Cases for Level Order Traversal:
Finding the Shortest Path: In unweighted trees or graphs, level order traversal is useful for finding the shortest path between nodes.
Completeness Checking: To check whether a binary tree is complete, level order traversal can help validate the condition at each level.

Example: Using the same binary tree, the Level Order Traversal yields: A, B, C, D, E.

Implementation:

from collections import deque

def level_order_traversal(root):
    if root is None:
        return
    queue = deque([root])  # Initialize the queue with the root node
    while queue:
        current = queue.popleft()    # Dequeue the front node
        print(current.val, end=' ')  # Visit the node
        if current.left:             # Enqueue left child
            queue.append(current.left)
        if current.right:            # Enqueue right child
            queue.append(current.right)

print("\nLevel Order Traversal:")
level_order_traversal(root)  # Output: A B C D E

Comparison of Traversal Methods

Traversal Method   Order of Visiting Nodes   Use Cases
Pre-Order          Node, Left, Right         Copying a tree, prefix expressions
In-Order           Left, Node, Right         Binary search trees, sorted output
Post-Order         Left, Right, Node         Deleting a tree, postfix expressions
Level Order        Level by level            Finding shortest paths in unweighted trees

Complexity Analysis

Understanding the time and space complexity of these traversal methods is essential:

Time Complexity: All traversal methods (pre-order, in-order, post-order, and level order) have a time complexity of O(n), where n is the number of nodes in the tree. Each node is visited exactly once.

Space Complexity:
Pre-Order, In-Order, and Post-Order: The space complexity for these recursive methods is O(h), where h is the height of the tree. This is due to the stack space used by recursive calls. For balanced trees, this is O(log n), but for skewed trees, it can be O(n).
Level Order: The space complexity for level order traversal is O(w), where w is the maximum width of the tree. In the worst case, this can also be O(n).

Conclusion

Tree traversal methods are fundamental for effectively working with tree data structures. They allow us to explore and manipulate tree nodes efficiently, which is essential for various applications in computer science and programming. In our next chapter, we will delve into Binary Trees and Binary Search Trees (BST), laying the groundwork for understanding more complex tree structures.

Remember, for free access to this book and other resources provided by Emancipation Edutech Private Limited, you can visit digilearn.cloud. Happy learning!


Chapter 13: AVL Trees and Red-Black Trees

In this chapter, we will dive into AVL Trees and Red-Black Trees, two important types of self-balancing binary search trees. These trees ensure that the height of the tree remains balanced, providing improved efficiency for operations such as insertion, deletion, and searching. Self-balancing trees like AVL Trees and Red-Black Trees guarantee that the operations on the tree have a time complexity of O(log n), even in the worst-case scenario. Let’s explore them in detail!

What are AVL Trees?

An AVL Tree is a type of self-balancing binary search tree where the difference between the heights of the left and right subtrees of any node (known as the balance factor) is at most 1. The AVL Tree is named after its inventors Adelson-Velsky and Landis, who introduced the concept in 1962.

Key Properties of AVL Trees:

Balance Factor: For every node in an AVL tree, the balance factor must be either -1, 0, or +1.

Balance Factor = Height of Left Subtree – Height of Right Subtree

If the balance factor is not within this range, the tree is considered unbalanced and requires rotation to rebalance.

Rotations: To maintain the balance of an AVL tree, we use rotations when an insertion or deletion operation causes the tree to become unbalanced.
Single Rotations: Right or Left rotation.
Double Rotations: A combination of right and left rotations.

Example of an AVL Tree

Consider inserting the elements 10, 20, and 30 into an empty AVL Tree.

Step 1: Insert 10:

10

Step 2: Insert 20:

10
  \
   20

Step 3: Insert 30. This causes the tree to become unbalanced because the balance factor of node 10 becomes -2 (left height = 0, right height = 2). To rebalance the tree, we perform a left rotation at node 10.

Before rotation:

10
  \
   20
     \
      30

After rotation:

    20
   /  \
  10   30

Now, the tree is balanced again.

Rotations in AVL Trees

To maintain the balance factor of AVL trees, we perform rotations. There are four types of rotations used to rebalance an AVL tree (a code sketch of the two single rotations appears at the end of this chapter’s excerpt):

1. Left Rotation (LL Rotation)

A left rotation is performed when a node becomes unbalanced due to the right subtree being taller than the left subtree. This happens when a new node is inserted into the right subtree of the right child.

Example:

Before left rotation:

10
  \
   20
     \
      30

After left rotation:

    20
   /  \
  10   30

2. Right Rotation (RR Rotation)

A right rotation is performed when a node becomes unbalanced due to the left subtree being taller than the right subtree. This happens when a new node is inserted into the left subtree of the left child.

Example:

Before right rotation:

      30
     /
    20
   /
  10

After right rotation:

    20
   /  \
  10   30

3. Left-Right Rotation (LR Rotation)

A left-right rotation is a combination of left and right rotations. It is performed when a new node is inserted into the right subtree of the left child.

Example:

Before LR rotation:

    30
   /
  10
    \
     20

First, perform a left rotation on the left child (10), then perform a right rotation on the root (30).

After LR rotation:

    20
   /  \
  10   30

4. Right-Left Rotation (RL Rotation)

A right-left rotation is a combination of right and left rotations. It is performed when a new node is inserted into the left subtree of the right child.

Example:

Before RL rotation:

  10
    \
     30
    /
  20

First, perform a right rotation on the right child (30), then perform a left rotation on the root (10).

After RL rotation:

    20
   /  \
  10   30

What are Red-Black Trees?

A Red-Black Tree is another type of self-balancing binary search tree.
It is similar to an AVL tree, but it has a more relaxed balancing criterion, which makes it faster for insertion and deletion operations. The key feature of a Red-Black Tree is the use of colors to maintain balance.

Key Properties of Red-Black Trees:
1. Each node is colored either red or black.
2. The root node must always be black.
3. No two consecutive red nodes can appear on the same path (i.e., a red node cannot have a red parent or red child).
4. Every path from a node to its descendant null nodes must contain the same number of black nodes.
5. The longest path from the root to a leaf is no more than twice as long as the shortest path (this ensures that the tree is balanced).

Example of a Red-Black Tree

Consider inserting the elements 10, 20, and 30 into an empty Red-Black Tree.

Step 1: Insert 10 (the root node is always black).

10 (black)

Step 2: Insert 20. Since it is greater than 10, it is inserted as the right child and colored red:

10 (black)
   \
    20 (red)

Step 3: Insert 30. Since this creates a violation of two consecutive red nodes (20 and 30), we perform a left rotation at node 10 and recolor.

Before rotation:

10 (black)
   \
    20 (red)
       \
        30 (red)

After rotation and recoloring:

       20 (black)
      /    \
10 (red)   30 (red)

Now, the tree satisfies all the Red-Black Tree properties.

Rotations in Red-Black Trees

Similar to AVL trees, Red-Black Trees also use rotations to restore balance when a violation occurs. The rotations are the same as those in AVL trees: left rotation, right rotation, left-right rotation, and right-left rotation.

Comparison: AVL Trees vs. Red-Black Trees

Feature                           AVL Trees              Red-Black Trees
Balancing Method                  Strictly balanced      Loosely balanced
Height Difference                 At most 1              No more than twice the shortest path
Rebalancing                       Frequent rotations     Fewer rotations
Efficiency in Insertion/Deletion  Slower in large trees  Faster for insertion/deletion
Search Efficiency                 Slightly better        Slightly worse than AVL trees

Applications of AVL and Red-Black Trees

Database Indexing: Both AVL and Red-Black Trees are widely used in databases for maintaining ordered records efficiently.
Memory Management: Red-Black Trees are often used in memory allocators (e.g., the Linux kernel uses Red-Black Trees).
Networking: AVL and Red-Black Trees are useful in routing tables, where ordered lookups must stay fast as routes change.
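To ground the rotation diagrams above in code, here is a minimal C sketch of the two single rotations, assuming an AVL-style node that caches its subtree height. The AVLNode layout and helper names are illustrative, not from the original chapter.

struct AVLNode {
    int key;
    int height; // Height of the subtree rooted here (a leaf has height 1)
    struct AVLNode *left, *right;
};

static int height(struct AVLNode* n) { return n ? n->height : 0; }
static int maxInt(int a, int b) { return a > b ? a : b; }

// Right rotation around y: y's left child x becomes the new subtree root
struct AVLNode* rotateRight(struct AVLNode* y) {
    struct AVLNode* x = y->left;
    struct AVLNode* T2 = x->right;

    x->right = y;  // y moves down to the right of x
    y->left = T2;  // x's old right subtree is re-attached under y

    y->height = maxInt(height(y->left), height(y->right)) + 1;
    x->height = maxInt(height(x->left), height(x->right)) + 1;
    return x;      // New root of this subtree
}

// Left rotation around x: x's right child y becomes the new subtree root
struct AVLNode* rotateLeft(struct AVLNode* x) {
    struct AVLNode* y = x->right;
    struct AVLNode* T2 = y->left;

    y->left = x;    // x moves down to the left of y
    x->right = T2;  // y's old left subtree is re-attached under x

    x->height = maxInt(height(x->left), height(x->right)) + 1;
    y->height = maxInt(height(y->left), height(y->right)) + 1;
    return y;       // New root of this subtree
}

The double rotations compose these two: an LR rotation is rotateLeft on the left child followed by rotateRight on the node, and an RL rotation is the mirror image.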


Chapter 12: Binary Trees and Binary Search Trees (BST)

Welcome to the fascinating world of Trees! Trees are one of the most fundamental data structures in computer science, and they are widely used in various applications like file systems, databases, and more. This chapter focuses on two essential types of trees: Binary Trees and Binary Search Trees (BSTs).

What is a Tree?

A Tree is a non-linear data structure composed of nodes. It is a hierarchical structure with a root node and sub-nodes, each connected by edges. Unlike arrays, linked lists, stacks, or queues, trees are non-linear and allow for more complex relationships between elements.

A tree follows these rules:
Each node contains a value or data.
The root node is the topmost node in the tree.
Each node has zero or more child nodes.
Each node can have only one parent (except for the root, which has no parent).

What is a Binary Tree?

A Binary Tree is a type of tree where each node can have at most two children, often referred to as the left child and right child.

Characteristics of a Binary Tree:
Node: Contains data and two pointers (left and right).
Edge: The connection between two nodes.
Root: The topmost node in the tree.
Leaf Node: A node with no children.
Height: The number of edges from the root to the deepest leaf.

Structure of a Binary Tree

Here’s a simple binary tree:

        10
       /  \
      5    15
     / \   / \
    3   7 12  20

In this example:
10 is the root.
5 and 15 are children of 10.
3, 7, 12, and 20 are leaf nodes.

Types of Binary Trees:
Full Binary Tree: Every node has either 0 or 2 children.
Complete Binary Tree: All levels are completely filled, except possibly the last level, which is filled from left to right.
Perfect Binary Tree: All internal nodes have two children, and all leaves are at the same level.
Balanced Binary Tree: The height of the tree is balanced, meaning the difference between the heights of the left and right subtrees for any node is at most 1.

What is a Binary Search Tree (BST)?

A Binary Search Tree (BST) is a special kind of binary tree where the nodes are arranged in a specific order. In a BST:
The left subtree of a node contains only nodes with values less than the node’s value.
The right subtree contains only nodes with values greater than the node’s value.
Both the left and right subtrees must also be binary search trees.

This property makes BSTs highly efficient for searching, insertion, and deletion operations.

Example of a Binary Search Tree

Consider the following tree:

        15
       /  \
     10    25
     / \   / \
    5  12 20  30

In this BST:
Nodes to the left of 15 are smaller than 15 (i.e., 10, 5, 12).
Nodes to the right of 15 are greater than 15 (i.e., 25, 20, 30).

This ordering ensures efficient search operations.

Operations on Binary Search Trees

1. Insertion

To insert a new element into a BST, we start at the root and compare the new element with the current node:
If it is smaller, we move to the left subtree.
If it is greater, we move to the right subtree.
We continue this until we find an empty spot, where we insert the new node.

Example: Inserting 8 into the following BST:

      10
     /  \
    5    15
   / \
  3   7

Since 8 is greater than 5 and greater than 7 but less than 10, it becomes the right child of 7:

      10
     /  \
    5    15
   / \
  3   7
       \
        8

2. Deletion

To delete a node from a BST, we must consider three cases:
Case 1: The node to be deleted is a leaf (has no children).
Case 2: The node to be deleted has one child.
Case 3: The node to be deleted has two children.

Example: Let’s delete the node 7 from the tree above. Since it has one child (8), we simply replace 7 with 8.
      10
     /  \
    5    15
   / \
  3   8

3. Search

To search for a value in a BST:
1. Start at the root.
2. If the value is smaller than the current node, move left; if it’s greater, move right.
3. Repeat this process until you find the value or reach a NULL node (the value does not exist in the tree).

(A C sketch of insertion and search appears at the end of this chapter’s excerpt.)

Applications of Binary Search Trees:
Search Operations: Due to the structured order, searching for an element in a BST is more efficient than in unsorted arrays or linked lists.
In-memory Databases: BSTs are commonly used in database indexing where efficient searching is crucial.
Network Routing: BSTs help in storing hierarchical data that can be searched efficiently, like network routes.
Dynamic Data Structures: In scenarios like building self-balancing trees in applications, BSTs form the backbone.

Systems like Emancipation Edutech Private Limited often use Binary Search Trees to manage student records efficiently by indexing them according to registration numbers. For free educational resources, visit digilearn.cloud, where you can access this book and much more!

Binary Tree Traversal Methods

To interact with the data in a binary tree, we need to traverse it. There are three primary traversal techniques:

1. In-order Traversal (Left, Root, Right)

This traversal visits nodes in ascending order in a BST.

Visit Left Subtree -> Visit Root -> Visit Right Subtree

Example: For the tree:

      10
     /  \
    5    15
   / \
  3   7

The in-order traversal would be: 3 -> 5 -> 7 -> 10 -> 15.

2. Pre-order Traversal (Root, Left, Right)

This traversal visits the root node first, followed by the left subtree, and then the right subtree.

Visit Root -> Visit Left Subtree -> Visit Right Subtree

Example: For the same tree, the pre-order traversal would be: 10 -> 5 -> 3 -> 7 -> 15.

3. Post-order Traversal (Left, Right, Root)

This traversal visits the left subtree first, then the right subtree, and finally the root.

Visit Left Subtree -> Visit Right Subtree -> Visit Root

Example: For the same tree, the post-order traversal would be: 3 -> 7 -> 5 -> 15 -> 10.
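As promised above, here is a minimal C sketch of BST insertion and search, assuming a simple node layout. The names TreeNode, insert(), and search() are illustrative, not from the original chapter.

#include <stdio.h>
#include <stdlib.h>

struct TreeNode {
    int key;
    struct TreeNode *left, *right;
};

struct TreeNode* newTreeNode(int key) {
    struct TreeNode* n = (struct TreeNode*)malloc(sizeof(struct TreeNode));
    n->key = key;
    n->left = n->right = NULL;
    return n;
}

// Insert key into the BST rooted at root; returns the (possibly new) root
struct TreeNode* insert(struct TreeNode* root, int key) {
    if (root == NULL)
        return newTreeNode(key); // Found an empty spot
    if (key < root->key)
        root->left = insert(root->left, key);   // Smaller keys go left
    else if (key > root->key)
        root->right = insert(root->right, key); // Larger keys go right
    return root; // Duplicate keys are ignored here
}

// Search for key; returns the node or NULL if not present
struct TreeNode* search(struct TreeNode* root, int key) {
    if (root == NULL || root->key == key)
        return root;
    return (key < root->key) ? search(root->left, key)
                             : search(root->right, key);
}

int main() {
    struct TreeNode* root = NULL;
    int keys[] = {10, 5, 15, 3, 7, 8};
    for (int i = 0; i < 6; i++)
        root = insert(root, keys[i]);

    printf("8 %s\n", search(root, 8) ? "found" : "not found");   // found
    printf("42 %s\n", search(root, 42) ? "found" : "not found"); // not found
    return 0;
}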


Chapter 11: Deques (Double-Ended Queues)

Welcome to Chapter 11! In this chapter, we’ll explore Deques, or Double-Ended Queues, an interesting and versatile data structure that extends the concept of a standard queue by allowing insertion and deletion at both ends. This concept will help you understand more advanced applications of queues and solve real-life problems like managing text editors, undo operations, or simulations. Let’s begin!

What is a Deque?

A Deque (pronounced "deck") stands for Double-Ended Queue, and it is a generalized form of the standard queue. Unlike a regular queue, where insertion is done at the back and deletion is done at the front, a deque allows insertion and deletion from both ends of the structure.

In simple terms:
You can add an element at either the front or the back.
You can remove an element from either the front or the back.

This makes the deque a very flexible data structure. Imagine you’re standing in a line (queue) but now, you have the option to add someone to the front of the line, or let them sneak in at the back. That’s how a deque works!

Types of Deques

Deques can be classified into two types:
Input-Restricted Deque: Insertion is restricted to one end (say, the rear), but deletion can occur at both ends.
Output-Restricted Deque: Deletion is restricted to one end (say, the front), but insertion can happen at both ends.

Both types allow different levels of control depending on your use case.

Applications of Deques

Before we get into the operations, let’s explore some real-world applications of deques:

Undo/Redo Operations: In text editors or drawing tools, deques can be used to manage undo and redo functionality. You can easily move forward and backward in the history of actions.
Palindrome Checking: Deques are perfect for checking palindromes (words or phrases that read the same forward and backward) because they allow access from both ends.
Sliding Window Problems: In algorithms, deques are often used to maintain a sliding window of data, useful in problems like finding the maximum or minimum of all subarrays of a given size.
Task Scheduling: In simulations or real-time systems like the ones developed by Emancipation Edutech Private Limited, deques can manage tasks with both high and low priorities, offering flexibility in choosing which task to execute next.

You can read more about these applications and other use cases for free on digilearn.cloud!

Basic Operations of Deques

Now, let’s break down the core operations that can be performed on a deque:

Insertion at Front: Insert an element at the front of the deque.
Insertion at Rear: Insert an element at the rear of the deque.
Deletion from Front: Remove an element from the front of the deque.
Deletion from Rear: Remove an element from the rear of the deque.
Get Front: Retrieve the front element without removing it.
Get Rear: Retrieve the rear element without removing it.
Check if Empty: Check if the deque is empty.
Check if Full: Check if the deque is full (in a limited-size implementation).

These operations make deques highly versatile, with various possibilities for manipulating the data structure.

Deque Representation and Implementation

Let’s implement a deque using an array in C to better understand how it works.
Define the Deque Structure

#include <stdio.h>
#include <stdlib.h>
#define MAX 100

struct Deque {
    int arr[MAX]; // Array to store elements
    int front;    // Points to the front end
    int rear;     // Points to the rear end
    int size;     // Current number of elements
};

arr: This array will store our elements.
front: This index will indicate the front of the deque.
rear: This index will indicate the rear of the deque.
size: This will track the number of elements in the deque.

Initialize the Deque

void initialize(struct Deque* dq) {
    dq->front = -1; // Initially, front is -1 (empty state)
    dq->rear = -1;  // Initially, rear is -1 (empty state)
    dq->size = 0;   // No elements in the deque
}

Insertion at Front

void insertFront(struct Deque* dq, int value) {
    if (dq->size == MAX) { // Check if deque is full
        printf("Deque Overflow\n");
        return;
    }
    if (dq->front == -1) { // First element insertion
        dq->front = 0;
        dq->rear = 0;
    } else if (dq->front == 0) { // Circular condition: wrap to the end
        dq->front = MAX - 1;
    } else {
        dq->front--;
    }
    dq->arr[dq->front] = value;
    dq->size++;
}

Insertion at Rear

void insertRear(struct Deque* dq, int value) {
    if (dq->size == MAX) { // Check if deque is full
        printf("Deque Overflow\n");
        return;
    }
    if (dq->rear == -1) { // First element insertion
        dq->rear = 0;
        dq->front = 0;
    } else if (dq->rear == MAX - 1) { // Circular condition: wrap to the start
        dq->rear = 0;
    } else {
        dq->rear++;
    }
    dq->arr[dq->rear] = value;
    dq->size++;
}

Deletion from Front

void deleteFront(struct Deque* dq) {
    if (dq->size == 0) { // Check if deque is empty
        printf("Deque Underflow\n");
        return;
    }
    if (dq->front == dq->rear) { // Only one element in deque
        dq->front = -1;
        dq->rear = -1;
    } else if (dq->front == MAX - 1) { // Circular condition
        dq->front = 0;
    } else {
        dq->front++;
    }
    dq->size--;
}

Deletion from Rear

void deleteRear(struct Deque* dq) {
    if (dq->size == 0) { // Check if deque is empty
        printf("Deque Underflow\n");
        return;
    }
    if (dq->front == dq->rear) { // Only one element in deque
        dq->front = -1;
        dq->rear = -1;
    } else if (dq->rear == 0) { // Circular condition
        dq->rear = MAX - 1;
    } else {
        dq->rear--;
    }
    dq->size--;
}

Checking if the Deque is Empty or Full

int isEmpty(struct Deque* dq) {
    return dq->size == 0;
}

int isFull(struct Deque* dq) {
    return dq->size == MAX;
}

Real-World Example

Imagine you’re developing a browser. The deque can be used to implement forward and backward navigation: you can insert URLs into the deque as the user browses different websites. When the user clicks the "back" button, you remove the URL from the rear (current page) and go back to the previous one. A short usage demo of the operations above follows.
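As a quick check that the pieces fit together, here is a small usage sketch of the deque defined above; the commented values assume exactly this call sequence:

int main() {
    struct Deque dq;
    initialize(&dq);

    insertRear(&dq, 10); // Deque: 10
    insertRear(&dq, 20); // Deque: 10, 20
    insertFront(&dq, 5); // Deque: 5, 10, 20

    printf("Front: %d\n", dq.arr[dq.front]); // Front: 5
    printf("Rear: %d\n", dq.arr[dq.rear]);   // Rear: 20

    deleteFront(&dq);    // Deque: 10, 20
    deleteRear(&dq);     // Deque: 10

    printf("Size after deletions: %d\n", dq.size); // Size after deletions: 1
    return 0;
}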


Chapter 10: Priority Queues

Welcome to Chapter 10! In this chapter, we’ll explore Priority Queues, a special type of queue where elements are processed based on their priority rather than the order in which they were added. Priority queues are incredibly useful in various real-world applications, such as task scheduling, Dijkstra’s shortest path algorithm, and handling network packets. Let’s dive into the concept and understand how they work!

What is a Priority Queue?

A Priority Queue is a data structure where each element is associated with a priority, and the element with the highest (or lowest) priority is dequeued first. Unlike a regular queue, which follows the First-In-First-Out (FIFO) principle, priority queues can rearrange the order of processing based on priorities.

For example, in an emergency room, patients with life-threatening conditions (higher priority) will be treated first, even if they arrived after other patients with minor injuries.

Here’s a basic visual of how a priority queue works:

Priority Queue: [Critical Patient (Priority 3)] -> [Moderate Patient (Priority 2)] -> [Minor Patient (Priority 1)]

In this case, the Critical Patient will be dequeued and treated first, regardless of the order in which they entered the queue.

Types of Priority Queues

There are two types of priority queues:
Max-Priority Queue: The element with the highest priority is dequeued first.
Min-Priority Queue: The element with the lowest priority is dequeued first.

For instance, in task scheduling systems at Emancipation Edutech Private Limited, a max-priority queue might be used to prioritize the most urgent student requests first.

Priority Queue Operations

The following operations are performed on priority queues:
Insert (Enqueue): Add an element to the queue with a given priority.
Extract Max (Dequeue): Remove the element with the highest priority (in max-priority queues).
Get Max: Retrieve, but do not remove, the element with the highest priority.
Increase Priority: Change the priority of an element in the queue.

Let’s understand these operations more deeply by implementing a priority queue using arrays.

Priority Queue Implementation Using Arrays

To keep things simple, we’ll implement a max-priority queue using arrays, where higher numbers indicate higher priorities.

Define the Queue Structure

#include <stdio.h>
#include <stdlib.h>
#define MAX 100 // Max size of the priority queue

struct PriorityQueue {
    int data[MAX];     // Array to store elements
    int priority[MAX]; // Array to store priorities
    int size;          // Number of elements in the queue
};

data: Stores the elements.
priority: Stores the priority associated with each element.
size: Tracks the current number of elements in the queue.

Initialize the Priority Queue

void initialize(struct PriorityQueue* pq) {
    pq->size = 0; // Queue is initially empty
}

Insert Operation

void insert(struct PriorityQueue* pq, int value, int prio) {
    if (pq->size == MAX) { // Check if the queue is full
        printf("Queue Overflow\n");
        return;
    }
    int i = pq->size;
    pq->data[i] = value;
    pq->priority[i] = prio;
    pq->size++;
}

Here, we simply insert an element along with its priority at the end of the queue.
Extract Max Operation

int extractMax(struct PriorityQueue* pq) {
    if (pq->size == 0) { // Check if the queue is empty
        printf("Queue Underflow\n");
        return -1;
    }

    int maxIndex = 0;
    // Find the element with the highest priority
    for (int i = 1; i < pq->size; i++) {
        if (pq->priority[i] > pq->priority[maxIndex]) {
            maxIndex = i;
        }
    }

    int maxValue = pq->data[maxIndex]; // Store the highest-priority value

    // Shift the remaining elements left to fill the gap
    for (int i = maxIndex; i < pq->size - 1; i++) {
        pq->data[i] = pq->data[i + 1];
        pq->priority[i] = pq->priority[i + 1];
    }
    pq->size--;
    return maxValue;
}

In this operation, we scan the queue to find the element with the highest priority and then remove it, shifting the rest of the elements.

Example Usage

int main() {
    struct PriorityQueue pq;
    initialize(&pq);

    // Insert elements with priorities
    insert(&pq, 10, 1); // Insert value 10 with priority 1
    insert(&pq, 20, 4); // Insert value 20 with priority 4
    insert(&pq, 30, 3); // Insert value 30 with priority 3

    // Extract the highest priority element
    printf("Extracted element: %d\n", extractMax(&pq));
    return 0;
}

Output:

Extracted element: 20

Notice how 20 was extracted because it had the highest priority (4).

Using Priority Queues in Real Life

Priority queues are extremely useful in real-world applications. Let’s explore some common scenarios:

Task Scheduling: Operating systems use priority queues to manage tasks. Processes with higher priority are executed before lower-priority processes, ensuring time-sensitive tasks are handled quickly.
Network Traffic: In network systems, priority queues help manage data packets. Critical packets (such as emergency notifications) are transmitted first, while less important packets (like email) are handled later.
Dijkstra’s Algorithm: Priority queues are used in algorithms like Dijkstra’s Shortest Path Algorithm. Nodes with the smallest distance (priority) are processed first, helping to find the shortest path between two points efficiently.
Emergency Systems: Priority queues manage patients in hospitals, where the most critical patients (with higher priorities) are treated first.

At Emancipation Edutech Private Limited, we could use priority queues to manage student queries or schedule courses based on demand, ensuring that high-demand courses are given priority. Don’t forget to explore digilearn.cloud to access more free educational resources!

Implementing Priority Queues Using Heaps

While arrays are a simple way to implement priority queues, they’re not the most efficient. For faster access to the highest-priority element, heaps are commonly used. A binary heap allows both insertion and extraction of the maximum element in O(log n) time, compared to O(n) time with arrays. Let’s see how a max-heap works with priority queues.

Max-Heap Representation

A max-heap is a binary tree where the parent node is always greater than or equal to its children. This ensures that the root always contains the highest-priority element, which can be removed efficiently.

        20
       /  \
     15    10
     / \
    8   5

Here, 20 is the maximum value, and extracting it requires only O(log n) time to rearrange the heap.

Applications of Priority Queues

Priority queues are widely used in areas like:
Operating Systems: For task scheduling and resource management.
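The excerpt ends mid-list, but the max-heap discussion above lends itself to a short example. Here is a minimal, hedged C sketch of an array-backed max-heap; the names heapInsert and heapExtractMax are illustrative, and extraction assumes a non-empty heap:

#include <stdio.h>
#define HEAP_MAX 100

int heap[HEAP_MAX]; // heap[0] is always the largest element
int heapSize = 0;

static void swapInts(int* a, int* b) { int t = *a; *a = *b; *b = t; }

// Insert: append at the end, then bubble up while larger than the parent
void heapInsert(int value) {
    int i = heapSize++;
    heap[i] = value;
    while (i > 0 && heap[(i - 1) / 2] < heap[i]) {
        swapInts(&heap[(i - 1) / 2], &heap[i]);
        i = (i - 1) / 2;
    }
}

// Extract max: take the root, move the last element up, then sift it down
int heapExtractMax(void) {
    int max = heap[0];          // Assumes heapSize > 0
    heap[0] = heap[--heapSize];
    int i = 0;
    for (;;) {
        int largest = i, l = 2 * i + 1, r = 2 * i + 2;
        if (l < heapSize && heap[l] > heap[largest]) largest = l;
        if (r < heapSize && heap[r] > heap[largest]) largest = r;
        if (largest == i) break;
        swapInts(&heap[i], &heap[largest]);
        i = largest;
    }
    return max;
}

int main() {
    heapInsert(15); heapInsert(20); heapInsert(8); heapInsert(10); heapInsert(5);
    printf("%d\n", heapExtractMax()); // 20
    printf("%d\n", heapExtractMax()); // 15
    return 0;
}

Both operations touch only one root-to-leaf path, which is what gives the O(log n) bounds quoted above.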
