In this post we will learn how to solve recurrence problems using the Master Theorem, with worked examples.
The Master Theorem is a method by which we can solve divide-and-conquer recurrences easily. It combines a mathematical and a practical approach, and it is designed so that most recurrences of the standard form below can be solved directly. Basically, the Master Theorem consists of three cases, and the case to use depends on the problem.
Only one of the three cases applies at a time, and which one it is depends entirely on the data given in the recursive problem.
T(n) = a T(n/b) + f (n) ,
where a ≥ 1, b > 1, and f is asymptotically positive.
What is the Master Theorem?
Let a ≥ 1 and b > 1 be constants, let f(n) be a function, and let T(n) be defined on the non-negative integers by the recurrence T(n) = aT(n/b) + f(n), where we interpret n/b to mean either ⌊n/b⌋ or ⌈n/b⌉. Then T(n) can be bounded asymptotically as follows:
1. If f(n) = O(n^(log_b a - ε)) for some constant ε > 0, then T(n) = Θ(n^(log_b a)).
2. If f(n) = Θ(n^(log_b a)), then T(n) = Θ(n^(log_b a) lg n).
3. If f(n) = Ω(n^(log_b a + ε)) for some constant ε > 0, and if a·f(n/b) ≤ c·f(n) for some constant c < 1 and all sufficiently large n, then T(n) = Θ(f(n)).
T(n) = a T(n/b) + f (n) ,
where a ≥ 1, b > 1, and f is asymptotically positive.
n is the size of the problem.
a is the number of sub problems in the recursion.
n/b is the size of each sub problem. (Assume that all sub problems are essentially the same size.)
f(n) is the cost of the work done outside the recursive calls, which includes the cost of dividing the problem and the cost of merging the solutions to the sub problems.
Case 1
Generic form
If f(n) = O(n^c) where c < log_b a (using Big O notation), then T(n) = Θ(n^(log_b a)).
Case 2
Generic form
If it is true, for some constant k ≥ 0, that f(n) = Θ(n^c log^k n) where c = log_b a, then T(n) = Θ(n^c log^(k+1) n).
Case 3
Generic form
If f(n) = Ω(n^c) where c > log_b a, and if it is also true that a·f(n/b) ≤ k·f(n) for some constant k < 1 and all sufficiently large n (often called the regularity condition), then T(n) = Θ(f(n)).
Master Theorem Examples:
Let T(n) = T(n/2) + (1/2)n² + n. What are the parameters?
Solution: a = 1, b = 2, and f(n) = (1/2)n² + n = Θ(n^d) with d = 2. Since log_b a = log_2 1 = 0 < d, f(n) dominates and Case 3 applies (the regularity condition a·f(n/b) ≤ k·f(n) holds, e.g. with k = 1/2).
Thus we conclude that
T(n) ∈ Θ(n^d) = Θ(n²)
Let T (n) = 2T (n/4) + √n + 42. What are the parameters?
Solution: a = 2, b = 4, d = 1/2
Which case applies?
Since a = 2 = 4^(1/2) = b^d, i.e. d = log_b a = 1/2, Case 2 applies.
Thus we conclude that T(n) ∈ Θ(n^d log n) = Θ(√n log n).
T(n) = 9T(n/3) + n. Here a = 9, b = 3, f(n) = n, and n^(log_b a) = n^(log_3 9) = Θ(n²).
Since f(n) = O(n^(log_3 9 - ε)) for ε = 1, Case 1 of the master theorem applies, and the solution is T(n) = Θ(n²).
When can you not use the Master Theorem?
You cannot use the Master Theorem if:
T(n) is not monotone, e.g. T(n) = sin n
f(n) is not a polynomial, e.g. T(n) = 2T(n/2) + 2^n
b cannot be expressed as a constant, e.g. T(n) = T(√n)
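To connect the theorem to real code, here is a small merge sort sketch in C (the function names and array contents are just illustrative choices for this example). Its recurrence is T(n) = 2T(n/2) + Θ(n), so a = 2, b = 2, c = log_2 2 = 1, and Case 2 gives T(n) = Θ(n log n).

#include <stdio.h>
#include <string.h>

/* Merge the sorted halves a[lo..mid] and a[mid+1..hi]; this is the
   f(n) = Theta(n) work done outside the recursive calls. */
static void merge(int a[], int tmp[], int lo, int mid, int hi)
{
    int i = lo, j = mid + 1, k = lo;
    while (i <= mid && j <= hi)
        tmp[k++] = (a[i] <= a[j]) ? a[i++] : a[j++];
    while (i <= mid) tmp[k++] = a[i++];
    while (j <= hi) tmp[k++] = a[j++];
    memcpy(a + lo, tmp + lo, (size_t)(hi - lo + 1) * sizeof(int));
}

/* T(n) = 2T(n/2) + Theta(n): two subproblems of half the size plus linear merging. */
void merge_sort(int a[], int tmp[], int lo, int hi)
{
    if (lo >= hi)
        return;
    int mid = lo + (hi - lo) / 2;
    merge_sort(a, tmp, lo, mid);
    merge_sort(a, tmp, mid + 1, hi);
    merge(a, tmp, lo, mid, hi);
}

int main(void)
{
    int a[] = {9, 4, 7, 1, 3}, tmp[5];
    merge_sort(a, tmp, 0, 4);
    for (int i = 0; i < 5; i++)
        printf("%d ", a[i]); /* prints 1 3 4 7 9 */
    printf("\n");
    return 0;
}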
Have you ever wondered how many types of data structures there are? Here is a complete list of data structures, with a short note on each one of them.
How Many Types of Data Structures Are There?
Data Structures
This document gives extensive tutorials on all data structure topics. A data structure is a particular mechanism for organising an application's data. This page provides useful examples of the different types: arrays, linked lists, stacks, queues, binary trees, and search trees. An array can store a table of items that all have the same type, while a linked list organises its items as nodes that reference one another.
An array is a collection of items stored at contiguous memory locations.
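As a quick illustration (a minimal sketch; the array contents and names are just examples), the following C snippet declares an array and accesses it by index:

#include <stdio.h>

int main(void)
{
    /* five ints stored contiguously; arr[i] lives at base_address + i * sizeof(int) */
    int arr[5] = {10, 20, 30, 40, 50};

    for (int i = 0; i < 5; i++)
        printf("arr[%d] = %d\n", i, arr[i]);
    return 0;
}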
Types of data:
There are four types of data:
Nominal,
Ordinal,
Discrete,
Continuous
Hash Table:
A Hash table (hash map) is a data structure that implements an associative array abstract data type, a structure that can map keys to values.
Graphs.
A Graph is a non-linear data structure consisting of nodes and edges. The nodes are sometimes also referred to as vertices, and the edges are lines or arcs that connect any two nodes in the graph.
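One common way to represent such a graph in C is with adjacency lists, one linked list of neighbours per vertex. Here is a minimal sketch (the vertex count, edges, and names are choices made only for this example):

#include <stdio.h>
#include <stdlib.h>

#define V 4 /* number of vertices in this toy graph */

struct edge {
    int to;            /* destination vertex */
    struct edge *next; /* next neighbour in this vertex's list */
};

struct edge *adj[V];   /* adj[u] is the list of vertices adjacent to u */

void add_edge(int u, int v)
{
    struct edge *e = malloc(sizeof *e);
    e->to = v;
    e->next = adj[u];  /* prepend to u's list */
    adj[u] = e;
}

int main(void)
{
    add_edge(0, 1); add_edge(0, 2); add_edge(1, 3); add_edge(2, 3);
    for (int u = 0; u < V; u++) {
        printf("%d:", u);
        for (struct edge *e = adj[u]; e != NULL; e = e->next)
            printf(" -> %d", e->to);
        printf("\n");
    }
    return 0;
}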
Binary Search Tree
In a binary search tree, the left subtree of a node contains only nodes with keys less than the node's key, and the right subtree contains only nodes with keys greater than the node's key.
Red-Black Tree
A red-black tree is a kind of self-balancing binary search tree where each node has an extra bit, and that bit is often interpreted as the color (red or black).
B-Tree
A B-Tree is a specialized m-way tree that is widely used for disk access. A B-Tree of order m can have at most m-1 keys and m children.
Linear Probing
Linear probing is a scheme in computer programming for resolving collisions in hash tables, data structures for maintaining a collection of key–value pairs and looking up the value associated with a given key.
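A minimal sketch of linear probing in C, assuming a small fixed-size table of non-negative integer keys (the table size, the EMPTY sentinel, and the hash function are choices made only for this example):

#include <stdio.h>

#define TABLE_SIZE 11 /* small prime table size */
#define EMPTY -1      /* sentinel meaning "this slot is unused" */

int table[TABLE_SIZE];

int hash(int key) { return key % TABLE_SIZE; }

/* Insert a key: start at hash(key) and step to the next slot on each collision. */
int insert(int key)
{
    int i = hash(key);
    for (int probes = 0; probes < TABLE_SIZE; probes++) {
        if (table[i] == EMPTY) { table[i] = key; return i; }
        i = (i + 1) % TABLE_SIZE; /* linear probe */
    }
    return -1; /* table is full */
}

/* Return the slot holding key, or -1 if the key is absent. */
int search(int key)
{
    int i = hash(key);
    for (int probes = 0; probes < TABLE_SIZE; probes++) {
        if (table[i] == EMPTY) return -1; /* empty slot: key was never inserted */
        if (table[i] == key) return i;
        i = (i + 1) % TABLE_SIZE;
    }
    return -1;
}

int main(void)
{
    for (int i = 0; i < TABLE_SIZE; i++) table[i] = EMPTY;
    insert(15); insert(26); insert(37); /* all three hash to slot 4 and probe forward */
    printf("37 is stored at slot %d\n", search(37));
    return 0;
}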
Need for Data Structures
The need for data structures comes from efficiency and reusability. A data structure provides a way of organizing, managing, and storing data efficiently, and with its help the data items can be traversed easily.
Advantages of Data Structures
Data structures help in efficient storage of data on the storage device.
Data structure usage provides convenience while retrieving data from the storage device.
Data structures provide effective and efficient processing of both small and large amounts of data.
Using the proper data structure can save the programmer a lot of processing time in operations such as storage, retrieval, or processing of data.
Manipulation of large amounts of data can be carried out easily with a good data structure approach.
Most well-organized data structures, such as arrays, stacks, queues, graphs, trees, and linked lists, have well-built, pre-planned approaches for operations like storage, addition, retrieval, manipulation, and deletion, so the programmer can rely on them completely.
Data structure usage also encourages reusability in the long run.
Binary tree
A Binary tree is a tree data structure in which each node has at most two children, which are referred to as the left child and the right child.
Binary Heap
A binary heap is a heap data structure that takes the form of a binary tree. Binary heaps are a common way of implementing priority queues.
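A minimal min-heap insertion sketch in C (the array capacity and names are choices made for this example). The parent/child index arithmetic is what lets a flat array "take the form of a binary tree":

#include <stdio.h>

#define HEAP_CAP 100

int heap[HEAP_CAP];
int heap_size = 0;

/* Insert into a min-heap stored in an array: the parent of index i is (i - 1) / 2. */
void heap_insert(int value)
{
    int i = heap_size++;
    heap[i] = value;
    while (i > 0 && heap[(i - 1) / 2] > heap[i]) { /* sift up while smaller than parent */
        int parent = (i - 1) / 2;
        int t = heap[parent];
        heap[parent] = heap[i];
        heap[i] = t;
        i = parent;
    }
}

int main(void)
{
    int keys[] = {7, 3, 9, 1};
    for (int i = 0; i < 4; i++)
        heap_insert(keys[i]);
    printf("minimum is %d\n", heap[0]); /* the root of a min-heap is the minimum: 1 */
    return 0;
}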
Weight-balanced Tree
A weight-balanced tree is a binary search tree that stores the sizes of subtrees in the nodes.
Data Structures and Algorithms Notes for Placement
Data Structures
Please check the tutorial section on the various data structures for more information. Data structures represent an efficient way of managing data in a computer system in order to enable efficient operations. The pages present instructions on data structures in detail.
Asymptotic analysis is the process of describing the running time of an algorithm in mathematical units in order to find the program's limitations, or "run-time performance." The goal is to determine the best case, worst case, and average case time required to execute a given task. Asymptotic analysis is a crucial tool for programmers to evaluate an algorithm's efficiency, rather than just its correctness.
The space complexity of an algorithm or a computer program is the amount of memory space required to solve an instance of the computational problem as a function of characteristics of the input. It is the memory required by an algorithm until it executes completely.
Algorithm analysis
Analysis of algorithms is the determination of the amount of time and space resources required to execute an algorithm.
Execution Time Cases
The time spent by the job actively using processor resources is its execution time. The execution time of each job instance from the same task is likely to differ. Execution times are of interest to real-time systems designers usually in the context of worst-case execution times.
Algorithm Complexity
Algorithmic complexity is a measure of how long an algorithm would take to complete given an input of size n.
Time Complexity
The time required by an algorithm to solve a problem of a particular size is known as its time complexity.
Installation on UNIX/Linux
sudo apt install gcc
gcc --version
cd Desktop
Key takeaway: Commands are case sensitive.
touch program.c
gcc program.c -o program
Key takeaway: The executable file name can be different from the source file name.
./program
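For completeness, here is one possible program.c you could compile with the commands above (any valid C source file would do just as well):

#include <stdio.h>

int main(void)
{
    printf("Hello from program.c\n");
    return 0;
}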
Characteristics of a Data Structure
Linear
Non-Linear
Static
Homogenous
Non-Homogenous
Dynamic
Time Complexity
Space Complexity
Tell me the Abstract Data Type (ADT)?
An Abstract Data Type (ADT) is a model of a data type defined by its behaviour, that is, by the operations it supports and what they do, rather than by any particular implementation.
Tell me the use of data structure?
Data Structures allow programmers to manage the data in the most efficient manner that enhances the algorithm’s performance.
They allow the implementation of a scientific approach as against traditional methodologies. When it comes to aspects such as processing speed, handling of multiple requests, and searching through numerous records, data structures prove handy.
They allow efficient use of memory. Data structures optimize memory usage. This makes an impact usually in situations that involve the processing of huge datasets.
Data structures are reusable. They can be integrated as a package into a library. These libraries can be used in various situations by various stakeholders as needed.
They allow multiple solutions for a particular problem, allowing the user to select the data structure to solve the problem in the best way.
Here is a complete guide to Data Structures and Algorithms Quick Revision for students and aspirants who want to appear for coding interviews.
Data Structures and Algorithms Quick Revision
These pages give detailed tutorials describing various data structure types. Data structures provide an efficient way to store data on any computing system. You will find several articles about each data structure here.
Characteristics of an Algorithm
Unambiguous
Input
Output
Finiteness
Feasibility
Independent
Tell me the best way to write algorithms?
Obtaining a description of the problem.
Analyzing the problem.
Developing a high-level algorithm.
Refining the algorithm by adding more detail.
Reviewing the algorithm.
Here is a guide on the Data Structures and Algorithms Refresher for students and job seekers who want to refresh their data structures and algorithms skills for interviews and exams.
Data Structures and Algorithms Refresher
Data Structures
Here is an overview tutorial that covers data structure techniques and themes. A data structure refers to a particular way of organizing data in a computer so that it can be used efficiently. It offers information and guidance on a variety of data structure types.
For example, a list of items that all share the same data type can be stored using the array data structure, and its length can be determined iteratively or recursively.
Local environment setup:
Setting up local development environment.
The C compiler
A compiler is a special program that processes statements written in a particular programming language and turns them into machine language, or "code," that a computer's processor can execute.
Text Editor
A text editor is a program that allows you to create, open, and edit plain text files, such as your C source files.
A Data structure is an arrangement of data in a computer's memory or disk. Data structures include arrays, stacks, queues, linked lists, binary trees, hash tables, etc.
Space-partitioning trees
These are data structures used for space partitioning or binary space partitioning.
Segment tree
Interval tree
Range tree
Bin
K-d tree
Implicit k-d tree
Min/max k-d tree
Relaxed k-d tree
Adaptive k-d tree
Quadtree
Octree
Linear octree
Z-order
UB-tree
R-tree
R+ tree
R* tree
Hilbert R-tree
X-tree
Metric tree
Cover tree
M-tree
VP-tree
BK-tree
Bounding interval hierarchy
Bounding volume hierarchy
BSP tree
Rapidly exploring random tree
Application-specific Trees:
Abstract syntax tree
Parse tree
Decision tree
Alternating decision tree
Minimax tree
Expectiminimax tree
Finger tree
Expression tree
Log-structured merge-tree
Lexicographic Search Tree
Hash-based Structures:
Bloom filter
Count–min sketch
Distributed hash table
Double hashing
Dynamic perfect hash table
Hash array mapped trie
Hash list
Hash table
Hash tree
Hash trie
Koorde
Prefix hash tree
Rolling hash
MinHash
Quotient filter
Ctrie
Graphs
Adjacency list
Adjacency matrix
Graph-structured stack
Scene graph
Decision tree
Binary decision diagram
Zero-suppressed decision diagram
And-inverter graph
Directed graph
Directed acyclic graph
Propositional directed acyclic graph
Multigraph
Hypergraph
List of All Algorithms:
Graph algorithms
Coloring algorithm: Graph coloring algorithm.
Hopcroft–Karp algorithm: finds a maximum cardinality matching in a bipartite graph
Hungarian algorithm: finds a minimum-cost perfect matching in a weighted bipartite graph (the assignment problem)
Prüfer coding: conversion between a labeled tree and its Prüfer sequence
Tarjan’s off-line lowest common ancestors algorithm: computes lowest common ancestors for pairs of nodes in a tree
Topological sort: finds linear order of nodes (e.g. jobs) based on their dependencies.
Minimum spanning tree
Borůvka’s algorithm
Kruskal’s algorithm
Prim’s algorithm
Reverse-delete algorithm
Shortest path problem
Bellman–Ford algorithm: computes shortest paths in a weighted graph (where some of the edge weights may be negative)
Dijkstra’s algorithm: computes shortest paths in a graph with non-negative edge weights
Floyd–Warshall algorithm: solves the all pairs shortest path problem in a weighted, directed graph
Johnson’s algorithm: All pairs shortest path algorithm in sparse weighted directed graph
Transitive closure problem: find the transitive closure of a given binary relation
Traveling salesman problem
Christofides algorithm
Nearest neighbour algorithm
Graph search
A*: a special case of best-first search that uses heuristics to improve speed
B*: a best-first graph search algorithm that finds the least-cost path from a given initial node to any goal node (out of one or more possible goals)
Backtracking: abandons partial solutions when they are found not to satisfy a complete solution
Beam search: is a heuristic search algorithm that is an optimization of best-first search that reduces its memory requirement
Beam stack search: integrates backtracking with beam search
Best-first search: traverses a graph in the order of likely importance using a priority queue
Bidirectional search: find the shortest path from an initial vertex to a goal vertex in a directed graph
Breadth-first search: traverses a graph level by level
Brute-force search: An exhaustive and reliable search method, but computationally inefficient in many applications.
D*: an incremental heuristic search algorithm
Depth-first search: traverses a graph branch by branch
Dijkstra's algorithm: a special case of A* in which no heuristic function is used
General Problem Solver: a seminal theorem-proving algorithm intended to work as a universal problem solver machine.
Iterative deepening depth-first search (IDDFS): a state space search strategy
Jump point search: An optimization to A* which may reduce computation time by an order of magnitude using further heuristics.
Lexicographic breadth-first search (also known as Lex-BFS): a linear time algorithm for ordering the vertices of a graph
Uniform-cost search: a tree search that finds the lowest-cost route where costs vary
Selection Algorithms
Quickselect
Introselect
Sequence search
Linear search: locates an item in an unsorted sequence
Selection algorithm: finds the kth largest item in a sequence
Ternary search: a technique for finding the minimum or maximum of a function that is either strictly increasing and then strictly decreasing or vice versa
Sorted lists
Binary search algorithm: locates an item in a sorted sequence (see the C sketch after this list)
Fibonacci search technique: search a sorted sequence using a divide and conquer algorithm that narrows down possible locations with the aid of Fibonacci numbers
Jump search (or block search): linear search on a smaller subset of the sequence
Predictive search: binary-like search which factors in magnitude of search term versus the high and low values in the search. Sometimes called dictionary search or interpolated search.
Uniform binary search: an optimization of the classic binary search algorithm
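As a concrete illustration of binary search on a sorted array (a minimal sketch; the array contents and function name are just for this example):

#include <stdio.h>

/* Return the index of key in the sorted array a[0..n-1], or -1 if it is absent. */
int binary_search(const int a[], int n, int key)
{
    int low = 0, high = n - 1;
    while (low <= high) {
        int mid = low + (high - low) / 2; /* avoids overflow of (low + high) */
        if (a[mid] == key)
            return mid;
        else if (a[mid] < key)
            low = mid + 1;  /* key can only be in the upper half */
        else
            high = mid - 1; /* key can only be in the lower half */
    }
    return -1;
}

int main(void)
{
    int a[] = {2, 5, 8, 12, 16, 23, 38, 56, 72, 91};
    printf("23 found at index %d\n", binary_search(a, 10, 23)); /* prints index 5 */
    return 0;
}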
Sequence Merging:
Simple merge algorithm
k-way merge algorithm
Union (merge, with elements on the output not repeated)
Sequence Sorting:
Bubble sort: for each pair of indices, swap the items if out of order
Comb sort
Gnome sort
Odd–even sort
Quicksort: divide the list into two parts, with all items in the first part coming before all items in the second part; then sort the two parts recursively. Often the method of choice (see the sketch below)
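A compact quicksort sketch in C using the Lomuto partition scheme (the pivot choice, names, and test data are just one possible set of choices):

#include <stdio.h>

static void swap(int *a, int *b) { int t = *a; *a = *b; *b = t; }

/* Lomuto partition: everything <= the pivot ends up left of the returned index. */
static int partition(int a[], int lo, int hi)
{
    int pivot = a[hi], i = lo;
    for (int j = lo; j < hi; j++)
        if (a[j] <= pivot)
            swap(&a[i++], &a[j]);
    swap(&a[i], &a[hi]);
    return i;
}

void quicksort(int a[], int lo, int hi)
{
    if (lo < hi) {
        int p = partition(a, lo, hi);
        quicksort(a, lo, p - 1); /* sort the items before the pivot */
        quicksort(a, p + 1, hi); /* sort the items after the pivot */
    }
}

int main(void)
{
    int a[] = {5, 2, 9, 1, 7};
    quicksort(a, 0, 4);
    for (int i = 0; i < 5; i++)
        printf("%d ", a[i]); /* prints 1 2 5 7 9 */
    printf("\n");
    return 0;
}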
Sometimes, we want to say that an algorithm takes at least a certain amount of time, without providing an upper bound. We use big-Ω notation; that’s the Greek letter “omega.”
Big Omega definition
A function t(n) is said to be in Ω(g(n)), denoted t(n) ∈ Ω(g(n)), if t(n) is bounded below by some positive constant multiple of g(n) for all large n, i.e., if there exist some positive constant c and some nonnegative integer n0 such that t(n) ≥ c·g(n) for all n ≥ n0.
Big Omega(Ω) function is used in computer science to describe the performance or complexity of an algorithm. If a running time is Ω(f(n)), then for large enough n, the running time is at least k⋅f(n) for some constant k.
Big Omega describes the lower bound of an algorithm.
Big-Omega, commonly denoted by Ω, is an asymptotic notation that is often used for the best-case analysis of an algorithm.
Ω notation provides an asymptotic lower bound.
Best case Ω Of an algorithm describes a lower bound for all combinations of input. It implies that the function can never get any better than the specified value. For example, when sorting an array, the best case is when the array is already correctly sorted.
Worst case Ω describes a lower bound for worst-case input combinations. It is possibly greater than the best case. For example, when sorting an array, the worst case is when the array is sorted in reverse order.
Figure: Big Omega
Big Omega Examples:
n³ + 4n² = Ω(n²)
Proof: Here we have f(n) = n³ + 4n² and g(n) = n².
It is not too hard to see that if n ≥ 0, then n³ ≤ n³ + 4n².
We have already seen that if n ≥ 1, then n² ≤ n³.
Thus when n ≥ 1, n² ≤ n³ ≤ n³ + 4n².
Therefore, 1·n² ≤ n³ + 4n² for all n ≥ 1.
Thus, we have shown that n³ + 4n² = Ω(n²) (by the definition of Big-Ω, with n0 = 1 and c = 1).
Let f(n) = 5.5n² - 7n.
We need to verify whether f(n) is Ω(n²). Let c be a constant such that 5.5n² - 7n ≥ c·n², i.e., n ≥ 7/(5.5 - c). Fix c = 3 to get n ≥ 2.8, so we can take n0 = 3. This shows that there exist positive constants c = 3 and n0 = 3 such that 0 ≤ c·n² ≤ f(n) for all n ≥ n0.
Trees are used to impose a hierarchical structure on a collection of data items.
Here is a complete guide on binary trees and binary search trees.
What is Tree?
A tree is a set of one or more nodes T such that: there is a specially designated node called the root, and the remaining nodes are partitioned into n ≥ 0 disjoint sets T1, T2, …, Tn, each of which is a tree.
Figure: A Tree Structure
This is a tree because it is a set of nodes {A,B,C,D,E,F,G,H,I}, with node A as the root node and the remaining nodes partitioned into three disjoint sets {B,G,H,I}, {C,E,F} and {D}, respectively.
Each of these sets is a tree because each satisfies the aforementioned definition properly.
Figure: A Non Tree Structure
Even though this is a set of nodes {A,B,C,D,E,F,G,H,I}, with node A as the root node, this is not a tree because the fact that node E is shared makes it impossible to partition nodes B through I into disjoint sets.
Degree of a Node of a Tree
The degree of a node of a tree is the number of subtrees having this node as a root. In other words, the degree is the number of children (immediate descendants) of a node. If the degree is zero, it is called a terminal or leaf node of the tree.
Degree of a Tree
The degree of a tree is defined as the maximum of the degrees of the nodes of the tree, that is, degree of tree = max(degree(node i)) for i = 1 to n.
Level of a Node
We define the level of the node by taking the level of the root node as 1, and incrementing it by 1 as we move from the root towards the subtrees. So the level of all the descendants of the root nodes will be 2. The level of their descendants will be 3, and so on. We then define the depth of the tree to be the maximum value of the level of the node of the tree.
Binary Tree
A binary tree is a finite set of nodes that is either empty or it consists of a root and two disjoint binary trees called the left sub tree and the right sub tree.
A binary tree is a rooted tree in which every node has at most two children. A full binary tree is a tree in which every node has zero or two children. Also known as a proper binary tree. A perfect binary tree is a full binary tree in which all leaves (vertices with zero children) are at the same depth (distance from the root, also called height).
A binary tree is similar to a tree, but not quite the same. For a binary tree, each node can have zero, one, or two children. In addition, each child node is clearly identified as either the left child or the right child.
Binary Search Tree
A binary search tree is a binary tree in which the data in the nodes are ordered in a particular way. To be precise, starting at any given node, the data in any nodes of its left sub tree must all be less than the item in the given node, and the data in any nodes of its right sub tree must be greater than or equal to the data in the given node. For numbers this can obviously be done. For strings, alphabetical ordering is often used. For records of data, a comparison based on a particular field (the key field) is often used.
Binary Search Tree Example
Figure: Binary Search Tree Example
Binary Search Tree In Data Structure
A binary tree is a type of data structure for storing data such as numbers in an organized way.
Binary search trees allow binary search for fast lookup, addition and removal of data items, and can be used to implement dynamic sets and lookup tables.
The order of nodes in a BST means that each comparison skips about half of the remaining tree, so the whole lookup takes time proportional to the binary logarithm of the number of items stored in the tree. This is much better than the linear time required to find items by key in an (unsorted) array, but slower than the corresponding operations on hash tables. Several variants of the binary search tree have been studied.
Binary Search Tree Algorithm
If root == NULL
return NULL;
If number == root->data
return root;
If number < root->data
return search(root->left, number);
Else
return search(root->right, number);
Binary Search Tree Search Algorithm
Step 1: IF ROOT = NULL OR ROOT -> DATA = ITEM
Return ROOT
ELSE
IF ITEM < ROOT -> DATA
Return search(ROOT -> LEFT, ITEM)
ELSE
Return search(ROOT -> RIGHT, ITEM)
[END OF IF]
[END OF IF]
Step 2: END
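Putting the fragments above together, here is a self-contained C sketch of a binary search tree with insert and search (the node layout and function names are choices made for this example, not the only possible ones):

#include <stdio.h>
#include <stdlib.h>

struct node {
    int data;
    struct node *left, *right;
};

/* Insert value into the BST rooted at root and return the (possibly new) root. */
struct node *insert(struct node *root, int value)
{
    if (root == NULL) {
        struct node *n = malloc(sizeof *n);
        n->data = value;
        n->left = n->right = NULL;
        return n;
    }
    if (value < root->data)
        root->left = insert(root->left, value);
    else
        root->right = insert(root->right, value);
    return root;
}

/* Return the node containing value, or NULL if value is not in the tree. */
struct node *search(struct node *root, int value)
{
    if (root == NULL || root->data == value)
        return root;
    if (value < root->data)
        return search(root->left, value);
    return search(root->right, value);
}

int main(void)
{
    struct node *root = NULL;
    int keys[] = {50, 30, 70, 20, 40, 60, 80};
    for (int i = 0; i < 7; i++)
        root = insert(root, keys[i]);
    printf("40 %s\n", search(root, 40) ? "found" : "not found");
    printf("65 %s\n", search(root, 65) ? "found" : "not found");
    return 0;
}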
In everyday life, we encounter queues everywhere – a line of people waiting to buy a ticket or waiting to be served in a bank, all such lines of people are queues. The first person in line is the next one served, and when another person arrives, he/she joins the queue at the rear.
A Queue is a chronologically ordered list.
The difference between a stack and a queue is that, in a stack, elements are added and removed at the same end (the top), whereas in a queue, values are added at one end (the rear) and removed from the other end (the front).
Queue Definition
A Queue is an ordered collection of items from which items may be deleted at one end (called the front of the queue) and into which items may be inserted at the other end (the rear of the queue).
The Queue ADT stores arbitrary objects
Insertions and deletions follow the first-in first-out (FIFO) scheme
Insertions are at the rear of the queue and removals are at the front of the queue.
Source: Wikipedia
Main Queue operations
enqueue(object o): inserts element o at the end of the queue
dequeue(): removes and returns the element at the front of the queue.
Auxiliary Queue operations:
front(): returns the element at the front without removing it
size(): returns the number of elements stored
isEmpty(): returns a Boolean value indicating whether no elements are stored.
Exceptions:
Attempting the execution of dequeue or front on an empty queue throws an EmptyQueueException.
Operations of Queue
The following operations can be applied to a queue
InitQueue(Queue): creates an empty queue.
append(Item): inserts an item to the rear of the queue.
remove(Queue): removes an item from the front of the queue. Elements can only be added to the rear of the queue and removed from the front of the queue.
isEmpty(Queue): returns true if the queue is empty.
Array Implementation
Queues are a subclass of linear lists which maintain the first-in-first-out order of elements. Insertion of elements is carried out at the 'Tail' of the queue and deletion is carried out at the 'Head' of the queue.
A queue is a collection of data ordered by position, not by value, with the following operations defined on it:
Operations
An array-based queue requires us to know two values a priori: the type of data contained in the queue, and the size of the array.
The queue itself is a structure containing three elements: Data, the array of data, Head, an index that keeps track of the first element in the queue (location where data is removed from the queue), and Tail, an index that keeps track of the last element in the queue (location where elements are inserted into the queue).
The operations that can be performed in a Queue are: Initialize: Initialize internal structure; create an empty queue.
Pseudo-Code:
Set Head to 1
Set Tail to 1
Return the queue to the user
Enqueue: Add new element to the tail of the queue. This operation adds a new element at the rear end of the queue.
Pseudo-Code:
If Queue is Full (Tail = Size of Queue + 1) Then
Output "Overflow, queue is full, cannot enqueue."
Else
Place Element in Queue(Tail)
Increment Tail (Tail = Tail + 1)
Return the queue to the user
Dequeue : Remove an element from the head of the queue. This operation removes an element only from the front of the queue.
Pseudo-Code:
If Queue is Empty (Head = Tail) Then
Output "Underflow, queue is empty, cannot dequeue."
Else
Element := Queue(Head)
Move all the elements from Head + 1 to the end of the queue one step to the left
Return Element
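A small C sketch along the lines of the pseudo-code above, using a fixed-size array with Head and Tail indices (the size, names, and the fact that freed slots are not reused are simplifications made for this example; a production queue would usually wrap the indices around, making the buffer circular):

#include <stdio.h>

#define QUEUE_SIZE 100

int queue[QUEUE_SIZE];
int head = 0; /* index of the first element: where dequeue removes */
int tail = 0; /* index one past the last element: where enqueue inserts */

int enqueue(int value)
{
    if (tail == QUEUE_SIZE) { /* overflow: no room at the rear */
        printf("Overflow, queue is full, cannot enqueue\n");
        return -1;
    }
    queue[tail++] = value;
    return 0;
}

int dequeue(int *value)
{
    if (head == tail) { /* underflow: queue is empty */
        printf("Underflow, queue is empty, cannot dequeue\n");
        return -1;
    }
    *value = queue[head++];
    return 0;
}

int main(void)
{
    int x;
    enqueue(10); enqueue(20); enqueue(30);
    while (dequeue(&x) == 0)
        printf("dequeued %d\n", x); /* prints 10, 20, 30 in FIFO order */
    return 0;
}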
A stack is a basic data structure that can be logically thought of as a linear structure, represented by a real physical stack or pile: a structure where insertion and deletion of items take place at one end, called the top of the stack.
The basic implementation of a stack is also called a LIFO (Last In First Out).
Stack Data Structure Definition:
A stack data structure is a list of elements in which an element may be inserted or deleted only at one end, called the top of the stack. This means, in particular, that elements are removed from a stack in the reverse order of that in which they were inserted into the stack.
Special terminology is used for two basic operations associated with stacks:
(a) “Push” is the term used to insert an element into a stack.
(b) “Pop” is the term used to delete an element from a stack.
There are basically three operations that can be performed on stacks: 1) Inserting an item into a stack (push). 2) Deleting an item from the stack (pop). 3) Displaying the contents of the stack (peek at the top).
Fig: Wikipedia
Below are some of operations a stack data type supports:
Stack<item-type> Operations
push (new-item :item-type)
Adds an item onto the stack.
top ():item-type
Returns the last item pushed onto the stack.
pop ()
Removes the most-recently-pushed item from the stack.
is-empty ():Boolean
True if no more items can be popped and there is no top item.
is-full ():Boolean
True if no more items can be pushed.
get-size ():Integer
Returns the number of elements on the stack.
All operations except get-size() can be performed in O(1) time. get-size() runs in at worst O(N).
Array Implementation of a Stack Data Structure:
When an array is used to implement a stack, the push and pop operations are realized by using the operations available on an array. The limitation of an array implementation is that the stack cannot grow and shrink dynamically as per the requirement.
C program to implement a stack using an array.
#include <stdio.h>
#define MAX 10 /* The maximum size of the stack */
#include <stdlib.h>
void push(int stack[], int *top, int value)
{
if(*top < MAX - 1) /* valid indices run from 0 to MAX - 1, so leave room for one more */
{
*top = *top + 1;
stack[*top] = value;
}
else
{
printf("The stack is full, cannot push a value\n");
exit(0);
}
}
void pop(int stack[], int *top, int * value)
{
if(*top >= 0 )
{
*value = stack[*top];
*top = *top - 1;
}
else
{
printf("The stack is empty, cannot pop a value\n");
exit(0);
}
}
int main(void)
{
int stack[MAX];
int top = -1;
int n,value;
do
{
do
{
printf("Enter the element to be pushed\n");
scanf("%d",&value);
push(stack,&top,value);
printf("Enter 1 to continue\n");
scanf("%d",&n);
} while(n == 1);
printf("Enter 1 to pop an element\n");
scanf("%d",&n);
while( n == 1)
{
pop(stack,&top,&value);
printf("The value popped is %d\n",value);
printf("Enter 1 to pop an element\n");
scanf("%d",&n);
}
printf("Enter 1 to continue\n");
scanf("%d",&n);
} while(n == 1);
}
Example Output:
Enter the element to be pushed
10
Enter 1 to continue
1
Enter the element to be pushed
20
Enter 1 to continue
0
Enter 1 to pop an element
1
The value popped is 20
Enter 1 to pop an element
0
Enter 1 to continue
1
Enter the element to be pushed
40
Enter 1 to continue
1
Enter the element to be pushed
50
Enter 1 to continue
0
Enter 1 to pop an element
1
The value popped is 50
Enter 1 to pop an element
1
The value popped is 40
Enter 1 to pop an element
1
The value popped is 10
Enter 1 to pop an element
0
Enter 1 to continue
0
Linked List Implementation
type Stack<item_type>
data list:Singly Linked List<item_type>
constructor()
list := new Singly-Linked-List()
end constructor
When you want to push something onto the list, you simply add it to the front of the linked list. The previous top then becomes the "next" node of the item being added, and the list's front pointer points to the new item.
method push(new_item:item_type)
list.prepend(new_item)
end method
To look at the top item, you just examine the first item in the linked list
method top():item_type
return list.get-begin().get-value()
end method
When you want to pop something off the list, simply remove the first item from the linked list.
method pop()
list.remove-first()
end method
A check for emptiness is easy. Just check if the list is empty
method is-empty():Boolean
return list.is-empty()
end method
A check for full is simple. Linked lists are considered to be limitless in size.
method is-full():Boolean
return False
end method
A check for the size is again passed through to the list.
method get-size():Integer
return list.get-size()
end method
end type
In a linked list, accessing the first element is an O(1) operation because the list keeps a pointer to it; checks for emptiness, as done here, are also O(1). Most of the time, users of a Stack do not use the get-size() operation, and so a bit of space can be saved by not optimizing it.
In an array-based implementation, all operations are O(1), with the exception of the occasional resizing push and of clear, which should replace all entries by null in order to let them be garbage-collected. A plain array implementation does not replace removed entries with null; the Vector implementation does.
Program for implementation of a stack using the linked list.
# include <stdio.h>
# include <stdlib.h>
struct node
{
int data;
struct node *link;
};
struct node *push(struct node *p, int value)
{
struct node *temp;
temp=(struct node *)malloc(sizeof(struct node));
/* creates new node
using data value
passed as parameter */
if(temp==NULL)
{
printf("No Memory available Error\n");
exit(0);
}
temp->data = value;
temp->link = p;
p = temp;
return(p);
}
struct node *pop(struct node *p, int *value)
{
struct node *temp;
if(p==NULL)
{
printf("The stack is empty, cannot pop\n");
exit(0);
}
*value = p->data;
temp = p;
p = p->link;
free(temp);
return(p);
}
int main(void)
{
struct node *top = NULL;
int n,value;
do
{
do
{
printf("Enter the element to be pushed\n");
scanf("%d",&value);
top = push(top,value);
printf("Enter 1 to continue\n");
scanf("%d",&n);
} while(n == 1);
printf("Enter 1 to pop an element\n");
scanf("%d",&n);
while( n == 1)
{
top = pop(top,&value);
printf("The value popped is %d\n",value);
printf("Enter 1 to pop an element\n");
scanf("%d",&n);
}
printf("Enter 1 to continue\n");
scanf("%d",&n);
} while(n == 1);
}
Example Output:
Enter the element to be pushed
10
Enter 1 to continue
1
Enter the element to be pushed
20
Enter 1 to continue
0
Enter 1 to pop an element
1
The value popped is 20
Enter 1 to pop an element
1
The value popped is 10
Enter 1 to pop an element
0
Enter 1 to continue
1
Enter the element to be pushed
30
Enter 1 to continue
1
Enter the element to be pushed
40
Enter 1 to continue
0
Enter 1 to pop an element
1
The value popped is 40
Enter 1 to pop an element
0
Enter 1 to continue
1
Enter the element to be pushed
50
Enter 1 to continue
0
Enter 1 to pop an element
1
The value popped is 50
Enter 1 to pop an element
1
The value popped is 30
Enter 1 to pop an element
0
Enter 1 to continue
0