We know that Object-oriented Programming can help us design and build huge systems, but that is only half the story. We usually rely on computer programs because we need to manage huge amounts of data, and to do it within a reasonable amount of time.
This is where algorithm design comes into play. The way we handle data inside an algorithm determines how long it takes to solve the problem. (See the Amount of data – Time correlation table above.)
In this post, I want to talk about how choosing the right Data Structure can help us build a successful, cost-effective algorithm.
When developing an algorithm, there are several things that we should take into account:
1- Amount of data
2- How we structure the data
3- How we manage the data
4- Time and space complexity
We assume that the main aim of our algorithm is to manage a large amount of data (> 1000 items). The time difference between two algorithms (say, O(n) vs O(n²)) for 10 elements is almost irrelevant.
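To make that point concrete, here is a minimal sketch (the function names and sizes are mine, chosen for illustration) timing an O(n) pass against an O(n²) pair-wise comparison. At 10 items both finish instantly; at a couple of thousand items the quadratic version is dramatically slower:

```python
import time

def linear(items):
    # O(n): touch each item exactly once
    total = 0
    for x in items:
        total += x
    return total

def quadratic(items):
    # O(n^2): compare every item against every other item
    count = 0
    for x in items:
        for y in items:
            if x == y:
                count += 1
    return count

for n in (10, 2000):
    data = list(range(n))
    start = time.perf_counter()
    linear(data)
    t_lin = time.perf_counter() - start

    start = time.perf_counter()
    quadratic(data)
    t_quad = time.perf_counter() - start

    print(f"n={n}: linear {t_lin:.6f}s, quadratic {t_quad:.6f}s")
```

The gap between the two timings grows with n, which is exactly why complexity only starts to matter once the data set is big enough.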
Our job as software engineers is to avoid unnecessary resource consumption and keep server costs down.
The most important step when choosing a Data Structure is to know what kind of operations will be performed against that Data Structure.
The most common operations are Access, Search, Insertion and Deletion. Requirements vary from case to case, but we can base our research on those 4 as a first step.
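As a quick illustration of those 4 operations (this example and its costs refer to a plain Python list, i.e. a dynamic array, not to any structure from the table below):

```python
# The four common operations on a Python list (dynamic array)
items = [10, 20, 30, 40]

value = items[2]       # Access by index: O(1)
found = 30 in items    # Search: O(n) linear scan
items.insert(0, 5)     # Insertion at the front: O(n), shifts every element
items.append(50)       # Insertion at the end: amortized O(1)
items.remove(20)       # Deletion by value: O(n), search then shift

print(value, found, items)
```

The same four operations have very different costs on other structures, which is precisely what the cheatsheet below summarizes.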
Of course, the most valuable asset of an algorithm is the programmer's creativity. With that in mind, consider the following Data Structure table a cheatsheet.
| Data Structure | Access | Search | Insertion | Deletion | Space |
|----------------|--------|--------|-----------|----------|-------|
| List with PI   | O(n)   | O(n)   | O(1)      | O(1)     | O(n)  |
*As long as we over-dimension the hash table, there will be fewer collisions and we will retrieve items easily.
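A minimal sketch of that footnote (the helper and the key set are mine, for illustration only): bucketing the same 1000 keys with a simple modulo hash, a table with 500 slots produces many collisions, while an over-dimensioned table with 4000 slots produces far fewer.

```python
def count_collisions(keys, table_size):
    # Map each key to a bucket with a simple modulo hash and count
    # how many keys land in an already-occupied bucket (a collision).
    buckets = [0] * table_size
    collisions = 0
    for key in keys:
        slot = hash(key) % table_size
        if buckets[slot] > 0:
            collisions += 1
        buckets[slot] += 1
    return collisions

keys = list(range(1000))
print(count_collisions(keys, 500))   # load factor 2.0: many collisions
print(count_collisions(keys, 4000))  # load factor 0.25: few collisions
```

Real hash table implementations resize themselves to keep the load factor low for exactly this reason.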
My plan is to dig more deeply into the listed Data Structures. If you find any other Data Structure interesting, please comment below. I will appreciate any suggestions.