# Big O in real-life programming – why algorithm complexity may not matter in practice

All of us programmers have studied the complexity of algorithms at some point in our lives. We saw how an algorithm that runs in O(n) time is better than one that runs in O(n²) or O(n³). We studied tools, techniques, and mathematical concepts for analyzing running times so that we could improve on them later. However, we don't often see programmers put those things into practice when they develop algorithms and write programs day to day. Why is that? Is it because people are too lazy to do all that math? Or is it not required at all? We will see in this article.

#### The applicability of Big-O in real-life scenarios

The Big-O notation is the most commonly used notation for studying the worst-case time complexity of algorithms, so let us use it for our discussion. Assume that we have an algorithm to solve a problem, and that its worst-case time complexity is given by the function f(n) = 2n² + 6n + 10, which is of the order O(n²). Assume also that, with optimization, we could bring it down to g(n) = 21n log n + 35n + 10, which is of the order O(n log n).

The table below shows how these two functions grow with respect to n (values rounded; logarithms taken to base 2):

| n | f(n) = 2n² + 6n + 10 | g(n) = 21n log n + 35n + 10 |
|---:|---:|---:|
| 1 | 18 | 45 |
| 10 | 270 | ~1,058 |
| 100 | 20,610 | ~17,462 |
| 1,000 | 2,006,010 | ~244,292 |
| 10,000 | 200,060,010 | ~3,140,430 |
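These growth figures are easy to verify with a short script (a minimal sketch; the constants come from the formulas above, and base-2 logarithms are assumed):

```python
import math

def f(n):
    # Original algorithm: O(n^2)
    return 2 * n**2 + 6 * n + 10

def g(n):
    # Optimized algorithm: O(n log n); log base 2 assumed
    return 21 * n * math.log2(n) + 35 * n + 10

for n in [1, 10, 100, 1000, 10000]:
    faster = "f" if f(n) < g(n) else "g"
    print(f"n={n:>6}  f(n)={f(n):>13,.0f}  g(n)={g(n):>13,.0f}  faster: {faster}")
```

Running this shows the crossover: for small n the "worse" O(n²) function is actually cheaper, and only for larger n does the O(n log n) version win.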

As you can see in this example, the algorithm with O(n²) complexity actually performs better than the one with O(n log n) complexity for smaller inputs. So, in a program where the input size is likely to stay small (for example, the number of items in a user's shopping cart, or the number of flights departing from a given airport in a day), it is actually better to use the original algorithm without optimizing it. However, if the input size can grow large, you might need to optimize the algorithm.

#### What if the input size is big enough to make a difference?

If the input size is large enough to make a difference in the running time, you might still want to avoid optimizing. This is because the gain in performance may not justify the time spent optimizing the code. In today's world, software developers need to deliver one business requirement after another, and delivering the next requirement takes precedence over shaving a fraction of a second off something that has already been delivered. However, at a very large scale, even these small fractions can have a significant effect.

Thus we see that almost all programs fall into one of these three categories:

1. The input size stays small, so the unoptimized algorithm is fast enough, or even faster.
2. The input size is large enough to matter, but the performance gain would not justify the time spent optimizing.
3. The program runs at a very large scale, where even small fractions of a second add up to a significant effect.

While the number of programs falling into the third category grows with every passing day, a huge majority of programs still fall into the first two.

#### How to optimize a program without improving its time complexity

There are many ways to optimize a program without improving its running time complexity. One of the most important is reducing IO operations. IO operations include things like making an API call, reading/writing files from/to disk, fetching data from a database, writing data to a database, and so on. One of the best ways to reduce IO is to look for IO calls inside loops and see whether they can be moved outside the loop. For example, consider the following pseudocode.

```
# Let the variable students hold a list of students whose
# grade report needs to be generated.
for each student in students:
    student_data = getStudentData(student)  # one DB query per student
    report.write(student_data)
```

Here, we have a query inside the loop that fetches each student's data from the database. Every query is a separate IO operation, and their completion times add up in the total running time of our program.

Now consider the following pseudo code to do the same thing:

```
# Let the variable students hold a list of students whose
# grade report needs to be generated.
all_students_data = fetchAllStudentData(students)  # one batched DB query
for student_data in all_students_data:
    report.write(student_data)
```
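As a concrete illustration of the same pattern, here is a hypothetical Python version using an in-memory SQLite database; the `students` table, its columns, and the sample rows are invented for this example:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE students (id INTEGER PRIMARY KEY, name TEXT, grade TEXT)")
conn.executemany("INSERT INTO students VALUES (?, ?, ?)",
                 [(1, "Ada", "A"), (2, "Ben", "B"), (3, "Cy", "C")])

student_ids = [1, 2, 3]

# Slow: one query (a separate IO round trip) per student
per_student = [
    conn.execute("SELECT name, grade FROM students WHERE id = ?", (sid,)).fetchone()
    for sid in student_ids
]

# Better: a single batched query for all students
placeholders = ",".join("?" * len(student_ids))
batched = conn.execute(
    f"SELECT name, grade FROM students WHERE id IN ({placeholders})",
    student_ids,
).fetchall()

print(batched)  # same rows as above, fetched in one round trip instead of three
```

Both versions return the same rows; the difference is the number of round trips to the database, which is exactly the kind of constant-factor cost that Big-O analysis ignores but real programs feel.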