10 Math Concepts for Programmers

The not-so-secret ingredient behind becoming a proficient programmer: math and its top 10 concepts.



Image by Author

 

As the demand for programmers increases, supply naturally follows, with more people entering the industry every day. It is, however, a competitive industry. To keep improving yourself, sharpen your skill set, and increase your salary, you need to prove that you are a proficient programmer. One way you can do this is by learning the things that people don't typically know.

A lot of people break into the programming industry assuming that they do not need to know the math behind it. Although this is somewhat true, understanding the mathematical concepts behind programming will make you a more proficient programmer.

How is that? By understanding what you are doing and what is happening. That’s how.

So let’s jump right into it. What are the top 10 math concepts for programmers?

 

Boolean Algebra

 

Boolean algebra stems from algebra. I guess that was obvious. If you are a programmer, or on your quest to become one, you probably already know what a Boolean is. If not, I'll quickly define it.

A Boolean is a data type/binary variable that has one of two possible values, for example 0 (false) or 1 (true). The Boolean data type is backed by Boolean algebra, in which the variable's values are known as truth values: true and false. When working with Boolean algebra, there are three operators that you can use:

  • Conjunction or AND operation
  • Disjunction or OR operation
  • Negation or Not operation

These can be visually represented as Venn diagrams to give you a better understanding of the output. Boolean algebra is made up of 6 laws (a short Python sketch follows the list):

  • Commutative law
  • Associative law
  • Distributive law
  • AND law
  • OR law
  • Inversion law
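
As a minimal Python sketch (the variables here are made up purely for illustration), the three operators and the commutative law look like this:

    # Boolean operators on two example truth values
    a, b = True, False

    print(a and b)   # conjunction (AND): True only if both are True -> False
    print(a or b)    # disjunction (OR): True if at least one is True -> True
    print(not a)     # negation (NOT): flips the value -> False

    # Commutative law: the order of the operands does not matter
    print((a and b) == (b and a))  # True
    print((a or b) == (b or a))    # True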

 

Numeral Systems

 

Computers understand numbers, and this is why they need a numeral system. A numeral system is a writing system used to express numbers. The four most common number systems are:

  1. Decimal number system (base-10)
  2. Binary number system (base-2)
  3. Octal number system (base-8)
  4. Hexadecimal number system (base-16)

Computers work with a base-2 (binary) numeral system, where the only possible digits are 0 and 1. Base64 is also used to encode binary data in a string format.
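
As a quick Python illustration (the value 42 is arbitrary), the built-in conversion functions and the standard base64 module move between these systems:

    import base64

    n = 42
    print(bin(n))            # '0b101010' -> base-2 (binary)
    print(oct(n))            # '0o52'     -> base-8 (octal)
    print(hex(n))            # '0x2a'     -> base-16 (hexadecimal)
    print(int("101010", 2))  # 42         -> back to base-10 (decimal)

    # Base64 encodes raw bytes as a printable string
    encoded = base64.b64encode(b"hello")
    print(encoded)                    # b'aGVsbG8='
    print(base64.b64decode(encoded))  # b'hello'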

 

Floating Point

 

Continuing with numbers, we have floating point. A floating point is a data type that represents real numbers as an approximation. A floating-point number is a number in which the position of the decimal point can move around, or "float", instead of being in a fixed position. This allows developers to make a trade-off between range and precision.

But why an approximation? Computers only have a limited amount of space for each number, typically either 32 bits (single precision) or 64 bits (double precision). 64-bit floats are the default in programming languages such as Python and JavaScript. Examples of floating-point numbers are 1.29, 87.565, and 9038724.2; they can be positive or negative and contain a decimal point.
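
A short Python sketch of why the approximation matters in practice (Python floats are 64-bit doubles on standard builds):

    import math
    import sys

    # Floating-point numbers are approximations of real numbers
    print(0.1 + 0.2)           # 0.30000000000000004, not exactly 0.3
    print(0.1 + 0.2 == 0.3)    # False

    # Compare floats with a tolerance instead of exact equality
    print(math.isclose(0.1 + 0.2, 0.3))  # True

    # The trade-off between range and precision for 64-bit doubles
    print(sys.float_info.max)  # largest representable value (~1.8e308)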

 

Logarithms

 

A logarithm, also known as a log, is the inverse of exponentiation: it answers the question "to what power must this base be raised to produce this number?" So why are logarithms important to programmers? Because they simplify complex mathematical calculations. For example, 1000 = 10^3 can also be written as 3 = log10(1000).

The base is the number that gets multiplied by itself. The exponent identifies how many times the base needs to be multiplied by itself. Therefore, a logarithm is an exponent that indicates to what power a base must be raised to produce a given number.

When a log uses base 2 it is a binary logarithm, and when it uses base 10 it is a common logarithm.
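
A small sketch with Python's standard math module (the input values are just examples):

    import math

    print(math.log10(1000))  # 3.0  -> 10 raised to 3 gives 1000
    print(math.log2(1024))   # 10.0 -> 2 raised to 10 gives 1024

    # Binary logarithms show up in algorithm analysis: doubling the input
    # size of a binary search adds only one extra comparison.
    for n in (8, 16, 32, 64):
        print(n, math.log2(n))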

 

Set Theory

 

A set is an unordered collection of unique values, which do not need to have any relation to one another. A set can only contain unique items; it cannot contain the same item twice or more.

For example, Excel files or database tables contain sets of unique rows. This is a type of discrete math, as these structures have a finite number of elements. The aim of set theory is to understand collections of values and the relations between them. It is typically used by data analysts, SQL experts, and data scientists.

You can do this by using the following operations (a short Python sketch follows the list):

  • Inner join or intersection - Returns a set containing elements that are present in both sets
  • Outer join or union - Returns elements from both sets
  • Union all – Same as the outer join operator, but it will contain all duplicates.
  • Except or Minus – A Minus B is a set containing elements from the set A that are not elements of the set B 
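
These operations map directly onto Python's built-in set type. SQL's UNION ALL has no exact set equivalent because sets drop duplicates, so a multiset (collections.Counter) is used as a rough stand-in below; the values are made up:

    from collections import Counter

    a = {1, 2, 3, 4}
    b = {3, 4, 5, 6}

    print(a & b)   # intersection / inner join: {3, 4}
    print(a | b)   # union / outer join:        {1, 2, 3, 4, 5, 6}
    print(a - b)   # except / minus (A minus B): {1, 2}

    # Sets keep only unique items, so "union all" needs a multiset instead
    print(Counter(a) + Counter(b))  # keeps the duplicated 3 and 4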

 

Combinatorics

 

Combinatorics is the art of counting things in order to obtain results and to understand certain properties of finite structures through patterns. Programming is all about solving problems, and combinatorics gives us ways to arrange and count objects in order to study these finite discrete structures.

Combinatorics rests on two core operations, permutation and combination (a short Python sketch follows the list):

  • Permutation is the act of arranging a set into some order or sequence
  • Combination is the selection of values of the set where the order is not taken into consideration.
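
Python's standard library covers both; a small sketch with example values:

    import math
    from itertools import combinations, permutations

    # How many ways to arrange 3 items out of 5 (order matters)?
    print(math.perm(5, 3))   # 60

    # How many ways to choose 3 items out of 5 (order ignored)?
    print(math.comb(5, 3))   # 10

    # itertools enumerates the arrangements and selections themselves
    print(list(permutations("abc", 2)))  # 6 ordered pairs
    print(list(combinations("abc", 2)))  # 3 unordered pairs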

 

Graph Theory

 

As you already know, a graph is a visual representation of a set of values, and these values can be connected. When it comes to data, the values are connected through relationships, which in graph theory are known as links or edges.

Graph theory is the study of graphs and the relationships between the edges and vertices of connected sets of points. It lets us describe pairwise relationships between objects using vertices, also known as nodes, that are connected by edges, also known as lines. A graph is represented as a pair G(V, E), where V is the finite set of vertices and E is the finite set of edges.
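
One common way to represent G(V, E) in code is an adjacency list; the graph below is a made-up example:

    # An undirected graph as an adjacency list: vertex -> set of neighbours
    graph = {
        "A": {"B", "C"},
        "B": {"A", "C"},
        "C": {"A", "B", "D"},
        "D": {"C"},
    }

    vertices = set(graph)                                          # V
    edges = {frozenset((u, v)) for u in graph for v in graph[u]}   # E
    print(len(vertices), len(edges))  # 4 vertices, 4 edges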

 

Complexity Theory

 

Complexity theory is the study of the amount of time and memory it takes for an algorithm to run as a function of the input size. There are two types of complexities:

  • Space complexity - the amount of memory an algorithm needs to run.
  • Time complexity - the amount of time an algorithm needs to run.

Time complexity usually gets more attention, because memory can often be reused while time cannot. When it comes to time complexity, the best way to measure it is by counting the number of operations the algorithm performs as the input grows. Algorithms are built from statements and loops, so to reduce running time you want to avoid unnecessary work, in particular loops nested inside loops.

Complexity theory for algorithms uses Big-O notation to describe and provide a better understanding of the limiting behavior of an algorithm. It is used to classify algorithms by how they respond to changes in input size (see the sketch below).
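
As a toy Python illustration (the function names are made up), compare a single pass, a nested loop, and a version that trades memory for time:

    def contains(items, target):
        # O(n) time: one pass over the input
        for item in items:
            if item == target:
                return True
        return False

    def has_duplicate(items):
        # O(n^2) time: a loop inside a loop, work grows quadratically
        for i in range(len(items)):
            for j in range(i + 1, len(items)):
                if items[i] == items[j]:
                    return True
        return False

    def has_duplicate_fast(items):
        # O(n) time but O(n) extra space: trading memory for speed
        return len(set(items)) != len(items)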

 

Statistics

 

Ahhh, statistics. If you're looking to get into artificial intelligence, you need to know about statistics. AI and machine learning are, at their core, built on statistics. Statistical programming is used to solve data-heavy problems, such as ChatGPT, whose responses are generated based on the probability of each piece of output given the prompt provided by the user.

You will need to learn more than mean, median, and mode when it comes to statistical programming. You will need to learn about bias, covariance, and Bayes' theorem. As a programmer, you will be given tasks and find yourself asking: is this a linear regression problem or a logistic regression problem? Understanding the difference between the two will help you identify what type of task you have at hand.
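
As a rough sketch of that distinction, using scikit-learn and made-up data (this assumes scikit-learn is installed and is purely illustrative): linear regression predicts a continuous value, while logistic regression predicts a class.

    from sklearn.linear_model import LinearRegression, LogisticRegression

    X = [[1], [2], [3], [4], [5]]  # a single made-up feature

    # Linear regression: predict a continuous value (e.g. a price)
    y_continuous = [1.1, 2.0, 2.9, 4.2, 5.1]
    print(LinearRegression().fit(X, y_continuous).predict([[6]]))

    # Logistic regression: predict a class (e.g. spam / not spam)
    y_class = [0, 0, 0, 1, 1]
    print(LogisticRegression().fit(X, y_class).predict([[6]]))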

 

Linear Algebra

 

You may have looked at linear algebra in school, or you may not have. Linear algebra is very important and is widely used in computer graphics and deep learning. To grasp linear algebra, you will need to understand these three terms:

  • Scalar - a single numerical value
  • Vector - a list of numbers, or a one-dimensional array
  • Matrix - a grid of numbers, or a two-dimensional array

Vectors can represent points and directions in 3D space, whereas matrices can represent transformations that happen to these vectors.
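
Here is a small NumPy sketch of those three objects and of a matrix acting on a vector (assuming NumPy is available; the rotation matrix is just an example):

    import numpy as np

    scalar = 2.0                      # a single value
    vector = np.array([1.0, 0.0])     # a 1-D array: a point or direction
    matrix = np.array([[0.0, -1.0],   # a 2-D array: a transformation
                       [1.0,  0.0]])  # (90-degree rotation)

    print(scalar * vector)   # scaling the vector: [2. 0.]
    print(matrix @ vector)   # rotating the vector: [0. 1.]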

 

Wrapping it up

 

This article has given you a quick overview of the top 10 math concepts that will improve your programming career. Learning the intricacies will not only make your day-to-day tasks smoother and easier to understand, but it will also showcase your potential to your employer.

If you are looking for a FREE book to help you, check out: Mathematics for Machine Learning: The Free eBook
 
 
Nisha Arya is a Data Scientist, Freelance Technical Writer and Community Manager at KDnuggets. She is particularly interested in providing Data Science career advice or tutorials and theory based knowledge around Data Science. She also wishes to explore the different ways Artificial Intelligence is/can benefit the longevity of human life. A keen learner, seeking to broaden her tech knowledge and writing skills, whilst helping guide others.