Mastering TensorFlow Tensors in 5 Easy Steps
Discover how the building blocks of TensorFlow work at a lower level and learn how to make the most of Tensor objects.
By Orhan G. Yalçın, AI Researcher
If you are reading this article, I am sure that we share similar interests and are, or will be, in similar industries. So let’s connect via LinkedIn! Please do not hesitate to send a contact request! Orhan G. Yalçın — LinkedIn
In this post, we will dive into the details of TensorFlow Tensors. We will cover all the topics related to Tensors in TensorFlow in these five simple steps:
- Step I: Definition of Tensors → What is a Tensor?
- Step II: Creation of Tensors → Functions to Create Tensor Objects
- Step III: Qualifications of Tensors → Characteristics and Features of Tensor Objects
- Step IV: Operations with Tensors → Indexing, Basic Tensor Operations, Shape Manipulation, and Broadcasting
- Step V: Special Types of Tensors → Special Tensor Types Other than Regular Tensors
Definition of Tensors: What is a Tensor?
Tensors are TensorFlow’s multi-dimensional arrays with uniform type. They are very similar to NumPy arrays, and they are immutable, which means that they cannot be altered once created; you can only create a new copy that incorporates the edits.
Let’s see how Tensors work with a code example. But first, to work with TensorFlow objects, we need to import the TensorFlow library. We often use NumPy with TensorFlow, so let’s also import NumPy with the following lines:
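The two import lines look like this:

```python
import tensorflow as tf
import numpy as np
```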
Creation of Tensors: Creating Tensor Objects
There are several ways to create a `tf.Tensor` object. Let’s start with a few examples. You can create Tensor objects with several TensorFlow functions, as shown in the below examples:
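A sketch of four such creation functions; the arguments are inferred from the printed output below, and the variable names are illustrative:

```python
import tensorflow as tf

t_constant = tf.constant([[1, 2, 3, 4, 5]])  # a Tensor from a nested Python list
t_ones = tf.ones(shape=(1, 5))               # a Tensor filled with ones
t_zeros = tf.zeros(shape=(1, 5))             # a Tensor filled with zeros
t_range = tf.range(start=1, limit=6)         # a Tensor from a range of integers

print(t_constant)
print(t_ones)
print(t_zeros)
print(t_range)
```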
Output:

    tf.Tensor([[1 2 3 4 5]], shape=(1, 5), dtype=int32)
    tf.Tensor([[1. 1. 1. 1. 1.]], shape=(1, 5), dtype=float32)
    tf.Tensor([[0. 0. 0. 0. 0.]], shape=(1, 5), dtype=float32)
    tf.Tensor([1 2 3 4 5], shape=(5,), dtype=int32)
As you can see, we created Tensor objects with the shape (1, 5) with three different functions and a fourth Tensor object with the shape (5,) with the `tf.range()` function. Note that `tf.ones` and `tf.zeros` accept the shape as their required argument since their element values are pre-determined.
Qualifications of Tensors: Characteristics and Features of Tensor Objects
TensorFlow Tensors are created as `tf.Tensor` objects, and they have several characteristic features. First of all, they have a rank based on the number of dimensions they have. Secondly, they have a shape, a list that consists of the lengths of all their dimensions. All tensors have a size, which is the total number of elements within a Tensor. Finally, their elements are all recorded in a uniform dtype (data type). Let’s take a closer look at each of these features.
Rank System and Dimension
Tensors are categorized based on the number of dimensions they have:
- Rank-0 (Scalar) Tensor: A tensor containing a single value and no axes (0 dimensions);
- Rank-1 Tensor: A tensor containing a list of values in a single axis (1 dimension);
- Rank-2 Tensor: A tensor containing 2 axes (2 dimensions); and
- Rank-N Tensor: A tensor containing N axes (N dimensions).
For example, we can create a Rank-3 tensor by passing a three-level nested list object to the `tf.constant` function. For this example, we arrange the numbers 0 to 11 into a three-level nested list with three elements at the deepest level:
The code to create a Rank-3 Tensor object
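A sketch of that code, with the values taken from the output below:

```python
import tensorflow as tf

# A three-level nested list with the numbers 0 to 11
rank_3_tensor = tf.constant([[[0, 1, 2],
                              [3, 4, 5]],
                             [[6, 7, 8],
                              [9, 10, 11]]])
print(rank_3_tensor)
```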
Output:

    tf.Tensor(
    [[[ 0  1  2]
      [ 3  4  5]]
     [[ 6  7  8]
      [ 9 10 11]]], shape=(2, 2, 3), dtype=int32)
We can view the number of dimensions that our `rank_3_tensor` object currently has with the `.ndim` attribute.
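For example (the tensor is re-created here so the snippet is self-contained):

```python
import tensorflow as tf

rank_3_tensor = tf.constant([[[0, 1, 2], [3, 4, 5]],
                             [[6, 7, 8], [9, 10, 11]]])
# .ndim returns the number of dimensions (the rank)
print("The number of dimensions in our Tensor object is", rank_3_tensor.ndim)
```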
Output: The number of dimensions in our Tensor object is 3
The shape feature is another attribute that every Tensor has. It shows the size of each dimension in the form of a list. We can view the shape of the `rank_3_tensor` object we created with the `.shape` attribute, as shown below:
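A sketch of that code (the tensor is re-created so the snippet runs on its own):

```python
import tensorflow as tf

rank_3_tensor = tf.constant([[[0, 1, 2], [3, 4, 5]],
                             [[6, 7, 8], [9, 10, 11]]])
# .shape returns the length of each dimension
print("The shape of our Tensor object is", rank_3_tensor.shape)
```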
Output: The shape of our Tensor object is (2, 2, 3)
As you can see, our tensor has two elements at the first level, two elements at the second level, and three elements at the third level.
Size is another feature of Tensors: it is the total number of elements a Tensor has. We cannot read the size from an attribute of the Tensor object; instead, we need to use the `tf.size()` function. Finally, we will convert the output to NumPy with the `.numpy()` instance method to get a more readable result:
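For example (re-creating the tensor so the snippet is self-contained):

```python
import tensorflow as tf

rank_3_tensor = tf.constant([[[0, 1, 2], [3, 4, 5]],
                             [[6, 7, 8], [9, 10, 11]]])
# tf.size() returns a scalar Tensor; .numpy() converts it to a plain number
print("The size of our Tensor object is", tf.size(rank_3_tensor).numpy())
```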
Output: The size of our Tensor object is 12
Tensors often contain numerical data types such as floats and ints, but they can also contain many other data types, such as complex numbers and strings.
Each Tensor object, however, must store all its elements in a single uniform data type. Therefore, we can view the data type selected for a particular Tensor object with the `.dtype` attribute, as shown below:
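For example:

```python
import tensorflow as tf

rank_3_tensor = tf.constant([[[0, 1, 2], [3, 4, 5]],
                             [[6, 7, 8], [9, 10, 11]]])
# .dtype returns the uniform data type of all elements
print("The data type selected for this Tensor object is", rank_3_tensor.dtype)
```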
Output: The data type selected for this Tensor object is <dtype: 'int32'>
Operations with Tensors
An index is a numerical representation of an item’s position in a sequence. This sequence can refer to many things: a list, a string of characters, or any arbitrary sequence of values.
TensorFlow also follows standard Python indexing rules, which are similar to list indexing or NumPy array indexing.
A few rules about indexing:
- Indices start at zero (0).
- A negative index (“-n”) means counting backward from the end.
- Colons (“:”) are used for slicing.
- Commas (“,”) are used to reach deeper levels.
Let’s create a `rank_1_tensor` with the following lines:
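A sketch of that code, matching the output below:

```python
import tensorflow as tf

rank_1_tensor = tf.constant([0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11])
print(rank_1_tensor)
```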
Output: tf.Tensor([ 0 1 2 3 4 5 6 7 8 9 10 11], shape=(12,), dtype=int32)
and test out our rules no.1, no.2, and no.3:
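A sketch of those tests (`rank_1_tensor` is re-created so the snippet runs on its own):

```python
import tensorflow as tf

rank_1_tensor = tf.constant([0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11])

print("First element is:", rank_1_tensor[0].numpy())    # rule 1: indices start at 0
print("Last element is:", rank_1_tensor[-1].numpy())    # rule 2: negative indexing
print("Elements in between the 1st and the last are:",
      rank_1_tensor[1:-1].numpy())                      # rule 3: slicing with colons
```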
Output:

    First element is: 0
    Last element is: 11
    Elements in between the 1st and the last are: [ 1  2  3  4  5  6  7  8  9 10]
Now, let’s create our `rank_2_tensor` object with the following code:
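A sketch of that code, matching the output below:

```python
import tensorflow as tf

rank_2_tensor = tf.constant([[0, 1, 2, 3, 4, 5],
                             [6, 7, 8, 9, 10, 11]])
print(rank_2_tensor)
```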
Output:

    tf.Tensor(
    [[ 0  1  2  3  4  5]
     [ 6  7  8  9 10 11]], shape=(2, 6), dtype=int32)
and test the 4th rule with several examples:
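A sketch of those tests (`rank_2_tensor` is re-created so the snippet runs on its own; the comma reaches into the second level):

```python
import tensorflow as tf

rank_2_tensor = tf.constant([[0, 1, 2, 3, 4, 5],
                             [6, 7, 8, 9, 10, 11]])

print("The first element of the first level is:", rank_2_tensor[0].numpy())
print("The second element of the first level is:", rank_2_tensor[1].numpy())
print("The first element of the second level is:", rank_2_tensor[0, 0].numpy())
print("The third element of the second level is:", rank_2_tensor[0, 2].numpy())
```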
Output:

    The first element of the first level is: [0 1 2 3 4 5]
    The second element of the first level is: [ 6  7  8  9 10 11]
    The first element of the second level is: 0
    The third element of the second level is: 2
Now that we have covered the basics of indexing, let’s take a look at the basic operations we can conduct on Tensors.
Basic Operations with Tensors
You can easily do basic math operations on tensors such as:
- Element-wise Multiplication
- Matrix Multiplication
- Finding the Maximum or Minimum
- Finding the Index of the Max Element
- Computing Softmax Value
Let’s see these operations in action. We will create two Tensor objects and apply these operations.
We can start with addition.
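A sketch of the two Tensor objects and the addition (the values of `a` and `b` are inferred from the outputs in this section):

```python
import tensorflow as tf

a = tf.constant([[2., 4.], [6., 8.]])
b = tf.constant([[1., 3.], [5., 7.]])

# Element-wise addition (equivalently: a + b)
print(tf.add(a, b))
```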
Output:

    tf.Tensor(
    [[ 3.  7.]
     [11. 15.]], shape=(2, 2), dtype=float32)
Let’s continue with the element-wise multiplication.
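A sketch using the same `a` and `b` as above (values inferred from the outputs in this section):

```python
import tensorflow as tf

a = tf.constant([[2., 4.], [6., 8.]])
b = tf.constant([[1., 3.], [5., 7.]])

# Element-wise multiplication (equivalently: a * b)
print(tf.multiply(a, b))
```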
Output:

    tf.Tensor(
    [[ 2. 12.]
     [30. 56.]], shape=(2, 2), dtype=float32)
We can also do matrix multiplication:
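A sketch with the same `a` and `b` (values inferred from the outputs in this section):

```python
import tensorflow as tf

a = tf.constant([[2., 4.], [6., 8.]])
b = tf.constant([[1., 3.], [5., 7.]])

# Matrix multiplication (equivalently: a @ b)
print(tf.matmul(a, b))
```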
Output:

    tf.Tensor(
    [[22. 34.]
     [46. 74.]], shape=(2, 2), dtype=float32)
NOTE: Matmul operations lie at the heart of deep learning algorithms. Therefore, even if you do not use matmul directly, it is crucial to be aware of these operations.
Examples of other operations we listed above:
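A sketch of those operations on `b` (value inferred from the output below):

```python
import tensorflow as tf

b = tf.constant([[1., 3.], [5., 7.]])

# The largest element in b
print("The Max value of the tensor object b is:", tf.reduce_max(b).numpy())
# The row index of the maximum in each column (tf.argmax defaults to axis 0)
print("The index position of the Max of the tensor object b is:", tf.argmax(b).numpy())
# Softmax over the last axis, turning each row into a probability distribution
print("The softmax computation result of the tensor object b is:", tf.nn.softmax(b).numpy())
```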
Output:

    The Max value of the tensor object b is: 7.0
    The index position of the Max of the tensor object b is: [1 1]
    The softmax computation result of the tensor object b is: [[0.11920291 0.880797  ]
     [0.11920291 0.880797  ]]
Just as with NumPy arrays and pandas DataFrames, you can reshape Tensor objects as well.
The reshape operation is very fast since the underlying data does not need to be duplicated. For reshaping, we can use the `tf.reshape()` function. Let's use it in code:
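A sketch of that code (the initial values and shapes are inferred from the output below; the variable names are illustrative):

```python
import tensorflow as tf

a = tf.constant([[1, 2, 3, 4, 5, 6]])
print("The shape of our initial Tensor object is:", a.shape)

b = tf.reshape(a, [6, 1])
print("The shape of our reshaped Tensor object is:", b.shape)

c = tf.reshape(a, [3, 2])
print("The shape of our reshaped Tensor object is:", c.shape)

# Passing -1 flattens the tensor into a single axis
flat = tf.reshape(a, [-1])
print("Our flattened Tensor object is:", flat)
```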
Output:

    The shape of our initial Tensor object is: (1, 6)
    The shape of our reshaped Tensor object is: (6, 1)
    The shape of our reshaped Tensor object is: (3, 2)
    Our flattened Tensor object is: tf.Tensor([1 2 3 4 5 6], shape=(6,), dtype=int32)
As you can see, we can easily reshape our Tensor objects. But beware: a careless reshape can leave the elements in an unintended order or even raise an error. So, look out for that 😀.
When we try to do combined operations using multiple Tensor objects, the smaller Tensors can stretch out automatically to fit larger tensors, just as NumPy arrays can. For example, when you attempt to multiply a scalar Tensor with a Rank-2 Tensor, the scalar is stretched to multiply every Rank-2 Tensor element. See the example below:
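A sketch of such a scalar broadcast (the values are inferred from the output below; the variable name is illustrative):

```python
import tensorflow as tf

m = tf.constant([[1, 2], [3, 4]])
# The scalar 5 is stretched (broadcast) across every element of m
print(m * 5)
```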
Output:

    tf.Tensor(
    [[ 5 10]
     [15 20]], shape=(2, 2), dtype=int32)
Thanks to broadcasting, you don’t have to worry about matching sizes when doing math operations on Tensors, as long as the shapes are compatible.
Special Types of Tensors
We tend to generate Tensors in a rectangular shape and store numerical values as elements. However, TensorFlow also supports irregular, or specialized, Tensor types, which are:
- Ragged Tensors
- String Tensors
- Sparse Tensors
Let's take a closer look at what each of them is.
Ragged tensors are tensors with different numbers of elements along some axis, as shown in Figure X.
You can build a Ragged Tensor, as shown below:
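A sketch using `tf.ragged.constant` (the first two rows come from the output below; the empty last row is illustrative, since the original output truncates there):

```python
import tensorflow as tf

# Rows may have different lengths in a RaggedTensor
ragged = tf.ragged.constant([[1, 2, 3], [4, 5], []])
print(ragged)
```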
Output: <tf.RaggedTensor [[1, 2, 3], [4, 5], ]>
String Tensors are tensors that store string objects. We can build a String Tensor just as we create a regular Tensor object, but we pass string objects as elements instead of numerical values, as shown below:
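A sketch of that code, with the strings taken from the output below:

```python
import tensorflow as tf

# tf.constant infers dtype=string from the string elements
string_tensor = tf.constant(["With this", "code, I am", "creating a String Tensor"])
print(string_tensor)
```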
Output: tf.Tensor([b'With this' b'code, I am' b'creating a String Tensor'], shape=(3,), dtype=string)
Finally, Sparse Tensors are rectangular Tensors for sparse data. When you have holes (i.e., null values) in your data, Sparse Tensors are the go-to objects. Creating a Sparse Tensor is a bit more involved, but here is an example:
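A sketch using `tf.sparse.SparseTensor` (the indices and values are inferred from the output below):

```python
import tensorflow as tf

# Only the non-zero entries and the overall shape are stored
sparse = tf.sparse.SparseTensor(indices=[[0, 0], [2, 2], [4, 4]],
                                values=[25, 50, 100],
                                dense_shape=[5, 5])
# Convert to a dense Tensor to see the full grid
print(tf.sparse.to_dense(sparse))
```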
Output:

    tf.Tensor(
    [[ 25   0   0   0   0]
     [  0   0   0   0   0]
     [  0   0  50   0   0]
     [  0   0   0   0   0]
     [  0   0   0   0 100]], shape=(5, 5), dtype=int32)
We have successfully covered the basics of TensorFlow’s Tensor objects.
Give yourself a pat on the back!
This should give you a lot of confidence since you are now much more informed about the building blocks of the TensorFlow framework.
Beginner's Guide to TensorFlow 2.x for Deep Learning Applications
Understanding the TensorFlow Platform and What it has to Offer to a Machine Learning Expert
Continue with Part 3 of the series:
Mastering TensorFlow “Variables” in 5 Easy Steps
Learn how to use TensorFlow Variables, their differences from plain Tensor objects, and when they are preferred over…
Subscribe to the Mailing List for the Full Code
If you would like to have access to full code on Google Colab and the rest of my latest content, consider subscribing to the mailing list:
Finally, if you are interested in applied deep learning tutorials, check out some of my articles:
Image Classification in 10 Minutes with MNIST Dataset
Using Convolutional Neural Networks to Classify Handwritten Digits with TensorFlow and Keras | Supervised Deep Learning
Image Generation in 10 Minutes with Generative Adversarial Networks
Using Unsupervised Deep Learning to Generate Handwritten Digits with Deep Convolutional GANs using TensorFlow and the…
Image Noise Reduction in 10 Minutes with Convolutional Autoencoders
Using Deep Convolutional Autoencoders to Clean (or Denoise) Noisy Images with the help of Fashion MNIST | Unsupervised…
Using Recurrent Neural Networks to Predict Bitcoin (BTC) Prices
Wouldn’t it be awesome if you were, somehow, able to predict tomorrow’s Bitcoin (BTC) price? Cryptocurrency market has…
Bio: Orhan G. Yalçın is an AI Researcher in the legal domain. He is a qualified lawyer with business development and data science skills, and has previously worked as a legal trainee for Allen & Overy on capital markets, competition, and corporate law matters.
Original. Reposted with permission.
- WTF is a Tensor?!?
- Getting Started with TensorFlow 2
- The Most Important Fundamentals of PyTorch you Should Know