Change the Background of Any Video with 5 Lines of Code

Learn to blur, color, grayscale and create a virtual background for a video with PixelLib.



By Ayoola Olafenwa, Independent AI Researcher


Photo by Author

 

PixelLib is a library created to enable easy implementation of object segmentation in real-life applications. PixelLib supports image tuning, the ability to alter the background of any image. PixelLib now also supports video tuning, the ability to alter the background of videos and camera feeds. PixelLib employs object segmentation to perform excellent foreground and background separation. It makes use of the deeplabv3+ model trained on the Pascal VOC dataset, which supports 20 object categories:

person, bus, car, aeroplane, bicycle, motorbike, bird, boat, bottle, cat, chair, cow, diningtable, dog, horse, pottedplant, sheep, sofa, train, tv


Background effects supported are as follows:
1. Changing the background of an image with a picture.
2. Assigning a distinct color to the background of an image and a video.
3. Blurring the background of an image and a video.
4. Grayscaling the background of an image and a video.
5. Creating a virtual background for a video.

Install PixelLib and its dependencies:

Install Tensorflow (PixelLib supports tensorflow 2.0 and above) with:

  • pip3 install tensorflow

Install PixelLib with:

  • pip3 install pixellib

If installed, upgrade to the latest version using:

  • pip3 install pixellib --upgrade

 

Detection of a target object

 
In some applications, you may want to target the detection of a particular object in an image or a video. The deeplab model by default detects all the objects it supports in an image or video. It is now possible to filter out unused detections and target a particular object in an image or a video.

sample image


Source: Unsplash.com By Strvnge Films

 

We intend to blur the background of the image above.

Code to blur the image’s background:
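Based on the PixelLib API used throughout this tutorial, the five lines can be sketched as a small helper. This is a sketch, assuming pixellib and tensorflow are installed and the model file xception_pascalvoc.pb has been downloaded to the working directory (the import is deferred into the function so the sketch loads even before the library is installed):

```python
def blur_image_background(image_path, model_path="xception_pascalvoc.pb",
                          output_path="output_img.jpg"):
    """Blur everything behind the objects PixelLib detects in an image."""
    from pixellib.tune_bg import alter_bg  # requires pixellib + tensorflow

    change_bg = alter_bg(model_type="pb")       # "pb" = tensorflow frozen-graph model
    change_bg.load_pascalvoc_model(model_path)  # deeplabv3+ trained on Pascal VOC
    change_bg.blur_bg(image_path, extreme=True, output_image_name=output_path)
    return output_path
```

Calling blur_image_background("sample.jpg") writes the blurred result to output_img.jpg in the current working directory.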

output image


Our goal is to completely blur the background behind the person in this image, but by default the model keeps every object it detects sharp, and we are not satisfied with the presence of the other objects. Therefore, we need to modify the code to detect a target object.

It is still the same code, except that we introduced an extra parameter, detect, in the blur_bg function.

change_bg.blur_bg("sample.jpg", extreme = True, output_image_name="output_img.jpg", detect = "person")


detect: This is the parameter that determines the target object to be detected. The value of detect is set to person, which means the model will detect only persons in the image.


This is the new image with only our target object shown.

If we intend to show only the cars present in this image, we just have to change the value of detect from person to car.

change_bg.blur_bg("sample.jpg", extreme = True, output_image_name="output_img.jpg", detect = "car")



We set the target object to car and every other object present in the image was blurred with the background.

Color background of target object

Target detections can be done with color effect.

change_bg.color_bg("sample.jpg", colors = (0,128,0), output_image_name="output_img.jpg", detect = "person")



Change the background of a target object with a new picture

background image

 

change_bg.change_bg_img("sample.jpg", "background.jpg", output_image_name="output_img.jpg", detect = "person")



Grayscale the background of a target object

change_bg.gray_bg("sample.jpg", output_image_name="output_img.jpg", detect = "person")



Read this article for a comprehensive overview of background editing in images with PixelLib.

Change the Background of Any Image with 5 Lines of Code
 

 

Video tuning with PixelLib

 
Video tuning is the ability to alter the background of any video.

Blur Video background

PixelLib makes it convenient to blur the background of any video using just five lines of code.

sample_video

code to blur the background of a video file

import pixellib                       
from pixellib.tune_bg import alter_bg  
                                             
change_bg = alter_bg(model_type = "pb")
change_bg.load_pascalvoc_model("xception_pascalvoc.pb")


We imported pixellib, and from pixellib we imported the class alter_bg. We created an instance of the class with the parameter model_type set to pb, and finally called the function to load the model.

Note: PixelLib supports two types of deeplabv3+ models, keras and tensorflow models. The keras model is extracted from the tensorflow model’s checkpoint, and the tensorflow model performs better than the keras model extracted from it, so we will make use of the tensorflow model. Download the tensorflow model from here.

There are three parameters that determine the degree to which the background is blurred.

low: When it is set to true, the background is blurred slightly.

moderate: When it is set to true, the background is moderately blurred.

extreme: When it is set to true, the background is deeply blurred.

change_bg.blur_video("sample_video.mp4", extreme = True, frames_per_second=10, output_video_name="blur_video.mp4", detect = "person")


This is the line of code that blurs the video’s background. This function takes in five parameters:

video_path: This is the path to the video file whose background we want to blur.

extreme: It is set to true, so the background of the video will be extremely blurred.

frames_per_second: This is the parameter that sets the number of frames per second for the output video file. In this case, it is set to 10, i.e. the saved video file will have 10 frames per second.

output_video_name: This is the name of the saved video, which will be saved in your current working directory.

detect: This is the parameter that chooses the target object in the video. It is set to person.
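Putting the model setup and the blur call together, the whole video-blurring script can be sketched as a helper function. This assumes pixellib is installed and the model file and sample_video.mp4 are in the working directory (the import is deferred so the sketch loads without the library):

```python
def blur_video_background(video_path="sample_video.mp4",
                          model_path="xception_pascalvoc.pb",
                          output_path="blur_video.mp4"):
    """Extreme-blur a video's background, keeping only detected people sharp."""
    from pixellib.tune_bg import alter_bg  # requires pixellib + tensorflow

    change_bg = alter_bg(model_type="pb")       # tensorflow frozen-graph model
    change_bg.load_pascalvoc_model(model_path)  # deeplabv3+ trained on Pascal VOC
    change_bg.blur_video(video_path, extreme=True, frames_per_second=10,
                         output_video_name=output_path, detect="person")
    return output_path
```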
output video

Blur the Background of Camera’s Feeds

import cv2

capture = cv2.VideoCapture(0)


We imported cv2 and included the code to capture the camera’s frames.

change_bg.blur_camera(capture, extreme = True, frames_per_second = 5, output_video_name = "output_video.mp4", show_frames = True, frame_name = "frame", detect = "person")


In the code for blurring the camera’s frames, we replaced the video’s filepath with capture, i.e. we are going to process a stream of camera frames instead of a video file. We added extra parameters for the purpose of showing the camera’s frames:

show_frames: This is the parameter that handles the display of the blurred camera frames.

frame_name: This is the name given to the shown camera’s frame.

Output Video

Wow! PixelLib successfully blurred my background in the video.

 

Create a Virtual Background for a Video

 
PixelLib makes it super easy to create a virtual background for any video, and you can make use of any image to create a virtual background for a video.

sample video

Image to serve as background for a video

 

change_bg.change_video_bg("sample_video.mp4", "bg.jpg", frames_per_second = 10, output_video_name = "output_video.mp4", detect = "person")


It is still the same code, except that we called the function change_video_bg to create a virtual background for the video. The function takes in the path of the image we want to use as the background for the video.

Output Video

Beautiful demo! We are able to successfully create a virtual space background for the video.

Create a Virtual Background for Camera’s Feeds

change_bg.change_camera_bg(capture, "space.jpg", frames_per_second = 5, show_frames = True, frame_name = "frame", output_video_name = "output_video.mp4", detect = "person")


It is similar to the code we used to blur the camera’s frames. The only difference is that we called the function change_camera_bg. We performed the same routine, replaced the video’s filepath with capture, and added the same parameters.

Output Video

Wow! PixelLib successfully created a virtual background for my video.

Color Video background

PixelLib makes it possible to assign any color to the background of a video.

code to color the background of a video file

change_bg.color_video("sample_video.mp4", colors =  (0, 128, 0), frames_per_second=15, output_video_name="output_video.mp4", detect = "person")


It is still the same code, except that we called the function color_video to give the video’s background a distinct color. The function color_video takes the parameter colors, and the RGB value of colors is set to green; the RGB value of the color green is (0, 128, 0).
output video

change_bg.color_video("sample_video.mp4", colors = (255, 255, 255), frames_per_second=15, output_video_name="output_video.mp4", detect = "person")


The same video with a white background

Color the Background of Camera’s Feeds

change_bg.color_camera(capture, frames_per_second = 10, colors = (0, 128, 0), show_frames = True, frame_name = "frame", output_video_name = "output_video.mp4", detect = "person")


It is similar to the code we used to create a virtual background for the camera’s frames. The only difference is that we called the function color_camera. We performed the same routine, replaced the video’s filepath with capture, and added the same parameters.

Output Video

Beautiful demo! My background was successfully colored green with PixelLib.

Grayscale Video background

code to grayscale the background of a video file

change_bg.gray_video("sample_video.mp4", frames_per_second=10, output_video_name="output_video.mp4", detect = "person")


output video

Note: The background of the video would be altered and the objects present would maintain their original quality.

Grayscale the Background of Camera’s Feeds

It is similar to the code we used to color the camera’s frames. The only difference is that we called the function gray_camera. We performed the same routine, replaced the video’s filepath with capture, and added the same parameters.
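Since no code is shown for this step, here is a sketch of that call, assuming the same setup as before (pixellib and opencv-python installed, the model file downloaded, and a webcam available); the imports are deferred so the sketch loads without the libraries:

```python
def gray_camera_background(model_path="xception_pascalvoc.pb",
                           output_path="output_video.mp4"):
    """Grayscale the background of a live camera feed, keeping people in color."""
    import cv2                             # requires opencv-python
    from pixellib.tune_bg import alter_bg  # requires pixellib + tensorflow

    capture = cv2.VideoCapture(0)          # 0 = default camera
    change_bg = alter_bg(model_type="pb")
    change_bg.load_pascalvoc_model(model_path)
    change_bg.gray_camera(capture, frames_per_second=10, show_frames=True,
                          frame_name="frame", output_video_name=output_path,
                          detect="person")
    capture.release()
```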

Visit PixelLib’s official github repository
Visit PixelLib’s official documentation

Reach me via:

Email: olafenwaayoola@gmail.com

Linkedin: Ayoola Olafenwa

Twitter: @AyoolaOlafenwa

Facebook: Ayoola Olafenwa

Check out these articles written on how to make use of PixelLib for semantic and instance segmentation of objects in images and videos.

Image Segmentation With 5 Lines of Code
Semantic and Instance Segmentation with PixelLib.
 

Video Segmentation With 5 Lines of Code
Semantic and instance segmentation of videos.
 

Semantic Segmentation of 150 classes of objects With 5 Lines of Code
Semantic segmentation of 150 classes of objects with PixelLib
 

Custom Instance Segmentation Training With 7 Lines Of Code.
Train your dataset with 7 Lines of Code to implement instance segmentation and object detection.

 
Bio: Ayoola Olafenwa is an independent AI Researcher who specializes in the field of computer vision.

Original. Reposted with permission.
