OpenCV – Various Image Filters
We need machines that can see an image and comprehend its information the way humans do in order to extract the greatest value from visual data. Humans can summarise what they have seen, narrate it, and recall faces or objects. If we want a machine to think like us, we need to build computers that can extract hidden value from the vast amount of visual input around them.
By providing techniques that mimic human vision, computer vision has made it possible for machines to see. We have created methods that incorporate various automated extraction operations so that a machine can comprehend the substance of digital data. These extractions capture every piece of information in the image: objects, 3D models, text descriptions, shadows, recognition, and more.
A machine requires an image to be processed in various ways to simplify, tune, and enhance its content; this is known as image processing. We do this because of the complexity inherent in the world of visuals. For example, a given image may contain an object seen from different angles, suffer from poor contrast, be distorted by variations in lighting, or be partially occluded. A successful computer vision application must be able to ‘see’ in an infinite number of sceneries and still extract something of value and meaning. Processing raw visual input therefore involves operations such as smoothing, sharpening, contrasting, or stretching an image. Image processing allows us to normalize an image's photometric properties (brightness, colours, backlight, and so on), remove noise, and scale, zoom, or crop an object of interest. These operations let us convert images into other forms of visual data and train machines for successful computer vision.
A computer interprets a picture as a 2D signal made up of a matrix of pixels. Consider a picture as a function whose values are the pixel intensities; a pixel is the smallest component of a picture. Simply put, pictures are transformed into pixel values, integers that machines use to interpret and analyse image data. An image is a 2D depiction of the visible light spectrum, represented along X and Y dimensions that correspond to pixels. To a computer, a coloured image (which has three channels: red, green, and blue) is therefore a combination of pixel values ranging from 0 to 255. During image processing, the colours and hues of pixels are changed in some way to affect an image's overall or partial look. This adjustment is made by applying filters, also referred to as kernels or masks in image processing operations. With image filters, pixels may be changed in a variety of ways: brightness, contrast, textures, tones, and other special effects. A minimal sketch of how OpenCV exposes these pixel values follows.
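The short Python/OpenCV sketch below shows what this pixel matrix looks like in code; the filename "sample.jpg" is a placeholder for any local image and is not part of the original article.

import cv2

img = cv2.imread("sample.jpg")   # placeholder filename; returns None if the file is missing
print(img.shape)                 # (height, width, 3) -- rows, columns, and the three BGR channels
print(img.dtype)                 # uint8: every pixel value lies between 0 and 255
print(img[0, 0])                 # the three channel values of the top-left pixel

Note that OpenCV stores colour channels in BGR order rather than RGB.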
Figure 1 Computer's perception of an image represented as a matrix of pixel values (Source: Stanford AI Lab)
In the early phases of processing, and in order to build effective computer vision models, the raw picture input may be distorted in a variety of ways; image filters remove these unwanted traits or highlight particular aspects of an image. Understanding a picture comes after being able to read it. How does a computer know that the image it is shown contains a mango tree? This is where machine learning plays a role. A computer can be taught to recognise a picture using machine learning techniques: by being shown many example photographs featuring a mango tree, a learning algorithm can comprehend aspects such as the shape of the tree, its leaves, colour, fruit, and so on. Image processing is a component of such training. There are many different image filtering operations; this article discusses a handful of them.
Figure 2 A pixel is characterized by its (x, y) coordinates and its value; an image is represented as a 2D array of pixels
OpenCV and Image Processing
OpenCV (Open Source Computer Vision Library), a freely available library originally developed by Intel Research, provides a large number of functions and algorithms that enable us to better understand and develop computer vision models. Image processing is a significant capability of computer vision, and the OpenCV library makes developing these capabilities much easier. It serves as a framework for many image processing and computer vision programs, and its open-source licence gives us access to a wide range of image-related processing.
The idea behind applying filters is to enable computers to build successful algorithms that replicate human vision: learning to assess, label, and respond to enormous volumes of visual input. OpenCV provides ready-made algorithms that speed up corrections, modifications, and other image processing tasks. An image can be filtered using High Pass Filters (HPF) and Low Pass Filters (LPF). An LPF is mostly used to blur an image or remove noise from it, while an HPF is used to find edges in an image. A short sketch comparing the two is shown below.
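Here is a minimal Python/OpenCV sketch of the LPF/HPF distinction; the filename "sample.jpg" and the kernel size are illustrative placeholders, and the Laplacian stands in as one common high-pass edge detector.

import cv2

img = cv2.imread("sample.jpg", cv2.IMREAD_GRAYSCALE)   # placeholder filename
low_pass = cv2.blur(img, (5, 5))                        # LPF: averaging blur suppresses noise and fine detail
high_pass = cv2.Laplacian(img, cv2.CV_64F)              # HPF: the Laplacian responds strongly at edges
cv2.imwrite("low_pass.jpg", low_pass)
cv2.imwrite("high_pass.jpg", cv2.convertScaleAbs(high_pass))  # convert back to 8-bit for saving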
Figure 3 OpenCV basic operations on images (Source: OpenCV docs)
The article that follows examines a few image filtering techniques that let users apply OpenCV's built-in image filter functions to a picture for effects such as smoothing, blurring, erosion, and dilation. These functions apply a variety of linear and non-linear mathematical operations to 2D pictures. The computation works as follows: each pixel position (x, y) in the input picture, together with its neighbouring pixels, is used to compute the output pixel. In linear filters this is a weighted sum of pixel values. The computed output is stored in the destination image at the same (x, y) location, so the output has the same size as the input picture. The calculation of an image filter function is shown in the pictures below.
Figure 4 For each pixel in the input image, the result is written at the same location in the target image
Figure 5 The above computation is repeated for every pixel in the source image to generate the filtered image (Source: Digital Image Processing: Introduction, bits)
In image processing, this computation of transforming an image by applying a kernel (filter) over each pixel and its surrounding pixels across the entire image is also known as convolution. The kernel is a matrix of values that determines the transformation effect in the convolution process; in simple words, the kernel is an array that defines the filtering action. The convolution process has three steps: first, the kernel matrix is placed over a pixel of the image and each value of the kernel is multiplied with the corresponding pixel of the image; second, the resulting values are summed to produce a new value for the centre pixel; last, this process is repeated across the entire image. A minimal code sketch of this computation follows, and the figure after it illustrates the process visually.
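The NumPy sketch below walks through this convolution step explicitly (strictly, it is correlation: the kernel is not flipped, which is the usual convention in image filtering). The 7x7 array and 3x3 averaging kernel mirror the figure; both are toy values chosen for illustration.

import numpy as np

def convolve2d(image, kernel):
    # Slide the kernel over every pixel, multiply it with the neighbourhood,
    # and sum the products to get the new centre value (zero-padded borders).
    kh, kw = kernel.shape
    pad_h, pad_w = kh // 2, kw // 2
    padded = np.pad(image, ((pad_h, pad_h), (pad_w, pad_w)), mode="constant")
    out = np.zeros_like(image, dtype=np.float64)
    for y in range(image.shape[0]):
        for x in range(image.shape[1]):
            region = padded[y:y + kh, x:x + kw]
            out[y, x] = np.sum(region * kernel)
    return out

img = np.arange(49, dtype=np.float64).reshape(7, 7)   # toy 7x7 "image"
box_kernel = np.ones((3, 3)) / 9.0                    # 3x3 averaging kernel
print(convolve2d(img, box_kernel))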
Figure 6 Convolution operation on a 7x7 image with a 3x3 kernel (Source: Basic things to Know about Convolution, @bdhuma)
"Linear and non-linear image filters are two types of image filters used in image processing. The value of an output pixel in Linear is a linear combination of the input pixel's surrounds' pixel values. The value of an output pixel in non-linear image filters is not a linear function of its input. In addition to linear and non-linear filters, we may create our own to find certain features, objects, or forms in a picture. Noise reduction is the main function of image filters. It typically works on a neighbourhood of pixels in a picture and minimises the amount of undesirable noise in that area. When filtering images in the spatial domain, the filtered picture is produced by convolving the input with the filter function. Where each input pixel's immediate surroundings are convolutioned over." (2014) Abdul R. Z. Numerous different types of filters are required to address the intricate problems of image processing. While non-linear filters may also maintain edges in addition to successfully removing noise, linear filters can effectively reduce noise. Depending on the situations and demands of image processing, a combination of filters may be necessary. Developers may use a variety of cutting-edge CV techniques for real-time image and video processing, analytics, and machine learning using the OpenCV Image Processing Toolbox. When combined with neural networks, it truly shines, enabling programmers to create cutting-edge, robust applications. The OpenCV Platform open-source library's many image filter features are examined in the section of the article below.
OpenCV - Various Image Filter Functions
Figure 7 OpenCV image filter functions
Bilateral Filter
The bilateral filter can reduce unwanted noise effectively while preserving the edges within an image: it is a non-linear, edge-preserving, de-noising smoothing filter. In the OpenCV Python bindings it is exposed as the bilateralFilter() function, which takes several parameters, e.g. cv2.bilateralFilter(src, d, sigmaColor, sigmaSpace). The images below illustrate (a) the source image and (b) the output image after applying the bilateral filter, and a short usage sketch follows them.
(a) Input Image
(b) Output Image
(Image Source: OpenCV Image Filters, Tutorialspoint)
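A minimal Python/OpenCV sketch of the bilateral filter; the filename "sample.jpg" and the parameter values (9, 75, 75) are illustrative placeholders, not values from the original article.

import cv2

img = cv2.imread("sample.jpg")                 # placeholder filename
# Arguments: d (neighbourhood diameter), then sigmaColor and sigmaSpace, which
# control how far apart in colour and in position pixels may be and still be mixed.
smoothed = cv2.bilateralFilter(img, 9, 75, 75)
cv2.imwrite("bilateral.jpg", smoothed)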
Box Filter
A box filter smooths a picture by applying an averaging mask. "The box filter is a linear filter in which each output pixel has a value equal to the average value of its surrounding pixels in the source image." In the OpenCV Python bindings it is exposed as the boxFilter() function, with syntax cv2.boxFilter(src, ddepth, ksize) and optional anchor, normalize, and borderType arguments. A short usage sketch follows the images below.
(a) Input Image
(b) Output Image
(Image Source: OpenCV Image Filters, Tutorialspoint)
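A minimal Python/OpenCV sketch of the box filter; the filename and the 5x5 kernel size are placeholders chosen for illustration.

import cv2

img = cv2.imread("sample.jpg")            # placeholder filename
# ddepth=-1 keeps the depth of the source; (5, 5) is the neighbourhood averaged over
blurred = cv2.boxFilter(img, -1, (5, 5))
cv2.imwrite("box_filter.jpg", blurred)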
SQRBox Filter
The sqrBoxFilter "calculates the normalized sum of squares of the pixel values overlapping the filter": for every pixel (x, y) in the input image, the function calculates the sum of squares of the neighbouring pixel values that overlap the filter placed over that pixel. In the OpenCV Python bindings it is exposed as the sqrBoxFilter() function, with syntax cv2.sqrBoxFilter(src, ddepth, ksize). A short usage sketch follows the images below.
(a) Input Image
(b) Output Image
(Image Source: OpenCV Image Filters, Tutorialspoint)
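A minimal Python/OpenCV sketch of sqrBoxFilter; the filename and kernel size are placeholders, and a float output depth is requested because sums of squared 8-bit values would saturate an 8-bit result.

import cv2

img = cv2.imread("sample.jpg", cv2.IMREAD_GRAYSCALE)    # placeholder filename
# cv2.CV_32F asks for a floating-point result large enough to hold squared values
local_energy = cv2.sqrBoxFilter(img, cv2.CV_32F, (5, 5))
print(local_energy.max())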
Filter2D
The Filter2D function convolves a kernel with the source picture, applying an arbitrary linear filter to the input image; the visual effect depends on the kernel supplied. Its syntax in the OpenCV Python bindings is cv2.filter2D(src, ddepth, kernel). A short usage sketch follows the images below.
(a) Input Image
(b) Output Image
(Image Source: OpenCV Image Filters, Tutorialspoint)
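A minimal Python/OpenCV sketch of filter2D; the sharpening kernel used here is a common example and, like the filename, is an illustrative choice rather than the article's own.

import cv2
import numpy as np

img = cv2.imread("sample.jpg")                      # placeholder filename
# A 3x3 sharpening kernel; filter2D applies it to every pixel neighbourhood
kernel = np.array([[0, -1, 0],
                   [-1, 5, -1],
                   [0, -1, 0]], dtype=np.float32)
sharpened = cv2.filter2D(img, -1, kernel)           # ddepth=-1 keeps the source depth
cv2.imwrite("filter2d.jpg", sharpened)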
Dilation Filter
The dilation filter adds pixels to the boundaries of objects in an image. The convolution process highlights bright regions: objects in bright shades grow in size while objects in dark shades shrink. A kernel matrix for the convolution is prepared with the getStructuringElement(shape, ksize) method, which generates a kernel of a specific shape to use in the dilation of an image. The syntax in the OpenCV Python bindings is cv2.dilate(src, kernel). A combined dilation and erosion sketch follows the erosion section below.
Erosion
The erosion filter eliminates pixels from object boundaries in a picture. It erodes/removes insignificant object edges so that significant object edges or objects may be found. The number of pixels removed from the objects of an image depends on the kernel created with a certain shape and size using getStructuringElement(shape, ksize). The syntax in the OpenCV Python bindings is cv2.erode(src, kernel).
(a) Input (b) Dilation (c) Erosion
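A minimal Python/OpenCV sketch of dilation and erosion sharing one structuring element; the filename, the rectangular 5x5 kernel, and the single iteration are illustrative choices.

import cv2

img = cv2.imread("sample.jpg", cv2.IMREAD_GRAYSCALE)   # placeholder filename
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (5, 5))
dilated = cv2.dilate(img, kernel, iterations=1)         # grows bright regions
eroded = cv2.erode(img, kernel, iterations=1)           # shrinks bright regions
cv2.imwrite("dilated.jpg", dilated)
cv2.imwrite("eroded.jpg", eroded)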
Morphological Operations
A morphological operation evaluates grayscale and colour images to perform a wide range of tasks such as noise removal, image segmentation, and edge and corner detection. Morphological operations are based on two techniques: dilation and erosion. In a grayscale image, dilation grows the foreground object boundaries, while erosion grows the background object boundaries. These operations can be combined into various image processing techniques, including edge detection: dilation thickens regions in an image while erosion shrinks them, and the difference between the two emphasises the boundaries. This is useful for sharpening and detecting edges. In the OpenCV Python bindings, morphological operations are applied with the morphologyEx() function, with syntax cv2.morphologyEx(src, op, kernel). A short sketch follows the figure below.
Figure 8 Morphological image processing operations (Source: Packt Library)
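A minimal Python/OpenCV sketch of morphologyEx; the morphological gradient (dilation minus erosion) illustrates the boundary emphasis described above, and the filename and elliptical 5x5 kernel are illustrative choices.

import cv2

img = cv2.imread("sample.jpg", cv2.IMREAD_GRAYSCALE)    # placeholder filename
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
gradient = cv2.morphologyEx(img, cv2.MORPH_GRADIENT, kernel)  # dilation minus erosion: edge emphasis
opened = cv2.morphologyEx(img, cv2.MORPH_OPEN, kernel)        # erosion then dilation: removes small bright noise
cv2.imwrite("gradient.jpg", gradient)
cv2.imwrite("opened.jpg", opened)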
Image Pyramids
The image pyramid functions let us resize the original picture. Resizing produces images of different sizes and, when downscaling, a reduction in resolution; edge detection and picture blending are further applications of the pyramid functions. When processing photos, it may be necessary to adjust the default resolution or resize the original image, either to upscale (zoom in) or downscale (zoom out) an input. Because the downscaled picture has a lower resolution than the source, upscaling it again may result in information loss. The multi-scale representation lets us locate objects in photographs at various scales. Image fusion, which reconstructs a picture from the fusion of two images across the layers of the pyramid, is another use of the image pyramid; this is also known as the Laplacian pyramid. The syntax in the OpenCV Python bindings is cv2.pyrDown(src) and cv2.pyrUp(src), with optional dstsize and borderType arguments. A short sketch follows the figure below.
Figure 9 Image pyramid layers; each layer is a downsized sample of the source image (Image Source: pyimagesearch)
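A minimal Python/OpenCV sketch of the pyramid functions; the filename is a placeholder, and the shape comparison shows why detail lost in pyrDown is not recovered by pyrUp.

import cv2

img = cv2.imread("sample.jpg")   # placeholder filename
smaller = cv2.pyrDown(img)       # roughly half the width and height (downscale, lower resolution)
larger = cv2.pyrUp(smaller)      # upsample again; fine detail removed by pyrDown is not restored
print(img.shape, smaller.shape, larger.shape)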
Gaussian Filter
The Gaussian filter is used to blur images and remove fine noise. It is a linear smoothing filter that blurs the image uniformly, including its contents and edges, and reduces contrast. While it removes noise effectively, it can also blur the edges around objects in an image. Its syntax in the OpenCV Python bindings is cv2.GaussianBlur(src, ksize, sigmaX). A short usage sketch follows.
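A minimal Python/OpenCV sketch of the Gaussian blur; the filename and the 5x5 kernel are illustrative placeholders.

import cv2

img = cv2.imread("sample.jpg")              # placeholder filename
# ksize values must be odd; sigmaX=0 lets OpenCV derive sigma from the kernel size
blurred = cv2.GaussianBlur(img, (5, 5), 0)
cv2.imwrite("gaussian.jpg", blurred)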
Median Filter
The median blur, a non-linear filter, can reduce noise successfully while keeping edges relatively sharp. Noise can occasionally carry important information, because it is a product of the environment in which the objects in a picture are located; the median filter minimises noise while keeping the image's valuable features, which is why it is also called an edge-preserving smoothing filter. Laplacian and Canny edge detection procedures, as well as the median filter, are available for preserving and detecting edges. The median filter is exposed in the OpenCV Python bindings as cv2.medianBlur(src, ksize). A short sketch comparing it with the Gaussian blur follows the figure below.
Figure 10 (a) Source image (b) Median filter (c) Gaussian blur (Source: docs.gimp.org, median blur)
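A minimal Python/OpenCV sketch comparing the median and Gaussian blurs on the same input, mirroring the figure; the filename and kernel sizes are illustrative placeholders.

import cv2

img = cv2.imread("sample.jpg")                  # placeholder filename
median = cv2.medianBlur(img, 5)                 # ksize must be an odd integer greater than 1
gaussian = cv2.GaussianBlur(img, (5, 5), 0)     # linear smoothing for comparison
cv2.imwrite("median.jpg", median)
cv2.imwrite("gaussian_compare.jpg", gaussian)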
Visual data in the real world is rarely free of noise; it may be contaminated in many ways. With filtering techniques we can remove undesirable traits or improve particular aspects of a picture, such as softening its borders or enhancing its size, shape, or colours. A variety of image filtering techniques are used to improve picture samples. A machine needs well-trained computer vision algorithms to achieve high performance, and image processing is crucial in teaching computers to perceive as people do and to interpret information accurately amid massive amounts of data. This article highlighted the image filtering methods offered by the comprehensive open-source package OpenCV.