Extract Contours with OpenCV


  • Contour Detection using OpenCV (Python/C++)
  • Python OpenCV cv2 Find Contours in Image
  • The method of extracting connected region contour by opencv
  • Finding extreme points in contours with OpenCV
  • Image Foreground Extraction using OpenCV Contour Detection
  • #006 OpenCV projects – How to detect contours and match shapes in an image

    In this article, you will learn how to carry out image foreground extraction using OpenCV's contour detection method. We will use the OpenCV computer vision library along with the contour detection technique for this.

    A Bit of Background and Introduction

    Foreground extraction is a very popular task in the field of computer vision. Using foreground extraction, we try to extract any image region or object that is of interest to us and discard the rest of the background.

    Recent deep learning based image segmentation techniques have made this much easier. But we can achieve it using pure computer vision techniques as well. GrabCut is one of the most popular computer vision based methods for image foreground extraction.

    You can find more about foreground extraction using GrabCut in this amazing post by Adrian Rosebrock. After reading that article, I thought of doing the same, but without using GrabCut. In GrabCut, we provide a rectangular area where the object of interest might be present.

    After that, the GrabCut algorithm handles the rest. So, how are we going to do that without using GrabCut?

    The idea is to use contour detection to find the largest contour in the image, which should correspond to the object of interest. After that, we can treat that region as the foreground and discard everything else as the background. Using contour detection, we can find the pixels surrounding the object that we want to extract and then proceed from there. We will see in detail how to achieve image foreground extraction using OpenCV contour detection further into this article. Not only that, we will also try to change the background of the resulting foreground to make things a little more interesting.

    So, you can expect something like the following image.

    Figure 1. An example showing image foreground extraction using the OpenCV contour detection technique. The image shows how, after extracting the person, we can also apply a new background.

    In figure 1, the top image shows the original, unedited image, which has a white background. Nothing too fancy. The middle image shows the extracted foreground, that is, only the person from the top image.

    The bottom-most image shows the result after we have merged the extracted foreground with a new, colorful background. Now, that is pretty interesting. I hope you will follow this article through to the end.

    Libraries and Dependencies

    For this tutorial, there is just one major library that we need: the OpenCV computer vision library. I have used a 4.x version. Although I recommend using the same version that I am using, you should not face any issues with other 4.x releases. Sometimes the very recent versions of opencv-python have issues with the imshow function.

    At least, I have faced such errors a few times. If you are using one of the newer versions and face issues, try installing an earlier 4.x release.

    Directory Structure

    We will be using the following directory structure for this tutorial. The input folder contains the input images that we will be using.

    There are four images in total. And finally, the outputs folder will contain the output images after we run our Python scripts. Coming to the input images, you can download them by clicking the button below.

    Download Input Files

    After downloading, just unzip the file inside your project directory and you are good to go. All the images are taken from Pixabay and are free to use. We have everything set up now. From the next section onward, we will move into the coding part of this tutorial.

    We will start with the utils.py Python file. This file contains a few utility functions that we can execute when required. We are keeping these functions separate so that our code remains as clean and readable as possible. The following code block contains the two imports that we need for the utility functions.
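    The original import block is not reproduced in this excerpt. A reasonable assumption is that the utility functions only need OpenCV and NumPy:

```python
# Assumed imports for utils.py: the utility functions sketched below
# only rely on OpenCV and NumPy.
import cv2
import numpy as np
```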

    Function to Find the Largest Contour

    The very first function that we will write finds the largest contour area in an image. The following code block contains the function definition. First, we find all the contours in the image. Then we find the largest contour among all the contours that were calculated. We can easily do this with the max function, providing the contours as an iterable and a key that computes each contour's area (cv2.contourArea).
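    The function definition itself is not included in this excerpt. A minimal sketch of the idea, with the function name and the contour retrieval mode as assumptions, looks like this:

```python
def get_largest_contour(image):
    """Return the largest contour (by area) found in a binary image."""
    # Find the contours in the thresholded image.
    # RETR_EXTERNAL (outer contours only) is an assumed retrieval mode.
    contours, hierarchy = cv2.findContours(
        image, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE
    )
    # Pick the contour with the largest area using cv2.contourArea as the key.
    largest_contour = max(contours, key=cv2.contourArea)
    return largest_contour
```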

    Writing this as a utility function is not strictly necessary, but it will reduce at least one line of code each time we need the largest contour.

    Function to Apply New Background to the Extracted Foreground Image

    You have seen in figure 1 how we can add a new background to the extracted foreground image. We may not want to do that with every foreground image, so we will write a separate function for it and call it only when we want to apply the new background. The function takes a few arguments. One is mask3d, which is the foreground image mask.

    The foreground parameter is the extracted foreground object in RGB format. The first steps are to normalize mask3d and to scale both mask3d and the foreground image accordingly. Then we read the background image, resize it to match the shape of the foreground image, and convert its data type for further operations.

    Next, the background is scaled in the same way, so that the region occupied by the foreground object is blanked out in it. Then we get the new image by adding the foreground and background images together. At last, we show the image on the screen and save it to disk.
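    The function body is not reproduced above. A sketch of the compositing logic it describes, with the function name, parameter names, and file paths all being assumptions, could look like this:

```python
def apply_new_background(mask3d, foreground, background_path='input/background.jpg'):
    """Composite the extracted foreground onto a new background image.

    All names and paths here are assumptions for illustration.
    """
    # Normalize the 3-channel mask to the range [0, 1].
    mask3d = mask3d / 255.0
    foreground = foreground.astype(float)

    # Read the new background, convert it to float for the blending
    # arithmetic, and resize it to match the foreground's shape.
    background = cv2.imread(background_path).astype(float)
    background = cv2.resize(background, (foreground.shape[1], foreground.shape[0]))

    # Keep only the masked (object) region of the foreground and the
    # complementary region of the background, then add the two together.
    foreground = cv2.multiply(mask3d, foreground)
    background = cv2.multiply(1.0 - mask3d, background)
    new_image = cv2.add(foreground, background).astype('uint8')

    # Show the composited image and save it to disk.
    cv2.imshow('New background', new_image)
    cv2.waitKey(0)
    cv2.imwrite('outputs/new_background.jpg', new_image)
```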

    We have completed all the utility functions that we need. We can now move ahead to write the code for image foreground extraction using OpenCV contour detection.

    Next, we construct an argument parser and add a -n / --new-background flag. By default, it stores the value False. Only when we pass -n or --new-background while executing the code do we call the function that applies the new background to the extracted foreground image.
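    A minimal sketch of that argument parsing; the --input argument and the utils import names are assumptions added for completeness:

```python
import argparse

import cv2
import numpy as np

# Assumed helper imports from the utils.py file discussed earlier
# (the function names are this sketch's own, not necessarily the originals).
from utils import get_largest_contour, apply_new_background

# Construct the argument parser. -n/--new-background is a simple on/off
# switch that stores False by default, as described above.
parser = argparse.ArgumentParser()
parser.add_argument('-i', '--input', default='input/image_1.jpg',
                    help='path to the input image (assumed default)')
parser.add_argument('-n', '--new-background', action='store_true',
                    help='apply a new background to the extracted foreground')
args = vars(parser.parse_args())
```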

    Reading the Image and Converting to Binary Image

    Now, we will read the image that we want to extract the foreground object from. We will also apply thresholding to convert it into a binary image containing only black and white pixels; this also removes very minor noise in the background. We first convert the image to grayscale format and then apply thresholding to obtain the binary image.
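    A sketch of this step, assuming the parser above; the threshold value of 127 is an assumption, since the value actually used is not shown in this excerpt:

```python
# Read the input image and keep an untouched copy for visualization later.
image = cv2.imread(args['input'])
orig_image = image.copy()

# Convert to grayscale, then apply binary thresholding to get a
# black-and-white image; this also removes minor background noise.
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
ret, thresh = cv2.threshold(gray, 127, 255, cv2.THRESH_BINARY)
```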

    Find the Largest Contour Area

    As we have converted the image to binary format, we can easily find all the contours in it. We call the utility function we wrote earlier, which returns the largest contour. Then we create a copy of the original image and draw that contour on it, marking the pixels in green to visualize the contour area clearly. We will see this output while executing the code.
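    Using the utility function sketched earlier, this step could look like the following (the window name is an assumption):

```python
# Find the largest contour in the binary image.
largest_contour = get_largest_contour(thresh)

# Draw it in green (BGR (0, 255, 0)) on a copy of the original image.
contour_image = orig_image.copy()
cv2.drawContours(contour_image, [largest_contour], -1, (0, 255, 0), 2)
cv2.imshow('Largest contour', contour_image)
cv2.waitKey(0)
```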

    Creating a Mask and Marking the Sure and Probable Pixels

    To do any further operations, we first have to create a new mask with a black background. This mask will have the same size as the grayscale image. As we have not resized the image, this means the mask will also be the same size as the original image.

    On this mask, we fill an area with white pixels whose shape is the same as that of the largest contour we obtained above. For example, if the largest contour is of a person, then we draw that shape on the new mask and fill it with white pixels. The next few steps are important. We first create a copy of the mask so as not to edit the original. When we created the new mask, all of its pixel values were zero, meaning the mask was all black.

    Then we filled the contour shape with white, marking those pixels as 255. This means that we know for sure that all the black pixels make up the background and all the white pixels make up the foreground, that is, the object. So, we first say that whichever pixels have a value of 0 are surely background pixels.

    Next, we mark whichever pixels have a value of 255 as probably foreground. But as all pixels are either 0 or 255, we know for sure that pixels with a value of 255 are actually foreground, so we mark the obvious foreground as well. It is very important that we carry out these steps; otherwise, none of the later processing on the new mask will work correctly.
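    The exact cv2 constants used for this marking are cut off in the text above. One plausible reconstruction, assuming the GrabCut-style labels cv2.GC_BGD, cv2.GC_PR_FGD, and cv2.GC_FGD, is:

```python
# Create a black, single-channel mask the same size as the grayscale image
# and fill the shape of the largest contour with white (255) pixels.
mask = np.zeros_like(gray)
cv2.drawContours(mask, [largest_contour], -1, 255, -1)

# Work on a copy so the filled mask itself stays unedited.
mask_copy = mask.copy()

# Label the pixels. The GC_* constants are an assumption; any three
# distinct integer labels would work the same way.
mask_copy[mask == 0] = cv2.GC_BGD        # black pixels are surely background
mask_copy[mask == 255] = cv2.GC_PR_FGD   # white pixels are probably foreground
mask_copy[mask == 255] = cv2.GC_FGD      # ...and, the mask being strictly binary, surely foreground
```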

    Creating a Final Mask with the Known Foreground and Background Pixels

    By now, we know which pixels are surely background, which pixels are probably foreground, and which pixels are surely foreground. Using this knowledge, we will create a final binary mask.
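    The excerpt ends before the final mask code. A hedged sketch of how such a final binary mask, and the mask3d/foreground pair used by the background-replacement function, might be built from the labels above:

```python
# Collapse the labels into a binary mask: sure/probable background -> 0, else -> 1.
final_mask = np.where(
    (mask_copy == cv2.GC_BGD) | (mask_copy == cv2.GC_PR_BGD), 0, 1
).astype('uint8')

# Stack it to three channels so it can mask a color image, extract the
# foreground, and optionally hand both to the background-replacement helper.
mask3d = cv2.merge([final_mask, final_mask, final_mask]) * 255
foreground = orig_image * cv2.merge([final_mask, final_mask, final_mask])

cv2.imshow('Foreground', foreground)
cv2.waitKey(0)

if args['new_background']:
    apply_new_background(mask3d, foreground)
```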

    Python OpenCV cv2 Find Contours in Image

    Applications of Contours in Computer Vision

    Some really cool applications have been built using contours for motion detection or segmentation. Here are some examples:

    Motion Detection: In surveillance video, motion detection technology has numerous applications, ranging from indoor and outdoor security environments, traffic control, and behaviour detection during sports activities, to detection of unattended objects and even video compression.

    In the figure below, see how detecting the movement of people in a video stream could be useful in a surveillance application. Notice how the people standing still on the left side of the image are not detected; only those in motion are captured. Do refer to this paper to study this approach in detail.

    An example of moving object detection, identifying the persons in motion. Note that the person standing still on the left is not detected.

    Unattended Object Detection: Any unattended object in public places is generally treated as suspicious. Contour detection is one approach that can be used to perform the required segmentation. Refer to this post for more details. The following images show a simple example of such an application: simple image foreground extraction, followed by adding a new background to the image using contour detection.

    When we join all the points on the boundary of an object, we get a contour. Typically, a specific contour refers to boundary pixels that have the same color and intensity. OpenCV makes it really easy to find and draw contours in images, providing the findContours and drawContours functions for the job. The following figure shows how these algorithms can detect the contours of simple objects.

    Comparative image: the input image and the output with contours overlaid.

    Just follow these steps:

    Read the Image and Convert It to Grayscale Format

    Read the image and convert it to grayscale. Converting the image to a single-channel grayscale image is important for thresholding, which in turn is necessary for the contour detection algorithm to work properly.
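    A sketch of this first step; the filename is an assumption:

```python
import cv2

# Read the input image (assumed filename) and convert it to grayscale.
image = cv2.imread('input/image_1.jpg')
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
```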

    Apply Binary Thresholding

    When finding contours, always apply binary thresholding or Canny edge detection to the grayscale image first. Here, we will apply binary thresholding. This converts the image to black and white, highlighting the objects of interest and making things easy for the contour-detection algorithm.

    Thresholding turns the border of the object in the image completely white, with all pixels having the same intensity. The algorithm can now detect the borders of the objects from these white pixels.

    Note: The black pixels, having value 0, are perceived as background pixels and ignored. At this point, one question may arise: what if we use a single channel, like R (red), G (green), or B (blue), instead of a grayscale, thresholded image? In that case, the contour detection algorithm will not work well. As we discussed previously, the algorithm looks for borders and similar-intensity pixels to detect contours.

    A binary image provides this information much better than a single RGB color channel. In a later portion of the blog, we show the resulting images when using only a single R, G, or B channel instead of a grayscale, thresholded image.

    Find the Contours

    Use the findContours function to detect the contours in the image. Once contours have been identified, use the drawContours function to overlay them on the original RGB image. The above steps will make much more sense, and become even clearer, once we start to code. We assume that the image is inside the input folder of the current project directory. The next step is to convert the image into single-channel grayscale format.

    Any pixel with a value greater than the chosen threshold will be set to 255 (white). All remaining pixels in the resulting image will be set to 0 (black). The threshold value is a tunable parameter, so you can experiment with it. After thresholding, visualize the binary image using the imshow function as shown below.
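    A sketch of the thresholding and visualization described above; the threshold value of 150 is only an assumption, since the exact value is not shown in this excerpt:

```python
# Apply binary thresholding: pixels above the threshold become 255 (white),
# the rest become 0 (black). The value 150 is an assumed, tunable choice.
ret, thresh = cv2.threshold(gray, 150, 255, cv2.THRESH_BINARY)

# Visualize the binary image.
cv2.imshow('Binary image', thresh)
cv2.waitKey(0)
cv2.destroyAllWindows()
```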

    It is a binary representation of the original RGB image. You can clearly see how the pen, the borders of the tablet and the phone are all white. The contour algorithm will consider these as objects, and find the contour points around the borders of these white objects.

    Note how the background is completely black, including the backside of the phone. Such regions will be ignored by the algorithm. Taking the white pixels around the perimeter of each object as similar-intensity pixels, the algorithm will join them to form a contour based on a similarity measure.

    The resulting binary image after applying the threshold function.

    Start with the findContours function. It has three required arguments: the input binary image, the contour retrieval mode, and the contour approximation method.
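    For example (the retrieval mode and approximation method shown here are common choices and treated as assumptions; both are discussed further below):

```python
# Detect contours on the binary image.
#   thresh                 - the thresholded, single-channel input image
#   cv2.RETR_TREE          - retrieval mode: build the full contour hierarchy
#   cv2.CHAIN_APPROX_NONE  - approximation: keep every boundary point
contours, hierarchy = cv2.findContours(thresh, cv2.RETR_TREE, cv2.CHAIN_APPROX_NONE)
```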

    For optional arguments, please refer to the documentation page. More contour retrieval modes and approximation methods are available; we will discuss both in more detail below. It is easy to visualize and understand the results from different methods on the same image.

    In the code samples below, we therefore make a copy of the original image and demonstrate the methods on that copy, so as not to edit the original. Next, use the drawContours function to overlay the contours on the RGB image. This function has four required arguments and several optional ones.

    The first four arguments (the image, the list of contours, the contour index, and the color) are required. For the optional arguments, please refer to the documentation page. With the contour index argument, you can specify the position in the contour list of the exact contour you want to draw; providing a negative value draws all the contours. We are drawing the contours in green. Finally, we detect the contours on the binary image, draw them, and save the result to disk.
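    Putting the detection and drawing together (window and output filenames are assumptions):

```python
# Draw all detected contours (index -1) in green, 2 pixels thick,
# on a copy of the original image; then display and save the result.
image_copy = image.copy()
cv2.drawContours(image_copy, contours, -1, (0, 255, 0), 2)

cv2.imshow('Contours', image_copy)
cv2.waitKey(0)
cv2.imwrite('outputs/contours_image.jpg', image_copy)
cv2.destroyAllWindows()
```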

    The following figure shows the original image on the left, as well as the image with the contours overlaid on the right.

    The original image and the image with contours drawn on it.

    As you can see in the above figure, the contours produced by the algorithm do a nice job of identifying the boundary of each object. However, if you look closely at the phone, you will find that it contains more than one contour. Separate contours have been identified for the circular areas associated with the camera lens and light.

    Keep in mind that the accuracy and quality of the contour algorithm are heavily dependent on the quality of the binary image that is supplied (look at the binary image in the previous section again, and you can see the lines associated with these secondary contours). Some applications require high-quality contours. In such cases, experiment with different thresholds when creating the binary image and see if that improves the resulting contours.

    There are other approaches that can be used to eliminate unwanted contours from the binary maps prior to contour generation. You can also use the more advanced features associated with the contour algorithm that we discuss here.

    Using a Single Channel: Red, Green, or Blue

    Just to get an idea, the following are some results when detecting contours using the red, green, and blue channels separately. We discussed this in the contour detection steps previously.

    Contour detection results when using the blue, green, and red single channels instead of a grayscale, thresholded image.

    In the above image, we can see that the contour detection algorithm is not able to find the contours properly. This is because it is not able to detect the borders of the objects properly, and the intensity difference between the pixels is not well defined. This is the reason we prefer a grayscale, binary-thresholded image for detecting contours.

    Another contour approximation method, CHAIN_APPROX_SIMPLE, compresses straight segments instead of storing every boundary point as CHAIN_APPROX_NONE does. This means that any points along straight paths are dismissed, and we are left with only the end points. For example, consider a contour along a rectangle.

    All the contour points except the four corner points will be dismissed. The following image shows the results. If you simply draw the contours with drawContours, however, the output looks the same as before. Why is that? The credit goes to the drawContours function, which joins the stored points with line segments, hiding the fact that fewer points were kept. To see the effect, the most straightforward way is to loop over the contour points manually and draw a circle at each detected coordinate using OpenCV. Also, we use a different image that will actually help us visualize the results of the algorithm. Almost everything is the same as in the previous code example, except the two additional for loops and some variable names.
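    A sketch of that variation, with a different (assumed) input image, an assumed threshold, and the two extra loops described next:

```python
# Detect contours with CHAIN_APPROX_SIMPLE, which keeps only the end points
# of straight segments.
image2 = cv2.imread('input/image_2.jpg')
gray2 = cv2.cvtColor(image2, cv2.COLOR_BGR2GRAY)
ret, thresh2 = cv2.threshold(gray2, 150, 255, cv2.THRESH_BINARY)
contours2, hierarchy2 = cv2.findContours(thresh2, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)

image_copy2 = image2.copy()
# Outer loop: every contour; inner loop: every stored point of that contour.
for contour in contours2:
    for point in contour:
        x, y = point[0]  # each point is stored with shape (1, 2)
        cv2.circle(image_copy2, (int(x), int(y)), 2, (0, 255, 0), -1)

cv2.imshow('CHAIN_APPROX_SIMPLE points', image_copy2)
cv2.waitKey(0)
cv2.imwrite('outputs/contour_points_simple.jpg', image_copy2)
cv2.destroyAllWindows()
```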

    The first for loop cycles over each contour in the contours list. The second loops over each of the coordinates in that contour. We then draw a green circle on each coordinate point, using the circle function from OpenCV. Finally, we visualize the results and save them to disk. The vertical and horizontal straight lines of the book are completely ignored. Observe the output image, on the right-hand side of the above figure: the vertical and horizontal sides of the book contain only four points, at the corners of the book.

    Also observe that the letters and the bird are indicated with discrete points and not line segments.

    Contour Hierarchies

    Hierarchies denote the parent-child relationship between contours.

    Finding extreme points in contours with OpenCV

    In the remainder of this blog post, I am going to demonstrate how to find the extreme north, south, east, and west (x, y)-coordinates along a contour, like in the image at the top of this blog post.

    By computing the extreme points along the hand, we can better approximate the palm region (highlighted as a blue circle):

    Figure 2: Using extreme points along the hand allows us to approximate the center of the palm.

    This in turn allows us to recognize gestures, such as the number of fingers we are holding up:

    Figure 3: Finding extreme points along a contour with OpenCV plays a pivotal role in hand gesture recognition.

    Our goal is to compute the extreme north, south, east, and west (x, y)-coordinates along the contour of the hand in the image. We load our example image from disk, convert it to grayscale, and blur it slightly. We then perform thresholding, allowing us to segment the hand region from the rest of the image. After thresholding, our binary image looks like this:

    Figure 5: Our image after thresholding.
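    The code itself is not reproduced in this excerpt. A sketch of the usual approach, with the filename and threshold value as assumptions, using the argmin/argmax trick on the largest contour:

```python
import cv2

# Load the example image (assumed filename), convert to grayscale,
# and blur slightly to reduce high-frequency noise before thresholding.
image = cv2.imread('hand.png')
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
gray = cv2.GaussianBlur(gray, (5, 5), 0)

# Threshold to segment the hand from the background (45 is an assumed value),
# then grab the largest contour, which should correspond to the hand.
thresh = cv2.threshold(gray, 45, 255, cv2.THRESH_BINARY)[1]
contours, _ = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
c = max(contours, key=cv2.contourArea)

# The extreme points are simply the contour points with the smallest/largest
# x-coordinate (west/east) and y-coordinate (north/south).
ext_left = tuple(c[c[:, :, 0].argmin()][0])
ext_right = tuple(c[c[:, :, 0].argmax()][0])
ext_top = tuple(c[c[:, :, 1].argmin()][0])
ext_bot = tuple(c[c[:, :, 1].argmax()][0])

# Visualize the extreme points as colored dots on the original image.
cv2.circle(image, ext_left, 8, (0, 0, 255), -1)    # west  - red
cv2.circle(image, ext_right, 8, (0, 255, 0), -1)   # east  - green
cv2.circle(image, ext_top, 8, (255, 0, 0), -1)     # north - blue
cv2.circle(image, ext_bot, 8, (255, 255, 0), -1)   # south - cyan
cv2.imshow('Extreme points', image)
cv2.waitKey(0)
```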


