How Does Computer Vision Work?
Computer vision mimics how human eyes and brains work to both identify and process images. The processing components of computer vision are:
- Image acquisition
- Image processing
- Image analysis and interpretation
Image Acquisition
This is the process of transforming the analog world into the digital world: the real world is translated into binary data and interpreted as digital images. There are different tools for creating these datasets, including digital compact cameras and DSLRs, webcams and embedded cameras, and consumer 3D cameras. Usually, the data collected by these devices need to be post-processed so that they can be exploited efficiently in the steps that follow.
Image Processing
Image processing involves the initial, low-level processing of images. Algorithms deduce low-level information about parts of the image from the binary data obtained during image acquisition. This kind of information is captured by point features, segments, or image edges.
Image processing draws on advanced techniques and applied mathematics, and includes the following steps:
1. Edge Detection
In image processing, edge detection is used to identify the boundaries of objects in an image. An edge is a curve that follows a path of rapid change in image intensity. Edges are usually associated with the boundaries of objects in an image. Finding edges assists not only in detecting objects but also in correctly interpreting more complex scenes where objects may overlap. Edge detection methods include the Canny, Roberts, and fuzzy logic methods.
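The idea of detecting rapid changes in intensity can be sketched with a gradient-based filter. This is a minimal illustration, not one of the named methods: it applies the classic Sobel kernels (a common precursor step in Canny edge detection) to a tiny synthetic image using NumPy; all names and the test image are assumptions for illustration.

```python
import numpy as np

def sobel_edges(img):
    """Return the gradient magnitude of a 2-D grayscale image using Sobel kernels.

    High magnitude marks pixels where intensity changes rapidly, i.e. edges.
    """
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)  # horizontal gradient
    ky = kx.T                                                          # vertical gradient
    h, w = img.shape
    padded = np.pad(img.astype(float), 1, mode="edge")
    gx = np.zeros((h, w))
    gy = np.zeros((h, w))
    # Cross-correlate the 3x3 kernels with the image.
    for i in range(3):
        for j in range(3):
            win = padded[i:i + h, j:j + w]
            gx += kx[i, j] * win
            gy += ky[i, j] * win
    return np.hypot(gx, gy)

# A tiny image with a vertical step edge between columns 1 and 2.
img = np.array([[0, 0, 255, 255]] * 4)
mag = sobel_edges(img)  # magnitude is large only at the step columns
```

In a real pipeline a library routine (e.g., an off-the-shelf Canny implementation) would be used instead of a hand-rolled filter, but the principle, responding to rapid intensity change, is the same.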
2. Segmentation
This is the process of splitting an image into several parts. Segmentation is used to recognize objects or other relevant information in digital images. An image is partitioned into distinct regions, each containing pixels with similar characteristics. Segmentation builds on low-level processing to transform the image into one or more high-level representations that the computer can further analyze. Segmentation methods include:
- Thresholding methods
- Color-based segmentation
- Transform methods
- Texture methods
3. Image Classification
Image classification is the process of assigning a class label to the pixels in a digital image. There are two major techniques for classifying images:
- Supervised method
- Unsupervised method
a. Supervised image classification
In supervised image classification, information classes are gathered from the image. These are referred to as training sites. The image processing software then uses the training sites and applies them to the whole image.
Supervised image classification uses the spectral signatures identified from the training sites to classify the image. An image is classified according to what it resembles most in the training set. Basically, supervised image classification involves the following three steps:
i. Selecting training sites
ii. Generating a signature file
iii. Classifying the image
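The three steps above can be sketched with a minimum-distance classifier, one of the classifiers listed later in this section. This is an illustrative NumPy sketch, not a specific software package's workflow; the band values, class labels, and function names are assumptions: training-site pixels yield a per-class mean "signature", and each image pixel is then assigned to the class it most resembles.

```python
import numpy as np

def train_signatures(pixels, labels):
    """Steps i-ii: build a 'signature file' as the mean value of each
    class's training-site pixels."""
    return {c: pixels[labels == c].mean(axis=0) for c in np.unique(labels)}

def classify(image_pixels, signatures):
    """Step iii: assign each pixel to the class whose signature it is
    closest to (minimum-distance rule)."""
    classes = list(signatures)
    means = np.stack([signatures[c] for c in classes])  # (n_classes, n_bands)
    dists = np.linalg.norm(image_pixels[:, None, :] - means[None], axis=2)
    return np.array(classes)[dists.argmin(axis=1)]

# Hypothetical two-band pixels from labeled training sites:
# class 0 ("water") is dark, class 1 ("vegetation") is bright.
train_pixels = np.array([[10., 20.], [12., 22.], [200., 180.], [198., 182.]])
train_labels = np.array([0, 0, 1, 1])

sig = train_signatures(train_pixels, train_labels)
pred = classify(np.array([[11., 21.], [199., 181.]]), sig)  # → [0, 1]
```

Maximum likelihood classification replaces the distance rule with a per-class probability model, but the train-then-classify structure is identical.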
b. Unsupervised Classification
Unsupervised image classification involves analyzing groups of pixels and categorizing them according to computed groupings of their image values. Compared with supervised image classification, the unsupervised method does not need analyst intervention (i.e., user intervention). The basic logic in unsupervised image classification is that pixel values within a specific cover type should have similar gray levels, whereas data in different classes should have different gray levels.
Unsupervised image classification steps are:
i. Generating clusters
ii. Assigning classes
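The two steps can be sketched with a simple k-means clustering of gray levels, a common way to generate the clusters; the analyst would then name each cluster in step ii. This is an illustrative NumPy sketch under assumed names and data, with a deterministic quantile-based initialization rather than any particular package's default.

```python
import numpy as np

def kmeans_pixels(values, k, iters=20):
    """Step i: cluster 1-D pixel gray levels into k groups with k-means.

    Returns per-pixel cluster labels and the final cluster centers;
    assigning meaningful class names to those clusters is step ii.
    """
    # Initialize centers at evenly spaced quantiles of the data
    # (a simple deterministic choice for illustration).
    centers = np.quantile(values, np.linspace(0, 1, k))
    for _ in range(iters):
        # Assign each pixel to its nearest center.
        labels = np.abs(values[:, None] - centers[None, :]).argmin(axis=1)
        # Move each center to the mean of its assigned pixels.
        for c in range(k):
            if np.any(labels == c):
                centers[c] = values[labels == c].mean()
    return labels, centers

# Gray levels from a scene with two cover types: a dark one and a bright one.
values = np.array([10., 12., 11., 200., 205., 198.])
labels, centers = kmeans_pixels(values, k=2)  # dark and bright pixels separate
```

No training sites are needed: the grouping emerges from the pixel values themselves, which is exactly what distinguishes this method from the supervised one.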
Other image classifications include:
- Object-oriented image classification
- Parallelepiped classification
- Maximum likelihood classification
- Minimum distance classification
4. Detection and Matching of Features
The process of detecting and matching features is divided into three steps:
i. Detection: Interesting or easily-matched feature points from each image are identified.
ii. Description: The local appearance of every feature point is described in a way that is invariant under changes in scale, translation, in-plane rotation and illumination. We end up with a descriptor vector for every feature point.
iii. Matching: To identify similar features, the descriptors are compared across all images.
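The matching step can be sketched as a nearest-neighbor search over the descriptor vectors from step ii. The snippet below is an illustrative NumPy sketch with made-up 4-D descriptors (real descriptors such as SIFT's are 128-D); it also applies the widely used ratio test, which keeps a match only when the nearest descriptor is clearly closer than the second nearest.

```python
import numpy as np

def match_descriptors(desc_a, desc_b, ratio=0.8):
    """Match each descriptor in desc_a to its nearest neighbor in desc_b.

    A match is kept only if the nearest distance is less than `ratio`
    times the second-nearest distance (the ratio test), which discards
    ambiguous matches.
    """
    matches = []
    for i, d in enumerate(desc_a):
        dists = np.linalg.norm(desc_b - d, axis=1)
        order = np.argsort(dists)
        if len(order) > 1 and dists[order[0]] < ratio * dists[order[1]]:
            matches.append((i, int(order[0])))
    return matches

# Hypothetical descriptor vectors for feature points in two images.
a = np.array([[1., 0., 0., 0.],
              [0., 1., 0., 0.]])
b = np.array([[0., 1., 0.1, 0.],   # close to a[1]
              [1., 0.1, 0., 0.],   # close to a[0]
              [5., 5., 5., 5.]])   # matches nothing
matches = match_descriptors(a, b)  # → [(0, 1), (1, 0)]
```

Because the descriptors from step ii are invariant to scale, translation, rotation, and illumination, the same physical point tends to produce nearby descriptor vectors in both images, which is what makes this simple distance comparison work.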
Image Analysis and Interpretation
Image analysis and interpretation is the last step in computer vision. It involves analyzing the data produced by the previous steps in order to make decisions, for example in drone navigation, or in Facebook's suggestions of which friends you might want to tag in a newly uploaded image. This final step of image analysis and understanding applies high-level algorithms.