• W(s,b): the continuous wavelet transform coefficient of f(x), where s is the scale (the inverse of frequency) and b is the positional shift variable.
• ψ₀*: the complex conjugate of the wavelet ψ₀, called the analytic wavelet function.
Equation (1.34) shows that the wavelet transform maps a function f(x) of one variable to a function W(s,b) of two variables: the scale variable s and the displacement variable b. The coefficient
1/√s in (1.34) ensures that the wavelet is normalized in the same way at every decomposition scale s, so that all scaled wavelets have the same norm: ‖(1/√s) ψ₀((x−b)/s)‖ = ‖ψ₀‖.
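For reference, equation (1.34) in its usual form for the continuous wavelet transform, reconstructed here from the definitions above (the exact notation in the source may differ slightly):

```latex
W(s,b) = \frac{1}{\sqrt{s}} \int_{-\infty}^{+\infty} f(x)\,
         \psi_0^{*}\!\left(\frac{x-b}{s}\right)\,\mathrm{d}x
```

The 1/√s factor in front of the integral is exactly the normalization discussed above.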
Some common wavelet functions are the Haar wavelet, the Daubechies4 wavelet, the Symlets4 wavelet, etc. In this thesis, I only consider the Haar wavelet, as follows:
Haar Wavelet Transform:
The Haar Wavelet Transform is the simplest of the wavelet transforms. Figure 1.9 below shows the form of the function ψ(t) of the Haar transform on the t axis. Because of its simplicity, the Haar transform is widely used in image compression. When this transform is applied to compress images, the compression algorithm implemented on a computer differs somewhat from the mathematical formula of the Haar transform:

Figure 1.9: Function ψ (t) of the Haar transform
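A one-level Haar transform can be sketched as pairwise normalized averages (the approximation, or low-pass part) and differences (the detail, or high-pass part); this is a generic illustration, not the exact compression algorithm referred to above:

```python
import numpy as np

def haar_1d(signal):
    """One level of the Haar wavelet transform: pairwise
    normalized averages (approximation) and differences (detail)."""
    x = np.asarray(signal, dtype=float)
    assert x.size % 2 == 0, "signal length must be even"
    approx = (x[0::2] + x[1::2]) / np.sqrt(2)  # low-pass: scaled averages
    detail = (x[0::2] - x[1::2]) / np.sqrt(2)  # high-pass: scaled differences
    return approx, detail

a, d = haar_1d([4, 2, 6, 8])
# approx = [6, 14]/sqrt(2), detail = [2, -2]/sqrt(2)
```

Applying the same step recursively to the approximation part gives the multi-level decomposition used in compression.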
1.2.5. Classification
Classification is the decisive step in the recognition process; all the preceding steps serve to separate the samples successfully. Classification can be understood as the conversion of quantitative input data into qualitative output data. The output of classification can be a discrete choice of one class from the defined classes, or a real-valued vector whose components express how strongly the sample is assumed to belong to each of the corresponding classes.
Classification algorithms fall mainly into two groups: statistical methods and syntactic methods. Using artificial neural networks (ANN), as in [18], or support vector machines (SVM), as in [12], [13], [14], [15], is a somewhat different approach, although mechanically it also works on the features of the object.
The classifier can send feedback to the feature extractor and the selector so that the errors of these two stages can be corrected.
1.2.6. Image recognition and interpretation
1.2.6.1. General introduction to identification
When observing a photo, in addition to perceiving size and color, the observer also draws some cognitive meaning from the objects in it. Processing therefore does not stop at improving the quality of the photo and storing it; a further step automatically identifies the objects in the photo to extract the information they contain. Image recognition can be considered the final stage of the image processing pipeline. Put simply, this task assigns names to the objects in the image. Image recognition is a special case of pattern recognition, so here we consider the principles of pattern recognition as applied to image recognition.
According to [4], image recognition is the process of identifying an image, usually by comparing it with a previously learned (or stored) standard template. Interpretation is a judgment of meaning based on recognition.
For example, a series of digits and dashes on an envelope can be interpreted as a postal code.
There are many different ways to classify images. According to recognition theory [3], images are divided, by their mathematical models, into two basic types of image recognition:
- Parameter identification.
- Structural identification.
Also according to [3] , the nature of the identification process includes 3 main stages:
- Select object representation model.
- Select the decision rules (recognition methods) and derive the learning process.
- Learning recognition.
Once the object representation model has been determined, whether quantitative (parametric model) or qualitative (structural model), the recognition process moves to the learning phase. Learning is a very important phase; it aims to improve and adjust the partitioning of the object set into classes.
Some of the most popular recognition objects currently being applied in science and technology are: character recognition, text recognition, fingerprint recognition, barcode recognition, human face recognition, flower recognition, pet recognition, etc.
The image recognition process can be performed through the following steps: image data collection, preprocessing, analysis, standardization, feature extraction and classification.
1.2.6.2. Image identification method
In this method, the sample is represented in numerical form, and the classification procedure arranges these numerical values into classes.
Pattern Classification Technique
There are two types of classification: supervised pattern classification and unsupervised pattern classification. For the pattern recognition method used here, supervised classification techniques deserve the most attention.
As discussed in the feature extraction step, the features of an object are represented by numerical values, and these values are treated as the components of the sample representation vector. When a set of standard samples is put into the system, feature extraction creates standard sample vectors distributed in the sample space. Each standard sample vector has already been mapped to the interpretation space, i.e. we know its name. Thus the standard sample vectors can be completely divided into classes, each corresponding to a name. These classes are called standard classes.
A class will actually occupy some portion of the sample space, and the region of a class is often called a cluster. In reality, the sample space is not always perfectly separable, and clusters may overlap.
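As an illustration of classes occupying regions (clusters) of the sample space, a minimal nearest-centroid rule assigns a new sample to the class whose cluster center is closest; the 2-D feature vectors, centroid values, and class names below are hypothetical:

```python
import numpy as np

def nearest_centroid(sample, centroids):
    """Assign a feature vector to the class whose cluster centroid
    is closest in the sample space (Euclidean distance)."""
    sample = np.asarray(sample, dtype=float)
    labels = list(centroids.keys())
    dists = [np.linalg.norm(sample - np.asarray(centroids[k])) for k in labels]
    return labels[int(np.argmin(dists))]

# hypothetical standard classes: centroids learned from labelled samples
centroids = {"bubbles": np.array([0.8, 0.6]),
             "no_bubbles": np.array([0.2, 0.1])}
print(nearest_centroid([0.7, 0.5], centroids))  # closest to "bubbles"
```

When clusters overlap, a single distance rule like this misclassifies samples in the overlap region, which is why more powerful classifiers are considered later.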
Structure recognition method
This approach represents pattern features by their structure and is, alongside the numerical methods, one of the pattern recognition methods. While the numerical method assigns meaning to individual patterns, the structural method treats complex objects as compositions of primitives and the relationships between them. The relational features between primitives are processed through a list of decisions, similar to the image analyzer of the human brain. Modeling such a process on a computer is difficult, so the structural procedure is less popular than the numerical one.
Chapter 2: RESEARCH ON METHODS TO SOLVE THE PROBLEM OF AIR BUBBLE IDENTIFICATION
2.1. FORMULATION OF AIR BUBBLE IDENTIFICATION PROBLEM
The goal is to monitor the aeration process in the microbial tanks of water environment monitoring units through surveillance cameras; the problem is how to check whether aeration is taking place regularly. It is therefore necessary to distinguish images with air bubbles (aeration on) from images without air bubbles (aeration off) by recognizing image frames extracted from the surveillance cameras. To solve this problem, I studied digital image processing, digital image processing methods, machine learning algorithms, classification training and image recognition, as introduced in Chapter 1 and in this chapter, and formulated the air bubble recognition problem in the following 3 phases:
- Phase 1: Image processing.
- Phase 2: Training process – sample classification.
- Phase 3: Image recognition process.
The problem can be described and solved by the following model:
[Figure 2.1 diagram: image data acquisition → digitalization → pre-processing; training data → classification training → model after training; identification data → identification → identification result]
Figure 2.1: General model for the bubble identification problem.
2.1.1. Image processing
Image acquisition
Bubble image data is collected from video clips extracted from the surveillance cameras of the environmental monitoring unit's microbiological tanks. The clips are then separated into two sets (one with bubbles and one without) to facilitate image processing and analysis for training the model. The recognition stage can take images, video clips, or frames extracted directly from the surveillance camera.
Digitalizer
Images captured by a camera are usually digital signals from a CCD (Charge Coupled Device) sensor, but can also be analog signals from a CCIR-standard tube camera; the latter must be converted to discrete signals and digitized by quantization before the processing, analysis or storage stage.
Bubble images in environmental monitoring units are mostly captured via IP cameras, which often use CMOS or CCD sensors to capture images, digitize, process and encode, then transmit digital signals via Ethernet cable to a computer or a network storage device NVR (Network Video Recorder).
Image processing
Due to various causes (the image acquisition device, the light source, interference on the tank surface, or other sources of noise), the captured image may have low contrast or be degraded. Therefore, pre-processing is needed at this stage to improve image quality. The operations to be performed are:
- Noise filtering to smooth the image.
- Color-space conversion to grayscale.
In this thesis, feature extraction uses the Entropy measure to quantify the uncertainty of whether bubbles are present in an image, and the Canny operator to find image edges, so converting the image to gray levels is sufficient preparation for feature extraction.
Next comes normalization, which reduces the parameters affected by noise in the transformation (i.e. reduces the data to a common form in which feature extraction can be performed correctly), as introduced in the feature extraction section below.
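A minimal sketch of the two pre-processing operations listed above (grayscale conversion and noise smoothing), using plain NumPy; the luminance weights and the 3x3 mean filter are common choices, not necessarily the exact ones used in the thesis:

```python
import numpy as np

def to_gray(rgb):
    """Convert an H x W x 3 RGB image (values in [0, 1])
    to grayscale using the standard luminance weights."""
    return rgb[..., 0] * 0.299 + rgb[..., 1] * 0.587 + rgb[..., 2] * 0.114

def mean_filter3(img):
    """3x3 mean filter for noise smoothing (edges padded by replication)."""
    p = np.pad(img, 1, mode="edge")
    h, w = img.shape
    return sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3)) / 9.0

rgb = np.random.rand(4, 4, 3)        # stand-in for a captured frame
gray = mean_filter3(to_gray(rgb))    # smoothed grayscale image
```

In practice, a median filter may be preferred over the mean filter when the noise is impulsive ("salt and pepper"), since it preserves edges better.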
Image analysis and feature extraction
In image feature analysis and extraction, the goal is to find pixel feature regions that serve correct classification well. Feature extraction represents patterns by the features of the image objects, helping to distinguish different sample classes; it also transforms properties inherent in the image objects or introduced by the acquisition devices. Some applicable methods are: the PCA method [20], morphology, the Entropy measure, the Canny operator, etc.
For training a good classifier, the image feature extraction step plays a very important role. The image features here are content features, obtained by analyzing the actual content of the image frames. Image content is represented by color, shape, texture, local features, or any other information from the image content itself. To solve the bubble image recognition problem, drawing on basic knowledge of image processing and image analysis techniques, this thesis uses feature extraction in two directions, presented in the feature extraction section below.
Save training data (Image write):
It is the act of saving the processed data and analyzing the feature extraction in the above steps to serve the task of training sample classification (called training data).
2.1.2. Sample classification training process
From the data saved in the above processing, the data needs to be labeled and trained to classify samples.
Training:
Perform image classification and training. This is the decisive step for the bubble recognition problem; the processing aims to separate the image samples successfully. Classification is also the process of converting quantitative input data into qualitative output data. The output of the classification is a real-valued vector representing how strongly the sample can be recognized as belonging to each of the corresponding classes. In this bubble recognition problem, the images must be classified into 2 classes (class 1: images with bubbles; class 2: images without bubbles). Machine learning algorithms such as artificial neural networks (ANN) or support vector machines (SVM) can be applied, so choosing a classification method suited to the problem and the available data is extremely important. This issue is presented in the image classification section below.
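As a sketch of this two-class training step, a support vector machine could be trained on per-frame feature vectors, assuming scikit-learn is available; the feature values and their meaning here are hypothetical stand-ins for the extracted image features:

```python
import numpy as np
from sklearn.svm import SVC

# hypothetical feature vectors per frame, e.g. (mean entropy, edge density)
X = np.array([[0.90, 0.80], [0.80, 0.70], [0.85, 0.90],   # class 1: bubbles
              [0.10, 0.20], [0.20, 0.10], [0.15, 0.20]])  # class 0: no bubbles
y = np.array([1, 1, 1, 0, 0, 0])

clf = SVC(kernel="rbf").fit(X, y)   # train the two-class classifier
pred = clf.predict([[0.88, 0.75]])  # feature vector of a new frame
```

With well-separated clusters like these, the RBF kernel assigns the new frame to the bubble class; real camera data is noisier, which is why the feature extraction stage matters so much.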
Save the model after training (Train Model):
After training, the training model (Model) needs to be saved for future image recognition.
2.1.3. Identification process
After training and saving the model, the next task is to use it to evaluate sample frames, taken directly from the surveillance cameras of the biological tanks or from video clips extracted from them, in order to predict which frames contain air bubbles and which do not, thereby determining whether the biological tank is being aerated. This is done in the following stages:
Identification data:
Data from the camera or video clips extracted from the camera or a set of images that need to be tested. This identification data can go through digitization, pre-processing and image feature extraction stages and be compared to the trained classification model for prediction and identification.
Identification:
This is done by loading data from the cameras of the microbiological tanks, from video clips extracted from the cameras, or from any other images, and comparing it with the trained model to determine which class the sample belongs to (the class with air bubbles or the class without) and output the identification information.
Identification results:
As a result of the prediction process, the image frame with air bubbles and the image frame without air bubbles from the surveillance camera can be identified to determine the case of aerated tanks and the case of non-aerated tanks. Finally, the identification information can be exported to a text file.
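The identification stage above could be sketched as follows; the `ThresholdModel` class and the feature vectors are hypothetical stand-ins for the trained classifier and the extracted frame features:

```python
import numpy as np

def identify_frames(model, feature_vectors):
    """Run the trained classifier over a batch of frame feature
    vectors and report the aeration status of each frame."""
    preds = model.predict(np.asarray(feature_vectors))
    return ["bubbles (aerated)" if p == 1 else "no bubbles (not aerated)"
            for p in preds]

class ThresholdModel:
    """Hypothetical stand-in for the saved trained model."""
    def predict(self, X):
        return (np.asarray(X).mean(axis=1) > 0.5).astype(int)

lines = identify_frames(ThresholdModel(), [[0.9, 0.8], [0.1, 0.2]])
# the identification results can then be exported, e.g.:
# with open("results.txt", "w") as f: f.write("\n".join(lines))
```

Any model object exposing a `predict()` method (such as a saved SVM) can be plugged into the same loop.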
2.2. FEATURE EXTRACTION PROBLEM FOR AIR BUBBLE IMAGES
Feature extraction for images means finding pixel feature regions that serve correct classification well. It represents samples by the features of the image objects, helping to distinguish different sample classes. In this bubble/no-bubble recognition problem, given the nature of the images and data taken from the surveillance camera, I tested extraction with a number of methods on the collected data and decided to pursue feature extraction in the following two directions:
2.2.1. Using Entropy combined with Fuzzylogic and Wavelet
In this approach, feature extraction is performed by determining the Entropy measure for pixels to determine the uncertainty of pixels that are likely to be air bubbles. At the same time, Fuzzy Logic is used to remove pixels that are unclearly air bubbles. Then, the Haar Wavelet Transform is used to reduce the size of the image data to a small form that still retains enough important information for the pixels.
- First: determine the Entropy measure of each pixel, quantifying the uncertainty of whether it is likely to be an air bubble. In this study, the Entropy measure is computed per pixel over each set of images. I chose sets of 100 images, as shown in Figure (2.2) below, for the following reason: the camera reads about 8 frames per second, and the microorganisms will die after about 15 to 16 seconds if the tank lacks oxygen (no aeration). A set of 100 images therefore covers about 12.5 seconds, just enough to detect an unaerated tank in time.
Figure 2.2: Representation of Entropy calculation of image pixels through a set of images.
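A possible sketch of the per-pixel Entropy computation over a stack of frames, assuming each pixel is first binarized into "likely bubble" vs not; the threshold and the binary-entropy formulation are assumptions, since the exact formula is not given here:

```python
import numpy as np

def pixel_entropy(stack, threshold=0.5):
    """Per-pixel Shannon entropy over a stack of N frames.
    Each pixel is binarized (bright = possible bubble); p is the
    fraction of frames in which the pixel is 'on', and
    H = -p*log2(p) - (1-p)*log2(1-p) measures its uncertainty."""
    p = (np.asarray(stack) > threshold).mean(axis=0)  # p per pixel
    with np.errstate(divide="ignore", invalid="ignore"):
        h = -(p * np.log2(p) + (1 - p) * np.log2(1 - p))
    return np.nan_to_num(h)  # pixels with p = 0 or p = 1 have entropy 0

# hypothetical stack: 100 grayscale frames of a 4x4 region
stack = np.random.rand(100, 4, 4)
H = pixel_entropy(stack)  # values in [0, 1]; near 1 = most uncertain pixel
```

Pixels that flicker between bright and dark across the 100 frames (as bubbles do) get high entropy, while static background pixels get entropy near 0.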
- Second: Use Fuzzy Logic with the use of an Activation function to remove pixels that are unclear whether they are air bubbles or not.
Figure 2.3: Illustration of the Activation function.
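The activation function in Figure 2.3 appears to map values to ±1 around a cut-off ∝; a hypothetical sign-threshold version could look like the following (the cut-off value itself is an assumption, as it is not specified numerically in the text):

```python
import numpy as np

ALPHA = 0.3  # hypothetical cut-off; the thesis does not give a numeric value

def activation(h, alpha=ALPHA):
    """Hypothetical sign-threshold activation suggested by Figure 2.3:
    values above alpha map to 1 (keep: likely bubble pixel),
    values at or below alpha map to -1 (remove: unclear pixel)."""
    return np.where(np.asarray(h) > alpha, 1, -1)

print(activation([0.05, 0.4, 0.9]))  # first pixel removed, others kept
```

Applied to the per-pixel entropy map, this removes pixels whose uncertainty is too low to be a useful bubble indicator.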
- Third: use the Haar Wavelet transform to reduce the image data to a smaller form, as illustrated in Figure (2.4), while still retaining the important pixel information; reducing the data in this way also makes training faster.

Figure 2.4: Function ψ (t) of the Haar transform
After feature extraction, the processed data set is saved. The extraction determines the Entropy value of each pixel to establish the reliability of that pixel containing air bubbles, applies Fuzzy Logic with an activation function to remove pixels that are unclear (the case where the Entropy is close to 0), and reduces the data with the Haar Wavelet transform. This data then serves training to produce the best possible classification model for air bubble recognition.
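The data-reduction step can be sketched as one 2-D Haar approximation step, replacing each 2x2 block by its average so that each image dimension is halved; using the plain average rather than the scaled Haar coefficient is a simplification for illustration:

```python
import numpy as np

def haar_reduce(img):
    """One Haar approximation step in 2-D: each 2x2 block is
    replaced by its average, halving both image dimensions while
    keeping the coarse information used for training."""
    img = np.asarray(img, dtype=float)
    h, w = img.shape
    return img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

small = haar_reduce(np.arange(16).reshape(4, 4))  # 4x4 image becomes 2x2
```

Each application quarters the amount of data; the step can be repeated until the desired training size is reached.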
2.2.2. Using the method of finding image edges
In this direction, feature extraction uses gradient-based edge finding techniques (operators such as Roberts, Prewitt, Sobel, and Canny), based on the maximum and minimum values of the first derivative of the image. The results are then compared and evaluated, and the most suitable edge-finding method is chosen to produce images with the best edge quality for training the classifier.
After feature extraction, the processed data set is saved here; the edge-finding method experimentally evaluated as better than the others is used to obtain the best image features, serving the training of the best possible classification model for air bubble recognition.
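As one concrete instance of the gradient-based edge finding described above, a NumPy-only Sobel edge detector thresholds the magnitude of the first derivative; Canny would add Gaussian smoothing, non-maximum suppression and hysteresis on top of such gradients. The threshold and the test image here are purely illustrative:

```python
import numpy as np

SX = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]])  # Sobel kernel, x direction
SY = SX.T                                            # Sobel kernel, y direction

def conv2_valid(img, k):
    """Naive 'valid' 2-D correlation with a 3x3 kernel."""
    h, w = img.shape
    out = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            out[i, j] = np.sum(img[i:i + 3, j:j + 3] * k)
    return out

def sobel_edges(img, thresh=1.0):
    """Gradient-based edge map: first-derivative magnitude,
    thresholded to a binary edge image."""
    gx, gy = conv2_valid(img, SX), conv2_valid(img, SY)
    return (np.hypot(gx, gy) > thresh).astype(np.uint8)

# vertical step edge: left half dark, right half bright
img = np.hstack([np.zeros((5, 3)), np.ones((5, 3))])
edges = sobel_edges(img)  # edge pixels appear along the step
```

Swapping SX/SY for Roberts or Prewitt kernels, or replacing the threshold with Canny's hysteresis stage, gives the other operators mentioned above for comparison.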