CN112633090A - Method for identifying mould and dust in concentrated wind system of robot vision and application - Google Patents

Method for identifying mould and dust in concentrated wind system of robot vision and application

Info

Publication number
CN112633090A
CN112633090A
Authority
CN
China
Prior art keywords
dust
image
robot
mould
mold
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011440546.0A
Other languages
Chinese (zh)
Inventor
曾令杰
高军
张承全
侯玉梅
贺廉洁
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tongji University
Original Assignee
Tongji University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tongji University filed Critical Tongji University
Priority to CN202011440546.0A
Publication of CN112633090A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/10 Terrestrial scenes
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/20 Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F 16/28 Databases characterised by their database models, e.g. relational or object models
    • G06F 16/284 Relational databases
    • G06F 16/285 Clustering or classification
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/24 Classification techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/20 Image preprocessing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/20 Image preprocessing
    • G06V 10/30 Noise filtering
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 2201/00 Indexing scheme relating to image or video recognition or understanding
    • G06V 2201/07 Target detection

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Databases & Information Systems (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Molecular Biology (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • Evolutionary Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Manipulator (AREA)
  • Image Analysis (AREA)

Abstract

A method for identifying mold and dust in a centralized ventilation system based on robot vision: a mobile pipeline robot carrying a high-definition camera collects a panoramic video stream inside the ventilation system; multiple algorithms are combined to perform illumination reduction, noise elimination and image enhancement on the image frames containing mold and dust; and the mold and dust image features are identified through the construction of a deep neural network. On the application side, a big-data analysis method is applied to the video data sampled during periodic inspections to reveal the positions inside the air duct where dust accumulation and mold are most likely to occur, providing a basis for the periodic, targeted cleaning of the ventilation system. Existing detection of mold and dust in centralized ventilation systems relies mainly on biological sampling and culture, which is time-consuming and can analyze only certain parts of the system; in contrast, the method of the invention greatly improves detection efficiency and supports the establishment of a visual diagnosis and quantitative evaluation method for the internal environment of the centralized ventilation system.

Description

Method for identifying mould and dust in concentrated wind system of robot vision and application
Technical Field
The invention belongs to the technical field of computer image recognition, and relates to an object detection and classification method based on robot vision, in particular to a method for identifying mould and dust in a concentrated wind system based on robot vision.
Background
China's public building floor area exceeds 12.8 billion square meters (2020 data), and more than 20 percent of it is equipped with centralized air-conditioning ventilation systems. Such a system mainly comprises fresh-air inlets, supply and exhaust fans, filters, ventilation ducts and the like; it controls and adjusts the temperature and humidity of indoor air, provides fresh and clean air, removes indoor pollutants, and creates a comfortable and healthy indoor environment. On the one hand, air delivered to the indoor environment inevitably comes into contact with the air-conditioning ventilation system, which may alter the air's original composition. On the other hand, normal operation of the system can also change the composition of indoor air, both through the state of the system itself and through its operating parameter settings. The former mainly means that components present inside the ventilation system directly affect indoor air, for example when dust particles and microorganisms in the system (such as dust-associated mold and bacteria) enter the room with the airflow. The latter includes not only the influence of parameters such as the air-change rate and the airflow organization on indoor air, but also the fact that the whole system can act as a propagation channel (an air circulation loop formed by supply and return air) carrying pollutants already present in the air (such as pathogenic bacteria) from one indoor environment to many others, causing health problems for indoor occupants over a wider area.
With COVID-19 raging, and while the mechanism by which viral microorganisms spread through centralized air-conditioning ventilation systems has not yet been elucidated, research on methods and related technologies for the visual diagnosis and quantitative evaluation of the internal environment of centralized ventilation systems is of great significance.
Visual identification of visible contaminants (mold and dust) by robots inside centralized ventilation systems requires the assistance of image recognition technology. Image recognition uses computers to process, analyze and understand images in order to identify targets and objects of various kinds, and it plays an important role in intelligent data acquisition and processing centered on images. It can effectively handle problems such as the detection and recognition of specific target objects, the classification and labeling of images, and subjective image quality assessment.
Early image recognition systems mainly adopted feature extraction methods such as the scale-invariant feature transform and the histogram of oriented gradients, and then fed the extracted features into a classifier for classification and recognition. Systems of that period were generally aimed at one specific recognition task, worked with small data sets, generalized poorly, and found it difficult to achieve accurate recognition in practical applications. Subsequently, deep learning models based on convolutional neural networks achieved very significant performance gains on large-scale image classification tasks and set off a wave of deep learning research. Machine vision built on deep neural networks rapidly expanded into image classification at the scale of tens of millions of images, autonomous driving, fine-grained face recognition, human pose estimation and other fields. Deep learning uses the basic structures of the convolutional neural network, such as convolutional layers, pooling layers and fully connected layers, so that the network can learn, extract and exploit relevant features by itself; this removes the tedious hand-crafted modeling process. In machine vision work related to the present invention, image recognition has been applied to the automated detection of rice mold fungi and to the classification and screening of biological colony morphology, but no studies have related machine vision to the visualization of the internal environment of a building's centralized ventilation system, which approximates a "black box".
Disclosure of Invention
The invention aims to provide a method for identifying mould and dust in a concentrated wind system based on robot vision. The introduction of robot vision realizes the detection of the internal environment of the concentrated wind system which cannot be monitored (relatively closed) at ordinary times, provides basis for the regular and directional disinfection and cleaning of the concentrated wind system, and ensures the health of indoor personnel.
In order to achieve the purpose, the invention adopts the technical scheme that:
the problem of collection of panoramic video streams inside a ventilation system is solved through a movable pipeline robot carrying a high-definition camera, illumination reduction, noise elimination, image enhancement and the like are carried out on a picture frame containing mould and dust through combination of various algorithms, and accurate classification of mould and dust image features is achieved through construction of a deep neural network. The method comprises the following steps:
(1) establishing a database through images in the wind pipeline shot by the robot, and researching visual characteristics of two typical visible pollutants, namely mould and dust in the wind pipeline; (2) introducing a self-adaptive algorithm to enhance an image, denoising a Gaussian filtered image and restoring the wiener filtering of a relative motion blurred image of a pipeline to preprocess a video stream in the actual detection process; (3) constructing a deep learning target detection algorithm, determining key parameters such as a frame, hidden layers, step length and the like of a convolutional neural network, and extracting image characteristics of mould and dust in an air pipeline by using the network; (4) establishing a database of mould and dust generation positions and coverage areas of a centralized ventilation system based on visual images of mould and dust and position information of the robot during video sampling; (5) according to video sampling data of regular inspection, a k-means cluster analysis method is used for revealing where dust and mold are easy to generate in the wind pipeline from a big data layer.
Further, the method comprises the following steps:
(1) a database is built through images in the wind pipeline shot by a robot, and the visual characteristics of two typical visible pollutants, namely mould and dust deposit, in the wind pipeline are researched:
(1.1) The robot is a crawler-type robot, and the robot vision system is an image acquisition module mounted on the pan-tilt head of the robot body.
(1.2) The image acquisition module is an industrial CCD camera, which offers high sensitivity, low distortion, small size and vibration resistance; because lighting conditions inside a centralized ventilation system are poor, an LED combined light source is also provided.
The robot and all devices mounted thereon are selected and combined as required by those skilled in the art.
(1.3) Mold and dust images with different visual characteristics are acquired through robot vision experiments in an actual ventilation system (different angles, different light sources, different brightness levels) as the main data source of the initial visual-feature database, and are supplemented with rotated, blurred and noise-added copies.
(1.4) The database can be further expanded with the mold and dust images captured by the robot image acquisition system during practical application, so that the number of collected images reaches 5000 or more.
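The rotation, blur and noise supplementation described in steps (1.3)-(1.4) can be sketched with plain numpy. This is only an illustrative sketch: the 90-degree rotation, the 3×3 box blur and the noise level below are assumptions, not values taken from the patent.

```python
import numpy as np

def augment(img, rng):
    """Produce rotated, blurred and noisy variants of one grayscale
    image (2-D float array in [0, 1]), as in step (1.3)."""
    rotated = np.rot90(img)                          # 90-degree rotation
    p = np.pad(img, 1, mode="edge")                  # 3x3 box blur
    blurred = sum(p[i:i + img.shape[0], j:j + img.shape[1]]
                  for i in range(3) for j in range(3)) / 9.0
    noisy = np.clip(img + rng.normal(0.0, 0.05, img.shape), 0.0, 1.0)
    return rotated, blurred, noisy

rng = np.random.default_rng(0)
img = rng.random((64, 64))
rotated, blurred, noisy = augment(img, rng)
```

In practice each source image would yield several such variants, which is one way the database can be grown toward the 5000-image target.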
(2) An adaptive algorithm for image enhancement, Gaussian filtering for image denoising, and wiener filtering for restoring images blurred by motion relative to the duct are introduced to preprocess the video stream.
(2.1) The adaptive image enhancement algorithm (Retinex) mainly addresses the uneven image illumination caused by the LED light source used for lighting. When incident light strikes a reflecting object, the reflected light enters the human eye and forms an image; this process can be expressed as:
H(x,y)=I(x,y)·R(x,y) (1)
wherein x and y are coordinates of pixel points of the two-dimensional image respectively; h (x, y) represents an original image obtained upon entering the human eye; i (x, y) represents a luminance image, representing the dynamic range of the image pixels; r (x, y) represents a reflectance image, reflecting the intrinsic properties of the image.
(2.2) The adaptive Retinex algorithm performs a Gaussian convolution operation on the original image and takes the resulting first-order filtered image, weighted by an adaptive correction coefficient, as the illumination component of the current image I′(x,y):
I′(x,y) = r₁(x,y)·[G(x,y)*H(x,y)] (2)
where I′(x,y) represents the luminance image, characterizing the dynamic range of the image pixels, and r₁(x,y) is the adaptive correction coefficient.
The reflection component is then obtained using the following formulas; the reflection image is the enhanced image.
R_Retinex(x,y) = log H(x,y) − log[G(x,y)*H(x,y)] (3)
G(x,y) = (1/(2πc²))·exp(−(x²+y²)/(2c²)) (4)
where R_Retinex represents the image enhanced by the adaptive Retinex algorithm; "*" is the convolution operator; G(x,y) is the Gaussian filter function; and c is its standard deviation.
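A minimal single-scale sketch of the enhancement in (2.2), following eq. (3): the illumination is estimated by Gaussian convolution and removed in the log domain. The adaptive correction coefficient r₁ is omitted here for simplicity, and the kernel radius and the rescaling to [0, 1] are implementation assumptions.

```python
import numpy as np

def gaussian_kernel1d(c, radius):
    """Samples of a 1-D Gaussian with standard deviation c (cf. eq. 4),
    normalised so the weights sum to 1."""
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x ** 2 / (2 * c ** 2))
    return k / k.sum()

def gaussian_blur(img, c):
    """Separable Gaussian convolution G(x,y)*H(x,y)."""
    k = gaussian_kernel1d(c, radius=int(3 * c))
    rows = np.apply_along_axis(lambda v: np.convolve(v, k, mode="same"), 1, img)
    return np.apply_along_axis(lambda v: np.convolve(v, k, mode="same"), 0, rows)

def retinex(img, c=3.0, eps=1e-6):
    """Single-scale Retinex per eq. (3): R = log H - log(G*H),
    rescaled to [0, 1] for display."""
    h = img.astype(float) + eps
    r = np.log(h) - np.log(gaussian_blur(h, c) + eps)
    return (r - r.min()) / (r.max() - r.min() + eps)

rng = np.random.default_rng(0)
scene = rng.random((64, 64))
lit = scene * np.linspace(0.2, 1.0, 64)    # simulated uneven LED lighting
enhanced = retinex(lit, c=3.0)
```

The left-to-right illumination gradient applied to `lit` is largely removed in `enhanced`, which is the point of using the reflectance component as the output.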
(2.3) The noise in captured images is handled with a Gaussian filtering algorithm. The basic steps are:
1) For the image matrix, substitute the distance from each pixel in the neighborhood to the neighborhood center into the two-dimensional Gaussian function to compute a Gaussian template; 2) if the template elements are fractional, normalize them (conventionally by scaling the template so that its top-left element becomes 1); 3) align the center of the Gaussian template with each position of the image matrix to be processed, multiply corresponding elements and sum the weighted products, filling regions where the template extends past the image with zeros; 4) repeat steps 1-3 over every element of the image matrix; the output matrix is the Gaussian-filtered, denoised result.
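The template construction and zero-padded sliding-window filtering of steps 1)-4) can be sketched as follows. The template size and σ are illustrative choices, and the template is normalised to sum to 1 (rather than to an integer form) so the filter preserves overall brightness.

```python
import numpy as np

def gaussian_template(size, sigma):
    """Steps 1-2: evaluate the 2-D Gaussian at each offset from the
    neighbourhood centre and normalise the weights to sum to 1."""
    r = size // 2
    y, x = np.mgrid[-r:r + 1, -r:r + 1]
    g = np.exp(-(x ** 2 + y ** 2) / (2 * sigma ** 2))
    return g / g.sum()

def filter2d_zero_pad(img, tpl):
    """Steps 3-4: align the template centre with each pixel, multiply
    and sum; border regions outside the image are filled with zeros."""
    r = tpl.shape[0] // 2
    p = np.pad(img, r, mode="constant")              # zero padding
    out = np.zeros(img.shape, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(p[i:i + 2 * r + 1, j:j + 2 * r + 1] * tpl)
    return out

tpl = gaussian_template(3, sigma=1.0)
delta = np.zeros((7, 7)); delta[3, 3] = 1.0          # unit impulse
out = filter2d_zero_pad(delta, tpl)                  # spreads it into the template
```

Filtering a unit impulse reproduces the (symmetric) template around its location, which is a quick sanity check that the window alignment in step 3 is correct.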
(2.4) For the image blur caused by the robot's motion, which must be handled when preprocessing the captured frames, the blurred image is restored with a wiener filter. The governing equation of the algorithm is:
ε² = E[(f − f′)²] → min (5)
where f is the image before degradation; f′ is the image restored by wiener filtering; and E[·] denotes expectation, so that ε² is the mean square error. The Fourier-transform formula of the restored image is:
F′(u,v) = H*(u,v)·Y(u,v) / [ |H(u,v)|² + S_η(u,v)/S_f(u,v) ] (6)
where F′(u,v) is the Fourier transform of f′; Y(u,v) is the Fourier transform of the observed blurred image; S_η(u,v) is the power spectrum of the noise; S_f(u,v) is the power spectrum of the original image; and H(u,v) is the Fourier transform of the degradation function, with
|H(u,v)|² = H*(u,v)·H(u,v) (7)
where H*(u,v) is the complex conjugate of H(u,v); the restored image f′ is then obtained by an inverse Fourier transform.
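A minimal numpy sketch of the frequency-domain Wiener restoration described in (2.4). Since the noise and signal power spectra are not specified in the text, their ratio is approximated here by a constant K, a common simplification; the horizontal motion-blur kernel is likewise an illustrative stand-in for the robot's motion blur.

```python
import numpy as np

def wiener_deconvolve(blurred, psf, K=0.01):
    """Wiener restoration in the spirit of eq. (6):
    F' = H* . Y / (|H|^2 + K), with K standing in for the
    noise-to-signal spectrum ratio S_eta/S_f."""
    H = np.fft.fft2(psf)                   # transform of the degradation
    Y = np.fft.fft2(blurred)               # transform of the observed image
    F = np.conj(H) * Y / (np.abs(H) ** 2 + K)
    return np.real(np.fft.ifft2(F))

def motion_psf(shape, length):
    """Horizontal motion-blur kernel, same size as the image."""
    psf = np.zeros(shape)
    psf[0, :length] = 1.0 / length
    return psf

rng = np.random.default_rng(1)
img = rng.random((64, 64))
psf = motion_psf(img.shape, 5)
# simulate the degradation by circular convolution, then restore
blurred = np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(psf)))
restored = wiener_deconvolve(blurred, psf, K=1e-4)
```

With a small K the restoration is close to the original except at frequencies where |H| is nearly zero, which is exactly the ill-conditioning the S_η/S_f term in eq. (6) regularizes.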
(3) A deep-learning target detection algorithm is constructed; the framework, hidden layers, step sizes and other key parameters of the convolutional neural network are determined; and the network is used to extract the image features of mold and dust in the air duct.
(3.1) the classified retrieval of the wind pipeline images is realized by searching frame pictures containing mould or dust in a video stream shot by the robot image acquisition module and adopting an image feature classification algorithm based on a deep convolutional neural network.
(3.2) the convolutional neural network for image classification in the wind pipeline is an AlexNet network, and the network comprises 23 layers in total, and mainly comprises an input layer, a convolutional layer, an activation function layer, a pooling layer, a full-connection layer and an output layer.
(3.3) The input to the input layer is the preprocessed air-duct image, of size 224 × 224 × 3.
Neurons in a convolutional layer are connected to a local region of the preceding input or pooling layer, and the inner product of their weights with that region is computed; there are 5 such convolutional layers. The activation function layers apply an activation operation to each element to speed up convergence, one per convolutional layer, for 5 layers in total. The pooling layers perform aggregation statistics on features at different positions within local regions of the image, again one per convolutional layer, for 5 layers in total. Finally there are 3 fully connected layers with their corresponding activation layers, and the output layer performs regression with Softmax.
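The spatial dimensions produced by a given kernel size and stride can be checked with the standard output-size formula for convolution and pooling layers. The helper below is a sketch; the embodiment does not state any padding, so zero padding is assumed (note the well-known detail that an 11×11 kernel with stride 4 yields 54 from a 224-pixel input but 55 from 227).

```python
def conv_out(size, kernel, stride, padding=0):
    """Spatial output size of a convolution or pooling layer:
    floor((size - kernel + 2*padding) / stride) + 1."""
    return (size - kernel + 2 * padding) // stride + 1

# conv1 of the embodiment: 224x224 input, 11x11 kernel, stride 4
s = conv_out(224, 11, 4)      # 54 with zero padding; a 227 input gives 55
s = conv_out(s, 3, 2)         # pooling layer 1: window 3, stride 2
```

Chaining the helper through all five convolution/pooling stages is a quick way to verify that the stated kernel sizes and strides produce valid (positive, integral) feature-map sizes.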
(4) A database of the mold and dust generation positions and coverage areas in the centralized ventilation system is established from the visual images of mold and dust and the robot's position information at the time of video sampling. The database is built on a MySQL platform.
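A sketch of what the step (4) database might look like. The column names are illustrative assumptions, not taken from the patent, and sqlite3 is used here only as a self-contained stand-in for the MySQL platform named in the text.

```python
import sqlite3

# Hypothetical schema: one row per detected contamination event.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE contamination (
        id INTEGER PRIMARY KEY,
        inspected_at TEXT,       -- inspection timestamp
        duct_position_m REAL,    -- robot position along the duct
        contaminant TEXT,        -- 'mold' or 'dust'
        coverage_m2 REAL         -- estimated coverage area
    )""")
conn.execute("INSERT INTO contamination VALUES (?, ?, ?, ?, ?)",
             (1, "2020-12-01T10:00", 12.5, "mold", 0.08))
rows = conn.execute("SELECT contaminant, coverage_m2 "
                    "FROM contamination").fetchall()
```

The (position, contaminant, coverage) rows accumulated over periodic inspections are exactly the input needed for the clustering analysis of step (5).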
(5) According to video sampling data of regular inspection, a k-means cluster analysis method is used for revealing where dust and mold are easy to generate in the wind pipeline from a big data layer.
(5.1) the k-means cluster analysis method is a common big data analysis method, and the realization process is as follows:
1) Given a data set of size n, take the positions of mold and accumulated dust identified in each robot inspection as the data to be clustered. Set the iteration counter O = 1 and randomly select k cluster centers Z_j(O), j = 1, 2, ..., k, where O indexes the iteration round.
2) Compute the distance D(x_i, Z_j(O)), i = 1, 2, ..., n, from each sample data object x_i to each cluster center, and assign each sample to its nearest center.
3) Let O = O + 1, recompute the cluster centers, and evaluate the objective function under the sum-of-squared-error criterion:
J_c(O) = Σ_{j=1}^{k} Σ_{x∈C_j} ||x − Z_j(O)||² (8)
where C_j is the set of samples currently assigned to center j.
4) If |J_c(O+1) − J_c(O)| < Threshold, or the cluster assignments no longer change, clustering is finished; otherwise return to step 2).
(5.2) k-means clustering has a corresponding function module in Matlab, which can be conveniently realized, and only the input database needs to be recorded into a matrix form commonly used by Matlab.
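The steps above can also be sketched directly in numpy (rather than Matlab's built-in `kmeans`). This is a minimal sketch: the empty-cluster guard and the optional deterministic initialisation are implementation choices of the sketch, not part of the patent's description.

```python
import numpy as np

def kmeans(X, k, threshold=1e-6, max_iter=100, init=None, seed=0):
    """k-means following steps 1)-4): choose initial centres, assign
    each sample to its nearest centre, update the centres, and stop
    when the SSE objective J_c (eq. 8) changes by less than Threshold."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(X), k, replace=False) if init is None else init
    centres = X[np.asarray(idx)].astype(float)
    prev_J = np.inf
    for _ in range(max_iter):
        # step 2: distance from every sample to every centre
        d = np.linalg.norm(X[:, None, :] - centres[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # step 3: new centres (kept unchanged if a cluster empties)
        centres = np.array([X[labels == j].mean(axis=0)
                            if np.any(labels == j) else centres[j]
                            for j in range(k)])
        J = ((X - centres[labels]) ** 2).sum()      # eq. (8)
        if abs(prev_J - J) < threshold:             # step 4
            break
        prev_J = J
    return labels, centres

# two tight groups of duct positions, e.g. near a bend and near an outlet
X = np.array([[0.0, 0], [0, 1], [1, 0], [1, 1],
              [10, 10], [10, 11], [11, 10], [11, 11]])
labels, centres = kmeans(X, 2, init=[0, 4])   # deterministic init for the demo
```

The resulting cluster centres are the candidate "easy to generate" locations that step (5) reports for targeted cleaning.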
Compared with the prior art, the invention has the following beneficial effects:
the invention discloses a technical idea of patrolling a central air-conditioning ventilation system by a pipeline type robot carrying a machine vision system, firstly applies a target detection algorithm based on deep learning and a plurality of image optimization algorithms (a self-adaptive image enhancement algorithm, a Gaussian filter algorithm and a wiener filter algorithm) to the visual feature extraction of mould and dust accumulation in a low-lighting air pipeline, and introduces a big data analysis method to reveal where dust accumulation or mould is easy to breed in the system. The problems that dust deposition, mold microorganisms and the like in the conventional ventilation system mainly depend on multi-point sampling and waste time and labor are solved, so that related technicians in the field can evaluate the internal environment of the centralized ventilation system in a visual and quantitative mode, and a basis is provided for regular and directional cleaning of the ventilation system.
Drawings
FIG. 1 is a flow chart of the method of the present invention.
Fig. 2 is a schematic view of a robot to be equipped with an image recognition system.
Fig. 3 is an AlexNet deep neural network structure.
FIG. 4 is a schematic view showing the effect of identifying mold and dust in the air duct.
FIG. 5 is a schematic view of a location of mold and dust accumulation in a centralized ventilation system.
Fig. 6 is a schematic diagram of a robot vision experiment.
Detailed Description
The invention is described in detail below with reference to the figures and specific embodiments. The embodiment is implemented based on the technical scheme of the invention, and comprises a detailed implementation mode and a specific operation process.
S101, a flow chart of the method for identifying the mold and the dust in the wind concentration system based on the robot vision is shown in fig. 1, and the flow is mainly divided into 5 stages.
The method for identifying the mold and the dust in the concentrated wind system based on the robot vision comprises the following steps:
s102, the wind pipeline inspection robot carrying the vision system has the general structure as shown in figure 1, and mainly comprises a robot body, an image acquisition module and a control module. Wherein the robot body comprises tracked robot, support and the cloud platform that has locate function, and image acquisition module then is located the cloud platform, through the adjustable image acquisition module height of support to the requirement is patrolled and examined to the wind pipeline that adapts to different pipe diameters. The robot body adopts a track driving mode and is matched with a direct-current speed reduction driving motor, the stress area of the robot and the lower surface of a wind pipeline is enlarged, friction is increased, and the robot has stronger obstacle crossing capability in the pipeline (can cross a pipe section connection with an included angle of two pipe sections being more than or equal to 110 degrees) under the condition that a vehicle body stably travels.
S103, the robot camera pan-tilt head provides rotational degrees of freedom about the x and y axes; through control of a gear reduction motor and a synchronous belt, the image acquisition module can pitch and roll, greatly enlarging the camera's shooting range. The camera carried by the robot is an industrial CCD camera, which offers high sensitivity, low distortion, small size and vibration resistance; because lighting conditions inside a centralized ventilation system are poor, an LED combined light source is also fitted. When the robot works in the closed air duct, the control module drives the body, the CCD camera collects panoramic images inside the duct in real time, and the collected images and sensor data are recorded by the camera and the storage device; after the robot finishes the inspection and is retrieved, the measured images and data are analyzed offline.
S104, obtaining the mould and dust images with different visual characteristics as main data sources of an initial database through a robot visual experiment (different angles, different light sources and different brightness degrees) in an actual ventilation system, and supplementing the rotated, blurred and noise-added images. In future engineering application, the mould and dust database can be further expanded according to mould and dust images shot by the robot image acquisition systems in different centralized ventilation systems, and preferably, the number of the expanded images is more than or equal to 5000.
S105, the problem of uneven image illumination caused by LED light source auxiliary imaging under the low-lighting condition is solved by introducing a self-adaptive Retinex algorithm, a Gaussian filter algorithm is constructed to remove image noise, a wiener filter is adopted to restore a blurred image formed in the moving process of the robot, and a complete set of method for preprocessing the visual image in the low-lighting wind pipeline is formed.
S106, the convolutional neural network used for classifying the images inside the air duct has a slightly complex structure; it is an AlexNet network comprising 23 layers in total: an input layer, convolutional layers, activation function layers, pooling layers, fully connected layers and an output layer. In this embodiment, the video frames captured by the robot are resized to a uniform 224 × 224 × 3 input. Convolutional layer 1 applies 96 kernels of size 11 × 11 × 3 with stride 4 to the duct image, followed by activation layer 1 and pooling layer 1 (window size 3, stride 2), which pools the features of the previous layer. Convolutional layer 2 applies 256 kernels of size 5 × 5 × 96 with stride 1 to the previous layer, followed by activation layer 2 and pooling layer 2 (window size 3, stride 2). Convolutional layer 3 applies 384 kernels of size 3 × 3 × 256 with stride 1, followed by activation layer 3 and pooling layer 3 (window size 3, stride 2). Convolutional layer 4 applies 384 kernels of size 3 × 3 × 192 with stride 1, followed by activation layer 4 and pooling layer 4. Convolutional layer 5 applies 256 kernels of size 3 × 3 × 192 with stride 1, followed by activation layer 5 and pooling layer 5. Finally there are 3 fully connected layers with their corresponding activation layers, and the output layer performs regression with Softmax. The whole deep neural network structure is shown in fig. 3. The effect of identifying mold and dust in the air duct after image preprocessing and deep-learning feature extraction is shown in fig. 4, where the numbers represent the mold and dust coverage area estimated from the specific surface area of the duct and the visual window.
S107, according to the visual images of the mold and the dust and the position information of the robot during video sampling, a database of mold and dust generation positions and coverage areas of the centralized ventilation system can be established, and the data can be used as a basis for directionally disinfecting and cleaning the centralized ventilation system in the current state. Based on video sampling data of regular inspection, a k-means cluster analysis method is used for revealing where dust and mold are prone to being generated inside the wind pipeline from a big data layer, and fig. 5 is a schematic diagram of positions of the wind pipeline system where mold and dust are prone to being generated.
S108, before the method is put into practical use, the effectiveness of the robot vision system in identifying mold and dust under different system operating conditions needs to be tested experimentally. Since mold samples are difficult to prepare, and dust deposition is usually associated with mold in ventilation systems, the experiment in this embodiment examined only the effectiveness of the deep-vision image-feature extraction algorithm, using dust deposition created by a dust generator while the ventilation system was running. The generated dust is drawn into the supply airflow through a guide pipe. In the accumulation stage, some air outlets are first closed and the air speed is adjusted so that the dust forms visually observable deposits at various positions in the ventilation system. The robot is then placed into a reserved maintenance opening of the system, inspects along a planned path, and captures a panoramic video stream inside the duct. Image frames containing dust, extracted at different angles, light sources and brightness levels, are used as a training set for the deep neural network. The ventilation system and the dust generator are then operated again for a period of time, and the dust-containing image frames captured during this second inspection are used as a test set to verify recognition accuracy. In this way, the initial construction of the dust image-feature database and the verification of the deep-vision algorithm's effectiveness are achieved at the same time. The robot vision experiment is shown in fig. 6.
The method therefore addresses two difficulties of centralized ventilation systems: their internal environment cannot normally be visualized, and conventional quantitative monitoring methods are of limited use. It turns the centralized ventilation system from a black box into a white box, provides a basis for its scheduled, targeted cleaning and disinfection, and thus safeguards the health of indoor occupants.
The embodiments described above are intended to help those skilled in the art understand and use the invention. Those skilled in the art can readily make various modifications to these embodiments and apply the general principles described herein to other embodiments without inventive effort. The invention is therefore not limited to the above embodiments; improvements and modifications made by those skilled in the art on the basis of this disclosure fall within the scope of the invention.

Claims (14)

1. A method for identifying mould and dust in a concentrated wind system by robot vision, characterized in that: a movable pipeline robot carrying a high-definition camera collects a panoramic video stream of the interior of the ventilation system; illumination correction, noise elimination, and image enhancement are applied to image frames containing mould and dust by combining several algorithms; and the image features of mould and dust are identified by constructing a deep neural network.
2. The method for identifying mold and dust in a concentrated wind system based on robot vision as claimed in claim 1, comprising the steps of:
1) determining the visual characteristics of mould and dust, the two typical visible pollutants in the air ducts;
2) preprocessing the panoramic video stream of the duct interior captured by the CCD camera carried by the inspection robot, including adaptive-algorithm image enhancement, Gaussian-filter image denoising, and wiener-filter restoration of images blurred by relative motion in the pipeline;
3) constructing a deep-learning target detection algorithm, determining the key parameters of the convolutional neural network such as the framework, hidden layers, and stride, and using the network to extract the image features of mould and dust in the air ducts.
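By way of illustration only (not part of the claims), the adaptive image-enhancement step in 2) belongs to the Retinex family. A minimal single-scale Retinex sketch, without the adaptive correction coefficient and using a synthetic unevenly lit frame (both assumptions of this example), is:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def single_scale_retinex(image, sigma):
    """R(x, y) = log H(x, y) - log[G(x, y) * H(x, y)] with a Gaussian surround G.
    A small epsilon keeps the logarithm well-defined on dark pixels."""
    eps = 1e-6
    illumination = gaussian_filter(image, sigma)   # G * H, the low-pass estimate
    return np.log(image + eps) - np.log(illumination + eps)

# Synthetic duct frame: a bright deposit spot on an unevenly lit background,
# as under a one-sided LED light source.
y, x = np.mgrid[0:64, 0:64]
uneven = 0.3 + 0.4 * (x / 63.0)                          # illumination gradient
frame = np.clip(uneven + (np.hypot(x - 40, y - 20) < 5) * 0.3, 0.0, 1.0)
enhanced = single_scale_retinex(frame, sigma=15)
```

Subtracting the log of the blurred illumination estimate flattens the lighting gradient while local contrast (the deposit spot) survives, which is the point of the enhancement step.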
3. The method for identifying mould and dust in a concentrated wind system by robot vision according to claim 1, characterized in that: the robot is a crawler-type robot, and the robot vision system is an image acquisition module mounted on a pan-tilt head on the robot body.
4. The method for identifying mould and dust in a concentrated wind system by robot vision according to claim 2, characterized in that step 1) comprises acquiring mould and dust images with different visual characteristics through robot vision experiments in an actual ventilation system as the main data source of the initial visual-feature database, supplemented with rotated, blurred, and noisy images.
5. The method for identifying mould and dust in a concentrated wind system by robot vision according to claim 4, characterized in that: the robot vision experiments cover different angles, different light sources, and different brightness levels.
6. The method for identifying mold and dust in a concentrated wind system based on robot vision according to claim 4, wherein the method comprises the following steps: the initial visual characteristic database is expanded according to the mould and dust images shot by the robot image acquisition system in the actual scene.
7. The method for identifying mold and dust in a concentrated wind system based on robot vision of claim 6, wherein: the expansion means that the number of the collected images is more than or equal to 5000.
8. The robot vision identification method for mold and dust in the concentrated wind system according to claim 2, wherein the step 2) comprises:
(2.1) to correct the uneven image illumination produced by the LED light source under low lighting, an adaptive Retinex algorithm is introduced: a Gaussian convolution is applied to the original image, and the resulting first-order filtered image is taken as the illumination component of the current image:
(equation reproduced only as an image in the original; it defines the illumination component I(x, y))
where I(x, y) represents the illumination image and characterizes the dynamic range of the image pixels, and r1(x, y) is the adaptive correction coefficient;
the reflection component is then obtained from the following formula, the reflection image being the enhanced image:
R_Retinex(x, y) = log H(x, y) − log[G(x, y) * H(x, y)]
G(x, y) = λ·exp[−(x^2 + y^2)/c^2], where λ is chosen so that ∬G(x, y) dx dy = 1
where R_Retinex denotes the image enhanced by the adaptive Retinex algorithm, "*" is the convolution operator, G(x, y) is the Gaussian filter function, and c is its standard deviation;
(2.2) the noise in the captured images is suppressed with a Gaussian filtering algorithm;
(2.3) for the image blur caused by the robot's motion, the captured frames are restored with a wiener filter; the method restores the image by minimizing the mean square error between the restored duct image and the image before degradation, expressed as:
ε^2 = E[(f − f')^2] → min
where f is the image before degradation, f' is the image restored by wiener filtering, and E[·] denotes the expectation; the Fourier-domain expression of the restored image is:
F'(u, v) = [|H(u, v)|^2 / (H(u, v)·(|H(u, v)|^2 + K))]·G(u, v)
where F'(u, v) is the Fourier transform of f', H(u, v) is the Fourier transform of the degradation function, G(u, v) is the Fourier transform of the degraded image, and K approximates the noise-to-signal power ratio.
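A minimal sketch of the wiener restoration in (2.3), for illustration only: it assumes the blur kernel (PSF) is known and uses a constant K as the noise-to-signal ratio, both modeling choices of this example rather than statements of the patent:

```python
import numpy as np

def wiener_deconvolve(blurred, psf, K=0.01):
    """Frequency-domain Wiener restoration:
    F'(u, v) = conj(H) / (|H|^2 + K) * G(u, v),
    where H is the FFT of the blur kernel, G is the FFT of the blurred
    image, and K approximates the noise-to-signal power ratio."""
    H = np.fft.fft2(psf, s=blurred.shape)
    G = np.fft.fft2(blurred)
    F_hat = np.conj(H) / (np.abs(H) ** 2 + K) * G
    return np.real(np.fft.ifft2(F_hat))

# Horizontal motion blur of 5 pixels, as from the robot moving along the duct.
psf = np.zeros((64, 64))
psf[0, :5] = 1.0 / 5.0

rng = np.random.default_rng(1)
sharp = rng.random((64, 64))                     # stand-in for a sharp duct frame
blurred = np.real(np.fft.ifft2(np.fft.fft2(sharp) * np.fft.fft2(psf)))
restored = wiener_deconvolve(blurred, psf, K=1e-4)
```

With a small K the filter approaches plain inverse filtering where |H| is large, while K keeps the division stable near the zeros of H; the restored frame is much closer to the sharp one than the blurred input.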
9. The method for identifying mould and dust in a concentrated wind system by robot vision according to claim 2, characterized in that the deep convolutional neural network framework in step 3) comprises convolutional layers, activation layers, pooling layers, and fully connected layers, with data passed between layers through activation functions.
10. The method for identifying mould and dust in a concentrated wind system by robot vision according to claim 9, characterized in that the network comprises 1 input layer, 5 hidden convolutional layers with corresponding activation and pooling layers, and 3 fully connected layers with corresponding activation layers.
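To make the claimed layout concrete (for illustration only; the filter counts, kernel size, and input resolution below are assumptions, since the claim fixes only the layer counts), the spatial sizes through 5 conv+pool blocks and into the 3 fully connected layers can be walked through with the standard output-size formula:

```python
def conv_out(size, kernel, stride=1, pad=0):
    """Standard convolution output-size formula: floor((n + 2p - k)/s) + 1."""
    return (size + 2 * pad - kernel) // stride + 1

# Hypothetical instantiation of the claimed layout: 5 conv(3x3, pad 1) blocks,
# each followed by an activation and 2x2 max pooling, then 3 fully connected layers.
size, channels = 224, 3                 # assumed RGB input resolution
widths = [16, 32, 64, 128, 128]        # assumed filter counts, not from the patent
for w in widths:
    size = conv_out(size, kernel=3, pad=1)   # 3x3 conv with padding keeps the size
    size = size // 2                         # 2x2 max pooling halves it
    channels = w

flat = size * size * channels          # features entering the first FC layer
fc = [flat, 512, 128, 2]               # 3 FC layers ending in 2 classes: mould, dust
```

Each block halves the spatial size, so 224 shrinks to 7 after five blocks, and the flattened feature count fixes the width of the first fully connected layer.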
11. Application of the method for identifying mould and dust in a concentrated wind system by robot vision according to claim 1, characterized in that: a database of the positions and coverage areas where mould and dust arise in the centralized ventilation system is established from the visual images of mould and dust and the robot's position at the time of video sampling;
based on the video sampling data of regular inspections, a big-data analysis method is introduced to reveal where dust and mould readily form inside the air ducts.
12. The application of the method for identifying mould and dust in a concentrated wind system by robot vision according to claim 11, characterized in that: a k-means cluster analysis method is used to reveal, at the big-data level, where dust and mould readily form inside the air ducts.
13. The application of the method for identifying mould and dust in a concentrated wind system by robot vision according to claim 12, characterized in that the objective function of the k-means clustering is:
J = Σ_{j=1}^{c} Σ_{x_k ∈ S_j} ||x_k − Z_j||^2
where x_k is the position of a clustered point, S_j is the j-th cluster, and Z_j is its cluster center.
14. The application of the method for identifying mould and dust in a concentrated wind system by robot vision according to claim 11, characterized in that: revealing where dust and mould readily form inside the air ducts provides a basis for the regular, targeted cleaning of the ventilation system.
CN202011440546.0A 2020-12-11 2020-12-11 Method for identifying mould and dust in concentrated wind system of robot vision and application Pending CN112633090A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011440546.0A CN112633090A (en) 2020-12-11 2020-12-11 Method for identifying mould and dust in concentrated wind system of robot vision and application


Publications (1)

Publication Number Publication Date
CN112633090A true CN112633090A (en) 2021-04-09

Family

ID=75309219

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011440546.0A Pending CN112633090A (en) 2020-12-11 2020-12-11 Method for identifying mould and dust in concentrated wind system of robot vision and application

Country Status (1)

Country Link
CN (1) CN112633090A (en)


Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101140288A (en) * 2007-10-09 2008-03-12 华南理工大学 Central air-conditioning flue pipe air quality remote analysis system and method thereof
CN108109177A (en) * 2016-11-24 2018-06-01 广州映博智能科技有限公司 Pipe robot vision processing system and method based on monocular cam
CN207540009U (en) * 2017-10-11 2018-06-26 西安交通大学 A kind of haze controlling device
CN109747824A (en) * 2019-01-14 2019-05-14 中国计量大学 Device and barrier-avoiding method for unmanned plane avoidance inside chimney
CN110926426A (en) * 2019-12-03 2020-03-27 西安科技大学 Real-time detection device and detection method for residual muck volume of subway shield segment
CN211344445U (en) * 2019-08-30 2020-08-25 苏州碧晟环保科技有限公司 Robot for CCTV pipeline detection


Non-Patent Citations (6)

* Cited by examiner, † Cited by third party
Title
YONGHUN SHIN ET AL: "Content Awareness-based Color Image Enhancement", The 18th IEEE International Symposium on Consumer Electronics (ISCE 2014) *
LI JING: "Research on Detection Methods for Inner-Surface Defects of Pressure Pipelines Based on a Video Sphere", China Masters' Theses Full-text Database, Engineering Science & Technology II *
JIAO LICHENG ET AL: "Concise Artificial Intelligence", Xidian University Press
WANG YONGXIONG: "Research on Pipeline Robot Control, Navigation and Pipeline Inspection Technology", China Doctoral Dissertations Full-text Database, Information Science & Technology *
SHENTU HUABIN ET AL: "Radial Distribution Differences of Microbial Communities in Pipeline Biofilms Before and After Stagnation", Journal of Zhejiang University (Engineering Science) *
CHEN SONG: "Research on Key Technologies of the Vision System of Pipeline Inspection Robots", China Masters' Theses Full-text Database, Information Science & Technology *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113156830A (en) * 2021-04-27 2021-07-23 重庆文理学院 Intelligent household control system based on face recognition
CN113156830B (en) * 2021-04-27 2022-09-27 重庆文理学院 Intelligent household control system based on face recognition
CN113838136A (en) * 2021-10-19 2021-12-24 泰州市祥龙五金制品有限公司 Multi-stage conditioning system utilizing cloud storage

Similar Documents

Publication Publication Date Title
CN111310862B (en) Image enhancement-based deep neural network license plate positioning method in complex environment
CN107133943B (en) A kind of visible detection method of stockbridge damper defects detection
CN106875373B (en) Mobile phone screen MURA defect detection method based on convolutional neural network pruning algorithm
CN106991668B (en) Evaluation method for pictures shot by skynet camera
CN112733950A (en) Power equipment fault diagnosis method based on combination of image fusion and target detection
CN107229929A (en) A kind of license plate locating method based on R CNN
CN104517125B (en) The image method for real time tracking and system of high-speed object
CN112686833B (en) Industrial product surface defect detection and classification device based on convolutional neural network
CN112633090A (en) Method for identifying mould and dust in concentrated wind system of robot vision and application
KR101414670B1 (en) Object tracking method in thermal image using online random forest and particle filter
CN106874929B (en) Pearl classification method based on deep learning
CN106709421B (en) Cell image identification and classification method based on transform domain features and CNN
CN101726498B (en) Intelligent detector and method of copper strip surface quality on basis of vision bionics
CN111028238B (en) Robot vision-based three-dimensional segmentation method and system for complex special-shaped curved surface
CN113376172B (en) Welding seam defect detection system based on vision and eddy current and detection method thereof
CN106529441B Depth motion map human behavior recognition method based on smeared out boundary fragment
CN114170598B (en) Colony height scanning imaging device, and automatic colony counting equipment and method capable of distinguishing atypical colonies
CN115131325A (en) Breaker fault operation and maintenance monitoring method and system based on image recognition and analysis
CN111402249B (en) Image evolution analysis method based on deep learning
CN115497015A (en) River floating pollutant identification method based on convolutional neural network
CN114463843A (en) Multi-feature fusion fish abnormal behavior detection method based on deep learning
Wang et al. Automated defect and contaminant inspection of HVAC duct
CN105930793A (en) Human body detection method based on SAE characteristic visual learning
CN113591973B (en) Intelligent comparison method for appearance state change of track plate
CN109241932B (en) Thermal infrared human body action identification method based on motion variance map phase characteristics

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20210409)