CN116872233A - Campus inspection robot and control method thereof - Google Patents
- Publication number
- CN116872233A CN116872233A CN202311151754.2A CN202311151754A CN116872233A CN 116872233 A CN116872233 A CN 116872233A CN 202311151754 A CN202311151754 A CN 202311151754A CN 116872233 A CN116872233 A CN 116872233A
- Authority
- CN
- China
- Prior art keywords
- garbage
- feature
- surrounding area
- feature map
- map
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J11/00—Manipulators not otherwise provided for
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/0464—Convolutional networks [CNN, ConvNet]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/44—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
- G06V10/443—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components by matching or filtering
- G06V10/449—Biologically inspired filters, e.g. difference of Gaussians [DoG] or Gabor filters
- G06V10/451—Biologically inspired filters, e.g. difference of Gaussians [DoG] or Gabor filters with interaction between the filter responses, e.g. cortical complex cells
- G06V10/454—Integrating the filters into a hierarchical structure, e.g. convolutional neural networks [CNN]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/77—Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
- G06V10/80—Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
- G06V10/806—Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level of extracted features
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/82—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/35—Categorising the entire scene, e.g. birthday party or wedding scene
-
- G—PHYSICS
- G07—CHECKING-DEVICES
- G07C—TIME OR ATTENDANCE REGISTERS; REGISTERING OR INDICATING THE WORKING OF MACHINES; GENERATING RANDOM NUMBERS; VOTING OR LOTTERY APPARATUS; ARRANGEMENTS, SYSTEMS OR APPARATUS FOR CHECKING NOT PROVIDED FOR ELSEWHERE
- G07C1/00—Registering, indicating or recording the time of events or elapsed time, e.g. time-recorders for work people
- G07C1/20—Checking timed patrols, e.g. of watchman
-
- G—PHYSICS
- G08—SIGNALLING
- G08B—SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
- G08B21/00—Alarms responsive to a single specified undesired or abnormal condition and not otherwise provided for
- G08B21/18—Status alarms
- G08B21/24—Reminder alarms, e.g. anti-loss alarms
Abstract
The application relates to the technical field of intelligent robots, and in particular discloses a campus inspection robot and a control method thereof. The robot acquires images of the area surrounding a garbage can; extracts image features from these images with a feature extractor based on a deep neural network model to obtain a feature map of the surrounding area; passes the feature map through a classifier to obtain a classification result indicating whether garbage is present in the area surrounding the garbage can; and generates, based on the classification result, a control instruction indicating whether a garbage-cleaning prompt should be issued. In this way, the machine vision technology of the inspection robot automatically identifies and monitors whether garbage is present around the garbage can, reducing the burden of manual inspection, improving inspection efficiency and accuracy, and safeguarding the health and safety of students and teaching staff.
Description
Technical Field
The application relates to the technical field of intelligent robots, in particular to a campus inspection robot and a control method thereof.
Background
Campus sanitation is one of the important aspects of school management, and is related to not only physical health and psychological health of students, but also the image and reputation of schools.
However, owing to insufficient environmental awareness among students and teaching staff, garbage may accumulate around the garbage can. This not only spoils the appearance and sanitation of the campus, but also attracts pests such as flies and mosquitoes and breeds germs, threatening the physical health of students and teaching staff. At present, manual inspection is generally relied upon to avoid this situation as far as possible.
However, since manual inspection is cumbersome and limited by weather, time, etc., an optimized solution is desired.
Disclosure of Invention
The embodiment of the application provides a campus inspection robot and a control method thereof. Images of the area surrounding a garbage can are acquired by the campus inspection robot; image features are extracted from these images with a feature extractor based on a deep neural network model to obtain a feature map of the surrounding area; the feature map is passed through a classifier to obtain a classification result indicating whether garbage is present in the area surrounding the garbage can; and a control instruction is generated based on the classification result, indicating whether a garbage-cleaning prompt should be issued. In this way, the machine vision technology of the inspection robot automatically identifies and monitors whether garbage is present around the garbage can, reducing the burden of manual inspection, improving inspection efficiency and accuracy, and safeguarding the health and safety of students and teaching staff.
The embodiment of the application also provides a campus inspection robot, which comprises:
the image acquisition module is used for acquiring images around the garbage can acquired by the campus inspection robot;
the image feature extraction module is used for extracting image features of the images around the garbage can by using a feature extractor based on a deep neural network model so as to obtain a feature map of the area around the garbage can;
the inspection result dividing module is used for enabling the feature images of the surrounding areas of the garbage can to pass through a classifier to obtain classification results, wherein the classification results are used for indicating whether garbage exists in the surrounding areas of the garbage can; and
the instruction generation module is used for generating a control instruction based on the classification result, wherein the control instruction is used for indicating whether the garbage cleaning prompt is generated or not.
The embodiment of the application also provides a control method of the campus inspection robot, which comprises the following steps:
acquiring images around the garbage can acquired by the campus inspection robot;
extracting image features of the images around the garbage can by using a feature extractor based on a deep neural network model to obtain a feature map of the area around the garbage can;
passing the feature map of the area surrounding the garbage can through a classifier to obtain a classification result, the classification result indicating whether garbage is present in the area surrounding the garbage can; and
generating a control instruction based on the classification result, the control instruction indicating whether a garbage-cleaning prompt should be issued.
Drawings
In order to more clearly illustrate the embodiments of the application or the technical solutions in the prior art, the drawings required for the embodiments or the description of the prior art are briefly introduced below. It is apparent that the drawings in the following description show only some embodiments of the application, and that a person skilled in the art may obtain other drawings from them without inventive effort. In the drawings:
fig. 1 is a block diagram of a campus inspection robot provided in an embodiment of the present application.
Fig. 2 is a block diagram of the image feature extraction module in the campus inspection robot according to the embodiment of the present application.
Fig. 3 is a block diagram of the deep feature extraction unit in the campus inspection robot according to the embodiment of the present application.
Fig. 4 is a flowchart of a control method of a campus inspection robot provided in an embodiment of the present application.
Fig. 5 is a schematic diagram of a system architecture of a control method of a campus inspection robot according to an embodiment of the present application.
Fig. 6 is an application scenario diagram of a campus inspection robot provided in an embodiment of the present application.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present application more apparent, the embodiments of the present application will be described in further detail with reference to the accompanying drawings. The exemplary embodiments of the present application and their descriptions herein are for the purpose of explaining the present application, but are not to be construed as limiting the application.
Unless defined otherwise, all technical and scientific terms used in the embodiments of the application have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. The terminology used in the present application is for the purpose of describing particular embodiments only and is not intended to limit the scope of the present application.
In describing embodiments of the present application, unless otherwise indicated and limited thereto, the term "connected" should be construed broadly, for example, it may be an electrical connection, or may be a communication between two elements, or may be a direct connection, or may be an indirect connection via an intermediate medium, and it will be understood by those skilled in the art that the specific meaning of the term may be interpreted according to circumstances.
It should be noted that the term "first/second/third" in the embodiments of the present application merely distinguishes similar objects and does not denote a particular order. Where permitted, "first/second/third" may be interchanged so that the embodiments described herein can be practiced in sequences other than those illustrated or described.
In one embodiment of the present application, fig. 1 is a block diagram of a campus inspection robot provided in the embodiment of the present application. As shown in fig. 1, a campus inspection robot 100 according to an embodiment of the present application includes: the image acquisition module 110 is used for acquiring images of the periphery of the garbage can acquired by the campus inspection robot; an image feature extraction module 120, configured to perform image feature extraction on the image around the garbage can by using a feature extractor based on a deep neural network model to obtain a feature map of an area around the garbage can; the inspection result dividing module 130 is configured to pass the feature map of the area around the garbage can through a classifier to obtain a classification result, where the classification result is used to indicate whether garbage exists in the area around the garbage can; and an instruction generating module 140, configured to generate a control instruction based on the classification result, where the control instruction is used to indicate whether to generate a garbage cleaning prompt.
The image acquisition module 110 acquires images via a camera or other sensor on the robot; the position and angle of the camera or sensor should ensure coverage of the area around the garbage can. The deep neural network model used by the image feature extraction module 120 can accurately extract image features, and the extraction algorithm can handle variations in the image, such as illumination changes and noise. The inspection result dividing module 130 can accurately classify the feature map of the area surrounding the garbage can as garbage present or garbage absent, and the classifier performs well enough to meet real-time requirements, so that control instructions are generated in time. The instruction generation module 140 can accurately express garbage-cleaning prompts and can handle different control instructions, such as sending prompt information or triggering a cleaning robot.
The campus inspection robot can improve the safety and tidiness of the campus, lighten the workload of manual inspection and cleaning, and thereby improve the efficiency of campus management. Specifically, the image acquisition module 110 is configured to acquire images of the periphery of the trash can collected by the campus inspection robot. In view of the technical problems above, the technical concept of the application is to use the machine vision technology of the inspection robot to automatically identify and monitor whether garbage is present in the area surrounding the garbage can, thereby reducing the burden of manual inspection, improving inspection efficiency and accuracy, and safeguarding the health and safety of students and teaching staff.
Specifically, in the technical scheme of the application, images of the periphery of the garbage can are first acquired by the campus inspection robot. In one embodiment, the robot is equipped with sensor devices such as a camera or a laser radar to collect images of the area surrounding the garbage can; these devices may be mounted on different parts of the robot, such as its head, body or chassis. The collected images are then preprocessed, for example by color space conversion, image filtering and image segmentation, so as to better extract the target area around the garbage can. After preprocessing, the robot extracts features of the surrounding area from the image, such as color, texture and shape; these features may be extracted and analyzed by machine learning algorithms. After feature extraction, the robot performs target detection, that is, it judges whether garbage is present in the area surrounding the garbage can; this may be implemented with a deep learning algorithm such as a convolutional neural network (CNN). The robot then analyzes the detection result, for example determining the amount and type of garbage in the surrounding area, which may be implemented with image processing and machine learning algorithms. Finally, the robot generates corresponding instructions according to the analysis result, such as cleaning the garbage in the surrounding area or reporting the garbage quantity; this may be implemented with machine learning algorithms and a rules engine.
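The acquire, preprocess, extract, classify, act loop described above can be sketched as follows. All function names, the sample image values and the threshold are illustrative assumptions for the sketch; the patent does not specify an implementation.

```python
# Minimal sketch of the inspection loop: each stage is a stand-in
# for the corresponding module described in the patent.

def capture_image():
    # Stand-in for the camera/lidar acquisition step.
    return [[0.1, 0.9], [0.8, 0.2]]

def preprocess(image):
    # Stand-in for color-space conversion, filtering, segmentation.
    return image

def extract_features(image):
    # Stand-in for the CNN feature extractor; here: mean intensity.
    flat = [p for row in image for p in row]
    return sum(flat) / len(flat)

def classify(feature):
    # Stand-in for the classifier: "garbage present" above a threshold.
    return feature > 0.4

def control_instruction(has_garbage):
    # Stand-in for the instruction generation module.
    return "generate_cleaning_prompt" if has_garbage else "no_action"

def patrol_step():
    image = preprocess(capture_image())
    return control_instruction(classify(extract_features(image)))

print(patrol_step())
```

In a real system each stage would be replaced by the corresponding hardware driver, the trained feature extractor and classifier, and the robot's prompt/reporting interface.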
Specifically, the image feature extraction module 120 is configured to perform image feature extraction on the image around the trash can by using a feature extractor based on a deep neural network model to obtain a feature map of the area around the trash can. Fig. 2 is a block diagram of the image feature extraction module in the campus inspection robot provided in the embodiment of the present application, as shown in fig. 2, the image feature extraction module 120 includes: an image preprocessing unit 121, configured to perform image preprocessing on the garbage can surrounding image to obtain a preprocessed garbage can surrounding image; the shallow feature extraction unit 122 is configured to pass the preprocessed image around the garbage can through a shallow feature extractor based on a convolutional neural network model to obtain a shallow feature map of the area around the garbage can; a deep feature extraction unit 123, configured to pass the shallow feature map of the area around the garbage can through a ShuffleNetV2 basic block to obtain a deep feature map of the area around the garbage can; the fusion unit 124 is configured to fuse the shallow feature map of the surrounding area of the garbage can and the deep feature map of the surrounding area of the garbage can to obtain an initial feature map of the surrounding area of the garbage can; and an optimizing unit 125, configured to optimize the association distribution expression effect on the initial garbage can surrounding area feature map to obtain the garbage can surrounding area feature map.
First, in the image preprocessing unit 121, the image of the trash can surroundings is preprocessed to obtain a preprocessed image. In effect, the inspection robot replaces the human eye of manual inspection and monitors the situation around the garbage can in real time. It will be appreciated that for practical reasons, such as uneven illumination, some areas in the image may be brighter or darker, which can interfere with subsequent training and testing of the model, or even affect its accuracy. To address this, in the implementation of the present application, it is desirable to reduce noise in the image around the trash can and improve its quality by means of image preprocessing. Such preprocessing includes, but is not limited to: image enhancement, edge detection, filtering and noise reduction, color space conversion, and binarization.
In the present application, the image preprocessing methods include, but are not limited to:
1. Color space conversion: converting an image from one color space to another. Common color spaces include RGB, HSV and Lab. Color space conversion makes it easier to extract image features such as color and brightness information.
2. Image filtering: smoothing or sharpening an image. Common methods include Gaussian filtering, median filtering and mean filtering; filtering removes noise from the image, making it clearer.
3. Image segmentation: dividing an image into several distinct regions. Common methods include threshold segmentation, edge detection and region growing; segmentation helps to extract target regions, such as the region around a trash can.
4. Morphological processing: applying morphological operations to an image. Common operations include erosion, dilation, opening and closing. Morphological processing removes noise while preserving the shape and structure of the target region.
5. Image enhancement: enhancing an image. Common methods include histogram equalization and contrast enhancement; enhancement brings out image detail and improves image quality.
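Two of the preprocessing steps listed above, mean filtering and threshold binarization, can be sketched in a few lines of plain Python. The grid values and the threshold are illustrative, not taken from the patent.

```python
def mean_filter_pixel(img, r, c):
    # 3x3 mean filter at (r, c), clamping the window at the border.
    rows, cols = len(img), len(img[0])
    vals = [img[i][j]
            for i in range(max(0, r - 1), min(rows, r + 2))
            for j in range(max(0, c - 1), min(cols, c + 2))]
    return sum(vals) / len(vals)

def mean_filter(img):
    # Smooth the whole image; used to suppress noise before segmentation.
    return [[mean_filter_pixel(img, r, c) for c in range(len(img[0]))]
            for r in range(len(img))]

def binarize(img, threshold):
    # Threshold segmentation: 1 for foreground, 0 for background.
    return [[1 if p >= threshold else 0 for p in row] for row in img]

print(binarize([[0, 255], [255, 255]], 128))
```

A production pipeline would normally use a vision library for these operations; the sketch only shows the arithmetic each step performs.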
Then, in the shallow feature extraction unit 122, it has a good feature extraction capability in view of the convolutional neural network (Convolutional Neural Network, abbreviated as CNN) being widely used in image processing. In the technical scheme of the application, the convolutional neural network is expected to be utilized to acquire and capture the key information in the images around the garbage can after pretreatment. Specifically, the preprocessed garbage can surrounding image is passed through a shallow feature extractor based on a convolutional neural network model to obtain a garbage can surrounding area shallow feature map. That is, the convolutional neural network model is used as an implementation mode of a shallow feature extractor, and shallow feature information such as illumination, texture, shape and other features are extracted from the preprocessed images around the garbage can.
In a specific example of the present application, the convolutional neural network model has the structure conv2d_1 -> max_pooling2d_1 -> conv2d_2 -> max_pooling2d_2 -> conv2d_3 -> conv2d_4 -> conv2d_5 -> max_pooling2d_3; that is, it comprises a first convolutional layer, a first pooling layer, a second convolutional layer, a second pooling layer, a third convolutional layer, a fourth convolutional layer, a fifth convolutional layer, and a third pooling layer. The first convolutional layer uses 96 convolution kernels of size 11×11, the second uses 256 kernels of size 5×5, the third uses 384 kernels of size 3×3, the fourth uses 384 kernels of size 3×3, and the fifth uses 256 kernels of size 3×3; the first, second and third pooling layers all apply max pooling with a 3×3 window and a stride of 2.
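This kernel layout matches the classic AlexNet pattern, so the spatial sizes through the stack can be traced with the standard output-size formula. The strides and paddings of the convolutional layers (stride 4 for conv2d_1, padding 2 and 1 for the later layers) and the 227×227 input are assumptions borrowed from AlexNet; the patent states only kernel counts, kernel sizes and the pooling parameters.

```python
def conv_out(size, kernel, stride=1, pad=0):
    # Standard convolution/pooling output-size formula:
    # floor((size - kernel + 2*pad) / stride) + 1
    return (size - kernel + 2 * pad) // stride + 1

def alexnet_like_sizes(size=227):
    # (name, kernel, stride, pad); strides/pads are AlexNet-style
    # assumptions, not stated in the patent.
    layers = [
        ("conv2d_1", 11, 4, 0),
        ("max_pooling2d_1", 3, 2, 0),
        ("conv2d_2", 5, 1, 2),
        ("max_pooling2d_2", 3, 2, 0),
        ("conv2d_3", 3, 1, 1),
        ("conv2d_4", 3, 1, 1),
        ("conv2d_5", 3, 1, 1),
        ("max_pooling2d_3", 3, 2, 0),
    ]
    trace = {}
    for name, k, s, p in layers:
        size = conv_out(size, k, s, p)
        trace[name] = size
    return trace

print(alexnet_like_sizes())
```

Under these assumptions the feature map shrinks 227 → 55 → 27 → 27 → 13 → 13 → 13 → 13 → 6 pixels per side, so the shallow feature map handed to the next stage would be 6×6 with 256 channels.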
The convolutional neural network (Convolutional Neural Networks, CNN for short) is a deep learning neural network model, can effectively process high-dimensional data, and has the advantages of automatic feature extraction, hierarchical representation, strong generalization capability and the like. The core of CNN is a convolution layer (Convolutional Layer) that extracts features from the input data by convolution operations. In the convolution layer, a convolution operation is performed on input data by a convolution kernel (also referred to as a filter), thereby obtaining a set of Feature maps (Feature maps). Parameters such as the size, step length, filling and the like of the convolution kernel can be adjusted as required. In addition to the convolutional Layer, the CNN includes a Pooling Layer (Pooling Layer) for reducing the dimension of the feature map and reducing the amount of computation.
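The dimension-reducing role of the pooling layer mentioned above can be sketched directly. The window and stride below are the 3×3/stride-2 values the patent specifies for its pooling layers; the input grid is illustrative.

```python
def max_pool2d(img, window=3, stride=2):
    # Slide a window over the image and keep only the maximum in
    # each window, shrinking the spatial size and the computation
    # for later layers.
    rows, cols = len(img), len(img[0])
    out = []
    for r in range(0, rows - window + 1, stride):
        out_row = []
        for c in range(0, cols - window + 1, stride):
            out_row.append(max(img[i][j]
                               for i in range(r, r + window)
                               for j in range(c, c + window)))
        out.append(out_row)
    return out

img = [[r * 5 + c for c in range(5)] for r in range(5)]
print(max_pool2d(img))  # a 5x5 grid pools down to 2x2
```

Max pooling also gives a small amount of translation tolerance: shifting a bright pixel within its window leaves the pooled value unchanged.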
The shallow feature extractor based on the convolutional neural network model can be used for extracting shallow features from the preprocessed images around the garbage can so as to facilitate subsequent garbage detection and classification tasks. The main function of the convolutional neural network is to automatically extract the characteristics, so that the characteristics of the input image can be extracted and reduced in dimension through components such as a convolutional layer, a pooling layer and the like, and therefore, more abstract and high-level characteristic representation is obtained. In the images of the area around the trash can, the convolutional neural network can automatically learn some trash-related features, such as the color, shape, texture and the like of the trash, so that more accurate and effective feature representation is provided for subsequent trash detection and classification tasks.
Next, in the deep feature extraction unit 123, the shallow feature map of the area surrounding the garbage can is passed through a ShuffleNetV2 basic block to obtain a deep feature map of the surrounding area. The deep feature map characterizes more complex and abstract features with stronger discriminative power.
Fig. 3 is a block diagram of the deep feature extraction unit in the campus inspection robot according to the embodiment of the present application. As shown in fig. 3, the deep feature extraction unit 123 includes: a segmentation subunit 1231, configured to split the shallow feature map of the area around the garbage can along the channel dimension to obtain a first shallow part feature map and a second shallow part feature map; a convolution processing subunit 1232, configured to perform convolution processing on the second shallow part feature map to obtain a second deep part feature map; a cascading subunit 1233, configured to concatenate the first shallow part feature map with the second deep part feature map to obtain a fused feature map of the area surrounding the garbage can; and a channel shuffling subunit 1234, configured to perform channel shuffling on the fused feature map to obtain the deep feature map of the area surrounding the garbage can.
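The split → branch convolution → concatenate → shuffle flow of the ShuffleNetV2 basic block can be sketched on a feature map represented simply as a list of channels. The per-channel "convolution" below is a tagging placeholder, not a real convolution; the point is to show how the channels move through the block.

```python
# Sketch of the ShuffleNetV2 basic block data flow; channels are
# stand-in labels rather than real H x W feature planes.

def channel_split(channels):
    # Split the feature map in half along the channel dimension.
    half = len(channels) // 2
    return channels[:half], channels[half:]

def branch_conv(channels):
    # Placeholder for the 1x1 -> 3x3 -> 1x1 -> BN branch applied
    # to the second half of the channels.
    return [("conv", ch) for ch in channels]

def channel_shuffle(channels, groups=2):
    # Interleave channels from the two groups so information can
    # flow between the untouched and the convolved branches.
    per_group = len(channels) // groups
    return [channels[g * per_group + i]
            for i in range(per_group)
            for g in range(groups)]

def shufflenet_v2_block(channels):
    left, right = channel_split(channels)
    fused = left + branch_conv(right)  # concatenate the two branches
    return channel_shuffle(fused, groups=2)

print(shufflenet_v2_block(["c0", "c1", "c2", "c3"]))
```

After the shuffle, untouched channels and convolved channels alternate, which is exactly the cross-branch information exchange the next paragraph motivates.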
In the embodiment of the application, performing the convolution processing on the basis of the shallow features allows deeper-level feature information, such as the more essential semantic information in the image, to be further extracted. However, the lack of information exchange between the feature maps after segmentation would reduce the feature extraction capability of the network. The channel shuffling operation increases the interaction between the feature maps to a certain extent and promotes a full flow of information between the channels without affecting network accuracy, thereby improving the learning capability across the feature maps.
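The split / convolve / cascade / shuffle flow of the deep feature extraction unit can be illustrated on a toy list of channel labels (a hedged sketch; the channel names and the group count of 2 are assumptions, and each list entry stands in for a whole feature map):

```python
def channel_split(channels):
    """Split a feature map along the channel dimension into two halves."""
    half = len(channels) // 2
    return channels[:half], channels[half:]

def channel_shuffle(channels, groups=2):
    """Interleave channels across groups so information flows between branches."""
    per_group = len(channels) // groups
    # Reshape to (groups, per_group), transpose, flatten -- the classic shuffle.
    grouped = [channels[g * per_group:(g + 1) * per_group] for g in range(groups)]
    return [grouped[g][i] for i in range(per_group) for g in range(groups)]

# Four "channels" labelled by origin: branch 1 is passed through unchanged,
# branch 2 would receive the convolution processing (identity here).
fused = ["b1_c0", "b1_c1"] + ["b2_c0", "b2_c1"]  # cascade of the two branches
shuffled = channel_shuffle(fused)                # channels now alternate branches
```

After the shuffle, each half of the channel list contains channels from both branches, which is exactly the cross-branch information exchange the text describes.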
More specifically, the process of convolving the second shallow part feature map to obtain the second deep feature map of the area around the garbage can includes: first, performing point convolution on the second shallow part feature map to obtain a point-convolved feature map; then, performing convolution based on a two-dimensional convolution kernel on the point-convolved feature map to obtain a deep part feature map; then, performing point convolution on the deep part feature map to obtain a channel-modulated deep part feature map; and finally, performing batch normalization on the channel-modulated deep part feature map to obtain the second deep feature map of the area around the garbage can. Here, the point convolution (i.e., a 1×1 convolution) is a special convolution operation that acts on each spatial position of the input independently. Unlike a conventional convolution layer, whose filters aggregate a spatial neighborhood, a point convolution combines only the channel values at a single pixel position, which keeps its computational cost low. Since the number of convolution kernels determines the channel dimension of the output data, point convolution is often used to adjust the channel dimension of the data.
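A point (1×1) convolution can be sketched in a few lines: each output channel is a per-pixel weighted sum over the input channels, so the spatial size is untouched while the channel dimension follows the number of weight rows (an illustrative sketch; the weights and sizes are assumptions):

```python
def point_conv(channels, weights):
    """1x1 convolution: each output channel is a per-pixel weighted sum
    of the input channels, so only the channel dimension changes."""
    h, w = len(channels[0]), len(channels[0][0])
    out = []
    for w_row in weights:  # one weight vector per output channel
        out.append([[sum(w_c * channels[c][i][j] for c, w_c in enumerate(w_row))
                     for j in range(w)] for i in range(h)])
    return out

# Two 2x2 input channels reduced to one output channel: the channel
# dimension drops from 2 to 1 while the spatial size stays 2x2.
x = [[[1, 2], [3, 4]],
     [[10, 20], [30, 40]]]
y = point_conv(x, [[1, 2]])  # single output channel: 1*ch0 + 2*ch1
```

Because there is no spatial neighborhood to sweep, the cost per output value is just one multiply-add per input channel, which is why 1×1 convolutions are the cheap channel-adjustment tool described above.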
It should be appreciated that, in another embodiment of the application, the ShuffleNetV2 basic block is a building block of a lightweight deep convolutional neural network model characterized primarily by efficient computation and memory utilization. The ShuffleNetV2 basic block comprises: a group convolution layer (Group Convolution Layer), which divides the input feature map into groups and convolves each group separately, thereby reducing the amount of computation and the number of parameters; a channel shuffle operation (Channel Shuffle), which rearranges the channels of the feature maps obtained after the grouped convolution so as to increase interaction and information flow between the feature maps; and a bottleneck structure (Bottleneck Structure), comprising a 1x1 convolution layer, a 3x3 depthwise separable convolution layer and a 1x1 convolution layer, for extracting a richer and more abstract feature representation.
Converting the shallow feature map of the surrounding area of the garbage can into the deep feature map by using the ShuffleNetV2 basic block further improves the distinguishing degree and accuracy of the features. Moreover, the ShuffleNetV2 basic block offers efficient computation and memory utilization, enabling fast and accurate feature extraction in resource-limited scenarios such as mobile devices.
Then, in the fusing unit 124, the shallow feature map of the surrounding area of the garbage can and the deep feature map of the surrounding area of the garbage can are fused again to make full use of the edge and texture feature information in the shallow features and the advanced semantic information in the deep features, so as to obtain the feature map of the surrounding area of the garbage can. Therefore, the feature expression of the feature map of the area around the garbage can is richer and more comprehensive.
Wherein, the fusing unit 124 is configured to: fusing the shallow feature map of the surrounding area of the garbage can and the deep feature map of the surrounding area of the garbage can by the following fusion formula to obtain an initial surrounding area feature map of the garbage can; wherein, the fusion formula is:
F = α·F1 ⊕ β·F2

wherein F represents the initial feature map of the area around the garbage can, F1 represents the shallow feature map of the surrounding area of the garbage can, F2 represents the deep feature map of the surrounding area of the garbage can, "⊕" means that the elements at the corresponding positions of the shallow feature map and the deep feature map of the surrounding area of the garbage can are added together, and α and β represent weighting parameters for controlling the balance between the shallow feature map and the deep feature map of the surrounding area of the garbage can.
The fusion of the feature images of the surrounding areas of the garbage can is to fully utilize the advantages of the shallow features and the deep features. Shallow features contain edges and texture features, while deep features contain higher levels of semantic information. By fusing the two features, a more accurate and complete feature map of the area around the garbage can be obtained, so that the recognition accuracy of garbage in the area around the garbage can is improved.
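The fusion described above reduces to a weighted position-wise addition of the two maps; a minimal sketch, assuming hypothetical weighting parameters α = 0.6 and β = 0.4 (the application does not fix their values):

```python
def fuse(shallow, deep, alpha=0.6, beta=0.4):
    """F = alpha * F_shallow (+) beta * F_deep, added position by position.
    alpha/beta balance edge-texture detail against high-level semantics."""
    return [[alpha * s + beta * d for s, d in zip(s_row, d_row)]
            for s_row, d_row in zip(shallow, deep)]

shallow = [[1.0, 2.0], [3.0, 4.0]]   # edge/texture-level responses
deep = [[10.0, 10.0], [10.0, 10.0]]  # semantic-level responses
fused = fuse(shallow, deep)          # weighted blend of the two maps
```

Larger α keeps more of the shallow edge/texture detail; larger β emphasizes the deep semantic responses.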
In another embodiment of the application, shallow and deep features are first extracted from the images of the area surrounding the garbage can. The shallow and deep features are then fused; different methods may be used, such as feature concatenation, feature weighting and feature crossing. Finally, the fused feature map is input into a convolutional neural network for target detection and classification, so as to realize automatic identification and monitoring of garbage in the area surrounding the garbage can.
By means of the fusion mode, characteristic information of different layers can be fully utilized, and detection accuracy and robustness of garbage in areas around the garbage can are improved.
Finally, the optimizing unit 125 includes: a matrix unfolding subunit, configured to respectively expand each feature matrix of the initial feature map of the area around the garbage can to obtain a plurality of feature vectors; an evaluation optimization subunit, configured to perform multisource information fusion pre-verification distribution evaluation optimization on each feature vector of the plurality of feature vectors to obtain a plurality of optimized feature vectors; and an aggregation reconstruction subunit, configured to reconstruct and restore each optimized feature vector into a feature matrix, and to aggregate and reconstruct the feature matrices along the channel dimension to obtain the feature map of the surrounding area of the garbage can.
Expanding each feature matrix of the initial feature map of the area around the garbage can to obtain a plurality of feature vectors includes the following. First, the initial feature map of the area around the garbage can is decomposed into a plurality of feature matrices, each feature matrix corresponding to one channel of the feature map. Each feature matrix can then be expanded into a feature vector: specifically, each row or each column of the matrix may be taken as a feature vector, and these vectors are arranged in a fixed order to form one long vector. Finally, for the plurality of feature matrices, their feature vectors may be connected in a fixed order to form a still longer vector that serves as the feature vector of the whole surrounding area of the garbage can.
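The row-wise expansion and channel-order concatenation described above can be sketched as follows (channels as nested lists; the sizes are assumptions):

```python
def expand_matrix(matrix):
    """Unfold a feature matrix row by row into a single feature vector."""
    return [value for row in matrix for value in row]

def expand_feature_map(feature_map):
    """Expand each channel's matrix, then concatenate the vectors in
    channel order into one long vector for the whole surrounding area."""
    vector = []
    for matrix in feature_map:
        vector.extend(expand_matrix(matrix))
    return vector

fmap = [[[1, 2], [3, 4]],   # channel 0
        [[5, 6], [7, 8]]]   # channel 1
long_vec = expand_feature_map(fmap)  # rows first within a channel, then channels
```

Restoring an optimized vector is the inverse: slice the long vector back into per-channel chunks and reshape each chunk row by row into its matrix.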
It should be noted that the feature matrix expansion mode can be adjusted and selected according to specific application scenarios and tasks, for example, different expansion sequences, feature selection modes and the like can be selected, so as to obtain better feature representation effects.
In the technical scheme of the application, for the feature map of the area around the garbage can, each feature matrix along the channel dimension expresses image semantic features of the image around the garbage can, including both shallow and deep image semantic features. However, the convolution operations performed by the ShuffleNetV2 basic block, together with the channel shuffling performed on the fused feature map that merges the shallow and deep image semantic features, may reduce the overall association among the image semantic features expressed by the respective feature matrices of the feature map.
Here, each feature matrix of the feature map of the area around the garbage can may be regarded as a local feature set within the overall combined feature set of the feature map, and the feature matrices share a homologous channel semantic association relationship originating from the image around the garbage can. Therefore, besides the neighborhood distribution relationship by which they are associated with each other, the feature matrices also have a multisource information association relationship corresponding to the plurality of local channel association distributions of the feature map.
Therefore, in order to promote the effect of expressing the association distribution of the feature matrices of the feature map of the area around the garbage can as a whole, the applicant of the present application first expands each feature matrix into a feature vector, and then performs multisource information fusion pre-verification distribution evaluation optimization on each feature vector V_i of the plurality of feature vectors to obtain an optimized feature vector V_i′. Specifically, the multisource information fusion pre-verification distribution of each feature vector of the plurality of feature vectors is evaluated and optimized by using the following optimization formula to obtain the plurality of optimized feature vectors; wherein, the optimization formula is:
wherein V_i is the i-th feature vector of the plurality of feature vectors, V̄ is the mean feature vector of the plurality of feature vectors, k is a neighborhood setting hyperparameter, log represents the logarithm with base 2, ⊖ represents position-wise subtraction, and V_i′ is the i-th optimized feature vector of the plurality of optimized feature vectors.
Here, the multisource information fusion pre-verification distribution evaluation optimization treats the plurality of mutually associated neighborhood parts as a local feature collection and, based on a quasi-maximum-likelihood estimation of the robustness of the feature distribution fusion, effectively folds the pre-verification information of each feature vector onto the local synthesis distribution. By constructing the pre-verification distribution under the multisource condition, an optimization paradigm of standard expected fusion information is obtained that can evaluate both the internal association within a collection and the change relationship between collections, thereby improving the information expression effect of the feature vector fusion based on the multisource information association. Therefore, restoring the optimized feature vectors to feature matrices allows the combination of the feature matrices to yield a feature map of the surrounding area of the garbage can with a better overall expression effect, improving the accuracy of the classification result obtained by the classifier.
Specifically, the inspection result dividing module 130 is configured to pass the feature map of the area around the garbage can through a classifier to obtain a classification result, where the classification result is used to indicate whether garbage exists in the area around the garbage can. That is, a classifier is used to classify the feature map of the area around the garbage can. Specifically, the task of the classifier is to map the feature map into a binary decision, outputting one of the two cases "garbage exists in the area around the garbage can" and "no garbage exists in the area around the garbage can".
Wherein, the inspection result dividing module 130 includes: the matrix unfolding unit is used for unfolding the feature map of the area around the garbage can into classification feature vectors according to row vectors or column vectors; the full-connection coding unit is used for carrying out full-connection coding on the classification characteristic vectors by using a plurality of full-connection layers of the classifier so as to obtain coded classification characteristic vectors; and the classification unit is used for passing the coding classification feature vector through a Softmax classification function of the classifier to obtain the classification result.
After the feature map of the area around the garbage can is obtained, it needs to be classified to judge whether garbage exists in the area. The classifier may be implemented as a logistic regression classifier (Logistic Regression Classifier), a linear classifier commonly used for binary classification problems. The classifier may also be implemented as a decision tree classifier (Decision Tree Classifier), a tree-structure-based classifier that can handle multi-class problems. The classifier may also be implemented as a support vector machine classifier (Support Vector Machine Classifier), a maximum-margin classifier that can handle both binary and multi-class problems. The classifier may also be implemented as a random forest classifier (Random Forest Classifier), an ensemble learning method based on decision trees that can handle multi-class problems. The classifier may also be implemented as a convolutional neural network classifier (Convolutional Neural Network Classifier), a deep-learning-based classifier suitable for problems such as image classification and speech recognition.
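The expand → fully-connected → Softmax path of the inspection result dividing module can be sketched for the two-class case (all weights here are hypothetical placeholders; a deployed classifier would learn them from labelled images):

```python
import math

def softmax(logits):
    """Convert raw scores to a probability distribution (numerically stable)."""
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def classify(feature_vector, weights, biases):
    """One fully connected layer followed by Softmax over the two labels
    'garbage exists' / 'no garbage' in the area around the can."""
    logits = [sum(w * x for w, x in zip(w_row, feature_vector)) + b
              for w_row, b in zip(weights, biases)]
    probs = softmax(logits)
    return probs.index(max(probs)), probs

# Hypothetical 2-class head on a 3-dimensional classification feature vector.
W = [[1.0, 0.0, 0.5],    # row 0 scores "garbage exists"
     [-1.0, 0.2, 0.0]]   # row 1 scores "no garbage"
b = [0.0, 0.0]
label, probs = classify([2.0, 1.0, 0.0], W, b)  # label 0 means garbage detected
```

In the patented design, several such fully connected layers encode the expanded feature vector before the final Softmax produces the two-case classification result.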
Specifically, the instruction generating module 140 is configured to generate a control instruction based on the classification result, where the control instruction is used to indicate whether to generate a garbage cleaning prompt.
In an actual scenario, a control instruction is generated based on the classification result, the control instruction indicating whether a garbage cleaning prompt is generated. That is, when the classifier judges that garbage exists around the garbage can, the robot sends out a garbage cleaning prompt to remind students or teaching staff, thereby helping the school manage environmental sanitation. Conversely, if the classifier determines that there is no garbage around the garbage can, no prompt is required. In this way, the sanitation of the school is managed intelligently.
In summary, the campus inspection robot 100 according to the embodiment of the present application is illustrated, which realizes automatic recognition and monitoring of whether garbage exists in the surrounding area of the garbage can based on the machine vision technology of the inspection robot, thereby reducing the complexity of manual inspection, improving inspection efficiency and precision, and guaranteeing the health and safety of students and teaching staff.
Fig. 4 is a flowchart of a control method of a campus inspection robot provided in an embodiment of the present application. Fig. 5 is a schematic diagram of a system architecture of a control method of a campus inspection robot according to an embodiment of the present application. As shown in fig. 4 and 5, a control method of a campus inspection robot includes: 210, acquiring images around the garbage can acquired by a campus inspection robot; 220, extracting image features of the images around the garbage can by using a feature extractor based on a deep neural network model to obtain a feature map of the area around the garbage can; 230, passing the feature map of the area around the garbage can through a classifier to obtain a classification result, wherein the classification result is used for indicating whether garbage exists in the area around the garbage can or not; and 240, based on the classification result, generating a control instruction, wherein the control instruction is used for indicating whether garbage cleaning prompt is generated.
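Steps 210-240 form an acquire → extract → classify → act loop, which can be sketched schematically with stub functions standing in for the neural-network stages (all function names and the threshold are hypothetical placeholders, not the patented implementation):

```python
def acquire_image(robot):
    """Step 210: stand-in for the robot's camera capture."""
    return robot["camera_frame"]

def extract_features(image):
    """Step 220: stand-in for the deep neural network feature extractor."""
    return [float(px) for px in image]

def classify_garbage(features):
    """Step 230: stand-in for the classifier; True means garbage detected."""
    return sum(features) > 1.0

def control_step(robot):
    """Step 240: emit a cleaning prompt only when garbage is detected."""
    image = acquire_image(robot)
    has_garbage = classify_garbage(extract_features(image))
    return "garbage cleaning prompt" if has_garbage else "no action"

detected = control_step({"camera_frame": [1, 1, 0]})  # garbage detected branch
```

The real method replaces the stubs with the shallow/deep feature extraction and Softmax classifier described above, but the control flow of the four steps is exactly this loop.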
Specifically, in the control method of the campus inspection robot, extracting image features of the image around the garbage can by using a feature extractor based on a deep neural network model to obtain a feature map of the area around the garbage can includes: performing image preprocessing on the images around the garbage can to obtain preprocessed images around the garbage can; passing the preprocessed images around the garbage can through a shallow feature extractor based on a convolutional neural network model to obtain a shallow feature map of the surrounding area of the garbage can; passing the shallow feature map of the surrounding area of the garbage can through a ShuffleNetV2 basic block to obtain a deep feature map of the surrounding area of the garbage can; fusing the shallow feature map of the surrounding area of the garbage can with the deep feature map of the surrounding area of the garbage can to obtain an initial feature map of the surrounding area of the garbage can; and performing association distribution expression effect optimization on the initial feature map to obtain the feature map of the surrounding area of the garbage can.
It will be appreciated by those skilled in the art that the specific operations of the respective steps in the above-described control method of the campus inspection robot have been described in detail in the above description of the campus inspection robot with reference to fig. 1 to 3, and thus, repetitive descriptions thereof will be omitted.
Fig. 6 is an application scenario diagram of a campus inspection robot provided in an embodiment of the present application. As shown in fig. 6, in this application scenario, first, a trash can surrounding image (e.g., C as illustrated in fig. 6) acquired by a campus inspection robot (e.g., M as illustrated in fig. 6) is acquired; the acquired trash can surrounding image is then input into a server (e.g., S as illustrated in fig. 6) deployed with a campus inspection algorithm, wherein the server is capable of processing the trash can surrounding image based on the campus inspection algorithm to generate a control instruction that is used to indicate whether a trash cleaning prompt is generated.
The foregoing description of the embodiments has been provided for the purpose of illustrating the general principles of the application and is not intended to limit the application to the particular embodiments disclosed; any modifications, equivalents, improvements, etc. that fall within the spirit and principles of the application are intended to be included within the scope of the application.
Claims (8)
1. The campus inspection robot is characterized by comprising:
the image acquisition module is used for acquiring images around the garbage can acquired by the campus inspection robot;
the image feature extraction module is used for extracting image features of the images around the garbage can by using a feature extractor based on a deep neural network model so as to obtain a feature map of the area around the garbage can;
the inspection result dividing module is used for enabling the feature images of the surrounding areas of the garbage can to pass through a classifier to obtain classification results, wherein the classification results are used for indicating whether garbage exists in the surrounding areas of the garbage can; and
the instruction generation module is used for generating a control instruction based on the classification result, wherein the control instruction is used for indicating whether a garbage cleaning prompt is generated or not;
wherein, the image feature extraction module includes:
the image preprocessing unit is used for preprocessing the images around the garbage can to obtain preprocessed images around the garbage can;
the shallow feature extraction unit is used for enabling the preprocessed images around the garbage bin to pass through a shallow feature extractor based on a convolutional neural network model so as to obtain a shallow feature map of the surrounding area of the garbage bin;
the deep feature extraction unit is used for enabling the shallow feature map of the surrounding area of the garbage can to pass through a ShuffleNetV2 basic block to obtain a deep feature map of the surrounding area of the garbage can;
the fusion unit is used for fusing the shallow feature map of the surrounding area of the garbage can and the deep feature map of the surrounding area of the garbage can to obtain an initial surrounding area feature map of the garbage can; and
and the optimizing unit is used for optimizing the association distribution expression effect of the initial garbage can surrounding area feature map so as to obtain the garbage can surrounding area feature map.
2. The campus inspection robot of claim 1, wherein the deep feature extraction unit comprises:
the segmentation subunit is used for segmenting the shallow feature map of the peripheral area of the garbage can along the channel dimension to obtain a first shallow part feature map and a second shallow part feature map of the peripheral area of the garbage can;
the convolution processing subunit is used for carrying out convolution processing on the second shallow part feature map to obtain a second deep feature map of the peripheral area of the garbage can;
the cascading subunit is used for cascading the first shallow part feature map with the second deep feature map to obtain a fused feature map of the peripheral area of the garbage can; and
the channel shuffling subunit is used for carrying out channel shuffling on the fused feature map of the peripheral area of the garbage can so as to obtain the deep feature map of the peripheral area of the garbage can.
3. The campus inspection robot of claim 2, wherein the convolution processing subunit comprises:
the point convolution secondary subunit is used for carrying out point convolution on the second shallow part feature map of the area around the garbage can to obtain a point-convolved feature map;
the two-dimensional convolution secondary subunit is used for carrying out convolution processing based on a two-dimensional convolution kernel on the point-convolved feature map to obtain a deep part feature map;
the channel processing secondary subunit is used for carrying out point convolution processing on the deep part feature map to obtain a channel-modulated deep part feature map; and
the normalization secondary subunit is used for carrying out batch normalization processing on the channel-modulated deep part feature map to obtain the second deep feature map of the area around the garbage can.
4. A campus inspection robot as claimed in claim 3, wherein the fusion unit is configured to: fusing the shallow feature map of the surrounding area of the garbage can and the deep feature map of the surrounding area of the garbage can by the following fusion formula to obtain an initial surrounding area feature map of the garbage can;
wherein, the fusion formula is:
F = α·F1 ⊕ β·F2

wherein F represents the initial feature map of the area around the garbage can, F1 represents the shallow feature map of the surrounding area of the garbage can, F2 represents the deep feature map of the surrounding area of the garbage can, "⊕" means that the elements at the corresponding positions of the shallow feature map and the deep feature map of the surrounding area of the garbage can are added together, and α and β represent weighting parameters for controlling the balance between the shallow feature map and the deep feature map of the surrounding area of the garbage can.
5. The campus inspection robot of claim 4, wherein the optimization unit includes:
the matrix unfolding subunit is used for respectively expanding each feature matrix of the initial feature map of the area around the garbage can to obtain a plurality of feature vectors;
the evaluation optimization subunit is used for carrying out multisource information fusion pre-verification distribution evaluation optimization on each feature vector of the plurality of feature vectors so as to obtain a plurality of optimized feature vectors; and the aggregation reconstruction subunit is used for reconstructing and restoring each optimized feature vector into a feature matrix and aggregating and reconstructing each feature matrix along the channel dimension to obtain the feature map of the surrounding area of the garbage can.
6. The campus inspection robot of claim 5, wherein the evaluation optimization subunit is configured to: evaluate and optimize the multisource information fusion pre-verification distribution of each feature vector of the plurality of feature vectors by using the following optimization formula to obtain the plurality of optimized feature vectors;
wherein, the optimization formula is:
wherein V_i is the i-th feature vector of the plurality of feature vectors, V̄ is the mean feature vector of the plurality of feature vectors, k is a neighborhood setting hyperparameter, log represents the logarithm with base 2, ⊖ represents position-wise subtraction, and V_i′ is the i-th optimized feature vector of the plurality of optimized feature vectors.
7. The campus inspection robot of claim 6, wherein the inspection result partitioning module comprises:
the matrix unfolding unit is used for unfolding the feature map of the area around the garbage can into classification feature vectors according to row vectors or column vectors;
the full-connection coding unit is used for carrying out full-connection coding on the classification characteristic vectors by using a plurality of full-connection layers of the classifier so as to obtain coded classification characteristic vectors; and the classification unit is used for passing the coding classification feature vector through a Softmax classification function of the classifier to obtain the classification result.
8. The control method of the campus inspection robot is characterized by comprising the following steps:
acquiring images around the garbage can acquired by the campus inspection robot;
extracting image features of the images around the garbage can by using a feature extractor based on a deep neural network model to obtain a feature map of the area around the garbage can;
the feature images of the surrounding areas of the garbage can are passed through a classifier to obtain classification results, and the classification results are used for indicating whether garbage exists in the surrounding areas of the garbage can; based on the classification result, generating a control instruction, wherein the control instruction is used for indicating whether a garbage cleaning prompt is generated or not;
the method for extracting the image features of the garbage can surrounding image by using a feature extractor based on a deep neural network model to obtain a garbage can surrounding area feature map comprises the following steps:
performing image preprocessing on the images around the garbage can to obtain preprocessed images around the garbage can;
the preprocessed images around the garbage can pass through a shallow feature extractor based on a convolutional neural network model to obtain a shallow feature map of the surrounding area of the garbage can;
passing the shallow feature map of the surrounding area of the garbage can through a ShuffleNetV2 basic block to obtain a deep feature map of the surrounding area of the garbage can;
fusing the shallow feature map of the surrounding area of the garbage can with the deep feature map of the surrounding area of the garbage can to obtain an initial surrounding area feature map of the garbage can; and carrying out association distribution expression effect optimization on the initial garbage bin surrounding area feature map to obtain the garbage bin surrounding area feature map.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202311151754.2A CN116872233A (en) | 2023-09-07 | 2023-09-07 | Campus inspection robot and control method thereof |
Publications (1)
Publication Number | Publication Date |
---|---|
CN116872233A true CN116872233A (en) | 2023-10-13 |
Family
ID=88266638
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202311151754.2A Pending CN116872233A (en) | 2023-09-07 | 2023-09-07 | Campus inspection robot and control method thereof |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN116872233A (en) |
Citations (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110519582A (en) * | 2019-08-16 | 2019-11-29 | 哈尔滨工程大学 | A kind of crusing robot data collection system and collecting method |
CN110839127A (en) * | 2018-08-16 | 2020-02-25 | 深圳市优必选科技有限公司 | Inspection robot snapshot method, device and system and inspection robot |
US20200082167A1 (en) * | 2018-09-07 | 2020-03-12 | Ben Shalom | System and method for trash-detection and management |
CN110924340A (en) * | 2019-11-25 | 2020-03-27 | 武汉思睿博特自动化系统有限公司 | Mobile robot system for intelligently picking up garbage and implementation method |
CN111974704A (en) * | 2020-08-14 | 2020-11-24 | 东北大学秦皇岛分校 | Garbage classification detection system and method based on computer vision |
CN115557122A (en) * | 2022-10-18 | 2023-01-03 | 刘纯 | Intelligent environmental sanitation management system integrated with environmental sanitation |
CN116188864A (en) * | 2023-03-04 | 2023-05-30 | 福州大学 | Garbage image classification method based on improved MobileNet V2 |
CN116403213A (en) * | 2023-06-08 | 2023-07-07 | 杭州华得森生物技术有限公司 | Circulating tumor cell detector based on artificial intelligence and method thereof |
CN116502899A (en) * | 2023-06-29 | 2023-07-28 | 吉贝克信息技术(北京)有限公司 | Risk rating model generation method, device and storage medium based on artificial intelligence |
CN116645568A (en) * | 2023-06-09 | 2023-08-25 | 广东电网有限责任公司 | Target detection method, target detection device, electronic equipment and storage medium |
- 2023-09-07 CN CN202311151754.2A patent/CN116872233A/en active Pending
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111860533B (en) | Image recognition method and device, storage medium and electronic device | |
CN107016405B (en) | Pest image classification method based on classification-prediction convolutional neural network | |
AU2020102885A4 (en) | Disease recognition method of winter jujube based on deep convolutional neural network and disease image | |
CN108875821A (en) | Classification model training method and device, mobile terminal, and readable storage medium | |
CN110647875B (en) | Method for constructing a blood cell segmentation and recognition model, and blood cell recognition method | |
CN108038466B (en) | Multi-channel human eye closure recognition method based on convolutional neural network | |
CN109902646A (en) | Gait recognition method based on long short-term memory network | |
CN111046880A (en) | Infrared target image segmentation method and system, electronic device and storage medium | |
CN108446729A (en) | Egg embryo classification method based on convolutional neural networks | |
CN107563389A (en) | Crop disease recognition method based on deep learning | |
CN104063686B (en) | Crop leaf diseases image interactive diagnostic system and method | |
CN111291809A (en) | Processing device, method and storage medium | |
Mondal et al. | Detection and classification technique of Yellow Vein Mosaic Virus disease in okra leaf images using leaf vein extraction and Naive Bayesian classifier | |
CN109657582A (en) | Facial emotion recognition method and apparatus, computer device, and storage medium | |
CN114972208B (en) | YOLOv 4-based lightweight wheat scab detection method | |
CN116543386A (en) | Agricultural pest image identification method based on convolutional neural network | |
Verma et al. | Vision based detection and classification of disease on rice crops using convolutional neural network | |
Monigari et al. | Plant leaf disease prediction | |
CN111126155A (en) | Pedestrian re-identification method for generating confrontation network based on semantic constraint | |
Pramudhita et al. | Strawberry Plant Diseases Classification Using CNN Based on MobileNetV3-Large and EfficientNet-B0 Architecture | |
CN113128308B (en) | Pedestrian detection method, device, equipment and medium in port scene | |
CN112906510A (en) | Fishery resource statistical method and system | |
CN117152790A (en) | Method and system for detecting cow face in complex scene | |
CN107341456B (en) | Weather sunny and cloudy classification method based on single outdoor color image | |
CN116872233A (en) | Campus inspection robot and control method thereof |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||