CN117315439A - Tool detection method, system and storage medium - Google Patents
Tool detection method, system and storage medium
- Publication number
- CN117315439A (application number CN202311251930.XA)
- Authority
- CN
- China
- Prior art keywords
- tool detection
- model
- tool
- training
- processed
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/82—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/0464—Convolutional networks [CNN, ConvNet]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/048—Activation functions
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/764—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/77—Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
- G06V10/774—Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
Abstract
The invention relates to the technical field of computer vision and discloses a tool detection method, system and storage medium. The method comprises the following steps: acquiring a data set; preprocessing the data set to obtain a training set and a verification set; performing data enhancement processing on the training set to obtain a processed training set; training an initial tool detection network model with the processed training set to obtain a trained tool detection network model; optimizing the parameters of the trained tool detection network model with an improved Focal loss function to obtain an optimized tool detection model; and performing data enhancement processing on the verification set to obtain a processed verification set, and using the processed verification set to verify the optimized tool detection model to obtain a detection result. The invention can improve the robustness and accuracy of tool detection.
Description
Technical Field
The invention relates to the technical field of computer vision, in particular to a tool detection method, a tool detection system and a storage medium.
Background
With the development of modern society, industrial parks have become important workplaces for the staff of many enterprises and public institutions. In some special parks, such as high-precision instrument assembly workshops and hospital critical-care wards, work cannot proceed, or the working environment is compromised, unless staff wear the prescribed tooling (work clothing); such parks therefore strictly require staff to wear tooling. At present, deep learning methods are used to detect whether workers wear tooling, but these methods have several problems: the training set used to train the neural network model is not processed with resolution modification and scale transformation, so the robustness and accuracy of tool detection are low; the neural network model is either not combined with an ECA mechanism or is combined only with the traditional ECA mechanism, so detection accuracy is low; and the loss function is not improved, so the parameters of the neural network model are not optimal and detection accuracy is low.
The prior art discloses a garment attribute identification and detection method based on a deep-learning target detection algorithm. It obtains garment attributes (e.g., sleeves, collars) by labeling and classifying original garment images, preprocesses the garment pictures with operations such as flipping and translation, and then identifies and detects the garment attributes with a deep-learning-based target detection algorithm. Preprocessing here means labeling the positions of garment attributes in the image and classifying them, then applying traditional image operations such as flipping and translation to achieve data augmentation.
The detection itself first extracts garment attribute features with a deep convolutional neural network, then fuses multi-layer features with the target detection algorithm's feature pyramid, and finally identifies and detects the garment attributes with a fully convolutional network. However, this prior art does not apply resolution modification and scale transformation to the training set, so the robustness and accuracy of tool detection are low; the neural network model is not combined with an improved ECA mechanism; and the loss function is not improved, so the model parameters are not optimal and detection accuracy is low.
Disclosure of Invention
The invention aims to overcome the defects of the prior art and provides a tool detection method, a system and a storage medium.
In order to achieve the above object, the present invention provides a tool detection method, including:
step S1: acquiring a data set;
step S2: preprocessing the data set to obtain a training set and a verification set;
step S3: performing data enhancement processing on the training set to obtain a processed training set;
step S4: training an initial tool detection network model by using the processed training set to obtain a trained tool detection network model;
step S5: optimizing the parameters of the trained tool detection network model with the improved Focal loss function to obtain an optimized tool detection model;
step S6: performing data enhancement processing on the verification set to obtain a processed verification set, and using the processed verification set to verify the optimized tool detection model to obtain a final tool detection model;
step S7: and inputting the image to be detected into a final tool detection model to obtain a detection result.
Further, the dataset of step S1 includes tooling pictures and other clothing pictures.
Further, the specific process of step S2 includes:
step S2.1: labeling the tool picture with tool labels, and labeling the other clothing pictures with other clothing labels;
step S2.2: dividing the labeled pictures into a training set and a validation set at a ratio of 8:2.
Further, the data enhancement processing in step S3 employs at least one of random scaling, up-down flipping, mosaic addition, resolution modification and scale transformation.
Further, the initial tooling detection model described in step S4 is determined from a combination of the modified ECA mechanism and the Yolov5S model.
Further, the improved ECA mechanism adds a convolution branch parallel to the original convolution branch.
Further, the improved Focal loss function described in step S5 is determined by the following formula:
wherein FL denotes the Focal loss function; α_f denotes the improved alpha factor; γ denotes the adjustment factor in calculating the focal loss function.
Further, α_f is determined by the following formula:
wherein α represents the parameter α in the focal loss function; y represents the real label.
Further, the invention also provides a tool detection system, comprising:
an acquisition module: used for acquiring a data set;
a preprocessing module: used for preprocessing the data set to obtain a training set and a verification set;
a data enhancement processing module: used for performing data enhancement processing on the training set to obtain a processed training set;
a training module: used for training the initial tool detection network model with the processed training set to obtain a trained tool detection network model;
an optimization module: used for optimizing the parameters of the trained tool detection network model with the improved Focal loss function to obtain an optimized tool detection model;
a verification module: used for performing data enhancement processing on the verification set to obtain a processed verification set, and for using the processed verification set to verify the optimized tool detection model to obtain a final tool detection model;
a detection module: used for inputting the image to be detected into the final tool detection model to obtain a detection result.
Finally, the present invention also provides a computer-readable storage medium on which a computer program of the tool detection method is stored; when the program is executed by a processor, the steps of the tool detection method are implemented.
Compared with the prior art, the invention has the beneficial effects that:
According to the invention, the training set is subjected to data enhancement processing, which improves the detection accuracy and robustness of the tool detection model. The invention also combines the improved ECA mechanism with the YOLOv5s model to form the initial tool detection model, enhancing the model's feature-extraction capability. In addition, the invention improves the Focal loss function and uses the improved Focal loss function to optimize the model and adjust its parameters, improving the model's detection accuracy.
Drawings
FIG. 1 is a flow chart of a tool detection method according to an embodiment of the present invention;
FIG. 2 is a block diagram of a tool inspection system according to an embodiment of the present invention;
FIG. 3 is a diagram illustrating a mosaic processing of a tool detection method according to an embodiment of the present invention;
FIG. 4 is a comparison of the accuracy of the improved Focal loss function and the original Focal loss function in a tool detection method according to an embodiment of the present invention;
FIG. 5 is a diagram showing a comparison of an improved ECA attention mechanism and a conventional ECA attention mechanism box_loss for a tool detection method according to an embodiment of the present invention;
FIG. 6 is a graph of improved ECA attention mechanism versus conventional ECA attention mechanism training accuracy for a tool detection method in accordance with an embodiment of the present invention;
FIG. 7 is a diagram of a tool detection model structure of a tool detection method according to an embodiment of the present invention;
fig. 8 is a comparison chart of training results of a tool detection method according to an embodiment of the present invention.
Detailed Description
The following describes in further detail the embodiments of the present invention with reference to the drawings and examples. The following examples are illustrative of the invention and are not intended to limit the scope of the invention.
Embodiment one:
as shown in fig. 1, a tool detection method according to a preferred embodiment of the present invention includes:
step S1: acquiring a data set;
step S2: preprocessing a data set to obtain a training set and a verification set;
step S3: performing data enhancement processing on the training set to obtain a processed training set;
step S4: training an initial tool detection network model by using the processed training set to obtain a trained tool detection network model;
step S5: optimizing the parameters of the trained tool detection network model with the improved Focal loss function to obtain an optimized tool detection model;
step S6: performing data enhancement processing on the verification set to obtain a processed verification set, and using the processed verification set to verify the optimized tool detection model to obtain a final tool detection model;
step S7: and inputting the image to be detected into a final tool detection model to obtain a detection result.
According to the invention, the training set is subjected to data enhancement processing, so that the detection accuracy and the robustness of the tool detection model are improved;the invention also combines the improved ECA mechanism and the Yolov5s model to form an initial tool detection model, thereby enhancing the capability of extracting the characteristics of the model, and the invention also discloses a method for detecting the characteristics of the model by combining the improved ECA mechanism with the Yolov5s modelModification of the loss function and use of the modificationThe loss function optimizes the model and adjusts parameters of the model, so that the detection accuracy of the model is improved.
Embodiment two:
as shown in fig. 1, a tool detection method according to a preferred embodiment of the present invention includes:
step S1: acquiring a data set;
in this embodiment, 9492 pictures are obtained by a web crawler as a dataset, including tooling pictures and other clothing pictures.
Step S2: preprocessing a data set to obtain a training set and a verification set;
in this embodiment, preprocessing the data set includes:
step S2.1: labeling the tooling pictures with tooling labels, and labeling the other clothing pictures with other clothing labels;
step S2.2: dividing the labeled pictures into a training set and a validation set at a ratio of 8:2.
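The 8:2 split above can be sketched as follows (the function name and the fixed shuffle seed are illustrative, not taken from the patent):

```python
import random

def split_dataset(samples, train_ratio=0.8, seed=0):
    """Shuffle the labeled pictures and divide them into a training set
    and a validation set at the stated 8:2 ratio."""
    rng = random.Random(seed)
    shuffled = list(samples)
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * train_ratio)
    return shuffled[:cut], shuffled[cut:]
```

With the 9492 pictures of this embodiment, such a split would yield 7593 training and 1899 validation samples.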
Step S3: performing data enhancement processing on the training set to obtain a processed training set;
In this embodiment, the data enhancement processing adopts at least one of random scaling, up-down flipping, mosaic addition, resolution modification and scale transformation. Data enhancement increases the diversity of the data; training the detection model on the processed pictures lets the model better detect targets at particular angles, improving the model's robustness.
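Two of the enhancement operations named above can be sketched in NumPy (nearest-neighbour resizing stands in here for both random scaling and resolution modification; mosaic addition and the other steps are omitted, and the scale range is an assumption):

```python
import numpy as np

def resize_nearest(img, new_h, new_w):
    """Nearest-neighbour rescaling of an (H, W) or (H, W, C) image array."""
    h, w = img.shape[:2]
    rows = np.arange(new_h) * h // new_h
    cols = np.arange(new_w) * w // new_w
    return img[rows][:, cols]

def augment(img, rng):
    """Apply a random subset of the enhancements named in the text:
    up-down flipping and random scaling."""
    out = img
    if rng.random() < 0.5:
        out = np.flipud(out)             # up-down flip
    if rng.random() < 0.5:
        scale = rng.uniform(0.8, 1.2)    # assumed scale range
        h, w = out.shape[:2]
        out = resize_nearest(out, max(1, int(h * scale)), max(1, int(w * scale)))
    return out
```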
Step S4: training an initial tool detection network model by using the processed training set to obtain a trained tool detection network model;
In this embodiment, the initial tooling detection model is determined by combining the improved ECA mechanism with the YOLOv5s model. The ECA (Efficient Channel Attention) attention mechanism is an attention mechanism used in computer vision tasks. In contrast to conventional attention mechanisms (e.g., the SE attention mechanism), which typically compute global interactions between channels and are computationally complex, the ECA attention mechanism provides a more efficient alternative. The present invention proposes an improved ECA attention mechanism that builds on the conventional ECA mechanism and further enhances model performance by introducing an additional parallel convolution branch. The conventional ECA attention mechanism is a lightweight attention module that adaptively adjusts the channel weights of the convolution feature map, strengthening the response to important information in the feature map. Although the ECA attention mechanism performs well in target detection tasks, it is limited when handling complex scenes and changes in target scale. To further enhance the representation capability and adaptability of the model, the invention therefore introduces a parallel convolution branch that runs alongside the original convolution branch, with feature fusion performed after convolution. This design allows the model to obtain richer feature information at different scales and semantic levels, enhancing the representation of complex and small targets; at the same time, the feature-fusion operation better integrates the feature representations of the parallel convolution branch and the ECA attention mechanism, further improving model performance.
Specifically, first, a global average pooling operation is performed on the 1024-channel feature map output by the previous layer to obtain the global average of each channel;
then, the global averages are mapped to a new dimension through a one-dimensional convolution layer so that the result has the same size as the original number of channels;
next, the values of the new dimension are limited between 0 and 1 by a Sigmoid activation function to obtain the channel attention weights;
finally, the attention weights are multiplied with the original feature map to obtain a channel-attention-adjusted feature map, which is input into the next convolution layer.
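The four steps above can be sketched in NumPy as follows (the 1-D kernel values and length are illustrative; the patent does not give them, and the fusion with the parallel branch is left out because its exact form is not specified):

```python
import numpy as np

def eca_channel_attention(feat, kernel):
    """feat: (C, H, W) feature map; kernel: 1-D conv weights of odd length k.
    Implements the steps described in the text: global average pooling,
    1-D convolution across channels, Sigmoid, and channel re-weighting."""
    c = feat.shape[0]
    gap = feat.mean(axis=(1, 2))                     # step 1: per-channel global average
    k = len(kernel)
    padded = np.pad(gap, k // 2)                     # step 2: same-size 1-D conv
    conv = np.array([padded[i:i + k] @ kernel for i in range(c)])
    weights = 1.0 / (1.0 + np.exp(-conv))            # step 3: Sigmoid -> (0, 1)
    return feat * weights[:, None, None]             # step 4: re-weight channels
```

In the improved mechanism, a parallel convolution branch would process `feat` alongside this module and the two outputs would then be fused (e.g., by addition); that fusion is an assumption, as the text does not state its form.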
Step S5: using improvementsThe loss function optimizes the parameters of the trained tool detection network model to obtain an optimized tool detection model;
In this embodiment, by adjusting the weights of positive and negative samples and of hard and easy samples, the Focal loss effectively addresses the class-imbalance problem in target detection, improves the model's prediction capability for minority classes, and places more attention on hard-to-classify samples.
The conventional Focal loss function is a loss function optimized for the target detection task that excels at handling class imbalance. However, when class samples are severely unbalanced or noisy, the conventional Focal loss may be disturbed during convergence. By applying a squaring operation to the alpha factor to add more nonlinearity, the present invention improves the Focal loss function: the improvement retains the Focal loss's sensitivity to samples of different categories while making the loss smoother and more stable when processing rare categories and noisy data, which helps optimize the model's training process.
Specifically, the conventional Focal loss can be written as:

FL = BCE(p, y) · α_t · (1 − p_t)^γ

where BCE is the nn.BCEWithLogitsLoss function for classification problems; it combines a sigmoid function with binary cross-entropy loss and is suitable when the output has not passed through an activation function. (1 − p_t)^γ is the adjustment factor in calculating the focal loss, computed from the real label y, the predicted probability value p, and the parameter γ; α_t is the alpha factor, built from the parameter α, whose default value is 0.25:

α_t = y · α + (1 − y) · (1 − α)

The alpha_factor in the expression is improved by adding a nonlinear factor; the improved alpha_factor squares the conventional one:

α_f = (y · α + (1 − y) · (1 − α))²

Thus, the proposed improved Focal loss function is:

FL' = BCE(p, y) · α_f · (1 − p_t)^γ
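Because the original formula images are not reproduced in this text, the following NumPy sketch reconstructs the loss from the surrounding description — a standard Focal loss on top of nn.BCEWithLogitsLoss, with the squaring of the alpha factor as the stated improvement; the exact form is an assumption:

```python
import numpy as np

def bce_with_logits(pred, true):
    """Numerically stable elementwise BCEWithLogitsLoss (sigmoid + binary CE)."""
    return np.maximum(pred, 0) - pred * true + np.log1p(np.exp(-np.abs(pred)))

def improved_focal_loss(pred, true, gamma=1.5, alpha=0.25):
    """pred: raw logits; true: 0/1 labels. alpha defaults to 0.25 per the text;
    gamma=1.5 is an assumed value."""
    loss = bce_with_logits(pred, true)
    p = 1.0 / (1.0 + np.exp(-pred))
    p_t = true * p + (1 - true) * (1 - p)             # predicted prob of the true class
    alpha_factor = true * alpha + (1 - true) * (1 - alpha)
    alpha_factor = alpha_factor ** 2                  # squaring "improvement" (assumed form)
    modulating = (1.0 - p_t) ** gamma                 # focuses the loss on hard samples
    return float((loss * alpha_factor * modulating).mean())
```

A well-classified sample (large logit with label 1) contributes almost nothing, which is the intended down-weighting of easy examples.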
step S6: performing data enhancement processing on the verification set to obtain a processed verification set, and using the processed verification set to verify the optimized tool detection model to obtain a final tool detection model;
in this embodiment, the verification set is used to verify whether the tool detection model meets the requirements.
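A minimal sketch of that check (the model callable, the accuracy metric, and the pass threshold are all hypothetical — the text only says the validation set verifies whether the model meets the requirements):

```python
def meets_requirements(predict, val_samples, threshold=0.9):
    """predict: callable mapping an image to a label; val_samples: (image, label)
    pairs from the processed validation set. Returns the validation accuracy
    and whether it clears the assumed threshold."""
    correct = sum(1 for img, label in val_samples if predict(img) == label)
    accuracy = correct / len(val_samples)
    return accuracy, accuracy >= threshold
```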
Step S7: and inputting the image to be detected into a final tool detection model to obtain a detection result.
In this embodiment, the detection result is obtained simply by inputting the image to be detected into the final tool detection model.
According to the invention, the training set is subjected to data enhancement processing, which improves the detection accuracy and robustness of the tool detection model. The invention also combines the improved ECA mechanism with the YOLOv5s model to form the initial tool detection model, enhancing the model's feature-extraction capability. In addition, the invention improves the Focal loss function and uses the improved Focal loss function to optimize the model and adjust its parameters, improving the model's detection accuracy.
Embodiment three:
This preferred embodiment also provides a computer-readable storage medium on which a computer program of the tool detection method is stored; when the program is executed by a processor, the steps of the tool detection method are implemented.
In conclusion, the data enhancement processing performed on the training set improves the detection accuracy and robustness of the tool detection model. The invention also combines the improved ECA mechanism with the YOLOv5s model to form the initial tool detection model, enhancing the model's feature-extraction capability. In addition, the invention improves the Focal loss function and uses the improved Focal loss function to optimize the model and adjust its parameters, improving the model's detection accuracy.
The foregoing is merely a preferred embodiment of the present invention, and it should be noted that modifications and substitutions can be made by those skilled in the art without departing from the technical principles of the present invention, and these modifications and substitutions should also be considered as being within the scope of the present invention.
Claims (10)
1. The tool detection method is characterized by comprising the following steps of:
step S1: acquiring a data set;
step S2: preprocessing the data set to obtain a training set and a verification set;
step S3: performing data enhancement processing on the training set to obtain a processed training set;
step S4: training an initial tool detection network model by using the processed training set to obtain a trained tool detection network model;
step S5: optimizing the parameters of the trained tool detection network model with the improved Focal loss function to obtain an optimized tool detection model;
step S6: performing data enhancement processing on the verification set to obtain a processed verification set, and using the processed verification set to verify the optimized tool detection model to obtain a final tool detection model;
step S7: and inputting the image to be detected into a final tool detection model to obtain a detection result.
2. The method according to claim 1, wherein the dataset of step S1 comprises a tooling picture and other clothing pictures.
3. The tool detection method according to claim 2, wherein the specific process of step S2 includes:
step S2.1: labeling the tool picture with tool labels, and labeling the other clothing pictures with other clothing labels;
step S2.2: dividing the labeled pictures into a training set and a validation set at a ratio of 8:2.
4. A tool inspection method according to claim 3, wherein the data enhancement processing in step S3 employs at least one of random scaling, up-down flipping, mosaic addition, resolution modification and scale transformation.
5. A tool inspection method according to claim 3, wherein the initial tool inspection model in step S4 is determined by a combination of a modified ECA mechanism and a Yolov5S model.
6. The tool inspection method of claim 5 wherein the modified ECA mechanism adds a convolution branch to the original convolution branch.
7. The method of claim 6, wherein the improved Focal loss function in step S5 is determined by the following formula:
wherein FL represents the Focal loss function; α_f represents the improved alpha factor; γ represents the adjustment factor in calculating the focal loss function.
8. The method of claim 6, wherein α_f is determined by the following formula:
wherein α represents the parameter α in the focal loss function; y represents the real label.
9. Frock detecting system, characterized in that includes:
an acquisition module: used for acquiring a data set;
a preprocessing module: used for preprocessing the data set to obtain a training set and a verification set;
a data enhancement processing module: used for performing data enhancement processing on the training set to obtain a processed training set;
a training module: used for training the initial tool detection network model with the processed training set to obtain a trained tool detection network model;
an optimization module: used for optimizing the parameters of the trained tool detection network model with the improved Focal loss function to obtain an optimized tool detection model;
a verification module: used for performing data enhancement processing on the verification set to obtain a processed verification set, and for using the processed verification set to verify the optimized tool detection model to obtain a final tool detection model;
a detection module: used for inputting the image to be detected into the final tool detection model to obtain a detection result.
10. A computer readable storage medium having stored thereon a computer program, wherein the computer program is executed by a processor to implement the tool detection method of any one of claims 1 to 8.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202311251930.XA CN117315439A (en) | 2023-09-26 | 2023-09-26 | Tool detection method, system and storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN117315439A true CN117315439A (en) | 2023-12-29 |
Family
ID=89296581
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202311251930.XA Pending CN117315439A (en) | 2023-09-26 | 2023-09-26 | Tool detection method, system and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN117315439A (en) |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |