CN114154571B - Intelligent auxiliary labeling method and system for image - Google Patents
Intelligent auxiliary labeling method and system for image
- Publication number
- CN114154571B (application CN202111450317.1A / CN202111450317A)
- Authority
- CN
- China
- Prior art keywords
- label
- labeling
- training
- marking
- labels
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02D—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
- Y02D10/00—Energy efficient computing, e.g. low power processors, power management or thermal management
Landscapes
- Engineering & Computer Science (AREA)
- Data Mining & Analysis (AREA)
- Theoretical Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Bioinformatics & Computational Biology (AREA)
- Artificial Intelligence (AREA)
- Evolutionary Biology (AREA)
- Evolutionary Computation (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Life Sciences & Earth Sciences (AREA)
- Image Analysis (AREA)
- Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
Abstract
The invention relates to the technical field of intelligent image labeling, and particularly discloses an intelligent auxiliary labeling method and system for images. The method comprises the following steps: scanning acquired images; correspondingly starting one or more training units and one or more data transmission channels; training the labeling logs and training image sets stored in the storage module according to their different labels; screening the labeling logs and reclassifying the screened training image sets; calling a first storage list and, on its basis, updating the storage labels of the reclassified training image sets and the corresponding labeling logs to form a second storage list; and performing target detection. By intelligently refining the labeling tool on the basis of an image classification algorithm, a target detection algorithm and log data analysis, the invention improves labeling efficiency, reduces the time spent by labeling personnel, and improves labeling accuracy.
Description
Technical Field
The invention relates to the technical field of intelligent image annotation, and in particular to an intelligent auxiliary annotation method and system for images.
Background
In recent years, with the rapid development of artificial intelligence, algorithms have placed ever higher demands on data, and high-quality, massive structured data has become the algorithmic propeller of every artificial intelligence company. Labeling data quickly and with high quality is the pain point of manual annotation, and such labor-intensive labeling work calls for auxiliary and intelligent tools.
How to design a method and system that intelligently assist in labeling unlabeled data is therefore the technical problem to be solved.
Disclosure of Invention
The invention aims to overcome the defects of the prior art and provides an intelligent auxiliary labeling method and system for an image.
The technical scheme is as follows:
an intelligent auxiliary annotation method for images, comprising the following steps:
firstly, scanning an acquired image, loading one or more label units during scanning, starting one or more scanning units, and executing one or more primary classification threads to correspondingly label the acquired image and form a labeled training image set and a corresponding labeling log; correspondingly storing the training image set and the corresponding labeling logs according to the primary classification threads of their labels, and forming a first storage list;
secondly, starting one or more training units and correspondingly one or more data transmission channels, and training the labeling logs and training image sets correspondingly stored in the storage module according to different labels to capture the use state and occurrence frequency of each label, and marking high-frequency labels in their label attributes; monitoring the label attributes in real time over a set period to check the number of times each label is applied within the set period, evaluating usage frequency from that count, and setting the priority level of each label according to the evaluation result;
thirdly, screening the labeling logs according to the set priority levels of the labels, reclassifying the training image sets corresponding to the screened labeling logs, calling the first storage list, and updating the storage labels of the reclassified training image sets and the corresponding labeling logs on the basis of the first storage list to form a second storage list;
and fourthly, the target detection module performs target detection on the training image set according to the priority level based on the second storage list to form an annotation file corresponding to the annotation frame and the annotation log, and optimizes the annotation of the label unit on the image through the annotation file.
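The period-based frequency evaluation and priority assignment of step two can be sketched minimally in Python. The function name, the log-entry fields and the three-level priority banding below are illustrative assumptions, not the patent's implementation:

```python
from collections import Counter
from datetime import datetime, timedelta

def assign_priorities(labeling_log, period=timedelta(hours=24), levels=3):
    """Count how often each label was applied within the most recent period
    and map usage frequency to a priority level (1 = highest)."""
    cutoff = datetime.now() - period
    counts = Counter(entry["label"] for entry in labeling_log
                     if entry["time"] >= cutoff)
    ranked = [label for label, _ in counts.most_common()]
    priorities = {}
    for rank, label in enumerate(ranked):
        # Split the frequency ranking into `levels` evenly sized bands.
        priorities[label] = min(levels, 1 + rank * levels // max(len(ranked), 1))
    return priorities
```

Labels applied most often in the current period receive level 1 and would be screened first in step three.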
Preferably, the labeling log comprises: the labeling time, the labeling attribute, the label used, and the storage position.
Preferably, capturing the use state and occurrence frequency of the labels comprises: counting the occurrence frequency of different labels over several periods, prompting with the most recently used labeling attribute, prompting with the labels that appear most frequently in the current period, and persistently displaying labels that appear frequently across several consecutive periods.
Preferably, the step of optimizing the labeling of the image by the label unit through the labeling file comprises the following steps:
forming the second storage list based on step three to obtain the training image set corresponding to the second storage list, and performing target detection on the training image set according to priority level to determine the positions of the marking frames in the second storage list and the corresponding training image set;
based on the position in the second storage list, intercepting the corresponding sub-content contained in the training image set to obtain sub-image data corresponding to the sub-content;
starting one or more training units based on the sub-image data, training the sub-image data to capture the use state and occurrence frequency of the labels corresponding to the sub-image data, and marking high-frequency labels in their label attributes; and monitoring the label attributes in real time over a set period to check the number of times each label is applied within the set period, predicting according to that count, and displaying the predicted result as a prompt.
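The interception step, cutting the sub-content selected by a marking frame out of the image, can be sketched as follows, assuming images are plain 2-D pixel arrays and boxes are dicts of pixel coordinates (in practice an imaging library such as Pillow's `Image.crop` would be used; the representation here is an assumption for illustration):

```python
def crop_sub_images(image, boxes):
    """image: 2-D list of pixel values (rows of columns).
    boxes: dicts with keys label, xmin, ymin, xmax, ymax (max exclusive).
    Returns (label, sub_image) pairs cut from the labeled regions."""
    crops = []
    for b in boxes:
        # Slice the row range first, then the column range of each row.
        sub = [row[b["xmin"]:b["xmax"]] for row in image[b["ymin"]:b["ymax"]]]
        crops.append((b["label"], sub))
    return crops
```

Each returned sub-image is the sub-image data that would be fed back to the training units.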
Preferably, the annotation file comprises an image file and a corresponding annotation file.
Preferably, the target detection module performs target detection using a trained yolo-v5 model; the targets in the image are predicted based on the detection, and an xml annotation file is formed and stored.
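Forming the xml annotation file from detection results can be sketched with the standard library. The Pascal VOC-style layout below is an assumption, since the patent does not specify the xml schema:

```python
import xml.etree.ElementTree as ET

def detections_to_voc_xml(filename, width, height, detections):
    """Serialize detections [(label, xmin, ymin, xmax, ymax), ...]
    into a Pascal VOC-style xml annotation string."""
    root = ET.Element("annotation")
    ET.SubElement(root, "filename").text = filename
    size = ET.SubElement(root, "size")
    ET.SubElement(size, "width").text = str(width)
    ET.SubElement(size, "height").text = str(height)
    for label, xmin, ymin, xmax, ymax in detections:
        obj = ET.SubElement(root, "object")
        ET.SubElement(obj, "name").text = label
        box = ET.SubElement(obj, "bndbox")
        for tag, val in zip(("xmin", "ymin", "xmax", "ymax"),
                            (xmin, ymin, xmax, ymax)):
            ET.SubElement(box, tag).text = str(val)
    return ET.tostring(root, encoding="unicode")
```

The returned string would then be written to disk alongside the image file.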
The invention also provides an intelligent auxiliary annotation system for images, which comprises:
The acquisition module is used for acquiring images generated in the running process;
the scanning module is used for scanning the acquired images, loading one or more label units during scanning, starting one or more scanning units and executing one or more primary classification threads to correspondingly label the acquired images to form a labeled training image set and a corresponding labeled log; correspondingly storing the training image set and the corresponding labeling logs according to the primary classification threads corresponding to the labels, and forming a first storage list;
the pre-training model module starts one or more training units, correspondingly starts one or more data transmission channels, trains the labeling logs and training image sets correspondingly stored in the storage module according to different labels to capture the use state and occurrence frequency of each label, and marks high-frequency labels in the label attributes;
the monitoring module is used for monitoring the label attribute in real time according to a set period so as to check the number of times of label marking in the set period, evaluating the use frequency according to the number of times of label marking in the period and setting the priority level of the label according to the evaluation result;
the classification module is used for screening the labeled logs according to the priority levels of the set labels, reclassifying the training image sets corresponding to the screened labeled logs, calling the first storage list, and updating the reclassified training image sets and the storage labels of the corresponding labeled logs on the basis of the first storage list to form a second storage list;
and the target detection module is used for carrying out target detection on the training image set according to the priority level based on the second storage list so as to form a labeling file corresponding to the labeling frame and the labeling log and further optimize the labeling of the label unit on the image.
Preferably, the target detection module comprises a labeling frame, an intercepting unit, a storage unit and a sending unit;
the number of the marking frames is multiple, and the marking frames correspond one to one with the second storage list; each marking frame is used for frame-selecting the training image set in the second storage list according to set rules;
the intercepting unit is used for intercepting the sub-content selected by the marking frame;
the storage unit is used for storing the intercepted sub-content;
and the sending unit extracts the sub-contents in the storage unit and sends the sub-contents to the pre-training model module for training according to a set period.
Compared with the prior art, the invention has the beneficial effects that:
the method and the device have the advantages that the labeling tool is intelligently perfected based on image classification, target detection and log data analysis, so that the labeling efficiency is improved on one hand, and the labeling accuracy is also improved on the other hand.
This application adopts intelligent marking, does not carry out artificial interference among the marking process, can also adopt retraining once more to the result of marking to obtain the result of optimizing, can also indicate and show the result of marking simultaneously, help analyst to analyze, in order to optimize the marking unit.
Drawings
The invention is illustrated and described below by way of example only, and not by way of limitation of its scope, with reference to the following drawings, in which:
FIG. 1: schematic flow diagram of labeling in the second embodiment of the invention;
FIG. 2: schematic structural diagram of the intelligent auxiliary labeling system for images of the invention;
FIG. 3: schematic structural diagram of the target detection module in the labeling system of the invention;
FIG. 4: schematic diagram of the labeling process in an embodiment of the invention.
Detailed Description
In order to make the objects, technical solutions, design methods, and advantages of the present invention more apparent, the present invention will be further described in detail by specific embodiments with reference to the accompanying drawings. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
Referring to FIG. 1, the invention provides an intelligent auxiliary annotation system for images, which comprises
The acquisition module is used for acquiring images generated in the operation process;
the scanning module is used for scanning the acquired images, loading one or more label units during scanning, starting one or more scanning units and executing one or more primary classification threads to correspondingly label the acquired images to form a labeled training image set and a corresponding labeled log; correspondingly storing the training image set and the corresponding labeling logs according to the primary classification threads corresponding to the labels, and forming a first storage list;
the pre-training model module starts one or more training units, correspondingly starts one or more data transmission channels, trains the labeling logs and training image sets correspondingly stored in the storage module according to different labels to capture the use state and occurrence frequency of each label, and marks high-frequency labels in the label attributes;
the monitoring module is used for monitoring the label attribute in real time according to a set period so as to check the times of label marking in the set period, evaluating the use frequency according to the times of label marking in the period and setting the priority level of the label according to the evaluation result;
the classification module is used for screening the labeled logs according to the priority levels of the set labels, reclassifying the training image sets corresponding to the screened labeled logs, calling the first storage list, and updating the reclassified training image sets and the storage labels of the corresponding labeled logs on the basis of the first storage list to form a second storage list;
and the target detection module is used for carrying out target detection on the training image set according to the priority level based on the second storage list so as to form a labeling file corresponding to the labeling frame and the labeling log and further optimize the labeling of the label unit on the image.
The target detection module comprises a marking frame, an intercepting unit, a storage unit and a sending unit;
In the above, the number of marking frames is multiple, and the marking frames correspond one to one with the second storage list; each marking frame is used for frame-selecting the training image set in the second storage list according to set rules;
the intercepting unit is used for intercepting the sub-content selected by the marking frame;
the storage unit is used for storing the intercepted sub-content;
and the sending unit extracts the sub-contents in the storage unit and sends the sub-contents to the pre-training model module for training according to a set period.
In the above, the label units, scanning units and primary classification threads correspond one to one: one label unit corresponds to one scanning unit, and one scanning unit corresponds to one primary classification thread. Since the images generated during operation come from more than one source, one or more label units are provided, one or more scanning units are correspondingly started, and one or more primary classification threads are correspondingly executed in order to accelerate scanning.
In the above, the transmission channels and training units correspond one to one. Because one or more primary classification threads are used, the primarily classified data is stored correspondingly; during training, one or more data transmission channels are started, and the labeling logs and training image sets stored in the storage module according to the different labels are transmitted through these channels to the corresponding training units for separate training.
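The one-to-one correspondence between label units, scanning units and primary classification threads can be sketched with one queue (data transmission channel) and one worker thread per label unit. The function and its arguments are illustrative assumptions, not the patent's implementation:

```python
import queue
import threading

def run_primary_classification(images, label_units):
    """One channel (queue) and one worker thread per label unit,
    mirroring the one-to-one unit/channel/thread correspondence.
    label_units maps a label name to a predicate deciding membership."""
    channels = {name: queue.Queue() for name in label_units}
    results = {name: [] for name in label_units}

    def worker(name, classify):
        q = channels[name]
        while True:
            img = q.get()
            if img is None:          # sentinel: channel closed
                break
            if classify(img):
                results[name].append(img)
            q.task_done()

    threads = [threading.Thread(target=worker, args=(n, f))
               for n, f in label_units.items()]
    for t in threads:
        t.start()
    for img in images:
        for q in channels.values():
            q.put(img)               # every scanning unit sees every image
    for q in channels.values():
        q.put(None)
    for t in threads:
        t.join()
    return results
```

Each per-label result list corresponds to the primarily classified data that would be stored under that label in the first storage list.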
In the above, the labeling log comprises: the labeling time, the labeling attribute, the label used, and the storage position.
In the above, capturing the use state and occurrence frequency of the labels comprises: counting the occurrence frequency of different labels over several periods, prompting with the most recently used labeling attribute, prompting with the labels that appear most frequently in the current period, and persistently displaying labels that appear frequently across several consecutive periods.
In the above, the annotation file comprises an image file and a corresponding annotation file.
In the above, the target detection module performs target detection using a trained yolo-v5 model; the targets in the image are predicted based on the detection, and an xml annotation file is formed and stored.
Example 1
An intelligent auxiliary annotation method for images, comprising the following steps:
S1, scanning an acquired image, loading one or more label units during scanning, starting one or more scanning units, and executing one or more primary classification threads to correspondingly label the acquired image and form a labeled training image set and a corresponding labeling log; correspondingly storing the training image set and the corresponding labeling logs according to the primary classification threads of their labels, and forming a first storage list;
S2, starting one or more training units and correspondingly starting one or more data transmission channels, training the labeling logs and the training image sets correspondingly stored in the storage module according to different labels to capture the use state and occurrence frequency of each label, and marking high-frequency labels in their label attributes; monitoring the label attributes in real time over a set period to check the number of times each label is applied within the set period, evaluating usage frequency from that count, and setting the priority level of each label according to the evaluation result;
S3, screening the labeling logs according to the set priority levels of the labels, reclassifying the training image sets corresponding to the screened labeling logs, calling the first storage list, and updating the storage labels of the reclassified training image sets and the corresponding labeling logs on the basis of the first storage list to form a second storage list;
S4, the target detection module performs target detection on the training image set according to priority level based on the second storage list to form an annotation file corresponding to the annotation frame and the labeling log, and the labeling of the image by the label unit is optimized through the annotation file.
The embodiment 1 is used for intelligently perfecting the labeling tool based on image classification, target detection and log data analysis, so that the labeling efficiency is improved on one hand, and the labeling accuracy is also improved on the other hand.
Example 2
An intelligent auxiliary annotation method for images comprises the following steps:
S1, scanning an acquired image, loading one or more label units during scanning, starting one or more scanning units, and executing one or more primary classification threads to correspondingly label the acquired image and form a labeled training image set and a corresponding labeling log; correspondingly storing the training image set and the corresponding labeling logs according to the primary classification threads of their labels, and forming a first storage list;
S2, starting one or more training units and correspondingly starting one or more data transmission channels, training the labeling logs and the training image sets correspondingly stored in the storage module according to different labels to capture the use state and occurrence frequency of each label, and marking high-frequency labels in their label attributes; monitoring the label attributes in real time over a set period to check the number of times each label is applied within the set period, evaluating usage frequency from that count, and setting the priority level of each label according to the evaluation result;
S3, screening the labeling logs according to the set priority levels of the labels, reclassifying the training image sets corresponding to the screened labeling logs, calling the first storage list, and updating the storage labels of the reclassified training image sets and the corresponding labeling logs on the basis of the first storage list to form a second storage list;
S4, the target detection module performs target detection on the training image set according to priority level based on the second storage list to form an annotation file corresponding to the annotation frame and the labeling log, and the labeling of the image by the label unit is optimized through the annotation file.
S3.1, forming the second storage list based on S3 to obtain the training image set corresponding to the second storage list, and performing target detection on the training image set according to priority level to determine the positions of the marking frames in the second storage list and the corresponding training image set;
S3.2, based on the positions in the second storage list, intercepting the corresponding sub-content contained in the training image set to obtain sub-image data corresponding to the sub-content;
S3.3, starting one or more training units based on the sub-image data, training the sub-image data to capture the use state and occurrence frequency of the labels corresponding to the sub-image data, and marking high-frequency labels in their label attributes; monitoring the label attributes in real time over a set period to check the number of times each label is applied within the set period, predicting according to that count, and displaying the predicted result as a prompt;
the above-mentioned S3 and S4 are repeated for a plurality of cycles.
Embodiment 2 may also retrain on the labeled result to obtain an optimized result, and may also prompt and display the labeled result, helping an analyst to analyze it in order to optimize the labeling unit.
While embodiments of the present invention have been described above, the foregoing description is illustrative rather than exhaustive and is not limited to the disclosed embodiments. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terminology used herein was chosen to best explain the principles of the embodiments and their practical application or technical improvement over technologies found in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.
Claims (6)
1. An intelligent auxiliary labeling method for an image is characterized by comprising the following steps:
firstly, scanning an acquired image, loading one or more label units during scanning, starting one or more scanning units, and executing one or more primary classification threads to correspondingly label the acquired image and form a labeled training image set and a corresponding labeling log; correspondingly storing the training image set and the corresponding labeling logs according to the primary classification threads of their labels, and forming a first storage list;
secondly, starting one or more training units and correspondingly one or more data transmission channels, and training the labeling logs and training image sets correspondingly stored in the storage module according to different labels to capture the use state and occurrence frequency of each label, and marking high-frequency labels in their label attributes; monitoring the label attributes in real time over a set period to check the number of times each label is applied within the set period, evaluating usage frequency from that count, and setting the priority level of each label according to the evaluation result;
thirdly, screening the labeled logs according to the priority levels of the set labels, reclassifying the training image sets corresponding to the screened labeled logs, calling a first storage list, and updating the reclassified training image sets and the storage labels of the corresponding labeled logs on the basis of the first storage list to form a second storage list;
the target detection module is used for carrying out target detection on the training image set according to the priority level based on the second storage list so as to form an annotation file corresponding to an annotation frame and an annotation log, and optimizing the annotation of the label unit on the image through the annotation file;
the step of marking the image by optimizing the label unit through the marking file comprises the following steps:
forming the second storage list based on step three to obtain the training image set corresponding to the second storage list, and performing target detection on the training image set according to priority level to determine the positions of the marking frames in the second storage list and the corresponding training image set;
intercepting corresponding sub-content contained in the training image set based on the position in the second storage list to obtain sub-image data corresponding to the sub-content;
starting one or more training units based on the sub-image data, training the sub-image data to capture the use state and occurrence frequency of the labels corresponding to the sub-image data, and marking high-frequency labels in their label attributes; and monitoring the label attributes in real time over a set period to check the number of times each label is applied within the set period, predicting according to that count, and displaying the predicted result as a prompt.
2. The intelligent auxiliary labeling method for an image according to claim 1, wherein the labeling log comprises: the labeling time, the labeling attribute, the label used, and the storage position.
3. The intelligent auxiliary labeling method for an image according to claim 1, wherein capturing the use state and occurrence frequency of the labels comprises: counting the occurrence frequency of different labels over several periods, prompting with the most recently used labeling attribute, prompting with the labels that appear most frequently in the current period, and persistently displaying labels that appear frequently across several consecutive periods.
4. The intelligent auxiliary labeling method for an image according to claim 1, wherein the annotation file comprises an image file and a corresponding annotation file.
5. The intelligent auxiliary labeling method for an image according to claim 1, wherein the target detection module performs target detection using a trained yolo-v5 model; the targets in the image are predicted based on the detection, and an xml annotation file is formed and stored.
6. An intelligent auxiliary annotation system for images, characterized by comprising:
The acquisition module is used for acquiring images generated in the operation process;
the scanning module is used for scanning the acquired images, loading one or more label units during scanning, starting one or more scanning units and executing one or more primary classification threads so as to correspondingly label the acquired images and form a labeled training image set and a corresponding labeled log; correspondingly storing the training image set and the corresponding labeling logs according to the primary classification threads corresponding to the labels, and forming a first storage list;
the pre-training model module starts one or more training units and correspondingly starts one or more data transmission channels, trains the labeling logs and training image sets correspondingly stored in the storage module according to different labels to capture the use state and occurrence frequency of each label, and marks high-frequency labels in the label attributes;
the monitoring module is used for monitoring the label attribute in real time according to a set period so as to check the number of times of label marking in the set period, evaluating the use frequency according to the number of times of label marking in the period and setting the priority level of the label according to the evaluation result;
the classification module is used for screening the labeled logs according to the priority levels of the set labels, reclassifying the training image sets corresponding to the screened labeled logs, calling the first storage list, and updating the reclassified training image sets and the storage labels of the corresponding labeled logs on the basis of the first storage list to form a second storage list;
the target detection module is used for carrying out target detection on the training image set according to the priority level based on the second storage list so as to form a labeling file corresponding to a labeling frame and a labeling log and further optimize the labeling of the label unit on the image;
the target detection module comprises a marking frame, an intercepting unit, a storage unit and a sending unit;
the number of the marking frames is multiple, and the marking frames correspond one to one with the second storage list; each marking frame is used for frame-selecting the training image set in the second storage list according to set rules;
the intercepting unit is used for intercepting the sub-content selected by the marking frame;
the storage unit is used for storing the intercepted sub-content;
and the sending unit extracts the sub-contents in the storage unit and sends the sub-contents to the pre-training model module for training according to a set period.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111450317.1A CN114154571B (en) | 2021-12-01 | 2021-12-01 | Intelligent auxiliary labeling method and system for image |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111450317.1A CN114154571B (en) | 2021-12-01 | 2021-12-01 | Intelligent auxiliary labeling method and system for image |
Publications (2)
Publication Number | Publication Date |
---|---|
CN114154571A CN114154571A (en) | 2022-03-08 |
CN114154571B true CN114154571B (en) | 2023-04-07 |
Family
ID=80455476
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202111450317.1A Active CN114154571B (en) | 2021-12-01 | 2021-12-01 | Intelligent auxiliary labeling method and system for image |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114154571B (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115795076B (en) * | 2023-01-09 | 2023-07-14 | Beijing Aqrose Technology Co., Ltd. | Cross-labeling method, device, equipment and storage medium for image data |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CA2885874A1 (en) * | 2014-04-04 | 2015-10-04 | Bradford A. Folkens | Image processing system including image priority |
CA2885858A1 (en) * | 2014-04-04 | 2015-10-04 | Bradford A. Folkens | Image tagging system |
US9767386B2 (en) * | 2015-06-23 | 2017-09-19 | Adobe Systems Incorporated | Training a classifier algorithm used for automatically generating tags to be applied to images |
CN112424822B (en) * | 2018-08-06 | 2024-07-23 | 株式会社岛津制作所 | Method for generating learning data set, method for generating learning model, and image analysis device |
- 2021-12-01: Application CN202111450317.1A filed in China (CN); granted as patent CN114154571B, status Active.
Also Published As
Publication number | Publication date |
---|---|
CN114154571A (en) | 2022-03-08 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN114240939B (en) | Method, system, equipment and medium for detecting appearance defects of mainboard components | |
CN110781839A (en) | Sliding window-based small and medium target identification method in large-size image | |
CN111289538A (en) | PCB element detection system and detection method based on machine vision | |
CN114154571B (en) | Intelligent auxiliary labeling method and system for image | |
CN115797811B (en) | Agricultural product detection method and system based on vision | |
CN115331002A (en) | Method for realizing remote processing of heating power station fault based on AR glasses | |
CN113470005A (en) | Welding spot detection device and welding spot detection method for cylindrical battery cap | |
CN117351271A (en) | Fault monitoring method and system for high-voltage distribution line monitoring equipment and storage medium thereof | |
CN111768380A (en) | Method for detecting surface defects of industrial spare and accessory parts | |
CN117114420B (en) | Image recognition-based industrial and trade safety accident risk management and control system and method | |
CN117372377B (en) | Broken line detection method and device for monocrystalline silicon ridge line and electronic equipment | |
CN114387564A (en) | Head-knocking engine-off pumping-stopping detection method based on YOLOv5 | |
CN114863311A (en) | Automatic tracking method and system for inspection target of transformer substation robot | |
CN114170138A (en) | Unsupervised industrial image anomaly detection model establishing method, detection method and system | |
CN113487166A (en) | Chemical fiber floating filament quality detection method and system based on convolutional neural network | |
CN117636314A (en) | Seedling missing identification method, device, equipment and medium | |
CN111047731A (en) | AR technology-based telecommunication room inspection method and system | |
WO2023280117A1 (en) | Indication signal recognition method and device, and computer storage medium | |
CN115909493A (en) | Teacher improper gesture detection method and system for classroom real-time recorded video | |
CN116977241A (en) | Method, apparatus, computer readable storage medium and computer program product for detecting defects in a vehicle component | |
CN113887644A (en) | Method for constructing material point state classification model and material point automatic calling method | |
CN110782038A (en) | Method and system for automatically marking training sample and method and system for supervised learning | |
CN113804704A (en) | Circuit board detection method, visual detection equipment and device with storage function | |
CN118587571A (en) | Small target recognition system and method based on deep learning and oriented to nuclear power underwater mobile shooting scene | |
CN116483024A (en) | Operation regulation control system for numerical control equipment based on big data |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||