CN114154571A - Intelligent auxiliary labeling method and system for image - Google Patents


Publication number
CN114154571A
Authority
CN
China
Prior art keywords
label
training
labeling
labels
marking
Prior art date
Legal status
Granted
Application number
CN202111450317.1A
Other languages
Chinese (zh)
Other versions
CN114154571B (en)
Inventor
张文
王而川
Current Assignee
Beijing Smart Park Solution Technology Co ltd
Original Assignee
Beijing Smart Park Solution Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Beijing Smart Park Solution Technology Co ltd filed Critical Beijing Smart Park Solution Technology Co ltd
Priority to CN202111450317.1A priority Critical patent/CN114154571B/en
Publication of CN114154571A publication Critical patent/CN114154571A/en
Application granted granted Critical
Publication of CN114154571B publication Critical patent/CN114154571B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02DCLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Analysis (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The invention relates to the technical field of intelligent image annotation, and in particular discloses an intelligent auxiliary annotation method and system for images. The method comprises the following steps: first, scanning the acquired images; second, starting one or more training units and correspondingly one or more data transmission channels, and training on the annotation logs and training image sets stored in the storage module according to their different labels; third, screening the annotation logs, reclassifying the screened training image sets, calling the first storage list, and updating the storage labels of the reclassified training image sets and corresponding annotation logs on the basis of the first storage list to form a second storage list; and fourth, performing target detection. By intelligently refining the annotation tool on the basis of an image classification algorithm, a target detection algorithm, and annotation-log analysis, the invention improves annotation efficiency, reduces the time spent by annotators, and improves annotation accuracy.

Description

Intelligent auxiliary labeling method and system for image
Technical Field
The invention relates to the technical field of intelligent image annotation, and in particular to an intelligent auxiliary annotation method and system for images.
Background
In recent years, with the rapid development of artificial intelligence, algorithms have placed ever higher demands on data, and high-quality structured data at scale has become the propellant of every artificial intelligence company's algorithms. How to annotate data quickly and with high quality is the pain point of manual annotation; this labor-intensive annotation work needs auxiliary, intelligent tools.
How to design a method and system that intelligently assist in annotating unlabeled data is the technical problem to be solved.
Disclosure of Invention
The invention aims to overcome the defects of the prior art and provides an intelligent auxiliary annotation method and system for images.
The technical scheme is as follows:
an intelligent auxiliary annotation method for images, comprising the following steps:
step one, scanning the acquired images; during scanning, loading one or more label units, starting one or more scanning units, and executing one or more primary classification threads to correspondingly annotate the acquired images, forming an annotated training image set and a corresponding annotation log; storing the training image set and corresponding annotation logs according to the primary classification threads of the corresponding labels to form a first storage list;
step two, starting one or more training units and correspondingly one or more data transmission channels, and training on the annotation logs and training image sets stored in the storage module according to their different labels, so as to capture the use state and occurrence frequency of each label and annotate the label attributes of high-frequency labels; monitoring the label attributes in real time over a set period to check the number of times each label is applied within that period, evaluating the use frequency from that count, and setting the priority level of each label according to the evaluation result;
step three, screening the annotation logs according to the set label priority levels, reclassifying the training image sets corresponding to the screened annotation logs, calling the first storage list, and updating the storage labels of the reclassified training image sets and corresponding annotation logs on the basis of the first storage list to form a second storage list; and
step four, the target detection module performing target detection on the training image sets by priority level on the basis of the second storage list, forming an annotation file corresponding to the annotation boxes and annotation logs, and optimizing the label units' annotation of the images through the annotation file.
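The frequency evaluation and priority assignment of step two can be sketched as follows. The threshold value and the high/medium/low level names are illustrative assumptions; the patent does not fix concrete values:

```python
from collections import Counter

def set_priorities(label_events, period_threshold=5):
    """Count how many times each label was applied within one set
    period and assign a priority level from the evaluated use
    frequency. The threshold of 5 and the three levels are assumed
    for illustration only."""
    counts = Counter(label_events)
    priorities = {}
    for label, n in counts.items():
        if n >= period_threshold:
            priorities[label] = "high"
        elif n >= period_threshold // 2:
            priorities[label] = "medium"
        else:
            priorities[label] = "low"
    return counts, priorities
```

The returned priority levels would then drive the screening of the annotation logs in step three.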
Preferably, the annotation log comprises: annotation time, annotation attributes, annotation labels, and storage locations.
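As a minimal sketch, one entry of such an annotation log could be modeled as a record with those four fields; the field names below are illustrative renderings, not identifiers from the patent:

```python
from dataclasses import dataclass

@dataclass
class AnnotationLogEntry:
    """One annotation-log entry: annotation time, annotation
    attribute, annotation label, and storage location."""
    marking_time: str   # when the label was applied
    attribute: str      # the annotated label attribute
    label: str          # the label itself
    storage_path: str   # where the annotated image is stored
```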
Preferably, capturing the use state and occurrence frequency of the labels comprises: counting the occurrence frequency of different labels over a plurality of periods, prompting the most recently applied annotation attribute, prompting labels that appear frequently in the current period, and continuously displaying labels that remain frequent over several periods.
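The per-period bookkeeping described here can be sketched with a sliding window of period counters; the window size and frequency threshold below are assumptions for illustration:

```python
from collections import Counter, deque

class LabelFrequencyMonitor:
    """Track label occurrences over a sliding window of periods and
    report the labels that stay frequent in every period of the
    window, i.e. the labels to display continuously."""

    def __init__(self, window=3, threshold=2):
        self.threshold = threshold
        self.periods = deque(maxlen=window)  # one Counter per period

    def close_period(self, events):
        """Record the labels applied during one finished period."""
        self.periods.append(Counter(events))

    def persistent_labels(self):
        """Labels at or above the threshold in every stored period."""
        if not self.periods:
            return set()
        frequent = [{lb for lb, n in c.items() if n >= self.threshold}
                    for c in self.periods]
        result = frequent[0]
        for s in frequent[1:]:
            result &= s
        return result
```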
Preferably, optimizing the label units' annotation of the images through the annotation file comprises the following steps:
forming the second storage list on the basis of step three to obtain the training image sets corresponding to the second storage list, and performing target detection on the training image sets by priority level to determine the positions of the annotation boxes in the second storage list and the corresponding training image sets;
intercepting, based on the positions in the second storage list, the corresponding sub-content contained in the training image sets to obtain the sub-image data corresponding to that sub-content; and
starting one or more training units based on the sub-image data, training on the sub-image data to capture the use state and occurrence frequency of the labels corresponding to the sub-image data, and annotating the label attributes of high-frequency labels; and monitoring the label attributes in real time over a set period to check the number of times each label is applied within that period, making a prediction from that count, and displaying the predicted result as a prompt.
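The interception of sub-content at the positions of the annotation boxes might look like the following sketch, where each box is an assumed (x1, y1, x2, y2) pixel rectangle:

```python
def crop_sub_images(image, boxes):
    """Crop the sub-content selected by each annotation box from an
    image stored as a nested list (or array) indexed [row][column],
    yielding sub-image data for retraining."""
    sub_images = []
    for (x1, y1, x2, y2) in boxes:
        sub = [row[x1:x2] for row in image[y1:y2]]
        sub_images.append(sub)
    return sub_images
```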
Preferably, the annotation file comprises an image file and an annotation file.
Preferably, the target detection module performs target detection with a trained yolo-v5 model; based on the target detection, the targets in the images are predicted, and an xml annotation file is formed and stored.
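The patent only states that an xml annotation file is formed; a common concrete choice is the Pascal VOC layout, sketched below with the standard library. The detections would come from the trained yolo-v5 model; here they are passed in as plain (label, xmin, ymin, xmax, ymax) tuples:

```python
import xml.etree.ElementTree as ET

def detections_to_voc_xml(filename, width, height, detections):
    """Serialize predicted targets into a Pascal VOC-style XML
    annotation string. The VOC layout is an assumption; the patent
    specifies only that an xml annotation file is stored."""
    root = ET.Element("annotation")
    ET.SubElement(root, "filename").text = filename
    size = ET.SubElement(root, "size")
    ET.SubElement(size, "width").text = str(width)
    ET.SubElement(size, "height").text = str(height)
    for label, xmin, ymin, xmax, ymax in detections:
        obj = ET.SubElement(root, "object")
        ET.SubElement(obj, "name").text = label
        box = ET.SubElement(obj, "bndbox")
        for tag, value in zip(("xmin", "ymin", "xmax", "ymax"),
                              (xmin, ymin, xmax, ymax)):
            ET.SubElement(box, tag).text = str(value)
    return ET.tostring(root, encoding="unicode")
```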
The invention also provides an intelligent auxiliary annotation system for images, comprising:
an acquisition module for acquiring the images generated during operation;
a scanning module for scanning the acquired images; during scanning, loading one or more label units, starting one or more scanning units, and executing one or more primary classification threads to correspondingly annotate the acquired images, forming an annotated training image set and a corresponding annotation log; and storing the training image set and corresponding annotation logs according to the primary classification threads of the corresponding labels to form a first storage list;
a pre-training model module that starts one or more training units and correspondingly one or more data transmission channels, and trains on the annotation logs and training image sets stored in the storage module according to their different labels, so as to capture the use state and occurrence frequency of each label and annotate the label attributes of high-frequency labels;
a monitoring module for monitoring the label attributes in real time over a set period to check the number of times each label is applied within that period, evaluating the use frequency from that count, and setting the priority level of each label according to the evaluation result;
a classification module for screening the annotation logs according to the set label priority levels, reclassifying the training image sets corresponding to the screened annotation logs, calling the first storage list, and updating the storage labels of the reclassified training image sets and corresponding annotation logs on the basis of the first storage list to form a second storage list; and
a target detection module for performing target detection on the training image sets by priority level on the basis of the second storage list, so as to form an annotation file corresponding to the annotation boxes and annotation logs and further optimize the label units' annotation of the images.
Preferably, the target detection module comprises annotation boxes, an intercepting unit, a storage unit, and a sending unit;
the annotation boxes are multiple in number, each in one-to-one correspondence with the second storage list; the annotation boxes select, according to set rules, the training image sets in the same second storage list;
the intercepting unit intercepts the sub-content selected by the annotation boxes;
the storage unit stores the intercepted sub-content; and
the sending unit extracts the sub-content from the storage unit and, on a set period, sends it to the pre-training model module for training.
Compared with the prior art, the invention has the following beneficial effects:
the annotation tool is intelligently refined on the basis of image classification, target detection, and annotation-log analysis, which on the one hand improves annotation efficiency and on the other hand also improves annotation accuracy;
the application performs annotation intelligently, without manual interference during the annotation process; the annotation results can be retrained to obtain an optimized result, and can also be prompted and displayed, helping analysts analyze them in order to optimize the annotation units.
Drawings
The invention is illustrated and described in the following drawings by way of example only and not by way of limitation, in which:
FIG. 1: schematic flow diagram of annotation in embodiment 2 of the invention;
FIG. 2: schematic structural diagram of the intelligent auxiliary annotation system for images of the invention;
FIG. 3: schematic structural diagram of the target detection module in the annotation system of the invention;
FIG. 4: schematic diagram of the annotation process in an embodiment of the invention.
Detailed Description
In order to make the objects, technical solutions, design methods, and advantages of the present invention clearer, the invention is further described in detail below through specific embodiments with reference to the accompanying drawings. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit it.
Referring to FIG. 1, the invention provides an intelligent auxiliary annotation system for images, comprising:
an acquisition module for acquiring the images generated during operation;
a scanning module for scanning the acquired images; during scanning, loading one or more label units, starting one or more scanning units, and executing one or more primary classification threads to correspondingly annotate the acquired images, forming an annotated training image set and a corresponding annotation log; and storing the training image set and corresponding annotation logs according to the primary classification threads of the corresponding labels to form a first storage list;
a pre-training model module that starts one or more training units and correspondingly one or more data transmission channels, and trains on the annotation logs and training image sets stored in the storage module according to their different labels, so as to capture the use state and occurrence frequency of each label and annotate the label attributes of high-frequency labels;
a monitoring module for monitoring the label attributes in real time over a set period to check the number of times each label is applied within that period, evaluating the use frequency from that count, and setting the priority level of each label according to the evaluation result;
a classification module for screening the annotation logs according to the set label priority levels, reclassifying the training image sets corresponding to the screened annotation logs, calling the first storage list, and updating the storage labels of the reclassified training image sets and corresponding annotation logs on the basis of the first storage list to form a second storage list; and
a target detection module for performing target detection on the training image sets by priority level on the basis of the second storage list, so as to form an annotation file corresponding to the annotation boxes and annotation logs and further optimize the label units' annotation of the images.
The target detection module comprises annotation boxes, an intercepting unit, a storage unit, and a sending unit;
in the above, the annotation boxes are multiple in number, each in one-to-one correspondence with the second storage list; the annotation boxes select, according to set rules, the training image sets in the same second storage list;
the intercepting unit intercepts the sub-content selected by the annotation boxes;
the storage unit stores the intercepted sub-content; and
the sending unit extracts the sub-content from the storage unit and, on a set period, sends it to the pre-training model module for training.
In the above, the label units, scanning units, and primary classification threads correspond one to one; that is, one label unit corresponds to one scanning unit, and one scanning unit corresponds to one primary classification thread. Since the images generated during operation come from more than one source, in order to speed up scanning, one or more label units are provided, one or more scanning units are correspondingly started, and one or more primary classification threads are correspondingly executed.
In the above, the transmission channels and training units correspond one to one. Because one or more primary classification threads are used, the primarily classified data is stored correspondingly; during training, one or more data transmission channels are started, and the annotation logs and training image sets stored in the storage module according to their different labels are transmitted through these channels to the corresponding training units to be trained respectively.
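The one-to-one correspondence of data transmission channels and training units described above can be sketched with one queue and one worker thread per label; `train_fn` is a placeholder standing in for the patent's training step:

```python
import queue
import threading

def start_channels(labels, train_fn):
    """Open one data-transmission channel (queue) and one training
    unit (worker thread) per label; each worker trains only on the
    items sent through its own channel."""
    channels = {label: queue.Queue() for label in labels}
    results = {}

    def worker(label, q):
        batch = []
        while True:
            item = q.get()
            if item is None:          # sentinel: channel closed
                break
            batch.append(item)
        results[label] = train_fn(label, batch)

    threads = [threading.Thread(target=worker, args=(label, q))
               for label, q in channels.items()]
    for t in threads:
        t.start()
    return channels, threads, results
```

Feeding items into `channels[label]` and closing each channel with a `None` sentinel lets every training unit process its own label's data independently.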
In the above, the annotation log comprises: annotation time, annotation attributes, annotation labels, and storage locations.
In the above, capturing the use state and occurrence frequency of the labels comprises:
counting the occurrence frequency of different labels over a plurality of periods, prompting the most recently applied annotation attribute, prompting labels that appear frequently in the current period, and continuously displaying labels that remain frequent over several periods.
In the above, the annotation file includes an image file and an annotation file.
In the above, the target detection module performs target detection with a trained yolo-v5 model; based on the target detection, the targets in the images are predicted, and an xml annotation file is formed and stored.
Embodiment 1
An intelligent auxiliary annotation method for images, comprising the following steps:
s1, scanning the collected images, loading one or more label units during scanning, starting one or more scanning units, and executing one or more primary classification threads to correspondingly label the collected images to form a labeled training image set and a corresponding labeled log; correspondingly storing the training image set and the corresponding labeling logs according to the primary classification threads corresponding to the labels, and forming a first storage list;
s2, starting one or more training units and one or more data transmission channels correspondingly, training the label log and the training image set correspondingly stored in the storage module according to different labels to capture the use state and the occurrence frequency of the labels, and labeling the label attributes of the labels with high frequency; monitoring the label attribute in real time according to a set period to check the number of times of label marking in the set period, evaluating the use frequency according to the number of times of label marking in the period, and setting the priority level of the label according to the evaluation result;
s3, the labeling logs are screened according to the priority levels of the set labels, the training image sets corresponding to the screened labeling logs are classified again, the first storage list is called, and the training image sets classified again and the storage labels of the corresponding labeling logs are updated on the basis of the first storage list to form a second storage list;
and S4, the target detection module performs target detection on the training image set according to the priority level based on the second storage list to form an annotation file corresponding to the annotation frame and the annotation log, and optimizes the annotation of the label unit to the image through the annotation file.
Embodiment 1 intelligently refines the annotation tool on the basis of image classification, target detection, and annotation-log analysis, which on the one hand improves annotation efficiency and on the other hand also improves annotation accuracy.
Embodiment 2
An intelligent auxiliary annotation method for images, comprising the following steps:
s1, scanning the collected images, loading one or more label units during scanning, starting one or more scanning units, and executing one or more primary classification threads to correspondingly label the collected images to form a labeled training image set and a corresponding labeled log; correspondingly storing the training image set and the corresponding labeling logs according to the primary classification threads corresponding to the labels, and forming a first storage list;
s2, starting one or more training units and one or more data transmission channels correspondingly, training the label log and the training image set correspondingly stored in the storage module according to different labels to capture the use state and the occurrence frequency of the labels, and labeling the label attributes of the labels with high frequency; monitoring the label attribute in real time according to a set period to check the number of times of label marking in the set period, evaluating the use frequency according to the number of times of label marking in the period, and setting the priority level of the label according to the evaluation result;
s3, the labeling logs are screened according to the priority levels of the set labels, the training image sets corresponding to the screened labeling logs are classified again, the first storage list is called, and the training image sets classified again and the storage labels of the corresponding labeling logs are updated on the basis of the first storage list to form a second storage list;
and S4, the target detection module performs target detection on the training image set according to the priority level based on the second storage list to form an annotation file corresponding to the annotation frame and the annotation log, and optimizes the annotation of the label unit to the image through the annotation file.
S3.1, forming the second storage list on the basis of S3 to obtain the training image sets corresponding to the second storage list, and performing target detection on the training image sets by priority level to determine the positions of the annotation boxes in the second storage list and the corresponding training image sets;
S3.2, intercepting, based on the positions in the second storage list, the corresponding sub-content contained in the training image sets to obtain the sub-image data corresponding to that sub-content;
starting one or more training units based on the sub-image data, training on the sub-image data to capture the use state and occurrence frequency of the labels corresponding to the sub-image data, and annotating the label attributes of high-frequency labels; monitoring the label attributes in real time over a set period to check the number of times each label is applied within that period, making a prediction from that count, and displaying the predicted result as a prompt;
the above-mentioned S3 and S4 are repeated for a plurality of cycles.
Embodiment 2 may also retrain the annotation results to obtain an optimized result, and may prompt and display the annotation results, helping an analyst analyze them in order to optimize the annotation units.
Having described embodiments of the present invention, the foregoing description is intended to be exemplary, not exhaustive, and not limited to the embodiments disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terminology used herein is chosen in order to best explain the principles of the embodiments, the practical application, or improvements made to the technology in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.

Claims (8)

1. An intelligent auxiliary annotation method for images, characterized by comprising the following steps:
step one, scanning the acquired images; during scanning, loading one or more label units, starting one or more scanning units, and executing one or more primary classification threads to correspondingly annotate the acquired images, forming an annotated training image set and a corresponding annotation log; storing the training image set and corresponding annotation logs according to the primary classification threads of the corresponding labels to form a first storage list;
step two, starting one or more training units and correspondingly one or more data transmission channels, and training on the annotation logs and training image sets stored in the storage module according to their different labels, so as to capture the use state and occurrence frequency of each label and annotate the label attributes of high-frequency labels; monitoring the label attributes in real time over a set period to check the number of times each label is applied within that period, evaluating the use frequency from that count, and setting the priority level of each label according to the evaluation result;
step three, screening the annotation logs according to the set label priority levels, reclassifying the training image sets corresponding to the screened annotation logs, calling the first storage list, and updating the storage labels of the reclassified training image sets and corresponding annotation logs on the basis of the first storage list to form a second storage list; and
step four, the target detection module performing target detection on the training image sets by priority level on the basis of the second storage list, forming an annotation file corresponding to the annotation boxes and annotation logs, and optimizing the label units' annotation of the images through the annotation file.
2. The intelligent auxiliary annotation method for images according to claim 1, wherein the annotation log comprises: annotation time, annotation attributes, annotation labels, and storage locations.
3. The intelligent auxiliary annotation method for images according to claim 1, wherein capturing the use state and occurrence frequency of the labels comprises: counting the occurrence frequency of different labels over a plurality of periods, prompting the most recently applied annotation attribute, prompting labels that appear frequently in the current period, and continuously displaying labels that remain frequent over several periods.
4. The intelligent auxiliary annotation method for images according to claim 1, wherein optimizing the label units' annotation of the images through the annotation file comprises:
forming the second storage list on the basis of step three to obtain the training image sets corresponding to the second storage list, and performing target detection on the training image sets by priority level to determine the positions of the annotation boxes in the second storage list and the corresponding training image sets;
intercepting, based on the positions in the second storage list, the corresponding sub-content contained in the training image sets to obtain the sub-image data corresponding to that sub-content; and
starting one or more training units based on the sub-image data, training on the sub-image data to capture the use state and occurrence frequency of the labels corresponding to the sub-image data, and annotating the label attributes of high-frequency labels; and monitoring the label attributes in real time over a set period to check the number of times each label is applied within that period, making a prediction from that count, and displaying the predicted result as a prompt.
5. The intelligent auxiliary annotation method for images according to claim 1, wherein the annotation file comprises an image file and an annotation file.
6. The intelligent auxiliary annotation method for images according to claim 1, wherein the target detection module performs target detection with a trained yolo-v5 model; based on the target detection, the targets in the images are predicted, and an xml annotation file is formed and stored.
7. An intelligent auxiliary annotation system for images, characterized by comprising:
an acquisition module for acquiring the images generated during operation;
a scanning module for scanning the acquired images; during scanning, loading one or more label units, starting one or more scanning units, and executing one or more primary classification threads to correspondingly annotate the acquired images, forming an annotated training image set and a corresponding annotation log; and storing the training image set and corresponding annotation logs according to the primary classification threads of the corresponding labels to form a first storage list;
a pre-training model module that starts one or more training units and correspondingly one or more data transmission channels, and trains on the annotation logs and training image sets stored in the storage module according to their different labels, so as to capture the use state and occurrence frequency of each label and annotate the label attributes of high-frequency labels;
a monitoring module for monitoring the label attributes in real time over a set period to check the number of times each label is applied within that period, evaluating the use frequency from that count, and setting the priority level of each label according to the evaluation result;
a classification module for screening the annotation logs according to the set label priority levels, reclassifying the training image sets corresponding to the screened annotation logs, calling the first storage list, and updating the storage labels of the reclassified training image sets and corresponding annotation logs on the basis of the first storage list to form a second storage list; and
a target detection module for performing target detection on the training image sets by priority level on the basis of the second storage list, so as to form an annotation file corresponding to the annotation boxes and annotation logs and further optimize the label units' annotation of the images.
8. The intelligent auxiliary labeling system for images according to claim 7, wherein the target detection module comprises labeling frames, a capturing unit, a storage unit and a sending unit;
the number of the labeling frames is plural, and the labeling frames correspond one-to-one to the entries of the second storage list; the labeling frames are used for selecting the training image sets in the second storage list according to set rules;
the capturing unit is used for capturing the sub-content selected by the labeling frames;
the storage unit is used for storing the captured sub-content;
and the sending unit extracts the sub-content from the storage unit and sends it to the pre-training model module for training according to a set period.
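The claim-8 cycle can be sketched as follows: a labeling frame selects a region of a training image, the capturing unit crops that sub-content, the storage unit buffers the crops, and the sending unit flushes the buffer back to the pre-training model module once the set period elapses. The class, the list-of-lists image representation, and the count-based period are all illustrative assumptions, not the claimed implementation.

```python
class TargetDetectionModule:
    """Toy model of the claim-8 target detection module."""

    def __init__(self, send_period=3):
        self.buffer = []          # storage unit: holds captured sub-content
        self.send_period = send_period
        self.sent_batches = []    # stands in for the pre-training model module

    def capture(self, image, frame):
        """Capture the sub-content selected by a labeling frame (x0, y0, x1, y1)."""
        x0, y0, x1, y1 = frame
        crop = [row[x0:x1] for row in image[y0:y1]]
        self.buffer.append(crop)

    def maybe_send(self):
        """Send buffered sub-content for training once the period elapses."""
        if len(self.buffer) >= self.send_period:
            self.sent_batches.append(self.buffer)
            self.buffer = []

# A 4x5 toy "image" whose pixel value equals its column index.
image = [[c for c in range(5)] for _ in range(4)]

mod = TargetDetectionModule(send_period=2)
mod.capture(image, (1, 1, 3, 3))   # 2x2 crop at columns/rows 1..2
mod.capture(image, (0, 0, 2, 2))   # 2x2 crop at the top-left corner
mod.maybe_send()                   # buffer reaches the period, so it is flushed
```

Using a count of captures as the "set period" keeps the sketch deterministic; a real system would more likely flush on a timer.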
CN202111450317.1A 2021-12-01 2021-12-01 Intelligent auxiliary labeling method and system for image Active CN114154571B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111450317.1A CN114154571B (en) 2021-12-01 2021-12-01 Intelligent auxiliary labeling method and system for image


Publications (2)

Publication Number Publication Date
CN114154571A true CN114154571A (en) 2022-03-08
CN114154571B CN114154571B (en) 2023-04-07

Family

ID=80455476

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111450317.1A Active CN114154571B (en) 2021-12-01 2021-12-01 Intelligent auxiliary labeling method and system for image

Country Status (1)

Country Link
CN (1) CN114154571B (en)


Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CA2885874A1 (en) * 2014-04-04 2015-10-04 Bradford A. Folkens Image processing system including image priority
CN105046630A (en) * 2014-04-04 2015-11-11 Image Searcher Inc. Image tag adding system
US20160379091A1 (en) * 2015-06-23 2016-12-29 Adobe Systems Incorporated Training a classifier algorithm used for automatically generating tags to be applied to images
US20210272288A1 (en) * 2018-08-06 2021-09-02 Shimadzu Corporation Training Label Image Correction Method, Trained Model Creation Method, and Image Analysis Device


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
SHENGHUA GAO ET AL: "Automatic image tagging via category label and web data", Proceedings of the 18th ACM International Conference on Multimedia *
ZANG Miao: "Research on Key Technologies of Automatic Image Annotation", China Doctoral Dissertations Full-text Database, Information Science and Technology Series *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115795076A (en) * 2023-01-09 2023-03-14 北京阿丘科技有限公司 Cross labeling method, device and equipment for image data and storage medium
CN115795076B (en) * 2023-01-09 2023-07-14 北京阿丘科技有限公司 Cross-labeling method, device, equipment and storage medium for image data

Also Published As

Publication number Publication date
CN114154571B (en) 2023-04-07

Similar Documents

Publication Publication Date Title
CN114240939B (en) Method, system, equipment and medium for detecting appearance defects of mainboard components
CN110781839A (en) Sliding window-based small and medium target identification method in large-size image
CN111652225B (en) Non-invasive camera shooting and reading method and system based on deep learning
CN112966772A (en) Multi-person online image semi-automatic labeling method and system
CN114154571B (en) Intelligent auxiliary labeling method and system for image
CN110738630A (en) Training method and detection system of recursive deep learning system
CN115797811B (en) Agricultural product detection method and system based on vision
CN115331002A (en) Method for realizing remote processing of heating power station fault based on AR glasses
CN116958889A (en) Semi-supervised small sample target detection method based on pseudo tag
CN111768380A (en) Method for detecting surface defects of industrial spare and accessory parts
CN117114420B (en) Image recognition-based industrial and trade safety accident risk management and control system and method
CN114863311A (en) Automatic tracking method and system for inspection target of transformer substation robot
CN117372377B (en) Broken line detection method and device for monocrystalline silicon ridge line and electronic equipment
CN114170138A (en) Unsupervised industrial image anomaly detection model establishing method, detection method and system
CN110163084A (en) Operator action measure of supervision, device and electronic equipment
CN113487166A (en) Chemical fiber floating filament quality detection method and system based on convolutional neural network
CN113408630A (en) Transformer substation indicator lamp state identification method
CN117351271A (en) Fault monitoring method and system for high-voltage distribution line monitoring equipment and storage medium thereof
CN111047731A (en) AR technology-based telecommunication room inspection method and system
CN114387564A (en) Head-knocking engine-off pumping-stopping detection method based on YOLOv5
CN113887644A (en) Method for constructing material point state classification model and material point automatic calling method
CN113804704A (en) Circuit board detection method, visual detection equipment and device with storage function
CN117671413A (en) Automatic labeling system and automatic labeling method for target detection data
KR20230151337A (en) Machine learning model learning apparatus and method reflecting user's correction information
CN110782038A (en) Method and system for automatically marking training sample and method and system for supervised learning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant