CN113496232A - Label checking method and device - Google Patents

Label checking method and device

Info

Publication number
CN113496232A
Authority
CN
China
Prior art keywords
picture
original training
label
training
original
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010193375.XA
Other languages
Chinese (zh)
Other versions
CN113496232B (en)
Inventor
姚沛
张勍颖
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Hikvision Digital Technology Co Ltd
Original Assignee
Hangzhou Hikvision Digital Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Hikvision Digital Technology Co Ltd filed Critical Hangzhou Hikvision Digital Technology Co Ltd
Priority to CN202010193375.XA
Publication of CN113496232A
Application granted
Publication of CN113496232B
Legal status: Active
Anticipated expiration

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00: Pattern recognition
    • G06F 18/20: Analysing
    • G06F 18/24: Classification techniques
    • G06F 18/21: Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214: Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 20/00: Machine learning

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Software Systems (AREA)
  • Medical Informatics (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Image Analysis (AREA)

Abstract

The application provides a label verification method and device. In the application, picture-label verification data (containing at least the original training label) is generated in advance from an original training picture and the original training label corresponding to it. Then, during or before training of a deep learning model, this verification data is used to verify the training label currently corresponding to the original training picture, and an erroneous current label is corrected in time. This greatly reduces the risk that the training label corresponding to the original training picture is tampered with or otherwise corrupted, and ensures the stability of the model training process.

Description

Label checking method and device
Technical Field
The present application relates to computer technologies, and in particular, to a label verification method and apparatus.
Background
Training a deep learning model is inseparable from its training data set, which mainly comprises training pictures and training labels. The training labels correspond one-to-one with the training pictures and record information about their corresponding pictures, such as the picture category and the picture path. For convenience of description, the initially obtained training picture is referred to as the original training picture, and the training label initially corresponding to it is referred to as the original training label.
In practice, the original training picture itself is not modified, but the original training label corresponding to it can easily be tampered with or even lost. Once the original training label is tampered with or lost, training of the deep learning model is affected.
Disclosure of Invention
The application provides a label verification method and device for verifying the training label currently corresponding to an original training picture.
In one aspect, the present application provides a label verification method, including:
generating picture-label verification data corresponding to an original training picture by using the original training picture and an original training label corresponding to the original training picture, the picture-label verification data including at least the original training label;
when it is determined that the current training label currently corresponding to the original training picture is to be verified, verifying the current training label according to the generated picture-label verification data corresponding to the original training picture; and when the current training label fails the verification, updating the current training label by using the generated picture-label verification data corresponding to the original training picture.
As an embodiment, the generating of the picture-label verification data corresponding to the original training picture by using the original training picture and the original training label corresponding to the original training picture includes:
acquiring an original training picture and an original training label corresponding to the original training picture;
and adding the original training label corresponding to the obtained original training picture to the specified position of the original training picture to obtain the picture-label verification data corresponding to the original training picture.
As an embodiment, the designated position is the tail or the head of the original training picture.
As an embodiment, the determining to check the current training label currently corresponding to the original training picture includes:
when an externally triggered label verification instruction aiming at the original training picture is received, determining to verify a current training label currently corresponding to the original training picture; or,
when detecting that the test accuracy value of the deep learning model is smaller than a set accuracy value, if the original training picture and the current training label currently corresponding to the original training picture do not participate in the training of the deep learning model currently, determining to verify the current training label currently corresponding to the original training picture; the test precision value is the precision value of the deep learning model tested by the test sample.
As an embodiment, the verifying the current training label currently corresponding to the original training picture according to the generated picture-label verification data corresponding to the original training picture includes:
reading an original training label from the generated picture-label verification data corresponding to the original training picture;
and comparing whether the read original training label is consistent with the current training label corresponding to the original training picture, if so, determining that the current training label corresponding to the original training picture passes the verification, and if not, determining that the current training label corresponding to the original training picture does not pass the verification.
As an embodiment, the updating of the current training label currently corresponding to the original training picture by using the generated picture-label verification data corresponding to the original training picture includes:
updating the current training label currently corresponding to the original training picture to the original training label in the picture-label verification data corresponding to the original training picture.
As an embodiment, after updating the current training label currently corresponding to the original training picture by using the generated picture-label verification data corresponding to the original training picture, the method further includes:
and training a deep learning model according to the original training picture and a current training label currently corresponding to the original training picture.
As an embodiment, the method is applied to a server;
and the original training label corresponding to the original training picture is a training label, labeled by a client and obtained by the server, that describes the original training picture.
As an embodiment, after completing training the deep learning model, the method further comprises:
inputting an image to be detected into the deep learning model, so that the deep learning model performs image processing on the input image and outputs a processing result; the image processing includes at least: object recognition, and/or scene segmentation, and/or object detection.
In another aspect, the present application provides a label verification apparatus, including:
a verification data generating unit, configured to generate picture-label verification data corresponding to an original training picture by using the original training picture and an original training label corresponding to the original training picture, the picture-label verification data including at least the original training label;
the verification unit is used for verifying the current training label currently corresponding to the original training picture according to the picture-label verification data which is generated by the verification data generation unit and corresponds to the original training picture when the current training label currently corresponding to the original training picture is determined to be verified;
and the updating unit is used for updating the current training label corresponding to the original training picture by using the generated picture-label verification data corresponding to the original training picture when the verification unit verifies that the current training label corresponding to the original training picture does not pass the verification.
As an embodiment, the verification data generating unit generating the picture-label verification data corresponding to the original training picture by using the original training picture and the original training label corresponding to it includes:
acquiring an original training picture and an original training label corresponding to the original training picture from an initial training data set;
and adding the original training label corresponding to the obtained original training picture to the specified position of the original training picture to obtain the picture-label verification data corresponding to the original training picture.
As an embodiment, the designated position is the tail or the head of the original training picture.
As an embodiment, the determining, by the checking unit, that the checking of the current training label currently corresponding to the original training picture includes:
when an externally triggered label verification instruction aiming at the original training picture is received, determining to verify a current training label currently corresponding to the original training picture; or,
when detecting that the test accuracy value of the deep learning model is smaller than a set accuracy value, if the original training picture and the current training label currently corresponding to the original training picture do not participate in the training of the deep learning model currently, determining to verify the current training label currently corresponding to the original training picture; the test precision value is the precision value of the deep learning model tested by the test sample.
As an embodiment, the verifying unit verifying the current training label currently corresponding to the original training picture according to the generated picture-label verification data corresponding to the original training picture includes:
reading an original training label from the generated picture-label verification data corresponding to the original training picture;
and comparing whether the read original training label is consistent with the current training label corresponding to the original training picture, if so, determining that the current training label corresponding to the original training picture passes the verification, and if not, determining that the current training label corresponding to the original training picture does not pass the verification.
As an embodiment, the updating unit updating the current training label currently corresponding to the original training picture by using the generated picture-label verification data corresponding to the original training picture includes:
updating the current training label currently corresponding to the original training picture to the original training label in the picture-label verification data corresponding to the original training picture.
As an embodiment, after updating the current training label currently corresponding to the original training picture by using the generated picture-label verification data corresponding to the original training picture, the updating unit further triggers training of the deep learning model according to the original training picture and the current training label currently corresponding to it.
As an embodiment, after the deep learning model completes training, the updating unit further triggers inputting an image to be detected into the deep learning model, so that the deep learning model performs image processing on the input image and outputs a processing result; the image processing includes at least: object recognition, and/or scene segmentation, and/or object detection.
In another aspect, the present application provides an electronic device comprising: a processor and a machine-readable storage medium;
the machine-readable storage medium stores machine-executable instructions executable by the processor;
the processor is configured to execute machine-executable instructions to implement the method steps disclosed above.
According to the technical solution above, picture-label verification data (containing at least the original training label) is generated in advance from the original training picture and the original training label corresponding to it. Then, during or before training of the deep learning model, this verification data is used to verify the training label currently corresponding to the original training picture, and an erroneous current label is corrected. This greatly reduces the risk that the training label corresponding to the original training picture is tampered with or otherwise corrupted, and ensures the stability of the model training process.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and together with the description, serve to explain the principles of the disclosure.
FIG. 1 is a flow chart of a method provided by an embodiment of the present application;
FIG. 2 is a flowchart illustrating an implementation of step 101 provided in an embodiment of the present application;
FIG. 3 is a flowchart of the verification of step 102 provided by the embodiments of the present application;
fig. 4 is a diagram of an application scene structure provided in the embodiment of the present application;
FIG. 5 is a block diagram of an apparatus according to an embodiment of the present disclosure;
fig. 6 is a hardware structure diagram of the device according to the embodiment of the present application.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The embodiments described in the following exemplary embodiments do not represent all embodiments consistent with the present application. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present application, as detailed in the appended claims.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used in this application and the appended claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
In order to make the technical solutions provided in the embodiments of the present application better understood and make the above objects, features and advantages of the embodiments of the present application more comprehensible, the technical solutions in the embodiments of the present application are described in further detail below with reference to the accompanying drawings.
In deep learning training as commonly practiced today, the focus is often on how to acquire training labels, but the labels themselves are not verified. This leads to the technical problems described in the background: a training label may be tampered with, or may no longer correspond to its training picture because part of the data was lost. If such tampered or mismatched labels are subsequently used to train the deep learning model, the performance of the trained model cannot meet requirements; errors may even occur during training, so that training cannot continue.
To solve this technical problem, the application provides a label verification method. With this method, training labels are verified as needed, so that a label that fails verification can be updated in time, improving the reliability of deep learning model training.
In order to make the method provided by the present application easier to understand, the method provided by the present application is described in detail below with reference to the accompanying drawings and examples:
referring to fig. 1, fig. 1 is a flowchart of a method provided in an embodiment of the present application. The process can be applied to electronic equipment. In one example, the electronic device herein may be a training server.
As shown in fig. 1, the process may include the following steps:
step 101, generating picture-label verification data corresponding to an original training picture by using the original training picture and an original training label corresponding to the original training picture.
Training a deep learning model requires a training data set, which may include training pictures and the training labels corresponding to them. In one example, the initially obtained training picture may be referred to as the original training picture, and the training label initially corresponding to it as the original training label.
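As an illustrative sketch (the mapping structure, file paths, and category names are assumptions for illustration, not specified by the application), such a training data set can be modeled as a mapping from picture path to training label:

```python
# Hypothetical model of a training data set: each original training picture
# (identified by its path) maps to its original training label.
# The paths and category names below are illustrative assumptions.
original_training_set = {
    "images/train/0001.jpg": "cat",
    "images/train/0002.jpg": "dog",
}

# A label records information about its picture, e.g. the picture category;
# the picture path doubles as the key of the mapping.
label_for_first_picture = original_training_set["images/train/0001.jpg"]
```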
In specific implementation, in this step 101, there are many implementation forms for generating the image-tag verification data corresponding to the original training image by using the original training image and the original training tag corresponding to the original training image, and one implementation manner is illustrated by an example in the flow shown in fig. 2 below, which is not repeated here.
It should be noted that, in this embodiment, no matter how the picture-label verification data corresponding to the original training picture is finally generated in step 101, the finally generated picture-label verification data at least includes the original training label corresponding to the original training picture. The following describes the structure of the picture-tag verification data by way of example with reference to the flow shown in fig. 2, which is not repeated herein.
Step 102, when it is determined that the current training label currently corresponding to the original training picture is to be verified, verify the current training label according to the generated picture-label verification data corresponding to the original training picture; when the current training label fails the verification, update it by using the generated picture-label verification data corresponding to the original training picture.
In practice, the training pictures in the training data set are not tampered with or lost; that is, they always remain the original training pictures. For convenience of description, the training pictures in the training data set are hereinafter referred to as original training pictures.
Although a training picture in the training data set is still the original training picture, the training label corresponding to it may have been tampered with. That is, the training label currently corresponding to the original training picture may no longer be the original training label. For convenience of description, the training label currently corresponding to the original training picture in the training data set is referred to as the current training label.
As an embodiment, there are many ways in step 102 to determine whether to verify the current training label currently corresponding to the original training picture; three of them are described later and are not elaborated here.
As described in step 102, when it is determined that the current training label corresponding to the original training picture is to be verified, the current training label is verified according to the generated picture-label verification data corresponding to the original training picture. There are many ways to perform this verification; one of them is illustrated in fig. 3 below and is not described here.
When the current training label corresponding to the original training picture fails the verification, this indicates that it is no longer the original training label. To prevent the deep learning model from being adversely affected by continuing to train with this label, as described in step 102, the current training label is updated by using the generated picture-label verification data corresponding to the original training picture. As an embodiment, this updating may include: updating the current training label corresponding to the original training picture to the original training label contained in the picture-label verification data corresponding to the original training picture.
After the current training label currently corresponding to the original training picture is updated to the original training label in the picture-label verification data corresponding to the original training picture, the deep learning model can be trained directly according to the original training picture and the current training label currently corresponding to the original training picture.
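The verify-update sequence of step 102 can be sketched as follows. This is a minimal illustration under assumed representations (a byte-level delimiter between picture and label bytes, and a dict-based training data set); the application does not prescribe these details.

```python
# Sketch of step 102: verify the current label against the original label stored
# in the picture-label verification data, and repair it on mismatch.
# The delimiter, data structures, and names here are illustrative assumptions.
DELIMITER = b"\x00LABEL\x00"  # assumed marker between picture bytes and label bytes

def check_and_repair_label(verification_data: bytes, dataset: dict, picture_id: str) -> bool:
    """Return True if the current label passes verification; otherwise restore
    the original label from the verification data and return False."""
    original_label = verification_data.rpartition(DELIMITER)[2].decode("utf-8")
    if dataset[picture_id] == original_label:
        return True                           # verification passed; nothing to do
    dataset[picture_id] = original_label      # verification failed; restore original
    return False
```

After a failed verification the repaired label can be used for training directly, matching the behavior described above.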
Thus, the flow shown in fig. 1 is completed.
As can be seen from the flow shown in fig. 1, in the present application, picture-label verification data (containing at least the original training label) is generated in advance from the original training picture and the original training label corresponding to it. Then, during or before training of the deep learning model, this verification data is used to verify the training label currently corresponding to the original training picture, and an erroneous current label is corrected. This greatly reduces the risk that the training label currently corresponding to the original training picture is tampered with or otherwise corrupted, and ensures the stability of the model training process.
How to generate the picture-label verification data corresponding to the original training picture by using the original training picture and the original training label corresponding to the original training picture in step 101 is described below by way of example in fig. 2:
referring to fig. 2, fig. 2 is a flowchart of a step 101 implemented by an embodiment of the present application. As shown in fig. 2, the process may include the following steps:
step 201, an original training picture and an original training label corresponding to the original training picture are obtained from an initial training data set.
As an embodiment, in step 201, the original training picture and the original training label corresponding to it may be read from the initial training data set. It should be noted that this acquisition is a read operation and does not affect the training data set itself.
Step 202, adding the original training label corresponding to the obtained original training picture to the designated position of the original training picture to obtain the picture-label verification data corresponding to the original training picture.
In one example, in order not to change the picture data of the original picture itself, the specified positions here may be: a tail, or a head. Taking the tail as an example, in this step 202, the original training label corresponding to the obtained original training picture may be added to the tail position of the original training picture, so as to obtain the picture-label verification data corresponding to the original training picture.
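A minimal sketch of this tail-append scheme follows. The delimiter and UTF-8 encoding are assumptions for illustration; the application only requires that the label be appended at a designated position without altering the picture data itself.

```python
# Sketch of step 202: form picture-label verification data by appending the
# original training label to the tail of the picture's bytes.
# The delimiter and UTF-8 encoding are illustrative assumptions.
DELIMITER = b"\x00LABEL\x00"

def make_verification_data(picture_bytes: bytes, original_label: str) -> bytes:
    """Append the label after the picture data, leaving the picture bytes intact."""
    return picture_bytes + DELIMITER + original_label.encode("utf-8")
```

Appending after the end of a picture file (for example, after a JPEG end-of-image marker) typically leaves the picture decodable, which is consistent with the goal of not changing the picture data.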
When the picture-label verification data corresponding to the original training picture is obtained, it can be stored in a designated storage medium such as a database, so that the current training label corresponding to the original training picture can later be verified against it. It should be noted that the picture-label verification data is used only for verifying training labels; it is not used for training the deep learning model.
The flowchart shown in fig. 2 is thus completed.
By the process shown in fig. 2, how to generate the picture-label verification data corresponding to the original training picture by using the original training picture and the original training label corresponding to the original training picture is realized. It should be noted that fig. 2 is only an example and is not intended to be limiting.
How to determine to verify the current training label currently corresponding to the original training picture in step 102 is described in three ways as follows:
mode 1: in an example, in step 102 of this method 1, determining to check the current training label currently corresponding to the original training picture may include: when a label checking instruction which is triggered by an external user and aims at the original training picture is received, the current training label which is currently corresponding to the original training picture is determined to be checked. Note that, the tag verification instruction may be issued before the deep learning model is trained by using the training data set, may be issued when the deep learning model is started to be trained, or may be issued during the deep learning model training process and before the deep learning model is trained by using the original training picture, and the timing of sending the tag verification instruction is not particularly limited in the present application.
Mode 2: in an example, determining in step 102 to verify the current training label currently corresponding to the original training picture may include: when the test accuracy value of the deep learning model is detected to be smaller than a set accuracy value, if the original training picture and its current training label are not currently participating in the training of the deep learning model, determining to verify the current training label. Here, the test accuracy value is the accuracy value obtained by testing the deep learning model with test samples.
Mode 3: in an example, determining in step 102 to verify the current training label currently corresponding to the original training picture may include: when the original training picture and its current training label are being used to train the deep learning model, if the current time point is one of the preset verification time points, determining to verify the current training label.
The above describes, by way of example, three modes of determining in step 102 to check the current training label currently corresponding to the original training picture. These three modes allow label verification to be performed on the original training picture as needed, improving the training reliability of the deep learning model. It should be noted that determining to check the current training label in step 102 is not limited to the three modes described above; many other modes exist, such as simply performing label verification on all original training pictures by default, which is not limited herein.
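The decision logic of the three modes can be sketched as follows. This is a minimal illustration only; the function and parameter names (`should_verify`, `checkpoints`, the 0.9 default threshold) are hypothetical and not taken from this application:

```python
from datetime import datetime

def should_verify(instruction_received=False,
                  test_accuracy=None,
                  accuracy_threshold=0.9,
                  sample_in_training=False,
                  now=None,
                  checkpoints=()):
    """Decide whether to verify the current training label of an original
    training picture, per the three modes described above."""
    # Mode 1: an externally triggered label-verification instruction was received.
    if instruction_received:
        return True
    # Mode 2: the model's test accuracy is below the set value and the sample
    # is not currently participating in training.
    if (test_accuracy is not None and test_accuracy < accuracy_threshold
            and not sample_in_training):
        return True
    # Mode 3: the current time point is one of the preset verification points.
    if now is not None and now in checkpoints:
        return True
    return False
```

A fourth, trivial policy (verify every picture by default) would simply return `True` unconditionally.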
How to verify the current training label currently corresponding to the original training picture according to the generated picture-label verification data corresponding to the original training picture in step 102 is described in the following example:
referring to fig. 3, fig. 3 is a flowchart of verification in step 102 according to an embodiment of the present disclosure. The process is used to implement how to verify the current training label currently corresponding to the original training picture according to the generated picture-label verification data corresponding to the original training picture in step 102.
As shown in fig. 3, the process may include:
step 301, reading an original training label from the generated picture-label verification data corresponding to the original training picture.
As described in the flow shown in fig. 2, the reading of the original training label from the generated picture-label verification data corresponding to the original training picture in step 301 may include: reading the original training label added at the specified position of the original training picture from the generated picture-label verification data corresponding to the original training picture.
Step 302, comparing whether the read original training label is consistent with the current training label corresponding to the original training picture, if so, determining that the current training label corresponding to the original training picture passes the verification, and if not, determining that the current training label corresponding to the original training picture does not pass the verification.
The flow shown in fig. 3 is completed.
Through the flow shown in fig. 3, how to verify the current training label currently corresponding to the original training picture according to the generated picture-label verification data corresponding to the original training picture in step 102 is realized. It should be noted that fig. 3 is only one implementation manner, and is not limited thereto.
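As a concrete illustration of the flow in fig. 3, the sketch below assumes (an assumption for illustration only, not specified here) that the picture-label verification data is the picture's bytes with a fixed-length label appended at the tail; the function names are hypothetical:

```python
def read_original_label(verification_data: bytes, label_size: int) -> bytes:
    """Step 301: read the original training label from the tail of the
    picture-label verification data (assumes a fixed-length label)."""
    return verification_data[-label_size:]

def verify_current_label(verification_data: bytes,
                         current_label: bytes,
                         label_size: int) -> bool:
    """Step 302: compare the read original label with the current label.
    True means the current training label passes verification."""
    return read_original_label(verification_data, label_size) == current_label
```

In practice a length-prefixed or delimited encoding could replace the fixed-length assumption; the comparison logic of step 302 is unchanged.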
The method provided by the present application is described above. To make the method clearer, it is described below in an embodiment with reference to an application scenario:
referring to fig. 4, fig. 4 is a diagram of an application scenario structure provided in the embodiment of the present application. As shown in fig. 4, the application scenario includes a terminal device 401, a terminal device 402, a terminal device 403, a network 404, and a server 405. In one example, the server 405 may include a server or a server cluster. It should be noted that the terminal device, the network, and the server in fig. 4 are only schematic, and any number of terminal devices, networks, and servers may be set according to implementation needs, or an application scenario only including a server or a terminal device may also exist.
In fig. 4, network 404 may include various connection types, such as wired, wireless communication links, or fiber optic cables, among others.
In fig. 4, a user may use terminal devices 401, 402, 403 to interact with a server 405 over a network 404 to receive or send messages or the like. The terminal devices 401, 402, 403 may be various electronic devices having a display screen and supporting web browsing, such as smart phones, tablet computers, laptop portable computers, desktop computers, and the like.
In fig. 4, the server 405 may be a server or a server cluster providing various services, and may analyze and process data such as user requests received from the terminal devices 401, 402, and 403, and feed back processing results to the terminal devices 401, 402, and 403.
As an embodiment, the label verification method provided by the embodiments of the present disclosure may be executed by the server 405. The server 405 may operate based on the flows shown in fig. 1 to 3 above. Taking fig. 1 as an example:
Initially, a user may annotate, through the terminal device 401, 402, or 403, a label (denoted as an original training label) describing an original training picture. After the original training labels corresponding to the original training pictures are annotated through the terminal devices 401, 402, and 403, as an embodiment, the original training labels may be recorded in one set (denoted as a label set), and the original training pictures corresponding to each original training label in the label set may be recorded in another set (denoted as a picture set). In one example, both the label set and the picture set may be stored in a designated database.
The server 405 obtains an original training picture from the picture set, and obtains an original training label corresponding to the original training picture from the label set. When the server 405 acquires the original training picture and the original training label corresponding to the original training picture, the picture-label verification data corresponding to the original training picture is generated according to the flow shown in fig. 2.
Then, as an embodiment, when receiving a label verification instruction sent by the terminal device 401, 402, or 403 through the network 404, and/or when performing model training, the server 405 verifies the current training label currently corresponding to an original training picture. As described above, the original training picture is not easily tampered with, because the picture itself generally remains unchanged. However, the training label corresponding to the original training picture is easily modified, so the label at the time of model training may no longer be the previous original training label. Based on this, in an example, the server 405 may verify the current training label currently corresponding to the original training picture according to the flow shown in fig. 3; when the current training label does not pass the verification, the server updates it using the generated picture-label verification data corresponding to the original training picture, so that the current training label is finally restored to the previous original training label. When the current training label passes the verification, it is still the previous original training label. It can be seen that, through this verification, it can ultimately be ensured that the current training label corresponding to the original training picture is still the original training label.
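The check-and-restore behavior applied by the server 405 before training can be sketched as below; this is a minimal illustration assuming tail-appended fixed-length labels and hypothetical field names:

```python
def restore_labels_before_training(samples, label_size):
    """For each sample, verify the current training label against the original
    label stored in the picture-label verification data; if the check fails,
    update (restore) the current label to the original one."""
    for sample in samples:
        original = sample["verification_data"][-label_size:]
        if sample["current_label"] != original:   # verification failed
            sample["current_label"] = original    # restore the original label
    return samples
```

After this pass, every sample's current training label is guaranteed to equal its original training label, so training can proceed directly.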
Thereafter, as an embodiment, the server 405 may train the deep learning model directly according to the original training picture and the current training label (i.e. the original training label at this time) currently corresponding to the original training picture.
After the deep learning model is trained, the server 405 inputs the image to be detected to the deep learning model, so that the deep learning model performs image processing on the input image and outputs a processing result.
In one example, the image processing at least includes object recognition. Object recognition is, for example, object classification: identifying from the image which objects belong to a first class, which to a second class, and which to a third class. For example, applied to a monitoring scenario for violating vehicles, violations such as vehicles with high beams turned on in the daytime and vehicles driving with the trunk open are identified from vehicle images.
In one example, the image processing at least includes scene separation. An example of scene separation is partitioning an image into groups of pixel regions with specific semantic meanings and identifying the category of each pixel region. For example, in a lane scenario, a lane image is divided into groups of pixel regions with specific semantics such as lanes, sidewalks, trees, and vehicles, which facilitates image analysis and processing.
In one example, the image processing at least includes object detection. An example of object detection is: detecting target objects in an image using theories and methods from fields such as image processing and pattern recognition, determining the semantic categories of the target objects, and marking the positions of the target objects in the image. For example, applied to a monitoring scenario for violating vehicles, a vehicle with high beams turned on in the daytime is detected from vehicle images.
It should be noted that the above object recognition, and/or scene separation, and/or object detection are only embodiments illustrating image processing; other image processing methods applicable to the above deep learning model (a deep learning model trained on pictures and their corresponding labels) are also applicable, and this embodiment is not limited thereto.
The following describes the apparatus provided in the present application:
referring to fig. 5, fig. 5 is a diagram illustrating a structure of the apparatus according to the present invention. The device includes:
the verification data generating unit is used for generating picture-label verification data corresponding to the original training picture by using the original training picture and an original training label corresponding to the original training picture; the picture-tag verification data at least includes: the original training labels;
the verification unit is used for verifying the current training label currently corresponding to the original training picture according to the picture-label verification data which is generated by the verification data generation unit and corresponds to the original training picture when the current training label currently corresponding to the original training picture is determined to be verified;
and the updating unit is used for updating the current training label corresponding to the original training picture by using the generated picture-label verification data corresponding to the original training picture when the current training label corresponding to the original training picture is not verified by the verifying unit.
As an embodiment, the verification data generating unit generating the picture-label verification data corresponding to the original training picture by using the original training picture and the original training label corresponding to the original training picture includes:
acquiring an original training picture and an original training label corresponding to the original training picture from an initial training data set;
and adding the original training label corresponding to the obtained original training picture to the specified position of the original training picture to obtain the picture-label verification data corresponding to the original training picture.
As an embodiment, the specified position is: a tail, or a head.
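The generation step performed by the verification data generating unit can be sketched as follows, under the (hypothetical) assumption that the picture and the label are byte strings and the label is simply concatenated at the specified position:

```python
def make_verification_data(picture: bytes, label: bytes,
                           position: str = "tail") -> bytes:
    """Add the original training label at the specified position (tail or
    head) of the original training picture to obtain the picture-label
    verification data."""
    if position == "tail":
        return picture + label
    elif position == "head":
        return label + picture
    raise ValueError("position must be 'tail' or 'head'")
```

Appending at the tail leaves the picture's own header bytes intact, which many image decoders rely on; this is one plausible reason to prefer the tail position, though the application leaves the choice open.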
As an embodiment, the determining, by the verification unit, to check the current training label currently corresponding to the original training picture includes:
when an externally triggered label verification instruction aiming at the original training picture is received, determining to verify a current training label currently corresponding to the original training picture; or,
when detecting that the test accuracy value of the deep learning model is smaller than a set accuracy value, if the original training picture and the current training label currently corresponding to the original training picture do not currently participate in the training of the deep learning model, determining to verify the current training label currently corresponding to the original training picture; the test accuracy value is the accuracy value obtained by testing the deep learning model with test samples.
As an embodiment, the verifying unit, according to the generated picture-tag verification data corresponding to the original training picture, performing verification on a current training tag currently corresponding to the original training picture includes:
reading an original training label from the generated picture-label verification data corresponding to the original training picture;
and comparing whether the read original training label is consistent with the current training label corresponding to the original training picture, if so, determining that the current training label corresponding to the original training picture passes the verification, and if not, determining that the current training label corresponding to the original training picture does not pass the verification.
As an embodiment, the updating, by the updating unit, updating the current training label currently corresponding to the original training picture by using the picture-label verification data corresponding to the generated original training picture includes:
and updating the current training label corresponding to the original training picture to the original training label in the picture-label checking data corresponding to the original training picture.
As an embodiment, after updating the current training label currently corresponding to the original training picture by using the picture-label verification data corresponding to the generated original training picture, the updating unit further triggers training of the deep learning model according to the original training picture and the current training label currently corresponding to the original training picture.
As an embodiment, the tag verification apparatus is applied to a server;
and the original training label corresponding to the original training picture is a training label which is obtained by a verification data generation unit in the server and is labeled by the terminal equipment and used for describing the original training picture.
As an embodiment, after completing training the deep learning model, the updating unit further triggers inputting the image to be detected into the deep learning model, so as to perform image processing on the input image by the deep learning model and output a processing result; the image processing includes at least: object recognition, and/or scene separation, and/or object detection.
Thus, the description of the structure of the apparatus shown in fig. 5 is completed.
Correspondingly, the application also provides a hardware structure of the device shown in fig. 6. Referring to fig. 6, the hardware structure may include: a processor and a machine-readable storage medium having stored thereon machine-executable instructions executable by the processor; the processor is configured to execute machine-executable instructions to implement the methods disclosed in the above examples of the present application.
Based on the same application concept as the method, embodiments of the present application further provide a machine-readable storage medium, where several computer instructions are stored, and when the computer instructions are executed by a processor, the method disclosed in the above example of the present application can be implemented.
The machine-readable storage medium may be, for example, any electronic, magnetic, optical, or other physical storage device that can contain or store information such as executable instructions and data. For example, the machine-readable storage medium may be: RAM (Random Access Memory), volatile memory, non-volatile memory, flash memory, a storage drive (e.g., a hard drive), a solid state drive, any type of storage disk (e.g., an optical disc or a DVD), a similar storage medium, or a combination thereof.
The systems, devices, modules or units illustrated in the above embodiments may be implemented by a computer chip or an entity, or by a product with certain functions. A typical implementation device is a computer, which may take the form of a personal computer, laptop computer, cellular telephone, camera phone, smart phone, personal digital assistant, media player, navigation device, email messaging device, game console, tablet computer, wearable device, or a combination of any of these devices.
For convenience of description, the above apparatus is described as being divided into various units by function, which are described separately. Of course, when implementing the present application, the functionality of the units may be implemented in one or more pieces of software and/or hardware.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, embodiments of the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
Furthermore, these computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The above description is only an example of the present application and is not intended to limit the present application. Various modifications and changes may occur to those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present application should be included in the scope of the claims of the present application.

Claims (10)

1. A method for tag verification, the method comprising:
generating picture-label verification data corresponding to the original training picture by using the original training picture and an original training label corresponding to the original training picture; the picture-tag verification data at least includes: the original training labels;
when the current training label corresponding to the original training picture is determined to be verified, the current training label corresponding to the original training picture is verified according to the generated picture-label verification data corresponding to the original training picture, and when the current training label corresponding to the original training picture does not pass the verification, the current training label corresponding to the original training picture is updated by using the generated picture-label verification data corresponding to the original training picture.
2. The method of claim 1, wherein generating the picture-label verification data corresponding to the original training picture by using the original training picture and the original training label corresponding to the original training picture comprises:
acquiring an original training picture and an original training label corresponding to the original training picture;
and adding the original training label corresponding to the obtained original training picture to the specified position of the original training picture to obtain the picture-label verification data corresponding to the original training picture.
3. The method of claim 2, wherein the specified position is: a tail, or a head.
4. The method of claim 1, wherein the determining to verify the current training label currently corresponding to the original training picture comprises:
when an externally triggered label verification instruction aiming at the original training picture is received, determining to verify a current training label currently corresponding to the original training picture; or,
when detecting that the test accuracy value of the deep learning model is smaller than a set accuracy value, if the original training picture and the current training label currently corresponding to the original training picture do not currently participate in the training of the deep learning model, determining to verify the current training label currently corresponding to the original training picture; the test accuracy value is the accuracy value obtained by testing the deep learning model with a test sample.
5. The method according to claim 1, wherein the verifying the current training label currently corresponding to the original training picture according to the generated picture-label verification data corresponding to the original training picture comprises:
reading an original training label from the generated picture-label verification data corresponding to the original training picture;
and comparing whether the read original training label is consistent with the current training label corresponding to the original training picture, if so, determining that the current training label corresponding to the original training picture passes the verification, and if not, determining that the current training label corresponding to the original training picture does not pass the verification.
6. The method of claim 1, wherein the updating the current training label currently corresponding to the original training picture by using the generated picture-label verification data corresponding to the original training picture comprises:
and updating the current training label corresponding to the original training picture to the original training label in the picture-label checking data corresponding to the original training picture.
7. The method of claim 1, wherein after updating a current training label currently corresponding to an original training picture by using the generated picture-label verification data corresponding to the original training picture, the method further comprises:
and training a deep learning model according to the original training picture and a current training label currently corresponding to the original training picture.
8. The method according to any one of claims 1 to 7, wherein the method is applied to a server;
and the original training label corresponding to the original training picture is a training label which is obtained by the server and marked by the terminal equipment and is used for describing the original training picture.
9. The method of claim 7, wherein after completing training the deep learning model, the method further comprises:
inputting an image to be detected into the deep learning model, so that the deep learning model performs image processing on the input image and outputs a processing result; the image processing at least includes: object recognition, and/or scene separation, and/or object detection.
10. A tag verification apparatus, comprising:
the verification data generating unit is used for generating picture-label verification data corresponding to the original training picture by using the original training picture and an original training label corresponding to the original training picture; the picture-tag verification data at least includes: the original training labels;
the verification unit is used for verifying the current training label currently corresponding to the original training picture according to the picture-label verification data which is generated by the verification data generation unit and corresponds to the original training picture when the current training label currently corresponding to the original training picture is determined to be verified;
and the updating unit is used for updating the current training label corresponding to the original training picture by using the generated picture-label verification data corresponding to the original training picture when the verification unit verifies that the current training label corresponding to the original training picture does not pass the verification.
CN202010193375.XA 2020-03-18 2020-03-18 Label verification method and device Active CN113496232B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010193375.XA CN113496232B (en) 2020-03-18 2020-03-18 Label verification method and device

Publications (2)

Publication Number Publication Date
CN113496232A true CN113496232A (en) 2021-10-12
CN113496232B CN113496232B (en) 2024-05-28

Family

ID=77994343

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010193375.XA Active CN113496232B (en) 2020-03-18 2020-03-18 Label verification method and device

Country Status (1)

Country Link
CN (1) CN113496232B (en)

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090281972A1 (en) * 2008-05-06 2009-11-12 Microsoft Corporation Adaptive learning framework for data correction
CN108052959A (en) * 2017-11-15 2018-05-18 南京邮电大学 A kind of method for improving deep learning picture recognition algorithm robustness
CN108897829A (en) * 2018-06-22 2018-11-27 广州多益网络股份有限公司 Modification method, device and the storage medium of data label
CN109840531A (en) * 2017-11-24 2019-06-04 华为技术有限公司 The method and apparatus of training multi-tag disaggregated model
CN109934242A (en) * 2017-12-15 2019-06-25 北京京东尚科信息技术有限公司 Image identification method and device
CN110163849A (en) * 2019-04-28 2019-08-23 上海鹰瞳医疗科技有限公司 Training data processing method, disaggregated model training method and equipment
CN110210535A (en) * 2019-05-21 2019-09-06 北京市商汤科技开发有限公司 Neural network training method and device and image processing method and device
CN110210624A (en) * 2018-07-05 2019-09-06 第四范式(北京)技术有限公司 Execute method, apparatus, equipment and the storage medium of machine-learning process
CN110298386A (en) * 2019-06-10 2019-10-01 成都积微物联集团股份有限公司 A kind of label automation definition method of image content-based
US20190354810A1 (en) * 2018-05-21 2019-11-21 Astound Ai, Inc. Active learning to reduce noise in labels
US20190377972A1 (en) * 2018-06-08 2019-12-12 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Method and apparatus for training, classification model, mobile terminal, and readable storage medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
HUANG Huadong; FANG Xiaoyong; CHEN Zheng; HONG Jun; HUANG Ying: "A Method for Repairing Missing Time-Series Data Based on RBF" (一种基于RBF的时序缺失数据修复方法), Journal of Huaihua University, no. 05, 28 May 2013 (2013-05-28) *

Also Published As

Publication number Publication date
CN113496232B (en) 2024-05-28

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant