CN115019126A - Image sample screening method and device, electronic equipment and storage medium - Google Patents

Image sample screening method and device, electronic equipment and storage medium

Info

Publication number
CN115019126A
CN115019126A
Authority
CN
China
Prior art keywords
image sample
value
loss function
image
training
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN202210580125.0A
Other languages
Chinese (zh)
Inventor
吴华栋
张展鹏
成慧
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Sensetime Technology Co Ltd
Original Assignee
Shenzhen Sensetime Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Sensetime Technology Co Ltd filed Critical Shenzhen Sensetime Technology Co Ltd
Priority to CN202210580125.0A
Publication of CN115019126A
Legal status: Withdrawn

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/77 Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V 10/774 Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/82 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Software Systems (AREA)
  • Multimedia (AREA)
  • Medical Informatics (AREA)
  • Databases & Information Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • General Engineering & Computer Science (AREA)
  • Molecular Biology (AREA)
  • Data Mining & Analysis (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Analysis (AREA)

Abstract

The disclosure relates to a method and a device for screening an image sample, an electronic device and a storage medium. The method comprises the following steps: obtaining a value of a loss function corresponding to the image sample; obtaining difference information between at least two prediction results corresponding to the image sample, and/or obtaining image quality information of the image sample; and performing screening processing on the image sample according to at least one of the difference information and the image quality information and the value of the loss function.

Description

Image sample screening method and device, electronic equipment and storage medium
Technical Field
The present disclosure relates to the field of computer technologies, and in particular, to a method and an apparatus for screening an image sample, an electronic device, and a storage medium.
Background
With the development of technology, sweeping robots have begun to enter thousands of households. Sweeping robots with visual perception capability are developing vigorously, and the quality of the image samples used to train a sweeping robot's model determines, to a great extent, the upper limit of its visual perception capability. Noise present in the image samples will seriously affect the performance of the model. Therefore, screening image samples is important.
Disclosure of Invention
The present disclosure provides a technical scheme for screening image samples.
According to an aspect of the present disclosure, there is provided a method of screening an image sample, including:
obtaining a value of a loss function corresponding to the image sample;
obtaining difference information between at least two prediction results corresponding to the image sample, and/or obtaining image quality information of the image sample;
and performing screening processing on the image sample according to at least one of the difference information and the image quality information and the value of the loss function.
In the method, the value of the loss function corresponding to the image sample is obtained, at least one of difference information between at least two prediction results corresponding to the image sample and image quality information of the image sample is obtained, and the image sample is subjected to screening processing according to the at least one of the difference information and the image quality information and the value of the loss function, so that the accuracy and speed of screening image samples can be improved and the noise of the image sample set reduced, which can improve the effect of neural network training and the performance of the neural network.
In a possible implementation manner, the obtaining a value of a loss function corresponding to an image sample includes: obtaining the average value of the loss function of the image sample in the training process of the neural network;
the screening the image sample according to at least one of the difference information and the image quality information and the value of the loss function includes: and performing screening processing on the image sample according to at least one of the difference information and the image quality information and the average value of the loss function.
Since the average value of the loss function of the noise sample in the training process of the neural network is generally larger, and the average value of the loss function of the non-noise sample in the training process of the neural network is generally smaller, in this implementation manner, the average value of the loss function of the image sample in the training process of the neural network is obtained, and the image sample is subjected to the screening processing according to at least one of the difference information and the image quality information and the average value of the loss function, so that the potential noise sample can be screened more accurately.
In a possible implementation manner, the obtaining an average value of a loss function of the image sample in a training process of the neural network includes:
and obtaining the average value of the loss function of the image sample in the training process of at least two neural networks with different network structures.
In this implementation manner, by obtaining an average value of loss functions of an image sample in the training process of at least two neural networks with different network structures, and performing screening processing on the image sample according to at least one of the difference information and the image quality information and the average value of the loss functions of the image sample in the training process of at least two neural networks with different network structures, it is possible to reduce erroneous judgment and missed judgment.
In a possible implementation manner, the obtaining a value of a loss function corresponding to an image sample includes: obtaining an initial value of a loss function of an image sample at a training starting stage, and obtaining a final value of the loss function of the image sample at a training finishing stage;
the screening the image sample according to at least one of the difference information and the image quality information and the value of the loss function includes: determining a first difference value between an absolute value of a final value of the loss function and an absolute value of an initial value of the loss function; and screening the image sample according to at least one of the difference information and the image quality information and the first difference value.
In this implementation manner, by obtaining an initial value of a loss function of an image sample at a training start stage and a final value of the loss function of the image sample at a training end stage, determining a first difference value between an absolute value of the final value of the loss function and an absolute value of the initial value of the loss function, and performing a screening process on the image sample according to at least one of the difference information and the image quality information and the first difference value, a potential noise sample can be effectively screened out.
In a possible implementation manner, the obtaining an initial value of a loss function of an image sample at a training start stage and obtaining a final value of the loss function of the image sample at a training end stage includes:
determining an initial value of a loss function of an image sample at the initial stage of training according to the value of the loss function of the image sample in the first round of training of the neural network;
and determining the final value of the loss function of the image sample at the training end stage according to the value of the loss function of the image sample in the last round of training of the neural network.
In this implementation, the initial value of the loss function of the image sample at the training start stage is determined according to the value of the loss function of the image sample in the first round of training of the neural network, and the final value of the loss function of the image sample at the training end stage is determined according to the value of the loss function of the image sample in the last round of training of the neural network, so that the initial value of the loss function of the image sample at the training start stage and the final value of the loss function of the image sample at the training end stage can be quickly and accurately determined, thereby contributing to improving the speed and accuracy of image sample screening.
In a possible implementation manner, the determining an initial value of a loss function of the image sample at a training start stage according to a value of the loss function of the image sample in a first round of training of the neural network includes: determining the average value of the loss functions of the image samples in the first round of training of at least two neural networks with different network structures as the initial value of the loss function of the image samples in the training starting stage;
determining a final value of a loss function of the image sample at the training end stage according to a value of the loss function of the image sample in the last round of training of the neural network, including: and determining the average value of the loss functions of the image samples in the last round of training of the at least two neural networks as the final value of the loss functions of the image samples in the training end stage.
In this implementation, an average value of loss functions of an image sample in a first round of training of at least two neural networks with different network structures is determined as an initial value of the loss function of the image sample in a training start stage, an average value of the loss functions of the image sample in a last round of training of the at least two neural networks is determined as a final value of the loss function of the image sample in a training end stage, a first difference value between an absolute value of the final value of the loss function and an absolute value of the initial value of the loss function is determined, and the image sample is subjected to a screening process according to at least one of the difference information and the image quality information and the first difference value, so that erroneous judgment and missing judgment can be reduced.
In a possible implementation manner, the obtaining difference information between at least two prediction results corresponding to the image sample includes:
processing the image sample through at least two neural networks with different network structures to obtain at least two prediction results which are in one-to-one correspondence with the at least two neural networks;
determining difference information between the at least two prediction results.
In the implementation mode, the image sample is processed through at least two neural networks with different network structures, at least two prediction results corresponding to the at least two neural networks one to one are obtained, and difference information between the at least two prediction results is determined, so that misjudgment and missing judgment of the noise sample can be reduced.
In a possible implementation manner, the determining difference information between the at least two prediction results includes: for any one of the at least two prediction results, determining the intersection ratio between a detection frame in the prediction result and an annotation frame corresponding to the image sample; determining the discrete degree of the intersection ratios corresponding to the at least two prediction results;
the screening the image sample according to at least one of the difference information and the image quality information and the value of the loss function includes: and performing screening processing on the image sample according to at least one of the discrete degree and the image quality information and the value of the loss function.
In this implementation manner, for any one of the at least two prediction results, an intersection ratio between a detection frame in the prediction results and an annotation frame corresponding to the image sample is determined, a dispersion degree of the intersection ratio corresponding to the at least two prediction results is determined, and the image sample is subjected to screening processing according to at least one of the dispersion degree and the image quality information and a value of the loss function, so that a potential noise sample can be screened more effectively.
In one possible implementation, the image quality information includes at least one of: the blurriness of the image sample, the sharpness of the image sample, and the size information of the annotation frame in the image sample.
In this implementation, the blurriness and/or sharpness of the image sample are obtained, and the image sample is screened at least according to the blurriness and/or sharpness of the image sample, thereby screening out noise samples; the size information of the annotation frame in the image sample is obtained, and the image sample is screened at least according to this size information, thereby screening out noise samples.
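For illustration only, the following minimal Python sketch computes these quality signals; it assumes OpenCV is available, and the function name, signature, and (x1, y1, x2, y2) frame format are placeholders rather than anything specified in the present disclosure.

```python
import cv2

def image_quality_info(image_path, annotation_boxes):
    """Illustrative quality signals for one image sample.

    annotation_boxes: list of (x1, y1, x2, y2) annotation frames.
    Function name, signature, and box format are assumptions,
    not taken from the disclosure.
    """
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    if gray is None:
        raise IOError("cannot read " + image_path)
    # Variance of the Laplacian is a common sharpness proxy:
    # low variance suggests a blurry image, high variance a sharp one.
    sharpness = cv2.Laplacian(gray, cv2.CV_64F).var()
    # Size of each annotation frame; very small frames are often
    # unreliable labels.
    areas = [(x2 - x1) * (y2 - y1) for x1, y1, x2, y2 in annotation_boxes]
    return {"sharpness": float(sharpness),
            "min_box_area": min(areas) if areas else 0}
```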
In a possible implementation manner, the performing a screening process on the image sample according to at least one of the difference information and the image quality information and the value of the loss function includes:
determining a prediction value of the image sample belonging to a noise sample according to at least one of the difference information and the image quality information and a value of the loss function;
and screening the image sample according to the predicted value.
In this implementation, a predicted value of the image sample belonging to a noise sample is determined according to at least one of the difference information and the image quality information and a value of the loss function, and the image sample is subjected to a screening process according to the predicted value, thereby enabling more objective and accurate screening of the image sample.
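The present disclosure does not fix how the predicted value is computed from these signals, so the sketch below shows one assumed fusion, a simple weighted sum of normalised signals; the weights and function name are hypothetical.

```python
def noise_score(mean_loss, dispersion, blurriness, weights=(0.5, 0.3, 0.2)):
    """Fuse the three screening signals into one score in [0, 1].

    The disclosure does not specify the fusion; this weighted sum and
    its weights are assumptions. Each input is expected to be
    normalised to [0, 1] beforehand (larger = more suspicious).
    """
    w1, w2, w3 = weights
    score = w1 * mean_loss + w2 * dispersion + w3 * blurriness
    return min(max(score, 0.0), 1.0)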
In a possible implementation manner, the performing, according to the predicted value, a screening process on the image sample includes:
in response to the predicted value being greater than a first preset threshold, performing review processing on the image sample;
or,
in response to the predicted value being less than or equal to the first preset threshold, retaining the image sample.
In this implementation, objective and reliable image sample screening can be achieved by performing review processing on the image sample in response to the predicted value being greater than a first preset threshold, or by reserving the image sample in response to the predicted value being less than or equal to the first preset threshold.
In a possible implementation manner, the performing, in response to the prediction value being greater than a first preset threshold, a review process on the image sample includes:
in response to the predicted value being greater than a first preset threshold and the image quality information not meeting a preset image quality condition, discarding the image sample;
or,
and responding to the fact that the predicted value is larger than the first preset threshold value and the image quality information meets the preset image quality condition, and re-labeling the image sample.
In this implementation manner, in response to that the predicted value is greater than the first preset threshold and that the image quality information satisfies the preset image quality condition, the image sample is re-labeled, and after the image sample is re-labeled, the re-labeled image sample may be added to the image sample set, so that noise reduction of the image sample set may be achieved, and the prediction effect of the neural network may be improved.
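As a non-authoritative sketch of the screening decision described above, with a placeholder threshold value:

```python
def screen_sample(noise_score, quality_ok, threshold=0.5):
    """Decide the fate of one image sample.

    noise_score: predicted value that the sample is a noise sample.
    quality_ok:  whether the image quality information satisfies the
                 preset image quality condition.
    The threshold 0.5 is a placeholder; the disclosure only calls it
    a "first preset threshold".
    """
    if noise_score <= threshold:
        return "retain"    # keep the sample as-is
    # score above threshold: the sample goes to review
    if not quality_ok:
        return "discard"   # too poor to re-label usefully
    return "relabel"       # re-annotate, then return it to the sample set
```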
According to an aspect of the present disclosure, there is provided an apparatus for screening an image sample, including:
the first obtaining module is used for obtaining the value of a loss function corresponding to the image sample;
a second obtaining module, configured to obtain difference information between at least two prediction results corresponding to the image sample, and/or obtain image quality information of the image sample;
and the screening module is used for screening the image sample according to at least one of the difference information and the image quality information and the value of the loss function.
In one possible implementation manner, the first obtaining module is configured to: obtaining the average value of the loss function of the image sample in the training process of the neural network;
the screening module is used for: and performing screening processing on the image sample according to at least one of the difference information and the image quality information and the average value of the loss function.
In one possible implementation manner, the first obtaining module is configured to:
and obtaining the average value of the loss function of the image sample in the training process of at least two neural networks with different network structures.
In one possible implementation manner, the first obtaining module is configured to: obtaining an initial value of a loss function of an image sample at a training starting stage, and obtaining a final value of the loss function of the image sample at a training finishing stage;
the screening module is used for: determining a first difference value between an absolute value of a final value of the loss function and an absolute value of an initial value of the loss function; and screening the image sample according to at least one of the difference information and the image quality information and the first difference value.
In one possible implementation manner, the first obtaining module is configured to:
determining an initial value of a loss function of an image sample at the initial stage of training according to the value of the loss function of the image sample in the first round of training of the neural network;
and determining the final value of the loss function of the image sample at the training end stage according to the value of the loss function of the image sample in the last round of training of the neural network.
In one possible implementation manner, the first obtaining module is configured to:
determining the average value of the loss functions of the image samples in the first round of training of at least two neural networks with different network structures as the initial value of the loss function of the image samples in the training starting stage;
and determining the average value of the loss functions of the image samples in the last round of training of the at least two neural networks as the final value of the loss functions of the image samples in the training end stage.
In one possible implementation manner, the second obtaining module is configured to:
processing the image sample through at least two neural networks with different network structures to obtain at least two prediction results which are in one-to-one correspondence with the at least two neural networks;
determining difference information between the at least two prediction results.
In one possible implementation manner, the second obtaining module is configured to: for any one of the at least two prediction results, determining the intersection ratio between a detection frame in the prediction result and an annotation frame corresponding to the image sample; determining the discrete degree of the intersection ratio corresponding to the at least two prediction results;
the screening module is used for: and performing screening processing on the image sample according to at least one of the discrete degree and the image quality information and the value of the loss function.
In one possible implementation, the image quality information includes at least one of: the blurriness of the image sample, the sharpness of the image sample, and the size information of the annotation frame in the image sample.
In one possible implementation, the screening module is configured to:
determining a prediction value that the image sample belongs to a noise sample according to at least one of the difference information and the image quality information and a value of the loss function;
and screening the image sample according to the predicted value.
In one possible implementation, the screening module is configured to:
in response to the predicted value being greater than a first preset threshold, performing review processing on the image sample;
or,
in response to the predicted value being less than or equal to the first preset threshold, retaining the image sample.
In one possible implementation, the screening module is configured to:
in response to the predicted value being greater than a first preset threshold and the image quality information not meeting a preset image quality condition, discarding the image sample;
or,
and in response to the predicted value being larger than the first preset threshold value and the image quality information meeting the preset image quality condition, re-labeling the image sample.
According to an aspect of the present disclosure, there is provided an electronic device including: one or more processors; a memory for storing executable instructions; wherein the one or more processors are configured to invoke the memory-stored executable instructions to perform the above-described method.
According to an aspect of the present disclosure, there is provided a computer readable storage medium having stored thereon computer program instructions which, when executed by a processor, implement the above-described method.
According to an aspect of the present disclosure, there is provided a computer program product comprising computer readable code, or a non-transitory computer readable storage medium carrying computer readable code, which when run in an electronic device, a processor in the electronic device performs the above method.
In the embodiment of the disclosure, the value of the loss function corresponding to an image sample is obtained, at least one of difference information between at least two prediction results corresponding to the image sample and image quality information of the image sample is obtained, and the image sample is subjected to screening processing according to the at least one of the difference information and the image quality information and the value of the loss function, so that the accuracy and speed of screening image samples can be improved, the noise of the image sample set can be reduced, the effect of neural network training can be improved, and the performance of the neural network can be improved.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Other features and aspects of the present disclosure will become apparent from the following detailed description of exemplary embodiments, which proceeds with reference to the accompanying drawings.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain the principles of the disclosure.
Fig. 1 shows a flowchart of a screening method of an image sample provided by an embodiment of the present disclosure.
Fig. 2 shows a block diagram of a screening apparatus for an image sample provided by an embodiment of the present disclosure.
Fig. 3 shows a block diagram of an electronic device 1900 provided by an embodiment of the disclosure.
Detailed Description
Various exemplary embodiments, features and aspects of the present disclosure will be described in detail below with reference to the accompanying drawings. In the drawings, like reference numbers can indicate functionally identical or similar elements. While the various aspects of the embodiments are presented in drawings, the drawings are not necessarily drawn to scale unless specifically indicated.
The word "exemplary" is used exclusively herein to mean "serving as an example, embodiment, or illustration. Any embodiment described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other embodiments.
The term "and/or" herein is merely an association describing an associated object, meaning that three relationships may exist, e.g., a and/or B, may mean: a exists alone, A and B exist simultaneously, and B exists alone. In addition, the term "at least one" herein means any one of a plurality or any combination of at least two of a plurality, for example, including at least one of A, B, C, and may mean including any one or more elements selected from the group consisting of A, B and C.
Furthermore, in the following detailed description, numerous specific details are set forth in order to provide a better understanding of the present disclosure. It will be understood by those skilled in the art that the present disclosure may be practiced without some of these specific details. In some instances, methods, means, elements and circuits that are well known to those skilled in the art have not been described in detail so as not to obscure the present disclosure.
There are various causes of noise in the labeled data of an image sample, which can mainly be summarized as the following three: first, insufficient information during the labeling process prevents the category from being fully described; second, poor image sample quality lowers recognizability, making it difficult for annotators to assign the correct label; third, due to differences in subjectivity, the results obtained by different annotators labeling the same image sample are not completely consistent.
In the related art, screening of image samples often relies on manual work. However, manually screening a large number of image samples is time-consuming and labor-intensive. Some solutions in the related art attempt to find label noise in specific scenes based on specific rules; however, these rules have great limitations, and it is difficult for them to sufficiently mine complex label noise.
The disclosed embodiments provide a method, an apparatus, a device, a storage medium, and a program product for screening an image sample, wherein a value of a loss function corresponding to the image sample is obtained, at least one of difference information between at least two prediction results corresponding to the image sample and image quality information of the image sample is obtained, and the image sample is subjected to a screening process according to the at least one of the difference information and the image quality information and the value of the loss function, so that accuracy and speed of screening the image sample can be improved, noise of an image sample set is reduced, an effect of neural network training can be improved, and performance of a neural network can be improved.
The following describes in detail the screening method of an image sample provided in the embodiments of the present disclosure with reference to the drawings.
Fig. 1 shows a flowchart of a screening method of an image sample provided by an embodiment of the present disclosure. In a possible implementation manner, the execution subject of the screening method for the image sample may be a screening apparatus for the image sample, for example, the screening method for the image sample may be executed by a terminal device or a server or other electronic devices. The terminal device may be a User Equipment (UE), a mobile device, a User terminal, a cellular phone, a cordless phone, a Personal Digital Assistant (PDA), a handheld device, a computing device, a vehicle-mounted device, or a wearable device. In some possible implementations, the screening method of the image sample may be implemented by a processor calling computer readable instructions stored in a memory. As shown in fig. 1, the method for screening an image sample includes steps S11 to S13.
In step S11, the value of the loss function corresponding to the image sample is obtained.
In step S12, difference information between at least two prediction results corresponding to the image sample is obtained, and/or image quality information of the image sample is obtained.
In step S13, a filtering process is performed on the image sample according to at least one of the difference information and the image quality information, and the value of the loss function.
The screening method of the image samples provided by the embodiment of the disclosure can be used for screening the image samples in any image sample set. The screened image sample set can be used for training a neural network for target detection or a neural network for target recognition, and the like. For example, the screened image sample set may be used to train a neural network carried in a floor-sweeping robot, a floor-mopping robot, an AGV (Automated Guided Vehicle), a meal delivery robot, and the like.
Wherein the image sample set comprises a plurality of image samples. In some application scenarios, the image sample may also be referred to as a training image. By adopting the screening method of the image samples provided by the embodiment of the disclosure, each image sample in the image sample set can be respectively subjected to screening treatment.
In the embodiment of the disclosure, an image sample may be predicted through a neural network to obtain a prediction result corresponding to the image sample, and a value of a loss function corresponding to the image sample may be determined according to difference information between the prediction result corresponding to the image sample and annotation data corresponding to the image sample. The prediction result corresponding to the image sample may include at least one of a position of the detection frame, a size of the detection frame, a probability that the detection frame contains the target object, a type of the object in the detection frame, and the like. The target object may be an object of a specified type or a specified object. The number of the designated types may be one or more than two, and the number of the designated objects may be one or more than two. The data type of the annotation data corresponding to the image sample may be the same as or different from the data type of the prediction result corresponding to the image sample.
In a possible implementation manner, the obtaining a value of a loss function corresponding to an image sample includes: obtaining the average value of the loss function of the image sample in the training process of the neural network; the screening the image sample according to at least one of the difference information and the image quality information and the value of the loss function includes: and performing screening processing on the image sample according to at least one of the difference information and the image quality information and the average value of the loss function.
In this implementation, an average of a loss function of the image samples during training of the neural network may be obtained in response to completion of the neural network training. The average value of the loss function of the image sample during the training of the neural network may represent an average value of absolute values of the loss function of the image sample during the training of the neural network.
As one example of this implementation, completion of training of the neural network may be determined in response to the neural network passing testing of the test set. The test set comprises a plurality of test samples, and each test sample in the plurality of test samples is a non-noise sample. For example, each test sample in the test set may be a manually selected, simple image sample. The neural network is tested through a test set consisting of non-noise samples to determine the timing of the end of the training of the neural network, so that the probability of over-fitting and under-fitting of the neural network can be reduced. In addition, the timing of the completion of the neural network training is determined by using a test set consisting of non-noise samples, so that the loss function of the noise samples is larger and the loss function of the non-noise samples is smaller after the neural network training is completed, and the noise samples can be screened out conveniently.
In this example, it may be determined that the neural network passes the test of the test set in response to the values of the loss function corresponding to the test samples in the test set satisfying a preset convergence condition. For example, the preset convergence condition may include at least one of the following: the average value of the absolute values of the loss functions corresponding to the test samples in the test set is smaller than a second preset threshold, the sum of the absolute values of the loss functions corresponding to the test samples in the test set is smaller than a third preset threshold, and the absolute values of the loss functions corresponding to the test samples in the test set are each smaller than a fourth preset threshold.
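A minimal sketch of such a convergence check, assuming per-test-sample loss values are available; the threshold parameters stand in for the second, third, and fourth preset thresholds and carry no values from the disclosure:

```python
import numpy as np

def test_set_converged(test_losses, mean_thresh=None, sum_thresh=None,
                       per_sample_thresh=None):
    """Check the preset convergence condition on the test set.

    test_losses: loss value per test sample under the current model.
    All thresholds are left as parameters; their values are placeholders.
    """
    abs_losses = np.abs(np.asarray(test_losses))
    if mean_thresh is not None and abs_losses.mean() >= mean_thresh:
        return False
    if sum_thresh is not None and abs_losses.sum() >= sum_thresh:
        return False
    if per_sample_thresh is not None and (abs_losses >= per_sample_thresh).any():
        return False
    return True
```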
In one example, the neural network may be tested using the test set in response to the neural network completing an epoch of training. Wherein an epoch may indicate that the neural network is trained once using all image samples in the image sample set; that is, all image samples in the image sample set complete one forward pass and one backward pass through the neural network, which is called an epoch.
In another example, the neural network may be tested using the test set in response to the neural network completing one iteration.
As another example of this implementation, the completion of the neural network training may be determined in response to the training of the neural network reaching a preset number of epochs.
Since the average value of the loss function of the noise sample in the training process of the neural network is generally larger, and the average value of the loss function of the non-noise sample in the training process of the neural network is generally smaller, in this implementation, the image sample is subjected to the screening process by obtaining the average value of the loss function of the image sample in the training process of the neural network, and according to at least one of the difference information and the image quality information and the average value of the loss function, thereby being capable of screening out the potential noise sample more accurately.
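For illustration, assuming the per-sample loss is recorded at every epoch (e.g. with an unreduced loss in the training loop), the average value can be computed as follows; the array name is hypothetical:

```python
import numpy as np

# loss_history[e][i]: loss of image sample i in epoch e, recorded with
# an unreduced (per-sample) loss during training. The name is assumed.
def mean_loss_per_sample(loss_history):
    """Average absolute loss of each sample over the whole training run."""
    return np.abs(np.asarray(loss_history)).mean(axis=0)  # shape (num_samples,)
```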
As an example of this implementation, the obtaining an average value of a loss function of the image sample during the training of the neural network includes: and obtaining the average value of the loss function of the image sample in the training process of at least two neural networks with different network structures.
In one example, the network structures of any two of the at least two neural networks are different from each other.
In another example, there are at least two network structures in the at least two neural networks. In this example, in a case where the number of the neural networks in the at least two neural networks is two, the network structures of the two neural networks are not the same; in a case where the number of the neural networks in the at least two neural networks is three or more, network structures of at least two of the at least three neural networks are not the same.
For example, an average value of the loss function of the image sample during the training of 10 neural networks with different network structures may be obtained. In this example, the 10 neural networks may be denoted as the first neural network to the tenth neural network, respectively. The first neural network may be trained using the image sample set until its training is complete; after the training of the first neural network is completed, the average value of the loss function of the image sample during the training of the first neural network may be determined. Similarly, the average value of the loss function of the image sample during the training of each of the second through tenth neural networks may be determined in turn. The average of these ten per-network averages may then be determined as the average value of the loss function of the image sample during the training of the 10 neural networks.
In this example, by obtaining an average value of loss functions of an image sample in the training process of at least two neural networks with different network structures, and performing a screening process on the image sample according to at least one of the difference information and the image quality information and the average value of the loss functions of the image sample in the training process of at least two neural networks with different network structures, it is possible to reduce erroneous judgment and missed judgment. The misjudgment may indicate that the non-noise sample is determined as the noise sample, and the missed judgment may indicate that the noise sample is determined as the non-noise sample.
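A corresponding sketch for averaging over several networks, under the same assumptions as the single-network sketch above:

```python
import numpy as np

def multi_network_mean_loss(per_net_means):
    """Average the per-sample mean losses over K networks.

    per_net_means[k][i]: mean training loss of sample i under the k-th
    neural network (K = 10 in the example above). Names are assumptions.
    """
    return np.asarray(per_net_means).mean(axis=0)
```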
As another example of this implementation, the obtaining an average of a loss function of the image sample during training of the neural network includes: and acquiring the average value of the loss function of the image sample in the training process of the single neural network.
In a possible implementation manner, the obtaining a value of a loss function corresponding to an image sample includes: obtaining an initial value of a loss function of an image sample at a training starting stage, and obtaining a final value of the loss function of the image sample at a training finishing stage; the screening the image sample according to at least one of the difference information and the image quality information and the value of the loss function includes: determining a first difference value between an absolute value of a final value of the loss function and an absolute value of an initial value of the loss function; and screening the image sample according to at least one of the difference information and the image quality information and the first difference value.
In this implementation, the training start stage may represent the first m rounds of the training process, and the training end stage may represent the last n rounds of the training process, where the first m rounds may represent the first m epochs and the last n rounds may represent the last n epochs. For example, m may equal 2 and n may equal 2; as another example, m may equal 3 and n may equal 3. This implementation does not limit the sizes of m and n, and those skilled in the art can set them flexibly according to the requirements of the actual application scenario.
In this implementation, a difference between an absolute value of a final value of the loss function and an absolute value of an initial value of the loss function may be taken as the first difference value. Alternatively, a difference between an absolute value of an initial value of the loss function and an absolute value of a final value of the loss function may be used as the first difference value.
In the training process of the neural network, the absolute value of the initial value of the loss function corresponding to a simple image sample is usually large, while the absolute value of the final value of the loss function corresponding to a simple image sample is usually small; the absolute value of the initial value of the loss function corresponding to a difficult image sample is usually large, and the absolute value of the final value of the loss function corresponding to a difficult image sample is also usually large. Noise samples are usually mixed in among the difficult samples: the absolute value of the initial value of the loss function corresponding to a noise sample is usually large, and the absolute value of the final value is also usually large. Therefore, in this implementation, by obtaining an initial value of the loss function of an image sample at the training start stage and a final value of the loss function of the image sample at the training end stage, determining a first difference value between the absolute value of the final value and the absolute value of the initial value, and performing screening processing on the image sample according to at least one of the difference information and the image quality information and the first difference value, potential noise samples can be effectively screened out.
As an example of this implementation, the obtaining an initial value of a loss function of the image sample at a training start stage and obtaining a final value of the loss function of the image sample at a training end stage includes: determining an initial value of a loss function of an image sample at the initial stage of training according to the value of the loss function of the image sample in the first round of training of the neural network; and determining the final value of the loss function of the image sample at the training end stage according to the value of the loss function of the image sample in the last round of training of the neural network. In this example, the first round of training may represent the first epoch, and the last round of training may represent the last epoch. In this example, the initial value of the loss function of the image sample at the training start stage is determined according to the value of the loss function of the image sample in the first round of training of the neural network, and the final value of the loss function of the image sample at the training end stage is determined according to the value of the loss function of the image sample in the last round of training of the neural network, so that the initial value of the loss function of the image sample at the training start stage and the final value of the loss function of the image sample at the training end stage can be quickly and accurately determined, and the speed and the accuracy of image sample screening can be improved.
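A minimal sketch of the first difference value computed from first-epoch and last-epoch losses; array names are assumptions:

```python
import numpy as np

def first_difference_value(initial_losses, final_losses):
    """|final| - |initial| loss per sample (one ordering of the first
    difference value; the text permits either order).

    Clean samples drop from a large initial to a small final loss, so
    their value is strongly negative; noise samples stay large, so
    their value is close to zero.
    """
    return np.abs(np.asarray(final_losses)) - np.abs(np.asarray(initial_losses))
```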
In one example, the determining an initial value of the loss function of the image sample at the training start stage according to the value of the loss function of the image sample in the first round of training of the neural network includes: determining the average value of the loss function of an image sample in the first round of training of at least two neural networks with different network structures as the initial value of the loss function of the image sample in the training starting stage; determining a final value of a loss function of the image sample at the training end stage according to a value of the loss function of the image sample in the last round of training of the neural network, including: and determining the average value of the loss functions of the image samples in the last round of training of the at least two neural networks as the final value of the loss functions of the image samples in the training end stage.
In one example, the network structures of any two of the at least two neural networks are different from each other.
In another example, among the at least two neural networks, there are at least two network structures. In this example, in a case where the number of the neural networks in the at least two neural networks is two, the network structures of the two neural networks are not the same; in a case where the number of the neural networks in the at least two neural networks is three or more, network structures of at least two of the at least three neural networks are not the same.
In this example, the average value of the loss function of the image sample in the first round of training of the at least two neural networks with different network structures may represent the average value of the absolute values of the loss function of the image sample in the first round of training of the at least two neural networks; the average of the loss function of the image sample in the last training round of the at least two neural networks may represent an average of absolute values of the loss function of the image sample in the last training round of the at least two neural networks.
In one example, an average value of the loss function of the image sample in the first training round of 10 neural networks with different network structures may be obtained, and the average value of the loss function of the image sample in the first training round of 10 neural networks may be used as an initial value of the loss function of the image sample in the training starting stage; an average value of the loss functions of the image samples in the last training round of the 10 neural networks can be obtained, and the average value of the loss functions of the image samples in the last training round of the 10 neural networks can be used as a final value of the loss functions of the image samples in the training end stage.
In this example, by determining the average value of the loss functions of an image sample in the first round of training of at least two neural networks with different network structures as the initial value of the loss function of the image sample at the training start stage, determining the average value of the loss functions of the image sample in the last round of training of the at least two neural networks as the final value of the loss function of the image sample at the training end stage, determining a first difference value between the absolute value of the final value of the loss function and the absolute value of the initial value of the loss function, and performing screening processing on the image sample according to at least one of the difference information and the image quality information and the first difference value, it is possible to reduce erroneous judgment and missed judgment.
In another example, the determining an initial value of the loss function of the image sample at the training start stage according to the value of the loss function of the image sample in the first round of training of the neural network includes: determining the value of a loss function of an image sample in a first round of training of a designated neural network as the initial value of the loss function of the image sample at the initial stage of training; determining a final value of a loss function of the image sample at the training end stage according to a value of the loss function of the image sample in the last round of training of the neural network, including: and determining the value of the loss function of the image sample in the last round of training of the specified neural network as the final value of the loss function of the image sample in the training end stage. In this example, the image sample may be processed by a single neural network, and an initial value of a loss function of the image sample at a training start stage and a final value of the loss function of the image sample at a training end stage are obtained.
As another example of this implementation, the obtaining an initial value of a loss function of the image sample at a training start stage and obtaining a final value of the loss function of the image sample at a training end stage includes: determining an initial value of a loss function of an image sample at the training starting stage according to an average value of the loss function of the image sample in the first m rounds of training of a neural network; and determining the final value of the loss function of the image sample at the training end stage according to the average value of the loss function of the image sample in the last n rounds of training of the neural network.
In one example, the determining an initial value of the loss function of the image sample at the training start stage according to an average value of the loss functions of the image sample in the first m rounds of training of the neural network includes: determining the average value of the loss functions of the image samples in the front m rounds of training of at least two neural networks with different network structures as the initial value of the loss function of the image samples in the training starting stage; determining a final value of a loss function of the image sample at the training end stage according to an average value of the loss functions of the image sample in the last n rounds of training of the neural network, including: and determining the average value of the loss function of the image sample in the last n rounds of training of at least two neural networks with different network structures as the final value of the loss function of the image sample in the training end stage.
In another example, the determining an initial value of the loss function of the image sample at the training start stage according to an average value of the loss functions of the image sample in the first m rounds of training of the neural network includes: determining the average value of the loss functions of the image samples in the first m rounds of training of the designated neural network as the initial value of the loss function of the image samples in the training starting stage; determining a final value of a loss function of the image sample at the training end stage according to an average value of the loss functions of the image sample in the last n rounds of training of the neural network, including: and determining the average value of the loss functions of the image samples in the last n rounds of training of the specified neural network as the final value of the loss functions of the image samples in the training end stage.
In the embodiment of the present disclosure, the image sample may be subjected to screening processing according to difference information between at least two prediction results corresponding to the image sample. The at least two prediction results may be obtained through at least two neural networks with different structures, or through at least two training stages of the same neural network, which is not limited herein. Wherein the greater the difference between the at least two prediction results, the greater the probability that the image sample belongs to a noise sample; the smaller the difference between the at least two prediction results, the smaller the probability that the image sample belongs to a noise sample.
In a possible implementation manner, the obtaining difference information between at least two prediction results corresponding to the image sample includes: processing the image sample through at least two neural networks with different network structures to obtain at least two prediction results which are in one-to-one correspondence with the at least two neural networks; determining difference information between the at least two prediction results. In one example, the network structures of any two of the at least two neural networks are different from each other. In another example, among the at least two neural networks, there are at least two network structures. In this implementation manner, after training of at least two neural networks with different network structures is completed, the image sample is processed by the at least two neural networks, so as to obtain at least two prediction results corresponding to the at least two neural networks one to one. In one example, the image sample may be processed by 10 neural networks with different network structures, so as to obtain 10 prediction results corresponding to the 10 neural networks one to one, and difference information between the 10 prediction results may be determined. In this implementation manner, the image sample is processed by at least two neural networks with different network structures, so as to obtain at least two prediction results corresponding to the at least two neural networks one to one, and difference information between the at least two prediction results is determined, thereby reducing misjudgment and missed judgment of noise samples.
As an example of this implementation, the determining difference information between the at least two prediction results includes: for any one of the at least two prediction results, determining the intersection ratio between a detection frame in the prediction result and an annotation frame corresponding to the image sample; and determining the dispersion degree of the intersection ratios corresponding to the at least two prediction results. The screening the image sample according to at least one of the difference information and the image quality information and the value of the loss function includes: performing screening processing on the image sample according to at least one of the dispersion degree and the image quality information and the value of the loss function.
In this example, for any one of the at least two prediction results, the area of the intersection region of the detection frame in the prediction result and the annotation frame corresponding to the image sample may be determined, the area of the union region of the two may be determined, and the ratio of the area of the intersection region to the area of the union region may be determined as the intersection ratio between the detection frame in the prediction result and the annotation frame corresponding to the image sample. The dispersion degree of the intersection ratios corresponding to the at least two prediction results may be represented by the variance or the standard deviation of those intersection ratios, which is not limited herein. In this example, the greater the dispersion degree of the intersection ratios corresponding to the at least two prediction results, the greater the difference between the at least two prediction results; the smaller the dispersion degree, the smaller the difference.
In this example, for any one of the at least two prediction results, the intersection ratio between the detection frame in the prediction result and the annotation frame corresponding to the image sample is determined, the dispersion degree of the intersection ratios corresponding to the at least two prediction results is determined, and the image sample is subjected to screening processing according to at least one of the dispersion degree and the image quality information and the value of the loss function, so that potential noise samples can be screened more effectively.
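A minimal sketch of the intersection ratio and its dispersion degree, assuming each prediction result reduces to a single [x1, y1, x2, y2] detection frame; the standard deviation is used here, though the text permits the variance equally:

    import numpy as np

    def intersection_ratio(box_a, box_b):
        """Ratio of the intersection area to the union area of two
        [x1, y1, x2, y2] boxes."""
        x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
        x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
        inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
        area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
        area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
        union = area_a + area_b - inter
        return inter / union if union > 0 else 0.0

    def iou_dispersion(pred_boxes, annotation_box):
        """Standard deviation of the intersection ratios between each
        prediction result and the annotation frame."""
        return float(np.std([intersection_ratio(p, annotation_box)
                             for p in pred_boxes]))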
As another example of this implementation, the determining difference information between the at least two prediction results includes: determining the dispersion degree of the detection frames in the at least two prediction results. The screening the image sample according to at least one of the difference information and the image quality information and the value of the loss function includes: performing screening processing on the image sample according to at least one of the dispersion degree and the image quality information and the value of the loss function. The dispersion degree of the geometric centers of the detection frames in the at least two prediction results may be determined as the dispersion degree of the detection frames; alternatively, the dispersion degree of the upper-left corner points, or of the lower-right corner points, of the detection frames may be used; and so on. In this example, the greater the dispersion degree of the detection frames in the at least two prediction results, the greater the difference between the at least two prediction results; the smaller the dispersion degree, the smaller the difference.
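A sketch of the detection-frame dispersion based on geometric centers, one of the options listed above; the mean distance to the centroid is an assumed dispersion measure, since the text fixes none:

    import numpy as np

    def center_dispersion(pred_boxes):
        """Mean distance of the detection-frame centers from their
        centroid; the upper-left or lower-right corner points could be
        substituted for the centers in the same way."""
        boxes = np.asarray(pred_boxes, dtype=float)
        centers = np.stack([(boxes[:, 0] + boxes[:, 2]) / 2.0,
                            (boxes[:, 1] + boxes[:, 3]) / 2.0], axis=1)
        centroid = centers.mean(axis=0)
        return float(np.linalg.norm(centers - centroid, axis=1).mean())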
As another example of this implementation, the determining difference information between the at least two prediction results includes: determining the area of the intersection region of the detection frames in the at least two prediction results. The screening the image sample according to at least one of the difference information and the image quality information and the value of the loss function includes: performing screening processing on the image sample according to at least one of the area of the intersection region and the image quality information and the value of the loss function. In this example, the smaller the area of the intersection region of the detection frames in the at least two prediction results, the greater the difference between the at least two prediction results; the larger the area of the intersection region, the smaller the difference.
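A sketch of the common intersection area, again assuming [x1, y1, x2, y2] detection frames:

    import numpy as np

    def common_intersection_area(pred_boxes):
        """Area of the region shared by all detection frames; zero when
        the frames do not all overlap, indicating a large difference
        between the prediction results."""
        boxes = np.asarray(pred_boxes, dtype=float)
        x1, y1 = boxes[:, 0].max(), boxes[:, 1].max()
        x2, y2 = boxes[:, 2].min(), boxes[:, 3].min()
        return float(max(0.0, x2 - x1) * max(0.0, y2 - y1))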
In the embodiment of the present disclosure, the image sample may be subjected to a screening process according to image quality information of the image sample. The image quality information of the image sample may be any information that can indicate the image quality of the image sample.
In one possible implementation, the image quality information includes at least one of: the blurriness of the image sample, the sharpness of the image sample, and the size information of the annotation frame in the image sample.
As one example of this implementation, the image quality information of the image sample may include the blurriness of the image sample. As another example, it may include the sharpness of the image sample. As yet another example, it may include both the blurriness and the sharpness of the image sample.
In an application scenario of a mobile robot (e.g., a sweeping robot, a mopping robot, or a meal delivery robot), the image sample may be blurred because the robot is moving, which increases the probability that the image sample is a noise sample. In this implementation, obtaining the blurriness and/or sharpness of the image sample and performing screening processing on the image sample according to at least the blurriness and/or sharpness facilitates the screening out of noise samples.
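The disclosure does not prescribe a blurriness measure; one common choice, offered here purely as an assumption, is the variance of the Laplacian, where a low variance suggests a blurry image:

    import cv2

    def blur_score(image_path):
        """Variance of the Laplacian of the grayscale image; lower values
        suggest stronger blur. Any comparable blurriness or sharpness
        metric could be substituted."""
        gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
        return cv2.Laplacian(gray, cv2.CV_64F).var()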
As another example of this implementation, the image quality information may include the size information of the annotation frame in the image sample. In one example, the size information may be represented by the length and width of the annotation frame; in another example, it may be represented by the area of the annotation frame. In one example, once the size of the annotation frame exceeds a preset area threshold, the larger the annotation frame, the greater the probability that the image sample belongs to a noise sample.
In an application scenario of a mobile robot, because the robot is moving, the camera may come too close to an obstacle (e.g., an object, a person, or an animal), causing defocus. In this implementation, obtaining the size information of the annotation frame in the image sample and performing screening processing on the image sample according to at least that size information facilitates the screening out of noise samples.
In a possible implementation manner, a value of a loss function corresponding to an image sample, difference information between at least two prediction results corresponding to the image sample, and image quality information of the image sample may be obtained, and the image sample may be subjected to a screening process according to the value of the loss function, the difference information, and the image quality information.
In another possible implementation manner, the value of the loss function corresponding to the image sample and the difference information between at least two prediction results corresponding to the image sample may be obtained, and the image sample may be subjected to screening processing according to the value of the loss function and the difference information.
In another possible implementation manner, a value of a loss function corresponding to an image sample and image quality information of the image sample may be obtained, and the image sample is subjected to a screening process according to the value of the loss function and the image quality information.
In a possible implementation manner, the performing a screening process on the image sample according to at least one of the difference information and the image quality information and the value of the loss function includes: determining a prediction value of the image sample belonging to a noise sample according to at least one of the difference information and the image quality information and a value of the loss function; and screening the image sample according to the predicted value.
As an example of this implementation, the probability that the image sample belongs to the noise sample is positively correlated with the prediction value that the image sample belongs to the noise sample. That is, the larger the prediction value that the image sample belongs to the noise sample is, the larger the probability that the image sample belongs to the noise sample is; the smaller the prediction value that the image sample belongs to the noise sample, the smaller the probability that the image sample belongs to the noise sample.
As another example of this implementation, the probability that the image sample belongs to a noise sample is inversely related to the prediction value that the image sample belongs to a noise sample. That is, the larger the prediction value that the image sample belongs to the noise sample is, the smaller the probability that the image sample belongs to the noise sample is; the smaller the prediction value of the image sample belonging to the noise sample is, the greater the probability that the image sample belongs to the noise sample is.
As an example of this implementation, a prediction value that the image sample belongs to a noise sample may be determined according to the value of the loss function, the difference information, and the image quality information.
In one example, the average value |L_avg| of the loss function of the image sample during the training of 10 neural networks with different network structures can be obtained. The average value of the loss function of the image sample in the first round of training of the 10 neural networks may be determined as the initial value of the loss function at the training start stage, the average value of the loss function in the last round of training of the 10 neural networks may be determined as the final value of the loss function at the training end stage, and the first difference value L_diff between the absolute value of the final value and the absolute value of the initial value may be determined. The image sample is processed through the 10 neural networks to obtain 10 prediction results in one-to-one correspondence with the 10 neural networks; for any one of the 10 prediction results, the intersection ratio between the detection frame in the prediction result and the annotation frame corresponding to the image sample is determined, and the dispersion degree B_diff of the intersection ratios corresponding to the 10 prediction results is determined. The blurriness F of the image sample and the size information B_size of the annotation frame in the image sample can also be obtained. The predicted value c that the image sample belongs to a noise sample can then be determined using Equation 1; consistent with the quantities defined above, Equation 1 may take the form:

c = α1·|L_avg| + α2·L_diff + α3·B_diff + α4·F + α5·B_size - S    (Equation 1)

where α1 denotes the weight corresponding to |L_avg|, α2 the weight corresponding to L_diff, α3 the weight corresponding to B_diff, α4 the weight corresponding to F, and α5 the weight corresponding to B_size. α1, α2, α3, α4, and α5 may be determined empirically. b denotes the preset area threshold, and S denotes a preset constant with S ≤ α5·b; for example, S = α5·b, in which case the last two terms reduce to α5·(B_size - b), which contributes positively only when the annotation frame exceeds the area threshold b.
In this example, the probability that the image sample belongs to a noise sample is positively correlated with the prediction value that the image sample belongs to a noise sample.
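A minimal sketch of this computation under the form of Equation 1 given above; the weight tuple, the area threshold b, and the constant S are free parameters determined empirically:

    def noise_prediction_value(l_avg, l_diff, b_diff, blur, b_size, alphas, s):
        """Weighted combination of the screening signals, mirroring
        Equation 1 as given above. alphas holds (a1, a2, a3, a4, a5);
        s is the preset constant S, e.g. s = a5 * b for area threshold b."""
        a1, a2, a3, a4, a5 = alphas
        return (a1 * abs(l_avg) + a2 * l_diff + a3 * b_diff
                + a4 * blur + a5 * b_size - s)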
As another example of this implementation, a prediction value that the image sample belongs to a noise sample may be determined according to the value of the loss function and the difference information.
As another example of this implementation, a prediction value that the image sample belongs to a noise sample may be determined according to the value of the loss function and the image quality information.
For example, the prediction value that the image sample belongs to a noise sample may be obtained by directly summing, or by computing a weighted combination of, at least one of the difference information and the image quality information together with the value of the loss function. In other embodiments, the prediction value may be calculated in other manners. In this implementation, a predicted value that the image sample belongs to a noise sample is determined according to at least one of the difference information and the image quality information and the value of the loss function, and the image sample is subjected to screening processing according to the predicted value, thereby enabling more objective and accurate screening of the image sample.
As an example of this implementation, the performing screening processing on the image sample according to the predicted value includes: in response to the predicted value being greater than a first preset threshold, performing review processing on the image sample; or, in response to the predicted value being less than or equal to the first preset threshold, retaining the image sample. In this example, the probability that the image sample belongs to a noise sample is positively correlated with the predicted value. When the predicted value is greater than the first preset threshold, it may be determined that the image sample is a suspected noise sample, and the image sample may be subjected to review processing. When the predicted value is less than or equal to the first preset threshold, it may be determined that the image sample is a non-noise sample, and the image sample may be retained. In this way, objective and reliable image sample screening can be achieved.
In one example, the performing review processing on the image sample in response to the predicted value being greater than a first preset threshold includes: in response to the predicted value being greater than the first preset threshold and the image quality information not meeting a preset image quality condition, discarding the image sample; or, in response to the predicted value being greater than the first preset threshold and the image quality information meeting the preset image quality condition, re-labeling the image sample.
The preset image quality condition may include at least one of the following: the blurriness is less than or equal to a preset blurriness threshold; the sharpness is greater than or equal to a preset sharpness threshold; the occluded area of the object in the detection frame is less than or equal to a preset area; and the occluded area proportion of the object in the detection frame is less than or equal to a preset area proportion.
In this example, discarding the image sample in response to the predicted value being greater than the first preset threshold and the image quality information not satisfying the preset image quality condition can improve the efficiency of image screening and the prediction effect of the neural network. Re-labeling the image sample in response to the predicted value being greater than the first preset threshold and the image quality information satisfying the preset image quality condition allows the re-labeled sample to be added back into the image sample set, so that noise reduction of the image sample set can be achieved and the prediction effect of the neural network can be improved.
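The three-way decision of this example can be condensed into a short sketch; quality_ok stands for the preset image quality condition, however it is evaluated:

    def review(pred_value, quality_ok, threshold):
        """Retain when the predicted value is at or below the threshold;
        otherwise discard on poor image quality, or send for re-labeling
        when the quality condition is met."""
        if pred_value <= threshold:
            return "retain"
        return "relabel" if quality_ok else "discard"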
In another example, the performing review processing on the image sample in response to the predicted value being greater than the first preset threshold includes: discarding the image sample in response to the predicted value being greater than the first preset threshold.
As another example of this implementation, the performing screening processing on the image sample according to the predicted value includes: sorting the image samples in the image sample set in descending order of predicted value; and retaining N of the sorted image samples, where N is a positive integer. In this example, the probability that the image sample belongs to a noise sample is positively correlated with the predicted value.
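A literal sketch of this ranking step; whether the head or the tail of the descending ranking is the set to retain follows from how the predicted value correlates with noise probability:

    def rank_and_keep(samples, pred_values, n):
        """Sort samples in descending order of predicted value and keep N
        of the sorted samples; slicing order[-n:] instead would keep the
        N samples least likely to be noise."""
        order = sorted(range(len(samples)),
                       key=lambda i: pred_values[i], reverse=True)
        return [samples[i] for i in order[:n]]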
In another possible implementation manner, the performing screening processing on the image sample according to at least one of the difference information and the image quality information and the value of the loss function includes: acquiring a preset image sample screening rule; judging, according to at least one of the difference information and the image quality information and the value of the loss function, whether the image sample complies with the image sample screening rule, to obtain a judgment result; and performing screening processing on the image sample according to the judgment result.
As an example of this implementation, the image sample screening rule may include a first preset condition corresponding to a value of the loss function, a second preset condition corresponding to the difference information, and a third preset condition corresponding to the image quality information. In this example, the image sample may be determined to comply with the image sample screening rule in response to that a value of a loss function corresponding to the image sample complies with the first preset condition, difference information between at least two prediction results corresponding to the image sample complies with the second preset condition, and image quality information of the image sample complies with the third preset condition; otherwise, it may be determined that the image sample does not comply with the image sample screening rule.
As another example of this implementation, the image sample screening rule may include a first preset condition corresponding to the value of the loss function and a second preset condition corresponding to the difference information. In this example, it may be determined that the image sample complies with the image sample screening rule in response to the value of the loss function corresponding to the image sample complying with the first preset condition and the difference information between at least two prediction results corresponding to the image sample complying with the second preset condition; otherwise, it may be determined that the image sample does not comply with the image sample screening rule.
As another example of this implementation, the image sample screening rule may include a first preset condition corresponding to the value of the loss function and a third preset condition corresponding to the image quality information. In this example, it may be determined that the image sample complies with the image sample screening rule in response to the value of the loss function corresponding to the image sample complying with the first preset condition and the image quality information of the image sample complying with the third preset condition; otherwise, it may be determined that the image sample does not comply with the image sample screening rule.
In this implementation, the image sample may be retained in response to the image sample complying with the image sample screening rule, and discarded in response to the image sample not complying with the image sample screening rule.
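A sketch of the rule check with simple upper-bound conditions; the concrete rule structure is an assumption, since the disclosure only requires some preset condition per signal:

    def meets_screening_rules(loss_value, diff_info, blur, rules):
        """True when the sample satisfies every preset condition; the
        sample is then retained, and otherwise discarded."""
        return (loss_value <= rules["max_loss"]
                and diff_info <= rules["max_difference"]
                and blur <= rules["max_blur"])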
The image sample screening method provided by the embodiments of the present disclosure can be applied to application scenarios such as target detection, target recognition, visual perception, sweeping robots, mopping robots, AGVs, and meal delivery robots.
The screening method of the image sample provided by the embodiment of the disclosure is described below through a specific application scenario.
In this application scenario, the average value |L_avg| of the loss function of the image sample during the training of 10 neural networks with different network structures can be obtained.
The average value of the loss function of the image sample in the first round of training of the 10 neural networks can be determined as the initial value of the loss function at the training start stage; the average value of the loss function of the image sample in the last round of training of the 10 neural networks can be determined as the final value of the loss function at the training end stage; and the first difference value L_diff between the absolute value of the final value and the absolute value of the initial value can be determined.
The image sample can be processed through the 10 neural networks to obtain 10 prediction results in one-to-one correspondence with the 10 neural networks. For any one of the 10 prediction results, the intersection ratio between the detection frame in the prediction result and the annotation frame corresponding to the image sample can be determined, and the dispersion degree B_diff of the intersection ratios corresponding to the 10 prediction results can be determined.
The blurriness F of the image sample and the size information B_size of the annotation frame in the image sample can be obtained.
The predicted value that the image sample belongs to a noise sample can be determined using Equation 1 above. The image sample may be discarded in response to the predicted value being greater than a first preset threshold and the image quality information not satisfying a preset image quality condition; the image sample may be re-labeled in response to the predicted value being greater than the first preset threshold and the image quality information satisfying the preset image quality condition; and the image sample may be retained in response to the predicted value being less than or equal to the first preset threshold.
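Pulling the scenario together, a condensed driver might read as follows, assuming the helper sketches above (iou_dispersion, noise_prediction_value, review) are in scope; every numeric value is a placeholder, not a value from the disclosure:

    import numpy as np

    rng = np.random.default_rng(0)
    losses = rng.random((10, 50))         # 10 networks x 50 rounds, stubbed
    l_avg = losses.mean()
    l_diff = abs(losses[:, -1].mean()) - abs(losses[:, 0].mean())

    pred_boxes = [np.array([50.0, 50.0, 150.0, 150.0]) + rng.normal(0, 2, 4)
                  for _ in range(10)]
    annotation = np.array([48.0, 52.0, 149.0, 151.0])
    b_diff = iou_dispersion(pred_boxes, annotation)

    blur, b_size = 0.3, 120.0 * 90.0      # stub quality signals
    alphas = (1.0, 0.5, 1.0, 1.0, 0.001)  # weights chosen empirically
    b = 128.0 * 128.0                     # preset area threshold
    c = noise_prediction_value(l_avg, l_diff, b_diff, blur, b_size,
                               alphas, s=alphas[4] * b)
    print(review(c, quality_ok=blur <= 0.5, threshold=1.5))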
The present disclosure also provides another method for screening an image sample, including: obtaining difference information between at least two prediction results corresponding to the image samples; and screening the image sample according to the difference information.
In a possible implementation manner, the obtaining difference information between at least two prediction results corresponding to the image sample includes: processing the image sample through at least two neural networks with different network structures to obtain at least two prediction results in one-to-one correspondence with the at least two neural networks; and determining difference information between the at least two prediction results.
As an example of this implementation, the determining difference information between the at least two prediction results includes: for any one of the at least two prediction results, determining the intersection ratio between a detection frame in the prediction result and an annotation frame corresponding to the image sample; and determining the dispersion degree of the intersection ratios corresponding to the at least two prediction results. The screening the image sample according to the difference information includes: performing screening processing on the image sample according to the dispersion degree.
In a possible implementation manner, the performing a screening process on the image sample according to the difference information includes: obtaining image quality information of the image sample; and screening the image sample according to the difference information and the image quality information.
The present disclosure also provides another image sample screening method, including: obtaining the size information of an annotation frame in an image sample; and performing screening processing on the image sample according to the size information of the annotation frame.
It is understood that the above-mentioned method embodiments of the present disclosure can be combined with one another to form combined embodiments without departing from the principles and logic; owing to space limitations, details are not repeated in the present disclosure. Those skilled in the art will appreciate that, in the above methods of the specific embodiments, the specific order of execution of the steps should be determined by their functions and possible inherent logic.
In addition, the present disclosure also provides a screening apparatus for image samples, an electronic device, a computer-readable storage medium, and a computer program product, which can be used to implement any one of the screening methods for image samples provided by the present disclosure, and corresponding technical solutions and technical effects can be referred to in corresponding descriptions of the method sections and are not described again.
Fig. 2 shows a block diagram of a screening apparatus for an image sample provided by an embodiment of the present disclosure. As shown in fig. 2, the apparatus for screening an image sample includes:
a first obtaining module 21, configured to obtain a value of a loss function corresponding to an image sample;
a second obtaining module 22, configured to obtain difference information between at least two prediction results corresponding to the image sample, and/or obtain image quality information of the image sample;
a screening module 23, configured to perform screening processing on the image sample according to at least one of the difference information and the image quality information and a value of the loss function.
In a possible implementation manner, the first obtaining module 21 is configured to: obtaining the average value of the loss function of the image sample in the training process of the neural network;
the screening module 23 is configured to: and performing screening processing on the image sample according to at least one of the difference information and the image quality information and the average value of the loss function.
In a possible implementation manner, the first obtaining module 21 is configured to:
and obtaining the average value of the loss function of the image sample in the training process of at least two neural networks with different network structures.
In a possible implementation manner, the first obtaining module 21 is configured to: obtaining an initial value of a loss function of an image sample at a training starting stage, and obtaining a final value of the loss function of the image sample at a training finishing stage;
the screening module 23 is configured to: determining a first difference value between an absolute value of a final value of the loss function and an absolute value of an initial value of the loss function; and screening the image sample according to at least one of the difference information and the image quality information and the first difference value.
In a possible implementation manner, the first obtaining module 21 is configured to:
determining an initial value of a loss function of an image sample at the initial stage of training according to the value of the loss function of the image sample in the first round of training of the neural network;
and determining the final value of the loss function of the image sample at the training end stage according to the value of the loss function of the image sample in the last round of training of the neural network.
In a possible implementation manner, the first obtaining module 21 is configured to:
determining the average value of the loss functions of the image samples in the first round of training of at least two neural networks with different network structures as the initial value of the loss function of the image samples in the training starting stage;
and determining the average value of the loss functions of the image samples in the last round of training of the at least two neural networks as the final value of the loss functions of the image samples in the training end stage.
In a possible implementation manner, the second obtaining module 22 is configured to:
processing the image sample through at least two neural networks with different network structures to obtain at least two prediction results which are in one-to-one correspondence with the at least two neural networks;
determining difference information between the at least two prediction results.
In a possible implementation manner, the second obtaining module 22 is configured to: for any one of the at least two prediction results, determining the intersection ratio between a detection frame in the prediction result and an annotation frame corresponding to the image sample; determining the dispersion degree of the intersection ratios corresponding to the at least two prediction results;
the screening module 23 is configured to: performing screening processing on the image sample according to at least one of the dispersion degree and the image quality information and the value of the loss function.
In one possible implementation, the image quality information includes at least one of: the blurriness of the image sample, the sharpness of the image sample, and the size information of the annotation frame in the image sample.
In a possible implementation manner, the screening module 23 is configured to:
determining a prediction value that the image sample belongs to a noise sample according to at least one of the difference information and the image quality information and a value of the loss function;
and screening the image sample according to the predicted value.
In a possible implementation manner, the screening module 23 is configured to:
in response to the predicted value being greater than a first preset threshold, performing review processing on the image sample;
or,
responsive to the prediction value being less than or equal to the first preset threshold, retaining the image sample.
In a possible implementation manner, the screening module 23 is configured to:
in response to the predicted value being greater than a first preset threshold and the image quality information not meeting a preset image quality condition, discarding the image sample;
or,
in response to the predicted value being greater than the first preset threshold and the image quality information meeting the preset image quality condition, re-labeling the image sample.
In the embodiments of the present disclosure, the value of the loss function corresponding to an image sample is obtained, at least one of difference information between at least two prediction results corresponding to the image sample and image quality information of the image sample is obtained, and the image sample is subjected to screening processing according to the at least one of the difference information and the image quality information and the value of the loss function, so that the accuracy and speed of image sample screening can be improved, the noise of the image sample set can be reduced, the effect of neural network training can be improved, and the performance of the neural network can be improved.
The present disclosure also provides another image sample screening apparatus, including: the third obtaining module is used for obtaining difference information between at least two prediction results corresponding to the image samples; and the screening module is used for screening the image sample according to the difference information.
In one possible implementation manner, the third obtaining module is configured to: processing the image sample through at least two neural networks with different network structures to obtain at least two prediction results in one-to-one correspondence with the at least two neural networks; determining difference information between the at least two prediction results.
As an example of this implementation, the third obtaining module is configured to: for any one of the at least two prediction results, determining the intersection ratio between a detection frame in the prediction result and an annotation frame corresponding to the image sample; determining the dispersion degree of the intersection ratios corresponding to the at least two prediction results; the screening module is used for: screening the image sample according to the dispersion degree.
In one possible implementation, the apparatus further includes: a fourth obtaining module, configured to obtain image quality information of the image sample; the screening module is used for: and screening the image sample according to the difference information and the image quality information.
The present disclosure also provides another image sample screening apparatus, including: a fifth obtaining module, configured to obtain the size information of an annotation frame in an image sample; and a screening module, configured to perform screening processing on the image sample according to the size information of the annotation frame.
In some embodiments, functions or modules included in the apparatus provided in the embodiments of the present disclosure may be used to execute the method described in the above method embodiments, and for concrete implementation and technical effects, reference may be made to the description of the above method embodiments, and for brevity, details are not described here again.
Embodiments of the present disclosure also provide a computer-readable storage medium having stored thereon computer program instructions, which when executed by a processor, implement the above-described method. The computer-readable storage medium may be a non-volatile computer-readable storage medium, or may be a volatile computer-readable storage medium.
Embodiments of the present disclosure also provide a computer program, which includes computer readable code, and when the computer readable code runs in an electronic device, a processor in the electronic device executes the above method.
The disclosed embodiments also provide a computer program product comprising computer readable code or a non-volatile computer readable storage medium carrying computer readable code, which when run in an electronic device, a processor in the electronic device performs the above method.
An embodiment of the present disclosure further provides an electronic device, including: one or more processors; a memory for storing executable instructions; wherein the one or more processors are configured to invoke the memory-stored executable instructions to perform the above-described method.
The electronic device may be provided as a terminal, server, or other form of device.
Fig. 3 shows a block diagram of an electronic device 1900 provided by an embodiment of the disclosure. For example, the electronic device 1900 may be provided as a server. Referring to fig. 3, electronic device 1900 includes a processing component 1922 further including one or more processors and memory resources, represented by memory 1932, for storing instructions, e.g., applications, executable by processing component 1922. The application programs stored in memory 1932 may include one or more modules that each correspond to a set of instructions. Further, the processing component 1922 is configured to execute instructions to perform the above-described method.
The electronic device 1900 may also include a power component 1926 configured to perform power management of the electronic device 1900, a wired or wireless network interface 1950 configured to connect the electronic device 1900 to a network, and an input/output (I/O) interface 1958. The electronic device 1900 may operate based on an operating system stored in the memory 1932, such as the Microsoft server operating system (Windows Server™), the graphical-user-interface-based operating system of Apple Inc. (Mac OS X™), the multi-user, multi-process computer operating system (Unix™), the free and open-source Unix-like operating system (Linux™), the open-source Unix-like operating system (FreeBSD™), or the like.
In an exemplary embodiment, a non-transitory computer readable storage medium, such as the memory 1932, is also provided that includes computer program instructions executable by the processing component 1922 of the electronic device 1900 to perform the above-described methods.
The present disclosure may be systems, methods, and/or computer program products. The computer program product may include a computer-readable storage medium having computer-readable program instructions embodied thereon for causing a processor to implement various aspects of the present disclosure.
The computer readable storage medium may be a tangible device that can hold and store the instructions for use by the instruction execution device. The computer readable storage medium may be, for example, but not limited to, an electronic memory device, a magnetic memory device, an optical memory device, an electromagnetic memory device, a semiconductor memory device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a Static Random Access Memory (SRAM), a portable compact disc read-only memory (CD-ROM), a Digital Versatile Disc (DVD), a memory stick, a floppy disk, a mechanical coding device, such as punch cards or in-groove projection structures having instructions stored thereon, and any suitable combination of the foregoing. Computer-readable storage media as used herein is not to be interpreted as a transitory signal per se, such as a radio wave or other freely propagating electromagnetic wave, an electromagnetic wave propagating through a waveguide or other transmission medium (e.g., optical pulses through a fiber optic cable), or an electrical signal transmitted through an electrical wire.
The computer-readable program instructions described herein may be downloaded from a computer-readable storage medium to a respective computing/processing device, or to an external computer or external storage device via a network, such as the internet, a local area network, a wide area network, and/or a wireless network. The network may include copper transmission cables, fiber optic transmission, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers. The network adapter card or network interface in each computing/processing device receives computer-readable program instructions from the network and forwards the computer-readable program instructions for storage in a computer-readable storage medium in the respective computing/processing device.
Computer program instructions for carrying out operations of the present disclosure may be assembler instructions, instruction set architecture (ISA) instructions, machine instructions, machine-dependent instructions, microcode, firmware instructions, state-setting data, or source or object code written in any combination of one or more programming languages, including an object-oriented programming language such as Smalltalk or C++, and conventional procedural programming languages such as the "C" programming language or similar programming languages. The computer-readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider). In some embodiments, electronic circuitry, such as a programmable logic circuit, a field-programmable gate array (FPGA), or a programmable logic array (PLA), may execute the computer-readable program instructions by utilizing state information of the computer-readable program instructions to personalize the electronic circuitry, thereby implementing aspects of the present disclosure.
Various aspects of the present disclosure are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the disclosure. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer-readable program instructions.
These computer-readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer-readable program instructions may also be stored in a computer-readable storage medium that can direct a computer, programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer-readable medium storing the instructions comprises an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer, other programmable apparatus or other devices implement the functions/acts specified in the flowchart and/or block diagram block or blocks.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The computer program product may be embodied in hardware, software, or a combination thereof. In an alternative embodiment, the computer program product is embodied in a computer storage medium; in another alternative embodiment, the computer program product is embodied in a software product, such as a software development kit (SDK).
The foregoing description of the various embodiments is intended to highlight various differences between the embodiments, and the same or similar parts may be referred to each other, and for brevity, will not be described again herein.
If the technical solutions of the embodiments of the present disclosure involve personal information, a product applying these technical solutions clearly informs users of the personal information processing rules and obtains their individual consent before processing the personal information. If the technical solutions involve sensitive personal information, a product applying them obtains individual consent before processing that information and additionally satisfies the requirement of "explicit consent". For example, at a personal information collection device such as a camera, a clear and prominent sign is set up to give notice that a personal information collection range is being entered and that personal information will be collected; if a person voluntarily enters the collection range, this is regarded as consent to the collection of his or her personal information. Alternatively, on a device that processes personal information, personal authorization is obtained, with the personal information processing rules communicated by conspicuous signs or notices, by means of a pop-up message or by asking the person to upload his or her personal information. The personal information processing rules may include information such as the personal information processor, the purpose of processing, the processing method, and the types of personal information processed.
Having described embodiments of the present disclosure, the foregoing description is intended to be exemplary, not exhaustive, and not limited to the disclosed embodiments. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terminology used herein is chosen in order to best explain the principles of the embodiments, the practical application, or improvements made to the technology in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.

Claims (13)

1. A method of screening an image sample, comprising:
obtaining a value of a loss function corresponding to the image sample;
obtaining difference information between at least two prediction results corresponding to the image sample, and/or obtaining image quality information of the image sample;
and performing screening processing on the image sample according to at least one of the difference information and the image quality information and the value of the loss function.
2. The method of claim 1,
the obtaining of the value of the loss function corresponding to the image sample includes: obtaining the average value of the loss function of the image sample in the training process of the neural network;
the screening the image sample according to at least one of the difference information and the image quality information and the value of the loss function includes: and performing screening processing on the image sample according to at least one of the difference information and the image quality information and the average value of the loss function.
3. The method of claim 2, wherein obtaining an average of a loss function of the image sample during training of the neural network comprises:
and obtaining the average value of the loss function of the image sample in the training process of at least two neural networks with different network structures.
4. The method according to any one of claims 1 to 3,
the obtaining of the value of the loss function corresponding to the image sample includes: obtaining an initial value of a loss function of an image sample at a training starting stage, and obtaining a final value of the loss function of the image sample at a training finishing stage;
the screening the image sample according to at least one of the difference information and the image quality information and the value of the loss function includes: determining a first difference value between an absolute value of a final value of the loss function and an absolute value of an initial value of the loss function; and screening the image sample according to at least one of the difference information and the image quality information and the first difference value.
5. The method of claim 4, wherein obtaining an initial value of a loss function of the image sample at a training start stage and obtaining a final value of the loss function of the image sample at a training end stage comprises:
determining an initial value of a loss function of an image sample at the initial stage of training according to the value of the loss function of the image sample in the first round of training of the neural network;
and determining the final value of the loss function of the image sample at the training end stage according to the value of the loss function of the image sample in the last round of training of the neural network.
6. The method of claim 5,
the determining an initial value of the loss function of the image sample at the training starting stage according to the value of the loss function of the image sample in the first round of training of the neural network comprises: determining the average value of the loss functions of the image samples in the first round of training of at least two neural networks with different network structures as the initial value of the loss function of the image samples in the training starting stage;
determining a final value of a loss function of the image sample at the training end stage according to a value of the loss function of the image sample in the last round of training of the neural network, including: and determining the average value of the loss functions of the image samples in the last round of training of the at least two neural networks as the final value of the loss functions of the image samples in the training end stage.
7. The method according to any one of claims 1 to 6, wherein the obtaining difference information between at least two predictions corresponding to the image sample comprises:
processing the image sample through at least two neural networks with different network structures to obtain at least two prediction results which are in one-to-one correspondence with the at least two neural networks;
determining difference information between the at least two prediction results.
8. The method of claim 7,
the determining difference information between the at least two prediction results comprises: for any one of the at least two prediction results, determining the intersection ratio between a detection frame in the prediction result and an annotation frame corresponding to the image sample; determining the dispersion degree of the intersection ratios corresponding to the at least two prediction results;
the screening the image sample according to at least one of the difference information and the image quality information and the value of the loss function includes: performing screening processing on the image sample according to at least one of the dispersion degree and the image quality information and the value of the loss function.
9. The method according to any one of claims 1 to 8, wherein the performing a screening process on the image sample according to at least one of the difference information and the image quality information and the value of the loss function comprises:
determining a prediction value of the image sample belonging to a noise sample according to at least one of the difference information and the image quality information and a value of the loss function;
and screening the image sample according to the predicted value.
10. The method according to claim 9, wherein the screening the image sample according to the predicted value comprises:
in response to the predicted value being greater than a first preset threshold, performing review processing on the image sample;
or,
in response to the predicted value being less than or equal to the first preset threshold, retaining the image sample.
11. An apparatus for screening an image sample, comprising:
the first obtaining module is used for obtaining the value of a loss function corresponding to the image sample;
a second obtaining module, configured to obtain difference information between at least two prediction results corresponding to the image sample, and/or obtain image quality information of the image sample;
and the screening module is used for screening the image sample according to at least one of the difference information and the image quality information and the value of the loss function.
12. An electronic device, comprising:
one or more processors;
a memory for storing executable instructions;
wherein the one or more processors are configured to invoke the memory-stored executable instructions to perform the method of any one of claims 1 to 10.
13. A computer readable storage medium having computer program instructions stored thereon, which when executed by a processor implement the method of any one of claims 1 to 10.

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210580125.0A CN115019126A (en) 2022-05-25 2022-05-25 Image sample screening method and device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN115019126A 2022-09-06


Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112132206A (en) * 2020-09-18 2020-12-25 青岛商汤科技有限公司 Image recognition method, training method of related model, related device and equipment



Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination
WW01: Invention patent application withdrawn after publication (application publication date: 20220906)