CN113408482A - Training sample generation method and device - Google Patents

Training sample generation method and device

Info

Publication number
CN113408482A
CN113408482A (application CN202110791909.3A; granted as CN113408482B)
Authority
CN
China
Prior art keywords
target
label
vehicle
classification model
vehicle information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110791909.3A
Other languages
Chinese (zh)
Other versions
CN113408482B
Inventor
陈晓
张伟
谢思敏
Current Assignee
Hangzhou Lianji Technology Co ltd
Original Assignee
Hangzhou Lianji Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Hangzhou Lianji Technology Co ltd filed Critical Hangzhou Lianji Technology Co ltd
Priority to CN202110791909.3A priority Critical patent/CN113408482B/en
Publication of CN113408482A publication Critical patent/CN113408482A/en
Application granted granted Critical
Publication of CN113408482B publication Critical patent/CN113408482B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00: Pattern recognition
    • G06F 18/20: Analysing
    • G06F 18/21: Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214: Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00: Pattern recognition
    • G06F 18/20: Analysing
    • G06F 18/24: Classification techniques
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/08: Learning methods
    • G06N 3/084: Backpropagation, e.g. using gradient descent
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T: CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T 10/00: Road transport of goods or passengers
    • Y02T 10/10: Internal combustion engine [ICE] based vehicles
    • Y02T 10/40: Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Molecular Biology (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Health & Medical Sciences (AREA)
  • Image Analysis (AREA)

Abstract

The application belongs to the technical field of image processing and provides a method and an apparatus for generating training samples. The method includes: obtaining an original label set corresponding to each of a plurality of vehicle sample images; retaining, in each original label set, the first label corresponding to a single piece of target vehicle information and replacing the second labels corresponding to the remaining vehicle information with a preset parameter, to obtain a first label set corresponding to the original label set; and cyclically executing the step of obtaining, from the plurality of first label sets, one target first label set for each position in a preset order of different target vehicle information to form first label set groups, thereby obtaining a target training sample set. After the cyclic arrangement, the first labels of the different pieces of target vehicle information are equal in number and cyclically distributed, which ensures that the first labels of the different pieces of target vehicle information are uniformly distributed.

Description

Training sample generation method and device
Technical Field
The present application belongs to the technical field of image processing, and in particular, to a method and an apparatus for generating a training sample.
Background
Multi-classification models are a commonly used recognition means in the field of image processing and can be applied in different scenarios, for example in vehicle recognition, where they identify the labels corresponding to different pieces of vehicle information such as vehicle type, vehicle color, vehicle brand, and orientation. In a vehicle recognition scenario in particular, it is often necessary to identify the labels corresponding to several pieces of vehicle information in a vehicle sample image at the same time. Therefore, in the training stage of a multi-classification model, training samples with multiple labels need to be obtained (each training sample comprises a vehicle sample image and the multiple labels corresponding to it), and the multi-classification model is trained on these samples.
However, in the process of acquiring training samples, not all labels of the vehicle information corresponding to a vehicle sample image can be obtained, because the image may be too blurry, the subject may be missing, or certain attributes may be rare. As a result, the labels of each kind of vehicle information in the training sample set are unevenly distributed, and a multi-classification model trained on such a set classifies poorly. How to ensure a uniform distribution of the various labels in the training sample set has therefore become a technical problem urgently needing a solution.
Disclosure of Invention
In view of this, embodiments of the present application provide a training sample generation method, an apparatus, a terminal device, and a computer-readable storage medium, which can solve the technical problem that the data distribution of each label in a training sample set is not uniform.
A first aspect of an embodiment of the present application provides a method for generating a training sample, where the method includes:
obtaining an original label set corresponding to each of a plurality of vehicle sample images; the original label set comprises a set formed by labels corresponding to different pieces of vehicle information;
reserving a first label corresponding to single target vehicle information in the original label set, and replacing second labels corresponding to other vehicle information with preset parameters to obtain a first label set corresponding to the original label set; the preset parameters are used for suspending the training operation or the testing operation of the rest vehicle information on the multi-classification model; the first label is used for training or testing the multi-classification model;
according to the preset sequence of different target vehicle information, acquiring a target first label set corresponding to each preset sequence from a plurality of first label sets to form a first label set group; the target first label set is a first label set comprising the target vehicle information corresponding to the preset sequence;
circularly executing the step of obtaining a target first label set corresponding to each preset sequence from a plurality of first label sets according to the preset sequence of different target vehicle information to form a first label set group, and taking each first label set group and a vehicle sample image corresponding to each first label set group as a target training sample set; the target training sample set is used to train the multi-classification model.
A second aspect of an embodiment of the present application provides an apparatus for generating a training sample, where the apparatus includes:
an acquisition unit, configured to obtain an original label set corresponding to each of a plurality of vehicle sample images; the original label set comprises a set formed by labels corresponding to different pieces of vehicle information;
the processing unit is used for reserving a first label corresponding to single target vehicle information in the original label set and replacing second labels corresponding to other vehicle information with preset parameters to obtain a first label set corresponding to the original label set; the preset parameters are used for suspending the training operation or the testing operation of the rest vehicle information on the multi-classification model; the first label is used for training or testing the multi-classification model;
the arrangement unit is used for acquiring a target first label set corresponding to each preset sequence from a plurality of first label sets according to the preset sequence of different target vehicle information to form a first label set group; the target first label set is a first label set comprising the target vehicle information corresponding to the preset sequence;
a circulation unit, configured to execute the steps of obtaining, in a plurality of first label sets, one target first label set corresponding to each preset order according to the preset order of different pieces of target vehicle information to form a first label set group, and taking each first label set group and a vehicle sample image corresponding to each first label set group as a target training sample set; the target training sample set is used to train the multi-classification model.
A third aspect of embodiments of the present application provides a terminal device, including a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor implements the steps of the method according to the first aspect when executing the computer program.
A fourth aspect of embodiments of the present application provides a computer-readable storage medium, which stores a computer program, and the computer program, when executed by a processor, implements the steps of the method according to the first aspect.
Compared with the prior art, the embodiments of the present application have the following advantages. The first label corresponding to a single piece of target vehicle information in each original label set is retained, and the second labels corresponding to the remaining vehicle information are replaced with a preset parameter, yielding the first label set. The preset parameter suspends the training or testing operation of the remaining vehicle information on the multi-classification model; the first label is used to train or test the multi-classification model. That is, there is one and only one label in each first label set. Then, according to the preset order of different target vehicle information, one target first label set corresponding to each position in the preset order is obtained from the plurality of first label sets to form a first label set group, and this step is executed cyclically to obtain the target training sample set. After the cyclic arrangement, the first labels of the different pieces of target vehicle information are equal in number and cyclically distributed, which ensures that the first labels of the different pieces of target vehicle information are uniformly distributed. The technical problem of uneven data distribution of the labels in the training sample set is thus solved.
Drawings
In order to illustrate the technical solutions in the embodiments of the present application more clearly, the drawings needed in the embodiments or in the description of the related art are briefly introduced below. The drawings described below show only some embodiments of the present application; those skilled in the art can obtain other drawings from them without creative effort.
FIG. 1 shows a schematic flow chart of a method of generating training samples provided herein;
FIG. 2 shows a schematic flow chart of step 101 in a training sample generation method provided by the present application;
FIG. 3 shows a schematic flow chart of step 1012 of a training sample generation method provided by the present application;
FIG. 4 shows a schematic flow chart of step 102 of a training sample generation method provided by the present application;
FIG. 5 shows a schematic flow diagram of another method of generating training samples provided herein;
FIG. 6 shows a schematic flow chart of another training sample generation method provided herein;
FIG. 7 shows a schematic flow diagram of another method of generating training samples provided herein;
FIG. 8 is a schematic diagram illustrating a network structure of a target multi-classification model provided herein;
FIG. 9 is a schematic diagram of an apparatus for generating training samples provided herein;
fig. 10 is a schematic diagram of a terminal device according to an embodiment of the present invention.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system structures, techniques, etc. in order to provide a thorough understanding of the embodiments of the present application. It will be apparent, however, to one skilled in the art that the present application may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present application with unnecessary detail.
In order to better understand the technical problem solved by the present application, the above background art is further explained herein with reference to examples:
the vehicle recognition technology is a technology for recognizing vehicle information using a deep learning model. The deep learning network adopted by the vehicle identification technology comprises a single classification model and a multi-classification model. The processes of identifying the vehicle information by the single classification model and the multi-classification model are respectively as follows:
First, single classification models: the processor calls several single classification models, each corresponding to a different piece of vehicle information (such as vehicle type, vehicle color, vehicle brand, or orientation), and each identifying that piece of information in the vehicle sample image. Every single classification model must be trained and deployed independently. Most devices have limited memory and computing power and cannot bear the storage and operation of many single classification models; deploying and designing several single classification models also increases development difficulty and development time. The multi-classification model emerged to overcome these limitations of the single classification model.
Second, the multi-classification model: the processor invokes a single multi-classification model, which identifies the different pieces of vehicle information in the vehicle sample images. Multi-attribute vehicle identification is thus achieved without a plurality of single classification models.
When the multi-classification model is trained, a training sample (the training sample includes a vehicle sample image and a plurality of labels corresponding to the vehicle sample image) having a plurality of labels (each label corresponds to one piece of vehicle information) is acquired, and the multi-classification model is trained through the training sample.
However, in the process of acquiring training samples, not all labels of the vehicle information corresponding to a vehicle sample image can be obtained, because the image may be too blurry, the subject may be missing, or the vehicle information may be rare. The labels corresponding to each kind of vehicle information in the training sample set are then unevenly distributed, and the classification effect of a multi-classification model trained on such a set is poor. How to ensure a uniform distribution of the various labels in the training sample set has therefore become a technical problem urgently needing a solution.
In view of this, embodiments of the present application provide a training sample generation method, a training sample generation apparatus, a terminal device, and a computer-readable storage medium, which may solve the above technical problems.
First, the present application provides a method for generating training samples. Referring to fig. 1, fig. 1 shows a schematic flow chart of a training sample generation method provided in the present application. As shown in fig. 1, the generation method may include the steps of:
step 101, obtaining original label sets corresponding to a plurality of vehicle sample images; the original tag set comprises a set formed by tags corresponding to different pieces of vehicle information.
Different tags are used to characterize different vehicle information. Vehicle information includes, but is not limited to, vehicle information such as vehicle type, vehicle color, license plate type, license plate color, vehicle brand, and orientation. The original tag set includes tags corresponding to different pieces of vehicle information, as shown in table 1:
table 1:
[Table 1: original label sets of the vehicle sample images across the vehicle information types; rendered as an image in the source and not reproduced here.]
table 1 is only an example, and the vehicle information type, the number of the vehicle information types, the tag number, and the number of images in table 1 are not limited at all.
As shown in table 1, the label set of vehicle sample image A includes the labels {2,1,0,1}, and the label set of vehicle sample image B includes the labels {1,2,3,1}. Each kind of vehicle information corresponds to several possible labels, each characterizing a different value of that information. For example, for vehicle type, label "1" characterizes a minibus and label "2" characterizes a truck. As another example, for vehicle color, label "1" characterizes black and label "2" characterizes white.
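As a minimal sketch of how the example label sets above could be held in memory, the snippet below maps each vehicle sample image to its numeric label list and decodes the labels whose meanings the text gives (vehicle type and vehicle color). The choice and order of the four vehicle-info fields, and the remaining label meanings, are illustrative assumptions, not part of the patent.

```python
# Hedged sketch: a possible representation of the Table 1 label sets.
# The four fields chosen here are an assumption; the patent also lists
# license plate type, license plate color, etc. as candidates.
VEHICLE_INFO_TYPES = ["vehicle_type", "vehicle_color", "vehicle_brand", "orientation"]

original_label_sets = {
    "image_A": [2, 1, 0, 1],
    "image_B": [1, 2, 3, 1],
}

# Label meanings for type/color follow the examples in the text;
# unlisted fields are left as raw numbers.
LABEL_MEANINGS = {
    "vehicle_type": {1: "minibus", 2: "truck"},
    "vehicle_color": {1: "black", 2: "white"},
}

def decode(labels):
    """Turn a numeric label list into readable vehicle information."""
    out = {}
    for info, value in zip(VEHICLE_INFO_TYPES, labels):
        out[info] = LABEL_MEANINGS.get(info, {}).get(value, value)
    return out

print(decode(original_label_sets["image_A"]))  # image A is a black truck
```
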
The original label set may be existing data in a database, or a label set obtained by manually pre-labeling the vehicle sample images. The stock of existing data in a database is often too small to meet training needs, and manual labeling is labor-intensive and time-consuming. This embodiment therefore provides a way to generate the original label sets that improves the efficiency and quality of labeling; the specific process is described in the alternative embodiment shown in fig. 2 below:
as an alternative embodiment of the present application, step 101 includes steps 1011 to 1015. Referring to fig. 2, fig. 2 shows a schematic flow chart of step 101 in a training sample generation method provided in the present application.
Step 1011, a plurality of first vehicle sample images and a pre-trained first multi-classification model are obtained.
A plurality of training images and their corresponding training labels are obtained from an existing database. Each training image is input to an initial multi-classification model for processing to obtain the initial classification result output by the model. The parameters of the initial multi-classification model are updated by computing a loss value between the initial classification result and the training label. This training process is executed cyclically over each training image and its training label to obtain the pre-trained first multi-classification model. The training process may be performed before step 1011 or as part of step 1011.
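The pre-training loop just described (forward pass, loss against the training label, parameter update, cycling over the samples) can be sketched in miniature. A single scalar weight with a squared-error loss stands in for the real multi-classification network; this is an illustrative assumption, not the patent's model.

```python
# Minimal, framework-free sketch of the cyclic training process:
# forward pass, loss gradient, parameter update, repeated over samples.
def train(samples, lr=0.1, epochs=50):
    w = 0.0  # stand-in for the model parameters
    for _ in range(epochs):
        for x, y in samples:            # each training image and its label
            pred = w * x                # forward pass (classification result)
            grad = 2 * (pred - y) * x   # gradient of squared loss w.r.t. w
            w -= lr * grad              # parameter update (backpropagation step)
    return w

# Toy data whose true relation is y = 2x; training recovers w near 2.0.
model_w = train([(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)])
print(round(model_w, 4))
```
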
A plurality of first vehicle sample images are acquired in an existing database.
Step 1012, inputting the first vehicle sample image into the first multi-classification model for processing, so as to obtain a first original label set corresponding to the first vehicle sample image output by the first multi-classification model.
As an alternative embodiment of the present application, step 1012 includes steps a1 through A3. Referring to fig. 3, fig. 3 is a schematic flow chart illustrating step 1012 of a training sample generation method provided in the present application.
Step A1, intercepting a target image corresponding to an image area where a vehicle is located in each filtered vehicle sample image through a vehicle detection model.
Each filtered vehicle sample image generally contains some redundant area, so the target image corresponding to the image region where the vehicle sits is cropped from each filtered vehicle sample image by the vehicle detection model, reducing unnecessary computation.
And step A2, filtering a plurality of first vehicle sample images through a vehicle filtering model to obtain filtered vehicle sample images.
Since non-vehicle images may exist among the plurality of first vehicle sample images, they are filtered out by the vehicle filtering model to obtain the filtered vehicle sample images.
Step A3, inputting the target image into the first multi-classification model for processing, and obtaining a first original label set corresponding to the first vehicle sample image output by the first multi-classification model.
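The three-stage pipeline of step 1012 can be sketched as below. Note that although the text lists cropping (step A1) before filtering (step A2), step A1 operates on the already-filtered images, so filtering logically runs first; the sketch follows that execution order. The three model functions are hypothetical stand-ins, not a real API.

```python
# Hedged sketch of the step 1012 pipeline: filter (A2), crop (A1),
# classify (A3). All model functions below are illustrative stubs.
def vehicle_filter_model(image):
    """A2: return True only if the image actually contains a vehicle."""
    return image.get("has_vehicle", False)

def vehicle_detection_model(image):
    """A1: crop to the region where the vehicle sits, dropping the
    redundant area to cut unnecessary computation."""
    return {"region": image["vehicle_box"]}

def first_multi_classification_model(target_image):
    """A3: emit a first original label set for the cropped target image."""
    return [1, 2, 3, 1]  # placeholder labels

def build_original_label_sets(first_vehicle_sample_images):
    label_sets = []
    for img in first_vehicle_sample_images:
        if not vehicle_filter_model(img):      # A2: drop non-vehicle images
            continue
        target = vehicle_detection_model(img)  # A1: crop the vehicle region
        label_sets.append(first_multi_classification_model(target))  # A3
    return label_sets

images = [{"has_vehicle": True, "vehicle_box": (0, 0, 64, 64)},
          {"has_vehicle": False}]
print(build_original_label_sets(images))  # the non-vehicle image is dropped
```
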
Step 1013, obtaining a plurality of second vehicle sample images augmented based on the first original set of tags.
The plurality of first vehicle sample images acquired in step 1011 may be missing some data, for example, a certain vehicle type or a certain vehicle color may be absent. To supplement the missing data, step 1013 supplements vehicle sample images corresponding to the missing data based on the vehicle information obtained from the first original label set. For example, if the pickup truck type is absent among the vehicle types, pickup truck images corresponding to that type are supplemented.
The supplemental vehicle sample images and the first vehicle sample images together constitute the second vehicle sample images. Missing data may be determined manually, or by comparing the data one by one against a preset vehicle information list to obtain a missing-data list and then supplementing the supplemental vehicle sample images corresponding to the missing data according to that list.
As an alternative embodiment of the present application, the determination of the missing data may be made for only one representative piece of vehicle information to reduce the determination calculation amount. For example: because the sample image corresponding to the vehicle brand often includes relatively complete vehicle information (i.e., the sample image corresponding to the vehicle brand often includes vehicle information such as vehicle type, vehicle color, license plate type, license plate color, orientation, and the like), missing data can be determined only according to the vehicle brand. Conversely, if a corresponding color sample image is acquired based on the license plate color, the color sample image may include an image including only the license plate region (i.e., vehicle information such as the brand, type, and orientation of the vehicle is missing), and thus the license plate color is not representative of many pieces of vehicle information. Therefore, it is possible to preferentially select the brand of the vehicle as a representative and determine the missing data.
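The missing-data check against a preset list, using vehicle brand as the single representative attribute as the text recommends, can be sketched as a set difference. The preset brand list and the observed label values are illustrative assumptions.

```python
# Hedged sketch of the representative missing-data check.
# Brand names here are placeholders, not data from the patent.
PRESET_BRANDS = {"brand_a", "brand_b", "brand_c", "pickup_brand"}

def missing_brands(first_original_label_sets):
    """Compare observed vehicle brands against the preset list, one by
    one, producing the missing-data list used to collect supplemental
    vehicle sample images."""
    observed = {labels["vehicle_brand"] for labels in first_original_label_sets}
    return sorted(PRESET_BRANDS - observed)

observed_sets = [{"vehicle_brand": "brand_a"}, {"vehicle_brand": "brand_c"}]
print(missing_brands(observed_sets))  # brands still needing sample images
```
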
Step 1014, inputting the second vehicle sample image set into the first multi-classification model for processing, and obtaining a second original label set corresponding to the second vehicle sample image output by the first multi-classification model.
The execution process of step 1014 is similar to that of the alternative embodiment shown in fig. 3, please refer to the alternative embodiment shown in fig. 3, which is not described herein again.
Step 1015, regarding the second original label set as the original label sets corresponding to the plurality of vehicle sample images.
Step 102, reserving a first label corresponding to single target vehicle information in the original label set, and replacing second labels corresponding to the rest vehicle information with preset parameters to obtain a first label set corresponding to the original label set; the preset parameters are used for suspending the training operation or the testing operation of the rest vehicle information on the multi-classification model; the first label is used for training or testing the multi-classification model.
The vehicle information in each vehicle sample image may be partially missing. For example, if the license plate is not captured when the image is taken, information such as the license plate number, license plate type, and license plate color is lost. As another example, a vehicle sample image may be too blurry to distinguish information such as the vehicle brand and license plate number.
Because the amount of missing vehicle information differs from image to image, the label sets cannot be processed uniformly as they stand. Each original label set is therefore "unified" in advance for subsequent processing. Unification means that the first label corresponding to a single piece of target vehicle information is retained in the original label set, and the second labels corresponding to the remaining vehicle information are replaced with a preset parameter, yielding the first label set corresponding to that original label set. That is, only one label is kept in each first label set.
Illustratively, the "normalization" process is shown in tables 2 and 3:
table 2:
[Table 2: original label sets of vehicle sample images A to D before unification; rendered as an image in the source and not reproduced here.]
table 3:
[Table 3: first label sets after replacing non-target labels with the preset parameter "-1000"; rendered as an image in the source and not reproduced here.]
here, tables 2 and 3 are only examples, and the vehicle information type, the number of the vehicle information types, the tag value, the number of the tags, the preset parameter, and the number of images in tables 2 and 3 are not limited at all.
As shown in table 2, the original label set of the vehicle sample image a only includes 1 type of label, the original label set of the vehicle sample image B only includes 3 types of labels, the original label set of the vehicle sample image C only includes 3 types of labels, and the original label set of the vehicle sample image D only includes 2 types of labels.
Only a single piece of target vehicle information is reserved in each original label set in table 2, and the second labels corresponding to the rest pieces of vehicle information are replaced by a preset parameter "-1000" (the preset parameter may also be other numerical values), so as to obtain the first label set shown in table 3.
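The unification step just illustrated can be sketched directly: keep the label of the single target vehicle-info field and overwrite every other field with the preset parameter "-1000" from table 3. Field names are illustrative assumptions; downstream, the preset parameter is what tells training to suspend the loss for the masked fields.

```python
# Hedged sketch of the "unification" of an original label set.
PRESET_PARAM = -1000  # preset parameter from Table 3; other values work too

def unify(original_label_set, target_info):
    """Return a first label set keeping only the target field's label;
    all other fields are replaced with the preset parameter."""
    return {info: (label if info == target_info else PRESET_PARAM)
            for info, label in original_label_set.items()}

original = {"vehicle_type": 2, "vehicle_color": 1, "vehicle_brand": 0}
print(unify(original, "vehicle_color"))  # only vehicle_color survives
```
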
It should be emphasized that the target vehicle information corresponding to each vehicle sample image needs to be preset. The preset scheme is as follows: all vehicle sample images are divided equally into several parts according to the number of types of vehicle information (the number of parts equals the number of types), and the parts correspond in turn to the different types of vehicle information. For example, with 1000 vehicle sample images and 5 types of vehicle information, the images are divided equally into 5 parts (250 images each), and the 5 parts correspond in turn to the 5 different types of vehicle information.
If the vehicle sample images cannot be divided equally, that is, the image count is not divisible by the number of types, a small number of vehicle sample images can be removed or duplicated until the total image count is divisible by the number of types.
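The pre-assignment described above can be sketched as an even split of the image list across the vehicle-info types, discarding the small remainder when the count is not divisible (the text also allows duplicating images instead; only the removal variant is shown). Names and data shapes are illustrative assumptions.

```python
# Hedged sketch: assign each vehicle sample image a target vehicle-info
# type by splitting the images evenly across the types.
def assign_target_info(image_ids, info_types):
    per_part = len(image_ids) // len(info_types)
    usable = image_ids[:per_part * len(info_types)]  # drop the remainder
    return {img: info_types[i // per_part] for i, img in enumerate(usable)}

assignment = assign_target_info(list(range(10)), ["vehicle_type", "vehicle_color"])
print(assignment)  # images 0-4 -> vehicle_type, images 5-9 -> vehicle_color
```
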
As an alternative embodiment of the present application, step 102 includes steps 1021 through 1022. Referring to fig. 4, fig. 4 is a schematic flow chart illustrating step 102 in a training sample generation method provided in the present application.
Step 1021, correcting the error label set in the plurality of original label sets to obtain a second label set; the error label set refers to a label set obtained by the multi-classification model through error classification.
Because the original label sets are produced by the first multi-classification model of the alternative embodiment shown in fig. 2, some of them may be misclassified error label sets. Therefore, after the plurality of original label sets are obtained, the error label sets among them are corrected to obtain the second label sets.
Step 1022, reserving a first tag corresponding to a single piece of target vehicle information in the second tag set, and replacing second tags corresponding to the rest of vehicle information with preset parameters to obtain the first tag set corresponding to each second tag set.
As an alternative embodiment of the present application, step 102 may also process all original label sets into first label sets directly, without correcting the error label sets.
103, acquiring a target first label set corresponding to each preset sequence from a plurality of first label sets according to the preset sequence of different target vehicle information to form a first label set group; the target first tag set is a first tag set including the target vehicle information corresponding to the preset sequence.
The preset sequence is a preset sequence so as to orderly arrange the first label sets with different target vehicle information. The preset sequence can be arranged from difficult to easy or from easy to difficult according to the classification difficulty of different target vehicle information.
The target first tag set is a first tag set including target vehicle information corresponding to a preset order.
Illustratively, assume the preset order of target vehicle information is: vehicle type label, license plate color label, vehicle brand label, license plate type label. The complete execution of step 103 is then: obtain, from the plurality of first label sets, a first target first label set whose target vehicle information is the vehicle type label; then a second target first label set whose target vehicle information is the license plate color label; then a third whose target vehicle information is the vehicle brand label; and finally a fourth whose target vehicle information is the license plate type label. The first, second, third, and fourth target first label sets constitute a first label set group, as shown in table 4:
table 4:
[Table 4 is presented as an image in the original document.]
Table 4 is only an example; the vehicle information types, their number, the tag values, the number of tags, the preset parameter, and the number of images shown in table 4 are not limiting.
As shown in table 4, the first label sets corresponding to the vehicle sample image a, the vehicle sample image C, the vehicle sample image B, and the vehicle sample image D constitute a first label set group. The classification difficulty of the vehicle type label, the license plate color label, the vehicle brand label and the license plate type label is from easy to difficult.
Step 104, circularly executing the step of acquiring a target first label set corresponding to each preset sequence from a plurality of first label sets according to the preset sequence of different target vehicle information to form a first label set group, and taking each first label set group and a vehicle sample image corresponding to each first label set group as a target training sample set; the target training sample set is used to train the multi-classification model.
Step 103 is repeated to obtain different first tag set groups, where the first tag sets in each first tag set group are all different (i.e., each first tag set is extracted only once). And taking each first label set group and the vehicle sample image corresponding to each first label set group as a target training sample set.
Illustratively, taking eight vehicle sample images and four types of vehicle information as an example, two first tag set groups as shown in table 5 can be obtained:
table 5:
[Table 5 is presented as images in the original document.]
Table 5 is only an example; the vehicle information types, their number, the tag values, the number of tags, the preset parameter, and the number of images shown in table 5 are not limiting.
As shown in table 5, because the first label set groups are arranged cyclically, the label data of each type of vehicle information is uniformly distributed. The classification performance of a multi-classification model trained on such a uniformly distributed target training sample set is therefore relatively balanced.
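The full loop of step 104 repeats the group formation of step 103 until every first label set has been used once. Below is a minimal sketch under the same assumptions as before (invented key names, image ids); it also shows why the per-type label counts come out uniform.

```python
# Sketch of step 104: repeatedly form first label set groups until the pools
# are exhausted. With equal-sized pools, each type of vehicle information
# contributes the same number of labels, so the distribution is uniform.
from collections import deque

PRESET_ORDER = ["vehicle_type", "plate_color", "brand", "plate_type"]


def form_groups(pools):
    """Form groups until some pool runs out; each set is used exactly once."""
    groups = []
    while all(pools[k] for k in PRESET_ORDER):
        groups.append([pools[k].popleft() for k in PRESET_ORDER])
    return groups


# eight vehicle sample images, four types of vehicle information, two per type
pools = {k: deque([f"{k}_{i}" for i in range(2)]) for k in PRESET_ORDER}
groups = form_groups(pools)
```

With eight images and four information types this yields two groups of four, just as in the table 5 example.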
As an optional embodiment of the present application, if the number of first tag sets is not evenly divisible across the types of vehicle information (that is, the remaining first tag sets cannot all be combined into complete first tag set groups), the first tag sets that have already been taken out are reused in order from the beginning, combined with the remaining first tag sets, until the largest group of first tag sets is exhausted. For example, suppose five groups of first tag sets, each containing a different type of target vehicle information, have 5, 5, 4, 3 and 5 members respectively: the first group is A1, A2, A3, A4 and A5; the second group is B1, B2, B3, B4 and B5; the third group is C1, C2, C3 and C4; the fourth group is D1, D2 and D3; and the fifth group is E1, E2, E3, E4 and E5. The fourth group is exhausted first, so its members are reused cyclically in the order D1, D2, D3 when combined with the remaining first tag sets, until the largest group has been fully combined. The third group is handled in the same way, and the details are not repeated here.
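The cyclic-reuse rule for unevenly sized pools can be sketched with `itertools.cycle`. This is only an illustration of the remainder handling; the pool contents follow the A1–E5 example, and the function name is invented.

```python
# Sketch of the remainder handling: when the per-type pools differ in size,
# exhausted pools are reused cyclically from their start until the largest
# pool has been fully combined into groups.
from itertools import cycle


def form_groups_cyclic(pools_in_order):
    """pools_in_order: one list of first tag sets per info type, preset order."""
    longest = max(len(p) for p in pools_in_order)
    iters = [cycle(p) for p in pools_in_order]  # wrap around when exhausted
    return [[next(it) for it in iters] for _ in range(longest)]


pools = [
    ["A1", "A2", "A3", "A4", "A5"],
    ["B1", "B2", "B3", "B4", "B5"],
    ["C1", "C2", "C3", "C4"],
    ["D1", "D2", "D3"],
    ["E1", "E2", "E3", "E4", "E5"],
]
groups = form_groups_cyclic(pools)
# the fourth pool repeats as D1, D2, D3, D1, D2 across the five groups
```

Five groups are produced, and the smaller pools (C and D) wrap around exactly as described above.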
As an alternative embodiment of the present application, after step 104, step 105 to step 106 are further included. Referring to fig. 5, fig. 5 shows a schematic flow chart of another training sample generation method provided in the present application.
And 105, training the first multi-classification model through the target training sample set to obtain a second multi-classification model.
Because the data coverage of the target training sample set obtained after a single pass of the above optional embodiments is often insufficient, a more comprehensive target training sample set can be obtained by performing multiple rounds of the loop.
Therefore, the first multi-classification model is optimized according to the target training sample set to obtain the second multi-classification model, so that more accurate training samples can be obtained in the circulation process.
And 106, taking the second multi-classification model as a pre-trained first multi-classification model, taking an image in a target training sample set as the first vehicle sample image, and circularly executing the steps of obtaining the plurality of first vehicle sample images and the pre-trained first multi-classification model and the subsequent steps until the target training sample set meets a preset condition.
And taking the second multi-classification model as a pre-trained first multi-classification model, taking the image in the target training sample set as the first vehicle sample image, and circularly executing the step 1011 to continuously expand the target training sample until the target training sample set meets the preset condition.
The preset condition may be that the number of samples in the target training sample set reaches a threshold, or that the number of sample types reaches a threshold.
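The expansion loop of step 106 with a sample-count preset condition can be sketched as follows. Everything here is an assumption for illustration: the threshold value, the function names, and the toy generator that stands in for one full pass of steps 1011 onward.

```python
# Sketch of the step 106 loop: keep expanding the target training sample set
# until the preset condition (here, a sample-count threshold) is satisfied.
SAMPLE_THRESHOLD = 8  # assumed preset condition


def expand(training_set, generate_round):
    """generate_round(set) returns the new samples produced by one loop pass."""
    while len(training_set) < SAMPLE_THRESHOLD:
        training_set = training_set + generate_round(training_set)
    return training_set


# toy stand-in for one round of steps 1011 onward: each round doubles the set
result = expand(["s1", "s2"], lambda s: list(s))
```

Starting from two samples, the set grows 2 → 4 → 8 and the loop stops once the threshold is reached.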
As an optional embodiment of the present application, the first multi-classification model often misclassifies because it has not been sufficiently trained on certain types of vehicle information (that is, there is less training data corresponding to that vehicle information), which raises its classification error rate for that information. Based on this observation, the classification performance of the first multi-classification model can also be used as the preset condition.
In this embodiment, a first tag corresponding to a single piece of target vehicle information in an original tag set is retained, and second tags corresponding to the remaining pieces of vehicle information are replaced with preset parameters, so as to obtain a first tag set. The preset parameters are used for suspending the training operation or the testing operation of the rest vehicle information on the multi-classification model; the first label is used for training or testing the multi-classification model. That is, there is one and only one tag in each first set of tags. And then according to the preset sequence of different target vehicle information, obtaining a target first label set corresponding to each preset sequence from the plurality of first label sets to form a first label set group, and executing the steps in a circulating manner to obtain a target training sample set. After the cyclic arrangement, the number of the first tags of different target vehicle information is uniform and the distribution has cyclicity, so that the uniform distribution of the first tags of different target vehicle information can be ensured. The technical problem of uneven data distribution of each label in the training sample set is solved.
Optionally, on the basis of all the above embodiments, the generating method further includes the following steps, please refer to fig. 6, and fig. 6 shows a schematic flowchart of another training sample generating method provided in the present application.
Step 601, training a target multi-classification model through the target training sample set to obtain a trained target multi-classification model.
It is to be understood that the embodiments shown in fig. 1 to 5 describe the process of acquiring a target training sample set, that is, the process of acquiring training data, whereas step 601 is the process of training a target multi-classification model using the target training sample set. The target multi-classification model may be an initialized multi-classification model or the first multi-classification model. Because the first multi-classification model has been trained multiple times and its parameters are therefore better tuned, it is preferable to select the first multi-classification model as the target multi-classification model.
As an alternative embodiment of the present application, the step 601 includes steps 6011 to 6018. Referring to fig. 7, fig. 7 shows a schematic flow chart of another training sample generation method provided in the present application.
Step 6011, inputting the target first label set and the vehicle sample image corresponding to the target first label set into the target multi-classification model.
The target multi-classification model may employ the Caffe deep learning framework. For the network structure of the target multi-classification model, refer to fig. 8, which shows a schematic diagram of a network structure of a target multi-classification model provided in the present application. As shown in fig. 8, the target multi-classification model includes an input layer (comprising a Data layer and a Slice layer), a feature extraction layer (a backbone network), a plurality of first branch networks, and a second branch network. Each branch network consists of a corresponding fully connected layer and a Softmax layer. The number of branch networks in the target multi-classification model can be increased or decreased according to the actual classification requirements.
Step 6012, the input layer inputs the vehicle sample image into a feature extraction layer.
Step 6013, the feature extraction layer performs feature extraction on the vehicle sample image to obtain feature data of the vehicle sample image.
Step 6014, the input layer segments the target first label set to obtain a plurality of preset parameters and first labels corresponding to the plurality of pieces of vehicle information.
Because the Caffe deep learning framework natively supports only one-dimensional labels, it must be extended to support multi-dimensional labels (the first label set). Data labels commonly use the lmdb format; multi-dimensional labels can generally be stored directly in hdf5, but lmdb is usually preferred for its data reading speed and reliability on large data sets. The convert_imageset.cpp tool in Caffe can be modified to generate the target first label set as multiple labels in lmdb format. Because a single variable cannot hold a first label set in lmdb format, the first label set is stored in a vector when it is written in that format.
When the first label set is input into the target multi-classification model, the first label set needs to be segmented to obtain a plurality of preset parameters and first labels corresponding to the multi-class vehicle information. In the Caffe deep learning framework, a Slice layer can be adopted to segment the target first label set.
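The behaviour of the Slice layer here amounts to a column-wise split of the flattened label batch: one scalar label per branch for each sample. The sketch below is a framework-free illustration of that split, not Caffe code; the `-1` preset value and the batch layout are assumptions.

```python
# Sketch of the Slice-layer behaviour in step 6014: a batch of flattened
# multi-dimensional label vectors is cut column-wise into one label list per
# branch network. The preset parameter value -1 is assumed.
IGNORE = -1


def slice_labels(batch_labels):
    """Column-wise split, one per-branch label list per vehicle-info type."""
    num_tasks = len(batch_labels[0])
    return [[row[t] for row in batch_labels] for t in range(num_tasks)]


batch = [
    [2, IGNORE, IGNORE, IGNORE],  # sample whose real label is vehicle type
    [IGNORE, 1, IGNORE, IGNORE],  # sample whose real label is plate color
]
per_branch = slice_labels(batch)  # per_branch[0] feeds the first branch, etc.
```

Each branch thus receives either a real first label or the preset parameter for every sample in the batch.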
As an embodiment of the present application, the input layer may also obtain the labels corresponding to each type of vehicle information as follows: before input into the input layer, different flag bits are set for the labels of each type of vehicle information. When the input layer pulls the labels corresponding to each type of vehicle information, it pulls the corresponding training samples from the target training samples according to the flag bits.
Step 6015, the input layer inputs the plurality of preset parameters into the first branch networks corresponding to the plurality of preset parameters, respectively; the input layer inputs a first tag into the second branch network.
Step 6016, the first branch network suspends the training operation of the target multi-classification model by the vehicle information corresponding to the preset parameter according to the preset parameter.
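One common way to realize this suspension is to make a branch contribute zero loss (and hence zero gradient) whenever its label equals the preset parameter. The sketch below shows that idea in plain Python; the value `-1` and the function name are assumptions, not the patent's implementation.

```python
# Sketch of step 6016: a branch whose label equals the preset parameter is
# suspended by returning zero loss, so no gradient flows through it for that
# sample. The value -1 is an assumed preset parameter.
import math

IGNORE = -1


def branch_loss(probs, label):
    """Cross-entropy for one branch; skipped when label is the preset value."""
    if label == IGNORE:
        return 0.0  # branch suspended for this sample
    return -math.log(probs[label])


loss_active = branch_loss([0.1, 0.7, 0.2], 1)       # branch trains normally
loss_masked = branch_loss([0.1, 0.7, 0.2], IGNORE)  # branch suspended
```

Only the branch that received a real first label updates the shared backbone for a given sample; all other branches are effectively paused.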
Step 6017, the second branch network obtains a target classification prediction result according to the feature data; calculating a loss between the target classification predictor and the first label; and updating the network weight parameters in the target multi-classification model according to the loss.
The fully connected layer in the second branch network obtains the target classification result (that is, the labels corresponding to the various types of vehicle information) from the feature data. The Softmax layer then calculates the probability of each label as follows:
a_j = e^{z_j} / Σ_{k=1}^{n} e^{z_k}

where a_j denotes the probability output by the Softmax layer for the j-th class, z_j denotes the j-th output of the fully connected layer, and Σ_{k=1}^{n} e^{z_k} denotes the sum of e^{z_k} over all n outputs of the fully connected layer.
A loss value is then calculated from the probability output by the Softmax layer and the first label, as follows:
Loss = -Σ_j y_j · log(a_j)

where Loss denotes the loss value, y_j denotes the ground-truth probability (1 or 0) given by the first label, and a_j denotes the probability output by the Softmax layer.
And performing back propagation according to the loss value, and updating the network parameters in the target multi-classification model.
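The two formulas above can be checked numerically with a short sketch: softmax over the fully connected outputs, then cross-entropy against a one-hot first label. The input values are toy numbers chosen for illustration.

```python
# Numeric check of the Softmax and cross-entropy formulas above.
import math


def softmax(z):
    """a_j = e^{z_j} / sum_k e^{z_k} over the fully connected outputs z."""
    exp = [math.exp(v) for v in z]
    total = sum(exp)
    return [v / total for v in exp]


def cross_entropy(a, y):
    """Loss = -sum_j y_j * log(a_j); only the true class contributes."""
    return -sum(yi * math.log(ai) for yi, ai in zip(y, a))


a = softmax([2.0, 1.0, 0.1])   # toy fully connected outputs
loss = cross_entropy(a, [1, 0, 0])  # one-hot first label for class 0
```

Because y is one-hot, the loss reduces to -log of the probability assigned to the true class, which is what back propagation then minimizes.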
Step 6018, each target training sample sequentially executes the step of inputting the target first label set and the vehicle sample image corresponding to the target first label set into the target multi-classification model and subsequent steps to obtain a trained target multi-classification model.
As an optional embodiment of the present application, the target training sample set may be divided into a training set and a test set, where the training set is used to perform the processes of step 6011 to step 6018 to obtain a target multi-classification model. The test set is used for verifying the classification accuracy of the target multi-classification model. And if the classification accuracy of the target multi-classification model is lower than the threshold value, repeatedly acquiring a new training set to train the target multi-classification model until the classification accuracy is not lower than the threshold value.
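The train/verify loop described in this optional embodiment can be outlined as below. All names, the accuracy threshold, and the toy model are assumptions for illustration only.

```python
# Sketch of the optional embodiment above: train on a training set, verify on
# a test set, and fetch new training data until the classification accuracy
# reaches the threshold. Threshold and callbacks are assumed.
THRESHOLD = 0.9


def train_until_accurate(train_once, evaluate, max_rounds=10):
    """train_once() consumes one training set; evaluate() is test accuracy."""
    for rounds in range(1, max_rounds + 1):
        train_once()
        if evaluate() >= THRESHOLD:
            return rounds
    return max_rounds


# toy model whose test accuracy improves by 0.25 per round of new data
state = {"acc": 0.0}


def train_once():
    state["acc"] += 0.25


rounds = train_until_accurate(train_once, lambda: state["acc"])
```

With the toy model the loop stops after the fourth round, once accuracy first meets the threshold.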
In this embodiment, because the first labels of the different target vehicle information are uniformly distributed in the target training sample set, the target multi-classification model trained on that set avoids over-fitting to any single class, its classification performance is more balanced, and the classification accuracy for each type of vehicle information is high. Moreover, with multi-task joint training, the local minima of the different tasks lie in different positions, and their interaction helps prevent the hidden layers from getting stuck in a local minimum.
Fig. 9 shows a schematic diagram of a training sample generation apparatus 9 provided in the present application. The training sample generation apparatus shown in fig. 9 includes:
an obtaining unit 91, configured to obtain an original label set corresponding to each of the plurality of vehicle sample images; the original label set comprises a set formed by labels corresponding to different pieces of vehicle information;
the processing unit 92 is configured to retain a first tag corresponding to a single piece of target vehicle information in the original tag set, and replace second tags corresponding to the remaining pieces of vehicle information with preset parameters to obtain a first tag set corresponding to the original tag set; the preset parameters are used for suspending the training operation or the testing operation of the rest vehicle information on the multi-classification model; the first label is used for training or testing the multi-classification model;
the arranging unit 93 is configured to obtain, in accordance with preset sequences of different pieces of target vehicle information, one target first tag set corresponding to each preset sequence from among the plurality of first tag sets, and form a first tag set group; the target first label set is a first label set comprising the target vehicle information corresponding to the preset sequence;
a circulation unit 94, configured to execute the steps of obtaining, in a plurality of first label sets, one target first label set corresponding to each preset order according to the preset order of different pieces of target vehicle information to form a first label set group, and taking each first label set group and a vehicle sample image corresponding to each first label set group as a target training sample set; the target training sample set is used to train the multi-classification model.
The generation device of the training sample obtains a first label set by reserving a first label corresponding to a single piece of target vehicle information in an original label set and replacing second labels corresponding to the rest pieces of vehicle information with preset parameters. And the preset parameters are used for suspending the training operation or the testing operation of the rest vehicle information on the multi-classification model. That is, there is one and only one tag in each first set of tags. And then according to the preset sequence of different target vehicle information, obtaining a target first label set corresponding to each preset sequence from the plurality of first label sets to form a first label set group, and executing the steps in a circulating manner to obtain a target training sample set. After the cyclic arrangement, the number of the first tags of different target vehicle information is uniform and the distribution has cyclicity, so that the uniform distribution of the first tags of different target vehicle information can be ensured. The technical problem of uneven data distribution of each label in the training sample set is solved.
Fig. 10 is a schematic diagram of a terminal device according to an embodiment of the present invention. As shown in fig. 10, a terminal device 10 of this embodiment includes: a processor 1001, a memory 1002, and a computer program 1003, such as a training sample acquisition program, stored in the memory 1002 and executable on the processor 1001. When the processor 1001 executes the computer program 1003, the steps in each embodiment of the training sample generation method described above are implemented, for example, steps 101 to 104 shown in fig. 1. Alternatively, when the processor 1001 executes the computer program 1003, the functions of the units in the above-described device embodiments are implemented, for example, the functions of the units 91 to 94 shown in fig. 9.
Illustratively, the computer program 1003 may be divided into one or more units, which are stored in the memory 1002 and executed by the processor 1001 to implement the present invention. The one or more units may be a series of computer program instruction segments capable of performing specific functions, which are used for describing the execution process of the computer program 1003 in the terminal device 10. For example, the specific functions of the computer program 1003 that may be divided into units are as follows:
the system comprises an acquisition unit, a processing unit and a display unit, wherein the acquisition unit is used for acquiring an original label set corresponding to each of a plurality of vehicle sample images; the original label set comprises a set formed by labels corresponding to different pieces of vehicle information;
the processing unit is used for reserving a first label corresponding to single target vehicle information in the original label set and replacing second labels corresponding to other vehicle information with preset parameters to obtain a first label set corresponding to the original label set; the preset parameters are used for suspending the training operation or the testing operation of the rest vehicle information on the multi-classification model; the first label is used for training or testing the multi-classification model;
the arrangement unit is used for acquiring a target first label set corresponding to each preset sequence from a plurality of first label sets according to the preset sequence of different target vehicle information to form a first label set group; the target first label set is a first label set comprising the target vehicle information corresponding to the preset sequence;
a circulation unit, configured to execute the steps of obtaining, in a plurality of first label sets, one target first label set corresponding to each preset order according to the preset order of different pieces of target vehicle information to form a first label set group, and taking each first label set group and a vehicle sample image corresponding to each first label set group as a target training sample set; the target training sample set is used to train the multi-classification model.
The terminal device includes, but is not limited to, the processor 1001 and the memory 1002. Those skilled in the art will appreciate that fig. 10 is merely an example of the terminal device 10 and does not constitute a limitation on the terminal device 10, which may include more or fewer components than shown, combine certain components, or use different components; for example, the terminal device may also include input/output devices, network access devices, buses, and the like.
The Processor 1001 may be a Central Processing Unit (CPU), another general purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general purpose processor may be a microprocessor, or the processor may be any conventional processor.
The memory 1002 may be an internal storage unit of the terminal device 10, such as a hard disk or a memory of the terminal device 10. The memory 1002 may also be an external storage device of the terminal device 10, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, or a Flash memory Card (Flash Card) provided on the terminal device 10. Further, the memory 1002 may include both an internal storage unit and an external storage device of the terminal device 10. The memory 1002 is used for storing the computer programs and other programs and data required by the training sample generation apparatus. The memory 1002 may also be used to temporarily store data that has been output or is to be output.
It should be understood that, the sequence numbers of the steps in the foregoing embodiments do not imply an execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present application.
It should be noted that, for the information interaction, execution process, and other contents between the above-mentioned devices/units, the specific functions and technical effects thereof are based on the same concept as those of the embodiment of the method of the present application, and specific reference may be made to the part of the embodiment of the method, which is not described herein again.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-mentioned division of the functional units and modules is illustrated, and in practical applications, the above-mentioned function distribution may be performed by different functional units and modules according to needs, that is, the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-mentioned functions. Each functional unit and module in the embodiments may be integrated in one processing unit, or each unit may exist alone physically, or two or more units are integrated in one unit, and the integrated unit may be implemented in a form of hardware, or in a form of software functional unit. In addition, specific names of the functional units and modules are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working processes of the units and modules in the system may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
The embodiments of the present application further provide a computer-readable storage medium, where a computer program is stored, and when the computer program is executed by a processor, the computer program implements the steps in the above-mentioned method embodiments.
The embodiments of the present application further provide a computer program product which, when run on a mobile terminal, enables the mobile terminal to implement the steps in the above method embodiments.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on such understanding, all or part of the processes in the methods of the embodiments described above can be implemented by a computer program, which can be stored in a computer-readable storage medium and, when executed by a processor, implements the steps of the embodiments of the methods described above. The computer program comprises computer program code, which may be in the form of source code, object code, an executable file, or some intermediate form. The computer-readable medium may include at least: any entity or device capable of carrying the computer program code to a photographing apparatus/terminal apparatus, a recording medium, a computer memory, a Read-Only Memory (ROM), a Random Access Memory (RAM), an electrical carrier signal, a telecommunication signal, and a software distribution medium, such as a USB flash drive, a removable hard disk, a magnetic disk, or an optical disk. In certain jurisdictions, in accordance with legislation and patent practice, computer-readable media may not include electrical carrier signals or telecommunication signals.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to the related descriptions of other embodiments for parts that are not described or illustrated in a certain embodiment.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus/network device and method may be implemented in other ways. For example, the above-described apparatus/network device embodiments are merely illustrative, and for example, the division of the modules or units is only one logical division, and there may be other divisions when actually implementing, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not implemented. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units.
It will be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It should also be understood that the term "and/or" as used in this specification and the appended claims refers to and includes any and all possible combinations of one or more of the associated listed items.
As used in this specification and the appended claims, the term "if" may be interpreted contextually as "when", "upon", "in response to determining", or "in response to monitoring". Similarly, the phrase "if it is determined" or "if [a described condition or event] is monitored" may be interpreted, depending on the context, to mean "upon determining", "in response to determining", "upon monitoring [a described condition or event]", or "in response to monitoring [a described condition or event]".
Furthermore, in the description of the present application and the appended claims, the terms "first," "second," "third," and the like are used for distinguishing between descriptions and not necessarily for describing or implying relative importance.
Reference throughout this specification to "one embodiment" or "some embodiments," or the like, means that a particular feature, structure, or characteristic described in connection with the embodiment is included in one or more embodiments of the present application. Thus, appearances of the phrases "in one embodiment," "in some embodiments," "in other embodiments," or the like, in various places throughout this specification are not necessarily all referring to the same embodiment, but rather "one or more but not all embodiments" unless specifically stated otherwise. The terms "comprising," "including," "having," and variations thereof mean "including, but not limited to," unless expressly specified otherwise.
The above-mentioned embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not substantially depart from the spirit and scope of the embodiments of the present application and are intended to be included within the scope of the present application.

Claims (10)

1. A method for generating training samples, the method comprising:
obtaining an original label set corresponding to each of a plurality of vehicle sample images; wherein the original label set is a set formed by labels corresponding to different pieces of vehicle information;
retaining a first label corresponding to a single piece of target vehicle information in the original label set, and replacing second labels corresponding to the remaining pieces of vehicle information with preset parameters, to obtain a first label set corresponding to the original label set; wherein the preset parameters are used to suspend training or testing operations of the multi-classification model with respect to the remaining pieces of vehicle information, and the first label is used for training or testing the multi-classification model;
obtaining, from a plurality of first label sets and according to a preset order of different pieces of target vehicle information, one target first label set corresponding to each preset order, to form a first label set group; wherein the target first label set is a first label set containing the target vehicle information corresponding to the preset order;
and cyclically executing the step of obtaining, from the plurality of first label sets, one target first label set corresponding to each preset order according to the preset order of different pieces of target vehicle information to form a first label set group, and taking each first label set group together with the vehicle sample image corresponding to it as a target training sample set; wherein the target training sample set is used for training or testing the multi-classification model.
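As a purely illustrative, non-limiting sketch (not part of the claim language), the label-masking and grouping steps of claim 1 might look like the following, where labels are held as integer lists and `-1` is a hypothetical stand-in for the "preset parameter" that suspends the remaining vehicle information:

```python
# Illustrative sketch only: models the label masking and grouping of claim 1.
# IGNORE (-1) is a hypothetical preset parameter, not a value from the patent.
IGNORE = -1

def mask_labels(original_label_set, target_index):
    """Keep the first label for the single target vehicle information at
    target_index; replace every other label with the preset parameter."""
    return [label if i == target_index else IGNORE
            for i, label in enumerate(original_label_set)]

def build_training_samples(original_label_sets, images, num_attributes):
    """For each preset order (one attribute index per pass), collect one
    masked label set per image and pair the group with its source images."""
    samples = []
    for target_index in range(num_attributes):  # preset order of attributes
        group = [mask_labels(labels, target_index)
                 for labels in original_label_sets]
        samples.append(list(zip(images, group)))
    return samples

labels = [[2, 5, 1], [3, 0, 4]]        # e.g. [color, type, brand] per image
imgs = ["img_a.jpg", "img_b.jpg"]
out = build_training_samples(labels, imgs, num_attributes=3)
# out[0] masks everything except attribute 0:
# [('img_a.jpg', [2, -1, -1]), ('img_b.jpg', [3, -1, -1])]
```

The attribute names and sample values above are invented for illustration; the claim itself does not fix the number of pieces of vehicle information or the value of the preset parameter.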
2. The generation method of claim 1, wherein said obtaining an original label set corresponding to each of a plurality of vehicle sample images comprises:
obtaining a plurality of first vehicle sample images and a pre-trained first multi-classification model;
inputting the first vehicle sample image into the first multi-classification model for processing, to obtain a first original label set, corresponding to the first vehicle sample image, output by the first multi-classification model;
obtaining a plurality of second vehicle sample images augmented on the basis of the first original label set;
inputting the second vehicle sample images into the first multi-classification model for processing, to obtain a second original label set, corresponding to each second vehicle sample image, output by the first multi-classification model;
and taking the second original label sets as the original label sets corresponding to the plurality of vehicle sample images.
3. The generation method of claim 2, wherein after the step of cyclically obtaining, according to the preset order of different pieces of target vehicle information, one target first label set corresponding to each preset order from the plurality of first label sets to form a first label set group, and taking each first label set group and the vehicle sample image corresponding to it as a target training sample set, the method further comprises:
training the first multi-classification model through the target training sample set to obtain a second multi-classification model;
and taking the second multi-classification model as the pre-trained first multi-classification model, taking the images in the target training sample set as the first vehicle sample images, and cyclically executing the step of obtaining the plurality of first vehicle sample images and the pre-trained first multi-classification model and its subsequent steps, until the target training sample set meets a preset condition.
4. The generation method of claim 2, wherein said inputting the first vehicle sample image into the first multi-classification model for processing to obtain the first original label set, corresponding to the first vehicle sample image, output by the first multi-classification model comprises:
filtering the plurality of first vehicle sample images through a vehicle filtering model to obtain filtered vehicle sample images;
intercepting, through a vehicle detection model, a target image corresponding to the image area where a vehicle is located in each filtered vehicle sample image;
and inputting the target image into the first multi-classification model for processing to obtain a first original label set corresponding to the first vehicle sample image output by the first multi-classification model.
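As an illustrative, non-authoritative sketch (not part of the claim language), the filter-then-detect-then-classify pipeline of claim 4 can be chained as three stages; all three "models" below are hypothetical stand-ins, not real APIs from the patent:

```python
# Illustrative sketch of the claim-4 pipeline: filter the sample images,
# crop the vehicle region from each retained image, then classify the crop.
# Every function here is a hypothetical stand-in for a learned model.

def vehicle_filter_model(images):
    """Stand-in filter: keep only images flagged as containing a vehicle."""
    return [img for img in images if img.get("has_vehicle")]

def vehicle_detection_model(image):
    """Stand-in detector: return the crop described by the vehicle box."""
    x0, y0, x1, y1 = image["vehicle_box"]
    return {"crop": (x0, y0, x1, y1), "source": image["name"]}

def first_multi_classification_model(target_image):
    """Stand-in classifier: emit a label set for the cropped vehicle."""
    return {"source": target_image["source"], "labels": [0, 0, 0]}

def label_images(first_vehicle_sample_images):
    filtered = vehicle_filter_model(first_vehicle_sample_images)
    targets = [vehicle_detection_model(img) for img in filtered]
    return [first_multi_classification_model(t) for t in targets]

samples = [
    {"name": "a.jpg", "has_vehicle": True, "vehicle_box": (10, 10, 90, 60)},
    {"name": "b.jpg", "has_vehicle": False},
]
result = label_images(samples)   # only a.jpg survives the filter stage
```

The point of the ordering is that detection and classification only ever run on images that survived the filtering stage.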
5. The generation method of claim 1, wherein the step of retaining a first label corresponding to a single piece of target vehicle information in the original label set and replacing second labels corresponding to the remaining pieces of vehicle information with preset parameters to obtain the first label set corresponding to the original label set comprises:
correcting error label sets among the original label sets to obtain second label sets; wherein an error label set is a label set obtained through erroneous classification by the multi-classification model;
and retaining a first label corresponding to the single piece of target vehicle information in each second label set, and replacing second labels corresponding to the remaining pieces of vehicle information with preset parameters, to obtain the first label set corresponding to each second label set.
6. The generation method of any one of claims 1 to 5, further comprising:
and training a target multi-classification model through the target training sample set to obtain the trained target multi-classification model.
7. The generation method of claim 6, wherein the target multi-classification model comprises an input layer, a feature extraction layer, a plurality of first branch networks, and a second branch network;
the training of the target multi-classification model through the target training sample set to obtain the trained target multi-classification model comprises the following steps:
inputting a target first label set and a vehicle sample image corresponding to the target first label set into the target multi-classification model;
the input layer inputs the vehicle sample image into a feature extraction layer;
the feature extraction layer performs feature extraction on the vehicle sample image to obtain feature data of the vehicle sample image;
the input layer divides the target first label set to obtain a plurality of preset parameters and first labels corresponding to a plurality of pieces of vehicle information;
the input layer inputs each of the plurality of preset parameters into the first branch network corresponding to that preset parameter, and inputs the first label into the second branch network;
each first branch network suspends, according to the preset parameter it receives, the training operation on the target multi-classification model for the vehicle information corresponding to that preset parameter;
the second branch network obtains a target classification prediction result according to the feature data, calculates a loss between the target classification prediction result and the first label, and updates network weight parameters in the target multi-classification model according to the loss;
and for each target training sample, sequentially executing the step of inputting the target first label set and the vehicle sample image corresponding to it into the target multi-classification model and the subsequent steps, to obtain the trained target multi-classification model.
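As a simplified, non-authoritative sketch of how a branch can suspend its contribution when it receives the preset parameter (using `-1` as a hypothetical sentinel and plain Python instead of any specific deep-learning framework), a masked per-branch loss might look like:

```python
# Illustrative sketch: per-branch loss masking in the spirit of claim 7.
# A branch that receives the preset parameter (-1 here, a hypothetical
# sentinel) contributes zero loss, so it does not drive weight updates.
import math

IGNORE = -1

def branch_loss(predicted_probs, label):
    """Cross-entropy for one branch; suspended when the label is the
    preset parameter."""
    if label == IGNORE:
        return 0.0  # this branch's vehicle information is suspended
    return -math.log(predicted_probs[label])

def total_loss(per_branch_probs, label_set):
    """Sum losses over branches; only the branch holding the single
    non-masked first label contributes to the total."""
    return sum(branch_loss(p, y) for p, y in zip(per_branch_probs, label_set))

probs = [[0.7, 0.2, 0.1], [0.1, 0.8, 0.1], [0.3, 0.3, 0.4]]
masked_labels = [IGNORE, 1, IGNORE]     # only the second attribute trains
loss = total_loss(probs, masked_labels) # equals -log(0.8)
```

This mirrors the `ignore_index`-style masking common in multi-task training, which seems to be the mechanism the preset parameters implement; the branch structure of the actual claimed model (separate first and second branch networks) is collapsed here into a single loop for brevity.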
8. An apparatus for generating training samples, the apparatus comprising:
an acquisition unit, configured to obtain an original label set corresponding to each of a plurality of vehicle sample images; wherein the original label set is a set formed by labels corresponding to different pieces of vehicle information;
a processing unit, configured to retain a first label corresponding to a single piece of target vehicle information in the original label set, and to replace second labels corresponding to the remaining pieces of vehicle information with preset parameters, to obtain a first label set corresponding to the original label set; wherein the preset parameters are used to suspend training or testing operations of the multi-classification model with respect to the remaining pieces of vehicle information, and the first label is used for training or testing the multi-classification model;
an arrangement unit, configured to obtain, from a plurality of first label sets and according to the preset order of different pieces of target vehicle information, one target first label set corresponding to each preset order, to form a first label set group; wherein the target first label set is a first label set containing the target vehicle information corresponding to the preset order;
and a circulation unit, configured to cyclically execute the step of obtaining, from the plurality of first label sets, one target first label set corresponding to each preset order according to the preset order of different pieces of target vehicle information to form a first label set group, and to take each first label set group and the vehicle sample image corresponding to it as a target training sample set; wherein the target training sample set is used for training or testing the multi-classification model.
9. A terminal device, characterized in that the terminal device comprises a memory, a processor and a computer program stored in the memory and executable on the processor, the processor implementing the steps of the method according to any one of claims 1 to 7 when executing the computer program.
10. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the steps of the method according to any one of claims 1 to 7.
CN202110791909.3A 2021-07-13 2021-07-13 Training sample generation method and generation device Active CN113408482B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110791909.3A CN113408482B (en) 2021-07-13 2021-07-13 Training sample generation method and generation device

Publications (2)

Publication Number Publication Date
CN113408482A true CN113408482A (en) 2021-09-17
CN113408482B CN113408482B (en) 2023-10-10

Family

ID=77686183

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110791909.3A Active CN113408482B (en) 2021-07-13 2021-07-13 Training sample generation method and generation device

Country Status (1)

Country Link
CN (1) CN113408482B (en)

Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106874840A (en) * 2016-12-30 2017-06-20 东软集团股份有限公司 Vehicle information recognition method and device
CN106971174A (en) * 2017-04-24 2017-07-21 华南理工大学 A kind of CNN models, CNN training methods and the vein identification method based on CNN
US20190050711A1 (en) * 2017-08-08 2019-02-14 Neusoft Corporation Method, storage medium and electronic device for detecting vehicle crashes
CN109903127A (en) * 2019-02-14 2019-06-18 广州视源电子科技股份有限公司 A kind of group recommending method, device, storage medium and server
CN109961094A (en) * 2019-03-07 2019-07-02 北京达佳互联信息技术有限公司 Sample acquiring method, device, electronic equipment and readable storage medium storing program for executing
US20190325259A1 (en) * 2018-04-12 2019-10-24 Discovery Communications, Llc Feature extraction and machine learning for automated metadata analysis
CN110781919A (en) * 2019-09-23 2020-02-11 腾讯云计算(北京)有限责任公司 Classification model training method, classification device and classification equipment
WO2020083073A1 (en) * 2018-10-23 2020-04-30 苏州科达科技股份有限公司 Non-motorized vehicle image multi-label classification method, system, device and storage medium
CN112101175A (en) * 2020-09-09 2020-12-18 沈阳帝信人工智能产业研究院有限公司 Expressway vehicle detection and multi-attribute feature extraction method based on local images
CN112257650A (en) * 2020-11-04 2021-01-22 南京领行科技股份有限公司 Passenger portrait method, device, storage medium and electronic equipment
CN112529100A (en) * 2020-12-24 2021-03-19 深圳前海微众银行股份有限公司 Training method and device for multi-classification model, electronic equipment and storage medium
CN113033715A (en) * 2021-05-24 2021-06-25 禾多科技(北京)有限公司 Target detection model training method and target vehicle detection information generation method

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114677541A (en) * 2022-03-23 2022-06-28 成都智元汇信息技术股份有限公司 Method and system for extracting adhesion sample set based on target
CN114677541B (en) * 2022-03-23 2023-04-28 成都智元汇信息技术股份有限公司 Method and system for extracting bonding sample set based on target

Also Published As

Publication number Publication date
CN113408482B (en) 2023-10-10

Similar Documents

Publication Publication Date Title
CN109359696B (en) Vehicle money identification method, system and storage medium
CN105144239B (en) Image processing apparatus, image processing method
CN112561080B (en) Sample screening method, sample screening device and terminal equipment
CN110188829B (en) Neural network training method, target recognition method and related products
JP2015087903A (en) Apparatus and method for information processing
KR20170109304A (en) Method for parallel learning of cascade classifier by object recognition
US20230298314A1 (en) Image clustering method and apparatus, computer device, and storage medium
CN113408482A (en) Training sample generation method and device
CN111783812A (en) Method and device for identifying forbidden images and computer readable storage medium
CN114494823A (en) Commodity identification, detection and counting method and system in retail scene
CN112966687B (en) Image segmentation model training method and device and communication equipment
CN116266387A (en) YOLOV4 image recognition algorithm and system based on re-parameterized residual error structure and coordinate attention mechanism
CN113902944A (en) Model training and scene recognition method, device, equipment and medium
CN110728229B (en) Image processing method, device, equipment and storage medium
CN115374517A (en) Testing method and device for wiring software, electronic equipment and storage medium
US10311084B2 (en) Method and system for constructing a classifier
US11216922B2 (en) Systems and methods for recognition of user-provided images
CN113160135A (en) Intelligent colon lesion identification method, system and medium based on unsupervised migration image classification
CN112214639A (en) Video screening method, video screening device and terminal equipment
US20130080137A1 (en) Conversion method and system
CN108229521B (en) Object recognition network training method, device and system and application thereof
CN112989869A (en) Optimization method, device and equipment of face quality detection model and storage medium
CN111400522B (en) Traffic sign recognition method, training method and equipment
CN117332303B (en) Label correction method for clusters
CN114422199B (en) CMS (content management system) identification method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant