CN117671400A - Sample collection method, device, electronic equipment and storage medium - Google Patents


Info

Publication number
CN117671400A
CN117671400A
Authority
CN
China
Prior art keywords
target object
feature vector
classification result
classification
feature
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210958491.5A
Other languages
Chinese (zh)
Inventor
贾书军
郭瑞瑞
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenyang Meihang Intelligent Network Automobile Technology Co ltd
Original Assignee
Shenyang Meihang Intelligent Network Automobile Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenyang Meihang Intelligent Network Automobile Technology Co ltd
Priority to CN202210958491.5A
Publication of CN117671400A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77 Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/774 Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/764 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks

Abstract

The invention discloses a sample collection method, a sample collection device, an electronic device, and a storage medium. The method comprises the following steps: acquiring continuous frame images and determining a feature vector of a target object based on the continuous frame images; matching the feature vector of the target object against a pre-established feature vector sample model to obtain a classification result of the target object; and collecting samples based on the classification result of the target object and updating the feature vector sample model based on the collected samples. This scheme enables automatic updating of the feature vector sample model and improves the quality of the samples it contains.

Description

Sample collection method, device, electronic equipment and storage medium
Technical Field
The present invention relates to the field of image processing technologies, and in particular, to a sample collection method, a sample collection device, an electronic device, and a storage medium.
Background
In research on intelligent transportation systems, on-board vision systems based on image processing technology are an important branch. Road traffic signs in particular are an important object of intelligent transportation research, because they carry key road traffic guidance information.
In the process of implementing the present invention, the inventor finds that at least the following technical problems exist in the prior art:
samples of traffic signs are difficult to acquire, and the samples obtained are not rich enough in category or visual variety and are insufficiently representative (low-value samples).
Disclosure of Invention
The invention provides a sample collection method, a sample collection device, an electronic device, and a storage medium, which are used to solve the problem that collected samples are insufficiently rich and insufficiently representative (of low value).
According to an aspect of the present invention, there is provided a sample collection method comprising:
acquiring continuous frame images, and determining feature vectors of a target object based on the continuous frame images;
matching the feature vector of the target object in a pre-established feature vector sample model to obtain a classification result of the target object;
and collecting samples based on the classification result of the target object, and updating the feature vector sample model based on the collected samples.
According to another aspect of the present invention there is provided a sample collection device comprising:
the feature vector extraction module is used for acquiring continuous frame images and determining feature vectors of a target object based on the continuous frame images;
The classification result determining module is used for matching the feature vector of the target object in a pre-established feature vector sample model to obtain a classification result of the target object;
and the sample collection module is used for collecting samples based on the classification result of the target object and updating the feature vector sample model based on the collected samples.
According to another aspect of the present invention, there is provided an electronic apparatus including:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores a computer program executable by the at least one processor to enable the at least one processor to perform the sample collection method of any one of the embodiments of the present invention.
According to another aspect of the present invention, there is provided a computer readable storage medium storing computer instructions for causing a processor to perform the sample collection method according to any one of the embodiments of the present invention.
According to the technical scheme, continuous frame images are acquired and the feature vector of the target object is determined from them; the feature vector is then matched against the pre-established feature vector sample model to obtain the classification result of the target object. Once the classification result is obtained, samples can be collected according to it, so that the collected samples are screened and high-value samples are retained. Updating the feature vector sample model with these high-value collected samples improves the quality of the samples in the model and solves the problem of low sample value.
It should be understood that the description in this section is not intended to identify key or critical features of the embodiments of the invention or to delineate the scope of the invention. Other features of the present invention will become apparent from the description that follows.
Drawings
To illustrate the technical solutions of the embodiments of the present invention more clearly, the drawings required for describing the embodiments are briefly introduced below. Apparently, the drawings in the following description show only some embodiments of the present invention, and a person skilled in the art may derive other drawings from them without inventive effort.
FIG. 1 is a flow chart of a sample collection method according to a first embodiment of the present invention;
FIG. 2 is a schematic diagram of a tracking and classifying method according to a first embodiment of the present invention;
FIG. 3 is a flow chart of a sample collection method according to a second embodiment of the present invention;
FIG. 4 is a flow chart of a sample collection method according to a third embodiment of the present invention;
FIG. 5 is a flow chart of a sample collection method according to a fourth embodiment of the present invention;
FIG. 6 is a schematic structural diagram of a sample collection device according to a fifth embodiment of the present invention;
FIG. 7 is a schematic structural diagram of an electronic device implementing a sample collection method according to an embodiment of the present invention.
Detailed Description
To help those skilled in the art better understand the present invention, the technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings. Apparently, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by those skilled in the art based on the embodiments of the present invention without inventive effort shall fall within the scope of the present invention.
It should be noted that the terms "target," "initial," and the like in the description and claims of the present invention and the above-described drawings are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that the embodiments of the invention described herein may be implemented in sequences other than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
Example 1
Fig. 1 is a flowchart of a sample collection method according to an embodiment of the present invention, where the method may be performed by a sample collection device, and the sample collection device may be implemented in hardware and/or software, and the sample collection device may be configured in a computer terminal and/or a server. As shown in fig. 1, the method includes:
s110, acquiring continuous frame images, and determining the feature vector of the target object based on the continuous frame images.
In the embodiment of the present invention, continuous frame images refer to consecutive multi-frame images. For example, in a traffic sign recognition scene, the continuous frame images may be consecutive frames of a road traffic video containing traffic signs.
For example, the continuous frame images may be acquired by an image acquisition device, which may be a camera mounted on the vehicle; alternatively, the continuous frame images may be retrieved from a preset storage path. The method of obtaining the continuous frame images is not limited in this embodiment.
In some embodiments, after the continuous frame images are acquired, they may be detected and tracked. Detection algorithms may include, but are not limited to, YOLOv4, YOLOv5, SSD (Single Shot MultiBox Detector), etc.; tracking algorithms include, but are not limited to, optical flow tracking algorithms, particle filtering algorithms, deep learning algorithms, etc.
In some embodiments, after the continuous frame images are acquired, feature extraction may be further performed on the continuous frame images to obtain feature vectors of target objects in the continuous frame images, where the target objects may be objects to be tracked, for example, in a traffic sign recognition scene, the target objects may be traffic signs.
And S120, matching the feature vector of the target object in a pre-established feature vector sample model to obtain a classification result of the target object.
In the embodiment of the invention, the feature vector sample model refers to a sample library storing feature vectors of a plurality of target objects. The classification result refers to a class of the target object, for example, the classification result may be a class a, a class B, or the like.
Specifically, the feature vector of the target object may be compared with each type of feature vector in the pre-established feature vector sample model, and the classification result of the target object may be determined according to the comparison result. Wherein, the category feature vector refers to the feature vector of the known target object category. In some embodiments, the comparison result may be a feature vector similarity result; in some embodiments, the comparison result may also be a feature vector distance result, which is not limited in this embodiment.
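As a concrete illustration of this matching step, the following sketch compares a target's feature vector against every class feature vector in a sample library using cosine similarity and returns the best-matching class. This is an assumption of one reasonable implementation: the names `sample_model` and `classify` and the choice of similarity metric are illustrative, since the patent only requires that a similarity or distance comparison determine the classification.

```python
import math

def cosine_similarity(a, b):
    # Cosine similarity between two equal-length feature vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def classify(feature, sample_model):
    # sample_model maps class name -> list of class feature vectors.
    # Returns the class whose stored vector is most similar, plus the score.
    best_class, best_sim = None, -1.0
    for cls, vectors in sample_model.items():
        for vec in vectors:
            sim = cosine_similarity(feature, vec)
            if sim > best_sim:
                best_class, best_sim = cls, sim
    return best_class, best_sim
```

A distance-based variant (e.g. Euclidean, picking the minimum) would fit the same interface.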
And S130, collecting samples based on the classification result of the target object, and updating the feature vector sample model based on the collected samples.
After the classification result of the target object is obtained, samples can be collected according to it, so that the samples are screened and high-value collected samples are obtained. The feature vector sample model is then updated with these high-value samples, improving the quality of the samples in the model. Here, sample collection means gathering samples according to the classification result.
On the basis of the above embodiments, after updating the feature vector sample model based on the collected samples, the method further includes: counting the frequency information of each candidate feature vector in the updated feature vector sample model; if the frequency information of any candidate feature vector meets the preset threshold condition, the feature information associated with the candidate feature vector is sent to a cloud server so as to update a feature vector library.
The candidate feature vector refers to a feature vector of which the target object category is not confirmed through the cloud. The feature information associated with the candidate feature vector may include, but is not limited to, feature pictures, feature vectors, belonging categories, and the like. The frequency information refers to count information of feature vectors. The preset threshold condition may be set empirically, and the specific value of the threshold is not limited herein. The feature vector library is a database which is located at the cloud and stores feature vector samples and feature information associated with the feature vector samples.
For example, in a traffic sign recognition scene, the count information of the candidate feature vectors of each category is tallied. If a count reaches the preset threshold condition, the candidate feature vector is taken to be a genuine traffic sign feature, and the information associated with it (feature picture, feature vector, category, etc.) can be uploaded to a cloud server. The cloud then generates optimal traffic sign feature vectors from the uploaded data (positional relationships, categories, etc.), making the traffic sign feature vectors more representative and improving the detection effect.
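The counting-and-upload rule above can be sketched as follows. The threshold value and the data structure are assumptions; the patent only states that the threshold is set empirically.

```python
from collections import Counter

UPLOAD_THRESHOLD = 10  # assumed value; the patent leaves the threshold to empirical tuning

def record_match(counts, feature_id):
    # Increment the use count of a matched candidate feature vector and
    # report whether its associated info should be sent to the cloud server.
    counts[feature_id] += 1
    return counts[feature_id] >= UPLOAD_THRESHOLD

counts = Counter()
ready = False
for _ in range(UPLOAD_THRESHOLD):
    ready = record_match(counts, "Class A/candidate 1")
# ready is now True: the feature picture, feature vector and category of this
# candidate would be uploaded so the cloud can update the feature vector library
```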
On the basis of the above embodiments, after determining the classification result of the target object based on the feature vector, it further includes: under the condition that the target object is successfully tracked, counting classification results of a plurality of images to be identified in a tracking sequence; and taking the classification result with the highest occurrence frequency among the classification results of the plurality of images to be identified as the classification result of the tracking sequence.
It can be understood that by taking the classification result with the highest occurrence frequency among the classification results of the plurality of images to be identified as the classification result of the tracking sequence, the reliability of the classification result can be improved.
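This majority vote over the per-frame results of a tracking sequence amounts to a one-liner (a minimal sketch; the function name is illustrative):

```python
from collections import Counter

def sequence_classification(frame_results):
    # Classification of the tracking sequence = the per-frame classification
    # result that occurs most frequently.
    if not frame_results:
        return None
    return Counter(frame_results).most_common(1)[0][0]
```

For example, a sequence classified as ["A", "A", "B", "A"] yields "A", so a single misclassified frame does not change the sequence result.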
On the basis of the above embodiments, after obtaining the classification result of the target object, the method further includes: under the condition that the tracking of the target object fails, respectively acquiring the center points of at least two target objects; determining a direction vector between the center points of adjacent target objects; screening the unmatched frames based on the matching range determined by the included angle between the direction vectors to obtain target unmatched frames; and classifying the target unmatched frames to obtain classification results of the unmatched frames.
FIG. 2 is a schematic diagram illustrating a tracking and classifying method according to an embodiment of the present invention. When a target object is tracked, the center points of tracking box n, tracking box n-1, and tracking box n-2 can be acquired, where n denotes the index of the tracking box. Direction vectors v1 and v2 between the center points of adjacent tracking boxes are then determined, where v1 is the vector from the center of tracking box n-2 to the center of tracking box n-1, and v2 is the vector from the center of tracking box n-1 to the center of tracking box n. An unmatched box in FIG. 2 is a detection box that has not been matched to the track. The matchable range for the nearest unmatched box lies within the angle between v1 and v2, widened to both sides by angles alpha and beta; its extent (the dotted line in FIG. 2) may be 2-4 times the width of the tracking box. When multiple unmatched boxes fall within the matchable range, the nearest 2-3 are selected for classification. As shown in FIG. 2, unmatched box 0 and unmatched box 2 are outside the matchable range and are not classified; unmatched box 1 is within the matchable range and is classified. If the classification result of unmatched box 1 is consistent with the classification results of tracking box n, tracking box n-1, and tracking box n-2, unmatched box 1 is taken to be a box of the target object and is added to the tracking record.
On the basis of the foregoing embodiments, after the classification result of the unmatched box is obtained, the method further includes: if the classification result of the unmatched box is the same as the classification category of the target object, adding the unmatched box to the tracking sequence and modifying the tracking state to normal tracking; and if the classification result of the unmatched box differs from the classification category of the target object, determining the tracking state to be tracking failure.
If the classification result of the unmatched box is the same as the classification category of the target object, the mismatch is considered a tracking-box/detection-box matching issue, so the unmatched box is added to the tracking sequence, the tracking state is modified to normal tracking, and tracking continues. If the classification result of the unmatched box differs from the classification category of the target object, the tracking state is determined to be tracking failure.
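The geometric gating around the track can be sketched as follows. Boxes are (left, top, width, height) tuples; the concrete angle and distance limits are assumed stand-ins for the alpha/beta widening and the 2-4x width extent described above, and here only the latest motion direction v2 is used for simplicity.

```python
import math

def center(box):
    # box = (left, top, width, height)
    x, y, w, h = box
    return (x + w / 2.0, y + h / 2.0)

def angle_deg(u, v):
    # Angle in degrees between vectors u and v (0 if either is zero-length).
    nu, nv = math.hypot(*u), math.hypot(*v)
    if nu == 0 or nv == 0:
        return 0.0
    cos = (u[0] * v[0] + u[1] * v[1]) / (nu * nv)
    return math.degrees(math.acos(max(-1.0, min(1.0, cos))))

def in_match_range(track_boxes, candidate, max_angle=30.0, max_width_ratio=3.0):
    # track_boxes: the last three tracking boxes, oldest first (n-2, n-1, n).
    # A candidate unmatched box is kept when it lies roughly along the motion
    # direction v2 (center n-1 -> center n) and within a few box-widths.
    c_nm2, c_nm1, c_n = (center(b) for b in track_boxes)
    v2 = (c_n[0] - c_nm1[0], c_n[1] - c_nm1[1])
    cc = center(candidate)
    to_candidate = (cc[0] - c_n[0], cc[1] - c_n[1])
    width = track_boxes[-1][2]
    dist = math.hypot(*to_candidate)
    return dist <= max_width_ratio * width and angle_deg(v2, to_candidate) <= max_angle
```

A box passing this gate would then be classified and compared with the track's class, as described above.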
According to the technical scheme of this embodiment, continuous frame images are acquired and the feature vector of the target object is determined from them; the feature vector is matched against the pre-established feature vector sample model to obtain the classification result of the target object. Samples are then collected according to the classification result, so that the collected samples are screened and high-value samples are obtained; updating the feature vector sample model with these high-value collected samples improves the quality of the samples in the model.
Example two
Fig. 3 is a flowchart of a sample collection method according to a second embodiment of the present invention. This embodiment refines the step of "determining a feature vector of a target object based on the continuous frame images" from the above embodiment. Optionally, determining the feature vector of the target object based on the continuous frame images includes: performing target tracking on the continuous frame images to obtain at least one target object; and inputting image data corresponding to the target object into a classification feature extraction model to obtain the feature vector of the target object.
As shown in fig. 3, the method includes:
s210, acquiring continuous frame images, and carrying out target tracking on the continuous frame images to obtain at least one target object.
S220, inputting the image data corresponding to the target object into a classification feature extraction model to obtain a feature vector of the target object.
And S230, matching the feature vector of the target object in a pre-established feature vector sample model to obtain a classification result of the target object.
S240, collecting samples based on the classification result of the target object, and updating the feature vector sample model based on the collected samples.
In the present embodiment, the algorithm for realizing the target tracking may include, but is not limited to, an optical flow tracking algorithm, a particle filtering algorithm, a deep learning algorithm, and the like, which are not limited herein. In this embodiment, the target object may be a traffic sign located within the tracking frame.
Specifically: acquire the size information of the target object; match a classification feature extraction model corresponding to that size information; and input the image data corresponding to the target object into the matched classification feature extraction model to obtain the feature vector of the target object.
The size information of the target object may be a size of a tracking frame including the target object.
In this embodiment, classification feature extraction models with various sizes may be trained in advance, and in the process of actually extracting feature vectors, the corresponding classification feature extraction models may be matched according to the size of the target object, and image data corresponding to the target object may be input to the classification feature extraction model corresponding to the size information of the target object, so as to improve the extraction accuracy of the feature vectors.
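One plausible matching rule is to pick the model whose input size is nearest the tracking box. This nearest-size rule is an assumption; the patent only says the model is matched to the size information of the target object.

```python
# Assumed model input sizes, matching the three trained sizes described below.
MODEL_SIZES = (48, 84, 128)

def pick_model_size(box_width, box_height):
    # Nearest-size rule: choose the classification feature extraction model
    # whose square input side is closest to the larger box dimension.
    side = max(box_width, box_height)
    return min(MODEL_SIZES, key=lambda s: abs(s - side))
```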
The training process of the classification feature extraction models is illustrated as follows. Several classification feature extraction models of different input sizes, such as 48x48, 84x84, and 128x128, can be trained with a few-shot method. The 48x48 size corresponds to the small-size classification feature extraction model, whose parameters are shown in Table 1; 84x84 corresponds to the mid-size model, whose parameters are shown in Table 2; and 128x128 corresponds to the large-size model, whose parameters are shown in Table 3. The three models of different sizes are trained separately, and their weight parameters are independent of one another. It should be noted that the number and sizes of the classification feature extraction models may be adjusted to the actual situation; this embodiment is merely illustrative.
TABLE 1
Conv1,3,64,3,1 I:48x48x3 O:48x48x64
MaxPool(2) I:48x48x64 O:24x24x64
Conv2,64,64,3,1 I:24x24x64 O:24x24x64
MaxPool(2) I:24x24x64 O:12x12x64
Conv3,64,128,3,1 I:12x12x64 O:12x12x128
MaxPool(2) I:12x12x128 O:6x6x128
Conv4,128,128,3,1 I:6x6x128 O:6x6x128
AvgPool(1) I:6x6x128 O:1x1x128
Output feature vector O:[128]
TABLE 2
Conv1,3,64,3,1 I:84x84x3 O:84x84x64
MaxPool(2) I:84x84x64 O:42x42x64
Conv2,64,64,3,1 I:42x42x64 O:42x42x64
MaxPool(2) I:42x42x64 O:21x21x64
Conv3,64,64,3,1 I:21x21x64 O:21x21x64
MaxPool(2) I:21x21x64 O:10x10x64
Conv4,64,128,3,1 I:10x10x64 O:10x10x128
AvgPool(1) I:10x10x128 O:1x1x128
Output feature vector O:[128]
TABLE 3
According to the technical scheme of this embodiment, the classification feature extraction model corresponding to the size of the target object can be matched, and the image data corresponding to the target object is input into that model, improving the extraction accuracy of the feature vector.
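The input/output sizes listed in Table 1 can be checked with standard convolution/pooling arithmetic: a 3x3 convolution with stride 1 and padding 1 preserves spatial size, and 2x2 pooling halves it.

```python
def conv_out(size, kernel=3, stride=1, pad=1):
    # Output side length of a square convolution.
    return (size + 2 * pad - kernel) // stride + 1

def pool_out(size, window=2):
    # Output side length of non-overlapping pooling.
    return size // window

s = 48
s = pool_out(conv_out(s))  # Conv1 + MaxPool(2): 48 -> 48 -> 24
s = pool_out(conv_out(s))  # Conv2 + MaxPool(2): 24 -> 24 -> 12
s = pool_out(conv_out(s))  # Conv3 + MaxPool(2): 12 -> 12 -> 6
s = conv_out(s)            # Conv4: 6 -> 6
# Global average pooling then reduces the 6x6x128 map to a 128-dim feature vector
```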
Example III
Fig. 4 is a flowchart of a sample collection method according to a third embodiment of the present invention. This embodiment refines the step of "matching the feature vector of the target object in a pre-established feature vector sample model to obtain the classification result of the target object" from the above embodiment. Optionally, the feature vector of the target object is compared with each class feature vector in the pre-established feature vector sample model, and the classification result of the target object is determined based on the comparison result.
As shown in fig. 4, the method includes:
s310, acquiring continuous frame images, and determining the feature vector of the target object based on the continuous frame images.
S320, comparing the feature vector of the target object with each type of feature vector in a pre-established feature vector sample model, and determining a classification result of the target object based on the comparison result.
S330, collecting samples based on the classification result of the target object, and updating the feature vector sample model based on the collected samples.
In this embodiment, the category feature vector refers to a feature vector of a known target object category, and may include, but is not limited to, an artificial feature vector and a candidate feature vector, wherein the artificial feature vector refers to a feature vector of a target object category that has been confirmed by the cloud, and the candidate feature vector refers to a feature vector of a target object category that has not been confirmed by the cloud.
Exemplary, feature vector sample model specific relationships are shown in table 4, and are specifically as follows:
TABLE 4
Category Feature Feature value Usage count Feature image
Class A Artificial feature 1 128 Float n None
Class A Artificial feature 2 128 Float n None
Class A ......
Class A Artificial feature 10 128 Float n None
Class A Candidate feature 1 128 Float n Img1
Class A Candidate feature 2 128 Float n Img2
Class A ......
Class A Candidate feature 10 128 Float n Img10
Class B Artificial feature 1 128 Float n None
Class B Artificial feature 2 128 Float n None
Class B ......
Class B Artificial feature 10 128 Float n None
Class B Candidate feature 1 128 Float n Img1
Class B Candidate feature 2 128 Float n Img2
Class B ......
Class B Candidate feature 10 128 Float n Img10
... ... ... ... ...
It should be noted that the relationship between feature images and feature vectors may be established by the feature vector sample model. In some embodiments, the feature vector sample model may also be used to count feature vector usage, with the following counting rules: the artificial feature use count is initialized to 0; the candidate feature use count is initialized to -1 and is set to 0 once the candidate feature is fully set up.
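These initialization rules can be written down directly (a trivial sketch; the `feature_kind` and `candidate_ready` names are illustrative):

```python
def initial_use_count(feature_kind, candidate_ready=False):
    # Artificial features (cloud-confirmed) start counting at 0.
    # Candidate features start at -1 and move to 0 once fully set up.
    if feature_kind == "artificial":
        return 0
    if feature_kind == "candidate":
        return 0 if candidate_ready else -1
    raise ValueError("unknown feature kind: %s" % feature_kind)
```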
In some embodiments, the comparison result may be a feature vector similarity result; in some embodiments, the comparison result may also be a feature vector distance result, which is not limited in this embodiment. After the comparison result is obtained, a category feature vector with the highest similarity or the smallest distance can be selected from the plurality of comparison results, and the category corresponding to the category feature vector is used as the classification result of the target object.
Specifically: determine the distance between the feature vector and each class feature vector in the feature vector sample model, and compare those distances.
For example, the distance between the feature vector and each type of feature vector in the feature vector sample model may be a euclidean distance, a mahalanobis distance, or the like, which is not limited herein.
According to the technical scheme provided by the embodiment of the invention, the feature vector of the target object is compared with each type of feature vector in the pre-established feature vector sample model, and the classification result of the target object is determined based on the comparison result, so that the automatic classification of the feature vector is realized, and the efficiency of sample collection is improved.
Example IV
Fig. 5 is a flowchart of a sample collection method according to a fourth embodiment of the present invention. This embodiment refines the step of "sample collection based on the classification result of the target object" from the above embodiment. Optionally, whether the classification result of the target object meets a preset update condition is judged, and sample collection is performed based on the judgment result.
As shown in fig. 5, the method includes:
S410, acquiring continuous frame images, and determining the feature vector of the target object based on the continuous frame images.
And S420, matching the feature vector of the target object in a pre-established feature vector sample model to obtain a classification result of the target object.
And S430, judging whether the classification result of the target object meets a preset updating condition, collecting samples based on the judgment result, and updating the feature vector sample model based on the collected samples.
In this embodiment, the preset update condition refers to a judgment condition for screening the sample.
Specifically, under the condition that the target object is successfully tracked, if the classification result of the current target object is different from the classification category of the tracking sequence, classifying the detection frame image data corresponding to the current target object to obtain a detection frame classification result; and comparing the detection frame classification result with the tracking sequence classification category, and collecting samples based on the comparison result.
On the basis of the above embodiment, comparing the detection frame classification result with the tracking sequence classification category, and performing sample collection based on the comparison result includes: if the classification result of the detection frame is the same as the classification category of the tracking sequence, replacing the tracking frame corresponding to the current target object with the detection frame; if the classification result of the detection frame is different from the classification result of the tracking sequence and the classification result of the detection frame is the same as the classification result of the current target object, respectively adding the feature vector and the feature image corresponding to the current target object into the candidate feature vector list and the candidate feature picture list to finish sample collection.
For example, in a traffic sign recognition scenario, when the target object is successfully tracked and the classification result of the current target object is different from the classification category of the tracking sequence, the content of the detection frame corresponding to the tracking frame is classified.
If the classification result of the detection frame is consistent with the classification category of the tracking sequence, the tracking frame corresponding to the current target object is replaced with the detection frame, and tracking continues. In the process of replacing the tracking frame with the detection frame, smoothing may be performed so that the change in the frame is not too large.
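The smoothing step is not specified in detail; one common choice, shown here only as an assumed example, is to blend the detection box into the tracking box coordinates so the box does not jump abruptly.

```python
def smooth_box(track_box, det_box, alpha=0.5):
    """Blend a detection box (x1, y1, x2, y2) into the current tracking box
    with simple exponential smoothing; alpha weights the detection box."""
    return tuple(alpha * d + (1 - alpha) * t for t, d in zip(track_box, det_box))
```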
If the classification result of the detection frame is inconsistent with the classification category of the tracking sequence, the classification result of the detection frame is the same as the classification result of the current target object, and more than 75% of the classification results obtained at different sizes are identical, the feature vector corresponding to the current target object and the cropped feature image are added to the candidate feature vector list and the candidate feature picture list, respectively, so that a series of traffic sign samples are obtained and traffic sign sample collection is completed. Otherwise, tracking continues. The series of traffic sign samples refers to a plurality of feature vectors associated with the current target object and the feature images corresponding to those feature vectors.
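The collection condition just described (detection box disagrees with the tracking sequence, agrees with the current object, and more than 75% of the multi-size classifications agree) can be written as a single predicate. All names and the exact argument shapes here are hypothetical.

```python
def should_collect_sample(det_class, seq_class, obj_class, multiscale_classes,
                          agreement_threshold=0.75):
    """Decide whether the current object qualifies as a candidate sample."""
    if det_class == seq_class:
        return False  # consistent with the tracking sequence: just keep tracking
    if det_class != obj_class:
        return False  # detection box disagrees with the current object's result
    # Require strictly more than 75% agreement among the classification
    # results obtained at different sizes.
    agree = sum(1 for c in multiscale_classes if c == det_class)
    return agree / len(multiscale_classes) > agreement_threshold
```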
On the basis of the above embodiment, determining whether the classification result of the target object satisfies the preset updating condition, and performing sample collection based on the determination result includes: counting classification results of a plurality of images to be identified in the tracking sequence; taking the classification result with the highest occurrence frequency in the classification results of the plurality of images to be identified as a tracking sequence classification category; if the classification result of the detection frame is different from the classification result of the tracking sequence and the classification result of the detection frame is the same as the classification result of the current target object, respectively adding the feature vector and the feature image corresponding to the current target object into the candidate feature vector list and the candidate feature picture list to finish sample collection.
It can be understood that by taking the classification result with the highest occurrence frequency among the classification results of the plurality of images to be identified as the classification result of the tracking sequence, the reliability of the classification result can be improved.
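The majority vote over the tracking sequence can be implemented directly with `collections.Counter`; the function name is an assumption for this sketch.

```python
from collections import Counter

def tracking_sequence_class(per_frame_classes):
    """Most frequent classification result across the frames of a
    tracking sequence, used as the sequence's classification category."""
    return Counter(per_frame_classes).most_common(1)[0][0]
```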
According to the technical scheme provided by the embodiment of the invention, whether the classification result of the target object meets the preset updating condition is judged, and the sample collection is performed based on the judgment result, so that the screening of the sample is realized, and the sample quality is improved.
Example five
Fig. 6 is a schematic structural diagram of a sample collection device according to a fifth embodiment of the present invention. As shown in fig. 6, the apparatus includes:
a feature vector extraction module 510, configured to acquire continuous frame images, and determine a feature vector of a target object based on the continuous frame images;
the classification result determining module 520 is configured to match the feature vector of the target object in a pre-established feature vector sample model, so as to obtain a classification result of the target object;
and the sample collection module 530 is configured to collect samples based on the classification result of the target object, and update the feature vector sample model based on the collected samples.
According to the above technical scheme, continuous frame images are acquired and the feature vector of the target object is determined based on them; the feature vector of the target object is matched in the pre-established feature vector sample model to obtain the classification result of the target object; and after the classification result is obtained, sample collection can be performed according to it. The collected samples are thereby screened to obtain high-value samples, the feature vector sample model is updated with these high-value samples, and the quality of the samples in the feature vector sample model is improved.
Optionally, the feature vector extraction module 510 includes:
the target tracking unit is used for carrying out target tracking on the continuous frame images to obtain at least one target object;
and the feature vector extraction unit is used for inputting the image data corresponding to the target object into the classification feature extraction model to obtain the feature vector of the target object.
Optionally, the feature vector extraction unit is specifically configured to:
acquiring size information of the target object;
matching a classification feature extraction model corresponding to the size information of the target object based on the size information of the target object;
and inputting the image data corresponding to the target object into a classification feature extraction model corresponding to the size information of the target object to obtain the feature vector of the target object.
Optionally, the classification result determining module 520 includes:
and the feature vector comparison unit is used for comparing the feature vector of the target object with each type of feature vector in a pre-established feature vector sample model and determining the classification result of the target object based on the comparison result.
Optionally, the feature vector comparing unit is specifically configured to:
determining the distance between the feature vector and each type of feature vector in the feature vector sample model;
and comparing the distances between the feature vector and each type of feature vector in the feature vector sample model.
Optionally, the sample collection module 530 includes:
and the classification result judging unit is used for judging whether the classification result of the target object meets the preset updating condition or not, and collecting samples based on the judgment result.
Optionally, the classification result judging unit includes:
the image data classifying subunit is used for classifying the detection frame image data corresponding to the current target object to obtain a detection frame classifying result if the classifying result of the current target object is different from the classifying category of the tracking sequence under the condition that the target object is successfully tracked;
and the classification result comparison subunit is used for comparing the detection frame classification result with the tracking sequence classification category and collecting samples based on the comparison result.
Optionally, the classification result comparing subunit is specifically configured to:
if the classification result of the detection frame is the same as the classification category of the tracking sequence, replacing the tracking frame corresponding to the current target object with the detection frame;
and if the classification result of the detection frame is different from the classification result of the tracking sequence and the classification result of the detection frame is the same as the classification result of the current target object, respectively adding the feature vector and the feature image corresponding to the current target object into a candidate feature vector list and a candidate feature picture list to finish sample collection.
Optionally, the classification result judging unit includes:
counting classification results of a plurality of images to be identified in the tracking sequence;
taking the classification result with the highest occurrence frequency in the classification results of the plurality of images to be identified as a tracking sequence classification category;
if the classification result of the detection frame is different from the classification result of the tracking sequence and the classification result of the detection frame is the same as the classification result of the current target object, respectively adding the feature vector and the feature image corresponding to the current target object into a candidate feature vector list and a candidate feature picture list to finish sample collection.
Optionally, the device is further configured to:
counting the frequency information of each candidate feature vector in the updated feature vector sample model;
if the frequency information of any candidate feature vector meets the preset threshold condition, the feature information associated with the candidate feature vector is sent to a cloud server so as to update a feature vector library.
Optionally, the device is further configured to:
under the condition that the target object is successfully tracked, counting classification results of a plurality of images to be identified in a tracking sequence;
and taking the classification result with the highest occurrence frequency among the classification results of the plurality of images to be identified as the classification result of the tracking sequence.
Optionally, the device is further configured to:
under the condition that the tracking of the target object fails, respectively acquiring the center points of at least two target objects;
determining a direction vector between the center points of adjacent target objects;
screening the unmatched frames based on the matching range determined by the included angle between the direction vectors to obtain target unmatched frames;
and classifying the target unmatched frames to obtain classification results of the unmatched frames.
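The direction-vector screening listed above can be illustrated with a small geometric helper: the included angle between the recent motion direction and the vector toward each unmatched box decides whether the box falls in the matching range. The box format `(x1, y1, x2, y2)` and the angle threshold are assumptions not fixed by the original.

```python
import math

def angle_between(v1, v2):
    """Included angle (in degrees) between two 2-D direction vectors."""
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    n1, n2 = math.hypot(*v1), math.hypot(*v2)
    cosang = max(-1.0, min(1.0, dot / (n1 * n2)))
    return math.degrees(math.acos(cosang))

def screen_unmatched_boxes(prev_center, last_center, boxes, max_angle_deg=30.0):
    """Keep only unmatched boxes whose center lies within the matching range
    defined by the included angle with the object's recent motion direction."""
    motion = (last_center[0] - prev_center[0], last_center[1] - prev_center[1])
    kept = []
    for box in boxes:
        cx, cy = (box[0] + box[2]) / 2.0, (box[1] + box[3]) / 2.0
        cand = (cx - last_center[0], cy - last_center[1])
        if angle_between(motion, cand) <= max_angle_deg:
            kept.append(box)
    return kept
```

The surviving boxes would then be classified, and a box whose class matches the lost target could rejoin the tracking sequence as described above.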
Optionally, the device is further configured to:
if the classification result of the unmatched frame is the same as the classification category of the target object, adding the unmatched frame into a tracking sequence, and modifying the tracking state into normal tracking;
and if the classification result of the unmatched frame is different from the classification category of the target object, determining the tracking state as tracking failure.
The sample collection device provided by the embodiment of the invention can execute the sample collection method provided by any embodiment of the invention, and has the corresponding functional modules and beneficial effects of the execution method.
Example six
Fig. 7 shows a schematic structural diagram of an electronic device 10 that may be used to implement an embodiment of the present invention. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. Electronic devices may also represent various forms of mobile devices, such as personal digital assistants, cellular telephones, smartphones, wearable devices (e.g., helmets, glasses, watches, etc.), and other similar computing devices. The components shown herein, their connections and relationships, and their functions are meant to be exemplary only and are not meant to limit implementations of the invention described and/or claimed herein.
As shown in fig. 7, the electronic device 10 includes at least one processor 11 and a memory communicatively connected to the at least one processor 11, such as a read-only memory (ROM) 12 and a random access memory (RAM) 13. The memory stores a computer program executable by the at least one processor, and the processor 11 may perform various appropriate actions and processes according to the computer program stored in the ROM 12 or loaded from the storage unit 18 into the RAM 13. Various programs and data required for the operation of the electronic device 10 may also be stored in the RAM 13. The processor 11, the ROM 12 and the RAM 13 are connected to each other via a bus 14. An input/output (I/O) interface 15 is also connected to the bus 14.
Various components in the electronic device 10 are connected to the I/O interface 15, including: an input unit 16 such as a keyboard, a mouse, etc.; an output unit 17 such as various types of displays, speakers, and the like; a storage unit 18 such as a magnetic disk, an optical disk, or the like; and a communication unit 19 such as a network card, modem, wireless communication transceiver, etc. The communication unit 19 allows the electronic device 10 to exchange information/data with other devices via a computer network, such as the internet, and/or various telecommunication networks.
The processor 11 may be any of various general-purpose and/or special-purpose processing components having processing and computing capabilities. Some examples of the processor 11 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various specialized Artificial Intelligence (AI) computing chips, various processors running machine learning model algorithms, Digital Signal Processors (DSPs), and any suitable processor, controller, microcontroller, etc. The processor 11 performs the various methods and processes described above, such as a sample collection method, which includes:
acquiring continuous frame images, and determining feature vectors of a target object based on the continuous frame images;
matching the feature vector of the target object in a pre-established feature vector sample model to obtain a classification result of the target object;
and collecting samples based on the classification result of the target object, and updating the feature vector sample model based on the collected samples.
In some embodiments, the sample collection method may be implemented as a computer program tangibly embodied on a computer-readable storage medium, such as the storage unit 18. In some embodiments, part or all of the computer program may be loaded and/or installed onto the electronic device 10 via the ROM 12 and/or the communication unit 19. When the computer program is loaded into RAM 13 and executed by processor 11, one or more steps of the sample collection method described above may be performed. Alternatively, in other embodiments, the processor 11 may be configured to perform the sample collection method in any other suitable way (e.g., by means of firmware).
Various implementations of the systems and techniques described above may be implemented in digital electronic circuitry, integrated circuit systems, field-programmable gate arrays (FPGAs), application-specific integrated circuits (ASICs), application-specific standard products (ASSPs), systems on chip (SOCs), complex programmable logic devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be a special-purpose or general-purpose programmable processor capable of receiving data and instructions from, and transmitting data and instructions to, a storage system, at least one input device, and at least one output device.
A computer program for carrying out methods of the present invention may be written in any combination of one or more programming languages. These computer programs may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the computer programs, when executed by the processor, cause the functions/acts specified in the flowchart and/or block diagram block or blocks to be implemented. The computer program may execute entirely on the machine, partly on the machine, as a stand-alone software package, partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of the present invention, a computer-readable storage medium may be a tangible medium that can contain, or store a computer program for use by or in connection with an instruction execution system, apparatus, or device. The computer readable storage medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. Alternatively, the computer readable storage medium may be a machine readable signal medium. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on an electronic device having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) through which a user can provide input to the electronic device. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user may be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic input, speech input, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local area networks (LANs), wide area networks (WANs), blockchain networks, and the Internet.
The computing system may include clients and servers. A client and a server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server may be a cloud server, also called a cloud computing server or a cloud host, which is a host product in a cloud computing service system and overcomes the defects of difficult management and weak service scalability in traditional physical hosts and VPS (Virtual Private Server) services.
It should be appreciated that the various flows shown above may be used with steps reordered, added, or deleted. For example, the steps described in the present invention may be performed in parallel, sequentially, or in a different order, so long as the desired results of the technical solution of the present invention can be achieved, which is not limited herein.
The above embodiments do not limit the scope of the present invention. It will be apparent to those skilled in the art that various modifications, combinations, sub-combinations and alternatives are possible, depending on design requirements and other factors. Any modifications, equivalent substitutions and improvements made within the spirit and principles of the present invention should be included in the scope of the present invention.

Claims (16)

1. A method of sample collection, comprising:
acquiring continuous frame images, and determining feature vectors of a target object based on the continuous frame images;
matching the feature vector of the target object in a pre-established feature vector sample model to obtain a classification result of the target object;
and collecting samples based on the classification result of the target object, and updating the feature vector sample model based on the collected samples.
2. The method of claim 1, wherein the determining feature vectors of a target object based on the successive frame images comprises:
performing target tracking on the continuous frame images to obtain at least one target object;
and inputting the image data corresponding to the target object into a classification feature extraction model to obtain the feature vector of the target object.
3. The method according to claim 2, wherein the inputting the image data corresponding to the target object into the classification feature extraction model to obtain the feature vector of the target object includes:
acquiring size information of the target object;
matching a classification feature extraction model corresponding to the size information of the target object based on the size information of the target object;
and inputting the image data corresponding to the target object into a classification feature extraction model corresponding to the size information of the target object to obtain the feature vector of the target object.
4. The method according to claim 1, wherein the matching the feature vector of the target object in a pre-established feature vector sample model to obtain the classification result of the target object includes:
comparing the feature vector of the target object with each type of feature vector in a pre-established feature vector sample model, and determining a classification result of the target object based on the comparison result.
5. The method of claim 4, wherein comparing the feature vector to each type of feature vector in a pre-established feature vector sample model comprises:
determining the distance between the feature vector and each type of feature vector in the feature vector sample model;
and comparing the distances between the feature vector and each type of feature vector in the feature vector sample model.
6. The method of claim 1, wherein the sample collection based on the classification of the target object comprises:
judging whether the classification result of the target object meets a preset updating condition or not, and collecting samples based on the judgment result.
7. The method of claim 6, wherein determining whether the classification result of the target object meets a preset update condition, and performing sample collection based on the determination result, comprises:
under the condition that the target object is successfully tracked, if the classification result of the current target object is different from the classification category of the tracking sequence, classifying the detection frame image data corresponding to the current target object to obtain a detection frame classification result;
and comparing the detection frame classification result with the tracking sequence classification category, and performing sample collection based on the comparison result.
8. The method of claim 7, wherein comparing the detection frame classification result with the tracking sequence classification category, and performing sample collection based on the comparison result comprises:
if the classification result of the detection frame is the same as the classification category of the tracking sequence, replacing the tracking frame corresponding to the current target object with the detection frame;
and if the classification result of the detection frame is different from the classification result of the tracking sequence and the classification result of the detection frame is the same as the classification result of the current target object, respectively adding the feature vector and the feature image corresponding to the current target object into a candidate feature vector list and a candidate feature picture list to finish sample collection.
9. The method of claim 6, wherein determining whether the classification result of the target object meets a preset update condition, and performing sample collection based on the determination result, comprises:
counting classification results of a plurality of images to be identified in the tracking sequence;
taking the classification result with the highest occurrence frequency in the classification results of the plurality of images to be identified as a tracking sequence classification category;
if the classification result of the detection frame is different from the classification result of the tracking sequence and the classification result of the detection frame is the same as the classification result of the current target object, respectively adding the feature vector and the feature image corresponding to the current target object into a candidate feature vector list and a candidate feature picture list to finish sample collection.
10. The method of claim 1, further comprising, after updating the feature vector sample model based on the collected samples:
counting the frequency information of each candidate feature vector in the updated feature vector sample model;
if the frequency information of any candidate feature vector meets the preset threshold condition, the feature information associated with the candidate feature vector is sent to a cloud server so as to update a feature vector library.
11. The method according to claim 1, further comprising, after obtaining the classification result of the target object:
under the condition that the target object is successfully tracked, counting classification results of a plurality of images to be identified in a tracking sequence;
and taking the classification result with the highest occurrence frequency among the classification results of the plurality of images to be identified as the classification result of the tracking sequence.
12. The method according to claim 1, further comprising, after obtaining the classification result of the target object:
under the condition that the tracking of the target object fails, respectively acquiring the center points of at least two target objects;
determining a direction vector between the center points of adjacent target objects;
screening the unmatched frames based on the matching range determined by the included angle between the direction vectors to obtain target unmatched frames;
and classifying the target unmatched frames to obtain classification results of the unmatched frames.
13. The method of claim 12, further comprising, after said obtaining the classification result of the unmatched box:
if the classification result of the unmatched frame is the same as the classification category of the target object, adding the unmatched frame into a tracking sequence, and modifying the tracking state into normal tracking;
and if the classification result of the unmatched frame is different from the classification category of the target object, determining the tracking state as tracking failure.
14. A sample collection device, comprising:
the feature vector extraction module is used for acquiring continuous frame images and determining feature vectors of a target object based on the continuous frame images;
The classification result determining module is used for matching the feature vector of the target object in a pre-established feature vector sample model to obtain a classification result of the target object;
and the sample collection module is used for collecting samples based on the classification result of the target object and updating the feature vector sample model based on the collected samples.
15. An electronic device, the electronic device comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores a computer program executable by the at least one processor to enable the at least one processor to perform the sample collection method of any one of claims 1-13.
16. A computer readable storage medium storing computer instructions for causing a processor to perform the sample collection method of any one of claims 1-13.
CN202210958491.5A 2022-08-09 2022-08-09 Sample collection method, device, electronic equipment and storage medium Pending CN117671400A (en)

Publication: CN117671400A, published 2024-03-08
Family ID: 90071702


CN116258769B (en) Positioning verification method and device, electronic equipment and storage medium
CN117725614A (en) License plate desensitizing method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination