CN109829467A - Image labeling method, electronic device and non-transient computer-readable storage medium


Info

Publication number
CN109829467A
CN109829467A
Authority
CN
China
Prior art keywords
frame
image
processor
classifier
classification
Prior art date
Legal status
Pending
Application number
CN201711285222.2A
Other languages
Chinese (zh)
Inventor
蒋欣翰
陈彦霖
林谦
余兆伟
李孟灿
Current Assignee
Institute for Information Industry
Original Assignee
Institute for Information Industry
Priority date
Filing date
Publication date
Application filed by Institute for Information Industry
Publication of CN109829467A


Classifications

    • G06T 7/246 - Image analysis; analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G06T 7/248 - Analysis of motion using feature-based methods involving reference images or patches
    • G06T 7/262 - Analysis of motion using transform domain methods, e.g. Fourier domain methods
    • G06T 7/269 - Analysis of motion using gradient-based methods
    • G06F 18/214 - Pattern recognition; generating training patterns; bootstrap methods, e.g. bagging or boosting
    • G06F 18/2155 - Generating training patterns characterised by the incorporation of unlabelled data, e.g. multiple instance learning [MIL], semi-supervised techniques using expectation-maximisation [EM] or naïve labelling
    • G06F 18/2411 - Classification techniques based on the proximity to a decision surface, e.g. support vector machines
    • G06V 10/50 - Extraction of image or video features by performing operations within image blocks; by using histograms, e.g. histogram of oriented gradients [HoG]; by summing image-intensity values; projection analysis
    • G06V 20/41 - Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • G06V 20/46 - Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames
    • G06V 20/48 - Matching video sequences
    • G06V 2201/07 - Target detection
    • G06T 2200/24 - Indexing scheme involving graphical user interfaces [GUIs]
    • G06T 2207/10016 - Image acquisition modality: video; image sequence
    • G06T 2207/20081 - Special algorithmic details: training; learning
    • G06T 2207/30196 - Subject of image: human being; person

Abstract

An image labeling method comprises: obtaining a plurality of image frames; recognizing and tracking one or more target objects in the image frames; selecting a plurality of candidate key image frames from the image frames according to a first selection condition; determining a plurality of first similarity indices of the candidate key image frames; determining a plurality of second similarity indices of a plurality of adjacent image frames; selecting, as a plurality of key image frames, the candidate key image frames together with those adjacent image frames that satisfy a second selection condition; and presenting the key image frames in a graphical user interface, and displaying annotation information about the one or more target objects through the graphical user interface.

Description

Image labeling method, electronic device and non-transient computer-readable storage medium
Technical field
The invention relates to an image processing method, an electronic device and a non-transient computer-readable storage medium, and in particular to an image labeling (image annotation) method, an electronic device and a non-transient computer-readable storage medium.
Background
Combining deep learning techniques with computer vision is currently a development trend in artificial intelligence. However, a deep learning network requires a large number of labeled image samples; only through training on such samples can a deep learning network of high accuracy be produced.
At present, image labeling is mostly performed manually. An operator must frame-select objects one by one in the image frames of the video data and input an associated name for each of them. When the video data contains a large number of target objects, such manual labeling is both time-consuming and labor-intensive.
Summary of the invention
The present invention proposes an image labeling method, an electronic device and a non-transient computer-readable storage medium that can automatically filter out highly repetitive, uninformative image frame samples in video data, select key image frames with structural diversity of objects for the user to browse, and allow the user to add and correct labeled objects to refine the labeling results, thereby saving the manpower required for image labeling. In addition, the proposed technique can incorporate an expert-knowledge feedback mechanism to improve the correctness and robustness of key image frame extraction.
According to an aspect of the invention, an image labeling method implemented by an electronic device comprising a processor is proposed. The method includes: the processor obtains an image frame sequence comprising a plurality of image frames from video data; the processor executes an object detection and tracking procedure on the image frame sequence, to recognize and track one or more target objects in the image frames; the processor selects a plurality of candidate key image frames from the image frames according to a first selection condition, the first selection condition including: when a target object among the one or more target objects starts to appear or starts to disappear in an image frame, that image frame is selected as one of the candidate key image frames; the processor determines a plurality of first similarity indices of the candidate key image frames, each first similarity index being determined by the processor through a similarity computation from a first covariance value of the corresponding candidate key image frame and a plurality of first variance values of that frame computed statistically along different directions; the processor determines a plurality of second similarity indices of a plurality of adjacent image frames, each adjacent image frame being adjacent to at least one of the candidate key image frames, and each second similarity index being determined by the processor through the similarity computation from a second covariance value of the corresponding adjacent image frame and a plurality of second variance values of that frame computed statistically along different directions; the processor selects, as a plurality of key image frames, the candidate key image frames together with those adjacent image frames that satisfy a second selection condition, the second selection condition including: when the difference between the second similarity index of an adjacent image frame and the first similarity index of the candidate key image frame adjacent to it exceeds a similarity threshold, that adjacent image frame is selected as one of the key image frames; and the processor presents the key image frames in a graphical user interface and displays annotation information about the one or more target objects through the graphical user interface.
According to another aspect of the invention, a non-transient computer-readable storage medium is proposed. The non-transient computer-readable storage medium stores one or more instructions executable by a processor, so that an electronic device comprising the processor performs the image labeling method of the invention.
According to another aspect of the invention, an electronic device is proposed. The electronic device includes a memory and a processor. The processor is coupled to the memory and configured to: obtain an image frame sequence comprising a plurality of image frames from video data; execute an object detection and tracking procedure on the image frame sequence, to recognize and track one or more target objects in the image frames; select a plurality of candidate key image frames from the image frames according to a first selection condition, the first selection condition including: when a target object among the one or more target objects starts to appear or starts to disappear in an image frame, that image frame is selected as one of the candidate key image frames; obtain a plurality of first similarity indices of the candidate key image frames, each first similarity index being determined from a first covariance value of the corresponding candidate key image frame and a plurality of first variance values of that frame computed statistically along different directions; obtain a plurality of second similarity indices of a plurality of adjacent image frames, each adjacent image frame being adjacent to at least one of the candidate key image frames, and each second similarity index being determined from a second covariance value of the corresponding adjacent image frame and a plurality of second variance values of that frame computed statistically along different directions; select, as a plurality of key image frames, the candidate key image frames together with those adjacent image frames that satisfy a second selection condition, the second selection condition including: when the difference between the second similarity index of an adjacent image frame and the first similarity index of the candidate key image frame adjacent to it exceeds a similarity threshold, that adjacent image frame is selected as one of the key image frames; and present the key image frames in a graphical user interface and display annotation information about the one or more target objects through the graphical user interface.
Brief description of the drawings
To make the above objects, features and advantages of the present invention clearer and more comprehensible, specific embodiments of the invention are elaborated below in conjunction with the accompanying drawings, in which:
Fig. 1 depicts a flowchart of the image labeling method according to an embodiment of the present invention.
Fig. 2 depicts an example flowchart for searching for candidate key image frames.
Fig. 3 depicts a schematic diagram of variable-window object detection.
Fig. 4 depicts an example flowchart for selecting key image frames from the image frames adjacent to the candidate key image frames.
Fig. 5 depicts a schematic diagram of selecting key image frames from multiple continuous image frames.
Fig. 6 depicts a schematic diagram of the graphical user interface according to an embodiment of the present invention.
Fig. 7 depicts a non-limiting detailed flowchart of step 114 of Fig. 1.
Fig. 8 depicts a schematic diagram of HOG feature strengthening.
Fig. 9 depicts a flowchart of the adaptive training of a multi-category classifier according to an embodiment of the present invention.
Fig. 10 depicts a schematic diagram of the distance values of training samples relative to the different categories of a classifier.
Fig. 11 depicts a schematic diagram of the parameter intervals of the different categories of a classifier.
Fig. 12 depicts a schematic diagram of the adaptive training of a multi-category classifier.
Component labels in the figures:
102, 104, 106, 108, 110, 112, 114, 202, 204, 206, 208, 402, 404, 406, 408, 410, 412, 702, 704, 902, 904, 906, 908, 910, 912, 914, 916: steps
IL1~ILP: image layers
W1~W5: detection windows
F1~F7: image frames
OB1~OB3, 614, 616: target objects
610, 612: user frame-selected objects
600: graphical user interface
602: key image frame display area
604: main operation region
606A, 606B: tab areas
608: operation key
KF1~KFM: key image frames
802: block
804: cell
VA1, VA2: HOG feature groups
VA1', VA2': strengthened HOG feature groups
LP0~LP3: categories
D: distance value
μ: average value of the distance values
σ: standard deviation of the distance values
T_upper: upper limit value of the parameter range
T_lower: lower limit value of the parameter range
OSHk: distance value reference point
1202, 1204, 1206: stages
Specific embodiments
The present invention proposes an image labeling method, an electronic device and a non-transient computer-readable storage medium. Image labeling refers, for example, to identifying one or more specific objects in video data through computer vision techniques and assigning a corresponding name or semantic description to each identified object. Taking autonomous driving as an example, image sensors on the vehicle can obtain a video stream of road images; through image labeling technology, the automated driving system can identify objects around the vehicle, such as pedestrians, vehicles, cats and dogs, and react accordingly based on the identified objects and their labels, for example dodging a pedestrian that suddenly appears in front of the vehicle.
The image labeling method of the invention can be implemented by an electronic device. The electronic device includes, for example, a memory and a processor. The memory can store programs, instructions, data or files for the processor to access or execute. The processor is coupled to the memory and configured to execute the image labeling method of the embodiments of the invention. The processor can be implemented as a microcontroller, a microprocessor, a digital signal processor, an application specific integrated circuit (ASIC), a digital logic circuit, a field programmable gate array (FPGA) or another hardware element with computing capability. The image labeling method of the invention can also be implemented as a software program stored in a non-transitory computer readable medium, such as a hard disk, an optical disc, a flash drive or a memory; when the processor loads this software program from the non-transient computer-readable storage medium, it can execute the image labeling method of the invention.
Fig. 1 depicts a flowchart of the image labeling method according to an embodiment of the present invention. The image labeling method can be implemented by an electronic device comprising a processor.
In step 102, the processor performs video decompression to obtain an image frame sequence from video data; the image frame sequence includes a plurality of image frames.
In step 104, the processor searches the obtained image frames for candidate key image frames. In one embodiment, the processor executes an object detection and tracking procedure on the image frame sequence, to recognize and track one or more target objects in the image frames, and selects an image frame as a candidate key image frame when it judges that the structural features of a target object in that image frame have changed by more than a predetermined threshold.
In step 106, the processor determines the key image frames. Besides the candidate key image frames selected in step 104, image frames adjacent to the candidate key image frames that satisfy specific conditions can also be selected as key image frames. Two image frames are "adjacent" here when they are next to each other in temporal order within a continuous image frame sequence (such as a video stream), for example two image frames acquired at two consecutive sampling time points.
In step 108, the processor presents the key image frames in a graphical user interface (GUI) and displays annotation information about the target objects through the graphical user interface. The annotation information includes, for example, the name or a semantic description of a target object, such as "pedestrian" or "moving car".
The graphical user interface also allows the user to frame-select, in the displayed key image frames, newly added objects that have not been identified, and to label them. For example, in an image frame with a complex background, some objects may fail to be identified and tracked; the user can then manually frame-select the unrecognized object images in the key image frames and label them. An object image frame-selected by the user is referred to as a user frame-selected object.
It should be noted that the term "user" as used herein includes, for example, a person or entity that owns an electronic device capable of executing the image labeling method of the present invention, a person or entity that operates or uses the electronic device, or a person or entity otherwise associated with the electronic device. It will be appreciated that the term "user" is not intended to be limiting and may include various embodiments beyond the described examples.
In step 110, the processor performs object tracking on the user frame-selected object. This step can be realized with any known object tracking algorithm.
In step 112, the processor obtains a filling result. For example, the processor can receive a user operation via the graphical user interface of step 108 and generate a filling result in response to the user operation. The filling result includes, for example, a user frame-selected object and user annotation information about the user frame-selected object, where the user frame-selected object is image content captured from a key image frame. For example, through the graphical user interface the user can frame-select the image of a person in a certain key image frame as a user frame-selected object and input "pedestrian" as the corresponding annotation information.
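For illustration only, a minimal sketch of what a filling result might carry in code; the Python structure and field names are assumptions, not part of the patent:

```python
from dataclasses import dataclass

@dataclass
class FillingResult:
    """Hypothetical container for one user annotation (step 112)."""
    key_frame_index: int   # which key image frame the crop came from
    bbox: tuple            # (x, y, w, h) of the user frame-selected object
    label: str             # user annotation information, e.g. "pedestrian"

# Example: the user frame-selects a person in key frame 3 and labels it.
result = FillingResult(key_frame_index=3, bbox=(120, 40, 64, 128), label="pedestrian")
```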
In one embodiment, the image labeling method further comprises step 114. In step 114, feature extraction and strengthening are applied to the user frame-selected object. The extracted and strengthened features can be provided as training samples to train and update the classifiers used for object detection in step 104, thereby strengthening the efficiency of image labeling through the feedback of expert knowledge.
Fig. 2 depicts an example flowchart for searching for candidate key image frames. In a non-limiting example, the flow of Fig. 2 can be implemented in step 104 of Fig. 1. Steps 202 and 204 may be included in an object detection and tracking procedure.
In step 202, the processor detects target objects in multiple successive image frames of the video data. In one embodiment, object detection is performed with a hybrid variable-window object detection algorithm that pairs an image pyramid with a classifier pyramid; this hybrid algorithm is explained below in conjunction with Fig. 3. However, the invention is not limited thereto; step 202 can also be realized with any known object detection algorithm, such as a Haar-like feature algorithm or an adaptive boosting (AdaBoost) algorithm, to design a classifier capable of detecting the target objects.
In step 204, the processor performs object tracking on the detected target objects. In one embodiment, a kernelized correlation filter (KCF) object tracking procedure based on histogram of oriented gradients (HOG) features can be used to continuously track the dynamics of the detected target objects.
For example, the processor can first convert the target object image into a grayscale image to capture the HOG features of the target object, and then apply a frequency-domain conversion to the HOG features to obtain a HOG frequency-domain feature. The processor can then execute the KCF object tracking procedure to track this HOG frequency-domain feature, thereby realizing the tracking of the target object. The frequency-domain conversion is, for example, a Fourier transform, which can be expressed as:

β̂(u, v) = Σx Σy β(x, y) · e^(−j2π(ux/M + vy/N))    (Formula 1)

In Formula 1, β denotes the histogram bin component stored in each HOG cell; x and y denote the block coordinates within the region to be Fourier transformed, and M and N denote the dimensions of that region.
Besides the above approach, step 204 can also be realized with any known object tracking algorithm, such as a detection window algorithm or a correlation filter algorithm.
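As an illustration of the grayscale-to-HOG-to-frequency-domain pipeline of step 204, a minimal Python sketch using OpenCV's HOGDescriptor and NumPy's FFT; the 64 × 64 window and the cell layout are assumptions:

```python
import cv2
import numpy as np

def hog_frequency_feature(obj_image):
    """Grayscale -> HOG -> 2-D Fourier transform, as in step 204 (a sketch)."""
    gray = cv2.cvtColor(obj_image, cv2.COLOR_BGR2GRAY)
    gray = cv2.resize(gray, (64, 64))                      # assumed fixed window size
    hog = cv2.HOGDescriptor((64, 64), (16, 16), (8, 8), (8, 8), 9)
    beta = hog.compute(gray).reshape(7, 7, -1)             # bin components per block
    return np.fft.fft2(beta, axes=(0, 1))                  # HOG frequency-domain feature

# A KCF-style tracker would correlate this feature against its learned filter each frame.
```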
In step 206, the processor judges whether an image frame satisfies the first selection condition. If so, in step 208 the processor selects the image frame satisfying the first selection condition as a candidate key image frame. If not, the judgment proceeds to the next image frame. The first selection condition includes, for example: when a target object starts to appear or starts to disappear in an image frame, that image frame is selected as one of the candidate key image frames. Object "appearance" or "disappearance" here refers to the case where the structural features of the object change by more than a predetermined threshold. For example, if a pedestrian in the video data turns from facing front to facing back, then for the processor the object corresponding to the person's front view disappears and the object corresponding to the person's back view appears.
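A minimal sketch of the first selection condition, under the assumption that the tracker of step 204 reports a set of object identifiers per frame (the patent decides appearance and disappearance from structural feature changes; IDs stand in for that here):

```python
def select_candidate_key_frames(tracked_ids_per_frame):
    """Mark frame t as a candidate key frame when an object ID appears or disappears."""
    candidates = []
    prev_ids = set()
    for t, ids in enumerate(tracked_ids_per_frame):
        appeared = ids - prev_ids
        disappeared = prev_ids - ids
        if appeared or disappeared:
            candidates.append(t)
        prev_ids = ids
    return candidates

# Frames F1..F7 of Fig. 5: OB1 in F1-F3, OB2 in F5, OB3 in F6-F7.
frames = [{1}, {1}, {1}, set(), {2}, {3}, {3}]
print(select_candidate_key_frames(frames))   # [0, 3, 4, 5] -> F1, F4, F5, F6
```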
Fig. 3 depicts a schematic diagram of variable-window object detection. In this example, the processor builds a corresponding image pyramid for each image frame. Each image pyramid may include image layers of multiple different resolutions. As shown in Fig. 3, the image pyramid includes P image layers IL1~ILP ordered from high to low resolution, where P is a positive integer greater than 1. In each image layer IL1~ILP, a corresponding classifier pyramid is realized to perform object detection. In the example of Fig. 3, each classifier pyramid includes five detection windows W1~W5 of different sizes. The processor detects target objects by searching the image frame for object images whose structure matches a reference object structure and whose size fits one of the detection windows. Although the classifier pyramid in this example is implemented with five detection windows W1~W5 of different sizes, the invention is not limited thereto; in other examples, the number of detection windows in a classifier pyramid can be arbitrary.
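A minimal sketch of the hybrid variable-window detection described above: an image pyramid of downscaled layers, each scanned with a pyramid of detection window sizes. The window sizes, the stride and the classify callback are assumptions:

```python
import cv2

def detect_multiscale(frame, classify,
                      window_sizes=((16, 16), (24, 24), (32, 32), (48, 48), (64, 64)),
                      levels=4):
    """Scan an image pyramid with a pyramid of detection windows (a sketch)."""
    detections = []
    layer, scale = frame, 1.0
    for _ in range(levels):
        h, w = layer.shape[:2]
        for ww, wh in window_sizes:                # classifier pyramid: 5 window sizes
            for y in range(0, h - wh + 1, 8):      # assumed stride of 8 pixels
                for x in range(0, w - ww + 1, 8):
                    if classify(layer[y:y + wh, x:x + ww]):
                        # map the hit back to full-resolution coordinates
                        detections.append((int(x * scale), int(y * scale),
                                           int(ww * scale), int(wh * scale)))
        layer = cv2.pyrDown(layer)                 # next, lower-resolution image layer
        scale *= 2.0
    return detections
```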
Fig. 4 depicts an example flowchart for selecting key image frames from the image frames adjacent to the candidate key image frames. In a non-limiting example, the flow of Fig. 4 can be implemented in step 106 of Fig. 1.
In step 402, the processor computes the first similarity indices of the candidate key image frames. A first similarity index can be determined by the processor, for example through a similarity computation, according to the first covariance value (σ1xy) of the corresponding candidate key image frame and a plurality of first variance values (σ1x, σ1y) of that frame computed statistically along different directions (such as the x and y directions). In one embodiment, the first similarity index S1(x, y) can be expressed as:

S1(x, y) = σ1xy / (σ1x · σ1y)    (Formula 2)

where σ1x and σ1y are the variances of the block pixel averages along the x and y directions, and σ1xy is their covariance. In the detailed computation, Np denotes the total number of patches an image frame is divided into, Nx denotes the number of rows of blocks along the x direction within a patch, Ny denotes the number of columns of blocks along the y direction within a patch, μi denotes the pixel average of the i-th block within a patch, and the per-row and per-column statistics denote the pixel average of the j-th row of blocks along the x direction and of the k-th column of blocks along the y direction, respectively.
In step 404, the processor computes the second similarity indices of the adjacent image frames, each adjacent image frame being adjacent to at least one candidate key image frame. A second similarity index can be determined by the processor, for example through the similarity computation, according to the second covariance value (σ2xy) of the corresponding adjacent image frame and a plurality of second variance values (σ2x, σ2y) of that frame computed statistically along different directions (such as the x and y directions). In one embodiment, the second similarity index S2(x, y) can be expressed as:

S2(x, y) = σ2xy / (σ2x · σ2y)    (Formula 3)

where σ2xy, σ2x and σ2y are computed in the same way as their counterparts in Formula 2.
The similarity computation employed in steps 402 and 404 can also be realized with other algorithms that measure the similarity between objects, such as the Euclidean distance algorithm, the cosine similarity algorithm, the Pearson correlation algorithm, or the inverse user frequency (IUF) similarity algorithm.
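For illustration, a minimal sketch of a covariance-based similarity index in the spirit of Formulas 2 and 3; the 8 × 8 block partition and the use of block means along the x and y directions are assumptions:

```python
import numpy as np

def similarity_index(frame, n_blocks=(8, 8)):
    """sigma_xy / (sigma_x * sigma_y) over block means along x and y (a sketch)."""
    h, w = frame.shape[:2]
    by, bx = h // n_blocks[0], w // n_blocks[1]
    means = np.array([[frame[i*by:(i+1)*by, j*bx:(j+1)*bx].mean()
                       for j in range(n_blocks[1])] for i in range(n_blocks[0])])
    mu_x = means.mean(axis=0)          # mean of each column of blocks (x direction)
    mu_y = means.mean(axis=1)          # mean of each row of blocks (y direction)
    sigma_x, sigma_y = mu_x.std(), mu_y.std()
    sigma_xy = np.cov(mu_x, mu_y)[0, 1]
    return sigma_xy / (sigma_x * sigma_y + 1e-8)
```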
In step 406, the processor judges whether an adjacent image frame satisfies the second selection condition. The second selection condition includes, for example: when the difference between the corresponding second similarity index S2(x, y) of an adjacent image frame and the corresponding first similarity index S1(x, y) of the candidate key image frame adjacent to it exceeds a similarity threshold, that is, when the object structures in the two frames differ considerably, that adjacent image frame is selected as one of the key image frames.
In step 408, the processor selects those adjacent image frames of the candidate key image frames that satisfy the second selection condition as key image frames.
Conversely, in step 410, adjacent image frames that do not satisfy the second selection condition are not selected as key image frames.
Afterwards, in step 412, the processor outputs all candidate key image frames, together with the adjacent image frames that satisfy the second selection condition, as the key image frames.
Fig. 5 depicts a schematic diagram of selecting key image frames from multiple continuous image frames. In the example of Fig. 5, image frames F1~F7 are seven continuous image frames in the video data. A pedestrian image, regarded as target object OB1, appears in image frames F1~F3 and disappears in image frame F4. A side view of a dog, regarded as target object OB2, appears in image frame F5, and a front view of the dog, regarded as target object OB3, appears in image frames F6~F7. Since a target object (OB1/OB2/OB3) starts to appear in image frames F1, F5 and F6, and a target object (OB1) starts to disappear in image frame F4, image frames F1 and F4~F6 are selected as candidate key image frames.
The judgment is then made for image frames F2, F3 and F7, which are adjacent to the candidate key image frames F1 and F4~F6. Since adjacent image frames F2 and F7 are similar to the neighboring candidate key image frames F1 and F6 respectively, they are excluded from the key image frames. Since adjacent image frame F3 differs considerably from the neighboring candidate key image frame F4, it is selected as a key image frame.
Finally, the output key image frames include image frames F1 and F3~F6. The key image frames can, for example, be sorted into a sequence and displayed in a graphical user interface.
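Combining the two conditions, a minimal sketch of steps 402 to 412, assuming the similarity_index sketch above is available and using a hypothetical threshold:

```python
def select_key_frames(frames, candidate_idx, threshold=0.2):
    """Keep candidates, plus neighbors whose similarity index differs enough (a sketch)."""
    keys = set(candidate_idx)
    for c in candidate_idx:
        s1 = similarity_index(frames[c])                 # first similarity index
        for n in (c - 1, c + 1):                         # temporally adjacent frames
            if 0 <= n < len(frames) and n not in keys:
                s2 = similarity_index(frames[n])         # second similarity index
                if abs(s2 - s1) > threshold:             # second selection condition
                    keys.add(n)
    return sorted(keys)
```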
Fig. 6 depicts a schematic diagram of the graphical user interface 600 according to an embodiment of the present invention. In the example of Fig. 6, the graphical user interface 600 includes a key image frame display area 602, a main operation region 604 and tab areas 606A, 606B.
The key image frame display area 602 can display a sequence composed of M key image frames KF1~KFM, where M is a positive integer. The user can click any key image frame in the key image frame display area 602, and the selected key image frame is then presented in the main operation region 604.
The user can frame-select unrecognized objects in the main operation region 604. Taking Fig. 6 as an example, key image frame KF3 is selected; objects 614 and 616 are identified target objects, while objects 610 and 612 are unrecognized user frame-selected objects chosen manually by the user.
The user can label the user frame-selected objects to assign corresponding names or semantic descriptions. The related annotation information can, for example, be displayed in tab area 606A. As shown in Fig. 6, tab area 606A can show that user frame-selected object 610 is labeled "pedestrian 2" and that user frame-selected object 612 is labeled "dog".
The annotation information of the identified target objects can be displayed in tab area 606B. As shown in Fig. 6, tab area 606B can show that target object 614 is labeled "vehicle" and that target object 616 is labeled "pedestrian 1".
The graphical user interface 600 may further include one or more operation keys 608. For example, after the operation key 608 ("+ add object") is clicked, the user can frame-select a user frame-selected object in the key image frame content shown in the main operation region 604 and add a corresponding label to it. The operation key 608 can also be implemented as a pull-down menu, whose list may include preset label descriptions and label descriptions that have been used before.
It should be noted that the example of Fig. 6 merely illustrates one embodiment of the invention and is not intended to limit the invention. The graphical user interface of the invention can also be implemented with other configurations of text and/or graphics, as long as it allows the user to define user frame-selected objects in the key image frames and input the corresponding annotation information.
Fig. 7 depicts a non-limiting detailed flowchart of step 114 of Fig. 1. To allow the subsequent image labeling flow to adaptively identify and track the user frame-selected objects newly added by the user, in step 702 the processor first applies feature strengthening to the user frame-selected object, and then, in step 704, uses the strengthened features as training samples to train a classifier. The classifier has the function of sorting samples into a corresponding category and a non-corresponding category, and can realize the object detection and tracking procedure of step 104 to identify target objects. The classifier can be a support vector machine (SVM) classifier, or another type of linear-mapping classifier such as Fisher's linear discriminant classifier or the naive Bayes classifier. The implementation of step 704 can effectively reduce the number of additional classifiers required for newly added user frame-selected objects, thereby improving the computational efficiency and accuracy of classification and identification.
The implementation of step 702 can match the object detection and tracking algorithms used in step 104 of Fig. 1. For example, if object detection and tracking are based on the HOG features of the image, step 702 can be implemented as strengthening of the HOG features. Likewise, if the object detection and tracking used in step 104 of Fig. 1 are based on other specific image features, step 702 is implemented as strengthening of those specific image features.
Taking HOG feature strengthening as an example, the processor can execute a feature strengthening procedure as follows: divide the user frame-selected object into multiple blocks; choose a block to be processed from these blocks; execute a HOG feature extraction procedure to obtain multiple first HOG features of the block to be processed and multiple second HOG features of the blocks adjacent to the block to be processed; perform a norm operation on a feature set including the first HOG features and the second HOG features to obtain a regularization parameter; and perform a normalization process on the first HOG features according to the regularization parameter, to obtain multiple strengthened first HOG features for object detection by the object detection and tracking procedure.
The HOG feature extraction procedure is, for example, as follows:
(1) Compute the edge strength Mi at each pixel location in the block:

Mi = √((x1 − x−1)² + (y1 − y−1)²)    (Formula 4)

In Formula 4, x1 and x−1 respectively denote the pixel gray values before and after the target pixel location along the x direction, and y1 and y−1 respectively denote the pixel gray values above and below the target pixel location along the y direction.
(2) Compute the sum Msum of all edge strengths in the block:

Msum = Σ(i = 1..n) Mi    (Formula 5)

In Formula 5, n denotes the total number of pixels in the block.
(3) Finally, compute the direction component Bi stored in each histogram bin:

Bi = Mb / Msum    (Formula 6)

In Formula 6, Mb denotes the sum of the edge strengths categorized into the histogram bin.
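A minimal sketch of the three extraction steps (Formulas 4 to 6) on one grayscale block; the nine-bin histogram and the wrap-around border handling are simplifying assumptions:

```python
import numpy as np

def block_hog(block, n_bins=9):
    """Edge strength, total strength, and per-bin direction components (a sketch)."""
    # np.roll wraps at the block border; a full implementation would pad instead.
    gx = np.roll(block, -1, axis=1).astype(float) - np.roll(block, 1, axis=1)  # x1 - x-1
    gy = np.roll(block, -1, axis=0).astype(float) - np.roll(block, 1, axis=0)  # y1 - y-1
    m = np.sqrt(gx**2 + gy**2)                    # Formula 4: edge strength M_i
    m_sum = m.sum()                               # Formula 5: M_sum
    theta = np.mod(np.arctan2(gy, gx), np.pi)     # unsigned gradient orientation
    bins = (theta / np.pi * n_bins).astype(int) % n_bins
    hist = np.bincount(bins.ravel(), weights=m.ravel(), minlength=n_bins)
    return hist / (m_sum + 1e-8)                  # Formula 6: B_i = M_b / M_sum
```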
In addition, when performing the normalization process on the block to be processed, the features of its adjacent blocks can be referenced, so that the feature information of the adjacent blocks helps judge which vectors form dominant or continuous edges; normalization is then computed with emphasis on the more prominent or important edge vectors.
In one embodiment, the regularization parameter can be expressed as:

N = √(x1² + x2² + … + xn²)    (Formula 7)

In Formula 7, x1~xn represent the HOG feature values that need to be normalized, including all of the first HOG features and the second HOG features. The HOG feature normalization result Ĥ(x, y) of the block to be processed can then be computed as:

Ĥ(x, y) = H(x, y) / N    (Formula 8)

where H(x, y) denotes the HOG feature of the block to be processed before normalization.
In one embodiment, the processor can omit step 702 and directly use the features of the user frame-selected object as training samples to train the classifier.
In the above manner, the dominant edge direction features of continuous blocks can be highlighted. In one embodiment, the processor can further arrange and store the computed feature values in the order in which features are accessed during object detection and tracking, so as to capture the features of the user frame-selected object more accurately.
Fig. 8 depicts a schematic diagram of HOG feature strengthening. The example of Fig. 8 shows 3 × 3 blocks 802; each block 802 includes 2 × 2 cells 804, and each cell 804 includes, for example, multiple pixels (not shown). Before the normalization process, HOG feature groups corresponding to different directions, such as VA1 and VA2, can be obtained for different blocks 802. After the normalization process, the HOG feature groups VA1 and VA2 are converted into the strengthened HOG feature groups VA1' and VA2' respectively. It can be seen that, compared with the HOG feature groups VA1 and VA2, some HOG features are highlighted in the strengthened HOG feature groups VA1' and VA2'.
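A minimal sketch of the strengthening step: each block's HOG features are divided by a norm taken over the block and its neighboring blocks (Formulas 7 and 8); the 3 × 3 neighborhood follows the Fig. 8 example, and the L2 norm is an assumption:

```python
import numpy as np

def strengthen_hog(block_feats):
    """Normalize each block by the norm over itself and its 8 neighbors (a sketch).

    block_feats: array of shape (rows, cols, n_bins), one HOG vector per block.
    """
    rows, cols, _ = block_feats.shape
    out = np.zeros_like(block_feats, dtype=float)
    for i in range(rows):
        for j in range(cols):
            i0, i1 = max(i - 1, 0), min(i + 2, rows)        # 3x3 neighborhood,
            j0, j1 = max(j - 1, 0), min(j + 2, cols)        # clipped at the border
            group = block_feats[i0:i1, j0:j1]               # first + second HOG features
            n = np.sqrt((group.astype(float) ** 2).sum())   # Formula 7: norm N
            out[i, j] = block_feats[i, j] / (n + 1e-8)      # Formula 8: H / N
    return out
```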
Fig. 9 depicts a flowchart of the adaptive training of a multi-category classifier according to an embodiment of the present invention. In step 902, the processor realizes multiple classifiers in the object detection and tracking procedure to perform object detection.
In step 904, the processor chooses a classifier from the multiple classifiers and provides multiple training samples to this classifier, to establish a parameter range for each of multiple categories, where the categories are the category judgments corresponding to the target objects and the user frame-selected objects.
In step 906, the processor searches the parameter ranges for a distinguishable parameter range that does not overlap with the other parameter ranges, and labels the category corresponding to the distinguishable parameter range as a distinguishable category.
In step 908, the processor selects from the categories a category to be distinguished, whose corresponding parameter range overlaps with other parameter ranges. In one embodiment, the parameter range of the category to be distinguished is the one that overlaps with the largest number of other parameter ranges.
In step 910, another classifier that allows the category to be distinguished to be labeled as a distinguishable category is chosen from the classifiers.
In step 912, the parameter range of the category to be distinguished is removed from the parameter ranges.
In step 914, it is judged whether the chosen classifiers allow every category to be labeled as a distinguishable category. If so, step 916 follows, in which the classifiers that were not chosen are deleted. If not, the flow returns to step 906 and the adaptive training process continues, until the chosen classifiers allow every category to be labeled as a distinguishable category.
In one embodiment, the processor can provide the training samples corresponding to a certain category to a classifier to obtain multiple distance values, and then determine the parameter range of that category according to the average value and the standard deviation of these distance values. This is explained below in conjunction with Figure 10 and Figure 11.
In addition, according to the following embodiment, the training samples of a not-yet-trained object category (such as the object category corresponding to a user frame-selected object) serve as positive samples for the classifier, while the training samples of the other object categories serve as negative samples for the classifier.
Figure 10 depicts a schematic diagram of the distance values of training samples relative to the different categories of a classifier. According to this embodiment, the processor can substitute the training samples into each classifier to obtain corresponding distance values. For example, substituting the j-th training sample of the i-th category into the k-th SVM classifier gives the corresponding distance value:

D_k(i, j) = w_k · x_j^i − ρk    (Formula 9)

where w_k denotes a vector of the feature vector size, x_j^i denotes the feature vector extracted from the j-th training sample of the i-th category, and ρk denotes the rho parameter of the k-th SVM classifier. The processor can then compute the average value of the distance values:

μ_k^i = (1 / sti) · Σ(j = 1..sti) D_k(i, j)    (Formula 10)

where sti denotes the number of training samples of the i-th category.
In the above manner, the different categories can be projected onto a one-dimensional space, where OSHk denotes the distance value reference point of the k-th SVM classifier.
Figure 11 depicts a schematic diagram of the parameter intervals of the different categories of a classifier. As shown in Figure 11, the different categories LP1 and LP2 each correspond to a one-dimensional parameter range, where the central value of each parameter range is the average value of the corresponding distance values, and the upper limit value and the lower limit value of each parameter range are each one standard deviation away from that average value. The standard deviation can, for example, be expressed as:

σ_k^i = √((1 / sti) · Σ(j = 1..sti) (D_k(i, j) − μ_k^i)²)    (Formula 11)

According to the distance average μ_k^i and standard deviation σ_k^i of each category LP1, LP2, the upper limit value of each parameter range can, for example, be expressed as:

T_upper = μ_k^i + σ_k^i    (Formula 12)

and the lower limit value of each parameter range can, for example, be expressed as:

T_lower = μ_k^i − σ_k^i    (Formula 13)

Although in the above example the upper limit value and the lower limit value of a parameter range are each one standard deviation away from the corresponding average value, the invention is not limited thereto; the size of the parameter range can be adjusted for different applications.
Figure 12 depicts a schematic diagram of the adaptive training of a multi-category classifier. In the example of Figure 12, the categories to be distinguished include LP0, LP1, LP2 and LP3. In stage 1202, the first SVM classifier can distinguish category LP0 from the non-LP0 categories; in other words, the parameter range of category LP0 does not overlap with the parameter ranges of the other categories. The parameter ranges of the remaining categories LP1, LP2 and LP3 overlap, so the first SVM classifier cannot classify them effectively. In stage 1204, a second SVM classifier is introduced to distinguish category LP2, whose parameter range overlaps with the most other parameter ranges. In stage 1206, the parameter intervals used by the trained first and second classifiers are deleted. In this way, the parameter intervals corresponding to all categories LP0~LP3 can be separated; in other words, the classification of four categories can be completed with only two classifiers. Compared with the traditional practice of providing a dedicated classifier for each category, the adaptive training method of the multi-category classifier proposed by the present invention can effectively reduce the number of classifiers used, thereby improving computational efficiency.
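For illustration, a minimal sketch condensing the procedure of Figs. 9 to 12: one-dimensional parameter ranges are built from the mean and standard deviation of decision values (Formulas 10 to 13), and classifiers are added until every category owns a non-overlapping range. The sklearn-style decision_function API and the pre-trained classifier pool are assumptions:

```python
def class_ranges(classifier, samples_by_class):
    """Per-class mu +/- sigma interval of decision values (Formulas 10 to 13, a sketch)."""
    ranges = {}
    for cls, feats in samples_by_class.items():
        d = classifier.decision_function(feats)   # distance values D (assumed API)
        ranges[cls] = (d.mean() - d.std(), d.mean() + d.std())
    return ranges

def adaptive_train(classifier_pool, samples_by_class):
    """Assign categories to classifiers via non-overlapping intervals (a sketch)."""
    assigned, pending = {}, dict(samples_by_class)
    for clf in classifier_pool:                    # steps (b)/(e): bring in a classifier
        if not pending:
            break                                  # step (h): remaining classifiers unused
        ranges = class_ranges(clf, pending)
        for cls, (lo, hi) in list(ranges.items()): # step (c): find distinguishable ranges
            overlaps = any(lo < h and l < hi
                           for c, (l, h) in ranges.items() if c != cls)
            if not overlaps:
                assigned[cls] = (clf, (lo, hi))    # labeled as a distinguishable category
                del pending[cls]                   # step (f): remove its parameter range
    return assigned
```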
In conclusion the present invention proposes a kind of image labeling method, electronic device and non-transient readable in computer storage matchmaker Body can filter out the invalid image frames sample that repeatability is high in video data automatically, and filter out with object structure diversity Key images frame is browsed for user, and is increased newly, corrected mark object, with sophisticated image mark as a result, saving image in turn The manpower expended needed for mark.On the other hand, technology proposed by the present invention can more import expertise feedback mechanism and be picked with being promoted Take the correctness and robustness of key images frame.
Although the present invention is disclosed above with preferred embodiments, they are not intended to limit the invention. Any person skilled in the art may make slight modifications and refinements without departing from the spirit and scope of the invention; the scope of protection of the invention shall therefore be defined by the appended claims.

Claims (20)

1. An image labeling method implemented by an electronic device comprising a processor, comprising:
obtaining, by the processor, an image frame sequence from video data, the image frame sequence comprising a plurality of image frames;
executing, by the processor, an object detection and tracking procedure on the image frame sequence, to recognize and track one or more target objects in the plurality of image frames;
selecting, by the processor, a plurality of candidate key image frames from the plurality of image frames according to a first selection condition, wherein the first selection condition comprises: when a target object among the one or more target objects starts to appear or starts to disappear in an image frame of the plurality of image frames, selecting that image frame as one of the candidate key image frames;
determining, by the processor, a plurality of first similarity indices of the candidate key image frames, wherein each first similarity index is determined by the processor through a similarity computation according to a first covariance value of the corresponding candidate key image frame and a plurality of first variance values of the corresponding candidate key image frame computed statistically along different directions;
determining, by the processor, a plurality of second similarity indices of a plurality of adjacent image frames, wherein each adjacent image frame is adjacent to at least one of the candidate key image frames, and each second similarity index is determined by the processor through the similarity computation according to a second covariance value of the corresponding adjacent image frame and a plurality of second variance values of the corresponding adjacent image frame computed statistically along different directions;
selecting, by the processor, the candidate key image frames, together with those of the adjacent image frames that satisfy a second selection condition, as a plurality of key image frames, wherein the second selection condition comprises: when the difference between the corresponding second similarity index of an adjacent image frame and the corresponding first similarity index of the candidate key image frame adjacent to that adjacent image frame exceeds a similarity threshold, selecting that adjacent image frame as one of the key image frames; and
presenting, by the processor, the key image frames in a graphical user interface, and displaying annotation information about the one or more target objects through the graphical user interface.
2. The image labeling method of claim 1, wherein the object detection and tracking procedure comprises:
establishing, by the processor, a plurality of image pyramids for the plurality of image frames, each image pyramid comprising image layers of a plurality of different resolutions; and
performing, by the processor, object detection on the image layers in each image pyramid with a plurality of classifier pyramids.
3. The image labeling method of claim 2, wherein the object detection and tracking procedure further comprises:
capturing, by the processor, a histogram of oriented gradients (HOG) feature of the one or more target objects;
applying a frequency-domain conversion to the HOG feature, to obtain a HOG frequency-domain feature; and
executing, by the processor, a kernelized correlation filter (KCF) object tracking procedure to track the HOG frequency-domain feature.
4. The image labeling method of claim 1, further comprising:
receiving, by the processor, a user operation via the graphical user interface; and
generating, by the processor, a filling result in response to the user operation, the filling result comprising a user frame-selected object and user annotation information about the user frame-selected object, wherein the user frame-selected object is image content captured from the key image frames.
5. The image labeling method of claim 4, further comprising:
executing, by the processor, a feature strengthening procedure comprising:
dividing the user frame-selected object into a plurality of blocks;
choosing a block to be processed from the plurality of blocks;
executing a HOG feature extraction procedure, to obtain a plurality of first HOG features of the block to be processed and a plurality of second HOG features of a plurality of adjacent blocks of the block to be processed, wherein the adjacent blocks are adjacent to the block to be processed;
performing a norm operation on a feature set comprising the first HOG features and the second HOG features, to obtain a regularization parameter; and
performing a normalization process on the first HOG features according to the regularization parameter, to obtain a plurality of strengthened first HOG features for object detection by the object detection and tracking procedure.
6. The image labeling method of claim 4, further comprising:
(a) realizing, by the processor, a plurality of classifiers in the object detection and tracking procedure to perform object detection;
(b) choosing, by the processor, a classifier from the plurality of classifiers and providing a plurality of training samples to the classifier, to establish a plurality of parameter ranges for a plurality of categories, wherein the categories are the category judgments corresponding to the one or more target objects and the user frame-selected object;
(c) searching, by the processor, for a distinguishable parameter range that does not overlap with the other parameter ranges among the plurality of parameter ranges, and labeling the category corresponding to the distinguishable parameter range as a distinguishable category;
(d) selecting, by the processor, a category to be distinguished from the plurality of categories, wherein the corresponding parameter range of the category to be distinguished overlaps with other parameter ranges among the plurality of parameter ranges;
(e) choosing, by the processor, from the plurality of classifiers, another classifier that allows the category to be distinguished to be labeled as a distinguishable category;
(f) removing the parameter range of the category to be distinguished from the plurality of parameter ranges;
(g) repeating, by the processor, steps (c) to (f), until the chosen classifiers among the plurality of classifiers allow every category to be labeled as a distinguishable category; and
(h) deleting, by the processor, the classifiers that were not chosen from the plurality of classifiers.
7. The image labeling method of claim 6, wherein the corresponding parameter range of the category to be distinguished overlaps with the largest number of other parameter ranges among the plurality of parameter ranges.
8. The image labeling method of claim 6, further comprising:
providing, by the processor, a plurality of specific training samples corresponding to a specific category of the plurality of categories to the classifier, to obtain a plurality of distance values; and
determining, by the processor, a specific parameter range corresponding to the specific category among the plurality of parameter ranges according to an average value and a standard deviation of the plurality of distance values.
9. The image labeling method of claim 8, wherein the central value of the specific parameter range is the average value, and an upper limit value and a lower limit value of the specific parameter range are each one standard deviation away from the average value.
10. The image labeling method of claim 4, wherein the plurality of classifiers are support vector machine (SVM) classifiers.
11. A non-transient computer-readable storage medium storing one or more instructions, the one or more instructions being executable by a processor, so that an electronic device comprising the processor performs the image labeling method of any one of claims 1 to 10.
12. An electronic device, comprising:
a memory; and
a processor coupled to the memory and configured to:
obtain an image frame sequence from video data, the image frame sequence comprising a plurality of image frames;
perform an object detection and tracking procedure on the image frame sequence to recognize and track one or more target objects from the plurality of image frames;
select a plurality of candidate key image frames from the plurality of image frames according to a first selection condition, wherein the first selection condition comprises: when a target object of the one or more target objects starts to appear or starts to disappear in an image frame of the plurality of image frames, the image frame is selected as one of the candidate key image frames;
obtain a plurality of first similarity indices of the candidate key image frames, wherein each first similarity index is determined by the processor through a similarity calculation according to a first covariance of the corresponding candidate key image frame and a plurality of first variances of the corresponding candidate key image frame computed along different directions;
obtain a plurality of second similarity indices of a plurality of adjacent image frames, wherein each adjacent image frame is adjacent to at least one of the candidate key image frames, and each second similarity index is determined by the processor through the similarity calculation according to a second covariance of the corresponding adjacent image frame and a plurality of second variances of the corresponding adjacent image frame computed along different directions;
select the candidate key image frames, together with those adjacent image frames satisfying a second selection condition, as a plurality of key image frames, wherein the second selection condition comprises: when a difference between the second similarity index corresponding to an adjacent image frame and the first similarity index corresponding to the candidate key image frame adjacent to that adjacent image frame exceeds a similarity threshold, the adjacent image frame is selected as one of the key image frames; and
present the key image frames in a graphical user interface, and display annotation information about the one or more target objects through the graphical user interface.
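Claim 12 states only that each similarity index is derived from a covariance of the frame together with variances gathered along different directions. One plausible reading, sketched below, is a correlation-style index: the covariance of a frame against a reference frame, normalized by the frame's horizontal and vertical variances. The exact statistic and the choice of reference frame are assumptions made for illustration, not the patented formula.

```python
import numpy as np

def similarity_index(frame, reference):
    f = frame.astype(np.float64)
    # covariance between the frame and a reference frame, pixelwise
    cov = np.cov(f.ravel(), reference.astype(np.float64).ravel())[0, 1]
    var_h = f.var(axis=1).mean()  # variance statistics along the horizontal direction
    var_v = f.var(axis=0).mean()  # variance statistics along the vertical direction
    return cov / (np.sqrt(var_h * var_v) + 1e-9)

def meets_second_condition(second_index, first_index, threshold):
    # the adjacent frame is promoted to key frame when the index difference
    # against its neighboring candidate exceeds the similarity threshold
    return abs(second_index - first_index) > threshold
```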
13. The electronic device as claimed in claim 12, wherein the processor is further configured to:
establish a plurality of image pyramids for the plurality of image frames, wherein each image pyramid comprises a plurality of image layers of different resolutions; and
perform object detection on the image layers of each image pyramid using a plurality of classifier pyramids.
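The multi-resolution detection of claim 13 can be pictured as building a pyramid of progressively downsampled layers per frame and running a detector over each layer, mapping hits back to full resolution. The sketch below uses OpenCV's pyrDown and abstracts the classifier pyramid behind a plain callable detect_fn; the level count and the box-rescaling rule are illustrative assumptions.

```python
import cv2

def build_pyramid(frame, levels=4):
    layers = [frame]
    for _ in range(levels - 1):
        layers.append(cv2.pyrDown(layers[-1]))  # each layer halves the resolution
    return layers

def detect_over_pyramid(frame, detect_fn, levels=4):
    detections = []
    for level, layer in enumerate(build_pyramid(frame, levels)):
        for (x, y, w, h) in detect_fn(layer):  # classifier pyramid as a callable
            s = 2 ** level                     # map boxes back to the original frame
            detections.append((x * s, y * s, w * s, h * s))
    return detections
```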
14. The electronic device as claimed in claim 13, wherein the processor is further configured to:
capture a histogram of oriented gradients (HOG) feature of the one or more target objects;
apply a frequency-domain transform to the HOG feature to obtain a HOG frequency-domain feature; and
execute a kernelized correlation filter (KCF) object tracking procedure to track the HOG frequency-domain feature.
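The HOG-plus-frequency-domain-plus-KCF pipeline of claim 14 matches what OpenCV's built-in KCF tracker does internally (HOG features correlated via the FFT), so a working approximation needs only a few lines. The sketch below assumes opencv-contrib-python; the tracker factory name varies across OpenCV versions, and the video path and initial box are placeholders.

```python
import cv2

cap = cv2.VideoCapture("input.mp4")      # placeholder video source
ok, first_frame = cap.read()
tracker = cv2.TrackerKCF_create()        # cv2.TrackerKCF.create() on OpenCV >= 4.5
tracker.init(first_frame, (100, 100, 64, 128))  # placeholder (x, y, w, h) of the target

while True:
    ok, frame = cap.read()
    if not ok:
        break
    found, box = tracker.update(frame)   # correlation peak gives the new box
    if found:
        x, y, w, h = (int(v) for v in box)
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
cap.release()
```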
15. The electronic device as claimed in claim 12, wherein the processor is further configured to:
receive a user operation via the graphical user interface; and
in response to the user operation, generate a filling result, wherein the filling result comprises a user-framed object and user annotation information about the user-framed object, and the user-framed object is image content captured from the plurality of key image frames.
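A filling result per claim 15 is essentially a crop of the user's rectangle out of a key frame, paired with the user's label. A minimal sketch follows; the dictionary field names are illustrative assumptions, not the patent's data structure.

```python
import numpy as np

def make_filling_result(key_frame, box, label):
    x, y, w, h = box                           # user-drawn rectangle on the key frame
    user_framed = key_frame[y:y + h, x:x + w]  # image content captured from the key frame
    return {"object": user_framed, "annotation": label}

frame = np.zeros((480, 640, 3), dtype=np.uint8)  # stand-in key frame
result = make_filling_result(frame, (100, 80, 64, 128), "pedestrian")
```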
16. The electronic device as claimed in claim 15, wherein the processor is further configured to:
execute a feature strengthening procedure, comprising:
dividing the user-framed object into a plurality of blocks;
choosing a to-be-processed block from the plurality of blocks;
executing a HOG feature extraction procedure to obtain a plurality of first HOG features of the to-be-processed block and a plurality of second HOG features of a plurality of adjacent blocks adjacent to the to-be-processed block;
performing a norm operation on a feature set comprising the first HOG features and the second HOG features to obtain a regularization parameter; and
performing a normalization process on the first HOG features according to the regularization parameter to obtain a plurality of strengthened first HOG features for object detection in the object detection and tracking procedure.
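The feature strengthening of claim 16 amounts to normalizing a block's HOG features, where the normalizer (the "regularization parameter") is the norm of the block's features pooled with its neighbors' features. The Python sketch below assumes a simple grid of 8-bit image patches and an 8-neighborhood; the HOG parameters and the neighborhood choice are illustrative, not prescribed by the claim.

```python
import cv2
import numpy as np

HOG = cv2.HOGDescriptor((32, 32), (16, 16), (8, 8), (8, 8), 9)

def hog_of(patch):
    return HOG.compute(cv2.resize(patch, (32, 32))).ravel()

def strengthen(blocks, row, col):
    """blocks: 2-D grid of image patches; (row, col): the to-be-processed block."""
    first = hog_of(blocks[row][col])
    neighbors = [hog_of(blocks[r][c])
                 for r in range(max(0, row - 1), min(len(blocks), row + 2))
                 for c in range(max(0, col - 1), min(len(blocks[0]), col + 2))
                 if (r, c) != (row, col)]
    feature_set = np.concatenate([first] + neighbors)
    reg = np.linalg.norm(feature_set) + 1e-9  # norm operation -> regularization parameter
    return first / reg                        # strengthened first HOG features
```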
17. The electronic device as claimed in claim 15, wherein the processor is further configured to:
(a) in the object detection and tracking procedure, implement a plurality of classifiers to perform object detection;
(b) choose a classifier from the plurality of classifiers and provide a plurality of training samples to the classifier so as to establish a plurality of parameter ranges for a plurality of categories, wherein the plurality of categories are the category decisions corresponding to the one or more target objects and the user-framed object;
(c) search the plurality of parameter ranges for a distinguishable parameter range that does not overlap the other parameter ranges, and label the category corresponding to the distinguishable parameter range as a distinguishable category;
(d) select a to-be-distinguished category from the plurality of categories, wherein the parameter range corresponding to the to-be-distinguished category overlaps other parameter ranges in the plurality of parameter ranges;
(e) choose, from the plurality of classifiers, another classifier that allows the to-be-distinguished category to be labeled as a distinguishable category;
(f) remove the to-be-distinguished parameter range from the plurality of parameter ranges;
(g) repeat steps (c) to (f) until the classifiers selected from the plurality of classifiers allow each category to be labeled as a distinguishable category; and
(h) delete the unselected classifiers from the plurality of classifiers.
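Steps (a) through (h) describe a greedy loop: keep one classifier's per-category parameter ranges, peel off any category whose range overlaps no other, and when overlaps remain, pull in another classifier that separates the most-entangled category (claim 18's tie-break). The sketch below captures that control flow only; how the ranges are produced (the ranges_for callable) and the search order over classifiers are assumptions for illustration.

```python
def overlaps(a, b):
    return a[0] <= b[1] and b[0] <= a[1]

def select_classifiers(classifiers, ranges_for, categories):
    """ranges_for(clf) -> {category: (low, high)}, built from training samples."""
    selected = [classifiers[0]]              # step (b): start with one classifier
    ranges = dict(ranges_for(classifiers[0]))
    pending = set(categories)
    while pending:
        # step (c): a range overlapping no other marks its category distinguishable
        for cat in list(pending):
            if not any(overlaps(ranges[cat], ranges[o]) for o in ranges if o != cat):
                pending.discard(cat)
        if not pending:
            break
        # step (d): pick the category overlapping the most other ranges (claim 18)
        target = max(pending, key=lambda c: sum(
            overlaps(ranges[c], ranges[o]) for o in ranges if o != c))
        # step (e): find another classifier whose ranges separate the target category
        for clf in classifiers:
            if clf not in selected:
                cand = ranges_for(clf)
                if not any(overlaps(cand[target], cand[o]) for o in cand if o != target):
                    selected.append(clf)
                    break
        # step (f): remove the to-be-distinguished range; step (g): loop continues
        ranges.pop(target)
        pending.discard(target)
    return selected                          # step (h): unselected classifiers are dropped
```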
18. The electronic device as claimed in claim 17, wherein the parameter range corresponding to the to-be-distinguished category is the parameter range that overlaps the greatest number of other parameter ranges among the plurality of parameter ranges.
19. The electronic device as claimed in claim 17, wherein the processor is further configured to:
provide a plurality of specific training samples among the plurality of training samples, corresponding to a specific category of the plurality of categories, to the classifier so as to obtain a plurality of distance values; and
determine, according to an average value and a standard deviation of the plurality of distance values, a specific parameter range corresponding to the specific category among the plurality of parameter ranges.
20. The electronic device as claimed in claim 19, wherein a central value of the specific parameter range is the average value, and an upper limit value and a lower limit value of the specific parameter range are each one standard deviation away from the average value.
CN201711285222.2A 2017-11-23 2017-12-07 Image labeling method, electronic device and non-transient computer-readable storage medium Pending CN109829467A (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
TW106140836 2017-11-23
TW106140836A TWI651662B (en) 2017-11-23 2017-11-23 Image annotation method, electronic device and non-transitory computer readable storage medium

Publications (1)

Publication Number Publication Date
CN109829467A (en) 2019-05-31

Family

ID=66213743

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711285222.2A Pending CN109829467A (en) 2017-11-23 2017-12-07 Image labeling method, electronic device and non-transient computer-readable storage medium

Country Status (3)

Country Link
US (1) US10430663B2 (en)
CN (1) CN109829467A (en)
TW (1) TWI651662B (en)

Families Citing this family (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP7067812B2 (en) * 2018-03-20 2022-05-16 日本電気株式会社 Information processing device and control method
CN108898086B (en) * 2018-06-20 2023-05-26 腾讯科技(深圳)有限公司 Video image processing method and device, computer readable medium and electronic equipment
CN109801279B (en) * 2019-01-21 2021-02-02 京东方科技集团股份有限公司 Method and device for detecting target in image, electronic equipment and storage medium
CN110290426B (en) * 2019-06-24 2022-04-19 腾讯科技(深圳)有限公司 Method, device and equipment for displaying resources and storage medium
CN110378247B (en) * 2019-06-26 2023-09-26 腾讯科技(深圳)有限公司 Virtual object recognition method and device, storage medium and electronic device
US11544555B1 (en) * 2019-07-30 2023-01-03 Intuit Inc. Invoice data classification and clustering
CN110533685B (en) * 2019-08-30 2023-10-24 腾讯科技(深圳)有限公司 Object tracking method and device, storage medium and electronic device
CN110866480B (en) * 2019-11-07 2021-09-17 浙江大华技术股份有限公司 Object tracking method and device, storage medium and electronic device
CN111159494B (en) * 2019-12-30 2024-04-05 北京航天云路有限公司 Data labeling method for multi-user concurrent processing
CN111161323B (en) * 2019-12-31 2023-11-28 北京理工大学重庆创新中心 Complex scene target tracking method and system based on correlation filtering
CN113312949B (en) * 2020-04-13 2023-11-24 阿里巴巴集团控股有限公司 Video data processing method, video data processing device and electronic equipment
CN111950588B (en) * 2020-07-03 2023-10-17 国网冀北电力有限公司 Distributed power island detection method based on improved Adaboost algorithm
CN111860305B (en) * 2020-07-17 2023-08-01 北京百度网讯科技有限公司 Image labeling method and device, electronic equipment and storage medium
CN112507859B (en) * 2020-12-05 2024-01-12 西北工业大学 Visual tracking method for mobile robot
TWI795752B (en) * 2021-03-30 2023-03-11 歐特明電子股份有限公司 Development device and development method for training vehicle autonomous driving system
CN113420149A (en) * 2021-06-30 2021-09-21 北京百度网讯科技有限公司 Data labeling method and device
TWI830549B (en) * 2022-12-22 2024-01-21 財團法人工業技術研究院 Objects automatic labeling method and system applying the same

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5502727A (en) * 1993-04-20 1996-03-26 At&T Corp. Image and audio communication system having graphical annotation capability
US20020173721A1 (en) * 1999-08-20 2002-11-21 Novasonics, Inc. User interface for handheld imaging devices
US6763148B1 (en) * 2000-11-13 2004-07-13 Visual Key, Inc. Image recognition methods
US7437005B2 (en) * 2004-02-17 2008-10-14 Microsoft Corporation Rapid visual sorting of digital files and data
US20100050080A1 (en) * 2007-04-13 2010-02-25 Scott Allan Libert Systems and methods for specifying frame-accurate images for media asset management
US9904852B2 (en) * 2013-05-23 2018-02-27 Sri International Real-time object detection, tracking and occlusion reasoning
US20160014482A1 (en) * 2014-07-14 2016-01-14 The Board Of Trustees Of The Leland Stanford Junior University Systems and Methods for Generating Video Summary Sequences From One or More Video Segments
US9760970B2 (en) * 2015-03-18 2017-09-12 Hitachi, Ltd. Video analysis and post processing of multiple video streams
AU2015271975A1 (en) * 2015-12-21 2017-07-06 Canon Kabushiki Kaisha An imaging system and method for classifying a concept type in video
US10319412B2 (en) * 2016-11-16 2019-06-11 Adobe Inc. Robust tracking of objects in videos

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101968884A (en) * 2009-07-28 2011-02-09 索尼株式会社 Method and device for detecting target in video image
CN103065300A (en) * 2012-12-24 2013-04-24 安科智慧城市技术(中国)有限公司 Method for video labeling and device for video labeling
CN107133569A (en) * 2017-04-06 2017-09-05 同济大学 The many granularity mask methods of monitor video based on extensive Multi-label learning

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111737519A (en) * 2020-06-09 2020-10-02 北京奇艺世纪科技有限公司 Method and device for identifying robot account, electronic equipment and computer-readable storage medium
CN111737519B (en) * 2020-06-09 2023-10-03 北京奇艺世纪科技有限公司 Method and device for identifying robot account, electronic equipment and computer readable storage medium
CN111724296A (en) * 2020-06-30 2020-09-29 北京百度网讯科技有限公司 Method, device, equipment and storage medium for displaying image
CN111724296B (en) * 2020-06-30 2024-04-02 北京百度网讯科技有限公司 Method, apparatus, device and storage medium for displaying image
CN114627036A (en) * 2022-03-14 2022-06-14 北京有竹居网络技术有限公司 Multimedia resource processing method and device, readable medium and electronic equipment
CN114627036B (en) * 2022-03-14 2023-10-27 北京有竹居网络技术有限公司 Processing method and device of multimedia resources, readable medium and electronic equipment

Also Published As

Publication number Publication date
TW201926140A (en) 2019-07-01
US20190156123A1 (en) 2019-05-23
TWI651662B (en) 2019-02-21
US10430663B2 (en) 2019-10-01

Similar Documents

Publication Publication Date Title
CN109829467A (en) Image labeling method, electronic device and non-transient computer-readable storage medium
CN108520226B (en) Pedestrian re-identification method based on body decomposition and significance detection
CN103761531B (en) The sparse coding license plate character recognition method of Shape-based interpolation contour feature
CN105612554B (en) Method for characterizing the image obtained by video-medical equipment
CN102496001B (en) Method of video monitor object automatic detection and system thereof
Alexe et al. Searching for objects driven by context
Pan et al. A robust system to detect and localize texts in natural scene images
CN110929593B (en) Real-time significance pedestrian detection method based on detail discrimination
CN108549870A (en) A kind of method and device that article display is differentiated
CN105574063A (en) Image retrieval method based on visual saliency
WO2020164278A1 (en) Image processing method and device, electronic equipment and readable storage medium
CN107808126A (en) Vehicle retrieval method and device
CN102208038A (en) Image classification method based on visual dictionary
CN110807434A (en) Pedestrian re-identification system and method based on combination of human body analysis and coarse and fine particle sizes
CN104615986A (en) Method for utilizing multiple detectors to conduct pedestrian detection on video images of scene change
CN110222582B (en) Image processing method and camera
CN110008899B (en) Method for extracting and classifying candidate targets of visible light remote sensing image
Kobyshev et al. Matching features correctly through semantic understanding
CN103106414A (en) Detecting method of passer-bys in intelligent video surveillance
Wang et al. S3D: scalable pedestrian detection via score scale surface discrimination
CN110599463A (en) Tongue image detection and positioning algorithm based on lightweight cascade neural network
CN110659374A (en) Method for searching images by images based on neural network extraction of vehicle characteristic values and attributes
CN115527269A (en) Intelligent human body posture image identification method and system
CN111428730B (en) Weak supervision fine-grained object classification method
Zhu et al. Scene text relocation with guidance

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication (application publication date: 20190531)