CN109492584A - A kind of recognition and tracking method and electronic equipment - Google Patents
- Publication number
- CN109492584A (Application number CN201811331011.2A)
- Authority
- CN
- China
- Prior art keywords
- stream
- tracking object
- video frames
- tracking
- moment
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/40—Scenes; Scene-specific elements in video content
- G06V20/41—Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/22—Matching criteria, e.g. proximity measures
Abstract
This application provides a recognition and tracking method, comprising: obtaining a video frame stream, the video frame stream including at least two frame images; analyzing, according to first category information of the video frame stream, that any frame image contains at least one tracking object; and, when the first category information in the video frame stream meets a preset condition, identifying the video frame stream using second category information to obtain at least one tracking object. With this method, the video frame stream is first analyzed according to the first category information to obtain tracking objects, and only when the first category information meets the preset condition is the video frame stream identified according to the second category information. Since analysis and identification initially rely on a single category of information, and other category information is used only when that category cannot discriminate, the accuracy of recognition and tracking is improved.
Description
Technical field
The present invention relates to the field of electronic devices and, more specifically, to a recognition and tracking method and an electronic device.
Background art
In the era of artificial intelligence, face recognition and tracking technology is applied in many fields and has become an important means of user identification and authentication.
Taking video-based face recognition as an example, the main pipeline includes face detection, tracking, recognition, and comprehensive recognition: face detection determines the position of a face in the current frame; given a face's position in the previous frame, tracking determines that face's position in the current frame; recognition refers to searching for the face template most similar to the current face; and comprehensive recognition gives the final recognition result based on the tracking and recognition results.
In the above pipeline, the tracking step and the recognition step are usually carried out independently and in sequence. In practical applications, under the influence of factors such as occlusion, the tracking result is unreliable, and tracking may even drift from one face to another.
Summary of the invention
In view of this, the present invention provides a recognition and tracking method to solve the problem in the prior art that recognition and tracking results are unreliable.
To achieve the above object, the present invention provides the following technical solutions:
A recognition and tracking method, comprising:
obtaining a video frame stream, the video frame stream including at least two frame images;
analyzing, according to first category information of the video frame stream, that any frame image contains at least one tracking object;
when the first category information in the video frame stream meets a preset condition, identifying the video frame stream according to second category information to obtain at least one tracking object.
Preferably, in the above method, judging whether the first category information in the video frame stream meets the preset condition comprises:
obtaining, based on the analysis, that any frame image contains at least two tracking objects, and obtaining location information of the at least two tracking objects;
calculating whether the relative position of any two tracking objects meets a preset distance, to obtain a first calculation result;
if the first calculation result characterizes that the relative position of a first tracking object and a second tracking object is not greater than the preset distance, the first category information in the video frame stream meets the preset condition;
if the first calculation result characterizes that the relative position of the first tracking object and the second tracking object is greater than the preset distance, the first category information in the video frame stream does not meet the preset condition.
Preferably, in the above method, judging whether the first category information in the video frame stream meets the preset condition comprises:
analyzing the video frame stream according to a preset first analysis rule, to obtain an analysis result;
if the analysis result characterizes that the image at a first moment contains a first tracking object and the image at a second moment contains the first tracking object and a second tracking object, the first category information in the video frame stream meets the preset condition;
if the analysis result characterizes that the image at the first moment and the image at the second moment both contain the first tracking object and do not contain a second tracking object, the first category information in the video frame stream does not meet the preset condition, the first moment being earlier than the second moment.
Preferably, in the above method, identifying the video frame stream according to the second category information to obtain at least two tracking objects comprises:
performing feature analysis on at least two frame images in the video frame stream according to a preset analysis rule, to obtain a first tracking object and/or a second tracking object.
Preferably, in the above method, identifying the video frame stream according to the second category information to obtain at least one tracking object comprises:
analyzing the images in the video frame stream in sequence, to obtain at least two initial tracking objects contained therein and their related information;
calculating, according to the initial tracking objects and their related information, the similarity of the initial tracking objects at any two adjacent moments;
analyzing, according to the similarity of the initial tracking objects at any two adjacent moments, that an initial tracking object at a third moment and an initial tracking object at a fourth moment correspond to the same tracking object, the third moment and the fourth moment being adjacent;
arranging the initial tracking objects corresponding to the same tracking object in order of tracking moment, to obtain a tracking object in the video frame stream.
Preferably, in the above method, the related information includes a moment, a spatial position, and feature information, and calculating, according to the initial tracking objects and their related information, the similarity of the initial tracking objects at any two adjacent moments comprises:
calculating, according to a preset similarity algorithm and the spatial position and feature information of any initial tracking object, the similarity between that initial tracking object and the initial tracking object at the adjacent tracking moment;
specifically, the similarity is calculated using the following formula:
similarity(V_i, V_j) = w1 * L_similarity(L_i, L_j) + w2 * F_similarity(F_i, F_j)
where i and j denote different tracking moments, similarity(V_i, V_j) denotes the similarity between the two initial tracking objects at tracking moments i and j, L denotes spatial position, L_similarity(L_i, L_j) denotes the spatial-position similarity between the two initial tracking objects at tracking moments i and j, F denotes feature information, F_similarity(F_i, F_j) denotes the feature similarity between the two initial tracking objects at tracking moments i and j, and w1 and w2 are weights.
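The weighted-sum formula above can be sketched in Python as follows. The patent fixes only the form w1 * L_similarity + w2 * F_similarity; the distance-decay position term, the cosine feature term, the scale, and the equal weights below are illustrative assumptions, not part of the claims.

```python
import math

def l_similarity(li, lj, scale=100.0):
    # Spatial-position similarity: assumed here to decay with the
    # Euclidean distance between the two (x, y) positions.
    dist = math.hypot(li[0] - lj[0], li[1] - lj[1])
    return 1.0 / (1.0 + dist / scale)

def f_similarity(fi, fj):
    # Feature similarity: assumed here to be the cosine of the angle
    # between the two feature vectors.
    dot = sum(a * b for a, b in zip(fi, fj))
    ni = math.sqrt(sum(a * a for a in fi))
    nj = math.sqrt(sum(b * b for b in fj))
    return dot / (ni * nj)

def similarity(vi, vj, w1=0.5, w2=0.5):
    # similarity(V_i, V_j) = w1 * L_similarity(L_i, L_j)
    #                      + w2 * F_similarity(F_i, F_j)
    (li, fi), (lj, fj) = vi, vj
    return w1 * l_similarity(li, lj) + w2 * f_similarity(fi, fj)
```

Initial tracking objects at adjacent tracking moments whose similarity is highest would then be linked as the same tracking object.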
An electronic device, comprising:
a body; and
a processor configured to: obtain a video frame stream, the video frame stream including at least two frame images; analyze, according to first category information of the video frame stream, that any frame image contains at least one tracking object; and, when the first category information in the video frame stream meets a preset condition, identify the video frame stream according to second category information to obtain at least one tracking object.
Preferably, the above electronic device further comprises:
a camera configured to capture video to obtain the video frame stream.
Preferably, the above electronic device further comprises:
a display screen configured to display an image of the tracking object.
An electronic device, comprising:
an obtaining module configured to obtain a video frame stream, the video frame stream including at least two frame images;
an analysis module configured to analyze, according to first category information of the video frame stream, that any frame image contains at least one tracking object; and
an identification module configured to, when the first category information in the video frame stream meets a preset condition, identify the video frame stream according to second category information to obtain at least one tracking object.
It can be seen from the above technical solutions that, compared with the prior art, the present invention provides a recognition and tracking method comprising: obtaining a video frame stream that includes at least two frame images; analyzing, according to first category information of the video frame stream, that any frame image contains at least one tracking object; and, when the first category information in the video frame stream meets a preset condition, identifying the video frame stream using second category information to obtain at least one tracking object. With this method, the video frame stream is first analyzed according to the first category information to obtain tracking objects, and only when the first category information meets the preset condition is the video frame stream identified according to the second category information. Since analysis and identification initially rely on a single category of information, and other category information is used only when that category cannot discriminate, the accuracy of recognition and tracking is improved.
Brief description of the drawings
In order to explain the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description are only embodiments of the present invention, and those of ordinary skill in the art can obtain other drawings from the provided drawings without creative effort.
Fig. 1 is a flowchart of Embodiment 1 of a recognition and tracking method provided by the present application;
Fig. 2 is a flowchart of Embodiment 2 of a recognition and tracking method provided by the present application;
Fig. 3 is a flowchart of Embodiment 3 of a recognition and tracking method provided by the present application;
Fig. 4 is a flowchart of Embodiment 4 of a recognition and tracking method provided by the present application;
Fig. 5 is a flowchart of Embodiment 5 of a recognition and tracking method provided by the present application;
Fig. 6 is a schematic diagram of an initial tracking result in Embodiment 5 of a recognition and tracking method provided by the present application;
Fig. 7 is a schematic diagram of a probabilistic graphical model in Embodiment 5 of a recognition and tracking method provided by the present application;
Fig. 8 is a schematic diagram of a tracking result in Embodiment 5 of a recognition and tracking method provided by the present application;
Fig. 9 is a structural schematic diagram of Embodiment 1 of an electronic device provided by the present application;
Fig. 10 is another structural schematic diagram of Embodiment 1 of an electronic device provided by the present application;
Fig. 11 is a further structural schematic diagram of Embodiment 1 of an electronic device provided by the present application;
Fig. 12 is a structural schematic diagram of Embodiment 2 of an electronic device provided by the present application.
Detailed description of the embodiments
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the drawings in the embodiments. Obviously, the described embodiments are only some, rather than all, of the embodiments of the present invention. Based on the embodiments of the present invention, all other embodiments obtained by those of ordinary skill in the art without creative effort shall fall within the protection scope of the present invention.
As shown in Fig. 1, which is a flowchart of Embodiment 1 of a recognition and tracking method provided by the present application, the method is applied to an electronic device and includes the following steps:
Step S101: obtaining a video frame stream;
The video frame stream includes at least two frame images.
The video frame stream may be captured by a video capture device connected to the electronic device, or by a video capture device provided in the electronic device.
The video capture device connected to the electronic device, or the video capture device provided in the electronic device, sends the captured video frame stream to the electronic device, and the electronic device receives the video frame stream. The video frame stream received by the electronic device may include at least two frame images, and each frame image may be a picture or an image.
Specifically, the video frame stream is used to capture images of the object to be tracked.
Step S102: analyzing, according to first category information of the video frame stream, that any frame image contains at least one tracking object;
The video frame stream contains information of multiple categories, such as the first category information and the second category information; here, "first" and "second" are used only to distinguish the two categories of information.
In the present application, the tracking objects contained in the video can be analyzed and determined according to the information of only one of the categories (for example, the first category information).
It should be noted that a tracking object obtained through analysis according to the first category information is only a preliminary determination that the frame image contains one, two, or even more tracking objects.
For example, according to the first category information of the video frame stream, it can be determined through analysis that any frame image therein contains one, two, or even more tracking objects.
Step S103: when the first category information in the video frame stream meets a preset condition, identifying the video frame stream according to second category information to obtain at least one tracking object.
That is, when the first category information meets the preset condition, the video frame stream is identified according to the second category information.
It should be noted that when the first category information meets the preset condition, the content in the video frame stream can no longer be identified according to the first category information; at this point, the video frame stream is identified according to the second category information to obtain the tracking objects therein. In this solution, analysis and identification start with only one category of information, and other category information is used only when that category of information cannot discriminate, which improves the accuracy of recognition and tracking.
Of course, it should be noted that in the process of identifying the content in the video frame stream according to the second category information to determine the tracking objects (step S103), the identification may be performed according to the second category information alone, or according to content that includes the second category information; this is explained in detail in subsequent embodiments and is not detailed in this embodiment.
In a specific implementation, when the first category information does not meet the preset condition, the video frame stream is not identified according to the second category information, and tracking analysis continues according to the first category information.
It should also be noted that identifying the video frame stream according to the first category information is a lower-precision identification that can only determine how many tracking objects are involved in the video frame stream, whereas identifying the video frame stream according to the second category information is a higher-precision identification that can accurately identify, and thereby distinguish, the different tracking objects in the video frame stream.
In summary, this embodiment provides a recognition and tracking method comprising: obtaining a video frame stream that includes at least two frame images; analyzing, according to first category information of the video frame stream, that any frame image contains at least one tracking object; and, when the first category information in the video frame stream meets a preset condition, identifying the video frame stream using second category information to obtain at least one tracking object. With this method, the video frame stream is first analyzed according to the first category information to obtain tracking objects, and only when the first category information meets the preset condition is the video frame stream identified according to the second category information. Since analysis and identification initially rely on a single category of information, and other category information is used only when that category cannot discriminate, the accuracy of recognition and tracking is improved.
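The two-stage flow of steps S101 to S103 can be sketched as follows. The helper names `analyze_first_category`, `meets_preset_condition`, and `identify_second_category` are hypothetical stand-ins for the coarse analysis, the condition check, and the finer identification described above; the patent does not name such functions.

```python
def recognize_and_track(video_frame_stream,
                        analyze_first_category,
                        meets_preset_condition,
                        identify_second_category):
    # Step S101: the stream must contain at least two frame images.
    frames = list(video_frame_stream)
    assert len(frames) >= 2
    # Step S102: coarse analysis using first category information only.
    tracking_objects = analyze_first_category(frames)
    # Step S103: fall back to the second category information only when
    # the first category information meets the preset condition.
    if meets_preset_condition(frames, tracking_objects):
        tracking_objects = identify_second_category(frames)
    return tracking_objects
```

The fallback structure is the point: the cheaper first-category analysis runs on every stream, and the more expensive second-category identification runs only when needed.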
As shown in Fig. 2, which is a flowchart of Embodiment 2 of a recognition and tracking method provided by the present application, the method includes the following steps:
Step S201: obtaining a video frame stream;
Step S202: analyzing, according to first category information of the video frame stream, that any frame image contains at least one tracking object;
Steps S201-S202 are consistent with steps S101-S102 in Embodiment 1 and are not repeated in this embodiment.
Step S203: obtaining, based on the analysis, that any frame image contains at least two tracking objects, and obtaining location information of the at least two tracking objects;
It should be noted that in this embodiment the first category information includes the location information of the tracking objects.
Specifically, the location information of a tracking object may be its position in the video frame stream, jointly determined by its positions in the individual frame images, or its position in any single frame image.
Of course, in a specific implementation, when the analysis in step S202 obtains at least one tracking object, the location information of the tracking object can be obtained synchronously. In this step, only the synchronously obtained position of the tracking object then needs to be retrieved, without repeated calculation.
Step S204: calculating whether the relative position of any two tracking objects meets a preset distance, to obtain a first calculation result;
When at least two tracking objects are involved in the video frame stream, the relative position between the tracking objects is calculated to determine whether the two will overlap in the image (the image of a user close to the image capture device occluding the image of a user far from it) and thereby cause the tracking to go wrong.
Specifically, if the first calculation result characterizes that the relative position of the first tracking object and the second tracking object is not greater than the preset distance, the first category information in the image meets the preset condition; if the first calculation result characterizes that the relative position of the first tracking object and the second tracking object is greater than the preset distance, the first category information in the image does not meet the preset condition.
Specifically, when the relative position of the first tracking object and the second tracking object is greater than the preset distance, occlusion between the two will not occur for the time being, and the video frame stream does not need to be processed with other category information; when the relative position of the first tracking object and the second tracking object is not greater than the preset distance, occlusion between the two may occur, and the video frame stream then needs to be processed with other category information to determine the tracking objects therein.
In a specific implementation, when the relative position of the first tracking object and the second tracking object changes from not greater than the preset distance to greater than the preset distance, the video frame stream can again be analyzed according to the first category information to obtain the tracking objects therein, so as to reduce the amount of data processing in the analysis.
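The relative-position check of steps S203-S204 might look like the following sketch. The centre-point Euclidean distance metric and the threshold value are illustrative assumptions; the patent specifies only that a relative position is compared against a preset distance.

```python
import itertools
import math

def first_category_condition_met(positions, preset_distance=50.0):
    # positions: list of (x, y) centres of the tracking objects
    # detected in one frame image (at least two are required).
    # The preset condition is met when any two tracking objects are
    # no farther apart than the preset distance, i.e. occlusion may
    # occur and second-category identification is needed.
    for (xi, yi), (xj, yj) in itertools.combinations(positions, 2):
        if math.hypot(xi - xj, yi - yj) <= preset_distance:
            return True
    return False
```

When this returns False, the two objects are far enough apart that tracking can continue on the cheaper first-category analysis alone.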
Step S205: when the first category information in the video frame stream meets the preset condition, identifying the video frame stream according to second category information to obtain at least one tracking object.
Step S205 is consistent with step S103 in Embodiment 1 and is not repeated in this embodiment.
In summary, in the recognition and tracking method provided in this embodiment, judging whether the first category information in the video frame stream meets the preset condition comprises: obtaining, based on the analysis, that any frame image contains at least two tracking objects, and obtaining location information of the at least two tracking objects; calculating whether the relative position of any two tracking objects meets a preset distance, to obtain a first calculation result; if the first calculation result characterizes that the relative position of the first tracking object and the second tracking object is not greater than the preset distance, the first category information in the video frame stream meets the preset condition; if the first calculation result characterizes that the relative position of the first tracking object and the second tracking object is greater than the preset distance, the first category information in the video frame stream does not meet the preset condition. In this solution, whether the two tracking objects will occlude each other and cause tracking errors is first judged according to the relative position between them, and the video frame stream is then identified according to the second category information to obtain the tracking objects, which improves the accuracy of recognition and tracking.
As shown in Fig. 3, which is a flowchart of Embodiment 3 of a recognition and tracking method provided by the present application, the method includes the following steps:
Step S301: obtaining a video frame stream;
Step S302: analyzing, according to first category information of the video frame stream, that any frame image contains at least one tracking object;
Steps S301-S302 are consistent with steps S101-S102 in Embodiment 1 and are not repeated in this embodiment.
Step S303: analyzing the video frame stream according to a preset first analysis rule, to obtain an analysis result;
It should be noted that in this embodiment the first category information characterizes the number of tracking objects.
Of course, in a specific implementation, when the analysis in step S302 obtains at least one tracking object, the number of tracking objects can be obtained synchronously. In this step, only the number of tracking objects contained in each frame image then needs to be retrieved, without analyzing the number of tracking objects again.
Specifically, in this step, the analysis of the video frame stream is realized through changes in the number of tracking objects contained in each frame image, to obtain the analysis result.
Specifically, if the analysis result characterizes that the image at the first moment contains the first tracking object and the image at the second moment contains the first tracking object and the second tracking object, the first category information in the video frame stream meets the preset condition; if the analysis result characterizes that the image at the first moment and the image at the second moment both contain the first tracking object and do not contain a second tracking object, the first category information in the video frame stream does not meet the preset condition, the first moment being earlier than the second moment.
It should be noted that in this embodiment, when the first moment and the second moment contain the same tracking object (the first tracking object) and a new tracking object (the second tracking object) appears at the second moment, the first category information in the video frame stream meets the preset condition; otherwise (no new tracking object appears in the video frame stream), the first category information in the video frame stream does not meet the preset condition.
As a specific example, if the image at the first moment contains one tracking object and the image at the subsequent second moment contains two tracking objects, a tracking object has been added to the video frame stream. To improve the accuracy of recognition and tracking, the two tracking objects in the image need to be identified separately, so as to avoid errors during tracking.
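The count-based check in this embodiment can be sketched as follows; the per-frame counts are assumed to be produced by the earlier first-category analysis, and the function name is hypothetical.

```python
def new_object_appeared(object_counts):
    # object_counts: number of tracking objects detected in each
    # frame image, in time order (first moment before second moment).
    # The preset condition is met as soon as a later frame contains
    # more tracking objects than the frame before it.
    for earlier, later in zip(object_counts, object_counts[1:]):
        if later > earlier:
            return True
    return False
```

A True result would trigger the switch to second-category identification so that the newly appeared object can be distinguished from the existing one.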
Step S304: when the first category information in the video frame stream meets the preset condition, identifying the video frame stream according to second category information to obtain at least one tracking object.
Step S304 is consistent with step S103 in Embodiment 1 and is not repeated in this embodiment.
In summary, in the recognition and tracking method provided in this embodiment, judging whether the first category information in the video frame stream meets the preset condition comprises: analyzing the video frame stream according to a preset first analysis rule, to obtain an analysis result; if the analysis result characterizes that the image at the first moment contains the first tracking object and the image at the second moment contains the first tracking object and the second tracking object, the first category information in the video frame stream meets the preset condition; if the analysis result characterizes that the image at the first moment and the image at the second moment both contain the first tracking object and do not contain a second tracking object, the first category information in the video frame stream does not meet the preset condition, the first moment being earlier than the second moment. In this solution, whether a new tracking object has appeared is first judged according to the number of tracking objects, and the video frame stream is then identified according to the second category information to obtain the tracking objects, so that different tracking objects can be distinguished, which improves the accuracy of recognition and tracking.
As shown in Figure 4, which is a flowchart of embodiment 4 of a recognition and tracking method provided by the present application, the method includes the following steps:
Step S401: obtain a stream of video frames;
Step S402: according to the first category information of the stream of video frames, analyze to obtain that any frame image contains at least one tracking object;
Steps S401-S402 are identical to steps S101-S102 in embodiment 1 and are not repeated in this embodiment.
Step S403: based on the first category information in the stream of video frames meeting the preset condition, perform feature analysis on at least two frame images in the stream of video frames according to a preset analysis rule, to obtain the first tracking object and/or the second tracking object.
Here, the first category information in the stream of video frames meets the preset condition, so the tracking objects in the stream of video frames need to be identified according to the second category information.
In this embodiment, the second category information is feature information of the image, specifically feature information of the persons contained in the image.
The preset analysis rule can be a high-precision character-feature analysis rule, such as a deep learning model or a neural network model, which analyzes the facial features of the persons in each frame image and/or the behavioral features of the persons in the stream of video frames, and determines whether the tracking objects in different frame images belong to the same person.
In a specific implementation, the character features include facial features and/or behavioral features.
It should be noted that when two tracking objects overlap in the image, i.e., one person occludes the other, there is only one tracking object in the image; when the two do not occlude each other (the two persons are separated), there are two tracking objects in the image.
To sum up, in the recognition and tracking method provided in this embodiment, identifying the stream of video frames according to the second category information to obtain at least two tracking objects comprises: performing, according to a preset analysis rule, feature analysis on at least two frame images in the stream of video frames to obtain the first tracking object and/or the second tracking object. With this method, feature analysis of the images in the stream of video frames determines the tracking objects therein, and based on the analysis result different tracking objects can be distinguished, so that recognition and tracking errors are avoided when a new tracking object appears or when two tracking objects overlap in the image.
As shown in Figure 5, which is a flowchart of embodiment 5 of a recognition and tracking method provided by the present application, the method includes the following steps:
Step S501: obtain a stream of video frames;
Step S502: according to the first category information of the stream of video frames, analyze to obtain that any frame image contains at least one tracking object;
Steps S501-S502 are identical to steps S101-S102 in embodiment 1 and are not repeated in this embodiment.
Step S503: analyze the images in the stream of video frames in sequence to obtain at least two initial tracking objects contained therein and their related information;
Each frame image of the stream of video frames contains at least one initial tracking object.
It should be noted that the problem addressed here is unreliable tracking results across different tracking objects: some frame images in the stream of video frames concerned necessarily contain two or even more objects to be tracked, and the tracked objects may therefore be tracked unreliably. Hence, in this embodiment, a more accurate identification of the tracking objects is carried out mainly for this situation, so as to achieve highly reliable tracking.
Specifically, what this step obtains through analysis is the initial recognition result of each frame image and its related information, where the related information may include the time, the position, the features, and so on.
In a specific implementation, the features may include traditional features, such as a color histogram, HOG (Histogram of Oriented Gradients) or LBP (Local Binary Patterns), or features extracted by a neural network, for example the output of a certain layer of the network.
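As a concrete illustration of the traditional features mentioned above, a minimal color-histogram extractor might look like this (a pure-Python sketch; the function name and pixel representation are assumptions — real HOG/LBP features would typically come from a vision library):

```python
def color_histogram(pixels, bins=8):
    """A simple traditional appearance feature: a per-channel color
    histogram over 8-bit pixel values, L1-normalized so that patches
    of different sizes remain comparable. `pixels` is a list of
    (r, g, b) tuples."""
    width = 256 // bins
    hist = [0.0] * (3 * bins)
    for px in pixels:
        for c in range(3):
            hist[c * bins + px[c] // width] += 1.0
    total = float(len(pixels) * 3)
    return [v / total for v in hist]

patch = [(0, 0, 0), (255, 255, 255)]  # one black, one white pixel
feat = color_histogram(patch)         # 24-dim feature, sums to 1
```

Such a vector can then serve as the feature information F of an initial tracking object in the similarity calculation of step S504.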
Step S504: according to the initial tracking objects and their related information, calculate the similarity between the initial tracking objects of any two adjacent moments;
Each frame image contains initial tracking objects, and the similarity between the initial tracking objects in the images of any two adjacent moments is calculated to determine whether the initial tracking objects identified in step S503 are accurate.
Specifically, the related information includes the moment, the spatial position and the feature information; step S504 then calculates, according to a preset similarity algorithm and the spatial position and feature information of any initial tracking object, the similarity between that initial tracking object and the initial tracking objects of the adjacent tracking moment.
Specifically, the similarity is calculated with the following formula:
similarity(V_i, V_j) = w1 * L_similarity(L_i, L_j) + w2 * F_similarity(F_i, F_j)
where i and j denote different tracking moments, similarity(V_i, V_j) denotes the similarity between the two initial tracking objects at moments i and j, L denotes the spatial position, L_similarity(L_i, L_j) denotes the spatial-position similarity between the two initial tracking objects at moments i and j, F denotes the feature information, F_similarity(F_i, F_j) denotes the feature similarity between the two initial tracking objects at moments i and j, and w1 and w2 are weights.
The values of w1 and w2 can be set according to the actual situation; the specific values are not limited in this application.
It should be noted that the different tracking moments denoted by i and j are adjacent tracking moments.
Specifically, L_similarity(L_i, L_j) can be calculated as the intersection-over-union of the two rectangles; in the ideal state it approaches 1.
Specifically, F_similarity(F_i, F_j) can use face-feature recognition and compute the Euclidean distance between feature vectors (e.g., 256-dimensional vectors), a smaller distance yielding a higher similarity; alternatively, the cosine distance can be used to obtain the feature similarity of the two.
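The weighted similarity formula of step S504, with intersection-over-union as L_similarity and cosine similarity as F_similarity, could be sketched as follows (an illustrative implementation under those assumptions; the box and feature-vector representations are hypothetical):

```python
import math

def iou(a, b):
    """Spatial-position similarity L_similarity: intersection over
    union of two boxes (x1, y1, x2, y2). It approaches 1 when the
    boxes coincide, matching the ideal state described above."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = ((a[2] - a[0]) * (a[3] - a[1]) +
             (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union else 0.0

def feature_similarity(fi, fj):
    """Feature similarity F_similarity via cosine similarity of two
    feature vectors (a Euclidean-distance variant would also work)."""
    dot = sum(x * y for x, y in zip(fi, fj))
    ni = math.sqrt(sum(x * x for x in fi))
    nj = math.sqrt(sum(x * x for x in fj))
    return dot / (ni * nj) if ni and nj else 0.0

def similarity(box_i, box_j, feat_i, feat_j, w1=0.5, w2=0.5):
    """similarity(V_i, V_j) = w1*L_similarity + w2*F_similarity."""
    return w1 * iou(box_i, box_j) + w2 * feature_similarity(feat_i, feat_j)

# Identical box and identical feature vector -> similarity 1.0
print(similarity((0, 0, 2, 2), (0, 0, 2, 2), (1.0, 0.0), (1.0, 0.0)))  # 1.0
```

The weights w1 = w2 = 0.5 here are arbitrary; as the text notes, they can be set according to the actual situation.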
It should be noted that the similarity between two initial tracking objects includes, but is not limited to, the feature similarity and the spatial-position similarity.
It should also be noted that in this step, for each initial tracking object in every frame image, its similarity with the initial tracking objects in the frame images of adjacent moments is calculated separately.
As a specific example, the image at moment t contains one initial tracking object A, and the image at moment t+1 contains two initial tracking objects A and B; then the similarity values of A at moment t with A and with B at moment t+1 need to be calculated separately.
As another specific example, the image at moment t contains two initial tracking objects A and B, and the image at moment t+1 also contains two initial tracking objects A and B; then the similarity values of A at moment t with A and with B at moment t+1, and of B at moment t with A and with B at moment t+1, all need to be calculated separately.
Step S505: according to the similarities of the initial tracking objects of any two adjacent moments, analyze to obtain the same tracking object corresponding to the initial tracking object at the third moment and the initial tracking object at the fourth moment, the third moment being adjacent to the fourth moment;
Through the calculation in step S504, the similarity value between the initial tracking object at each moment and at least one initial tracking object of the adjacent moment has been obtained.
Then, for the similarities between the initial tracking objects of two adjacent frames, taking one initial tracking object in one of the frames as the reference, the initial tracking object with the largest similarity value is selected as the same tracking object.
As a specific example, the image at moment t contains one initial tracking object A, and the image at moment t+1 contains two initial tracking objects A and B; if the similarity value of A at moment t and A at moment t+1 is 0.9, and the similarity value of A at moment t and B at moment t+1 is 0.05, then A at moment t and A at moment t+1 are the same tracking object.
As another specific example, the image at moment t contains one initial tracking object A, and the image at moment t+1 contains two initial tracking objects A and B; if the similarity value of A at moment t and A at moment t+1 is 0.3, and the similarity value of A at moment t and B at moment t+1 is 0.8, then A at moment t and B at moment t+1 are the same tracking object.
Step S506: arrange the initial tracking objects corresponding to the same tracking object in tracking-moment order, to obtain the tracking objects in the stream of video frames.
That is, the initial tracking objects determined in step S505 to correspond to the same tracking object are arranged in the chronological order of the tracking moments, and the resulting sequence is exactly one tracking object in the stream of video frames.
In a specific implementation, to reduce the data-processing load, a stream of video frames composed of only a few frame images can be processed, so as to identify and determine the corresponding tracking objects.
For example, a stream of video frames composed of 5 frame images is analyzed to determine the tracking objects. Of course, this application does not limit the number of frame images contained in the stream of video frames; it can be set according to the actual situation in a specific implementation.
In a specific implementation, determining the tracking objects in the stream of video frames can be realized by building a probabilistic graph model. The specific process is as follows: arrange the initial tracking objects in tracking-moment order, each initial tracking object of each tracking object corresponding to one vertex of the probabilistic graph model; the similarity between any initial tracking object and an initial tracking object of the adjacent tracking moment serves as the edge between the two corresponding vertices.
Figure 6 is a schematic diagram of an initial tracking result, which contains three trackers: trackers 1, 2 and 3, each corresponding to one initial tracking object. Tracker 1 has one tracking result in each of the 5 frame images t, t+1, t+2, t+3 and t+4, indicated with solid black dots; tracker 2 has one tracking result in each of the 2 frame images t+1 and t+2, indicated with solid-line hollow dots; tracker 3 has one tracking result in the 1 frame image t+4, indicated with a dotted hollow dot.
According to this initial tracking result, a probabilistic graph model is built: with the initial tracking objects as vertices, all initial tracking objects are arranged in the chronological order of the tracking moments, and the initial tracking objects of adjacent tracking moments are connected by edges; each edge, indicated by an arrow, represents the similarity value of the two initial tracking objects it connects.
Specifically, the probabilistic graph model is expressed as G = <V, E>, where G denotes the graph model, V denotes the vertices and E denotes the edges.
The probabilistic graph model can be built while the stream of video frames is being obtained; a vertex can come from an initial tracking object captured from the current frame. Each vertex contains information in four dimensions: the moment, the spatial position, the feature and the recognition result (state), i.e., V_i = {T_i, L_i, F_i, S_i}.
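The vertex structure V_i = {T_i, L_i, F_i, S_i} and the adjacent-moment edges, including the constraint that same-moment vertices share no edge, might be sketched like this (an illustrative data layout; the field names and the toy similarity function are assumptions):

```python
from dataclasses import dataclass

@dataclass
class Vertex:
    # The four dimensions of V_i = {T_i, L_i, F_i, S_i}:
    t: int            # moment T_i
    loc: tuple        # spatial position L_i (e.g. a bounding box)
    feat: tuple       # feature F_i
    state: int = -1   # recognition result S_i; -1 = undecided

def build_edges(vertices, sim_fn):
    """Connect vertices of adjacent tracking moments with edges whose
    weight is the similarity of the two initial tracking objects.
    Same-moment vertices get no edge: their similarity is constrained
    to 0, since one frame cannot contain the same face twice."""
    edges = {}
    for i, u in enumerate(vertices):
        for j, v in enumerate(vertices):
            if v.t == u.t + 1:
                edges[(i, j)] = sim_fn(u, v)
    return edges

verts = [Vertex(0, (0, 0, 2, 2), (1.0,)),
         Vertex(1, (0, 0, 2, 2), (1.0,)),
         Vertex(1, (5, 5, 7, 7), (0.0,))]
edges = build_edges(verts, lambda u, v: 1.0 if u.feat == v.feat else 0.1)
# edges: {(0, 1): 1.0, (0, 2): 0.1}; the two same-moment
# vertices 1 and 2 are not connected
```

The vertex states would then be resolved by the graph optimization described below.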
Figure 7 is a schematic diagram of the probabilistic graph model, containing 8 vertices and the edges connecting them; the 8 vertices are arranged in tracking-moment order, laid out horizontally from t to t+4.
Specifically, the similarity values of the initial tracking objects of any two adjacent moments can be calculated with the similarity algorithm described in step S504.
For example, if the similarity value of vertices 1 and 2 is calculated to be 0.8 and the similarity value of vertices 1 and 3 to be 0.2, vertices 1 and 2 can be determined to correspond to the same tracking object.
Of course, this embodiment expresses the similarity value as a probability between 0 and 1; this application does not limit the representation of the similarity in a specific implementation, and other numerical representations can also be used.
It should be noted that the similarity between vertices includes, but is not limited to, the feature similarity and the spatial-position similarity.
It should also be noted that, under normal circumstances, two identical faces do not appear in one frame image; in other words, a given person should have only one face.
Therefore, the probabilistic graph model has the following constraints:
the similarity between two vertices of the same moment is 0, i.e., the two are connected by no edge;
two vertices of the same moment always have different states/recognition results.
Figure 8 is a schematic diagram of a tracking result, containing 8 vertices and the edges connecting them; the 8 vertices are arranged in tracking-moment order, laid out horizontally from t to t+4. The vertices 1, 2 and 4 connected by arrows correspond to tracker 1; the vertices 3, 5, 6 and 7 connected by arrows correspond to tracker 2; vertex 8 corresponds to tracker 3. Trackers 1 and 3 correspond to user A, and tracker 2 corresponds to user B.
It should be noted that in the scheme shown in Figures 6-8, the tracking result can be understood as follows: user A starts to be tracked at moment t, and user B starts to be tracked at moment t+1; at moment t+3 there is only one tracking object in the image, because user B occludes user A; at moment t+4 user B no longer occludes user A, and there are two tracking objects in the image.
In a specific implementation, graph optimization can also be performed on the probabilistic graph model: through optimization iterations, the state of each vertex in the graph model is continuously updated, and the vertex states of the converged, optimized graph model are taken as the final recognition result.
Specifically, a belief propagation (BP) algorithm can be used to make the vertices of the probabilistic graph model converge.
To sum up, in the recognition and tracking method provided in this embodiment, identifying the stream of video frames according to the second category information to obtain at least one tracking object comprises: analyzing the images in the stream of video frames in sequence to obtain at least two initial tracking objects contained therein and their related information; calculating, according to the initial tracking objects and their related information, the similarity between the initial tracking objects of any two adjacent moments; analyzing, according to those similarities, the same tracking object corresponding to the initial tracking object at the third moment and the initial tracking object at the fourth moment, the third moment being adjacent to the fourth moment; and arranging the initial tracking objects corresponding to the same tracking object in tracking-moment order to obtain the tracking objects in the stream of video frames. In this scheme, by calculating the similarity of the initial tracking objects of any two adjacent moments, the initial tracking objects belonging to the same tracking object in the stream of video frames are determined, thereby determining the tracking objects in the stream of video frames.
Corresponding to the information-processing method embodiments provided by the present application, the present application also provides embodiments of an electronic device applying the information-processing method.
Figure 9 is a structural schematic diagram of embodiment 1 of an electronic device provided by the present application; the electronic device includes the following structures: a body 901 and a processor 902.
The processor 902 is arranged in the body 901 and is configured to: obtain a stream of video frames, the stream of video frames including at least two frame images; analyze, according to first category information of the stream of video frames, that any frame image contains at least one tracking object; and, based on the first category information in the stream of video frames meeting a preset condition, identify the stream of video frames according to second category information to obtain at least one tracking object.
In a specific implementation, the processor can be any structure in the electronic device with data-processing capability, such as a CPU (central processing unit).
Figure 10 is another structural schematic diagram of embodiment 1 of the electronic device provided by the present application; the electronic device includes the following structures: a body 1001, a processor 1002 and a camera 1003.
The camera 1003 is configured to capture video to obtain the stream of video frames.
Figure 11 is yet another structural schematic diagram of embodiment 1 of the electronic device provided by the present application; the electronic device includes the following structures: a body 1101, a processor 1102 and a display screen 1103.
The display screen 1103 is configured to display the image of the tracking object.
In a specific implementation, only the image of the tracking object may be shown on the display screen, or the image of the tracking object may be marked in the stream of video frames, so as to prompt the tracking object.
Preferably, judging whether the first category information in the stream of video frames meets the preset condition comprises:
obtaining, based on the analysis, that any frame image contains at least two tracking objects, and obtaining location information of the at least two tracking objects;
calculating whether the relative position of any two tracking objects satisfies a preset distance, to obtain a first calculation result;
if the first calculation result indicates that the relative position of the first tracking object and the second tracking object does not exceed the preset distance, the first category information in the stream of video frames meets the preset condition;
if the first calculation result indicates that the relative position of the first tracking object and the second tracking object exceeds the preset distance, the first category information in the stream of video frames does not meet the preset condition.
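The preset-distance judgement above might be sketched as follows (illustrative; taking the relative position as the Euclidean distance between object positions is an assumption — the patent does not fix the metric):

```python
import math

def first_category_condition(pos_a, pos_b, preset_distance=50.0):
    """The preset condition is met when the relative position of the
    two tracking objects (here: Euclidean distance between their
    positions) does not exceed the preset distance."""
    return math.dist(pos_a, pos_b) <= preset_distance

print(first_category_condition((0, 0), (30, 40)))   # distance 50  -> True
print(first_category_condition((0, 0), (60, 80)))   # distance 100 -> False
```

The threshold of 50.0 is a placeholder; a real system would tune it to the frame resolution.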
Preferably, judging whether the first category information in the stream of video frames meets the preset condition comprises:
analyzing the stream of video frames according to a preset first analysis rule to obtain an analysis result;
if the analysis result indicates that the image at the first moment contains the first tracking object and the image at the second moment contains the first tracking object and the second tracking object, the first category information in the stream of video frames meets the preset condition;
if the analysis result indicates that the image at the first moment and the image at the second moment both contain the first tracking object and do not contain the second tracking object, the first category information in the stream of video frames does not meet the preset condition, the first moment being earlier than the second moment.
Preferably, identifying the stream of video frames according to the second category information to obtain at least two tracking objects comprises:
performing, according to a preset analysis rule, feature analysis on at least two frame images in the stream of video frames to obtain the first tracking object and/or the second tracking object.
Preferably, identifying the stream of video frames according to the second category information to obtain at least one tracking object comprises:
analyzing the images in the stream of video frames in sequence to obtain at least two initial tracking objects contained therein and their related information;
calculating, according to the initial tracking objects and their related information, the similarity between the initial tracking objects of any two adjacent moments;
analyzing, according to the similarities of the initial tracking objects of any two adjacent moments, the same tracking object corresponding to the initial tracking object at the third moment and the initial tracking object at the fourth moment, the third moment being adjacent to the fourth moment;
arranging the initial tracking objects corresponding to the same tracking object in tracking-moment order, to obtain the tracking objects in the stream of video frames.
Preferably, the related information includes the moment, the spatial position and the feature information, and calculating, according to the initial tracking objects and their related information, the similarity between the initial tracking objects of any two adjacent moments comprises:
calculating, according to a preset similarity algorithm and the spatial position and feature information of any initial tracking object, the similarity between that initial tracking object and the initial tracking objects of the adjacent tracking moment;
specifically, calculating the similarity with the following formula:
similarity(V_i, V_j) = w1 * L_similarity(L_i, L_j) + w2 * F_similarity(F_i, F_j)
where i and j denote different tracking moments, similarity(V_i, V_j) denotes the similarity between the two initial tracking objects at moments i and j, L denotes the spatial position, L_similarity(L_i, L_j) denotes the spatial-position similarity between the two initial tracking objects at moments i and j, F denotes the feature information, F_similarity(F_i, F_j) denotes the feature similarity between the two initial tracking objects at moments i and j, and w1 and w2 are weights.
To sum up, the electronic device provided in this embodiment first analyzes the stream of video frames according to the first category information to obtain a tracking object and, when the first category information meets the preset condition, identifies the stream of video frames according to the second category information to obtain the tracking objects. In this scheme, analysis and identification initially rely on only one category of information, and the other category of information is used only when the first cannot complete the identification, which improves the accuracy of recognition and tracking.
Figure 12 is a structural schematic diagram of embodiment 2 of an electronic device provided by the present application; the electronic device includes the following structures: an obtaining module 1201, an analysis module 1202 and an identification module 1203.
The obtaining module 1201 is configured to obtain a stream of video frames, the stream of video frames including at least two frame images;
the analysis module 1202 is configured to analyze, according to first category information of the stream of video frames, that any frame image contains at least one tracking object;
the identification module 1203 is configured to identify, based on the first category information in the stream of video frames meeting a preset condition, the stream of video frames according to second category information, to obtain at least one tracking object.
To sum up, the electronic device provided in this embodiment first analyzes the stream of video frames according to the first category information to obtain a tracking object and, when the first category information meets the preset condition, identifies the stream of video frames according to the second category information to obtain the tracking objects. In this scheme, analysis and identification initially rely on only one category of information, and the other category of information is used only when the first cannot complete the identification, which improves the accuracy of recognition and tracking.
The embodiments in this specification are described in a progressive manner; each embodiment focuses on its differences from the other embodiments, and the same or similar parts of the embodiments may be referred to one another. The devices provided in the embodiments correspond to the methods provided in the embodiments, so their description is relatively brief; for the relevant points, refer to the description of the method parts.
The above description of the provided embodiments enables those skilled in the art to implement or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the general principles defined herein can be realized in other embodiments without departing from the spirit or scope of the present invention. Therefore, the present invention is not intended to be limited to the embodiments shown herein, but is to be accorded the widest scope consistent with the principles and novel features provided herein.
Claims (10)
1. A recognition and tracking method, comprising:
obtaining a stream of video frames, the stream of video frames including at least two frame images;
analyzing, according to first category information of the stream of video frames, that any frame image contains at least one tracking object;
based on the first category information in the stream of video frames meeting a preset condition, identifying the stream of video frames according to second category information, to obtain at least one tracking object.
2. The method according to claim 1, wherein judging whether the first category information in the stream of video frames meets the preset condition comprises:
obtaining, based on the analysis, that any frame image contains at least two tracking objects, and obtaining location information of the at least two tracking objects;
calculating whether the relative position of any two tracking objects satisfies a preset distance, to obtain a first calculation result;
if the first calculation result indicates that the relative position of the first tracking object and the second tracking object does not exceed the preset distance, determining that the first category information in the stream of video frames meets the preset condition;
if the first calculation result indicates that the relative position of the first tracking object and the second tracking object exceeds the preset distance, determining that the first category information in the stream of video frames does not meet the preset condition.
3. The method according to claim 1, wherein judging whether the first category information in the stream of video frames meets the preset condition comprises:
analyzing the stream of video frames according to a preset first analysis rule to obtain an analysis result;
if the analysis result indicates that the image at a first moment contains a first tracking object and the image at a second moment contains the first tracking object and a second tracking object, determining that the first category information in the stream of video frames meets the preset condition;
if the analysis result indicates that the image at the first moment and the image at the second moment both contain the first tracking object and do not contain the second tracking object, determining that the first category information in the stream of video frames does not meet the preset condition, the first moment being earlier than the second moment.
4. The method according to any one of claims 1-3, wherein identifying the stream of video frames according to the second category information to obtain at least two tracking objects comprises:
performing, according to a preset analysis rule, feature analysis on at least two frame images in the stream of video frames to obtain a first tracking object and/or a second tracking object.
5. The method according to any one of claims 1-3, wherein identifying the stream of video frames according to the second category information to obtain at least one tracking object comprises:
analyzing the images in the stream of video frames in sequence to obtain at least two initial tracking objects contained therein and their related information;
calculating, according to the initial tracking objects and their related information, the similarity between the initial tracking objects of any two adjacent moments;
analyzing, according to the similarities of the initial tracking objects of any two adjacent moments, the same tracking object corresponding to the initial tracking object at a third moment and the initial tracking object at a fourth moment, the third moment being adjacent to the fourth moment;
arranging the initial tracking objects corresponding to the same tracking object in tracking-moment order, to obtain the tracking objects in the stream of video frames.
6. The method according to claim 5, wherein the related information includes the moment, the spatial position and the feature information, and calculating, according to the initial tracking objects and their related information, the similarity between the initial tracking objects of any two adjacent moments comprises:
calculating, according to a preset similarity algorithm and the spatial position and feature information of any initial tracking object, the similarity between that initial tracking object and the initial tracking objects of the adjacent tracking moment;
specifically, calculating the similarity with the following formula:
similarity(V_i, V_j) = w1 * L_similarity(L_i, L_j) + w2 * F_similarity(F_i, F_j)
where i and j denote different tracking moments, similarity(V_i, V_j) denotes the similarity between the two initial tracking objects at moments i and j, L denotes the spatial position, L_similarity(L_i, L_j) denotes the spatial-position similarity between the two initial tracking objects at moments i and j, F denotes the feature information, F_similarity(F_i, F_j) denotes the feature similarity between the two initial tracking objects at moments i and j, and w1 and w2 are weights.
7. An electronic device, comprising:
a body; and
a processor configured to: obtain a stream of video frames, the stream of video frames including at least two frame images; analyze, according to first category information of the stream of video frames, that any frame image contains at least one tracking object; and, based on the first category information in the stream of video frames meeting a preset condition, recognize the stream of video frames according to second category information to obtain at least one tracking object.
8. The electronic device according to claim 7, further comprising:
a camera configured to capture video to obtain the stream of video frames.
9. The electronic device according to claim 7, further comprising:
a display screen configured to display an image of the tracking object.
10. An electronic device, comprising:
an obtaining module configured to obtain a stream of video frames, the stream of video frames including at least two frame images;
an analysis module configured to analyze, according to first category information of the stream of video frames, that any frame image contains at least one tracking object; and
a recognition module configured to, based on the first category information in the stream of video frames meeting a preset condition, recognize the stream of video frames according to second category information to obtain at least one tracking object.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811331011.2A CN109492584A (en) | 2018-11-09 | 2018-11-09 | A kind of recognition and tracking method and electronic equipment |
Publications (1)
Publication Number | Publication Date |
---|---|
CN109492584A true CN109492584A (en) | 2019-03-19 |
Family
ID=65694203
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811331011.2A Pending CN109492584A (en) | 2018-11-09 | 2018-11-09 | A kind of recognition and tracking method and electronic equipment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109492584A (en) |
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title
---|---|---|---|---|
CN112307821A (en) * | 2019-07-29 | 2021-02-02 | 顺丰科技有限公司 | Video stream processing method, device, equipment and storage medium |
CN112437279A (en) * | 2020-11-23 | 2021-03-02 | 方战领 | Video analysis method and device |
CN112926371A (en) * | 2019-12-06 | 2021-06-08 | 中国移动通信集团设计院有限公司 | Road surveying method and system |
CN112926371B (en) * | 2019-12-06 | 2023-11-03 | 中国移动通信集团设计院有限公司 | Road survey method and system |
CN112989934A (en) * | 2021-02-05 | 2021-06-18 | 方战领 | Video analysis method, device and system |
CN112989934B (en) * | 2021-02-05 | 2024-05-24 | 方战领 | Video analysis method, device and system |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102147859A (en) * | 2011-04-06 | 2011-08-10 | 浙江浙大华是科技有限公司 | Ship monitoring method |
CN105678288A (en) * | 2016-03-04 | 2016-06-15 | 北京邮电大学 | Target tracking method and device |
US9443320B1 (en) * | 2015-05-18 | 2016-09-13 | Xerox Corporation | Multi-object tracking with generic object proposals |
CN106875428A (en) * | 2017-01-19 | 2017-06-20 | 博康智能信息技术有限公司 | A kind of multi-object tracking method and device |
CN108230353A (en) * | 2017-03-03 | 2018-06-29 | 北京市商汤科技开发有限公司 | Method for tracking target, system and electronic equipment |
2018-11-09: Application filed in China (CN201811331011.2A, published as CN109492584A); status: Pending
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Ahmed et al. | Vision based hand gesture recognition using dynamic time warping for Indian sign language | |
CN106897658B (en) | Method and device for identifying human face living body | |
CN110688929B (en) | Human skeleton joint point positioning method and device | |
Lu et al. | Blob analysis of the head and hands: A method for deception detection | |
CN107316029B (en) | A kind of living body verification method and equipment | |
CN105740781B (en) | Three-dimensional human face living body detection method and device | |
CN109902659A (en) | Method and apparatus for handling human body image | |
CN106557726A (en) | A kind of band is mourned in silence the system for face identity authentication and its method of formula In vivo detection | |
CN109492584A (en) | A kind of recognition and tracking method and electronic equipment | |
CN108596041A (en) | A kind of human face in-vivo detection method based on video | |
CN109872407B (en) | Face recognition method, device and equipment, and card punching method, device and system | |
WO2017092573A1 (en) | In-vivo detection method, apparatus and system based on eyeball tracking | |
CN108491823A (en) | Method and apparatus for generating eye recognition model | |
CN109711309A (en) | A kind of method whether automatic identification portrait picture closes one's eyes | |
CN109740567A (en) | Key point location model training method, localization method, device and equipment | |
WO2024060978A1 (en) | Key point detection model training method and apparatus and virtual character driving method and apparatus | |
CN113378804A (en) | Self-service sampling detection method and device, terminal equipment and storage medium | |
AU2021203869A1 (en) | Methods, devices, electronic apparatuses and storage media of image processing | |
CN112633217A (en) | Human face recognition living body detection method for calculating sight direction based on three-dimensional eyeball model | |
CN112149553A (en) | Examination cheating behavior identification method | |
CN115620398A (en) | Target action detection method and device | |
CN105407069B (en) | Living body authentication method, apparatus, client device and server | |
Zhang et al. | Facial component-landmark detection with weakly-supervised lr-cnn | |
US20220300774A1 (en) | Methods, apparatuses, devices and storage media for detecting correlated objects involved in image | |
JP6377566B2 (en) | Line-of-sight measurement device, line-of-sight measurement method, and program |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| RJ01 | Rejection of invention patent application after publication | Application publication date: 20190319 |