CN103533237B - Method for extracting video key frames from a video - Google Patents

Method for extracting video key frames from a video

Info

Publication number
CN103533237B
CN103533237B (application CN201310456215.XA)
Authority
CN
China
Prior art keywords
video
frame
information
shooting
key frame
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201310456215.XA
Other languages
Chinese (zh)
Other versions
CN103533237A (en)
Inventor
刘华平
刘玉龙
孙富春
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tsinghua University
Original Assignee
Tsinghua University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tsinghua University filed Critical Tsinghua University
Priority to CN201310456215.XA priority Critical patent/CN103533237B/en
Publication of CN103533237A publication Critical patent/CN103533237A/en
Application granted granted Critical
Publication of CN103533237B publication Critical patent/CN103533237B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Landscapes

  • Television Signal Processing For Recording (AREA)

Abstract

The present invention relates to a method for extracting key frames from a video, and belongs to the technical field of image processing. In the method, an operator shoots a scene of interest with a capture device. During shooting, the device synchronously records the video frames together with acceleration, orientation, and zoom-scale information. After shooting, a weight is computed for each video frame directly from the acceleration, orientation, and zoom-scale information, and the desired key frames are then extracted according to these weights and the desired number of key frames. The proposed method can extract key frames from a video accurately with a comparatively small amount of computation.

Description

Method for extracting video key frames from a video
Technical field
The present invention relates to a method for extracting key frames from a video, and belongs to the technical field of image processing.
Background technology
With the recent increase in the number of hand-held capture devices (such as mobile phones, digital cameras, and hand-held camcorders), the number of videos shot by individual users with portable equipment has also grown rapidly. Video-capture applications on smartphones, such as Instagram, have further stimulated the spread of these videos: within 24 hours of going live, Instagram saw 5 million videos uploaded.
These applications produce an enormous number of videos every day. Unlike text, video cannot be retrieved directly, so finding useful information in a large collection of videos is extremely time-consuming. The approach commonly taken at present relies on manually inspecting and annotating video content, which is clearly inefficient.
Extracting key frames therefore becomes significant work. Existing key-frame extraction algorithms fall mainly into the following categories: 1) extracting key frames at fixed time intervals; 2) computing the color (or gray-level) difference between adjacent frames to decide whether a frame is a key frame; 3) methods based on motion analysis.
Extraction at fixed time intervals is the most intuitive and computationally simplest approach, but its drawback is equally obvious: key frames are not necessarily evenly distributed in time. This kind of method therefore suits short videos with simple content.
Computing the difference between adjacent frames is more reasonable, but choosing the threshold is difficult: too small a threshold selects too many key frames, while too large a threshold may miss some. This method is also comparatively expensive to compute.
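The adjacent-frame-difference approach described above can be sketched in a few lines. This is a minimal gray-level version: the flat pixel-list frame representation and the mean-absolute-difference measure are illustrative simplifications, not the patent's formulation.

```python
def color_diff(frame_a, frame_b):
    """Mean absolute per-pixel gray-level difference between two frames."""
    return sum(abs(a - b) for a, b in zip(frame_a, frame_b)) / len(frame_a)

def keyframes_by_diff(frames, threshold):
    """Mark frame k as a key frame when it differs enough from frame k-1."""
    keys = [0]  # first frame is always kept
    for k in range(1, len(frames)):
        if color_diff(frames[k], frames[k - 1]) > threshold:
            keys.append(k)
    return keys
```

The threshold sensitivity discussed above is visible directly: lowering `threshold` admits more frames, raising it drops transitions.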
Methods based on motion analysis mainly compute the amount of motion within a shot by optical-flow analysis and choose key frames where the motion is minimal; their drawback is likewise the large amount of computation.
Summary of the invention
The object of the present invention is to propose a method for extracting key frames from a video that overcomes the shortcomings of existing key-frame extraction methods. It combines the content shot by the camera with information recorded at each moment, such as the device's acceleration, the camera's zoom behavior, and the orientation of the capture device, and computes the key frames according to the intention of the person shooting the video.
The method for extracting key frames from a video proposed by the present invention comprises the following steps:
(1) Shoot a scene with a video capture device to obtain a video; let the video contain T video frames in total, and record the zoom-scale information of the capture device's camera at each shooting moment;
(2) At the same frequency as the video is shot, record the linear acceleration of the capture device along the x, y, and z axes of a rectangular coordinate system;
(3) At the same frequency as the video is shot, record with an orientation sensor the orientation of the capture device in the above rectangular coordinate system;
(4) Extract key frames from the video according to the recorded orientation, linear acceleration, and zoom-scale information, comprising the following steps:
(4-1) Extract the characteristic information of the device at the moment frame k of the video was shot, including: the orientation of the capture device at that moment, o_k = [o_{x,k}, o_{y,k}, o_{z,k}]^T, where o_{x,k} is the roll angle of the capture device, i.e. the angle between the device's short edge and the horizontal plane; o_{y,k} is the pitch angle, i.e. the angle between the device's long edge and the horizontal plane; and o_{z,k} is the yaw angle, i.e. the angle between the direction the top of the device points and true north; the acceleration of the capture device at that moment, a_k = [a_{x,k}, a_{y,k}, a_{z,k}]^T, where a_{x,k}, a_{y,k}, a_{z,k} are the device's accelerations along the x, y, and z axes of the rectangular coordinate system; and the scale information s_k, the zoom scale of the camera when frame k was shot;
(4-2) Using the discrete cosine transform, extract feature information from the video obtained above to get the frame feature f_k of frame k;
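Step (4-2) does not specify how the DCT coefficients are assembled into f_k. A minimal sketch under the assumption that f_k keeps the first few (low-frequency) coefficients of a naive 1-D DCT-II over the frame's gray levels; `frame_feature` and `n_coeffs` are illustrative names:

```python
import math

def dct_1d(x):
    """Type-II discrete cosine transform of a 1-D signal (naive O(N^2))."""
    n = len(x)
    return [sum(x[i] * math.cos(math.pi * k * (2 * i + 1) / (2 * n))
                for i in range(n))
            for k in range(n)]

def frame_feature(gray_pixels, n_coeffs=4):
    """Low-frequency DCT coefficients as a compact frame descriptor f_k."""
    return dct_1d(gray_pixels)[:n_coeffs]
```

A production implementation would use a fast 2-D DCT (e.g. per 8x8 block, as in JPEG), but the low-frequency-truncation idea is the same.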
(4-3) Repeat steps (4-1) and (4-2) to obtain the capture-device orientation, capture-device acceleration, camera zoom scale, and frame feature of every frame in the video;
(4-5) Compute the acceleration weight ω_ak of each frame in the video: ω_ak = exp(-λ_1 ||a_k||_2), where λ_1 is the acceleration regulation parameter and ||a_k||_2 is the 2-norm of the acceleration vector a_k; λ_1 can be chosen according to the order of magnitude of the acceleration, in the range 0.1-1;
(4-6) Compute the scale weight ω_sk of each frame in the video: ω_sk = exp(λ_2 s_k), where λ_2 is the scale regulation parameter, in the range 0.5-1;
(4-7) Compute the total weight ω_k of each frame in the video: ω_k = ω_ak ω_sk;
(4-8) Using the K-means algorithm, cluster the capture-device orientations of all frame-shooting moments in the video to obtain C cluster centres; C is a parameter chosen according to information such as the video length, in the range 1-T, where T is the number of frames in the video; each frame is assigned to the class whose cluster centre is closest to the orientation of the capture device for that frame;
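Step (4-8) is plain K-means over the per-frame orientation vectors o_k. A minimal sketch; initializing the centres from the first C frames is an illustrative choice the patent does not specify:

```python
import math

def kmeans_orientations(orients, C, iters=20):
    """Cluster per-frame orientation vectors o_k into C centres (step 4-8)."""
    centres = [list(o) for o in orients[:C]]  # naive init: first C frames
    assign = [0] * len(orients)
    for _ in range(iters):
        # assign each frame to the nearest centre
        assign = [min(range(C), key=lambda j: math.dist(o, centres[j]))
                  for o in orients]
        # recompute each centre as the mean of its members
        for j in range(C):
            members = [o for o, a in zip(orients, assign) if a == j]
            if members:
                centres[j] = [sum(col) / len(members) for col in zip(*members)]
    return centres, assign
```

The resulting centres serve as the initial v_j^(0) for the weighted clustering in step (4-10).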
(4-9) Establish the following optimization objective function:
J = \sum_{k=1}^{T} \sum_{j=1}^{C} \omega_k \mu_{kj}^2 \left\| o_k - v_j^{(p)} \right\|_2^2,
subject to the constraints \sum_{j=1}^{C} \mu_{kj} = 1 and 0 \le \mu_{kj} \le 1,
where k is the index of the video frame, j ∈ [1, C] is the index of the cluster centre, μ_kj is the parameter to be solved, v_j is the cluster centre, and p is the current iteration number;
(4-10) For initialization, set p = 0 and take the j-th cluster centre obtained in step (4-8) as the initial value of v_j^{(0)};
(4-11) Compute μ_kj;
(4-12) Update the values of μ_kj according to the above result by normalizing them:
\mu_{kj} = \mu_{kj} / \sum_{j=1}^{C} \mu_{kj}
(4-13) Using the μ_kj computed in step (4-12), compute the updated cluster centres v_j^{(p+1)};
(4-14) Set an iteration termination threshold ε. If the change in the cluster centres exceeds ε, set p = p + 1 and return to step (4-11); if it does not exceed ε, proceed to step (4-15). The range of ε is 0.001-0.01;
(4-15) Obtain an initial key-frame set K = {t_1, t_2, ..., t_C}, where j ∈ [1, C];
(4-16) Compute the similarity of the frame features of any two frames t_i, t_j in the initial key-frame set K, where i, j ∈ [1, C];
(4-17) Set a similarity threshold δ. Traverse all pairs of frames in the initial key-frame set K from step (4-16), compute the similarity of the frame features of each pair, and compare it with δ: if the similarity of t_i and t_j exceeds δ and ω_{t_i} ≥ ω_{t_j}, delete t_j from K; if the similarity exceeds δ and ω_{t_i} < ω_{t_j}, delete t_i from K; if the similarity does not exceed δ, retain both t_i and t_j. Repeat this step; the resulting set K is the set of video key frames. The range of δ is 0.2-0.3.
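Step (4-17) can be sketched as a pairwise pruning loop. The cosine similarity and the dict-based feature/weight lookup below are assumptions, since the patent's similarity measure is rendered as an image and is not recoverable from this text:

```python
import math

def prune_similar(keyframes, features, weights, delta=0.25):
    """Drop the lighter-weighted frame of any pair whose feature similarity
    exceeds delta (sketch of step 4-17)."""
    def cosine(u, v):
        dot = sum(a * b for a, b in zip(u, v))
        nu = math.sqrt(sum(a * a for a in u))
        nv = math.sqrt(sum(b * b for b in v))
        return dot / (nu * nv) if nu and nv else 0.0

    kept = list(keyframes)
    changed = True
    while changed:  # repeat until no remaining pair exceeds the threshold
        changed = False
        for i in range(len(kept)):
            for j in range(i + 1, len(kept)):
                ti, tj = kept[i], kept[j]
                if cosine(features[ti], features[tj]) > delta:
                    kept.remove(tj if weights[ti] >= weights[tj] else ti)
                    changed = True
                    break
            if changed:
                break
    return kept
```

Restarting the scan after each deletion keeps the result independent of pair ordering within a pass.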
An advantage of the proposed method is that, while the user shoots the video, it records the motion, orientation, and focal-length changes of the device, and infers the user's intention at shooting time from this information. For example, when the device's acceleration is small and its orientation holds a steady value for some time, the user can be assumed to be deliberately shooting a particular scene, i.e. the scene is important to the user. On this assumption, key frames can be extracted from the video accurately with a comparatively small amount of computation.
Brief description of the drawings
Fig. 1 is a flow chart of the method for extracting key frames from a video proposed by the present invention.
Detailed description of the invention
The method for extracting key frames from a video proposed by the present invention, whose flow chart is shown in Fig. 1, comprises the following steps:
(1) Shoot a scene with a video capture device to obtain a video; let the video contain T video frames in total, and record the zoom-scale information of the capture device's camera at each shooting moment;
(2) At the same frequency as the video is shot, record the linear acceleration of the capture device along the x, y, and z axes of a rectangular coordinate system;
(3) At the same frequency as the video is shot, record with an orientation sensor the orientation of the capture device in the above rectangular coordinate system;
(4) Extract key frames from the video according to the recorded orientation, linear acceleration, and zoom-scale information, comprising the following steps:
(4-1) Extract the characteristic information of the device at the moment frame k of the video was shot, including: the orientation of the capture device at that moment, o_k = [o_{x,k}, o_{y,k}, o_{z,k}]^T, where o_{x,k} is the roll angle of the capture device, i.e. the angle between the device's short edge and the horizontal plane; o_{y,k} is the pitch angle, i.e. the angle between the device's long edge and the horizontal plane; and o_{z,k} is the yaw angle, i.e. the angle between the direction the top of the device points and true north; the acceleration of the capture device at that moment, a_k = [a_{x,k}, a_{y,k}, a_{z,k}]^T, where a_{x,k}, a_{y,k}, a_{z,k} are the device's accelerations along the x, y, and z axes of the rectangular coordinate system; and the scale information s_k, the zoom scale of the camera when frame k was shot;
(4-2) Using the discrete cosine transform, extract feature information from the video obtained above to get the frame feature f_k of frame k;
(4-3) Repeat steps (4-1) and (4-2) to obtain the capture-device orientation, capture-device acceleration, camera zoom scale, and frame feature of every frame in the video;
(4-5) Compute the acceleration weight ω_ak of each frame in the video: ω_ak = exp(-λ_1 ||a_k||_2), where λ_1 is the acceleration regulation parameter and ||a_k||_2 is the 2-norm of the acceleration vector a_k; λ_1 can be chosen according to the order of magnitude of the acceleration, in the range 0.1-1;
(4-6) Compute the scale weight ω_sk of each frame in the video: ω_sk = exp(λ_2 s_k), where λ_2 is the scale regulation parameter, in the range 0.5-1;
(4-7) Compute the total weight ω_k of each frame in the video: ω_k = ω_ak ω_sk;
(4-8) Using the K-means algorithm, cluster the capture-device orientations of all frame-shooting moments in the video to obtain C cluster centres; C is a parameter chosen according to information such as the video length, in the range 1-T, where T is the number of frames in the video; each frame is assigned to the class whose cluster centre is closest to the orientation of the capture device for that frame;
(4-9) Establish the following optimization objective function:
J = \sum_{k=1}^{T} \sum_{j=1}^{C} \omega_k \mu_{kj}^2 \left\| o_k - v_j^{(p)} \right\|_2^2,
subject to the constraints \sum_{j=1}^{C} \mu_{kj} = 1 and 0 \le \mu_{kj} \le 1,
where k is the index of the video frame, j ∈ [1, C] is the index of the cluster centre, μ_kj is the parameter to be solved, v_j is the cluster centre, and p is the current iteration number;
(4-10) For initialization, set p = 0 and take the j-th cluster centre obtained in step (4-8) as the initial value of v_j^{(0)};
(4-11) Compute μ_kj;
(4-12) Update the values of μ_kj according to the above result by normalizing them:
\mu_{kj} = \mu_{kj} / \sum_{j=1}^{C} \mu_{kj}
(4-13) Using the μ_kj computed in step (4-12), compute the updated cluster centres v_j^{(p+1)};
(4-14) Set an iteration termination threshold ε. If the change in the cluster centres exceeds ε, set p = p + 1 and return to step (4-11); if it does not exceed ε, proceed to step (4-15). The range of ε is 0.001-0.01;
(4-15) Obtain an initial key-frame set K = {t_1, t_2, ..., t_C}, where j ∈ [1, C];
(4-16) Compute the similarity of the frame features of any two frames t_i, t_j in the initial key-frame set K, where i, j ∈ [1, C];
(4-17) Set a similarity threshold δ. Traverse all pairs of frames in the initial key-frame set K from step (4-16), compute the similarity of the frame features of each pair, and compare it with δ: if the similarity of t_i and t_j exceeds δ and ω_{t_i} ≥ ω_{t_j}, delete t_j from K; if the similarity exceeds δ and ω_{t_i} < ω_{t_j}, delete t_i from K; if the similarity does not exceed δ, retain both t_i and t_j. Repeat this step; the resulting set K is the set of video key frames. The range of δ is 0.2-0.3.
In the method of the present invention, an operator shoots a scene of interest with a capture device. During shooting, the device synchronously records the video frames together with the acceleration, orientation, and zoom-scale information. After shooting, a weight is computed for each video frame directly from the acceleration, orientation, and zoom-scale information, and the desired key frames are finally extracted according to the weights and the desired number of key frames.

Claims (1)

1. A method for extracting key frames from a video, characterized in that the method comprises the following steps:
(1) Shoot a scene with a video capture device to obtain a video; let the video contain T video frames in total, and record the zoom-scale information of the capture device's camera at each shooting moment;
(2) At the same frequency as the video is shot, record the linear acceleration of the capture device along the x, y, and z axes of a rectangular coordinate system;
(3) At the same frequency as the video is shot, record with an orientation sensor the orientation of the capture device in the above rectangular coordinate system;
(4) Extract key frames from the video according to the recorded orientation, linear acceleration, and zoom-scale information, comprising the following steps:
(4-1) Extract the characteristic information of the device at the moment frame k of the video was shot, including: the orientation of the capture device at that moment, o_k = [o_{x,k}, o_{y,k}, o_{z,k}]^T, where o_{x,k} is the roll angle of the capture device, i.e. the angle between the device's short edge and the horizontal plane; o_{y,k} is the pitch angle, i.e. the angle between the device's long edge and the horizontal plane; and o_{z,k} is the yaw angle, i.e. the angle between the direction the top of the device points and true north; the acceleration of the capture device at that moment, a_k = [a_{x,k}, a_{y,k}, a_{z,k}]^T, where a_{x,k}, a_{y,k}, a_{z,k} are the device's accelerations along the x, y, and z axes of the rectangular coordinate system; and the scale information s_k, the zoom scale of the camera when frame k was shot;
(4-2) Using the discrete cosine transform, extract feature information from the video obtained above to get the frame feature f_k of frame k;
(4-3) Repeat steps (4-1) and (4-2) to obtain the capture-device orientation, capture-device acceleration, camera zoom scale, and frame feature of every frame in the video;
(4-5) Compute the acceleration weight ω_ak of each frame in the video: ω_ak = exp(-λ_1 ||a_k||_2), where λ_1 is the acceleration regulation parameter and ||a_k||_2 is the 2-norm of the acceleration vector a_k; λ_1 can be chosen according to the order of magnitude of the acceleration, in the range 0.1-1;
(4-6) Compute the scale weight ω_sk of each frame in the video: ω_sk = exp(λ_2 s_k), where λ_2 is the scale regulation parameter, in the range 0.5-1;
(4-7) Compute the total weight ω_k of each frame in the video: ω_k = ω_ak ω_sk;
(4-8) Using the K-means algorithm, cluster the capture-device orientations of all frame-shooting moments in the video to obtain C cluster centres; C is a parameter chosen according to information such as the video length, in the range 1-T, where T is the number of frames in the video; each frame is assigned to the class whose cluster centre is closest to the orientation of the capture device for that frame;
(4-9) Establish the following optimization objective function:
J = \sum_{k=1}^{T} \sum_{j=1}^{C} \omega_k \mu_{kj}^2 \left\| o_k - v_j^{(p)} \right\|_2^2,
subject to the constraints \sum_{j=1}^{C} \mu_{kj} = 1 and 0 \le \mu_{kj} \le 1,
where k is the index of the video frame, j ∈ [1, C] is the index of the cluster centre, μ_kj is the parameter to be solved, v_j is the cluster centre, and p is the current iteration number;
(4-10) For initialization, set p = 0 and take the j-th cluster centre obtained in step (4-8) as the initial value of v_j^{(0)};
(4-11) Compute μ_kj;
(4-12) Update the values of μ_kj according to the above result by normalizing them:
\mu_{kj} = \mu_{kj} / \sum_{j=1}^{C} \mu_{kj}
(4-13) Using the μ_kj computed in step (4-12), compute the updated cluster centres v_j^{(p+1)};
(4-14) Set an iteration termination threshold ε. If the change in the cluster centres exceeds ε, set p = p + 1 and return to step (4-11); if it does not exceed ε, proceed to step (4-15). The range of ε is 0.001-0.01;
(4-15) Obtain an initial key-frame set K = {t_1, t_2, ..., t_C}, where j ∈ [1, C];
(4-16) Compute the similarity of the frame features of any two frames t_i, t_j in the initial key-frame set K, where i, j ∈ [1, C];
(4-17) Set a similarity threshold δ. Traverse all pairs of frames in the initial key-frame set K from step (4-16), compute the similarity of the frame features of each pair, and compare it with δ: if the similarity of t_i and t_j exceeds δ and ω_{t_i} ≥ ω_{t_j}, delete t_j from K; if the similarity exceeds δ and ω_{t_i} < ω_{t_j}, delete t_i from K; if the similarity does not exceed δ, retain both t_i and t_j. Repeat this step; the resulting set K is the set of video key frames. The range of δ is 0.2-0.3.
CN201310456215.XA 2013-09-29 2013-09-29 Method for extracting video key frames from a video Active CN103533237B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201310456215.XA CN103533237B (en) 2013-09-29 2013-09-29 Method for extracting video key frames from a video

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201310456215.XA CN103533237B (en) 2013-09-29 2013-09-29 Method for extracting video key frames from a video

Publications (2)

Publication Number Publication Date
CN103533237A CN103533237A (en) 2014-01-22
CN103533237B true CN103533237B (en) 2016-08-17

Family

ID=49934874

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201310456215.XA Active CN103533237B (en) 2013-09-29 2013-09-29 Method for extracting video key frames from a video

Country Status (1)

Country Link
CN (1) CN103533237B (en)

Families Citing this family (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104284240B * 2014-09-17 2018-02-02 小米科技有限责任公司 Video browsing method and device
US9799376B2 (en) 2014-09-17 2017-10-24 Xiaomi Inc. Method and device for video browsing based on keyframe
US9818032B2 (en) * 2015-10-28 2017-11-14 Intel Corporation Automatic video summarization
CN106528586A (en) * 2016-05-13 2017-03-22 上海理工大学 Human behavior video identification method
CN106534949A (en) * 2016-11-25 2017-03-22 济南中维世纪科技有限公司 Method for prolonging video storage time of video monitoring system
CN107197162B (en) * 2017-07-07 2020-11-13 盯盯拍(深圳)技术股份有限公司 Shooting method, shooting device, video storage equipment and shooting terminal
CN108364338B (en) * 2018-02-06 2022-03-15 创新先进技术有限公司 Image data processing method and device and electronic equipment
CN109299329A * 2018-09-11 2019-02-01 京东方科技集团股份有限公司 Method, apparatus, electronic equipment, and customer premises equipment for generating pictures from a video
CN109920518B (en) * 2019-03-08 2021-11-16 腾讯科技(深圳)有限公司 Medical image analysis method, medical image analysis device, computer equipment and storage medium
CN110448870B (en) * 2019-08-16 2021-09-28 深圳特蓝图科技有限公司 Human body posture training method
CN112288838A (en) * 2020-10-27 2021-01-29 北京爱奇艺科技有限公司 Data processing method and device
CN116939197A (en) * 2023-09-15 2023-10-24 海看网络科技(山东)股份有限公司 Live program head broadcasting and replay content consistency monitoring method based on audio and video

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7469010B2 (en) * 2001-01-08 2008-12-23 Canon Kabushiki Kaisha Extracting key frames from a video sequence
CN101398855A (en) * 2008-10-24 2009-04-01 清华大学 Video key frame extracting method and system
CN101807198A (en) * 2010-01-08 2010-08-18 中国科学院软件研究所 Video abstraction generating method based on sketch

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7469010B2 (en) * 2001-01-08 2008-12-23 Canon Kabushiki Kaisha Extracting key frames from a video sequence
CN101398855A (en) * 2008-10-24 2009-04-01 清华大学 Video key frame extracting method and system
CN101807198A (en) * 2010-01-08 2010-08-18 中国科学院软件研究所 Video abstraction generating method based on sketch

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Key Frame Extraction Using Unsupervised Clustering Based On a Statistical Model;YANG Shuping,LIN Xinggang;《TSINGHUA SCIENCE AND TECHNOLOGY》;20050430;第10卷(第2期);169-173 *
Key Frame Extraction for Content-Based Video Retrieval;Lu Weiyan, Xia Dingyuan, Liu Yi;《Microcomputer Information》;20080416;Vol. 23 (No. 33);298-300 *

Also Published As

Publication number Publication date
CN103533237A (en) 2014-01-22

Similar Documents

Publication Publication Date Title
CN103533237B (en) Method for extracting video key frames from a video
CN103530881B (en) Marker-free tracking and registration method for outdoor augmented reality suitable for mobile terminals
US11636610B2 (en) Determining multiple camera positions from multiple videos
CN109238173B (en) Three-dimensional live-action reconstruction system for a coal storage yard and rapid coal-quantity estimation method
CN102833486B (en) Method and device for adjusting the face display ratio in video images in real time
CN104517095B (en) Head segmentation method based on depth images
CN106875436B (en) Method and apparatus for estimating depth from a focus stack based on feature-point density
CN112967341B (en) Indoor visual positioning method, system, equipment and storage medium based on live-action images
CN107133969A (en) Moving-target detection method for mobile platforms based on background back-projection
CN107977656A (en) Pedestrian re-identification method and system
CN102761768A (en) Method and device for realizing three-dimensional imaging
CN103530638A (en) Method for matching pedestrians across multiple cameras
CN105488519A (en) Video classification method based on video scale information
CN104537381B (en) Blurred-image recognition method based on blur-invariant features
CN104063871A (en) Method for segmenting image-sequence scenes on a wearable device
US20130208984A1 (en) Content scene determination device
CN106572193A (en) Method for providing an image acquisition device with the GPS information of its acquisition point
CN110717593B (en) Method and device for neural network training, movement-information measurement, and key frame detection
CN104504162B (en) Video retrieval method based on a robot vision platform
CN107492147A (en) Three-dimensional modeling method for power equipment
CN105138979A (en) Method for detecting the head of a moving human body based on stereo vision
CN113065506A (en) Human body posture recognition method and system
CN104992155A (en) Method and apparatus for acquiring face positions
CN105426871B (en) Similarity-measure calculation method suitable for moving-pedestrian re-identification
JP2008276613A (en) Mobile body determination device, computer program and mobile body determination method

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant