CN102682302A - Human body posture identification method based on multi-characteristic fusion of key frame - Google Patents
Human body posture identification method based on multi-characteristic fusion of key frame
- Publication number
- CN102682302A, CN2012100638935A, CN201210063893A
- Authority
- CN
- China
- Prior art keywords
- key frame
- human body
- frame
- video
- image
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Landscapes
- Image Analysis (AREA)
Abstract
The invention provides a human body posture recognition method based on multi-feature fusion of key frames, comprising the following steps: (1) extracting Hu invariant moment features from the video images, computing the coverage rate of the image sequence, taking a set percentage of frames with the highest coverage rate as candidate key frames, then computing the distortion rate of the candidate key frames and taking the set percentage with the lowest distortion rate as the key frames; (2) extracting the foreground from the key frames to obtain foreground images of the moving human body; (3) extracting the feature information of the key frames, namely the six-star model, the six-star angles and the eccentricity, to obtain a multi-feature fused image feature vector; (4) recognizing the posture using trained one-versus-one classification models, the classification models being posture classifiers based on an SVM (Support Vector Machine). The method has the advantages of simplified computation, good stability and good robustness.
Description
Technical field
The present invention relates to a human body posture recognition method.
Background technology
In recent years, as urbanization has accelerated in China and other emerging economies, city management problems such as traffic and public security brought by growing population mobility have made video surveillance more and more widespread, and the demand for intelligent video surveillance keeps rising. Through intelligent analysis, people hope to extract more information from video for applications in daily life and work, for example security monitoring, smart homes, human-computer interaction, and supplementary training for athletes. The security industry in places such as banks, railways and warehouses needs automatic detection and tracking of moving targets, and alarms for abnormal situations, thereby reducing losses of all kinds. In home security and medical monitoring systems, abnormal conditions of elderly people or patients can be detected in time. In intelligent robotics, it is hoped that robots can analyze the postures, gestures and language of people in video so as to communicate and interact with them. In training such as sports and dance, a system can analyze an athlete's joint motion parameters so as to improve training methods.
Existing human posture recognition usually adopts two kinds of methods: template matching and state-space methods. Template matching converts an image sequence into a group of static shape patterns, and then, during recognition, interprets the motion of the person in the image sequence using pre-stored behavior samples. The advantages of this approach are low computational complexity and simple implementation, but it is sensitive to the time intervals of behaviors and has poor robustness. State-space methods define each static posture as a state in a state-space model, connect the states with certain probabilities, regard each motion sequence as one traversal among these static-posture states, and compute its joint probability. This approach avoids the problem of modeling time intervals, but it requires large training sets and has high computational complexity.
A key frame is the one image or the few images in a video sequence that carry the most information and are the most representative; they should summarize the content of a video segment while being as concise and small in data volume as possible, so extracting key frames for posture recognition markedly improves the efficiency of video analysis. Common key frame extraction techniques fall roughly into four categories. Shot-boundary methods divide the video into independent shots by scene and take the first, last or middle frame of each shot as the key frame; they are simple and fast, but ignore the complexity of the visual content and cannot represent video segments with long content. Content-analysis methods use image features to judge the degree of difference between a video frame and the current frame, taking frames with large differences as key frames; they achieve good results for videos of different lengths and content, but the extracted key frames become very unstable when the camera moves or the video content changes violently. Motion-analysis methods, typified by optical flow, involve a large amount of computation and depend strongly on local motion information. Clustering methods extract cluster centers as key frames through cluster analysis; their advantage is that they reflect the video content well, but the algorithms are complex and their stability is poor.
Summary of the invention
To overcome the complex computation, poor stability and poor robustness of existing human posture recognition methods, the present invention provides a human posture recognition method based on multi-feature fusion of key frames that simplifies computation and has good stability and good robustness.
The technical solution adopted by the present invention to solve this technical problem is as follows:
A human posture recognition method based on multi-feature fusion of key frames, the method comprising the following steps:
(1) extracting Hu invariant moment features from the video images, computing the coverage rate of the image sequence, taking a set percentage of frames with the highest coverage rate as candidate key frames, then computing the distortion rate of the candidate key frames and taking the set percentage with the lowest distortion rate as the key frames;
(2) extracting the foreground from the key frames to obtain foreground images of the moving human body;
(3) extracting the feature information of the key frames, the feature information being the six-star model, the six-star angles and the eccentricity, to obtain a multi-feature fused image feature vector;
(4) recognizing the posture using trained one-versus-one classification models, the classification models being posture classifiers based on SVM.
Further, in step (4), one SVM is designed between every two classes of samples, so N posture classes require N*(N-1)/2 SVMs.
Further, in step (4), the training process of the classification model is as follows:
(4.1) first preprocess the human foreground images: for each behavior video segment, train a codebook-based background model, obtain the foreground image of the moving human body from the difference between the video sequence frames and the background model, and apply morphological image processing to remove image noise;
(4.2) extract the features of the training data covering the 11 postures to form a library of standard samples, and describe the posture features of the moving human body using the multi-posture fusion method, the 11 postures being walking, small jump, big jump, sidling, squatting, stooping, crawling, push-up, sit-up and sitting down;
(4.3) build the SVM-based multi-class classification model by learning from the standard samples;
(4.4) validate the model with test samples; if the accuracy is below expectation, adjust the training samples and return to (4.1), until the accuracy exceeds the expectation, giving the trained classification model.
Further, in step (3), the extraction process of the six-star model is: after the centroid of the human foreground image is extracted, the distances between the obtained silhouette points and the centroid are used; the silhouette contour is divided into left and right halves, and the distances from the topmost and bottommost points of the two halves, and from the leftmost and rightmost points, to the centroid are computed; after the six axes from the centroid to the six points of the six-star model are obtained, the angles between adjacent axes are computed; and the eccentricity of the human silhouette is computed.
In step (1), the process of computing the coverage rate is: first compute the similarity coefficient of every pair of frames, and place those frames of the video whose similarity to the current frame exceeds the mean similarity coefficient between the current frame and the other frames into the related-frame set of the current frame; the coverage rate of the current frame is then the ratio of the number of frames in its related-frame set to the number of all frames in the video; the coverage rate of every frame in the video is computed, and the 30% of frames with the highest coverage rate are taken as candidate key frames.
The process of computing the distortion rate is: first compute the estimated probabilities of the gray values of each image and its mean gray level; then compute the first-, second- and third-order moments of each frame, representing the mean, the variance and the skewness respectively, and represent the image by this three-dimensional feature vector; compute the mean moment of the related-frame set of each candidate key frame and the mean moment of all frames in the video; finally, evaluate the objective function of the mean moment of the candidate key frame's related-frame set against the mean moment of all frames to obtain the distortion rate of the candidate key frame, and take the 50% of candidates with the lowest distortion rate as the final key frames of the video.
The beneficial effects of the present invention are mainly: (1) the algorithm adopts a multi-feature fused posture description operator, which describes the human posture better, strengthens the descriptive power of the operator for postures, and allows a more accurate posture model and recognition result; (2) the concept of key frames is added to the video analysis process, so that the posture analysis of the key frames describes the video as a whole and makes video analysis more efficient.
Description of drawings
Fig. 1 is a flow chart of the human posture recognition method based on multi-feature fusion of key frames.
Fig. 2 is a flow chart of the key frame extraction method based on video content.
Fig. 3 is a schematic diagram of the six-star model feature of the multi-feature model.
Fig. 4 is a schematic diagram of the six-star angle feature of the multi-feature model.
Fig. 5 is a schematic diagram of the eccentricity feature of the multi-feature model.
Fig. 6 is a schematic diagram of the one-versus-one SVM design.
Embodiment
The present invention is further described below with reference to the accompanying drawings.
Referring to Figs. 1 to 6, a human posture recognition method based on multi-feature fusion of key frames comprises the following steps:
(1) extracting Hu invariant moment features from the video images, computing the coverage rate of the image sequence, taking a set percentage of frames with the highest coverage rate as candidate key frames, then computing the distortion rate of the candidate key frames and taking the set percentage with the lowest distortion rate as the key frames;
(2) extracting the foreground from the key frames to obtain foreground images of the moving human body;
(3) extracting the feature information of the key frames, the feature information being the six-star model, the six-star angles and the eccentricity, to obtain a multi-feature fused image feature vector;
(4) recognizing the posture using trained one-versus-one classification models, the classification models being posture classifiers based on SVM.
In this embodiment, the training process of the posture model comprises: multi-feature extraction from standard posture samples, representation of the posture description operator, and learning of the multi-class classifier based on the support vector machine (SVM: Support Vector Machine). The posture recognition process comprises: content-based key frame extraction, multi-feature extraction from the key frames, representation of the posture description operator, and posture recognition. The structure of the method is shown in Fig. 1; content-based key frame extraction, the posture representation based on multi-feature fusion and the SVM-based posture recognition are the key techniques of the method.
Content-based key frame extraction: considering the complexity and stability of the algorithm, the present invention proposes a key frame extraction algorithm based on video content analysis, which selects the key frame sequence containing the most information in the video sequence by computing the coverage rate and the distortion rate of the video frame sequence.
Because video images can suffer a certain amount of geometric distortion during shooting, and geometric distortion greatly affects image recognition, a method with rotation and scale invariance is needed. This method therefore uses Hu invariant moment image features to extract key frames; the flow of the invariant-moment-based key frame extraction algorithm is shown in Fig. 2.
(1) Hu invariant moments
The Hu invariant moments, proposed in 1962, are invariant to translation, rotation and scale, and can express and analyze coordinate transforms of an image in moment space. For a discrete image, the ordinary moments and central moments of several orders are computed. When the image is transformed, the ordinary moments change; the central moments are translation invariant but remain sensitive to scale and rotation. Representing features directly with ordinary or central moments therefore cannot give translation, rotation and scale invariance at the same time. Using normalized central moments, the features are invariant not only to translation but also to scale.
Hu constructed seven invariant moments from the second- and third-order central moments; for continuous images they remain invariant under translation, scaling and rotation. Extracting the Hu invariant moments of the video images yields the feature description operator of each image.
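As an illustration of the normalized central moments underlying the Hu set, the following sketch (NumPy assumed; the function name is ours, not the patent's) computes the first two Hu moments of a binary silhouette and checks their translation invariance:

```python
import numpy as np

def hu_moments_2(img):
    """First two Hu invariant moments of a 2-D intensity/binary array."""
    y, x = np.mgrid[:img.shape[0], :img.shape[1]]
    m00 = img.sum()
    xc, yc = (x * img).sum() / m00, (y * img).sum() / m00

    def eta(p, q):  # normalized central moment: translation + scale invariant
        mu = ((x - xc) ** p * (y - yc) ** q * img).sum()
        return mu / m00 ** (1 + (p + q) / 2)

    phi1 = eta(2, 0) + eta(0, 2)                            # Hu's first invariant
    phi2 = (eta(2, 0) - eta(0, 2)) ** 2 + 4 * eta(1, 1) ** 2
    return np.array([phi1, phi2])

# Translation invariance: the same blob shifted gives the same moments.
a = np.zeros((64, 64)); a[10:20, 10:30] = 1.0
b = np.zeros((64, 64)); b[30:40, 25:45] = 1.0
print(np.allclose(hu_moments_2(a), hu_moments_2(b)))        # True
```

In practice all seven moments would be computed (e.g. with an image library); the two shown here suffice to illustrate the normalization.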
(2) coverage rate
After the Hu invariant moment features of the images have been computed, the coverage rate of each image is computed from these features to extract the candidate key frames of the video. The process is: first compute the similarity coefficient of every pair of frames; frames of the video whose similarity to the current frame exceeds the mean similarity coefficient between the current frame and the other frames are placed in the related-frame set of the current frame. The coverage rate of the current frame is then the ratio of the number of frames in its related-frame set to the number of all frames in the video. The coverage rate of every frame in the video is computed, and the 30% of frames with the highest coverage rate are taken as candidate key frames.
(3) distortion rate
The coverage rate measures the probability that each frame can represent the information of the other frames, but it cannot tell which of the candidate key frames are the best key frames. We therefore compute the distortion rate between each candidate frame and the other frames in the video, and the candidates with the lowest distortion rate are chosen as key frames.
First compute the estimated probabilities of the gray values of each image and its mean gray level; then compute the first-, second- and third-order moments of each frame, representing the mean, the variance and the skewness respectively, and represent the image by this three-dimensional feature vector; compute the mean moment of the related-frame set of each candidate key frame and the mean moment of all frames in the video. Finally, evaluate the objective function of the mean moment of the candidate key frame's related-frame set against the mean moment of all frames to obtain the distortion rate of the candidate key frame, and take the 50% of candidates with the lowest distortion rate as the final key frames of the video.
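The three gray-level moments used by the distortion computation can be sketched as follows; since the text does not give the objective function explicitly, a Euclidean distance between the two mean moment vectors stands in for it (an assumption, clearly labeled):

```python
import numpy as np

def gray_moments(frame):
    """Mean, std-dev and skewness (1st-3rd moments) of the gray levels."""
    mu = frame.mean()
    sigma = frame.std()
    skew = np.cbrt(((frame - mu) ** 3).mean())   # cube root keeps gray units
    return np.array([mu, sigma, skew])

def distortion(candidate_related, all_frames):
    """Assumed stand-in objective: distance between the mean moment vector of
    the candidate's related-frame set and that of all frames (lower = better)."""
    m_rel = np.mean([gray_moments(f) for f in candidate_related], axis=0)
    m_all = np.mean([gray_moments(f) for f in all_frames], axis=0)
    return float(np.linalg.norm(m_rel - m_all))

flat = np.full((8, 8), 5.0)
print(gray_moments(flat))                        # [5. 0. 0.]
```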
Posture representation based on multi-feature fusion: after the key frames of the video sequence have been computed as above, the feature information of the key frames is extracted for posture recognition. The feature information adopted here is the multi-feature fused human posture feature consisting of the six-star model, the six-star angles and the eccentricity, yielding the multi-feature fused image feature vector.
Combining the multi-feature fused posture description with template matching, the multi-feature description can characterize the human posture fairly accurately, overcoming the sensitivity of template matching to time intervals and strengthening the robustness of the method.
(1) Six-star model
The silhouette of a human behavior contains the richest information for describing that behavior, so this method selects the six-star model as one of the features. The six-star model consists of the distances between silhouette points and the centroid obtained after extracting the centroid of the human foreground image: the silhouette contour is divided into left and right halves, and the distances from the topmost, bottommost and leftmost (rightmost) points of the two halves to the centroid are computed, as shown in Fig. 3.
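A sketch of one possible reading of this construction (the exact split rule and point choice are our assumption from Fig. 3, not the patent's wording): topmost and bottommost points of each half plus the leftmost and rightmost points give six centroid distances.

```python
import numpy as np

def six_star(mask):
    """Distances from the silhouette centroid to six extremal points:
    topmost/bottommost of the left and right halves, leftmost, rightmost."""
    ys, xs = np.nonzero(mask)
    cy, cx = ys.mean(), xs.mean()
    pts = []
    for half in (xs <= cx, xs > cx):                 # left / right of centroid
        hy, hx = ys[half], xs[half]
        pts.append((hy[hy.argmin()], hx[hy.argmin()]))   # topmost of the half
        pts.append((hy[hy.argmax()], hx[hy.argmax()]))   # bottommost of the half
    pts.append((ys[xs.argmin()], xs.min()))              # leftmost overall
    pts.append((ys[xs.argmax()], xs.max()))              # rightmost overall
    d = np.array([np.hypot(y - cy, x - cx) for y, x in pts])
    return d, (cy, cx), pts

mask = np.zeros((40, 30)); mask[5:35, 10:20] = 1     # toy rectangular silhouette
d, c, pts = six_star(mask)
print(len(d))                                        # 6
```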
(2) Six-star angles
The second of the multiple features is the six-star angles: after the six-star model has given the six axes from the centroid to the six points, the angles between each axis and its adjacent axis are computed, giving six angles, as shown in Fig. 4.
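The angles between adjacent axes can be obtained by sorting the axes by polar angle around the centroid; a sketch in our own formulation (not the patent's formula):

```python
import numpy as np

def star_angles(center, points):
    """Angles (degrees) between adjacent centroid-to-point axes; sum is 360."""
    cy, cx = center
    theta = np.sort([np.arctan2(y - cy, x - cx) for y, x in points])
    gaps = np.diff(np.append(theta, theta[0] + 2 * np.pi))  # wrap around
    return np.degrees(gaps)

# Sanity check: four symmetric points around the origin give four right angles.
pts = [(1, 0), (0, 1), (-1, 0), (0, -1)]      # (y, x) coordinates
print(star_angles((0, 0), pts))               # [90. 90. 90. 90.]
```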
(3) Eccentricity
This method extracts the eccentricity of the human contour as one of the features; the eccentricity of the silhouette is computed by a formula, completing the multi-feature fused image feature vector, as shown in Fig. 5.
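The patent does not reproduce its eccentricity formula here; a common choice, assumed in this sketch, derives it from the eigenvalues of the silhouette's second-order moment (covariance) matrix:

```python
import numpy as np

def eccentricity(mask):
    """Eccentricity of a binary silhouette: 0 for a circular/square blob,
    approaching 1 as the shape elongates (covariance-eigenvalue form)."""
    ys, xs = np.nonzero(mask)
    cov = np.cov(np.stack([xs, ys]).astype(float))
    lam = np.linalg.eigvalsh(cov)             # ascending: lam[0] <= lam[1]
    return float(np.sqrt(1.0 - lam[0] / lam[1]))

line = np.zeros((5, 40)); line[2, :] = 1      # degenerate, maximally elongated
print(eccentricity(line))                     # 1.0
```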
Posture classifier based on SVM: this method uses a multi-class support vector machine classifier to model and classify human behaviors. A support vector machine (Support Vector Machine, SVM) can obtain a classifier with good generalization ability under small-sample conditions. Through a kernel function, the SVM transforms a nonlinear feature space into a linear one and then constructs a hyperplane in the transformed feature space, obtaining the optimal model between classes. Because the fused multi-feature space extracted here is nonlinear, the radial basis function kernel (Radial Basis Function, RBF) is chosen for the feature space transform, finally yielding the optimal classification surface between the classes.
Because the collected training samples contain a certain amount of noise due to errors in the acquisition process, a penalty factor C and a kernel parameter γ are defined in the support vector machine to solve the over-fitting and inseparability problems caused by noise. During training, C and γ are determined by parameter tuning, yielding an optimal classification surface, and finally the multi-class SVM classifier is constructed.
After the feature vectors have been transformed to a higher dimension by the kernel function, supervised learning on the training samples computes the classifier with functions in the lower-dimensional space. The present invention uses the one-versus-one multi-class classifier design: one SVM is designed between every two classes of samples, so N posture classes require N*(N-1)/2 SVMs. As shown in Fig. 6, with three posture classes, a classifier f1,2(x) is designed between posture class 1 and posture class 2, f2,3(x) between class 2 and class 3, and f1,3(x) between class 1 and class 3, three classifiers in total. When classifying an unknown sample, each classifier gives a classification result, and the class receiving the most votes is finally taken as the class of the unknown sample.
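A minimal sketch of the one-versus-one scheme using scikit-learn (assumed available; the data are synthetic stand-ins for the 13-dimensional fused feature vectors: six distances, six angles, one eccentricity):

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 13)) + np.repeat(np.arange(3), 20)[:, None]
y = np.repeat(np.arange(3), 20)              # 3 posture classes, 20 samples each

# RBF kernel as in the text; 'ovo' builds one SVM per pair of classes,
# i.e. N*(N-1)/2 = 3 pairwise classifiers for N = 3.
clf = SVC(kernel="rbf", C=1.0, gamma="scale", decision_function_shape="ovo")
clf.fit(X, y)
print(clf.decision_function(X[:1]).shape)    # (1, 3): one score per class pair
print(clf.predict(X[:1]))                    # majority vote over the 3 pairs
```

In practice C and gamma would be chosen by the parameter tuning mentioned above (e.g. a grid search with cross-validation).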
More than 400 video segments (720*576, 30 fps) were collected in total, comprising self-recorded videos of moving humans and the Weizmann public human behavior database. The video data were divided into a training set and a test set at a ratio of 2:1, covering 11 postures in all: walking (walk), jump 1 (jump1), jump 2 (jump2), sidling (sidewalk), squatting (squat), stooping (stoop), crawling (crawl), push-up (push-up), sit-up (sit-up) and sitting down (sit).
Training stage of the classification model:
(1) First preprocess the human foreground: for each behavior video segment, train a codebook-based background model, obtain the foreground image of the moving human body from the difference between the video sequence frames and the background model, and apply morphological image processing to remove image noise;
(2) extract the features of the training data covering the 11 postures (as shown in Figs. 3, 4 and 5) to form a library of standard samples, and describe the posture features of the moving human body using the multi-posture fusion method;
(3) build the SVM-based multi-class classification model by learning from the standard samples;
(4) validate the correctness of the model with test samples; if the accuracy is below expectation, adjust the training samples.
The input video to be recognized contains four actions and 214 frames in total; the algorithm extracts 32 key frames and realizes the recognition of the posture sequence, as shown in Fig. 8. The detailed process is as follows:
(1) Extract the Hu invariant moment features of the images and use them to compute the coverage rate of the video image sequence; take the 30% of frames with the highest coverage rate as candidate key frames, then compute the distortion rate of the candidate key frames and extract 50% of them as key frames;
(2) extract the foreground from the key frames to obtain foreground images of the moving human body;
(3) describe the human posture in the key frames with the multi-posture fusion description operator;
(4) recognize the posture with the trained classification model.
Claims (5)
1. A human posture recognition method based on multi-feature fusion of key frames, characterized in that the method comprises the following steps:
(1) extracting Hu invariant moment features from the video images, computing the coverage rate of the image sequence, taking a set percentage of frames with the highest coverage rate as candidate key frames, then computing the distortion rate of the candidate key frames and taking the set percentage with the lowest distortion rate as the key frames;
(2) extracting the foreground from the key frames to obtain foreground images of the moving human body;
(3) extracting the feature information of the key frames, the feature information being the six-star model, the six-star angles and the eccentricity, to obtain a multi-feature fused image feature vector;
(4) recognizing the posture using trained one-versus-one classification models, the classification models being posture classifiers based on SVM.
2. The human posture recognition method based on multi-feature fusion of key frames according to claim 1, characterized in that in step (4), one SVM is designed between every two classes of samples, so N posture classes require N*(N-1)/2 SVMs.
3. The human posture recognition method based on multi-feature fusion of key frames according to claim 1 or 2, characterized in that in step (4), the training process of the classification model is as follows:
(4.1) first preprocess the human foreground images: for each behavior video segment, train a codebook-based background model, obtain the foreground image of the moving human body from the difference between the video sequence frames and the background model, and apply morphological image processing to remove image noise;
(4.2) extract the features of the training data covering the 11 postures to form a library of standard samples, and describe the posture features of the moving human body using the multi-posture fusion method, the 11 postures being walking, small jump, big jump, sidling, squatting, stooping, crawling, push-up, sit-up and sitting down;
(4.3) build the SVM-based multi-class classification model by learning from the standard samples;
(4.4) validate the model with test samples; if the accuracy is below expectation, adjust the training samples and return to (4.1), until the accuracy exceeds the expectation, giving the trained classification model.
4. The human posture recognition method based on multi-feature fusion of key frames according to claim 1 or 2, characterized in that in step (3), the extraction process of the six-star model is: after the centroid of the human foreground image is extracted, the distances between the obtained silhouette points and the centroid are used; the silhouette contour is divided into left and right halves, and the distances from the topmost and bottommost points of the two halves, and from the leftmost and rightmost points, to the centroid are computed; after the six axes from the centroid to the six points of the six-star model are obtained, the angles between adjacent axes are computed; and the eccentricity of the human silhouette is computed.
5. The human posture recognition method based on multi-feature fusion of key frames according to claim 1 or 2, characterized in that in step (1), the process of computing the coverage rate is: first compute the similarity coefficient of every pair of frames, and place those frames of the video whose similarity to the current frame exceeds the mean similarity coefficient between the current frame and the other frames into the related-frame set of the current frame; the coverage rate of the current frame is then the ratio of the number of frames in its related-frame set to the number of all frames in the video; the coverage rate of every frame in the video is computed, and the 30% of frames with the highest coverage rate are taken as candidate key frames;
the process of computing the distortion rate is: first compute the estimated probabilities of the gray values of each image and its mean gray level; then compute the first-, second- and third-order moments of each frame, representing the mean, the variance and the skewness respectively, and represent the image by this three-dimensional feature vector; compute the mean moment of the related-frame set of each candidate key frame and the mean moment of all frames in the video; finally, evaluate the objective function of the mean moment of the candidate key frame's related-frame set against the mean moment of all frames to obtain the distortion rate of the candidate key frame, and take the 50% of candidates with the lowest distortion rate as the final key frames of the video.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201210063893.5A CN102682302B (en) | 2012-03-12 | 2012-03-12 | Human body posture identification method based on multi-characteristic fusion of key frame |
Publications (2)
Publication Number | Publication Date |
---|---|
CN102682302A true CN102682302A (en) | 2012-09-19 |
CN102682302B CN102682302B (en) | 2014-03-26 |
Family
ID=46814197
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201210063893.5A Active CN102682302B (en) | 2012-03-12 | 2012-03-12 | Human body posture identification method based on multi-characteristic fusion of key frame |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN102682302B (en) |
- 2012-03-12 CN CN201210063893.5A patent/CN102682302B/en active Active
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101576953A (en) * | 2009-06-10 | 2009-11-11 | 北京中星微电子有限公司 | Classification method and device of human body posture |
CN101794384A (en) * | 2010-03-12 | 2010-08-04 | 浙江大学 | Shooting action identification method based on human body skeleton map extraction and grouping motion diagram inquiry |
US20110228976A1 (en) * | 2010-03-19 | 2011-09-22 | Microsoft Corporation | Proxy training data for human body tracking |
CN102289672A (en) * | 2011-06-03 | 2011-12-21 | 天津大学 | Infrared gait identification method adopting double-channel feature fusion |
Non-Patent Citations (2)
Title |
---|
Sun Bin: "Research on Moving Object Recognition Methods for Video Surveillance Based on Support Vector Machines", Modern Computer * |
Xie Fei et al.: "Recognition of Multiple Human Postures Based on Support Vector Machines", Journal of Chongqing Institute of Technology (Natural Science) * |
Cited By (65)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103218831B (en) * | 2013-04-21 | 2015-11-18 | 北京航空航天大学 | A kind of video frequency motion target classifying identification method based on profile constraint |
CN103218831A (en) * | 2013-04-21 | 2013-07-24 | 北京航空航天大学 | Video moving target classification and identification method based on outline constraint |
WO2014176790A1 (en) * | 2013-05-03 | 2014-11-06 | Nokia Corporation | A method and technical equipment for people identification |
CN105164696A (en) * | 2013-05-03 | 2015-12-16 | 诺基亚技术有限公司 | A method and technical equipment for people identification |
CN104274182A (en) * | 2013-07-01 | 2015-01-14 | 株式会社东芝 | Motion information processing apparatus and method |
WO2015078134A1 (en) * | 2013-11-29 | 2015-06-04 | 华为技术有限公司 | Video classification method and device |
US10002296B2 (en) | 2013-11-29 | 2018-06-19 | Huawei Technologies Co., Ltd. | Video classification method and apparatus |
CN104331712B (en) * | 2014-11-24 | 2017-08-25 | 齐齐哈尔格林环保科技开发有限公司 | A kind of alga cells classification of images method |
CN104331712A (en) * | 2014-11-24 | 2015-02-04 | 齐齐哈尔格林环保科技开发有限公司 | Automatic classifying method for algae cell images |
CN105184767A (en) * | 2015-07-22 | 2015-12-23 | 北京工业大学 | Moving human body attitude similarity measuring method |
CN105184767B (en) * | 2015-07-22 | 2018-04-06 | 北京工业大学 | A kind of movement human posture method for measuring similarity |
CN105184257B (en) * | 2015-09-08 | 2018-08-07 | 北京航空航天大学 | Object detection method and device |
CN105184257A (en) * | 2015-09-08 | 2015-12-23 | 北京航空航天大学 | Target detection method and device |
CN106295532A (en) * | 2016-08-01 | 2017-01-04 | 河海大学 | A kind of human motion recognition method in video image |
CN106295532B (en) * | 2016-08-01 | 2019-09-24 | 河海大学 | A kind of human motion recognition method in video image |
CN106599907B (en) * | 2016-11-29 | 2019-11-29 | 北京航空航天大学 | The dynamic scene classification method and device of multiple features fusion |
CN106599907A (en) * | 2016-11-29 | 2017-04-26 | 北京航空航天大学 | Multi-feature fusion-based dynamic scene classification method and apparatus |
CN107239728A (en) * | 2017-01-04 | 2017-10-10 | 北京深鉴智能科技有限公司 | Unmanned plane interactive device and method based on deep learning Attitude estimation |
CN106980815A (en) * | 2017-02-07 | 2017-07-25 | 王俊 | Facial paralysis objective evaluation method under being supervised based on H B rank scores |
CN107087211A (en) * | 2017-03-30 | 2017-08-22 | 北京奇艺世纪科技有限公司 | A kind of anchor shots detection method and device |
CN107203753A (en) * | 2017-05-25 | 2017-09-26 | 西安工业大学 | A kind of action identification method based on fuzzy neural network and graph model reasoning |
CN107203753B (en) * | 2017-05-25 | 2020-09-08 | 西安工业大学 | Action recognition method based on fuzzy neural network and graph model reasoning |
CN107330414A (en) * | 2017-07-07 | 2017-11-07 | 郑州轻工业学院 | Act of violence monitoring method |
CN107483887A (en) * | 2017-08-11 | 2017-12-15 | 中国地质大学(武汉) | The early-warning detection method of emergency case in a kind of smart city video monitoring |
CN107483887B (en) * | 2017-08-11 | 2020-05-22 | 中国地质大学(武汉) | Early warning detection method for emergency in smart city video monitoring |
CN109670520A (en) * | 2017-10-13 | 2019-04-23 | 杭州海康威视数字技术股份有限公司 | A kind of targeted attitude recognition methods, device and electronic equipment |
CN107832713B (en) * | 2017-11-13 | 2021-11-16 | 南京邮电大学 | Human body posture recognition method based on OptiTrack |
CN107832713A (en) * | 2017-11-13 | 2018-03-23 | 南京邮电大学 | A kind of human posture recognition method based on OptiTrack |
CN107798313A (en) * | 2017-11-22 | 2018-03-13 | 杨晓艳 | A kind of human posture recognition method, device, terminal and storage medium |
WO2019114405A1 (en) * | 2017-12-13 | 2019-06-20 | 北京市商汤科技开发有限公司 | Video recognition and training method and apparatus, electronic device and medium |
CN108229336A (en) * | 2017-12-13 | 2018-06-29 | 北京市商汤科技开发有限公司 | Video identification and training method and device, electronic equipment, program and medium |
US10909380B2 (en) | 2017-12-13 | 2021-02-02 | Beijing Sensetime Technology Development Co., Ltd | Methods and apparatuses for recognizing video and training, electronic device and medium |
CN108256433A (en) * | 2017-12-22 | 2018-07-06 | 银河水滴科技(北京)有限公司 | A kind of athletic posture appraisal procedure and system |
CN108256433B (en) * | 2017-12-22 | 2020-12-25 | 银河水滴科技(北京)有限公司 | Motion attitude assessment method and system |
CN108681740A (en) * | 2018-04-04 | 2018-10-19 | 儒安科技有限公司 | Vehicle type classification method based on multi-category support vector machines |
CN110400332A (en) * | 2018-04-25 | 2019-11-01 | 杭州海康威视数字技术股份有限公司 | A kind of target detection tracking method, device and computer equipment |
CN110400332B (en) * | 2018-04-25 | 2021-11-05 | 杭州海康威视数字技术股份有限公司 | Target detection tracking method and device and computer equipment |
CN108615241A (en) * | 2018-04-28 | 2018-10-02 | 四川大学 | A kind of quick estimation method of human posture based on light stream |
CN109190474B (en) * | 2018-08-01 | 2021-07-20 | 南昌大学 | Human body animation key frame extraction method based on gesture significance |
CN109190474A (en) * | 2018-08-01 | 2019-01-11 | 南昌大学 | Human body animation extraction method of key frame based on posture conspicuousness |
CN108965920A (en) * | 2018-08-08 | 2018-12-07 | 北京未来媒体科技股份有限公司 | A kind of video content demolition method and device |
CN109583340A (en) * | 2018-11-15 | 2019-04-05 | 中山大学 | A kind of video object detection method based on deep learning |
CN109508684B (en) * | 2018-11-21 | 2022-12-27 | 中山大学 | Method for recognizing human behavior in video |
CN109508684A (en) * | 2018-11-21 | 2019-03-22 | 中山大学 | A kind of method of Human bodys' response in video |
CN109858406B (en) * | 2019-01-17 | 2023-04-07 | 西北大学 | Key frame extraction method based on joint point information |
CN109858406A (en) * | 2019-01-17 | 2019-06-07 | 西北大学 | A kind of extraction method of key frame based on artis information |
CN110309720A (en) * | 2019-05-27 | 2019-10-08 | 北京奇艺世纪科技有限公司 | Video detecting method, device, electronic equipment and computer-readable medium |
CN110457999B (en) * | 2019-06-27 | 2022-11-04 | 广东工业大学 | Animal posture behavior estimation and mood recognition method based on deep learning and SVM |
CN110457999A (en) * | 2019-06-27 | 2019-11-15 | 广东工业大学 | A kind of animal posture behavior estimation based on deep learning and SVM and mood recognition methods |
CN110490901A (en) * | 2019-07-15 | 2019-11-22 | 武汉大学 | The pedestrian detection tracking of anti-attitudes vibration |
CN111368810B (en) * | 2020-05-26 | 2020-08-25 | 西南交通大学 | Sit-up detection system and method based on human body and skeleton key point identification |
CN111368810A (en) * | 2020-05-26 | 2020-07-03 | 西南交通大学 | Sit-up detection system and method based on human body and skeleton key point identification |
US20210390713A1 (en) * | 2020-06-12 | 2021-12-16 | Beijing Didi Infinity Technology And Development Co., Ltd. | Systems and methods for performing motion transfer using a learning model |
WO2021248432A1 (en) * | 2020-06-12 | 2021-12-16 | Beijing Didi Infinity Technology And Development Co., Ltd. | Systems and methods for performing motion transfer using a learning model |
US11830204B2 (en) * | 2020-06-12 | 2023-11-28 | Beijing Didi Infinity Technology And Development Co., Ltd. | Systems and methods for performing motion transfer using a learning model |
CN111797714B (en) * | 2020-06-16 | 2022-04-26 | 浙江大学 | Multi-view human motion capture method based on key point clustering |
CN111797714A (en) * | 2020-06-16 | 2020-10-20 | 浙江大学 | Multi-view human motion capture method based on key point clustering |
CN111783650A (en) * | 2020-06-30 | 2020-10-16 | 北京百度网讯科技有限公司 | Model training method, action recognition method, device, equipment and storage medium |
CN112090053A (en) * | 2020-09-14 | 2020-12-18 | 成都拟合未来科技有限公司 | 3D interactive fitness training method, device, equipment and medium |
CN112932470A (en) * | 2021-01-27 | 2021-06-11 | 上海萱闱医疗科技有限公司 | Push-up training evaluation method and device, equipment and storage medium |
CN112932470B (en) * | 2021-01-27 | 2023-12-29 | 上海萱闱医疗科技有限公司 | Assessment method and device for push-up training, equipment and storage medium |
CN112926522A (en) * | 2021-03-30 | 2021-06-08 | 广东省科学院智能制造研究所 | Behavior identification method based on skeleton attitude and space-time diagram convolutional network |
CN112926522B (en) * | 2021-03-30 | 2023-11-24 | 广东省科学院智能制造研究所 | Behavior recognition method based on skeleton gesture and space-time diagram convolution network |
WO2023197390A1 (en) * | 2022-04-15 | 2023-10-19 | 北京航空航天大学杭州创新研究院 | Posture tracking method and apparatus, electronic device, and computer readable medium |
CN116310015A (en) * | 2023-03-15 | 2023-06-23 | 杭州若夕企业管理有限公司 | Computer system, method and medium |
Also Published As
Publication number | Publication date |
---|---|
CN102682302B (en) | 2014-03-26 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN102682302B (en) | Human body posture identification method based on multi-characteristic fusion of key frame | |
Singh et al. | Video benchmarks of human action datasets: a review | |
Zhu et al. | Fusing spatiotemporal features and joints for 3d action recognition | |
Song et al. | Tracking revisited using RGBD camera: Unified benchmark and baselines | |
Kuo et al. | How does person identity recognition help multi-person tracking? | |
CN103295016B (en) | Behavior recognition method based on depth and RGB information and multi-scale and multidirectional rank and level characteristics | |
Kadkhodamohammadi et al. | A multi-view RGB-D approach for human pose estimation in operating rooms | |
CN105095884B (en) | A kind of pedestrian's identifying system and processing method based on random forest support vector machines | |
US9183431B2 (en) | Apparatus and method for providing activity recognition based application service | |
CN106295568A (en) | The mankind's naturalness emotion identification method combined based on expression and behavior bimodal | |
CN103942577A (en) | Identity identification method based on self-established sample library and composite characters in video monitoring | |
CN107944431A (en) | A kind of intelligent identification Method based on motion change | |
CN109711366A (en) | A kind of recognition methods again of the pedestrian based on group information loss function | |
CN104281572B (en) | A kind of target matching method and its system based on mutual information | |
CN104517097A (en) | Kinect-based moving human body posture recognition method | |
CN103186775A (en) | Human body motion recognition method based on mixed descriptor | |
CN104809469A (en) | Indoor scene image classification method facing service robot | |
Chen et al. | TriViews: A general framework to use 3D depth data effectively for action recognition | |
CN103955680A (en) | Action recognition method and device based on shape context | |
Tanisik et al. | Facial descriptors for human interaction recognition in still images | |
CN102855488A (en) | Three-dimensional gesture recognition method and system | |
Wei et al. | Human Activity Recognition using Deep Neural Network with Contextual Information. | |
CN101826155A (en) | Method for identifying act of shooting based on Haar characteristic and dynamic time sequence matching | |
Batool et al. | Telemonitoring of daily activities based on multi-sensors data fusion | |
Yan et al. | Human-object interaction recognition using multitask neural network |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||