CN106096518A - Fast dynamic human action extraction and recognition method based on deep learning - Google Patents

Fast dynamic human action extraction and recognition method based on deep learning Download PDF

Info

Publication number
CN106096518A
CN106096518A CN201610387248.7A
Authority
CN
China
Prior art keywords
human body
degree
action
recognition methods
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201610387248.7A
Other languages
Chinese (zh)
Inventor
姚鸣 (Yao Ming)
姚一鸣 (Yao Yiming)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Harbin Duozhi Science And Technology Development Co Ltd
Original Assignee
Harbin Duozhi Science And Technology Development Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Harbin Duozhi Science And Technology Development Co Ltd filed Critical Harbin Duozhi Science And Technology Development Co Ltd
Priority to CN201610387248.7A priority Critical patent/CN106096518A/en
Publication of CN106096518A publication Critical patent/CN106096518A/en
Pending legal-status Critical Current

Links

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20 Movements or behaviour, e.g. gesture recognition
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G06V10/443 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components by matching or filtering
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/56 Extraction of image or video features relating to colour
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10016 Video; Image sequence
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10024 Color image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20084 Artificial neural networks [ANN]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30196 Human being; Person

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Human Computer Interaction (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Psychiatry (AREA)
  • Social Psychology (AREA)
  • Image Analysis (AREA)

Abstract

The present invention relates to a fast dynamic human action extraction and recognition method based on deep learning. Existing human-body recognition technology and its applications fall short in several respects. First, the human skeleton is an inherently complex structure: different people have different movement habits and perform the same action in different ways, which makes universally applicable human recognition difficult. The method of the present invention comprises the following steps: first, describe global information about the human target (size, color, edge, contour, shape and depth), which provides cues for action recognition; then extract effective motion features from the video sequence. In far-view scenes, the target's motion trajectory is used for trajectory analysis; in close-range scenes, the information extracted from the image sequence is used to build a two- or three-dimensional model of the target's limbs and torso.

Description

Fast dynamic human action extraction and recognition method based on deep learning
Technical field:
The present invention relates to the field of action recognition, and specifically to a fast dynamic human action extraction and recognition method based on deep learning.
Background technology:
Existing human-body recognition technology and its applications have shortcomings in the following respects. First, the human skeleton is an inherently complex structure: different people have different movement habits and perform the same action in different ways, which makes universally applicable human recognition difficult. Second, the position of the recognised target is constrained and mandatory: the target must adjust its own position so that its front faces the camera, and recognition from the side may fail entirely. Third, response speed and efficiency suffer: for continuous human actions there is redundancy between successive frames, which not only occupies large storage space but also increases the amount of computation.
Action recognition technology is based on human motion characteristics. Given an input human video stream, it first judges whether a human is present; if so, it further provides the position and size of each person and the location of each major joint marker point. From this information it extracts the motion features contained in each person's action and compares them with known action features, thereby identifying the action each person is performing. Action recognition technology has a wide range of applications, such as dangerous-behaviour recognition and alarm, human-computer interaction, medically assisted behaviour correction, and film and television production.
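The pipeline just described (detect a person, locate joint marker points, extract motion features, match against known action features) can be sketched as follows. This is an illustrative reconstruction, not code disclosed by the patent; the joint coordinates, the template set and the nearest-neighbour matching rule are assumptions made for the sake of a small runnable example.

```python
import numpy as np

def motion_feature(joints_prev, joints_curr):
    """Motion feature: per-joint displacement between consecutive frames."""
    return (joints_curr - joints_prev).ravel()

def classify_action(feature, templates):
    """Match the extracted feature against known action templates
    (nearest neighbour in Euclidean distance)."""
    names = list(templates)
    dists = [np.linalg.norm(feature - templates[n]) for n in names]
    return names[int(np.argmin(dists))]

# Toy example: 3 joint marker points tracked across two frames.
prev = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 0.0]])
curr = np.array([[0.0, 0.5], [1.0, 1.5], [2.0, 0.5]])  # all joints moved up
feat = motion_feature(prev, curr)

# Hypothetical library of known action features.
templates = {
    "raise": np.tile([0.0, 0.5], 3),  # uniform upward motion
    "still": np.zeros(6),
}
print(classify_action(feat, templates))  # -> raise
```

In the patent's setting the known action features would be learned rather than hand-written, but the "compare with known action features" step has the same shape.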
To acquire and recognise human behaviour quickly and overcome the above deficiencies of the prior art, the present invention provides a recognition method based on computer deep learning. The method can extract and recognise human actions in real time, dynamically, quickly and at large scale, and can be applied to systems such as dangerous-behaviour alarms for enterprises, schools and government bodies, as well as film and television production, while lowering hardware requirements. Traditional action recognition methods place quite strict demands on hardware: for recognition at the scale of tens of thousands of people, an ordinary computer cannot meet the computational load at all. By using single- and double-precision floating-point operations, this method effectively solves the problem of excessive hardware usage; in stress tests on 100,000 randomly sampled people, recognising a person takes only an extremely short time, achieving real-time recognition.
Summary of the invention:
The object of the present invention is to provide a fast dynamic human action extraction and recognition method based on deep learning.
Above-mentioned purpose is realized by following technical scheme:
A fast dynamic human action extraction and recognition method based on deep learning: first, describe global information about the human target (size, color, edge, contour, shape and depth), which provides cues for action recognition, and extract effective motion features from the video sequence. In far-view scenes, use the target's motion trajectory for trajectory analysis; in close-range scenes, use the information extracted from the image sequence to build a two- or three-dimensional model of the target's limbs and torso.
In the described method, the features of body size, color, edge, contour and shape are searched to determine that a moving object is human, after which the human image is cropped out by screening. Marker points that can be identified and tracked are then placed on the person's major joints or other positions. The camera films the actions of the same person; then, according to spatial geometric parameters and in combination with mathematical models of human motion, the position of each marker point on the person at each moment can be inferred. The combination of the positions of multiple marker points constitutes the overall position of the body, and continuous position recognition thereby identifies the human action.
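The inference of marker positions from "spatial geometric parameters" can be illustrated with a standard pinhole back-projection, assuming known camera intrinsics and a per-marker depth estimate. The patent does not specify its geometric model; the intrinsics fx, fy, cx, cy and the marker measurements below are hypothetical.

```python
import numpy as np

def backproject(u, v, depth, fx, fy, cx, cy):
    """Recover a marker point's 3-D position from its pixel coordinates
    and depth, using pinhole-camera intrinsics (one possible reading of
    the 'spatial geometric parameters')."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.array([x, y, depth])

# Hypothetical intrinsics and one tracked marker over three frames.
fx = fy = 500.0
cx, cy = 320.0, 240.0
frames = [(320.0, 240.0, 2.0), (330.0, 240.0, 2.0), (340.0, 240.0, 2.0)]

# Stacking the per-frame positions gives the marker's motion trajectory,
# which far-view trajectory analysis would then operate on.
trajectory = np.stack([backproject(u, v, d, fx, fy, cx, cy)
                       for u, v, d in frames])
print(trajectory[:, 0])  # marker drifts along x: about 0.0, 0.04, 0.08
```

Repeating this for every marker and concatenating the per-moment positions yields the "overall position of the body" the text refers to.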
In the described method, the images collected by the camera are processed in real time. The person in the image is first detected and the pedestrian region is enclosed in a bounding box; each frame of this region is then compared with its previous frame and its next frame, and the movement of pixels across the three frames is computed. By calculating the optical flow of the moving pixels, the motion vector (Fx, Fy) of each pixel is obtained. This vector is then decomposed and, after filtering with a Gaussian filter, the feature representation of the pedestrian action of interest is obtained.
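The decomposition formula for the flow vector is elided in the source text. One common choice in the optical-flow feature literature is to half-wave rectify the flow into four non-negative channels (Fx+, Fx-, Fy+, Fy-) and then smooth each channel with a Gaussian filter; the sketch below assumes that decomposition and uses a synthetic flow field in place of a real optical-flow computation.

```python
import numpy as np

def decompose(fx, fy):
    """Half-wave rectify the flow field into four non-negative channels
    (Fx+, Fx-, Fy+, Fy-). This is an assumed decomposition; the patent's
    exact formula is not reproduced in the source text."""
    return (np.maximum(fx, 0), np.maximum(-fx, 0),
            np.maximum(fy, 0), np.maximum(-fy, 0))

def gaussian_blur(img, sigma=1.0):
    """Separable Gaussian filter built from a truncated 1-D kernel."""
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    k /= k.sum()
    out = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, out)

# Toy flow field: rightward motion in the top half, downward in the bottom.
fx = np.zeros((8, 8)); fx[:4, :] = 1.0
fy = np.zeros((8, 8)); fy[4:, :] = 1.0

# The smoothed channels form the per-frame feature representation.
channels = [gaussian_blur(c) for c in decompose(fx, fy)]
```

Replacing the synthetic (fx, fy) with the output of a real optical-flow routine (for example OpenCV's Farneback method) would give the per-frame pedestrian feature the text describes.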
Beneficial effect:
1. The present invention is a fast dynamic human action extraction and recognition method based on deep learning. It mainly provides a recognition method based on computer deep learning that can extract and recognise human actions in real time, dynamically, quickly and at large scale, and can be applied to systems such as dangerous-behaviour alarms for enterprises, schools and government bodies, as well as film and television production, while lowering hardware requirements. Traditional action recognition methods place quite strict demands on hardware; for recognition at the scale of tens of thousands of people, an ordinary computer cannot meet the computational load at all.
2. By using single- and double-precision floating-point operations, the present invention effectively solves the problem of excessive hardware usage; in stress tests on 100,000 randomly sampled people, recognising a person takes only an extremely short time, achieving real-time recognition.
Brief description of the drawings:
Figure 1 is a schematic diagram of the neural-network action recognition technique of the present invention.
Figure 2 is the motion feature image of the present invention.
Detailed description of the invention:
Embodiment 1:
A fast dynamic human action extraction and recognition method based on deep learning: first, describe global information about the human target (size, color, edge, contour, shape and depth), which provides cues for action recognition, and extract effective motion features from the video sequence. In far-view scenes, use the target's motion trajectory for trajectory analysis; in close-range scenes, use the information extracted from the image sequence to build a two- or three-dimensional model of the target's limbs and torso.
Embodiment 2:
According to the fast dynamic human action extraction and recognition method based on deep learning of Embodiment 1: the features of body size, color, edge, contour and shape are searched to determine that a moving object is human, after which the human image is cropped out by screening. Marker points that can be identified and tracked are then placed on the person's major joints or other positions. The camera films the actions of the same person; then, according to spatial geometric parameters and in combination with mathematical models of human motion, the position of each marker point on the person at each moment can be inferred. The combination of the positions of multiple marker points constitutes the overall position of the body, and continuous position recognition thereby identifies the human action.
Embodiment 3:
According to the fast dynamic human action extraction and recognition method based on deep learning of Embodiment 1 or 2: the images collected by the camera are processed in real time. The person in the image is first detected and the pedestrian region is enclosed in a bounding box; each frame of this region is then compared with its previous frame and its next frame, and the movement of pixels across the three frames is computed. By calculating the optical flow of the moving pixels, the motion vector (Fx, Fy) of each pixel is obtained. This vector is then decomposed and, after filtering with a Gaussian filter, the feature representation of the pedestrian action of interest is obtained.
The motion feature image is shown in Figure 2.
Feature calculation formula:

Claims (3)

1. A fast dynamic human action extraction and recognition method based on deep learning, characterised in that: first, global information about the human target (size, color, edge, contour, shape and depth) is described, providing cues for action recognition, and effective motion features are extracted from the video sequence; in far-view scenes, the target's motion trajectory is used for trajectory analysis; in close-range scenes, the information extracted from the image sequence is used to build a two- or three-dimensional model of the target's limbs and torso.
2. The fast dynamic human action extraction and recognition method based on deep learning according to claim 1, characterised in that: the features of body size, color, edge, contour and shape are searched to determine that a moving object is human, after which the human image is cropped out by screening; marker points that can be identified and tracked are then placed on the person's major joints or other positions; the camera films the actions of the same person; then, according to spatial geometric parameters and in combination with mathematical models of human motion, the position of each marker point on the person at each moment can be inferred; the combination of the positions of multiple marker points constitutes the overall position of the body, and continuous position recognition thereby identifies the human action.
3. The fast dynamic human action extraction and recognition method based on deep learning according to claim 1 or 2, characterised in that: the images collected by the camera are processed in real time; the person in the image is first detected and the pedestrian region is enclosed in a bounding box; each frame of this region is then compared with its previous frame and its next frame, and the movement of pixels across the three frames is computed; by calculating the optical flow of the moving pixels, the motion vector (Fx, Fy) of each pixel is obtained; this vector is then decomposed and, after filtering with a Gaussian filter, the feature representation of the pedestrian action of interest is obtained.
CN201610387248.7A 2016-06-02 2016-06-02 Fast dynamic human action extraction and recognition method based on deep learning Pending CN106096518A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610387248.7A CN106096518A (en) 2016-06-02 2016-06-02 Fast dynamic human action extraction and recognition method based on deep learning

Publications (1)

Publication Number Publication Date
CN106096518A true CN106096518A (en) 2016-11-09

Family

ID=57447972

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610387248.7A Pending CN106096518A (en) 2016-06-02 2016-06-02 Fast dynamic human action extraction and recognition method based on deep learning

Country Status (1)

Country Link
CN (1) CN106096518A (en)

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107492108A (en) * 2017-08-18 2017-12-19 成都通甲优博科技有限责任公司 A kind of skeleton line extraction algorithm, system and storage medium based on deep learning
CN107742296A (en) * 2017-09-11 2018-02-27 广东欧珀移动通信有限公司 Dynamic image generation method and electronic installation
CN108655026A (en) * 2018-05-07 2018-10-16 上海交通大学 A kind of quick teaching sorting system of robot and method
CN110598569A (en) * 2019-08-20 2019-12-20 江西憶源多媒体科技有限公司 Action recognition method based on human body posture data
WO2020062760A1 (en) * 2018-09-26 2020-04-02 深圳市中视典数字科技有限公司 Motion capture system and method
CN111496770A (en) * 2020-04-09 2020-08-07 上海电机学院 Intelligent carrying mechanical arm system based on 3D vision and deep learning and use method
CN113095675A (en) * 2021-04-12 2021-07-09 华东师范大学 Method for monitoring action mode of examinee by means of identification point in network examination

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101533528A (en) * 2009-04-18 2009-09-16 大连大学 Optical motion capture data processing method based on module piecewise linear model
CN101957655A (en) * 2009-07-17 2011-01-26 深圳泰山在线科技有限公司 Marked point-based motion recognition method and terminal equipment
US20120057761A1 (en) * 2010-09-01 2012-03-08 Sony Corporation Three dimensional human pose recognition method and apparatus
CN104463090A (en) * 2013-11-25 2015-03-25 安徽寰智信息科技股份有限公司 Method for recognizing actions of human body skeleton of man-machine interactive system
CN105095867A (en) * 2015-07-21 2015-11-25 哈尔滨多智科技发展有限公司 Rapid dynamic face extraction and identification method based deep learning

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107492108A (en) * 2017-08-18 2017-12-19 成都通甲优博科技有限责任公司 A kind of skeleton line extraction algorithm, system and storage medium based on deep learning
CN107742296A (en) * 2017-09-11 2018-02-27 广东欧珀移动通信有限公司 Dynamic image generation method and electronic installation
CN108655026A (en) * 2018-05-07 2018-10-16 上海交通大学 A kind of quick teaching sorting system of robot and method
CN108655026B (en) * 2018-05-07 2020-08-14 上海交通大学 Robot rapid teaching sorting system and method
WO2020062760A1 (en) * 2018-09-26 2020-04-02 深圳市中视典数字科技有限公司 Motion capture system and method
CN110598569A (en) * 2019-08-20 2019-12-20 江西憶源多媒体科技有限公司 Action recognition method based on human body posture data
CN110598569B (en) * 2019-08-20 2022-03-08 江西憶源多媒体科技有限公司 Action recognition method based on human body posture data
CN111496770A (en) * 2020-04-09 2020-08-07 上海电机学院 Intelligent carrying mechanical arm system based on 3D vision and deep learning and use method
CN113095675A (en) * 2021-04-12 2021-07-09 华东师范大学 Method for monitoring action mode of examinee by means of identification point in network examination
CN113095675B (en) * 2021-04-12 2022-03-29 华东师范大学 Method for monitoring action mode of examinee by means of identification point in network examination

Similar Documents

Publication Publication Date Title
CN106096518A (en) Fast dynamic human action extraction and recognition method based on deep learning
Singh et al. Eye in the sky: Real-time drone surveillance system (dss) for violent individuals identification using scatternet hybrid deep learning network
Bilinski et al. Human violence recognition and detection in surveillance videos
CN103824070B (en) A kind of rapid pedestrian detection method based on computer vision
Li et al. Robust visual tracking based on convolutional features with illumination and occlusion handing
CN110517292A (en) Method for tracking target, device, system and computer readable storage medium
CN106651913A (en) Target tracking method based on correlation filtering and color histogram statistics and ADAS (Advanced Driving Assistance System)
WO2017129020A1 (en) Human behaviour recognition method and apparatus in video, and computer storage medium
CN107909059A (en) It is a kind of towards cooperateing with complicated City scenarios the traffic mark board of bionical vision to detect and recognition methods
Rodriguez et al. Detecting and segmenting humans in crowded scenes
CN108875586B (en) Functional limb rehabilitation training detection method based on depth image and skeleton data multi-feature fusion
CN110795982A (en) Apparent sight estimation method based on human body posture analysis
CN108681700A (en) A kind of complex behavior recognition methods
CN108280808B (en) Method for tracking target based on structuring output correlation filter
CN103020606A (en) Pedestrian detection method based on spatio-temporal context information
CN105303163B (en) A kind of method and detection device of target detection
CN103226713A (en) Multi-view behavior recognition method
CN104392445A (en) Method for dividing crowd in surveillance video into small groups
CN106327528A (en) Moving object tracking method and operation method of unmanned aerial vehicle
Duan et al. A more accurate mask detection algorithm based on Nao robot platform and YOLOv7
CN104616323B (en) A kind of time and space significance detection method based on slow signature analysis
Nguyen et al. Facemask wearing alert system based on simple architecture with low-computing devices
CN109389048A (en) Pedestrian detection and tracking in a kind of monitor video
CN102222324B (en) Symmetry property-based method for detecting salient regions of images
Yin et al. YOLO-EPF: Multi-scale smoke detection with enhanced pool former and multiple receptive fields

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
CB02 Change of applicant information

Address after: 519000 Guangdong city of Zhuhai province Hengqin Baohua Road No. 6, room 105, -23248 (central office)

Applicant after: Zhuhai wisdom Technology Co., Ltd.

Address before: 150000 Harbin, Nangang District, Thai Garden, the sea floor of the 25 floor, No. 3, No. 1 shops

Applicant before: HARBIN DUOZHI SCIENCE AND TECHNOLOGY DEVELOPMENT CO., LTD.

SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20161109