CN108549876A - Sitting posture detection method based on object detection and human pose estimation - Google Patents

Sitting posture detection method based on object detection and human pose estimation Download PDF

Info

Publication number
CN108549876A
CN108549876A
Authority
CN
China
Prior art keywords
feature
sitting posture
target detection
human body
network
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201810357864.7A
Other languages
Chinese (zh)
Inventor
高陈强
汤林
陈旭
汪澜
韩慧
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chongqing University of Post and Telecommunications
Original Assignee
Chongqing University of Post and Telecommunications
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chongqing University of Post and Telecommunications
Priority to CN201810357864.7A
Publication of CN108549876A
Legal status: Pending


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/21: Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214: Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/24: Classification techniques
    • G06F18/241: Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2413: Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on distances to training or reference patterns
    • G06F18/24133: Distances to prototypes
    • G06F18/24137: Distances to cluster centroïds
    • G06F18/2414: Smoothing the distance, e.g. radial basis function networks [RBFN]
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/25: Fusion techniques
    • G06F18/253: Fusion techniques of extracted features
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/04: Architecture, e.g. interconnection topology
    • G06N3/045: Combinations of networks
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/08: Learning methods
    • G06N3/084: Backpropagation, e.g. using gradient descent
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/20: Image preprocessing
    • G06V10/25: Determination of region of interest [ROI] or a volume of interest [VOI]
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/40: Extraction of image or video features

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Multimedia (AREA)
  • Biophysics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Computing Systems (AREA)
  • Molecular Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Computational Linguistics (AREA)
  • Biomedical Technology (AREA)
  • Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Image Analysis (AREA)

Abstract

The present invention relates to a sitting posture detection method based on object detection and human pose estimation, belonging to the fields of image processing and computer vision. The method first extracts a fused feature formed by merging feature I and feature II, and inputs the fused feature into a CNN. If the fused feature comes from the training set, it is used to train the network parameters; if it comes from the validation set, it is used to validate the network parameters, with error signals propagated by the back-propagation algorithm to update gradients and find the optimum. Classification is performed with the softmax activation function to obtain the final classification results and classification accuracy. The invention solves the problem of losing targets among complex multiple targets in existing sitting posture detection, abandons traditional methods that rely on wearable devices or sensors, and adopts object detection and human pose estimation, so that the sitting posture of each subject can be determined accurately even under complex backgrounds and dense crowds.

Description

Sitting posture detection method based on object detection and human pose estimation
Technical field
The invention belongs to the fields of image processing and computer vision, and relates to a sitting posture detection method based on object detection and human pose estimation.
Background technology
With the further development of artificial intelligence, deep learning has attracted more and more attention. Industries closely tied to AI, such as driverless cars and smart home systems, are constantly changing how people live and work, and machines that replace human labor and liberate the productive forces have found wide application in all walks of life. Teaching and management on campus should likewise take advantage of deep learning to improve the work of educators. In the past, assessing a teacher's classroom effectiveness required a dedicated teaching supervisor to visit each classroom, which was time-consuming, laborious, and prone to omissions. Today we can make full use of the video surveillance systems already widely deployed in classrooms and apply artificial intelligence to analyze the effectiveness of every lesson, exploiting existing equipment. How to combine artificial intelligence and machine vision with the monitoring devices distributed across campus to perform intelligent analysis and provide reliable information in real time is therefore of great significance.
Combined with existing video surveillance systems, a sitting posture detection method based on object detection and human pose estimation has particular practical value for student management systems, and can mainly be applied to detecting and locating student postures in the classroom. This involves two aspects. On the one hand, if a teacher's lesson is vivid and interesting, it can attract all students to look up and listen, following the teacher's rhythm; but if students lie on their desks, get distracted, or fall asleep, the quality of that teacher's instruction is likely poor and the teaching method needs improvement. On the other hand, conventional methods, which can be broadly divided into methods based on environmental sensors, wearable devices, and single cameras, cannot detect multiple targets online in real time and are costly, so they offer little advantage.
Invention content
In view of this, the purpose of the present invention is to provide a sitting posture detection method based on object detection and human pose estimation that can detect and classify human sitting postures.
In order to achieve the above objective, the present invention provides the following technical solution:
Sitting posture detection is carried out using a convolutional neural network (CNN), and extraction of the fused feature input to the CNN includes the following steps:
S1: manually annotate the original images; the annotation information includes the bounding box (Bounding Box), the sitting posture class, and the joint point coordinates;
S2: input the original image to the object detection network and crop out single-target images using the Bounding Box information;
S3: label the single-target images by sitting posture class, then input the labeled single-target images into a convolutional neural network and extract the deep neural network feature output by the last convolutional layer as feature I;
S4: input the joint point coordinates and Bounding Box information to a multi-person pose estimation network, perform multi-person pose estimation on the original image, and crop the multi-person pose estimation map into single human skeleton maps;
S5: input the single human skeleton maps into a convolutional neural network and extract the deep neural network feature output by the last convolutional layer as feature II;
S6: fuse feature I and feature II.
Further, the method includes step S7: input the fused feature into the CNN. If the fused feature comes from the training set, it is used to train the network parameters; if it comes from the validation set, it is used to validate the network parameters. Error signals are propagated by the back-propagation algorithm to update gradients and find the optimum, and classification is performed with the softmax activation function to obtain the final classification results and classification accuracy.
Further, step S2 specifically includes:
The object detection network uses Faster RCNN, which cascades a region proposal network (RPN) with a Fast RCNN network. In the first stage, the RPN selects proposal regions in the original image; in the second stage, Fast RCNN further refines the targets in the proposal regions and crops out single-target images.
Further, selecting proposal regions in the original image with the RPN specifically includes:
Sampling the manually annotated Bounding Box enclosing regions, and selecting a sampled region as a proposal region when it is a positive sample region. A sampled region is a positive sample region when its overlap ratio with the Bounding Box enclosing region exceeds a threshold, the threshold being 0.6 to 0.9.
Further, the overlap ratio of the sampled region and the Bounding Box enclosing region is calculated as:
overlap(rg, rn) = area(rg ∩ rn) / area(rg ∪ rn)
where area(rg) is the Bounding Box enclosing region and area(rn) is the sampled region.
Further, step S3 specifically includes:
Assign labels to the single-target images according to sitting posture class and divide the labeled images into training subset I and validation subset I. The input to the CNN classification network is a three-channel single-target image of 40 × 40 pixels. The network comprises three convolutional layers with corresponding nonlinear activation units: the first two convolutional layers represent the high-level features of the image, and the last convolutional layer generates the high-level feature response. The feature map produced by the last convolutional layer is extracted as the feature fused in the subsequent stage, i.e. feature I.
Further, step S4 specifically includes:
Multi-person pose estimation uses the G-RMI method. In the first stage, a Faster RCNN network detects the multiple people in the original image and crops the regions covered by the Bounding Boxes. In the second stage, a residual network (ResNet) based on a fully convolutional network predicts a dense heatmap (Dense Heatmap) and offsets (Offset) for each person in the Bounding Box region. Accurate key point locations are finally obtained by fusing the Dense Heatmap and the Offsets, yielding single human skeleton maps.
Further, step S5 specifically includes:
Divide the single human skeleton maps into training subset II and validation subset II. The input to the CNN classification network is a three-channel single human skeleton map of 40 × 40 pixels. The network comprises three convolutional layers with corresponding nonlinear activation units: the first two convolutional layers represent the high-level features of the image, and the last convolutional layer generates the high-level feature response. The feature map produced by the last convolutional layer is extracted as the feature fused in the subsequent stage, i.e. feature II.
Further, feature I and feature II are fused using an attention mechanism model: reasonable weights are first calculated and a weighted sum is then taken, fusing the two into a single feature vector h*:
h* = α1h1 + α2h2
where α1 is the weight of feature I and h1 the feature map information corresponding to feature I; α2 is the weight of feature II and h2 the feature map information corresponding to feature II.
The beneficial effects of the present invention are: it solves the problem of losing targets among complex multiple targets in existing sitting posture detection, abandons traditional methods that rely on wearable devices or sensors, and adopts object detection and human pose estimation, so that the sitting posture of each subject can be accurately determined even under complex backgrounds and dense crowds.
Description of the drawings
To make the purpose, technical solution, and beneficial effects of the present invention clearer, the following drawings are provided:
Fig. 1 is the flow chart of fused feature extraction in the present invention;
Fig. 2 is the flow chart of sitting posture classification using the fused features in the present invention.
Specific implementation mode
The sitting posture detection method based on object detection and human pose estimation of the present invention is described in further detail below with reference to the accompanying drawings.
The method consists of five parts: human object detection, multi-person pose estimation, feature extraction, feature fusion, and classification. Many object detection methods exist at this stage; those based on a region proposal network (RPN) obtain the best results. The G-RMI method is chosen for multi-person pose estimation because it makes full use of the Bounding Box information generated in the first stage, reducing model redundancy and complexity and improving efficiency. Extraction and selection of image features is a critical link in image processing and strongly influences subsequent image classification. At this stage, hand-crafted features such as edge and corner features are commonly extracted; these are computationally expensive and provide too little information, so the present method uses the convolutional features of a convolutional neural network instead. For feature fusion, rather than taking a simple weighted average of each feature, an attention mechanism model (Attention-based Model) is used so that the model learns the important features autonomously. The sitting posture detection task based on object detection and human pose estimation thus seeks to accurately detect and locate each person's sitting posture under complex backgrounds and multi-person conditions.
As shown in Fig. 1, fused feature extraction includes the following steps:
S1: manually annotate the original images; the annotation information includes the bounding box (Bounding Box), the sitting posture class, and the joint point coordinates;
S2: input the original image to the object detection network and crop out single-target images using the Bounding Box information.
The object detection network uses Faster RCNN, which cascades a region proposal network (RPN) with a Fast RCNN network. In the first stage, the RPN selects proposal regions in the original image; in the second stage, Fast RCNN further refines the targets in the proposal regions and crops out single-target images. For the Faster RCNN network, see the paper by Shaoqing Ren, Kaiming He et al., "Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks".
Selecting proposal regions in the original image with the RPN specifically includes:
Sampling the manually annotated Bounding Box enclosing regions, and selecting a sampled region as a proposal region when it is a positive sample region. A sampled region is a positive sample region when its overlap ratio with the Bounding Box enclosing region exceeds a threshold between 0.6 and 0.9; the present invention uses a threshold of 0.7.
The specific sampling rule is: when the overlap ratio of the sampled region and the Bounding Box enclosing region is greater than 0.7, the sampled region is a positive sample region; when the overlap ratio is greater than 0.3 but less than 0.7, the sampled region is discarded; when the overlap ratio is less than 0.3, the sampled region is a negative sample region.
The overlap ratio of the sampled region and the Bounding Box enclosing region is calculated as:
overlap(rg, rn) = area(rg ∩ rn) / area(rg ∪ rn)
where area(rg) is the Bounding Box enclosing region and area(rn) is the sampled region.
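The positive/negative/discard sampling rule can be sketched in code. A minimal illustration assuming axis-aligned boxes given as (x1, y1, x2, y2) corner coordinates; the function names and thresholds' defaults are ours, not the patent's:

```python
def overlap_ratio(box_g, box_n):
    """Intersection-over-union of a ground-truth box and a sampled box."""
    x1 = max(box_g[0], box_n[0])
    y1 = max(box_g[1], box_n[1])
    x2 = min(box_g[2], box_n[2])
    y2 = min(box_g[3], box_n[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_g = (box_g[2] - box_g[0]) * (box_g[3] - box_g[1])
    area_n = (box_n[2] - box_n[0]) * (box_n[3] - box_n[1])
    return inter / (area_g + area_n - inter)

def label_sample(box_g, box_n, hi=0.7, lo=0.3):
    """Positive above hi, negative below lo, discarded in between."""
    r = overlap_ratio(box_g, box_n)
    if r > hi:
        return "positive"
    if r < lo:
        return "negative"
    return "discard"
```

A fully overlapping sample is positive, a disjoint one negative, and intermediate overlaps are thrown aside, matching the 0.7/0.3 rule above.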
S3: label the single-target images by sitting posture class, then input the labeled single-target images into a convolutional neural network and extract the deep neural network feature output by the last convolutional layer as feature I.
Assign labels to the single-target images according to sitting posture class and divide the labeled images into training subset I and validation subset I. The input to the CNN classification network is a three-channel single-target image of 40 × 40 pixels. The network comprises three convolutional layers with corresponding nonlinear activation units: the first two convolutional layers represent the high-level features of the image, and the last convolutional layer generates the high-level feature response. The feature map produced by the last convolutional layer is extracted as the feature fused in the subsequent stage, i.e. feature I.
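The text fixes a 40 × 40 three-channel input and three convolutional layers but gives no kernel sizes, strides, or pooling. The sketch below only tracks feature-map sizes through such a network under assumed 3 × 3 kernels with stride 1, padding 1, and 2 × 2 max pooling after each layer (all of these are assumptions, and the function names are ours):

```python
def conv2d_out(size, kernel=3, stride=1, pad=1):
    # Standard convolution output-size formula.
    return (size + 2 * pad - kernel) // stride + 1

def pool_out(size, kernel=2, stride=2):
    # Standard pooling output-size formula.
    return (size - kernel) // stride + 1

def feature_map_sizes(size=40, n_layers=3):
    """Spatial size of the feature map after each conv+pool stage."""
    sizes = [size]
    for _ in range(n_layers):
        size = pool_out(conv2d_out(size))
        sizes.append(size)
    return sizes
```

Under these assumptions a 40-pixel input shrinks to 20, 10, and finally 5 pixels, so the last convolutional layer's feature map (feature I) would be 5 × 5 per channel.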
S4: input the joint point coordinates and Bounding Box information to the multi-person pose estimation network, perform multi-person pose estimation on the original image, and crop the multi-person pose estimation map into single human skeleton maps.
Multi-person pose estimation uses the G-RMI method. In the first stage, a Faster RCNN network detects the multiple people in the original image and crops the regions covered by the Bounding Boxes. In the second stage, a residual network (ResNet) based on a fully convolutional network predicts a dense heatmap (Dense Heatmap) and offsets (Offset) for each person in the Bounding Box region. Accurate key point locations are finally obtained by fusing the Dense Heatmap and the Offsets, yielding single human skeleton maps. For the specific G-RMI method, see the paper by George Papandreou, Tyler Zhu et al., "Towards Accurate Multi-person Pose Estimation in the Wild".
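The fusion of the dense heatmap with the offsets can be illustrated with a minimal decoder for a single joint. The array shapes and the simple argmax-plus-offset decoding below are our assumptions for illustration, not details given in the text:

```python
import numpy as np

def decode_keypoint(heatmap, offsets):
    """Recover one joint location from a dense heatmap and per-pixel offsets.

    heatmap: (H, W) activation map for one joint.
    offsets: (H, W, 2) per-pixel (dy, dx) refinement toward the true joint.
    Returns the refined (y, x) position.
    """
    # Coarse location: pixel with the strongest heatmap response.
    y, x = np.unravel_index(np.argmax(heatmap), heatmap.shape)
    # Refine with the sub-pixel offset predicted at that pixel.
    dy, dx = offsets[y, x]
    return y + dy, x + dx
```

The heatmap gives a coarse, quantized location; the offset recovers sub-pixel accuracy, which is the role the Dense Heatmap / Offset fusion plays above.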
S5: input the single human skeleton maps into a convolutional neural network and extract the deep neural network feature output by the last convolutional layer as feature II.
Divide the single human skeleton maps into training subset II and validation subset II. The input to the CNN classification network is a three-channel single human skeleton map of 40 × 40 pixels. The network comprises three convolutional layers with corresponding nonlinear activation units: the first two convolutional layers represent the high-level features of the image, and the last convolutional layer generates the high-level feature response. The feature map produced by the last convolutional layer is extracted as the feature fused in the subsequent stage, i.e. feature II.
S6: fuse feature I and feature II using the attention mechanism model: reasonable weights are first calculated and a weighted sum is then taken, fusing the two into a single feature vector h*:
h* = α1h1 + α2h2
where α1 is the weight of feature I and h1 the feature map information corresponding to feature I; α2 is the weight of feature II and h2 the feature map information corresponding to feature II.
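The weighted fusion h* = α1h1 + α2h2 can be sketched as follows. How the attention weights α are produced is not specified in the text, so a softmax over two scalar attention scores stands in for the scoring step (an assumption, as are the function names):

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax.
    e = np.exp(z - np.max(z))
    return e / e.sum()

def fuse(h1, h2, s1, s2):
    """Attention-weighted sum of two feature vectors.

    h1, h2: feature vectors (feature I and feature II).
    s1, s2: scalar attention scores; softmax turns them into
            weights a1, a2 with a1 + a2 = 1.
    """
    a1, a2 = softmax(np.array([s1, s2]))
    return a1 * h1 + a2 * h2
```

With equal scores the fusion reduces to a plain average; unequal scores let the model emphasize whichever feature is more informative, which is the point of replacing a fixed weighted average with learned attention.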
If the fused feature comes from the training set, it is used to train the network parameters; if it comes from the validation set, it is used to validate the network parameters.
As shown in Fig. 2, after fused features are extracted from the training set and the validation set respectively, the fused features from the training set are input to the convolutional neural network (CNN) for training, and the fused features from the validation set are then input to the CNN to validate the network parameters. Classification of the fused feature maps is performed with the softmax activation function, error signals are propagated by the back-propagation algorithm to update gradients and find the optimum, and the final classification results and classification accuracy are obtained.
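As a generic illustration of the softmax classification and the error signal that back-propagation pushes through the network (not the patent's exact architecture), the softmax probabilities and the cross-entropy gradient with respect to the logits can be written as:

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over class scores.
    e = np.exp(logits - logits.max())
    return e / e.sum()

def cross_entropy_grad(logits, label):
    """Gradient of the cross-entropy loss w.r.t. the logits.

    For softmax + cross-entropy this is simply p - one_hot(label),
    the error signal back-propagated to update the weights.
    """
    p = softmax(logits)
    g = p.copy()
    g[label] -= 1.0
    return g
```

The gradient components sum to zero and are negative only at the true class, so each update pushes the network toward higher probability for the correct sitting posture class.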
One of ordinary skill in the art will appreciate that all or part of the steps in the methods of the above embodiments can be completed by a program instructing the relevant hardware; the program can be stored in a computer-readable storage medium, which may include ROM, RAM, magnetic disks, optical discs, and the like.
The embodiments provided above describe the purpose, technical solution, and advantages of the present invention in further detail. It should be understood that they are only preferred embodiments of the present invention and are not intended to limit it; any modification, equivalent substitution, or improvement made within the spirit and principles of the present invention shall be included within its scope of protection.

Claims (9)

1. A sitting posture detection method based on object detection and human pose estimation, characterized in that sitting posture detection is carried out using a convolutional neural network (CNN), and extraction of the fused feature input to the CNN comprises the following steps:
S1: manually annotate the original images, the annotation information including the bounding box (Bounding Box), the sitting posture class, and the joint point coordinates;
S2: input the original image to the object detection network and crop out single-target images using the Bounding Box information;
S3: label the single-target images by sitting posture class, then input the labeled single-target images into a convolutional neural network and extract the deep neural network feature output by the last convolutional layer as feature I;
S4: input the joint point coordinates and Bounding Box information to a multi-person pose estimation network, perform multi-person pose estimation on the original image, and crop the multi-person pose estimation map into single human skeleton maps;
S5: input the single human skeleton maps into a convolutional neural network and extract the deep neural network feature output by the last convolutional layer as feature II;
S6: fuse feature I and feature II.
2. The sitting posture detection method based on object detection and human pose estimation according to claim 1, characterized by further comprising step S7: input the fused feature into the CNN; if the fused feature comes from the training set, it is used to train the network parameters; if it comes from the validation set, it is used to validate the network parameters, error signals are propagated by the back-propagation algorithm to update gradients and find the optimum, and classification is performed with the softmax activation function to obtain the final classification results and classification accuracy.
3. The sitting posture detection method based on object detection and human pose estimation according to claim 1, characterized in that step S2 specifically includes:
the object detection network uses Faster RCNN, which cascades a region proposal network (RPN) with a Fast RCNN network; in the first stage, the RPN selects proposal regions in the original image; in the second stage, Fast RCNN further refines the targets in the proposal regions and crops out single-target images.
4. The sitting posture detection method based on object detection and human pose estimation according to claim 3, characterized in that selecting proposal regions in the original image with the RPN specifically includes:
sampling the manually annotated Bounding Box enclosing regions, and selecting a sampled region as a proposal region when it is a positive sample region; a sampled region is a positive sample region when its overlap ratio with the Bounding Box enclosing region exceeds a threshold, the threshold being 0.6 to 0.9.
5. The sitting posture detection method based on object detection and human pose estimation according to claim 4, characterized in that the overlap ratio of the sampled region and the Bounding Box enclosing region is calculated as:
overlap(rg, rn) = area(rg ∩ rn) / area(rg ∪ rn)
where area(rg) is the Bounding Box enclosing region and area(rn) is the sampled region.
6. The sitting posture detection method based on object detection and human pose estimation according to claim 1, characterized in that step S3 specifically includes:
assigning labels to the single-target images according to sitting posture class and dividing the labeled images into training subset I and validation subset I; the input to the CNN classification network is a three-channel single-target image of 40 × 40 pixels, and the network comprises three convolutional layers with corresponding nonlinear activation units, the first two convolutional layers representing the high-level features of the image and the last convolutional layer generating the high-level feature response; the feature map produced by the last convolutional layer is extracted as the feature fused in the subsequent stage, i.e. feature I.
7. The sitting posture detection method based on object detection and human pose estimation according to claim 1, characterized in that step S4 specifically includes:
multi-person pose estimation uses the G-RMI method; in the first stage, a Faster RCNN network detects the multiple people in the original image and crops the regions covered by the Bounding Boxes; in the second stage, a residual network (ResNet) based on a fully convolutional network predicts a dense heatmap (Dense Heatmap) and offsets (Offset) for each person in the Bounding Box region; accurate key point locations are finally obtained by fusing the Dense Heatmap and the Offsets, yielding single human skeleton maps.
8. The sitting posture detection method based on object detection and human pose estimation according to claim 1, characterized in that step S5 specifically includes:
dividing the single human skeleton maps into training subset II and validation subset II; the input to the CNN classification network is a three-channel single human skeleton map of 40 × 40 pixels, and the network comprises three convolutional layers with corresponding nonlinear activation units, the first two convolutional layers representing the high-level features of the image and the last convolutional layer generating the high-level feature response; the feature map produced by the last convolutional layer is extracted as the feature fused in the subsequent stage, i.e. feature II.
9. The sitting posture detection method based on object detection and human pose estimation according to claim 1, characterized in that feature I and feature II are fused using an attention mechanism model: reasonable weights are first calculated and a weighted sum is then taken, fusing the two into a single feature vector h*:
h* = α1h1 + α2h2
where α1 is the weight of feature I and h1 the feature map information corresponding to feature I; α2 is the weight of feature II and h2 the feature map information corresponding to feature II.
CN201810357864.7A 2018-04-20 2018-04-20 Sitting posture detection method based on object detection and human pose estimation Pending CN108549876A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810357864.7A CN108549876A (en) 2018-04-20 2018-04-20 Sitting posture detection method based on object detection and human pose estimation


Publications (1)

Publication Number Publication Date
CN108549876A true CN108549876A (en) 2018-09-18

Family

ID=63511827

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810357864.7A Pending CN108549876A (en) 2018-04-20 2018-04-20 The sitting posture detecting method estimated based on target detection and human body attitude

Country Status (1)

Country Link
CN (1) CN108549876A (en)


Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101630410A (en) * 2009-08-18 2010-01-20 北京航空航天大学 Human body sitting posture judgment method based on single camera
CN103500330A (en) * 2013-10-23 2014-01-08 中科唯实科技(北京)有限公司 Semi-supervised human detection method based on multi-sensor and multi-feature fusion
KR101563297B1 (en) * 2014-04-23 2015-10-26 한양대학교 산학협력단 Method and apparatus for recognizing action in video
CN105335716A (en) * 2015-10-29 2016-02-17 北京工业大学 Improved UDN joint-feature extraction-based pedestrian detection method
CN105787439A (en) * 2016-02-04 2016-07-20 广州新节奏智能科技有限公司 Depth image human body joint positioning method based on convolution nerve network
CN106096561A (en) * 2016-06-16 2016-11-09 重庆邮电大学 Infrared pedestrian detection method based on image block degree of depth learning characteristic
CN106445138A (en) * 2016-09-21 2017-02-22 中国农业大学 Human body posture feature extracting method based on 3D joint point coordinates
CN106650827A (en) * 2016-12-30 2017-05-10 南京大学 Human body posture estimation method and system based on structure guidance deep learning
CN107358149A (en) * 2017-05-27 2017-11-17 深圳市深网视界科技有限公司 A kind of human body attitude detection method and device
CN107862705A (en) * 2017-11-21 2018-03-30 重庆邮电大学 A kind of unmanned plane small target detecting method based on motion feature and deep learning feature


Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
GEORGE PAPANDREOU et al.: "Towards Accurate Multi-person Pose Estimation in the Wild", 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR) *
LONGHUI WEI et al.: "GLAD: Global-Local-Alignment Descriptor for Pedestrian Retrieval", 2017 ACM *
SHAOQING REN et al.: "Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks", IEEE Transactions on Pattern Analysis and Machine Intelligence *
DAI Xiguo: "Research on Human Pose Recognition Based on Convolutional Neural Networks" (in Chinese), China Master's Theses Full-text Database, Information Science and Technology Series *
CHEN Wanjun et al.: "A Survey of Human Action Recognition Based on Depth Information" (in Chinese), Journal of Xi'an University of Technology *

Cited By (43)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109543549A (en) * 2018-10-26 2019-03-29 北京陌上花科技有限公司 Image processing method and device, mobile end equipment, server for more people's Attitude estimations
CN109447976B (en) * 2018-11-01 2020-07-07 电子科技大学 Medical image segmentation method and system based on artificial intelligence
CN109447976A (en) * 2018-11-01 2019-03-08 电子科技大学 A medical image segmentation method and system based on artificial intelligence
CN109657631A (en) * 2018-12-25 2019-04-19 上海智臻智能网络科技股份有限公司 Human posture recognition method and device
CN109657631B (en) * 2018-12-25 2020-08-11 上海智臻智能网络科技股份有限公司 Human body posture recognition method and device
CN109711374A (en) * 2018-12-29 2019-05-03 深圳美图创新科技有限公司 Skeleton point recognition methods and device
CN109711374B (en) * 2018-12-29 2021-06-04 深圳美图创新科技有限公司 Human body bone point identification method and device
CN109858376A (en) * 2019-01-02 2019-06-07 武汉大学 A kind of intelligent desk lamp with child healthy learning supervisory role
CN109858444A (en) * 2019-01-31 2019-06-07 北京字节跳动网络技术有限公司 The training method and device of human body critical point detection model
CN109758756A (en) * 2019-02-28 2019-05-17 国家体育总局体育科学研究所 Gymnastics video analysis method and system based on 3D camera
CN109758756B (en) * 2019-02-28 2021-03-23 国家体育总局体育科学研究所 Gymnastics video analysis method and system based on 3D camera
CN110123347A (en) * 2019-03-22 2019-08-16 杭州深睿博联科技有限公司 Image processing method and device for breast molybdenum target
CN110070001A (en) * 2019-03-28 2019-07-30 上海拍拍贷金融信息服务有限公司 Behavioral value method and device, computer readable storage medium
CN110321786A (en) * 2019-05-10 2019-10-11 北京邮电大学 A kind of human body sitting posture based on deep learning monitors method and system in real time
CN110210402A (en) * 2019-06-03 2019-09-06 北京卡路里信息技术有限公司 Feature extracting method, device, terminal device and storage medium
US11875529B2 (en) 2019-06-14 2024-01-16 Hinge Health, Inc. Method and system for monocular depth estimation of persons
WO2020250046A1 (en) * 2019-06-14 2020-12-17 Wrnch Inc. Method and system for monocular depth estimation of persons
US11354817B2 (en) 2019-06-14 2022-06-07 Hinge Health, Inc. Method and system for monocular depth estimation of persons
CN110415270A (en) * 2019-06-17 2019-11-05 广东第二师范学院 A kind of human motion form evaluation method based on double study mapping increment dimensionality reduction models
CN110543578B (en) * 2019-08-09 2024-05-14 华为技术有限公司 Object identification method and device
CN110543578A (en) * 2019-08-09 2019-12-06 华为技术有限公司 object recognition method and device
CN110807380A (en) * 2019-10-22 2020-02-18 北京达佳互联信息技术有限公司 Human body key point detection method and device
CN110807380B (en) * 2019-10-22 2023-04-07 北京达佳互联信息技术有限公司 Human body key point detection method and device
CN110826500A (en) * 2019-11-08 2020-02-21 福建帝视信息科技有限公司 Method for estimating 3D human body posture based on antagonistic network of motion link space
CN110826500B (en) * 2019-11-08 2023-04-14 福建帝视信息科技有限公司 Method for estimating 3D human body posture based on antagonistic network of motion link space
CN110956218A (en) * 2019-12-10 2020-04-03 同济人工智能研究院(苏州)有限公司 Method for generating target detection football candidate points of Nao robot based on Heatmap
CN111222437A (en) * 2019-12-31 2020-06-02 浙江工业大学 Human body posture estimation method based on multi-depth image feature fusion
CN112689842A (en) * 2020-03-26 2021-04-20 华为技术有限公司 Target detection method and device
WO2022041222A1 (en) * 2020-08-31 2022-03-03 Top Team Technology Development Limited Process and system for image classification
CN112329728A (en) * 2020-11-27 2021-02-05 顾翀 Multi-person sitting posture detection method and system based on object detection
CN112819885A (en) * 2021-02-20 2021-05-18 深圳市英威诺科技有限公司 Animal identification method, device and equipment based on deep learning and storage medium
CN113065431B (en) * 2021-03-22 2022-06-17 浙江理工大学 Human body violation prediction method based on hidden Markov model and recurrent neural network
CN113065431A (en) * 2021-03-22 2021-07-02 浙江理工大学 Human body violation prediction method based on hidden Markov model and recurrent neural network
CN113379794B (en) * 2021-05-19 2023-07-25 重庆邮电大学 Single-target tracking system and method based on attention-key point prediction model
CN113379794A (en) * 2021-05-19 2021-09-10 重庆邮电大学 Single-target tracking system and method based on attention-key point prediction model
CN113288122A (en) * 2021-05-21 2021-08-24 河南理工大学 Wearable sitting posture monitoring device and sitting posture monitoring method
CN113288122B (en) * 2021-05-21 2023-12-19 河南理工大学 Wearable sitting posture monitoring device and sitting posture monitoring method
CN113487674A (en) * 2021-07-12 2021-10-08 北京未来天远科技开发有限公司 Human body pose estimation system and method
CN113487674B (en) * 2021-07-12 2024-03-08 未来元宇数字科技(北京)有限公司 Human body pose estimation system and method
CN113705631A (en) * 2021-08-10 2021-11-26 重庆邮电大学 3D point cloud target detection method based on graph convolution
CN113705631B (en) * 2021-08-10 2024-01-23 大庆瑞昂环保科技有限公司 3D point cloud target detection method based on graph convolution
CN113627326B (en) * 2021-08-10 2024-04-12 国网福建省电力有限公司营销服务中心 Behavior recognition method based on wearable equipment and human skeleton
CN113627326A (en) * 2021-08-10 2021-11-09 国网福建省电力有限公司营销服务中心 Behavior identification method based on wearable device and human skeleton

Similar Documents

Publication Publication Date Title
CN108549876A (en) The sitting posture detecting method estimated based on target detection and human body attitude
CN107506722A (en) A facial emotion recognition method based on a deep sparse convolutional neural network
CN108805009A (en) Classroom learning state monitoring method based on multimodal information fusion and system
CN107862705A (en) A kind of unmanned plane small target detecting method based on motion feature and deep learning feature
WO2019028592A1 (en) Teaching assistance method and teaching assistance system using said method
CN106570464A (en) Human face recognition method and device for quickly processing human face shading
CN108986140A (en) Target scale adaptive tracking method based on correlation filtering and color detection
CN107633511A (en) A wind turbine visual inspection system based on an autoencoder neural network
CN105447473A (en) PCANet-CNN-based arbitrary attitude facial expression recognition method
CN109241830B (en) Classroom lecture listening abnormity detection method based on illumination generation countermeasure network
Hu et al. Research on abnormal behavior detection of online examination based on image information
CN106951923A (en) A kind of robot three-dimensional shape recognition process based on multi-camera Vision Fusion
CN107945210A (en) Target tracking algorism based on deep learning and environment self-adaption
CN110135282A (en) A method for detecting examinee plagiarism cheating based on a deep convolutional neural network model
CN109886356A (en) A kind of target tracking method based on three branch's neural networks
CN107301376A (en) A kind of pedestrian detection method stimulated based on deep learning multilayer
CN109508661A (en) A hand-raising detection method based on object detection and pose estimation
CN109191488A (en) A kind of Target Tracking System and method based on CSK Yu TLD blending algorithm
CN105894008A (en) Target motion track method through combination of feature point matching and deep nerve network detection
Balasuriya et al. Learning platform for visually impaired children through artificial intelligence and computer vision
CN109087337A (en) Long-time method for tracking target and system based on layering convolution feature
Li et al. An e-learning system model based on affective computing
CN109472464A (en) A kind of appraisal procedure of the online course quality based on eye movement tracking
Xu et al. Classroom attention analysis based on multiple euler angles constraint and head pose estimation
CN109712171A (en) A kind of Target Tracking System and method for tracking target based on correlation filter

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20180918
