CN112364712A - Human posture-based sitting posture identification method and system and computer-readable storage medium - Google Patents

Human posture-based sitting posture identification method and system and computer-readable storage medium

Info

Publication number
CN112364712A
CN112364712A
Authority
CN
China
Prior art keywords
posture
human
sitting posture
sitting
model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011131430.9A
Other languages
Chinese (zh)
Inventor
陈龙彪
郜盛夏
魏文轩
黄华生
刘林津
杨晨晖
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Fujian Mengniu Zhilian Lighting Co ltd
Xiamen University
Original Assignee
Fujian Mengniu Zhilian Lighting Co ltd
Xiamen University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Fujian Mengniu Zhilian Lighting Co ltd and Xiamen University
Priority to CN202011131430.9A
Publication of CN112364712A
Legal status: Pending

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/24 - Classification techniques
    • G06F18/241 - Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/04 - Architecture, e.g. interconnection topology
    • G06N3/045 - Combinations of networks
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/08 - Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Software Systems (AREA)
  • Mathematical Physics (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a sitting posture recognition method and system based on human body posture, and a computer-readable storage medium. The method comprises: collecting human sitting posture pictures and constructing a human posture feature training data set; constructing and training a neural network model for classifying sitting posture categories; and predicting the sitting posture category using OpenPose and the trained neural network model. The method directly extracts human skeleton features and judges the sitting posture category on the basis of those features, thereby avoiding the influence of irrelevant factors such as the background and improving the accuracy of sitting posture recognition.

Description

Human posture-based sitting posture identification method and system and computer-readable storage medium
Technical Field
The invention relates to the field of computer vision, in particular to a sitting posture identification method and system based on human body posture and a computer readable storage medium.
Background
In modern life, students, office workers and other people need to sit for long periods, and maintaining a poor sitting posture for a long time can lead to sub-health and many chronic diseases. Continuously monitoring the sitting posture of the human body and giving timely reminders therefore helps people who sit for long periods avoid injury to the cervical vertebrae and other body parts.
In recent years, sitting posture recognition technology has received increasing attention from researchers at home and abroad. The prior art mainly falls into the following two types: 1) Depth-camera-based methods, which use a depth sensing device to obtain three-dimensional data of the scene and extract joint point data such as the head and eyes from the three-dimensional data. 2) Template-based methods, which compare the picture to a standard sitting posture picture by measuring the similarity between the actual test object and a standard template object using Oriented FAST and Rotated BRIEF (ORB) feature extraction and description.
In recent years, with the rapid development of deep learning in the field of computer vision, a large number of sitting posture recognition methods based on deep learning have emerged, for example, using a convolutional neural network to extract picture features for sitting posture classification. Such methods have the drawback of being easily affected by the background and the camera angle.
Disclosure of Invention
The invention mainly aims to overcome the defects in the prior art and provides a sitting posture identification method based on human body posture.
The invention adopts the following technical scheme:
a sitting posture identification method based on human body postures specifically comprises the following steps:
s1, collecting human sitting posture pictures, and constructing a human posture characteristic training data set;
s2, constructing and training a neural network model for classifying sitting posture categories;
and S3, predicting the sitting posture category by utilizing OpenPose and the trained neural network model.
Specifically, collecting human body sitting posture pictures in S1, constructing a human body posture feature training data set, specifically including:
s11, classifying the sitting postures into correcting, bending backwards, leaning forwards left, leaning forwards right and lying prone;
s12, acquiring a human sitting posture picture and manually marking different sitting posture categories;
s13, placing the pictures into different folders according to the sitting posture category, and automatically reading the pictures in the different folders by using a script program;
s14, loading the OpenPose model to extract human posture features in the picture, and storing the human posture features as a human posture feature picture;
s15, placing the human posture feature pictures output by the OpenPose model into different folders according to sitting posture categories;
s16, dividing the data set into a training set, a verification set and a test set;
and S17, when the training set is loaded, randomly augmenting the data to enlarge the data set, namely translating, scaling and warping the pictures.
Specifically, the constructing and training of the neural network model for classifying the sitting posture category in S2 specifically includes:
s21, selecting a predefined neural network in a deep learning framework, and setting the input size and the output dimension of the network according to the number of sitting posture categories;
s22, adopting a deep residual error network ResNet as a classification network, and simultaneously initializing parameters of the ResNet classification network by using a pre-training model of the ResNet classification network on an ImageNet data set;
s23, taking the softmax function as the loss function of ResNet;
and S24, iteratively training the model parameters using a softmax classifier.
Specifically, the predicting the sitting posture category by using openpos and the trained neural network model in S3 specifically includes:
s31, loading an OpenPose model and a trained neural network model;
s32, selecting a sitting posture picture as an input of the OpenPose model, and storing the sitting posture picture as a human posture characteristic picture;
s33, inputting the human posture characteristic picture as a classification model, and predicting the probability of different sitting posture categories of the human posture characteristic picture by using a neural network model;
and S34, taking the person with the maximum probability value as the sitting posture category of the input picture.
In another aspect, the present invention provides a sitting posture recognition system based on human body posture, comprising:
an acquisition module: collecting human sitting posture pictures and constructing a human posture characteristic training data set;
a training module: constructing and training a neural network model for classifying sitting postures;
a prediction module: and predicting the sitting posture category by utilizing OpenPose and a trained neural network model.
Specifically, the acquisition module collects human sitting posture pictures and constructs a human posture feature training data set, specifically including:
classifying the sitting postures into six categories: upright, leaning backward, leaning forward, leaning forward-left, leaning forward-right and lying prone;
acquiring a human body sitting posture picture and manually marking different sitting posture categories;
placing the pictures into different folders according to the sitting posture category, and automatically reading the pictures in the different folders by using a script program;
loading an OpenPose model to extract human posture features in the picture, merging the human posture features by the OpenPose model through a partial affinity field algorithm, and storing the human posture features as a human posture feature picture;
placing the human posture characteristic pictures output by the OpenPose model into different folders according to sitting posture categories;
dividing a data set into a training set, a verification set and a test set;
when the training set is loaded, data augmentation is randomly applied to enlarge the data set, i.e. the pictures are translated, scaled and warped.
Specifically, the training module constructs and trains a neural network model for classifying sitting posture categories, specifically including:
selecting a predefined neural network in a deep learning framework, and setting the input size and the output dimension of the network according to the number of sitting posture categories;
adopting a deep residual error network ResNet as a classification network, and simultaneously initializing parameters of the ResNet classification network by using a pre-training model of the ResNet classification network on an ImageNet data set;
taking a softmax function as a loss function of ResNet;
iteratively training the model parameters using the softmax classifier.
Specifically, the prediction module predicts the sitting posture category using OpenPose and the trained neural network model, specifically including:
loading an OpenPose model and a trained neural network model;
selecting a sitting posture picture as the input of the OpenPose model, where OpenPose merges the human posture features using the partial affinity field algorithm and saves them as a human posture feature picture;
taking the human posture feature picture as the input of the classification model, where the neural network model predicts the probabilities of the different sitting posture categories for the picture;
and taking the category with the maximum probability value as the sitting posture category of the input picture.
Still another aspect of the present invention provides a non-transitory computer readable storage medium storing computer instructions for executing the above-mentioned human posture-based sitting posture recognition method.
As can be seen from the above description of the present invention, compared with the prior art, the present invention has the following advantages:
(1) constructing a human posture characteristic training data set by collecting human sitting posture pictures; constructing and training a neural network model for classifying sitting postures; predicting the sitting posture category by utilizing OpenPose and a trained neural network model; the key point of the invention is that the human skeleton characteristics are directly extracted, and the sitting posture category is judged on the basis of the human skeleton characteristics, so that the influence of irrelevant factors such as background and the like is avoided, and the accuracy rate of sitting posture identification is improved.
Drawings
FIG. 1 is a flow chart of a sitting posture recognition method according to an embodiment of the present invention;
FIG. 2 is a schematic block diagram of a process for predicting a sitting posture category according to an embodiment of the present invention;
fig. 3 is a detailed parameter setting of the ResNet network according to an embodiment of the present invention;
FIG. 4 is a diagram of an example of a practical application of the sitting posture recognition of the present invention, where FIG. 4(a) is an original diagram and FIG. 4(b) is a diagram of a corresponding human posture feature;
fig. 5 is a diagram of another practical application example of the sitting posture recognition of the present invention, where fig. 5(a) is an original diagram and fig. 5(b) is a corresponding human posture feature diagram;
fig. 6 is another practical example of sitting posture recognition according to the present invention, where fig. 6(a) is an original drawing and fig. 6(b) is a corresponding human posture feature diagram.
The invention is described in further detail below with reference to the figures and specific examples.
Detailed Description
Figs. 1-2 are a flow chart and a schematic block diagram of the sitting posture recognition method based on human body posture according to the present invention, which specifically includes:
s1, collecting human sitting posture pictures, and constructing a human posture characteristic training data set;
s2, constructing and training a neural network model for classifying sitting posture categories;
and S3, predicting the sitting posture category by utilizing OpenPose and the trained neural network model.
Further, step S1 specifically includes:
s11, classifying the sitting postures into correcting, bending backwards, leaning forwards left, leaning forwards right and lying prone;
s12, shooting and collecting a large number of human sitting posture pictures and manually marking different sitting posture categories;
s13, placing the pictures into different folders according to the sitting posture types, and automatically reading the pictures in the different folders by using a script program, wherein the folder where the pictures are located contains the sitting posture type information of the pictures;
s14, extracting a human posture feature map by using Openpos in the text, and storing the human posture feature map as a picture;
the openpos human posture recognition project is an open source library developed by the university of Cambridge (CMU) based on a convolutional neural network and supervised learning and framed by caffe. The gesture estimation of human body action, facial expression, finger motion and the like can be realized. The method is suitable for single person and multiple persons, and has excellent robustness. The overall process of OpenPose recognition of the human body posture comprises the steps of predicting the confidence coefficient of key points and the affinity vector of the key points of an input image, clustering the key points, and finally connecting a skeleton. The human body posture characteristic diagram in the picture is extracted by using the OpenPose model and is stored as the picture.
And S15, similarly, placing the human posture characteristic pictures output by OpenPose into different folders according to the sitting posture types.
S16, dividing the data set into a training set, a verification set and a test set;
and S17, when the training set is loaded, randomly augmenting the data to enlarge the data set, namely translating, scaling and warping the pictures (an illustrative loading-and-augmentation sketch is given below).
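As a hedged sketch only, not part of the original disclosure, steps S13 and S15-S17 could be realized in Keras roughly as follows; the directory layout, the split ratio and the augmentation ranges are assumptions, and only the general mechanism (class sub-folders read automatically, random translation, scaling and warping applied when the training set is loaded, batches of 16) reflects the description above.

```python
# Sketch: read pose-feature pictures from per-category folders and apply
# random augmentation when the training set is loaded. Folder names, the
# split ratio and the augmentation ranges are illustrative assumptions.
import tensorflow as tf

datagen = tf.keras.preprocessing.image.ImageDataGenerator(
    rescale=1.0 / 255,
    width_shift_range=0.1,   # random translation
    height_shift_range=0.1,
    zoom_range=0.1,          # random scaling
    shear_range=0.1,         # random warping
    validation_split=0.2,    # assumed train/validation split
)

train_gen = datagen.flow_from_directory(
    "data/pose/",            # one sub-folder per sitting-posture category
    target_size=(640, 480),  # matches the (640, 480, 3) network input used below
    batch_size=16,
    class_mode="categorical",
    subset="training",
)

val_gen = datagen.flow_from_directory(
    "data/pose/",
    target_size=(640, 480),
    batch_size=16,
    class_mode="categorical",
    subset="validation",
)
```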
S2, constructing and training a neural network model for classifying the sitting posture class, wherein ResNet is adopted.
Further, step S2 specifically includes:
s21, selecting a predefined depth residual error network ResNet as a classification network at keras (a deep learning framework), and setting the input size of the network to be (640,480,3) according to the number of sitting posture classes, wherein the input size represents the length, the width, the channel number and the output dimension of an input picture respectively. In particular, ResNet50 is employed herein as a classification network. The residual network ResNet is characterized by ease of optimization and can improve accuracy by adding significant depth. The inner residual block uses jump connection, and the problem of gradient disappearance caused by depth increase in a deep neural network is relieved. The specific parameters of the ResNet50 are shown as the 5 th column 50-layer from the left in FIG. 3, and are shown as the detailed parameter setting diagram of the ResNet network in the embodiment of the invention in FIG. 3.
S22, initializing the ResNet network parameters with a ResNet model pre-trained on the ImageNet data set, wherein the last layer of the network is a randomly initialized fully connected layer and the remaining parameters are taken from the pre-trained model. Specifically, the pre-trained ResNet model has its original output layer removed and a randomly initialized fully connected layer attached as the output layer of the model.
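By way of a hedged illustration only (not part of the original disclosure), a minimal Keras sketch of steps S21-S22 might look as follows; the global-average-pooling layer and the variable names are assumptions, and only the use of an ImageNet-pretrained ResNet50 backbone with a freshly initialized fully connected softmax output layer follows the description above.

```python
# Sketch: ResNet50 backbone pre-trained on ImageNet, with a randomly
# initialized fully connected softmax layer for the 6 sitting-posture classes.
import tensorflow as tf

NUM_CLASSES = 6

backbone = tf.keras.applications.ResNet50(
    weights="imagenet",         # initialize with the ImageNet pre-trained model
    include_top=False,          # drop the original 1000-class output layer
    input_shape=(640, 480, 3),  # size of the pose-feature picture
)

x = tf.keras.layers.GlobalAveragePooling2D()(backbone.output)
outputs = tf.keras.layers.Dense(NUM_CLASSES, activation="softmax")(x)  # 1 x 6 output
model = tf.keras.Model(inputs=backbone.input, outputs=outputs)
```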
S23, taking a softmax function as a loss function of ResNet, wherein the softmax has the following specific formula:
L_i = -log( e^{f_{y_i}} / Σ_j e^{f_j} )
wherein y_i denotes the true category of the i-th sample, f_j denotes the score output by the network for category j, and L_i denotes the loss of the i-th sample.
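As a brief illustrative example (not from the original text): if, for a given sample, a three-category network outputs the scores f = (2.0, 1.0, 0.1) and the true category has score 2.0, then e^{2.0} ≈ 7.39, e^{1.0} ≈ 2.72 and e^{0.1} ≈ 1.11, the softmax probability of the true category is 7.39 / (7.39 + 2.72 + 1.11) ≈ 0.66, and the loss is L ≈ -log(0.66) ≈ 0.42; the higher the probability assigned to the correct category, the smaller the loss.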
S24, since the number of sitting posture categories exemplified herein is 6, the output dimension of the classification network is a 1 × 6 dimensional array;
and S25, inputting 16 pieces of picture data with sitting posture categories in each batch by using a softmax classifier, and iteratively training model parameters. And when the loss of the training set and the loss of the verification set tend to be stable, the model is saved after training.
S3, predicting the sitting posture category by utilizing OpenPose and the classification model trained in S2;
further, step S3 specifically includes:
s31, loading the OpenPose model and the trained classification model in S2, which is hereinafter referred to as a classification model.
And S32, selecting a picture as the input of the OpenPose model; OpenPose outputs a human skeleton feature picture, referred to as the human posture feature picture, which is a heat map of the main skeletal joints of the human body, i.e. the main bones such as the trunk and limbs.
S33, the human posture feature picture is used as the input of the classification model, with input dimension [640, 480, 3]; the ResNet model outputs the probabilities of the different sitting posture categories for the feature picture. In the embodiment of the invention the output dimension is [1, 6], representing the probabilities of upright, leaning backward, leaning forward, leaning forward-left, leaning forward-right and lying prone respectively, and the sitting posture category with the highest probability is finally selected as the prediction result.
And S34, taking the category with the maximum probability value as the sitting posture category of the input picture (an illustrative end-to-end prediction sketch is given below).
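For illustration only and under the assumptions of the earlier sketches (the extract_pose_feature helper, the saved model file name and the class ordering are assumed, not taken from the original disclosure), the prediction flow of S31-S34 might be scripted as:

```python
# Sketch: predict the sitting-posture category of one picture.
# Helper, file names and class ordering below are illustrative assumptions.
import numpy as np
import tensorflow as tf

CLASSES = ["upright", "leaning backward", "leaning forward",
           "leaning forward-left", "leaning forward-right", "lying prone"]

model = tf.keras.models.load_model("sitting_posture_resnet50.h5")

# S32: run OpenPose on the input picture and save the pose-feature picture.
extract_pose_feature("query.jpg", "query_pose.jpg")

# S33: feed the pose-feature picture to the classifier.
img = tf.keras.preprocessing.image.load_img("query_pose.jpg", target_size=(640, 480))
x = tf.keras.preprocessing.image.img_to_array(img)[None] / 255.0  # shape (1, 640, 480, 3)
probs = model.predict(x)[0]                                       # shape (6,)

# S34: the category with the maximum probability is the predicted sitting posture.
print(CLASSES[int(np.argmax(probs))], probs)
```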
The embodiment of the invention implements the sitting posture recognition method based on human posture according to the above steps; the following are examples of practical application of the method provided by the invention.
Fig. 4(a) is an original picture of an upright sitting posture, with a straight back and both hands placed properly on the table, and fig. 4(b) is the corresponding human posture feature map extracted from the picture by the OpenPose model of the present invention, i.e. a heat map of the main skeletal joints of the human body such as the trunk and limbs. The classification model then outputs the probabilities of upright, leaning backward, leaning forward, leaning forward-left, leaning forward-right and lying prone as [9.34e-01, 3.99e-02, 1.13e-02, 1.41e-02, 3.93e-04, 4.13e-08], and the sitting posture category of fig. 4(a) is finally predicted to be upright.
As shown in fig. 5(a), the original picture shows a person operating a computer with the upper body leaning forward, and fig. 5(b) is the corresponding human posture feature map extracted from the picture by the OpenPose model of the present invention, i.e. a heat map of the main skeletal joints of the human body such as the trunk and limbs. The classification model then outputs the probabilities of upright, leaning backward, leaning forward, leaning forward-left, leaning forward-right and lying prone as [5.22e-09, 4.58e-08, 9.99e-01, 8.68e-08, 1.33e-08, 2.56e-10], and the sitting posture category of fig. 5(a) is finally predicted to be leaning forward.
As shown in fig. 6(a), the original picture shows a person reading a book with the upper body lying on the desk, and fig. 6(b) is the corresponding human posture feature map extracted from the picture by the OpenPose model of the present invention, i.e. a heat map of the main skeletal joints of the human body such as the trunk and limbs. The classification model then outputs category probabilities of [2.45e-12, 7.46e-17, 6.98e-08, 3.51e-14, 2.27e-11, …], and the sitting posture category of fig. 6(a) is finally predicted to be lying prone.
The method provided by the invention directly extracts the human skeleton characteristics and judges the sitting posture type on the basis of the human skeleton characteristics, thereby avoiding the influence of irrelevant factors such as background and the like and improving the accuracy rate of sitting posture identification.
Another aspect of an embodiment of the present invention provides a sitting posture recognition system based on human body posture, including:
an acquisition module: collecting human sitting posture pictures and constructing a human posture characteristic training data set;
a training module: constructing and training a neural network model for classifying sitting postures;
a prediction module: and predicting the sitting posture category by utilizing OpenPose and a trained neural network model.
Specifically, the acquisition module collects human sitting posture pictures and constructs a human posture feature training data set, specifically including:
classifying the sitting postures into six categories: upright, leaning backward, leaning forward, leaning forward-left, leaning forward-right and lying prone;
acquiring a human body sitting posture picture and manually marking different sitting posture categories;
placing the pictures into different folders according to the sitting posture category, and automatically reading the pictures in the different folders by using a script program;
loading an OpenPose model to extract human posture features in the picture, merging the human posture features by the OpenPose model through a partial affinity field algorithm, and storing the human posture features as a human posture feature picture;
placing the human posture characteristic pictures output by the OpenPose model into different folders according to sitting posture categories;
dividing a data set into a training set, a verification set and a test set;
when the training set is loaded, data augmentation is randomly applied to enlarge the data set, i.e. the pictures are translated, scaled and warped.
Specifically, the training module constructs and trains a neural network model for classifying sitting posture categories, specifically including:
selecting a predefined neural network in a deep learning framework, and setting the input size and the output dimension of the network according to the number of sitting posture categories;
adopting a deep residual error network ResNet as a classification network, and simultaneously initializing parameters of the ResNet classification network by using a pre-training model of the ResNet classification network on an ImageNet data set;
taking a softmax function as a loss function of ResNet;
iteratively training the model parameters using the softmax classifier.
Specifically, the prediction module predicts the sitting posture category using OpenPose and the trained neural network model, specifically including:
loading an OpenPose model and a trained neural network model;
selecting a sitting posture picture as the input of the OpenPose model, where OpenPose merges the human posture features using the partial affinity field algorithm and saves them as a human posture feature picture;
taking the human posture feature picture as the input of the classification model, where the neural network model predicts the probabilities of the different sitting posture categories for the picture;
and taking the category with the maximum probability value as the sitting posture category of the input picture.
Yet another aspect of the embodiments of the present invention provides a non-transitory computer readable storage medium storing computer instructions, wherein the computer instructions are configured to execute the above sitting posture recognition method based on human body posture.
Computer instructions (also known as programs, software applications or code) include machine instructions for a programmable processor, and may be implemented using high-level procedural and/or object-oriented programming languages, and/or assembly/machine languages. The terms "machine-readable medium" and "computer-readable medium" refer to any computer program product, apparatus, and/or device (e.g., magnetic discs, optical disks, memory, Programmable Logic Devices (PLDs)) used to provide machine instructions and/or data to a programmable processor, including a machine-readable medium that receives machine instructions as a machine-readable signal.
The above description is only an embodiment of the present invention, but the design concept of the present invention is not limited thereto; any insubstantial modification made by using this design concept falls within the scope of protection of the present invention.

Claims (9)

1. A sitting posture identification method based on human body postures is characterized by comprising the following steps:
s1, collecting human sitting posture pictures, and constructing a human posture characteristic training data set;
s2, constructing and training a neural network model for classifying sitting posture categories;
and S3, predicting the sitting posture category by utilizing OpenPose and the trained neural network model.
2. The human posture-based sitting posture recognition method as claimed in claim 1, wherein the step of collecting human sitting posture pictures in S1 and constructing a human posture feature training data set specifically comprises:
s11, classifying the sitting postures into correcting, bending backwards, leaning forwards left, leaning forwards right and lying prone;
s12, acquiring a human sitting posture picture and manually marking different sitting posture categories;
s13, placing the pictures into different folders according to the sitting posture category, and automatically reading the pictures in the different folders by using a script program;
s14, loading an OpenPose model to extract human posture features in the picture, merging the human posture features by the OpenPose model through a partial affinity field algorithm, and storing the human posture features as a human posture feature picture;
s15, placing the human posture feature pictures output by the OpenPose model into different folders according to sitting posture categories;
s16, dividing the data set into a training set, a verification set and a test set;
and S17, when the training set is loaded, randomly augmenting the data to enlarge the data set, namely translating, scaling and warping the pictures.
3. The human posture-based sitting posture recognition method as claimed in claim 1, wherein the constructing and training of the neural network model for classifying the sitting posture category in S2 specifically comprises:
s21, selecting a predefined neural network in a deep learning framework, and setting the input size and the output dimension of the network according to the number of sitting posture categories;
s22, adopting a deep residual error network ResNet as a classification network, and simultaneously initializing parameters of the ResNet classification network by using a pre-training model of the ResNet classification network on an ImageNet data set;
s23, taking the softmax function as the loss function of ResNet;
and S24, iteratively training the model parameters using a softmax classifier.
4. The method of claim 1, wherein the step of predicting the sitting posture category by using OpenPose and a trained neural network model in the step S3 specifically comprises:
s31, loading an OpenPose model and a trained neural network model;
s32, selecting a sitting posture picture as an input of an OpenPose model, and storing the human posture feature picture into the OpenPose model by combining human posture features through a partial affinity field algorithm;
s33, inputting the human posture characteristic picture as a classification model, and predicting the probability of different sitting posture categories of the human posture characteristic picture by using a neural network model;
and S34, taking the person with the maximum probability value as the sitting posture category of the input picture.
5. A sitting posture recognition system based on human body posture, comprising:
an acquisition module: collecting human sitting posture pictures and constructing a human posture characteristic training data set;
a training module: constructing and training a neural network model for classifying sitting postures;
a prediction module: and predicting the sitting posture category by utilizing OpenPose and a trained neural network model.
6. The human posture-based sitting posture recognition system of claim 5, wherein the acquisition module collects human sitting posture pictures and constructs a human posture feature training data set, specifically including:
classifying the sitting postures into six categories: upright, leaning backward, leaning forward, leaning forward-left, leaning forward-right and lying prone;
acquiring a human body sitting posture picture and manually marking different sitting posture categories;
placing the pictures into different folders according to the sitting posture category, and automatically reading the pictures in the different folders by using a script program;
loading an OpenPose model to extract human posture features in the picture, merging the human posture features by the OpenPose model through a partial affinity field algorithm, and storing the human posture features as a human posture feature picture;
placing the human posture characteristic pictures output by the OpenPose model into different folders according to sitting posture categories;
dividing a data set into a training set, a verification set and a test set;
when the training set is loaded, data augmentation is randomly applied to enlarge the data set, i.e. the pictures are translated, scaled and warped.
7. The human-posture-based sitting posture recognition system of claim 5, wherein the training module constructs and trains a neural network model for classifying sitting posture categories, specifically including:
selecting a predefined neural network in a deep learning framework, and setting the input size and the output dimension of the network according to the number of sitting posture categories;
adopting a deep residual error network ResNet as a classification network, and simultaneously initializing parameters of the ResNet classification network by using a pre-training model of the ResNet classification network on an ImageNet data set;
taking a softmax function as a loss function of ResNet;
iteratively training the model parameters using the softmax classifier.
8. The human-posture-based sitting posture recognition system of claim 5, wherein the prediction module predicts the sitting posture category using OpenPose and the trained neural network model, specifically including:
loading an OpenPose model and a trained neural network model;
selecting a sitting posture picture as the input of the OpenPose model, where OpenPose merges the human posture features using the partial affinity field algorithm and saves them as a human posture feature picture;
taking the human posture feature picture as the input of the classification model, where the neural network model predicts the probabilities of the different sitting posture categories for the picture;
and taking the category with the maximum probability value as the sitting posture category of the input picture.
9. A non-transitory computer readable storage medium having computer instructions stored thereon for performing the method of any one of claims 1-4.
CN202011131430.9A 2020-10-21 2020-10-21 Human posture-based sitting posture identification method and system and computer-readable storage medium Pending CN112364712A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011131430.9A CN112364712A (en) 2020-10-21 2020-10-21 Human posture-based sitting posture identification method and system and computer-readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011131430.9A CN112364712A (en) 2020-10-21 2020-10-21 Human posture-based sitting posture identification method and system and computer-readable storage medium

Publications (1)

Publication Number Publication Date
CN112364712A true CN112364712A (en) 2021-02-12

Family

ID=74511388

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011131430.9A Pending CN112364712A (en) 2020-10-21 2020-10-21 Human posture-based sitting posture identification method and system and computer-readable storage medium

Country Status (1)

Country Link
CN (1) CN112364712A (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109344700A (en) * 2018-08-22 2019-02-15 浙江工商大学 A kind of pedestrian's posture attribute recognition approach based on deep neural network
US20200105014A1 (en) * 2018-09-28 2020-04-02 Wipro Limited Method and system for detecting pose of a subject in real-time
CN110716792A (en) * 2019-09-19 2020-01-21 华中科技大学 Target detector and construction method and application thereof
CN111091046A (en) * 2019-10-28 2020-05-01 北京灵鹦科技有限公司 User bad sitting posture correction system based on machine vision
CN111062245A (en) * 2019-10-31 2020-04-24 北京交通大学 Locomotive driver fatigue state monitoring method based on upper body posture
CN111160085A (en) * 2019-11-19 2020-05-15 天津中科智能识别产业技术研究院有限公司 Human body image key point posture estimation method

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113326778A (en) * 2021-05-31 2021-08-31 中科计算技术西部研究院 Human body posture detection method and device based on image recognition and storage medium
CN113627236A (en) * 2021-06-24 2021-11-09 广东技术师范大学 Sitting posture identification method, device, equipment and storage medium
CN113487566A (en) * 2021-07-05 2021-10-08 杭州萤石软件有限公司 Bad posture detection method and detection device
CN114582014A (en) * 2022-01-25 2022-06-03 珠海视熙科技有限公司 Method and device for recognizing human body sitting posture in depth image and storage medium
CN116935494A (en) * 2023-09-15 2023-10-24 吉林大学 Multi-person sitting posture identification method based on lightweight network model
CN116935494B (en) * 2023-09-15 2023-12-12 吉林大学 Multi-person sitting posture identification method based on lightweight network model

Similar Documents

Publication Publication Date Title
CN112364712A (en) Human posture-based sitting posture identification method and system and computer-readable storage medium
US11657230B2 (en) Referring image segmentation
Joze et al. Ms-asl: A large-scale data set and benchmark for understanding american sign language
CN109409222B (en) Multi-view facial expression recognition method based on mobile terminal
KR101967410B1 (en) Automatically mining person models of celebrities for visual search applications
Ji et al. Interactive body part contrast mining for human interaction recognition
CN104350509B (en) Quick attitude detector
Hu et al. Global-local enhancement network for NMF-aware sign language recognition
Borji et al. Human vs. computer in scene and object recognition
CN111046821B (en) Video behavior recognition method and system and electronic equipment
CN110674741A (en) Machine vision gesture recognition method based on dual-channel feature fusion
WO2022120843A1 (en) Three-dimensional human body reconstruction method and apparatus, and computer device and storage medium
KR20210041856A (en) Method and apparatus for generating learning data required to learn animation characters based on deep learning
CN110728194A (en) Intelligent training method and device based on micro-expression and action recognition and storage medium
CN114937285B (en) Dynamic gesture recognition method, device, equipment and storage medium
CN114332911A (en) Head posture detection method and device and computer equipment
Yan et al. A joint convolutional bidirectional LSTM framework for facial expression recognition
CN115797948A (en) Character recognition method, device and equipment
JP2007213528A (en) Action recognition system
CN114821466A (en) Light indoor fire recognition method based on improved YOLO model
Tur et al. Isolated sign recognition with a siamese neural network of RGB and depth streams
CN117809339A (en) Human body posture estimation method based on deformable convolutional coding network and feature region attention
CN111582057B (en) Face verification method based on local receptive field
CN116935438A (en) Pedestrian image re-recognition method based on autonomous evolution of model structure
CN113887373B (en) Attitude identification method and system based on urban intelligent sports parallel fusion network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination