CN111967376A - Pose identification and detection method based on neural network - Google Patents

Pose identification and detection method based on neural network

Info

Publication number
CN111967376A
Authority
CN
China
Prior art keywords
sitting posture
network model
sample images
training
positive
Prior art date
Legal status
Pending
Application number
CN202010820827.2A
Other languages
Chinese (zh)
Inventor
杨小康
李恒宇
刘军
谢少荣
罗均
Current Assignee
University of Shanghai for Science and Technology
Original Assignee
University of Shanghai for Science and Technology
Priority date
Filing date
Publication date
Application filed by University of Shanghai for Science and Technology filed Critical University of Shanghai for Science and Technology
Priority to CN202010820827.2A priority Critical patent/CN111967376A/en
Publication of CN111967376A publication Critical patent/CN111967376A/en
Pending legal-status Critical Current

Classifications

    • G — Physics
    • G06 — Computing; Calculating or Counting
        • G06V 40/10 — Human or animal bodies, e.g. vehicle occupants or pedestrians; body parts, e.g. hands
        • G06F 18/241 — Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
        • G06F 18/2415 — Classification based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
        • G06F 18/254 — Fusion techniques of classification results, e.g. of results related to same input data
        • G06N 3/045 — Combinations of networks
        • G06N 3/047 — Probabilistic or stochastic networks
        • G06N 3/08 — Learning methods


Abstract

The invention belongs to the technical field of computer vision and relates to a neural-network-based pose identification and detection method comprising two aspects: on one hand, building a pose recognition network model; on the other hand, using the network model to recognize and detect sitting postures and prompt their correction. Building the pose recognition network model comprises the following steps: (1) acquiring positive and negative sample images of sitting postures; (2) assigning classification labels to the positive and negative sample images; (3) fusing the positive and negative sample images with their corresponding classification labels to obtain fused training sample images; (4) inputting the training sample images into a classification network for training to obtain the pose recognition network model. The invention classifies sitting postures, builds the pose recognition network model on the ResNet-50 network, and uses the model to recognize sitting postures; recognition is fast and accurate, with accuracy above 85%, and the method is low in cost and simple to operate.

Description

Pose identification and detection method based on neural network
Technical Field
The invention belongs to the technical field of computer vision, and particularly relates to a pose identification and detection method based on a neural network.
Background
With the rapid development of computer vision technology, people pay more and more attention to the application of computer vision technology to various aspects of human life.
Sitting posture correction is a problem that urgently needs to be solved: a poor sitting posture easily causes a series of harms such as spinal deformation and reduced vision, which are especially pronounced in children. At present, children's sitting postures are mostly corrected with wearable devices. Although these achieve some degree of correction, they are inefficient and narrow in scope; for example, they may only urge the child to keep the upper body upright and cannot truly correct the whole-body sitting posture. In addition, some methods measure with instruments, e.g. judging the child's sitting state from pressure values sensed on the seat and issuing reminders, but these judge the child's sitting posture inaccurately and cannot comprehensively detect the child's current sitting posture category.
Disclosure of Invention
The invention aims to provide a neural-network-based pose identification and detection method that effectively recognizes and corrects sitting postures, and is particularly effective for correcting children's sitting postures.
To this end, the invention adopts the following technical scheme: a neural-network-based pose recognition and detection method comprising two aspects: on one hand, a method for building a pose recognition network model; on the other hand, a method for using the pose recognition network model to recognize and detect sitting postures in real time and prompt their correction.
A method of generating a pose recognition network model, comprising the steps of:
(1) acquiring positive and negative sample images of a sitting posture;
(2) carrying out classification label marking on positive and negative sample images of a sitting posture;
(3) fusing the positive and negative sample images of the sitting posture with corresponding classification labels to obtain fused training sample images;
(4) and inputting the training sample image into a classification network for training to obtain a pose recognition network model.
Furthermore, the positive sample images among the positive and negative sitting posture sample images are images of a normal sitting posture, and the negative sample images are seven different abnormal sitting posture images: hunched back; body leaning left/right; body leaning too far forward; legs on the chair; one hand propped on the chair with shoulders tilted; sitting sideways; and crossed legs ("Erlang legs"). The classification label of the positive (normal) sitting posture image is the number "0"; the classification labels of the seven abnormal sitting posture images in the negative samples are the numbers "1", "2", "3", "4", "5", "6", and "7", i.e. y_i ∈ {0, 1, 2, 3, 4, 5, 6, 7}.
Further, the total number of training sample images is at least 1600, with the same number of images for each of the eight sitting posture labels "0" to "7"; the width W and height H of each training sample image are both 224 pixels.
Further, the classification network is a ResNet-50 network.
Further, the specific process of step (4), inputting the training sample images into the classification network for training to obtain the pose recognition network model, is as follows:
the training sample images are randomly divided 3:1 into a training set T and a validation set V; images of size H × W × 3 from T and V are input into the ResNet-50 network, and after convolution, batch normalization, max pooling, and residual block operations, a 7 × 7 × 2048 feature map is output; the feature map is passed through a global pooling layer to produce a 1 × 2048 one-dimensional vector, followed by a fully connected layer with 8 neurons and a weight matrix of size 2048 × 8; a softmax classifier is connected after the fully connected layer, and the loss function is optimized by SGD gradient descent to train the pose recognition network and obtain the pose recognition network model.
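The head of the network described above (global pooling of the 7 × 7 × 2048 feature map, an 8-neuron fully connected layer, then softmax) can be sketched in NumPy; the random weights here are placeholders, not trained parameters:

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over the last axis.
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def classifier_head(feature_map, W, b):
    """Global average pooling + fully connected layer + softmax.

    feature_map: (7, 7, 2048) array from the ResNet-50 backbone
    W: (2048, 8) fully connected weights, b: (8,) bias
    Returns the 8 class probabilities.
    """
    pooled = feature_map.mean(axis=(0, 1))   # -> (2048,) one-dimensional vector
    scores = pooled @ W + b                  # -> (8,) raw class scores f
    return softmax(scores)

rng = np.random.default_rng(0)
probs = classifier_head(rng.normal(size=(7, 7, 2048)),
                        rng.normal(size=(2048, 8)) * 0.01,
                        np.zeros(8))
```

The backbone itself is omitted; only the 8-way classification head the text specifies is shown.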
Further, the loss function comprises a per-label loss function L_i and a total loss function L, which satisfy:

L_i = -log( e^{f_{y_i}} / Σ_j e^{f_j} )

L = (1/N) Σ_{i=1}^{N} L_i

where y_i ∈ {0, 1, 2, 3, 4, 5, 6, 7}, f is the vector of scores output after the image is input into the ResNet-50 network, f_j is the score of class j, and N is the number of training samples.
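The loss above is the standard softmax cross-entropy; a small NumPy check of it (the array shapes and test values are illustrative only):

```python
import numpy as np

def sample_loss(f, y):
    """L_i = -log( e^{f[y]} / sum_j e^{f[j]} ), f = class scores, y = label."""
    f = np.asarray(f, dtype=float)
    f = f - f.max()                       # stabilise the exponentials
    return float(-(f[y] - np.log(np.exp(f).sum())))

def total_loss(F, Y):
    """L = (1/N) * sum_i L_i over N (score vector, label) pairs."""
    return float(np.mean([sample_loss(f, y) for f, y in zip(F, Y)]))

# Uniform scores over the 8 classes give L_i = log(8) for any label.
uniform = total_loss(np.zeros((4, 8)), [0, 3, 5, 7])
```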
The method for carrying out real-time recognition detection and correction prompting on the sitting posture by utilizing the posture recognition network model comprises the following steps:
(1) collecting a sitting posture video, and acquiring a sitting posture image in real time;
(2) inputting the sitting posture image into the pose recognition network model and computing the scores of the 8 categories, S = {s_0, s_1, ..., s_7}; the predicted category y_pred is the index of max{S}, and the sitting posture class C is finally output, C ∈ {0, 1, 2, 3, 4, 5, 6, 7};
(3) judging the output sitting posture class C: if C = 0, the current sitting posture is judged normal, and steps (1) and (2) are repeated after 15 s for cyclic detection; if C ≠ 0, the current sitting posture is judged abnormal, the automatic alarm device emits a warning prompt tone, and steps (1) and (2) are repeated after 15 s for cyclic detection.
Further, the warning prompt tones include the following categories:
when C = 1, the tone is "Hunched back, please correct your sitting posture";
when C = 2, the tone is "Body leaning left/right, please correct your sitting posture";
when C = 3, the tone is "Body leaning too far forward, please correct your sitting posture";
when C = 4, the tone is "Legs on the seat, please correct your sitting posture";
when C = 5, the tone is "One hand propped on the seat, please correct your sitting posture";
when C = 6, the tone is "Shoulders tilted, sitting sideways, please correct your sitting posture";
when C = 7, the tone is "Crossed legs, please correct your sitting posture".
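The class-to-prompt mapping above can be held in a simple lookup table; the English strings and the `warning_for` helper are illustrative paraphrases, not the patent's exact audio output:

```python
# Hypothetical lookup from the output class C to the prompt text;
# the English strings paraphrase the patent's prompt tones.
WARNINGS = {
    1: "Hunched back, please correct your sitting posture",
    2: "Body leaning left/right, please correct your sitting posture",
    3: "Body leaning too far forward, please correct your sitting posture",
    4: "Legs on the seat, please correct your sitting posture",
    5: "One hand propped on the seat, please correct your sitting posture",
    6: "Shoulders tilted, sitting sideways, please correct your sitting posture",
    7: "Crossed legs, please correct your sitting posture",
}

def warning_for(c):
    """Return the prompt for class c, or None for the normal posture (c == 0)."""
    return None if c == 0 else WARNINGS[c]
```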
Compared with the prior art, the invention has the following beneficial effects:
1. The invention classifies sitting postures statistically, builds a pose recognition network model based on the ResNet-50 neural network, and uses the model to recognize sitting postures; recognition is fast and accurate, with accuracy above 85%, at low cost and with simple operation.
2. Sitting posture images are acquired in real time, the built pose recognition network model recognizes the acquired images in real time, and the automatic alarm device's warning prompts guide posture correction, so no manual intervention is needed and sitting posture correction is well promoted.
Drawings
FIG. 1 is a flow chart of the present invention.
Detailed Description
Example 1
A method for generating a pose recognition network model, illustrated here with child sitting posture recognition and shown in part A of FIG. 1, comprises the following steps:
(1) acquiring positive and negative sample images of children's sitting postures and building a data set X = {x_i}_{i=1:n}, x_i ∈ R^{H×W×3}, specifically as follows:
the positive sample image in the positive and negative sample images of the sitting posture of the child is an image of the normal sitting posture of the child, namely the sitting posture of the child in an end-sitting state; the negative sample images are seven abnormal sitting posture images of a child bending down to contain the back, leaning left/right, leaning forward, putting legs on a chair, supporting the chair with one hand, leaning shoulders, sitting sideways and lifting legs.
Positive and negative sample images of children's sitting postures are collected and cropped to form the data set X = {x_i}_{i=1:n}, x_i ∈ R^{H×W×3}; the total number of images in X is n, with n ≥ 1600. The numbers of images of the eight different sitting postures (one normal, seven abnormal) in the data set are equal. All images in the data set are RGB three-channel color images; the collected original positive and negative sample images are cropped to a width and height of 224 pixels, i.e. W = H = 224 pixels.
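The cropping to W = H = 224 pixels can be sketched as a center crop; the assumption that inputs are already at least 224 pixels on each side is ours, not the patent's:

```python
import numpy as np

def center_crop_224(img):
    """Center-crop an H x W x 3 RGB array to 224 x 224 x 3.

    Assumes (our assumption, not the patent's) that both sides of the
    input are already at least 224 pixels.
    """
    h, w, _ = img.shape
    top, left = (h - 224) // 2, (w - 224) // 2
    return img[top:top + 224, left:left + 224, :]

patch = center_crop_224(np.zeros((480, 640, 3), dtype=np.uint8))
```

A production pipeline would more likely resize the shorter side first and then crop; this sketch shows only the size the network expects.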
(2) labeling the positive and negative children's sitting posture sample images in the data set with classification labels Y, specifically as follows:
the digital labels corresponding to the child normal sitting posture and the seven types of abnormal sitting postures in the data set are shown in table 1, and the classification label corresponding to the child sitting posture correction sample image is a digital "0"; in the negative sample image, the classification labels corresponding to seven abnormal sitting posture images, namely, the child bends over with the back, the body inclines left/right, the body inclines too far, the legs are placed on the seat, the child is supported on the seat by one hand, the shoulder inclines, the side sits and the seesaws are the numbers of '1', '2', '3', '4', '5', '6' and '7', namely, yi∈[0,1,2,3,4,5,6,7],yi∈Y。
TABLE 1. Number labels for positive and negative children's sitting posture sample images

Label | Sitting posture
0     | Normal sitting posture (sitting upright)
1     | Hunched back
2     | Body leaning left/right
3     | Body leaning too far forward
4     | Legs on the seat
5     | One hand propped on the seat, shoulders tilted
6     | Sitting sideways
7     | Crossed legs ("Erlang legs")
(3) manually labeling the positive and negative children's sitting posture sample images in the data set with the number labels established in step (2), yielding digitally labeled sample images, i.e. data set sample images carrying number labels. When labeling manually, the child's sitting posture state can be identified through common everyday knowledge.
In addition, the data set X and the corresponding label data are divided into a training set T and a validation set V in a 3:1 ratio, with each category divided randomly.
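The per-category random 3:1 split can be sketched as follows (the function name and seed are illustrative):

```python
import random
from collections import defaultdict

def stratified_split(labels, train_frac=0.75, seed=0):
    """Split sample indices 3:1 into train/validation within each class."""
    by_class = defaultdict(list)
    for idx, y in enumerate(labels):
        by_class[y].append(idx)
    rng = random.Random(seed)
    train, val = [], []
    for y, idxs in by_class.items():
        rng.shuffle(idxs)                      # randomize within the class
        cut = int(len(idxs) * train_frac)      # 3:1 boundary per class
        train += idxs[:cut]
        val += idxs[cut:]
    return train, val

labels = [c for c in range(8) for _ in range(200)]   # 1600 balanced samples
T, V = stratified_split(labels)
```

Splitting within each class keeps the eight sitting posture categories equally represented in both T and V, matching the balanced data set described above.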
(4) inputting the sitting posture images of the training set T and the validation set V into the ResNet-50 classification network for training to obtain the pose recognition network model. The aim of training is to reduce the value of the loss function as much as possible while tuning parameters so that the trained network model is highly accurate and the classification network fits the children's sitting posture data well. The specific training process is as follows:
the training sample images are randomly divided 3:1 into a training set T and a validation set V; images of size H × W × 3 from T and V are input into the ResNet-50 network, and after convolution, batch normalization, max pooling, and residual block operations, a 7 × 7 × 2048 feature map is output; the feature map is passed through a global pooling layer to produce a 1 × 2048 one-dimensional vector, followed by a fully connected layer with 8 neurons and a weight matrix of size 2048 × 8; a softmax classifier is connected after the fully connected layer, and the loss function is optimized by SGD gradient descent to train the pose recognition network and obtain the pose recognition network model.
The parameters are tuned as follows: weights are initialized with Kaiming initialization during training; the training set T and validation set V are augmented by color jitter, translation, flipping, scaling, and shearing before being fed into the network; the training batch size (batch_size) is set to 32; the initial learning rate is set to 0.01, and whenever the loss value stops decreasing steadily, the learning rate is divided by 10 and training continues; weight decay (weight_decay) is set to 0.0001; momentum is set to 0.9; training runs for 200 epochs, and the parameters with the highest classification accuracy on the validation set V are saved as the parameters of the pose recognition network. The pose recognition network needs no pre-training and can be trained directly with the training set T and validation set V.
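The stated hyperparameters and the divide-by-10 rule can be collected in a small sketch; the plateau test (`patience`, `tol`) is a hypothetical concretisation of "when the loss value stabilises", which the patent does not quantify:

```python
# Training settings stated in the text.
HPARAMS = {
    "batch_size": 32,
    "initial_lr": 0.01,
    "weight_decay": 1e-4,
    "momentum": 0.9,
    "epochs": 200,
}

def next_lr(lr, losses, patience=5, tol=1e-3):
    """Divide the learning rate by 10 once the loss stops decreasing.

    'Stops decreasing' is judged here as: over the last `patience`
    epochs the loss improved by less than `tol` (our assumption).
    """
    if len(losses) > patience and losses[-patience - 1] - min(losses[-patience:]) < tol:
        return lr / 10.0
    return lr
```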
The loss function comprises a per-label loss function L_i and a total loss function L, which satisfy:

L_i = -log( e^{f_{y_i}} / Σ_j e^{f_j} )

L = (1/N) Σ_{i=1}^{N} L_i

where y_i ∈ {0, 1, 2, 3, 4, 5, 6, 7}, f is the vector of scores output after the image is input into the ResNet-50 network, f_j is the score of class j, and N is the number of training samples.
The pose recognition network trained with the training set T and validation set V recognizes children's sitting posture categories with an accuracy above 85%.
Example 2
A method for real-time recognition, detection, and correction prompting of a child's sitting posture using the pose recognition network model built in Embodiment 1, as shown in part B of FIG. 1, comprises the following steps:
(1) capturing a video of the child's sitting posture and acquiring sitting posture images in real time. During real-time video capture with the camera, the shooting angle is the same as the capture angle of the images in data set X, and the child's limbs, torso, and head must be fully contained in the frame; a child sitting posture image is a frame of the video.
(2) cropping the acquired child sitting posture image to W = H = 224 pixels and inputting the cropped H × W × 3 image into the pose recognition network model to compute the scores of the 8 sitting posture categories, S = {s_0, s_1, ..., s_7}; the predicted sitting posture category y_pred is the index of max{S}, and the child's sitting posture class C is output according to y_pred, C ∈ {0, 1, 2, 3, 4, 5, 6, 7};
(3) judging the output child sitting posture class C: if C = 0, the child's current sitting posture is judged normal, and steps (1) and (2) are repeated after 15 s for cyclic detection; if C ≠ 0, the child's current sitting posture is judged abnormal, the automatic alarm device emits a warning prompt tone, and steps (1) and (2) are repeated after 15 s for cyclic detection.
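The argmax prediction and the 15-second detection cycle of steps (2) and (3) can be sketched as follows; `get_scores` and `alarm` are hypothetical hooks standing in for the trained model and the alarm device:

```python
import time

def classify(scores):
    """Predicted class C is the index of the maximum of the 8 scores."""
    return max(range(len(scores)), key=lambda i: scores[i])

def monitor(get_scores, alarm, interval_s=15, rounds=None):
    """Detection cycle: classify the current frame, sound the alarm for
    any abnormal class (C != 0), then wait before the next round.

    get_scores() returns the 8 class scores for the latest frame and
    alarm(c) plays the prompt tone (both caller-supplied assumptions);
    rounds limits the loop for testing, None means run indefinitely.
    """
    n = 0
    c = 0
    while rounds is None or n < rounds:
        c = classify(get_scores())
        if c != 0:          # class 0 is the normal sitting posture
            alarm(c)
        n += 1
        time.sleep(interval_s)
    return c
```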
The warning prompt tones include the following categories:
when C = 1, the tone is "Hunched back, please correct your sitting posture";
when C = 2, the tone is "Body leaning left/right, please correct your sitting posture";
when C = 3, the tone is "Body leaning too far forward, please correct your sitting posture";
when C = 4, the tone is "Legs on the seat, please correct your sitting posture";
when C = 5, the tone is "One hand propped on the seat, please correct your sitting posture";
when C = 6, the tone is "Shoulders tilted, sitting sideways, please correct your sitting posture";
when C = 7, the tone is "Crossed legs, please correct your sitting posture".

Claims (8)

1. A method for generating a pose recognition network model is characterized by comprising the following steps:
(1) acquiring positive and negative sample images of a sitting posture;
(2) carrying out classification label marking on the positive and negative sample images of the sitting posture;
(3) fusing the positive and negative sample images of the sitting posture with corresponding classification labels to obtain fused training sample images;
(4) and inputting the training sample image into a classification network for training to obtain a pose recognition network model.
2. The method for generating a pose recognition network model according to claim 1, characterized in that the positive sample images among the positive and negative sitting posture sample images are images of a normal sitting posture, and the negative sample images are seven different abnormal sitting posture images: hunched back; body leaning left/right; body leaning too far forward; legs on the chair; one hand propped on the chair with shoulders tilted; sitting sideways; and crossed legs ("Erlang legs"); the classification label of the positive sitting posture sample image is the number "0"; the classification labels of the seven abnormal sitting posture images in the negative samples are the numbers "1", "2", "3", "4", "5", "6", and "7", i.e. y_i ∈ {0, 1, 2, 3, 4, 5, 6, 7}.
3. The method for generating a pose recognition network model according to claim 2, characterized in that the total number of training sample images is at least 1600, with the same number of images for each of the eight sitting posture labels "0" to "7"; the width W and height H of each training sample image are both 224 pixels.
4. The method for generating the pose recognition network model of claim 3, wherein the classification network is a ResNet-50 network.
5. The method for generating a pose recognition network model according to any one of claims 1 to 4, characterized in that step (4), inputting the training sample images into the classification network for training, specifically comprises:
randomly dividing the training sample images 3:1 into a training set T and a validation set V; inputting images of size H × W × 3 from T and V into the ResNet-50 network and, after convolution, batch normalization, max pooling, and residual block operations, outputting a 7 × 7 × 2048 feature map; passing the feature map through a global pooling layer to produce a 1 × 2048 one-dimensional vector, followed by a fully connected layer with 8 neurons and a weight matrix of size 2048 × 8; connecting a softmax classifier after the fully connected layer, and optimizing the loss function by SGD gradient descent to train the pose recognition network and obtain the pose recognition network model.
6. The method for generating a pose recognition network model according to claim 5, characterized in that the loss function comprises a per-label loss function L_i and a total loss function L, which satisfy:

L_i = -log( e^{f_{y_i}} / Σ_j e^{f_j} )

L = (1/N) Σ_{i=1}^{N} L_i

where y_i ∈ {0, 1, 2, 3, 4, 5, 6, 7}, f is the vector of scores output after the image is input into the ResNet-50 network, f_j is the score of class j, and N is the number of training samples.
7. A method of sitting posture identification, comprising the steps of:
(1) collecting a sitting posture video, and acquiring a sitting posture image in real time;
(2) inputting the sitting posture image into a pose recognition network model and computing the scores of the 8 categories, S = {s_0, s_1, ..., s_7}; the predicted category y_pred is the index of max{S}, and the sitting posture class C is finally output, C ∈ {0, 1, 2, 3, 4, 5, 6, 7};
(3) judging the output sitting posture class C: if C = 0, the current sitting posture is judged normal, and steps (1) and (2) are repeated after 15 s for cyclic detection; if C ≠ 0, the current sitting posture is judged abnormal, the automatic alarm device emits a warning prompt tone, and steps (1) and (2) are repeated after 15 s for cyclic detection.
8. The method of claim 7, characterized in that the warning prompt tones include the following categories:
when C = 1, the tone is "Hunched back, please correct your sitting posture";
when C = 2, the tone is "Body leaning left/right, please correct your sitting posture";
when C = 3, the tone is "Body leaning too far forward, please correct your sitting posture";
when C = 4, the tone is "Legs on the seat, please correct your sitting posture";
when C = 5, the tone is "One hand propped on the seat, please correct your sitting posture";
when C = 6, the tone is "Shoulders tilted, sitting sideways, please correct your sitting posture";
when C = 7, the tone is "Crossed legs, please correct your sitting posture".
CN202010820827.2A 2020-08-14 2020-08-14 Pose identification and detection method based on neural network Pending CN111967376A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010820827.2A CN111967376A (en) 2020-08-14 2020-08-14 Pose identification and detection method based on neural network


Publications (1)

Publication Number Publication Date
CN111967376A true CN111967376A (en) 2020-11-20

Family

ID=73387767

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010820827.2A Pending CN111967376A (en) 2020-08-14 2020-08-14 Pose identification and detection method based on neural network

Country Status (1)

Country Link
CN (1) CN111967376A (en)


Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109446961A (en) * 2018-10-19 2019-03-08 北京达佳互联信息技术有限公司 Pose detection method, device, equipment and storage medium
CN110287882A (en) * 2019-06-26 2019-09-27 北京林业大学 A kind of big chrysanthemum kind image-recognizing method based on deep learning
CN111127848A (en) * 2019-12-27 2020-05-08 深圳奥比中光科技有限公司 Human body sitting posture detection system and method
CN111178313A (en) * 2020-01-02 2020-05-19 深圳数联天下智能科技有限公司 Method and equipment for monitoring user sitting posture
CN111325239A (en) * 2020-01-21 2020-06-23 上海眼控科技股份有限公司 Image-based weather identification method and equipment

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113657271A (en) * 2021-08-17 2021-11-16 上海科技大学 Sitting posture detection method and system combining quantifiable factors and non-quantifiable factors for judgment
CN113657271B (en) * 2021-08-17 2023-10-03 上海科技大学 Sitting posture detection method and system combining quantifiable factors and unquantifiable factor judgment
CN114582014A (en) * 2022-01-25 2022-06-03 珠海视熙科技有限公司 Method and device for recognizing human body sitting posture in depth image and storage medium


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination