CN112329728A - Multi-person sitting posture detection method and system based on object detection - Google Patents

Multi-person sitting posture detection method and system based on object detection

Info

Publication number
CN112329728A
CN112329728A
Authority
CN
China
Prior art keywords
sitting posture
image
detection model
deep
labeling
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011364696.8A
Other languages
Chinese (zh)
Inventor
顾翀
顾则行
吴越凡
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Individual
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual filed Critical Individual
Priority to CN202011364696.8A priority Critical patent/CN112329728A/en
Publication of CN112329728A publication Critical patent/CN112329728A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/103Static body considered as a whole, e.g. static pedestrian or occupant recognition
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00Machine learning
    • G06N20/20Ensemble learning

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Medical Informatics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to a multi-person sitting posture detection method and system based on object detection. The method comprises: capturing a multi-person sitting posture image; inputting it into a trained deep object detection model to obtain a plurality of single-person sitting posture images; and inputting each single-person sitting posture image into a trained deep sitting posture detection model to obtain its sitting posture classification result. The deep object detection model is trained on a first training set formed from a plurality of image-annotated multi-person sitting posture sample images, and the deep sitting posture detection model is trained on a second training set formed from a plurality of image-annotated single-person sitting posture sample images. Compared with the prior art, the method offers high recall, high accuracy, high recognition precision, and low design difficulty.

Description

Multi-person sitting posture detection method and system based on object detection
Technical Field
The invention relates to a sitting posture detection technology, in particular to a multi-person sitting posture detection method and system based on object detection.
Background
Data show that China has up to 600 million myopia patients, accounting for almost 50% of the country's total population. China's adolescent myopia rate is the highest in the world, with over 70% of junior high school and university students being myopic. Spinal diseases in China have also trended younger in recent years, with more than 40% of people under 40 suffering from various spinal conditions. Apart from factors such as prolonged sitting and insufficient physical exercise, incorrect sitting posture is an important cause of both myopia and spinal disease. At present, posture monitoring is mainly realized in three ways. The first is the corrective method: a student wears a corrective brace, or a corrector is installed on the desk and chair; however, the corrector can interfere with the student's growth and development. The second is the sensor method: sensors are installed in the desk and chair, for example an infrared sensor measuring the distance between the user and the desktop, or hip-pressure analysis; because every desk and chair needs its own hardware, both the initial cost and the later maintenance cost are high. The third uses a 3D camera to collect and analyze three-dimensional human body data; the equipment is expensive and usually dedicated to a single student.
With the application of AI, image analysis methods have been applied to sitting posture detection and recognition. In "Research on sitting posture visual recognition based on discriminant deep learning", Huang Xu et al. applied a deep vision algorithm to the sitting posture detection problem, segmenting persons by face detection followed by region expansion. This approach has the following problems:
detection of face-occluding sitting postures is poor: when the head is lowered, resting on the desk, or raised, the face is occluded and face detection fails;
region expansion is difficult in complex environments: with multiple persons, the figures captured by a surveillance camera may overlap, so person segmentation must absorb the combined algorithm errors of face detection and region expansion, making the algorithm design difficult.
Disclosure of Invention
The present invention aims to overcome the above drawbacks of the prior art and to provide a multi-person sitting posture detection method and system based on object detection, with high recall, high accuracy, high recognition precision, and low design difficulty.
The purpose of the invention can be realized by the following technical scheme:
a multi-person sitting posture detection method based on object detection specifically comprises the following steps:
shooting a multi-person sitting posture image, inputting it into the trained deep object detection model to obtain a plurality of single-person sitting posture images, and inputting each single-person sitting posture image into the trained deep sitting posture detection model to obtain its sitting posture classification result;
wherein the training process of the deep object detection model comprises: collecting a plurality of image-annotated multi-person sitting posture sample images to form a first training set, and training the deep object detection model on the first training set;
and the training process of the deep sitting posture detection model comprises: collecting a plurality of image-annotated single-person sitting posture sample images to form a second training set, and training the deep sitting posture detection model on the second training set.
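The two-stage flow described above (object detection splits the scene into per-person crops, then each crop is classified) can be sketched as follows. This is a minimal illustration, not the patent's actual interface: `detect` and `classify` are stubs standing in for the trained EasyDL models, and all names are assumptions.

```python
from dataclasses import dataclass
from typing import Callable, List, Tuple

Box = Tuple[int, int, int, int]  # (x, y, w, h) bounding box of one seated person

@dataclass
class PostureDetectionPipeline:
    # Stage 1: trained deep object detection model -> one box per person
    detect: Callable[[object], List[Box]]
    # Stage 2: trained deep sitting posture model -> label for one crop
    classify: Callable[[object, Box], str]

    def run(self, image) -> List[Tuple[Box, str]]:
        """Split a multi-person image into single-person crops and classify each."""
        return [(box, self.classify(image, box)) for box in self.detect(image)]

# Stub models standing in for the trained EasyDL models (assumption):
stub_detect = lambda image: [(0, 0, 50, 100), (60, 0, 50, 100)]
stub_classify = lambda image, box: "correct" if box[0] == 0 else "incorrect"

pipeline = PostureDetectionPipeline(stub_detect, stub_classify)
results = pipeline.run("frame")  # one (box, label) pair per detected person
```

In a real deployment the two stubs would be replaced by calls to the deployed detection and classification models; the control flow stays the same.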
Further, the single sitting posture image output by the trained deep sitting posture detection model is used as a single sitting posture sample image.
Further, the picture marking process of the single sitting posture sample image specifically comprises the following steps:
collecting a plurality of single-person sitting posture sample images, with a plurality of groups of labeling categories collected for each image;
adding the single-person sitting posture sample images satisfying the judgment formula to the second training set, and taking the labeling category with the largest count as the labeling category of the image, the judgment formula being:
n / m > x
where n is the count of the most frequent labeling category, m is the total number of labels collected, and x is the set proportion.
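A plain-Python reading of the judgment rule above: take the most frequent label among the m annotations and accept the sample only when its share n/m exceeds the set proportion x. The function name and the example value of x are assumptions for illustration.

```python
from collections import Counter

def consensus_label(labels, x):
    """Return (majority label, accepted?) for one sample's annotations.
    n = count of the most frequent label, m = total labels collected;
    the sample passes the judgment formula when n / m > x."""
    m = len(labels)
    label, n = Counter(labels).most_common(1)[0]
    return label, n / m > x

# 3 of 4 annotators agree and 3/4 > 0.7, so the sample is accepted:
result = consensus_label(["correct", "correct", "correct", "incorrect"], 0.7)
```

Samples that fail the check are simply excluded from the second training set rather than relabeled.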
Further, the deep object detection model and the deep sitting posture detection model are based on the EasyDL platform.
Furthermore, the multi-person sitting posture sample images are annotated using the EasyDL platform.
A multi-person sitting posture detection system based on object detection, comprising:
the image acquisition module comprises an image shooting unit and a sample acquisition unit, wherein the image shooting unit is used for shooting a multi-person sitting posture image, and the sample acquisition unit is used for acquiring a multi-person sitting posture image, a multi-person sitting posture sample image and a single-person sitting posture sample image;
the image labeling module is used for carrying out image labeling on the multi-person sitting posture image and the single-person sitting posture sample image;
the model training module is used for training the deep object detection model by taking the multi-person sitting posture sample image marked by the picture as a first training set, and training the deep sitting posture detection model by taking the single-person sitting posture sample image marked by the picture as a second training set;
the object detection module is used for inputting the multi-person sitting posture image into the trained deep object detection model to obtain a plurality of single-person sitting posture images;
and the sitting posture detection module is used for inputting the single sitting posture image into the trained deep sitting posture detection model to obtain a sitting posture classification result of the single sitting posture image.
Furthermore, the sample acquisition unit takes the single sitting posture image output by the trained deep sitting posture detection model as a single sitting posture sample image.
Further, the picture marking process of the single sitting posture sample image specifically comprises the following steps:
the system comprises a sample acquisition unit, an image labeling module, a data processing module and a data processing module, wherein the sample acquisition unit acquires a plurality of single sitting posture sample images, the image labeling module performs multiple labeling on each single sitting posture sample image to correspondingly obtain a plurality of groups of labeling categories, and the labeling category with the largest number of the same labeling categories is used as the labeling category of the single sitting posture sample image;
the model training module adds the single sitting posture sample image meeting the judgment formula into a second training set, wherein the judgment formula is as follows:
n / m > x
where n is the count of the most frequent labeling category, m is the total number of labels collected, and x is the set proportion.
Further, the deep object detection model and the deep sitting posture detection model are based on the EasyDL platform.
Furthermore, the image labeling module annotates the multi-person sitting posture sample images using the EasyDL platform.
Compared with the prior art, the invention has the following beneficial effects:
(1) The method uses the trained deep object detection model to recognize multi-person sitting posture images in real time and obtain a plurality of single-person sitting posture images, then inputs each single-person image into the trained deep sitting posture detection model to obtain its sitting posture classification result. Compared with face recognition, this avoids failures when the face cannot be detected; the deep object detection model greatly improves the recall of person detection, achieves high accuracy and recognition precision, and greatly reduces the design difficulty;
(2) The single-person sitting posture images output by the trained deep sitting posture detection model are reused as single-person sitting posture sample images, making full use of unannotated single-person images with little extra effort. Training on real-time scene images allows the hidden layers of the deep sitting posture detection model to learn more of the low-level and high-level features that effectively characterize sitting posture, so recognition accuracy in real-time scenes is high;
(3) A plurality of single-person sitting posture sample images are collected, each with a plurality of groups of labeling categories; the most frequent labeling category is taken as the image's labeling category, and only images whose majority-label proportion exceeds the set ratio are added to the second training set. This improves the accuracy and consistency of the labeling categories of the sample images and thereby the accuracy of the deep sitting posture detection model.
Drawings
FIG. 1 is a flow chart of the method of the present invention.
Detailed Description
The invention is described in detail below with reference to the figures and specific embodiments. The present embodiment is implemented on the premise of the technical solution of the present invention, and a detailed implementation manner and a specific operation process are given, but the scope of the present invention is not limited to the following embodiments.
Example 1
A multi-person sitting posture detection method based on object detection is disclosed as shown in figure 1, and specifically comprises the following steps:
shooting a multi-person sitting posture image, inputting it into a trained deep object detection model to obtain a plurality of single-person sitting posture images, and inputting each single-person sitting posture image into the trained deep sitting posture detection model to obtain its sitting posture classification result, the result being either a correct or an incorrect sitting posture;
the training process of the depth object detection model comprises the following steps: collecting a plurality of multi-person sitting posture sample images marked by pictures to form a first training set, and training a deep object detection model by adopting the first training set;
the training process of the deep sitting posture detection model comprises the following steps: and collecting a plurality of single sitting posture sample images marked by pictures to form a second training set, and training a deep sitting posture detection model by adopting the second training set.
The single-person sitting posture images output by the trained deep sitting posture detection model are reused as single-person sitting posture sample images.
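The reuse step above, feeding the model's own outputs back in as fresh sample images, can be sketched as a simple bootstrapping loop. All names here are illustrative assumptions; the model's prediction serves only as a provisional label, to be confirmed by the multi-annotator process described next.

```python
def bootstrap_samples(classify, crops, sample_pool):
    """Append each newly classified single-person crop to the sample pool,
    carrying the model's prediction as a provisional label for later
    human annotation and retraining."""
    for crop in crops:
        sample_pool.append((crop, classify(crop)))
    return sample_pool

# Stub classifier standing in for the trained deep sitting posture model:
stub_classify = lambda crop: "correct" if crop % 2 == 0 else "incorrect"
pool = bootstrap_samples(stub_classify, [0, 1, 2], [])
```

This way unannotated real-scene images enter the training loop without any extra capture step.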
The picture marking process of the single sitting posture sample image specifically comprises the following steps:
154 single-person sitting posture sample images are collected, and 4 groups of labeling categories are collected for each image;
the single-person sitting posture sample images satisfying the judgment formula are added to the second training set, and the labeling category with the largest count is taken as the image's labeling category; the judgment formula is:
n / m > x
where n is the count of the most frequent labeling category and m is the total number of labels collected. The labeling categories are divided into correct and incorrect sitting postures; the labeling results of the single-person sitting posture sample images are shown in Table 1:
Table 1. Statistics of the single-person sitting posture sample image labeling results
[Table 1 appears as an image in the original publication; its cell contents are not recoverable from the text.]
As can be seen from Table 1, a total of 147 image-annotated single-person sitting posture samples can be added to the second training set; the distribution of their labeling categories is shown in Table 2:
[Table 2 appears as an image in the original publication; its cell contents are not recoverable from the text.]
the deep object detection model and the deep sitting posture detection model are based on an easy DL platform, images of a multi-person sitting posture sample are marked by the easy DL platform, and the easy DL platform has the advantages of small required data volume and simplicity in operation.
Example 2
A system for detecting a sitting posture of a plurality of persons corresponding to embodiment 1 includes: the device comprises an image acquisition module, an image labeling module, a model training module, an object detection module and a sitting posture detection module;
the image acquisition module comprises an image shooting unit and a sample acquisition unit, wherein the image shooting unit is used for shooting a multi-person sitting posture image, and the sample acquisition unit is used for acquiring a multi-person sitting posture image, a multi-person sitting posture sample image and a single-person sitting posture sample image;
the image labeling module is used for carrying out image labeling on the multi-person sitting posture image and the single-person sitting posture sample image;
the model training module is used for training the deep object detection model by taking the multi-person sitting posture sample image marked by the picture as a first training set, and training the deep sitting posture detection model by taking the single-person sitting posture sample image marked by the picture as a second training set;
the object detection module builds an image segmentation service on the Python Flask framework; the service stores the trained deep object detection model and inputs the multi-person sitting posture image into it to obtain a plurality of single-person sitting posture images;
the sitting posture detection module builds a single-person sitting posture detection service on the Python Flask framework; the service stores the trained deep sitting posture detection model and inputs each single-person sitting posture image into it to obtain the sitting posture classification result of that image.
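A minimal sketch of one of the Flask services described above (the image segmentation service). The route name, payload shape, and the stubbed model are assumptions for illustration, not the patent's actual interface.

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

# Stub standing in for the stored trained deep object detection model (assumption):
def segment_persons(image_bytes):
    return [{"x": 0, "y": 0, "w": 50, "h": 100},
            {"x": 60, "y": 0, "w": 50, "h": 100}]

@app.route("/segment", methods=["POST"])
def segment():
    """Image segmentation service: multi-person sitting posture image in,
    one bounding box per detected person out."""
    return jsonify({"persons": segment_persons(request.data)})

# The single-person posture detection service would follow the same pattern,
# with a route wrapping the trained deep sitting posture detection model.
```

Splitting the two models into separate services lets the segmentation and classification stages be scaled or updated independently.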
The deep object detection model and the deep sitting posture detection model are based on the EasyDL platform.
And the sample acquisition unit takes the single sitting posture image output by the trained deep sitting posture detection model as a single sitting posture sample image.
The image annotation module annotates the multi-person sitting posture sample images using the EasyDL platform;
the image labeling module specifically performs the process of image labeling on the single sitting posture sample image as follows:
the method comprises the following steps that a sample collection unit collects a plurality of single sitting posture sample images, an image labeling module labels each single sitting posture sample image for a plurality of times to correspondingly obtain a plurality of groups of labeling categories, and the labeling category with the largest number of similar labeling categories is used as the labeling category of the single sitting posture sample image;
the model training module adds the single sitting posture sample image meeting the judgment formula into a second training set, and the judgment formula is as follows:
n / m > x
where n is the count of the most frequent labeling category, m is the total number of labels collected, and x is the set proportion.
Embodiments 1 and 2 provide a multi-person sitting posture detection method and system based on object detection: the deep object detection model splits a multi-person sitting posture image into single-person sitting posture images, and the deep sitting posture detection model obtains each person's sitting posture from the corresponding single-person image. Compared with face detection, recall is greatly improved, and multi-person sitting posture detection in real environments is realized.
The foregoing detailed description of the preferred embodiments of the invention has been presented. It should be understood that numerous modifications and variations could be devised by those skilled in the art in light of the present teachings without departing from the inventive concepts. Therefore, the technical solutions available to those skilled in the art through logic analysis, reasoning and limited experiments based on the prior art according to the concept of the present invention should be within the scope of protection defined by the claims.

Claims (10)

1. A multi-person sitting posture detection method based on object detection is characterized by comprising the following steps:
shooting a multi-person sitting posture image, inputting the multi-person sitting posture image into the trained deep object detection model to obtain a plurality of single-person sitting posture images, and inputting the single-person sitting posture image into the trained deep sitting posture detection model to obtain a sitting posture classification result of the single-person sitting posture image;
wherein, the training process of the deep object detection model comprises the following steps: collecting a plurality of multi-person sitting posture sample images marked by pictures to form a first training set, and training a deep object detection model by adopting the first training set;
the training process of the deep sitting posture detection model comprises the following steps: and collecting a plurality of single sitting posture sample images marked by pictures to form a second training set, and training a deep sitting posture detection model by adopting the second training set.
2. The method for detecting the sitting postures of multiple persons based on the object detection as claimed in claim 1, wherein a single sitting posture image output by the trained deep sitting posture detection model is used as a single sitting posture sample image.
3. The object detection-based multi-person sitting posture detection method according to claim 1, wherein the picture labeling process of the single-person sitting posture sample image is specifically as follows:
collecting a plurality of single sitting posture sample images, wherein each single sitting posture sample image correspondingly collects a plurality of groups of labeled categories;
adding the single sitting posture sample image meeting the judgment formula into a second training set, and taking the labeling category with the maximum number of similar labeling categories as the labeling category of the single sitting posture sample image, wherein the judgment formula is as follows:
n / m > x
where n is the count of the most frequent labeling category, m is the total number of labels collected, and x is the set proportion.
4. The method as claimed in claim 1, wherein the deep object detection model and the deep sitting posture detection model are based on the EasyDL platform.
5. The method as claimed in claim 1, wherein the multi-person sitting posture sample images are annotated using the EasyDL platform.
6. A multi-person sitting posture detection system based on object detection is characterized by comprising:
the image acquisition module comprises an image shooting unit and a sample acquisition unit, wherein the image shooting unit is used for shooting a multi-person sitting posture image, and the sample acquisition unit is used for acquiring a multi-person sitting posture image, a multi-person sitting posture sample image and a single-person sitting posture sample image;
the image labeling module is used for carrying out image labeling on the multi-person sitting posture image and the single-person sitting posture sample image;
the model training module is used for training the deep object detection model by taking the multi-person sitting posture sample image marked by the picture as a first training set, and training the deep sitting posture detection model by taking the single-person sitting posture sample image marked by the picture as a second training set;
the object detection module is used for inputting the multi-person sitting posture image into the trained deep object detection model to obtain a plurality of single-person sitting posture images;
and the sitting posture detection module is used for inputting the single sitting posture image into the trained deep sitting posture detection model to obtain a sitting posture classification result of the single sitting posture image.
7. The system as claimed in claim 6, wherein the sample collecting unit takes the single sitting posture image outputted from the trained deep sitting posture detecting model as the single sitting posture sample image.
8. The system for detecting the sitting posture of a plurality of people based on the object detection as claimed in claim 6, wherein the process of image annotation of the sample image of the sitting posture of a single person is as follows:
the sample acquisition unit acquires a plurality of single-person sitting posture sample images; the image labeling module labels each single-person sitting posture sample image multiple times to obtain a plurality of groups of labeling categories, and the labeling category with the largest count is taken as the labeling category of the single-person sitting posture sample image;
the model training module adds the single sitting posture sample image meeting the judgment formula into a second training set, wherein the judgment formula is as follows:
n / m > x
where n is the count of the most frequent labeling category, m is the total number of labels collected, and x is the set proportion.
9. The system as claimed in claim 6, wherein the deep object detection model and the deep sitting posture detection model are based on the EasyDL platform.
10. The system as claimed in claim 6, wherein the image annotation module uses the EasyDL platform to annotate the multi-person sitting posture sample images.
CN202011364696.8A 2020-11-27 2020-11-27 Multi-person sitting posture detection method and system based on object detection Pending CN112329728A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011364696.8A CN112329728A (en) 2020-11-27 2020-11-27 Multi-person sitting posture detection method and system based on object detection


Publications (1)

Publication Number Publication Date
CN112329728A true CN112329728A (en) 2021-02-05

Family

ID=74309082

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011364696.8A Pending CN112329728A (en) 2020-11-27 2020-11-27 Multi-person sitting posture detection method and system based on object detection

Country Status (1)

Country Link
CN (1) CN112329728A (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107169453A (en) * 2017-05-16 2017-09-15 湖南巨汇科技发展有限公司 A kind of sitting posture detecting method based on depth transducer
CN108549876A (en) * 2018-04-20 2018-09-18 重庆邮电大学 The sitting posture detecting method estimated based on target detection and human body attitude
CN110321786A (en) * 2019-05-10 2019-10-11 北京邮电大学 A kind of human body sitting posture based on deep learning monitors method and system in real time
CN110659565A (en) * 2019-08-15 2020-01-07 电子科技大学 3D multi-person human body posture estimation method based on porous convolution
CN111178313A (en) * 2020-01-02 2020-05-19 深圳数联天下智能科技有限公司 Method and equipment for monitoring user sitting posture
CN111178280A (en) * 2019-12-31 2020-05-19 北京儒博科技有限公司 Human body sitting posture identification method, device, equipment and storage medium
CN111738091A (en) * 2020-05-27 2020-10-02 复旦大学 Posture estimation and human body analysis system based on multi-task deep learning


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Liu Xianglong (刘祥龙) et al. (eds.): PaddlePaddle Deep Learning in Action (飞桨PaddlePaddle深度学习实战). Harbin: Northeast Forestry University Press, pages 61-62 *

Similar Documents

Publication Publication Date Title
CN104866829B (en) A kind of across age face verification method based on feature learning
CN106682616A (en) Newborn-painful-expression recognition method based on dual-channel-characteristic deep learning
CN111709358A (en) Teacher-student behavior analysis system based on classroom video
CN113069080B (en) Difficult airway assessment method and device based on artificial intelligence
CN106778657A (en) Neonatal pain expression classification method based on convolutional neural networks
CN107767935A (en) Medical image specification processing system and method based on artificial intelligence
CN104143079A (en) Method and system for face attribute recognition
CN109508755B (en) Psychological assessment method based on image cognition
CN104376315A (en) Detection method based on computer image processing and mode recognition and application of detection method
CN109034099A (en) A kind of expression recognition method and device
CN105224921A (en) A kind of facial image preferentially system and disposal route
CN109544523A (en) Quality of human face image evaluation method and device based on more attribute face alignments
CN109241917A (en) A kind of classroom behavior detection system based on computer vision
CN112990137B (en) Classroom student sitting posture analysis method based on template matching
CN110363129A (en) Autism early screening system based on smile normal form and audio-video behavioural analysis
CN107103293B (en) It is a kind of that the point estimation method is watched attentively based on joint entropy
WO2021248815A1 (en) High-precision child sitting posture detection and correction method and device
CN104376611A (en) Method and device for attendance of persons descending well on basis of face recognition
CN111444879A (en) Joint strain autonomous rehabilitation action recognition method and system
CN116563887B (en) Sleeping posture monitoring method based on lightweight convolutional neural network
CN110338759A (en) A kind of front pain expression data acquisition method
CN102184016A (en) Noncontact type mouse control method based on video sequence recognition
CN111444389A (en) Conference video analysis method and system based on target detection
CN109614927A (en) Micro- Expression Recognition based on front and back frame difference and Feature Dimension Reduction
CN111539408A (en) Intelligent point reading scheme based on photographing and object recognizing

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20210205