WO2021248815A1 - A high-precision child sitting posture detection and correction method and device - Google Patents

A high-precision child sitting posture detection and correction method and device

Info

Publication number
WO2021248815A1
Authority
WO
WIPO (PCT)
Prior art keywords
feature points
key
data
feature
detection
Prior art date
Application number
PCT/CN2020/128883
Other languages
English (en)
French (fr)
Inventor
李龙
宋恒
赵丹
崔修涛
Original Assignee
德派(嘉兴)医疗器械有限公司
Priority date
Filing date
Publication date
Application filed by 德派(嘉兴)医疗器械有限公司
Publication of WO2021248815A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2411 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on the proximity to a decision surface, e.g. support vector machines
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/25 Determination of region of interest [ROI] or a volume of interest [VOI]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/26 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V10/267 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion by performing operations on regions, e.g. growing, shrinking or watersheds

Definitions

  • the present invention relates to the technical field of computer vision processing, in particular to a high-precision method and device for children's sitting posture detection and correction.
  • the Chinese patent application with publication No. CN104622610A discloses a sitting posture correction device based on infrared line-of-sight monitoring, which detects the wearer's sitting posture using a base and a distance detection module that the monitored person must wear. Although small in size, its effectiveness suffers with active, curious children; and because it detects posture from the infrared distance between the wearer and a visible object, it can only capture the rough state of the child's head and neck and cannot comprehensively assess the curvature of the child's spine, so it is neither practical nor accurate.
  • the purpose of the present invention is to provide a high-precision child sitting posture detection and correction method and device that performs stable, reliable and comprehensive sitting posture analysis at high processing speed.
  • a high-precision child sitting posture detection and correction method includes the following steps:
  • the feature detection module judges from the key feature points whether the subject is the monitored subject
  • S4 calculate, from the key point data corresponding to the current frame, the region where the corresponding key feature points will lie in the next frame, and define that region as the ROI region;
  • in step S1, the video data of the monitored subject is collected by edge-AI extraction, and the key feature points correspond to the subject's back, chest and abdomen.
  • in step S3, the video frames containing the key feature points are cropped, scaled, filtered, denoised, histogram-equalized and gray-balanced, and converted into normalized standard images;
  • the standard images are then segmented along the bending direction of the spine region to obtain the key point data.
  • in step S4, the ROI region in frame t+1 is obtained from the position coordinates of the key point data in frame t.
  • in step S6, an attention mechanism is used to repeatedly compare details of the recognized object to improve comparison accuracy.
  • when the resolution of the key point data is insufficient for effective comparison against the corresponding data in the standard feature database, the key point data image can be reconstructed end-to-end into a high-resolution image before the comparison and then output.
  • an LSTM classifier is used to classify the detection data of the monitored subject's back, chest and abdomen.
  • a high-precision child sitting posture detection and correction device includes a data acquisition module, a feature detection module, a feature-of-interest detection module, an algorithm module, a quantitative analysis module and a standard feature database;
  • the data acquisition module collects video data of the monitored subject, extracts key feature points of the subject's spine region, and submits the key feature points to the feature detection module in temporal order;
  • the feature detection module judges from the key feature points whether the subject is the monitored subject, and sends qualifying data to the feature-of-interest detection module;
  • the feature-of-interest detection module performs separate detection for each of the different key feature points to obtain key point data of the monitored subject, and the algorithm module computes, from each separated key feature point, the ROI region associated with that key feature point in the next frame;
  • the algorithm module self-checks the ROI region to judge whether it is the spine region of the monitored subject; if so, it sends the ROI region to the feature-of-interest detection module to continue detection; if not, it interrupts the separate detection by the feature-of-interest detection module;
  • the standard feature database is a knowledge base of children's sitting postures, containing data for a variety of sitting-posture models;
  • the quantitative analysis module obtains the key point data in real time, integrates it and compares it against the corresponding data in the standard feature database, and produces a quantified evaluation of the learning state.
  • the present invention includes at least one of the following beneficial technical effects:
  • the filtering and denoising improve the system's robustness to lighting and posture changes and raise the accuracy of spine-region recognition.
  • Fig. 1 is a block diagram of a method according to an embodiment of the present invention
  • Fig. 2 is a detailed process flow diagram of an embodiment of the present invention.
  • in this embodiment, a generative adversarial network is first trained from sample data, which specifically involves four steps: acquiring sample data, preprocessing the training samples, illumination adversarial training of the network, and pose adversarial training of the network.
  • in the sample acquisition step, spine-region images under a variety of illuminations and angles are required as sample data.
  • this embodiment uses spine-region images in the 13 poses and under the 20 illumination conditions of CMU Multi-PIE as the training data set; to ease later model training, each sample image is first normalized.
  • in the illumination adversarial training step, an image and a target illumination label are selected from the sample data as input to the illumination generator, which outputs a target-illumination image; the target-illumination image and the original illumination label are then fed back into the illumination generator to obtain a fake original-illumination image.
  • the discriminator feeds the error between the real image and the fake original-illumination image back to the illumination generator, and the identity classifier and illumination classifier respectively feed back the identity and illumination errors between the target image and the generated image; the illumination generator, discriminator and classifiers are trained iteratively.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Data Mining & Analysis (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Human Computer Interaction (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Analysis (AREA)

Abstract

A high-precision child sitting posture detection and correction method. By partitioning the captured image into ROI regions in advance for fine-grained detection and recognition, the method reduces the amount of input data on one hand and simplifies the problem on the other, improving pipeline efficiency and processing speed; combined with ROI-region tracking and the associated filtering and denoising, it improves the system's robustness to lighting and posture changes and raises the accuracy of spine-region recognition.

Description

A high-precision child sitting posture detection and correction method and device
Technical Field
The present invention relates to the technical field of computer vision processing, and in particular to a high-precision child sitting posture detection and correction method and device.
Background Art
With the continued development of our country, children's education is receiving growing social attention, and learning has expanded from school-only study to home-based online learning, home-based offline learning and other modes. However, children generally lack self-discipline, parents' energy is limited, and teachers can hardly supervise home study, so learning efficiency is often low.
Two kinds of devices and methods currently exist for supervising students in online teaching. One is contact-based: its detection is relatively accurate, but the sensors must touch the child directly, which interferes with learning to some degree. The other is contactless, monitoring the child's external behavior and internal physiological changes through a camera.
For example, the Chinese patent with application publication No. CN104622610A discloses a sitting posture correction device based on infrared line-of-sight monitoring, which detects the wearer's sitting posture using a base and a distance detection module that the monitored person must wear. Although small in size, its effectiveness suffers with active, curious children; and because it detects posture by measuring the infrared distance between the wearer and a visible object, it can only capture the rough state of the child's head and neck and cannot comprehensively assess the curvature of the child's spine, so it is neither practical nor accurate.
Summary of the Invention
In view of the shortcomings of the prior art, the purpose of the present invention is to provide a high-precision child sitting posture detection and correction method and device that performs stable, reliable and comprehensive sitting posture analysis at high processing speed.
The above object of the present invention is achieved through the following technical solution:
A high-precision child sitting posture detection and correction method, comprising the following steps:
S1. Collect video data of the monitored subject at a preset frequency, extract key feature points of the subject's spine region, and submit the key feature points to a feature detection module in temporal order;
S2. The feature detection module judges from the key feature points whether the subject is the monitored subject;
If so, proceed to step S3;
If not, return to step S1;
S3. Segment the key feature points to obtain key point data of the spine region;
S4. From the key point data corresponding to the current frame, estimate the region where the corresponding key feature points will lie in the next frame, and define that region as the ROI region;
S5. Self-check the ROI region to judge whether it is the spine region of the monitored subject;
If so, proceed to step S3;
If not, return to step S1;
S6. A quantitative analysis module obtains the key point data in real time, integrates it and compares it against the corresponding data in a standard feature database, and produces a quantified evaluation of the learning state.
In step S1, the video data of the monitored subject is collected by edge-AI extraction, and the key feature points correspond to the subject's back, chest and abdomen.
In step S3, the video frames containing the key feature points are cropped, scaled, filtered, denoised, histogram-equalized and gray-balanced, and converted into normalized standard images;
the standard images are then segmented along the bending direction of the spine region to obtain the key point data.
In step S4, the ROI region in frame t+1 is obtained from the position coordinates of the key point data in frame t.
In step S6, an attention mechanism is used to repeatedly compare details of the recognized object to improve comparison accuracy.
When the resolution of the key point data is insufficient for effective comparison against the corresponding data in the standard feature database, the key point data image may be reconstructed end-to-end into a high-resolution image before the comparison and then output.
An LSTM classifier is used to classify the detection data of the monitored subject's back, chest and abdomen.
A high-precision child sitting posture detection and correction device, comprising a data acquisition module, a feature detection module, a feature-of-interest detection module, an algorithm module, a quantitative analysis module and a standard feature database;
the data acquisition module collects video data of the monitored subject, extracts key feature points of the subject's spine region, and submits the key feature points to the feature detection module in temporal order;
the feature detection module judges from the key feature points whether the subject is the monitored subject, and sends qualifying data to the feature-of-interest detection module;
the feature-of-interest detection module performs separate detection for each of the different key feature points to obtain key point data of the monitored subject, and the algorithm module computes, from each separated key feature point, the ROI region associated with that key feature point in the next frame;
the algorithm module self-checks the ROI region to judge whether it is the spine region of the monitored subject; if so, it sends the ROI region to the feature-of-interest detection module to continue detection; if not, it interrupts the separate detection by the feature-of-interest detection module;
the quantitative analysis module obtains the key point data in real time, integrates it and compares it against the corresponding data in the standard feature database, and produces a quantified evaluation of the learning state.
In summary, the present invention provides at least one of the following beneficial technical effects:
By pre-partitioning the captured image into ROI regions for fine-grained detection and recognition, the method reduces the amount of input data on one hand and simplifies the problem on the other, improving pipeline efficiency and processing speed; combined with ROI-region tracking and the associated filtering and denoising, it improves the system's robustness to lighting and posture changes and raises the accuracy of spine-region recognition.
Brief Description of the Drawings
Fig. 1 is a block diagram of the method according to an embodiment of the present invention;
Fig. 2 is a detailed process flow diagram of an embodiment of the present invention.
Detailed Description of Embodiments
The present invention is described in further detail below with reference to the drawings.
Referring to Fig. 1, a high-precision child sitting posture detection and correction method disclosed by the present invention comprises the following steps:
S1. Collect video data of the monitored subject at a preset frequency, extract key feature points of the subject's spine region, and submit the key feature points to a feature detection module in temporal order;
S2. The feature detection module judges from the key feature points whether the subject is the monitored subject;
If so, proceed to step S3;
If not, return to step S1;
S3. Segment the key feature points to obtain key point data of the spine region;
S4. From the key point data corresponding to the current frame, estimate the region where the corresponding key feature points will lie in the next frame, and define that region as the ROI region;
S5. Self-check the ROI region to judge whether it is the spine region of the monitored subject;
If so, proceed to step S3;
If not, return to step S1;
In this embodiment, a generative adversarial network must first be trained from sample data, which specifically involves four steps: acquiring sample data, preprocessing the training samples, illumination adversarial training of the network, and pose adversarial training of the network.
In the sample acquisition step, spine-region images under a variety of illuminations and angles are required as sample data; this embodiment uses spine-region images in the 13 poses and under the 20 illumination conditions of CMU Multi-PIE as the training data set. To ease later model training, each sample image is first normalized.
In the illumination adversarial training step, an image and a target illumination label are selected from the sample data as input to the illumination generator, which outputs a target-illumination image; the target-illumination image and the original illumination label are then fed back into the illumination generator to obtain a fake original-illumination image. The discriminator feeds the error between the real image and the fake original-illumination image back to the illumination generator, and the identity classifier and illumination classifier respectively feed back the identity and illumination errors between the target image and the generated image; the illumination generator, discriminator and classifiers are trained iteratively.
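To make the training loop above concrete, the following sketch implements a single illumination-generator update in PyTorch. It is a minimal illustration under stated assumptions: the patent does not specify architectures, loss weights, image size or label encoding, so the tiny convolutional generator, the linear stand-ins for the discriminator and classifiers, and the 10.0 weight on the reconstruction term are invented for the example; the discriminator and classifier updates of the iterative training are omitted for brevity.

```python
# Sketch of one illumination-generator update; all sizes and
# architectures are illustrative assumptions, not the patent's models.
import torch
import torch.nn as nn

N_LIGHTS, N_IDS, IMG = 20, 100, 64  # 20 Multi-PIE lights; identity count and image size assumed

class Generator(nn.Module):
    """Illumination generator: image + target light label -> re-lit image."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3 + N_LIGHTS, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1), nn.Tanh())

    def forward(self, img, light_onehot):
        # broadcast the one-hot illumination label over the spatial grid
        lab = light_onehot.view(-1, N_LIGHTS, 1, 1).expand(-1, -1, IMG, IMG)
        return self.net(torch.cat([img, lab], dim=1))

def head(out_dim):
    # stand-in for the discriminator / identity / illumination classifiers
    return nn.Sequential(nn.Flatten(), nn.Linear(3 * IMG * IMG, out_dim))

G = Generator()
D, C_id, C_light = head(1), head(N_IDS), head(N_LIGHTS)
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
bce, ce, l1 = nn.BCEWithLogitsLoss(), nn.CrossEntropyLoss(), nn.L1Loss()

def generator_step(img, src_light, tgt_light, identity):
    fake_tgt = G(img, tgt_light)       # target-illumination image
    fake_src = G(fake_tgt, src_light)  # cycled "fake original-illumination" image
    d_out = D(fake_tgt)
    loss = (bce(d_out, torch.ones_like(d_out))            # discriminator feedback
            + ce(C_id(fake_tgt), identity)                # identity-classifier feedback
            + ce(C_light(fake_tgt), tgt_light.argmax(1))  # illumination-classifier feedback
            + 10.0 * l1(fake_src, img))                   # error of the fake original image
    opt_g.zero_grad(); loss.backward(); opt_g.step()
    return float(loss)
```

In a full run, `generator_step` would alternate with analogous updates for `D`, `C_id` and `C_light`, matching the iterative training described above.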
S6. The quantitative analysis module obtains the key point data in real time, integrates and classifies it, compares it against the corresponding data in the standard feature database, and produces a quantified evaluation of the learning state.
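As a hedged illustration of this comparison, the sketch below scores measured spine key points against named posture templates. The dictionary layout of the standard feature database and the mean-distance score are assumptions for the example; the patent does not disclose its actual data format or metric.

```python
# Illustrative S6 comparison; database layout and scoring rule are assumed.
import numpy as np

def evaluate_posture(keypoints, standard_db):
    """Score measured spine keypoints against each stored posture model."""
    pts = np.asarray(keypoints, dtype=np.float32)
    pts = (pts - pts.mean(axis=0)) / (pts.std() + 1e-6)  # remove offset and scale
    scores = {}
    for name, template in standard_db.items():
        ref = np.asarray(template, dtype=np.float32)
        ref = (ref - ref.mean(axis=0)) / (ref.std() + 1e-6)
        scores[name] = float(np.linalg.norm(pts - ref, axis=1).mean())
    best = min(scores, key=scores.get)                   # closest posture model
    return best, scores[best]                            # quantified evaluation

# usage: evaluate_posture(measured_pts, {"upright": ref_a, "slouched": ref_b})
```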
In step S1, the video data of the monitored subject is collected by edge-AI extraction, and the key feature points correspond to the subject's back, chest and abdomen.
In step S3, the video frames containing the key feature points are cropped, scaled, filtered, denoised, histogram-equalized and gray-balanced, and converted into normalized standard images;
the standard images are then segmented along the bending direction of the spine region to obtain the key point data.
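A minimal OpenCV sketch of this normalization chain follows. The crop rectangle, target size and specific filters are illustrative assumptions, since the text names the operations but fixes no parameters.

```python
# Illustrative S3 preprocessing; parameters and filter choices are assumed.
import cv2
import numpy as np

def normalize_frame(frame, roi, size=(128, 256)):
    """Crop, scale, filter, denoise, equalize and gray-balance one video frame."""
    x, y, w, h = roi
    patch = frame[y:y + h, x:x + w]                 # crop to the key-feature region
    patch = cv2.resize(patch, size)                 # scale to a fixed input size
    patch = cv2.fastNlMeansDenoisingColored(patch)  # denoise
    patch = cv2.GaussianBlur(patch, (3, 3), 0)      # smoothing filter
    gray = cv2.cvtColor(patch, cv2.COLOR_BGR2GRAY)  # reduce to a gray-balanced channel
    gray = cv2.equalizeHist(gray)                   # histogram equalization
    return gray.astype(np.float32) / 255.0          # normalized standard image
```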
In step S4, the ROI region in frame t+1 is obtained from the position coordinates of the key point data in frame t.
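One plausible reading of this step, sketched below, is to bound the frame-t key points and dilate the box by a motion margin to predict the frame-(t+1) ROI; the margin factor and the (x, y, w, h) convention are assumptions.

```python
# Illustrative S4 ROI propagation from frame t to frame t+1.
import numpy as np

def predict_roi(keypoints_t, frame_shape, margin=0.25):
    """Bound the frame-t keypoints and expand them into an ROI for frame t+1."""
    pts = np.asarray(keypoints_t, dtype=np.float32)
    x0, y0 = pts.min(axis=0)
    x1, y1 = pts.max(axis=0)
    dx, dy = (x1 - x0) * margin, (y1 - y0) * margin  # allowance for inter-frame motion
    h, w = frame_shape[:2]
    x0, y0 = max(0, int(x0 - dx)), max(0, int(y0 - dy))
    x1, y1 = min(w, int(x1 + dx)), min(h, int(y1 + dy))
    return x0, y0, x1 - x0, y1 - y0                  # (x, y, w, h) ROI for frame t+1
```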
In step S6, an attention mechanism is used to repeatedly compare details of the recognized object to improve comparison accuracy.
When the resolution of the key point data is insufficient for effective comparison against the corresponding data in the standard feature database, the key point data image may be reconstructed end-to-end into a high-resolution image before the comparison and then output.
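Since the text names only the end-to-end principle, the sketch below shows one common way such a reconstruction could look: an SRCNN-style network that maps the low-resolution key point image directly to a high-resolution one. The architecture and scale factor are assumptions, not the patent's model.

```python
# Illustrative end-to-end super-resolution; SRCNN-style layout is assumed.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SuperResolver(nn.Module):
    """Map a low-resolution keypoint image directly to a high-resolution one."""
    def __init__(self, scale=2):
        super().__init__()
        self.scale = scale
        self.net = nn.Sequential(
            nn.Conv2d(1, 64, 9, padding=4), nn.ReLU(),  # feature extraction
            nn.Conv2d(64, 32, 1), nn.ReLU(),            # non-linear mapping
            nn.Conv2d(32, 1, 5, padding=2))             # reconstruction

    def forward(self, x):
        x = F.interpolate(x, scale_factor=self.scale,
                          mode='bicubic', align_corners=False)
        return self.net(x)  # refined high-resolution image for the comparison
```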
An LSTM classifier is used to classify the detection data of the monitored subject's back, chest and abdomen.
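A minimal PyTorch sketch of such a classifier follows, assuming each of the three regions (back, chest, abdomen) contributes one feature vector per frame; the feature size, hidden size and class set are illustrative assumptions.

```python
# Illustrative LSTM classifier over per-region detection sequences.
import torch
import torch.nn as nn

class RegionLSTM(nn.Module):
    """Classify temporal back/chest/abdomen feature sequences into posture states."""
    def __init__(self, feat_dim=32, hidden=64, n_classes=4):
        super().__init__()
        self.lstm = nn.LSTM(feat_dim * 3, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, back, chest, abdomen):
        seq = torch.cat([back, chest, abdomen], dim=-1)  # (batch, time, 3 * feat_dim)
        out, _ = self.lstm(seq)
        return self.head(out[:, -1])                     # class logits from last step
```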
A high-precision child sitting posture detection and correction device comprises a data acquisition module, a feature detection module, a feature-of-interest detection module, an algorithm module, a quantitative analysis module and a standard feature database;
the data acquisition module collects video data of the monitored subject, extracts key feature points of the subject's spine region, and submits the key feature points to the feature detection module in temporal order;
the feature detection module judges from the key feature points whether the subject is the monitored subject, and sends qualifying data to the feature-of-interest detection module;
the feature-of-interest detection module performs separate detection for each of the different key feature points to obtain key point data of the monitored subject, and the algorithm module computes, from each separated key feature point, the ROI region associated with that key feature point in the next frame;
the algorithm module self-checks the ROI region to judge whether it is the spine region of the monitored subject; if so, it sends the ROI region to the feature-of-interest detection module to continue detection; if not, it interrupts the separate detection by the feature-of-interest detection module;
The standard feature database is a knowledge base of children's sitting postures, containing data for a variety of sitting-posture models. The quantitative analysis module obtains the key point data in real time, integrates it and compares it against the corresponding data in the standard feature database, and produces a quantified evaluation of the learning state.
In summary, the present invention provides at least one of the following beneficial technical effects:
By pre-partitioning the captured image into ROI regions for fine-grained detection and recognition, the method reduces the amount of input data on one hand and simplifies the problem on the other, improving pipeline efficiency and processing speed; combined with ROI-region tracking and the associated filtering and denoising, it improves the system's robustness to lighting and posture changes and raises the accuracy of spine-region recognition.
The embodiments above are preferred embodiments of the present invention and do not thereby limit its scope of protection; accordingly, all equivalent changes made according to the structure, shape and principle of the present invention shall be covered within the protection scope of the present invention.

Claims (8)

  1. A high-precision child sitting posture detection and correction method, characterized by comprising the following steps:
    S1. collecting video data of a monitored subject at a preset frequency, extracting key feature points of the subject's spine region, and submitting the key feature points to a feature detection module in temporal order;
    S2. judging, by the feature detection module and from the key feature points, whether the subject is the monitored subject;
    if so, proceeding to step S3;
    if not, returning to step S1;
    S3. segmenting the key feature points to obtain key point data of the spine region;
    S4. estimating, from the key point data corresponding to the current frame, the region where the corresponding key feature points will lie in the next frame, and defining that region as an ROI region;
    S5. self-checking the ROI region to judge whether it is the spine region of the monitored subject;
    if so, proceeding to step S3;
    if not, returning to step S1;
    S6. obtaining the key point data in real time by a quantitative analysis module, integrating it and comparing it against corresponding data in a standard feature database, and producing a quantified evaluation of the learning state.
  2. The high-precision child sitting posture detection and correction method according to claim 1, characterized in that in step S1, the video data of the monitored subject is collected by edge-AI extraction, and the key feature points correspond to the subject's back, chest and abdomen.
  3. The high-precision child sitting posture detection and correction method according to claim 1, characterized in that in step S3, the video frames containing the key feature points are cropped, scaled, filtered, denoised, histogram-equalized and gray-balanced, and converted into normalized standard images;
    the standard images are then segmented along the bending direction of the spine region to obtain the key point data.
  4. The high-precision child sitting posture detection and correction method according to claim 3, characterized in that in step S4, the ROI region in frame t+1 is obtained from the position coordinates of the key point data in frame t.
  5. The high-precision child sitting posture detection and correction method according to claim 1, characterized in that in step S6, an attention mechanism is used to repeatedly compare details of the recognized object to improve comparison accuracy.
  6. The high-precision child sitting posture detection and correction method according to claim 5, characterized in that when the resolution of the key point data is insufficient for effective comparison against the corresponding data in the standard feature database, the key point data image may be reconstructed end-to-end into a high-resolution image before the comparison and then output.
  7. The high-precision child sitting posture detection and correction method according to claim 2, characterized in that an LSTM classifier is used to classify the detection data of the monitored subject's back, chest and abdomen.
  8. A high-precision child sitting posture detection and correction device, characterized by comprising a data acquisition module, a feature detection module, a feature-of-interest detection module, an algorithm module, a quantitative analysis module and a standard feature database;
    the data acquisition module collects video data of a monitored subject, extracts key feature points of the subject's spine region, and submits the key feature points to the feature detection module in temporal order;
    the feature detection module judges from the key feature points whether the subject is the monitored subject, and sends qualifying data to the feature-of-interest detection module;
    the feature-of-interest detection module performs separate detection for each of the different key feature points to obtain key point data of the monitored subject, and the algorithm module computes, from each separated key feature point, the ROI region associated with that key feature point in the next frame;
    the algorithm module self-checks the ROI region to judge whether it is the spine region of the monitored subject; if so, it sends the ROI region to the feature-of-interest detection module to continue detection; if not, it interrupts the separate detection by the feature-of-interest detection module;
    the quantitative analysis module obtains the key point data in real time, integrates it and compares it against the corresponding data in the standard feature database, and produces a quantified evaluation of the learning state.
PCT/CN2020/128883 2020-06-13 2020-11-15 A high-precision child sitting posture detection and correction method and device WO2021248815A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010538606.6 2020-06-13
CN202010538606.6A CN111695520A (zh) 2020-06-13 2020-06-13 A high-precision child sitting posture detection and correction method and device

Publications (1)

Publication Number Publication Date
WO2021248815A1 true WO2021248815A1 (zh) 2021-12-16

Family

ID=72480812

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/128883 WO2021248815A1 (zh) 2020-06-13 2020-11-15 A high-precision child sitting posture detection and correction method and device

Country Status (2)

Country Link
CN (1) CN111695520A (zh)
WO (1) WO2021248815A1 (zh)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111695520A (zh) * 2020-06-13 2020-09-22 德沃康科技集团有限公司 A high-precision child sitting posture detection and correction method and device
CN113780220A (zh) * 2021-09-17 2021-12-10 东胜神州旅游管理有限公司 Child sitting posture detection method and system based on child face recognition

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130028517A1 (en) * 2011-07-27 2013-01-31 Samsung Electronics Co., Ltd. Apparatus, method, and medium detecting object pose
CN104038738A (zh) * 2014-06-04 2014-09-10 东北大学 Intelligent monitoring system and method for extracting coordinates of human joint points
CN106951871A (zh) * 2017-03-24 2017-07-14 北京地平线机器人技术研发有限公司 Motion trajectory recognition method and apparatus of an operating body, and electronic device
CN109176536A (zh) * 2018-08-06 2019-01-11 深圳市沃特沃德股份有限公司 Posture judgment method and device
CN109190562A (zh) * 2018-09-05 2019-01-11 广州维纳斯家居股份有限公司 Intelligent sitting posture monitoring method and device, smart lifting desk and storage medium
CN111127848A (zh) * 2019-12-27 2020-05-08 深圳奥比中光科技有限公司 Human sitting posture detection system and method
CN111695520A (zh) * 2020-06-13 2020-09-22 德沃康科技集团有限公司 A high-precision child sitting posture detection and correction method and device

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116469175A (zh) * 2023-06-20 2023-07-21 青岛黄海学院 Visual interaction method and system for early childhood education
CN116469175B (zh) * 2023-06-20 2023-08-29 青岛黄海学院 Visual interaction method and system for early childhood education

Also Published As

Publication number Publication date
CN111695520A (zh) 2020-09-22

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20940231

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20940231

Country of ref document: EP

Kind code of ref document: A1

32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 17.07.2023)