WO2021248815A1 - High-precision method and device for detecting and correcting children's sitting posture - Google Patents
High-precision method and device for detecting and correcting children's sitting posture
- Publication number
- WO2021248815A1 (PCT/CN2020/128883)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- feature points
- key
- data
- feature
- detection
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
- G06F18/2411—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on the proximity to a decision surface, e.g. support vector machines
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/25—Determination of region of interest [ROI] or a volume of interest [VOI]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/26—Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
- G06V10/267—Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion by performing operations on regions, e.g. growing, shrinking or watersheds
Definitions
- the present invention relates to the technical field of computer vision processing, in particular to a high-precision method and device for children's sitting posture detection and correction.
- the Chinese patent application with publication number CN104622610A discloses a sitting-posture correction device based on infrared distance monitoring, in which the monitored person must wear a base and a distance-detection module. Although the device is small, its effectiveness suffers with active, curious children; and because it infers posture only from the infrared distance between the wearer and a visible object, it captures just the rough state of the child's head and neck and cannot comprehensively characterize the curvature of the child's spine. It is therefore neither practical nor accurate.
- the purpose of the present invention is to provide a high-precision method and device for detecting and correcting children's sitting posture that performs stable, reliable, and comprehensive sitting-posture analysis at high processing speed.
- a high-precision child sitting posture detection and correction method includes the following steps:
- the feature detection module determines whether it is a monitoring object according to the key feature points
- S4 Calculate the region where the corresponding key feature points in the next frame are located according to the facial key point data corresponding to the current frame number, and define the region as an ROI region;
- in step S1, the video data of the monitored object is collected by edge-AI extraction, and the key feature points correspond to the back, chest, and abdomen of the monitored object.
- in step S3, the video frame containing the key feature points is cropped, scaled, filtered, denoised, histogram-equalized, and gray-balanced, then converted into a normalized standard image;
- the standard image is then segmented according to the bending direction of the spine to obtain the key point data.
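Step S3's normalization chain (crop, scale, filter/denoise, histogram equalization, intensity balancing) can be sketched with NumPy alone. The patch size, the 3x3 box filter, and the 8-bit equalization below are illustrative assumptions, not choices fixed by the patent:

```python
import numpy as np

def to_standard_image(frame, bbox, size=(64, 64)):
    """Normalize a grayscale video-frame region: crop, denoise, scale,
    and histogram-equalize to intensities in [0, 1].
    Illustrative sketch; the patent fixes neither sizes nor filters."""
    x0, y0, x1, y1 = bbox
    patch = frame[y0:y1, x0:x1].astype(np.float64)
    # Simple denoising: 3x3 box blur built from shifted padded views.
    padded = np.pad(patch, 1, mode="edge")
    patch = sum(padded[dy:dy + patch.shape[0], dx:dx + patch.shape[1]]
                for dy in range(3) for dx in range(3)) / 9.0
    # Nearest-neighbour rescale to the canonical size.
    h, w = patch.shape
    rows = np.arange(size[0]) * h // size[0]
    cols = np.arange(size[1]) * w // size[1]
    patch = patch[rows][:, cols]
    # Histogram equalization over 8-bit levels, then rescale to [0, 1].
    levels = np.clip(patch, 0, 255).astype(np.uint8)
    hist = np.bincount(levels.ravel(), minlength=256)
    cdf = np.cumsum(hist).astype(np.float64)
    cdf = (cdf - cdf.min()) / max(cdf.max() - cdf.min(), 1)
    return cdf[levels]
```

The equalized output is what a downstream segmentation or classifier stage would consume as the "normalized standard image".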
- in step S4, the ROI region in frame t+1 is obtained from the position coordinates of the key point data in frame t.
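The patent does not say how the frame t coordinates yield the frame t+1 region. One minimal reading is the key points' bounding box expanded by a margin, as sketched here; the 20% margin and default frame size are assumptions:

```python
def roi_from_keypoints(points, margin=0.2, frame_w=1920, frame_h=1080):
    """Predict the frame t+1 search region (ROI) from frame t key-point
    coordinates: their bounding box, grown by `margin`, clipped to the
    frame. Margin and frame dimensions are illustrative assumptions."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    w, h = max(xs) - min(xs), max(ys) - min(ys)
    mx, my = margin * w, margin * h
    return (max(0, int(min(xs) - mx)), max(0, int(min(ys) - my)),
            min(frame_w, int(max(xs) + mx)), min(frame_h, int(max(ys) + my)))
```

Restricting the next frame's detection to this region is what gives the scheme its speed advantage over full-frame detection.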
- in step S6, an attention mechanism is used to repeatedly compare details of the recognized object, improving comparison accuracy.
- when the resolution of the key point data is too low for an effective comparison against the standard feature database, the key-point image can first be reconstructed end-to-end into a high-resolution image and then output for the comparison.
- an LSTM classifier is used to classify the detection data for the back, chest, and abdomen of the monitored object.
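The patent names LSTM classification of the back/chest/abdomen detection sequences without further detail. Below is a single LSTM cell step in plain Python to illustrate the recurrence such a classifier relies on; the gate-weight dictionary `W` and the scalar hidden state are illustrative assumptions (a real classifier would learn vector weights and add a softmax over the final hidden state):

```python
import math

def lstm_step(x, h_prev, c_prev, W):
    """One LSTM cell step over a per-frame feature vector x.
    W maps gate name -> (input weights, recurrent weight, bias).
    Scalar hidden/cell state for readability; illustrative only."""
    def sigmoid(z):
        return 1.0 / (1.0 + math.exp(-z))
    def gate(name, squash):
        w_x, w_h, b = W[name]
        z = sum(wi * xi for wi, xi in zip(w_x, x)) + w_h * h_prev + b
        return squash(z)
    i = gate("input", sigmoid)    # how much new information enters
    f = gate("forget", sigmoid)   # how much old cell state survives
    o = gate("output", sigmoid)   # how much cell state is exposed
    g = gate("cand", math.tanh)   # candidate cell update
    c = f * c_prev + i * g
    return o * math.tanh(c), c    # (new hidden state, new cell state)
```

Iterating this step over the time-ordered key-point features gives the sequence summary that the posture classes would be read from.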
- a high-precision child sitting-posture detection and correction device includes a data acquisition module, a feature detection module, a feature-of-interest detection module, an algorithm module, a quantitative analysis module, and a standard feature database;
- the data collection module collects video data of the monitored object, extracts key feature points of the spine of the monitored object, and sequentially submits the key feature points to the feature detection module according to a time sequence;
- the feature detection module determines from the key feature points whether the person is the monitored object, and sends the qualifying data to the feature-of-interest detection module;
- the feature-of-interest detection module performs separate detection for each of the different key feature points to obtain the key point data of the monitored object, and the algorithm module calculates, from each separated key feature point, the ROI region in the next frame associated with that feature point;
- the algorithm module performs a self-check on the ROI region to determine whether it shows the spine of the monitored object; if it does, it sends the ROI region to the feature-of-interest detection module to continue detection, and if not, it interrupts that module's separate detection;
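The patent requires a self-check that the ROI still shows the monitored child's spine but leaves the test itself open. A hypothetical minimal check is normalized cross-correlation against a stored spine template; the NCC criterion and the threshold below are assumptions:

```python
import numpy as np

def roi_is_spine(roi_patch, template, threshold=0.6):
    """Self-check sketch: accept the ROI if its normalized
    cross-correlation with a stored spine template clears a threshold.
    Criterion and threshold are assumptions, not from the patent."""
    a = roi_patch.astype(float).ravel()
    b = template.astype(float).ravel()
    a = a - a.mean()
    b = b - b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    if denom == 0:
        return False  # flat patch carries no spine structure
    return float(a @ b) / denom >= threshold
```

A failed check would interrupt the feature-of-interest detection and return the pipeline to full-frame collection, matching the if/else branch above.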
- the quantitative analysis module obtains the key point data in real time, integrates it, and compares it with the corresponding data in the standard feature database to obtain a quantified learning-state evaluation result.
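The quantified evaluation is described only as integrating key-point data and comparing it with the standard feature database. One hypothetical realization is the mean key-point deviation from the best-matching sitting-posture model, mapped to a 0 to 100 score; the mapping and the tolerance `tol` are assumptions:

```python
def posture_score(keypoints, reference, tol=15.0):
    """Quantified-evaluation sketch: mean Euclidean deviation of the
    observed key points from a reference posture model, mapped linearly
    to a 0-100 score (0 at deviation >= tol pixels). Illustrative only."""
    dists = [((x - rx) ** 2 + (y - ry) ** 2) ** 0.5
             for (x, y), (rx, ry) in zip(keypoints, reference)]
    mean_dev = sum(dists) / len(dists)
    return max(0.0, 100.0 * (1.0 - mean_dev / tol))
```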
- the present invention includes at least one of the following beneficial technical effects:
- the filtering and denoising improve the system's robustness to illumination and posture changes and improve the accuracy of spine recognition.
- Fig. 1 is a block diagram of a method according to an embodiment of the present invention
- FIG. 2 is a specific process flow diagram of an embodiment of the present invention.
- the feature detection module determines whether it is a monitoring object according to the key feature points
- S4 Calculate the region where the corresponding key feature points in the next frame are located according to the facial key point data corresponding to the current frame number, and define the region as an ROI region;
- an adversarial network is trained on sample data in four steps: acquiring the sample data, preprocessing the training samples, illumination adversarial training of the network, and pose adversarial training of the network.
- in the sample-acquisition step, images of the spine region under various illuminations and viewing angles are collected as sample data.
- this embodiment uses the 13 poses in CMU Multi-PIE and spine-region images under 20 illumination conditions as the training data set; to simplify later model training, each sample image is first normalized.
- an image and a target illumination label are selected from the sample data as input to the illumination generator, which outputs a target-illumination image; that image, together with the original illumination label, is then fed back into the illumination generator to produce a fake original-illumination image.
- the discriminator feeds the error between the real image and the fake original-illumination image back to the illumination generator, while an identity classifier and an illumination classifier respectively feed back the identity and illumination errors between the target image and the generated image; the illumination generator, discriminator, and classifiers are trained iteratively in this way.
- in step S1, the video data of the monitored object is collected by edge-AI extraction, and the key feature points correspond to the back, chest, and abdomen of the monitored object.
- in step S3, the video frame containing the key feature points is cropped, scaled, filtered, denoised, histogram-equalized, and gray-balanced, then converted into a normalized standard image;
- the standard image is then segmented according to the bending direction of the spine to obtain the key point data.
- in step S4, the ROI region in frame t+1 is obtained from the position coordinates of the key point data in frame t.
- in step S6, an attention mechanism is used to repeatedly compare details of the recognized object, improving comparison accuracy.
- the key-point data image can be reconstructed end-to-end into a high-resolution image before the comparison and then output.
- an LSTM classifier is used to classify the detection data for the back, chest, and abdomen of the monitored object.
- a high-precision child sitting-posture detection and correction device includes a data acquisition module, a feature detection module, a feature-of-interest detection module, an algorithm module, a quantitative analysis module, and a standard feature database;
- the data collection module collects the video data of the monitored object, extracts the key feature points of the spine of the monitored object, and submits the key feature points to the feature detection module in sequence according to the time sequence;
- the feature detection module determines from the key feature points whether the person is the monitored object, and sends the qualifying data to the feature-of-interest detection module;
- the feature-of-interest detection module performs separate detection for each of the different key feature points to obtain the key point data of the monitored object, and the algorithm module calculates, from each separated key feature point, the ROI region in the next frame associated with that feature point;
- the algorithm module performs a self-check on the ROI region to determine whether it shows the spine of the monitored object; if it does, it sends the ROI region to the feature-of-interest detection module to continue detection, and if not, it interrupts that module's separate detection;
- the standard feature database is the children's sitting posture knowledge base, which contains data of various sitting posture models.
- the quantitative analysis module obtains the key point data in real time, integrates it, and compares it with the corresponding data in the standard feature database to obtain a quantified learning-state evaluation result.
- the present invention includes at least one of the following beneficial technical effects:
- the filtering and denoising improve the system's robustness to illumination and posture changes and improve the accuracy of spine recognition.
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Multimedia (AREA)
- Data Mining & Analysis (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Evolutionary Biology (AREA)
- Evolutionary Computation (AREA)
- Computer Vision & Pattern Recognition (AREA)
- General Engineering & Computer Science (AREA)
- Bioinformatics & Computational Biology (AREA)
- Artificial Intelligence (AREA)
- Human Computer Interaction (AREA)
- Life Sciences & Earth Sciences (AREA)
- Image Analysis (AREA)
Abstract
Description
Claims (8)
- 1. A high-precision method for detecting and correcting children's sitting posture, characterized by comprising the following steps: S1, collecting video data of a monitored object at a preset frequency, extracting key feature points of the spine region of the monitored object, and submitting the key feature points to a feature detection module in time order; S2, the feature detection module determining from the key feature points whether the person is the monitored object; if yes, proceeding to step S3, and if not, returning to step S1; S3, segmenting the key feature points to obtain key point data of the spine region; S4, inferring, from the facial key point data corresponding to the current frame number, the region where the corresponding key feature points will lie in the next frame, and defining that region as an ROI region; S5, performing a self-check on the ROI region to determine whether it shows the spine region of the monitored object; if yes, proceeding to step S3, and if not, returning to step S1; S6, a quantitative analysis module acquiring the key point data in real time, integrating it, and comparing it with the corresponding data in a standard feature database to obtain a quantified learning-state evaluation result.
- 2. The high-precision method for detecting and correcting children's sitting posture according to claim 1, characterized in that in step S1 the video data of the monitored object is collected by edge-AI extraction, and the key feature points correspond to the back, chest, and abdomen of the monitored object.
- 3. The high-precision method for detecting and correcting children's sitting posture according to claim 1, characterized in that in step S3 the video frame containing the key feature points is cropped, scaled, filtered, denoised, histogram-equalized, and gray-balanced, and converted into a normalized standard image; the standard image is then segmented according to the bending direction of the spine region to obtain the key point data.
- 4. The high-precision method for detecting and correcting children's sitting posture according to claim 3, characterized in that in step S4 the ROI region in frame t+1 is obtained from the position coordinates of the key point data in frame t.
- 5. The high-precision method for detecting and correcting children's sitting posture according to claim 1, characterized in that in step S6 an attention mechanism is used to repeatedly compare details of the recognized object to improve comparison accuracy.
- 6. The high-precision method for detecting and correcting children's sitting posture according to claim 5, characterized in that, when the resolution of the key point data is insufficient for an effective comparison with the corresponding data in the standard feature database, the image of the key point data may be reconstructed end-to-end into a high-resolution image before the comparison and then output.
- 7. The high-precision method for detecting and correcting children's sitting posture according to claim 2, characterized in that an LSTM classifier is used to classify the detection data of the back, chest, and abdomen of the monitored object.
- 8. A high-precision device for detecting and correcting children's sitting posture, characterized by comprising a data acquisition module, a feature detection module, a feature-of-interest detection module, an algorithm module, a quantitative analysis module, and a standard feature database; the data acquisition module collects video data of the monitored object, extracts key feature points of the spine region of the monitored object, and submits the key feature points to the feature detection module in time order; the feature detection module determines from the key feature points whether the person is the monitored object and sends the qualifying data to the feature-of-interest detection module; the feature-of-interest detection module performs separate detection for each of the different key feature points to obtain the key point data of the monitored object, and the algorithm module calculates, from each separated key feature point, the ROI region in the next frame associated with that feature point; the algorithm module performs a self-check on the ROI region to determine whether it shows the spine region of the monitored object; if so, it sends the ROI region to the feature-of-interest detection module to continue detection, and if not, it interrupts that module's separate detection; the quantitative analysis module acquires the key point data in real time, integrates it, and compares it with the corresponding data in the standard feature database to obtain a quantified learning-state evaluation result.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010538606.6 | 2020-06-13 | ||
CN202010538606.6A CN111695520A (zh) | 2020-06-13 | 2020-06-13 | 一种高精度的儿童坐姿检测与矫正方法及装置 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2021248815A1 true WO2021248815A1 (zh) | 2021-12-16 |
Family
ID=72480812
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2020/128883 WO2021248815A1 (zh) | 2020-06-13 | 2020-11-15 | 一种高精度的儿童坐姿检测与矫正方法及装置 |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN111695520A (zh) |
WO (1) | WO2021248815A1 (zh) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116469175A (zh) * | 2023-06-20 | 2023-07-21 | 青岛黄海学院 | 一种幼儿教育可视化互动方法及系统 |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111695520A (zh) * | 2020-06-13 | 2020-09-22 | 德沃康科技集团有限公司 | 一种高精度的儿童坐姿检测与矫正方法及装置 |
CN113780220A (zh) * | 2021-09-17 | 2021-12-10 | 东胜神州旅游管理有限公司 | 一种基于童脸识别的儿童坐姿检测方法及系统 |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20130028517A1 (en) * | 2011-07-27 | 2013-01-31 | Samsung Electronics Co., Ltd. | Apparatus, method, and medium detecting object pose |
CN104038738A (zh) * | 2014-06-04 | 2014-09-10 | 东北大学 | 一种提取人体关节点坐标的智能监控系统及方法 |
CN106951871A (zh) * | 2017-03-24 | 2017-07-14 | 北京地平线机器人技术研发有限公司 | 操作体的运动轨迹识别方法、装置和电子设备 |
CN109176536A (zh) * | 2018-08-06 | 2019-01-11 | 深圳市沃特沃德股份有限公司 | 姿势判断方法及装置 |
CN109190562A (zh) * | 2018-09-05 | 2019-01-11 | 广州维纳斯家居股份有限公司 | 智能坐姿监控方法、装置、智能升降桌及存储介质 |
CN111127848A (zh) * | 2019-12-27 | 2020-05-08 | 深圳奥比中光科技有限公司 | 一种人体坐姿检测系统及方法 |
CN111695520A (zh) * | 2020-06-13 | 2020-09-22 | 德沃康科技集团有限公司 | 一种高精度的儿童坐姿检测与矫正方法及装置 |
- 2020-06-13: Chinese application CN202010538606.6 filed (publication CN111695520A, status: pending)
- 2020-11-15: PCT application PCT/CN2020/128883 filed (WO2021248815A1, active Application Filing)
Patent Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20130028517A1 (en) * | 2011-07-27 | 2013-01-31 | Samsung Electronics Co., Ltd. | Apparatus, method, and medium detecting object pose |
CN104038738A (zh) * | 2014-06-04 | 2014-09-10 | 东北大学 | 一种提取人体关节点坐标的智能监控系统及方法 |
CN106951871A (zh) * | 2017-03-24 | 2017-07-14 | 北京地平线机器人技术研发有限公司 | 操作体的运动轨迹识别方法、装置和电子设备 |
CN109176536A (zh) * | 2018-08-06 | 2019-01-11 | 深圳市沃特沃德股份有限公司 | 姿势判断方法及装置 |
CN109190562A (zh) * | 2018-09-05 | 2019-01-11 | 广州维纳斯家居股份有限公司 | 智能坐姿监控方法、装置、智能升降桌及存储介质 |
CN111127848A (zh) * | 2019-12-27 | 2020-05-08 | 深圳奥比中光科技有限公司 | 一种人体坐姿检测系统及方法 |
CN111695520A (zh) * | 2020-06-13 | 2020-09-22 | 德沃康科技集团有限公司 | 一种高精度的儿童坐姿检测与矫正方法及装置 |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116469175A (zh) * | 2023-06-20 | 2023-07-21 | 青岛黄海学院 | 一种幼儿教育可视化互动方法及系统 |
CN116469175B (zh) * | 2023-06-20 | 2023-08-29 | 青岛黄海学院 | 一种幼儿教育可视化互动方法及系统 |
Also Published As
Publication number | Publication date |
---|---|
CN111695520A (zh) | 2020-09-22 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2021248815A1 (zh) | 一种高精度的儿童坐姿检测与矫正方法及装置 | |
KR101683712B1 (ko) | 트레이스 변환을 이용한 홍채 및 눈 인식 시스템 | |
CN110837784B (zh) | 一种基于人体头部特征的考场偷窥作弊检测系统 | |
CN105138954B (zh) | 一种图像自动筛选查询识别系统 | |
CN111563452B (zh) | 一种基于实例分割的多人体姿态检测及状态判别方法 | |
Rouhi et al. | A review on feature extraction techniques in face recognition | |
CN111507592B (zh) | 一种面向服刑人员的主动改造行为的评估方法 | |
CN104123543A (zh) | 一种基于人脸识别的眼球运动识别方法 | |
CN105740779A (zh) | 人脸活体检测的方法和装置 | |
CN111914643A (zh) | 一种基于骨骼关键点检测的人体动作识别方法 | |
CN109544523A (zh) | 基于多属性人脸比对的人脸图像质量评价方法及装置 | |
CN104091173B (zh) | 一种基于网络摄像机的性别识别方法及装置 | |
WO2021248814A1 (zh) | 一种鲁棒的家庭儿童学习状态视觉监督方法及装置 | |
US20230237694A1 (en) | Method and system for detecting children's sitting posture based on face recognition of children | |
CN110163567A (zh) | 基于多任务级联卷积神经网络的课堂点名系统 | |
Phuong et al. | An eye blink detection technique in video surveillance based on eye aspect ratio | |
Tang et al. | Automatic facial expression analysis of students in teaching environments | |
Jin et al. | Estimating human weight from a single image | |
CN107862246A (zh) | 一种基于多视角学习的眼睛注视方向检测方法 | |
CN112329698A (zh) | 一种基于智慧黑板的人脸识别方法和系统 | |
Chen et al. | Intelligent Recognition of Physical Education Teachers' Behaviors Using Kinect Sensors and Machine Learning. | |
Batista | Locating facial features using an anthropometric face model for determining the gaze of faces in image sequences | |
CN111008569A (zh) | 一种基于人脸语义特征约束卷积网络的眼镜检测方法 | |
Dongre et al. | Automated Online Exam Proctoring using Deep Learning Model | |
Wang et al. | A rapid recognition of athlete's human posture based on SVM decision tree |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 20940231 Country of ref document: EP Kind code of ref document: A1 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 20940231 Country of ref document: EP Kind code of ref document: A1 |
|
32PN | Ep: public notification in the ep bulletin as address of the adressee cannot be established |
Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 17.07.2023) |
|