WO2023040577A1 - Method and system for intelligently managing and controlling children's use of screen terminals - Google Patents

Method and system for intelligently managing and controlling children's use of screen terminals

Info

Publication number
WO2023040577A1
Authority
WO
WIPO (PCT)
Prior art keywords
face
distance
screen terminal
information
children
Prior art date
Application number
PCT/CN2022/113472
Other languages
English (en)
French (fr)
Inventor
黄水财
Original Assignee
浙江灵创网络科技有限公司
东胜神州旅游管理有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 浙江灵创网络科技有限公司, 东胜神州旅游管理有限公司 filed Critical 浙江灵创网络科技有限公司
Priority to DE112022000166.6T (published as DE112022000166T5)
Priority to JP2023529007A (published as JP7540657B2)
Publication of WO2023040577A1
Priority to US18/296,175 (published as US20230237699A1)

Links

Images

Classifications

    • G06V 40/20: Recognition of biometric, human-related or animal-related patterns in image or video data; movements or behaviour, e.g. gesture recognition
    • G06T 7/74: Image analysis; determining position or orientation of objects or cameras using feature-based methods involving reference images or patches
    • G05B 19/042: Programme control other than numerical control, i.e. in sequence controllers or logic controllers, using digital processors
    • G06T 7/593: Image analysis; depth or shape recovery from multiple images, from stereo images
    • G06V 40/161: Human faces, e.g. facial parts, sketches or expressions; detection; localisation; normalisation
    • G06V 40/165: Detection; localisation; normalisation using facial parts and geometric relationships
    • G06V 40/178: Human faces; estimating age from face image; using age information for improving recognition
    • G06T 2207/10004: Image acquisition modality; still image; photographic image
    • G06T 2207/10012: Image acquisition modality; stereo images
    • G06T 2207/10028: Image acquisition modality; range image; depth image; 3D point clouds
    • G06T 2207/30201: Subject of image; human being; person; face

Definitions

  • the present invention relates to the technical field of smart home terminals, in particular to a method and system for intelligently controlling screen terminals used by children.
  • The purpose of the present invention is to solve the problems mentioned in the background art and to propose a method and system for intelligently controlling children's use of screen terminals.
  • The present invention first proposes a method for intelligently controlling children's use of screen terminals, including: collecting images of a target area to obtain a target image; performing face detection on the target image; when a face is detected, extracting feature values from the face with a preset face feature model; matching the extracted feature values against a pre-trained face data set, and when the feature values match the data of a first face data set in the face data set, obtaining human skeleton position information in the target image; performing binocular correction on the human skeleton position information to obtain human skeleton relationship information and human body distance information; and judging whether the sitting posture state and/or the distance state is abnormal according to the human skeleton relationship information and the human body distance information.
  • When the sitting posture state or the distance state is abnormal, a reminder message is generated for reminding.
  • When the sitting posture state or the distance state is abnormal and exceeds a set threshold, a corresponding control signal is output to control the screen terminal device, so as to realize intelligent management and control of children's use of the screen terminal.
  • Correspondingly controlling the screen terminal device specifically includes: reducing the volume of the screen terminal when the number of abnormalities of the sitting posture state or the distance state exceeds a first set threshold; performing an infrared shutdown of the screen terminal when the number of abnormalities of the sitting posture state or the distance state exceeds a second set threshold; and cutting off the power supply of the screen terminal when the number of abnormalities of the sitting posture state or the distance state exceeds a third set threshold.
  • The method also includes: obtaining the age interval of the face template matched against the first face data set data; when the face template falls in a first age interval, controlling the screen terminal to remain off at all times; when the face template falls in a second age interval, controlling the screen terminal to turn on for a first set time interval on condition that the child is detected to be in a first sitting posture and at a first distance; and when the face template falls in a third age interval, controlling the screen terminal to turn on for a second set time interval on condition that the child is detected to be in a second sitting posture and at a second distance.
  • The method also includes: when the date is a non-examination day and the screen terminal is on, keeping the screen terminal off for a duration of a fourth set time interval after every third set time interval.
  • The binocular stereo correction of the human skeleton position information to obtain the human skeleton relationship information and the human body distance information specifically includes: obtaining world coordinate information of key parts of the human body; obtaining parallax information according to the world coordinate information; obtaining human body distance information through binocular ranging; and obtaining the human skeleton relationship information according to the parallax information and the human body distance information.
  • Obtaining the human skeleton relationship information according to the parallax information and the human body distance information includes computing: |right shoulder ordinate - left shoulder ordinate| × (actual human body distance - standard measurement distance) × (scale correction coefficient).
  • First human skeleton relationship information is obtained from the left-eye and right-eye world coordinates of the tip of the nose; when the first human skeleton relationship information is greater than a first parallax set threshold, the child is reminded to correct the body to a level position.
  • the first face data set is a face data set whose age is 4-16 years old
  • the second face data set is a face data set whose age is above 16 years old.
  • An infrared control device receives the corresponding control signal and, according to the corresponding control signal, lowers the volume, performs an infrared shutdown of the screen terminal, or directly cuts off the power supply, so as to realize intelligent control of the screen terminal used by children.
  • The embodiment of the present invention also provides a system for intelligently managing and controlling screen terminals used by children, including: an image acquisition module configured to acquire an image of a target area to obtain a target image; a face detection module configured to perform face detection on the target image; a feature value extraction module configured to, when a face is detected, extract feature values from the face with a preset face feature model to obtain a face template; a face matching module configured to match the face template against a pre-trained face data set; a human skeleton position information acquisition module configured to acquire the human skeleton position information in the target image when the face template matches the first face data set data in the face data set; a binocular correction module configured to perform binocular correction on the human skeleton position information to obtain human skeleton relationship information and human body distance information; and an intelligent management and control module configured to judge whether the sitting posture state and/or the distance state is abnormal according to the human skeleton relationship information and the human body distance information.
  • When the sitting posture state or the distance state is abnormal, a reminder message is generated for reminding.
  • When the sitting posture state or the distance state is abnormal and exceeds the set threshold, a corresponding control signal is output to control the screen terminal equipment, so as to realize intelligent control of children's use of the screen terminal.
  • the embodiment of the present invention provides a method and system for intelligently controlling children's use of screen terminals.
  • When a child uses a screen terminal, the age of the child is automatically and intelligently identified, and the child's sitting posture, distance and other aspects are supervised intelligently in real time according to the child's age.
  • The on and off times of the screen terminal are also controlled intelligently, so as to guide children to use screen terminal devices healthily.
  • The embodiment of the present invention can manage the screen terminal device without manual operation, which reduces the trouble of manual device management; it also realizes age-group-specific management and control of children's use of screen terminals, which increases the degree of intelligence and has the advantage of being usable in multiple scenarios.
  • FIG. 1 is one of the block flow diagrams of a method for intelligently controlling a screen terminal used by children provided by an embodiment of the present invention
  • Fig. 2 is the second block flow diagram of the method for intelligently controlling children's use of screen terminals provided by an embodiment of the present invention
  • Fig. 3 is the third block flow diagram of the method for intelligently controlling screen terminals used by children provided by an embodiment of the present invention.
  • Fig. 4 is the fourth block flow diagram of the method for intelligently controlling screen terminals used by children provided by an embodiment of the present invention.
  • Fig. 5 is the fifth block diagram of the method for intelligently controlling children's use of screen terminals provided by an embodiment of the present invention.
  • FIG. 6 is a system block diagram of a system for intelligently managing screen terminals used by children provided by an embodiment of the present invention.
  • an embodiment of the present invention provides a method for intelligently controlling children's use of screen terminals, including the following steps:
  • Step S10 image acquisition is performed on the target area to obtain a target image.
  • Image acquisition uses one or more cameras to record images within a certain range of the screen terminal to generate target image information, where the cameras can be integrated into the screen terminal or installed externally to the screen terminal.
  • the camera is connected to the processing unit, and sends the captured target image to the processing unit for subsequent series of processing.
  • the camera can be connected to the processing unit in a wired or wireless manner to perform corresponding data transmission.
  • the processing unit can be a processor integrated in the screen terminal, or a processor in the central control device of the Internet of Things.
  • The central control device of the Internet of Things includes but is not limited to: Tmall Genie, Xiaodu, and Xiaomi smart devices.
  • Step S20 performing face detection on the target image.
  • The purpose of face detection is to search any frame of the target image with a face detection algorithm to determine whether a face exists in the target image, because the target image may contain objects that are not faces, such as furniture in the house and other parts of a person such as the legs, shoulders and arms.
  • Face detection can be performed on any frame of the target image through the built-in face detection algorithm of the processing unit. If there is a face in the frame, subsequent steps such as face feature extraction will be performed.
  • the face detection algorithm can be implemented by using the classifier that comes with OpenCV.
  • OpenCV is an open-source, cross-platform computer vision library that can run on Linux, Windows, Android and other operating systems and can be used for the development of image processing and computer vision applications.
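  • For illustration only, the following minimal sketch shows how such a face check could be done with OpenCV's bundled Haar-cascade classifier; the image file name and the detection parameters are assumptions made for this sketch, not values taken from the patent.

```python
import cv2

# OpenCV ships Haar cascades with the package; cv2.data.haarcascades points at their install path
cascade_path = cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
face_cascade = cv2.CascadeClassifier(cascade_path)

frame = cv2.imread("target_frame.jpg")            # one frame of the acquired target image
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)    # the classifier works on grayscale input
faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

if len(faces) > 0:                                # a face exists: continue with step S30
    x, y, w, h = faces[0]
    face_roi = gray[y:y + h, x:x + w]             # crop used later for feature extraction
```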
  • In this embodiment, a YOLO-based face detection algorithm is used for face detection.
  • The target image is cut into 49 image blocks (a 7 × 7 grid), and each image block is evaluated separately to determine the position of the face.
  • In addition, because the YOLO-based face detection algorithm divides the target image into 49 image blocks, key parts such as the eyelids can be detected in a refined way in the subsequent feature extraction stage, thereby improving the accuracy of face feature extraction and face matching.
  • In other embodiments, a histogram of oriented gradients (HOG) is used to detect the position of the face.
  • The target image is first converted to grayscale, the gradients of the pixels in the image are then calculated, and by converting the image into the form of a histogram of oriented gradients, the face position can be detected and obtained.
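  • As a hedged sketch of this HOG-based alternative (the patent does not name a library), dlib's frontal face detector, which is itself built on HOG features and a linear SVM, could be used roughly as follows; the image name and the upsampling factor are illustrative assumptions.

```python
import cv2
import dlib

detector = dlib.get_frontal_face_detector()       # HOG + linear-SVM frontal-face detector

frame = cv2.imread("target_frame.jpg")
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)     # gradients are computed on the intensity image
rects = detector(gray, 1)                          # upsample once so smaller faces are found

face_boxes = [(r.left(), r.top(), r.right(), r.bottom()) for r in rects]
```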
  • Step S30 when a human face is detected, the feature value of the human face is extracted using a preset human face feature model.
  • In this embodiment, weight pruning is performed for age-distinguishing parts such as facial wrinkles, the corners of the eyes and the bags under the eyes through the YOLO-based darknet deep learning framework, thereby realizing the extraction of face feature values.
  • In other embodiments, feature values of the face image are extracted by a pre-trained face feature model to obtain a face template.
  • The pre-trained face feature model can be obtained by calling the face recognition algorithms that come with the FaceRecognizer class in OpenCV, such as the Eigenfaces algorithm or the Fisherfaces algorithm; FaceRecognizer provides a common interface for face recognition algorithms.
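  • A minimal sketch of this OpenCV interface is given below; it assumes the opencv-contrib package (which provides the cv2.face module), and the synthetic training data is a stand-in for real, equal-sized grayscale face crops and identity labels.

```python
import cv2
import numpy as np

# Synthetic stand-ins: in practice these are equal-sized grayscale face crops with identity labels
train_faces = [np.random.randint(0, 255, (100, 100), dtype=np.uint8) for _ in range(4)]
train_labels = [0, 0, 1, 1]

# Eigenfaces recognizer; cv2.face.FisherFaceRecognizer_create() exposes the same interface
recognizer = cv2.face.EigenFaceRecognizer_create()
recognizer.train(train_faces, np.array(train_labels))

probe_face = train_faces[0]
label, confidence = recognizer.predict(probe_face)   # smaller confidence value = closer match
```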
  • Step S40 matching the extracted feature value with the pre-trained face data set, when the feature value matches the first face data set data in the face data set, obtain the position information of the human skeleton.
  • the feature regression method can be used to train all the face eigenvalues in the face data set.
  • The training result divides the face data set into the first face data set and the second face data set according to face attributes, and matching is then performed by means of face attribute recognition.
  • In this embodiment, the first face data set is a set of face data for ages 4 to 16, and the second face data set is a set of face data for ages above 16.
  • the first face data set is a face data set of ages 4-12
  • the second face data set is a face data set of people over 12 years old.
  • the use of face data sets with ages from 4 to 16 can prevent some children from being excluded by the intelligent management and control system because their faces are more mature and their actual age is younger than their appearance age.
  • the face recognition method can be used to more accurately identify children of different age groups by calculating the Euclidean distance between the target face and the weight vector of each person in the face database.
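  • The Euclidean-distance comparison described here can be sketched as follows; the gallery layout and the acceptance threshold are assumptions made for illustration, not values from the patent.

```python
import numpy as np

def match_face(probe_vec, gallery, threshold=0.6):
    """Return the gallery identity whose weight vector is closest to the probe,
    or None when even the best match is farther away than the threshold."""
    best_id, best_dist = None, float("inf")
    for person_id, weight_vec in gallery.items():       # gallery: {identity: weight vector}
        dist = np.linalg.norm(probe_vec - weight_vec)   # Euclidean distance
        if dist < best_dist:
            best_id, best_dist = person_id, dist
    return best_id if best_dist <= threshold else None
```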
  • When they match, the face subject in the acquired target image belongs to the age range represented by the first face data set; in this embodiment, these are children aged 4 to 16, who are the subjects of the intelligent management and control of the present invention.
  • When they do not match, the face subject in the target image may be an adult over 16 or a young child under 4, and such a subject does not fall within the scope of the intelligent screen-use management and control of the present invention.
  • the human skeleton position information in the target image is obtained, and the human skeleton position information is the world coordinates of each key part of the human body.
  • Step S50 performing binocular correction on the human skeleton position information to obtain human skeleton relationship information and human body distance information.
  • performing binocular correction on the human skeleton position information to obtain the human skeleton relationship information and the human body distance information specifically includes the following steps:
  • step S510 the world coordinate information of the key parts of the human body is obtained, such as the world coordinates of shoulders, eyes, nose tip and other parts.
  • the world coordinates of the nose tip are obtained.
  • Step S520 obtain the parallax information according to the world coordinate information, and calculate the parallax of key parts of the human body according to the obtained world coordinate information.
  • the nose tip is located in the center of the human body, and the nose tip will only have parallax in the abscissa. If there is a large parallax in the ordinate, then the human body is not horizontal or the equipment is not placed horizontally. In other embodiments, any number of bone positions can be selected, and the world coordinate information can be used to obtain the parallax information. For example, the world coordinate information of the left and right shoulders can be used to obtain the parallax information.
  • The specific formula is: |right shoulder ordinate - left shoulder ordinate|.
  • Step S530: human body distance information is obtained through binocular ranging, where the calculation is: human body distance information = actual human body distance - standard measurement distance.
  • In the prior art, because the scaling ratio and the world-coordinate differences change with the distance of the human body in visual recognition, sitting-posture measurement has to be performed at a fixed distance; the embodiments of the present invention can accurately measure the distance through binocular ranging in order to perform distance-proportional calculations, and can perform skeleton measurement at any distance (within the limit line of sight), thereby obtaining more accurate sitting-posture results.
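  • As a sketch of the underlying binocular-ranging idea (the patent itself only gives the difference formula above), the standard pinhole stereo relation Z = f · B / d can be used to turn the measured parallax into a distance; the focal length, baseline and example coordinates below are assumptions.

```python
def distance_from_disparity(x_left, x_right, focal_px=800.0, baseline_m=0.06):
    """Pinhole stereo model: depth Z = f * B / d, with the disparity d in pixels."""
    disparity = abs(x_left - x_right)
    if disparity == 0:
        return float("inf")              # no measurable parallax: point is effectively at infinity
    return focal_px * baseline_m / disparity

# Example: nose-tip x-coordinates in the rectified left and right images (illustrative numbers)
actual_distance_m = distance_from_disparity(x_left=652.0, x_right=610.0)
```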
  • Step S540 obtain the human skeleton relationship information according to the parallax information and the human body distance information.
  • In this embodiment, first human skeleton relationship information is obtained from the left-eye and right-eye world coordinates of the tip of the nose; when the first human skeleton relationship information is greater than the first parallax set threshold, the child is reminded to correct the body to a level position.
  • In other embodiments, the left shoulder and the right shoulder are taken as an example of the key parts of the human body to obtain the human skeleton relationship information.
  • The specific calculation formula is: |right shoulder ordinate - left shoulder ordinate| × (actual human body distance - standard measurement distance) × (scale correction coefficient), where the scale correction coefficient can be preset according to the relationship between the actual human body distance and the standard measurement distance.
  • Step S60: judging whether the sitting posture state and/or the distance state is abnormal according to the human skeleton relationship information and the human body distance information; when the sitting posture state or the distance state is abnormal, a reminder message is generated for reminding; and when the sitting posture state or the distance state is abnormal and exceeds the set threshold, a corresponding control signal is output to control the screen terminal equipment, so as to realize intelligent management and control of children's use of the screen terminal.
  • Specifically, left-right shoulder relationship information can be obtained according to the skeleton position coordinates of the left and right shoulders of the human body, and the left-right shoulder inclination angle can then be obtained according to the left-right shoulder relationship information (the specific calculation formulas appear as images in the original publication).
  • Finally, the sitting posture state of the human body is judged according to the left-right shoulder inclination angle: when the inclination angle exceeds the shoulder inclination angle set threshold, it is determined that the current sitting posture state is abnormal and a reminder message is generated for reminding; when the number or duration of such abnormalities exceeds a certain set threshold, a corresponding control signal is output to control the screen terminal equipment, so as to realize intelligent control of children's use of the screen terminal.
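  • Since the inclination-angle formulas are only given as images in the original publication, the sketch below assumes the common definition of shoulder tilt as the arctangent of the vertical offset between the two shoulder keypoints over their horizontal separation; the 3° threshold is borrowed from the supervision levels described later and is illustrative only.

```python
import math

TILT_THRESHOLD_DEG = 3.0                 # illustrative; e.g. the 3 degree limit of supervision level 2

def shoulder_tilt_deg(left_shoulder, right_shoulder):
    """Tilt of the shoulder line relative to horizontal, in degrees.
    Both arguments are (x, y) keypoints in the same image or world frame."""
    dx = right_shoulder[0] - left_shoulder[0]
    dy = right_shoulder[1] - left_shoulder[1]
    return abs(math.degrees(math.atan2(dy, dx)))

def posture_abnormal(left_shoulder, right_shoulder, threshold=TILT_THRESHOLD_DEG):
    return shoulder_tilt_deg(left_shoulder, right_shoulder) > threshold
```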
  • the human body distance information obtained by binocular ranging can be compared with the distance setting threshold.
  • the human body distance information is less than the setting threshold, it is determined that the current distance state is abnormal and a reminder message is generated for reminder.
  • the corresponding control signal is output to control the screen terminal device, so as to realize the intelligent management and control of children's use of the screen terminal.
  • the corresponding control of the screen terminal device specifically includes the following steps:
  • Step S610 when the number of abnormalities in the sitting posture state or the distance state exceeds the first set threshold, reduce the volume of the screen terminal;
  • Step S620 when the number of abnormalities in the sitting posture state or the distance state exceeds the second set threshold, perform an infrared shutdown of the screen terminal;
  • Step S630 when the number of abnormalities in the sitting posture state or the distance state exceeds the third set threshold, cut off the power supply of the screen terminal.
  • the corresponding control of the screen terminal device includes the following steps:
  • Step S611 when the abnormal time of the sitting posture state or the distance state exceeds the first set threshold, reduce the volume of the screen terminal;
  • Step S621 when the abnormal time of the sitting posture state or the distance state exceeds the second set threshold, perform an infrared shutdown of the screen terminal;
  • Step S631 when the abnormal time of the sitting posture state or the distance state exceeds the third set threshold, cut off the power supply of the screen terminal. Therefore, the above-mentioned first, second and third set thresholds are the number of times or the length of time that can be set manually.
  • For controlling the screen terminal, various protocols including the MQTT protocol can be used to send control signals from the data processor to the relevant screen terminal control device, where the screen control device includes but is not limited to a learning-type infrared controller.
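  • A minimal sketch of the escalation logic of steps S610 to S630 and of publishing the resulting control signal over MQTT is shown below; the broker address, topic name and the concrete threshold counts are assumptions, and the learning infrared controller is assumed to subscribe to the topic.

```python
import json
import paho.mqtt.publish as publish

VOLUME_DOWN_AT, IR_SHUTDOWN_AT, POWER_CUT_AT = 3, 5, 8   # first/second/third set thresholds (illustrative)

def control_action(abnormal_count):
    """Map the accumulated abnormality count to the escalating control actions."""
    if abnormal_count >= POWER_CUT_AT:
        return "cut_power"
    if abnormal_count >= IR_SHUTDOWN_AT:
        return "ir_shutdown"
    if abnormal_count >= VOLUME_DOWN_AT:
        return "volume_down"
    return None

action = control_action(abnormal_count=5)
if action:
    # topic and broker address are illustrative; the IR controller is assumed to listen on this topic
    publish.single("home/screen_terminal/control",
                   payload=json.dumps({"action": action}),
                   hostname="192.168.1.10")
```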
  • the embodiment of the present invention also has the function of distinguishing and supervising children of different ages.
  • First, the above-mentioned face recognition technology is used to divide children into smaller age intervals to obtain several age intervals; the age interval of the supervised child's face template is then obtained, and targeted, differentiated supervision is carried out according to that age interval.
  • Step S710 obtaining the age interval of the face template matching the first face data set data
  • Step S720 when the face template of the supervised child is in the first age interval, the screen terminal is controlled to remain off at all times;
  • Step S730 when the face template of the supervised child is in the second age interval, the screen terminal is controlled to turn on for the first set time interval on condition that the child is detected to be in the first sitting posture and at the first distance;
  • Step S740 when the face template of the supervised child is in the third age interval, the screen terminal is controlled to turn on for the second set time interval on condition that the child is detected to be in the second sitting posture and at the second distance.
  • In addition, when the date is a non-examination day and the screen terminal is on, the screen terminal is kept off for a duration of the fourth set time interval after every third set time interval.
  • Level 0 (examination period): no entertainment screen terminal may be used (a television is used as the example below). Level 1 (highest supervision mode): only half an hour of television may be watched per day, the left and right shoulders must be kept level, and a distance of 3 meters from the television must be maintained.
  • Level 2 (second-highest supervision mode): 45 minutes of television may be watched per day, the left and right shoulders must be kept within 3°, and a distance of 2.5 meters from the television must be maintained.
  • Level 3 (weak supervision level): television may be watched any number of times per day, but at a distance of 2.5 meters from the television and with a 10-minute break every 45 minutes.
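  • One way to encode these supervision levels is a small lookup table like the sketch below; the age cut-offs in pick_level are purely illustrative assumptions, since the patent only states that smaller age intervals are used, not their exact boundaries.

```python
SUPERVISION_LEVELS = {
    # level: daily limit (minutes), max shoulder tilt (degrees), minimum distance to the TV (meters)
    0: {"daily_minutes": 0,    "max_tilt_deg": None, "min_distance_m": None},   # examination period
    1: {"daily_minutes": 30,   "max_tilt_deg": 0.0,  "min_distance_m": 3.0},
    2: {"daily_minutes": 45,   "max_tilt_deg": 3.0,  "min_distance_m": 2.5},
    3: {"daily_minutes": None, "max_tilt_deg": None, "min_distance_m": 2.5,
        "break_every_min": 45, "break_length_min": 10},
}

def pick_level(age_years, is_exam_day):
    """Illustrative mapping only; the age boundaries are assumptions, not taken from the patent."""
    if is_exam_day:
        return 0
    if age_years < 8:
        return 1
    if age_years < 12:
        return 2
    return 3
```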
  • the embodiment of the present invention automatically and intelligently recognizes the age of the child when the child uses the screen terminal, conducts real-time intelligent supervision of the child's sitting posture and distance according to the age of the child, and intelligently controls the opening and closing time of the screen terminal , so as to guide children to use screen terminal devices healthily.
  • The embodiment of the present invention can manage the screen terminal device without manual operation, which reduces the trouble of manual device management; it also realizes age-group-specific management and control of children's use of screen terminals, which increases the degree of intelligence and has the advantage of being usable in multiple scenarios.
  • an embodiment of the present invention also provides a system for intelligently controlling screen terminals used by children, as shown in FIG. 6 , the system includes:
  • an image acquisition module 100, where the image acquisition module is configured to acquire an image of a target area to obtain a target image;
  • a face detection module 200, where the face detection module is configured to perform face detection on the target image;
  • a feature value extraction module 300, where the feature value extraction module is configured to, when a face is detected, perform feature value extraction on the face with a preset face feature model to obtain a face template;
  • a face matching module 400, where the face matching module is configured to carry out face matching between the face template and the pre-trained face data set;
  • a skeleton position acquisition module 500, where the skeleton position acquisition module is configured to acquire human skeleton position information in the target image when the face template matches the first face data set data in the face data set;
  • a binocular correction module 600, where the binocular correction module is configured to perform binocular correction on the human skeleton position information to obtain human skeleton relationship information and human body distance information;
  • an intelligent management and control module 700, where the intelligent management and control module is configured to judge whether the sitting posture state and/or the distance state is abnormal according to the human skeleton relationship information and the human body distance information, to generate a reminder message for reminding when the sitting posture state or the distance state is abnormal, and, when the sitting posture state or the distance state is abnormal and exceeds the set threshold, to output a corresponding control signal to control the screen terminal device, so as to realize intelligent management and control of children's use of the screen terminal.
  • the embodiment of the present invention proposes a system for intelligently controlling children's use of screen terminals.
  • the system can be implemented in the form of a program and run on a computer device.
  • The memory of the computer device can store the program modules that make up the system for intelligently controlling the screen terminals used by children, such as the image acquisition module 100, the face detection module 200, the feature value extraction module 300, the face matching module 400, the skeleton position acquisition module 500, the binocular correction module 600, and the intelligent management and control module 700 shown in FIG. 6.
  • the program constituted by each program module enables the processor to execute the steps in a method for intelligently controlling a screen terminal used by children according to each embodiment of the present application described in this specification.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Geometry (AREA)
  • Automation & Control Theory (AREA)
  • Psychiatry (AREA)
  • Social Psychology (AREA)
  • User Interface Of Digital Computer (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)
  • Image Analysis (AREA)

Abstract

The present invention provides a method and system for intelligently managing and controlling children's use of screen terminals, relating to the technical field of smart home terminals. When a child uses a screen terminal, the child's age is automatically and intelligently identified; according to the child's age, the child's sitting posture, distance and other aspects are supervised intelligently in real time, and the on and off times of the screen terminal are managed intelligently, so as to guide the child to use screen terminal devices healthily. Compared with existing technical solutions, the present invention can manage the screen terminal device without manual operation, which reduces the trouble of manual device management; it also realizes age-group-specific management and control of children's use of screen terminals, which increases the degree of intelligence and has the advantage of being usable in multiple scenarios.

Description

Method and system for intelligently managing and controlling children's use of screen terminals
Technical Field
The present invention relates to the technical field of smart home terminals, and in particular to a method and system for intelligently managing and controlling children's use of screen terminals.
Background Art
In recent years, the number of children suffering from myopia or a hunched back caused by incorrect sitting posture has been increasing. A non-standard sitting posture is one of the main causes of the decline in children's eyesight and is very harmful to the healthy development of children's bodies. Many children are not aware of correct sitting posture and need adults to remind them constantly. In particular, using a screen terminal for a long time with a non-standard sitting posture further damages children's eyesight; therefore, a method for intelligently managing and controlling children's use of screen terminals is urgently needed.
In existing methods for managing screen terminals, a person generally has to use application software or voice-control a central control device to manage screen terminal devices such as televisions. Such methods cannot supervise children, i.e. they cannot monitor how long children use the appliances or their posture while using them. If parents want to check how a child is using a screen terminal device, they need to remotely control a camera in the home, and if they need to manage the screen terminal, they also need to open the corresponding device-management application software and perform the device management manually. These methods are therefore troublesome to use and not intelligent enough.
Summary of the Invention
The purpose of the present invention is to solve the above problems mentioned in the background art and to propose a method and system for intelligently managing and controlling children's use of screen terminals.
To achieve the above purpose, the present invention first proposes a method for intelligently managing and controlling children's use of screen terminals, including: acquiring an image of a target area to obtain a target image; performing face detection on the target image; when a face is detected, extracting feature values from the face with a preset face feature model; matching the extracted feature values against a pre-trained face data set, and when the feature values match the data of a first face data set in the face data set, acquiring human skeleton position information in the target image; performing binocular correction on the human skeleton position information to obtain human skeleton relationship information and human body distance information; and judging whether the sitting posture state and/or the distance state is abnormal according to the human skeleton relationship information and the human body distance information, where a reminder message is generated for reminding when the sitting posture state or the distance state is abnormal, and a corresponding control signal is output to control the screen terminal device when the sitting posture state or the distance state is abnormal and exceeds a set threshold, so as to realize intelligent management and control of children's use of the screen terminal.
Optionally, when the sitting posture state or the distance state is abnormal and exceeds the set threshold, correspondingly controlling the screen terminal device specifically includes: when the number of abnormalities of the sitting posture state or the distance state exceeds a first set threshold, reducing the volume of the screen terminal; when the number of abnormalities of the sitting posture state or the distance state exceeds a second set threshold, performing an infrared shutdown of the screen terminal; and when the number of abnormalities of the sitting posture state or the distance state exceeds a third set threshold, cutting off the power supply of the screen terminal.
Optionally, the method further includes: acquiring the age interval of the face template that matches the data of the first face data set; when the face template falls in a first age interval, controlling the screen terminal to remain off at all times; when the face template falls in a second age interval, controlling the screen terminal to turn on for a first set time interval on condition that the child is detected to be in a first sitting posture and at a first distance; and when the face template falls in a third age interval, controlling the screen terminal to turn on for a second set time interval on condition that the child is detected to be in a second sitting posture and at a second distance.
Optionally, the method further includes: when the date is a non-examination day and the screen terminal is on, keeping the screen terminal off for a duration of a fourth set time interval after every third set time interval.
Optionally, performing binocular stereo correction on the human skeleton position information to obtain the human skeleton relationship information and the human body distance information specifically includes: acquiring world coordinate information of key parts of the human body; obtaining parallax information according to the world coordinate information; obtaining the human body distance information through binocular ranging; and obtaining the human skeleton relationship information according to the parallax information and the human body distance information.
Optionally, obtaining the human skeleton relationship information according to the parallax information and the human body distance information includes: |right shoulder ordinate - left shoulder ordinate| × (actual human body distance - standard measurement distance) × (scale correction coefficient).
Optionally, first human skeleton relationship information is obtained from the left-eye and right-eye world coordinates of the tip of the nose; when the first human skeleton relationship information is greater than a first parallax set threshold, the child is reminded to correct the body to a level position.
Optionally, the first face data set is a set of face data for ages 4 to 16, and the second face data set is a set of face data for ages above 16.
Optionally, an infrared control device receives the corresponding control signal and, according to the corresponding control signal, reduces the volume, performs an infrared shutdown or directly cuts off the power supply, so as to realize intelligent management and control of children's use of the screen terminal.
An embodiment of the present invention further provides a system for intelligently managing and controlling children's use of screen terminals, including: an image acquisition module configured to acquire an image of a target area to obtain a target image; a face detection module configured to perform face detection on the target image; a feature value extraction module configured to, when a face is detected, extract feature values from the face with a preset face feature model to obtain a face template; a face matching module configured to match the face template against a pre-trained face data set; a human skeleton position information acquisition module configured to acquire human skeleton position information in the target image when the face template matches the data of a first face data set in the face data set; a binocular correction module configured to perform binocular correction on the human skeleton position information to obtain human skeleton relationship information and human body distance information; and an intelligent management and control module configured to judge whether the sitting posture state and/or the distance state is abnormal according to the human skeleton relationship information and the human body distance information, to generate a reminder message for reminding when the sitting posture state or the distance state is abnormal, and to output a corresponding control signal to control the screen terminal device when the sitting posture state or the distance state is abnormal and exceeds a set threshold, so as to realize intelligent management and control of children's use of the screen terminal.
Beneficial effects of the present invention:
According to the method and system for intelligently managing and controlling children's use of screen terminals provided by the embodiments of the present invention, when a child uses a screen terminal, the child's age is automatically and intelligently identified; according to the child's age, the child's sitting posture, distance and other aspects are supervised intelligently in real time, and the on and off times of the screen terminal are managed intelligently, so as to guide the child to use screen terminal devices healthily. Compared with existing technical solutions, the embodiments of the present invention can manage the screen terminal device without manual operation, which reduces the trouble of manual device management; they also realize age-group-specific management and control of children's use of screen terminals, which increases the degree of intelligence and has the advantage of being usable in multiple scenarios.
The features and advantages of the present invention will be described in detail through embodiments with reference to the accompanying drawings.
Brief Description of the Drawings
FIG. 1 is the first block flow diagram of the method for intelligently managing and controlling children's use of screen terminals provided by an embodiment of the present invention;
FIG. 2 is the second block flow diagram of the method for intelligently managing and controlling children's use of screen terminals provided by an embodiment of the present invention;
FIG. 3 is the third block flow diagram of the method for intelligently managing and controlling children's use of screen terminals provided by an embodiment of the present invention;
FIG. 4 is the fourth block flow diagram of the method for intelligently managing and controlling children's use of screen terminals provided by an embodiment of the present invention;
FIG. 5 is the fifth block flow diagram of the method for intelligently managing and controlling children's use of screen terminals provided by an embodiment of the present invention;
FIG. 6 is a system block diagram of the system for intelligently managing and controlling children's use of screen terminals provided by an embodiment of the present invention.
Detailed Description of the Embodiments
To facilitate understanding by those skilled in the art, the present invention is described in further detail below with reference to specific embodiments.
Referring to FIG. 1, an embodiment of the present invention provides a method for intelligently managing and controlling children's use of screen terminals, including the following steps:
Step S10: acquiring an image of a target area to obtain a target image.
In this embodiment, image acquisition uses one or more cameras to record images within a certain range of the screen terminal so as to generate target image information, where the camera may be integrated into the screen terminal or installed externally to the screen. The camera is connected to a processing unit and sends the acquired target image to the processing unit for a subsequent series of processing; specifically, the camera may be connected to the processing unit in a wired or wireless manner for the corresponding data transmission. The processing unit may be a processor integrated in the screen terminal, or a processor in an Internet-of-Things central control device, where the Internet-of-Things central control device includes but is not limited to: Tmall Genie, Xiaodu, and Xiaomi smart devices.
Step S20: performing face detection on the target image.
The purpose of face detection is to search any frame of the acquired target image with a face detection algorithm to determine whether a face exists in the target image, because the target image may contain objects that are not faces, such as furniture in the house and other parts of a person (such as the legs, shoulders and arms).
Face detection can be performed on any frame of the target image by a face detection algorithm built into the processing unit; if a face exists in that frame, subsequent steps such as face feature extraction are performed. The face detection algorithm can be implemented with the classifiers that come with OpenCV. OpenCV is an open-source, cross-platform computer vision library that can run on operating systems such as Linux, Windows and Android and can be used for the development of image processing and computer vision applications.
In this embodiment, a YOLO-based face detection algorithm is used for face detection: the target image is cut into 49 image blocks, and each image block is evaluated separately to determine the position of the face. In addition, because the YOLO-based face detection algorithm divides the target image into 49 image blocks, key parts such as the eyelids can be detected in a refined way in the subsequent feature extraction stage, thereby improving the accuracy of face feature extraction and face matching.
In other embodiments, a histogram of oriented gradients is used to detect the position of the face: the target image is first converted to grayscale, the gradients of the pixels in the image are then calculated, and by converting the image into the form of a histogram of oriented gradients, the face position can be detected and obtained.
Step S30: when a face is detected, extracting feature values from the face with a preset face feature model.
In this embodiment, weight pruning is performed for age-distinguishing parts such as facial wrinkles, the corners of the eyes and the bags under the eyes through the YOLO-based darknet deep learning framework, thereby realizing the extraction of face feature values.
In other embodiments, feature values are extracted from the face image by a pre-trained face feature model to obtain a face template; the pre-trained face feature model can be obtained by calling the face recognition algorithms that come with the FaceRecognizer class in OpenCV, such as the Eigenfaces algorithm or the Fisherfaces algorithm, which provides a common interface for face recognition algorithms.
Step S40: matching the extracted feature values against the pre-trained face data set, and when the feature values match the data of the first face data set in the face data set, acquiring the human skeleton position information in the target image.
A feature regression method can be used to train all the face feature values in the face data set; the training result divides the face data set into a first face data set and a second face data set according to face attributes, and matching is then performed by means of face attribute recognition. In this embodiment, the first face data set is a set of face data for ages 4 to 16, and the second face data set is a set of face data for ages above 16.
In other embodiments, the first face data set is a set of face data for ages 4 to 12, and the second face data set is a set of face data for ages above 12.
Using a face data set for ages 4 to 16 in this embodiment can prevent some children from being excluded by the intelligent management and control system because their faces look relatively mature and their actual age is lower than their apparent age.
For application scenarios in which children need to be separated into smaller age intervals for more refined, differentiated control, all the face feature values in the face data set are first trained and divided into face data sets of several different intervals, and children of each different age group are then measured and evaluated separately.
Specifically, the face recognition method can identify children of different age groups more accurately by calculating the Euclidean distance between the target face and the weight vector of each person in the face database.
By matching the feature values of the face in the target image against the first face data set, it can be determined whether the face subject in the acquired target image belongs to the age interval represented by the first face data set.
In this embodiment, these are children aged 4 to 16, who are the subjects of the intelligent management and control of children's use of screen terminals of the present invention.
When there is no match, the face subject in the target image may be an adult over 16 or a young child under 4, and such a subject does not fall within the scope of the intelligent screen-use management and control of the present invention.
When the face subject in the target image belongs to the age interval represented by the first face data set, the human skeleton position information in the target image is acquired, where the human skeleton position information is the world coordinates of the key parts of the human body.
Step S50: performing binocular correction on the human skeleton position information to obtain human skeleton relationship information and human body distance information.
Referring to FIG. 5, performing binocular correction on the human skeleton position information to obtain the human skeleton relationship information and the human body distance information specifically includes the following steps:
Step S510: acquiring world coordinate information of key parts of the human body, such as the world coordinates of the shoulders, eyes, nose tip and other parts; in this embodiment, the world coordinates of the nose tip are acquired.
Step S520: obtaining parallax information according to the world coordinate information, i.e. measuring the parallax of the key parts of the human body from the acquired world coordinate information. In this embodiment, the parallax is measured from the left-eye and right-eye world coordinates of the nose tip; since the nose tip is located at the center of the human body, the nose tip should show parallax only in the abscissa, and if a large parallax also appears in the ordinate, the human body is not level or the equipment is not placed horizontally. In other embodiments, the world coordinate information of any number of skeleton positions can be selected to obtain the parallax information; for example, the world coordinate information of the left and right shoulders can be used, and the specific formula is |right shoulder ordinate - left shoulder ordinate|.
Step S530: obtaining the human body distance information through binocular ranging, where the calculation formula is: human body distance information = actual human body distance - standard measurement distance.
In the prior art, because the scaling ratio and the world-coordinate differences change with the distance of the human body in visual recognition, sitting-posture measurement has to be performed at a fixed distance. Through binocular ranging, the embodiments of the present invention can accurately measure the distance in order to perform distance-proportional calculations, and can perform skeleton measurement at any distance (within the limit line of sight), thereby obtaining more accurate sitting-posture results.
Step S540: obtaining the human skeleton relationship information according to the parallax information and the human body distance information. In this embodiment, first human skeleton relationship information is obtained from the left-eye and right-eye world coordinates of the nose tip; when the first human skeleton relationship information is greater than the first parallax set threshold, the child is reminded to correct the body to a level position.
In other embodiments, the left shoulder and the right shoulder are taken as an example of the key parts of the human body to obtain the human skeleton relationship information; the specific calculation formula is: |right shoulder ordinate - left shoulder ordinate| × (actual human body distance - standard measurement distance) × (scale correction coefficient), where the scale correction coefficient can be preset according to the relationship between the actual human body distance and the standard measurement distance.
Step S60: judging whether the sitting posture state and/or the distance state is abnormal according to the human skeleton relationship information and the human body distance information; when the sitting posture state or the distance state is abnormal, generating a reminder message for reminding; and when the sitting posture state or the distance state is abnormal and exceeds the set threshold, outputting a corresponding control signal to control the screen terminal device, so as to realize intelligent management and control of children's use of the screen terminal.
Specifically, left-right shoulder relationship information can be obtained according to the skeleton position coordinates of the left and right shoulders of the human body, and the left-right shoulder inclination angle can then be obtained according to the left-right shoulder relationship information; the specific calculation formulas are as follows:
(Formula images PCTCN2022113472-appb-000001 and PCTCN2022113472-appb-000002 of the original publication.)
Finally, the sitting posture state of the human body is judged according to the left-right shoulder inclination angle. When the left-right shoulder inclination angle exceeds the shoulder inclination angle set threshold, it is determined that the current sitting posture state is abnormal and a reminder message is generated for reminding; when the number of sitting-posture abnormalities or the duration of the abnormal sitting posture exceeds a certain set threshold, a corresponding control signal is output to control the screen terminal device, so as to realize intelligent management and control of children's use of the screen terminal.
Similarly, the human body distance information obtained by binocular ranging can be compared with the distance set threshold. When the human body distance information is less than the set threshold, it is determined that the current distance state is abnormal and a reminder message is generated for reminding; when the number of distance-state abnormalities or the duration of the abnormal distance state exceeds the set threshold, a corresponding control signal is output to control the screen terminal device, so as to realize intelligent management and control of children's use of the screen terminal.
Referring to FIG. 3, in an embodiment of the invention, when the sitting posture state or the distance state is abnormal and exceeds the set threshold, correspondingly controlling the screen terminal device specifically includes the following steps:
Step S610: when the number of abnormalities of the sitting posture state or the distance state exceeds the first set threshold, reducing the volume of the screen terminal;
Step S620: when the number of abnormalities of the sitting posture state or the distance state exceeds the second set threshold, performing an infrared shutdown of the screen terminal;
Step S630: when the number of abnormalities of the sitting posture state or the distance state exceeds the third set threshold, cutting off the power supply of the screen terminal.
Referring to FIG. 4, in other embodiments, when the sitting posture state or the distance state is abnormal and exceeds the set threshold, correspondingly controlling the screen terminal device includes the following steps:
Step S611: when the duration of the abnormal sitting posture state or distance state exceeds the first set threshold, reducing the volume of the screen terminal;
Step S621: when the duration of the abnormal sitting posture state or distance state exceeds the second set threshold, performing an infrared shutdown of the screen terminal;
Step S631: when the duration of the abnormal sitting posture state or distance state exceeds the third set threshold, cutting off the power supply of the screen terminal. The above-mentioned first, second and third set thresholds are therefore a number of times or a length of time that can be set manually.
For the corresponding control of the screen terminal, various protocols including the MQTT protocol can be used to send control signals from the data processor to the relevant screen terminal control device, where the screen control device includes but is not limited to a learning-type infrared controller.
The embodiments of the present invention also have the function of differentiated supervision of children of different ages. First, the above-mentioned face recognition technology is used to divide children into smaller age intervals to obtain several age intervals; the age interval of the supervised child's face template is then obtained, and targeted, differentiated supervision is carried out according to the age interval of the supervised child's face template. For the specific steps, refer to FIG. 5:
Step S710: acquiring the age interval of the face template that matches the data of the first face data set;
Step S720: when the face template of the supervised child is in the first age interval, controlling the screen terminal to remain off at all times;
Step S730: when the face template of the supervised child is in the second age interval, controlling the screen terminal to turn on for the first set time interval on condition that the child is detected to be in the first sitting posture and at the first distance;
Step S740: when the face template of the supervised child is in the third age interval, controlling the screen terminal to turn on for the second set time interval on condition that the child is detected to be in the second sitting posture and at the second distance.
In addition, when it is detected that the date is a non-examination day and the screen terminal is on, the screen terminal is kept off for a duration of the fourth set time interval after every third set time interval.
Specifically, the age interval of the supervised child's face template can be obtained, whether the supervised child is in a key examination period can then be inferred from the calendar, and different supervision levels can be set according to the age interval. For example, Level 0 (examination period): no entertainment screen terminal may be used (a television is used as the example below). Level 1 (highest supervision mode): only half an hour of television may be watched per day, the left and right shoulders must be kept level, and a distance of 3 meters from the television must be maintained. Level 2 (second-highest supervision mode): 45 minutes of television may be watched per day, the left and right shoulders must be kept within 3°, and a distance of 2.5 meters from the television must be maintained. Level 3 (weak supervision level): television may be watched any number of times per day, but at a distance of 2.5 meters from the television and with a 10-minute break every 45 minutes.
When a child uses a screen terminal, the embodiments of the present invention automatically and intelligently identify the child's age, supervise the child's sitting posture, distance and other aspects intelligently in real time according to the child's age, and intelligently manage the on and off times of the screen terminal, so as to guide the child to use screen terminal devices healthily. Compared with existing technical solutions, the embodiments of the present invention can manage the screen terminal device without manual operation, which reduces the trouble of manual device management; they also realize age-group-specific management and control of children's use of screen terminals, which increases the degree of intelligence and has the advantage of being usable in multiple scenarios.
In addition, based on the method for intelligently managing and controlling children's use of screen terminals, an embodiment of the present invention further provides a system for intelligently managing and controlling children's use of screen terminals. As shown in FIG. 6, the system includes:
an image acquisition module 100, where the image acquisition module is configured to acquire an image of a target area to obtain a target image;
a face detection module 200, where the face detection module is configured to perform face detection on the target image;
a feature value extraction module 300, where the feature value extraction module is configured to, when a face is detected, extract feature values from the face with a preset face feature model to obtain a face template;
a face matching module 400, where the face matching module is configured to match the face template against a pre-trained face data set;
a skeleton position acquisition module 500, where the skeleton position acquisition module is configured to acquire the human skeleton position information in the target image when the face template matches the data of the first face data set in the face data set;
a binocular correction module 600, where the binocular correction module is configured to perform binocular correction on the human skeleton position information to obtain human skeleton relationship information and human body distance information;
an intelligent management and control module 700, where the intelligent management and control module is configured to judge whether the sitting posture state and/or the distance state is abnormal according to the human skeleton relationship information and the human body distance information, to generate a reminder message for reminding when the sitting posture state or the distance state is abnormal, and, when the sitting posture state or the distance state is abnormal and exceeds the set threshold, to output a corresponding control signal to control the screen terminal device, so as to realize intelligent management and control of children's use of the screen terminal.
In summary, the embodiments of the present invention propose a system for intelligently managing and controlling children's use of screen terminals, and the system can be implemented in the form of a program and run on a computer device. The memory of the computer device can store the program modules that make up the system, such as the image acquisition module 100, the face detection module 200, the feature value extraction module 300, the face matching module 400, the skeleton position acquisition module 500, the binocular correction module 600 and the intelligent management and control module 700 shown in FIG. 6. The program formed by these program modules causes the processor to execute the steps of the method for intelligently managing and controlling children's use of screen terminals according to the embodiments of the present application described in this specification.
The above embodiments are illustrative of the present invention and do not limit it; any solution obtained by a simple transformation of the present invention falls within the protection scope of the present invention. The above are only preferred implementations of the present invention, and the protection scope of the present invention is not limited to the above embodiments; all technical solutions under the idea of the present invention fall within the protection scope of the present invention. It should be pointed out that, for those of ordinary skill in the art, several improvements and refinements made without departing from the principle of the present invention should also be regarded as falling within the protection scope of the present invention.

Claims (10)

  1. A method for intelligently managing and controlling children's use of screen terminals, characterized by comprising: acquiring an image of a target area to obtain a target image; performing face detection on the target image; when a face is detected, extracting feature values from the face with a preset face feature model; matching the extracted feature values against a pre-trained face data set, and when the feature values match the data of a first face data set in the face data set, acquiring human skeleton position information in the target image; performing binocular correction on the human skeleton position information to obtain human skeleton relationship information and human body distance information; and judging whether the sitting posture state and/or the distance state is abnormal according to the human skeleton relationship information and the human body distance information, wherein when the sitting posture state or the distance state is abnormal, a reminder message is generated, and when the sitting posture state or the distance state is abnormal and exceeds a set threshold, a corresponding control signal is output to control the screen terminal device, so as to realize intelligent management and control of children's use of the screen terminal.
  2. The method for intelligently managing and controlling children's use of screen terminals according to claim 1, characterized in that, when the sitting posture state or the distance state is abnormal and exceeds the set threshold, correspondingly controlling the screen terminal device specifically comprises: when the number of abnormalities of the sitting posture state or the distance state exceeds a first set threshold, reducing the volume of the screen terminal; when the number of abnormalities of the sitting posture state or the distance state exceeds a second set threshold, performing an infrared shutdown of the screen terminal; and when the number of abnormalities of the sitting posture state or the distance state exceeds a third set threshold, cutting off the power supply of the screen terminal.
  3. The method for intelligently managing and controlling children's use of screen terminals according to claim 1, characterized by further comprising: acquiring the age interval of the face template that matches the data of the first face data set; when the face template falls in a first age interval, controlling the screen terminal to remain off at all times; when the face template falls in a second age interval, controlling the screen terminal to turn on for a first set time interval on condition that the child is detected to be in a first sitting posture and at a first distance; and when the face template falls in a third age interval, controlling the screen terminal to turn on for a second set time interval on condition that the child is detected to be in a second sitting posture and at a second distance.
  4. The method for intelligently managing and controlling children's use of screen terminals according to claim 1, characterized by further comprising: when the date is a non-examination day and the screen terminal is on, keeping the screen terminal off for a duration of a fourth set time interval after every third set time interval.
  5. The method for intelligently managing and controlling children's use of screen terminals according to claim 1, characterized in that performing binocular stereo correction on the human skeleton position information to obtain the human skeleton relationship information and the human body distance information specifically comprises: acquiring world coordinate information of key parts of the human body; obtaining parallax information according to the world coordinate information; obtaining the human body distance information through binocular ranging; and obtaining the human skeleton relationship information according to the parallax information and the human body distance information.
  6. The method for intelligently managing and controlling children's use of screen terminals according to claim 5, characterized in that obtaining the human skeleton relationship information according to the parallax information and the human body distance information comprises: |right shoulder ordinate - left shoulder ordinate| × (actual human body distance - standard measurement distance) × (scale correction coefficient).
  7. The method for intelligently managing and controlling children's use of screen terminals according to claim 1, characterized in that first human skeleton relationship information is obtained from the left-eye and right-eye world coordinates of the tip of the nose, and when the first human skeleton relationship information is greater than a first parallax set threshold, the child is reminded to correct the body to a level position.
  8. The method for intelligently managing and controlling children's use of screen terminals according to claim 1, characterized in that the first face data set is a set of face data for ages 4 to 16, and the second face data set is a set of face data for ages above 16.
  9. The method for intelligently managing and controlling children's use of screen terminals according to claim 1, characterized in that an infrared control device receives the corresponding control signal and, according to the corresponding control signal, reduces the volume, performs an infrared shutdown or directly cuts off the power supply, so as to realize intelligent management and control of children's use of the screen terminal.
  10. A system for intelligently managing and controlling children's use of screen terminals, characterized by comprising:
    an image acquisition module, where the image acquisition module is configured to acquire an image of a target area to obtain a target image;
    a face detection module, where the face detection module is configured to perform face detection on the target image;
    a feature value extraction module, where the feature value extraction module is configured to, when a face is detected, extract feature values from the face with a preset face feature model to obtain a face template;
    a face matching module, where the face matching module is configured to match the face template against a pre-trained face data set;
    a human skeleton position information acquisition module, where the human skeleton position information acquisition module is configured to acquire human skeleton position information in the target image when the face template matches the data of a first face data set in the face data set;
    a binocular correction module, where the binocular correction module is configured to perform binocular correction on the human skeleton position information to obtain human skeleton relationship information and human body distance information; and
    an intelligent management and control module, where the intelligent management and control module is configured to judge whether the sitting posture state and/or the distance state is abnormal according to the human skeleton relationship information and the human body distance information, to generate a reminder message for reminding when the sitting posture state or the distance state is abnormal, and, when the sitting posture state or the distance state is abnormal and exceeds a set threshold, to output a corresponding control signal to control the screen terminal device, so as to realize intelligent management and control of children's use of the screen terminal.
PCT/CN2022/113472 2021-09-17 2022-08-19 一种对儿童使用屏幕终端进行智能管控的方法及系统 WO2023040577A1 (zh)

Priority Applications (3)

Application Number Priority Date Filing Date Title
DE112022000166.6T DE112022000166T5 (de) 2021-09-17 2022-08-19 Verfahren und System zur intelligenten Verwaltung und Steuerung der Nutzung von Bildschirmterminals durch Kinder
JP2023529007A JP7540657B2 (ja) 2021-09-17 2022-08-19 児童のモニタ端末の使用をインテリジェントに制御する方法及びシステム
US18/296,175 US20230237699A1 (en) 2021-09-17 2023-04-05 Method and system for itelligently controlling children's usage of screen terminal

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202111092773.3A CN113807252B (zh) 2021-09-17 2021-09-17 一种对儿童使用屏幕终端进行智能管控的方法及系统
CN202111092773.3 2021-09-17

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US18/296,175 Continuation US20230237699A1 (en) 2021-09-17 2023-04-05 Method and system for itelligently controlling children's usage of screen terminal

Publications (1)

Publication Number Publication Date
WO2023040577A1 true WO2023040577A1 (zh) 2023-03-23

Family

ID=78939616

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/113472 WO2023040577A1 (zh) 2021-09-17 2022-08-19 一种对儿童使用屏幕终端进行智能管控的方法及系统

Country Status (5)

Country Link
US (1) US20230237699A1 (zh)
JP (1) JP7540657B2 (zh)
CN (1) CN113807252B (zh)
DE (1) DE112022000166T5 (zh)
WO (1) WO2023040577A1 (zh)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113807252B (zh) * 2021-09-17 2024-01-26 东胜神州旅游管理有限公司 一种对儿童使用屏幕终端进行智能管控的方法及系统
CN115170075B (zh) * 2022-07-06 2023-06-16 深圳警通人才科技有限公司 一种基于数字化平台技术的智慧办公系统

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103729981A (zh) * 2014-01-03 2014-04-16 东南大学 一种儿童坐姿监控智能终端
WO2017152649A1 (zh) * 2016-03-08 2017-09-14 珠海全志科技股份有限公司 一种自动提示人眼离屏幕距离的方法和系统
CN109271028A (zh) * 2018-09-18 2019-01-25 北京猎户星空科技有限公司 智能设备的控制方法、装置、设备和存储介质
CN113807252A (zh) * 2021-09-17 2021-12-17 东胜神州旅游管理有限公司 一种对儿童使用屏幕终端进行智能管控的方法及系统

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5771127B2 (ja) 2011-11-15 2015-08-26 日本放送協会 注目度推定装置およびそのプログラム
US11051689B2 (en) 2018-11-02 2021-07-06 International Business Machines Corporation Real-time passive monitoring and assessment of pediatric eye health
JP2021111890A (ja) 2020-01-14 2021-08-02 三菱電機エンジニアリング株式会社 映像表示装置

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103729981A (zh) * 2014-01-03 2014-04-16 东南大学 一种儿童坐姿监控智能终端
WO2017152649A1 (zh) * 2016-03-08 2017-09-14 珠海全志科技股份有限公司 一种自动提示人眼离屏幕距离的方法和系统
CN109271028A (zh) * 2018-09-18 2019-01-25 北京猎户星空科技有限公司 智能设备的控制方法、装置、设备和存储介质
CN113807252A (zh) * 2021-09-17 2021-12-17 东胜神州旅游管理有限公司 一种对儿童使用屏幕终端进行智能管控的方法及系统

Also Published As

Publication number Publication date
CN113807252A (zh) 2021-12-17
JP2023549864A (ja) 2023-11-29
US20230237699A1 (en) 2023-07-27
JP7540657B2 (ja) 2024-08-27
CN113807252B (zh) 2024-01-26
DE112022000166T5 (de) 2023-11-16

Similar Documents

Publication Publication Date Title
WO2023040577A1 (zh) 一种对儿童使用屏幕终端进行智能管控的方法及系统
WO2017152649A1 (zh) 一种自动提示人眼离屏幕距离的方法和系统
CN103729981B (zh) 一种儿童坐姿监控智能终端
CN106846734B (zh) 一种疲劳驾驶检测装置及方法
WO2023040578A1 (zh) 一种基于童脸识别的儿童坐姿检测方法及系统
CN111145739A (zh) 一种基于视觉的免唤醒语音识别方法、计算机可读存储介质及空调
CN103908063A (zh) 一种矫正坐姿的智能书桌及其矫正方法
CN106792177A (zh) 一种电视控制方法及系统
TWI729983B (zh) 電子裝置、螢幕調節系統及方法
WO2015158087A1 (zh) 一种检测人眼健康状态的方法、装置及移动终端
CN104951808A (zh) 一种用于机器人交互对象检测的3d视线方向估计方法
CN103793719A (zh) 一种基于人眼定位的单目测距方法和系统
CN107958572B (zh) 一种婴儿监控系统
CN103948236A (zh) 一种矫正坐姿的智能书桌及其矫正方法
CN103908064A (zh) 一种矫正坐姿的智能书桌及其矫正方法
CN107273071A (zh) 电子装置、屏幕调节系统及方法
CN111265220A (zh) 一种近视预警方法、装置及设备
CN111461042A (zh) 跌倒检测方法及系统
CN111447497A (zh) 智能播放设备及其节能控制方法
CN103908066A (zh) 一种矫正坐姿的智能书桌及其矫正方法
CN108010579A (zh) 健康监护系统
CN103919359A (zh) 一种矫正坐姿的智能书桌及其矫正方法
WO2023040576A1 (zh) 一种目标为儿童的双目测距方法及系统
CN208092911U (zh) 一种婴儿监控系统
WO2021258644A1 (zh) 基于机器视觉的室内环境健康度调节方法与系统

Legal Events

Date Code Title Description
WWE Wipo information: entry into national phase

Ref document number: 2023529007

Country of ref document: JP

WWE Wipo information: entry into national phase

Ref document number: 112022000166

Country of ref document: DE

WWE Wipo information: entry into national phase

Ref document number: 202317042526

Country of ref document: IN

122 Ep: pct application non-entry in european phase

Ref document number: 22868951

Country of ref document: EP

Kind code of ref document: A1