CN111694980A - Robust family child learning state visual supervision method and device - Google Patents

Robust family child learning state visual supervision method and device

Info

Publication number
CN111694980A
Authority
CN
China
Prior art keywords
facial
data
face
key
roi
Prior art date
Legal status
Pending
Application number
CN202010538607.0A
Other languages
Chinese (zh)
Inventor
Li Long (李龙)
Song Heng (宋恒)
Zhao Dan (赵丹)
Cui Xiutao (崔修涛)
Lin Yueyin (林月胤)
Current Assignee
Dewertokin Technology Group Co Ltd
Original Assignee
Dewertokin Technology Group Co Ltd
Priority date
Filing date
Publication date
Application filed by Dewertokin Technology Group Co Ltd filed Critical Dewertokin Technology Group Co Ltd
Priority to CN202010538607.0A priority Critical patent/CN111694980A/en
Publication of CN111694980A publication Critical patent/CN111694980A/en
Priority to PCT/CN2020/128882 priority patent/WO2021248814A1/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/50 Information retrieval of still image data
    • G06F 16/58 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F 16/583 Retrieval using metadata automatically derived from the content
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G06N 3/08 Learning methods
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES
    • G06Q 10/00 Administration; Management
    • G06Q 10/06 Resources, workflows, human or project management; Enterprise or organisation planning; Enterprise or organisation modelling
    • G06Q 10/063 Operations research, analysis or management
    • G06Q 10/0639 Performance analysis of employees; Performance analysis of enterprise or organisation operations
    • G06Q 50/00 Systems or methods specially adapted for specific business sectors, e.g. utilities or tourism
    • G06Q 50/10 Services
    • G06Q 50/20 Education
    • G06Q 50/205 Education administration or guidance
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/20 Image preprocessing
    • G06V 10/25 Determination of region of interest [ROI] or a volume of interest [VOI]
    • G06V 10/26 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V 10/267 Segmentation by performing operations on regions, e.g. growing, shrinking or watersheds
    • G06V 10/30 Noise filtering
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/168 Feature extraction; Face representation
    • G06V 40/171 Local features and components; Facial parts; Occluding parts, e.g. glasses; Geometrical relationships
    • G06V 40/172 Classification, e.g. identification

Abstract

The invention relates to a robust visual supervision method for the home learning state of children. By dividing the acquired image into ROI regions in advance for fine detection and recognition, the method reduces the amount of input data on the one hand and simplifies the problem on the other, improving the efficiency and speed of the processing pipeline. Combined with deep-learning heatmap detection that fuses appearance and geometry, and with ROI tracking plus correlation filtering and denoising, it improves the system's robustness to illumination and posture changes and increases the accuracy of facial recognition.

Description

Robust family child learning state visual supervision method and device
Technical Field
The invention relates to the technical field of computer vision processing, and in particular to a robust method and device for visually supervising the learning state of children at home.
Background
With the continuous development of China, children's education receives increasing attention from society, and learning has expanded from school-only instruction to multiple modes such as online and offline learning at home. However, children generally lack self-discipline, parents have limited energy, and teachers can hardly supervise home study, so learning efficiency is often low.
There are two existing ways to supervise students' online learning. One is contact-based: detection is accurate, but the sensors must be in direct contact with the child, which interferes with studying to some extent. The other is non-contact: a camera observes the child's external behavior and internal physiological changes.
For example, Chinese patent application publication No. CN110867105A discloses a family learning supervision method and system based on edge computing; it proposes a computer-vision approach but does not describe a concrete implementation of facial state and behavior analysis. Chinese patent application publication No. CN110197169A discloses a non-contact learning state monitoring system and learning state detection method, providing a facial state analysis method based on a digital computer vision toolkit.
Disclosure of Invention
In view of the defects of the prior art, the invention aims to provide a robust method and device for supervising the learning state of children at home, with high processing speed and stable, reliable facial behavior analysis under complex illumination and large posture changes.
The above object of the present invention is achieved by the following technical solutions:
a robust family child learning state visual supervision method comprises the following steps:
s1, collecting face data according to a preset frequency, extracting a plurality of key feature points, and submitting the key feature points to a feature detection module in sequence according to a time sequence;
s2, judging whether the key feature points are the monitoring objects or not by the feature detection module;
if yes, go to step S3;
if not, returning to the step S1;
s3, separating the key feature points of different face areas according to the requirement of face recognition to obtain a plurality of groups of face key point data;
s4, calculating the region of the key feature point corresponding to the next frame according to the face key point data corresponding to the current frame number, and defining the region as an ROI region;
s5, performing self-inspection on the ROI to judge whether the ROI is a human face of a monitored object;
if yes, entering step S3, and performing deep learning thermodynamic detection on the data of the ROI to acquire facial thermodynamic information;
if not, returning to the step S1;
and S6, acquiring the facial key point data and the facial thermodynamic information in real time through a quantitative analysis module, integrating and classifying the facial key point data and the facial thermodynamic information, and comparing the integrated and classified data with corresponding data in the standard feature database to obtain a quantized learning state evaluation result.
In step S1, the facial data are collected by edge AI extraction, and the key feature points correspond to the eyes, nose tip, mouth and facial contour of the face.
In step S3, the video frames containing the key feature points are cropped, scaled, filtered, denoised, histogram-equalized and grey-balanced, and thereby converted into normalized standard images;
and the standard images are then segmented by facial organ region to obtain the facial key point data.
In step S4, the ROI in frame t+1 is derived from the position coordinates of the facial key point data in frame t.
In step S6, an attention mechanism is used to repeatedly compare the details of the identified objects, improving the comparison accuracy.
When the resolution of the facial key point data and the facial heatmap information is too low for an effective comparison with the corresponding data in the standard feature database, the images can first be reconstructed end to end into high-resolution images before the comparison.
The detection data of different facial parts are classified with an LSTM classification method.
A robust family child learning state visual supervision device comprises a data acquisition module, a feature detection module, a feature-of-interest detection module, a heatmap detection module, an algorithm module, a quantitative analysis module and a standard feature database;
the data acquisition module collects facial data, extracts a plurality of key feature points, and submits them to the feature detection module in chronological order;
the feature detection module judges from the key feature points whether the subject is the monitored object, and sends the qualifying data to the feature-of-interest detection module and the heatmap detection module;
the feature-of-interest detection module detects the different key feature points separately to obtain the facial key point data of the monitored object, and the algorithm module calculates, from each separated key feature point, the associated ROI (region of interest) in the next frame;
the algorithm module performs a self-check on the ROI to judge whether it contains the face of the monitored object; if so, the ROI is sent back to the feature-of-interest detection module for continued detection, and if not, the separate detection of the feature-of-interest detection module is interrupted;
the heatmap detection module performs heatmap detection on the ROI data to acquire facial heatmap information;
and the quantitative analysis module acquires the facial key point data and the facial heatmap information in real time, integrates and classifies them, and compares the result with the corresponding data in the standard feature database to obtain a quantified learning state evaluation result.
In summary, the invention includes at least one of the following beneficial technical effects:
By dividing the acquired image into ROI regions in advance for fine detection and recognition, the method reduces the amount of input data on the one hand and simplifies the problem on the other, improving the efficiency and speed of the processing pipeline; combined with deep-learning heatmap detection, ROI tracking and correlation filtering and denoising, it improves the system's robustness to illumination and posture changes and increases the accuracy of facial recognition.
Drawings
FIG. 1 is a block diagram of a method of an embodiment of the invention;
FIG. 2 is a detailed flow diagram of an embodiment of the invention.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings.
Referring to FIG. 1, the invention discloses a robust family child learning state visual supervision method, which comprises the following steps:
S1, collecting facial data at a preset frequency, extracting a plurality of key feature points, and submitting them to a feature detection module in chronological order;
S2, judging, by the feature detection module and based on the key feature points, whether the subject is the monitored object;
if yes, proceed to step S3;
if not, return to step S1;
S3, separating the key feature points of different facial areas according to the requirements of facial recognition to obtain a plurality of groups of facial key point data;
S4, calculating, from the facial key point data of the current frame, the region in which each key feature point will appear in the next frame, and defining that region as an ROI;
S5, performing a self-check on the ROI to judge whether it contains the face of the monitored object;
if yes, entering step S3 while performing deep-learning heatmap detection on the ROI data to obtain facial heatmap information;
in this embodiment, the countermeasure network needs to be generated according to sample data training, and specifically includes four steps of obtaining sample data, training sample preprocessing, generating illumination countermeasure training of the countermeasure network, and generating posture countermeasure training of the countermeasure network.
In the step of obtaining sample data, the face images under various illumination and angles are required to be obtained as sample data, and in the embodiment, 13 postures in the CMU Multi-PIE and the face images under 20 illumination conditions are adopted as training data sets. Since later model training is facilitated, each sample image is normalized first.
In the training sample preprocessing step, this embodiment performs key point detection on each face image with the MTCNN method, selects the left eye, right eye, nose, left mouth corner and right mouth corner as five key points, and stores the key point coordinates, the image path and the label together in a text file, so that the heatmap images corresponding to the key points can be obtained for training and testing.
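By way of non-limiting illustration, the following Python sketch shows one way this preprocessing could be realised with the open-source mtcnn package: the five key points are detected, rescaled to a normalized image size, rendered as one Gaussian heatmap channel each, and the coordinates, image path and label are appended to a text file. The package choice, heatmap size and Gaussian sigma are illustrative assumptions, not requirements of the embodiment.

```python
import cv2
import numpy as np
from mtcnn import MTCNN  # pip install mtcnn (illustrative choice of detector)

detector = MTCNN()

def gaussian_heatmap(size, cx, cy, sigma=3.0):
    """Render one key point as a 2-D Gaussian blob on a size x size grid."""
    ys, xs = np.mgrid[0:size, 0:size]
    return np.exp(-((xs - cx) ** 2 + (ys - cy) ** 2) / (2 * sigma ** 2))

def preprocess_sample(image_path, label, out_file, size=128):
    img = cv2.cvtColor(cv2.imread(image_path), cv2.COLOR_BGR2RGB)
    faces = detector.detect_faces(img)
    if not faces:
        return None
    kp = faces[0]["keypoints"]
    sx, sy = size / img.shape[1], size / img.shape[0]
    # The five key points named above, rescaled to the normalized image size.
    points = [(x * sx, y * sy) for x, y in (kp["left_eye"], kp["right_eye"],
              kp["nose"], kp["mouth_left"], kp["mouth_right"])]
    # One heatmap channel per key point, for training and testing.
    heatmaps = np.stack([gaussian_heatmap(size, x, y) for x, y in points])
    # Store key point coordinates, image path and label together in a text file.
    with open(out_file, "a") as f:
        coords = " ".join(f"{x:.1f},{y:.1f}" for x, y in points)
        f.write(f"{image_path} {label} {coords}\n")
    return heatmaps
```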
In the illumination adversarial training step, an image and a target illumination label are selected from the sample data as input to the illumination generator, which outputs the target illumination image; this image is then fed back into the illumination generator together with the original illumination label to obtain a false original illumination image. The discriminator feeds the error between the real image and the false original illumination image back to the illumination generator, while the identity classifier and the illumination classifier respectively feed back the identity and illumination errors between the target face image and the generated image; the illumination generator, the discriminator and the classifiers are trained iteratively in this way.
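A hedged PyTorch sketch of one iteration of this illumination adversarial training follows. The module interfaces (G, D, C_id, C_light) are hypothetical stand-ins, since the embodiment fixes the training roles but not the architectures, and the loss weighting is an illustrative choice.

```python
import torch
import torch.nn.functional as F

def illumination_train_step(G, D, C_id, C_light, opt_g, opt_d,
                            img, id_label, light_label, target_light):
    # Illumination generator: image + target illumination label -> relit image.
    fake_target = G(img, target_light)
    # Feed the relit image back with the original label -> false original image.
    fake_orig = G(fake_target, light_label)

    # Discriminator feeds back the real / false error.
    opt_d.zero_grad()
    d_real, d_fake = D(img), D(fake_target.detach())
    d_loss = (F.binary_cross_entropy_with_logits(d_real, torch.ones_like(d_real))
              + F.binary_cross_entropy_with_logits(d_fake, torch.zeros_like(d_fake)))
    d_loss.backward()
    opt_d.step()

    # Generator: adversarial error, reconstruction of the original image, and
    # identity / illumination errors fed back by the two classifiers.
    opt_g.zero_grad()
    d_gen = D(fake_target)
    adv = F.binary_cross_entropy_with_logits(d_gen, torch.ones_like(d_gen))
    recon = F.l1_loss(fake_orig, img)
    id_err = F.cross_entropy(C_id(fake_target), id_label)
    light_err = F.cross_entropy(C_light(fake_target), target_light)
    g_loss = adv + 10.0 * recon + id_err + light_err  # 10.0: illustrative weight
    g_loss.backward()
    opt_g.step()
    return d_loss.item(), g_loss.item()
```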
If not, returning to the step S1;
and S6, acquiring the facial key point data and the facial heatmap information in real time through a quantitative analysis module, integrating and classifying them, and comparing the result with the corresponding data in a standard feature database to obtain a quantified learning state evaluation result.
In step S1, the facial data are collected by edge AI extraction, and the key feature points correspond to the eyes, nose tip, mouth and facial contour of the face.
In step S3, the video frames containing the key feature points are cropped, scaled, filtered, denoised, histogram-equalized and grey-balanced, and thereby converted into normalized standard images;
and the standard images are then segmented by facial organ region to obtain the facial key point data.
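The following OpenCV sketch illustrates one possible realisation of this normalization chain; the crop margin, output size and filter strength are illustrative assumptions.

```python
import cv2
import numpy as np

def normalize_face(frame, box, size=256):
    x, y, w, h = box                          # face box around the key feature points
    m = int(0.1 * max(w, h))                  # small context margin (assumption)
    crop = frame[max(0, y - m):y + h + m, max(0, x - m):x + w + m]  # cropping
    crop = cv2.resize(crop, (size, size))                           # scaling
    gray = cv2.cvtColor(crop, cv2.COLOR_BGR2GRAY)
    gray = cv2.fastNlMeansDenoising(gray, h=10)                     # filtering / denoising
    gray = cv2.equalizeHist(gray)                                   # histogram equalization
    # Grey-scale balancing: zero mean, unit variance.
    std = gray.astype(np.float32)
    return (std - std.mean()) / (std.std() + 1e-6)  # the normalized standard image
```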
In step S4, the ROI in frame t+1 is derived from the position coordinates of the facial key point data in frame t.
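One minimal way to realise this is to take a padded bounding box around the frame-t key point coordinates, optionally shifted by the inter-frame key point motion, as in the sketch below; the padding factor and the constant-velocity shift are assumptions, since the description only specifies that the t+1 ROI is derived from the frame-t coordinates.

```python
import numpy as np

def predict_roi(points_t, points_prev=None, pad=0.25):
    pts = np.asarray(points_t, dtype=np.float32)   # (N, 2) key points at frame t
    if points_prev is not None:                    # simple constant-velocity shift
        pts = pts + (pts - np.asarray(points_prev, dtype=np.float32))
    x0, y0 = pts.min(axis=0)
    x1, y1 = pts.max(axis=0)
    px, py = pad * (x1 - x0), pad * (y1 - y0)
    return (x0 - px, y0 - py, x1 + px, y1 + py)    # ROI for frame t+1
```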
In step S6, an attention mechanism is used to repeatedly compare the details of the identified objects, improving the comparison accuracy.
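As a hedged sketch of such an attention-weighted comparison, the snippet below scores per-part feature vectors against a database entry and concentrates the final score on the most informative details; the feature layout and softmax temperature are assumptions.

```python
import torch
import torch.nn.functional as F

def attentive_compare(parts, reference, temperature=0.1):
    """parts, reference: (P, D) tensors of per-part facial features."""
    sims = F.cosine_similarity(parts, reference, dim=1)  # per-detail similarity
    attn = F.softmax(sims / temperature, dim=0)          # focus on decisive details
    return (attn * sims).sum()                           # attention-weighted score
```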
When the resolution of the facial key point data and the facial heatmap information is too low for an effective comparison with the corresponding data in the standard feature database, the images can first be reconstructed end to end into high-resolution images before the comparison.
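An SRCNN-style network is one common end-to-end super-resolution choice and is used in the sketch below purely as a stand-in; the description only requires an end-to-end reconstruction into a high-resolution image before the comparison.

```python
import torch
import torch.nn as nn

class SRNet(nn.Module):
    """SRCNN-style mapping from an upscaled low-resolution face image to a
    high-resolution reconstruction (illustrative architecture)."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(1, 64, 9, padding=4), nn.ReLU(inplace=True),
            nn.Conv2d(64, 32, 5, padding=2), nn.ReLU(inplace=True),
            nn.Conv2d(32, 1, 5, padding=2))

    def forward(self, x):
        return self.body(x)

lowres = torch.rand(1, 1, 64, 64)             # face data below the comparison threshold
upscaled = nn.functional.interpolate(lowres, scale_factor=2,
                                     mode="bicubic", align_corners=False)
highres = SRNet()(upscaled)                   # reconstructed image, then compared
```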
The detection data of different facial parts are classified with an LSTM classification method.
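A minimal PyTorch sketch of such an LSTM classifier follows; the feature dimension, hidden size and the number of state classes are illustrative assumptions.

```python
import torch
import torch.nn as nn

class PartStateLSTM(nn.Module):
    """Classifies a time sequence of per-part detection features into a state label."""
    def __init__(self, feat_dim=16, hidden=64, n_states=2):
        super().__init__()
        self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_states)

    def forward(self, seq):                    # seq: (B, T, feat_dim)
        _, (h, _) = self.lstm(seq)
        return self.head(h[-1])                # state logits per sequence

logits = PartStateLSTM()(torch.rand(4, 30, 16))   # 4 sequences of 30 frames each
```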
A robust family child learning state visual supervision device comprises a data acquisition module, a feature detection module, a feature-of-interest detection module, a heatmap detection module, an algorithm module, a quantitative analysis module and a standard feature database;
the data acquisition module collects facial data, extracts a plurality of key feature points, and submits them to the feature detection module in chronological order;
the feature detection module judges from the key feature points whether the subject is the monitored object, and sends the qualifying data to the feature-of-interest detection module and the heatmap detection module;
the feature-of-interest detection module detects the different key feature points separately to obtain the facial key point data of the monitored object, and the algorithm module calculates, from each separated key feature point, the associated ROI (region of interest) in the next frame;
the algorithm module performs a self-check on the ROI to judge whether it contains the face of the monitored object; if so, the ROI is sent back to the feature-of-interest detection module for continued detection, and if not, the separate detection of the feature-of-interest detection module is interrupted;
the heatmap detection module performs heatmap detection on the ROI data to acquire facial heatmap information;
and the quantitative analysis module acquires the facial key point data and the facial heatmap information in real time, integrates and classifies them, and compares the result with the corresponding data in the standard feature database to obtain a quantified learning state evaluation result. In the comparison, accurate tuning for a particular individual, such as eyeball state detection, may be added: the eyeball model in the standard feature database can be reconstructed from the eyeball structure of the currently monitored subject, improving the accuracy of eyeball state detection.
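By way of example, one simple and commonly used proxy for eye-state detection is the eye aspect ratio (EAR) computed from six eye-contour key points; this particular measure is an assumption for illustration and is not prescribed here.

```python
import numpy as np

def eye_aspect_ratio(eye):
    """eye: (6, 2) array of eye-contour points, ordered around the eye."""
    eye = np.asarray(eye, dtype=np.float32)
    v1 = np.linalg.norm(eye[1] - eye[5])   # vertical distances
    v2 = np.linalg.norm(eye[2] - eye[4])
    h = np.linalg.norm(eye[0] - eye[3])    # horizontal distance
    return (v1 + v2) / (2.0 * h)           # small EAR -> eye likely closed
```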
In summary, the invention includes at least one of the following beneficial technical effects:
By dividing the acquired image into ROI regions in advance for fine detection and recognition, the method reduces the amount of input data on the one hand and simplifies the problem on the other, improving the efficiency and speed of the processing pipeline; combined with deep-learning heatmap detection, ROI tracking and correlation filtering and denoising, it improves the system's robustness to illumination and posture changes and increases the accuracy of facial recognition.
The embodiments described above are preferred embodiments of the invention, and the scope of protection of the invention is not limited by them: all equivalent changes made according to the structure, shape and principle of the invention are covered by the scope of protection of the invention.

Claims (8)

1. A robust family child learning state visual supervision method, characterized by comprising the following steps:
S1, collecting facial data at a preset frequency, extracting a plurality of key feature points, and submitting them to a feature detection module in chronological order;
S2, judging, by the feature detection module and based on the key feature points, whether the subject is the monitored object;
if yes, proceed to step S3;
if not, return to step S1;
S3, separating the key feature points of different facial areas according to the requirements of facial recognition to obtain a plurality of groups of facial key point data;
S4, calculating, from the facial key point data of the current frame, the region in which each key feature point will appear in the next frame, and defining that region as an ROI;
S5, performing a self-check on the ROI to judge whether it contains the face of the monitored object;
if yes, entering step S3 while performing deep-learning heatmap detection on the ROI data to acquire facial heatmap information;
if not, returning to step S1;
and S6, acquiring the facial key point data and the facial heatmap information in real time through a quantitative analysis module, integrating and classifying them, and comparing the result with the corresponding data in the standard feature database to obtain a quantified learning state evaluation result.
2. The robust family child learning state visual supervision method according to claim 1, characterized in that: in step S1, the facial data are collected by edge AI extraction, and the key feature points correspond to the eyes, nose tip, mouth and facial contour of the face.
3. The robust family child learning state visual supervision method according to claim 1, characterized in that: in step S3, the video frames containing the key feature points are cropped, scaled, filtered, denoised, histogram-equalized and grey-balanced, and thereby converted into normalized standard images;
and the standard images are then segmented by facial organ region to obtain the facial key point data.
4. The robust family child learning state visual supervision method according to claim 3, characterized in that: in step S4, the ROI in frame t+1 is derived from the position coordinates of the facial key point data in frame t.
5. The robust family child learning state visual supervision method according to claim 1, characterized in that: in step S6, an attention mechanism is used to repeatedly compare the details of the identified objects, improving the comparison accuracy.
6. The robust family child learning state visual supervision method according to claim 5, characterized in that: when the resolution of the facial key point data and the facial heatmap information is too low for an effective comparison with the corresponding data in the standard feature database, the images can first be reconstructed end to end into high-resolution images before the comparison.
7. The robust family child learning state visual supervision method according to claim 6, characterized in that: the detection data of different facial parts are classified with an LSTM classification method.
8. A robust family child learning state visual supervision device, characterized by comprising: a data acquisition module, a feature detection module, a feature-of-interest detection module, a heatmap detection module, an algorithm module, a quantitative analysis module and a standard feature database;
the data acquisition module collects facial data, extracts a plurality of key feature points, and submits them to the feature detection module in chronological order;
the feature detection module judges from the key feature points whether the subject is the monitored object, and sends the qualifying data to the feature-of-interest detection module and the heatmap detection module;
the feature-of-interest detection module detects the different key feature points separately to obtain the facial key point data of the monitored object, and the algorithm module calculates, from each separated key feature point, the associated ROI (region of interest) in the next frame;
the algorithm module performs a self-check on the ROI to judge whether it contains the face of the monitored object; if so, the ROI is sent back to the feature-of-interest detection module for continued detection, and if not, the separate detection of the feature-of-interest detection module is interrupted;
the heatmap detection module performs heatmap detection on the ROI data to acquire facial heatmap information;
and the quantitative analysis module acquires the facial key point data and the facial heatmap information in real time, integrates and classifies them, and compares the result with the corresponding data in the standard feature database to obtain a quantified learning state evaluation result.
CN202010538607.0A 2020-06-13 2020-06-13 Robust family child learning state visual supervision method and device Pending CN111694980A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202010538607.0A CN111694980A (en) 2020-06-13 2020-06-13 Robust family child learning state visual supervision method and device
PCT/CN2020/128882 WO2021248814A1 (en) 2020-06-13 2020-11-15 Robust visual supervision method and apparatus for home learning state of child

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010538607.0A CN111694980A (en) 2020-06-13 2020-06-13 Robust family child learning state visual supervision method and device

Publications (1)

Publication Number Publication Date
CN111694980A 2020-09-22

Family

ID=72480855

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010538607.0A Pending CN111694980A (en) 2020-06-13 2020-06-13 Robust family child learning state visual supervision method and device

Country Status (2)

Country Link
CN (1) CN111694980A (en)
WO (1) WO2021248814A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2021248814A1 (en) * 2020-06-13 2021-12-16 德派(嘉兴)医疗器械有限公司 Robust visual supervision method and apparatus for home learning state of child

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114373535A (en) * 2022-01-13 2022-04-19 刘威 Novel doctor-patient mechanism system based on Internet

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN100397410C (en) * 2005-12-31 2008-06-25 北京中星微电子有限公司 Method and device for distinguishing face expression based on video frequency
CN109299685A (en) * 2018-09-14 2019-02-01 北京航空航天大学青岛研究院 Deduction network and its method for the estimation of human synovial 3D coordinate
CN109472198B (en) * 2018-09-28 2022-03-15 武汉工程大学 Gesture robust video smiling face recognition method
CN111160085A (en) * 2019-11-19 2020-05-15 天津中科智能识别产业技术研究院有限公司 Human body image key point posture estimation method
CN111046825A (en) * 2019-12-19 2020-04-21 杭州晨鹰军泰科技有限公司 Human body posture recognition method, device and system and computer readable storage medium
CN111694980A (en) * 2020-06-13 2020-09-22 德沃康科技集团有限公司 Robust family child learning state visual supervision method and device


Also Published As

Publication number Publication date
WO2021248814A1 (en) 2021-12-16

Similar Documents

Publication Publication Date Title
CN105138954B (en) A kind of image automatic screening inquiry identifying system
Agarwal et al. Learning to detect objects in images via a sparse, part-based representation
CN104951773A (en) Real-time face recognizing and monitoring system
CN111325115A (en) Countermeasures cross-modal pedestrian re-identification method and system with triple constraint loss
CN105046219A (en) Face identification system
WO2021248815A1 (en) High-precision child sitting posture detection and correction method and device
CN103593648B (en) Face recognition method for open environment
CN111694980A (en) Robust family child learning state visual supervision method and device
Kamgar-Parsi et al. Aircraft detection: A case study in using human similarity measure
Faria et al. Interface framework to drive an intelligent wheelchair using facial expressions
Wan et al. A facial recognition system for matching computerized composite sketches to facial photos using human visual system algorithms
Chen Evaluation technology of classroom students’ learning state based on deep learning
Curran et al. The use of neural networks in real-time face detection
Bora et al. ISL gesture recognition using multiple feature fusion
CN115273150A (en) Novel identification method and system for wearing safety helmet based on human body posture estimation
Huang et al. Research on learning state based on students’ attitude and emotion in class learning
CN110348296B (en) Target identification method based on man-machine fusion
CN112949369A (en) Mass face gallery retrieval method based on man-machine cooperation
Akhtar et al. Temporal analysis of adaptive face recognition
Wang et al. A Dynamic Gesture Recognition Algorithm based on Feature Fusion from RGB-D Sensor
Naser et al. Facial recognition for partially occluded faces
CN116894978B (en) Online examination anti-cheating system integrating facial emotion and behavior multi-characteristics
Shah Automatic human face texture analysis for age and gender recognition
Bowns et al. Facial features and axis of symmetry extracted using natural orientation information
Boddu Face Recognition and Pattern Recognition Based Dress-Code Monitoring for Students

Legal Events

Date Code Title Description
PB01 Publication
TA01 Transfer of patent application right

Effective date of registration: 20201229

Address after: Office 1, 15th floor, building 1, Jiaxing photovoltaic technology innovation park, 1288 Kanghe Road, Xiuzhou District, Jiaxing City, Zhejiang Province, 314001

Applicant after: Depai (Jiaxing) medical equipment Co.,Ltd.

Address before: Room 247, building 6, Jiaxing photovoltaic technology innovation park, 1288 Kanghe Road, Xiuzhou District, Jiaxing City, Zhejiang Province, 314001

Applicant before: Dewertokin Technology Group Co., Ltd.

SE01 Entry into force of request for substantive examination