CN111209859B - Method for dynamically adapting display to visual angle based on face recognition - Google Patents


Publication number
CN111209859B
CN111209859B (application CN202010010053.7A)
Authority
CN
China
Prior art keywords
display
key
face recognition
information
face information
Prior art date
Legal status
Active
Application number
CN202010010053.7A
Other languages
Chinese (zh)
Other versions
CN111209859A (en)
Inventor
王卫
杨天浩
Current Assignee
Nanjing Jusha Display Technology Co Ltd
Nanjing Jusha Medical Technology Co Ltd
Original Assignee
Nanjing Jusha Display Technology Co Ltd
Nanjing Jusha Medical Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Nanjing Jusha Display Technology Co Ltd, Nanjing Jusha Medical Technology Co Ltd filed Critical Nanjing Jusha Display Technology Co Ltd
Priority to CN202010010053.7A
Publication of CN111209859A
Application granted
Publication of CN111209859B


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172Classification, e.g. identification
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2415Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/004Artificial life, i.e. computing arrangements simulating life
    • G06N3/006Artificial life, i.e. computing arrangements simulating life based on simulated virtual individual or collective life forms, e.g. social simulations or particle swarm optimisation [PSO]


Abstract

The invention discloses a method for dynamically adapting a display's viewing angle based on face recognition, comprising the following steps. Step SS1: enter the key person's face information through a device built into the display. Step SS2: identify all face information within the camera's field of view. Step SS3: compare the entered face information with all face information in the field of view to determine whether the target key person is in front of the display. Step SS4: if step SS3 finds the key person, dynamically adapt the display to that person; if not, rotate the display so that its limited field of view covers as many faces as possible. The invention solves the problem of guaranteeing the chief surgeon's optimal viewing angle in an operating-room setting. In multi-person settings such as conferences, consultations and morning meetings, it likewise guarantees the optimal viewing angle for key people, or a better viewing experience for more people.

Description

Method for dynamically adapting display to visual angle based on face recognition
Technical Field
The invention relates to a method for dynamically adapting a display's viewing angle based on face recognition, and belongs to the technical field of face recognition applications.
Background
With the increasing complexity of modern surgery, the integrated operating room has emerged. It brings multiple medical instruments into a single operating room so that surgery can be performed efficiently, safely and conveniently, and the most important piece of equipment through which doctors interact is the medical display. In minimally invasive surgery in particular, the display faithfully reproduces conditions inside the patient's body through imaging, helping doctors diagnose in real time and plan the next surgical step. The display's viewing angle is affected by where the doctors stand, and a doctor's station is usually the position best suited to the operation; if the viewing angle is poor, the doctor's judgment, and thus the safety of the operation, can be affected. A large endoscopic display is typically the only one in the operating room, so guaranteeing the chief surgeon's optimal viewing angle is an urgent problem. How to meet the technical need of dynamically adapting a display to a key person's viewing angle is therefore a challenge in the art.
Disclosure of Invention
The invention aims to overcome the above technical defects in the prior art and provide a method for dynamically adapting a display's viewing angle based on face recognition.
The invention adopts the following technical scheme. A method for dynamically adapting a display's viewing angle based on face recognition is characterized by comprising the following steps:
step SS1: entering the key person's face information through a device built into the display;
step SS2: identifying all face information within the camera's field of view;
step SS3: comparing the entered face information with all face information in the field of view to determine whether the target key person is in front of the display;
step SS4: if step SS3 finds the key person, dynamically adapting the display to that person; if not, rotating the display so that its limited field of view covers as many faces as possible.
As a preferred embodiment, the step SS1 specifically includes: extracting and storing the person's facial information using a Softmax loss function and a discriminative face recognition algorithm, thereby completing entry of the key person's face information.
As a preferred embodiment, the step SS1 further includes: the Softmax loss function is expressed as:

$$L_s=-\frac{1}{M}\sum_{i=1}^{M}\log\frac{e^{W_{y_i}^{T}x_i+b_{y_i}}}{\sum_{j=1}^{N}e^{W_j^{T}x_i+b_j}}\qquad(1)$$

where M is the number of samples input per training batch; N is the number of classes; $x_i$ is the feature vector of the i-th sample; $y_i$ is its class label; W and b are the weight matrix and bias vector of the last fully connected layer; $W_j$ is the weight vector of the j-th class; and $b_j$ is the corresponding bias term.
As a preferred embodiment, the step SS1 further includes: to eliminate the large intra-class variation produced by the Softmax loss function, make each class more compact and the features more discriminative, an intra-class cosine similarity loss function is adopted, expressed as:

$$L_c=\sum_{i=1}^{M}\left(1-\cos\theta_{y_i}\right)\qquad(2)$$

where $\theta_{y_i}$ is the angle between the feature vector of the i-th sample and its corresponding class weight vector.
As a preferred embodiment, the step SS1 further includes: to facilitate forward and backward propagation, equation (2) is rewritten as:

$$L_{c3}=\sum_{i=1}^{M}\left(1-z_i\right)\qquad(3)$$

where

$$z_i=\cos\theta_{y_i}=\frac{W_{y_i}^{T}x_i}{\lVert W_{y_i}\rVert\,\lVert x_i\rVert}\qquad(4)$$

Equation (3) effectively describes the intra-class variation; $z_i$ is the actual input to the loss layer and only needs to be computed during forward propagation. During backward propagation, the gradient of $L_{c3}$ with respect to $z_i$ is $\partial L_{c3}/\partial z_i=-1$.
As a preferred embodiment, the step SS1 further includes: so that the learned features are discriminative, training is carried out under the joint supervision of the Softmax loss function and the intra-class cosine similarity loss function, giving the discriminative face recognition objective:

$$L=L_s+\lambda L_{c3}\qquad(5)$$

where λ is a scalar that balances the two loss functions. The key person's face information is entered according to this Softmax loss function and discriminative face recognition algorithm.
As a preferred embodiment, the step SS3 specifically includes: comparing all face information in the field of view already entered into the database with the entered key-person face information one by one, to determine whether the target key person is in front of the current display; if the target key person is present, an instruction is sent to the display's rotating mechanism, and the screen rotates left or right until the key person lies on the screen's central axis.
As a preferred embodiment, the step SS3 further includes: the display camera takes the central axis perpendicular to the display as its reference; when there is an angle a between the key person and the central axis, the display is rotated dynamically until the key person returns to the central axis.
As a preferred embodiment, the step SS3 further includes: with the display camera's viewing angle at 30 degrees and the viewing distance at 3 m, when no key person is in the display's field of view, an ant colony algorithm module is started, which iterates on the number of people contained in the field of view.
As a preferred embodiment, the ant colony algorithm specifically includes: after each ant takes one step, or completes a traversal of all n nodes (i.e. at the end of one cycle), the residual pheromone is updated;
the pheromone on path (i, j) at time t+n, whose optimum corresponds to the largest number of people held in the field of view, can be expressed as:

$$\tau_{ij}(t+n)=(1-\rho)\,\tau_{ij}(t)+\Delta\tau_{ij}(t)\qquad(6)$$

$$\Delta\tau_{ij}(t)=\sum_{k=1}^{m}\Delta\tau_{ij}^{k}(t)\qquad(7)$$

The algorithm iterates using equations (6) and (7); when the iteration converges, the display's final angle is determined, and the optimal iteration result is the one that holds the largest number of people in the field of view. Here ρ is the pheromone evaporation factor and 1−ρ the residual factor; m is the total number of ants; $\tau_{ij}(t+n)$ is the amount of pheromone on path (i, j) at time t+n; $\tau_{ij}(t)$ is the amount at time t; and $\Delta\tau_{ij}(t)$, the change between times t and t+n, is the sum in equation (7) of the pheromone increments deposited on path (i, j) by ants 1 through m at time t.
The invention has the following beneficial effects. First, it solves the problem of guaranteeing the chief surgeon's optimal viewing angle in the operating room: by entering the chief surgeon's face information, it ensures that the patient's surgical image information is fed back to the chief surgeon through the display quickly and without error, improving the safety and efficiency of the operation. Second, in multi-person settings such as conferences, consultations and morning meetings, it guarantees the optimal viewing angle for key people, or a better viewing experience for more people.
Drawings
Fig. 1 is a schematic flow chart of a preferred embodiment of the present invention.
FIG. 2 is a schematic diagram of the initial state of the process of dynamically adapting a display to a key character according to the present invention.
FIG. 3 is a schematic diagram of the end state of the process of dynamically adapting a display to a key character in accordance with the present invention.
Detailed Description
The invention is further described below with reference to the accompanying drawings. The following examples serve only to illustrate the technical solution more clearly and are not intended to limit the scope of the invention.
As shown in fig. 1, the invention provides a method for dynamically adapting a display's viewing angle based on face recognition, comprising the following steps:
step SS1: entering the key person's face information through a device built into the display;
step SS2: identifying all face information within the camera's field of view;
step SS3: comparing the entered face information with all face information in the field of view to determine whether the target key person is in front of the display;
step SS4: if step SS3 finds the key person, dynamically adapting the display to that person; if not, rotating the display so that its limited field of view covers as many faces as possible.
In step SS1, the display needs a built-in camera with liveness detection, face capture, face comparison and face library management functions in order to enter and process face information. Because the display must rotate while dynamically adapting its viewing angle, the joint between the screen and its stand must support left-right rotation so that the display can adapt to the audience's viewing angle. Corresponding software is developed: clicking the software's "start entry" button enters the key person's face information into the database via face recognition.
The invention extracts and stores people's facial information using a Softmax loss function and a discriminative face recognition algorithm, thereby completing entry of the key person's face information.
The Softmax loss function is mainly used for multi-class classification. From the perspective of probability theory, it converts the raw class scores into a probability distribution, and is the cross entropy applied to the output of the Softmax function. It is expressed as:

$$L_s=-\frac{1}{M}\sum_{i=1}^{M}\log\frac{e^{W_{y_i}^{T}x_i+b_{y_i}}}{\sum_{j=1}^{N}e^{W_j^{T}x_i+b_j}}\qquad(1)$$

where M is the number of samples input per training batch; N is the number of classes; $x_i$ is the feature vector of the i-th sample; $y_i$ is its class label; W and b are the weight matrix and bias vector of the last fully connected layer; $W_j$ is the weight vector of the j-th class; and $b_j$ is the corresponding bias term.
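As a concrete illustration (not taken from the patent itself), the Softmax loss of equation (1) can be sketched in numpy; the array shapes and sample values below are invented for demonstration:

```python
import numpy as np

def softmax_loss(X, y, W, b):
    """Softmax (cross-entropy) loss averaged over a batch.

    X: (M, d) feature vectors, y: (M,) integer class labels,
    W: (d, N) weight matrix of the last fully connected layer,
    b: (N,) bias vector.
    """
    logits = X @ W + b                           # (M, N) scores W_j^T x_i + b_j
    logits -= logits.max(axis=1, keepdims=True)  # shift for numerical stability
    exp = np.exp(logits)
    probs = exp / exp.sum(axis=1, keepdims=True) # Softmax probabilities
    M = X.shape[0]
    return -np.log(probs[np.arange(M), y]).mean()

# Tiny example: 2 samples, 3 classes, 4-dimensional features
rng = np.random.default_rng(0)
X = rng.normal(size=(2, 4))
y = np.array([0, 2])
W = rng.normal(size=(4, 3))
b = np.zeros(3)
loss = softmax_loss(X, y, W, b)
```

With zero weights and biases every class gets probability 1/N, so the loss reduces to log N, which is a quick sanity check on the implementation.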
To eliminate the large intra-class variation produced by the Softmax loss function, make each class more compact and the features more discriminative, this patent adopts an intra-class cosine similarity loss function, expressed as:

$$L_c=\sum_{i=1}^{M}\left(1-\cos\theta_{y_i}\right)\qquad(2)$$

where $\theta_{y_i}$ is the angle between the feature vector of the i-th sample and its corresponding class weight vector.
To facilitate forward and backward propagation, equation (2) is rewritten as:

$$L_{c3}=\sum_{i=1}^{M}\left(1-z_i\right)\qquad(3)$$

where

$$z_i=\cos\theta_{y_i}=\frac{W_{y_i}^{T}x_i}{\lVert W_{y_i}\rVert\,\lVert x_i\rVert}\qquad(4)$$

Equation (3) effectively describes the intra-class variation; $z_i$ is the actual input to the loss layer and only needs to be computed during forward propagation. During backward propagation, the gradient of $L_{c3}$ with respect to $z_i$ is $\partial L_{c3}/\partial z_i=-1$.
So that the learned features are discriminative, training is carried out under the joint supervision of the Softmax loss and the intra-class cosine similarity loss, giving the discriminative face recognition objective:

$$L=L_s+\lambda L_{c3}\qquad(5)$$

where λ is a scalar that balances the two loss functions.
The invention enters the key person's face information according to this discriminative face recognition algorithm.
In step SS2, most existing display cameras have a maximum viewing angle of 30°. Standing within a certain distance in front of the display (3 m in this invention), the user clicks the software's "identify face information in range" button, and the display identifies all face information in its field of view and enters it into the database. The software then makes a judgment: if the key person's face information has been entered into the display, step SS3 is performed; if no key person's information has been entered, step SS4 is performed.
In step SS3, if the key person's face information has been entered into the display, all the face information in the field of view already entered into the database is compared with it one by one to determine whether the target key person is in front of the current display. If the target key person is present, the software sends an instruction to the display's rotating mechanism, and the screen rotates left or right until the key person lies on the screen's central axis, as shown in figs. 2 and 3.
With the central axis perpendicular to the display as the camera's reference, when there is an angle a between the key person and the central axis (as shown in fig. 2), the display rotates dynamically until the key person returns to the central axis (as shown in fig. 3).
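The centering step above can be sketched as follows. The camera parameters (a 30° horizontal field of view, a 640-pixel frame width), the function name and the sign convention are all assumptions for illustration, not taken from the patent:

```python
import math

def rotation_to_center(face_x_px, frame_width_px=640, hfov_deg=30.0):
    """Angle (degrees) the display should rotate so that a face detected at
    horizontal pixel position face_x_px lands on the screen's central axis.
    Positive = rotate toward the right half of the frame (convention assumed).
    """
    cx = frame_width_px / 2.0
    # Pin-hole model: a pixel offset from the image center maps to an
    # angle through the focal length implied by the field of view.
    focal_px = cx / math.tan(math.radians(hfov_deg / 2.0))
    return math.degrees(math.atan((face_x_px - cx) / focal_px))

angle = rotation_to_center(480)   # face detected to the right of center
```

A face exactly on the central axis yields 0°, and a face at the frame edge yields half the field of view (15° here), matching the 30° viewing-angle figure used in the patent.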
If not, go to step SS4.
In step SS4, with the camera's viewing angle still 30° and the viewing distance 3 m as in step SS2, when no key person is in the display's field of view, the software starts the ant colony algorithm module and iterates on the number of people in the field of view.
The ant colony algorithm (ACO) is an optimization algorithm that simulates the foraging behavior of ants. Its basic principle is as follows:
1. Ants release pheromone on the paths they travel.
2. At an intersection it has not visited, an ant chooses a path at random, and releases pheromone in an amount related to the path length.
3. Pheromone concentration is inversely proportional to path length. When later ants reach the same intersection, they prefer the path with the higher pheromone concentration.
4. The pheromone concentration on the optimal path keeps growing.
5. Eventually the colony converges on the optimal foraging path.
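Step 3 above, choosing a branch with probability proportional to its pheromone concentration, can be sketched as follows; the values and the fixed seed are illustrative only:

```python
import random

def choose_path(pheromone, rng=random.Random(42)):
    """Pick a branch index with probability proportional to its pheromone
    level (roulette-wheel selection). The shared seeded rng makes the
    demonstration reproducible."""
    total = sum(pheromone)
    r = rng.uniform(0.0, total)
    acc = 0.0
    for i, tau in enumerate(pheromone):
        acc += tau
        if r <= acc:
            return i
    return len(pheromone) - 1     # guard against floating-point round-off

# A branch with 9x the pheromone is chosen far more often
counts = [0, 0]
for _ in range(1000):
    counts[choose_path([9.0, 1.0])] += 1
```

This probabilistic choice, rather than always taking the strongest branch, is what lets the colony keep exploring while still concentrating on promising paths.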
To prevent residual pheromone from overwhelming the heuristic information, the residual pheromone is updated after each ant takes one step or completes a traversal of all n nodes (i.e. at the end of one cycle).
The pheromone update on path (i, j) at time t+n, whose optimum corresponds to the largest number of people held in the field of view, is expressed as:

$$\tau_{ij}(t+n)=(1-\rho)\,\tau_{ij}(t)+\Delta\tau_{ij}(t)\qquad(6)$$

$$\Delta\tau_{ij}(t)=\sum_{k=1}^{m}\Delta\tau_{ij}^{k}(t)\qquad(7)$$

This patent iterates with the above algorithm; when the iteration converges, the display's final angle is determined, and the optimal iteration result is the one that holds the largest number of people in the field of view. In practice, the number of iterations can be customized to the actual situation. Here ρ is the pheromone evaporation factor and 1−ρ the residual factor; m is the total number of ants; $\tau_{ij}(t+n)$ is the amount of pheromone on path (i, j) at time t+n; $\tau_{ij}(t)$ is the amount at time t; and $\Delta\tau_{ij}(t)$, the change between times t and t+n, is the sum in equation (7) of the pheromone increments deposited on path (i, j) by ants 1 through m at time t.
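A minimal sketch of the pheromone update of equation (6), with the per-ant deposits summed as in equation (7); the value of ρ and the deposit amounts are illustrative, not from the patent:

```python
def update_pheromone(tau, deposits, rho=0.5):
    """Equation (6): tau(t+n) = (1 - rho) * tau(t) + delta_tau(t),
    where delta_tau(t) is the sum of per-ant deposits on the same
    edge (equation (7)). rho is the evaporation factor."""
    return (1.0 - rho) * tau + sum(deposits)

# Three ants deposit pheromone on edge (i, j) during one cycle
tau_next = update_pheromone(2.0, [0.3, 0.1, 0.2])
```

Evaporation (the 1−ρ factor) gradually forgets old, possibly suboptimal paths, while the deposit sum reinforces edges that many ants currently favor.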
The software rescans the face information in the current field of view at a fixed interval (e.g. every 2 min), or immediately when the "identify face information in range" button is clicked, to start a new round of adapting to the current audience's viewing angle.
The foregoing is merely a preferred embodiment of the present invention, and it should be noted that modifications and variations could be made by those skilled in the art without departing from the technical principles of the present invention, and such modifications and variations should also be regarded as being within the scope of the invention.

Claims (5)

1. A method for dynamically adapting a display's viewing angle based on face recognition, characterized by comprising the following steps:
step SS1: entering the key person's face information through a device built into the display;
step SS2: identifying all face information within the camera's field of view;
step SS3: comparing the entered face information with all face information in the field of view to determine whether the target key person is in front of the display;
step SS4: if step SS3 finds the key person, dynamically adapting the display to that person; if not, rotating the display so that its limited field of view covers as many faces as possible;
the step SS1 specifically includes: extracting and storing facial information of a person by adopting a Softmax loss function and a discriminant face recognition algorithm to finish the facial information input of a key person;
the step SS1 further includes: the Softmax loss function is expressed as:
wherein: m is the number of samples input per training; n is the number of categories; x is x i Feature vectors for the ith sample; y is i Marking for the corresponding category; w and b are respectively a weight matrix and a bias vector of the last full connection layer; w (W) j A weight matrix of the j-th class; b j Is a corresponding bias term;
the step SS1 further includes: to eliminate the larger intra-class variation generated by the Softmax loss function, the intra-class becomes more compact, the features are more discriminant, and the intra-class cosine similarity loss function is adopted and expressed as:
in the method, in the process of the invention,the included angle between the characteristic vector of the ith sample and the corresponding category weight vector is set;
the step SS1 further includes: to facilitate forward and backward propagation, equation (2) is converted into:
wherein:
equation (3) effectively describes the intra-class variation,for the actual loss layer input, let +.>Only calculation is needed in the forward propagation process:
during backward propagation, L c3 For z i The gradient of (2) is
the step SS1 further includes: so that the learned features are discriminative, training is carried out under the joint supervision of the Softmax loss function and the intra-class cosine similarity loss function, giving the discriminative face recognition objective:

$$L=L_s+\lambda L_{c3}\qquad(5)$$

where λ is a scalar that balances the two loss functions; the key person's face information is entered according to this Softmax loss function and discriminative face recognition algorithm.
2. The method for dynamically adapting a display's viewing angle based on face recognition according to claim 1, wherein the step SS3 specifically includes: comparing all face information in the field of view already entered into the database with the entered key-person face information one by one, to determine whether the target key person is in front of the current display; if the target key person is present, sending an instruction to the display's rotating mechanism and rotating the screen left or right until the key person lies on the screen's central axis.
3. The method for dynamically adapting a display's viewing angle based on face recognition according to claim 2, wherein the step SS3 further includes: the display camera takes the central axis perpendicular to the display as its reference, and when there is an angle a between the key person and the central axis, the display is rotated dynamically until the key person returns to the central axis.
4. The method for dynamically adapting a display's viewing angle based on face recognition according to claim 3, wherein the step SS3 further includes: with the display camera's viewing angle at 30 degrees and the viewing distance at 3 m, when no key person is in the display's field of view, starting an ant colony algorithm module, which iterates on the number of people contained in the field of view.
5. The method for dynamically adapting a display's viewing angle based on face recognition according to claim 4, wherein the ant colony algorithm specifically includes: after each ant takes one step, or completes a traversal of all n nodes (i.e. at the end of one cycle), the residual pheromone is updated;
the pheromone on path (i, j) at time t+n, whose optimum corresponds to the largest number of people held in the field of view, can be expressed as:

$$\tau_{ij}(t+n)=(1-\rho)\,\tau_{ij}(t)+\Delta\tau_{ij}(t)\qquad(6)$$

$$\Delta\tau_{ij}(t)=\sum_{k=1}^{m}\Delta\tau_{ij}^{k}(t)\qquad(7)$$

the algorithm iterates using equations (6) and (7); when the iteration converges, the display's final angle is determined, and the optimal iteration result is the one that holds the largest number of people in the field of view; ρ is the pheromone evaporation factor and 1−ρ the residual factor; m is the total number of ants; $\tau_{ij}(t+n)$ is the amount of pheromone on path (i, j) at time t+n; $\tau_{ij}(t)$ is the amount at time t; and $\Delta\tau_{ij}(t)$, the change between times t and t+n, is the sum in equation (7) of the pheromone increments deposited on path (i, j) by ants 1 through m at time t.
CN202010010053.7A 2020-01-06 2020-01-06 Method for dynamically adapting display to visual angle based on face recognition Active CN111209859B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010010053.7A CN111209859B (en) 2020-01-06 2020-01-06 Method for dynamically adapting display to visual angle based on face recognition

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010010053.7A CN111209859B (en) 2020-01-06 2020-01-06 Method for dynamically adapting display to visual angle based on face recognition

Publications (2)

Publication Number Publication Date
CN111209859A 2020-05-29
CN111209859B (granted) 2023-09-19

Family

ID=70785624

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010010053.7A Active CN111209859B (en) 2020-01-06 2020-01-06 Method for dynamically adapting display to visual angle based on face recognition

Country Status (1)

Country Link
CN (1) CN111209859B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112257671A (en) * 2020-11-16 2021-01-22 深圳市巨烽显示科技有限公司 Display device and personalized display effect adjusting method thereof

Citations (2)

Publication number Priority date Publication date Assignee Title
CN102033549A (en) * 2009-09-30 2011-04-27 三星电子(中国)研发中心 Viewing angle adjusting device of display device
CN105912960A (en) * 2016-03-24 2016-08-31 北京橙鑫数据科技有限公司 Anti-peeping method based on terminal and terminal

Family Cites Families (1)

Publication number Priority date Publication date Assignee Title
US9001183B2 (en) * 2012-06-15 2015-04-07 Cisco Technology, Inc. Adaptive switching of views for a video conference that involves a presentation apparatus

Patent Citations (2)

Publication number Priority date Publication date Assignee Title
CN102033549A (en) * 2009-09-30 2011-04-27 三星电子(中国)研发中心 Viewing angle adjusting device of display device
CN105912960A (en) * 2016-03-24 2016-08-31 北京橙鑫数据科技有限公司 Anti-peeping method based on terminal and terminal

Also Published As

Publication number Publication date
CN111209859A (en) 2020-05-29


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant