CN112633215A - Embedded image acquisition device for recognizing behavior and emotion of children - Google Patents
- Publication number
- CN112633215A (application CN202011604770.9A)
- Authority
- CN
- China
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/161—Detection; Localisation; Normalisation
- G06V40/166—Detection; Localisation; Normalisation using acquisition arrangements
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/22—Matching criteria, e.g. proximity measures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/14—Digital output to display device ; Cooperation and interconnection of the display device with other functional units
- G06F3/1423—Digital output to display device ; Cooperation and interconnection of the display device with other functional units controlling a plurality of local displays, e.g. CRT and flat panel display
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/168—Feature extraction; Face representation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/174—Facial expression recognition
Abstract
The invention discloses an embedded image acquisition device for recognizing the behavior and emotion of children, comprising a device main body, a base and a master control box, the base being connected with the device main body through adjusting columns. A first display screen and a second display screen are embedded in the front end of the device main body, the second display screen is arranged below the first display screen, and a camera is arranged at the top end of the device main body. Two motors are arranged inside the base, and the brake shafts of the motors are fixedly connected with the bottom ends of the adjusting columns. Two sleeves for connecting the adjusting columns are arranged in the device main body, and threads are formed on the inner walls of the sleeves and the outer walls of the adjusting columns. Embedding grooves for fitting into a preset mounting position are formed in both sides of the device main body. The invention acquires images more conveniently and recognizes emotion more accurately.
Description
Technical Field
The invention relates to the field of image acquisition equipment, in particular to an embedded image acquisition device for recognizing behavior and emotion of children.
Background
Autism, also known as autistic disorder, is the representative condition among the pervasive developmental disorders (PDD). DSM-IV-TR classifies PDD into five types: autistic disorder, Rett's disorder, childhood disintegrative disorder, Asperger's syndrome, and PDD not otherwise specified. Among them, autistic disorder and Asperger's syndrome are the most common. Reported prevalence rates of autism vary. Because autistic children show only small changes in facial expression, emotion analysis is needed, and an image acquisition device must be used to collect facial image information for that analysis.
Existing image acquisition devices have a fixed height, which makes image acquisition inconvenient for shorter children, and they offer only a single mounting method, so their emotion recognition performance is unsatisfactory. These limitations hinder the use of such devices, and the present embedded image acquisition device for recognizing children's behavior and emotion is therefore provided.
Disclosure of Invention
The technical problem to be solved by the invention is as follows: existing image acquisition devices have a fixed height, making image acquisition inconvenient for shorter children; they offer only a single mounting method; and their emotion recognition performance is unsatisfactory, which hinders their use. To solve this problem, the invention provides an embedded image acquisition device for recognizing children's behavior and emotion.
The invention solves the above technical problem through the following technical scheme: the device comprises a device main body, a base connected with the device main body through adjusting columns, and a master control box fixedly installed at the top end inside the device main body;
a first display screen and a second display screen are embedded in the front end of the device main body, the second display screen is arranged below the first display screen, and a camera is arranged at the top end of the device main body;
two motors are arranged inside the base, and brake shafts of the motors are fixedly connected with the bottom ends of the adjusting columns;
two sleeves for connecting the adjusting columns are arranged in the device main body, and threads are arranged on the inner walls of the sleeves and the outer walls of the adjusting columns;
embedding grooves for fitting into a preset mounting position are formed in both sides of the device main body;
the master control box comprises a data receiving module, a data processing module, a preset expression library, a data comparison module, a master control module and an instruction sending module;
the data receiving module is used for receiving the child face image information acquired by the camera and sends it to the data processing module; the data processing module processes the received child face image information into face image comparison features and sends these features to the data comparison module;
preset coefficient information of child expressions is prestored in the preset expression library and comprises a calmness coefficient, a happiness coefficient and a sadness coefficient;
the data comparison module extracts the obtained real-time face image comparison features and compares and matches them against the preset coefficient information of child expressions prestored in the preset expression library to obtain a matching result; after the matching result is generated, the master control module generates result display information, which the instruction sending module converts into a result display instruction and sends to the second display screen; when the camera starts to collect data, the master control module generates movie playing information, which is converted into a movie playing instruction and sent to the first display screen;
the data processing module processes the real-time child face image information and, when no face information is found, generates lifting information, which the instruction sending module converts into a lifting instruction and sends to the motors.
Preferably, the bottom end of the device main body is in threaded connection with a plurality of hooks.
Preferably, the real-time face image comparison features are obtained as follows:
step one: from the acquired child face image information, extract the clearest image as the reference image;
step two: mark feature points: mark the two outer mouth corners in the face image as points A1 and A2, and mark the midpoint of the lower lip as point A3;
step three: draw an arc L1 through the three feature points A1, A2 and A3;
step four: mark the two corners of the eye on one side as points B1 and B2, mark the midpoint of that eye as point B3, and draw an arc L2 through the three feature points B1, B2 and B3;
step five: mark the two corners of the eye on the other side as points C1 and C2, mark the midpoint of that eye as point C3, and draw an arc L3 through the three feature points C1, C2 and C3;
step six: the obtained arcs L1, L2 and L3 constitute the real-time face image comparison features.
Preferably, the specific comparison process of the data comparison module is as follows:
step one: extract the real-time face image comparison features and compare them with the preset coefficient information of child expressions prestored in the preset expression library;
step two: compare the real-time face image comparison features with the calmness coefficient, and generate a calmness evaluation result when the similarity exceeds a preset value;
step three: compare the real-time face image comparison features with the happiness coefficient, and generate a happiness evaluation result when the similarity exceeds the preset value;
step four: compare the real-time face image comparison features with the sadness coefficient, and generate a sadness evaluation result when the similarity exceeds the preset value;
step five: when any one of the calmness, happiness and sadness evaluation results is generated, display that evaluation result on the second display screen.
Preferably, when the device is embedded into the preset position through the embedding grooves, the lifting information is no longer generated.
Compared with the prior art, the invention has the following advantages: this embedded image acquisition device for recognizing children's behavior and emotion provides different mounting methods to satisfy different installation requirements, and its acquisition height can be adjusted to match children of different heights, effectively preventing the situation in which a shorter child's image information cannot be collected. Meanwhile, improved face image comparison features are compared with the preset coefficient information of child expressions prestored in the preset expression library, so that children's emotion information is obtained more accurately, making the device well worth popularizing.
Drawings
FIG. 1 is an overall block diagram of the present invention;
FIG. 2 is an internal view of the device body of the present invention;
FIG. 3 is a block diagram of the general control box structure of the present invention.
In the figures: 1. device main body; 2. base; 3. adjusting column; 4. first display screen; 5. second display screen; 6. master control box; 7. camera; 8. sleeve; 9. hook; 10. motor; 11. embedding groove.
Detailed Description
The following examples are given for the detailed implementation and specific operation of the present invention, but the scope of the present invention is not limited to the following examples.
As shown in fig. 1 to 3, the present embodiment provides a technical solution: an embedded image acquisition device for recognizing the behavior and emotion of children, comprising a device main body 1, a base 2 connected with the device main body 1 through adjusting columns 3, and a master control box 6 fixedly arranged at the top end inside the device main body 1;
the master control box 6 is used for controlling the device main body 1 to operate to collect face image information of the autistic children;
a first display screen 4 and a second display screen 5 are embedded in the front end of the device main body 1, the second display screen 5 is arranged below the first display screen 4, and a camera 7 is arranged at the top end of the device main body 1;
the first display screen 4 is used for playing movie content that attracts children's attention, so that the child looks toward the device and the camera 7 can acquire clearer face image information of the autistic child;
two motors 10 are arranged inside the base 2, and the braking shafts of the motors 10 are fixedly connected with the bottom ends of the adjusting columns 3;
two sleeves 8 for connecting the adjusting columns 3 are arranged inside the device main body 1, and threads are arranged on the inner walls of the sleeves 8 and the outer walls of the adjusting columns 3;
the two motors 10 run synchronously to drive the two adjusting columns 3 to rotate, changing the length of the adjusting columns 3 inside the sleeves 8 and thereby adjusting the height of the device;
embedding grooves 11 for fitting into a preset mounting position are provided in both sides of the device main body 1;
when the device needs to be embedded into a wall at the installation position, the base 2 and the adjusting columns 3 are detached, and the slide rails preset in the installation recess are fitted into the embedding grooves 11 to embed the device in the wall;
the master control box 6 comprises a data receiving module, a data processing module, a preset expression library, a data comparison module, a master control module and an instruction sending module;
the data receiving module is used for receiving the child face image information acquired by the camera 7 and sends it to the data processing module; the data processing module processes the received child face image information into face image comparison features and sends these features to the data comparison module;
preset coefficient information of child expressions is prestored in the preset expression library and comprises a calmness coefficient, a happiness coefficient and a sadness coefficient;
the data comparison module extracts the obtained real-time face image comparison features and compares and matches them against the preset coefficient information of child expressions prestored in the preset expression library to obtain a matching result; after the matching result is generated, the master control module generates result display information, which the instruction sending module converts into a result display instruction and sends to the second display screen 5; when the camera 7 starts to collect data, the master control module generates movie playing information, which is converted into a movie playing instruction and sent to the first display screen 4;
the data processing module processes the real-time child face image information and, when no face information is found, generates lifting information, which the instruction sending module converts into a lifting instruction and sends to the motors 10.
The bottom end of the device main body 1 is in threaded connection with a plurality of hooks 9, which are used for hanging objects that attract children's attention.
The real-time face image comparison features are obtained as follows:
step one: from the acquired child face image information, extract the clearest image as the reference image;
step two: mark feature points: mark the two outer mouth corners in the face image as points A1 and A2, and mark the midpoint of the lower lip as point A3;
step three: draw an arc L1 through the three feature points A1, A2 and A3;
step four: mark the two corners of the eye on one side as points B1 and B2, mark the midpoint of that eye as point B3, and draw an arc L2 through the three feature points B1, B2 and B3;
step five: mark the two corners of the eye on the other side as points C1 and C2, mark the midpoint of that eye as point C3, and draw an arc L3 through the three feature points C1, C2 and C3;
step six: the obtained arcs L1, L2 and L3 constitute the real-time face image comparison features.
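The six steps above can be sketched as follows. This is a minimal illustration, not the patent's implementation: the patent does not specify how an arc is represented, so this sketch fits the unique circle through each triple of landmarks and uses its signed curvature as a compact stand-in for arcs L1, L2 and L3; the function names and the curvature encoding are assumptions.

```python
import math

def arc_through_points(p1, p2, p3):
    """Fit the circle passing through three landmarks (e.g. the two outer
    mouth corners and the lower-lip midpoint) and return its signed
    curvature. Sign encodes bend direction (upturned vs downturned arc);
    zero means the three points are collinear (a flat mouth or eye line)."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    # Twice the signed area of the triangle; zero when collinear.
    d = 2 * (x1 * (y2 - y3) + x2 * (y3 - y1) + x3 * (y1 - y2))
    if abs(d) < 1e-9:
        return 0.0
    # Standard circumcenter formula for a circle through three points.
    ux = ((x1**2 + y1**2) * (y2 - y3) + (x2**2 + y2**2) * (y3 - y1)
          + (x3**2 + y3**2) * (y1 - y2)) / d
    uy = ((x1**2 + y1**2) * (x3 - x2) + (x2**2 + y2**2) * (x1 - x3)
          + (x3**2 + y3**2) * (x2 - x1)) / d
    radius = math.hypot(x1 - ux, y1 - uy)
    return math.copysign(1.0 / radius, d)

def face_features(mouth, left_eye, right_eye):
    """Steps two to six: one curvature value per arc L1, L2, L3.
    Each argument is a triple of (x, y) landmark coordinates."""
    return [arc_through_points(*mouth),
            arc_through_points(*left_eye),
            arc_through_points(*right_eye)]
```

For example, a mouth with corners at (-1, 0) and (1, 0) and lower-lip midpoint at (0, -1) lies on the unit circle, so its curvature magnitude is 1, while three collinear points yield 0.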
The specific comparison process of the data comparison module is as follows:
step one: extract the real-time face image comparison features and compare them with the preset coefficient information of child expressions prestored in the preset expression library;
step two: compare the real-time face image comparison features with the calmness coefficient, and generate a calmness evaluation result when the similarity exceeds a preset value;
step three: compare the real-time face image comparison features with the happiness coefficient, and generate a happiness evaluation result when the similarity exceeds the preset value;
step four: compare the real-time face image comparison features with the sadness coefficient, and generate a sadness evaluation result when the similarity exceeds the preset value;
step five: when any one of the calmness, happiness and sadness evaluation results is generated, display that evaluation result on the second display screen.
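The comparison steps above can be sketched in Python. The patent only requires that "the similarity exceeds a preset value", without defining the similarity measure, so the inverse-distance similarity, the 0.8 threshold and the label names below are illustrative assumptions, not the claimed method.

```python
def classify_expression(features, library, threshold=0.8):
    """Steps one to five: compare real-time comparison features against the
    preset calmness/happiness/sadness coefficients and return the label of
    the best match whose similarity exceeds the preset value, else None
    (no evaluation result is generated)."""
    def similarity(a, b):
        # Assumed measure: 1.0 when identical, approaching 0 as arcs diverge.
        dist = sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
        return 1.0 / (1.0 + dist)

    # Steps two, three, four: one comparison per preset coefficient.
    results = {label: similarity(features, library[label])
               for label in ("calm", "happy", "sad")}
    best = max(results, key=results.get)
    # Step five: an evaluation result is generated only above the threshold.
    return best if results[best] >= threshold else None
```

A caller would then forward the returned label to the second display screen; `None` signals that no expression matched closely enough.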
When the device is embedded into the preset position through the embedding grooves 11, the lifting information is no longer generated.
In summary, when the device is used, after it has been installed and fixed, an autistic child is brought in front of it. The first display screen 4 plays movie content capable of attracting the child's attention so that the child looks toward the device, and the camera 7 collects the child's face image information. The data receiving module receives the child face image information collected by the camera 7 and sends it to the data processing module, which processes it into face image comparison features and sends them to the data comparison module. Preset coefficient information of child expressions, comprising a calmness coefficient, a happiness coefficient and a sadness coefficient, is prestored in the preset expression library. The data comparison module compares and matches the obtained real-time face image comparison features against this preset coefficient information to obtain a matching result. After the matching result is generated, the master control module generates result display information, which the instruction sending module converts into a result display instruction and sends to the second display screen 5. When the camera 7 starts to collect data, the master control module generates movie playing information, which is converted into a movie playing instruction and sent to the first display screen 4. When the data processing module processes the real-time child face image information and finds no face information, it generates lifting information, which the instruction sending module converts into a lifting instruction and sends to the motors 10; the two motors 10 run synchronously to drive the two adjusting columns 3 to rotate, changing the length of the adjusting columns 3 inside the sleeves 8 and thereby adjusting the height of the device.
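The lifting decision described in the summary can be condensed into a small control function. This is an assumed sketch: the instruction names `"LIFT"` and `"HOLD"` and the boolean inputs are illustrative, not part of the disclosure.

```python
def control_step(face_found, embedded_in_wall):
    """One decision of the data processing module: when no face is found in
    the real-time image, lifting information is generated and converted into
    a lifting instruction for the two synchronized motors, unless the device
    is wall-mounted via the embedding grooves, in which case no lifting
    information is generated."""
    if embedded_in_wall:
        return None          # wall-mounted: lifting is never commanded
    if not face_found:
        return "LIFT"        # raise/lower until a face enters the frame
    return "HOLD"            # face acquired: keep the current height
```

Run once per processed frame, this reproduces the behavior that height adjustment stops as soon as the child's face is detected.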
Furthermore, the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include at least one such feature. In the description of the present invention, "a plurality" means at least two, e.g., two, three, etc., unless specifically limited otherwise.
In the description herein, references to the description of the term "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," etc., mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. In this specification, the schematic representations of the terms used above are not necessarily intended to refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, various embodiments or examples and features of different embodiments or examples described in this specification can be combined and combined by one skilled in the art without contradiction.
Although embodiments of the present invention have been shown and described above, it is understood that the above embodiments are exemplary and should not be construed as limiting the present invention, and that variations, modifications, substitutions and alterations can be made to the above embodiments by those of ordinary skill in the art within the scope of the present invention.
Claims (5)
1. An embedded image acquisition device for recognizing behavior and emotion of children, characterized by comprising a device main body (1), a base (2) connected with the device main body (1) through adjusting columns (3), and a master control box (6) fixedly installed at the top end inside the device main body (1);
a first display screen (4) and a second display screen (5) are embedded in the front end of the device main body (1), the second display screen (5) is arranged below the first display screen (4), and a camera (7) is arranged at the top end of the device main body (1);
two motors (10) are arranged inside the base (2), and the brake shafts of the motors (10) are fixedly connected with the bottom ends of the adjusting columns (3);
two sleeves (8) for connecting the adjusting columns (3) are arranged inside the device main body (1), and threads are formed on the inner walls of the sleeves (8) and the outer walls of the adjusting columns (3);
embedding grooves (11) for fitting into a preset mounting position are formed in both sides of the device main body (1);
the master control box (6) comprises a data receiving module, a data processing module, a preset expression library, a data comparison module, a master control module and an instruction sending module;
the data receiving module is used for receiving the child face image information collected by the camera (7) and sends it to the data processing module; the data processing module processes the received child face image information into face image comparison features and sends these features to the data comparison module;
preset coefficient information of child expressions is prestored in the preset expression library and comprises a calmness coefficient, a happiness coefficient and a sadness coefficient;
the data comparison module extracts the obtained real-time face image comparison features and compares and matches them against the preset coefficient information of child expressions prestored in the preset expression library to obtain a matching result; after the matching result is generated, the master control module generates result display information, which the instruction sending module converts into a result display instruction and sends to the second display screen (5); when the camera (7) starts to collect data, the master control module generates movie playing information, which is converted into a movie playing instruction and sent to the first display screen (4);
the data processing module processes the real-time child face image information and, when no face information is found, generates lifting information, which the instruction sending module converts into a lifting instruction and sends to the motors (10).
2. The embedded image acquisition device for recognizing behavior and emotion of children as claimed in claim 1, wherein: the bottom end of the device main body (1) is in threaded connection with a plurality of hooks (9).
3. The embedded image acquisition device for recognizing behavior and emotion of children as claimed in claim 1, wherein the real-time face image comparison features are obtained as follows:
step one: from the acquired child face image information, extract the clearest image as the reference image;
step two: mark feature points: mark the two outer mouth corners in the face image as points A1 and A2, and mark the midpoint of the lower lip as point A3;
step three: draw an arc L1 through the three feature points A1, A2 and A3;
step four: mark the two corners of the eye on one side as points B1 and B2, mark the midpoint of that eye as point B3, and draw an arc L2 through the three feature points B1, B2 and B3;
step five: mark the two corners of the eye on the other side as points C1 and C2, mark the midpoint of that eye as point C3, and draw an arc L3 through the three feature points C1, C2 and C3;
step six: the obtained arcs L1, L2 and L3 constitute the real-time face image comparison features.
4. The embedded image capturing device for emotion recognition of children's behavior as claimed in claim 1, wherein: the specific comparison process of the data comparison module is as follows:
the method comprises the following steps: extracting the real-time facial image comparison features, and comparing the extracted real-time facial image comparison features with preset coefficient information of the child expression prestored in a preset expression library;
step two: comparing the real-time face image comparison features with the calmness coefficient, and generating a calmness evaluation result when the similarity exceeds a preset value;
step three: comparing the real-time face image comparison features with the happiness coefficient, and generating a happiness evaluation result when the similarity exceeds the preset value;
step four: comparing the real-time face image comparison features with the sadness coefficient, and generating a sadness evaluation result when the similarity exceeds the preset value;
step five: when any one of the calmness evaluation result, the happiness evaluation result and the sadness evaluation result is generated, displaying that result on the second display screen.
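The threshold comparison of steps two to four can be sketched as follows. The similarity measure, the preset coefficient values and the names `similarity`/`evaluate` are assumptions for illustration only; the claim does not specify how similarity is computed:

```python
def similarity(features, preset):
    # Illustrative similarity: inverse of the Euclidean distance
    # between arc feature vectors (one value each for L1, L2, L3).
    dist = sum((f - p) ** 2 for f, p in zip(features, preset)) ** 0.5
    return 1.0 / (1.0 + dist)

def evaluate(features, expression_library, threshold=0.8):
    """Compare real-time comparison features against each preset
    expression coefficient; return the first evaluation result whose
    similarity exceeds the preset value, else None."""
    for emotion, preset in expression_library.items():
        if similarity(features, preset) > threshold:
            return emotion  # calmness / happiness / sadness result
    return None

# Hypothetical preset expression library (arc radii for L1, L2, L3)
library = {"calmness": (1.0, 0.5, 0.5),
           "happiness": (0.7, 0.4, 0.4),
           "sadness": (1.4, 0.6, 0.6)}
print(evaluate((1.02, 0.5, 0.49), library))  # -> calmness
```

Returning `None` when no coefficient matches corresponds to the case where no evaluation result is generated and nothing is sent to the second display screen.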
5. The embedded image acquisition device for recognizing behavior and emotion of children as claimed in claim 1, wherein when the device is embedded into a preset position through the embedding groove (11), lifting information is no longer generated.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011604770.9A CN112633215A (en) | 2020-12-29 | 2020-12-29 | Embedded image acquisition device for recognizing behavior and emotion of children |
Publications (1)
Publication Number | Publication Date |
---|---|
CN112633215A true CN112633215A (en) | 2021-04-09 |
Family
ID=75286416
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202011604770.9A Pending CN112633215A (en) | 2020-12-29 | 2020-12-29 | Embedded image acquisition device for recognizing behavior and emotion of children |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112633215A (en) |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20170188928A1 (en) * | 2015-12-24 | 2017-07-06 | Cagri Tanriover | Image-based mental state determination |
CN107729882A (en) * | 2017-11-19 | 2018-02-23 | 济源维恩科技开发有限公司 | Emotion identification decision method based on image recognition |
CN207409003U (en) * | 2017-10-31 | 2018-05-25 | 潍坊医学院 | A kind of human resources attendance recorder with health measuring function |
CN207785161U (en) * | 2017-05-10 | 2018-08-31 | 北京同方神火联合科技发展有限公司 | Expression analysis system |
WO2019184299A1 (en) * | 2018-03-28 | 2019-10-03 | 深圳创维-Rgb电子有限公司 | Microexpression recognition-based film and television scoring method, storage medium, and intelligent terminal |
CN209705621U (en) * | 2018-12-12 | 2019-11-29 | 广州悦派信息科技有限公司 | A kind of intelligent mood sensing device identifying face mood |
CN110889908A (en) * | 2019-12-10 | 2020-03-17 | 吴仁超 | Intelligent sign-in system integrating face recognition and data analysis |
WO2020224126A1 (en) * | 2019-05-06 | 2020-11-12 | 平安科技(深圳)有限公司 | Facial recognition-based adaptive adjustment method, system and readable storage medium |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP5841538B2 (en) | Interest level estimation device and interest level estimation method | |
CN101269635B (en) | Field watch apparatus | |
Werner et al. | Towards pain monitoring: Facial expression, head pose, a new database, an automatic system and remaining challenges | |
JP6521845B2 (en) | Device and method for measuring periodic fluctuation linked to heart beat | |
US10489679B2 (en) | Visualizing and updating long-term memory percepts in a video surveillance system | |
CN111344715A (en) | Object recognition system and method | |
CN109528217A (en) | A kind of mood detection and method for early warning based on physiological vibrations analysis | |
CN109389085B (en) | Lip language recognition model training method and device based on parameterized curve | |
CN115845350B (en) | Method and system for automatic ranging of standing long jump | |
Hakim et al. | Implementation of an image processing based smart parking system using Haar-Cascade method | |
CN108921072A (en) | A kind of the people flow rate statistical method, apparatus and system of view-based access control model sensor | |
CN112633215A (en) | Embedded image acquisition device for recognizing behavior and emotion of children | |
CN113610077A (en) | System method and equipment for monitoring and analyzing dissolution behavior by using artificial intelligence image recognition technology | |
KR101513414B1 (en) | Method and system for analyzing surveillance image | |
KR101795723B1 (en) | Recognition of basic emotion in facial expression using implicit synchronization of facial micro-movements | |
WO2011108183A1 (en) | Image processing device, content delivery system, image processing method, and program | |
Shinohara et al. | Estimation of facial expression intensity for lifelog videos retrieval | |
CN115410261A (en) | Face recognition heterogeneous data association analysis system | |
KR101736403B1 (en) | Recognition of basic emotion in facial expression using implicit synchronization of facial micro-movements | |
Pantic et al. | Facial gesture recognition in face image sequences: A study on facial gestures typical for speech articulation | |
CN106998464B (en) | Detect the method and device of thorn-like noise in video image | |
CN114445914A (en) | Millimeter wave data automatic labeling method and system based on video | |
Hong et al. | Micro-expression spotting: A benchmark | |
CN211180762U (en) | Electroencephalogram-based VR image emotion classification and intensity recognition system | |
CN107577995A (en) | The processing method and processing device of view data |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication |
Application publication date: 20210409 |