CN111839552A - Intelligent physical and mental state recognizer based on 5G + AIoT - Google Patents
- Publication number: CN111839552A
- Application number: CN202010725971.8A
- Authority
- CN
- China
- Prior art keywords
- user
- information
- terminal
- user terminal
- acquisition module
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Withdrawn
Classifications
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/16—Devices for psychotechnics; Testing reaction times; Devices for evaluating the psychological state
- A61B5/163—Devices for psychotechnics; Testing reaction times; Devices for evaluating the psychological state by tracking eye movement, gaze, or pupil change
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H50/00—ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
- G16H50/30—ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for calculating health indices; for individual health risk assessment
Landscapes
- Health & Medical Sciences (AREA)
- Engineering & Computer Science (AREA)
- Life Sciences & Earth Sciences (AREA)
- Public Health (AREA)
- Medical Informatics (AREA)
- General Health & Medical Sciences (AREA)
- Biomedical Technology (AREA)
- Pathology (AREA)
- Psychiatry (AREA)
- Surgery (AREA)
- Social Psychology (AREA)
- Biophysics (AREA)
- Psychology (AREA)
- Hospice & Palliative Care (AREA)
- Heart & Thoracic Surgery (AREA)
- Educational Technology (AREA)
- Molecular Biology (AREA)
- Physics & Mathematics (AREA)
- Animal Behavior & Ethology (AREA)
- Developmental Disabilities (AREA)
- Child & Adolescent Psychology (AREA)
- Veterinary Medicine (AREA)
- Data Mining & Analysis (AREA)
- Databases & Information Systems (AREA)
- Epidemiology (AREA)
- Primary Health Care (AREA)
- Medical Treatment And Welfare Office Work (AREA)
Abstract
The invention relates to the technical field of elderly care services and provides an intelligent physical and mental state recognizer based on 5G + AIoT. The recognizer comprises a user information acquisition module, a user terminal and an AI platform; an emotion recognition database is built into the AI platform, and the user terminal is communicatively connected with a monitoring terminal through a server. The invention also discloses a working method of the intelligent physical and mental state recognizer based on 5G + AIoT. The recognizer and its working method can be used for health monitoring of the elderly, making it convenient for a guardian to learn the health state of the person under guardianship in a timely manner.
Description
Technical Field
The invention relates to the technical field of elderly care services, and in particular to an intelligent physical and mental state recognizer based on 5G + AIoT.
Background
As the aging of society intensifies, the needs of the elderly for daily care, medical and health care, and cultural and mental wellbeing are becoming increasingly prominent. In addition, as the pressures of daily life grow, it is increasingly common for adult children to work away from home, and the number of empty-nest elderly keeps rising. The empty-nest phenomenon has become a social problem that cannot be ignored, and ensuring a healthy life for the empty-nest elderly is a problem in urgent need of a solution.
At present, the emotional and mental health state of the empty-nest elderly is difficult to assess reliably, so guardians cannot give it the attention it deserves, which is detrimental to the healthy life of the empty-nest elderly. In the prior art, the mental condition of an elderly person is usually confirmed through a diagnostic conversation with a professional doctor; however, the elderly easily develop a resistant attitude toward such conversations, which in turn affects the reliability of the result.
Disclosure of Invention
The invention aims to solve the above technical problems at least to some extent, and provides an intelligent physical and mental state recognizer based on 5G + AIoT.
The technical scheme adopted by the invention is as follows:
An intelligent physical and mental state recognizer based on 5G + AIoT comprises a user information acquisition module, a user terminal and an AI platform, wherein an emotion recognition database is built into the AI platform, and the user terminal is communicatively connected with a monitoring terminal through a server;
the user information acquisition module is used for acquiring user information and sending the user information to the user terminal;
the user terminal is used for receiving the user information sent by the user information acquisition module and sending the received user information to the AI platform; the user terminal is also used for receiving and processing the analysis result fed back by the AI platform and then sending the analysis result to the monitoring terminal through the server;
the AI platform is used for receiving the user information sent by the user terminal, analyzing the received user information and the characteristic information in the emotion recognition database, and feeding back an analysis result to the user terminal;
and the monitoring terminal is used for receiving the analysis result sent by the server.
Preferably, the user terminal is also connected with a management terminal through a server in a communication way;
and the management terminal is used for receiving the analysis result sent by the server.
Preferably, the analysis result comprises physical and mental health information of the user, and the physical and mental health information of the user indicates tiredness, fear, depression, oversensitivity, inferiority, suspicion, excitement or a good state.
Further preferably, the emotion recognition database includes a facial feature database; the user information acquisition module comprises an image acquisition module;
the image acquisition module is used for acquiring the facial image information of the user and sending the facial image information of the user to the user terminal;
the user terminal is used for receiving the user face image information sent by the image acquisition module and sending the received user face image information to the AI platform;
the AI platform is used for receiving the user facial image information sent by the user terminal, performing emotion analysis on the received facial image information against the facial feature information in the facial feature database to obtain the physical and mental health information of the user, and feeding back the physical and mental health information of the user to the user terminal.
Further preferably, the facial features in the facial feature database include mouth arc, pupil state, eyebrow curvature, cheek arc, and/or wrinkle state.
Preferably, the emotion recognition database comprises an audio feature database; the user information acquisition module comprises an audio acquisition module;
the audio acquisition module is used for acquiring user audio information and sending the user audio information to the user terminal;
the user terminal is used for receiving the user audio information sent by the audio acquisition module and sending the received user audio information to the AI platform;
the AI platform is used for receiving the user audio information sent by the user terminal, then carrying out emotion analysis on the received user audio information and the audio characteristic information in the audio characteristic database to obtain the physical and mental health information of the user, and then feeding back the physical and mental health information of the user to the user terminal.
Preferably, the emotion recognition database comprises a video feature database; the user information acquisition module comprises a video acquisition module;
the video acquisition module is used for acquiring user video information and sending the user video information to the user terminal;
the user terminal is used for receiving the user video information sent by the video acquisition module and sending the received user video information to the AI platform;
the AI platform is used for receiving the user video information sent by the user terminal, then performing emotion analysis on the received user video information and the video characteristic information in the video characteristic database to obtain the physical and mental health information of the user, and feeding back the physical and mental health information of the user to the user terminal.
Further preferably, the user terminal, the monitoring terminal and the management terminal are all realized by 5G smart phones and communicate wirelessly with one another through 5G transceiver modules; the user terminal and the user information acquisition module communicate wirelessly through a Bluetooth transceiver module or a WIFI transceiver module.
Preferably, the user information acquisition module is triggered by the user terminal and/or the monitoring terminal.
The working method of the intelligent physical and mental state recognizer based on 5G + AIoT comprises the following steps:
the user information acquisition module judges whether a health monitoring request exists in real time, and if yes, the user information acquisition module is triggered;
the user information acquisition module acquires user information and sends the user information to the user terminal;
the user terminal receives the user information sent by the user information acquisition module and sends the received user information to the AI platform;
the AI platform receives the user information sent by the user terminal, analyzes the received user information and the characteristic information in the emotion recognition database, and feeds back an analysis result to the user terminal;
the user terminal receives and processes the analysis result fed back by the AI platform, and then sends the analysis result to the monitoring terminal through the server;
the monitoring terminal receives the analysis result sent by the server.
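As a concrete illustration, the working-method steps above can be sketched as a minimal message flow. This is a hypothetical Python sketch; every class name, the emotion-database contents and the acquired feature string are invented for illustration and are not part of the patent:

```python
# Illustrative database mapping an acquired feature to an emotion label.
EMOTION_DATABASE = {"smiling_mouth_up": "good", "frowning_brow_down": "depression"}

class UserInfoAcquisitionModule:
    def __init__(self):
        self.request_pending = False

    def trigger(self):                      # a health-monitoring request arrives
        self.request_pending = True

    def acquire(self):                      # e.g. capture a facial image feature
        return "smiling_mouth_up"

class AIPlatform:
    def analyze(self, user_info):
        # compare the acquired info against the emotion-recognition database
        return EMOTION_DATABASE.get(user_info, "unknown")

class Server:
    def __init__(self, monitor):
        self.monitor = monitor

    def forward(self, result):
        self.monitor.receive(result)

class MonitoringTerminal:
    def __init__(self):
        self.last_result = None

    def receive(self, result):
        self.last_result = result

class UserTerminal:
    def __init__(self, platform, server):
        self.platform, self.server = platform, server

    def handle(self, user_info):
        result = self.platform.analyze(user_info)   # AI platform feeds result back
        self.server.forward(result)                 # relay to the monitoring terminal

# Wire the components together and run one monitoring cycle.
monitor = MonitoringTerminal()
terminal = UserTerminal(AIPlatform(), Server(monitor))
acquisition = UserInfoAcquisitionModule()

acquisition.trigger()
if acquisition.request_pending:             # step 1: check for a request
    info = acquisition.acquire()            # step 2: acquire user information
    terminal.handle(info)                   # steps 3-5: terminal -> AI -> server -> monitor

print(monitor.last_result)                  # -> good
```

The sketch only shows the direction of each hand-off; the patent leaves the transport (5G, Bluetooth, WIFI) and the analysis algorithm unspecified at this level.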
The invention has the following beneficial effects: the system can be used for health monitoring of the elderly and makes it convenient for a guardian to learn the health state of the person under guardianship in a timely manner. In particular, because the related information of the elderly person is collected by the information acquisition module, the resistance that arises when professional doctors or other personnel communicate with the elderly person directly is avoided. The information acquisition module can also be applied in the daily life of the elderly person to collect related information comprehensively and objectively; the information is then analyzed via the user terminal and the AI platform to obtain a health-related analysis result, which is sent to the monitoring terminal. The guardian can thus learn the health state of the elderly person through the monitoring terminal; the result is objective and reliable, and problems such as inaccurate manual judgment are avoided.
Drawings
FIG. 1 is a block diagram of the intelligent physical and mental state recognizer based on 5G + AIoT of the invention.
Detailed Description
The invention is further described with reference to the following figures and specific embodiments. It should be noted that the description of the embodiments is provided to help understanding of the present invention, but the present invention is not limited thereto. Specific structural and functional details disclosed herein are merely illustrative of example embodiments of the invention. This invention may, however, be embodied in many alternate forms and should not be construed as limited to the embodiments set forth herein.
It should be understood that the term "and/or" as used herein merely describes an association between objects and indicates that three relationships may exist; for example, A and/or B may mean: A exists alone, B exists alone, or A and B exist together. The term "/and" as used herein describes another association and indicates that two relationships may exist; for example, A/and B may mean: A exists alone, or A and B exist together. In addition, the character "/" as used herein generally indicates that the associated objects before and after it are in an "or" relationship.
It will be understood that when an element is referred to herein as being "connected" or "coupled" to another element, it can be directly connected or coupled to the other element, or intervening elements may be present. Conversely, if an element is referred to herein as being "directly connected" or "directly coupled" to another element, no intervening elements are present. Other words used to describe relationships between elements should be interpreted in a similar manner (e.g., "between" versus "directly between", "adjacent" versus "directly adjacent", etc.).
It is to be understood that the terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of example embodiments of the invention. As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms "comprises," "comprising," "includes" and/or "including," when used herein, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, numbers, steps, operations, elements, components, and/or groups thereof.
It should also be noted that, in some alternative implementations, the functions/acts noted may occur out of the order noted in the figures. For example, two steps shown in succession may, in fact, be executed substantially concurrently, or may sometimes be executed in the reverse order, depending upon the functionality/acts involved.
It should be understood that specific details are provided in the following description to facilitate a thorough understanding of example embodiments. However, it will be understood by those of ordinary skill in the art that the example embodiments may be practiced without these specific details. For example, systems may be shown in block diagrams in order not to obscure the examples in unnecessary detail. In other instances, well-known processes, structures and techniques may be shown without unnecessary detail in order to avoid obscuring example embodiments.
Example 1:
This embodiment provides an intelligent physical and mental state recognizer based on 5G + AIoT, which, as shown in FIG. 1, comprises a user information acquisition module, a user terminal and an AI platform, wherein an emotion recognition database is built into the AI platform, and the user terminal is communicatively connected with a monitoring terminal through a server;
the user information acquisition module is used for acquiring user information and sending the user information to the user terminal;
the user terminal is used for receiving the user information sent by the user information acquisition module and sending the received user information to the AI platform; the user terminal is also used for receiving and processing the analysis result fed back by the AI platform and then sending the analysis result to the monitoring terminal through the server;
the AI platform is used for receiving the user information sent by the user terminal, analyzing the received user information and the characteristic information in the emotion recognition database, and feeding back an analysis result to the user terminal;
and the monitoring terminal is used for receiving the analysis result sent by the server.
This embodiment can be used for health monitoring of the elderly and makes it convenient for the guardian to learn the health state of the person under guardianship in a timely manner. In particular, because the embodiment collects the related information of the elderly person through the information acquisition module, the resistance that arises when professional doctors or other personnel communicate with the elderly person directly is avoided. Moreover, the information acquisition module can be applied in the daily life of the elderly person to collect related information comprehensively and objectively; the information is then analyzed via the user terminal and the AI platform to obtain a health-related analysis result, which is sent to the monitoring terminal. The guardian can learn the health state of the elderly person through the monitoring terminal; the result is objective and reliable, and problems such as inaccurate manual judgment are avoided.
In this embodiment, after the user terminal receives and processes the analysis result fed back by the AI platform, the user terminal starts a man-machine conversation mode: according to the analysis result output by the AI platform, the user terminal asks the person under guardianship questions corresponding to the emotion in the analysis result, makes a secondary judgment on the user's emotion, and outputs a verification result;
the user terminal then compares the verification result with the analysis result output by the AI platform and judges whether they are the same; if so, the analysis result is sent to the monitoring terminal through the server; if not, the user information sent by the user information acquisition module is resubmitted to the AI platform, which analyzes the received user information against the characteristic information in the emotion recognition database again, until the verification result is the same as the analysis result output by the AI platform, after which the analysis result is sent to the monitoring terminal through the server.
In this way, the accuracy of the emotion judgment can be ensured.
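The secondary-judgment loop described above can be sketched as follows. This is hypothetical Python: `ai_analyze` and `dialogue_verify` are invented stand-ins for the AI platform's analysis and the man-machine conversation mode, and the emotion labels are illustrative:

```python
def ai_analyze(user_info, attempt):
    # Stand-in for the AI platform; here later attempts yield a refined result.
    return "depression" if attempt == 0 else "tiredness"

def dialogue_verify(candidate_emotion):
    # Stand-in for the man-machine conversation; here the user's answers
    # indicate tiredness rather than depression.
    return "tiredness"

def confirmed_result(user_info, max_rounds=5):
    """Re-run the analysis until the dialogue-based verification agrees."""
    verification = None
    for attempt in range(max_rounds):
        analysis = ai_analyze(user_info, attempt)
        verification = dialogue_verify(analysis)
        if verification == analysis:        # results agree: send to monitor
            return analysis
        # otherwise resubmit the user information for re-analysis
    return verification                     # bounded fallback, not in the patent

print(confirmed_result("sample_user_info"))  # -> tiredness
```

Note the `max_rounds` cap is an added safeguard: the patent's loop as stated could iterate indefinitely if the two judgments never converge.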
In this embodiment, the user terminal is also communicatively connected with a management terminal through the server; the management terminal is used for receiving the analysis result sent by the server. It should be noted that a system administrator can obtain and store the health analysis results of the elderly through the management terminal, which facilitates subsequent system supervision and related work.
In this embodiment, the user terminal, the monitoring terminal, or the management terminal is any one of a mobile phone, a tablet computer, a notebook computer, or a desktop computer.
In this embodiment, the analysis result includes physical and mental health information of the user, which indicates tiredness, fear, depression, oversensitivity, inferiority, suspicion, excitement, or a good state. It should be noted that the physical and mental health information directly reflects the user's mental health state; meanwhile, information that characterizes emotion, such as the user's facial images and voice, can be conveniently confirmed by comparison with the characteristic information in the emotion recognition database.
In this embodiment, the emotion recognition database includes a facial feature database; the user information acquisition module comprises an image acquisition module;
the image acquisition module is used for acquiring the facial image information of the user and sending the facial image information of the user to the user terminal;
the user terminal is used for receiving the user face image information sent by the image acquisition module and sending the received user face image information to the AI platform;
and the AI platform is used for receiving the user facial image information sent by the user terminal, performing emotion analysis on the received facial image information against the facial feature information in the facial feature database to obtain the physical and mental health information of the user, and feeding back the physical and mental health information of the user to the user terminal and the management terminal.
It should be noted that facial information best reflects the emotional state of the user. In the implementation, the emotion analysis proceeds as follows: the AI platform compares the user's facial image information with the facial feature information in the facial feature database of the emotion recognition database, selects the facial feature information that matches the user's facial image information, and then identifies the emotion information corresponding to that facial feature information, thereby obtaining the physical and mental health information of the user.
Specifically, the facial features in the facial feature database and the video features in the video feature database each include mouth arc, pupil state, eyebrow curvature, cheek arc, and/or wrinkle state. It should be noted that during emotion analysis, the mouth arc, pupil state, eyebrow curvature, cheek arc and/or wrinkle state in the user's facial image information are compared with all facial feature information in the facial feature database, and the changes of these features in the user's video information are likewise compared with all feature information in the video feature database, so as to finally obtain the feature information closest to the user's facial image information and thereby accurately identify the user's emotion information.
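A minimal sketch of such a closest-match lookup, in hypothetical Python. The feature encodings, database entries and emotion labels are all invented for illustration; the patent names only the feature categories:

```python
# Each record pairs a set of facial-feature values with an emotion label.
FACIAL_FEATURE_DB = [
    ({"mouth": "corners_up",   "eyebrows": "relaxed", "wrinkles": "few"},      "good"),
    ({"mouth": "corners_down", "eyebrows": "drawn",   "wrinkles": "brow"},     "depression"),
    ({"mouth": "tight",        "eyebrows": "raised",  "wrinkles": "forehead"}, "fear"),
]

def match_emotion(observed):
    """Return the emotion whose record agrees with the most observed features."""
    def score(record):
        features, _emotion = record
        return sum(observed.get(name) == value for name, value in features.items())
    _features, emotion = max(FACIAL_FEATURE_DB, key=score)
    return emotion

observed = {"mouth": "corners_down", "eyebrows": "drawn", "wrinkles": "few"}
print(match_emotion(observed))  # -> depression
```

A production system would of course extract such features from images with a trained model rather than receive them as symbolic labels; the sketch only illustrates the database-comparison step the patent describes.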
Further, the emotion recognition database includes an audio feature database; the user information acquisition module comprises an audio acquisition module;
the audio acquisition module is used for acquiring user audio information and sending the user audio information to the user terminal;
the user terminal is used for receiving the user audio information sent by the audio acquisition module and sending the received user audio information to the AI platform;
and the AI platform is used for receiving the user audio information sent by the user terminal, then carrying out emotion analysis on the received user audio information and the audio characteristic information in the audio characteristic database to obtain the physical and mental health information of the user, and then feeding back the physical and mental health information of the user to the user terminal and the management terminal.
In this embodiment, the audio features in the audio feature database include intonation, tone, and/or speech rate.
It should be noted that the user's audio information reflects changes in intonation, tone, speech rate and so on, thereby providing a basis for obtaining the physical and mental health information of the user. In the implementation, the emotion analysis proceeds as follows: the AI platform compares the user's audio information with the audio feature information in the audio feature database of the emotion recognition database, selects the audio feature information that matches the user's audio information, and then identifies the emotion information corresponding to that audio feature information, thereby obtaining the physical and mental health information of the user.
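Assuming intonation, tone and speech rate can each be reduced to a number, the audio matching step might look like the following sketch (hypothetical Python; all reference profiles and values are invented for illustration):

```python
import math

AUDIO_FEATURE_DB = {
    # (intonation variability, mean pitch in Hz, speech rate in words/min)
    "excitement": (0.9, 220.0, 160.0),
    "tiredness":  (0.2, 140.0, 90.0),
    "good":       (0.6, 180.0, 130.0),
}

def match_audio_emotion(sample):
    """Return the emotion whose reference profile is nearest to the sample."""
    return min(
        AUDIO_FEATURE_DB,
        key=lambda emotion: math.dist(sample, AUDIO_FEATURE_DB[emotion]),
    )

# A low-variability, low-pitch, slow sample falls nearest the tiredness profile.
print(match_audio_emotion((0.25, 150.0, 95.0)))  # -> tiredness
```

In practice the raw features would need normalization before a Euclidean distance is meaningful (pitch in Hz dominates the unit-scale intonation value here); the sketch ignores that to stay short.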
In this embodiment, the emotion recognition database includes a video feature database; the user information acquisition module comprises a video acquisition module;
the video acquisition module is used for acquiring user video information and sending the user video information to the user terminal;
the user terminal is used for receiving the user video information sent by the video acquisition module and sending the received user video information to the AI platform;
and the AI platform is used for receiving the user video information sent by the user terminal, then carrying out emotion analysis on the received user video information and the video characteristic information in the video characteristic database to obtain the physical and mental health information of the user, and then feeding back the physical and mental health information of the user to the user terminal.
It should be noted that the user's video information reflects changes in the user's emotion, thereby providing a basis for obtaining the physical and mental health information of the user. In the implementation, the emotion analysis proceeds as follows: the AI platform compares the user's video information with the video feature information in the video feature database of the emotion recognition database, selects the video feature information that matches the user's video information, and then identifies the emotion information corresponding to that video feature information, thereby obtaining the physical and mental health information of the user.
In this embodiment, the user terminal, the monitoring terminal and the management terminal are all realized by 5G smart phones and communicate wirelessly with one another through 5G transceiver modules; the user terminal and the user information acquisition module communicate wirelessly through a Bluetooth transceiver module or a WIFI transceiver module.
In this embodiment, the user information acquisition module is triggered by the user terminal and/or the monitoring terminal. It should be understood that the elderly person can trigger the user information acquisition module as a form of self-detection, after which the current emotion information is analyzed and sent to the monitoring terminal.
The working method of the intelligent physical and mental state recognizer based on 5G + AIoT comprises the following steps:
the user information acquisition module judges whether a health monitoring request exists in real time, and if yes, the user information acquisition module is triggered;
the user information acquisition module acquires user information and sends the user information to the user terminal;
the user terminal receives the user information sent by the user information acquisition module and sends the received user information to the AI platform;
the AI platform receives the user information sent by the user terminal, analyzes the received user information and the characteristic information in the emotion recognition database, and feeds back an analysis result to the user terminal;
the user terminal receives and processes the analysis result fed back by the AI platform, and then sends the analysis result to the monitoring terminal and the management terminal through the server;
the monitoring terminal and the management terminal receive the analysis result sent by the server.
To ensure the accuracy of the emotion judgment, after the user terminal receives and processes the analysis result fed back by the AI platform, the method further comprises the following steps:
the user terminal starts a man-machine conversation mode: according to the analysis result output by the AI platform, it asks the person under guardianship questions corresponding to the emotion in the analysis result, makes a secondary judgment on the user's emotion, and outputs a verification result;
the user terminal compares the verification result with the analysis result output by the AI platform and judges whether they are the same; if so, the analysis result is sent to the monitoring terminal through the server; if not, the user information sent by the user information acquisition module is resubmitted to the AI platform, which analyzes it against the characteristic information in the emotion recognition database again, until the verification result is the same as the analysis result output by the AI platform, after which the analysis result is sent to the monitoring terminal through the server.
In the embodiments described above, units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the embodiment. One of ordinary skill in the art can understand and implement this without inventive effort.
The above embodiments are only intended to illustrate the technical solution of the invention, not to limit it. Although the invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art will understand that the technical solutions described in the embodiments may still be modified, or some of their technical features replaced by equivalents, and that such modifications or substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the invention.
Finally, it should be noted that the invention is not limited to the above alternative embodiments, and various other forms of products can be derived in light of the invention. The above detailed description should not be construed as limiting the scope of protection of the invention, which is defined by the claims; the description may be used to interpret the claims.
Claims (9)
1. An intelligent physical and mental state recognizer based on 5G + AIoT, characterized in that it comprises a user information acquisition module, a user terminal and an AI platform, wherein an emotion recognition database is built into the AI platform, and the user terminal is communicatively connected with a monitoring terminal through a server;
the user information acquisition module is used for acquiring user information and sending the user information to the user terminal;
the user terminal is used for receiving the user information sent by the user information acquisition module and sending the received user information to the AI platform; the user terminal is further used for receiving and processing the analysis result fed back by the AI platform and then sending the analysis result to the monitoring terminal through the server;
the AI platform is used for receiving the user information sent by the user terminal, analyzing the received user information against the feature information in the emotion recognition database, and feeding back an analysis result to the user terminal;
and the monitoring terminal is used for receiving the analysis result sent by the server.
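For illustration only (this is not part of the claims, and the patent specifies roles and connections rather than an implementation), the claim-1 data flow could be sketched as follows; every class, method, and label name here is hypothetical:

```python
# Hypothetical sketch: acquisition module -> user terminal -> AI platform
# -> user terminal -> server -> monitoring terminal.

class AIPlatform:
    """Holds the built-in emotion recognition database and runs the analysis."""
    def __init__(self, emotion_db):
        self.emotion_db = emotion_db  # feature info -> emotion label

    def analyze(self, user_info):
        # Compare received user information with database feature information.
        return self.emotion_db.get(user_info, "unknown")

class MonitoringTerminal:
    def __init__(self):
        self.last_result = None

    def receive(self, result):
        self.last_result = result

class Server:
    """Relays analysis results from the user terminal to the monitoring terminal."""
    def __init__(self, monitoring_terminal):
        self.monitoring_terminal = monitoring_terminal

    def forward(self, result):
        self.monitoring_terminal.receive(result)

class UserTerminal:
    def __init__(self, ai_platform, server):
        self.ai_platform = ai_platform
        self.server = server

    def handle(self, user_info):
        # Receive from the acquisition module, send to the AI platform,
        # then pass the fed-back result to the server.
        result = self.ai_platform.analyze(user_info)
        self.server.forward(result)
        return result

# Usage: a "smile" feature reaches the monitoring terminal as "good".
monitor = MonitoringTerminal()
terminal = UserTerminal(AIPlatform({"smile": "good", "frown": "depression"}),
                        Server(monitor))
terminal.handle("smile")
```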
2. The intelligent physical and mental state recognizer based on 5G + AIoT according to claim 1, characterized in that: the user terminal is further in communication connection with a management terminal through the server;
and the management terminal is used for receiving the analysis result sent by the server.
3. The intelligent physical and mental state recognizer based on 5G + AIoT according to claim 1, characterized in that: the analysis result comprises physical and mental health information of the user, and the physical and mental health information comprises tiredness, fear, depression, hypersensitivity, low self-esteem, suspicion, excitement, or good condition.
4. The intelligent physical and mental state recognizer based on 5G + AIoT according to claim 3, characterized in that: the emotion recognition database comprises a facial feature database, and the user information acquisition module comprises an image acquisition module;
the image acquisition module is used for acquiring the facial image information of the user and sending the facial image information of the user to the user terminal;
the user terminal is used for receiving the user face image information sent by the image acquisition module and sending the received user face image information to the AI platform;
the AI platform is used for receiving the user facial image information sent by the user terminal, performing emotion analysis on the received facial image information against the facial feature information in the facial feature database to obtain the physical and mental health information of the user, and feeding back the physical and mental health information of the user to the user terminal.
5. The intelligent physical and mental state recognizer based on 5G + AIoT according to claim 4, characterized in that: the facial features in the facial feature database comprise mouth curvature, pupil state, eyebrow curvature, cheek contour, and/or wrinkle state.
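Purely as an illustration of claim 5 (the patent does not teach a matching algorithm), a facial-feature database entry could be represented as a numeric vector over the five listed features and matched by nearest neighbour; all values and labels below are invented:

```python
import math

# Hypothetical facial feature database: each key is a vector of
# (mouth curvature, pupil state, eyebrow curvature, cheek contour,
#  wrinkle state); each value is an emotion label. Numbers are invented.
FACIAL_FEATURE_DB = {
    (0.9, 0.6, 0.7, 0.8, 0.1): "good",
    (0.1, 0.3, 0.2, 0.2, 0.7): "depression",
    (0.2, 0.9, 0.5, 0.3, 0.4): "fear",
}

def classify_face(measured):
    """Return the label of the database entry closest (Euclidean) to the
    measured feature vector."""
    nearest = min(FACIAL_FEATURE_DB, key=lambda ref: math.dist(ref, measured))
    return FACIAL_FEATURE_DB[nearest]
```

A measured vector near the "good" prototype, e.g. `(0.85, 0.55, 0.65, 0.75, 0.15)`, would be classified as "good" under this sketch.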
6. The intelligent physical and mental state recognizer based on 5G + AIoT according to claim 3, characterized in that: the emotion recognition database comprises an audio feature database, and the user information acquisition module comprises an audio acquisition module;
the audio acquisition module is used for acquiring user audio information and sending the user audio information to the user terminal;
the user terminal is used for receiving the user audio information sent by the audio acquisition module and sending the received user audio information to the AI platform;
the AI platform is used for receiving the user audio information sent by the user terminal, performing emotion analysis on the received audio information against the audio feature information in the audio feature database to obtain the physical and mental health information of the user, and feeding back the physical and mental health information of the user to the user terminal.
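As a sketch of the audio branch in claim 6 (again hypothetical, with invented thresholds standing in for the audio feature database), emotion analysis could start from simple prosodic features such as mean energy and zero-crossing rate of a normalised sample buffer:

```python
def audio_features(samples):
    """Mean energy and zero-crossing rate of a normalised sample buffer."""
    n = len(samples)
    energy = sum(s * s for s in samples) / n
    zero_crossings = sum(1 for a, b in zip(samples, samples[1:]) if a * b < 0)
    return energy, zero_crossings / (n - 1)

def classify_audio(samples):
    # Invented thresholds; a real system would match against the database.
    energy, zcr = audio_features(samples)
    if energy > 0.5 and zcr > 0.3:
        return "excitement"   # loud, rapidly varying voice
    if energy < 0.01:
        return "tiredness"    # very quiet voice
    return "good"
```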
7. The intelligent physical and mental state recognizer based on 5G + AIoT according to claim 3, characterized in that: the emotion recognition database comprises a video feature database, and the user information acquisition module comprises a video acquisition module;
the video acquisition module is used for acquiring user video information and sending the user video information to the user terminal;
the user terminal is used for receiving the user video information sent by the video acquisition module and sending the received user video information to the AI platform;
the AI platform is used for receiving the user video information sent by the user terminal, performing emotion analysis on the received video information against the video feature information in the video feature database to obtain the physical and mental health information of the user, and feeding back the physical and mental health information of the user to the user terminal.
8. The intelligent physical and mental state recognizer based on 5G + AIoT according to claim 2, characterized in that: the user terminal, the management terminal, and the monitoring terminal are each implemented with a 5G smartphone and communicate wirelessly with one another through 5G transceiver modules; and wireless communication between the user terminal and the user information acquisition module is realized through a Bluetooth transceiver module or a WIFI transceiver module.
9. The intelligent physical and mental state recognizer based on 5G + AIoT according to claim 1, characterized in that: the user information acquisition module is triggered by the user terminal and/or the monitoring terminal.
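The trigger relationship of claim 9 might look like the following sketch, in which the acquisition module stays idle until the user terminal and/or the monitoring terminal triggers it (all names are hypothetical):

```python
class AcquisitionModule:
    """Idle by default; acquires user information only after being triggered."""
    def __init__(self):
        self.triggered_by = []

    def trigger(self, source):
        # source: "user terminal" or "monitoring terminal" (per claim 9).
        self.triggered_by.append(source)

    def acquire(self):
        if not self.triggered_by:
            raise RuntimeError("acquisition module has not been triggered")
        return {"payload": "user information",
                "requested_by": self.triggered_by[-1]}

# Usage: the monitoring terminal remotely starts an acquisition.
module = AcquisitionModule()
module.trigger("monitoring terminal")
info = module.acquire()
```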
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010725971.8A | 2020-07-24 | 2020-07-24 | Intelligent physical and mental state recognizer based on 5G + AIoT |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010725971.8A | 2020-07-24 | 2020-07-24 | Intelligent physical and mental state recognizer based on 5G + AIoT |
Publications (1)
Publication Number | Publication Date |
---|---|
CN111839552A | 2020-10-30 |
Family
ID=72949554
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010725971.8A (Withdrawn) | Intelligent physical and mental state recognizer based on 5G + AIoT | 2020-07-24 | 2020-07-24 |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111839552A (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112668833A (en) * | 2020-11-26 | 2021-04-16 | 平安普惠企业管理有限公司 | Staff work arrangement method, device, equipment and medium based on artificial intelligence |
Similar Documents
Publication | Title |
---|---|
JP6101684B2 | Method and system for assisting patients |
WO2017193497A1 | Fusion model-based intellectualized health management server and system, and control method therefor |
Ekman et al. | Assessment of facial behavior in affective disorders |
CN111613335A | Health early warning system and method |
CN112016367A | Emotion recognition system and method and electronic equipment |
CN109949438B | Abnormal driving monitoring model establishing method and device and storage medium |
CN107910073A | Emergency pre-examination and triage method and device |
CN106510686A | Heart disease diagnosis system based on cloud service |
CN110755091A | Personal mental health monitoring system and method |
CN111839552A | Intelligent physical and mental state recognizer based on 5G + AIoT |
KR20180015010A | Method for biometric human identification on electrocardiogram and PTT smartwatch system using the same method |
CN113040773A | Data acquisition and processing method |
CN112669927A | Household health service system |
WO2016119498A1 | Health information providing method and providing apparatus |
CN115331837A | Man-machine interaction intelligent inquiry system |
CN112735591A | Digital physical and mental health management system |
CN113921098A | Medical service evaluation method and system |
Mantri et al. | Real time multimodal depression analysis |
CN208910229U | Medical system based on wearable device |
KR102508262B1 | Apparatus for diagnosing dementia using voice and eye tracking and method for providing dementia information |
CN205318386U | Wearable device for health detection based on iris information |
KR20190118806A | Emotion recognition apparatus based on space and emotion recognition system comprising the same |
WO2023105887A1 | Information processing device, information processing method, and recording medium |
CN220820814U | Medical warning system based on face image and emotion recognition |
CN112687279B | Medical instant messaging intelligent terminal based on voice technology |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| WW01 | Invention patent application withdrawn after publication | Application publication date: 20201030 |