CN116229581B - Intelligent interconnection first-aid system based on big data - Google Patents

Intelligent interconnection first-aid system based on big data

Info

Publication number
CN116229581B
Authority
CN
China
Prior art keywords
layer
emergency
output
big data
user
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202310297735.4A
Other languages
Chinese (zh)
Other versions
CN116229581A (en)
Inventor
涂建刚
费晓波
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhuhai Anke Electronic Technology Co ltd
Original Assignee
Zhuhai Anke Electronic Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhuhai Anke Electronic Technology Co ltd filed Critical Zhuhai Anke Electronic Technology Co ltd
Priority to CN202310297735.4A
Publication of CN116229581A
Application granted
Publication of CN116229581B
Legal status: Active


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20: Movements or behaviour, e.g. gesture recognition
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/08: Learning methods
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/70: Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82: Arrangements for image or video recognition or understanding using pattern recognition or machine learning, using neural networks
    • G: PHYSICS
    • G16: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H: HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H50/00: ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
    • G16H50/70: ICT specially adapted for medical diagnosis, medical simulation or medical data mining, for mining of medical data, e.g. analysing previous cases of other patients
    • G: PHYSICS
    • G16: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16Y: INFORMATION AND COMMUNICATION TECHNOLOGY SPECIALLY ADAPTED FOR THE INTERNET OF THINGS [IoT]
    • G16Y20/00: Information sensed or collected by the things
    • G16Y20/40: Information sensed or collected by the things relating to personal data, e.g. biometric data, records or preferences
    • G: PHYSICS
    • G16: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16Y: INFORMATION AND COMMUNICATION TECHNOLOGY SPECIALLY ADAPTED FOR THE INTERNET OF THINGS [IoT]
    • G16Y40/00: IoT characterised by the purpose of the information processing
    • G16Y40/20: Analytics; Diagnosis
    • G: PHYSICS
    • G16: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16Y: INFORMATION AND COMMUNICATION TECHNOLOGY SPECIALLY ADAPTED FOR THE INTERNET OF THINGS [IoT]
    • G16Y40/00: IoT characterised by the purpose of the information processing
    • G16Y40/50: Safety; Security of things, users, data or systems
    • G: PHYSICS
    • G16: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16Y: INFORMATION AND COMMUNICATION TECHNOLOGY SPECIALLY ADAPTED FOR THE INTERNET OF THINGS [IoT]
    • G16Y40/00: IoT characterised by the purpose of the information processing
    • G16Y40/60: Positioning; Navigation
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02A: TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A90/00: Technologies having an indirect contribution to adaptation to climate change
    • Y02A90/10: Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation

Abstract

The invention relates to the technical field of big data, and discloses an intelligent interconnection first-aid system based on big data, which comprises: a first model generation module for establishing a first model, the first model comprising a first layer, a second layer and a third layer, wherein the input of the first layer is moving images of a user and its output is movement-disorder features; the input of the third layer is the user's audio and its output is sound-disorder features; the second layer is an RNN neural network comprising an A layer, a B layer, a C layer and a D layer; the A layer and the B layer are connected to the outputs of the first layer and the third layer respectively, the outputs of the A layer and the B layer are the inputs of the C layer, and the outputs of the D layer correspond to different emergency-condition classifications; and an emergency decision module that selects a corresponding treatment strategy based on the identified emergency-condition classification. By processing and evaluating user information collected through the Internet of Things with big data and deep learning, the invention can detect in time the potential danger reflected in a patient's behavior and initiate emergency treatment.

Description

Intelligent interconnection first-aid system based on big data
Technical Field
The invention relates to the technical field of big data, in particular to an intelligent interconnection first-aid system based on big data.
Background
Existing emergency response depends on the patient or a bystander calling for help. Laypeople lack medical knowledge and are often unaware of the early movement disorders and speech disorders that precede some acute illnesses; once such an illness strikes, the patient may lose the ability to call for help.
Disclosure of Invention
The invention provides an intelligent interconnection first-aid system based on big data, which addresses the technical problem in the related art that a patient may be unable to seek help on their own.
The invention provides an intelligent interconnection first-aid system based on big data, which comprises:
a first information acquisition module for acquiring a moving image of a user;
the first model generation module is used for establishing a first model, the first model comprising a first layer, a second layer and a third layer, wherein the input of the first layer is moving images of the user and its output is movement-disorder features;
the input of the third layer is the user's audio and its output is sound-disorder features;
the second layer is an RNN neural network, and the RNN neural network comprises an A layer, a B layer, a C layer and a D layer;
the A layer and the B layer are connected to the outputs of the first layer and the third layer respectively; the outputs of the A layer and the B layer are the inputs of the C layer; the C layer comprises a plurality of RNN units, each RNN unit taking as input one movement-disorder feature and one sound-disorder feature; the D layer is connected to the output of the last RNN unit and outputs two or more results, each corresponding to a different emergency-condition classification;
the memory s_t of the t-th RNN unit of the C layer is calculated as:
s_t = f(W * s_{t-1} + U_1 * x_t + U_2 * y_t)
where W is the weight applied to the previous memory s_{t-1}, U_1 is the weight of the movement-disorder feature input to the t-th RNN unit, U_2 is the weight of the sound-disorder feature input to the t-th RNN unit, x_t is the movement-disorder feature input to the t-th RNN unit, y_t is the sound-disorder feature input to the t-th RNN unit, and f is an activation function;
the output of the t-th RNN unit of the C layer is o_t = g(V * s_t), where V is the output weight, s_t is the memory of the t-th RNN unit, and g is an activation function;
a state recognition module for inputting the current user's moving images and audio into the first model to generate an emergency-condition classification;
an emergency decision module that selects a corresponding treatment strategy based on the identified emergency-condition classification;
and an interconnection positioning module for locating the user's position based on the user's terminal device.
Further, the D layer outputs two results, corresponding respectively to the two categories of requiring emergency treatment and not requiring emergency treatment.
Further, the D layer outputs three results, corresponding respectively to no abnormality, a disorder not requiring emergency treatment, and a disorder requiring emergency treatment.
Further, the first layer, the third layer and the second layer of the first model are each trained separately.
Further, the D layer vectorizes the output of the last RNN unit of the C layer and multiplies it by a classification weight matrix to obtain a plurality of outputs; the size of the classification weight matrix is N×M, where N is the dimension of the vectorized output of the last RNN unit of the C layer and M is the number of outputs.
Further, the D layer is also connected to an E layer; the E layer comprises probability units connected one-to-one to the outputs of the D layer, and the output G_j of the j-th probability unit is calculated as:
G_j = exp(S_j) / Σ_{k∈K} exp(S_k)
where S_j is the j-th output value of the D layer, S_k is the k-th output value of the D layer, and K is the set of output items of the D layer.
Further, the training loss function of the second layer is:
L = -(1/n) Σ_{i=1}^{n} Σ_{j=1}^{k} y_{i,j} log(ŷ_{i,j}) + L_reg
where y_{i,j} is the true value indicating that the i-th sample belongs to the j-th emergency class, ŷ_{i,j} is the model's predicted value that the i-th sample belongs to the j-th emergency class, k is the total number of emergency classes, n is the total number of samples in the training set, and L_reg is the L1 regularization term.
The invention has the beneficial effects that:
the invention processes and judges the user information collected by the Internet of things based on big data and deep learning, can timely find potential danger reflected by patient behaviors, carries out emergency treatment, is used as an emergency system independent of self help seeking of the user, and can effectively improve the life safety of the user.
Drawings
Fig. 1 shows the intelligent interconnected first-aid system based on big data according to the present invention.
In the figure: the system comprises a first information acquisition module 101, a first model generation module 102, a state recognition module 103, an emergency decision module 104 and an interconnection positioning module 105.
Detailed Description
The subject matter described herein will now be discussed with reference to example embodiments. It is to be understood that these embodiments are merely discussed so that those skilled in the art may better understand and implement the subject matter described herein and that changes may be made in the function and arrangement of the elements discussed without departing from the scope of the disclosure herein. Various examples may omit, replace, or add various procedures or components as desired. In addition, features described with respect to some examples may be combined in other examples as well.
Example 1
As shown in fig. 1, an intelligent interconnection emergency system based on big data includes:
a first information acquisition module 101 for acquiring a moving image of a user;
a second information acquisition module for acquiring audio data of the user;
the first model generation module 102 is configured to build a first model, the first model comprising a first layer, a second layer and a third layer, wherein the input of the first layer is moving images of the user and its output is movement-disorder features;
the input of the third layer is the user's audio and its output is sound-disorder features;
the second layer is an RNN neural network, and the RNN neural network comprises an A layer, a B layer, a C layer and a D layer;
the A layer and the B layer are connected to the outputs of the first layer and the third layer respectively; the outputs of the A layer and the B layer are the inputs of the C layer; the C layer comprises a plurality of RNN units, each RNN unit taking as input one movement-disorder feature and one sound-disorder feature; the D layer is connected to the output of the last RNN unit and outputs two or more results, each corresponding to a different emergency-condition classification.
in one embodiment of the invention, the D layer outputs two results, corresponding to two categories requiring emergency and not requiring emergency, respectively.
In one embodiment of the present invention, the D layer outputs three results, corresponding respectively to no abnormality, a disorder not requiring emergency treatment, and a disorder requiring emergency treatment.
The first layer, the third layer and the second layer of the first model are each trained separately.
In one embodiment of the invention, the memory s_t of the t-th RNN unit of the C layer is calculated as:
s_t = f(W * s_{t-1} + U_1 * x_t + U_2 * y_t)
where W is the weight applied to the previous memory s_{t-1}, U_1 is the weight of the movement-disorder feature input to the t-th RNN unit, U_2 is the weight of the sound-disorder feature input to the t-th RNN unit, x_t is the movement-disorder feature input to the t-th RNN unit, y_t is the sound-disorder feature input to the t-th RNN unit, and f is an activation function;
the output of the t-th RNN unit of the C layer is o_t = g(V * s_t), where V is the output weight, s_t is the memory of the t-th RNN unit, and g is an activation function.
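For illustration, a minimal sketch of the C-layer recurrence and output described above is given below; the activation choices (tanh for f, identity for g), the dimensions, and all function names are assumptions, not part of the patent.

```python
# Minimal sketch of the C-layer RNN unit: s_t = f(W*s_{t-1} + U1*x_t + U2*y_t)
# and o_t = g(V*s_t). f = tanh and g = identity are assumed choices.
import numpy as np

def run_c_layer(xs, ys, W, U1, U2, V, f=np.tanh, g=lambda z: z):
    """xs: movement-disorder features, ys: sound-disorder features (one pair per step)."""
    s = np.zeros(W.shape[0])                 # initial memory s_0
    outputs = []
    for x_t, y_t in zip(xs, ys):             # one RNN unit per feature pair
        s = f(W @ s + U1 @ x_t + U2 @ y_t)   # memory s_t of the t-th unit
        outputs.append(g(V @ s))             # output o_t = g(V * s_t)
    return outputs, s                        # s is the last unit's memory
```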
The D layer vectorizes the output of the last RNN unit of the C layer and multiplies it by a classification weight matrix to obtain a plurality of outputs;
the size of the classification weight matrix is N×M, where N is the dimension of the vectorized output of the last RNN unit of the C layer and M is the number of outputs.
In one embodiment of the invention, the A layer employs a Mask R-CNN neural network.
In one embodiment of the invention, the B layer employs a BLSTM neural network; the user's audio is arranged as a time sequence before being input to the BLSTM neural network.
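The patent does not specify how the audio is arranged as a time sequence; one possible framing step, with an assumed 25 ms frame and 10 ms hop at a 16 kHz sampling rate, is sketched below.

```python
# Hypothetical framing of raw audio samples into a time sequence of frames
# suitable for a (B)LSTM; frame_len=400 and hop=160 are assumed values.
import numpy as np

def frame_audio(samples, frame_len=400, hop=160):
    n = 1 + max(0, (len(samples) - frame_len) // hop)
    return np.stack([samples[i * hop : i * hop + frame_len] for i in range(n)])
```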
In one embodiment of the invention, the D layer is also connected to an E layer; the E layer comprises probability units connected one-to-one to the outputs of the D layer, and the output G_j of the j-th probability unit is calculated as:
G_j = exp(S_j) / Σ_{k∈K} exp(S_k)
where S_j is the j-th output value of the D layer, S_k is the k-th output value of the D layer, and K is the set of output items of the D layer.
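Given these definitions, the probability unit behaves as a standard softmax over the D-layer outputs; the sketch below reconstructs the exponential form from the variable definitions and is illustrative only.

```python
# E-layer probability unit as a softmax: G_j = exp(S_j) / sum over k of exp(S_k).
import numpy as np

def e_layer(S):
    z = np.exp(S - S.max())   # subtract the maximum for numerical stability
    return z / z.sum()        # the resulting G_j values sum to 1
```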
The training loss function of the second layer is:
L = -(1/n) Σ_{i=1}^{n} Σ_{j=1}^{k} y_{i,j} log(ŷ_{i,j}) + L_reg
where y_{i,j} is the true value indicating that the i-th sample belongs to the j-th emergency class, ŷ_{i,j} is the model's predicted value that the i-th sample belongs to the j-th emergency class, k is the total number of emergency classes, n is the total number of samples in the training set, and L_reg is the L1 regularization term.
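Read as the usual cross-entropy plus an L1 penalty, this loss could be computed as in the sketch below; the regularization weight lam and the epsilon guard are assumed hyperparameters.

```python
# Cross-entropy over k emergency classes averaged over n samples, plus an
# L1 regularization term; lam (the L1 weight) is an assumed value.
import numpy as np

def second_layer_loss(y_true, y_pred, weights, lam=1e-4, eps=1e-12):
    """y_true, y_pred: (n, k) arrays; weights: list of parameter arrays."""
    ce = -np.mean(np.sum(y_true * np.log(y_pred + eps), axis=1))
    l1 = lam * sum(np.abs(w).sum() for w in weights)   # the L_reg term
    return ce + l1
```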
A state recognition module 103 inputs the current user's moving images and audio into the first model to generate an emergency-condition classification.
An emergency decision module 104 selects a corresponding treatment strategy based on the identified emergency-condition classification.
for example, for emergency situations classified as requiring emergency, the user location is located directly and then ambulances and medical personnel are dispatched.
For example, for emergency situations classified as not requiring emergency, the user can be contacted by telephone to communicate with the user, and further judge whether emergency is required.
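The mapping from classification to treatment strategy can be as simple as a lookup table; the class labels and strategy descriptions in the sketch below are hypothetical.

```python
# Hypothetical strategy table for the emergency decision module 104.
STRATEGIES = {
    "emergency": "locate the user, then dispatch an ambulance and medical personnel",
    "no_emergency": "phone the user to confirm whether emergency treatment is needed",
    "no_abnormality": "continue routine monitoring",
}

def select_strategy(classification):
    return STRATEGIES.get(classification, "continue routine monitoring")
```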
In one embodiment of the present invention, a preliminary determination of the user's condition is made based on the user's medical record and/or the user's moving images and audio in order to prepare the medical equipment to be carried; this is a conventional technical means in the art and is not described further herein.
An interconnection positioning module 105 locates the user's position based on the user's terminal device.
In the above embodiments of the present invention, both the user's moving images and the user's audio are captured by terminal devices on the Internet of Things; the terminal device used to locate the user may be an Internet of Things device such as the user's mobile phone or a camera in the user's room.
The embodiments have been described above with reference to specific implementations; however, the embodiments are not limited to the specific implementations described, which are merely illustrative and not restrictive. Those of ordinary skill in the art, given the benefit of this disclosure, may derive many other forms, all of which fall within the scope of the embodiments.

Claims (7)

1. An intelligent interconnected first-aid system based on big data, comprising:
a first information acquisition module for acquiring a moving image of a user;
the first model generation module is used for establishing a first model, the first model comprising a first layer, a second layer and a third layer, wherein the input of the first layer is moving images of the user and its output is movement-disorder features;
the input of the third layer is the user's audio and its output is sound-disorder features;
the second layer is an RNN neural network, and the RNN neural network comprises an A layer, a B layer, a C layer and a D layer;
the A layer and the B layer are connected to the outputs of the first layer and the third layer respectively; the outputs of the A layer and the B layer are the inputs of the C layer; the C layer comprises a plurality of RNN units, each RNN unit taking as input one movement-disorder feature and one sound-disorder feature; the D layer is connected to the output of the last RNN unit and outputs two or more results, each corresponding to a different emergency-condition classification;
the memory s_t of the t-th RNN unit of the C layer is calculated as:
s_t = f(W * s_{t-1} + U_1 * x_t + U_2 * y_t)
where W is the weight applied to the previous memory s_{t-1}, U_1 is the weight of the movement-disorder feature input to the t-th RNN unit, U_2 is the weight of the sound-disorder feature input to the t-th RNN unit, x_t is the movement-disorder feature input to the t-th RNN unit, y_t is the sound-disorder feature input to the t-th RNN unit, and f is an activation function;
the output of the t-th RNN unit of the C layer is o_t = g(V * s_t), where V is the output weight, s_t is the memory of the t-th RNN unit, and g is an activation function;
a state recognition module for inputting the current user's moving images and audio into the first model to generate an emergency-condition classification;
an emergency decision module that selects a corresponding treatment strategy based on the identified emergency-condition classification;
and an interconnection positioning module for locating the user's position based on the user's terminal device.
2. The intelligent interconnected first-aid system based on big data of claim 1, wherein the D layer outputs two results, corresponding respectively to the two categories of requiring emergency treatment and not requiring emergency treatment.
3. The intelligent interconnected first-aid system based on big data of claim 1, wherein the D layer outputs three results, corresponding respectively to no abnormality, a disorder not requiring emergency treatment, and a disorder requiring emergency treatment.
4. The intelligent interconnected first-aid system based on big data of claim 1, wherein the first layer, the third layer and the second layer of the first model are each trained separately.
5. The intelligent interconnected first-aid system based on big data of claim 1, wherein the D layer vectorizes the output of the last RNN unit of the C layer and multiplies it by a classification weight matrix to obtain a plurality of outputs; the size of the classification weight matrix is N×M, where N is the dimension of the vectorized output of the last RNN unit of the C layer and M is the number of outputs.
6. The intelligent interconnected first-aid system based on big data of claim 1, wherein the D layer is also connected to an E layer; the E layer comprises probability units connected one-to-one to the outputs of the D layer, and the output G_j of the j-th probability unit is calculated as:
G_j = exp(S_j) / Σ_{k∈K} exp(S_k)
where S_j is the j-th output value of the D layer, S_k is the k-th output value of the D layer, and K is the set of output items of the D layer.
7. The intelligent interconnected first-aid system based on big data of claim 1, wherein the training loss function of the second layer is:
L = -(1/n) Σ_{i=1}^{n} Σ_{j=1}^{k} y_{i,j} log(ŷ_{i,j}) + L_reg
where y_{i,j} is the true value indicating that the i-th sample belongs to the j-th emergency class, ŷ_{i,j} is the model's predicted value that the i-th sample belongs to the j-th emergency class, k is the total number of emergency classes, n is the total number of samples in the training set, and L_reg is the L1 regularization term.
CN202310297735.4A 2023-03-23 2023-03-23 Intelligent interconnection first-aid system based on big data Active CN116229581B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310297735.4A CN116229581B (en) 2023-03-23 2023-03-23 Intelligent interconnection first-aid system based on big data

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310297735.4A CN116229581B (en) 2023-03-23 2023-03-23 Intelligent interconnection first-aid system based on big data

Publications (2)

Publication Number Publication Date
CN116229581A CN116229581A (en) 2023-06-06
CN116229581B (en) 2023-09-19

Family

ID=86573146

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310297735.4A Active CN116229581B (en) 2023-03-23 2023-03-23 Intelligent interconnection first-aid system based on big data

Country Status (1)

Country Link
CN (1) CN116229581B (en)


Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20200388287A1 (en) * 2018-11-13 2020-12-10 CurieAI, Inc. Intelligent health monitoring

Patent Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108764059A (en) * 2018-05-04 2018-11-06 南京邮电大学 A human behavior recognition method and system based on a neural network
US10210860B1 (en) * 2018-07-27 2019-02-19 Deepgram, Inc. Augmented generalized deep learning with special vocabulary
CN110786859A (en) * 2018-08-03 2020-02-14 格力电器(武汉)有限公司 Emergency alarm method, device and system
CN109326355A (en) * 2018-08-16 2019-02-12 浙江树人学院 A firefighter breath-sound monitoring earphone and physical condition assessment method
CN109399437A (en) * 2018-12-28 2019-03-01 济南浪潮高新科技投资发展有限公司 Escalator emergency braking system and method based on human action identification
WO2021212883A1 (en) * 2020-04-20 2021-10-28 电子科技大学 Fall detection method based on intelligent mobile terminal
CN112861769A (en) * 2021-03-02 2021-05-28 武汉爱科森网络科技有限公司 Intelligent monitoring and early warning system and method for aged people
WO2022256942A1 (en) * 2021-06-10 2022-12-15 Bozena Kaminska Wearable device including processor and accelerator
CN114403878A (en) * 2022-01-20 2022-04-29 南通理工学院 Voice fatigue detection method based on deep learning
CN114566275A (en) * 2022-02-21 2022-05-31 山东大学齐鲁医院 Pre-hospital emergency auxiliary system based on mixed reality
CN115271002A (en) * 2022-09-29 2022-11-01 广东机电职业技术学院 Identification method, first-aid decision method, medium and life health intelligent monitoring system
CN115641610A (en) * 2022-10-14 2023-01-24 沈阳瞻言科技有限公司 Hand-waving help-seeking identification system and method

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
Audio IoT Analytics for Home Automation Safety; Sayed Khushal Shah et al.; 2018 IEEE International Conference on Big Data (Big Data); pp. 5181-5186 *
Emergency Fall Incidents Detection in Assisted Living Environments Utilizing Motion, Sound, and Visual Perceptual Components; Charalampos N. Doukas et al.; IEEE Transactions on Information Technology in Biomedicine; Vol. 15, No. 2; pp. 277-289 *
Design of a remote health management and first-aid system based on wireless networks (基于无线网络的远程健康管理与急救系统设计); 刘耀东 et al.; Computer Technology and Development (计算机技术与发展); Vol. 22, No. 3; pp. 129-132 *
Research on speech emotion recognition fusing kinematic and acoustic features (融合运动学和声学特征的语音情感识别研究); 任国凤; China Doctoral Dissertations Full-text Database, Information Science and Technology (中国博士学位论文全文数据库 信息科技辑); No. 8, 2019; I136-25 *
A brief analysis of disturbance of consciousness in children in pre-hospital care (院前救治的儿童意识障碍浅析); 纪学颖 et al.; Chinese Journal of Disaster and Rescue Medicine (中华灾害救援医学); Vol. 9, No. 9; pp. 1219-1222 *

Also Published As

Publication number Publication date
CN116229581A (en) 2023-06-06

Similar Documents

Publication Publication Date Title
Palaniappan et al. VEP optimal channel selection using genetic algorithm for neural network classification of alcoholics
CN112347908B (en) Surgical instrument image identification method based on space grouping attention model
EP3977479B1 (en) Method for assisting an interviewing party
Tasoulis et al. Statistical data mining of streaming motion data for activity and fall recognition in assistive environments
Al Machot et al. Human activity recognition based on real life scenarios
CN111582396A (en) Fault diagnosis method based on improved convolutional neural network
CN116564561A (en) Intelligent voice nursing system and nursing method based on physiological and emotion characteristics
Renjith et al. Speech based emotion recognition in Tamil and Telugu using LPCC and hurst parameters—A comparitive study using KNN and ANN classifiers
CN112101096A (en) Suicide emotion perception method based on multi-mode fusion of voice and micro-expression
CN110047518A (en) A kind of speech emotional analysis system
CN114724224A (en) Multi-mode emotion recognition method for medical care robot
CN112101097A (en) Depression and suicide tendency identification method integrating body language, micro expression and language
Saffari et al. DCNN-fuzzyWOA: artificial intelligence solution for automatic detection of covid-19 using X-ray images
CN112466284B (en) Mask voice identification method
CN116229581B (en) Intelligent interconnection first-aid system based on big data
CN117198468A (en) Intervention scheme intelligent management system based on behavior recognition and data analysis
Rony et al. An effective approach to communicate with the deaf and mute people by recognizing characters of one-hand bangla sign language using convolutional neural-network
CN110738985A (en) Cross-modal biometric feature recognition method and system based on voice signals
CN116313127A (en) Decision support system based on pre-hospital first-aid big data
Tiwari et al. Face Recognition using morphological method
Kishore et al. A hybrid method for activity monitoring using principal component analysis and back-propagation neural network
Rajesh Down syndrome detection using modified adaboost algorithm
Kedari et al. Face emotion detection using deep learning
CN107180236B (en) Multi-modal emotion recognition method based on brain-like model
Foo et al. A boosted multi-HMM classifier for recognition of visual speech elements

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant