CN116229581A - Intelligent interconnection first-aid system based on big data - Google Patents
Info
- Publication number
- CN116229581A CN116229581A CN202310297735.4A CN202310297735A CN116229581A CN 116229581 A CN116229581 A CN 116229581A CN 202310297735 A CN202310297735 A CN 202310297735A CN 116229581 A CN116229581 A CN 116229581A
- Authority
- CN
- China
- Prior art keywords
- layer
- emergency
- output
- big data
- input
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/20—Movements or behaviour, e.g. gesture recognition
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/82—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H50/00—ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
- G16H50/70—ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for mining of medical data, e.g. analysing previous cases of other patients
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16Y—INFORMATION AND COMMUNICATION TECHNOLOGY SPECIALLY ADAPTED FOR THE INTERNET OF THINGS [IoT]
- G16Y20/00—Information sensed or collected by the things
- G16Y20/40—Information sensed or collected by the things relating to personal data, e.g. biometric data, records or preferences
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16Y—INFORMATION AND COMMUNICATION TECHNOLOGY SPECIALLY ADAPTED FOR THE INTERNET OF THINGS [IoT]
- G16Y40/00—IoT characterised by the purpose of the information processing
- G16Y40/20—Analytics; Diagnosis
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16Y—INFORMATION AND COMMUNICATION TECHNOLOGY SPECIALLY ADAPTED FOR THE INTERNET OF THINGS [IoT]
- G16Y40/00—IoT characterised by the purpose of the information processing
- G16Y40/50—Safety; Security of things, users, data or systems
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16Y—INFORMATION AND COMMUNICATION TECHNOLOGY SPECIALLY ADAPTED FOR THE INTERNET OF THINGS [IoT]
- G16Y40/00—IoT characterised by the purpose of the information processing
- G16Y40/60—Positioning; Navigation
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02A—TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
- Y02A90/00—Technologies having an indirect contribution to adaptation to climate change
- Y02A90/10—Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation
Abstract
The invention relates to the technical field of big data and discloses an intelligent interconnected first-aid system based on big data, comprising: a first model generation module for building a first model, the first model comprising a first layer, a second layer and a third layer, wherein the input of the first layer is a moving image of the user and its output is movement-disorder features; the input of the third layer is the user's audio and its output is sound-disorder features; the second layer is an RNN neural network comprising an A layer, a B layer, a C layer and a D layer; the A layer and the B layer are connected to the outputs of the first layer and the third layer respectively, their outputs are the input of the C layer, and the outputs of the D layer correspond to different emergency-condition classifications; and an emergency decision module that selects a corresponding treatment strategy based on the identified emergency-condition classification. By processing and judging, based on big data and deep learning, the user information collected through the Internet of Things, the invention can discover in time the potential danger reflected in a patient's behaviour and initiate emergency treatment.
Description
Technical Field
The invention relates to the technical field of big data, in particular to an intelligent interconnection first-aid system based on big data.
Background
Existing emergency response depends on the affected person actively calling for help. Because lay persons lack medical knowledge, they are often insufficiently aware of the early movement and speech disorders of some acute diseases, and once such a disease strikes, the patient may lose the ability to call for help at all.
Disclosure of Invention
The invention provides an intelligent interconnected first-aid system based on big data, which addresses the technical problem in the related art that patients may be unable to call for help by themselves.
The invention provides an intelligent interconnection first-aid system based on big data, which comprises:
a first information acquisition module for acquiring a moving image of a user;
a first model generation module for building a first model, the first model comprising a first layer, a second layer and a third layer, wherein the input of the first layer is a moving image of the user and the output of the first layer is movement-disorder features;
the input of the third layer is the user's audio and the output of the third layer is sound-disorder features;
the second layer is an RNN neural network comprising an A layer, a B layer, a C layer and a D layer;
the A layer and the B layer are connected to the outputs of the first layer and the third layer respectively; the outputs of the A layer and the B layer are the input of the C layer; the C layer comprises a plurality of RNN units, each of which takes as input one movement-disorder feature and one sound-disorder feature; the D layer is connected to the output of the last RNN unit and outputs two or more results, each corresponding to a different emergency-condition classification;
a state recognition module for inputting the current user's moving image and audio into the first model to generate an emergency-condition classification;
an emergency decision module that selects a corresponding treatment strategy based on the identified emergency situation classification;
and an interconnection positioning module for locating the user based on the user's terminal device.
Further, the D layer outputs two results, corresponding to the two categories of requiring emergency treatment and not requiring emergency treatment, respectively.
Further, the D layer outputs three results, corresponding to no abnormality, an abnormality not requiring emergency treatment, and an abnormality requiring emergency treatment, respectively.
Further, the first layer, the third layer and the second layer of the first model are each trained separately.
Further, the memory s_t of the t-th RNN unit of the C layer is calculated as:
s_t = f(W*s_{t-1} + U1*x_t + U2*y_t)
where W is the weight applied to the previous memory s_{t-1}, U1 is the weight of the movement-disorder feature input to the t-th RNN unit, U2 is the weight of the sound-disorder feature input to the t-th RNN unit, x_t is the movement-disorder feature input to the t-th RNN unit, y_t is the sound-disorder feature input to the t-th RNN unit, and f is an activation function;
the output of the t-th RNN unit of the C layer is o_t = g(V*s_t), where V is the output weight, s_t is the memory of the t-th RNN unit, and g is an activation function.
Further, the D layer vectorizes the output of the last RNN unit of the C layer and multiplies it by a classification weight matrix to obtain a plurality of outputs; the size of the classification weight matrix is N×M, where N is the dimension of the vectorized output of the last RNN unit of the C layer and M is the number of output results.
Further, the A layer adopts a MASK R-CNN neural network.
Further, the B layer adopts an LSTM-type neural network: the user's audio is processed into a time sequence and then input into a BLSTM (bidirectional LSTM) neural network.
Further, the D layer is also connected to an E layer; the E layer comprises probability units connected one-to-one with the outputs of the D layer, and the output G_j of a probability unit is calculated as:
G_j = exp(S_j) / Σ_{k∈K} exp(S_k)
where S_j is the j-th output value of the D layer, S_k is the k-th output value of the D layer, and K is the set of output items of the D layer.
Further, the loss function for training the second layer is:
L = -(1/n) Σ_{i=1..n} Σ_{j=1..k} y_{i,j} * log(ŷ_{i,j}) + L_reg
where y_{i,j} is the true value indicating whether the i-th sample belongs to the j-th emergency-condition classification, ŷ_{i,j} is the model's predicted value that the i-th sample belongs to the j-th emergency-condition classification, k is the total number of emergency-condition classifications, n is the total number of samples in the training set, and L_reg is an L1 regularization term.
The invention has the beneficial effects that:
the invention processes and judges the user information collected by the Internet of things based on big data and deep learning, can timely find potential danger reflected by patient behaviors, carries out emergency treatment, is used as an emergency system independent of self help seeking of the user, and can effectively improve the life safety of the user.
Drawings
Fig. 1 is a schematic diagram of the intelligent interconnected emergency system based on big data according to the present invention.
In the figure: the system comprises a first information acquisition module 101, a first model generation module 102, a state recognition module 103, an emergency decision module 104 and an interconnection positioning module 105.
Detailed Description
The subject matter described herein will now be discussed with reference to example embodiments. It is to be understood that these embodiments are merely discussed so that those skilled in the art may better understand and implement the subject matter described herein and that changes may be made in the function and arrangement of the elements discussed without departing from the scope of the disclosure herein. Various examples may omit, replace, or add various procedures or components as desired. In addition, features described with respect to some examples may be combined in other examples as well.
Example 1
As shown in fig. 1, an intelligent interconnection emergency system based on big data includes:
a first information acquisition module 101 for acquiring a moving image of a user;
and the second information acquisition module is used for acquiring the audio data of the user.
The first model generation module 102 is configured to build a first model, the first model comprising a first layer, a second layer and a third layer, wherein the input of the first layer is a moving image of the user and the output of the first layer is movement-disorder features;
the input of the third layer is the user's audio and the output of the third layer is sound-disorder features;
the second layer is an RNN neural network comprising an A layer, a B layer, a C layer and a D layer;
the A layer and the B layer are connected to the outputs of the first layer and the third layer respectively; the outputs of the A layer and the B layer are the input of the C layer; the C layer comprises a plurality of RNN units, each of which takes as input one movement-disorder feature and one sound-disorder feature; the D layer is connected to the output of the last RNN unit and outputs two or more results, each corresponding to a different emergency-condition classification.
in one embodiment of the invention, the D layer outputs two results, corresponding to two categories requiring emergency and not requiring emergency, respectively.
In one embodiment of the present invention, the D layer outputs three results, corresponding to no abnormality, an abnormality not requiring emergency treatment, and an abnormality requiring emergency treatment, respectively.
The first layer, the third layer and the second layer of the first model are each trained separately.
In one embodiment of the invention, the memory s_t of the t-th RNN unit of the C layer is calculated as:
s_t = f(W*s_{t-1} + U1*x_t + U2*y_t)
where W is the weight applied to the previous memory s_{t-1}, U1 is the weight of the movement-disorder feature input to the t-th RNN unit, U2 is the weight of the sound-disorder feature input to the t-th RNN unit, x_t is the movement-disorder feature input to the t-th RNN unit, y_t is the sound-disorder feature input to the t-th RNN unit, and f is an activation function;
the output of the t-th RNN unit of the C layer is o_t = g(V*s_t), where V is the output weight, s_t is the memory of the t-th RNN unit, and g is an activation function.
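The recurrence above can be sketched in NumPy. This is an illustrative toy, not the patented implementation: the dimensions, the choice of tanh for both activations f and g, and the random weights are all assumptions, since the patent fixes none of them.

```python
import numpy as np

def rnn_step(s_prev, x_t, y_t, W, U1, U2, V):
    """One C-layer RNN unit: fuses a movement-disorder feature x_t and a
    sound-disorder feature y_t with the previous memory s_prev."""
    s_t = np.tanh(W @ s_prev + U1 @ x_t + U2 @ y_t)  # memory update, f = tanh
    o_t = np.tanh(V @ s_t)                           # unit output, g = tanh
    return s_t, o_t

# Toy dimensions (assumed): hidden size 8, movement features 4-dim, sound 3-dim.
rng = np.random.default_rng(0)
h, dx, dy = 8, 4, 3
W  = rng.normal(size=(h, h)) * 0.1   # weight on the previous memory
U1 = rng.normal(size=(h, dx)) * 0.1  # weight on movement-disorder features
U2 = rng.normal(size=(h, dy)) * 0.1  # weight on sound-disorder features
V  = rng.normal(size=(h, h)) * 0.1   # output weight

s = np.zeros(h)
for t in range(5):                   # five paired feature steps
    x_t = rng.normal(size=dx)
    y_t = rng.normal(size=dy)
    s, o = rnn_step(s, x_t, y_t, W, U1, U2, V)
```

After the loop, `s` is the memory of the last RNN unit, which is what the D layer consumes.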
The output of the last RNN unit of the C layer is vectorized and then multiplied by a classification weight matrix to obtain a plurality of outputs;
the size of the classification weight matrix is N×M, where N is the dimension of the vectorized output of the last RNN unit of the C layer and M is the number of output results.
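A minimal sketch of that D-layer projection; the concrete shapes and the choice of M = 3 classifications are assumptions for illustration only:

```python
import numpy as np

rng = np.random.default_rng(1)
last_output = rng.normal(size=(2, 4))  # stand-in for the last RNN unit's output
v = last_output.reshape(-1)            # vectorize: N = 8
N, M = v.size, 3                       # M = number of emergency-condition classes (assumed)
W_cls = rng.normal(size=(N, M)) * 0.1  # the N x M classification weight matrix
scores = v @ W_cls                     # M raw scores, one per classification
```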
In one embodiment of the invention, the A layer employs a MASK R-CNN neural network.
In one embodiment of the invention, the B layer employs an LSTM-type neural network: the user's audio is processed into a time sequence and then input into a BLSTM (bidirectional LSTM) neural network.
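The patent does not specify how the audio is "processed into a time sequence"; a common choice is to split the waveform into fixed-length overlapping frames and feed them to the (B)LSTM step by step. The frame length and hop size below are assumed values:

```python
import numpy as np

def frame_audio(wave, frame_len, hop):
    """Split a 1-D waveform into overlapping frames, producing the
    time sequence an LSTM-style network consumes one step at a time."""
    n = 1 + max(0, (len(wave) - frame_len) // hop)
    return np.stack([wave[i * hop : i * hop + frame_len] for i in range(n)])

wave = np.sin(np.linspace(0.0, 20.0, 1600))  # stand-in for 0.1 s of 16 kHz audio
frames = frame_audio(wave, frame_len=400, hop=160)  # 8 frames of 400 samples
```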
In one embodiment of the invention, the D layer is also connected to an E layer; the E layer comprises probability units connected one-to-one with the outputs of the D layer, and the output G_j of a probability unit is calculated as:
G_j = exp(S_j) / Σ_{k∈K} exp(S_k)
where S_j is the j-th output value of the D layer, S_k is the k-th output value of the D layer, and K is the set of output items of the D layer.
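The probability unit described here is the standard softmax, which turns the D layer's raw scores into a probability distribution over classifications. A sketch (the max-subtraction is a conventional numerical-stability trick, not part of the patent's description):

```python
import numpy as np

def probability_unit(scores):
    """E-layer probability unit: G_j = exp(S_j) / sum_k exp(S_k)."""
    e = np.exp(scores - scores.max())  # subtract max for numerical stability
    return e / e.sum()

G = probability_unit(np.array([2.0, 1.0, 0.5]))  # three D-layer output values
```

The outputs sum to one, and the largest score receives the largest probability.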
The loss function for training the D layer is:
L = -(1/n) Σ_{i=1..n} Σ_{j=1..k} y_{i,j} * log(ŷ_{i,j}) + L_reg
where y_{i,j} is the true value indicating whether the i-th sample belongs to the j-th emergency-condition classification, ŷ_{i,j} is the model's predicted value that the i-th sample belongs to the j-th emergency-condition classification, k is the total number of emergency-condition classifications, n is the total number of samples in the training set, and L_reg is an L1 regularization term.
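A sketch of this loss under the stated reading (mean cross-entropy over the training set plus an L1 penalty on the parameters); the regularization strength `lam` and the small epsilon inside the log are assumed details:

```python
import numpy as np

def loss(y_true, y_pred, params, lam=1e-3):
    """Cross-entropy over n samples and k classes, plus an L1 penalty
    L_reg = lam * sum(|theta|) over the model parameters."""
    n = y_true.shape[0]
    ce = -np.sum(y_true * np.log(y_pred + 1e-12)) / n
    l_reg = lam * sum(np.abs(p).sum() for p in params)
    return ce + l_reg

y_true = np.array([[1, 0], [0, 1]], dtype=float)  # n = 2 samples, k = 2 classes
y_pred = np.array([[0.9, 0.1], [0.2, 0.8]])       # model probabilities
L = loss(y_true, y_pred, params=[np.array([0.5, -0.5])])
```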
A state recognition module 103 inputs the current user's moving image and audio into the first model to generate an emergency-condition classification;
an emergency decision module 104 that selects a corresponding treatment strategy based on the identified emergency situation classification;
for example, for emergency situations classified as requiring emergency, the user location is located directly and then ambulances and medical personnel are dispatched.
For example, for emergency situations classified as not requiring emergency, the user can be contacted by telephone to communicate with the user, and further judge whether emergency is required.
In one embodiment of the present invention, a preliminary judgment of the user's condition is made based on the user's medical record and/or the user's moving image and audio, so that the medical apparatus to be carried can be prepared; this is a conventional technical means in the art and is not described further here.
An interconnection positioning module 105, which positions the user based on the user's terminal device.
In the above embodiment of the present invention, both the user's moving image and the user's audio are captured by terminal devices on the Internet of Things; the terminal device used to locate the user may be an Internet of Things device such as the user's mobile phone or a camera in the user's room.
The invention has been described above with reference to embodiments, but it is not limited to the specific implementations described, which are merely illustrative and not restrictive; many variations can be made by those of ordinary skill in the art in light of this disclosure without departing from its scope.
Claims (10)
1. An intelligent interconnected first-aid system based on big data, comprising:
a first information acquisition module for acquiring a moving image of a user;
the first model generation module is used for building a first model, the first model comprising a first layer, a second layer and a third layer, wherein the input of the first layer is a moving image of the user and the output of the first layer is movement-disorder features;
the input of the third layer is the user's audio and the output of the third layer is sound-disorder features;
the second layer is an RNN neural network comprising an A layer, a B layer, a C layer and a D layer;
the A layer and the B layer are connected to the outputs of the first layer and the third layer respectively; the outputs of the A layer and the B layer are the input of the C layer; the C layer comprises a plurality of RNN units, each of which takes as input one movement-disorder feature and one sound-disorder feature; the D layer is connected to the output of the last RNN unit and outputs two or more results, each corresponding to a different emergency-condition classification;
a state recognition module for inputting the current user's moving image and audio into the first model to generate an emergency-condition classification;
an emergency decision module that selects a corresponding treatment strategy based on the identified emergency situation classification;
and an interconnection positioning module for locating the user based on the user's terminal device.
2. The intelligent interconnected emergency system based on big data of claim 1, wherein the D layer outputs two results, corresponding to the two categories of requiring emergency treatment and not requiring emergency treatment, respectively.
3. The intelligent interconnected emergency system based on big data of claim 1, wherein the D layer outputs three results, corresponding to no abnormality, an abnormality not requiring emergency treatment, and an abnormality requiring emergency treatment, respectively.
4. The intelligent interconnected emergency system based on big data of claim 1, wherein the first layer, the third layer, and the second layer of the first model are each trained.
5. The intelligent interconnected emergency system based on big data according to claim 1, wherein the memory s_t of the t-th RNN unit of the C layer is calculated as:
s_t = f(W*s_{t-1} + U1*x_t + U2*y_t)
where W is the weight applied to the previous memory s_{t-1}, U1 is the weight of the movement-disorder feature input to the t-th RNN unit, U2 is the weight of the sound-disorder feature input to the t-th RNN unit, x_t is the movement-disorder feature input to the t-th RNN unit, y_t is the sound-disorder feature input to the t-th RNN unit, and f is an activation function;
the output of the t-th RNN unit of the C layer is o_t = g(V*s_t), where V is the output weight, s_t is the memory of the t-th RNN unit, and g is an activation function.
6. The intelligent interconnected emergency system based on big data according to claim 1, wherein the D layer vectorizes the output of the last RNN unit of the C layer and multiplies it by a classification weight matrix to obtain a plurality of outputs; the size of the classification weight matrix is N×M, where N is the dimension of the vectorized output of the last RNN unit of the C layer and M is the number of output results.
7. The intelligent interconnected emergency system based on big data as set forth in claim 6, wherein the layer a employs a MASK R-CNN neural network.
8. The intelligent interconnected emergency system based on big data according to claim 1, wherein the B layer adopts an LSTM-type neural network: the user's audio is processed into a time sequence and then input into a BLSTM neural network.
9. The intelligent interconnected emergency system based on big data according to claim 1, wherein the D layer is further connected to an E layer, the E layer comprises probability units connected one-to-one with the outputs of the D layer, and the output G_j of a probability unit is calculated as:
G_j = exp(S_j) / Σ_{k∈K} exp(S_k)
where S_j is the j-th output value of the D layer, S_k is the k-th output value of the D layer, and K is the set of output items of the D layer.
10. The intelligent interconnected emergency system based on big data of claim 1, wherein the loss function for training the second layer is:
L = -(1/n) Σ_{i=1..n} Σ_{j=1..k} y_{i,j} * log(ŷ_{i,j}) + L_reg
where y_{i,j} is the true value indicating whether the i-th sample belongs to the j-th emergency-condition classification, ŷ_{i,j} is the model's predicted value that the i-th sample belongs to the j-th emergency-condition classification, k is the total number of emergency-condition classifications, n is the total number of samples in the training set, and L_reg is an L1 regularization term.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310297735.4A CN116229581B (en) | 2023-03-23 | 2023-03-23 | Intelligent interconnection first-aid system based on big data |
Publications (2)
Publication Number | Publication Date |
---|---|
CN116229581A (en) | 2023-06-06 |
CN116229581B (en) | 2023-09-19 |
Family
ID=86573146
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202310297735.4A Active CN116229581B (en) | 2023-03-23 | 2023-03-23 | Intelligent interconnection first-aid system based on big data |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN116229581B (en) |
Citations (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108764059A (en) * | 2018-05-04 | 2018-11-06 | 南京邮电大学 | A human-body behavior recognition method and system based on a neural network |
CN109326355A (en) * | 2018-08-16 | 2019-02-12 | 浙江树人学院 | A firefighter breathing-sound monitoring earphone and physical condition assessment method |
US10210860B1 (en) * | 2018-07-27 | 2019-02-19 | Deepgram, Inc. | Augmented generalized deep learning with special vocabulary |
CN109399437A (en) * | 2018-12-28 | 2019-03-01 | 济南浪潮高新科技投资发展有限公司 | Escalator emergency braking system and method based on human action identification |
CN110786859A (en) * | 2018-08-03 | 2020-02-14 | 格力电器(武汉)有限公司 | Emergency alarm method, device and system |
US20200388287A1 (en) * | 2018-11-13 | 2020-12-10 | CurieAI, Inc. | Intelligent health monitoring |
CN112861769A (en) * | 2021-03-02 | 2021-05-28 | 武汉爱科森网络科技有限公司 | Intelligent monitoring and early warning system and method for aged people |
US20210272580A1 (en) * | 2020-03-02 | 2021-09-02 | Espressif Systems (Shanghai) Co., Ltd. | System and method for offline embedded abnormal sound fault detection |
WO2021212883A1 (en) * | 2020-04-20 | 2021-10-28 | 电子科技大学 | Fall detection method based on intelligent mobile terminal |
CN114403878A (en) * | 2022-01-20 | 2022-04-29 | 南通理工学院 | Voice fatigue detection method based on deep learning |
CN114566275A (en) * | 2022-02-21 | 2022-05-31 | 山东大学齐鲁医院 | Pre-hospital emergency auxiliary system based on mixed reality |
CN115271002A (en) * | 2022-09-29 | 2022-11-01 | 广东机电职业技术学院 | Identification method, first-aid decision method, medium and life health intelligent monitoring system |
WO2022256942A1 (en) * | 2021-06-10 | 2022-12-15 | Bozena Kaminska | Wearable device including processor and accelerator |
CN115641610A (en) * | 2022-10-14 | 2023-01-24 | 沈阳瞻言科技有限公司 | Hand-waving help-seeking identification system and method |
- 2023-03-23: Application CN202310297735.4A filed; patent CN116229581B granted (Active)
Non-Patent Citations (5)
Title |
---|
CHARALAMPOS N. DOUKAS et al.: "Emergency Fall Incidents Detection in Assisted Living Environments Utilizing Motion, Sound, and Visual Perceptual Components", IEEE Transactions on Information Technology in Biomedicine, vol. 15, no. 2, pages 277-289, XP011373669, DOI: 10.1109/TITB.2010.2091140 * |
SAYED KHUSHAL SHAH et al.: "Audio IoT Analytics for Home Automation Safety", 2018 IEEE International Conference on Big Data (Big Data), pages 5181-5186 * |
REN Guofeng: "Research on Speech Emotion Recognition Fusing Kinematic and Acoustic Features", China Doctoral Dissertations Full-text Database, Information Science and Technology, no. 2019, pages 136-25 * |
LIU Yaodong et al.: "Design of a Remote Health Management and First-Aid System Based on Wireless Networks", Computer Technology and Development, vol. 22, no. 3, pages 129-132 * |
JI Xueying et al.: "A Brief Analysis of Consciousness Disorders in Children Receiving Pre-hospital Care", Chinese Disaster Rescue Medicine, vol. 9, no. 9, pages 1219-1222 * |
Also Published As
Publication number | Publication date |
---|---|
CN116229581B (en) | 2023-09-19 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
EP3977479B1 (en) | Method for assisting an interviewing party | |
CN112347908B (en) | Surgical instrument image identification method based on space grouping attention model | |
Tasoulis et al. | Statistical data mining of streaming motion data for activity and fall recognition in assistive environments | |
CN111582396A (en) | Fault diagnosis method based on improved convolutional neural network | |
CN112101096A (en) | Suicide emotion perception method based on multi-mode fusion of voice and micro-expression | |
Renjith et al. | Speech based emotion recognition in Tamil and Telugu using LPCC and hurst parameters—A comparitive study using KNN and ANN classifiers | |
CN110244854A (en) | A kind of artificial intelligence approach of multi-class eeg data identification | |
CN110047518A (en) | A kind of speech emotional analysis system | |
CN114724224A (en) | Multi-mode emotion recognition method for medical care robot | |
CN112101097A (en) | Depression and suicide tendency identification method integrating body language, micro expression and language | |
CN112466284B (en) | Mask voice identification method | |
CN116229581B (en) | Intelligent interconnection first-aid system based on big data | |
Rony et al. | An effective approach to communicate with the deaf and mute people by recognizing characters of one-hand bangla sign language using convolutional neural-network | |
Khan et al. | Intelligent Malaysian sign language translation system using convolutional-based attention module with residual network | |
CN117198468A (en) | Intervention scheme intelligent management system based on behavior recognition and data analysis | |
Bhattacharjee et al. | A comparative study of supervised learning techniques for human activity monitoring using smart sensors | |
CN116313127A (en) | Decision support system based on pre-hospital first-aid big data | |
CN110738985A (en) | Cross-modal biometric feature recognition method and system based on voice signals | |
Azam et al. | Classification of COVID-19 symptoms using multilayer perceptron | |
Kishore et al. | A hybrid method for activity monitoring using principal component analysis and back-propagation neural network | |
Monica et al. | Recognition of medicine using cnn for visually impaired | |
CN113887339A (en) | Silent voice recognition system and method fusing surface electromyogram signal and lip image | |
CN107180236B (en) | Multi-modal emotion recognition method based on brain-like model | |
Park et al. | A study on hybrid model of HMMs and GMMs for mirror neuron system modeling using EEG signals | |
CN117373658B (en) | Data processing-based auxiliary diagnosis and treatment system and method for depression |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||