CN114639149A - Sick bed terminal with emotion recognition function - Google Patents
- Publication number
- CN114639149A (application CN202210269101.3A)
- Authority
- CN
- China
- Prior art keywords
- image
- unit
- face
- emotion recognition
- emotion
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/16—Devices for psychotechnics; Testing reaction times ; Devices for evaluating the psychological state
- A61B5/165—Evaluating the state of mind, e.g. depression, anxiety
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61M—DEVICES FOR INTRODUCING MEDIA INTO, OR ONTO, THE BODY; DEVICES FOR TRANSDUCING BODY MEDIA OR FOR TAKING MEDIA FROM THE BODY; DEVICES FOR PRODUCING OR ENDING SLEEP OR STUPOR
- A61M5/00—Devices for bringing media into the body in a subcutaneous, intra-vascular or intramuscular way; Accessories therefor, e.g. filling or cleaning devices, arm-rests
- A61M5/14—Infusion devices, e.g. infusing by gravity; Blood infusion; Accessories therefor
- A61M5/168—Means for controlling media flow to the body or for metering media to the body, e.g. drip meters, counters ; Monitoring media flow to the body
- A61M5/16877—Adjusting flow; Devices for setting a flow rate
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61M—DEVICES FOR INTRODUCING MEDIA INTO, OR ONTO, THE BODY; DEVICES FOR TRANSDUCING BODY MEDIA OR FOR TAKING MEDIA FROM THE BODY; DEVICES FOR PRODUCING OR ENDING SLEEP OR STUPOR
- A61M5/00—Devices for bringing media into the body in a subcutaneous, intra-vascular or intramuscular way; Accessories therefor, e.g. filling or cleaning devices, arm-rests
- A61M5/14—Infusion devices, e.g. infusing by gravity; Blood infusion; Accessories therefor
- A61M5/168—Means for controlling media flow to the body or for metering media to the body, e.g. drip meters, counters ; Monitoring media flow to the body
- A61M5/16886—Means for controlling media flow to the body or for metering media to the body, e.g. drip meters, counters ; Monitoring media flow to the body for measuring fluid flow rate, i.e. flowmeters
- A61M5/1689—Drip counters
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61M—DEVICES FOR INTRODUCING MEDIA INTO, OR ONTO, THE BODY; DEVICES FOR TRANSDUCING BODY MEDIA OR FOR TAKING MEDIA FROM THE BODY; DEVICES FOR PRODUCING OR ENDING SLEEP OR STUPOR
- A61M5/00—Devices for bringing media into the body in a subcutaneous, intra-vascular or intramuscular way; Accessories therefor, e.g. filling or cleaning devices, arm-rests
- A61M5/14—Infusion devices, e.g. infusing by gravity; Blood infusion; Accessories therefor
- A61M5/168—Means for controlling media flow to the body or for metering media to the body, e.g. drip meters, counters ; Monitoring media flow to the body
- A61M5/172—Means for controlling media flow to the body or for metering media to the body, e.g. drip meters, counters ; Monitoring media flow to the body electrical or electronic
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/243—Classification techniques relating to the number of classes
- G06F18/2431—Multiple classes
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/25—Fusion techniques
- G06F18/253—Fusion techniques of extracted features
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61M—DEVICES FOR INTRODUCING MEDIA INTO, OR ONTO, THE BODY; DEVICES FOR TRANSDUCING BODY MEDIA OR FOR TAKING MEDIA FROM THE BODY; DEVICES FOR PRODUCING OR ENDING SLEEP OR STUPOR
- A61M2205/00—General characteristics of the apparatus
- A61M2205/18—General characteristics of the apparatus with alarm
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61M—DEVICES FOR INTRODUCING MEDIA INTO, OR ONTO, THE BODY; DEVICES FOR TRANSDUCING BODY MEDIA OR FOR TAKING MEDIA FROM THE BODY; DEVICES FOR PRODUCING OR ENDING SLEEP OR STUPOR
- A61M2205/00—General characteristics of the apparatus
- A61M2205/33—Controlling, regulating or measuring
- A61M2205/3306—Optical measuring means
Abstract
The invention discloses a hospital bed terminal with an emotion recognition function, comprising a nursing call module, an infusion monitoring module and an emotion recognition module. The emotion recognition module includes: an image acquisition unit for acquiring image information of a patient; an image recognition unit for recognizing the face area in the image information; an image cutting unit for cutting a face ROI out of the face area; an image extraction unit for extracting a plurality of facial action units from the face ROI; an image graying unit for graying the face ROI to obtain a grayscale image; an edge extraction unit for extracting face edges from the grayscale image; and an emotion recognition unit that receives and processes the facial action units, the grayscale image and the face edges to output an emotion category. The hospital bed terminal with the emotion recognition function can accurately recognize the patient's emotion, which helps doctors treat the patient.
Description
Technical Field
The invention relates to a sickbed terminal with an emotion recognition function.
Background
With the trend toward intelligent devices, more and more traditional equipment is gaining intelligent functions. Most ward bedside terminals currently on the market integrate functions such as voice calling and early warning. However, existing bedside terminals have no emotion recognition function: they cannot sense a patient's emotional changes in real time, cannot promptly meet the needs of patients with expression disorders, and make it difficult to evaluate patient experience and satisfaction.
Disclosure of Invention
The invention provides a hospital bed terminal with an emotion recognition function that solves the technical problems mentioned above, specifically by the following technical scheme:
a hospital bed terminal having an emotion recognition function, comprising:
a nursing call module, operated by the patient to connect to a nurse workstation and carry out voice interaction;
an infusion monitoring module for monitoring the patient's infusion;
an emotion recognition module for recognizing the emotional state of the patient.
The emotion recognition module includes:
an image acquisition unit for acquiring image information of the patient;
an image recognition unit for recognizing the face area in the image information;
an image cutting unit for cutting a face ROI out of the face area;
an image extraction unit for extracting a plurality of facial action units from the face ROI;
an image graying unit for graying the face ROI to obtain a grayscale image;
an edge extraction unit for extracting face edges from the grayscale image;
and an emotion recognition unit for receiving and processing the facial action units, the grayscale image and the face edges so as to output an emotion category.
Further, the emotion recognition unit includes:
a first concatenation subunit for concatenating the grayscale image and the face edges;
a VGG19 convolutional neural network unit for receiving the concatenation result of the first concatenation subunit and extracting emotional features;
a feature processing unit for flattening the emotional features extracted by the VGG19 convolutional neural network unit into a 1-dimensional array;
a second concatenation subunit for concatenating the facial action units with the emotional features processed by the feature processing unit;
and a classifier unit for receiving and processing the concatenation result of the second concatenation subunit to obtain the emotion category.
Further, the classifier unit is composed of two fully connected layers and one ReLU activation function layer.
Further, the image extraction unit extracts 17 facial action units from the face ROI using the OpenFace tool, the 17 facial action units being AU01, AU02, AU04, AU05, AU06, AU07, AU09, AU10, AU12, AU14, AU15, AU17, AU20, AU23, AU25, AU26 and AU45.
Further, the image graying unit grays the face ROI by the following formula to obtain a grayscale image:
Gray = 0.299*R + 0.587*G + 0.114*B
where R, G and B are the red, green and blue channels of the RGB image, and Gray is the grayed image.
Further, the edge extraction unit extracts the face edges from the grayscale image with the Canny algorithm, with the upper and lower thresholds set to 100 and 50 respectively.
Further, the emotion recognition module further includes:
and the visual display unit is used for displaying the face image of the patient and the emotion category obtained by the analysis of the emotion recognition unit.
Further, the infusion monitoring module comprises:
a drip rate monitoring unit for monitoring the infusion equipment to determine the current infusion drip rate;
an intelligent early warning unit for judging whether the current infusion drip rate is appropriate based on the drip rate detected by the drip rate monitoring unit and the type of liquid medicine being infused;
and a whole-process detection unit for calculating, in real time, the remaining liquid medicine volume and remaining time from the drip rate detected by the drip rate monitoring unit and the type of liquid medicine being infused.
Further, the infusion monitoring module further comprises:
and an automatic cut-off unit for automatically cutting off the supply of liquid medicine when the remaining time reaches a threshold value.
Further, the infusion monitoring module sends the infusion drip rate, the remaining liquid medicine volume and the remaining time to the nurse station server in real time;
the nurse station server sends the received information to the nurse workstation's large screen for display;
when the remaining time reaches a threshold value, the nurse station server sends an early warning signal to the nurse station's early warning equipment;
and the nurse station's early warning equipment issues the warning.
The hospital bed terminal with an emotion recognition function has the beneficial effect that it can accurately recognize the patient's emotion, which helps doctors treat the patient.
It further has the advantage that the face edges and the grayscale face image are fused at the data level, guiding the network to extract image contour features, while feature-level fusion combines the facial action units with the high-level features automatically extracted by the neural network; combining prior knowledge with high-level features improves the reliability of the emotion recognition algorithm.
Drawings
Fig. 1 is a schematic diagram of emotion recognition performed by an emotion recognition module of a hospital bed terminal having an emotion recognition function according to the present invention;
fig. 2 is a schematic diagram of a network structure of an emotion recognition unit of the present invention;
fig. 3 is a display schematic diagram of the visual display unit of the present invention.
Detailed Description
The invention is described in detail below with reference to the figures and the embodiments.
The application discloses a hospital bed terminal with an emotion recognition function, mainly comprising a nursing call module, an infusion monitoring module and an emotion recognition module. The nursing call module is operated by the patient to connect to a nurse workstation and carry out voice interaction. The infusion monitoring module monitors the patient's infusion. The emotion recognition module recognizes the emotional state of the patient.
The emotion recognition module comprises an image acquisition unit, an image recognition unit, an image cutting unit, an image extraction unit, an image graying unit, an edge extraction unit and an emotion recognition unit. It monitors the patient's emotional state in real time across seven emotion categories by processing and analyzing the video captured by a visible light camera, and displays and feeds the result back to the patient through the visual display unit. The emotion recognition procedure is shown in fig. 1.
Specifically, the image acquisition unit acquires image information of the patient. In this application the image acquisition unit is a visible light camera capable of real-time video capture; the video stream is cut frame by frame into single-frame images. The image recognition unit recognizes the face area in the image information. The image cutting unit cuts a face ROI (region of interest) out of the face area. The image extraction unit extracts a plurality of facial action units from the face ROI. Specifically, it extracts 17 facial action units using the OpenFace tool: AU01, AU02, AU04, AU05, AU06, AU07, AU09, AU10, AU12, AU14, AU15, AU17, AU20, AU23, AU25, AU26 and AU45.
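As an illustrative sketch (not part of the disclosed embodiment), the ROI-cutting step above amounts to slicing the detected face box out of each frame. The bounding-box values and the detector producing them are assumptions for illustration; the patent does not specify them.

```python
import numpy as np

def crop_face_roi(frame: np.ndarray, bbox: tuple) -> np.ndarray:
    """Crop a face region of interest (ROI) from a video frame.

    `bbox` is a hypothetical (x, y, w, h) box as a face detector would
    return; the detector itself is outside the scope of this sketch.
    """
    x, y, w, h = bbox
    # Clamp the box to the frame so a detection near the border
    # cannot produce an empty or out-of-range slice.
    x0, y0 = max(x, 0), max(y, 0)
    x1 = min(x + w, frame.shape[1])
    y1 = min(y + h, frame.shape[0])
    return frame[y0:y1, x0:x1]

# A dummy 480x640 RGB frame and a hypothetical detection box.
frame = np.zeros((480, 640, 3), dtype=np.uint8)
roi = crop_face_roi(frame, (200, 100, 128, 128))
print(roi.shape)  # (128, 128, 3)
```

The clamping matters in practice: a patient near the edge of the camera view yields a box that partially leaves the frame.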
The image graying unit grays the face ROI to obtain a grayscale image via the following formula:
Gray = 0.299*R + 0.587*G + 0.114*B
where R, G and B are the red, green and blue channels of the RGB image, and Gray is the grayed image.
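The weighting above is the standard luma formula; per pixel it is a direct computation, sketched here in plain Python:

```python
def to_gray(r: int, g: int, b: int) -> float:
    """Weighted grayscale conversion using the patent's coefficients
    (the standard BT.601 luma weights, which sum to 1.0)."""
    return 0.299 * r + 0.587 * g + 0.114 * b

print(to_gray(255, 255, 255))  # white stays at full intensity, 255.0
print(to_gray(255, 0, 0))      # pure red maps to 76.245
```

Because the coefficients sum to 1.0, a gray input (R = G = B) is left unchanged, and the green channel dominates, matching human brightness perception.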
The edge extraction unit extracts the face edges from the grayscale image with the Canny algorithm; the upper and lower thresholds are set to 100 and 50 respectively.
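To illustrate what the two thresholds do, the sketch below implements only the double-threshold stage of Canny on a gradient-magnitude array; the full algorithm additionally performs Gaussian smoothing, gradient computation, non-maximum suppression and hysteresis edge tracking (in practice one would call an existing implementation such as OpenCV's `cv2.Canny(gray, 50, 100)`).

```python
import numpy as np

def double_threshold(grad_mag: np.ndarray,
                     low: float = 50.0, high: float = 100.0) -> np.ndarray:
    """Classify gradient magnitudes with Canny's two thresholds.

    Pixels >= high are strong edges (2), pixels between low and high
    are weak edges (1, kept only if connected to a strong edge during
    hysteresis), and pixels below low are suppressed (0).
    """
    out = np.zeros_like(grad_mag, dtype=np.uint8)
    out[grad_mag >= high] = 2
    out[(grad_mag >= low) & (grad_mag < high)] = 1
    return out

mag = np.array([[10.0, 60.0],
                [120.0, 49.0]])
print(double_threshold(mag))  # [[0 1] [2 0]]
```

The 100/50 pair chosen in the patent follows the common heuristic of setting the upper threshold to about twice the lower one.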
The emotion recognition unit receives and processes the facial action units, the grayscale image and the face edges, and outputs an emotion category. Fig. 2 shows its network structure. Specifically, the emotion recognition unit comprises a first concatenation subunit, a VGG19 convolutional neural network unit, a feature processing unit, a second concatenation subunit and a classifier unit.
The first concatenation subunit concatenates the grayscale image and the face edges. The VGG19 convolutional neural network unit receives the concatenation result and extracts emotional features. The feature processing unit flattens these features into a 1-dimensional array. The second concatenation subunit concatenates the facial action units with the flattened features. The classifier unit receives and processes this concatenation result to obtain the emotion category; it consists of two fully connected layers and one ReLU activation layer.
Specifically, as shown in the following formulas:
F_vgg = VGG([Gray : Edge])
F_union = [flatten(F_vgg) : AU]
F_c = Linear(ReLU(Linear(F_union; θ2, b2)); θ1, b1)
result = SoftMax(F_c)
where Gray ∈ R^(1×W×H) is the grayed face image, Edge ∈ R^(1×W×H) is the face edge map extracted by Canny, and [Gray : Edge] ∈ R^(2×W×H) is their channel-level concatenation. The face edge map is a binary image whose pixels correspond one-to-one with those of the grayscale image: edge pixels take the value 1 and all others 0, and channel-level concatenation stacks the two images pixel for pixel. This step fuses the prior knowledge of the face contour into the input, which is then fed to the VGG19 network. VGG19 is a classic convolutional neural network architecture that replaces 7×7 convolution kernels with stacks of three 3×3 kernels and 5×5 kernels with stacks of two 3×3 kernels; while keeping the same receptive field as the larger kernels, this increases network depth and improves feature extraction to a certain extent, and through training the network automatically extracts features relevant to emotion recognition. The feature extracted by the VGG19 network is denoted F_vgg ∈ R^(c×w×h). flatten() is a flattening function that turns this feature into a 1-dimensional array flatten(F_vgg) ∈ R^(c·w·h). The flattened feature is concatenated with the facial action unit intensity array AU ∈ R^17 extracted by OpenFace (this step joins two one-dimensional feature vectors into one longer one-dimensional vector), and the result is denoted F_union ∈ R^(c·w·h+17). This step integrates the high-level features extracted by the VGG19 network with the prior knowledge of the facial action units (AUs).
θ2, θ1 are the fully connected layer weights and b2, b1 the bias terms; the classifier takes F_union as input, and its output F_c is passed through SoftMax() to obtain the seven-class emotion result. Training uses the cross-entropy loss; the model pipeline is shown in fig. 2.
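The fusion and classifier head described above can be sketched numerically with NumPy. This is a toy-scale illustration only: the feature-map shape (8, 4, 4) and all weights are random placeholders standing in for the trained VGG19 output and the learned θ/b parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

def softmax(x):
    e = np.exp(x - x.max())  # subtract max for numerical stability
    return e / e.sum()

# Pretend VGG19 produced a (c, w, h) = (8, 4, 4) feature map
# (real VGG19 features are much larger).
f_vgg = rng.standard_normal((8, 4, 4))
au = rng.random(17)  # 17 AU intensity values from OpenFace

# flatten(F_vgg) concatenated with AU -> F_union in R^(c*w*h + 17)
f_union = np.concatenate([f_vgg.ravel(), au])

# Two fully connected layers with one ReLU in between, 7 output classes.
W2, b2 = rng.standard_normal((64, f_union.size)), np.zeros(64)
W1, b1 = rng.standard_normal((7, 64)), np.zeros(7)

f_c = W1 @ relu(W2 @ f_union + b2) + b1
result = softmax(f_c)
print(result.shape, round(result.sum(), 6))  # (7,) 1.0
```

The softmax output is a probability distribution over the seven emotion categories, matching the "seven-class emotion result" in the text.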
As a preferred embodiment, the emotion recognition module further includes: and a visual display unit.
The visual display unit displays the patient's face image and the emotion category produced by the emotion recognition unit. As shown in fig. 3, the whole screen shows the camera feed, guiding the patient to keep the face in view. The real-time expression classification result is displayed on the left of the interface, with the probability of each category shown as a histogram. Interval statistics are displayed on the right: the data are stored in the cloud, aggregated every 10 seconds, and presented as a radar chart.
As a preferred embodiment, the infusion monitoring module comprises: the system comprises a dripping speed monitoring unit, an intelligent early warning unit and a whole-course detection unit.
The drip rate monitoring unit monitors the infusion equipment to determine the current infusion drip rate. Specifically, it uses infrared detection, measuring the drip rate accurately from the change in infrared light intensity as each drop falls. The unit preferably uses wireless charging, which effectively avoids poor charging contacts caused by liquid medicine contamination during infusion.
The intelligent early warning unit judges whether the current infusion drip rate is appropriate based on the drip rate detected by the drip rate monitoring unit and the type of liquid medicine being infused. It intelligently determines the reasonable drip rate range for the medicine and raises an automatic warning when the rate falls outside that range.
The whole-process detection unit calculates, in real time, the remaining liquid medicine volume and remaining time from the drip rate detected by the drip rate monitoring unit and the type of liquid medicine being infused.
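The patent does not give the exact formula for this calculation; a minimal sketch of the arithmetic is shown below, assuming a common drip-chamber calibration of 20 drops/mL (an assumption — giving sets vary between 10, 15, 20 and 60 drops/mL, which is presumably what the "type of liquid medicine" parameter accounts for).

```python
def infusion_remaining(total_ml: float, infused_drops: int,
                       drip_rate_dpm: float, drops_per_ml: float = 20.0):
    """Estimate remaining volume (mL) and time (min) from the drop count.

    infused_drops : drops counted so far by the drip rate monitor
    drip_rate_dpm : current drip rate in drops per minute
    drops_per_ml  : drip-chamber calibration (assumed 20 drops/mL)
    """
    remaining_ml = total_ml - infused_drops / drops_per_ml
    remaining_min = remaining_ml * drops_per_ml / drip_rate_dpm
    return remaining_ml, remaining_min

ml, minutes = infusion_remaining(total_ml=500, infused_drops=4000,
                                 drip_rate_dpm=50)
print(ml, minutes)  # 300.0 mL left, 120.0 minutes at 50 drops/min
```

In a deployment, the remaining-time value would feed the threshold check of the automatic cut-off unit and the nurse station warnings described below.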
As a preferred embodiment, the infusion monitoring module further comprises: an automatic cutting unit.
The automatic cut-off unit automatically cuts off the supply of liquid medicine when the remaining time reaches a threshold value. It automatically detects when the infusion is nearly empty, and an active protection device clamps off the infusion tube, effectively preventing blood backflow and similar situations.
As a preferred embodiment, the infusion monitoring module sends the infusion drip rate, the remaining liquid medicine volume and the remaining time to the nurse station server in real time. The server forwards the information to the nurse workstation's large screen for display, and when the remaining time reaches a threshold value it sends an early warning signal to the nurse station's early warning equipment, which issues the warning.
Specifically, real-time infusion information for the whole ward can be summarized on the nurse station's large screen, showing the drip rate, remaining volume and remaining time. Infusion reminders for the ward can be broadcast on the screen in real time, for example: "bed 09: drip rate too fast", "bed 07: infusion complete". The ward's infusion data also supports secure storage, covering patient information, medicine details, time, drip rate, alarms and the like, with intelligent retrieval by patient, bed, time and so on. Storage, statistics and analysis of infusion data are supported; infusion big-data analysis can effectively improve work efficiency and management.
The foregoing illustrates and describes the principles, general features, and advantages of the present invention. It should be understood by those skilled in the art that the above embodiments do not limit the present invention in any way, and all technical solutions obtained by using equivalent alternatives or equivalent variations fall within the scope of the present invention.
Claims (10)
1. A hospital bed terminal having an emotion recognition function, comprising:
a nursing call module, operated by the patient to connect to a nurse workstation and carry out voice interaction;
an infusion monitoring module for monitoring the patient's infusion;
an emotion recognition module for recognizing an emotional state of the patient;
the emotion recognition module includes:
an image acquisition unit for acquiring image information of the patient;
an image recognition unit configured to recognize a face area in the image information;
an image cutting unit for cutting a face ROI out of the face area;
an image extraction unit for extracting a plurality of facial action units from the face ROI;
an image graying unit for graying the face ROI to obtain a grayscale image;
an edge extraction unit for extracting face edges from the grayscale image;
and an emotion recognition unit for receiving and processing the facial action units, the grayscale image and the face edges so as to output an emotion category.
2. The hospital bed terminal with emotion recognition function according to claim 1,
the emotion recognition unit includes:
a first concatenation subunit for concatenating the grayscale image and the face edges;
a VGG19 convolutional neural network unit for receiving the concatenation result of the first concatenation subunit and extracting emotional features;
a feature processing unit for flattening the emotional features extracted by the VGG19 convolutional neural network unit into a 1-dimensional array;
a second concatenation subunit for concatenating the facial action units with the emotional features processed by the feature processing unit;
and a classifier unit for receiving and processing the concatenation result of the second concatenation subunit to obtain the emotion category.
3. The hospital bed terminal with emotion recognition function according to claim 2,
the classifier unit is composed of two full connection layers and a ReLU activation function layer.
4. The hospital bed terminal with emotion recognition function according to claim 1,
the image extraction unit extracts 17 facial action units from the face ROI using an OpenFace tool, the 17 facial action units being AU01, AU02, AU04, AU05, AU06, AU07, AU09, AU10, AU12, AU14, AU15, AU17, AU20, AU23, AU25, AU26 and AU45.
5. The hospital bed terminal with emotion recognition function according to claim 1,
the image graying unit grays the face ROI by the following formula to obtain a grayscale image:
Gray=0.299*R+0.587*G+0.114*B
wherein, R, G, B are respectively the red, green, blue channel of RGB image, Gray is the image after the graying.
6. The hospital bed terminal with emotion recognition function according to claim 1,
the edge extraction unit extracts the face edges from the grayscale image with the Canny algorithm, with the upper and lower thresholds set to 100 and 50 respectively.
7. The hospital bed terminal with emotion recognition function according to claim 1,
the emotion recognition module further includes:
and the visual display unit is used for displaying the face image of the patient and the emotion type obtained by the analysis of the emotion recognition unit.
8. The hospital bed terminal with emotion recognition function according to claim 1,
the infusion monitoring module comprises:
a drip rate monitoring unit for monitoring the infusion equipment to determine the current infusion drip rate;
an intelligent early warning unit for judging whether the current infusion drip rate is appropriate based on the drip rate detected by the drip rate monitoring unit and the type of liquid medicine being infused;
and a whole-process detection unit for calculating, in real time, the remaining liquid medicine volume and remaining time from the drip rate detected by the drip rate monitoring unit and the type of liquid medicine being infused.
9. The hospital bed terminal with emotion recognition function according to claim 8,
the infusion monitoring module further comprises:
and an automatic cut-off unit for automatically cutting off the supply of liquid medicine when the remaining time reaches a threshold value.
10. The hospital bed terminal with emotion recognition function according to claim 9,
the infusion monitoring module sends the infusion drip rate, the remaining liquid medicine volume and the remaining time to the nurse station server in real time;
the nurse station server sends the received information to the nurse workstation's large screen for display;
when the remaining time reaches a threshold value, the nurse station server sends an early warning signal to the nurse station's early warning equipment;
and the nurse station's early warning equipment issues the warning.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210269101.3A CN114639149B (en) | 2022-03-18 | 2022-03-18 | Sick bed terminal with emotion recognition function |
Publications (2)
Publication Number | Publication Date |
---|---|
CN114639149A true CN114639149A (en) | 2022-06-17 |
CN114639149B CN114639149B (en) | 2023-04-07 |
Family
ID=81950063
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210269101.3A Active CN114639149B (en) | 2022-03-18 | 2022-03-18 | Sick bed terminal with emotion recognition function |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114639149B (en) |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102592063A (en) * | 2012-03-25 | 2012-07-18 | 河北普康医疗设备有限公司 | Digitalized information management system for nursing stations in hospitals and method for realizing same |
CN202600707U (en) * | 2012-03-25 | 2012-12-12 | 河北普康医疗设备有限公司 | Digital information management system used in hospital nurse station and nursing bed |
CN109460749A (en) * | 2018-12-18 | 2019-03-12 | 深圳壹账通智能科技有限公司 | Patient monitoring method, device, computer equipment and storage medium |
CN110020582A (en) * | 2018-12-10 | 2019-07-16 | 平安科技(深圳)有限公司 | Face Emotion identification method, apparatus, equipment and medium based on deep learning |
CN110188615A (en) * | 2019-04-30 | 2019-08-30 | 中国科学院计算技术研究所 | A kind of facial expression recognizing method, device, medium and system |
CN110516593A (en) * | 2019-08-27 | 2019-11-29 | 京东方科技集团股份有限公司 | A kind of emotional prediction device, emotional prediction method and display device |
CN112329683A (en) * | 2020-11-16 | 2021-02-05 | 常州大学 | Attention mechanism fusion-based multi-channel convolutional neural network facial expression recognition method |
CN113989890A (en) * | 2021-10-29 | 2022-01-28 | 河南科技大学 | Face expression recognition method based on multi-channel fusion and lightweight neural network |
Also Published As
Publication number | Publication date |
---|---|
CN114639149B (en) | 2023-04-07 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US10799168B2 (en) | Individual data sharing across a social network | |
US7418116B2 (en) | Imaging method and system | |
CN111292845B (en) | Intelligent nursing interaction system for intelligent ward | |
US11151610B2 (en) | Autonomous vehicle control using heart rate collection based on video imagery | |
CN106027978B (en) | Video monitoring abnormal behavior method for smart home old-age care | |
Boccignone et al. | Foveated shot detection for video segmentation | |
CN110477925A (en) | A kind of fall detection for home for the aged old man and method for early warning and system | |
CN112216065A (en) | Intelligent nursing system for behavior of old people and identification method | |
JP2019526416A (en) | Retina imaging apparatus and system having edge processing | |
CN109657571B (en) | Delivery monitoring method and device | |
CN110598643B (en) | Method and device for monitoring piglet compression | |
JP2018163644A (en) | Bed exit monitoring system | |
CN106027663A (en) | ICU nursing monitor system based on data sharing system of medical system | |
CN117038027A (en) | Nurse station information management system | |
CN109480775A (en) | A kind of icterus neonatorum identification device based on artificial intelligence, equipment, system | |
CN114639149B (en) | Sick bed terminal with emotion recognition function | |
CN114067236A (en) | Target person information detection device, detection method and storage medium | |
Chiang et al. | A vision-based human action recognition system for companion robots and human interaction | |
CN105498042B (en) | A kind of non-light-proof automatic alarm for infusion method and device thereof based on video | |
CN117315787B (en) | Infant milk-spitting real-time identification method, device and equipment based on machine vision | |
CN113947959A (en) | Remote teaching system and live broadcast problem screening system based on MR technology | |
WO2023093617A1 (en) | Children's liver disease continuous nursing method and system and storage medium | |
CN111150369A (en) | Medical assistance apparatus, medical assistance detection apparatus and method | |
CN115329128A (en) | Data processing method and device suitable for nutrition management system | |
CN107992196A (en) | A kind of man-machine interactive system of blink |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||