CN117219276A - Nutritional status prediction device, system, method and electronic equipment - Google Patents


Info

Publication number
CN117219276A
Authority
CN
China
Prior art keywords
user
medical record
image
electronic medical
nutritional status
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311188415.1A
Other languages
Chinese (zh)
Inventor
陈伟
王雪
胡佳慧
姚宽达
赵伟
陈沫汐
韩美芬
赵琬清
付锦
方安
唐泳
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Futong Zhikang Technology Co ltd
Peking Union Medical College Hospital Chinese Academy of Medical Sciences
Original Assignee
Beijing Futong Zhikang Technology Co ltd
Peking Union Medical College Hospital Chinese Academy of Medical Sciences
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Futong Zhikang Technology Co ltd, Peking Union Medical College Hospital Chinese Academy of Medical Sciences filed Critical Beijing Futong Zhikang Technology Co ltd
Priority to CN202311188415.1A priority Critical patent/CN117219276A/en
Publication of CN117219276A publication Critical patent/CN117219276A/en
Pending legal-status Critical Current

Landscapes

  • Medical Treatment And Welfare Office Work (AREA)

Abstract

The application discloses a nutritional status prediction device, system, method, and electronic device. The prediction device comprises: a medical record receiving module for acquiring a user's electronic medical record; a medical record processing module for extracting the various parameters carried by the electronic medical record; an image obtaining module for obtaining an image of the user's facial area; and a prediction module for extracting image features of the facial area from the image and obtaining a nutritional status prediction result for the user from the parameters and the image features. By combining the patient's electronic medical record data with features of the facial area, the application predicts the patient's nutritional status; when medical staff determine the patient's nutritional status, the prediction result can serve as a reference, effectively improving the accuracy of nutritional status prediction.

Description

Nutritional status prediction device, system, method and electronic equipment
Technical Field
The present application relates to the medical field, and more particularly, to a nutritional status prediction apparatus, system, method, and electronic device.
Background
When a new patient is admitted, the hospital should determine the patient's nutritional status within 24 hours; if the patient is malnourished, hospitalization outcomes will be affected, for example through a prolonged stay or increased mortality. Because there are no unified indicators, and the staff who perform nutritional status assessment are mostly medical personnel without clinical nutrition expertise, the accuracy of nutritional status determination is low.
Disclosure of Invention
In view of the above, the present application provides a nutritional status prediction device, system, method, and electronic device to solve the problem of low accuracy in determining nutritional status.
To achieve the above object, the following solutions are proposed:
a nutritional status prediction device, the device comprising:
the medical record receiving module is used for acquiring the electronic medical record of the user;
the medical record processing module is used for extracting various parameters carried by the electronic medical record from the electronic medical record;
an image obtaining module for obtaining an image of a face area of the user;
and the prediction module is used for extracting the image characteristics of the face area from the image and acquiring a nutritional status prediction result of the user according to the various parameters and the image characteristics.
Optionally, the apparatus further includes:
and the nutrition support suggestion module is used for generating and outputting nutrition support recommendation information based on the nutrition condition prediction result of the user, wherein the nutrition support recommendation information comprises at least one of diet guidance, oral nutrition supplement, enteral nutrition support and parenteral nutrition support.
Optionally, the medical record processing module is specifically configured to:
if the data category in the electronic medical record is natural language data, acquiring various parameters carried by the electronic medical record through a pre-trained named entity recognition model;
and/or
And if the data category in the electronic medical record is structured data, extracting various parameters carried by the electronic medical record.
Optionally, the image obtaining module includes:
a picture obtaining subunit, configured to obtain a first picture, a second picture, and a third picture of the user's face, where the first picture is a frontal photo of the user's face, the second picture is a 90° left-profile photo, and the third picture is a 90° right-profile photo;
the key point extraction subunit is used for extracting facial key points in the first picture, the second picture and the third picture through a preset key point detection model;
and the area determination subunit is used for determining an image of a face key area in the face area of the user according to the face key points.
Optionally, the prediction module includes:
a feature acquisition subunit for extracting image features of the face region from the image by convolution;
and the full-connection subunit is used for carrying out fusion splicing on the multiple parameters and the image characteristics to obtain fusion characteristics, classifying the fusion characteristics, and obtaining a nutritional status prediction result of the user based on the classification result.
A nutritional status prediction system, the system comprising: user terminal equipment, doctor terminal equipment and a server,
the doctor-side equipment stores the electronic medical record of the user and uploads the electronic medical record of the user to the server;
the user side equipment is provided with a camera, acquires an image of the facial area of the user and uploads the image to the server;
the server extracts various parameters carried by the electronic medical record from the electronic medical record, extracts image features of the face area from the image, and obtains a nutritional status prediction result of the user according to the various parameters and the image features.
Optionally, the server further transmits the nutritional status prediction result to the user terminal device and/or the doctor terminal device.
Optionally, the server further generates nutritional support recommendation information of the user based on the nutritional status prediction result, and outputs the nutritional support recommendation information to the user side device and/or the doctor side device, wherein the nutritional support recommendation information includes at least one of meal guidance, oral nutritional supplement, enteral nutritional support, and parenteral nutritional support.
A method of nutritional status prediction, the method comprising:
acquiring an electronic medical record of a user;
extracting various parameters carried by the electronic medical record from the electronic medical record;
obtaining an image of a facial region of the user;
and extracting image features of the face area from the image, and acquiring a nutritional status prediction result of the user according to the various parameters and the image features.
An electronic device includes a memory and a processor;
the memory is used for storing programs;
the processor is configured to execute the program to implement the steps of the nutritional status prediction method described above.
The application provides a nutritional status prediction device, system, method, and electronic device. The device predicts a patient's nutritional status by combining the patient's electronic medical record data with features of the facial area; when medical staff determine the patient's nutritional status, the prediction result can serve as a reference, effectively improving the accuracy of nutritional status prediction.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings that are required to be used in the embodiments or the description of the prior art will be briefly described below, and it is obvious that the drawings in the following description are only embodiments of the present application, and that other drawings can be obtained according to the provided drawings without inventive effort for a person skilled in the art.
Fig. 1 is a schematic structural diagram of a nutritional status prediction apparatus according to an embodiment of the present application;
fig. 2 is a schematic diagram of a facial image according to an embodiment of the present application;
FIG. 3 is a schematic flow chart of a nutritional status prediction method according to an embodiment of the present application;
fig. 4 is a block diagram of a hardware structure of an electronic device according to an embodiment of the present application.
Detailed Description
The following description of the embodiments of the present application will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present application, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application.
As shown in fig. 1, an embodiment of the present application provides a nutritional status prediction apparatus, which may include:
the medical record receiving module 100 is configured to obtain an electronic medical record of a user.
The electronic medical record may be text data recording the user's personal data. The user's electronic medical record data may be stored on a server for convenient querying by the user or a doctor. In this embodiment, the user's electronic medical record can be obtained directly from the server, and the user's personal data learned from it.
The medical record processing module 110 is configured to extract various parameters carried by the electronic medical record from the electronic medical record.
The parameters carried by the electronic medical record may be parameters characterizing various indicators of the user's body, and may be divided into structured parameters and annotated parameters.
The structured parameters may be standard parameters natively present in the electronic medical record. Optionally, they may be classified into the user's disease diagnoses, vital signs (body temperature and pulse), laboratory test results (platelets, hemoglobin, white blood cells, red blood cells, hematocrit, absolute lymphocyte count, glucose, albumin, urea, creatinine, uric acid, total protein, prealbumin, alanine aminotransferase, C-reactive protein, neutrophil percentage, lymphocyte percentage, chloride, direct bilirubin, sodium, calcium, potassium, inorganic phosphorus, and aspartate aminotransferase), and demographic information about the user (age, body mass index, gender, education level, marital status, and insurance type).
The annotated parameters may be manually annotated natural text; optionally, they may include occult blood, weight loss, reduced food intake, digestive tract symptoms, hemorrhage, anemia, fever, edema, diarrhea, generally poor condition, and eating difficulties.
In this embodiment, parameters of different categories in the electronic medical record can be extracted by different extraction methods. The structured parameters may be stored on the server in a standard data format, and this embodiment can obtain them directly from the server. The annotated parameters can be extracted by a text processing model.
An image obtaining module 120 is configured to obtain an image of a face area of a user.
The user's face may reflect the user's physical condition to some extent. The facial area may be the key facial regions or the complete facial area. The key facial regions may be the main, representative regions of the user's face; optionally, the temple, cheek, upper eyelid, lower eyelid, and earlobe regions. The facial image can be acquired in various ways; optionally, by photographing the user's face, or by scanning the user's face in real time.
The prediction module 130 is configured to extract image features of the facial area from the image, and obtain a nutritional status prediction result of the user according to various parameters and the image features.
The facial area can indirectly represent the user's physical condition, so this embodiment can use the facial area or the key facial regions to predict nutritional status. The parameters carried in the electronic medical record can directly represent the user's physical condition. This embodiment extracts image features of the facial area or key facial regions and integrates those image features with the parameters carried in the electronic medical record to predict the user's nutritional status, which can effectively improve the accuracy of the prediction.
An embodiment of the present application provides a nutritional status prediction device that predicts a patient's nutritional status by combining the patient's electronic medical record data with features of the facial area; when medical staff determine the patient's nutritional status, the prediction result can serve as a reference, effectively improving the accuracy of nutritional status prediction.
In another nutritional status prediction apparatus provided according to an embodiment of the present application, the apparatus may further include:
and a nutrition support suggestion module for generating and outputting nutrition support recommendation information based on the nutritional status prediction result of the user, wherein the nutrition support recommendation information comprises at least one of diet guidance, oral nutrition supplement, enteral nutrition support and parenteral nutrition support.
The nutritional support recommendation information may be nutritional supplementation information suited to the user's current physical condition. After the user's nutritional status prediction result is obtained, the nutritional support recommendation information can be determined from it, making it convenient for the user to plan meals accordingly, or for a doctor to determine the user's nutritional supplementation approach. For example, if the user's predicted nutritional status is mild to moderate malnutrition, the user may be advised to begin oral nutritional supplementation.
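As an illustrative sketch (not the patent's implementation), the mapping from a predicted status to recommendation text can be a simple lookup; the class names and recommendation lists below are assumptions:

```python
# Hypothetical mapping from a predicted nutritional-status class to
# nutrition-support recommendations. Class names and recommendation
# contents are illustrative assumptions, not taken from the patent.
RECOMMENDATIONS = {
    "normal": ["diet guidance"],
    "mild_to_moderate_malnutrition": ["diet guidance", "oral nutritional supplement"],
    "severe_malnutrition": ["enteral nutritional support", "parenteral nutritional support"],
}

def recommend(predicted_status: str) -> list[str]:
    """Return nutrition-support recommendations for a predicted status,
    falling back to general diet guidance for unrecognized classes."""
    return RECOMMENDATIONS.get(predicted_status, ["diet guidance"])
```

In practice the mapping would be defined by clinical nutrition guidelines rather than a hard-coded table.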
In another nutritional status prediction apparatus provided according to an embodiment of the present application, the medical record processing module 110 shown in fig. 1 may be specifically configured to:
if the data category in the electronic medical record is natural language data, acquiring various parameters carried by the electronic medical record through a pre-trained named entity recognition model;
and/or
And if the data category in the electronic medical record is structured data, extracting various parameters carried by the electronic medical record.
Before extracting the parameters carried by the electronic medical record, this embodiment may preprocess the raw data in the record. Specifically, preprocessing may clean the raw data to remove noise, irrelevant information, and erroneous data. The preprocessing may include: deleting duplicate records (of multiple records with the same content, only one is retained); deleting blank lines and whitespace characters in the text (such as spaces, tabs, and line breaks) together with invalid characters (such as #, @, etc.); deleting special symbols (such as brackets, quotation marks, and semicolons); deleting stop words (such as "and" and "yes", which reduces noise and improves model performance); and converting case (uniformly converting the text to lower or upper case to facilitate subsequent processing).
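The cleaning steps above can be sketched in a few lines of Python; the stop-word list and symbol set are illustrative placeholders:

```python
import re

STOP_WORDS = {"and", "the", "is"}  # illustrative; a real list would be far larger

def clean_text(lines):
    """Sketch of the text preprocessing described above: remove special and
    invalid symbols, collapse whitespace, drop stop words, lower-case the
    text, and skip blank lines and duplicate records."""
    seen, cleaned = set(), []
    for line in lines:
        line = re.sub(r"[#@()\[\]\"';]", "", line)  # special/invalid characters
        line = re.sub(r"\s+", " ", line).strip()     # tabs, line breaks, extra spaces
        words = [w for w in line.lower().split() if w not in STOP_WORDS]
        line = " ".join(words)
        if line and line not in seen:                # drop blanks and duplicates
            seen.add(line)
            cleaned.append(line)
    return cleaned
```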
Data cleaning may include: data deduplication (checking for and deleting duplicate data entries to reduce redundant information, which can effectively speed up data processing and reduce storage cost); missing-value handling (identifying and filling missing values in the electronic medical record; depending on the situation, statistics such as the mean, median, or mode may be used as fill values, or records containing missing values may be deleted outright); outlier detection and handling (identifying and correcting abnormal values in the record, such as outliers, extreme values, or erroneous inputs); data format normalization (unifying the data formats in the record so that they meet specific standards or requirements); and data type conversion (converting data from one type to another as needed for analysis, for example converting string data to numeric data such as integers or floating-point numbers).
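A minimal sketch of the missing-value, outlier, and type-conversion handling for one numeric column, using the median as the fill value (the valid-range bounds are illustrative assumptions):

```python
import statistics

def clean_numeric_column(values, low=None, high=None):
    """Sketch of the cleaning steps described above for one column:
    convert strings to floats, fill missing values with the median,
    and replace out-of-range values (treated as erroneous inputs)."""
    present = [float(v) for v in values if v not in (None, "")]
    median = statistics.median(present)
    filled = [float(v) if v not in (None, "") else median for v in values]
    if low is not None and high is not None:
        # values outside [low, high] are treated as outliers / bad inputs
        filled = [v if low <= v <= high else median for v in filled]
    return filled
```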
Natural language may be the textual or spoken language people use to communicate with one another. A named entity recognition model performs the named entity recognition task in natural language processing: identifying named entities in given unstructured text and classifying them, for example as times, person names, place names, or organization names. To extract parameters of the natural-language data category from the electronic medical record, this embodiment may train the named entity recognition model on training data. The text of the input electronic medical record can be segmented by a pre-trained language model, which obtains contextual information for each word and characterizes its semantics as a vector; the resulting word vectors can form the training data of the named entity recognition model. The structured data is stored on the server in a standard data format, and this embodiment may extract it directly from the server.
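A trained named entity recognition model is beyond a short sketch, so the stand-in below uses a simple dictionary matcher to show the input/output shape of the extraction step; the entity vocabulary and the `SYMPTOM` label are assumptions:

```python
# Dictionary matcher standing in for the pre-trained named entity
# recognition model described above. Vocabulary and labels are
# illustrative assumptions, not the patent's model.
SYMPTOM_TERMS = {"edema", "diarrhea", "fever", "anemia", "weight loss"}

def extract_entities(text: str):
    """Return (entity, label) pairs found in free-text medical-record notes."""
    lowered = text.lower()
    return [(term, "SYMPTOM") for term in sorted(SYMPTOM_TERMS) if term in lowered]
```

A real system would replace the dictionary lookup with a sequence-labeling model trained on annotated medical records.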
In another nutritional status prediction apparatus provided according to an embodiment of the present application, the image obtaining module 120 shown in fig. 1 may include:
the picture obtaining subunit is used for obtaining a first picture, a second picture, and a third picture of the user's face, where the first picture is a frontal photo of the user's face, the second picture is a 90° left-profile photo, and the third picture is a 90° right-profile photo;
the key point extraction subunit is used for extracting facial key points in the first picture, the second picture and the third picture through a preset key point detection model;
and the region determining subunit is used for determining an image of the face key region in the face region of the user according to the face key points.
Fig. 2 shows the first, second, and third pictures, where the first picture may be a frontal photo of the user's face, the second a 90° left-profile photo, and the third a 90° right-profile photo. Of course, this embodiment may collect facial pictures from additional angles for predicting the user's nutritional status; moreover, the angles of the first, second, and third pictures are not limited in this embodiment. After the three pictures of the user's face are acquired, 486 facial key points can be obtained using a pre-trained facial key point detection model. The key points corresponding to the key facial regions (temple, cheek, upper eyelid, lower eyelid, and earlobe regions) are then matched against manually annotated facial regions to obtain the coordinates of the key points of each key region. Masking the facial pictures at these coordinates yields the images of the user's key facial regions: the masking sets all areas of the picture outside the key facial regions to black.
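The masking step can be sketched as follows; for simplicity the key regions are given as axis-aligned boxes derived from keypoint coordinates (a simplification of the keypoint-based masks described above):

```python
def mask_outside_regions(image, key_regions):
    """Black out every pixel outside the key facial regions.

    `image` is a 2-D grid of pixel values; each region is an axis-aligned
    box (row0, row1, col0, col1) derived from keypoint coordinates — an
    illustrative simplification of the keypoint-outline masks in the text.
    """
    rows, cols = len(image), len(image[0])
    masked = [[0] * cols for _ in range(rows)]  # start fully black
    for r0, r1, c0, c1 in key_regions:
        for r in range(r0, r1):
            for c in range(c0, c1):
                masked[r][c] = image[r][c]      # copy pixels inside a region
    return masked
```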
In another nutritional status prediction apparatus according to an embodiment of the present application, the prediction module 130 shown in fig. 1 includes:
a feature acquisition subunit for extracting image features of the face region from the image by convolution;
and the full-connection subunit is used for fusing and splicing various parameters and image features to obtain fused features, classifying the fused features, and obtaining a nutritional status prediction result of the user based on the classification result.
This embodiment can predict the user's nutritional status with a deep learning model. Specifically, the model may obtain the image features of the facial area through operations such as convolution, then fuse and concatenate those image features with the numerised parameters carried by the electronic medical record at a fully connected layer (i.e., the decision layer), and finally classify the fused features and obtain the user's nutritional status prediction result from the classification. The facial area may be the key facial regions (temple, cheek, upper eyelid, lower eyelid, and earlobe regions) or the complete facial area. Numerising the parameters carried by the electronic medical record means converting them from textual form into a numeric form that a computer can understand and process; this numerisation has an important influence on subsequent tasks such as data analysis and deep learning. Specifically, for the structured parameters in the electronic medical record, this embodiment can use one-hot encoding to convert them into digital variables (combinations of 0 and 1); for the annotated parameters, it can use word embedding to convert them into numeric form. Word embedding is a technique that represents words as continuous vectors so that the relationship between words can be measured by the distance between their vectors. Optionally, the word embedding technique may be Word2Vec, GloVe, etc.
The word embedding technique can capture semantic relationships between words, thereby improving the understanding ability of the deep learning model.
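A minimal sketch of the numerisation and fusion steps: one-hot encoding of a structured parameter, concatenation of image features with parameter features, and a single linear layer with softmax standing in for the fully connected decision layer (in practice the weights come from training, and the image features from a convolutional network):

```python
import math

def one_hot(value, categories):
    """One-hot encode a structured parameter (e.g. gender, insurance type)."""
    return [1.0 if value == c else 0.0 for c in categories]

def fuse_and_classify(image_features, param_features, weights, bias):
    """Concatenate image features with numerised record parameters and
    classify the fused vector with one linear layer plus softmax — a
    minimal stand-in for the fully connected decision layer above."""
    fused = list(image_features) + list(param_features)   # fusion by concatenation
    logits = [sum(w * x for w, x in zip(row, fused)) + b
              for row, b in zip(weights, bias)]
    exps = [math.exp(z) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]                       # class probabilities
```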
Corresponding to the above nutritional status prediction device, an embodiment of the present application further provides a nutritional status prediction system, which may include: a user side device 10, a doctor side device 20, and a server 30,
the doctor-side device 20 stores the electronic medical record of the user, and the doctor-side device 20 uploads the electronic medical record of the user to the server 30;
the user side device 10 is provided with a camera, and the user side device 10 collects images of the facial area of the user and uploads the images to the server 30;
the server 30 extracts various parameters carried by the electronic medical record from the electronic medical record, extracts image features of the face area from the image, and obtains a nutritional status prediction result of the user according to the various parameters and the image features.
Further, the server 30 may send the nutritional status prediction result to the user side device 10 and/or the doctor side device 20, so that the user can learn his or her own nutritional status and the result can serve as a reference to help the doctor determine it. In addition, the server 30 may generate nutritional support recommendation information based on the user's nutritional status prediction result and output it to the user side device 10 and/or the doctor side device 20, facilitating the user's meal planning and helping the doctor determine the user's nutritional supplementation plan. The nutritional support recommendation information includes at least one of dietary guidance, oral nutritional supplements, enteral nutritional support, and parenteral nutritional support.
As shown in fig. 3, an embodiment of the present application further provides a nutritional status prediction method, which may include:
s20, acquiring an electronic medical record of a user;
s21, extracting various parameters carried by the electronic medical record from the electronic medical record;
s22, obtaining an image of a face area of a user;
s23, extracting image features of the face area from the image, and acquiring a nutritional status prediction result of the user according to various parameters and the image features.
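The four steps can be sketched as a pipeline; each helper passed in is a hypothetical stand-in for the corresponding module described in the embodiments:

```python
# End-to-end sketch of steps S20–S23. All helper callables are hypothetical
# stand-ins for the record, extraction, image, and prediction modules.
def predict_nutritional_status(get_record, extract_params, get_face_image,
                               extract_image_features, classify):
    record = get_record()                      # S20: acquire the electronic medical record
    params = extract_params(record)            # S21: extract the carried parameters
    image = get_face_image()                   # S22: obtain the facial-area image
    features = extract_image_features(image)   # S23a: extract image features
    return classify(params + features)         # S23b: predict from the fused features
```

Usage with toy stand-ins, e.g. a rule that flags low albumin (threshold is illustrative):

```python
result = predict_nutritional_status(
    lambda: {"albumin": 35.0},
    lambda rec: [rec["albumin"]],
    lambda: [[1, 2], [3, 4]],
    lambda img: [float(sum(sum(row) for row in img))],
    lambda fused: "at_risk" if fused[0] < 40 else "normal",
)
```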
The specific details of the method correspond to those of the nutritional status prediction device described in detail in the embodiments above and are not repeated here.
As shown in fig. 4, an embodiment of the present application provides an electronic device 70 comprising at least one processor 701, and at least one memory 702 and a bus 703 connected to the processor 701; the processor 701 and the memory 702 communicate with each other through the bus 703, and the processor 701 is configured to invoke the program instructions in the memory 702 to perform the nutritional status prediction method described above. The electronic device 70 may be a server, PC, tablet, mobile phone, etc.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In one typical configuration, the device includes one or more processors (CPUs), memory, and a bus. The device may also include input/output interfaces, network interfaces, and the like.
The memory may include volatile memory, random access memory (RAM), and/or non-volatile memory in computer-readable media, such as read-only memory (ROM) or flash memory (flash RAM); the memory includes at least one memory chip. Memory is an example of a computer-readable medium.
Computer-readable media, including both permanent and non-permanent, removable and non-removable media, may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory computer-readable media (transmission media), such as modulated data signals and carrier waves.
It will be appreciated by those skilled in the art that embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
It is noted that relational terms such as first and second, and the like are used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. It should also be noted that the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises that element.
In this specification, the embodiments are described in a progressive manner; identical and similar parts of the embodiments may be referred to across embodiments, and each embodiment focuses on its differences from the others. In particular, the device embodiments are described relatively simply because they are substantially similar to the method embodiments; for relevant details, reference may be made to the description of the method embodiments.
The foregoing is merely illustrative of the present application and is not intended to limit it. Various modifications and variations of the present application will be apparent to those skilled in the art. Any modification, equivalent replacement, improvement, and the like made within the spirit and principles of the present application shall be included within the scope of the claims of the present application.

Claims (10)

1. A nutritional status prediction device, comprising:
a medical record receiving module configured to acquire an electronic medical record of a user;
a medical record processing module configured to extract various parameters carried by the electronic medical record from the electronic medical record;
an image obtaining module configured to obtain an image of a face area of the user; and
a prediction module configured to extract image features of the face area from the image and to obtain a nutritional status prediction result of the user according to the various parameters and the image features.
2. The nutritional status prediction device according to claim 1, further comprising:
a nutrition support suggestion module configured to generate and output nutrition support recommendation information based on the nutritional status prediction result of the user, wherein the nutrition support recommendation information comprises at least one of dietary guidance, oral nutritional supplementation, enteral nutrition support, and parenteral nutrition support.
3. The nutritional status prediction device according to claim 1, wherein the medical record processing module is specifically configured to:
if the data in the electronic medical record are natural language data, acquire the various parameters carried by the electronic medical record through a pre-trained named entity recognition model; and/or
if the data in the electronic medical record are structured data, extract the various parameters carried by the electronic medical record.
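As an illustrative sketch only (not the patented implementation), the two branches of claim 3 can be dispatched on the data category; the field names and the regex standing in for the pre-trained named entity recognition model are hypothetical:

```python
import re

def extract_parameters(record):
    """Hypothetical dispatch over EMR data categories: structured
    records expose their parameters directly; natural-language
    records would go through a trained NER model, stubbed here
    by a simple regex over assumed parameter names."""
    if record["category"] == "structured":
        # Structured data: read the parameters off directly.
        return dict(record["data"])
    # Natural-language data: a regex stands in for the NER model.
    pattern = r"(BMI|albumin|weight)[:\s]+([\d.]+)"
    return {name: float(value)
            for name, value in re.findall(pattern, record["data"])}
```

In practice the regex branch would be replaced by whatever trained entity-recognition model the system ships with; only the structured/unstructured dispatch is the point of the sketch.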
4. The nutritional status prediction device according to claim 1, wherein the image obtaining module comprises:
a picture obtaining subunit configured to obtain a first picture, a second picture, and a third picture of the user's face, where the first picture is a frontal photo of the user's face, the second picture is a 90° left-profile photo of the user's face, and the third picture is a 90° right-profile photo of the user's face;
a key point extraction subunit configured to extract facial key points from the first picture, the second picture, and the third picture through a preset key point detection model; and
a region determination subunit configured to determine an image of a key facial region within the user's face area according to the facial key points.
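The region determination step of claim 4 can be sketched as a bounding box drawn around detected key points; the landmark detector itself is assumed to be an external, pre-trained model, and the margin parameter is an illustrative choice, not taken from the application:

```python
def key_region_bbox(keypoints, margin=0.1):
    """Given (x, y) facial key points from an external landmark
    detector, return a bounding box (x0, y0, x1, y1) around the
    key facial region, expanded by a relative margin so the crop
    keeps some surrounding context."""
    xs = [p[0] for p in keypoints]
    ys = [p[1] for p in keypoints]
    x0, x1 = min(xs), max(xs)
    y0, y1 = min(ys), max(ys)
    dx, dy = (x1 - x0) * margin, (y1 - y0) * margin
    return (x0 - dx, y0 - dy, x1 + dx, y1 + dy)
```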
5. The nutritional status prediction device according to claim 1, wherein the prediction module comprises:
a feature acquisition subunit configured to extract the image features of the face area from the image by convolution; and
a fully connected subunit configured to fuse and concatenate the various parameters with the image features to obtain fused features, classify the fused features, and obtain the nutritional status prediction result of the user based on the classification result.
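The fusion and classification of claim 5 can be sketched in a few lines; the single linear layer below is an illustrative stand-in for the trained fully connected network, and its weights are supplied by the caller rather than learned:

```python
import math

def predict_status(params, img_feat, weights, bias):
    """Fuse EMR parameters and image features by concatenation
    (the 'fusion splicing' of claim 5), then classify with one
    illustrative fully connected layer followed by softmax."""
    fused = params + img_feat                       # feature concatenation
    logits = [sum(w * x for w, x in zip(row, fused)) + b
              for row, b in zip(weights, bias)]     # fully connected layer
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]        # numerically stable softmax
    probs = [e / sum(exps) for e in exps]
    return probs.index(max(probs))                  # predicted status class index
```

A real implementation would learn `weights`/`bias` end to end together with the convolutional feature extractor; this sketch only shows the data flow.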
6. A nutritional status prediction system, comprising a user-side device, a doctor-side device, and a server, wherein:
the doctor-side device stores an electronic medical record of a user and uploads the electronic medical record to the server;
the user-side device is provided with a camera, acquires an image of the user's face area, and uploads the image to the server; and
the server extracts various parameters carried by the electronic medical record from the electronic medical record, extracts image features of the face area from the image, and obtains a nutritional status prediction result of the user according to the various parameters and the image features.
7. The nutritional status prediction system according to claim 6, wherein the server further transmits the nutritional status prediction result to the user-side device and/or the doctor-side device.
8. The nutritional status prediction system according to claim 6, wherein the server further generates nutrition support recommendation information for the user based on the nutritional status prediction result and outputs the nutrition support recommendation information to the user-side device and/or the doctor-side device, the nutrition support recommendation information comprising at least one of dietary guidance, oral nutritional supplementation, enteral nutrition support, and parenteral nutrition support.
9. A nutritional status prediction method, comprising:
acquiring an electronic medical record of a user;
extracting various parameters carried by the electronic medical record from the electronic medical record;
obtaining an image of the user's face area; and
extracting image features of the face area from the image, and obtaining a nutritional status prediction result of the user according to the various parameters and the image features.
10. An electronic device, comprising a memory and a processor, wherein:
the memory is configured to store a program; and
the processor is configured to execute the program to implement the steps of the nutritional status prediction method of claim 9.
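Strung together, the four steps of the method in claim 9 look roughly like the toy pipeline below; every computation is a hypothetical stand-in (including the BMI/albumin cutoffs and the mean-intensity "feature"), not the trained models described in the application:

```python
def run_pipeline(record, face_image):
    """Toy end-to-end flow mirroring claim 9. Each step is a
    hypothetical stand-in for the corresponding trained component."""
    # Steps 1-2: acquire the EMR and extract its parameters.
    params = [record["BMI"], record["albumin"]]
    # Steps 3-4: reduce the face image to a feature (here: mean intensity).
    img_feat = [sum(face_image) / len(face_image)]
    # Fuse, then "predict" with an illustrative screening rule.
    fused = params + img_feat
    at_risk = record["BMI"] < 18.5 or record["albumin"] < 35.0
    return {"features": fused, "at_risk": at_risk}
```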
CN202311188415.1A 2023-09-14 2023-09-14 Nutritional status prediction device, system, method and electronic equipment Pending CN117219276A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311188415.1A CN117219276A (en) 2023-09-14 2023-09-14 Nutritional status prediction device, system, method and electronic equipment

Publications (1)

Publication Number Publication Date
CN117219276A 2023-12-12

Family

ID=89034843

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311188415.1A Pending CN117219276A (en) 2023-09-14 2023-09-14 Nutritional status prediction device, system, method and electronic equipment

Country Status (1)

Country Link
CN (1) CN117219276A (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107658021A (en) * 2017-09-21 2018-02-02 北京康爱营养科技股份有限公司 Assessment of nutritional status method and apparatus
CN109390056A (en) * 2018-11-05 2019-02-26 平安科技(深圳)有限公司 Health forecast method, apparatus, terminal device and computer readable storage medium
CN111834014A (en) * 2020-07-17 2020-10-27 北京工业大学 Medical field named entity identification method and system
CN114242243A (en) * 2021-12-21 2022-03-25 中科麦迪人工智能研究院(苏州)有限公司 User health assessment method, device, equipment and storage medium
CN114550934A (en) * 2022-02-24 2022-05-27 山东第一医科大学附属省立医院(山东省立医院) Intensive care child nutrition support decision method and device based on STRONGKid evaluation strategy
CN115830017A (en) * 2023-02-09 2023-03-21 智慧眼科技股份有限公司 Tumor detection system, method, equipment and medium based on image-text multi-mode fusion

Similar Documents

Publication Publication Date Title
Gandhi et al. Multimodal sentiment analysis: A systematic review of history, datasets, multimodal fusion methods, applications, challenges and future directions
US20210183484A1 (en) Hierarchical cnn-transformer based machine learning
US20130173585A1 (en) Optimizing map/reduce searches by using synthetic events
CN112015917A (en) Data processing method and device based on knowledge graph and computer equipment
CN113284572B (en) Multi-modal heterogeneous medical data processing method and related device
CN113707307A (en) Disease analysis method and device, electronic equipment and storage medium
CN113782125B (en) Clinic scoring method and device based on artificial intelligence, electronic equipment and medium
Qiu et al. Egocentric image captioning for privacy-preserved passive dietary intake monitoring
CN114003758B (en) Training method and device of image retrieval model and retrieval method and device
TW202101477A (en) Method for applying a label made after sampling to neural network training model
CN115131638A (en) Training method, device, medium and equipment for visual text pre-training model
Prabhu et al. Harnessing emotions for depression detection
CN112560400A (en) Medical data processing method and device and storage medium
CN112364664A (en) Method and device for training intention recognition model and intention recognition and storage medium
McCullough et al. Convolutional neural network models for automatic preoperative severity assessment in unilateral cleft lip
Alsharid et al. Gaze-assisted automatic captioning of fetal ultrasound videos using three-way multi-modal deep neural networks
EP4068121A1 (en) Method and apparatus for acquiring character, page processing method, method for constructing knowledge graph, and medium
CN113707304B (en) Triage data processing method, triage data processing device, triage data processing equipment and storage medium
Vetter et al. Using sentence embeddings and semantic similarity for seeking consensus when assessing trustworthy ai
Shanmuganathan et al. Retracted: Software based sentiment analysis of clinical data for healthcare sector
CN117219276A (en) Nutritional status prediction device, system, method and electronic equipment
CN116864128A (en) Psychological state assessment system and method based on physical activity behavior pattern monitoring
CN114429822A (en) Medical record quality inspection method and device and storage medium
Etter et al. Project SEARCH (Scanning EARs for Child Health): validating an ear biometric tool for patient identification in Zambia
Zheng et al. Detecting Dementia from Face-Related Features with Automated Computational Methods

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination