CN115661907A - Biological feature recognition method and system - Google Patents

Biological feature recognition method and system

Info

Publication number
CN115661907A
CN115661907A (application CN202211426155.2A)
Authority
CN
China
Prior art keywords
face image
department
module
face
patient
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211426155.2A
Other languages
Chinese (zh)
Inventor
吴俊宏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang Yuantu Technology Co ltd
Original Assignee
Zhejiang Yuantu Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang Yuantu Technology Co ltd filed Critical Zhejiang Yuantu Technology Co ltd
Priority to CN202211426155.2A priority Critical patent/CN115661907A/en
Publication of CN115661907A publication Critical patent/CN115661907A/en
Pending legal-status Critical Current

Abstract

The invention relates to the technical field of doctor-patient communication, and in particular to a biological feature identification method and system. The system comprises: a front-end acquisition device, configured to acquire a first face image of a person entering the lobby; a department terminal, comprising a face acquisition module and a display module, the face acquisition module being configured to acquire a second face image of a person entering the department; and a processing center, comprising: a judging module, configured to judge whether the person is a historical patient; a calling module, configured to call historical medical record information; a project analysis module, configured to analyze the historical medical record information to obtain estimated treatment items; a department matching module, configured to match pre-treatment departments according to the estimated treatment items; an emotion analysis module, configured to analyze the face images to obtain face emotion information; a communication mode generation module, configured to generate a suggested communication mode according to the face emotion information; and a sending module, configured to send the suggested communication mode to the doctor terminal of the corresponding treatment department. The application provides a reference for the subsequent doctor-patient relationship and treatment process.

Description

Biological feature recognition method and system
Technical Field
The invention relates to the technical field of doctor-patient communication, in particular to a biological feature identification method and system.
Background
A medical device operator may use acquired patient data to perform patient identification, matching the correct patient with the correct care (e.g., examination procedures, examination results, etc.). Whether the patient is identified correctly is a key factor in the medical process: errors such as medication errors, infusion errors, examination errors, operating on the wrong patient or the wrong site, and handing over the wrong infant at discharge continue to occur across the medical industry because patients cannot be correctly identified.
Moreover, in the existing treatment process, patients often forget to bring a medical insurance card or identity card when going to the hospital, so that identity authentication and registration cannot be carried out; this phenomenon is particularly pronounced among elderly patients who do not use intelligent systems.
Therefore, one effective means of identification is biometric recognition, which provides "unique and irreplaceable" identification information and thereby reduces potential identification errors. Existing biometric technologies include DNA, iris, fingerprint, facial image, voice and the like, and hospitals have accordingly adopted face recognition and similar techniques to verify patient identity. However, this approach alone provides no reference for the doctor-patient relationship or the treatment process, and needs to be improved.
Disclosure of Invention
In order to provide reference for the follow-up doctor-patient relationship or the treatment process, the application provides a biological feature identification method and a biological feature identification system.
The above object of the present invention is achieved by the following technical solutions:
a biometric identification system comprising:
the front-end acquisition device, configured to acquire a first face image of a person entering the lobby;
the department terminal, comprising a face acquisition module and a display module, wherein the face acquisition module is configured to acquire a second face image of a person entering the department;
a processing center comprising:
the judging module, configured to compare the first face image with pre-stored face images in a pre-stored face image library and judge whether the person is a historical patient;
the calling module, configured to call historical medical record information according to the pre-stored face image corresponding to the first face image if the person is a historical patient;
the project analysis module, configured to analyze the historical medical record information to obtain estimated treatment items;
the department matching module, configured to match pre-treatment departments according to the estimated treatment items;
the emotion analysis module, configured to compare and analyze the first face image and the corresponding pre-stored face image to obtain face emotion information;
the communication mode generation module, configured to generate a suggested communication mode according to the face emotion information;
and the sending module, configured to send the suggested communication mode to the doctor terminal of the corresponding treatment department for display.
By adopting the technical scheme, when a patient enters the hospital lobby, the first face image of the person is collected and compared with the pre-stored face images in the database. If the person is judged to be a historical patient, historical medical record information is called up according to the corresponding pre-stored face image and analyzed to obtain the patient's estimated treatment items, and the corresponding pre-treatment department is matched. The patient's emotion information is also obtained, and a corresponding communication mode is generated and pushed to the doctor in the corresponding consulting room, providing a reference for the subsequent doctor-patient relationship and treatment process and improving the patient experience.
The present application may be further configured in a preferred example as follows: the processing center further comprises a word generation module; when a plurality of pre-treatment departments are matched, the word generation module generates a plurality of groups of communication words according to the suggested communication mode and the number of pre-treatment departments;
the sending module is further configured to distribute the groups of communication words to the doctor terminals of the respective pre-treatment departments.
By adopting the technical scheme, when there are several pre-treatment departments, several groups of communication words are generated and sent to the different departments in turn, so that the patient successively experiences reassurance of different degrees in the different departments.
The present application may be further configured in a preferred example as follows: the word generation module further comprises a sequencing unit, configured to sequence the groups of communication words into a word sequence;
the department terminal sends an arrival instruction to the processing center when it acquires a second face image whose similarity with the first face image exceeds a preset value;
the processing center selects the first group of communication words in the word sequence that has not yet been sent to a department terminal as the words to be sent;
and the sending module sends the words to be sent to the doctor terminal of the corresponding department.
By adopting the technical scheme, each time the patient enters a department, the department communicates with the patient according to the suggested communication mode and the communication words displayed on the department terminal. As the patient moves from department to department, the communication scenes have gradually decreasing comfort intensity, so that the patient's mood is gradually relaxed.
The present application may be further configured in a preferred example to: the groups of communication words are arranged according to the size of the comfort intensity to form a word sequence.
The present application may be further configured in a preferred example to: and under the condition that the matched pre-diagnosis department is single, the word generating module generates a plurality of groups of communication words according to the suggested communication mode, and the sending module is also used for sending the plurality of groups of communication words to a doctor terminal of the pre-diagnosis department.
By adopting the technical scheme, when only a single department exists, multiple groups of communication words are sent to the department terminal for reference of a doctor, the language is organized to communicate with a patient, and the vocabulary amount and the communication abundance are improved.
The present application may be further configured in a preferred example to: the department terminal also comprises a voice supervision module which is used for collecting the dialogue information between the doctor and the patient and judging whether the dialogue information contains the communication words sent to the department terminal.
By adopting the technical scheme, the actual dialogue operation process of a doctor can be supervised, and follow-up rectification and excitation can be carried out.
The present application may be further configured in a preferred example as follows: the emotion analysis module includes:
the vector generation unit, configured to generate a displacement vector for each feature point according to its pixel position in the first face image and its pixel position in the corresponding pre-stored face image;
and the reasoning unit, configured to input the set of displacement vectors into a pre-trained emotion model for inference to obtain the face emotion information.
By adopting the technical scheme, the face emotion information inferred by the model has strong practical significance: the correct face emotion can be inferred from the changes in the corresponding facial positions.
The second objective of the present invention is achieved by the following technical solutions:
a biometric identification method comprising:
the collecting device is used for collecting a first face image of a person entering a hall;
comparing the first face image with prestored face images in a prestored face image library, and judging whether the first face image is a historical patient;
if the patient is a historical patient, calling historical medical record information according to a pre-stored face image corresponding to the first face image;
analyzing according to the historical medical record information to obtain a predicted treatment item;
matching a pre-treatment department according to the estimated treatment items;
comparing and analyzing the first face image with a corresponding prestored face image to obtain face emotion information;
generating a suggested communication mode according to the face emotion information;
and sending the suggested communication mode to a doctor terminal corresponding to the department of treatment.
In summary, the present application includes at least one of the following beneficial technical effects:
1. when a patient enters the hospital lobby, a first face image of the person is acquired and compared with the pre-stored face images in the database; if the person is judged to be a historical patient, historical medical record information is called up according to the corresponding pre-stored face image and analyzed to obtain the patient's estimated treatment items, the corresponding pre-treatment department is matched, the patient's emotion information is obtained, and a corresponding communication mode is generated and pushed to the doctor in the corresponding department, providing a reference for the subsequent doctor-patient relationship and treatment process and improving the patient experience;
2. each time the patient enters a department, the department communicates with the patient according to the suggested communication mode and the communication words displayed on the department terminal; the patient successively experiences communication scenes of gradually decreasing comfort intensity, so that the mood is gradually relaxed;
3. the system can supervise the doctor's actual dialogue and support subsequent correction and incentives.
Drawings
FIG. 1 is a block diagram of a biometric system according to an embodiment of the present application;
fig. 2 is a flowchart illustrating an implementation of a biometric identification method according to an embodiment of the present application.
Detailed Description
The following description of the exemplary embodiments of the present application, taken in conjunction with the accompanying drawings, includes various details of the embodiments of the application for the understanding of the same, which are to be considered exemplary only. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present application. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
It should be noted that the terms "first", "second", etc. in the present invention are used for distinguishing similar objects, and are not necessarily used for describing a particular order or sequence. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the disclosure described herein are capable of operation in sequences other than those illustrated or otherwise described herein. The implementations described in the exemplary embodiments below are not intended to represent all implementations consistent with the present disclosure.
In addition, the term "and/or" herein is only one kind of association relationship describing an associated object, and means that there may be three kinds of relationships, for example, a and/or B, which may mean: a exists alone, A and B exist simultaneously, and B exists alone. In addition, the character "/" herein generally indicates that the former and latter associated objects are in an "or" relationship, unless otherwise specified.
Fig. 1 is a schematic block diagram of a biometric system according to an embodiment of the present invention. As shown in fig. 1, the biometric system includes a front-end acquisition device, department terminals and a processing center, which are communicatively connected, preferably wirelessly, for example via LoRa, WiFi/IEEE 802.11, ZigBee/IEEE 802.15.4, Thread/IEEE 802.15.4 or Z-Wave. Each department provides its doctor with a department terminal, which may be an electronic device such as a mobile phone, tablet or wearable device, or, for example, a doctor's desktop computer terminal.
The department terminal has a face acquisition module and a display module. The face acquisition module may be a face camera module and is used to acquire the second face image of a person entering the department; the display module is used to display information. Specifically, when a person enters the department, the face camera module of the department terminal immediately photographs the person to acquire the second face image.
The processing center may include one or more processors to execute instructions to implement the various functions of the terminal. Further, the processing center may include one or more modules that facilitate interaction between the processing center and other devices, equipment, terminals. The processing center comprises a judging module, a calling module, a project analysis module, a department matching module, an emotion analysis module, a communication mode generation module, a word generation module and a sending module.
The judging module compares the first face image with the pre-stored face images in the pre-stored face image library and judges whether the person is a historical patient. It can be understood that the position, size and feature information of each main facial organ in the first face image are identified, the first face image is compared with each pre-stored face image in the library on the basis of this information, and whether the person is a historical patient is judged according to a preset similarity.
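The patent does not fix a concrete similarity measure for this comparison; the following is a minimal, stdlib-only sketch assuming the face images have already been reduced to numeric feature vectors, using cosine similarity against a preset threshold (the function names and threshold value are illustrative, not part of the patent):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def is_historical_patient(first_face, prestored_faces, threshold=0.9):
    """Return the index of the best-matching pre-stored face, or None.

    `first_face` and each entry of `prestored_faces` stand in for feature
    vectors derived from the facial-organ positions, sizes and features
    mentioned in the text.
    """
    best_idx, best_sim = None, threshold
    for idx, stored in enumerate(prestored_faces):
        sim = cosine_similarity(first_face, stored)
        if sim >= best_sim:
            best_idx, best_sim = idx, sim
    return best_idx
```

A match (a non-None index) corresponds to the "historical patient" branch; None corresponds to a new patient.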
Historical medical record information and treatment records corresponding to each pre-stored face image are stored in advance in a database of the processing center. If the person is a historical patient, the calling module calls the historical medical record information according to the pre-stored face image corresponding to the first face image.
The project analysis module analyzes the historical medical record information to obtain the estimated treatment items; specifically, the historical medical record information is input into a pre-trained project model for inference to obtain the estimated treatment items. The project model is obtained by training as follows:
each historical medical record information sample in the training set is labeled with its subsequent treatment items, which are associated with all or part of the information in the sample; a neural network is then trained on the labeled training set to obtain the project model.
The historical medical record information samples and the corresponding subsequent treatment items are acquired from the actual treatment history records.
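The patent specifies a neural network trained on labeled medical-record samples but gives no architecture. As a stdlib-only stand-in that follows the same supervised label-then-fit pattern, here is a sketch using a trivial word-frequency profile per treatment item; the record texts, labels and function names are all hypothetical:

```python
from collections import Counter

def train_item_model(samples):
    """Fit a word-frequency profile per labeled treatment item.

    `samples` is a list of (record_text, item_label) pairs standing in
    for the labeled historical-medical-record training set; a real
    system would train a neural network here instead.
    """
    profiles = {}
    for text, label in samples:
        profiles.setdefault(label, Counter()).update(text.lower().split())
    return profiles

def predict_items(profiles, record_text):
    """Score each item by word overlap; return matching labels, best first."""
    words = record_text.lower().split()
    scores = {label: sum(c[w] for w in words) for label, c in profiles.items()}
    return sorted((l for l in scores if scores[l] > 0),
                  key=lambda l: -scores[l])
```

Returning a ranked list reflects the text's note that one record may yield one or several estimated treatment items.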
The department matching module matches pre-treatment departments according to the estimated treatment items. One treatment item corresponds to one treatment department; there may be one or more estimated treatment items, and correspondingly one or more matched pre-treatment departments.
The emotion analysis module compares and analyzes the first face image and the corresponding pre-stored face image to obtain face emotion information. Specifically, the emotion analysis module comprises a vector generation unit and a reasoning unit. The vector generation unit generates a displacement vector for each feature point according to the pixel position of that feature point in the first face image and its pixel position in the corresponding pre-stored face image; each feature point is a key feature point of a facial organ, such as a nose-wing point, mouth-corner point or eye-corner point, and the displacement vectors are obtained from the pixel coordinates of the key feature points in the two face images. The reasoning unit then inputs the set of displacement vectors of all feature points into a pre-trained emotion model for inference to obtain the face emotion information. The emotion model is obtained by training as follows:
each displacement vector set sample in the training set is labeled with its face emotion information, which is associated with all or part of the information in the sample; a neural network is then trained on the labeled training set to obtain the emotion model.
Face emotion information inferred by the model has strong practical significance: the correct face emotion can be inferred from the changes in the corresponding facial positions.
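The displacement-vector step itself can be sketched directly. The sketch below assumes each face image has already been reduced to named landmark pixel coordinates; the landmark names are illustrative:

```python
def displacement_vectors(first_points, stored_points):
    """Per-feature-point displacement between the live and stored image.

    Each argument maps a landmark name (e.g. the nose-wing, mouth-corner
    and eye-corner points from the text) to an (x, y) pixel position.
    Returns {name: (dx, dy)} — the input set for the emotion model.
    """
    vecs = {}
    for name, (sx, sy) in stored_points.items():
        fx, fy = first_points[name]
        vecs[name] = (fx - sx, fy - sy)
    return vecs
```

The resulting dictionary is the "displacement vector set" that the reasoning unit would feed to the trained emotion model.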
The communication mode generation module generates a suggested communication mode according to the face emotion information. Specifically, the face emotion information includes happiness, sadness, peace, surprise, anger, disgust and fear; correspondingly, each face emotion corresponds to a suggested communication mode, such as a listening, good-going, responsibility-guiding, intelligence, switch or coincidence communication mode.
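The patent lists the emotions and the candidate communication modes but does not state which emotion maps to which mode; a sketch as a simple lookup, where the specific pairings below are purely hypothetical:

```python
# Hypothetical pairing: the source lists seven emotions and several
# communication modes but does not specify the mapping between them.
SUGGESTED_MODE = {
    "happiness": "coincidence type",
    "sadness": "listening type",
    "peace": "intelligence type",
    "surprise": "switch type",
    "anger": "good-going type",
    "disgust": "responsibility guiding type",
    "fear": "listening type",
}

def suggest_communication_mode(emotion):
    """Look up the suggested mode; fall back to a neutral default."""
    return SUGGESTED_MODE.get(emotion, "intelligence type")
```

A table like this is enough for the module's described behavior; a deployed system would presumably tune the mapping with clinical input.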
When a plurality of pre-treatment departments are matched, the word generation module generates a plurality of groups of communication words according to the suggested communication mode and the number of pre-treatment departments, with each group corresponding to one pre-treatment department. The word generation module further comprises a sequencing unit, which sequences the groups of communication words into a word sequence: each group has a corresponding comfort intensity, and the groups are sorted by comfort intensity from largest to smallest. The words may be short or long sentences; the number of words is not limited.
The sending module is further configured to distribute the groups of communication words to the doctor terminals of the respective pre-treatment departments.
Specifically, when a department terminal acquires a second face image whose similarity with the first face image exceeds a preset value, it sends an arrival instruction to the processing center. After receiving the arrival instruction, the processing center selects the first group of communication words in the word sequence that has not yet been sent to any department terminal as the words to be sent, and the sending module sends the suggested communication mode and the selected communication words to the doctor terminal of the corresponding department for display. In this way, each time the patient enters a department, the department communicates with the patient according to the suggested communication mode and the communication words displayed on the department terminal, and the patient successively experiences communication scenes of gradually decreasing comfort intensity, so that the mood is gradually relaxed.
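The arrival-driven dispatch described above amounts to a queue sorted by comfort intensity; a minimal sketch, with class and method names that are illustrative rather than taken from the patent:

```python
class ScriptDispatcher:
    """Hold one patient's comfort-ranked word sequence and hand the next
    unsent group of communication words to each department terminal that
    reports an arrival (a matching second face image)."""

    def __init__(self, scored_words):
        # scored_words: list of (comfort_intensity, words) pairs.
        # Sort descending so earlier departments receive the most
        # comforting words, as the text describes.
        ordered = sorted(scored_words, key=lambda pair: -pair[0])
        self._queue = [words for _, words in ordered]

    def on_arrival(self):
        """Called when a terminal sends an arrival instruction; returns
        the words to be sent, or None when every group has been used."""
        return self._queue.pop(0) if self._queue else None
```

Each call to `on_arrival` models one department terminal reporting a face match, which consumes the first not-yet-sent group in the word sequence.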
When a single pre-treatment department is matched, the word generation module generates a plurality of groups of communication words according to the suggested communication mode, and the sending module sends all of them to the doctor terminal of that pre-treatment department.
In a possible implementation, the department terminal further includes a voice supervision module, which collects the dialogue information between doctor and patient and judges whether it contains the communication words sent to the department terminal. Specifically, once the patient enters the department and the department terminal acquires the second face image, the terminal begins collecting dialogue information, converts it into text through a speech recognition model, and compares the text with the communication words assigned to that terminal to judge whether the dialogue contains them. The doctor's actual dialogue can thus be supervised, supporting subsequent correction and incentives.
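After speech recognition, the supervision check reduces to matching the recognized text against the assigned word groups; a minimal sketch, assuming speech has already been converted to text upstream (the example phrases are invented):

```python
def contains_assigned_words(transcript, assigned_words):
    """Return the communication word groups that appear in the
    recognized dialogue text (case-insensitive substring match)."""
    text = transcript.lower()
    return [words for words in assigned_words if words.lower() in text]
```

A non-empty result indicates the doctor used at least one of the assigned word groups; an empty result could trigger the follow-up correction the text mentions. A real system would likely need fuzzier matching than exact substrings.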
The present application also provides a biometric identification method, referring to fig. 2, including:
S1, acquiring, by an acquisition device, a first face image of a person entering the lobby;
s2, comparing the first face image with a prestored face image in a prestored face image library, and judging whether the first face image is a historical patient;
s3, if the patient is a historical patient, calling historical medical record information according to a pre-stored face image corresponding to the first face image;
s4, analyzing according to historical medical record information to obtain a predicted treatment item;
s5, matching a pre-treatment department according to the estimated treatment items;
s6, comparing and analyzing the first face image and the corresponding prestored face image to obtain face emotion information;
s7, generating a suggested communication mode according to the face emotion information;
and S8, sending the suggested communication mode to a doctor terminal corresponding to the department of treatment.
For the specific definition of the biometric identification method, reference may be made to the above definition of the biometric identification system, which is not described herein again. The various steps of the biometric identification method described above may be implemented in whole or in part by software, hardware, and combinations thereof.
Various implementations of the systems and techniques described here can be realized in digital electronic circuitry, integrated circuitry, ASICs (application-specific integrated circuits), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special or general purpose, receiving data and instructions from, and transmitting data and instructions to, a storage system, at least one input device, and at least one output device.
These computer programs (also known as programs, software applications, or code) include machine instructions for a programmable processor, and may be implemented using high-level procedural and/or object-oriented programming languages, and/or assembly/machine languages. As used herein, the terms "machine-readable medium" and "computer-readable medium" refer to any computer program product, apparatus, and/or device (e.g., magnetic discs, optical disks, memory, programmable Logic Devices (PLDs)) used to provide machine instructions and/or data to a programmable processor, including a machine-readable medium that receives machine instructions as a machine-readable signal. The term "machine-readable signal" refers to any signal used to provide machine instructions and/or data to a programmable processor.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: Local Area Networks (LANs), Wide Area Networks (WANs), and the Internet.
It should be understood that various forms of the flows shown above may be used, with steps reordered, added, or deleted. For example, the steps described in the present application may be executed in parallel, sequentially, or in different orders, and the present invention is not limited thereto as long as the desired results of the technical solutions disclosed in the present application can be achieved.
The above-described embodiments should not be construed as limiting the scope of the present application. It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and substitutions may be made in accordance with design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present application shall be included in the protection scope of the present application.

Claims (8)

1. A biometric identification system, comprising:
the front-end acquisition device, configured to acquire a first face image of a person entering the lobby;
the department terminal, comprising a face acquisition module and a display module, wherein the face acquisition module is configured to acquire a second face image of a person entering the department;
a processing center comprising:
a judging module for comparing the first face image with pre-stored face images in a pre-stored face image library and judging whether the person in the first face image is a historical patient;
a calling module for calling historical medical record information according to the pre-stored face image corresponding to the first face image if the person is a historical patient;
an item analysis module for analyzing the historical medical record information to obtain a predicted treatment item;
a department matching module for matching a pre-consultation department according to the predicted treatment item;
an emotion analysis module for comparing and analyzing the first face image with the corresponding pre-stored face image to obtain face emotion information;
a communication mode generation module for generating a suggested communication mode according to the face emotion information;
and a sending module for sending the suggested communication mode to a doctor terminal of the corresponding pre-consultation department for display.
2. The biometric identification system of claim 1, wherein the processing center further comprises a word generation module, and when a plurality of pre-consultation departments are matched, the word generation module generates a plurality of sets of communication words according to the suggested communication mode and the number of pre-consultation departments;
the sending module is further used for distributing the plurality of sets of communication words among the doctor terminals of the respective pre-consultation departments.
3. The biometric identification system of claim 2, wherein the word generation module further comprises a ranking unit for ranking the plurality of sets of communication words in order to form a word sequence;
the department terminal sends an arrival instruction to the processing center when it acquires a second face image whose similarity to the first face image exceeds a preset value;
the processing center selects, as the words to be sent, the first set of communication words in the word sequence that has not yet been sent to the department terminal;
and the sending module is used for sending the words to be sent to the doctor terminal of the corresponding pre-consultation department.
4. The biometric identification system of claim 3, wherein the plurality of sets of communication words are arranged in order of comforting intensity to form the word sequence.
5. The biometric identification system of claim 2, wherein, if a single pre-consultation department is matched, the word generation module generates a plurality of sets of communication words according to the suggested communication mode, and the sending module is further used for sending the plurality of sets of communication words to a doctor terminal of the pre-consultation department.
6. The biometric identification system of any one of claims 2-5, wherein the department terminal further comprises a voice supervision module for collecting dialogue information between a doctor and a patient and determining whether the dialogue information contains the communication words sent to the department terminal.
7. The biometric identification system of claim 1, wherein the emotion analysis module comprises:
a vector generation unit for generating a displacement vector of each feature point according to the pixel position of the feature point in the first face image and the pixel position of the feature point in the corresponding pre-stored face image;
and an inference unit for inputting the set of displacement vectors of the feature points into a pre-trained emotion model for inference to obtain the face emotion information.
8. A biometric identification method, comprising:
acquiring, by an acquisition device, a first face image of a person entering a hall;
comparing the first face image with pre-stored face images in a pre-stored face image library, and judging whether the person in the first face image is a historical patient;
if the person is a historical patient, calling historical medical record information according to the pre-stored face image corresponding to the first face image;
analyzing the historical medical record information to obtain a predicted treatment item;
matching a pre-consultation department according to the predicted treatment item;
comparing and analyzing the first face image with the corresponding pre-stored face image to obtain face emotion information;
generating a suggested communication mode according to the face emotion information;
and sending the suggested communication mode to a doctor terminal of the corresponding pre-consultation department.
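The emotion-analysis path in claims 7 and 8 — per-feature-point displacement vectors between the freshly captured face image and the pre-stored one, passed to a pre-trained emotion model whose output drives the suggested communication mode — could be sketched roughly as follows. All names here (`displacement_vectors`, `infer_emotion`, `suggest_communication_mode`) and the threshold rule are illustrative assumptions: the patent does not disclose the model, and a trained classifier would replace the threshold stub.

```python
import numpy as np

def displacement_vectors(current_pts, stored_pts):
    """Per-landmark displacement between the captured face image and the
    pre-stored (assumed neutral) face image, as in claim 7."""
    cur = np.asarray(current_pts, dtype=float)
    ref = np.asarray(stored_pts, dtype=float)
    if cur.shape != ref.shape:
        raise ValueError("landmark sets must align point-for-point")
    return cur - ref  # shape (n_points, 2): one (dx, dy) per feature point

def infer_emotion(disp, calm_threshold=2.0):
    """Stand-in for the pre-trained emotion model: label the face 'agitated'
    when the mean displacement magnitude exceeds a threshold, else 'calm'.
    The threshold is a hypothetical placeholder for a real classifier."""
    mean_mag = float(np.linalg.norm(disp, axis=1).mean())
    return "agitated" if mean_mag > calm_threshold else "calm"

def suggest_communication_mode(emotion):
    """Map face emotion information to a suggested communication mode."""
    modes = {"agitated": "reassuring, slow-paced explanation",
             "calm": "standard consultation dialogue"}
    return modes.get(emotion, "standard consultation dialogue")

# Five hypothetical feature points (pixel coordinates) from the two images.
stored = [(30, 40), (70, 40), (50, 60), (40, 80), (60, 80)]
current = [(30, 43), (70, 44), (50, 66), (38, 86), (62, 85)]
disp = displacement_vectors(current, stored)
emotion = infer_emotion(disp)
print(emotion, "->", suggest_communication_mode(emotion))
```

In this toy run the landmarks have shifted noticeably, so the stub labels the face "agitated" and a soothing communication mode would be forwarded to the doctor terminal.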
CN202211426155.2A 2022-11-15 2022-11-15 Biological feature recognition method and system Pending CN115661907A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211426155.2A CN115661907A (en) 2022-11-15 2022-11-15 Biological feature recognition method and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211426155.2A CN115661907A (en) 2022-11-15 2022-11-15 Biological feature recognition method and system

Publications (1)

Publication Number Publication Date
CN115661907A true CN115661907A (en) 2023-01-31

Family

ID=85021910

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211426155.2A Pending CN115661907A (en) 2022-11-15 2022-11-15 Biological feature recognition method and system

Country Status (1)

Country Link
CN (1) CN115661907A (en)


Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116013453A (en) * 2023-03-28 2023-04-25 中国人民解放军总医院 Medical record writing improvement system based on artificial intelligence technology
CN116013453B (en) * 2023-03-28 2023-08-15 中国人民解放军总医院 Medical record writing improvement system based on artificial intelligence technology
CN116383795A (en) * 2023-06-01 2023-07-04 杭州海康威视数字技术股份有限公司 Biological feature recognition method and device and electronic equipment
CN116383795B (en) * 2023-06-01 2023-08-25 杭州海康威视数字技术股份有限公司 Biological feature recognition method and device and electronic equipment

Similar Documents

Publication Publication Date Title
US11681356B2 (en) System and method for automated data entry and workflow management
CN115661907A (en) Biological feature recognition method and system
JP2021099866A (en) Systems and methods
CN108899064A (en) Electronic health record generation method, device, computer equipment and storage medium
CN108139918A (en) Using every user as basic custom program feature
Carchiolo et al. Medical prescription classification: a NLP-based approach
KR20220004259A (en) Method and system for remote medical service using artificial intelligence
WO2020112147A1 (en) Method of an interactive health status assessment and system thereof
CN107910073A (en) A kind of emergency treatment previewing triage method and device
CN114187988A (en) Data processing method, device, system and storage medium
CN111986744B (en) Patient interface generation method and device for medical institution, electronic equipment and medium
US20180121715A1 (en) Method and system for providing feedback ui service of face recognition-based application
CN111651571A (en) Man-machine cooperation based session realization method, device, equipment and storage medium
US20170004288A1 (en) Interactive and multimedia medical report system and method thereof
CN109785977A (en) Automated information input method, system, device and storage medium
CN111696648A (en) Psychological consultation platform based on Internet
CN113689951A (en) Intelligent diagnosis guiding method, system and computer readable storage medium
US11862302B2 (en) Automated transcription and documentation of tele-health encounters
CN112863701A (en) Non-contact intelligent inquiry system
RU2699607C2 (en) High efficiency and reduced frequency of subsequent radiation studies by predicting base for next study
KR20110060039A (en) Communication robot and controlling method therof
WO2021094330A1 (en) System and method for collecting behavioural data to assist interpersonal interaction
CN113836284A (en) Method and device for constructing knowledge base and generating response statement
CN110473636B (en) Intelligent medical advice recommendation method and system based on deep learning
Arya et al. Heart disease prediction with machine learning and virtual reality: from future perspective

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination