CN113539519A - Video inquiry method and server - Google Patents

Video inquiry method and server

Info

Publication number
CN113539519A
Authority
CN
China
Prior art keywords
user
information
target
users
inquiry
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110610533.1A
Other languages
Chinese (zh)
Inventor
王真真
方鹏程
方丽华
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hisense Group Holding Co Ltd
Original Assignee
Hisense Group Holding Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hisense Group Holding Co Ltd
Priority to CN202110610533.1A
Publication of CN113539519A
Legal status: Pending (Current)

Classifications

    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H 80/00 ICT specially adapted for facilitating communication between medical practitioners or patients, e.g. for collaborative diagnosis, therapy or health monitoring
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 21/00 Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F 21/30 Authentication, i.e. establishing the identity or authorisation of security principals
    • G06F 21/31 User authentication
    • G06F 21/32 User authentication using biometric data, e.g. fingerprints, iris scans or voiceprints
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 21/00 Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F 21/60 Protecting data
    • G06F 21/62 Protecting access to data via a platform, e.g. using keys or access control rules
    • G06F 21/6218 Protecting access to data via a platform, e.g. using keys or access control rules to a system of files or objects, e.g. local or distributed file system or database
    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H 10/00 ICT specially adapted for the handling or processing of patient-related medical or healthcare data
    • G16H 10/60 ICT specially adapted for the handling or processing of patient-related medical or healthcare data for patient-specific data, e.g. for electronic patient records
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 7/00 Television systems
    • H04N 7/14 Systems for two-way working
    • H04N 7/141 Systems for two-way working between two video terminals, e.g. videophone
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 2221/00 Indexing scheme relating to security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F 2221/21 Indexing scheme relating to G06F21/00 and subgroups addressing additional information or applications relating to security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F 2221/2141 Access rights, e.g. capability lists, access control lists, access tables, access matrices

Abstract

The disclosure provides a video inquiry method and a server. The method comprises the following steps: after receiving a video inquiry instruction of a user, acquiring the face features of the user; comparing the face features of the user with the face features of each target user corresponding to the account used by the user; and if the face features of the user are found among the face features of the target users, determining the user information corresponding to the face features of the user according to the preset corresponding relation between the face features of the target users and the user information, and sending the determined user information to the doctor terminal equipment for display. In this way, whether the user has video inquiry authority is determined through the face features, the video inquiry is carried out only after that authority is confirmed, and the user information of the user is sent to the doctor terminal equipment for display, so that the doctor reads the correct user information of the inquirer and the efficiency of the video inquiry is improved.

Description

Video inquiry method and server
Technical Field
The invention relates to the technical field of intelligent terminals, in particular to a video inquiry method and a server.
Background
With the rapid development of the Internet, software for online health consultation has become popular. However, the image-and-text consultation mode is inconvenient for elderly users, and although the telephone consultation mode avoids that operational inconvenience, it does not allow the doctor to observe the illness, so the doctor can only judge the condition from the patient's description. Video inquiry makes it convenient for the patient to describe symptoms directly and for the doctor to examine them; a high-definition camera provides the doctor with a clear picture and improves the accuracy of disease analysis. Therefore, it is desirable to conduct online health consultation by video.
In the prior art, a user needs to purchase a service package before using a video inquiry service; for example, a 3-person package or a 9-person package allows three or nine family members, respectively, to use the service. The user needs to select a family member in the package at the time of the inquiry so that the doctor can read the basic information of the inquirer. If the inquirer uses the basic information of another family member in the package, or is not a family member in the package at all and simply uses the basic information of any family member in it for the video inquiry, the doctor reads incorrect information about the inquirer, and the efficiency of the video inquiry is low.
Disclosure of Invention
The exemplary embodiment of the present disclosure provides a video inquiry method and a server, which are used for improving the efficiency of video inquiry.
A first aspect of the present disclosure provides a video inquiry method, the method comprising:
after receiving a video inquiry command of a user, acquiring the face characteristics of the user;
comparing the facial features of the user with the facial features of each target user corresponding to the account used by the user;
and if the facial features of the user exist in the facial features of the target users, determining user information corresponding to the facial features of the user according to the preset corresponding relation between the facial features of the target users and the user information, and sending the determined user information to the doctor terminal equipment for display.
In this embodiment, for a user who performs a video inquiry, the face features of the user are acquired and compared with the face features of each target user corresponding to the account used by the user. If the face features of the user are determined to be among the face features of the target users, it indicates that the user has video inquiry authority, so the user information corresponding to the face features of the user is determined according to the preset corresponding relation between the face features of the target users and the user information, and the determined user information is sent to the doctor terminal device for display. In this way, whether the user has video inquiry authority is determined through the face features, and after that authority is confirmed, the user information of the user is sent to the doctor terminal device for display, so that the doctor reads the correct user information of the inquirer and the efficiency of the video inquiry is improved.
In one embodiment, the method further comprises:
and if no face feature identical to the face features of the user exists among the face features of the target users, prompting that the user does not have video inquiry authority.
In this embodiment, if no face feature identical to the face features of the user exists among the face features of the target users, the user is prompted that he or she does not have video inquiry authority, which prevents people without authority from impersonating a package member to perform a video inquiry.
In one embodiment, the correspondence between the facial features of the target user and the user information is established by:
after receiving user information of each target user corresponding to an account used by the user and an image of each target user, carrying out face recognition on the image of each target user to obtain face features of each target user;
and establishing a corresponding relation between the face characteristics of each target user and the user information.
In this embodiment, after the user information of each target user corresponding to the account used by the user and the image of each target user are received, face recognition is performed on the image of each target user to obtain the face features of each target user, and the corresponding relation between the face features of each target user and the user information is established. The user information of a user who performs a video inquiry can then be determined through this corresponding relation, which improves the accuracy of the user information during the video inquiry.
In one embodiment, after the determined user information is sent to the doctor terminal device for display, the method further includes:
and if an instruction for acquiring the historical inquiry report of the user, which is sent by the doctor terminal equipment, is received, determining the historical inquiry report corresponding to the identity information of the user by using the preset corresponding relation between the identity information of each target user and the historical inquiry report, and sending the historical inquiry report to the doctor terminal equipment for display.
In this embodiment, after the instruction for acquiring the historical inquiry report of the user sent by the doctor terminal device is received, the historical inquiry report corresponding to the identity information of the user is determined by using the preset corresponding relation between the identity information of each target user and the historical inquiry report, and the historical inquiry report is sent to the doctor terminal device for display. This ensures that the inquiry records the doctor obtains for the user are correct and improves the efficiency of the inquiry.
In one embodiment, after the determined user information is sent to the doctor terminal device for display, the method further includes:
receiving the diagnosis information of the user sent by the doctor terminal equipment, and updating a historical inquiry report corresponding to the identity information of the user by using the diagnosis information;
and sending the diagnosis information to user terminal equipment for display.
In this embodiment, the diagnostic information of the user sent by the doctor terminal device is received, the historical inquiry report corresponding to the identity information of the user is updated by using the diagnostic information, and the diagnostic information is sent to the user terminal device for display, so that the accuracy of the inquiry record of each user who performs video inquiry is ensured, and the management of the inquiry report of the user is facilitated.
A second aspect of the present disclosure provides a server comprising a memory and a processor, wherein:
the memory is configured to store the face characteristics of each target user corresponding to the account used by the user;
the processor configured to:
after receiving a video inquiry command of a user, acquiring the face characteristics of the user;
comparing the facial features of the user with the facial features of each target user corresponding to the account used by the user;
and if the facial features of the user exist in the facial features of the target users, determining user information corresponding to the facial features of the user according to the preset corresponding relation between the facial features of the target users and the user information, and sending the determined user information to the doctor terminal equipment for display.
In one embodiment, the processor is further configured to:
and if the face features of the target users do not include a face feature the same as the face features of the user, prompting that the user does not have the video inquiry authority.
In one embodiment, the processor is further configured to:
the corresponding relation between the face characteristics of the target user and the user information is established in the following mode:
after receiving user information of each target user corresponding to an account used by the user and an image of each target user, carrying out face recognition on the image of each target user to obtain face features of each target user;
and establishing a corresponding relation between the face characteristics of each target user and the user information.
In one embodiment, the processor is further configured to:
after the determined user information is sent to the doctor terminal equipment to be displayed, if an instruction for acquiring the historical inquiry report of the user sent by the doctor terminal equipment is received, the historical inquiry report corresponding to the user identity information is determined by using the preset corresponding relation between the identity information of each target user and the historical inquiry report, and the historical inquiry report is sent to the doctor terminal equipment to be displayed.
In one embodiment, the processor is further configured to:
after the determined user information is sent to the doctor terminal equipment for display, receiving the diagnosis information of the user sent by the doctor terminal equipment, and updating a historical inquiry report corresponding to the identity information of the user by using the diagnosis information;
and sending the diagnosis information to user terminal equipment for display.
According to a third aspect provided by embodiments of the present disclosure, there is provided a computer storage medium storing a computer program for executing the method according to the first aspect.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present disclosure, the drawings needed to be used in the description of the embodiments will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present disclosure, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without inventive exercise.
FIG. 1 is a schematic diagram of an application scenario according to an embodiment of the present disclosure;
FIG. 2 is a first flowchart of a video inquiry method according to an embodiment of the present disclosure;
FIG. 3 is a schematic interface diagram of a doctor terminal device in a video inquiry method according to an embodiment of the present disclosure;
FIG. 4 is a first schematic interface diagram of a user terminal device in a video inquiry method according to an embodiment of the present disclosure;
FIG. 5 is a second schematic interface diagram of a user terminal device in a video inquiry method according to an embodiment of the present disclosure;
FIG. 6 is a third schematic interface diagram of a user terminal device in a video inquiry method according to an embodiment of the present disclosure;
FIG. 7 is a second flowchart of a video inquiry method according to an embodiment of the present disclosure;
FIG. 8 is a schematic structural diagram of a video inquiry apparatus according to an embodiment of the present disclosure;
FIG. 9 is a schematic structural diagram of a server according to an embodiment of the present disclosure.
Detailed Description
To make the objects, technical solutions and advantages of the embodiments of the present disclosure more clear, the technical solutions of the embodiments of the present disclosure will be described clearly and completely with reference to the drawings in the embodiments of the present disclosure, and it is obvious that the described embodiments are some, but not all embodiments of the present disclosure. All other embodiments, which can be derived by a person skilled in the art from the embodiments disclosed herein without making any creative effort, shall fall within the protection scope of the present disclosure.
The term "and/or" in the embodiments of the present disclosure describes an association relationship of associated objects, and means that there may be three relationships, for example, a and/or B, which may mean: a exists alone, A and B exist simultaneously, and B exists alone. The character "/" generally indicates that the former and latter associated objects are in an "or" relationship.
The application scenario described in the embodiment of the present disclosure is for more clearly illustrating the technical solution of the embodiment of the present disclosure, and does not form a limitation on the technical solution provided in the embodiment of the present disclosure, and as a person having ordinary skill in the art knows, with the occurrence of a new application scenario, the technical solution provided in the embodiment of the present disclosure is also applicable to similar technical problems. In the description of the present disclosure, the term "plurality" means two or more unless otherwise specified.
In the prior art, a user needs to purchase a service package before using a video inquiry service; for example, a 3-person package or a 9-person package allows three or nine family members, respectively, to use the service. The user needs to select a family member in the package during the inquiry so that the doctor can read the basic information of the inquirer. However, if the inquirer uses the basic information of another family member in the package, or is not a family member in the package and simply uses the basic information of any family member in it for the video inquiry, the doctor reads incorrect inquirer information. This results in inefficient video inquiry.
Therefore, the present disclosure provides a video inquiry method in which, for a user who performs a video inquiry, the face features of the user are acquired and compared with the face features of each target user corresponding to the account used by the user. If the face features of the user are determined to be among the face features of the target users, it indicates that the user has video inquiry authority, so the user information corresponding to the face features of the user is determined according to the preset corresponding relation between the face features of the target users and the user information, and the determined user information is sent to the doctor terminal device for display. In this way, whether the user has video inquiry authority is determined through the face features, the video inquiry is started only after that authority is confirmed, and the user information of the user is sent to the doctor terminal device for display, so that the doctor reads the correct user information of the inquirer and the efficiency of the video inquiry is improved. The embodiments of the present disclosure are described in detail below with reference to the accompanying drawings.
As shown in fig. 1, an application scenario of the video inquiry method includes a terminal device 110, a terminal device 120, and a server 130, where the terminal device 110 may be a mobile phone, a tablet computer, a smart television with a camera, a personal computer, and the like. The server 130 may be implemented by a single server or may be implemented by a plurality of servers. The server 130 may be implemented by a physical server or may be implemented by a virtual server.
In a possible application scenario, after a user sends a video inquiry through the terminal device 110, the server 130 obtains a facial feature of the user after receiving a video inquiry command sent by the terminal device 110; comparing the facial features of the user with the facial features of each target user corresponding to the account used by the user; if the server 130 determines that the face features of the users exist in the face features of the target users, the server determines user information corresponding to the face features of the users according to a preset corresponding relationship between the face features of the target users and the user information, and sends the determined user information to the terminal device 120 for display.
Fig. 2 is a schematic flow chart of a video inquiry method of the present disclosure, which may include the following steps:
step 201: after receiving a video inquiry command of a user, acquiring the face characteristics of the user;
in one embodiment, step 201 can be implemented in the following two ways:
the first method is as follows: receiving the face characteristics of the user sent by a camera with a face recognition function;
the second method comprises the following steps: and receiving an image containing the face of the user sent by the camera, and carrying out face recognition on the image to obtain the face characteristics of the user.
In this embodiment, the face recognition of the image is performed by a preset face recognition algorithm.
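The following sketch illustrates these two acquisition modes. It is a minimal illustration only, not the claimed implementation: the camera payload layout is a hypothetical assumption, and the open-source face_recognition library merely stands in for the unspecified preset face recognition algorithm.

    # Sketch of step 201 (illustrative assumptions: payload layout, face_recognition library).
    from typing import Optional
    import numpy as np
    import face_recognition  # stand-in for the preset face recognition algorithm

    def acquire_face_features(camera_payload: dict) -> Optional[np.ndarray]:
        """Return a face feature vector for the inquiring user, or None if no face is found."""
        if "features" in camera_payload:
            # Mode 1: a camera with a built-in face recognition function sends the features directly.
            return np.asarray(camera_payload["features"])
        # Mode 2: the camera sends a raw image and the server performs face recognition itself.
        encodings = face_recognition.face_encodings(camera_payload["image"])
        return encodings[0] if encodings else None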
Step 202: comparing the facial features of the user with the facial features of each target user corresponding to the account used by the user;
step 203: and if the facial features of the users exist in the facial features of the target users, determining user information corresponding to the facial features of the users according to the preset corresponding relation between the facial features of the target users and the user information, and sending the determined user information to the doctor terminal equipment for displaying.
For example, the face features of the target users corresponding to the account used by the user include face feature 1, face feature 2, face feature 3, and face feature 4. If face feature 4 matches the face features of the user, the user information of the user is determined through face feature 4.
The user information comprises information such as name, gender, height and age. As shown in fig. 3, a display interface of the doctor terminal device is shown, when a doctor and a user perform a video inquiry, the name, sex, height and age of the user performing the video inquiry are displayed in the terminal interface of the doctor.
Therefore, whether the user has the video inquiry authority or not is determined through the face features, and after the video inquiry authority is determined, the user information of the user is sent to the doctor terminal device to be displayed, so that the doctor can read the correct user information of the inquirer, and the video inquiry efficiency is improved.
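A minimal sketch of steps 202 and 203 follows. The in-memory data layout, the reuse of the face_recognition library, and the 0.6 distance threshold are illustrative assumptions rather than part of the claimed method.

    # Sketch of comparing the user's face features with each target user's stored
    # features and, on a match, looking up that target user's information.
    import face_recognition

    def authorize_and_get_user_info(user_encoding, account_targets, threshold=0.6):
        """account_targets: list of entries such as
        {"encoding": <128-d vector>, "user_info": {"name": ..., "gender": ..., "height": ..., "age": ...}}."""
        encodings = [target["encoding"] for target in account_targets]
        distances = face_recognition.face_distance(encodings, user_encoding)
        for target, distance in zip(account_targets, distances):
            if distance <= threshold:
                return target["user_info"]  # sent to the doctor terminal device for display
        return None                         # no match: the user lacks video inquiry authority

When the function returns None, the server would fall through to the no-authority prompt described next.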
To prevent a person without authority from impersonating a package member to perform a video inquiry, in one embodiment, if no face feature identical to the face features of the user exists among the face features of the target users, the user is prompted that he or she does not have video inquiry authority.
For example, fig. 4 shows an interface schematic diagram of the user terminal device: when it is determined that no face feature identical to the face features of the user exists among the face features of the target users, the prompt message shown in fig. 4 is displayed to indicate that the user does not have video inquiry authority.
In order to ensure the accuracy of the determined user information of the user performing the video inquiry, in one embodiment, the corresponding relationship between the face characteristics of the target user and the user information is established in the following manner:
after receiving user information of each target user corresponding to an account used by the user and an image of each target user, carrying out face recognition on the image of each target user to obtain face features of each target user; and establishing a corresponding relation between the face characteristics of each target user and the user information.
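A minimal sketch of this correspondence-building step is shown below. The storage layout and the use of the face_recognition library for the face recognition step are assumptions made only for illustration.

    # Sketch of building the preset correspondence between face features and user
    # information for the target users registered under an account.
    import face_recognition

    def register_target_users(account_id, targets, store):
        """targets: list of {"user_info": {...}, "image_path": "..."};
        store: dict mapping account_id -> list of {"encoding", "user_info"} entries."""
        entries = []
        for target in targets:
            image = face_recognition.load_image_file(target["image_path"])
            encodings = face_recognition.face_encodings(image)
            if not encodings:
                continue  # skip an uploaded image in which no face is detected
            entries.append({"encoding": encodings[0], "user_info": target["user_info"]})
        store[account_id] = entries
        return store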
The user performing the video inquiry needs to install the application program for the video inquiry on the user terminal device, and the doctor needs to install the application program for the video inquiry on the doctor terminal device.
When a user uses the video inquiry application program for the first time, the user needs to purchase a video inquiry service package, add the user information (name, sex, age, height, and the like) of the target users in the package, and upload an image of each target user (an image including the target user's face). After receiving the user information of each target user corresponding to the account used by the user and the image of each target user, the server performs face recognition on the image of each target user to obtain the face features of each target user, and establishes the corresponding relation between the face features of each target user and the user information. Fig. 5 shows an interface diagram of the user terminal device in which the user fills in the user information of a target user and provides the image of the target user. The image of a user can be input by taking a photograph, selecting an existing picture, or the like; this embodiment is not limited in this respect.
It should be noted that the user who purchases the package is the master user, and the master user may add user information and images of other target users besides the master user.
Table 1 shows a correspondence relationship between face features and user information of each target user:
(Table 1 is reproduced as an image in the original publication; it lists, for each target user, the face features, the user information, and the historical inquiry report.)
TABLE 1
After the video inquiry application program has been installed, a family video inquiry package has been purchased, and the master user has added the user information and corresponding image of each target user, a user who wants to perform a video inquiry may directly speak a target voice, that is, preset voice information such as 'I want to perform video inquiry' or 'video inquiry', or may click the corresponding 'perform video inquiry' button in the video inquiry application program, to send a video inquiry instruction to the server.
For example, in the interface diagram shown in fig. 6, after the user clicks the 'I want to perform video inquiry' button, the server receives an instruction to perform a video inquiry and starts to perform the following steps. This simplifies the operation steps of the video inquiry and improves the user experience.
In order to ensure that the inquiry records of the user obtained by the doctor are correct, in one embodiment, if an instruction for obtaining a historical inquiry report of the user sent by the doctor terminal device is received, a historical inquiry report corresponding to the identity information of the user is determined by using a preset corresponding relationship between the identity information of each target user and the historical inquiry report, and the historical inquiry report is sent to the doctor terminal device for display.
For example, as shown in table 1 described above, table 1 includes the correspondence between the user information of the target user and the historical inquiry report. If the target user 1 performs the video inquiry, the historical inquiry report corresponding to the user information of the target user 1 in the table 1 is sent to the doctor terminal device for display.
The inquiry record mainly includes the time of inquiry, symptoms, diagnosis results, and the like; this embodiment is not limited in this respect.
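A minimal sketch of this lookup is given below; the report-store layout is an assumption for illustration.

    # Sketch of returning a user's historical inquiry reports to the doctor
    # terminal device, keyed by the user's identity information.
    def get_history_reports(identity_info, report_store):
        """report_store: dict mapping identity information -> list of report dicts, e.g.
        {"inquiry_time": "2021.05.24", "symptoms": "headache", "diagnosis": "common cold"}."""
        return report_store.get(identity_info, [])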
In one embodiment, the diagnosis information of the user sent by the doctor terminal equipment is received, and a historical inquiry report corresponding to the identity information of the user is updated by using the diagnosis information; and sending the diagnosis information to user terminal equipment for display.
For example, Table 2 shows the user's previous historical inquiry report, taking an inquiry report that includes the inquiry time, symptoms, and diagnosis result as an example:
Historical inquiry report | Time of inquiry | Symptoms | Diagnosis result
Inquiry report 1 | 2021.05.24 | Headache | Common cold
TABLE 2
If the diagnosis information of the user received from the doctor terminal equipment comprises the time 2021.05.26, the symptoms cough and headache, and the diagnosis result severe cold, the historical inquiry report of the user is updated with this diagnosis information, and the updated historical inquiry report of the user is shown in Table 3:
Historical inquiry report | Time of inquiry | Symptoms | Diagnosis result
Inquiry report 1 | 2021.05.24 | Headache | Common cold
Inquiry report 2 | 2021.05.26 | Cough and headache | Severe cold
TABLE 3
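A minimal sketch of this update step follows; the record layout mirrors Tables 2 and 3, and the identity key and storage structure are assumptions.

    # Sketch of updating a user's historical inquiry report with diagnosis
    # information received from the doctor terminal device.
    def update_history_report(identity_info, diagnosis_info, report_store):
        report_store.setdefault(identity_info, []).append(diagnosis_info)
        return diagnosis_info  # also sent to the user terminal device for display

    # Example matching Table 2 -> Table 3 ("user_1" is a hypothetical identity key):
    reports = {"user_1": [{"inquiry_time": "2021.05.24", "symptoms": "headache",
                           "diagnosis": "common cold"}]}
    update_history_report("user_1", {"inquiry_time": "2021.05.26",
                                     "symptoms": "cough and headache",
                                     "diagnosis": "severe cold"}, reports)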
To protect user privacy, each target user can set viewing permissions on his or her own historical inquiry reports: only a target user who has been granted viewing permission can view that target user's historical inquiry reports, and target users without that permission cannot. This improves the protection of user privacy. In one embodiment, after an instruction from a user to view the historical inquiry report of a specified target user is received, it is determined whether the user has permission to view the historical inquiry report of the specified target user; if so, the historical inquiry report of the specified target user is sent to the user terminal device for display; if not, the user is prompted that he or she does not have viewing permission.
For example, target user 1 sets privacy permissions on the historical inquiry report corresponding to target user 1, for example so that the target users permitted to view the historical inquiry report of target user 1 are target user 1 and target user 2. If, after receiving an instruction indicating that target user 3 wants to view the historical inquiry report of target user 1, the server determines that target user 3 does not have permission to view it, the server prompts that target user 3 does not have viewing permission. If the server receives an instruction indicating that target user 1 wants to view his or her own historical inquiry report, the server determines that target user 1 has viewing permission and sends the historical inquiry report corresponding to target user 1 to the user terminal device for display.
The user may also set permissions on one or more specified historical inquiry reports; the scheme is the same as the permission-setting method for historical inquiry reports described above and is not described in detail here.
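A minimal sketch of such a permission check is given below; the permission-table layout and the user identifiers are assumptions for illustration.

    # Sketch of the per-report viewing permission check described above.
    def can_view_report(requesting_user, report_owner, permissions):
        """permissions: dict mapping a report owner to the set of users allowed to
        view that owner's historical inquiry reports (the owner is always allowed)."""
        allowed = permissions.get(report_owner, set()) | {report_owner}
        return requesting_user in allowed

    # Example matching the description: target user 1 allows target users 1 and 2.
    permissions = {"target_user_1": {"target_user_1", "target_user_2"}}
    assert can_view_report("target_user_2", "target_user_1", permissions)
    assert not can_view_report("target_user_3", "target_user_1", permissions)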
For further understanding of the technical solution of the present disclosure, the following detailed description with reference to fig. 7 may include the following steps:
step 701: after receiving user information of each target user corresponding to an account used by the user and an image of each target user, carrying out face recognition on the image of each target user to obtain face features of each target user;
step 702: establishing a corresponding relation between the face characteristics of each target user and user information;
step 703: after receiving a video inquiry command of a user, acquiring the face characteristics of the user;
step 704: comparing the facial features of the user with the facial features of each target user corresponding to the account used by the user;
step 705: judging whether the face features of the user exist in the face features of each target user, if so, executing a step 706, otherwise, executing a step 707;
step 706: determining user information corresponding to the face features of the user according to the preset corresponding relation between the face features of the target user and the user information, and sending the determined user information to doctor terminal equipment for displaying;
step 707: prompting that the user does not have video inquiry authority;
step 708: if an instruction for acquiring the historical inquiry report of the user, which is sent by the doctor terminal equipment, is received, determining the historical inquiry report corresponding to the identity information of the user by using the preset corresponding relation between the identity information of each target user and the historical inquiry report, and sending the historical inquiry report to the doctor terminal equipment for display;
step 709: and receiving the diagnosis information of the user sent by the doctor terminal equipment, updating a historical inquiry report corresponding to the identity information of the user by using the diagnosis information, and sending the diagnosis information to the user terminal equipment for display.
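Pulling the steps of fig. 7 together, the sketch below outlines one possible server-side flow. It is a simplified illustration only: the feature comparison is reduced to a Euclidean distance with an assumed threshold, storage is kept in plain dictionaries, and the transport helpers are placeholders rather than any real messaging API.

    # End-to-end sketch of steps 703-709 (simplified, in-memory, illustrative only).
    import numpy as np

    def send_to_doctor_terminal(payload): print("to doctor terminal:", payload)     # placeholder
    def send_to_user_terminal(payload): print("to user terminal:", payload)         # placeholder
    def prompt_no_authority(): print("user does not have video inquiry authority")  # placeholder

    def handle_video_inquiry(user_features, account_targets, threshold=0.6):
        """account_targets: list of {"features": vector, "user_info": {...}, "identity": ...}."""
        for target in account_targets:                        # step 704: compare features
            distance = np.linalg.norm(np.asarray(target["features"]) - np.asarray(user_features))
            if distance <= threshold:                         # step 705: match found
                send_to_doctor_terminal(target["user_info"])  # step 706: show user info
                return target["identity"]
        prompt_no_authority()                                 # step 707: no authority
        return None

    def handle_history_request(identity, report_store):      # step 708: history report
        send_to_doctor_terminal(report_store.get(identity, []))

    def handle_diagnosis(identity, diagnosis_info, report_store):  # step 709: update and notify
        report_store.setdefault(identity, []).append(diagnosis_info)
        send_to_user_terminal(diagnosis_info)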
Fig. 8 is a schematic structural diagram of a video inquiry apparatus according to an embodiment of the present disclosure.
As shown in fig. 8, the video inquiry apparatus 800 of the present disclosure may include a face feature determination module 810, a comparing module 820, and a user information determination module 830.
The face feature determination module 810 is configured to obtain a face feature of a user after receiving a video inquiry instruction of the user;
a comparing module 820, configured to compare the facial features of the user with facial features of target users corresponding to the account used by the user;
and the user information determination module 830 is configured to determine, if the face features of the user exist in the face features of the target users, user information corresponding to the face features of the user according to a preset corresponding relationship between the face features of the target users and the user information, and send the determined user information to the doctor terminal device for display.
In one embodiment, the apparatus further comprises:
a prompting module 840, configured to prompt that the user does not have the permission for video inquiry if the face features of the target users do not have the same face features as the face features of the user.
In one embodiment, the apparatus further comprises:
a corresponding relationship establishing module 850, configured to establish a corresponding relationship between the facial features of the target user and the user information by:
after receiving user information of each target user corresponding to an account used by the user and an image of each target user, carrying out face recognition on the image of each target user to obtain face features of each target user;
and establishing a corresponding relation between the face characteristics of each target user and the user information.
In one embodiment, the apparatus further comprises:
and an inquiry record determining module 860, configured to, after the determined user information is sent to the doctor terminal device for display, determine, by using a preset corresponding relationship between the identity information of each target user and a historical inquiry report, a historical inquiry report corresponding to the identity information of the user if an instruction sent by the doctor terminal device for obtaining the historical inquiry report of the user is received, and send the historical inquiry report to the doctor terminal device for display.
In one embodiment, the apparatus further comprises:
the inquiry record updating module 870 is configured to receive the diagnostic information of the user sent by the doctor terminal device after the determined user information is sent to the doctor terminal device for display, and update a historical inquiry report corresponding to the identity information of the user by using the diagnostic information; and the number of the first and second groups,
and sending the diagnosis information to user terminal equipment for display.
Having described a video inquiry method and apparatus according to an exemplary embodiment of the present disclosure, a server according to another exemplary embodiment of the present disclosure is described next.
As will be appreciated by one skilled in the art, aspects of the present disclosure may be embodied as a system, method or program product. Accordingly, various aspects of the present disclosure may be embodied in the form of: an entirely hardware embodiment, an entirely software embodiment (including firmware, microcode, etc.) or an embodiment combining hardware and software aspects that may all generally be referred to herein as a "circuit," "module," or "system."
In some possible implementations, a server according to the present disclosure may include at least one processor, and at least one computer storage medium. Wherein the computer storage medium stores program code that, when executed by the processor, causes the processor to perform the steps in the video inquiry method described above in this specification according to various exemplary embodiments of the present disclosure. For example, the processor may perform steps 201 to 203 as shown in FIG. 2.
A server 900 according to this embodiment of the present disclosure is described below with reference to fig. 9. The server 900 shown in fig. 9 is only an example and should not bring any limitation to the function and the scope of use of the embodiments of the present disclosure.
As shown in fig. 9, the server 900 is represented in the form of a general server. The components of server 900 may include, but are not limited to: the at least one processor 901, the at least one computer storage medium 902, and the bus 903 connecting the various system components (including the computer storage medium 902 and the processor 901).
Bus 903 represents one or more of any of several types of bus structures, including a computer storage media bus or computer storage media controller, a peripheral bus, a processor, or a local bus using any of a variety of bus architectures.
Computer storage media 902 may include readable media in the form of volatile computer storage media, such as random access computer storage media (RAM)921 and/or cache storage media 922, and may further include read-only computer storage media (ROM) 923.
Computer storage media 902 may also include programs/utilities 925 having a set (at least one) of program modules 924, such program modules 924 including, but not limited to: an operating system, one or more application programs, other program modules, and program data, each of which, or some combination thereof, may comprise an implementation of a network environment.
The server 900 may also communicate with one or more external devices 904 (e.g., keyboard, pointing device, etc.), with one or more devices that enable a user to interact with the server 900, and/or with any devices (e.g., router, modem, etc.) that enable the server 900 to communicate with one or more other servers. Such communication may occur via input/output (I/O) interfaces 905. Moreover, the server 900 may also communicate with one or more networks (e.g., a Local Area Network (LAN), a Wide Area Network (WAN) and/or a public network such as the Internet) via a network adapter 906. As shown, the network adapter 906 communicates with the other modules for the server 900 over the bus 903. It should be understood that although not shown in the figures, other hardware and/or software modules may be used in conjunction with the server 900, including but not limited to: microcode, device drivers, redundant processors, external disk drive arrays, RAID systems, tape drives, and data backup storage systems, among others.
In some possible embodiments, various aspects of a video inquiry method provided by the present disclosure may also be implemented in the form of a program product, which includes program code for causing a computer device to perform the steps in the video inquiry method according to various exemplary embodiments of the present disclosure described above in this specification when the program product is run on the computer device.
The program product may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. A readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium include: an electrical connection having one or more wires, a portable diskette, a hard disk, a random access computer storage media (RAM), a read-only computer storage media (ROM), an erasable programmable read-only computer storage media (EPROM or flash memory), an optical fiber, a portable compact disc read-only computer storage media (CD-ROM), an optical computer storage media piece, a magnetic computer storage media piece, or any suitable combination of the foregoing.
The program product for video inquiry of embodiments of the present disclosure may employ a portable compact disk read-only computer storage medium (CD-ROM) and include program code, and may be run on an electronic device. However, the program product of the present disclosure is not limited thereto, and in this document, a readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device.
A readable signal medium may include a propagated data signal with readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A readable signal medium may also be any readable medium that is not a readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Program code for carrying out operations for the present disclosure may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, C++ or the like and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the consumer electronic device, partly on the consumer electronic device, as a stand-alone software package, partly on the consumer electronic device and partly on a remote electronic device, or entirely on the remote electronic device or server. In the case of remote electronic devices, the remote electronic devices may be connected to the consumer electronic device through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external electronic device (for example, through the internet using an internet service provider).
It should be noted that although several modules of the apparatus are mentioned in the above detailed description, such division is merely exemplary and not mandatory. Indeed, the features and functionality of two or more of the modules described above may be embodied in one module, in accordance with embodiments of the present disclosure. Conversely, the features and functions of one module described above may be further divided into embodiments by a plurality of modules.
Further, while the operations of the disclosed methods are depicted in the drawings in a particular order, this does not require or imply that these operations must be performed in this particular order, or that all of the illustrated operations must be performed, to achieve desirable results. Additionally or alternatively, certain steps may be omitted, multiple steps combined into one step execution, and/or one step broken down into multiple step executions.
As will be appreciated by one skilled in the art, embodiments of the present disclosure may be provided as a method, system, or computer program product. Accordingly, the present disclosure may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present disclosure may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, magnetic disk computer storage media, CD-ROMs, optical computer storage media, and the like) having computer-usable program code embodied therein.
The present disclosure is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to the present disclosure. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable computer storage medium that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable computer storage medium produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
It will be apparent to those skilled in the art that various changes and modifications can be made in the present disclosure without departing from the spirit and scope of the disclosure. Thus, if such modifications and variations of the present disclosure fall within the scope of the claims of the present disclosure and their equivalents, the present disclosure is intended to include such modifications and variations as well.

Claims (10)

1. A server, comprising a memory and a processor, wherein:
the memory is configured to store the face characteristics of each target user corresponding to the account used by the user;
the processor configured to:
after receiving a video inquiry command of a user, acquiring the face characteristics of the user;
comparing the facial features of the user with the facial features of each target user corresponding to the account used by the user;
and if the facial features of the user exist in the facial features of the target users, determining user information corresponding to the facial features of the user according to the preset corresponding relation between the facial features of the target users and the user information, and sending the determined user information to the doctor terminal equipment for displaying.
2. The server of claim 1, wherein the processor is further configured to:
and if the facial features of the target users do not include a facial feature the same as the facial features of the user, prompting that the user does not have the video inquiry authority.
3. The server of claim 1, wherein the processor is further configured to:
the corresponding relation between the face characteristics of the target user and the user information is established in the following mode:
after receiving user information of each target user corresponding to an account used by the user and an image of each target user, carrying out face recognition on the image of each target user to obtain face features of each target user;
and establishing a corresponding relation between the face characteristics of each target user and the user information.
4. The server of claim 1, wherein the processor is further configured to:
after the determined user information is sent to the doctor terminal equipment to be displayed, if an instruction for acquiring the historical inquiry report of the user sent by the doctor terminal equipment is received, the historical inquiry report corresponding to the user identity information is determined by using the preset corresponding relation between the identity information of each target user and the historical inquiry report, and the historical inquiry report is sent to the doctor terminal equipment to be displayed.
5. The server according to any one of claims 1-4, wherein the processor is further configured to:
after the determined user information is sent to the doctor terminal equipment for display, receiving the diagnosis information of the user sent by the doctor terminal equipment, and updating a historical inquiry report corresponding to the identity information of the user by using the diagnosis information; and the number of the first and second groups,
and sending the diagnosis information to user terminal equipment for display.
6. A video inquiry method, said method comprising:
after receiving a video inquiry command of a user, acquiring the face characteristics of the user;
comparing the facial features of the user with the facial features of each target user corresponding to the account used by the user;
and if the facial features of the user exist in the facial features of the target users, determining user information corresponding to the facial features of the user according to the preset corresponding relation between the facial features of the target users and the user information, and sending the determined user information to the doctor terminal equipment for displaying.
7. The method of claim 6, further comprising:
and if the facial features of the target users do not include a facial feature the same as the facial features of the user, prompting that the user does not have the video inquiry authority.
8. The method of claim 6, wherein the correspondence between the facial features of the target user and the user information is established by:
after receiving user information of each target user corresponding to an account used by the user and an image of each target user, carrying out face recognition on the image of each target user to obtain face features of each target user;
and establishing a corresponding relation between the face characteristics of each target user and the user information.
9. The method of claim 6, wherein after sending the determined user information to the doctor terminal device for display, the method further comprises:
and if an instruction for acquiring the historical inquiry report of the user, which is sent by the doctor terminal equipment, is received, determining the historical inquiry report corresponding to the identity information of the user by using the preset corresponding relation between the identity information of each target user and the historical inquiry report, and sending the historical inquiry report to the doctor terminal equipment for display.
10. The method according to any one of claims 6 to 9, wherein after the determined user information is sent to the doctor terminal device for display, the method further comprises:
receiving the diagnosis information of the user sent by the doctor terminal equipment, and updating a historical inquiry report corresponding to the identity information of the user by using the diagnosis information;
and sending the diagnosis information to user terminal equipment for display.
CN202110610533.1A 2021-06-01 2021-06-01 Video inquiry method and server Pending CN113539519A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110610533.1A CN113539519A (en) 2021-06-01 2021-06-01 Video inquiry method and server

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110610533.1A CN113539519A (en) 2021-06-01 2021-06-01 Video inquiry method and server

Publications (1)

Publication Number Publication Date
CN113539519A true CN113539519A (en) 2021-10-22

Family

ID=78095004

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110610533.1A Pending CN113539519A (en) 2021-06-01 2021-06-01 Video inquiry method and server

Country Status (1)

Country Link
CN (1) CN113539519A (en)


Patent Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110119341A1 (en) * 2009-11-17 2011-05-19 Ling Jun Wong Device-Service Affiliation Via Internet Video Link (IVL)
CN102663444A (en) * 2012-03-26 2012-09-12 广州商景网络科技有限公司 Method for preventing account number from being stolen and system thereof
KR20140000011A (en) * 2012-06-22 2014-01-02 서울대학교병원 (분사무소) System and method for remote medical examination
CN103475490A (en) * 2013-09-29 2013-12-25 广州网易计算机系统有限公司 Identity authentication method and device
WO2015090131A1 (en) * 2013-12-19 2015-06-25 中山大学深圳研究院 Ims-based digital home interactive medical system
CN106570312A (en) * 2016-10-18 2017-04-19 捷开通讯(深圳)有限公司 Method and system for mobile medical data interaction, server and mobile terminal
CN106529125A (en) * 2016-10-20 2017-03-22 山东中创软件工程股份有限公司 Remote diagnosis system
CN106778003A (en) * 2016-12-26 2017-05-31 佛山市幻云科技有限公司 Telemedicine method and server
CN109599187A (en) * 2018-10-31 2019-04-09 北京春雨天下软件有限公司 A kind of online interrogation point examines method, server, terminal, equipment and medium
CN210403218U (en) * 2019-09-06 2020-04-24 云南中钰雕龙数据科技有限公司 Remote inquiry equipment and system based on medical integration
CN111476940A (en) * 2020-04-04 2020-07-31 大连遨游智能科技有限公司 Triage referral method and system based on self-service inquiry terminal
CN111710381A (en) * 2020-06-10 2020-09-25 深圳市好克医疗仪器股份有限公司 Remote diagnosis method, device, equipment and computer storage medium
CN112562871A (en) * 2020-12-22 2021-03-26 联仁健康医疗大数据科技股份有限公司 Online inquiry method, online inquiry device, electronic equipment and storage medium
CN112863702A (en) * 2021-02-22 2021-05-28 海信集团控股股份有限公司 Intelligent health service method and display equipment thereof

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114025220A (en) * 2021-11-02 2022-02-08 贵阳朗玛视讯科技有限公司 Multi-version IPTV control system and method
CN114025220B (en) * 2021-11-02 2023-12-05 贵阳朗玛视讯科技有限公司 Control system and method for multi-version IPTV

Similar Documents

Publication Publication Date Title
CN104995865B (en) Service based on sound and/or face recognition provides
CN110457544A (en) A kind of data capture method, system, electronic equipment and storage medium
CN109982134B (en) Video teaching method based on diagnosis equipment, diagnosis equipment and system
US11777787B2 (en) Video-based maintenance method, maintenance terminal, server, system and storage medium
WO2016134307A1 (en) Coordinated mobile access to electronic medical records
AU2017268623A1 (en) Method, apparatus, terminal and storage medium of data displaying
US20160110372A1 (en) Method and apparatus for providing location-based social search service
CN113539519A (en) Video inquiry method and server
CN114330272A (en) Medical record template generation method and device, electronic equipment and storage medium
CN112465172A (en) Hospital intelligent treatment method and device
CN111370130B (en) Real-time processing method and device for medical data, storage medium and electronic equipment
CN111753203A (en) Card number recommendation method, device, equipment and medium
CN114501408A (en) Diagnosis and treatment data processing method and device, electronic equipment and storage medium
CN114663089A (en) Data processing method and device, electronic equipment and storage medium
CN113053531B (en) Medical data processing method, medical data processing device, computer readable storage medium and equipment
CN113919310A (en) Short message content determination method and device, electronic equipment and storage medium
CN113780855A (en) Medical institution supervision method and device, computer equipment and storage medium
US20230197213A1 (en) Medical information management system, clinical information acquisition server, medical information management method, and non-transitory recording medium storing a program
CN111090879A (en) Data processing method, device, readable storage medium, electronic equipment and system
US20100145727A1 (en) Interaction between healthcare software products
JP7185093B1 (en) Information processing device, information processing method and program
CN111554387B (en) Doctor information recommendation method and device, storage medium and electronic equipment
US9081876B2 (en) Methods and systems for navigating image series
EP4266322A1 (en) Systems and methods to contextualize clinical product/workflow issues for streamlined resolutions/recommendations
CN110289935B (en) Decoding method, device, terminal and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination