CN116259425A - Information display method for remote diagnosis and treatment, head-mounted display device and medium - Google Patents

Information display method for remote diagnosis and treatment, head-mounted display device and medium

Info

Publication number
CN116259425A
Authority
CN
China
Prior art keywords
diagnosis
information
target
treatment
head
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211712387.4A
Other languages
Chinese (zh)
Inventor
杨承燕
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Companion Technology Co ltd
Original Assignee
Hangzhou Companion Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Companion Technology Co ltd filed Critical Hangzhou Companion Technology Co ltd
Priority to CN202211712387.4A
Publication of CN116259425A
Legal status: Pending

Classifications

    • G: PHYSICS
    • G16: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H: HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H80/00: ICT specially adapted for facilitating communication between medical practitioners or patients, e.g. for collaborative diagnosis, therapy or health monitoring
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06K: GRAPHICAL DATA READING; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
    • G06K17/00: Methods or arrangements for effecting co-operative working between equipments covered by two or more of main groups G06K1/00 - G06K15/00, e.g. automatic card files incorporating conveying and reading operations
    • G06K17/0022: Methods or arrangements for effecting co-operative working between equipments covered by two or more of main groups G06K1/00 - G06K15/00; arrangements or provisions for transferring data to distant stations, e.g. from a sensing device
    • G06K17/0025: Methods or arrangements for effecting co-operative working between equipments covered by two or more of main groups G06K1/00 - G06K15/00; arrangements or provisions for transferring data to distant stations, the arrangement consisting of a wireless interrogation device in combination with a device for optically marking the record carrier
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02A: TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A90/00: Technologies having an indirect contribution to adaptation to climate change
    • Y02A90/10: Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation

Landscapes

  • Engineering & Computer Science (AREA)
  • Medical Informatics (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Pathology (AREA)
  • Epidemiology (AREA)
  • Biomedical Technology (AREA)
  • Primary Health Care (AREA)
  • Public Health (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Medical Treatment And Welfare Office Work (AREA)

Abstract

Embodiments of the present disclosure disclose an information display method for remote diagnosis and treatment, a head-mounted display device, and a medium. One embodiment of the method comprises: in response to the head-mounted display device scanning two-dimensional code information on an outpatient order, calling a target terminal device of a remote doctor user corresponding to the two-dimensional code information; synchronizing real-time multimedia information collected by a multimedia information acquisition device of the head-mounted display device to the target terminal device; in response to receiving a diagnosis and treatment result of the remote doctor user sent by the target terminal device, determining the diagnosis and treatment result of the remote doctor user as a target diagnosis and treatment result; and displaying virtual diagnosis and treatment information corresponding to the target diagnosis and treatment result on a display screen of the head-mounted display device. This implementation avoids the restrictions of fixed sites and fixed equipment, eliminates the need to arrange a site and debug fixed equipment in advance, and reduces the difficulty of operation.

Description

Information display method for remote diagnosis and treatment, head-mounted display device and medium
Technical Field
Embodiments of the present disclosure relate to the field of computer technology, and in particular, to an information display method, a head-mounted display device, and a medium for remote diagnosis and treatment.
Background
The development of telemedicine technology has alleviated the problem of unevenly distributed medical resources. Currently, remote consultation is generally carried out as follows: remote communication is performed using fixed video equipment (e.g., a camera mounted on a medical device or apparatus) at a fixed location (e.g., a dedicated consultation room).
However, the inventors have found that when the remote consultation is performed in the above manner, there are often the following technical problems:
First, arranging the site and debugging the equipment take a long time, the underlying equipment is complex, and operation is difficult.
Second, remote video is hard for foreign medical specialists who do not share a common language to follow, and additional translation equipment must be worn, which complicates operation.
Third, an expert for remote consultation must be reserved in advance; consultation cannot take place without a reservation, and reserving and queuing for a number take a long time, which complicates operation.
The above information disclosed in this background section is only for enhancement of understanding of the background of the inventive concept and, therefore, may contain information that does not form the prior art that is already known to those of ordinary skill in the art in this country.
Disclosure of Invention
This summary is provided to introduce concepts in a simplified form that are further described below in the detailed description. It is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to limit the scope of the claimed subject matter.
Some embodiments of the present disclosure propose a virtual diagnosis and treatment information display method, a head-mounted display device, and a computer-readable medium for remote diagnosis and treatment to solve one or more of the technical problems mentioned in the background section above.
In a first aspect, some embodiments of the present disclosure provide a virtual diagnosis and treatment information display method for remote diagnosis and treatment, applied to a head-mounted display device, the method including: in response to the head-mounted display device scanning two-dimensional code information on an outpatient order, calling a target terminal device of a remote doctor user corresponding to the two-dimensional code information; synchronizing real-time multimedia information collected by a multimedia information acquisition device of the head-mounted display device to the target terminal device; in response to receiving a diagnosis and treatment result of the remote doctor user sent by the target terminal device, determining the diagnosis and treatment result of the remote doctor user as a target diagnosis and treatment result; and displaying virtual diagnosis and treatment information corresponding to the target diagnosis and treatment result on a display screen of the head-mounted display device.
Optionally, before displaying the virtual diagnosis and treatment information corresponding to the target diagnosis and treatment result on the display screen of the head-mounted display device, the method further includes: in response to receiving preliminary diagnosis and treatment information sent by the target terminal device, determining an inquiry tree corresponding to the preliminary diagnosis and treatment information, wherein the inquiry tree includes a set of inquiry node chains, each inquiry node chain in the set takes a diagnosis and treatment result as its leaf node, and each inquiry node chain corresponds to an inquiry option sequence; selecting a target inquiry node chain from the set of inquiry node chains according to the pieces of input information entered, for the determined inquiry tree, by an on-site doctor user wearing the head-mounted display device, wherein the pieces of input information match the inquiry option sequence of the target inquiry node chain; and determining the diagnosis and treatment result included in the target inquiry node chain as the target diagnosis and treatment result.
Optionally, displaying the virtual diagnosis and treatment information corresponding to the target diagnosis and treatment result on the display screen of the head-mounted display device includes: obtaining diagnosis and treatment plan information corresponding to the target diagnosis and treatment result, wherein the diagnosis and treatment plan information includes an operation video and the operation video corresponds to a play node sequence; determining the operation video as the virtual diagnosis and treatment information; and displaying the virtual diagnosis and treatment information on the display screen according to the play node sequence.
Optionally, before synchronizing the real-time multimedia information collected by the multimedia information acquisition device of the head-mounted display device to the target terminal device, the method further includes: in response to receiving connection information of the remote doctor user for a diagnosis and treatment conference, acquiring condition information of the patient user corresponding to the diagnosis and treatment conference; and sending the condition information to the corresponding target terminal device.
Optionally, before synchronizing the real-time multimedia information collected by the multimedia information acquisition device of the head-mounted display device to the target terminal device, the method further includes: in response to receiving no connection information of the remote doctor user for the diagnosis and treatment conference within a preset period after the request for the diagnosis and treatment conference is sent, performing a connection prompt operation corresponding to the remote doctor user.
Optionally, before the selecting of the target inquiry node chain from the set of inquiry node chains according to the input information of the on-site doctor user wearing the head-mounted display device, the method further includes: in response to determining that the offline duration of the head-mounted display device satisfies a preset time condition, displaying offline diagnosis prompt information on the display screen of the head-mounted display device; and determining an inquiry tree corresponding to offline diagnosis information input by the on-site doctor user.
Optionally, the selecting of the target inquiry node chain from the set of inquiry node chains according to the input information entered, for the determined inquiry tree, by the on-site doctor user wearing the head-mounted display device includes: determining the inquiry node serving as the root node of the inquiry tree as a target inquiry node; and performing the following determination step for the target inquiry node: displaying an inquiry option set corresponding to the target inquiry node on the display screen of the head-mounted display device; in response to detecting input speech of the on-site doctor user, determining the input speech as input information; selecting, from the inquiry option set, an inquiry option matching the input information as a target inquiry option; and in response to determining that the target inquiry node satisfies a preset end-of-intermediate-node condition, determining the inquiry node chain corresponding to the target inquiry options in the set of inquiry node chains as the target inquiry node chain, wherein the inquiry option sequence corresponding to the target inquiry node chain is identical to the sequence formed by the selected target inquiry options.
Optionally, the determination step further includes: in response to determining that the target inquiry node does not satisfy the preset end-of-intermediate-node condition, taking the next inquiry node corresponding to the target inquiry option in the inquiry tree as the target inquiry node, and performing the determination step again.
Optionally, the method further comprises: in response to detecting a head-lowering operation of the on-site doctor user wearing the head-mounted display device, performing withdrawal processing on the virtual diagnosis and treatment information displayed on the display screen.
In a second aspect, some embodiments of the present disclosure provide a head-mounted display device, comprising: one or more processors; a storage apparatus having one or more programs stored thereon; a multimedia information acquisition device for collecting real-time multimedia information; and a display screen for forming an image in front of the eyes of a user; wherein, when the one or more programs are executed by the one or more processors, the one or more processors are caused to implement the method described in any of the implementations of the first aspect.
In a third aspect, some embodiments of the present disclosure provide a computer readable medium having a computer program stored thereon, wherein the program, when executed by a processor, implements the method described in any of the implementations of the first aspect above.
The above embodiments of the present disclosure have the following beneficial effects: the virtual diagnosis and treatment information display method for remote diagnosis and treatment of some embodiments of the present disclosure avoids the restrictions of fixed sites and fixed equipment, eliminates the need to arrange a site and debug fixed equipment in advance, and reduces the difficulty of operation. Specifically, the reasons why arranging a site and debugging equipment take a long time and operation is difficult are as follows: remote consultation is restricted to fixed sites and fixed equipment, arranging the site and debugging the equipment take a long time, the underlying equipment is complex, and operation is difficult. Based on this, in some embodiments of the present disclosure, in response to the head-mounted display device scanning two-dimensional code information on an outpatient order, the target terminal device of the remote doctor user corresponding to the two-dimensional code information is called, so that the on-site doctor can call the remote doctor directly by wearing the head-mounted display device and scanning the two-dimensional code information. The real-time multimedia information collected by the multimedia information acquisition device of the head-mounted display device is then synchronized to the target terminal device, so that the remote doctor can view the real-time multimedia information through the target terminal device. Next, in response to receiving the diagnosis and treatment result of the remote doctor user sent by the target terminal device, that diagnosis and treatment result is determined as the target diagnosis and treatment result; the result diagnosed by the remote doctor can therefore be used directly as the target diagnosis and treatment result, which characterizes the cause of the patient user's condition. Finally, virtual diagnosis and treatment information corresponding to the target diagnosis and treatment result is displayed on the display screen of the head-mounted display device. Remote expert guidance and joint consultation between the on-site doctor and the remote doctor are thereby realized, and the on-site doctor wearing the head-mounted display device can directly view, on its display screen, the virtual diagnosis and treatment information corresponding to the cause of the patient's condition. Because the on-site doctor can participate in the remote consultation with the remote doctor simply by wearing the head-mounted display device, the restrictions of fixed sites and fixed equipment are avoided, and there is no need to arrange a fixed site or debug fixed equipment in advance. Moreover, because the underlying equipment involves only the head-mounted display device worn by the on-site doctor and the target terminal device used by the remote doctor, the equipment is simplified and the difficulty of operation is reduced.
Drawings
The above and other features, advantages, and aspects of embodiments of the present disclosure will become more apparent by reference to the following detailed description when taken in conjunction with the accompanying drawings. The same or similar reference numbers will be used throughout the drawings to refer to the same or like elements. It should be understood that the figures are schematic and that elements and components are not necessarily drawn to scale.
FIG. 1 is an architecture diagram of an exemplary system in which some embodiments of the present disclosure may be applied;
fig. 2 is a schematic view of an application scenario of a virtual diagnosis and treatment information display method for remote diagnosis and treatment according to some embodiments of the present disclosure;
FIG. 3 is a flow chart of some embodiments of a virtual diagnosis and treatment information display method for remote diagnosis and treatment according to the present disclosure;
FIG. 4 is a flow chart of further embodiments of a virtual diagnosis and treatment information display method for remote diagnosis and treatment according to the present disclosure;
fig. 5 is a schematic structural diagram of a head mounted display device suitable for use in implementing some embodiments of the present disclosure.
Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete. It should be understood that the drawings and embodiments of the present disclosure are for illustration purposes only and are not intended to limit the scope of the present disclosure.
It should be noted that, for convenience of description, only the portions related to the present invention are shown in the drawings. Embodiments of the present disclosure and features of embodiments may be combined with each other without conflict.
It should be noted that the terms "first," "second," and the like in this disclosure are merely used to distinguish between different devices, modules, or units and are not used to define an order or interdependence of functions performed by the devices, modules, or units.
It should be noted that references to "a", "an", and "a plurality" in this disclosure are illustrative rather than limiting, and those of ordinary skill in the art will appreciate that they should be understood as "one or more" unless the context clearly indicates otherwise.
The names of messages or information interacted between the various devices in the embodiments of the present disclosure are for illustrative purposes only and are not intended to limit the scope of such messages or information.
Before the collection, storage, use, etc. of the personal information of users involved in the present disclosure (e.g., outpatient orders, real-time multimedia information, condition information, diagnosis and treatment results), the relevant organizations or individuals fulfill the obligations required by laws and regulations, including carrying out personal information security impact assessments, fulfilling notification obligations toward the personal information subjects, and obtaining the prior authorized consent of the personal information subjects.
The present disclosure will be described in detail below with reference to the accompanying drawings in conjunction with embodiments.
Fig. 1 illustrates an exemplary system architecture 100 to which the virtual diagnosis and treatment information display method for remote diagnosis and treatment of some embodiments of the present disclosure may be applied.
As shown in fig. 1, the system architecture 100 may include terminal devices 101, 102, 103, networks 104, 105, and a server 106. The network 104 is the medium used to provide communication links between the terminal devices 101, 102 and the server 106. The network 105 serves as a medium for providing a communication link between the terminal device 103 and the server 106. The networks 104, 105 may include various connection types, such as wired, wireless communication links, or fiber optic cables, among others.
The remote doctor user may interact with the server 106 via the network 104 using the terminal devices 101, 102 to receive or send messages and the like. The on-site doctor user may interact with the server 106 via the network 105 using the terminal device 103 to receive or send messages and the like. Various communication client applications, such as web browser applications, search applications, medical platform software, instant messaging tools, mailbox clients, and social platform software, may be installed on the terminal devices 101, 102, 103.
The terminal devices 101, 102, 103 may be hardware or software. When the terminal devices 101, 102 are hardware, they may be various electronic devices having a display screen and supporting information display, including but not limited to smartphones, tablet computers, e-book readers, laptop computers, desktop computers, and the like. The terminal device 103 may be, but is not limited to, one of the following: AR glasses, VR glasses, MR glasses. When the terminal devices 101, 102, 103 are software, they may be installed in the electronic devices listed above, and may be implemented as a plurality of software programs or software modules (for example, for providing distributed services) or as a single software program or software module. No specific limitation is imposed here.
The server 106 may be a server providing various services, such as a background server providing support for information displayed on the terminal devices 101, 102, 103. The background server can analyze and process the received data such as the request and the like, and feed back the processing result to the terminal equipment.
It should be noted that, the virtual diagnosis and treatment information display method for remote diagnosis and treatment provided by the embodiment of the present disclosure may be executed by the terminal device 103.
The server may be hardware or software. When the server is hardware, the server may be implemented as a distributed server cluster formed by a plurality of servers, or may be implemented as a single server. When the server is software, it may be implemented as a plurality of software or software modules, for example, for providing distributed services, or as a single software or software module. The present invention is not particularly limited herein.
It should be understood that the number of terminal devices, networks and servers in fig. 1 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation.
Fig. 2 is a schematic diagram of an application scenario of a virtual diagnosis and treatment information display method for remote diagnosis and treatment according to some embodiments of the present disclosure.
In the application scenario of fig. 2, first, in response to the head-mounted display device 201 scanning the two-dimensional code information 203 on the outpatient order 202, the head-mounted display device 201 may call the target terminal device 204 of the remote doctor user corresponding to the two-dimensional code information 203. Then, the head-mounted display device 201 may synchronize the real-time multimedia information 205 collected by its multimedia information acquisition device to the target terminal device 204. Next, in response to receiving the diagnosis and treatment result 206 of the remote doctor user sent by the target terminal device 204, the head-mounted display device 201 may determine the diagnosis and treatment result 206 as the target diagnosis and treatment result 207. Finally, the head-mounted display device 201 may display virtual diagnosis and treatment information 209 corresponding to the target diagnosis and treatment result 207 on a display screen 208 of the head-mounted display device.
The head-mounted display device 201 may be hardware or software. When the head-mounted display device 201 is hardware, it may be implemented as a distributed cluster composed of a plurality of servers or terminal devices, or may be implemented as a single server or a single terminal device. When the head-mounted display device 201 is embodied as software, it may be installed in the above-listed hardware devices. It may be implemented as a plurality of software or software modules, for example, for providing distributed services, or as a single software or software module. The present invention is not particularly limited herein.
It should be understood that the numbers of head mounted display devices and target terminal devices in fig. 2 are merely illustrative. There may be any number of head mounted display devices and target terminal devices, as desired for implementation.
With continued reference to fig. 3, a flow 300 of some embodiments of a virtual diagnosis and treatment information display method for remote diagnosis and treatment according to the present disclosure is shown. The virtual diagnosis and treatment information display method for remote diagnosis and treatment is applied to a head-mounted display device and comprises the following steps:
Step 301, in response to the head-mounted display device scanning two-dimensional code information on an outpatient order, calling a target terminal device of a remote doctor user corresponding to the two-dimensional code information.
In some embodiments, the executing body of the virtual diagnosis and treatment information display method (for example, the terminal device 103 shown in fig. 1) may, in response to the head-mounted display device scanning two-dimensional code information on an outpatient order, call the target terminal device of the remote doctor user corresponding to the two-dimensional code information. The head-mounted display device may be a device through which a user views display content in a virtual space. The head-mounted display device may be, but is not limited to, one of the following: AR glasses, VR glasses, MR glasses. Preferably, the head-mounted display device may be AR glasses. The outpatient order may be an order issued after a patient user registers at an outpatient department, and may be a paper outpatient order or an electronic outpatient order. The two-dimensional code information may characterize a two-dimensional code; for example, the two-dimensional code information may be a two-dimensional code image. The remote doctor user may be a remote user identified by the two-dimensional code information; for example, the remote doctor user may be a remote expert user. The target terminal device may be a terminal device on which a user account of the remote doctor user is logged in, and may include, but is not limited to, at least one of the following: a mobile phone, a tablet computer, a computer. In practice, the executing body may send a video call request to the target terminal device through a wireless connection, thereby calling the target terminal device.
It should be noted that the wireless connection may include, but is not limited to, 3G/4G connections, WiFi connections, Bluetooth connections, WiMAX connections, ZigBee connections, UWB (ultra wideband) connections, and other wireless connection means now known or developed in the future.
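For illustration only, the following Python sketch shows how this step could look in practice: the payload of the scanned two-dimensional code is resolved to a remote doctor user and a video call request is issued to that doctor's target terminal device. All names in the sketch (such as decode_qr, QR_REGISTRY, call_target_terminal) and the registry layout are assumptions introduced for the example; they are not part of the disclosed embodiments.
```python
# Illustrative sketch only (assumed names): resolve the scanned two-dimensional code
# to a remote doctor user and send a video call request to that doctor's target
# terminal device over a wireless connection.
from dataclasses import dataclass


@dataclass
class RemoteDoctor:
    user_id: str
    terminal_address: str  # network address of the target terminal device


# Hypothetical registry mapping two-dimensional code payloads to remote doctor users.
QR_REGISTRY = {
    "outpatient-order-42": RemoteDoctor("dr_remote_001", "10.0.0.17:5060"),
}


def decode_qr(qr_payload: str) -> str:
    """Stand-in for the head-mounted display device's QR decoding."""
    return qr_payload.strip()


def call_target_terminal(qr_payload: str) -> str:
    """Look up the remote doctor identified by the code and issue a video call request."""
    doctor = QR_REGISTRY.get(decode_qr(qr_payload))
    if doctor is None:
        raise LookupError("two-dimensional code does not identify a remote doctor user")
    # In the embodiment the request is sent wirelessly (WiFi, 3G/4G, Bluetooth, ...).
    return f"video-call-request -> {doctor.terminal_address} (user {doctor.user_id})"


if __name__ == "__main__":
    print(call_target_terminal("outpatient-order-42"))
```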
Optionally, before step 302, the executing body may, in response to receiving connection information of the remote doctor user for a diagnosis and treatment conference, acquire the condition information of the patient user corresponding to the diagnosis and treatment conference. The diagnosis and treatment conference may be a conference in which the target terminal device is called so that the patient can be jointly diagnosed and treated. The connection information may characterize the remote doctor user accepting the invitation to connect to the diagnosis and treatment conference. The specific form of the connection information is not limited here; for example, the connection information may be "connect". The condition information may be information characterizing the condition of the patient user. In practice, the executing body may acquire the condition information of the patient user from a database and then send the condition information to the corresponding target terminal device. In this way, the remote doctor user can become familiar with the patient's condition after connecting to the diagnosis and treatment conference.
Optionally, before step 302, the executing body may, in response to receiving no connection information of the remote doctor user for the diagnosis and treatment conference within a preset period after the request for the diagnosis and treatment conference is issued, perform a connection prompt operation corresponding to the remote doctor user. The connection prompt operation may be an operation that prompts the remote doctor user to connect to the diagnosis and treatment conference, and may include, but is not limited to, at least one of the following: a telephone dialing operation, a short message sending operation, an instant messaging message sending operation. The specific setting of the preset period is not limited here. In practice, the executing body may dial the telephone number corresponding to the remote doctor user to perform the telephone dialing operation. Then, in response to the remote doctor user not answering within a predetermined period after the call is placed, connection prompt information corresponding to the diagnosis and treatment conference may be sent to the target terminal device. The connection prompt information may be information prompting connection to the diagnosis and treatment conference; for example, it may be "the xxx diagnosis and treatment conference has reached its start time, please connect to the conference in time", where "xxx" may be any character string characterizing the topic of the diagnosis and treatment conference. The connection prompt information may be sent by short message and/or instant messaging message. In this way, the remote doctor user can be prompted in time when the diagnosis and treatment conference has not been connected for a certain time.
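For illustration only, the following Python sketch shows one way the connection prompt operation could be sequenced: wait for the connection information within a preset period, dial the remote doctor user's phone if it does not arrive, and fall back to sending the connection prompt information as a message. The helper names and timeout values are assumptions for the example, not part of the disclosed embodiments.
```python
# Illustrative sketch only (assumed names and timeouts): wait for connection
# information within a preset period; if it does not arrive, dial the remote doctor
# user's phone, and if there is still no connection, send connection prompt
# information as a short message / instant messaging message.
import time


def wait_for_connection(poll_connected, timeout_s: float, interval_s: float = 0.5) -> bool:
    """Poll `poll_connected()` until it returns True or the preset period elapses."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if poll_connected():
            return True
        time.sleep(interval_s)
    return False


def connection_prompt(poll_connected, dial_phone, send_message, timeout_s: float = 60.0) -> str:
    if wait_for_connection(poll_connected, timeout_s):
        return "connected"
    dial_phone()  # telephone dialing operation
    if wait_for_connection(poll_connected, timeout_s):
        return "connected after call"
    # Example wording of the connection prompt information.
    send_message("The diagnosis and treatment conference has reached its start time, "
                 "please connect to the conference in time.")
    return "prompt sent"


if __name__ == "__main__":
    # Demo with a doctor who never connects (timeouts shortened for the example).
    print(connection_prompt(lambda: False, lambda: print("dialing..."),
                            print, timeout_s=0.5))
```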
Optionally, before step 302, the executing body may further execute the following calling steps:
the first step, acquiring the illness state information of a corresponding patient user in response to the fact that the head-mounted display device does not scan the two-dimensional code information on the outpatient service list. In practice, the executing body may acquire the illness state information of the corresponding patient user from the server.
And secondly, extracting a disease keyword set from the disease information. In practice, the executing body may extract the condition keyword set from the condition information through a keyword extraction algorithm. For example, the keyword extraction algorithm may be a keyword extraction algorithm based on a word graph model. The keyword extraction algorithm may also be a topic model-based keyword extraction algorithm. In practice, the executing body may also extract each initial condition keyword set from the condition information through each keyword extraction algorithm of different types. Then, the intersection of the respective initial condition keyword sets described above may be determined as the condition keyword set.
And thirdly, selecting each remote doctor user matched with the illness state keyword set from the remote doctor user set as a matched remote doctor user set. Wherein, the set of remote doctor users can be a set of individual remote doctor users. Each of the set of remote doctor users may correspond to at least one keyword. In practice, the executing body may select, from the set of remote doctor users, each remote doctor user whose corresponding keyword is the same as the condition keyword in the set of condition keywords as the set of matching remote doctor users. In practice, the executing body may further determine a similarity between the keyword of each remote doctor user in the set of remote doctor users and each disease keyword in the set of disease keywords, so as to obtain a set of similarity. Here, the similarity may be text similarity. Then, each of the remote doctor users whose corresponding similarity set satisfies the preset condition among the above-mentioned remote doctor user sets may be determined as a matching remote doctor user set. The preset condition may be that the set of similarities includes a similarity greater than the preset similarity. Here, the specific setting of the preset similarity is not limited.
And fourthly, selecting the matched remote doctor user meeting the preset idle time condition from the matched remote doctor user set as a target remote doctor user. The preset idle time condition may be that the current time is within an idle time period matching the remote doctor user. In practice, the executing entity may select any matching remote doctor user satisfying a preset idle time condition from the set of matching remote doctor users as a target remote doctor user.
And fifthly, calling the target terminal equipment of the target remote doctor user.
The above calling step is an inventive point of the embodiments of the present disclosure and solves the third technical problem mentioned in the background: an expert for remote consultation must be reserved in advance, consultation cannot take place without a reservation, and reserving and queuing for a number take a long time, which complicates operation. The factors causing the complicated operation are usually as follows: the expert conducting the remote consultation must be reserved in advance, remote consultation cannot be conducted without a reservation, and a long time must be spent reserving and queuing. If these factors are resolved, the operation can be simplified. To achieve this effect, the present disclosure, when no two-dimensional code information on an outpatient order is scanned, automatically matches from the set of remote doctor users those remote doctor users corresponding to the condition information of the patient user, and then automatically selects the target remote doctor user to call according to the idle periods of the matching remote doctor users. The expert joining the remote consultation therefore does not need to be reserved in advance, remote consultation can proceed without a reservation, and the waiting time of reservation and queuing is avoided, thereby simplifying the operation. A minimal illustration of this matching and selection is sketched below.
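For illustration only, the following Python sketch outlines the calling step described above: condition keywords are extracted, matched against each remote doctor user's keywords, the matches are filtered by the preset idle time condition, and one idle matching doctor is selected as the target remote doctor user. A trivial vocabulary intersection stands in for the word-graph or topic-model keyword extraction, and all names and data are assumptions for the example.
```python
# Illustrative sketch only (assumed data and names): extract condition keywords,
# match them against each remote doctor user's keywords, keep doctors whose current
# time falls within an idle period, and pick one as the target remote doctor user.
from dataclasses import dataclass
from datetime import datetime, time as dtime


@dataclass
class RemoteDoctor:
    user_id: str
    keywords: set
    idle_start: dtime
    idle_end: dtime

    def is_idle(self, now: datetime) -> bool:
        return self.idle_start <= now.time() <= self.idle_end


def extract_condition_keywords(condition_info: str, vocabulary: set) -> set:
    """Stand-in extractor: intersect the condition text with a known keyword vocabulary."""
    return {word for word in condition_info.lower().split() if word in vocabulary}


def select_target_doctor(condition_info: str, doctors: list, vocabulary: set, now=None):
    now = now or datetime.now()
    keywords = extract_condition_keywords(condition_info, vocabulary)
    matching = [d for d in doctors if d.keywords & keywords]   # matching remote doctor users
    idle = [d for d in matching if d.is_idle(now)]             # preset idle time condition
    return idle[0] if idle else None                           # any idle match may be chosen


if __name__ == "__main__":
    doctors = [RemoteDoctor("dr_a", {"pneumonia", "cough"}, dtime(8, 0), dtime(20, 0))]
    picked = select_target_doctor("persistent cough and fever", doctors,
                                  vocabulary={"cough", "fever", "pneumonia"})
    print("call target terminal of:", picked.user_id if picked else "no doctor available")
```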
Step 302, synchronizing real-time multimedia information acquired by a multimedia information acquisition device of the head-mounted display device to the target terminal device.
In some embodiments, the executing body may synchronize the real-time multimedia information collected by the multimedia information acquisition device of the head-mounted display device to the target terminal device. The multimedia information acquisition device may be a device for collecting multimedia information and may include, but is not limited to, at least one of the following: a camera, a microphone. The real-time multimedia information may be multimedia information collected in real time by the multimedia information acquisition device and may include, but is not limited to, at least one of the following: images, video, audio. In practice, the executing body may synchronize the real-time multimedia information to the target terminal device in response to detecting that the target terminal device accepts the video call request. The real-time multimedia information may be stream data captured at the diagnosis site, the diagnosis site being the real-world scene in which the patient user is located.
Optionally, the executing body may play the real-time multimedia information sent by the target terminal device. Therefore, the remote real-time communication between the on-site doctor user and the remote doctor user can be realized.
Optionally, the executing body may synchronize the real-time multimedia information acquired by the multimedia information acquisition device of the head-mounted display device to the target terminal device through the following steps:
first, determining the communication language type sequence corresponding to the remote doctor user. The communication language type sequence may be the types of various communication languages used by the remote doctor user. The communication language types in the communication language type sequence may be arranged from large to small according to the proficiency of the remote doctor user. In practice, the executing entity may obtain the communication language type sequence corresponding to the remote doctor user from the server.
Second, determining the language type corresponding to the real-time multimedia information. In practice, the executing body may identify the language type corresponding to the real-time multimedia information through a pre-trained language type recognition model. The language type recognition model may be a neural network model that takes audio as input data and outputs the recognized language type; the type of neural network model is not limited here. Specifically, the real-time audio of the real-time multimedia information may be input into the language type recognition model to obtain a language type.
Third, determining whether a communication language type corresponding to the identified language type exists in the communication language type sequence. In practice, the executing body may determine whether the communication language type sequence contains a communication language type identical to the identified language type.
Fourth, in response to determining that no communication language type corresponding to the identified language type exists in the communication language type sequence, determining the first communication language type in the communication language type sequence as the target communication language type.
Fifth, converting the real-time audio of the real-time multimedia information into converted text corresponding to the target communication language type, i.e., the language type of the converted text is the target communication language type.
Sixth, synchronizing the real-time multimedia information and the converted text to the corresponding target terminal device.
The above content is an inventive point of the embodiments of the present disclosure and solves the second technical problem mentioned in the background: remote video is hard for foreign medical specialists who do not share a common language to follow, and additional translation equipment must be worn, which complicates operation. The factors causing the complicated operation are usually as follows: the remote video is hard for foreign medical specialists who do not share a common language to follow, and translation equipment must additionally be worn. If these factors are resolved, the operation can be simplified. To achieve this effect, the present disclosure translates the real-time audio of the real-time multimedia information into converted text in a communication language type supported by the remote doctor user when it is determined that the language type of the real-time multimedia information does not correspond to any communication language type of the remote doctor user. Then, when the real-time multimedia information is synchronized to the target terminal device, the converted text of its real-time audio is synchronized as well. The remote doctor user can thus view the translated converted text while viewing the real-time multimedia information, which facilitates understanding. No additional translation equipment needs to be worn, and the operation is simplified. A minimal illustration of this check-and-translate flow is sketched below.
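For illustration only, the following Python sketch outlines the language check and translation described above: the language type of the real-time audio is recognized, and if it matches none of the remote doctor user's communication language types, the audio is transcribed and translated into the first (most proficient) communication language type before being synchronized together with the real-time multimedia information. The recognize, transcribe, and translate helpers are placeholders, not real APIs.
```python
# Illustrative sketch only (placeholder helpers, not real APIs): if the language type
# recognized from the real-time audio matches none of the remote doctor user's
# communication language types, transcribe the audio and translate it into the first
# (most proficient) communication language type before synchronization.
def recognize_language(audio: bytes) -> str:
    return "zh"  # stand-in for the pre-trained language type recognition model


def transcribe(audio: bytes) -> str:
    return "chest pain for three days"  # stand-in speech-to-text


def translate(text: str, target_lang: str) -> str:
    return f"[{target_lang}] {text}"  # stand-in translation


def prepare_sync_payload(audio: bytes, communication_lang_sequence: list) -> dict:
    """Build what is synchronized to the target terminal device."""
    payload = {"real_time_audio": audio}
    detected = recognize_language(audio)
    if detected not in communication_lang_sequence:
        target = communication_lang_sequence[0]   # first = most proficient language type
        payload["converted_text"] = translate(transcribe(audio), target)
    return payload


if __name__ == "__main__":
    print(prepare_sync_payload(b"...pcm...", ["en", "fr"]))
```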
Step 303, in response to receiving the diagnosis and treatment result of the remote doctor user sent by the target terminal device, determining the diagnosis and treatment result of the remote doctor user as the target diagnosis and treatment result.
In some embodiments, the executing body may determine the diagnosis and treatment result of the remote doctor user as the target diagnosis and treatment result in response to receiving the diagnosis and treatment result of the remote doctor user sent by the target terminal device. The diagnosis and treatment result may be the actual cause of the patient user's condition as determined by the remote doctor user from the synchronized real-time multimedia information.
Step 304, displaying virtual diagnosis and treatment information corresponding to the target diagnosis and treatment result on a display screen of the head-mounted display device.
In some embodiments, the executing body may display virtual diagnosis and treatment information corresponding to the target diagnosis and treatment result on the display screen of the head-mounted display device. The display screen may be a display screen for forming an image in front of the eyes of the user; for example, it may be a micro display screen. The virtual diagnosis and treatment information may be virtual information, corresponding to the target diagnosis and treatment result, that is displayed on the display screen. For example, the virtual diagnosis and treatment information may include, but is not limited to, at least one of the following: diagnosis and treatment plan information corresponding to the target diagnosis and treatment result, and history information of the diagnosis and treatment plan information. The diagnosis and treatment plan information may be a treatment plan for the target diagnosis and treatment result, and its form of presentation may include, but is not limited to, at least one of the following: text, pictures, audio, video. The history information may be statistical information about treatment outcomes under the diagnosis and treatment plan information; its form of presentation may include, but is not limited to, at least one of the following: text, charts; and it may include, but is not limited to, at least one of the following: average cure duration, cure rate. In practice, the executing body may select, from a pre-stored set of diagnosis and treatment plan information in which each item corresponds to a diagnosis and treatment result, at least one item of diagnosis and treatment plan information corresponding to the target diagnosis and treatment result as the virtual diagnosis and treatment information. The diagnosis and treatment plan information included in the virtual diagnosis and treatment information may then be displayed in the form of a list on the display screen of the head-mounted display device.
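For illustration only, the following Python sketch shows the selection and list-style display described above: diagnosis and treatment plan information pre-stored per diagnosis and treatment result is looked up and rendered as simple list entries carrying the history information (average cure duration, cure rate). The data layout and field names are assumptions for the example.
```python
# Illustrative sketch only (assumed data layout): look up the diagnosis and treatment
# plan information pre-stored for each diagnosis and treatment result and render it,
# together with its history information, as simple list entries for the display screen.
PLAN_STORE = {
    "pneumonia": [
        {"plan": "Antibiotic regimen A", "avg_cure_days": 10, "cure_rate": 0.92},
        {"plan": "Antibiotic regimen B", "avg_cure_days": 14, "cure_rate": 0.88},
    ],
}


def virtual_info_for(target_result: str) -> list:
    """Select the plan information corresponding to the target diagnosis and treatment result."""
    return PLAN_STORE.get(target_result, [])


def render_as_list(plans: list) -> str:
    """Format the virtual diagnosis and treatment information as display-screen lines."""
    return "\n".join(
        f"{i + 1}. {p['plan']} | avg cure: {p['avg_cure_days']} d | cure rate: {p['cure_rate']:.0%}"
        for i, p in enumerate(plans)
    )


if __name__ == "__main__":
    print(render_as_list(virtual_info_for("pneumonia")))
```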
In some optional implementations of some embodiments, the executing body may display virtual diagnosis and treatment information corresponding to the target diagnosis and treatment result on a display screen of the head-mounted display device through the following steps:
first, obtaining diagnosis and treatment scheme information corresponding to the target diagnosis and treatment result. Wherein, the diagnosis and treatment scheme information comprises operation videos. The operation video can be a video for guiding the on-site doctor to operate in practice. The operation video corresponds to a play node sequence. Each play node in the play node sequence corresponds to a video clip in the surgical operation video.
And secondly, determining the operation video as virtual diagnosis and treatment information.
And thirdly, displaying the virtual diagnosis and treatment information on the display screen according to the play node sequence. In practice, the execution body may play each video clip of the operation video sequentially according to the order of each play node in the play node sequence. In practice, for each playback node in the playback node sequence, the execution body may further execute the following playback steps:
In a first step, playing the video clip corresponding to the play node. It will be understood that, when the play steps are performed for the first time, the play node is the first play node in the play node sequence.
In a second step, in response to determining that the video clip has finished playing, displaying a replay control and a sequential play control on the display screen.
In a third step, in response to detecting a selection operation acting on the replay control, playing the video clip again. Here, the selection operation may be, but is not limited to, one of the following: gesture selection, voice selection, head-control selection, touch selection. Gesture selection may select a control through a gesture. Voice selection may select a control through a voice command. Head-control selection may select a control through head rotation and/or movement. Touch selection may select a control through a touch operation on a touch device, which may be, but is not limited to, one of the following: a mobile phone, a tablet computer, a touch pad.
In a fourth step, in response to detecting a selection operation acting on the sequential play control, determining the next play node corresponding to the current play node in the play node sequence as the play node, and performing the play steps again. In this way, the on-site doctor user can control the playing progress of the operation video, and because each play node corresponds to its own video clip, the on-site doctor user can conveniently replay a single video clip on its own. A minimal illustration of this node-by-node playback is sketched below.
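For illustration only, the following Python sketch shows the node-by-node playback described above: each play node maps to one video clip, and after a clip finishes, the on-site doctor user's selection of the replay control or the sequential play control decides whether the clip is replayed or the next play node is taken. The function names and the stdin-based control selection are assumptions for the example.
```python
# Illustrative sketch only (assumed names): each play node maps to one video clip;
# after a clip finishes, a replay control and a sequential play control are offered,
# and the on-site doctor user's selection decides whether to replay the clip or
# advance to the next play node.
def play_clip(clip: str) -> None:
    print(f"[playing clip] {clip}")


def play_operation_video(play_nodes: list, clips: dict, choose=input) -> None:
    i = 0
    while i < len(play_nodes):
        node = play_nodes[i]
        play_clip(clips[node])
        # Controls displayed after the clip has finished playing.
        choice = choose("select control [replay / next]: ").strip().lower()
        if choice == "replay":
            continue   # replay the same clip; the play node does not advance
        i += 1         # sequential play control: move to the next play node


if __name__ == "__main__":
    nodes = ["incision", "suture"]
    clips = {"incision": "clip_01.mp4", "suture": "clip_02.mp4"}
    # Non-interactive demo: always choose the sequential play control.
    play_operation_video(nodes, clips, choose=lambda prompt: "next")
```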
Optionally, the executing body may also, in response to detecting a head-lowering operation of the on-site doctor user wearing the head-mounted display device, perform withdrawal processing on the virtual diagnosis and treatment information displayed on the display screen. The head-lowering operation may be the operation of lowering the head while the on-site doctor user wears the head-mounted display device, and may be detected by the executing body through an inertial measurement unit built into the head-mounted display device. In practice, the executing body may clear the virtual diagnosis and treatment information from the display screen. In practice, the executing body may instead move the virtual diagnosis and treatment information to a side position of the display screen, where the side position may be any of the upper, lower, left, or right sides of the display screen; the specific setting of the side position is not limited. In this way, the on-site user can lower the head to write without affecting the viewing of the real scene.
It will be appreciated that the executing body may also, in response to detecting a head-raising operation of the on-site doctor user wearing the head-mounted display device, perform restoration processing on the virtual diagnosis and treatment information that has been withdrawn. In practice, the executing body may display the virtual diagnosis and treatment information at the position where it was previously displayed. In this way, when the on-site user raises the head, the virtual diagnosis and treatment information can be restored on the display screen for continued viewing.
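For illustration only, the following Python sketch shows how the withdrawal and restoration described above could be driven by the head-mounted display device's inertial measurement unit: when the reported pitch angle drops below a threshold the virtual diagnosis and treatment information is withdrawn, and when the head is raised again it is restored to its previous position. The pitch threshold and all names are assumptions for the example.
```python
# Illustrative sketch only (assumed threshold and names): use the pitch angle reported
# by the head-mounted display device's inertial measurement unit to withdraw the
# virtual diagnosis and treatment information when the wearer lowers the head, and to
# restore it to its previous position when the head is raised again.
class VirtualInfoPanel:
    def __init__(self):
        self.visible = True
        self.saved_position = (0.5, 0.5)   # position used before withdrawal

    def withdraw(self):
        self.visible = False               # e.g. clear it, or move it to a side position

    def restore(self):
        self.visible = True                # redisplay at the previously used position


def on_imu_pitch(panel: VirtualInfoPanel, pitch_deg: float, low_head_threshold: float = -30.0):
    """Negative pitch means looking down; the threshold value is an assumption."""
    if pitch_deg < low_head_threshold and panel.visible:
        panel.withdraw()
    elif pitch_deg >= low_head_threshold and not panel.visible:
        panel.restore()


if __name__ == "__main__":
    panel = VirtualInfoPanel()
    for pitch in (0.0, -45.0, -10.0):      # level, head lowered, head raised
        on_imu_pitch(panel, pitch)
        print(f"pitch={pitch:+.0f} deg, panel visible: {panel.visible}")
```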
The above embodiments of the present disclosure have the following beneficial effects: the virtual diagnosis and treatment information display method for remote diagnosis and treatment of some embodiments of the present disclosure avoids the restrictions of fixed sites and fixed equipment, eliminates the need to arrange a site and debug fixed equipment in advance, and reduces the difficulty of operation. Specifically, the reasons why arranging a site and debugging equipment take a long time and operation is difficult are as follows: remote consultation is restricted to fixed sites and fixed equipment, arranging the site and debugging the equipment take a long time, the underlying equipment is complex, and operation is difficult. Based on this, in some embodiments of the present disclosure, in response to the head-mounted display device scanning two-dimensional code information on an outpatient order, the target terminal device of the remote doctor user corresponding to the two-dimensional code information is called, so that the on-site doctor can call the remote doctor directly by wearing the head-mounted display device and scanning the two-dimensional code information. The real-time multimedia information collected by the multimedia information acquisition device of the head-mounted display device is then synchronized to the target terminal device, so that the remote doctor can view the real-time multimedia information through the target terminal device. Next, in response to receiving the diagnosis and treatment result of the remote doctor user sent by the target terminal device, that diagnosis and treatment result is determined as the target diagnosis and treatment result; the result diagnosed by the remote doctor can therefore be used directly as the target diagnosis and treatment result, which characterizes the cause of the patient user's condition. Finally, virtual diagnosis and treatment information corresponding to the target diagnosis and treatment result is displayed on the display screen of the head-mounted display device. Remote expert guidance and joint consultation between the on-site doctor and the remote doctor are thereby realized, and the on-site doctor wearing the head-mounted display device can directly view, on its display screen, the virtual diagnosis and treatment information corresponding to the cause of the patient's condition. Because the on-site doctor can participate in the remote consultation with the remote doctor simply by wearing the head-mounted display device, the restrictions of fixed sites and fixed equipment are avoided, and there is no need to arrange a fixed site or debug fixed equipment in advance. Moreover, because the underlying equipment involves only the head-mounted display device worn by the on-site doctor and the target terminal device used by the remote doctor, the equipment is simplified and the difficulty of operation is reduced.
With further reference to fig. 4, a flow 400 of further embodiments of a virtual diagnostic information display method for remote diagnostics is shown. The process 400 of the virtual diagnosis and treatment information display method for remote diagnosis and treatment is applied to a head-mounted display device, and comprises the following steps:
Step 401, in response to the head-mounted display device scanning two-dimensional code information on an outpatient order, calling a target terminal device of a remote doctor user corresponding to the two-dimensional code information.
Step 402, synchronizing real-time multimedia information acquired by a multimedia information acquisition device of the head-mounted display device to the target terminal device.
Step 403, in response to receiving the diagnosis and treatment result of the remote doctor user sent by the target terminal device, determining the diagnosis and treatment result of the remote doctor user as the target diagnosis and treatment result.
In some embodiments, the specific implementation of steps 401-403 and the technical effects thereof may refer to steps 301-303 in those embodiments corresponding to fig. 3, which are not described herein.
Step 404, in response to receiving preliminary diagnosis and treatment information sent by the target terminal device, determining an inquiry tree corresponding to the preliminary diagnosis and treatment information.
In some embodiments, the executing body of the virtual diagnosis and treatment information display method (for example, the terminal device 103 shown in fig. 1) may determine, in response to receiving the preliminary diagnosis and treatment information sent by the target terminal device, an inquiry tree corresponding to the preliminary diagnosis and treatment information. The preliminary diagnosis and treatment information may be the cause of the patient's condition as preliminarily determined by the remote doctor user from the real-time multimedia information; for example, the preliminary diagnosis and treatment information may be "respiratory disease". The inquiry tree may be a tree-structured decision model for determining the patient's diagnosis; it corresponds to a root node, a set of intermediate nodes, and a set of leaf nodes, and includes a set of inquiry node chains. Each inquiry node chain in the set is a branch of the inquiry tree from the root node to a leaf node, takes a diagnosis and treatment result as its leaf node, and corresponds to an inquiry option sequence. The inquiry option sequence may be the inquiry options that must be selected at the intermediate node of each level in order to branch to the corresponding inquiry node chain; for example, options A, C, and D may be selected in sequence to branch to the inquiry node chain corresponding to pneumonia. In practice, the executing body may obtain the inquiry tree corresponding to the preliminary diagnosis and treatment information from the server, or may obtain a locally stored inquiry tree corresponding to the preliminary diagnosis and treatment information.
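For illustration only, the following Python sketch shows one possible representation of such an inquiry tree: internal inquiry nodes carry a question and a set of inquiry options leading to child nodes, leaf nodes carry a diagnosis and treatment result, and following one inquiry option sequence from the root reaches the corresponding leaf. The structure and the toy data are assumptions for the example.
```python
# Illustrative sketch only (assumed structure and toy data): internal inquiry nodes
# carry a question and a set of inquiry options leading to child nodes, leaf nodes
# carry a diagnosis and treatment result, and a chain of inquiry nodes from the root
# to a leaf corresponds to one inquiry option sequence.
from dataclasses import dataclass, field
from typing import Dict, Optional


@dataclass
class InquiryNode:
    question: Optional[str] = None                        # None for leaf nodes
    options: Dict[str, "InquiryNode"] = field(default_factory=dict)
    result: Optional[str] = None                          # diagnosis and treatment result at a leaf

    def is_leaf(self) -> bool:
        return self.result is not None


def walk(node: InquiryNode, option_sequence: list) -> Optional[str]:
    """Follow one inquiry option sequence from the root and return the leaf's result."""
    for option in option_sequence:
        node = node.options[option]
    return node.result if node.is_leaf() else None


if __name__ == "__main__":
    # Toy inquiry tree for a "respiratory disease" preliminary diagnosis.
    tree = InquiryNode(
        question="Is there a fever?",
        options={
            "A": InquiryNode(
                question="Is the cough productive?",
                options={"C": InquiryNode(result="pneumonia"),
                         "D": InquiryNode(result="viral infection")},
            ),
            "B": InquiryNode(result="common cold"),
        },
    )
    print(walk(tree, ["A", "C"]))   # selecting options A then C reaches "pneumonia"
```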
Optionally, before step 405, the executing body may further display offline diagnosis prompt information in the display screen of the head-mounted display device in response to determining that the offline time length of the head-mounted display device meets a preset time condition. The preset time condition may be that the offline time length is greater than or equal to a preset time length; the specific setting of the preset time length is not limited here. Then, an inquiry tree corresponding to the offline diagnosis information input by the on-site doctor user may be determined. The offline diagnosis information may be a diagnosis and treatment result preliminarily determined by the on-site doctor user for the illness state of the patient. In practice, the executing body may acquire a prestored local inquiry tree corresponding to the offline diagnosis information. In this way, when the offline time of the head-mounted display device reaches a certain length, the inquiry tree can still be determined according to the preliminary diagnosis and treatment result of the on-site doctor user.
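A minimal sketch of this offline fallback is given below, assuming a hypothetical local store with a get_tree_for lookup and a display helper for prompting the on-site doctor; the threshold value is an arbitrary placeholder rather than a value fixed by the disclosure.

    OFFLINE_THRESHOLD_SECONDS = 30  # placeholder; the disclosure does not fix a concrete value

    def choose_inquiry_tree(offline_seconds, preliminary_info, local_store, display):
        # If the HMD has been offline for at least the preset time length, prompt the
        # on-site doctor and build the inquiry tree from his or her offline diagnosis input.
        if offline_seconds >= OFFLINE_THRESHOLD_SECONDS:
            display.show("Offline: please enter a preliminary offline diagnosis")
            offline_info = display.read_text_input()           # hypothetical input helper
            return local_store.get_tree_for(offline_info)      # prestored local inquiry tree
        # Otherwise use the remote doctor's preliminary diagnosis and treatment information.
        return local_store.get_tree_for(preliminary_info)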
Step 405, selecting a target inquiry node chain from the inquiry node chain set according to respective input information of the on-site doctor user wearing the head-mounted display device for the determined inquiry tree.
In some embodiments, the executing body may select the target inquiry node chain from the inquiry node chain set according to the respective input information of the on-site doctor user wearing the head-mounted display device for the determined inquiry tree. The input manner corresponding to each piece of input information may include, but is not limited to, at least one of the following: voice input, head control input, gesture input, and touch control input. The respective input information is matched with the inquiry option sequence of the target inquiry node chain. It should be noted that the pieces of input information are arranged in input order. In practice, the executing body may select, from the inquiry node chain set, an inquiry node chain whose corresponding inquiry option sequence matches the respective input information as the target inquiry node chain. Here, the manner in which an inquiry option sequence matches the respective input information may include, but is not limited to, at least one of the following: the inquiry option sequence is identical to the texts represented by the respective input information, or the similarity between the inquiry option sequence and the texts represented by the respective input information is greater than a similarity threshold. When the input manner of a piece of input information is voice input, the text represented by the input information may be the text corresponding to the voice command. When the input manner is head control input, gesture input, or touch control input, the text represented by the input information may be the selected text. When the input manner is gesture input or touch control input, the text represented by the input information may also be entered text.
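One possible realization of this matching is sketched below. Exact text comparison plus a difflib similarity ratio stand in for whatever matching an implementation actually uses, and the similarity threshold is an assumed value.

    from difflib import SequenceMatcher

    SIMILARITY_THRESHOLD = 0.8  # assumed value; the disclosure leaves the threshold open

    def option_matches(option_text, input_text):
        # An inquiry option matches a piece of input information if the texts are
        # identical or their similarity exceeds the threshold.
        if option_text == input_text:
            return True
        return SequenceMatcher(None, option_text, input_text).ratio() > SIMILARITY_THRESHOLD

    def select_target_chain(inquiry_node_chains, input_texts):
        # inquiry_node_chains: iterable of (inquiry_option_sequence, diagnosis) pairs;
        # input_texts: the on-site doctor's inputs, arranged in input order.
        for option_sequence, diagnosis in inquiry_node_chains:
            if len(option_sequence) == len(input_texts) and all(
                option_matches(o, t) for o, t in zip(option_sequence, input_texts)
            ):
                return option_sequence, diagnosis
        return None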
In some optional implementations of some embodiments, the executing body may select the target inquiry node chain from the inquiry node chain set through the following sub-steps (a code sketch of the complete traversal follows these sub-steps):
First, determining the inquiry node serving as the root node in the inquiry tree as a target inquiry node.
Second, for the target inquiry node, performing the following determining step:
A first sub-step of displaying, in the display screen of the head-mounted display device, the inquiry option set corresponding to the target inquiry node. The inquiry option set may be the set of all selectable inquiry options corresponding to the target inquiry node. Each inquiry option in the inquiry option set may include an option identifier and option content. In practice, the executing body may display, in the display screen of the head-mounted display device, the inquiry question corresponding to the target inquiry node together with the inquiry option set. The inquiry question may be the question to be answered by the on-site doctor user.
A second sub-step of, in response to detecting an input voice of the on-site doctor user, determining the input voice as input information. The input voice may be a voice for an option identifier or a voice for option content.
A third sub-step of selecting, from the inquiry option set, an inquiry option matched with the input information as a target inquiry option. In practice, the executing body may select, from the inquiry option set, the inquiry option containing the voice text corresponding to the input information as the target inquiry option.
A fourth sub-step of, in response to determining that the target inquiry node meets a preset ending intermediate node condition, determining the inquiry node chain corresponding to the target inquiry option in the inquiry node chain set as the target inquiry node chain. The preset ending intermediate node condition may be that the next inquiry node of the target inquiry node is a leaf node. The inquiry node chain corresponding to the target inquiry option may be the branch of the inquiry tree where the leaf node corresponding to the target inquiry option is located. It should be noted that the inquiry option sequence corresponding to the target inquiry node chain is the same as the sequence composed of the selected target inquiry options.
Optionally, the determining step may further include:
A fifth sub-step of, in response to determining that the target inquiry node does not meet the preset ending intermediate node condition, taking the next inquiry node corresponding to the target inquiry option in the inquiry tree as the target inquiry node and performing the determining step again. In this way, the method can automatically jump to the next inquiry node so that the on-site doctor user can continue to answer the inquiry questions.
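Putting the first through fifth sub-steps together, the loop below walks the inquiry tree from its root one answer at a time until the next node would be a leaf. It reuses the InquiryNode structure sketched under step 404; the display and speech-recognition helpers are assumptions, and matching an option by looking for its identifier in the recognized answer is a simplification of the matching described above.

    def walk_inquiry_tree(root, display, recognizer):
        # root: an InquiryNode as sketched under step 404; display and recognizer are
        # hypothetical helpers for the HMD screen and for speech-to-text, respectively.
        node = root
        selected_options = []
        while True:
            # First sub-step: display the inquiry question and its option identifiers.
            display.show(node.question, list(node.options.keys()))
            # Second sub-step: take the on-site doctor's input voice as input information.
            answer = recognizer.listen()
            # Third sub-step: pick the inquiry option whose identifier appears in the answer.
            option_id = next((oid for oid in node.options if oid in answer), None)
            if option_id is None:
                continue  # no matching option; ask again
            selected_options.append(option_id)
            child = node.options[option_id]
            # Fourth sub-step: if the next node is a leaf, the branch walked so far is the
            # target inquiry node chain and its leaf carries the diagnosis (step 406).
            if child.diagnosis is not None:
                return selected_options, child.diagnosis
            # Fifth sub-step: otherwise take the next inquiry node as the target and repeat.
            node = child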
Step 406, determining the diagnosis and treatment result included in the target inquiry node chain as a target diagnosis and treatment result.
In some embodiments, the executing body may determine the diagnosis and treatment result included in the target inquiry node chain as the target diagnosis and treatment result. In practice, the executing body may determine the diagnosis and treatment result corresponding to the last inquiry node (i.e., the leaf node) in the target inquiry node chain as the target diagnosis and treatment result.
Step 407, displaying virtual diagnosis and treatment information corresponding to the target diagnosis and treatment result in a display screen of the head-mounted display device.
In some embodiments, for the specific implementation of step 407 and the technical effects thereof, reference may be made to step 304 in the embodiments corresponding to fig. 3, which is not repeated here.
As can be seen from fig. 4, compared with the description of some embodiments corresponding to fig. 3, the flow 400 of the virtual diagnosis and treatment information display method for remote diagnosis and treatment in some embodiments corresponding to fig. 4 embodies the expanded steps of determining the target diagnosis and treatment result through the inquiry tree. Thus, the solution described in these embodiments allows the on-site doctor user to directly question or observe the patient based on the inquiry question corresponding to each inquiry node in the inquiry tree. The remote doctor user only needs to specify the preliminary diagnosis and treatment result and no longer needs to question or observe the patient indirectly through the on-site doctor user. This reduces the time spent on communication, simplifies the operation steps, and allows the selected inquiry options to be recorded automatically from the input information of the on-site doctor user for each inquiry node in the inquiry tree.
Referring now to fig. 5, a schematic diagram of a head-mounted display device 500 (e.g., the terminal device 103 of fig. 1) suitable for implementing some embodiments of the present disclosure is shown. The head-mounted display device shown in fig. 5 is only one example and should not impose any limitation on the functionality and scope of use of the embodiments of the present disclosure.
As shown in fig. 5, the head-mounted display device 500 may include a processing device 501 (e.g., a central processing unit, a graphics processing unit, etc.) that may perform various appropriate actions and processes according to a program stored in a Read Only Memory (ROM) 502 or a program loaded from a storage device 508 into a Random Access Memory (RAM) 503. In the RAM 503, various programs and data required for the operation of the head-mounted display device 500 are also stored. The processing device 501, the ROM 502, and the RAM 503 are connected to each other via a bus 504. An input/output (I/O) interface 505 is also connected to the bus 504.
In general, the following devices may be connected to the I/O interface 505: input devices 506 including, for example, a touch screen, a touchpad, a keyboard, a mouse, a camera, a microphone, an accelerometer, a gyroscope, etc.; output devices 507 including, for example, a display screen (e.g., a micro display screen), a speaker, a vibrator, etc.; storage devices 508 including, for example, a magnetic tape, a hard disk, etc.; and a communication device 509. The communication device 509 may allow the head-mounted display device 500 to communicate wirelessly or by wire with other devices to exchange data. While fig. 5 shows the head-mounted display device 500 having various devices, it should be understood that not all of the illustrated devices are required to be implemented or provided. More or fewer devices may alternatively be implemented or provided. Each block shown in fig. 5 may represent one device or a plurality of devices as needed.
In particular, according to some embodiments of the present disclosure, the processes described above with reference to flowcharts may be implemented as computer software programs. For example, some embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method shown in the flow chart. In such embodiments, the computer program may be downloaded and installed from a network via the communications device 509, or from the storage device 508, or from the ROM 502. The above-described functions defined in the methods of some embodiments of the present disclosure are performed when the computer program is executed by the processing device 501.
It should be noted that, the computer readable medium described in some embodiments of the present disclosure may be a computer readable signal medium or a computer readable storage medium, or any combination of the two. The computer readable storage medium can be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In some embodiments of the present disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In some embodiments of the present disclosure, however, the computer-readable signal medium may comprise a data signal propagated in baseband or as part of a carrier wave, with the computer-readable program code embodied therein. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, fiber optic cables, RF (radio frequency), and the like, or any suitable combination of the foregoing.
In some implementations, the clients and servers may communicate using any currently known or future developed network protocol, such as HTTP (HyperText Transfer Protocol), and may be interconnected with digital data communication (e.g., a communication network) in any form or medium. Examples of communication networks include a local area network ("LAN"), a wide area network ("WAN"), an internetwork (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any currently known or future developed networks.
The computer readable medium may be contained in the head-mounted display device described above, or it may exist alone without being assembled into the head-mounted display device. The computer readable medium carries one or more programs which, when executed by the head-mounted display device, cause the head-mounted display device to: in response to the head-mounted display device scanning two-dimensional code information on an outpatient service sheet, call the target terminal device of the remote doctor user corresponding to the two-dimensional code information; synchronize the real-time multimedia information acquired by the multimedia information acquisition device of the head-mounted display device to the target terminal device; in response to receiving the diagnosis and treatment result of the remote doctor user sent by the target terminal device, determine the diagnosis and treatment result of the remote doctor user as the target diagnosis and treatment result; and display virtual diagnosis and treatment information corresponding to the target diagnosis and treatment result in the display screen of the head-mounted display device.
Computer program code for carrying out operations of some embodiments of the present disclosure may be written in one or more programming languages, including object oriented programming languages such as Java, Smalltalk, or C++, as well as conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The functions described above herein may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: a Field Programmable Gate Array (FPGA), an Application Specific Integrated Circuit (ASIC), an Application Specific Standard Product (ASSP), a system on a chip (SOC), a Complex Programmable Logic Device (CPLD), and the like.
The foregoing description is only of the preferred embodiments of the present disclosure and an explanation of the technical principles employed. It will be appreciated by those skilled in the art that the scope of the invention in the embodiments of the present disclosure is not limited to technical solutions formed by the specific combination of the above technical features, but also encompasses other technical solutions formed by any combination of the above technical features or their equivalents without departing from the inventive concept, for example, technical solutions formed by replacing the above features with (but not limited to) features having similar functions disclosed in the embodiments of the present disclosure.

Claims (11)

1. A virtual diagnosis and treatment information display method for remote diagnosis and treatment, applied to a head-mounted display device, the method comprising:
in response to the head-mounted display device scanning two-dimensional code information on an outpatient service sheet, calling a target terminal device of a remote doctor user corresponding to the two-dimensional code information;
synchronizing real-time multimedia information acquired by a multimedia information acquisition device of the head-mounted display device to the target terminal device;
in response to receiving a diagnosis and treatment result of the remote doctor user sent by the target terminal device, determining the diagnosis and treatment result of the remote doctor user as a target diagnosis and treatment result; and
displaying virtual diagnosis and treatment information corresponding to the target diagnosis and treatment result in a display screen of the head-mounted display device.
2. The method of claim 1, wherein prior to displaying virtual diagnosis and treatment information corresponding to the target diagnosis and treatment result in a display screen of the head-mounted display device, the method further comprises:
determining an inquiry tree corresponding to preliminary diagnosis and treatment information in response to receiving the preliminary diagnosis and treatment information sent by the target terminal device, wherein the inquiry tree comprises an inquiry node chain set, each inquiry node chain in the inquiry node chain set takes a diagnosis and treatment result as a leaf node, and each inquiry node chain corresponds to an inquiry option sequence;
selecting a target inquiry node chain from the inquiry node chain set according to respective input information of an on-site doctor user wearing the head-mounted display device for the determined inquiry tree, wherein the respective input information is matched with the inquiry option sequence of the target inquiry node chain; and
determining the diagnosis and treatment result included in the target inquiry node chain as the target diagnosis and treatment result.
3. The method of claim 1, wherein the displaying virtual diagnosis and treatment information corresponding to the target diagnosis and treatment result in a display screen of the head-mounted display device comprises:
obtaining diagnosis and treatment scheme information corresponding to the target diagnosis and treatment result, wherein the diagnosis and treatment scheme information comprises an operation video, and the operation video corresponds to a play node sequence;
determining the operation video as virtual diagnosis and treatment information;
and displaying the virtual diagnosis and treatment information in the display screen according to the play node sequence.
4. The method of claim 1, wherein prior to said synchronizing real-time multimedia information acquired by the multimedia information acquisition device of the head-mounted display device to the target terminal device, the method further comprises:
in response to receiving connection information of the remote doctor user for a diagnosis and treatment conference, acquiring illness state information of a patient user corresponding to the diagnosis and treatment conference; and
sending the illness state information to the corresponding target terminal device.
5. The method of claim 4, wherein prior to said synchronizing the real-time multimedia information acquired by the multimedia information acquisition device of the head-mounted display device to the target terminal device, the method further comprises:
in response to the connection information of the remote doctor user for the diagnosis and treatment conference not being received within a preset time period after a request for the diagnosis and treatment conference is sent, executing a connection prompt operation corresponding to the remote doctor user.
6. The method of claim 2, wherein prior to the selecting a target inquiry node chain from the inquiry node chain set according to respective input information of the on-site doctor user wearing the head-mounted display device for the determined inquiry tree, the method further comprises:
in response to determining that the offline time length of the head-mounted display device meets a preset time condition, displaying offline diagnosis prompt information in a display screen of the head-mounted display device;
and determining an inquiry tree corresponding to offline diagnosis information according to the offline diagnosis information input by the on-site doctor user.
7. The method of claim 2, wherein the selecting a target inquiry node chain from the inquiry node chain set according to respective input information of the on-site doctor user wearing the head-mounted display device for the determined inquiry tree comprises:
determining an inquiry node serving as a root node in the inquiry tree as a target inquiry node;
for the target inquiry node, performing the following determining step:
displaying an inquiry option set corresponding to the target inquiry node in a display screen of the head-mounted display device;
in response to detecting an input voice of the on-site doctor user, determining the input voice as input information;
selecting an inquiry option matched with the input information from the inquiry option set as a target inquiry option; and
in response to determining that the target inquiry node meets a preset ending intermediate node condition, determining an inquiry node chain corresponding to the target inquiry option in the inquiry node chain set as the target inquiry node chain, wherein the inquiry option sequence corresponding to the target inquiry node chain is the same as a sequence consisting of the selected target inquiry options.
8. The method of claim 7, wherein the determining step further comprises:
and in response to determining that the target inquiry node does not meet the preset ending intermediate node condition, taking a next inquiry node corresponding to the target inquiry option in the inquiry tree as the target inquiry node, and performing the determining step again.
9. The method according to one of claims 1-8, wherein the method further comprises:
and in response to detecting a head-lowering operation of an on-site doctor user wearing the head-mounted display device, performing withdrawal processing on the virtual diagnosis and treatment information displayed in the display screen.
10. A head-mounted display device, comprising:
one or more processors;
a storage device having one or more programs stored thereon;
a multimedia information acquisition device for acquiring real-time multimedia information; and
a display screen for imaging in front of the eyes of a user,
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method of any one of claims 1-9.
11. A computer readable medium having stored thereon a computer program, wherein the computer program, when executed by a processor, implements the method of any of claims 1-9.
CN202211712387.4A 2022-12-29 2022-12-29 Information display method for remote diagnosis and treatment, head-mounted display device and medium Pending CN116259425A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211712387.4A CN116259425A (en) 2022-12-29 2022-12-29 Information display method for remote diagnosis and treatment, head-mounted display device and medium


Publications (1)

Publication Number Publication Date
CN116259425A true CN116259425A (en) 2023-06-13

Family

ID=86687182

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211712387.4A Pending CN116259425A (en) 2022-12-29 2022-12-29 Information display method for remote diagnosis and treatment, head-mounted display device and medium

Country Status (1)

Country Link
CN (1) CN116259425A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination