CN113194299B - Oral treatment real-time picture sharing method under intelligent medical scene - Google Patents

Oral treatment real-time picture sharing method under intelligent medical scene

Info

Publication number
CN113194299B
Authority
CN
China
Prior art keywords
patient
oral
intelligent glasses
oral cavity
voice
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN202110743846.4A
Other languages
Chinese (zh)
Other versions
CN113194299A (en)
Inventor
何则波
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Xiuyuan Cultural Creative Co Ltd
Original Assignee
Shenzhen Xiuyuan Cultural Creative Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Xiuyuan Cultural Creative Co Ltd filed Critical Shenzhen Xiuyuan Cultural Creative Co Ltd
Priority to CN202110743846.4A priority Critical patent/CN113194299B/en
Publication of CN113194299A publication Critical patent/CN113194299A/en
Application granted granted Critical
Publication of CN113194299B publication Critical patent/CN113194299B/en

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 7/00 Television systems
    • H04N 7/18 Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • H04N 7/183 Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast for receiving images from a single remote source
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61C DENTISTRY; APPARATUS OR METHODS FOR ORAL OR DENTAL HYGIENE
    • A61C 19/00 Dental auxiliary appliances
    • A61C 19/06 Implements for therapeutic treatment
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61C DENTISTRY; APPARATUS OR METHODS FOR ORAL OR DENTAL HYGIENE
    • A61C 5/00 Filling or capping teeth
    • A61C 5/40 Implements for surgical treatment of the roots or nerves of the teeth; Nerve needles; Methods or instruments for medication of the roots
    • G PHYSICS
    • G02 OPTICS
    • G02B OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B 27/00 Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B 27/01 Head-up displays
    • G02B 27/017 Head mounted
    • G02B 27/0172 Head mounted characterised by optical features
    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H 80/00 ICT specially adapted for facilitating communication between medical practitioners or patients, e.g. for collaborative diagnosis, therapy or health monitoring
    • G PHYSICS
    • G02 OPTICS
    • G02B OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B 27/00 Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B 27/01 Head-up displays
    • G02B 27/0101 Head-up displays characterised by optical features
    • G02B 2027/0138 Head-up displays characterised by optical features comprising image capture systems, e.g. camera

Landscapes

  • Health & Medical Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Epidemiology (AREA)
  • Public Health (AREA)
  • General Health & Medical Sciences (AREA)
  • Physics & Mathematics (AREA)
  • Dentistry (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Animal Behavior & Ethology (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Medical Informatics (AREA)
  • Veterinary Medicine (AREA)
  • Biomedical Technology (AREA)
  • Optics & Photonics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Pathology (AREA)
  • Primary Health Care (AREA)
  • Neurology (AREA)
  • Neurosurgery (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Surgery (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The embodiments of the application disclose a method for sharing a real-time oral treatment picture in an intelligent medical scene. In the method, a doctor wears first smart glasses and a patient wears second smart glasses. Through a first shooting module arranged on the front side of their frame, the first smart glasses capture, from the doctor's viewing angle, a real-time picture of the oral treatment the doctor is performing on the patient and transmit that picture to the second smart glasses. The second smart glasses receive the real-time picture transmitted by the first smart glasses and use harmless laser light to project it onto the patient's retina. The embodiments of the application thus let the patient watch the real-time picture while the doctor performs oral treatment (such as root canal treatment), so the patient can communicate with the doctor in time whenever the ongoing treatment is unsatisfactory, which helps improve the patient's satisfaction with the treatment result.

Description

Oral treatment real-time picture sharing method under intelligent medical scene
Technical Field
The application relates to the field of intelligent medical treatment, in particular to an oral treatment real-time picture sharing method under an intelligent medical scene.
Background
While a doctor performs oral treatment (such as root canal treatment) on a patient, the patient cannot see a real-time picture of the treatment and usually can only inspect the result in a mirror after the treatment is finished. Even if the patient is dissatisfied with the result (for example, the pulp chamber was opened too widely during root canal treatment), it is by then too late to do anything about it.
Disclosure of Invention
In view of this, the embodiment of the present application discloses an oral treatment real-time image sharing method in an intelligent medical scene.
The embodiments of the application disclose a method for sharing a real-time oral treatment picture in an intelligent medical scene, in which a doctor wears first smart glasses and a patient wears second smart glasses. The method includes:
the first smart glasses capture, from the doctor's viewing angle and through a first shooting module arranged on the front side of their frame, a real-time picture of the oral treatment the doctor is performing on the patient;
the first smart glasses transmit the oral treatment real-time picture to the second smart glasses;
the second smart glasses acquire the oral treatment real-time picture transmitted by the first smart glasses;
the second smart glasses project the oral treatment real-time picture onto the patient's retina using harmless laser light.
As an optional implementation, in the embodiments of the application, the patient lies on an oral cavity comprehensive treatment machine on which a second shooting module with an adjustable shooting direction is arranged, and the first smart glasses are wirelessly connected to the oral cavity comprehensive treatment machine. Before the first smart glasses capture, from the doctor's viewing angle and through the shooting module arranged on the front side of their frame, the real-time picture of the oral treatment the doctor is performing on the patient, the method further includes:
the oral cavity comprehensive treatment machine uses the second shooting module to photograph the unique glasses identifier arranged on the front side of the frame of the second smart glasses, looks up, according to that identifier, the target verification information it randomly allocated to the second smart glasses the last time the second smart glasses were wirelessly connected to it, and transmits the target verification information to the first smart glasses;
the first smart glasses receive the target verification information and broadcast a wireless connection request containing the target verification information and the instantaneous position of the first smart glasses;
the second smart glasses receive the wireless connection request, determine a target distance threshold upon recognizing that they have already stored the target verification information contained in the request, judge whether the distance between their own instantaneous position and the instantaneous position of the first smart glasses is less than or equal to the target distance threshold, and if so, establish a wireless connection with the first smart glasses;
wherein the second smart glasses determining a target distance threshold upon recognizing that they have already stored the target verification information contained in the wireless connection request includes:
when the second smart glasses recognize that they have stored the target verification information contained in the wireless connection request, they judge whether the historical number of wireless connections established between the second smart glasses and the first smart glasses within a set historical period exceeds a specified number; if not, a preset distance threshold is used as the target distance threshold; if so, the difference between the historical number and the specified number is calculated;
the second smart glasses obtain a distance increment that is directly proportional to that difference, and compute the sum of the preset distance threshold and the distance increment as the target distance threshold.
As an optional implementation, in the embodiments of the application, the patient is deaf-mute, and the method further includes:
if, while the doctor is performing oral treatment on the patient, the oral cavity comprehensive treatment machine captures with the second shooting module a first sign language action made by the patient, it translates the first sign language action into a first voice through artificial intelligence and outputs the first voice to the first smart glasses;
the first smart glasses receive the first voice and control their integrated speaker module to play the first voice for the doctor to listen to;
the first smart glasses use their integrated sound pickup module to pick up a second voice uttered by the doctor in response to the first voice, and translate the second voice into a second sign language action through artificial intelligence;
the first smart glasses suspend transmitting the oral treatment real-time picture to the second smart glasses and instead transmit sign language video pictures of a virtual character configured on the first smart glasses performing the second sign language action, so that the second smart glasses project the sign language video pictures onto the patient's retina with harmless laser light;
after all of the sign language video pictures have been transmitted to the second smart glasses, the first smart glasses resume transmitting the oral treatment real-time picture to the second smart glasses.
As another optional implementation, in the embodiments of the application, the patient is deaf-mute, and the method further includes:
if, while the doctor is performing oral treatment on the patient, the oral cavity comprehensive treatment machine captures with the second shooting module a first sign language action made by the patient, it translates the first sign language action into a first voice through artificial intelligence and outputs the first voice to the first smart glasses;
the first smart glasses receive the first voice and control their integrated speaker module to play the first voice for the doctor to listen to;
the second smart glasses use their integrated sound pickup module to pick up a second voice uttered by the doctor in response to the first voice, and translate the second voice into a second sign language action through artificial intelligence;
the second smart glasses pause projecting the oral treatment real-time picture onto the patient's retina with harmless laser light, and instead project, with harmless laser light, sign language video pictures of a virtual character configured on the second smart glasses performing the second sign language action onto the patient's retina;
after projecting all of the sign language video pictures of the virtual character onto the patient's retina with harmless laser light, the second smart glasses resume projecting the oral treatment real-time picture onto the patient's retina with harmless laser light.
As a further optional implementation, in the embodiments of the application, the patient is deaf-mute, and the method further includes:
if, while the doctor is performing oral treatment on the patient, the oral cavity comprehensive treatment machine captures with the second shooting module a first sign language action made by the patient, it translates the first sign language action into a first voice through artificial intelligence and outputs the first voice to the first smart glasses;
the first smart glasses receive the first voice and control their integrated speaker module to play the first voice for the doctor to listen to;
the second smart glasses use their integrated sound pickup module to pick up a second voice uttered by the doctor in response to the first voice, and translate the second voice into a second sign language action through artificial intelligence;
the second smart glasses project, onto the patient's retina, the sign language video pictures of a virtual character configured on the second smart glasses performing the second sign language action as the foreground of the oral treatment real-time picture; the partial area of the oral treatment real-time picture that is occluded by the sign language video pictures of the virtual character is not visible to the patient.
As an optional implementation manner, in an embodiment of the present application, the method further includes:
the oral cavity comprehensive treatment machine converts the first voice into a corresponding first text and stores the first text into an oral cavity treatment communication record corresponding to the identity information of the patient;
the first smart glasses transmit the second voice to the oral cavity comprehensive treatment machine;
the oral cavity comprehensive treatment machine receives the second voice, converts the second voice into a corresponding second text and stores the second text into an oral cavity treatment communication record corresponding to the identity information of the patient;
the oral cavity comprehensive treatment machine establishes a text dialogue relation between the first text and the second text in the oral cavity treatment communication record.
As another optional implementation manner, in an embodiment of the present application, the method further includes:
the oral cavity comprehensive treatment machine converts the first voice into a corresponding first text and stores the first text into an oral cavity treatment communication record corresponding to the identity information of the patient;
the second smart glasses transmit the second voice to the first smart glasses;
the first intelligent glasses receive the second voice and transmit the second voice to the oral cavity comprehensive treatment machine;
the oral cavity comprehensive treatment machine receives the second voice, converts the second voice into a corresponding second text and stores the second text into an oral cavity treatment communication record corresponding to the identity information of the patient;
the oral cavity comprehensive treatment machine establishes a text dialogue relation between the first text and the second text in the oral cavity treatment communication record.
As an optional implementation, in the embodiments of the application, the patient is a student of a school for deaf-mutes, the school is equipped with a service device, and the oral cavity comprehensive treatment machine is communicatively connected to the service device; the method further includes:
the oral cavity comprehensive treatment machine determines, from the service device and according to the identity information of the patient, the guardian device bound to the identity information of the patient;
the oral cavity comprehensive treatment machine sends to that guardian device an inquiry message containing the identity information of the patient, the inquiry message asking whether the oral treatment communication record corresponding to the identity information of the patient needs to be viewed;
the oral cavity comprehensive treatment machine judges whether, within a specified period, it receives a reply sent by the guardian device indicating that the oral treatment communication record corresponding to the identity information of the patient needs to be viewed; if the reply is received, it sends to the guardian device the text dialogue relations in that record that have not yet been sent to the guardian device.
As an optional implementation, in the embodiments of the application, the oral cavity comprehensive treatment machine is located in an oral patrol vehicle that enters the school for deaf-mutes on patrol, and the service device is communicatively connected to a vehicle monitoring unit and a radio frequency unit located outside the school gate. Before the oral cavity comprehensive treatment machine determines, from the service device and according to the identity information of the patient, the guardian device bound to the identity information of the patient, the method further includes:
when the service device identifies the license plate information of the oral patrol vehicle through the vehicle monitoring unit and recognizes from the vehicle's driving direction that it is about to enter the school for deaf-mutes, the service device controls the radio frequency unit to transmit to the oral patrol vehicle a first radio frequency signal containing the identity of the service device and a designated patrol position, and stores the correspondence between the license plate information of the oral patrol vehicle and the identity of the service device;
after the on-board unit of the oral patrol vehicle receives the first radio frequency signal, it stores the identity of the service device in the vehicle's trip computer; when the oral patrol vehicle reaches a target position inside the school for deaf-mutes, it transmits, in response to an input service device access instruction, a second radio frequency signal containing the target position, the license plate information of the oral patrol vehicle and the identity of the service device;
after the service device receives the second radio frequency signal, if it recognizes that the target position is the same as the designated patrol position and that it has stored the correspondence between the license plate information of the oral patrol vehicle and the identity of the service device, the service device establishes a communication connection with the oral patrol vehicle;
accordingly, the oral cavity comprehensive treatment machine determining, from the service device and according to the identity information of the patient, the guardian device bound to the identity information of the patient includes:
the oral cavity comprehensive treatment machine determining that guardian device from the service device, according to the identity information of the patient, through the communication connection between the service device and the oral patrol vehicle.
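To make the above exchange concrete, the following is a minimal Python sketch of how the service device and the patrol vehicle's on-board unit might carry out the two radio frequency exchanges. All class, field and function names (ServiceDevice, OnboardUnit, and so on) are illustrative assumptions, not part of the patent.

```python
from dataclasses import dataclass

@dataclass
class FirstRfSignal:          # service device -> patrol vehicle
    service_device_id: str
    designated_patrol_position: str

@dataclass
class SecondRfSignal:         # patrol vehicle -> service device
    target_position: str
    license_plate: str
    service_device_id: str

class ServiceDevice:
    def __init__(self, device_id: str, designated_patrol_position: str):
        self.device_id = device_id
        self.designated_patrol_position = designated_patrol_position
        self.plate_to_device = {}            # stored correspondences
        self.connected_vehicles = set()

    def on_vehicle_detected(self, license_plate: str, heading_into_school: bool):
        """Triggered by the vehicle monitoring unit outside the school gate."""
        if not heading_into_school:
            return None
        # Remember which vehicle this service device handed its identity to.
        self.plate_to_device[license_plate] = self.device_id
        return FirstRfSignal(self.device_id, self.designated_patrol_position)

    def on_second_rf_signal(self, sig: SecondRfSignal) -> bool:
        """Establish the communication connection only if both checks pass."""
        position_ok = sig.target_position == self.designated_patrol_position
        stored_ok = self.plate_to_device.get(sig.license_plate) == sig.service_device_id
        if position_ok and stored_ok:
            self.connected_vehicles.add(sig.license_plate)
            return True
        return False

class OnboardUnit:
    def __init__(self, license_plate: str):
        self.license_plate = license_plate
        self.trip_computer = {}              # stands in for the driving computer

    def on_first_rf_signal(self, sig: FirstRfSignal):
        self.trip_computer["service_device_id"] = sig.service_device_id

    def request_access(self, current_position: str) -> SecondRfSignal:
        """Issued when the driver inputs the service device access instruction."""
        return SecondRfSignal(current_position, self.license_plate,
                              self.trip_computer["service_device_id"])

# Usage example
service = ServiceDevice("svc-001", designated_patrol_position="playground-east")
vehicle = OnboardUnit("B-12345")
first = service.on_vehicle_detected(vehicle.license_plate, heading_into_school=True)
vehicle.on_first_rf_signal(first)
second = vehicle.request_access(current_position="playground-east")
print(service.on_second_rf_signal(second))   # True: connection established
```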
As an optional implementation, in the embodiments of the application, after the oral cavity comprehensive treatment machine judges that it has received, within the specified period, the reply sent by the guardian device bound to the identity information of the patient indicating that the oral treatment communication record corresponding to the identity information of the patient needs to be viewed, and before it sends to the guardian device the text dialogue relations in that record that have not yet been sent to the guardian device, the method further includes:
the oral cavity comprehensive treatment machine obtains, from the service device, a first check-in location track reported by the guardian device bound to the identity information of the patient; the first check-in location track contains a first specified number of check-in locations, any two of which are different from each other;
the oral cavity comprehensive treatment machine inserts, between every two adjacent check-in locations in the first check-in location track, a number of non-check-in locations proportional to the straight-line distance between those two locations, forming a second check-in location track;
the oral cavity comprehensive treatment machine sends the second check-in location track to the guardian device bound to the identity information of the patient;
the oral cavity comprehensive treatment machine obtains a third check-in location track sent by the guardian device bound to the identity information of the patient, the third check-in location track consisting of a second specified number of locations selected from the second check-in location track;
when the oral cavity comprehensive treatment machine verifies that the third check-in location track is the same as the first check-in location track, that the second specified number equals the first specified number, and that the set formed by the second specified number of locations is the same as the set formed by the first specified number of check-in locations, it sends to the guardian device the text dialogue relations in the oral treatment communication record corresponding to the identity information of the patient that have not yet been sent to the guardian device.
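The check carried out above amounts to challenging the guardian device with a padded track and accepting only the exact original track in return. The sketch below illustrates the idea in Python; the coordinate representation, the density constant and all names are assumptions made for the example.

```python
import math

def build_second_track(first_track, density=0.1):
    """Insert, between every two adjacent check-in locations, a number of
    non-check-in locations proportional to their straight-line distance.
    `density` (decoy locations per unit distance) is an assumed tuning constant."""
    second = []
    for (x1, y1), (x2, y2) in zip(first_track, first_track[1:]):
        second.append((x1, y1))
        distance = math.hypot(x2 - x1, y2 - y1)
        n_fake = int(distance * density)     # proportional to the distance
        for i in range(1, n_fake + 1):
            t = i / (n_fake + 1)
            second.append((x1 + t * (x2 - x1), y1 + t * (y2 - y1)))  # decoy point
    second.append(first_track[-1])
    return second

def verify_third_track(first_track, third_track):
    """The guardian device must pick, from the second track, exactly the
    original check-in locations, in the original order."""
    same_sequence = third_track == first_track
    same_count = len(third_track) == len(first_track)
    same_set = set(third_track) == set(first_track)
    return same_sequence and same_count and same_set

# Usage example
first = [(0.0, 0.0), (30.0, 40.0), (30.0, 100.0)]   # first specified number = 3
second = build_second_track(first)
# A guardian who really walked the track can recognise the genuine locations:
print(verify_third_track(first, first))              # True
print(verify_third_track(first, second[:3]))         # False: decoy points were not skipped
```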
Compared with the prior art, the embodiments of the application have the following beneficial effects:
In the embodiments of the application, in an intelligent medical scene, a doctor wears first smart glasses and a patient wears second smart glasses. Through a first shooting module arranged on the front side of their frame, the first smart glasses can capture, from the doctor's viewing angle, a real-time picture of the oral treatment the doctor is performing on the patient and transmit that picture to the second smart glasses; the second smart glasses acquire the real-time picture transmitted by the first smart glasses and project it onto the patient's retina with harmless laser light. By implementing the embodiments of the application, the patient can therefore watch the real-time picture while the doctor performs oral treatment (such as root canal treatment), and can communicate with the doctor in time whenever the ongoing treatment is unsatisfactory, which helps improve the patient's satisfaction with the treatment result.
Drawings
To illustrate the technical solutions in the embodiments of the present application more clearly, the drawings needed for describing the embodiments are briefly introduced below. The drawings described below show only some embodiments of the present application; other drawings can be obtained from them by those skilled in the art without creative effort.
Fig. 1 is a schematic flowchart of a first embodiment of an oral treatment real-time image sharing method in an intelligent medical scene according to an embodiment of the present application;
fig. 2 is a schematic flowchart of a second embodiment of an oral treatment real-time image sharing method in an intelligent medical scene according to an embodiment of the present application;
fig. 3 is a schematic flowchart of a third embodiment of an oral treatment real-time image sharing method in an intelligent medical scene according to an embodiment of the present application;
fig. 4 is a schematic flowchart of a fourth embodiment of an oral treatment real-time image sharing method in an intelligent medical scene disclosed in the embodiment of the present application;
fig. 5 is a schematic flowchart of a fifth embodiment of an oral treatment real-time image sharing method in an intelligent medical scene according to an embodiment of the present application;
Fig. 6 is a schematic diagram of a first check-in location track disclosed in an embodiment of the present application;
fig. 7 is a schematic diagram of a second check-in location track disclosed in an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
It should be noted that the terms "comprises" and "comprising," and any variations thereof, in the embodiments of the present application, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
The embodiments of the application disclose a method for sharing a real-time oral treatment picture in an intelligent medical scene. The method lets the patient watch the real-time picture while the doctor performs oral treatment (such as root canal treatment) on the patient, so that the patient can communicate with the doctor in time whenever the ongoing treatment is unsatisfactory, which helps improve the patient's satisfaction with the treatment result. The details are described below with reference to the accompanying drawings.
Referring to fig. 1, fig. 1 is a schematic flowchart of a first embodiment of the oral treatment real-time picture sharing method in an intelligent medical scene disclosed in the present application. In the method depicted in fig. 1, first smart glasses are worn by a doctor and second smart glasses are worn by a patient, and the method includes the following steps:
101. The first smart glasses capture, from the doctor's viewing angle and through the first shooting module arranged on the front side of their frame, a real-time picture of the oral treatment the doctor is performing on the patient.
For example, the first shooting module may be arranged at the middle of the front side of the frame of the first smart glasses. When the doctor wears the first smart glasses while performing oral treatment on the patient, the doctor watches the treatment through the lenses of the first smart glasses, and the first shooting module records it from the doctor's viewing angle; in other words, the real-time picture the doctor sees through the lenses and the real-time picture the first shooting module captures are the same.
It is understood that, in the embodiments of the present application, the oral treatment may include orthodontic treatment, dental treatment, mouth repair treatment, and the like, and the embodiments of the present application are not limited thereto.
102. The first smart glasses transmit the oral treatment real-time picture to the second smart glasses.
Illustratively, the first smart glasses transmit the oral treatment real-time picture to the second smart glasses via a wireless connection.
103. And the second intelligent glasses acquire the oral treatment real-time picture transmitted by the first intelligent glasses.
104. The second smart glasses project the oral treatment real-time picture onto the patient's retina using harmless laser light.
In some embodiments, a laser projector may be embedded in the left temple of the second smart glasses. The projector projects RGB laser light onto two small lenses mounted at an angle on the second smart glasses, which relay the light onto the patient's retina, so that the patient can view the real-time picture while the doctor performs the oral treatment (e.g., root canal treatment).
Therefore, by implementing the method described in fig. 1, the patient can watch the real-time picture while the doctor performs oral treatment (such as root canal treatment), and can communicate with the doctor in time whenever the ongoing treatment is unsatisfactory, which helps improve the patient's satisfaction with the treatment result.
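As a rough illustration of steps 101 to 104, the following Python sketch models the two pairs of glasses as objects exchanging frames over an already established link. The class and method names (FirstSmartGlasses, project_to_retina, and so on) are assumptions made for the example, and the capture and retinal-projection calls are stubs standing in for the shooting module and the laser projection hardware.

```python
import queue
import threading
import time

class FirstSmartGlasses:
    """Doctor's glasses: capture from the doctor's viewing angle and transmit."""
    def __init__(self, link: "queue.Queue"):
        self.link = link          # stands in for the wireless connection

    def capture_frame(self, frame_id: int) -> dict:
        # Stub for the first shooting module on the front of the frame (step 101).
        return {"id": frame_id, "view": "doctor", "content": f"oral treatment frame {frame_id}"}

    def run(self, n_frames: int):
        for i in range(n_frames):
            self.link.put(self.capture_frame(i))   # step 102: transmit to second glasses
            time.sleep(0.03)                       # roughly 30 frames per second
        self.link.put(None)                        # end of stream

class SecondSmartGlasses:
    """Patient's glasses: receive and project onto the retina."""
    def __init__(self, link: "queue.Queue"):
        self.link = link

    def project_to_retina(self, frame: dict):
        # Stub for the harmless-laser retinal projection (step 104).
        print(f"projecting {frame['content']}")

    def run(self):
        while (frame := self.link.get()) is not None:   # step 103: acquire the picture
            self.project_to_retina(frame)

# Usage example
link = queue.Queue()
doctor_glasses, patient_glasses = FirstSmartGlasses(link), SecondSmartGlasses(link)
receiver = threading.Thread(target=patient_glasses.run)
receiver.start()
doctor_glasses.run(n_frames=5)
receiver.join()
```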
Referring to fig. 2, fig. 2 is a schematic flowchart of a second embodiment of the oral treatment real-time picture sharing method in an intelligent medical scene disclosed in the present application. In the method depicted in fig. 2, first smart glasses are worn by a doctor, second smart glasses are worn by a patient, the patient lies on an oral cavity comprehensive treatment machine on which a second shooting module with an adjustable shooting direction is arranged, and the first smart glasses are wirelessly connected to the oral cavity comprehensive treatment machine. As shown in fig. 2, the method includes the following steps:
201. The oral cavity comprehensive treatment machine uses the second shooting module to photograph the unique glasses identifier arranged on the front side of the frame of the second smart glasses, looks up, according to that identifier, the target verification information it randomly allocated to the second smart glasses the last time the second smart glasses were wirelessly connected to it, and transmits the target verification information to the first smart glasses.
Illustratively, the unique glasses identifier may be a two-dimensional code or a special character string unique to the glasses; the embodiments of the present application are not limited in this respect.
In this embodiment, the oral cavity comprehensive treatment machine may randomly allocate target verification information (for example, a character string) to each pair of smart glasses that wirelessly connects to it (e.g., the second smart glasses). While those glasses remain wirelessly connected to the machine, the target verification information serves as the legal credential that other smart glasses (e.g., the first smart glasses) or other devices must present when they in turn want to establish a wireless connection with those glasses.
202. The first smart glasses receive the target verification information and broadcast a wireless connection request containing the target verification information and the instantaneous position of the first smart glasses.
203. The second smart glasses receive the wireless connection request, determine a target distance threshold upon recognizing that they have already stored the target verification information contained in the request, judge whether the distance between their own instantaneous position and the instantaneous position of the first smart glasses is less than or equal to the target distance threshold, and if so, establish a wireless connection with the first smart glasses.
In some embodiments, when the second smart glasses determine that the distance between their instantaneous position and the instantaneous position of the first smart glasses is greater than the target distance threshold, they may refuse to establish a wireless connection with the first smart glasses.
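A minimal Python sketch of this allocation-and-verification handshake (steps 201 to 203), under assumed names and data structures, might look as follows; the distance threshold is taken here as a fixed input, and its adaptive determination is sketched separately below.

```python
import secrets
from dataclasses import dataclass

@dataclass
class ConnectionRequest:
    verification_info: str
    position: tuple          # instantaneous position of the requesting glasses

class TreatmentMachine:
    """Allocates target verification information when glasses connect to it."""
    def __init__(self):
        self.allocations = {}          # glasses identifier -> verification info

    def connect_glasses(self, glasses_id: str) -> str:
        info = secrets.token_hex(8)    # randomly allocated character string
        self.allocations[glasses_id] = info
        return info

    def lookup_by_identifier(self, glasses_id: str) -> str:
        """Called after photographing the unique identifier on the frame (step 201)."""
        return self.allocations[glasses_id]

class SecondGlasses:
    def __init__(self, glasses_id: str, position: tuple):
        self.glasses_id = glasses_id
        self.position = position
        self.stored_info = None

    def connect_to_machine(self, machine: TreatmentMachine):
        self.stored_info = machine.connect_glasses(self.glasses_id)

    def handle_request(self, req: ConnectionRequest, distance_threshold: float) -> bool:
        """Step 203: accept only a known credential from glasses that are close enough."""
        if req.verification_info != self.stored_info:
            return False                           # unknown credential: reject
        dx = req.position[0] - self.position[0]
        dy = req.position[1] - self.position[1]
        return (dx * dx + dy * dy) ** 0.5 <= distance_threshold

# Usage example
machine = TreatmentMachine()
patient_glasses = SecondGlasses("QR-0042", position=(0.0, 0.0))
patient_glasses.connect_to_machine(machine)                     # last wireless connection
info = machine.lookup_by_identifier("QR-0042")                  # step 201
request = ConnectionRequest(info, position=(0.3, 0.4))          # step 202 (doctor's glasses)
print(patient_glasses.handle_request(request, distance_threshold=1.0))   # True
```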
In some embodiments, the second smart glasses determining a target distance threshold upon recognizing that they have already stored the target verification information contained in the wireless connection request comprises:
when the second smart glasses recognize that they have stored the target verification information contained in the wireless connection request, they judge whether the historical number of wireless connections established between the second smart glasses and the first smart glasses within a set historical period exceeds a specified number; if not, a preset distance threshold is used as the target distance threshold; if so, the difference between the historical number and the specified number is calculated;
the second smart glasses obtain a distance increment that is directly proportional to that difference, and compute the sum of the preset distance threshold and the distance increment as the target distance threshold.
As can be seen, with this embodiment, when the historical number of wireless connections established between the second smart glasses and the first smart glasses within the set historical period exceeds the specified number, the larger the difference between the historical number and the specified number, the larger the distance over which the two pairs of glasses are still allowed to establish a wireless connection, which increases the success rate of establishing the wireless connection between the second smart glasses and the first smart glasses.
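The adaptive threshold described above can be sketched as a single Python function; the preset threshold and the per-connection increment below are assumed tuning values, not figures taken from the patent.

```python
def target_distance_threshold(history_count: int,
                              specified_count: int,
                              preset_threshold: float = 1.0,
                              increment_per_extra_connection: float = 0.2) -> float:
    """Return the target distance threshold used in step 203.

    If the historical number of connections within the set period does not
    exceed the specified number, the preset threshold is used; otherwise a
    distance increment proportional to the excess is added."""
    if history_count <= specified_count:
        return preset_threshold
    difference = history_count - specified_count
    distance_increment = increment_per_extra_connection * difference
    return preset_threshold + distance_increment

# Usage example: 8 historical connections against a specified number of 5
print(target_distance_threshold(8, 5))   # 1.0 + 0.2 * 3 = 1.6
print(target_distance_threshold(3, 5))   # preset threshold 1.0
```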
In some embodiments, the target verification information randomly allocated to the second smart glasses by the oral cavity comprehensive treatment machine the last time the second smart glasses were wirelessly connected to it may further carry a validity period. Accordingly, in step 203, after the second smart glasses judge that the distance between their instantaneous position and the instantaneous position of the first smart glasses is less than or equal to the target distance threshold, and before the wireless connection with the first smart glasses is established, the following steps may further be performed:
when the historical number of wireless connections established between the second smart glasses and the first smart glasses within the set historical period does not exceed the specified number, the second smart glasses judge whether the current time falls within the validity period; if so, the step of establishing the wireless connection with the first smart glasses is performed; if not, the wireless connection with the first smart glasses is refused;
or, when the historical number of wireless connections established between the second smart glasses and the first smart glasses within the set historical period exceeds the specified number, the second smart glasses obtain a duration increment that is directly proportional to the difference between the historical number and the specified number, extend the validity period by the duration increment to obtain a target validity period, and judge whether the current time falls within the target validity period; if so, the step of establishing the wireless connection with the first smart glasses is performed; if not, the wireless connection with the first smart glasses is refused.
Therefore, this embodiment can further increase the success rate of establishing the wireless connection between the second smart glasses and the first smart glasses.
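The validity period check can be sketched in the same spirit; the per-connection extension used below is an assumed tuning value.

```python
from datetime import datetime, timedelta

def connection_allowed(now: datetime,
                       valid_from: datetime,
                       valid_until: datetime,
                       history_count: int,
                       specified_count: int,
                       extension_per_extra_connection: timedelta = timedelta(minutes=5)) -> bool:
    """Check the validity period carried by the target verification information.

    When the historical connection count exceeds the specified count, the
    validity period is extended by a duration increment proportional to the
    excess before the current time is checked against it."""
    if history_count <= specified_count:
        return valid_from <= now <= valid_until
    difference = history_count - specified_count
    target_valid_until = valid_until + extension_per_extra_connection * difference
    return valid_from <= now <= target_valid_until

# Usage example
start = datetime(2021, 7, 1, 9, 0)
end = datetime(2021, 7, 1, 10, 0)
print(connection_allowed(datetime(2021, 7, 1, 10, 10), start, end, 8, 5))  # True (period extended)
print(connection_allowed(datetime(2021, 7, 1, 10, 10), start, end, 4, 5))  # False (outside period)
```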
204. The first smart glasses capture, from the doctor's viewing angle and through the first shooting module arranged on the front side of their frame, a real-time picture of the oral treatment the doctor is performing on the patient.
205. The first smart glasses transmit the oral treatment real-time picture to the second smart glasses.
That is, the first smart glasses transmit the oral treatment real-time picture to the second smart glasses over the wireless connection established between them.
206. And the second intelligent glasses acquire the oral treatment real-time picture transmitted by the first intelligent glasses.
207. The second smart glasses project the oral treatment real-time picture onto the patient's retina using harmless laser light.
Therefore, by implementing the method described in fig. 2, the patient can watch the real-time picture while the doctor performs oral treatment (such as root canal treatment), and can communicate with the doctor in time whenever the ongoing treatment is unsatisfactory, which helps improve the patient's satisfaction with the treatment result.
In addition, by implementing the method described in fig. 2, the success rate of establishing the wireless connection between the second smart glasses and the first smart glasses can be further improved.
Referring to fig. 3, fig. 3 is a schematic flowchart of a third embodiment of the oral treatment real-time picture sharing method in an intelligent medical scene disclosed in the present application. In the method depicted in fig. 3, first smart glasses are worn by a doctor, second smart glasses are worn by a patient, the patient is deaf-mute and lies on an oral cavity comprehensive treatment machine on which a second shooting module with an adjustable shooting direction is arranged, and the first smart glasses are wirelessly connected to the oral cavity comprehensive treatment machine. As shown in fig. 3, the method includes the following steps:
steps 301 to 307 are the same as steps 201 to 207 in the previous embodiment, and the embodiment of the present application is not repeated here.
308. If, while the doctor is performing oral treatment on the patient, the oral cavity comprehensive treatment machine captures with the second shooting module a first sign language action made by the patient, it translates the first sign language action into a first voice through artificial intelligence and outputs the first voice to the first smart glasses.
309. The first smart glasses receive the first voice and control their integrated speaker module to play the first voice for the doctor to listen to.
310. The first smart glasses use their integrated sound pickup module to pick up a second voice uttered by the doctor in response to the first voice, and translate the second voice into a second sign language action through artificial intelligence.
311. The first smart glasses suspend transmitting the oral treatment real-time picture to the second smart glasses and instead transmit sign language video pictures of a virtual character configured on the first smart glasses performing the second sign language action, so that the second smart glasses project the sign language video pictures onto the patient's retina with harmless laser light; after all of the sign language video pictures have been transmitted to the second smart glasses, the first smart glasses resume transmitting the oral treatment real-time picture to the second smart glasses.
By implementing steps 308 to 311, a deaf-mute patient can receive oral treatment in an intelligent medical scene without the help of a sign language interpreter, which helps improve the efficiency with which the doctor treats deaf-mute patients. A minimal sketch of this interaction is given below.
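The following Python sketch illustrates the pause-project-resume behaviour of steps 308 to 311 on the doctor's side. The sign language translation is a stub, and all class and method names are assumptions made for the example.

```python
class FirstGlassesSession:
    """Doctor-side handling of the sign language exchange in steps 308 to 311."""
    def __init__(self, second_glasses):
        self.second_glasses = second_glasses
        self.streaming_paused = False

    def on_first_voice(self, first_voice: str):
        print(f"speaker module plays: {first_voice}")          # step 309

    def on_doctor_reply(self, second_voice: str):
        sign_frames = self.translate_to_sign_language(second_voice)   # step 310 (stub)
        self.streaming_paused = True                           # step 311: pause live frames
        for frame in sign_frames:
            self.second_glasses.project(frame)                 # virtual character frames
        self.streaming_paused = False                          # resume live frames

    @staticmethod
    def translate_to_sign_language(voice: str):
        # Stub for the AI voice-to-sign translation; returns labels of video frames.
        return [f"virtual character signs: '{voice}' (frame {i})" for i in range(3)]

class SecondGlassesDisplay:
    def project(self, frame: str):
        print(f"retina projection: {frame}")

# Usage example
session = FirstGlassesSession(SecondGlassesDisplay())
session.on_first_voice("patient signs: the drilling hurts a little")   # from the machine's AI
session.on_doctor_reply("I will use a gentler setting, please bear with me")
```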
As an alternative embodiment, the oral treatment real-time screen sharing method described in fig. 3 further includes:
the oral cavity comprehensive treatment machine converts the first voice into a corresponding first text and stores the first text into an oral cavity treatment communication record corresponding to the identity information of the patient;
the first smart glasses transmit the second voice to the oral cavity comprehensive treatment machine;
the oral cavity comprehensive treatment machine receives the second voice, converts the second voice into a corresponding second text and stores the second text into an oral cavity treatment communication record corresponding to the identity information of the patient;
the oral cavity comprehensive treatment machine establishes a text dialogue relation between the first text and the second text in the oral cavity treatment communication record.
In this embodiment, before the doctor performs oral treatment on the patient, the oral cavity comprehensive treatment machine may read the identity information of the patient from the patient's identity card held close to the machine and check whether an oral treatment communication record corresponding to that identity information has already been created; if not, a blank oral treatment communication record corresponding to the identity information of the patient is created; if so, no new blank record needs to be created.
With this embodiment, the oral treatment communication record corresponding to a deaf-mute patient can be generated automatically while the doctor performs the oral treatment, which saves the doctor from recording the communication manually, avoids information being omitted, and avoids the cross-infection risk that manual recording would introduce.
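The record keeping described above can be sketched as follows; the speech-to-text step is a stub and all names are assumptions made for the example.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class TreatmentCommunicationRecord:
    patient_id: str
    dialogue: List[Tuple[str, str]] = field(default_factory=list)  # (patient text, doctor text)
    sent_to_guardian: int = 0            # how many dialogue pairs were already sent

class OralTreatmentMachine:
    def __init__(self):
        self.records = {}

    def get_or_create_record(self, patient_id: str) -> TreatmentCommunicationRecord:
        """Create a blank record only if none exists for this identity yet."""
        return self.records.setdefault(patient_id, TreatmentCommunicationRecord(patient_id))

    def log_exchange(self, patient_id: str, first_voice: str, second_voice: str):
        record = self.get_or_create_record(patient_id)
        first_text = self.speech_to_text(first_voice)     # patient's translated sign language
        second_text = self.speech_to_text(second_voice)   # doctor's spoken reply
        record.dialogue.append((first_text, second_text)) # the text dialogue relation

    @staticmethod
    def speech_to_text(voice: str) -> str:
        return voice            # stub for the speech recognition step

    def unsent_dialogue(self, patient_id: str):
        record = self.get_or_create_record(patient_id)
        unsent = record.dialogue[record.sent_to_guardian:]
        record.sent_to_guardian = len(record.dialogue)
        return unsent

# Usage example
machine = OralTreatmentMachine()
machine.log_exchange("student-2021-0042", "the drilling hurts a little",
                     "I will use a gentler setting")
print(machine.unsent_dialogue("student-2021-0042"))
```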
Therefore, by implementing the method described in fig. 3, the patient can watch the real-time picture while the doctor performs oral treatment (such as root canal treatment), and can communicate with the doctor in time whenever the ongoing treatment is unsatisfactory, which helps improve the patient's satisfaction with the treatment result.
In addition, by implementing the method described in fig. 3, the success rate of establishing the wireless connection between the second smart glasses and the first smart glasses can be further improved.
Referring to fig. 4, fig. 4 is a schematic flowchart of a fourth embodiment of the oral treatment real-time picture sharing method in an intelligent medical scene disclosed in the present application. In the method depicted in fig. 4, first smart glasses are worn by a doctor, second smart glasses are worn by a patient, the patient is deaf-mute and lies on an oral cavity comprehensive treatment machine on which a second shooting module with an adjustable shooting direction is arranged, and the first smart glasses are wirelessly connected to the oral cavity comprehensive treatment machine. As shown in fig. 4, the method includes the following steps:
steps 401 to 407 are the same as steps 201 to 207 in the previous embodiment, and this embodiment of the present application is not repeated here.
408. If, while the doctor is performing oral treatment on the patient, the oral cavity comprehensive treatment machine captures with the second shooting module a first sign language action made by the patient, it translates the first sign language action into a first voice through artificial intelligence and outputs the first voice to the first smart glasses.
409. The first smart glasses receive the first voice and control their integrated speaker module to play the first voice for the doctor to listen to.
410. The second smart glasses use their integrated sound pickup module to pick up a second voice uttered by the doctor in response to the first voice, and translate the second voice into a second sign language action through artificial intelligence.
411. The second smart glasses pause projecting the oral treatment real-time picture onto the patient's retina with harmless laser light and instead project, with harmless laser light, sign language video pictures of a virtual character configured on the second smart glasses performing the second sign language action onto the patient's retina; after all of the sign language video pictures of the virtual character have been projected onto the patient's retina with harmless laser light, the second smart glasses resume projecting the oral treatment real-time picture onto the patient's retina with harmless laser light.
By implementing steps 408 to 411, a deaf-mute patient can receive oral treatment in an intelligent medical scene without the help of a sign language interpreter, which helps improve the efficiency with which the doctor treats deaf-mute patients.
As an alternative embodiment, the oral treatment real-time screen sharing method described in fig. 4 further includes:
the oral cavity comprehensive treatment machine converts the first voice into a corresponding first text and stores the first text into an oral cavity treatment communication record corresponding to the identity information of the patient;
the second smart glasses transmit the second voice to the first smart glasses;
the first intelligent glasses receive the second voice and transmit the second voice to the oral cavity comprehensive treatment machine;
the oral cavity comprehensive treatment machine receives the second voice, converts the second voice into a corresponding second text and stores the second text into an oral cavity treatment communication record corresponding to the identity information of the patient;
the oral cavity comprehensive treatment machine establishes a text dialogue relation between the first text and the second text in the oral cavity treatment communication record.
With this embodiment, the oral treatment communication record corresponding to a deaf-mute patient can be generated automatically while the doctor performs the oral treatment, which saves the doctor from recording the communication manually, avoids information being omitted, and avoids the cross-infection risk that manual recording would introduce.
Therefore, by implementing the method described in fig. 4, the patient can watch the real-time picture while the doctor performs oral treatment (such as root canal treatment), and can communicate with the doctor in time whenever the ongoing treatment is unsatisfactory, which helps improve the patient's satisfaction with the treatment result.
In addition, by implementing the method described in fig. 4, the success rate of establishing the wireless connection between the second smart glasses and the first smart glasses can be further improved.
Referring to fig. 5, fig. 5 is a schematic flowchart of a fifth embodiment of the oral treatment real-time picture sharing method in an intelligent medical scene disclosed in the present application. In the method depicted in fig. 5, first smart glasses are worn by a doctor, second smart glasses are worn by a patient, the patient is deaf-mute and lies on an oral cavity comprehensive treatment machine on which a second shooting module with an adjustable shooting direction is arranged, and the first smart glasses are wirelessly connected to the oral cavity comprehensive treatment machine. As shown in fig. 5, the method includes the following steps:
Steps 501 to 507 are the same as steps 201 to 207 in the second embodiment and are not repeated here.
508. If, while the doctor is performing oral treatment on the patient, the oral cavity comprehensive treatment machine captures with the second shooting module a first sign language action made by the patient, it translates the first sign language action into a first voice through artificial intelligence and outputs the first voice to the first smart glasses.
509. The first smart glasses receive the first voice and control their integrated speaker module to play the first voice for the doctor to listen to.
510. The second smart glasses use their integrated sound pickup module to pick up a second voice uttered by the doctor in response to the first voice, and translate the second voice into a second sign language action through artificial intelligence.
511. The second smart glasses project, onto the patient's retina, the sign language video pictures of a virtual character configured on the second smart glasses performing the second sign language action as the foreground of the oral treatment real-time picture; the partial area of the oral treatment real-time picture that is occluded by the sign language video pictures of the virtual character is not visible to the patient.
In some embodiments, a laser projector may be embedded in each of the left and right temples of the second smart glasses. The projector on the left temple projects the oral treatment real-time picture onto the patient's retina with harmless laser light, while the projector on the right temple projects, with harmless laser light, the sign language video pictures of the virtual character performing the second sign language action as the foreground of the oral treatment real-time picture. The partial area of the real-time picture occluded by the sign language video pictures is not visible to the patient, so the patient sees the sign language video pictures preferentially. In some embodiments, the area of the sign language video pictures is smaller than the area of the oral treatment real-time picture, so that they only block a non-central region of the real-time picture; the patient can then watch the sign language video pictures preferentially while still seeing, in the central region of the real-time picture, the real-time treatment of the oral treatment site.
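The foreground compositing described above can be sketched with a simple opaque overlay; placing the sign language picture in the bottom-right corner is an assumption, since the embodiment only requires that a non-central region of the real-time picture be covered.

```python
import numpy as np

def composite_sign_language_foreground(live_frame: np.ndarray,
                                       sign_frame: np.ndarray) -> np.ndarray:
    """Overlay the (smaller) sign language video picture onto a non-central
    corner of the oral treatment real-time picture.

    The occluded corner becomes invisible to the patient, while the central
    region showing the treatment site stays fully visible."""
    h, w = live_frame.shape[:2]
    sh, sw = sign_frame.shape[:2]
    if sh >= h or sw >= w:
        raise ValueError("sign language picture must be smaller than the live picture")
    out = live_frame.copy()
    out[h - sh:h, w - sw:w] = sign_frame    # opaque foreground: occluded area is hidden
    return out

# Usage example with dummy grayscale images (pixel values 0-255)
live = np.full((480, 640), 200, dtype=np.uint8)     # oral treatment real-time picture
sign = np.full((160, 200), 50, dtype=np.uint8)      # virtual character sign language picture
combined = composite_sign_language_foreground(live, sign)
print(combined.shape, combined[470, 630], combined[240, 320])   # corner overlaid, centre intact
```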
By implementing steps 508 to 511, a deaf-mute patient can receive oral treatment in an intelligent medical scene without the help of a sign language interpreter, which helps improve the efficiency with which the doctor treats deaf-mute patients.
As an alternative embodiment, the oral treatment real-time screen sharing method described in fig. 5 further includes:
the oral cavity comprehensive treatment machine converts the first voice into a corresponding first text and stores the first text into an oral cavity treatment communication record corresponding to the identity information of the patient;
the second smart glasses transmit the second voice to the first smart glasses;
the first intelligent glasses receive the second voice and transmit the second voice to the oral cavity comprehensive treatment machine;
the oral cavity comprehensive treatment machine receives the second voice, converts the second voice into a corresponding second text and stores the second text into an oral cavity treatment communication record corresponding to the identity information of the patient;
the oral cavity comprehensive treatment machine establishes a text dialogue relation between the first text and the second text in the oral cavity treatment communication record.
According to this embodiment, the oral treatment communication record corresponding to the deaf-mute patient can be generated automatically while the doctor performs oral treatment on the patient, which spares the doctor from recording the communication manually, avoids information omission, and avoids the cross infection that manual recording could cause.
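As an illustration of how the first and second voices could be converted to text and paired in the oral treatment communication record, the following Python sketch uses an assumed speech_to_text stub and an assumed CommunicationRecord structure keyed by the patient's identity information; neither name comes from this application.

```python
# A minimal sketch, under assumed data structures, of storing the converted
# texts and pairing them into a text dialogue relation, as described above.
from dataclasses import dataclass, field

def speech_to_text(voice_audio: bytes) -> str:
    return "transcribed text"   # placeholder for the speech recognition step

@dataclass
class CommunicationRecord:
    patient_id: str
    dialogue: list = field(default_factory=list)   # (patient_text, doctor_text) pairs

records = {}   # patient identity information -> CommunicationRecord

def store_dialogue_turn(patient_id: str, first_voice: bytes, second_voice: bytes) -> None:
    first_text = speech_to_text(first_voice)      # patient's sign language, via the first voice
    second_text = speech_to_text(second_voice)    # doctor's spoken reply
    record = records.setdefault(patient_id, CommunicationRecord(patient_id))
    # The "text dialogue relation" is represented here as an ordered pair.
    record.dialogue.append((first_text, second_text))

store_dialogue_turn("student-0001", b"...", b"...")
```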
Therefore, by implementing the method described in fig. 5, the patient can watch the oral treatment real-time picture while the doctor performs oral treatment (such as root canal treatment) on the patient, so that the patient can communicate with the doctor in time when dissatisfied with the oral treatment process, which can improve the patient's satisfaction with the oral treatment result.
In addition, by implementing the method described in fig. 5, the success rate of establishing the wireless connection between the second smart glasses and the first smart glasses can be better improved.
In an alternative embodiment of the oral treatment real-time picture sharing method depicted in fig. 3, 4 or 5, the patient may be a student of a school for deaf-mutes, the school for deaf-mutes is provided with a service device, and the oral cavity comprehensive treatment machine is communicatively connected to the service device; the oral treatment real-time picture sharing method depicted in fig. 3, 4 or 5 may further include the following steps:
step A1, the oral cavity comprehensive treatment machine determines guardian equipment bound with the identity information of the patient from the service equipment according to the identity information of the patient.
Illustratively, the identity information of the patient may include the patient's name, grade, class, and school number. The guardian device to which the identity information of the patient is bound may be a mobile phone, tablet, or PC of the patient's guardian, and may also be a vehicle (e.g., a new energy automobile) of the patient's guardian.
Step A2, the oral cavity comprehensive treatment machine sends an inquiry message including the identity information of the patient to the guardian device bound with the identity information of the patient, wherein the inquiry message is used for inquiring whether the oral cavity treatment communication record corresponding to the identity information of the patient needs to be checked.
Step A3, the oral cavity comprehensive treatment machine judges whether a reply response which is sent by the guardian equipment bound by the identity information of the patient and is used for indicating that the oral cavity treatment communication record corresponding to the identity information of the patient needs to be checked is received within a specified time, and if the reply response is received, the text conversation relation which is not sent to the guardian equipment in the oral cavity treatment communication record corresponding to the identity information of the patient is sent to the guardian equipment.
Implementing steps A1-A3 allows the oral treatment communication record corresponding to the identity information of the patient to be shared automatically with the guardian device bound to the identity information of the patient, so that the guardian can learn in time about the patient's condition while the doctor performs oral treatment on the patient.
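A minimal sketch of steps A1 to A3 follows. The lookup table, messaging functions and 30-second waiting period are assumptions made only for illustration; this application does not prescribe these interfaces.

```python
# Illustrative sketch of steps A1-A3 under assumed interfaces.
import time

GUARDIAN_BINDINGS = {"student-0001": "guardian-phone-01"}   # held by the service device

def find_guardian_device(patient_id: str) -> str:           # step A1
    return GUARDIAN_BINDINGS[patient_id]

def send_inquiry(device_id: str, patient_id: str) -> None:  # step A2
    print(f"asking {device_id} whether to view the record of {patient_id}")

def wait_for_reply(device_id: str, timeout_s: float) -> bool:
    # Placeholder: a real system would poll the guardian device for a reply
    # response until the specified time elapses; here we assume it arrives.
    _ = time.monotonic() + timeout_s
    return True

def share_record_if_confirmed(patient_id: str, unsent_dialogue: list) -> None:  # step A3
    device_id = find_guardian_device(patient_id)
    send_inquiry(device_id, patient_id)
    if wait_for_reply(device_id, timeout_s=30.0):
        print(f"sending {len(unsent_dialogue)} unsent dialogue entries to {device_id}")

share_record_if_confirmed("student-0001", [("patient text", "doctor text")])
```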
In some application scenarios, the oral cavity comprehensive treatment machine may be located in an oral patrol vehicle that enters the school for deaf-mutes to patrol, and the service device is in communication connection with a vehicle monitoring unit and a radio frequency unit arranged outside the school entrance of the school for deaf-mutes. Before the oral cavity comprehensive treatment machine determines, from the service device according to the identity information of the patient, the guardian device bound with the identity information of the patient, the method further includes the following steps:
step B1, the service equipment is passing through vehicle monitoring unit discerns the license plate information of oral cavity inspection vehicle and discerns the trend of going of oral cavity inspection vehicle is for coming into deaf-mute's academic school time, and control the radio frequency unit is faced the transmission of oral cavity inspection vehicle contains the identification of service equipment and the first radio frequency signal of appointed position of patrolling, and the service equipment storage the license plate information of oral cavity inspection vehicle with the corresponding relation of the identification of service equipment.
Step B2, after the vehicle-mounted unit of the oral patrol vehicle receives the first radio frequency signal, it stores the identity of the service device in the driving computer of the oral patrol vehicle; when the oral patrol vehicle reaches a certain target location of the school for deaf-mutes, the oral patrol vehicle responds to an input service device access instruction by transmitting a second radio frequency signal comprising the target location, the license plate information of the oral patrol vehicle and the identity of the service device.
Step B3, after the service device receives the second radio frequency signal, if it recognizes that the target location is the same as the designated patrol location and recognizes that the service device has stored the corresponding relationship between the license plate information of the oral patrol vehicle and the identity of the service device, the service device establishes a communication connection with the oral patrol vehicle.
Correspondingly, the step in which the oral cavity comprehensive treatment machine determines, from the service device according to the identity information of the patient, the guardian device bound with the identity information of the patient includes:
the oral cavity comprehensive treatment machine determines, based on the identity information of the patient and through the communication connection between the service device and the oral patrol vehicle, the guardian device bound with the identity information of the patient from the service device.
By implementing steps B1-B3, the communication connection between the oral cavity comprehensive treatment machine in the oral patrol vehicle that enters the school for deaf-mutes to patrol and the service device arranged by the school for deaf-mutes can be established automatically and accurately, without manual participation.
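The handshake of steps B1 to B3 can be sketched as follows. The message dictionaries, plate number and location strings are invented for illustration; actual radio frequency framing and vehicle monitoring are outside the scope of this sketch.

```python
# A compact, illustrative sketch of the B1-B3 handshake with invented messages.

SERVICE_ID = "service-device-001"
PATROL_LOCATION = "playground-gate"
known_plates = {}          # license plate -> service identity (stored in step B1)

def on_vehicle_approaching(plate: str) -> dict:
    """Step B1: the service device recognizes the plate and driving trend,
    stores the correspondence, and emits the first radio frequency signal."""
    known_plates[plate] = SERVICE_ID
    return {"service_id": SERVICE_ID, "patrol_location": PATROL_LOCATION}

def vehicle_builds_second_signal(first_signal: dict, plate: str, target: str) -> dict:
    """Step B2: the on-board unit stores the service identity and later replies
    with the target location, the plate and the stored service identity."""
    return {"target": target, "plate": plate, "service_id": first_signal["service_id"]}

def service_accepts(second_signal: dict) -> bool:
    """Step B3: connect only if the target matches the designated patrol
    location and the plate/service-identity correspondence was stored earlier."""
    return (second_signal["target"] == PATROL_LOCATION
            and known_plates.get(second_signal["plate"]) == second_signal["service_id"])

first = on_vehicle_approaching("A12345")
second = vehicle_builds_second_signal(first, "A12345", "playground-gate")
print("communication connection established:", service_accepts(second))
```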
In some application scenarios, in step A3, after the oral cavity comprehensive treatment machine determines that a reply response indicating that the oral treatment communication record corresponding to the identity information of the patient needs to be viewed has been received from the guardian device bound to the identity information of the patient within the specified time period, and before it sends the text dialogue relation that has not yet been sent to the guardian device in that oral treatment communication record, the method further includes:
step A31, the oral cavity comprehensive therapeutic machine obtains a first card-punching place track reported by guardian equipment bound with the identity information of the patient from the service equipment; the first card punching place track comprises a first appointed number of card punching places, and any two card punching places in the first appointed number of card punching places are different from each other.
Step A32, between every two adjacent card punching places in the first card punching place track, the oral cavity comprehensive treatment machine inserts non-card-punching places whose quantity is in direct proportion to the straight-line distance between the two adjacent card punching places, so as to form a second card punching place track.
That is, the larger the straight-line distance between two adjacent card punching places, the more non-card-punching places are inserted between them; conversely, the smaller the straight-line distance between two adjacent card punching places, the fewer non-card-punching places are inserted between them.
Step A33, the oral cavity comprehensive treatment machine sends the second card punching place track to the guardian device bound with the identity information of the patient.
Step A34, the oral cavity comprehensive treatment machine acquires a third card punching place track sent by the guardian device bound with the identity information of the patient; the third card punching place track consists of a second specified number of places selected from the second card punching place track.
Step A35, after verifying that the third card punching place track is the same as the first card punching place track and that the set of the second specified number of places is the same as the set of the first specified number of card punching places, the oral cavity comprehensive treatment machine sends the text dialogue relation that has not been sent to the guardian device in the oral treatment communication record corresponding to the patient's identity information to the guardian device.
By implementing steps A31-A35, only the guardian bound with the identity information of the patient is allowed to view the patient's oral treatment communication record, which prevents this privacy-sensitive record from being leaked to people who are not the patient's guardian.
In this embodiment, when the guardian device bound with the identity information of the patient is the vehicle (such as a new energy automobile) of the patient's guardian, a hand-pressed location card punching button may further be arranged on the steering wheel of that vehicle. While the vehicle is moving, each time the vehicle detects that the hand-pressed location card punching button is pressed, it records the vehicle's instant location as one card punching place; when it judges that the number of recorded card punching places has reached the first specified number, it generates a first card punching place track consisting of the recorded first specified number of card punching places and stores the track in the driving computer of the vehicle, wherein any two of the first specified number of card punching places are different from each other.
Further, the service device may actively obtain the first card punching place track from the driving computer of the patient's guardian's vehicle, or the service device may receive the first card punching place track actively reported by that driving computer.
For example, assume the first specified number is 5 and the 5 card punching places recorded by the patient's guardian's vehicle while moving are "Honda car sales center", "Wanda Square", "Rosy County", "Vienna Hotel" and "Youth Hotel" on a certain street. The vehicle may determine that the 5 recorded card punching places reach the first specified number, generate a first card punching place track as shown in fig. 6 consisting of these 5 card punching places, and store the track in the driving computer of the patient's guardian's vehicle.
In the embodiment of the present application, any place included in the first card punching place track is represented in the form of a place name (such as "Honda car sales center").
For example, for the two adjacent card punching places "Honda car sales center" and "Wanda Square" in the first card punching place track shown in fig. 6, the oral cavity comprehensive treatment machine may determine all the places located between them and, in a number proportional to the straight-line distance between them, randomly select 2 non-card-punching places, "yuefeng corium car stereo" and "witmark dining room", to insert between "Honda car sales center" and "Wanda Square";
likewise, for the two adjacent card punching places "Wanda Square" and "Rosy County", it may determine all the places located between them and randomly select 3 non-card-punching places, "mommy recovery center", "Yong and soymilk" and "North Bay insurance", to insert between "Wanda Square" and "Rosy County";
for the two adjacent card punching places "Rosy County" and "Vienna Hotel", it may determine all the places located between them and randomly select 4 non-card-punching places, "yunquan gym", "sivwangtang habengalore", "vadaemon" and "supermarket tengyo", to insert between "Rosy County" and "Vienna Hotel";
and for the two adjacent card punching places "Vienna Hotel" and "Youth Hotel", it may determine all the places located between them and randomly select 5 non-card-punching places, "same-name wine sales row", "sichuan restaurant", "TCL air-conditioning exclusive shop", "bonbonbonic cabinet" and "laoxiang city chafing dish", to insert between "Vienna Hotel" and "Youth Hotel", thereby forming a second card punching place track as shown in fig. 7.
It can be understood that, when the oral cavity comprehensive treatment machine verifies that the third card punching place track is the same as the first card punching place track, that the second specified number is equal to the first specified number, and that the set of the second specified number of places included in the third card punching place track is the same as the set of the first specified number of card punching places, this indicates that the third card punching place track is completely identical to the first card punching place track. For example, the oral cavity comprehensive treatment machine sends the text dialogue relation that has not been sent to the guardian device in the oral treatment communication record corresponding to the identity information of the patient to the guardian device only if it verifies that the third card punching place track consists of the 5 card punching places "Honda car sales center", "Wanda Square", "Rosy County", "Vienna Hotel" and "Youth Hotel" and does not include any other place.
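Reusing the example places above, the following sketch illustrates steps A31 to A35: building the second card punching place track by inserting a distance-proportional number of non-card-punching places, and verifying the third track returned by the guardian device. The distances, candidate places and proportionality constant are assumptions for illustration only; the text above only requires that more non-card-punching places be inserted between places that are farther apart.

```python
# A simplified, illustrative sketch of the A31-A35 check.
import random

def build_second_track(first_track, distances_km, candidates_between, places_per_km=1.0):
    """Step A32: between each adjacent pair of card punching places, insert a number
    of non-card-punching places proportional to the straight-line distance."""
    second = [first_track[0]]
    for i, gap_candidates in enumerate(candidates_between):
        count = max(1, round(distances_km[i] * places_per_km))
        second.extend(random.sample(gap_candidates, min(count, len(gap_candidates))))
        second.append(first_track[i + 1])
    return second

def verify_third_track(first_track, third_track):
    """Step A35: the reply must reproduce the original card punching places,
    in order, and nothing else."""
    return third_track == first_track and set(third_track) == set(first_track)

first = ["Honda car sales center", "Wanda Square", "Rosy County"]
second = build_second_track(first, [2.0, 3.0],
                            [["cafe", "car stereo shop"],
                             ["gym", "soymilk shop", "insurance office"]])
guardian_reply = [p for p in second if p in first]   # the guardian picks out the true places
print(verify_third_track(first, guardian_reply))      # True
```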
The embodiment of the application further discloses an oral treatment real-time picture sharing system under an intelligent medical scene, which comprises the devices for executing the preceding method embodiments, such as the first intelligent glasses, the second intelligent glasses, the oral cavity comprehensive treatment machine, the service device and the guardian device.
The present application discloses a computer-readable storage medium storing a computer program, wherein the computer program when executed causes a computer to perform some or all of the steps of the method in the above method embodiments.
The present application discloses a computer program product, wherein the computer program product, when run on a computer, causes the computer to perform some or all of the steps of the method as in the above method embodiments.
The present application discloses an application publishing platform for publishing a computer program product, wherein the computer program product, when run on a computer, causes the computer to perform some or all of the steps of the method as in the above method embodiments.
The oral treatment real-time picture sharing method under the intelligent medical scene disclosed by the application has been described in detail above; specific examples are used to explain the principle and implementation of the application, and the description of these examples is only intended to help understand the method and its core idea. Meanwhile, a person skilled in the art may, according to the idea of the present application, make changes to the specific embodiments and the application scope; in summary, the content of the present specification should not be construed as limiting the present application.

Claims (9)

1. An oral treatment real-time picture sharing method under an intelligent medical scene, characterized in that first intelligent glasses are worn by a doctor and second intelligent glasses are worn by a patient, and the method comprises the following steps:
the first intelligent glasses shoot oral treatment real-time pictures of the patient when the doctor carries out oral treatment on the patient from the visual angle of the doctor through a first shooting module arranged on the front side of a glasses frame of the first intelligent glasses;
the first smart glasses transmit the oral treatment real-time picture to the second smart glasses;
the second intelligent glasses acquire the oral treatment real-time picture transmitted by the first intelligent glasses;
the second smart glasses project the oral treatment real-time picture onto the patient's retina using harmless laser light;
the patient lies on an oral cavity comprehensive treatment machine, the oral cavity comprehensive treatment machine is provided with a second shooting module whose shooting direction is adjustable, and the first intelligent glasses are wirelessly connected to the oral cavity comprehensive treatment machine; before the first intelligent glasses shoot, through the first shooting module arranged on the front side of their glasses frame and from the doctor's visual angle, the oral treatment real-time picture of the doctor performing oral treatment on the patient, the method further comprises:
the oral cavity comprehensive treatment machine utilizes the second shooting module to shoot the glasses unique identifier arranged on the front side of the glasses frame of the second intelligent glasses, and queries the target verification information randomly distributed to the second intelligent glasses by the oral cavity comprehensive treatment machine when the second intelligent glasses are in wireless connection with the oral cavity comprehensive treatment machine for the last time according to the glasses unique identifier and transmits the target verification information to the first intelligent glasses;
the first intelligent glasses receive the target verification information and broadcast a wireless connection request containing the target verification information and the instant position of the first intelligent glasses;
the second intelligent glasses receive the wireless connection request, determine a target distance threshold value when the second intelligent glasses are identified to store the target verification information contained in the wireless connection request, judge whether a distance value between the instant position of the second intelligent glasses and the instant position of the first intelligent glasses is smaller than or equal to the target distance threshold value, and if yes, establish wireless connection with the first intelligent glasses;
wherein the second smart glasses determine a target distance threshold when recognizing that the second smart glasses have stored the target verification information included in the wireless connection request, including:
when the second intelligent glasses recognize that the target verification information contained in the wireless connection request is stored in the second intelligent glasses, the second intelligent glasses judge whether the historical number of times that a wireless connection was established between the second intelligent glasses and the first intelligent glasses within a set historical time period exceeds a specified number of times; if not, a preset distance threshold is used as the target distance threshold; if the historical number of times exceeds the specified number of times, the difference between the historical number of times and the specified number of times is calculated;
the second intelligent glasses obtain a distance increment, the distance increment being in direct proportion to the difference; and the sum of the preset distance threshold and the distance increment is calculated as the target distance threshold.
2. The oral treatment real-time picture sharing method according to claim 1, wherein the patient is a deaf-mute, the method further comprising:
if the oral cavity comprehensive treatment machine captures a first sign language action made by the patient by utilizing the second shooting module in the process that the doctor performs oral treatment on the patient, the first sign language action is translated into a first voice through artificial intelligence, and the first voice is output to the first intelligent glasses;
the first intelligent glasses receive the first voice and control a loudspeaker module integrated with the first intelligent glasses to play the first voice for the doctor to listen to;
the first intelligent glasses utilize a sound pickup module integrated with the first intelligent glasses to pick up a second voice sent by the doctor in response to the first voice, and the second voice is translated into a second sign language action through artificial intelligence;
the first smart glasses suspend transmitting the oral treatment real-time pictures to the second smart glasses and transmit sign language video pictures of the second sign language action made by the virtual character configured to the first smart glasses to the second smart glasses, so that the second smart glasses project the sign language video pictures onto the retina of the patient by using harmless laser;
and after the sign language video picture has been completely transmitted to the second intelligent glasses, the first intelligent glasses continue to transmit the oral treatment real-time picture to the second intelligent glasses.
3. The oral treatment real-time picture sharing method according to claim 1, wherein the patient is a deaf-mute, the method further comprising:
if the oral cavity comprehensive treatment machine captures a first sign language action made by the patient by utilizing the second shooting module in the process that the doctor performs oral treatment on the patient, the first sign language action is translated into a first voice through artificial intelligence, and the first voice is output to the first intelligent glasses;
the first intelligent glasses receive the first voice and control a loudspeaker module integrated with the first intelligent glasses to play the first voice for the doctor to listen to;
the second intelligent glasses pick up a second voice sent by the doctor in response to the first voice by utilizing a sound pickup module integrated with the second intelligent glasses, and translate the second voice into a second sign language action through artificial intelligence;
the second smart glasses suspend projecting the oral treatment real-time picture onto the patient's retina using harmless laser light, and project a sign language video picture of the second sign language action made by a virtual character configured to the second smart glasses onto the patient's retina using harmless laser light;
and after the second intelligent glasses use the harmless laser to project all sign language video pictures of the virtual character configured to the second intelligent glasses during the second sign language action onto the retinas of the patient, continuing to use the harmless laser to project the oral treatment real-time pictures onto the retinas of the patient.
4. The oral treatment real-time picture sharing method according to claim 1, wherein the patient is a deaf-mute, the method further comprising:
if the oral cavity comprehensive treatment machine captures a first sign language action made by the patient by utilizing the second shooting module in the process that the doctor performs oral treatment on the patient, the first sign language action is translated into a first voice through artificial intelligence, and the first voice is output to the first intelligent glasses;
the first intelligent glasses receive the first voice and control a loudspeaker module integrated with the first intelligent glasses to play the first voice for the doctor to listen to;
the second intelligent glasses pick up a second voice sent by the doctor in response to the first voice by utilizing a sound pickup module integrated with the second intelligent glasses, and translate the second voice into a second sign language action through artificial intelligence;
the second intelligent glasses project sign language video pictures of the virtual characters configured to the second intelligent glasses during the second sign language action as the foreground of the oral treatment real-time pictures onto the retinas of the patient; wherein a partial area of the real-time picture of oral treatment that is occluded by the sign language video picture of the virtual character when the second sign language action is made is not visible to the patient.
5. The oral treatment real-time picture sharing method according to claim 2, further comprising:
the oral cavity comprehensive treatment machine converts the first voice into a corresponding first text and stores the first text into an oral cavity treatment communication record corresponding to the identity information of the patient;
the first smart glasses transmit the second voice to the oral cavity comprehensive treatment machine;
the oral cavity comprehensive treatment machine receives the second voice, converts the second voice into a corresponding second text and stores the second text into an oral cavity treatment communication record corresponding to the identity information of the patient;
the oral cavity comprehensive treatment machine establishes a text dialogue relation between the first text and the second text in the oral cavity treatment communication record.
6. The oral treatment real-time picture sharing method according to claim 3 or 4, further comprising:
the oral cavity comprehensive treatment machine converts the first voice into a corresponding first text and stores the first text into an oral cavity treatment communication record corresponding to the identity information of the patient;
the second smart glasses transmit the second voice to the first smart glasses;
the first intelligent glasses receive the second voice and transmit the second voice to the oral cavity comprehensive treatment machine;
the oral cavity comprehensive treatment machine receives the second voice, converts the second voice into a corresponding second text and stores the second text into an oral cavity treatment communication record corresponding to the identity information of the patient;
the oral cavity comprehensive treatment machine establishes a text dialogue relation between the first text and the second text in the oral cavity treatment communication record.
7. The oral treatment real-time picture sharing method according to claim 6, wherein the patient is a student of a school of deaf-mutes, the school of deaf-mutes being arranged with a service device, the oral cavity complex treatment machine being in communication connection with the service device, the method further comprising:
the comprehensive treatment machine for the oral cavity determines guardian equipment bound with the identity information of the patient from the service equipment according to the identity information of the patient;
the oral cavity comprehensive treatment machine sends an inquiry message including the identity information of the patient to guardian equipment bound with the identity information of the patient, wherein the inquiry message is used for inquiring whether an oral cavity treatment communication record corresponding to the identity information of the patient needs to be checked;
the oral cavity comprehensive treatment machine judges whether a reply response which is sent by the guardian equipment bound by the identity information of the patient and is used for indicating that the oral cavity treatment communication record corresponding to the identity information of the patient needs to be checked is received or not within a specified time, and if the reply response is received, the text conversation relation which is not sent to the guardian equipment in the oral cavity treatment communication record corresponding to the identity information of the patient is sent to the guardian equipment.
8. The oral treatment real-time picture sharing method according to claim 7, wherein the oral cavity comprehensive treatment machine is located in an oral patrol vehicle entering the school for deaf-mutes to patrol, the service device is in communication connection with a vehicle monitoring unit and a radio frequency unit which are arranged outside a school entrance of the school for deaf-mutes, and before the oral cavity comprehensive treatment machine determines, from the service device according to the identity information of the patient, the guardian device bound with the identity information of the patient, the method further comprises:
when the service equipment identifies the license plate information of the oral patrol car through the vehicle monitoring unit and identifies that the driving trend of the oral patrol car is about to enter the deaf-mute school, the service equipment controls the radio frequency unit to transmit a first radio frequency signal containing the identity of the service equipment and the specified patrol position to the oral patrol car, and the service equipment stores the corresponding relation between the license plate information of the oral patrol car and the identity of the service equipment;
after the vehicle-mounted unit of the oral cavity patrol vehicle receives the first radio frequency signal, the identity of the service equipment is stored in a driving computer of the oral cavity patrol vehicle; when the oral patrol vehicle reaches a certain target position of the school for deaf-mutes, the oral patrol vehicle responds to an input service equipment access instruction to transmit a second radio-frequency signal comprising the target position, the license plate information of the oral patrol vehicle and the identity of the service equipment;
after the service equipment receives the second radio frequency signal, if the target position is identified to be the same as the specified patrol position and the corresponding relation between the license plate information of the oral patrol vehicle and the identity of the service equipment stored in the service equipment is identified, the service equipment establishes communication connection with the oral patrol vehicle;
the comprehensive treatment machine for the oral cavity determines guardian equipment bound with the identity information of the patient from the service equipment according to the identity information of the patient, and comprises:
the oral cavity comprehensive treatment machine takes the identity information of the patient as a basis, and guardian equipment bound with the identity information of the patient is determined from the service equipment through communication connection between the service equipment and the oral cavity patrol vehicle.
9. The oral treatment real-time picture sharing method according to claim 8, wherein after the oral cavity comprehensive treatment machine determines that a reply response, sent by the guardian device bound with the patient's identity information and indicating that the oral treatment communication record corresponding to the patient's identity information needs to be checked, has been received within a specified time period, and before sending the text dialogue relationship that has not been sent to the guardian device in the oral treatment communication record corresponding to the patient's identity information to the guardian device, the method further comprises:
the oral cavity comprehensive treatment machine acquires a first card-punching place track reported by guardian equipment bound with the identity information of the patient from the service equipment; the first card punching place track comprises a first appointed number of card punching places, and any two card punching places in the first appointed number of card punching places are different from each other;
the oral cavity comprehensive treatment machine inserts non-punching positions, the number of which is in direct proportion to the linear distance between every two adjacent punching positions, between every two adjacent punching positions in the first punching position track to form a second punching position track;
the oral cavity comprehensive treatment machine sends the second card punching place track to guardian equipment bound with the identity information of the patient;
the oral cavity comprehensive treatment machine acquires a third card punching place track sent by guardian equipment bound with the identity information of the patient; the third card punching place track consists of a second appointed number of selected places in the second card punching place track; and when the third card punching place track is verified to be the same as the first card punching place track, and the second specified number of places is equal to the first specified number, and the set formed by the second specified number of places is the same as the set formed by the first specified number of card punching places, sending the text conversation relation which is not sent to the guardian device in the oral treatment communication record corresponding to the identity information of the patient to the guardian device.

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110743846.4A CN113194299B (en) 2021-07-01 2021-07-01 Oral treatment real-time picture sharing method under intelligent medical scene


Publications (2)

Publication Number Publication Date
CN113194299A CN113194299A (en) 2021-07-30
CN113194299B true CN113194299B (en) 2021-08-31

Family

ID=76976828

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110743846.4A Expired - Fee Related CN113194299B (en) 2021-07-01 2021-07-01 Oral treatment real-time picture sharing method under intelligent medical scene

Country Status (1)

Country Link
CN (1) CN113194299B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114286052B (en) * 2021-12-23 2022-08-26 广东景龙建设集团有限公司 Sharing method and system for wall decoration pictures of assembly type building

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104570399A (en) * 2014-12-30 2015-04-29 郑州大学 Video glasses for medical image sharing
CN104966433A (en) * 2015-07-17 2015-10-07 江西洪都航空工业集团有限责任公司 Intelligent glasses assisting deaf-mute conversation
CN211906492U (en) * 2020-02-27 2020-11-10 上海萃钛智能科技有限公司 Intelligent visual capture reminding device, reminding system and vision expander

Family Cites Families (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
IL308285A (en) * 2013-03-11 2024-01-01 Magic Leap Inc System and method for augmented and virtual reality
FR3021518A1 (en) * 2014-05-27 2015-12-04 Francois Duret VISUALIZATION DEVICE FOR FACILITATING MEASUREMENT AND 3D DIAGNOSIS BY OPTICAL FOOTPRINT IN DENTISTRY
WO2016158000A1 (en) * 2015-03-30 2016-10-06 ソニー株式会社 Information processing device, information processing method, and information processing system
CN104837215A (en) * 2015-04-14 2015-08-12 广东欧珀移动通信有限公司 Wireless access point connecting method and device
WO2016190607A1 (en) * 2015-05-22 2016-12-01 고려대학교 산학협력단 Smart glasses system for providing surgery assisting image and method for providing surgery assisting image by using smart glasses
US10045159B2 (en) * 2015-07-02 2018-08-07 Qualcomm Incorporated Providing, organizing, and managing location history records of a mobile device
CN205864618U (en) * 2016-07-18 2017-01-04 珠海格力电器股份有限公司 Intelligent glasses and image display device
CN107241579A (en) * 2017-07-14 2017-10-10 福建铁工机智能机器人有限公司 A kind of utilization AR realizes the method and apparatus of Telemedicine Consultation
KR102270170B1 (en) * 2018-11-14 2021-06-25 임승준 Surgery supporting instrument using augmented reality
CN111194027B (en) * 2018-11-15 2023-09-05 阿里巴巴集团控股有限公司 Network connection method, device and system
CN209842237U (en) * 2019-03-30 2019-12-24 上海翊视皓瞳信息科技有限公司 Medical treatment intelligence glasses
CN112353503A (en) * 2020-09-21 2021-02-12 上海长征医院 A intelligent glasses that is arranged in art real-time illumination to shoot and make a video recording


Also Published As

Publication number Publication date
CN113194299A (en) 2021-07-30

Similar Documents

Publication Publication Date Title
JP6946649B2 (en) Electronic devices, information processing methods and programs
JP5445981B2 (en) Viewer feeling judgment device for visually recognized scene
CN105700676A (en) Wearable glasses, control method thereof, and vehicle control system
WO2020020022A1 (en) Method for visual recognition and system thereof
CN113194299B (en) Oral treatment real-time picture sharing method under intelligent medical scene
CN105653020A (en) Time traveling method and apparatus and glasses or helmet using same
KR102047988B1 (en) Vision aids apparatus for the vulnerable group of sight, remote managing apparatus and method for vision aids
CN107092314A (en) A kind of head-mounted display apparatus and detection method that driving behavior monitor detection is provided
CN109147623A (en) A kind of museum's guide system that real-time positioning is visited
CN106445444A (en) Tourism landscape realization system based on virtual reality technology
CN105843395A (en) Glasses capable of interacting with electronic equipment as well as interaction method
CN112506336A (en) Head mounted display with haptic output
CN110210935B (en) Security authentication method and device, storage medium and electronic device
CN206906936U (en) A kind of head-mounted display apparatus that driving behavior monitor detection is provided
CN105430018B (en) A kind of data processing method, control equipment and system
CN111796740A (en) Unmanned vehicle control method, device and system based on wearable intelligent equipment
CN111026276A (en) Visual aid method and related product
DE102013019563B4 (en) Method for providing information about an environment to a smart device
CN113960788B (en) Image display method, device, AR glasses and storage medium
WO2022064633A1 (en) Information provision device, information provision system, information provision method, and non-transitory computer-readable medium
CN105242401A (en) Method for sharing other equipment data for automobile maintenance through intelligent eyeglasses
CN105182537A (en) Method of recognizing liquid leakage during maintenance process by intelligent glasses
CN112969053B (en) In-vehicle information transmission method and device, vehicle-mounted equipment and storage medium
CN108877407A (en) Methods, devices and systems and augmented reality glasses for supplementary AC
CN111813228B (en) Image transmission method and system based on user vision

Legal Events

PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee
Granted publication date: 20210831