CN114038259A - 5G virtual reality medical ultrasonic training system and method thereof - Google Patents

5G virtual reality medical ultrasonic training system and method thereof

Info

Publication number
CN114038259A
CN114038259A
Authority
CN
China
Prior art keywords
virtual
trainee
ultrasonic
image
generating
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN202111223428.9A
Other languages
Chinese (zh)
Inventor
俞正义
蒋伟红
俞震
俞朝晖
施银桂
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Individual
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual filed Critical Individual
Priority to CN202111223428.9A priority Critical patent/CN114038259A/en
Publication of CN114038259A publication Critical patent/CN114038259A/en
Withdrawn legal-status Critical Current

Classifications

    • G: PHYSICS
    • G09: EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B: EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B9/00: Simulators for teaching or training purposes
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011: Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00: Manipulating 3D models or images for computer graphics
    • G06T19/006: Mixed reality

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Graphics (AREA)
  • Human Computer Interaction (AREA)
  • Computer Hardware Design (AREA)
  • Software Systems (AREA)
  • Business, Economics & Management (AREA)
  • Educational Administration (AREA)
  • Educational Technology (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention provides a 5G virtual reality medical ultrasound practical training system and method. The method comprises: building a virtual environment and acquiring the spatial position of each physical object and of each virtual object in the virtual environment, wherein the physical objects comprise a handheld VR controller, and the virtual objects comprise one or a combination of a virtual B-mode ultrasound machine, a virtual hospital bed and a virtual examinee; monitoring the spatial coordinates and spatial angle of the physical object in real time; generating a corresponding virtual B-mode ultrasound probe according to the spatial coordinates and spatial angle; generating a corresponding virtual B-mode ultrasound image according to the relative position and relative angle between the virtual B-mode ultrasound probe and the virtual object, and displaying it to the trainee; and generating, within the trainee's line of sight in the virtual environment, a local operation mirror image corresponding to the pose of the virtual B-mode ultrasound probe. By applying the embodiments of the invention, no special hardware is needed and the training cost is reduced.

Description

5G virtual reality medical ultrasonic training system and method thereof
Technical Field
The invention relates to the technical field of ultrasonic diagnosis, in particular to a 5G virtual reality medical ultrasonic practical training system and a method thereof.
Background
B-mode ultrasound is widely used in medical diagnosis because it is non-invasive and radiation-free. Obtaining a clear, stable B-mode ultrasound image and reading it correctly requires the doctor to operate the probe manually, and this skill takes long-term experience to accumulate. Moreover, a doctor's clinical experience depends on the number and variety of cases encountered; for rare disease types, clinical images may only be acquired after a long wait, and in small and medium-sized hospitals the wait is even longer. Training a qualified B-mode ultrasound doctor therefore typically takes years.
To address these problems, B-mode ultrasound simulation has been developed. The patent "mixed reality simulation method and system" (US20140071165) introduces a method for medical education using mixed reality, with emphasis on the registration of virtual scenes and physical entities; the patent "enhanced ultrasound simulation using touch surface" (US20150154890) introduces a method of B-mode ultrasound diagnostic simulation using a human body model, electronic tags and a simulated probe; and the patent "ultrasound simulation method" (US20180301063) introduces a method of B-mode ultrasound simulation in an AR/VR (Augmented Reality/Virtual Reality) environment. In addition, SonoSim (https://sonosim.com) and SMJP (https://smjpltd.uk) have begun to provide commercial ultrasound simulation teaching courses.
However, to improve usability, existing AR/VR-based B-mode ultrasound simulation systems require dedicated hardware such as a precise human body model and a real ultrasound probe to build the simulation environment; they are relatively costly, lack interaction with the patient, and offer limited immersion.
Disclosure of Invention
The invention aims to provide a 5G virtual reality medical ultrasonic practical training system and a method thereof so as to reduce the training cost.
The invention solves the technical problems through the following technical means:
a virtual reality medical ultrasound training method, the method comprising:
building a virtual environment, and acquiring the spatial position of each physical object and the spatial position of the virtual object in the virtual environment, wherein the physical object comprises: a handheld VR controller; the virtual object includes: one or a combination of a virtual B-ultrasonic machine, a virtual sickbed and a virtual examinee;
monitoring the space coordinate and the space angle of the physical object in real time;
generating a corresponding virtual B ultrasonic probe according to the space coordinate and the space angle;
generating a corresponding virtual B-mode ultrasound image according to the relative position and relative angle between the virtual B-mode ultrasound probe and the virtual object, and displaying the corresponding virtual B-mode ultrasound image to a trainee;
generating, within the trainee's line of sight in the virtual environment, a local operation mirror image corresponding to the pose of the virtual B-mode ultrasound probe, wherein the pose of the marker corresponding to the virtual B-mode ultrasound probe in the local operation mirror image is consistent with the pose of the virtual B-mode ultrasound probe in real time.
Optionally, the generating a corresponding virtual B-mode ultrasound image includes:
acquiring a matched real B-mode ultrasound three-dimensional image according to the probed section, and using the real B-mode ultrasound three-dimensional image as the virtual B-mode ultrasound image.
Optionally, after the matched real B-mode ultrasound three-dimensional image is acquired according to the probed section and before it is used as the virtual B-mode ultrasound image, the step of generating the corresponding virtual B-mode ultrasound image according to the probed section further includes:
supplementing missing video frames in the pre-recorded real B-mode ultrasound image corresponding to the probed section by using an interpolation algorithm.
Optionally, the generating a corresponding virtual B-mode ultrasound image according to the probed section includes:
generating a virtual B-mode ultrasound image in real time by using a generation algorithm, according to a pre-established human body mathematical model and a model of the acoustic properties of sound waves in human tissue.
Optionally, the method further includes:
acquiring a voice instruction from the trainee, and generating the virtual examinee under the corresponding instruction, wherein the voice instruction includes: an instruction asking the virtual examinee to change posture.
Optionally, after the step of generating the virtual examinee under the corresponding instruction, the method further includes:
and when the virtual B-mode ultrasound probe is occluded by the body of the virtual examinee after the posture change, generating, within the trainee's line of sight in the virtual environment, a local operation mirror image corresponding to the pose of the virtual B-mode ultrasound probe, wherein the pose of the marker corresponding to the virtual B-mode ultrasound probe in the local operation mirror image is consistent with the pose of the virtual B-mode ultrasound probe in real time.
Optionally, the method further includes:
generating operation guidance marks in the local operation mirror image, wherein the operation guidance marks include: a virtual guide line, a compression pressure indication, or a combination thereof.
Optionally, when the operation guidance mark is a compression pressure indication, pressure levels are distinguished by using colors of different intensity.
Optionally, the method further includes:
generating a virtualized image of a corresponding person in the virtual environment when a teacher teaches a trainee based on the virtual environment;
the teacher adjusts the trainee's observation position by dragging the trainee's virtual image to a specified position in the virtual environment, and virtual scene information for the trainee is generated according to the observation position;
or synchronizing the teacher's viewing-angle information to the trainee, and generating the virtual scene information for the trainee according to that viewing-angle information;
or generating gaze markers for the teacher and/or the trainee in the virtual environment.
Optionally, the teacher and trainee are remotely online.
The invention also provides a virtual reality medical ultrasonic practical training system, which comprises: an environment acquisition unit, a physical object identification and positioning unit, a virtual B-ultrasonic image generation unit, a virtual scene generation unit and a virtual scene display unit, wherein,
the virtual scene generation unit is used for building a virtual environment;
the physical object identification and location unit is to: acquiring the spatial position of each physical object and the spatial position of a virtual object in a virtual environment, wherein the physical object comprises: a handheld VR controller; the virtual object includes: one or a combination of a virtual B-ultrasonic machine, a virtual sickbed and a virtual examinee;
the environment acquisition unit is used for monitoring the space coordinate and the space angle of the physical object in real time;
the virtual scene display unit is used for generating a corresponding virtual B ultrasonic probe according to the space coordinate and the space angle and displaying the virtual B ultrasonic probe to a user;
and the virtual B-mode ultrasound image generation unit is used for generating a corresponding virtual B-mode ultrasound image according to the relative position and relative angle between the virtual B-mode ultrasound probe and the virtual object, and for displaying the corresponding virtual B-mode ultrasound image to the trainee.
The invention has the advantages that:
By applying the embodiments of the invention, technologies such as virtual reality and artificial intelligence are combined to carry out virtual reality B-mode ultrasound training; no special hardware (simulated probe or simulated human body model) is required, the cost is low, and the application is convenient.
In addition, the trainee diagnoses a virtual examinee in a virtual environment, and the system generates the corresponding B-mode ultrasound image in real time through artificial intelligence, so the trainee can quickly master medical ultrasound diagnostic skills and become familiar with the image characteristics of various rare cases. Relying on the low latency and high bandwidth of 5G, the system further realizes remote virtual teaching and training across locations, innovates the teaching interaction mode in virtual space, and demonstrates the broad possibilities of virtual reality.
Drawings
Fig. 1 is a schematic flow chart of a 5G virtual reality medical ultrasound training method provided in an embodiment of the present invention;
FIG. 2 is a schematic diagram of an initial scene of a virtual scene in an embodiment of the present invention;
FIG. 3 is a schematic view of a virtual environment after a trainee enters the virtual environment according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of the generation of a virtual B-mode ultrasound probe in an embodiment of the present invention;
FIG. 5 is a schematic diagram of a virtual environment showing virtual B-mode ultrasound images in accordance with an embodiment of the present invention;
FIG. 6 is a schematic view of a virtual scene after a local operation mirror is added in the present invention;
FIG. 7 is an enlarged view of a mirror image of a local operation in virtual training according to an embodiment of the present invention;
FIG. 8 is a schematic view of a virtual scene in which the observer watches the operation of the operator in accordance with the present invention;
fig. 9 is a schematic structural diagram of a 5G virtual reality medical ultrasound practical training system provided in an embodiment of the present invention;
fig. 10 is a schematic view of a line of sight in a 5G virtual reality medical ultrasound training method provided by an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the embodiments of the present invention, and it is obvious that the described embodiments are some embodiments of the present invention, but not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Example 1
Fig. 1 is a schematic flow chart of a 5G virtual reality medical ultrasound training method provided in an embodiment of the present invention, as shown in fig. 1,
s1: building a virtual environment, and acquiring the spatial position of each physical object and the spatial position of the virtual object in the virtual environment, wherein the physical object comprises: a handheld VR controller 203; the virtual object includes: one or a combination of a virtual B-ultrasonic machine 104, a virtual patient bed 101, a virtual examinee 102.
When building a virtual environment, fig. 2 is a schematic view of an initial scene of a virtual scene in an embodiment of the present invention, as shown in fig. 2, that is, a virtual B-ultrasonic machine 104, a display 103, a virtual hospital bed 101, and a virtual examinee 102 are provided in the virtual scene. The trainee 205 has not yet entered the virtual environment at this point.
The trainee 205 sits quietly, wears the VR head display 201, holds the VR controller 203, and enters the virtual B-mode ultrasound machine 104 room. The trainee 205 sits up in the room of the virtual B-mode machine 104, with the virtual B-mode machine 104 in front and the virtual patient 102 on the left. The trainee 205 can use a commercial virtual reality head mounted display (e.g., Oculus Quest 2, HP Reverb G2) for virtual reality B-mode ultrasound training without the need for specialized hardware such as simulation probes, simulated manikins, etc.
Fig. 3 is a schematic view of the virtual environment after the trainee 205 enters the virtual environment according to the embodiment of the present invention, and as shown in fig. 3, after the trainee 205 sits and wears the virtual reality head-mounted display, the spatial position of the virtual reality head-mounted display is obtained according to the position sensor carried in the virtual reality head-mounted display. While the trainee 205 is holding the handheld VR controller 203, the system uses a position sensor in the handheld VR controller 203 to obtain the spatial position of the handheld VR controller 203.
S2: and monitoring the space coordinates and the space angle of the physical object in real time.
After the positions of the trainee 205's head-mounted display and handheld VR controller 203 are measured, the trainee's head and hands continue to move, so these positions change dynamically. The spatial coordinates and corresponding angles of the head-mounted display and the handheld VR controller 203 therefore need to be tracked in real time: the angle of the head-mounted display mainly gives the viewing angle of the trainee 205, and the angle of the handheld VR controller 203 mainly gives the azimuth and elevation of the controller's central axis. Fig. 4 is a schematic diagram of generating a virtual B-mode ultrasound probe 401 in an embodiment of the present invention. As shown in Fig. 4, in practical applications the relative position of the B-mode ultrasound probe and the examinee can be expressed in various ways. Taking the region-orientation method as an example: first, the surface of the human body is gridded (the smaller the grid, the higher the precision), and each grid cell is approximated as a plane; the grid cell where the probe contacts the human body is called the region; the direction of the B-mode ultrasound probe is then expressed as an elevation and an azimuth in the rectangular coordinate system of that region's plane. In Fig. 4, the relative position of the virtual B-mode ultrasound probe 401 with respect to the gridded abdomen of the virtual examinee is (region: 403; azimuth: 33 degrees, elevation: 48 degrees).
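For illustration only, the following sketch shows one way such a region-orientation representation could be computed; the grid layout, angle conventions and function names are assumptions of this sketch rather than part of the embodiment.

```python
import numpy as np

# Minimal sketch of the region-orientation idea described above (not the patent's
# actual implementation): the body surface is approximated by a grid of small planar
# patches, and the probe pose is expressed as (region id, azimuth, elevation) relative
# to the patch it touches.

def probe_region_orientation(probe_pos, probe_axis, patches):
    """patches: list of dicts with 'id', 'center', 'normal', 'u' (in-plane x axis)."""
    # pick the patch whose center is closest to the probe contact point
    patch = min(patches, key=lambda p: np.linalg.norm(probe_pos - p["center"]))
    n = patch["normal"] / np.linalg.norm(patch["normal"])
    u = patch["u"] / np.linalg.norm(patch["u"])
    v = np.cross(n, u)                       # in-plane y axis of the patch frame
    d = probe_axis / np.linalg.norm(probe_axis)
    elevation = np.degrees(np.arcsin(np.clip(np.dot(d, n), -1.0, 1.0)))
    azimuth = np.degrees(np.arctan2(np.dot(d, v), np.dot(d, u)))
    return patch["id"], azimuth, elevation

# Example: a single abdominal patch "403" whose normal points along +z
patches = [{"id": 403, "center": np.array([0.0, 0.0, 0.0]),
            "normal": np.array([0.0, 0.0, 1.0]), "u": np.array([1.0, 0.0, 0.0])}]
print(probe_region_orientation(np.array([0.01, 0.0, 0.02]),
                               np.array([0.5, 0.3, 0.8]), patches))
```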
The virtual reality head-mounted display device of the embodiment of the invention realizes positioning and mapping using an Inside-out tracking mode. When Inside-out positioning is adopted, the environment acquisition unit is mainly the camera array on the VR head display 201. Inside-out positioning is based on SLAM (Simultaneous Localization and Mapping), also known as CML (Concurrent Mapping and Localization), which fuses the input data of multiple sensors through computer vision (CV) algorithms to determine the position of an object within a continuously updated digital map. The camera array on the VR headset can simultaneously achieve accurate positioning of the VR head display 201 and the two VR controllers 203 in three-dimensional physical space. In a typical B-mode ultrasound training session the trainee 205 is seated, the moving parts are mainly the head and hands, and the range of motion is small, so Inside-out tracking is well suited.
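The following toy sketch illustrates only the general idea of fusing multi-sensor data for inside-out tracking; it is a simple dead-reckoning-plus-correction loop, not a real SLAM implementation, and all signals and parameters are invented for illustration.

```python
import numpy as np

# Toy sensor-fusion loop in the spirit of inside-out tracking: the position is
# predicted from IMU acceleration and corrected, whenever available, by a slower
# vision-based position estimate from the camera array.

def fuse_pose(imu_samples, visual_fixes, dt=0.01, alpha=0.98):
    pos = np.zeros(3)
    vel = np.zeros(3)
    trajectory = []
    for i, accel in enumerate(imu_samples):
        vel += accel * dt                 # dead-reckoning from the IMU
        pos += vel * dt
        if i in visual_fixes:             # camera-based fix arrives occasionally
            pos = alpha * pos + (1 - alpha) * visual_fixes[i]
        trajectory.append(pos.copy())
    return trajectory

imu = [np.array([0.0, 0.0, 0.1])] * 100                   # invented IMU stream
fixes = {50: np.array([0.0, 0.0, 0.012]), 99: np.array([0.0, 0.0, 0.05])}
print(fuse_pose(imu, fixes)[-1])
```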
S3: and generating a corresponding virtual B-ultrasonic probe 401 according to the space coordinates and the space angle.
The system generates a corresponding virtual B-mode ultrasound probe 401 according to the position and inclination angle of the handheld VR controller 203, and the inclination angle of the virtual B-mode ultrasound probe 401 is kept consistent with that of the handheld VR controller 203.
To improve fidelity, the virtual space can be mapped to the real physical space in equal proportion, i.e., the place where the trainee 205 holds the VR controller 203 in physical space is exactly where the virtual B-mode ultrasound probe 401 appears in the virtual space.
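A minimal sketch of this equal-proportion mapping is shown below; the pose fields and the calibration offset are assumptions of the sketch.

```python
from dataclasses import dataclass

# Sketch of the 1:1 mapping described above: the virtual probe simply reuses the
# tracked position and orientation of the handheld VR controller, so the hand in
# physical space and the probe in virtual space stay co-located.

@dataclass
class Pose:
    x: float; y: float; z: float          # metres
    azimuth: float; elevation: float      # degrees

def controller_to_probe(controller_pose: Pose, origin_offset=(0.0, 0.0, 0.0)) -> Pose:
    ox, oy, oz = origin_offset             # aligns the physical and virtual origins
    return Pose(controller_pose.x - ox,
                controller_pose.y - oy,
                controller_pose.z - oz,
                controller_pose.azimuth,
                controller_pose.elevation)

probe = controller_to_probe(Pose(0.42, 0.95, 0.30, 33.0, 48.0))
print(probe)
```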
S4: according to the phase position and the relative angle between the virtual B-mode ultrasound probe 401 and the virtual object, a corresponding virtual B-mode ultrasound image 501 is generated and displayed to the trainee 205.
Determining a probing section according to the relative position of the virtual B-mode ultrasound probe 401 and the virtual examinee, acquiring a matched real B-mode ultrasound three-dimensional image according to the probing section, and taking the real B-mode ultrasound three-dimensional image as a virtual B-mode ultrasound image 501.
There are two ways to generate the virtual B-mode ultrasound image 501 in the B-mode ultrasound image generation unit: an interpolation method and a generation method. The interpolation method uses pre-recorded real B-mode ultrasound images, and an algorithm supplements the missing video frames so that playback is seamless and continuous.
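As a purely illustrative stand-in for such a frame-supplement algorithm, the simplest possible interpolator blends the two nearest recorded frames; real systems would use a learned or motion-compensated interpolator.

```python
import numpy as np

# Minimal sketch of the frame-supplement idea: where the pre-recorded sequence has
# no frame for an intermediate probe position, an in-between frame is approximated
# by linearly blending the two nearest recorded frames.

def interpolate_frame(frame_a: np.ndarray, frame_b: np.ndarray, t: float) -> np.ndarray:
    """t in [0, 1]: 0 returns frame_a, 1 returns frame_b."""
    return ((1.0 - t) * frame_a.astype(np.float32)
            + t * frame_b.astype(np.float32)).astype(frame_a.dtype)

a = np.zeros((480, 640), dtype=np.uint8)       # recorded frame at position p0
b = np.full((480, 640), 200, dtype=np.uint8)   # recorded frame at position p1
mid = interpolate_frame(a, b, 0.5)             # synthesised frame halfway between
print(mid[0, 0])                                # -> 100
```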
The generation method calculates and generates the B-mode ultrasound image in real time from a mathematical model of the human body and a model of the acoustic properties of sound waves in human tissue. Specifically, the virtual B-mode ultrasound image 501 can be generated using the method disclosed in the document "simulation of ultrasound image based on geometric acoustics" (Eurographics Workshop on Visual Computing for Biology and Medicine, 2012).
With the rapid development of big data, pre-recorded real B-mode ultrasound images of various cases are relatively easy to obtain, and real-time processing and frame interpolation of B-mode ultrasound images with artificial intelligence are relatively mature, so the interpolation method is the preferred approach. In the interpolation method, the B-mode ultrasound image database stores all pre-recorded real B-mode ultrasound images; three-dimensional images are used so that they can later be reconstructed. When a real B-mode ultrasound image is pre-recorded into the database, the related feature information (pre-recorded feature information) is recorded at the same time, including but not limited to:
Examinee code: the examinee code corresponding to the pre-recorded real B-mode ultrasound image. The examinee code is unique within the B-mode ultrasound image database; the codes of all examinees whose real B-mode ultrasound images are pre-recorded in the database are different from one another.
Time of acquisition
Category: two-dimensional / three-dimensional
Human body part
Diagnosed disease type
Static information of the examinee:
Sex
Age
Height
Weight
Other diagnostically relevant information
Dynamic information of the image acquisition:
Body posture of the examinee at the time of acquisition (e.g., supine / lying on the side)
Physical state of the examinee at the time of acquisition (e.g., breath-holding / inhaling)
Relative position of the B-mode ultrasound probe and the examinee's body part
Relative region of the B-mode ultrasound probe and the examinee's body part
Relative angle of the B-mode ultrasound probe and the examinee's body part
Relative distance of the B-mode ultrasound probe and the examinee's body part
Range to which the relative pressure value of the B-mode ultrasound probe against the examinee's body part belongs.
According to the real-time feature information (the training scene settings, the virtual B-mode ultrasound probe 401 settings transmitted by the virtual scene generation unit, and the relative position information between the virtual B-mode ultrasound probe 401 and the virtual examinee), the best-matching real B-mode ultrasound three-dimensional image is selected from the B-mode ultrasound image database for display.
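The matching step could, for example, be sketched as a weighted comparison of the real-time feature information against each record's pre-recorded feature information; the field names and weights below are assumptions of this sketch, not the embodiment's actual scoring rule.

```python
# Sketch of selecting the best-matching pre-recorded volume: each database record
# carries pre-recorded feature information, and the record whose dynamic features
# are closest to the real-time probe state is chosen.

def match_score(record, query, weights=None):
    w = weights or {"area": 5.0, "azimuth": 1.0, "elevation": 1.0, "pressure": 2.0}
    score = 0.0
    score += w["area"] * (0.0 if record["area"] == query["area"] else 1.0)
    score += w["azimuth"] * abs(record["azimuth"] - query["azimuth"]) / 180.0
    score += w["elevation"] * abs(record["elevation"] - query["elevation"]) / 90.0
    score += w["pressure"] * (0.0 if record["pressure_range"][0] <= query["pressure"]
                              <= record["pressure_range"][1] else 1.0)
    return score

def select_best_volume(database, query):
    return min(database, key=lambda rec: match_score(rec, query))

database = [
    {"id": "A-001", "area": 403, "azimuth": 33, "elevation": 48, "pressure_range": (1, 3)},
    {"id": "A-002", "area": 403, "azimuth": 90, "elevation": 20, "pressure_range": (1, 3)},
]
query = {"area": 403, "azimuth": 35, "elevation": 50, "pressure": 2}
print(select_best_volume(database, query)["id"])   # -> "A-001"
```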
Fig. 5 is a schematic view of a virtual environment showing a virtual B-mode ultrasound image 501 according to an embodiment of the present invention, as shown in fig. 5, the VR controller 203 is mapped to a virtual B-mode ultrasound probe 401 in a virtual space, and the trainee 205 performs an examination on the virtual examinee 102 using the virtual B-mode ultrasound probe 401 in the room of the virtual B-mode ultrasound machine 104. The virtual scene generation unit displays the B-mode ultrasound image on a virtual B-mode ultrasound display for the trainee 205 to view.
When the virtual B-mode ultrasound image 501 is displayed on the virtual display 103, the pre-recorded three-dimensional ultrasound image can be transformed and stitched based on a 3D scale-invariant feature transform (SIFT) algorithm (IEEE Comput Graph Appl. 2011 Mar-Apr;31(2):36-48). Alternatively, a B-mode ultrasound image processing model trained with deep learning can also achieve good results.
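As a simplified stand-in for producing a two-dimensional display frame from the matched three-dimensional volume (and not the SIFT-based stitching cited above), the voxel grid can be resampled along the probe's scan plane; grid spacing, plane size and nearest-neighbour sampling in this sketch are assumptions.

```python
import numpy as np

# Resample a 3-D volume on a plane defined by an origin and two in-plane axes
# derived from the probe angles, producing a 2-D frame for the virtual display.

def slice_volume(volume, origin, u_axis, v_axis, size=(128, 128), step=1.0):
    h, w = size
    out = np.zeros((h, w), dtype=volume.dtype)
    for i in range(h):
        for j in range(w):
            p = origin + (i - h / 2) * step * u_axis + (j - w / 2) * step * v_axis
            z, y, x = np.round(p).astype(int)
            if (0 <= z < volume.shape[0] and 0 <= y < volume.shape[1]
                    and 0 <= x < volume.shape[2]):
                out[i, j] = volume[z, y, x]
    return out

vol = np.random.randint(0, 255, (64, 64, 64), dtype=np.uint8)  # placeholder 3-D data
frame = slice_volume(vol, origin=np.array([32.0, 32.0, 32.0]),
                     u_axis=np.array([0.0, 1.0, 0.0]),
                     v_axis=np.array([0.0, 0.0, 1.0]), size=(64, 64))
print(frame.shape)
```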
The trainee 205 can change the settings of the virtual B-mode ultrasound machine 104 using the buttons on the VR controller 203, and can adjust the position of the virtual B-mode ultrasound probe 401 by moving the hand; the image on the virtual B-mode ultrasound display changes accordingly. The trainee 205 captures a diagnostic image using a button on the VR controller 203, then takes off the VR head display 201 and exits the virtual B-mode ultrasound machine 104 room. The trainee 205's complete operation in the virtual B-mode ultrasound machine 104 room is automatically recorded by the system. The trainee 205 makes a diagnosis based on the captured B-mode ultrasound image, and the diagnosis is compared with the feature information (diagnosed disease type) of the pre-recorded B-mode ultrasound image used in the case setting to obtain an assessment score.
S4: and generating a local operation mirror image corresponding to the pose of the virtual B-ultrasonic probe within the visual line range of the trainee in the virtual environment, wherein the pose of the mark corresponding to the virtual B-ultrasonic probe in the local operation mirror image is consistent with the pose of the virtual B-ultrasonic probe in real time.
To facilitate the trainee 205 to observe the position and angle of the virtual type-B ultrasound probe 401, a local operation mirror corresponding to the pose of the virtual type-B ultrasound probe 401 can be generated within the trainee's 205 line of sight in the virtual environment, wherein the pose of the marker in the local operation mirror corresponding to the virtual type-B ultrasound probe 401 coincides with the pose of the virtual type-B ultrasound probe 401 in real time. When the virtual B-mode ultrasound probe 401 is occluded by the body of the virtual examinee after the posture is changed, the trainee can use the local operation mirror image as a reference to help the trainee to quickly locate.
The trainee 205 starts the virtual B-mode ultrasound machine 104 using the VR controller 203. The virtual scene generation unit displays the VR controller 203 as a virtual type-B ultrasound probe 401 while displaying the partial operation mirror on the left side of the type-B ultrasound machine display.
Fig. 6 is a schematic view of the virtual scene after the local operation mirror image is added in the present invention. As shown in Fig. 6, a local operation mirror image is added in front of the trainee 205's line of sight. The local operation mirror image includes a local body-surface image 601 of the virtual examinee and an image 603 of the virtual B-mode ultrasound probe. The virtual B-mode ultrasound probe image 603 is synchronized with the trainee 205's hand movements in real time. In practice, this view cannot be seen directly because of line-of-sight limitations (the trainee 205's line of sight is mainly focused on the B-mode ultrasound image) or because of the examinee's posture (e.g., lying on the side). This operation enhancement displays the local operation mirror image and the B-mode ultrasound image simultaneously within the trainee 205's line of sight, so the trainee 205 can accurately control the hand movement and acquire an accurate image.
The trainee 205 can see his own virtual hand and the hand-held virtual B-mode ultrasound probe 401 in the virtual scene. The relative position of the virtual B-mode ultrasound probe 401 and virtual examinee A is consistent with that in physical space. When the trainee 205 moves the VR controller 203, the virtual B-mode ultrasound probe 401 in the virtual space moves accordingly. When the virtual B-mode ultrasound probe 401 moves over the abdomen of virtual examinee A, it appears synchronously in the local operation mirror image, and the trainee 205 can grasp in real time, through the local operation mirror image, the relative position of the virtual B-mode ultrasound probe 401 and the abdomen of virtual examinee A.
Without a simulated mannequin, it is difficult for the trainee 205 to accurately position the virtual B-mode probe 401 in the virtual scene. The local operation mirror image shows the action of the virtual B-mode ultrasonic probe 401 in front of the trainee 205 without changing the operation mode of the trainee 205, thereby effectively solving the problem of accurate positioning of the virtual B-mode ultrasonic probe 401.
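A minimal sketch of keeping the mirror marker consistent with the probe pose in real time might look as follows; the anchor offset and scale are illustrative assumptions of the sketch.

```python
from dataclasses import dataclass

# Each rendered frame, the probe's pose relative to the virtual examinee is copied
# onto the miniature marker, which is drawn at a fixed anchor inside the trainee's
# field of view.

@dataclass
class MirrorMarker:
    position: tuple
    azimuth: float
    elevation: float

def update_mirror(probe_pos, probe_azimuth, probe_elevation,
                  examinee_origin, mirror_anchor=(0.0, 1.3, -0.6), scale=0.3):
    # pose of the probe relative to the examinee, shrunk into the mirror panel
    rel = tuple(scale * (p - o) + a
                for p, o, a in zip(probe_pos, examinee_origin, mirror_anchor))
    return MirrorMarker(rel, probe_azimuth, probe_elevation)

marker = update_mirror((0.42, 0.95, 0.30), 33.0, 48.0, (0.40, 0.90, 0.25))
print(marker)
```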
In the embodiment of the invention, the commercial virtual reality head-mounted display is used for carrying out virtual reality B-mode ultrasonic training, special hardware (a simulation probe and a simulation human body model) is not required, the cost is low, and the application is convenient.
Case 1
Case description
The trainees 205 were examined for their ability to diagnose fatty liver.
Training scenario settings
Human body part: abdomen
Diagnosed disease type: fatty liver (mild)
Static information of the examinee
Sex: male
Age: 20-50
Height: 160 cm-180 cm
Weight: 60 kg-90 kg
Other relevant information: other information helpful to diagnosis
Initial virtual scene creation
The virtual scene creation unit searches the B-mode ultrasound image database for a pre-recorded B-mode ultrasound image matching the training scene settings; after a matching pre-recorded B-mode ultrasound image is found, a virtual examinee is created according to its pre-recorded feature information, and the virtual scene is thereby created:
virtual B-mode machine 104 room: 3 m long, 3 m wide and 3 m high
Virtual B-mode ultrasound machine 104:
virtual bed 101: height of 1.4 m and length of 2 m
Virtual examinee A:
Examinee code: A
25-year-old male, height 1.75 m, weight 85 kg (obese)
Other relevant information: chief complaint on presentation: liver function abnormality found 2 days ago …
There is only one person in this virtual scene, and all functional units run on the trainee 205's VR head display 201.
Case 2
Case description
The instructor 801 instructs the trainee 205 to diagnose gallstone.
Training scenario settings
Human body part: abdomen
Diagnosed disease type: multiple gallstones
Static information of the examinee
Sex: male
Age: 55
Height: 173 cm
Weight: 71 kg
Other relevant information: other information helpful to diagnosis
Initial virtual scene creation
The virtual scene creation unit searches the B-mode ultrasound image database for a pre-recorded B-mode ultrasound image matching the training scene settings; after a matching pre-recorded B-mode ultrasound image is found, a virtual examinee is created according to its pre-recorded feature information, and the virtual scene is thereby created:
virtual B-mode ultrasound machine 104 room: 3 m long, 3 m wide and 3 m high
Virtual B-mode ultrasound machine 104:
virtual bed 101: height of 1.4 m and length of 2 m
Virtual examinee A:
Examinee code: A
55-year-old male, height 1.73 m, weight 71 kg
Other relevant information: chief complaint on presentation: right upper abdominal pain …
There are three virtual roles in this virtual scene:
the instructor: corresponding real guiding teacher 801
Trainee 205: corresponding to a real trainee 205
The patients to be diagnosed: virtual examinee A
In a further improvement of the embodiment of the present invention, the trainee 205 may request the virtual examinee 102 to change posture (e.g., supine / lying on the side) through a voice command; the environment acquisition unit transmits the voice command to the virtual scene generation unit, which recognizes the command and changes the posture of the virtual examinee 102 accordingly, and the trainee 205 then examines the changed posture.
The trainee 205 can also request the virtual examinee 102 to change state (e.g., inhale / hold the breath) through a voice command; the environment acquisition unit transmits the voice command to the virtual scene generation unit, which recognizes the command and changes the state of the virtual examinee 102 accordingly, and the B-mode ultrasound image changes correspondingly.
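A minimal sketch of mapping recognized voice commands to posture or state changes of the virtual examinee is given below; the command vocabulary and state fields are assumptions, and a real system would sit behind a speech-recognition front end.

```python
# Map recognised voice commands to changes in the virtual examinee's posture or
# breathing state; the scene generation unit re-renders from the updated fields.

POSTURES = {"lie on your back": "supine", "lie on your side": "lateral"}
STATES = {"hold your breath": "breath_hold", "breathe in": "inhale"}

def apply_voice_command(examinee: dict, command: str) -> dict:
    text = command.lower().strip()
    if text in POSTURES:
        examinee["posture"] = POSTURES[text]
    elif text in STATES:
        examinee["state"] = STATES[text]
    return examinee

examinee = {"posture": "supine", "state": "normal"}
print(apply_voice_command(examinee, "Hold your breath"))
```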
Further, when the operation guidance mark 605 indicates compression pressure, colors of different intensity are used to mark the pressure level.
According to the feature information of the pre-recorded real B-mode ultrasound image, the optimal diagnostic position is region 403, horizontal angle α and elevation angle β, as shown by the dotted line in the figure. Because this is an assessment drill, the operation guidance mark 605 is not displayed.
The trainee 205 instructs virtual examinee A to hold his breath, and the abdominal state of virtual examinee A changes to breath-holding.
As the virtual B-mode ultrasound probe 401 moves over the abdomen of virtual examinee A, the relative pressure remains normal.
Fig. 7 is an enlarged schematic view of the local operation mirror image during virtual training according to an embodiment of the present invention. As shown in Fig. 7, when the virtual B-mode ultrasound probe 401 moves to the abdominal position (region Z, horizontal angle α, elevation angle β, normal pressure), the real-time B-mode ultrasound image is unclear because virtual examinee A is obese and abdominal fat has accumulated. As the virtual B-mode ultrasound probe 401 is pressed downward, the virtual body-surface region Z of virtual examinee A gradually turns red, the visual feedback shows that the pressure is increasing, and the optimal position is reached. The pressure visual feedback in the local operation mirror image can help the trainee 205 master the force of the operation without a simulated human body model.
Further, operation guidance marks 605 are generated in the local operation mirror image, wherein the operation guidance marks 605 include: a virtual guide line, a compression pressure indication, or a combination thereof.
The local operation mirror image can also display operation guidance marks 605 for a particular case scenario. The operation guidance mark 605 is a visible virtual direction line that indicates the optimal position (region, angle, distance) to the trainee 205. When the virtual B-mode ultrasound probe 401 coincides with the operation guidance mark 605, a correct B-mode ultrasound image can be obtained. The operation guidance mark 605 in the local operation mirror image helps the trainee 205 quickly locate the diagnostic site, thereby enhancing the training effect.
The local operation mirror image can also display the relative pressure of the virtual B-mode ultrasound probe 401 in contact with the virtual examinee. When the virtual B-mode ultrasound probe 401 first contacts the virtual examinee, the relative pressure is normal and the local body-surface mirror image of the virtual examinee is displayed normally; as the virtual B-mode ultrasound probe 401 is pressed further down, the color of the virtual examinee's local body-surface image changes (e.g., turns red), and the relative pressure is indicated by the deepening red block 701.
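For illustration, the pressure visual feedback could be sketched as a mapping from relative pressure to a red tint on the local body-surface mirror image; the thresholds and RGB values are assumptions of this sketch.

```python
# Map the relative pressure between the probe and the body surface to a red tint,
# deeper red meaning higher pressure.

def pressure_to_color(pressure: float, max_pressure: float = 10.0):
    """Return an (R, G, B) tuple; neutral body-surface color at 0, saturated red near max."""
    level = max(0.0, min(pressure / max_pressure, 1.0))
    base = (224, 189, 166)                       # neutral body-surface color
    red = (220, 40, 40)
    return tuple(int((1 - level) * b + level * r) for b, r in zip(base, red))

for p in (0.0, 3.0, 7.0, 10.0):
    print(p, pressure_to_color(p))
```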
The trainee 205 can manually adjust the position and angle of the local operation mirror in the virtual space for more efficient viewing.
In a further improvement of the embodiment of the present invention, both the trainee and the teacher wear VR head displays 201 positioned in the Inside-out manner. The environment acquisition unit of each VR head display 201 transmits the acquired surrounding environment data to the physical object identification and positioning unit on that VR head display 201. The physical object identification and positioning unit analyzes the data acquired by the environment acquisition unit in real time, locates the three-dimensional position in physical space in real time, and transmits the positioning information and user information to the virtual scene generation unit on the remote application server through the 5G network. The virtual scene generation unit generates the three-dimensional virtual scene, adds the users to the scene according to the user information, and sends the virtual reality scene through the 5G network to the virtual scene display unit of each remote user's VR head display 201. The trainee and the teacher thus enter the same three-dimensional virtual scene.
Since the teacher is to give guidance to the trainee, there is also a bi-directional voice transmission path between the teacher VR head display 201 and the trainee VR head display 201.
The teacher and the trainee enter the virtual training together. There are two roles in the virtual B-mode ultrasound machine 104 room:
the operator 802: and carrying out B-ultrasonic examination operation.
Observer 801: the operation of the operator 802 is observed.
The virtual training is divided into two stages:
a demonstration stage: the teacher is the operator 802, and performs the B-ultrasonic examination operation, and the trainee is the observer 801, and performs the observation.
In the demonstration phase, the teacher performs an operation demonstration by sitting on the real seat 2011 as the role of the operator 802, and the trainee performs observation as the role of the observer 801.
The observer 801 can watch the operation of the operator 802 in real time in the virtual B-mode ultrasound machine 104 room. Using the VR controller 203, the observer 801 can freely move to any three-dimensional position in the virtual B-mode ultrasound machine 104 room, and adjusts the viewing angle by moving the VR head display 201.
The observer 801 and the teacher acting as operator 802 communicate via 5G voice. To make the communication between the two parties visible, the observer 801 is given an anthropomorphic avatar in the virtual scene. The operator 802 can instruct the observer 801 by voice to observe from a specified position, or can directly drag the observer 801's anthropomorphic avatar to the specified position.
Fig. 8 is a schematic view of the virtual scene in which the observer 801 watches the operation of the operator 802 during the demonstration phase of the present invention; as shown in Fig. 8, the observer 801 can watch the operator 802's operation in real time in the virtual B-mode ultrasound machine 104 room, freely moving to any three-dimensional position with the VR controller 203 and adjusting the viewing angle by moving the VR head display 201.
Thanks to the virtualization offered by virtual reality, the observer 801 can study the operator 802's operation from any position and at any angle in three-dimensional space. Besides voice instructions, the operator 802 can directly drag the observer 801's virtual character to a designated position in the virtual space, and the observer 801's viewpoint shifts synchronously to the corresponding position, which makes observation convenient.
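A minimal sketch of this drag-to-reposition interaction is given below: moving the observer's avatar also moves the camera from which that observer's scene is rendered. Field names and the fixed eye height are assumptions of the sketch.

```python
from dataclasses import dataclass

# Dragging the observer's anthropomorphic avatar also relocates the observer's
# rendering camera, so their view jumps to the new spot.

@dataclass
class Avatar:
    position: tuple          # avatar feet position in the virtual room (y is up)
    eye_height: float = 1.6

    def camera_pose(self, look_at):
        eye = (self.position[0], self.position[1] + self.eye_height, self.position[2])
        return {"eye": eye, "look_at": look_at}

def drag_avatar(avatar: Avatar, target_position, look_at):
    avatar.position = target_position            # operator drops the avatar here
    return avatar.camera_pose(look_at)           # observer's view is re-rendered

observer = Avatar(position=(2.0, 0.0, 1.0))
print(drag_avatar(observer, (1.0, 0.0, 0.5), look_at=(0.5, 1.0, 0.0)))
```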
After the demonstration phase ends, the training phase begins: the trainee takes the role of operator 802 and performs the B-mode ultrasound examination, while the teacher takes the role of observer 801 and observes.
The teacher gives up the role of operator 802 in the virtual environment, which is taken over by the trainee, and the teacher assumes the role of observer 801. Apart from the role change, the flow is the same as in the demonstration phase.
The operator 802 and the observer 801 communicate by 5G voice.
In the embodiment of the invention, practical-operation teaching in a remote virtual space is realized by using a low-latency, high-speed network (e.g., 5G / optical fiber / WiFi 6).
In practical applications, the teacher and trainee are remotely connected to the same virtual environment. "Remotely online" in the embodiment of the invention means: the teacher may be in the same classroom as the trainee, or in a different classroom, or they may be in different administrative regions; for example, the teacher may be located in Jing'an District, Shanghai, while the trainee is located in Zhabei District, Shanghai, Chaoyang District, Beijing, or Washington, D.C., USA.
Example 2
Corresponding to the embodiment 1 of the invention, the embodiment 2 of the invention provides a 5G virtual reality medical ultrasonic practical training system.
Fig. 9 is a schematic structural diagram of a 5G virtual reality medical ultrasound training system provided in an embodiment of the present invention, and as shown in fig. 9, the system includes: an environment acquisition unit, a physical object identification and positioning unit, a virtual B-ultrasonic image generation unit, a virtual scene generation unit and a virtual scene display unit, wherein,
the virtual scene generation unit is used for building a virtual environment; the virtual scene generating unit fuses a virtual scene, a virtual object (the trainee 205 holds the ultrasonic probe) and the virtual B-ultrasonic image 501, and the virtual scene, the virtual object and the virtual B-ultrasonic image are displayed to the trainee 205 through the virtual scene display unit;
the physical object identification and location unit is to: acquiring the spatial position of each physical object and the spatial position of a virtual object in a virtual environment, wherein the physical object comprises: a handheld VR controller; the virtual object includes: one or a combination of a virtual B-ultrasonic machine, a virtual sickbed and a virtual examinee; the physical object recognition and positioning unit is arranged on the virtual reality head-mounted display device and is responsible for analyzing data acquired by the environment acquisition unit in real time, realizing real-time positioning of the three-dimensional positions of the VR head display 201 and the VR controller 203 in a physical space, and sending positioning information to the virtual scene generation unit in real time. The physical object recognition and positioning unit positions the head (through the VR head display 201) and the hand (through the VR controller 203) of the trainee 205 in real time, and the virtual scene generation unit converts the positioning information of the physical space into the visual angle of the trainee 205 and the position of the virtual B-ultrasonic probe 401 in the virtual space.
The environment acquisition unit is used for monitoring the space coordinate and the space angle of the physical object in real time; the environment acquisition unit runs on the virtual reality head mounted display device and is responsible for acquiring environmental data (video data, audio data, positioning data) around the trainee 205 so as to position and track the trainee 205 in the virtual space. When the Outside-in positioning mode is adopted, the environment acquisition unit is the external positioning base station.
The virtual scene display unit is used for generating a corresponding virtual B ultrasonic probe according to the space coordinate and the space angle and displaying the virtual B ultrasonic probe to a user; the virtual scene display unit runs on the virtual reality head-mounted display device, is connected with the display unit of the virtual reality head-mounted display device, and enters a virtual reality space to perform practical training after the trainee 205 wears the VR head display 201. The virtual scene generation unit is responsible for generating a three-dimensional virtual scene, can run on a virtual reality head-mounted display device or a remote server, and must run on the remote server when the virtual scene relates to multi-person cooperation. The initial virtual scene is a virtual B-ultrasonic machine 104 room, and the scene comprises a virtual B-ultrasonic machine 104, a virtual sickbed 101 and a virtual examinee. The virtual objects are three-dimensional objects, and the shapes of the virtual objects are consistent with those of corresponding real objects.
The virtual B-mode ultrasound image generation unit is used for generating a corresponding virtual B-mode ultrasound image according to the relative position and relative angle between the virtual B-mode ultrasound probe and the virtual object, and for displaying the corresponding virtual B-mode ultrasound image to the trainee. The virtual scene generation unit is also used to transmit the relative position (region, angle, distance) of the virtual B-mode ultrasound probe 401 and the virtual examinee to the virtual B-mode ultrasound image generation unit. The virtual B-mode ultrasound image generation unit generates the corresponding B-mode ultrasound image according to the practical training scene settings, the posture and state of the virtual examinee 102, and the relative position of the virtual B-mode ultrasound probe 401 and the virtual examinee.
The virtual B-ultrasonic image generation unit and the virtual scene generation unit run at the same position, such as a virtual reality head-mounted display device or a remote server.
The working principle of the system is as follows: the environment acquisition unit acquires environmental data (video data, positioning data) around the trainee 205; the physical object recognition and positioning unit recognizes a physical object in the surrounding physical environment (the trainee 205, the trainee 205 holds the VR controller 203), positions the position of the physical object in the surrounding physical environment, and transmits the position to the virtual scene generation unit; the environment acquisition unit (the camera on the VR head display 201) starts to continue video acquisition. The physical object recognition and localization unit begins to continuously track the trainee's 205 movements.
The virtual scene generating unit generates initial virtual scenes (a virtual B-ultrasonic machine 104, a virtual sickbed 101 and a virtual examinee 102), converts a physical object (the examinee 205 holds the VR controller 203 by hand) into a virtual object (the examinee 205 holds the ultrasonic probe by hand), adds the virtual object to a corresponding position mapped to the virtual scene, and transmits the relative position of the virtual object (the B-ultrasonic probe) and the virtual examinee 102 to the virtual B-ultrasonic image generating unit;
the virtual B-mode ultrasound image generation unit generates a corresponding virtual B-mode ultrasound image 501 from the relative position of the virtual object (B-mode ultrasound probe) and the virtual examinee 102, and transmits the generated image to the virtual scene generation unit.
The system also comprises a B-ultrasonic image processing unit which is used for processing the real B-ultrasonic three-dimensional image according to the difference between the pre-recorded characteristic information and the real-time characteristic information of the real B-ultrasonic image and outputting the real-time B-ultrasonic image to the virtual scene generating unit.
Example 3
The embodiment 3 of the invention is added with the following steps on the basis of the embodiment 1:
the system also supports a virtual tutorial mode. At this time, two or more persons participate in the virtual scene, one of which is a teacher and the other is a trainee. The teacher diagnoses the virtual examinee using the virtual B-ultrasonic machine, and the trainee watches the diagnosis process of the teacher.
The trainee can use the VR controller to move the three-dimensional position at will in the virtual environment, and the visual angle is adjusted through the movement of the VR head display. Correspondingly, the system automatically senses the position of the trainee in the virtual environment, and updates the virtual scene information which can be seen in glasses of the trainee in real time according to the corresponding position, wherein the virtual environment is also called as a virtual B-ultrasonic machine room.
In order to facilitate the teacher to guide the trainee, the teacher can indicate the trainee to observe at the designated position through 5G voice, the trainee can feed back through 5G voice, or the teacher can use the VR controller to freely move the position of the trainee in a virtual environment, so that the trainee can conveniently observe, and the teaching effect is further improved.
Further, the teacher can also bring the trainee into the teacher's viewpoint. The trainee then looks from the same angle as the teacher and sees the same content. For example, when the teacher wants the trainee to study the movement of the teacher's hand, the teacher brings the trainee into the teacher's viewpoint, gazes at and moves his own (virtual) hand while giving a voice explanation, and the trainee observes from the teacher's viewpoint, which gives a better effect.
The trainee can also actively switch to the teacher's viewpoint during normal observation in order to learn better.
Further, fig. 10 is a schematic view of a line of sight in a 5G virtual reality medical ultrasound practical training method provided by the embodiment of the present invention, as shown in fig. 10, the embodiment of the present invention further includes a line of sight visualization function:
in the virtual teaching mode, trainees sometimes need to track the sight of teachers for better observation; meanwhile, teachers sometimes need to master the sight of trainees so as to improve teaching effects.
The line connecting the fovea 1001 and the pupil center 1003 is called the visual axis, i.e. the actual gaze direction of the human eye, and extends outward into the virtual scene.
In the virtual scene, the virtual line of sight 1007 is shown as a translucent visible light bar from the virtual character's eyes to the virtual examinee 102; it is hidden by default. In practice, the bisector of the angle between the lines of sight of the two eyes may be used as the generated virtual line of sight, or the line of sight of one eye may be used. The environment acquisition unit tracks the line of sight 1005 of the wearer's eye 1000 using an eye-tracking system.
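For illustration, the single displayed gaze ray could be built from the two tracked eye directions as sketched below; the segment length and vector conventions are assumptions of the sketch.

```python
import numpy as np

# The ray starts midway between the eyes and points along the normalised average
# (angle bisector) of the left and right visual axes, then is extended a fixed
# length into the scene as the translucent light bar.

def virtual_gaze_ray(left_eye, left_dir, right_eye, right_dir, length=2.0):
    origin = (np.asarray(left_eye) + np.asarray(right_eye)) / 2.0
    l = np.asarray(left_dir) / np.linalg.norm(left_dir)
    r = np.asarray(right_dir) / np.linalg.norm(right_dir)
    direction = l + r
    direction /= np.linalg.norm(direction)       # bisector of the two visual axes
    return origin, origin + length * direction   # endpoints of the visible bar

start, end = virtual_gaze_ray([-0.03, 1.6, 0.0], [0.1, -0.3, 1.0],
                              [0.03, 1.6, 0.0], [-0.1, -0.3, 1.0])
print(start, end)
```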
The user may control the display and hiding of the virtual line of sight, for example:
the teacher needs all trainees to track their own sight, and at this time, all trainees can see the virtual sight of the teacher, and the trainees' sight is invisible.
The trainee A can set up the sight of tracking the teacher in a personalized way, at the moment, only the trainee A can see the virtual sight of the teacher, and other trainees and the teacher can not see the virtual sight of the teacher.
The teacher needs to guide the line of sight of the trainee B, and at this time, the teacher can see the virtual line of sight of the trainee B and the virtual line of sight of the teacher, and when the two lines of sight converge on one virtual trainee 102, the guidance is completed.
The above examples are only intended to illustrate the technical solution of the present invention, but not to limit it; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.

Claims (10)

1. A virtual reality medical ultrasound practical training method is characterized by comprising the following steps:
building a virtual environment, and acquiring the spatial position of each physical object and the spatial position of the virtual object in the virtual environment, wherein the physical object comprises: a handheld VR controller; the virtual object includes: one or a combination of a virtual B-ultrasonic machine, a virtual sickbed and a virtual examinee;
monitoring the space coordinate and the space angle of the physical object in real time;
generating a corresponding virtual B ultrasonic probe according to the space coordinate and the space angle;
generating a corresponding virtual B-mode ultrasound image according to the relative position and relative angle between the virtual B-mode ultrasound probe and the virtual object, and displaying the corresponding virtual B-mode ultrasound image to a trainee;
and generating, within the trainee's line of sight in the virtual environment, a local operation mirror image corresponding to the pose of the virtual B-mode ultrasound probe, wherein the pose of the marker corresponding to the virtual B-mode ultrasound probe in the local operation mirror image is consistent with the pose of the virtual B-mode ultrasound probe in real time.
2. The virtual reality medical ultrasound practical training method according to claim 1, wherein the generating of the corresponding virtual B-mode ultrasound image includes:
acquiring a matched real B-mode ultrasound three-dimensional image according to the probed section, and using the real B-mode ultrasound three-dimensional image as the virtual B-mode ultrasound image, wherein generating the virtual B-mode ultrasound image comprises: supplementing missing video frames of the pre-recorded real B-mode ultrasound image corresponding to the probed section by using an interpolation algorithm, or generating a virtual B-mode ultrasound image in real time by using a generation algorithm according to a pre-established human body mathematical model and a model of the acoustic properties of sound waves in human tissue.
3. The virtual reality medical ultrasound practical training method according to claim 1, further comprising:
acquiring a voice instruction from the trainee, and generating the virtual examinee under the corresponding instruction, wherein the voice instruction includes: an instruction asking the virtual examinee to change posture.
4. The virtual reality medical ultrasound practical training method according to claim 3, wherein, when the virtual B-mode ultrasound probe is occluded by the body of the virtual examinee after the posture change, a local operation mirror image corresponding to the pose of the virtual B-mode ultrasound probe is generated within the trainee's line of sight in the virtual environment, wherein the pose of the marker corresponding to the virtual B-mode ultrasound probe in the local operation mirror image is consistent with the pose of the virtual B-mode ultrasound probe in real time.
5. The virtual reality medical ultrasound practical training method according to claim 1, further comprising:
generating operation guidance marks in the local operation mirror image, wherein the operation guidance marks comprise: a virtual guide line, a compression pressure indication, or a combination thereof.
6. The virtual reality medical ultrasound practical training method according to claim 5, wherein, when the operation guidance mark is a compression pressure indication, color shades of different depth are used to distinguish the pressure level.
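A possible color mapping for such a compression pressure indication, with the hue and pressure range chosen arbitrarily for illustration:

```python
def pressure_to_rgb(pressure, max_pressure):
    """Map a compression pressure in [0, max_pressure] to a light-to-dark blue shade."""
    level = max(0.0, min(1.0, pressure / max_pressure))
    light, dark = (200, 220, 255), (0, 40, 160)   # pale blue -> deep blue
    return tuple(round(l + (d - l) * level) for l, d in zip(light, dark))

# e.g. pressure_to_rgb(2.0, 10.0) gives a pale shade, pressure_to_rgb(9.0, 10.0) a deep one
```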
7. The virtual reality medical ultrasound practical training method according to claim 1, further comprising:
when a teacher teaches trainees based on the virtual environment, generating a virtualized image of the corresponding person in the virtual environment;
the teacher adjusts the observation position of a trainee by dragging the trainee's virtualized image to a designated position in the virtual environment, and virtual scene information for the trainee is generated according to the observation position.
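A minimal sketch of this interaction (class and method names are assumptions, not taken from the patent): dragging the trainee's avatar simply re-seats the observation position from which that trainee's scene description is generated.

```python
class TraineeView:
    """Holds one trainee's observation position inside the shared virtual environment."""

    def __init__(self, position, look_at):
        self.position = list(position)
        self.look_at = list(look_at)

    def drag_to(self, new_position):
        """Called when the teacher drags this trainee's virtualized image to a designated spot."""
        self.position = list(new_position)

    def scene_for_trainee(self, scene_objects):
        """Generate the trainee's virtual scene description from the current observation position."""
        return {
            "camera_position": self.position,
            "camera_target": self.look_at,
            "objects": scene_objects,   # virtual B-ultrasonic machine, sickbed, examinee, ...
        }
```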
8. The virtual reality medical ultrasound practical training method according to claim 7, further comprising:
synchronizing the viewing-angle information of the trainee into the virtual scene information of the trainee, and generating the virtual scene information for the trainee according to the viewing-angle information.
9. The virtual reality medical ultrasound practical training method according to claim 7, further comprising:
generating eye gaze markers for the teacher and/or the trainees in the virtual environment.
10. A virtual reality medical ultrasound training system, the system comprising: an environment acquisition unit, a physical object identification and positioning unit, a virtual B-ultrasonic image generation unit, a virtual scene generation unit and a virtual scene display unit, wherein,
the virtual scene generation unit is used for building a virtual environment;
the physical object identification and positioning unit is used for acquiring the spatial position of each physical object and the spatial position of the virtual object in the virtual environment, wherein the physical object comprises a handheld VR controller, and the virtual object comprises one or a combination of a virtual B-ultrasonic machine, a virtual sickbed and a virtual examinee;
the environment acquisition unit is used for monitoring the spatial coordinates and the spatial angle of the physical object in real time;
the virtual scene display unit is used for generating a corresponding virtual B-ultrasonic probe according to the spatial coordinates and the spatial angle and displaying the virtual B-ultrasonic probe to the user;
and the virtual B-ultrasonic image generation unit is used for generating a corresponding virtual B-ultrasonic image according to the relative position and the relative angle between the virtual B-ultrasonic probe and the virtual object, and displaying the virtual B-ultrasonic image to the trainee.
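To make the division of labor among the five claimed units concrete, here is a structural sketch of how they could hand data to one another in one update cycle (all unit interfaces are assumptions, not the patented implementation):

```python
class TrainingSystem:
    """Wires together the five units named in claim 10; each unit is an injected object."""

    def __init__(self, env_acquisition, object_tracker, b_image_generator,
                 scene_generator, scene_display):
        self.env_acquisition = env_acquisition      # environment acquisition unit
        self.object_tracker = object_tracker        # physical object identification and positioning unit
        self.b_image_generator = b_image_generator  # virtual B-ultrasonic image generation unit
        self.scene_generator = scene_generator      # virtual scene generation unit
        self.scene_display = scene_display          # virtual scene display unit

    def tick(self):
        """One update cycle: track the controller, place the probe, generate and show the image."""
        coords, angles = self.env_acquisition.read_controller_pose()
        probe_pose = self.scene_display.place_probe(coords, angles)
        relative_pose = self.object_tracker.relative_to_examinee(probe_pose)
        b_image = self.b_image_generator.generate(relative_pose)
        frame = self.scene_generator.compose(probe_pose, b_image)
        self.scene_display.show(frame, b_image)
```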
CN202111223428.9A 2021-10-20 2021-10-20 5G virtual reality medical ultrasonic training system and method thereof Withdrawn CN114038259A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111223428.9A CN114038259A (en) 2021-10-20 2021-10-20 5G virtual reality medical ultrasonic training system and method thereof

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111223428.9A CN114038259A (en) 2021-10-20 2021-10-20 5G virtual reality medical ultrasonic training system and method thereof

Publications (1)

Publication Number Publication Date
CN114038259A true CN114038259A (en) 2022-02-11

Family

ID=80135314

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111223428.9A Withdrawn CN114038259A (en) 2021-10-20 2021-10-20 5G virtual reality medical ultrasonic training system and method thereof

Country Status (1)

Country Link
CN (1) CN114038259A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115132013A (en) * 2022-07-26 2022-09-30 北京大学深圳医院 Medical ultrasonic simulation teaching method and system
CN115132013B (en) * 2022-07-26 2023-03-14 北京大学深圳医院 Medical ultrasonic simulation teaching method and system

Similar Documents

Publication Publication Date Title
Coles et al. Integrating haptics with augmented reality in a femoral palpation and needle insertion training simulation
CN107067856B (en) Medical simulation training system and method
US9142145B2 (en) Medical training systems and methods
EP1051697B1 (en) Endoscopic tutorial system
US6485308B1 (en) Training aid for needle biopsy
US20190239850A1 (en) Augmented/mixed reality system and method for the guidance of a medical exam
US20140011173A1 (en) Training, skill assessment and monitoring users in ultrasound guided procedures
US9460637B2 (en) Stethoscopy training system and simulated stethoscope
US20090263775A1 (en) Systems and Methods for Surgical Simulation and Training
CN105788390A (en) Medical anatomy auxiliary teaching system based on augmented reality
CN109288591A (en) Surgical robot system
US20100167248A1 (en) Tracking and training system for medical procedures
US20030031993A1 (en) Medical examination teaching and measurement system
CN107527542B (en) Percussion training system based on motion capture
CN114038259A (en) 5G virtual reality medical ultrasonic training system and method thereof
CN111276022A (en) Gastroscope simulation operation system based on VR technique
CN116631252A (en) Physical examination simulation system and method based on mixed reality technology
RU2687564C1 (en) System for training and evaluating medical personnel performing injection and surgical minimally invasive procedures
Coles Investigating augmented reality visio-haptic techniques for medical training
Beacon et al. Assessing the suitability of Kinect for measuring the impact of a week-long Feldenkrais method workshop on pianists’ posture and movement
CN113989461A (en) B-ultrasonic auxiliary teaching system and method based on augmented reality technology
Abolmaesumi et al. A haptic-based system for medical image examination
Beolchi et al. Virtual reality for health care
RU2799123C1 (en) Method of learning using interaction with physical objects in virtual reality
RU2798405C1 (en) Simulation complex for abdominal cavity examination using vr simulation based on integrated tactile tracking technology

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication
Application publication date: 20220211