WO2024095985A1 - Epidural anesthesia assistance system, epidural anesthesia training method, and display device control method - Google Patents

Epidural anesthesia assistance system, epidural anesthesia training method, and display device control method

Info

Publication number
WO2024095985A1
WO2024095985A1 (PCT/JP2023/039166)
Authority
WO
WIPO (PCT)
Prior art keywords
epidural
human body
epidural anesthesia
model
spine
Prior art date
Application number
PCT/JP2023/039166
Other languages
French (fr)
Japanese (ja)
Inventor
金幸 川前
達哉 早坂
和晴 河野
祐太 小森谷
Original Assignee
国立大学法人山形大学
Priority date
Filing date
Publication date
Application filed by 国立大学法人山形大学
Publication of WO2024095985A1 publication Critical patent/WO2024095985A1/en

Classifications

    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 90/00 Instruments, implements or accessories specially adapted for surgery or diagnosis and not covered by any of the groups A61B1/00 - A61B50/00, e.g. for luxation treatment or for protecting wound edges
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 19/00 Manipulating 3D models or images for computer graphics
    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B 23/00 Models for scientific, medical, or mathematical purposes, e.g. full-sized devices for demonstration purposes
    • G09B 23/28 Models for scientific, medical, or mathematical purposes, e.g. full-sized devices for demonstration purposes for medicine
    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B 5/00 Electrically-operated educational appliances
    • G09B 5/02 Electrically-operated educational appliances with visual presentation of the material to be studied, e.g. using film strip
    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B 9/00 Simulators for teaching or training purposes

Definitions

  • the embodiments disclosed in this specification relate to an epidural anesthesia support system, an epidural anesthesia training method, and a method for controlling a display device.
  • Epidural anesthesia is used in conjunction with general anesthesia during surgery of the chest, abdomen, pelvis, or lower extremities.
  • the analgesic effect of epidural anesthesia is superior to that of paravertebral nerve blocks, and recent studies have shown that epidural anesthesia used during surgery can reduce postoperative cognitive dysfunction and postoperative stress responses.
  • epidural anesthesia is a highly challenging procedure.
  • the clinician administering epidural anesthesia must feel the tip of the epidural needle with their fingers and "walk" the fingertip to pinpoint the location of the epidural space. In this way, the clinician performs this "walking" technique blindly. For this reason, when administering epidural anesthesia, the clinician needs not only anatomical knowledge but also experience and intuition.
  • the incidence of difficulty or inability to insert an epidural catheter during epidural anesthesia is approximately 7%, and reaches approximately 26% among anesthesia trainees. Difficulty inserting an epidural catheter can cause pain to the patient over time, and multiple epidural punctures cause localized pain to the patient. Furthermore, if the patient is unable to remain still during the puncture, the risk of nerve damage due to the epidural puncture increases, and there are cases where the epidural puncture procedure must be interrupted.
  • Patent Document 1 and Non-Patent Document 1 describe an epidural puncture simulator for learning epidural puncture techniques.
  • the human body model (epidural puncture simulator) described in Patent Document 1 and Non-Patent Document 1 only provides a method for learning blind epidural puncture.
  • the conventional human body model does not improve blind epidural puncture, nor does it further improve the safety and speed of epidural puncture.
  • the epidural anesthesia support system, epidural anesthesia training method, and display device control method disclosed in this specification make it possible to improve blind epidural puncture and increase the safety and speed of epidural puncture.
  • the disclosed epidural anesthesia support system is an epidural anesthesia support system for supporting epidural anesthesia, and includes a human body model that mimics at least a part of a human body that is the subject of epidural anesthesia training, and a goggle-type display device including a transparent display unit installed in front of the user's eyes and an output processing unit that outputs a first aerial image to the transparent display unit so that, when the user looks at the human body model through the transparent display unit, the first aerial image showing at least a part of the spine is displayed at a corresponding position on the human body model.
  • the human body model has a spine model that mimics at least a part of the spine, and the output processing unit outputs the first aerial image based on three-dimensional object data generated from a CT image of the spine model acquired by a CT device.
  • the output processing unit outputs the second aerial image to the transparent display unit so that, when the user views the human body model through the transparent display unit, the second aerial image indicating at least one of the skin puncture point of the epidural needle used in epidural anesthesia, the puncture angle of the epidural needle, and the epidural space puncture point of the epidural needle is displayed at the corresponding position on the human body model.
  • the human body model has a spine model that mimics at least a part of the spine, and the output processing unit outputs the first aerial image and the second aerial image based on three-dimensional object data generated from a CT image, acquired by a CT device, of the spine model punctured with the epidural needle; the epidural needle is inserted into the spine model so as to satisfy a medically appropriate skin puncture point, puncture angle, and epidural space puncture point.
  • the disclosed epidural anesthesia support system is an epidural anesthesia support system for supporting epidural anesthesia, and includes a goggle-type display device that includes a transparent display unit installed in front of the user's eyes, and an output processing unit that outputs an aerial image to the transparent display unit so that, when the user looks at the human body through the transparent display unit, an aerial image showing at least a part of the human body's spine is displayed at a corresponding position on the human body.
  • the goggle-type display device allows the user wearing it to imagine an aerial image, and after imagining the aerial image, the user removes the goggle-type display device and performs epidural anesthesia as usual.
  • the disclosed epidural anesthesia training method is an epidural anesthesia training method for training epidural anesthesia using the disclosed epidural anesthesia support system, and includes a step of displaying a first aerial image on a goggle-type display device to allow a user wearing the goggle-type display device to insert an epidural needle used in epidural anesthesia into a human body model.
  • the disclosed epidural anesthesia training method is an epidural anesthesia training method for training epidural anesthesia using the disclosed epidural anesthesia support system, and includes the steps of: displaying a first aerial image on the goggle-type display device to allow a user wearing the goggle-type display device to insert an epidural needle into a spine model housed in a human body model; acquiring a CT image of the spine model into which the user has inserted the epidural needle; and displaying the skin puncture point, puncture angle, and epidural space puncture point of the epidural needle in that spine model.
  • the disclosed epidural anesthesia training method further includes a step in which the user puts on a goggle-type display device, visualizes the first aerial image, and then removes the goggle-type display device and inserts an epidural needle into the human body model.
  • the disclosed control method is a control method for a display device having an image acquisition unit, a storage unit, and a transparent display unit, in which the display device stores in the storage unit first three-dimensional object data showing at least a part of the shape of the surface of a specific human body, and second three-dimensional object data showing at least a part of the spine of the specific human body, and displays on the transparent display unit an aerial image showing at least a part of the spine of the specific human body based on the second three-dimensional object data, based on an image of the specific human body acquired by the image acquisition unit and the first three-dimensional object data.
  • the disclosed epidural anesthesia support system, epidural anesthesia training method, and display device control method make it possible to improve blind epidural puncture and increase the safety and speed of epidural puncture.
  • FIG. 1 is a schematic diagram for explaining an example of an overview of an epidural anesthesia support system.
  • FIG. 2 is a diagram illustrating an example of a schematic configuration of a human body model.
  • FIG. 3 is a schematic diagram showing an example of a method for generating a lumbar spine model punctured with an epidural needle.
  • FIG. 4(a) is a perspective view showing an example of the appearance of a wearable device, and FIG. 4(b) is a diagram showing an example of a schematic configuration of the wearable device.
  • FIG. 5(a) is a schematic diagram showing an example of a three-dimensional object, and FIG. 5(b) is a diagram showing an example of an aerial image displayed on a wearable device.
  • FIG. 6 is a schematic diagram for explaining images resulting from epidural needle puncture training.
  • FIG. 7 is a diagram showing an example of an operation flow of an epidural anesthesia training method.
  • the epidural anesthesia support system 1 includes a human body model 2 and a wearable device 3.
  • the human body model 2 has a main body section 21 that imitates a human body, and a skin section 22 that imitates at least a part of the skin on the back of the human body.
  • the main body section 21 has a housing section 23 that houses a spine model 24 that imitates at least a part of a spine.
  • the wearable device 3 is a goggle-type display device that can be worn on the head of a user (such as a doctor or medical intern).
  • a transparent display unit is installed in the wearable device 3 so that it is positioned directly in front of the eyes of the user wearing it.
  • the wearable device 3 has an MR (Mixed Reality) function that displays various aerial images on the transparent display unit.
  • the wearable device 3 displays an aerial image A1 of the spine model 24 at a corresponding position of the human body model 2 when a user wearing the wearable device 3 views the human body model 2 through the transparent display unit.
  • the wearable device 3 stores three-dimensional object data of a three-dimensional object D1 indicating the shape of the surface of the human body model 2 and three-dimensional object data of a three-dimensional object D2 indicating the shape of the surface of the spine model 24.
  • the wearable device 3 also stores the relative positional relationship between the three-dimensional object D1 and the three-dimensional object D2.
  • the wearable device 3 reads out the three-dimensional object data of the three-dimensional object D1, recognizes the shape of the human body model 2 using a known MR function, and specifies the position of the human body model 2 in real space.
  • the wearable device 3 displays the aerial image A1 on the transparent display unit based on the position of the human body model 2 in real space, the relative positional relationship between the three-dimensional object D1 and the three-dimensional object D2, and the three-dimensional object data of the three-dimensional object D2.
  • the aerial image A1 is a hologram (stereoscopic image) corresponding to the three-dimensional object D2 and is an example of a first aerial image.
  • the wearable device 3 displays an aerial image A1 showing the lumbar vertebrae at the position of the spine model 24 contained within the human body model 2 when the user wearing the wearable device 3 views the human body model 2 through the transparent display unit. This allows the user wearing the wearable device 3 to view the aerial image A1 displayed at a position corresponding to the spine model 24 contained within the human body model 2 through the transparent display unit.
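  • As an illustrative sketch only (not an implementation described in this disclosure), the placement of the aerial image A1 can be understood as composing the recognized real-space pose of the human body model 2 with the stored relative positional relationship between the three-dimensional objects D1 and D2; all names and numeric values below are hypothetical.

```python
import numpy as np

def make_transform(rotation: np.ndarray, translation: np.ndarray) -> np.ndarray:
    """Build a 4x4 homogeneous transform from a 3x3 rotation and a translation vector."""
    T = np.eye(4)
    T[:3, :3] = rotation
    T[:3, 3] = translation
    return T

# Hypothetical inputs (not values from the disclosure):
#   T_world_d1: pose of the recognized human body model 2 (object D1) in real space,
#               as estimated by the MR shape-recognition function.
#   T_d1_d2:    stored relative positional relationship between objects D1 and D2.
T_world_d1 = make_transform(np.eye(3), np.array([0.0, 0.0, 1.2]))
T_d1_d2 = make_transform(np.eye(3), np.array([0.0, -0.05, 0.10]))

# Pose at which the aerial image A1 (the hologram of object D2) should be rendered so that
# it overlaps the spine model 24 hidden inside the human body model 2.
T_world_d2 = T_world_d1 @ T_d1_d2

# Example: transform placeholder vertices of object D2 into real-space coordinates for display.
d2_vertices = np.array([[0.0, 0.00, 0.0],
                        [0.0, 0.01, 0.0],
                        [0.0, 0.02, 0.0]])
d2_homogeneous = np.hstack([d2_vertices, np.ones((len(d2_vertices), 1))])
d2_world = (T_world_d2 @ d2_homogeneous.T).T[:, :3]
```

In an actual MR runtime the recognized pose would be updated continuously and the composed transform handed to the rendering engine rather than applied to raw vertices.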
  • the aerial image A1 is not limited to being displayed based on the three-dimensional object data of the three-dimensional object D2 showing the surface shape of the spine model 24.
  • the aerial image A1 may be displayed based on three-dimensional object data showing the surface shape of at least a part of the spine that is generated by a trainer operating an information processing device (such as a personal computer (PC)) capable of executing an application program for generating three-dimensional object data.
  • the trainer is a person who provides training in epidural puncture to trainees, such as medical interns.
  • the epidural anesthesia support system 1 has a wearable device 3 that displays an aerial image A1 showing at least a part of the spine of a human body model 2 at a position corresponding to at least a part of the spine.
  • the epidural anesthesia support system 1 makes it possible to visualize the area to be targeted for epidural puncture, improving blind epidural puncture and making it safer and more rapid.
  • FIG. 1 is merely for the purpose of deepening understanding of the contents of the present invention.
  • the present invention is specifically embodied in the following embodiments, and may be embodied in various modified forms without substantially departing from the principles of the present invention. All such modified forms are included within the scope of the present invention and the disclosure of this specification.
  • (Human body model 2) FIG. 2 is a diagram showing an example of a schematic configuration of the human body model 2.
  • the human body model 2 has a spine model 24, a skin portion 22, a code portion 25, and the like.
  • the main body 21 shown in FIG. 2 has a waist portion that imitates the waist of a human body.
  • the main body 21 is not limited to a waist portion, and may further have a back portion that imitates the back of a human body and/or a buttocks portion that imitates the buttocks of a human body.
  • the main body 21 may also have only one of the back portion and the buttocks portion.
  • the main body 21 may also include models that imitate other parts of the human body (for example, a neck portion that imitates a neck, an upper arm portion that imitates an upper arm, and/or a thigh portion that imitates a thigh, etc.).
  • the skin portion 22 is made of soft synthetic resin that mimics the skin and subcutaneous tissue, and is configured so that the softness and feel when palpated by a user is similar to that of human skin.
  • the housing section 23 is a groove provided on the back surface of the human body model 2, and houses the spine model 24.
  • the spine model 24 is a model that simulates at least the spinous processes and the epidural space.
  • the spine model 24 shown in FIG. 2(b) is a model that simulates the lumbar vertebrae of the spine.
  • the spine model 24 is not limited to a model that simulates the lumbar vertebrae, and may be a model that simulates the cervical vertebrae, thoracic vertebrae, or sacral vertebrae.
  • the spine model 24 may also be a model that simulates two or more of the cervical vertebrae, thoracic vertebrae, lumbar vertebrae, and sacral vertebrae.
  • the spine model 24 is housed in the housing section 23 (FIG. 2(b)) and covered by the skin portion 22 (FIG. 2(a)) so that it cannot be seen by the user.
  • a user who does not wear the wearable device 3 cannot see the spine model 24, and therefore blindly trains in epidural puncture using the human body model 2.
  • the human body model 2 is a model that imitates at least a part of a human body that is the subject of epidural anesthesia training
  • the spine model 24 is a model that imitates at least a part of a spine that is housed in the human body model 2 and that is the subject of epidural puncture in epidural anesthesia. Details of the spine model 24 will be described later.
  • the code portion 25 is, for example, a QR code (registered trademark), and is associated with the three-dimensional object data of each of the three-dimensional objects D1 and D2.
  • Other barcode information such as an AR marker may also be used as the code portion 25. The method of using the code portion 25 will be described later.
  • (Spine model 24 with epidural needle N inserted) FIG. 3 is a schematic diagram showing an example of a method for generating a spine model 24 punctured with an epidural needle N.
  • the spine model 24 punctured with an epidural needle N is used to create three-dimensional object data.
  • the epidural needle N punctured into the spine model 24 corresponds to three-dimensional object data for displaying an aerial image A2 showing at least one of the skin puncture point of the epidural needle N, the puncture angle of the epidural needle N, and the epidural space puncture point of the epidural needle N.
  • the spine model 24 punctured with the epidural needle N is created by a trainer.
  • the aerial image A2 is an example of a second aerial image.
  • the spine model 24 has a spinous process portion 241, a transparent portion 242, and an epidural space portion 243.
  • the spinous process portion 241 is a model of a spinous process and is made of synthetic resin having the same hardness as a spinous process.
  • the transparent portion 242 is made of transparent silicone or the like that is configured to generate a sense of resistance similar to that of an interspinous ligament when the epidural needle N is inserted, and the inserted epidural needle N can be visually confirmed through it. Note that when the epidural needle N is inserted into the transparent portion 242, the puncture position of the epidural needle N on the upper surface of the transparent portion 242 may be referred to as the skin puncture point.
  • the epidural space portion 243 is a hollow tubular body that constitutes the epidural space.
  • the upper side of the epidural space portion 243 is a model of the ligamentum flavum, and the point where the epidural needle N punctured into the transparent portion 242 reaches the upper side of the epidural space portion 243 is the epidural space puncture point.
  • the trainer inserts the epidural needle N into the spine model 24.
  • the epidural needle N is inserted into the spine model 24 so as to satisfy a medically appropriate skin puncture point, puncture angle, and epidural space puncture point.
  • the puncture angle is the deviation angle between a straight line passing through the skin puncture point and the epidural space puncture point and an axis extending vertically upward in FIG. 3(b).
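  • As a worked example under assumed coordinates (the values below are hypothetical and not taken from this disclosure), the puncture angle can be computed as the angle between the line joining the two puncture points and the vertical axis:

```python
import numpy as np

# Hypothetical coordinates (in millimetres) measured on the spine model 24.
skin_puncture_point = np.array([0.0, 0.0, 40.0])            # on the upper surface of the transparent portion 242
epidural_space_puncture_point = np.array([0.0, 12.0, 0.0])  # on the upper side of the epidural space portion 243

# Direction of the straight line through the two puncture points (the inserted epidural needle N).
needle_direction = epidural_space_puncture_point - skin_puncture_point
needle_direction = needle_direction / np.linalg.norm(needle_direction)

# Axis extending vertically upward in FIG. 3(b).
vertical_axis = np.array([0.0, 0.0, 1.0])

# Deviation angle between the needle line and the vertical axis (a line vs. an axis, so take |cos|).
cos_angle = abs(float(np.dot(needle_direction, vertical_axis)))
puncture_angle_deg = float(np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0))))
# For these hypothetical points the needle deviates by roughly 17 degrees from the vertical axis.
```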
  • the trainer cuts the epidural needle N extending from the upper side of the epidural space portion 243, thereby completing the spine model 24 punctured with the epidural needle N.
  • the spine model 24 punctured with the epidural needle N may be in the state in which the epidural needle N is inserted before cutting (FIG. 3(b)).
  • the trainer places the spine model 24, into which the epidural needle N has been inserted, in the human body model 2, and obtains a CT image of the human body model 2 using a CT (Computed Tomography) device (not shown).
  • the trainer uses an information processing device (not shown) such as a PC capable of executing a three-dimensional object data generation application program to generate three-dimensional object data for three-dimensional object D1 and three-dimensional object data for three-dimensional object D2 from the CT image of the human body model 2.
  • a well-known application program ("Unity Pro" (registered trademark)) may be used as the three-dimensional object data generation application program.
  • the information processing device identifies pixel P1 corresponding to the surfaces of the main body portion 21 and the skin portion 22, pixel P2 corresponding to the surface of the spinous process portion 241 and/or the epidural space portion 243 of the spine model 24, and pixel P3 corresponding to the surface of the epidural needle N.
  • the information processing device uses the identified pixels P1 to P3 in all CT images to generate three-dimensional object data by a known conversion process.
  • the three-dimensional object data of the three-dimensional object D1 is generated based on pixel P1
  • the three-dimensional object data of the three-dimensional object D2 is generated based on pixel P2.
  • the three-dimensional object data of the three-dimensional object D3 indicating the shape of the surface of the epidural needle N is generated based on pixel P3.
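  • A minimal sketch of this kind of conversion is shown below, assuming intensity thresholding of the stacked CT slices and surface extraction with scikit-image; the thresholds and the library choice are assumptions, not the conversion process used in this disclosure.

```python
import numpy as np
from skimage import measure  # assumption: scikit-image is available for surface extraction

def extract_surface(ct_volume: np.ndarray, lower: float, upper: float):
    """Return a triangle mesh (vertices, faces) for voxels whose intensity lies in [lower, upper].

    ct_volume is a 3-D array built by stacking the CT slices; the intensity ranges below are
    hypothetical stand-ins for the classification of pixels P1, P2, and P3.
    """
    mask = ((ct_volume >= lower) & (ct_volume <= upper)).astype(np.uint8)
    verts, faces, _normals, _values = measure.marching_cubes(mask, level=0.5)
    return verts, faces

# Placeholder volume for illustration only (a real pipeline would load the CT slices).
rng = np.random.default_rng(0)
ct_volume = rng.uniform(-500.0, 3000.0, size=(64, 64, 64))

d1_mesh = extract_surface(ct_volume, -200.0, 200.0)    # main body 21 / skin 22  -> object D1
d2_mesh = extract_surface(ct_volume, 200.0, 1000.0)    # spine model 24          -> object D2
d3_mesh = extract_surface(ct_volume, 1000.0, 3000.0)   # epidural needle N       -> object D3
```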
  • the three-dimensional object D3 is not limited to one that indicates the shape of the surface of the epidural needle N itself.
  • the three-dimensional object D3 may be a spherical object that corresponds to the skin puncture point of the epidural needle N, or a spherical object that indicates the epidural space puncture point of the epidural needle N.
  • the line segment object that corresponds to the epidural needle N may be designed as an arrow. In this case, the direction of the arrow is the puncture direction.
  • the three-dimensional object D3 may include multiple objects of each of the examples given above.
  • FIG. 4(a) is a perspective view showing an example of the wearable device 3
  • FIG. 4(b) is a diagram showing an example of a schematic configuration of the wearable device 3.
  • the wearable device 3 is a goggle-type display device that can be worn on the head of a user, and has an MR (Mixed Reality) function of displaying various aerial images on a half mirror unit 36 installed in the front direction (x direction) of the eyes of the user wearing the device.
  • the wearable device 3 includes a communication I/F 31, a storage unit 32, a sensor acquisition unit 33, a video output unit 34, a processing unit 35, a half mirror unit 36, an environmental sensor 37, a depth sensor 38, and the like.
  • For example, an MR HMD (Head Mounted Display) such as Microsoft HoloLens 2 [searched on September 1, 2022] may be used as the wearable device 3.
  • the communication I/F 31 includes a communication interface circuit that performs wireless communication with an access point of a wireless LAN (Local Area Network) (not shown) based on the wireless communication method of the IEEE (The Institute of Electrical and Electronics Engineers, Inc.) 802.11 standard.
  • the communication I/F 31 receives three-dimensional object data transmitted by wireless communication from an information processing device.
  • the communication I/F 31 may also establish a wireless signal line with a base station (not shown) using the LTE (Long Term Evolution) method, a fifth generation (5G) mobile communication system, or the like, via a channel assigned by the base station, and perform communication with the base station.
  • LTE Long Term Evolution
  • 5G fifth generation
  • the communication I/F 31 may also have an interface circuit for performing short-range wireless communication according to a communication method such as Bluetooth (registered trademark), and may receive radio waves from the information processing device.
  • the communication I/F 31 may also include a communication interface circuit for a wired LAN. This allows the wearable device 3 to acquire three-dimensional object data (three-dimensional object data of three-dimensional object D1, three-dimensional object data of three-dimensional object D2, and three-dimensional object data of three-dimensional object D3) from the information processing device via the communication I/F 31.
  • the storage unit 32 is, for example, a semiconductor memory device such as a ROM (Read Only Memory) or a RAM (Random Access Memory).
  • the storage unit 32 stores an operating system program, a driver program, an application program, data, and the like used for processing in the processing unit 35.
  • the driver programs stored in the storage unit 32 include a communication device driver program that controls the communication I/F 31, an output device driver program that controls the video output unit 34, an environmental sensor device driver program that controls the environmental sensor 37, and a depth sensor device driver program that controls the depth sensor 38.
  • the application programs stored in the storage unit 32 are various control programs for realizing the MR function in the processing unit 35.
  • the storage unit 32 stores three-dimensional object data acquired from the information processing device (three-dimensional object data of three-dimensional object D1, three-dimensional object data of three-dimensional object D2, and three-dimensional object data of three-dimensional object D3).
  • the sensor acquisition unit 33 has a function of acquiring various sensor data from the environmental sensor 37 and the depth sensor 38 and passing it to the processing unit 35.
  • the video output unit 34 has a function of projecting an aerial image onto the half mirror unit 36 using a holographic optical element, based on display data for displaying the aerial image.
  • the processing unit 35 is a processing device that loads the operating system program, driver program, and application program stored in the storage unit 32 into memory and executes the instructions contained in the loaded programs.
  • the processing unit 35 is, for example, an electronic circuit such as a CPU (Central Processing Unit), an MPU (Micro Processing Unit), or a DSP (Digital Signal Processor), or a combination of various electronic circuits.
  • the processing unit 35 is illustrated as a single component, but the processing unit 35 may also be a collection of multiple physically separate processors.
  • the processing unit 35 executes various commands included in the control program, thereby functioning as a recognition unit 351 and an output processing unit 352.
  • the functions of the recognition unit 351 and the output processing unit 352 will be described later.
  • the half mirror unit 36 is an example of a transparent display unit that is placed in front of the user's eyes when the wearable device 3 is worn by the user.
  • the half mirror unit 36 displays the aerial image projected from the video output unit 34. This allows the user to view the displayed aerial image superimposed on the real world in the user's line of sight.
  • the environmental sensor 37 has an optical lens and an image sensor.
  • the optical lens focuses the light beam from the subject on the imaging surface of the image sensor.
  • the image sensor is a CCD (Charge Coupled Device) or CMOS (Complementary Metal Oxide Semiconductor), and outputs an image of the subject imaged on the imaging surface.
  • the environmental sensor 37 creates still image data in a specified file format from the image generated by the image sensor, and outputs it as environmental sensor data. When outputting the environmental sensor image data, the environmental sensor 37 may associate the image data with the time of acquisition.
  • the environmental sensor 37 also passes the environmental sensor data to the processing unit 35 at specified time intervals (e.g., one second).
  • the depth sensor 38 is, for example, a pair of infrared sensors.
  • the depth sensor 38 outputs an infrared image, measures distances corresponding to multiple points on the surface of the object from the output infrared image using a known active stereo method, and outputs depth data (distance data) corresponding to each of the multiple points.
  • the depth sensor 38 passes the depth data to the processing unit 35 at predetermined time intervals (for example, one second).
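  • As general background (not specific to this disclosure), an active-stereo depth sensor of this kind typically recovers distance from the disparity between the paired infrared images by triangulation; the sketch below uses hypothetical sensor parameters.

```python
import numpy as np

def disparity_to_depth(disparity_px: np.ndarray, focal_length_px: float, baseline_m: float) -> np.ndarray:
    """Convert per-pixel disparity between the paired infrared images into depth in metres.

    Standard stereo triangulation: depth = focal_length * baseline / disparity. The depth
    sensor 38 would output one such distance per measured surface point.
    """
    disparity_px = np.asarray(disparity_px, dtype=float)
    depth = np.full_like(disparity_px, np.inf)   # no match -> no depth
    valid = disparity_px > 0
    depth[valid] = focal_length_px * baseline_m / disparity_px[valid]
    return depth

# Hypothetical sensor parameters and disparities.
depths = disparity_to_depth(np.array([40.0, 20.0, 0.0]),
                            focal_length_px=580.0, baseline_m=0.075)
# -> approximately 1.09 m, 2.18 m, and inf (no correspondence found)
```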
  • when the recognition unit 351 receives environmental sensor data from the environmental sensor 37 and depth data from the depth sensor 38, it estimates its own position and generates three-dimensional mesh data of the surfaces of surrounding objects using the well-known SLAM (Simultaneous Localization and Mapping) technology.
  • the recognition unit 351 also determines whether or not the code portion 25 is included in the image shown by the environmental sensor data from the environmental sensor 37. If the recognition unit 351 determines that the code portion 25 is included in the image, it acquires three-dimensional object data corresponding to the code portion 25 and passes it to the output processing unit 352. The recognition unit 351 also calculates the orientation (e.g., a three-dimensional unit vector) and distance of the code portion 25 relative to the wearable device 3 based on the size and shape of the code portion 25 in the image. Note that a known calculation method (e.g., JP 2016-57758 A) may be used as the process of calculating the orientation and distance of the target object based on the size and shape of a reference display object such as the code portion 25.
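  • One common way to obtain the orientation and distance of a square marker such as the code portion 25 from its size and shape in an image is perspective-n-point pose estimation; the sketch below uses OpenCV's solvePnP with an assumed marker side length and camera matrix, and is not the calculation method referenced above.

```python
import numpy as np
import cv2  # assumption: OpenCV is available; the referenced calculation method is not reproduced here

MARKER_SIDE_M = 0.04  # assumed physical side length of the code portion 25

# Corner coordinates of the square marker in its own coordinate frame.
OBJECT_POINTS = np.array([
    [-MARKER_SIDE_M / 2,  MARKER_SIDE_M / 2, 0.0],
    [ MARKER_SIDE_M / 2,  MARKER_SIDE_M / 2, 0.0],
    [ MARKER_SIDE_M / 2, -MARKER_SIDE_M / 2, 0.0],
    [-MARKER_SIDE_M / 2, -MARKER_SIDE_M / 2, 0.0],
], dtype=np.float32)

def marker_pose(corner_pixels: np.ndarray, camera_matrix: np.ndarray):
    """Estimate the direction (unit vector) and distance of the marker from its detected corners."""
    ok, _rvec, tvec = cv2.solvePnP(OBJECT_POINTS, corner_pixels.astype(np.float32),
                                   camera_matrix, None)
    if not ok:
        return None
    tvec = tvec.reshape(3)
    distance = float(np.linalg.norm(tvec))
    direction = tvec / distance  # three-dimensional unit vector from the camera toward the code portion 25
    return direction, distance

# Hypothetical camera intrinsics and detected corner pixels.
camera_matrix = np.array([[600.0, 0.0, 320.0],
                          [0.0, 600.0, 240.0],
                          [0.0, 0.0, 1.0]])
corners = np.array([[300.0, 220.0], [340.0, 220.0], [340.0, 260.0], [300.0, 260.0]])
pose = marker_pose(corners, camera_matrix)
```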
  • the output processing unit 352 calculates three-dimensional objects D1, D2, and D3 from the three-dimensional object data acquired from the recognition unit 351, rotating and resizing them based on the orientation and distance of the code portion 25 acquired from the recognition unit 351.
  • FIG. 5(a) is a schematic diagram showing an example of three-dimensional objects D1, D2, and D3. Note that FIG. 5(a) includes the code portion 25 for the purpose of explanation, but the acquired three-dimensional object data does not necessarily need to include object data corresponding to the code portion 25.
  • the output processing unit 352 intermittently aligns the 3D mesh data of the surface of the object corresponding to the human body model 2 output from the recognition unit 351 with the identified 3D object D1, and passes the display data (projection data) of the aerial image A1 corresponding to the 3D object D2 and the aerial image A2 corresponding to the 3D object D3 to the video output unit 34.
  • FIG. 5(b) is a diagram showing an example of the aerial images A1 and A2 displayed on the half mirror unit 36. Thereafter, the aerial images A1 and A2 are displayed moving and/or rotating according to the result of alignment between the 3D mesh data of the surface of the object corresponding to the human body model 2 and the identified 3D object D1, which is intermittently performed by the recognition unit 351.
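  • The alignment between the SLAM surface mesh and the identified three-dimensional object D1 could, for example, be performed with an iterative closest point registration; the sketch below uses Open3D as an assumed library and is only one possible realization of the alignment described above.

```python
from typing import Optional

import numpy as np
import open3d as o3d  # assumption: Open3D is used here purely to illustrate the "alignment"

def align_d1_to_mesh(d1_points: np.ndarray, mesh_points: np.ndarray,
                     init: Optional[np.ndarray] = None) -> np.ndarray:
    """Align the stored object D1 to the SLAM surface mesh of the real human body model 2.

    Returns a 4x4 transform mapping D1 coordinates into the current real-space mesh; applying
    the same transform to objects D2 and D3 places the aerial images A1 and A2.
    """
    source = o3d.geometry.PointCloud()
    source.points = o3d.utility.Vector3dVector(d1_points)
    target = o3d.geometry.PointCloud()
    target.points = o3d.utility.Vector3dVector(mesh_points)
    if init is None:
        init = np.eye(4)
    result = o3d.pipelines.registration.registration_icp(
        source, target, max_correspondence_distance=0.02, init=init,
        estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPoint())
    return result.transformation
```

Running such a registration intermittently (rather than every frame) matches the intermittent alignment described above while keeping the display responsive.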
  • the user trains to insert the epidural needle N into the human body model 2 while wearing the wearable device 3 on which the aerial images A1 and A2 are displayed.
  • the trainer removes the punctured spine model 24 from the human body model 2 after the user has inserted the epidural needle N, and obtains a CT image using a CT device.
  • Using an information processing device, the trainer generates and displays an image of the result of the puncture training, which includes both an image R1 of the epidural needle N that satisfies a medically appropriate skin puncture point, puncture angle, and epidural space puncture point, and an image R2 of the epidural needle N inserted by the user.
  • FIG. 6 is a diagram for explaining the result image of the puncture training of the epidural needle N.
  • FIG. 6(a) is a diagram showing the top surface of the spine model 24, and the position indicating the skin puncture point is displayed.
  • the position of the "x" is the position indicating the skin puncture point of the image R1
  • the position of the "o" is the position indicating the skin puncture point of the image R2.
  • FIG. 6(b) is a diagram seen from the side of the spine model 24, and the puncture angle is displayed.
  • an image R1 of the epidural needle N and an image R2 of the epidural needle N are displayed.
  • FIG. 6(c) is a diagram showing the upper surface of the epidural space portion 243, and the position indicating the epidural space puncture point is displayed.
  • the position of the "x" is the position indicating the epidural space puncture point of the image R1
  • the position of the "o" is the position indicating the epidural space puncture point of the image R2.
  • FIG. 7 shows an example of the operational flow of the epidural anesthesia training method.
  • when the user looks at the human body model 2 through the half mirror unit 36 after putting on the wearable device 3, the aerial image A1 and/or the aerial image A2 is displayed on the half mirror unit 36 (step S101).
  • the user performs a first training session in which the epidural needle N is inserted into the human body model 2 while wearing the wearable device 3 on which the aerial image A1 and the aerial image A2 are displayed (step S102).
  • the user imagines the aerial image A1 and/or the aerial image A2 displayed by the wearable device 3.
  • the trainer removes the punctured spine model 24 from the human body model 2 and obtains a CT image using the CT scanner (step S103).
  • the user performs a second training session of inserting the epidural needle N into the human body model 2 without wearing the wearable device 3 (step S104).
  • the trainer removes the punctured spine model 24 from the human body model 2 and obtains a CT image using the CT device (step S105).
  • the trainer uses an information processing device to generate and display an image of the results of the puncture training, which includes both an image R1 of the epidural needle N that satisfies a medically appropriate skin puncture point, puncture angle, and epidural space puncture point, and an image R2 of the epidural needle N inserted by the user (step S105), and the epidural anesthesia training method is completed.
  • the epidural anesthesia support system 1 of this embodiment has a wearable device 3 that displays an aerial image A1 showing the spine at a position corresponding to the spine of the human body model 2.
  • the epidural anesthesia support system 1 makes it possible to visualize the area to be targeted for epidural puncture, improving blind epidural puncture and making it safer and more rapid.
  • the present invention is not limited to the present embodiment.
  • the epidural anesthesia support system 1 does not need to include the human body model 2.
  • the wearable device 3 projects an aerial image A1 onto the half mirror unit 36 so that, instead of the human body model 2, the aerial image A1 showing at least a part of the spine of a human body such as a patient is displayed at a position corresponding to at least a part of the spine of the human body.
  • the trainer uses a CT device (not shown) to obtain a CT image of the human body (patient).
  • the trainer uses an information processing device (not shown) such as a PC capable of executing a three-dimensional object data generation application program to generate, from the CT image of the human body, three-dimensional object data of a three-dimensional object D1 showing at least a part of the shape of the surface of the human body (e.g., the torso (back, waist, and buttocks) etc.) and three-dimensional object data of a three-dimensional object D2 showing the shape of at least a part of the spine of the human body.
  • the trainer manually generates three-dimensional object data of a three-dimensional object D3.
  • the storage unit 32 stores the three-dimensional object data of the three-dimensional object D1, the three-dimensional object data of the three-dimensional object D2, and the three-dimensional object data of the three-dimensional object D3 obtained from the information processing device.
  • the storage unit 32 stores the shape of the three-dimensional object D1 and the code portion 25 displayed on a sheet member placed on the back of the human body in association with each other. In a case where the sheet member on which the code portion 25 is displayed is not provided on the human body, the storage unit 32 stores the shape of the three-dimensional object D1 and the position of at least some characteristic parts of the vertebrae of the human body in association with each other.
  • the three-dimensional object data of the three-dimensional object D1 is an example of first three-dimensional object data
  • the three-dimensional object data of the three-dimensional object D2 is an example of second three-dimensional object data.
  • at least some characteristic parts of the vertebrae of the human body associated with the shape of the three-dimensional object D1 are, for example, the seventh cervical vertebra (vertebra prominens) and the anterior superior iliac spine.
  • the recognition unit 351 calculates the orientation (e.g., a three-dimensional unit vector, etc.) and distance of the code portion 25 relative to the wearable device 3 using a known calculation method based on the size and shape of the code portion 25 in the image shown by the environmental sensor data from the environmental sensor 37.
  • the output processing unit 352 then calculates three-dimensional objects D1, D2, and D3 from the three-dimensional object data acquired from the recognition unit 351, rotating and resizing them based on the orientation and distance of the code portion 25 acquired from the recognition unit 351.
  • the recognition unit 351 recognizes the position of at least some of the characteristic parts of the vertebrae of the human body, and identifies the human body based on the relative position of the characteristic parts and the shape of the surface of the torso of the human body. For example, the recognition unit 351 determines whether or not at least some of the characteristic parts of the vertebrae of the human body are included in the image shown by the environmental sensor data from the environmental sensor 37.
  • the environmental sensor 37 is an example of an image acquisition unit.
  • if the recognition unit 351 determines that at least some of the characteristic parts of the vertebrae of the human body are included in the image, it acquires the three-dimensional object D1 associated with the position of the characteristic parts and passes it to the output processing unit 352. In addition, the recognition unit 351 calculates the orientation (e.g., a three-dimensional unit vector) and distance of the characteristic parts relative to the wearable device 3 based on the position of at least some of the characteristic parts of the vertebrae in the image, using a known calculation method.
  • the output processing unit 352 calculates three-dimensional objects D1, D2, and D3 from the three-dimensional object data acquired from the recognition unit 351, rotating and resizing them based on the orientation and distance of the characteristic parts acquired from the recognition unit 351.
  • the output processing unit 352 intermittently performs alignment between the three-dimensional mesh data of the surface of the object corresponding to the human body output from the recognition unit 351 and the three-dimensional object D1, and passes the display data (projection data) of the aerial image A1 corresponding to the three-dimensional object D2 and/or the aerial image A2 corresponding to the three-dimensional object D3 to the video output unit 34.
  • the epidural anesthesia support system 1 and the wearable device 3 can superimpose the aerial image A1 and/or the aerial image A2 corresponding to the spine of the human body on the human body, encouraging the user to understand the shape of the spine of the human body and allowing the user to imagine the aerial image A1 and/or the aerial image A2 displayed by the wearable device 3.
  • the user can remove the wearable device 3 and perform epidural anesthesia as usual.
  • the three-dimensional object D2 of the corresponding spine may be deformed in accordance with the change in the lumbar surface of the human body.
  • multiple points of the three-dimensional object D2 may be associated with the position of the three-dimensional object D1 immediately adjacent to each point, and the output processing unit 352 may deform the three-dimensional object D1 in synchronization with the deformation of the lumbar region of the human body, and move the multiple corresponding points of the three-dimensional object D2 to follow the deformation of the three-dimensional object D1, thereby deforming the aerial image A1 corresponding to the spine.
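  • A minimal sketch of this binding-and-follow idea is given below; the nearest-point binding and the uniform displacement transfer are assumptions about one possible implementation, and the coordinates are hypothetical.

```python
import numpy as np

def bind_to_nearest(d2_points: np.ndarray, d1_points: np.ndarray) -> np.ndarray:
    """For each point of object D2, return the index of the immediately adjacent point of object D1."""
    # Pairwise distances (fine for illustration; a spatial index would be used for large meshes).
    diff = d2_points[:, None, :] - d1_points[None, :, :]
    return np.argmin(np.linalg.norm(diff, axis=2), axis=1)

def follow_deformation(d2_points, d1_points_before, d1_points_after, binding):
    """Move each bound point of D2 by the displacement of its associated D1 point."""
    displacement = d1_points_after - d1_points_before
    return d2_points + displacement[binding]

# Hypothetical example: the lumbar surface (D1) bulges slightly and the spine image (D2) follows.
d1_before = np.array([[0.0, 0.0, 0.0], [0.0, 0.1, 0.0], [0.0, 0.2, 0.0]])
d1_after = d1_before + np.array([[0.0, 0.0, 0.01], [0.0, 0.0, 0.02], [0.0, 0.0, 0.01]])
d2_points = np.array([[0.0, 0.05, -0.03], [0.0, 0.15, -0.03]])
binding = bind_to_nearest(d2_points, d1_before)
d2_deformed = follow_deformation(d2_points, d1_before, d1_after, binding)
```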
  • in step S104 of the epidural anesthesia training method shown in FIG. 7, the user may perform the second training session of inserting the epidural needle N into the human body model 2 while wearing the wearable device 3 displaying the aerial images A1 and A2.
  • Reference Signs List 1 Epidural anesthesia support system 2 Human body model 21 Main body 22 Skin portion 23 Housing section 24 Spine model 241 Spinous process portion 242 Transparent portion 243 Epidural space portion 25 Code portion 3 Wearable device 31 Communication I/F 32 Storage unit 33 Sensor acquisition unit 34 Video output unit 35 Processing unit 351 Recognition unit 352 Output processing unit 36 Half mirror unit 37 Environmental sensor 38 Depth sensor N Epidural needle

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Educational Technology (AREA)
  • Educational Administration (AREA)
  • Business, Economics & Management (AREA)
  • Medical Informatics (AREA)
  • General Health & Medical Sciences (AREA)
  • Surgery (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computer Graphics (AREA)
  • General Engineering & Computer Science (AREA)
  • Veterinary Medicine (AREA)
  • Biomedical Technology (AREA)
  • Animal Behavior & Ethology (AREA)
  • Molecular Biology (AREA)
  • Pathology (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Computer Hardware Design (AREA)
  • Public Health (AREA)
  • Software Systems (AREA)
  • Chemical & Material Sciences (AREA)
  • Medicinal Chemistry (AREA)
  • Algebra (AREA)
  • Computational Mathematics (AREA)
  • Mathematical Analysis (AREA)
  • Mathematical Optimization (AREA)
  • Mathematical Physics (AREA)
  • Pure & Applied Mathematics (AREA)
  • Instructional Devices (AREA)
  • Electrically Operated Instructional Devices (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The present invention provides, inter alia, an epidural anesthesia assistance system that improves blind epidural puncture and increases the safety and swiftness of epidural puncture. The present invention relates to an epidural anesthesia assistance system for assisting in epidural puncture, said epidural anesthesia assistance system comprising a goggle-type display device provided with: a transmissive display unit disposed in the front direction of the eyes of a user; and an output processing unit that, when the user views a human body model or a patient's back through the transmissive display unit, outputs a first aerial image showing at least a portion of the spine to the transmissive display unit such that the first aerial image is displayed in a position corresponding to the human body model or the patient's back.

Description

Epidural anesthesia support system, epidural anesthesia training method, and display device control method
The embodiments disclosed in this specification relate to an epidural anesthesia support system, an epidural anesthesia training method, and a method for controlling a display device.
Epidural anesthesia is used in conjunction with general anesthesia during surgery of the chest, abdomen, pelvis, or lower extremities. The analgesic effect of epidural anesthesia is superior to that of paravertebral nerve blocks, and recent studies have shown that epidural anesthesia used during surgery can reduce postoperative cognitive dysfunction and postoperative stress responses.
On the other hand, epidural anesthesia is a highly challenging procedure. The clinician administering epidural anesthesia must feel the tip of the epidural needle with their fingers and "walk" the fingertip to pinpoint the location of the epidural space. In this way, the clinician performs this "walking" technique blindly. For this reason, when administering epidural anesthesia, the clinician needs not only anatomical knowledge but also experience and intuition.
The incidence of difficulty or inability to insert an epidural catheter during epidural anesthesia is approximately 7%, and reaches approximately 26% among anesthesia trainees. Difficulty inserting an epidural catheter can cause pain to the patient over time, and multiple epidural punctures cause localized pain to the patient. Furthermore, if the patient is unable to remain still during the puncture, the risk of nerve damage due to the epidural puncture increases, and there are cases where the epidural puncture procedure must be interrupted.
For this reason, methods for performing epidural anesthesia safely and accurately have been sought for many years. For example, Patent Document 1 and Non-Patent Document 1 describe an epidural puncture simulator for learning epidural puncture techniques.
JP 2002-132138 A
However, the human body model (epidural puncture simulator) described in Patent Document 1 and Non-Patent Document 1 only provides a method for learning blind epidural puncture. As such, the conventional human body model does not improve blind epidural puncture, nor does it further improve the safety and speed of epidural puncture.
The epidural anesthesia support system, epidural anesthesia training method, and display device control method disclosed in this specification make it possible to improve blind epidural puncture and increase the safety and speed of epidural puncture.
The disclosed epidural anesthesia support system is an epidural anesthesia support system for supporting epidural anesthesia, and includes a human body model that mimics at least a part of a human body that is the subject of epidural anesthesia training, and a goggle-type display device including a transparent display unit installed in front of the user's eyes and an output processing unit that outputs a first aerial image to the transparent display unit so that, when the user looks at the human body model through the transparent display unit, the first aerial image showing at least a part of the spine is displayed at a corresponding position on the human body model.
Furthermore, in the disclosed epidural anesthesia support system, it is preferable that the human body model has a spine model that mimics at least a part of the spine, and that the output processing unit outputs the first aerial image based on three-dimensional object data generated from a CT image of the spine model acquired by a CT device.
Furthermore, in the disclosed epidural anesthesia support system, it is preferable that the output processing unit outputs a second aerial image to the transparent display unit so that, when the user views the human body model through the transparent display unit, the second aerial image indicating at least one of the skin puncture point of the epidural needle used in epidural anesthesia, the puncture angle of the epidural needle, and the epidural space puncture point of the epidural needle is displayed at the corresponding position on the human body model.
Furthermore, in the disclosed epidural anesthesia support system, it is preferable that the human body model has a spine model that mimics at least a part of the spine, that the output processing unit outputs the first aerial image and the second aerial image based on three-dimensional object data generated from a CT image, acquired by a CT device, of the spine model punctured with the epidural needle, and that the epidural needle is inserted into the spine model so as to satisfy a medically appropriate skin puncture point, puncture angle, and epidural space puncture point.
The disclosed epidural anesthesia support system is an epidural anesthesia support system for supporting epidural anesthesia, and includes a goggle-type display device that includes a transparent display unit installed in front of the user's eyes, and an output processing unit that outputs an aerial image to the transparent display unit so that, when the user looks at a human body through the transparent display unit, an aerial image showing at least a part of the human body's spine is displayed at a corresponding position on the human body.
Furthermore, in the disclosed epidural anesthesia support system, it is preferable that the goggle-type display device allows the user wearing it to visualize the aerial image, and that after visualizing the aerial image, the user removes the goggle-type display device and performs epidural anesthesia as usual.
The disclosed epidural anesthesia training method is an epidural anesthesia training method for training epidural anesthesia using the disclosed epidural anesthesia support system, and includes a step of displaying the first aerial image on the goggle-type display device to allow a user wearing the goggle-type display device to insert an epidural needle used in epidural anesthesia into the human body model.
The disclosed epidural anesthesia training method is an epidural anesthesia training method for training epidural anesthesia using the disclosed epidural anesthesia support system, and includes the steps of: displaying the first aerial image on the goggle-type display device to allow a user wearing the goggle-type display device to insert an epidural needle into a spine model housed in the human body model; acquiring a CT image of the spine model into which the user has inserted the epidural needle; and displaying the skin puncture point, puncture angle, and epidural space puncture point of the epidural needle in that spine model.
Furthermore, it is preferable that the disclosed epidural anesthesia training method further includes a step in which the user puts on the goggle-type display device, visualizes the first aerial image, and then removes the goggle-type display device and inserts an epidural needle into the human body model.
The disclosed control method is a control method for a display device having an image acquisition unit, a storage unit, and a transparent display unit, in which the display device stores in the storage unit first three-dimensional object data showing at least a part of the shape of the surface of a specific human body and second three-dimensional object data showing at least a part of the spine of the specific human body, and displays, on the transparent display unit, an aerial image showing at least a part of the spine of the specific human body based on the second three-dimensional object data, based on an image of the specific human body acquired by the image acquisition unit and the first three-dimensional object data.
The disclosed epidural anesthesia support system, epidural anesthesia training method, and display device control method make it possible to improve blind epidural puncture and increase the safety and speed of epidural puncture.
FIG. 1 is a schematic diagram for explaining an example of an overview of an epidural anesthesia support system.
FIG. 2 is a diagram illustrating an example of a schematic configuration of a human body model.
FIG. 3 is a schematic diagram showing an example of a method for generating a lumbar spine model punctured with an epidural needle.
FIG. 4(a) is a perspective view showing an example of the appearance of a wearable device, and FIG. 4(b) is a diagram showing an example of a schematic configuration of the wearable device.
FIG. 5(a) is a schematic diagram showing an example of a three-dimensional object, and FIG. 5(b) is a diagram showing an example of an aerial image displayed on a wearable device.
FIG. 6 is a schematic diagram for explaining images resulting from epidural needle puncture training.
FIG. 7 is a diagram showing an example of an operation flow of an epidural anesthesia training method.
 Various embodiments of the present invention will be described below with reference to the drawings. It should be noted, however, that the technical scope of the present invention is not limited to these embodiments, but extends to the inventions described in the claims and their equivalents.
 (Outline of Epidural Anesthesia Support System 1)
 FIG. 1 is a schematic diagram for explaining an example of an overview of an epidural anesthesia support system 1. The epidural anesthesia support system 1 includes a human body model 2 and a wearable device 3. The human body model 2 has a main body portion 21 that imitates a human body and a skin portion 22 that imitates at least a part of the skin on the back of the human body. The main body portion 21 has a housing portion 23 that houses a spine model 24 imitating at least a part of a spine.
 The wearable device 3 is a goggle-type display device that can be worn on the head of a user (such as a physician or a medical resident). A transmissive display unit is installed in the wearable device 3 so as to be positioned in front of the eyes of the wearing user. The wearable device 3 has an MR (Mixed Reality) function that displays various aerial images on the transmissive display unit.
 As shown in FIG. 1(b), the wearable device 3 displays an aerial image A1 of the spine model 24 at the corresponding position of the human body model 2 as seen when a user wearing the wearable device 3 views the human body model 2 through the transmissive display unit. For example, the wearable device 3 stores three-dimensional object data of a three-dimensional object D1 representing the surface shape of the human body model 2 and three-dimensional object data of a three-dimensional object D2 representing the surface shape of the spine model 24. The wearable device 3 also stores the relative positional relationship between the three-dimensional object D1 and the three-dimensional object D2. The wearable device 3 reads out the three-dimensional object data of the three-dimensional object D1, recognizes the shape of the human body model 2 using a known MR function, and identifies the position of the human body model 2 in real space. The wearable device 3 then displays the aerial image A1 on the transmissive display unit based on the position of the human body model 2 in real space, the relative positional relationship between the three-dimensional object D1 and the three-dimensional object D2, and the three-dimensional object data of the three-dimensional object D2. For example, the aerial image A1 is a hologram (stereoscopic image) corresponding to the three-dimensional object D2 and is an example of a first aerial image.
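 The positioning described above can be thought of as a composition of rigid transforms: the pose of D1 recovered in real space, combined with the stored D1-to-D2 relative pose, gives the pose at which the hologram of D2 should be rendered. The following Python sketch illustrates this idea with 4x4 homogeneous matrices; the function names and numeric values are illustrative assumptions, not part of the disclosed system.

import numpy as np

def make_transform(rotation: np.ndarray, translation: np.ndarray) -> np.ndarray:
    """Build a 4x4 homogeneous transform from a 3x3 rotation and a 3-vector."""
    T = np.eye(4)
    T[:3, :3] = rotation
    T[:3, 3] = translation
    return T

# Pose of three-dimensional object D1 (human body model surface) in real space,
# assumed here to have been estimated by the MR device's shape recognition.
T_world_D1 = make_transform(np.eye(3), np.array([0.2, 0.0, 1.5]))

# Stored relative pose of D2 (spine model) with respect to D1.
T_D1_D2 = make_transform(np.eye(3), np.array([0.0, -0.05, 0.03]))

# Pose at which the aerial image A1 (hologram of D2) should be rendered.
T_world_D2 = T_world_D1 @ T_D1_D2
print(T_world_D2[:3, 3])  # rendering position of the spine hologram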
 If the spine model 24 imitates the lumbar vertebrae, the wearable device 3 displays an aerial image A1 showing the lumbar vertebrae at the position of the spine model 24 housed inside the human body model 2 as seen when the user wearing the wearable device 3 views the human body model 2 through the transmissive display unit. This allows the user wearing the wearable device 3 to view, through the transmissive display unit, the aerial image A1 displayed at the position corresponding to the spine model 24 housed inside the human body model 2.
 Note that the aerial image A1 is not limited to being displayed based on the three-dimensional object data of the three-dimensional object D2 representing the surface shape of the spine model 24. For example, the aerial image A1 may be displayed based on three-dimensional object data representing the surface shape of at least a part of a spine that is generated by a trainer operating an information processing device (such as a personal computer (PC)) capable of executing an application program for generating three-dimensional object data. The trainer is a person who provides epidural puncture training to trainees such as medical residents.
 As described above with reference to FIG. 1, the epidural anesthesia support system 1 includes the wearable device 3, on which the aerial image A1 showing at least a part of the spine is displayed at the position corresponding to at least a part of the spine of the human body model 2. In this way, the epidural anesthesia support system 1 makes it possible to visualize the region targeted by the epidural puncture, improving blind epidural puncture and increasing the safety and speed of epidural puncture.
 The above description of FIG. 1 is merely intended to deepen understanding of the contents of the present invention. Specifically, the present invention is embodied in the embodiments described below and may be embodied in various modifications without substantially departing from the principles of the present invention. All such modifications are included within the disclosed scope of the present invention and this specification.
 (Human body model 2)
 FIG. 2 is a diagram showing an example of the schematic configuration of the human body model 2. The human body model 2 includes the spine model 24, the skin portion 22, a code portion 25, and the like.
 The main body portion 21 shown in FIG. 2 has a lumbar portion imitating the waist of a human body. The main body portion 21 is not limited to the lumbar portion and may further include a back portion imitating the back of a human body and/or a buttock portion imitating the buttocks of a human body. Alternatively, the main body portion 21 may include only one of the back portion and the buttock portion. The main body portion 21 may also include models imitating other parts of the human body (for example, a neck portion imitating the neck, an upper arm portion imitating the upper arm, and/or a thigh portion imitating the thigh).
 The skin portion 22 is made of a soft synthetic resin imitating skin and subcutaneous tissue, and is configured so that its softness and feel when palpated by a user are similar to those of human skin. The housing portion 23 is a recessed groove provided in the back surface of the human body model 2 and houses the spine model 24.
 The spine model 24 imitates at least the spinous processes and the epidural space. The spine model 24 shown in FIG. 2(b) imitates the lumbar vertebrae of the spine. The spine model 24 is not limited to one imitating the lumbar vertebrae and may be a model imitating the cervical, thoracic, or sacral vertebrae. The spine model 24 may also imitate two or more of the cervical, thoracic, lumbar, and sacral vertebrae.
 The spine model 24 is housed in the housing portion 23 (FIG. 2(b)) and covered by the skin portion 22 (FIG. 2(a)) so that it cannot be seen by the user. A user not wearing the wearable device 3 cannot see the spine model 24 and therefore trains the epidural puncture procedure blindly using the human body model 2. Thus, the human body model 2 is a model imitating at least a part of a human body that is the subject of epidural anesthesia training, and the spine model 24 is a model that is housed in the human body model 2 and imitates at least a part of the spine targeted by the epidural puncture in epidural anesthesia. Details of the spine model 24 will be described later.
 The code portion 25 is, for example, a QR code (registered trademark) and is associated with the three-dimensional object data of each of the three-dimensional object D1 and the three-dimensional object D2. Other barcode information such as an AR marker may be used as the code portion 25. How the code portion 25 is used will be described later.
 As the components of the human body model 2 other than the code portion 25, a known epidural puncture simulator may be used (for example, "Lumbar/Epidural Puncture Simulator Lumbar-kun IIA", March 18, 2022, Kyoto Kagaku Co., Ltd., [retrieved September 1, 2022], Internet <URL: https://www.kyotokagaku.com/jp/products_data/m43b_02/?utm_source=YT&utm_medium=L&utm_campaign=TR>).
 (Spine model 24 with epidural needle N inserted)
 FIG. 3 is a schematic diagram showing an example of a method for generating the spine model 24 into which an epidural needle N has been inserted. The spine model 24 with the epidural needle N inserted is used to create three-dimensional object data. The epidural needle N inserted into the spine model 24 corresponds to three-dimensional object data for displaying an aerial image A2 showing at least one of the skin puncture point of the epidural needle N, the puncture angle of the epidural needle N, and the epidural space puncture point of the epidural needle N. The spine model 24 with the epidural needle N inserted is prepared by the trainer. The aerial image A2 is an example of a second aerial image.
 As shown in FIG. 3(a), the spine model 24 has a spinous process portion 241, a transparent portion 242, and an epidural space portion 243. The spinous process portion 241 imitates spinous processes and is made of a synthetic resin having a hardness similar to that of spinous processes. The transparent portion 242 is made of transparent silicone or the like configured to produce a resistance similar to that of the interspinous ligament when the epidural needle N is inserted, and the inserted epidural needle N can be seen through it. When the epidural needle N is inserted into the transparent portion 242, the insertion position of the epidural needle N on the upper surface of the transparent portion 242 may be referred to as the skin puncture point. The epidural space portion 243 is a hollow tubular body forming the epidural space. In the example shown in FIG. 3, the upper side of the epidural space portion 243 imitates the ligamentum flavum, and the point at which the epidural needle N inserted through the transparent portion 242 reaches the upper side of the epidural space portion 243 is the epidural space puncture point.
 Next, as shown in FIG. 3(b), the trainer inserts the epidural needle N into the spine model 24. The epidural needle N is inserted into the spine model 24 so as to satisfy a medically appropriate skin puncture point, puncture angle, and epidural space puncture point. The puncture angle is, for example, the deviation angle θ between the straight line passing through the skin puncture point and the epidural space puncture point and an axis pointing in the upward direction in FIG. 3(b).
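 As a concrete illustration, the deviation angle θ can be computed from the two puncture points as the angle between the needle direction vector and the vertical axis. The short Python sketch below shows this calculation; the coordinate values are made-up example numbers and are not taken from the disclosure.

import numpy as np

skin_point = np.array([0.0, 0.0, 40.0])       # skin puncture point (mm, example values)
epidural_point = np.array([5.0, 0.0, 0.0])    # epidural space puncture point (mm, example values)
up_axis = np.array([0.0, 0.0, 1.0])           # the "upward" axis in FIG. 3(b)

needle_dir = skin_point - epidural_point      # direction from the epidural space toward the skin
cos_theta = np.dot(needle_dir, up_axis) / np.linalg.norm(needle_dir)
theta_deg = np.degrees(np.arccos(np.clip(cos_theta, -1.0, 1.0)))
print(f"puncture (deviation) angle: {theta_deg:.1f} degrees")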
 Next, as shown in FIG. 3(c), the trainer cuts off the portion of the epidural needle N extending above the epidural space portion 243, completing the spine model 24 with the epidural needle N inserted. Alternatively, the spine model 24 with the epidural needle N inserted may be left in the state before cutting, with the epidural needle N still protruding as in FIG. 3(b).
 (3D object data)
 An example of a method for creating the three-dimensional object data will be described below. The trainer places the spine model 24, into which the epidural needle N has been inserted, in the human body model 2 and acquires CT images of the human body model 2 using a CT (Computed Tomography) apparatus (not shown).
 Next, the trainer uses an information processing device (not shown), such as a PC capable of executing an application program for generating three-dimensional object data, to generate the three-dimensional object data of the three-dimensional object D1 and the three-dimensional object data of the three-dimensional object D2 from the CT images of the human body model 2. A known application program ("Unity Pro" (registered trademark)) may be used as the application program for generating three-dimensional object data.
 For example, for each CT image of the human body model 2, the information processing device identifies pixels P1 corresponding to the surfaces of the main body portion 21 and the skin portion 22, pixels P2 corresponding to the surfaces of the spinous process portion 241 and/or the epidural space portion 243 of the spine model 24, and pixels P3 corresponding to the surface of the epidural needle N. The information processing device then uses the pixels P1 to P3 identified in all of the CT images to generate three-dimensional object data by a known conversion process. The three-dimensional object data of the three-dimensional object D1 is generated based on the pixels P1, and the three-dimensional object data of the three-dimensional object D2 is generated based on the pixels P2. In addition, three-dimensional object data of a three-dimensional object D3 representing the surface shape of the epidural needle N is generated based on the pixels P3.
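 One common way to turn per-slice pixel labels into a surface model is to stack the CT slices into a volume, select each structure by an intensity range, and extract a triangle mesh with marching cubes. The Python sketch below, using NumPy and scikit-image, is a generic illustration of that kind of conversion under assumed intensity thresholds and a hypothetical file name; it is not the specific conversion process named in the disclosure.

import numpy as np
from skimage import measure

def surface_from_ct(volume: np.ndarray, lower: float, upper: float):
    """Extract a triangle mesh for voxels whose intensity lies in [lower, upper]."""
    mask = ((volume >= lower) & (volume <= upper)).astype(np.float32)
    # marching_cubes returns vertices, faces, vertex normals, and values
    verts, faces, normals, _ = measure.marching_cubes(mask, level=0.5)
    return verts, faces, normals

# volume: stacked CT slices, shape (num_slices, height, width); thresholds are assumptions.
volume = np.load("human_body_model_ct.npy")        # hypothetical file name
d1_mesh = surface_from_ct(volume, -200.0, 300.0)   # body surface / skin (pixels P1)
d2_mesh = surface_from_ct(volume, 300.0, 1500.0)   # spine model structures (pixels P2)
d3_mesh = surface_from_ct(volume, 1500.0, 4000.0)  # metallic epidural needle (pixels P3)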
 The three-dimensional object D3 is not limited to one representing the surface shape of the epidural needle N itself. For example, the three-dimensional object D3 may be a spherical object corresponding to the skin puncture point of the epidural needle N, or a spherical object indicating the epidural space puncture point of the epidural needle N. A line-segment object corresponding to the epidural needle N may also be rendered as an arrow, in which case the direction of the arrow indicates the puncture direction. The three-dimensional object D3 may include a plurality of the objects exemplified above.
 (Wearable device 3)
 FIG. 4(a) is a perspective view showing an example of the wearable device 3, and FIG. 4(b) is a diagram showing an example of the schematic configuration of the wearable device 3. The wearable device 3 is a goggle-type display device that can be worn on the head of a user and has an MR (Mixed Reality) function of displaying various aerial images on a half mirror unit 36 installed in front (x direction) of the eyes of the wearing user. To realize these functions, the wearable device 3 includes a communication I/F 31, a storage unit 32, a sensor acquisition unit 33, a video output unit 34, a processing unit 35, the half mirror unit 36, an environmental sensor 37, a depth sensor 38, and the like. A known MR HMD (Head Mounted Display) (for example, "Microsoft HoloLens 2", [retrieved September 1, 2022], Internet <URL: https://www.microsoft.com/ja-jp/hololens/>) may be used as the wearable device 3.
 The communication I/F 31 includes a communication interface circuit that performs wireless communication with an access point of a wireless LAN (Local Area Network) (not shown) based on the wireless communication scheme of the IEEE (The Institute of Electrical and Electronics Engineers, Inc.) 802.11 standard. The communication I/F 31 receives three-dimensional object data transmitted by wireless communication from the information processing device. The communication I/F 31 may also establish a wireless signal channel with a base station (not shown), via a channel assigned by the base station, using LTE (Long Term Evolution), a fifth-generation (5G) mobile communication system, or the like, and communicate with the base station. The communication I/F 31 may further include an interface circuit for short-range wireless communication conforming to a communication scheme such as Bluetooth (registered trademark) and receive radio waves from the information processing device. The communication I/F 31 may also include a wired LAN communication interface circuit. The wearable device 3 can thereby acquire the three-dimensional object data (the three-dimensional object data of the three-dimensional objects D1, D2, and D3) from the information processing device via the communication I/F 31.
 The storage unit 32 is, for example, a semiconductor memory device such as a ROM (Read Only Memory) or a RAM (Random Access Memory). The storage unit 32 stores an operating system program, driver programs, application programs, data, and the like used for processing in the processing unit 35. The driver programs stored in the storage unit 32 include a communication device driver program that controls the communication I/F 31, an output device driver program that controls the video output unit 34, an environmental sensor device driver program that controls the environmental sensor 37, and a depth sensor device driver program that controls the depth sensor 38. The application programs stored in the storage unit 32 are various control programs for realizing the MR function in the processing unit 35. The storage unit 32 also stores the three-dimensional object data acquired from the information processing device (the three-dimensional object data of the three-dimensional objects D1, D2, and D3).
 The sensor acquisition unit 33 has a function of acquiring various sensor data from the environmental sensor 37 and the depth sensor 38 and passing them to the processing unit 35. The video output unit 34 has a function of projecting aerial images onto the half mirror unit 36 using a holographic optical element, based on display data for displaying the aerial images.
 The processing unit 35 is a processing device that loads the operating system program, the driver programs, and the application programs stored in the storage unit 32 into memory and executes the instructions contained in the loaded programs. The processing unit 35 is, for example, an electronic circuit such as a CPU (Central Processing Unit), an MPU (Micro Processing Unit), or a DSP (Digital Signal Processor), or a combination of such electronic circuits. Although the processing unit 35 is illustrated as a single component in FIG. 4(b), it may be a set of physically separate processors.
 By executing the various instructions contained in the control programs, the processing unit 35 functions as a recognition unit 351 and an output processing unit 352. The functions of the recognition unit 351 and the output processing unit 352 will be described later.
 The half mirror unit 36 is an example of a transmissive display unit placed in front of the user's eyes when the wearable device 3 is worn by the user. The half mirror unit 36 displays the aerial images projected from the video output unit 34. This allows the user to view the displayed aerial images superimposed on the real world in the user's line of sight.
 The environmental sensor 37 has an optical lens, an imaging element, and the like. The optical lens focuses the light flux from a subject onto the imaging surface of the imaging element. The imaging element is a CCD (Charge Coupled Device), a CMOS (Complementary Metal Oxide Semiconductor) sensor, or the like, and outputs an image of the subject formed on the imaging surface. The environmental sensor 37 creates still image data in a predetermined file format from the image generated by the imaging element and outputs it as environmental sensor data. When outputting the environmental sensor data, the environmental sensor 37 may associate the image data with its acquisition time. The environmental sensor 37 passes the environmental sensor data to the processing unit 35 at predetermined time intervals (for example, every second).
 The depth sensor 38 is, for example, a pair of infrared sensors. The depth sensor 38 outputs infrared images, measures the distances to a plurality of points on the surface of an object from the output infrared images using a known active stereo method, and outputs depth data (distance data) corresponding to each of the points. The depth sensor 38 passes the depth data to the processing unit 35 at predetermined time intervals (for example, every second).
 An example of the functions of the recognition unit 351 and the output processing unit 352 is described below.
 Each time the recognition unit 351 receives environmental sensor data from the environmental sensor 37 and depth data from the depth sensor 38, it estimates the device's own position and generates three-dimensional mesh data of the surfaces of surrounding objects using a known SLAM (Simultaneous Localization And Mapping) technique.
 The recognition unit 351 also determines whether the code portion 25 is included in the image indicated by the environmental sensor data from the environmental sensor 37. When the recognition unit 351 determines that the code portion 25 is included in the image, it acquires the three-dimensional object data associated with the code portion 25 and passes it to the output processing unit 352. The recognition unit 351 further calculates the orientation (for example, a three-dimensional unit vector) and the distance of the code portion 25 relative to the wearable device 3 based on the size and shape of the code portion 25 in the image. A known calculation method (for example, JP 2016-57758 A) may be used as the process of calculating the orientation and distance of a target object based on the size and shape of a reference display object such as the code portion 25.
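 As a general illustration of this kind of marker-based calculation, the corners of a square code of known physical size can be detected in the camera image and passed to a perspective-n-point solver to recover the marker's orientation and distance. The OpenCV-based Python sketch below shows one such approach under assumed camera intrinsics, an assumed marker size, and a hypothetical image file; it is not the specific method referenced in the disclosure.

import cv2
import numpy as np

MARKER_SIDE_MM = 50.0  # assumed physical side length of the code portion

# Assumed camera intrinsics of the environmental sensor (fx, fy, cx, cy).
camera_matrix = np.array([[600.0, 0.0, 320.0],
                          [0.0, 600.0, 240.0],
                          [0.0, 0.0, 1.0]])
dist_coeffs = np.zeros(5)

image = cv2.imread("environment_frame.png")        # hypothetical captured frame
found, corners = cv2.QRCodeDetector().detect(image)
if found:
    half = MARKER_SIDE_MM / 2.0
    object_points = np.array([[-half, -half, 0.0], [half, -half, 0.0],
                              [half, half, 0.0], [-half, half, 0.0]])
    ok, rvec, tvec = cv2.solvePnP(object_points,
                                  corners.reshape(4, 2).astype(np.float64),
                                  camera_matrix, dist_coeffs)
    if ok:
        distance_mm = float(np.linalg.norm(tvec))   # distance to the code portion
        direction = (tvec / distance_mm).ravel()    # 3D unit vector toward the marker
        print(distance_mm, direction)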
 The output processing unit 352 computes the three-dimensional objects D1, D2, and D3 based on the three-dimensional object data acquired from the recognition unit 351, rotated and resized according to the orientation and distance of the code portion 25 acquired from the recognition unit 351. FIG. 5(a) is a schematic diagram showing an example of the three-dimensional objects D1, D2, and D3. Although FIG. 5(a) includes the code portion 25 for purposes of explanation, the acquired three-dimensional object data need not include object data corresponding to the code portion 25.
 The output processing unit 352 intermittently aligns the three-dimensional mesh data of the surface of the object corresponding to the human body model 2, output from the recognition unit 351, with the identified three-dimensional object D1, and passes display data (projection data) of the aerial image A1 corresponding to the three-dimensional object D2 and the aerial image A2 corresponding to the three-dimensional object D3 to the video output unit 34.
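 Alignment of a sensed surface mesh with a stored surface model is commonly performed with a point-cloud registration algorithm such as ICP. The Open3D-based Python sketch below shows a minimal version of that idea, assuming the sensed mesh and the D1 model are available as point clouds in hypothetical files; the disclosure does not specify which registration algorithm is used.

import numpy as np
import open3d as o3d

# Hypothetical inputs: sensed surface of the human body model and the stored D1 surface.
sensed = o3d.io.read_point_cloud("sensed_surface.ply")
d1_model = o3d.io.read_point_cloud("object_d1.ply")

# Coarse initial guess (identity), refined by point-to-point ICP.
init = np.eye(4)
result = o3d.pipelines.registration.registration_icp(
    d1_model, sensed, 0.02, init,
    o3d.pipelines.registration.TransformationEstimationPointToPoint())

# The resulting transform places D1 (and, via the stored relative pose, D2 and D3)
# in the coordinate frame of the sensed environment.
T_sensed_D1 = result.transformation
print(T_sensed_D1)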
 While viewing the aerial images A1 and A2 based on the display data (projection data) created in this way on the half mirror unit 36, the user can see the real world in front of them. FIG. 5(b) is a diagram showing an example of the aerial images A1 and A2 displayed on the half mirror unit 36. Thereafter, the aerial images A1 and A2 are translated and/or rotated in the display according to the result of the intermittently executed alignment between the three-dimensional mesh data of the surface of the object corresponding to the human body model 2 and the identified three-dimensional object D1.
 The user trains inserting the epidural needle N into the human body model 2 while wearing the wearable device 3 on which the aerial images A1 and A2 are displayed. After the training, once the user has inserted the epidural needle N, the trainer removes the punctured spine model 24 from the human body model 2 and acquires CT images of it using the CT apparatus. The trainer then uses the information processing device to generate and display a puncture-training result image containing both an image R1 of an epidural needle N satisfying the medically appropriate skin puncture point, puncture angle, and epidural space puncture point, and an image R2 of the epidural needle N inserted by the user.
 FIG. 6 is a diagram for explaining the result images of the epidural needle N puncture training. FIG. 6(a) shows the top surface of the spine model 24, on which the positions of the skin puncture points are displayed: the position marked "×" indicates the skin puncture point of the image R1, and the position marked "○" indicates the skin puncture point of the image R2. FIG. 6(b) is a side view of the spine model 24, on which the puncture angles are displayed; the image R1 of the epidural needle N and the image R2 of the epidural needle N are shown. FIG. 6(c) shows the upper surface of the epidural space portion 243, on which the positions of the epidural space puncture points are displayed: the position marked "×" indicates the epidural space puncture point of the image R1, and the position marked "○" indicates the epidural space puncture point of the image R2. In this way, the image R1 of an epidural needle N satisfying the medically appropriate skin puncture point, puncture angle, and epidural space puncture point can be presented to the user in comparison with the image R2 of the epidural needle N inserted by the user.
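 The comparison between R1 and R2 naturally lends itself to simple numeric feedback: the distance between the two skin puncture points, the distance between the two epidural space puncture points, and the difference between the two puncture angles. The Python sketch below computes these three quantities from assumed example coordinates; such metrics are an illustration and are not explicitly defined in the disclosure.

import numpy as np

def needle_angle(skin_pt, epidural_pt, up=np.array([0.0, 0.0, 1.0])) -> float:
    """Deviation angle (degrees) of the needle axis from the vertical axis."""
    v = skin_pt - epidural_pt
    cos_t = np.dot(v, up) / np.linalg.norm(v)
    return float(np.degrees(np.arccos(np.clip(cos_t, -1.0, 1.0))))

# R1: medically appropriate reference needle; R2: needle inserted by the user (example values, mm).
r1_skin, r1_epi = np.array([0.0, 0.0, 40.0]), np.array([5.0, 0.0, 0.0])
r2_skin, r2_epi = np.array([3.0, 2.0, 40.0]), np.array([9.0, 1.0, 0.0])

skin_error_mm = float(np.linalg.norm(r2_skin - r1_skin))
epidural_error_mm = float(np.linalg.norm(r2_epi - r1_epi))
angle_error_deg = abs(needle_angle(r2_skin, r2_epi) - needle_angle(r1_skin, r1_epi))
print(skin_error_mm, epidural_error_mm, angle_error_deg)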
 FIG. 7 is a diagram showing an example of the operation flow of the epidural anesthesia training method.
 First, when the user looks at the human body model 2 through the half mirror unit 36 after putting on the wearable device 3, the aerial image A1 and/or the aerial image A2 is displayed on the half mirror unit 36 (step S101).
 Next, the user performs a first training run of inserting the epidural needle N into the human body model 2 while wearing the wearable device 3 on which the aerial images A1 and A2 are displayed (step S102). During the first training run, the user forms a mental image of the aerial image A1 and/or the aerial image A2 displayed by the wearable device 3.
 Next, after the user's first insertion of the epidural needle N, the trainer removes the punctured spine model 24 from the human body model 2 and acquires CT images using the CT apparatus (step S103).
 Next, the user performs a second training run of inserting the epidural needle N into the human body model 2 without wearing the wearable device 3 (step S104).
 Next, after the user's second insertion of the epidural needle N, the trainer removes the punctured spine model 24 from the human body model 2 and acquires CT images using the CT apparatus (step S105).
 The trainer then uses the information processing device to generate and display a puncture-training result image containing both the image R1 of an epidural needle N satisfying the medically appropriate skin puncture point, puncture angle, and epidural space puncture point, and the image R2 of the epidural needle N inserted by the user (step S106), and the epidural anesthesia training method ends.
 As described in detail above, the epidural anesthesia support system 1 of the present embodiment includes the wearable device 3, on which the aerial image A1 showing the spine is displayed at the position corresponding to the spine of the human body model 2. In this way, the epidural anesthesia support system 1 makes it possible to visualize the region targeted by the epidural puncture, improving blind epidural puncture and increasing the safety and speed of epidural puncture.
 (Variation 1)
 The present invention is not limited to the above embodiment. For example, the epidural anesthesia support system 1 need not include the human body model 2. In this case, instead of targeting the human body model 2, the wearable device 3 projects the aerial image A1 onto the half mirror unit 36 so that the aerial image A1 showing at least a part of the spine of a human body, such as a patient, is displayed at the position corresponding to at least a part of the spine of that human body.
 For example, the trainer acquires CT images of the human body (patient) using a CT apparatus (not shown). Next, using an information processing device (not shown), such as a PC capable of executing the application program for generating three-dimensional object data, the trainer generates, from the CT images of the human body, three-dimensional object data of a three-dimensional object D1 representing at least a part of the surface shape of the human body (for example, the torso (back, waist, and buttocks)) and three-dimensional object data of a three-dimensional object D2 representing the shape of at least a part of the spine of the human body. In this case, the trainer generates the three-dimensional object data of the three-dimensional object D3 by manual operation. The storage unit 32 stores the three-dimensional object data of the three-dimensional objects D1, D2, and D3 acquired from the information processing device.
 The storage unit 32 stores the shape of the three-dimensional object D1 and the code portion 25 displayed on a sheet member placed on the back of the human body in association with each other. If no sheet member displaying the code portion 25 is provided on the human body, the storage unit 32 instead stores the shape of the three-dimensional object D1 and the positions of at least some characteristic landmarks of the vertebral bodies of the human body in association with each other. The three-dimensional object data of the three-dimensional object D1 is an example of the first three-dimensional object data, and the three-dimensional object data of the three-dimensional object D2 is an example of the second three-dimensional object data. The characteristic landmarks of the vertebral bodies associated with the shape of the three-dimensional object D1 are, for example, the seventh cervical vertebra (vertebra prominens) and the anterior superior iliac spine.
 When the sheet member displaying the code portion 25 is placed on the human body, the recognition unit 351 calculates the orientation (for example, a three-dimensional unit vector) and the distance of the code portion 25 relative to the wearable device 3 using a known calculation method, based on the size and shape of the code portion 25 in the image indicated by the environmental sensor data from the environmental sensor 37. The output processing unit 352 then computes the three-dimensional objects D1, D2, and D3 based on the three-dimensional object data acquired from the recognition unit 351, rotated and resized according to the orientation and distance of the code portion 25 acquired from the recognition unit 351.
 When no sheet member displaying the code portion 25 is placed on the human body, the recognition unit 351 recognizes the positions of at least some characteristic landmarks of the vertebral bodies of the human body and identifies the human body based on the relative positions of those landmarks and the surface shape of the torso of the human body. For example, the recognition unit 351 determines whether at least some characteristic landmarks of the vertebral bodies of the human body are included in the image indicated by the environmental sensor data from the environmental sensor 37. The environmental sensor 37 is an example of an image acquisition unit. When the recognition unit 351 determines that at least some characteristic landmarks of the vertebral bodies are included in the image, it acquires the three-dimensional object D1 associated with the positions of those landmarks and passes it to the output processing unit 352. The recognition unit 351 also calculates the orientation (for example, a three-dimensional unit vector) and the distance of the landmarks relative to the wearable device 3, using a known calculation method, based on the positions of the landmarks in the image. The output processing unit 352 computes the three-dimensional objects D1, D2, and D3 based on the three-dimensional object data acquired from the recognition unit 351, rotated and resized according to the orientation and distance of the landmarks acquired from the recognition unit 351.
 Thereafter, the output processing unit 352 intermittently aligns the three-dimensional mesh data of the surface of the object corresponding to the human body, output from the recognition unit 351, with the three-dimensional object D1, and passes the display data (projection data) of the aerial image A1 corresponding to the three-dimensional object D2 and/or the aerial image A2 corresponding to the three-dimensional object D3 to the video output unit 34. In this way, the epidural anesthesia support system 1 and the wearable device 3 can superimpose the aerial image A1 and/or the aerial image A2 corresponding to the spine of the human body on that human body, helping the user understand, for example, the shape of the spine of that human body and allowing the user to form a mental image of the aerial image A1 and/or the aerial image A2 displayed by the wearable device 3. The user can then remove the wearable device 3 after forming the mental image of the aerial image A1 and/or the aerial image A2 and administer epidural anesthesia in the usual manner.
 Further, in Variation 1, when the human body (patient) performs an action that deforms the surface of the lumbar region (such as twisting or bending the waist), the three-dimensional object D2 of the corresponding spine may be deformed in accordance with the change in the lumbar surface of the human body. For example, a plurality of points of the three-dimensional object D2 may be associated with the nearest positions on the three-dimensional object D1, and the output processing unit 352 may deform the three-dimensional object D1 in synchronization with the deformation of the lumbar region of the human body and move the corresponding points of the three-dimensional object D2 so as to follow the deformation of the three-dimensional object D1, thereby deforming the aerial image A1 corresponding to the spine.
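 One simple way to realize this kind of following is to bind each vertex of D2 to its nearest vertex on D1 and carry the stored offset along when the D1 surface moves. The Python sketch below, using SciPy's k-d tree, illustrates that binding under the assumption that D1 and D2 are given as vertex arrays in hypothetical files; it is only one possible realization of the deformation described above.

import numpy as np
from scipy.spatial import cKDTree

def bind_to_surface(d2_vertices: np.ndarray, d1_vertices: np.ndarray):
    """For each D2 vertex, store the index of and offset from its nearest D1 vertex."""
    tree = cKDTree(d1_vertices)
    _, nearest_idx = tree.query(d2_vertices)
    offsets = d2_vertices - d1_vertices[nearest_idx]
    return nearest_idx, offsets

def follow_deformation(deformed_d1: np.ndarray, nearest_idx: np.ndarray, offsets: np.ndarray):
    """Move the D2 vertices so that they follow the deformed D1 surface."""
    return deformed_d1[nearest_idx] + offsets

# Rest-pose vertex arrays (N1 x 3, N2 x 3) and D1 after the patient moves (hypothetical files).
d1_rest = np.load("d1_rest.npy")
d2_rest = np.load("d2_rest.npy")
d1_deformed = np.load("d1_deformed.npy")

idx, off = bind_to_surface(d2_rest, d1_rest)
d2_deformed = follow_deformation(d1_deformed, idx, off)  # updated spine hologram vertices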
 (Variation 2)
 In step S104 of the epidural anesthesia training method shown in FIG. 7, the user may perform the second training run of inserting the epidural needle N into the human body model 2 while wearing the wearable device 3 on which the aerial images A1 and A2 are displayed.
Reference Signs List
1  Epidural anesthesia support system
2  Human body model
21  Main body portion
22  Skin portion
23  Housing portion
24  Spine model
241  Spinous process portion
242  Transparent portion
243  Epidural space portion
25  Code portion
3  Wearable device
31  Communication I/F
32  Storage unit
33  Sensor acquisition unit
34  Video output unit
35  Processing unit
351  Recognition unit
352  Output processing unit
36  Half mirror unit
37  Environmental sensor
38  Depth sensor
N  Epidural needle

Claims (11)

  1.  An epidural anesthesia support system for supporting epidural anesthesia, comprising:
     a human body model imitating at least a part of a human body that is a training subject for the epidural anesthesia; and
     a goggle-type display device comprising:
      a transmissive display unit disposed in front of a user's eyes; and
      an output processing unit that outputs a first aerial image showing at least a part of a spine to the transmissive display unit so that, when the user views the human body model through the transmissive display unit, the first aerial image is displayed at a corresponding position on the human body model.
  2.  The epidural anesthesia support system according to claim 1, wherein the human body model has a spine model imitating at least a part of the spine, and
     the output processing unit outputs the first aerial image based on three-dimensional object data generated from a CT image of the spine model acquired by a CT apparatus.
  3.  The epidural anesthesia support system according to claim 1, wherein the output processing unit outputs a second aerial image to the transmissive display unit so that, when the user views the human body model through the transmissive display unit, the second aerial image showing at least one of a skin puncture point of an epidural needle used in the epidural anesthesia, a puncture angle of the epidural needle, and an epidural space puncture point of the epidural needle is displayed at a corresponding position on the human body model.
  4.  The epidural anesthesia support system according to claim 3, wherein the human body model has a spine model imitating at least a part of the spine,
     the output processing unit outputs the first aerial image and the second aerial image based on three-dimensional object data generated from a CT image, acquired by a CT apparatus, of the spine model into which the epidural needle has been inserted, and
     the epidural needle is inserted into the spine model so as to satisfy the medically appropriate skin puncture point, puncture angle, and epidural space puncture point.
  5.  An epidural anesthesia support system for supporting epidural anesthesia, comprising a goggle-type display device that comprises:
      a transmissive display unit disposed in front of a user's eyes; and
      an output processing unit that outputs an aerial image showing at least a part of a spine of a human body to the transmissive display unit so that, when the user views the human body through the transmissive display unit, the aerial image is displayed at a corresponding position on the human body.
  6.  The epidural anesthesia support system according to claim 5, wherein the goggle-type display device allows the wearing user to form a mental image of the aerial image, and
     the user removes the goggle-type display device after forming the mental image of the aerial image and administers epidural anesthesia in the usual manner.
  7.  An epidural anesthesia training method for training the epidural anesthesia using the epidural anesthesia support system according to any one of claims 1 to 4, comprising:
     displaying the first aerial image on the goggle-type display device so that the user wearing the goggle-type display device inserts an epidural needle used in the epidural anesthesia into the human body model.
  8.  The epidural anesthesia training method according to claim 7, further comprising:
     a step in which the user wears the goggle-type display device to form a mental image of the first aerial image, and then removes the goggle-type display device and inserts the epidural needle into the human body model.
  9.  An epidural anesthesia training method for training the epidural anesthesia using the epidural anesthesia support system according to claim 2 or 4, comprising:
     displaying the first aerial image on the goggle-type display device so that the user wearing the goggle-type display device inserts an epidural needle into the spine model mounted in the human body model;
     acquiring a CT image of the spine model into which the epidural needle has been inserted by the user; and
     displaying the skin puncture point, the puncture angle, and the epidural space puncture point of the epidural needle in the spine model into which the epidural needle has been inserted by the user.
  10.  The epidural anesthesia training method according to claim 9, further comprising:
     a step in which the user wears the goggle-type display device to form a mental image of the first aerial image, and then removes the goggle-type display device and inserts the epidural needle into the human body model.
  11.  A method for controlling a display device comprising an image acquisition unit, a storage unit, and a transmissive display unit, wherein the display device:
     stores, in the storage unit, first three-dimensional object data representing at least a part of a surface shape of a specific human body and second three-dimensional object data representing at least a part of a spine of the specific human body; and
     displays, on the transmissive display unit, an aerial image showing at least a part of the spine of the specific human body based on the second three-dimensional object data, on the basis of an image of the specific human body acquired by the image acquisition unit and the first three-dimensional object data.
PCT/JP2023/039166 2022-10-31 2023-10-30 Epidural anesthesia assistance system, epidural anesthesia training method, and display device control method WO2024095985A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2022174821A JP2024065792A (en) 2022-10-31 2022-10-31 Epidural anesthesia support system, epidural anesthesia training method, and display device control method
JP2022-174821 2022-10-31

Publications (1)

Publication Number Publication Date
WO2024095985A1 true WO2024095985A1 (en) 2024-05-10

Family

ID=90930582

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2023/039166 WO2024095985A1 (en) 2022-10-31 2023-10-30 Epidural anesthesia assistance system, epidural anesthesia training method, and display device control method

Country Status (2)

Country Link
JP (1) JP2024065792A (en)
WO (1) WO2024095985A1 (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060194180A1 (en) * 1996-09-06 2006-08-31 Bevirt Joeben Hemispherical high bandwidth mechanical interface for computer systems
JP2018522646A (en) * 2015-06-25 2018-08-16 リヴァンナ メディカル、エルエルシー. Probe ultrasound guidance for anatomical features
JP2018112646A (en) * 2017-01-11 2018-07-19 村上 貴志 Surgery training system
JP2021510107A (en) * 2018-01-08 2021-04-15 リヴァンナ メディカル、エルエルシー. Three-dimensional imaging and modeling of ultrasound image data
JP2020038272A (en) * 2018-09-03 2020-03-12 学校法人 久留米大学 Controller, controller manufacturing method, pseudo experience system, and pseudo experience method

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
ANONYMOUS: "Spinal Simulator", UNISYS CO., LTD., 7 March 2021 (2021-03-07), XP093164328, Retrieved from the Internet <URL:https://www.unisis.co.jp/product/product-list/u_10100018/> *

Also Published As

Publication number Publication date
JP2024065792A (en) 2024-05-15

Similar Documents

Publication Publication Date Title
US11195340B2 (en) Systems and methods for rendering immersive environments
CN111986316B (en) System and method for generating pressure point diagrams based on remotely controlled haptic interactions
EP3809966B1 (en) Extended reality visualization of range of motion
US20190333480A1 (en) Improved Accuracy of Displayed Virtual Data with Optical Head Mount Displays for Mixed Reality
CN109069208B (en) Ultra-wideband positioning for wireless ultrasound tracking and communication
CN105250062B (en) A kind of 3D printing bone orthopedic brace preparation method based on medical image
US10601950B2 (en) Reality-augmented morphological procedure
CN109288591A (en) Surgical robot system
CN106880475A (en) Wear-type virtual reality vision training apparatus
JP2004298430A (en) Pain therapeutic support apparatus and method for displaying phantom limb image in animating manner in virtual space
Villamil et al. Simulation of the human TMJ behavior based on interdependent joints topology
US20200334998A1 (en) Wearable image display device for surgery and surgery information real-time display system
Shaikh et al. Exposure to extended reality and artificial intelligence-based manifestations: a primer on the future of hip and knee arthroplasty
Castelan et al. Augmented reality anatomy visualization for surgery assistance with HoloLens: AR surgery assistance with HoloLens
WO2024095985A1 (en) Epidural anesthesia assistance system, epidural anesthesia training method, and display device control method
Dugailly et al. Kinematics of the upper cervical spine during high velocity-low amplitude manipulation. Analysis of intra-and inter-operator reliability for pre-manipulation positioning and impulse displacements
JP6903317B2 (en) Neuropathic pain treatment support system and image generation method for pain treatment support
US20160030764A1 (en) Non-tactile sensory substitution device
CN115188232A (en) Medical teaching comprehensive training system and method based on MR-3D printing technology
JP7112077B2 (en) CONTROLLER, CONTROLLER MANUFACTURING METHOD, SIMULATED EXPERIENCE SYSTEM, AND SIMULATED EXPERIENCE METHOD
KR102202357B1 (en) System and method for timulus-responsive virtual object augmentation abouut haptic device
Tanaka et al. Creating Method for Real-Time CG Animation of Cooperative Motion of Both Hands for Mirror Box Therapy Support System
US20240177632A1 (en) Devices and methods assessing chiropractic technique
JP6425355B2 (en) Upper limb motor learning device
de la Lastra Augmented Reality in Image-Guided Therapy to Improve Surgical Planning and Guidance.

Legal Events

Date Code Title Description
121 EP: the EPO has been informed by WIPO that EP was designated in this application

Ref document number: 23885739

Country of ref document: EP

Kind code of ref document: A1