CN117253014A - Mixed reality interaction method and system for meridian acupoints - Google Patents
- Publication number
- CN117253014A (application CN202310997062.3A)
- Authority
- CN
- China
- Legal status: Pending (assumed; not a legal conclusion)
Classifications
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61H—PHYSICAL THERAPY APPARATUS, e.g. DEVICES FOR LOCATING OR STIMULATING REFLEX POINTS IN THE BODY; ARTIFICIAL RESPIRATION; MASSAGE; BATHING DEVICES FOR SPECIAL THERAPEUTIC OR HYGIENIC PURPOSES OR SPECIFIC PARTS OF THE BODY
- A61H39/00—Devices for locating or stimulating specific reflex points of the body for physical therapy, e.g. acupuncture
- A61H39/02—Devices for locating such points
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
- G06T17/20—Finite element generation, e.g. wire-frame surface description, tesselation
- G06T17/205—Re-meshing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T19/00—Manipulating 3D models or images for computer graphics
- G06T19/006—Mixed reality
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T19/00—Manipulating 3D models or images for computer graphics
- G06T19/20—Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
Abstract
The invention discloses a mixed reality interaction method and system for meridian acupoints. A standard human body is three-dimensionally reconstructed to build a three-dimensional human body model; acupoints are established on the model to construct a three-dimensional meridian acupoint model; scan data of body-surface landmarks on a patient's three-dimensional body-surface structure model are acquired, and the constructed meridian acupoint model is matched into the patient's body-surface model; the patient's three-dimensional meridian acupoint system is then displayed in real time. By means of mixed reality technology, the technical scheme of the invention flattens the learning curve for people who need to learn the meridian acupoints and for related practitioners, letting them locate meridians and acupoints more accurately, quickly and intuitively, learn the meridian acupoint system more precisely, and implement traditional Chinese medicine treatments such as acupuncture and massage more accurately and effectively.
Description
Technical Field
The invention relates to the technical field of mixed reality interaction, and in particular to a mixed reality interaction method and system for meridian acupoints.
Background
Traditional Chinese medicine (TCM) is one of the essences of Chinese culture, and the meridian acupoint system is among its most distinctive contents. The meridian system comprises the twelve principal meridians, the eight extraordinary meridians, the twelve divergent meridians, the fifteen collaterals, the twelve meridian sinews and the twelve cutaneous regions; the meridians are the channels through which qi and blood circulate, connecting the trunk with the limbs and joints, the upper body with the lower, and the interior with the exterior. Meridian theory is an important component of the theoretical foundation of traditional Chinese medicine. Together with theories such as visceral manifestation and pathogenesis, it explains the physiological functions and pathological changes of the human body and guides TCM clinical diagnosis and treatment. Acupoints are the specific sites where the qi of the viscera and meridians is transported to and infused at the body surface. Because they are reaction points of disease and are closely related to the meridians — visceral lesions can manifest along the meridians at the corresponding acupoints — they serve as the stimulation points for acupuncture treatment of disease. Acupoints are divided into meridian points, extra-meridian points and ashi points; they infuse the qi and blood of the viscera and meridians and communicate the body surface with the internal organs.
However, unlike objectively visible entities such as blood vessels and nerves, meridian acupoints vary between individuals. In teaching and in novice practice, their positioning is often fuzzy, and accurate location and treatment require an experienced TCM doctor working together with the patient's subjective sensations. Accurate positioning of meridian acupoints is therefore of great significance in both clinical application and teaching. Moreover, the acupoints on the meridians are numerous and hard to memorize in learning practice. TCM educators have therefore made physical models and wall charts of the human acupoints that accurately mark acupoint positions and names, but in TCM learning and actual treatment it remains difficult to find the exact positions of meridian acupoints quickly and accurately on a real human body. Beginners and people without a TCM background in particular need a more intuitive and accurate method for identifying meridian acupoints.
The three most classical acupoint-location methods are location by body-surface landmarks, proportional bone (skeletal) measurement, and finger-cun (same-body-cun) measurement; simplified acupoint-selection methods have also gradually developed in clinical exploration. The current mainstream view holds that finger-cun measurement varies too much between individuals, which hampers clinical acupoint selection and academic communication, whereas the natural-landmark and proportional-bone-measurement methods are more credible. However, all of these positioning methods remain largely subjective and somewhat controversial in application. Objective, standardized acupoint location has become an urgent problem to be solved.
With the continuous development of computer imaging and three-dimensional simulation technology, technicians have produced increasingly visual human meridian acupoint models, and many scholars hope to achieve more objective and accurate positioning through digital virtual meridian acupoint location. In 2000, Shanghai University of Traditional Chinese Medicine began collecting the Chinese Acupoint Visible Human (CAVH) data set, providing original base data for explaining the structure of acupoints. In 2003, the First Military Medical University of the People's Liberation Army completed the Virtual Chinese Human project, laying a foundation for in-body visualization of the meridian acupoints. In 2005, Yan Zhenguo et al. of Shanghai University of Traditional Chinese Medicine fused multi-level anatomical structure data of head, neck and trunk acupoints at which needling accidents are prone to occur, together with acupuncture point knowledge, into a digital visible human model; using a transition coordinate system, they located the acupoints on the three-dimensional visible human by classical TCM acupoint-location methods, realized three-dimensional reconstruction of the acupoints, and established an intelligent, dynamically displayed three-dimensional visible human model of the acupuncture needling process. In 2013, a design team at Tianjin University of Traditional Chinese Medicine developed a virtual acupuncture-and-moxibustion acupoint-selection training system, modeling the human body, meridians and acupoints with 3ds Max (3D Studio Max), PC-based 3D modeling and rendering software.
Following the traditional Chinese medicine meridian circulation charts, meridian lines were drawn on the established three-dimensional human model; on the basis of those lines, the human acupoints were then built using spheres, completing the modeling of the human meridian acupoints (e.g. the Conception Vessel) according to the TCM meridian acupoint positions. In 2015, a research group at Beijing University of Technology developed a visualization platform for virtual human bodies and acupoints, displaying the meridian-sensation propagation phenomenon in digital form, and developed an acupuncture teaching system running on mobile terminals based on Android (a free, open-source operating system based on the Linux kernel). In 2018, after more than two years of practice, a teaching team at Fujian University of Traditional Chinese Medicine produced a VR (virtual reality) demo of the Bladder Meridian of Foot-Taiyang for use in VR-based TCM teaching. In 2007, university experts and an industrial partner in China developed the world's first acupoint-massage robot capable of automatic acupoint selection; through its sensing devices, the robot can automatically locate the meridians and acupoints of the person being massaged. In 2010, Harbin Institute of Technology and Jilin University developed a simple acupoint-identification therapeutic apparatus that detects acupoints based on their low-resistance, high-conductance characteristics; the underlying theory is still being refined, and acupoint resistance is easily affected by many factors.
In 2011, Shandong Jianzhu University designed a TCM massage robot that deduces acupoint coordinates from entered human body data to select acupoints automatically, with slightly improved precision. In recent years, therapeutic devices that automatically locate acupoints, such as optical meridian pens and optical automatic acupoint finders, have appeared, but their underlying theory is not yet supported by sufficient evidence.
Although these models are more vivid than physical simulacra and wall charts, they are generally complex to operate, the meridian acupoint model cannot be personalized, and they are mostly used in TCM teaching (acupoint recognition, acupuncture practice and so on); they see little clinical use, have weak practicality, and are hard to popularize. Moreover, the acupoints located and meridians drawn by the prior art stay on the model surface, without any notion of acupoint depth. In clinical practice, however, the stimulation applied to meridian acupoints often must penetrate the skin to reach the muscles and tendons in order to achieve a therapeutic effect, so the meridian acupoints should, in the clinical sense, be treated as a deep structural system.
Disclosure of Invention
One purpose of the present invention is to overcome the shortcomings of the prior art — meridian acupoint models that are rarely applied in clinical practice, weakly practical and hard to popularize — by providing a mixed reality interaction method and system for meridian acupoints.
In order to achieve the above purpose, the present invention is realized by the following technical scheme:
in a first aspect, the present invention provides a method for mixed reality interaction of meridian points, the method comprising:
S1: performing three-dimensional reconstruction of a standard human body to construct a three-dimensional human body model;
S2: establishing acupoints on the three-dimensional human body model to construct a three-dimensional meridian acupoint model;
S3: acquiring scan data of the body-surface landmarks of the patient's three-dimensional body-surface structure model, and matching the constructed three-dimensional meridian acupoint model into the patient's three-dimensional body-surface structure model;
S4: displaying the patient's three-dimensional meridian acupoint system structure in real time.
In a preferred embodiment of the present application, in step S1, the three-dimensional human body model is built from scan data obtained by scanning a standard human body.
In a preferred embodiment of the present application, step S2 specifically includes:
s21: creating an interactive sphere representing the acupoints according to the needling depth and range of each acupoints on the three-dimensional human model;
s22: associating said acupoints with said three-dimensional mannequin; so that the acupoints can move synchronously when the three-dimensional human body model is enlarged and reduced;
s23: creating a cylindrical solid line segment between two acupoints; the cylindrical solid line segment extends along the direction parallel to the body surface skin;
S24: connecting each section of cylindrical solid line segment into a smooth cylindrical curve, and establishing a three-dimensional channel;
s25: based on the three-dimensional channels, two acupoints at adjacent positions on each channel are connected into line segments, which are represented by a cylindrical channel model to form fourteen channels.
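The geometry of steps S21–S25 can be sketched outside any 3D package. A minimal numpy sketch, assuming acupoints are spheres in model coordinates and each meridian segment is a cylinder whose radius blends between the two sphere radii; all names, coordinates and values here are illustrative, not taken from the patent:

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class Acupoint:
    """Interactive sphere for one acupoint (S21); units are cun."""
    name: str
    center: np.ndarray  # 3D position in model coordinates
    depth: float        # needling depth -> sphere depth
    diameter: float     # needling range -> sphere diameter

def meridian_segments(points):
    """S23-S25: connect each pair of adjacent acupoints with a
    cylindrical segment whose radius transitions smoothly between
    the radii of the two acupoint spheres."""
    segments = []
    for a, b in zip(points, points[1:]):
        axis = b.center - a.center
        segments.append({
            "from": a.name,
            "to": b.name,
            "length": float(np.linalg.norm(axis)),
            "r_start": a.diameter / 2,  # smooth radius transition
            "r_end": b.diameter / 2,
        })
    return segments

# two hypothetical adjacent points on one meridian
p1 = Acupoint("P1", np.array([0.0, 0.0, 0.0]), depth=0.8, diameter=0.4)
p2 = Acupoint("P2", np.array([3.0, 4.0, 0.0]), depth=0.5, diameter=0.5)
print(meridian_segments([p1, p2])[0]["length"])  # 5.0
```

Chaining all such segments per meridian and smoothing the resulting polyline corresponds to step S24's smooth cylindrical curve.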
In a preferred embodiment of the present application, step S3 specifically includes:
S31: performing a 3D scan of the patient to obtain the patient's scan data;
S32: reconstructing a three-dimensional body-surface structure model of the patient based on the scan data;
S33: marking a plurality of body-surface landmark points in advance on the three-dimensional human body model;
S34: matching the constructed three-dimensional meridian acupoint model of the human body model into the patient's three-dimensional body-surface structure model based on the body-surface landmark points.
In a second aspect, the present invention provides a meridian acupoint mixed reality interaction system, comprising a standard human body construction module, an acupoint construction module, a real-time reconstruction module and a display module that are connected with each other;
the standard human body construction module is used for carrying out three-dimensional reconstruction on a standard human body to construct a three-dimensional human body model;
the acupoint construction module is used for establishing acupoints on the three-dimensional human body model to construct a three-dimensional meridian acupoint model;
the real-time reconstruction module is used for acquiring scan data of the body-surface landmarks of the patient's three-dimensional body-surface structure model and accurately matching the constructed three-dimensional meridian acupoint model into the patient's three-dimensional body-surface structure model;
the display module is used for displaying the three-dimensional meridian acupoint system structure of the patient in real time.
In a preferred embodiment of the present application, the standard human body construction module is configured to build the three-dimensional human body model from scan data obtained by scanning a standard human body.
In a preferred embodiment of the present application, the acupoints construction module is specifically configured to:
creating an interactive sphere representing each acupoint according to the needling depth and range of that acupoint on the three-dimensional human body model;
associating the acupoints with the three-dimensional human body model, so that the acupoints move synchronously when the model is enlarged or reduced;
creating a cylindrical solid line segment between each two acupoints, the segment extending parallel to the body-surface skin;
connecting the cylindrical solid line segments into a smooth cylindrical curve to establish a three-dimensional meridian;
based on the three-dimensional meridians, connecting each pair of adjacent acupoints on each meridian into a line segment represented by a cylindrical channel model, forming the fourteen meridians.
In a preferred embodiment of the present application, the real-time reconstruction module is specifically configured to:
performing a 3D scan of the patient to obtain the patient's scan data;
reconstructing a three-dimensional body-surface structure model of the patient based on the scan data;
marking a plurality of body-surface landmark points in advance on the three-dimensional human body model;
and matching the constructed three-dimensional meridian acupoint model of the human body model into the patient's three-dimensional body-surface structure model based on the body-surface landmark points.
In a third aspect, the present invention provides a computer-readable storage medium, in which a computer program is stored, which when executed on a computer, causes the computer to perform the method for mixed reality interaction of meridian points according to the first aspect.
In a fourth aspect, the present invention provides a computer program product comprising a computer program which, when run on a computer, causes the computer to perform the method of mixed reality interaction of acupoints of meridians according to the first aspect.
According to the meridian acupoint mixed reality interaction method and system disclosed by the invention, a 3D human body scanner builds a personalized human body model for each patient by means of mixed reality technology; the standard three-dimensional meridian acupoint model is precisely matched and projected onto the real human body through registration points and is intelligently matched and corrected by the system; and the deep three-dimensional meridian acupoint structure is observed accurately and in real time through wearable MR (mixed reality) glasses. This flattens the learning curve for people who need to learn the meridian acupoints and for related practitioners, letting them find the positions of meridians and acupoints more accurately, quickly and intuitively, learn the meridian acupoint system more precisely, and implement traditional Chinese medicine treatments such as acupuncture and massage more accurately and effectively.
Drawings
The invention is described with the aid of the following figures:
FIG. 1 is a flowchart of the meridian acupoint mixed reality interaction method according to embodiment 1 of the invention;
FIG. 2 is a schematic diagram of the three-dimensional structure of the meridian acupoints according to embodiment 1 of the present invention;
FIG. 3 is a flowchart of step S2 of the meridian acupoint mixed reality interaction method according to embodiment 1 of the present invention;
FIG. 4 is a schematic diagram of the three-dimensional coordinate system of the meridian between two adjacent acupoints according to embodiment 1 of the present invention;
FIG. 5 is a flowchart of step S3 of the method according to embodiment 1 of the present invention;
FIG. 6 is a schematic diagram of the meridian acupoint mixed reality interaction system according to embodiment 2 of the present invention.
10 - standard human body construction module; 20 - acupoint construction module; 30 - real-time reconstruction module; 40 - display module.
Detailed Description
For a better understanding of the technical solutions of the present application, embodiments of the present application are described in detail below with reference to the accompanying drawings.
It should be understood that the described embodiments are merely some, but not all, of the embodiments of the present application. All other embodiments, which can be made by one of ordinary skill in the art without undue burden from the present disclosure, are within the scope of the present disclosure.
Example 1
The embodiment 1 of the invention discloses a mixed reality interaction method for meridian points, which can accurately, quickly and individually construct a deep three-dimensional meridian point system by means of mixed reality technology.
Mixed Reality (MR) merges the real and virtual worlds to create a new visual environment in which physical and digital objects coexist and interact in real time; it encompasses both augmented reality and augmented virtuality. In the 1970s and 1980s, Steve Mann of the University of Toronto, regarded as a father of wearable computing, proposed "mediated reality" and designed wearable smart hardware so that the eyes could "see" an enhanced view of the surrounding environment in any situation — an early exploration of MR technology. Virtual Reality (VR) presents a purely virtual digital picture; Augmented Reality (AR) overlays a virtual digital picture on naked-eye reality; MR combines a digitized reality with a virtual digital picture. Conceptually, MR is closer to AR — both are half-real, half-virtual images — but conventional AR uses prism optics to refract real images, so its field of view is smaller than VR's and its sharpness suffers, and AR merely superimposes a virtual environment without engaging with reality itself. MR combines the advantages of VR and AR, realizing AR more fully and letting the user see, through a camera, reality invisible to the naked eye. MR systems generally have three main characteristics: 1. combination of the virtual and the real; 2. registration in virtual three dimensions (3D registration); 3. real-time operation. The key point of MR is interacting with the real world and acquiring information in a timely manner.
Referring to fig. 1 and 2, the meridian point mixed reality interaction method of this embodiment 1 includes:
S1: performing three-dimensional reconstruction of a standard human body to construct a three-dimensional human body model;
S2: establishing acupoints on the three-dimensional human body model to construct a three-dimensional meridian acupoint model;
S3: acquiring scan data of the body-surface landmarks of the patient's three-dimensional body-surface structure model, and matching the constructed three-dimensional meridian acupoint model into the patient's three-dimensional body-surface structure model;
S4: displaying the patient's three-dimensional meridian acupoint system structure in real time.
Specifically, in step S1, a three-dimensional human body model is built from scan data obtained by scanning a standard human body. The model is reconstructed with a high-precision 3D scanner, which acquires the geometry and texture of the body surface by illuminating it and collecting the reflected light; a three-dimensional human body model is then generated through data processing and triangulation, and post-processing can finally be applied to obtain a more realistic and accurate model. The three-dimensional reconstruction process comprises the following steps: 1. Light source illumination: the 3D scanner typically uses a laser or structured light as the light source, illuminating the scanned surface to acquire its geometry and texture. 2. Optical collection: the scanner gathers the light reflected from the scanned surface onto a photosensitive element, such as a CCD (charge-coupled device) or CMOS (complementary metal-oxide-semiconductor) sensor, to capture surface information. 3. Data processing: the acquired surface information is processed by the 3D scanning software to generate the three-dimensional human body model; this comprises point cloud processing and triangulation. 4. Point cloud processing: the acquired point cloud data are imported into the software and processed by filtering, registration and reconstruction to remove noise and invalid points and obtain more accurate data. 5.
Triangulation: once the point cloud has been processed, a triangulation algorithm converts it into a triangular mesh model for subsequent processing and application. 6. Post-processing: the generated three-dimensional human body model may be post-processed with texture mapping, smoothing, topology optimization and similar techniques to obtain a more realistic and accurate model.
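The point cloud filtering in step 4 can be sketched in code. The following is a minimal illustration (not the scanner's actual software) of statistical outlier removal, which drops points whose mean distance to their nearest neighbours sits far above the cloud-wide average; the synthetic cloud and the parameters are invented for the example:

```python
import numpy as np

def remove_outliers(points, k=8, std_ratio=2.0):
    """Statistical outlier removal: discard points whose mean distance to
    their k nearest neighbours exceeds the cloud average by std_ratio
    standard deviations. O(n^2), fine for a small demo cloud."""
    pts = np.asarray(points, float)
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    d.sort(axis=1)
    mean_knn = d[:, 1:k + 1].mean(axis=1)     # skip the zero self-distance
    thresh = mean_knn.mean() + std_ratio * mean_knn.std()
    return pts[mean_knn <= thresh]

# a tight synthetic cluster plus one far-away noise point
rng = np.random.default_rng(0)
cloud = np.vstack([rng.normal(0.0, 0.01, size=(200, 3)),
                   [[5.0, 5.0, 5.0]]])
clean = remove_outliers(cloud)
print(len(cloud), len(clean))  # 201 200 -- the noise point is dropped
```

Production pipelines would use a spatial index (k-d tree) rather than the quadratic distance matrix shown here.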
Referring to fig. 3, in the meridian point mixed reality interaction method of embodiment 1, step S2 specifically includes:
s21: creating an interactive sphere representing each acupoint according to the needling depth and range of the acupoint on the three-dimensional human body model;
s22: associating the acupoints with the three-dimensional human body model, so that the acupoints move synchronously when the model is enlarged or reduced;
s23: creating a cylindrical solid line segment between each pair of acupoints; the cylindrical solid line segment extends parallel to the body surface skin;
s24: connecting the cylindrical solid line segments into smooth cylindrical curves to establish the three-dimensional meridians;
s25: based on the three-dimensional meridians, connecting each pair of adjacent acupoints on every meridian into a line segment, represented by a cylindrical channel model, to form the fourteen meridians.
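Steps s23-s25 join adjacent acupoint spheres with tapered cylinder segments. A minimal sketch of how one such segment might be parameterized follows; the coordinates and diameters are illustrative inventions, not values from the acupoint standard:

```python
import numpy as np

def cylinder_between(p, q, d1, d2):
    """Centre-line length, axis direction and end diameters of a tapered
    cylinder segment joining the spheres of two adjacent acupoints."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    axis = q - p
    length = float(np.linalg.norm(axis))
    return {"length": length,
            "direction": axis / length,   # unit vector along the meridian
            "diameters": (d1, d2)}        # smooth taper from d1 to d2

# made-up coordinates (in cun) for two adjacent acupoint centres,
# with end diameters 0.4 and 0.5 for the tapered channel
seg = cylinder_between([0.0, 0.0, 0.0], [3.0, 4.0, 0.0], 0.4, 0.5)
print(seg["length"], seg["direction"])  # 5.0 [0.6 0.8 0. ]
```

A modelling package such as Maya would sweep the actual cylinder geometry; this only captures the parameters the description relies on.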
Specifically, in step S2, the acupoints are created on the three-dimensional human body model using Maya software (a 3D modelling package). First the three-dimensional human body model is selected and the selected object is activated. Then an interactive sphere is created. Clear body surface landmarks are determined according to the national standard of the People's Republic of China "Nomenclature and Location of Acupoints" and the tenth edition of the textbook "Meridians and Acupoints" published by the China Press of Traditional Chinese Medicine; body surface landmarks and proportional bone (bone-length cun) measurement are used preferentially, supplemented by the finger-cun point selection method, to establish accurately located body surface acupoints. The depth and diameter of the sphere representing each acupoint are adjusted according to the needling depth and range given for that acupoint in the "Meridians and Acupoints" textbook. The acupoints are associated with the human body model so that they move synchronously when the model is enlarged or reduced. Taking the Kongzui point on the Lung Meridian of Hand-Taiyin as an example: its needling depth is 0.8-1.2 cun, so it is modelled as a sphere with a depth of 0.8 cun and a diameter of 0.4 cun; the adjacent acupoint is a sphere with a depth of 0.5 cun and a diameter of 0.5 cun, and the meridian segment between them is a cylindrical channel whose diameter transitions smoothly from 0.4 cun to 0.5 cun, and so on for the other acupoints. According to the national standard of the People's Republic of China, the fourteen meridians carry 309 pairs of bilateral acupoints and 52 single acupoints.
Specifically: lung meridian, 11 pairs; large intestine meridian, 20 pairs; stomach meridian, 45 pairs; spleen meridian, 21 pairs; heart meridian, 9 pairs; small intestine meridian, 19 pairs; bladder meridian, 67 pairs; kidney meridian, 27 pairs; pericardium meridian, 9 pairs; triple energizer meridian, 23 pairs; gallbladder meridian, 44 pairs; liver meridian, 14 pairs; conception vessel, 24 single points; governor vessel, 28 single points. Therefore, in this embodiment 1, the twelve regular meridians plus the conception and governor vessels are taken, without the extraordinary points and Ashi points, and a total of 361 acupoints are created on the fourteen meridians. Next, a cylindrical solid line segment is created between each pair of acupoints, extending parallel to the body surface skin, and the segments are joined into smooth cylindrical curves in software post-processing, establishing the three-dimensional meridian channels. Two adjacent acupoints on each meridian are thus connected into line segments, represented by a cylindrical channel model, to form the fourteen meridians. After the acupoints and meridians have been positioned, the distribution lines of the fourteen meridians are saved as model files. The spheres representing the acupoints are not saved directly for presentation; instead, the three-dimensional coordinates of each object are read from the object attribute panel in Maya to obtain the relative coordinates of the acupoints on the three-dimensional human body model, and these coordinates are written into a database. A program then reads the database and uses the corresponding library functions to identify the acupoint at a given position. Finally, the model is converted into a format that can be used by the program.
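The coordinate database just described can be illustrated with a small sketch. The schema, point names and coordinate values below are hypothetical (the patent does not specify a schema), but the lookup mirrors the described "identify the acupoint at a given position" step as a nearest-point query:

```python
import sqlite3

# In-memory database of acupoint coordinates exported from the modelling
# software. Schema, names and coordinate values are illustrative only --
# real relative coordinates would come from the three-dimensional model.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE acupoint (
    name TEXT PRIMARY KEY, meridian TEXT,
    x REAL, y REAL, z REAL, depth_cun REAL, diameter_cun REAL)""")
conn.executemany("INSERT INTO acupoint VALUES (?, ?, ?, ?, ?, ?, ?)", [
    ("Kongzui", "Lung", 12.1, -3.4, 85.2, 0.8, 0.4),
    ("Taiyuan", "Lung", 15.0, -2.0, 70.0, 0.3, 0.3),
])

def acupoint_at(conn, x, y, z):
    """Identify the acupoint nearest to a given spatial position."""
    row = conn.execute(
        "SELECT name, (x-?)*(x-?) + (y-?)*(y-?) + (z-?)*(z-?) AS d2 "
        "FROM acupoint ORDER BY d2 LIMIT 1", (x, x, y, y, z, z)).fetchone()
    return row[0]

print(acupoint_at(conn, 12.0, -3.0, 85.0))  # Kongzui
```

A real deployment might additionally check the query point against each sphere's depth and diameter before accepting the match.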
The three-dimensional human body model is set in the standard anatomical standing position, and a three-dimensional spatial coordinate system is established with the navel centre point of the model as the origin. Each acupoint of the human body is likened to a sphere buried beneath the skin, and the line connecting the centres of two adjacent spheres is taken as the meridian. As shown in fig. 4, if the spatial coordinates of the centres of two adjacent acupoints are M (x1, y1, z1) and N (x2, y2, z2), the equation of the line through the two points, i.e. the meridian centre line, is (x - x1)/(x2 - x1) = (y - y1)/(y2 - y1) = (z - z1)/(z2 - z1), and the direction vector of the line is {x2 - x1, y2 - y1, z2 - z1}.
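The symmetric line equation is equivalent to requiring that the vector from M to a candidate point be parallel to the direction vector {x2 - x1, y2 - y1, z2 - z1}, which can be checked numerically with a cross product. A generic sketch with made-up coordinates:

```python
import numpy as np

def on_centre_line(m, n, p, tol=1e-9):
    """True if point p lies on the line through sphere centres m and n:
    the cross product of (p - m) and (n - m) must vanish."""
    m, n, p = (np.asarray(v, float) for v in (m, n, p))
    return bool(np.linalg.norm(np.cross(p - m, n - m)) < tol)

M, N = [1.0, 2.0, 3.0], [4.0, 6.0, 3.0]
midpoint = [(a + b) / 2.0 for a, b in zip(M, N)]
print(on_centre_line(M, N, midpoint))         # True
print(on_centre_line(M, N, [0.0, 0.0, 0.0]))  # False
```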
Referring to fig. 5, in the meridian point mixed reality interaction method of embodiment 1, step S3 specifically includes:
s31: 3D scanning is carried out on a patient to obtain scanning data of the patient;
s32: reconstructing a three-dimensional body surface structure model of the patient based on the scanning data;
s33: marking a plurality of body surface landmark points in advance based on the three-dimensional human body model;
s34: matching the constructed three-dimensional meridian and acupoint model in the three-dimensional human body model into the three-dimensional body surface structure model of the patient, based on the plurality of body surface landmark points.
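The registration in steps s33-s34 maps the standard model's landmarks onto the patient's scan. One common way to compute such a mapping is a least-squares similarity transform (Umeyama/Kabsch alignment) over the landmark pairs; this is offered only as an illustrative sketch, since the patent does not name its matching algorithm, and the landmarks are invented:

```python
import numpy as np

def similarity_transform(src, dst):
    """Least-squares scale/rotation/translation mapping landmarks `src`
    (standard model) onto `dst` (patient scan): Umeyama alignment."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    S, D = src - mu_s, dst - mu_d
    U, sig, Vt = np.linalg.svd(D.T @ S)      # cross-covariance matrix
    d = np.ones(src.shape[1])
    if np.linalg.det(U @ Vt) < 0:            # guard against reflections
        d[-1] = -1.0
    R = U @ np.diag(d) @ Vt
    scale = (sig * d).sum() / (S ** 2).sum()
    t = mu_d - scale * R @ mu_s
    return scale, R, t

# hypothetical landmarks: the "patient" is a scaled, shifted copy
model = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], float)
patient = 1.1 * model + np.array([5.0, 2.0, 1.0])
s, R, t = similarity_transform(model, patient)
print(round(s, 3), np.round(t, 3))
```

A rigid similarity transform cannot by itself reproduce the "moderate stretching or shrinking" of individual body regions described below; a deployed system would refine it with local, non-rigid deformation.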
Specifically, in step S3, a personalized 3D scan of each patient is performed with a handheld high-precision laser 3D body scanner to reconstruct the patient's three-dimensional body surface structure model; the three-dimensional reconstruction process is as described above and is not repeated here. Based on established human anatomy knowledge and the meridian acupoint location standard, the patient's three-dimensional body surface structure model obtained with the handheld scanner is marked at 41 predefined body surface landmark points. These 41 landmark points serve as registration points, and the meridian acupoint mixed reality interaction system moderately stretches or shrinks the model according to the differences between the patient's three-dimensional body surface structure model and the Virtual Chinese Male No. 1 model. In the process of accurately matching the three-dimensional meridian and acupoint model constructed on the Virtual Chinese Male No. 1 model to the patient's three-dimensional body surface structure model, this ensures that the meridian acupoint model accurately covers the corresponding regions on the patient's 3D body surface model. The Virtual Chinese Male No. 1 model is a three-dimensional human body model of a standard Chinese Han male.
In step S4, through head-worn MR (mixed reality) glasses, the operator can observe in real time, within the field of view, the patient's real-time three-dimensional meridian acupoint system structure with depth, and locate the meridian acupoints both on the body surface and in depth. When a particular acupoint needs to be found, a laser locator emits a laser beam onto the corresponding acupoint; the projected point coincides with the acupoint observed in the mixed reality glasses, so that the patient and anyone not wearing the glasses can also locate the acupoint accurately. Specifically, the MR glasses receive signals through a built-in computer system, match the corresponding acupoints and control the laser locator to emit and position the beam. A high-performance computer is built into the MR glasses; various sensors (such as a camera, gyroscope and accelerometer) cooperate to acquire information including the patient's head posture, viewing angle and position and transmit it to the computer in real time. The glasses also include the laser locator, which can accurately position an acupoint according to the data processed in the computer and emit a laser beam onto it, thereby realizing acupoint localization. In the computer system, pre-recorded data and algorithms match the scan data of different patients, accurately matching the three-dimensional meridian and acupoint model constructed on the Virtual Chinese Male No. 1 model to the corresponding acupoints in the patient's 3D body surface model, thereby realizing the body surface and depth localization of the meridian acupoints.
The meridian acupoint mixed reality interaction method of embodiment 1 can continuously improve the accuracy of the mixed reality interaction through clinical cases. Taking as the standard the acupoint positions determined subjectively by senior traditional Chinese medicine acupuncturists, together with the patient's subjective de-qi sensations of soreness, numbness, distension and heaviness, the data of the real-time meridian acupoint mixed reality image are compared, verified and continuously corrected, so that the accuracy of the system is continuously improved. A patient about to receive acupuncture treatment is selected; the body surface positions of the acupoints are determined subjectively by a senior acupuncturist, and the depths of the acupoints are determined by the acupuncturist from the arrival of de-qi and the patient's subjective sensations of soreness, numbness, distension and heaviness. The practitioner's subjective localization data are compared against the real-time data of the meridian acupoint mixed reality image; the body surface positions and depths of the acupoints are compared, verified and corrected, continuously improving the accuracy of the mixed reality meridian acupoint instrument.
Example 2
Embodiment 2 of the invention discloses a meridian acupoint mixed reality interaction system, which can accurately, quickly and individually construct a deep three-dimensional meridian acupoint system by means of mixed reality technology.
Referring to fig. 6, the meridian point mixed reality interaction system of this embodiment 2 includes a standard human body construction module 10, a point construction module 20, a real-time reconstruction module 30 and a display module 40, which are connected with each other; the standard human body construction module 10 is used for carrying out three-dimensional reconstruction on a standard human body to construct a three-dimensional human body model; the acupoint construction module 20 is used for constructing acupoints on the three-dimensional human body model to construct a three-dimensional meridian acupoint model; the real-time reconstruction module 30 is used for acquiring scanning data of the body surface mark of the three-dimensional body surface structure model of the patient, and precisely matching the constructed three-dimensional meridian acupoint model into the three-dimensional body surface structure model of the patient; the display module 40 is used for displaying the three-dimensional meridian acupoint system structure of the patient in real time.
Specifically, the standard human body construction module 10 is configured to build a three-dimensional human body model from scan data obtained by scanning a standard human body. The model is reconstructed with a high-precision 3D scanner, which acquires the geometry and texture of the body surface by illuminating it and collecting the reflected light; a three-dimensional human body model is then generated through data processing and triangulation, and post-processing can finally be applied to obtain a more realistic and accurate model. The three-dimensional reconstruction process comprises the following steps: 1. Light source illumination: the 3D scanner typically uses a laser or structured light as the light source, illuminating the scanned surface to acquire its geometry and texture. 2. Optical collection: the scanner gathers the light reflected from the scanned surface onto a photosensitive element, such as a CCD (charge-coupled device) or CMOS (complementary metal-oxide-semiconductor) sensor, to capture surface information. 3. Data processing: the acquired surface information is processed by the 3D scanning software to generate the three-dimensional human body model; this comprises point cloud processing and triangulation. 4. Point cloud processing: the acquired point cloud data are imported into the software and processed by filtering, registration and reconstruction to remove noise and invalid points and obtain more accurate data. 5.
Triangulation: once the point cloud has been processed, a triangulation algorithm converts it into a triangular mesh model for subsequent processing and application. 6. Post-processing: the generated three-dimensional human body model may be post-processed with texture mapping, smoothing, topology optimization and similar techniques to obtain a more realistic and accurate model.
In the meridian acupoint mixed reality interaction system of this embodiment 2, the acupoint construction module 20 is specifically configured to: create an interactive sphere representing each acupoint according to its needling depth and range on the three-dimensional human body model; associate the acupoints with the three-dimensional human body model so that they move synchronously when the model is enlarged or reduced; create a cylindrical solid line segment between each pair of acupoints, extending parallel to the body surface skin; connect the cylindrical solid line segments into smooth cylindrical curves to establish the three-dimensional meridians; and, based on the three-dimensional meridians, connect each pair of adjacent acupoints on every meridian into a line segment, represented by a cylindrical channel model, to form the fourteen meridians.
Specifically, the acupoint construction module 20 creates the acupoints on the three-dimensional human body model using Maya software (a 3D modelling package). First the three-dimensional human body model is selected and the selected object is activated. Then an interactive sphere is created. Clear body surface landmarks are determined according to the national standard of the People's Republic of China "Nomenclature and Location of Acupoints" and the tenth edition of the textbook "Meridians and Acupoints" published by the China Press of Traditional Chinese Medicine; body surface landmarks and proportional bone (bone-length cun) measurement are used preferentially, supplemented by the finger-cun point selection method, to establish accurately located body surface acupoints. The depth and diameter of the sphere representing each acupoint are adjusted according to the needling depth and range given for that acupoint in the "Meridians and Acupoints" textbook. The acupoints are associated with the human body model so that they move synchronously when the model is enlarged or reduced. Taking the Kongzui point on the Lung Meridian of Hand-Taiyin as an example: its needling depth is 0.8-1.2 cun, so it is modelled as a sphere with a depth of 0.8 cun and a diameter of 0.4 cun; the adjacent acupoint is a sphere with a depth of 0.5 cun and a diameter of 0.5 cun, and the meridian segment between them is a cylindrical channel whose diameter transitions smoothly from 0.4 cun to 0.5 cun, and so on for the other acupoints. According to the national standard of the People's Republic of China, the fourteen meridians carry 309 pairs of bilateral acupoints and 52 single acupoints.
Specifically: lung meridian, 11 pairs; large intestine meridian, 20 pairs; stomach meridian, 45 pairs; spleen meridian, 21 pairs; heart meridian, 9 pairs; small intestine meridian, 19 pairs; bladder meridian, 67 pairs; kidney meridian, 27 pairs; pericardium meridian, 9 pairs; triple energizer meridian, 23 pairs; gallbladder meridian, 44 pairs; liver meridian, 14 pairs; conception vessel, 24 single points; governor vessel, 28 single points. Therefore, in this embodiment 2, the twelve regular meridians plus the conception and governor vessels are taken, without the extraordinary points and Ashi points, and a total of 361 acupoints are created on the fourteen meridians. Next, a cylindrical solid line segment is created between each pair of acupoints, extending parallel to the body surface skin, and the segments are joined into smooth cylindrical curves in software post-processing, establishing the three-dimensional meridian channels. Two adjacent acupoints on each meridian are thus connected into line segments, represented by a cylindrical channel model, to form the fourteen meridians. After the acupoints and meridians have been positioned, the distribution lines of the fourteen meridians are saved as model files. The spheres representing the acupoints are not saved directly for presentation; instead, the three-dimensional coordinates of each object are read from the object attribute panel in Maya to obtain the relative coordinates of the acupoints on the three-dimensional human body model, and these coordinates are written into a database. A program then reads the database and uses the corresponding library functions to identify the acupoint at a given position. Finally, the model is converted into a format that can be used by the program.
The three-dimensional human body model is set in the standard anatomical standing position, and a three-dimensional spatial coordinate system is established with the navel centre point of the model as the origin. Each acupoint of the human body is likened to a sphere buried beneath the skin, and the line connecting the centres of two adjacent spheres is taken as the meridian. As shown in fig. 4, if the spatial coordinates of the centres of two adjacent acupoints are M (x1, y1, z1) and N (x2, y2, z2), the equation of the line through the two points, i.e. the meridian centre line, is (x - x1)/(x2 - x1) = (y - y1)/(y2 - y1) = (z - z1)/(z2 - z1), and the direction vector of the line is {x2 - x1, y2 - y1, z2 - z1}.
In the meridian acupoint mixed reality interaction system of this embodiment 2, the real-time reconstruction module 30 is specifically configured to: perform a 3D scan of the patient to obtain scan data of the patient; reconstruct the patient's three-dimensional body surface structure model based on the scan data; mark a plurality of body surface landmark points in advance based on the three-dimensional human body model; and match the constructed three-dimensional meridian and acupoint model in the three-dimensional human body model into the patient's three-dimensional body surface structure model, based on the plurality of body surface landmark points.
Specifically, through the real-time reconstruction module 30, a personalized 3D scan of each patient is performed with a handheld high-precision laser 3D body scanner to reconstruct the patient's three-dimensional body surface structure model; the three-dimensional reconstruction process is as described above and is not repeated here. Based on established human anatomy knowledge and the meridian acupoint location standard, the patient's three-dimensional body surface structure model obtained with the handheld scanner is marked at 41 predefined body surface landmark points. These 41 landmark points serve as registration points, and the meridian acupoint mixed reality interaction system moderately stretches or shrinks the model according to the differences between the patient's three-dimensional body surface structure model and the Virtual Chinese Male No. 1 model. In the process of accurately matching the three-dimensional meridian and acupoint model constructed on the Virtual Chinese Male No. 1 model to the patient's three-dimensional body surface structure model, this ensures that the meridian acupoint model accurately covers the corresponding regions on the patient's 3D body surface model. The Virtual Chinese Male No. 1 model is a three-dimensional human body model of a standard Chinese Han male.
Through the display module 40 and head-worn MR (mixed reality) glasses, the operator can observe in real time, within the field of view, the patient's real-time three-dimensional meridian acupoint system structure with depth, and locate the meridian acupoints both on the body surface and in depth. When a particular acupoint needs to be found, a laser locator emits a laser beam onto the corresponding acupoint; the projected point coincides with the acupoint observed in the mixed reality glasses, so that the patient and anyone not wearing the glasses can also locate the acupoint accurately. Specifically, the MR glasses receive signals through a built-in computer system, match the corresponding acupoints and control the laser locator to emit and position the beam. A high-performance computer is built into the MR glasses; various sensors (such as a camera, gyroscope and accelerometer) cooperate to acquire information including the patient's head posture, viewing angle and position and transmit it to the computer in real time. The glasses also include the laser locator, which can accurately position an acupoint according to the data processed in the computer and emit a laser beam onto it, thereby realizing acupoint localization. In the computer system, pre-recorded data and algorithms match the scan data of different patients, accurately matching the three-dimensional meridian and acupoint model constructed on the Virtual Chinese Male No. 1 model to the corresponding acupoints in the patient's 3D body surface model, thereby realizing the body surface and depth localization of the meridian acupoints.
The meridian acupoint mixed reality interaction system of embodiment 2 can continuously improve the accuracy of the mixed reality interaction through clinical cases. Taking as the standard the acupoint positions determined subjectively by senior traditional Chinese medicine acupuncturists, together with the patient's subjective de-qi sensations of soreness, numbness, distension and heaviness, the data of the real-time meridian acupoint mixed reality image are compared, verified and continuously corrected, so that the accuracy of the system is continuously improved. A patient about to receive acupuncture treatment is selected; the body surface positions of the acupoints are determined subjectively by a senior acupuncturist, and the depths of the acupoints are determined by the acupuncturist from the arrival of de-qi and the patient's subjective sensations of soreness, numbness, distension and heaviness. The practitioner's subjective localization data are compared against the real-time data of the meridian acupoint mixed reality image; the body surface positions and depths of the acupoints are compared, verified and corrected, continuously improving the accuracy of the mixed reality meridian acupoint instrument.
Example 3
Embodiment 3 of the invention discloses a computer readable storage medium, in which a computer program is stored, which when executed on a computer, causes the computer to execute the meridian point mixed reality interaction method as disclosed in embodiment 1.
Example 4
Embodiment 4 of the invention discloses a computer program product, which comprises a computer program that, when run on a computer, causes the computer to execute the meridian point mixed reality interaction method as disclosed in embodiment 1.
According to the meridian acupoint mixed reality interaction method and system disclosed by the invention, by means of mixed reality technology, a 3D body scanner is used to construct a personalized human body model for each patient; a standard three-dimensional meridian acupoint model is precisely matched and projected onto the real human body through registration points and is intelligently matched and adjusted by the system; and the deep three-dimensional meridian acupoint structure is observed precisely and in real time through wearable MR (mixed reality) glasses. This reduces the learning curve for people who need to learn the meridian acupoints and for related practitioners, allows the positions of the meridian acupoints to be found more precisely, quickly and intuitively, supports more accurate study of the meridian acupoint system, and enables traditional Chinese medicine treatments such as acupuncture and massage to be implemented more precisely and effectively.
It should be understood that the above description of the specific embodiments of the present invention is only for illustrating the technical route and features of the present invention, and is for enabling those skilled in the art to understand the present invention and implement it accordingly, but the present invention is not limited to the above-described specific embodiments. All changes or modifications that come within the scope of the appended claims are intended to be embraced therein.
Claims (10)
1. A method for mixed reality interaction of meridian points, the method comprising:
s1: performing three-dimensional reconstruction on a standard human body to construct a three-dimensional human body model;
s2: establishing acupoint points on the three-dimensional human body model to construct a three-dimensional meridian acupoint model;
s3: acquiring scanning data of a body surface mark of a three-dimensional body surface structure model of a patient, and matching the constructed three-dimensional meridian and acupoint model into the three-dimensional body surface structure model of the patient;
s4: the three-dimensional meridian acupoint system structure of the patient is displayed in real time.
2. The method according to claim 1, wherein in step S1, the three-dimensional human body model is built from scan data obtained by scanning a standard human body.
3. The method of mixed reality interaction for meridian points according to claim 1, wherein step S2 specifically comprises:
s21: creating an interactive sphere representing each acupoint according to the needling depth and range of the acupoint on the three-dimensional human body model;
s22: associating said acupoints with said three-dimensional human body model, so that the acupoints can move synchronously when the three-dimensional human body model is enlarged and reduced;
s23: creating a cylindrical solid line segment between two acupoints; the cylindrical solid line segment extends along the direction parallel to the body surface skin;
s24: connecting each section of cylindrical solid line segment into a smooth cylindrical curve, and establishing a three-dimensional channel;
s25: based on the three-dimensional channels, two acupoints at adjacent positions on each channel are connected into line segments, which are represented by a cylindrical channel model to form fourteen channels.
4. The method of mixed reality interaction for meridian points according to claim 1, wherein step S3 specifically comprises:
s31: 3D scanning is carried out on a patient to obtain scanning data of the patient;
s32: reconstructing a three-dimensional body surface structure model of the patient based on the scanning data;
s33: marking a plurality of body surface mark points in advance based on the three-dimensional human body model;
s34: and matching the constructed three-dimensional meridian point model in the three-dimensional human body model into a three-dimensional body surface structure model of the patient based on the body surface mark points.
5. The meridian and acupoint mixed reality interaction system is characterized by comprising a standard human body construction module, an acupoint construction module, a real-time reconstruction module and a display module which are connected with each other;
The standard human body construction module is used for carrying out three-dimensional reconstruction on a standard human body to construct a three-dimensional human body model; the acupoint construction module is used for establishing acupoint points on the three-dimensional human body model so as to construct a three-dimensional meridian acupoint model;
the real-time reconstruction module is used for acquiring scanning data of body surface marks of a three-dimensional body surface structure model of a patient and accurately matching the constructed three-dimensional meridian point model into the three-dimensional body surface structure model of the patient;
the display module is used for displaying the three-dimensional meridian acupoint system structure of the patient in real time.
6. The system of claim 5, wherein the standard body building module is configured to build the three-dimensional body model from scan data obtained from scanning a standard body.
7. The system of claim 5, wherein the acupoint construction module is configured to:
creating an interactive sphere representing each acupoint according to the needling depth and range of the acupoint on the three-dimensional human body model;
associating said acupoints with said three-dimensional human body model, so that the acupoints can move synchronously when the three-dimensional human body model is enlarged and reduced;
creating a cylindrical solid line segment between two acupoints; the cylindrical solid line segment extends along the direction parallel to the body surface skin;
connecting each section of cylindrical solid line segment into a smooth cylindrical curve, and establishing a three-dimensional channel;
based on the three-dimensional channels, two acupoints at adjacent positions on each channel are connected into line segments, which are represented by a cylindrical channel model to form fourteen channels.
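The channel construction in claim 7 can be sketched in a few lines of numpy: interpolate a smooth curve through the ordered acupoint centres, then describe each span of the curve as a cylinder segment. This is an illustrative sketch only, not the patent's implementation; the function names, the Catmull-Rom spline choice, and the default radius are assumptions.

```python
import numpy as np

def catmull_rom(points, samples_per_segment=8):
    """Interpolate a smooth curve through acupoint centres (Catmull-Rom spline)."""
    pts = np.asarray(points, dtype=float)
    # Duplicate the endpoints so the curve passes through the first and last acupoint.
    pts = np.vstack([pts[0], pts, pts[-1]])
    curve = []
    for i in range(1, len(pts) - 2):
        p0, p1, p2, p3 = pts[i - 1], pts[i], pts[i + 1], pts[i + 2]
        for t in np.linspace(0.0, 1.0, samples_per_segment, endpoint=False):
            t2, t3 = t * t, t * t * t
            point = 0.5 * ((2 * p1) + (-p0 + p2) * t
                           + (2 * p0 - 5 * p1 + 4 * p2 - p3) * t2
                           + (-p0 + 3 * p1 - 3 * p2 + p3) * t3)
            curve.append(point)
    curve.append(pts[-2])  # include the final acupoint
    return np.array(curve)

def cylinder_segments(curve, radius=0.002):
    """Describe each curve span as a cylinder: midpoint, unit axis, length, radius."""
    segments = []
    for a, b in zip(curve[:-1], curve[1:]):
        axis = b - a
        length = np.linalg.norm(axis)
        if length == 0.0:
            continue  # skip degenerate spans
        segments.append({"mid": (a + b) / 2, "axis": axis / length,
                         "length": length, "radius": radius})
    return segments
```

In a rendering engine the per-segment midpoint, axis, and length would drive the transform of a cylinder mesh, yielding the "smooth cylindrical curve" of the claim.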
8. The system according to claim 5, wherein the real-time reconstruction module is specifically configured to:
perform 3D scanning of the patient to obtain scanning data of the patient;
reconstruct the three-dimensional body surface structure model of the patient based on the scanning data;
mark a plurality of body surface mark points in advance on the three-dimensional human body model; and
match the three-dimensional meridian acupoint model constructed in the three-dimensional human body model into the three-dimensional body surface structure model of the patient based on the body surface mark points.
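The landmark-based matching step of claim 8 can be realised with a standard rigid point-set registration such as the Kabsch algorithm: given the body surface mark points on the standard model and their counterparts on the patient scan, compute the rotation and translation that best aligns them, then apply that transform to the meridian acupoint model. The sketch below is an assumed, generic implementation; the patent does not name this algorithm.

```python
import numpy as np

def register_landmarks(src, dst):
    """Rigid transform (R, t) mapping src landmark points onto dst (Kabsch algorithm)."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)            # cross-covariance of centred point sets
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against an improper (reflected) rotation
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cd - R @ cs
    return R, t

def apply_transform(points, R, t):
    """Transform every point of the meridian acupoint model into patient space."""
    return np.asarray(points, float) @ R.T + t
```

After registration, every acupoint sphere and channel cylinder in the standard model is mapped into the patient's body surface coordinate frame with the same `(R, t)`.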
9. A computer readable storage medium, wherein a computer program is stored in the computer readable storage medium which, when executed on a computer, causes the computer to perform the mixed reality interaction method for meridian acupoints according to any one of claims 1-4.
10. A computer program product comprising a computer program which, when run on a computer, causes the computer to perform the mixed reality interaction method for meridian acupoints according to any one of claims 1-4.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310997062.3A CN117253014A (en) | 2023-08-09 | 2023-08-09 | Mixed reality interaction method and system for meridian acupoints |
Publications (1)
Publication Number | Publication Date |
---|---|
CN117253014A true CN117253014A (en) | 2023-12-19 |
Family
ID=89132027
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202310997062.3A Pending CN117253014A (en) | 2023-08-09 | 2023-08-09 | Mixed reality interaction method and system for meridian acupoints |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN117253014A (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN118261983A (en) * | 2024-05-23 | 2024-06-28 | 成都中医药大学 | Method for positioning acupoints on back of human body based on improved depth filling algorithm |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Hsieh et al. | Preliminary study of VR and AR applications in medical and healthcare education | |
CN107067856B (en) | Medical simulation training system and method | |
CN101958079A (en) | Positioning model of channel acupuncture point in three-dimensional virtual human anatomy texture and application thereof | |
CN110335516B (en) | Method for performing VR cardiac surgery simulation by adopting VR cardiac surgery simulation system | |
CN110021445A (en) | A kind of medical system based on VR model | |
Liu et al. | The prospect for the application of the surgical navigation system based on artificial intelligence and augmented reality | |
CN106780653A (en) | The generation method of collaterals of human and acupuncture points on the human body Visual Graph | |
CN117253014A (en) | Mixed reality interaction method and system for meridian acupoints | |
CN110063886A (en) | A kind of AR augmented reality intelligent acupuncture and moxibustion headset equipment | |
CN109885156A (en) | A kind of virtual reality interaction systems and interactive approach | |
CN109273079A (en) | AI cloud diagnosis and therapy system based on Chinese medicine | |
CN111524433A (en) | Acupuncture training system and method | |
CN105718730A (en) | Quantitative evaluation method for pain of subject and system for implementing method | |
Ribeiro et al. | Techniques and devices used in palpation simulation with haptic feedback | |
CN109036063B (en) | Acupuncture treatment simulation training method and system | |
CN111276022A (en) | Gastroscope simulation operation system based on VR technique | |
CN110570739A (en) | traditional Chinese medicine acupuncture simulation training system based on mixed reality technology | |
CN113470173A (en) | Holographic digital human body modeling method and device | |
Kim et al. | Positioning standardized acupuncture points on the whole body based on X-ray computed tomography images | |
CN115662234B (en) | Thoracic surgery teaching system based on virtual reality | |
Chen et al. | Application of XR technology in stomatology education: Theoretical basis, application scenarios and future prospects | |
US20220270514A1 (en) | Providing training and assessment of physiatrics and cosmetics processes on a physical model having tactile sensors, using a virtual reality device | |
CN114496157A (en) | Prescription display method and system | |
CN115188232A (en) | Medical teaching comprehensive training system and method based on MR-3D printing technology | |
TW202308581A (en) | Dynamic three-dimensional meridian system of human body |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||