CN116646052A - Auxiliary acupuncture positioning system and method based on three-dimensional human body model - Google Patents


Info

Publication number
CN116646052A
Authority
CN
China
Prior art keywords
virtual
acupuncture
human body
patient
model
Prior art date
Legal status (an assumption, not a legal conclusion)
Granted
Application number
CN202310777244.XA
Other languages
Chinese (zh)
Other versions
CN116646052B (en)
Inventor
安鹏
吴喜利
王亚峰
王文方
李流云
张涛
李星瑶
高琪
李金娥
Current Assignee (the listed assignees may be inaccurate)
Second Affiliated Hospital School of Medicine of Xian Jiaotong University
Original Assignee
Second Affiliated Hospital School of Medicine of Xian Jiaotong University
Priority date (an assumption, not a legal conclusion)
Filing date
Publication date
Application filed by Second Affiliated Hospital School of Medicine of Xian Jiaotong University
Priority to CN202310777244.XA
Publication of CN116646052A
Application granted
Publication of CN116646052B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H 20/00 ICT specially adapted for therapies or health-improving plans, e.g. for handling prescriptions, for steering therapy or for monitoring patient compliance
    • G16H 20/40 ICT specially adapted for therapies or health-improving plans relating to mechanical, radiation or invasive therapies, e.g. surgery, laser therapy, dialysis or acupuncture
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61H PHYSICAL THERAPY APPARATUS, e.g. DEVICES FOR LOCATING OR STIMULATING REFLEX POINTS IN THE BODY; ARTIFICIAL RESPIRATION; MASSAGE; BATHING DEVICES FOR SPECIAL THERAPEUTIC OR HYGIENIC PURPOSES OR SPECIFIC PARTS OF THE BODY
    • A61H 39/00 Devices for locating or stimulating specific reflex points of the body for physical therapy, e.g. acupuncture
    • A61H 39/02 Devices for locating such points
    • A61H 39/08 Devices for applying needles to such points, i.e. for acupuncture; acupuncture needles or accessories therefor
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/0464 Convolutional networks [CNN, ConvNet]
    • G06N 3/08 Learning methods
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T 19/00 Manipulating 3D models or images for computer graphics
    • G06T 19/006 Mixed reality
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T 10/00 Road transport of goods or passengers
    • Y02T 10/10 Internal combustion engine [ICE] based vehicles
    • Y02T 10/40 Engine management systems


Abstract

The application discloses an auxiliary acupuncture positioning system and method based on a three-dimensional human body model. The system consists of a head display device, a three-dimensional human body model, a human body posture recognition module, a human body model alignment module, an acupuncture point positioning module, and a user interaction module. The head display device captures images of the real environment and its three-dimensional structure information. The three-dimensional human body model is a virtual model on which virtual acupuncture points are marked. The human body posture recognition module predicts the patient's posture from the captured images. The human body model alignment module aligns the virtual human body model with the predicted patient posture using a numerical optimization algorithm. The acupuncture point positioning module converts the positions of the virtual acupuncture points onto the real patient's body according to the alignment relationship. The user interaction module provides an interaction mode in which the user sees the virtual human body model and acupuncture points while seeing the real patient's body. The system enables an acupuncture operator to find acupuncture points more accurately and conveniently, improving the treatment effect.

Description

Auxiliary acupuncture positioning system and method based on three-dimensional human body model
Technical Field
The application relates to the technical field of medicine, in particular to an auxiliary acupuncture positioning system and method based on a three-dimensional human model.
Background
Acupuncture is an ancient medical technique that treats disease or improves physical condition by stimulating specific acupoints on the human body. Acupoint positioning is a core acupuncture skill that takes considerable time and effort to learn and master, and even an experienced acupuncture operator may make positioning errors that affect the acupuncture effect.
The acupuncture auxiliary systems currently on the market are mainly based on two-dimensional images, which limits both positioning accuracy and user experience. Some systems have begun to use three-dimensional human body models, but most are still experimental and place high technical demands on the user. Moreover, existing three-dimensional-model-based systems mainly display the model on a computer screen, forcing the user to operate on the patient's body while looking at the screen, which reduces convenience and accuracy. The three-dimensional human body model in such systems is also usually a single standard model that cannot adapt to the individual differences between patients.
With the development of new display technologies such as Virtual Reality (VR), Augmented Reality (AR), and Mixed Reality (MR), more and more applications attempt to use these technologies in the medical field to improve the efficiency and quality of medical services. These technologies have particularly great application potential in fields such as acupuncture and moxibustion that require precise operation.
Therefore, developing a new auxiliary acupuncture positioning system based on a three-dimensional human body model, combined with display technologies such as VR, AR, and MR to improve the accuracy of acupuncture positioning, is an important direction for the technical development of auxiliary acupuncture positioning systems.
Disclosure of Invention
The application provides an auxiliary acupuncture positioning system based on a three-dimensional human body model, used to improve the positioning accuracy of acupuncture points.
The system comprises: a head display device with a built-in camera and depth sensor, used to capture images of the real environment and its three-dimensional structure information;
a three-dimensional human body model, which is a virtual human body model created by a computer and marked with virtual acupuncture points;
a human body posture recognition module for predicting the patient's posture from the images captured by the head display device and their three-dimensional structure information;
a human body model alignment module for aligning the virtual human body model to the real patient's body through a numerical optimization algorithm according to the predicted patient posture;
an acupuncture point positioning module for converting the positions of the virtual acupuncture points onto the real patient's body according to the alignment relationship between the virtual human body model and the real patient's body;
a user interaction module for providing an interaction mode in which the user sees the virtual human body model and the acupuncture points mapped onto the real patient's body while seeing the real patient's body.
Optionally, the user interaction module allows the user to select and operate acupuncture points through gestures or voice commands, and displays relevant information on a screen after the user selects one acupuncture point.
Optionally, the human body posture recognition module comprises a deep neural network for predicting the posture of the patient from the images captured by the head display device.
Optionally, the human body model alignment module uses a gradient descent algorithm to find the model parameters that minimize the difference between the virtual human body model and the patient's body.
Optionally, the acupuncture point positioning module calculates the corresponding acupuncture point positions on the real patient body according to the alignment relationship and the positions of the virtual acupuncture points, so as to convert the positions of the virtual acupuncture points to the real patient body.
The application provides an auxiliary acupuncture positioning method based on a three-dimensional human body model, which comprises the following steps:
capturing an image of a real environment of a patient and acquiring three-dimensional structure information of the image by using head display equipment;
creating a three-dimensional human body model, and marking acupuncture points on the model;
predicting the posture of the patient from the image of the head display device;
aligning the virtual human body model to the real patient's body through a numerical optimization algorithm according to the predicted patient posture;
according to the alignment relation between the virtual human body model and the real patient body, converting the positions of the virtual acupuncture points to the real patient body;
through the user interaction technology, a user interaction mode is provided, so that a user can see a virtual human body model and switch to an acupuncture point on a real patient body while seeing the real patient body.
Optionally, the user interaction mode further comprises allowing a user to select and operate acupuncture points through gestures or voice commands, and displaying relevant information on a screen after the user selects one acupuncture point.
Optionally, predicting the posture of the patient from the image of the head display device includes:
a deep neural network is used to predict the posture of the patient from the image captured by the head display device.
Optionally, aligning the virtual human body model to the real patient's body through a numerical optimization algorithm according to the predicted patient posture includes:
using a gradient descent algorithm to find the model parameters that minimize the difference between the virtual human body model and the patient's body;
aligning the virtual human body model to the real patient's body based on these model parameters.
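As a minimal sketch of this alignment step, one might take the "difference" to be the mean squared distance between corresponding keypoints of the virtual model and the patient, and optimize only a global scale and translation by gradient descent. The loss, the two parameters, and the toy data below are all assumptions for illustration; the patent does not fix these details.

```python
import numpy as np

def align_model(model_pts, patient_pts, lr=0.1, steps=200):
    """Gradient-descent sketch: find scale s and translation t minimizing
    the mean squared distance ||s * model_pts + t - patient_pts||^2."""
    s, t = 1.0, np.zeros(3)
    n = len(model_pts)
    for _ in range(steps):
        residual = s * model_pts + t - patient_pts     # (n, 3)
        grad_s = 2.0 * np.sum(residual * model_pts) / n
        grad_t = 2.0 * residual.mean(axis=0)
        s -= lr * grad_s
        t -= lr * grad_t
    return s, t

# Toy usage: patient keypoints are a scaled, shifted copy of the model's.
model = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.], [0., 0., 1.]])
patient = 1.2 * model + np.array([0.5, -0.3, 0.1])
s, t = align_model(model, patient)
```

A full system would also optimize rotation and per-joint pose parameters; the same descent loop extends to those once gradients are available.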
Optionally, converting the positions of the virtual acupuncture points onto the real patient's body according to the alignment relationship between the virtual human body model and the real patient's body includes:
calculating the corresponding acupuncture point positions on the real patient's body from the alignment relationship and the positions of the virtual acupuncture points, thereby converting the virtual acupuncture point positions onto the real patient's body.
Compared with traditional acupuncture positioning methods based on two-dimensional pictures or naked-eye identification, the three-dimensional-model-based system provided by the application positions acupuncture points more accurately and avoids positioning errors caused by individual differences or posture changes. Using the head display device to capture the real environment allows the patient's physical state and surroundings to be captured in real time, providing richer information than a static picture or prerecorded video and supporting more accurate positioning. Combining human body posture recognition with model alignment further improves positioning accuracy, because the patient's actual posture is recognized and aligned with the three-dimensional model. Finally, the user interaction module lets the user observe the virtual human body model and acupuncture points while observing the real environment, greatly enhancing the user experience.
The beneficial technical effects of the application include: the system achieves accurate acupuncture point positioning, improving the effect and efficiency of acupuncture treatment. Through virtual reality technology, the acupuncture operator gains a more intuitive view while performing the operation, improving its convenience and accuracy. In addition, the head display device and depth sensor enable real-time monitoring of the patient, so the acupuncture strategy can be adjusted in time to better meet the demands of individualized treatment.
Drawings
Fig. 1 is a schematic diagram of an auxiliary acupuncture positioning system based on a three-dimensional human body model according to a first embodiment of the present application.
Fig. 2 is a schematic diagram of a human body posture recognition module according to a first embodiment of the present application.
Fig. 3 is a schematic diagram of a posture prediction model according to a first embodiment of the present application.
Fig. 4 is a flowchart of an auxiliary acupuncture positioning method based on a three-dimensional human body model according to a second embodiment of the present application.
Detailed Description
In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present application. The present application may be embodied in many other forms than those herein described, and those skilled in the art will readily appreciate that the present application may be similarly embodied without departing from the spirit or essential characteristics thereof, and therefore the present application is not limited to the specific embodiments disclosed below.
The first embodiment of the application provides an auxiliary acupuncture positioning system based on a three-dimensional human body model. Referring to fig. 1, a schematic diagram of a first embodiment of the present application is shown. The following provides a detailed description of an auxiliary acupuncture positioning system based on a three-dimensional mannequin according to a first embodiment of the present application with reference to fig. 1.
The auxiliary acupuncture positioning system 100 includes a head display device 102 supporting mixed reality, a three-dimensional human body model 104, a human body posture recognition module 106, a human body model alignment module 108, an acupuncture point positioning module 110, and a user interaction module 112.
The head-mounted device 102 supports mixed reality technology, and has a built-in camera and a depth sensor for capturing images of a real environment and acquiring three-dimensional structural information thereof.
Mixed Reality (MR) is a novel display technology, which combines the characteristics of Virtual Reality (VR) and Augmented Reality (AR), and combines the virtual world and the real world together through advanced devices and systems to create a new visual experience.
The basic features of MR technology are as follows:
(1) Mixed reality: MR not only embeds virtual objects in the real world, but also allows users to interact with them, creating an environment in which the real and the virtual are mixed.
(2) Interaction: MR allows users to interact with the virtual world using natural gestures, speech, etc., providing a more natural, intuitive user experience.
(3) The reality sense: by modeling real-time 3D of the real world, MR techniques enable virtual objects to appear in a more realistic manner in the real world, e.g., occlusion relationships of virtual objects with real world objects, lighting effects, etc. can be achieved.
MR technology requires the support of complex computer vision, machine learning, 3D modeling, etc., and has a very wide range of applications, such as game entertainment, remote collaboration, educational training, design and manufacturing, medical health, etc.
In mixed reality, the virtual object and the real object are displayed in the same field of view, and rich visual experience is provided for users. The head display device 102 provided by the present application is intended to achieve this effect.
The head display device 102 provided by the application has a micro-display for displaying virtual images in the field of view of the user. The device also includes a headband for securing the device to the head of a user. The size and shape of the headband may be adjusted to accommodate the head sizes and shapes of different users.
The camera included in the head-display device 102 is used for capturing the real environment in the field of view of the user, and the depth sensor is used for acquiring the three-dimensional structure information of the real environment. This information is used to generate a corresponding real environment model in the virtual environment, enabling the virtual object to interact with the real object in a realistic manner.
When the user wears the head display device provided by the embodiment, the built-in camera and the depth sensor of the device can start to capture and analyze the real environment in the field of view of the user. This information is converted into a model of the virtual environment by an image processing algorithm.
Next, the micro-display of the head-mounted device will display the virtual environment and virtual target in the field of view of the user. The virtual target displayed may be static or dynamic, depending on the requirements of the application. A user may interact with the virtual target through an input interface of the device, such as a touch pad, buttons, or a voice recognition system. In the manner described above, the head-mounted device 102 provides a mixed reality visual experience for the user.
The three-dimensional mannequin 104 provided in this embodiment is a virtual mannequin created on a computer and labeled with virtual acupuncture points. The following details the steps performed by the three-dimensional manikin 104.
First, a basic three-dimensional manikin needs to be created. This model may be created using a variety of known techniques including, but not limited to: manual modeling using computer graphics techniques, acquisition of real human models by laser scanning techniques, and the like. This model should include the main parts of the human body, such as the head, chest, abdomen, legs, etc., to provide the basis for the subsequent labeling of acupuncture points.
After the human body model is created, virtual acupuncture points need to be marked on the model. First, it is necessary to collect information about acupuncture points, including but not limited to: the name of the acupuncture point, the positioning method, the corresponding human body part, etc. Such information can be obtained from various traditional Chinese medical literature.
Then, the collected acupuncture point information needs to be converted into three-dimensional coordinates, which can be achieved by performing coordinate conversion using computer graphics technology. These three-dimensional coordinates then need to be applied to the manikin to label each acupuncture point to a corresponding location. Each acupuncture point may be represented by a sphere or other visual icon, and is accompanied by basic information of the acupuncture point to facilitate user recognition and understanding. Through the steps, the acupuncture point marking based on the three-dimensional human body model is completed.
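To make the labeling concrete, each marked point might be stored as a small record attaching a name, a position in the model's coordinate frame, and descriptive text. The point names, coordinates, and descriptions below are purely illustrative placeholders, not data from the patent:

```python
from dataclasses import dataclass

@dataclass
class AcupuncturePoint:
    name: str          # point name; the examples below are illustrative
    position: tuple    # (x, y, z) in the human body model's coordinate frame
    description: str   # basic information shown to the user on selection

# Hypothetical entries; real coordinates would come from medical literature
# mapped onto the specific model.
acupoints = [
    AcupuncturePoint("Hegu (LI4)", (0.12, 0.85, 0.03),
                     "on the back of the hand, between thumb and index finger"),
    AcupuncturePoint("Zusanli (ST36)", (0.05, 0.45, 0.08),
                     "below the knee, lateral to the tibia"),
]
```

Each record's position would drive the sphere or icon rendered on the model, and the description would feed the information panel the user interaction module displays.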
The human body posture recognition module 106 provided in this embodiment is used for predicting the posture of the patient from the images captured by the head display device and the three-dimensional structure information thereof. This module is described below in connection with fig. 2. The module is mainly composed of an image preprocessing section 202, a posture prediction model 204, and a post-processing section 206.
An image preprocessing section 202: and the information such as images captured by a camera of the head display equipment is converted into a format suitable for inputting a gesture prediction model. Specific operations may include clipping, scaling, normalization, etc.
The image preprocessing section may be implemented by:
image reception S1001: first, information such as an image captured from a head display device is taken as an original input. These images may be RGB images or depth images, or a combination of both.
Intelligent cutting S1002: then, by using an intelligent clipping algorithm, the position of the human body in the image is automatically identified, and the partial region is clipped. The intelligent clipping algorithm uses a pre-trained deep learning model, such as a Convolutional Neural Network (CNN). The model is trained on a large number of images with human body position markers. When cutting, firstly, the image is input into a model, and the model outputs a thermodynamic diagram of the position of the human body. Then, the approximate center position of the human body is determined based on the maximum point in the thermodynamic diagram. Finally, a region of a predefined size is cut out, centered at this central position. In this way, it is ensured that the human body is located at the center in the cut image and as much information of the human body as possible is contained.
Scale normalization S1003: because different people have different body sizes, the cut images need to be subjected to scale normalization. In particular, the size of the image is uniformly scaled to a predefined size, for example 256x256 pixels. Thus, the processed image size is consistent regardless of the size of the original human body.
Pixel value normalization S1004: after scale normalization, the pixel values of the image also need to be normalized. The purpose of the normalization is to limit the pixel value range of the image to a range, such as 0,1 or-1, 1. This can make the training of the model more stable and prevent the performance of the model from being affected by too large or too small pixel values. Specific methods of pixel value normalization may include maximum-minimum normalization, Z-Score normalization, and the like.
Image enhancement S1005: finally, to enhance the generalization ability of the model, some enhancement operations such as random rotation, translation, scaling, etc. may also be performed on the image. There are a number of methods of image enhancement, and the appropriate method may be selected according to the specific application requirements and data characteristics.
Gesture prediction model 204: the model is a pre-trained deep learning model, and can predict the posture of a human body from an input image. The model may be a Convolutional Neural Network (CNN) or other model suitable for processing image data.
Such a model is commonly referred to as a "pose estimation network" and may be a Convolutional Neural Network (CNN) designed specifically for the human pose estimation task. Its basic structure is described below with reference to fig. 3:
input layer 302: the input to the model is a preprocessed image, which may be 256x256 pixels in size. The image is typically in RGB format, so the number of channels input is 3.
Convolution layer 304: next, there is a series of convolutional layers. Each convolution layer contains convolution operations, nonlinear activation functions (e.g., reLU), and pooling operations (optional). The convolution operation may extract features from the image and the pooling operation may reduce the spatial dimensions of the features, thereby reducing the complexity of the model.
The keypoint prediction layer 306: following the convolutional layer is a keypoint prediction layer. The task of this layer is to predict the probability that each pixel in the image is a human critical point (e.g., neck, elbow, knee, etc.). It generally consists of a convolution operation and a Softmax activation function. The number of output channels of the convolution operation is equal to the number of key points, and each channel corresponds to a probability map of one key point. The Softmax function is used to normalize the values of the probability map to between 0-1.
Keypoint offset vector prediction layer 308: this layer is in parallel with the keypoint probability prediction layer 306. This layer consists of a convolution operation, with twice the number of output channels as the number of keypoints, because each keypoint has an offset in both the x and y directions. The task of this layer is to predict the offset vector for each keypoint, i.e. a small change with respect to the predicted coordinates.
Decoding layer 310: following the keypoint prediction layer is a decoding layer. The task of the decoding layer is to translate the probability map into specific keypoint coordinates. The specific method is that the maximum point in each probability map is found, then the corresponding offset vector is added, and the coordinates of the maximum point are used as the coordinates of the key points.
Output layer 312: finally, the output of the model is the coordinates of all the keypoints. They constitute the posture of the human body.
In practice, the structure and parameters of the deep learning model need to be optimized through a large amount of training data. The training data generally comprises images with human body key point marks, and parameters of the model can be continuously updated through back propagation, gradient descent and other optimization algorithms so as to enable the predicted result and the real result of the model to be as close as possible.
Post-processing portion 206: the post-processing part is used for converting the output of the gesture prediction model into gesture data which can be directly used. This may include converting continuous values of the model output into discrete gesture categories, or converting relative coordinates of the model output into absolute coordinates.
The post-treatment part comprises the following implementation steps:
data analysis S2001: first, the keypoint coordinates and the keypoint offset vector are parsed from the output of the pose prediction model. These data represent the location of the keypoints in the form of consecutive values.
Adjustment of key point coordinates S2002: the keypoint offset vector is then used to correct the keypoint coordinates so that the prediction result is more accurate. The specific method is that the coordinates of the key points are added with the corresponding offset vectors to obtain the corrected coordinates of the key points.
Coordinate conversion S2003: in order to meet the requirements of acupuncture positioning, the coordinates of the key points need to be converted from the coordinate system of the head display device to the real world coordinate system. This can be achieved by the following steps: firstly, mapping the corrected key point coordinates into a depth image to obtain pixel coordinates and depth values of the key points; the pixel coordinates and depth values are then converted to real world coordinates using internal parameters of the head-mounted device (such as focal length and principal point coordinates) and external parameters (such as position and orientation of the device).
Coordinate filtering S2004: to improve the stability of the coordinate prediction, we can use some filtering algorithms, such as kalman filter, to smooth the continuous prediction result.
Body part identification S2005: since the positions of the acupuncture points are generally defined with respect to the body parts, it is necessary to identify the body parts corresponding to the key points. This may be achieved by some predefined rule, for example, if a keypoint is located between two arm keypoints, it may be considered a chest keypoint.
Gesture category recognition S2006: finally, the posture category of the patient can be identified according to the relative position relation of the key points. This may be accomplished by some predefined rule, for example, if all hand keypoints are below head keypoints, then the patient may be deemed to be sitting.
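The predefined rules of S2005 and S2006 could be sketched as follows; the keypoint names and thresholds are purely illustrative, not those of the actual system:

```python
def classify_posture(keypoints):
    """Toy rule-based posture classifier over named keypoints.

    keypoints maps names to (x, y) image coordinates; y grows downward,
    as in image coordinates, so 'below' means a larger y value.
    """
    head_y = keypoints["head"][1]
    # Rule from S2006: all hand keypoints below the head keypoint.
    hands_below_head = all(keypoints[k][1] > head_y
                           for k in ("left_hand", "right_hand"))
    hip_y = keypoints["hip"][1]
    knee_y = keypoints["knee"][1]
    # When sitting, hip and knee heights are close together (hypothetical rule).
    if hands_below_head and abs(hip_y - knee_y) < 0.1 * abs(knee_y - head_y):
        return "sitting"
    return "standing"

pose = classify_posture({"head": (0.5, 0.1),
                         "left_hand": (0.3, 0.6), "right_hand": (0.7, 0.6),
                         "hip": (0.5, 0.55), "knee": (0.5, 0.57)})
```
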
When the head-mounted device 102 captures an image, the image is first sent to the image preprocessing section. The preprocessed image data is input into a gesture prediction model, and the model predicts the gesture of the human body in the image. Finally, the post-processing portion converts the predicted pose data into a format that can be used directly.
Through the steps, the human body posture recognition module can predict the posture of the patient from the image of the head display device, and provides a basis for subsequent acupuncture point positioning. The human body gesture recognition module can be realized in a computer or a computing unit in the head display device.
The mannequin alignment module 108 provided in this embodiment is configured to align the virtual mannequin to the real patient body through a numerical optimization algorithm according to the predicted posture of the patient.
The human body model alignment module 108 is mainly composed of a pose data processing section, an alignment algorithm section, and a model adjustment section.
A posture data processing section: this part is responsible for processing the data from the human posture recognition module and converting it into a format suitable for input to the alignment algorithm. This may include converting a posture category into posture parameters, or converting absolute coordinates into relative coordinates. If the input posture data is in the form of a category (e.g., standing, sitting, etc.), it may need to be translated into specific posture parameters such as joint angles or joint coordinates; this step may be implemented via a lookup table, a rule-based mapping, or a machine learning model. If the input posture data consists of absolute coordinates, they need to be translated into coordinates relative to the manikin, for example by subtracting the mean of the model coordinates or another predetermined reference point.
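Both conversions described above can be sketched as follows (category-to-parameters via a lookup table, absolute-to-relative coordinates via centroid subtraction); the table entries and joint names are hypothetical:

```python
import numpy as np

# Hypothetical lookup table mapping a posture category to canonical joint
# angles (degrees) for a few joints; a real table would cover the full skeleton.
POSE_TABLE = {
    "standing": {"hip": 180.0, "knee": 180.0, "elbow": 170.0},
    "sitting":  {"hip": 90.0,  "knee": 90.0,  "elbow": 170.0},
}

def category_to_parameters(category):
    """Convert a posture category into joint-angle parameters via the table."""
    return dict(POSE_TABLE[category])

def to_relative(coords, reference=None):
    """Convert absolute keypoint coordinates (N x 3) into coordinates
    relative to a reference point (defaults to the centroid)."""
    coords = np.asarray(coords, dtype=float)
    if reference is None:
        reference = coords.mean(axis=0)   # mean of coordinates as reference
    return coords - reference

params = category_to_parameters("sitting")
rel = to_relative([[1.0, 2.0, 0.0], [3.0, 4.0, 0.0]])
```
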
Alignment algorithm part: a numerical optimization algorithm is used to optimize the posture parameters of the human body model so that the model is aligned with the real human body as closely as possible. The algorithm may be an iterative optimization algorithm, such as gradient descent, or another algorithm suited to this class of problems.
The alignment algorithm part may be implemented by:
(1) Setting an objective function:
assume that predicted pose parameters are represented as vector P (including the positions or angles of all joints), and pose parameters of the virtual manikin are represented as vector M. The goal here is to minimize the difference between these two vectors.
Thus, the objective function may be set as the square of the Euclidean distance between the two vectors, i.e., L = ||P - M||^2. The smaller the value of this function, the smaller the difference between the pose of the virtual model and the real pose.
On this basis, in order to improve the accuracy of acupuncture positioning, an optimization term for the acupuncture points can be added. Assuming the acupuncture point positions in the predicted pose and the model pose are P_a and M_a respectively, the position difference of the acupuncture points can also be included in the objective function, i.e., L = ||P - M||^2 + λ||P_a - M_a||^2, where λ is a weight parameter used to adjust the relative importance of the two terms.
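The objective function above translates directly into code; this sketch assumes P, M, P_a, M_a are flat coordinate vectors and λ is a user-chosen weight:

```python
import numpy as np

def alignment_loss(P, M, P_a, M_a, lam=0.5):
    """Pose-alignment objective L = ||P - M||^2 + lambda * ||P_a - M_a||^2:
    squared pose difference plus a weighted acupuncture-point term."""
    P, M, P_a, M_a = map(np.asarray, (P, M, P_a, M_a))
    return np.sum((P - M) ** 2) + lam * np.sum((P_a - M_a) ** 2)

# Pose differs by 1 in one component; acupoints differ by 0.2 in one component.
loss = alignment_loss([1.0, 2.0], [1.0, 1.0], [0.0, 0.0], [0.2, 0.0], lam=0.5)
```
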
The acupuncture point positions P_a under the predicted posture are produced by the established acupuncture knowledge base together with the human posture recognition module. Posture recognition: first, the human posture recognition module predicts the patient's posture parameters P by analyzing the real-environment images acquired from the head display device; P may include the positions or angles of all joints. Acupuncture knowledge base: the established knowledge base stores the positions of the acupuncture points relative to each joint, compiled from classical sources such as the Huangdi Neijing (Yellow Emperor's Inner Canon) as well as modern acupuncture research. Predicting acupuncture point positions: from the predicted posture parameters P and the acupuncture knowledge base, the acupuncture point positions P_a under the predicted posture can be calculated; this typically involves analytical or numerical geometric computations such as rotations and translations.
It should be noted that the predicted acupuncture point position p_a may not be completely accurate because there is a certain difference in the body structure of each person. This is why an optimization term for the acupuncture points is added to the objective function to further improve the accuracy of the acupuncture positioning.
(2) Optimization algorithm:
an appropriate numerical optimization algorithm, such as a gradient descent algorithm, is employed to minimize the objective function. Specifically, the pose parameter M of the model is first randomly initialized, and then M is adjusted in each iteration according to the gradient direction of the objective function, so that the value of the objective function gradually decreases.
In the optimization process, the physical constraints of the model, such as upper and lower limits on joint angles, must be considered. Specifically, if in some iteration a joint angle of the model exceeds its physical limits, it needs to be adjusted back within those limits. This can be achieved by clipping M after each iteration step, i.e., M = min(max(M, lower_bound), upper_bound), where lower_bound and upper_bound denote the lower and upper limits of the joint angles, respectively.
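A minimal sketch of the gradient descent loop with joint-limit clipping described above, using the quadratic objective L = ||P - M||^2 (the acupuncture-point term is omitted for brevity; limits and learning rate are illustrative):

```python
import numpy as np

def fit_pose(P, lower, upper, lr=0.1, steps=200):
    """Minimise L = ||P - M||^2 by gradient descent on M, clipping M into
    its physical joint limits after every step."""
    rng = np.random.default_rng(0)
    P, lower, upper = map(np.asarray, (P, lower, upper))
    M = rng.uniform(lower, upper)          # random initialisation of the pose
    for _ in range(steps):
        grad = 2.0 * (M - P)               # dL/dM for the quadratic loss
        M = M - lr * grad                  # gradient descent step
        M = np.clip(M, lower, upper)       # enforce joint-angle limits
    return M

# The target knee angle 200 deg exceeds the upper limit 190 deg, so that joint
# settles on the boundary; the unconstrained joint converges to its target.
M = fit_pose(P=[90.0, 200.0], lower=[0.0, 0.0], upper=[180.0, 190.0])
```
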
Through the steps, the gesture of the virtual human body model can be continuously optimized, so that the gesture is gradually close to the real gesture, the physical rationality of the model is ensured, and the accuracy of acupuncture positioning is improved.
Model adjustment section: the model adjustment section adjusts the posture of the virtual human body model according to the result of the alignment algorithm so as to be aligned with the real human body as much as possible.
When the human body posture recognition module predicts the posture of the patient, the posture data is firstly sent to the posture data processing part for processing. The processed data will be input to the alignment algorithm section, which will optimize the pose parameters of the mannequin to align the model to the real human body as much as possible. Finally, the model adjustment section adjusts the posture of the virtual human body model according to the optimization result. Meanwhile, according to the change of the model posture, the positions of acupuncture points on the virtual human body model are updated. For example, the position of the acupuncture point may be dynamically adjusted according to the changes of surrounding muscles and bones by linear interpolation or other suitable interpolation methods.
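The linear-interpolation update of acupuncture point positions mentioned above can be sketched as follows, assuming each point keeps a fixed proportional position along the bone segment between two joints (a simplification of the real anatomy-aware adjustment):

```python
import numpy as np

def acupoint_position(joint_a, joint_b, t):
    """Place an acupuncture point on the segment between two joints by
    linear interpolation; t in [0, 1] is the point's fixed proportional
    distance along the bone, so the point follows the joints as the
    model posture changes."""
    joint_a = np.asarray(joint_a, dtype=float)
    joint_b = np.asarray(joint_b, dtype=float)
    return (1.0 - t) * joint_a + t * joint_b

# A point 30% of the way from elbow to wrist tracks both joint positions.
p = acupoint_position([0.0, 0.0, 0.0], [10.0, 0.0, 0.0], 0.3)
```
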
Through the above steps, alignment of the pose of the virtual manikin with the true patient pose can be achieved.
The acupuncture point positioning module 110 provided in this embodiment is configured to convert the position of the virtual acupuncture point to the real patient body according to the alignment relationship between the virtual mannequin and the real patient body.
The acupuncture point positioning module 110 is mainly composed of an alignment data processing part, a positioning algorithm part and a positioning result output part.
An alignment data processing section: this part first receives data from the mannequin alignment module, including the angles, positions, and other parameters of the joints of the virtual mannequin. Its task is to convert these pose parameters into an alignment transformation matrix, i.e., a 4x4 matrix describing the rotation, translation, and scaling of the model. Rotation may be described with Euler angles or quaternions, while translation and scaling use the corresponding vectors and scalar values directly. The resulting matrix is fed into the positioning algorithm part.
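The construction of the 4x4 alignment matrix can be sketched as follows, assuming Euler angles for the rotation and a uniform scale; the rotation order (Z·Y·X) is an illustrative choice, not prescribed by the system:

```python
import numpy as np

def make_transform(rx, ry, rz, t, s=1.0):
    """Build a 4x4 alignment matrix from Euler angles (radians, composed in
    Z*Y*X order), a translation vector t, and a uniform scale s."""
    cx, sx = np.cos(rx), np.sin(rx)
    cy, sy = np.cos(ry), np.sin(ry)
    cz, sz = np.cos(rz), np.sin(rz)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    T = np.eye(4)
    T[:3, :3] = s * (Rz @ Ry @ Rx)     # rotation-and-scale block
    T[:3, 3] = t                       # translation column
    return T

# 90-degree rotation about z, then translation by (1, 0, 0).
T = make_transform(0.0, 0.0, np.pi / 2, t=[1.0, 0.0, 0.0])
```
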
Positioning algorithm part: this part receives the transformation matrix from the alignment data processing section together with the preset positions of the virtual acupuncture points, each expressed as a three-dimensional coordinate. The positioning algorithm applies the transformation matrix to the positions of the virtual acupuncture points to obtain the positions of the corresponding acupuncture points on the real patient's body. The specific calculation uses matrix multiplication, that is, new position = transformation matrix × virtual acupuncture point position (expressed in homogeneous coordinates).
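The matrix multiplication step can be sketched as follows, applying a 4x4 transformation to a 3D virtual acupuncture point via homogeneous coordinates:

```python
import numpy as np

def transform_point(T, p):
    """Apply a 4x4 alignment matrix to a 3D virtual acupuncture point using
    homogeneous coordinates: new position = T @ [x, y, z, 1]."""
    ph = np.append(np.asarray(p, dtype=float), 1.0)  # to homogeneous coords
    out = T @ ph
    return out[:3] / out[3]                          # back to 3D coordinates

# Pure translation by (0, 0, 5): the point moves 5 units along z.
T = np.eye(4)
T[:3, 3] = [0.0, 0.0, 5.0]
q = transform_point(T, [1.0, 2.0, 3.0])
```
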
A positioning result output section: and the result of the positioning algorithm is converted into acupuncture point position data which can be directly used. The positioning result may be a three-dimensional coordinate which is converted into a distance and angle with respect to a reference point, such as a part of the patient's body or a camera of the head-mounted display device, or into pixel coordinates of a two-dimensional image. The specific conversion process can be completed through a corresponding coordinate system conversion formula. The final output format may be set according to actual requirements, for example, may be output in text or visual form.
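The conversion to two-dimensional pixel coordinates mentioned above can be sketched with the pinhole projection model; the intrinsics are illustrative, and the point is assumed to already be expressed in the camera frame:

```python
import numpy as np

def world_to_pixel(p_cam, fx, fy, cx, cy):
    """Project a 3D point in the camera frame onto the image plane with the
    pinhole model, yielding 2D pixel coordinates for display (assumes z > 0)."""
    x, y, z = p_cam
    u = fx * x / z + cx   # horizontal pixel coordinate
    v = fy * y / z + cy   # vertical pixel coordinate
    return u, v

# A point on the optical axis projects onto the principal point.
u, v = world_to_pixel((0.0, 0.0, 2.0), fx=500, fy=500, cx=320, cy=240)
```
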
When the human body model alignment module completes the alignment operation, the alignment data is firstly sent to the alignment data processing part for processing. The processed data are input into a positioning algorithm part, and the positioning algorithm calculates the corresponding acupuncture point positions on the real patient body according to the alignment data and the positions of the virtual acupuncture points. Finally, the positioning result output part converts the positioning result into a format which can be directly used.
Through the steps, the acupuncture point positioning module can convert the positions of the virtual acupuncture points to the real patient body, so that the basis is provided for the operation of an acupuncture engineer.
The user interaction module 112 provided in this embodiment is configured to provide a user interaction manner, so that a user can see a virtual mannequin and switch to an acupuncture point on a real patient body while seeing the real patient body.
The user interaction module is mainly composed of a Virtual Reality (VR) display part, an interaction input part and an interaction feedback part.
Virtual Reality (VR) display portion: the part is responsible for fusing and displaying the virtual mannequin and the virtual acupuncture points and the real environment image captured by the head display device into the visual field of the user. This involves some image synthesis techniques such as image fusion, image stitching, etc. In addition, the display part may be updated to an Augmented Reality (AR) display, which may combine a virtual manikin and acupuncture points with a real environment, and may also dynamically adjust display contents according to a user's viewing angle. This involves some image fusion and three-dimensional image rendering techniques to achieve virtual and real seamless fusion.
In addition, the system can display the virtual acupuncture points in the form of three-dimensional arrows or highlight areas, so that the visibility of the acupuncture points is improved, visual indication is provided for users, and the users are helped to better understand and position the acupuncture points. The three-dimensional arrows or the highlight areas can be distinguished in color or size according to the importance of the acupuncture points, the body positions of the acupuncture points and the like, so that a user can rapidly distinguish and identify the priority and the relative position of each acupuncture point.
To further improve the user's interactive experience, additional display modes can be introduced. For example, the stimulation of an acupuncture point can be simulated with an animation effect, such as making the model's skin at the virtual acupuncture point ripple or light up. Descriptive text labels or graphic symbols can also be attached to the virtual acupuncture points through Augmented Reality (AR) technology, providing information about the name, effect, and applicable diseases of each point.
In some complex acupuncture operations, Virtual Reality (VR) technology may also be used to demonstrate the three-dimensional dynamic process of the operation, such as needle insertion, needle rotation, and lifting-thrusting manipulation, so that a user can study the steps and techniques of the operation in detail from various angles, further enhancing the teaching and training functions of the system. In addition, by incorporating speech recognition technology, a user can query or operate the model and acupuncture points through voice commands, providing a more convenient interaction mode. These display modes not only improve the user experience of the system, but also help improve the accuracy and efficiency of acupuncture positioning.
An interaction input section: the portion receives and processes interactive inputs from a user, such as gesture operations, voice commands, and the like. This requires the use of some user input recognition techniques such as gesture recognition, voice recognition, etc. In addition, a gaze tracking technique may be added to enable the system to know which acupuncture point or model area the user is looking at and automatically zoom in or highlight this area. The gaze tracking technique may be implemented by specialized devices or using a front-facing camera in some existing head-mounted devices (e.g., some type of VR device).
Interaction feedback part: the portion adjusts the display of the virtual mannequin and acupuncture points according to the user's interactive input, and provides corresponding user feedback. This may include changing the position, color, size, etc. properties of the virtual object or providing feedback in the form of vibration, sound, etc. In addition, virtual reality technology may be used to display an animation of the penetration of a virtual needle at a selected point after the user selects that point to provide visual feedback. In addition, a haptic feedback device, such as a hand-held vibration device or a wearable device, may be incorporated to generate vibrations when a virtual needle pierces the virtual model, enabling the user to feel the sensation of virtual needle penetration, thereby providing more intuitive and real feedback.
The user interaction module can also comprise a self-adaptive user auxiliary system, which can automatically adjust the display mode of the model and the acupuncture points and the feedback mode according to the operation habit and the technical level of the user. For example, for a primary user, the system may display more instructional information and auxiliary lines, and for a premium user, more degrees of freedom and custom options may be provided. The system can be automatically adjusted by collecting and analyzing operation data of a user, and can also be manually set by the user.
The picture that the user sees through the head display device is generated by a Virtual Reality (VR) display part, and the picture includes a virtual manikin, acupuncture points, and an image of a real environment. The user may perform operations such as rotating a model, selecting acupuncture points, etc., through the interactive input section. The interactive input section recognizes the input of the user and transmits the recognition result to the interactive feedback section. The interactive feedback section adjusts the display of the virtual object and provides feedback based on the recognition result so that the user can perceive that his operation has been received by the system and responded to.
Through the above steps, the user interaction module 112 can provide an intuitive and easy-to-operate interaction manner, so that the user can see and operate the virtual mannequin and acupuncture points while seeing the real patient body.
The second embodiment of the application provides an auxiliary acupuncture positioning method based on a three-dimensional human body model. Please refer to fig. 4, which is a schematic diagram of a second embodiment of the present application. The following provides a detailed description of an auxiliary acupuncture positioning method based on a three-dimensional human body model according to a second embodiment of the present application with reference to fig. 4. Since this embodiment is similar to the first embodiment, the description is relatively simple, please refer to the relevant portions of the first embodiment.
The auxiliary acupuncture positioning method based on the three-dimensional human body model provided by the embodiment comprises the following steps:
s400: capturing an image of a real environment of a patient and acquiring three-dimensional structure information of the image by using head display equipment;
s402: creating a three-dimensional human body model, and marking acupuncture points on the model;
s404: predicting the posture of the patient from the image of the head display device;
s406: aligning the virtual manikin to the real patient body through a numerical optimization algorithm according to the predicted patient posture;
s408, according to the alignment relation between the virtual human body model and the real patient body, converting the positions of the virtual acupuncture points to the real patient body;
s410: through the user interaction technology, a user interaction mode is provided, so that a user can see a virtual human body model and switch to an acupuncture point on a real patient body while seeing the real patient body.
In this embodiment, the user interaction manner further includes allowing the user to select and operate the acupuncture points through gestures or voice commands, and displaying related information on the screen after the user selects one acupuncture point.
In this embodiment, predicting the posture of the patient from the image of the head display device includes:
a deep neural network is used to predict the posture of the patient from the image captured by the head display device.
In this embodiment, the aligning the virtual manikin to the real patient body by the numerical optimization algorithm according to the predicted patient posture includes:
using a gradient descent algorithm to find model parameters that minimize the difference between the virtual manikin and the patient's body;
based on the model parameters, a virtual manikin is aligned to the real patient body.
In this embodiment, the converting the position of the virtual acupuncture point to the real patient body according to the alignment relationship between the virtual manikin and the real patient body includes:
and calculating the corresponding acupuncture point positions on the real patient body according to the alignment relation and the positions of the virtual acupuncture points, so that the positions of the virtual acupuncture points are converted to the real patient body.
While the application has been described in terms of preferred embodiments, it is not intended to be limiting, but rather, it will be apparent to those skilled in the art that various changes and modifications can be made herein without departing from the spirit and scope of the application as defined by the appended claims.

Claims (10)

1. An auxiliary acupuncture positioning system based on a three-dimensional mannequin, the system comprising:
the head display device is provided with a built-in camera and a depth sensor and is used for capturing images of a real environment and three-dimensional structure information of the real environment;
a three-dimensional human body model which is a virtual human body model created by a computer and is marked with virtual acupuncture points;
the human body posture recognition module is used for predicting the posture of the patient from the images captured by the head display equipment and the three-dimensional structure information thereof;
the human body model alignment module is used for aligning the virtual human body model to the real patient body through a numerical optimization algorithm according to the predicted posture of the patient;
the acupuncture point positioning module is used for converting the position of the virtual acupuncture point to the real patient body according to the alignment relation between the virtual human body model and the real patient body;
the user interaction module is used for providing a user interaction mode, so that a user can see a virtual human body model and switch to an acupuncture point on the real patient body while seeing the real patient body.
2. The auxiliary acupuncture positioning system of claim 1, wherein the user interaction module allows a user to select and operate acupuncture points through gestures or voice commands, and displays related information on a screen after the user selects one acupuncture point.
3. The assisted acupuncture positioning system of claim 1, wherein the body posture recognition module comprises a deep neural network for predicting the posture of the patient from the images captured by the head display device.
4. The assisted acupuncture positioning system of claim 1, in which the mannequin alignment module uses a gradient descent algorithm to calculate model parameters that minimize the difference between the virtual mannequin and the patient's body.
5. The auxiliary acupuncture positioning system of claim 1, wherein the acupuncture point positioning module calculates a corresponding acupuncture point position on a real patient's body based on the alignment relationship and the position of the virtual acupuncture point, thereby converting the position of the virtual acupuncture point to the real patient's body.
6. An auxiliary acupuncture positioning method based on a three-dimensional human body model, which comprises the following steps:
capturing an image of a real environment of a patient and acquiring three-dimensional structure information of the image by using head display equipment;
creating a three-dimensional human body model, and marking acupuncture points on the model;
predicting the posture of the patient from the image of the head display device;
aligning the virtual manikin to the real patient body through a numerical optimization algorithm according to the predicted patient posture;
according to the alignment relation between the virtual human body model and the real patient body, converting the positions of the virtual acupuncture points to the real patient body;
through the user interaction technology, a user interaction mode is provided, so that a user can see a virtual human body model and switch to an acupuncture point on a real patient body while seeing the real patient body.
7. The method of claim 6, wherein the user interaction means further comprises allowing the user to select and operate acupuncture points through gestures or voice commands, and displaying related information on a screen after the user selects one acupuncture point.
8. The method for assisting acupuncture and moxibustion positioning according to claim 6, wherein predicting the posture of the patient from the image of the head display device comprises:
a deep neural network is used to predict the posture of the patient from the image captured by the head display device.
9. The assisted acupuncture positioning method of claim 6, wherein aligning the virtual mannequin to the real patient body by a numerical optimization algorithm based on the predicted patient pose comprises:
using a gradient descent algorithm to find model parameters that minimize the difference between the virtual manikin and the patient's body;
based on the model parameters, a virtual manikin is aligned to the real patient body.
10. The method of positioning auxiliary acupuncture according to claim 6, wherein the converting the position of the virtual acupuncture point to the real patient body according to the alignment of the virtual manikin and the real patient body comprises:
and calculating the corresponding acupuncture point positions on the real patient body according to the alignment relation and the positions of the virtual acupuncture points, so that the positions of the virtual acupuncture points are converted to the real patient body.
CN202310777244.XA 2023-06-28 2023-06-28 Auxiliary acupuncture positioning system and method based on three-dimensional human body model Active CN116646052B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310777244.XA CN116646052B (en) 2023-06-28 2023-06-28 Auxiliary acupuncture positioning system and method based on three-dimensional human body model


Publications (2)

Publication Number Publication Date
CN116646052A true CN116646052A (en) 2023-08-25
CN116646052B CN116646052B (en) 2024-02-09

Family

ID=87624889


Country Status (1)

Country Link
CN (1) CN116646052B (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109243575A (en) * 2018-09-17 2019-01-18 华南理工大学 A kind of virtual acupuncture-moxibustion therapy method and system based on mobile interaction and augmented reality
US20200126297A1 (en) * 2018-10-17 2020-04-23 Midea Group Co., Ltd. System and method for generating acupuncture points on reconstructed 3d human body model for physical therapy
CN111524433A (en) * 2020-05-29 2020-08-11 深圳华鹊景医疗科技有限公司 Acupuncture training system and method
CN112258921A (en) * 2020-11-12 2021-01-22 胡玥 Acupuncture interactive teaching system and method based on virtual and mixed reality
WO2022040920A1 (en) * 2020-08-25 2022-03-03 南京翱翔智能制造科技有限公司 Digital-twin-based ar interactive system and method
KR20220074008A (en) * 2020-11-27 2022-06-03 아주통신(주) System for mixed-reality acupuncture training with dummy and acupuncture controller


Also Published As

Publication number Publication date
CN116646052B (en) 2024-02-09


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant