CN111053530A - Method and system for locating a position within a human body - Google Patents
Method and system for locating a position within a human body
- Publication number
- CN111053530A (application CN201910980140.2A)
- Authority
- CN
- China
- Prior art keywords
- human body
- location
- user
- image
- target
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/48—Other medical applications
- A61B5/4887—Locating particular structures in or on the body
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/0033—Features or image-related aspects of imaging apparatus classified in A61B5/00, e.g. for MRI, optical tomography or impedance tomography apparatus; arrangements of imaging apparatus in a room
- A61B5/004—Features or image-related aspects of imaging apparatus classified in A61B5/00, e.g. for MRI, optical tomography or impedance tomography apparatus; arrangements of imaging apparatus in a room adapted for image acquisition of a particular organ or body part
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/0059—Measuring for diagnostic purposes; Identification of persons using light, e.g. diagnosis by transillumination, diascopy, fluorescence
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/48—Other medical applications
- A61B5/4887—Locating particular structures in or on the body
- A61B5/489—Blood vessels
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/72—Signal processing specially adapted for physiological signals or for diagnostic purposes
- A61B5/7235—Details of waveform analysis
- A61B5/7246—Details of waveform analysis using correlation, e.g. template matching or determination of similarity
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61H—PHYSICAL THERAPY APPARATUS, e.g. DEVICES FOR LOCATING OR STIMULATING REFLEX POINTS IN THE BODY; ARTIFICIAL RESPIRATION; MASSAGE; BATHING DEVICES FOR SPECIAL THERAPEUTIC OR HYGIENIC PURPOSES OR SPECIFIC PARTS OF THE BODY
- A61H39/00—Devices for locating or stimulating specific reflex points of the body for physical therapy, e.g. acupuncture
- A61H39/02—Devices for locating such points
Abstract
The application discloses a method and a system for locating a position within a human body. The method comprises: selecting a first target object within the human body; capturing an image of the human body; identifying a first interior portion within the human body directly below a first location on the surface of the human body; determining a first distance from the first location to the first interior portion; mapping the image, the first location and the first distance onto a three-dimensional reference phantom; and deriving the position of the first target object. The system locates a target location within a human body using an image of the human body and the sub-epidermal depth of an interior portion of the human body, and comprises: a main module; an image module adapted to pass the image to the main module; and a depth data module adapted to communicate the sub-epidermal depth to the main module. The main module is configured to map the image and the sub-epidermal depth onto a three-dimensional reference phantom and predict the target location.
Description
Technical Field
The present invention relates to a method and system for locating the position of an object, such as a bone, organ or acupuncture point, within a human body.
Background
The view of Western medicine is that the human body is composed of several interdependent systems, such as the skeletal system, the muscular system, the nervous system, the respiratory system, the cardiovascular system, the digestive system, the cutaneous system, etc. Traditional Chinese medicine (TCM), however, views the human body very differently. According to TCM, the human body consists of the five zang organs, the six fu organs, the five sense organs, the meridians and collaterals, etc. The meridian system is a distribution network that allows basic body substances, such as qi (or energy), blood, and body fluids, to flow to a number of different organs and throughout the body. The meridians are not blood vessels and have no corresponding anatomical structure. Along the meridians are acupuncture points located near the skin, and each acupoint is associated with an organ in the body. TCM describes about 20 meridians and 400 acupoints. The positions of some acupuncture points are defined relative to specific anatomical landmarks, for example, at a certain distance from the upper end of a bone. Each acupoint is usually identified by two letters and a number: the letters indicate on which meridian the acupoint lies, and the number indicates its position along that meridian. For example, BL-22 is the acupoint at position 22 along the bladder channel (BL).
Human beings have long used massage to relieve muscle stiffness, pain and other symptoms: pressure is applied to the muscles and nerves to achieve the desired result. Since ancient times, TCM physicians have employed acupuncture to alleviate symptoms or treat people suffering from a variety of diseases. Depending on the ailment, the relevant acupuncture points are stimulated using fine needles to provide the necessary relief or treatment. It is believed that stimulating the acupuncture points corrects imbalances or obstructions in the flow of qi (energy).
Massage and acupuncture have traditionally been performed manually. Since many parts of the body cannot be felt with the fingers, the person performing the massage or acupuncture often has to guess or estimate their positions; the effectiveness of manual treatment therefore relies on the experience of that person. With the development of technologies such as computer vision and robotics, it is now possible to use robotic arms and other devices to mechanize or automate massage and acupuncture. However, no two people have the same external body shape and size, and their body interiors also vary, for example in the alignment of bones and the size of organs. A mechanized or automated massage or acupuncture procedure is therefore accurate and effective only when the automated system has complete and accurate information about its user. More specifically, an automated system requires full and accurate knowledge of the locations of targets such as the user's bones, internal organs and acupuncture points in order to perform a massage or acupuncture treatment properly without causing harm. Giving the system incomplete or inaccurate information will not only cause it to perform the treatment in a less than ideal manner; the operation may even injure the user. Existing automated systems rely on initial reference points manually marked on the user's body to locate other objects of the user's body, such as bones, internal organs and acupuncture points. However, these initial reference points on the surface of the user's body provide only one aspect of the information about the user's body. Accordingly, there is a need for methods and systems that understand the user's body more efficiently and accurately and locate objects within it.
Disclosure of Invention
The invention discloses a method for locating a position within a human body, comprising the steps of: selecting a first target object within the human body, capturing an image of the human body, identifying a first interior portion within the human body directly below a first location on the surface of the human body, determining a first distance from the first location to the first interior portion, mapping the image, the first location and the first distance onto a three-dimensional reference phantom, and deriving the location of the first target object. The method may further comprise the steps of: selecting a second target object within the human body, identifying a second interior portion within the human body directly below a second location on the surface of the human body, determining a second distance from the second location to the second interior portion, mapping the second location and the second distance onto the three-dimensional reference phantom, and deriving the location of the second target object. The first and second interior portions may be bones or organs of the human body, and the first and second target objects are each at least one of: acupoints, bones, organs, tender points, blood vessels and meridians of the human body.
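As a loose illustration only (not the patent's actual algorithm), the claimed steps can be sketched in code. With a single reference point the mapping degenerates to a translation between the user's frame and the phantom's frame; the patent also applies scaling and rigid/non-rigid transformations, which require more references. All names, coordinates and the phantom layout below are invented:

```python
import numpy as np

def locate_target(surface_point, depth, phantom, target_name):
    # Place the initial reference in 3-D: the interior landmark lies
    # `depth` below the surface point (z grows out of the body).
    x, y = surface_point
    landmark_user = np.array([x, y, -depth])
    # With one reference point, the phantom can only be translated
    # onto the user's frame.
    offset = phantom["landmark"] - landmark_user
    # The target is stored in the phantom in the same frame as the
    # landmark; shift it into the user's frame.
    return phantom["targets"][target_name] - offset

# Invented phantom: one bone landmark plus one acupoint, in millimetres.
phantom = {
    "landmark": np.array([0.0, 0.0, -30.0]),
    "targets": {"BL-23": np.array([15.0, 40.0, -25.0])},
}
pos = locate_target((100.0, 200.0), 28.0, phantom, "BL-23")
```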
The method of the present invention may further comprise: a line is marked on the surface of the body and the position of the line relative to the position of the first and second target objects is determined. Alternatively, the method of the present invention may further comprise: an area is marked on the surface of the human body and the position of the area relative to the position of the first and second target objects is determined.
Identifying the first and second interior portions within the human body may be performed by tactile sensation using a finger. Determining the first and second distances may include depressing at the first and second locations until the first and second interior portions are tactilely felt using a finger, and measuring a depth of depression at the first and second locations using a depth sensor.
The invention also discloses a system for locating a target location within a human body using the human body image and the sub-epidermal depth of the internal portion of the human body. The system includes a main module, an image module adapted to transmit an image of a human body to the main module, and a depth data module adapted to transmit a sub-epidermal depth to the main module. The main module is configured to map the image and the sub-epidermal depth onto a three-dimensional reference phantom and predict a target location.
The target position is the position of a bone, organ, acupuncture point, meridian, tender point or blood vessel of the human body. The three-dimensional reference phantom includes reference coordinates of at least one of the acupuncture points, bones, organs, meridians, tender points and blood vessels of a standard human body.
Drawings
Fig. 1 shows the process steps of a first embodiment of the method of the invention.
Fig. 2 shows the body of a prone user and the operator's arm pointing to a position on the surface of the user's body using a finger.
Fig. 3 shows a side view of the body of a prone user and an operator indicating a position on the surface of the user's body.
Fig. 4 shows a side view of the body of a prone user and the operator pressing down on the surface of the user's body until the lumbar vertebrae are felt.
Fig. 5 shows a depression made by an operator on the surface of the body of a prone user.
Fig. 6 shows two positions used as initial reference points, which are indicated on the prone user body surface using two fingers.
Fig. 7 shows three positions used as initial reference points and the derivation of certain acupuncture points based on the three initial reference positions.
FIG. 8 illustrates an embodiment of system modules for deriving a location of a target object.
Fig. 9 shows an automation system employing the present invention.
Detailed Description
The following detailed description refers to the accompanying drawings, which form a part hereof. The methods and systems described and illustrated in detail are for illustrative purposes and are not intended to be limiting. Other embodiments may be utilized, and other changes may be made, without departing from the spirit or scope of the subject matter presented here. In the present disclosure, a description or consideration of a given element in a particular figure or the use of a particular element number or reference thereto in corresponding descriptive material may include the same, equivalent or comparable element or element number in another figure or descriptive material associated therewith.
The elements in fig. 1-9 are described in table 1 below.
Table 1: description of elements in the figures
Element | Description
100 | User's body
110 | First location on the surface of the user's body
120 | Second location on the surface of the user's body
130 | Third location on the surface of the user's body
140 | The operator's finger
145 | Second finger of the operator
150 | Surface of the user's body
160 | Depression on the surface of the user's body
170 | Line from the third location 130
180 | Line connecting the first and second locations
190 | Derived acupuncture points
200 | System module for deriving a position of a target object
300 | Automated system for massage or acupuncture
The present invention discloses a method and system for locating the position of a target object, such as a bone, an organ, an acupoint, a meridian, a tender point or a blood vessel, within a human body 100. The methods and systems disclosed in the present invention may be used to improve the mechanization or automation of massage and acupuncture treatments. Although the figures used to illustrate the method of the present invention show the method being performed on the back (posterior) of a human body, the method can equally be applied to other parts of the body, such as the front (anterior) and the sides.
Fig. 1-5 show a first embodiment of the method of the invention. Fig. 1 shows the steps taken by an operator to find the location of a target object within the body of a user. The user is a person who is subjected to a massage or acupuncture treatment. Fig. 2 shows a plan view of a user lying prone or face down on a platform. The user's body 100 has a surface 150 with an epidermal layer, which is the outer layer of the skin. In fig. 2, a first location 110 on a surface 150 of a user's body is indicated by an operator's arm using a finger 140. In addition to using the finger 140, the first location 110 may also be indicated using other means, such as a pointer.
The steps taken by the operator include: selecting a first target object within the user's body 100, capturing an image of the user's body 100, identifying a first interior portion within the user's body 100 directly below a first location 110 on a surface 150 of the user's body, and determining a first distance from the first location 110 on the surface 150 of the user's body to the first interior portion. The image, the first position 110 and the first distance are then mapped onto a three-dimensional reference phantom and the position of the first target object is derived. The position of the first target object is also referred to as the first target position.
An image or picture of the user's body is taken, captured or recorded using a suitable device, such as an optical device, a camera, an image sensor or a computer vision system. The image or picture of the user's body may be a full body or a half body image.
The first interior portion within the user's body 100 is directly below the first location 110 on the surface 150 of the user's body. The first interior portion may be an organ, bone, or other constituent part of the user's body. Preferably, the first inner portion selected for use is a portion that can be felt or detected using a finger. The first inner portion may also be selected based on how it relates to deriving the position of the first target object.
Having selected the first interior portion for use, the operator feels or detects the first interior portion of the user's body 100 using his fingers. Detectable locations on the first interior portion, such as bone ends or organ edges, are identified by touch using the finger 140. Thereafter, the operator indicates the first location 110 on the surface 150 of the user's body, where pressing the surface 150 vertically with a finger at the first location 110 will cause the operator to feel a detectable location on the first interior portion below the surface 150. In other words, the first location 110 on the surface 150 of the user's body is a location on the surface 150 in its natural, undepressed state. Depressing the first location 110 in a direction perpendicular to the surface 150 and toward the interior of the user's body enables the operator to feel a detectable location on the first interior portion below the surface 150. Here "vertical" and "perpendicular" also include "approximately vertical" and "approximately perpendicular".
Fig. 3 shows a side view of the body of a prone user and the operator using a finger 140 to indicate a first position 110 on the user's body surface 150. The thus indicated first position 110 on the surface 150 of the user's body is captured or recorded using a suitable device, such as an optical device, a camera or a computer vision system. The coordinates of the first location 110 are calculated.
Having obtained the first location 110 on the surface 150 of the user's body, a first distance from the first location 110 on the surface 150 of the user's body to the first interior portion is determined and recorded. A first distance from the first location 110 to the first interior portion on the surface 150 of the user's body may be determined by depressing the first location 110 until the first interior portion is tactilely felt using the finger 140 and then measuring the depth of the depression 160 on the first location 110 using a depth sensor, which may include an optical device, a camera, or a computer vision system.
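The depression-depth measurement just described could be sketched as differencing two depth-sensor frames, one before and one during the press. The function name, the patch-averaging window and the units are illustrative assumptions, not details from the patent:

```python
import numpy as np

def depression_depth(depth_before, depth_during, point, window=2):
    # Average a small patch around the pressed point to suppress sensor noise.
    r, c = point
    sl = (slice(r - window, r + window + 1), slice(c - window, c + window + 1))
    # The surface moves away from an overhead sensor as it is depressed,
    # so the sub-epidermal depth is the increase in measured distance.
    return float(np.mean(depth_during[sl]) - np.mean(depth_before[sl]))

before = np.full((10, 10), 500.0)   # mm from sensor to the relaxed surface
during = before.copy()
during[3:8, 3:8] = 528.0            # surface pressed in 28 mm around the finger
d = depression_depth(before, during, (5, 5))
```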
Fig. 4 shows a side view of the body of a prone user and the operator pressing down on the first location 110 on the user's body surface 150 until the first interior portion (in this example, the lumbar spine) is detected or felt. Other first positions corresponding to other first inner portions may be used. Fig. 5 shows a depression 160 by the operator on the surface of the prone user's body.
The image or picture of the user's body, the first position 110 and the first distance are initial references used in subsequent steps. Having obtained an image of the user's body 100, the location 110 on the user's body surface 150 and the distance from the location 110 on the surface 150 to the interior portion, the location of the target object within the user's body 100 may be derived, calculated or predicted. The target object may be a bone, an organ, an acupuncture point, a meridian, a tender point, or a blood vessel of the user's body 100, which is not used as an initial reference. The derivation, calculation or prediction of the position of the target object is achieved by mapping said image, position 110 and first distance onto a three-dimensional reference phantom. In the mapping process, scaling and transformation, including rigid and non-rigid transformations, are performed on the image of the user's body 100, the first location 110 on the user's body surface 150, and the first distance from the location 110 on the surface 150 to the first interior portion. Alternatively, the three-dimensional reference phantom may be scaled and transformed.
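The scaling-and-rigid-transformation part of the mapping can be illustrated with a standard similarity-transform fit in the style of the Kabsch/Umeyama method. This is a generic sketch of that class of technique, not the patent's specific mapping, and the non-rigid refinement the patent also mentions is omitted:

```python
import numpy as np

def fit_similarity(user_pts, phantom_pts):
    # Centre both point sets.
    mu_u, mu_p = user_pts.mean(axis=0), phantom_pts.mean(axis=0)
    U, P = user_pts - mu_u, phantom_pts - mu_p
    # Kabsch-style rotation from the cross-covariance, with a reflection guard.
    V, S, Wt = np.linalg.svd(U.T @ P)
    d = np.sign(np.linalg.det(Wt.T @ V.T))
    D = np.diag([1.0] * (U.shape[1] - 1) + [float(d)])
    R = Wt.T @ D @ V.T
    # Optimal isotropic scale and translation.
    s = float(np.trace(np.diag(S) @ D)) / float(np.sum(U ** 2))
    t = mu_p - s * (R @ mu_u)
    return s, R, t

user = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
R0 = np.array([[0.0, -1.0], [1.0, 0.0]])           # 90-degree rotation
phantom = 2.0 * user @ R0.T + np.array([5.0, 3.0])
s, R, t = fit_similarity(user, phantom)            # recovers scale, rotation, shift
```

Once fitted, any phantom point maps back into the user's frame via the inverse transform, which is how derived target positions would be read off.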
Using one location 110 on the user's body surface 150 as an initial reference enables only a limited number of target locations in the user's body 100 to be derived, calculated or predicted. A second location 120 may therefore be added as another reference in the second embodiment of the method of the present invention. The steps in the second embodiment thus further comprise: selecting a second target object within the user's body 100, identifying a second interior portion within the user's body 100 directly below a second location 120 on the user's body surface 150, determining a second distance from the second location 120 to the second interior portion, mapping the second location 120 and the second distance onto the three-dimensional reference phantom, and deriving the location of the second target object.
Fig. 6 shows a second embodiment of the inventive method used for a user's body 100, where two locations (110 and 120) on the user's body surface 150 are used as reference points, which are indicated simultaneously using two fingers (140, 145). The first location 110 and the second location 120 may be captured and recorded one after the other or simultaneously using a suitable device such as an optical device, a camera, or a computer vision system.
Coordinates of the first and second locations (110, 120) are calculated. Having obtained first and second locations (110, 120) on the user's body surface 150, respective distances from the first and second locations (110, 120) on the user's body surface 150 to the first and second detectable locations on the first and second interior portions are determined or recorded. These distances are determined in the same manner as in the first embodiment described above.
The first and second inner portions may be bones or organs of the user's body 100. The first and second target objects are at least one of: bones, organs, acupuncture points, channels and collaterals, tender points, and blood vessels of the user's body 100. The reason for using bones or organs as the first or second internal part as an initial reference is that they can be easily detected, felt and identified using fingers.
In addition to using finger tactile sensation to detect or identify the first or second interior portion, the operator may also use a suitable device or instrument to detect and identify them. Likewise, in addition to using the depression depth to determine the first or second distance, the operator may also use a suitable device or instrument to determine these distances.
In addition to the coordinates of the first and second locations (110 and 120) on the user's body surface 150, the coordinates of the first and second detectable locations are also calculated based on the respective distances from the first and second locations (110 and 120) to the detectable locations on the first and second interior portions below the surface.
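The coordinate calculation just described can be sketched as follows, assuming a prone user so that the inward normal is approximated by the negative z axis (an assumption, since the patent does not fix a coordinate convention):

```python
import numpy as np

def subsurface_point(surface_xy, depth, surface_z=0.0):
    # The detectable location sits `depth` below the surface point,
    # measured along the inward normal (taken here as -z).
    x, y = surface_xy
    return np.array([x, y, surface_z - depth])

p1 = subsurface_point((120.0, 310.0), 28.0)   # first detectable location
p2 = subsurface_point((120.0, 150.0), 25.0)   # second detectable location
```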
Using two locations (110 and 120) on the user's body surface 150 and two distances from each of the two locations to two detectable locations on two interior portions, the locations of more target objects can be accurately derived, calculated, or predicted. The accuracy increases with the number of initial reference positions.
Fig. 7 shows a third embodiment of the inventive method applied to a user's body 100, wherein three locations (110, 120 and 130) on the user's body surface 150 are used as initial reference points. The first location 110 is above the lumbar spine at L5 and the second location 120 is above the lumbar spine at L1. The third location 130 is on the edge of the scapula (shoulder blade). A line 180 connects the first location 110 and the second location 120, and line 170 extends from the third location 130 on the scapula margin. After determining the respective distances from the three locations to the respective interior portions below them and performing the necessary mapping onto the three-dimensional reference phantom, some acupuncture points 190 along the bladder channel (BL) are derived. The names of the derived acupuncture points 190 are shown in the following table.
Derived acupuncture point | Name
BL-22 | Sanjiaoshu
BL-23 | Shenshu
BL-24 | Qihaishu
BL-25 | Dachangshu
BL-26 | Guanyuanshu
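The derivation of acupoints from three reference points, as in Fig. 7, can be illustrated by solving an exact planar affine map from phantom coordinates to user-image coordinates; three non-collinear correspondences determine such a map uniquely. Every coordinate below is invented for illustration:

```python
import numpy as np

def fit_affine_2d(src, dst):
    # Three non-collinear correspondences determine a planar affine map
    # exactly; lstsq solves the square 3x3 system [x, y, 1] @ coef = dst.
    A = np.hstack([src, np.ones((3, 1))])
    coef, *_ = np.linalg.lstsq(A, dst, rcond=None)
    return lambda p: np.hstack([p, 1.0]) @ coef

# Invented reference coordinates: L5, L1 and the scapula edge,
# first in the phantom's frame and then in the user's image.
phantom_refs = np.array([[0.0, 0.0], [0.0, 16.0], [8.0, 40.0]])
user_refs = np.array([[50.0, 60.0], [50.0, 92.0], [66.0, 140.0]])
to_user = fit_affine_2d(phantom_refs, user_refs)

bl23_phantom = np.array([1.5, 12.0])  # hypothetical phantom coords of BL-23
bl23_user = to_user(bl23_phantom)     # BL-23 carried into the user's image
```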
In a fourth embodiment of the method of the present invention, the steps may comprise: marking a line on the surface 150 of the user's body and determining the position of the line relative to the position of the target object. In a fifth embodiment, the steps may comprise: marking a point or region on the user's body surface 150 and determining the position of the point or region relative to the position of the target object. The line on the user's body surface 150 may correspond, for example, to a wound or abrasion; a point or area may correspond, for example, to a site of broken skin or a wound. The sites represented by such lines, points or areas are sensitive and vulnerable, especially while a wound is not fully healed, and massage or acupuncture should avoid them. Without associating these sites with the target objects and providing their information to the system, a massage or acupuncture treatment may injure the patient when these sensitive and fragile sites of the user's body 100 are touched or manipulated. Thus, lines, points or areas representing the parts to be avoided are captured or recorded using a suitable device, such as a camera, optical device or computer vision system, and their positions relative to all target objects are calculated.
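The avoidance logic could be sketched as a simple clearance test between a derived target and the marked wound regions. The centre-plus-radius representation and the margin value are assumptions for illustration, not details from the patent:

```python
import numpy as np

def is_safe(target_xy, avoid_regions, margin=10.0):
    # A target may be treated only if it clears every marked region
    # (stored as centre + radius) by at least `margin`.
    p = np.asarray(target_xy, dtype=float)
    for centre, radius in avoid_regions:
        if np.linalg.norm(p - np.asarray(centre, dtype=float)) < radius + margin:
            return False
    return True

wounds = [((80.0, 120.0), 15.0)]           # one marked abrasion
ok = is_safe((150.0, 200.0), wounds)       # well clear of the wound
blocked = is_safe((85.0, 125.0), wounds)   # inside the wound region
```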
Not only do people's bodies differ in shape and size in the X and Y dimensions; they also differ in the Z dimension. Without consideration of the Z dimension, the information provided to an automated massage or acupuncture system is incomplete. When an automated system performs massage or acupuncture without Z-dimension information about the internal organs, its accuracy suffers. For example, the force applied by a robotic arm controlled by an automated massage system may be too strong and injure the person receiving the massage. In another example, a needle inserted by a manipulator controlled by an automated acupuncture system may be inserted too deep.
The information obtained using the various embodiments of the method of the present invention can be provided to an automated system that performs massage and acupuncture, so that the automated system has spatial knowledge of the exterior and interior of the user's body, such as the organs and bones within it. The sub-epidermal depth, i.e. the distance from a location on the surface of the user's body to an interior portion, makes it possible to accurately derive the location of the target object. When the automated system has the exact locations of bones, organs, acupuncture points, meridians, tender points and blood vessels, the corresponding massage or acupuncture treatment can be performed safely and with the precision needed to achieve the desired result.
The present invention also discloses a system for locating a target location within a user's body 100 using an image of the user's body 100 and the sub-epidermal depth of an internal portion of the user's body 100. The system comprises a main module, an image module adapted to transfer an image of the body 100 of the user to the main module, and a depth data module adapted to transfer the sub-epidermal depth to the main module. The main module is configured to map the image and the sub-epidermal depth of the user's body onto the three-dimensional reference phantom and predict a target position.
Fig. 8 shows an embodiment of a system for deriving, calculating or predicting a position of a target object or a target position within a user's body 100. The image module receives an image of the user's body 100, processes the image and passes information about the image to the main module. The graphic format of the image may be bitmap, TIFF, JPEG, GIF, PNG, or raw data. The depth data module receives the sub-epidermal depth, processes the depth information, and passes the relevant information to the main module. The main module derives, calculates or predicts the target position using the image of the user's body 100, the sub-epidermal depth of the internal portion and the three-dimensional reference phantom.
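The module arrangement of Fig. 8 can be sketched as below. This is an illustrative assumption of how the three modules might be organized in software; the class and method names are hypothetical and the processing bodies are placeholders, not the patent's actual implementation:

```python
# Hypothetical sketch of the Fig. 8 data flow: an image module and a depth
# data module each pre-process their input and pass the result to a main
# module, which maps both onto the 3D reference phantom. Names are assumed.

class ImageModule:
    def process(self, raw_image):
        # e.g. decode bitmap/TIFF/JPEG/GIF/PNG/raw data and extract
        # the information the main module needs (details omitted).
        return {"image": raw_image}

class DepthDataModule:
    def process(self, depth_mm):
        # e.g. validate and normalize the measured sub-epidermal depth.
        return {"depth_mm": float(depth_mm)}

class MainModule:
    def predict_target(self, image_info, depth_info, phantom):
        # Map the image and depth onto the reference phantom and derive
        # the target position (mapping details omitted in this sketch).
        return {"phantom": phantom, **image_info, **depth_info}

main = MainModule()
result = main.predict_target(
    ImageModule().process("user_body.png"),
    DepthDataModule().process(30),
    phantom="standard_adult",
)
```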
The target location is a location of a target object such as a bone, an organ, an acupuncture point, a meridian, a tender point, or a blood vessel of the user's body 100. The three-dimensional reference phantom includes reference coordinates of at least one of bones, organs, acupuncture points, meridians, tender points, and blood vessels of a standard or typical human body.
In order to derive, calculate or predict the positions of target objects of the user's body 100, such as bones, organs, acupuncture points, meridians, tender points and blood vessels, the three-dimensional reference phantom must contain information about the desired target object. This information includes the identity of the target object and location information, including coordinates specifying the location of one object relative to another object in a standard or typical human body. Coordinates that specify the position of one object relative to another in a standard or typical human body are known as reference coordinates. The reference coordinates may be three-dimensional coordinates. As an example, if the internal part used as the initial reference is a bone and the target object is an acupoint, the three-dimensional reference phantom needs to have information on both the bone and the acupoint, including three-dimensional reference coordinates relating the bone and the acupoint. The shape and size of the target object may be derived using three-dimensional reference coordinates for multiple locations on the target object.
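One plausible way to use such reference coordinates, offered here only as a hedged illustration (the patent does not specify the scaling rule, and all names and numbers below are assumptions), is to take the phantom's offset between the reference bone and the target acupoint, scale it by the ratio of a user body measurement to the same measurement on the phantom, and add it to the bone position located in the user's body:

```python
# Hypothetical sketch of deriving a target position from reference
# coordinates: the phantom stores the target's offset relative to a known
# internal part (e.g. an acupoint relative to a bone); the offset is scaled
# to the user's proportions and added to the located bone position.

def derive_target(user_bone_xyz, ref_offset_xyz, user_scale, ref_scale):
    """Estimate the target's position in the user's body (all values in mm).

    user_bone_xyz  : located position of the reference bone in the user.
    ref_offset_xyz : target position minus bone position in the phantom.
    user_scale     : a body measurement of the user (e.g. torso length).
    ref_scale      : the same measurement on the reference phantom.
    """
    k = user_scale / ref_scale  # uniform scale factor, an assumption
    return tuple(b + k * d for b, d in zip(user_bone_xyz, ref_offset_xyz))

# A phantom offset of (10, 20, -5) mm, scaled for a user 10% larger:
target = derive_target((100.0, 200.0, -30.0), (10.0, 20.0, -5.0), 550.0, 500.0)
# → (111.0, 222.0, -35.5)
```

A uniform scale factor is the simplest choice; per-axis or per-region scaling would model body-shape differences more faithfully.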
The organs, tender points, blood vessels, or other parts of the body described herein may be those from Western medicine or traditional Chinese medicine (TCM) perspectives. The camera referred to in this specification may be a stereo camera.
Fig. 9 shows an embodiment of an automated system employing the invention. The automated system includes a computing system, a database, a robotic system, and a human-machine interface. The system modules shown in fig. 8 may reside in the computing system. In the automated system, an operator may provide commands, instructions, or information to the system through a number of different forms of input, such as gestures, voice, switches, a touch screen, or a keyboard. For example, after the first location 110 is indicated on the user's body surface 150, the operator may give a voice command to the system to initiate the capture of the first location 110 by a suitable device, such as an optical device, a camera, or a computer vision system. The speech is picked up by a microphone and interpreted by a speech recognition module in the computing system. In addition to initiating the capture of the first location 110, other information related to the first location 110, such as the name of an interior portion below the first location 110, may also be given to the system by the operator using voice as an input means. For example, if the first location 110 is above the user's lumbar vertebra L5, the operator may say "lumbar vertebra L5" and the spoken utterance will be recognized and recorded by the speech recognition module.
Alternatively, after using the fingers of one hand to designate the first location 110 on the surface 150 of the user's body 100, the operator may use the other hand to gesture the system to initiate the capture of the first location 110 by a suitable device, such as an optical device, camera, or computer vision system. The gesture may be, for example, an "OK" gesture. The gesture is interpreted by a gesture recognition module in the computing system.
When the operator indicates two positions, such as first and second positions (110 and 120), on the user's body surface 150 using both hands, the operator may initiate capture of the two positions by stepping on the foot pedal. Other channels that provide commands, instructions, or information to the system include a touch screen or keyboard, and the input is processed through corresponding I/O modules in the computing system.
In an automated system, the three-dimensional reference phantom is stored in a reference phantom portion of a database. Information about the image of the user's body 100, the location on the user's body surface 150, and the sub-epidermal depth of the internal portion, or the distance from the location on the user's body surface 150 to the internal portion, is stored in a user model portion of the database. The derived, calculated or predicted position of the target object, or target position, is stored in a mapping model portion of the database. The mapping model portion of the database also contains information derived from mapping the three-dimensional reference phantom onto the image of the user's body 100, the location on the user's body surface 150, and the sub-epidermal depth of the internal portion or the distance from the location on the user's body surface 150 to the internal portion.
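The three-part database layout described above could be organized as follows. This is a hedged sketch using SQLite; the table and column names are illustrative assumptions, not a schema disclosed by the patent:

```python
# Hypothetical SQLite sketch of the three database portions: reference
# phantom, user model, and mapping model. All identifiers are assumed.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    -- reference phantom portion: reference coordinates of a standard body
    CREATE TABLE reference_phantom (
        object_name TEXT,           -- bone, organ, acupoint, meridian, ...
        x REAL, y REAL, z REAL      -- three-dimensional reference coords (mm)
    );
    -- user model portion: image info, surface locations, sub-epidermal depths
    CREATE TABLE user_model (
        location_name TEXT,
        surface_x REAL, surface_y REAL,
        depth_mm REAL
    );
    -- mapping model portion: derived/predicted target positions
    CREATE TABLE mapping_model (
        target_name TEXT,
        x REAL, y REAL, z REAL
    );
""")
# A derived target position, as it might be stored for the robotic system:
conn.execute(
    "INSERT INTO mapping_model VALUES ('acupoint_example', 111.0, 222.0, -35.5)"
)
row = conn.execute(
    "SELECT x, y, z FROM mapping_model WHERE target_name = 'acupoint_example'"
).fetchone()
```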
The information in the mapping model portion of the database is fed to an automated system that performs the massage or acupuncture treatment. The automated system adjusts its actions, e.g., the actions of the manipulator in the robotic system, based on the information in the mapping model so that the corresponding massage or acupuncture treatment is done safely and with an accuracy that achieves the desired result.
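For the safety adjustment just described, one conceivable rule, shown purely as an assumption-laden sketch (the patent does not disclose a specific limiting algorithm, and the function name, margin, and values are hypothetical), is to clamp the manipulator's needle depth to a limit derived from the mapping model:

```python
# Hypothetical sketch: clamp a requested needle-insertion depth to a safe
# limit derived from the mapping model's target depth, so the manipulator
# never exceeds the derived geometry of the user's body. Names are assumed.

def safe_needle_depth(requested_mm, target_depth_mm, margin_mm=2.0):
    """Never insert deeper than the derived target depth minus a margin."""
    limit = max(target_depth_mm - margin_mm, 0.0)
    return min(requested_mm, limit)

# A 40 mm request against a 30 mm derived target depth is clamped to 28 mm:
depth = safe_needle_depth(requested_mm=40.0, target_depth_mm=30.0)  # → 28.0
```

An analogous clamp on contact force would serve the massage case mentioned earlier.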
While various aspects and embodiments have been disclosed herein, those of ordinary skill in the art will appreciate that several of the above-disclosed structures, materials, parameters, or processes thereof can be modified, adapted, and combined as desired into alternative structures, processes, and/or applications. All such modifications, variations, adaptations, and/or improvements of the various embodiments disclosed are within the scope of the invention. The various aspects and embodiments disclosed herein are for purposes of illustration and not limitation, with the true scope and spirit of the invention being indicated by the claims.
Claims (11)
1. A method of locating a position within a human body, comprising:
selecting a first target object within a human body;
capturing an image of a human body;
identifying a first interior portion within the human body directly below a first location on the surface of the human body;
determining a first distance from the first location to the first inner portion;
mapping the image, the first location and the first distance onto a three-dimensional reference phantom; and
deriving the position of the first target object.
2. The method of claim 1, further comprising:
selecting a second target object within the human body;
identifying a second interior portion of the human body directly below a second location on the surface of the human body;
determining a second distance from the second location to the second inner portion;
mapping the second location and the second distance onto a three-dimensional reference phantom; and
deriving the position of the second target object.
3. The method of claim 2, wherein the first and second interior portions are bones or organs of the human body.
4. The method of claim 2, wherein the first and second target objects are at least one of: bones, organs, acupuncture points, channels and collaterals, tender points and blood vessels of the human body.
5. The method of claim 2, further comprising: marking a line on the surface of the human body and determining the position of the line relative to the positions of the first and second target objects.
6. The method of claim 2, further comprising: marking an area on the surface of the human body and determining the position of the area relative to the positions of the first and second target objects.
7. The method of claim 2, wherein identifying the first and second interior portions within the human body is performed by tactile sensation using a finger.
8. The method of claim 2, wherein determining the first and second distances comprises:
pressing down at the first and second locations until the first and second inner portions are tactilely felt with a finger; and
measuring the depth of depression at the first and second locations using a depth sensor.
9. A system for locating a target location within a human body using an image of the human body and a sub-epidermal depth of an internal portion of the human body, comprising:
a main module;
an image module adapted to pass the image to the main module; and
a depth data module adapted to communicate sub-epidermal depth to the primary module;
wherein the main module is configured to map the image and the sub-epidermal depth onto a three-dimensional reference phantom and predict a target location.
10. The system of claim 9, wherein the target location is a location of a bone, organ, acupuncture point, meridian, tender point, or blood vessel of the human body.
11. The system of claim 9, wherein the three-dimensional reference phantom includes reference coordinates of at least one of a bone, an organ, an acupuncture point, a meridian, a tender point, and a blood vessel of a standard human body.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
SG10201809094TA SG10201809094TA (en) | 2018-10-16 | 2018-10-16 | Method And System Of Locating A Position Within A Human Body |
SG10201809094T | 2018-10-16 |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111053530A true CN111053530A (en) | 2020-04-24 |
CN111053530B CN111053530B (en) | 2024-01-30 |
Family
ID=70297539
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910980140.2A Active CN111053530B (en) | 2018-10-16 | 2019-10-15 | Method and system for locating position in human body |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN111053530B (en) |
SG (1) | SG10201809094TA (en) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112184705A (en) * | 2020-10-28 | 2021-01-05 | 成都智数医联科技有限公司 | Human body acupuncture point identification, positioning and application system based on computer vision technology |
USD1009283S1 (en) | 2020-04-22 | 2023-12-26 | Aescape, Inc. | Therapy end effector |
US11858144B2 (en) | 2020-05-12 | 2024-01-02 | Aescape, Inc. | Method and system for autonomous body interaction |
Patent Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR20140132525A (en) * | 2013-05-08 | 2014-11-18 | (주)약침학회 | Method for determining positions of acupuncture points and their depths of needle using 3-dimensionsal imaging system |
CN105411836A (en) * | 2016-01-13 | 2016-03-23 | 高得人 | Automatic precise acupoint positioning rehabilitation instruments |
CN106890082A (en) * | 2016-03-10 | 2017-06-27 | 程瑜 | Intelligence is attacked a vital point manipulator |
CN206117856U (en) * | 2016-09-07 | 2017-04-19 | 李莲英 | Projection system |
KR101780319B1 (en) * | 2017-03-21 | 2017-09-21 | 대전대학교 산학협력단 | Apparatus and method for mapping 3 dimensional acupoint |
CN108379056A (en) * | 2017-03-21 | 2018-08-10 | 任允卿 | Three-dimensional warp cave mapping device and method |
CN106959571A (en) * | 2017-04-07 | 2017-07-18 | 展谱光电科技(上海)有限公司 | Multispectral projection and camera device and multispectral projecting method |
Also Published As
Publication number | Publication date |
---|---|
CN111053530B (en) | 2024-01-30 |
SG10201809094TA (en) | 2020-05-28 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111868788B (en) | System and method for generating pressure point diagrams based on remotely controlled haptic interactions | |
CN111053530B (en) | Method and system for locating position in human body | |
US11589779B2 (en) | Finger segment tracker and digitizer | |
Klatzky et al. | The skin and its receptors: pathways to cortex and major cortical areas | |
CN104523420B (en) | A kind of online diagnosing and treating apparatus | |
CN109091380B (en) | Traditional Chinese medicine system and method for realizing acupoint visualization by AR technology | |
CN110990649A (en) | Cardiopulmonary resuscitation interactive training system based on gesture recognition technology | |
KR101507700B1 (en) | Computer rehabilitation method by hand motion recognition | |
CN112426356A (en) | Traditional Chinese medicine moxibustion applying equipment and 3D scanning auxiliary acupoint positioning method and system thereof | |
KR20210142042A (en) | Method and system for inputting acupoint to electronic medical record for oriental medicine hospital based patient's images | |
KR20110134737A (en) | Method and apparatus for muscle re-education training using electrical stimulation and 3d image feedback | |
US20130027368A1 (en) | Apparatus for displaying acupuncture points | |
JP6481622B2 (en) | Palpation support device, palpation support method, and palpation support program | |
US10010267B2 (en) | Massage measurement apparatus and massage measurement method | |
US11317854B1 (en) | Trigger point treatment method, system, and device for neuromusculoskeletal pain | |
CN110801392B (en) | Method and device for marking predetermined point positions on human body and electronic equipment | |
KR20200080534A (en) | System for estimating otorhinolaryngology and neurosurgery surgery based on simulator of virtual reality | |
Xuming et al. | A precise and accurate acupoint location obtained on the face using consistency matrix pointwise fusion method | |
TWI807678B (en) | Interactive massage part generating method and system | |
Udo et al. | Feedback Methods to Adjust Finger Orientation for High Accuracy Softness Evaluation with a Wearable Pressure Distribution Sensor in Cervix Examination | |
CN116912430B (en) | Device for constructing three-dimensional digital twin system of remote intervention operating room | |
TW201905836A (en) | Chinese medicine system using AR technology to realize acupuncture visualization and method thereof including an input module, a judgment module, an image processing module, and a display module | |
WO2007117695A2 (en) | Human anatomic mapping and positioning and anatomic targeting accuracy | |
Lee et al. | Design of A Hand Back Acupoint Massage Aid | |
WO2023037369A1 (en) | Devices, systems and methods for nerve treatment |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||