CN111053530B - Method and system for locating position in human body - Google Patents

Method and system for locating position in human body

Info

Publication number
CN111053530B
Authority
CN
China
Prior art keywords
location
human body
user
image
interior portion
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910980140.2A
Other languages
Chinese (zh)
Other versions
CN111053530A (en)
Inventor
张艺钟
刘恺
刘达运
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Etrait Private Investment Co ltd
Original Assignee
Etrait Private Investment Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Etrait Private Investment Co ltd filed Critical Etrait Private Investment Co ltd
Publication of CN111053530A publication Critical patent/CN111053530A/en
Application granted granted Critical
Publication of CN111053530B publication Critical patent/CN111053530B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/48 Other medical applications
    • A61B5/4887 Locating particular structures in or on the body
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/0033 Features or image-related aspects of imaging apparatus classified in A61B5/00, e.g. for MRI, optical tomography or impedance tomography apparatus; arrangements of imaging apparatus in a room
    • A61B5/004 Features or image-related aspects of imaging apparatus classified in A61B5/00, e.g. for MRI, optical tomography or impedance tomography apparatus; arrangements of imaging apparatus in a room adapted for image acquisition of a particular organ or body part
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/0059 Measuring for diagnostic purposes; Identification of persons using light, e.g. diagnosis by transillumination, diascopy, fluorescence
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/48 Other medical applications
    • A61B5/4887 Locating particular structures in or on the body
    • A61B5/489 Blood vessels
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/72 Signal processing specially adapted for physiological signals or for diagnostic purposes
    • A61B5/7235 Details of waveform analysis
    • A61B5/7246 Details of waveform analysis using correlation, e.g. template matching or determination of similarity
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61H PHYSICAL THERAPY APPARATUS, e.g. DEVICES FOR LOCATING OR STIMULATING REFLEX POINTS IN THE BODY; ARTIFICIAL RESPIRATION; MASSAGE; BATHING DEVICES FOR SPECIAL THERAPEUTIC OR HYGIENIC PURPOSES OR SPECIFIC PARTS OF THE BODY
    • A61H39/00 Devices for locating or stimulating specific reflex points of the body for physical therapy, e.g. acupuncture
    • A61H39/02 Devices for locating such points

Abstract

The application discloses a method and a system for locating a position within a human body. The method comprises the following steps: selecting a first target object within the human body; capturing an image of the human body; identifying a first interior portion of the human body directly below a first location on the surface of the human body; determining a first distance from the first location to the first interior portion; mapping the image, the first location and the first distance onto a three-dimensional reference mannequin; and deriving a position of the first target object. The system locates a target location within the human body using the image and a subsurface depth of an interior portion of the human body, and comprises: a main module; an image module adapted to communicate the image to the main module; and a depth data module adapted to communicate the subsurface depth to the main module; wherein the main module is configured to map the image and the subsurface depth onto the three-dimensional reference mannequin and predict the target location.

Description

Method and system for locating position in human body
Technical Field
The present invention relates to a method and system for locating the position of an object such as a bone, organ or acupoint within a human body.
Background
Western medicine views the human body as being composed of several interdependent systems, such as the skeletal, muscular, nervous, respiratory, cardiovascular, digestive and skin systems. Traditional Chinese Medicine (TCM) sees the human body very differently. According to TCM, the human body comprises the five zang organs, the six fu organs, the five sense organs, the meridians, and so on. The meridian system is a distribution network that allows basic body substances such as qi (energy), blood and body fluids to flow to different organs and through the body. The meridians are not blood vessels and have no identified anatomical structure. Along each meridian lie acupoints located near the skin, and each acupoint is associated with an organ of the body. TCM describes about 20 meridians and 400 acupoints. The locations of some acupoints are associated with specific anatomical landmarks, for example a given distance from the upper end of a bone. Each acupoint is typically identified by two letters and a number: the letters indicate the meridian on which the acupoint lies and the number indicates its position along that meridian. BL-22, for example, is the acupoint at position 22 along the bladder meridian.
Massage (tuina) has long been used to alleviate muscle stiffness, pain and other symptoms; pressure is applied to the muscles and nerves to achieve the desired result. Since ancient times, TCM doctors have used acupuncture to alleviate symptoms or treat a wide variety of ailments. Depending on the ailment, the relevant acupoints are stimulated with fine needles to provide the necessary relief or treatment. Stimulating the acupoints is thought to correct imbalances or blockages in the flow of qi (energy).
Massage and acupuncture have traditionally been performed manually. Because many parts of the body cannot be felt with the fingers, the person performing massage or acupuncture often has to guess or estimate their positions, so the effectiveness of manual treatment depends on the experience of that person. With the development of technologies such as computer vision and robotics, it is now possible to use manipulators and other devices to mechanize or automate massage and acupuncture. However, no two people have the same external body shape and size, and their body interiors also differ, for example in the alignment of bones and the size of organs. A mechanized or automated massage or acupuncture process is therefore accurate and effective only when the automated system has complete and accurate information about its user. More specifically, the automated system needs to know fully and accurately the locations of targets such as the user's bones, internal organs and acupoints in order to perform the massage or acupuncture properly without causing injury. Giving the system incomplete or inaccurate information not only leads it to perform these procedures in a less than ideal manner; the operation may even injure the user. Existing automated systems rely on a few initial reference points manually marked on the user's body to locate other objects of the user's body such as bones, internal organs and acupoints. These initial reference points on the body surface, however, provide only one aspect of information about the user's body. There is therefore a need for methods and systems that understand the user's body more effectively and accurately and locate other objects within it.
Disclosure of Invention
The invention discloses a method for locating a position within a human body, comprising the steps of: selecting a first target object within the human body, capturing an image of the human body, identifying a first interior portion of the human body directly below a first location on the surface of the human body, determining a first distance from the first location to the first interior portion, mapping the image, the first location and the first distance onto a three-dimensional reference mannequin, and deriving a location of the first target object. The method may further comprise the steps of: selecting a second target object within the human body, identifying a second interior portion of the human body directly below a second location on the human body surface, determining a second distance from the second location to the second interior portion, mapping the second location and the second distance onto the three-dimensional reference mannequin, and deriving a location of the second target object. The first and second interior portions may be bones or organs of the human body, and the first and second target objects are at least one of: acupoints, bones, organs, pressure pain points, blood vessels and meridians of the human body.
The method of the invention may further comprise: marking a line on the surface of the human body and determining the position of the line relative to the positions of the first and second target objects. Alternatively, the method of the present invention may further comprise: marking an area on the surface of the human body and determining the position of the area relative to the positions of the first and second target objects.
The identification of the first and second interior portions within the human body may be performed by touch using a finger. Determining the first and second distances may include pressing down at the first and second locations until the first and second interior portions are felt by touch using a finger, and measuring the depth of depression at the first and second locations using a depth sensor.
A system for locating a target location within a human body using an image of the human body and a subsurface depth of an interior portion of the human body is also disclosed. The system includes a main module, an image module adapted to communicate the body image to the main module, and a depth data module adapted to communicate the subsurface depth to the main module. The main module is configured to map the image and the subsurface depth onto a three-dimensional reference mannequin and predict the target location.
The target location is the location of a bone, organ, acupoint, meridian, pressure pain point or blood vessel of the human body. The three-dimensional reference mannequin includes reference coordinates of at least one of the acupoints, bones, organs, meridians, pressure pain points and blood vessels of a standard human body.
Drawings
Fig. 1 shows the process steps of a first embodiment of the method of the invention.
Fig. 2 shows a prone user's body and an operator's arm, with a finger pointing out a position on the user's body surface.
Fig. 3 shows a side view of the body of a prone user and an operator indicating a position on the surface of the user's body.
Fig. 4 shows a side view of the prone user's body with the operator pressing down on the user's body surface until the lumbar spine is felt.
Fig. 5 shows the depression made by the operator on the body surface of the prone user.
Fig. 6 shows two positions used as initial reference points, indicated on the prone user's body surface using two fingers.
Fig. 7 shows three positions used as initial reference points and some acupoints derived based on the three initial reference positions.
Fig. 8 illustrates an embodiment of a system module for deriving a location of a target object.
Fig. 9 shows an automated system employing the present invention.
Detailed Description
The following detailed description refers to the accompanying drawings, which form a part hereof. The detailed description and the illustrated methods and systems are for purposes of illustration and are not intended to be limiting. Other embodiments may be utilized and other changes may be made without departing from the spirit or scope of the subject matter presented herein. In the present invention, the description of a given element, or the use of a particular element number in a particular figure or in the corresponding descriptive material, may encompass the same, an equivalent, or a comparable element or element number in another figure or in the descriptive material associated with it.
The elements in Figs. 1-9 are described in Table 1 below.
Table 1: description of elements in the drawings
Element Description
100 User's body
110 First position
120 Second position
130 Third position
140 First finger
145 Second finger
150 User body surface
160 Depression on the surface of the body of a user
170 Line from the edge of the scapula
180 Line connecting the first and second positions
190 Derived acupoints
200 System module for deriving a position of a target object
300 Automated system for massaging or acupuncture
Methods and systems are disclosed for locating the position of a target object such as a bone, organ, acupoint, meridian, pressure pain point or blood vessel within a human body 100. The methods and systems disclosed herein can be used to enhance the mechanization or automation of massage and acupuncture treatments. Although the figures used to illustrate the method of the present invention show the method being performed on the back (posterior) of a human body, the method may equally be applied to other parts of the body such as the front (anterior) and the sides.
Fig. 1-5 show a first embodiment of the method of the invention. Fig. 1 shows the steps taken by an operator to find the location of a target object within the user's body. The user is a person who is subjected to a massage or acupuncture treatment. Fig. 2 shows a plan view of a user lying prone or face down on a platform. The user's body 100 has a surface 150 with an epidermal layer, which is the outer layer of skin. In fig. 2, a first location 110 on a surface 150 of a user's body is indicated by an operator's arm using a finger 140. In addition to using the finger 140, the first location 110 may also be indicated using other means, such as a pointer.
The steps taken by the operator include: selecting a first target object within the user's body 100, capturing an image of the user's body 100, identifying a first interior portion of the user's body 100 directly below the first location 110 on the surface 150 of the user's body, and determining a first distance from the first location 110 on the surface 150 of the user's body to the first interior portion. The image, the first location 110 and the first distance are then mapped onto a three-dimensional reference mannequin and the location of the first target object is derived. The position of the first target object is also referred to as the first target position.
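As an informal illustration of this sequence, the following Python sketch shows how a single surface reference and its subsurface depth could be combined with a reference model to predict a target position. All names, data types and the translation-only mapping are assumptions made for illustration; they are not the claimed implementation.

```python
# Minimal sketch of the first-embodiment pipeline (illustrative only).
# All names, data types and the translation-only mapping are assumptions,
# not the patented implementation.
import numpy as np

def locate_first_target(image: np.ndarray,
                        first_location: np.ndarray,   # (x, y, z) of location 110 on the surface
                        first_distance: float,        # subsurface depth to the first interior portion
                        reference_model: dict,        # e.g. {"L5_vertebra": xyz, "BL-23": xyz, ...}
                        target_name: str) -> np.ndarray:
    """Map one surface reference and its subsurface depth onto a reference
    model and return a predicted 3-D position for the named target object."""
    # The captured image is the frame in which the surface coordinates are
    # measured; it is not processed further in this sketch.
    # 1. The interior portion lies 'first_distance' below the surface location
    #    (the inward direction is taken as -Z here for simplicity).
    interior_point = first_location + np.array([0.0, 0.0, -first_distance])

    # 2. With a single reference point only a translation can be estimated;
    #    additional reference locations would also allow scale and rotation.
    offset = interior_point - reference_model["L5_vertebra"]

    # 3. Shift the target's reference coordinates by the estimated offset.
    return reference_model[target_name] + offset
```

With only one reference point the mapping reduces to a translation, which is why the later embodiments add a second and a third reference location.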
An image or picture of the user's body is taken, captured or recorded using a suitable device such as an optical device, a camera, an image sensor or a computer vision system. The image or picture of the user's body may be a whole-body or half-body image.
A first interior portion within the user's body 100 is directly below the first location 110 on the surface 150 of the user's body. The first interior portion may be an organ, bone or other constituent of the user's body. Preferably, the first interior portion selected for use is a portion that is felt or detected using a finger. The first internal portion may also be selected based on how it relates to deriving the position of the first target object.
Once the first interior portion has been selected for use, the operator uses a finger to feel or detect it within the user's body 100. A detectable location on the first interior portion, such as the end of a bone or the edge of an organ, is identified by touch using finger 140. Thereafter, the operator indicates a first location 110 on the surface 150 of the user's body, chosen such that pressing the surface 150 down vertically with a finger at the first location 110 allows the operator to feel the detectable location on the first interior portion below the surface 150. In other words, the first location 110 is a location on the surface 150 while the surface is in its natural, undepressed state. Pressing down on the first location 110 in a direction perpendicular to the surface 150, toward the inside of the user's body, enables the operator to feel the detectable location on the first interior portion below the surface 150. Here, "vertical" and "perpendicular" also include "approximately vertical" and "approximately perpendicular".
Fig. 3 shows a side view of the prone user's body and an operator using a finger 140 to designate a first location 110 on the user's body surface 150. The designated first location 110 on the surface 150 of the user's body is then captured or recorded using a suitable device, such as an optical device, a camera or a computer vision system, and the coordinates of the first location 110 are calculated.
Once the first location 110 on the surface 150 of the user's body has been obtained, a first distance from the first location 110 to the first interior portion is determined and recorded. This first distance may be determined by pressing down at the first location 110 until the first interior portion is felt by touch using the finger 140, and then measuring the depth of the depression 160 at the first location 110 using a depth sensor, which may include an optical device, a camera or a computer vision system.
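A minimal sketch of one way the depression depth could be measured with a depth camera is shown below. It assumes two registered depth frames, one before and one during pressing, and a known pixel for the first location 110; this is an assumed realization, not the specific sensor processing of the invention.

```python
import numpy as np

def depression_depth(depth_before: np.ndarray,  # depth frame of the resting surface
                     depth_during: np.ndarray,  # depth frame while location 110 is pressed
                     pixel: tuple,              # (row, col) of location 110 in the frames
                     window: int = 5) -> float:
    """Estimate how far the surface was pushed in at one pixel.

    A small window around the pixel is averaged to reduce sensor noise.
    Assumes the camera looks roughly perpendicular to the body surface,
    so the change in measured range approximates the depression depth.
    """
    r, c = pixel
    region = np.s_[max(r - window, 0):r + window + 1,
                   max(c - window, 0):c + window + 1]
    before = np.nanmean(depth_before[region])
    during = np.nanmean(depth_during[region])
    return float(during - before)   # the pressed surface is farther from the camera
```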
Fig. 4 shows a side view of the prone user's body and the operator pressing down on the first location 110 on the user's body surface 150 until the first interior portion (in this example, the lumbar spine) is detected or felt. Other first locations, corresponding to other first interior portions, may be used. Fig. 5 shows the depression 160 made by the operator on the body surface of the prone user.
The image or picture of the user's body, the first location 110 and the first distance are the initial references used in the subsequent steps. Having obtained an image of the user's body 100, the location 110 on the user's body surface 150 and the distance from the location 110 to the interior portion, the location of a target object within the user's body 100 may be derived, calculated or predicted. The target object may be a bone, organ, acupoint, meridian, pressure pain point or blood vessel of the user's body 100 that is not itself used as an initial reference. The derivation, calculation or prediction of the position of the target object is achieved by mapping the image, the location 110 and the first distance onto a three-dimensional reference mannequin. In the mapping process, scaling and transformations, including rigid and non-rigid transformations, are performed on the image of the user's body 100, the first location 110 on the user's body surface 150 and the first distance from the location 110 to the first interior portion. Alternatively, the three-dimensional reference mannequin may be scaled and transformed.
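For the scaling and rigid part of such a mapping, one standard choice is a least-squares similarity transform estimated from corresponding reference points on the user and on the reference mannequin (an Umeyama-style fit). The sketch below is an assumption about how that step could be implemented; the patent does not prescribe a particular algorithm, and any non-rigid refinement would be applied on top of this.

```python
import numpy as np

def estimate_similarity_transform(src: np.ndarray, dst: np.ndarray):
    """Least-squares scale s, rotation R and translation t such that
    dst_i ≈ s * R @ src_i + t, for corresponding (N, 3) point sets.

    At least three non-collinear correspondences are needed for a full 3-D fit.
    """
    n = src.shape[0]
    mu_src, mu_dst = src.mean(axis=0), dst.mean(axis=0)
    src0, dst0 = src - mu_src, dst - mu_dst

    cov = dst0.T @ src0 / n                      # 3x3 cross-covariance
    U, D, Vt = np.linalg.svd(cov)
    S = np.eye(3)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:
        S[2, 2] = -1.0                           # guard against a reflection
    R = U @ S @ Vt
    var_src = (src0 ** 2).sum() / n
    scale = (D * np.diag(S)).sum() / var_src
    t = mu_dst - scale * R @ mu_src
    return scale, R, t

def apply_transform(points: np.ndarray, scale: float, R: np.ndarray, t: np.ndarray) -> np.ndarray:
    """Apply the estimated similarity transform to (N, 3) points."""
    return scale * points @ R.T + t
```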
Using a single location 110 on the user's body surface 150 as the initial reference enables only a limited number of target locations in the user's body 100 to be derived, calculated or predicted. A second location 120 may therefore be added as another reference in a second embodiment of the method of the present invention. The steps in the second embodiment further comprise: selecting a second target object within the user's body 100, identifying a second interior portion of the user's body 100 directly below the second location 120 on the user's body surface 150, determining a second distance from the second location 120 to the second interior portion, mapping the second location 120 and the second distance onto the three-dimensional reference mannequin, and deriving a location of the second target object.
Fig. 6 shows a second embodiment of the method of the invention applied to the user's body 100, wherein two locations (110 and 120) on the user's body surface 150 are used as reference points and are indicated simultaneously using two fingers (140, 145). The first location 110 and the second location 120 may be captured and recorded one after the other, or simultaneously, using a suitable device such as an optical device, a camera or a computer vision system.
The coordinates of the first and second locations (110, 120) are calculated. Having obtained the first and second locations (110, 120) on the user body surface 150, the respective distances from the first and second locations (110, 120) to first and second detectable locations on the first and second interior portions are determined and recorded. These distances are determined in the same manner as in the first embodiment described above.
The first and second interior portions may be bones or organs of the user's body 100. The first and second target objects are at least one of: bones, organs, acupoints, meridians, pressure pain points and blood vessels of the user's body 100. Bones and organs are used as the first or second interior portions serving as initial references because they can easily be detected, felt and identified with the fingers.
In addition to using a finger to tactilely detect or identify the first interior portion or the second interior portion, an operator may also detect and identify them using an appropriate device or instrument. In addition to using the depression (dent) depth to determine the first distance or the second distance, the operator may also use an appropriate device or instrument to determine the aforementioned distances.
In addition to the coordinates of the first and second locations (110 and 120) on the user's body surface 150, the coordinates of the first and second detectable locations are also calculated based on the respective distances from the first and second locations (110 and 120) to the detectable locations on the first and second interior portions below the surface.
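A minimal sketch of this coordinate computation is given below, assuming the surface coordinates and an inward-pointing surface normal at each location are available from the captured image (the normal estimation itself is not shown and is an assumption of the sketch):

```python
import numpy as np

def interior_coordinates(surface_point: np.ndarray,   # (x, y, z) of location 110 or 120
                         inward_normal: np.ndarray,   # direction into the body at that point
                         subsurface_depth: float) -> np.ndarray:
    """Coordinates of the detectable location on the interior portion,
    obtained by moving the measured depth along the inward surface normal."""
    n = np.asarray(inward_normal, dtype=float)
    n = n / np.linalg.norm(n)
    return np.asarray(surface_point, dtype=float) + subsurface_depth * n
```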
Using two locations (110 and 120) on the user's body surface 150 and two distances from each of the two locations to two detectable locations on two interior portions, the locations of more target objects can be accurately derived, calculated, or predicted. The accuracy increases with the number of initial reference positions.
Fig. 7 shows a third embodiment of the method of the invention applied to the user's body 100, wherein three locations (110, 120 and 130) on the user's body surface 150 are used as initial reference points. The first location 110 is above the L5 lumbar vertebra and the second location 120 is above the L1 lumbar vertebra. The third location 130 is at the edge of the scapula or on the scapula. Line 180 connects the first location 110 and the second location 120, and line 170 extends from the third location 130 at the edge of the scapula. After determining the respective distances from the three locations to their respective underlying interior portions and performing the necessary mapping onto the three-dimensional reference mannequin, some acupoints 190 along the bladder meridian (BL) are derived. The names of the derived acupoints 190 are shown in the table below, and an illustrative sketch of this derivation follows the table.
Derived acupoint Name
BL-22 Sanjiaoshu
BL-23 Shenshu
BL-24 Qihaishu
BL-25 Dachangshu
BL-26 Guanyuanshu
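Continuing the hypothetical mapping sketch given earlier, the three reference landmarks can be used to estimate the transform from the mannequin's reference coordinates to the user's frame, and the mannequin's stored acupoint coordinates carried across to predict the acupoints 190. The coordinate values below are placeholders rather than real anatomical data, and estimate_similarity_transform refers to the earlier sketch.

```python
import numpy as np
# estimate_similarity_transform is the function from the earlier mapping sketch.

# Mannequin-side reference coordinates of the three landmarks (placeholder values).
ref_landmarks = np.array([[0.0,  0.0, 0.0],    # lumbar vertebra L5
                          [0.0, 12.0, 1.0],    # lumbar vertebra L1
                          [8.0, 30.0, 2.0]])   # edge of the scapula

# The same landmarks measured on the user (surface location plus subsurface depth); placeholders.
user_landmarks = np.array([[1.0,  2.0, 0.5],
                           [1.2, 14.5, 1.6],
                           [9.5, 33.0, 2.8]])

s, R, t = estimate_similarity_transform(ref_landmarks, user_landmarks)

# Mannequin reference coordinates of some bladder-meridian acupoints (placeholder values).
ref_acupoints = {"BL-22": np.array([2.0, 13.0, 0.8]),
                 "BL-23": np.array([2.0, 11.0, 0.8]),
                 "BL-25": np.array([2.0,  7.0, 0.8])}

# Predicted acupoint positions 190 in the user's frame.
predicted = {name: s * R @ xyz + t for name, xyz in ref_acupoints.items()}
```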
In a fourth embodiment of the method of the present invention, the steps may include marking a line on the surface 150 of the user's body and determining the position of the line relative to the position of the target object. In a fifth embodiment, the steps may include marking a point or region on the user's body surface 150 and determining the position of the point or region relative to the position of the target object. A line on the user's body surface 150 may correspond, for example, to a wound or scratch, and a point or area may correspond, for example, to a location of broken skin or a wound. The sites represented by such lines, points or areas are sensitive and fragile, especially when the wound is not completely healed, and massage or acupuncture treatment should avoid them. If these sites are not associated with the target objects and their information is not provided to the system, the massage or acupuncture process may injure the user when these sensitive and delicate sites of the user's body 100 are contacted or manipulated. The line, point or area representing a location to be avoided is therefore captured or recorded using a suitable device such as a camera, optical device or computer vision system, and its position relative to all target objects is calculated.
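As a purely illustrative way to associate such marked regions with the derived target locations, the following assumed helper flags any target that lies inside, or within a safety margin of, a marked circular area; a real system might use polygons or free-form masks instead.

```python
import numpy as np

def targets_to_skip(targets: dict,       # target name -> (x, y) on the body surface
                    avoid_areas: list,   # list of (centre_xy, radius) marked areas
                    margin: float = 20.0) -> set:
    """Names of target locations lying inside or too close to a marked area.

    Units follow the surface coordinates (e.g. millimetres); 'margin' adds an
    extra keep-out band around each marked wound or scratch.
    """
    skip = set()
    for name, xy in targets.items():
        for centre, radius in avoid_areas:
            if np.linalg.norm(np.asarray(xy) - np.asarray(centre)) <= radius + margin:
                skip.add(name)
                break
    return skip
```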
One person's body differs from another's not only in shape and size in the X and Y dimensions but also in the Z dimension. Without the Z dimension, the information provided to an automated massage or acupuncture system is incomplete, and when the system performs massage or acupuncture without Z-dimension information about internal structures, its accuracy suffers. For example, the force exerted by a manipulator controlled by an automated massage system may be too strong and injure the person receiving the massage, or the needle inserted by a manipulator controlled by an automated acupuncture system may go too deep.
The information obtained by using the various embodiments of the method of the present invention can be provided to an automated system for massage and acupuncture, giving the automated system spatial knowledge of the exterior and interior of the user's body, such as the organs and bones within it. The subsurface depth, i.e. the distance from a location on the user's body surface to an interior portion, enables the location of the target object to be derived accurately. When the automated system has accurate locations of bones, organs, acupoints, meridians, pressure pain points and blood vessels, the corresponding massage or acupuncture process can be performed safely and with the accuracy needed to achieve the desired result.
The present invention also discloses a system for locating a target location within a user's body 100 using an image of the user's body 100 and the subsurface depth of the interior portion of the user's body 100. The system comprises a main module, an image module adapted to transfer an image of the user's body 100 to the main module, and a depth data module adapted to transfer the subsurface depth to the main module. The main module is configured to map an image of the user's body and subsurface depth onto a three-dimensional reference mannequin and predict a target location.
Fig. 8 illustrates an embodiment of a system for deriving, calculating or predicting the position of a target object, i.e. a target location, within the user's body 100. The image module receives an image of the user's body 100, processes the image and passes information about the image to the main module. The graphical format of the image may be bitmap, TIFF, JPEG, GIF, PNG or raw unprocessed data. The depth data module receives the subsurface depth, processes the depth information and communicates the relevant information to the main module. The main module derives, calculates or predicts the target location using the image of the user's body 100, the subsurface depth of the interior portion and the three-dimensional reference mannequin.
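A skeletal arrangement of the three modules might look as follows. The class and method names are invented for illustration, the image and depth processing are reduced to pass-through stubs, and the prediction step is a placeholder standing in for the mapping described above.

```python
from dataclasses import dataclass, field
import numpy as np

@dataclass
class ImageModule:
    """Receives the body image, preprocesses it, and hands it to the main module."""
    def process(self, raw_image: np.ndarray) -> np.ndarray:
        return raw_image.astype(np.float32)        # placeholder preprocessing

@dataclass
class DepthDataModule:
    """Receives subsurface depths keyed by the surface locations they belong to."""
    def process(self, depths: dict) -> dict:
        return {loc: float(d) for loc, d in depths.items()}

@dataclass
class MainModule:
    reference_model: dict                          # reference coordinates of the mannequin
    user_model: dict = field(default_factory=dict)

    def ingest(self, image: np.ndarray, depths: dict) -> None:
        self.user_model["image"] = image
        self.user_model["depths"] = depths

    def predict_target(self, target_name: str) -> np.ndarray:
        # The mapping onto the reference mannequin would happen here
        # (see the similarity-transform sketch earlier in this description).
        return self.reference_model[target_name]   # placeholder prediction
```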
The target location is the location of a target object such as a bone, organ, acupoint, meridian, pressure pain point or blood vessel of the user's body 100. The three-dimensional reference mannequin includes reference coordinates of at least one of the bones, organs, acupoints, meridians, pressure pain points and blood vessels of a standard or typical human body.
In order to derive, calculate or predict the position of a target object of the user's body 100, such as a bone, organ, acupoint, meridian, pressure pain point or blood vessel, the three-dimensional reference mannequin must contain information about the desired target object. The information includes the identity of the target object and location information, including coordinates that specify the location of one object relative to another object in a standard or typical human body. The coordinates specifying the position of one object relative to another in a standard or typical human body are known as reference coordinates, and may be three-dimensional coordinates. As an example, if the interior portion used as an initial reference is a bone and the target object is an acupoint, the three-dimensional reference mannequin needs to contain information about both the bone and the acupoint, including the three-dimensional reference coordinates of the bone relative to the acupoint. The shape and size of the target object may be derived using the three-dimensional reference coordinates of a plurality of locations on the target object.
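As an assumed data-layout example, a reference-mannequin entry could store several reference coordinates per object, from which a rough extent (shape and size) can be derived; the keys and coordinate values below are hypothetical.

```python
import numpy as np

# Illustrative reference-mannequin entries: several 3-D reference coordinates per object.
reference_mannequin = {
    "L5_vertebra": np.array([[0.0, 0.0, 0.0], [1.5, 0.2, 0.1], [-1.5, 0.2, 0.1]]),
    "BL-23":       np.array([[2.0, 11.0, 0.8]]),
}

def object_extent(points: np.ndarray) -> np.ndarray:
    """Approximate size of an object as the edge lengths of its axis-aligned bounding box."""
    return points.max(axis=0) - points.min(axis=0)

print(object_extent(reference_mannequin["L5_vertebra"]))  # rough shape/size of the vertebra entry
```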
The organs, pressure points, blood vessels, or other parts of the body described herein may be those organs, pressure points, blood vessels, or other parts from a western medical or TCM perspective. The camera mentioned in the present specification may be a stereoscopic camera.
Fig. 9 shows an embodiment of an automated system employing the present invention. The automated system includes a computing system, a database, a robotic system and a human-machine interface. The system modules shown in Fig. 8 may reside in the computing system. In the automated system, an operator may provide commands, instructions or information to the system through a variety of input forms, such as gestures, voice, switches, a touch screen or a keyboard. For example, after designating the first location 110 on the user's body surface 150, the operator may give a voice command to the system to initiate capture of the first location 110 by a suitable device, such as an optical device, camera or computer vision system. The voice is picked up by a microphone and interpreted by a speech recognition module in the computing system. In addition to initiating capture of the first location 110, other information related to the first location 110, such as the name of the interior portion below it, may also be given to the system by the operator using voice input. For example, if the first location 110 is above the user's lumbar vertebra L5, the operator may say "lumbar vertebra L5", and the spoken utterance will be recognized and recorded by the speech recognition module.
Alternatively, after designating the first location 110 on the surface 150 of the user's body 100 using the fingers of one hand, the operator may gesture the system using the other hand to initiate capturing the first location 110 by a suitable device such as an optical device, camera, or computer vision system. The gesture may be, for example, an "OK" gesture. The gesture is interpreted by a gesture recognition module in the computing system.
When the operator uses two hands to indicate two positions on the user's body surface 150, such as the first and second positions (110 and 120), the operator may initiate capture of the two positions by stepping on a foot pedal. Other channels for providing commands, instructions or information to the system include a touch screen or keyboard, and such input is processed through the corresponding I/O modules in the computing system.
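The different input channels could all feed a single command dispatcher, as in the assumed sketch below; the command vocabulary, the handler names and the printed actions are illustrative, and real speech or gesture recognition is not shown.

```python
class OperatorInterface:
    """Hypothetical facade over the computing system's I/O modules."""
    def capture_indicated_locations(self):
        print("capturing the indicated location(s) with the vision system")
    def label_interior_portion(self, label: str):
        print(f"recording interior-portion label: {label}")
    def log_unrecognized(self, command: str):
        print(f"unrecognized input: {command}")

def handle_command(command: str, system: OperatorInterface) -> None:
    """Route a recognized voice phrase, gesture token or pedal event."""
    if command in ("capture", "ok_gesture", "pedal_down"):
        system.capture_indicated_locations()
    elif command.lower().startswith("lumbar vertebra"):
        system.label_interior_portion(command)     # e.g. "lumbar vertebra L5"
    else:
        system.log_unrecognized(command)

# Example: the operator says "lumbar vertebra L5" after indicating location 110.
handle_command("lumbar vertebra L5", OperatorInterface())
```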
In the automated system, the three-dimensional reference mannequin is stored in a reference model portion of the database. Information about the image of the user's body 100, the locations on the user's body surface 150 and the subsurface depths of the interior portions (the distances from the locations on the user's body surface 150 to the interior portions) is stored in a user model portion of the database. The derived, calculated or predicted positions of the target objects, i.e. the target locations, are stored in a mapping model portion of the database. The mapping model portion also contains information derived from mapping the image of the user's body 100, the locations on the user's body surface 150 and the subsurface depths of the interior portions onto the three-dimensional reference mannequin.
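One possible, assumed realization of these three database portions is a small relational schema; SQLite is used here purely to illustrate how the reference, user and mapping data could be kept separate, and the table and column names are inventions for the sketch.

```python
import sqlite3

conn = sqlite3.connect(":memory:")   # illustrative in-memory database
conn.executescript("""
-- reference model portion: coordinates stored with the 3-D reference mannequin
CREATE TABLE reference_model (
    object_name TEXT PRIMARY KEY,    -- e.g. 'L5_vertebra', 'BL-23'
    x REAL, y REAL, z REAL
);
-- user model portion: image reference, surface locations and subsurface depths
CREATE TABLE user_model (
    location_id INTEGER PRIMARY KEY,
    image_path TEXT,
    surface_x REAL, surface_y REAL, surface_z REAL,
    subsurface_depth REAL,
    interior_portion TEXT            -- e.g. 'lumbar vertebra L5'
);
-- mapping model portion: derived / predicted target locations for this user
CREATE TABLE mapping_model (
    target_name TEXT PRIMARY KEY,
    x REAL, y REAL, z REAL
);
""")
conn.commit()
```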
The information in the mapping model portion of the database is fed to the automated system that performs massage or acupuncture treatments. The automated system adjusts its movements, for example the movements of a manipulator in the robotic system, based on the information in the mapping model, so that the corresponding massage or acupuncture treatment is performed safely and with the accuracy needed to achieve the desired result.
While a number of different aspects and embodiments have been disclosed herein, those of skill in the art will appreciate that several of the above-disclosed structures, materials, parameters, or processes thereof may be modified, adapted, and combined as desired in alternative structures, processes, and/or applications. All such modifications, variations, adaptations and/or improvements of the various embodiments disclosed are within the scope of the invention. The various aspects and embodiments disclosed herein are for purposes of illustration and not limitation, with the true scope and spirit of the invention being indicated by the following claims.

Claims (9)

1. A method of locating a location within a human body, comprising:
selecting a target object in the human body;
capturing an image of a human body;
identifying a respective interior portion of the human body directly below the at least one location on the surface of the human body;
determining a distance from the at least one location to the respective interior portion;
mapping the image, the location and the distance onto a three-dimensional reference mannequin; and
deriving a position of the target object;
wherein determining the distance comprises:
pressing down on the at least one location using a finger until the respective interior portion is felt by touch; and
measuring the depth of depression at the at least one location using a depth sensor.
2. The method of claim 1, wherein the respective interior portion is a bone or organ of a human body.
3. The method of claim 1, wherein the target object is at least one of: bones, organs, acupoints, meridians, pressure pain points and blood vessels of the human body.
4. The method of claim 1, further comprising: marking a line on a surface of a human body, and determining a position of the line relative to a position of the target object.
5. The method of claim 1, further comprising: marking an area on the surface of the human body and determining the position of the area relative to the position of the target object.
6. The method of claim 1, wherein identifying the respective interior portion within the human body is performed by touch using a finger.
7. A system for locating a target location within a human body using a human body image and a subsurface depth of a human body interior portion, comprising:
a main module;
an image module adapted to communicate the image to the main module; and
a depth data module adapted to communicate the subsurface depth to the main module;
wherein the main module is configured to map the image and the subsurface depth onto a three-dimensional reference mannequin and predict a target location;
wherein the subsurface depth is determined by pressing down on at least one location on the surface of the human body using a finger until the corresponding interior portion is felt by touch, and measuring the depth of depression at the at least one location using a depth sensor.
8. The system of claim 7, wherein the target location is a location of a bone, organ, acupoint, meridian, pressure pain point, or blood vessel of the human body.
9. The system of claim 7, wherein the three-dimensional reference mannequin includes reference coordinates of at least one of bones, organs, acupoints, meridians, pressure pain points, and blood vessels of a standard human body.
CN201910980140.2A 2018-10-16 2019-10-15 Method and system for locating position in human body Active CN111053530B (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
SG10201809094TA SG10201809094TA (en) 2018-10-16 2018-10-16 Method And System Of Locating A Position Within A Human Body
SG10201809094T 2018-10-16

Publications (2)

Publication Number Publication Date
CN111053530A (en) 2020-04-24
CN111053530B (en) 2024-01-30

Family

ID=70297539

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910980140.2A Active CN111053530B (en) 2018-10-16 2019-10-15 Method and system for locating position in human body

Country Status (2)

Country Link
CN (1) CN111053530B (en)
SG (1) SG10201809094TA (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
USD1009283S1 (en) 2020-04-22 2023-12-26 Aescape, Inc. Therapy end effector
US11858144B2 (en) 2020-05-12 2024-01-02 Aescape, Inc. Method and system for autonomous body interaction
CN112184705B (en) * 2020-10-28 2022-07-05 成都智数医联科技有限公司 Human body acupuncture point identification, positioning and application system based on computer vision technology

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20140132525A (en) * 2013-05-08 2014-11-18 (주)약침학회 Method for determining positions of acupuncture points and their depths of needle using 3-dimensionsal imaging system
CN105411836A (en) * 2016-01-13 2016-03-23 高得人 Automatic precise acupoint positioning rehabilitation instruments
CN206117856U (en) * 2016-09-07 2017-04-19 李莲英 Projection system
CN106890082A (en) * 2016-03-10 2017-06-27 程瑜 Intelligence is attacked a vital point manipulator
CN106959571A (en) * 2017-04-07 2017-07-18 展谱光电科技(上海)有限公司 Multispectral projection and camera device and multispectral projecting method
KR101780319B1 (en) * 2017-03-21 2017-09-21 대전대학교 산학협력단 Apparatus and method for mapping 3 dimensional acupoint

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20140132525A (en) * 2013-05-08 2014-11-18 (주)약침학회 Method for determining positions of acupuncture points and their depths of needle using 3-dimensionsal imaging system
CN105411836A (en) * 2016-01-13 2016-03-23 高得人 Automatic precise acupoint positioning rehabilitation instruments
CN106890082A (en) * 2016-03-10 2017-06-27 程瑜 Intelligence is attacked a vital point manipulator
CN206117856U (en) * 2016-09-07 2017-04-19 李莲英 Projection system
KR101780319B1 (en) * 2017-03-21 2017-09-21 대전대학교 산학협력단 Apparatus and method for mapping 3 dimensional acupoint
CN108379056A (en) * 2017-03-21 2018-08-10 任允卿 Three-dimensional warp cave mapping device and method
CN106959571A (en) * 2017-04-07 2017-07-18 展谱光电科技(上海)有限公司 Multispectral projection and camera device and multispectral projecting method

Also Published As

Publication number Publication date
CN111053530A (en) 2020-04-24
SG10201809094TA (en) 2020-05-28

Similar Documents

Publication Publication Date Title
CN111053530B (en) Method and system for locating position in human body
CN111868788B (en) System and method for generating pressure point diagrams based on remotely controlled haptic interactions
Klatzky et al. The skin and its receptors 148 pathways to cortex and major cortical areas
JP6045139B2 (en) VIDEO GENERATION DEVICE, VIDEO GENERATION METHOD, AND PROGRAM
Gwilliam et al. Human vs. robotic tactile sensing: Detecting lumps in soft tissue
US20150056591A1 (en) Device for training users of an ultrasound imaging device
CN109091380B (en) Traditional Chinese medicine system and method for realizing acupoint visualization by AR technology
CN110990649A (en) Cardiopulmonary resuscitation interactive training system based on gesture recognition technology
KR101936082B1 (en) Vertual reality-based finger rehabilitation system using realsense camera and method thereof
KR20210142042A (en) Method and system for inputting acupoint to electronic medical record for oriental medicine hospital based patient's images
US20200348756A1 (en) Brain connectivity-based visual perception training device, method and program
JP6481622B2 (en) Palpation support device, palpation support method, and palpation support program
CN108733287A (en) Detection method, device, equipment and the storage medium of physical examination operation
US10010267B2 (en) Massage measurement apparatus and massage measurement method
CN110801392B (en) Method and device for marking predetermined point positions on human body and electronic equipment
KR101897512B1 (en) Face Fit Eyebrow tattoo system using 3D Face Recognition Scanner
KR20200080534A (en) System for estimating otorhinolaryngology and neurosurgery surgery based on simulator of virtual reality
KR102462821B1 (en) Oriental Medicine Abdominal Examination Apparatus
Xuming et al. A precise and accurate acupoint location obtained on the face using consistency matrix pointwise fusion method
TWI629662B (en) Method for realizing acupoint visualization by AR technology
Udo et al. Feedback Methods to Adjust Finger Orientation for High Accuracy Softness Evaluation with a Wearable Pressure Distribution Sensor in Cervix Examination
TWI807678B (en) Interactive massage part generating method and system
TW201905836A (en) Chinese medicine system using AR technology to realize acupuncture visualization and method thereof including an input module, a judgment module, an image processing module, and a display module
Moreira Dynamic analysis of upper limbs movements after breast cancer surgery
CN117426977A (en) Auxiliary fine moxibustion treatment system and method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant