CN116671953A - Posture guiding method, apparatus, medium and program product in X-ray imaging - Google Patents

Posture guiding method, apparatus, medium and program product in X-ray imaging

Info

Publication number
CN116671953A
CN116671953A (application number CN202210167756.XA)
Authority
CN
China
Prior art keywords
three-dimensional model
display medium
three-dimensional
pose
three-dimensional image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210167756.XA
Other languages
Chinese (zh)
Inventor
彭希帅
曹景泰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Siemens Shanghai Medical Equipment Ltd
Original Assignee
Siemens Shanghai Medical Equipment Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Siemens Shanghai Medical Equipment Ltd filed Critical Siemens Shanghai Medical Equipment Ltd
Priority to CN202210167756.XA priority Critical patent/CN116671953A/en
Publication of CN116671953A publication Critical patent/CN116671953A/en
Pending legal-status Critical Current


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 - Image enhancement or restoration
    • G06T5/50 - Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction
    • A - HUMAN NECESSITIES
    • A61 - MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B - DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B6/00 - Apparatus for radiation diagnosis, e.g. combined with radiation therapy equipment
    • A61B6/04 - Positioning of patients; Tiltable beds or the like
    • A61B6/0492 - Positioning of patients; Tiltable beds or the like using markers or indicia for aiding patient positioning
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/20 - Special algorithmic details
    • G06T2207/20212 - Image combination
    • G06T2207/20221 - Image fusion; Image merging

Landscapes

  • Life Sciences & Earth Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Physics & Mathematics (AREA)
  • Medical Informatics (AREA)
  • Optics & Photonics (AREA)
  • Heart & Thoracic Surgery (AREA)
  • High Energy & Nuclear Physics (AREA)
  • Theoretical Computer Science (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • General Physics & Mathematics (AREA)
  • Pathology (AREA)
  • Radiology & Medical Imaging (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Molecular Biology (AREA)
  • Surgery (AREA)
  • Animal Behavior & Ethology (AREA)
  • General Health & Medical Sciences (AREA)
  • Public Health (AREA)
  • Veterinary Medicine (AREA)
  • Analysing Materials By The Use Of Radiation (AREA)

Abstract

Embodiments of the invention disclose a posture guiding method, apparatus, medium and program product in X-ray imaging. The method comprises: determining a target pose of an object in X-ray imaging; generating a three-dimensional model characterizing the target pose; and presenting the three-dimensional model in a display medium. Because a three-dimensional model representing the target pose is presented in the display medium, the person to be examined no longer needs to be guided manually into the target pose, which significantly reduces guidance time and improves the examinee's experience. Moreover, the three-dimensional model may be generated by photographing the object, or an existing three-dimensional model may be acquired from a model library, so the method has wide applicability. In addition, by presenting a prompt action, the object can reach the target pose quickly, which improves guidance efficiency.

Description

Posture guiding method, apparatus, medium and program product in X-ray imaging
Technical Field
The present invention relates to the field of medical imaging technology, and in particular, to a method, apparatus, medium and program product for guiding a pose in X-ray imaging.
Background
X-rays are electromagnetic radiation with wavelengths between those of ultraviolet light and gamma rays. X-rays are penetrating, and their penetration differs for substances of different densities. In medicine, X-rays are commonly projected through human organs and bones to form medical images.
An X-ray imaging system typically includes an X-ray generation assembly, a chest wall stand (BWS) assembly, a table assembly, a cassette assembly containing a flat panel detector, a remotely located control host, and so on. Using the high voltage provided by a high-voltage generator, the X-ray generation assembly emits X-rays that pass through the imaging target and form medical image information of the imaging target on the flat panel detector. The flat panel detector transmits the medical image information to the control host. The imaging subject may stand near the wall stand assembly or lie on the table assembly to receive X-ray imaging of the skull, chest, abdomen, joints, and so on.
Before X-ray imaging, a guide (such as a radiologic technologist) needs to manually guide the person to be examined, as the imaging subject, into different postures for the examination. However, manual guidance takes a significant amount of time. In addition, the technologist has to move back and forth between the exposure room and the control room.
Disclosure of Invention
Embodiments of the invention provide a posture guiding method, apparatus, medium and program product in X-ray imaging.
The technical solution of the embodiments of the invention includes the following:
a method of pose guidance in X-ray imaging, comprising:
determining a target pose of an object in X-ray imaging;
generating a three-dimensional model adapted to characterize the target pose;
presenting the three-dimensional model in a display medium.
Therefore, embodiments of the invention no longer guide the imaging object into a pose manually; instead, the imaging object is guided automatically by presenting a three-dimensional model adapted to characterize its target pose, which saves guidance time. Furthermore, the guide no longer has to move back and forth between the exposure room and the control room.
In an exemplary embodiment, further comprising:
acquiring a three-dimensional image, generated by photographing the object with a camera assembly, that is adapted to characterize the pose of the object to be evaluated;
presenting the three-dimensional image in the display medium.
It can be seen that embodiments of the invention further present the three-dimensional image of the object in the display medium, and can thus display the pose of the object to be evaluated.
In an exemplary embodiment, the presenting the three-dimensional model in a display medium includes:
acquiring background information adapted to characterize an environment in which the object is located;
fusing the three-dimensional model with the background information to form fused information;
presenting the fusion information in the display medium;
wherein the display medium comprises at least one of:
an electronic display screen; air; a curtain.
Therefore, embodiments of the invention can present the three-dimensional model in an augmented-reality manner, improving the user's interactive experience.
In an exemplary embodiment, the generating a three-dimensional model adapted to characterize the target pose comprises:
acquiring an original three-dimensional model generated by photographing the object in a starting pose with a camera assembly, and adjusting the object in the original three-dimensional model from the starting pose to the target pose to generate the three-dimensional model; or
acquiring, from a model library, an original three-dimensional model containing an original object in a predetermined pose, and adjusting the original object in the original three-dimensional model from the predetermined pose to the target pose to generate the three-dimensional model.
Therefore, embodiments of the invention can either photograph the object to generate the three-dimensional model or acquire an existing three-dimensional model from the model library, and thus have wide applicability.
In an exemplary embodiment, further comprising:
determining the degree of matching between the three-dimensional image and the three-dimensional model in the display medium;
and evaluating the pose to be evaluated based on the degree of matching.
Therefore, the pose to be evaluated can be assessed based on the degree of matching, which makes it convenient for the user to improve the pose.
In an exemplary embodiment, the determining the degree of matching of the three-dimensional image and the three-dimensional model in the display medium includes:
detecting a first spatial position of a predetermined keypoint of the object in the three-dimensional image; detecting a second spatial position of the predetermined keypoint in the three-dimensional model; and determining the degree of matching between the three-dimensional image and the three-dimensional model in the display medium based on the degree of matching between the first spatial position and the second spatial position; or
detecting a first normal-vector cumulative distribution curve of the three-dimensional image and a second normal-vector cumulative distribution curve of the three-dimensional model; and determining the degree of matching between the three-dimensional image and the three-dimensional model in the display medium based on the degree of matching between the first and second normal-vector cumulative distribution curves.
Therefore, embodiments of the invention can determine the degree of matching through keypoint detection or through normal-vector cumulative distribution curve detection, and thus have wide applicability.
In an exemplary embodiment, further comprising:
determining the difference between the pose to be evaluated and the target pose based on the degree of matching;
generating a prompt action to reduce the difference;
causing the three-dimensional model to present the prompt action in the display medium.
Therefore, by presenting the prompt action, embodiments of the invention help the object reach the target pose quickly and improve guidance efficiency.
A posture guiding device in X-ray imaging, comprising:
a determination module for determining a target pose of an object in X-ray imaging;
a generation module for generating a three-dimensional model adapted to characterize the target pose;
and the presenting module is used for presenting the three-dimensional model in a display medium.
Therefore, embodiments of the invention no longer guide the imaging object into a pose manually; instead, the imaging object is guided automatically by presenting a three-dimensional model adapted to characterize its target pose, which saves guidance time. Furthermore, the guide no longer has to move back and forth between the exposure room and the control room.
In an exemplary embodiment, the presenting module is configured to acquire a three-dimensional image, generated by photographing the object with a camera assembly, that is adapted to characterize the pose of the object to be evaluated, and to present the three-dimensional image in the display medium.
It can be seen that embodiments of the invention further present the three-dimensional image of the object in the display medium, and can thus display the pose of the object to be evaluated.
In an exemplary embodiment, the presenting module is configured to obtain background information adapted to characterize an environment in which the object is located; fusing the three-dimensional model with the background information to form fused information; presenting the fusion information in the display medium; wherein the display medium comprises at least one of: an electronic display screen; air; a curtain.
Therefore, embodiments of the invention can present the three-dimensional model in an augmented-reality manner, improving the user's interactive experience.
In an exemplary embodiment, the generation module is configured to acquire an original three-dimensional model generated by photographing the object in a starting pose with the camera assembly and adjust the object in the original three-dimensional model from the starting pose to the target pose to generate the three-dimensional model; or to acquire, from a model library, an original three-dimensional model containing an original object in a predetermined pose and adjust the original object in the original three-dimensional model from the predetermined pose to the target pose to generate the three-dimensional model.
Therefore, embodiments of the invention can either photograph the object to generate the three-dimensional model or acquire an existing three-dimensional model from the model library, and thus have wide applicability.
In an exemplary embodiment, further comprising:
an evaluation module for determining the degree of matching between the three-dimensional image and the three-dimensional model in the display medium, and evaluating the pose to be evaluated based on the degree of matching.
Therefore, the pose to be evaluated can be assessed based on the degree of matching, which makes it convenient for the user to improve the pose.
In an exemplary embodiment, the evaluation module is configured to detect a first spatial position of a predetermined keypoint of the object in the three-dimensional image, detect a second spatial position of the predetermined keypoint in the three-dimensional model, and determine the degree of matching between the three-dimensional image and the three-dimensional model in the display medium based on the degree of matching between the first spatial position and the second spatial position; or the evaluation module is configured to detect a normal-vector cumulative distribution curve of the three-dimensional image and a normal-vector cumulative distribution curve of the three-dimensional model, and determine the degree of matching between the three-dimensional image and the three-dimensional model in the display medium based on the degree of matching between the two curves.
Therefore, embodiments of the invention can determine the degree of matching through keypoint detection or through normal-vector cumulative distribution curve detection, and thus have wide applicability.
In an exemplary embodiment, further comprising:
a prompting module for determining the difference between the pose to be evaluated and the target pose based on the degree of matching, generating a prompt action to reduce the difference, and causing the three-dimensional model to present the prompt action in the display medium.
Therefore, by presenting the prompt action, embodiments of the invention help the object reach the target pose quickly and improve guidance efficiency.
A posture guiding device in X-ray imaging includes a processor and a memory;
the memory has stored therein an application executable by the processor for causing the processor to perform the pose guidance method in X-ray imaging as set forth in any of the above.
Therefore, embodiments of the invention also provide a posture guiding device with a memory-processor architecture, which automatically guides the imaging object by presenting a three-dimensional model adapted to characterize its target pose instead of guiding the imaging object into a pose manually, thereby saving guidance time. Furthermore, the guide no longer has to move back and forth between the exposure room and the control room.
A computer readable storage medium having stored therein computer readable instructions for performing the pose guidance method in X-ray imaging as claimed in any of the above.
A computer program product comprising a computer program which, when executed by a processor, implements a pose guidance method in X-ray imaging as claimed in any one of the preceding claims.
Accordingly, embodiments of the invention also provide a computer-readable storage medium and a computer program product which, instead of manually guiding the imaging object into a pose, automatically guide the imaging object by presenting a three-dimensional model adapted to characterize its target pose, thereby saving guidance time. Furthermore, the guide no longer has to move back and forth between the exposure room and the control room.
Drawings
Fig. 1 is a flowchart of a posture guiding method in X-ray imaging according to an embodiment of the present invention.
FIG. 2 is a first schematic representation of a three-dimensional model representing a target pose of an object in X-ray imaging according to an embodiment of the invention.
FIG. 3 is a second schematic representation of a three-dimensional model representing a target pose of an object in X-ray imaging according to an embodiment of the invention.
FIG. 4 is a first schematic representation of a three-dimensional image representing a pose to be evaluated of an object according to an embodiment of the invention.
FIG. 5 is a second schematic representation of a three-dimensional image representing a pose to be evaluated of an object according to an embodiment of the invention.
FIG. 6 is a first schematic diagram of a fused representation of a three-dimensional image and a three-dimensional model according to an embodiment of the present invention.
FIG. 7 is a second schematic representation of a fused representation of a three-dimensional image and a three-dimensional model according to an embodiment of the present invention.
Fig. 8 is a structural diagram of a posture guiding device in X-ray imaging according to an embodiment of the present invention.
Fig. 9 is a block diagram of a posture guiding device in X-ray imaging having a memory-processor architecture according to an embodiment of the present invention.
Wherein, the reference numerals are as follows:
Detailed Description
To make the technical solution and advantages of the present invention clearer, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the detailed description is intended to illustrate the invention and not to limit its scope.
For simplicity and clarity of description, the following sets forth aspects of the invention by describing several exemplary embodiments. Numerous details in the embodiments are provided solely to aid in understanding the invention; it will be apparent, however, that embodiments of the invention may be practiced without these specific details. Some embodiments are not described in detail, and only a framework is presented, in order to avoid unnecessarily obscuring aspects of the present invention. Hereinafter, "comprising" means "including but not limited to", and "according to ..." means "according to at least ..., but not limited to only ...". Unless otherwise specified, the terms "a" and "an" indicate one or more, or at least one, of the referenced components.
In view of the many drawbacks of manually guiding objects into poses, embodiments of the invention no longer guide the object manually; instead, guidance time is saved by presenting a three-dimensional model adapted to characterize the target pose of the object so as to guide the object automatically. Furthermore, the guide no longer has to move back and forth between the exposure room and the control room.
Fig. 1 is a flowchart of a posture guiding method in X-ray imaging according to an embodiment of the present invention. Preferably, the method of FIG. 1 may be performed by a controller. The controller may be implemented as a control host integrated into the X-ray imaging system, or as a control unit separate from the control host.
As shown in fig. 1, the method 100 includes:
step 101: a target pose of an object in X-ray imaging is determined.
Here, the object is an organism, or a tissue, organ or system of an organism, or the like, on which X-ray imaging is to be performed. The target pose is the pose that the object is expected to have after being guided.
Step 102: a three-dimensional model adapted to characterize the pose of the target is generated.
Here, generating the three-dimensional model adapted to characterize the pose of the target includes at least one of:
(1) Acquire an original three-dimensional model generated by photographing the object in a starting pose with the camera assembly, and adjust the object in the original three-dimensional model from the starting pose to the target pose to generate a three-dimensional model adapted to characterize the target pose.
For example, suppose it is determined in step 101 that the object in X-ray imaging is Zhang San's right hand and that the target pose is a clenched fist. Zhang San's right hand is photographed by the camera assembly; the pose of the right hand at the time of photographing is the starting pose, assumed here to be an open hand. Modeling can then be performed on the photographed open right hand to generate an original three-dimensional model of Zhang San's right hand, and the right hand in the original three-dimensional model is adjusted from the open state to the clenched-fist state, which yields the three-dimensional model representing the target pose.
Thus, embodiments of the invention may photograph the object to generate the three-dimensional model. Since the three-dimensional model is generated by photographing the object itself, the guidance effect on the object is good.
(2) Acquire, from a model library, an original three-dimensional model containing an original object in a predetermined pose, and adjust the original object in the original three-dimensional model from the predetermined pose to the target pose to generate a three-dimensional model adapted to characterize the target pose.
For example, suppose it is determined in step 101 that the object in X-ray imaging is Zhang San's right hand and that the target pose is a clenched fist. An original three-dimensional model of Li Si's right hand is obtained from the model library. The original object is Li Si, and the pose of Li Si's right hand in the original three-dimensional model is the predetermined pose, assumed here to be an open hand. Li Si's right hand in the original three-dimensional model is adjusted from the open state to the clenched-fist pose, which yields the three-dimensional model representing the target pose. In a preferred embodiment, when the body size of the original object differs greatly from that of the object in the X-ray imaging of step 101, a scaling process may be applied to the original three-dimensional model so that the original object better matches the object in the X-ray imaging of step 101.
Therefore, embodiments of the invention can also acquire an existing three-dimensional model from the model library. Directly obtaining an existing three-dimensional model from the model library reduces the difficulty of generating the three-dimensional model. A greatly simplified sketch of the pose adjustment and scaling described above is given below.
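By way of a non-limiting illustration only, the pose adjustment and scaling described above could be sketched as follows in Python, assuming the model is represented as a point cloud with per-point segment labels and the target pose is expressed as per-segment joint rotations; these representations, the function names and the use of NumPy/SciPy are illustrative assumptions rather than the prescribed implementation.

```python
import numpy as np
from scipy.spatial.transform import Rotation


def adjust_pose(points, labels, joint_rotations):
    """Rotate each labeled segment of a point-cloud model about its joint pivot.

    points          : (N, 3) array of model vertices
    labels          : (N,) array of segment names, e.g. "index_finger"
    joint_rotations : dict mapping segment name -> (pivot_xyz, euler_xyz_degrees)
    """
    adjusted = points.copy()
    for segment, (pivot, euler_deg) in joint_rotations.items():
        pivot = np.asarray(pivot, dtype=float)
        mask = labels == segment
        rot = Rotation.from_euler("xyz", euler_deg, degrees=True)
        # rotate the segment's vertices about its joint pivot
        adjusted[mask] = rot.apply(adjusted[mask] - pivot) + pivot
    return adjusted


def scale_to_subject(model_points, subject_points):
    """Isotropically scale a library model so its extent matches the photographed subject."""
    model_extent = model_points.max(axis=0) - model_points.min(axis=0)
    subject_extent = subject_points.max(axis=0) - subject_points.min(axis=0)
    factor = float(np.mean(subject_extent / model_extent))
    center = model_points.mean(axis=0)
    return (model_points - center) * factor + center
```

Under these assumptions, closing an open hand into a fist would amount to assigning each finger segment a flexion rotation about its knuckle pivot, and the library model of Li Si's hand would first be scaled so that its overall extent matches that of the photographed object.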
The foregoing exemplary description describes typical examples of generating a three-dimensional model adapted to characterize a target pose, and those skilled in the art will recognize that this description is merely exemplary and is not intended to limit the scope of embodiments of the present invention.
Step 103: the three-dimensional model is presented in a display medium.
By way of example, the display medium may include an electronic display screen, air, a curtain, and so on. The electronic display screen may be of various types, such as CRT, LCD, LED and three-dimensional displays.
In one embodiment, step 103 specifically includes: acquiring background information adapted to characterize the environment in which the object is located; fusing the three-dimensional model with the background information to form fused information; and presenting the fused information in the display medium.
Therefore, embodiments of the invention can present the three-dimensional model in an augmented-reality manner, improving the user's interactive experience.
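A minimal sketch of such an augmented-reality fusion step is given below, assuming the three-dimensional model has already been rendered into a 2D overlay with an alpha channel and the background information is an RGB image of the examination room; the array layout and function name are illustrative assumptions, not the prescribed implementation.

```python
import numpy as np


def fuse_model_with_background(background_rgb, model_rgba):
    """Alpha-blend a rendered view of the 3D model over the room background.

    background_rgb : (H, W, 3) uint8 image of the environment (background information)
    model_rgba     : (H, W, 4) uint8 rendering of the 3D model; alpha = 0 outside the model
    """
    alpha = model_rgba[..., 3:4].astype(np.float32) / 255.0
    fused = (1.0 - alpha) * background_rgb.astype(np.float32) \
            + alpha * model_rgba[..., :3].astype(np.float32)
    return fused.astype(np.uint8)  # fused information to present in the display medium
```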
In an exemplary embodiment, the method 100 further comprises: acquiring a three-dimensional image, generated by photographing the object with the camera assembly, that is adapted to characterize the pose of the object to be evaluated; and presenting the three-dimensional image in the display medium. Preferably, a real-time three-dimensional image generated by photographing the object with the camera assembly may be acquired and presented in the display medium. It can be seen that embodiments of the invention further present the three-dimensional image of the object in the display medium and can thus display the pose of the object to be evaluated.
In one embodiment, the object may be photographed with the camera assembly to obtain the three-dimensional image. In another embodiment, the three-dimensional image of the object may be obtained from a storage medium (e.g., a cloud or local database), the three-dimensional image having previously been captured with a camera assembly. The light source of the camera assembly may or may not coincide with the X-ray source of the X-ray imaging system. When the light source of the camera assembly coincides with the X-ray source, the camera assembly is typically fixed on the bulb housing or on the beam-limiter housing of the X-ray generation assembly. For example, a groove for accommodating the camera assembly is provided on the bulb housing or the beam-limiter housing, and the camera assembly is fixed in the groove by means of a bolt connection, a snap connection, a wire-rope bushing, or the like. When the light source of the camera assembly does not coincide with the X-ray source, the camera assembly may be arranged at any position in the examination room suitable for photographing the object, such as on the ceiling, on the floor, or on various components of the medical imaging system.
In one embodiment, the camera assembly includes at least one three-dimensional camera. The three-dimensional camera photographs the object using a three-dimensional imaging technique to generate a three-dimensional image of the object. In another embodiment, the camera assembly includes at least two two-dimensional cameras, each arranged at a respective predetermined position. In practice, a person skilled in the art can select suitable positions as the predetermined positions as needed. The camera assembly may further include an image processor, which synthesizes the two-dimensional images captured by the individual two-dimensional cameras into a three-dimensional image of the object; the depth of field adopted in the synthesis may be that of any of the two-dimensional images. Alternatively, each two-dimensional camera may send its captured two-dimensional image to an image processor outside the camera assembly, which synthesizes the two-dimensional images into the three-dimensional image of the person to be examined; again, the depth of field adopted in the synthesis may be that of any of the two-dimensional images. In particular, the image processor outside the camera assembly may be implemented as the control host of the X-ray imaging system, or as a control unit separate from the X-ray imaging system. Each two-dimensional camera may be arranged at any position in the examination room suitable for photographing the person to be examined, such as on the ceiling, on the floor, or on various components of the X-ray imaging system.
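The following is a simplified sketch of how such an image processor might derive depth from two two-dimensional cameras using standard block matching (here via OpenCV); the rectification, calibration values and choice of library are illustrative assumptions, not a prescribed implementation.

```python
import cv2
import numpy as np


def synthesize_depth_from_stereo(left_gray, right_gray, focal_px, baseline_m):
    """Estimate a per-pixel depth map from two 2D cameras at known positions.

    left_gray, right_gray : rectified 8-bit grayscale images from the two 2D cameras
    focal_px              : focal length in pixels (from calibration)
    baseline_m            : distance between the two camera centers in meters
    """
    matcher = cv2.StereoBM_create(numDisparities=64, blockSize=15)
    disparity = matcher.compute(left_gray, right_gray).astype(np.float32) / 16.0
    disparity[disparity <= 0] = np.nan          # invalid or unmatched pixels
    depth = focal_px * baseline_m / disparity   # classic stereo relation Z = f * B / d
    return depth                                # depth map underlying the 3D image of the object
```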
In one embodiment, the camera assembly may include at least one two-dimensional camera and at least one depth sensor, mounted at the same location. The camera assembly may further include an image processor, which uses the depth information provided by the depth sensor together with the two-dimensional photograph provided by the two-dimensional camera to generate a three-dimensional image of the object. Optionally, the two-dimensional camera sends the two-dimensional image of the object, and the depth sensor sends the acquired depth information, to an image processor outside the camera assembly, which then generates the three-dimensional image of the object from the depth information and the two-dimensional photograph. Preferably, the image processor outside the camera assembly may be implemented as the control host of the X-ray imaging system, or as a control unit separate from the X-ray imaging system. The two-dimensional camera may be arranged at any position in the examination room suitable for photographing the person to be examined, such as on the ceiling, on the floor, or on various components of the medical imaging system.
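Similarly, a minimal sketch of combining a depth map with a two-dimensional camera image to obtain a three-dimensional representation of the object might look as follows; the pinhole intrinsics and array conventions are illustrative assumptions.

```python
import numpy as np


def depth_to_point_cloud(depth_m, fx, fy, cx, cy):
    """Back-project a depth map from the depth sensor into a 3D point cloud.

    depth_m        : (H, W) array of depths in meters, aligned with the 2D camera image
    fx, fy, cx, cy : pinhole intrinsics of the 2D camera (from calibration)
    """
    h, w = depth_m.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) * depth_m / fx
    y = (v - cy) * depth_m / fy
    points = np.stack([x, y, depth_m], axis=-1)  # (H, W, 3) metric coordinates
    return points.reshape(-1, 3)                 # 3D image of the object as a point cloud
```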
After the camera assembly acquires the three-dimensional image of the object, the three-dimensional image can be sent to the controller executing the flow of FIG. 1 via a wired or wireless interface. Preferably, the wired interface comprises at least one of: a universal serial bus interface, a controller area network interface, a serial port, and the like; the wireless interface comprises at least one of: an infrared interface, a near-field communication interface, a Bluetooth interface, a ZigBee interface, a wireless broadband interface, and the like.
The foregoing describes typical examples of the camera assembly photographing the object to generate a three-dimensional image; those skilled in the art will appreciate that this description is merely exemplary and is not intended to limit the scope of embodiments of the invention.
In an exemplary embodiment, the method 100 further comprises: determining the degree of matching between the three-dimensional image and the three-dimensional model in the display medium, and evaluating the pose to be evaluated based on the degree of matching. Therefore, the pose to be evaluated can be assessed based on the degree of matching, which makes it convenient for the user to improve the pose.
In one exemplary embodiment, the degree of matching between the three-dimensional image and the three-dimensional model is characterized by their degree of spatial coincidence. In another exemplary embodiment, it is characterized by their surface similarity. In yet another exemplary embodiment, it is characterized by a weighted combination of the spatial coincidence and the surface similarity. A matching-degree threshold may also be preset: when the determined degree of matching is greater than or equal to the threshold, the pose to be evaluated is judged to meet expectations; when it is less than the threshold, the pose to be evaluated is judged not to meet expectations.
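A minimal sketch of such a weighted combination and threshold check is given below; the weights and the threshold value are placeholders chosen for illustration, not values prescribed by the embodiments.

```python
def overall_matching_degree(spatial_coincidence, surface_similarity,
                            w_spatial=0.5, w_surface=0.5, threshold=0.8):
    """Combine the two sub-scores and compare against a preset matching-degree threshold.

    spatial_coincidence : score in [0, 1] from keypoint-position matching
    surface_similarity  : score in [0, 1] from normal-vector CDF matching
    """
    degree = w_spatial * spatial_coincidence + w_surface * surface_similarity
    meets_expectation = degree >= threshold  # pose to be evaluated is judged acceptable
    return degree, meets_expectation
```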
In one exemplary embodiment, determining the degree of matching of the three-dimensional image to the three-dimensional model in the display medium includes:
(1) Detect a first spatial position of a predetermined keypoint of the object in the three-dimensional image; detect a second spatial position of the predetermined keypoint in the three-dimensional model; and determine the degree of matching between the three-dimensional image and the three-dimensional model in the display medium based on the degree of matching between the first spatial position and the second spatial position (e.g., the closer the first and second spatial positions are, the higher the degree of matching). This degree of matching characterizes the degree of spatial coincidence between the keypoints. The predetermined keypoint may be a feature point of the object. For example, when the object is a hand, the keypoints may be the wrist, a knuckle, a fingernail, and so on. The number of predetermined keypoints may be one or more.
(2) Detect a first normal-vector cumulative distribution curve of the three-dimensional image and a second normal-vector cumulative distribution curve of the three-dimensional model, and determine the degree of matching between the three-dimensional image and the three-dimensional model in the display medium based on the degree of matching between the first and second normal-vector cumulative distribution curves. Here, the normal-vector cumulative distribution curves of the three-dimensional image and the three-dimensional model are detected and compared to determine the surface similarity between them, so that the degree of matching is determined in the dimension of surface similarity.
Therefore, embodiments of the invention can determine the degree of matching through keypoint detection or through normal-vector cumulative distribution curve detection, and thus have wide applicability. A simplified sketch of both approaches is given below.
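The following sketch illustrates, under simplifying assumptions, both matching approaches: keypoint spatial coincidence and surface similarity via normal-vector cumulative distribution curves. The embodiments do not specify how the cumulative distribution curve is parameterized; summarizing each normal by its polar angle, as well as the tolerance and bin count, are illustrative assumptions.

```python
import numpy as np


def keypoint_matching_degree(first_positions, second_positions, tolerance_m=0.05):
    """Degree of matching based on spatial coincidence of predetermined keypoints.

    first_positions  : (K, 3) keypoint positions detected in the 3D image
    second_positions : (K, 3) positions of the same keypoints in the 3D model
    tolerance_m      : distance at which a keypoint counts as fully mismatched
    """
    distances = np.linalg.norm(first_positions - second_positions, axis=1)
    per_point = np.clip(1.0 - distances / tolerance_m, 0.0, 1.0)  # closer -> higher score
    return float(per_point.mean())


def normal_cdf_matching_degree(image_normals, model_normals, bins=64):
    """Degree of matching based on cumulative distributions of surface-normal directions.

    image_normals, model_normals : (N, 3) unit normal vectors sampled on each surface
    """
    # summarize each normal by its polar angle as a simple 1D description of surface orientation
    angle_img = np.arccos(np.clip(image_normals[:, 2], -1.0, 1.0))
    angle_mod = np.arccos(np.clip(model_normals[:, 2], -1.0, 1.0))
    hist_img, _ = np.histogram(angle_img, bins=bins, range=(0.0, np.pi), density=True)
    hist_mod, _ = np.histogram(angle_mod, bins=bins, range=(0.0, np.pi), density=True)
    cdf_img = np.cumsum(hist_img) / np.sum(hist_img)
    cdf_mod = np.cumsum(hist_mod) / np.sum(hist_mod)
    # surface similarity: 1 minus the mean absolute gap between the two curves
    return float(1.0 - np.mean(np.abs(cdf_img - cdf_mod)))
```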
In an exemplary embodiment, the method further comprises: determining the difference between the pose to be evaluated and the target pose based on the degree of matching; generating a prompt action for reducing the difference; and causing the three-dimensional model to present the prompt action in the display medium. Therefore, by presenting the prompt action, embodiments of the invention help the object reach the target pose quickly and improve guidance efficiency.
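A simplified sketch of deriving such a prompt action from the keypoint difference is given below; the centroid-offset heuristic, the axis-to-direction mapping and the threshold are illustrative assumptions (the actual direction wording depends on the coordinate convention of the camera assembly).

```python
import numpy as np


def generate_prompt_action(first_positions, second_positions, threshold_m=0.03):
    """Derive a simple corrective prompt from the keypoint difference.

    first_positions  : (K, 3) keypoints of the pose to be evaluated (from the 3D image)
    second_positions : (K, 3) keypoints of the target pose (from the 3D model)
    """
    offset = second_positions.mean(axis=0) - first_positions.mean(axis=0)
    if np.linalg.norm(offset) < threshold_m:
        return None  # no prompt needed; the pose already matches the target
    axis = int(np.argmax(np.abs(offset)))
    direction = ["right" if offset[0] > 0 else "left",
                 "up" if offset[1] > 0 else "down",
                 "forward" if offset[2] > 0 else "backward"][axis]
    # the three-dimensional model can then animate this motion in the display medium
    return f"keep the current hand shape and move {abs(offset[axis]) * 100:.0f} cm {direction}"
```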
FIG. 2 is a first schematic representation of a three-dimensional model representing a target pose of an object in X-ray imaging according to an embodiment of the invention. In FIG. 2, the three-dimensional model 21 is embodied as a hand with the palm open and inclined at an angle to the horizontal plane. This palm pose is the pose that the object is expected to have after being guided. Moreover, the three-dimensional model 21 is presented fused with background information 22, i.e. the three-dimensional model 21 is presented in an augmented-reality manner.
FIG. 3 is a second schematic representation of a three-dimensional model representing a target pose of an object in X-ray imaging according to an embodiment of the invention. In FIG. 3, the three-dimensional model 31 is embodied as a hand in an open-palm pose with the palm facing upward. This palm pose is the pose that the object is expected to have after being guided. The three-dimensional model 31 is presented fused with background information 32, i.e. in an augmented-reality manner. The background information 32 is blurred so that the three-dimensional model 31 is displayed prominently.
FIG. 4 is a first schematic representation of a three-dimensional image representing a pose to be evaluated of an object according to an embodiment of the invention. In FIG. 4, a three-dimensional image 41 of the hand photographed by the camera assembly is shown on a display screen. As can be seen from the three-dimensional image 41, the palm is open and inclined at an angle to the horizontal plane. This palm pose is the pose to be evaluated. The spatial positions of predetermined keypoints in the three-dimensional image 41 are detected. The predetermined keypoints may be feature points of the object, such as the wrist, knuckles, fingernails, and so on. The detected keypoints are highlighted in the three-dimensional image 41, for example the wrist keypoint 421 or the little-finger keypoint 422.
FIG. 5 is a second schematic representation of a three-dimensional image representing a pose to be evaluated of an object according to an embodiment of the invention. In FIG. 5, a three-dimensional image 51 of the hand photographed by the camera assembly is shown on a display screen. As can be seen from the three-dimensional image 51, the palm is open, level and facing downward. This palm pose is the pose to be evaluated. The spatial positions of predetermined keypoints in the three-dimensional image 51 are detected. The predetermined keypoints may be feature points of the object, such as the wrist, knuckles, fingernails, and so on. The detected keypoints are highlighted in the three-dimensional image 51, for example the wrist keypoint 521 or the little-finger keypoint 522.
FIG. 6 is a first schematic diagram of a fused representation of a three-dimensional image and a three-dimensional model according to an embodiment of the present invention. In FIG. 6, the background information, the three-dimensional image 61 of the hand and the three-dimensional model 71 of the hand are displayed fused on the display screen; the corresponding keypoints of the three-dimensional image 61 and the three-dimensional model 71 coincide in spatial position, and their surface shapes match. Therefore, the pose to be evaluated can be judged acceptable, and a notification message indicating that the pose is correct can be issued.
FIG. 7 is a second schematic diagram of a fused representation of a three-dimensional image and a three-dimensional model according to an embodiment of the present invention. In FIG. 7, the background information, the three-dimensional image 81 of the hand and the three-dimensional model 91 of the hand are displayed fused on the display screen; the surface shapes of the three-dimensional image 81 and the three-dimensional model 91 match, but their corresponding keypoints do not coincide spatially. In this case, the pose to be evaluated is not acceptable, so the difference between the pose to be evaluated and the target pose is determined, a prompt action is generated to reduce the difference (e.g., prompting the user to keep the current hand shape, because the surface shapes match, and move the hand to the right), and the three-dimensional model is caused to present the prompt action in the display medium.
It can be seen that the three-dimensional model of embodiments of the invention provides the target pose as a standard pose. The camera assembly captures image information characterizing the current pose of the object. Through the augmented-reality display, the object can see the difference between the current pose and the target pose in real time. Further, according to embodiments of the invention, a warning message may be issued when an erroneous pose is formed.
Fig. 8 is a structural diagram of a posture guiding device in X-ray imaging according to an embodiment of the present invention.
As shown in fig. 8, the posture guiding device 800 in X-ray imaging includes:
a determining module 801 for determining a target pose of an object in X-ray imaging;
a generation module 802 for generating a three-dimensional model adapted to characterize a target pose;
a presenting module 803 for presenting the three-dimensional model in a display medium.
In one embodiment, the presenting module 803 is configured to acquire a three-dimensional image, generated by photographing the object with the camera assembly, that is adapted to characterize the pose of the object to be evaluated, and to present the three-dimensional image in the display medium.
In one embodiment, the presenting module 803 is configured to obtain background information adapted to characterize the environment in which the object is located; fuse the three-dimensional model with the background information to form fused information; and present the fused information in the display medium; wherein the display medium comprises at least one of: an electronic display screen; air; a curtain.
In one embodiment, the generation module 802 is configured to acquire an original three-dimensional model generated by photographing the object in a starting pose with the camera assembly and adjust the object in the original three-dimensional model from the starting pose to the target pose to generate the three-dimensional model; or to acquire, from a model library, an original three-dimensional model containing an original object in a predetermined pose and adjust the original object from the predetermined pose to the target pose to generate the three-dimensional model.
In one embodiment, the apparatus 800 further comprises an evaluation module 804 for determining the degree of matching between the three-dimensional image and the three-dimensional model in the display medium, and evaluating the pose to be evaluated based on the degree of matching.
In one embodiment, the evaluation module 804 is configured to detect a first spatial position of a predetermined keypoint of the object in the three-dimensional image, detect a second spatial position of the predetermined keypoint in the three-dimensional model, and determine the degree of matching between the three-dimensional image and the three-dimensional model in the display medium based on the degree of matching between the first spatial position and the second spatial position; or the evaluation module 804 is configured to detect a normal-vector cumulative distribution curve of the three-dimensional image and a normal-vector cumulative distribution curve of the three-dimensional model, and determine the degree of matching between the three-dimensional image and the three-dimensional model in the display medium based on the degree of matching between the two curves.
In one embodiment, the apparatus 800 further comprises a prompting module 805 for determining the difference between the pose to be evaluated and the target pose based on the degree of matching, generating a prompt action to reduce the difference, and causing the three-dimensional model to present the prompt action in the display medium.
Fig. 9 is a block diagram of a pose guide apparatus in X-ray imaging having a memory-processor architecture according to an embodiment of the present invention.
As shown in fig. 9, the posture guiding device 900 in X-ray imaging includes a processor 901, a memory 902, and a computer program stored in the memory 902 and executable on the processor 901 which, when executed by the processor 901, implements the posture guiding method in X-ray imaging of any of the above embodiments. The memory 902 may be implemented as any of various storage media, such as an electrically erasable programmable read-only memory (EEPROM), a Flash memory, or a programmable read-only memory (PROM). The processor 901 may be implemented to include one or more central processors or one or more field-programmable gate arrays integrating one or more central processor cores. In particular, the central processor or central processor core may be implemented as a CPU, an MCU, a DSP, or the like.
It should be noted that not all of the steps and modules in the above flows and structural diagrams are necessary; some steps or modules may be omitted according to actual needs. The order of execution of the steps is not fixed and can be adjusted as required. The division into modules is merely for convenience of describing the functional division adopted in the embodiments; in actual implementation, one module may be realized by several modules, the functions of several modules may be realized by the same module, and the modules may reside in the same device or in different devices.
The hardware modules in the various embodiments may be implemented mechanically or electronically. For example, a hardware module may include specially designed permanent circuits or logic devices (e.g., a special-purpose processor such as an FPGA or ASIC) for performing certain operations. A hardware module may also include programmable logic devices or circuits (e.g., including a general-purpose processor or other programmable processor) temporarily configured by software to perform particular operations. Whether a hardware module is implemented by dedicated permanent circuits or by temporarily configured circuits (e.g., configured by software) may be determined by cost and time considerations.
The present invention also provides a machine-readable storage medium storing instructions for causing a machine to perform a method as described herein. Specifically, a system or apparatus equipped with a storage medium on which software program code realizing the functions of any of the above embodiments is stored may be provided, and a computer (or CPU or MPU) of the system or apparatus reads out and executes the program code stored in the storage medium. Further, some or all of the actual operations may be performed by an operating system or the like running on the computer based on instructions of the program code. The program code read out from the storage medium may also be written into a memory provided in an expansion board inserted into the computer or into a memory provided in an expansion unit connected to the computer; based on instructions of the program code, a CPU or the like mounted on the expansion board or expansion unit may then perform part or all of the actual operations, thereby realizing the functions of any of the above embodiments. Storage media for providing the program code include floppy disks, hard disks, magneto-optical disks, optical disks (e.g., CD-ROM, CD-R, CD-RW, DVD-ROM, DVD-RAM, DVD-RW, DVD+RW), magnetic tapes, non-volatile memory cards and ROMs. Alternatively, the program code may be downloaded from a server computer or the cloud via a communication network.
The foregoing is merely a preferred embodiment of the present invention, and is not intended to limit the scope of the present invention. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (17)

1. A posture guiding method (100) in X-ray imaging, characterized by comprising:
determining a target pose (101) of an object in X-ray imaging;
generating a three-dimensional model (102) adapted to characterize the target pose;
presenting the three-dimensional model in a display medium (103).
2. The method (100) of claim 1, further comprising:
acquiring a three-dimensional image, generated by photographing the object with a camera assembly, that is adapted to characterize the pose of the object to be evaluated;
presenting the three-dimensional image in the display medium.
3. The method (100) of claim 1, wherein the presenting the three-dimensional model in a display medium comprises:
acquiring background information adapted to characterize an environment in which the object is located;
fusing the three-dimensional model with the background information to form fused information;
presenting the fusion information in the display medium;
wherein the display medium comprises at least one of:
an electronic display screen; air; a curtain.
4. The method (100) of claim 1, wherein the generating a three-dimensional model adapted to characterize the target pose comprises:
acquiring an original three-dimensional model generated by photographing the object in a starting pose with a camera assembly, and adjusting the object in the original three-dimensional model from the starting pose to the target pose to generate the three-dimensional model; or
acquiring, from a model library, an original three-dimensional model containing an original object in a predetermined pose, and adjusting the original object in the original three-dimensional model from the predetermined pose to the target pose to generate the three-dimensional model.
5. The method (100) of any one of claims 1 to 4, further comprising:
determining the degree of matching between the three-dimensional image and the three-dimensional model in the display medium;
and evaluating the pose to be evaluated based on the degree of matching.
6. The method (100) of claim 5, wherein the determining a degree of matching of the three-dimensional image with the three-dimensional model in the display medium comprises:
detecting a first spatial position of a predetermined keypoint of the object in the three-dimensional image; detecting a second spatial position of the predetermined keypoint in the three-dimensional model; and determining the degree of matching between the three-dimensional image and the three-dimensional model in the display medium based on the degree of matching between the first spatial position and the second spatial position; or
detecting a first normal-vector cumulative distribution curve of the three-dimensional image and a second normal-vector cumulative distribution curve of the three-dimensional model; and determining the degree of matching between the three-dimensional image and the three-dimensional model in the display medium based on the degree of matching between the first and second normal-vector cumulative distribution curves.
7. The method (100) of claim 5, further comprising:
determining the difference between the pose to be evaluated and the target pose based on the degree of matching;
generating a prompt action to reduce the difference;
causing the three-dimensional model to present the prompt action in the display medium.
8. A posture guiding device (800) in X-ray imaging, characterized by comprising:
a determination module (801) for determining a target pose of an object in X-ray imaging;
a generation module (802) for generating a three-dimensional model adapted to characterize the target pose;
a presenting module (803) for presenting the three-dimensional model in a display medium.
9. The apparatus (800) of claim 8, wherein,
the presenting module (803) is configured to acquire a three-dimensional image, generated by photographing the object with a camera assembly, that is adapted to characterize the pose of the object to be evaluated, and to present the three-dimensional image in the display medium.
10. The apparatus (800) of claim 8, wherein,
the presenting module (803) is configured to obtain background information adapted to characterize the environment in which the object is located; fuse the three-dimensional model with the background information to form fused information; and present the fused information in the display medium; wherein the display medium comprises at least one of: an electronic display screen; air; a curtain.
11. The apparatus (800) of claim 8, wherein,
the generation module (802) is configured to acquire an original three-dimensional model generated by photographing the object in a starting pose with the camera assembly and adjust the object in the original three-dimensional model from the starting pose to the target pose to generate the three-dimensional model; or to acquire, from a model library, an original three-dimensional model containing an original object in a predetermined pose and adjust the original object in the original three-dimensional model from the predetermined pose to the target pose to generate the three-dimensional model.
12. The apparatus (800) of any one of claims 8 to 11, further comprising:
an evaluation module (804) for determining the degree of matching between the three-dimensional image and the three-dimensional model in the display medium, and evaluating the pose to be evaluated based on the degree of matching.
13. The apparatus (800) of claim 12, wherein,
the evaluation module (804) is configured to detect a first spatial position of a predetermined keypoint of the object in the three-dimensional image, detect a second spatial position of the predetermined keypoint in the three-dimensional model, and determine the degree of matching between the three-dimensional image and the three-dimensional model in the display medium based on the degree of matching between the first spatial position and the second spatial position; or the evaluation module (804) is configured to detect a normal-vector cumulative distribution curve of the three-dimensional image and a normal-vector cumulative distribution curve of the three-dimensional model, and determine the degree of matching between the three-dimensional image and the three-dimensional model in the display medium based on the degree of matching between the two curves.
14. The apparatus (800) of claim 12, further comprising:
a prompting module (805) for determining the difference between the pose to be evaluated and the target pose based on the degree of matching, generating a prompt action to reduce the difference, and causing the three-dimensional model to present the prompt action in the display medium.
15. A pose guide device (900) in X-ray imaging, characterized by comprising a processor (901) and a memory (902);
the memory (902) has stored therein an application executable by the processor (901) for causing the processor (901) to perform the pose guidance method (100) in X-ray imaging as claimed in any one of claims 1 to 7.
16. A computer readable storage medium, characterized in that computer readable instructions are stored therein for performing the pose guidance method (100) in X-ray imaging according to any of claims 1 to 7.
17. A computer program product, characterized in that it comprises a computer program which, when executed by a processor, implements the pose guidance method (100) in X-ray imaging of any of claims 1 to 7.
CN202210167756.XA 2022-02-23 2022-02-23 Posture guiding method, apparatus, medium and program product in X-ray imaging Pending CN116671953A (en)

Priority Applications (1)

Application Number: CN202210167756.XA
Priority Date: 2022-02-23
Filing Date: 2022-02-23
Title: Posture guiding method, apparatus, medium and program product in X-ray imaging
Publication: CN116671953A (en)

Applications Claiming Priority (1)

Application Number: CN202210167756.XA
Priority Date: 2022-02-23
Filing Date: 2022-02-23
Title: Posture guiding method, apparatus, medium and program product in X-ray imaging
Publication: CN116671953A (en)

Publications (1)

Publication Number: CN116671953A
Publication Date: 2023-09-01

Family

ID=87781409

Family Applications (1)

Application Number: CN202210167756.XA
Status: Pending
Priority Date: 2022-02-23
Filing Date: 2022-02-23

Country Status (1)

Country Link
CN (1) CN116671953A (en)

Similar Documents

Publication Publication Date Title
US11576645B2 (en) Systems and methods for scanning a patient in an imaging system
EP3406196A1 (en) X-ray system and method for standing subject
US10390779B2 (en) X-ray imaging apparatus and control method thereof
JP5921271B2 (en) Object measuring apparatus and object measuring method
CN106659448A (en) Method and system for configuring an x-ray imaging system
US20200229869A1 (en) System and method using augmented reality with shape alignment for medical device placement in bone
CN109452947A (en) For generating positioning image and the method to patient's imaging, x-ray imaging system
JP2004046772A (en) Method, system and apparatus for processing image
JP2015528359A (en) Method and apparatus for determining a point of interest on a three-dimensional object
PT1573498E (en) User interface system based on pointing device
CN106572298A (en) Display control apparatus and display control method
JP2006255430A (en) Individual identification device
JP2011254411A (en) Video projection system and video projection program
CN110742631B (en) Imaging method and device for medical image
KR20160046670A (en) Apparatus and Method for supporting image diagnosis
CN108139876B (en) System and method for immersive and interactive multimedia generation
US11207048B2 (en) X-ray image capturing apparatus and method of controlling the same
JP4659722B2 (en) Human body specific area extraction / determination device, human body specific area extraction / determination method, human body specific area extraction / determination program
US10102638B2 (en) Device and method for image registration, and a nontransitory recording medium
CN116671953A (en) Posture guiding method, apparatus, medium and program product in X-ray imaging
BR102012018661A2 (en) HEIGHT IMAGE ADJUSTMENT METHOD DISPLAYED ON A DISPLAY OF A SELF-SERVICE TERMINAL, AND, SELF-SERVICE TERMINAL
WO2023023956A1 (en) Method and apparatus for visualization of touch panel to object distance in x-ray imaging
CN117898747A (en) Method, apparatus, system, storage medium and program product for presenting objects
US10049480B2 (en) Image alignment device, method, and program
CN117858670A (en) Method and apparatus for determining touch panel-to-target distance in X-ray imaging

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination