CN114305471A - Processing method and device for determining posture and pose, surgical system, surgical equipment and medium - Google Patents

Processing method and device for determining posture and pose, surgical system, surgical equipment and medium

Info

Publication number
CN114305471A
CN114305471A (application CN202111661305.3A)
Authority
CN
China
Prior art keywords
position information
posture
bone point
current image
human body
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111661305.3A
Other languages
Chinese (zh)
Inventor
王俊
余坤璋
徐宏
孙晶晶
杨志明
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Kunbo Biotechnology Co Ltd
Original Assignee
Hangzhou Kunbo Biotechnology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Kunbo Biotechnology Co Ltd filed Critical Hangzhou Kunbo Biotechnology Co Ltd
Priority claimed from CN202111661305.3A
Publication of CN114305471A
Legal status: Pending

Landscapes

  • Image Analysis (AREA)
  • Apparatus For Radiation Diagnosis (AREA)

Abstract

The invention provides a processing method, device, surgical system, equipment and medium for determining a posture and pose. The processing method for determining the posture and pose comprises the following steps: acquiring a current image, wherein the current image is obtained by shooting a target human body with an image acquisition device; identifying a plurality of bone points of the target human body in the current image to obtain the measured position information of each bone point in the current image; determining whether any bone point is accurately in position by comparing the measured position information of that bone point in the current image with preset reference position information; and, when the plurality of bone points in the current image are all accurately in position, determining that the current posture pose of the target human body matches a reference posture pose, wherein the reference posture pose is the posture pose of the target human body when the examination data of the target human body were collected.

Description

Processing method and device for determining posture and pose, surgical system, surgical equipment and medium
Technical Field
The invention relates to the field of medical instruments, and in particular to a processing method, a processing device, a surgical system, surgical equipment and a medium for determining a posture and pose.
Background
Prior to surgery, examination data of the in vivo structure of the human body may be acquired, wherein the examination data may be, for example, CT data. Assistance may then be provided to the procedure, either directly or indirectly, based on the examination data.
However, the posture of the human body is variable, and different postures affect the form of its internal structures. As a result, assistance provided based on the examination data may not accurately reflect the true form of the internal structures during the operation, making the accuracy and effectiveness of surgical assistance difficult to guarantee.
Disclosure of Invention
The invention provides a processing method, a processing device, a surgical system, equipment and a medium for determining a posture and pose, aiming to solve the problem that the accuracy and effectiveness of surgical assistance are difficult to guarantee.
According to a first aspect of the present invention, there is provided a processing method for determining a posture and pose, comprising:
acquiring a current image; the current image is obtained by shooting a target human body by an image acquisition device;
identifying a plurality of bone points of the target human body in the current image to obtain the actually measured position information of each bone point in the current image;
determining whether any bone point is accurately in position by comparing the measured position information of that bone point in the current image with preset reference position information;
when the plurality of bone points in the current image are all accurately in position, determining that the current posture pose of the target human body matches a reference posture pose, wherein the reference posture pose is the posture pose of the target human body when the examination data of the target human body were collected.
Optionally, identifying a plurality of bone points of the target human body in the current image to obtain measured position information of each bone point in the current image, including:
inputting the current image into a pre-trained recognition model, and obtaining a recognition result output by the recognition model, wherein the recognition result represents the actually measured position information of each bone point in the current image.
Optionally, the measured position information of any bone point includes measured row coordinates and measured column coordinates of any bone point in the pixel array of the current image;
the reference position information comprises reference row coordinates and reference column coordinates;
determining whether any bone point is accurately in position by comparing the measured position information of that bone point in the current image with preset reference position information comprises the following steps:
comparing the measured row coordinate of any bone point with the reference row coordinate, and the measured column coordinate of any bone point with the reference column coordinate;
and when the measured row coordinate of any bone point matches the reference row coordinate and the measured column coordinate of any bone point matches the reference column coordinate, determining that the bone point is accurately in position.
Optionally, before determining whether any bone point is accurately in position by comparing the measured position information of any bone point in the current image with preset reference position information, the method further comprises:
identifying the current identification information of each bone point in the current image;
acquiring reference identification information preset by each piece of reference position information; the reference identification information is used for identifying the bone point corresponding to the reference position information;
and determining the reference position information of any bone point in all the reference position information by comparing the current identification information with the reference identification information.
Optionally, the examination data comprises CT data;
the reference location information is determined by:
acquiring a reference image; the reference image is obtained by shooting the target human body by the image acquisition device when the CT data of the target human body are detected;
and identifying a plurality of bone points of the target human body in the reference image to obtain the position information of each bone point in the reference image, and taking the position information of each bone point in the reference image as the reference position information of the bone point.
Optionally, the reference image and the current image are both acquired when the target human body lies on the bed plate; the pose of the image acquisition device relative to the corresponding bed plate when the reference image is shot is matched with the pose of the image acquisition device relative to the corresponding bed plate when the current image is shot.
Optionally, identifying a plurality of bone points of the target human body in the current image, and obtaining the measured position information of each bone point in the current image, further includes:
determining the relative distance and relative orientation between the measured position information of each bone point in the current image and the reference position information of that bone point, to obtain pose adjustment information representing the relative distance and relative orientation;
and feeding back the pose adjustment information to a user.
Optionally, identifying a plurality of bone points of the target human body in the current image, and obtaining the measured position information of each bone point in the current image, further includes:
displaying a plurality of bone point models for simulating corresponding bone points in a human-computer interaction interface, wherein the display position of each bone point model in the interface is matched with the measured position information of the corresponding bone point in the current image;
if any bone point is accurately in position, displaying the bone point model corresponding to that bone point in a preset first display mode;
and if any bone point is not accurately in position, displaying the bone point model corresponding to that bone point in a preset second display mode.
According to a second aspect of the present invention, there is provided a processing apparatus for determining a posture and pose, comprising:
the image acquisition module is used for acquiring a current image; the current image is obtained by shooting a target human body by an image acquisition device;
a bone point identification module, configured to identify a plurality of bone points of the target human body in the current image, and obtain actual measurement position information of each bone point in the current image;
the in-position judging module is used for determining whether any bone point is accurately in position by comparing the measured position information of that bone point in the current image with preset reference position information;
and the comparison module is used for determining that the current posture pose of the target human body matches a reference posture pose when the plurality of bone points in the current image are all accurately in position, wherein the reference posture pose is the posture pose of the target human body when the examination data of the target human body were acquired.
According to a third aspect of the present invention, there is provided a surgical system comprising: a data processing section for executing the method according to the first aspect and its optional aspects, and an image capturing apparatus.
According to a fourth aspect of the present invention, there is provided an electronic device, comprising a processor and a memory,
the memory is used for storing codes;
the processor is configured to execute the code in the memory to implement the method according to the first aspect and its alternatives.
According to a fifth aspect of the present invention, there is provided a storage medium having stored thereon a computer program which, when executed by a processor, implements the method of the first aspect and its alternatives.
According to the processing method, device, surgical system, equipment and medium for determining the posture and pose provided by the invention, acquiring the current image of the target human body and identifying the measured position information of the bone points provides a sufficient and effective basis for judging the posture and pose of the target human body.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to these drawings without creative efforts.
FIG. 1 is a diagrammatic illustration of the construction of a surgical system in an exemplary embodiment of the present invention;
FIG. 2 is a diagrammatic illustration of the construction of a surgical system in accordance with another exemplary embodiment of the present invention;
FIG. 3 is a flow chart illustrating a method for determining a posture and pose in an exemplary embodiment of the invention;
fig. 4 is a schematic flow chart illustrating a comparison between measured location information and reference location information according to an exemplary embodiment of the present invention;
FIG. 5 is a schematic illustration of skeletal points and their identification information in an exemplary embodiment of the invention;
FIG. 6 is a flow diagram of a display model in an exemplary embodiment of the invention;
FIG. 7 is a flow chart illustrating feedback of adjustment information in an exemplary embodiment of the invention;
FIG. 8 is a schematic representation of the program modules of a processing device for determining a posture and pose in an exemplary embodiment of the invention;
FIG. 9 is a schematic representation of the program modules of a processing device for determining a posture and pose in another exemplary embodiment of the invention;
fig. 10 is a schematic configuration diagram of an electronic device in an exemplary embodiment of the invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The terms "first," "second," "third," "fourth," and the like in the description and in the claims, as well as in the drawings, if any, are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the invention described herein are capable of operation in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
The technical solution of the present invention will be described in detail below with specific examples. The following several specific embodiments may be combined with each other, and details of the same or similar concepts or processes may not be repeated in some embodiments.
An embodiment of the present invention provides a surgical system, including: a data processing unit 101 and an image capturing device 102.
The data processing unit 101 may be any device or combination of devices having data processing capability, the data processing unit 101 may communicate with the image capturing device 102 in a wired or wireless manner, and the data processing unit 101 may be configured to execute the processing method for determining the posture and posture according to the embodiment of the present invention. Specifically, the data processing unit 101 may be a computer, a terminal, a server, or the like.
The image capturing device 102 is any device having an image capturing capability, and the captured image may form a video, a moving image, or a still image, may be a two-dimensional image, or may be a three-dimensional image, and specifically, the image capturing device 102 may be a camera, a depth camera, or the like.
The image capturing device 202 and the data processing unit 201 in the embodiment shown in fig. 2 are the same as or similar to the image capturing device 102 and the data processing unit 101 in the embodiment shown in fig. 1, and the same or similar contents are not repeated herein.
In the embodiment shown in fig. 2, the image capturing device 202 may be directly or indirectly fixedly connected to the bed board 203, that is, the relative pose between the image capturing device 202 and the bed board 203 can be kept fixed, so that the display position, shape, size, etc. of the bed board 203 in the image captured by the image capturing device 202 remain unchanged.
In one embodiment, the data processing portion 201 may communicate with the display device 204, and further, the data processing portion 201 may display the content to be displayed (for example, the skeletal point model mentioned in the present specification, and may also display the reference position information of each skeletal point, etc.) through the display device 204.
Referring to fig. 3, an embodiment of the present invention provides a processing method for determining a posture and pose, comprising:
s301: acquiring a current image;
the current image is obtained by shooting a target human body by an image acquisition device, specifically, the current image can be an image shot before or during an operation, and the whole target human body or only a part of the target human body can be shot in the current image;
the process of acquiring the current image may be, for example, a process of receiving the current image from the image acquisition device, and in some examples, at least one of correction, filtering, and the like may be performed on the current image;
s302: identifying a plurality of bone points of the target human body in the current image to obtain the actually measured position information of each bone point in the current image;
the bone points may be understood as position points in a corresponding image (e.g., a current image or a reference image mentioned in this specification) determined based on bones of a target human body, and specifically, each bone point may represent a position corresponding to one or more bones, and one bone point may be, for example, a center point, an end point, etc. of a certain bone; in one example, the bone point of the skull may be a position point located in the corresponding image (e.g., the current image or the reference image mentioned in this specification) to represent the position of the skull, which may be close to the center of the skull in the corresponding image, and in another example, the bone point of the finger may be one or more position points located in the corresponding image (e.g., the current image or the reference image mentioned in this specification) to represent the position of the finger, which may be close to the end point of the finger, the finger knuckle connection in the corresponding image;
the measured position information may be any information capable of describing the position of a bone point in a current image, and in an example, the measured position information of any bone point includes measured row coordinates and measured column coordinates of any bone point in a pixel array (i.e., an array of pixel points) of the current image; for example, if the current image has an N row by M column array, the measured row coordinates may be N1 and the measured column coordinates may be M1, where 1 ≦ N1 ≦ N, 1 ≦ M1 ≦ M; in other examples, N × M pixel points may also be sorted, and then the actually measured position information is represented based on the sequence position in the sorting result;
as an alternative, any means capable of identifying the target human bone point may be applied to step S302 in the art, for example, step S302 may include: inputting the current image into a pre-trained recognition model, and acquiring a recognition result output by the recognition model;
the recognition result represents the measured position information of each bone point in the current image, for example, the measured position information (for example, measured row coordinates and measured column coordinates) itself, or an image with the measured position information marked and the size matched with that of the current image; the identification model can be, for example, a convolutional neural network;
in other embodiments, the skeleton point can be searched in the current image according to a preset search rule;
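Step S302 can be sketched as below. The model interface and the returned bone-point names are illustrative assumptions; a real implementation would replace `recognize_bone_points` with inference by a trained model such as the convolutional neural network mentioned above.

```python
# Minimal sketch of step S302: pass the current image to a recognition
# model and read out the measured (row, col) position of each bone point.

def recognize_bone_points(image_rows):
    """Toy stand-in for the pre-trained recognition model: returns a dict
    mapping each bone-point identifier to its (row, col) position in the
    pixel array. A real implementation would run CNN inference here."""
    return {"skull": (40, 320), "left_hand": (260, 110)}

def measure_positions(image_rows):
    """Step S302: identify bone points and collect measured positions,
    validating that every coordinate lies inside the pixel array."""
    result = recognize_bone_points(image_rows)
    n_rows = len(image_rows)
    n_cols = len(image_rows[0]) if image_rows else 0
    for name, (r, c) in result.items():
        assert 1 <= r <= n_rows and 1 <= c <= n_cols, name
    return result

# Example on a blank 480 x 640 "image" (nested lists as a stand-in).
image = [[0] * 640 for _ in range(480)]
positions = measure_positions(image)
```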
s303: and determining whether the current posture position of the target human body in the current image is matched with a reference posture position or not by comparing the actually measured position information of any bone point in the current image with the corresponding reference position information.
When the current posture pose is matched with the reference posture pose, for example, a subsequent percutaneous puncture surgery needing to be positioned in vitro can be carried out, and the subsequent body surface positioning and surgery implementation are facilitated through the matching of the current posture pose and the reference posture.
The reference position information may be any information capable of describing a position where each bone point in the current image should be located, and in an example, the reference position information of any bone point includes a reference row coordinate and a reference column coordinate of the any bone point in a pixel array (i.e., an array of pixel points) of the current image; for example, if the current image has an N row by M column array, the reference row coordinate may be N2 and the measured column coordinate may be M2, where 1 ≦ N2 ≦ N, 1 ≦ M2 ≦ M; in other examples, N × M pixel points may also be sorted, and reference position information is represented based on a sequence position in a sorting result;
the actual measurement posture can be understood as the posture of the target human body reflected based on the actual measurement position information, and correspondingly, the reference posture is the posture of the target human body when each bone point is positioned in the corresponding reference position information; in one example, the reference posture and the corresponding reference position information may be detected when CT data of the target human body is detected.
For example, the reference location information may be determined by the following procedure:
acquiring a reference image, identifying a plurality of bone points of the target human body in the reference image, obtaining the position information of each bone point in the reference image, and taking the position information of each bone point in the reference image as the reference position information of the bone point;
the reference image may be obtained by capturing the target human body by the image capturing device when the CT data of the target human body is detected; the reference image definition and acquisition mode can be understood with reference to the acquisition mode of the current image, the mode of identifying the bone points in the reference image, and the mode of identifying the bone points in the current image.
The image acquisition device used when shooting the current image and the image acquisition device used when shooting the reference image may be the same or different, and the bed plate used when shooting the current image and the bed plate used when shooting the reference image may be the same or different;
whether they are the same or not, the following may be satisfied: the pose of the image acquisition device relative to the bed plate on which the target human body lies when the reference image is shot matches the pose of the image acquisition device relative to the bed plate when the current image is shot, where matching is understood as the same or similar.
In one manner of determining whether the current posture pose matches the reference posture pose, the number of bone points whose measured position information matches the corresponding reference position information may be counted; if that number is greater than a number threshold, it is determined that the current posture pose matches the reference posture pose; otherwise, it is determined that the current posture pose does not match the reference posture pose;
in another manner, when the comparison result represents that the measured position information of every bone point matches the corresponding reference position information, it may be determined that the current posture pose matches the reference posture pose;
in yet another manner, the distance between the measured position information and the reference position information of the same bone point may be computed for each bone point, and whether the current posture pose matches the reference posture pose determined from a statistic (e.g., average, sum, etc.) of the distances over all bone points: for example, if the statistic is higher than a statistic threshold, it is determined that the poses do not match; conversely, if the statistic is lower than the statistic threshold, it is determined that the current posture pose matches the reference posture pose.
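The three matching strategies described above can be sketched as follows. The tolerances, the Euclidean distance metric, and the bone-point names are illustrative assumptions, not values from the patent.

```python
# Sketch of the three pose-matching strategies: per-point count vs. a
# threshold, all points matching, and a distance statistic vs. a threshold.
import math

def point_matches(measured, reference, row_tol=5, col_tol=5):
    """A bone point is accurately in position when both coordinate
    differences fall within the specified ranges (cf. steps S403/S404)."""
    dr = abs(measured[0] - reference[0])
    dc = abs(measured[1] - reference[1])
    return dr <= row_tol and dc <= col_tol

def match_by_count(measured, reference, count_threshold):
    """Strategy 1: enough individual bone points must match."""
    n = sum(point_matches(measured[k], reference[k]) for k in reference)
    return n > count_threshold

def match_all(measured, reference):
    """Strategy 2: every bone point must match."""
    return all(point_matches(measured[k], reference[k]) for k in reference)

def match_by_distance(measured, reference, stat_threshold):
    """Strategy 3: the mean distance over all bone points must be low."""
    dists = [math.dist(measured[k], reference[k]) for k in reference]
    return sum(dists) / len(dists) < stat_threshold

ref = {"skull": (40, 320), "left_hand": (260, 110)}
cur = {"skull": (42, 321), "left_hand": (290, 140)}  # hand is off-pose
```

With these example positions, the skull matches its reference while the hand does not, so strategy 2 rejects the pose while strategy 1 with a low threshold accepts it — illustrating why the choice of strategy and thresholds matters.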
In the scheme of steps S301 to S303, acquiring the current image of the target human body and identifying the measured position information of the bone points provide a sufficient and effective basis for determining the posture and pose of the target human body. On this basis, whether the current posture pose of the target human body matches the reference posture pose can be detected by comparing the measured position information with the reference position information, ensuring the stability of the posture and pose of the target human body and avoiding the influence that a changeable, non-uniform posture of the internal structures of the target human body would have on the accuracy and effectiveness of the examination data.
When the reference position information is determined based on the reference image, it can be effectively ensured that the current posture pose of the target human body matches the posture pose at the time the CT data were captured. In that case, the form of the in-vivo structures of the target human body accurately corresponds to the CT data and/or to any other surgical assistance data determined based on the CT data, guaranteeing the accuracy and effectiveness of the CT data or other surgical assistance data for surgical assistance and guidance.
In one example, the surgical assistance data may include, for example, at least one of:
puncture information (including puncture position, puncture direction, puncture depth, and the like) for a target human body;
a virtual model (e.g., a virtual bronchial tree) that simulates internal structures (e.g., a bronchial tree) in a target body;
navigation path of the endoscope in the target body.
In one embodiment, referring to fig. 4, comparing the measured position information of any bone point in the current image with the corresponding reference position information includes:
s401: maintaining a mapping relation between identification information of any skeletal point contained in the target human body and reference position information;
s402: acquiring identification information of any bone point in the current image, and determining reference position information corresponding to any bone point based on the acquired identification information and the mapping relation;
s403: comparing the measured line coordinate and the reference line coordinate of any bone point, and the measured column coordinate and the reference column coordinate of any bone point;
s404: and when the measured line coordinates of any bone point are matched with the reference line coordinates and the measured column coordinates of any bone point are matched with the reference column coordinates, determining that the measured position information of any bone point is matched with the corresponding reference position information.
The identification information may be any information for identifying the bone point, such as a name of the corresponding bone, a number of the bone or the bone point, or any other character or character string;
the mapping relationship can be understood as: the mapped reference position information and identification information (which can also be understood as corresponding reference position information and identification information) represent the positions of the bone points identified by the identification information;
the defined skeletal points and their identification information can be understood with reference to fig. 5.
Through steps S401 and S402, the measured position information and the reference position information for the same bone point can be determined based on the identification information, so that the match between the measured position information and the reference position information accurately reflects the match between the current posture pose and the reference posture pose.
The measured row coordinate matching the reference row coordinate can be understood as the two being the same or similar, where similarity means that the difference between the measured row coordinate and the reference row coordinate is within a first specified range;
likewise, the measured column coordinate matching the reference column coordinate can be understood as the two being the same or similar, where similarity means that the difference between the measured column coordinate and the reference column coordinate is within a second specified range;
the first specified range and the second specified range may be the same or different.
In steps S403 and S404, whether the measured position information matches the reference position information is accurately reflected through the matching of coordinate values, realizing a quantifiable, accurate match between the measured position information and the corresponding reference position information.
In an embodiment, referring to fig. 6, the processing method for determining the posture and posture further includes:
s601: displaying a bone point model for simulating each bone point contained in the target human body in a display interface;
s602: whether the comparison result represents that the actually measured position information of any bone point is matched with the corresponding reference position information or not;
the simulated position relation among all the skeleton point models is matched with the actually measured position relation of all the skeleton points in the current image in the target human body;
each skeleton point can correspond to a skeleton point model, and the skeleton point model can be a two-dimensional model, a three-dimensional model, a simple graph, a pattern or a complex model;
the simulated position relationship can represent the simulated relative direction and the simulated relative distance between the skeleton point models, and correspondingly, the measured position relationship can represent the measured relative direction and the measured relative distance between the skeleton points represented by the measured position information, wherein the simulated relative distance and the measured relative distance can be in proportion;
if the determination result in step S602 is yes, step S603 may be executed: displaying the bone point model corresponding to that bone point in a preset first display mode;
if the determination result in step S602 is no, step S604 may be executed: displaying the bone point model corresponding to that bone point in a preset second display mode.
The first display mode and the second display mode can be any display modes that are visually distinguishable from each other; for example, they may differ in at least one of color, size, or shape.
In one example, when the bone point models are displayed, reference models of the bone points can also be displayed, with the relative positional relation among the reference models matching the reference positional relation, in the reference image, of the bone points of the target human body;
each bone point may correspond to one reference model, and the reference model may be a two-dimensional model, a three-dimensional model, a simple figure, a pattern, or a complex model;
the relative positional relation between the reference models may represent their relative direction and relative distance; correspondingly, the reference positional relation may represent the reference relative direction and reference relative distance between the bone points indicated by the reference position information, and the relative distance between the reference models may be proportional to the reference relative distance;
in one example, during intraoperative display, each reference model is displayed as a red dot at its corresponding position on the display interface and each bone point model as a blue dot; once the preoperative and intraoperative bone points coincide (that is, once the measured position information matches the reference position information), both the reference model and the bone point model are displayed as green dots.
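A minimal sketch of this color scheme, with red, blue, and green taken from the example above (the function name and boolean interface are illustrative assumptions):

```python
def dot_color(matched, is_reference):
    """Pick the display color for a dot on the interface: reference models
    are red and bone point models blue until the measured and reference
    positions coincide, after which both are shown green."""
    if matched:
        return "green"
    return "red" if is_reference else "blue"

print(dot_color(False, True))   # red: reference model, not yet matched
print(dot_color(False, False))  # blue: bone point model, not yet matched
print(dot_color(True, False))   # green: positions coincide
```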
In this scheme, displaying the models provides a reliable, accurate, and effective basis for adjusting the pose of the target human body, improving the efficiency of posture pose adjustment.
In one implementation, referring to fig. 7, the processing method for determining the posture pose further includes:
s701: acquiring adjustment information between the measured position information of any bone point and the corresponding reference position information;
the adjustment information includes an adjustment distance and an adjustment direction;
s702: feeding back the adjustment information to guide the target human body to move.
Specifically, the adjustment information may be fed back externally in a visual or auditory manner;
in one example, after the bone point model and the reference model are displayed, an arrowed line is drawn between them to indicate the adjustment direction, and a specific value or interval range of the adjustment distance can be displayed between them;
in another example, the adjustment information may be fed back as displayed text, such as "move the skull x centimeters to the right"; similarly, the adjustment information can also be fed back by voice.
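Steps S701 and S702 can be sketched as follows; the pixel-to-centimeter scale factor and the message wording are illustrative assumptions, since the disclosure does not fix a scale:

```python
def adjustment_info(measured, reference, cm_per_pixel=0.1):
    """Derive an adjustment distance and direction that would move a bone
    point from its measured (row, col) position to the reference position.
    Rows grow downward and columns grow rightward, as in a pixel array."""
    d_row = reference[0] - measured[0]
    d_col = reference[1] - measured[1]
    parts = []
    if d_col:
        parts.append(f"{abs(d_col) * cm_per_pixel:.1f} cm to the "
                     f"{'right' if d_col > 0 else 'left'}")
    if d_row:
        parts.append(f"{abs(d_row) * cm_per_pixel:.1f} cm "
                     f"{'down' if d_row > 0 else 'up'}")
    return " and ".join(parts) if parts else "in position"

print(adjustment_info((100, 50), (100, 80)))  # 3.0 cm to the right
```

The returned string can be displayed as text or passed to a speech synthesizer, matching the visual and auditory feedback paths described above.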
In this scheme, effective feedback of the adjustment information provides a sufficient, reliable, and intuitive basis for adjusting the posture pose, improving the efficiency of posture pose adjustment.
Referring to fig. 8, an embodiment of the present invention provides a processing apparatus 800 for determining a posture pose, including:
an image obtaining module 801, configured to obtain a current image; the current image is obtained by shooting a target human body with an image acquisition device;
a bone point identification module 802, configured to identify a plurality of bone points of the target human body in the current image to obtain measured position information of each bone point in the current image;
a comparing module 803, configured to determine whether the current posture pose of the target human body in the current image matches a reference posture pose by comparing the measured position information of any bone point in the current image with the corresponding reference position information, where the reference posture pose is the posture pose of the target human body when each bone point is located at its corresponding reference position.
Optionally, the comparing module 803 is specifically configured to:
count the matching number of bone points whose measured position information matches the corresponding reference position information, and determine that the current posture pose matches the reference posture pose when the matching number is greater than a number threshold;
or, when the comparison result indicates that the measured position information of every bone point matches the corresponding reference position information, determine that the current posture pose matches the reference posture pose.
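The two alternatives, a count threshold or an all-points requirement, can be sketched as one helper; the list-of-booleans interface is an assumption for illustration:

```python
def pose_matches(match_flags, threshold=None):
    """Decide whether the current posture pose matches the reference pose.
    With a numeric threshold, the pose matches when the number of matched
    bone points exceeds it; with no threshold, every bone point must match."""
    if threshold is not None:
        return sum(match_flags) > threshold
    return all(match_flags)

print(pose_matches([True] * 8 + [False] * 2, threshold=7))  # True
print(pose_matches([True] * 8 + [False] * 2))               # False
```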
Optionally, the bone point identification module 802 is specifically configured to:
inputting the current image into a pre-trained recognition model, and obtaining a recognition result output by the recognition model, wherein the recognition result represents the actually measured position information of each bone point in the current image.
Optionally, the measured position information of any bone point includes the measured row coordinate and measured column coordinate of that bone point in the pixel array of the current image;
the reference position information of any bone point includes a reference row coordinate and a reference column coordinate;
the comparing module 803 is specifically configured to:
compare the measured row coordinate of that bone point with the reference row coordinate, and the measured column coordinate of that bone point with the reference column coordinate;
and when the measured row coordinate of that bone point matches the reference row coordinate and the measured column coordinate matches the reference column coordinate, determine that the measured position information of that bone point matches the corresponding reference position information.
Optionally, the comparing module 803 is specifically configured to:
maintain a mapping relation between the identification information of each bone point contained in the target human body and its reference position information;
and acquire the identification information of any bone point in the current image, and determine the reference position information corresponding to that bone point based on the acquired identification information and the mapping relation.
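The maintained mapping can be as simple as a dictionary keyed by identification information; the bone point identifiers and coordinates below are hypothetical, since the disclosure does not fix a naming scheme:

```python
# Hypothetical identifier-to-reference-position mapping, values are (row, col)
reference_positions = {
    "skull": (40, 120),
    "left_shoulder": (90, 60),
    "right_shoulder": (90, 180),
}

def lookup_reference(bone_point_id):
    """Resolve a bone point's reference position from the maintained
    mapping between identification information and reference positions."""
    return reference_positions[bone_point_id]

print(lookup_reference("skull"))  # (40, 120)
```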
Optionally, the reference position information is determined by the following process:
acquiring a reference image, the reference image being obtained by shooting the target human body with the image acquisition device while the CT data of the target human body were collected;
and identifying the plurality of bone points of the target human body in the reference image to obtain the position information of each bone point in the reference image, and taking the position information of each bone point in the reference image as the reference position information of that bone point.
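This preparation step amounts to running the same recogniser once on the reference image and storing its output; the `recognize` callable below stands in for the pre-trained recognition model, and its interface is an assumption:

```python
def build_reference_positions(reference_image, recognize):
    """Build the reference-position table by applying the same bone point
    recogniser to the reference image captured during CT data collection."""
    return dict(recognize(reference_image))

# Stub recogniser returning fixed (row, col) positions, for illustration only
stub = lambda image: {"skull": (40, 120), "left_shoulder": (90, 60)}
print(build_reference_positions(None, stub)["skull"])  # (40, 120)
```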
Optionally, the pose information of the image acquisition device, relative to the bed board on which the target human body lies, when the reference image is shot matches the corresponding pose information when the current image is shot.
The image obtaining module 901, the bone point identifying module 902, and the comparing module 903 in the embodiment shown in fig. 9 are the same as or similar to the image obtaining module 801, the bone point identification module 802, and the comparing module 803, respectively; identical or similar content is not repeated here.
The processing apparatus 900 for determining the posture pose further includes:
an adjustment module 904 for:
acquiring adjustment information between the measured position information of any one bone point and corresponding reference position information, wherein the adjustment information comprises an adjustment distance and an adjustment direction;
and feeding back the adjustment information to guide the target human body to move.
Optionally, the processing apparatus 900 for determining the posture pose further includes:
a display module 905, configured to:
display, in a display interface, bone point models simulating the bone points contained in the target human body, where the simulated positional relation among the bone point models matches the measured positional relation of the bone points of the target human body;
if the comparison result indicates that the measured position information of any bone point matches the corresponding reference position information, display the bone point model corresponding to that bone point in a preset first display mode;
and if the comparison result indicates that the measured position information of any bone point does not match the corresponding reference position information, display the bone point model corresponding to that bone point in a preset second display mode.
Referring to fig. 10, an electronic device 1000 is provided, including:
a processor 1001; and
a memory 1002 for storing executable instructions for the processor;
wherein the processor 1001 is configured to perform the above-mentioned method via execution of the executable instructions.
The processor 1001 can communicate with the memory 1002 via the bus 1003.
Embodiments of the present invention also provide a computer-readable storage medium, on which a computer program is stored, which when executed by a processor implements the above-mentioned method.
Those of ordinary skill in the art will understand that all or part of the steps of the above method embodiments may be completed by hardware driven by program instructions. The program may be stored in a computer-readable storage medium and, when executed, performs the steps of the above method embodiments. The aforementioned storage medium includes various media that can store program code, such as a ROM, a RAM, a magnetic disk, or an optical disc.
Finally, it should be noted that the above embodiments are only intended to illustrate the technical solutions of the present invention, not to limit them. Although the invention has been described in detail with reference to the foregoing embodiments, those skilled in the art will understand that the technical solutions described in the foregoing embodiments may still be modified, or some or all of their technical features may be equivalently replaced, and such modifications or replacements do not depart the essence of the corresponding technical solutions from the scope of the technical solutions of the embodiments of the present invention.

Claims (13)

1. A processing method for determining a posture pose, characterized by comprising the following steps:
acquiring a current image; the current image is obtained by shooting a target human body by an image acquisition device;
identifying a plurality of bone points of the target human body in the current image to obtain the actually measured position information of each bone point in the current image;
and determining whether the current posture pose of the target human body in the current image matches a reference posture pose by comparing the actually measured position information of any bone point in the current image with corresponding reference position information, wherein the reference posture pose is the posture pose of the target human body when each bone point is located at its corresponding reference position.
2. The processing method for determining a posture pose according to claim 1, wherein
determining whether the current posture pose of the target human body in the current image is matched with a reference posture pose by comparing the actually measured position information of any bone point in the current image with the corresponding reference position information, wherein the determining comprises the following steps:
counting the matching number of the corresponding bone points of which the actually-measured position information is matched with the corresponding reference position information, and determining that the current posture pose is matched with the reference posture pose under the condition that the matching number is greater than a number threshold;
or when the comparison result is used for representing that the actually measured position information of each bone point is matched with the corresponding reference position information, determining that the current posture pose is matched with the reference posture pose.
3. The processing method for determining a posture pose according to claim 1, wherein
identifying a plurality of bone points of the target human body in the current image to obtain the actually measured position information of each bone point in the current image, wherein the method comprises the following steps:
inputting the current image into a pre-trained recognition model, and obtaining a recognition result output by the recognition model, wherein the recognition result represents the actually measured position information of each bone point in the current image.
4. The processing method for determining a posture pose according to claim 1, wherein
the measured position information of any bone point comprises the measured row coordinate and measured column coordinate of that bone point in the pixel array of the current image;
the reference position information of any bone point comprises a reference row coordinate and a reference column coordinate;
and comparing the measured position information of any bone point in the current image with preset reference position information comprises:
comparing the measured row coordinate of that bone point with the reference row coordinate, and the measured column coordinate of that bone point with the reference column coordinate;
and when the measured row coordinate of that bone point matches the reference row coordinate and the measured column coordinate of that bone point matches the reference column coordinate, determining that the measured position information of that bone point matches the corresponding reference position information.
5. The processing method for determining a posture pose according to claim 1, wherein
comparing the measured position information of any bone point in the current image with the corresponding reference position information comprises:
maintaining a mapping relation between the identification information of each bone point contained in the target human body and its reference position information;
and acquiring the identification information of any bone point in the current image, and determining the reference position information corresponding to that bone point based on the acquired identification information and the mapping relation.
6. The processing method for determining a posture pose according to any one of claims 1 to 5, wherein the reference position information is determined by:
acquiring a reference image, the reference image being obtained by shooting the target human body with the image acquisition device while the CT data of the target human body were collected;
and identifying a plurality of bone points of the target human body in the reference image to obtain the position information of each bone point in the reference image, and taking the position information of each bone point in the reference image as the reference position information of the bone point.
7. The processing method for determining a posture pose according to claim 6, wherein
the pose information of the image acquisition device, relative to the bed board on which the target human body lies, when the reference image is shot matches the corresponding pose information when the current image is shot.
8. The processing method for determining a posture pose according to any one of claims 1 to 5, further comprising:
acquiring adjustment information between the measured position information of any one bone point and corresponding reference position information, wherein the adjustment information comprises an adjustment distance and an adjustment direction;
and feeding back the adjustment information to guide the target human body to move.
9. The processing method for determining a posture pose according to any one of claims 1 to 5, further comprising:
displaying, in a display interface, bone point models simulating the bone points contained in the target human body, wherein the simulated positional relation among the bone point models matches the measured positional relation, in the current image, of the bone points of the target human body;
if the comparison result indicates that the measured position information of any bone point matches the corresponding reference position information, displaying the bone point model corresponding to that bone point in a preset first display mode;
and if the comparison result indicates that the measured position information of any bone point does not match the corresponding reference position information, displaying the bone point model corresponding to that bone point in a preset second display mode.
10. A processing apparatus for determining a posture pose, characterized by comprising:
the image acquisition module is used for acquiring a current image; the current image is obtained by shooting a target human body by an image acquisition device;
a bone point identification module, configured to identify a plurality of bone points of the target human body in the current image, and obtain actual measurement position information of each bone point in the current image;
and a comparison module, configured to determine whether the current posture pose of the target human body in the current image matches a reference posture pose by comparing the measured position information of any bone point in the current image with corresponding reference position information, wherein the reference posture pose is the posture pose of the target human body when each bone point is located at its corresponding reference position.
11. A surgical system, comprising: a data processing section and an image acquisition apparatus, the data processing section being configured to perform the method of any one of claims 1 to 9.
12. An electronic device, comprising a processor and a memory,
the memory is used for storing codes;
the processor is configured to execute the code in the memory to implement the method of any one of claims 1 to 9.
13. A storage medium having stored thereon a computer program which, when executed by a processor, carries out the method of any one of claims 1 to 9.
CN202111661305.3A 2021-12-30 2021-12-30 Processing method and device for determining posture and pose, surgical system, surgical equipment and medium Pending CN114305471A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111661305.3A CN114305471A (en) 2021-12-30 2021-12-30 Processing method and device for determining posture and pose, surgical system, surgical equipment and medium


Publications (1)

Publication Number Publication Date
CN114305471A 2022-04-12



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination