CN113110739A - Kinect-based method for positioning three-dimensional medical model through gestures - Google Patents

Kinect-based method for positioning three-dimensional medical model through gestures

Info

Publication number
CN113110739A
Authority
CN
China
Prior art keywords
gesture
screen
shoulder
positioning
kinect
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110365217.2A
Other languages
Chinese (zh)
Inventor
刘君
吴乙荣
崔飞
王炜
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Individual
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual
Priority to CN202110365217.2A
Publication of CN113110739A
Status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/017 Gesture based interaction, e.g. based on a set of recognized hand gestures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 19/00 Manipulating 3D models or images for computer graphics
    • G06T 19/20 Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Architecture (AREA)
  • Human Computer Interaction (AREA)
  • Computer Graphics (AREA)
  • Computer Hardware Design (AREA)
  • Software Systems (AREA)
  • Manipulator (AREA)
  • User Interface Of Digital Computer (AREA)
  • Apparatus For Radiation Diagnosis (AREA)

Abstract

The invention discloses a Kinect-based method for positioning a three-dimensional medical model through gestures, comprising the following steps. S1: defining an empirical gesture activity range; S2: matching the gesture activity range with the actual size of the current screen; S3: anchor-point positioning and jitter elimination. According to the invention, an empirical range is defined from the actual reach of the arm and matched with the actual screen size, so that the left and right limits of the gesture range correspond to the left and right boundaries of the screen and the upper and lower limits correspond to the upper and lower boundaries. The shoulder on the same side as the gesturing arm serves as the anchor point, corresponding to the center point of the screen, and shoulder jitter caused by arm movement is eliminated, improving the stability of gesture control.

Description

Kinect-based method for positioning three-dimensional medical model through gestures
Technical Field
The invention relates to the field of three-dimensional medical model image processing, and in particular to a Kinect-based method for positioning a three-dimensional medical model through gestures.
Background
'Precision' surgery is a current focus of development. Real-time intraoperative guidance based on a three-dimensional reconstruction model can warn the surgeon of critical intraoperative operations in real time, revolutionarily improving the safety and accuracy of the operation. During surgery, a three-dimensional model of the organ is displayed on a screen, and the surgeon must adjust the model in real time according to the progress of the operation, including rotating it, scaling it, and calling up a prepared operation plan. Because of the sterility requirements of the operating room, the surgeon cannot manipulate the three-dimensional model through a physical contact control device and typically operates it remotely via gesture recognition, according to predefined gesture/command rules.
However, current gesture recognition techniques, including those based on various imaging devices, lack an accurate and easy-to-use positioning mechanism that would allow a physician to position a three-dimensional model to a designated location on the screen flexibly and stably.
Therefore, improvements to the prior art are yet to be made.
Disclosure of Invention
The invention aims to provide a Kinect-based method for positioning a three-dimensional medical model through gestures, solving two technical problems of existing gesture control of three-dimensional models: the spatial dynamic range of the gesture is not intuitively matched to the actual display space of the screen, and the gesture positioning process lacks a stable anchor point.
To achieve this aim, the technical solution of the invention is as follows: a Kinect-based method for positioning a three-dimensional medical model through gestures, comprising the following steps:
S1: defining empirical gesture activity range values in the operating system: with the upper arm naturally raised and the forearm perpendicular to the upper arm, the left and right limits of the gesture range in the x-y plane correspond to the left and right boundaries of the screen, and the upper and lower limits correspond to the upper and lower boundaries;
s2: comparing the gesture active range with the current screen realityMatching the sizes; setting gesture anchor position to ChCorresponding to the central point position C of the screen; the moving range of the gesture is hhAt each position of horizontal position whRespectively corresponding to the height h and the width w of the screen;
S3: anchor-point positioning and jitter elimination: the anchor point is the shoulder coordinate on the same side as the gesture; let E_c be the average of the first n shoulder coordinates after gesture recognition starts, E_(n+1) the shoulder coordinate at the (n+1)-th refresh, and T the threshold; the new anchor point A is calculated as follows:
If dist(E_c, E_(n+1)) > T, the shoulder has undergone a large displacement; the new shoulder coordinate is taken as the anchor coordinate and the count is reset to 1:
A = E_(n+1), where dist is the Euclidean distance between the two points,
dist(E_c, E_(n+1)) = sqrt((E_c_x - E_(n+1)_x)^2 + (E_c_y - E_(n+1)_y)^2);
If dist(E_c, E_(n+1)) < T, the cumulative average shoulder position is calculated as the new anchor position and the count is incremented by 1:
A = (E_c * n + E_(n+1)) / (n + 1).
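As an illustration of step S3, the following Python sketch implements the threshold test and cumulative average described above. It is only a sketch under stated assumptions: the patent provides no implementation, and the class name AnchorFilter, the default threshold value, and the use of meters for coordinates are invented for illustration.

    import math

    class AnchorFilter:
        """Stabilizes the shoulder anchor as in step S3."""

        def __init__(self, threshold_m=0.05):
            self.T = threshold_m   # displacement threshold T (assumed value)
            self.anchor = None     # cumulative mean E_c, used as anchor A
            self.n = 0             # number of samples in the mean

        def update(self, shoulder_xy):
            x, y = shoulder_xy
            if self.anchor is None:             # first sample after start
                self.anchor, self.n = (x, y), 1
                return self.anchor
            # dist(E_c, E_(n+1)): Euclidean distance from the running mean
            d = math.hypot(x - self.anchor[0], y - self.anchor[1])
            if d > self.T:
                # Large shoulder displacement: take the new coordinate
                # as the anchor and reset the count to 1
                self.anchor, self.n = (x, y), 1
            else:
                # Small jitter: A = (E_c * n + E_(n+1)) / (n + 1)
                n = self.n
                self.anchor = ((self.anchor[0] * n + x) / (n + 1),
                               (self.anchor[1] * n + y) / (n + 1))
                self.n = n + 1
            return self.anchor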
the Kinect-based method for positioning the three-dimensional medical model through gestures comprises the steps that left and right boundaries and upper and lower boundaries of a screen corresponding to left and right limits of experience of gestures in an x-y plane are naturally lifted up by upper arms, when forearms are perpendicular to the upper arms, the shoulders on the same side are taken as central points, the left and right sides are respectively 15cm, and 30cm in total are taken as a gesture horizontal dynamic range; and naturally lifting the upper arm, and taking the shoulders on the same side as the central point, wherein the upper and lower parts are respectively 10cm, and the total 20cm is the vertical dynamic range of the gesture when the forearm is vertical to the upper arm.
In the Kinect-based method for positioning the three-dimensional medical model through gestures, in step S2, during gesture movement, let the current hand position be p_h and the corresponding screen position be p; then p is calculated as:
Horizontal coordinate: p_x = (p_h_x - C_h_x) / w_h * w + C_x;
Vertical coordinate: p_y = (p_h_y - C_h_y) / h_h * h + C_y.
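The mapping of step S2 can be sketched in the same way. The sketch below assumes gesture coordinates in meters in the x-y plane, an already-stabilized anchor C_h from step S3, and the empirical 30 cm x 20 cm dynamic range given in the description; the function name and the clamping to the screen edges are illustrative additions, not from the patent.

    def gesture_to_screen(p_h, C_h, w_h, h_h, w, h):
        """Map hand position p_h (meters) to screen pixel p.

        p_h, C_h : (x, y) hand position and gesture anchor
        w_h, h_h : horizontal / vertical gesture dynamic range, meters
        w, h     : screen width / height, pixels
        """
        # The anchor C_h maps to the screen center point C = (w/2, h/2)
        C_x, C_y = w / 2.0, h / 2.0
        p_x = (p_h[0] - C_h[0]) / w_h * w + C_x
        p_y = (p_h[1] - C_h[1]) / h_h * h + C_y
        # Keep the result on screen
        return (min(max(p_x, 0.0), w), min(max(p_y, 0.0), h))

    # Empirical dynamic range from the description: 30 cm x 20 cm
    W_H, H_H = 0.30, 0.20
    # Example: hand offset (0.10, -0.05) m from the anchor, 1920x1080 screen
    print(gesture_to_screen((0.10, -0.05), (0.0, 0.0), W_H, H_H, 1920, 1080))
    # -> (1600.0, 270.0)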
Advantageous effects: the invention defines an empirical range from the actual reach of the arm and matches it with the actual screen size, so that the left and right limits of the gesture range correspond to the left and right boundaries of the screen and the upper and lower limits correspond to the upper and lower boundaries; the displacement of the positioning point is linearly proportional to the displacement of the gesture; and the shoulder position serves as the anchor point, with shoulder jitter during arm movement removed by a noise elimination algorithm (the jitter elimination algorithm of step S3, which computes a cumulative average of the shoulder position), achieving a stable anchor point.
Drawings
FIG. 1 is a block diagram of the steps of the present invention.
FIG. 2 is a schematic diagram of gesture range of motion of the present invention.
FIG. 3 is a schematic diagram of the actual screen size of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention clearer, the invention is described in further detail below with reference to the accompanying drawings and embodiments.
As shown in FIGS. 1-3, the invention discloses a Kinect-based method for positioning a three-dimensional medical model through gestures, comprising the following steps:
S1: defining empirical gesture activity range values in the operating system: with the upper arm naturally raised and the forearm perpendicular to the upper arm, the left and right limits of the gesture range in the x-y plane correspond to the left and right boundaries of the screen, and the upper and lower limits correspond to the upper and lower boundaries;
S2: matching the gesture activity range with the actual size of the current screen: the gesture anchor position C_h corresponds to the center point C of the screen, and the vertical extent h_h and the horizontal extent w_h of the gesture movement range correspond to the height h and the width w of the screen, respectively;
S3: anchor-point positioning and jitter elimination: the anchor point is the shoulder coordinate on the same side as the gesture; let E_c be the average of the first n shoulder coordinates after gesture recognition starts, E_(n+1) the shoulder coordinate at the (n+1)-th refresh, and T the threshold; the new anchor point A is calculated as follows:
If dist(E_c, E_(n+1)) > T, the shoulder has undergone a large displacement; the new shoulder coordinate is taken as the anchor coordinate and the count is reset to 1:
A = E_(n+1), where dist is the Euclidean distance between the two points,
dist(E_c, E_(n+1)) = sqrt((E_c_x - E_(n+1)_x)^2 + (E_c_y - E_(n+1)_y)^2);
If dist(E_c, E_(n+1)) < T, the cumulative average shoulder position is calculated as the new anchor position and the count is incremented by 1:
A = (E_c * n + E_(n+1)) / (n + 1).
With this method, the invention first defines the empirical range of motion of the hand in the x-y plane (coronal plane) when the upper arm is naturally raised and the forearm is perpendicular to the upper arm, and matches it in real time to the current screen size, so that the displacement of the positioning point is linearly proportional to the displacement of the gesture. The shoulder on the same side as the arm serves as the anchor point, corresponding to the center point of the screen. Further, at each refresh the current shoulder position is taken as the average of all shoulder positions recorded since gesture recognition started; when a new shoulder position differs from this average by more than a preset threshold, the count is restarted. This effectively eliminates shoulder jitter caused by arm movement and increases the stability of gesture control.
In this way, the gesture movement range matches the actual screen size more closely, and stable, accurate positioning of the model is achieved.
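Combining the two steps, a hypothetical per-frame pipeline might look like the sketch below, which reuses AnchorFilter and gesture_to_screen from the earlier sketches. The joints dictionary and its keys are stand-ins for a Kinect body-tracking API, which the patent does not specify.

    # Per skeleton frame: stabilize the shoulder anchor (S3),
    # then map the hand position to a screen pixel (S2).
    anchor_filter = AnchorFilter(threshold_m=0.05)

    def on_body_frame(joints):
        shoulder = joints["shoulder_right"]   # same side as the gesture
        hand = joints["hand_right"]
        C_h = anchor_filter.update(shoulder)  # jitter-free anchor (S3)
        return gesture_to_screen(hand, C_h, W_H, H_H, 1920, 1080)  # (S2)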
In the Kinect-based method for positioning the three-dimensional medical model through gestures, the screen boundaries corresponding to the gesture limits in the x-y plane are defined as follows: with the upper arm naturally raised and the forearm perpendicular to the upper arm, taking the same-side shoulder as the center point, 15 cm to each side (30 cm in total) is the horizontal dynamic range of the gesture, and 10 cm above and below (20 cm in total) is the vertical dynamic range.
In the Kinect-based method for positioning the three-dimensional medical model through gestures, in step S2, during gesture movement, let the current hand position be p_h and the corresponding screen position be p; then p is calculated as:
Horizontal coordinate: p_x = (p_h_x - C_h_x) / w_h * w + C_x;
Vertical coordinate: p_y = (p_h_y - C_h_y) / h_h * h + C_y.
When a preoperative three-dimensional reconstruction model is used for intraoperative guidance, the three-dimensional model must be controlled aseptically through gesture recognition to achieve the display position and posture the surgeon requires. The invention, based on the Microsoft Kinect gesture sensor, realizes stable and accurate positioning of the model, with the spatial dynamic range of the gesture intuitively matched to the actual display space of the screen, and solves the following difficulties in gesture control and cursor positioning:
1) The gesture movement range does not match the actual screen size, which reduces the intuitiveness of gesture operation.
2) Model positioning lacks a stable, easy-to-use anchor point whose relative position can serve as the reference for positioning the model.
According to the invention, an empirical range is defined from the actual reach of the arm and matched with the actual screen size, so that the left and right limits of the gesture range correspond to the left and right boundaries of the screen and the upper and lower limits correspond to the upper and lower boundaries; the displacement of the positioning point is linearly proportional to the displacement of the gesture; and the shoulder position serves as the anchor point, with shoulder jitter during arm movement removed by a noise elimination algorithm (the jitter elimination algorithm of step S3, which computes a cumulative average of the shoulder position), achieving a stable anchor point.
In addition, the invention is not limited to a particular gesture recognition device; any clear and unambiguous gesture form may be used, and the invention is not limited to a specific gesture definition. The gesture form in FIG. 2 is only an example.
The above is a preferred embodiment of the present invention. It should be noted that those skilled in the art may make modifications or equivalent substitutions to the technical solution of the invention without inventive effort, and such changes do not depart from the scope of the invention.

Claims (3)

1. A Kinect-based method for positioning a three-dimensional medical model through gestures is characterized by comprising the following steps:
S1: defining empirical gesture activity range values in the operating system: with the upper arm naturally raised and the forearm perpendicular to the upper arm, the left and right limits of the gesture range in the x-y plane correspond to the left and right boundaries of the screen, and the upper and lower limits correspond to the upper and lower boundaries;
S2: matching the gesture activity range with the actual size of the current screen: the gesture anchor position C_h corresponds to the center point C of the screen, and the vertical extent h_h and the horizontal extent w_h of the gesture movement range correspond to the height h and the width w of the screen, respectively;
S3: anchor-point positioning and jitter elimination: the anchor point is the shoulder coordinate on the same side as the gesture; let E_c be the average of the first n shoulder coordinates after gesture recognition starts, E_(n+1) the shoulder coordinate at the (n+1)-th refresh, and T the threshold; the new anchor point A is calculated as follows:
If dist(E_c, E_(n+1)) > T, the shoulder has undergone a large displacement; the new shoulder coordinate is taken as the anchor coordinate and the count is reset to 1:
A = E_(n+1), where dist is the Euclidean distance between the two points,
dist(E_c, E_(n+1)) = sqrt((E_c_x - E_(n+1)_x)^2 + (E_c_y - E_(n+1)_y)^2);
If dist(E_c, E_(n+1)) < T, the cumulative average shoulder position is calculated as the new anchor position and the count is incremented by 1:
A = (E_c * n + E_(n+1)) / (n + 1).
2. The Kinect-based method for positioning a three-dimensional medical model through gestures according to claim 1, wherein the screen boundaries corresponding to the gesture limits in the x-y plane are defined as follows: with the upper arm naturally raised and the forearm perpendicular to the upper arm, taking the same-side shoulder as the center point, 15 cm to each side (30 cm in total) is the horizontal dynamic range of the gesture, and 10 cm above and below (20 cm in total) is the vertical dynamic range.
3. The Kinect-based method for positioning a three-dimensional medical model through gestures according to claim 1, wherein in step S2, during gesture movement, let the current hand position be p_h and the corresponding screen position be p; then p is calculated as:
Horizontal coordinate: p_x = (p_h_x - C_h_x) / w_h * w + C_x;
Vertical coordinate: p_y = (p_h_y - C_h_y) / h_h * h + C_y.
CN202110365217.2A 2021-04-06 2021-04-06 Kinect-based method for positioning three-dimensional medical model through gestures Pending CN113110739A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110365217.2A CN113110739A (en) 2021-04-06 2021-04-06 Kinect-based method for positioning three-dimensional medical model through gestures


Publications (1)

Publication Number Publication Date
CN113110739A 2021-07-13

Family

ID=76713929

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110365217.2A Pending CN113110739A (en) 2021-04-06 2021-04-06 Kinect-based method for positioning three-dimensional medical model through gestures

Country Status (1)

Country Link
CN (1) CN113110739A (en)


Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105404384A (en) * 2015-11-02 2016-03-16 深圳奥比中光科技有限公司 Gesture operation method, method for positioning screen cursor by gesture, and gesture system
CN107678540A (en) * 2017-09-08 2018-02-09 广东广业开元科技有限公司 Virtual touch screen man-machine interaction method, system and device based on depth transducer
US20210081052A1 (en) * 2019-09-17 2021-03-18 Gaganpreet Singh User interface control based on elbow-anchored arm gestures
CN111736697A (en) * 2020-06-22 2020-10-02 四川长虹电器股份有限公司 Camera-based gesture control method

Similar Documents

Publication Publication Date Title
EP3858433B1 (en) Tms positioning navigation apparatus for transcranial magnetic stimulation treatment
CN106859742B (en) Puncture operation navigation positioning system and method
US10384348B2 (en) Robot apparatus, method for controlling the same, and computer program
CN110236682B (en) System and method for recentering imaging device and input control device
WO2019233227A1 (en) Visual navigation-based dental robot path planning system and method
CN109152615A (en) The system and method for being identified during robotic surgery process and tracking physical object
EP3640949A1 (en) Augmented reality with medical imaging
JP2017538452A5 (en)
US20220054200A1 (en) Calibration method and device for dental implant navigation surgery, and tracking method and device for dental implant navigation surgery
US20210315637A1 (en) Robotically-assisted surgical system, robotically-assisted surgical method, and computer-readable medium
WO2012129669A1 (en) Gesture operated control for medical information systems
CN204655073U (en) A kind of orthopaedics operation navigation system
CN115741732B (en) Interactive path planning and motion control method for massage robot
JP2022519307A (en) Hand-eye collaboration system for robotic surgery system
CN108090448A (en) Model is worth evaluation method in a kind of Virtual assemble
CN113110739A (en) Kinect-based method for positioning three-dimensional medical model through gestures
CN113509296A (en) Method and system for automatically adjusting position and posture of acetabular cup and surgical robot
WO2023078249A1 (en) Obstacle avoidance method, system and apparatus for surgical robot, and storage medium
WO2020142338A1 (en) Needle insertion into subcutaneous target
US20240100701A1 (en) Method for determining safety-limit zone, and device, reset method and medical robot using the same
CN115005979A (en) Computer-readable storage medium, electronic device, and surgical robot system
JP2004287823A (en) Pointing operation supporting system
CN115429432A (en) Readable storage medium, surgical robot system and adjustment system
TW202222270A (en) Surgery assistant system and related surgery assistant method
US20190196602A1 (en) Information processing device, information processing method, and program

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 2021-07-13