CN111639531A - Medical model interaction visualization method and system based on gesture recognition

Medical model interaction visualization method and system based on gesture recognition

Info

Publication number: CN111639531A
Authority: CN (China)
Prior art keywords: gesture, operator, hand, motion, medical model
Legal status: Pending (the legal status is an assumption and is not a legal conclusion)
Application number: CN202010334488.7A
Other languages: Chinese (zh)
Inventors: 鲁媛媛, 赵静, 何昆仑, 李俊来, 肖若秀, 汪洋, 常秀娟, 周超, 张伟, 肖伟厚
Current Assignee: Chinese PLA General Hospital
Original Assignee: Chinese PLA General Hospital
Priority date: 2020-04-24
Filing date: 2020-04-24
Publication date: 2020-09-08
Application filed by Chinese PLA General Hospital
Priority to CN202010334488.7A
Publication of CN111639531A

Classifications

    • G06V40/20: Movements or behaviour, e.g. gesture recognition (recognition of biometric, human-related or animal-related patterns in image or video data)
    • G06F3/011: Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/017: Gesture based interaction, e.g. based on a set of recognized hand gestures
    • G06T17/00: Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06V10/267: Segmentation of patterns in the image field by performing operations on regions, e.g. growing, shrinking or watersheds

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Psychiatry (AREA)
  • Social Psychology (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The invention provides a medical model interaction visualization method and system based on gesture recognition. The method comprises: acquiring images of an operator and capturing a visual image of the operator's hand motion; performing gesture segmentation on the captured hand-motion visual image with a preset gesture segmentation algorithm; analyzing changes in the operator's gesture based on the gesture segmentation result; and controlling the manipulated medical model in real time according to those changes. By combining three-dimensional reconstruction with gesture recognition, the method and system advance image processing and human-computer interaction technology in the medical field and allow a medical three-dimensional model to be displayed under gesture control.

Description

Medical model interaction visualization method and system based on gesture recognition
Technical Field
The invention relates to the technical field of computer vision, and in particular to a medical model interaction visualization method and system based on gesture recognition.
Background
Human-computer interaction is regarded as one of today's important fields of science and technology. China's National 973 Program and the Outline of the National Medium- and Long-Term Program for Science and Technology Development call for supporting basic research on harmonious human-computer interaction theory and intelligent information processing, and for prioritizing the development of virtual reality and intelligent perception technologies. Mechanical input devices such as keyboards and mice struggle to express three-dimensional, high-degree-of-freedom input, so this style of interaction is inconvenient to some extent. As research has progressed, more and more effort has gone into interaction technologies that match human habits, such as face recognition, human body tracking and recognition, and gesture tracking.
Gesture recognition is one such human-computer interaction technology: an intuitive, simple and convenient means of interaction. Human-computer interaction has gradually shifted from being computer-centric to being human-centric, and currently consists mainly of voice interaction, gesture interaction and the like. Within interaction systems, gesture recognition is applied chiefly to interaction with virtual environments, sign language recognition, and multimedia user interfaces.
Three-dimensional visualization of medical images is a topical problem with a very wide field of application, and the technology has already been applied extensively in three-dimensional model visualization, surface rendering, combined medical imaging, and clinical diagnosis and treatment. With image visualization techniques continually emerging and accelerated algorithms developing rapidly, three-dimensional visualization and its applications in the medical field will keep advancing. Stereoscopic display of medical images reconstructs a three-dimensional model from two-dimensional medical images, providing more intuitive, comprehensive and accurate information about lesions and normal tissue, and bringing great convenience to clinical diagnosis and treatment.
However, medical three-dimensional models are still manipulated with the most basic keyboard-and-mouse controls, and this form of human-computer interaction is not convenient enough.
Disclosure of Invention
The technical problem to be solved by the invention is to provide a medical model interaction visualization method and system based on gesture recognition, so as to at least partially solve the problems of the existing control modes for medical three-dimensional models.
In order to solve the technical problems, the invention provides the following technical scheme:
a method for gesture recognition based medical model interaction visualization, comprising:
carrying out image acquisition on an operator, and capturing a hand motion visual image of the operator;
performing gesture segmentation on the captured hand action visual image by adopting a preset gesture segmentation algorithm;
analyzing the change of the gesture of the operator based on the gesture segmentation result of the hand motion visual image;
and controlling the operated medical model in real time according to the change of the gesture of the operator.
Wherein acquiring images of the operator and capturing the visual image of the operator's hand motion comprises:
acquiring images of the operator through a binocular camera; obtaining calibrated stereo images through stereo calibration so as to complete stereo matching and obtain a disparity map; and computing a depth image by triangulation using the internal and external parameters of the binocular camera, so as to capture the operator's hand-motion visual image, wherein the captured hand-motion visual images comprise a left visual image and a right visual image.
Wherein performing gesture segmentation on the captured hand-motion visual image comprises:
applying the preset gesture segmentation algorithm to the left visual image and the right visual image respectively, and obtaining the initial position information of the segmented operator's hand.
Wherein analyzing the changes in the operator's gesture comprises:
establishing a Cartesian right-handed coordinate system;
selecting a preset tracking algorithm, setting the initial position information as the initial position for the tracking algorithm, and analyzing the overall motion of the operator's hand in the established coordinate system; and
based on the analysis of the overall motion, judging the displacement, rotation and scale change of the frame data relative to a standard frame, monitoring objects within the field of view of the binocular camera, deriving frame motion factors from the operator's hand motion, and describing the attributes of the composite motion by comparing the current frame with the previous frame, wherein the attributes of the composite motion include a rotation axis, a rotation angle, a rotation matrix, a scale factor and a displacement.
Wherein controlling the manipulated medical model in real time comprises:
controlling the zoom of the camera through the operator's gesture changes, thereby zooming the manipulated medical model; and controlling the rotation angle of the camera through the operator's gesture changes, thereby rotating the manipulated medical model.
Wherein, before acquiring images of the operator and capturing the visual image of the operator's hand motion, the method further comprises:
expressing two-dimensional image information stored in a computer in three-dimensional form through mouse drag-and-move operations, realizing rendering and three-dimensional reconstruction of the model, and generating the three-dimensional manipulated medical model.
Accordingly, in order to solve the above technical problems, the present invention further provides the following technical solutions:
a medical model interaction visualization system based on gesture recognition, comprising:
the binocular camera is used for collecting images of an operator and capturing hand movement visual images of the operator;
the gesture segmentation module is used for performing gesture segmentation on the hand action visual image captured by the binocular camera by adopting a preset gesture segmentation algorithm;
the gesture analysis and tracking module is used for analyzing the change of the gesture of the operator based on the gesture segmentation result of the hand motion visual image by the gesture segmentation module;
and the gesture recognition and control module is used for controlling the operated medical model in real time according to the change of the gesture of the operator.
Wherein the binocular camera is specifically configured to:
acquire images of the operator; obtain calibrated stereo images through stereo calibration so as to complete stereo matching and obtain a disparity map; and compute a depth image by triangulation using the internal and external parameters of the binocular camera, so as to capture the operator's hand-motion visual images, wherein the captured hand-motion visual images comprise a left visual image and a right visual image.
Wherein the gesture segmentation module is specifically configured to:
apply the preset gesture segmentation algorithm to the left visual image and the right visual image respectively, and obtain the initial position information of the segmented operator's hand.
Wherein the gesture analysis and tracking module is specifically configured to:
establish a Cartesian right-handed coordinate system;
select a preset tracking algorithm, set the initial position information as the initial position for the tracking algorithm, and analyze the overall motion of the operator's hand in the established coordinate system; and
based on the analysis of the overall motion, judge the displacement, rotation and scale change of the frame data relative to a standard frame, monitor objects within the field of view of the binocular camera, derive frame motion factors from the operator's hand motion, and describe the attributes of the composite motion by comparing the current frame with the previous frame, wherein the attributes of the composite motion include a rotation axis, a rotation angle, a rotation matrix, a scale factor and a displacement.
The technical scheme of the invention has the following beneficial effects:
1. The invention combines medical three-dimensional reconstruction with gesture recognition, further developing image processing and human-computer interaction technology in the medical field.
2. The vision-based gesture interaction system captures the motion information of natural human hands while carrying out the task of interacting with a virtual system. Such a system generally needs only one or more cameras to record gesture information; the hands need not wear additional electronic equipment and are not constrained by wearable devices. Gesture recognition occupies a very important position in the field of human-computer interaction, and accurate gesture recognition fundamentally ensures that the interaction process proceeds smoothly.
Drawings
Fig. 1 is a schematic flowchart of a medical model interaction visualization method based on gesture recognition according to a first embodiment of the present invention;
Fig. 2 is a schematic flowchart of a medical model interaction visualization method based on gesture recognition according to a second embodiment of the present invention.
Detailed Description
In order to make the technical problems, technical solutions and advantages of the present invention more apparent, the following detailed description is given with reference to the accompanying drawings and specific embodiments.
First embodiment
Referring to fig. 1, the present embodiment provides a medical model interaction visualization method based on gesture recognition, which comprises the following steps:
S101, acquiring images of an operator and capturing a visual image of the operator's hand motion.
Specifically, images of the operator are acquired through a binocular camera; calibrated stereo images are obtained through stereo calibration so as to complete stereo matching and obtain a disparity map; and a depth image is computed by triangulation using the internal and external parameters of the camera, so as to capture the operator's hand-motion visual image. The hand-motion visual images comprise a left visual image and a right visual image.
S102, performing gesture segmentation on the captured hand-motion visual image with a gesture segmentation algorithm.
In this step, the selected gesture segmentation algorithm is applied to the left visual image and the right visual image respectively; the initial position information of the segmented operator's hand is obtained and set as the initial position for the tracking algorithm.
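The segmentation algorithm itself is left "preset" in the text. Purely as one plausible instantiation (the colour thresholds and the largest-contour heuristic are assumptions, not the patent's method), a skin-colour segmentation in YCrCb space could look like:

```python
import cv2

def segment_hand(bgr):
    """One possible gesture segmentation: skin-colour mask + largest contour."""
    ycrcb = cv2.cvtColor(bgr, cv2.COLOR_BGR2YCrCb)
    # A commonly used Cr/Cb skin range; the patent does not fix these values.
    mask = cv2.inRange(ycrcb, (0, 133, 77), (255, 173, 127))
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (7, 7))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None  # no hand found in this view
    hand = max(contours, key=cv2.contourArea)
    # The bounding box doubles as the initial position handed to the tracker.
    return cv2.boundingRect(hand)

# Applied to each view independently:
# init_left, init_right = segment_hand(left_image), segment_hand(right_image)
```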
S103, analyzing the changes in the operator's gesture based on the gesture segmentation result of the hand-motion visual image.
It should be noted that this step analyzes the overall movement of the operator's hand. Based on the analysis of the overall motion, the displacement, rotation, scale change and the like of the frame data relative to a standard frame are judged; objects within the field of view of the binocular camera are monitored; frame motion factors are derived from the operator's hand motion; and the attributes of the composite motion are described by comparing the current frame with the previous frame. The attributes of the composite motion include a rotation axis, a rotation angle, a rotation matrix, a scale factor, a displacement, and the like.
S104, controlling the manipulated medical model in real time according to the changes in the operator's gesture.
Control of the model in this step includes: controlling the zoom of the camera through the operator's gesture changes, thereby zooming the manipulated medical model; and controlling the rotation angle of the camera through the operator's gesture changes, thereby rotating the manipulated medical model.
If the tracked target disappears, the gesture must be segmented again and the above operations repeated.
In this embodiment, images of the operator are acquired and visual images of the hand motion are captured; the captured images are segmented with a gesture segmentation algorithm; changes in the operator's gesture are analyzed from the segmentation result; and the manipulated medical model is controlled in real time accordingly, so that the medical model is displayed under gesture control.
Second embodiment
Referring to fig. 2, the present embodiment provides a medical model interaction visualization method based on gesture recognition, which comprises the following steps:
S101, realizing rendering of the manipulated model: two-dimensional image information stored in a computer is represented in three-dimensional form through operations such as dragging and moving the mouse, and selection of a particular region can be completed in cooperation with the keyboard. The model rendering function is a main function of the system.
s102, acquiring an operator action visual image through a binocular camera, acquiring a calibrated stereo image through stereo calibration so as to complete stereo matching, acquiring a parallax image, and acquiring a depth image by utilizing triangulation calculation in combination with internal parameters and external parameters of a camera;
s103, processing the formed left and right visual images by a gesture segmentation algorithm respectively, acquiring initial position information of the segmented human hand, and setting the information as an initial position of a tracking algorithm;
s104, a Cartesian right-hand coordinate system is established, namely the four fingers of the right hand make a fist from the square of the X axis to the positive direction of the Y axis, and the thumb direction is the positive direction of the Z axis. In millimeters, the origin is at the center. The X and Z axes are horizontal and the Y axis is vertical. Increasing the Z value away from the screen and increasing the Y value upwards;
and S105, describing the motion through the frame attribute. Based on the analysis of the overall motion, the judgment criteria are that the frame data has undergone displacement, rotation, scale change, etc. The attributes of the resultant motion include: rotation coordinates (Rotation Axis, i.e., Rotation of coordinates described by a direction vector); rotation Angle (Rotation Angle, Rotation Angle in the clockwise direction with respect to the Rotation coordinate); rotation Matrix (Rotation Matrix, a Matrix transformation used to describe Rotation); scaling factor (ScaleFactor, i.e., a factor to describe inflation and deflation); displacement (Translation, i.e. describing a linear motion with a vector). Objects in the application scene are manipulated through motion factors, namely, the factors can be manually modified, and independent hands and fingers do not need to be tracked in multi-frame data;
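How the motion factors are computed is not specified. One hedged reading, an assumption rather than the patent's stated estimator, is to fit a 2D similarity transform between the tracked hand points of the previous and current frames and read the attributes off it:

```python
import math
import cv2

def frame_motion_factors(prev_pts, curr_pts):
    """Estimate RotationAngle, RotationMatrix, ScaleFactor and Translation
    between two frames from Nx2 arrays of tracked hand points (a sketch of
    the in-plane case only, so no 3D RotationAxis is recovered here)."""
    M, _ = cv2.estimateAffinePartial2D(prev_pts, curr_pts)  # 2x3 similarity
    if M is None:
        return None  # tracking failed; the gesture must be re-segmented
    a, b = M[0, 0], M[1, 0]                 # a = s*cos(theta), b = s*sin(theta)
    scale_factor = math.hypot(a, b)                    # ScaleFactor
    rotation_angle = math.degrees(math.atan2(b, a))    # RotationAngle
    rotation_matrix = M[:, :2] / scale_factor          # RotationMatrix
    translation = M[:, 2]                              # Translation (dx, dy)
    return rotation_angle, rotation_matrix, scale_factor, translation
```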
s106, establishing a gesture control algorithm, and matching with a camera function in The Visualization Toolkit library for use, namely controlling The zooming of a camera through gestures to realize The zooming of a model; and controlling the rotation angle of the camera to realize the rotation of the model.
The medical model interaction visualization method based on gesture recognition of this embodiment combines medical three-dimensional reconstruction with gesture recognition, further developing image processing and human-computer interaction technology in the medical field, and is suitable for displaying a medical model under gesture control.
Third embodiment
This embodiment provides a medical model interaction visualization system based on gesture recognition, which comprises:
a binocular camera for acquiring images of an operator and capturing visual images of the operator's hand motion;
a gesture segmentation module for performing gesture segmentation, with a preset gesture segmentation algorithm, on the hand-motion visual images captured by the binocular camera;
a gesture analysis and tracking module for analyzing changes in the operator's gesture based on the gesture segmentation result produced by the gesture segmentation module; and
a gesture recognition and control module for controlling the manipulated medical model in real time according to the changes in the operator's gesture.
The medical model interaction visualization system based on gesture recognition of this embodiment corresponds to the medical model interaction visualization method of the preceding embodiments: the functions realized by each functional module correspond to the steps of the method, so the description is not repeated here.
Furthermore, it should be appreciated by those skilled in the art that embodiments of the present invention may be provided as a method, apparatus, or computer program product. Accordingly, embodiments of the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, embodiments of the present invention may take the form of a computer program product embodied on one or more computer-usable storage media having computer-usable program code embodied in the medium.
Embodiments of the present invention are described with reference to flowchart illustrations and/or block diagrams of methods, terminal devices (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, embedded processor, or other programmable data processing terminal to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing terminal, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing terminal to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks. These computer program instructions may also be loaded onto a computer or other programmable data processing terminal to cause a series of operational steps to be performed on the computer or other programmable terminal to produce a computer implemented process such that the instructions which execute on the computer or other programmable terminal provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
It should also be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or terminal that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or terminal. Without further limitation, an element introduced by the phrase "comprising a ..." does not exclude the presence of other identical elements in the process, method, article, or terminal that comprises the element.
Finally, it should be noted that while the above describes preferred embodiments of the invention, those skilled in the art, once acquainted with the basic inventive concept, may make numerous modifications and adaptations without departing from the principles of the invention, and these are intended to be covered by the claims. Therefore, the appended claims are intended to be interpreted as including the preferred embodiments and all alterations and modifications that fall within the scope of the embodiments of the invention.

Claims (10)

1. A medical model interaction visualization method based on gesture recognition, characterized by comprising:
acquiring images of an operator and capturing a visual image of the operator's hand motion;
performing gesture segmentation on the captured hand-motion visual image with a preset gesture segmentation algorithm;
analyzing changes in the operator's gesture based on the gesture segmentation result of the hand-motion visual image; and
controlling the manipulated medical model in real time according to the changes in the operator's gesture.
2. The medical model interaction visualization method based on gesture recognition according to claim 1, wherein acquiring images of the operator and capturing the visual image of the operator's hand motion comprises:
acquiring images of the operator through a binocular camera; obtaining calibrated stereo images through stereo calibration so as to complete stereo matching and obtain a disparity map; and computing a depth image by triangulation using the internal and external parameters of the binocular camera, so as to capture the operator's hand-motion visual image, wherein the captured hand-motion visual images comprise a left visual image and a right visual image.
3. The medical model interaction visualization method based on gesture recognition according to claim 2, wherein the gesture segmentation of the captured hand-motion visual image comprises:
applying the preset gesture segmentation algorithm to the left visual image and the right visual image respectively, and obtaining the initial position information of the segmented operator's hand.
4. The medical model interaction visualization method based on gesture recognition according to claim 3, wherein analyzing the changes in the operator's gesture comprises:
establishing a Cartesian right-handed coordinate system;
selecting a preset tracking algorithm, setting the initial position information as the initial position for the tracking algorithm, and analyzing the overall motion of the operator's hand in the established coordinate system; and
based on the analysis of the overall motion, judging the displacement, rotation and scale change of the frame data relative to a standard frame, monitoring objects within the field of view of the binocular camera, deriving frame motion factors from the operator's hand motion, and describing the attributes of the composite motion by comparing the current frame with the previous frame, wherein the attributes of the composite motion include a rotation axis, a rotation angle, a rotation matrix, a scale factor and a displacement.
5. The medical model interaction visualization method based on gesture recognition according to claim 1, wherein controlling the manipulated medical model in real time comprises:
controlling the zoom of the camera through the operator's gesture changes, thereby zooming the manipulated medical model; and controlling the rotation angle of the camera through the operator's gesture changes, thereby rotating the manipulated medical model.
6. The medical model interaction visualization method based on gesture recognition according to any one of claims 1-5, wherein, before acquiring images of the operator and capturing the visual image of the operator's hand motion, the method further comprises:
expressing two-dimensional image information stored in a computer in three-dimensional form through mouse drag-and-move operations, realizing rendering and three-dimensional reconstruction of the model, and generating the three-dimensional manipulated medical model.
7. A medical model interaction visualization system based on gesture recognition, characterized by comprising:
a binocular camera for acquiring images of an operator and capturing visual images of the operator's hand motion;
a gesture segmentation module for performing gesture segmentation, with a preset gesture segmentation algorithm, on the hand-motion visual images captured by the binocular camera;
a gesture analysis and tracking module for analyzing changes in the operator's gesture based on the gesture segmentation result produced by the gesture segmentation module; and
a gesture recognition and control module for controlling the manipulated medical model in real time according to the changes in the operator's gesture.
8. The medical model interaction visualization system based on gesture recognition according to claim 7, wherein the binocular camera is specifically configured to:
acquire images of the operator; obtain calibrated stereo images through stereo calibration so as to complete stereo matching and obtain a disparity map; and compute a depth image by triangulation using the internal and external parameters of the binocular camera, so as to capture the operator's hand-motion visual images, wherein the captured hand-motion visual images comprise a left visual image and a right visual image.
9. The medical model interaction visualization system based on gesture recognition according to claim 8, wherein the gesture segmentation module is specifically configured to:
apply the preset gesture segmentation algorithm to the left visual image and the right visual image respectively, and obtain the initial position information of the segmented operator's hand.
10. The medical model interaction visualization system based on gesture recognition according to claim 9, wherein the gesture analysis and tracking module is specifically configured to:
establish a Cartesian right-handed coordinate system;
select a preset tracking algorithm, set the initial position information as the initial position for the tracking algorithm, and analyze the overall motion of the operator's hand in the established coordinate system; and
based on the analysis of the overall motion, judge the displacement, rotation and scale change of the frame data relative to a standard frame, monitor objects within the field of view of the binocular camera, derive frame motion factors from the operator's hand motion, and describe the attributes of the composite motion by comparing the current frame with the previous frame, wherein the attributes of the composite motion include a rotation axis, a rotation angle, a rotation matrix, a scale factor and a displacement.
CN202010334488.7A, filed 2020-04-24 (priority date 2020-04-24): Medical model interaction visualization method and system based on gesture recognition. Status: Pending. Published as CN111639531A (en).

Priority Applications (1)

Application Number: CN202010334488.7A
Priority Date: 2020-04-24
Filing Date: 2020-04-24
Title: Medical model interaction visualization method and system based on gesture recognition

Applications Claiming Priority (1)

Application Number: CN202010334488.7A
Priority Date: 2020-04-24
Filing Date: 2020-04-24
Title: Medical model interaction visualization method and system based on gesture recognition

Publications (1)

Publication Number: CN111639531A (en)
Publication Date: 2020-09-08

Family

ID=72331865

Family Applications (1)

Application Number: CN202010334488.7A
Title: Medical model interaction visualization method and system based on gesture recognition
Priority Date: 2020-04-24
Filing Date: 2020-04-24

Country Status (1)

Country Link
CN (1) CN111639531A (en)

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150018622A1 (en) * 2013-03-13 2015-01-15 Camplex, Inc. Surgical visualization systems
CN103927016A * 2014-04-24 2014-07-16 西北工业大学 Real-time three-dimensional two-hand gesture recognition method and system based on binocular vision
CN104317391A (en) * 2014-09-24 2015-01-28 华中科技大学 Stereoscopic vision-based three-dimensional palm posture recognition interactive method and system
CN107357428A (en) * 2017-07-07 2017-11-17 京东方科技集团股份有限公司 Man-machine interaction method and device based on gesture identification, system
WO2019040493A1 (en) * 2017-08-21 2019-02-28 The Trustees Of Columbia University In The City Of New York Systems and methods for augmented reality guidance
CN108363482A * 2018-01-11 2018-08-03 江苏四点灵机器人有限公司 Method for controlling a smart television with three-dimensional gestures based on binocular structured light
CN108256504A * 2018-02-11 2018-07-06 苏州笛卡测试技术有限公司 Three-dimensional dynamic gesture recognition method based on deep learning
CN109116984A * 2018-07-27 2019-01-01 冯仕昌 Toolbox for three-dimensional interaction scenes
CN109960403A * 2019-01-07 2019-07-02 西南科技大学 Visual presentation and interaction method for medical images in an immersive environment

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
周鲜子, "Leap Motion的现状及展望" [Current status and prospects of Leap Motion], 《黑龙江科技信息》 [Heilongjiang Science and Technology Information] *
林德江; 井志胜; 王国德; 秦国伟; 时华峰, "基于Leap Motion体感控制技术的数字化展示系统研究" [Research on a digital display system based on Leap Motion somatosensory control technology], 《火炮发射与控制学报》 [Journal of Gun Launch and Control]
潘伟洲; 赵仕豪; 徐泽坤; 李兴民, "沉浸式手术环境的研究与实现" [Research and implementation of an immersive surgical environment], 《计算机仿真》 [Computer Simulation]

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112422901A (en) * 2020-10-30 2021-02-26 哈雷医用(广州)智能技术有限公司 Method and device for generating operation virtual reality video
CN112667088A (en) * 2021-01-06 2021-04-16 湖南翰坤实业有限公司 Gesture application identification method and system based on VR walking platform
CN113741701A (en) * 2021-09-30 2021-12-03 之江实验室 Brain nerve fiber bundle visualization method and system based on somatosensory gesture control
CN114663432A (en) * 2022-05-24 2022-06-24 武汉泰乐奇信息科技有限公司 Skeleton model correction method and device
CN114663432B (en) * 2022-05-24 2022-08-16 武汉泰乐奇信息科技有限公司 Skeleton model correction method and device
CN115145456A * 2022-06-29 2022-10-04 重庆长安汽车股份有限公司 Rotation control system and control method of a 3D car model
CN115145456B (en) * 2022-06-29 2024-06-11 重庆长安汽车股份有限公司 Rotation control system and control method of 3D car model
CN115712354A (en) * 2022-07-06 2023-02-24 陈伟 Man-machine interaction system based on vision and algorithm
CN115712354B (en) * 2022-07-06 2023-05-30 成都戎盛科技有限公司 Man-machine interaction system based on vision and algorithm

Similar Documents

Publication Publication Date Title
CN111639531A (en) Medical model interaction visualization method and system based on gesture recognition
US10394334B2 (en) Gesture-based control system
US20220084279A1 (en) Methods for manipulating objects in an environment
US6624833B1 (en) Gesture-based input interface system with shadow detection
JP3114813B2 (en) Information input method
Lu et al. Immersive manipulation of virtual objects through glove-based hand gesture interaction
CN109145802B (en) Kinect-based multi-person gesture man-machine interaction method and device
CN104656893A (en) Remote interaction control system and method for physical information space
CN111862333A (en) Content processing method and device based on augmented reality, terminal equipment and storage medium
CN112527112B (en) Visual man-machine interaction method for multichannel immersion type flow field
WO2013149475A1 (en) User interface control method and device
CN108664126B (en) Deformable hand grabbing interaction method in virtual reality environment
O'Hagan et al. Visual gesture interfaces for virtual environments
Hernoux et al. A seamless solution for 3D real-time interaction: design and evaluation
Kipshagen et al. Touch-and marker-free interaction with medical software
Kruszyński et al. Tangible props for scientific visualization: concept, requirements, application
Ong et al. 3D bare-hand interactions enabling ubiquitous interactions with smart objects
CN108401452B (en) Apparatus and method for performing real target detection and control using virtual reality head mounted display system
Narducci et al. Enabling consistent hand-based interaction in mixed reality by occlusions handling
Tuntakurn et al. Natural interaction on 3D medical image viewer software
Liu et al. COMTIS: Customizable touchless interaction system for large screen visualization
Feng et al. An HCI paradigm fusing flexible object selection and AOM-based animation
Varga et al. Survey and investigation of hand motion processing technologies for compliance with shape conceptualization
Varma et al. Gestural interaction with three-dimensional interfaces; current research and recommendations
CN216719062U (en) Environmental space positioning device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination