CN113741701A - Brain nerve fiber bundle visualization method and system based on somatosensory gesture control - Google Patents

Brain nerve fiber bundle visualization method and system based on somatosensory gesture control

Info

Publication number
CN113741701A
CN113741701A (application CN202111159943.5A)
Authority
CN
China
Prior art keywords
gesture
fiber bundle
image
nerve fiber
hand
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111159943.5A
Other languages
Chinese (zh)
Inventor
赵嘉玥
王俊彦
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang Lab
Original Assignee
Zhejiang Lab
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang Lab filed Critical Zhejiang Lab
Priority to CN202111159943.5A priority Critical patent/CN113741701A/en
Publication of CN113741701A publication Critical patent/CN113741701A/en
Pending legal-status Critical Current

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/017 - Gesture based interaction, e.g. based on a set of recognized hand gestures
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 - Pattern recognition
    • G06F 18/20 - Analysing
    • G06F 18/24 - Classification techniques
    • G06F 18/241 - Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F 18/2411 - Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on the proximity to a decision surface, e.g. support vector machines
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011 - Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F 3/015 - Input arrangements based on nervous system activity detection, e.g. brain waves [EEG] detection, electromyograms [EMG] detection, electrodermal response detection

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Human Computer Interaction (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Biomedical Technology (AREA)
  • Evolutionary Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Artificial Intelligence (AREA)
  • Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Dermatology (AREA)
  • General Health & Medical Sciences (AREA)
  • Neurology (AREA)
  • Neurosurgery (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The invention discloses a brain nerve fiber bundle visualization method and system based on somatosensory gesture control. The method comprises capturing hand images; separating the hand from the background; recognizing the user's gesture and analyzing its change of state; and performing the corresponding real-time operation on the nerve fiber bundle model according to the gesture information. The system comprises a somatosensory device, a computer and a display, the somatosensory device being a depth camera; the computer comprises a somatosensory device interface module, a gesture recognition control module, a back-end server and a front-end module of the visualization interface, and the gesture recognition control module in turn comprises a gesture separation module, a gesture recognition module and a gesture control module. The invention mainly solves the problem that a mouse and keyboard are ill-suited to complex interactive operations on three-dimensional fiber bundles.

Description

Brain nerve fiber bundle visualization method and system based on somatosensory gesture control
Technical Field
The invention relates to the technical field of human-computer interaction, in particular to a brain nerve fiber bundle visualization method and system based on somatosensory gesture control.
Background
At present, with the development of computer science, medical images are finding ever wider application. Nerve fiber bundle images show the distribution and course of the nerve fiber bundles in the human brain and play an important role in the study of brain diseases, nerve function and psychological cognition.
Nerve fibers are composed of the axons of neurons and the surrounding myelin sheath structures, and form various conduction tracts within the central nervous system. Diffusion-weighted magnetic resonance images can be obtained through non-invasive magnetic resonance imaging, and fiber bundle tracking algorithms can reconstruct the complex white matter fiber structure of the brain. Some visualization tools already provide interfaces for displaying the three-dimensional structure of nerve fiber bundles, but the operation functions of these visualization interfaces are relatively limited: they support only mouse and keyboard operations, are not suited to complex interactive operations in three-dimensional space, and do not help researchers understand the structure of complex three-dimensional nerve fiber bundles.
Disclosure of Invention
To overcome the defects of the prior art, the invention detects human gestures with a motion sensing device and maps defined gestures to corresponding instructions, freeing the user from contact input devices and enabling complex control of three-dimensional images by gesture. To this end, the invention adopts the following technical scheme:
a brain nerve fiber bundle visualization method based on somatosensory gesture control comprises the following steps:
s1, loading the nerve fiber bundle image data, rendering, and displaying the nerve fiber bundle in a three-dimensional image form;
s2, acquiring depth information and color information of the hand image, identifying the hand position, and acquiring a hand binary image;
s3, extracting hand features according to the hand binary image, identifying gesture categories, and calculating the position of a hand center point and/or a fingertip based on the contour information and the depth information of the hand image;
s4, judging whether the gesture is switched according to the gesture category of the previous frame, if the current gesture category is the same as the gesture category of the previous frame, calculating the moving distance and angle between two hands according to the positions of the central points and/or the fingertips of the hands, and if the current gesture category is different from the gesture category of the previous frame, taking the current gesture category as the start of the corresponding control operation, and initializing the moving distance and angle value;
s5, according to the type and state of the gesture, performing control operation, wherein the control operation comprises the following steps:
s51, single nerve fiber bundle selection operation: the gesture type is one finger of a single hand, the single nerve fiber bundle at the corresponding position on the three-dimensional image is highlighted according to the position of the fingertip, meanwhile, the unselected nerve fiber bundle is hidden or weakened, the highlighting is cancelled when the fingertip moves away from the selected single nerve fiber bundle, and meanwhile, the state of the unselected nerve fiber bundle is recovered; when the gesture category is changed from a single hand and one finger to an enqueue gesture category, the highlighted single nerve fiber bundle is added into the selection list, and if the single nerve fiber bundle at the position exists in the selection list, the single nerve fiber bundle is removed, so that the method is more flexible and convenient compared with the method for checking and selecting a complex three-dimensional nerve fiber bundle by a mouse and a keyboard;
s52, multiple nerve fiber bundle selection operation: when the gesture category is single-hand two-finger, highlighting the nerve fiber bundle cluster between the two finger tip positions on the three-dimensional image according to the positions of the two finger tips, hiding or weakening the unselected nerve fiber bundle cluster, canceling the highlighting when the two fingers recover the original positions, and recovering the state of the unselected nerve fiber bundle cluster; when the gesture category is changed from a single-hand two-finger gesture category to an enqueue gesture category, the highlighted nerve fiber bundle cluster is added into the selection list, and if the nerve fiber bundle cluster at the position exists in the selection list, the nerve fiber bundle cluster is removed, so that the method is more flexible and convenient compared with the method for checking and selecting a complex three-dimensional nerve fiber bundle cluster by a mouse and a keyboard;
s53, rotational translation operation: the gesture type is a single-hand palm, when the center position of the hand is relatively unchanged and the fingertips move, rotating operation is carried out, and the angle and the direction of movement of the fingertips relative to the center of the hand correspond to the angle and the direction of rotation of the three-dimensional image; when the center of the hand and the fingertip move for the same distance, performing translation operation, and translating the three-dimensional image for the corresponding distance in the corresponding direction;
s54, zoom advance and retreat operation: when the gesture type is the palms of the two hands, when the centers of the hands of the two hands are close to or far away from each other in the horizontal and/or vertical direction, carrying out zooming operation, when the two hands are close to each other, reducing the three-dimensional image according to the corresponding position of the coordinates of the centers of the hands in the three-dimensional image, and when the two hands are far away from each other, amplifying the three-dimensional image according to the corresponding position of the coordinates of the centers of the hands in the three-dimensional image; when the centers of the hands of the two hands move in the front-back direction and move backwards or forwards, the forward-backward operation is performed, the two hands move backwards, the depth of the three-dimensional image corresponding to the center of the hand moves forwards, the two hands move forwards, and the depth of the three-dimensional image corresponding to the center of the hand moves backwards, so that compared with the operation of a mouse and a keyboard on a complex three-dimensional nerve fiber bundle, more convenient longitudinal depth operation is provided, usually, a plane point on the three-dimensional image is selected as a reference by the mouse, a rolling roller is used for zooming, if the longitudinal depth operation is required to be performed, the depth adjustment needs to be performed in advance by matching with the keyboard, the operation requirement can be met through the coherent action of the hands, and the two operations in the step S53 or S54 can be completed in one action, for example: the zooming advance and retreat operation is carried out, the two hands move forwards in the process of separating the two hands, and the three-dimensional image achieves the effect of zooming and retreating simultaneously;
s55, exiting the current operation and returning to S4: the gesture category of one hand or two hands is converted into a first exit gesture category;
s56, quitting the somatosensory gesture control operation and returning to S1: the gesture category of one or both hands is changed to a second exit gesture category.
Further, in S1, the nerve fiber bundle image data is read in binary format, the header information and the content data of the image are parsed byte by byte, and each nerve fiber is rendered as a separate object.
Further, in S2, the acquired gesture image is an RGB-D image; for each pixel it is judged whether the RGB color value is within the skin color threshold and whether the depth value D is within the image depth threshold, and the points whose RGB color value and depth value D both fall within the two threshold ranges are taken as hand skin points; the RGB color values of the hand skin points are set to a first color value and the RGB color values of the remaining points are set to a different second color value, yielding the binarized hand image.
Further, the RGB color value determination converts the gesture image from the RGB color space to the YCrCb color space, in which the distribution of skin color approximates an ellipse, and judges whether the (Cr, Cb) value of each pixel falls within the skin color ellipse, where Y represents luminance, Cr represents the red component of the illuminant and Cb represents the blue component of the illuminant; the pixels within the skin color ellipse are taken as hand skin points.
Further, in S3, for the binarized image, the features of the hand image are extracted through a histogram of oriented gradients, an SVM model is used as the classifier to obtain the probabilities that the current gesture belongs to the different categories, and the category with the highest probability is selected as the recognized current gesture category; convexity detection is performed based on the contour information and depth information of the image, and the position of the fingertip is calculated by finding the point farthest from the hand center.
Further, the convexity detection adds depth information to the image to obtain a binarized image, finds the palm contour in the binarized image, calculates the center of gravity within the palm contour, and obtains the point farthest from the center of gravity from the distances between the center of gravity and the contour edge; the image coordinate position of that point is the fingertip.
Further, in S4, the moving distance and angle between the two gestures are calculated according to the hand center points and/or fingertips of the two frames; for the same point P1(x1, y1, z1) and P2(x2, y2, z2) in the two frames, the moving distance is calculated as:
d = sqrt((x2 - x1)^2 + (y2 - y1)^2 + (z2 - z1)^2)
wherein x, y and z respectively represent the abscissa, the ordinate and the depth information of the image, and the angle between the two points is obtained by calculating the included angle between the line connecting them and the horizontal axis:
θ = arctan((y2 - y1) / (x2 - x1))
further, in S5, if the current gesture category is the beginning of a control operation, the center point of the gesture and/or the center of the nerve fiber bundle image corresponding to the fingertip position at this time; otherwise, calculating the current gesture position according to the displacement information by taking the image space coordinate of the previous frame of gesture as a reference.
Further, in S5, the method further includes S57: when the gesture category is the palms of both hands, a twisting operation is performed. The line connecting the hand centers of the two hands is taken as the reference line; when the two hands move in opposite directions in the plane perpendicular to the reference line, a stretching twist is performed, and when the two hands rotate in the same direction about the midpoint of the reference line, a rotating twist is performed. A control range is set at the position in the three-dimensional image corresponding to the hand-center coordinates, and the twisting operation is applied to the three-dimensional image within that control range. When the gesture category changes to the first exit gesture category, the current operation is exited and the twisted image is reset. This makes it easier to observe the hierarchical relationship between a single nerve fiber bundle and a nerve fiber bundle cluster and between nerve fiber bundle clusters, and overcomes the problem that a mouse or keyboard cannot perform such a twisting observation; at the same time, this operation can be combined with the rotating operation of S53 and the advance-retreat operation of S54, so that the twisted state of the linear solid formed by the fibers can be observed from the side at a specified depth of the three-dimensional image.
A brain nerve fiber bundle visualization system based on somatosensory gesture control comprises somatosensory equipment, a computer and a display, wherein the somatosensory equipment is a depth camera, the computer comprises a somatosensory equipment interface module, a gesture recognition control module, a rear-end server and a front-end module of a visualization interface, and the gesture recognition control module comprises a gesture separation module, a gesture recognition module and a gesture control module;
the depth camera is used for acquiring depth information and color information of the image;
the gesture separation module extracts hand features according to the hand binary image after acquiring the image information through the somatosensory device interface module;
the gesture recognition module is used for recognizing gesture categories and calculating the position of a hand central point and/or a fingertip based on the contour information and the depth information of the hand image; judging whether the gestures are switched or not according to the gesture category of the previous frame, if the current gesture category is the same as the gesture category of the previous frame, calculating the moving distance and angle between the two hands according to the positions of the central points and/or the fingertips of the hands, and if the current gesture category is different from the gesture category of the previous frame, taking the current gesture category as the start of the corresponding control operation, and initializing the moving distance and angle values;
the gesture control module acquires corresponding control operation from the configuration file according to the type and the state of the gesture, and executes the corresponding control operation in the nerve fiber bundle image space according to the configuration file; taking the coordinate when the gesture class appears as the initial position of the gesture class, if the displacement of the gesture class within a certain time threshold is smaller than a certain displacement threshold, taking the position as the end position of the gesture class, and converting the movement distance and the angle obtained by continuous calculation into the distance and the angle in the nerve fiber bundle image space in the process;
and the front-end module of the visual interface is used for loading the nerve fiber bundle image data, rendering the nerve fiber bundle image data and displaying the nerve fiber bundle on the display in a three-dimensional image form.
The invention has the advantages and beneficial effects that:
the invention realizes non-contact three-dimensional gesture control, completely replaces mouse control with gesture operation, provides immersive experience, and solves the problem that the mouse and the keyboard are not easy to carry out complex interactive operation on the three-dimensional fiber bundle. According to the invention, the gesture information is acquired only by using the depth camera, and accurate gesture recognition and control operation can be realized without additional wearable equipment. The discovery realizes the operations of selection, rotation, scaling and the like, is beneficial to analyzing and understanding the nerve fiber bundle image from multiple directions and angles by a user, and improves the efficiency of analyzing the nerve fiber bundle image.
Drawings
FIG. 1 is a schematic structural diagram of the present invention.
Fig. 2 is a flow chart of the method of the present invention.
FIG. 3 is a schematic diagram of an interaction gesture according to the present invention.
FIG. 4 is a three-dimensional view of a multi-fiber bundle prior to grasping in the present invention.
FIG. 5 is a three-dimensional view of a multi-fiber bundle during grasping according to the present invention.
Detailed Description
The following detailed description of embodiments of the invention refers to the accompanying drawings. It should be understood that the detailed description and specific examples, while indicating the present invention, are given by way of illustration and explanation only, not limitation.
As shown in fig. 1, the system hardware of the gesture-controlled visualization tool includes a motion sensing device, a computer and a display. The motion sensing device is an Azure Kinect depth camera, and the computer runs the Windows 10 operating system and is installed with the visualization program, which includes a motion sensing device interface program, a gesture recognition control program, a back-end server and a front-end program of the visualization interface. After the visualization program is started, the front-end program of the visualization interface launches; through the display connected to it, the user can browse and select the image files to be visualized, operate the images shown on the screen with the mouse and keyboard, or enter the somatosensory mode to operate the images. After entering the somatosensory mode, user gestures captured by the depth camera pass through the motion sensing device interface program into the gesture recognition control program, which comprises a gesture separation module, a gesture recognition module and a gesture control module and is used to recognize gesture categories, analyze gesture changes and match the corresponding image operations; the images are then displayed on the display through the front-end program of the visualization interface.
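By way of illustration, the frame-by-frame flow described above can be sketched in Python as follows; the module objects and their method names (camera.capture, separator.segment, recognizer.classify and so on) are assumed interfaces introduced here for clarity, not identifiers from the actual programs.

```python
def somatosensory_loop(camera, separator, recognizer, controller, viewer):
    """Minimal sketch of the per-frame pipeline, under assumed module interfaces:
    capture an RGB-D frame, segment the hand, classify the gesture, compare it
    with the previous frame, and apply the matching operation to the 3-D image."""
    previous = None
    while viewer.in_somatosensory_mode():
        color, depth = camera.capture()                     # RGB-D frame from the depth camera
        hand_mask = separator.segment(color, depth)         # binarized hand image
        gesture = recognizer.classify(hand_mask, depth)     # category + hand centre / fingertip
        state = recognizer.motion_state(gesture, previous)  # switched? moving distance and angle
        controller.apply(gesture, state, viewer)            # corresponding operation on the image
        previous = gesture
```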
The Azure Kinect depth camera is used for capturing RGB-D images and transmitting the RGB-D images to the computer;
the gesture separation module is used for separating the hand from the image background; the method comprises the steps of collecting images of a user, and obtaining color information and depth information in the images for hand capture. The gesture is a bare-hand gesture of two hands, and needs to be located in a sensing area of the somatosensory interaction device. And judging whether the color value on each pixel point is in the skin color threshold value and whether the depth value is in the threshold value range close to the camera in the obtained image. Regarding a point whose color value and depth value are within the threshold range as skin, the color value of the point considered as skin is 255, and the color values of the remaining points are 0, and a binarized hand image is obtained.
The gesture recognition module extracts and represents the image features with a histogram of oriented gradients and uses an SVM model as the classifier to obtain the probabilities that the current gesture belongs to the different categories. The category with the highest probability is selected as the recognized current gesture category. If the current gesture has an extended fingertip, convexity detection is performed based on the contour information and depth information of the image, the point farthest from the center is found, and the fingertip position is calculated.
The gesture control module receives the gesture information and applies the corresponding operation to the nerve fiber bundle image. According to the user's gesture information recognized by the device, including the gesture category and the operation type corresponding to the gesture, the corresponding system operation is obtained from the configuration file, and the corresponding application operation is executed according to that file.
The visualization module realizes three-dimensional visualization of the nerve fiber bundle image based on three-dimensional visualization tools such as vtk and itk. It loads and parses the nerve fiber bundle image data and renders each nerve fiber as an independent object, so that interactive operation on single and multiple nerve fibers becomes possible.
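As an illustrative sketch of rendering every nerve fiber as an independent, individually selectable object, the following uses the Python bindings of VTK (one of the toolkits named above); the assumed input is a list of per-fiber point arrays, and the function name is hypothetical.

```python
import vtk

def add_fiber_actors(fibers, renderer):
    """Render each fiber (an iterable of (x, y, z) points) as its own vtkActor so
    that single fibers can later be picked, highlighted or hidden independently."""
    actors = []
    for fiber in fibers:
        points = vtk.vtkPoints()
        polyline = vtk.vtkPolyLine()
        polyline.GetPointIds().SetNumberOfIds(len(fiber))
        for i, (x, y, z) in enumerate(fiber):
            points.InsertNextPoint(float(x), float(y), float(z))
            polyline.GetPointIds().SetId(i, i)
        cells = vtk.vtkCellArray()
        cells.InsertNextCell(polyline)
        polydata = vtk.vtkPolyData()
        polydata.SetPoints(points)
        polydata.SetLines(cells)
        mapper = vtk.vtkPolyDataMapper()
        mapper.SetInputData(polydata)
        actor = vtk.vtkActor()
        actor.SetMapper(mapper)
        renderer.AddActor(actor)
        actors.append(actor)
    return actors
```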
As shown in fig. 2, the brain nerve fiber bundle visualization method based on somatosensory gesture control includes the following steps:
step one, loading nerve fiber bundle image data, rendering the fiber bundle image by using WebGL and a three.js library, and displaying the image data in a three-dimensional image form, wherein the step is a conventional technology. The user can select to operate the image displayed on the screen through the mouse and the keyboard, and can also enter the somatosensory mode to operate.
Specifically, the nerve fiber bundle image file is read in binary form, the header information and image data are parsed byte by byte, and each nerve fiber is rendered as an object, enabling interactive operation on a single fiber bundle or on multiple fiber bundles.
The original image file is a binary file comprising a header part and a body part. The header content includes the number of fiber bundles, the dimensions, the properties, the number of bytes occupied by the header, and so on; by reading the binary data byte by byte and performing the corresponding data type conversions, information such as the number of fiber bundles contained in the image can be obtained. The body part holds all the image data, including the number of points and the coordinates of each fiber bundle. For example, the first 1000 bytes are the header, and bytes 1001-1004 represent the number of points in the first fiber bundle.
After this operation, an image array is obtained that can be directly used by the front-end program.
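A minimal parsing sketch under the layout of the example above (a fixed-size header followed by, for each fiber bundle, an int32 point count and the point coordinates); the 1000-byte header size is taken from the example, while the little-endian float32 coordinate format is an assumption rather than a definitive file specification.

```python
import struct
import numpy as np

def load_fiber_bundles(path, header_size=1000):
    """Read the header as raw bytes, then repeatedly read an int32 point count
    followed by that many (x, y, z) float32 coordinates, one array per fiber."""
    fibers = []
    with open(path, "rb") as f:
        header = f.read(header_size)          # counts, dimensions, properties, ...
        while True:
            raw = f.read(4)
            if len(raw) < 4:                  # end of file
                break
            n_points = struct.unpack("<i", raw)[0]
            coords = np.frombuffer(f.read(n_points * 3 * 4), dtype="<f4")
            fibers.append(coords.reshape(n_points, 3))
    return header, fibers
```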
Step two, after entering the somatosensory mode, the interactive gesture recognition program is entered. The Azure Kinect depth camera collects images of the user, identifies the hand position from the depth information and color information in the images, and obtains a binarized image of the hand.
Specifically, whether the RGB color value on each pixel point in the obtained RGB-D image is within a skin color threshold value and whether the depth value D is within a threshold value range close to the camera is judged. The image is converted from the RGB color space to the YCrCb color space where the distribution of skin colors approximates an ellipse. And (Cr, Cb) of each pixel point is judged whether to be in the skin color ellipse. Regarding a point of which the color value and the depth value are within the threshold range as one point of the skin, the RGB color value of the point regarded as the skin is 255, and the color values of the remaining points are 0, thereby obtaining a binarized hand image.
Firstly, a connected area with the largest depth value is selected, and the skin color value is judged in the area.
The YCbCr color space is a color model commonly used for skin color detection, where Y represents luminance, Cr represents the red component of the illuminant and Cb represents the blue component of the illuminant. Statistically, when skin information is mapped into YCrCb space, the skin pixels are approximately distributed within an ellipse in the two-dimensional CrCb space.
Conversion from RGB to YCrCb:
Y = 0.299R + 0.587G + 0.114B
Cr = R - Y
Cb = B - Y
skin color detection algorithm:
R > 95 and G > 40 and B > 20 and R > G and R > B and |R - G| > 15 and A > 15 and Cr > 135 and Cb > 85 and Y > 80 and
Cr <= (1.5862 * Cb) + 20 and
Cr >= (0.3448 * Cb) + 76.2069 and
Cr >= (-4.5652 * Cb) + 234.5652 and
Cr >= (-1.15 * Cb) + 301.75 and
Cr >= (-2.2857 * Cb) + 432.85
where A represents opacity.
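For illustration, the combined color and depth thresholding can be written with OpenCV roughly as follows; the Cr/Cb constants are copied from the rules above, the depth band (in millimetres) is an assumed value, and the inequality directions of the last two Cr constraints follow the commonly cited CbCr skin model, in which they are upper bounds (in the listing above they appear with the opposite sign).

```python
import cv2
import numpy as np

def segment_hand(bgr, depth, depth_near=200, depth_far=800):
    """Keep pixels whose depth lies in a near-camera band and whose (Y, Cr, Cb)
    values satisfy the skin rules; return a 0/255 binarized hand image."""
    ycrcb = cv2.cvtColor(bgr, cv2.COLOR_BGR2YCrCb).astype(np.float32)
    y, cr, cb = ycrcb[..., 0], ycrcb[..., 1], ycrcb[..., 2]

    skin = (
        (y > 80) & (cr > 135) & (cb > 85)
        & (cr <= 1.5862 * cb + 20)
        & (cr >= 0.3448 * cb + 76.2069)
        & (cr >= -4.5652 * cb + 234.5652)
        & (cr <= -1.15 * cb + 301.75)        # direction assumed (see lead-in)
        & (cr <= -2.2857 * cb + 432.85)      # direction assumed (see lead-in)
    )
    near = (depth > depth_near) & (depth < depth_far)
    return np.where(skin & near, 255, 0).astype(np.uint8)
```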
Step three, extract the hand features from the obtained binarized image and recognize the gesture category. The gesture categories include the palms of both hands, the palm of one hand, the index finger of one hand, the thumb and index finger of one hand, a fist, and other gestures.
Specifically, a histogram of oriented gradients is used to extract and represent the image features, an SVM model is used as the classifier to obtain the probabilities that the current gesture belongs to the different categories, and the category with the highest probability is selected as the recognized current gesture category; convexity detection is then performed based on the contour information and depth information of the image, the point farthest from the center is found, and the fingertip position is calculated.
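A minimal sketch of the HOG-plus-SVM step, assuming OpenCV's HOGDescriptor and scikit-learn's SVC; the window and cell sizes and the RBF kernel are assumed parameters, not values given in this description.

```python
import cv2
import numpy as np
from sklearn.svm import SVC

# HOG descriptor over a fixed-size binarized hand image (parameters assumed).
hog = cv2.HOGDescriptor(_winSize=(64, 64), _blockSize=(16, 16),
                        _blockStride=(8, 8), _cellSize=(8, 8), _nbins=9)

def hog_features(hand_mask):
    """Resize the binarized hand image to the HOG window and return its feature vector."""
    return hog.compute(cv2.resize(hand_mask, (64, 64))).ravel()

def train_gesture_classifier(masks, labels):
    """Fit a probability-calibrated SVM on HOG features of labelled hand images."""
    X = np.stack([hog_features(m) for m in masks])
    clf = SVC(kernel="rbf", probability=True)
    clf.fit(X, labels)
    return clf

def recognize_gesture(clf, hand_mask):
    """Return the most probable gesture category and its probability."""
    probs = clf.predict_proba(hog_features(hand_mask)[None, :])[0]
    best = int(np.argmax(probs))
    return clf.classes_[best], float(probs[best])
```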
The convexity detection obtains a binarized image through the preceding threshold operations, finds the palm contour in the image, calculates the center of gravity within the contour, and enumerates the distances from the center of gravity to the edge points; the fingertip is the point farthest from the center of gravity. Judging from the contour alone there may be several distant edge points, for example on the wrist edge; in that case the point whose image coordinate lies higher up is taken as the fingertip.
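The centroid and farthest-point rule can be sketched with OpenCV as follows; restricting the candidates to contour points above the centroid (to discard wrist-edge points) follows the remark above and is otherwise an assumption.

```python
import cv2
import numpy as np

def find_fingertip(hand_mask):
    """Return (hand_centre, fingertip) in image coordinates: the centroid of the
    largest contour and the contour point farthest from it, preferring points
    above the centroid so that wrist-edge points are not picked.
    (OpenCV 4.x findContours return signature.)"""
    contours, _ = cv2.findContours(hand_mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None, None
    contour = max(contours, key=cv2.contourArea)
    m = cv2.moments(contour)
    cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]
    pts = contour.reshape(-1, 2).astype(np.float32)
    above = pts[pts[:, 1] < cy]                  # image y grows downwards
    candidates = above if len(above) else pts
    dists = np.hypot(candidates[:, 0] - cx, candidates[:, 1] - cy)
    tip = candidates[int(np.argmax(dists))]
    return (cx, cy), (float(tip[0]), float(tip[1]))
```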
Step four, judge gesture switching according to the gesture in the previous frame.
Specifically, it is judged whether the current gesture category is the same as the gesture in the previous frame of the image; if so, the moving distance and moving angle between the two gestures are calculated. For any two points P1(x1, y1, z1) and P2(x2, y2, z2), the distance between them can be calculated by the formula:
d = sqrt((x2 - x1)^2 + (y2 - y1)^2 + (z2 - z1)^2)
wherein x, y and z respectively represent the abscissa, the ordinate and the depth information of the picture; as for the angle between the two points, it can be obtained by calculating the included angle between the line connecting them and the horizontal axis:
θ = arctan((y2 - y1) / (x2 - x1))
and calculating the moving distance and angle of the center point of the hand or the fingertip in the current frame and the previous frame according to a formula.
For example: a fingertip moves from (0, 1, 0) to (1, 0, 0), which may correspond to the index finger moving up and to the right; the moving distance is
d = sqrt((1 - 0)^2 + (0 - 1)^2 + (0 - 0)^2) = sqrt(2)
and the angle θ obtained at this time is 45° clockwise.
If the current gesture is different from the previous frame, the current gesture is considered as the start of the corresponding operation, and the distance and angle values are zero.
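The per-frame motion computation of step four, written as a small helper; the function name and the degree convention for the angle are illustrative.

```python
import math

def motion_between_frames(p1, p2):
    """Distance and angle between the same key point (hand centre or fingertip)
    in two consecutive frames; p1 and p2 are (x, y, z) with z the image depth."""
    dx, dy, dz = p2[0] - p1[0], p2[1] - p1[1], p2[2] - p1[2]
    distance = math.sqrt(dx * dx + dy * dy + dz * dz)
    angle = math.degrees(math.atan2(dy, dx))   # angle of the connecting line to the horizontal axis
    return distance, angle

# Example from the text: a fingertip moving from (0, 1, 0) to (1, 0, 0)
# gives distance sqrt(2) and angle -45 degrees, i.e. 45 degrees clockwise.
print(motion_between_frames((0, 1, 0), (1, 0, 0)))
```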
Step five, execute the corresponding control operation according to the type and state of the gesture, and display the result on the screen.
In the somatosensory gesture control program, a recognized hand appearing within the range of the motion sensing device marks the start of the control program; the coordinate at which the control gesture appears is taken as its initial position, and when the control gesture stays within a small positional range for a period of time, that position is taken as its end position. The displacement, angle and other quantities between the start position and the end position are taken as the displacement and angle of the corresponding operation on the nerve fiber bundle image.
Specifically, the calculated movement distance and angle are converted into a distance and angle in the nerve fiber bundle image space. If the current gesture is the start of an operation, the center point or the fingertip position of the gesture at the moment is considered to correspond to the center of the display interface of the visualization program; otherwise, calculating the current gesture position according to the displacement information by taking the image space coordinate of the previous frame of gesture as a reference.
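The anchoring rule just described, namely mapping the start of an operation to the display center and accumulating scaled displacement afterwards, might look like the following sketch; the dictionary keys and the scale factors are assumptions.

```python
def gesture_to_image_position(gesture, previous, image_center=(0.0, 0.0, 0.0),
                              scale=(1.0, 1.0, 1.0)):
    """Return the image-space position for the current gesture: the image centre
    at the start of an operation, otherwise the previous image-space position
    plus the scaled hand displacement between the two frames."""
    if previous is None or previous["category"] != gesture["category"]:
        return image_center                    # start of the corresponding operation
    deltas = [(c - p) * s for c, p, s in zip(gesture["point"], previous["point"], scale)]
    px, py, pz = previous["image_position"]
    return (px + deltas[0], py + deltas[1], pz + deltas[2])
```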
All information of the current gesture, including the gesture category, the operation category corresponding to the gesture, the gesture position, and the displacement and angle relative to the previous frame, is sent to the visualization program.
The corresponding operation state is obtained from the operation configuration file and the corresponding operation is performed, as shown in table 1 and fig. 3.
TABLE 1
Gesture category                        Corresponding control operation
One-hand index finger                   Single nerve fiber bundle selection
One-hand thumb and index finger         Multiple nerve fiber bundle (cluster) selection
One-hand palm                           Rotation / translation
Two-hand palms                          Zoom and advance-retreat
One- or two-hand fist                   Exit the current operation
Both hands closed                       Exit somatosensory control, return to mouse operation
The visualization program receives the gesture information and executes the corresponding application operation according to the configuration file. The specific method for operating the image according to the gesture information is as follows:
and when the gesture is a single-hand index finger, entering a single fiber bundle selection mode, highlighting the fiber bundle at the corresponding position on the three-dimensional image according to the position of the fingertip of the index finger, simultaneously setting the unselected fiber bundle to be transparent or semi-transparent, canceling the highlighting when the fingertip moves away from the vicinity of the nerve fiber bundle, and simultaneously recovering the transparency of the unselected fiber bundle. When the gesture is changed from a single index finger to a single palm, the index finger is added to the selection list corresponding to the highlighted fiber bundle, and if the fiber bundle at the position exists in the selection list, the fiber bundle is removed from the selection list. When the gesture turns to fist, the current mode is ended.
And when the gesture is a single-hand thumb and an index finger, entering a multi-fiber bundle selection mode, highlighting the fiber bundle cluster between two fingertip positions on the three-dimensional image according to the positions of the fingertips of the thumb and the index finger, simultaneously setting the unselected fiber bundles to be transparent or semitransparent, canceling the previously selected or added fiber bundle cluster when the thumb and the index finger are closed or restored to the original positions, and simultaneously restoring the transparency of the unselected fiber bundle. When the gesture turns to the palm, the highlighted fiber bundle cluster is added to the selection list and if the fiber bundle at that location already exists in the selection list, it is removed from the selection list. When the gesture turns to fist, the current mode is ended.
When the gesture is a single-hand palm, the gesture enters a rotational translation operation mode, the gesture is rotated according to the moving distance and angle of the hand, when the palm position is relatively unchanged, and when the fingers move, the gesture enters a rotational mode, the angle and direction of the fingers moving relative to the palm correspond to the angle and direction of image rotation. And when the palm and the fingers move for the same distance, entering an image translation mode, and translating the image for a corresponding distance towards a corresponding direction. When the gesture turns to fist, the current mode is ended.
And when the gesture is the palms of the two hands, entering a zooming forward and backward mode. Zooming is carried out according to the relative position of the moving palms of the two hands, when the palms of the two hands are close to or far away from each other in the horizontal direction and/or the vertical direction, a zooming mode is entered, when the palms of the two hands are close to each other, the image is three-dimensionally reduced according to the corresponding positions of the coordinates of the palms in the image, and when the palms of the two hands are far away from each other, the image is three-dimensionally enlarged according to the corresponding positions of the coordinates of the palms in the image. When the palms of the two hands move forwards or backwards in the front-back direction, the mode of advancing and retreating is entered, the palms of the two hands move forwards, the depth corresponding to the three-dimensional image retreats backwards, the palms of the two hands move backwards, and the depth of the three-dimensional image advances forwards.
When the gesture is a one-hand or two-hand fist, the ongoing operation mode is exited until a new control gesture appears.
When the gesture is both hands closed, the somatosensory control mode is exited and the system switches back to mouse operation.
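The mapping from gesture categories to image operations driven by the configuration file (Table 1) can be expressed as a simple lookup; the category and operation names below are illustrative stand-ins for whatever identifiers the configuration file actually uses.

```python
# Illustrative gesture -> operation mapping of the kind read from the configuration file.
GESTURE_OPERATIONS = {
    "one_hand_index_finger":       "select_single_fiber",
    "one_hand_thumb_index_finger": "select_fiber_cluster",
    "one_hand_palm":               "rotate_translate",
    "two_hand_palms":              "zoom_advance_retreat",
    "fist":                        "exit_current_operation",
    "two_hands_closed":            "exit_somatosensory_mode",
}

def apply_gesture(gesture, state, viewer):
    """Look up the operation for the recognised gesture category and forward the
    motion state (positions, distance, angle) to the corresponding viewer method."""
    operation = GESTURE_OPERATIONS.get(gesture["category"])
    if operation is not None:
        getattr(viewer, operation)(state)      # e.g. viewer.rotate_translate(state)
```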
In another example, when the gesture category is the palms of both hands, a twisting operation is performed. The line connecting the hand centers of the two hands is taken as the reference line; when the two hands move in opposite directions in the plane perpendicular to the reference line, a stretching twist is performed, and when the two hands rotate in the same direction about the midpoint of the reference line, a rotating twist is performed. A control range is set at the position in the three-dimensional image corresponding to the hand-center coordinates, and the twisting operation is applied to the three-dimensional image within that control range. When the gesture category changes to the first exit gesture category, the current operation is exited and the twisted image is reset. This makes it easier to observe the hierarchical relationship between a single nerve fiber bundle and a nerve fiber bundle cluster and between nerve fiber bundle clusters, and overcomes the problem that a mouse or keyboard cannot perform such a twisting observation; at the same time, the operation can be combined with the rotation operation and the advance-retreat operation, so that the twisted state of the linear solid formed by the fibers can be observed from the side at a specified depth of the three-dimensional image.
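One possible realization of such a twist is to rotate each fiber point about the reference axis by an angle proportional to its coordinate along that axis; the following numpy sketch is an assumed implementation of that effect, and the mapping from the two-hand motion to angle_per_unit is not specified in this description.

```python
import numpy as np

def twist_points(points, axis_point, axis_dir, angle_per_unit):
    """Rotate each point about the axis (axis_point, axis_dir) by an angle that
    grows linearly with its position along the axis, twisting the fiber geometry
    inside the control range."""
    axis_dir = np.asarray(axis_dir, dtype=float)
    axis_dir = axis_dir / np.linalg.norm(axis_dir)
    rel = np.asarray(points, dtype=float) - axis_point
    t = rel @ axis_dir                           # coordinate along the axis
    radial = rel - np.outer(t, axis_dir)         # component perpendicular to the axis
    perp = np.cross(axis_dir, radial)            # radial component rotated by 90 degrees
    theta = angle_per_unit * t
    rotated = np.cos(theta)[:, None] * radial + np.sin(theta)[:, None] * perp
    return rotated + np.outer(t, axis_dir) + axis_point
```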
The above examples are only intended to illustrate the technical solution of the present invention, but not to limit it; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some or all of the technical features may be equivalently replaced; and the modifications or the substitutions do not make the essence of the corresponding technical solutions depart from the scope of the technical solutions of the embodiments of the present invention.

Claims (10)

1. A brain nerve fiber bundle visualization method based on somatosensory gesture control is characterized by comprising the following steps:
s1, loading the nerve fiber bundle image data, rendering, and displaying the nerve fiber bundle in a three-dimensional image form;
s2, acquiring depth information and color information of the hand image, identifying the hand position, and acquiring a hand binary image;
s3, extracting hand features according to the hand binary image, identifying gesture categories, and calculating the position of a hand center point and/or a fingertip based on the contour information and the depth information of the hand image;
s4, judging whether the gesture is switched according to the gesture category of the previous frame, if the current gesture category is the same as the gesture category of the previous frame, calculating the moving distance and angle between two hands according to the positions of the central points and/or the fingertips of the hands, and if the current gesture category is different from the gesture category of the previous frame, taking the current gesture category as the start of the corresponding control operation, and initializing the moving distance and angle value;
and S5, performing control operation according to the type and the state of the gesture, wherein the control operation comprises the following types:
s51, single nerve fiber bundle selection operation: the gesture type is one finger of a single hand, the single nerve fiber bundle at the corresponding position on the three-dimensional image is highlighted according to the position of the fingertip, meanwhile, the unselected nerve fiber bundle is hidden or weakened, the highlighting is cancelled when the fingertip moves away from the selected single nerve fiber bundle, and meanwhile, the state of the unselected nerve fiber bundle is recovered; when the gesture category is changed from a single hand and one finger to an enqueue gesture category, adding the highlighted single nerve fiber bundle into the selection list, and removing the single nerve fiber bundle at the position if the single nerve fiber bundle at the position already exists in the selection list;
s52, multiple nerve fiber bundle selection operation: the gesture category is single-hand two-finger, the nerve fiber bundle cluster between the two fingertip positions on the three-dimensional image is highlighted according to the positions of the two fingertip, meanwhile, the unselected nerve fiber bundle cluster is hidden or weakened, the highlighting is cancelled when the two fingers recover the original positions, and meanwhile, the state of the unselected nerve fiber bundle cluster is recovered; when the gesture category is changed from a single-hand two-finger gesture category to an enqueue gesture category, adding the highlighted nerve fiber bundle cluster into the selection list, and if the nerve fiber bundle cluster at the position already exists in the selection list, removing the nerve fiber bundle cluster;
s53, rotational translation operation: the gesture type is a single-hand palm, when the center position of the hand is relatively unchanged and the fingertips move, rotating operation is carried out, and the angle and the direction of movement of the fingertips relative to the center of the hand correspond to the angle and the direction of rotation of the three-dimensional image; when the center of the hand and the fingertip move for the same distance, performing translation operation, and translating the three-dimensional image for the corresponding distance in the corresponding direction;
s54, zoom advance and retreat operation: the gesture type is palms of two hands, when the centers of the hands of the two hands are close to or far away from each other in the horizontal and/or vertical direction, zooming operation is carried out, when the two hands are close to each other, the three-dimensional image is reduced according to the corresponding position of the coordinates of the center of the hand in the three-dimensional image, and when the two hands are far away from each other, the three-dimensional image is enlarged according to the corresponding position of the coordinates of the center of the hand in the three-dimensional image; when the centers of the hands of the two hands move backwards or forwards in the front-back direction, carrying out forward and backward operation, wherein the two hands move backwards, the depth of the three-dimensional image corresponding to the center of the hand moves forwards, the two hands move forwards, and the depth of the three-dimensional image corresponding to the center of the hand moves backwards;
s55, exiting the current operation and returning to S4: the gesture category of one hand or two hands is converted into a first exit gesture category;
s56, quitting the somatosensory gesture control operation and returning to S1: the gesture category of one or both hands is changed to a second exit gesture category.
2. The method for visualizing the cranial nerve fiber bundle based on somatosensory gesture control according to claim 1, wherein in S1, the nerve fiber bundle image data is read in a binary form, the head information and the content data of the image are analyzed by bytes, and each nerve fiber is rendered as an object.
3. The method for visualizing the cranial nerve fiber bundle based on somatosensory gesture control according to claim 1, wherein in S2, the acquired gesture image is an RGB-D image, and whether the RGB color value on each pixel point is within a skin color threshold and whether the depth value D is within an image depth threshold is determined, and using the RGB color value and the depth value D, both of which are within the threshold ranges, as a hand skin point, a first color value is set for the RGB color value of the hand skin point, and different second color values are set for the RGB color values of the remaining points, so as to obtain a binarized hand image.
4. The method of claim 3, wherein the RGB color values are determined by converting the gesture image from RGB color space to YCrCb color space, where the skin color distribution is approximated to an ellipse, and determining whether (Cr, Cb) of each pixel point is within a skin color ellipse, where Y represents brightness, Cr represents a red component in the light source, Cb represents a blue component in the light source, and the pixel points within the skin color ellipse are used as hand skin points.
5. The method for visualizing the cranial nerve fiber bundle based on somatosensory gesture control according to claim 1, wherein in S3, for the binary image, the features of the hand image are extracted through a histogram of directional gradients, an SVM model is used as a classifier to obtain the probabilities that the current gesture belongs to different categories, the category with the highest probability is selected as the category of the recognized current gesture, saliency detection is performed based on the contour information and depth information of the image, and the position of the fingertip is calculated by finding the point farthest from the center of the hand.
6. The method for visualizing the cranial nerve fiber bundle based on somatosensory gesture control according to claim 5, wherein the convexity detection is to add depth information to the image to obtain a binary image, to find a palm contour in the binary image, to calculate the center of gravity in the palm contour, and to obtain the point farthest from the center of gravity, whose image coordinate position is the fingertip.
7. The method for visualizing the cranial nerve fiber bundle based on somatosensory gesture control according to claim 1, wherein in S4, the moving distance and angle between two gestures are calculated according to the hand center points and/or fingertips of the two frames before and after; for the same point P1(x1, y1, z1) and P2(x2, y2, z2) in the two frames, the moving distance is calculated as:
d = sqrt((x2 - x1)^2 + (y2 - y1)^2 + (z2 - z1)^2)
wherein x, y and z respectively represent the abscissa, the ordinate and the depth information of the hand pixel, and the angle between the two points is obtained by calculating the included angle between the line connecting them and the horizontal axis:
θ = arctan((y2 - y1) / (x2 - x1))
8. The method for visualizing the cranial nerve fiber bundle based on somatosensory gesture control according to claim 1, wherein in S5, if the current gesture category is the start of a control operation, the center point of the gesture and/or the fingertip position at that time is taken to correspond to the center of the nerve fiber bundle image; otherwise, the current gesture position is calculated according to the displacement information by taking the image space coordinates of the gesture in the previous frame as a reference.
9. The method for visualizing the cranial nerve fiber bundle based on somatosensory gesture control according to claim 1, wherein in S5 the method further comprises S57: when the gesture category is the palms of both hands, performing a twisting operation, taking the line connecting the hand centers of the two hands as a reference line, performing a stretching twist when the two hands move in opposite directions in the plane perpendicular to the reference line, performing a rotating twist when the two hands rotate in the same direction around the midpoint of the reference line, setting a control range at the position in the three-dimensional image corresponding to the hand-center coordinates, performing the twisting operation on the three-dimensional image within the control range, and, when the gesture category changes to the first exit gesture category, exiting the current operation and resetting the twisted image.
10. A brain nerve fiber bundle visualization system based on somatosensory gesture control comprises somatosensory equipment, a computer and a display, and is characterized in that the somatosensory equipment is a depth camera, the computer comprises a somatosensory equipment interface module, a gesture recognition control module, a rear-end server and a front-end module of a visualization interface, and the gesture recognition control module comprises a gesture separation module, a gesture recognition module and a gesture control module;
the depth camera is used for acquiring depth information and color information of the image;
the gesture separation module extracts hand features according to the hand binary image after acquiring the image information through the somatosensory device interface module;
the gesture recognition module is used for recognizing gesture categories and calculating the position of a hand central point and/or a fingertip based on the contour information and the depth information of the hand image; judging whether the gestures are switched or not according to the gesture category of the previous frame, if the current gesture category is the same as the gesture category of the previous frame, calculating the moving distance and angle between the two hands according to the positions of the central points and/or the fingertips of the hands, and if the current gesture category is different from the gesture category of the previous frame, taking the current gesture category as the start of the corresponding control operation, and initializing the moving distance and angle values;
the gesture control module acquires corresponding control operation from the configuration file according to the type and the state of the gesture, and executes the corresponding control operation in the nerve fiber bundle image space according to the configuration file; taking the coordinate when the gesture class appears as the initial position of the gesture class, if the displacement of the gesture class within a certain time threshold is smaller than a certain displacement threshold, taking the position as the end position of the gesture class, and converting the movement distance and the angle obtained by continuous calculation into the distance and the angle in the nerve fiber bundle image space in the process;
and the front-end module of the visual interface is used for loading the nerve fiber bundle image data, rendering the nerve fiber bundle image data and displaying the nerve fiber bundle on the display in a three-dimensional image form.
CN202111159943.5A 2021-09-30 2021-09-30 Brain nerve fiber bundle visualization method and system based on somatosensory gesture control Pending CN113741701A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111159943.5A CN113741701A (en) 2021-09-30 2021-09-30 Brain nerve fiber bundle visualization method and system based on somatosensory gesture control

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111159943.5A CN113741701A (en) 2021-09-30 2021-09-30 Brain nerve fiber bundle visualization method and system based on somatosensory gesture control

Publications (1)

Publication Number Publication Date
CN113741701A true CN113741701A (en) 2021-12-03

Family

ID=78725825

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111159943.5A Pending CN113741701A (en) 2021-09-30 2021-09-30 Brain nerve fiber bundle visualization method and system based on somatosensory gesture control

Country Status (1)

Country Link
CN (1) CN113741701A (en)

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102063618A (en) * 2011-01-13 2011-05-18 中科芯集成电路股份有限公司 Dynamic gesture identification method in interactive system
CN102142084A (en) * 2011-05-06 2011-08-03 北京网尚数字电影院线有限公司 Method for gesture recognition
CN106909871A (en) * 2015-12-22 2017-06-30 江苏达科智能科技有限公司 Gesture instruction recognition methods
CN107272893A (en) * 2017-06-05 2017-10-20 上海大学 Man-machine interactive system and method based on gesture control non-touch screen
CN107563286A (en) * 2017-07-28 2018-01-09 南京邮电大学 A kind of dynamic gesture identification method based on Kinect depth information
CN107578023A (en) * 2017-09-13 2018-01-12 华中师范大学 Man-machine interaction gesture identification method, apparatus and system
CN108520264A (en) * 2018-03-23 2018-09-11 上海数迹智能科技有限公司 A kind of hand contour feature optimization method based on depth image
CN109614922A (en) * 2018-12-07 2019-04-12 南京富士通南大软件技术有限公司 A kind of dynamic static gesture identification method and system
CN109634415A (en) * 2018-12-11 2019-04-16 哈尔滨拓博科技有限公司 It is a kind of for controlling the gesture identification control method of analog quantity
CN109960403A (en) * 2019-01-07 2019-07-02 西南科技大学 For the visualization presentation of medical image and exchange method under immersive environment
CN111639531A (en) * 2020-04-24 2020-09-08 中国人民解放军总医院 Medical model interaction visualization method and system based on gesture recognition


Similar Documents

Publication Publication Date Title
Shenoy et al. Real-time Indian sign language (ISL) recognition
US20170293364A1 (en) Gesture-based control system
CN107885327B (en) Fingertip detection method based on Kinect depth information
Feng et al. Real-time fingertip tracking and detection using Kinect depth sensor for a new writing-in-the air system
Cheng et al. Image-to-class dynamic time warping for 3D hand gesture recognition
CN109359514B (en) DeskVR-oriented gesture tracking and recognition combined strategy method
CN101673161A (en) Visual, operable and non-solid touch screen system
CN110780739A (en) Eye control auxiliary input method based on fixation point estimation
JP6066093B2 (en) Finger shape estimation device, finger shape estimation method, and finger shape estimation program
Li et al. Appearance-based gaze estimator for natural interaction control of surgical robots
Weiyao et al. Human action recognition using multilevel depth motion maps
CN106909871A (en) Gesture instruction recognition methods
Yousefi et al. 3D gesture-based interaction for immersive experience in mobile VR
CN106484108A (en) Chinese characters recognition method based on double vision point gesture identification
CN111460858B (en) Method and device for determining finger tip point in image, storage medium and electronic equipment
CN108521594B (en) Free viewpoint video playing method based on motion sensing camera gesture recognition
WO2021258862A1 (en) Typing method and apparatus, and device and storage medium
CN110750157A (en) Eye control auxiliary input device and method based on 3D eyeball model
CN112199015B (en) Intelligent interaction all-in-one machine and writing method and device thereof
Sonoda et al. A letter input system based on handwriting gestures
Abdallah et al. An overview of gesture recognition
Roy et al. Real time hand gesture based user friendly human computer interaction system
CN116909393A (en) Gesture recognition-based virtual reality input system
WO2024000233A1 (en) Facial expression recognition method and apparatus, and device and readable storage medium
CN113741701A (en) Brain nerve fiber bundle visualization method and system based on somatosensory gesture control

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination