CN105353873A - Gesture manipulation method and system based on three-dimensional display - Google Patents

Gesture manipulation method and system based on three-dimensional display Download PDF

Info

Publication number
CN105353873A
Authority
CN
China
Prior art keywords
hand
gesture
gesture motion
steering order
display
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201510735569.7A
Other languages
Chinese (zh)
Other versions
CN105353873B (en)
Inventor
黄源浩
肖振中
许宏淮
钟亮洪
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Orbbec Co Ltd
Original Assignee
Shenzhen Orbbec Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Orbbec Co Ltd filed Critical Shenzhen Orbbec Co Ltd
Priority to CN201510735569.7A priority Critical patent/CN105353873B/en
Publication of CN105353873A publication Critical patent/CN105353873A/en
Priority to PCT/CN2016/076748 priority patent/WO2017075932A1/en
Application granted granted Critical
Publication of CN105353873B publication Critical patent/CN105353873B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/017Gesture based interaction, e.g. based on a set of recognized hand gestures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20Movements or behaviour, e.g. gesture recognition
    • G06V40/28Recognition of hand or arm movements, e.g. recognition of deaf sign language

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • Psychiatry (AREA)
  • Social Psychology (AREA)
  • Multimedia (AREA)
  • User Interface Of Digital Computer (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The present invention relates to a gesture manipulation method based on three-dimensional display. The method comprises: acquiring position information of a hand; establishing a motion trajectory of the hand in a three-dimensional coordinate system; recognizing a gesture motion of the hand according to the motion trajectory; reading a corresponding control instruction according to the gesture motion; and controlling an operation object in a three-dimensional image according to the control instruction. By recognizing a gesture motion of a user, the operation object is controlled to execute the corresponding operation according to the gesture motion. Natural human-computer interaction can therefore be realized without any need to contact a display screen. Furthermore, a gesture manipulation system and an apparatus based on three-dimensional display are also provided.

Description

Gesture control method and system based on three-dimensional display
Technical field
The present invention relates to gesture-based manipulation control technology, and more particularly to a gesture control method and system based on three-dimensional display for natural human-machine interaction.
Background art
With the rapid development and widespread application of fields such as human-computer interaction, robotics and virtual reality, three-dimensional interactive input technology has become a focus for many researchers in the field of human-machine virtual interaction. As this technology continues to develop and deepen, user demands keep rising: non-contact, high-speed, real-time positioning and three-dimensional manipulation have become the direction of its development. Traditional control of a display screen by mouse or touch screen can therefore no longer satisfy these demands.
Summary of the invention
In view of this, it is necessary to provide a non-contact gesture control method based on three-dimensional display that enables natural human-machine interaction.
A gesture control method based on three-dimensional display comprises the following steps:
acquiring position information of a hand, and establishing a motion trajectory of the hand in a three-dimensional coordinate system;
recognizing a gesture motion of the hand according to its motion trajectory in the three-dimensional coordinate system;
reading a corresponding control instruction according to the gesture motion, and controlling an operation object in a three-dimensional image according to the control instruction.
In one embodiment, the step of acquiring position information of the hand and establishing its motion trajectory in the three-dimensional coordinate system comprises:
acquiring a series of continuous depth information of the hand, and forming the motion trajectory of the hand in the three-dimensional coordinate system according to the depth information.
In one embodiment, the operating space of the hand corresponds linearly to the three-dimensional coordinate system, where the operating space is the real space in which the hand performs a series of continuous actions.
In one embodiment, the step of recognizing the gesture motion of the hand according to its motion trajectory in the three-dimensional coordinate system comprises:
extracting hand contour feature information, and performing feature matching in combination with the position information to discriminate and classify the gesture motion.
In one embodiment, the step of reading the corresponding control instruction according to the gesture motion, and controlling the operation object in the three-dimensional image according to the control instruction, comprises:
storing the correspondence between each gesture motion and its control instruction;
after a gesture motion is recognized, reading the control instruction corresponding to this gesture motion from the stored correspondence;
controlling the operation object in the three-dimensional image to perform the corresponding action according to the control instruction.
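The claimed steps can be sketched as a minimal end-to-end pipeline. The sample gestures, displacement thresholds and instruction names below are illustrative assumptions, not taken from the patent:

```python
# Minimal sketch of the claimed method: collect hand positions into a
# trajectory, recognize a gesture motion from it, then look up and return
# the stored control instruction. All names here are illustrative.

def build_trajectory(samples):
    """Collect successive (x, y, z) hand positions into a trajectory."""
    return list(samples)

def recognize_gesture(trajectory):
    """Toy recognizer: classify by net displacement along the z axis."""
    if len(trajectory) < 2:
        return "none"
    dz = trajectory[-1][2] - trajectory[0][2]
    if dz < -0.1:
        return "push_forward"
    if dz > 0.1:
        return "pull_back"
    return "hover"

# Stored correspondence between gesture motions and control instructions.
INSTRUCTIONS = {
    "push_forward": "zoom_in",
    "pull_back": "zoom_out",
    "hover": "move_cursor",
}

def control(samples):
    """Run the full pipeline and return the control instruction."""
    gesture = recognize_gesture(build_trajectory(samples))
    return INSTRUCTIONS.get(gesture, "no_op")
```

A recognized motion then drives the operation object, e.g. `control([(0, 0, 1.0), (0, 0, 0.8)])` yields the zoom-in instruction in this sketch.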
In addition, a non-contact gesture control system based on three-dimensional display for natural human-machine interaction is also provided.
A gesture control system based on three-dimensional display comprises:
an information acquisition module, for acquiring position information of a hand;
a coordinate establishment module, for establishing a motion trajectory of the hand in a three-dimensional coordinate system according to the position information;
a gesture recognition module, for recognizing a gesture motion of the hand according to its motion trajectory in the three-dimensional coordinate system;
an operation control module, for reading a corresponding control instruction according to the gesture motion, and controlling an operation object in a three-dimensional image according to the control instruction.
In one embodiment, the information acquisition module is further used for acquiring a series of continuous depth information of the hand, and the coordinate establishment module is further used for forming the motion trajectory of the hand in the three-dimensional coordinate system according to the depth information.
In one embodiment, the operating space of the hand corresponds linearly to the three-dimensional coordinate system, where the operating space is the real space in which the hand performs a series of continuous actions.
In one embodiment, the gesture recognition module is further used for extracting hand contour feature information, and performing feature matching in combination with the position information to discriminate and classify the gesture motion.
In one embodiment, the operation control module comprises a storage module, a reading module and an execution module. The storage module is used for storing the correspondence between each gesture motion and its control instruction; the reading module is used for reading, after a gesture motion is recognized, the control instruction corresponding to this gesture motion from the stored correspondence; and the execution module is used for controlling the operation object in the three-dimensional image to perform the corresponding action according to the control instruction.
In addition, a non-contact gesture manipulation apparatus based on three-dimensional display for natural human-machine interaction is also provided.
A gesture manipulation apparatus based on three-dimensional display comprises a depth camera, a three-dimensional display and a processor.
The depth camera is used for acquiring a depth image of a hand and outputting it to the processor.
The processor acquires position information of the hand according to the depth image, and establishes a motion trajectory of the hand in a three-dimensional coordinate system according to the position information. The processor is further used for recognizing a gesture motion of the hand according to its motion trajectory in the three-dimensional coordinate system, for reading a corresponding control instruction according to the gesture motion, and for controlling an operation object in a three-dimensional image according to the control instruction.
The processor is further used for controlling the three-dimensional display to show the recognized gesture motion, and to display the trajectory produced by executing the control instruction corresponding to this gesture motion.
In the above gesture control method, system and apparatus based on three-dimensional display, the position information of the hand is acquired, the motion trajectory of the hand in a three-dimensional coordinate system is established, the gesture motion of the hand is recognized from this trajectory, the corresponding control instruction is read according to the gesture motion, and the operation object in the three-dimensional image is controlled according to the control instruction. In other words, by recognizing the gesture motion of a user, the operation object is controlled to perform the corresponding operation. Natural human-machine interaction can therefore be realized without any need to touch a display screen.
Brief description of the drawings
Fig. 1 is a flowchart of the gesture control method based on three-dimensional display;
Fig. 2 is a schematic diagram of the object depth computation model;
Fig. 3(a) is a first schematic diagram of the cursor in the three-dimensional display following the movement of the hand;
Fig. 3(b) is a second schematic diagram of the cursor in the three-dimensional display following the movement of the hand;
Fig. 3(c) is a first schematic diagram of a grasped object in the three-dimensional display following the movement of the hand;
Fig. 3(d) is a second schematic diagram of a grasped object in the three-dimensional display following the movement of the hand;
Fig. 3(e) is a third schematic diagram of a grasped object in the three-dimensional display following the movement of the hand;
Fig. 3(f) is a fourth schematic diagram of a grasped object in the three-dimensional display following the movement of the hand;
Fig. 3(g) is a fifth schematic diagram of a grasped object in the three-dimensional display following the movement of the hand;
Fig. 4 is a module diagram of the gesture control system based on three-dimensional display.
Detailed description of the embodiments
To facilitate an understanding of the present invention, the invention is described more fully below with reference to the relevant drawings, in which preferred embodiments are given. The invention may, however, be embodied in many different forms and is not limited to the embodiments described herein; rather, these embodiments are provided so that the disclosure of the invention will be more thorough and complete.
It should be noted that when an element is referred to as being "fixed to" another element, it can be directly on the other element or intervening elements may also be present. When an element is considered to be "connected to" another element, it can be directly connected to the other element or intervening elements may be present at the same time. The terms "vertical", "horizontal", "left", "right" and similar expressions used herein are for illustrative purposes only.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by those skilled in the art to which the present invention belongs. The terms used in the description of the invention are for the purpose of describing particular embodiments only and are not intended to limit the invention. The term "and/or" as used herein includes any and all combinations of one or more of the associated listed items.
Fig. 1 shows a flowchart of the gesture control method based on three-dimensional display.
First, a depth image of the hand is acquired. A depth image, also called a range image, is an image, or an image channel, whose information relates to the distance from the viewpoint to the surfaces of objects in the scene. In a depth image, the gray value of a pixel corresponds to the depth value of the corresponding scene point; the information contained in a depth image is depth information.
A depth image has two notable properties: color independence, and a gray-value variation direction that coincides with the Z direction of the camera's field of view. Color independence means that, compared with a color image, a depth image is not disturbed by illumination, shadow or environmental changes. Because the direction of gray-value variation coincides with the Z direction of the camera's field of view, a depth image can be used to reconstruct a 3D spatial region within a certain range, and can to some extent resolve occlusion between objects and the overlap of object parts. Moreover, depth information makes it easy to separate the foreground from the background, which reduces the difficulty of image recognition.
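The foreground/background separation by depth mentioned above can be sketched with a simple per-pixel threshold; the 4×4 depth map and the 800 mm cutoff below are made-up illustrative values:

```python
# Sketch: separate a near hand (foreground) from the background using
# only per-pixel depth values. Depths are in millimetres; zero would mean
# "no reading", so only strictly positive depths below the threshold count.

def segment_foreground(depth_image, threshold_mm):
    """Return a binary mask: True where the pixel is nearer than threshold."""
    return [[0 < d < threshold_mm for d in row] for row in depth_image]

depth = [
    [1500, 1500, 1500, 1500],
    [1500,  600,  620, 1500],
    [1500,  610,  630, 1500],
    [1500, 1500, 1500, 1500],
]
mask = segment_foreground(depth, threshold_mm=800)
```

In this toy frame the four central pixels (the "hand") survive the threshold, and all background pixels are masked out.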
According to the imaging principle, depth images are mainly obtained by time-of-flight, structured light, three-dimensional laser scanning and other methods, and are mainly applied to human-computer interaction, where the depth image serves as the basis for pattern recognition.
In the present invention, the depth image can be acquired by the following methods. The first is based on time of flight: by measuring the time difference between the emission of light and its reflection back from the object surface, the depth information of the object surface is calculated. The second method resembles structured-light coding: a known infrared pattern is projected into the scene, and distance is measured from the deformed pattern recorded by an infrared CMOS camera. The working mode mainly identifies the human body and its motions; the core of human-body recognition is the skeleton, and by tracking the bones, the motion of the human body is mapped onto the computer for the relevant simulation and operation. Of course, the methods of acquiring the depth image in the present invention are not limited to the above.
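As a minimal illustration of the time-of-flight principle described above (assuming, for simplicity, that the sensor reports the round-trip time directly):

```python
# Sketch of the time-of-flight principle: depth is half the distance
# light travels in the measured emission-to-return time difference.

C = 299_792_458.0  # speed of light in vacuum, m/s

def tof_depth_m(round_trip_seconds):
    """Depth to the reflecting surface, in metres."""
    return C * round_trip_seconds / 2.0

# A 10 ns round trip corresponds to roughly 1.5 m.
d = tof_depth_m(10e-9)
```

Real ToF sensors measure phase shift or gated intensity rather than raw time, but the distance relation is this one.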
While the depth information of the hand is being collected, the features of the gesture motion are detected, and the control instruction corresponding to the gesture motion is issued. For example, the 3D display cursor follows the hand according to the mapping relation; a grasp of the hand corresponds to a capture-object instruction; a grasp moving forward corresponds to an enlarge instruction; a grasp moving backward corresponds to a shrink instruction; and so on. Once a gesture motion is detected, the control instruction corresponding to it can be output.
In the present invention, gesture motions collected in advance, such as grasping or grasping and moving forward, are stored, and the control instructions corresponding to them are configured. Therefore, when a user makes a grasping motion, or grasps and moves forward, the control instruction corresponding to that motion is executed accordingly. That is, after the gesture-motion data have been collected in advance, any user who makes the corresponding gesture motion causes the control instruction corresponding to that gesture motion to be executed.
For example, opening the palm and grasping may represent the control instructions for zooming out and zooming in respectively; or grasping with the thumb up or down may represent zooming out and zooming in respectively; or moving the hand forward or backward after grasping may represent zooming out and zooming in respectively. Such gesture motions are collected and stored, and the corresponding control instructions are assigned, so that a correspondence is established between gesture motions and control instructions. Thus, when a gesture motion is recognized, the corresponding operation can be performed according to its control instruction. In other embodiments, each gesture motion may also be assigned a different control instruction.
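The stored correspondence between gesture motions and control instructions can be sketched as a simple lookup table; all gesture and instruction names below are illustrative assumptions:

```python
# Sketch of the stored gesture-to-instruction correspondence: gestures
# are registered once, then any recognized gesture resolves to its
# instruction regardless of which user performed it.

class GestureTable:
    def __init__(self):
        self._map = {}

    def register(self, gesture, instruction):
        """Store the correspondence for one gesture motion."""
        self._map[gesture] = instruction

    def lookup(self, gesture):
        """Return the stored instruction, or None for an unknown gesture."""
        return self._map.get(gesture)

table = GestureTable()
table.register("palm_open", "zoom_out")
table.register("grasp", "zoom_in")
table.register("grasp_forward", "enlarge")
table.register("grasp_backward", "shrink")
```

An unregistered motion simply resolves to no instruction, which matches the idea that only pre-collected gestures drive the operation object.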
The collection of gesture motions covers many classes, and each class includes many different examples. The data are captured under natural conditions, in real rooms or offices, under different illumination and from different angles, which makes the collected gesture-motion data more practical.
In the present embodiment, a gesture control method based on three-dimensional display comprises the following steps.
Step S110: acquire position information of the hand, and establish the motion trajectory of the hand in a three-dimensional coordinate system. In the present embodiment, the position information of the hand can be acquired from the depth image.
Specifically, the step of acquiring position information of the hand and establishing its motion trajectory in the three-dimensional coordinate system comprises:
acquiring a series of continuous depth information of the hand, and forming the motion trajectory of the hand in the three-dimensional coordinate system according to the depth information. Based on depth-image acquisition techniques, the present embodiment can use a depth camera to collect image data of a series of continuous hand actions, and then extract a series of continuous depth information of the hand from this image data.
The operating space of the hand corresponds linearly to the three-dimensional coordinate system, where the operating space is the real space in which the hand performs a series of continuous actions; from the image data collected in the operating space by the depth camera, a series of continuous depth information and two-dimensional coordinate information of the hand can be obtained. The three-dimensional coordinate system mentioned above refers to the spatial coordinate system of the stereoscopic image data used to display the three-dimensional image.
After the depth information of the hand is obtained, the corresponding coordinate point in the three-dimensional coordinate system can be found. The skeleton information of the hand is tracked and the motion of the hand is mapped onto the computer while the depth information of the hand is collected; the coordinate point of the hand in the three-dimensional coordinate system is obtained by combining the depth information with the skeleton information, so that the position in the three-dimensional image can be located precisely. When the hand moves, its motion trajectory in the three-dimensional coordinate system can be tracked point by point through the skeleton information; that is, the trajectory of the hand is tracked and the real trajectory is transformed into the three-dimensional coordinate system.
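The linear correspondence between the hand's operating space and the display's three-dimensional coordinate system can be sketched as an independent scale-and-offset per axis; the axis ranges below (metres on the input side, display units on the output side) are illustrative assumptions:

```python
# Sketch: map each axis of the real operating space linearly onto the
# display's three-dimensional coordinate system, then transform a whole
# hand trajectory. All ranges are made-up illustrative values.

def make_axis_map(src_lo, src_hi, dst_lo, dst_hi):
    """Linear map of [src_lo, src_hi] onto [dst_lo, dst_hi]."""
    scale = (dst_hi - dst_lo) / (src_hi - src_lo)
    return lambda v: dst_lo + (v - src_lo) * scale

# Operating space in metres -> display coordinates.
map_x = make_axis_map(-0.3, 0.3, 0, 600)
map_y = make_axis_map(-0.2, 0.2, 0, 400)
map_z = make_axis_map(0.5, 1.5, 0, 300)

def to_display(p):
    x, y, z = p
    return (map_x(x), map_y(y), map_z(z))

# A short real-space trajectory mapped into display coordinates.
trajectory = [to_display(p)
              for p in [(-0.3, -0.2, 0.5), (0.0, 0.0, 1.0), (0.3, 0.2, 1.5)]]
```

Because the map is linear per axis, the shape of the hand's real trajectory is preserved in the display coordinate system, only scaled and shifted.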
In one embodiment, the depth information can be acquired with a parallel stereo vision model. Suppose the extrinsic parameters of camera C1 are represented by rotation matrix R1 and translation vector t1, and the extrinsic parameters of camera C2 by rotation matrix R2 and translation vector t2. If R1 = R2, i.e., the left and right cameras are placed in parallel and their relative position differs only by a translation, such a stereo vision system is a parallel stereo vision system.
Taking the camera coordinate system of camera C1 as the world coordinate system, we then have:
t1 = (0, 0, 0)^T, R1 = R2 = I.
In a parallel stereo vision system, the optical axes of the two cameras are parallel to each other, the x axes of the left and right camera coordinate systems coincide, the epipolar lines are parallel to each other, and the two camera coordinate systems differ only by a translation B (the "baseline") along the x axis.
An object depth computation model is established as shown in Fig. 2 to calculate the object depth.
As shown in Fig. 2, the optical centres (i.e., lens centres) of the left and right cameras are located at Cl and Cr respectively, B is the translation vector between the two optical centres, and f is the focal length of the cameras. Given a point P in space, pl and pr are the projections of the point P on the left and right image planes respectively. Z is the required depth information, i.e., the distance from the spatial point P to the line ClCr joining the optical centres. L and R are the feet of the perpendiculars dropped from the camera optical centres onto the image plane. H is the foot of the perpendicular dropped from the spatial point P onto the image plane.
The similar-triangle relations are then as follows:

(Z − f)/Z = |prH| / (|prH| + |Rpr|),
(Z − f)/Z = (|B| − |Lpl| + |Rpr| + |prH|) / (|B| + |prH| + |Rpr|).

Combining and simplifying the above gives

|prH| = |Rpr| (|B| − |Lpl| + |Rpr|) / (|Lpl| − |Rpr|),
Z = f (|prH| + |Rpr|) / |Rpr| = |B| f / (|Lpl| − |Rpr|).

The last expression is the formula for solving the depth information, where |Lpl| − |Rpr| is the disparity of the corresponding matched points obtained in stereo matching, i.e., the difference x1 − x2 of the image positions of the spatial point P on the two image planes, and the distance |B| between the camera optical centres and the focal length f are obtained by camera calibration.
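The depth formula Z = |B|·f / (x1 − x2) stated above can be sketched directly; the baseline, focal length and matched image positions below are illustrative numbers:

```python
# Sketch of the parallel-stereo depth formula: depth is baseline times
# focal length divided by the disparity of the matched image points.
# Units must be consistent (here: baseline in mm, focal length and image
# positions in pixels, so depth comes out in mm).

def depth_from_disparity(baseline, focal_length, x_left, x_right):
    """Z = baseline * f / (x_left - x_right) for a parallel stereo rig."""
    disparity = x_left - x_right
    if disparity <= 0:
        raise ValueError("disparity must be positive for a point in front of the rig")
    return baseline * focal_length / disparity

# Illustrative calibration: 60 mm baseline, 580 px focal length,
# matched points at 100 px (left) and 71 px (right).
z = depth_from_disparity(60.0, 580.0, 100.0, 71.0)
```

Note the inverse relation: halving the disparity doubles the computed depth, which is why depth precision degrades for distant points.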
Step S120: recognize the gesture motion of the hand according to its motion trajectory in the three-dimensional coordinate system.
Specifically, the step of recognizing the gesture motion of the hand according to its motion trajectory in the three-dimensional coordinate system comprises:
extracting hand contour feature information, and performing feature matching in combination with the position information to discriminate and classify the gesture motion.
In the present embodiment, gesture point-cloud data are computed from the depth information of the gesture using a three-dimensional point cloud. After the computation, the gesture point-cloud data contain only the three-dimensional coordinate position information of the hand joint points and the palm centre point. The point-cloud data are then filtered to remove noise points, yielding the gesture point-cloud information. The three-dimensional gesture point-cloud information is registered to a plane by rotation and translation, the registered point cloud is saved, and the contour feature point information of the gesture point cloud is extracted; the contour feature points include the fingertip points, the fingertip valley points and the palm centre point.
The contour feature point information is combined with the pixel depth values of the depth image to map out the depth value of each contour feature point. A distance-threshold test using the Euclidean distance method then filters out the key fingertip point information. Five finger feature vectors are obtained from the fingertip point information and the corresponding fingertip valley point information, in combination with the registration plane, and the gesture motion is recovered from these feature vectors.
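The Euclidean distance-threshold test for keeping fingertip candidates can be sketched as follows; the palm centre, candidate points and threshold are illustrative assumptions:

```python
# Sketch: keep a contour candidate as a fingertip point only if its
# Euclidean distance from the palm centre exceeds a threshold, filtering
# out points that lie too close to the palm. Coordinates are in metres.

import math

def euclidean(p, q):
    """Euclidean distance between two 3D points."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def filter_fingertips(candidates, palm_center, min_dist):
    """Distance-threshold test on fingertip candidates."""
    return [p for p in candidates if euclidean(p, palm_center) >= min_dist]

palm = (0.0, 0.0, 0.6)
candidates = [(0.09, 0.0, 0.6),   # far from the palm -> kept
              (0.01, 0.01, 0.6),  # near the palm     -> rejected
              (0.0, 0.08, 0.58)]  # far from the palm -> kept
tips = filter_fingertips(candidates, palm, min_dist=0.05)
```

The surviving points would then pair with the fingertip valley points to form the finger feature vectors described above.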
Step S130: read the corresponding control instruction according to the gesture motion, and control the operation object in the three-dimensional image according to the control instruction.
Specifically, the step of reading the corresponding control instruction according to the gesture motion, and controlling the operation object in the three-dimensional image according to the control instruction, comprises:
storing the correspondence between each gesture motion and its control instruction;
after a gesture motion is recognized, reading the control instruction corresponding to this gesture motion from the stored correspondence;
controlling the operation object in the three-dimensional image to perform the corresponding action according to the control instruction.
After a gesture motion is recognized, the control instruction corresponding to it is found from the pre-stored correspondence between gesture motions and control instructions, and the operation object in the three-dimensional image is controlled according to this instruction.
The three-dimensional image of the present embodiment can be a spatial stereoscopic image obtained with true three-dimensional display technology. True three-dimensional display technology refers to techniques, based on holography or on volumetric display, that display stereoscopic image data within a certain physical space to form a real spatial stereoscopic image. Stereoscopic image data are image data with a three-dimensional coordinate system, in which the information of every pixel comprises at least the position information and the image information of that pixel.
Holography here mainly includes conventional holography (transmission holographic display images, reflection holographic display images, image-plane holographic display images, rainbow holographic display images, synthetic holographic display images, etc.) and computer-generated holography (CGH, Computer Generated Hologram). A computer-generated hologram floats in the air and has a wider color gamut. In computer-generated holography, the object used to produce the hologram is described by a mathematical model generated in a computer, and the physical interference of light waves is replaced by a calculation procedure. At each step, the intensity pattern of the CGH model can be determined; this pattern can be output to a reconfigurable device, which in turn modulates the light-wave information and reconstructs the output. In plain terms, CGH obtains the interference pattern of a computer graphic (a virtual object) through computation, replacing the interference recording of the object light wave in conventional holography; the diffraction process of hologram reconstruction is unchanged in principle, and only the reconfigurable light-wave device is added, thereby realizing the holographic display of different static and moving computer graphics.
Based on holography, in some embodiments of the present invention the spatial stereoscopic display device comprises a 360-degree holographic phantom imaging system. This system comprises a light source, a controller and a beam splitter; the light source can be a spotlight, and the controller comprises one or more processors. The controller receives the stereoscopic image data through a communication interface, obtains the interference pattern of the computer graphic (virtual object) after processing, outputs this interference image to the beam splitter, and presents the interference pattern by the light projected from the light source onto the beam splitter, forming a spatial stereoscopic image. The beam splitter here can be a special lens, a quadrangular pyramid, or the like.
Besides the above 360-degree holographic phantom imaging system, the spatial stereoscopic display device can also be based on holographic projection equipment, for example forming a stereoscopic image on air, on a special lens, or on a mist screen. The spatial stereoscopic display device can therefore also be one of air holographic projection equipment, laser-beam holographic projection equipment, holographic projection equipment with a 360-degree holographic display screen (whose principle is to project the image onto a rapidly rotating mirror, thereby realizing the hologram), a veil-type stereo imaging system, and similar devices.
As for volumetric display technology, it refers to exploiting the particular visual mechanism of the human eye to manufacture a physical display composed of voxel particles rather than molecular particles; besides seeing the shape embodied by the light waves, the real existence of the voxels can also be touched. It excites, in an appropriate way, a material located within a transparent display volume, and forms voxels through the absorption or scattering of visible radiation; when materials at many orientations within the volume are excited, the many scattered voxels form a stereoscopic image in three-dimensional space.
The present invention can also adopt with the following method:
(1), rotary body scanning technique, rotary body scanning technique is mainly used in the display of dynamic object.In the art, a string two dimensional image be projected to one rotate or movement screen on, the speed that this screen cannot be perceiveed with observer is simultaneously being moved, because of people persistence of vision thus in human eye, form three-dimensional body.Therefore, the display system of this stereo display technique is used can to realize the true 3-D display (360 ° visual) of image.In system, the light beam of different colours is projected on display medium by light deflector, thus makes dielectric reveal abundant color.Meanwhile, this display medium can allow light beam produce discrete visible point of light, and these points are exactly voxel, corresponding to any point in 3-D view.One group of group voxel is used for setting up image, and observer can from any viewing point to this true 3-D view.Can be produced by the rotation of screen or translation based on the imaging space in the display device of rotary body scanning technique.On the surface of emission, voxel is activated when the inswept imaging space of screen.This system comprises: the subsystems such as laser system, computer control system, rotational display system.
(2), static body imaging technique, three-dimensional image is formed based on frequency upooaversion technology, so-called frequency upooaversion 3 D stereo display spontaneous radiation can go out a kind of fluorescence after utilizing the multiple photon of imaging space Absorption of Medium, thus produce visible pixel.Its ultimate principle utilizes the orthogonal infrared laser cross action of two bundles on up-conversion, through twice resonance absorption of up-conversion, luminescent center electronics is excited to high excitation level, energy level transition just may produce the transmitting of visible ray downwards again, a point in such up-conversion space is exactly a luminous bright spot, if make the point of crossing of two bundle laser do three-dimensional address scan according to certain track in up-conversion, the place that so point of crossing of two bundle laser is scanned should be a bright band can launching visible fluorescence, namely the three-dimensional graph that same laser point of crossing movement locus is identical can be demonstrated.This display packing naked eyes just can see the three-dimensional image of 360 ° of Omnibearing visuals.
Of course, the three-dimensional image in the present invention may also be a 3D image displayed on a display screen based on a 3D display technique. The display screen mentioned here exploits the left-right binocular parallax of the human eye, so that the images shown on the screen are reconstructed by the viewer into a virtual 3D stereoscopic image. Display screens fall into two broad classes: glasses-type display devices and naked-eye display devices. A glasses-type display device uses a flat screen together with 3D glasses. A naked-eye display device, i.e. a naked-eye 3D display, consists of four parts (a 3D stereoscopic terminal, playback software, production software and application technology) and integrates modern technologies such as optics, photography, computing, automatic control, software and 3D animation into a single stereoscopic display system.
Based on the different three-dimensional imaging modes described above, stereoscopic image data defined in a three-dimensional coordinate system can be converted into the image data required as input by different display devices. These display devices adopt different hardware according to their imaging mode; details can be found in the related prior art.
In one embodiment, when the hand performs an open-palm action, skeleton tracking identifies the open-palm action and the corresponding steering order is looked up. Suppose the steering order corresponding to the open palm is a start action; in that case only the cursor corresponding to the hand is displayed. When the hand moves while remaining open, the instruction fed back to the computer by skeleton tracking is merely to track the movement trajectory of the hand, i.e. the displayed cursor follows that trajectory. Because the gesture operating space corresponds to the three-dimensional coordinate system, any movement of the hand within the operating space has a corresponding position in the coordinate system.
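The correspondence between the real operating space and the coordinate system is stated elsewhere in the text to be linear, so it can be sketched as a per-axis linear mapping. The particular bounds below (a 40 cm cube mapped to a unit cube) are illustrative assumptions.

```python
def make_linear_mapping(op_min, op_max, disp_min, disp_max):
    """Return a function mapping a hand position in the real operating
    space to the display's three-dimensional coordinate system, one
    linear interpolation per axis."""
    def to_display(p):
        return tuple(
            d0 + (x - o0) * (d1 - d0) / (o1 - o0)
            for x, o0, o1, d0, d1 in zip(p, op_min, op_max, disp_min, disp_max)
        )
    return to_display

# Operating space: a 40 x 40 x 40 cm cube in front of the depth camera;
# display coordinate system: a unit cube.
to_display = make_linear_mapping((0, 0, 0), (40, 40, 40), (0, 0, 0), (1, 1, 1))
cursor = to_display((20, 10, 30))    # hand position in centimetres
print(cursor)                        # (0.5, 0.25, 0.75)
```

Each tracked hand position is pushed through this mapping, so the cursor in the coordinate system follows the hand's trajectory in the operating space, as described above.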
As shown in Fig. 3(a) and Fig. 3(b), when a certain operation object needs to be operated, the hand is moved so that its cursor enters the control area of the operation object. A grasp combined with forward or backward movement represents the shrink and enlarge instructions. When a grasp is recognized, the starting position of the hand is acquired, generally taking the palm-center position as the starting position. The movement trajectory of the hand is then tracked: when forward movement is recognized, the corresponding steering order is to shrink the operation object; when backward movement is recognized, the corresponding steering order is to enlarge the operation object.
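This grasp-and-move zoom logic can be sketched as a comparison of the hand's current depth against its starting depth. The axis convention (forward movement decreases z) and the dead-zone threshold are illustrative assumptions, not details from the patent.

```python
def zoom_order(start_z, current_z, threshold=0.05):
    """Translate a grasped hand's depth displacement into a steering order.
    Assumed convention: moving forward (towards the display) decreases z
    and shrinks the object; moving backward increases z and enlarges it.
    The dead zone (threshold) suppresses jitter around the start position."""
    dz = current_z - start_z
    if dz < -threshold:
        return "shrink"
    if dz > threshold:
        return "enlarge"
    return "none"

print(zoom_order(0.50, 0.30))   # forward movement -> "shrink"
print(zoom_order(0.50, 0.70))   # backward movement -> "enlarge"
print(zoom_order(0.50, 0.52))   # within dead zone  -> "none"
```

The dead zone prevents unintended zoom from the small tremor of a stationary grasping hand; its width would be tuned to the depth camera's noise.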
In other embodiments, the grasp action serves as a selection instruction: when the cursor corresponding to the hand is within the control area of an operation object and a grasp is recognized, the object at the cursor position becomes the operation object, i.e. it is selected, and operations such as moving, copying and pasting can then be applied to it.
Specifically, as shown in Fig. 3(c), when the hand is in the grasping state and moves forward, the cursor corresponding to the hand is gradually enlarged in the three-dimensional display. As shown in Fig. 3(d), when the hand is in the grasping state and moves backward, the cursor is gradually shrunk. The three-dimensional display may be located below or beside the depth camera; as shown in Fig. 3(e) and Fig. 3(f), the placement of the three-dimensional display does not affect the display of the three-dimensional operating space.
In other embodiments, opening the hand and rotating the fingers serves as a rotation instruction: when the cursor corresponding to the hand is within the control area of an operation object and this action is recognized, the object at the cursor position becomes the operation object and a rotation operation is applied to it.
Based on the embodiments described above, a gesture actuation device based on three-dimensional display comprises a depth camera, a three-dimensional display and a processor.
The depth camera acquires a depth image of the hand and outputs it to the processor.
The processor obtains the position information of the hand from the depth image and establishes the movement trajectory of the hand in the three-dimensional coordinate system from that position information. The processor further identifies the gesture action of the hand from its trajectory in the coordinate system, reads the corresponding steering order according to the gesture action, and controls the operation object in the three-dimensional image according to that steering order.
The processor also controls the three-dimensional display to show the gesture action and to display the trajectory along which the steering order corresponding to that gesture action is performed.
In this embodiment, gesture actions are collected in the operating space (the real space in which the hand performs a series of continuous actions). Holography is therefore adopted for the three-dimensional image, achieving a naked-eye three-dimensional effect: the real space and the virtual display correspond to each other in real time. Thus, when operating the displayed object of the three-dimensional display, the user can accurately grasp it, move it while grasping, and so on.
Referring to Fig. 3(g), suppose for example that the user needs to rotate and move a displayed stereoscopic image to be operated (such as a racket) on a three-dimensional display (here, a holographic three-dimensional display). Because the operating space and the virtual display correspond in real time, the user only needs to find the position in the operating space corresponding to the stereoscopic image and make a grasp action. The depth camera detects the user's gesture action and transmits it to the processor, which then controls the three-dimensional display to show the stereoscopic image (the racket) being grasped by the user's hand. When the user subsequently moves while grasping (or swings the arm) in the operating space, the processor controls the three-dimensional display to show the trajectory along which the stereoscopic image is moved (or swung).
As shown in Fig. 4, a block diagram of the gesture control system based on three-dimensional display, the system comprises:
A data obtaining module, for obtaining the position information of the hand;
A coordinate establishing module, for establishing the movement trajectory of the hand in the three-dimensional coordinate system according to the position information;
A gesture recognition module, for identifying the gesture action of the hand according to its movement trajectory in the three-dimensional coordinate system;
An operation control module, for reading the corresponding steering order according to the gesture action and controlling the operation object in the three-dimensional image according to that steering order.
The data obtaining module is also used to obtain a series of continuous depth information of the hand, and the coordinate establishing module is also used to form the movement trajectory of the hand in the three-dimensional coordinate system from that depth information.
The operating space of the hand has a linear correspondence with the three-dimensional coordinate system, where the operating space is the real space in which the hand performs a series of continuous actions.
The gesture recognition module is also used to extract hand contour feature information and, in combination with the position information, perform feature matching to discriminate and classify gesture actions.
The operation control module comprises a storage module, a reading module and an execution module. The storage module stores the correspondence between each gesture action and its steering order. The reading module, after a gesture action is identified, reads the corresponding steering order from the stored correspondence. The execution module controls the operation object in the three-dimensional image to perform the corresponding action according to the steering order.
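The storage/reading/execution structure above amounts to a lookup table from gesture actions to steering orders, applied to the current operation object. The sketch below follows that structure; the gesture names and the scale-changing handlers are illustrative assumptions.

```python
class OperationControlModule:
    """Minimal sketch of the storage, reading and execution sub-modules."""

    def __init__(self):
        self._table = {}                 # storage module: gesture -> steering order

    def store(self, gesture, order):
        """Storage module: record the correspondence for one gesture."""
        self._table[gesture] = order

    def read(self, gesture):
        """Reading module: look up the steering order after recognition."""
        return self._table.get(gesture)

    def execute(self, gesture, operand):
        """Execution module: apply the steering order to the operation object."""
        order = self.read(gesture)
        if order is not None:
            order(operand)
        return operand

# Illustrative steering orders (hypothetical gesture names and handlers).
def shrink(obj): obj["scale"] *= 0.9
def enlarge(obj): obj["scale"] /= 0.9

ctrl = OperationControlModule()
ctrl.store("grasp_forward", shrink)
ctrl.store("grasp_backward", enlarge)

obj = {"scale": 1.0}
ctrl.execute("grasp_forward", obj)
print(round(obj["scale"], 2))    # 0.9
```

Unknown gestures simply leave the operation object unchanged, matching the text's flow in which an order is executed only after a gesture is identified and its correspondence found.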
In the above gesture control method, system and device based on three-dimensional display, the position information of the hand is obtained, the movement trajectory of the hand in the three-dimensional coordinate system is established, the gesture action of the hand is identified from that trajectory, the corresponding steering order is read according to the gesture action, and the operation object in the three-dimensional image is controlled accordingly. In other words, by recognizing the user's gesture actions and controlling the operation object according to them, natural human-machine interaction is achieved without any need to touch the display screen.
The technical features of the above embodiments may be combined arbitrarily. For brevity, not all possible combinations of these technical features are described; however, any combination of them that contains no contradiction shall be regarded as falling within the scope recorded in this specification.
The above embodiments express only several implementations of the present invention, and their description is relatively specific and detailed, but they shall not therefore be construed as limiting the scope of the patent. It should be pointed out that a person of ordinary skill in the art may make several variations and improvements without departing from the concept of the invention, all of which belong to the protection scope of the present invention. The protection scope of this patent shall therefore be determined by the appended claims.

Claims (11)

1. A gesture control method based on three-dimensional display, comprising the following steps:
obtaining position information of a hand, and establishing a movement trajectory of the hand in a three-dimensional coordinate system;
identifying a gesture action of the hand according to the movement trajectory of the hand in the three-dimensional coordinate system;
reading a corresponding steering order according to the gesture action, and controlling an operation object in a three-dimensional image according to the steering order.
2. The gesture control method based on three-dimensional display according to claim 1, characterized in that the step of obtaining position information of the hand and establishing the movement trajectory of the hand in the three-dimensional coordinate system comprises:
obtaining a series of continuous depth information of the hand, and forming the movement trajectory of the hand in the three-dimensional coordinate system according to the depth information.
3. The gesture control method based on three-dimensional display according to claim 1, characterized in that the operating space of the hand has a linear correspondence with the three-dimensional coordinate system, wherein the operating space is the real space in which the hand performs a series of continuous actions.
4. The gesture control method based on three-dimensional display according to claim 1, characterized in that the step of identifying the gesture action of the hand according to the movement trajectory of the hand in the three-dimensional coordinate system comprises:
extracting hand contour feature information, and performing feature matching in combination with the position information to discriminate and classify the gesture action.
5. The gesture control method based on three-dimensional display according to claim 1, characterized in that the step of reading the corresponding steering order according to the gesture action and controlling the operation object in the three-dimensional image according to the steering order comprises:
storing a correspondence between each gesture action and its corresponding steering order;
after the gesture action is identified, reading the steering order corresponding to the gesture action from the correspondence;
controlling the operation object in the three-dimensional image to perform a corresponding action according to the steering order.
6. A gesture control system based on three-dimensional display, characterized by comprising:
a data obtaining module, for obtaining position information of a hand;
a coordinate establishing module, for establishing a movement trajectory of the hand in a three-dimensional coordinate system according to the position information;
a gesture recognition module, for identifying a gesture action of the hand according to the movement trajectory of the hand in the three-dimensional coordinate system;
an operation control module, for reading a corresponding steering order according to the gesture action, and controlling an operation object in a three-dimensional image according to the steering order.
7. The gesture control system based on three-dimensional display according to claim 6, characterized in that the data obtaining module is also used to obtain a series of continuous depth information of the hand, and the coordinate establishing module is also used to form the movement trajectory of the hand in the three-dimensional coordinate system according to the depth information.
8. The gesture control system based on three-dimensional display according to claim 6, characterized in that the operating space of the hand has a linear correspondence with the three-dimensional coordinate system, wherein the operating space is the real space in which the hand performs a series of continuous actions.
9. The gesture control system based on three-dimensional display according to claim 6, characterized in that the gesture recognition module is also used to extract hand contour feature information and to perform feature matching in combination with the position information to discriminate and classify the gesture action.
10. The gesture control system based on three-dimensional display according to claim 6, characterized in that the operation control module comprises a storage module, a reading module and an execution module; the storage module is used to store a correspondence between each gesture action and its corresponding steering order; the reading module is used to read, after the gesture action is identified, the steering order corresponding to the gesture action from the correspondence; and the execution module is used to control the operation object in the three-dimensional image to perform a corresponding action according to the steering order.
11. A gesture actuation device based on three-dimensional display, characterized by comprising a depth camera, a three-dimensional display and a processor;
the depth camera is used to obtain a depth image of the hand and output it to the processor;
the processor obtains position information of the hand according to the depth image and establishes a movement trajectory of the hand in a three-dimensional coordinate system according to the position information; the processor is also used to identify a gesture action of the hand according to the movement trajectory of the hand in the three-dimensional coordinate system; the processor is further used to read a corresponding steering order according to the gesture action and to control an operation object in a three-dimensional image according to the steering order;
the processor is also used to control the three-dimensional display, according to the gesture action, to show the gesture action and to display the trajectory along which the steering order corresponding to the gesture action is performed.
CN201510735569.7A 2015-11-02 2015-11-02 Gesture control method and system based on Three-dimensional Display Active CN105353873B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201510735569.7A CN105353873B (en) 2015-11-02 2015-11-02 Gesture control method and system based on Three-dimensional Display
PCT/CN2016/076748 WO2017075932A1 (en) 2015-11-02 2016-03-18 Gesture-based control method and system based on three-dimensional displaying

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201510735569.7A CN105353873B (en) 2015-11-02 2015-11-02 Gesture control method and system based on Three-dimensional Display

Publications (2)

Publication Number Publication Date
CN105353873A true CN105353873A (en) 2016-02-24
CN105353873B CN105353873B (en) 2019-03-15

Family

ID=55329857

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510735569.7A Active CN105353873B (en) 2015-11-02 2015-11-02 Gesture control method and system based on Three-dimensional Display

Country Status (2)

Country Link
CN (1) CN105353873B (en)
WO (1) WO2017075932A1 (en)

Cited By (28)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105589293A (en) * 2016-03-18 2016-05-18 严俊涛 Holographic projection method and holographic projection system
CN105955461A (en) * 2016-04-25 2016-09-21 乐视控股(北京)有限公司 Interactive interface management method and system
WO2017075932A1 (en) * 2015-11-02 2017-05-11 深圳奥比中光科技有限公司 Gesture-based control method and system based on three-dimensional displaying
CN106774849A (en) * 2016-11-24 2017-05-31 北京小米移动软件有限公司 virtual reality device control method and device
CN106919928A (en) * 2017-03-08 2017-07-04 京东方科技集团股份有限公司 gesture recognition system, method and display device
CN106933347A (en) * 2017-01-20 2017-07-07 深圳奥比中光科技有限公司 The method for building up and equipment in three-dimensional manipulation space
CN107368194A (en) * 2017-07-21 2017-11-21 上海爱优威软件开发有限公司 The gesture control method of terminal device
CN107463261A (en) * 2017-08-11 2017-12-12 北京铂石空间科技有限公司 Three-dimensional interaction system and method
CN107976183A (en) * 2017-12-18 2018-05-01 北京师范大学珠海分校 A kind of spatial data measuring method and device
CN108052237A (en) * 2018-01-05 2018-05-18 上海昶音通讯科技有限公司 3D projection touch device and touch method thereof
CN108073267A (en) * 2016-11-10 2018-05-25 腾讯科技(深圳)有限公司 Three dimensions control method and device based on movement locus
CN108363482A (en) * 2018-01-11 2018-08-03 江苏四点灵机器人有限公司 A method of the three-dimension gesture based on binocular structure light controls smart television
CN108681402A (en) * 2018-05-16 2018-10-19 Oppo广东移动通信有限公司 Identify exchange method, device, storage medium and terminal device
WO2018196552A1 (en) * 2017-04-25 2018-11-01 腾讯科技(深圳)有限公司 Method and apparatus for hand-type display for use in virtual reality scene
CN109153122A (en) * 2016-06-17 2019-01-04 英特尔公司 The robot control system of view-based access control model
CN109732606A (en) * 2019-02-13 2019-05-10 深圳大学 Long-range control method, device, system and the storage medium of mechanical arm
CN110058688A (en) * 2019-05-31 2019-07-26 安庆师范大学 A kind of projection system and method for dynamic gesture page turning
WO2019169644A1 (en) * 2018-03-09 2019-09-12 彼乐智慧科技(北京)有限公司 Method and device for inputting signal
CN110456957A (en) * 2019-08-09 2019-11-15 北京字节跳动网络技术有限公司 Show exchange method, device, equipment, storage medium
CN110794959A (en) * 2019-09-25 2020-02-14 苏州联游信息技术有限公司 Gesture interaction AR projection method and device based on image recognition
CN110889390A (en) * 2019-12-05 2020-03-17 北京明略软件系统有限公司 Gesture recognition method, gesture recognition device, control equipment and machine-readable storage medium
CN110989835A (en) * 2017-09-11 2020-04-10 大连海事大学 Working method of holographic projection device based on gesture recognition
WO2020087204A1 (en) * 2018-10-29 2020-05-07 深圳市欢太科技有限公司 Display screen operating method, electronic device, and readable storage medium
CN111242084A (en) * 2020-01-21 2020-06-05 深圳市优必选科技股份有限公司 Robot control method, device, robot and computer readable storage medium
CN112020694A (en) * 2018-09-19 2020-12-01 维塔驰有限公司 Method, system, and non-transitory computer-readable recording medium for supporting object control
CN112241204A (en) * 2020-12-17 2021-01-19 宁波均联智行科技有限公司 Gesture interaction method and system of vehicle-mounted AR-HUD
CN114701409A (en) * 2022-04-28 2022-07-05 东风汽车集团股份有限公司 Gesture interactive intelligent seat adjusting method and system
CN118192812A (en) * 2024-05-17 2024-06-14 深圳市立体通技术有限公司 Man-machine interaction method, device, computer equipment and storage medium

Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107678425A (en) * 2017-08-29 2018-02-09 南京理工大学 A kind of car controller based on Kinect gesture identifications
CN108776994B (en) * 2018-05-24 2022-10-25 长春理工大学 Roesser model based on true three-dimensional display system and implementation method thereof
CN110659543B (en) * 2018-06-29 2023-07-14 比亚迪股份有限公司 Gesture recognition-based vehicle control method and system and vehicle
CN109240494B (en) * 2018-08-23 2023-09-12 京东方科技集团股份有限公司 Control method, computer-readable storage medium and control system for electronic display panel
CN111142664B (en) * 2019-12-27 2023-09-01 恒信东方文化股份有限公司 Multi-user real-time hand tracking system and tracking method
CN113065383B (en) * 2020-01-02 2024-03-29 中车株洲电力机车研究所有限公司 Vehicle-mounted interaction method, device and system based on three-dimensional gesture recognition
CN111949134A (en) * 2020-08-28 2020-11-17 深圳Tcl数字技术有限公司 Human-computer interaction method, device and computer-readable storage medium
CN112329540A (en) * 2020-10-10 2021-02-05 广西电网有限责任公司电力科学研究院 Identification method and system for overhead transmission line operation in-place supervision
CN115840507B (en) * 2022-12-20 2024-05-24 北京帮威客科技有限公司 Large-screen equipment interaction method based on 3D image control
CN117278735B (en) * 2023-09-15 2024-05-17 山东锦霖智能科技集团有限公司 Immersive image projection equipment

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102236414A (en) * 2011-05-24 2011-11-09 北京新岸线网络技术有限公司 Picture operation method and system in three-dimensional display space
CN102650906A (en) * 2012-04-06 2012-08-29 深圳创维数字技术股份有限公司 Control method and device for user interface
CN103176605A (en) * 2013-03-27 2013-06-26 刘仁俊 Control device of gesture recognition and control method of gesture recognition
CN103488292A (en) * 2013-09-10 2014-01-01 青岛海信电器股份有限公司 Three-dimensional application icon control method and device
US20140118252A1 (en) * 2012-10-25 2014-05-01 Min Ho Kim Method of displaying cursor and system performing cursor display method

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10242255B2 (en) * 2002-02-15 2019-03-26 Microsoft Technology Licensing, Llc Gesture recognition system using depth perceptive sensors
CN102270035A (en) * 2010-06-04 2011-12-07 三星电子株式会社 Apparatus and method for selecting and operating object in non-touch mode
CN102226880A (en) * 2011-06-03 2011-10-26 北京新岸线网络技术有限公司 Somatosensory operation method and system based on virtual reality
CN102411426A (en) * 2011-10-24 2012-04-11 由田信息技术(上海)有限公司 Operating method of electronic device
CN102426480A (en) * 2011-11-03 2012-04-25 康佳集团股份有限公司 Man-machine interactive system and real-time gesture tracking processing method for same
US9201500B2 (en) * 2012-09-28 2015-12-01 Intel Corporation Multi-modal touch screen emulator
CN104182035A (en) * 2013-05-28 2014-12-03 中国电信股份有限公司 Method and system for controlling television application program
CN104571510B (en) * 2014-12-30 2018-05-04 青岛歌尔声学科技有限公司 A kind of system and method that gesture is inputted in 3D scenes
CN105353873B (en) * 2015-11-02 2019-03-15 深圳奥比中光科技有限公司 Gesture control method and system based on Three-dimensional Display

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102236414A (en) * 2011-05-24 2011-11-09 北京新岸线网络技术有限公司 Picture operation method and system in three-dimensional display space
CN102650906A (en) * 2012-04-06 2012-08-29 深圳创维数字技术股份有限公司 Control method and device for user interface
US20140118252A1 (en) * 2012-10-25 2014-05-01 Min Ho Kim Method of displaying cursor and system performing cursor display method
CN103176605A (en) * 2013-03-27 2013-06-26 刘仁俊 Control device of gesture recognition and control method of gesture recognition
CN103488292A (en) * 2013-09-10 2014-01-01 青岛海信电器股份有限公司 Three-dimensional application icon control method and device

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
冯保全, 杨波: "Theory and Methods of Three-Dimensional Natural Gesture Tracking", 31 May 2013 *

Cited By (38)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2017075932A1 (en) * 2015-11-02 2017-05-11 深圳奥比中光科技有限公司 Gesture-based control method and system based on three-dimensional displaying
CN105589293A (en) * 2016-03-18 2016-05-18 严俊涛 Holographic projection method and holographic projection system
CN105955461A (en) * 2016-04-25 2016-09-21 乐视控股(北京)有限公司 Interactive interface management method and system
CN111452050A (en) * 2016-06-17 2020-07-28 英特尔公司 Robot control system based on vision
CN109153122A (en) * 2016-06-17 2019-01-04 英特尔公司 The robot control system of view-based access control model
CN108073267B (en) * 2016-11-10 2020-06-16 腾讯科技(深圳)有限公司 Three-dimensional control method and device based on motion trail
CN108073267A (en) * 2016-11-10 2018-05-25 腾讯科技(深圳)有限公司 Three dimensions control method and device based on movement locus
CN106774849B (en) * 2016-11-24 2020-03-17 北京小米移动软件有限公司 Virtual reality equipment control method and device
CN106774849A (en) * 2016-11-24 2017-05-31 北京小米移动软件有限公司 virtual reality device control method and device
CN106933347A (en) * 2017-01-20 2017-07-07 深圳奥比中光科技有限公司 The method for building up and equipment in three-dimensional manipulation space
CN106919928A (en) * 2017-03-08 2017-07-04 京东方科技集团股份有限公司 gesture recognition system, method and display device
US11194400B2 (en) 2017-04-25 2021-12-07 Tencent Technology (Shenzhen) Company Limited Gesture display method and apparatus for virtual reality scene
WO2018196552A1 (en) * 2017-04-25 2018-11-01 腾讯科技(深圳)有限公司 Method and apparatus for hand-type display for use in virtual reality scene
CN107368194A (en) * 2017-07-21 2017-11-21 上海爱优威软件开发有限公司 The gesture control method of terminal device
CN107463261A (en) * 2017-08-11 2017-12-12 北京铂石空间科技有限公司 Three-dimensional interaction system and method
CN110989835B (en) * 2017-09-11 2023-04-28 大连海事大学 Working method of holographic projection device based on gesture recognition
CN110989835A (en) * 2017-09-11 2020-04-10 大连海事大学 Working method of holographic projection device based on gesture recognition
CN107976183A (en) * 2017-12-18 2018-05-01 北京师范大学珠海分校 A kind of spatial data measuring method and device
CN108052237A (en) * 2018-01-05 2018-05-18 上海昶音通讯科技有限公司 3D projection touch device and touch method thereof
CN108052237B (en) * 2018-01-05 2022-01-14 上海昶音通讯科技有限公司 3D projection touch device and touch method thereof
CN108363482A (en) * 2018-01-11 2018-08-03 江苏四点灵机器人有限公司 A method of the three-dimension gesture based on binocular structure light controls smart television
WO2019169644A1 (en) * 2018-03-09 2019-09-12 彼乐智慧科技(北京)有限公司 Method and device for inputting signal
CN108681402A (en) * 2018-05-16 2018-10-19 Oppo广东移动通信有限公司 Identify exchange method, device, storage medium and terminal device
WO2019218880A1 (en) * 2018-05-16 2019-11-21 Oppo广东移动通信有限公司 Interaction recognition method and apparatus, storage medium, and terminal device
CN112020694B (en) * 2018-09-19 2024-02-20 维塔驰有限公司 Method, system and non-transitory computer readable recording medium for supporting object control
CN112020694A (en) * 2018-09-19 2020-12-01 维塔驰有限公司 Method, system, and non-transitory computer-readable recording medium for supporting object control
WO2020087204A1 (en) * 2018-10-29 2020-05-07 深圳市欢太科技有限公司 Display screen operating method, electronic device, and readable storage medium
CN112714900A (en) * 2018-10-29 2021-04-27 深圳市欢太科技有限公司 Display screen operation method, electronic device and readable storage medium
CN109732606A (en) * 2019-02-13 2019-05-10 深圳大学 Long-range control method, device, system and the storage medium of mechanical arm
CN110058688A (en) * 2019-05-31 2019-07-26 安庆师范大学 A kind of projection system and method for dynamic gesture page turning
CN110456957A (en) * 2019-08-09 2019-11-15 北京字节跳动网络技术有限公司 Show exchange method, device, equipment, storage medium
CN110794959A (en) * 2019-09-25 2020-02-14 苏州联游信息技术有限公司 Gesture interaction AR projection method and device based on image recognition
CN110889390A (en) * 2019-12-05 2020-03-17 北京明略软件系统有限公司 Gesture recognition method, gesture recognition device, control equipment and machine-readable storage medium
CN111242084A (en) * 2020-01-21 2020-06-05 深圳市优必选科技股份有限公司 Robot control method, device, robot and computer readable storage medium
CN112241204A (en) * 2020-12-17 2021-01-19 宁波均联智行科技有限公司 Gesture interaction method and system of vehicle-mounted AR-HUD
CN114701409A (en) * 2022-04-28 2022-07-05 东风汽车集团股份有限公司 Gesture interactive intelligent seat adjusting method and system
CN114701409B (en) * 2022-04-28 2023-09-05 东风汽车集团股份有限公司 Gesture interactive intelligent seat adjusting method and system
CN118192812A (en) * 2024-05-17 2024-06-14 深圳市立体通技术有限公司 Man-machine interaction method, device, computer equipment and storage medium

Also Published As

Publication number Publication date
CN105353873B (en) 2019-03-15
WO2017075932A1 (en) 2017-05-11

Similar Documents

Publication Publication Date Title
CN105353873A (en) Gesture manipulation method and system based on three-dimensional display
Liu et al. 3D imaging, analysis and applications
CN112771539B (en) Employing three-dimensional data predicted from two-dimensional images using neural networks for 3D modeling applications
US11164289B1 (en) Method for generating high-precision and microscopic virtual learning resource
CN104077804B (en) A kind of method based on multi-frame video picture construction three-dimensional face model
CN103400409B (en) A kind of coverage 3D method for visualizing based on photographic head attitude Fast estimation
US8432435B2 (en) Ray image modeling for fast catadioptric light field rendering
CN101794349B (en) Experimental system and method for augmented reality of teleoperation of robot
US9986228B2 (en) Trackable glasses system that provides multiple views of a shared display
CN100468465C (en) Stereo vision three-dimensional human face modelling approach based on dummy image
Asayama et al. Fabricating diminishable visual markers for geometric registration in projection mapping
CN106780592A (en) Kinect depth reconstruction algorithms based on camera motion and image light and shade
Starck et al. The multiple-camera 3-d production studio
CN113892127A (en) Method and apparatus for corner detection using a neural network and a corner detector
WO2012126103A1 (en) Apparatus and system for interfacing with computers and other electronic devices through gestures by using depth sensing and methods of use
CN110276791A (en) A kind of depth camera emulation mode that parameter is configurable
CN105574812A (en) Multi-angle three-dimensional data registration method and device
Canessa et al. A dataset of stereoscopic images and ground-truth disparity mimicking human fixations in peripersonal space
Taubin et al. 3d scanning for personal 3d printing: build your own desktop 3d scanner
Sang et al. Inferring super-resolution depth from a moving light-source enhanced RGB-D sensor: a variational approach
CN104834913A (en) Flag signal identification method and apparatus based on depth image
Wang et al. Optimization-based eye tracking using deflectometric information
Pavlidis et al. 3D digitization of tangible heritage
Planche et al. Physics-based differentiable depth sensor simulation
Tian et al. Registration and occlusion handling based on the FAST ICP-ORB method for augmented reality systems

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant