CN105353873B - Gesture control method and system based on Three-dimensional Display - Google Patents
- Publication number
- CN105353873B (application CN201510735569.7A, granted as CN 105353873 B)
- Authority
- CN
- China
- Prior art keywords
- hand
- gesture
- motion
- image
- control instruction
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/017—Gesture based interaction, e.g. based on a set of recognized hand gestures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/20—Movements or behaviour, e.g. gesture recognition
- G06V40/28—Recognition of hand or arm movements, e.g. recognition of deaf sign language
Abstract
The present invention relates to a gesture control method based on three-dimensional display. The method acquires the position information of the hand, establishes the hand's motion trajectory in a three-dimensional coordinate system, recognizes the hand's gesture action from that trajectory, reads the control instruction corresponding to the gesture action, and finally controls the operation object in the three-dimensional image according to that instruction. In other words, by recognizing the user's gesture action, the operation object is controlled to execute the corresponding operation. Natural human-machine interaction can therefore be achieved without touching the display screen. A gesture control system and device based on three-dimensional display are also provided.
Description
Technical field
The present invention relates to gesture operation control technology, and in particular to a gesture control method and system based on three-dimensional display for natural human-machine interaction.
Background technique
With the rapid development and wide application of fields such as human-computer interaction, robotics, and virtual reality, three-dimensional interactive input technology has become a hot topic for researchers in human-machine virtual interaction. As the technology develops and deepens, public demand for it grows higher and higher: contactless, high-speed, real-time positioning and three-dimensional manipulation have become the direction of technical development. Traditional control of a display screen with a mouse or touch screen can therefore no longer meet public demand.
Summary of the invention
Based on this, it is necessary to provide a contactless gesture control method based on three-dimensional display that achieves natural human-machine interaction.
A gesture control method based on three-dimensional display comprises the following steps:
Acquiring the position information of the hand, and establishing the hand's motion trajectory in a three-dimensional coordinate system;
Recognizing the hand's gesture action according to the hand's motion trajectory in the three-dimensional coordinate system;
Reading the corresponding control instruction according to the gesture action, and controlling the operation object in the three-dimensional image according to the control instruction.
In one of the embodiments, the step of acquiring the position information of the hand and establishing the hand's motion trajectory in the three-dimensional coordinate system comprises:
Acquiring a series of continuous depth information of the hand, and forming the hand's motion trajectory in the three-dimensional coordinate system according to the depth information.
In one of the embodiments, the operating space of the hand and the three-dimensional coordinate system have a linear corresponding relationship, where the operating space is the real space in which the hand executes a series of continuous actions.
In one of the embodiments, the step of recognizing the hand's gesture action according to the hand's motion trajectory in the three-dimensional coordinate system comprises:
Extracting hand contour feature information, and performing feature matching in combination with the position information to recognize and classify the gesture action.
In one of the embodiments, the step of reading the corresponding control instruction according to the gesture action and controlling the operation object in the three-dimensional image according to the control instruction comprises:
Storing the corresponding relationship between each gesture action and its control instruction;
After recognizing a gesture action, reading the control instruction corresponding to that gesture action from the stored relationship;
Controlling the operation object in the three-dimensional image to execute the corresponding action according to the control instruction.
In addition, a contactless gesture control system based on three-dimensional display for natural human-machine interaction is also provided.
A gesture control system based on three-dimensional display comprises:
An information obtaining module for obtaining the position information of the hand;
A coordinate establishing module for establishing the hand's motion trajectory in a three-dimensional coordinate system according to the position information;
A gesture recognition module for recognizing the hand's gesture action according to the hand's motion trajectory in the three-dimensional coordinate system;
An operation control module for reading the corresponding control instruction according to the gesture action, and controlling the operation object in the three-dimensional image according to the control instruction.
In one of the embodiments, the information obtaining module is also used to obtain a series of continuous depth information of the hand, and the coordinate establishing module is also used to form the hand's motion trajectory in the three-dimensional coordinate system according to the depth information.
In one of the embodiments, the operating space of the hand and the three-dimensional coordinate system have a linear corresponding relationship, where the operating space is the real space in which the hand executes a series of continuous actions.
In one of the embodiments, the gesture recognition module is also used to extract hand contour feature information, and to perform feature matching in combination with the position information to recognize and classify the gesture action.
In one of the embodiments, the operation control module includes a storage module, a reading module, and an execution module. The storage module stores the corresponding relationship between each gesture action and its control instruction; the reading module reads, after a gesture action is recognized, the control instruction corresponding to that gesture action from the stored relationship; the execution module controls the operation object in the three-dimensional image to execute the corresponding action according to the control instruction.
In addition, a contactless gesture control device based on three-dimensional display for natural human-machine interaction is also provided.
A gesture control device based on three-dimensional display comprises a depth camera, a three-dimensional display, and a processor.
The depth camera obtains the depth image of the hand and outputs it to the processor.
The processor obtains the position information of the hand according to the depth image, and establishes the hand's motion trajectory in a three-dimensional coordinate system according to the position information. The processor is also used to recognize the hand's gesture action according to the hand's motion trajectory in the three-dimensional coordinate system, to read the corresponding control instruction according to the gesture action, and to control the operation object in the three-dimensional image according to the control instruction.
The processor is also used to control the three-dimensional display to show the gesture action, and to show the trajectory of executing the control instruction corresponding to that gesture action.
The gesture control method, system, and device based on three-dimensional display described above acquire the position information of the hand, establish the hand's motion trajectory in a three-dimensional coordinate system, recognize the hand's gesture action according to that trajectory, read the corresponding control instruction according to the gesture action, and control the operation object in the three-dimensional image according to the control instruction. That is, by recognizing the user's gesture action, the operation object is controlled to execute the corresponding operation. Natural human-machine interaction can therefore be achieved without touching the display screen.
Detailed description of the invention
Fig. 1 is a flow chart of the gesture control method based on three-dimensional display;
Fig. 2 is a schematic diagram of the object depth computation model;
Fig. 3(a) is the first schematic diagram of hand movement corresponding to the three-dimensional display cursor;
Fig. 3(b) is the second schematic diagram of hand movement corresponding to the three-dimensional display cursor;
Fig. 3(c) is the first schematic diagram of grasping and moving an object with one hand in the three-dimensional display;
Fig. 3(d) is the second schematic diagram of grasping and moving an object with one hand in the three-dimensional display;
Fig. 3(e) is the third schematic diagram of grasping and moving an object with one hand in the three-dimensional display;
Fig. 3(f) is the fourth schematic diagram of grasping and moving an object with one hand in the three-dimensional display;
Fig. 3(g) is the fifth schematic diagram of grasping and moving an object with one hand in the three-dimensional display;
Fig. 4 is a module diagram of the gesture control system based on three-dimensional display.
Specific embodiment
To facilitate understanding of the present invention, a more comprehensive description is given below with reference to the relevant drawings, in which preferred embodiments of the invention are shown. The invention can, however, be realized in many different forms and is not limited to the embodiments described herein. Rather, these embodiments are provided so that the understanding of the disclosure will be more thorough and comprehensive.
It should be noted that when an element is referred to as being "fixed on" another element, it can be directly on the other element or intervening elements may also be present. When an element is considered to be "connected to" another element, it can be directly connected to the other element or intervening elements may be present at the same time. The terms "vertical", "horizontal", "left", "right", and similar expressions used herein are for illustrative purposes only.
Unless otherwise defined, all technical and scientific terms used herein have the same meaning as commonly understood by those skilled in the technical field to which the invention belongs. The terms used in the specification are only for the purpose of describing specific embodiments and are not intended to limit the invention. The term "and/or" used herein includes any and all combinations of one or more of the associated listed items.
Fig. 1 shows the flow chart of the gesture control method based on three-dimensional display.
A depth image of the hand is obtained. A depth image, also called a range image, is an image or image channel whose information, as viewed from the observation viewpoint, relates to the distance to object surfaces in the scene. The gray value of each pixel in the depth image corresponds to the depth value of a point in the scene; the information a depth image contains is depth information.
A depth image has two attributes: color independence, and a gray-value variation direction identical to the Z-direction of the camera's field of view. Color independence means that, compared with a color image, a depth image is free from interference caused by illumination, shadow, and environmental changes. The gray-value variation direction being identical to the camera's Z-direction means that the depth image can be used to reconstruct a 3D space region within a certain range, which can to some extent solve the problem of an object being occluded or of parts of an object overlapping. According to the depth information, foreground and background can easily be separated, which reduces the difficulty of image recognition.
According to the imaging principle, depth images are mainly obtained by the time-of-flight method, structured light, and 3D laser scanning, and are mainly used in human-computer interaction, where pattern recognition is performed on the depth image.
In the present invention, the depth image can be obtained by the following methods. The first is based on time of flight: the depth information of the object surface is calculated by measuring the time difference between emitting light and receiving its reflection back from the object surface. The second method is based on structured-light coding: a known infrared pattern is projected into the scene, and distance is measured from the deformation of the pattern recorded by an infrared CMOS camera. In operation, the human body and its movements are identified; the core of identifying the human body is the skeleton: by tracking the bones, the body's movements are scanned into the computer, where the relevant simulation and operations are performed. Of course, the methods of obtaining the depth image in the present invention are not limited to the above.
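As a minimal sketch of the time-of-flight relation just described, depth is half the distance light travels in the measured round-trip delay. This assumes an ideal direct measurement of the delay, not any particular camera's API:

```python
# Time-of-flight depth: the emitted light travels to the surface and back,
# so the one-way distance is half of (speed of light x round-trip time).
C = 299_792_458.0  # speed of light in m/s

def tof_depth(round_trip_seconds: float) -> float:
    """Depth of the reflecting surface, in metres."""
    return C * round_trip_seconds / 2.0

# A 10 ns round trip corresponds to roughly 1.5 m.
print(tof_depth(10e-9))
```

In practice a ToF camera measures this delay per pixel (often via phase shift of modulated light), producing the per-pixel depth values described above.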
While the depth information of the hand is collected, the features of the gesture action are detected, and the control instruction corresponding to that gesture action is provided. For example, the 3D display cursor follows the hand according to the mapping relationship; a grasp of the hand corresponds to a grab-object instruction; grasping and moving forward corresponds to a zoom-in instruction; grasping and moving backward corresponds to a zoom-out instruction; and so on. Once a gesture action is detected, the control instruction corresponding to that gesture action can be output.
In the present invention, gesture actions collected in advance, such as grasping and grasping-and-moving-forward, are stored, and control instructions corresponding to them are set. Therefore, when a user makes an action such as grasping or grasping and moving forward, the control instruction corresponding to that action is executed. That is, once the gesture action data has been collected in advance, any user who makes the corresponding gesture action can trigger execution of the corresponding control instruction.
For example, opening the palm and grasping may respectively represent the zoom-in and zoom-out control instructions; or grasping with the thumb up or down may respectively represent zoom in and zoom out; or moving the hand forward or backward after grasping may respectively represent zoom in and zoom out. Gesture actions like these are collected and stored, and corresponding control instructions are assigned to them, establishing a correspondence between gesture actions and control instructions. Thus, when a gesture action is recognized, the corresponding operation can be executed according to the corresponding control instruction. In other embodiments, each gesture action can also be assigned a different control instruction.
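The stored correspondence between gesture actions and control instructions can be sketched as a simple lookup table. The gesture and instruction names below are illustrative placeholders, not terms fixed by the patent:

```python
# Hypothetical gesture-to-instruction table; an unknown gesture maps to nothing.
GESTURE_TO_INSTRUCTION = {
    "palm_open":      "zoom_in",
    "grasp":          "zoom_out",
    "grasp_forward":  "zoom_in",
    "grasp_backward": "zoom_out",
}

def read_instruction(gesture):
    """Look up the control instruction for a recognized gesture action."""
    return GESTURE_TO_INSTRUCTION.get(gesture)

print(read_instruction("grasp_forward"))  # zoom_in
print(read_instruction("wave"))           # None (no instruction assigned)
```

Assigning a different instruction to each gesture, as the other embodiments suggest, only changes the table's contents, not the lookup.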
The collection of gesture actions covers many classes, and each class contains many different examples. The data are captured in natural environments, in real rooms or offices, under different illumination and angles, so that the collected gesture action data are more practical.
In this embodiment, a gesture control method based on three-dimensional display comprises the following steps:
Step S110: obtain the position information of the hand, and establish the hand's motion trajectory in a three-dimensional coordinate system. In this embodiment, the position information of the hand can be obtained based on the depth image.
Specifically, the step of obtaining the position information of the hand and establishing the hand's motion trajectory in the three-dimensional coordinate system comprises:
Obtaining a series of continuous depth information of the hand, and forming the hand's motion trajectory in the three-dimensional coordinate system according to the depth information. In this embodiment, based on depth image acquisition technology, a depth camera can be used to collect image data of a series of continuous actions of the hand, from which a series of continuous depth information of the hand is extracted.
The operating space of the hand and the three-dimensional coordinate system have a linear corresponding relationship, where the operating space is the real space in which the hand executes a series of continuous actions; from the image data collected by the depth camera in the operating space, a series of continuous depth information and two-dimensional coordinate information of the hand can be obtained. The three-dimensional coordinate system above refers to the space coordinate system of the stereoscopic image data used to display the three-dimensional image.
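The linear correspondence between the real operating space and the display's coordinate system can be sketched as a per-axis linear map. The operating-space and display ranges below are hypothetical values, chosen only for illustration:

```python
def map_to_display(p, op_min, op_max, disp_min, disp_max):
    """Linearly map a hand position from the real operating space to the
    display's 3-D coordinate system, one axis at a time."""
    return tuple(
        d0 + (x - a) * (d1 - d0) / (b - a)
        for x, a, b, d0, d1 in zip(p, op_min, op_max, disp_min, disp_max)
    )

# A hand at the centre of a 1 m cube maps to the centre of a
# 100-unit display volume.
print(map_to_display((0.5, 0.5, 0.5),
                     (0, 0, 0), (1.0, 1.0, 1.0),
                     (0, 0, 0), (100, 100, 100)))  # (50.0, 50.0, 50.0)
```

Because the map is linear on each axis, moving the hand through the operating space moves the mapped point proportionally through the display volume.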
After obtaining the depth information of the hand, the corresponding coordinate points in the three-dimensional coordinate system can be found. The bone information of the hand is tracked, and the hand's movement is scanned into the computer; at the same time the depth information of the hand is collected, and combining the depth information with the bone information yields the hand's corresponding coordinate points in the three-dimensional coordinate system, so that positions in the three-dimensional image can be located precisely. As the hand moves, tracking its bone information allows the hand's motion trajectory to be tracked successively in the three-dimensional coordinate system; that is, the hand's motion trajectory is tracked and the actual trajectory is transformed into the three-dimensional coordinate system.
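Accumulating successive tracked hand positions into a motion trajectory might look like the following sketch; the class and method names are illustrative:

```python
class TrajectoryTracker:
    """Collect successive hand positions (already mapped into the display's
    3-D coordinate system) into a motion trajectory."""
    def __init__(self):
        self.trajectory = []

    def update(self, position):
        """Append the latest tracked hand position."""
        self.trajectory.append(tuple(position))

    def displacement(self):
        """Net displacement from the first sample to the latest one."""
        if len(self.trajectory) < 2:
            return (0.0, 0.0, 0.0)
        (x0, y0, z0), (x1, y1, z1) = self.trajectory[0], self.trajectory[-1]
        return (x1 - x0, y1 - y0, z1 - z0)

t = TrajectoryTracker()
for p in [(0, 0, 0), (1, 0, 0), (2, 1, 0)]:
    t.update(p)
print(t.displacement())  # (2, 1, 0)
```

A recognizer could then classify a gesture (e.g. forward vs. backward movement) from the sign of the displacement components.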
In one embodiment, the depth information can be acquired with a parallel stereovision model. Suppose the extrinsic parameters of camera C1 are expressed by rotation matrix R1 and translation vector t1, and those of camera C2 by rotation matrix R2 and translation vector t2. If R1 = R2, i.e. the left and right cameras are placed in parallel and their relative position differs only by a translation, the stereovision system is a parallel stereovision system.
Taking the camera coordinate system of camera C1 as the world coordinate system, we have:
t1 = (0, 0, 0)^T, R1 = R2 = I.
In a parallel stereovision system, the optical axes of the two cameras are parallel to each other, the x-axes of the left and right camera coordinate systems coincide, and the epipolar lines are parallel; the only difference between the two camera coordinate systems is a translation B along the x-axis (the "baseline").
The object depth computation model shown in Fig. 2 is established to calculate the object depth.
As shown in Fig. 2, the positions of the left and right camera optical centers (i.e. lens centers) are C1 and Cr respectively, B is the translation vector between the two optical centers, and f is the focal length of the cameras. For a point P in space, p1 and pr are its projections on the left and right image planes respectively. Z is the required depth information, i.e. the distance from the spatial point P to the line C1Cr joining the camera optical centers. L and R are the feet of the perpendiculars dropped from the camera optical centers onto the image planes. H is the foot of the perpendicular dropped from the spatial point P onto the image plane.
From similar triangles, the linear relation is as follows:
(|B| - (|Lp1| - |Rpr|)) / (Z - f) = |B| / Z
Combining and simplifying the above gives
Z = f · |B| / (|Lp1| - |Rpr|)
This is the formula for solving the depth information, where |Lp1| - |Rpr| is the disparity of the corresponding matched points obtained in stereo matching, representing the difference x1 - x2 of the image positions of the spatial point P on the two image planes; the camera optical center distance |B| and the focal length f are obtained by camera calibration.
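The depth formula above, Z = f·|B| / disparity, can be sketched directly. The focal length (in pixels) and baseline values below are hypothetical calibration results used only for illustration:

```python
def stereo_depth(x_left, x_right, focal_px, baseline_m):
    """Depth from the parallel-stereo model: Z = f * B / (x1 - x2).
    x_left and x_right are the matched point's x-coordinates (pixels)."""
    disparity = x_left - x_right
    if disparity <= 0:
        raise ValueError("disparity must be positive for a point in front of the rig")
    return focal_px * baseline_m / disparity

# f = 700 px, baseline 0.1 m, disparity 35 px  ->  Z = 2.0 m
print(stereo_depth(400, 365, 700, 0.1))  # 2.0
```

Note the inverse relationship: nearer points have larger disparity, so depth resolution degrades as Z grows.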
Step S120: recognize the hand's gesture action according to the hand's motion trajectory in the three-dimensional coordinate system.
Specifically, the step of recognizing the hand's gesture action according to the motion trajectory in the three-dimensional coordinate system comprises:
Extracting hand contour feature information, and performing feature matching in combination with the position information to recognize and classify the gesture action.
In this embodiment, gesture point cloud data is computed from the depth information of the gesture using a three-dimensional point cloud; the computed gesture point cloud data contains only the three-dimensional coordinate position information of the hand joint points and the palm center point. The gesture point cloud data is then filtered to remove noise interference points, yielding the gesture point cloud information. The three-dimensional gesture point cloud information is registered to a plane by rotation and translation, the registered point cloud information is saved, and the contour feature information of the gesture point cloud is then extracted; the contour feature points include the fingertip points, the fingertip valley points, and the palm center point.
The depth values of the contour feature points are mapped from the pixel depth values of the depth image in combination with the contour feature point information. A distance threshold judgment using the Euclidean distance method filters out the key fingertip points; from the fingertip point information and the corresponding fingertip valley point information, combined with the registration plane, five finger feature vectors are obtained, and the gesture action is recovered from these feature vectors.
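The Euclidean distance threshold judgment used to pick out key fingertip points might be sketched as follows, keeping only contour points sufficiently far from the palm center. The threshold and coordinates are illustrative, not values from the patent:

```python
import math

def filter_fingertips(candidates, palm_center, min_dist):
    """Keep contour points whose 3-D Euclidean distance from the palm
    center is at least min_dist; nearer points are treated as noise or
    palm-region points and discarded."""
    def dist(p, q):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))
    return [p for p in candidates if dist(p, palm_center) >= min_dist]

palm = (0.0, 0.0, 0.5)
pts = [(0.08, 0.0, 0.5), (0.01, 0.01, 0.5), (0.0, 0.09, 0.48)]
print(filter_fingertips(pts, palm, 0.05))  # keeps the two outer points
```

The surviving points, paired with their valley points, would then form the per-finger feature vectors described above.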
Step S130: read the corresponding control instruction according to the gesture action, and control the operation object in the three-dimensional image according to the control instruction.
Specifically, the step of reading the corresponding control instruction according to the gesture action and controlling the operation object in the three-dimensional image according to the control instruction comprises:
Storing the corresponding relationship between each gesture action and its control instruction.
After recognizing a gesture action, reading the control instruction corresponding to that gesture action from the stored relationship.
Controlling the operation object in the three-dimensional image to execute the corresponding action according to the control instruction.
That is, after a gesture action is recognized, the control instruction corresponding to that gesture action is found from the prestored correspondence between gesture actions and control instructions, and the operation object in the three-dimensional image is controlled according to that control instruction.
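Dispatching a control instruction to an action on the operation object can be sketched as below. The instruction names and scale factors are illustrative placeholders, not values specified by the patent:

```python
class OperationObject:
    """Minimal stand-in for an operation object in the 3-D image."""
    def __init__(self):
        self.scale = 1.0

def execute_instruction(obj, instruction):
    """Map a control instruction to an action and apply it to the object;
    unknown instructions are ignored."""
    actions = {
        "zoom_in":  lambda o: setattr(o, "scale", o.scale * 1.25),
        "zoom_out": lambda o: setattr(o, "scale", o.scale * 0.8),
    }
    action = actions.get(instruction)
    if action:
        action(obj)

cube = OperationObject()
execute_instruction(cube, "zoom_in")
print(round(cube.scale, 2))  # 1.25
```

Chaining this after the gesture lookup completes the pipeline: recognized gesture, then instruction, then action on the displayed object.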
The three-dimensional image of this embodiment can be a spatial stereoscopic image obtained using true three-dimensional image display technology. True three-dimensional image display technology refers to technology that, based on holography or volumetric display technology, displays stereoscopic image data within a certain physical space to form a real spatial stereoscopic image. Stereoscopic image data is image data with a three-dimensional space coordinate system, in which the information of each pixel includes at least the position information and the image information of that pixel.
The holography referred to herein mainly includes conventional holograms (transmission holographic display images, reflection holographic display images, image-plane holographic display images, rainbow holographic display images, synthetic holographic display images, etc.) and computer-generated holograms (CGH, Computer Generated Hologram). Computer-generated holograms float in the air and have a wider color gamut. For a computer-generated hologram, a mathematical model describing the object to be holographed is generated in the computer, and the physical interference of light waves is replaced by computation steps; in each step, the intensity pattern of the CGH model is determined, and the pattern can be output to a reconfigurable device that re-modulates the light wave information and reconstructs the output. In common terms, a CGH is the interference pattern of a computer graphic (virtual object) obtained by computer calculation, replacing the object-light-wave recording process of traditional holography; the diffraction process of hologram reconstruction is unchanged in principle, only a reconfigurable device for the light wave information is added, so that static and moving computer graphics can be displayed holographically.
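A minimal illustration of replacing physical interference with computation is the textbook point-source (Fresnel) CGH: each object point contributes a spherical wave, and the recorded intensity is the interference of those waves with a reference wave. This is a generic sketch assuming a unit on-axis plane reference wave, not the patent's specific method:

```python
import numpy as np

def point_source_cgh(points, plane_size, n, wavelength):
    """Intensity pattern on an n x n hologram plane from interfering the
    spherical waves of object points (x, y, z, amplitude) with a unit
    on-axis plane reference wave."""
    k = 2 * np.pi / wavelength
    coords = np.linspace(-plane_size / 2, plane_size / 2, n)
    X, Y = np.meshgrid(coords, coords)
    field = np.ones_like(X, dtype=complex)  # the plane reference wave
    for (x0, y0, z0, amp) in points:
        r = np.sqrt((X - x0) ** 2 + (Y - y0) ** 2 + z0 ** 2)
        field += amp * np.exp(1j * k * r) / r  # spherical object wave
    return np.abs(field) ** 2  # recorded intensity = the hologram

# One object point 0.2 m behind a 1 cm plane, sampled at 256 x 256.
h = point_source_cgh([(0.0, 0.0, 0.2, 0.01)], 0.01, 256, 633e-9)
print(h.shape)  # (256, 256)
```

Writing this intensity pattern to a reconfigurable device (e.g. a spatial light modulator) and illuminating it reconstructs the object wave by diffraction, as the description notes.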
Based on holography, in some embodiments of the invention, the spatial stereoscopic display device includes a 360-degree holographic phantom imaging system. The system includes a light source, a controller, and a beam splitter; the light source can be a spotlight, and the controller includes one or more processors. The controller receives stereoscopic image data through a communication interface, obtains the interference pattern of the computer graphic (virtual object) after processing, and outputs the interference pattern to the beam splitter; the light projected by the light source onto the beam splitter presents this interference pattern, forming a spatial stereoscopic image. The beam splitter here can be a special lens, a four-sided pyramid, or the like.
In addition to the 360-degree holographic phantom imaging system above, the spatial stereoscopic display device can also be based on holographic projection equipment, for example forming stereoscopic images on air, special lenses, or a mist screen. Thus the spatial stereoscopic display device can also be one of air holographic projection equipment, laser-beam holographic projection equipment, 360-degree holographic display screen projection equipment (whose principle is to project the image onto a high-speed rotating mirror to realize the hologram), and mist-screen stereoscopic imaging systems.
Volumetric display technology, by contrast, exploits the special visual mechanism of the human eye itself: a physical display composed of voxel particles rather than molecular particles is manufactured, so that besides seeing the shape embodied by the light waves, the real existence of the voxels can be perceived. The material in a transparent display volume is excited in an appropriate way, and voxels are formed through absorption or scattering of the generated visible radiation; after the material at many positions in the volume has been excited, a three-dimensional spatial image composed of many dispersed voxels is formed in three-dimensional space.
The present invention can also use the following methods:
(1) Rotary-body scanning technology, which is mainly used for displaying dynamic objects. In this technology, a series of two-dimensional images is projected onto a rotating or moving screen while the screen moves at a speed the observer cannot perceive; because of the persistence of vision, a three-dimensional object is formed in the human eye. A display system using this stereoscopic display technology can therefore achieve true three-dimensional display of images (visible through 360°). In the system, light beams of different colors are projected onto the display medium by a light deflector, so that the medium shows rich colors. Meanwhile, the display medium allows the beams to produce discrete visible light points; these points are the voxels, corresponding to arbitrary points in the three-dimensional image. Groups of voxels are used to build the image, and the observer can see this true three-dimensional image from any viewpoint. The imaging space of a display device based on rotary-body scanning can be generated by the rotation or translation of the screen; voxels are activated on the emitting surface as the screen sweeps through the imaging space. The system includes subsystems such as a laser system, a computer control system, and a rotational display system.
(2) Static volumetric imaging technology forms the three-dimensional stereoscopic image based on frequency up-conversion. So-called frequency up-conversion three-dimensional display uses the fluorescence spontaneously radiated after the medium of the imaging space absorbs multiple photons to produce visible pixels. The basic principle is that two orthogonal infrared laser beams intersect and act on the up-conversion material; through two resonant absorptions in the up-conversion material, electrons at the luminescence center are excited to a high excitation level, and the subsequent downward energy-level transition produces the emission of visible light. A point in the up-conversion material's space is thus a luminous bright spot; if the crossing point of the two laser beams performs an address scan of three-dimensional space in the up-conversion material along a certain trajectory, the places swept by the crossing point form a bright band emitting visible fluorescence, i.e. a three-dimensional graphic identical to the motion trajectory of the laser crossing point can be displayed. This display method lets the naked eye see a three-dimensional image visible omnidirectionally through 360°.
Of course, the three-dimensional image in the present invention can also be a 3D image displayed on a display screen based on 3D display technology. The display screen referred to here uses the parallax between the left and right eyes: the images shown on the display screen are reconstructed by the human eye into a virtual 3D stereoscopic image. Display screens fall into two major classes: glasses-type display devices and naked-eye display devices. A glasses-type display device is realized by a flat screen together with 3D glasses. A naked-eye display device, i.e. a naked-eye 3D display, is composed of four parts: the 3D stereoscopic display terminal, playback software, production software, and application technology; it is a cross-disciplinary stereoscopic display system integrating modern high technologies such as optics, photography, electronic computing, automatic control, software, and 3D animation production.
Based on the different three-dimensional imaging modes above, stereoscopic image data carrying a three-dimensional coordinate system can be converted into the image data required by the different display devices. These different display devices use different hardware according to their three-dimensional imaging mode; for details, reference may be made to the related prior art.
In one embodiment, when the hand performs an open (spread) action, skeleton tracking identifies the open action and the control instruction corresponding to it is looked up. Suppose the control instruction corresponding to the open action is a start action; in that case only a cursor corresponding to the hand is displayed. When the hand moves while remaining in the open state, the instruction fed back to the computer through skeleton tracking is simply to track the hand's motion trajectory, i.e. the displayed cursor follows the hand's movement. Since the gesture operating space corresponds to the three-dimensional coordinate system, the hand's movement in the operating space maps directly onto the coordinate system.
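The linear correspondence between the hand's real operating space and the display's three-dimensional coordinate system described above can be sketched as follows. This is a minimal illustration only; the coordinate ranges and function names are assumptions, not taken from the patent:

```python
def map_point(p, src_min, src_max, dst_min, dst_max):
    """Linearly map one coordinate from the operating space to the display space."""
    scale = (dst_max - dst_min) / (src_max - src_min)
    return dst_min + (p - src_min) * scale

def hand_to_display(hand_xyz, op_bounds, disp_bounds):
    """Map a hand position (x, y, z) in the real operating space onto the
    three-dimensional coordinate system of the display, axis by axis."""
    return tuple(
        map_point(p, lo, hi, dlo, dhi)
        for p, (lo, hi), (dlo, dhi) in zip(hand_xyz, op_bounds, disp_bounds)
    )

# Hypothetical example: a 40 cm cube of real space mapped onto a unit display cube.
op_bounds = [(-0.2, 0.2)] * 3     # metres (assumed)
disp_bounds = [(0.0, 1.0)] * 3    # display units (assumed)
print(hand_to_display((0.0, 0.0, 0.0), op_bounds, disp_bounds))  # centre maps to centre
```

Because the mapping is linear, the cursor's trajectory in the display is geometrically similar to the hand's trajectory in the real operating space.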
As shown in Fig. 3(a) and Fig. 3(b), when it is determined that an operation object needs to be operated on, the hand is moved so that its corresponding cursor lies within the control area of the operation object. Take the reduction and amplification instructions, represented by moving forward and moving backward, as an example. When a grasp is recognised, the starting position of the hand is obtained, generally taking the palm centre as the starting position. The hand's motion trajectory is then tracked: when a forward movement is recognised, the corresponding control instruction is to reduce the operation object; when a backward movement is recognised, the corresponding control instruction is to enlarge it.
In other embodiments, where the grasp action is a selection instruction: when the cursor corresponding to the hand is within the control area of an operation object and a grasp action is recognised, the object under the cursor becomes the operation object, i.e. the current object is selected and operations such as moving, copying and pasting can then be applied to it.
Specifically, as shown in Fig. 3(c), when the hand is in the grasping state and moves forward, the cursor corresponding to the hand in the three-dimensional display gradually enlarges. As shown in Fig. 3(d), when the hand is in the grasping state and moves backward, the corresponding cursor in the three-dimensional display gradually shrinks. The three-dimensional display may be located below or to the side of the depth camera; as shown in Fig. 3(e) and Fig. 3(f), the placement of the three-dimensional display does not affect the display of the three-dimensional operating space.
In other embodiments, where an open-and-rotate finger action is a rotation instruction: when the cursor corresponding to the hand is within the control area of an operation object and the open-and-rotate finger action is recognised, the object under the cursor becomes the operation object, i.e. a rotation operation is applied to the current object.
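The gesture-to-instruction correspondences described in these embodiments can be sketched as a simple lookup table. The gesture and instruction names below are illustrative assumptions; only the forward-reduces / backward-enlarges pairing follows the text above:

```python
# Hypothetical correspondence table: (hand state, motion) -> control instruction.
GESTURE_TABLE = {
    ("open", "none"): "show_cursor",
    ("open", "translate"): "move_cursor",
    ("grasp", "forward"): "zoom_out",       # moving forward reduces the object
    ("grasp", "backward"): "zoom_in",       # moving backward enlarges the object
    ("grasp", "translate"): "move_object",
    ("open_rotate", "rotate"): "rotate_object",
}

def read_instruction(hand_state, motion):
    """Look up the control instruction for a recognised gesture motion;
    unknown combinations yield no instruction."""
    return GESTURE_TABLE.get((hand_state, motion))

print(read_instruction("grasp", "forward"))   # zoom_out
print(read_instruction("grasp", "sideways"))  # None
```

A table of this kind corresponds to the stored "correspondence between each gesture action and its control instruction" that the patent's storage module maintains.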
Based on the embodiments described above, a gesture control device based on three-dimensional display comprises a depth camera, a three-dimensional display and a processor.
The depth camera is used to obtain a depth image of the hand and output it to the processor.
The processor obtains the location information of the hand from the depth image and establishes, according to that location information, the hand's motion trajectory in the three-dimensional coordinate system. The processor is further used to identify the gesture action of the hand according to its motion trajectory in the three-dimensional coordinate system, to read the control instruction corresponding to the gesture action, and to control the operation object in the three-dimensional image according to that control instruction. The processor is also used to control the three-dimensional display to show the gesture action, and to show the trajectory of executing the control instruction corresponding to that gesture action.
In the present embodiment, gesture actions are captured in the operating space (the real space in which the hand performs a series of continuous actions). The three-dimensional image can therefore achieve a naked-eye three-dimensional effect using holography; that is, the real space and the virtual display correspond in real time. Thus, when operating on an object shown by the three-dimensional display, the user can accurately grasp the displayed object, grasp and move it, and so on.
Referring also to Fig. 3(g): suppose the user needs to rotate and move a stereoscopic image (for example, a racket) shown on the three-dimensional display (here a holographic three-dimensional display). Since the operating space and the virtual display correspond in real time, the user only needs to find the position in the operating space corresponding to the target stereoscopic image and make a grasp action. The depth camera detects the user's gesture action and transmits it to the processor. The processor then controls the three-dimensional display to show the target stereoscopic image (the racket) being grasped by the user's hand. When the user subsequently moves the grasp (or swings the arm) in the operating space, the processor controls the three-dimensional display to show the trajectory along which the target stereoscopic image is moved (or swung).
Fig. 4 is a block diagram of the gesture control system based on three-dimensional display. The system comprises:
An information obtaining module, for obtaining the location information of the hand;
A coordinate establishing module, for establishing the hand's motion trajectory in the three-dimensional coordinate system according to the location information;
A gesture recognition module, for identifying the gesture action of the hand according to its motion trajectory in the three-dimensional coordinate system;
An operation control module, for reading the corresponding control instruction according to the gesture action, and controlling the operation object in the three-dimensional image according to that control instruction.
The information obtaining module is further used to obtain a series of continuous depth information of the hand, and the coordinate establishing module is further used to form the hand's motion trajectory in the three-dimensional coordinate system from that depth information.
The operating space of the hand and the three-dimensional coordinate system are in a linear correspondence, where the operating space is the real space in which the hand performs a series of continuous actions.
The gesture recognition module is further used to extract hand contour feature information and, in combination with the location information, perform feature matching to classify and identify the gesture action.
The operation control module comprises a storage module, a reading module and an execution module. The storage module stores the correspondence between each gesture action and its control instruction. The reading module, after a gesture action is identified, reads the control instruction corresponding to that gesture action from the correspondence. The execution module controls the operation object in the three-dimensional image to execute the corresponding action according to the control instruction.
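The store/read/execute flow of the operation control module can be sketched as follows. This is a minimal sketch; the class name, instruction names and scene representation are assumptions for illustration:

```python
class OperationControlModule:
    """Minimal sketch of the storage, reading and execution sub-modules."""

    def __init__(self):
        # Storage sub-module: gesture action -> control instruction.
        self._table = {}

    def store(self, gesture, instruction):
        self._table[gesture] = instruction

    def read(self, gesture):
        # Reading sub-module: fetch the instruction for a recognised gesture.
        return self._table.get(gesture)

    def execute(self, gesture, scene):
        # Execution sub-module: apply the instruction to the operation object.
        instruction = self.read(gesture)
        if instruction == "zoom_in":
            scene["scale"] *= 2.0
        elif instruction == "zoom_out":
            scene["scale"] *= 0.5
        return scene

ctrl = OperationControlModule()
ctrl.store("grasp_backward", "zoom_in")
scene = ctrl.execute("grasp_backward", {"scale": 1.0})
print(scene)  # {'scale': 2.0}
```

Keeping the three responsibilities separate mirrors the patent's division into storage, reading and execution modules: new gesture-to-instruction pairs can be stored without touching the execution logic.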
In the gesture control method, system and device based on three-dimensional display described above, the location information of the hand is obtained, the hand's motion trajectory in the three-dimensional coordinate system is established, the gesture action of the hand is identified from that motion trajectory, the corresponding control instruction is read according to the gesture action, and the operation object in the three-dimensional image is controlled according to that control instruction. That is, by identifying the user's gesture actions and controlling the operation object to execute the corresponding operations, natural human-machine interaction is achieved without touching the display screen.
The technical features of the embodiments described above may be combined arbitrarily. For brevity of description, not all possible combinations of these technical features have been described; however, as long as a combination of technical features contains no contradiction, it should be regarded as falling within the scope of this specification.
The embodiments described above express only several implementations of the present invention, and their description is relatively specific and detailed, but they should not therefore be construed as limiting the scope of the patent. It should be pointed out that, for those of ordinary skill in the art, various modifications and improvements can be made without departing from the inventive concept, and these all belong to the protection scope of the present invention. Therefore, the protection scope of this patent shall be subject to the appended claims.
Claims (9)
1. A gesture control method based on three-dimensional display, comprising the following steps:
obtaining the location information of a hand and establishing the motion trajectory of the hand in a three-dimensional coordinate system, which specifically comprises: establishing a parallel stereo vision system, the parallel stereo vision system comprising a spatial point corresponding to the hand and two cameras, wherein the optical axes of the two cameras are parallel to each other, the focal length of both cameras is f, the optical-axis direction is defined as the Y-axis, the optical centres of the two cameras differ only by a translation vector B along the X-axis, and the two cameras form image planes; the projection of the spatial point on the image plane of one camera is p1, and the foot of the perpendicular from the optical centre of that camera onto its image plane is the point L; the projection of the spatial point on the image plane of the other camera is pr, and the foot of the perpendicular from the optical centre of that camera onto its image plane is the point R; calculating a series of depth information Z of the hand according to the formula Z = fB / (|Lp1| − |Rpr|), wherein |Lp1| − |Rpr| is the difference (disparity) between the imaging positions of the spatial point on the two image planes; and calculating the motion trajectory of the hand in the three-dimensional coordinate system from the depth information;
identifying the gesture action of the hand according to its motion trajectory in the three-dimensional coordinate system; and
reading the corresponding control instruction according to the gesture action, and controlling an operation object in a three-dimensional image according to the control instruction.
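As a numerical illustration of the depth formula in claim 1 (all values below are hypothetical): with focal length f, baseline translation B and disparity d = |Lp1| − |Rpr|, the depth is Z = fB/d.

```python
def depth_from_disparity(f, B, d):
    """Depth Z of a spatial point seen by two parallel cameras:
    Z = f * B / d, where d is the disparity |Lp1| - |Rpr|."""
    if d == 0:
        raise ValueError("zero disparity: point at infinity")
    return f * B / d

# Assumed numbers: 700-pixel focal length, 6 cm baseline, 35-pixel disparity.
print(depth_from_disparity(700, 0.06, 35))  # 1.2 (metres)
```

Nearer points produce larger disparities, so depth falls off inversely with disparity; this is the standard parallel-axis (rectified) stereo geometry that the claim's camera arrangement describes.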
2. The gesture control method based on three-dimensional display according to claim 1, characterized in that the operating space of the hand and the three-dimensional coordinate system are in a linear correspondence, wherein the operating space is the real space in which the hand performs a series of continuous actions.
3. The gesture control method based on three-dimensional display according to claim 1, characterized in that the step of identifying the gesture action of the hand according to its motion trajectory in the three-dimensional coordinate system comprises:
extracting hand contour feature information and, in combination with the location information, performing feature matching to classify and identify the gesture action.
4. The gesture control method based on three-dimensional display according to claim 1, characterized in that the step of reading the corresponding control instruction according to the gesture action and controlling the operation object in the three-dimensional image according to the control instruction comprises:
storing the correspondence between each gesture action and its control instruction;
after a gesture action is identified, reading the control instruction corresponding to that gesture action from the correspondence; and
controlling the operation object in the three-dimensional image to execute the corresponding action according to the control instruction.
5. A gesture control system based on three-dimensional display, characterized by comprising:
an information obtaining module, for obtaining the location information of a hand;
a coordinate establishing module, for establishing the motion trajectory of the hand in a three-dimensional coordinate system according to the location information, wherein establishing the motion trajectory of the hand in the three-dimensional coordinate system according to the location information specifically comprises: establishing a parallel stereo vision system, the parallel stereo vision system comprising a spatial point corresponding to the hand and two cameras, wherein the optical axes of the two cameras are parallel to each other, the focal length of both cameras is f, the optical-axis direction is defined as the Y-axis, the optical centres of the two cameras differ only by a translation vector B along the X-axis, and the two cameras form image planes; the projection of the spatial point on the image plane of one camera is p1, and the foot of the perpendicular from the optical centre of that camera onto its image plane is the point L; the projection of the spatial point on the image plane of the other camera is pr, and the foot of the perpendicular from the optical centre of that camera onto its image plane is the point R; calculating a series of depth information Z of the hand according to the formula Z = fB / (|Lp1| − |Rpr|), wherein |Lp1| − |Rpr| is the difference between the imaging positions of the spatial point on the two image planes; and calculating the motion trajectory of the hand in the three-dimensional coordinate system from the depth information;
a gesture recognition module, for identifying the gesture action of the hand according to its motion trajectory in the three-dimensional coordinate system; and
an operation control module, for reading the corresponding control instruction according to the gesture action, and controlling an operation object in a three-dimensional image according to the control instruction.
6. The gesture control system based on three-dimensional display according to claim 5, characterized in that the operating space of the hand and the three-dimensional coordinate system are in a linear correspondence, wherein the operating space is the real space in which the hand performs a series of continuous actions.
7. The gesture control system based on three-dimensional display according to claim 5, characterized in that the gesture recognition module is further used to extract hand contour feature information and, in combination with the location information, perform feature matching to classify and identify the gesture action.
8. The gesture control system based on three-dimensional display according to claim 5, characterized in that the operation control module comprises a storage module, a reading module and an execution module; the storage module is used to store the correspondence between each gesture action and its control instruction; the reading module is used to read, after a gesture action is identified, the control instruction corresponding to that gesture action from the correspondence; and the execution module is used to control the operation object in the three-dimensional image to execute the corresponding action according to the control instruction.
9. A gesture control device based on three-dimensional display, characterized by comprising a depth camera, a three-dimensional display and a processor;
wherein the depth camera is used to obtain a depth image of a hand and output it to the processor, and obtaining the depth image of the hand specifically comprises: establishing a parallel stereo vision system, the parallel stereo vision system comprising a spatial point corresponding to the hand and two cameras, wherein the optical axes of the two cameras are parallel to each other, the focal length of both cameras is f, the optical-axis direction is defined as the Y-axis, the optical centres of the two cameras differ only by a translation vector B along the X-axis, and the two cameras form image planes; the projection of the spatial point on the image plane of one camera is p1, and the foot of the perpendicular from the optical centre of that camera onto its image plane is the point L; the projection of the spatial point on the image plane of the other camera is pr, and the foot of the perpendicular from the optical centre of that camera onto its image plane is the point R; calculating a series of depth information Z of the hand according to the formula Z = fB / (|Lp1| − |Rpr|), wherein |Lp1| − |Rpr| is the difference between the imaging positions of the spatial point on the two image planes, the depth image comprising the depth information;
the processor obtains the location information of the hand from the depth image and establishes, according to that location information, the motion trajectory of the hand in a three-dimensional coordinate system; the processor is further used to identify the gesture action of the hand according to its motion trajectory in the three-dimensional coordinate system; the processor is further used to read the corresponding control instruction according to the gesture action, and to control an operation object in a three-dimensional image according to the control instruction;
the processor is further used to control the three-dimensional display to show the gesture action, and to show the trajectory of executing the control instruction corresponding to that gesture action.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201510735569.7A CN105353873B (en) | 2015-11-02 | 2015-11-02 | Gesture control method and system based on Three-dimensional Display |
PCT/CN2016/076748 WO2017075932A1 (en) | 2015-11-02 | 2016-03-18 | Gesture-based control method and system based on three-dimensional displaying |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201510735569.7A CN105353873B (en) | 2015-11-02 | 2015-11-02 | Gesture control method and system based on Three-dimensional Display |
Publications (2)
Publication Number | Publication Date |
---|---|
CN105353873A CN105353873A (en) | 2016-02-24 |
CN105353873B true CN105353873B (en) | 2019-03-15 |
Family
ID=55329857
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201510735569.7A Active CN105353873B (en) | 2015-11-02 | 2015-11-02 | Gesture control method and system based on Three-dimensional Display |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN105353873B (en) |
WO (1) | WO2017075932A1 (en) |
Families Citing this family (37)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105353873B (en) * | 2015-11-02 | 2019-03-15 | 深圳奥比中光科技有限公司 | Gesture control method and system based on Three-dimensional Display |
CN105589293A (en) * | 2016-03-18 | 2016-05-18 | 严俊涛 | Holographic projection method and holographic projection system |
CN105955461A (en) * | 2016-04-25 | 2016-09-21 | 乐视控股(北京)有限公司 | Interactive interface management method and system |
US10377042B2 (en) * | 2016-06-17 | 2019-08-13 | Intel Corporation | Vision-based robot control system |
CN108073267B (en) * | 2016-11-10 | 2020-06-16 | 腾讯科技(深圳)有限公司 | Three-dimensional control method and device based on motion trail |
CN106774849B (en) * | 2016-11-24 | 2020-03-17 | 北京小米移动软件有限公司 | Virtual reality equipment control method and device |
CN106933347A (en) * | 2017-01-20 | 2017-07-07 | 深圳奥比中光科技有限公司 | The method for building up and equipment in three-dimensional manipulation space |
CN106919928A (en) * | 2017-03-08 | 2017-07-04 | 京东方科技集团股份有限公司 | gesture recognition system, method and display device |
WO2018196552A1 (en) | 2017-04-25 | 2018-11-01 | 腾讯科技(深圳)有限公司 | Method and apparatus for hand-type display for use in virtual reality scene |
CN107368194A (en) * | 2017-07-21 | 2017-11-21 | 上海爱优威软件开发有限公司 | The gesture control method of terminal device |
CN107463261B (en) * | 2017-08-11 | 2021-01-15 | 北京铂石空间科技有限公司 | Three-dimensional interaction system and method |
CN107678425A (en) * | 2017-08-29 | 2018-02-09 | 南京理工大学 | A kind of car controller based on Kinect gesture identifications |
CN107589628A (en) * | 2017-09-11 | 2018-01-16 | 大连海事大学 | A kind of holographic projector and its method of work based on gesture identification |
CN107976183A (en) * | 2017-12-18 | 2018-05-01 | 北京师范大学珠海分校 | A kind of spatial data measuring method and device |
CN108052237B (en) * | 2018-01-05 | 2022-01-14 | 上海昶音通讯科技有限公司 | 3D projection touch device and touch method thereof |
CN108363482A (en) * | 2018-01-11 | 2018-08-03 | 江苏四点灵机器人有限公司 | A method of the three-dimension gesture based on binocular structure light controls smart television |
CN112567319A (en) * | 2018-03-09 | 2021-03-26 | 彼乐智慧科技(北京)有限公司 | Signal input method and device |
CN108681402A (en) * | 2018-05-16 | 2018-10-19 | Oppo广东移动通信有限公司 | Identify exchange method, device, storage medium and terminal device |
CN108776994B (en) * | 2018-05-24 | 2022-10-25 | 长春理工大学 | Roesser model based on true three-dimensional display system and implementation method thereof |
CN110659543B (en) * | 2018-06-29 | 2023-07-14 | 比亚迪股份有限公司 | Gesture recognition-based vehicle control method and system and vehicle |
CN109240494B (en) * | 2018-08-23 | 2023-09-12 | 京东方科技集团股份有限公司 | Control method, computer-readable storage medium and control system for electronic display panel |
KR102155378B1 (en) * | 2018-09-19 | 2020-09-14 | 주식회사 브이터치 | Method, system and non-transitory computer-readable recording medium for supporting object control |
CN112714900A (en) * | 2018-10-29 | 2021-04-27 | 深圳市欢太科技有限公司 | Display screen operation method, electronic device and readable storage medium |
CN109732606A (en) * | 2019-02-13 | 2019-05-10 | 深圳大学 | Long-range control method, device, system and the storage medium of mechanical arm |
CN110058688A (en) * | 2019-05-31 | 2019-07-26 | 安庆师范大学 | A kind of projection system and method for dynamic gesture page turning |
CN110456957B (en) * | 2019-08-09 | 2022-05-03 | 北京字节跳动网络技术有限公司 | Display interaction method, device, equipment and storage medium |
CN110794959A (en) * | 2019-09-25 | 2020-02-14 | 苏州联游信息技术有限公司 | Gesture interaction AR projection method and device based on image recognition |
CN110889390A (en) * | 2019-12-05 | 2020-03-17 | 北京明略软件系统有限公司 | Gesture recognition method, gesture recognition device, control equipment and machine-readable storage medium |
CN111142664B (en) * | 2019-12-27 | 2023-09-01 | 恒信东方文化股份有限公司 | Multi-user real-time hand tracking system and tracking method |
CN113065383B (en) * | 2020-01-02 | 2024-03-29 | 中车株洲电力机车研究所有限公司 | Vehicle-mounted interaction method, device and system based on three-dimensional gesture recognition |
CN111242084B (en) * | 2020-01-21 | 2023-09-08 | 深圳市优必选科技股份有限公司 | Robot control method, robot control device, robot and computer readable storage medium |
CN111949134A (en) * | 2020-08-28 | 2020-11-17 | 深圳Tcl数字技术有限公司 | Human-computer interaction method, device and computer-readable storage medium |
CN112329540A (en) * | 2020-10-10 | 2021-02-05 | 广西电网有限责任公司电力科学研究院 | Identification method and system for overhead transmission line operation in-place supervision |
CN112241204B (en) * | 2020-12-17 | 2021-08-27 | 宁波均联智行科技股份有限公司 | Gesture interaction method and system of vehicle-mounted AR-HUD |
CN114701409B (en) * | 2022-04-28 | 2023-09-05 | 东风汽车集团股份有限公司 | Gesture interactive intelligent seat adjusting method and system |
CN115840507A (en) * | 2022-12-20 | 2023-03-24 | 北京帮威客科技有限公司 | Large-screen equipment interaction method based on 3D image control |
CN117278735A (en) * | 2023-09-15 | 2023-12-22 | 山东锦霖智能科技集团有限公司 | Immersive image projection equipment |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102236414A (en) * | 2011-05-24 | 2011-11-09 | 北京新岸线网络技术有限公司 | Picture operation method and system in three-dimensional display space |
CN102650906A (en) * | 2012-04-06 | 2012-08-29 | 深圳创维数字技术股份有限公司 | Control method and device for user interface |
CN103176605A (en) * | 2013-03-27 | 2013-06-26 | 刘仁俊 | Control device of gesture recognition and control method of gesture recognition |
CN103488292A (en) * | 2013-09-10 | 2014-01-01 | 青岛海信电器股份有限公司 | Three-dimensional application icon control method and device |
Family Cites Families (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10242255B2 (en) * | 2002-02-15 | 2019-03-26 | Microsoft Technology Licensing, Llc | Gesture recognition system using depth perceptive sensors |
CN102270035A (en) * | 2010-06-04 | 2011-12-07 | 三星电子株式会社 | Apparatus and method for selecting and operating object in non-touch mode |
CN102226880A (en) * | 2011-06-03 | 2011-10-26 | 北京新岸线网络技术有限公司 | Somatosensory operation method and system based on virtual reality |
CN102411426A (en) * | 2011-10-24 | 2012-04-11 | 由田信息技术(上海)有限公司 | Operating method of electronic device |
CN102426480A (en) * | 2011-11-03 | 2012-04-25 | 康佳集团股份有限公司 | Man-machine interactive system and real-time gesture tracking processing method for same |
US9201500B2 (en) * | 2012-09-28 | 2015-12-01 | Intel Corporation | Multi-modal touch screen emulator |
KR20140052640A (en) * | 2012-10-25 | 2014-05-07 | 삼성전자주식회사 | Method for displaying a cursor on a display and system performing the same |
CN104182035A (en) * | 2013-05-28 | 2014-12-03 | 中国电信股份有限公司 | Method and system for controlling television application program |
CN104571510B (en) * | 2014-12-30 | 2018-05-04 | 青岛歌尔声学科技有限公司 | A kind of system and method that gesture is inputted in 3D scenes |
CN105353873B (en) * | 2015-11-02 | 2019-03-15 | 深圳奥比中光科技有限公司 | Gesture control method and system based on Three-dimensional Display |
- 2015
  - 2015-11-02 CN CN201510735569.7A patent/CN105353873B/en active Active
- 2016
  - 2016-03-18 WO PCT/CN2016/076748 patent/WO2017075932A1/en active Application Filing
Also Published As
Publication number | Publication date |
---|---|
WO2017075932A1 (en) | 2017-05-11 |
CN105353873A (en) | 2016-02-24 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN105353873B (en) | Gesture control method and system based on Three-dimensional Display | |
US11164289B1 (en) | Method for generating high-precision and microscopic virtual learning resource | |
JP6246757B2 (en) | Method and system for representing virtual objects in field of view of real environment | |
Seitz et al. | Plenoptic image editing | |
JP4932951B2 (en) | Facial image processing method and system | |
JP4865093B2 (en) | Method and system for animating facial features and method and system for facial expression transformation | |
US8218825B2 (en) | Capturing and processing facial motion data | |
CN104077804B (en) | A kind of method based on multi-frame video picture construction three-dimensional face model | |
US8432435B2 (en) | Ray image modeling for fast catadioptric light field rendering | |
JP5586594B2 (en) | Imaging system and method | |
Sawhney et al. | Video flashlights: real time rendering of multiple videos for immersive model visualization | |
Luo et al. | Multi-view hair capture using orientation fields | |
Starck et al. | The multiple-camera 3-d production studio | |
JP2004537082A (en) | Real-time virtual viewpoint in virtual reality environment | |
CN109242959A (en) | Method for reconstructing three-dimensional scene and system | |
Yu et al. | Multiperspective modeling, rendering, and imaging | |
Enciso et al. | Synthesis of 3D faces | |
Sang et al. | Inferring super-resolution depth from a moving light-source enhanced RGB-D sensor: a variational approach | |
CN109064533A (en) | A kind of 3D loaming method and system | |
Güssefeld et al. | Are reflectance field renderings appropriate for optical flow evaluation? | |
Hülsken et al. | Modeling and animating virtual humans for real-time applications | |
Liu | Impact of high-tech image formats based on full-frame sensors on visual experience and film-television production | |
US20240119671A1 (en) | Systems and methods for face asset creation and models from one or more images | |
Yao et al. | Neural Radiance Field-based Visual Rendering: A Comprehensive Review | |
Shi et al. | CG Benefited Driver Facial Landmark Localization Across Large Rotation |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant |