CN108491066A - Gesture interaction method and device - Google Patents
Gesture interaction method and device
- Publication number
- CN108491066A CN108491066A CN201810090288.4A CN201810090288A CN108491066A CN 108491066 A CN108491066 A CN 108491066A CN 201810090288 A CN201810090288 A CN 201810090288A CN 108491066 A CN108491066 A CN 108491066A
- Authority
- CN
- China
- Prior art keywords
- finger
- center point
- interaction center
- coordinate value
- depth image
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/017—Gesture based interaction, e.g. based on a set of recognized hand gestures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0487—Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
- G06F3/0488—Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
- G06F3/04883—Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures for inputting data by handwriting, e.g. gesture or text
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/107—Static hand or arm
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/107—Static hand or arm
- G06V40/117—Biometrics derived from hands
Abstract
The invention discloses a gesture interaction method and device. The method includes: obtaining a depth image of a user's finger interaction on the projection interaction plane of a smart projector; processing the depth image to obtain the coordinate value of the corresponding finger interaction center point, judging from the coordinate value of the finger interaction center point whether the finger interaction meets a preset action condition, and, when it does, performing interpolation compensation on the finger interaction center point; and reporting the coordinate value of the finger interaction center point together with the interpolated point, so that the finger interaction is determined to be a finger slide and a preset operation is executed according to the correspondence between finger slides and preset operations in the smart projector. This technical solution turns an originally discontinuous sliding interaction into a smooth one, improves the user interaction experience, and broadens the application field of the smart projector.
Description
Technical field
The present invention relates to the technical field of smart projectors, and in particular to a gesture interaction method and device.
Background art
A projector is a common electronic device, and projectors are becoming increasingly intelligent and interactive. However, existing user gesture interaction, especially the sliding interaction process, is not smooth, and the user interaction experience is poor.
Summary of the invention
The present invention provides a gesture interaction method and device, to solve, or at least partly solve, the problem that the gesture interaction process of a smart projector is not smooth and the user experience is poor.
To achieve the above technical purpose, the technical solution of the invention is realized as follows:
According to one aspect of the invention, a gesture interaction method is provided, the method including:
obtaining a depth image of a user's finger interaction on the projection interaction plane of a smart projector;
processing the depth image to obtain the coordinate value of the corresponding finger interaction center point, judging from the coordinate value of the finger interaction center point of the depth image whether the finger interaction meets a preset action condition, and, when it does, performing interpolation compensation on the finger interaction center point;
reporting the coordinate value of the finger interaction center point and the interpolated point, so that the finger interaction is determined to be a finger slide according to the coordinate value of the finger interaction center point and the interpolated point, and a preset operation is executed according to the correspondence between finger slides and preset operations in the smart projector.
According to another aspect of the present invention, a gesture interaction device is provided, the device including:
an image acquisition unit, for obtaining a depth image of a user's finger interaction on the projection interaction plane of a smart projector;
an image processing unit, for processing the depth image to obtain the coordinate value of the corresponding finger interaction center point, judging from the coordinate value of the finger interaction center point of the depth image whether the finger interaction meets a preset action condition, and, when it does, performing interpolation compensation on the finger interaction center point;
an interaction realization unit, for reporting the coordinate value of the finger interaction center point and the interpolated point, so that the finger interaction is determined to be a finger slide according to the coordinate value of the finger interaction center point and the interpolated point, and a preset operation is executed according to the correspondence between finger slides and preset operations in the smart projector.
According to a further aspect of the invention, an electronic device is provided, the electronic device including a memory and a processor connected in communication by an internal bus, the memory storing program instructions executable by the processor, the program instructions, when executed by the processor, realizing the gesture interaction method of the above aspect of the present invention.
The beneficial effects of the invention are as follows: the gesture interaction method and device of the embodiments of the present invention obtain a depth image of the user's finger interaction on the projection interaction plane of a smart projector, process the depth image to obtain the coordinate value of the corresponding finger interaction center point, judge from the coordinate value of the finger interaction center point of the depth image whether the finger interaction meets a preset action condition, and, when it does, perform interpolation compensation on the finger interaction center point and then report both the coordinate value of the finger interaction center point and the interpolated point, so that a preset operation is executed according to the correspondence between finger slides and preset operations in the smart projector. Thus, after recognizing a finger sliding interaction, the embodiments of the present invention use interpolation to compensate the finger interaction center points and report predicted coordinate points, turning an originally discontinuous sliding interaction into a smooth one, improving the user's gesture interaction experience, and broadening the application field and scope of the smart projector.
Description of the drawings
Fig. 1 is a schematic depth map of a captured user finger interaction;
Fig. 2 is another schematic depth map of a captured user finger interaction;
Fig. 3 is a flow chart of a gesture interaction method of one embodiment of the invention;
Fig. 4 is a sequence diagram of a gesture interaction method of one embodiment of the invention;
Fig. 5 is an interpolation flow chart of one embodiment of the invention;
Fig. 6 is a schematic depth map after interpolation according to one embodiment of the invention;
Fig. 7 is a schematic diagram of the principle of the preset distance threshold calculation of one embodiment of the invention;
Fig. 8 is a schematic diagram of the finger interaction center points of a click gesture interaction of one embodiment of the invention;
Fig. 9 is a block diagram of a gesture interaction device of one embodiment of the invention;
Figure 10 is a structural diagram of the electronic device of one embodiment of the invention.
Detailed description of embodiments
There are two important components in a smart projector: one is the vertical projection module, and the other is the depth-of-field (depth) module. The vertical projection module can be connected with a background operating system, such as the Android system of a mobile phone, and can project the content of the phone's desktop onto a real desktop, thereby providing an interactive window visible to the user. The depth module obtains, by means of infrared scanning (or other feasible ranging methods), the distance from the depth module to each projected point on the real desktop. Compared with the flat desktop, if a hand is added to the scene, the depth values scanned by the depth module become smaller. By taking the difference between the data before and after the hand is added, the essential information of the hand can be obtained, including the position information of the finger center point.

In order to effectively capture the information of the user's gesture interaction, the projection plane of the depth module needs to cover the projection plane of the vertical projection module; that is, the area of the projection plane of the depth module is larger than that of the vertical projection module. For ease of understanding, the working principle of the depth module is illustrated here: the depth module emits continuous near-infrared pulses toward the target scene and then receives the light pulses reflected back by the object with a sensor; by comparing the phase difference between the emitted light pulses and the light pulses reflected by the object, the transit delay of the light pulses can be calculated, from which the distance of the object relative to the camera is obtained, and finally a depth image is produced. Through the cooperation of the above two modules, the user can obtain an experience similar to gesture interaction on a smartphone touch screen.
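For illustration, the time-of-flight principle described above can be sketched as a short calculation. The modulation frequency and phase value below are assumptions chosen only to show the arithmetic; the patent does not specify the depth module's parameters.

```python
import math

C = 299_792_458.0  # speed of light, m/s

def tof_distance(phase_diff_rad: float, mod_freq_hz: float) -> float:
    """One-way distance to the object from the measured phase difference.

    The sensor compares emitted and reflected pulses; the phase shift
    converts to a round-trip transit delay, which is then halved.
    """
    transit_delay = phase_diff_rad / (2 * math.pi * mod_freq_hz)  # round-trip time, s
    return C * transit_delay / 2                                  # one-way distance, m

# e.g. a phase shift of pi/2 at an assumed 20 MHz modulation frequency
d = tof_distance(math.pi / 2, 20e6)   # roughly 1.87 m
```

Repeating this per pixel over the scanned scene yields the depth image the text refers to.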
The inventor of the present application discovered that when a smart projector performs finger sliding interaction, the frame rate of the depth module is relatively low, generally 30 FPS (frames per second), i.e. 30 images are acquired per second. The more frames per second, the smoother the displayed action; 30 FPS means that in gesture recognition, gesture interaction above a certain speed cannot be fully captured. As shown in Fig. 1 and Fig. 2, where 11 is the palm and 12 is the finger interaction center point, the gesture the user actually makes is a single sliding action from left to right, but because the frame rate at which the depth module acquires image data is relatively low, the originally coherent movement becomes discontinuous. The user's experience is that an originally smooth slide stutters, and the poor user experience also limits the development of smart projectors.
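Rough numbers make the choppiness concrete. The swipe speed below is an assumption for illustration only; the point is that at 30 FPS the sampled finger positions are visibly far apart.

```python
# At 30 FPS the depth module samples a sliding finger only every ~33 ms,
# so a moderately fast swipe moves a noticeable distance between frames.

fps = 30
frame_gap_s = 1 / fps                  # ~0.033 s between depth frames

finger_speed_m_s = 0.5                 # assumed swipe speed
gap_cm = finger_speed_m_s * frame_gap_s * 100
# ~1.7 cm of travel between consecutive sampled positions
```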
For this reason, an embodiment of the invention provides a gesture interaction method applied to a smart projector. Fig. 3 is a flow chart of a gesture interaction method of one embodiment of the invention; referring to Fig. 3, the method includes the following steps:

Step S101: obtain a depth image of the user's finger interaction on the projection interaction plane of the smart projector;

Step S102: process the depth image to obtain the coordinate value of the corresponding finger interaction center point, judge from the coordinate value of the finger interaction center point of the depth image whether the finger interaction meets a preset action condition, and, when it does, perform interpolation compensation on the finger interaction center point;

Step S103: report the coordinate value of the finger interaction center point and the interpolated point, so that the finger interaction is determined to be a finger slide according to the coordinate value of the finger interaction center point and the interpolated point, and a preset operation is executed according to the correspondence between finger slides and preset operations in the smart projector.
As can be seen from Fig. 3, the gesture interaction method of this embodiment obtains a depth image of the user's finger interaction, then processes the depth image to obtain the coordinate value of the corresponding finger interaction center point; when the finger interaction is judged to meet the preset action condition, it reports the coordinate value of the finger interaction center point, performs interpolation compensation on it, and reports the interpolated point; the finger interaction is then determined to be a finger slide from the coordinate value of the finger interaction center point and the interpolated point, and a preset operation is executed according to the correspondence between finger slides and preset operations in the smart projector.
The preset operation here is, for example, page turning, waking the screen, and similar operations.
It can be appreciated that once the gesture interaction method of this embodiment has reported to the system the coordinate values of finger interaction center points that can indicate a sliding action, the system can execute the preset operation according to the saved correspondence between slides and preset operations; this is similar to realizing touch interaction through the touch screen of a smartphone.
It should be noted that the user can perform gesture interaction on the projection plane of the smart projector, that is, can click and slide with a finger on the projection plane. There is a relatively fixed contact area between the finger and the projection plane, and the depth module can, according to the depth values in the acquired depth image, find the average of the depth values within a preset range (such as a 1 cm range) and then use that average as the coordinate value of the finger interaction center point. In addition, unlike a color image that records color values, a depth image records the depth values of the captured target scene; a depth value characterizes the distance from a point in the projection scene to the camera (i.e. the depth module).
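A minimal sketch of the center-point extraction described above, under stated assumptions: the function name, the dictionary-based depth representation, and the 10 mm touch band are illustrative inventions, not the patent's implementation. Pixels whose depth dropped relative to the empty-desk baseline belong to the hand; those within about 1 cm of the desk are treated as touching, and their pixel coordinates are averaged into one center point.

```python
def finger_center(baseline, frame, touch_band_mm=10):
    """baseline/frame: dicts mapping (x, y) pixel -> depth in mm.

    Returns the averaged (x, y) of pixels close enough to the desk to
    count as a touch, or None if no finger interaction point is found.
    """
    touching = [
        (x, y)
        for (x, y), d in frame.items()
        if baseline[(x, y)] - d > 0                # hand present: depth decreased
        and baseline[(x, y)] - d <= touch_band_mm  # within ~1 cm of the desk
    ]
    if not touching:
        return None
    n = len(touching)
    return (sum(x for x, _ in touching) / n,
            sum(y for _, y in touching) / n)

baseline = {(x, y): 500 for x in range(4) for y in range(4)}  # flat desk, 500 mm
frame = dict(baseline)
frame[(1, 1)] = 495   # fingertip pixels slightly above the desk
frame[(2, 1)] = 494
center = finger_center(baseline, frame)   # averages to (1.5, 1.0)
```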
In order to improve the efficiency of the algorithm of this embodiment, one embodiment of the invention uses three independent threads computing in parallel; that is to say, steps S101, S102 and S103 in Fig. 3 are each realized by their own thread.
Fig. 4 is a sequence diagram of a gesture interaction method of one embodiment of the invention. Referring to Fig. 4, the downward arrow indicates the time axis; in this embodiment the acquisition time of each image is uniform, and taking a depth module rate of 30 FPS as an example, the acquisition time of each image is about 33 milliseconds. The acquisition of one sliding action is used for illustration: the triangular marks in Fig. 4 represent the finger interaction center points identified in the depth images (each triangular mark comes from one depth image), the circular marks represent the position coordinate values of the finger interaction center points obtained after processing each depth image, and the five-pointed-star marks represent the reported coordinate values of the finger interaction center points and the inserted predicted coordinate values (the solid points, i.e. the interpolated points).

The workflow in this embodiment is roughly: after thread three gets the first image data, the image data enters the algorithm module of thread two, while thread three continues to obtain the images acquired by the depth module, realizing a parallel mode. When the algorithm in thread two has calculated the finger interaction center point, thread one starts and reports the position coordinates of the obtained finger interaction center point to the background system (such as the Android system of a mobile phone); the background system realizes the corresponding operation through the positions of successive finger interaction center points and the correspondence between slides and preset operations.
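The three-thread pipeline above can be sketched as a producer-consumer chain. This is a structural sketch only; the frame data, function names, and two-frame stream are stand-ins for the device-specific acquisition, algorithm, and reporting code.

```python
import queue
import threading

frames = queue.Queue()    # thread 3 -> thread 2
centers = queue.Queue()   # thread 2 -> thread 1
reported = []

def acquire():            # thread 3: stand-in for the depth module
    for frame in [{"center": (10, 10)}, {"center": (14, 10)}]:
        frames.put(frame)
    frames.put(None)      # end-of-stream marker

def process():            # thread 2: extract the interaction center point
    while (frame := frames.get()) is not None:
        centers.put(frame["center"])
    centers.put(None)

def report():             # thread 1: hand coordinates to the background system
    while (center := centers.get()) is not None:
        reported.append(center)

threads = [threading.Thread(target=t) for t in (acquire, process, report)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# reported now holds [(10, 10), (14, 10)]
```

Because the stages only meet through queues, thread three keeps acquiring while thread two is still computing, which is the parallel mode the text describes.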
Fig. 5 is the interpolation flow chart of one embodiment of the invention. In this embodiment, the corresponding judgment starts from the second finger interaction center point coordinate. The judgment process is shown in Fig. 5; after the conditions are met, a new point is obtained by the interpolation method, such as point 3 in Fig. 4. A timer is then started (the algorithm time interval is roughly constant at 50 milliseconds), and at the moment of half the algorithm time interval, thread one reports the coordinate value of point 3 to the system; this is equivalent to inserting a new point 3 between point 2 and point 4 as realized in Fig. 6, thereby realizing the continuity and fluency of the slide.
Referring to Fig. 5, the flow starts with the condition of judgment step 201: whether Point(n-2) is a null value; that is, it must first be determined that no finger interaction point was detected in the (n-2)-th image. Here n is, for example, 3.

Here Point(n) = (x(n), y(n)) is the coordinate of the finger interaction center point detected in the n-th depth image, i.e. its coordinate value is (x, y). The coordinate system takes the top-left pixel of an image as the coordinate origin, with the width and height directions as the respective coordinate axes.
The condition of step 202 is that Point(n-1) is a non-null value and a single touch; that is, the (n-1)-th image detects a finger interaction center point and detects only one. If the condition of step 202 is met, this is equivalent to having obtained point 1 in Fig. 4.

Step 203 is a detection with the same conditions as step 202, that is, Point(n) is a non-null value and a single touch. If the condition of step 203 is also met, this is equivalent to having obtained point 2 in Fig. 4.
It should be emphasized that the single-touch condition is imposed here because the algorithm of this embodiment supports 10-point touch (for example, ten fingers contacting the projection plane); that is, in the same image there can be ten finger interaction center points, which may then generate the coordinates of the points Point(n)_1, Point(n)_2, ..., Point(n)_10. Under usual conditions, however, sliding is performed with one finger, and this embodiment aims to improve the choppiness of the sliding process, so the user's gesture interaction is limited to the single-finger sliding scene, i.e. the non-sliding interaction scene of ten fingers contacting the screen is excluded.
The condition of step 204 is that the absolute value of Point(n) - Point(n-1) is greater than a preset distance threshold. This condition screens the distance between the two points obtained in steps 202 and 203.

Here the preset distance threshold is √2. This is because, as shown in Fig. 7, in general a button on the interaction surface of a gesture interaction is composed of several coordinate points (the dots in Fig. 7); the distance between two adjacent coordinate points is 1, four coordinate points constitute a square, and the diagonal distance between coordinate points is therefore √2.
It is emphasized that using the condition and threshold value of above-mentioned steps 202 to 204, can be very good to evade one other
Interaction scenarios, finally navigate to sliding finger interact this scene.First, this algorithm be for single-point touch, this
Which two central point just specify that range formula in step 204 applies is.Secondly, the condition of above-mentioned steps 202 to 204 is utilized
Interference can be filtered out with threshold value.For example, Fig. 8 is the finger interaction center point of the click gesture interaction of one embodiment of the invention
Schematic diagram, for finger interaction (i.e. finger in the of short duration time by interaction face) scene during the clicking of Fig. 8 signals
It can be very good to filter out.Such case can still obtain a series of coordinate value, these coordinate values when carrying out finger identification
The characteristics of be to be held essentially constant, and this slip gesture interaction scenarios that be apparently not the embodiment of the present invention be directed to, even if it can
By the condition of step 202 and 203, can be also filtered in step 204, because of finger centre point in consecutive image at this time
It sets and is basically unchanged, center position coordinate value is made the absolute value obtained after difference and (is not more than close to 0)。
In addition, the click interval in Fig. 8 can also be filtered out well. A click interval refers to the process of a finger finishing one click and then, after the interaction page has opened, clicking again. Obviously, the consecutive solid dots in Fig. 8 fail the condition detection of step 204, and where hollow dots are interspersed between solid dots, the detections of steps 202 and 203 cannot be met, so this situation is also excluded; finally only the sliding interaction scene remains.
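The gating of steps 201-204 can be written as one small function. This is a reconstruction of the conditions described above; the function name, the list-of-points touch representation, and the sample coordinates are assumptions for illustration.

```python
import math

SQRT2 = math.sqrt(2)   # preset distance threshold from step 204

def is_sliding(p_n2, p_n1, p_n, threshold=SQRT2):
    """Steps 201-204: do the last three frames look like a single-finger slide?

    p_n2, p_n1, p_n are the interaction points of frames n-2, n-1 and n:
    either None (no touch detected) or a list of (x, y) points (multi-touch).
    """
    if p_n2 is not None:                  # step 201: frame n-2 must be empty
        return False
    if p_n1 is None or len(p_n1) != 1:    # step 202: exactly one touch in n-1
        return False
    if p_n is None or len(p_n) != 1:      # step 203: exactly one touch in n
        return False
    (x1, y1), (x0, y0) = p_n[0], p_n1[0]
    dist = math.hypot(x1 - x0, y1 - y0)   # step 204: moved far enough
    return dist > threshold
```

A stationary click (distance near 0) fails step 204, a ten-finger press fails steps 202-203, and only a moving single finger passes all four checks.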
After the conditions of steps 201-204 have all been passed, the conclusion is obtained that the user is sliding, and subsequently a finger interaction center point coordinate value will be inserted and reported to the system at a particular moment.

That is, step 205 is executed: timing starts from the passing of step 204, and when the time has reached a preset time threshold, a new finger interaction center point is inserted after the position indicated by the finger interaction center point in the n-th depth image, the coordinate value of the new finger interaction center point is reported, and n is set to n+1.
The preset time threshold is calculated in this way: in conjunction with Fig. 4, this embodiment injects one interpolated point between two detected points to improve the choppiness of the slide, and the time between obtaining two finger interaction center points is about 50 ms (milliseconds), that is, the time the algorithm in thread two takes to determine a finger interaction center point. Preferably, this embodiment reports the interpolated point 20 milliseconds after each finger interaction center point that passed the conditions is reported; for example, 20 ms after point 2 is reported, the coordinate of the predicted point (i.e. the interpolated point) is reported.
The interpolated point is calculated in this way: the coordinate of Point(n-1) is known to be (x(n-1), y(n-1)) and the coordinate of Point(n) is (x(n), y(n)), so the coordinate of the intermediate point of the two is ((x(n-1) + x(n))/2, (y(n-1) + y(n))/2). It can thus be seen that this embodiment uses the linear interpolation method to predict the subsequent point: the formula for the inserted point that follows is (x, y) = (x(n) + (x(n) - x(n-1))/2, y(n) + (y(n) - y(n-1))/2), which after rearrangement gives (x, y) = ((3x(n) - x(n-1))/2, (3y(n) - y(n-1))/2). The new interpolated point so obtained is then reported, and the judgment of subsequent finger interaction center points and the calculation and reporting of interpolated points repeat the flow of steps 203-205.
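The prediction above amounts to extrapolating half a step past Point(n) along the direction from Point(n-1) to Point(n). The formula is a reconstruction (the published equations did not survive translation), so the code below should be read as an assumption consistent with the surrounding description, not the patent's exact implementation.

```python
def predict_interpolation(p_prev, p_curr):
    """Predicted point injected between Point(n) and the expected Point(n+1).

    p_prev = Point(n-1), p_curr = Point(n); linear prediction places the
    injected point half an inter-frame step beyond the latest point.
    """
    (x0, y0), (x1, y1) = p_prev, p_curr
    return (x1 + (x1 - x0) / 2, y1 + (y1 - y0) / 2)

# A finger moving 4 px to the right per frame: the injected point lands
# 2 px past the last detected point, midway to the expected next point.
p = predict_interpolation((10, 10), (14, 10))   # (16.0, 10.0)
```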
It should be pointed out that, in order to ensure the smooth effect of the slide, this embodiment does not use the end of the slide as the condition for exiting interpolation: after the last point of the user's slide, one more interpolated point is still inserted according to this algorithm. For example, referring to Fig. 4, suppose point 4 is the point at which the slide is judged to have ended; then according to the method of this embodiment a point 5 is still inserted after the slide. This does not affect the user's impression: for a sliding interaction, compensating the distance of a few more sliding points is essentially imperceptible to the user.
Fig. 9 is a block diagram of a gesture interaction device of one embodiment of the invention. Referring to Fig. 9, the gesture interaction device 900 is applied to a smart projector and includes:

an image acquisition unit 901, for obtaining a depth image of the user's finger interaction on the projection interaction plane of the smart projector;

an image processing unit 902, for processing the depth image to obtain the coordinate value of the corresponding finger interaction center point, judging from the coordinate value of the finger interaction center point of the depth image whether the finger interaction meets a preset action condition, and, when it does, performing interpolation compensation on the finger interaction center point;

an interaction realization unit 903, for reporting the coordinate value of the finger interaction center point and the interpolated point, so that the finger interaction is determined to be a finger slide according to the coordinate value of the finger interaction center point and the interpolated point, and a preset operation is executed according to the correspondence between finger slides and preset operations in the smart projector.
In one embodiment of the invention, the image processing unit 902 is specifically used for determining that the finger interaction meets the preset action condition after passing the following steps:

Step 201: the coordinate value of the finger interaction center point in the (n-2)-th depth image is a null value;

Step 202: the coordinate value of the finger interaction center point in the (n-1)-th depth image is a non-null value and there is one finger interaction center point;

Step 203: the coordinate value of the finger interaction center point in the n-th depth image is a non-null value and there is one finger interaction center point;

Step 204: the distance between the two finger interaction center points, calculated from the coordinate value of the finger interaction center point in the (n-1)-th depth image and the coordinate value of the finger interaction center point in the n-th depth image, is greater than the preset distance threshold.
In one embodiment of the invention, the image processing unit 902 is used for starting timing from the passing of step 204 and, when the time has reached the preset time threshold, inserting a new finger interaction center point after the position indicated by the finger interaction center point in the n-th depth image, reporting the coordinate value of the new finger interaction center point, and setting n to n+1.
In one embodiment of the invention, the image processing unit 902 is used for determining, using the linear interpolation method, the coordinate value of the new finger interaction center point from the coordinate value of the finger interaction center point in the n-th depth image and the coordinate value of the finger interaction center point in the (n-1)-th depth image, and inserting it after the position indicated by the finger interaction center point in the n-th depth image.
In one embodiment of the invention, the image acquisition unit 901, the image processing unit 902 and the interaction realization unit 903 are respectively executed by three threads started in a parallel fashion.
It should be noted that the working process of the gesture interaction device of this embodiment corresponds to the implementation steps of the aforementioned gesture interaction method; for parts not described in this embodiment, refer to the explanation in the previous embodiments, which is not repeated here.
Figure 10 is a structural diagram of the electronic device of one embodiment of the invention. As shown in Figure 10, the electronic device includes a memory 1001 and a processor 1002 connected in communication by an internal bus 1003; the memory 1001 stores program instructions executable by the processor 1002, and the program instructions, when executed by the processor 1002, can realize the above gesture interaction method.
In addition, the logical instructions in the above memory 1001 can be realized in the form of software functional units and, when sold or used as an independent product, can be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present invention, in essence, or the part that contributes to the prior art, or part of the technical solution, can be embodied in the form of a software product; the computer software product is stored in a storage medium and includes instructions for causing a computer device (which can be a personal computer, a server, a network device, etc.) to execute all or part of the steps of the methods of the embodiments of the application. The aforementioned storage medium includes various media that can store program code, such as a USB flash disk, a removable hard disk, a read-only memory (ROM, Read-Only Memory), a random access memory (RAM, Random Access Memory), a magnetic disk or an optical disk.
Another embodiment of the invention provides a computer-readable storage medium; the computer-readable storage medium stores computer instructions, and the computer instructions cause a computer to execute the above method.
Those skilled in the art should understand that the embodiments of the present invention can be provided as a method, a system or a computer program product. Therefore, the present invention can take the form of a complete hardware embodiment, a complete software embodiment, or an embodiment combining software and hardware aspects. Moreover, the present invention can take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to magnetic disk storage, CD-ROM, optical memory, etc.) containing computer-usable program code.
The present invention is described with reference to flowcharts and/or block diagrams of methods, devices (systems), and computer program products according to embodiments of the present invention. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations of flows and/or blocks therein, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general-purpose computer, a special-purpose computer, an embedded processor, or another programmable data processing device to produce a machine, such that the instructions executed by the processor of the computer or other programmable data processing device produce an apparatus for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
It should be noted that the terms "include", "comprise", and any other variants thereof are intended to cover non-exclusive inclusion, so that a process, method, article, or device comprising a series of elements includes not only those elements but also other elements not expressly listed, or elements inherent to such a process, method, article, or device. In the absence of further limitations, an element defined by the phrase "including a ..." does not exclude the presence of additional identical elements in the process, method, article, or device that includes the element.
In the description of the present invention, numerous specific details are set forth. It is understood, however, that embodiments of the present invention may be practiced without these specific details. In some instances, well-known methods, structures, and techniques have not been shown in detail so as not to obscure the understanding of this description. Similarly, it should be understood that, in order to streamline the disclosure and aid the understanding of one or more of the various inventive aspects, features of the invention are sometimes grouped together in a single embodiment, figure, or description thereof in the foregoing description of exemplary embodiments. However, the disclosed method should not be interpreted as reflecting an intention that the claimed invention requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single foregoing disclosed embodiment. Thus, the claims following the detailed description are hereby expressly incorporated into the detailed description, with each claim standing on its own as a separate embodiment of the present invention.
The foregoing description is merely of specific embodiments of the present invention. Under the above teaching, those skilled in the art can make other improvements or modifications on the basis of the above embodiments. Those skilled in the art should understand that the above specific description merely serves to better explain the purpose of the present invention, and the protection scope of the present invention is defined by the appended claims.
Claims (11)
1. A gesture interaction method, characterized in that the method comprises:
obtaining a depth image of a finger interaction action of a user on a projection interaction plane of a smart projector;
processing the depth image to obtain a coordinate value of a corresponding finger interaction center point, judging, according to the coordinate value of the finger interaction center point of the depth image, whether the finger interaction satisfies a preset action condition, and, when the condition is satisfied, performing interpolation compensation on the finger interaction center point;
reporting the coordinate value of the finger interaction center point and the interpolation, so that the finger interaction action is determined to be a finger sliding action according to the coordinate value of the finger interaction center point and the interpolation, and a preset operation is executed according to a correspondence between finger sliding and preset operations on the smart projector.
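The reporting step of claim 1 maps a detected finger slide to a preset operation on the projector. A minimal dispatch-table sketch follows; the patent does not name concrete operations or a classification rule, so the operation names, the horizontal-displacement rule, and all function names here are illustrative assumptions.

```python
# Hypothetical correspondence between finger sliding and preset operations
# (the patent leaves the concrete operations unspecified).
PRESET_OPERATIONS = {
    "slide_left": "previous_page",
    "slide_right": "next_page",
}

def classify_slide(p_start, p_end):
    # Assumed rule: the sign of the horizontal displacement decides direction.
    return "slide_right" if p_end[0] >= p_start[0] else "slide_left"

def execute_preset_operation(p_start, p_end):
    # Look up the preset operation for the classified slide direction.
    return PRESET_OPERATIONS[classify_slide(p_start, p_end)]
```

A rightward slide from (0, 0) to (50, 0) would thus resolve to the assumed "next_page" operation.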
2. The method according to claim 1, characterized in that judging, according to the coordinate value of the finger interaction center point of the depth image, whether the finger interaction satisfies the preset action condition comprises:
determining that the finger interaction satisfies the preset action condition when all of the following steps hold:
Step 201: the coordinate value of the finger interaction center point in the (n-2)-th depth image is a null value;
Step 202: the coordinate value of the finger interaction center point in the (n-1)-th depth image is a non-null value and there is one finger interaction center point;
Step 203: the coordinate value of the finger interaction center point in the n-th depth image is a non-null value and there is one finger interaction center point;
Step 204: the distance between the two finger interaction center points, calculated from the coordinate value of the finger interaction center point in the (n-1)-th depth image and the coordinate value of the finger interaction center point in the n-th depth image, is greater than a preset distance threshold.
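The four-step condition above can be sketched as follows, assuming each depth image has already been reduced to at most one interaction center point (`None` when no finger is detected). The function name and the threshold value are illustrative, not from the patent.

```python
import math

# Hypothetical check of the claim-2 condition: the finger is absent in
# frame n-2 (Step 201), present as a single point in frames n-1 and n
# (Steps 202/203), and has moved farther than a preset distance (Step 204).
def satisfies_preset_action(p_n2, p_n1, p_n, distance_threshold=20.0):
    # Step 201: null center point in the (n-2)-th depth image.
    if p_n2 is not None:
        return False
    # Steps 202/203: one non-null center point in frames n-1 and n.
    if p_n1 is None or p_n is None:
        return False
    # Step 204: distance between the two points exceeds the threshold.
    dx, dy = p_n[0] - p_n1[0], p_n[1] - p_n1[1]
    return math.hypot(dx, dy) > distance_threshold
```

With the assumed threshold of 20, a jump from (100, 100) to (150, 100) after an empty frame would satisfy the condition, while a 5-pixel move would not.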
3. The method according to claim 2, characterized in that performing interpolation compensation on the finger interaction center point comprises:
starting timing from Step 204; when the elapsed time reaches a preset time threshold, inserting a new finger interaction center point after the position indicated by the finger interaction center point in the n-th depth image, reporting the coordinate value of the new finger interaction center point, and setting n to n+1.
4. The method according to claim 3, characterized in that inserting a new finger interaction center point after the position indicated by the finger interaction center point in the n-th depth image comprises:
determining, by a linear interpolation method, the coordinate value of the new finger interaction center point according to the coordinate value of the finger interaction center point in the n-th depth image and the coordinate value of the finger interaction center point in the (n-1)-th depth image, and inserting the new point after the position indicated by the finger interaction center point in the n-th depth image.
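The linear interpolation of claim 4 can be sketched in a few lines. The patent does not specify the interpolation parameter, so the midpoint (t = 0.5) used as the default here is an assumption, as is the function name.

```python
# Hypothetical linear interpolation between the finger interaction center
# points of the (n-1)-th and n-th depth images (claim 4); t = 0.5 yields
# the midpoint, which is an assumed choice.
def interpolate_center_point(p_prev, p_curr, t=0.5):
    return (p_prev[0] + t * (p_curr[0] - p_prev[0]),
            p_prev[1] + t * (p_curr[1] - p_prev[1]))
```

For example, interpolating between (100, 100) and (150, 100) yields the midpoint (125.0, 100.0), which would be inserted and reported as the new center point.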
5. The method according to claim 1, characterized in that the method comprises: starting three threads in parallel to respectively execute the depth image acquisition, the depth image processing, and the reporting of the coordinate value of the finger interaction center point and the interpolation.
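The three-thread arrangement of claim 5 can be sketched as a queue-connected pipeline. The queues, sentinel value, and stage bodies below are illustrative assumptions; the patent only states that acquisition, processing, and reporting run in three parallel threads.

```python
import queue
import threading

# Illustrative pipeline: acquisition -> processing -> reporting, each stage
# in its own thread, handing data on through thread-safe queues (claim 5).
raw_frames = queue.Queue()
results = queue.Queue()
SENTINEL = None  # marks the end of the frame stream

def acquire(n_frames=3):
    for i in range(n_frames):
        raw_frames.put({"frame": i})        # stand-in for a depth image
    raw_frames.put(SENTINEL)

def process():
    while (frame := raw_frames.get()) is not SENTINEL:
        # Stand-in for center-point extraction and interpolation compensation.
        results.put(("center_point", frame["frame"]))
    results.put(SENTINEL)

def report(out):
    while (item := results.get()) is not SENTINEL:
        out.append(item)                    # stand-in for reporting upward

reported = []
threads = [threading.Thread(target=acquire),
           threading.Thread(target=process),
           threading.Thread(target=report, args=(reported,))]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Decoupling the stages with queues lets acquisition keep capturing frames while processing and reporting lag behind, which matches the parallel structure the claim describes.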
6. A gesture interaction device, characterized in that the device comprises:
an image acquisition unit, configured to obtain a depth image of a finger interaction action of a user on a projection interaction plane of a smart projector;
an image processing unit, configured to process the depth image to obtain a coordinate value of a corresponding finger interaction center point, judge, according to the coordinate value of the finger interaction center point of the depth image, whether the finger interaction satisfies a preset action condition, and, when the condition is satisfied, perform interpolation compensation on the finger interaction center point;
an interaction realization unit, configured to report the coordinate value of the finger interaction center point and the interpolation, so that the finger interaction action is determined to be a finger sliding action according to the coordinate value of the finger interaction center point and the interpolation, and a preset operation is executed according to a correspondence between finger sliding and preset operations on the smart projector.
7. The device according to claim 6, characterized in that the image processing unit is specifically configured to determine that the finger interaction satisfies the preset action condition when all of the following steps hold:
Step 201: the coordinate value of the finger interaction center point in the (n-2)-th depth image is a null value;
Step 202: the coordinate value of the finger interaction center point in the (n-1)-th depth image is a non-null value and there is one finger interaction center point;
Step 203: the coordinate value of the finger interaction center point in the n-th depth image is a non-null value and there is one finger interaction center point;
Step 204: the distance between the two finger interaction center points, calculated from the coordinate value of the finger interaction center point in the (n-1)-th depth image and the coordinate value of the finger interaction center point in the n-th depth image, is greater than a preset distance threshold.
8. The device according to claim 7, characterized in that the image processing unit is configured to start timing from Step 204 and, when the elapsed time reaches a preset time threshold, insert a new finger interaction center point after the position indicated by the finger interaction center point in the n-th depth image, report the coordinate value of the new finger interaction center point, and set n to n+1.
9. The device according to claim 8, characterized in that the image processing unit is configured to determine, by a linear interpolation method, the coordinate value of the new finger interaction center point according to the coordinate value of the finger interaction center point in the n-th depth image and the coordinate value of the finger interaction center point in the (n-1)-th depth image, and to insert the new point after the position indicated by the finger interaction center point in the n-th depth image.
10. The device according to claim 6, characterized in that the image acquisition unit, the image processing unit, and the interaction realization unit are executed respectively by three threads started in parallel.
11. An electronic device, characterized in that the electronic device comprises a memory and a processor, the memory and the processor communicating with each other through an internal bus, wherein the memory stores program instructions executable by the processor, and the program instructions, when executed by the processor, implement the gesture interaction method according to any one of claims 1-5.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810090288.4A CN108491066A (en) | 2018-01-30 | 2018-01-30 | A kind of gesture interaction method and device |
Publications (1)
Publication Number | Publication Date |
---|---|
CN108491066A true CN108491066A (en) | 2018-09-04 |
Family
ID=63343955
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810090288.4A Pending CN108491066A (en) | 2018-01-30 | 2018-01-30 | A kind of gesture interaction method and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108491066A (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109683775A (en) * | 2018-12-12 | 2019-04-26 | 歌尔科技有限公司 | Exchange method, projection device and storage medium based on projection |
CN112732162A (en) * | 2021-03-30 | 2021-04-30 | 北京芯海视界三维科技有限公司 | Projection interaction method, device and system and computer storage medium |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2012234450A (en) * | 2011-05-09 | 2012-11-29 | Bluemouse Technology Co Ltd | Character recognition device |
US20130185671A1 (en) * | 2012-01-13 | 2013-07-18 | Fih (Hong Kong) Limited | Electronic device and method for unlocking the electronic device |
CN103488356A (en) * | 2013-10-18 | 2014-01-01 | 武汉拓宝电子系统有限公司 | Infrared camera three-dimensional imaging-based touch recognition method |
US20150033194A1 (en) * | 2013-07-25 | 2015-01-29 | Yahoo! Inc. | Multi-finger user identification |
CN105589553A (en) * | 2014-09-23 | 2016-05-18 | 上海影创信息科技有限公司 | Gesture control method and system for intelligent equipment |
CN105718878A (en) * | 2016-01-19 | 2016-06-29 | 华南理工大学 | Egocentric vision in-the-air hand-writing and in-the-air interaction method based on cascade convolution nerve network |
CN107256083A (en) * | 2017-05-18 | 2017-10-17 | 河海大学常州校区 | Many finger method for real time tracking based on KINECT |
- 2018-01-30 CN CN201810090288.4A patent/CN108491066A/en active Pending
Non-Patent Citations (1)
Title |
---|
Wang Xiaofeng et al. (eds.): "Numerical Approximation" (《数值逼近》), 31 December 2017 *
Legal Events

Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| RJ01 | Rejection of invention patent application after publication | Application publication date: 20180904 |