CN109558000A - Human-computer interaction method and electronic device - Google Patents
Human-computer interaction method and electronic device
- Publication number
- CN109558000A (application numbers CN201710882327.XA / CN201710882327A)
- Authority
- CN
- China
- Prior art keywords
- dimensional image
- set dimension
- target
- pointer
- electronic equipment
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/017—Gesture based interaction, e.g. based on a set of recognized hand gestures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/03—Arrangements for converting the position or the displacement of a member into a coded form
- G06F3/0304—Detection arrangements using opto-electronic means
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/107—Static hand or arm
- G06V40/113—Recognition of static hand signs
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/20—Movements or behaviour, e.g. gesture recognition
- G06V40/28—Recognition of hand or arm movements, e.g. recognition of deaf sign language
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Physics & Mathematics (AREA)
- Human Computer Interaction (AREA)
- Multimedia (AREA)
- Health & Medical Sciences (AREA)
- Computer Vision & Pattern Recognition (AREA)
- General Health & Medical Sciences (AREA)
- Psychiatry (AREA)
- Social Psychology (AREA)
- User Interface Of Digital Computer (AREA)
- Position Input By Displaying (AREA)
Abstract
The present invention provides a human-computer interaction method and an electronic device. The method includes: processing each of a plurality of two-dimensional images containing a pointing object, to obtain the imaging size of the pointing object in each two-dimensional image; determining at least one target two-dimensional image from the plurality of two-dimensional images according to the imaging size of the pointing object in each image; and determining and executing a corresponding user instruction using the at least one target two-dimensional image. The imaging size of the pointing object can thus be used to screen the two-dimensional images, and interaction relies only on the target images that satisfy the condition, which improves recognition accuracy.
Description
Technical field
The present invention relates to the field of communications technology, and in particular to a human-computer interaction method and an electronic device.
Background technique
With the rapid development of electronic device technology, electronic devices have become ever more capable and feature-rich, and they can interact with users in many ways, for example through a remote control, a mouse, voice, or gestures. Because gesture interaction is simple and lets users interact with a device naturally and freely, gesture-based interaction has been adopted in many scenarios.
In the prior art, target tracking and gesture recognition are in many cases implemented with a depth camera. However, high-precision depth cameras are currently very expensive; for example, the Mesa Imaging SwissRanger 4000 (SR4000) costs on the order of ten thousand US dollars.
To reduce cost, many scenarios instead implement gesture recognition with an ordinary two-dimensional image acquisition device, but prior-art approaches based on such a device suffer from low recognition accuracy. For example, when a user interacts with an electronic device through gestures, the arm is suspended with no physical surface to rest on and therefore tends to shake; once that shaking is captured, the processor's gesture recognition produces an erroneous result.
Summary of the invention
Embodiments of the present invention provide a human-computer interaction method and an electronic device, so as to improve the recognition accuracy of gesture recognition based on a two-dimensional image acquisition device.
In a first aspect, an embodiment of the present invention provides a human-computer interaction method, including:
processing each of a plurality of two-dimensional images containing a pointing object, to obtain the imaging size of the pointing object in each two-dimensional image;
determining at least one target two-dimensional image from the plurality of two-dimensional images according to the imaging size of the pointing object in each two-dimensional image; and
determining and executing a corresponding user instruction using the at least one target two-dimensional image.
In a second aspect, an embodiment of the present invention further provides an electronic device, including:
an obtaining module, configured to process each of a plurality of two-dimensional images containing a pointing object, to obtain the imaging size of the pointing object in each two-dimensional image;
a first determining module, configured to determine at least one target two-dimensional image from the plurality of two-dimensional images according to the imaging size of the pointing object in each two-dimensional image; and
an execution module, configured to determine and execute a corresponding user instruction using the at least one target two-dimensional image.
In a third aspect, an embodiment of the present invention further provides an electronic device, including a memory, a processor, and a computer program stored on the memory and executable on the processor, where the processor, when executing the computer program, implements the steps of the above human-computer interaction method.
In a fourth aspect, an embodiment of the present invention further provides a computer-readable storage medium storing a computer program that, when executed by a processor, implements the steps of the above human-computer interaction method.
In the embodiments of the present invention, each of a plurality of two-dimensional images containing a pointing object is processed to obtain the imaging size of the pointing object in each image; at least one target two-dimensional image is determined from the plurality of images according to those imaging sizes; and a corresponding user instruction is determined and executed using the at least one target two-dimensional image. The imaging size of the pointing object can thus be used to screen the two-dimensional images, and interaction relies only on the target images that satisfy the condition, which improves recognition accuracy.
Detailed description of the invention
To describe the technical solutions in the embodiments of the present invention more clearly, the accompanying drawings required for the description of the embodiments are briefly introduced below. Clearly, the drawings described below show only some embodiments of the present invention, and a person of ordinary skill in the art may derive other drawings from them without creative effort.
Fig. 1 is a first schematic diagram of an image acquisition distance according to an embodiment of the present invention;
Fig. 2 is a first schematic diagram of image display according to an embodiment of the present invention;
Fig. 3 is a second schematic diagram of an image acquisition distance according to an embodiment of the present invention;
Fig. 4 is a second schematic diagram of image display according to an embodiment of the present invention;
Fig. 5 is a flowchart of a human-computer interaction method according to an embodiment of the present invention;
Fig. 6 is a flowchart of a human-computer interaction method according to another embodiment of the present invention;
Fig. 7 is a first schematic diagram of a connected region according to an embodiment of the present invention;
Fig. 8 is a second schematic diagram of a connected region according to an embodiment of the present invention;
Fig. 9 is a schematic diagram of different operating areas according to an embodiment of the present invention;
Fig. 10 is a first structural diagram of an electronic device according to an embodiment of the present invention;
Fig. 11 is a structural diagram of a first obtaining module of an electronic device according to an embodiment of the present invention;
Fig. 12 is a second structural diagram of an electronic device according to an embodiment of the present invention;
Fig. 13 is a third structural diagram of an electronic device according to an embodiment of the present invention;
Fig. 14 is a structural diagram of an electronic device according to another embodiment of the present invention.
Specific embodiment
The technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings. Clearly, the described embodiments are only some, rather than all, of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without creative effort shall fall within the protection scope of the present invention.
To better understand the implementation of the embodiments of the present invention, the underlying principle is explained first.
The imaging size of a pointing object in an image is closely correlated with the distance between the pointing object and the image acquisition device. This is explained below with reference to Fig. 1 to Fig. 4.
As shown in Fig. 1 to Fig. 4, the pointing object here is a sphere. In Fig. 1, the distance between the image acquisition device and the sphere is relatively short, and the sphere's imaging size in the image captured by the device is as shown in Fig. 2: the sphere appears relatively large. Compared with Fig. 1, the distance between the image acquisition device and the sphere in Fig. 3 is longer, and the sphere's imaging size in the captured image is as shown in Fig. 4: comparing Fig. 4 with Fig. 2, the sphere appears smaller. In other words, the imaging size indirectly describes the distance between the pointing object and the image acquisition device.
In specific embodiments of the present invention, to improve recognition accuracy, an operating area is set for the user, and only user operations performed inside the operating area are recognized. As described above, the imaging size indirectly describes the distance between the pointing object and the image acquisition device. Specific embodiments of the present invention therefore use the imaging size of the pointing object to exclude operations performed outside the operating area and recognize only operations performed inside it, thereby improving recognition accuracy.
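The size-distance relation the embodiments rely on can be sketched with a simple pinhole-camera model. The model and all numeric values below are illustrative assumptions, not taken from the patent:

```python
def imaging_size(real_size_mm: float, distance_mm: float, focal_px: float) -> float:
    """Projected size in pixels under a pinhole model: imaging size is
    proportional to the object's real size and inversely proportional to
    its distance from the image acquisition device."""
    return focal_px * real_size_mm / distance_mm

# The same pointing object images larger when close (Fig. 1/2) and
# smaller when far away (Fig. 3/4):
near = imaging_size(20.0, 300.0, 800.0)   # a 20 mm fingertip at 30 cm
far = imaging_size(20.0, 600.0, 800.0)    # the same fingertip at 60 cm
assert near > far
```

Because the relation is monotonic, a measured imaging size can stand in for distance, which is what lets a plain two-dimensional camera bound an operating area in depth.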
Referring to Fig. 5, a flowchart of a human-computer interaction method according to an embodiment of the present invention, the method includes the following steps.
Step 101: process each of a plurality of two-dimensional images containing a pointing object, to obtain the imaging size of the pointing object in each two-dimensional image.
In this embodiment of the present invention, the pointing object may be the user's finger or palm, an object the user can hold (for example a strip-shaped object), or an object attached to the user's finger (for example a reflective sheet, or a sheet-shaped refractor with a specific shape), and so on.
Step 102: determine at least one target two-dimensional image from the plurality of two-dimensional images according to the imaging size of the pointing object in each two-dimensional image.
Following the principle above, this embodiment sets an operating area for the user, so user operations can be divided into two kinds: operations inside the operating area and operations outside it. In specific embodiments of the present invention, only operations the user performs inside the operating area are recognized; the electronic device makes no response to operations performed outside it. As the principle above shows, whether the user is operating inside the operating area can be judged from the imaging size of the pointing object in each two-dimensional image: when the imaging size of the pointing object in a two-dimensional image falls within a preset size interval, it can be determined that the user is operating inside the operating area.
Step 103: determine and execute a corresponding user instruction using the at least one target two-dimensional image.
In specific embodiments of the present invention, after the target two-dimensional images are obtained, device control can be performed using the image sequence they form, for example by determining the trajectory of the pointing object from multiple target two-dimensional images and then determining the instruction that matches that trajectory.
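The trajectory-to-instruction step can be sketched as follows. The crude endpoint-based swipe classifier and the command names are illustrative assumptions; the patent does not specify a matching algorithm:

```python
def match_instruction(track, commands):
    """Map a fingertip trajectory (taken from the target 2D images) to a
    user instruction. Here the dominant displacement direction between the
    first and last positions stands in for real template matching."""
    dx = track[-1][0] - track[0][0]
    dy = track[-1][1] - track[0][1]
    if abs(dx) >= abs(dy):
        return commands["swipe_right"] if dx > 0 else commands["swipe_left"]
    return commands["swipe_down"] if dy > 0 else commands["swipe_up"]

commands = {"swipe_left": "previous_page", "swipe_right": "next_page",
            "swipe_up": "scroll_up", "swipe_down": "scroll_down"}
track = [(100, 200), (160, 205), (230, 198)]  # fingertip (x, y) over time
assert match_instruction(track, commands) == "next_page"
```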
Since the imaging size indirectly describes the distance between the pointing object and the image acquisition device, the imaging size of the pointing object in the captured image changes when the displacement between the pointing object and the image acquisition device changes. Therefore, in specific embodiments of the present invention, movement of the pointing object toward the image acquisition device can be recognized from the pointing object's imaging size in the two-dimensional images captured by an ordinary two-dimensional image acquisition device. At the same time, that imaging size can also be used to judge whether a user operation is performed within a set space; that is, operations performed outside the operating area, which would lead to misrecognition, can be excluded, improving the recognition accuracy of gesture recognition based on a two-dimensional image acquisition device.
The effect of specific embodiments of the present invention is illustrated below with the example of entering the letter "T" in a drawing-board application. In specific embodiments of the present invention, because the operating area exists and only operations occurring inside it are recognized, recognition accuracy can be improved.
In the prior art, when a user needs to enter the letter "T" through a drawing-board application, the user first draws a horizontal line segment and then draws a vertical line segment downward starting from the midpoint of the horizontal segment, forming the letter "T". However, when the user's finger returns from the endpoint of the horizontal segment to its midpoint, this movement, which should not be recognized, is also recognized, causing misrecognition.
In specific embodiments of the present invention, by contrast, because the operating area exists, the user can lift the hand to leave the set area and then re-enter it at the midpoint of the horizontal segment. Since the finger's return from the endpoint of the horizontal segment to its midpoint takes place outside the operating area, it is not recognized, and the misrecognition is avoided. Moreover, this way of avoiding misrecognition only requires the user to change the finger's movement path, with no additional actions, so the process is simple.
In the embodiments of the present invention, the electronic device may be a mobile phone, a tablet personal computer, a laptop computer, a personal digital assistant (PDA), a mobile Internet device (MID), a wearable device, or the like.
In specific embodiments of the present invention, at least one target two-dimensional image needs to be determined from the plurality of two-dimensional images according to the imaging size of the pointing object in each of them. In operational terms, this means selecting, according to the pointing object's imaging size in each image, the operations that occur inside the operating area. The target two-dimensional image can be determined in various ways; one of them is described below.
As the principle above shows, whether the user is operating inside the operating area can be judged from the imaging size of the pointing object in each two-dimensional image. In the direction toward the image acquisition device, whether the user is operating inside the operating area can be judged by whether the imaging size of the pointing object in the two-dimensional image falls within a preset size interval.
Referring to Fig. 6, a human-computer interaction method according to another embodiment of the present invention includes the following steps.
Step 201: process each of a plurality of two-dimensional images containing a pointing object, to obtain the imaging size of the pointing object in each two-dimensional image.
In this embodiment of the present invention, the pointing object may be the user's finger or palm, or a strip-shaped object held by the user, and so on. While the two-dimensional image acquisition device of the electronic device is capturing the two-dimensional images containing the pointing object, the pointing object may move back and forth or up and down, and some of these movements cause the pointing object's imaging size to differ across the two-dimensional images.
Step 202: determine at least one target two-dimensional image from the plurality of two-dimensional images according to the imaging size of the pointing object in each two-dimensional image, where the imaging size of the pointing object in the target two-dimensional image falls within a first preset size interval.
In this embodiment of the present invention, the first preset size interval may be any reasonably defined interval; this embodiment places no limitation on it.
Through a simple mapping it can be seen that the first preset size interval defines the depth of the operating area along the direction toward the two-dimensional image acquisition device. Once this operating area exists, only operations inside it are recognized; that is, the user can interact with the electronic device only inside this operating area. When the user does not need to interact with the electronic device, the user only has to leave the operating area, which makes both starting and stopping the interaction more convenient.
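The screening in step 202 amounts to a simple per-frame filter. The frame representation and interval bounds below are illustrative assumptions:

```python
def select_target_frames(frames, size_min, size_max):
    """Keep only frames whose measured pointer imaging size falls within
    the first preset size interval [size_min, size_max]; frames captured
    outside the operating area are discarded and never reach recognition."""
    return [f for f in frames if size_min <= f["pointer_size"] <= size_max]

frames = [{"id": 0, "pointer_size": 12.0},   # too small: pointer too far away
          {"id": 1, "pointer_size": 31.5},   # inside the operating area
          {"id": 2, "pointer_size": 64.0}]   # too large: pointer too close
targets = select_target_frames(frames, 20.0, 45.0)
assert [f["id"] for f in targets] == [1]
```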
Step 203: determine and execute a corresponding user instruction using the at least one target two-dimensional image.
In specific embodiments of the present invention, the pointing object can take many forms: a finger, a palm, a strip-shaped object, a sphere, and so on. How the imaging size of the pointing object is obtained is illustrated below using a finger as the pointing object.
When the pointing object is a finger, processing each of the plurality of two-dimensional images containing the pointing object to obtain the imaging size of the pointing object in each two-dimensional image specifically includes:
determining the connected region corresponding to the finger contained in the two-dimensional image;
locating the fingertip based on the geometric features of the finger, to obtain fingertip coordinates; and
determining the connected-region width corresponding to the fingertip coordinates.
To better understand the above process, refer to Fig. 7 and Fig. 8. First, the connected region corresponding to the finger can be identified by image recognition; as shown in Fig. 7, the entire palm forms the connected region containing the finger. After the fingertip is located in some manner, the fingertip coordinates are obtained. The connected region corresponding to the fingertip coordinates may then be the region shown as part M in Fig. 8, and the connected-region width at this point may be the width L of the fingertip.
In this implementation, the connected region corresponding to the finger contained in the two-dimensional image can be identified by image recognition. The connected region can be understood here as the region of the entire palm, whether formed by an open palm or by a clenched fist. Similarly, the fingertip can also be identified by image recognition, located, and its coordinates obtained.
After the fingertip coordinates are determined, the connected-region width corresponding to them can be recorded directly; informally, this width is the imaged width of the fingertip in the two-dimensional image. The fingertip width may be obtained as follows. First, skin-color segmentation is performed by well-known methods, for example using the Otsu algorithm, to obtain a binary image. Then, maximum and minimum pixel values for the fingertip width are set according to the range of finger connected regions within the controllable range of the gestures actually obtained, and connected regions that do not satisfy the requirement are excluded. Next, for each remaining connected region, the centroid is computed, the contour point farthest from the centroid is found, the width and length of the contour at that point are obtained, and the length-to-width ratio is recorded. Finally, among the ratios, the largest one within an empirical range is selected as the fingertip, and the fingertip width is recorded for subsequent judgment of changes in the distance between the finger and the two-dimensional image acquisition device.
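The centroid-and-farthest-point step can be sketched on a toy binary mask. This is a minimal stand-in for the pipeline above: it assumes segmentation has already produced one connected region, skips the area screening and ratio test, and measures width on a single pixel row rather than along the contour:

```python
import numpy as np

def fingertip_from_mask(mask: np.ndarray):
    """Locate the fingertip as the foreground pixel farthest from the
    centroid of the (single) connected region, as described in the text."""
    ys, xs = np.nonzero(mask)
    cy, cx = ys.mean(), xs.mean()
    d2 = (ys - cy) ** 2 + (xs - cx) ** 2
    i = int(np.argmax(d2))
    return int(ys[i]), int(xs[i])

def fingertip_width(mask: np.ndarray, tip_row: int) -> int:
    """Imaged width at the fingertip's row: a crude stand-in for the
    contour-width measurement used for the distance judgment."""
    return int(mask[tip_row].sum())

# A toy hand: a wide "palm" block with a thin "finger" extending upward.
mask = np.zeros((10, 10), dtype=np.uint8)
mask[6:10, 2:9] = 1   # palm, 7 pixels wide
mask[0:6, 4:6] = 1    # finger, 2 pixels wide
tip = fingertip_from_mask(mask)
assert tip[0] == 0                        # fingertip found at the top
assert fingertip_width(mask, tip[0]) == 2
```

As the finger approaches the camera, this measured width grows, which is exactly the quantity the preset size intervals are compared against.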
In this way, the user does not need any other tool (such as a strip-shaped object or a sphere) to interact with the electronic device through gestures; the finger alone suffices. And since finger widths generally do not differ much from person to person, the electronic device can conveniently set one reasonable operating area for all users.
Optionally, the first preset size interval is one of at least two preset size intervals, and any two of the at least two preset size intervals are spaced apart.
In this implementation, because the first preset size interval is one of at least two preset size intervals and any two of those intervals are spaced apart, there can be at least two operating areas in space with no overlap between them. Informally, there are at least two operating areas in front of the two-dimensional image acquisition device, and the user can operate in each of them to interact with the electronic device.
To better understand the two operating areas, refer to Fig. 9, a schematic diagram of different operating areas according to an embodiment of the present invention. In Fig. 9 there are two operating areas in space, operating area A and operating area B, with a certain spatial gap between them. Operations performed within that gap do not produce interaction with the electronic device; only operations in operating area A or operating area B do.
In this implementation, at least two operating areas are available for user operations, giving the user multiple choices: the user can pick the operating area closer to them, making interaction with the electronic device more convenient.
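Mapping a measured imaging size to one of the disjoint operating areas can be sketched as follows. The interval values are illustrative assumptions:

```python
def classify_operating_area(pointer_size: float, intervals: dict):
    """Return the operating area whose preset size interval contains the
    measured imaging size, or None when the pointer lies in the gap
    between areas, where operations are ignored."""
    for name, (lo, hi) in intervals.items():
        if lo <= pointer_size <= hi:
            return name
    return None

# Area A is nearer the camera (larger imaging sizes), area B farther
# away (smaller ones), with a dead zone between them.
areas = {"A": (50.0, 90.0), "B": (15.0, 30.0)}
assert classify_operating_area(70.0, areas) == "A"
assert classify_operating_area(20.0, areas) == "B"
assert classify_operating_area(40.0, areas) is None   # in the gap
```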
Optionally, the plurality of two-dimensional images further includes at least one two-dimensional image corresponding to a second preset size interval among the at least two preset size intervals; and determining and executing a corresponding user instruction using the at least one target two-dimensional image specifically includes: determining and executing a corresponding user instruction using the at least one target two-dimensional image corresponding to the first preset size interval.
In this implementation, different preset size intervals correspond to different operating areas, and the corresponding user instruction is determined and executed using at least one target two-dimensional image corresponding to the first preset size interval. This ensures that one instruction, or one operation, occurs within a single operating area, and avoids putting operations from two operating areas into one sequence for action matching, which makes the matching process more accurate.
Moreover, the size of an object in an image is related to the distance between the object and the image acquisition device. When the pointer is close to the image acquisition device, even a slight movement of the pointer in the depth direction leads to a large change of the pointer's size in the image. Therefore, a relatively large size range is set for the operating area closer to the image acquisition device, which reduces the precision required of the user's operation.
Therefore, in this specific embodiment of the present invention, among the at least two preset size intervals, a preset size interval with larger values has a larger interval length.
In this embodiment, when the pointer is closer to the image acquisition device, the imaging size of the pointer in the images acquired by the image acquisition device is larger; when the pointer is farther from the image acquisition device, its imaging size in the acquired images is smaller. Moreover, when the pointer is close to the image acquisition device, even a small movement of the pointer in the depth direction may cause a large change in its imaging size.
In this way, a preset size interval with larger values corresponds to an operating area of greater length, which gives that operating area better tolerance, so that operations performed by the pointer at a close distance from the image acquisition device can still be reliably recognized through the image acquisition device.
Optionally, the man-machine interaction method further includes:
determining, according to mapping relations between preset size intervals and controllable object sets, a target controllable object set corresponding to the first preset size interval; and
displaying the controllable objects in the target controllable object set.
In this embodiment, the above-mentioned controllable objects may be icons, buttons, options or the like, and corresponding operations can be realized through these controllable objects.
In this embodiment, the first preset size interval may correspond to one operating area. After the target controllable object set corresponding to the first preset size interval is determined, the controllable objects in that set can be displayed. In this way, in the operating area corresponding to the first preset size interval, the controllable objects in the target controllable object set can be operated. Displaying these controllable objects as distributed over one layer, possibly with enlarged display, reduces both the precision required of the user's operation and the precision required of image recognition, so the user can very easily select the controllable object he needs.
In the prior art, all controllable objects are typically displayed in one region, so the icon of each controllable object may be rather small while the user's finger is comparatively large; selecting a small icon with a finger therefore carries a higher probability of error, which affects the precision of selection.
In this embodiment, the controllable objects of one controllable object set are taken out and displayed separately, so a large display area shows fewer controllable objects; each controllable object can be enlarged to a certain extent, and the enlarged objects still fit in the display area. With the user's fingertip coordinate determined and mapped to a controllable object, that controllable object can be operated. Since the controllable objects are enlarged, the finger can easily select them after enlargement, and the user can freely select and operate the controllable object he needs.
Optionally, displaying the controllable objects in the target controllable object set includes:
when the target controllable object set differs from the currently displayed controllable object set, updating the current first display interface to a second display interface that includes the controllable objects in the target controllable object set.
In this embodiment, the currently displayed controllable object set may already be the target controllable object set, in which case no second display is needed. Only when the target controllable object set differs from the currently displayed one is the current first display interface updated to the second display interface including the controllable objects of the target controllable object set. In this way, the displayed controllable objects are switched only when the operating area switches, without judging and refreshing every time.
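As a hedged illustration of the two steps above, the mapping from preset size intervals to controllable object sets and the refresh-only-on-change rule might be sketched as follows; the interval bounds, object names, and function names here are illustrative assumptions, not part of the original disclosure:

```python
# Sketch: map preset size intervals (pixel widths) to controllable object
# sets, and refresh the display interface only when the target set changes.
# All interval bounds and object names are illustrative assumptions.

INTERVAL_TO_OBJECTS = {
    (20, 40): ["crop", "rotate", "flip"],    # far layer: image-cropping buttons
    (40, 70): ["brush", "blur", "sharpen"],  # near layer: image-rendering buttons
}

def target_object_set(imaging_size):
    """Return the controllable object set whose preset size interval
    contains the pointer's imaging size, or None if it falls in a gap."""
    for (low, high), objects in INTERVAL_TO_OBJECTS.items():
        if low <= imaging_size < high:
            return objects
    return None

def update_display(current_set, imaging_size):
    """Update the displayed set only when the target set differs."""
    target = target_object_set(imaging_size)
    if target is not None and target != current_set:
        return target        # second display interface replaces the first
    return current_set       # same set, or gap between areas: no refresh

shown = ["crop", "rotate", "flip"]
shown = update_display(shown, 30)   # still in the cropping layer: unchanged
shown = update_display(shown, 55)   # entered the rendering layer: updated
```

Keeping the refresh conditional on a set difference is what lets the interface avoid flicker when consecutive frames stay in the same operating area.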
For example, when the user operates in a first operating area, the first operating area may display a set of buttons for image cropping. If the determined target controllable object set is exactly that set of image-cropping buttons, the display does not need to be updated. If the user then leaves the first operating area and enters a second operating area, the determined target controllable object set becomes a set of buttons for image rendering, while the display still shows the set of image-cropping buttons; an update can then be performed, replacing the set of image-cropping buttons with the set of image-rendering buttons.
In this way, different operating areas can display different controllable object sets, and different controllable object sets can be used to perform operations of different functions, making the user's operation more convenient. A large number of buttons are distributed to different operating areas, and switching between operating areas only requires the user to control the operating distance. This not only realizes convenient operation switching, but also, by distributing the buttons over multiple operation interfaces, reduces the requirements on the user's operation precision and on image recognition precision.
Optionally, the man-machine interaction method further includes:
acquiring a reference two-dimensional image; and
determining the first preset size interval according to the reference two-dimensional image.
In this embodiment, this amounts to an initialization process for the first preset size interval. The first preset size interval does not need to be set in advance; instead, it is determined according to the acquired reference two-dimensional image. Of course, the process of determining the first preset size interval from the reference two-dimensional image may expand the imaging size in the reference two-dimensional image forwards and backwards by the same amount, or expand it forwards and backwards by different amounts, to obtain the first preset size interval.
Of course, besides determining the first preset size interval according to the reference two-dimensional image, the size of the preset size interval may also be defined in advance; this embodiment of the present invention does not limit this. In this embodiment, the first preset size interval does not need to be pre-defined, but is determined from the reference two-dimensional image obtained in real time, which gives the electronic equipment stronger adaptability to the user and makes the interaction process more flexible.
Optionally, the first preset size interval is (W-dw, W+dw), where W is the imaging size of the pointer in the reference two-dimensional image, and dw is the interval length threshold of the first preset size interval.
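A minimal sketch of this initialization, assuming the imaging size W has already been measured in pixels from the reference two-dimensional image; the default dw value is a tuning constant chosen only for illustration:

```python
def init_first_interval(reference_width, dw=8):
    """Build the first preset size interval (W - dw, W + dw) around the
    pointer's imaging size W measured in the reference 2D image."""
    return (reference_width - dw, reference_width + dw)

def in_interval(imaging_size, interval):
    """Open-interval membership test, matching the (W - dw, W + dw) form."""
    low, high = interval
    return low < imaging_size < high

interval = init_first_interval(52)   # W = 52 px -> (44, 60)
```

Any frame whose pointer imaging size passes `in_interval` would count as a target two-dimensional image for this interval.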
Optionally, the reference two-dimensional image includes:
the two-dimensional image acquired for the first time after the current startup of the electronic equipment; or
a two-dimensional image whose acquisition time is separated from the acquisition time of the two-dimensional image last obtained by the electronic equipment by more than a predetermined time threshold.
In this embodiment, using the two-dimensional image acquired for the first time after the current startup guarantees that the electronic equipment is initialized in real time. Of course, during initialization the electronic equipment may issue a prompt asking the user whether to initialize; if the user chooses not to, the two-dimensional image used last time may serve as the reference two-dimensional image.
In this embodiment, using a two-dimensional image whose acquisition time exceeds the previously acquired image's acquisition time by more than the predetermined time threshold allows re-initialization after the user leaves for a while and returns. Here too, the electronic equipment may prompt and ask whether the user needs to initialize. In this way, the interaction between the electronic equipment and the user is more intelligent, and use is more convenient.
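The two reference-image conditions above can be expressed as a simple predicate; the timestamps in seconds and the default threshold value are assumptions made for illustration:

```python
def is_reference_image(acquired_at, last_acquired_at, just_started,
                       time_threshold=30.0):
    """A frame serves as the reference 2D image if it is the first frame
    acquired after this startup, or if the gap since the previously
    acquired frame exceeds the predetermined time threshold."""
    if just_started:
        return True
    return (acquired_at - last_acquired_at) > time_threshold
```

In practice the electronic equipment would call this on every captured frame and, when it returns true, re-run the interval initialization (optionally after prompting the user).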
To better understand the interactive process, the following example is given. When the user starts interacting with the electronic equipment, the electronic equipment can capture the user's fingertip by means of image recognition and obtain the imaging size of the fingertip.
After the imaging size of the fingertip is captured, there are two situations. In the first, the electronic equipment has preset a size interval, and an operating area can be determined from that size interval. If the imaging size of the fingertip does not meet the requirement of the operating area, no interaction with the electronic equipment occurs; only within the operating area determined by the size interval can the user interact with the electronic equipment.
In the second situation, the electronic equipment has not preset a size interval, so from the user's perspective there is no fixed operating area. An operating area can then be determined from the imaging size of the currently acquired fingertip; this amounts to an initialization process, specifically the initialization of an operating area.
Of course, in the current operating area the electronic equipment can display some controllable objects relevant to that operating area, and only those controllable objects. In this way, some smaller controllable objects can be enlarged in the display area, and the user can more easily choose the controllable object he needs to operate.
In the process of selecting the controllable object to operate, the fingertip coordinate of the user can first be determined, and the fingertip coordinate is then used to select the needed controllable object in the display area, so the user can quickly and accurately choose the controllable object he needs to operate.
Of course, the electronic equipment may set multiple size intervals, which from the user's perspective can be understood as multiple operating areas set for the user. When the user's finger is displaced in the depth direction, the imaging size of the fingertip changes correspondingly. When the imaging size of the fingertip falls from the current size interval into another size interval, this can be recognized through the image acquisition device, and the user can again interact with the electronic equipment.
When the imaging size of the fingertip falls from the current size interval into another, the user can intuitively understand it as entering another operating area from the current one, and the electronic equipment can display the controllable objects corresponding to that other operating area. These controllable objects can also be displayed enlarged, which is convenient for the user to operate.
Besides the interaction mode above, this embodiment also provides another interaction process.
The image acquisition device of the electronic equipment can obtain the position and width of the fingertip, and then initialize the first layer according to the acquired fingertip position and width. The first layer may be the first layer in a specific direction, which may be the direction from the fingertip towards the image acquisition device. Of course, the first layer may be the region farther from the image acquisition device, or the region closer to it.
Taking the first layer as the region farther from the image acquisition device as an example, initialization may consist of detecting the fingertip for the first time and holding the fingertip position unchanged for several seconds. After the fingertip has not been detected for more than a certain time, the first-layer information is re-initialized when the fingertip is detected again.
According to the number of pixels the fingertip moves and the obtained photo resolution, the operating position on the electronic equipment is mapped. In addition, let the initialized base width be W, count layers from far to near, let the minimum preset fingertip pixel count be X, and let the pixel change threshold dw indicate leaving the current layer. Then the following situations apply:
According to the number of layers the system needs, the preset width of each layer is calculated. For example, with three layers, the width of layer N (N = 0 to 2) is WN = W - N*(W-X)/(3-1); if the current fingertip width lies within WN ± ((W-X)/(3-1)/2), the fingertip is within the operating range of layer N. Of course, the preset values may also be set in other ways, as long as each layer has a distinct depth range.
If the fingertip width changes starting from Wn and the change is less than dw, the fingertip is considered to be still in layer n.
If the fingertip width changes starting from Wn, the change is greater than dw, but the width is still within the operating range of layer n, the fingertip is considered to be in a lifted state.
If the fingertip width changes starting from Wn and enters the operating range of another layer, the interface switches to the corresponding operation layer.
If the fingertip is found to have left this layer and entered another layer, a corresponding prompt is given on the display interface, or the menu of the respective layer is displayed. The definition of the functions and corresponding menus for different layers is realized in the software of the application program. Putting different functions on different layers simplifies the operating range of each layer and makes positioning more accurate. In addition, switching between multiple layers makes operation more efficient.
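Under the three-layer example above, the per-layer widths and the layer-membership test might be sketched as follows; the concrete W and X values are illustrative assumptions, while the formulas follow the WN = W - N*(W-X)/(layers-1) rule stated in the text:

```python
def layer_widths(W, X, layers=3):
    """Preset width of each layer n (0..layers-1):
    Wn = W - n*(W - X)/(layers - 1), from farthest (widest) to nearest."""
    step = (W - X) / (layers - 1)
    return [W - n * step for n in range(layers)]

def layer_of(width, W, X, layers=3):
    """Return the layer whose operating range Wn +/- step/2 contains the
    fingertip width, or None if the width is outside every layer."""
    step = (W - X) / (layers - 1)
    for n, wn in enumerate(layer_widths(W, X, layers)):
        if abs(width - wn) <= step / 2:
            return n
    return None

# Example: base width W = 60 px, minimum fingertip width X = 20 px.
# Layer widths are [60.0, 40.0, 20.0]; a 45 px fingertip sits in layer 1.
```

The dw threshold from the text would sit on top of this: width changes smaller than dw keep the fingertip in its current layer, while larger changes that still pass `layer_of` for the same layer mark the lifted state.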
In the man-machine interaction method of the embodiment of the present invention, each two-dimensional image of multiple two-dimensional images including a pointer is processed to obtain the imaging size of the pointer in each two-dimensional image; according to the imaging size of the pointer in each two-dimensional image, at least one target two-dimensional image is determined from the multiple two-dimensional images, where the imaging size of the pointer in the target two-dimensional image lies in the first preset size interval; and the at least one target two-dimensional image is used to determine and execute the corresponding user instruction. In this way, the imaging size of the pointer can be used to screen the two-dimensional images, and only the target two-dimensional images meeting the condition are used for interaction, which improves recognition precision.
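The screening step just summarized can be sketched end to end: measure the pointer's imaging size per frame, keep only frames whose size falls in the first preset size interval, and hand those target frames to the instruction step. The frame identifiers, sizes, and interval below are made-up example values:

```python
def select_target_images(frames, interval):
    """frames: list of (image_id, imaging_size) pairs.
    Keep only the target 2D images whose pointer imaging size lies
    strictly inside the preset size interval."""
    low, high = interval
    return [fid for fid, size in frames if low < size < high]

frames = [("f0", 30), ("f1", 48), ("f2", 52), ("f3", 75)]
targets = select_target_images(frames, (44, 60))   # -> ["f1", "f2"]
```

Frames "f0" and "f3" are discarded as likely belonging to another operating area (or to motion through the gap between areas), which is exactly the filtering that improves recognition precision.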
Referring to Figure 10, Figure 10 is a structure diagram of an electronic equipment provided by an embodiment of the present invention, which can realize the details of the man-machine interaction method in the above embodiments and achieve the same effect. As shown in Figure 10, the electronic equipment 1000 includes a first obtaining module 1001, a first determining module 1002 and an execution module 1003; the first obtaining module 1001 is connected with the first determining module 1002, and the first determining module 1002 is connected with the execution module 1003, in which:
the first obtaining module 1001 is configured to process each two-dimensional image of multiple two-dimensional images including a pointer, and obtain the imaging size of the pointer in each two-dimensional image;
the first determining module 1002 is configured to determine at least one target two-dimensional image from the multiple two-dimensional images according to the imaging size of the pointer in each two-dimensional image; and
the execution module 1003 is configured to determine and execute a corresponding user instruction by using the at least one target two-dimensional image.
Optionally, as shown in Figure 11, the pointer includes a finger, and the first obtaining module 1001 specifically includes:
a first determining submodule 10011, configured to determine the connected region corresponding to the finger included in the two-dimensional image;
an obtaining submodule 10012, configured to perform fingertip positioning on the finger based on the geometric features of the finger, and obtain a fingertip coordinate; and
a second determining submodule 10013, configured to determine the connected-region width corresponding to the fingertip coordinate.
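As a hedged sketch of these three submodules operating on a binary mask: the topmost-point fingertip heuristic and the one-row offset for the width measurement are illustrative assumptions, not the patented positioning algorithm:

```python
def fingertip_and_width(mask, row_offset=1):
    """mask: 2D list of 0/1 values, the finger's connected region.
    Locate the fingertip as the topmost foreground point, then measure
    the connected-region width on a row just below it."""
    # Fingertip positioning: first (topmost) row containing foreground.
    for y, row in enumerate(mask):
        xs = [x for x, v in enumerate(row) if v]
        if xs:
            tip = (y, xs[len(xs) // 2])   # fingertip coordinate
            break
    else:
        return None, 0                    # no finger region found
    # Connected-region width at the row just below the fingertip.
    y = min(tip[0] + row_offset, len(mask) - 1)
    width = sum(mask[y])
    return tip, width

mask = [
    [0, 0, 1, 0, 0],
    [0, 1, 1, 1, 0],
    [0, 1, 1, 1, 0],
    [1, 1, 1, 1, 1],
]
tip, width = fingertip_and_width(mask)   # tip == (0, 2), width == 3
```

The returned width plays the role of the pointer's imaging size in the interval tests described earlier; a production system would use a real connected-component labeling routine rather than this row scan.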
Optionally, the imaging size of the pointer in the target two-dimensional image lies in the first preset size interval.
Optionally, the first preset size interval is one of at least two preset size intervals, and any two of the at least two preset size intervals are distributed at an interval from each other.
Optionally, the multiple two-dimensional images further include at least one two-dimensional image corresponding to a second preset size interval of the at least two preset size intervals;
the execution module 1003 is specifically configured to:
determine and execute the corresponding user instruction by using the at least one target two-dimensional image corresponding to the first preset size interval.
Optionally, among the at least two preset size intervals, a preset size interval with larger values has a larger interval length.
Optionally, as shown in Figure 12, the electronic equipment 1000 further includes:
a second determining module 1004, configured to determine the target controllable object set corresponding to the first preset size interval according to the mapping relations between preset size intervals and controllable object sets; and
a display module 1005, configured to display the controllable objects in the target controllable object set.
Optionally, the display module 1005 is configured to:
when the target controllable object set differs from the currently displayed controllable object set, update the current first display interface to a second display interface including the controllable objects in the target controllable object set.
Optionally, as shown in Figure 13, the electronic equipment 1000 further includes:
a second obtaining module 1006, configured to obtain a reference two-dimensional image; and
a third determining module 1007, configured to determine the first preset size interval according to the reference two-dimensional image.
Optionally, the first preset size interval is (W-dw, W+dw), where W is the imaging size of the pointer in the reference two-dimensional image, and dw is the interval length threshold of the first preset size interval.
Optionally, the reference two-dimensional image includes:
the two-dimensional image acquired for the first time after the current startup of the electronic equipment; or
a two-dimensional image whose acquisition time is separated from the acquisition time of the two-dimensional image last obtained by the electronic equipment by more than a predetermined time threshold.
The electronic equipment 1000 can realize each process realized by the electronic equipment in the method embodiments of Fig. 5 to Fig. 6; to avoid repetition, details are not described here again.
The electronic equipment 1000 of the embodiment of the present invention processes each two-dimensional image of multiple two-dimensional images including a pointer to obtain the imaging size of the pointer in each two-dimensional image; determines at least one target two-dimensional image from the multiple two-dimensional images according to the imaging size of the pointer in each two-dimensional image; and determines and executes a corresponding user instruction by using the at least one target two-dimensional image. In this way, the imaging size of the pointer can be used to screen the two-dimensional images, and only the target two-dimensional images meeting the condition are used for interaction, so that misoperations can be avoided as far as possible.
Referring to Figure 14, Figure 14 is a hardware structure diagram of an electronic equipment for realizing each embodiment of the present invention. The electronic equipment 1400 includes a memory 1409 and a processor 1410. In the embodiments of the present invention, the electronic equipment includes, but is not limited to, a mobile phone, a tablet computer, a laptop, a palmtop computer, a vehicle-mounted terminal, a wearable device, a pedometer and the like.
The processor 1410 is configured to process each two-dimensional image of multiple two-dimensional images including a pointer and obtain the imaging size of the pointer in each two-dimensional image; determine at least one target two-dimensional image from the multiple two-dimensional images according to the imaging size of the pointer in each two-dimensional image; and determine and execute a corresponding user instruction by using the at least one target two-dimensional image. In this way, the imaging size of the pointer can be used to screen the two-dimensional images, and only the target two-dimensional images meeting the condition are used for interaction, which improves recognition precision.
Optionally, the pointer includes a finger, and the processor 1410 is further configured to determine the connected region corresponding to the finger included in the two-dimensional image; perform fingertip positioning on the finger based on the geometric features of the finger to obtain a fingertip coordinate; and determine the connected-region width corresponding to the fingertip coordinate.
Optionally, the imaging size of the pointer in the target two-dimensional image lies in the first preset size interval.
Optionally, the first preset size interval is one of at least two preset size intervals, and any two of the at least two preset size intervals are distributed at an interval from each other.
Optionally, the multiple two-dimensional images further include at least one two-dimensional image corresponding to a second preset size interval of the at least two preset size intervals; the processor 1410 is further configured to determine and execute the corresponding user instruction by using the at least one target two-dimensional image corresponding to the first preset size interval.
Optionally, among the at least two preset size intervals, a preset size interval with larger values has a larger interval length.
Optionally, the processor 1410 is further configured to determine the target controllable object set corresponding to the first preset size interval according to the mapping relations between preset size intervals and controllable object sets, and display the controllable objects in the target controllable object set.
Optionally, the processor 1410 is further configured to, when the target controllable object set differs from the currently displayed controllable object set, update the current first display interface to a second display interface including the controllable objects in the target controllable object set.
Optionally, the processor 1410 is further configured to obtain a reference two-dimensional image, and determine the first preset size interval according to the reference two-dimensional image.
Optionally, the first preset size interval is (W-dw, W+dw), where W is the imaging size of the pointer in the reference two-dimensional image, and dw is the interval length threshold of the first preset size interval.
Optionally, the reference two-dimensional image includes: the two-dimensional image acquired for the first time after the current startup of the electronic equipment; or a two-dimensional image whose acquisition time is separated from the acquisition time of the two-dimensional image last obtained by the electronic equipment by more than a predetermined time threshold.
It will be appreciated that, as shown in Figure 14, the electronic equipment of the specific embodiment of the present invention may also include one or more of the following components:
a radio frequency unit 1401, a network module 1402, an audio output unit 1403, an input unit 1404, a sensor 1405, a display unit 1406, a user input unit 1407, an interface unit 1408, a power supply 1411 and other components.
It should be understood that, in the embodiment of the present invention, the radio frequency unit 1401 can be used for receiving and sending signals during information transmission and reception or during a call; specifically, downlink data from a base station are received and then delivered to the processor 1410 for processing, and uplink data are sent to the base station. In general, the radio frequency unit 1401 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low-noise amplifier, a duplexer and the like. In addition, the radio frequency unit 1401 can also communicate with networks and other devices through a wireless communication system.
The electronic equipment provides the user with wireless broadband Internet access through the network module 1402, such as helping the user to send and receive e-mails, browse web pages and access streaming media.
The audio output unit 1403 can convert audio data received by the radio frequency unit 1401 or the network module 1402, or stored in the memory 1409, into an audio signal and output it as sound. Moreover, the audio output unit 1403 can also provide audio output related to a specific function performed by the electronic equipment 1400 (for example, a call signal reception sound or a message reception sound). The audio output unit 1403 includes a loudspeaker, a buzzer, a receiver and the like.
The input unit 1404 is used for receiving audio or video signals. The input unit 1404 may include a graphics processing unit (Graphics Processing Unit, GPU) 14041 and a microphone 14042. The graphics processing unit 14041 processes image data of still pictures or video obtained by an image capture apparatus (such as a camera) in a video capture mode or an image capture mode. The processed image frames may be displayed on the display unit 1406. The image frames processed by the graphics processing unit 14041 may be stored in the memory 1409 (or another storage medium) or sent via the radio frequency unit 1401 or the network module 1402. The microphone 14042 can receive sound and process such sound into audio data; in a telephone call mode, the processed audio data can be converted into a format transmittable to a mobile communication base station via the radio frequency unit 1401 and output.
The electronic equipment 1400 further includes at least one sensor 1405, such as an optical sensor, a motion sensor and other sensors. Specifically, the optical sensor includes an ambient light sensor and a proximity sensor; the ambient light sensor can adjust the brightness of the display panel 14061 according to the brightness of the ambient light, and the proximity sensor can turn off the display panel 14061 and/or the backlight when the electronic equipment 1400 is moved to the ear. As a kind of motion sensor, an accelerometer sensor can detect the magnitude of acceleration in all directions (generally three axes), can detect the magnitude and direction of gravity when stationary, and can be used for recognizing the posture of the electronic equipment (such as landscape/portrait switching, related games and magnetometer posture calibration) and for vibration-recognition-related functions (such as a pedometer or tapping). The sensor 1405 may also include a fingerprint sensor, a pressure sensor, an iris sensor, a molecular sensor, a gyroscope, a barometer, a hygrometer, a thermometer, an infrared sensor and the like, which are not described in detail here.
The display unit 1406 is used for displaying information input by the user or information provided to the user. The display unit 1406 may include a display panel 14061, which may be configured in the form of a liquid crystal display (Liquid Crystal Display, LCD), an organic light-emitting diode (Organic Light-Emitting Diode, OLED) or the like.
The user input unit 1407 can be used for receiving input numeric or character information and generating key signal input related to user settings and function control of the electronic equipment. Specifically, the user input unit 1407 includes a touch panel 14071 and other input devices 14072. The touch panel 14071, also referred to as a touch screen, collects touch operations of the user on it or near it (for example, operations of the user on or near the touch panel 14071 with a finger, a stylus or any other suitable object or accessory). The touch panel 14071 may include two parts: a touch detection apparatus and a touch controller. The touch detection apparatus detects the touch orientation of the user, detects the signal brought by the touch operation and transmits the signal to the touch controller; the touch controller receives touch information from the touch detection apparatus, converts it into contact coordinates, sends them to the processor 1410, and receives and executes commands sent by the processor 1410. Furthermore, the touch panel 14071 can be realized in multiple types such as resistive, capacitive, infrared and surface acoustic wave types. Besides the touch panel 14071, the user input unit 1407 may also include other input devices 14072. Specifically, the other input devices 14072 may include, but are not limited to, a physical keyboard, function keys (such as a volume control key or a switch key), a trackball, a mouse and a joystick, which are not described in detail here.
Further, the touch panel 14071 can cover the display panel 14061. After detecting a touch operation on or near it, the touch panel 14071 transmits it to the processor 1410 to determine the type of the touch event, and the processor 1410 then provides corresponding visual output on the display panel 14061 according to the type of the touch event. Although in Figure 14 the touch panel 14071 and the display panel 14061 realize the input and output functions of the electronic equipment as two independent components, in some embodiments the touch panel 14071 and the display panel 14061 can be integrated to realize the input and output functions of the electronic equipment, which is not specifically limited here.
The interface unit 1408 is an interface for connecting an external apparatus with the electronic equipment 1400. For example, the external apparatus may include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting an apparatus with an identification module, an audio input/output (I/O) port, a video I/O port, a headphone port and the like. The interface unit 1408 can be used for receiving input (for example, data information or electric power) from an external apparatus, transferring the received input to one or more elements in the electronic equipment 1400, or transmitting data between the electronic equipment 1400 and the external apparatus.
The memory 1409 can be used for storing software programs and various data. The memory 1409 may mainly include a program storage area and a data storage area; the program storage area can store an operating system, an application program required by at least one function (such as a sound playing function or an image playing function) and the like, and the data storage area can store data created according to the use of the mobile phone (such as audio data or a phone book) and the like. In addition, the memory 1409 may include a high-speed random access memory, and may also include a non-volatile memory, for example at least one magnetic disk memory, a flash memory device or another solid-state memory component. The processor 1410 is the control center of the electronic equipment; it connects the various parts of the entire electronic equipment through various interfaces and lines, and executes the various functions of the electronic equipment and processes data by running or executing the software programs and/or modules stored in the memory 1409 and calling data stored in the memory 1409, so as to monitor the electronic equipment as a whole. The processor 1410 may include one or more processing units; preferably, the processor 1410 may integrate an application processor and a modem processor, where the application processor mainly handles the operating system, the user interface, the application programs and the like, and the modem processor mainly handles wireless communication. It can be understood that the modem processor may also not be integrated into the processor 1410.
The electronic device 1400 may further include a power supply 1411 (such as a battery) that supplies power to each component. Preferably, the power supply 1411 may be logically connected to the processor 1410 through a power management system, so that functions such as charge management, discharge management, and power consumption management are implemented through the power management system.
In addition, the electronic device 1400 includes some functional modules that are not shown, and details are not described herein.
Preferably, an embodiment of the present invention further provides an electronic device, including a processor 1410, a memory 1409, and a computer program that is stored in the memory 1409 and executable on the processor 1410. When the computer program is executed by the processor 1410, each process of the foregoing human-computer interaction method embodiments is implemented, and the same technical effect can be achieved. To avoid repetition, details are not described herein again.
An embodiment of the present invention further provides a computer-readable storage medium on which a computer program is stored. When the computer program is executed by a processor, each process of the foregoing human-computer interaction method embodiments is implemented, and the same technical effect can be achieved. To avoid repetition, details are not described herein again. The computer-readable storage medium is, for example, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
It should be noted that, in this document, the terms "include", "comprise", or any other variant thereof are intended to cover a non-exclusive inclusion, so that a process, method, article, or apparatus that includes a series of elements includes not only those elements but also other elements that are not explicitly listed, or further includes elements inherent to such a process, method, article, or apparatus. In the absence of more limitations, an element defined by the phrase "including a ..." does not exclude the existence of other identical elements in the process, method, article, or apparatus that includes the element.
Through the foregoing description of the embodiments, a person skilled in the art can clearly understand that the methods of the above embodiments may be implemented by software plus a necessary general-purpose hardware platform, or certainly by hardware, but in many cases the former is the better implementation. Based on such an understanding, the technical solutions of the present invention essentially, or the part contributing to the prior art, may be embodied in the form of a software product. The computer software product is stored in a storage medium (such as a ROM/RAM, a magnetic disk, or an optical disk) and includes several instructions for causing a terminal (which may be a mobile phone, a computer, a server, an air conditioner, a network device, or the like) to perform the methods described in the embodiments of the present invention.
The embodiments of the present invention are described above with reference to the accompanying drawings, but the present invention is not limited to the foregoing specific embodiments. The foregoing specific embodiments are merely illustrative rather than restrictive. Inspired by the present invention, a person of ordinary skill in the art may further make many forms without departing from the spirit of the present invention and the scope of protection of the claims, all of which fall within the protection of the present invention.
Claims (22)
1. A human-computer interaction method, characterized by comprising:
processing each two-dimensional image in a plurality of two-dimensional images including a pointer, to obtain an imaging size of the pointer in each two-dimensional image;
determining at least one target two-dimensional image from the plurality of two-dimensional images according to the imaging size of the pointer in each two-dimensional image; and
determining and executing a corresponding user instruction by using the at least one target two-dimensional image.
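The pipeline of claim 1 can be sketched as follows. This is a minimal illustration, not the patented implementation: the size measurement and the mapping from matched frames to an instruction are hypothetical stand-ins, since the claim does not fix either.

```python
# Sketch of the claim-1 pipeline: measure the pointer's imaging size in
# each binary 2-D image, keep the frames whose size falls inside a preset
# interval, and map the selection to a user instruction.

def imaging_size(image):
    # Stand-in measurement: foreground-pixel count of the widest row of a
    # binary image (each row is a list of 0/1 values).
    return max(sum(row) for row in image)

def select_targets(images, interval):
    # Keep images whose imaging size lies in the open interval (lo, hi).
    lo, hi = interval
    return [img for img in images if lo < imaging_size(img) < hi]

def execute_instruction(targets, instruction_for_count):
    # Hypothetical mapping from the number of matching frames to an action.
    return instruction_for_count.get(len(targets), "idle")

frames = [
    [[0, 1, 1, 0], [0, 1, 0, 0]],   # widest row: 2 pixels
    [[1, 1, 1, 1], [0, 1, 1, 0]],   # widest row: 4 pixels
    [[0, 0, 1, 0], [0, 1, 1, 0]],   # widest row: 2 pixels
]
targets = select_targets(frames, interval=(1, 3))
action = execute_instruction(targets, {2: "zoom-in", 0: "idle"})
print(len(targets), action)  # 2 zoom-in
```

Only the second frame falls outside the interval, so two target images remain and the illustrative two-frame action is chosen.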
2. The human-computer interaction method according to claim 1, characterized in that the pointer comprises a finger, and the processing each two-dimensional image in the plurality of two-dimensional images including the pointer to obtain the imaging size of the pointer in each two-dimensional image specifically comprises:
determining a connected region corresponding to the finger included in the two-dimensional image;
performing fingertip localization on the finger based on geometric features of the finger, to obtain fingertip coordinates; and
determining a connected-region width corresponding to the fingertip coordinates.
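The three steps of claim 2 can be sketched on a binary hand mask. The fingertip heuristic below (topmost foreground pixel) is one common geometric choice and is an assumption; the claim does not prescribe a particular localization method.

```python
# Sketch of claim 2: find the finger's connected foreground in a binary
# mask, locate the fingertip as the topmost foreground pixel, and measure
# the connected-region width at the fingertip row.

def fingertip_width(mask):
    # mask: list of rows of 0/1; assumes one finger-like region pointing up.
    for y, row in enumerate(mask):          # scan rows top to bottom
        if any(row):                        # first row with foreground
            xs = [x for x, v in enumerate(row) if v]
            tip = (min(xs), y)              # fingertip coordinate
            width = max(xs) - min(xs) + 1   # region width at that row
            return tip, width
    return None, 0                          # empty mask: no finger found

mask = [
    [0, 0, 0, 0, 0],
    [0, 0, 1, 1, 0],   # fingertip row: 2 foreground pixels
    [0, 1, 1, 1, 0],
    [0, 1, 1, 1, 0],
]
tip, width = fingertip_width(mask)
print(tip, width)  # (2, 1) 2
```

The width measured at the fingertip row serves as the imaging size used by claim 1; a closer finger yields a wider region.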
3. The human-computer interaction method according to claim 1, characterized in that the imaging size of the pointer in the target two-dimensional image falls within a first preset size interval.
4. The human-computer interaction method according to claim 3, characterized in that the first preset size interval is one of at least two preset size intervals, and any two preset size intervals of the at least two preset size intervals are spaced apart from each other.
5. The human-computer interaction method according to claim 4, characterized in that the plurality of two-dimensional images further comprise at least one two-dimensional image corresponding to a second preset size interval of the at least two preset size intervals; and
the determining and executing a corresponding user instruction by using the at least one target two-dimensional image specifically comprises:
determining and executing the corresponding user instruction by using at least one target two-dimensional image corresponding to the first preset size interval.
6. The human-computer interaction method according to claim 4, characterized in that, among the at least two preset size intervals, a preset size interval with larger values has a larger interval length.
7. The human-computer interaction method according to claim 4, characterized in that the human-computer interaction method further comprises:
determining a target controllable object set corresponding to the first preset size interval according to a mapping relationship between preset size intervals and controllable object sets; and
displaying controllable objects in the target controllable object set.
8. The human-computer interaction method according to claim 7, characterized in that the displaying controllable objects in the target controllable object set comprises:
when the target controllable object set differs from a currently displayed controllable object set, updating a current first display interface to a second display interface that includes the controllable objects in the target controllable object set.
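Claims 7 and 8 can be sketched together: look up the controllable object set for the matched interval and refresh the display only when the set changes. The interval keys and object names below are illustrative assumptions, not values from the patent.

```python
# Sketch of claims 7-8: map the matched preset size interval to a set of
# controllable objects and update the display interface only when the
# target set differs from what is currently shown.

INTERVAL_TO_OBJECTS = {
    (10, 20): {"volume", "channel"},      # e.g. pointer far from the camera
    (30, 50): {"play", "pause", "stop"},  # e.g. pointer near the camera
}

def update_display(matched_interval, currently_shown):
    target = INTERVAL_TO_OBJECTS[matched_interval]
    if target != currently_shown:
        return target          # switch to the second display interface
    return currently_shown     # sets match: no interface update needed

shown = update_display((30, 50), {"volume", "channel"})
print(sorted(shown))  # ['pause', 'play', 'stop']
```

Keeping the interface unchanged when the sets already match avoids redundant redraws, which is the point of the claim-8 condition.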
9. The human-computer interaction method according to claim 3, characterized in that the human-computer interaction method further comprises:
obtaining a reference two-dimensional image; and
determining the first preset size interval according to the reference two-dimensional image.
10. The human-computer interaction method according to claim 9, characterized in that the first preset size interval is (W-dw, W+dw), where W is the imaging size of the pointer in the reference two-dimensional image, and dw is an interval-length threshold of the first preset size interval.
11. The human-computer interaction method according to claim 9, characterized in that the reference two-dimensional image comprises:
a two-dimensional image obtained for the first time after the current startup of the electronic device; or
a two-dimensional image whose acquisition time is more than a preset time threshold apart from the acquisition time of the two-dimensional image last obtained by the electronic device.
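The interval construction of claims 9 and 10 reduces to a few lines: measure the pointer's imaging size W in a reference image, then accept later frames whose size stays within the open interval (W-dw, W+dw). The numeric values below are illustrative only.

```python
# Sketch of claims 9-10: derive the first preset size interval
# (W - dw, W + dw) from the pointer's imaging size W in a reference
# two-dimensional image; dw is the interval-length threshold.

def preset_interval(reference_size_w, dw):
    return (reference_size_w - dw, reference_size_w + dw)

def in_interval(size, interval):
    # Open interval, matching the "(W-dw, W+dw)" notation of claim 10.
    lo, hi = interval
    return lo < size < hi

interval = preset_interval(reference_size_w=40, dw=5)
print(interval)                                              # (35, 45)
print(in_interval(42, interval), in_interval(50, interval))  # True False
```

Re-deriving the interval from a fresh reference image (per claim 11, at startup or after a long idle gap) lets the system adapt to the user standing at a different distance.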
12. An electronic device, characterized by comprising:
a first obtaining module, configured to process each two-dimensional image in a plurality of two-dimensional images including a pointer, to obtain an imaging size of the pointer in each two-dimensional image;
a first determining module, configured to determine at least one target two-dimensional image from the plurality of two-dimensional images according to the imaging size of the pointer in each two-dimensional image; and
an execution module, configured to determine and execute a corresponding user instruction by using the at least one target two-dimensional image.
13. The electronic device according to claim 12, characterized in that the imaging size of the pointer in the target two-dimensional image falls within a first preset size interval.
14. The electronic device according to claim 13, characterized in that the first preset size interval is one of at least two preset size intervals, and any two preset size intervals of the at least two preset size intervals are spaced apart from each other.
15. The electronic device according to claim 14, characterized in that the plurality of two-dimensional images further comprise at least one two-dimensional image corresponding to a second preset size interval of the at least two preset size intervals; and
the execution module is specifically configured to:
determine and execute the corresponding user instruction by using at least one target two-dimensional image corresponding to the first preset size interval.
16. The electronic device according to claim 14, characterized in that, among the at least two preset size intervals, a preset size interval with larger values has a larger interval length.
17. The electronic device according to claim 14, characterized in that the electronic device further comprises:
a second determining module, configured to determine a target controllable object set corresponding to the first preset size interval according to a mapping relationship between preset size intervals and controllable object sets; and
a display module, configured to display controllable objects in the target controllable object set.
18. The electronic device according to claim 17, characterized in that the display module is configured to:
when the target controllable object set differs from a currently displayed controllable object set, update a current first display interface to a second display interface that includes the controllable objects in the target controllable object set.
19. The electronic device according to claim 13, characterized in that the electronic device further comprises:
a second obtaining module, configured to obtain a reference two-dimensional image; and
a third determining module, configured to determine the first preset size interval according to the reference two-dimensional image.
20. The electronic device according to claim 19, characterized in that the first preset size interval is (W-dw, W+dw), where W is the imaging size of the pointer in the reference two-dimensional image, and dw is an interval-length threshold of the first preset size interval.
21. An electronic device, characterized by comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor, when executing the computer program, implements the steps in the human-computer interaction method according to any one of claims 1 to 11.
22. A computer-readable storage medium, characterized in that a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the steps in the human-computer interaction method according to any one of claims 1 to 11 are implemented.
Priority Applications (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710882327.XA CN109558000B (en) | 2017-09-26 | 2017-09-26 | Man-machine interaction method and electronic equipment |
EP18861362.4A EP3690605A4 (en) | 2017-09-26 | 2018-09-25 | Gesture recognition method and electronic device |
PCT/CN2018/107214 WO2019062682A1 (en) | 2017-09-26 | 2018-09-25 | Gesture recognition method and electronic device |
US16/340,497 US10866649B2 (en) | 2017-09-26 | 2018-09-25 | Gesture identification method and electronic device |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710882327.XA CN109558000B (en) | 2017-09-26 | 2017-09-26 | Man-machine interaction method and electronic equipment |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109558000A true CN109558000A (en) | 2019-04-02 |
CN109558000B CN109558000B (en) | 2021-01-22 |
Family
ID=65861989
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710882327.XA Active CN109558000B (en) | 2017-09-26 | 2017-09-26 | Man-machine interaction method and electronic equipment |
Country Status (4)
Country | Link |
---|---|
US (1) | US10866649B2 (en) |
EP (1) | EP3690605A4 (en) |
CN (1) | CN109558000B (en) |
WO (1) | WO2019062682A1 (en) |
Families Citing this family (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108230383B (en) * | 2017-03-29 | 2021-03-23 | 北京市商汤科技开发有限公司 | Hand three-dimensional data determination method and device and electronic equipment |
US11270531B2 (en) | 2019-06-28 | 2022-03-08 | GM Cruise Holdings, LLC | Autonomous vehicle data management platform |
US20230196823A1 (en) * | 2021-12-16 | 2023-06-22 | Nanjing Easthouse Electrical Co., Ltd. | Finger vein sensors |
JP2023139535A (en) * | 2022-03-22 | 2023-10-04 | キヤノン株式会社 | Gesture recognition apparatus, head-mounted display apparatus, gesture recognition method, program, and storage medium |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20080273755A1 (en) * | 2007-05-04 | 2008-11-06 | Gesturetek, Inc. | Camera-based user input for compact devices |
US20090167882A1 (en) * | 2007-12-28 | 2009-07-02 | Wistron Corp. | Electronic device and operation method thereof |
CN102508547A (en) * | 2011-11-04 | 2012-06-20 | 哈尔滨工业大学深圳研究生院 | Computer-vision-based gesture input method construction method and system |
US20130181897A1 (en) * | 2010-09-22 | 2013-07-18 | Shimane Prefectural Government | Operation input apparatus, operation input method, and program |
CN103376890A (en) * | 2012-04-16 | 2013-10-30 | 富士通株式会社 | Gesture remote control system based on vision |
CN103440035A (en) * | 2013-08-20 | 2013-12-11 | 华南理工大学 | Gesture recognition system in three-dimensional space and recognition method thereof |
WO2015057410A1 (en) * | 2013-10-16 | 2015-04-23 | Qualcomm Incorporated | Z-axis determination in a 2d gesture system |
US9323352B1 (en) * | 2012-10-23 | 2016-04-26 | Amazon Technologies, Inc. | Child-appropriate interface selection using hand recognition |
Family Cites Families (28)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP1148411A3 (en) * | 2000-04-21 | 2005-09-14 | Sony Corporation | Information processing apparatus and method for recognising user gesture |
US7227526B2 (en) * | 2000-07-24 | 2007-06-05 | Gesturetek, Inc. | Video-based image control system |
JP3996015B2 (en) * | 2002-08-09 | 2007-10-24 | 本田技研工業株式会社 | Posture recognition device and autonomous robot |
US8062126B2 (en) * | 2004-01-16 | 2011-11-22 | Sony Computer Entertainment Inc. | System and method for interfacing with a computer program |
US7623731B2 (en) * | 2005-06-20 | 2009-11-24 | Honda Motor Co., Ltd. | Direct method for modeling non-rigid motion with thin plate spline transformation |
US9910497B2 (en) * | 2006-02-08 | 2018-03-06 | Oblong Industries, Inc. | Gestural control of autonomous and semi-autonomous systems |
US8818027B2 (en) * | 2010-04-01 | 2014-08-26 | Qualcomm Incorporated | Computing device interface |
JP2011258159A (en) * | 2010-06-11 | 2011-12-22 | Namco Bandai Games Inc | Program, information storage medium and image generation system |
JP5439347B2 (en) * | 2010-12-06 | 2014-03-12 | 日立コンシューマエレクトロニクス株式会社 | Operation control device |
US9189068B2 (en) * | 2011-03-14 | 2015-11-17 | Lg Electronics Inc. | Apparatus and a method for gesture recognition |
US8840466B2 (en) * | 2011-04-25 | 2014-09-23 | Aquifi, Inc. | Method and system to create three-dimensional mapping in a two-dimensional game |
EP2703949B1 (en) * | 2011-04-28 | 2017-10-25 | NEC Solution Innovators, Ltd. | Information processing device, information processing method, and recording medium |
CN202075718U (en) | 2011-05-17 | 2011-12-14 | 东华大学 | Finger multi-point touch control system based on color discrimination |
JP2012248067A (en) * | 2011-05-30 | 2012-12-13 | Canon Inc | Information input device, control method for the same and control program |
US8881051B2 (en) * | 2011-07-05 | 2014-11-04 | Primesense Ltd | Zoom-based gesture user interface |
US9268406B2 (en) * | 2011-09-30 | 2016-02-23 | Microsoft Technology Licensing, Llc | Virtual spectator experience with a personal audio/visual apparatus |
CN103135754B (en) * | 2011-12-02 | 2016-05-11 | 深圳泰山体育科技股份有限公司 | Adopt interactive device to realize mutual method |
TWI454966B (en) * | 2012-04-24 | 2014-10-01 | Wistron Corp | Gesture control method and gesture control device |
JP6202810B2 (en) * | 2012-12-04 | 2017-09-27 | アルパイン株式会社 | Gesture recognition apparatus and method, and program |
CN103926999B (en) * | 2013-01-16 | 2017-03-01 | 株式会社理光 | Palm folding gesture identification method and device, man-machine interaction method and equipment |
US20140282274A1 (en) * | 2013-03-15 | 2014-09-18 | Qualcomm Incorporated | Detection of a gesture performed with at least two control objects |
US9274607B2 (en) * | 2013-03-15 | 2016-03-01 | Bruno Delean | Authenticating a user using hand gesture |
CN103440033B (en) * | 2013-08-19 | 2016-12-28 | 中国科学院深圳先进技术研究院 | A kind of method and apparatus realizing man-machine interaction based on free-hand and monocular cam |
US20150084859A1 (en) * | 2013-09-23 | 2015-03-26 | Yair ITZHAIK | System and Method for Recognition and Response to Gesture Based Input |
PL2921936T3 (en) * | 2014-03-22 | 2019-09-30 | Monster & Devices Home sp. z o.o. | Method and apparatus for gesture control of a device |
JP6603024B2 (en) * | 2015-02-10 | 2019-11-06 | 任天堂株式会社 | Information processing program, information processing apparatus, information processing system, and information processing method |
US9848780B1 (en) * | 2015-04-08 | 2017-12-26 | Google Inc. | Assessing cardiovascular function using an optical sensor |
JP2017059103A (en) * | 2015-09-18 | 2017-03-23 | パナソニックIpマネジメント株式会社 | Determination device, determination method, determination program and recording medium |
Application timeline:
- 2017-09-26: CN application CN201710882327.XA filed; granted as patent CN109558000B (status: Active)
- 2018-09-25: PCT application PCT/CN2018/107214 filed (published as WO2019062682A1)
- 2018-09-25: US application 16/340,497 filed; granted as patent US10866649B2 (status: Active)
- 2018-09-25: EP application 18861362.4 filed (EP3690605A4; status: Withdrawn)
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113946206A (en) * | 2020-07-17 | 2022-01-18 | 云米互联科技(广东)有限公司 | Household appliance interaction method, household appliance equipment and storage medium |
CN113946205A (en) * | 2020-07-17 | 2022-01-18 | 云米互联科技(广东)有限公司 | Gas control method, gas equipment and storage medium |
CN113946205B (en) * | 2020-07-17 | 2024-02-09 | 云米互联科技(广东)有限公司 | Gas control method, gas equipment and storage medium |
Also Published As
Publication number | Publication date |
---|---|
US10866649B2 (en) | 2020-12-15 |
CN109558000B (en) | 2021-01-22 |
US20190243462A1 (en) | 2019-08-08 |
EP3690605A4 (en) | 2021-12-15 |
WO2019062682A1 (en) | 2019-04-04 |
EP3690605A1 (en) | 2020-08-05 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN105518605B (en) | A kind of touch operation method and device of terminal | |
CN109558000A (en) | A kind of man-machine interaction method and electronic equipment | |
CN108536365A (en) | A kind of images share method and terminal | |
CN108845853A (en) | A kind of application program launching method and mobile terminal | |
CN109032734A (en) | A kind of background application display methods and mobile terminal | |
CN107977132A (en) | A kind of method for information display and mobile terminal | |
CN108287650A (en) | One-handed performance method based on mobile terminal and mobile terminal | |
CN107707762A (en) | A kind of method for operating application program and mobile terminal | |
CN110531904A (en) | A kind of background task display methods and terminal | |
CN109240577A (en) | A kind of screenshotss method and terminal | |
CN108897473A (en) | A kind of interface display method and terminal | |
CN109669747A (en) | A kind of method and mobile terminal of moving icon | |
CN108287655A (en) | A kind of interface display method, interface display apparatus and mobile terminal | |
CN108563383A (en) | A kind of image viewing method and mobile terminal | |
CN110231900A (en) | A kind of application icon display methods and terminal | |
CN110531915A (en) | Screen operating method and terminal device | |
CN110109604A (en) | A kind of application interface display methods and mobile terminal | |
CN109800045A (en) | A kind of display methods and terminal | |
CN109343788A (en) | A kind of method of controlling operation thereof and mobile terminal of mobile terminal | |
CN108898555A (en) | A kind of image processing method and terminal device | |
CN108388396A (en) | A kind of interface switching method and mobile terminal | |
CN109683802A (en) | A kind of icon moving method and terminal | |
CN108228902A (en) | A kind of document display method and mobile terminal | |
CN107741814A (en) | A kind of display control method and mobile terminal | |
CN109508136A (en) | A kind of display methods and mobile terminal of application program |
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| GR01 | Patent grant | |