CN102880352A - Non-contact interface operation method and non-contact interface operation system - Google Patents
- Publication number
- CN102880352A (application CN201110193258A)
- Authority
- CN
- China
- Prior art keywords
- coordinate
- display
- interface
- module
- coordinate system
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Abstract
The invention discloses a non-contact interface operation method and a non-contact interface operation system. The method comprises the following steps: recording a first coordinate of an operating point in a 3D-displayed operation interface; identifying and recording a second coordinate of a human hand; and judging whether the first coordinate corresponds to the second coordinate, so that the operation interface responds. With this method, a 3D display system is operated by body movements in 3D space: during operation a person can make many different movements in three-dimensional space, realizing many different operations on the system, so that human-machine interaction is flexible. Moreover, no equipment needs to be touched, avoiding cross-infection via equipment in public places.
Description
Technical field
The invention belongs to the field of electronic equipment operation control and human-computer interaction, and in particular relates to a non-contact interface operation method and system.
Background art
As a new kind of computer input device, the touch screen is a simple, convenient, and natural mode of human-machine interaction: the user operates the host simply by touching icons or text on the screen with a finger. Existing touch-screen operation, however, requires the hand to contact the screen surface, so the hand is limited to two-dimensional actions such as clicking and sliding. Moreover, the ATMs, queue-ticket machines, and similar devices currently used in public places are all operated through touch screens; since everyone touches the same screen, this can cause cross-infection among the public and creates a safety hazard.
Summary of the invention
In view of this, the technical problem to be solved by the invention is to provide a non-contact interface operation method and system. To give a basic understanding of some aspects of the disclosed embodiments, a brief summary is provided below. This summary is not an extensive overview; it is intended neither to identify key or critical elements nor to delimit the scope of these embodiments. Its sole purpose is to present some concepts in a simplified form as a prelude to the detailed description that follows.
The invention discloses a non-contact interface operation method, comprising:
recording a first coordinate of an operating point in a 3D-displayed operation interface;
identifying and recording a second coordinate of a human hand; and
judging whether the first coordinate corresponds to the second coordinate, so that the operation interface responds.
In some optional embodiments, recording the first coordinate comprises:
displaying the operation interface in 3D mode; and
determining a first coordinate system with one corner of the display as the coordinate origin, and determining the coordinate of the operating point of the operation interface in the first coordinate system.
In some optional embodiments, recording the second coordinate of the hand comprises:
collecting gesture and position information of the hand with two cameras; and
determining a second coordinate system with either of the two cameras as the coordinate origin, and determining the coordinate of the hand in the second coordinate system.
In some optional embodiments, judging whether the coordinates correspond comprises:
determining the first coordinate system with one corner of the display as the coordinate origin, and converting the second coordinate into the first coordinate system; and
comparing: if the converted second coordinate is identical to the first coordinate, determining that they correspond.
Another aspect of the invention proposes a non-contact interface operation system, characterized by comprising:
a first unit, which records a first coordinate of an operating point in a 3D-displayed operation interface;
a second unit, which identifies and records a second coordinate of a human hand; and
a judging unit, which judges whether the first coordinate corresponds to the second coordinate, so that the operation interface responds.
In some optional embodiments, the first unit comprises:
a display module, which displays the operation interface in 3D mode; and
a first module, which determines a first coordinate system with one corner of the display as the coordinate origin and determines the coordinate of the operating point of the operation interface in the first coordinate system.
In some optional embodiments, the second unit comprises:
an acquisition module, which collects gesture and position information of the hand with two cameras; and
a second module, which determines a second coordinate system with either of the two cameras as the coordinate origin and determines the coordinate of the hand in the second coordinate system.
In some optional embodiments, the judging unit comprises:
a conversion module, which determines the first coordinate system with one corner of the display as the coordinate origin and converts the second coordinate into the first coordinate system; and
a comparison module, which determines that the coordinates correspond if the converted second coordinate is identical to the first coordinate.
To accomplish the foregoing and related ends, the one or more embodiments comprise the features described in detail below and particularly pointed out in the claims. The following description and the drawings set forth certain illustrative aspects in detail, indicating only some of the various ways in which the principles of the embodiments may be employed. Other benefits and novel features will become apparent from the following detailed description when considered in conjunction with the drawings, and the disclosed embodiments are intended to include all such aspects and their equivalents.
The invention operates a 3D-displayed system through body movements in 3D space. During operation a person can make many different three-dimensional movements, realizing many different operations on the system without contacting any equipment, thereby avoiding cross-infection in public places. The 3D-displayed operating system can be operated without touching the host and without any input device, achieving more flexible human-machine interaction. The invention is applicable to all kinds of open equipment in public places, such as bank ATMs, queue-ticket machines, telecom operators' fee-payment machines, and ticket vending machines in railway and subway stations.
Description of the drawings
Fig. 1 is a schematic diagram of the method of the invention;
Fig. 2 is a flowchart of the method of the invention;
Fig. 3 is a schematic diagram of the system of the invention;
Fig. 4 illustrates the 3D display principle;
Fig. 5 is a schematic diagram of the binocular camera acquisition positions;
Fig. 6 is a schematic diagram of the binocular camera acquisition positions;
Fig. 7 is a schematic diagram of the two coordinate-system spaces of the invention;
Fig. 8 shows the effect of operating the system of the invention;
Fig. 9 shows an effect of embodiment one of the invention;
Fig. 10 shows an effect of embodiment one of the invention.
Detailed description
The following description and drawings fully illustrate specific embodiments of the invention so as to enable those skilled in the art to practice them. Other embodiments may incorporate structural, logical, electrical, process, and other changes. The examples merely represent possible variations. Individual components and functions are optional unless explicitly required, and the order of operations may vary. Portions and features of some embodiments may be included in, or substituted for, those of others. The scope of the embodiments of the invention encompasses the full ambit of the claims and all available equivalents thereof. Herein, these embodiments may be referred to, individually or collectively, by the term "invention" merely for convenience, without intending to limit the scope of this application to any single invention or inventive concept if more than one is in fact disclosed.
The invention is based on current 3D display technology and machine vision technology: the stereo image produced by the 3D display is matched with the person's limbs in space, as shown in Fig. 1. Visually, this creates the sensation that the limbs touch the image in space, realizing genuinely contact-free, through-the-air operation, i.e. non-contact operation. Human-machine interaction becomes more flexible, no equipment needs to be contacted, and cross-infection via equipment in public places is avoided.
The non-contact interface operation method proposed by the invention, shown in Fig. 2, proceeds as follows:
Step 201: record the first coordinate of the operating point in the 3D-displayed operation interface.
First, the system's operation interface is shown in three-dimensional space using 3D display technology. 3D display uses a series of optical methods to make a person's left and right eyes receive different pictures with a parallax between them, so that a stereoscopic 3D effect forms in the brain. Although a person's two eyes look in the same direction at the same time, the interpupillary distance of roughly 65 mm means they cannot share the same line of sight, so within a certain range the images seen by the two eyes differ.
As shown in Fig. 4, the left eye and the right eye see the left-eye picture and the right-eye picture respectively, so the two lines of sight intersect in front of the viewer; at this intersection a stereo image synthesized from the two pictures forms visually, i.e. the brain fuses the two pictures into one stereogram at the intersection point. By this 3D display principle, taking the top-left corner of the display as the spatial origin O, the display face as the XY plane, and the direction perpendicular to the display as the Z axis, we can obtain the coordinate D(x, y, z) of any operating point of the system's operation interface in three-dimensional space.
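For illustration, the display-space operating-point coordinate D(x, y, z) can be obtained from an on-screen pixel position together with the depth at which the 3D effect makes the point appear to float. A minimal sketch, in which the function name, the pixel pitch, and the pop-out depth are all illustrative assumptions rather than values from the patent:

```python
# Sketch: express a UI operating point in the display coordinate system
# (origin O at the display's top-left corner, display face = XY plane,
# Z perpendicular to the display). pitch_mm and pop_out_mm are
# hypothetical parameters chosen only for illustration.

def operating_point_mm(px, py, pop_out_mm, pitch_mm=0.25):
    """Convert a pixel position (px, py) and the apparent floating
    depth pop_out_mm of the 3D effect into millimetre coordinates
    D(x, y, z) in the display coordinate system."""
    return (px * pitch_mm, py * pitch_mm, pop_out_mm)

D = operating_point_mm(800, 400, 120.0)  # a point 120 mm in front of the screen
```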
Step 202: identify and record the second coordinate of the hand.
The person sees the 3D-displayed three-dimensional operation interface and operates it through limb movements. The limb movements are captured with a binocular (two-camera) setup; the movements are recognized, and at the same time the 3D coordinates of the limb during the movement are calculated.
As shown in Fig. 5 and Fig. 6, two cameras of focal length f are placed in parallel with distance T between their optical axes. The two rectangles in Fig. 5 represent the imaging planes of the left and right cameras, and O_l and O_r are the focal points of the left and right cameras. For any point P in the scene, the imaging points on the left and right imaging planes are p_l and p_r, whose image coordinates on the imaging planes are x_l and x_r; the disparity is then defined as d = x_l − x_r, as shown in Fig. 6.
Taking the left camera focal point O_l in Fig. 5 as the origin, the line through O_l and O_r as the X axis, the left camera's optical axis as the Z axis, and the axis perpendicular to the XZ plane as the Y axis, the coordinates of point P in the O_l coordinate system can be calculated from formula (1):

X = x_l · T / d,  Y = y_l · T / d,  Z = f · T / d   (1)

where y_l is the vertical image coordinate of p_l.
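The triangulation of formula (1) can be sketched as follows; the function name and numeric values are illustrative, and the units are whatever f, T, and the image coordinates are expressed in:

```python
# Sketch of formula (1): two parallel cameras with focal length f and
# baseline T between their optical axes; a scene point P images at
# horizontal coordinates x_l and x_r (and vertical coordinate y_l) on
# the left/right image planes, giving disparity d = x_l - x_r.

def triangulate(x_l, x_r, y_l, f, T):
    """Return P = (X, Y, Z) in the left-camera coordinate system."""
    d = x_l - x_r                  # disparity
    if d == 0:
        raise ValueError("zero disparity: point at infinity")
    return (x_l * T / d,           # X
            y_l * T / d,           # Y
            f * T / d)             # Z, depth along the optical axis

P = triangulate(x_l=4.0, x_r=2.0, y_l=1.0, f=8.0, T=60.0)
```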
Step 203: judge whether the first coordinate corresponds to the second coordinate, so that the operation interface responds.
Through step 201 and step 202 we have calculated the coordinate D(x, y, z) of the operating point and the coordinate P(x, y, z) of the person's limb while performing the movement. Because the two coordinates are not in the same coordinate system, point P is converted, according to the positional relationship between the left camera and the display, into the coordinate system of point D, yielding P′(x, y, z) in that common coordinate system. The coordinate conversion is as follows:
Because the camera is placed above the display, let the distance between the left camera and the origin O be l; that is, the camera lies on the X axis of the space coordinate system with origin O, its optical axis is perpendicular to the X axis, and the angle between the optical axis and the XY plane is θ, as shown in Fig. 7.
When the limb coordinate was calculated in step 202, the left camera's optical axis was taken as the Z axis. With the limb coordinate point written as P(x, y, z), P is converted into P′ in the space coordinate system with origin O, as given by formula (2):

P′ = ( x + l,  y·cos θ − z·sin θ,  y·sin θ + z·cos θ )   (2)
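A sketch of the conversion in formula (2): the point P in camera coordinates is rotated about the X axis through the tilt angle θ of the optical axis and then shifted by the camera offset l along X. Since the exact sign conventions of formula (2) do not survive in this text, the ones used here are an assumption:

```python
import math

# Hypothetical sign conventions: rotation about the X axis through
# theta, then translation by l along X, mapping camera coordinates
# P = (x, y, z) to display coordinates P' with origin O.

def camera_to_display(P, l, theta):
    x, y, z = P
    return (x + l,
            y * math.cos(theta) - z * math.sin(theta),
            y * math.sin(theta) + z * math.cos(theta))

# With the camera axis parallel to the display normal (theta = 0) the
# conversion reduces to a pure shift along X.
P_prime = camera_to_display((120.0, 30.0, 240.0), l=100.0, theta=0.0)
```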
Finally, recognition technology is used to judge the person's limb movement; the movement needs to touch the operating point at a certain position in three-dimensional space. If D and P′ coincide, the movement has touched the operating point that is visually 3D-displayed in space, without contacting the display itself; the operating system then responds to the action, and otherwise it does not respond.
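The response decision can be sketched as a proximity test. With real sensor data an exact equality of D and P′ would almost never occur, so a tolerance eps is used here; eps and the coordinates are illustrative assumptions:

```python
# Sketch: the operation interface responds when the transformed hand
# point P' lies within eps of the operating point D (same units as the
# coordinates, e.g. millimetres). eps is a hypothetical tolerance.

def touches(D, P_prime, eps=10.0):
    return sum((a - b) ** 2 for a, b in zip(D, P_prime)) <= eps ** 2

hit = touches((200.0, 100.0, 120.0), (205.0, 98.0, 124.0))   # within eps
miss = touches((200.0, 100.0, 120.0), (300.0, 98.0, 124.0))  # too far away
```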
As shown in Fig. 8, the person sees the 3D-displayed system operation interface in front of the display and then operates it with limb movements in three-dimensional space. The two cameras capture the person's movements; recognition and tracking technology determines the form of the limb movement and records the 3D coordinates along its trajectory; the movement form and limb coordinates are matched against the coordinates of the interface's operating points to judge whether the system responds to the current operation. Throughout this process the brain perceives the limbs as visually touching the system's operation interface, while in real space no part of the body ever contacts the display and no other sensing device is needed: merely by moving within the region of space where the system interface is displayed, the person can operate the system, realizing non-contact operation.
Embodiment one:
This embodiment simulates queuing for bank business, where a number must be taken; the queue-ticket machine is operated contactlessly.
1. The customer stands in front of the ticket machine and, using 3D glasses or another technique, sees the operation interface displayed in space, as shown schematically in Fig. 9. In front of the customer are three business operation blocks; the middle interface is the 3D-displayed interface the customer sees. Taking the top-left corner of the ticket machine's display as the spatial coordinate origin O, the coordinate D of a 3D-displayed operation block is calculated according to the 3D display principle.
2. The customer needs to handle personal business and reaches out to click the personal-business block displayed in space. The two cameras at the display capture the hand movement; after the movement is judged to be a finger click, the 3D coordinate P of the finger is calculated by formula (1), as shown in Fig. 10. According to the positional relationship between camera A and O, the finger coordinate P is converted by formula (2) into P′ in the space coordinate system with origin O. If the converted finger coordinate P′ is identical to the personal-business block coordinate D, the customer has visually clicked the operation block in the interface; the operating system then responds and prints a personal-business queue number slip.
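The two steps of this embodiment can be sketched end to end, reusing the same assumptions made for formulas (1) and (2) above; every number, the tolerance, and the sign conventions are illustrative:

```python
import math

# Hypothetical end-to-end flow of the queue-ticket machine: triangulate
# the fingertip from the two cameras (formula (1)), convert it into the
# display coordinate system with origin O (formula (2)), and print a
# ticket when it coincides with the personal-business block D.

def fingertip_in_display(x_l, x_r, y_l, f, T, l, theta):
    d = x_l - x_r                          # disparity, formula (1)
    X, Y, Z = x_l * T / d, y_l * T / d, f * T / d
    return (X + l,                         # formula (2): shift by l along X
            Y * math.cos(theta) - Z * math.sin(theta),
            Y * math.sin(theta) + Z * math.cos(theta))

def ticket_machine(D, finger, eps=10.0):
    dist2 = sum((a - b) ** 2 for a, b in zip(D, finger))
    return "print personal-business ticket" if dist2 <= eps ** 2 else "no response"

finger = fingertip_in_display(4.0, 2.0, 1.0, f=8.0, T=60.0, l=100.0, theta=0.0)
result = ticket_machine((220.0, 30.0, 240.0), finger)
```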
The non-contact operation system proposed by the invention is introduced below.
Fig. 3 is a schematic diagram of the configuration of the non-contact operation system of the invention. The system comprises a first unit 301, a second unit 302, and a judging unit 303.
The first unit 301 records a first coordinate of an operating point in the 3D-displayed operation interface.
The second unit 302 identifies and records a second coordinate of the hand.
The judging unit 303 judges whether the first coordinate corresponds to the second coordinate, so that the operation interface responds.
Further, the first unit 301 comprises a display module 3011 and a first module 3012.
The display module 3011 displays the operation interface in 3D mode.
The first module 3012 determines a first coordinate system with one corner of the display as the coordinate origin, and determines the coordinate of the operating point of the operation interface in the first coordinate system.
Further, the second unit 302 comprises an acquisition module 3021 and a second module 3022.
The acquisition module 3021 collects gesture and position information of the hand with two cameras.
The second module 3022 determines a second coordinate system with either of the two cameras as the coordinate origin, and determines the coordinate of the hand in the second coordinate system.
Further, the judging unit 303 comprises a conversion module 3031 and a comparison module 3032.
The conversion module 3031 determines the first coordinate system with one corner of the display as the coordinate origin, and converts the second coordinate into the first coordinate system.
The comparison module 3032 determines that the coordinates correspond if the converted second coordinate is identical to the first coordinate.
In the above detailed description, various features are grouped together in a single embodiment to streamline the disclosure. This method of disclosure is not to be interpreted as reflecting an intention that the claimed subject matter requires more features than are expressly recited in each claim. Rather, as the appended claims reflect, the invention may lie in less than all features of a single disclosed embodiment. Thus the appended claims are hereby incorporated into the detailed description, with each claim standing on its own as a separate preferred embodiment of the invention.
Those skilled in the art will further appreciate that the various illustrative blocks, modules, circuits, and algorithm steps described in connection with the embodiments herein may be implemented as electronic hardware, computer software, or a combination of both. To clearly illustrate this interchangeability of hardware and software, the various illustrative components, blocks, modules, circuits, and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends on the particular application and the design constraints imposed on the overall system. Skilled artisans may implement the described functionality in different ways for each particular application, but such implementation decisions should not be interpreted as departing from the scope of the present disclosure.
Description above comprises giving an example of one or more embodiment.Certainly, all possible combination of describing parts or method in order to describe above-described embodiment is impossible, but those of ordinary skills should be realized that, each embodiment can do further combinations and permutations.Therefore, the embodiment that describes herein is intended to contain all the such changes, modifications and variations in the protection domain that falls into appended claims.In addition, " comprise " with regard to the term that uses in instructions or claims, the mode that contains of this word is similar to term and " comprises ", just as " comprising, " in the claims as link word explain like that.In addition, using any one term " perhaps " in the instructions of claims is to represent " non-exclusionism or ".
Claims (8)
1. A non-contact interface operation method, characterized by comprising:
recording a first coordinate of an operating point in a 3D-displayed operation interface;
identifying and recording a second coordinate of a human hand; and
judging whether the first coordinate corresponds to the second coordinate, so that the operation interface responds.
2. the method for claim 1 is characterized in that, described record the first coordinate comprises:
By 3D mode operation display interface;
Determine the first coordinate system take one jiao of display as true origin, determine the coordinate of operating point in the first coordinate system in the operation interface.
3. the method for claim 1 is characterized in that, the second coordinate of described record staff comprises:
Gesture and positional information by two camera collection staff;
Determine the second coordinate system take arbitrary camera wherein as true origin, determine the coordinate of staff in the second coordinate system.
4. the method for claim 1 is characterized in that, the described correspondence that judges whether comprises:
Determine the first coordinate system take one jiao of display as true origin, with described the second coordinate conversion to described the first coordinate system;
Relatively if the second coordinate and described the first coordinate after the conversion identical, then is defined as correspondence.
5. A non-contact interface operation system, characterized by comprising:
a first unit that records a first coordinate of an operating point in a 3D-displayed operation interface;
a second unit that identifies and records a second coordinate of a human hand; and
a judging unit that judges whether the first coordinate corresponds to the second coordinate, so that the operation interface responds.
6. The system of claim 5, characterized in that the first unit comprises:
a display module that displays the operation interface in 3D mode; and
a first module that determines a first coordinate system with one corner of the display as the coordinate origin and determines the coordinate of the operating point of the operation interface in the first coordinate system.
7. The system of claim 5, characterized in that the second unit comprises:
an acquisition module that collects gesture and position information of the hand with two cameras; and
a second module that determines a second coordinate system with either of the two cameras as the coordinate origin and determines the coordinate of the hand in the second coordinate system.
8. The system of claim 5, characterized in that the judging unit comprises:
a conversion module that determines the first coordinate system with one corner of the display as the coordinate origin and converts the second coordinate into the first coordinate system; and
a comparison module that determines that the coordinates correspond if the converted second coordinate is identical to the first coordinate.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN 201110193258 CN102880352A (en) | 2011-07-11 | 2011-07-11 | Non-contact interface operation method and non-contact interface operation system |
Publications (1)
Publication Number | Publication Date |
---|---|
CN102880352A true CN102880352A (en) | 2013-01-16 |
Family
ID=47481709
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106951074A (en) * | 2013-01-23 | 2017-07-14 | 青岛海信电器股份有限公司 | A kind of method and system for realizing virtual touch calibration |
CN106951074B (en) * | 2013-01-23 | 2019-12-06 | 青岛海信电器股份有限公司 | method and system for realizing virtual touch calibration |
CN104914981A (en) * | 2014-03-10 | 2015-09-16 | 联想(北京)有限公司 | Information processing method and electronic equipment |
CN104914981B (en) * | 2014-03-10 | 2018-07-06 | 联想(北京)有限公司 | A kind of information processing method and electronic equipment |
CN105407262A (en) * | 2014-09-16 | 2016-03-16 | 洪永川 | Camera |
CN107015644A (en) * | 2017-03-22 | 2017-08-04 | 腾讯科技(深圳)有限公司 | Virtual scene middle reaches target position adjustments method and device |
Legal Events
- C06 / PB01: Publication
- C02 / WD01: Invention patent application deemed withdrawn after publication (Patent Law 2001)
- Application publication date: 2013-01-16