CN1904806A - System and method of contactless position input by hand and eye relation guiding - Google Patents


Info

Publication number
CN1904806A
CN1904806A (application CN200610029483A)
Authority
CN
China
Prior art keywords
hand
eye
image
point
camera
Prior art date
Legal status
Granted
Application number
CN 200610029483
Other languages
Chinese (zh)
Other versions
CN100432897C (en)
Inventor
张之江
董志华
于瀛洁
李纯灿
潘志浩
许丽
周文静
马赫
Current Assignee
Shanghai University
University of Shanghai for Science and Technology
Original Assignee
University of Shanghai for Science and Technology
Priority date
Filing date
Publication date
Application filed by University of Shanghai for Science and Technology
Priority to CNB2006100294833A
Publication of CN1904806A
Application granted
Publication of CN100432897C
Expired - Fee Related
Anticipated expiration

Landscapes

  • Image Analysis (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The invention relates to a contactless position input system and method guided by the hand-eye relation. The system comprises two CCD video cameras, a large-scale display terminal, and a computer fitted with a multichannel image-capture card; a planar calibration template is used to calibrate the camera parameters. The position input method comprises the following steps: fully automatic camera calibration, acquisition of the hand and eye positions, and solution of the indicated target point. Based on computer vision technology, the invention achieves intuitive, accurate, real-time position input.

Description

Contactless position input system and method guided by the hand-eye relation
Technical field
The present invention relates to a three-dimensional visual measurement system and method, and in particular to a contactless position input system and method guided by the hand-eye relation.
Background technology
A position input system is a kind of computer input device; in principle, any interactive operation with a computer that requires positional information needs one. At present, the most common position input devices are the mouse and the touch screen. Both require the user to stay in physical contact with the device during use, which constrains the user's activity space. In settings where the user's movement should not be constrained, such as exhibitions and some industrial sites, contact-based position input is inconvenient or even impossible.
Replacing contact input with contactless position input gives the user an unconstrained input experience, improves the ease of use, interactivity, and enjoyment of the system, and is an effective way to raise the added value of digital products. Applications of this technology are still rare, however, and existing attempts fall mainly into two classes. The first class modifies an existing mouse, replacing the cable between mouse and host with an infrared or radio-frequency link to make a hand-held pointing device. This frees the user from the cable, but because positional input still requires rolling the mouse ball, the input is unintuitive and inaccurate for users far from the indicated target, such as visitors at an exhibition, who are often required to watch a display from some distance. The second class uses surface-mounted position sensors: several sensors are attached to parts of the user's body, such as the fingers, and positional input is derived by sensing specific body movements. This allows intuitive position input, but the need to wear sensors limits flexibility in application. Both classes in fact still require the user to touch input hardware, so strictly speaking neither is a true contactless position input system.
Summary of the invention
The object of the present invention is to provide a position input system and method guided by the hand-eye relation that uses binocular vision to track and accurately locate the user's finger and eyes in real time and to determine the target position the finger indicates, thereby achieving contactless, intuitive position input.
To achieve the above object, the conception of the present invention is as follows:
The contactless position input system guided by the hand-eye relation is composed of two CCD video cameras (1), a display terminal (2), and a computer (3) fitted with a multichannel image-capture card; a planar calibration template (4) is used to calibrate the camera parameters. The computer (3) is connected to the video cameras (1) through the image-capture card and to the display terminal (2) through the display card, and is provided with an automatic camera-calibration program and an indicated-position analysis program. The features of the system of the present invention are:
The two CCD video cameras (1) are mounted in a converging configuration to form a binocular vision system. The planar calibration template (4) consists of two concentric circles and four straight lines through the common centre dividing the circumference into equal parts, accurately drawn on a flat plate. The automatic camera-calibration program performs fully automatic, high-precision calibration of the camera parameters; the indicated-position analysis program outputs the coordinates of the indicated point in real time; and the display terminal may be a large-screen projector, an exhibition sand table, a three-dimensional model, or a spatial object of arbitrary shape.
The contactless position input method guided by the hand-eye relation is implemented with the above system and has the following features:
1. The binocular vision system needs to be calibrated only once, after the equipment is installed;
2. The spatial coordinates of object points are computed by the binocular vision principle, achieving contactless, real-time position measurement;
3. Face detection, eye localization, finger localization, and target tracking algorithms track the relevant body parts through the image sequence;
4. Based on human physiology, the straight line connecting the midpoint between the two eyes and the fingertip is taken as the person's sighting axis, from which the target position indicated by the finger is determined;
5. Each environment of use is abstracted into a set of environment parameters, so the system adapts to different applications simply by editing these parameters.
In accordance with the above conception, the present invention adopts the following technical solution:
A contactless position input system guided by the hand-eye relation is composed of two CCD video cameras, a large-scale display terminal, and a computer fitted with a multichannel image-capture card; the computer is connected to the cameras through the image-capture card and to the large-scale display terminal through the display card. It is characterized in that: the two CCD cameras are fixed directly in front of the user in a converging configuration, installed so that the fields of view of both cameras cover the front of the upper half of the user's body; during calibration, a planar calibration template is placed in the field of view of the two cameras; the computer calibrates the camera parameters from the template images acquired by the two cameras; during measurement the template is removed, and the computer solves for and outputs the target-point coordinates, i.e. the contactless indicated-position coordinates, from the preset environment parameters, the calibrated camera parameters, and the camera images containing the user's hand and eyes.
The above planar calibration template consists of two concentric circles and four straight lines through the common centre dividing the circumference into equal parts, accurately drawn on a smooth flat plate.
The above large-scale display terminal is a large-screen projector, an exhibition sand table, a three-dimensional model, or a spatial object.
A contactless position input method guided by the hand-eye relation, using the above system, proceeds as follows. After the two CCD cameras and the planar calibration template are installed and positioned, the computer runs the automatic camera-calibration program and calibrates the camera parameters from the acquired images of the template. Environment parameters are preset and the plane equation of the display terminal is established. During operation the template is removed; the indicated-position analysis program reads the first camera frame containing the user's hand and eyes and, from the preset environment parameters and the calibrated camera parameters, solves for and outputs the indicated-position coordinates. It then reads the next frame containing the user's hand and eyes and solves again, looping in this way to track the indicated position.
The above automatic camera-calibration program has the following steps:
Extraction of the calibration feature points, taking the intersections of the radial lines drawn on the planar calibration template with the circles as feature points: read in an image of the planar calibration template; extract sample points of the projected curves of the circles and the lines on the camera image plane (a circle projects to an ellipse, while a straight line still projects to a straight line); fit an ellipse equation and a line equation to the respective sample points by least squares; and solve the two fitted equations simultaneously for the ellipse-line intersection coordinates;
Camera parameter calculation: from the intersection coordinates obtained, compute the extrinsic camera parameters, namely the rotation matrix R and the translation vector T, and then the intrinsic camera parameters, namely the focal length f and the lens distortion coefficient k.
The above indicated-position analysis program has the following steps: read in a camera frame containing the user's hand and eyes; locate the midpoint coordinate Pe between the eyes and the index fingertip coordinate Pf; take the straight line connecting the eye midpoint and the fingertip as the person's sighting axis and determine the target position indicated by the finger: using the calibrated camera parameters, solve the four simultaneous projection equations for the eye-line midpoint and for the fingertip point separately to obtain their three-dimensional coordinates Pe(X, Y, Z) and Pf(X, Y, Z); form the equation of the spatial line through Pe and Pf; from the pre-entered environment parameters, establish the equation of the plane containing the display terminal; solve the line equation simultaneously with the plane equation and output the indicated target-point coordinates; then read in the next frame image and repeat the two preceding steps, tracking the indicated position.
Compared with the prior art, the present invention has the following evident substantive features and notable advantages:
1. Using cameras as the only sensors, it achieves truly contactless position input.
2. It takes human physiology into account: the system identifies the person's indicated target from the hand-eye positional relation, so the input gesture matches natural habit, no special user training is needed, and input is intuitive and accurate.
3. It adapts well to different environments of use. The body-part detection and tracking algorithms keep the system stable against complex backgrounds, so no special requirements are placed on the background of the working area, and the parameterized environment settings let the system be used conveniently in many different environments.
4. Camera calibration is simple and accurate. The system needs to be calibrated only once, at installation; the calibration template is a cheap, portable planar template; the circularly symmetric distribution of the feature points helps reduce the influence of radial image distortion on the computation; and the calibration program completes all the work of high-precision extraction and matching of the template feature points and solution of the parameters automatically, without manual intervention.
The present invention thus achieves intuitive, accurate, real-time contactless position input.
Description of drawings
Fig. 1 is a structural schematic of one embodiment of the invention.
Fig. 2 shows the camera calibration template and the calibration method used in the example of Fig. 1.
Fig. 3 is the optical schematic of the two cameras of the example of Fig. 1.
Fig. 4 is the flow chart of the automatic camera-calibration program of the example of Fig. 1, where (a) is the feature-point extraction flow chart and (b) is the parameter-calculation flow chart.
Fig. 5 is the flow chart of the indicated-position analysis program of the example of Fig. 1.
Embodiment
A preferred embodiment of the present invention is described below with reference to the drawings.
Referring to Fig. 1 and Fig. 2, this contactless position input system guided by the hand-eye relation is composed of two CCD video cameras (1), a display terminal (2), and a computer (3) fitted with a multichannel image-capture card; a planar calibration template (4) is used to calibrate the camera parameters. The computer (3) is connected to the video cameras (1) through the image-capture card and to the display terminal (2) through the display card, and is provided with an automatic camera-calibration program and an indicated-position analysis program.
The contactless position input method of the present invention is implemented on the above system guided by the hand-eye relation; its steps are as follows:
Before use, the two cameras are fixed directly in front of the user in a converging configuration; the installation must ensure that the fields of view of the left and right cameras both cover the front of the upper half of the user's body, as shown in Fig. 1.
After installation, the camera parameters are calibrated; see Fig. 2. The calibration template (4) is placed in the field of view of the cameras (1), images are acquired by the computer (3), and its built-in automatic camera-calibration program performs the calibration.
The optical principle of the two cameras is shown in Fig. 3. The focal lengths and distortion coefficients of the two converging cameras are $f_1, k_1$ and $f_2, k_2$; $O_1, o_1$ and $O_2, o_2$ are their lens optical centres and focal points; $XYZ$ is the world coordinate system, and $x_1 y_1 z_1$ and $x_2 y_2 z_2$ are the camera coordinate systems; $(X_w, Y_w, Z_w)$ are the coordinates of an object point $P$ in the world frame, and $(x_{1i}, y_{1i})$ and $(x_{2i}, y_{2i})$ are the projections of $P$ on the two image planes. The purpose of camera calibration is to determine, for each camera, the intrinsic parameters $f$ (focal length) and $k$ (lens distortion coefficient) and the extrinsic parameters, namely the rotation matrix

$$R = \begin{pmatrix} r_1 & r_2 & r_3 \\ r_4 & r_5 & r_6 \\ r_7 & r_8 & r_9 \end{pmatrix}$$

and the translation vector $T = (T_x, T_y, T_z)^{\top}$. The algorithm of the automatic calibration program has two parts:
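For orientation, the projection model these parameters define can be written out explicitly; the following is the standard pinhole-plus-radial-distortion formulation consistent with the notation above and with equation (7) below, not an equation reproduced verbatim from the patent:

$$\begin{pmatrix} x_c \\ y_c \\ z_c \end{pmatrix} = R \begin{pmatrix} X_w \\ Y_w \\ Z_w \end{pmatrix} + T, \qquad X_u = f\,\frac{x_c}{z_c}, \quad Y_u = f\,\frac{y_c}{z_c},$$

$$X_d (1 + k r^2) = X_u, \qquad Y_d (1 + k r^2) = Y_u, \qquad r^2 = X_d^2 + Y_d^2,$$

where $(x_c, y_c, z_c)$ is the point in camera coordinates, $(X_u, Y_u)$ its ideal undistorted image, and $(X_d, Y_d)$ the distorted image actually observed.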
The first part is the extraction of the calibration feature points; see Fig. 4(a). As shown in Fig. 2, the invention uses the intersections of the radial line segments drawn on the template surface with the circles as feature points; to extract them with high precision, image processing is combined here with characteristic-curve fitting.
First, sample points of the projected curves of the circles and the lines on the camera image plane are extracted. Since a circle projects to an ellipse while a straight line still projects to a straight line, the ellipse equation (1) and the line equation (2) are fitted to the respective sample points by least squares:
$$A x^2 + B x y + C y^2 + D x + E y + 1 = 0 \qquad (1)$$

$$y = m x + b \qquad (2)$$
Solving the two fitted equations simultaneously yields the intersection coordinates $(X_{fi}, Y_{fi})$, which are the feature-point image coordinates at sub-pixel precision.
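A minimal sketch of this fit-and-intersect computation, using only numpy; the function names are illustrative, and the edge detector that supplies the sample points is assumed, since the patent does not specify it:

```python
import numpy as np

def fit_ellipse(pts):
    """Fit A x^2 + B xy + C y^2 + D x + E y + 1 = 0 by linear least squares."""
    x, y = pts[:, 0], pts[:, 1]
    design = np.column_stack([x * x, x * y, y * y, x, y])
    coeffs, *_ = np.linalg.lstsq(design, -np.ones(len(pts)), rcond=None)
    return coeffs  # (A, B, C, D, E)

def fit_line(pts):
    """Fit y = m x + b by linear least squares."""
    x, y = pts[:, 0], pts[:, 1]
    (m, b), *_ = np.linalg.lstsq(np.column_stack([x, np.ones_like(x)]), y, rcond=None)
    return m, b

def intersections(ellipse, line):
    """Substitute y = m x + b into the conic; a quadratic in x remains."""
    A, B, C, D, E = ellipse
    m, b = line
    qa = A + B * m + C * m * m
    qb = B * b + 2 * C * m * b + D + E * m
    qc = C * b * b + E * b + 1.0
    roots = np.roots([qa, qb, qc])            # up to two intersection points
    return [(r.real, m * r.real + b) for r in roots if abs(r.imag) < 1e-9]
```

Because the constant term of equation (1) is normalized to 1, the ellipse fit reduces to a single linear least-squares solve, which is what allows the extraction to run without manual intervention.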
The second part is the parameter calculation; see Fig. 4(b). It uses the feature-point coordinates obtained above and proceeds in two steps.

The first step:
1. Let the world coordinates corresponding to the feature point $(X_{fi}, Y_{fi})$ be $(x_{wi}, y_{wi})$, let $(X_c, Y_c)$ be the image-centre coordinates, and let $(N_x, N_y)$ be the number of pixels per unit distance on the image plane. Compute

$$X_{di} = (X_{fi} - X_c)/N_x, \qquad Y_{di} = (Y_{fi} - Y_c)/N_y \qquad (3)$$
2. For each feature point $P_i$, write one equation, giving the system of $N$ equations

$$\begin{pmatrix} x_{wi} Y_{di} & y_{wi} Y_{di} & Y_{di} & -x_{wi} X_{di} & -y_{wi} X_{di} \end{pmatrix} \begin{pmatrix} r_1/T_y \\ r_2/T_y \\ T_x/T_y \\ r_4/T_y \\ r_5/T_y \end{pmatrix} = X_{di} \qquad (4)$$

with $i = 1, 2, \ldots, N$. Solving this overdetermined system ($N > 5$) by least squares yields the variables

$$r_1' = r_1/T_y, \quad r_2' = r_2/T_y, \quad T_x' = T_x/T_y, \quad r_4' = r_4/T_y, \quad r_5' = r_5/T_y.$$
3. Use the orthogonality of $R$ to compute $T_y$ and $r_1$ through $r_9$:

$$T_y^2 = \frac{S_r - \left[ S_r^2 - 4 (r_1' r_5' - r_4' r_2')^2 \right]^{1/2}}{2 (r_1' r_5' - r_4' r_2')^2} \qquad (5)$$

where $S_r = r_1'^2 + r_2'^2 + r_4'^2 + r_5'^2$; the sign of $T_y$ is determined from the imaging geometry.

Using the orthogonality of $R$ and the right-handed-system property (the world coordinate frame is right-handed), $R$ can be computed as

$$R = \begin{pmatrix} r_1 & r_2 & (1 - r_1^2 - r_2^2)^{1/2} \\ r_4 & r_5 & s\,(1 - r_4^2 - r_5^2)^{1/2} \\ r_7 & r_8 & r_9 \end{pmatrix} \quad\text{or}\quad R = \begin{pmatrix} r_1 & r_2 & -(1 - r_1^2 - r_2^2)^{1/2} \\ r_4 & r_5 & -s\,(1 - r_4^2 - r_5^2)^{1/2} \\ -r_7 & -r_8 & r_9 \end{pmatrix}$$

where $s = -\operatorname{sgn}(r_1 r_4 + r_2 r_5)$ and $r_7, r_8, r_9$ are obtained as the cross product of the first two rows of the matrix. Which of the two candidates for $R$ is correct is determined by trial.
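A compact sketch of this first stage, following equations (3)-(5); the sign of $T_y$ and the trial-based choice between the two candidates for $R$ are omitted, and all names are illustrative:

```python
import numpy as np

def calibrate_stage1(img_pts, world_pts, centre, n_pix):
    """img_pts: (N,2) feature-point pixel coords; world_pts: (N,2) template
    coords (template plane taken as Zw = 0); centre: image centre (Xc, Yc);
    n_pix: pixels per unit distance (Nx, Ny)."""
    Xd = (img_pts[:, 0] - centre[0]) / n_pix[0]        # equation (3)
    Yd = (img_pts[:, 1] - centre[1]) / n_pix[1]
    xw, yw = world_pts[:, 0], world_pts[:, 1]

    # Equation (4): one row per feature point, unknowns scaled by 1/Ty;
    # solved as an overdetermined system (N > 5) by least squares.
    M = np.column_stack([xw * Yd, yw * Yd, Yd, -xw * Xd, -yw * Xd])
    r1p, r2p, Txp, r4p, r5p = np.linalg.lstsq(M, Xd, rcond=None)[0]

    # Equation (5): |Ty| from the orthonormality of the first two rows of R.
    Sr = r1p**2 + r2p**2 + r4p**2 + r5p**2
    det2 = (r1p * r5p - r4p * r2p) ** 2
    Ty = np.sqrt((Sr - np.sqrt(Sr**2 - 4.0 * det2)) / (2.0 * det2))

    r1, r2, Tx, r4, r5 = (v * Ty for v in (r1p, r2p, Txp, r4p, r5p))
    s = -np.sign(r1 * r4 + r2 * r5)
    row1 = np.array([r1, r2, np.sqrt(max(0.0, 1 - r1**2 - r2**2))])
    row2 = np.array([r4, r5, s * np.sqrt(max(0.0, 1 - r4**2 - r5**2))])
    R = np.vstack([row1, row2, np.cross(row1, row2)])  # right-handed 3rd row
    return R, Tx, Ty
```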
The second step: for each feature point $P_i$ compute

$$y_i = r_4 x_{wi} + r_5 y_{wi} + T_y, \qquad x_i = r_1 x_{wi} + r_2 y_{wi} + T_x,$$

and let $w_i = r_7 x_{wi} + r_8 y_{wi}$. Ignoring lens distortion for the moment, each point gives one linear equation in $f$ and $T_z$:

$$\begin{pmatrix} y_i & -(Y_{fi} - Y_c)/N_y \end{pmatrix} \begin{pmatrix} f \\ T_z \end{pmatrix} = \frac{(Y_{fi} - Y_c)}{N_y}\, w_i \qquad (6)$$

Solving these equations ($i = 1, 2, \ldots, N$) yields the effective focal length $f$ and the component $T_z$ of the translation vector $T$. Taking these values, together with the assumption $k = 0$, as an initial estimate, an optimization algorithm then solves the nonlinear system

$$Y (1 + k r^2) = f\, \frac{r_4 x_{wi} + r_5 y_{wi} + T_y}{r_7 x_{wi} + r_8 y_{wi} + T_z}, \qquad X (1 + k r^2) = f\, \frac{r_1 x_{wi} + r_2 y_{wi} + T_x}{r_7 x_{wi} + r_8 y_{wi} + T_z} \qquad (7)$$

for the exact values of $f$, $T_z$, and $k$.
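A sketch of this second stage, pairing the linear solve of equation (6) with a nonlinear refinement of equation (7); `scipy.optimize.least_squares` stands in for the unspecified optimization algorithm, and R, Tx, Ty are assumed to come from the first stage above:

```python
import numpy as np
from scipy.optimize import least_squares

def calibrate_stage2(R, Tx, Ty, img_pts, world_pts, centre, n_pix):
    Xd = (img_pts[:, 0] - centre[0]) / n_pix[0]
    Yd = (img_pts[:, 1] - centre[1]) / n_pix[1]
    xw, yw = world_pts[:, 0], world_pts[:, 1]

    y_i = R[1, 0] * xw + R[1, 1] * yw + Ty   # r4 xw + r5 yw + Ty
    x_i = R[0, 0] * xw + R[0, 1] * yw + Tx   # r1 xw + r2 yw + Tx
    w_i = R[2, 0] * xw + R[2, 1] * yw        # r7 xw + r8 yw

    # Equation (6): linear least-squares solve for f and Tz with k = 0.
    A = np.column_stack([y_i, -Yd])
    f0, Tz0 = np.linalg.lstsq(A, w_i * Yd, rcond=None)[0]

    # Equation (7): refine (f, Tz, k) against both image coordinates.
    def residuals(p):
        f, Tz, k = p
        r2 = Xd**2 + Yd**2
        z = w_i + Tz
        return np.concatenate([Xd * (1 + k * r2) - f * x_i / z,
                               Yd * (1 + k * r2) - f * y_i / z])

    f, Tz, k = least_squares(residuals, x0=[f0, Tz0, 0.0]).x
    return f, Tz, k
```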
After the cameras are calibrated, the indicated-position analysis program on the computer (3) is started and the environment parameters are set; the system then begins to work. Referring to Fig. 5, the analysis program proceeds as follows:
1. Compute the vertical gray projection of the red component $I_n(x, y)$ of the image (of size $M \times N$),

$$PV(x) = \sum_{y=1}^{N} I(x, y),$$

and take the left and right borders of the peak of the curve $PV$ as the left and right borders of the face; within the face region, use an edge-grouping algorithm to locate the eyes precisely and compute the image coordinate $Pe(x, y)$ of the midpoint of the line joining them;
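A minimal sketch of this projection step; the threshold rule used to delimit the peak is an assumption, since the patent only states that the borders of the peak are taken as the borders of the face:

```python
import numpy as np

def face_bounds(red, frac=0.5):
    """red: 2-D red-channel image. Returns the (x_left, x_right) columns
    bounding the peak of the vertical gray projection PV(x)."""
    pv = red.sum(axis=0).astype(np.float64)           # PV(x) = sum_y I(x, y)
    thresh = pv.min() + frac * (pv.max() - pv.min())  # hypothetical threshold
    above = np.flatnonzero(pv > thresh)               # columns inside the peak
    return int(above[0]), int(above[-1])
```

Within these borders the eyes are then located by the edge-grouping algorithm, and $Pe(x, y)$ is taken as the midpoint of the segment joining them.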
2. Determine the approximate region of the finger from the physiological structure of the human body; within this region, use a skin-colour extraction algorithm to segment a binary finger mask, fit a straight line to the white pixels of the mask, and take the coordinate of the foremost point, $Pf(x, y)$, as the fingertip coordinate;
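A sketch of this fingertip step; the RGB skin rule and the assumption that the finger points toward increasing x are illustrative stand-ins, since the patent does not specify its skin-colour extraction algorithm:

```python
import numpy as np

def fingertip(img, region):
    """img: (H, W, 3) uint8 RGB image; region: (y0, y1, x0, x1) box where
    the finger is expected from body geometry. Returns Pf in image coords."""
    y0, y1, x0, x1 = region
    roi = img[y0:y1, x0:x1].astype(np.int32)
    r, g, b = roi[..., 0], roi[..., 1], roi[..., 2]
    mask = (r > 95) & (g > 40) & (b > 20) & (r > g) & (r > b)  # heuristic rule

    ys, xs = np.nonzero(mask)                # white pixels of the binary mask
    m, c = np.polyfit(xs, ys, 1)             # least-squares line through them
    tip_x = xs.max()                         # foremost point, finger pointing +x
    return x0 + tip_x, y0 + m * tip_x + c    # Pf(x, y)
```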
3. By the imaging principle, the projections $(x_1, y_1)$ and $(x_2, y_2)$ of a spatial point $(X, Y, Z)$ on the left and right image planes satisfy

$$\begin{aligned} (x_1 r_{17} - f_1 r_{11})X + (x_1 r_{18} - f_1 r_{12})Y + (x_1 r_{19} - f_1 r_{13})Z &= f_1 T_{1x} - x_1 T_{1z} \\ (y_1 r_{17} - f_1 r_{14})X + (y_1 r_{18} - f_1 r_{15})Y + (y_1 r_{19} - f_1 r_{16})Z &= f_1 T_{1y} - y_1 T_{1z} \end{aligned}$$

and

$$\begin{aligned} (x_2 r_{27} - f_2 r_{21})X + (x_2 r_{28} - f_2 r_{22})Y + (x_2 r_{29} - f_2 r_{23})Z &= f_2 T_{2x} - x_2 T_{2z} \\ (y_2 r_{27} - f_2 r_{24})X + (y_2 r_{28} - f_2 r_{25})Y + (y_2 r_{29} - f_2 r_{26})Z &= f_2 T_{2y} - y_2 T_{2z} \end{aligned}$$

where $r_{1j}$ and $r_{2j}$ denote the rotation-matrix entries $r_j$ of cameras 1 and 2, and $T_1, T_2$ their translation vectors.
Using the calibrated camera parameters, solve these four simultaneous equations for the eye-line midpoint and for the fingertip point separately, obtaining their three-dimensional coordinates $Pe(X, Y, Z)$ and $Pf(X, Y, Z)$. From the pre-entered environment parameters, establish the equation of the plane containing the display terminal (2), solve it simultaneously with the equation of the spatial line joining Pe and Pf, and obtain the intersection $Pt(X, Y, Z)$, which is exactly the target point the person indicates;
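The triangulation and the line-plane intersection of this step amount to two small linear solves; a sketch, with each camera's calibrated parameters packed into a dictionary for illustration and the display plane written as $n \cdot X = d$:

```python
import numpy as np

def triangulate(p1, cam1, p2, cam2):
    """p1, p2: (x, y) image coords of the same point; cam: dict with keys
    'f' (focal length), 'R' (3x3 rotation), 'T' (3-vector translation)."""
    rows, rhs = [], []
    for (x, y), c in ((p1, cam1), (p2, cam2)):
        f, R, T = c['f'], c['R'], c['T']
        rows.append(x * R[2] - f * R[0]); rhs.append(f * T[0] - x * T[2])
        rows.append(y * R[2] - f * R[1]); rhs.append(f * T[1] - y * T[2])
    # Four equations, three unknowns: least-squares solution (X, Y, Z).
    return np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)[0]

def target_point(pe, pf, n, d):
    """Intersect the line Pe + t (Pf - Pe) with the plane n . X = d."""
    direction = pf - pe
    t = (d - n @ pe) / (n @ direction)
    return pe + t * direction                # Pt, the indicated target point
```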
4. Use a block-matching algorithm to track the points matching Pe and Pf in the next frame image $I_{n+1}(x, y)$, and execute steps 3 and 4 in a loop, continuously outputting the target-point coordinate $Pt(X, Y, Z)$.
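A sketch of this tracking step as exhaustive block matching with the sum of absolute differences; the block and search-window sizes are illustrative choices, not values from the patent:

```python
import numpy as np

def block_match(prev, curr, pt, block=8, search=16):
    """Find the position in frame `curr` whose block best matches the block
    centred at `pt` in frame `prev` (both 2-D grayscale arrays)."""
    x, y = int(pt[0]), int(pt[1])
    ref = prev[y - block:y + block, x - block:x + block].astype(np.int32)
    best, best_sad = (x, y), np.inf
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            cx, cy = x + dx, y + dy
            cand = curr[cy - block:cy + block, cx - block:cx + block].astype(np.int32)
            if cand.shape != ref.shape:      # window fell off the image
                continue
            sad = np.abs(cand - ref).sum()   # sum of absolute differences
            if sad < best_sad:
                best, best_sad = (cx, cy), sad
    return best                              # matched position of Pe or Pf
```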

Claims (6)

1. A contactless position input system guided by the hand-eye relation, composed of two CCD video cameras (1), a large-scale display terminal (2), and a computer (3) fitted with a multichannel image-capture card, the computer (3) being connected to the video cameras (1) through the image-capture card and to the large-scale display terminal (2) through the display card; characterized in that:
1) the two CCD video cameras (1) are fixed directly in front of the user in a converging configuration, installed so that the fields of view of the left and right CCD video cameras (1) both cover the front of the upper half of the user's body;
2) for calibration before use, a planar calibration template (4) is placed in the field of view of the two CCD video cameras (1); the computer (3) calibrates the camera parameters from the images of the calibration template (4) acquired by the two CCD video cameras (1); during measurement the calibration template (4) is removed, and the computer (3) solves for and outputs the target-point coordinates, i.e. the contactless indicated-position coordinates, from the preset environment parameters, the calibrated camera parameters, and the camera images containing the user's hand and eyes.
2. The contactless position input system guided by the hand-eye relation according to claim 1, characterized in that the planar calibration template (4) consists of two concentric circles and four straight lines through the common centre dividing the circumference into equal parts, accurately drawn on a smooth flat plate.
3. The contactless position input system guided by the hand-eye relation according to claim 1, characterized in that the large-scale display terminal (2) is a large-screen projector, an exhibition sand table, a three-dimensional model, or a spatial object.
4. A contactless position input method guided by the hand-eye relation, using the contactless position input system guided by the hand-eye relation of claim 1, characterized in that the method proceeds as follows:
1. after the two CCD video cameras (1) and the planar calibration template are installed and positioned, the computer (3) starts an automatic camera-calibration program and calibrates the camera parameters from the acquired images of the planar calibration template (4);
2. environment parameters are preset and the equation of the plane containing the display terminal (2) is established;
3. during measurement the calibration template (4) is removed; the computer (3) starts an indicated-position analysis program which, from the preset environment parameters, the calibrated camera parameters, and the first frame acquired by the video cameras (1) containing the user's hand and eyes, solves for and outputs the indicated-position coordinates; it then reads in the next frame containing the user's hand and eyes and solves and outputs again, looping in this way to track the indicated position.
5. The contactless position input method guided by the hand-eye relation according to claim 4, characterized in that the automatic camera-calibration program has the following steps:
1. extraction of the calibration feature points, taking the intersections of the radial lines drawn on the planar calibration template (4) with the circles as feature points:
A) read in an image of the planar calibration template (4),
B) extract sample points of the projected curves of the circles and lines on the camera image plane (a circle projects to an ellipse, while a straight line still projects to a straight line),
C) fit an ellipse equation and a line equation to the respective sample points by least squares, and solve the two fitted equations simultaneously for the ellipse-line intersection coordinates;
2. calculation of the camera calibration parameters:
A) from the intersection coordinates obtained, compute the extrinsic camera parameters, namely the rotation matrix R and the translation vector T,
B) then compute the intrinsic camera parameters, namely the focal length f and the lens distortion coefficient k.
6. The contactless position input method guided by the hand-eye relation according to claim 4, characterized in that the indicated-position analysis program has the following steps:
1. read in a frame acquired by the video cameras (1) containing the user's hand and eyes;
2. locate the midpoint coordinate Pe between the eyes and the index fingertip coordinate Pf;
3. take the straight line connecting the eye midpoint and the fingertip as the person's sighting axis, and determine the target position indicated by the finger:
A) using the calibrated camera parameters, solve the four simultaneous projection equations for the eye-line midpoint and for the fingertip point separately, obtaining their three-dimensional coordinates Pe(X, Y, Z) and Pf(X, Y, Z),
B) form the equation of the spatial line joining Pe and Pf,
C) from the pre-entered environment parameters, establish the equation of the plane containing the display terminal (2),
D) solve the spatial line equation through Pe and Pf simultaneously with the plane equation of the display terminal, and solve for and output the indicated target-point coordinates;
4. read in the next frame image and repeat steps 2-3, tracking the indicated position.
CNB2006100294833A 2006-07-28 2006-07-28 System and method of contactless position input by hand and eye relation guiding Expired - Fee Related CN100432897C (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CNB2006100294833A CN100432897C (en) 2006-07-28 2006-07-28 System and method of contactless position input by hand and eye relation guiding


Publications (2)

Publication Number Publication Date
CN1904806A (en) 2007-01-31
CN100432897C CN100432897C (en) 2008-11-12

Family

ID=37674078

Family Applications (1)

Application Number Title Priority Date Filing Date
CNB2006100294833A Expired - Fee Related CN100432897C (en) 2006-07-28 2006-07-28 System and method of contactless position input by hand and eye relation guiding

Country Status (1)

Country Link
CN (1) CN100432897C (en)


Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030048280A1 (en) * 2001-09-12 2003-03-13 Russell Ryan S. Interactive environment using computer vision and touchscreens
CN1174337C (en) * 2002-10-17 2004-11-03 南开大学 Apparatus and method for identifying gazing direction of human eyes and its use
CN100409157C (en) * 2002-12-23 2008-08-06 皇家飞利浦电子股份有限公司 Non-contact inputting devices
CN1770175A (en) * 2004-11-05 2006-05-10 上海乐金广电电子有限公司 Human computer interaction apparatus and method

Cited By (31)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102667677A (en) * 2009-12-14 2012-09-12 阿尔卡特朗讯 A user-interface apparatus and method for user control
CN103370672B * 2011-01-05 2016-10-05 高通股份有限公司 Method and apparatus for tracking orientation of a user
US9436286B2 (en) 2011-01-05 2016-09-06 Qualcomm Incorporated Method and apparatus for tracking orientation of a user
CN103370672A (en) * 2011-01-05 2013-10-23 高通股份有限公司 Method and apparatus for tracking orientation of a user
CN103347437A (en) * 2011-02-09 2013-10-09 普莱姆森斯有限公司 Gaze detection in a 3d mapping environment
CN103347437B (en) * 2011-02-09 2016-06-08 苹果公司 Gaze detection in 3D mapping environment
GB2488785A (en) * 2011-03-07 2012-09-12 Sharp Kk A method of user interaction with a device in which a cursor position is calculated using information from tracking part of the user (face) and an object
WO2013098827A1 (en) * 2011-12-27 2013-07-04 Hewlett-Packard Development Company, L.P. User interface device
CN104040461A (en) * 2011-12-27 2014-09-10 惠普发展公司,有限责任合伙企业 User interface device
GB2511973A (en) * 2011-12-27 2014-09-17 Hewlett Packard Development Co User interface device
CN103294173A (en) * 2012-02-24 2013-09-11 冠捷投资有限公司 Remote control system based on user actions and method thereof
CN104471511B * 2012-03-13 2018-04-20 视力移动技术有限公司 Apparatus, user interface and method for identifying a pointing gesture
CN104471511A (en) * 2012-03-13 2015-03-25 视力移动技术有限公司 Touch free user interface
CN102841679B (en) * 2012-05-14 2015-02-04 乐金电子研发中心(上海)有限公司 Non-contact man-machine interaction method and device
CN102841679A (en) * 2012-05-14 2012-12-26 乐金电子研发中心(上海)有限公司 Non-contact man-machine interaction method and device
CN104239877A (en) * 2013-06-19 2014-12-24 联想(北京)有限公司 Image processing method and image acquisition device
CN103793060A (en) * 2014-02-14 2014-05-14 杨智 User interaction system and method
CN103925879A (en) * 2014-04-24 2014-07-16 中国科学院合肥物质科学研究院 Indoor robot vision hand-eye relation calibration method based on 3D image sensor
CN104407692A (en) * 2014-09-30 2015-03-11 深圳市亿思达科技集团有限公司 Hologram image interaction type display method based on ultrasonic wave, control method and system
CN104407692B (en) * 2014-09-30 2018-09-07 深圳市亿思达科技集团有限公司 Hologram image interactive display method, control method and system based on ultrasound
CN105700674A (en) * 2014-12-10 2016-06-22 现代自动车株式会社 Gesture recognition apparatus, vehicle having the same, and method for controlling the vehicle
CN104656903A (en) * 2015-03-04 2015-05-27 联想(北京)有限公司 Processing method for display image and electronic equipment
CN106020478A (en) * 2016-05-20 2016-10-12 青岛海信电器股份有限公司 Intelligent terminal manipulation method, intelligent terminal manipulation apparatus and intelligent terminal
CN106020478B * 2016-05-20 2019-09-13 青岛海信电器股份有限公司 Intelligent terminal control method and device, and intelligent terminal
CN107545591A * 2016-06-29 2018-01-05 沈阳新松机器人自动化股份有限公司 Robot hand-eye calibration method based on a six-point contact method
CN107545591B (en) * 2016-06-29 2021-04-06 沈阳新松机器人自动化股份有限公司 Robot hand-eye calibration method based on six-point contact method
CN106980377A * 2017-03-29 2017-07-25 京东方科技集团股份有限公司 Interactive system for three-dimensional space and operating method thereof
WO2018176773A1 (en) * 2017-03-29 2018-10-04 京东方科技集团股份有限公司 Interactive system for three-dimensional space and operation method therefor
US10936053B2 (en) 2017-03-29 2021-03-02 Boe Technology Group Co., Ltd. Interaction system of three-dimensional space and method for operating same
CN107656637A * 2017-08-28 2018-02-02 哈尔滨拓博科技有限公司 Automatic calibration method for a projection keyboard using four manually selected points
CN107656637B * 2017-08-28 2018-06-05 哈尔滨拓博科技有限公司 Automatic calibration method for a projection keyboard using four manually selected points

Also Published As

Publication number Publication date
CN100432897C (en) 2008-11-12

Similar Documents

Publication Publication Date Title
CN1904806A (en) System and method of contactless position input by hand and eye relation guiding
CN111238374B (en) Three-dimensional model construction and measurement method based on coordinate measurement
CN108885487B (en) Gesture control method of wearable system and wearable system
CN102589516B (en) Dynamic distance measuring system based on binocular line scan cameras
CN102243687A (en) Physical education teaching auxiliary system based on motion identification technology and implementation method of physical education teaching auxiliary system
CN1743806A (en) Moving-object height determining apparatus
WO2021185214A1 (en) Method for long-distance calibration in 3d modeling
CN108470373A Infrared-based 3D four-dimensional data acquisition method and device
CN101639747A (en) Spatial three-dimensional positioning method
CN104036488A (en) Binocular vision-based human body posture and action research method
CN112016570B (en) Three-dimensional model generation method for background plate synchronous rotation acquisition
CN104933704B Three-dimensional stereo scanning method and system
CN111060008B (en) 3D intelligent vision equipment
CN1220866C Method for calibrating lens distortion parameters
WO2021185215A1 (en) Multi-camera co-calibration method in 3d modeling
CN108074265A Vision-recognition-based tennis positioning system, method and device
CN110954555A (en) WDT 3D vision detection system
CN205466320U Intelligent manipulator based on multiple lenses
WO2023280082A1 (en) Handle inside-out visual six-degree-of-freedom positioning method and system
CN107990825B (en) High-precision position measuring device and method based on priori data correction
CN106886758A Insect recognition device and method based on three-dimensional pose estimation
CN112254638B Intelligent visual 3D information acquisition device with pitch adjustment
CN112082486B (en) Handheld intelligent 3D information acquisition equipment
CN103425355B Portable optical touch screen with omnidirectional camera structure and positioning calibration method thereof
CN214410073U (en) Three-dimensional detection positioning system combining industrial camera and depth camera

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
C17 Cessation of patent right
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20081112

Termination date: 20110728