CN101482782A - Cursor positioning system and method - Google Patents

Cursor positioning system and method

Info

Publication number
CN101482782A
CN101482782A (application numbers CN200910046025A, CNA2009100460254A)
Authority
CN
China
Prior art keywords
image
unit
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CNA2009100460254A
Other languages
Chinese (zh)
Inventor
袁鸿军
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Individual
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual filed Critical Individual
Priority to CNA2009100460254A
Publication of CN101482782A
Pending legal-status Critical Current

Landscapes

  • Position Input By Displaying (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The present invention discloses a cursor positioning system and a positioning method thereof. The positioning method drives the movement of a cursor through relative three-dimensional motion between an image pickup unit and a photographed object. An arbitrarily photographed object serves as the instantaneous measurement reference for the three-dimensional motion of the image pickup unit. The three-dimensional motion of the image pickup unit causes two-dimensional motion of the image points of objects photographed at random within its field of view, and the cursor is driven according to that two-dimensional motion. The cursor positioning system comprises the image pickup unit, an image processing and resolving unit, a data transmitting unit, a data receiving unit and a processing unit. Because the invention uses the motion of the camera relative to the photographed object to drive the cursor, the user can operate the system freely in space without relying on a flat tabletop for support.

Description

Cursor positioning system and positioning method thereof
Technical Field
The invention belongs to the technical field of electronic information, relates to a positioning system and a positioning method thereof, and particularly relates to a cursor positioning system and a positioning method thereof.
Background
Existing computer mice are generally classified by structure into mechanical mice and optical mice.
When it first appeared, the mechanical mouse judged the direction of movement by sliding potentiometers, so its sensitivity was low and wear was high. As technology progressed, the mechanical mouse absorbed some designs from the optical mouse and developed from a purely mechanical structure into an opto-mechanical mouse, which uses an encoder instead of the purely mechanical mechanism and a structure in which one rolling ball leans against two rotating shafts. After years of development, the opto-mechanical mouse has become the most mature mouse in the prior art; it has a simple structure and low cost and still holds a certain share of the market.
The optical mouse was invented by Dick Lyon and Steve Kirsch in 1981. This ball-less mouse uses optical positioning, but the earliest optical mice could only be used with a special mouse pad, which caused considerable inconvenience. As technology progressed, the optical mouse finally abandoned the pad: in operation it emits a red light beam onto the desktop and judges the movement of the mouse from the motion of color differences or surface relief reflected back from the desktop. The relatively high precision of optical mice, combined with their light weight and freedom from periodic cleaning, meant they were first adopted in design fields requiring precise positioning. Their cost has since fallen, and they are gradually becoming widespread.
Both types of mouse share the defect that they must be placed on a flat contact surface, and the cursor is moved through contact with that surface, so the user has to operate within a confined space; this readily leads to occupational strain and adversely affects the user's health.
In addition, there are air mice implemented with gyroscopes and accelerometers, but these are too expensive.
Disclosure of Invention
The technical problem to be solved by the invention is to provide a cursor positioning system that can be used freely in space.
In addition, the invention also provides a cursor positioning method of the cursor positioning system.
In order to solve the technical problems, the invention adopts the following technical scheme:
a cursor positioning method drives the movement of a cursor through relative three-dimensional motion between an image pickup unit and a photographed object; the three-dimensional motion of the image pickup unit causes two-dimensional motion of the image points of objects photographed at random in its field of view; and the cursor is driven according to the two-dimensional motion of the image point of the randomly photographed object.
As a preferable aspect of the present invention, the randomly photographed object serves as the instantaneous measurement reference of the three-dimensional motion of the image pickup unit; the two-dimensional motion of the image point of the randomly photographed object is acquired as follows: record the position of the image point of the instantaneous measurement reference in the i-th frame image; search for that image point in the (i+1)-th frame image by a template matching method; and compute the two-dimensional motion of the image point from the positions of the two image points.
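The matching search in this preferred aspect can be pictured with a short sketch; the following is a minimal illustration, assuming OpenCV (cv2) for the template matching and two already-captured grayscale frames, with the window size `half` and the function name as illustrative choices rather than values from the patent:

```python
import cv2
import numpy as np

def track_reference_point(frame_i: np.ndarray, frame_i1: np.ndarray, half: int = 32):
    """Locate the image point of the instantaneous measurement reference
    (taken here at the center of frame i) in frame i+1 by template
    matching, and return its two-dimensional motion (du, dv)."""
    h, w = frame_i.shape
    cx, cy = w // 2, h // 2                        # reference image point in frame i
    tpl = frame_i[cy - half:cy + half, cx - half:cx + half]
    res = cv2.matchTemplate(frame_i1, tpl, cv2.TM_CCOEFF_NORMED)
    _, _, _, max_loc = cv2.minMaxLoc(res)          # top-left corner of best match
    return (max_loc[0] + half - cx, max_loc[1] + half - cy)
```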
As a preferable aspect of the present invention, the image pickup unit is a camera; a spatial point P(0, 0, Z) on the Z-axis (optical axis) of the camera is selected as the instantaneous measurement reference of the camera's three-dimensional motion; the image point of the instantaneous measurement reference is located at the center of the image plane.
As a preferable aspect of the present invention, the randomly photographed object serves as the instantaneous measurement reference of the three-dimensional motion of the image pickup unit; the method comprises the following steps:
(1) the user controls the image pickup unit to capture images and send the captured i-th frame image to an image processing and resolving unit; the value of i is incremented by 1 with each capture;
(2) the image processing and resolving unit acquires the image point of the instantaneous measurement reference and assigns its position to position C;
(3) the image processing and resolving unit acquires the (i+1)-th frame image from the image pickup unit;
(4) the image processing and resolving unit searches, by matching, for the image point of the instantaneous measurement reference in the (i+1)-th frame image;
(5) if the image point of the instantaneous measurement reference is successfully found by matching, go to step (6); otherwise, go to step (1);
(6) assign the best matching position found to position D;
(7) calculate the position offset vector V' = D - C;
(8) calculate the cursor position Q = Q' + V; where Q is the cursor position after the current movement, Q' is the cursor position before the current movement, V is the cursor position compensation, V = k × V', and k is a scaling factor.
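As a small worked illustration of steps (7) and (8), the following sketch turns the matched positions C and D into the new cursor position; the value k = 1.5 is an illustrative assumption:

```python
def update_cursor(Q_prev, C, D, k=1.5):
    """Q = Q' + V, with V = k * V' and position offset vector V' = D - C."""
    vx, vy = D[0] - C[0], D[1] - C[1]          # V' = D - C
    return (Q_prev[0] + k * vx, Q_prev[1] + k * vy)

print(update_cursor(Q_prev=(400.0, 300.0), C=(320, 240), D=(324, 237)))
# -> (406.0, 295.5): the cursor moves by the scaled image-point offset
```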
As a preferable mode of the present invention, when the image pickup unit captures the i-th frame image, its own coordinate system is taken as the relative coordinate system $o_0$; a point $P = [0\ 0\ z]^T$ on the z-axis of this coordinate system is taken as the instantaneous measurement reference, and the image of the instantaneous measurement reference P is located at the center $p_0(0, 0)$ of the image plane.
After the image pickup unit has moved by (R, t), a new coordinate system $o_1$ is formed; the (i+1)-th frame image is captured at this moment, and the image of the instantaneous measurement reference P has moved to $p_1(u, v)$. By the linear camera imaging model:

$$s\begin{bmatrix}u\\v\\1\end{bmatrix}=\begin{bmatrix}A&0\end{bmatrix}\begin{bmatrix}R&t\\0&1\end{bmatrix}\begin{bmatrix}X\\1\end{bmatrix}=A(RX+t)=\begin{bmatrix}f&0&0\\0&f&0\\0&0&1\end{bmatrix}\left(\begin{bmatrix}1&\gamma&-\beta\\-\gamma&1&\alpha\\\beta&-\alpha&1\end{bmatrix}\begin{bmatrix}0\\0\\z\end{bmatrix}+\begin{bmatrix}t_1\\t_2\\t_3\end{bmatrix}\right)$$

namely:

$$s\begin{bmatrix}u/f\\v/f\\1\end{bmatrix}=\begin{bmatrix}-\beta\\\alpha\\1\end{bmatrix}z+\begin{bmatrix}t_1\\t_2\\t_3\end{bmatrix};\qquad \frac{u}{f}=\frac{-\beta z+t_1}{z+t_3},\qquad \frac{v}{f}=\frac{\alpha z+t_2}{z+t_3}$$

where $A=\begin{bmatrix}f&0&0\\0&f&0\\0&0&1\end{bmatrix}$ is the camera intrinsic parameter matrix; $R=\begin{bmatrix}1&\gamma&-\beta\\-\gamma&1&\alpha\\\beta&-\alpha&1\end{bmatrix}$ is the camera rotation matrix (small-angle approximation); $t=[t_1\ t_2\ t_3]^T$ is the camera translation vector; X is the coordinate of the instantaneous measurement reference in coordinate system $o_0$; (u, v) is the coordinate of the image point of the instantaneous measurement reference in the image coordinate system; and s is a proportionality constant.
As a preferred scheme of the present invention, in step (4) the matching search uses a template matching method; in step (5), a point is considered successfully matched if the correlation function value of the template match exceeds the set threshold; and in step (6), the position with the maximum correlation function value is taken as the best matching position.
A cursor positioning method drives the movement of a cursor through relative three-dimensional motion between an image pickup unit and a photographed object; the image pickup unit is stationary, and all objects photographed at random in its field of view move rigidly as a whole; the rigid motion of all the randomly photographed objects as a whole causes overall motion of their image points in the field of view of the image pickup unit; the image pickup unit serves as the instantaneous measurement reference of the rigid motion; and the movement of the cursor is driven according to the two-dimensional motion of the image of the randomly photographed objects.
A cursor positioning system comprises an image pickup unit, an image processing and resolving unit, a data transmitting unit, a data receiving unit and a processing unit. The image pickup unit is used for capturing images and sending each captured frame image to the image processing and resolving unit. The image processing and resolving unit is used for processing the relative three-dimensional motion between the image pickup unit and the photographed object into cursor position compensation data and transmitting it to the data transmitting unit; the three-dimensional motion of the image pickup unit causes motion of the image of a randomly photographed object in its field of view, and the movement of the cursor is driven according to the two-dimensional motion of that image. The data transmitting unit is used for sending the calculated cursor position compensation to the data receiving unit. The data receiving unit is used for receiving the cursor position compensation sent by the data transmitting unit and forwarding it to the processing unit. The processing unit is used for driving the cursor to be displayed at the corresponding position.
As a preferable aspect of the present invention, the randomly photographed object serves as an instantaneous measurement reference of the three-dimensional motion of the image pickup unit.
As a preferable scheme of the present invention, the image pickup unit sends the captured i-th frame image to the image processing and resolving unit, and the value of i is incremented by 1 with each capture; the image processing and resolving unit acquires the image point of the instantaneous measurement reference and assigns its position to position C; the image processing and resolving unit then searches, by matching, the (i+1)-th frame image for the image point of the instantaneous measurement reference recorded at the i-th frame; if the image point of the instantaneous measurement reference is successfully found, the best matching position is assigned to position D; the position offset vector is V' = D - C; the cursor position is Q = Q' + V, where Q is the cursor position after the current movement, Q' is the cursor position before the current movement, V is the cursor position compensation, V = k × V', and k is a scaling factor.
As a preferred embodiment of the present invention, the image pickup unit includes an image pickup device and at least one control key, and an operation function is set on the control key.
As a preferable aspect of the present invention, the image pickup unit includes a pen-shaped housing, an image pickup device arranged at the head of the housing, and at least three keys arranged on the side of the housing, the functions of which are set as the left key, right key and power switch of a conventional mouse, respectively.
As a preferable aspect of the present invention, the image pickup unit includes a ring that can be conveniently worn on a finger, an image pickup device provided on the ring, and a key unit provided separately from the ring. The key unit comprises at least three keys whose functions are set as the left key, right key and power switch of a conventional mouse, respectively. The key unit is provided on the keyboard, or it is a separate part independent of other mechanisms.
The invention has the following beneficial effects: with the cursor positioning system and positioning method disclosed by the invention, the user does not need to rely on a flat desktop and can use the system freely within a spatial range. Moreover, the invention can photograph objects at random and does not require a preset reference object to be photographed.
Drawings
Fig. 1 is a schematic diagram illustrating a cursor positioning system according to an embodiment.
Fig. 2 is a flowchart illustrating a positioning method of a cursor positioning system according to an embodiment.
Fig. 3 is a schematic diagram of the working principle of the present invention.
Fig. 4-1 and 4-2 are schematic structural views of an image capturing unit according to a fourth embodiment.
Fig. 5-1 and 5-2 are schematic structural views of an image capturing unit according to a fifth embodiment.
Fig. 6 is a schematic composition diagram of a cursor positioning system in accordance with a sixth embodiment.
Description of reference numerals:
10: cursor positioning system 11: image pickup unit
111: the housing 112: image pickup apparatus
113: the keys 114: push-button
115: key 116: image pickup apparatus
117: the ring 12: image processing and resolving unit
13: the data transmission unit 14: data receiving unit
15: the processing unit 20: electronic product
Detailed Description
Preferred embodiments of the present invention will be described in detail below with reference to the accompanying drawings.
Example one
[ working principle ]
The invention drives the cursor through the motion of the camera. Before explaining the invention in detail, its working principle, that is, how the motion of the cursor reflects the motion of the camera, is explained first.
According to photographic imaging theory, there is a definite relationship between the motion of the camera and the motion of the image of a target object. If only a target object located on the optical axis of the camera is considered, the relationship between the motion of its image and the motion of the camera is greatly simplified. On the basis of the photographic imaging principle, the relationships among the motion of the camera, the motion of the image of a target object on the camera's optical axis, and the motion of the cursor are studied, so as to realize control of the motion of the cursor.
1. The movement of the image of the target object on the optical axis of the camera and the camera movement.
Linear camera imaging model:

$$s\begin{bmatrix}u\\v\\1\end{bmatrix}=\begin{bmatrix}A&0\end{bmatrix}\begin{bmatrix}R&t\\0&1\end{bmatrix}\begin{bmatrix}X\\1\end{bmatrix}$$

where $A=\begin{bmatrix}f&0&0\\0&f&0\\0&0&1\end{bmatrix}$ is the camera intrinsic parameter matrix; $R=\begin{bmatrix}1&\gamma&-\beta\\-\gamma&1&\alpha\\\beta&-\alpha&1\end{bmatrix}$ is the camera rotation matrix (small-angle approximation); $t=[t_1\ t_2\ t_3]^T$ is the camera translation vector; X is the coordinate of the instantaneous measurement reference in coordinate system $o_0$; (u, v) is the coordinate of its image point in the image coordinate system; and s is a proportionality constant.
As shown in fig. 3, let the camera coordinate system at the moment the i-th frame is captured be the world coordinate system $o_0$ (i is an integer whose value is incremented by 1 with each capture), and take a point $P = [0\ 0\ z]^T$ on the z-axis of this coordinate system as the target point under investigation; the image of the target point P is then located at the center $p_0(0, 0)$ of the image plane. After the camera has moved by (R, t), a new camera coordinate system $o_1$ is formed; the (i+1)-th frame is captured at this moment, and the image of P has moved to $p_1(u, v)$:

$$s\begin{bmatrix}u\\v\\1\end{bmatrix}=A(RX+t)=\begin{bmatrix}f&0&0\\0&f&0\\0&0&1\end{bmatrix}\left(\begin{bmatrix}1&\gamma&-\beta\\-\gamma&1&\alpha\\\beta&-\alpha&1\end{bmatrix}\begin{bmatrix}0\\0\\z\end{bmatrix}+\begin{bmatrix}t_1\\t_2\\t_3\end{bmatrix}\right)$$

namely:

$$s\begin{bmatrix}u/f\\v/f\\1\end{bmatrix}=\begin{bmatrix}-\beta\\\alpha\\1\end{bmatrix}z+\begin{bmatrix}t_1\\t_2\\t_3\end{bmatrix};\qquad \frac{u}{f}=\frac{-\beta z+t_1}{z+t_3},\qquad \frac{v}{f}=\frac{\alpha z+t_2}{z+t_3}$$
The above equations show that the motion (R, t) of the camera between capturing frame i and frame i+1 causes the motion (u, v) of the image of the point P(0, 0, z). The motion (u, v) of the image of P(0, 0, z) is determined jointly by the motion (R, t) of the camera and by the z-coordinate of P(0, 0, z).
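A short numeric check of these relations may help; all values below (f, z, the rotation angles and the translation) are illustrative assumptions rather than values from the patent:

```python
f = 500.0                      # focal length (pixels)
z = 2000.0                     # depth of the reference point P(0, 0, z)
alpha, beta = 0.01, 0.02       # small rotation angles (radians)
t1, t2, t3 = 5.0, -3.0, 10.0   # camera translation

# u/f = (-beta*z + t1) / (z + t3);  v/f = (alpha*z + t2) / (z + t3)
u = f * (-beta * z + t1) / (z + t3)
v = f * (alpha * z + t2) / (z + t3)
print(round(u, 2), round(v, 2))   # ~ -8.71 4.23
# Note: gamma (rotation about the optical axis) does not appear, since it
# cannot move a point lying on the optical axis itself.
```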
2. Controlling the motion of the cursor.
In the image processing of this air mouse, the point P(0, 0, z) is found in two adjacent frames by a matching method. The point P(0, 0, z) is therefore a statistical quantity and has no direct correspondence to any particular point on a physically real object. Since P(0, 0, z) is computed statistically within one image window, its position is determined by all the objects in the whole statistical window. One difference from the coordinates of a point in the physical sense is that the z-coordinate of P(0, 0, z) is a slowly varying quantity that does not jump. Combining this with the equations above, the primary determinant of the motion (u, v) of the image of P(0, 0, z) is the motion (R, t) of the camera itself, and the motion of the camera is a factor entirely under the user's control. Because of the presence of z, a factor the user cannot control, the motion (u, v) of the image of P(0, 0, z) is an inexact reflection of the camera's own motion, and if the cursor is driven by (u, v) the motion of the cursor is likewise an inexact reflection of the camera's motion. Although not absolutely accurate, it nevertheless reflects the trend of the camera's motion. Driving the cursor by the motion of the camera is similar to writing with a brush: the trajectory of the hand obviously differs from the shape of the character being written, yet despite this difference one can still produce beautiful calligraphy at will.
[ SYSTEM COMPOSITION ]
As shown in fig. 1, the present invention discloses a cursor positioning system 10, which includes an image capturing unit 11, an image processing and resolving unit 12, a data transmitting unit 13, a data receiving unit 14, and a processing unit 15.
The image capturing unit 11 captures images and sends the captured frames of images to an image processing and calculating unit 12. In this embodiment, the user moves the image pickup unit 11, so that the image pickup unit 11 and the photographed object generate a relative three-dimensional motion therebetween; of course, the relative three-dimensional motion between the image capturing unit 11 and the object to be photographed may be generated by moving the object to be photographed.
The image processing and calculating unit 12 is used for processing the relative three-dimensional motion between the image capturing unit 11 and the shot object into cursor position compensation data, and then transmitting the cursor position compensation data to the data sending unit 13; the relative three-dimensional motion of the image pickup unit 11 causes motion of an image of a randomly photographed object in its field of view, which serves as an instantaneous measurement reference of the three-dimensional motion of the image pickup unit 11, and the motion of the cursor is driven in accordance with the two-dimensional motion of the image of the randomly photographed object.
The data transmitting unit 13 is configured to transmit the calculated cursor position compensation to the data receiving unit 14.
The data receiving unit 14 is used to receive the cursor position compensation sent by the data sending unit 13 and send the cursor position compensation to the processing unit 15.
The processing unit 15 is used to drive the cursor to be displayed at the corresponding position. In this embodiment, the processing unit 15 is included as a part of an electronic product 20 that includes the cursor positioning system 10.
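The division of labor among the five units can be sketched in code. The following is a minimal sketch assuming in-process function calls in place of the real wireless link between the transmitting and receiving units; the class and method names are illustrative inventions, not taken from the patent:

```python
from dataclasses import dataclass

@dataclass
class CursorCompensation:
    dx: float
    dy: float

class DataLink:
    """Stands in for the data transmitting unit 13 / data receiving unit 14 pair."""
    def __init__(self):
        self._queue = []
    def send(self, comp: CursorCompensation):      # transmitting unit 13
        self._queue.append(comp)
    def receive(self) -> CursorCompensation:       # receiving unit 14
        return self._queue.pop(0)

class ProcessingUnit:
    """Processing unit 15: applies the compensation to the cursor position."""
    def __init__(self):
        self.x, self.y = 0.0, 0.0
    def move_cursor(self, comp: CursorCompensation):
        self.x += comp.dx
        self.y += comp.dy

link, cursor = DataLink(), ProcessingUnit()
link.send(CursorCompensation(dx=4.2, dy=-1.5))     # from the resolving unit 12
cursor.move_cursor(link.receive())
print(cursor.x, cursor.y)
```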
[ POSITIONING METHOD ]
The invention discloses a cursor positioning method that drives the movement of a cursor through relative three-dimensional motion between the image pickup unit 11 and the photographed object; that is, either the image pickup unit 11 may be in motion, or the object photographed by the image pickup unit 11 may be in motion.
In the present embodiment, the randomly photographed object serves as the instantaneous measurement reference of the three-dimensional motion of the image pickup unit 11. The three-dimensional motion of the image pickup unit 11 causes two-dimensional motion of the image point of the randomly photographed object in its field of view; the invention drives the movement of the cursor according to this two-dimensional motion.
The image pickup unit 11 may be a video camera. According to the working principle above, selecting a point on the Z-axis (optical axis) of the camera as the instantaneous measurement reference reflects the motion of the camera more accurately. Therefore, in the present embodiment a spatial point P(0, 0, Z) on the optical axis of the camera is selected as the instantaneous measurement reference of the camera's three-dimensional motion, and the image point of the instantaneous measurement reference is located at the center of the image plane.
Further, the two-dimensional motion of the image point of the randomly photographed object is acquired as follows: record the position of the image point of the instantaneous measurement reference in the i-th frame image; search for that image point in the (i+1)-th frame image by a template matching method; and compute the two-dimensional motion of the image point from the positions of the two image points. This embodiment uses template matching for the search: without a matching method the value of z could jump (for example, from 1 to 10) according to the working principle, causing the image point, and hence the cursor, to jump. With template matching the value of z varies slowly and can be regarded as a constant, so no cursor jumping occurs.
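To see why a jump in z would jolt the cursor, the projection relation can be evaluated at the two z values mentioned above while the camera motion is held fixed; all numbers are illustrative:

```python
# Illustrative check of the z-jump effect: same camera motion, two z values.
f, beta, t1, t3 = 500.0, 0.02, 5.0, 0.5
for z in (1.0, 10.0):
    u = f * (-beta * z + t1) / (z + t3)    # u/f = (-beta*z + t1) / (z + t3)
    print(z, round(u, 1))
# z = 1  -> u ~ 1660.0;  z = 10 -> u ~ 228.6: the image point, and hence the
# cursor, would jump even though the camera motion did not change.
```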
Referring to fig. 2, the cursor positioning method of the present embodiment includes the following steps:
(1) The user controls the image pickup unit to capture images and send the captured i-th frame image to the image processing and resolving unit; the value of i is incremented by 1 with each capture.
When the image pickup unit captures the i-th frame image, its own coordinate system is taken as the relative coordinate system $o_0$; a point $P = [0\ 0\ z]^T$ on the z-axis of this coordinate system is taken as the instantaneous measurement reference, and the image of the instantaneous measurement reference P is located at the center $p_0(0, 0)$ of the image plane.
(2) The image processing and resolving unit acquires the image point of the instantaneous measurement reference and assigns its position to position C.
(3) After relative motion of the image pickup unit occurs, the image processing and resolving unit acquires the (i+1)-th frame image from the image pickup unit.
After the image pickup unit has moved by (R, t), a new coordinate system $o_1$ is formed; the (i+1)-th frame image is captured at this moment, and the image of the instantaneous measurement reference P has moved to $p_1(u, v)$. By the linear camera imaging model:

$$s\begin{bmatrix}u\\v\\1\end{bmatrix}=A(RX+t)=\begin{bmatrix}f&0&0\\0&f&0\\0&0&1\end{bmatrix}\left(\begin{bmatrix}1&\gamma&-\beta\\-\gamma&1&\alpha\\\beta&-\alpha&1\end{bmatrix}\begin{bmatrix}0\\0\\z\end{bmatrix}+\begin{bmatrix}t_1\\t_2\\t_3\end{bmatrix}\right)$$

namely:

$$s\begin{bmatrix}u/f\\v/f\\1\end{bmatrix}=\begin{bmatrix}-\beta\\\alpha\\1\end{bmatrix}z+\begin{bmatrix}t_1\\t_2\\t_3\end{bmatrix};\qquad \frac{u}{f}=\frac{-\beta z+t_1}{z+t_3},\qquad \frac{v}{f}=\frac{\alpha z+t_2}{z+t_3}$$

where A, R, t, X, (u, v) and s are as defined in the working principle above.
(4) The image processing and resolving unit searches, by matching, for the image point of the instantaneous measurement reference in the (i+1)-th frame image; in this embodiment, the matching search uses a template matching method.
(5) If the correlation function value of the template match exceeds the set threshold, the point is considered successfully matched and the method goes to step (6); otherwise it goes to step (1).
(6) The position with the maximum correlation function value is taken as the best matching position and is assigned to position D.
(7) The position offset vector V' = D - C is calculated.
(8) The cursor position Q = Q' + V is calculated, where Q is the cursor position after the current movement, Q' is the cursor position before the current movement, V is the cursor position compensation, V = k × V', and k is a scaling factor.
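Steps (1) through (8) can be combined into a minimal end-to-end sketch; OpenCV (cv2) is assumed for capture and matching, and the window size, threshold and scaling factor k are illustrative choices, not values from the patent:

```python
import cv2

TPL, K, THRESH = 32, 1.5, 0.8       # template half-size, scaling factor k, match threshold

cap = cv2.VideoCapture(0)           # image pickup unit
qx, qy = 400.0, 300.0               # cursor position Q'

ok, frame = cap.read()              # step (1): capture frame i
while ok:
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    h, w = gray.shape
    cx, cy = w // 2, h // 2         # step (2): reference image point C at image center
    tpl = gray[cy - TPL:cy + TPL, cx - TPL:cx + TPL]

    ok, frame = cap.read()          # step (3): capture frame i+1
    if not ok:
        break
    gray2 = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    res = cv2.matchTemplate(gray2, tpl, cv2.TM_CCOEFF_NORMED)   # step (4)
    _, max_val, _, max_loc = cv2.minMaxLoc(res)
    if max_val > THRESH:            # step (5): accept only a confident match
        dx = max_loc[0] + TPL - cx  # steps (6)-(7): best match D, offset V' = D - C
        dy = max_loc[1] + TPL - cy
        qx += K * dx                # step (8): Q = Q' + k * V'
        qy += K * dy
        print(f"cursor at ({qx:.1f}, {qy:.1f})")
```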
With the cursor positioning system and positioning method described above, the relative motion between the camera device and the photographed object drives the cursor to move, so the user does not need to rely on a flat desktop and can use the system freely within a spatial range.
Example two
The difference between this embodiment and the first embodiment is that here a point located near the Z-axis (optical axis) of the camera is selected as the instantaneous measurement reference of the camera's three-dimensional motion.
In addition, any point within the camera's field of view may be selected as the instantaneous measurement reference of the camera's three-dimensional motion.
EXAMPLE III
The difference between this embodiment and the first and second embodiments is that here the relative three-dimensional motion between the image pickup unit and the object is generated by moving the object.
In this case the image pickup unit is stationary, and all objects photographed at random in its field of view move rigidly as a whole; this rigid motion causes overall motion of the image points of the randomly photographed objects in the field of view of the image pickup unit; the image pickup unit serves as the instantaneous measurement reference of the rigid motion; and the movement of the cursor is driven according to the two-dimensional motion of the image of the randomly photographed objects.
Example four
Referring to fig. 4-1 and 4-2, in the present embodiment the image pickup unit and a plurality of keys form an input control unit (the keys may of course be omitted). The input control unit comprises a pen-shaped housing 111, an image pickup device 112 arranged at the head of the housing 111, and three keys 113, 114 and 115 arranged on the side of the housing 111; the functions of keys 113, 114 and 115 are set as the left key, right key and power switch of a conventional mouse, respectively.
In addition, the functions of the keys may be changed (for example, the power switch may be set as the middle key of a conventional mouse), and keys may be added or removed (for example, a conventional middle-key function may be added).
EXAMPLE five
Referring to fig. 5-1 and 5-2, in the present embodiment the image pickup unit and a plurality of keys form an input control unit (the keys may of course be omitted). The input control unit comprises a ring 117 that can be conveniently worn on a finger, an image pickup device 116 provided on the ring 117, and a key unit (not shown) provided separately from the ring 117. The ring 117 may be worn on the user's finger like a finger ring, and may be closed or open. The key unit comprises at least three keys whose functions are set as the left key, right key and power switch of a conventional mouse, respectively. The key unit is arranged on the keyboard (for example, the left, middle and right keys may be simulated by keyboard key combinations), or it is a separate part independent of other mechanisms.
In addition, the functions of the keys may be changed (for example, the power switch may be set as the middle key of a conventional mouse), and keys may be added or removed (for example, a conventional middle-key function may be added).
EXAMPLE six
Referring to fig. 6, in this embodiment the input control unit contains no keys and consists only of the image pickup unit. Only image information is transmitted along the chain of the image pickup unit 11, the image processing and resolving unit 12, the data transmitting unit 13, the data receiving unit 14 and the processing unit 15, so as to control the cursor movement (see the first embodiment for the control principle). The left and right mouse keys are simulated by key combinations on the keyboard of the electronic product 20.
The description and applications of the invention herein are illustrative and are not intended to limit the scope of the invention to the embodiments described above. Variations and modifications of the embodiments disclosed herein are possible, and alternative and equivalent various components of the embodiments will be apparent to those skilled in the art. It will be clear to those skilled in the art that the present invention may be embodied in other forms, structures, arrangements, proportions, and with other elements, materials, and components, without departing from the spirit or essential characteristics thereof. Other variations and modifications of the embodiments disclosed herein may be made without departing from the scope and spirit of the invention.

Claims (10)

1. A cursor positioning method, characterized by: the method drives the movement of a cursor through relative three-dimensional motion between an image pickup unit and a photographed object;
the three-dimensional motion of the image pickup unit causes two-dimensional motion of image points of an object photographed at random in a field of view thereof;
and driving the cursor to move according to the two-dimensional motion of the image point of the randomly shot object.
2. A cursor positioning method according to claim 1, characterized in that:
the randomly shot object is used as an instantaneous measurement reference of the three-dimensional motion of the image pickup unit;
the method for acquiring the two-dimensional motion of the image point of the randomly shot object comprises the following steps:
recording the position of an image point of an instantaneous measurement reference in the ith frame image;
searching the image point of the instantaneous measurement reference in the (i + 1) th frame image by a template matching method;
and calculating the two-dimensional motion of the image point of the object shot at random according to the positions of the two image points.
3. A cursor positioning method according to claim 1, characterized in that:
the image shooting unit is a camera;
selecting a space point P (0, 0, Z) on an optical axis Z axis of the camera as an instantaneous measurement reference of the three-dimensional motion of the camera;
the image point of the instantaneous measurement reference is located in the center of the image plane.
4. A cursor positioning method according to claim 1, characterized in that:
the randomly shot object is used as an instantaneous measurement reference of the three-dimensional motion of the image pickup unit;
the method comprises the following steps:
(1) the user controls the image pickup unit to capture images and send the captured i-th frame image to an image processing and resolving unit; the value of i is incremented by 1 with each capture;
(2) the image processing and resolving unit acquires the image point of the instantaneous measurement reference and assigns its position to position C;
(3) the image processing and resolving unit acquires the (i+1)-th frame image from the image pickup unit;
(4) the image processing and resolving unit searches, by matching, for the image point of the instantaneous measurement reference in the (i+1)-th frame image;
(5) if the image point of the instantaneous measurement reference is successfully found by matching, go to step (6); otherwise, go to step (1);
(6) assign the best matching position found to position D;
(7) calculate the position offset vector V' = D - C;
(8) calculate the cursor position Q = Q' + V; where Q is the cursor position after the current movement, Q' is the cursor position before the current movement, V is the cursor position compensation, V = k × V', and k is a scaling factor.
5. The cursor positioning method of claim 4, wherein:
when the image shooting unit shoots an i-frame image, the coordinate system of the image shooting unit is taken as a relative coordinate system o0(ii) a Taking a point P ([ 00 z ] located on the z-axis in the coordinate system]T) As instantaneous measurement reference, the image of the instantaneous measurement reference P is located at the center P of the image plane0(0 0);
After the image pick-up unit has moved R t, a new coordinate system o is formed1At this time, the (i + 1) th frame image is captured, and the position of the image of the instantaneous measurement reference P is moved to P1(uv); linear camera imaging model
s u v 1 = A 0 R t 0 1 X 1
= [ A ] ( [ RX + t ] ) ;
<math> <mrow> <mo>=</mo> <mrow> <mfenced open='[' close=']'> <mtable> <mtr> <mtd> <mi>f</mi> </mtd> <mtd> <mn>0</mn> </mtd> <mtd> <mn>0</mn> </mtd> </mtr> <mtr> <mtd> <mn>0</mn> </mtd> <mtd> <mi>f</mi> </mtd> <mtd> <mn>0</mn> </mtd> </mtr> <mtr> <mtd> <mn>0</mn> </mtd> <mtd> <mn>0</mn> </mtd> <mtd> <mn>1</mn> </mtd> </mtr> </mtable> </mfenced> <mrow> <mo>(</mo> <mrow> <mfenced open='[' close=']'> <mtable> <mtr> <mtd> <mn>1</mn> </mtd> <mtd> <mi>&gamma;</mi> </mtd> <mtd> <mo>-</mo> <mi>&beta;</mi> </mtd> </mtr> <mtr> <mtd> <mo>-</mo> <mi>&gamma;</mi> </mtd> <mtd> <mn>1</mn> </mtd> <mtd> <mi>&alpha;</mi> </mtd> </mtr> <mtr> <mtd> <mi>&beta;</mi> </mtd> <mtd> <mo>-</mo> <mi>&alpha;</mi> </mtd> <mtd> <mn>1</mn> </mtd> </mtr> </mtable> </mfenced> <mrow> <mfenced open='[' close=']'> <mtable> <mtr> <mtd> <mn>0</mn> </mtd> </mtr> <mtr> <mtd> <mn>0</mn> </mtd> </mtr> <mtr> <mtd> <mi>z</mi> </mtd> </mtr> </mtable> </mfenced> <mo>+</mo> <mrow> <mfenced open='[' close=']'> <mtable> <mtr> <mtd> <msub> <mi>t</mi> <mn>1</mn> </msub> </mtd> </mtr> <mtr> <mtd> <msub> <mi>t</mi> <mn>2</mn> </msub> </mtd> </mtr> <mtr> <mtd> <msub> <mi>t</mi> <mn>3</mn> </msub> </mtd> </mtr> </mtable> </mfenced> </mrow> </mrow> </mrow> <mo>)</mo> </mrow> </mrow> </mrow></math>
Namely: <math> <mrow> <mi>s</mi> <mrow> <mfenced open='[' close=']' separators=','> <mtable> <mtr> <mtd> <mfrac> <mi>u</mi> <mi>f</mi> </mfrac> </mtd> </mtr> <mtr> <mtd> <mfrac> <mi>v</mi> <mi>f</mi> </mfrac> </mtd> </mtr> <mtr> <mtd> <mn>1</mn> </mtd> </mtr> </mtable> </mfenced> <mo>=</mo> <mrow> <mo>(</mo> <mrow> <mfenced open='[' close=']'> <mtable> <mtr> <mtd> <mo>-</mo> <mi>&beta;</mi> </mtd> </mtr> <mtr> <mtd> <mi>&alpha;</mi> </mtd> </mtr> <mtr> <mtd> <mn>1</mn> </mtd> </mtr> </mtable> </mfenced> <mi>z</mi> <mo>+</mo> <mrow> <mfenced open='[' close=']' separators=' '> <mtable> <mtr> <mtd> <msub> <mi>t</mi> <mn>1</mn> </msub> </mtd> </mtr> <mtr> <mtd> <msub> <mi>t</mi> <mn>2</mn> </msub> </mtd> </mtr> <mtr> <mtd> <msub> <mi>t</mi> <mn>3</mn> </msub> </mtd> </mtr> </mtable> </mfenced> </mrow> </mrow> <mo>)</mo> </mrow> <mo>;</mo> </mrow> </mrow></math>
<math> <mrow> <mfrac> <mi>u</mi> <mi>f</mi> </mfrac> <mo>=</mo> <mfrac> <mrow> <mo>-</mo> <mi>&beta;z</mi> <mo>+</mo> <msub> <mi>t</mi> <mn>1</mn> </msub> </mrow> <mrow> <mi>z</mi> <mo>+</mo> <msub> <mi>t</mi> <mn>3</mn> </msub> </mrow> </mfrac> </mrow></math>
<math> <mrow> <mfrac> <mi>v</mi> <mi>f</mi> </mfrac> <mo>=</mo> <mfrac> <mrow> <mi>&alpha;z</mi> <mo>+</mo> <msub> <mi>t</mi> <mn>2</mn> </msub> </mrow> <mrow> <mi>z</mi> <mo>+</mo> <msub> <mi>t</mi> <mn>3</mn> </msub> </mrow> </mfrac> </mrow></math>
wherein, A = f 0 0 0 f 0 0 0 1 , is a camera intrinsic parameter matrix; <math> <mrow> <mi>R</mi> <mo>=</mo> <mfenced open='[' close=']'> <mtable> <mtr> <mtd> <mn>1</mn> </mtd> <mtd> <mi>&gamma;</mi> </mtd> <mtd> <mo>-</mo> <mi>&beta;</mi> </mtd> </mtr> <mtr> <mtd> <mo>-</mo> <mi>&gamma;</mi> </mtd> <mtd> <mn>1</mn> </mtd> <mtd> <mi>&alpha;</mi> </mtd> </mtr> <mtr> <mtd> <mi>&beta;</mi> </mtd> <mtd> <mo>-</mo> <mi>&alpha;</mi> </mtd> <mtd> <mn>1</mn> </mtd> </mtr> </mtable> </mfenced> <mo>,</mo> </mrow></math> rotating the matrix for the camera; t = t 1 t 2 t 3 , a translation matrix for the camera; x is the instantaneous measurement reference in the coordinate system o0Coordinates of (5); u v coordinates of the image point which is a reference of instantaneous measurement in an image coordinate system; s is a proportionality constant.
6. The cursor positioning method of claim 4, wherein:
in step (4), the image matching search method is a template matching method;
in step (5), a point is considered successfully matched if the correlation function value of the template match exceeds the set threshold;
and in step (6), the position with the maximum correlation function value is taken as the best matching position.
7. A cursor positioning method, characterized by: the method drives the movement of a cursor through relative three-dimensional motion between an image pickup unit and a photographed object;
the image pickup unit is stationary, and all objects photographed at random in its field of view move rigidly as a whole;
the rigid motion of all the randomly photographed objects as a whole causes overall motion of their image points in the field of view of the image pickup unit;
the image pickup unit serves as the instantaneous measurement reference of the rigid motion;
and the movement of the cursor is driven according to the two-dimensional motion of the image of the randomly photographed objects.
8. A cursor positioning system, comprising:
the image shooting unit is used for shooting images and sending the shot frame images to an image processing and resolving unit;
the image processing and resolving unit is used for processing the relative three-dimensional motion between the image pickup unit and the shot object into cursor position compensation data and then transmitting the cursor position compensation data to a data transmitting unit; the three-dimensional motion of the image pickup unit causes motion of an image of a randomly photographed object in a field of view thereof, and motion of a cursor is driven according to the two-dimensional motion of the image of the randomly photographed object;
the data sending unit is used for sending the calculated cursor position compensation to a data receiving unit;
the data receiving unit is used for receiving the cursor position compensation sent by the data sending unit and sending the cursor position compensation to a processing unit;
and the processing unit is used for driving the cursor to be displayed at the corresponding position.
9. A cursor positioning system according to claim 8, wherein:
the randomly shot object serves as an instantaneous measurement reference of the three-dimensional motion of the image pickup unit.
10. A cursor positioning system according to claim 9, wherein:
the image pickup unit sends the captured i-th frame image to the image processing and resolving unit; the value of i is incremented by 1 with each capture;
the image processing and resolving unit acquires the image point of the instantaneous measurement reference and assigns its position to position C;
the image processing and resolving unit searches, by matching, the (i+1)-th frame image for the image point of the instantaneous measurement reference recorded at the i-th frame;
if the image point of the instantaneous measurement reference is successfully found by matching, the best matching position is assigned to position D;
the position offset vector is V' = D - C;
the cursor position is Q = Q' + V; where Q is the cursor position after the current movement, Q' is the cursor position before the current movement, V is the cursor position compensation, V = k × V', and k is a scaling factor.
CNA2009100460254A 2009-02-06 2009-02-06 Cursor positioning system and method Pending CN101482782A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CNA2009100460254A CN101482782A (en) 2009-02-06 2009-02-06 Cursor positioning system and method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CNA2009100460254A CN101482782A (en) 2009-02-06 2009-02-06 Cursor positioning system and method

Publications (1)

Publication Number Publication Date
CN101482782A true CN101482782A (en) 2009-07-15

Family

ID=40879922

Family Applications (1)

Application Number Title Priority Date Filing Date
CNA2009100460254A Pending CN101482782A (en) 2009-02-06 2009-02-06 Cursor positioning system and method

Country Status (1)

Country Link
CN (1) CN101482782A (en)


Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103513790A (en) * 2012-06-19 2014-01-15 昆达电脑科技(昆山)有限公司 Method for generating cursor in deflection position on basis of mobile device
TWI461969B (en) * 2012-09-04 2014-11-21 Pixart Imaging Inc Electronic system with pointing device and the method thereof
US9104249B2 (en) 2012-09-04 2015-08-11 Pixart Imaging Inc. Electronic system with pointing device and method thereof
CN103677314A (en) * 2012-09-18 2014-03-26 原相科技股份有限公司 Electronic system, pointing device thereof and method for tracking image thereof
CN103677314B (en) * 2012-09-18 2017-06-13 原相科技股份有限公司 The method of electronic system and its indicator device with image is followed the trail of
CN104331212A (en) * 2013-07-22 2015-02-04 原相科技股份有限公司 Cursor positioning method of handheld pointing device
CN104331212B (en) * 2013-07-22 2017-11-10 原相科技股份有限公司 The cursor positioning method of hand-held indicator device
CN103699592A (en) * 2013-12-10 2014-04-02 天津三星通信技术研究有限公司 Video shooting positioning method for portable terminal and portable terminal
CN103699592B (en) * 2013-12-10 2018-04-27 天津三星通信技术研究有限公司 Video capture localization method and portable terminal applied to portable terminal
CN107754310A (en) * 2013-12-18 2018-03-06 原相科技股份有限公司 Handheld apparatus and its localization method
CN107754310B (en) * 2013-12-18 2020-09-15 原相科技股份有限公司 Handheld device and positioning method thereof


Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C02 Deemed withdrawal of patent application after publication (patent law 2001)
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20090715