CN101655739A - Device for three-dimensional virtual input and simulation

Info

Publication number: CN101655739A
Application number: CN200810214319A
Other versions: CN101655739B (granted publication)
Original language: Chinese (zh)
Inventor: 林明彦
Current assignee: Unique Instruments Co Ltd
Legal status: Granted; Expired - Fee Related

Classification

  • Length Measuring Devices By Optical Means

Abstract

The invention provides a device for three-dimensional virtual input and simulation, comprising a plurality of point light sources, plural groups of optical positioners with visual-axis tracking capability, and a control and analysis program. The device is mainly characterized by using the optical positioners to measure and analyze the three-dimensional motion of the point light sources, thereby achieving virtual input and simulator-like simulation.

Description

Three-dimensional virtual input and simulation device
Technical Field
The invention relates to a three-dimensional virtual input and simulation device, in particular to a virtual input and simulation device composed of a plurality of point light sources, a plurality of groups of optical positioners with visual-axis tracking, and a control and analysis program.
Background
The keyboard, mouse, remote controller, and touch screen are the traditional, commonly used human-computer interfaces. Their key characteristic is that a mechanical structure is directly touched and operated by the hands and fingers, so that characters, drawings, functional operations, and other information can be input into a machine, achieving human-computer interaction.
In the invention, the most basic definition of a virtual input device is one that uses three-dimensional hand motion as the input mode to input characters, drawings, functional operations, and other information into a machine. In short, three-dimensional hand motion serves as the human-computer interface.
Fig. 1 is a schematic diagram of a glove used in virtual reality. A glove 1 used in Virtual Reality (hereinafter VR; the glove is hereinafter called a VR glove) is a typical device for recognizing three-dimensional hand motion. To capture detailed finger movements, a typical VR glove has a strain gauge sensor or flex sensor (not shown) mounted on each finger 2 to measure the physical amount of finger bending. Various miniature actuators (not shown) are also commonly used to provide force feedback. Finally, a positioning device 3 is installed on the VR glove to measure the three-dimensional coordinates and angles of a single position on the glove. For details, see the following related patents:
U.S.Pat.No.4414537(Gary J.Grimes,1983)
U.S.Pat.No.5047952(James P.Kramer,1991)
U.S.Pat.No.4988981(Thomas G.Zimmerman,1991)
Although VR gloves can achieve human-computer communication, their complex structure and control make them unsuitable for personal computers, game machines, PDAs, mobile phones, home theater equipment, and other devices that generally require simple interface operation. Furthermore, their manufacturing cost is not affordable for the general user, so VR gloves have to date never become popular in the consumer market. Technically, moreover, in order not to hinder hand movement, the positioning devices adopted by VR gloves are generally electromagnetic or ultrasonic; their greatest drawback is that the response speed is not fast enough, which causes a noticeable delay (latency) in actual operation, and they are easily disturbed by the environment and then cannot position correctly. For details, refer to the following research report:
Christine Youngblut, et al., Review of Virtual Environment Interface Technology, Chapters 3 and 5, INSTITUTE FOR DEFENSE ANALYSES, 1996
Therefore, for any virtual input device, a positioner capable of quickly recognizing the motion of plural points on the hand is a prerequisite for virtual input. For the above reasons, the positioner should have the following features to be practical and general-purpose:
1. It can provide the three-dimensional motion physical quantities (such as spatial coordinates, displacement, velocity, and acceleration) of plural points on the hand;
2. it can detect over a large range, i.e., the user can make any hand motion within a large operating range;
3. it can track the viewpoint, i.e., automatically track the user's operating position to provide a larger usable range;
4. it has high spatial resolution, i.e., the minimum spatial displacement of the user's hand motion that can be resolved should reach the millimeter (mm) level;
5. it has high-speed response, i.e., the shortest time required to detect the three-dimensional motion physical quantities should reach the millisecond (ms) level;
6. it has low manufacturing cost, i.e., a price comparable to ordinary computer peripherals.
The achievements of the prior art are examined against the above criteria. Past techniques for measuring the physical quantities of single-point three-dimensional motion include the electrostatic field type, static magnetic field type, ultrasonic type, electromagnetic wave type, and triangulation type; see the following related patents:
electrostatic field type:
U.S.Pat.No.6025726(Neil Gershenfeld,2000)
static magnetic field type:
U.S.Pat.No.4945305(Ernest B.Blood,1990)
ultrasonic type:
U.S.Pat.No.5214615(Will Bauer,1993)
electromagnetic wave type:
U.S.Pat.No.4613866(Ernest B.Blood,1986)
U.S.Pat.No.5739812(Takayasu Mochizuki,1998)
triangulation-image processing (2D Camera):
U.S.Pat.No.4928175(Henrik Haggren,1990)
U.S.Pat.No.6810142(Nobuo Kochi,2004)
triangulation - 2D optics:
U.S.Pat.No.5319387(Kouhei Yoshikawa,1994)
The above techniques all fail, to a greater or lesser extent, to satisfy simultaneously the requirements of high spatial resolution, high-speed response, wide-range use, and low manufacturing cost, and are therefore not the approach adopted by the present invention. The present invention is instead based on one-dimensional optical positioning measurement technology, which, unlike the other technologies, can fully satisfy the requirements of high spatial resolution, high-speed response, large-range use, and low manufacturing cost. Related patents on one-dimensional optical positioning have been disclosed in the past, as follows:
U.S.Pat.No.3084261(Donald K.Wilson,1963)
U.S.Pat.No.4092072(Stafford Malcolm Ellis,1978)
U.S.Pat.No.4193689(Jean-Claude Reymond,1980)
U.S.Pat.No.4209254(Jean-Claude Reymond,1980)
U.S.Pat.No.4419012(Michael D.Stephenson,1983)
U.S.Pat.No.4973156(Andrew Dainis,1990)
U.S.Pat.No.5198877(Waldean A.Schulz,1993)
U.S.Pat.No.5640241(Yasuji Ogawa,1997)
U.S.Pat.No.5642164(Yasuji Ogawa,1997)
U.S.Pat.No.5907395(Waldean A.Schulz,1999)
U.S.Pat.No.5920395(Waldean A.Schulz,1999)
U.S.Pat.No.6584339B2(Robert L.Galloway,2003)
U.S.Pat.No.6587809B2(Dennis Majoe,2003)
U.S.Pat.No.6801637B2(Nestor Voronka,2004)
U.S.Pat.No.7072707B2(Robert L.Galloway,2006)
The first positioning based on one-dimensional optics is found in U.S. Pat. No.3084261 (Donald K. Wilson, 1963). Wilson used two orthogonal one-dimensional cylindrical lenses (hereinafter one-dimensional lenses), two triangular photoelectric sensing devices, and two square photoelectric sensing devices (silicon photovoltaic cells) to measure the azimuth and elevation angles of the sun and automatically track its movement. In 1978, Ellis used a V-shaped optical gate and a one-dimensional optical sensor array (linear array of light-sensitive elements) to achieve the same angle measurement.
Subsequently, in 1980, Reymond first proposed a three-dimensional coordinate positioning technique based on one-dimensional optics. The main features of the technology are as follows:
1. construction of optical system
It mainly comprises three groups of linear position detectors, each formed of a one-dimensional lens, a filter, a one-dimensional optical sensing array (linear optical sensing array), and a signal-reading circuit for the array, together with a method of calculating spatial coordinates (for convenience, "linear" is hereinafter written as "one-dimensional"). The long axes of the first and second one-dimensional light sensing arrays are parallel, while the first (and second) and third one-dimensional position detectors are mutually perpendicular.
2. Theoretical calculation of three-dimensional coordinates
Under the condition that the long-axis directions of the one-dimensional light sensing arrays are coplanar, a theoretical calculation of the three-dimensional coordinates is provided. The method forms three geometric planes, each defined by the position of the point light source to be measured, the optical-axis center of a one-dimensional lens, and the imaging position on the corresponding one-dimensional light sensing array; the intersection point of the three planes gives the position coordinates of the point light source (a minimal numerical sketch follows item 4 below).
3. Achieve the effect of positioning a plurality of points
The lighting times of the point light sources are switched continuously and periodically, i.e., the point light sources emit light at different time points, so as to avoid image overlap and to obtain the correct correspondence of each point light source's image among the one-dimensional light sensing arrays (hereinafter, this prior technique is called time modulation), thereby positioning the three point light sources.
4. Processing of metrology data
A threshold comparison circuit is added to the hardware circuit that reads the signals of the one-dimensional light sensing arrays, in order to remove unwanted background light sources.
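The three-plane intersection in item 2 reduces to a small linear solve. The sketch below is a minimal illustration under assumed inputs, not Reymond's actual formulation: the function name and the example planes are invented, and it presumes each detector's plane has already been reduced, via calibration, to Hessian normal form n·p = d.

```python
import numpy as np

def source_position(planes):
    """Intersect three planes, each given as (n, d) with n . p = d.

    Each plane passes through the point light source, the optical-axis
    centre of one 1-D lens, and the image position on that detector's
    1-D light sensing array; their common point is the source position.
    """
    A = np.array([n for n, d in planes], dtype=float)  # stack the three normals
    b = np.array([d for n, d in planes], dtype=float)
    return np.linalg.solve(A, b)  # singular if the planes are degenerate

# Hypothetical planes for a source at (1, 2, 3):
planes = [((1.0, 0.0, 0.0), 1.0),
          ((0.0, 1.0, 0.0), 2.0),
          ((0.0, 0.0, 1.0), 3.0)]
print(source_position(planes))  # -> [1. 2. 3.]
```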
In addition, possible extensions of the technology proposed by Reymond, which are neither discussed nor claimed therein, are as follows:
5. Extensibility to plural-point measurement
For the positioning measurement of plural points, the number of one-dimensional position detectors can be increased to achieve plural-point positioning measurement.
6. Expansion of spatial arrangement
The one-dimensional light sensing arrays need not be arranged coplanarly.
For these two extensions, Reymond does not provide any specific theoretical calculation to explain how to achieve the purpose of obtaining the spatial coordinates of the point to be measured.
The Reymond patent clearly discloses the principles, architecture, and basic techniques of a one-dimensional optical positioning system for three-dimensional coordinate positioning. Among the subsequent patents, from Stephenson (1983) to Galloway (2006), the principles and architecture of Reymond were largely adopted technically; in application, they remain in the field of special-purpose measurement, with nothing notably new, as described below:
U.S.Pat.No.4419012(Michael D.Stephenson,1983)
Basically, the most important feature of this patent is an improvement on the Reymond patent: Reymond used a wired connection to synchronize the plural point light sources with the data scanning of the one-dimensional light sensing arrays, whereas Stephenson monitors the time at which each point light source is lit with a PIN diode and synchronously starts the data scanning of the one-dimensional light sensing arrays accordingly. Stephenson thus achieves synchronization optically and wirelessly.
U.S.Pat.No.4973156(Andrew Dainis,1990)
This patent extends almost all of Reymond's concepts, proposing a coplanar 120° spatial arrangement for three groups of one-dimensional position detectors and a coplanar 45° spatial arrangement for four groups. However, for these two arrangements, no specific theoretical calculation is proposed to explain how the spatial coordinates of the points to be measured are obtained. For the measurement of plural points, simultaneous illumination of the light sources to be measured is mentioned, but no concrete implementation is proposed, and the image overlap phenomenon is not discussed (see Taiwan patent application No. 096113579).
U.S.Pat.No.5198877(Waldean A.Schulz,1993)
Basically, this patent is mainly an application of the Reymond patent. Schulz's concept is to scan a linear laser spot over an object to be measured with a hand-held one-dimensional laser scanner, i.e., to project the laser spot onto the object's surface. Using two groups of one-dimensional position detectors, the relative coordinates of the laser spot reflected from the object are obtained first. Three groups of Reymond's one-dimensional position detectors then measure three pilot light sources mounted on the laser scanner, from which the absolute coordinates of the reflected laser spot are finally calculated. The lighting method Schulz uses for the three light sources again follows Reymond's technique of continuously and periodically switching plural point light sources, and is not new. For lighting the three pilot light sources, Schulz mentioned (but neither discussed nor claimed) light sources of different wavelengths (i.e., different colors) and light sources of different modulation frequencies, without proposing anything concrete.
U.S.Pat.No.5640241(Yasuji Ogawa,1997)
U.S.Pat.No.5642164(Yasuji Ogawa,1997)
Basically, these two patents are mainly improvements on the Reymond patent; their most notable feature is the use of a two-dimensional optical sensing array with a composite one-dimensional lens. This simplifies the mechanism but does not improve measurement resolution (note: resolution depends not on whether a one- or two-dimensional optical sensing array is used, but on the size of a single pixel on the array, the processing of the point light source, and the other optical parameters) or speed (note: using a two-dimensional optical sensing array actually reduces speed); the cost reduction comes mainly from the composite one-dimensional lens. No processing of the measurement data is described, and the measurement of plural points is not addressed.
U.S.Pat.No.5907395(Waldean A.Schulz,1999)
U.S.Pat.No.5920395(Waldean A.Schulz,1999)
Basically, these two patents are primarily applications of, and minor additions to, the Reymond patent. The added part concerns the processing of the point light source: a point light source with a larger divergence angle is obtained using spherical and planar diffusers. Background light is handled in software: its signal is recorded in memory, and during actual measurement the original signal is recovered by subtracting the recorded background light signal from the measured signal. For the plural point light sources, the lighting method again follows Reymond's continuous and periodic switching, with no innovation.
U.S.Pat.No.6584339B2(Robert L.Galloway,2003)
U.S.Pat.No.6801637B2(Nestor Voronka,2004)
U.S.Pat.No.7072707B2(Robert L.Galloway,2006)
Basically, the three patents are all application examples of the Reymond patent, and no innovation is found in the positioning technology.
In view of the above-mentioned patents, the following conclusions can be drawn:
(1) Theoretical calculation
For the theoretical calculation of the three-dimensional coordinates of a point light source, no new theory has appeared beyond the simple calculation proposed by Reymond. On the theoretical side, the following paper has appeared in academia:
Yasuo Yamashita, Three-dimensional Stereometric Measurement System Using Optical Scanners, Cylindrical Lenses, & Line Sensors, SPIE 361, Aug. 1982.
The theory stated by Yamashita applies only to the special condition that the long-axis directions of the one-dimensional light sensing arrays are coplanar with the optical axes of the one-dimensional lenses; it is not a generalized calculation theory. For the generalized theoretical calculation, in which a one-dimensional position detector can be set at any position and at any angle, the following patents have appeared:
Taiwan patent application No. 096108692
Taiwan patent application No. 096113579
Taiwan patent application No. 096116210
(2) Technique
Technically, the depth of the technology used does not exceed the scope of the Reymond (1980) patent. In particular, for the solution of the image overlap phenomenon, nothing has advanced beyond Stephenson (1983); there has been no improvement or innovation.
(3) Applications
On the application side, all of the above patents are limited to three-dimensional position measurement; none touches on virtual input. The use of one-dimensional optical positioning for virtual input appears in Taiwan patent application No. 096116210, which first proposed a three-dimensional mouse (3D Mouse) using gestures as the human-computer interface.
Disclosure of Invention
In view of the above-mentioned drawbacks of the prior art, the following innovations and improvements are proposed:
1. Processing of point light source uniqueness
2. Processing of background light
3. Processing of data
4. Expansion of the system architecture
5. Expansion of system applications
Finally, a description of embodiments of the present invention is provided.
1. Processing of point source uniqueness
For a one-dimensional optical system, the advantage is that the position of an image point can be read quickly (owing to the one-dimensional light sensing array); the disadvantage is that image overlap occurs easily. FIG. 2 is a schematic diagram of the image overlap phenomenon in a one-dimensional optical system. A one-dimensional lens 5 of focal length f is defined such that its focusing direction is parallel to the Y axis (hereinafter a one-dimensional lens is drawn as a short double-arrowed line, the arrow indicating the focusing direction), its long-axis (non-focusing) direction is parallel to the X axis, and its optical axis is the Z axis. Along the optical axis Z, on a plane perpendicular to it, lies a straight line perpendicular to the Y axis. A point light source o at any position on this line images at the same point I(0, y_s, 0); this is the image overlap phenomenon, and the line is hereinafter called the image overlap line. In other words, all point light sources lying on this line share the same imaging position. Therefore, for a positioning system based on one-dimensional optics, resolving the image overlap phenomenon is the first task when positioning plural point light sources. Moreover, since the technique requires at least three one-dimensional light sensing arrays, an identification procedure is needed for the plural point light sources imaged simultaneously on the plural arrays, to quickly find the correct imaging correspondence among the arrays so that the plural point light sources can be positioned correctly.
Giving each point light source uniqueness (unique characteristics) is the most basic principle for solving the imaging correspondence and the image overlap phenomenon. As mentioned in the aforementioned U.S. patents, the plural point light sources can be time-modulated, wavelength-modulated, or frequency-modulated so that each point light source is unique. Time modulation means the point light sources are lit continuously and alternately, i.e., each point light source is lit and emits at a different time point, so that at the sensing end each scan reads the imaging position of one and only one lit point light source. The disadvantage of time modulation is the position error caused by asynchronous measurement, which is proportional to the moving speed of the point light sources and to their number. Wavelength modulation means each point light source has a different emission wavelength; its disadvantages are increased manufacturing cost and increased data processing. Frequency modulation means the light intensity of each point light source oscillates at a different frequency; its disadvantage is the need for demodulation. Apart from these methods, Taiwan patent application No. 096113579 increases the number of one-dimensional position detectors and rotates their optical axes to resolve the image overlap phenomenon.
In the invention, for the application of gesture recognition, the following methods are proposed to resolve the image overlap phenomenon: (1) intensity modulation, (2) geometric modulation, (3) the Stephenson improvement, (4) master-slave wireless synchronization, and (5) wavelength modulation.
Uniqueness of light intensity and geometric configuration of point light source
As shown in fig. 3(a), a point light source is imaged by a one-dimensional optical lens. An ordinary point light source 10, after passing through the one-dimensional optical lens, produces a linear image 11. After the linear image 11 is read by the one-dimensional optical sensing array 12, an imaging signal I(x) with an approximately Gaussian intensity distribution is obtained in the transverse direction, as follows:
I(x) = I_0 e^{-(x-μ)^2/(2σ^2)}    (1)
where I_0 is the intensity at the center, σ is the standard deviation of the distribution, and μ is the mean position. In general, I(x) falls essentially to zero when x deviates from the mean position μ by about three standard deviations σ, so the signal where |x - μ| < 3σ can be defined as the effective imaging signal. The emission intensity P of the point light source 10 determines the central intensity I_0, and the light-emitting radius r of the point light source determines the standard deviation σ of the distribution. I_0 and σ can therefore serve as the uniqueness parameters of a point light source: plural point light sources can be identified by giving them different I_0 and σ. The method of identifying point light sources by I_0 is called intensity modulation; the method using σ is called geometric modulation.
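As a minimal numerical sketch of equation (1) and of the two uniqueness parameters (all intensities, positions, and widths below are invented example values): distinct I_0 gives intensity modulation, distinct σ gives geometric modulation.

```python
import numpy as np

def imaging_signal(x, I0, mu, sigma):
    # Eq. (1): Gaussian profile read out by the 1-D light sensing array
    return I0 * np.exp(-(x - mu) ** 2 / (2.0 * sigma ** 2))

x = np.arange(1024)  # pixel positions of an assumed M = 1024 array

# Intensity modulation: same sigma, distinct central intensities I0
scan_a = sum(imaging_signal(x, I0, mu, 4.0)
             for I0, mu in [(200.0, 300), (120.0, 500), (60.0, 700)])

# Geometric modulation: same I0, distinct widths sigma
scan_b = sum(imaging_signal(x, 100.0, mu, s)
             for mu, s in [(300, 2.0), (500, 4.0), (700, 8.0)])

# Effective imaging signal of one source: pixels with |x - mu| < 3*sigma
effective = np.abs(x - 500) < 3 * 4.0
```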
In the following, three point light sources are used as an example to illustrate the intensity modulation and the geometric modulation.
(1) Intensity modulation method
Figs. 3(b) to 3(e) illustrate intensity modulation. Fig. 3(b) shows three point light sources with different central intensities I_10, I_20, I_30 but the same σ when no image overlap occurs; their imaging signals 15, 16, 17 on the one-dimensional light sensing array are unique in central intensity, so the three effective imaging signals can be identified by threshold comparison or waveform detection (described later), and the position of the central intensity, i.e., the mean position μ, can be calculated accordingly. When the three point light sources approach the image overlap line, as in fig. 3(c), the three imaging signals 15, 16, 17 superimpose into a single superimposed imaging signal 18; threshold comparison then fails, but waveform detection can still clearly identify the mean positions. When the three point light sources are almost on the overlap line, as in figs. 3(d) to 3(e), the mean positions can no longer be distinguished; the three mean positions are then taken to coincide, which introduces some measurement error. If the light-emitting radius r of the point light source is made very small while keeping enough emission intensity for imaging, the standard deviation σ of the imaging signal I(x) can approach or fall below the width of a single pixel on the one-dimensional light sensing array 12, which reduces the above measurement error and resolves the image overlap phenomenon.
(2) Geometric modulation method
Figs. 3(f) to 3(i) illustrate geometric modulation. Three point light sources having the same I_0 but different σ_1, σ_2, σ_3 produce imaging signals 15, 16, 17, denoted I_1(x), I_2(x), I_3(x), on the one-dimensional light sensing array; the superimposed imaging signal 18 is then I(x), expressed as follows:
I_1(x) = I_0 e^{-(x-μ_1)^2/(2σ_1^2)}    (2)

I_2(x) = I_0 e^{-(x-μ_2)^2/(2σ_2^2)}    (3)

I_3(x) = I_0 e^{-(x-μ_3)^2/(2σ_3^2)}    (4)

I(x) = I_1(x) + I_2(x) + I_3(x)    (5)
wherein σ1,σ2,σ3Is known, and σ3>σ2>σ1
FIG. 3(f) shows the three point light sources lying simultaneously on the image overlap line, i.e., complete image overlap; FIGS. 3(g) to 3(h) show the three point light sources very close to one another, with most of their images overlapping; FIG. 3(i) shows them somewhat closer together, with a small portion of their images overlapping. When image overlap occurs, the problem the geometric modulation technique must solve is how to recover μ_1, μ_2, μ_3 from the measured superimposed imaging signal 18, I(x).
As shown in FIG. 3(j), when image overlap occurs, the method of elimination with Gaussian fitting is used to determine μ_1, μ_2, μ_3. The principle of elimination is to separate the imaging signals from the superimposed imaging signal 18 in order, from the largest point light source to the smallest. From I(x), the portion belonging to I_3(x) is found first; after Gaussian fitting, I_3(x) and μ_3 are obtained, and I_3(x) is then subtracted from I(x) so that I′(x) = I(x) - I_3(x). By the same procedure, I_2(x) is separated from I′(x), which finally yields I_1(x). The advantage of geometric modulation is that it is unaffected by image overlap, and the mean positions μ of the imaging signals of all the point light sources can be recovered. The disadvantages are that more mathematical computation is needed and that the sizes of the point light sources must be clearly distinguishable.
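A sketch of the elimination procedure under the stated assumptions (I_0 and the ordered widths σ_3 > σ_2 > σ_1 known in advance); taking the initial guess from the running maximum is this sketch's own choice, not something the text specifies.

```python
import numpy as np
from scipy.optimize import curve_fit

def gauss(x, I0, mu, sigma):
    return I0 * np.exp(-(x - mu) ** 2 / (2.0 * sigma ** 2))

def eliminate(x, I, I0, sigmas_desc):
    """Recover mu_3, mu_2, mu_1 from the superimposed signal I(x) of eq. (5),
    fitting and subtracting one Gaussian at a time, widest source first."""
    mus = []
    residual = I.astype(float)
    for sigma in sigmas_desc:                      # sigma_3 > sigma_2 > sigma_1
        mu0 = float(x[np.argmax(residual)])        # crude initial guess
        popt, _ = curve_fit(lambda xx, mu: gauss(xx, I0, mu, sigma),
                            x, residual, p0=[mu0])
        mus.append(popt[0])
        residual = residual - gauss(x, I0, popt[0], sigma)  # I'(x) = I(x) - I_k(x)
    return mus                                     # [mu_3, mu_2, mu_1]
```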
Method for improving time modulation
As in the Stephenson patent, a diode at the sensing end monitors the times at which the plural light sources are lit, achieving synchronized scanning of the one-dimensional light sensing arrays and improving on Reymond's wired approach. FIG. 4(a) shows the timing of the Stephenson time modulation method, i.e., the timing between the lighting of the plural point light sources and the data scanning of the one-dimensional light sensing arrays. Emitter 1 to Emitter 3 (the point light sources) are lit continuously and alternately at a fixed period, and the diode circuit generates a synchronization signal SYNC upon receiving each light signal, synchronously driving all the one-dimensional light sensing arrays to start scanning.
However, as shown in fig. 4(b), in actual gesture operation a point light source mounted on the hand or a finger, such as Emitter 2, may at any moment be occluded by the hand as it moves, which easily causes a synchronization error and makes the one-dimensional light sensing arrays acquire wrong data. The Stephenson patent proposes no solution for this. For this loss of synchronization, the invention proposes the following two methods.
(3) Stephenson improvement method
Fig. 4(c) shows the timing of the improved Stephenson time modulation method. For the light signals generated by Emitter 1 to Emitter 3, a microprocessor (μP) receiving the diode signal can, at a proper time (e.g., before use or at fixed intervals), measure the continuous alternating lighting period of Emitter 1 to Emitter 3 and then generate the synchronization signal SYNC at that same period, thereby overcoming the loss of synchronization.
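One possible software reading of this improvement, with invented details (the class structure and the smoothing weight are this sketch's assumptions): the μP keeps a smoothed estimate of the emitters' alternation period from the diode pulses and issues SYNC from that estimate, so a missed pulse (an occluded emitter) no longer desynchronizes the scan.

```python
class SyncGenerator:
    """Free-running SYNC: learn the alternation period from diode pulses,
    then schedule scans from the estimate rather than from each pulse."""

    def __init__(self, alpha=0.1):
        self.alpha = alpha      # smoothing weight for the period estimate
        self.period = None      # estimated lighting period (seconds)
        self.last_edge = None   # timestamp of the last diode pulse

    def on_diode_pulse(self, t):
        if self.last_edge is not None:
            dt = t - self.last_edge
            # low-pass the estimate so one bad edge cannot corrupt it
            self.period = dt if self.period is None else \
                (1 - self.alpha) * self.period + self.alpha * dt
        self.last_edge = t

    def next_sync_time(self):
        if self.period is None or self.last_edge is None:
            return None         # not yet locked to the emitters
        # next scan start, issued even if the current emitter is occluded
        return self.last_edge + self.period
```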
(4) Master-slave mode wireless synchronization method
Unlike the Reymond and Stephenson methods, in the master-slave wireless synchronization method a transmitting end (master) transmits an encoded synchronization signal by RF; the encoded synchronization signal contains the number of the point light source to be lit and the time at which to light it. After receiving the encoded synchronization signal, the receiving end (slave) decodes and analyzes this information and performs the correct synchronization control, achieving time modulation.
Fig. 4(d) is a schematic diagram of the master-slave wireless synchronization method. An RF transmitter 21, arranged at the one-dimensional position detector 20, transmits an encoded RF synchronization signal 22, which comprises an encoded signal 24 and a synchronization signal 25. The encoded signal 24 represents the number of the point light source to be lit, and the synchronization signal 25 represents the time at which it is to be lit. The encoded signal 24 may be a set of digital (binary) codes, square waves of specific durations, or specific numbers of pulses. An RF receiver 26 at the hand receives the encoded RF synchronization signal 22 in real time and outputs it to a decoder 27, which separates the encoded signal 24 from the synchronization signal 25 and outputs both to a point light source switch 28. The switch 28 can then light each point light source 29 individually and at the correct time according to its number. Therefore, whether or not a point light source is occluded, the number and timing of the lit point light source are clearly and correctly known at the one-dimensional position detector 20, which completely solves the occlusion problem encountered by Stephenson. Moreover, because of the encoding, the point light sources 29 can be lit in any order and at any time. The same effect can of course be achieved by placing the RF transmitter at the hand and the RF receiver at the one-dimensional position detector. As in conventional RF technology, modulation is required at the transmitting end and demodulation at the receiving end; this is existing technology and is not discussed further.
Alternatively, as shown in FIG. 4(e), the RF transmitter may transmit another RF synchronization signal 22 with a different code, whose purpose is to light all the point light sources simultaneously. Intensity modulation, geometric modulation, and wavelength modulation can thus be combined with the master-slave wireless synchronization method to control the uniqueness of the point light sources and achieve greater efficacy.
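A toy encoding of the coded synchronization signal 22 follows. The packet layout is invented for illustration (the patent only says the code carries a source number and a lighting time; the struct format and the 0xFF broadcast code standing in for FIG. 4(e) are this sketch's assumptions).

```python
import struct

BROADCAST = 0xFF  # assumed code meaning "light all point sources at once"

def encode_sync(source_id: int, on_time_us: int) -> bytes:
    # encoded signal 24 (source number) + synchronization signal 25 (timing)
    return struct.pack("<BI", source_id, on_time_us)

def decode_sync(packet: bytes):
    # decoder 27: split the packet back into source number and lighting time
    source_id, on_time_us = struct.unpack("<BI", packet)
    if source_id == BROADCAST:
        return "all", on_time_us
    return source_id, on_time_us

print(decode_sync(encode_sync(2, 500)))          # (2, 500)
print(decode_sync(encode_sync(BROADCAST, 500)))  # ('all', 500)
```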
(5) Wavelength modulation method
As in the aforementioned Schulz patents, wavelength modulation has been proposed to overcome the image overlap phenomenon, but without further specifics. Figs. 5(a) to 5(c) show several commonly used image sensing technologies. As shown in fig. 5(a), the photosensitive spectrum of a typical CCD or CMOS light sensing array (see SONY/CCD/ILX526A) usually covers wavelengths between 400 nm and 1000 nm, and a single photosensitive pixel measures from a few micrometers (μm) to tens of micrometers. As shown in fig. 5(b), the transmission spectra of RGB color filters (see SONY/CCD/ILX516K) achieve color image capture by wavelength filtering. As shown in fig. 5(c), the corresponding RGB color filter arrangement is applied to the RGB pixels of a typical two-dimensional CCD or CMOS light sensing array. Using these conventional image sensing technologies, a wavelength modulation method is proposed below to resolve the image overlap phenomenon.
FIGS. 5(d) to 5(e) illustrate wavelength modulation. For the image overlap caused by plural point light sources, wavelength modulation can effectively solve the problem when the number of point light sources is small (e.g., three or fewer). The key is to use plural point light sources of different wavelengths, together with color filters of matching wavelengths that filter and separate them, so that the plural point light sources are imaged simultaneously and individually at different pixel positions of the same one-dimensional light sensing array, or on different one-dimensional light sensing arrays. FIG. 5(d) is a schematic diagram of three point light sources with different wavelengths. The point light sources Emitter 1 to Emitter 3 mounted on the hand or fingers can be white-light LEDs, LEDs of suitable wavelengths, or semiconductor lasers. The light emitted by Emitter 1 to Emitter 3 is processed by appropriate optical bandpass filters to produce sources of wavelength λ_1 ± Δλ_1, λ_2 ± Δλ_2, λ_3 ± Δλ_3, where λ_1, λ_2, λ_3 are the center wavelengths of the bandpass filters and 2Δλ_1, 2Δλ_2, 2Δλ_3 are their full widths at half maximum (FWHM). The center wavelengths and FWHMs are chosen according to the transmission spectra of the RGB color filters. For example, for the transmission spectra of fig. 5(b), they can be set to λ_1 ≈ 450 nm (blue), λ_2 ≈ 550 nm (green), λ_3 ≈ 630 nm (red), with 2Δλ_1 ≈ 2Δλ_2 ≈ 2Δλ_3 ≈ 20 nm. If Δλ_1, Δλ_2, Δλ_3 are too large, the conventional RGB color filters lose their filtering function and the image overlap problem cannot be solved. Moreover, λ_1, λ_2, λ_3 need not lie in the visible band; they may be set in the infrared band, in which case suitable infrared light sources must be matched with suitable infrared color filters or infrared bandpass filters.
Fig. 5(e) shows the arrangement of RGB color filters on the one-dimensional light sensing array 12: the color filters alternate in R, G, B order, one per pixel. For the selected λ_1 ≈ 450 nm (blue), λ_2 ≈ 550 nm (green), λ_3 ≈ 630 nm (red), this alternating arrangement separates the three point light sources onto the R, G, B pixels respectively (the figure takes red light as the example of separation and imaging). The advantage is that a single one-dimensional light sensing array 12 can process the imaging of three point light sources simultaneously; the disadvantage is that the measurable spatial resolution of each point light source is reduced to one third.
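With the per-pixel R, G, B alternation of fig. 5(e), separating the three wavelength channels amounts to a stride-3 slice of the raw scan, which also makes the one-third resolution loss explicit. A minimal sketch (function name invented):

```python
import numpy as np

def split_rgb(scan: np.ndarray):
    """Raw readout of the 1-D array with pixels ordered R, G, B, R, G, B, ...
    Returns one sub-scan per wavelength channel, each 1/3 the resolution."""
    return scan[0::3], scan[1::3], scan[2::3]  # R, G, B pixel streams

scan = np.arange(12)  # stand-in for a 12-pixel readout
r, g, b = split_rgb(scan)
print(r)              # pixels 0, 3, 6, 9 -> the red-filtered channel
```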
To recover the measurable spatial resolution, as shown in FIG. 5(f), a three-color one-dimensional light sensing array can be used (see SONY/CCD/ILX516K): for the three point light sources at λ_1 ≈ 450 nm (blue), λ_2 ≈ 550 nm (green), λ_3 ≈ 630 nm (red), three parallel one-dimensional light sensing arrays are provided, each covered by a color filter of a different wavelength, achieving wavelength modulation.
In addition, since color two-dimensional CCD or CMOS light sensing arrays are mass-produced for digital cameras, cost considerations favor adopting them in place of one-dimensional CCD or CMOS arrays: only the RGB color filter arrangement and the pixel scanning mode need to be changed to achieve wavelength modulation with an existing color two-dimensional CCD or CMOS light sensing array. As shown in fig. 5(g), the RGB color filters on the color two-dimensional array alternate in R, G, B order, row by row. Fig. 5(h) shows another RGB color filter arrangement on the two-dimensional array. Fig. 5(i) shows the scanning and reading of pixel images on the color two-dimensional CCD or CMOS light sensing array: scanning can be performed by a microprocessor μP, a row decoder, and a column decoder, with random access to any pixel #ij (i.e., ith row, jth column).
2. Processing of background light
Among the aforementioned U.S. patents, only Reymond and Schulz mention processing of the measurement data, and only for removing the background light source. As mentioned above, their processing removes the background light by threshold comparison, implemented electronically or in software. In general, threshold comparison presupposes that the background light signal is a fixed DC value that does not vary with time. For background light that varies spatially and temporally, threshold comparison is completely ineffective. The Reymond and Schulz patents give no further details on subsequent data processing.
Dynamic background light removal
In a typical indoor environment, the background light sources are usually fluorescent lamps and halogen (or tungsten) lamps; figs. 6(a) and 6(b) show their emission spectra. Basically, these light sources can make the imaging signals at the light sensing array unstable, covered, or saturated. These effects are collectively called ambient light interference, and the noise they cause on the light sensing array is called ambient light interference noise. Against such noise, the existing simple threshold comparison method fails completely and cannot recover correct imaging signals. A method for removing ambient light interference noise is proposed below.
The imaging signal I(x, t_k) obtained by scanning the one-dimensional light sensing array at time t_k is the superposition of the imaging signal S(x, t_k) of the point light sources and the ambient light interference noise signal N(x, t_k), as follows:
I(x, t_k) = S(x, t_k) + N(x, t_k)    (6)

S(x, t_k) = \sum_{m=0}^{M-1} S(x_m, t_k)    (7)

N(x, t_k) = \sum_{m=0}^{M-1} N(x_m, t_k)    (8)
where S(x, t_k) is the effective imaging signal formed by the several point light sources, M is the total number of pixels of the one-dimensional light sensing array, which can be 2^a (e.g., a = 10, M = 2^10 = 1024), and x_m is the position of the m-th pixel. In general, the ambient light interference noise signal N(x, t_k) comes mostly from the lamps used in the indoor environment and from their light reflected off other objects, and to a small extent from the dark current of the light sensing array itself and other electronic noise in the circuit. Moreover, (1) the lamps are driven by AC power, so their luminous intensity naturally alternates; (2) the user may adjust the lamp intensity at any time, or even switch the lamps on or off; and (3) a lamp may sit at the same height as the light sensing array, and when it is behind the user, the motion of the user's body directly disturbs the light it emits. In time, therefore, the ambient light interference noise signal N(x, t_k) is not a stable constant but a function of time, and the interference described in (3) in particular seriously affects its stability. This is one reason the existing threshold comparison method fails; such interference is defined here as the temporal ambient light interference signal. In addition, a lamp and its shade may have a special geometric structure, and highly reflective objects (such as mirrors, or metal buttons on clothing) may exist in the environment; after such a light source is imaged by the one-dimensional lens, the character and size of its imaging signal can resemble the effective imaging signal of a point light source, and in the worst case the two overlap. This is another reason the existing threshold comparison method fails; such interference is defined as the spatial ambient light interference signal. Ambient light with these temporal and spatial characteristics is called dynamic background light, and its imaging signal on the light sensing array is called the dynamic background light signal. Below, a real-time ambient light interference signal removal method and a Fourier signal processing method (i.e., spatial ambient light interference signal removal) are proposed; combined with threshold comparison or waveform detection, they effectively solve the interference of the dynamic background light and extract the effective imaging signals of the point light sources. Hereinafter, the two methods are collectively called the dynamic background light signal removal method.
(1) Real-time ambient light interference signal removal method
Because the light sensing array responds linearly to light and the one-dimensional optical lens images linearly, as shown in figs. 5(j) to 5(k), another one-dimensional light sensing array 13 can be used to acquire the dynamic background light signal N′(x, t_k) simultaneously and remove it from the original signal I(x, t_k) of equation (6):
I′(x, t_k) = I(x, t_k) - N′(x, t_k)    (9)

Substituting equation (6) into equation (9) gives

I′(x, t_k) = S(x, t_k) + ΔN(x, t_k)    (10)

where

ΔN(x, t_k) = N(x, t_k) - N′(x, t_k)    (11)
here, N' (x) is obtained by hardware, i.e. by using another one-dimensional photo sensing array 13m,tk) A signal, hereinafter referred to as a noise one-dimensional photo sensing array 13; the original one-dimensional photo sensing array 12 is called as the measuring one-dimensional photo sensing array 12. The noise is detected by the one-dimensional optical sensing array 13, on which a proper optical filter (not shown) must be added to filter all point light sources but let ambient light pass; the position of the sensor is as close as possible and the one-dimensional optical sensing array is used for parallel measurement. In addition, for the scanning reading of the imaging signal of the one-dimensional optical sensing array 13 for noise, the scanning reading of the imaging signal of the one-dimensional optical sensing array 12 for measurement is synchronized, and an electronic amplifier is used on the signal reading electronic circuit of the one-dimensional optical sensing array 13 for noise to properly amplify the signal of the dynamic background light, so that Δ N (x, t) is obtainedk) Can be
ΔN(x, t_k) = DC + δn(x, t_k)    (12)
Equation (10) then becomes

I′(x, t_k) ≈ S(x, t_k) + δn(x, t_k) + DC    (13)
where DC is a low-frequency, nearly direct-current signal and δn(x, t_k) can be regarded as a spatial ambient light interference signal of higher spatial frequency. In addition, the luminous intensity of the point light sources can be adjusted appropriately so that their imaging signals satisfy
δn(x, t_k) << S(x, t_k)    (14)
Then, combined with the threshold comparison method, the effective imaging signal S(x, t_k) can be obtained: based on the DC and δn(x, t_k) values in equation (13), the threshold is set to a suitable value larger than DC + δn(x, t_k), and the effective imaging signal S(x, t_k) is picked out. For the point light sources used in intensity modulation, the effective imaging signal S(x, t_k) can also be obtained by the method of waveform detection (Method of Profile Detection), which exploits the characteristics of the point-source imaging waveform. Compared with the ambient background light sources, the point light sources used in the invention have a far greater luminous intensity per unit area and a much smaller luminous area, so the waveform of their imaging signal is characterized by a sharp peak. That is, compared with the imaging signal of a background light source, the distribution standard deviation σ of a point-source imaging signal is relatively small (e.g., 20 to 30 μm), its central intensity I_0 is relatively large, and the slope of its waveform change is relatively large. The effective imaging signal S(x, t_k) of a point light source can therefore be picked out according to the distribution standard deviation σ, the central intensity I_0, and the waveform slope.
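A sketch of equation (9) followed by a waveform-style check; scipy's peak finder is used here as a stand-in for the patent's waveform detection, and the threshold and width limit are illustrative numbers, not values from the text.

```python
import numpy as np
from scipy.signal import find_peaks

def effective_signals(I, N_prime, threshold, max_width_px=6):
    """I: scan of the measuring array 12; N_prime: synchronized scan of the
    noise array 13. Returns peak pixel positions of the point sources."""
    I_clean = I - N_prime                       # eq. (9): remove background
    # keep only sharp peaks: tall (large I_0) and narrow (small sigma)
    peaks, props = find_peaks(I_clean, height=threshold,
                              width=(None, max_width_px))
    return peaks, props["peak_heights"]
```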
In general, a point light source is powered by an ordinary battery, so raising its luminous intensity to increase the S/N ratio as described above increases battery consumption and shortens battery life. Therefore, without raising the luminous intensity of the point light source, signal processing must be used to reduce ΔN(x, t_k) and thereby improve the S/N ratio.
(2) Method for removing spatial ambient light interference signal (Fourier signal processing method)
The objective of conventional Fourier optics is to filter unnecessary geometric structures or noise out of a signal in the spatial domain so as to obtain the desired geometry. The basic method is to remove, in the frequency domain, the characteristic frequencies of the structures or noise to be eliminated. The Fourier optics technique can therefore be used to reduce ΔN(x, t_k) and achieve a still higher S/N ratio. Taking the Fourier transform of equation (10) gives
I′(ω_n, t_k) = S(ω_n, t_k) + ΔN(ω_n, t_k)    (15)
where

S(ω_n, t_k) = \sum_{m=0}^{M-1} S(x_m, t_k) e^{-j(2π/M)mn}    (16)

ΔN(ω_n, t_k) = \sum_{m=0}^{M-1} ΔN(x_m, t_k) e^{-j(2π/M)mn} = \sum_{m=0}^{M-1} [DC + δn(x_m, t_k)] e^{-j(2π/M)mn}    (17)
Following the Fourier method above, a band-pass filter function BPF(ω_n) is applied in the frequency domain, i.e., the low frequencies produced by the DC signal and the high frequencies produced by the δn(x, t_k) signal are filtered out, and an inverse Fourier transform is then performed, yielding a clean imaging signal close to that of the original point light sources. Applying band-pass filtering and the inverse Fourier transform to equation (15) gives
I′(x_m, t_k) = \sum_{n=0}^{M-1} e^{j(2π/M)mn} {[S(ω_n, t_k) + ΔN(ω_n, t_k)] × BPF(ω_n)}    (18)
Equation (18) can be written in simplified form as
I'(x_m, t_k) = S'(x_m, t_k) + \delta' n(x_m, t_k)    (19)
wherein,
S'(x_m, t_k) = \sum_{n=0}^{M-1} e^{j\frac{2\pi}{M}mn} \left\{ S(\omega_n, t_k) \times BPF(\omega_n) \right\}    (20)
\delta' n(x_m, t_k) = \sum_{n=0}^{M-1} e^{j\frac{2\pi}{M}mn} \left\{ \Delta N(\omega_n, t_k) \times BPF(\omega_n) \right\}    (21)
The band-pass filter function BPF(ω_n) can be:
BPF(\omega_n) \equiv \begin{cases} 0, & \omega_n < \omega_L \\ A, & \omega_L \le \omega_n \le \omega_H \\ 0, & \omega_H < \omega_n \end{cases}    (22)
That is, in the frequency domain, the components with frequencies below ω_L or above ω_H are set to zero, which removes most of the frequency content of ΔN(ω_n, t_k), while the remaining components are multiplied by a real number A. When A > 1.0, the imaging signal S'(x_m, t_k) of the original point light source is amplified so that δ'n(x_m, t_k) << S'(x_m, t_k); that is, a still higher S/N ratio is achieved. Hence,
I'(x_m, t_k) \approx S'(x_m, t_k)    (23)
Finally, the effective imaging signal S(x, t_k) is obtained with the aid of a threshold comparison or waveform detection method. As mentioned above, the dynamic backlight elimination method relies on a second, noise-only light sensing array to acquire the dynamic background light, which increases hardware cost and complexity. A software-only method for removing the ambient light interference signal in near real time is therefore proposed below.
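As a minimal numerical sketch of this band-pass procedure (equations (15) to (23)), assuming the scan line is held in a NumPy array; the function name, the bin cutoffs n_low and n_high (standing in for ω_L and ω_H), and the gain (standing in for A) are illustrative choices, not values from the specification:

```python
import numpy as np

def bandpass_denoise(scan, n_low, n_high, gain=2.0):
    """Band-pass filter one scan line I(x, t_k), per equations (15)-(23):
    zero the bins below omega_L (DC background) and above omega_H
    (pixel-level noise), multiply the kept band by a real gain A > 1,
    then inverse-transform back to the spatial domain."""
    m = len(scan)
    spectrum = np.fft.fft(scan)                  # I'(omega_n, t_k), eq. (15)
    bins = np.abs(np.fft.fftfreq(m) * m)         # distance of each bin from DC
    bpf = np.where((bins >= n_low) & (bins <= n_high), gain, 0.0)  # eq. (22)
    return np.real(np.fft.ifft(spectrum * bpf))  # eq. (18)
```

The result approximates equation (23); a threshold comparison on the filtered line then isolates the Gaussian peak S'(x_m, t_k).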
(3) Near-real-time ambient light interference signal removal method
The near-real-time ambient light interference removal method removes the temporal ambient light interference signal purely in software, without adding another one-dimensional photo-sensing array for noise. As mentioned above, temporal ambient light interference, e.g., when the user's body movement directly blocks part of the ambient light, causes the background light signal to distort and fluctuate strongly, and still seriously affects the accurate acquisition of the point light source signal. Relative to the scan period of a conventional one-dimensional photo-sensing array (e.g., 10^-3 sec/scan), the user's body movement is slow. Thus, for the imaging signals I(x, t_k), I(x, t_{k-1}) obtained in two consecutive scans,
I(x, t_k) = S(x, t_k) + N(x, t_k)    (24)

I(x, t_{k-1}) = S(x, t_{k-1}) + N(x, t_{k-1})    (25)
the variation of the dynamic background light signals N(x, t_k), N(x, t_{k-1}) over the interval Δt = t_k - t_{k-1} can be regarded as small compared with the point light source imaging signal S(x, t_k). Subtracting equation (25) from equation (24) therefore gives
I'(x, t_k) = I(x, t_k) - I(x, t_{k-1}) = \Delta S(x, t_k) + \Delta N(x, t_k)    (26)
\Delta S(x, t_k) = S(x, t_k) - S(x, t_{k-1})    (27)

\Delta N(x, t_k) = N(x, t_k) - N(x, t_{k-1})    (28)

Wherein,

\Delta S(x, t_k) = \begin{cases} G(\mu_k) - G(\mu_{k-1}), & \text{point light source moving} \\ 0, & \text{point light source static} \end{cases}    (29)

\Delta N(x, t_k) = N(x, t_k) - N(x, t_{k-1}) = \delta n(x, t_k)    (30)
In the near-real-time ambient light interference removal method, equations (29) and (30) describe the characteristics of the point light source imaging signal and the dynamic background light signal. That is, when the point light source is moving, the differenced point light source imaging signal appears as the subtraction of two Gaussian signals G(μ_k), G(μ_{k-1}) at two different positions; when the point light source is static, it is zero. The dynamic background light term δn(x, t_k) has the same characteristics as in equation (12), so the Fourier signal processing method described above can be used to remove the spatial ambient light interference signal δn(x, t_k). However, when the point light source is static, the imaging signal of equation (29) remains zero after the Fourier signal processing; that is, the original point light source imaging signal cannot be recovered. To resolve this, a tracking method (Tracking) can be used to estimate the current imaging position of the point light source from the previous data.
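Building on the sketch above, the differencing scheme of equations (24) to (30) can be outlined as follows; the static-detection threshold and the hold-last-estimate tracking fallback are illustrative assumptions:

```python
import numpy as np

def remove_ambient(scan_k, scan_k1, last_pos, n_low, n_high, static_eps=1e-3):
    """Near-real-time ambient light removal, eqs. (24)-(30).
    Differencing two consecutive scans cancels the slowly varying
    background N(x, t); bandpass_denoise (sketched above) removes the
    residual spatial noise delta-n(x, t_k).  A moving source leaves
    G(mu_k) - G(mu_k-1), whose positive lobe marks the new position;
    a static source leaves ~zero, so the previous estimate is kept."""
    diff = np.asarray(scan_k, float) - np.asarray(scan_k1, float)  # eq. (26)
    clean = bandpass_denoise(diff, n_low, n_high)
    if np.max(np.abs(clean)) < static_eps:        # eq. (29), static case
        return last_pos                           # tracking fallback
    return int(np.argmax(clean))                  # peak of +G(mu_k)
```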
3. Processing of data (calculation of spatial resolution and mean position)
Fig. 7(a) is a schematic diagram of the optical system used by Reymond.

In the world coordinate system O(X, Y, Z), three one-dimensional photo-sensing arrays S_1, S_2, S_3 are arranged with their center points at (-h, 0, 0), (0, 0, 0), and (h, 0, 0) and their long axes oriented as shown; in addition, three one-dimensional lenses L_1, L_2, L_3 of equal focal length f are arranged with optical axes Z_1, Z_2, Z_3 and focusing directions as shown. A point light source o(x_1, y_1, z_1) images onto S_1, S_2, S_3 at y_s1, y_s2, y_s3, respectively, and the Z axis of the world coordinate system O(X, Y, Z) is the visual axis of the optical system. The spatial position of the point light source o(x_1, y_1, z_1) can then be obtained from the following positioning formulas (for the detailed derivation, refer to the aforementioned three Taiwan patents):
x_1 = \frac{y_{s1} + y_{s3}}{y_{s1} - y_{s3}}\, h    (31)

y_1 = \frac{-2h}{y_{s1} - y_{s3}}\, y_{s2}    (32)

z_1 = \left( 1 + \frac{2h}{y_{s1} - y_{s3}} \right) f    (33)
where f and h are known, and y_s1, y_s2, y_s3 are the measured values.
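Equations (31) to (33) translate directly into code; a minimal sketch (function and variable names are illustrative, units as in the text, e.g., millimeters):

```python
def locate_point(ys1, ys2, ys3, f, h):
    """Three-dimensional position of the point light source from the
    imaging positions y_s1, y_s2, y_s3, per equations (31)-(33)."""
    d = ys1 - ys3                     # common denominator
    x1 = (ys1 + ys3) / d * h          # eq. (31)
    y1 = -2.0 * h / d * ys2           # eq. (32)
    z1 = (1.0 + 2.0 * h / d) * f      # eq. (33)
    return x1, y1, z1
```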
The positioning error of the optical system can be estimated from the following formulas:
\Delta x_1 = \frac{(z_1 - f)}{f}\, \Delta y_{s1}    (34)

\Delta y_1 = -\frac{(z_1 - f)}{f}\, \Delta y_{s2}    (35)

\Delta z_1 = -\frac{(z_1 - f)^2}{hf}\, \Delta y_{s1}    (36)
Equations (34) to (36) show clearly that the positioning errors Δx_1, Δy_1, Δz_1 in each direction are determined by the optical parameters f and h, the perpendicular distance z_1, and the measurement errors Δy_s1, Δy_s2, Δy_s3. The Δx_1, Δy_1, Δz_1 obtained at the minimum Δy_s1, Δy_s2, Δy_s3 can therefore be defined as the spatial resolution of the optical system.
As mentioned above, without dynamic background light interference, the intensity I(x) of the effective imaging signal on the one-dimensional photo-sensing array, after the point light passes through the one-dimensional lens, is approximately Gaussian (see equation (1)). Since the one-dimensional photo-sensing array consists of a row of discrete sensing pixels of finite width separated by gaps, as shown in Fig. 3(a), the actually measured imaging signal I(x) becomes:
I(x) = \sum_i \bar{I}(x_i)\, \Delta w    (37)
where Ī(x_i) is the per-unit-length sensing average of the i-th sensing pixel, which depends on parameters such as the pixel geometry, the photoelectric conversion efficiency, the intensity and wavelength distribution of the incident light, and the ambient temperature; and Δw is the average width of the sensing pixels. If only the position x_i of the maximum value Ī(x_i) (i.e., the position of the brightest sensing pixel) is taken as the measured values y_s1, y_s2, y_s3, the minimum measurement errors Δy_s1, Δy_s2, Δy_s3 equal the width Δw of a single sensing pixel. The evaluation of the spatial resolution is illustrated by the following example.
Assume the following known parameters:

f = 20 mm, h = 200 mm, z_1 = 2000 mm

Δy_s1 = Δy_s2 = Δy_s3 = Δw ≈ 5 μm

Substituting into equations (34) to (36) gives the spatial resolution:

Δx_1 ≈ 0.5 mm, Δy_1 ≈ 0.5 mm, Δz_1 ≈ 5 mm
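These figures can be reproduced directly from equations (34) to (36); a quick numerical check (variable names illustrative):

```python
f, h, z1 = 20.0, 200.0, 2000.0       # mm
dy = 0.005                           # mm, i.e. the 5 um pixel width

dx1 = (z1 - f) / f * dy              # eq. (34): 0.495 mm ~ 0.5 mm
dy1 = (z1 - f) / f * dy              # eq. (35): same magnitude
dz1 = (z1 - f) ** 2 / (h * f) * dy   # eq. (36): 4.9 mm ~ 5 mm
print(dx1, dy1, dz1)
```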
When the brightest-pixel position is taken as the imaging position, the average sensing pixel width Δw thus determines the spatial resolution. For a point light source displacement smaller than the spatial resolution, as shown in Fig. 7(b) (upper plot: imaging signal before the movement; lower plot: after the movement), the displacement of the imaging signal on the one-dimensional photo-sensing array is smaller than the width Δw of one sensing pixel; the brightest-pixel position therefore does not change, and movements below the spatial resolution cannot be resolved. To resolve such sub-pixel variations of the imaging signal, Gaussian fitting or the following statistical formula must be used to obtain the average position μ,
\mu = \frac{\sum_{i=0}^{M-1} x_i \bar{I}(x_i)}{\sum_{i=0}^{M-1} \bar{I}(x_i)}    (38)
where M is the total number of sensing pixels on the one-dimensional photo-sensing array. Generally, an ADC (Analog-to-Digital Converter) converts the analog voltage value Ī(x_i) sensed by each pixel into a digital value; a ten-bit ADC, for example, readily resolves 1024 steps of variation in the input analog voltage. Using the two methods above to calculate the average position μ, the resolution of the three-dimensional position measurement can be raised to the micrometer (μm) level; if the measurement distance z_1 is further reduced, the resolution can reach the nanometer (nm) level. The invention can therefore also be used as a non-contact ultra-precision measuring instrument.
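A minimal sketch of the statistical mean of equation (38), assuming the digitized pixel values are available as an array:

```python
import numpy as np

def mean_position(pixels):
    """Sub-pixel imaging position mu per equation (38): the
    intensity-weighted average over all M sensing pixels, which
    shifts smoothly as the Gaussian spot moves between pixels."""
    pixels = np.asarray(pixels, dtype=float)
    x = np.arange(len(pixels))                   # pixel indices x_i
    return float(np.sum(x * pixels) / np.sum(pixels))
```

With a ten-bit ADC, each pixel contributes one of 1024 intensity levels, so μ resolves displacements far below the pixel width Δw.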
4. Expansion of the system architecture (dead-angle compensation, viewing-angle enlargement, and visual axis tracking)
As is well known, any optical system has a limited viewing angle and dead angles, and one-dimensional optical positioning systems are no exception; none of the domestic and foreign patents mentioned above proposes a concrete solution. As shown in Fig. 8(a), for the one-dimensional optical positioning system 50, the maximum viewing angle 51 limits the range in which the point light source 52 can move (below, only the one-dimensional, i.e., horizontal, viewing angle is used as the example).
As shown in Fig. 8(b), which illustrates how a dead angle arises and is resolved: when the point light source 52 is shielded by an obstacle 53 (such as the user's body), one or more additional one-dimensional optical positioning systems 50' can be placed at suitable positions in space to compensate for the dead angle.

As shown in Fig. 8(c), which illustrates the method of enlarging the viewing angle: one or more additional one-dimensional optical positioning systems 50' are placed at suitable positions in space, expanding the viewing angle to 51'.

As shown in Fig. 8(d), which illustrates the visual axis tracking method: when the point light source 52' is about to move out of the original viewing angle 51, the one-dimensional optical positioning system 50 can rotate its own visual axis 54 to a suitable angle 54' according to an estimate of the movement of the point light source 52, so that the point light source 52' stays within the new viewing angle 51'.
Therefore, as shown in Figs. 8(b) to 8(d), to achieve dead-angle compensation, viewing-angle enlargement, and visual axis tracking, the one-dimensional optical positioning system 50 must have the visual axis rotation and positioning functions shown in Fig. 8(e). The visual axis rotation function uses conventional techniques (rotating mechanism, motor, angle measurement, and so on) to rotate the visual axis 54 horizontally (about the Y axis, through the angle Θ) and vertically (about the X axis, through the angle Φ). The positioning function uses a plurality of point light sources 55 fixed on the housing of each one-dimensional optical positioning system 50 (hereinafter, positioning calibration light sources) to position several systems relative to one another. That is, when dead-angle compensation or viewing-angle enlargement requires several one-dimensional optical positioning systems 50 to be placed at arbitrary spatial positions, measuring each other's positioning calibration light sources 55 establishes the positions and visual axis angles of all the systems.
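As an illustrative geometric sketch of the tracking computation (the patent does not fix a particular control law): given a target position expressed in the positioner's own frame, two arctangents give the horizontal and vertical rotations that bring the visual axis onto the target.

```python
import math

def boresight_angles(x, y, z):
    """Rotations pointing the visual axis (the Z axis) at (x, y, z):
    Theta about the Y axis (horizontal), Phi about the X axis
    (vertical).  A simple geometric model, not the patent's specific
    control law; the target may be one point light source or the
    group center coordinate."""
    theta = math.degrees(math.atan2(x, z))               # horizontal
    phi = math.degrees(math.atan2(y, math.hypot(x, z)))  # vertical
    return theta, phi
```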
5. Expansion of system applications
(1) Application of virtual input device
In the present invention, the virtual input device simulates, by device emulation, the input of existing physical input devices such as the mouse, keyboard, remote controller, and touch screen for computers, PDAs, mobile phones, game machines, televisions, and the like, completely or partially replacing the physical input device without any physical mechanism. The virtual input method is described below for each of these physical input devices.
Fig. 9(a) is a schematic diagram showing a general mouse.
In a typical operating environment such as Windows, the system is operated through mechanical actions of a mouse 61 (moving, pressing, releasing, clicking once or twice) relative to a displayed screen 60 (hereinafter, the physical operation screen). A cursor 61' marks the position on the physical operation screen 60 corresponding to the mouse 61. A way of achieving these mouse operations through gestures is described in detail in Taiwan patent application No. 096116210, which uses a single point light source to simulate the operation of a mouse.
The device simulation input method makes a virtual input device correspond to an actual input device, and simulates and recognizes the finger operations that the actual device requires, thereby achieving virtual input. It mainly provides a program corresponding to the virtual operation screen, a program defining the virtual device geometry and the corresponding operating fingers, and a program defining and recognizing operation gestures. The method is described below using as an example a mouse with a left button, middle button, right button, and middle wheel, operated with three fingers; the other physical input devices are discussed afterwards.
Fig. 9(b) is a schematic diagram of a mouse device simulation input method.
Program corresponding to virtual operation screen
For a physical operation screen 60 of actual size L (length) × H (width), a virtual operation screen 60' of size L' × H' can be defined anywhere in space. The virtual operation screen 60' is the spatial counterpart of the physical operation screen 60; the geometric correspondence is one-to-one, with L' = m × L and H' = n × H, where m and n can be real numbers greater than, equal to, or less than 1. Moving the point light source on the finger within the virtual operation screen 60' therefore yields a one-to-one corresponding position on the physical operation screen 60. The virtual operation screen 60' can be set in the air or on any fixed surface (e.g., a desktop or wall, to facilitate finger operation).
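A minimal sketch of this one-to-one correspondence (names illustrative; coordinates measured from each screen's own origin):

```python
def to_physical(u, v, m, n):
    """Map a fingertip position (u, v) on the virtual operation screen
    (size L' x H' = mL x nH) to the cursor position on the physical
    operation screen (size L x H), per L' = m * L and H' = n * H."""
    return u / m, v / n
```

For example, m = n = 0.5 lets a small desktop region drive the full screen, while m = n > 1 trades reach for finer control.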
Virtual device geometry definition and program corresponding to operating finger
Next, the virtual device geometry is defined: for the function keys on the virtual device, their physical positions, sizes, and physical actions are specified, together with the correspondence between fingers and function keys. The physical positions, sizes, and actions of the function keys are used to judge the physical interaction between a finger and a key, i.e., whether the finger falls on the key and performs a key-press operation; the finger-to-key correspondence defines which key each finger operates. For example, the right index finger 62 corresponds to the left button 62', the right middle finger 63 to the middle button and wheel 63', and the right ring finger 64 to the right button 64'. In actual virtual input operation, the user's hand thus behaves as if holding a virtual mouse of the same physical structure and size, operating it on the virtual operation screen 60'. The correspondence between fingers and function keys can be changed to suit the user's habits, and may also be one-to-many, i.e., several function keys may be operated by the same finger.
Definition and recognition program for operation gestures
In the device simulation input method, the gestures for mouse movement, key press, key release, single click, and double click follow the basic approach of Taiwan patent application No. 096116210: the gestures of the index finger 62, middle finger 63, and ring finger 64 are each defined by consecutively occurring gesture units. A gesture unit consists of three consecutive physical states: a brief pause state I, a specific motion state II, and another brief pause state III. For example, the left-button press gesture can be defined as the index finger 62 passing through a brief pause, a short top-to-bottom linear motion, and another brief pause; the left-button release gesture as a brief pause, a short bottom-to-top linear motion, and a brief pause; a single left click as a press gesture followed by a release gesture; and a double left click as two consecutive single left clicks. The middle-button and right-button gestures are defined in the same way as those of the left button. The wheel rotation gesture can be defined as the middle finger 63 passing through a brief pause, a short forward or backward linear motion, and a brief pause. The position of the cursor 61' can be defined as the group center coordinate when the motion states of the three fingers are relatively static (see below). The definitions above simulate 2D mouse operation to match ordinary user habits; other definitions can also be made according to the generalized gestures of Taiwan patent application No. 096116210.
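A condensed sketch of recognizing one gesture unit as the three-state sequence pause I, motion II, pause III; the speed threshold, minimum pause length, and minimum travel are illustrative assumptions, not values from the specification:

```python
import numpy as np

def classify_gesture_unit(samples, pause_eps=1.0, min_pause=3, min_move=5.0):
    """Detect one gesture unit (pause I, short linear motion II, pause III)
    in a fingertip trajectory.  Returns the net displacement vector of
    state II (e.g. downward -> key press, upward -> key release), or
    None if no complete unit is present.
    samples: sequence of (x, y, z) fingertip positions from the positioner."""
    p = np.asarray(samples, dtype=float)
    speed = np.linalg.norm(np.diff(p, axis=0), axis=1)
    moving = speed > pause_eps
    if not moving.any():
        return None                                   # no state II yet
    start = int(moving.argmax())                      # first moving sample
    stop = len(moving) - int(moving[::-1].argmax())   # one past last moving sample
    if start < min_pause or (len(moving) - stop) < min_pause:
        return None                                   # pauses I / III too short
    motion = p[stop] - p[start]                       # net displacement, state II
    return motion if np.linalg.norm(motion) >= min_move else None
```

A key-press unit followed by a key-release unit then forms a single click, and two consecutive clicks form a double click, mirroring the definitions above.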
Fig. 9(c) is a schematic diagram of simulation input of the remote controller device.
Because a remote controller is simple to operate, generally one key at a time, it can be operated by a single finger performing mouse-like actions. For the remote controller it therefore suffices to provide a virtual operation screen 60', define the virtual device geometry 75, map a single finger 74 to all function keys, and display an auxiliary image 76 of the remote controller on the physical operation screen 60; aided by the corresponding single-finger cursor 74', the finger 74 can be moved onto a function key and perform the key-press and key-release actions, achieving virtual remote controller input. In addition, the virtual device geometry 75 can be presented visually as a virtual stereoscopic image using existing virtual reality technology; the single finger 74 can then operate the stereoscopic device geometry 75 directly, or be virtualized as a stereoscopic finger that operates it, improving the convenience of virtual operation.
Fig. 9(d) is a schematic diagram of a simulation input of the touch screen device.
The operation of a physical touch screen is usually simple: a single finger performs mouse-like actions on the physical operation screen 60. For touch screen device simulation input, it therefore suffices to define a virtual operation screen 60' and use a single finger 74, which, aided by the corresponding single-finger cursor 74', performs the simulated input operations on the physical operation screen 60. In addition, the virtual operation screen 60' can be presented visually as a virtual stereoscopic image using existing virtual reality technology; the single finger 74 can then operate the stereoscopic operation screen 60' directly, or be virtualized as a stereoscopic finger that operates it, improving the convenience of virtual operation.
As shown in fig. 9(e), a schematic diagram of device emulation input for a keyboard is shown.
A keyboard generally has many keys and requires many key presses. The device simulation input method is nevertheless basically similar to that of the remote controller: provide a virtual operation screen 60', define the virtual device geometry 80, map a few fingers 74 (e.g., three) to all function keys, and display an auxiliary image 85 of the keyboard on the physical operation screen 60; aided by several cursors 74', each corresponding to an operating finger 74, the fingers 74 can be moved onto the function keys and perform the key-press and key-release actions, achieving virtual keyboard input. As mentioned above, the virtual operation screen 60' can also be defined on a fixed physical surface (e.g., a desktop) on which a printed keyboard (paper printed with the keyboard layout) is placed, so that the user approaches physical keyboard input. In addition, the virtual device geometry 80 can be presented visually as a virtual stereoscopic image using existing virtual reality technology; the fingers 74 can then operate the stereoscopic device geometry 80 directly, or be virtualized as stereoscopic fingers that operate it, improving the convenience of virtual operation.
(2) Application of simulator
The applications of virtual input are described above. As mentioned, the one-dimensional optical positioner of the present invention not only offers high operation speed, high spatial resolution, and low manufacturing cost, but also allows several sets of one-dimensional optical positioners to be used simultaneously to expand the measurement range of the point light sources and compensate for dead angles. With these characteristics, the invention can also be applied to the field of simulators.
The method comprises the following steps:
The plurality of point light sources used in the invention (e.g., three) are mounted on a physical implement: a racket (e.g., for tennis, badminton, or billiards), a bat (e.g., for baseball or softball), a club or shaft (e.g., for golf, hockey, pool, or western fencing), a glove (e.g., for baseball, softball, or boxing), a ball (e.g., for baseball, softball, basketball, football, volleyball, or bowling), a game toy (e.g., a toy gun), a remote-control toy (e.g., a remote-control car, airplane, or helicopter), or a controller of a home game machine. The one-dimensional optical positioners of the invention measure the positions of the mounted point light sources in real time, from which the motion trajectory and physical quantities of motion of the implement are calculated. Furthermore, through virtual reality technology, a virtual object can be defined in a virtual space to correspond directly to the motion state of the physical implement (such as the racket), and can be made to interact with other virtual objects in the virtual space (such as a ball) in a nearly lifelike, natural way according to physical principles (such as hitting the ball), achieving the simulation of various sports, shooting, driving, flying, and the like.
Drawings
FIG. 1 is a schematic diagram of a glove used in a virtual reality environment;
FIG. 2 is a diagram illustrating an image overlay phenomenon of a one-dimensional optical system;
FIG. 3(a) is a schematic diagram of the imaging of a one-dimensional optical lens by a point source of light;
FIGS. 3(b) -3 (e) are schematic diagrams illustrating intensity modulation;
FIGS. 3(f) -3 (j) are schematic diagrams illustrating geometric modulation;
FIG. 4(a) is a schematic diagram of the Stephenson time modulation timing sequence;
FIG. 4(b) shows the defect of the Stephenson time modulation method;
FIG. 4(c) is a schematic diagram of the Stephenson time modulation modification timing sequence;
FIG. 4(d) to FIG. 4(e) are schematic diagrams illustrating the timing of the synchronization signal in the master-slave wireless synchronization method;
FIG. 5(a) is a schematic diagram of the spectrum of light induced by a typical CCD or CMOS photo sensor array;
FIG. 5(b) is a schematic diagram showing the light passing spectrum of RGB color chips on a typical CCD or CMOS photo-sensing array;
FIG. 5(c) is a schematic diagram showing the arrangement of RGB color chips corresponding to RGB pixels on a typical two-dimensional CCD or CMOS photo sensor array;
FIG. 5(d) is a schematic diagram of three point light sources with different wavelengths;
FIG. 5(e) is a schematic diagram showing an arrangement of RGB color chips on a one-dimensional photo-sensing array;
FIG. 5(f) is a schematic diagram of an arrangement of RGB color chips of the three-color one-dimensional light sensing array;
FIGS. 5(g) to 5(h) are schematic diagrams of two-dimensional color CCD or CMOS photo-sensing arrays with RGB color chips arranged;
FIG. 5(i) is a schematic diagram of a two-dimensional color CCD or CMOS photo sensor array with pixel image scanning and random reading;
FIGS. 5(j) to 5(k) are schematic diagrams illustrating the structure of a noise one-dimensional optical sensing array;
FIG. 6(a) is a diagram showing the emission spectrum of a conventional fluorescent lamp;
FIG. 6(b) is a diagram showing the emission spectrum of a conventional halogen lamp;
FIG. 7(a) is a schematic diagram showing an optical system used in Reymond;
FIG. 7(b) is a schematic diagram showing the change of the imaging signal when the point light source is slightly displaced;
FIG. 8(a) is a schematic diagram of a maximum viewing angle of a one-dimensional optical positioning system;
FIG. 8(b) is a schematic diagram illustrating the generation and solution of dead angles in a one-dimensional optical positioning system;
FIG. 8(c) is a schematic view of a one-dimensional optical positioning system with an enlarged view;
FIG. 8(d) is a schematic diagram of a method for tracking the optical axis of a one-dimensional optical positioning system;
FIG. 8(e) is a schematic diagram of a one-dimensional optical positioning system with a rotating viewing axis and a positionable function;
FIG. 9(a) is a diagram showing a general mouse;
FIG. 9(b) is a schematic view showing a simulated input method of the mouse device;
FIG. 9(c) is a schematic diagram showing a simulation input of the remote controller device;
FIG. 9(d) is a schematic diagram of a simulation input of the touch screen device;
FIG. 9(e) is a schematic diagram illustrating device emulation input for a keyboard;
FIG. 10 is a schematic diagram of a configuration according to an embodiment of the present invention;
FIG. 11(a) is a schematic diagram of a plurality of point light sources with unique light emission intensity;
FIG. 11(b) is a schematic diagram of a plurality of point light sources with geometric uniqueness;
FIG. 11(c) is a schematic view showing a single point light source;
FIG. 11(d) is a schematic view showing the constitution of a light scattering body;
FIG. 11(e) to FIG. 11(n) are schematic views of a point light source device;
FIG. 12(a) is a schematic diagram of a single set of one-dimensional optical locators with boresight tracking;
FIG. 12(b) is a schematic diagram of a coordinate system corresponding to a single one-dimensional optical positioner with boresight tracking;
FIG. 12(c) is a schematic diagram of the coordinate system of all one-dimensional optical locators with boresight tracking;
FIG. 12(d) is a schematic diagram showing the structure of a one-dimensional position detector;
FIG. 12(e) -FIG. 12(i) are schematic diagrams showing geometrical structural relationships among the one-dimensional optical positioner fixing mechanism, the one-dimensional position detector device mechanism, and the positioning calibration point light source;
FIG. 12(j) shows the housing of another prior art device;
FIG. 13(a) is a schematic view showing a program configuration for control analysis;
FIG. 13(b) is a schematic diagram showing a coordinate system alignment correction procedure;
FIG. 14 is a schematic view showing the constitution of a second embodiment of the present invention;
FIGS. 15(a) to 15(c) are schematic views showing the constitution of a third embodiment of the present invention;
FIG. 16 is a schematic view showing a fourth constitution of the embodiment of the present invention;
FIG. 17 is a schematic view showing a fifth constitution of the embodiment of the present invention;
fig. 18 is a schematic diagram showing a sixth configuration of the embodiment of the present invention.
Description of reference numerals: 1-VR glove; 2-finger portion on the VR glove; 3-locator on the VR glove; 5-one-dimensional lens; 10, 29, 52'-point light sources; 11-line imaging; 12, 13-one-dimensional light sensing arrays; 15, 16, 17-imaging signals on the one-dimensional light sensing array; 18-superimposed imaging signals; 20-one-dimensional position detector; 21-RF transmitter; 22-coded RF synchronization signal; 24-coded signal; 25-synchronization signal; 26-hand-end RF receiver; 27-decoder; 28-point light source switch; 50, 50'-one-dimensional optical positioning systems; 51, 51'-maximum viewing angles of the one-dimensional optical positioning system; 53-obstacle; 54, 54'-visual axes of the one-dimensional optical positioning system; 55-positioning calibration light source of the one-dimensional optical positioning system; 60-physical operation screen; 60'-virtual operation screen; 61-physical mouse; 61'-mouse cursor; 62-index finger of the right hand; 63-middle finger of the right hand; 64-ring finger of the right hand; 62'-left mouse button; 63'-middle wheel of the mouse; 64'-right mouse button; 74-single or plural fingers; 74'-cursor(s) corresponding to single or plural fingers; 75-definition of the virtual remote controller geometry; 76-image of the remote controller; 80-definition of the virtual keyboard geometry; 100-configuration of the first embodiment of the invention; 110, 210-plurality of unique point light sources; 111, 211-single point light source; 112-approximately point-like divergent light source; 113-light scattering body; 115-light entrance of suitable size and shape; 116-light emitting source; 117-electronic control circuit; 118-battery; 119-point light source device mechanism; 120-point light source device fixing mechanism; 123-transparent light guide; 124-scattering body; 130, 230, 330, 430, 530, 630-plural sets of one-dimensional optical positioners with visual axis tracking; 131, 231, 331, 431, 531, 631-single set of one-dimensional optical positioners with visual axis tracking; 132-plurality of one-dimensional position detectors; 133, 333-single one-dimensional position detector; 134-one-dimensional optical component group; 135-one-dimensional light sensor; 136-signal microprocessor; 137-one-dimensional position detector device mechanism; 139-set of imaging average positions; 145, 345-positioning calculation control microprocessor; 146-signal transmission interface; 150, 250, 350, 450, 550, 650-signals containing all point light source physical quantities, synchronous trigger signals, and visual axis angles; 160-set of positioning calibration point light sources; 170-one-dimensional optical positioner fixing mechanism; 171-connecting structure; 180-biaxial angle control device; 181-two actuators; 182-two angle measuring devices; 190, 290, 390, 490, 590, 690-control analysis program; 191-coordinate system collimation correction program; 192-device simulation input program; 193-simulator simulation program; 194-other devices; 200-configuration of the second embodiment of the invention; 300-configuration of the third embodiment of the invention; 311, 411, 511, 611-module device with a plurality of point light sources; 312, 412, 512, 612-plurality of point light sources; 313, 413, 513, 613-approximately point-like divergent light sources; 314, 414, 514-RF receiver; 315, 614-switch; 320, 420, 520-coded RF synchronization signal; 332, 432-RF transceiver; 400-configuration of the fourth embodiment of the invention; 410, 510-plural groups of module devices with plural point light sources; 500-configuration of the fifth embodiment of the invention; 600-configuration of the sixth embodiment of the invention; 632-light receiving device; f-focal length of the one-dimensional lens; X-X coordinate axis; Y-Y coordinate axis; Z-Z coordinate axis; (overlap-line symbol, Fig. 2)-image overlap line; o-position of the point light source; I(0, y_s0)-imaging position of the point light source; I(x)-imaging signal with Gaussian intensity distribution; I_0-central intensity of the Gaussian distribution; σ-standard deviation of the Gaussian distribution; μ-mean position of the Gaussian distribution; P-emitted light intensity of the point light source; r-light emitting radius of the point light source; μP-microprocessor; I(x, t_k)-imaging signal at time t_k; S(x, t_k)-point light source imaging signal at time t_k; N(x, t_k)-ambient light interference noise signal at time t_k; M-number of pixels in the one-dimensional photo-sensing array; x_m-position of the m-th pixel; O(X, Y, Z)-world coordinate system; S_1, S_2, S_3-one-dimensional light sensing arrays; L_1, L_2, L_3-one-dimensional lenses of focal length f; Z_1, Z_2, Z_3-optical axes of L_1, L_2, L_3; o(x_1, y_1, z_1)-point light source coordinates; y_s1, y_s2, y_s3-imaging positions of the point light source o(x_1, y_1, z_1) on S_1, S_2, S_3; Δx_1, Δy_1, Δz_1-spatial resolution of the optical system; Δy_s1, Δy_s2, Δy_s3-minimum measurement errors; Ī(x_i)-average per-unit-length measurement of the i-th sensing pixel; Δw-width of a single sensing pixel; Θ-horizontal rotation angle of the one-dimensional optical positioning system; Φ-vertical rotation angle of the one-dimensional optical positioning system; L-length of the physical operation screen; H-width of the physical operation screen; L'-length of the virtual operation screen; H'-width of the virtual operation screen; m, n-real numbers greater than, equal to, or less than 1; #i-number of a single set of one-dimensional optical positioners; #j-number of a single one-dimensional position detector; #k-number of a single point light source; μ_ijk-imaging average position of the point light source; P_i-physical quantities of the point light sources obtained by the #i-th set of one-dimensional optical positioners; P̄_i-group physical quantities of the point light sources obtained by the #i-th set of one-dimensional optical positioners; R_i-relative physical quantities of the point light sources obtained by the #i-th set of one-dimensional optical positioners; F_i-other physical quantities of the point light sources obtained by the #i-th set of one-dimensional optical positioners; SCAN-scan timing; SYNC-periodic synchronous scanning signal; ENABLE-synchronous trigger signal; (Θ_i, Φ_i)-visual axis angle of the #i-th set of one-dimensional optical positioners; (Θ_ia, Φ_ia)-visual axis angle drive control signals of the #i-th set of one-dimensional optical positioners; (Θ_is, Φ_is)-visual axis angle electrical signals of the #i-th set of one-dimensional optical positioners; (X_i, Y_i, Z_i)-reference coordinate system set within the #i-th set of one-dimensional optical positioners; (X_i0, Y_i0, Z_i0)-origin of the reference coordinate system; Z_i-visual axis of the #i-th set of one-dimensional optical positioners; I_ij(x)-imaging signal obtained by the #j-th one-dimensional position detector of the #i-th set of one-dimensional optical positioners.
Detailed Description
The above and further features and advantages of the present invention are described in more detail below with reference to the accompanying drawings.
Example one
Fig. 10 is a schematic diagram of a configuration according to an embodiment of the present invention.
The apparatus 100 of the first embodiment of the invention provides point light sources made unique by intensity modulation or geometric modulation, and performs measurement and analysis of the three-dimensional motion of these plural point light sources to achieve virtual input and simulation. It mainly comprises a plurality of unique point light sources 110, plural sets of one-dimensional optical positioners 130 with visual axis tracking, and a control analysis program 190. Each point light source 111 among the plurality of unique point light sources 110 emits, simultaneously and continuously, an approximately point-like divergent light source 112 with unique optical characteristics. In the plural sets of one-dimensional optical positioners 130 with visual axis tracking, each set 131 receives a synchronous trigger signal 150 together with the divergent light sources 112 of all point light sources, performs the three-dimensional positioning measurement of all point light sources 111, and outputs a set of physical quantities 150. Each set 131 also has visual axis tracking and positioning functions: it can automatically track the group center coordinate of the plural point light sources, or the coordinate of any single point light source (described later), and output its own visual axis angle 150, thereby achieving visual axis tracking; it can also receive a visual axis angle to achieve visual axis positioning. The control analysis program 190 is a software program that connects to and controls all one-dimensional optical positioners 131 with visual axis tracking. It outputs a synchronous trigger signal 150 that starts all positioners 130 synchronously so that they execute the three-dimensional positioning measurement in step; it can output a set of visual axis angles 150 to position the visual axes of all positioners; and, after receiving all physical quantities and visual axis angles 150, it can simulate the input of a physical input device, achieving virtual input, or simulate the motion of a physical object, achieving the purpose of a simulator.
FIG. 11(a) is a schematic view of a plurality of unique point light sources.
The plurality of unique point light sources 110, wherein each point light source 111 may have the same light emitting radius but different light emitting intensity, i.e. each point light source has the uniqueness of light intensity, and all the point light sources emit light simultaneously and continuously. For convenience of the following description, each point light source is given a unique number # k.
FIG. 11(b) is a schematic view of another configuration of a plurality of unique point light sources.
The plurality of unique point light sources 110, wherein each point light source 111 may have a different light emitting radius but the same light emitting intensity, i.e. each point light source has a unique geometric size, and all point light sources emit light simultaneously and continuously. For convenience of the following description, each point light source is given a unique number # k.
Fig. 11(c) is a schematic diagram showing the configuration of a single point light source.
The point light source 111 is composed of a light scattering body 113, a light emitting source 116, an electronic control circuit 117, a battery 118, a point light source device mechanism 119, and a point light source device fixing mechanism 120. The light scattering body 113 is an object that disperses incident light uniformly over angle. The light emitting source 116 consists of one or more LEDs or semiconductor lasers emitting visible or invisible light. The electronic control circuit 117 comprises a power switch and a constant current source; besides the switching function, the constant current source lets the light emitting source 116 emit light of specific, stable brightness. The point light source device mechanism 119 is a mechanical assembly that mounts and fixes the light scattering body 113, the light emitting source 116, the electronic control circuit 117, and the battery 118;
the point light source device fixing mechanism 120 fixes the point light source device mechanism 119 to the implements shown in Figs. 11(e) to 11(n). These implements may be the fingers of the hand, the forehead, or the instep of the foot (Fig. 11(e)); a racket, e.g., for tennis, badminton, or billiards (Fig. 11(f)); a bat, e.g., for baseball or softball (Fig. 11(g)); a shaft or club, e.g., for golf, hockey, pool, or western fencing, or a gun (Fig. 11(h)); a glove, e.g., for baseball, softball, or boxing (Fig. 11(i)); a ball, e.g., for baseball, softball, basketball, football, volleyball, or bowling (Fig. 11(j)); a game toy such as a toy gun (Fig. 11(k)); a remote-control toy such as a remote-control car, airplane, or helicopter (Fig. 11(l)); a computer joystick (Fig. 11(m)); or a controller of a home game machine (Fig. 11(n)).
As shown in fig. 11(d), the configuration of the light scattering body is schematically illustrated.
The light scattering body 113 may consist of a transparent light guide 123 and a scattering body 124. The transparent light guide 123 may take any shape, preferably spherical, and may be made of any transparent material, preferably glass or plastic. The scattering body 124 is disposed inside the transparent light guide 123 and is preferably randomly distributed light-reflective powder, randomly distributed transparent powder, randomly distributed fine air bubbles, or a small transparent sphere, with a refractive index lower than that of the transparent light guide 123. In addition, a light entrance of suitable size and shape is provided at an appropriate position on the transparent light guide 123 so that the light emitted by the light emitting source 116 is introduced at the optimum angle.
Fig. 12(a) is a schematic diagram of a single set of one-dimensional optical positioners with view axis tracking.
The single set of one-dimensional optical positioner 131 with optical axis tracking is composed of a plurality of one-dimensional position detectors 132, a positioning calculation control microprocessor 145, a signal transmission interface 146, a set of positioning calibration point light sources 160, a one-dimensional optical positioner fixing mechanism 170, and a two-axis angle control device 180, and for convenience of description, the single set of one-dimensional optical positioner 131 has a unique number # i, and the single one-dimensional position detector 133 has a unique number # j.
Each one-dimensional position detector 133 (#j) receives the divergent light 112 of all point light sources 111 (#k) together with a synchronous scanning signal SYNC, and then calculates and outputs the imaging average positions 139 (denoted μ_ijk) of all point light sources. More precisely, the imaging average position 139 (μ_ijk) is the imaging average position μ_ijk obtained for point light source 111 (#k) by the one-dimensional position detector 133 (#j) within the set of one-dimensional optical positioners 131 (#i).
The positioning calculation control microprocessor 145 contains a positioning calculation control program and connects to and controls all one-dimensional position detectors 133 (#j) and the biaxial angle control device 180. After receiving, through the signal transmission interface 146, the synchronous trigger signal ENABLE output by the control analysis program 190, the positioning calculation control program outputs a periodic synchronous scanning signal SYNC; having obtained the imaging average positions 139 (μ_ijk) of all point light sources, it calculates and outputs the physical quantities P_i, group physical quantities P̄_i, relative physical quantities R_i, and other physical quantities F_i of all point light sources 111 (#k). The positioning calculation control program can also change the positioner's own visual axis angle (Θ_i, Φ_i): it can receive a new visual axis angle (Θ_i, Φ_i), or calculate a new one from the physical quantities P_i or the group physical quantities P̄_i; it then calculates and outputs the angle drive control signals (Θ_ia, Φ_ia) while receiving the angle electrical signals (Θ_is, Φ_is) as feedback, achieving visual axis tracking and precise positioning. Changing the visual axis angle according to an externally supplied new angle (Θ_i, Φ_i) is the visual axis positioning function; changing it according to a new angle derived from the group physical quantities P̄_i is the visual axis tracking function.
As described above, the physical quantities P_i of each point light source 111 (#k) include its three-dimensional position coordinates (x_ik, y_ik, z_ik), displacement (Δx_ik, Δy_ik, Δz_ik), velocity (v_xik, v_yik, v_zik), acceleration (a_xik, a_yik, a_zik), and the like; the group physical quantities P̄_i include the group center coordinates (x̄_i, ȳ_i, z̄_i), group average displacement (Δx̄_i, Δȳ_i, Δz̄_i), group average velocity (v̄_xi, v̄_yi, v̄_zi), and group average acceleration (ā_xi, ā_yi, ā_zi), defined as follows:
Group center coordinates: \bar{x}_i = \sum_{k=1}^{N} x_{ik}/N, \quad \bar{y}_i = \sum_{k=1}^{N} y_{ik}/N, \quad \bar{z}_i = \sum_{k=1}^{N} z_{ik}/N    (39)

Group average displacement: \Delta\bar{x}_i = \sum_{k=1}^{N} \Delta x_{ik}/N, \quad \Delta\bar{y}_i = \sum_{k=1}^{N} \Delta y_{ik}/N, \quad \Delta\bar{z}_i = \sum_{k=1}^{N} \Delta z_{ik}/N    (40)

Group average velocity: \bar{v}_{xi} = \sum_{k=1}^{N} v_{xik}/N, \quad \bar{v}_{yi} = \sum_{k=1}^{N} v_{yik}/N, \quad \bar{v}_{zi} = \sum_{k=1}^{N} v_{zik}/N    (41)

Group average acceleration: \bar{a}_{xi} = \sum_{k=1}^{N} a_{xik}/N, \quad \bar{a}_{yi} = \sum_{k=1}^{N} a_{yik}/N, \quad \bar{a}_{zi} = \sum_{k=1}^{N} a_{zik}/N    (42)
Wherein N is the number of all point light sources.
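Equations (39) to (42) are component-wise arithmetic means over the N point light sources; a minimal sketch with each input as an N × 3 NumPy array:

```python
import numpy as np

def group_quantities(positions, displacements, velocities, accelerations):
    """Group physical quantities, eqs. (39)-(42): component-wise means
    over the N point light sources; every argument is an (N, 3) array."""
    return (positions.mean(axis=0),        # group center, eq. (39)
            displacements.mean(axis=0),    # group average displacement, eq. (40)
            velocities.mean(axis=0),       # group average velocity, eq. (41)
            accelerations.mean(axis=0))    # group average acceleration, eq. (42)
```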
In addition, from the physical quantities P_i of each point light source 111 (#k) and the group physical quantities P̄_i, the relative physical quantities R_i between point light sources, or between each point light source and the group center coordinate, can be calculated, e.g., relative position, velocity, acceleration, angle, angular velocity, angular acceleration, and the normal vector of the plane formed by the point light sources. If a mass is assigned to each point light source, other physical quantities F_i such as force, torque, centripetal force, centrifugal force, momentum, and kinetic energy can also be calculated.
The signal transmission interface 146, a wired or wireless transmission device, connects the positioning calculation control microprocessor 145 with the control analysis program 190 to transmit the physical quantities P_i, P̄_i, R_i, F_i, the visual axis angle (Θ_i, Φ_i), and the synchronization trigger signal ENABLE.
The positioning calibration point light sources 160 are composed of a plurality of point light sources fixed at known positions on the one-dimensional optical positioner fixing mechanism 170, and are used to determine the spatial position and visual axis angle of the one-dimensional optical positioner 131(#i).
As shown in fig. 12(b), for each set of one-dimensional optical positioners 131(#i), a reference coordinate system (x_i, y_i, z_i) is virtually defined at an appropriate position within its mechanism, with origin coordinates (X_i0, Y_i0, Z_i0); the z_i axis is taken as the visual axis. Since the set of positioning calibration point light sources 160 is fixed at known positions on the one-dimensional optical positioner fixing mechanism 170, measuring the position coordinates of the positioning calibration point light sources 160 allows the origin coordinates (X_i0, Y_i0, Z_i0) and the visual axis angle (Θ_i, Φ_i) to be calculated.
As shown in fig. 12(c), when a plurality of one-dimensional optical positioners 131 (e.g. #0, #1, and #2) are used simultaneously, the control analysis program 190 must, before actual operation, select one of them, 131(#0), as the master positioner and take its reference coordinate system as the world coordinate system, with origin (0, 0, 0); the other one-dimensional optical positioners 131(#1, #2) are defined as slave positioners. The master positioner 131(#0) then measures the positioning calibration point light sources 160 of the slave positioners 131(#1, #2), from which the origin coordinates (X_10, Y_10, Z_10) and (X_20, Y_20, Z_20) and the visual axis angles (Θ_1, Φ_1) and (Θ_2, Φ_2) of the slave coordinate systems can be calculated.
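The slave-to-world conversion this paragraph describes can be sketched as below. The rotation convention (yaw about Z by Θ_i, then pitch about Y by Φ_i) is an assumption made for illustration, since the patent does not fix an axis order:

```python
import numpy as np

# Hedged sketch: mapping a point measured in a slave positioner's frame
# into the master's world frame, given the slave origin (X_i0, Y_i0, Z_i0)
# and visual axis angles (Θ_i, Φ_i). The axis order of the rotation is an
# illustrative assumption.
def slave_to_world(p_slave, origin, theta, phi):
    ct, st = np.cos(theta), np.sin(theta)
    cp, sp = np.cos(phi), np.sin(phi)
    Rz = np.array([[ct, -st, 0], [st, ct, 0], [0, 0, 1]])   # yaw Θ_i
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])   # pitch Φ_i
    return Rz @ Ry @ np.asarray(p_slave) + np.asarray(origin)
```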
As shown in fig. 12(a), the two-axis angle control device 180 mainly comprises two triggers 181 (actuators), two angle measuring devices 182 (angular sensors), and a two-axis rotating mechanism (not shown). After receiving the angle driving control signal (Θ_ia, Φ_ia), the device 180 drives the two triggers 181 accordingly, rotating the two-axis rotating mechanism and the two angle measuring devices 182; the two angle measuring devices 182 feed back two angle electrical signals (Θ_is, Φ_is) according to the actual rotation angle, for closed-loop positioning control of the two-axis angle. The two-axis rotating mechanism rotates the one-dimensional optical positioner fixing mechanism 170, thereby changing the visual axis angle (Θ_i, Φ_i) of the one-dimensional optical positioner 131(#i).
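The closed-loop angle positioning can be sketched as a simple proportional step; the gain and loop structure are illustrative assumptions, as the patent only states that the fed-back angle signals (Θ_is, Φ_is) are used for positioning control:

```python
# Hedged sketch of the two-axis positioning loop: each call compares the
# commanded visual axis angle with the fed-back sensor angles and returns
# actuator increments. The proportional gain is an illustrative assumption.
def angle_control_step(target, measured, gain=0.5):
    d_theta = gain * (target[0] - measured[0])  # error on the Θ axis
    d_phi   = gain * (target[1] - measured[1])  # error on the Φ axis
    return d_theta, d_phi  # increments applied through the two triggers
```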
As shown in fig. 12(a), the one-dimensional optical positioner fixing mechanism 170 is a mechanical mechanism on which the plurality of one-dimensional position detectors 132, the positioning calculation control microprocessor 145, the signal transmission interface 146, and the set of positioning calibration point light sources 160 are mounted; it can be connected to the two-axis rotating mechanism in the two-axis angle control device 180 to achieve two-axis rotation.
As shown in fig. 12(d), the configuration of the one-dimensional position detector 133(# j) is schematically illustrated.
The one-dimensional position detector 133(#j) is mainly composed of a one-dimensional optical component group 134, a one-dimensional optical sensor 135, a signal microprocessor 136, and a one-dimensional position detector device mechanism 137.
The one-dimensional optical component group 134 is composed of a filter, a line-shaped aperture, and a one-dimensional optical lens (not shown), and forms the light 112 of a point light source into a line-shaped image.
The one-dimensional optical sensor 135 consists of a one-dimensional light-sensing array, a scanning and reading electronic circuit, and an analog-to-digital converter (ADC) (not shown). According to the received scanning timing signal SCAN, the scanning and reading electronic circuit sequentially reads out the light-induced analog voltage of each sensing pixel on the one-dimensional light-sensing array and, after the ADC, outputs a digital voltage. As mentioned above, this digital voltage is the imaging superposition signal I_ij(x), where the subscripts i and j correspond to #i and #j as defined above.
The signal microprocessor 136 is connected to and controls the one-dimensional optical sensor 135; after receiving the synchronous scanning signal SYNC, it executes a signal processing program to generate the scanning signal SCAN, read the imaging superposition signal I_ij(x), and calculate and output the average imaging positions μ_ijk of all point light sources. The signal processing program mainly comprises a data synchronous reading program, a dynamic background light signal removal program, and a point light source imaging signal identification and correspondence program.
The data synchronous reading program outputs the SCAN signal at an appropriate delay after the received synchronous scanning signal SYNC, to acquire and record the imaging superposition signal I_ij(x), which comprises the effective imaging signals of all point light sources plus the dynamic background light signal;
the dynamic background light signal removal program mainly comprises a temporal ambient light interference removal program and a spatial ambient light interference removal program, and outputs, after the dynamic background light has been removed from the imaging superposition signal I_ij(x), an effective imaging signal containing all point light sources;
the point light source imaging signal identification and correspondence program identifies and analyzes, from the effective imaging signals of all point light sources, the effective imaging signal of each individual point light source and its correspondence, through a threshold comparison program or a waveform detection program. The waveform detection program performs this identification, analysis, and correspondence according to the distribution standard deviation, central intensity, and waveform slope characteristics of the point light source's effective imaging signal. In addition, when geometrically modulated point light sources are used, the individual effective imaging signals and their correspondences are identified and analyzed through a geometric demodulation method.
The average point light source imaging position calculation program applies maximum-signal pixel position analysis, Gaussian fitting, or statistical analysis to each identified effective point light source imaging signal, to calculate and output the average imaging positions μ_ijk.
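As a concrete illustration of this signal chain, the sketch below chains the temporal background removal, threshold comparison, and centroid steps, under simplifying assumptions: two consecutive scans see the same ambient light, the threshold is positive, and an intensity-weighted centroid stands in for the Gaussian fit. Function and variable names are illustrative, not from the patent:

```python
import numpy as np

# Hedged sketch of the 1-D signal chain: subtract two consecutive
# imaging superposition signals I_ij(x) (temporal ambient light removal),
# zero everything below a positive threshold (threshold comparison),
# then take an intensity-weighted centroid per contiguous run as μ_ijk.
def mean_imaging_positions(scan_prev, scan_curr, threshold):
    effective = np.asarray(scan_curr, float) - np.asarray(scan_prev, float)
    effective[effective < threshold] = 0.0   # keep only strong peaks
    runs, start = [], None
    for x, v in enumerate(effective):        # segment above-threshold runs
        if v > 0 and start is None:
            start = x
        elif v == 0 and start is not None:
            runs.append((start, x))
            start = None
    if start is not None:
        runs.append((start, len(effective)))
    # One run per point source; its weighted centroid approximates μ_ijk.
    return [float(np.average(np.arange(a, b), weights=effective[a:b]))
            for a, b in runs]
```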
In addition, the one-dimensional position detector device mechanism 137 is a mechanical mechanism that fixes the one-dimensional optical component group 134, the one-dimensional optical sensor 135, and the signal microprocessor 136, and can itself be fixed in the one-dimensional optical positioner fixing mechanism 170.
In addition, figs. 12(e) to 12(i) schematically illustrate the geometric arrangements interconnecting the one-dimensional optical positioner fixing mechanism 170, the one-dimensional position detector device mechanisms 137, and the positioning calibration point light sources 160.
As shown in fig. 12(e), the one-dimensional optical positioner fixing mechanism 170 may have a triangular geometry, preferably an equilateral triangle. A one-dimensional position detector device mechanism 137 is mounted at each apex or at the center of each of the three sides; that is, the relative positions of the three one-dimensional position detectors 133 form a triangular geometry. In addition, each one-dimensional position detector 133(#j) can be rotated to any angle about its own optical axis; that is, the long axes of the one-dimensional light-sensing arrays in the three one-dimensional position detectors 133 can be set at any angle.
The positioning calibration point light sources 160, composed of a plurality of point light sources, can be installed at any position on the triangular one-dimensional optical positioner fixing mechanism 170; preferably three point light sources are used, installed at the apexes of the triangle or at the centers of the three sides.
In addition, the triangular one-dimensional optical positioner fixing mechanism 170 may be provided with a connecting structure 171 at an apex; the connecting structure 171 allows two triangle sides to be connected or detached, and the angle between them to be adjusted arbitrarily, so that, for example, the triangular geometry can be transformed into a linear geometry.
As shown in fig. 12(f), the above embodiment can be improved by adding a connecting mechanism at the center of any of the three sides of the triangular one-dimensional optical positioner fixing mechanism 170, so that an additional one-dimensional position detector device mechanism 137, i.e. an additional one-dimensional position detector, can be installed at the center point of the triangle.
As shown in fig. 12(g), another improvement of the above embodiment is to change the triangular geometry into a quadrangular geometry, preferably an equilateral quadrangular geometry, in which the number of one-dimensional position detectors can be increased to four.
As shown in fig. 12(h), another improvement of the above embodiment is to change the triangular geometry into a pentagonal geometry, and preferably, an equilateral pentagonal geometry, in which the number of one-dimensional position detectors can be increased to five.
As shown in fig. 12(i), another improvement of the above example changes the triangular geometry into a hexagonal geometry, preferably an equilateral hexagon, in which the number of one-dimensional position detectors can be increased to six. Such a structure can of course be extended to polygons with more sides.
As shown in fig. 12(j), the one-dimensional optical positioner fixing mechanism 170 may be a casing of other conventional devices, such as a casing of a notebook computer, a PDA, a main body of a game machine, a mobile phone, a liquid crystal display, a plasma display, a television, a projector, an optical camera, an optical telescope, an automobile, a motorcycle, and the like (only the notebook computer and the liquid crystal display are illustrated). That is, in the present invention, the plurality of one-dimensional position detectors 132, a positioning calculation control microprocessor 145, a signal transmission interface 146, a set of positioning calibration point light sources 160, etc. can be independently installed on the housing of the conventional apparatus to achieve the effects of three-dimensional positioning measurement, virtual input, or simulator.
As shown in fig. 13(a), the configuration of the control analysis program is schematically illustrated.
The control analysis program 190 is a software program mainly composed of a coordinate system collimation synchronous correction program 191, a device simulation input program 192, and a simulator simulation program 193. The control analysis program 190 can be integrated into other devices 194 such as a personal computer, notebook computer, PDA, mobile phone, game console, or video playing and converting device (such as a DVD player or set-top box), and the three programs can be executed by an electronic system, such as a microprocessor, in the other device 194.
Fig. 13(b) is a schematic diagram of the coordinate system collimation correction procedure.
The coordinate system collimation correction program 191 is mainly composed of a visual axis reset-to-zero program, a coordinate system setting and conversion program, and a synchronization time correction program. It determines the coordinate conversion relationships among all the one-dimensional optical positioners with visual axis tracking, compensates the measurement error caused by coordinate conversion, and corrects the synchronization time error.
The visual axis reset-to-zero program aligns the visual axes of all the one-dimensional optical positioners 131 with visual axis tracking to the same positioning point light source 111 through their visual axis control programs, and then resets the visual axes to zero, i.e. (Θ_i = 0, Φ_i = 0). The coordinate system setting and conversion program designates one master positioner 131(#0) and slave positioners 131(#i) among all the one-dimensional optical positioners with visual axis tracking; the master positioner measures the positioning point light source and the positioning calibration point light sources of the slave positioners, the slave positioners measure the positioning point light source, and from these measurements the coordinate conversion relationship between the master positioner and each slave positioner is calculated and the positioning error compensated. In addition, the synchronization time correction program outputs the synchronization trigger signal ENABLE at appropriate intervals, so that all the positioners are corrected to execute the positioning calculation control program synchronously.
As shown in fig. 13(a), the device simulation input program 192 is mainly composed of a virtual operation screen correspondence program, a virtual device geometry definition and operating finger correspondence program, and an operating gesture definition and recognition program; it performs virtual input by simulating and recognizing the hand motions required to operate a physical input device.
The virtual operation screen correspondence program defines, for a physical operation screen of real size, a virtual operation screen at an arbitrary position in space. The virtual operation screen is the spatial counterpart of the physical operation screen; its geometric correspondence is one-to-one and may be enlarging, equal, or reducing. In addition, the virtual operation screen can be rendered as a virtual three-dimensional image through virtual reality technology;
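The one-to-one screen correspondence can be sketched as below, assuming a rectangular virtual frame defined by an origin and extents in space and a pixel-addressed physical screen; the names and the clamping behavior are illustrative assumptions:

```python
# Illustrative mapping from a fingertip position p = (x, y) inside a
# virtual operation frame to physical screen pixels. The frame origin and
# size realize the one-to-one (enlarging/equal/reducing) correspondence;
# clamping to the frame edge is an added assumption.
def virtual_to_screen(p, frame_origin, frame_size, screen_px):
    u = (p[0] - frame_origin[0]) / frame_size[0]   # normalize to 0..1
    v = (p[1] - frame_origin[1]) / frame_size[1]
    u = min(max(u, 0.0), 1.0)
    v = min(max(v, 0.0), 1.0)
    return int(u * (screen_px[0] - 1)), int(v * (screen_px[1] - 1))

# Example: a 20 cm square virtual frame driving a 1920x1080 screen.
print(virtual_to_screen((0.10, 0.05), (0.0, 0.0), (0.20, 0.20), (1920, 1080)))
```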
the virtual device geometry definition and operating finger correspondence program defines, for the physical input device to be simulated, the geometry of a virtual device, the physical positions and sizes of its function keys, and the physical actions of the function keys, and associates each operating finger with its function keys. In addition, the virtual device geometry and the operating fingers can be rendered as virtual stereo images through virtual reality technology;
the operating gesture definition and recognition program defines, according to the physical action of each virtual device function key, the physical motion quantities of a finger operating action. These form a set of physical quantities consisting of a time series of physical quantities, composed of the individual, group, relative, and other physical quantities of all the point light sources. The point light sources worn on the fingers can thus be detected, compared, and analyzed against the predefined physical quantities, so that the finger gesture is recognized and device-simulated input is achieved.
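A minimal sketch of this matching idea follows, assuming each predefined gesture is stored as a short time series of a single physical quantity and matched by mean squared error; the metric, tolerance, and names are illustrative assumptions rather than the patent's method:

```python
import numpy as np

# Hedged sketch of gesture recognition: compare a measured time series of
# one physical quantity (e.g. fingertip velocity) against predefined
# gesture templates; report the best match under a tolerance.
def recognize_gesture(measured, templates, tol=1.0):
    best, best_err = None, tol
    for name, ref in templates.items():
        n = min(len(measured), len(ref))
        err = float(np.mean((np.asarray(measured[:n]) -
                             np.asarray(ref[:n])) ** 2))
        if err < best_err:
            best, best_err = name, err
    return best  # e.g. "key_press", or None if nothing matches
```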
In addition, as shown in fig. 13(a), the simulator simulation program 193 performs real-time positioning measurement of a plurality of point light sources mounted on a physical object (such as a racket), from which the motion trajectory and motion physical quantities of the object are calculated. Combined with a virtual image and the laws of physics, the physical object can interact almost realistically and naturally with a virtual object (such as a ball), for example in batting. Through virtual reality technology, a virtual object can also be defined in a virtual space to correspond directly to the motion state of the physical object, and this virtual object can interact with other virtual objects in the virtual space (such as a ball) according to the physical collision rules, thereby simulating various sports, shooting, driving, flying, and the like.
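To make the "physical collision rules" concrete, the sketch below reflects a virtual ball off the plane of a tracked racket; the elastic-reflection rule and restitution coefficient are chosen purely for illustration:

```python
import numpy as np

# Hedged sketch of the simulator interaction: a virtual ball bounces off
# the plane swept by a tracked physical racket. Mirroring the normal
# velocity component in the racket frame is a simplified stand-in for the
# collision rule named above.
def bat_ball(v_ball, racket_normal, v_racket, restitution=0.9):
    n = racket_normal / np.linalg.norm(racket_normal)
    v_rel = v_ball - v_racket                      # velocity in racket frame
    v_out = v_rel - (1 + restitution) * np.dot(v_rel, n) * n
    return v_out + v_racket                        # back to world frame
```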
Example two
As shown in fig. 14, a schematic diagram of a second embodiment of the present invention is shown.
The apparatus 200 of the second embodiment of the present invention provides a wavelength modulation method addressing the uniqueness of the point light sources, and otherwise has the same structure as the first embodiment. Hereinafter, only the differences are described.
The second embodiment 200 is mainly composed of a plurality of unique point light sources 210, a plurality of sets of one-dimensional optical positioners 230 with visual axis tracking, and a control analysis program 290; for clarity, R, G, B labels indicate the wavelength uniqueness of the point light sources 211.
The difference from the first embodiment mainly lies in that:
(1) The plurality of point light sources 211 are composed of point light sources that have different, non-overlapping emission wavelengths and can emit light simultaneously and continuously (see fig. 5(d) and the related description above); and
(2) in the one-dimensional optical positioners 231 with visual axis tracking, the one-dimensional light sensors (not shown) of the one-dimensional position detectors can be replaced by one or more linear color light sensors or by a two-dimensional color light sensor. The sensing pixels of these color light sensors are provided with different filter color chips, each of which passes only the light of its corresponding point light source wavelength and filters out the light of non-corresponding point light sources (as shown in figs. 5(e), 5(f), and 5(g) and the related description above).
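As an illustration of this wavelength demultiplexing, the sketch below locates one source per color channel by an intensity-weighted centroid; the channel names and the centroid stand-in are illustrative assumptions:

```python
import numpy as np

# Hedged sketch of wavelength demultiplexing with a color line sensor:
# each filter chip passes only its corresponding point source, so the R,
# G, B channel readouts can be processed independently, one source per
# channel. Sensor values are assumed non-negative.
def locate_by_wavelength(channels):
    positions = {}
    for name, signal in channels.items():      # e.g. {"R": ..., "G": ...}
        signal = np.asarray(signal, dtype=float)
        x = np.arange(signal.size)
        # Intensity-weighted centroid of the single image in this channel;
        # the tiny offset guards against an all-zero channel.
        positions[name] = float(np.average(x, weights=signal + 1e-12))
    return positions
```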
In addition, the plurality of point light sources can combine light intensity uniqueness, geometric size uniqueness, and wavelength uniqueness, i.e. an integrated application of the first and second embodiments. For example, with three groups of point light sources, each group containing several point light sources, uniqueness can be configured per group so that each group has a unique wavelength (e.g. R, G, or B), while within a single group each point light source has a unique light intensity or a unique geometric size. The basic principles and functions have been disclosed above and are not repeated here.
EXAMPLE III
Fig. 15(a) is a schematic view of a third embodiment of the present invention.
The third embodiment provides, for the uniqueness of the point light sources, an improved time modulation method, i.e. the master-slave wireless synchronization method described above, and otherwise has the same structure as the first embodiment. Hereinafter, only the differences are described.
The third embodiment 300 is mainly composed of a plurality of point light source module devices 311, a plurality of sets of one-dimensional optical positioners 330 with visual axis tracking, and a control analysis program 390; for clarity, white circles mark the uniqueness of the light emitting time of the point light sources 312.
The difference from the first embodiment mainly lies in that:
(1) The point light sources 312 of the point light source module devices 311 alternately emit temporally unique, approximately point-shaped divergent light 313 at different time points after receiving an encoded RF synchronization signal 320; and
(2) each set of one-dimensional optical positioners with visual axis tracking 331, among the plurality of sets 330, can transmit or receive the encoded RF synchronization signal 320 and synchronously receive the divergent light 313 of the point light sources, and can then analyze, calculate, and output the physical motion quantities 350 of all the point light sources 312.
Fig. 15(b) is a schematic diagram of a single group of point light source modules. The point light source module 311 has an RF receiver 314, a switch 315, and a plurality of point light sources 312. The RF receiver 314 is composed of an RF receiving terminal, a demodulator, and a decoder (not shown in the figure), and is used for receiving the encoded RF synchronization signal 320 and analyzing the timing of the encoded signal 24 and the synchronization signal 25 contained in the encoded RF synchronization signal 320 (see fig. 4 (d)). The switch 315 continuously, alternately and individually lights the plurality of point light sources 312 according to the timing sequence of the encoding signal 24 and the synchronization signal 25, thereby achieving the time modulation effect.
FIG. 15(c) is a schematic diagram of a single set of one-dimensional optical positioners with visual axis tracking. Unlike the first embodiment, each set of one-dimensional optical positioners with visual axis tracking 331 is provided with an RF transceiver 332 for transmitting or receiving the encoded RF synchronization signal 320, which may be generated by the positioning calculation control microprocessor 345. The encoded signal may be a set of digital (binary) codes, square waves of a specific duration, or a specific number of pulses. If the one-dimensional optical positioner 331 serves as the master positioner, its RF transceiver 332 transmits the encoded RF synchronization signal 320; if it serves as a slave positioner, its RF transceiver 332 receives the encoded RF synchronization signal 320 and generates a synchronous scanning signal SYNC to drive all the one-dimensional position detectors 333 to synchronously scan and extract the imaging signal of the single lit point light source.
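The timing-based identity recovery can be sketched in a line; the slot layout is an illustrative assumption, since only one source is lit per synchronization slot:

```python
# Hedged sketch of the time modulation: the coded RF synchronization
# signal carries a slot count, exactly one point source is lit per slot,
# and every positioner scans in that same slot, so a captured image is
# attributed to a source purely by timing.
def source_for_slot(slot_counter: int, n_sources: int) -> int:
    return slot_counter % n_sources

# Example: with 3 sources, slot 7 belongs to source #1.
assert source_for_slot(7, 3) == 1
```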
Example four
Fig. 16 is a schematic diagram of a fourth configuration according to the embodiment of the present invention.
The fourth embodiment combines the improvements of the first, second, and third embodiments, i.e. a combined application of light intensity uniqueness, geometric uniqueness, wavelength uniqueness, and time uniqueness, and otherwise has the same structure as the first embodiment. Hereinafter, only the differences are described. For clarity, white circles mark the uniqueness of the light emitting time of the point light source module devices 411.
The fourth embodiment 400 is mainly composed of a plurality of sets of point light source module devices 410, a plurality of sets of one-dimensional optical positioners 430 with visual axis tracking, and a control analysis program 490.
The differences from the above embodiments are mainly:
(1) Each of the point light source module devices 410 comprises an RF receiver 414, a switch (not shown), and a plurality of point light sources 412, where all the point light sources 412 have light intensity uniqueness, geometric uniqueness, or wavelength uniqueness, and simultaneously emit unique, approximately point-shaped divergent light 413 according to an encoded RF synchronization signal 420 received by the RF receiver 414. The encoded RF synchronization signal 420 is composed of an encoded signal and a synchronization signal (not shown); the encoded signal defines the number of each point light source module device 411, and the synchronization signal defines the light emitting time of the point light sources 412, i.e. each point light source module device 411 is unique in time. The RF receiver 414 therefore decodes the encoded signal and the synchronization signal so that the groups of point light source module devices 411 are lit alternately at different times; and
(2) each set of one-dimensional optical positioners with visual axis tracking 431, among the plurality of sets 430, can transmit or receive the encoded RF synchronization signal 420 and, after synchronously receiving the divergent light 413 of the point light sources, can analyze, calculate, and output the physical motion quantities 450 of all the point light sources 412.
EXAMPLE five
Fig. 17 is a schematic diagram showing a fifth configuration in the embodiment of the present invention.
The fifth embodiment improves on the second and third embodiments, i.e. a combined application of wavelength uniqueness and time uniqueness, and otherwise has the same structure as the fourth embodiment. Hereinafter, only the differences are described.
The fifth embodiment 500 is mainly composed of a plurality of sets of point light source module devices 510, a plurality of sets of one-dimensional optical positioners 530 with visual axis tracking, and a control analysis program 590; for clarity, white circles mark the uniqueness of the light emitting time of the point light sources 512, and R, B labels mark the wavelength uniqueness of the point light source modules 511.
The difference from the fourth embodiment mainly lies in that:
(1) Each point light source module device 511 is composed of an RF receiver 514, a switch (not shown), and a plurality of point light sources 512; within each module all the point light sources have the same, module-unique wavelength, i.e. each point light source module device 511 is unique in wavelength. Each module device 511 synchronously lights a single point light source in the module according to an encoded RF synchronization signal 520 received by the RF receiver 514, so that all the point light sources in the module alternately emit temporally unique, approximately point-shaped divergent light 513 at different time points. That is, the lighting of the point light sources 512 within a single module device 511 is unique in time, while the lighting across all the module devices 511 is synchronous. The encoded RF synchronization signal 520 is composed of an encoded signal, defining the number of each point light source within the module devices 511, and a synchronization signal (not shown), defining the light emitting time of the point light sources 512; the RF receiver 514 therefore decodes the encoded signal and the synchronization signal to light the point light sources 512 alternately at different times; and
(2) each set of one-dimensional optical positioners with visual axis tracking 531, among the plurality of sets 530, can transmit or receive the encoded RF synchronization signal 520 and, after synchronously receiving the divergent light 513 of the point light sources, can analyze, calculate, and output the physical motion quantities 550 of all the point light sources 512.
EXAMPLE six
Fig. 18 is a schematic diagram of a sixth configuration in the embodiment of the present invention.
The sixth embodiment provides, for the uniqueness of the point light sources, another time modulation improvement, i.e. the Stephenson improvement described above, and otherwise has the same structure as the first embodiment. Hereinafter, only the differences are described.
The sixth embodiment 600 is mainly composed of a plurality of point light source modules 611, a plurality of sets of one-dimensional optical positioners 630 with visual axis tracking, and a control analysis program 690.
The main differences from the first embodiment are:
(1) A switch 614 is added in the point light source module 611, so that the point light sources 612 are continuously and alternately lit at a fixed period, emitting divergent light 613 approximating a point light source;
(2) in the one-dimensional optical positioner 631 with visual axis tracking, a light receiving device 632 is added to receive the divergent light 613 emitted by the point light sources 612 and to output the lighting timing of the point light source module 611 (as shown in fig. 4(c) and described above). From this timing, a positioning calculation control microprocessor (not shown) in the one-dimensional optical positioner 631 measures, at appropriate times (e.g. before use or at fixed intervals), the period of the continuous alternate lighting, and synchronously generates a synchronization signal SYNC of the same period to drive the one-dimensional position detectors (not shown) to synchronously scan and read the imaging superposition signal.
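The period measurement in this embodiment can be sketched as follows, assuming the light receiving device delivers rising-edge timestamps of the alternately lit sources; the median estimator is an illustrative choice, not taken from the patent:

```python
import numpy as np

# Hedged sketch: estimate the fixed alternation period from detected
# lighting-edge timestamps; SYNC would then be issued at this period.
def estimate_period(edge_times):
    intervals = np.diff(np.asarray(edge_times, dtype=float))
    return float(np.median(intervals))  # robust to an occasional missed edge

# Example: edges roughly every 10 ms.
print(estimate_period([0.000, 0.010, 0.020, 0.031, 0.040]))
```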
The basic techniques, system architectures, and applications of the present invention are detailed above, and are summarized as follows:
1. Techniques for processing point light source uniqueness, including:
(1) intensity modulation processing techniques;
(2) techniques for geometric modulation processing;
(3) wavelength modulation processing techniques;
(4) a master-slave wireless synchronization technique;
(5) the Stephenson improvement technique.
2. Techniques for dynamic background light removal, including:
(1) a technique for removing real-time ambient light interference signals;
(2) techniques for near real-time temporal ambient light interference signal removal;
(3) and (4) a technology for removing a spatial ambient light interference signal (Fourier signal processing method).
3. Data processing techniques, including:
(1) techniques for waveform detection;
(2) calculating the spatial resolution;
(3) and calculating the average position.
4. The expansion of the system architecture comprises:
(1) a dead angle compensation framework;
(2) an expanded view architecture;
(3) techniques for visual axis tracking;
(4) procedures for coordinate system collimation correction.
5. The expansion of the system application comprises:
(1) an application of a virtual input device;
(2) an emulator, etc.
In summary, although the basic techniques, system architectures, and applications of the present invention, and the descriptions of the embodiments, focus on one-dimensional optical systems, they can equally be applied to two-dimensional optical systems, i.e. two-dimensional optical lenses and two-dimensional optical sensing arrays. The fundamental differences lie only in the calculation of the point light source coordinates, in the dynamic background light removal and data processing, and in the number of position detectors used. For the calculation of the point light source coordinates in a two-dimensional optical system, see Taiwan patent application No. 096108692; it is not detailed here. The dynamic background light removal and data processing described above use one-dimensional calculations, which extend to two-dimensional calculations with the same mathematical logic. As for the number of position detectors, a one-dimensional optical system uses at least three one-dimensional position detectors, while a two-dimensional optical system uses at least two two-dimensional position detectors.
In summary, the method features and embodiments of the present invention have been disclosed in detail, fully demonstrating the progress of the invention in both purpose and efficacy. The invention has great industrial value and represents an application not previously seen on the market.
The foregoing is merely a preferred embodiment of the invention, which is intended to be illustrative and not limiting. It will be understood by those skilled in the art that various changes, modifications and equivalents may be made therein without departing from the spirit and scope of the invention as defined in the appended claims.

Claims (55)

1. A three-dimensional virtual input and simulation device is characterized in that: it includes:
a plurality of point light sources, wherein each point light source emits a divergent light with approximate point shape in a simultaneous and continuous mode;
the one-dimensional optical positioner with the visual axis tracking function comprises a plurality of groups of one-dimensional optical positioners with the visual axis tracking function, wherein each group of one-dimensional optical positioners with the visual axis tracking function is used for receiving a synchronous trigger signal and simultaneously receiving divergent light sources of all the plurality of point light sources, measuring three-dimensional positioning of all the point light sources and outputting a group of physical quantities; receiving a visual axis angle to achieve the purpose of visual axis positioning; and
the control analysis program is a software program which is used for connecting and controlling all the one-dimensional optical positioners with visual axis tracking, outputting a synchronous trigger signal, and synchronously starting all the one-dimensional optical positioners with visual axis tracking so as to synchronously execute the measurement of three-dimensional positioning; a group of visual axis angles are also output, so that the purpose of positioning the visual axis angles of all the one-dimensional optical positioners with the visual axis tracking is achieved; after receiving all the physical quantities and a group of visual axis angles, the input of an entity input device can be simulated, so that the aim of virtual input is fulfilled; the motion of a physical object is simulated, so that the simulation purpose of the simulator is achieved.
2. The apparatus for three-dimensional virtual input and simulation of claim 1, wherein: the plurality of point light sources means that each point light source has uniqueness of light intensity, and preferably, each point light source has the same light emitting radius but different light emitting intensity.
3. The apparatus for three-dimensional virtual input and simulation of claim 1, wherein: the plurality of point light sources means that each point light source has the uniqueness of geometric size, and preferably, each point light source has different light emitting radius but the same light emitting intensity.
4. The apparatus for three-dimensional virtual input and simulation of claim 1, wherein: the plurality of point light sources are unique in wavelength, and preferably, each point light source has different light emitting wavelengths which are not overlapped.
5. The apparatus for three-dimensional virtual input and simulation of claim 4, wherein: the number of the point light sources is three, and the point light sources respectively emit light sources with red, green and blue wavelengths.
6. The apparatus for three-dimensional virtual input and simulation of claim 1, wherein: the point light sources are composed of point light sources with light intensity uniqueness, geometric size uniqueness and wavelength uniqueness.
7. The apparatus for three-dimensional virtual input and simulation of claim 1, wherein: the plurality of point light sources are composed of a plurality of single point light sources, and the single point light source is composed of the following components:
the light scattering body is an object which makes incident light uniformly diffused in angle;
a light emitting source, composed of one or more visible or non-visible LEDs or semiconductor lasers;
an electronic control loop, which comprises a power switch and a constant current source loop; besides the power switch function, the constant current source loop enables the light emitting source to emit light of specific and stable brightness;
a battery for providing power supply to the light source and the electronic control loop;
a point light source device mechanism, which is a mechanical mechanism and is used for fixing the light scattering body, the light source, the electronic control loop and the battery; and
the point light source device fixing mechanism is a mechanical mechanism, and the point light source device fixing mechanism and the point light source device are fixed on other objects.
8. The apparatus for three-dimensional virtual input and simulation of claim 7, wherein: the light scattering body is composed of a transparent light guide body and a scattering body, wherein the transparent light guide body is in an arbitrary shape, and is optimally a spherical object; the material of the structure is any transparent material, and is preferably a glass or plastic transparent material. The scattering body is arranged in the transparent light guide body, and is optimally a randomly distributed light reflection powder, a randomly distributed transparent powder, a randomly distributed tiny air bubble and a small transparent spherical object, in addition, the refractive index of the scattering body is lower than that of the transparent light guide body, in addition, a light incidence port with proper size and shape is arranged at a proper position of the transparent light guide body, and the light source emitted by the light source is guided in a proper angle.
9. The apparatus for three-dimensional virtual input and simulation of claim 7, wherein: the light source can also be provided with an optical bandwidth filter to generate light source with special wavelength.
10. The apparatus for three-dimensional virtual input and simulation of claim 7, wherein: the point light source device fixing mechanism is characterized in that other devices fixed by the point light source device fixing mechanism are fingers of hands, forehead of head, instep of foot, racket-shaped object, rod-shaped object, glove-shaped object, spherical object, toy for game, remote control toy, game rod of computer and controller of household game machine.
11. The apparatus for three-dimensional virtual input and simulation of claim 1, wherein: one of the plurality of sets of one-dimensional optical positioners with visual axis tracking comprises the following components:
a plurality of one-dimensional position detectors, wherein each one-dimensional position detector calculates and outputs the imaging average position of all the point light sources according to the received synchronous scanning signal and the received divergent light of all the point light sources;
a positioning calculation control microprocessor, which contains a positioning calculation control program, connected with and controlling all the one-dimensional position detectors and a two-axis angle control device, said positioning calculation control program for receiving said synchronous trigger signal, said imaging average position, a viewing axis angle and two angle electric signals to calculate and output a synchronous scanning signal, a group of physical quantities, a viewing axis angle and a viewing axis angle driving control signal;
a signal transmission interface, which is a wired or wireless transmission device, for transmitting the set of physical quantities, the viewing axis angle and the synchronous trigger signal;
a set of positioning calibration point light sources, which are composed of a plurality of point light sources and are fixed at known positions on the one-dimensional optical positioner fixing mechanism, so as to be used for positioning the spatial position and the visual axis angle of the set of visual axis tracking one-dimensional optical positioners;
a one-dimensional optical positioner fixing mechanism which is a mechanical structure and is used for fixing the plurality of one-dimensional position detectors, the positioning calculation control microprocessor, the signal transmission interface and the group of positioning calibration point light sources for the device and is connected to a two-axis rotating mechanism in the two-axis angle control device so as to achieve the purpose of two-axis rotation; and
the two-axis angle control device is used for driving the two triggers according to the amount of the driving control signal after receiving the visual axis angle driving control signal so as to drive and rotate the two-axis rotating mechanism and the two angle measuring devices; the two angle measuring devices feed back and output two angle electric signals according to the actual rotating angle so as to be used for positioning and controlling the two-axis angle; the two-axis rotating mechanism rotates the one-dimensional optical positioner fixing mechanism to change the angle of the visual axis of the one-dimensional optical positioner.
12. The apparatus for three-dimensional virtual input and simulation of claim 11, wherein: one of the plurality of one-dimensional position detectors is composed of the following components:
a one-dimensional optical component group, which is composed of a filter, a line strip-shaped aperture and a one-dimensional optical lens;
a one-dimensional optical sensor, which is composed of a one-dimensional optical sensing array, a scanning reading electronic circuit and an analog-digital converter, and is composed of the scanning reading electronic circuit, according to the received scanning signal, sequentially and continuously reading and outputting the optical sensing analog voltage of each sensing pixel on the one-dimensional optical sensing array, and outputting a digital voltage after the action of the analog-digital converter, wherein the digital voltage is the imaging superposition signal;
a signal microprocessor, which is connected with and controls the one-dimensional light sensor, and executes a signal processing program after receiving the synchronous scanning signal to generate a scanning signal, read the imaging superposition signal and calculate and output the imaging average position of all point light sources; and
the one-dimensional position detector device mechanism is a mechanical structure, and is fixed in the one-dimensional optical component group, the one-dimensional optical sensor and the signal microprocessor, and is fixed in the one-dimensional optical positioner fixing mechanism.
13. The apparatus for three-dimensional virtual input and simulation of claim 12, wherein: the one-dimensional optical sensor is composed of a color one-dimensional optical sensing array, a scanning reading electronic circuit and an analog-digital converter, wherein a single optical sensing pixel is used as a unit on the optical sensing pixels of the color one-dimensional optical sensing array, and each optical sensing pixel is provided with a proper optical filtering color chip, preferably a red, green and blue light filtering color chip.
14. The apparatus for three-dimensional virtual input and simulation of claim 12, wherein: the one-dimensional optical sensor is composed of a plurality of color one-dimensional optical sensing arrays, a scanning reading electronic circuit and an analog-digital converter, wherein each optical sensing pixel of each color one-dimensional optical sensing array takes a single optical sensing array as a unit, and each optical sensing array is provided with a proper optical filtering color chip, preferably a red, green and blue light filtering color chip.
15. The apparatus for three-dimensional virtual input and simulation of claim 12, wherein: the one-dimensional optical sensor is composed of a color two-dimensional optical sensing array, a random reading electronic circuit and an analog-digital converter, wherein the color two-dimensional optical sensing array is composed of a plurality of color one-dimensional optical sensing arrays, a single one-dimensional optical sensing array is used as a unit on the light sensing pixel of each color one-dimensional optical sensing array, and each color one-dimensional optical sensing array is provided with a proper optical filtering color chip, preferably an R, G, B optical filtering color chip; in addition, each pixel of the single one-dimensional photo sensor array is also provided with an appropriate light filtering color chip, preferably R, G, B light filtering color chips. The random reading electronic circuit performs random data reading action aiming at any pixel through a microprocessor, a row decoding controller and a column decoding controller.
16. The apparatus for three-dimensional virtual input and simulation of claim 12, wherein: the signal processing program is composed of the following programs:
a program for synchronously reading data, which is to output a scanning signal after a proper time according to the time sequence of the received synchronous scanning signal to obtain and record the imaging superposition signal, wherein the imaging superposition signal is an effective imaging signal containing all point light sources and a dynamic background light signal;
a dynamic background light signal removing program, which is composed of a time-based ambient light interference signal removing program and a space-based ambient light interference signal removing program, and outputs an effective imaging signal containing all point light sources after the imaging superposition signal is subjected to dynamic background light removing;
the program corresponding to the identification of the imaging signals of one point light source is to identify and analyze the effective imaging signals of the respective point light sources and the corresponding relation thereof through a threshold value comparison program or a waveform detection program, and the waveform detection program is to achieve the purposes of identification, analysis and correspondence according to the characteristics of the distribution standard deviation, the central intensity and the waveform change slope of the effective imaging signals of the point light sources; in addition, when the point light sources with the geometric modulation are used, effective imaging signals of the respective point light sources and corresponding relations thereof are identified and analyzed through a geometric modulation elimination method; and
the program for calculating the average point light source imaging position, which applies maximum-signal pixel position analysis, Gaussian fitting, and statistical analysis to the identified and analyzed effective imaging signals of the point light sources, to calculate and output the average imaging positions of all the point light sources.
17. The apparatus for three-dimensional virtual input and simulation of claim 16, wherein: the dynamic background light signal removing program is to add another one-dimensional light sensor for measuring background noise outside the one-dimensional light sensor in a hardware mode, and subtract the dynamic background light signal from the imaging superposition signal after separately obtaining the dynamic background light signal in a mode of synchronous signal scanning and reading of the one-dimensional light sensor and a mode of carrying out appropriate signal amplification processing, in addition, a proper optical filter is added on a one-dimensional array arranged in the other one-dimensional light sensor for noise, and the optical filter filters all point light sources but allows ambient light to pass through.
18. The apparatus for three-dimensional virtual input and simulation of claim 16, wherein: the temporal ambient light interference signal removing program is a software method, that is, a subtraction operation is performed on imaging superimposed signals obtained by two consecutive scans, so as to achieve the purpose of removing dynamic background light signals.
19. The apparatus for three-dimensional virtual input and simulation of claim 16, wherein: the spatial ambient light interference signal removing program is a fourier signal processing program, and is a program for performing a temporal ambient light interference signal removing program, performing a fourier transform on the obtained data, performing a band-pass filtering process in the frequency domain, i.e. performing an operation of filtering unnecessary frequencies and amplifying, and then performing an inverse fourier operation, so as to achieve the purpose of removing spatial ambient light interference.
20. The apparatus for three-dimensional virtual input and simulation of claim 11, wherein: the positioning calculation control program is composed of the following programs:
a synchronous scanning program, which generates and outputs a periodic synchronous scanning signal according to the time sequence of the synchronous trigger signal, synchronously drives all the one-dimensional position detectors, and executes the signal processing program;
a physical quantity calculating program for calculating and outputting a set of physical quantities after obtaining the imaging average position of the point light source output by all the one-dimensional position detectors; the set of physical quantities includes individual physical quantities, groups of physical quantities, relative physical quantities, and other physical quantities of all the point light sources; and
and a visual axis control program, which calculates and outputs a visual axis angle and a visual axis angle driving control signal according to the respective physical quantity, the group physical quantity or the visual axis angle, and simultaneously utilizes the received two angle electric signals as the control of angle feedback to achieve the accurate positioning control of the visual axis angle.
21. The apparatus for three-dimensional virtual input and simulation of claim 11, wherein: the one-dimensional optical positioner fixing mechanism is a triangular geometric structure, preferably an equilateral triangle structure, and is provided with one-dimensional position detector device mechanisms at the top corner or the center of three sides; the relative position of the three one-dimensional position detectors is a triangular geometric structure, the one-dimensional position detector device mechanism is set by rotating the optical axis of the one-dimensional position detectors by any angle by taking the optical axis of the one-dimensional position detectors as a rotating shaft, namely, the direction of the long axes of the one-dimensional light sensing arrays in the three one-dimensional position detectors is set by any angle, in addition, the group of positioning calibration point light sources are set at any position on the triangular one-dimensional optical positioner fixing mechanism, the optimal number of the positioning calibration point light sources is formed by three point light sources, the optimal device position is at the vertex angle of a triangle and at the center of three sides, in addition, the vertex angle of the triangular one-dimensional optical positioner fixing mechanism is provided with a connecting structure, and the connecting structure has the effect of connecting or disassembling two three corner sides, and the angle of the connection between the two triangular edges is adjusted at will.
22. The apparatus for three-dimensional virtual input and simulation of claim 21, wherein: the one-dimensional optical positioner fixing mechanism is additionally provided with a connecting mechanism at the center of any three sides so as to add a one-dimensional position detector device mechanism, and the one-dimensional position detector device mechanism is arranged at the triangular central point.
23. The apparatus for three-dimensional virtual input and simulation of claim 11, wherein: the geometric structure of the one-dimensional optical locator fixing mechanism is a quadrangle-shaped geometric structure, a pentagon-shaped geometric structure and a hexagon-shaped geometric structure, the best is an equilateral quadrangle-shaped geometric structure, an equilateral pentagon-shaped geometric structure and an equilateral hexagon-shaped geometric structure, the one-dimensional position detector device mechanism is arranged at the vertex angles of the quadrangle-shaped geometric structure, the pentagon-shaped geometric structure and the hexagon-shaped geometric structure and is also arranged at the center of each side, namely the number of the one-dimensional position detectors arranged on the one-dimensional optical locator fixing mechanism can be increased to four, five or six, in addition, the group of positioning calibration point light sources are arranged at any positions on the one-dimensional optical locator fixing mechanism, the best number of the positioning calibration point light sources is formed by three point light sources, and the best device position is arranged at the vertex angles of the quadrangle-shaped geometric structure, the pentagon-shaped geometric structure and the.
24. The apparatus for three-dimensional virtual input and simulation of claim 11, wherein: the one-dimensional optical positioner fixing mechanism is a casing of other existing devices, the casings of other existing devices are casings of notebook computers, PDAs, game consoles, mobile phones, liquid crystal displays, plasma displays, televisions, projectors, optical cameras, optical telescopes, automobiles, locomotives and the like, namely, the plurality of one-dimensional position detectors, the positioning calculation control microprocessor, the signal transmission interface, the group of positioning calibration point light sources are also arranged on the casings of other existing devices.
25. The apparatus for three-dimensional virtual input and simulation of claim 20, wherein: the individual physical quantities are three-dimensional position coordinates, displacement, speed, acceleration and motion tracks of each point light source; the group physical quantity is a group center coordinate, a group average displacement, a group average speed, a group average acceleration and a group motion track; the relative physical quantity is a plane normal vector formed by the relative position, the relative speed, the relative acceleration, the relative angle, the relative angular speed, the relative angular acceleration and the point light sources between the point light sources and the central coordinates of the point light source pair group; the other physical quantities are force, moment, centripetal force, centrifugal force, momentum and kinetic energy acting on each point light source.
26. The apparatus for three-dimensional virtual input and simulation of claim 1, wherein: the program for controlling analysis is composed of the following programs:
a coordinate system collimation and synchronization correction program, which determines the coordinate conversion relationships among all the one-dimensional optical positioners with visual axis tracking, compensates the measurement errors caused by the coordinate conversion, and corrects synchronization time errors;
a device simulating input program, which achieves the purpose of virtual input by simulating and recognizing the hand operation actions required by an entity input device; and
a simulator simulating program, which performs real-time positioning measurement of a plurality of point light sources mounted on another physical object, i.e. calculates the motion track and physical motion quantities of the physical object, defines a virtual object in a virtual space by means of virtual reality technology so as to correspond directly to the motion state of the physical object, and makes the virtual object interact with other virtual objects in the virtual space according to physical collision rules, thereby attaining the goal of simulation.
27. The apparatus for three-dimensional virtual input and simulation of claim 26, wherein: the program for coordinate system collimation and synchronization correction comprises the following programs:
a visual axis zeroing program, which resets the visual axis angles to zero after the visual axes of all the one-dimensional optical positioners with visual axis tracking have been aligned to the same positioning point light source through their visual axis control programs;
a coordinate system setting and conversion program, which designates one of the one-dimensional optical positioners with visual axis tracking as a master positioner and the others as slave positioners, measures the positioning calibration point light sources of the slave positioners through the master positioner and the positioning calibration point light sources through the slave positioners, and thereby calculates the coordinate conversion relationship between the master positioner and each slave positioner and compensates the positioning error (a sketch of such a conversion estimate follows this claim); and
a synchronization time correction program, which outputs the synchronization trigger signal at suitable intervals to correct all the positioners so that they execute the positioning calculation control program synchronously.
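Claim 27's coordinate system setting and conversion program requires, at minimum, a rigid transform between the master frame and each slave frame. The claims do not name an algorithm; the sketch below uses the standard Kabsch/SVD estimator on matched measurements of shared calibration point light sources, with invented test data.

```python
import numpy as np

def rigid_transform(p_master: np.ndarray, p_slave: np.ndarray):
    """Least-squares rotation R and translation t with p_master ~ R @ p_slave + t.

    p_master, p_slave: (N, 3) coordinates of the same calibration point light
    sources measured in the master and slave frames (N >= 3, non-collinear).
    Kabsch/SVD method; the patent leaves the estimator unspecified.
    """
    cm, cs = p_master.mean(axis=0), p_slave.mean(axis=0)
    H = (p_slave - cs).T @ (p_master - cm)          # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflection
    R = Vt.T @ D @ U.T
    t = cm - R @ cs
    return R, t

# Hypothetical check: a slave frame rotated 30 degrees about z and shifted.
theta = np.radians(30)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([0.1, 0.2, 0.0])
pts_master = np.random.rand(4, 3)
pts_slave = (pts_master - t_true) @ R_true          # row-wise R_true.T @ (p - t)
R, t = rigid_transform(pts_master, pts_slave)
assert np.allclose(R @ pts_slave.T + t[:, None], pts_master.T)
```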
28. The apparatus for three-dimensional virtual input and simulation of claim 26, wherein: the device simulation input program is composed of the following programs:
a virtual operation screen corresponding program, which defines, at any position in space, a virtual operation screen for an entity operation screen of real size; the virtual operation screen is the spatial counterpart of the entity operation screen, the geometric correspondence being one-to-one with an enlarged, equal or reduced scale (illustrated in the sketch following this claim); in addition, the virtual operation screen can be rendered as a virtual three-dimensional image through virtual reality technology;
a virtual device geometry and operating finger corresponding program, which, for the entity input device to be simulated, defines the geometric structure of a virtual device, the physical positions and sizes of its function keys and the physical actions of those keys, and establishes the operating correspondence between the fingers and the function keys; and
an operation gesture defining and recognizing program, which defines the physical quantities of the finger operation motions according to the physical actions of the virtual device function keys; each such quantity is a time-based set of physical quantities composed of the individual physical quantities, group physical quantities, relative physical quantities and other physical quantities of all the point light sources.
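As a concrete reading of claim 28's one-to-one screen correspondence, the sketch below maps a fingertip position onto a virtual screen rectangle floating in space and converts it to an entity-screen pixel. The frame definitions, screen sizes and function names are hypothetical, not taken from the patent.

```python
import numpy as np

# Hypothetical sketch of the virtual-to-entity screen correspondence: a virtual
# operation screen in space is mapped (enlarged, equal or reduced) onto a
# physical display. All frames, sizes and names are assumptions.
SCREEN_W_PX, SCREEN_H_PX = 1920, 1080          # entity operation screen
virt_origin = np.array([0.10, 0.20, 0.50])     # virtual screen corner, metres
virt_u = np.array([0.40, 0.0, 0.0])            # virtual screen width vector
virt_v = np.array([0.0, 0.225, 0.0])           # virtual screen height vector

def fingertip_to_pixel(p: np.ndarray):
    """Project a fingertip position p (3D, metres) onto the virtual screen
    plane and return the corresponding entity-screen pixel, or None when the
    fingertip lies outside the virtual screen rectangle."""
    d = p - virt_origin
    u = np.dot(d, virt_u) / np.dot(virt_u, virt_u)   # normalised coordinates
    v = np.dot(d, virt_v) / np.dot(virt_v, virt_v)
    if not (0.0 <= u <= 1.0 and 0.0 <= v <= 1.0):
        return None
    return int(u * (SCREEN_W_PX - 1)), int(v * (SCREEN_H_PX - 1))

print(fingertip_to_pixel(np.array([0.30, 0.30, 0.48])))  # e.g. (959, 479)
```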
29. The apparatus for three-dimensional virtual input and simulation of claim 28, wherein: the entity operation screen is a window screen of a personal computer or PDA, or an operation screen of a mobile phone, a television, or a game machine.
30. The apparatus for three-dimensional virtual input and simulation of claim 1, wherein: the program for controlling analysis is integrated into the host of a general personal computer, notebook computer, PDA, mobile phone or game machine, or into video playing and converting equipment.
31. The apparatus for three-dimensional virtual input and simulation of claim 1, wherein: the entity input device is a mouse, a keyboard, a remote controller or a touch screen.
32. A three-dimensional virtual input and simulation device is characterized in that: the device is composed of the following components:
a module with a plurality of point light sources, wherein the plurality of point light sources have uniqueness of light emitting time and are lit continuously, alternately and individually, so that they emit approximately point-shaped divergent light at different times;
a plurality of groups of one-dimensional optical positioners with visual axis tracking, wherein each group receives a synchronization trigger signal, continuously and alternately receives the divergent light of each point light source in turn, then measures the three-dimensional positions of all the point light sources and outputs a group of physical quantities; each group can also receive a visual axis angle to achieve visual axis positioning; and
a control analysis program, which is a software program that connects and controls all the one-dimensional optical positioners with visual axis tracking; it outputs a synchronization trigger signal to start all the positioners synchronously so that the three-dimensional positioning measurement is executed in synchrony; it can output a group of visual axis angles to position the visual axes of all the positioners; after receiving all the physical quantities and a group of visual axis angles, it simulates the input of an entity input device to achieve virtual input, and can also simulate the motion of a physical object to achieve the simulation purpose of a simulator.
33. The apparatus for three-dimensional virtual input and simulation of claim 32, wherein: the module with a plurality of point light sources is composed of the following components:
an RF receiver, composed of an RF receiving terminal, a demodulator and a decoder, for receiving an encoded RF synchronization signal and extracting the code and the synchronization timing contained in it;
a switch, which is an electronic switching circuit that, upon receiving the code and the synchronization timing, generates driving signals for lighting the plurality of point light sources continuously, alternately and individually (a sketch of such a schedule follows this claim); and
a plurality of point light sources, each of which receives its driving signal to switch its light emitting state.
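A minimal sketch of the switch's lighting schedule in claim 33, assuming a fixed round-robin slot order and a placeholder `set_led` driver in place of real hardware I/O; the slot length and source count are invented.

```python
import itertools
import time

# Hypothetical sketch: after the RF receiver has recovered the synchronisation
# timing, the switch lights the point light sources one at a time, in a fixed
# round-robin order, so that at any instant exactly one source emits.
NUM_SOURCES = 3
SLOT_SECONDS = 0.001          # one source per time slot (assumed slot length)

def set_led(index: int, on: bool) -> None:
    """Stand-in for real driver output (e.g. a GPIO write)."""
    print(f"LED {index} {'on' if on else 'off'}")

def run_switch(num_slots: int) -> None:
    """Continuously, alternately and individually light each source."""
    for _, i in zip(range(num_slots), itertools.cycle(range(NUM_SOURCES))):
        set_led(i, True)
        time.sleep(SLOT_SECONDS)
        set_led(i, False)

run_switch(num_slots=6)       # two full cycles over three sources
```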
34. The apparatus for three-dimensional virtual input and simulation of claim 32, wherein: one of the plurality of groups of one-dimensional optical positioners with visual axis tracking contains an RF transceiver for transmitting or receiving the encoded RF synchronization signal; the encoded RF synchronization signal is generated by a microprocessor, and the code can be a group of digital codes, square waves of specific duration, or a specific number of pulses (a sketch of the pulse-count variant follows this claim); if the one-dimensional optical positioner serves as the master positioner, its RF transceiver transmits the encoded RF synchronization signal; if it serves as a slave positioner, its RF transceiver receives the encoded RF synchronization signal and can generate a synchronous scanning signal that drives all the one-dimensional position detectors synchronously, so as to scan and extract the imaging signal of the single point light source that is currently lit.
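Of the three code formats claim 34 permits, the "specific number of pulses" variant is the simplest to illustrate. The encoder/decoder below is a hypothetical sketch; the pulse and gap durations are invented and no RF layer is modelled.

```python
# Hypothetical sketch of a pulse-count synchronisation code: the master emits
# N pulses to identify sync event N; the slave counts pulses up to the long
# inter-frame idle to recover the code. Timing constants are assumptions.

def encode_pulses(code: int, pulse_us: int = 50, gap_us: int = 50):
    """Return (level, duration_us) pairs: `code` pulses, then a long idle."""
    waveform = []
    for _ in range(code):
        waveform += [(1, pulse_us), (0, gap_us)]
    waveform.append((0, 10 * gap_us))   # inter-frame idle marks the frame end
    return waveform

def decode_pulses(waveform) -> int:
    """Count the high levels in one frame."""
    return sum(1 for level, _ in waveform if level == 1)

frame = encode_pulses(4)
assert decode_pulses(frame) == 4
```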
35. A three-dimensional virtual input and simulation device is characterized in that: the device is composed of the following components:
a plurality of point light source modules, wherein each module is composed of a plurality of point light sources and, taking the module as the unit, has uniqueness of light emitting time; the point light sources within a single module each have uniqueness of light intensity, of geometric size, or of wavelength; that is, when the plurality of modules emit light, the modules are lit alternately and individually, and all the point light sources within the active module simultaneously emit a plurality of approximately point-shaped divergent light sources;
a plurality of groups of one-dimensional optical positioners with visual axis tracking, wherein each group receives a synchronization trigger signal and, module by module, alternately and individually receives the light of a single module, i.e. at different time points it simultaneously receives the divergent light of all the point light sources within the single active module, measures the three-dimensional positions of all the point light sources and outputs a group of physical quantities; in addition, each group has visual axis tracking and positioning capability, automatically tracking the group center coordinate of the plurality of point light sources or the coordinate of any one of them and outputting its own visual axis angle so as to achieve visual axis tracking; it can also receive a visual axis angle to achieve visual axis positioning; and
a control analysis program, which is a software program that connects and controls all the one-dimensional optical positioners with visual axis tracking; it outputs a synchronization trigger signal to start all the positioners synchronously so that the three-dimensional positioning measurement is executed in synchrony; it can output a group of visual axis angles to position the visual axes of all the positioners; after receiving all the physical quantities and a group of visual axis angles, it simulates the input of an entity input device to achieve virtual input, and can also simulate the motion of a physical object to achieve the simulation purpose of a simulator.
36. A three-dimensional virtual input and simulation device is characterized in that: the device is composed of the following components:
a plurality of point light source modules, wherein each module is composed of a plurality of point light sources and, taking the module as the unit, has uniqueness of light wavelength; the point light sources within a single module have uniqueness of light emitting time; point light sources in the same module share the same emission wavelength, while point light sources in different modules have different emission wavelengths; when the modules emit light, they are synchronized so that, taking the single point light source as the unit, corresponding point light sources in all modules are lit alternately and individually, each emitting an approximately point-shaped divergent light source;
a plurality of groups of one-dimensional optical positioners with visual axis tracking, wherein each group receives a synchronization trigger signal and synchronously receives the light emitted by the currently lit single point light sources of all the modules, i.e. at different time points it receives the divergent light of one point light source from each module, measures the three-dimensional positions of all the point light sources and outputs a group of physical quantities; in addition, each group has visual axis tracking and positioning capability, automatically tracking the group center coordinate of the plurality of point light sources or the coordinate of any one of them and outputting its own visual axis angle so as to achieve visual axis tracking; it can also receive a visual axis angle to achieve visual axis positioning; and
a control analysis program, which is a software program that connects and controls all the one-dimensional optical positioners with visual axis tracking; it outputs a synchronization trigger signal to start all the positioners synchronously so that the three-dimensional positioning measurement is executed in synchrony; it can output a group of visual axis angles to position the visual axes of all the positioners; after receiving all the physical quantities and a group of visual axis angles, it simulates the input of an entity input device to achieve virtual input, and can also simulate the motion of a physical object to achieve the simulation purpose of a simulator.
37. A three-dimensional virtual input and simulation device is characterized in that: the device is composed of the following components:
a module with a plurality of point light sources, composed of a switch and a plurality of point light sources, wherein the switch lights the point light sources individually in a fixed-period, continuous and alternating manner, so that they emit approximately point-shaped divergent light;
a plurality of groups of one-dimensional optical positioners with visual axis tracking, wherein each group receives a synchronization trigger signal, continuously and alternately receives the divergent light of each point light source in turn, then measures the three-dimensional positions of all the point light sources and outputs a group of physical quantities; each group can also receive a visual axis angle to achieve visual axis positioning; and
a control analysis program, which is a software program that connects and controls all the one-dimensional optical positioners with visual axis tracking; it outputs a synchronization trigger signal to start all the positioners synchronously so that the three-dimensional positioning measurement is executed in synchrony; it can output a group of visual axis angles to position the visual axes of all the positioners; after receiving all the physical quantities and a group of visual axis angles, it simulates the input of an entity input device to achieve virtual input, and can also simulate the motion of a physical object to achieve the simulation purpose of a simulator.
38. The apparatus for three-dimensional virtual input and simulation of claim 37, wherein: the light emitting timing of the divergent light of the plurality of point light sources is detected by receiving, through an optical receiver, the divergent light emitted by all the point light sources and outputting the light emitting timing of one point light source; in addition, a microprocessor measures the period of that light emitting timing at a suitable time and synchronously generates a synchronous scanning signal with the same period (a sketch follows this claim).
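Claim 38 leaves the period-measurement method open. One plausible reading, sketched below with invented numbers, fits a line to the observed flash timestamps to recover period and phase, then schedules scan triggers with the same period.

```python
import numpy as np

# Hypothetical sketch: the optical receiver yields timestamps of one point
# light source's flashes; a least-squares line fit stands in for whatever
# period estimator the actual microprocessor firmware uses.

def estimate_period(timestamps: np.ndarray):
    """Fit t_k ~ phase + k * period to observed flash times t_k."""
    k = np.arange(len(timestamps))
    period, phase = np.polyfit(k, timestamps, 1)   # slope, intercept
    return period, phase

def next_scan_times(period: float, phase: float, start_k: int, n: int):
    """Schedule n future scan triggers locked to the measured period."""
    return phase + period * np.arange(start_k, start_k + n)

# Invented data: 10 ms flashing period with slight timing jitter.
observed = 0.002 + 0.010 * np.arange(20) + np.random.normal(0, 1e-5, 20)
period, phase = estimate_period(observed)
print(period)                                      # ~ 0.010 s
print(next_scan_times(period, phase, start_k=20, n=3))
```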
39. A three-dimensional virtual input and simulation device is characterized in that: the device is composed of the following components:
a plurality of point light sources, wherein all the point light sources emit approximately point-shaped divergent light simultaneously and continuously;
a plurality of groups of two-dimensional optical positioners with visual axis tracking, wherein each group receives a synchronization trigger signal, simultaneously receives the divergent light of all the point light sources, measures the three-dimensional positions of all the point light sources and outputs a group of physical quantities; each group can also receive a visual axis angle to achieve visual axis positioning; and
a control analysis program, which is a software program that connects and controls all the two-dimensional optical positioners with visual axis tracking; it outputs a synchronization trigger signal to start all the positioners synchronously so that the three-dimensional positioning measurement is executed in synchrony; it can output a group of visual axis angles to position the visual axes of all the positioners; after receiving all the physical quantities and a group of visual axis angles, it simulates the input of an entity input device to achieve virtual input, and can also simulate the motion of a physical object to achieve the simulation purpose of a simulator.
40. The apparatus for three-dimensional virtual input and simulation of claim 39, wherein: each of the plurality of point light sources has uniqueness of light intensity; preferably, the point light sources have the same light emitting radius but different light emitting intensities.
41. The apparatus for three-dimensional virtual input and simulation of claim 39, wherein: each of the plurality of point light sources has uniqueness of geometric size; preferably, the point light sources have different light emitting radii but the same light emitting intensity.
42. The apparatus for three-dimensional virtual input and simulation of claim 39, wherein: each of the plurality of point light sources has uniqueness of wavelength; preferably, the point light sources have different, non-overlapping light emitting wavelengths.
43. The apparatus for three-dimensional virtual input and simulation of claim 42, wherein: the number of the point light sources is three, and they respectively emit light of red, green and blue wavelengths.
44. The apparatus for three-dimensional virtual input and simulation of claim 39, wherein: the plurality of point light sources is a combination of point light sources having uniqueness of light intensity, uniqueness of geometric size and uniqueness of wavelength (a sketch of signature-based identification follows this claim).
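Claims 40 through 44 give each point light source a distinguishing signature: intensity, imaged size, wavelength, or a combination. The sketch below shows nearest-signature matching as one hypothetical way a detector could use such signatures to tell simultaneously lit sources apart; all signature values and names are invented.

```python
# Hypothetical signature table: each source is known in advance by its relative
# peak intensity, imaged blob radius (pixels) and wavelength band.
SIGNATURES = {
    "thumb":  {"intensity": 1.0, "radius": 2.0, "band": "red"},
    "index":  {"intensity": 0.6, "radius": 2.0, "band": "green"},
    "middle": {"intensity": 0.3, "radius": 2.0, "band": "blue"},
}

def identify(blob: dict) -> str:
    """Return the source whose signature best matches the measured blob."""
    def cost(sig):
        c = abs(sig["intensity"] - blob["intensity"])
        c += abs(sig["radius"] - blob["radius"])
        c += 0.0 if sig["band"] == blob["band"] else 10.0   # band must agree
        return c
    return min(SIGNATURES, key=lambda name: cost(SIGNATURES[name]))

print(identify({"intensity": 0.58, "radius": 2.1, "band": "green"}))  # index
```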
45. The apparatus for three-dimensional virtual input and simulation of claim 39, wherein: one of the plurality of groups of two-dimensional optical positioners with visual axis tracking is composed of the following components:
a plurality of two-dimensional position detectors, wherein each two-dimensional position detector calculates and outputs the two-dimensional average imaging positions of all the point light sources according to a received synchronous scanning signal and the received divergent light of all the point light sources;
a positioning calculation control microprocessor, which contains a positioning calculation control program and connects to and controls all the two-dimensional position detectors and a two-axis angle control device; the positioning calculation control program receives the synchronization trigger signal, the two-dimensional average imaging positions, a visual axis angle and two angle electric signals, and calculates and outputs a synchronous scanning signal, a group of physical quantities, a visual axis angle and a visual axis angle driving control signal;
a signal transmission interface, which is a wired or wireless transmission device for transmitting the group of physical quantities, the visual axis angle and the synchronization trigger signal;
a group of positioning calibration point light sources, composed of a plurality of point light sources fixed at known positions on the two-dimensional optical positioner fixing mechanism, used for the spatial position and visual axis angle positioning of the two-dimensional optical positioner;
a two-dimensional optical positioner fixing mechanism, which is a mechanical structure that holds the plurality of two-dimensional position detectors, the positioning calculation control microprocessor, the signal transmission interface and the group of positioning calibration point light sources, and is connected to a two-axis rotating mechanism in the two-axis angle control device so as to achieve two-axis rotation; and
a two-axis angle control device, which, after receiving the visual axis angle driving control signal, drives two actuators according to the amount of the driving control signal so as to rotate the two-axis rotating mechanism and the two angle measuring devices; the two angle measuring devices feed back two angle electric signals according to the actual rotation angles for closed-loop two-axis angle positioning control; the two-axis rotating mechanism rotates the two-dimensional optical positioner fixing mechanism to change the visual axis angle of the two-dimensional optical positioner.
46. The apparatus for three-dimensional virtual input and simulation of claim 45, wherein: one of the plurality of two-dimensional position detectors is composed of the following components:
a two-dimensional optical component group, composed of a filter, a circular aperture and a two-dimensional optical lens;
a two-dimensional optical sensor, composed of a two-dimensional light sensing array, a scanning and reading electronic circuit and an analog-to-digital converter; according to a scanning signal, the scanning and reading electronic circuit sequentially and continuously reads out the light-sensing analog voltage of each sensing pixel on the two-dimensional light sensing array, and the analog-to-digital converter converts the analog voltage into a digital voltage, which constitutes a two-dimensional imaging superposition signal;
a signal microprocessor, which connects to and controls the two-dimensional optical sensor and, after receiving the synchronous scanning signal, executes a signal processing program so as to generate the scanning signal, read the two-dimensional imaging superposition signal, and calculate and output the two-dimensional average imaging positions of all the point light sources; and
a two-dimensional position detector device mechanism, which is a mechanical structure that holds the two-dimensional optical component group, the two-dimensional optical sensor and the signal microprocessor, and is itself fixed to the two-dimensional optical positioner fixing mechanism.
47. The apparatus for three-dimensional virtual input and simulation of claim 46, wherein: the two-dimensional optical sensor is composed of a two-dimensional light sensing array, a random reading electronic circuit and an analog-to-digital converter, wherein the random reading electronic circuit performs random data reading on any pixel through a microprocessor, a row decoding controller and a column decoding controller.
48. The apparatus for three-dimensional virtual input and simulation of claim 46, wherein: the signal processing program is composed of the following programs:
a synchronous data reading program, which, according to the timing of the received synchronous scanning signal, outputs a scanning signal after a suitable delay to acquire and record the two-dimensional imaging superposition signal, which comprises the two-dimensional effective imaging signals of all the point light sources plus a two-dimensional dynamic background light signal;
a two-dimensional dynamic background light signal removing program, composed of a temporal two-dimensional ambient light interference removing program and a spatial two-dimensional ambient light interference removing program, which removes the two-dimensional dynamic background light from the two-dimensional imaging superposition signal and outputs the two-dimensional effective imaging signals of all the point light sources;
a point light source two-dimensional imaging signal identification and correspondence program, which identifies and analyzes the two-dimensional effective imaging signal of each point light source and its two-dimensional correspondence through a two-dimensional threshold comparison program or a two-dimensional waveform detection program; the two-dimensional waveform detection program performs the identification, analysis and correspondence according to the two-dimensional distribution standard deviation, central intensity and waveform slope characteristics of the point light source's two-dimensional effective imaging signal; and
a point light source two-dimensional average imaging position calculation program, which applies maximum-signal pixel position analysis, two-dimensional Gaussian fitting analysis or two-dimensional statistical analysis to the identified two-dimensional effective imaging signal of each point light source, so as to calculate and output the two-dimensional average imaging positions of all the point light sources (a sketch follows this claim).
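The last two steps of claim 48 (threshold identification and the "two-dimensional statistics" position estimate) can be illustrated with an intensity-weighted centroid, one of the three options the claim lists. The sketch below substitutes scipy's connected-component labelling for the unspecified correspondence procedure and runs on synthetic data; it is an illustration, not the patent's implementation.

```python
import numpy as np
from scipy import ndimage

def imaging_centroids(frame: np.ndarray, threshold: float):
    """Threshold-identify each point light source's effective imaging signal,
    then return its intensity-weighted centroid (row, col)."""
    mask = frame > threshold                # two-dimensional threshold comparison
    labels, n = ndimage.label(mask)         # one label per point light source
    cents = ndimage.center_of_mass(frame, labels, range(1, n + 1))
    return [(float(r), float(c)) for r, c in cents]

# Synthetic frame with two Gaussian spots (placeholder for sensor data).
y, x = np.mgrid[0:64, 0:64]
frame = np.exp(-((y - 20.0) ** 2 + (x - 15.0) ** 2) / 8.0)
frame += 0.5 * np.exp(-((y - 45.0) ** 2 + (x - 50.0) ** 2) / 8.0)
print(imaging_centroids(frame, threshold=0.1))   # ~ [(20, 15), (45, 50)]
```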
49. The apparatus for three-dimensional virtual input and simulation of claim 48, wherein: the two-dimensional dynamic background light signal removing program can be implemented in hardware by adding, besides the two-dimensional optical sensor, another two-dimensional optical sensor for measuring background noise; the two-dimensional dynamic background light signal is acquired separately by scanning and reading this sensor with the synchronous signal and applying suitable signal amplification, and is then subtracted from the two-dimensional imaging superposition signal so as to remove the two-dimensional dynamic background light; in addition, a suitable optical filter is mounted on the background-noise light sensing array, which blocks the light of all the point light sources but allows ambient light to pass.
50. The apparatus for three-dimensional virtual input and simulation of claim 48, wherein: the temporal two-dimensional ambient light interference removing program is a software method in which a subtraction is performed between the two-dimensional imaging superposition signals obtained by two consecutive scans, so as to remove the two-dimensional dynamic background light signal (a sketch follows this claim).
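A minimal sketch of claim 50's frame-to-frame subtraction, assuming the ambient background is essentially unchanged between the two consecutive scans; the data and array sizes are invented.

```python
import numpy as np

# Because the point light sources alternate while ambient light persists,
# subtracting two consecutively scanned frames cancels the (slowly varying)
# background and leaves the effective imaging signal.

def remove_background(frame_lit: np.ndarray, frame_dark: np.ndarray) -> np.ndarray:
    """frame_lit: scan with a source lit; frame_dark: adjacent scan without it."""
    diff = frame_lit.astype(np.int32) - frame_dark.astype(np.int32)
    return np.clip(diff, 0, None)           # keep only the positive residue

ambient = np.random.randint(40, 60, size=(8, 8))
spot = np.zeros((8, 8), dtype=int)
spot[3, 4] = 200                            # the lit point light source
cleaned = remove_background(ambient + spot, ambient)
assert cleaned[3, 4] == 200 and cleaned.sum() == 200
```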
51. The apparatus for three-dimensional virtual input and simulation of claim 48, wherein: the spatial two-dimensional ambient light interference removing program is a two-dimensional Fourier signal processing program: after the temporal two-dimensional ambient light interference removing program has been executed, the resulting data is subjected to a two-dimensional Fourier transform; in the frequency domain, two-dimensional band-pass filtering is performed, i.e. unwanted frequencies are filtered out and the remaining ones amplified; an inverse two-dimensional Fourier transform is then performed, so as to remove the spatial two-dimensional ambient light interference (a sketch follows this claim).
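Claim 51's frequency-domain step admits a direct reading: forward 2-D FFT, annular band-pass with optional gain, inverse FFT. The band limits and gain below are illustrative assumptions, not values from the patent.

```python
import numpy as np

def bandpass_2d(frame: np.ndarray, f_lo: float, f_hi: float, gain: float = 1.0):
    """Keep (and amplify) an annular band of spatial frequencies, removing
    slowly varying ambient residue and high-frequency noise."""
    F = np.fft.fftshift(np.fft.fft2(frame))
    rows, cols = frame.shape
    fy = np.fft.fftshift(np.fft.fftfreq(rows))[:, None]   # cycles per sample
    fx = np.fft.fftshift(np.fft.fftfreq(cols))[None, :]
    radius = np.hypot(fy, fx)
    F *= gain * ((radius >= f_lo) & (radius <= f_hi))     # annular band-pass
    return np.real(np.fft.ifft2(np.fft.ifftshift(F)))

frame = np.random.rand(64, 64)                            # placeholder data
filtered = bandpass_2d(frame, f_lo=0.05, f_hi=0.25, gain=2.0)
print(filtered.shape)                                     # (64, 64)
```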
52. The apparatus for three-dimensional virtual input and simulation of claim 45, wherein: the program for positioning calculation control is composed of the following programs:
a synchronous scanning program, which generates and outputs a periodic synchronous scanning signal according to the timing of the synchronization trigger signal, synchronously driving all the two-dimensional position detectors to execute the signal processing program;
a physical quantity calculating program, which, after acquiring the two-dimensional average imaging positions of the point light sources output by all the two-dimensional position detectors, calculates and outputs a group of physical quantities comprising the individual physical quantities, group physical quantities, relative physical quantities and other physical quantities of all the point light sources; and
a visual axis control program, which calculates and outputs a visual axis angle and a visual axis angle driving control signal according to the individual physical quantities, the group physical quantities or a received visual axis angle, while using the two received angle electric signals as angle feedback to achieve accurate visual axis angle positioning control (a sketch follows this claim).
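Claim 52's visual axis control program is, in effect, a pointing servo with angle feedback. The sketch below shows one hypothetical proportional control step toward the group centre; the gimbal geometry, gain and frame conventions are assumptions, not the patent's design.

```python
import numpy as np

def target_angles(group_center: np.ndarray):
    """Pan and tilt (radians) aiming the visual axis at a point expressed in
    the positioner frame (x right, y up, z along the untilted axis)."""
    x, y, z = group_center
    pan = np.arctan2(x, z)
    tilt = np.arctan2(y, np.hypot(x, z))
    return pan, tilt

def control_step(target, measured, k_p: float = 0.5):
    """One proportional step toward the target, using the two fed-back angle
    electric signals (here: the measured pan/tilt) for error correction."""
    return tuple(k_p * (t - m) for t, m in zip(target, measured))

pan_t, tilt_t = target_angles(np.array([0.2, 0.1, 1.0]))
drive = control_step((pan_t, tilt_t), measured=(0.0, 0.0))
print(pan_t, tilt_t, drive)
```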
53. The apparatus for three-dimensional virtual input and simulation of claim 39, wherein: the program for controlling analysis is integrated into the host of a general personal computer, notebook computer, PDA, mobile phone or game machine, or into video playing and converting equipment.
54. The apparatus for three-dimensional virtual input and simulation of claim 39, wherein: the simulated entity input device is a mouse, a keyboard, a remote controller or a touch screen.
55. The apparatus for three-dimensional virtual input and simulation of claim 45, wherein: the two-dimensional optical positioner fixing mechanism is the casing of another existing device, such as a notebook computer, PDA, game console host, mobile phone, liquid crystal display, plasma display, television, projector, optical camera, optical telescope, automobile or motorcycle; that is, the plurality of two-dimensional position detectors, the positioning calculation control microprocessor, the signal transmission interface and the group of positioning calibration point light sources can likewise be mounted on the casing of the other existing device.
CN2008102143199A 2008-08-22 2008-08-22 Device for three-dimensional virtual input and simulation Expired - Fee Related CN101655739B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN2008102143199A CN101655739B (en) 2008-08-22 2008-08-22 Device for three-dimensional virtual input and simulation

Publications (2)

Publication Number Publication Date
CN101655739A 2010-02-24
CN101655739B CN101655739B (en) 2012-07-04

Family

ID=41710047

Country Status (1)

Country Link
CN (1) CN101655739B (en)

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
NO300943B1 (en) * 1995-04-03 1997-08-18 Steinar Pedersen Tools for positioning and controlling objects in two or three dimensions
CN1587900A (en) * 2004-07-09 2005-03-02 中国科学院计算技术研究所 Three dimension surface measuring method and device

Cited By (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI464652B (en) * 2011-08-05 2014-12-11 Pixart Imaging Inc Image sensor and optical touch panel system having the same
CN102981599A (en) * 2011-09-05 2013-03-20 硕擎科技股份有限公司 Three-dimensional human-computer interface system and method thereof
US9791935B2 (en) 2013-04-09 2017-10-17 Ams Ag Method for gesture detection, optical sensor circuit, in particular an optical sensor circuit for gesture detection, and optical sensor arrangement for gesture detection
CN105103085B (en) * 2013-04-09 2018-09-21 ams有限公司 Method for gestures detection, the optical sensor circuit for gestures detection and the optical sensor arrangement for gestures detection
CN105103085A (en) * 2013-04-09 2015-11-25 ams有限公司 Method for gesture detection, optical sensor circuit, in particular an optical sensor circuit for gesture detection, and optical sensor arrangement for gesture detection
WO2015062251A1 (en) * 2013-10-31 2015-05-07 京东方科技集团股份有限公司 Display device and control method therefor, and gesture recognition method
CN103529947A (en) * 2013-10-31 2014-01-22 京东方科技集团股份有限公司 Display device and control method thereof and gesture recognition method
CN110045824A (en) * 2014-02-10 2019-07-23 苹果公司 It is inputted using the motion gesture that optical sensor detects
US11422635B2 (en) 2014-02-10 2022-08-23 Apple Inc. Optical sensing device
CN105630232A (en) * 2014-11-26 2016-06-01 阿尔卑斯电气株式会社 Input device, and control method and program therefor
CN105630232B (en) * 2014-11-26 2019-04-05 阿尔卑斯阿尔派株式会社 The control method and program of input unit, input unit
CN110067972A (en) * 2014-12-26 2019-07-30 麦克赛尔株式会社 Lighting device
TWI675217B (en) * 2016-12-26 2019-10-21 宏達國際電子股份有限公司 Positioning system and method thereof
CN108452514A (en) * 2017-02-18 2018-08-28 饶涛 A kind of billiard table and application method
CN111589099A (en) * 2020-04-02 2020-08-28 深圳创维-Rgb电子有限公司 Laser induction system and laser induction method

Also Published As

Publication number Publication date
CN101655739B (en) 2012-07-04

Legal Events

Code Title
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee (granted publication date: 20120704; termination date: 20140822)
EXPY Termination of patent right or utility model