WO1996024116A1 - Process for obtaining the maximum information acquisition for output signals, especially control signals for automatic machines - Google Patents

Info

Publication number: WO1996024116A1
Authority: WIPO (PCT)
Application number: PCT/DE1996/000209
Other languages: German (de), French (fr)
Inventors: Hans-Otto Carmesin, Christoph Herwig
Original assignees: Carmesin Hans Otto, Christoph Herwig
Priority applications: DE19503606 (Critical), DE19503606.9, DE19509277.5, DE19509277A (granted as DE19509277C1)
Application filed by: Carmesin Hans Otto, Christoph Herwig
Publication: WO1996024116A1


Classifications

    • G06T 5/20 — Physics; Computing; Image data processing or generation: image enhancement or restoration by the use of local operators
    • G06T 7/269 — Physics; Computing; Image data processing or generation: image analysis; analysis of motion using gradient-based methods
    • G01S 17/89 — Physics; Measuring; Lidar systems specially adapted for mapping or imaging

Abstract

A process for obtaining the maximum information gain for output signals, especially control signals for automatic machines, and especially for determining relative positions and movements between arbitrary objects and an imaging sensor.

Description

A method for generating maximum information gain for output signals, in particular control signals for machines


The invention relates to a method for generating maximum information gain for output signals, in particular control signals for machines.

An automatic control generally requires sensor data in order to determine control signals that are as reliable and precise as possible. All sensor data are associated with a limited accuracy. This raises the question of how sensor data y_i acquired with a limited measurement accuracy can be automatically selected and combined to provide data x_k that are as reliable and precise as possible, as control signals or for the direct determination of control signals. This means that sensor signal data are frequently acquired which do not directly provide the desired information. Classically, this problem is treated in numerical analysis using the adjustment calculation (regression analysis) (J. Stoer. Introduction to Numerical Mathematics I. Springer Verlag, 3rd edition, Berlin, 1979.). There, the problem of how a quantity of interest can be determined without it being possible to measure it directly is worked out exactly. The procedure is indirect, as follows: instead of the desired quantity, another, more accessible measurable quantity is measured, which depends on the desired quantity in a known, lawful manner. One performs a number of measurements, and the measurement results provide a system of equations. Typically this is an over- or underdetermined system of equations, just as the measured values are typically subject to measurement errors. The task, then, is to solve this system of equations, if not exactly, then at least "as well as possible". As a quality measure for estimating the required parameters, the maximum deviation or the sum of squared deviations is usually used. Applied, for example, to a robot-control task of determining relative positions and movements indirectly from the directly available image data of an imaging sensor, the measurement sites consist of adjacent pixels, the measurement corresponds to the acquisition of intensities, and one calculates the intensity gradients. The aim is to produce the maximum information gain from the available sensor signal data.
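A minimal sketch of this indirect adjustment procedure (the linear relation y_i = a_i · x and all concrete numbers are our own illustrative assumptions, not taken from the publication): several noisy measurements of a more accessible quantity are combined into one estimate of the desired quantity by minimizing the sum of squared deviations.

```python
def least_squares_1d(a, y):
    """Estimate a single unknown x from the overdetermined system
    y_i = a_i * x by minimizing sum_i (a_i * x - y_i)**2.
    Setting the derivative to zero gives x = sum(a_i*y_i) / sum(a_i**2)."""
    num = sum(ai * yi for ai, yi in zip(a, y))
    den = sum(ai * ai for ai in a)
    return num / den

# True value x = 2.0; the coefficients a_i are known, the y_i slightly noisy.
a = [1.0, 2.0, 3.0]
y = [2.1, 3.9, 6.1]
x_hat = least_squares_1d(a, y)   # close to 2.0
```

More measurements of limited accuracy thus yield a more reliable estimate than any single measurement, which is the starting point of the method described below.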

With such a generic method, for example, the problem known as "structure from motion" can be solved. It may be formulated generally as follows: given a sequence of images captured by a monocular observer, reconstruct from it both the shape and relative depth of objects in the scene and the relative three-dimensional rotational and translational motion of objects moving independently with respect to the observer.

The problem can in principle be extended from "visual" sensors to the general class of "imaging sensors". What matters is only the high resolution, for obtaining dense and thus locally evaluable data, and the knowledge of the underlying projective imaging geometry. The notion of measuring "intensity", used for visual sensors, can be extended to all sensors whose objective is the acquisition of surface reflection properties (such as sensors for different wavelength ranges).

The emphasis above on "monocular" image sequences can be relaxed if the three-dimensional orientation and position of the sensors (and hence the correspondence of the individual image elements) is well known. The depth information may, at least up to a distance of about 5 m, also be obtained binocularly, that is, by methods of stereo vision. High-resolution "active" sensors, for example 2D laser scanners, which will revolutionize the robotics area in the future owing to their increasingly compact design, make it possible to recover point-wise depth information.

In order to approach the problems systematically and concretely, one should first become aware that all procedures are, in principle, ultimately based on hypotheses. This raises the question of whether these are practically always fulfilled, or can easily be fulfilled. The following are considered typical, practically universal hypotheses:

• intensity constancy: the intensity of an object point does not change from frame to frame,

• the objects are modelled with Lambertian surfaces, i.e. the intensity does not change with a changing observer standpoint,

• the scene is composed of solids, that is, of non-deformable objects.

Among the high-resolution sensors, the best-explored methods are those using cameras. Therefore, the state of the art in image processing should be discussed below as a prototype.

Research in the field of visual motion perception proceeded essentially in three stages (C. Fermüller. Navigational preliminaries. In Y. Aloimonos (ed.), Active Perception, Chapter 3, pages 103-150. Lawrence Erlbaum, 1993.). First, solutions to "structure from motion" were sought at all. Later, the resolution of ambiguities in the resulting solutions was addressed. Today, the search for robust solutions is in the foreground, which is linked in particular to safety aspects of the applications (e.g. mobile robotics, see below). With regard to safety aspects, heuristic methods are fundamentally problematic. Finally, in many applications, such as robotics, the method must not only work safely while guaranteeing a high resolution, but must also do so quickly, in real time and with limited CPU time.

Traditionally, the problem is considered an essential part of the task of general vision systems, i.e. those of universal usability for solving common visual problems. Their specific objective is to reconstruct the scene completely (D. Marr. Vision. W. H. Freeman, San Francisco, CA, 1982; B. K. P. Horn. Robot Vision. MIT Press and McGraw-Hill, 1986.). The reconstruction approaches are varied, and the majority can be described by the following modular and hierarchical structure (Y. Aloimonos (ed.). Active Perception. Computer Vision. Lawrence Erlbaum, 1993.):

1. Calculation of the optical flow field, i.e. the projection of the movement of the scene relative to the camera into the image plane.

2. Segmentation of the flow field into areas of different relative movements, and from this, calculation of the 3D movements.

3. Comprehensive calculation of the relative depths and thus of the objects' surface normals, that is, of the scene structure.

Examples of these methods can be found in (S. Ullman. The Interpretation of Visual Motion. MIT Press, Cambridge, USA, 1979; H. C. Longuet-Higgins. A computer algorithm for reconstructing a scene from two projections. Nature, 293:133-135, September 1981; G. Adiv. Determining three-dimensional motion and structure from optical flow generated by several moving objects. IEEE Transactions on Pattern Analysis and Machine Intelligence, 7(4):384-401, July 1985; M. E. Spetsakis and J. Y. Aloimonos. Structure from motion using line correspondences. Int. J. of Computer Vision, 4:171-183, 1990.). No robust and rapid method is in industrial use.

For some time now, research in the field of image processing has questioned the sense of striving for the development of general vision systems. As an alternative, Aloimonos (Y. Aloimonos (ed.). Active Perception. Computer Vision. Lawrence Erlbaum, 1993.) proposes the development of task-driven vision systems, which make general rather than specific assumptions about the scene, reconstruct it only partially instead of completely, and try to solve special instead of general problems (Y. Aloimonos, E. Rivlin, and L. Huang. Designing visual systems: purposive navigation. In Y. Aloimonos (ed.), Active Perception, Chapter 2, pages 47-102. Lawrence Erlbaum, 1993.). The calculation of optical flow fields, which is the most costly, error-prone and/or heuristic-laden step (J. L. Barron, D. J. Fleet, and S. S. Beauchemin. Performance of optical flow techniques. Int. J. of Computer Vision, 12(1):43-77, 1994.), could in this case be bypassed, for example.

An important application of the generic method lies, for example, in robotic systems. Here, for the fundamentally important task of collision-free maneuvering, a complete reconstruction of the scene is not necessary. For successful obstacle avoidance, only the robot's own reaction time must be correlated with the time to a potential collision. With regard to the required timely response to emergency situations, it is also necessary to develop and deploy real-time methods. Real-time capability in this context means that the robot can move at the usual speeds. Computationally intensive processes are therefore ruled out, and even though image processing hardware that is fast by today's standards is used, one still comes up against limits.

Safe maneuvering is nowadays often achieved by telemanipulation, i.e. control of the robot under direct human control, and by guides embedded in the floor. In the future, autonomous robots are demanded which move reliably in unstructured environments. Two approaches are being pursued (C. E. Thorpe. Mobile robots. Int. Journal of Pattern Recognition and Artificial Intelligence, 5(3):383-397, 1991.): first, scene manipulation by positioning landmarks; second, use of sensors to set up internal maps.

The active sensors used industrially are usually limited to low-resolution sensors, such as ultrasound and infrared, and to expensive 1D laser scanners. The use of high-resolution sensor technology is nowadays rare in the industrial sector and occurs only in highly controlled environments. In particular, the increasingly cheap cameras provide sufficient resolution and the possibility of extracting additional information, such as the relative movement. In research, the use of cameras for mobile robotics is popular; however, there is a lack of robust methods.

The invention is therefore based on the object of relating measured quantities, acquired with a constant measurement precision, and desired quantities through a predetermined set of equations, in such a way that the information gain for the desired quantities is maximal.

According to the invention, the object is achieved by a method for generating maximum information gain for output signals, in particular control signals for machines, comprising the steps of:

• measurement of sensor data y_i in the form of tuples having at least one component each, and acquisition of input data for the purpose of determining the desired data x_k with the aid of a functional relationship 0 = F_i(y_i, x),

• determination of the error propagation that occurs, in a linear approximation with practically determinable partial derivatives,

Δx_k ≈ Σ_i (∂x_k / ∂y_i) · Δy_i

• reading of positive utility parameters v_k for the x_k, which characterize the relevance of the desired quantities for the control signals, and reading of at least one guaranteed measurement accuracy,

• joint determination of the x_k with the aid of weighting factors γ_j, where Σ_j γ_j = constant, such that the summed squared error using the weighting factors,

Σ_j γ_j F_j²(y_j, x),

is almost minimal, and of the weighting factors γ_j such that the essential information gain is almost maximal, the essential information gain being the sum of the logarithms weighted with the v_k:

G = Σ_k v_k · log₂(|x_k| / Δx_k)
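The joint determination can be sketched numerically as follows (a drastically simplified illustration under our own assumptions: two desired quantities, three linear relations F_j(y_j, x) = a_j · x − y_j with per-equation measurement errors, and weighting factors scanned over a small grid with Σ_j γ_j held constant; none of the concrete numbers come from the publication):

```python
import itertools
import math

def mat2_inv(m):
    """Inverse of a 2x2 matrix."""
    (a, b), (c, d) = m
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def mat2_mul(p, q):
    """Product of two 2x2 matrices."""
    return [[sum(p[i][k] * q[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def weighted_fit(A, y, w, dy):
    """Weighted least squares for two unknowns and linear error propagation:
    N = A^T W A, b = A^T W y, x = N^-1 b,
    Cov(x) = N^-1 (A^T W diag(dy^2) W A) N^-1."""
    N = [[sum(wj * aj[r] * aj[c] for wj, aj in zip(w, A))
          for c in range(2)] for r in range(2)]
    M = [[sum(wj * wj * dyj * dyj * aj[r] * aj[c]
              for wj, dyj, aj in zip(w, dy, A))
          for c in range(2)] for r in range(2)]
    b = [sum(wj * aj[r] * yj for wj, aj, yj in zip(w, A, y)) for r in range(2)]
    Ninv = mat2_inv(N)
    x = [Ninv[r][0] * b[0] + Ninv[r][1] * b[1] for r in range(2)]
    C = mat2_mul(mat2_mul(Ninv, M), Ninv)
    dx = [math.sqrt(C[k][k]) for k in range(2)]
    return x, dx

def essential_gain(x, dx, v):
    """Essential information gain: v_k-weighted sum of reliable binary digits."""
    return sum(vk * math.log2(abs(xk) / dxk) for vk, xk, dxk in zip(v, x, dx))

# Three relations for two unknowns: x1 = y1, x2 = y2, x1 + x2 = y3.
A = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
y = [2.0, 1.0, 3.1]            # measured values
dy = [0.05, 0.3, 0.1]          # per-equation measurement errors
v = [3.0, 1.0]                 # x1 is three times as relevant as x2
best = None
for w in itertools.product([0.2, 1.0, 1.8], repeat=3):
    if abs(sum(w) - 3.0) > 1e-9:   # keep the sum of the weights constant
        continue
    x, dx = weighted_fit(A, y, w, dy)
    g = essential_gain(x, dx, v)
    if best is None or g > best[0]:
        best = (g, w, x)
g_best, w_best, x_best = best
```

The loop plays the role of the joint optimization: each candidate solution minimizes the weighted squared error, and among all admissible weightings the one with the largest essential information gain is kept.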

It can be provided that

• the sensor data y_i correspond to intensities I(x, y, t) of picture elements (x, y) at times t of an imaging sensor, so that the indices i of the sensor data y_i represent triples (x, y, t),

• the desired quantities x_1, x_2, x_3, x_4, x_5 and x_6 represent, respectively, an x-rotation, y-rotation, z-rotation, depth-scaled x-translation, depth-scaled y-translation and depth-scaled z-translation between an arbitrary object and the imaging sensor, based on a corresponding image sequence, and

• the functions F_i correspond to the geometry of the imaging sensor and to the proviso that the total intensity of objects remains constant,

• furthermore, an acquisition of the spatial and temporal intensity derivatives I_x, I_y and I_t of the intensities I(x, y, t) of the pixels (x, y) of the imaging sensor is carried out as the input data,

• furthermore, a guaranteed relative measurement accuracy ΔI/I of the sensor,

• utility parameters v_k for the motion quantities x_1, x_2, x_3, x_4, x_5, x_6, and

• image regions are read in in addition, and

• weighting patterns are read in for the determination of the relative rotations and depth-scaled translations.
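The concrete functions F_i follow from the projection geometry (see Fig. 7) and are not reproduced here; as a hedged, drastically simplified stand-in, the following sketch restricts the motion to a pure image-plane translation (u, v), so that intensity constancy yields one linear equation I_x·u + I_y·v + I_t = 0 per region pixel, which are combined in the least-squares sense:

```python
def region_motion(Ix, Iy, It):
    """Least-squares motion estimate (u, v) for one pixel region from the
    intensity-constancy equations Ix[j]*u + Iy[j]*v + It[j] = 0,
    one per region pixel j, solved via the 2x2 normal equations."""
    sxx = sum(ix * ix for ix in Ix)
    sxy = sum(ix * iy for ix, iy in zip(Ix, Iy))
    syy = sum(iy * iy for iy in Iy)
    sxt = sum(ix * it for ix, it in zip(Ix, It))
    syt = sum(iy * it for iy, it in zip(Iy, It))
    det = sxx * syy - sxy * sxy
    u = (-syy * sxt + sxy * syt) / det
    v = (sxy * sxt - sxx * syt) / det
    return u, v

# Synthetic region with true motion (u, v) = (1, -2): It = -(Ix*u + Iy*v).
Ix = [1.0, 0.0, 2.0, 1.0]
Iy = [0.0, 1.0, 1.0, 2.0]
It = [-1.0, 2.0, 0.0, 3.0]
u, v = region_motion(Ix, Iy, It)
```

The full method works analogously with six motion quantities, correspondingly more equations per region, and optimized rather than uniform weights.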

Further, it can be provided that the acquisition of the spatial and temporal intensity derivatives is carried out by means of the following steps:

• measurement of the intensity distribution I(x, y, t) of picture elements (x, y) of an image area of the imaging sensor at at least two successive points in time t,

• formation of the associated intensity derivatives I_x, I_y and I_t of the intensity distributions, with respect to the pixel coordinates x, y from at least two neighbouring pixels, or with respect to the time t from at least two successive images.
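These two steps can be sketched as follows (a minimal illustration with our own synthetic frames: central differences over neighbouring pixels for I_x and I_y, and a forward difference over two successive frames for I_t):

```python
def intensity_derivatives(frame0, frame1, x, y, dt=1.0):
    """Finite-difference estimates of Ix, Iy, It at pixel (x, y):
    spatial derivatives from neighbouring pixels of frame0,
    temporal derivative from the same pixel in two successive frames."""
    Ix = (frame0[y][x + 1] - frame0[y][x - 1]) / 2.0   # central difference in x
    Iy = (frame0[y + 1][x] - frame0[y - 1][x]) / 2.0   # central difference in y
    It = (frame1[y][x] - frame0[y][x]) / dt            # forward difference in t
    return Ix, Iy, It

# Example: an intensity ramp I = 3x + 2y that brightens by 5 per frame.
f0 = [[3 * x + 2 * y for x in range(5)] for y in range(5)]
f1 = [[v + 5 for v in row] for row in f0]
Ix, Iy, It = intensity_derivatives(f0, f1, 2, 2)   # gives (3.0, 2.0, 5.0)
```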

Alternatively, it can be provided that the acquisition of the spatial and temporal intensity derivatives is carried out by the following steps:

• direct conversion of the measured intensity I(x, y, t) into the partial time derivative I_t = ∂I/∂t by a conventional analogue electronic differentiator circuit, wherein, for the additional measurement of the spatial intensity derivatives I_x and I_y, high-frequency micro-movements, so-called microsaccades, are impressed on the imaging sensor.

Further, it can be provided that the microsaccades are generated by placing a piezo-electrically controlled mirror in the parallel beam path of the incident image.

On the other hand, it can also be provided that the microsaccades are generated by placing liquid crystals with a varying refractive index in the parallel beam path of the incident image.

Alternatively, it can be provided that the microsaccades are generated by mechanical, magnetically driven oscillations of a mirror in the parallel beam path of the incident image.

Furthermore, it can alternatively also be provided that the microsaccades are realized by piezo-electrically controlled oscillations of the imaging chip.

In a particular embodiment of the invention, it can be provided that the imaging sensor additionally determines the signal propagation time for twice the distance from the imaging sensor to an object point projected onto the image point (x, y).

Preferably pattern markings are also used in the scene.

Furthermore, it can be provided that the weighting patterns are determined on the basis of the maximum essential information gain.

Alternatively, it can be provided that the weighting patterns are determined in advance on the basis of selected image material, wherein a classification of the possible measured intensities and a determination of a weighting pattern with a relatively high essential information gain for each class are carried out.

Further, it can also be provided that the weighting patterns are determined on the basis of a transformation of the measured intensity gradients to read-in target patterns, comprising the steps of:

• reading of special process parameters such as conjugate image points, target patterns, modulated multipoles and symmetrized modulated multipole equations,

• implementation of the special process parameters, comprising the determination of the symmetry factors P_q, a calculation of the symmetry coefficients

Figure imgf000014_0001

and d_t^q, I_x and I_y for the selected multipoles, a determination of the conjugation factors, and a determination of the selected transformed modulated multipoles for the intensity gradients from the selected target patterns,

• transformation of the measured intensity gradients to a linear intensity gradient combined with the target-pattern orientation, by calculating the linear factors α_k(j) for each pixel of an image region conjugate to a point,

• determination of normalization factors p_j^s for a linear transformation of the combined intensity gradients to the target pattern, by normalizing the combined linear intensity gradients in accordance with the target pattern,

• determination of the transformed modulated multipoles for the temporal intensity derivatives I_t by determining the effective temporal intensity derivatives from the measured temporal intensity derivatives, the linear factors and the normalization factors; determining the transformed modulated multipole elements of the effective temporal intensity derivatives; summing the transformed modulated multipole elements to the transformed modulated multipoles for the temporal intensity derivatives; and setting up the selected symmetrized modulated multipole equations from the transformed modulated multipoles for the intensity derivatives I_x, I_y and I_t.

Furthermore, it can also be provided that a determination of the x-rotation, y-rotation and depth-scaled z-translation, and then of the z-rotation, depth-scaled x-translation and depth-scaled y-translation, is performed separately.

Furthermore, it can also be provided that desired relative accuracies q_k are converted into utility parameters v_k by means of the following steps:

Figure imgf000015_0001

• reading of the desired accuracies q_k,

• determination of the v_k by

Figure imgf000015_0002

Finally, it can also be provided that the joint optimization problem of minimizing

Σ_j γ_j F_j²(y_j, x)

and of maximizing the essential information gain is essentially solved in advance by the following steps:

• characterization of all possible configurations of the input data,

• selection of representative configurations of the input data,

• for each representative configuration of the input data, solution of the optimization problem of the joint determination of the desired data x_k and of the weighting factors γ_j, where Σ_j γ_j = constant, such that

Σ_j γ_j F_j²(y_j, x)

is almost minimal and the essential information gain

G = Σ_k v_k · log₂(|x_k| / Δx_k)

is almost maximal,

• creation of a classification of all possible configurations of input data, with one representative configuration for each class,

• for each class, assignment of the solution of the optimization problem of the associated representative configuration,

• creation of a file with this assignment.
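A toy sketch of this precomputation scheme (the class names, thresholds and weighting patterns are all our own illustrative assumptions): the expensive optimization is replaced online by classifying the input-data configuration and looking up a weighting pattern that was determined offline for the representative of that class.

```python
def classify(config):
    """Toy classifier for input-data configurations (here: the mean
    gradient-magnitude level of a region; classes are illustrative)."""
    mean = sum(config) / len(config)
    if mean < 1.0:
        return "weak_gradients"
    if mean < 10.0:
        return "medium_gradients"
    return "strong_gradients"

# Offline result: one precomputed weighting pattern per class (values are
# illustrative), playing the role of the stored assignment file.
assignment = {
    "weak_gradients":   [0.5, 2.0, 0.5],
    "medium_gradients": [1.0, 1.0, 1.0],
    "strong_gradients": [1.8, 0.2, 1.0],
}

def weights_for(config):
    """Online step: classify the configuration and look up the weighting
    pattern optimized offline for its representative configuration."""
    return assignment[classify(config)]

w = weights_for([4.0, 6.0, 5.0])   # medium gradients -> [1.0, 1.0, 1.0]
```

Note that every stored pattern keeps the sum of the weights constant, as required by the claim.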

The invention is based on the following interlocking findings on the use of sensor data y_i for the automatic generation of control signals. In general, sensors are used in automatic control in order to draw conclusions from the sensor data about the corresponding parts of the outside world. To the extent that these conclusions match corresponding facts of the outside world, they represent information. Information is typically measured by the number of binary digits of a binary number. Accordingly, the information gain for such an inference is introduced here as the number of reliably calculated binary digits of the binary number(s) x_k describing the inference. Accordingly, it is demanded here of a method for evaluating sensor data that the information gain with respect to the conclusions of interest, the so-called essential information gain, becomes maximal. Here it is essential to distinguish the information about the measured quantities contained in the measured values from the information about the conclusions of interest that can be recovered from the measurements. On the one hand, the information about the measured quantities and the recoverable information differ, in general, considerably in quantitative terms. On the other hand, an inference always requires an underlying theory of its own (here, this is expressed by the functions F_i).

If the relative measurement accuracies of the sensor data are reasonably benign, then the measurement errors propagated through the functions F_i to the calculated data x_k can be determined appropriately within linear error propagation theory. In this way, one can determine the expected relative errors of the calculated values x_k from the expected relative errors of the measured quantities y_i. From this, one can determine the essential information gain from the expected relative errors of the measured quantities y_i.
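A minimal sketch of this linear error propagation (the relation f and all numbers are our own illustration, not taken from the publication): the expected error of a calculated value follows from the partial derivatives with respect to the measured quantities, and the number of reliably determined binary digits then follows as a base-2 logarithm.

```python
import math

def propagated_error(f, y, dy, eps=1e-6):
    """Linear error propagation: dx ≈ sum_i |∂f/∂y_i| * dy_i,
    with the partial derivatives estimated by central differences."""
    dx = 0.0
    for i in range(len(y)):
        yp = list(y); yp[i] += eps
        ym = list(y); ym[i] -= eps
        dfdyi = (f(yp) - f(ym)) / (2 * eps)
        dx += abs(dfdyi) * dy[i]
    return dx

def reliable_bits(x, dx):
    """Number of reliably determined binary digits of x, given its error dx."""
    return math.log2(abs(x) / dx)

# Example: x = y1 * y2 with relative errors of 1% each -> about 2% for x.
f = lambda y: y[0] * y[1]
y, dy = [10.0, 5.0], [0.1, 0.05]
x = f(y)                          # 50.0
dx = propagated_error(f, y, dy)   # ≈ |y2|*0.1 + |y1|*0.05 = 1.0
bits = reliable_bits(x, dx)       # ≈ log2(50) ≈ 5.6 reliable binary digits
```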

Further, one can take the measured sensor data y_i into account with corresponding weights γ_i, and determine the essential information gain from the expected relative errors of the measured quantities y_i together with the weights γ_i. Finally, one can also determine the weights and the calculated values such that both the essential information gain becomes practically maximal and the mean squared error

Σ_j γ_j F_j²(y_j, x)

becomes practically minimal. In this way, one can calculate the desired quantities x_k with the maximum information gain.

This approach of essentially maximizing the information gain can be used, in a particular embodiment according to claim 2, for the scientifically and economically important case of motion determination. Here, the conclusions about movements are comparatively difficult to obtain, because the measured intensities represent a signal from which information about the brightness of objects in the outside world can be deduced relatively directly, while information about the relative motion of objects in the outside world can be tapped only by using relatively small differences or differentials. Precisely here, therefore, it is very important to use a method by which the information gain of interest (here, about movement) is practically maximized.

For this purpose, the intensities of adjacent pixels of a so-called pixel region are used. For each of these pixels, linear equations for the rotations and depth-scaled translations can be established by using the measured intensities or intensity derivatives, under the plausible assumption of constant total intensity and in accordance with the projection geometry of the imaging sensor. One can weight the above equations with arbitrary weights and combine them additively without effectively impairing the measurement accuracy; and by such a combination of equations, the essential information gain can be maximized. Thus, a method for maximizing the information gain is established (below, this is concretely embodied in several variants). The maximum information gain resulting in this way ultimately depends only on the initial accuracy of the data measured by the imaging sensor. In order to increase this accuracy, one can determine the temporal intensity derivatives by an electronic differentiator, and the spatial ones by additional application of so-called microsaccades.

Overall, the method developed here is based on the fundamental insight that one can vary the so-called least-squares problem (J. Stoer. Introduction to Numerical Mathematics I. Springer Verlag, 3rd edition, Berlin, 1979.) by weighted combination of equations (this is to be viewed as neutral with respect to the results as long as no additional a priori knowledge is available), and that one can thereby maximize the essential information gain. Here, by the adjustment calculation we mean the determination of desired quantities x_k (which form a vector x) from measured quantities y_i on the basis of a functional relationship F_i(y_i, x) = f_i(x) − y_i = 0 with the least squared error

Σ_i F_i²(y_i, x).

First, we have modified the adjustment problem in that the functions F_i need not necessarily have the form f_i(x) − y_i. Because of the constant-measurement-accuracy hypothesis, we have modified the adjustment problem by means of optimally adapted weighting factors γ_i such that, firstly, the desired quantities x_k are determined with the least squared error

Σ_i γ_i F_i²(y_i, x),

and secondly, the essential information gain

G = Σ_k v_k · log₂(|x_k| / Δx_k)

is maximal.

The advantages achieved with the invention are given in the following three lists. The general advantages achieved by the invention with respect to the essential information gain are:

1. One can gain desired data indirectly through measurements of other data, mediated by relations, and thereby maximize the information gain for the desired data through a perfectly adapted combination of the measured data and the relations.

2. One can emphasize important parts of the desired data by a utility factor in a so-called essential information gain, maximize the essential information gain, and thus determine the emphasized desired data particularly accurately and reliably, that is, with a particularly high information gain. Thus, in the automatically controlled landing of an aircraft, for example, one can determine the relevant height more accurately than the less relevant lateral deviation. The general advantages achieved by the invention with respect to the method are (see also Fig. 1):

1. The method is generally applicable to the class of imaging sensors with a projective imaging property.

2. The method is real-time capable, for example owing to the possible separation of the offline calibration of process parameters from the online evaluation.

3. The method can be interpreted as a fully adaptive calibration with maximum information gain.

4. Since the imaging sensor provides dense data, it is possible to maximize the essential information gain by selecting and weighting sensor data.

5. The achieved essential information gain is determined explicitly, so that a security guarantee is provided for the corresponding binary digits of the determined quantities.

In particular embodiments, further advantages arise.

6. By completely local processing of intensities, the relative rotation and depth-scaled translation between the imaging sensor and an arbitrary object is uniquely determined.

7. This relative movement is, namely, determined locally, uniquely and quickly, without iteration.

8. Pixels with the same relative movement can be interpreted as belonging to a single object, and thus larger objects can be bound together on the basis of determined relative movements of pixels.

9. The use of a plurality of local areas of the same object yields redundant information, with which the relative movement can be determined particularly robustly by means of coupled nonlinear equations.

10. Moreover, the scene structure can be determined comprehensively, for each pixel, by the so-called depth-scaled translations. The scene structure so determined has a high resolution owing to the local processing. This scene structure can be determined by the method separately, without prior determination of the relative movement; thus the method divides into independent parts.

11. If moving objects with pattern markings are located in the scene, the corresponding relative rotations and depth-scaled translations can be determined safely and uniquely by the method, within a likewise determined fault tolerance.

12. With the method, the above relative movement can be determined successively by linear equations from completely locally averaged modulated intensity derivatives. Therefore, this relative movement is determined particularly quickly, completely locally and functionally transparently, without iteration.

13. In particular, no flow-field determination is necessary in the method, so that we use a so-called direct method.

14. To enable particularly robust and safe processing, image areas with anisotropic radial intensity gradient, so-called star patterns (see Fig. 6), can be recognized and used.

15. This method can be favoured not only by the use of star patterns, but also by the use of other patterns. And these can be determined such that the essential information gain is maximal.

16. If imaging sensors additionally provide depth information, the relative translation can be determined instead of a depth-scaled translation. Overall, the relative position and movement is then determined in high resolution and reliably.

17. By using analogue differentiators and executing micro-movements ("microsaccades") of the sensor, it is possible to determine the intensity derivatives without discretization, i.e. with particularly low relative measurement errors. As an exemplary embodiment, the microsaccade movement can be realized without moving components by placing a piezo-electrically controlled mirror in the parallel beam path of the incident image (see Fig. 2). The consequent reduction of the measurement error is essential, since the subsequent method contains no approximations and, moreover, guarantees the maximum information gain for a given measurement error.

The advantages achieved by the invention with respect to specific applications are:

1. The method generally allows the most efficient use of sensor data for the automatic generation of output signals, in particular control signals.

2. Costly control systems can be substituted cost-effectively by markings in the scene, for instance markings with anisotropic radial intensity gradient, so-called star pattern markings (see Fig. 6), or by other pattern markings.

3. When imaging sensors are used, the fast, high-resolution signal acquisition essential for safe control is particularly inexpensive compared with other signalling devices, such as ultrasound.

4. The cameras used deliver images that are often useful to a robot, in contrast to the pure depth information of a laser scanner.

5. Our functionally transparent method is, for example when using star pattern markings (or similar pattern markings provided in the scene), the basis for a guaranteed safe control method for high speeds of the imaging sensor and/or of moving objects. These high speeds are significantly promoted by the above avoidance of iteration.

The implementation of the inventive method is now described for the special case of image processing using local pixel regions. A pixel region is determined here by the pixel region centre, with the coordinates (x_e, y_e), in the image plane, and by adjacent pixels, here with the coordinates (x_j, y_j) (see Fig. 3), so-called image region points. The following processing steps are performed:
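The notion of a pixel region used here can be sketched as follows (the function name and the default size of 5 by 5 pixels, as in Fig. 3, are illustrative):

```python
def pixel_region(xc, yc, size=5):
    """Coordinates of a square pixel region: the region centre (xc, yc)
    plus its neighbouring image region points (cf. Fig. 3, a 5x5 region)."""
    half = size // 2
    return [(xc + dx, yc + dy)
            for dy in range(-half, half + 1)
            for dx in range(-half, half + 1)]

region = pixel_region(10, 20)   # 25 points, centred on (10, 20)
```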

Further features and advantages of the invention will become apparent from the following description of embodiments, which are explained in detail with reference to the drawings. The drawings show:

Fig. 1 an illustration of the process principle using the example of movement determination,

Fig. 2 proposals for microsaccade control,

Fig. 3 a pixel region of size 5 by 5 pixels,

Fig. 4 convolution masks for the calculation of the intensity derivatives,

Fig. 5 possible conjugate points of a pixel region,

Fig. 6 an illustration of the intensity distribution of a star-shaped, anisotropic target pattern,

Fig. 7 the imaging geometry of the transformation of the 3D scene onto the 2D image, using the example of a camera model.

Notes on the question of a priori knowledge. The purpose of this method is to select and weight sensor data of machines optimally with respect to the output signals. Here it is assumed that, apart from the sensor data y_i and the functional relations F_i, no further information (a priori knowledge) concerning the control signals and the other output signals can be used. If any such knowledge existed, it would be expressed by means of the sensor data y_i and the functional relations F_i.

Generating maximum information gain for output signals, in particular control signals of automatic machines. An automatic control generally requires sensor data in order to determine output signals, in particular control signals, as reliably and precisely as possible. All sensor data are subject to limited accuracy. This raises the question of the method by which sensor data acquired with limited accuracy are automatically selected and combined so as to provide data as reliable and precise as possible as output signals, in particular control signals. In other words, one wants to determine desired but not directly measurable data x_k as accurately as possible from measurable data y_i by means of functional relations 0 = F_i(y_i, x) that are valid for theoretical reasons. That is, one wants to determine as many binary digits of the desired data x_k as possible reliably. This raises the question of which measured data one should choose for this, and by which method one should combine the selected measured data in order to determine as many binary digits of the desired data x_k as possible reliably.

This question is first formulated in the usual terminology of information theory: the information contained in a binary number, in units of bits, is equal to the number of binary digits of the number. The information obtained by a measurement, in units of bits, is equal to the number of binary digits of the measured quantity that are determined reliably in the measurement. If the quantity y_j was measured, and if the measurement error does not exceed the amount Δy_j, then the relative error is

Δy_j / |y_j| ;

the corresponding signal-to-noise ratio is its reciprocal,

|y_j| / Δy_j .

The number of reliably determined binary digits is equal to the difference between the number of determined digits and the number of unreliably determined digits, i.e. log2 |y_j| − log2 |Δy_j| = log2 |y_j / Δy_j|. According to the definition, this is the information gain achieved in the measurement with respect to the measured quantity:

Information gain with respect to the measured quantity:

log2 | y_j / Δy_j | .

In connection with an automatic generation of output signals, however, one is generally not interested directly in the measured quantity, but in one or more quantities x_k calculated from measured quantities using the functional relations 0 = F_i(y_i, x). For such a calculated quantity, the information gain is accordingly given by the number of reliably determined binary digits of x_k. Information gain with respect to a calculated quantity:

log2 | x_k / Δx_k | .
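For illustration, the two information-gain definitions above can be evaluated numerically. The following sketch (the function name and numbers are ours, not from the disclosure) computes log2|y/Δy| in bits:

```python
import math

def information_gain_bits(value, abs_error):
    """Information gain log2|value / error| in bits, i.e. the number of
    reliably determined binary digits of the quantity."""
    return math.log2(abs(value) / abs_error)

# A quantity y_j = 200.0 measured with error dy_j = 0.5 has a
# signal-to-noise ratio |y_j| / dy_j = 400, i.e. about 8.64 reliable bits.
print(round(information_gain_bits(200.0, 0.5), 2))
```

The same function applies to a calculated quantity x_k once its error Δx_k is known.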

For many problems one may not even be interested in the largest possible information gain with respect to every quantity calculated from measured quantities: for some such calculated quantities a few reliably determined binary digits may suffice, whereas for another such calculated quantity many reliably determined binary digits are needed. To express such requirements concisely, one may introduce an essential information gain for a plurality of calculated quantities x_k as a weighted average of the information gains, as follows: preliminary positive usefulness weights v_k are agreed upon, with Σ_k v_k = constant; the essential information gain is then

W = Σ_k v_k log2 | x_k / Δx_k | . (7)

This raises the question of how the relative accuracies Δx_k / |x_k| of the calculated quantities are determined from the relative accuracies Δy_j / |y_j| of the measured quantities. In the event that a calculated quantity x_k is determined from measured values with reasonably small relative error (which is almost always the case), it is appropriate to determine the relative error of the calculated quantity by means of linear error propagation theory. Then

Δx_k = Σ_j | ∂x_k / ∂y_j | Δy_j . (8)
Here, one can determine the partial derivatives either using explicit equations, or estimate them numerically by determining x_k at the y_m for m ≠ j and at y_j ± Δy_j, namely

∂x_k / ∂y_j ≈ [ x_k(y_j + Δy_j) − x_k(y_j − Δy_j) ] / (2 Δy_j) . (9)

In any case, one obtains the essential information gain W explicitly as a function of the measurement accuracies Δy_j by substituting Eq. (9) into Eq. (7), as follows:

W = Σ_k v_k log2 ( |x_k| / Σ_j | ∂x_k / ∂y_j | Δy_j ) . (10)

One can now weight the functional relations in accordance with the least squared error, namely with weighting factors γ_i, where Σ_i γ_i = constant applies. Finally, one can, with an optimization method (such as a genetic algorithm), simultaneously minimize the weighted mean square error

E = Σ_i γ_i [ F_i(y_i, x) ]² (11)

and maximize the essential information gain W (see Eq. (10)). Thus one obtains the desired calculated quantities x_k, weighted according to their utilities v_k, with maximum information gain.

In this maximization of the information gain, the y_i can in general be tuples, i.e. vectors, composed of individual sensor data. In such a case, the gradient is to be determined instead of the partial derivative, and it is to be multiplied scalarly with the vector Δy_j (see Eqs. (8) and (10)).

For the illustrative and particularly simple case that all v_k are equal and that the x_k are specifically the control signals, the maximum information gain is generated for the control signals x_k in an immediate manner.

As a particular embodiment, the following is mentioned: in order to reduce the complexity of the above optimization problem, one can make a preselection of sensor data based on general clues and experience.

Overall, therefore, the desired quantities x_k relevant for the generation of control signals are determined with the maximum essential information gain by the following steps: 1. Measurement of the sensor data y_i. 2. Determination of the partial derivatives. 3. Input of the measurement accuracy and utility parameters. 4. Solution of the optimization problem, namely minimizing E in Eq. (11) and at the same time maximizing W in Eq. (10) by simultaneous variation of the desired quantities x_k and the weights γ_i for the sensor data.
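The error-propagation core of these steps can be sketched numerically. The following is a minimal illustration (toy relation, assumed error values, no genetic algorithm): the partial derivatives are estimated by central differences as in Eq. (9), the error is propagated linearly as in Eq. (8), and the information gain is summed with usefulness weights as in Eq. (7):

```python
import math

def essential_information_gain(xs, dxs, v):
    """W = sum_k v_k * log2|x_k / dx_k|, cf. Eq. (7)."""
    return sum(vk * math.log2(abs(xk) / dxk) for xk, dxk, vk in zip(xs, dxs, v))

def propagated_error(f, ys, dys):
    """Linear error propagation, cf. Eq. (8): dx = sum_j |df/dy_j| dy_j,
    with the partial derivatives estimated by central differences, cf. Eq. (9)."""
    dx = 0.0
    for j, (yj, dyj) in enumerate(zip(ys, dys)):
        up, dn = list(ys), list(ys)
        up[j], dn[j] = yj + dyj, yj - dyj
        dx += abs((f(up) - f(dn)) / (2.0 * dyj)) * dyj
    return dx

# Toy functional relation 0 = F(y, x) = y_1 * y_2 - x, i.e. x = y_1 * y_2.
f = lambda y: y[0] * y[1]
ys, dys = [10.0, 4.0], [0.1, 0.05]
dx = propagated_error(f, ys, dys)            # |y_2| dy_1 + |y_1| dy_2 = 0.9
W = essential_information_gain([f(ys)], [dx], [1.0])
print(round(dx, 3), round(W, 2))
```

The simultaneous minimization of E and maximization of W over the weights γ_i would be layered on top of these two helpers.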

With this, the method of producing maximum information gain for output signals of machines, in particular control signals, is completely specified. In the following, important applications of the method are presented. These statements in no way limit the generality of the method set out completely here.

Determination of relative rotations and depth-scaled relative translations with respect to arbitrary objects 22 on the basis of an image sequence of an imaging sensor. This relates to particular embodiments of the invention according to the dependent claims. An important application of the generic method is its use in robotic systems. Here, the important task of determining relative rotations and depth-scaled relative translations with respect to an arbitrary object 22 on the basis of an image sequence of an imaging sensor is to be solved.

Preprocessing. This concerns the fourth subpart of claim 2. First, the preprocessing is performed in two steps:

1. A movable imaging sensor collects a series of corresponding intensity distributions I(x, y, t) at points (x, y) of the image plane 21 at points in time t.

Consider the following embodiment: to prepare the intensity distribution particularly favourably for the following differential-geometric analysis, one can use a low-pass filter.

2. The associated intensity derivatives I_x, I_y and I_t of the intensity distribution with respect to x, y and t are determined.

Consider the following embodiments: (a) These intensity derivatives can be determined, for example, from 2 by 2 or 3 by 3 pixels at two successive points in time (see Fig. 4).

(b) At object boundaries, particularly high derivatives can occur. The corresponding image points (x_k, y_k) with particularly high derivatives can be used only to a limited extent.

(c) On homogeneous surfaces, particularly low derivatives can occur. The corresponding image points (x_k, y_k) with particularly low derivatives can be used only to a limited extent.
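One common way to realize such derivative estimation from 2 by 2 pixels at two successive points in time is by averaged finite differences; the sketch below is an assumed illustration in this style (the patent's exact convolution masks of Fig. 4 are not reproduced here):

```python
import numpy as np

def intensity_derivatives(I0, I1):
    """Estimate I_x, I_y, I_t by finite differences averaged over a
    2x2x2 block of two successive frames I0, I1 (Horn-Schunck style)."""
    Ix = 0.25 * ((I0[1:, 1:] - I0[1:, :-1]) + (I0[:-1, 1:] - I0[:-1, :-1])
                 + (I1[1:, 1:] - I1[1:, :-1]) + (I1[:-1, 1:] - I1[:-1, :-1]))
    Iy = 0.25 * ((I0[1:, 1:] - I0[:-1, 1:]) + (I0[1:, :-1] - I0[:-1, :-1])
                 + (I1[1:, 1:] - I1[:-1, 1:]) + (I1[1:, :-1] - I1[:-1, :-1]))
    It = 0.25 * ((I1[1:, 1:] - I0[1:, 1:]) + (I1[:-1, 1:] - I0[:-1, 1:])
                 + (I1[1:, :-1] - I0[1:, :-1]) + (I1[:-1, :-1] - I0[:-1, :-1]))
    return Ix, Iy, It

# A static linear ramp I = 2x + 3y: expect I_x = 2, I_y = 3, I_t = 0.
y, x = np.mgrid[0:4, 0:4]
I = (2 * x + 3 * y).astype(float)
Ix, Iy, It = intensity_derivatives(I, I.copy())
print(Ix[0, 0], Iy[0, 0], It[0, 0])
```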

Simplifications for an imaging sensor, which stem from the fact that in imaging sensors the relative accuracies of adjacent pixels are practically the same. Because the pixels of a pixel region lie relatively close together, it is plausible to assume that the relative measurement errors of the temporal and spatial intensity derivatives are constant at these pixels. This hypothesis is called 'error constancy', ERCO for short. ERCO is formulated as follows:

ΔI_μ(x_j, y_j) / |I_μ(x_j, y_j)| = ΔI_μ(x_k, y_k) / |I_μ(x_k, y_k)| for μ ∈ {x, y, t}
for all (x_j, y_j), (x_k, y_k) in the same pixel region. (Even if ERCO should hold only approximately, the presented method could be generalized by expressing the relative measurement error at the image point (x_j, y_j) by means of a Taylor series in terms of the relative measurement errors at the pixel (x_k, y_k) and the distance vector (x_j − x_k, y_j − y_k), because with the presented method the movement quantities are explicitly determined as continuous functions of the measured intensity derivatives.) Partial derivatives. The following special consideration applies to the use of the generic method of maximizing the essential information gain in the case of determining the relative rotations and the depth-scaled relative translations: here the calculated quantities x_k are determined by means of systems of linear equations (see Appendix), which can be expressed as follows:

Ax = b.

In addition, the measurement errors ΔI_x, ΔI_y and ΔI_t of the intensity derivatives are used here to determine the errors Δb_j of the above inhomogeneity and ΔA_ij of the matrix.

The relative error can be made explicit by using the pseudo-inverse A⁺: to first order in the perturbations, A x = b yields

Δx_k = Σ_j | A⁺_kj | ( Δb_j + Σ_l ΔA_jl |x_l| ) ,

and hence the relative error Δx_k / |x_k|.

Substituting this into the above Eq. (7), we obtain an explicit expression for the essential information gain as a function of the measured intensity derivatives contained in the components b_j and the matrix elements A_ij, and also as a function of the measurement accuracies ΔI_x, ΔI_y and ΔI_t.
This explicit function specifically simplifies the maximization of the essential information gain here.
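The role of the pseudo-inverse A⁺ in this error propagation can be illustrated as follows; the matrix, inhomogeneity and error values below are assumed for illustration only:

```python
import numpy as np

def relative_errors(A, b, dA, db):
    """First-order error propagation for x = A+ b: from A x = b, a
    perturbation is bounded element-wise by
    dx_k = sum_j |A+_kj| (db_j + sum_l dA_jl |x_l|)."""
    Aplus = np.linalg.pinv(A)
    x = Aplus @ b
    dx = np.abs(Aplus) @ (db + dA @ np.abs(x))
    return x, dx / np.abs(x)

A = np.array([[2.0, 0.0], [0.0, 4.0], [1.0, 1.0]])
b = np.array([2.0, 8.0, 3.0])       # consistent with the solution x = (1, 2)
dA = np.full_like(A, 0.01)          # assumed uniform matrix-element errors
db = np.full_like(b, 0.02)          # assumed inhomogeneity errors
x, rel = relative_errors(A, b, dA, db)
print(np.round(x, 6))
```

The resulting relative errors `rel` are exactly what enters Eq. (7) for the information gain of the calculated quantities.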

Microsaccades. This relates to claim 4. Because the particular embodiment of the method according to claim 2 for the determination of relative rotations and depth-scaled relative translations from image sequences already maximizes the essential information gain with the intensity derivatives measured at the pixels and for a given measurement accuracy of the imaging sensor, a further enlargement of the essential information gain is possible only by improving the measurement accuracy of the measured intensity derivatives. To this end, one can determine the intensity derivatives at individual pixels using an electronic analog circuit that operates as a differentiator.

Namely, such a differentiator directly supplies the time derivative of the measured intensity; this is in general more accurate than the determination of the temporal intensity derivative from two successive intensity measurements.

In order to also eliminate the use of two adjacently measured intensities in the measurement of the spatial intensity derivatives, the imaging sensor can execute effective movements at velocities (v_x, v_y), so-called microsaccades 13 (Fig. 2).

Here, the temporal intensity derivative is measured with a differentiator at a time t⁻ very shortly before the time t, at the time t, and at a time t⁺ very shortly after the time t. Using the chain rule, this results in three equations with three unknowns, from which the three unknowns, i.e. the sought intensity derivatives I_x, I_y and I_t, are determined. The three equations are

dI/dt (t⁻) = I_x v_x(t⁻) + I_y v_y(t⁻) + I_t ,
dI/dt (t) = I_x v_x(t) + I_y v_y(t) + I_t ,
dI/dt (t⁺) = I_x v_x(t⁺) + I_y v_y(t⁺) + I_t .

Here, the plausible hypothesis is used that I_x and I_y at the time points t⁻, t and t⁺ are practically the same.
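The three chain-rule equations form a linear 3-by-3 system for the unknowns I_x, I_y and I_t. The following sketch solves it for assumed microsaccade velocities and simulated differentiator readings (all numerical values are illustrative, not from the disclosure):

```python
import numpy as np

# Chain rule at the three instants t-, t, t+:
#   dI/dt = I_x v_x + I_y v_y + I_t,
# with known effective microsaccade velocities (v_x, v_y) and differentiator
# readings dI/dt.
Ix_true, Iy_true, It_true = 1.5, -0.5, 0.25
V = np.array([[0.0, 0.0],           # no saccade at t-
              [1.0, 0.0],           # x-saccade at t
              [0.0, 1.0]])          # y-saccade at t+
dIdt = V @ np.array([Ix_true, Iy_true]) + It_true   # simulated readings
M = np.hstack([V, np.ones((3, 1))]) # unknowns (I_x, I_y, I_t)
sol = np.linalg.solve(M, dIdt)
print(sol)
```

The three saccade velocities must be chosen linearly independent in this sense, so that the matrix M is invertible.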

The above microsaccades 13 must be physically realized. To this end, we propose four embodiments:

1. Piezoelectric suspension of the imaging sensor: one can vary the dimensions of a piezoelectric member by applying an electrical voltage. Thus one can vary the position of the imaging sensor directly electrically, i.e. one can thereby produce microsaccades 13 (see Fig. 2(d)). This relates to claim 8 of the process.

2. Alternatively, one can apply these microsaccades 13 to a desired component of the sensor, such as to one (or two) of the mirrors 10 in the parallel beam path 11 (see Fig. 2(a)). This relates to claim 5 of the process.

3. One can move a desired small movable component of the imaging sensor (for instance one suspended on a steel membrane), such as a mirror 14 in the parallel beam path 11, by using one (or two) electromagnetic coil(s), that is, by magnetic forces (see Fig. 2(b)). This relates to claim 7 of the process.

4. One can produce microsaccades 13 effectively without any movable part by placing, obliquely in the parallel beam path 11, one (or two) plate(s) of suitable material (such as plates filled with a liquid-crystalline material 16) whose refractive index can be varied with applied electrical voltages (see Fig. 2(c)). This relates to claim 6 of the process.

Signal propagation time. If possible, the imaging sensor also determines the signal propagation time for the double distance from the sensor to an object 22 projected onto the image point (x, y). From this and from the signal propagation speed, the object-sensor distance d(x, y) is determined. From this and from (x, y), the object position, including the depth, is determined by using geometric constraints. This relates to claim 9 of the process.

This has the following consequence, with a case distinction: in the particular embodiment of the method according to claim 2, the relative rotation and the depth-scaled relative translation of the object 22 are in each case determined from the measured intensity derivatives I_x, I_y and I_t. If the sensor additionally determines the above propagation time, the immediate relative translation can also be determined from the depth-scaled relative translation together with the depth. Overall, the complete relative position and motion between the imaging sensor and an arbitrary object 22 is then determined.

Pattern markings. Pattern marks in the scene are used if a guarantee of safety for the detection of objects 22 is desired (see, for example, Fig. 6). This relates to dependent claim 10.

Determination of the weighting pattern. The determination of the weighting pattern γ_i generally needs a relatively large amount of computing time, as an optimization task has to be solved. Accordingly, alternative embodiments for determining the weighting pattern γ_i, with specific advantages and disadvantages, are set out in the dependent claims 11, 12 and 13.

In practice, one can combine sensor data in advance and thus obtain different equations for sensor data. Thus, weighting patterns for sensor data correspond to weighting patterns for equations (see, for example, in the case of a planar image surface, Eqs. (90), (82), (80)). All equations used ultimately rest on the geometry of the imaging sensor (geometric optics) (see, e.g., in the case of a planar image surface, Eq. (84)) as well as on the hypothesis ('brightness constancy', BC for short) (see Eq. (85)) that the total intensity projected from an object 22 onto the image surface is practically constant, as long as the object 22 is fully projected onto the image surface. The equations can be combined and weighted.

Optimization. The direct way to determine the weighting pattern for equations consists in optimization, that is, in the minimization of the error E (see Eq. (11)) and the simultaneous maximization of the essential information gain W (see Eq. (10)). This relates to dependent claim 11 of the process.

Classification. To avoid having to optimize online, one can determine the weighting patterns inexpensively in advance and retrieve them online through a look-up table: the weighting patterns corresponding to selected classes are determined in advance by:

1. Classification of the possible measured intensities,

2. Determination of a weighting pattern with relatively high essential information gain, as in the above optimization, for each class.

It is readily apparent that other process parameters used, such as the target pattern, can be optimized in the same manner. This relates to dependent claim 12 of the method.
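The offline/online split can be sketched as follows; the classes, thresholds and weighting patterns are purely illustrative placeholders, not values from the disclosure:

```python
def classify(mean_intensity):
    """Coarse classification of the measured intensities (assumed bins)."""
    if mean_intensity < 85:
        return "dark"
    return "mid" if mean_intensity < 170 else "bright"

# Offline: one weighting pattern gamma_i per class, determined in advance
# as in the above optimization (values here are placeholders).
lookup_table = {
    "dark":   [0.5, 0.3, 0.2],
    "mid":    [0.4, 0.4, 0.2],
    "bright": [0.2, 0.3, 0.5],
}

# Online: no optimization, just classification plus table look-up.
gammas = lookup_table[classify(120)]
print(gammas)
```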

Symmetrization. One can also determine appropriate weighting patterns γ_i without solving an optimization problem; this leads to a relatively low demand for computing time. This relates to dependent claim 13 of the process. This determination of the weighting pattern can be carried out with the following so-called symmetrization procedure. This procedure is executed so explicitly that a closed-form solution is given.

For any pixel region centered at (x_e, y_e), the intensity derivatives measured with the imaging sensor at the image region points 19 (x_j, y_j) are transformed, by using so-called conjugate image points 20, onto a so-called target pattern and processed to so-called modulated multipoles. Using these modulated multipoles, a system of equations is established for the relative movement (shorthand notation Ψ_e) between the imaging sensor and the object 22 projected onto this pixel region. These equations of the equation system for the pixel region with the center (x_e, y_e) are called symmetrized modulated multipole equations for (x_e, y_e), SMME_e for short. This partial process, i.e. this conversion of the intensity derivatives measured by the imaging sensor into the SMME_e, is called symmetrization.

With this symmetrization, a statistical averaging of the intensity derivatives measured by the imaging sensor is carried out and, simultaneously, a transformation of these derivatives onto a target intensity pattern is performed. Through this statistical averaging, the method is robust against noise signals, and through the transformation onto a suitable target pattern, the process is robustly applicable at different geometric arrangements of the objects and at different object brightness ratios (because the matrix of the equation system has a relatively large determinant for suitable target patterns). Here, the transformation and the statistical averaging support each other. In particular, the modulated multipoles introduced later are exactly zero for no measured intensity derivatives and are thus further processed into stable matrix elements.

At this point it is important to emphasize that statistical errors in the input, i.e. those that occur in collecting the intensities by the sensor and result from the subsequent calculation of the derivatives, do not accumulate through the transformation, but rather compensate each other in the subsequent averaging (in the determination of the multipoles) according to the central limit theorem. The representation in multipoles is selected based on electromagnetic field theory. The modulated multipoles form, in a certain sense (see below), a complete function system comprising an in principle unlimited number of modulated multipoles. Accordingly, one can obtain different versions of the procedure by making another selection of modulated multipoles, by making another selection of conjugated pixels 20, or by applying the transformation to another target pattern (suitable for the invertibility of the SMME_e).

One might consider using a different complete function system. This can be done completely analogously in this process but, if desired, makes little sense. It could result in a limited speed advantage at most for certain special (not general) measured intensity ratios, because the modulated multipoles come about by modulating the measured intensity derivatives with an arbitrary modulation function and representing the modulation function by polynomials. This representation is needed only for a finite number of pixels (essentially those of the pixel region) and can therefore be done exactly with virtually any function system with finitely many terms. Overall, the process can be carried out with practically any function system; the choice of the function system is therefore largely irrelevant and will not be discussed further below.

For purposes of a specific illustration, the method of symmetrization is executed below with a so-called prototype case, introduced by a prototypical selection of modulated multipoles (Eqs. 17 below), conjugate points 20 (see Fig. 5) and a star-shaped target pattern (see Fig. 6); in addition, embodiments with other modulated multipoles, conjugate points 20 and target patterns are given.

In the following, the abbreviations x_j − x_e =: Ξ_j and y_j − y_e =: Υ_j are used. The symmetrization is determined by the following ten processing steps:

1. Determination of conjugated pixels. For an image region point 19 (Ξ_j, Υ_j), further so-called conjugated pixels 20 (Ξ_k(j), Υ_k(j)) are selected.

In the prototypical case these are

(Ξ_k, Υ_k) = (−Ξ_j, Υ_j), (Ξ_m, Υ_m) = (Ξ_j, −Υ_j), (Ξ_n, Υ_n) = (−Ξ_j, −Υ_j)

and a pixel within the pixel region on the half-line from the center through (Ξ_j, Υ_j), i.e. a pixel (Ξ_p, Υ_p) = (F Ξ_j, F Υ_j) determined by an appropriate factor F (see, e.g., Fig. 5). Because of the discretization of the pixels by the imaging sensor, such a fourth conjugated pixel 20 exists for relatively many, but not for all, pixels (Ξ_j, Υ_j) of the pixel region.

This can be compensated by various embodiments. One can simply disregard image points (Ξ_j, Υ_j) without the fourth conjugate image point 20 (Ξ_p, Υ_p) in determining the modulated multipoles (see below). Alternatively, one can determine the intensity derivatives for a point on the above half-line by interpolation and use this point as the fourth conjugate point 20.
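The prototypical selection of conjugate points, including the discretization caveat for the fourth point, can be sketched as follows (the factor F = 2 and the 5 by 5 region size are assumed for illustration; the coordinates Ξ, Υ are written Xi, Up in the code):

```python
def conjugate_points(Xi, Up, F=2, half_size=2):
    """Prototypical conjugate points of an image region point (Xi, Up),
    given relative to the region center: the three reflections and, if it
    stays on the pixel grid of the region, the scaled point (F Xi, F Up)."""
    pts = [(-Xi, Up), (Xi, -Up), (-Xi, -Up)]
    fourth = (F * Xi, F * Up)
    if abs(fourth[0]) <= half_size and abs(fourth[1]) <= half_size:
        pts.append(fourth)      # fourth conjugate point exists on the grid
    return pts

print(conjugate_points(1, 1))   # all four conjugate points exist
print(conjugate_points(2, 1))   # the scaled point falls outside the region
```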

2. Statistical averaging of the measured intensity derivatives to modulated multipoles. In this second step, the complete function system of the modulated multipoles is introduced. Moreover, for the purpose of explaining the method, the associated so-called modulated multipole equations, MME_e for short, are determined. This step has a definitional and explanatory character. At an image point (x_j, y_j), the BCCE holds (see Eq. 90 in Appendix A):

I_x(x_j, y_j) ẋ + I_y(x_j, y_j) ẏ + I_t(x_j, y_j) = 0 .

Here, the motion state is indicated by Ψ_e.

Explanation of the modulation. The BCCEs at the image region points 19 (x_j, y_j) are multiplied by an arbitrary modulation function F(x_j, y_j), and the multiplied equations are added. This results in further equations, called modulated equations. The modulation function is generally represented as a power series. We can consider the corresponding terms of the power series as new separate modulation functions, form the respective modulated equations, sum these, identify the sum with the original modulation function, and identify so-called modulated multipoles in the sum (see Table 1). In this sense, the modulated multipoles form a complete function system with respect to the BCCE and thus with respect to the determination of the relative movement. This sum of modulated equations, expressed by modulated multipoles, is called a modulated multipole equation (MME_e).

Definition of the modulated multipoles.

Figure imgf000044_0001

Embodiments of modulated multipoles are:

Figure imgf000045_0001

Sometimes we omit the attribute 'modulated' if it is apparent from the context.

Embodiments of the resulting modulated multipole equations are: (a) We introduce the following modulation functions

Figure imgf000046_0001

with the exponents g_x and g_y, which take the values (g_x, g_y) = (0,0), (0,1), (1,0) or (1,1).

Using the modulated multipoles (Eqs. 18), we obtain for these pairs (g_x, g_y) the Eqs. (19).

The above results in a system of four equations for (g_x, g_y) = (0,0), (0,1), (1,0) and (1,1). These equations are called (g_x, g_y)-MME_e for short.

(b) Next, we introduce another four equations via the modulation function

Figure imgf000046_0002

with the exponents g_x and g_y, which take the values (g_x, g_y) = (0,0), (0,1), (1,0) or (1,1). The resulting equations are expressed by means of the multipoles (Eqs. 18) as in Eq. (20). These four equations are called (g_x, g_y, Ξ)-MME_e for short.

(c) Similarly, we introduce four more equations via the modulation function


Figure imgf000046_0003

with the exponents g_x and g_y, which take the values (g_x, g_y) = (0,0), (0,1), (1,0) or (1,1). The resulting equations are expressed by means of the multipoles (Eqs. 18) as in Eq. (21). These four equations are called (g_x, g_y, Υ)-MME_e for short. The general equation for all multipoles is given as Eq. (22).

Figure imgf000047_0001

Figure imgf000048_0001

Figure imgf000048_0002

In the prototypical case we use, of the twelve modulated multipole equations derived above, the four (g_x, g_y)-MME_e, the (0,0,Ξ)-MME_e, the (0,0,Υ)-MME_e and the (1,0,Υ)-MME_e. These seven equations form the multipole matrix A. For the case that additionally a transformation with a star pattern as target pattern has been made, this matrix is given in Eq. (64) for illustration.

3. Symmetry-conforming organization of the modulated multipoles. Now the modulated multipoles are classified according to their characteristics regarding axis reflections. This is particularly favourable for the process, because it corresponds to a particular symmetry property of the BCCE. Namely, in the BCCE only terms proportional to Ξ_j Υ_j, Ξ_j⁰ Υ_j, Ξ_j Υ_j⁰ and Ξ_j⁰ Υ_j⁰ occur; these are symmetrical or antisymmetrical with respect to such axis reflections. Overall, this third step firstly has an illustrative character and secondly introduces the so-called symmetry factor P_q used in the process, as well as the so-called symmetry coefficients a_q^x, b_q^x, c_q^x, d_q^x, a_q^y, b_q^y, c_q^y, d_q^y, a_q^t, b_q^t, c_q^t and d_q^t.

Symmetry-conforming notation for modulated multipoles. In order to formulate the classification of the modulated multipoles according to their characteristics regarding axis reflections efficiently, firstly a symmetry factor

Figure imgf000049_0001

is introduced which is symmetrical in Ξ_j and in Υ_j, i.e. p_1 + p_3 as well as p_2 + p_4 are even. Here, q specifies the values of the four exponents p_1, ..., p_4.

Secondly, the multipole elements μ_j are introduced for a modulated multipole μ, from which μ is built up as the sum over the image region points 19 (x_j, y_j):

Figure imgf000050_0001

With these two notations, each multipole element is represented by μ_j = μ_j^x + μ_j^y + μ_j^t , (25)

Figure imgf000050_0002

Here, the twelve symmetry coefficients (with values 0 or 1) a_q^x, b_q^x, c_q^x, d_q^x, a_q^y, b_q^y, c_q^y, d_q^y, a_q^t, b_q^t, c_q^t and d_q^t are introduced. For each modulated multipole, exactly one symmetry coefficient is equal to one. Conversely, a modulated multipole is specified by exactly this symmetry coefficient together with the associated intensity derivative I_t, I_x or I_y. Herewith, each modulated multipole is assigned a so-called symmetry class; namely, the class with the so-called symmetry index s = a for a_q^x = 1 or a_q^y = 1 or a_q^t = 1, the class with the symmetry index s = b for b_q^x = 1 or b_q^y = 1 or b_q^t = 1, the class with the symmetry index s = c for c_q^x = 1 or c_q^y = 1 or c_q^t = 1, and the class with the symmetry index s = d for d_q^x = 1 or d_q^y = 1 or d_q^t = 1. For example,

Figure imgf000051_0001
is determined by the symmetry coefficient a_q^y = 1 with q ~ (p_1 = 1, p_2 = 0, p_3 = 1, p_4 = 0). As another example, one can
Figure imgf000051_0002
consider. It is determined by the symmetry coefficient d_q^x = 1 with q ~ (p_1 = 0, p_2 = −1, p_3 = 0, p_4 = 1).

Next, we collect the terms of the multipole element which are symmetrical in Ξ_j and Υ_j and denote their sum

Figure imgf000051_0003

as the Ξ_j-Υ_j-symmetry-conforming subelement. Similarly, we introduce the symmetry-conforming subelement

Figure imgf000051_0004

which is symmetrical in Υ_j and antisymmetric in Ξ_j, the symmetry-conforming subelement

Figure imgf000051_0005

which is antisymmetric in Υ_j and symmetrical in Ξ_j, and the symmetry-conforming subelement

Figure imgf000051_0006

which is antisymmetric in both Υ_j and Ξ_j. Using these symmetry-conforming subelements, we express the multipole element as follows:

μ_j = μ_j^a + μ_j^b + μ_j^c + μ_j^d . (30)

4. Linear combination of modulated multipole elements at conjugated pixels. For the pixel of interest (x_j, y_j), we take the modulated multipole elements of the conjugate image points 20 (x_k(j), y_k(j)). We form the linear combination

Figure imgf000052_0001

with a coefficient a_k(j) for each conjugate image point 20. These coefficients are subsequently chosen so that the intensity derivatives measured by the imaging sensor yield the target pattern. For a general target pattern, suitable conjugated pixels 20 are to be chosen for this purpose.

Generally, a target pattern is determined by a mapping that assigns a 2D vector to each image region point 19 (see, for example, Fig. 6). A target pattern is thus a vector field.
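Such a vector field can be sketched as follows; a radial unit-vector field in the spirit of the star-shaped target pattern is assumed here purely for illustration (normalization and point set are ours):

```python
import math

def star_target_pattern(points):
    """Assign to each image region point (Xi, Up) a unit vector pointing
    radially toward the region center - one possible target pattern."""
    field = {}
    for Xi, Up in points:
        r = math.hypot(Xi, Up)
        field[(Xi, Up)] = (-Xi / r, -Up / r) if r else (0.0, 0.0)
    return field

pattern = star_target_pattern([(1, 0), (0, 2), (-1, -1)])
print(pattern[(1, 0)])
```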

In a variant, it is also possible to choose a coefficient for μ_j that is not equal to one.

For the prototype case, the pixel and its conjugate points 20 are the five image points (x_j, y_j), (x_k, y_k), (x_m, y_m), (x_n, y_n) and (x_p, y_p). We form the linear combination μ_sum,j = μ_j + α μ_k + β μ_m + γ μ_n + δ μ_p (32) with four coefficients α, β, γ and δ, which are calculated below; in particular, these coefficients are determined below for the star-shaped target pattern.

5. Symmetry-conforming organization of the linear multipole element combinations. As above (see Eq. (30)), we also introduce the four possible symmetry-conforming subelements for the linear combination (see Eq. (31)).

Figure imgf000053_0001

Overall, we get

Figure imgf000053_0002

6. Determination of the effective intensity gradient.

For each conjugate image point 20 (x_k(j), y_k(j)) and each symmetry class (s ∈ {a, b, c, d}), a conjugation factor is determined as follows:

Figure imgf000053_0003

Here, the following symmetry exponents are used: n_x(a) = 0, n_y(a) = 0; n_x(b) = 1, n_y(b) = 0; n_x(c) = 0, n_y(c) = 1; and n_x(d) = 1, n_y(d) = 1. Thus,

Figure imgf000054_0001

The same applies

Figure imgf000054_0002

and

Figure imgf000054_0003

as well as

Figure imgf000054_0004

The square brackets in the above equations are called effective intensity derivatives (in accordance with Eq. 26). These effective intensity derivatives are actually determined in the process via the coefficients a_k(j). In this case, the coefficients a_k(j) are determined beforehand so that the effective intensity derivatives of the image region points 19 form a vector field that is equal to the target pattern. In concrete terms, the effective intensity derivatives are as follows:

Figure imgf000055_0001

for s ∈ {a, b, c, d}.

For the prototype case, the above Eq. (33) takes the following shape.

Figure imgf000055_0002

These symmetry-conforming subelements are calculated as follows:

First, we set, by Eqs. (26), (27), (28) and (29):

Figure imgf000056_0001

Analogous equations apply to

Figure imgf000056_0002
and

Figure imgf000056_0003

Second, we explain useful symmetry relations that are inherent in the symmetry factor P_q. With Ξ_k = −Ξ_j and Υ_k = Υ_j, one obtains

Figure imgf000056_0004

Analogously, one obtains

Figure imgf000056_0005

For the conjugate points (x_p, y_p) and for the contemplated multipoles, there is an exponent E_q such that

Figure imgf000056_0006

holds. In particular, E_q equals (p_1 + p_2) for the considered value of q; this shows that such an E_q exists. Below we abbreviate P_q(Ξ_j, Υ_j) as P_q.

Third, with the above relation one can simplify Eq. (41) to give

Figure imgf000057_0001

Similarly, one can (additionally using Ξ_p = Ξ_j and Ξ_k = Ξ_n = -Ξ_m = Ξ_j) derive

Figure imgf000057_0002

and obtain in a similar manner

Figure imgf000057_0003

and

Figure imgf000057_0004
Overall, we also get here
Figure imgf000058_0001

The above square brackets, which enclose the intensity gradients, are the effective intensity gradients of the resulting linearly combined modulated multipole μ_sum,j.

7. Pixel-wise fulfillment of the target pattern. Now the coefficients a_k(j) for the conjugate image points 20 are determined so that the effective intensity gradients with respect to x and y are proportional to the vector of the target pattern

Figure imgf000058_0004

at the image point (x_j, y_j). The coefficients a_k(j) can thus be determined so that the following eight equations hold:

Figure imgf000058_0002

for s ∈ {a, b, c, d}. Here p_j^s are factors of proportionality.

These intensity derivatives are substituted for the corresponding brackets in Eqs. (36), (37), (38) and (39); these equations are then summed to μ_sum,j according to Eq. (34). P_q(Ξ_j, Υ_j) is abbreviated by P_q. Thus,

Figure imgf000058_0003
Radialization. For the prototype case, the determination of the coefficients α, β, γ and δ is carried out as follows. We choose these coefficients so that the pair (square brackets with Ξ intensity derivatives, square brackets with Υ intensity derivatives) is parallel to the pair (-Ξ_j, -Υ_j), that is, parallel to the radial direction toward the center of the pixel region (for the square brackets see Eqs. (46), (47), (48) and (49)).

More generally, we allow a certain deviation from the radial direction, characterized by an anisotropy parameter Ξ_anis, by requiring that the pair (square brackets with Ξ intensity derivatives, square brackets with Υ intensity derivatives) be parallel to the pair (-Ξ_anis Ξ_j, -Υ_j). Here the special case Ξ_anis = 1 leads to the isotropic radial directions and an isotropic direction field, while the case Ξ_anis ≠ 1 results in an anisotropic direction field. The corresponding individual directions are called radially anisotropic. By a generalized radial direction we mean either an isotropic radial direction parallel to (-Ξ_j, -Υ_j) or an anisotropic radial direction parallel to (-Ξ_anis Ξ_j, -Υ_j) with Ξ_anis ≠ 1. We omit the attribute "generalized" when it is apparent from the context.

Overall, this requirement on the directions can be stated as follows. We choose the coefficients α, β, γ and δ such that

Figure imgf000060_0001

That is, the four linear combinations of the effective intensity gradients (as indicated by the corresponding square brackets) should be parallel to the direction (-Ξ_anis Ξ_j, -Υ_j). This yields a system of linear equations, which can be solved by first formulating the four equivalent equations expressing that the non-radial components vanish, and by second applying Gaussian elimination. While this method yields rather complicated formulas, it can nevertheless be used to conclude that a solution exists. Based on this finding, the system of equations can be solved more efficiently as follows:

First, one can add the first, third, fifth and seventh equations in (53), as well as the second, fourth, sixth and eighth, to derive

Figure imgf000060_0002
with the short notation
Figure imgf000061_0001

Second, one can add the third and seventh equations in (53), as well as the fourth and eighth, to derive

Figure imgf000061_0002

with the abbreviation

Figure imgf000061_0003


Third, one can add the fifth and seventh equations in (53), as well as the sixth and eighth, to derive

Figure imgf000061_0004


Fourth, one can add the third and fifth equations in (53), as well as the fourth and sixth, to derive

Figure imgf000061_0005

with the abbreviation

Figure imgf000061_0006

Substituting Eqs. (53) into the above equations for the symmetry-compliant sub-elements (see Eqs. (46), (47), (48) and (49)), we obtain

Figure imgf000061_0007
Figure imgf000062_0001
Figure imgf000062_0002

and analogously

Figure imgf000062_0003

Now we add the above four symmetry-compliant sub-elements (see Eq. (34)). To do this, we first collect the first term of each symmetry-compliant subelement in the first row of the combined multipole μ_sum,j, the second term of each symmetry-compliant subelement in the second row, and the third term in the third row. Thus we get

Figure imgf000062_0004
Due to the form of this equation, and in particular due to the fact that the two brackets each comprise four symmetry-compliant sub-elements, the factors (-Ξ_anis Ξ_j) and (-Υ_j) of these brackets can be identified as effective intensity gradients parallel to -Ξ_anis Ξ_j and parallel to -Υ_j:

Figure imgf000063_0001
Figure imgf000063_0002
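The parallelism requirement just described can be sketched numerically. The following Python sketch is illustrative only: the gradient values, the anisotropy parameter and the pixel coordinates are invented stand-ins, not the patent's actual quantities. The parallel condition is imposed by requiring that the component perpendicular to the target direction vanish, and the resulting 4×4 system for (α, β, γ, δ) is solved by Gaussian elimination.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical stand-ins for the effective intensity gradients of the four
# symmetry classes s in {a, b, c, d}: one gradient at the pixel itself and
# four at the conjugate points 20 (all values assumed for illustration).
g_self = rng.normal(size=(4, 2))        # gradient at (x_j, y_j), per class
g_conj = rng.normal(size=(4, 4, 2))     # gradients at the conjugate points

xi_anis, xi_j, ups_j = 1.5, 0.7, -0.4   # assumed anisotropy parameter and position
target = np.array([-xi_anis * xi_j, -ups_j])
n_perp = np.array([-target[1], target[0]])   # normal to the target direction

# Parallelism of g_self + alpha*g_k + beta*g_m + gamma*g_n + delta*g_p with
# the target direction means the projection onto n_perp vanishes; one linear
# equation per symmetry class yields a 4x4 system for (alpha, beta, gamma, delta).
A = np.einsum('skd,d->sk', g_conj, n_perp)
b = -g_self @ n_perp
coeffs = np.linalg.solve(A, b)          # Gaussian elimination

# The combined gradients are now parallel to the target direction in every class.
combined = g_self + np.einsum('k,skd->sd', coeffs, g_conj)
```

The same structure carries over to any number of conjugate points, as long as one perpendicularity equation is written per unknown coefficient.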

8. Normalization. The above effective intensity gradients

Figure imgf000063_0003

can have an arbitrary length because of the proportionality factor p_j^s. This is resolved by normalization. Namely, the normalization factor

Figure imgf000063_0004

is used. This yields

Figure imgf000063_0005

For the prototype case, the effective intensity gradients are normalized as follows. The above effective intensity gradients

Figure imgf000063_0006

point radially (isotropically or anisotropically) toward the center of the pixel region. However, the vector length can differ between pixels of the pixel region that are mapped onto one another by reflection about the horizontal or vertical axis through the center of the pixel region. To compensate for these length differences, we next normalize these effective intensity gradients. As a result, we obtain an isotropic radial field for Ξ_anis = 1 and an anisotropic radial field for Ξ_anis ≠ 1.

For the purpose of such normalization, we divide both sides of the above equation by the above normalization factor (see Eq. (59)). Thus we get

Figure imgf000064_0001

This equation can be simplified. To this end, we consider the case a_x^q + a_y^q + a_t^q = 1. Then the nine variables b_x^q, b_y^q, b_t^q, c_x^q, c_y^q, c_t^q, d_x^q, d_y^q, d_t^q are set to zero. Thus only terms proportional to p_j^a remain in Eq. (61), while terms proportional to p_j^b, p_j^c and p_j^d vanish. In this case N_j^q equals p_j^a, and the factors p_j^a can be cancelled. For b_x^q + b_y^q + b_t^q = 1, one analogously cancels the factor p_j^b; for c_x^q + c_y^q + c_t^q = 1, the factor p_j^c; and for d_x^q + d_y^q + d_t^q = 1, the factor p_j^d. Overall, one obtains as a consequence of Eq. (58)

Figure imgf000065_0001

Due to the form of this equation, that is, due to the fact that the two brackets each comprise four symmetry-compliant sub-elements without a j-dependent term, the factors (-Ξ_anis Ξ_j) and (-Υ_j) of these brackets can be identified as the effective normalized intensity gradients

Figure imgf000065_0004

and

Figure imgf000065_0003

9. Transformed multipoles. To calculate the transformed modulated multipole, we sum the above multipole (see Eq. (60)) over all pixels of a pixel region:

Figure imgf000065_0002
Figure imgf000066_0001

Figure imgf000067_0001

10. Corresponding linear combination of equations. For purposes of explanation, we show why the SMME is valid and therefore allows a robust determination of the relative movement: one can derive a transformed MME (i.e., an SMME) that corresponds to the above transformation of the multipoles. This MME transformation is illustrated in Table 1 and is constructed as follows.

(a) First, we consider the peripheral MMEs for (Ξ_j, Υ_j) and the conjugate image points 20 (Ξ_k(j), Υ_k(j)).

(b) Second, we form the linear combination with the coefficients 1 and a_k(j).

(c) Third, by design all modulated multipole equations (MMEs) contain only modulated multipoles with the same q. Thus each MME and each modulated multipole can be labeled with an index q (see Table 1).

(d) Fourth, each such MME can be normalized by N_j^q (see Eq. (59)).

(e) Fifth, one can form the sum of the resulting normalized MMEs over the pixels of a pixel region (see Table 1). This sum is referred to as the transformed MME, or simply the symmetrized MME, in short SMME.

(f) Sixth, we express this SMME using the transformed modulated multipoles corresponding to the target patterns.

11. Consequences. By design, the modulated multipole elements of the SMME correspond to those of the transformed multipole μ_sum,j (see Eqs. (32) and (62) and Table 1), because the normalized MME and the normalized modulated multipole elements have the same coefficients

Figure imgf000068_0001
Figure imgf000068_0002

Again by design, this transformed multipole has effective intensity gradients

Figure imgf000068_0003

exactly in accordance with the target pattern.

For the prototype example, the transformed measured intensity gradients are anisotropically radial, i.e.,

Figure imgf000068_0004
Figure imgf000068_0005

and therefore the transformed MME also has anisotropically radial effective intensity gradients (see Figure 6). This includes the complete compensation of a possible anisotropy of the measured intensity gradients

Figure imgf000069_0002

at arbitrary image points (Ξ_j, Υ_j) of an arbitrary pixel region with center (x_e, y_e); the transformation procedure that accomplishes this is called symmetrization.

To illustrate why a star-shaped target pattern is relatively favorable for the determination of motion, the resulting multipole matrix A (see step 2 above) is shown in Eq. (64).

Figure imgf000069_0001
Figure imgf000070_0001

The state of motion is given by

Figure imgf000070_0002

With

Figure imgf000070_0003

One recognizes that many matrix elements are zero. This leads to a simple and robust method for determining the relative movement.
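The benefit of such a zero pattern can be sketched with standard linear algebra. The matrix entries below are invented placeholders chosen only to mimic a favorable sparse structure of the kind described here; they are not the patent's actual multipole matrix elements.

```python
import numpy as np

# Hypothetical 6x6 system for the motion state
# psi = (w_x, w_y, w_z, T_x, T_y, T_z); the many zeros mimic the favorable
# structure obtained with a star-shaped target pattern (values assumed).
A = np.array([
    [2.0, 0.0, 0.0, 0.0, 0.0, 0.5],
    [0.0, 2.0, 0.0, 0.0, 0.0, 0.3],
    [0.0, 0.0, 1.5, 0.0, 0.0, 0.0],
    [0.0, 0.0, 0.0, 1.0, 0.0, 0.0],
    [0.7, 0.0, 0.0, 0.0, 1.2, 0.0],
    [0.0, 0.4, 0.0, 0.0, 0.0, 1.1],
])
psi_true = np.array([0.01, -0.02, 0.005, 0.1, -0.05, 0.2])
b = A @ psi_true                       # synthetic right-hand side

# Least squares recovers the state; it also handles noisy, overdetermined
# variants of the system in exactly the same call.
psi, *_ = np.linalg.lstsq(A, b, rcond=None)
```

Because several rows involve only one or two unknowns, the state components decouple into small blocks, which is what makes the inversion simple and numerically robust.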

The symmetrization procedure provides different SMMEs with the modulated multipoles used. From these SMMEs one can select a desired subset. With a suitable choice of the conjugate points 20 of the modulated multipoles and of the target pattern, these are robust with respect to the statistical averaging and also with respect to the reconstruction of the movement state by matrix inversion.

Star patterns. When using anisotropic radial target patterns, so-called star patterns, the determination of ψ_k is simplified by separation. This applies to the method of dependent claim 14.

Determination of the x rotation, y rotation and z translation when using a star-shaped target pattern. Next, we consider the system of equations expressed by the matrix A_multipole. We subtract

Figure imgf000071_0001

times the fifth equation from the first equation. Then we subtract

Figure imgf000071_0002

times the sixth equation from the second equation. Additionally, we take the fourth equation. We obtain the matrix A_ttc as follows:

Figure imgf000071_0003

With

Figure imgf000071_0004

and

Figure imgf000071_0005
The corresponding inhomogeneity b ttc is

Figure imgf000072_0005

So we get

Figure imgf000072_0004

From this equation we extract the rotations and the translation component T_e,z,

Figure imgf000072_0003

owing to the zero block and to the tridiagonal form of the nonzero block.

Determination of the z rotation and the x and y translations when using a star-shaped target pattern. For the special case of anisotropic pixel regions, one obtains nonzero matrix elements A_3,3 and A_7,3:

Figure imgf000072_0002

and

Figure imgf000072_0001

With ω_x, ω_y and T_e,z determined as above, one of these matrix elements suffices for determining ω_z; for this, the third or seventh row of A_multipole is used. Finally, T_e,x and T_e,y are determined from ω_z by using the first and second rows of A_multipole. Overall, for the case of anisotropic pixel regions, the entire relative movement state can be determined by means of a single pixel region. Thus the relative movement between the imaging sensor and anisotropic pixel regions can be determined.

The method outlined above is a closed-form solution for determining the movement. In addition, star-shaped pattern marks (see Fig. 6) are now attached to the objects of the scene. Light emanating from these marks is first converted into a measured intensity, which in turn is converted into a calculated motion state Ψ. This is done by the following steps:

1. Conversion of the pattern marks into measured intensities

2. Conversion of the measured intensities into intensity derivatives

3. Conversion of the intensity derivatives into transformed modulated multipoles; those not corresponding to the target pattern are equal to zero.

4. Conversion of the transformed modulated multipoles that are nonzero in accordance with the target pattern into a calculated motion state Ψ.

These four steps consist of iteration-free, relatively simple transformations. Therefore an error propagation calculation is practicable, and a guaranteed compliance with an error tolerance can be established.

A consideration of all the transformations occurring in this process shows that all process steps used consist of additions and multiplications, which are unproblematic with respect to error propagation, except for the determination of the coefficients α_k(j) for the linear combination and except for the matrix inversion. The matrix inversion can be carried out stably by using a suitable anisotropic star-shaped target pattern. When a star-shaped target pattern is used, the coefficients α_k(j) are determined by Eqs. (54), (55), (56) and (57). Here a denominator occurs which can be made relatively large by an appropriate choice of the anisotropy parameter Ξ_anis. Thus this process step, too, is unproblematic with respect to error propagation. Overall, the error propagation calculation is therefore feasible and yields, e.g. for suitable anisotropic star patterns, a relatively low amplification of measurement errors.

Therefore the state of motion is determined by the presented method with relatively small errors, and the corresponding error tolerances can be predicted directly by conventional error propagation. Overall, using anisotropic star-shaped pattern marks, a motion state is determined within an error tolerance.
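The conventional linear error propagation referred to here can be sketched generically: the output covariance is Σ_y = J Σ_x Jᵀ, with J the Jacobian of the transformation chain. The pipeline below is a toy stand-in made of additions and multiplications, not the patent's actual transformation; the Jacobian is estimated numerically.

```python
import numpy as np

def propagate(f, x, sigma_x, eps=1e-6):
    """First-order (linear) error propagation: Sigma_y = J Sigma_x J^T,
    with the Jacobian J estimated by central differences."""
    x = np.asarray(x, float)
    y0 = np.asarray(f(x), float)
    J = np.empty((y0.size, x.size))
    for i in range(x.size):
        dx = np.zeros_like(x)
        dx[i] = eps
        J[:, i] = (np.asarray(f(x + dx)) - np.asarray(f(x - dx))) / (2 * eps)
    cov_x = np.diag(np.asarray(sigma_x, float) ** 2)
    return y0, J @ cov_x @ J.T

# Toy pipeline of additions and multiplications (a stand-in for the
# patent's transformation chain): y = (a*b + c, a - c).
y, cov_y = propagate(lambda v: np.array([v[0] * v[1] + v[2], v[0] - v[2]]),
                     x=[2.0, 3.0, 1.0], sigma_x=[0.1, 0.1, 0.1])
sigma_y = np.sqrt(np.diag(cov_y))
```

For chains of additions and multiplications the Jacobian entries stay bounded, which is why such steps amplify measurement errors only mildly; steps involving division or inversion need the denominator kept large, as discussed above.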

In another embodiment, one can also use any other markings instead of the (favorable) anisotropic star-shaped pattern markings, and apply the error propagation calculation to these.

In a further embodiment, one can dispense with the application of markings altogether and instead produce favorable intensity distributions, for example by defocusing the camera.

In another embodiment, one can apply the error propagation calculation to the intensity distribution measured with the camera.

z rotation and x, y translation for isotropic pixel regions when using a star-shaped target pattern. In the case of isotropic pixel regions, we use two pixel regions at different image points (x_e, y_e) and (x_e', y_e'). For these we first determine T_e,z and T_e',z as above. Then we introduce the new state vector

Figure imgf000075_0001

with

Figure imgf000075_0002
Figure imgf000075_0003

Next, we take the two equations corresponding to the first two rows of the matrix A_multipole, and divide these equations by T_e,z. Then we proceed similarly with the other pixel region. We obtain the following system of four equations for the new state vector

Figure imgf000075_0004

where the multipoles are labeled e and e' respectively, and using Eq. (75)

Figure imgf000076_0002

and with

Figure imgf000076_0001

The matrix A_rest is regular for most pairs of pixel regions, because its determinant is determined to be

Figure imgf000077_0001

The system is thus solved.

Extremumkonstanzgleichung (extremum constancy equation, ECE). Depending on the quality of the image material, it may be possible to extract the locations of local intensity maxima of image regions and track them over the image sequence. This raises the question of how an image point (x_i, y_i) shifts as a function of the relative movement state and of the relative depth of the corresponding object. Specifically: what are the infinitesimal displacements dx_i/dt and dy_i/dt as a linear function of an infinitesimal time interval? This is described by the equations of motion (Eq. (83)). How can these local changes dx_i/dt and dy_i/dt then be measured in the image material? First, we consider a pixel region in the vicinity of the local intensity maximum. Second, we estimate the distance d from this local maximum to the nearest local intensity extremum. Third, we determine the circle C_k with radius d about the point (x_circle, y_circle), which was selected at the beginning as a local maximum. Fourth, we determine the center of mass of the pixel intensities in the circle

Figure imgf000077_0002

Fifth, we iterate as follows: we set the new circle center (x_circle, y_circle)' equal to the center of mass r_cm calculated at this point, and calculate the new center of mass r'_cm by means of that circle. We iterate until the center of mass no longer changes; this r*_cm is a fixed point. Sixth, due to the selected time derivative, the ECE calculation is based on intensities at two successive times t_1 and t_2. Accordingly, we determine the fixed points introduced above for both times; we thus calculate r_cm(t_1)* and r_cm(t_2)*. Seventh, we determine the displacement

Figure imgf000078_0001

We interpret this as meaning that the pixel region has been moved by a shift of this size. (For our purposes it is sufficient to measure this shift for three pixel region centers (x_e, y_e); for this we use particularly suitable image region centers.)
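The center-of-mass fixed-point iteration of steps four through seven can be sketched as follows. The image data are synthetic (a Gaussian intensity blob whose parameters are assumed for illustration); the patent's actual intensity distributions come from the imaging sensor.

```python
import numpy as np

def center_of_mass_fixpoint(I, start, d, tol=1e-3, max_iter=100):
    """Iterate the intensity center of mass over a disc of radius d until it
    no longer moves (the fixed point r_cm*), as in steps four and five."""
    ys, xs = np.mgrid[0:I.shape[0], 0:I.shape[1]]
    c = np.asarray(start, float)
    for _ in range(max_iter):
        mask = (xs - c[0]) ** 2 + (ys - c[1]) ** 2 <= d * d
        w = I[mask]
        c_new = np.array([(xs[mask] * w).sum(), (ys[mask] * w).sum()]) / w.sum()
        if np.linalg.norm(c_new - c) < tol:
            return c_new
        c = c_new
    return c

def blob(cx, cy, shape=(64, 64)):
    """Synthetic frame: a Gaussian intensity maximum at (cx, cy)."""
    ys, xs = np.mgrid[0:shape[0], 0:shape[1]]
    return np.exp(-((xs - cx) ** 2 + (ys - cy) ** 2) / 20.0)

# Two frames at t1 and t2, the maximum shifted by (1, 0.5) between them.
r1 = center_of_mass_fixpoint(blob(30.0, 30.0), start=(28.0, 29.0), d=10.0)
r2 = center_of_mass_fixpoint(blob(31.0, 30.5), start=r1, d=10.0)
shift = r2 - r1     # step seven: the measured displacement between t1 and t2
```

This iteration of the intensity center of mass over a fixed-radius disc is, in modern terminology, a mean-shift-style mode seeker; for a smooth unimodal intensity patch it converges to the extremum in a few steps.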

Each pixel region thus obeys equations in accordance with Eq. (84), as follows:

Figure imgf000078_0002

We call these equations the extremum constancy equations (ECE); in particular, we refer to them as ECEX and ECEY.

To separate rotation and translation, we develop equations whose translation terms

Figure imgf000078_0003

and

Figure imgf000078_0004

vanish. To this end, we consider a point in the image point region, take the corresponding BCCE (90), and subtract I_x,j times the ECEX and I_y,j times the ECEY at the extremum location (x_e, y_e). This gives us

Figure imgf000079_0009

Here we use appropriate approximations of the variables

Figure imgf000079_0007
and
Figure imgf000079_0001
to
Figure imgf000079_0002
and
Figure imgf000079_0003
as well as

Figure imgf000079_0004
Figure imgf000079_0005
Figure imgf000079_0006
Figure imgf000079_0008
To obtain statistical averages, we sum the above equations over all pixels (x_j, y_j) of the pixel region. Thus we get

Figure imgf000079_0010

A derivation of the BCCE

The equations of motion

The perspective projective transform (pinhole camera) is widely used as a realistic camera model in image processing research (R. J. Schalkoff. Digital Image Processing and Computer Vision. John Wiley & Sons, Inc., New York, 1989). It approximates well the geometrical optics of the projection of the three-dimensional world onto two-dimensional images. Figure 7 shows the geometry in frontal-projective representation (reference numeral 23).

When objects (reference numeral 22) in front of the camera move and/or the camera moves in its environment, corresponding changes are induced in the images. We now derive the relation between the three-dimensional relative movement and the movement generated in the image plane (reference numeral 21).

According to a classical result of kinematics, each rigid body motion (without deformation) can be decomposed into six components, namely translation t = (t_x, t_y, t_z)^T and rotation ω = (ω_x, ω_y, ω_z)^T. The choice of the two coordinate systems for translation and rotation is arbitrary (i.e., the same movement can be described by an infinite number of combinations of translations and rotations). Here we choose the camera coordinate system (Fig. 7).

Perspective projection of the 3D motion results in the equations of motion, which can be displayed separately for translational and rotational motion components (H. C. Longuet-Higgins and K. Prazdny. The Interpretation of a Moving Retinal Image. Proc. R. Soc. Lond. B 208: 385-397, 1980.):

Figure imgf000081_0001

The equations describe the motion vector field v_i at each image point x_i. The absolute value of the translational speed cannot be determined by the observer; this motivates replacing the formulation

Figure imgf000081_0002
by

Figure imgf000081_0003
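The equations of motion referenced here have a well-known closed form. The sketch below uses the Longuet-Higgins/Prazdny formulation in one common sign and axis convention; the patent's Eq. (83) may differ in conventions, and all numerical values are assumed for illustration.

```python
import numpy as np

def image_velocity(x, y, Z, t, w, f=1.0):
    """Motion field at normalized image point (x, y) with depth Z for camera
    translation t = (tx, ty, tz) and rotation w = (wx, wy, wz), in the form
    given by Longuet-Higgins and Prazdny (1980). Sign conventions are one
    common choice and may differ from the patent's Eq. (83)."""
    tx, ty, tz = t
    wx, wy, wz = w
    u = (-f * tx + x * tz) / Z + x * y * wx / f - (f + x * x / f) * wy + y * wz
    v = (-f * ty + y * tz) / Z + (f + y * y / f) * wx - x * y * wy / f - x * wz
    return np.array([u, v])

# Pure translation along the optical axis yields a radial (expansion) field:
v_trans = image_velocity(0.2, 0.1, Z=5.0, t=(0.0, 0.0, 1.0), w=(0.0, 0.0, 0.0))
# Pure rotation about the optical axis yields a tangential field:
v_rot = image_velocity(0.2, 0.1, Z=5.0, t=(0.0, 0.0, 0.0), w=(0.0, 0.0, 1.0))
```

Note that translation enters only through t/Z, which is exactly why only the depth-scaled translation is observable, as stated above.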

Calculation of the normal flow

We choose the gradient approach for the movement determination. Gradients are local and computationally efficient. In a heuristic approach, it is assumed that the intensity of a point in space does not change in time:

I(x, y, t) = I(x + dx, y + dy, t + dt), (85)

and therefore

Figure imgf000081_0004

Taylor expansion in leading order results in

Figure imgf000081_0005
Eq. (85) gives the so-called brightness constancy equation (BCE) with
Figure imgf000082_0003

and

Figure imgf000082_0001

The vector field

Figure imgf000082_0002

is called the optical flow field (H. von Helmholtz. Handbook of Physiological Optics. Leopold Voss, Hamburg, 3rd edition, 1909; J. J. Gibson. The Ecological Approach to Visual Perception. Houghton Mifflin, Boston, 1979). We introduce the normalized gradient vector

Figure imgf000082_0004

and the normalized flow field u_n

Figure imgf000082_0005
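Only the flow component along the spatial gradient, the normal flow, is recoverable pointwise from the BCE (the aperture problem). The finite-difference sketch below is illustrative: the ramp image and the one-pixel shift are assumed test data, not the patent's intensities.

```python
import numpy as np

def normal_flow(I1, I2):
    """Normal flow from the brightness constancy equation: the component of
    the flow along the spatial gradient, u_n = -I_t / |grad I|, estimated
    with finite differences between two frames."""
    Ix = np.gradient(I1, axis=1)
    Iy = np.gradient(I1, axis=0)
    It = I2 - I1                              # temporal derivative
    mag = np.hypot(Ix, Iy)
    mag = np.where(mag > 1e-6, mag, np.inf)   # avoid division by zero
    un = -It / mag                            # signed normal-flow magnitude
    n = np.stack([Ix, Iy]) / mag              # normalized gradient direction
    return un, n

# A ramp image I = x shifted right by one pixel: Ix = 1, It = -1, so u_n = 1.
I1 = np.tile(np.arange(16, dtype=float), (8, 1))
I2 = I1 - 1.0   # shifting the ramp right by one pixel lowers each value by 1
un, n = normal_flow(I1, I2)
```

In regions where the gradient magnitude vanishes, the normal flow is undefined; the guard above simply sets it to zero there rather than dividing by zero.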

the BCCE

For each pixel (x_j, y_j), the BCE is merged with the equations of motion as follows. We substitute the velocities of Eq. (84) for those in Eq. (87). This gives the so-called brightness change constraint equation (BCCE) (S. Negahdaripour and B. K. P. Horn. Direct Passive Navigation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 9(1): 168-176, January 1987.):

Figure imgf000083_0001

Figure imgf000084_0001
Figure imgf000085_0001
Figure imgf000086_0001
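The substitution just described can be sketched by assembling one BCCE per pixel into a linear system for the motion state. The coefficient layout assumes the Longuet-Higgins/Prazdny form of the motion field in one common convention; the patent's Eq. (90) may differ, and all numerical values are synthetic.

```python
import numpy as np

def bcce_row(x, y, Ix, Iy, f=1.0):
    """Coefficients of one BCCE: substituting the motion-field equations into
    the BCE Ix*u + Iy*v + It = 0 yields the linear equation row . psi = -It
    in the state psi = (wx, wy, wz, tx/Z, ty/Z, tz/Z)."""
    return np.array([
        Ix * x * y / f + Iy * (f + y * y / f),    # wx
        -Ix * (f + x * x / f) - Iy * x * y / f,   # wy
        Ix * y - Iy * x,                          # wz
        -Ix * f,                                  # tx/Z
        -Iy * f,                                  # ty/Z
        Ix * x + Iy * y,                          # tz/Z
    ])

rng = np.random.default_rng(2)
psi_true = np.array([0.01, -0.02, 0.03, 0.1, 0.05, -0.2])   # assumed state

# Synthetic pixels: position and spatial gradients per pixel; the temporal
# derivative It is generated consistently with psi_true (b holds the -It).
pts = rng.uniform(-1.0, 1.0, size=(50, 4))
A = np.array([bcce_row(x, y, Ix, Iy) for x, y, Ix, Iy in pts])
b = A @ psi_true

psi_est, *_ = np.linalg.lstsq(A, b, rcond=None)   # least-squares recovery
```

Six or more pixels with sufficiently varied gradients already determine the state; using many pixels, as in the weighted sums of the main text, averages out measurement noise.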

H2616 LIST OF REFERENCE NUMBERS

10 Piezoelectrically controlled mirror

11 Parallel beam path

12 Incident image

13 Microsaccade

14 Mirror

15 Mechanically, magnetically driven oscillation

16 Refractive-index-varying liquid crystal

17 Imaging chip

18 Piezoelectrically controlled oscillation and microsaccade

19 Image point region

20 Conjugate image point

21 Image plane

22 Object

23 Frontal-projective view

Claims

1. A method for generating maximum information gain for output signals, in particular control signals for automatic machines, comprising the following steps:
• Measurement of sensor data y_i in the form of tuples having at least one component, and collection of input data for the purpose of determining the desired data x_k with the aid of a functional relationship 0 = F_i(y_i, x).
• Determination of the practically determinable partial derivatives occurring in the error propagation within a linear approximation.

Figure imgf000089_0001

• Reading in of positive utility parameters v_k, which characterize the relevance of the desired quantities x_k for the output signals, and reading in of at least one provided measurement accuracy.
• Joint determination of the x_k with the aid of weighting factors γ_j, where Σ_j γ_j = constant holds, such that the summed squared error when using the weighting factors nearly satisfies Σ_i γ_i F_i²(y_i, x) = minimum, and of the weighting factors γ_i such that the essential information gain is nearly maximal, the essential information gain being the sum of the logarithms weighted by the v_k.
Figure imgf000089_0002
2. The method according to claim 1, characterized in that
• the sensor data y_i correspond to intensities I(x, y, t) of picture elements (x, y) at times t of an imaging sensor, wherein the indices i of the sensor data y_i represent triples (x, y, t),
• the desired quantities x_1, x_2, x_3, x_4, x_5 and x_6 represent, respectively, an x rotation, y rotation, z rotation, depth-scaled x translation, depth-scaled y translation and depth-scaled z translation between arbitrary objects (22) and the imaging sensor, based on a corresponding image sequence, and
• the functions F_i are set up corresponding to the geometry of the imaging sensor and under the condition that the total intensity of objects (22) remains constant,
• further, a collection of spatial and temporal intensity derivatives I_x, I_y and I_t of the intensities I(x, y, t) of the pixels (x, y) of the imaging sensor is made as input data,
• furthermore, a guaranteed relative accuracy
of the sensor,

Figure imgf000090_0001

• utility parameters v_k for the movement quantities x_1, x_2, x_3, x_4, x_5, x_6, and
• image regions are additionally read in, and
• weighting patterns are employed for determining the relative rotations and depth-scaled translations.
3. The method according to claim 2, characterized in that the collection of the spatial and temporal intensity derivatives is carried out by the following steps:
• measurement of the intensity distribution I(x, y, t) of picture elements (x, y) of an image area of the imaging sensor at at least two successive points in time t,
• formation of the associated intensity derivatives I_x, I_y and I_t of the intensity distributions with respect to the pixel coordinates x, y using at least two neighboring pixels, or with respect to the time t using at least two successive images.
4. The method according to claim 2, characterized in that the collection of the spatial and temporal intensity derivatives is carried out by the following steps:
• direct conversion of the measured intensity I(x, y, t) into the partial time derivative

Figure imgf000091_0003

by a conventional analog electronic differentiator circuit, wherein for the additional measurement of the spatial intensity derivatives

Figure imgf000091_0001
Figure imgf000091_0002

high-frequency micro-movements, so-called microsaccades (13), are impressed on the imaging sensor.
5. The method according to claim 4, characterized in that the microsaccades (13) are generated by placing a piezoelectrically controlled mirror (10) in the parallel beam path (11) of the incident image (12).
6. The method according to claim 4, characterized in that the microsaccades (13) are generated by placing refractive-index-varying liquid crystals (16) in the parallel beam path (11) of the incident image (12).
7. The method according to claim 4, characterized in that the microsaccades (13) are generated by mechanical, magnetically driven oscillations (15) of a mirror (14) in the parallel beam path (11) of the incident image (12).
8. The method according to claim 4, characterized in that the microsaccades are realized by piezoelectrically controlled oscillations (18) of an imaging chip (17).
9. The method according to any one of claims 2 to 8, characterized in that the imaging sensor additionally determines the signal propagation time for the double distance from the imaging sensor to an object (22) projected onto the image point (x, y).
10. The method according to any one of claims 2 to 9, characterized in that in addition pattern marks are used in the scene.
11. The method according to any one of claims 2 to 10, characterized in that the weighting patterns are determined based on the maximal essential information gain.
12. The method according to any one of claims 2 to 10, characterized in that the weighting patterns are determined in advance based on the selected image material, wherein a classification of the possible measured intensities and a determination of a weighting pattern having a relatively high essential information gain for each class are made.
13. The method according to any one of claims 2 to 10, characterized in that the weighting patterns are determined based on a transformation of the measured intensity gradients onto target patterns that are read in, comprising the following steps: • reading in of specific process parameters such as conjugate image points (20), target patterns, modulated multipoles and symmetrized modulated multipole equations,
• implementation of the specific process parameters, comprising setting q, a determination of the symmetry factors P_q, a calculation of the symmetry coefficients a_x^q, b_x^q, c_x^q, d_x^q, a_y^q, b_y^q, c_y^q, d_y^q, a_t^q, b_t^q, c_t^q and d_t^q for the selected multipoles, a determination of the conjugation factors, and a determination of the selected transformed modulated multipoles for the intensity gradients I_x and I_y of the selected target patterns,
• transformation of the measured intensity gradients into linearly combined intensity gradients with the target pattern orientation by computing the linear coefficients a_k(j) for each conjugate image point (20) of an image point region (19),
• determination of normalization factors p_j^s, on the basis of which the linearly combined intensity gradients are transformed onto the target pattern by normalizing the linearly combined intensity gradients corresponding to the target pattern,
• determination of the transformed modulated multipoles for the temporal intensity derivatives I_t by determining the effective temporal intensity derivatives from the measured temporal intensity derivatives, the linear factors and the normalization factors, determining the temporal transformed modulated multipole elements from the effective temporal intensity derivatives, summing the transformed modulated multipole elements to the transformed modulated multipoles for the temporal intensity derivatives, and setting up the selected symmetrized modulated multipole equations from the transformed modulated multipoles for the intensity derivatives I_x, I_y and I_t.
14. The method according to claim 13, characterized in that a determination of the x rotation, the y rotation and the depth-scaled z translation and a determination of the z rotation, the depth-scaled x translation and the depth-scaled y translation are performed separately.
15. The method according to any one of the preceding claims, characterized in that desired relative accuracies q_k are converted into utility parameters v_k

Figure imgf000094_0001

by the following steps:
• reading in of the desired accuracies q_k,
• determination of the v_k by
Figure imgf000094_0002
16. The method according to any one of the preceding claims, characterized in that the joint optimization problem of minimizing Σ_i γ_i F_i²(y_i, x) and of maximizing the essential information gain is solved by the following steps: • characterization of all possible configurations of the input data. • selection of representative configurations of the input data. • for each representative configuration of the input data, solution of the optimization problem of the joint determination of the desired data x_k and the weighting factors γ_j, where Σ_j γ_j = constant holds, Σ_i γ_i F_i²(y_i, x) is nearly minimal, and

Figure imgf000095_0001

is nearly maximal. • creation of a classification of all possible configurations of the input data, with one representative configuration for each class. • for each class, assignment of the solution of the optimization problem of the associated representative configuration. • creation of a file with this assignment.
PCT/DE1996/000209 1995-02-03 1996-02-05 Process for obtaining the maximum information acquisition for output signals, especially control signals for automatic machines WO1996024116A1 (en)

Priority Applications (4)

Application Number Priority Date Filing Date Title
DE19503606 1995-02-03
DE19503606.9 1995-02-03
DE19509277.5 1995-03-15
DE19509277A DE19509277C1 (en) 1995-02-03 1995-03-15 A process for the evaluation of sensor data to generate accurate output signals, in particular control signals for moving determination of machines,

Publications (1)

Publication Number Publication Date
WO1996024116A1 true WO1996024116A1 (en) 1996-08-08

Family

ID=26012142

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/DE1996/000209 WO1996024116A1 (en) 1995-02-03 1996-02-05 Process for obtaining the maximum information acquisition for output signals, especially control signals for automatic machines

Country Status (1)

Country Link
WO (1) WO1996024116A1 (en)

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4980762A (en) * 1989-10-13 1990-12-25 Massachusetts Institute Of Technology Method and apparatus for image processing to obtain three dimensional motion and depth

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
AYER S ET AL: "Tracking based on hierarchical multiple motion estimation and robust regression", TIME-VARYING IMAGE PROCESSING AND MOVING OBJECT RECOGNITION, 3. PROCEEDINGS OF THE 4TH INTERNATIONAL WORKSHOP, PROCEEDINGS OF 4TH INTERNATIONAL WORKSHOP ON TIME-VARYING IMAGE PROCESSING AND MOVING OBJECT RECOGNITION, FLORENCE, ITALY, 10-11 JUNE 1993, ISBN 0-444-81467-1, 1994, AMSTERDAM, NETHERLANDS, ELSEVIER, NETHERLANDS, pages 295 - 302, XP002003427 *
HERWIG C ET AL: "Robust patch concept for egomotion estimation", COMPUTER ANALYSIS OF IMAGES AND PATTERNS. 6TH INTERNATIONAL CONFERENCE, CAIP'95. PROCEEDINGS, PROCEEDINGS OF 6TH INTERNATIONAL CONFERENCE ON COMPUTER ANALYSIS OF IMAGES AND PATTERNS, PRAGUE, CZECH REPUBLIC, 6-8 SEPT. 1995, ISBN 3-540-60268-2, 1995, BERLIN, GERMANY, SPRINGER-VERLAG, GERMANY, pages 926 - 931, XP002003428 *


Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): JP US

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): AT BE CH DE DK ES FR GB GR IE IT LU MC NL PT SE

DFPE Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101)
121 Ep: The EPO has been informed by WIPO that EP was designated in this application
122 Ep: PCT application non-entry in European phase