CN112334945A - Method for determining pixels corresponding to one another, SoC for carrying out the method, camera system having the SoC, control device and vehicle


Info

Publication number
CN112334945A
Authority
CN
China
Prior art keywords
matrix, signature, camera image, camera, image
Legal status
Pending
Application number
CN201980041272.7A
Other languages
Chinese (zh)
Inventor
S. Simon
Current Assignee
Robert Bosch GmbH
Original Assignee
Robert Bosch GmbH
Application filed by Robert Bosch GmbH
Publication of CN112334945A

Classifications

    • G06T 7/246: Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G06T 7/248: Analysis of motion using feature-based methods involving reference images or patches
    • B60R 11/04: Mounting of cameras operative during drive; arrangement of controls thereof relative to the vehicle
    • H04N 7/183: Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast, for receiving images from a single remote source
    • G06T 2207/10016: Image acquisition modality: video; image sequence
    • G06T 2207/20021: Special algorithmic details: dividing image into blocks, subimages or windows
    • G06T 2207/30252: Subject of image: vehicle exterior; vicinity of vehicle


Abstract

A method for continuously determining image points corresponding to one another between a first camera image and a second camera image, having the following steps: detecting (10) the first camera image (101); detecting (11) the second camera image (102); determining (40) at least one signature value of a first signature matrix element (203) of a first signature matrix (201) from the first camera image (101); assigning (60) first coordinates to the determined signature value of the first signature matrix element (203) in a signature value position table (300); determining (70) at least one signature value of a second signature matrix element (203) of a second signature matrix (202) from the second camera image (102), the second signature matrix element having second coordinates; and determining (80) at least one element (503) of a correspondence matrix (500, 501, 502) from the signature value position table (300), the signature value of the second signature matrix element (203) determined in the current clock, and the second coordinates of the second signature matrix element (203), wherein at least one concatenation matrix element (403) of a concatenation matrix (400) is determined (50) from the first signature matrix (201) and the signature value position table (300), and the element (503) of the correspondence matrix (500, 501, 502) is additionally determined (80) from the determined concatenation matrix (400), the first camera image (101) and the second camera image (102).

Description

Method for determining pixels corresponding to one another, SoC for carrying out the method, camera system having the SoC, control device and vehicle
Technical Field
The invention relates to a method for continuously determining image points corresponding to one another between a first camera image and a second camera image. The invention further relates to a system on a chip (SoC) for carrying out the method and to a camera system having the SoC. The invention further relates to a control device for carrying out the method, and to a vehicle having the camera system or the control device.
Background
In order to guide a vehicle partially autonomously or autonomously, a critical driving situation of the vehicle (e.g., a collision risk) should be identified within a time span of less than about 0.2 seconds. This time span corresponds to the average reaction time of a human. A reaction time faster than the human one, e.g., 0.1 seconds, is even more advantageous. In other words, for autonomous or partially autonomous driving, a recognition time span of less than or equal to 0.2 seconds is required in order to identify the movement and the direction of movement of an image point or of an object.
The movement or the direction of movement of image points in a detected sequence of camera images can be determined, for example, by determining the optical flow and/or by object recognition or object tracking. The recognition of objects in only a single image can be performed with fewer operations, and thus faster, than the calculation of the optical flow between two images. However, determining the movement of image points and their direction of movement on the basis of object recognition is in principle always slower than determining optical flow vectors, since the objects in the detected camera images must first be recognized, without knowledge of the motion parameters of the camera, and only then can the movement or the change in the direction of movement of the recognized objects be determined on the basis of a sequence of several camera images. Furthermore, object recognition is typically performed using a neural network trained on training data, which recognizes objects based on experience. Under some circumstances, therefore, not all objects are reliably recognized in all situations. In contrast, the determination of optical flow can be performed independently of training data and is analytical in principle. The challenge in determining the optical flow, however, is to find image points corresponding to one another in different camera images quickly and reliably, in particular when the camera images have a high resolution and are preferably detected at a fast detection rate.
Document DE 10351778 A1 discloses a method for processing image data of a moving scene. For this purpose, pixels or image regions corresponding to one another are identified in temporally successive image data sets. In a first step, the image data sets to be compared are transformed by means of a signature operator (Signaturoperator), i.e., a signature string or signature value is calculated for each pixel. The signature strings are stored, together with the pixel coordinates, in a signature table assigned to each image data set. Correspondence hypotheses are then generated for signature strings that can be found in both tables.
The object of the invention is to improve the determination of image points corresponding to one another between a first camera image and a second camera image.
Disclosure of Invention
The above object is achieved according to independent claims 1, 11, 12, 13 and 14.
The invention relates to a method for determining image points corresponding to one another between a first camera image and a second camera image. For corresponding image points of the camera images, in particular, an optical flow vector per image point is determined, i.e., the direction of motion and the speed of motion from the image point of the first camera image to the assigned image point in the second camera image. Alternatively, the distance between, for example, a vehicle and an object in the surroundings of the vehicle can be determined from the corresponding image points by means of triangulation.
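As an aside on the triangulation use case, the following is a minimal sketch of classical stereo triangulation from corresponding image points; the function name and all parameter values are illustrative assumptions, since the patent only mentions triangulation as an alternative use of the correspondences.

    def depth_from_disparity(focal_px, baseline_m, disparity_px):
        """Classical triangulation: Z = f * B / d (rectified image pair).
        All values below are illustrative assumptions."""
        return focal_px * baseline_m / disparity_px

    # e.g. f = 1200 px, B = 0.3 m, d = 12 px  ->  Z = 30.0 m
    print(depth_from_disparity(1200, 0.3, 12))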
In the method according to the invention, the first camera image is initially detected, the first camera image preferably being updated continuously at a predefined detection rate or frame rate. The detection rate is advantageously greater than or equal to 5 camera images per second and particularly preferably greater than or equal to 60 camera images per second. In addition, a second camera image is detected, the second camera image preferably also being updated continuously at the predefined detection rate of the first camera image. Advantageously, the second camera image is detected by means of a camera with a time offset relative to the first camera image, the camera detecting both the first camera image and the second camera image, and the time offset between the first camera image and the second camera image being positive or negative. In other words, the camera preferably detects a sequence of camera images at the predefined detection rate. Optionally, the detected sequence of camera images is buffered in an electronic memory, for example in such a way that, in addition to the currently or most recently detected first and/or second camera image, the 16 most recently captured camera images are stored in the electronic memory. In principle, the method can be applied to any pair of camera images, as first and second camera image, from the sequence of camera images stored in the electronic memory. Particularly preferably, the first and second camera images include at least the most recent or currently detected camera image. In other words, the second camera image may be detected temporally after or before the first camera image. Optionally, after the detection of the first camera image and the second camera image, the two camera images are adapted. Preferably, the first camera image and the second camera image are transformed into gray-value images by this optional adaptation, it being possible to transform the camera images into gray-value images linearly or non-linearly. Alternatively or additionally, the optional adaptation may scale the size or resolution of the first camera image and of the second camera image. Advantageously, the adaptation is carried out at a predefined clock frequency, i.e., pixel by pixel in a predefined sequence, one pixel in each clock of the clock frequency. Subsequently, the signature value of at least one first signature matrix element of a first signature matrix is determined from the first camera image, one per clock at the predefined clock frequency, said first signature matrix element having first coordinates. The clock frequency is predefined, in particular, by an SoC (system on chip) or a control device. In other words, in each clock, the signature value of one first signature matrix element of the first signature matrix is determined. The predefined clock frequency or computation clock frequency is preferably at least six orders of magnitude faster than the detection rate of the first and second camera images. Each determined signature value of a first signature matrix element of the first signature matrix represents the surroundings of an image point of the first camera image. In other words, the first coordinates of the determined first signature matrix element in the first signature matrix have a fixed assignment to the coordinates of the image points of the first camera image.
The signature value of a first signature matrix element characterizes the respectively assigned image point by a computed description of the surroundings of that image point. The signature values are preferably determined from the contents of the respectively neighboring pixels of the assigned image point by one or more simple arithmetic operations, for example by a plurality of differences and/or sums of the gray values of directly or closely neighboring pixels. In a further method step, the first coordinates are then assigned to the determined signature value in a signature value position table. In other words, in the signature value position table, the signature value determined in the current clock is assigned the position, represented by the first coordinates, of the first signature matrix element determined in the current clock in the first signature matrix. In the determined or updated signature value position table, each signature value is therefore assigned the first coordinates of the first signature matrix element most recently determined with that signature value. The method also comprises determining a concatenation matrix element of a concatenation matrix (Verkettungsmatrix) from the first signature matrix and the signature value position table. Preferably, whenever a first signature matrix element of the first signature matrix is determined, the associated concatenation matrix element is determined before the first coordinates assigned to that signature value in the signature value position table are overwritten. The concatenation matrix is determined in particular as follows: the first coordinates last assigned in the signature value position table to the signature value determined in the current clock are stored in the concatenation matrix element having the first coordinates of the current clock. The concatenation matrix element is thus determined before the signature value position table is updated, that is to say before the first coordinates of the current clock are assigned to the signature value in the signature value position table. The concatenation matrix therefore advantageously stores, for the signature value determined in the current clock, the first coordinates last determined for that signature value in a previous clock. In a further method step, at least one signature value of a second signature matrix element of a second signature matrix is determined per clock from the second camera image, the determined signature values each representing the surroundings of an image point of the second camera image. The signature values of the second signature matrix are likewise determined continuously or sequentially at the predefined clock frequency and in a predefined sequence, one signature value per clock. In a subsequent method step, at least one correspondence matrix (Korrespondenzmatrix) is determined from the determined signature value position table, the determined signature values of the second signature matrix, the determined concatenation matrix, the first camera image and the second camera image, by forming the difference between first and second coordinates that each have the same signature value.
In other words, to determine the correspondence matrix, for example, a difference is formed between the second coordinates of the second signature matrix element determined in the current clock and the first coordinates assigned in the signature value position table to the signature value determined for that second signature matrix element, or a difference is formed between those second coordinates and the older first coordinates assigned to these first coordinates in the concatenation matrix.
In other words, in the current clock, a difference is formed between the second coordinates of the current clock and a selection of first coordinates that are assigned to the signature value of the second signature matrix determined in the current clock and that are read either from the concatenation matrix or from the signature value position table, the selection being made as a function of properties of the image points assigned to the first and second coordinates, respectively. Such a property of an image point includes, for example, the color of the respective image point, the description of its surroundings and/or the history of the respective image point at the respective coordinates. The correspondence matrix advantageously represents the image points corresponding to one another between the first camera image and the second camera image and the respectively determined optical flow vectors. In particular, the correspondence matrix is also determined or adapted continuously at the predefined clock frequency, one matrix element being determined or adapted per clock. Preferably, at least one correspondence matrix is determined for each spatial direction or image dimension. Optionally, the correspondence matrix can be used further in a subsequent step of a driver assistance method and/or displayed to a user, for example in color-coded form. Subsequently, optionally as a driver assistance method step, the brakes of the vehicle are actuated, for example, as a function of the correspondence matrix. The advantage of this method is that, even for first and second camera images of high resolution, the correspondence matrix can be calculated quickly and with a small memory requirement, the mutual assignment of pixels being reliable for a large number of pixels in the camera images. In this way, the corresponding image points in the two camera images and the associated optical flow vectors are determined quickly and with high quality. These advantages arise because the mutually corresponding pixels of the first camera image and of the second camera image are determined both as a function of the signature value (i.e., the surroundings of the respective image point) and as a function of the properties of the respective image point.
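The interplay between the signature value position table and the concatenation matrix described above can be illustrated with a short sketch. This is a minimal reading in Python, not the patented hardware implementation; the names sig_pos_table, chain and process_first_image_clock are illustrative assumptions, and the concatenation matrix is kept as a dictionary for brevity.

    INVALID = (-1, -1)  # marker: signature value not seen yet

    def process_first_image_clock(sig_value, coord, sig_pos_table, chain):
        """One clock of the first-image path. sig_value is the signature of
        the pixel surroundings at coord = (x, y); sig_pos_table is indexed
        by signature value and holds the coordinates of its last occurrence;
        chain maps a coordinate to the previous coordinate with the same
        signature value."""
        # Determine the concatenation matrix element first, so the
        # coordinates about to be overwritten in the table are not lost.
        chain[coord] = sig_pos_table[sig_value]
        # Then update the signature value position table with the
        # coordinates determined in the current clock.
        sig_pos_table[sig_value] = coord

    # usage: 16-bit signature values -> 2**16 table positions
    table = [INVALID] * (1 << 16)
    chain = {}
    process_first_image_clock(3333, (2, 4), table, chain)
    process_first_image_clock(3333, (3, 5), table, chain)
    assert table[3333] == (3, 5)    # most recent occurrence
    assert chain[(3, 5)] == (2, 4)  # older occurrence preserved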
In a preferred embodiment, the determined signature values of the first and second signature matrices have a predefined length which is greater than or equal to 8 bits and less than or equal to 32 bits, particularly preferably 14 to 18 bits. This embodiment reduces the number of arithmetic operations and the computation time for determining the signature values of the first and second signature matrix elements and for determining the correspondence matrix.
In one embodiment, it can be provided that a plurality of first camera images is detected, each having a different time offset relative to the second camera image. The method is then performed in parallel in each clock for each of these detected first camera images, so that a plurality of elements of offset-dependent correspondence matrices is determined per clock. The advantage of this configuration is that the different correspondence matrices represent different velocity hypotheses for the mutually assigned image points. Advantageously, subsequent method steps (e.g., in a driver assistance method) can be performed on the basis of the offset-dependent correspondence matrices, so that critical driving situations are identified more reliably.
In a further embodiment, it can be provided that a plurality of second camera images is detected, each having a different time offset relative to the first camera image. The method is then performed in parallel in each clock for each of these detected second camera images, so that a plurality of elements of offset-dependent correspondence matrices is determined per clock. This further configuration has the same advantages as the configuration described above and, in addition, the following advantage: a signature value position table and a concatenation matrix are determined only for the first camera image. This further configuration therefore saves resources, since the signature value position table and the concatenation matrix can be used jointly for determining the offset-dependent correspondence matrices.
A consolidated (konsolidiert) correspondence matrix is then optionally determined from a comparison of the determined offset-dependent correspondence matrices with one another or with a previously determined correspondence matrix. This has the advantage that the velocity hypotheses, i.e., the offset-dependent correspondence matrices, are checked for plausibility and different velocity hypotheses for the image points between the first and second camera images are taken into account. The different speeds of different image points between the first and second camera images are caused, for example, by the fact that the distances of objects in the surroundings of the vehicle differ, so that the relative speed between an object and the vehicle, or between the image points, varies. The reliability of the determined correspondence matrix elements in the consolidated correspondence matrix is thereby improved compared to a single determined offset-dependent correspondence matrix.
In a further embodiment, it can be provided that a first additional feature matrix is determined from the first camera image, the additional feature elements of the first additional feature matrix representing properties of the image points of the first camera image. The additional feature elements represent, for example, color values of the assigned image point or of its surroundings, and/or color value changes in the surroundings of the image point, and/or noise values in the surroundings of the assigned image point, and/or recognized objects or segments assigned to the image point in the first camera image, and/or past properties of the assigned image point. In particular, the additional feature elements of the additional feature matrix are determined continuously at the predefined clock frequency. Optionally, in this embodiment, a second additional feature matrix is determined from the second camera image, the additional feature elements of the second additional feature matrix representing properties of the assigned image points of the second camera image. The additional feature elements of the second additional feature matrix represent, for example, color values of the assigned image point or of its surroundings, and/or color value changes in the surroundings of the image point, and/or noise values in the surroundings of the assigned image point, and/or recognized objects or segments assigned to the image point in the second camera image, and/or past properties of the respectively assigned image point. Advantageously, the additional feature elements of the second additional feature matrix are also determined successively at the predefined clock frequency, one additional feature element per clock. In this embodiment, the correspondence matrix is additionally determined from the first additional feature matrix and the second additional feature matrix. This yields the advantage that the assignment between two corresponding image points of the first and second camera images is more reliable.
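A minimal sketch of one possible additional feature, a per-pixel noise value estimated as the local standard deviation in a 3 × 3 window; the patent leaves the concrete feature computation open, so the choice of operator here is an assumption.

    import numpy as np

    def noise_feature_matrix(gray):
        """One possible additional feature matrix: a per-pixel noise value,
        estimated as the standard deviation of the 3x3 surroundings.
        Border pixels are left at zero, matching the idea that values too
        close to the image edge are treated as invalid."""
        h, w = gray.shape
        noise = np.zeros((h, w), dtype=np.float32)
        for y in range(1, h - 1):
            for x in range(1, w - 1):
                noise[y, x] = gray[y - 1:y + 2, x - 1:x + 2].std()
        return noise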
In a preferred embodiment, a plurality of signature value position tables and/or a plurality of concatenation matrices can be determined for the first signature matrix of the detected first camera image, each signature value position table and/or each concatenation matrix representing a partial region of the first signature matrix. In other words, in this configuration, the image region examined for mutually corresponding image points between the first camera image and the second camera image is limited to the respective partial region of the first camera image or of the assigned first signature matrix.
This advantageously achieves that the correspondence matrix is determined from the second coordinates of the signature value determined in the current clock of the second signature matrix and from the signature value position table, or the concatenation matrix, assigned to the partial region containing the first coordinates corresponding to those second coordinates. The determination of the image points corresponding to one another can thus be carried out more quickly and more efficiently. This configuration makes it possible to increase the detection rate for detecting the first and second camera images, since the corresponding image points are determined more quickly and more efficiently.
In a particularly preferred embodiment, the correspondence matrix is determined as a function of a predefined search area. The predefined search area is preferably determined from the second coordinates of the current clock and/or from a temporally earlier determined correspondence matrix. On the basis of the previously determined correspondence matrix, i.e., the previously determined optical flow vectors of the image points, the future movement of these image points can be estimated, for example, and the search area determined accordingly. The search area may also span multiple signature value position tables and/or concatenation matrices. In other words, in this embodiment, in order to determine the correspondence matrix, the assigned first coordinates of the signature value currently determined for a signature matrix element of the second signature matrix are advantageously looked up in the plurality of signature value position tables, the correspondence matrix then being determined as a function of those first coordinates from the signature value position table or tables that lie within the search area. If several suitable first coordinates lie in the search area, the correspondence matrix is additionally determined as a function of the properties of the respective image points in the first and second camera images and/or the first and/or second additional features of the respective image points. This configuration increases the reliability of the determined mutually corresponding image points. Furthermore, it accelerates the method.
In one embodiment, it may be provided that the current coordinate position at which a signature matrix element of the second signature matrix is determined differs from the current coordinate position at which a signature matrix element of the first signature matrix is determined. Preferably, the coordinate position used for determining the signature matrix element of the second signature matrix is a coordinate position that was processed for determining the signature matrix element of the first signature matrix at least ten clocks earlier. In other words, the coordinate position processed in the current clock for the signature matrix element of the first signature matrix and the coordinate position processed in the current clock for the signature matrix element of the second signature matrix are offset from one another. This configuration increases the reliability of the determined mutually corresponding image points.
In one embodiment, the correspondence matrix elements of the correspondence matrix are adapted as a function of the correspondence matrix elements in their surroundings and/or as a function of a temporally earlier determined correspondence matrix. This yields the advantage that the determined correspondence matrix, i.e., the optical flow vectors or the mutually assigned image points, is subsequently filtered or checked for plausibility. This embodiment is advantageous in particular when the detected camera images have a high resolution, especially in regions of the camera images with increased structural repetition (for example, a depicted fence or a depicted sky), in order to subsequently correct misassignments (Fehlzuordnung) between the image points of the first and second camera images.
The invention also relates to an SoC, i.e., a system on a chip, for performing the method according to the invention. The SoC is configured to detect a first camera image by means of a camera, the first camera image being updated continuously, in particular at a predefined detection rate or frame rate. The SoC is furthermore configured to detect a second camera image by means of the camera, the second camera image advantageously also being updated continuously at the predefined detection rate or frame rate. The SoC advantageously has at least one predefined clock frequency, the clock frequency being at least six orders of magnitude faster than the detection rate of the first and second camera images. The SoC is further configured to determine at least one correspondence matrix from the detected first camera image and the detected second camera image, and to generate an output signal from the determined correspondence matrix. The output signal represents, in particular, the optical flow vectors between the first camera image and the second camera image, each assigned to the coordinates of an image point of the second camera image. The SoC according to the invention has the advantage that, even for camera images of high resolution and a fast detection rate, the mutually corresponding image points between the first camera image and the second camera image are determined efficiently and with high quality. If the elements of the first signature matrix and/or of the second signature matrix and/or of the concatenation matrix determined in the method are calculated or stored only briefly, a particularly small electronic memory requirement results for the SoC, the respective memory location being freed for newly calculated values as soon as it is certain that it will no longer be accessed.
The invention also relates to a camera system having at least one camera. The camera system is provided for the continuous detection of a first camera image and a second camera image, which is offset in particular in time with respect to the first camera image, at a predetermined detection rate. The camera system also has an SoC according to the present invention.
The invention also relates to a control device for carrying out the method according to the invention.
The invention also relates to a vehicle having a camera system according to the invention or a control device according to the invention.
Drawings
Further advantages result from the following description of embodiments with reference to the drawings.
Fig. 1 shows a flow chart illustrating the method in block diagram form;
FIG. 2 shows a camera image;
FIG. 3 illustrates a signature matrix;
FIG. 4 illustrates a portion of a signature value location table;
FIG. 5 illustrates a cascaded matrix;
FIG. 6 illustrates a correspondence matrix;
FIG. 7 illustrates the division of a first signature matrix into partial regions and a search area;
fig. 8 shows a system on chip (SoC).
Detailed Description
A flow chart of the method is shown in block diagram form in fig. 1. The method begins with the detection 10 of a first camera image, the first camera image being updated at a predefined detection rate. A second camera image is then detected in step 11. The first and second camera images have a predefined resolution. The detection 10 of the first camera image and the detection 11 of the second camera image, which are carried out, for example, by means of a camera and have a predefined time offset relative to one another, each take place at the predefined detection rate. The detections 10 and 11 are performed, for example, at 60 camera images per second, the first and second camera images each having at least XGA resolution, i.e., each detected camera image comprises at least 1024 × 768 pixels. Advantageously, the detected first and second camera images each have a full HD resolution of 1920 × 1080 image points and particularly preferably a 4K UHD resolution of 3840 × 2160 image points. All method steps following the detection 10 of the first camera image can be executed at a computation clock, i.e., a predefined clock frequency (for example 1 GHz), of the SoC or of a computation unit of the control device. Preferably, one element of a matrix is determined per clock. The individual matrix elements are each stored in a memory, some of them only temporarily for the current clock for a subsequent method step, and are optionally overwritten in a later clock. Optionally, it is initially provided in step 20 that the detected first camera image and/or the detected second camera image is adapted, for example the respective RGB color image is converted into a gray-value image and/or the first and second camera images are scaled. In an optional step 31, for each image point of the first camera image, an additional feature is determined from the first camera image and assigned to an additional feature element of a first additional feature matrix. The first additional feature matrix thus represents a further property of the image points of the first camera image. Step 31 may comprise determining the color values of the respective image points and/or the noise values of the surroundings of the respective image points and/or recognizing objects and/or segments in the first camera image, at the first coordinates respectively assigned to the image points in the first additional feature matrix. In a further optional step 32, for each image point of the second camera image, an additional feature is determined from the second camera image and assigned to an additional feature element of a second additional feature matrix. The second additional feature matrix thus represents a further property of the image points of the second camera image. Step 32 may comprise determining the color values of the respective image points and/or the noise values of the surroundings of the respective image points and/or recognizing objects and/or segments in the second camera image, at the second coordinates respectively assigned to the image points in the second additional feature matrix. In step 40, the signature value of the first signature matrix, i.e., of the signature matrix element of the first signature matrix at the current first coordinates, is determined.
In step 50, a concatenation matrix is determined from the first signature matrix: the coordinates stored in the signature value position table for the signature value determined in step 40 are assigned to the concatenation matrix element of the concatenation matrix having the current first coordinates. In step 60, the signature value position table is updated according to the first signature matrix, i.e., the current first coordinates are assigned in the signature value position table to the signature value determined in step 40. In step 70, the signature value of the second signature matrix, i.e., of the signature matrix element of the second signature matrix at the current second coordinates, is determined. The second coordinates are advantageously different from the first coordinates. In other words, the second coordinates advantageously lag the first coordinates by a fixed coordinate offset. In step 80, at least one correspondence matrix is determined from the second signature matrix and the signature value position table, as well as from the concatenation matrix and the first and second camera images. Optionally, the correspondence matrix is additionally determined from the first additional feature matrix and/or the second additional feature matrix. The correspondence matrix is determined by forming the difference between the second coordinates and the first coordinates assigned in the signature value position table to the signature value determined in step 70 for the signature matrix element of the second signature matrix, or by forming the difference between the second coordinates and the first coordinates that were stored in the concatenation matrix at the first coordinates assigned to the signature value determined in step 70. The selection of the first coordinates used in step 80 for forming the difference is made according to properties of the image point of the first camera image at the respective first coordinates and/or of the image point of the second camera image at the second coordinates. In an optional step 90, it may be provided that the determined correspondence matrix is adapted by a plausibility check of the determined correspondence matrix elements. In an optional further step 99, it can be provided that the driver of the vehicle is warned as a function of the correspondence matrix and/or that a partially autonomous or autonomous driving assistance method carries out a driving maneuver as a function of the determined correspondence matrix. The method is repeated continuously at the predefined clock frequency or computation clock, the current first coordinates and the current second coordinates being advanced in each clock in a predefined sequence. If the detections 10 and 11 of the respective first or second camera image are carried out line by line from top left to bottom right, the determination 40 of the first signature matrix, the determination 70 of the signature values of the signature matrix elements of the second signature matrix, and the determination of the correspondence matrix advantageously follow the same predefined sequence. In other words, in this case the first signature matrix, the concatenation matrix, the second signature matrix and the correspondence matrix are advantageously determined in order, line by line from top left to bottom right.
The respectively determined signature values of the first and second signature matrices are, in particular, stored only temporarily for the current clock for the subsequent method steps and are overwritten in a later clock, so that the memory requirement of the method is reduced.
In one embodiment of the method, a plurality of first camera images can be detected, each having a different time offset relative to the second camera image. The method is then performed in parallel in each clock for each of the detected first camera images, so that at least one offset-dependent correspondence matrix is determined for each detected first camera image. In an alternative embodiment of the method, a plurality of second camera images can be detected, each having a different time offset relative to the first camera image. The method is then performed in parallel in each clock for each of the detected second camera images, so that at least one offset-dependent correspondence matrix is determined for each detected second camera image. A consolidated correspondence matrix is then determined in an optional step 95. The consolidated correspondence matrix is obtained by selecting the physically correct elements of the respective offset-dependent correspondence matrices for the different first or second camera images, as a function of a comparison of the determined correspondence matrices with one another and/or with previously determined correspondence matrices. The offset-dependent correspondence matrices represent different velocity hypotheses between an image point of the second camera image and the assigned image point in the first camera image. The consolidated correspondence matrix is obtained after a plausibility check of the different velocity hypotheses assigned to the second coordinates, i.e., of the offset-dependent correspondence matrices. This increases the proportion of reliably determined correspondence matrix elements in the consolidated correspondence matrix.
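One plausible reading of the plausibility check in step 95, as a sketch: with time offsets of one and two frames, a flow hypothesis is physically consistent if the two-frame flow is roughly twice the one-frame flow. The function name and the tolerance are assumptions; the patent does not fix a concrete consolidation rule.

    import numpy as np

    def consolidate(flow_dt1, flow_dt2, tol=1.0):
        """flow_dt1 / flow_dt2: (H, W, 2) flow fields for time offsets of
        one and two frames. Keep the one-frame flow where both hypotheses
        agree (flow_dt2 ~ 2 * flow_dt1); elsewhere store a dummy value of
        zero."""
        consistent = np.linalg.norm(flow_dt2 - 2.0 * flow_dt1, axis=2) <= tol
        return np.where(consistent[..., None], flow_dt1, 0.0)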
Fig. 1 also shows time ranges 1, 2 and 3. In a first time range 1, the detections 10 and 11 of the camera images are updated at the predefined detection rate, which is predefined, for example, by the properties of the camera used for the detection 10, 11 of the first and second camera images 101, 102. In a second time range 2, the arithmetic operations of the different method steps for determining the correspondence matrix are carried out at the predefined clock frequency, i.e., the computation clock of the SoC or of the control device. Alternatively, the detection 11 of the second camera image 102 can be assigned to the second time range 2, which means that the determination 70 of the signature values of the second signature matrix elements and the subsequent method steps can already begin as soon as only some pixels of the second camera image 102 have been detected. The adaptation 90 of the correspondence matrix, the determination 95 of the consolidated correspondence matrix and step 99 are carried out in the second time range 2 or in a third time range 3, it being possible for steps 90, 95 and 99 each to be carried out at specific repetition rates that differ from one another.
The memory requirement of the method shown in fig. 1, and hence the silicon area required for the SoC, is mainly influenced by the predefined length of the signature values. With a predefined length of 8 bits, new first coordinates are assigned to a given signature value in the signature value position table more frequently over time than with a predefined length of 32 bits, since a 32-bit signature value characterizes the respective surroundings of the image point determined in the current clock more precisely and therefore generally occurs correspondingly less often. However, the signature value position table is considerably larger for a predefined length of 32 bits than for 8 bits: it has 2^32 positions instead of 2^8 positions.
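To make this trade-off concrete, here is a small illustrative calculation; the assumption of one (x, y) coordinate pair of 11 + 11 bits per table position (sufficient for a 1920 × 1080 image) is ours, not the patent's.

    # Illustrative memory estimate for the signature value position table.
    bits_per_entry = 11 + 11  # one coordinate pair for a 1920x1080 image
    for sig_bits in (8, 16, 32):
        positions = 2 ** sig_bits
        kib = positions * bits_per_entry / 8 / 1024
        print(f"{sig_bits:2d}-bit signatures: {positions} positions, {kib:.1f} KiB")
    #  8-bit:        256 positions,    0.7 KiB
    # 16-bit:      65536 positions,  176.0 KiB
    # 32-bit: 4294967296 positions, ~11.0 GiB -> impractical on-chip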
In fig. 2, a detected camera image 101 or 102, for example a first camera image 101 or a second camera image 102, is shown with a plurality of image points 103 or pixels, the resolution shown in fig. 2 being limited to 7 × 7 pixels for the sake of clarity. Advantageously, the first camera image 101 and the second camera image 102 have a significantly higher resolution of at least 64 × 64 pixels, particularly preferably at least 1024 × 768 pixels (XGA), i.e., 786432 individual pixels in total. In fig. 2, each image point 103 of the camera images 101, 102 is assigned a specific first or second coordinate, i.e., a coordinate pair consisting of the row number y1, y2, y3, y4, y5, y6 or y7, etc. and the column number x1, x2, x3, x4, x5, x6, x7, etc.; for example, the image point 103a is assigned the first or second coordinates (x4; y4). The first camera image 101 and the second camera image 102 are detected, for example, as RGB color images, i.e., each image point of the respective camera image 101, 102 with the respective first or second coordinates is assigned a red component, a green component and a blue component. After the detection 10 or 11, the first and/or second camera image 101, 102 detected as a color image can be converted or adapted into a gray-value image, whereby each image point 103 is assigned a gray value. For example, a gray value, advantageously of 12 bits, can be determined from the RGB color values, so that the gray value of each pixel lies between 0 and 4095. The detection rate of the first camera image 101 and of the second camera image 102 is, for example, 60 camera images per second, i.e., 60 Hz. The first camera image 101 and the second camera image 102 are, for example, two camera images of a camera detecting at 60 Hz that are detected one after the other, i.e., offset in time, so that the first camera image 101 and the second camera image 102 usually differ from one another only slightly. The differences between the first camera image 101 and the second camera image 102 are caused, for example, by the camera being arranged on a vehicle that moves in the direction of travel. Accordingly, the first camera image 101 of the camera differs from the temporally offset second camera image 102 as a function of the vehicle's own motion, the distance between the vehicle and an object, and the movement of objects in the surroundings of the vehicle.
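A minimal sketch of the conversion of an RGB image into a 12-bit gray-value image as mentioned above; the luma weighting is a common convention and an assumption here, since the patent only states that a gray value, advantageously of 12 bits, is determined from the RGB color values.

    import numpy as np

    def to_gray12(rgb):
        """Convert an (H, W, 3) uint8 RGB image into an (H, W) gray-value
        image with values in [0, 4095]."""
        luma = 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]
        return np.round(luma * (4095.0 / 255.0)).astype(np.uint16)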
Fig. 3 shows an example of a first signature matrix 201 for the first camera image 101 or a second signature matrix 202 for the second camera image 102. The signature matrix 201 is determined from the first camera image 101 or from the gray-value image of the first camera image 101, a signature value 204 being determined for each signature matrix element 203. The signature value 204 represents the surroundings of the image point 103 of the respective camera image 101 or 102 assigned to the signature matrix element 203 and can be stored, for example, as an 8-bit, 16-bit or 32-bit value. A 16-bit signature value, for example, has a value between 0 and 65535. The signature value 204 of each signature matrix element 203 can be determined, for example, by sums and/or differences of the gray values in the 3 × 3 matrix surrounding the assigned image point. Alternatively, a more complex calculation of the signature value of the respective image point can be performed. It can be provided that no signature value is determined at the immediate image edge of the camera image, since in this example a 3 × 3 matrix is required to determine a signature value; thus, for example, the signature values of first and/or second signature matrix elements whose first or second coordinates lie within, for example, 1 pixel width of the image edge of the first and/or second signature matrix are marked as invalid (not shown). Each signature matrix element 203 of the signature matrices 201, 202 with its signature value 204 is assigned a specific first or second coordinate, i.e., a coordinate pair consisting of the row number 211 (y1, y2, y3, y4, y5, y6 or y7) and the column number 210 (x1, x2, x3, x4, x5, x6, x7) of the signature matrix 201, 202, which coordinate pair represents the respective image point of the camera image 101, 102 at the assigned first or second coordinates. Advantageously, the signature matrices 201 and 202 are determined signature matrix element 203 by signature matrix element at the computation clock of the SoC or of the control device, the order of determination of the individual signature matrix elements 203 and the clock frequency being predefined. For example, the signature matrix elements 203 of the signature matrices 201 and 202 are determined row by row from top to bottom and from left to right, in particular one matrix element per clock. The computation clock may lie in the range from 1 MHz to 10 GHz. In fig. 3, this is represented by the signature matrix element 203a with the signature value 3333 and the first or second coordinates (x3; y5) determined in the current clock. The signature matrix element 203 with the signature value 7356 and the first or second coordinates (x2; y5) was determined in the previous clock, while the signature matrix element 203 with the first or second coordinates (x4; y5) is determined in the next clock. The order in which the signature matrix elements 203 are calculated at the computation clock is shown by the directional arrow 205 in fig. 3. Alternative calculation orders for the signature matrix elements 203 in the signature matrix 201 or 202 are possible, for example column by column from right to left and from top to bottom. The first signature matrix 201 and the second signature matrix 202 preferably have mutually different first and second coordinates in the current clock. In other words, the current working coordinates of the clock (i.e., the first and second coordinates) are advanced in each clock, in particular in a predefined sequence, the second coordinates used in step 70 for determining the second signature matrix advantageously lagging (hinterlaufen) the current first coordinates used in step 40 by a fixed or variable coordinate offset. In the case of a variable coordinate offset, the coordinate offset is adapted, for example, as a function of the first and/or second coordinates. For example, the coordinate offset between the first and second coordinates may be +30 pixels in the vertical direction; in other words, the first coordinates lead (vorauslaufen) the second coordinates by 30 image lines when the determination of the second signature matrix from the second camera image begins. In the course of the method, the coordinate offset can slowly grow to +70 pixels in the vertical direction, or shrink to +10 pixels or even -10 pixels, the variable coordinate offset arising because the signature values of the signature matrix elements of the first and second signature matrices are not always determined in the same clock; instead, one of the two determinations 40, 70 is temporarily paused while the other determination 40, 70 continues. It is particularly advantageous if, in the case of line-by-line processing, the pause does not last longer than one line. The coordinate offset may be negative, positive or zero, and its sign may also change during the processing of an image pair. A variable coordinate offset is particularly advantageous for shifting the search area.
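A minimal sketch of one possible signature operator over the 3 × 3 surroundings; the patent requires only simple sums and/or differences of neighboring gray values, so the concrete folding of the eight differences into a word of the predefined length is an assumption.

    import numpy as np

    def signature_value(gray, x, y, bits=16):
        """Signature of the 3x3 surroundings of (x, y): the eight
        differences between the neighbors and the center pixel, folded
        into a word of the predefined length. Valid only for
        1 <= x < W-1 and 1 <= y < H-1, matching the invalid border of one
        pixel width described above."""
        win = gray[y - 1:y + 2, x - 1:x + 2].astype(np.int64)
        center = win[1, 1]
        diffs = np.delete(win.flatten(), 4) - center  # drop the center
        sig = 0
        for d in diffs:
            sig = (sig * 31 + int(d)) & ((1 << bits) - 1)
        return sig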
Fig. 4 shows a portion of the signature value position table 300. In the signature value position table 300, every possible signature value 204 of the predefined word length (for example 16 bits) exists as an address. The signature value position table 300 is updated in the clock: in the current clock, the signature value 204 of the first signature matrix 201 determined in step 40 is assigned, in the signature value position table 300, the first coordinates, i.e., the current working coordinate pair 210, 211, of the signature matrix element 203 for which that signature value 204 was determined, i.e., the coordinates of the respective image point 103. Each possible signature value 204 is therefore assigned, in the signature value position table 300, the first coordinates at which that signature value 204 was determined in the current clock or most recently. In the course of the method, the first coordinates assigned to a signature value 204 in the signature value position table 300 may be overwritten, whereby the uniqueness (Eindeutigkeit) of the assignment of corresponding image points between the first and second camera images 101, 102 by means of the signature value is lost. In this case, a plurality of first coordinates with the same signature value has been determined in the first signature matrix 201. For example, the signature value 204 determined for the signature matrix element 203a in the current clock (in this example the signature value 3333) is assigned, in the signature value position table 300, the corresponding coordinates of the signature matrix element 203a (in this example the coordinates (x3; y5), i.e., the column coordinate 210 x3 and the row coordinate 211 y5). However, in the signature value position table 300, the signature value 3333 had already been assigned the coordinates (x2; y4) eight clocks earlier, because the signature value 3333 last occurred there. In order not to lose the previous coordinates (x2; y4) as information when they are overwritten by the current coordinates of the signature matrix element 203a, the concatenation matrix 400 is determined.
A concatenation matrix 400 for the first signature matrix 201 of fig. 3 is shown in fig. 5. In the determination 50 of the concatenation matrix 400, before the first coordinates (x3; y5) of the signature matrix element 203a of the first signature matrix 201 present in the current clock are assigned to the respective signature value 204 in the signature value position table 300 in step 60, the first coordinates (x2; y4) last stored for that signature value 204 in the signature value position table 300 are written into the concatenation matrix element 403a at the current first coordinates of the signature matrix element 203a. Thus, the previous first coordinates (x2; y4), which are overwritten when the current first coordinates (x3; y5) are entered into the signature value position table 300, are stored in the concatenation matrix 400 and thereby preserved. The corresponding entries of earlier clocks for the signature value 3333 are shown in fig. 5 in further concatenation matrix elements 403 of the concatenation matrix 400. The corresponding coordinate references for the same signature value stored in the concatenation matrix in previous clocks are also illustrated in fig. 5 by the arrows 421, 422 and 423. Such a concatenation matrix 400, which is likewise determined at the computation clock, provides an effective way of avoiding information loss in the method according to the invention. The determination 80 of the correspondence matrix as a function of the concatenation matrix 400 therefore significantly reduces pixel misassignments caused by overwriting in the signature value position table 300.
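The chain of references can be traversed as follows; a minimal sketch under the same illustrative assumptions as above (the concatenation matrix is kept as a dictionary, and the function name is ours).

    INVALID = (-1, -1)

    def previous_occurrences(first_coord, chain, max_links=3):
        """Follow the concatenation entries (the arrows 421, 422, 423 in
        fig. 5) to collect up to max_links older first coordinates at
        which the same signature value was determined. chain maps a
        coordinate to the previous coordinate with that signature value,
        or INVALID."""
        older = []
        while len(older) < max_links:
            first_coord = chain.get(first_coord, INVALID)
            if first_coord == INVALID:
                break
            older.append(first_coord)
        return older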
The correspondence matrices 500, 501, 502 are shown in fig. 6. The correspondence matrix 500 contains, in its correspondence matrix elements 503, coordinate differences, for example (2; 2) in the correspondence matrix element 503a; in other words, the correspondence matrix contains the optical flow vector between the second coordinates of the second signature matrix 202, i.e., the second coordinates of the correspondence matrix 500, and the first coordinates assigned to the corresponding signature value in the signature value position table 300. For example, the signature value 9305 (not shown) is determined at the second coordinates (x5; y3) of the second signature matrix 202; this signature value is assigned the first coordinates (x3; y1) by the signature value position table 300, because the first signature matrix 201 has the signature value 9305 at the first coordinates (x3; y1). For the correspondence matrix element 503a marked in fig. 6 at the second coordinates (x5; y3), the optical flow vector (2; 2) results from the coordinate difference between the second coordinates (x5; y3) and the first coordinates (x3; y1). By determining the correspondence matrix 500, the mutually corresponding image points between the first camera image and the second camera image are thus determined, as well as the optical flow vectors between these corresponding image points. It can be provided that direction-dependent correspondence matrices 501, 502 are determined, so that each correspondence matrix element 503 of the respective correspondence matrix 501 or 502 is assigned a value of only one dimension. The correspondence matrix is determined by first reading, in the signature value position table 300, the first coordinates assigned to the signature value determined for the current signature matrix element of the second signature matrix 202. At least one concatenation matrix element 403 is then read at these assigned first coordinates. Preferably, a sequence of concatenation matrix elements, for example three concatenation matrix elements or coordinates, is read by following the assigned first coordinates and the first coordinates stored there in each case, as indicated by the arrows 421 and 422 in fig. 5, the same signature value having been determined at each of these read first coordinates. Then, additional properties, for example color values, of the image point of the second camera image at the second coordinates of the second signature matrix of the current clock are determined or read. These read additional properties are then compared with the corresponding properties of the image points of the first camera image at the read first coordinates. The correspondence matrix element of the correspondence matrix is then determined from this comparison; that is to say, the correspondence matrix is determined by forming the difference between the second coordinates of the currently determined signature matrix element of the second signature matrix and the first coordinates assigned to the signature value in the signature value position table, or between those second coordinates and the first coordinates read from the at least one concatenation matrix, depending on the determined additional properties of the respective image points.
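Putting the pieces together, one clock of step 80 could look as follows. This is a sketch under the same illustrative assumptions as the earlier snippets (dictionary-based concatenation matrix, a scalar additional feature such as a color value, and a tolerance of our choosing), not the patented implementation.

    def correspondence_element(sig_value, coord2, sig_pos_table, chain,
                               feat1, feat2, max_links=3, feat_tol=8):
        """Collect candidate first coordinates for the signature value
        determined at the second coordinates coord2, compare an additional
        property of the image points, and return the coordinate
        difference, i.e. the optical flow vector."""
        INVALID = (-1, -1)
        candidates, c = [], sig_pos_table[sig_value]
        while c != INVALID and len(candidates) < max_links:
            candidates.append(c)
            c = chain.get(c, INVALID)
        x2, y2 = coord2
        best = None
        for (x1, y1) in candidates:
            d = abs(int(feat1[y1][x1]) - int(feat2[y2][x2]))
            if d <= feat_tol and (best is None or d < best[0]):
                best = (d, (x2 - x1, y2 - y1))
        return best[1] if best is not None else (0, 0)  # dummy if none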
Fig. 7 shows the division of the first signature matrix into three partial regions A, B and C. For each of the partial regions A, B and C of the first signature matrix, a separate signature value position table 300a, 300b, 300c and/or a separate concatenation matrix 400a, 400b, 400c is determined. In step 80, in order to determine the correspondence matrix 500, the matching signature value position table 300a, 300b, 300c and/or the matching concatenation matrix 400a, 400b, 400c is first selected as a function of the current second coordinates processed in step 70. Then, in the signature value position table 300a, 300b or 300c selected for or assigned to the current second coordinates, the first coordinates assigned to the signature value are read, and/or the respectively assigned concatenation matrix elements, with further first coordinates relating to the signature value, are read from the matching concatenation matrix 400a, 400b or 400c. In an optional embodiment, the shape and/or size of the partial regions A, B and C used to determine the signature value position tables 300a, 300b, 300c and/or the concatenation matrices 400a, 400b, 400c can be adapted as a function of the detected speed of the vehicle (which carries the SoC or the control device for carrying out the method) and/or as a function of the first additional feature matrix. Fig. 7 also shows a search area 750 as a 3 × 3 pixel submatrix of the first signature matrix 201. The search area 750 can be determined from the current second coordinates and/or from a correspondence matrix determined at an earlier time. The search area 750 lies within one or more of the partial regions A, B and C. The search area 750 is, for example, symmetrical with respect to the current second coordinates (x5; y4), as illustrated by the double-lined box in fig. 7; an asymmetrical position of the search area 750 with respect to the current second coordinates (x5; y4) is likewise conceivable. Furthermore, the search area 750 can alternatively be determined from an optical flow vector and the current second coordinates, or from a past correspondence matrix. When the correspondence matrix is determined or the coordinate differences are formed, the search area 750 restricts the first coordinates read for the determined signature value from the signature value position tables 300a, 300b, 300c and/or the concatenation matrices 400a, 400b, 400c to the search area 750. In an optional embodiment, the shape and/or size of the search area 750 is adapted as a function of the detected speed of the vehicle and/or of a past correspondence matrix and/or of the first and/or second additional feature matrix and/or of objects and/or segments identified in the first and/or second camera image 101, 102.
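A possible reading of the per-region tables and the search-area restriction, again under assumed names; the row-wise split into A, B and C and the 3 × 3 window are illustrative choices, since the patent leaves the region geometry open.

```python
# Sketch: one signature value position table per partial region, plus a
# search-area test restricting candidate first coordinates around the
# current second coordinate. Row-wise regions and the 3x3 window are
# illustrative assumptions.
region_tables = {'A': {}, 'B': {}, 'C': {}}    # tables 300a, 300b, 300c

def region_of(x, y):
    """Assign a coordinate to partial region A, B or C (split by rows)."""
    if y < H // 3:
        return 'A'
    return 'B' if y < 2 * H // 3 else 'C'

def in_search_area(first_xy, x2, y2, radius=1):
    """Search area 750: 3x3 window, here symmetric about (x2; y2)."""
    return abs(first_xy[0] - x2) <= radius and abs(first_xy[1] - y2) <= radius

# Usage: select the matching table for the current second coordinate
# (x5; y4), then reject candidates that fall outside the search area.
table = region_tables[region_of(5, 4)]
cand = table.get(9305)
if cand is not None and not in_search_area(cand, 5, 4):
    cand = None                                # outside search area: reject
```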
It can happen that no reliable correspondence is determined for an image point of the second camera image 102. For example, a reliable assignment between an image point of the first camera image and an image point of the second camera image may not be achievable in step 80 on the basis of the first coordinates read from the signature value position table for the signature value determined in step 70, the further first coordinates assigned to that signature value by means of the concatenation matrix, the properties of the image points of the first and second camera images at these coordinates, and/or the corresponding elements of the first additional feature matrix and/or of the second additional feature matrix. In the absence of such a reliable assignment, the correspondence matrix at the respective second coordinates can be set to a dummy value (Dummy-Wert), for example zero. The method thus guarantees, even without an assignment, that the correspondence matrix is determined after a predefined number of clocks at the clock frequency, which makes a hardware implementation with real-time capability possible.
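The fixed-latency property amounts to producing exactly one result per clock, whatever the image content. The fragment below continues the sketches above; signature2, gray1, gray2 and corr are stand-in data, not from the patent.

```python
# Sketch: clocked evaluation with a guaranteed result per clock. Since
# flow_vector() falls back to the dummy value (0; 0) when no reliable
# assignment exists, the latency is independent of the image content.
corr = np.zeros((H, W, 2), dtype=int)               # correspondence matrix 500
signature2 = np.random.randint(0, 2**16, (H, W))    # stand-in 16-bit signatures
gray1 = np.random.randint(0, 256, (H, W))           # stand-in additional
gray2 = np.random.randint(0, 256, (H, W))           # properties (grey values)

for y2 in range(H):                                 # one element per clock
    for x2 in range(W):
        corr[y2, x2] = flow_vector(int(signature2[y2, x2]), x2, y2, gray1, gray2)
```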
Fig. 8 shows a SoC 800 for carrying out the method according to the invention. The SoC 800 detects the first camera image 101 and the second camera image 102, the two camera images preferably being captured by means of a camera. Furthermore, the SoC can optionally detect the vehicle speed 801 and/or the yaw rate and/or the pitch rate and/or the roll rate of the vehicle in order to adapt the partial regions A, B and C. The SoC 800 is configured to generate an output signal that represents the correspondence matrix 500 or the correspondence matrices 501, 502. Optionally, the SoC 800 can furthermore be provided for generating a further output signal representing an evaluation matrix 810 of the correspondence matrix 500 or of the correspondence matrices 501 and 502. By outputting the evaluation matrix 810, the matching 90 of the correspondence matrix 500 can subsequently be carried out, for example in a control device. Optionally, the SoC 800 can moreover be provided for outputting raw image data 820 representing the first camera image 101 and the second camera image 102; this raw image data 820 can, for example, be displayed to the driver of the vehicle. The SoC has a clock frequency between 1 MHz and 10 GHz and an electrical memory between 1 MB and 5 GB.
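The interface of such an SoC might be summarized as a plain data structure, purely for orientation; every field name below is an illustrative assumption.

```python
# Sketch: the SoC 800 interface as described -- camera images and optional
# vehicle dynamics in; correspondence matrix, optional evaluation matrix 810
# and optional raw image data 820 out. All field names are illustrative.
from dataclasses import dataclass
from typing import Optional, Tuple
import numpy as np

@dataclass
class SocInputs:
    first_image: np.ndarray                    # camera image 101
    second_image: np.ndarray                   # camera image 102
    speed: Optional[float] = None              # vehicle speed 801
    yaw_rate: Optional[float] = None           # optional vehicle dynamics,
    pitch_rate: Optional[float] = None         # used to adapt the partial
    roll_rate: Optional[float] = None          # regions A, B and C

@dataclass
class SocOutputs:
    correspondence: np.ndarray                 # matrix 500 (or pair 501/502)
    evaluation: Optional[np.ndarray] = None    # matrix 810, for matching 90
    raw_images: Optional[Tuple[np.ndarray, np.ndarray]] = None   # data 820
```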

Claims (14)

1. A method for continuously determining image points corresponding to one another between a first camera image and a second camera image, wherein the method has at least one predefined clock frequency, comprising the following method steps:
detecting (10) the first camera image (101);
detecting (11) the second camera image (102);
for each clock of the predefined clock frequency, determining (40) at least one first signature matrix element (203) of a first signature matrix (201) from the first camera image (101), said at least one first signature matrix element having first coordinates, wherein the determined signature values of the first signature matrix elements (203) each represent the surroundings of an image point of the first camera image (101);
assigning (60), in a signature value position table (300), the first coordinates of the clock to the determined signature value of the first signature matrix element (203);
for each clock of the predefined clock frequency, determining (70) at least one signature value of a second signature matrix element (203) of a second signature matrix (202) from the second camera image (102), the second signature matrix element having second coordinates, wherein the determined signature value of the second signature matrix element (203) represents the surroundings of an image point of the second camera image (102);
determining (80) at least one element (503) of a correspondence matrix (500, 501, 502) from the signature value position table (300), the signature value of the second signature matrix element (203) determined in the clock, and the second coordinates of the second signature matrix element (203);
characterized in that the following steps are carried out:
for each clock, deriving (50) at least one concatenation matrix element (403) of a concatenation matrix (400) from the first signature matrix (201) and the signature value position table (300);
additionally determining (80) the elements (503) of the correspondence matrix (500, 501, 502) from the determined concatenation matrix (400), the first camera image (101) and the second camera image (102).
2. The method according to claim 1, characterized in that the determined signature values of the first signature matrix (201) and of the second signature matrix (202) have a predefined length of less than or equal to 16 bits.
3. Method according to any of the preceding claims, characterized in that the following method steps are performed:
a plurality of first camera images (101) are detected (10), each having a different time offset relative to the second camera image, wherein the method is subsequently performed in parallel in a clock for each of the detected first camera images (101), such that a plurality of elements (503) of a correspondence matrix (500, 501, 502) relating to the offset is determined in a step (80) for each clock.
4. Method according to any of the preceding claims, characterized in that the following method steps are performed:
detecting (11) a plurality of second camera images (102), each having a different time offset relative to the first camera image (101), wherein the method is subsequently performed in parallel in the clock for each of the detected second camera images (102), such that a plurality of offset-specific elements (503) of correspondence matrices (500, 501, 502) are determined in step (80) for each clock.
5. Method according to claim 3 or 4, characterized in that the following method steps are performed:
a combined correspondence matrix is derived (95) from a comparison of the derived correspondence matrices (500, 501, 502) relating to the offset with each other or with a previously determined correspondence matrix.
6. Method according to any of the preceding claims, characterized in that the following steps are performed:
determining (31), for each clock, additional feature elements of a first additional feature matrix from the first camera image (101), wherein the additional feature elements represent properties of the assigned image point of the first camera image (101) and/or properties of the surroundings of the assigned image point; and/or
determining (32), for each clock, additional feature elements of a second additional feature matrix from the second camera image (102), wherein the additional feature elements represent properties of the assigned image point of the second camera image (102) and/or properties of the surroundings of the assigned image point;
additionally determining (80) the correspondence matrix (500, 501, 502) from the first and/or second additional feature matrix.
7. The method according to any of the preceding claims, characterized in that a plurality of signature value position tables (300a, 300b, 300c) and/or a plurality of concatenation matrices (400a, 400b, 400c) are determined for the first signature matrix (201), wherein each signature value position table and/or concatenation matrix represents a respective partial region (A, B, C) of the first signature matrix (201).
8. The method according to any of the preceding claims, characterized in that the correspondence matrix (500, 501, 502) is determined (80) from a predefined search area (750), wherein the predefined search area (750) is derived from the second coordinates of the clock.
9. Method according to any of the preceding claims, characterized in that, in the clock, the second coordinates have a coordinate offset from the first coordinates, wherein the second coordinates lag behind the first coordinates, in particular continuously at the predefined clock frequency, by a fixed or variable coordinate offset.
10. Method according to any of the preceding claims, characterized in that the following steps are performed:
matching (90) a correspondence matrix element of the correspondence matrix (500, 501, 502) as a function of the correspondence matrix elements in the surroundings of that correspondence matrix element.
11. SoC (800) arranged for performing the method according to any of claims 1 to 10, wherein the SoC (800) is arranged for:
detecting a first camera image (101);
detecting a second camera image (102);
determining at least one correspondence matrix (500, 501, 502) from the detected first camera image (101) and the detected second camera image (102);
generating output signals from the determined correspondence matrix (500, 501, 502), wherein the output signals represent in particular optical flow vectors between the first and second camera images (101, 102).
12. A camera system, the camera system having:
at least one camera, which is provided for the continuous detection of a first camera image (101) and a second camera image (102) at a predefined detection rate;
the SoC (800) of claim 11.
13. A control apparatus for performing the method of any one of claims 1 to 10.
14. A vehicle having a camera system according to claim 12 or a control apparatus according to claim 13.

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
DE102018209898.0A DE102018209898A1 (en) 2018-06-19 2018-06-19 Method for determining corresponding pixels, SoC for carrying out the method, camera system with the SoC, control unit and vehicle
DE102018209898.0 2018-06-19
PCT/EP2019/061926 WO2019242929A1 (en) 2018-06-19 2019-05-09 Method for determining mutually corresponding image points, soc for performing the method, camera system having the soc, controller and vehicle

Publications (1)

Publication Number Publication Date
CN112334945A (en) 2021-02-05

Country Status (5)

Country Link
US (1) US11361450B2 (en)
EP (1) EP3811336B1 (en)
CN (1) CN112334945A (en)
DE (1) DE102018209898A1 (en)
WO (1) WO2019242929A1 (en)

Also Published As

Publication number Publication date
EP3811336B1 (en) 2023-07-12
US11361450B2 (en) 2022-06-14
EP3811336A1 (en) 2021-04-28
WO2019242929A1 (en) 2019-12-26
US20210233257A1 (en) 2021-07-29
DE102018209898A1 (en) 2019-12-19

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination