EP3903478A2 - Processing system and method for processing the measured data of an image sensor - Google Patents

Processing system and method for processing the measured data of an image sensor

Info

Publication number
EP3903478A2
Authority
EP
European Patent Office
Prior art keywords
light sensors
image sensor
measurement data
image
reading
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
EP19829496.9A
Other languages
German (de)
English (en)
Inventor
Ulrich Seger
Marc Geese
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Robert Bosch GmbH
Original Assignee
Robert Bosch GmbH
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Robert Bosch GmbH filed Critical Robert Bosch GmbH
Publication of EP3903478A2
Pending legal-status Critical Current

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/70Circuitry for compensating brightness variation in the scene
    • H04N23/71Circuitry for evaluating the brightness variation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/10Cameras or camera modules comprising electronic image sensors; Control thereof for generating image signals from different wavelengths
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N25/00Circuitry of solid-state image sensors [SSIS]; Control thereof
    • H04N25/60Noise processing, e.g. detecting, correcting, reducing or removing noise
    • H04N25/61Noise processing, e.g. detecting, correcting, reducing or removing noise the noise originating only from the lens unit, e.g. flare, shading, vignetting or "cos4"
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N25/00Circuitry of solid-state image sensors [SSIS]; Control thereof
    • H04N25/60Noise processing, e.g. detecting, correcting, reducing or removing noise
    • H04N25/61Noise processing, e.g. detecting, correcting, reducing or removing noise the noise originating only from the lens unit, e.g. flare, shading, vignetting or "cos4"
    • H04N25/611Correction of chromatic aberration
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning

Definitions

  • the invention is based on a method or a processing device according to the type of the independent claims.
  • the present invention also relates to a computer program.
  • In the linking step, the measurement data of the light sensors are linked with weighting values in order to obtain image data for the reference position.
  • Measurement data can be understood to mean data which have been recorded by a light sensor or other measurement units of an image sensor and which represent an image of a real object on the image sensor.
  • a reference position can be understood to mean, for example, a position of a light sensor with a light property (e.g. a red filtered spectral range) for which other light properties are to be calculated (e.g. green and blue), or whose measured value is to be processed or corrected.
  • The reference positions form a regular grid of points, which allows the generated measurement data to be processed as an image on a system with an orthogonal display grid (e.g. a digital computer display).
  • The reference position can coincide with a measuring position or the position of an existing light sensor, or lie anywhere on the sensor surface spanned by the axes x and y, as described in more detail below.
  • The environment of a reference position can be understood to mean light sensors which lie in adjacent rows and/or columns of the light sensor grid of an image sensor and which bound the reference position. For example, the environment around the reference position forms a rectangular two-dimensional structure in which there are NxM light sensors with different properties.
  • A weighting value can be understood to mean, for example, a factor which is linked, for example multiplied, with the measured values of the light sensors in the vicinity of the reference position; the weighted results are then added up in order to obtain the image data for the reference position.
  • The weighting values can differ depending on the position of the light sensor on the sensor, even for the same light sensor types, i.e. for light sensors which are designed to record the same physical parameter. This means that the measurement data values of light sensors arranged in an edge region of the image sensor are weighted differently than the measurement data values of light sensors arranged in a central region of the image sensor.
  • Linking can be understood to mean, for example, multiplying the measured values of the light sensors (i.e. the light sensors in the vicinity of the reference position) by the respectively assigned weighting values, followed, for example, by adding the weighted measurement data values of these light sensors; a minimal sketch of this linking step follows below.
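The following is a minimal sketch of this weighted-sum linking, under assumed names and an assumed 3 x 3 neighbourhood; the patent does not prescribe a specific implementation, kernel size or weight values.

```python
# Sketch of the linking step: the measured values around a reference
# position are multiplied by their assigned weighting values and summed.
# Names, array shapes and the example weights are assumptions.
import numpy as np

def reconstruct_at_reference(measurements: np.ndarray,
                             weights: np.ndarray,
                             ref_row: int, ref_col: int,
                             radius: int = 1) -> float:
    """Weighted sum over the neighbourhood around (ref_row, ref_col).

    measurements: 2-D array of raw light sensor values.
    weights: kernel of the same neighbourhood shape; in the approach
             described here it depends on where the reference position
             lies on the image sensor (edge vs. centre).
    """
    patch = measurements[ref_row - radius: ref_row + radius + 1,
                         ref_col - radius: ref_col + radius + 1]
    return float(np.sum(patch * weights))

raw = np.arange(25, dtype=float).reshape(5, 5)
w_centre = np.full((3, 3), 1.0 / 9.0)          # hypothetical central weights
w_edge = np.array([[0.2, 0.1, 0.0],
                   [0.2, 0.5, 0.0],
                   [0.0, 0.0, 0.0]])           # hypothetical edge weights
print(reconstruct_at_reference(raw, w_centre, 2, 2))
print(reconstruct_at_reference(raw, w_edge, 2, 2))
```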
  • By means of weighting values that depend on the position of the light sensor in question on the image sensor, such an unfavorable imaging property can be corrected, the weighting values being trained or ascertained, for example, in a preceding method or at runtime.
  • This training can be carried out, for example, for a corresponding combination of image sensor and optical components, that is to say for a specific optical system, or for groups of systems with similar properties.
  • the trained weighting values can subsequently be stored in a memory and can be read out at a later point in time for the method proposed here.
  • In this way, a continuously increasing change in the point imaging of a real object from a central area of the image sensor to an edge area can be taken into account or compensated for very precisely at each reference position.
  • In the reading step, measurement data can be read in from light sensors which are each designed to record measurement data for different parameters, in particular colors, exposure times, brightness or other lighting parameters. Such an embodiment of the approach proposed here enables the correction of different physical parameters, such as the mapping of colors, exposure times and/or brightnesses, by the light sensors in the different regions of the image sensor.
  • An embodiment of the approach proposed here is also advantageous in which a step of determining the weighting values using an interpolation of weighting reference values is provided, in particular wherein the weighting reference values are assigned to light sensors which are arranged at a predefined distance from one another on the image sensor.
  • The weighting reference values can thus be understood to mean weighting base values which represent the weighting values for individual light sensors at the predetermined distances and/or positions. Not every light sensor then needs its own stored weighting value, so that a reduction in the storage space which is to be provided for the implementation of the approach proposed here can advantageously be achieved.
  • Those weighting values for light sensors which are arranged on the image sensor between the light sensors to which weighting reference values are assigned can then be determined by an interpolation, which is technically easy to implement, as soon as these weighting values are required; a sketch of such an interpolation follows below.
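As an illustration, here is a sketch of bilinearly interpolating a weight kernel for an arbitrary sensor position from weighting reference values stored only at coarse grid support points. The grid spacing, kernel size and all names are assumptions; the patent does not fix the interpolation scheme.

```python
# Sketch: weight kernels are stored only at coarse grid support points
# (the weighting reference values); the kernel for any other position is
# interpolated, which reduces the required storage space.
import numpy as np

def interpolate_weights(ref_grid: np.ndarray, x: float, y: float,
                        spacing: float) -> np.ndarray:
    """Bilinear interpolation of a weight kernel at position (x, y).

    ref_grid: shape (Gy, Gx, kh, kw), one trained kernel per support
              point, support points `spacing` sensor pitches apart.
    """
    gx, gy = x / spacing, y / spacing
    x0, y0 = int(gx), int(gy)
    x1 = min(x0 + 1, ref_grid.shape[1] - 1)
    y1 = min(y0 + 1, ref_grid.shape[0] - 1)
    fx, fy = gx - x0, gy - y0
    return ((1 - fx) * (1 - fy) * ref_grid[y0, x0]
            + fx * (1 - fy) * ref_grid[y0, x1]
            + (1 - fx) * fy * ref_grid[y1, x0]
            + fx * fy * ref_grid[y1, x1])

# 4 x 4 grid of 3 x 3 kernels, one support point every 128 sensor pitches:
grid = np.random.rand(4, 4, 3, 3)
kernel = interpolate_weights(grid, x=200.0, y=75.0, spacing=128.0)
```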
  • Measurement data can also be read in from light sensors which are arranged at a different position on the image sensor than the light sensors from which measurement data were read in a previous step of reading. Such an embodiment of the approach proposed here enables the step-by-step optimization or correction of the measurement data for as many light sensors as can sensibly be considered, possibly almost all of them. A light sensor type can be understood to mean, for example, a design of a light sensor for mapping a certain physical parameter of the light.
  • For example, a light sensor of a first light sensor type can be designed to detect certain color properties of the light incident on the light sensor, such as red light, green light or white light, particularly well, whereas a light sensor of another light sensor type is designed to detect, for example, the brightness or another lighting parameter particularly well.
  • measurement data from the light sensors of different types of light sensors can also be read in in the reading step.
  • In the reading step, the measurement data can be read in from light sensors of an image sensor which has an at least partially cyclical arrangement of light sensor types, and/or measurement data can be read in from light sensors of different sizes on the image sensor, and/or measurement data can be read in from light sensors of different light sensor types which each occupy a different area on the image sensor.
  • Such an embodiment of the approach proposed here offers the advantage of being able to process or link measurement data from the corresponding light sensors of the corresponding light sensor types in a technically simple and fast manner, without having to scale these measurement data beforehand or otherwise prepare them for linking.
  • An embodiment of the proposed approach can be implemented in a technically particularly simple manner, in which, in the linking step, the measurement data of the light sensors which are multiplicatively weighted with the assigned weighting values are added in order to obtain the image data for the reference position.
  • An embodiment of the approach proposed here is advantageous as a method for generating a weight value matrix for weighting measurement data of an image sensor, the method comprising a step of reading in reference image data for reference positions of a reference image, training measurement data of a training image and an initial weight value matrix, and a step of training the weight values contained in the initial weight value matrix, wherein a link formed from the training measurement data of the light sensors weighted with a weight value is compared with the reference measurement data for the corresponding reference position, using light sensors which are arranged around the reference position on the image sensor.
  • Reference image data of a reference image can be understood to mean measurement data which represent an image which is considered to be optimal.
  • Training measurement data of a training image can be understood to mean measurement data which represent an image as it is actually recorded by the light sensors of the image sensor.
  • weight values can thus be generated which can subsequently be used for the correction or processing of measurement data of an image of an object by the image sensor.
  • An embodiment of the approach presented here is particularly advantageous, in which, in the step of reading in as a reference image and as a training image, one image each is read in, which represents an image section that is smaller than an image that can be detected by the image sensor.
  • Such an embodiment of the approach proposed here offers the advantage of a technically or numerically significantly easier determination of the weight value matrix, since the measurement data of the entire reference image or training image need not be used; instead, individual light sensor areas at certain positions of the image sensor are used, only in the form of base cutouts, in order to calculate the weight value matrix.
  • A change in the imaging properties of the image sensor from a central region of the image sensor to an edge region can often be approximated linearly in sections, so that the weight values can be determined, for example by interpolation, even for those light sensors which do not lie in the area of the base cutouts concerned.
  • This variant of the invention in the form of a processing device can also solve the problem underlying the invention quickly and efficiently.
  • For this purpose, the processing device can have at least one computing unit for processing signals or data, at least one storage unit for storing signals or data, at least one interface to a sensor or an actuator for reading in sensor signals from the sensor or for outputting data or control signals to the actuator, and/or at least one communication interface for reading in or outputting data which are embedded in a communication protocol.
  • the computing unit can be, for example, a signal processor, a microcontroller or the like, and the storage unit can be a flash memory, an EEPROM or a magnetic storage unit.
  • The communication interface can be designed to read in or output data wirelessly and/or in a line-bound manner, wherein a communication interface that can read in or output line-bound data can read in this data, for example electrically or optically, from a corresponding data transmission line or output it into a corresponding data transmission line.
  • a processing device can be understood to mean an electrical device that processes sensor signals and outputs control and / or data signals as a function thereof.
  • the processing device can have an interface which can be designed in terms of hardware and / or software.
  • the interfaces can be part of a so-called system ASIC, for example, which contains the most varied functions of the device.
  • However, the interfaces can also be separate integrated circuits or consist at least partially of discrete components.
  • the interfaces can be software modules which are present, for example, on a microcontroller in addition to other software modules.
  • A computer program product or computer program with program code, which can be stored on a machine-readable carrier or storage medium such as a semiconductor memory, a hard disk memory or an optical memory and is used for carrying out, implementing and/or controlling the steps of the method according to one of the embodiments described above, is also advantageous, in particular when the program product or program is executed on a computer or a device.
  • Fig. 1 is a schematic cross-sectional representation of an optical system for use with an exemplary embodiment of the approach presented here;
  • FIG. 2 shows a schematic view of the image sensor in a top view for use with an exemplary embodiment of the approach presented here;
  • Fig. 3 is a block diagram of a system for processing the measurement data provided by an image sensor designed as a matrix of two-dimensionally arranged light sensors;
  • Fig. 4A is a schematic top view of an image sensor for use with an exemplary embodiment of the approach presented here;
  • Fig. 4D shows a representation of a complex unit cell, which represents the smallest repetitive area-covering group of light sensors in the image sensor presented in Fig. 4A;
  • Fig. 6 is a schematic top view of an image sensor in which light sensors have been selected as the light sensors providing measurement data; here a group of 3 x 3 unit cells is highlighted;
  • Fig. 7 is a schematic top view of an image sensor for use with an exemplary embodiment of the approach presented here;
  • FIG. 8 shows a schematic representation of a weighting value matrix for use with an exemplary embodiment of the approach presented here;
  • Fig. 9 shows a block diagram of a schematic procedure as can be carried out in a processing device according to Fig. 3;
  • Fig. 10 shows a flowchart of a method for processing measurement data of an image sensor according to an exemplary embodiment;
  • Fig. 11 is a flowchart of a method for generating a weight value matrix for weighting measurement data of an image sensor according to an embodiment; and
  • FIG. 12 shows a schematic illustration of an image sensor with a light sensor arranged on the image sensor for use in a method for generating a weight value matrix for weighting measurement data of an image sensor according to an exemplary embodiment.
  • FIG. 1 shows a schematic cross-sectional representation of an optical system 100 with a lens 105 aligned on an optical axis 101, through which an object 110, shown as an example, is imaged onto an image sensor 115. It can be seen from the exaggerated illustration in FIG. 1 that a light beam 117 which strikes a central region 120 of the image sensor 115 takes a shorter path through the lens 105 than a light beam 122 that passes through an edge region of the lens 105 and strikes an edge region 125 of the image sensor 115.
  • A change in the optical image and/or a change in the spectral intensity with respect to different colors can also be observed in this light beam 122, for example compared to the corresponding values of the light beam 117.
  • Furthermore, the image sensor 115 may not be exactly planar, but slightly convex or concave in shape, or tilted with respect to the optical axis 101, so that changes in the image likewise result when the light rays 123 are recorded in the edge region 125 of the image sensor 115. This can lead to an imprecise evaluation of the image of the object 110 from the data supplied by the image sensor 115, so that the measurement data supplied by the image sensor 115 may not be sufficiently usable for some applications.
  • FIG. 2 shows a schematic top view of the image sensor 115 for use with an exemplary embodiment of the approach presented here. The image sensor 115 comprises a plurality of light sensors 200, which are arranged in a matrix in rows and columns; the exact configuration of these light sensors 200 is described in more detail below. Furthermore, a first area 210 is shown in the central area 120 of the image sensor 115, in which, for example, the light beam 117 from FIG. 1 strikes. From the small diagram shown in FIG. 2, which is assigned to the first area 210 and represents an exemplary evaluation of a specific spectral energy distribution detected in this area 210 of the image sensor 115, it can be seen that the light beam 117 is imaged in the first area 210 as a relatively sharp point.
  • The light beam 122, on the other hand, is represented somewhat "smeared" when it strikes the area 250 of the image sensor 115. If a light beam strikes one of the image areas 220, 230, 240 of the image sensor 115 in between, it can already be seen from the associated diagrams that the spectral energy distribution can take on a different shape, for example as a result of imaging by aspherical lenses, so that a precise evaluation of the measurement data of the image sensor 115 becomes problematic, as can be seen, for example, from the illustration in image area 250.
  • If the measurement data supplied by the image sensor 115 are now to be used for safety-critical applications, for example for the real-time detection of objects in a vehicle environment, imaging errors of this kind should be corrected as far as possible.
  • FIG. 3 shows a block diagram of a system 300 for processing the measurement data 310 provided by the image sensor 115, which is designed as a light sensor matrix. The measurement data 310 output by the image sensor 115 correspond to the respective measured values of the light sensors 200 of the image sensor 115 from FIG. 2.
  • The light sensors of the image sensor 115 can be constructed differently in shape, position and function and, in addition to corresponding spectral values, that is to say color values, can also detect parameters such as intensity, brightness, polarization, phase or the like. Such detection can take place in that individual light sensors of the image sensor 115 are equipped, for example, with appropriate color filters.
  • These measurement data 310 are (optionally) pre-processed in a unit 320.
  • The preprocessed image data, which for the sake of simplicity can also be referred to as measurement data 310', can be fed to a processing unit 325, in which, for example, the approach described in more detail below is implemented in the form of a grid-based correction.
  • The measurement data 310' are read in via a read-in interface 330 and supplied to a linking unit 335.
  • weighting values 340 can be read out from a weighting value memory 345 and also fed to the link unit 335 via the read-in interface 330.
  • the measurement data 310 ′ from the individual light sensors are then linked to weighting values 340, for example in accordance with the description which is explained in more detail below, and the correspondingly obtained image data 350 can be further processed in one or more parallel or sequential processing units.
  • FIG. 4A shows a schematic top view of an image sensor 115, in which light sensors 400 of different types of light sensors are arranged in a cyclic pattern.
  • the light sensors 400 can correspond to the light sensors 200 from FIG. 2 and can be implemented as pixels of the image sensor 115.
  • The light sensors 400 of the different light sensor types can, for example, be of different sizes, have different orientations, have different spectral filters or differ in other properties.
  • The light sensors 400 can also be constructed as sensor cells S1, S2, S3 or S4, as can be seen in FIG. 4B, which each form a sampling point for light falling on the sensor cell S; these sampling points can be regarded as lying in the center of gravity of the respective sensor cells.
  • The individual sensor cells S can also be combined to form macro cells M, as shown in FIG. 4C, which each have a common unit cell, i.e. a smallest repetitive group of sensor cells, shown e.g. in FIG. 4D in a complex form.
  • the unit cell can also have an irregular structure.
  • The individual light sensors 400 in FIG. 4 can be used multiple times in one unit cell. The light sensors 400 are also arranged in a cyclical sequence both in the vertical and in the horizontal direction, in which case they lie on a grid of the same or different periodicity that is specific to each sensor type.
  • This vertical and horizontal arrangement of light sensors in a cyclical sequence can also be understood as a row- or column-wise ordering of the light sensors.
  • The regularity of the pattern can also arise modulo n, that is, the structure is not visible in every row/column.
  • Any cyclically repeating arrangement of light sensors can be used by the method described here, even if row- and column-like arrangements are currently the most common; a sketch of such a unit-cell tiling follows below.
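The following sketch shows how a cyclically repeating arrangement of light sensor types can be described by tiling a unit cell modulo its size in the row and column direction. The 2 x 2 RGBW cell is a made-up example, not the sensor layout of the patent.

```python
# Sketch: a cyclic light sensor arrangement described by its unit cell,
# the smallest repetitive area-covering group (cf. Fig. 4D). The RGBW
# cell below is a hypothetical example.
import numpy as np

UNIT_CELL = np.array([["R", "G"],
                      ["W", "B"]])   # assumed 2 x 2 unit cell

def sensor_type_at(row: int, col: int) -> str:
    """Light sensor type at (row, col): the pattern repeats modulo the
    unit-cell size in both directions."""
    h, w = UNIT_CELL.shape
    return UNIT_CELL[row % h, col % w]

layout = np.array([[sensor_type_at(r, c) for c in range(6)]
                   for r in range(4)])
print(layout)   # the unit cell tiled over a small 4 x 6 sensor
```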
  • FIG. 5 now shows a schematic top view of an image sensor 115, in which some light sensors 400 from a group 515 in an environment of a reference position 500 are selected and weighted by a weighting described in more detail below, in order to address the above-mentioned problem that the image sensor 115 provides measurement data 310 or 310' (corresponding to FIG. 3) that are not optimally usable.
  • For this purpose, one reference position 500 is selected and several light sensors 510 are defined in the vicinity of this reference position 500, the light sensors 510 (which can also be referred to as ambient light sensors 510) being arranged, for example, in a different column and/or a different row than the reference position 500 on the image sensor 115.
  • A (virtual) position on the image sensor 115 serves as the reference position 500, acting as a reference point for a reconstruction of image data to be formed for this reference position; that is to say, the image data to be reconstructed from the measurement data of the ambient light sensors 510 correspond to the data to be output, or to an image parameter to be evaluated in the subsequent method, at this reference position.
  • the reference position 500 need not necessarily be bound to a light sensor; rather, image data 350 can also be determined for a reference position 500 that lies between two light sensors 510 or completely outside a region of a light sensor 510.
  • The reference position 500 therefore also need not have a triangular or circular shape that is based, for example, on the shape of a light sensor 510.
  • Light sensors of the same light sensor type as a light sensor at the reference position 500 can be selected as ambient light sensors 510.
  • light sensors can also be selected as ambient light sensors 510 to be used for the approach presented here, which represent a different light sensor type than the light sensor at reference position 500 , or a combination of the same and different types of light sensors.
  • An environment of 14 individual cells is selected in FIG. 5 (8 squares, 4 …). The measurement data of each of these light sensors 400, i.e. of the ambient light sensors 510, are each weighted with a weighting value 340, and the weighted measurement data thus obtained are linked to one another and assigned to the reference position 500 as image data 350.
  • This means that the image data 350 at the reference position 500 are not only based on information that was actually recorded or measured by a light sensor at the reference position 500; rather, the image data 350 assigned to the reference position 500 also contain information that was recorded or measured by the ambient light sensors 510. As a result, it is now possible to correct distortions or other imaging errors to a certain degree, so that the image data associated with the reference position 500 come very close to those measurement data that a light sensor at the reference position 500 would record or measure without, e.g., the deviations from an ideal light energy distribution or the aberration.
  • For this purpose, weighting values 340 are used which are determined or trained depending on the position on the image sensor 115 of the light sensor 400 to which the respective weighting values 340 are assigned. For example, weighting values 340 which are assigned to light sensors 400 located in the edge region 125 of the image sensor 115 can have a higher value than weighting values 340 which are assigned to light sensors 400 located in the central region 120 of the image sensor 115. This allows the position-dependent imaging errors described above to be compensated.
  • The weighting values 340 which can be used for such processing or weighting are determined in advance in a training mode, which is described in more detail below, and can be stored, for example, in the memory 345 shown in FIG. 3.
  • In FIGS. 6 and 7, some light sensors were selected as ambient light sensors 510. Weighting values 340 which are assigned to light sensors 400 on the image sensor 115 that lie between two light sensors, to each of which a weighting reference value 810 is assigned, can be determined by interpolation; the weighting reference values 810 are stored in a weighting value matrix 800 (for example, one for each light sensor type), which requires a significantly smaller memory size than if a correspondingly assigned weighting value 340 had to be stored for each light sensor 400.
  • FIG. 9 shows a block diagram of a schematic procedure as can be carried out in a processing device 325 according to FIG. 3.
  • The image sensor 115 or the preprocessing unit 320 supplies the measurement data 310 (or the image data 310'), which are read in and which, as measurement data or sensor data 900, form the actually information-providing measurement data that were measured or recorded by the individual light sensors 400.
  • For these measurement data 310 or 310', position information 910 is also known, from which it can be seen at which position in the image sensor 115 the relevant light sensor 400 that supplied the sensor data 900 is located. For example, a conclusion can be drawn from this position information 910 as to whether the corresponding light sensor 400 is located in the edge area 125 of the image sensor 115 or rather in the central area 120 of the image sensor 115.
  • Using this position information 910, which is sent to the memory 345, for example via a position signal 915, those weighting reference values 810 stored in the memory 345 are retrieved which are assigned to the light sensor type from which the relevant measurement data 310 or 310', or the relevant sensor data 900, were supplied. For reference positions for which the weighting value matrix 800 does not contain a specific value, the weights for the position 910 are interpolated.
  • The sensor data 900, each weighted with the weighting values 340 assigned to the measurement data 310 or 310', are then collected in a collection unit 920 and sorted according to their reference positions and reconstruction tasks; the collected and sorted weighted measurement data are subsequently added group-wise in an addition unit 925, and the result obtained is output as weighted image data 350 of the respective reference positions.
  • the lower part of FIG. 9 shows a very advantageous implementation of the determination of the image data 350.
  • The output buffer 930 has, for example, a height corresponding to the number of light sensors 510 contained in the neighborhood. Each of the ambient light sensors 510 acts (differently weighted) on many reference positions 500. If all weighted values are available for a reference position 500 (which is shown as a column in FIG. 9), the result is output along the column. The column can then be used for a new reference value (circular buffer indexing). This has the advantage that each measured value is processed only once, but acts on many different output pixels (as reference positions 500), which is represented by the different columns. As a result, the imaginary logic from FIGS. 4 to 7 is "turned over" and hardware resources are saved.
  • The height of the memory depends on the number of surrounding pixels and should provide one line for each surrounding pixel; the width of the memory depends on the number of reference positions that can be influenced by each measured value. A minimal sketch of this accumulation scheme follows below.
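Below is a minimal sketch of this "turned over" accumulation, simplified to one dimension: each sample is read once and scattered, with its weights, onto all reference positions it influences, while a small circular buffer holds the partial sums until a position is complete. The buffer sizing, the influence model and all names are assumptions, not the patent's hardware design.

```python
# Sketch (1-D simplification) of the output buffer from Fig. 9: each
# measured value is processed once but contributes, differently weighted,
# to many reference positions; completed positions free their buffer
# column for reuse (circular indexing).
import numpy as np

def scatter_accumulate(samples, influence, num_refs, window):
    """samples:  iterable of (sensor_index, value) in read-out order.
    influence:  sensor_index -> list of (ref_index, weight) pairs, with
                ref_index restricted to [sensor_index - window + 1,
                sensor_index].
    window:     circular buffer size (how many reference positions a
                sample can still influence)."""
    buf = np.zeros(window)        # circular buffer of partial sums
    out = np.zeros(num_refs)
    flushed = 0                   # next reference position to emit
    for idx, value in samples:
        # emit positions no current or later sample can touch, freeing
        # their buffer columns before they are reused
        while flushed < num_refs and flushed <= idx - window:
            out[flushed] = buf[flushed % window]
            buf[flushed % window] = 0.0
            flushed += 1
        for ref, w in influence(idx):
            buf[ref % window] += w * value   # weight once, scatter to many
    while flushed < num_refs:                # drain the remaining columns
        out[flushed] = buf[flushed % window]
        buf[flushed % window] = 0.0
        flushed += 1
    return out

# Toy run: 4 sensors, each influencing its own and the previous position.
samples = list(enumerate([1.0, 2.0, 3.0, 4.0]))
infl = lambda i: [(r, 0.5) for r in (i - 1, i) if 0 <= r < 4]
print(scatter_accumulate(samples, infl, num_refs=4, window=2))  # [1.5 2.5 3.5 2.]
```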
  • FIG. 10 shows a flow chart of an exemplary embodiment of the approach presented here as method 1000 for processing measurement data of an image sensor.
  • The method 1000 comprises a step 1010 of reading in measurement data that were recorded by light sensors (ambient light sensors) in the vicinity of a reference position on the image sensor, the light sensors being arranged around the reference position on the image sensor. Weighting values are also read in, which are each assigned to the measurement data of the light sensors in the vicinity of a reference position, the weighting values for light sensors arranged in an edge region of the image sensor being different from weighting values for a light sensor arranged in a central region of the image sensor, and/or the weighting values depending on a position of the light sensors on the image sensor.
  • Furthermore, the method 1000 comprises a step 1020 of linking the measurement data of the light sensors with the assigned weighting values in order to obtain image data for the reference position; a compact sketch of both steps follows below.
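For orientation, here is a compact sketch mapping the two method steps onto code; the step numbers follow Fig. 10, while the function name, the weight lookup and the neighbourhood size are illustrative assumptions.

```python
# Sketch of method 1000: step 1010 reads in the neighbourhood measurement
# data and the position-dependent weighting values, step 1020 links them.
import numpy as np

def method_1000(measurements: np.ndarray, weight_lookup,
                ref_row: int, ref_col: int, radius: int = 1) -> float:
    # step 1010: read in measurement data of the ambient light sensors
    patch = measurements[ref_row - radius: ref_row + radius + 1,
                         ref_col - radius: ref_col + radius + 1]
    # step 1010: read in the weighting values assigned to this reference
    # position (edge positions get different weights than central ones)
    weights = weight_lookup(ref_row, ref_col)
    # step 1020: link the weighted measurement data to image data
    return float(np.sum(patch * weights))

# usage with a trivial lookup that ignores the position (assumption):
img = np.ones((5, 5))
print(method_1000(img, lambda r, c: np.full((3, 3), 1 / 9), 2, 2))
```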
  • FIG. 11 shows a flow chart of an exemplary embodiment of the approach presented here as method 1100 for generating a weight value matrix for weighting measurement data of an image sensor.
  • the method 1100 comprises a step 1110 of reading in reference image data for reference positions of a reference image and training measurement data of a training image and an initial weight value matrix.
  • The method 1100 further comprises a step 1120 of training the weight values contained in the initial weight value matrix using the reference image data and the training measurement data in order to obtain the weight value matrix, wherein a link formed from the training measurement data of the light sensors weighted with a weight value is compared with the reference measurement data for the corresponding reference position, light sensors being used which are arranged around the reference position on the image sensor.
  • In this way, a weight value matrix can be obtained which makes corresponding different weighting values available for light sensors at different positions on the image sensor, in order to enable the best possible correction of distortions or imaging errors in the measurement data of the image sensor, as can be implemented by the approach to processing measurement data of an image sensor described above; a sketch of such a training step follows below.
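As one possible reading of step 1120, the training can be posed as a least-squares fit of the weights so that the weighted training measurement data reproduce the reference image data; the patent leaves the optimisation method open, so the solver choice and all names here are assumptions.

```python
# Sketch: fit one weight per neighbourhood sensor so that the weighted
# training measurement data around each reference position match the
# reference image data (ordinary least squares as one possible trainer).
import numpy as np

def train_weights(training_patches: np.ndarray,
                  reference_values: np.ndarray) -> np.ndarray:
    """training_patches: (num_positions, neighbourhood_size) raw values
    around each sampled reference position of the training image.
    reference_values: (num_positions,) target data of the reference image.
    Returns a flattened weight kernel."""
    w, *_ = np.linalg.lstsq(training_patches, reference_values, rcond=None)
    return w

# Toy check: with consistent data the fit recovers the generating weights.
rng = np.random.default_rng(0)
A = rng.random((200, 9))               # 200 sampled 3 x 3 neighbourhoods
true_w = rng.random(9)
b = A @ true_w                          # ideal reference image data
print(np.allclose(train_weights(A, b), true_w))   # True
```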
  • FIG. 12 shows a schematic illustration of an image sensor 115 with light sensors 400 arranged on the image sensor 115, for use in a method for generating a weight value matrix for weighting measurement data of an image sensor according to an exemplary embodiment. In order to obtain a weighting value matrix 800 (either with the weighting reference values 810 or with direct weighting values 340, each assigned to a single one of the light sensors 400), as shown, for example, in FIG. 8, a reference image 1210 (towards which the weighting value matrix 800 should map) and a training image 1220 (which represents the initial measurement data 310 of the image sensor 115 without the use of weighting values) can now be used.
  • the partial training images 1220 should be mapped to the partial reference images 1210 on the image sensor 115.
  • Such a procedure can also be used, for example, to determine, by means of an interpolation, weighting values which are assigned to light sensors 400 of the image sensor 115 that lie in a region of the image sensor 115 not covered by a partial reference image 1210 or a partial training image 1220.
  • the approach presented here describes a method and its possible implementation in hardware.
  • The method is used for the combined correction of several classes of image errors that arise along the physical image-formation chain (optics and imager, atmosphere, windshield, motion blur).
  • In particular, it is intended for the correction of wavelength-dependent errors that occur when the light signal is sampled by the image sensor, a correction known as "demosaicing".
  • In addition, errors caused by the optics are corrected. This applies to errors due to manufacturing tolerances as well as to, for example, thermal or air-pressure-related changes in imaging behavior. Thus, for example, the red-blue error is usually corrected differently in the image center than at the edge of the image, and differently at high temperatures than at low temperatures. The same applies to an attenuation of the image signal at the edge (see shading).
  • The hardware block "grid-based demosaicing" in the form of the processing unit, which is proposed as an example for the correction, can correct all these errors simultaneously and, with a suitable light sensor structure, also preserve the geometric resolution and contrast better than conventional methods.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Artificial Intelligence (AREA)
  • Image Input (AREA)
  • Image Analysis (AREA)
  • Length Measuring Devices By Optical Means (AREA)
  • Facsimile Scanning Arrangements (AREA)

Abstract

The invention relates to a method (1000) for processing measurement data (310, 310') of an image sensor (115). The method (1000) comprises a step (1010) of reading in measurement data (310, 310') recorded by light sensors (510) in the vicinity of a reference position (500) on the image sensor (115), the light sensors (510) being arranged around the reference position (500) on the image sensor (115). Weighting values (340), each associated with the measurement data (310, 310') of the light sensors (510) in the vicinity of a reference position (500), are also read in. The weighting values (340) for light sensors (510) arranged in an edge region (125) of the image sensor (115) differ from the weighting values (340) for a light sensor (510) arranged in a central region (120) of the image sensor (115), and/or the weighting values (340) depend on a position of the light sensors (510) on the image sensor (115). The method (1000) further comprises a step (1020) of linking the measurement data (310, 310') of the light sensors (510) with the associated weighting values (340) in order to obtain image data (350) for the reference position (500).
EP19829496.9A 2018-12-25 2019-12-17 Processing system and method for processing the measured data of an image sensor Pending EP3903478A2 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
DE102018222903.1A DE102018222903A1 (de) 2018-12-25 2018-12-25 Method and processing device for processing measurement data of an image sensor
PCT/EP2019/085555 WO2020136037A2 (fr) 2018-12-25 2019-12-17 Processing system and method for processing the measured data of an image sensor

Publications (1)

Publication Number Publication Date
EP3903478A2 true EP3903478A2 (fr) 2021-11-03

Family

ID=69063749

Family Applications (1)

Application Number Title Priority Date Filing Date
EP19829496.9A 2018-12-25 2019-12-17 Processing system and method for processing the measured data of an image sensor

Country Status (5)

Country Link
US (1) US20220046157A1 (fr)
EP (1) EP3903478A2 (fr)
CN (1) CN113475058A (fr)
DE (1) DE102018222903A1 (fr)
WO (1) WO2020136037A2 (fr)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114281290B (zh) 2021-12-27 2024-04-12 Method, apparatus and electronic device for positioning a sensor under a display screen
AT525579B1 (de) * 2022-03-09 2023-05-15 Vexcel Imaging Gmbh Method and camera for correcting a geometric imaging error in an image recording

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7920200B2 (en) * 2005-06-07 2011-04-05 Olympus Corporation Image pickup device with two cylindrical lenses
JP2008135908A (ja) * 2006-11-28 2008-06-12 Matsushita Electric Ind Co Ltd Phase adjustment device, digital camera and phase adjustment method
JP5147994B2 (ja) * 2009-12-17 2013-02-20 Canon Inc. Image processing device and imaging device using the same
DE102015217253A1 (de) * 2015-09-10 2017-03-16 Robert Bosch Gmbh Environment detection device for a vehicle and method for capturing an image by means of an environment detection device
DE102016212771A1 (de) * 2016-07-13 2018-01-18 Robert Bosch Gmbh Method and device for sampling a light sensor

Also Published As

Publication number Publication date
CN113475058A (zh) 2021-10-01
DE102018222903A1 (de) 2020-06-25
WO2020136037A3 (fr) 2020-08-20
WO2020136037A2 (fr) 2020-07-02
US20220046157A1 (en) 2022-02-10

Similar Documents

Publication Publication Date Title
EP1979769B1 (fr) Image capture system and method for producing at least one image capture system
EP3186952B1 (fr) Device and method for capturing images
DE202018101096U1 (de) Near-eye display with super-resolution through sparse sampling
DE102008034979A1 (de) Method and device for generating error-reduced high-resolution and contrast-enhanced images
DE102015217253A1 (de) Environment detection device for a vehicle and method for capturing an image by means of an environment detection device
DE102012221667A1 (de) Device and method for processing remote sensing data
WO2016041944A1 (fr) Method for producing a resulting image and optical device
EP3903478A2 (fr) Processing system and method for processing the measured data of an image sensor
DE102021100444A1 (de) Microscopy system and method for evaluating image processing results
DE102019008472B4 (de) Multi-lens camera system and method for the hyperspectral capture of images
EP2887010B1 (fr) Method and device for three-dimensional optical measurement of objects with a topometric measuring method, and corresponding computer program
DE102019107835A1 (de) Image combining device, method for combining images, and program
DE102019133515B3 (de) Method and device for determining the parallax of images captured by a multi-lens camera system
DE102013209109A1 (de) Device and method for parameterizing a plant
DE102016212771A1 (de) Method and device for sampling a light sensor
WO2019197230A1 (fr) Correction method and correction device for image data
DE102018106181A1 (de) Image recording device and method for recording an image of a document, and use
DE102009047437A1 (de) Method and device for adapting image information of an optical system
DE102012003127A1 (de) Method for an autofocus device
DE102019101324B4 (de) Multi-lens camera system and method for the hyperspectral capture of images
DE102019133516B4 (de) Method and device for determining wavelength deviations of images captured by a multi-lens camera system
DE102017106217B4 (de) Method and device for optoelectronic distance measurement
DE102020215413A1 (de) Method and device for generating an extended camera image using an image sensor, and image processing device
WO2022063806A1 (fr) Method for creating an image recording
DE102022204882A1 (de) Method and device for providing operating data for operating at least one assistance device of a vehicle

Legal Events

Date Code Title Description
STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: UNKNOWN

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE INTERNATIONAL PUBLICATION HAS BEEN MADE

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE INTERNATIONAL PUBLICATION HAS BEEN MADE

PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: REQUEST FOR EXAMINATION WAS MADE

17P Request for examination filed

Effective date: 20210726

AK Designated contracting states

Kind code of ref document: A2

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

DAV Request for validation of the european patent (deleted)
DAX Request for extension of the european patent (deleted)