WO2017071996A1 - Method for adjusting a camera parameter and/or an image, computer program product, camera system, driver assistance system and motor vehicle - Google Patents

Method for adjusting a camera parameter and/or an image, computer program product, camera system, driver assistance system and motor vehicle

Info

Publication number
WO2017071996A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
camera
motor vehicle
partial area
area
Prior art date
Application number
PCT/EP2016/075011
Other languages
English (en)
French (fr)
Inventor
Patrick Eoghan Denny
Aidan Casey
Brian Michael Thomas DEEGAN
Ciáran HUGHES
Jonathan Horgan
Original Assignee
Connaught Electronics Ltd.
Priority date
Filing date
Publication date
Application filed by Connaught Electronics Ltd. filed Critical Connaught Electronics Ltd.
Publication of WO2017071996A1 publication Critical patent/WO2017071996A1/en

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/70: Circuitry for compensating brightness variation in the scene
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60: Control of cameras or camera modules
    • H04N23/69: Control of means for changing angle of the field of view, e.g. optical zoom objectives or electronic zooming

Definitions

  • the invention relates to a method for adjusting at least one camera parameter of a camera of a motor vehicle and/or an image captured by the camera.
  • An environmental region of the motor vehicle is captured by the camera at least partially.
  • the image is recorded of the captured environmental region.
  • the invention also relates to a computer program product, a camera system for a motor vehicle, a driver assistance system for a motor vehicle and a motor vehicle with a driver assistance system.
  • Methods for adjusting at least one camera parameter of a camera of a motor vehicle and/or an image captured by the camera are known from the prior art.
  • an area is usually defined in the image, in which properties of the image are determined, on the basis of which in turn the camera parameter can be adjusted.
  • An example thereof is the automatic exposure control of known cameras.
  • the image can also be locally adjusted in the defined area. Thus, in the defined area for example an increase of contrast can be effected.
  • At least one camera parameter of a camera of a motor vehicle and/or an image captured by the camera are adjusted.
  • An environmental region of the motor vehicle is captured by the camera at least partially.
  • the image is recorded of the captured environmental region.
  • a partial area of the image is determined, which is displayed on a display unit of the motor vehicle and/or provided for further processing by means of a method of machine vision.
  • a selection area positioned completely within the partial area is determined.
  • the at least one camera parameter and/or the image are adjusted in dependence on a property of the image in the selection area.
  • the partial area can, for example, be defined such that subsequent to further possible processing steps this area of the image is outputted on a display unit of the motor vehicle.
  • the partial area can also be defined such that this area is utilized, for example, for the further processing according to a method of machine vision.
  • the information relevant for the display and/or the further processing is present only in the partial area. It is thus sufficient if the adjustment of the camera parameter and/or the image takes into account only the area of the image within the partial area. This can, for example, result in an underexposed display of areas outside the partial area. However, in that case these underexposed areas outside the partial area are not relevant, since preferably they are utilized neither for being displayed on the display unit nor for further processing by means of the method of machine vision.
  • the selection area is then determined.
  • a focus of interest can then in turn be directed to a local area within the partial area.
  • a brightness of the image can, for example, be determined so that the camera parameter which influences the brightness of an image captured by the camera can be adjusted.
  • the image can also be adjusted on the basis of the property of the image in the selection area.
  • a brightness of the image can be increased so that the partial area is suitable for being displayed on the display unit and/or for further processing according to the method of machine vision. This can be performed even if in consequence thereof an area of the image outside the partial area is rendered too dark for being displayed and/or for further processing.
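As an illustration of this idea (the patent does not prescribe an algorithm; the function names, the target value of 128 and the sample image are assumptions), brightness could be measured only inside the selection area and turned into a multiplicative exposure correction:

```python
def mean_brightness(image, selection):
    """Average grey value inside the selection area only.

    image: 2D list of grey values (0..255); selection: (x0, y0, x1, y1),
    with x1 and y1 exclusive.
    """
    x0, y0, x1, y1 = selection
    values = [image[y][x] for y in range(y0, y1) for x in range(x0, x1)]
    return sum(values) / len(values)

def exposure_gain(measured, target=128.0):
    """Multiplicative correction that would bring the selection area to the
    target brightness; pixels outside the selection area are ignored."""
    return target / max(measured, 1.0)

# 4x4 test image: bright sky in the top half, dark road in the bottom half
img = [[200, 200, 200, 200],
       [200, 200, 200, 200],
       [40, 40, 40, 40],
       [40, 40, 40, 40]]

# Selection area covering only the dark bottom half: the gain of 3.2 would
# brighten the relevant region, even if the sky then overexposes
gain = exposure_gain(mean_brightness(img, (0, 2, 4, 4)))
```

Measuring over the whole image would instead yield a mean of 120 and almost no correction, leaving the relevant road region too dark.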
  • the partial area is continually adjusted image by image, and thus for each image the camera parameter and/or the image can be adjusted with regard to the relevant area, i.e. the partial area.
  • the selection area is in particular determined each time completely within the partial area. If the partial area is not known, or if the selection area is determined independently of the partial area or the viewport, as in the prior art, it cannot be ensured that the image can be displayed on the display unit and/or provided for further processing according to the method of machine vision entirely in better quality.
  • a size and/or a position of the selection area within the partial area are determined in dependence on a driving direction of the motor vehicle.
  • the selection area can be placed within the partial area where there might be obstacles for the motor vehicle. In this context, for example, objects in the
  • a speed of the motor vehicle can also, for example, be determined.
  • a driving trajectory of the motor vehicle can then be predicted, for example in connection with other sensor data of the motor vehicle, e.g. of a steering angle sensor.
  • the size of the selection area can also be adjusted.
  • the camera parameter and/or the image can be adjusted particularly precisely for the partial area, in particular the selection area.
  • a driver assistance system of the motor vehicle can be operated more precisely and the safety of the motor vehicle is improved.
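One possible heuristic for this placement (purely illustrative; the patent fixes no numbers, and all names and constants here are assumptions) shifts the selection area toward the steering direction and shrinks it with speed, since a faster vehicle effectively looks at a smaller, more distant image region:

```python
def place_selection_area(partial, steering_deg, speed_mps):
    """Heuristic placement of the selection area inside the partial area.

    partial: (x0, y0, x1, y1). The area shifts horizontally toward the
    steering direction and shrinks with speed. Illustrative numbers only.
    """
    x0, y0, x1, y1 = partial
    w, h = x1 - x0, y1 - y0
    # Shrink between 100% (standstill) and 50% (at 30 m/s or more)
    scale = max(0.5, 1.0 - speed_mps / 60.0)
    sw, sh = int(w * scale), int(h * scale)
    # Shift the centre horizontally with the steering angle (+-45 deg full lock)
    cx = x0 + w // 2 + int((steering_deg / 45.0) * (w - sw) // 2)
    cy = y0 + h // 2
    # Clamp so the selection area stays completely within the partial area
    sx0 = min(max(cx - sw // 2, x0), x1 - sw)
    sy0 = min(max(cy - sh // 2, y0), y1 - sh)
    return (sx0, sy0, sx0 + sw, sy0 + sh)
```

At standstill with the wheel straight, the selection area coincides with the partial area, which the description explicitly allows.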
  • the partial area is determined such that only the environmental region is shown in the partial area.
  • the camera comprises a so-called fisheye objective lens and the image is captured by the fisheye objective lens.
  • an azimuthal angle of, for example, 180° or more is captured by the fisheye objective lens.
  • the partial area is preferably positioned only within the environmental region depicted in the image. This is advantageous in that thus irrelevant areas of the image such as the own number plate of the motor vehicle do not negatively influence the quality of the image in the processing or adjustment of the image or of the camera parameter.
  • a further image is captured by the camera and a further partial area is determined for the further image, wherein in dependence on the further partial area a further selection area is determined, preferably positioned completely within the further partial area.
  • the camera parameter and/or the further image are adjusted each in dependence on a property of the further image, in particular in dependence on a property featured only in the selection area.
  • the adjustment or determination of the further partial area and/or of the further selection area are preferably carried out in real time or else in the time span remaining between single consecutive camera recordings.
  • the real time frame is predefined, for example, by the recording frequency or frame rate of the camera, which can be 30 Hz or 60 Hz, or in particular 29.97 Hz.
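The remaining time span follows directly from the frame rate (a simple consequence of the numbers above, not an algorithm from the patent):

```python
def frame_budget_ms(frame_rate_hz):
    """Time remaining between two consecutive camera recordings, in ms.

    Determining the partial and selection areas must fit into this budget
    for the adjustment to run in real time."""
    return 1000.0 / frame_rate_hz

# Roughly 33.3 ms remain per frame at 30 Hz, 33.4 ms at the NTSC rate of
# 29.97 Hz, and only 16.7 ms at 60 Hz.
```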
  • the partial area and/or the selection area are dynamically adjusted to the current situation.
  • the quality of the image can be maintained in continual compliance with qualitative requirements, e.g. the requirement that there should be no underexposure or overexposure.
  • the driver assistance system can be operated more precisely and the safety of the motor vehicle can be increased.
  • the image is adjusted only within the selection area.
  • the adjustment of the image can be limited solely to the selection area. This means, for example, that preferably the image is not adjusted outside the selection area. This has the advantage that thus accelerated processing is enabled in the adjustment of the image.
  • this can also mean that for the adjustment of the camera parameter only the selection area is taken into consideration and not the image within the partial area, but outside the selection area.
  • the selection area can also coincide with the partial area.
  • an output image is generated, in particular through rectification of the image within the partial area, and that the output image is outputted on a display unit of the motor vehicle.
  • the image is thus, for example, a raw image.
  • the partial area can then, for example, be determined in the image.
  • the partial area can feature a distortion resultant, for example, from the above-mentioned fisheye objective lens. Therefore, the partial area of the image can then, for example, be rectified and provided as an output image for the output on the display unit.
  • the term rectification describes, for example, the elimination of geometrical distortions in image data. This can be effected, for example, by means of a transformation of the partial area.
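The patent leaves the concrete transformation open; as a sketch, a one-coefficient radial model (the coefficient, the normalised coordinates and the function name are assumptions) maps a distorted fisheye point to rectified coordinates:

```python
import math

def undistort_point(xd, yd, k1=0.2):
    """Map a distorted (fisheye) image point to rectified coordinates.

    Simple one-coefficient radial model r_u = r_d * (1 + k1 * r_d**2),
    with coordinates normalised so that the image centre is (0, 0).
    """
    r_d = math.hypot(xd, yd)
    if r_d == 0.0:
        return (0.0, 0.0)
    scale = 1.0 + k1 * r_d ** 2
    return (xd * scale, yd * scale)

# The image centre stays fixed; points near the border move outwards,
# stretching the compressed edges of the fisheye image.
```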
  • the output of the partial area as the output image can be less distorted and consequently more illustrative.
  • as a camera parameter of the camera, an exposure time and/or a gain setting and/or a white balance setting and/or a gamma value are adjusted.
  • Exposure time refers to the time span in which a photosensitive medium, e.g. a CMOS sensor or a CCD sensor, is exposed to the light for image capturing.
  • an f-number and/or a light sensitivity of the camera can be adjusted.
  • the light sensitivity is for example adjusted via the gain setting of the camera.
  • via the gain setting it is predefined to what extent the signal resulting from the incident amount of light is amplified.
  • the white balance of the white balance setting serves to sensitize the camera to the colour temperature of the light at the recording location, that is, to the colour temperature of the environmental region of the motor vehicle, for example.
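One classical way to derive such a sensitisation is the grey-world assumption, sketched here (the patent does not name a specific white balance algorithm; names and values are illustrative):

```python
def grey_world_gains(mean_r, mean_g, mean_b):
    """Per-channel gains under the grey-world assumption: the average colour
    of the evaluated region (e.g. the selection area) should be neutral grey."""
    grey = (mean_r + mean_g + mean_b) / 3.0
    return (grey / mean_r, grey / mean_g, grey / mean_b)

# A warm (reddish) scene: the red channel is attenuated, blue is amplified
gains = grey_world_gains(120.0, 100.0, 80.0)
```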
  • the gamma value or gamma correction is a correction function for transforming a proportionally increasing physical quantity into a quantity that, in accordance with human perception, does not grow on a linear scale. In mathematical terms this function is a power function whose only parameter is an exponent, referred to for short as the gamma value.
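In code, the power function reads as follows (the encoding convention with exponent 1/gamma and the value 2.2 are common choices rather than something the patent specifies):

```python
def gamma_correct(value, gamma=2.2):
    """Power function with the gamma value as its only parameter; maps a
    normalised linear intensity in [0, 1] to a perceptually scaled one."""
    return value ** (1.0 / gamma)

# Mid-grey is lifted: linear 0.5 encodes to roughly 0.73 with gamma 2.2,
# while 0 and 1 stay fixed.
```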
  • the further image can be captured such that at least the further selection area and/or the further partial area can be provided with the desired image properties. This is possible because the scene or the captured part of the environmental region do not usually change so abruptly that a completely different scene is captured by the image and the further image captured subsequently thereto.
  • the camera can also be a time of flight camera or a hyperspectral camera or a plenoptic camera.
  • the camera can also have a plurality of sensors.
  • by means of the hyperspectral camera, more than just the three usual frequency bands red, green and blue can be captured.
  • electromagnetic radiation from the infrared wavelength range, in particular with a plurality of frequency bands, can be captured. It is advantageous that colours in the environmental region can be captured by the hyperspectral camera in a more precise and differentiated manner, in particular if the camera is directed towards the sun.
  • the system may be more sensitive to particular colours depending on whether the driving direction is towards or away from sunlight.
  • the partial area may be moved away from a region of the image containing the sun. This can be determined uniquely from a combination of the driving direction of the motor vehicle, the current position of the motor vehicle, e.g. by global navigation satellite system (GNSS) of the motor vehicle, and the current time for example.
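A strongly simplified sketch of that decision (a real system would compute the solar azimuth from the GNSS position and the current time with a full solar-position model; the field of view, shift distance and function names here are assumptions):

```python
def sun_in_view(heading_deg, solar_azimuth_deg, half_fov_deg=60.0):
    """True if the sun's azimuth falls inside the camera's horizontal field
    of view, taking the driving direction as the camera heading."""
    diff = abs((solar_azimuth_deg - heading_deg + 180.0) % 360.0 - 180.0)
    return diff <= half_fov_deg

def shift_partial_area(partial, heading_deg, solar_azimuth_deg, shift_px=40):
    """Move the partial area horizontally away from the sun if necessary."""
    x0, y0, x1, y1 = partial
    if not sun_in_view(heading_deg, solar_azimuth_deg):
        return partial
    # Sun to the right of the heading -> shift the area to the left
    right = ((solar_azimuth_deg - heading_deg + 180.0) % 360.0 - 180.0) > 0
    dx = -shift_px if right else shift_px
    return (x0 + dx, y0, x1 + dx, y1)
```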
  • the image is adjusted, in particular only, in the selection area by brightness setting and/or colour setting and/or contrast setting and/or a noise reduction method.
  • the image can thus, for example, be adjusted with regard to its brightness by brightness setting.
  • the selection area can thus for example be adjusted to be brighter or darker than before.
  • tone mapping or dynamic compression refers to the compression of the dynamic range of high dynamic range images, i.e. of images with a high brightness range.
  • by tone mapping, the contrast range of a high dynamic range image is reduced in order to be able to display it on conventional display devices. For example, as a result of the adjustment of the image within the selection area, tone mapping can be utilized more simply and effectively.
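The patent names tone mapping only in general; the widely used Reinhard global operator is one concrete instance of such dynamic compression:

```python
def tone_map(luminance):
    """Reinhard-style global operator L / (1 + L): compresses an unbounded
    high dynamic range of luminance values into [0, 1) for display."""
    return luminance / (1.0 + luminance)

# A value 100 times brighter than the reference still maps below 1.0, so
# bright headlights no longer clip on a conventional display.
```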
  • the contrast of the image, in particular in the selection area can be adjusted by contrast setting.
  • a smoothing filter such as a Gaussian filter can be applied in order to suppress image noise, in particular in the selection area, by means of the noise reduction method.
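A minimal Gaussian smoothing of an image row might look like this (kernel size, sigma and the edge-replication policy are illustrative choices, not taken from the patent):

```python
import math

def gaussian_kernel(sigma=1.0, radius=2):
    """Discrete, normalised 1D Gaussian kernel."""
    raw = [math.exp(-(i * i) / (2.0 * sigma * sigma))
           for i in range(-radius, radius + 1)]
    s = sum(raw)
    return [v / s for v in raw]

def smooth(signal, kernel):
    """Convolve with edge replication; suppresses single-pixel noise spikes."""
    r = len(kernel) // 2
    out = []
    for i in range(len(signal)):
        acc = 0.0
        for j, w in enumerate(kernel):
            idx = min(max(i + j - r, 0), len(signal) - 1)
            acc += w * signal[idx]
        out.append(acc)
    return out
```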
  • noise can also, for the time being, be deliberately kept in the image, in particular in the selection area. For several machine vision applications it is advantageous that noise is not immediately removed from the image, as methods based on noise statistics, which are for instance used by a "back-end" machine vision algorithm, may remove noise in a qualitatively better way than fast "front-end" algorithms which are for instance implemented in the camera.
  • edges can, for example, be drawn more sharply in the image, in particular in the selection area, e.g. by means of so-called image sharpness methods. It is thus advantageous that within the partial area the image can be variously adjusted to enable an output of the partial area on the display unit in better quality and/or to apply a method of machine vision more precisely and correctly to the partial area and/or the selection area.
  • the selection area is divided into a plurality of selection subareas and the camera parameter and/or the image are adjusted in dependence on the plurality of selection subareas.
  • the selection subareas can, for example, have the same size.
  • the selection subareas are in particular arranged in the manner of a matrix. By means of the selection subareas the selection area can be adjusted more precisely and variously.
  • the selection area can, for example, comprise only one of the selection subareas or else also a plurality of selection subareas. The larger the number of selection subareas utilized, the higher the precision with which the adjustment of the camera parameter and/or the captured image can be performed.
  • the selection subareas can also, for example, have different sizes and/or be weighted differently when evaluated.
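A sketch of dividing the selection area into a matrix of subareas and evaluating them with different weights (the 4x4 layout, the names and the weights are examples, not fixed by the patent):

```python
def split_into_subareas(area, rows=4, cols=4):
    """Divide the selection area into a rows x cols matrix of subareas."""
    x0, y0, x1, y1 = area
    w, h = (x1 - x0) // cols, (y1 - y0) // rows
    return [(x0 + c * w, y0 + r * h, x0 + (c + 1) * w, y0 + (r + 1) * h)
            for r in range(rows) for c in range(cols)]

def weighted_brightness(brightnesses, weights):
    """Weighted mean of per-subarea brightness values; subareas with a
    higher weight dominate the resulting adjustment."""
    total = sum(weights)
    return sum(b * w for b, w in zip(brightnesses, weights)) / total
```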
  • weightings for a plurality of camera parameters are determined and the plurality of camera parameters are adjusted in dependence on the determined weightings.
  • This can be utilized, for example, if a total image is provided by a plurality of cameras. The total image is, for example, joined together from a plurality of images in order to generate a top view image. In this context, it is usually of importance that the top view image is homogeneous.
  • different weightings can be accorded to different selection areas.
  • the selection areas can, for example, be determined with regard to their size and position such that in the generation of the top view image they fit together and thus, subsequent to the merging, a homogeneous total image or top view image is generated.
  • the invention relates to a computer program product for performing a method according to the invention when the computer program product is executed on a programmable computer device.
  • the invention relates to a camera system for a motor vehicle with a camera and an evaluation unit.
  • the camera system is configured to perform a method according to the invention.
  • the camera system can also comprise a plurality of cameras and/or evaluation units.
  • the invention also relates to a driver assistance system for a motor vehicle with a camera system according to the invention.
  • the invention moreover relates to a motor vehicle with a driver assistance system according to the invention.
  • Fig. 1 a schematic top view of an embodiment of a motor vehicle according to the invention with a driver assistance system
  • Fig. 2 a schematic view of an image with a partial area, captured by a camera of a camera system of the driver assistance system;
  • Fig. 3 a schematic view of an output image generated on the basis of the partial area
  • Fig. 4 a schematic view of the image with the partial area and a selection area having a plurality of selection subareas, which is positioned completely within the partial area; and
  • Fig. 5 a schematic view of the image with the partial area, which is displayed on a display unit of the motor vehicle and provided for further processing by means of a method of machine vision.
  • Fig. 1 shows schematically a motor vehicle 1 with a driver assistance system 2.
  • the driver assistance system 2 comprises a camera system 3.
  • the camera system 3 further comprises a camera 4 and an evaluation unit 5.
  • the evaluation unit 5 can, for example, be integrated into the camera 4 or else formed separately from the camera 4.
  • the camera 4 is arranged at a front 6 of the motor vehicle 1.
  • the camera 4 can be variously arranged, preferably, however, such that an environmental region 7 of the motor vehicle 1 can be captured at least partially.
  • the camera system 3 can also comprise a plurality of cameras 4. Thus, a plurality of cameras 4 can also be arranged at the motor vehicle 1.
  • the camera 4 can be a CMOS camera (complementary metal-oxide-semiconductor) or a CCD camera (charge-coupled device) or any other image capturing device, for example a time of flight camera, a hyperspectral camera, a plenoptic camera or in particular an infrared camera.
  • the camera 4 provides an image sequence of images of the environmental region 7.
  • the image sequence of the images is then, for example, processed in real time by the evaluation unit 5.
  • the driver assistance system 2 can, for example, comprise a top view image system and/or an object detection system and/or an obstacle warning system. Moreover, the driver assistance system 2 can also be formed as a camera monitoring system (CMS). In the camera monitoring system the camera 4 can, for example, additionally be arranged in a left side mirror 8 of the motor vehicle 1 and/or in a right side mirror 9 of the motor vehicle 1 and/or at a rear end 10 of the motor vehicle 1.
  • Fig. 2 shows an image 11.
  • the image 11 is captured by the camera 4.
  • the image 11 is a so-called raw image and shows a representation of the environmental region 7 and of the motor vehicle 1.
  • the image 11 was captured through a fisheye objective lens of the camera 4.
  • the environmental region 7 is represented in the image 11 with a horizontal angle or an azimuthal angle of more than 180°, for example.
  • the motor vehicle 1 itself is shown in part.
  • a partial area 12 is determined.
  • the partial area 12 is defined such that it is displayed on a display unit 13 - as shown in Fig. 1 - and/or provided for further processing by means of a method of machine vision.
  • the partial area 12 can also be described as a field of vision.
  • machine vision usually describes the computer-aided solution of tasks capable of being solved by the human visual system. This includes, for example, the detection of objects in the environmental region 7.
  • the further step can be the display of the partial area 12 on the display unit 13 or, additionally or alternatively, the further processing of the partial area 12 by means of a method of machine vision.
  • the partial area 12 is arranged such that within the partial area 12 only the environmental region 7 is represented. It is thus in particular not intended that parts of the motor vehicle 1, which are shown in the image 11, are arranged within the partial area 12.
  • Fig. 3 shows an output image 14.
  • the output image 14 is thus outputted on the display unit 13.
  • the output image 14 is generated on the basis of the partial area 12.
  • the partial area 12 of the image 11 is rectified or corrected through transformation.
  • the rectification can be necessary in order to reduce distortions which can, for example, result from a fisheye objective lens of the camera 4 in the partial area 12.
  • Fig. 4 shows the image 1 1 with the partial area 12.
  • a selection area 15 is arranged completely within the partial area 12.
  • a size and/or a position of the selection area 15 within the partial area 12 can, for example, depend on a driving direction of the motor vehicle 1 .
  • the selection area 15 is divided into selection subareas 16.
  • the selection subareas 16 can, for example, be arranged within the selection area 15 in various numbers and sizes.
  • the selection subareas 16 can be arranged within the selection area 15 in the manner of a matrix, for example.
  • the selection subareas 16 can, for example, be arranged in a four-by-four matrix.
  • the embodiment of the method according to the invention is executed as follows.
  • the image 1 1 is captured by the camera 4.
  • the partial area 12 is determined.
  • the partial area 12 is determined such that it is either displayed on the display unit 13 and thus, for example, converted into the output image 14, or else utilized for further processing by means of a method of machine vision.
  • the selection area 15 is determined completely within the partial area 12.
  • the selection area 15 cannot be determined unless the partial area 12 has already been determined and is thus known.
  • a property of the image 11 in the selection area 15 is determined.
  • the property of the image 11 can, for example, be a brightness or brightness distribution of the image 11 within the partial area 12.
  • a camera parameter of the camera 4 is adjusted and/or the image 11 itself is adjusted.
  • the camera parameter can, for example, be an exposure time and/or a gain setting and/or a white balance setting and/or a gamma value. If the camera parameter is adjusted in dependence on the property of the image 11, the adjusted camera parameter is then utilized for the capturing of a further image by means of the camera 4. With regard to the camera parameter, the further image is thus configured for high-quality capturing of a further partial area of the environmental region 7 previously represented by the partial area 12. Consequently, with regard to a further partial area and thus also with regard to a further selection area, the further image can be captured more precisely and in better quality than would be the case if the camera parameter had not been adjusted in dependence on the property of the image 11 in the selection area 15.
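This feedback from one image to the next can be sketched as a clamped proportional controller (the target brightness, the clamp factor and the function name are assumptions, not from the patent):

```python
def next_exposure(current_ms, measured, target=128.0, max_step=2.0):
    """Proportional update of the exposure time for the next frame, based on
    the brightness measured in the selection area of the current image.

    The step is clamped so that consecutive frames change gradually, relying
    on the scene not changing abruptly between two recordings."""
    ratio = target / max(measured, 1.0)
    ratio = min(max(ratio, 1.0 / max_step), max_step)
    return current_ms * ratio
```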
  • the image 11 can also be adjusted within the selection area 15.
  • the image 11 can, for example, be adjusted within the partial area 12 and in particular only within the selection area 15 by brightness setting and/or colour setting and/or contrast setting and/or a noise reduction method and/or a plurality of further image processing methods.
  • the image 11 can then be adjusted in the area relevant to the user, in particular the driver of the motor vehicle 1, i.e. in the partial area 12, such that the partial area 12 receives preferred treatment in the image 11 with regard to the qualitative representation.
  • the adjustment of the camera parameter and/or the adjustment of the image 11 in dependence on the partial area 12 and/or the selection area 15 can also be utilized for the generation of a top view image, in which a plurality of images 11 are joined together.
  • different parts of the plurality of images 11 can be utilized.
  • each utilized part of the respective image 11 can, for example, be described as the partial area 12 and/or the selection area 15.
  • the determination of the partial area 12 and/or the selection area 15 is in particular performed in real time. This means that the partial area 12 and/or the selection area 15 are, in particular for each image 11 of an image sequence captured by the camera 4, adjusted within the time span which remains until the next image 11 of the image sequence is provided.
  • Fig. 5 shows the image 11 with the partial area 12.
  • the selection area 15 corresponds to the partial area 12.
  • the selection area 15 comprises only one of the selection subareas 16.
  • a method of machine vision is performed.
  • a plurality of objects 17 of the environmental region 7 are detected.
  • the image 11 and/or the camera settings for the further image can be adjusted such that the method of machine vision can be applied to the partial area 12 more correctly and precisely.
  • the partial area 12 receives preferred treatment with regard to the area of the image 11 positioned outside the partial area 12.
  • the area positioned outside the partial area 12 is underexposed or overexposed subsequent to the adjustment of the image.
  • the area of the further image positioned outside the further partial area is underexposed or overexposed because the camera parameter of the camera 4 has been adjusted correspondingly on the basis of the preceding image 11.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Closed-Circuit Television Systems (AREA)
  • Studio Devices (AREA)
PCT/EP2016/075011 2015-10-29 2016-10-19 Method for adjusting a camera parameter and/or an image, computer program product, camera system, driver assistance system and motor vehicle WO2017071996A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
DE102015118474.5A DE102015118474A1 (de) 2015-10-29 2015-10-29 Verfahren zum Anpassen eines Kameraparameters und/oder eines Bilds, Computerprogrammprodukt, Kamerasystem, Fahrerassistenzsystem und Kraftfahrzeug
DE102015118474.5 2015-10-29

Publications (1)

Publication Number Publication Date
WO2017071996A1 true WO2017071996A1 (en) 2017-05-04

Family

ID=57184435

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/EP2016/075011 WO2017071996A1 (en) 2015-10-29 2016-10-19 Method for adjusting a camera parameter and/or an image, computer program product, camera system, driver assistance system and motor vehicle

Country Status (2)

Country Link
DE (1) DE102015118474A1 (de)
WO (1) WO2017071996A1 (de)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE102006030394A1 (de) * 2006-07-01 2008-01-03 Leopold Kostal Gmbh & Co. Kg Verfahren zum Betreiben eines Fahrerassistenzsystems
US20140204267A1 (en) * 2013-01-23 2014-07-24 Denso Corporation Control of exposure of camera
DE102013011844A1 (de) * 2013-07-16 2015-02-19 Connaught Electronics Ltd. Verfahren zum Anpassen einer Gammakurve eines Kamerasystems eines Kraftfahrzeugs, Kamerasystem und Kraftfahrzeug
DE102013020952A1 (de) * 2013-12-12 2015-06-18 Connaught Electronics Ltd. Verfahren zum Einstellen eines für die Helligkeit und/oder für den Weißabgleich einer Bilddarstellung relevanten Parameters in einem Kamerasystem eines Kraftfahrzeugs, Kamerasystem und Kraftfahrzeug

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6967569B2 (en) * 2003-10-27 2005-11-22 Ford Global Technologies Llc Active night vision with adaptive imaging
US8199198B2 (en) * 2007-07-18 2012-06-12 Delphi Technologies, Inc. Bright spot detection and classification method for a vehicular night-time video imaging system
DE102012008986B4 (de) * 2012-05-04 2023-08-31 Connaught Electronics Ltd. Kamerasystem mit angepasster ROI, Kraftfahrzeug und entsprechendes Verfahren
US9738223B2 (en) * 2012-05-31 2017-08-22 GM Global Technology Operations LLC Dynamic guideline overlay with image cropping

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112703724A (zh) * 2018-09-13 2021-04-23 Sony Semiconductor Solutions Corporation Information processing apparatus and information processing method, imaging apparatus, mobile device, and computer program
US11815799B2 (en) 2018-09-13 2023-11-14 Sony Semiconductor Solutions Corporation Information processing apparatus and information processing method, imaging apparatus, mobile device, and computer program

Also Published As

Publication number Publication date
DE102015118474A1 (de) 2017-05-04

Similar Documents

Publication Publication Date Title
KR101367637B1 (ko) Monitoring device
JP4341691B2 (ja) Imaging apparatus, imaging method, exposure control method, and program
US11477372B2 Image processing method and device supporting multiple modes and improved brightness uniformity, image conversion or stitching unit, and computer readable recording medium realizing the image processing method
US10630920B2 Image processing apparatus
WO2018070100A1 (ja) Image processing device, image processing method, and imaging device
JP6029954B2 (ja) Imaging apparatus
EP3410702B1 (de) Imaging device, imaging/display method, and imaging/display program
WO2012172922A1 (ja) Vehicle-mounted camera device
JP5860663B2 (ja) Stereo imaging device
US9800776B2 Imaging device, imaging device body, and lens barrel
US9214034B2 System, device and method for displaying a harmonized combined image
WO2016104166A1 (ja) Imaging system
US20170347008A1 Method for adapting a brightness of a high-contrast image and camera system
US9769376B2 Imaging device, imaging device body, and lens barrel
US10551535B2 Image pickup apparatus capable of inserting and extracting filter
JP2020068524A (ja) Image processing
JP6330474B2 (ja) Image processing device, control method of image processing device, and imaging device
JP7483368B2 (ja) Image processing device, control method, and program
US10944929B2 Imaging apparatus and imaging method
WO2017071996A1 (en) Method for adjusting a camera parameter and/or an image, computer program product, camera system, driver assistance system and motor vehicle
JP2019029833A (ja) Imaging device
US20180176445A1 Imaging device and imaging method
JP2017063362A (ja) Imaging device and imaging method
JP2012010282A (ja) Imaging device, exposure control method, and exposure control program
TW201935917A (zh) Method, device and system for detecting and reducing colour fringing effects in digital video acquired by a camera

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 16784860

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 16784860

Country of ref document: EP

Kind code of ref document: A1