US20180262678A1 - Vehicle camera system - Google Patents

Vehicle camera system

Info

Publication number
US20180262678A1
Authority
US
United States
Prior art keywords
standard deviation
mean
training images
camera
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
US15/455,935
Other versions
US10417518B2 (en)
Inventor
Shawn Hunt
Joseph Lull
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Denso International America Inc
Original Assignee
Denso International America Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Denso International America Inc
Priority to US15/455,935 (granted as US10417518B2)
Assigned to DENSO INTERNATIONAL AMERICA, INC. Assignors: HUNT, Shawn; LULL, Joseph
Publication of US20180262678A1
Application granted
Publication of US10417518B2
Legal status: Active
Adjusted expiration

Classifications

    • H04N5/23216
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00 Television systems
    • H04N7/18 Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • H04N7/183 Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast for receiving images from a single remote source
    • G06K9/00798
    • G06K9/46
    • G06K9/6202
    • G06K9/628
    • G06T5/002
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/10 Image acquisition
    • G06V10/12 Details of acquisition arrangements; Constructional details thereof
    • G06V10/14 Optical characteristics of the device performing the acquisition or on the illumination arrangements
    • G06V10/141 Control of illumination
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/70 Circuitry for compensating brightness variation in the scene
    • H04N23/72 Combination of two or more compensation controls
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/70 Circuitry for compensating brightness variation in the scene
    • H04N23/76 Circuitry for compensating brightness variation in the scene by influencing the image signals
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/80 Camera processing pipelines; Components thereof
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10004 Still image; Photographic image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10024 Color image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30248 Vehicle exterior or interior
    • G06T2207/30252 Vehicle exterior; Vicinity of vehicle
    • G06T2207/30256 Lane; Road marking
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/80 Camera processing pipelines; Components thereof
    • H04N23/84 Camera processing pipelines; Components thereof for processing colour signals
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N9/00 Details of colour television systems
    • H04N9/64 Circuits for processing colour signals

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Studio Devices (AREA)

Abstract

A camera system for a vehicle. The system includes a camera configured to capture an image of an area about the vehicle, and a control module. The control module compares the captured image to a plurality of previously captured training images and determines which one of the training images is most similar to the captured image. The control module then modifies the settings of the camera to match the camera settings used to capture that most similar training image.

Description

    FIELD
  • The present disclosure relates to a vehicle camera system.
  • BACKGROUND
  • This section provides background information related to the present disclosure, which is not necessarily prior art.
  • More and more vehicles are being outfitted with cameras to detect lane markers, obstacles, signage, infrastructure, other vehicles, pedestrians, etc. The cameras can be used, for example, to enhance safe vehicle operation and/or to guide the vehicle during autonomous driving. While current cameras are suitable for their intended use, they are subject to improvement. Although there are various image processing technologies applied in imaging, no single technique or combination of techniques addresses the robustness issues experienced with automotive applications.
  • The present teachings provide for camera systems and methods that advantageously enhance the object detection capabilities of vehicle cameras, for example. One skilled in the art will appreciate that the present teachings provide numerous additional advantages and unexpected results in addition to those set forth herein.
  • SUMMARY
  • This section provides a general summary of the disclosure, and is not a comprehensive disclosure of its full scope or all of its features.
  • The present teachings include a camera system for a vehicle. The system includes a camera configured to capture an image of an area about the vehicle, and a control module. The control module compares the captured image to a plurality of previously captured training images and determines which one of the training images is most similar to the captured image. The control module then modifies the settings of the camera to match the camera settings used to capture that most similar training image.
  • Further areas of applicability will become apparent from the description provided herein. The description and specific examples in this summary are intended for purposes of illustration only and are not intended to limit the scope of the present disclosure.
  • DRAWINGS
  • The drawings described herein are for illustrative purposes only of select embodiments and not all possible implementations, and are not intended to limit the scope of the present disclosure.
  • FIG. 1 illustrates a camera system according to the present teachings included with an exemplary vehicle;
  • FIG. 2 illustrates an image area of an exemplary camera of the camera system according to the present teachings;
  • FIG. 3 illustrates a method according to the present teachings for creating a trained model for configuring a camera; and
  • FIG. 4 illustrates a method according to the present teachings for configuring settings of the camera in an optimal manner to improve object detection.
  • Corresponding reference numerals indicate corresponding parts throughout the several views of the drawings.
  • DETAILED DESCRIPTION
  • Example embodiments will now be described more fully with reference to the accompanying drawings.
  • With initial reference to FIG. 1, a camera system in accordance with the present teachings is illustrated at reference numeral 10. The camera system 10 generally includes a camera 20 and a control module 30. Although the camera system 10 is illustrated as included with a passenger vehicle 40, the system 10 can be included with any suitable type of vehicle. For example, the camera system 10 can be included with any suitable recreational vehicle, mass transit vehicle, construction vehicle, military vehicle, motorcycle, construction equipment, mining equipment, watercraft, aircraft, etc. Further, the camera system 10 can be used with any suitable non-vehicular applications to enhance the ability of the camera 20 to detect objects of interest.
  • The camera 20 can be any suitable camera or other sensor capable of detecting objects of interest. For example, the camera 20 can be any suitable visual light, extended spectrum, multi-spectral imaging, or fused imaging system camera and/or sensor. The camera 20 can be mounted at any suitable position about the vehicle 40, such as on a roof of the vehicle 40, at a front of the vehicle 40, on a windshield of the vehicle 40, etc. The camera system 10 can include any suitable number of cameras 20, although the exemplary system described herein includes a single camera 20.
  • As explained further herein, the control module 30 receives an image taken by the camera 20 that includes an object of interest, and adjusts the settings of the camera 20, such as gain, exposure, and shutter speed, to values that are optimal for detecting that particular object of interest under the current environmental conditions. In this application, including the definitions below, the term “module” may be replaced with the term “circuit.” The term “module” may refer to, be part of, or include processor hardware (shared, dedicated, or group) that executes code and memory hardware (shared, dedicated, or group) that stores code executed by the processor hardware. The code is configured to provide the features of the control module 30 described herein.
  • The present teachings advantageously adjust the settings of the camera 20, such as gain, exposure, and shutter speed, to values that are optimal for detecting particular objects under the current environmental conditions. As described herein, the camera system 10 can be configured to adjust the settings of the camera 20 to optimal settings for identifying vehicle lane lines painted or printed on a road. However, the system 10 can also be configured to set the camera 20 for optimal detection of any other suitable object, such as road signage, other vehicles, pedestrians, infrastructure, etc.
  • Any suitable portion of an image captured by the camera 20 can be used to identify the optimal camera settings based on current environmental conditions. For example, and as illustrated in FIG. 2, the control module 30 can be configured to adjust the camera settings based on environmental conditions above a horizon line. To detect the horizon line, the control module 30 first identifies, in an image captured by the camera 20, a vanishing point V where lines L1 and L2, drawn along the left and right lane markers of the lane that the vehicle 40 is traveling in, appear to meet or cross in the distance. Line H is arranged by the control module 30 to extend through the vanishing point V, perpendicular to the direction that the vehicle 40 is traveling and generally parallel to the surface of the road. Image data from the area above line H has been determined to be the most relevant to setting the camera 20; thus, it is the data above line H, in each image captured by the camera 20 and in the training images described herein, that is used to set the camera 20.
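  • As a rough illustration only, and not the patented implementation, the horizon estimate described above can be sketched as the intersection of two lane-marker line segments followed by cropping the image above the resulting row. The segment endpoints, the `frame` array, and the function names below are hypothetical.

```python
import numpy as np

def vanishing_point(l1, l2):
    """Intersect two lines, each given as ((x1, y1), (x2, y2)) in image pixels."""
    (x1, y1), (x2, y2) = l1
    (x3, y3), (x4, y4) = l2
    denom = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
    if abs(denom) < 1e-9:
        raise ValueError("lane lines are parallel; no vanishing point")
    a = x1 * y2 - y1 * x2
    b = x3 * y4 - y3 * x4
    vx = (a * (x3 - x4) - (x1 - x2) * b) / denom
    vy = (a * (y3 - y4) - (y1 - y2) * b) / denom
    return vx, vy

def above_horizon(image, l1, l2):
    """Return the rows above line H, the horizontal line through the vanishing point V."""
    _, vy = vanishing_point(l1, l2)
    horizon_row = max(0, int(round(vy)))
    return image[:horizon_row, :]          # row 0 is the top of the image

# Hypothetical lane-marker segments (x, y) standing in for lines L1 and L2:
left_lane = ((100, 700), (550, 420))
right_lane = ((1180, 700), (730, 420))
# roi = above_horizon(frame, left_lane, right_lane)  # frame: an H x W x 3 image array
```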
  • With continued reference to FIGS. 1 and 2, and additional reference to FIG. 3, a method according to the present teachings for creating a trained model for optimally setting the camera 20 is illustrated at reference numeral 110 and will now be described in detail. The method 110 can be performed by the control module 30, or with any other suitable control module or system. With initial reference to block 112 of FIG. 3, multiple training images are obtained for training the camera 20. The training images can be obtained in any suitable manner, such as from a developer, manufacturer, and/or provider of the camera system 10, and any suitable number of training images can be used. For example, 5,000 training images of different environmental conditions can be obtained for each one of a plurality of scenes typically encountered by the camera 20, such as each of the following typical scenes: normal; rainy; snowy; sunny; cloudy; tunnel-enter; and tunnel-exit.
  • At block 114, the camera settings for each one of the training images obtained are identified. For example, the gain, exposure, and shutter speed settings for each training image are identified. At block 116, each training image is classified according to the scene captured therein. Any suitable classifications can be used. For example, the training images can be classified into one of the following scenes: normal, rainy, snowy, sunny, cloudy, tunnel-enter, and tunnel-exit.
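  • The bookkeeping of blocks 114 and 116, pairing each training image with its capture-time camera settings and its scene label, could be organized roughly as follows. This is only a sketch; the record structure and field names are assumptions, not taken from the patent.

```python
from dataclasses import dataclass

@dataclass
class TrainingRecord:
    image_path: str       # stored training image (block 112)
    scene: str            # "normal", "rainy", "snowy", "sunny", "cloudy",
                          # "tunnel-enter", or "tunnel-exit" (block 116)
    gain: float           # camera settings recorded at capture time (block 114)
    exposure: float
    shutter_speed: float
```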
  • At block 118, each one of the training images is prepared for the extraction of features therefrom that can be used to distinguish the different training images from one another. The different training images can be distinguished based on any relevant features, such as, but not limited to, one or more of the following:
  • TABLE A
    Mean RGB: the mean value of the red, green, and blue planes
    Mean Red: the mean value of the red plane
    Mean Green: the mean value of the green plane
    Mean Blue: the mean value of the blue plane
    Standard Deviation RGB: the standard deviation of the red, green, and blue planes
    Standard Deviation Red: the standard deviation of the red plane
    Standard Deviation Green: the standard deviation of the green plane
    Standard Deviation Blue: the standard deviation of the blue plane
    Mean HSV: the RGB image converted to HSV, then the mean of the hue, saturation, and value planes
    Mean Hue: the RGB image converted to HSV, then the mean of the hue plane
    Mean Saturation: the RGB image converted to HSV, then the mean of the saturation plane
    Mean Value: the RGB image converted to HSV, then the mean of the value plane
    Standard Deviation HSV: the RGB image converted to HSV, then the standard deviation of the hue, saturation, and value planes
    Standard Deviation Hue: the RGB image converted to HSV, then the standard deviation of the hue plane
    Standard Deviation Saturation: the RGB image converted to HSV, then the standard deviation of the saturation plane
    Standard Deviation Value: the RGB image converted to HSV, then the standard deviation of the value plane
    Mean Gaussian Blurs (10): the input converted to grayscale, a Gaussian blur run ten times with different values of sigma, then the mean taken
    Standard Deviation Gaussian Blurs (10): the input converted to grayscale, a Gaussian blur run ten times with different values of sigma, then the standard deviation taken
    Mean Difference of Gaussian (10): the input converted to grayscale, two Gaussian blurs run followed by an image subtraction (difference of Gaussian), then the mean taken
    Standard Deviation Difference of Gaussian (10): the input converted to grayscale, two Gaussian blurs run followed by an image subtraction (difference of Gaussian), then the standard deviation taken
  • Each one of the training images can be prepared for extraction of features therefrom at block 118 in any suitable manner. For example and with reference to block 120, each color (red, green, blue) training image can be transformed to an HSV (hue, saturation, and value) image, from which various features listed above in Table A can be extracted. At block 122, color (red, green, blue) training images are converted to grayscale images, and at block 124 a Gaussian blur of each grayscale image is performed. Multiple Gaussian blurs of each grayscale image can be performed, and the difference of the multiple Gaussian blurs is taken at block 126.
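  • A minimal OpenCV/NumPy sketch of the preparation and feature extraction of blocks 120 through 126 and Table A might look like the following. The sigma values, the feature ordering, and the function name are assumptions made for illustration.

```python
import cv2
import numpy as np

def table_a_features(bgr_roi, sigmas=(1, 2, 3, 4, 5, 6, 7, 8, 9, 10)):
    """Mean/standard-deviation features loosely following Table A for one image region."""
    feats = []
    rgb = cv2.cvtColor(bgr_roi, cv2.COLOR_BGR2RGB).astype(np.float32)
    feats += [rgb.mean(), rgb.std()]                       # mean/std of all RGB planes
    feats += [v for c in range(3) for v in (rgb[..., c].mean(), rgb[..., c].std())]
    hsv = cv2.cvtColor(bgr_roi, cv2.COLOR_BGR2HSV).astype(np.float32)   # block 120
    feats += [hsv.mean(), hsv.std()]                       # mean/std of all HSV planes
    feats += [v for c in range(3) for v in (hsv[..., c].mean(), hsv[..., c].std())]
    gray = cv2.cvtColor(bgr_roi, cv2.COLOR_BGR2GRAY).astype(np.float32)  # block 122
    blurs = [cv2.GaussianBlur(gray, (0, 0), s) for s in sigmas]          # block 124
    stack = np.stack(blurs)
    feats += [stack.mean(), stack.std()]                   # mean/std over the blurred images
    dog = blurs[0] - blurs[-1]                             # difference of Gaussians, block 126
    feats += [dog.mean(), dog.std()]
    return np.asarray(feats, dtype=np.float32)
```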
  • With reference to block 130, after each one of the training images has been prepared, such as set forth at blocks 120, 122, 124, and 126, features relevant to distinguishing the training images from one another are extracted. The features extracted at block 130 can be those set forth above in Table A, or any other suitable features. With reference to block 132, the extracted features are used to build a model, data set, or file of images. The model can be trained in any suitable manner, such as with any suitable algorithm. One example of a suitable algorithm is a random forest algorithm, but any other suitable algorithm can be used as well.
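  • One way block 132 could be realized with a random forest, as the passage above suggests, is sketched below with scikit-learn. It reuses `table_a_features` and the hypothetical `TrainingRecord` objects from the earlier sketches; treating the scene label as the prediction target is likewise an assumption for illustration.

```python
import cv2
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# training_records: list of TrainingRecord objects from the earlier sketch (hypothetical).
feature_matrix = np.stack([
    table_a_features(cv2.imread(rec.image_path)) for rec in training_records
])
scene_labels = [rec.scene for rec in training_records]

scene_model = RandomForestClassifier(n_estimators=100, random_state=0)
scene_model.fit(feature_matrix, scene_labels)   # learns to predict a scene from Table A features
```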
  • With additional reference to FIG. 4, a method 210 according to the present teachings for setting the camera 20 will now be described. The method 210 can be performed by the control module 30 of the system 10, or in any other suitable manner, such as with any other suitable control module. With initial reference to block 212, the trained model of training image data obtained by performing the method 110, or in any other suitable manner, is accessed by the control module 30. The control module 30 can access the trained model of training image data in any suitable manner, such as by accessing data previously loaded to the control module 30, or accessing the trained model of training image data from a remote source, such as by way of any suitable remote connection (e.g., internet connection).
  • At block 214, the control module 30 retrieves a live image captured by the camera 20, such as of an area about the vehicle 40. At block 216, any suitable image features are extracted from the live image captured by the camera 20, such as the features listed above in Table A. To extract the features from the live image, the live image may be prepared in any suitable manner, such as set forth in FIG. 3 at blocks 120, 122, 124, and 126 with respect to the training images. At block 218, the live image is classified according to the scene captured therein. For example, the live image can be classified into any one of the following classifications: normal, rainy, snowy, sunny, cloudy, tunnel-enter, tunnel-exit.
  • At block 220, the control module 30 compares the extracted features of the classified live image with the features extracted from each training image at block 130 of FIG. 3. At block 222, the control module 30 identifies the training image with features most similar to the live image captured by the camera 20. At block 224, the control module 30 configures the settings of the camera 20 to correspond with the camera settings used to capture the training image identified as being most similar to the live image captured by the camera 20. The control module 30 can configure any suitable settings of the camera 20, such as the gain, exposure, shutter speed, etc. of the camera 20.
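  • Blocks 220 through 224 could be sketched as a nearest-neighbor search over the stored Table A feature vectors, followed by applying the matched record's capture settings. This continues the earlier sketches (`table_a_features`, `scene_model`, `training_records`, `feature_matrix`); `live_roi` and the `camera.set(...)` call are hypothetical placeholders, not an actual camera API.

```python
import numpy as np

def most_similar_record(live_feats, records, feature_matrix):
    """Block 222: pick the training record whose features are closest to the live image."""
    dists = np.linalg.norm(feature_matrix - live_feats, axis=1)
    return records[int(np.argmin(dists))]

# live_roi: above-horizon region of the image retrieved at block 214 (hypothetical variable).
live_feats = table_a_features(live_roi)

# Block 218: classify the live scene; the search below could optionally be restricted
# to training records whose scene label matches this prediction.
scene = scene_model.predict(live_feats.reshape(1, -1))[0]

best = most_similar_record(live_feats, training_records, feature_matrix)

# Block 224: push the matched capture settings to the camera (placeholder API).
camera.set(gain=best.gain, exposure=best.exposure, shutter_speed=best.shutter_speed)
```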
  • The present teachings thus advantageously provide methods and systems that run a computer vision algorithm to automatically and dynamically change camera settings so that they match the camera settings used to capture a reference image, the reference image having previously been found to be of a quality that facilitates identification of road lane lines or of any other suitable object of interest.
  • The foregoing description of the embodiments has been provided for purposes of illustration and description. It is not intended to be exhaustive or to limit the disclosure. Individual elements or features of a particular embodiment are generally not limited to that particular embodiment, but, where applicable, are interchangeable and can be used in a selected embodiment, even if not specifically shown or described. The same may also be varied in many ways. Such variations are not to be regarded as a departure from the disclosure, and all such modifications are intended to be included within the scope of the disclosure.
  • Example embodiments are provided so that this disclosure will be thorough, and will fully convey the scope to those who are skilled in the art. Numerous specific details are set forth such as examples of specific components, devices, and methods, to provide a thorough understanding of embodiments of the present disclosure. It will be apparent to those skilled in the art that specific details need not be employed, that example embodiments may be embodied in many different forms and that neither should be construed to limit the scope of the disclosure. In some example embodiments, well-known processes, well-known device structures, and well-known technologies are not described in detail.
  • The terminology used herein is for the purpose of describing particular example embodiments only and is not intended to be limiting. As used herein, the singular forms “a,” “an,” and “the” may be intended to include the plural forms as well, unless the context clearly indicates otherwise. The terms “comprises,” “comprising,” “including,” and “having,” are inclusive and therefore specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. The method steps, processes, and operations described herein are not to be construed as necessarily requiring their performance in the particular order discussed or illustrated, unless specifically identified as an order of performance. It is also to be understood that additional or alternative steps may be employed.
  • When an element or layer is referred to as being “on,” “engaged to,” “connected to,” or “coupled to” another element or layer, it may be directly on, engaged, connected or coupled to the other element or layer, or intervening elements or layers may be present. In contrast, when an element is referred to as being “directly on,” “directly engaged to,” “directly connected to,” or “directly coupled to” another element or layer, there may be no intervening elements or layers present. Other words used to describe the relationship between elements should be interpreted in a like fashion (e.g., “between” versus “directly between,” “adjacent” versus “directly adjacent,” etc.). As used herein, the term “and/or” includes any and all combinations of one or more of the associated listed items.
  • Although the terms first, second, third, etc. may be used herein to describe various elements, components, regions, layers and/or sections, these elements, components, regions, layers and/or sections should not be limited by these terms. These terms may be only used to distinguish one element, component, region, layer or section from another region, layer or section. Terms such as “first,” “second,” and other numerical terms when used herein do not imply a sequence or order unless clearly indicated by the context. Thus, a first element, component, region, layer or section discussed below could be termed a second element, component, region, layer or section without departing from the teachings of the example embodiments.
  • Spatially relative terms, such as “inner,” “outer,” “beneath,” “below,” “lower,” “above,” “upper,” and the like, may be used herein for ease of description to describe one element or feature's relationship to another element(s) or feature(s) as illustrated in the figures. Spatially relative terms may be intended to encompass different orientations of the device in use or operation in addition to the orientation depicted in the figures. For example, if the device in the figures is turned over, elements described as “below” or “beneath” other elements or features would then be oriented “above” the other elements or features. Thus, the example term “below” can encompass both an orientation of above and below. The device may be otherwise oriented (rotated 90 degrees or at other orientations) and the spatially relative descriptors used herein interpreted accordingly.

Claims (20)

What is claimed is:
1. A method for setting a camera of a camera system for a vehicle, the method comprising:
capturing an image of an area about the vehicle;
comparing the captured image to a plurality of training images that were previously captured;
determining which one of the plurality of training images is most similar to the captured image; and
modifying settings of the camera to match camera settings used to capture the one or more of the plurality of training images that is most similar to the captured image.
2. The method of claim 1, further comprising capturing the image such that the image includes vehicle lanes of a roadway.
3. The method of claim 1, further comprising classifying the captured image as including one of the following scenes: normal; rainy; snowy; sunny; cloudy; tunnel-enter; and tunnel-exit.
4. The method of claim 3, wherein determining which one of the plurality of training images is most similar to the captured image includes comparing the scene of the captured image with scenes of the plurality of training images.
5. The method of claim 1, further comprising extracting image features from the captured image.
6. The method of claim 5, wherein determining which one of the plurality of training images is most similar to the captured image includes comparing the extracted image features extracted from the captured image with image features of the plurality of training images.
7. The method of claim 6, wherein the extracted image features include one or more of the following: mean RGB; mean red; mean green; mean blue; standard deviation RGB; standard deviation red; standard deviation green; standard deviation blue; mean HSV; mean hue; mean saturation; mean value; standard deviation HSV; standard deviation hue; standard deviation saturation; standard deviation value; mean Gaussian blur; standard deviation Gaussian blur; mean difference of Gaussian; and standard deviation difference of Gaussian.
8. The method of claim 1, wherein modifying settings of the camera includes modifying at least one of gain, exposure, and shutter speed of the camera.
9. The method of claim 1, wherein the plurality of training images are included with a model trained with a random forest algorithm.
10. The method of claim 1, wherein at least one of the plurality of training images is prepared for extraction of features therefrom by transforming a color version of the at least one of the plurality of training images to a grayscale image, performing multiple Gaussian blurs on the at least one of the plurality of training images, and taking a difference of the Gaussian blurs.
11. The method of claim 1, wherein at least one of the plurality of training images is prepared for extraction of features therefrom by transforming a color version of the at least one of the plurality of images to an HSV (hue, saturation, and value) image.
12. A camera system for a vehicle, the system comprising:
a camera configured to capture an image of an area about the vehicle; and
a control module that:
compares the captured image to a plurality of previously captured training images;
determines which one of the plurality of training images is most similar to the captured image; and
modifies settings of the camera to match camera settings used to capture the one or more of the plurality of training images that is most similar to the captured image.
13. The camera system of claim 12, wherein the camera is configured to capture vehicle lanes of a roadway in the captured image.
14. The camera system of claim 12, wherein the control module further classifies the captured image as including one of the following scenes: normal; rainy; snowy; sunny; cloudy; tunnel-enter; and tunnel-exit.
15. The camera system of claim 14, wherein the control module compares the scene of the captured image with scenes of the plurality of training images when determining which one of the plurality of training images is most similar to the captured image.
16. The camera system of claim 12, wherein the control module extracts image features from the captured image.
17. The camera system of claim 16, wherein the control module compares the extracted image features extracted from the captured image with image features of the plurality of training images when determining which one of the plurality of training images is most similar to the captured image.
18. The camera system of claim 17, wherein the image features extracted by the control module include one or more of the following: mean RGB; mean red; mean green; mean blue; standard deviation RGB; standard deviation red; standard deviation green; standard deviation blue; mean HSV; mean hue; mean saturation; mean value; standard deviation HSV; standard deviation hue; standard deviation saturation; standard deviation value; mean Gaussian blur; standard deviation Gaussian blur; mean difference of Gaussian; and standard deviation difference of Gaussian.
19. The camera system of claim 12, wherein the control module modifies settings of the camera including at least one of gain, exposure, and shutter speed.
20. The camera system of claim 12, wherein the control module includes the plurality of training images as a model trained with a random forest algorithm.
US15/455,935, filed 2017-03-10 (priority date 2017-03-10), Vehicle camera system, Active, anticipated expiration 2037-04-30, granted as US10417518B2

Priority Applications (1)

Application Number: US15/455,935 (US10417518B2); Priority Date: 2017-03-10; Filing Date: 2017-03-10; Title: Vehicle camera system

Applications Claiming Priority (1)

Application Number: US15/455,935 (US10417518B2); Priority Date: 2017-03-10; Filing Date: 2017-03-10; Title: Vehicle camera system

Publications (2)

Publication Number: US20180262678A1; Publication Date: 2018-09-13
Publication Number: US10417518B2; Publication Date: 2019-09-17

Family

ID=63445717

Family Applications (1)

Application Number: US15/455,935 (US10417518B2); Title: Vehicle camera system; Priority Date: 2017-03-10; Filing Date: 2017-03-10; Status: Active, anticipated expiration 2037-04-30

Country Status (1)

Country Link
US (1) US10417518B2 (en)

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2007328555A (en) 2006-06-08 2007-12-20 Hitachi Ltd Image correction device
US8385971B2 (en) * 2008-08-19 2013-02-26 Digimarc Corporation Methods and systems for content processing
JP4941482B2 (en) * 2009-02-17 2012-05-30 株式会社豊田中央研究所 Pseudo color image generation apparatus and program
US8630806B1 (en) * 2011-10-20 2014-01-14 Google Inc. Image processing for vehicle control
JP6120500B2 (en) 2012-07-20 2017-04-26 キヤノン株式会社 Imaging apparatus and control method thereof
US10335091B2 (en) * 2014-03-19 2019-07-02 Tactonic Technologies, Llc Method and apparatus to infer object and agent properties, activity capacities, behaviors, and intents from contact and pressure images

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100004824A1 (en) * 2008-07-03 2010-01-07 Mitsubishi Electric Corporation Electric power-steering control apparatus
US20100020795A1 (en) * 2008-07-23 2010-01-28 Venkatavaradhan Devarajan System And Method For Broadcast Pruning In Ethernet Based Provider Bridging Network
US20150028276A1 (en) * 2010-02-15 2015-01-29 Altair Engineering, Inc. Portable rescue tool and method of use
US20140232895A1 (en) * 2013-02-19 2014-08-21 Sensormatic Electronics, LLC Method and System for Adjusting Exposure Settings of Video Cameras

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3783882A4 (en) * 2018-10-26 2021-11-17 Huawei Technologies Co., Ltd. Camera apparatus adjustment method and related device
CN113642372A (en) * 2020-04-27 2021-11-12 百度(美国)有限责任公司 Method and system for recognizing object based on gray-scale image in operation of autonomous driving vehicle
US11875580B2 (en) * 2021-10-04 2024-01-16 Motive Technologies, Inc. Camera initialization for lane detection and distance estimation using single-view geometry
US20240096114A1 (en) * 2021-10-04 2024-03-21 Motive Technologies, Inc. Camera initialization for lane detection and distance estimation using single-view geometry

Also Published As

Publication Number: US10417518B2; Publication Date: 2019-09-17


Legal Events

Date Code Title Description
AS Assignment

Owner name: DENSO INTERNATIONAL AMERICA, INC., MICHIGAN

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:HUNT, SHAWN;LULL, JOSEPH;REEL/FRAME:041653/0389

Effective date: 20170310

STPP Information on status: patent application and granting procedure in general

Free format text: FINAL REJECTION MAILED

STPP Information on status: patent application and granting procedure in general

Free format text: RESPONSE AFTER FINAL ACTION FORWARDED TO EXAMINER

STPP Information on status: patent application and granting procedure in general

Free format text: ADVISORY ACTION MAILED

STPP Information on status: patent application and granting procedure in general

Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION

STPP Information on status: patent application and granting procedure in general

Free format text: NOTICE OF ALLOWANCE MAILED -- APPLICATION RECEIVED IN OFFICE OF PUBLICATIONS

STPP Information on status: patent application and granting procedure in general

Free format text: PUBLICATIONS -- ISSUE FEE PAYMENT VERIFIED

STCF Information on status: patent grant

Free format text: PATENTED CASE

MAFP Maintenance fee payment

Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1551); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

Year of fee payment: 4