US20180262678A1 - Vehicle camera system - Google Patents
Vehicle camera system
- Publication number
- US20180262678A1 (application US15/455,935, US201715455935A)
- Authority
- US
- United States
- Prior art keywords
- standard deviation
- mean
- training images
- camera
- image
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- H04N5/23216—
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N7/00—Television systems
- H04N7/18—Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
- H04N7/183—Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast for receiving images from a single remote source
-
- G06K9/00798—
-
- G06K9/46—
-
- G06K9/6202—
-
- G06K9/628—
-
- G06T5/002—
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/10—Image acquisition
- G06V10/12—Details of acquisition arrangements; Constructional details thereof
- G06V10/14—Optical characteristics of the device performing the acquisition or on the illumination arrangements
- G06V10/141—Control of illumination
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/56—Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/70—Circuitry for compensating brightness variation in the scene
- H04N23/72—Combination of two or more compensation controls
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/70—Circuitry for compensating brightness variation in the scene
- H04N23/76—Circuitry for compensating brightness variation in the scene by influencing the image signals
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/80—Camera processing pipelines; Components thereof
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10004—Still image; Photographic image
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10024—Color image
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30248—Vehicle exterior or interior
- G06T2207/30252—Vehicle exterior; Vicinity of vehicle
- G06T2207/30256—Lane; Road marking
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/80—Camera processing pipelines; Components thereof
- H04N23/84—Camera processing pipelines; Components thereof for processing colour signals
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N9/00—Details of colour television systems
- H04N9/64—Circuits for processing colour signals
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Studio Devices (AREA)
Abstract
Description
- The present disclosure relates to a vehicle camera system.
- This section provides background information related to the present disclosure, which is not necessarily prior art.
- More and more vehicles are being outfitted with cameras to detect lane markers, obstacles, signage, infrastructure, other vehicles, pedestrians, etc. The cameras can be used, for example, to enhance safe vehicle operation and/or to guide the vehicle during autonomous driving. While current cameras are suitable for their intended use, they are subject to improvement. Although there are various image processing technologies applied in imaging, no single technique or combination of techniques addresses the robustness issues experienced with automotive applications.
- The present teachings provide for camera systems and methods that advantageously enhance the object detection capabilities of vehicle cameras, for example. One skilled in the art will appreciate that the present teachings provide numerous additional advantages and unexpected results in addition to those set forth herein.
- This section provides a general summary of the disclosure, and is not a comprehensive disclosure of its full scope or all of its features.
- The present teachings include a camera system for a vehicle. The system includes a camera configured to capture an image of an area about the vehicle, and a control module. The control module compares the captured image to a plurality of previously captured training images. The control module also determines which one of the plurality of training images is most similar to the captured image. The control module then modifies settings of the camera to match camera settings used to capture the one or more of the plurality of training images that is most similar to the captured image.
- Further areas of applicability will become apparent from the description provided herein. The description and specific examples in this summary are intended for purposes of illustration only and are not intended to limit the scope of the present disclosure.
- The drawings described herein are for illustrative purposes only of select embodiments and not all possible implementations, and are not intended to limit the scope of the present disclosure.
- FIG. 1 illustrates a camera system according to the present teachings included with an exemplary vehicle;
- FIG. 2 illustrates an image area of an exemplary camera of the camera system according to the present teachings;
- FIG. 3 illustrates a method according to the present teachings for creating a trained model for configuring a camera; and
- FIG. 4 illustrates a method according to the present teachings for configuring settings of the camera in an optimal manner to improve object detection.
- Corresponding reference numerals indicate corresponding parts throughout the several views of the drawings.
- Example embodiments will now be described more fully with reference to the accompanying drawings.
- With initial reference to FIG. 1, a camera system in accordance with the present teachings is illustrated at reference numeral 10. The camera system 10 generally includes a camera 20 and a control module 30. Although the camera system 10 is illustrated as included with a passenger vehicle 40, the system 10 can be included with any suitable type of vehicle. For example, the camera system 10 can be included with any suitable recreational vehicle, mass transit vehicle, construction vehicle, military vehicle, motorcycle, construction equipment, mining equipment, watercraft, aircraft, etc. Further, the camera system 10 can be used with any suitable non-vehicular applications to enhance the ability of the camera 20 to detect objects of interest.
- The camera 20 can be any suitable camera or other sensor capable of detecting objects of interest. For example, the camera 20 can be any suitable visual light, extended spectrum, multi-spectral imaging, or fused imaging system camera and/or sensor. The camera 20 can be mounted at any suitable position about the vehicle 40, such as on a roof of the vehicle 40, at a front of the vehicle 40, on a windshield of the vehicle 40, etc. The camera system 10 can include any suitable number of cameras 20, although the exemplary system described herein includes a single camera 20.
- As explained further herein, the control module 30 receives an image taken by the camera 20 including an object of interest, and adjusts the settings of the camera 20, such as gain, exposure, and shutter speed, to the settings that are optimal based on the current environmental conditions for detecting the particular object of interest. In this application, including the definitions below, the term “module” may be replaced with the term “circuit.” The term “module” may refer to, be part of, or include processor hardware (shared, dedicated, or group) that executes code and memory hardware (shared, dedicated, or group) that stores code executed by the processor hardware. The code is configured to provide the features of the control module 30 described herein.
- The present teachings advantageously adjust the settings of the camera 20, such as gain, exposure, and shutter speed, to the settings that are optimal based on the current environmental conditions for detecting particular objects. As described herein, the camera system 10 can be configured to adjust the settings of the camera 20 to optimal settings for identifying vehicle lane lines painted or printed on a road. However, the system 10 can be configured to set the settings of the camera 20 for optimal detection of any other suitable object as well, such as road signage, other vehicles, pedestrians, infrastructure, etc.
- Any suitable portion of an image captured by the camera 20 can be used to identify the optimal camera settings based on current environmental conditions. For example, and as illustrated in FIG. 2, the control module 30 can be configured to adjust the camera settings based on environmental conditions above a horizon line. To detect the horizon line, the control module 30 first identifies in an image captured by the camera 20 a vanishing point V where lines L1 and L2, which are drawn along left and right lane markers of a lane that the vehicle 40 is traveling in, appear to meet and/or cross in the distance. Line H is arranged by the control module 30 to extend through the vanishing point V in a direction perpendicular to a direction that the vehicle 40 is traveling in, and generally parallel to a surface of the road. Image data from the area above line H has been determined to be the most relevant to setting the camera 20, and thus it is data from above line H of each image captured by the camera 20, and the training images described herein, which is used to set the camera 20.
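- As a concrete illustration of the horizon-line step described above, the sketch below computes the vanishing point V as the intersection of the lane-marker lines L1 and L2 and crops the image region above line H. It assumes the lane markers have already been fitted as straight lines in image (column, row) coordinates; the slope/intercept parameterization and function names are illustrative assumptions, not details from the patent.

```python
def horizon_row(lane_left, lane_right):
    """Row of horizon line H: the vanishing point V where lines L1 and L2 meet.

    Each lane line is (slope, intercept) with row = slope * column + intercept.
    Parallel lines (no vanishing point) are not handled in this sketch.
    """
    m1, b1 = lane_left
    m2, b2 = lane_right
    x_v = (b2 - b1) / (m1 - m2)   # column of vanishing point V
    y_v = m1 * x_v + b1           # row of V; line H passes horizontally through it
    return int(round(y_v))

def above_horizon(image, lane_left, lane_right):
    """Return only the image data above line H, which is what is used to set the camera."""
    h = horizon_row(lane_left, lane_right)
    return image[:max(h, 1)]      # works for NumPy-style (rows, cols[, channels]) arrays
```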
- With continued reference to FIGS. 1 and 2, and additional reference to FIG. 3, a method according to the present teachings for creating a training model for optimally setting the camera 20 is illustrated at reference numeral 110 and will now be described in detail. The method 110 can be performed by the control module 30, or with any other suitable control module or system. With initial reference to block 112 of FIG. 3, multiple training images are obtained for training the camera 20. The training images can be obtained in any suitable manner, such as from a developer, manufacturer, and/or provider of the camera system 10. Any suitable number of training images can be obtained and used. For example, 5,000 training images of different environmental conditions for each one of a plurality of different scenes typically encountered by the camera 20 can be obtained. For example, 5,000 training images can be obtained for each of the following typical scenes: normal scene; rainy scene; snowy scene; sunny scene; cloudy scene; tunnel-enter scene; and tunnel-exit scene.
- At block 114, the camera settings for each one of the training images obtained are identified. For example, the gain, exposure, and shutter speed settings for each training image obtained are identified. At block 116, each training image is classified according to the scene captured therein. Any suitable classifications can be used. For example, the training images can be classified into one of the following scenes: normal, rainy, snowy, sunny, cloudy, tunnel-enter, and tunnel-exit.
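- What blocks 112-116 collect for each training image can be pictured as a simple record holding the capture settings and the assigned scene class. The field names and types below are illustrative assumptions; the patent names gain, exposure, and shutter speed only as examples of the stored settings.

```python
from dataclasses import dataclass

@dataclass
class TrainingRecord:
    """One training image as gathered at blocks 112-116 (a sketch, not the patent's data layout)."""
    image_path: str       # where the training image is stored
    gain: float           # camera gain used to capture it (block 114)
    exposure: float       # exposure setting used to capture it (block 114)
    shutter_speed: float  # shutter speed used to capture it (block 114)
    scene: str            # block 116 class: normal, rainy, snowy, sunny, cloudy, tunnel-enter, or tunnel-exit
```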
- At block 118, each one of the training images is prepared for the extraction of features therefrom that can be used to distinguish the different training images from one another. The different training images can be distinguished based on any relevant features, such as, but not limited to, one or more of the following:
- TABLE A

Feature | Description |
---|---|
Mean RGB | The mean value of the red, green, and blue planes |
Mean Red | The mean value of the red plane |
Mean Green | The mean value of the green plane |
Mean Blue | The mean value of the blue plane |
Standard Deviation RGB | The standard deviation value of the red, green, and blue planes |
Standard Deviation Red | The standard deviation value of the red plane |
Standard Deviation Green | The standard deviation value of the green plane |
Standard Deviation Blue | The standard deviation value of the blue plane |
Mean HSV | The RGB image converted to HSV, then the mean value of the hue, saturation, and value planes |
Mean Hue | The RGB image converted to HSV, then the mean value of the hue plane |
Mean Saturation | The RGB image converted to HSV, then the mean value of the saturation plane |
Mean Value | The RGB image converted to HSV, then the mean value of the value plane |
Standard Deviation HSV | The RGB image converted to HSV, then the standard deviation value of the hue, saturation, and value planes |
Standard Deviation Hue | The RGB image converted to HSV, then the standard deviation value of the hue plane |
Standard Deviation Saturation | The RGB image converted to HSV, then the standard deviation value of the saturation plane |
Standard Deviation Value | The RGB image converted to HSV, then the standard deviation value of the value plane |
Mean Gaussian Blurs (10) | The input converted to grayscale, then a Gaussian blur run (ten different times with different values of sigma), then the mean value taken |
Standard Deviation Gaussian Blurs (10) | The input converted to grayscale, then a Gaussian blur run (ten different times with different values of sigma), then the standard deviation value taken |
Mean Difference of Gaussian (10) | The input converted to grayscale, then two Gaussian blurs run, followed by an image subtraction (difference of Gaussian), then the mean value taken |
Standard Deviation Difference of Gaussian (10) | The input converted to grayscale, then two Gaussian blurs run, followed by an image subtraction (difference of Gaussian), then the standard deviation value taken |
- Each one of the training images can be prepared for extraction of features therefrom at block 118 in any suitable manner. For example, and with reference to block 120, each color (red, green, blue) training image can be transformed to an HSV (hue, saturation, and value) image, from which various features listed above in Table A can be extracted. At block 122, color (red, green, blue) training images are converted to grayscale images, and at block 124 a Gaussian blur of each grayscale image is performed. Multiple Gaussian blurs of each grayscale image can be performed, and the difference of the multiple Gaussian blurs is taken at block 126.
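- A minimal sketch of the preparation steps at blocks 120-126 and of the Table A statistics is given below, using OpenCV and NumPy. The ten sigma values, the doubled sigma used for the difference of Gaussians, and the dictionary layout are assumptions for illustration; the patent does not specify them.

```python
import cv2
import numpy as np

SIGMAS = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]  # assumed: the patent only says "ten different values of sigma"

def extract_features(bgr_roi):
    """Compute Table A-style statistics for one (above-horizon) image region."""
    feats = {}

    # Mean / standard deviation of the color planes
    feats["mean_rgb"], feats["std_rgb"] = float(bgr_roi.mean()), float(bgr_roi.std())
    for i, name in enumerate(["blue", "green", "red"]):   # OpenCV stores color images as BGR
        feats[f"mean_{name}"] = float(bgr_roi[:, :, i].mean())
        feats[f"std_{name}"] = float(bgr_roi[:, :, i].std())

    # Block 120: transform to HSV, then per-plane statistics
    hsv = cv2.cvtColor(bgr_roi, cv2.COLOR_BGR2HSV)
    feats["mean_hsv"], feats["std_hsv"] = float(hsv.mean()), float(hsv.std())
    for i, name in enumerate(["hue", "saturation", "value"]):
        feats[f"mean_{name}"] = float(hsv[:, :, i].mean())
        feats[f"std_{name}"] = float(hsv[:, :, i].std())

    # Blocks 122-126: grayscale, Gaussian blurs, and difference of Gaussians
    gray = cv2.cvtColor(bgr_roi, cv2.COLOR_BGR2GRAY).astype(np.float32)
    for s in SIGMAS:
        blur = cv2.GaussianBlur(gray, (0, 0), sigmaX=s)
        feats[f"mean_blur_{s}"], feats[f"std_blur_{s}"] = float(blur.mean()), float(blur.std())
        dog = blur - cv2.GaussianBlur(gray, (0, 0), sigmaX=2 * s)   # assumed sigma pair for the subtraction
        feats[f"mean_dog_{s}"], feats[f"std_dog_{s}"] = float(dog.mean()), float(dog.std())

    return feats
```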
- With reference to block 130, after each one of the training images has been prepared, such as set forth at blocks 120, 122, 124, and 126, features relevant to distinguishing each training image from one another are extracted at block 130. The features extracted at block 130 can be those set forth above in Table A, or any other suitable features. With reference to block 132, the extracted features are used to build a model, data set, or file of images. The model can be trained in any suitable manner, such as with any suitable algorithm. One example of a suitable algorithm that may be used is a random forest algorithm, but any other suitable algorithm can be used as well.
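- As one possible realization of blocks 130-132, the sketch below assembles the extracted features into vectors and fits a random forest over the scene labels from block 116. The use of scikit-learn and the hyperparameters shown are assumptions, not details from the patent.

```python
from sklearn.ensemble import RandomForestClassifier

def build_model(feature_rows, scene_labels):
    """Blocks 130-132 (sketch): turn per-image feature dicts into vectors and train a scene classifier.

    feature_rows: one dict per training image, e.g. as returned by extract_features() above.
    scene_labels: the scene class assigned to each training image at block 116.
    """
    feature_names = sorted(feature_rows[0])
    X = [[row[name] for name in feature_names] for row in feature_rows]
    model = RandomForestClassifier(n_estimators=100, random_state=0)  # assumed hyperparameters
    model.fit(X, scene_labels)
    return model, feature_names
```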
- With additional reference to FIG. 4, a method 210 according to the present teachings for setting the camera 20 will now be described. The method 210 can be performed by the control module 30 of the system 10, or in any other suitable manner, such as with any other suitable control module. With initial reference to block 212, the trained model of training image data obtained by performing the method 110, or in any other suitable manner, is accessed by the control module 30. The control module 30 can access the trained model of training image data in any suitable manner, such as by accessing data previously loaded to the control module 30, or accessing the trained model of training image data from a remote source, such as by way of any suitable remote connection (e.g., internet connection).
- At block 214, the control module 30 retrieves a live image captured by the camera 20, such as of an area about the vehicle 40. At block 216, any suitable image features are extracted from the live image captured by the camera 20, such as the features listed above in Table A. To extract the features from the live image, the live image may be prepared in any suitable manner, such as set forth in FIG. 3 at blocks 120, 122, 124, and 126 with respect to the training images. At block 218, the live image is classified according to the scene captured therein. For example, the live image can be classified into any one of the following classifications: normal, rainy, snowy, sunny, cloudy, tunnel-enter, tunnel-exit.
- At block 220, the control module 30 compares the extracted features of the classified live image with the features extracted from each training image at block 130 of FIG. 3. At block 222, the control module 30 identifies the training image with features most similar to the live image captured by the camera 20. At block 224, the control module 30 configures the settings of the camera 20 to correspond with the camera settings used to capture the training image identified as being most similar to the live image captured by the camera 20. The control module 30 can configure any suitable settings of the camera 20, such as the gain, exposure, shutter speed, etc. of the camera 20.
- The present teachings thus advantageously provide for methods and systems for running a computer vision algorithm automatically and dynamically to change camera settings in order to match the camera settings used to capture a reference image, the reference image previously having been found to be of a quality that facilitates identification of road lane lines, or any other suitable object of interest.
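- Tying blocks 212-224 together, a sketch of the live loop might look like the following. It reuses the extract_features() and model sketches above, assumes the training data is kept as (feature vector, camera settings) pairs per scene, and uses Euclidean distance as the similarity measure; the patent does not prescribe a particular metric or data layout.

```python
import numpy as np

def choose_camera_settings(live_bgr, model, feature_names, training_db):
    """Blocks 212-224 (sketch): pick the camera settings of the most similar training image.

    training_db: {scene: [(feature_vector, camera_settings), ...]} -- an assumed layout.
    Returns the camera settings (e.g. gain, exposure, shutter speed) to apply at block 224.
    """
    feats = extract_features(live_bgr)                      # block 216: feature extraction
    x = np.array([feats[name] for name in feature_names])
    scene = model.predict([x])[0]                           # block 218: scene classification

    # Blocks 220-222: find the training image whose features are closest to the live image
    vectors, settings = zip(*training_db[scene])
    distances = np.linalg.norm(np.array(vectors, dtype=float) - x, axis=1)
    return settings[int(np.argmin(distances))]
```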
- The foregoing description of the embodiments has been provided for purposes of illustration and description. It is not intended to be exhaustive or to limit the disclosure. Individual elements or features of a particular embodiment are generally not limited to that particular embodiment, but, where applicable, are interchangeable and can be used in a selected embodiment, even if not specifically shown or described. The same may also be varied in many ways. Such variations are not to be regarded as a departure from the disclosure, and all such modifications are intended to be included within the scope of the disclosure.
- Example embodiments are provided so that this disclosure will be thorough, and will fully convey the scope to those who are skilled in the art. Numerous specific details are set forth such as examples of specific components, devices, and methods, to provide a thorough understanding of embodiments of the present disclosure. It will be apparent to those skilled in the art that specific details need not be employed, that example embodiments may be embodied in many different forms and that neither should be construed to limit the scope of the disclosure. In some example embodiments, well-known processes, well-known device structures, and well-known technologies are not described in detail.
- The terminology used herein is for the purpose of describing particular example embodiments only and is not intended to be limiting. As used herein, the singular forms “a,” “an,” and “the” may be intended to include the plural forms as well, unless the context clearly indicates otherwise. The terms “comprises,” “comprising,” “including,” and “having,” are inclusive and therefore specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. The method steps, processes, and operations described herein are not to be construed as necessarily requiring their performance in the particular order discussed or illustrated, unless specifically identified as an order of performance. It is also to be understood that additional or alternative steps may be employed.
- When an element or layer is referred to as being “on,” “engaged to,” “connected to,” or “coupled to” another element or layer, it may be directly on, engaged, connected or coupled to the other element or layer, or intervening elements or layers may be present. In contrast, when an element is referred to as being “directly on,” “directly engaged to,” “directly connected to,” or “directly coupled to” another element or layer, there may be no intervening elements or layers present. Other words used to describe the relationship between elements should be interpreted in a like fashion (e.g., “between” versus “directly between,” “adjacent” versus “directly adjacent,” etc.). As used herein, the term “and/or” includes any and all combinations of one or more of the associated listed items.
- Although the terms first, second, third, etc. may be used herein to describe various elements, components, regions, layers and/or sections, these elements, components, regions, layers and/or sections should not be limited by these terms. These terms may be only used to distinguish one element, component, region, layer or section from another region, layer or section. Terms such as “first,” “second,” and other numerical terms when used herein do not imply a sequence or order unless clearly indicated by the context. Thus, a first element, component, region, layer or section discussed below could be termed a second element, component, region, layer or section without departing from the teachings of the example embodiments.
- Spatially relative terms, such as “inner,” “outer,” “beneath,” “below,” “lower,” “above,” “upper,” and the like, may be used herein for ease of description to describe one element or feature's relationship to another element(s) or feature(s) as illustrated in the figures. Spatially relative terms may be intended to encompass different orientations of the device in use or operation in addition to the orientation depicted in the figures. For example, if the device in the figures is turned over, elements described as “below” or “beneath” other elements or features would then be oriented “above” the other elements or features. Thus, the example term “below” can encompass both an orientation of above and below. The device may be otherwise oriented (rotated 90 degrees or at other orientations) and the spatially relative descriptors used herein interpreted accordingly.
Claims (20)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US15/455,935 US10417518B2 (en) | 2017-03-10 | 2017-03-10 | Vehicle camera system |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US15/455,935 US10417518B2 (en) | 2017-03-10 | 2017-03-10 | Vehicle camera system |
Publications (2)
Publication Number | Publication Date |
---|---|
US20180262678A1 (en) | 2018-09-13 |
US10417518B2 (en) | 2019-09-17 |
Family
ID=63445717
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US15/455,935 Active 2037-04-30 US10417518B2 (en) | 2017-03-10 | 2017-03-10 | Vehicle camera system |
Country Status (1)
Country | Link |
---|---|
US (1) | US10417518B2 (en) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113642372A (en) * | 2020-04-27 | 2021-11-12 | 百度(美国)有限责任公司 | Method and system for recognizing object based on gray-scale image in operation of autonomous driving vehicle |
EP3783882A4 (en) * | 2018-10-26 | 2021-11-17 | Huawei Technologies Co., Ltd. | Camera apparatus adjustment method and related device |
US11875580B2 (en) * | 2021-10-04 | 2024-01-16 | Motive Technologies, Inc. | Camera initialization for lane detection and distance estimation using single-view geometry |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20100004824A1 (en) * | 2008-07-03 | 2010-01-07 | Mitsubishi Electric Corporation | Electric power-steering control apparatus |
US20100020795A1 (en) * | 2008-07-23 | 2010-01-28 | Venkatavaradhan Devarajan | System And Method For Broadcast Pruning In Ethernet Based Provider Bridging Network |
US20140232895A1 (en) * | 2013-02-19 | 2014-08-21 | Sensormatic Electronics, LLC | Method and System for Adjusting Exposure Settings of Video Cameras |
US20150028276A1 (en) * | 2010-02-15 | 2015-01-29 | Altair Engineering, Inc. | Portable rescue tool and method of use |
Family Cites Families (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2007328555A (en) | 2006-06-08 | 2007-12-20 | Hitachi Ltd | Image correction device |
US8385971B2 (en) * | 2008-08-19 | 2013-02-26 | Digimarc Corporation | Methods and systems for content processing |
JP4941482B2 (en) * | 2009-02-17 | 2012-05-30 | 株式会社豊田中央研究所 | Pseudo color image generation apparatus and program |
US8630806B1 (en) * | 2011-10-20 | 2014-01-14 | Google Inc. | Image processing for vehicle control |
JP6120500B2 (en) | 2012-07-20 | 2017-04-26 | キヤノン株式会社 | Imaging apparatus and control method thereof |
US10335091B2 (en) * | 2014-03-19 | 2019-07-02 | Tactonic Technologies, Llc | Method and apparatus to infer object and agent properties, activity capacities, behaviors, and intents from contact and pressure images |
2017
- 2017-03-10 US US15/455,935 patent/US10417518B2/en active Active
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20100004824A1 (en) * | 2008-07-03 | 2010-01-07 | Mitsubishi Electric Corporation | Electric power-steering control apparatus |
US20100020795A1 (en) * | 2008-07-23 | 2010-01-28 | Venkatavaradhan Devarajan | System And Method For Broadcast Pruning In Ethernet Based Provider Bridging Network |
US20150028276A1 (en) * | 2010-02-15 | 2015-01-29 | Altair Engineering, Inc. | Portable rescue tool and method of use |
US20140232895A1 (en) * | 2013-02-19 | 2014-08-21 | Sensormatic Electronics, LLC | Method and System for Adjusting Exposure Settings of Video Cameras |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP3783882A4 (en) * | 2018-10-26 | 2021-11-17 | Huawei Technologies Co., Ltd. | Camera apparatus adjustment method and related device |
CN113642372A (en) * | 2020-04-27 | 2021-11-12 | 百度(美国)有限责任公司 | Method and system for recognizing object based on gray-scale image in operation of autonomous driving vehicle |
US11875580B2 (en) * | 2021-10-04 | 2024-01-16 | Motive Technologies, Inc. | Camera initialization for lane detection and distance estimation using single-view geometry |
US20240096114A1 (en) * | 2021-10-04 | 2024-03-21 | Motive Technologies, Inc. | Camera initialization for lane detection and distance estimation using single-view geometry |
Also Published As
Publication number | Publication date |
---|---|
US10417518B2 (en) | 2019-09-17 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Wu et al. | Lane-mark extraction for automobiles under complex conditions | |
Son et al. | Real-time illumination invariant lane detection for lane departure warning system | |
Alvarez et al. | Road detection based on illuminant invariance | |
CN105981042B (en) | Vehicle detection system and method | |
US8036427B2 (en) | Vehicle and road sign recognition device | |
CN109409186B (en) | Driver assistance system and method for object detection and notification | |
US10334141B2 (en) | Vehicle camera system | |
US8345100B2 (en) | Shadow removal in an image captured by a vehicle-based camera using an optimized oriented linear axis | |
US8319854B2 (en) | Shadow removal in an image captured by a vehicle based camera using a non-linear illumination-invariant kernel | |
CN107301405A (en) | Method for traffic sign detection under natural scene | |
CN109729256B (en) | Control method and device for double camera devices in vehicle | |
US10417518B2 (en) | Vehicle camera system | |
CN101369312B (en) | Method and equipment for detecting intersection in image | |
WO2019085929A1 (en) | Image processing method, device for same, and method for safe driving | |
Kim et al. | Illumination invariant road detection based on learning method | |
JP6375911B2 (en) | Curve mirror detector | |
CN110809767B (en) | Advanced driver assistance system and method | |
CN106803064B (en) | Traffic light rapid identification method | |
Zong et al. | Traffic light detection based on multi-feature segmentation and online selecting scheme | |
CN110388985B (en) | Method for determining color of street sign and image processing system | |
Chen et al. | Real-time vehicle color identification using symmetrical SURFs and chromatic strength | |
CN110741379A (en) | Method for determining the type of road on which a vehicle is travelling | |
CN111066024A (en) | Method and device for recognizing lane, driver assistance system and vehicle | |
Manoharan et al. | Robust lane detection in hilly shadow roads using hybrid color feature | |
Karavaev et al. | LIGHT INVARIANT LANE DETECTION METHOD USING ADVANCED CLUSTERING TECHNIQUES | |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: DENSO INTERNATIONAL AMERICA, INC., MICHIGAN Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:HUNT, SHAWN;LULL, JOSEPH;REEL/FRAME:041653/0389 Effective date: 20170310 |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: FINAL REJECTION MAILED |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: RESPONSE AFTER FINAL ACTION FORWARDED TO EXAMINER |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: ADVISORY ACTION MAILED |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: NOTICE OF ALLOWANCE MAILED -- APPLICATION RECEIVED IN OFFICE OF PUBLICATIONS |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: PUBLICATIONS -- ISSUE FEE PAYMENT VERIFIED |
|
STCF | Information on status: patent grant |
Free format text: PATENTED CASE |
|
MAFP | Maintenance fee payment |
Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1551); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY Year of fee payment: 4 |