CN110536073B - Image capturing device for vehicle and image capturing method - Google Patents

Info

Publication number: CN110536073B (application CN201810513070.5A)
Authority: CN (China)
Prior art keywords: value, fill, image, gray scale, shutter speed
Legal status: Active
Application number: CN201810513070.5A
Other languages: Chinese (zh)
Other versions: CN110536073A
Inventor: 蔡昆佑
Current Assignee: Mitac Computer Kunshan Co Ltd; Getac Technology Corp
Original Assignee: Mitac Computer Kunshan Co Ltd; Getac Technology Corp
Application filed by Mitac Computer Kunshan Co Ltd and Getac Technology Corp
Priority application: CN201810513070.5A
Publication of application: CN110536073A
Publication of grant: CN110536073B

Classifications

    • H04N 5/144 — Movement detection (picture signal circuitry for the video frequency region)
    • H04N 5/2353 — Compensating for variation in the brightness of the object by influencing the exposure time, e.g. shutter
    • H04N 5/2354 — Compensating for variation in the brightness of the object by influencing the scene brightness using illuminating means
    • H04N 7/183 — Closed circuit television systems for receiving images from a single remote source

Abstract

The invention relates to an image capturing device for a vehicle and an image capturing method. The image capturing device comprises an image capturing unit and a processing unit: the image capturing unit sequentially captures a plurality of driving images, each of which includes an object image, and the processing unit performs image analysis on two of the driving images to obtain the variation of the object image and sets the shutter speed of the image capturing unit according to the variation. The image capturing method comprises sequentially capturing a plurality of driving images by an image capturing unit, each driving image including an object image; analyzing two of the driving images to obtain the variation of the object image; and setting the shutter speed of the image capturing unit according to the variation.

Description

Image capturing device for vehicle and image capturing method
[ technical field ]
The present invention relates to an image capturing technology, and more particularly, to an image capturing device and an image capturing method for a vehicle.
[ background of the invention ]
Because an image capturing device can record images, it has a wide range of applications; for example, it can be installed at places that require monitoring, such as the entrance and exit of a building, to assist in tracing events, preserving evidence, and the like.
A general image capturing device is usually installed at a fixed point and captures images within its field of view in a fixed operation mode. However, when the image capturing device is mounted on a moving object, such as a vehicle body, the quality of the captured images is degraded by the speed of the moving object, which affects the accuracy of subsequent recognition of the captured images.
[ summary of the invention ]
In one embodiment, an image capturing method includes sequentially capturing a plurality of driving images by an image capturing unit, analyzing two of the driving images to obtain a variation of an object image, and setting a shutter speed of the image capturing unit according to the variation, wherein each of the driving images includes the object image.
In one embodiment, an image capturing device for a vehicle includes an image capturing unit and a processing unit. The image capturing unit is used for sequentially capturing a plurality of driving images, each of which includes an object image. The processing unit performs image analysis on two of the driving images to obtain the variation of the object image and sets the shutter speed of the image capturing unit according to the variation.
In summary, in the image capturing device for a vehicle and the image capturing method of the embodiments of the invention, the shutter speed is set according to the variation of the object image in the driving images, so as to obtain clearer driving images. In addition, the fill-in light intensity or the gain value can be adjusted according to the gray-scale number distribution of a driving image, so as to obtain a driving image with proper brightness. The shutter speed, the fill-in light intensity, or the gain value can also be fine-tuned according to the frequency spectrum of a driving image or the brightness distribution of the object image, so as to obtain a driving image with better detail. Moreover, the quality of the driving image can be confirmed, and fine adjustment performed accordingly, without waiting for feedback from a background system, so that a driving image of better quality is obtained more quickly.
The detailed features and advantages of the present invention are described in the embodiments below in sufficient detail to enable anyone skilled in the art to understand and implement the technical content of the present invention; the related objects and advantages can be readily understood from the disclosure, claims, and drawings of this specification.
[ description of the drawings ]
Fig. 1 is a block diagram of an embodiment of an image capturing device for a vehicle.
Fig. 2 is a flowchart illustrating an embodiment of an image capturing method.
Fig. 3 is a flowchart illustrating an embodiment of step S30 in fig. 2.
Fig. 4 is a flowchart illustrating an embodiment of step S34 in fig. 3.
FIG. 5 is a histogram of an embodiment of a driving image.
Fig. 6 is a flowchart illustrating an embodiment of step S34C in fig. 4.
Fig. 7 is a flowchart illustrating the image capturing method after step S34 according to an embodiment.
Fig. 8 is a schematic diagram of an embodiment of a driving image.
Fig. 9 is a flowchart illustrating an embodiment of step S37 in fig. 7.
Fig. 10 is a flowchart illustrating the image capturing method after step S34 according to an embodiment.
FIG. 11 is a schematic diagram illustrating an embodiment of an object image and a brightness distribution thereof.
Fig. 12 is a flowchart illustrating an embodiment of step S40 in fig. 10.
Fig. 13 is a flowchart illustrating an embodiment of step S40 in fig. 10.
Fig. 14 is a flowchart illustrating an embodiment of step S40 in fig. 10.
Fig. 15 is a flowchart illustrating an embodiment of step S40 in fig. 10.
Fig. 16 is a flowchart illustrating an embodiment of step S40 in fig. 10.
Fig. 17 is a flowchart illustrating an embodiment of step S40 in fig. 10.
FIG. 18 is a schematic diagram illustrating an embodiment of an object image and a brightness distribution thereof.
FIG. 19 is a schematic diagram illustrating an embodiment of an object image and a brightness distribution thereof.
[ detailed description ]
Fig. 1 is a block diagram of an embodiment of an image capturing device for a vehicle. Referring to fig. 1, in general, an image capturing device 100 for a vehicle is installed on a vehicle and used for capturing and recording a driving image F1. In some embodiments, the vehicle may be an automobile, a motorcycle, etc., but the invention is not limited thereto, and any suitable vehicle using the image capturing device 100 for an automobile is within the scope of the invention.
In one embodiment, the image capturing apparatus 100 for a vehicle includes an image capturing unit 110 and a processing unit 120, and the processing unit 120 is coupled to the image capturing unit 110. In addition, the image capturing device 100 for a vehicle may further include a light supplement unit 130, and the light supplement unit 130 is coupled to the image capturing unit 110 and the processing unit 120.
The image capturing unit 110 is configured to capture a plurality of driving images F1. Also, the driving images F1 can be a plurality of frames captured by the image capturing unit 110 in a continuous time. The fill-in light unit 130 is used to output fill-in light to assist the image capturing of the image capturing unit 110.
In some embodiments, the image capturing unit 110 may include a lens and a photosensitive device, such as a Complementary Metal Oxide Semiconductor (CMOS) device or a photosensitive coupled device (CCD). In addition, the light supplement unit 130 can be implemented by, for example, a Light Emitting Diode (LED), an infrared diode (IR LED), a halogen lamp, a laser source, etc., but the invention is not limited thereto.
The processing unit 120 can control and adjust the operations of the image capturing unit 110 and/or the fill-in light unit 130 according to the image capturing method of any embodiment of the invention, so that the driving image F1 captured by the image capturing unit 110 can have better image quality.
In some embodiments, the processing unit 120 may be, for example, an SoC chip, a Central Processing Unit (CPU), a Microcontroller (MCU), an Application Specific Integrated Circuit (ASIC), or the like.
Fig. 2 is a flowchart illustrating an embodiment of an image capturing method. Referring to fig. 1 to 2, in an embodiment of the image capturing method, the processing unit 120 may sequentially capture a plurality of driving images F1 by using the image capturing unit 110 (step S10). Here, each of the driving images F1 may include an object image M1. Next, the processing unit 120 performs image analysis on two of the driving images F1 to obtain a variation of the object image M1 between the two driving images F1 (step S20), and sets the shutter speed of the image capturing unit 110 according to the variation obtained in step S20 (step S30). Thereafter, the processing unit 120 may return to step S10 to start the next adjustment procedure.
In an embodiment of step S10, the image capturing unit 110 can capture each driving image F1 by a global shutter (Global Shutter) operation, but the invention is not limited thereto; the image capturing unit 110 can alternatively capture each driving image F1 by a rolling shutter (Rolling Shutter) operation.
Here, the image capturing unit 110 can capture a plurality of driving images F1 sequentially at a predetermined shutter speed in an initial state. In some embodiments, the predetermined shutter speed may be between 1/1000 and 1/100000 seconds.
In an embodiment of the step S20, the processing unit 120 may perform image processing on each driving image F1 to determine whether each driving image F1 covers the object image M1.
In an embodiment of step S20, the processing unit 120 selects two driving images F1 containing the object image M1 from the driving images F1 for image analysis to obtain the variation. In one embodiment, the processing unit 120 can perform image analysis on the first and the last of the driving images F1 that contain the object image M1. In another embodiment, the processing unit 120 can also directly select two temporally consecutive driving images F1 containing the object image M1 for image analysis. Here, the variation obtained by the image analysis of the processing unit 120 may be the positional movement amount of the object image M1 between the two driving images F1, i.e., a position variation, for example the moving distance of the object image M1 along the X axis between the two driving images F1. However, the invention is not limited thereto; the variation obtained by the processing unit 120 may also be the moving speed of the object image M1 between the two driving images F1.
In some embodiments, the processing unit 120 may obtain the variation of the object image M1 by image subtraction, but the invention is not limited thereto, and the processing unit 120 may obtain the variation of the object image M1 by any suitable image analysis algorithm.
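As noted above, the patent does not fix a specific image analysis algorithm. A minimal sketch of one image-subtraction-style approach is given below, assuming the object image appears as the brightest region of each frame; the function name, the brightness threshold, and the centroid-based displacement estimate are illustrative choices, not the patented method.

```python
import numpy as np

def object_displacement(frame_a, frame_b, threshold=50):
    """Estimate the X-axis displacement of a bright object between two
    frames by comparing the column centroids of above-threshold pixels.
    (One possible analysis; the patent does not specify the algorithm.)"""
    def centroid_x(frame):
        ys, xs = np.nonzero(np.asarray(frame) > threshold)  # bright-object pixels
        return xs.mean() if xs.size else None
    ca, cb = centroid_x(frame_a), centroid_x(frame_b)
    if ca is None or cb is None:
        return None                      # object image absent in a frame
    return cb - ca                       # positive: object moved toward +X
```

Dividing this pixel displacement by the frame interval would give the moving-speed form of the variation mentioned above.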
Fig. 3 is a flowchart illustrating an embodiment of step S30 in fig. 2. In an embodiment of step S30, the processing unit 120 compares the variation of the object image M1 with a preset variation threshold (step S31). When the variation is determined to be smaller than or equal to the variation threshold, the processing unit 120 may select a preset shutter speed value from a plurality of preset shutter speed values according to the magnitude of the variation and set the shutter speed of the image capturing unit 110 to that value (step S32). When the variation is greater than the variation threshold, the processing unit 120 may set the shutter speed of the image capturing unit 110 to an initial shutter speed value (step S33) and adjust the fill-in light intensity of the fill-in light unit 130 or the gain value of the image capturing unit 110 according to one of the driving images F1 (step S34).
In some embodiments, the image capturing apparatus 100 for a vehicle may further include a storage unit 140, and the storage unit 140 is coupled to the processing unit 120. The storage unit 140 may be configured to store a variation threshold, a plurality of preset shutter speed values, an initial shutter speed value, a fill-in light intensity and/or a gain value.
In an embodiment of step S32, an exemplary mapping table between the magnitude of the variation and the preset shutter speed values is shown below. For convenience of description, only four variation values and their corresponding preset shutter speed values are listed. The variation here is a speed in kilometers per hour (km/h), and the preset shutter speed values are in seconds, but the invention is not limited thereto.
Table 1: Mapping between variation and preset shutter speed value

Serial number              A1       A2       A3       A4
Variation (km/h)           20       40       80       160
Preset shutter speed (s)   1/500    1/1000   1/2000   1/4000
Referring to Table 1, each variation A1-A4 and its corresponding preset shutter speed value are stored in the storage unit 140 as a one-to-one mapping table. Therefore, in step S32 the processing unit 120 can select the preset shutter speed value by a table lookup.
For example, assume that the variation threshold is 40 (km/h). If the variation obtained by the processing unit 120 is 20 (km/h), which is smaller than the variation threshold, the processing unit 120 compares the obtained variation with each variation A1-A4 stored in the mapping table of the storage unit 140; since the obtained variation matches variation A1, the processing unit 120 finds the corresponding preset shutter speed value 1/500 (seconds) from the mapping table and sets the shutter speed of the image capturing unit 110 accordingly. In another example, if the variation obtained by the processing unit 120 is 30 (km/h), which is smaller than the variation threshold but matches none of the variations A1-A4, the processing unit 120 selects the entry whose variation is closest to and greater than the obtained variation, namely variation A2, and sets the shutter speed of the image capturing unit 110 to its corresponding preset shutter speed value 1/1000 (seconds).
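The lookup just described can be sketched as follows. This is an illustrative helper (the function name and the fall-through behavior for variations beyond the last table entry are assumptions), and it covers only the table lookup of step S32, not the separate threshold check of step S31.

```python
def preset_shutter_speed(variation,
                         table=((20, 1/500), (40, 1/1000),
                                (80, 1/2000), (160, 1/4000))):
    """Select the preset shutter speed for a variation (km/h) per Table 1:
    pick the entry whose listed variation is the smallest one greater
    than or equal to the measured variation; beyond the last entry,
    keep the fastest listed shutter (an assumed fallback)."""
    for listed, speed in table:
        if variation <= listed:
            return speed
    return table[-1][1]
```

For the 30 km/h example above, the loop skips A1 (20) and stops at A2 (40), returning 1/1000 s.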
In one embodiment of step S33, the initial shutter speed value may be 1/1000 seconds.
In an embodiment of step S34, the driving image F1 may include a plurality of pixels, and each pixel may display a corresponding gray scale according to one of a plurality of gray scale levels. In other words, the driving image F1 can be displayed according to the gray levels displayed by the pixels and the positions of the pixels.
In some embodiments, the driving image F1 may be composed of 1280 × 720 pixels, but the invention is not limited thereto, and the driving image F1 may be composed of 360 × 240 pixels, 1920 × 1080 pixels, or any other number of pixels meeting the display format standard.
In some embodiments, the number of gray scale levels can be 256, i.e., gray level 0 to gray level 255, where gray level 0 represents the lowest brightness and gray level 255 the highest. The invention is not limited thereto; the number of gray scale levels can be determined by the rendering capability of the image capturing unit 110. For example, the image capturing unit 110 may include an analog-to-digital conversion circuit; when the analog-to-digital conversion circuit is 10 bits, the image capturing unit 110 can provide 1024 (i.e., 2^10) gray scale levels, and so on.
Fig. 4 is a flowchart illustrating an embodiment of step S34 in fig. 3, and fig. 5 is a histogram of an embodiment of a driving image. Referring to fig. 1 to 5, in an embodiment of step S34, the processing unit 120 may compute a histogram of the driving image F1 by image integration to obtain the gray-scale number distribution of the pixels of the driving image F1 over the gray scale levels (step S34A). According to the gray-scale number distribution of step S34A, the processing unit 120 may number the pixels sequentially from the highest gray level toward the lowest gray level until a preset number is reached (step S34B). Then, the processing unit 120 can adjust the fill-in light intensity of the fill-in light unit 130 or the gain value of the image capturing unit 110 according to the gray level of the pixel bearing the preset number (step S34C), so that the maximum brightness of the driving image F1 is kept within a reasonable range without overexposure or underexposure.
In an embodiment of the step S34B, the predetermined number may be the number of pixels of the object image M1 in the driving image F1. In some embodiments, when the object image M1 is an image of a license plate, the predetermined number may be between 1000 and 3000 or between 2000 and 3000, but the present invention is not limited thereto, and the value of the predetermined number may depend on the size of the license plate of each country and the number of pixels that the license plate needs to occupy when correctly recognized in the image.
Generally, in a driving image F1 captured under the fill-in light of the fill-in light unit 130, the object image M1 should be the brightest part. In other words, the pixels displaying the object image M1 should be distributed in the higher gray levels of the gray-scale number distribution. Therefore, by numbering the pixels from the highest gray level toward the lowest, the processing unit 120 can determine the lowest gray level at which the pixels of the object image M1 are distributed.
For example, assuming 256 gray scale levels and a preset number of 1000, the processing unit 120 numbers the pixels distributed at gray level 255 first, then those at gray level 254, then those at gray level 253, and so on in the direction from gray level 255 to gray level 0, stopping when the numbering reaches 1000. Here, the numbering can equivalently be implemented as a cumulative count. In other words, the processing unit 120 can accumulate the numbers of pixels distributed at gray level 255, gray level 254, gray level 253, and so on, along the direction from gray level 255 to gray level 0, until the accumulated count reaches the preset number 1000.
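The cumulative-count form of step S34B can be sketched as below, assuming an 8-bit single-channel image; the function name and the fallback return value when fewer pixels exist than the preset number are illustrative assumptions.

```python
import numpy as np

def gray_level_at_count(image, preset_number=1000, levels=256):
    """Walk the gray-scale number distribution (histogram) from the
    highest gray level downward, accumulating pixel counts, and return
    the gray level at which the preset number is reached (step S34B)."""
    hist = np.bincount(np.asarray(image).ravel(), minlength=levels)
    accumulated = 0
    for level in range(levels - 1, -1, -1):   # 255 -> 0
        accumulated += hist[level]
        if accumulated >= preset_number:
            return level
    return 0                                  # fewer pixels than the preset number
```

The returned gray level is then matched against the gray scale sections of step S34C.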
Fig. 6 is a flowchart illustrating an embodiment of step S34C in fig. 4. Referring to fig. 1 to 6, in an embodiment of the step S34C, the gray scale levels can be divided into a plurality of gray scale sections, and the processing unit 120 can perform corresponding adjustment according to which gray scale section the gray scale level of the pixel with the preset number falls in.
Hereinafter, 256 gray scale levels are described as an example. The gray scale levels, from the highest gray level 255 to the lowest gray level 0, are divided sequentially into a first gray scale section, a second gray scale section, a third gray scale section, and a fourth gray scale section: gray levels 255 to 200 form the first gray scale section, gray levels 199 to 150 the second gray scale section, gray levels 149 to 100 the third gray scale section, and gray levels 99 to 0 the fourth gray scale section. An exemplary mapping between these gray scale sections and the adjustment actions is shown in Table 2 below.
Table 2: Mapping between gray scale section and adjustment action

Gray scale section   Gray levels   Adjustment action
First                255-200       Decrease fill-in light intensity or gain value
Second               199-150       No adjustment
Third                149-100       Increase fill-in light intensity or gain value
Fourth               99-0          No adjustment
When the gray level of the pixel bearing the preset number falls in the first gray scale section, the object image M1 may be overexposed; therefore, the processing unit 120 can improve the condition by reducing the fill-in light intensity of the fill-in light unit 130 or the gain value of the image capturing unit 110 (step S34C1).
When the gray level of the pixel bearing the preset number falls in the second gray scale section, the brightness of the object image M1 is proper; therefore, the processing unit 120 does not adjust the fill-in light intensity of the fill-in light unit 130 or the gain value of the image capturing unit 110 (step S34C2).
When the gray level of the pixel bearing the preset number falls in the third gray scale section, the object image M1 may be too dark; therefore, the processing unit 120 can improve the condition by increasing the fill-in light intensity of the fill-in light unit 130 or the gain value of the image capturing unit 110 (step S34C3).
When the gray level of the pixel bearing the preset number falls in the fourth gray scale section, the object image M1 may not exist in the driving image F1; therefore, the processing unit 120 performs no adjustment and executes step S34C2.
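The four-way decision of steps S34C1-S34C3 amounts to a simple range lookup. A sketch, assuming the 256-level section boundaries described above (the function name and string return values are illustrative):

```python
def fill_light_adjustment(gray_level):
    """Map the gray level of the preset-numbered pixel to an adjustment
    action per Table 2 (256 gray levels assumed)."""
    if gray_level >= 200:
        return "decrease"   # step S34C1: object image may be overexposed
    if gray_level >= 150:
        return "none"       # step S34C2: brightness is proper
    if gray_level >= 100:
        return "increase"   # step S34C3: object image may be too dark
    return "none"           # fourth section: object image may be absent
```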
Fig. 7 is a flowchart illustrating the image capturing method after step S34 according to an embodiment, and fig. 8 is a schematic diagram of an embodiment of a driving image. Referring to fig. 1 to 8, in an embodiment of the image capturing method, after step S34 the processing unit 120 may further perform a frequency domain conversion on one of the driving images F1 to obtain its frequency spectrum (step S35). The processing unit 120 then detects a frequency domain position in the spectrum (step S36) and, according to whether a signal appears at that position, fine-tunes the gain value of the image capturing unit 110 or the fill-in light intensity of the fill-in light unit 130, as well as the shutter speed of the image capturing unit 110 (step S37), so as to further optimize the quality of the images captured by the image capturing device 100 for the vehicle.
In an embodiment of step S35, the frequency domain transformation may be implemented by Fourier Transform (Fourier Transform).
In one embodiment of step S36, the object image M1 may include a plurality of character images W1. The processing unit 120 may set a straight line L1 passing through the driving image F1 to obtain the frequency domain position according to the number of pixels of the straight line L1 passing through the driving image F1 and the number of pixels of the character image W1 in the same direction as the straight line L1. In some implementations, the frequency-domain location is a high-frequency location in the frequency spectrum.
Hereinafter, a driving image F1 with an image format of 1280 × 720 will be described as an example. When the image format of the driving image F1 is 1280 × 720, it indicates that the driving image F1 has 1280 pixels on the horizontal axis (i.e., X axis) and 720 pixels on the vertical axis (i.e., Y axis), and the driving image F1 is composed of 1280 × 720 pixels. When the processing unit 120 sets the straight line L1 along the horizontal axis of the driving image F1, the number of pixels that the straight line L1 passes through on the driving image F1 should be 1280. In addition, the number of pixels of the character image W1 on the straight line L1 is the number of pixels required for the character image W1 to be recognized, for example, 3 to 10. Here, 3 pixels are taken as an example. Accordingly, the processing unit 120 may accordingly obtain the frequency domain position to be detected as (3/1280). However, the present invention is not limited thereto, and the straight line L1 may be disposed along the longitudinal axis of the driving image F1 or along other suitable directions to pass through the driving image F1.
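The spectral check of steps S35-S36 can be sketched as below for one pixel line of the driving image. The mapping from the character stroke width to an FFT bin, the window around the target bin, and the relative energy threshold are all assumed interpretations of the (3/1280) position described above, not values fixed by the patent.

```python
import numpy as np

def spectrum_has_signal(line_pixels, pixels_per_stroke=3, rel_threshold=0.05):
    """Fourier-transform one pixel line of the driving image and check
    whether energy appears near the high-frequency bin corresponding to
    character strokes about `pixels_per_stroke` pixels wide."""
    line = np.asarray(line_pixels, dtype=float)
    spectrum = np.abs(np.fft.rfft(line - line.mean()))     # drop the DC term
    target_bin = len(line) // (2 * pixels_per_stroke)      # stroke period ~ 2x width
    window = spectrum[max(target_bin - 2, 1): target_bin + 3]
    return bool(window.max() > rel_threshold * (spectrum.sum() + 1e-12))
```

A line crossing sharp character edges concentrates energy near the target bin, so the check passes; a blurred or featureless line does not, which corresponds to the fine-tuning branch of step S37B.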
Fig. 9 is a flowchart illustrating an embodiment of step S37 in fig. 7. Referring to fig. 1 to 9, in an embodiment of step S37, when the processing unit 120 detects in step S36 that the signal appears at the frequency domain position, the processing unit 120 determines that the driving image F1 has sufficient sharpness and does not adjust the gain value of the image capturing unit 110, the shutter speed, or the fill-in light intensity of the fill-in light unit 130 (step S37A). When the processing unit 120 does not detect the signal at the frequency domain position in step S36, the processing unit 120 determines that the driving image F1 lacks sufficient sharpness, e.g., is blurred, and increases the shutter speed of the image capturing unit 110 while fine-tuning one of the gain value of the image capturing unit 110 and the fill-in light intensity of the fill-in light unit 130 (step S37B), so that the driving image F1 captured after the fine-tuning has sufficient brightness and sharpness.
In some embodiments, the processing unit 120 may repeat steps S35 to S37 to perform the fine adjustment iteratively, stopping only when the processing unit 120 determines that the driving image F1 has a sufficiently high spectral response.
In summary, in steps S35 to S37 the processing unit 120 judges the quality of the driving image F1 by using frequency domain conversion to determine whether the driving image F1 lacks high frequency signals, enabling fast feedback and corresponding fine adjustment, thereby obtaining a driving image F1 of better quality more quickly.
Fig. 10 is a flowchart illustrating the image capturing method after step S34 according to an embodiment, and fig. 11 is a schematic diagram illustrating an embodiment of an object image and its brightness distribution. Referring to fig. 10 and 11, in an embodiment of the image capturing method, after step S34 the processing unit 120 may further extract the object image M1 from one of the driving images F1 (step S38) and obtain the luminance distribution of the pixels on a straight line L2 passing through the object image M1 (step S39). Then, the processing unit 120 can fine-tune the gain value of the image capturing unit 110, the fill-in light intensity of the fill-in light unit 130, or the shutter speed of the image capturing unit 110 according to the waveform of the luminance distribution (step S40), so as to further optimize the quality of the images captured by the image capturing device 100 for the vehicle.
In some embodiments, the processing unit 120 may repeatedly perform the fine adjustment through repeated execution of steps S38 to S40, and the fine adjustment of steps S38 to S40 is not stopped until the processing unit 120 determines that the driving image F1 has sufficient image quality.
In one embodiment of step S38, the processing unit 120 may extract the object image M1 from the driving image F1 by an image processing technique, such as image segmentation.
In an embodiment of step S39, the processing unit 120 may set a straight line L2 passing through the object image M1 and derive a distribution of luminance versus position from the pixels the straight line L2 passes through and their positions. In some embodiments, the processing unit 120 arranges the straight line L2 along the horizontal axis of the object image M1, but the invention is not limited thereto; the straight line L2 may also be arranged along the vertical axis of the object image M1 or another suitable direction passing through the object image M1. In addition, the object image M1 may include a plurality of character images W1, and the straight line L2 may pass through the character images W1.
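Sampling the luminance distribution of step S39 amounts to reading the pixels along the chosen line. A minimal Python sketch, assuming the horizontal-axis arrangement described above and a single-channel (gray scale) image array; the function name and data layout are illustrative, not part of the disclosure:

```python
import numpy as np

def luminance_profile(gray_image, row):
    """Step S39: take the pixels on a horizontal line L2 through the
    object image and return luminance (gray scale level) versus position."""
    profile = gray_image[row, :].astype(int)   # luminance values along L2
    positions = np.arange(profile.size)        # pixel positions on L2
    return positions, profile
```

A vertical line or another direction would simply index a column or an arbitrary set of coordinates instead of a row.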
Fig. 12 is a flowchart illustrating an embodiment of step S40 in fig. 10. Referring to fig. 10 to 12, in an embodiment of step S40, the processing unit 120 performs the fine-tuning according to the peak-to-peak value Vpp of the waveform in the luminance distribution converted in step S39. The peak-to-peak value Vpp is the difference between a peak Vc and a valley Vt of the waveform in the luminance distribution. Accordingly, the processing unit 120 may compare the peak-to-peak value Vpp of the waveform with a preset difference (step S41). When the peak-to-peak value Vpp is greater than or equal to the preset difference, the processing unit 120 may determine that the contrast of the object image M1 is sufficient and not adjust the gain value of the image capturing unit 110, the fill-in light intensity of the fill-in light unit 130, or the shutter speed of the image capturing unit 110 (step S42). When the peak-to-peak value Vpp is smaller than the preset difference, the processing unit 120 determines that the contrast of the object image M1 is insufficient and causes the fill-in light unit 130 to increase the fill-in light intensity or the image capturing unit 110 to increase the gain value (step S43), so that the contrast of the object image M1 in the fine-tuned driving image F1 is increased.
In some implementations, the luminance in the luminance distribution may be expressed in gray scale levels. In addition, the preset difference may be between 90 and 110 gray scale levels. For example, the preset difference may be 100 gray scale levels, but the invention is not limited thereto.
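The decision of steps S41 to S43 can be sketched as follows. This is an illustrative Python sketch only: the function name and the string return values are assumptions, and the default of 100 gray scale levels follows the 90-110 range given above.

```python
def contrast_feedback(profile, preset_diff=100):
    """Steps S41-S43: compare the waveform's peak-to-peak value Vpp
    (peak Vc minus valley Vt, in gray scale levels) with a preset
    difference and report the corresponding action."""
    vpp = max(profile) - min(profile)          # peak-to-peak value Vpp
    if vpp >= preset_diff:
        return "keep"                          # contrast sufficient (S42)
    return "raise fill-in light or gain"       # contrast insufficient (S43)
```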
Fig. 13 is a flowchart illustrating an embodiment of step S40 in fig. 10. Referring to fig. 13, in an embodiment of step S40, in addition to the peak-to-peak value Vpp, the processing unit 120 may perform the fine-tuning according to the magnitude of the peak value. In one embodiment, after the processing unit 120 performs step S41 and determines that the peak-to-peak value Vpp is greater than or equal to the preset difference, the processing unit 120 compares the peak value of the waveform with a preset peak value (step S44). When the comparison result of step S44 is that the peak value is greater than or equal to the preset peak value, indicating that the brightness of the object image M1 is not too dark, the processing unit 120 continues to execute step S42 and performs no adjustment. On the contrary, when the comparison result of step S44 is that the peak value is smaller than the preset peak value, indicating that the brightness of the object image M1 may be too dark, the processing unit 120 may execute step S43 to increase the brightness of the object image M1, but the invention is not limited thereto. Fig. 14 is a flowchart illustrating an embodiment of step S40 in fig. 10. Referring to fig. 14, in another embodiment, the processing unit 120 may also perform step S44 before performing step S41. Then, when the comparison result of step S44 is that the peak value is greater than or equal to the preset peak value, the processing unit 120 further performs the peak-to-peak comparison of step S41 and selects to perform step S42 or step S43 according to the comparison result of step S41. Moreover, when the comparison result of step S44 is that the peak value is smaller than the preset peak value, the processing unit 120 may select to execute step S43.
Fig. 15 is a flowchart illustrating an embodiment of step S40 in fig. 10. Referring to fig. 15, in an embodiment of step S40, in addition to the peak-to-peak value, the processing unit 120 may also perform the fine-tuning according to the magnitude of the valley value. In one embodiment, after the processing unit 120 performs step S41 and determines that the peak-to-peak value is greater than or equal to the preset difference, the processing unit 120 compares the valley value of the waveform with a preset valley value (step S45). When the comparison result of step S45 is that the valley value is smaller than or equal to the preset valley value, indicating that the brightness of the object image M1 is not too bright, the processing unit 120 continues to perform step S42 without adjustment. On the contrary, when the comparison result of step S45 is that the valley value is greater than the preset valley value, indicating that the brightness of the object image M1 may be too bright, the processing unit 120 may cause the fill-in light unit 130 to decrease the fill-in light intensity or the image capturing unit 110 to decrease the gain value (step S46), but the invention is not limited thereto. Fig. 16 is a flowchart illustrating an embodiment of step S40 in fig. 10. Referring to fig. 16, in another embodiment, the processing unit 120 may also perform step S45 before performing step S41. Thereafter, when the comparison result of step S45 is that the valley value is smaller than or equal to the preset valley value, the processing unit 120 further performs the peak-to-peak comparison of step S41 and selects to perform step S42 or step S43 according to the comparison result of step S41. When the comparison result of step S45 is that the valley value is greater than the preset valley value, the processing unit 120 may select to execute step S46.
In some embodiments, the preset peak value may be between gray scale level 120 and gray scale level 140. In addition, the preset valley value may be between gray scale level 120 and gray scale level 140. In some embodiments, the preset peak value may be equal to the preset valley value. For example, the preset peak value and the preset valley value may both be gray scale level 130, but the invention is not limited thereto.
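One reading of figs. 13 and 15 combines the three checks into a single decision flow: the Vpp comparison first, then the peak value guarding against an over-dark object image, then the valley value guarding against an over-bright one. Combining both figures into one function is an assumption for illustration, as are the function name and return strings; the thresholds follow the 120-140 range above, with both set to gray scale level 130.

```python
def brightness_feedback(profile, preset_diff=100,
                        preset_peak=130, preset_valley=130):
    """Vpp check (S41), then peak check (S44), then valley check (S45)."""
    vc, vt = max(profile), min(profile)        # peak Vc and valley Vt
    if vc - vt < preset_diff:
        return "raise fill-in light or gain"   # S43: contrast insufficient
    if vc < preset_peak:
        return "raise fill-in light or gain"   # S43: image may be too dark
    if vt > preset_valley:
        return "lower fill-in light or gain"   # S46: image may be too bright
    return "keep"                              # S42: no adjustment
```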
Fig. 17 is a flowchart illustrating an embodiment of step S40 in fig. 10, and fig. 18 and fig. 19 are schematic diagrams each illustrating an embodiment of an object image and its brightness distribution. Referring to fig. 10 and 17 to 19, in an embodiment of step S40, the processing unit 120 may also perform the fine-tuning according to the gray-scale pixel number of each tangent line Lt of the waveform in the luminance distribution converted in step S39. The gray-scale pixel number of a tangent line Lt corresponds to the transition slope where the waveform falls from the peak Vc to the valley Vt or rises from the valley Vt to the peak Vc. In some implementations, the luminance in the luminance distribution may be expressed in gray scale levels, and the gray-scale pixel number of a tangent line may be expressed in gray scale levels per pixel.
The processing unit 120 may compare the gray-scale pixel number of each tangent line with a preset gray-scale pixel number (step S47). When the gray-scale pixel number of every tangent line falls within the preset gray-scale pixel number, indicating that the sharpness of the object image M1 is sufficient, the processing unit 120 does not adjust the gain value of the image capturing unit 110, the fill-in light intensity of the fill-in light unit 130, or the shutter speed of the image capturing unit 110 (step S48). When the gray-scale pixel number of any tangent line exceeds the preset gray-scale pixel number, indicating that the sharpness of the object image M1 is insufficient, the processing unit 120 may cause the image capturing unit 110 to increase its shutter speed (step S49), so as to increase the sharpness of the object image M1 in the driving image F1 captured after the fine-tuning.
In some embodiments, the preset gray-scale pixel number may be an interval of values, for example, 0 to 2 gray scale levels per pixel, but the invention is not limited thereto.
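The translated unit of the tangent-line quantity is ambiguous, so the sketch below adopts one plausible reading: each peak-to-valley (or valley-to-peak) transition is measured by how many pixels it spans, and a transition spanning more pixels than the preset indicates blur, which triggers the shutter speed increase of step S49. The function names, the plateau handling, and this interpretation are all assumptions for illustration.

```python
import numpy as np

def transition_widths(profile):
    """Width, in pixels, of each monotonic run between a local extreme
    and the next (one reading of the gray-scale pixel number of a
    tangent line Lt in step S47)."""
    diffs = np.sign(np.diff(profile))
    widths, run, prev = [], 0, 0
    for d in diffs:
        if d == 0:                  # plateau: not part of a transition
            if run:
                widths.append(run)
            run, prev = 0, 0
            continue
        if prev == 0 or d == prev:  # same direction: extend the run
            run += 1
        else:                       # direction change: close the run
            widths.append(run)
            run = 1
        prev = d
    if run:
        widths.append(run)
    return widths

def sharpness_ok(profile, preset=2):
    """S47-S49: sharpness is sufficient when every transition stays
    within the preset; otherwise the shutter speed would be raised."""
    return all(w <= preset for w in transition_widths(profile))
```

A crisp alternating profile passes, while a gently ramping (blurred) profile fails and would trigger step S49.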
In summary, in executing steps S38 to S40, the processing unit 120 determines the quality of the object image M1 in the driving image F1 according to the waveform of its brightness distribution, which enables fast feedback and corresponding fine-tuning, so that a driving image F1 of better quality is obtained more quickly.
In some embodiments, in executing steps S10 to S33 of the image capturing method, the processing unit 120 may find a currently suitable shutter speed so that the driving image F1 captured by the image capturing unit 110 is not blurred. Then, in executing step S34 of the image capturing method, the processing unit 120 may further find a currently suitable fill-in light intensity or gain value so that the driving image F1 captured by the image capturing unit 110 has suitable brightness. Finally, based on the more suitable shutter speed and fill-in light intensity or gain value obtained above, the processing unit 120 may further enhance the detail expression capability of the driving image F1 through the fine-tuning actions of steps S35 to S37 or steps S38 to S40 of the image capturing method.
Therefore, in some embodiments, the product of the shutter speed and gain value of the image capturing unit 110 and the fill-in light intensity of the fill-in light unit 130 remains substantially equal before and after the fine-tuning of step S37 (or step S40). For example, when the processing unit 120 changes the shutter speed to 1/2 of its original value, the processing unit 120 changes the gain value or the fill-in light intensity to 2 times its original value, so that the product of the shutter speed, the gain value, and the fill-in light intensity is substantially equal before and after the fine-tuning; that is, the product does not change greatly before and after the fine-tuning.
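The constant-product rule above can be expressed in a few lines. This Python sketch is illustrative: the function name and the choice of compensating with the gain value (rather than the fill-in light intensity) are assumptions; the invariant itself, shutter speed times gain value times fill-in light intensity held constant, is the one described in the text.

```python
def retune(shutter_speed, gain, fill_light, shutter_factor):
    """Scale the shutter speed by `shutter_factor` while inversely
    scaling the gain value, keeping shutter_speed * gain * fill_light
    constant across the fine-tuning."""
    return shutter_speed * shutter_factor, gain / shutter_factor, fill_light
```

With `shutter_factor=0.5`, the shutter speed becomes 1/2 of its original value and the gain value doubles, matching the example in the paragraph above.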
In some embodiments, the image capturing device 100 for a vehicle may be applied to a police surveillance system. For example, the image capturing device 100 for a vehicle may be installed on a police car and electrically connected to an internal system of the police car, and the internal system may upload the captured driving image F1 to a background system, so that the background system can perform post-processing, image recognition, and the like on the driving image F1, thereby assisting the police in quickly recording and identifying license plates, vehicle models, and the like. The object image M1 in the driving image F1 may be an image of a license plate or an image of a vehicle body. In addition, the character images W1 may be images of numbers, characters, or the like.
In summary, in the image capturing apparatus for a vehicle and the image capturing method according to the embodiments of the invention, the shutter speed is set according to the variation of the object image in the driving image, so as to obtain a clearer driving image. In addition, the fill-in light intensity or the gain value can be adjusted according to the gray scale quantity distribution of the driving image, so as to obtain a driving image with suitable brightness. Moreover, the shutter speed, the fill-in light intensity, or the gain value can be fine-tuned according to the frequency spectrum of the driving image or the brightness distribution of the object image, so as to obtain a driving image with better detail expression capability. Furthermore, the quality of the driving image can be confirmed, and fine-tuning can be performed accordingly, without waiting for feedback from the background system, so that a driving image of better quality is obtained more quickly.
Although the present invention has been described with reference to the preferred embodiments, it should be understood that various changes and modifications can be made without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (20)

1. An image capturing method, comprising:
sequentially capturing a plurality of driving images by using an image capturing unit, wherein each driving image comprises an object image and each driving image comprises a plurality of pixels;
performing image analysis on two of the plurality of driving images to obtain a variation of the object image;
setting a shutter speed of the image capturing unit according to the variation, comprising:
when the variation is smaller than or equal to a variation threshold, setting one of a plurality of preset shutter speed values as the shutter speed of the image capturing unit according to the magnitude of the variation; and
when the variation is greater than the variation threshold, setting an initial shutter speed value as the shutter speed, and adjusting a fill-in light intensity of a fill-in light unit or a gain value of the image capturing unit according to one of the driving images, including:
obtaining a gray scale quantity distribution of pixels of the driving image on a plurality of gray scale levels;
numbering the pixels of the driving image in sequence from the highest gray scale level to the lowest gray scale level of the plurality of gray scale levels according to the gray scale quantity distribution until a preset number is reached; and
adjusting the fill-in light intensity or the gain value according to the gray scale level of the pixel with the preset number, wherein the gray scale levels sequentially form a first gray scale section, a second gray scale section, a third gray scale section and a fourth gray scale section from the highest gray scale level to the lowest gray scale level, and the step of adjusting the fill-in light intensity or the gain value according to the gray scale level of the pixel with the preset number comprises the following steps:
when the gray scale level is in the first gray scale section, reducing the fill-in light intensity or the gain value;
when the gray scale level is in the second gray scale section or the fourth gray scale section, not adjusting the fill-in light intensity or the gain value; and
when the gray scale level is in the third gray scale section, increasing the fill-in light intensity or the gain value.
2. The image capturing method of claim 1, wherein the step of adjusting the fill-in light intensity or the gain value according to one of the driving images further comprises:
converting a frequency spectrum of one of the driving images;
detecting a frequency domain position in the frequency spectrum; and
fine-tuning the gain value or the fill-in light intensity, and the shutter speed, according to whether a signal appears at the frequency domain position.
3. The method of claim 2, wherein the step of fine-tuning the gain value or the fill-in light intensity and the shutter speed comprises:
when a signal appears at the frequency domain position, not adjusting the gain value or the fill-in light intensity and the shutter speed; and
when no signal appears at the frequency domain position, increasing the shutter speed and reducing the fill-in light intensity or the gain value.
4. The image capturing method of claim 3, wherein the frequency domain position is obtained according to the number of pixels on a line passing through the driving image and the number of pixels of a character image in the same direction as the line.
5. The image capturing method of claim 1, wherein the step of adjusting the fill-in light intensity or the gain value according to one of the driving images further comprises:
taking out the object image from one of the driving images;
converting a brightness distribution of pixels on a line passing through the object image; and
fine-tuning the gain value, the fill-in light intensity, or the shutter speed according to the waveform of the brightness distribution.
6. The method as claimed in claim 5, wherein the step of fine-tuning the gain value, the fill-in light intensity, or the shutter speed according to the waveform of the luminance distribution comprises:
comparing the peak-to-peak value of the waveform with a preset difference value;
when the peak-to-peak value is greater than or equal to the preset difference value, not adjusting the gain value, the fill-in light intensity, and the shutter speed; and
when the peak-to-peak value is smaller than the preset difference value, increasing the fill-in light intensity or the gain value.
7. The method as claimed in claim 5, wherein the step of fine-tuning the gain value, the fill-in light intensity, or the shutter speed according to the waveform of the luminance distribution comprises:
comparing the peak-to-peak value of the waveform with a preset difference value;
when the peak-to-peak value is greater than or equal to the preset difference value, comparing the peak value of the waveform with a preset peak value;
when the peak value is greater than or equal to the preset peak value, not adjusting the gain value, the fill-in light intensity, and the shutter speed; and
when the peak value is smaller than the preset peak value, increasing the fill-in light intensity or the gain value.
8. The method as claimed in claim 5, wherein the step of fine-tuning the gain value, the fill-in light intensity, or the shutter speed according to the waveform of the luminance distribution comprises:
comparing the peak-to-peak value of the waveform with a preset difference value;
when the peak-to-peak value is greater than or equal to the preset difference value, comparing the valley value of the waveform with a preset valley value;
when the valley value is greater than the preset valley value, reducing the fill-in light intensity or the gain value; and
when the valley value is smaller than or equal to the preset valley value, not adjusting the gain value, the fill-in light intensity, and the shutter speed.
9. The method as claimed in claim 5, wherein the step of fine-tuning the gain value, the fill-in light intensity, or the shutter speed according to the waveform of the luminance distribution comprises:
when the gray-scale pixel number of each tangent line of the waveform is within a preset gray-scale pixel number, not adjusting the gain value, the fill-in light intensity, and the shutter speed; and
when the gray-scale pixel number of any tangent line of the waveform exceeds the preset gray-scale pixel number, increasing the shutter speed.
10. The image capturing method of any one of claims 2 to 9, wherein a product of the shutter speed, the gain value, and the fill-in light intensity is equal before and after the fine-tuning.
11. An image capturing device for a vehicle, comprising:
an image capturing unit for sequentially capturing a plurality of driving images, wherein each driving image comprises an object image;
a processing unit for analyzing two of the plurality of driving images to obtain a variation of the object image and setting a shutter speed of the image capturing unit according to the variation; and
a fill-in light unit for providing fill-in light with a fill-in light intensity, wherein when the variation is less than or equal to a variation threshold, the processing unit sets one of a plurality of preset shutter speed values as the shutter speed according to the magnitude of the variation, and when the variation is greater than the variation threshold, the processing unit sets an initial shutter speed value as the shutter speed and adjusts the fill-in light intensity or a gain value of the image capturing unit according to one of the plurality of driving images,
wherein each driving image comprises a plurality of pixels, the processing unit further obtains a gray scale quantity distribution of the pixels of the driving image on a plurality of gray scale levels and numbers the pixels of the driving image in sequence from the highest gray scale level to the lowest gray scale level of the plurality of gray scale levels according to the gray scale quantity distribution until a preset number is reached, and the processing unit adjusts the fill-in light intensity or the gain value according to the gray scale level of the pixel with the preset number, wherein the plurality of gray scale levels sequentially form a first gray scale section, a second gray scale section, a third gray scale section, and a fourth gray scale section from the highest gray scale level to the lowest gray scale level; when the gray scale level is in the first gray scale section, the processing unit reduces the fill-in light intensity or the gain value; when the gray scale level is in the second gray scale section or the fourth gray scale section, the processing unit does not adjust the fill-in light intensity or the gain value; and when the gray scale level is in the third gray scale section, the processing unit increases the fill-in light intensity or the gain value.
12. The image capturing apparatus as claimed in claim 11, wherein after the fill-in light intensity or the gain value is adjusted according to one of the driving images, the processing unit further transforms a frequency spectrum of one of the driving images and detects a frequency domain position in the frequency spectrum, and the processing unit further fine-tunes the gain value or the fill-in light intensity and the shutter speed according to whether a signal appears at the frequency domain position.
13. The image capturing apparatus as claimed in claim 12, wherein the processing unit does not adjust the gain value or the fill-in light intensity and the shutter speed when a signal is present at the frequency domain position, and increases the shutter speed and decreases the fill-in light intensity or the gain value when a signal is not present at the frequency domain position.
14. The image capturing apparatus for vehicle as claimed in claim 12, wherein the processing unit obtains the frequency domain position according to the number of pixels on a line passing through the driving image and the number of pixels of a character image in the same direction as the line.
15. The image capturing apparatus as claimed in claim 11, wherein after adjusting the fill-in light intensity or the gain value according to one of the driving images, the processing unit further extracts the object image from one of the driving images, converts a luminance distribution of pixels on a straight line passing through the object image, and fine-tunes the gain value, the fill-in light intensity, or the shutter speed according to a waveform of the luminance distribution.
16. The image capturing apparatus for vehicle as claimed in claim 15, wherein the fine-tuning of the gain value, the fill-in light intensity, or the shutter speed according to the waveform of the luminance distribution comprises: comparing the peak-to-peak value of the waveform with a preset difference value; when the peak-to-peak value is greater than or equal to the preset difference value, not adjusting the gain value, the fill-in light intensity, and the shutter speed; and when the peak-to-peak value is smaller than the preset difference value, increasing the fill-in light intensity or the gain value.
17. The image capturing apparatus for vehicle as claimed in claim 15, wherein the fine-tuning of the gain value, the fill-in light intensity, or the shutter speed according to the waveform of the luminance distribution further comprises: comparing the peak-to-peak value of the waveform with a preset difference value; when the peak-to-peak value is greater than or equal to the preset difference value, comparing the peak value of the waveform with a preset peak value; when the peak value is greater than or equal to the preset peak value, not adjusting the gain value, the fill-in light intensity, and the shutter speed; and when the peak value is smaller than the preset peak value, increasing the fill-in light intensity or the gain value.
18. The image capturing apparatus for vehicle as claimed in claim 15, wherein the fine-tuning of the gain value, the fill-in light intensity, or the shutter speed according to the waveform of the luminance distribution further comprises: comparing the peak-to-peak value of the waveform with a preset difference value; when the peak-to-peak value is greater than or equal to the preset difference value, comparing the valley value of the waveform with a preset valley value; when the valley value is greater than the preset valley value, reducing the fill-in light intensity or the gain value; and when the valley value is smaller than or equal to the preset valley value, not adjusting the gain value, the fill-in light intensity, and the shutter speed.
19. The image capturing apparatus as claimed in claim 15, wherein the processing unit does not adjust the gain value, the fill-in light intensity, and the shutter speed when the gray-scale pixel number of each tangent line of the waveform falls within a preset gray-scale pixel number, and the processing unit increases the shutter speed when the gray-scale pixel number of any tangent line of the waveform exceeds the preset gray-scale pixel number.
20. The image capturing apparatus for vehicle as claimed in any one of claims 12 to 19, wherein the product of the shutter speed, the gain value, and the fill-in light intensity is equal before and after the fine-tuning.
CN201810513070.5A 2018-05-25 2018-05-25 Image capturing device for vehicle and image capturing method Active CN110536073B (en)

Publications (2)

Publication Number Publication Date
CN110536073A CN110536073A (en) 2019-12-03
CN110536073B true CN110536073B (en) 2021-05-11



