CN110536071B - Image capturing device for vehicle and image capturing method - Google Patents

Image capturing device for vehicle and image capturing method

Info

Publication number
CN110536071B
Authority
CN
China
Prior art keywords
value
gray scale
fill
image
light intensity
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810512757.7A
Other languages
Chinese (zh)
Other versions
CN110536071A (en)
Inventor
蔡昆佑
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Mitac Computer Kunshan Co Ltd
Getac Technology Corp
Original Assignee
Mitac Computer Kunshan Co Ltd
Getac Technology Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Mitac Computer Kunshan Co Ltd, Getac Technology Corp filed Critical Mitac Computer Kunshan Co Ltd
Priority to CN201810512757.7A priority Critical patent/CN110536071B/en
Publication of CN110536071A publication Critical patent/CN110536071A/en
Application granted granted Critical
Publication of CN110536071B publication Critical patent/CN110536071B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/70 Circuitry for compensating brightness variation in the scene
    • H04N23/73 Circuitry for compensating brightness variation in the scene by influencing the exposure time
    • H04N23/74 Circuitry for compensating brightness variation in the scene by influencing the scene brightness using illuminating means
    • H04N7/00 Television systems
    • H04N7/18 Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • H04N7/183 Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast for receiving images from a single remote source

Abstract

The invention relates to a vehicle image capturing device and an image capturing method. The vehicle image capturing device comprises an image capturing unit for capturing a driving image, a light supplementing unit for supplementing light, and a processing unit. The processing unit obtains the gray scale quantity distribution of a plurality of pixels of the driving image over a plurality of gray scale levels, numbers the pixels in sequence from the highest gray scale level to the lowest gray scale level according to this distribution until a preset number of pixels has been numbered, and adjusts the light supplementing intensity of the light supplementing unit or the gain value of the image capturing unit according to the gray scale level at which the pixel with the preset number is located. The image capturing device and the image capturing method for the vehicle can thereby obtain a driving image with proper brightness, and can further fine-tune the shutter speed, the light supplement intensity, or the gain value based on the frequency spectrum of the driving image or the brightness distribution of the object image, so as to obtain a driving image with better detail expression capability.

Description

Image capturing device for vehicle and image capturing method
[ technical field ]
The present invention relates to an image capturing technology, and more particularly, to an image capturing device and an image capturing method for a vehicle.
[ background of the invention ]
The image capturing device can record images and therefore has a wide range of applications. For example, it can be installed at locations requiring monitoring, such as the entrance and exit of a building, to assist in tracing, evidence preservation, and the like.
A general image capturing device is usually installed at a fixed point and captures images within its capture range according to a fixed operation mode. However, when the image capturing device is mounted on a moving object, such as a vehicle body, the quality of the captured images degrades as the speed of the moving object increases, which affects the accuracy of subsequent recognition performed on the captured images.
[ summary of the invention ]
In one embodiment, an image capture method includes capturing a driving image by an image capture unit, obtaining the gray scale quantity distribution of a plurality of pixels of the driving image over a plurality of gray scale levels, numbering the pixels in sequence from the highest gray scale level to the lowest gray scale level according to the gray scale quantity distribution until a preset number of pixels has been numbered, and adjusting the fill light intensity of a fill light unit or the gain value of the image capture unit according to the gray scale level at which the pixel numbered with the preset number is located.
In one embodiment, an image capturing device for a vehicle includes an image capturing unit, a light supplementing unit and a processing unit. The image capturing unit is used for capturing the driving image. The light supplement unit is used for supplementing light. The processing unit is used for obtaining gray scale quantity distribution of a plurality of pixels of the driving image on a plurality of gray scale levels. The processing unit numbers the pixels in sequence from the highest gray level to the lowest gray level of the gray levels according to the gray scale number distribution until the pixels are numbered to a preset number, and adjusts the fill light intensity of the fill light unit or the gain value of the image capture unit according to the gray level of the pixel with the preset number.
In summary, in the image capturing apparatus for a vehicle and the image capturing method of the embodiment of the invention, the fill-in light intensity or the gain value is adjusted according to the gray scale number distribution of the driving image, so as to obtain the driving image with proper brightness. In addition, the shutter speed, the fill-in light intensity or the gain value can be finely adjusted through the frequency spectrum of the driving image or the brightness distribution of the object image, so that the driving image with better detail expression capability can be obtained. Moreover, the quality of the driving image can be confirmed without waiting for the feedback of the background system and the fine adjustment can be carried out in real time, so that the driving image with better quality can be obtained more quickly.
The detailed features and advantages of the present invention are described in the following embodiments in sufficient detail to enable anyone skilled in the art to understand and implement the technical content of the present invention. The related objects and advantages of the present invention can be readily understood by anyone skilled in the art from the disclosure, claims, and drawings of this specification.
[ description of the drawings ]
Fig. 1 is a block diagram of an embodiment of an image capturing device for a vehicle.
Fig. 2 is a flowchart illustrating an embodiment of an image capturing method.
Fig. 3 is a flowchart illustrating an embodiment of step S40 in fig. 2.
Fig. 4 is a flowchart illustrating the image capturing method after step S40 according to an embodiment.
Fig. 5 is a schematic diagram of an embodiment of a driving image.
Fig. 6 is a flowchart illustrating an embodiment of step S53 in fig. 4.
Fig. 7 is a flowchart illustrating the image capturing method after step S40 according to an embodiment.
Fig. 8 is a schematic diagram illustrating an embodiment of an object image and a brightness distribution thereof.
Fig. 9 is a flowchart illustrating an embodiment of step S56 in fig. 7.
Fig. 10 is a flowchart illustrating an embodiment of step S56 in fig. 7.
Fig. 11 is a flowchart illustrating an embodiment of step S56 in fig. 7.
Fig. 12 is a flowchart illustrating an embodiment of step S56 in fig. 7.
Fig. 13 is a flowchart illustrating an embodiment of step S56 in fig. 7.
Fig. 14 is a flowchart illustrating an embodiment of step S56 in fig. 7.
Fig. 15 is a schematic diagram illustrating an embodiment of an object image and a brightness distribution thereof.
Fig. 16 is a schematic diagram illustrating an embodiment of an object image and a brightness distribution thereof.
[ detailed description of the embodiments ]
Fig. 1 is a block diagram of an embodiment of an image capturing device for a vehicle. Referring to fig. 1, in general, an image capturing device 100 for a vehicle is installed on a vehicle and used for capturing and recording a driving image F1. In some embodiments, the vehicle may be an automobile, a motorcycle, etc., but the invention is not limited thereto, and any suitable vehicle on which the image capturing device 100 is used falls within the scope of the invention.
In one embodiment, the image capturing apparatus 100 for a vehicle includes an image capturing unit 110 and a processing unit 120, and the processing unit 120 is coupled to the image capturing unit 110. In addition, the image capturing device 100 for a vehicle may further include a light supplement unit 130, and the light supplement unit 130 is coupled to the image capturing unit 110 and the processing unit 120. The image capturing unit 110 is used for capturing a driving image F1. The light supplement unit 130 is configured to output supplement light to assist the image capturing unit 110 in capturing an image.
In some embodiments, the image capturing unit 110 may include a lens and a photosensitive device, such as a Complementary Metal Oxide Semiconductor (CMOS) device or a Charge-Coupled Device (CCD). In addition, the light supplement unit 130 can be implemented by, for example, a Light Emitting Diode (LED), an infrared light emitting diode (IR LED), a halogen lamp, a laser source, etc., but the invention is not limited thereto.
The processing unit 120 can control and adjust the operations of the image capturing unit 110 and/or the fill-in light unit 130 according to the image capturing method of any embodiment of the invention, so that the driving image F1 captured by the image capturing unit 110 can have better image quality.
In some embodiments, the processing unit 120 may be, for example, an SoC chip, a Central Processing Unit (CPU), a Microcontroller (MCU), an Application Specific Integrated Circuit (ASIC), or the like.
Fig. 2 is a flowchart illustrating an embodiment of an image capturing method. Referring to fig. 1 to 2, in an embodiment of the image capturing method, the processing unit 120 may capture the driving image F1 by using the image capturing unit 110 (step S10). Then, the processing unit 120 may compute a histogram of the driving image F1 by integrating over the image to obtain the gray scale quantity distribution of a plurality of pixels of the driving image F1 over a plurality of gray scale levels (step S20). The processing unit 120 may number the pixels sequentially from the highest gray scale level to the lowest gray scale level according to the gray scale quantity distribution of step S20 until the pixels are numbered to a predetermined number (step S30). Then, the processing unit 120 can adjust the fill-in intensity of the fill-in unit 130 or the gain of the image capturing unit 110 according to the gray scale level of the pixel with the preset number (step S40), so that the maximum brightness of the driving image F1 can be kept within a reasonable range without overexposure or underexposure.
In the embodiment of the step S10, the image capturing unit 110 can capture the driving image F1 by using a Global Shutter (Global Shutter) operation method, but the invention is not limited thereto, and the image capturing unit 110 can also capture the driving image F1 by using a Rolling Shutter (Rolling Shutter) operation method. In addition, the image capturing unit 110 can capture the driving image F1 by using a predetermined shutter speed. In some embodiments, the predetermined shutter speed may be between 1/1000 seconds and 1/100000 seconds.
In some embodiments, the driving image F1 may include a plurality of pixels, and each pixel may display a corresponding gray scale according to one of a plurality of gray scale levels. Therefore, the driving image F1 can be displayed according to the gray scale and the position of the pixels.
In some embodiments, the driving image F1 may be composed of 1280 × 720 pixels, but the invention is not limited thereto, and the driving image F1 may be composed of 360 × 240 pixels, 1920 × 1080 pixels, or any other number of pixels meeting the display format standard.
In some embodiments, the number of the gray scale levels can be 256, such as gray scale level 0 to gray scale level 255, wherein gray scale level 0 represents the lowest brightness and gray scale level 255 represents the highest brightness, but the present invention is not limited thereto, and the number of the gray scale levels can be determined according to the rendering capability provided by the image capturing unit 110. For example, the image capturing unit 110 may include an analog-to-digital conversion circuit, and when the analog-to-digital conversion circuit is 10 bits, the image capturing unit 110 may provide 1024 (i.e., 2^10) gray scale levels of representation capability, and so on.
In some embodiments, if there is an object existing within the image capturing range of the in-vehicle image capturing apparatus 100, the driving image F1 captured by the image capturing unit 110 may cover the object image M1.
In an embodiment of the step S30, the predetermined number may be the number of pixels of the object image M1 in the driving image F1. In some embodiments, when the object image M1 is an image of a license plate, the predetermined number may be between 1000 and 3000 or between 2000 and 3000, but the present invention is not limited thereto, and the value of the predetermined number may depend on the size of the license plate of each country and the number of pixels that the license plate needs to occupy when correctly recognized in the image.
Generally, in the driving image F1 captured under the proper fill-in by the fill-in unit 130, the object image M1 should be the highest brightness part, and the rest of the image parts with lower brightness should be the background image. In other words, the pixels displaying the object image M1 should be distributed in the portion of the gray-scale distribution with higher gray-scale level, and the pixels displaying the background image should be distributed in the portion of the gray-scale distribution with lower gray-scale level. Therefore, the processing unit 120 may sequentially number the pixels along the direction from the highest gray level to the lowest gray level to determine whether the exposure of the driving image F1 is appropriate by determining the lowest gray level at which the pixels of the object image M1 are displayed.
For example, assume 256 gray scale levels and a preset number of 1000. First, the processing unit 120 may number the pixels distributed at gray level 255, then the pixels distributed at gray level 254, then those at gray level 253, and so on, until the number reaches 1000, at which point the processing unit 120 stops the numbering that proceeds from gray level 255 toward gray level 0. Here, the numbering operation may also be implemented as an accumulation. In other words, the processing unit 120 can sequentially accumulate the numbers of pixels distributed at gray level 255, gray level 254, gray level 253, and so on, along the direction from gray level 255 to gray level 0, until the accumulated count reaches the predetermined number 1000.
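The following is a minimal sketch of this counting step, assuming the driving image F1 is available as an 8-bit grayscale NumPy array; the function name and the default preset number of 1000 are illustrative only and not part of the patent text.

```python
import numpy as np

def gray_level_at_preset_number(image_gray: np.ndarray, preset_number: int = 1000) -> int:
    """Accumulate the gray scale quantity distribution from gray level 255 downward
    and return the gray level at which the running count first reaches preset_number."""
    # Gray scale quantity distribution: number of pixels at each of the 256 levels.
    histogram = np.bincount(image_gray.ravel(), minlength=256)
    accumulated = 0
    for level in range(255, -1, -1):        # from the highest to the lowest gray level
        accumulated += histogram[level]
        if accumulated >= preset_number:    # pixels have been "numbered" to the preset number
            return level
    return 0                                # image contains fewer pixels than preset_number
```

Because the loop touches at most 256 histogram bins, this check is inexpensive enough to run on every captured frame.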
Fig. 3 is a flowchart illustrating an embodiment of step S40 in fig. 2. Referring to fig. 1 to fig. 3, in an embodiment of the step S40, the gray levels can be divided into a plurality of gray level sections, and the processing unit 120 can perform a corresponding adjustment operation according to which gray level section the pixel with the preset number is located in.
Hereinafter, 256 gradation levels will be described as an example. Herein, the gray scale levels from the highest gray scale level 255 to the lowest gray scale level 0 are sequentially formed into a first gray scale section, a second gray scale section, a third gray scale section and a fourth gray scale section. Here, gray scale level 255 to gray scale level 200 are the first gray scale section, gray scale level 199 to gray scale level 150 are the second gray scale section, gray scale level 149 to gray scale level 100 are the third gray scale section, and gray scale level 99 to gray scale level 0 are the fourth gray scale section. An exemplary relationship table of these gray scale sections and adjustment actions is shown in Table I below.
Table I: Relationship between gray scale sections and adjustment actions

Gray scale section | Gray scale levels | Adjustment action
First gray scale section | 255 to 200 | Decrease the fill-in light intensity or the gain value
Second gray scale section | 199 to 150 | No adjustment
Third gray scale section | 149 to 100 | Increase the fill-in light intensity or the gain value
Fourth gray scale section | 99 to 0 | No adjustment
When the gray scale level of the pixel with the preset number is in the first gray scale section, it indicates that the object image M1 may be in an overexposure state, and therefore, the processing unit 120 can improve the condition by reducing the fill-in intensity of the fill-in unit 130 or the gain value of the image capturing unit 110 (step S41).
When the gray level of the pixel with the predetermined number is within the second gray level section, it indicates that the brightness of the object image M1 is proper, and therefore, the processing unit 120 does not adjust the fill-in intensity of the light-filling unit 130 or the gain of the image-capturing unit 110 (step S42).
When the gray scale level of the pixel with the preset number is in the third gray scale section, it indicates that the brightness of the object image M1 may be too dark, and therefore, the processing unit 120 can improve the condition by increasing the fill-in intensity of the fill-in unit 130 or the gain of the image capturing unit 110 (step S43).
When the gray level of the pixel with the predetermined number falls in the fourth gray level section, it indicates that the object image M1 may not exist in the driving image F1, and therefore the processing unit 120 does not perform the adjustment at this time and selects to execute step S42.
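A sketch of this section-based decision follows, using the example boundaries from Table I. The returned strings merely name the action of steps S41 to S43; wiring them to the fill light unit or the sensor gain is hardware-specific and outside this sketch.

```python
def adjustment_for_gray_level(level: int) -> str:
    """Map the gray level of the pixel numbered to the preset number onto an
    adjustment action, using the example section boundaries of Table I."""
    if level >= 200:        # first section (255-200): object image likely overexposed
        return "decrease fill light intensity or gain value"   # step S41
    if level >= 150:        # second section (199-150): brightness is appropriate
        return "no adjustment"                                  # step S42
    if level >= 100:        # third section (149-100): object image likely too dark
        return "increase fill light intensity or gain value"    # step S43
    return "no adjustment"  # fourth section (99-0): object image probably absent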
Fig. 4 is a flowchart illustrating an embodiment of the image capturing method after step S40, and Fig. 5 is a schematic diagram of an embodiment of a driving image. Referring to fig. 1 to 5, in an embodiment of the image capturing method, after step S40, the processing unit 120 may further perform a frequency domain conversion on the driving image F1 to obtain the frequency spectrum of the driving image F1 (step S51). Then, the processing unit 120 may detect a frequency domain position in the frequency spectrum (step S52), and fine-tune the gain value of the image capturing unit 110 or the fill-in intensity of the fill-in unit 130, as well as the shutter speed of the image capturing unit 110, according to whether a signal appears at the frequency domain position (step S53), so as to further optimize the quality of the image captured by the image capturing device 100 for a vehicle.
In an embodiment of step S51, the frequency domain transformation may be implemented by a Fourier transform.
In one embodiment of step S52, the object image M1 may include a plurality of character images W1. The processing unit 120 may set a straight line L1 passing through the driving image F1 to obtain a frequency domain position according to the number of pixels of the straight line L1 passing through the driving image F1 and the number of pixels of the character image W1 in the same direction as the straight line L1. In some implementations, the frequency-domain location is a high-frequency location in the frequency spectrum.
Hereinafter, a driving image F1 with an image format of 1280 × 720 will be described as an example. When the image format of the driving image F1 is 1280 × 720, it indicates that the driving image F1 has 1280 pixels on the horizontal axis (i.e., X axis) and 720 pixels on the vertical axis (i.e., Y axis), and the driving image F1 is composed of 1280 × 720 pixels. When the processing unit 120 sets the straight line L1 along the horizontal axis of the driving image F1, the number of pixels that the straight line L1 passes through on the driving image F1 should be 1280. In addition, the number of pixels of the character image W1 on the straight line L1 is the number of pixels required for the character image W1 to be recognized, for example, 3 to 10. Here, 3 pixels are taken as an example. The processing unit 120 may accordingly obtain the frequency domain position to be detected as (3/1280). However, the present invention is not limited thereto, and the straight line L1 may be disposed along the longitudinal axis of the driving image F1 or in other suitable directions to pass through the driving image F1.
Fig. 6 is a flowchart illustrating an embodiment of step S53 in fig. 4. Referring to fig. 1 to 6, in an embodiment of step S53, when the processing unit 120 detects in step S52 that a signal appears at the frequency domain position, the processing unit 120 may determine that the driving image F1 has sufficient sharpness and does not adjust the gain value of the image capturing unit 110, the shutter speed, or the fill-in light intensity of the fill-in light unit 130 (step S53A). When the processing unit 120 does not detect a signal at the frequency domain position in step S52, the processing unit 120 may determine that the driving image F1 does not have sufficient sharpness, for example due to blur, and may increase the shutter speed of the image capturing unit 110 and fine-tune one of the gain value of the image capturing unit 110 and the fill-in light intensity of the fill-in light unit 130 (step S53B), so that the driving image F1 captured by the image capturing unit 110 after the fine tuning has sufficient brightness and sharpness.
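A sketch of this frequency-domain check, assuming the pixel values along the straight line L1 are available as a 1-D NumPy array; the DC removal, the normalization against the spectral maximum, and the 5% threshold are illustrative assumptions rather than values from the patent.

```python
import numpy as np

def signal_present_at_frequency_position(line_pixels: np.ndarray,
                                         stroke_pixels: int = 3,
                                         threshold: float = 0.05) -> bool:
    """Return True when the spectrum of the line contains a signal at the frequency
    domain position stroke_pixels / len(line_pixels), e.g. (3/1280) in the example."""
    samples = line_pixels.astype(float) - float(np.mean(line_pixels))  # remove the DC term
    spectrum = np.abs(np.fft.rfft(samples))
    target_bin = stroke_pixels               # bin k of an N-point FFT sits at frequency k/N
    reference = spectrum.max() if spectrum.max() > 0 else 1.0
    return spectrum[target_bin] / reference >= threshold
```

When this check returns True, no adjustment is made (step S53A); when it returns False, the shutter speed is increased and one of the gain value and the fill-in light intensity is fine-tuned (step S53B).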
In some embodiments, the processing unit 120 may repeat the fine adjustment through repeated execution of steps S51 to S53, and the fine adjustment of steps S51 to S53 does not stop until the processing unit 120 determines that the driving image F1 has a sufficiently high spectral response.
In summary, in the execution of the steps S51 to S53, the processing unit 120 determines whether the driving image F1 lacks high frequency signals through frequency domain conversion to determine the quality of the driving image F1, so as to perform fast feedback and corresponding fine adjustment, thereby obtaining the driving image F1 with better quality more quickly.
Fig. 7 is a flowchart illustrating an embodiment of the image capturing method after step S40, and Fig. 8 is a schematic diagram illustrating an embodiment of an object image and a brightness distribution thereof. Referring to fig. 7 and 8, in an embodiment of the image capturing method, after step S40, the processing unit 120 may further extract the object image M1 from the driving image F1 (step S54) and convert the luminance distribution of the pixels on a straight line L2 passing through the object image M1 (step S55). Then, the processing unit 120 can fine-tune the gain value of the image capturing unit 110, the fill-in intensity of the fill-in unit 130, or the shutter speed of the image capturing unit 110 according to the waveform of the brightness distribution (step S56), so as to further optimize the quality of the image captured by the image capturing apparatus 100 for vehicle through these fine-tuning actions.
In some embodiments, the processing unit 120 may repeatedly perform the fine adjustment through repeated execution of steps S54 to S56, and the fine adjustment of steps S54 to S56 is not stopped until the processing unit 120 determines that the driving image F1 has sufficient image quality.
In one embodiment of step S54, the processing unit 120 may extract the object image M1 from the driving image F1 by an image processing technique, such as image segmentation.
In an embodiment of step S55, the processing unit 120 may set a straight line L2 passing through the object image M1 to convert the luminance distribution of luminance versus position according to each pixel passing through the straight line L2 and its position. In some embodiments, the processing unit 120 arranges the line L2 along the horizontal axis of the object image M1, but the invention is not limited thereto, and the line L2 may also be arranged along the vertical axis of the object image M1 or other suitable directions to pass through the object image M1. In addition, the object image M1 may include a plurality of character images W1, and the line L2 may pass through the character images W1.
Fig. 9 is a flowchart illustrating an embodiment of step S56 in fig. 7. Referring to fig. 7 to 9, in an embodiment of step S56, the processing unit 120 performs a fine adjustment according to the peak-to-peak value Vpp of the waveform in the luminance distribution converted in step S55. The peak-to-peak value Vpp is the difference between a peak Vc and a valley Vt of the waveform in the luminance distribution. Accordingly, the processing unit 120 may compare the peak-to-peak value Vpp of the waveform with a preset difference value (step S561). When the peak-to-peak value Vpp is greater than or equal to the preset difference, the processing unit 120 may determine that the contrast of the object image M1 is sufficient and does not adjust the gain of the image capturing unit 110, the fill-in intensity of the fill-in unit 130, or the shutter speed of the image capturing unit 110 (step S562). When the peak-to-peak value Vpp is smaller than the preset difference, the processing unit 120 determines that the contrast of the object image M1 is insufficient and causes the fill-in unit 130 to increase the fill-in intensity or causes the image capturing unit 110 to increase the gain (step S563), so that the contrast of the object image M1 in the fine-tuned driving image F1 can be increased.
In some implementations, the luminance in the luminance distribution may have a gray scale level as its unit. In addition, the predetermined difference may be between 90 gray scale levels and 110 gray scale levels. For example, the predetermined difference may be 100 gray levels, but the invention is not limited thereto.
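A sketch of this peak-to-peak comparison, assuming the object image M1 has been cropped as an 8-bit grayscale array and that the straight line L2 is taken as the middle row of the crop; the default preset difference of 100 gray levels is the example value mentioned above, and both the row choice and the function name are illustrative assumptions.

```python
import numpy as np

def peak_to_peak_check(object_image: np.ndarray, preset_difference: int = 100) -> str:
    """Compare the peak-to-peak value Vpp of the luminance distribution along a
    horizontal line through the object image with the preset difference value."""
    line_l2 = object_image[object_image.shape[0] // 2, :].astype(int)  # middle row as line L2
    vpp = int(line_l2.max() - line_l2.min())       # Vpp = peak Vc minus valley Vt
    if vpp >= preset_difference:
        return "contrast sufficient: no adjustment"                    # step S562
    return "increase fill light intensity or gain value"               # step S563
```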
Fig. 10 is a flowchart illustrating an embodiment of step S56 in fig. 7. Referring to fig. 10, in an embodiment of step S56, in addition to the peak-to-peak value Vpp, the processing unit 120 may perform fine tuning according to the magnitude of the peak value. In one embodiment, after the processing unit 120 performs step S561 and determines that the peak-to-peak value Vpp is greater than or equal to the preset difference, the processing unit 120 compares the peak value of the waveform with a preset peak value (step S564). When the comparison result of step S564 is that the peak value is greater than or equal to the preset peak value, the brightness of the object image M1 is not too dark, and the processing unit 120 continues to execute step S562 so as not to perform adjustment. Conversely, when the comparison result of step S564 shows that the peak value is smaller than the preset peak value, the brightness of the object image M1 may be too dark, and the processing unit 120 may execute step S563 to increase the brightness of the object image M1, but the invention is not limited thereto.

Fig. 11 is a flowchart illustrating an embodiment of step S56 in fig. 7. Referring to fig. 11, in another embodiment, the processing unit 120 can also perform step S564 before performing step S561. Then, when the comparison result of step S564 is that the peak value is greater than or equal to the preset peak value, the processing unit 120 continues to perform the comparison of the peak-to-peak value Vpp of step S561, and selects to continue to perform step S562 or step S563 according to the comparison result of step S561. If the comparison result of step S564 is that the peak value is smaller than the preset peak value, the processing unit 120 may select to execute step S563.
Fig. 12 is a flowchart illustrating an embodiment of step S56 in fig. 7. Referring to fig. 12, in an embodiment of step S56, in addition to the peak-to-peak value Vpp, the processing unit 120 may also perform fine tuning according to the magnitude of the valley value. In one embodiment, after the processing unit 120 performs step S561 and determines that the peak-to-peak value Vpp is greater than or equal to the preset difference, the processing unit 120 compares the valley value of the waveform with a preset valley value (step S565). When the comparison result of step S565 is that the valley value is smaller than or equal to the preset valley value, the brightness of the object image M1 is not too bright, and the processing unit 120 continues to perform step S562 so as not to perform adjustment. Conversely, when the comparison result of step S565 is that the valley value is larger than the preset valley value, the brightness of the object image M1 may be too bright, and the processing unit 120 may cause the fill-in light unit 130 to decrease the fill-in light intensity or cause the image capturing unit 110 to decrease the gain value (step S566), but the invention is not limited thereto.

Fig. 13 is a flowchart illustrating an embodiment of step S56 in fig. 7. Referring to fig. 13, in another embodiment, the processing unit 120 can also perform step S565 before performing step S561. Thereafter, when the comparison result of step S565 is that the valley value is smaller than or equal to the preset valley value, the processing unit 120 continues to perform the comparison of the peak-to-peak value Vpp of step S561, and selects to continue to perform step S562 or step S563 according to the comparison result of step S561. When the comparison result of step S565 is that the valley value is greater than the preset valley value, the processing unit 120 may select to perform step S566.
In some embodiments, the predetermined peak value may be between the gray scale level 120 and the gray scale level 140. In addition, the predetermined valley value may be between the gray scale level 120 and the gray scale level 140. In some embodiments, the predetermined peak value may be equal to the predetermined valley value. For example, the predetermined peak value and the predetermined valley value may be the gray scale 130, but the invention is not limited thereto.
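Extending the same sketch with the peak and valley comparisons of steps S564 to S566, using the example preset peak and valley value of gray level 130; whether these checks run before or after the Vpp comparison follows the Fig. 10 to Fig. 13 variants above, so the ordering shown here is only one of the described possibilities.

```python
import numpy as np

def peak_and_valley_checks(line_l2: np.ndarray,
                           preset_peak: int = 130,
                           preset_valley: int = 130) -> str:
    """Peak/valley comparisons on the same luminance profile (steps S564-S566)."""
    peak_value = int(line_l2.max())
    valley_value = int(line_l2.min())
    if peak_value < preset_peak:        # object image may be too dark
        return "increase fill light intensity or gain value"   # step S563
    if valley_value > preset_valley:    # object image may be too bright
        return "decrease fill light intensity or gain value"   # step S566
    return "no adjustment"              # step S562 (subject to the Vpp check above)
```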
Fig. 14 is a flowchart illustrating an embodiment of step S56 in fig. 7, fig. 15 is a schematic diagram illustrating an embodiment of an object image and a brightness distribution thereof, and fig. 16 is a schematic diagram illustrating an embodiment of an object image and a brightness distribution thereof. Referring to fig. 7 and 14 to 16, in an embodiment of step S56, the processing unit 120 can also perform a corresponding fine adjustment according to the gray-scale pixel number of each tangent line Lt of the waveform in the luminance distribution converted in step S55. The gray-scale pixel number of a tangent line Lt is the transition slope of the waveform when it transitions from a peak Vc to a valley Vt or from a valley Vt to a peak Vc. In some embodiments, the luminance in the luminance distribution may have the gray scale level as its unit, and the unit of the gray-scale pixel number of a tangent line may be: gray scale levels per number of pixels.
The processing unit 120 may compare the number of gray-scale pixels of each tangent line with a preset number of gray-scale pixels (step S567). When the number of gray-scale pixels of each tangent line falls within the predetermined number of gray-scale pixels, it indicates that the sharpness of the object image M1 is sufficient, and the processing unit 120 does not adjust the gain of the image capturing unit 110, the fill-in intensity of the fill-in unit 130, and the shutter speed of the image capturing unit 110 (step S568). When the number of gray-scale pixels of any tangent line exceeds the number of preset gray-scale pixels, it indicates that the sharpness of the object image M1 is not sufficient, and the processing unit 120 may cause the image capturing unit 110 to increase its shutter speed (step S569), so as to increase the sharpness of the object image M1 in the driving image F1 captured after the fine adjustment.
In some embodiments, the predetermined number of gray-scale pixels may be an interval of values. For example, 0 to 2 (gray scale level/number of pixels), but the invention is not limited thereto.
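A literal sketch of this tangent check, using the example preset range of 0 to 2. How the tangent lines are actually measured is not spelled out in the text, so treating the per-pixel gray-level change along the line as the transition slope of each tangent is only one plausible reading.

```python
import numpy as np

def tangent_slope_check(line_l2: np.ndarray, preset_range=(0.0, 2.0)) -> str:
    """Compare the gray-scale pixel number of every tangent of the waveform with
    the preset range; any value outside it triggers a faster shutter (step S569)."""
    slopes = np.abs(np.diff(line_l2.astype(float)))   # gray-level change per pixel step
    low, high = preset_range
    if np.all((slopes >= low) & (slopes <= high)):
        return "sharpness sufficient: no adjustment"  # step S568
    return "increase shutter speed"                   # step S569
```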
In summary, in the execution of the steps S54 to S56, the processing unit 120 determines the quality of the object image M1 in the driving image F1 according to the waveform of the brightness distribution thereof, so as to quickly feed back and correspondingly fine-tune the object image M1, thereby obtaining the driving image F1 with better quality more quickly.
In some embodiments, before the step S20, the processing unit 120 may set the shutter speed of the image capturing unit 110 so that the driving image F1 captured by the image capturing unit 110 is not blurred. Then, the processing unit 120 finds out the currently suitable fill-in light intensity or gain value in the execution of steps S10 to S40 of the image capturing method so that the driving image F1 captured by the image capturing unit 110 has a suitable brightness. Finally, based on the proper shutter speed and fill-in light intensity or gain value, the processing unit 120 can further enhance the detail representation capability of the driving image F1 through the fine adjustment operations of steps S51 to S53 or steps S54 to S56 of the image capturing method.
In some embodiments, the product of the shutter speed and the gain of the image capturing unit 110 and the fill-in light intensity of the fill-in light unit 130 is equal before and after the fine tuning operation of step S53 (or step S56). For example, when the processing unit 120 changes the shutter speed to 1/2, the processing unit 120 changes the gain value or fill-in light intensity to 2 times the original value, so that the product of the shutter speed, the gain value and the fill-in light intensity can be equal before and after the fine adjustment.
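A sketch of how that product can be held constant across a fine-tuning step, treating the shutter term simply as a multiplicative exposure factor as in the example above; which of the gain value or the fill-in light intensity absorbs the change is a design choice, and here the gain does purely for illustration.

```python
def rebalance_exposure(shutter: float, gain: float, fill_light: float,
                       new_shutter: float):
    """Keep the product shutter * gain * fill_light equal before and after the
    fine adjustment: e.g. a shutter factor changed to 1/2 doubles the gain."""
    new_gain = gain * (shutter / new_shutter)
    return new_shutter, new_gain, fill_light
```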
In some embodiments, the image capturing device 100 for a vehicle can be applied to a police surveillance system. For example, the image capturing device 100 for a vehicle can be installed on a police car. The image capturing device 100 for the vehicle can be electrically connected to an internal system of the police vehicle, and the internal system can upload the captured driving image F1 to a background system, so that the background system can perform post-processing, image recognition, and the like on the driving image F1, thereby assisting the police to quickly record and identify license plates, vehicle models, and the like. The object image M1 in the driving image F1 may be an image of a license plate or an image of a vehicle body. In addition, the character image W1 can be an image of a number, a character, or the like.
In summary, in the image capturing apparatus for a vehicle and the image capturing method of the embodiment of the invention, the fill-in light intensity or the gain value is adjusted according to the gray scale number distribution of the driving image, so as to obtain the driving image with proper brightness. In addition, the shutter speed, the fill-in light intensity or the gain value can be finely adjusted through the frequency spectrum of the driving image or the brightness distribution of the object image, so that the driving image with better detail expression capability can be obtained. Moreover, the quality of the driving image can be confirmed without waiting for the feedback of the background system and the fine adjustment can be carried out in real time, so that the driving image with better quality can be obtained more quickly.
Although the present invention has been described with reference to the preferred embodiments, it should be understood that various changes and modifications can be made without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (20)

1. An image capturing method, comprising:
capturing a driving image by using an image capturing unit;
obtaining gray scale quantity distribution of pixels of the driving image on a plurality of gray scale levels;
numbering the pixels of the driving image in sequence from the highest gray level to the lowest gray level of the plurality of gray levels according to the gray level number distribution until the pixels are numbered to a preset number; and
adjusting a fill-in light intensity of a fill-in light unit or a gain value of the image capture unit according to a gray scale level of the pixel with the preset number, wherein the plurality of gray scale levels sequentially form a first gray scale section, a second gray scale section, a third gray scale section and a fourth gray scale section from the highest gray scale level to the lowest gray scale level, and the step of adjusting the fill-in light intensity or the gain value comprises:
when the gray scale level is in the first gray scale section, reducing the fill-in light intensity or the gain value;
when the gray scale level is in the second gray scale section or the fourth gray scale section, the fill light intensity or the gain value is not adjusted; and
when the gray scale level is in the third gray scale section, the fill light intensity or the gain value is increased.
2. The image capturing method of claim 1, wherein the step of adjusting the fill-in light intensity or the gain value further comprises:
converting a frequency spectrum of the driving image;
detecting a frequency domain position in the frequency spectrum; and
fine-tuning a shutter speed of the image capturing unit according to whether a signal appears at the frequency domain position, and fine-tuning the gain value or the fill-in light intensity.
3. The method of claim 2, wherein the step of fine-tuning the shutter speed and the gain or fill-in intensity comprises:
when a signal appears at the frequency domain position, the shutter speed, the gain value and the fill-in light intensity are not adjusted; and
when no signal appears at the frequency domain position, the shutter speed is increased and the fill-in light intensity or the gain value is reduced.
4. The image capturing method of claim 2, wherein the frequency domain position is obtained according to the number of pixels on a line of the driving image and the number of pixels of a character image in the same direction as the line.
5. The image capturing method of claim 1, wherein the step of adjusting the fill-in light intensity or the gain value further comprises:
extracting an object image from the driving image;
converting a brightness distribution of pixels on a line passing through the object image; and
finely adjusting a shutter speed of the image capture unit, the gain value, or the fill-in light intensity according to the waveform of the brightness distribution.
6. The method of claim 5, wherein the step of fine-tuning the shutter speed, the gain value or the fill-in light intensity comprises:
comparing the peak-to-peak value of the waveform with a preset difference value;
when the peak-to-peak value is greater than or equal to the preset difference value, the gain value, the light supplement intensity and the shutter speed are not adjusted; and
when the peak-to-peak value is smaller than the preset difference value, the fill light intensity or the gain value is increased.
7. The method of claim 5, wherein the step of fine-tuning the shutter speed, the gain value or the fill-in light intensity comprises:
comparing the peak-to-peak value of the waveform with a preset difference value;
when the peak-to-peak value is larger than or equal to the preset difference value, comparing the wave peak value of the waveform with a preset peak value;
when the wave peak value is larger than or equal to the preset peak value, the gain value, the light supplement intensity and the shutter speed are not adjusted; and
when the wave peak value is smaller than the preset peak value, the light supplement intensity or the gain value is increased.
8. The method of claim 5, wherein the step of fine-tuning the shutter speed, the gain value or the fill-in light intensity comprises:
comparing the peak-to-peak value of the waveform with a preset difference value;
when the peak-to-peak value is greater than or equal to the preset difference value, comparing the trough value of the waveform with a preset trough value;
when the trough value is larger than the preset trough value, reducing the light supplement intensity or the gain value; and
when the trough value is less than or equal to the preset trough value, the gain value, the fill light intensity and the shutter speed are not adjusted.
9. The method of claim 5, wherein the step of fine-tuning the shutter speed, the gain value or the fill-in light intensity comprises:
when the gray-scale pixel number of each tangent line of the waveform is within a preset gray-scale pixel number, the shutter speed, the gain value and the fill light intensity are not adjusted; and
when the gray-scale pixel number of any tangent line of the waveform exceeds the preset gray-scale pixel number, the shutter speed is increased.
10. The image capturing method of any one of claims 2 to 9, wherein a product of the shutter speed, the gain value and the fill-in light intensity is equal before and after the fine-tuning.
11. An image capturing device for a vehicle, comprising:
an image capturing unit for capturing a driving image;
a light supplement unit for supplementing light; and
a processing unit for obtaining a gray scale number distribution of pixels of the driving image on a plurality of gray scale levels, numbering the pixels of the driving image in sequence from the highest gray scale level to the lowest gray scale level of the plurality of gray scale levels until a preset number is reached according to the gray scale number distribution, and adjusting a fill light intensity of the fill light unit or a gain value of the image capture unit according to the gray scale level of the pixel with the preset number, wherein the plurality of gray scale levels sequentially form a first gray scale section, a second gray scale section, a third gray scale section and a fourth gray scale section from the highest gray scale level to the lowest gray scale level, when the gray scale level is in the first gray scale section, the processing unit reduces the fill light intensity or the gain value, when the gray scale level is in the second gray scale section or the fourth gray scale section, the processing unit does not adjust the fill-in light intensity or the gain value, and when the gray scale level is in the third gray scale section, the processing unit increases the fill-in light intensity or the gain value.
12. The image capturing apparatus as claimed in claim 11, wherein after adjusting the fill-in light intensity or the gain value, the processing unit further transforms a frequency spectrum of the driving image and detects a frequency domain position in the frequency spectrum, and the processing unit further fine-tunes a shutter speed of the image capturing unit and the gain value or the fill-in light intensity according to whether a signal is present at the frequency domain position.
13. The image capturing apparatus as claimed in claim 12, wherein the processing unit does not adjust the shutter speed, the gain value and the fill-in light intensity when a signal is present at the frequency domain position, and increases the shutter speed and decreases the fill-in light intensity or the gain value when no signal is present at the frequency domain position.
14. The image capturing apparatus for vehicle as claimed in claim 12, wherein the processing unit obtains the frequency domain position according to the number of pixels on a line passing through the driving image and the number of pixels of a character image in the same direction as the line.
15. The image capturing apparatus for vehicle as claimed in claim 11, wherein after adjusting the fill-in light intensity or the gain value, the processing unit further extracts the object image from the driving image, converts the luminance distribution of pixels on a straight line passing through the object image, and finely adjusts a shutter speed, the gain value and/or the fill-in light intensity of the image capturing unit according to the waveform of the luminance distribution.
16. The image capturing apparatus for vehicle as claimed in claim 15, wherein the step of fine-tuning the gain, the fill-in light intensity or the shutter speed according to the brightness distribution comprises: comparing the peak-to-peak value of the waveform with a preset difference value; when the peak-to-peak value is greater than or equal to the preset difference value, the shutter speed, the gain value and the light supplement intensity are not adjusted; and when the peak-to-peak value is smaller than the preset difference value, the fill light intensity or the gain value is increased.
17. The image capturing apparatus for vehicle as claimed in claim 15, wherein the step of fine-tuning the gain, the fill-in light intensity or the shutter speed according to the brightness distribution comprises: comparing the peak-to-peak value of the waveform with a preset difference value; when the peak-to-peak value is larger than or equal to the preset difference value, comparing the wave peak value of the waveform with a preset peak value; when the wave peak value is larger than or equal to the preset peak value, the gain value, the light supplement intensity and the shutter speed are not adjusted; and when the wave peak value is smaller than the preset peak value, the light supplement intensity or the gain value is improved.
18. The image capturing apparatus for vehicle as claimed in claim 15, wherein the step of fine-tuning the gain value, the fill-in light intensity or the shutter speed according to the waveform of the luminance distribution further comprises: comparing the peak-to-peak value of the waveform with a preset difference value; when the peak-to-peak value is greater than or equal to the preset difference value, comparing the trough value of the waveform with a preset trough value; when the trough value is larger than the preset trough value, reducing the light supplement intensity or the gain value; and when the trough value is less than or equal to the preset trough value, the gain value, the fill light intensity and the shutter speed are not adjusted.
19. The image capturing apparatus as claimed in claim 15, wherein the processing unit does not adjust the shutter speed, the gain value and the fill-in light intensity when the number of gray-scale pixels of each tangent line of the waveform falls within a predetermined number of gray-scale pixels, and the processing unit increases the shutter speed when the number of gray-scale pixels of any one tangent line of the waveform exceeds the predetermined number of gray-scale pixels.
20. The image capturing apparatus for vehicle as claimed in any one of claims 12 to 19, wherein the product of the shutter speed, the gain value and the fill-in light intensity is equal before and after the fine-tuning.
CN201810512757.7A 2018-05-25 2018-05-25 Image capturing device for vehicle and image capturing method Active CN110536071B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810512757.7A CN110536071B (en) 2018-05-25 2018-05-25 Image capturing device for vehicle and image capturing method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810512757.7A CN110536071B (en) 2018-05-25 2018-05-25 Image capturing device for vehicle and image capturing method

Publications (2)

Publication Number Publication Date
CN110536071A CN110536071A (en) 2019-12-03
CN110536071B true CN110536071B (en) 2021-05-11

Family

ID=68656718

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810512757.7A Active CN110536071B (en) 2018-05-25 2018-05-25 Image capturing device for vehicle and image capturing method

Country Status (1)

Country Link
CN (1) CN110536071B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101206322A (en) * 2006-12-22 2008-06-25 奇美电子股份有限公司 Method and device for adjusting pixel gray level value of LCD
CN101931749A (en) * 2009-06-25 2010-12-29 原相科技股份有限公司 Shooting parameter adjustment method for face detection and image capturing device for face detection
CN102209200A (en) * 2010-03-31 2011-10-05 比亚迪股份有限公司 Automatic exposure control method
CN103873786A (en) * 2012-12-17 2014-06-18 原相科技股份有限公司 Image adjustment method and optical navigator using same
CN105898143A (en) * 2016-04-27 2016-08-24 维沃移动通信有限公司 Moving object snapshotting method and mobile terminal

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR100677332B1 (en) * 2004-07-06 2007-02-02 엘지전자 주식회사 A method and a apparatus of improving image quality on low illumination for mobile phone
JP5680573B2 (en) * 2012-01-18 2015-03-04 富士重工業株式会社 Vehicle driving environment recognition device
JP2016076869A (en) * 2014-10-08 2016-05-12 オリンパス株式会社 Imaging apparatus, imaging method and program

Also Published As

Publication number Publication date
CN110536071A (en) 2019-12-03

Similar Documents

Publication Publication Date Title
JP5071198B2 (en) Signal recognition device, signal recognition method, and signal recognition program
US9131141B2 (en) Image sensor with integrated region of interest calculation for iris capture, autofocus, and gain control
JP4985394B2 (en) Image processing apparatus and method, program, and recording medium
CN108860045B (en) Driving support method, driving support device, and storage medium
US9800881B2 (en) Image processing apparatus and image processing method
KR101845943B1 (en) A system and method for recognizing number plates on multi-lane using one camera
CN113766143B (en) Light detection chip, image processing device and operation method thereof
US10129458B2 (en) Method and system for dynamically adjusting parameters of camera settings for image enhancement
KR101218302B1 (en) Method for location estimation of vehicle number plate
US11736807B2 (en) Vehicular image pickup device and image capturing method
US20130287254A1 (en) Method and Device for Detecting an Object in an Image
TW202022807A (en) Adjustable receiver exposure times for active depth sensing systems
CN110536071B (en) Image capturing device for vehicle and image capturing method
JP6375911B2 (en) Curve mirror detector
JP2020071809A (en) Image processing device and image processing method
CN110536073B (en) Image capturing device for vehicle and image capturing method
CN110611772B (en) Image capturing device for vehicle and exposure parameter setting method thereof
CN110536063B (en) Image capturing device for vehicle and image capturing method
US10710515B2 (en) In-vehicle camera device and method for selecting driving image
US10516831B1 (en) Vehicular image pickup device and image capturing method
JP2018072884A (en) Information processing device, information processing method and program
CN113111883A (en) License plate detection method, electronic equipment and storage medium
US20200021730A1 (en) Vehicular image pickup device and image capturing method
JPH05199443A (en) Focused position detecting device for electronic camera
CN110875999B (en) Vehicle image capturing device and method for screening driving images

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant