CN114025102A - Method for improving identification accuracy and environmental adaptability of ADAS (Advanced Driver Assistance System) - Google Patents
- Publication number
- CN114025102A (application CN202111152162.3A)
- Authority
- CN
- China
- Prior art keywords
- exposure
- image
- adas system
- adas
- area
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/70—Circuitry for compensating brightness variation in the scene
- H04N23/76—Circuitry for compensating brightness variation in the scene by influencing the image signals
Abstract
The invention discloses a method for improving the identification accuracy and environmental adaptability of an ADAS system. The image collected by the camera is partitioned using the road vanishing point as the boundary point, the brightness information of each area is analyzed, a higher weight is given to the ROI area in which target objects are detected while the weights of the area above the image and the unimportant areas at its sides are reduced, and the exposure required for the next frame is calculated so that the detection area obtains a better exposure. The invention improves the accuracy and precision with which the ADAS system recognizes vehicles, pedestrians and lane lines, and improves the adaptability of the system in special light environments such as backlight, overcast and rainy weather, and tunnels.
Description
Technical Field
The invention relates to a method for improving the performance of an Advanced Driver Assistance System (ADAS), and in particular to a method for improving the identification accuracy and environmental adaptability of an ADAS system.
Background
The forward-looking camera module is one of the most important sensors of an intelligent driving system. The quality of the image it acquires is directly determined by the exposure amount, so the module's exposure strategy and algorithm directly affect how well the system recognizes target objects.
Most camera modules for ADAS currently on the market use an average exposure method: the brightness of all pixels in the picture is accumulated and averaged, and the exposure of the next frame is adjusted according to that average. The drawback is that the ROI used for detection may not receive the optimum exposure; under some special light conditions, the strong contrast between the brightness of different image regions can leave the ROI over-exposed or under-exposed. For example, under strong backlight the gray values of the sky at the top of the picture are very high, so the average method computes a small exposure for the next frame. The ROI below, where illumination is already low, then becomes severely under-exposed, which impairs the system's detection of target objects.
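The failure mode described above can be sketched numerically. This is an illustrative example only (the frame values, target brightness of 128 and proportional control step are assumptions, not from the patent): a bright sky pulls the global average up, so an average-exposure controller reduces exposure and darkens the already-dim road ROI.

```python
import numpy as np

# Hypothetical 8-bit backlit frame: bright sky in the top half, dim road below.
frame = np.empty((480, 640), dtype=np.uint8)
frame[:240, :] = 230   # strongly backlit sky
frame[240:, :] = 40    # dim road, i.e. the detection ROI

global_mean = frame.mean()        # the average-exposure statistic
roi_mean = frame[240:, :].mean()  # the brightness the detector actually cares about

# A simple proportional controller aiming at a mid-gray target of 128:
# the sky drags the global mean above target, so the exposure for the
# next frame is scaled down, under-exposing the ROI even further.
target = 128.0
exposure_scale = target / global_mean
```

Here `global_mean` is 135 while the ROI sits at only 40, yet `exposure_scale` comes out below 1, i.e. the controller darkens the next frame.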
The invention uses partitioned exposure with the vanishing point as the boundary point and raises the weight of the ROI in the brightness calculation, which largely prevents abnormal brightness values in other areas from distorting the exposure computed for the whole image. The ROI is then exposed moderately, picture quality improves, and the ADAS forward-looking vision system can identify target objects more reliably.
Disclosure of Invention
To solve the problems described in the background art, the invention provides a method for optimizing ADAS recognition performance based on partitioned exposure. The exposure of each frame from the visual forward-looking camera is optimized in a real-time feedback loop: the camera module exposes by partition, the exposure is updated during image acquisition, and the recognition of target objects improves accordingly.
The technical scheme adopted by the invention is as follows:
the method is shown in fig. 2, and the ADAS system is more beneficial to identifying the target object by partitioning the image obtained by the forward-looking camera module of the ADAS system in real time, increasing the exposure calculation weight of the partitioned ROI according to the partitioning result and optimizing the exposure effect of the next frame of image in real time. The optimization processing of the method of the invention involves a small amount of calculation, and all the steps are completed by online real-time processing.
For the partitioning, the image is divided using the road vanishing point as the boundary.
With the vanishing point as the boundary, the image obtained by the forward-looking camera module of the ADAS system is divided into a real road area, roadside areas on either side of it, and a sky area above the real road and roadside areas.
Finally, the exposure driving module of the forward-looking camera module is controlled to expose the next frame according to the computed exposure amount and acquire the next frame; this feedback is processed in real time so that every frame is optimized.
The forward-looking camera module comprises a shutter drive control module and a diaphragm drive control module. The computed exposure amount is automatically converted into a corresponding exposure time and light-intake amount, which are fed to the shutter drive control module and the diaphragm drive control module respectively to control the exposure time and light intake, thereby realizing the required exposure.
In summary, the method of the invention partitions the image collected by the camera using the road vanishing point as the boundary point, analyzes the brightness information of each area, gives a higher weight to the ROI in which target objects are detected, and lowers the weights of the irrelevant areas above and at the sides of the image when computing the exposure required for the next frame, so that the detection area obtains a better exposure.
The invention has the beneficial effects that:
the invention improves the picture quality of the forward-looking camera module in the ADAS system, optimizes the recognition effect of the system on the lane lines of the vehicles and the pedestrians, improves the recognition accuracy and precision of the ADAS system on the vehicles, the pedestrians and the lane lines, and improves the recognition accuracy and the environmental adaptability of the system under the special light conditions of backlight, tunnels, overcast and rainy, night and the like.
Drawings
FIG. 1 is a schematic view of the exposure partitioning;
FIG. 2 is a comparison of images before and after optimization in a tunnel;
FIG. 3 is a comparison of images before and after optimization at night;
FIG. 4 is a flow chart of the whole method;
FIG. 5 is a chart of vehicle identification positive detection rate statistics;
FIG. 6 is a chart of pedestrian identification positive detection rate statistics.
Detailed Description
The invention is described in further detail below with reference to the figures and the embodiments.
An embodiment of the invention is as follows:
First, the photosensitive chip of the camera module obtains an original image; the module's image processing unit (ISP) then partitions the image using the vanishing point of the road as the boundary.
Fig. 1 is a schematic diagram of the exposure partitioning result. The triangular region formed by connecting the vanishing point to the two bottom corners of the picture is the real road region, i.e. the ROI detection region; target objects detected there have the highest priority, and it is denoted region a. The rectangular region above the road vanishing point is region b, the sky region above the real road, in which target objects have the lowest detection priority. The regions on either side of region a are denoted region c; these are the roadside regions, in which target-object detection priority is intermediate.
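The three-way partition of Fig. 1 can be sketched with a labeling function. The exact triangle geometry (vanishing point connected to the two bottom corners) follows the figure description; the function name, grid construction and per-row edge interpolation are illustrative assumptions.

```python
import numpy as np

def partition_by_vanishing_point(h, w, vp):
    """Label each pixel 'a' (real road), 'b' (sky) or 'c' (roadside).

    vp = (vx, vy) in pixel coordinates. Sky is everything above the
    vanishing-point row; the road is the triangle from vp to the two
    bottom corners; the rest is roadside.
    """
    vx, vy = vp
    ys, xs = np.mgrid[0:h, 0:w]
    labels = np.full((h, w), 'c')            # default: roadside (region c)
    labels[ys < vy] = 'b'                    # sky (region b)
    # Road triangle: interpolate the left/right edges row by row,
    # from the vanishing point down to the bottom corners.
    t = (ys - vy) / max(h - 1 - vy, 1)       # 0 at vp row, 1 at bottom row
    left = vx - t * vx
    right = vx + t * (w - 1 - vx)
    road = (ys >= vy) & (xs >= left) & (xs <= right)
    labels[road] = 'a'                       # real road / ROI (region a)
    return labels
```

For a 6x8 image with vanishing point (4, 2), the top two rows come out as sky, the bottom row is entirely road, and the pixels beside the triangle are roadside.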
For each region, let B be its average brightness, N the total number of pixels in the region, and V_i the gray value of the i-th pixel:
B = (1/N) * Σ_{i=1}^{N} V_i
Brightness calculation weight coefficients K are then set, and the brightness mean B_I of the whole image is obtained as:
B_I = K_a·B_a + K_b·B_b + K_c·B_c, with K_a + K_b + K_c = 1
where K_a, K_b and K_c are the brightness weights of the real road region (a), the sky region (b) and the roadside region (c), and B_a, B_b and B_c are the corresponding average brightness values of those regions.
A number of experiments determined that setting the weight coefficients with K_a > K_c > K_b as the basic condition raises the brightness weight of the ROI, i.e. region a, the real road region.
Fig. 2 and fig. 3 compare the effect before (left) and after (right) optimization by the method in a tunnel and at night, respectively. Owing to interference from the strong light outside the tunnel, or from strong light on both sides of the road at night, the exposure computed by the traditional method is low and the target objects in the central road detection area are dark. Partitioning the image at the vanishing point and raising the exposure calculation weight of the detection area resolves this: the target objects in the optimized images are visibly clearer and the exposure is more appropriate.
Specifically, as shown in fig. 4, the SENSOR chip of the camera module obtains an original image and passes it to the ISP. The module's image processing unit (ISP) obtains the vanishing-point data from the vision main control MCU and partitions the image, reads the configured weight coefficient of each region from the module's internal register chip (the coefficients may alternatively be fixed in software), and computes the brightness mean of the whole image. The exposure amount of the next frame is determined from the exposure parameter look-up table according to this brightness mean; the ISP then controls the module's aperture and shutter to adjust the exposure and acquire the next frame. The vision main control MCU thus receives optimized image data, and target-object recognition improves.
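The table-lookup step of this loop can be sketched as below. The table entries and the bracket boundaries are invented for illustration; the patent only states that the next frame's exposure is read from an exposure parameter table indexed by the weighted brightness mean (a dark mean selecting a larger exposure, a bright mean a smaller one).

```python
# Illustrative exposure parameter table: (upper brightness bound, exposure value).
# Darker scenes (low weighted mean) map to larger exposure amounts.
EXPOSURE_TABLE = [
    (40, 8.0),
    (80, 4.0),
    (120, 2.0),
    (160, 1.0),
    (255, 0.5),
]

def next_exposure(brightness_mean):
    """Pick the next frame's exposure from the look-up table by bracket."""
    for upper, exposure in EXPOSURE_TABLE:
        if brightness_mean <= upper:
            return exposure
    return EXPOSURE_TABLE[-1][1]
```

A weighted mean of 30 (dark tunnel interior) selects the largest exposure, while an over-bright mean of 300 falls through to the smallest.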
Fig. 5 and fig. 6 show statistics of the positive detection rate (i.e. number of objects the system identifies / number of objects actually present) for vehicles and pedestrians, respectively, collected by the ADAS system on the same route (covering urban, rural, highway, tunnel and other road types) under different weather conditions. After optimization by the method, the system's positive detection rate on target objects is markedly higher.
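The metric reported in these figures follows directly from its definition in the text; a minimal helper (the zero-denominator guard is an added assumption):

```python
def positive_detection_rate(identified, actual):
    """Positive detection rate = objects the system identifies / objects
    actually present. Returns 0.0 when no objects are present (assumption)."""
    return identified / actual if actual else 0.0
```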
Claims (7)
1. A method for improving the identification accuracy and environmental adaptability of an ADAS system, characterized in that: the image obtained by the forward-looking camera module of the ADAS system is partitioned in real time, the exposure calculation weight of the partitioned ROI is increased according to the partitioning result, and the exposure of the next frame is optimized in real time, so that the ADAS system identifies target objects more easily.
2. The method according to claim 1, characterized in that: in the partitioning, the image is divided using the road vanishing point as the boundary.
3. The method according to claim 2, characterized in that: with the vanishing point as the boundary, the image obtained by the forward-looking camera module of the ADAS system is divided into a real road area, roadside areas on either side of it, and a sky area above the real road and roadside areas.
4. The method according to claim 2, characterized in that: the real road region serves as the ROI region.
5. The method according to claim 1, characterized in that: finally, the exposure driving module of the forward-looking camera module is controlled to expose the next frame according to the exposure amount, and the next frame is acquired.
6. The method according to claim 5, characterized in that: the exposure of the next frame is determined from the exposure table according to the brightness mean.
7. The method according to claim 1 or 5, characterized in that: the forward-looking camera module comprises a shutter drive control module and a diaphragm drive control module; the computed exposure amount is automatically converted into a corresponding exposure time and light-intake amount, which are fed to the shutter drive control module and the diaphragm drive control module respectively to control the exposure time and light intake, thereby realizing the required exposure.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111152162.3A CN114025102A (en) | 2021-09-29 | 2021-09-29 | Method for improving identification accuracy and environmental adaptability of ADAS (Advanced Driver Assistance System) |
PCT/CN2021/133322 WO2023050549A1 (en) | 2021-09-29 | 2021-11-26 | Method for improving identification accuracy and environmental adaptability of adas system |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111152162.3A CN114025102A (en) | 2021-09-29 | 2021-09-29 | Method for improving identification accuracy and environmental adaptability of ADAS (Advanced Driver Assistance System) |
Publications (1)
Publication Number | Publication Date |
---|---|
CN114025102A true CN114025102A (en) | 2022-02-08 |
Family
ID=80055137
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202111152162.3A Pending CN114025102A (en) | 2021-09-29 | 2021-09-29 | Method for improving identification accuracy and environmental adaptability of ADAS (Advanced Driver Assistance System) |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN114025102A (en) |
WO (1) | WO2023050549A1 (en) |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2005148308A (en) * | 2003-11-13 | 2005-06-09 | Denso Corp | Exposure controller for white line detection camera |
CN102806867A (en) * | 2011-06-02 | 2012-12-05 | 株式会社小糸制作所 | Image processing device and light distribution control method |
KR20140056510A (en) * | 2012-10-29 | 2014-05-12 | 주식회사 만도 | Automatic exposure control apparatus and automatic exposure control method |
CN104041022A (en) * | 2012-01-17 | 2014-09-10 | 本田技研工业株式会社 | Image processing device |
Family Cites Families (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP4990806B2 (en) * | 2008-01-22 | 2012-08-01 | 富士重工業株式会社 | Image pickup means adjustment device and vehicle exterior monitoring device |
CN101304489B (en) * | 2008-06-20 | 2010-12-08 | 北京中星微电子有限公司 | Automatic exposure method and apparatus |
CN102629988A (en) * | 2012-03-31 | 2012-08-08 | 博康智能网络科技股份有限公司 | Automatic control method and device of camera head |
CN104320593B (en) * | 2014-11-19 | 2016-02-24 | 湖南国科微电子股份有限公司 | A kind of digital camera automatic exposure control method |
CN106791475B (en) * | 2017-01-23 | 2019-08-27 | 上海兴芯微电子科技有限公司 | Exposure adjustment method and the vehicle mounted imaging apparatus being applicable in |
CN110248112B (en) * | 2019-07-12 | 2021-01-29 | 成都微光集电科技有限公司 | Exposure control method of image sensor |
- 2021-09-29: CN application CN202111152162.3A filed, published as CN114025102A (status: pending)
- 2021-11-26: PCT application PCT/CN2021/133322 filed, published as WO2023050549A1
Also Published As
Publication number | Publication date |
---|---|
WO2023050549A1 (en) | 2023-04-06 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| WD01 | Invention patent application deemed withdrawn after publication | Application publication date: 20220208 |