CN113364990B - Method, controller and system for adapting vehicle camera to shadow area - Google Patents

Method, controller and system for adapting vehicle camera to shadow area

Info

Publication number
CN113364990B
CN113364990B CN202110474818.7A CN202110474818A CN113364990B CN 113364990 B CN113364990 B CN 113364990B CN 202110474818 A CN202110474818 A CN 202110474818A CN 113364990 B CN113364990 B CN 113364990B
Authority
CN
China
Prior art keywords
area
dark
value
camera
illumination intensity
Prior art date
Legal status
Active
Application number
CN202110474818.7A
Other languages
Chinese (zh)
Other versions
CN113364990A (en)
Inventor
林为
段春艳
胡昌吉
陈伟镇
郑铠航
Current Assignee
Foshan Polytechnic
Original Assignee
Foshan Polytechnic
Priority date
Filing date
Publication date
Application filed by Foshan Polytechnic filed Critical Foshan Polytechnic
Priority to CN202110474818.7A priority Critical patent/CN113364990B/en
Publication of CN113364990A publication Critical patent/CN113364990A/en
Application granted granted Critical
Publication of CN113364990B publication Critical patent/CN113364990B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/70Circuitry for compensating brightness variation in the scene
    • H04N23/73Circuitry for compensating brightness variation in the scene by influencing the exposure time
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Traffic Control Systems (AREA)
  • Studio Devices (AREA)

Abstract

The invention discloses a method for adapting a vehicle camera to a shadow area based on a shadow recognition technology, which comprises the following steps: acquiring reference outline information and reference position information of a fixed object according to the three-dimensional map; constructing a shadow map according to the sunlight irradiation direction information, the reference outline information and the reference position information; calculating a target dark area according to the sunlight irradiation direction information, the target outline information and the target position information; updating the shadow map in real time according to the target dark area; and adjusting the exposure parameters of the camera in real time according to the vehicle position information, the bright area illumination intensity value, the dark area illumination intensity value and the updated shadow map. The invention also discloses a controller and a system related to the method. By adopting the invention, the image brightness of the camera can be quickly and effectively adjusted, and the camera is suitable for areas with different illumination brightness.

Description

Method, controller and system for adapting vehicle camera to shadow area
Technical Field
The invention relates to the technical field of automatic driving, in particular to a method, a controller and a system for adapting a vehicle camera to a shadow area based on a shadow recognition technology.
Background
On urban roads, tall buildings stand close together like a forest, and the sunlight striking them leaves shadows on the road, so the illumination on the road varies greatly. For the camera of an autonomous vehicle, these shadow changes can greatly affect image recognition and therefore driving safety.
In digital cameras (video cameras), TTL (through-the-lens) metering is usually used, that is, the signal output by the image sensor is also used to measure the scene illuminance. This approach typically involves three key steps: brightness acquisition, brightness analysis, and exposure adjustment.
In the prior art, at the start of automatic exposure, the brightness information of the current image is acquired. Specifically, each pixel is demosaiced and converted into the YUV color space, in which the Y channel expresses the image brightness. After the brightness of each pixel is obtained, the digital camera accumulates statistics over the pixels by image region, yielding the brightness information of the different regions of the image (i.e., brightness analysis). Once the brightness analysis is completed, the digital camera adjusts the exposure parameters (exposure time and gain, assuming the aperture is fixed) through a certain algorithm according to the difference between the current image brightness and the target brightness, so that the brightness of the next frame approaches and finally reaches the target brightness.
Since the automatic exposure is a feedback adjustment process, the above three steps are repeated for each frame of image until the image brightness reaches the target.
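For orientation, this feedback loop can be sketched as follows. The sketch is illustrative only: the damping factor, the exposure-time limit and the gain limit are assumptions, not taken from any particular camera.

```python
# Illustrative sketch of the conventional feedback auto-exposure loop described above.
# The damping factor and the exposure/gain limits are assumed for the example.

def auto_exposure_step(mean_luma, target_luma, exposure_time, gain,
                       step=0.25, t_max=0.033, g_max=16.0):
    """One feedback iteration: nudge exposure time T and gain G toward the target brightness."""
    if mean_luma <= 0:
        return exposure_time, gain
    ratio = target_luma / mean_luma          # >1 means the current frame is too dark
    correction = 1.0 + step * (ratio - 1.0)  # damped, so brightness converges over several frames
    new_t = exposure_time * correction
    if new_t <= t_max:
        return new_t, gain                   # prefer adjusting exposure time while headroom remains
    # Exposure time saturated: clamp T and push the remaining correction into the gain.
    return t_max, min(gain * new_t / t_max, g_max)
```

Because each step only moves part of the way toward the target, several frames are needed before the brightness converges, which is exactly the slow response criticized below.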
However, this automatic exposure (adaptive) approach has two problems: (1) the response speed is slow; (2) the adjustable brightness range is limited. Timely and effective adjustment therefore cannot be made when transitioning between an excessively bright highlight area and an excessively dark shadow area, especially when switching quickly, which affects the driving safety of automatic driving.
Disclosure of Invention
The invention aims to solve the technical problem of providing a method, a controller and a system for adapting a vehicle camera to a shadow area based on a shadow recognition technology, which can quickly and effectively adjust the image brightness of the camera to adapt to areas with different illumination brightness.
In order to solve the technical problem, the invention provides a method for adapting a vehicle camera to a shadow area based on a shadow recognition technology, which comprises the following steps: acquiring sunlight irradiation direction information; acquiring a three-dimensional map, and acquiring reference outline information and reference position information of a fixed object according to the three-dimensional map; constructing a shadow map according to the sunlight irradiation direction information, the reference outline information and the reference position information, wherein the shadow map is formed by marking a dark area and a bright area on the three-dimensional map; acquiring target outline information and target position information of objects around the vehicle; calculating a target dark area according to the sunlight irradiation direction information, the target outline information and the target position information; updating the shadow map in real time according to the target dark area; acquiring vehicle position information; obtaining a bright area illumination intensity value; obtaining a dark area illumination intensity value; and adjusting the exposure parameters of the camera in real time according to the vehicle position information, the bright area illumination intensity value, the dark area illumination intensity value and the updated shadow map.
As an improvement of the above, the step of acquiring sunlight irradiation direction information includes: acquiring a first direction angle of sunlight through the camera, wherein the first direction angle is an included angle between a projection line of a connecting line of the sunlight to a coordinate origin on an xoy plane and an x axis; and acquiring a second direction angle of the sunlight through the camera, wherein the second direction angle is an included angle between a connecting line from the sunlight to the origin of coordinates and the xoy plane.
As an improvement of the above solution, the reference contour information and the reference position information are given by a coordinate array J composed of reference points Jii, where Jii = (xii, yii, zii), xii is the coordinate of the reference point Jii on the x-axis, yii is its coordinate on the y-axis, zii is its coordinate on the z-axis, and i is a positive integer; the step of constructing the shadow map according to the sunlight irradiation direction information, the reference outline information and the reference position information comprises: according to the formula

Kii = (xii + (zii − za) × cosθ/tanβ, yii + (zii − za) × sinθ/tanβ, za)

calculating the projected point Kii of the reference point Jii on the road surface, the maximum area enclosed by all projected points Kii being the dark area, wherein θ is the first direction angle of the sunlight, β is the second direction angle of the sunlight, and za is the road height value.
As an improvement of the above scheme, the step of adjusting the exposure parameters of the camera in real time according to the vehicle position information, the bright area illumination intensity value, the dark area illumination intensity value and the updated shadow map comprises: setting a bright area target brightness value Y1 and a dark area target brightness value Y2, and obtaining an image brightness transition value Y3 according to the formula Y3 = (Y1 + Y2)/2; when the vehicle enters the dark area from the bright area, adjusting the exposure parameters of the camera according to the bright area illumination intensity value, the dark area illumination intensity value, the bright area target brightness value Y1, the dark area target brightness value Y2 and the image brightness transition value Y3, so that the image brightness of the camera smoothly transitions from the bright area target brightness value Y1 to the image brightness transition value Y3, and then from the image brightness transition value Y3 to the dark area target brightness value Y2; or when the vehicle enters the bright area from the dark area, adjusting the exposure parameters of the camera according to the bright area illumination intensity value, the dark area illumination intensity value, the bright area target brightness value Y1, the dark area target brightness value Y2 and the image brightness transition value Y3, so that the image brightness of the camera smoothly transitions from the dark area target brightness value Y2 to the image brightness transition value Y3, and then from the image brightness transition value Y3 to the bright area target brightness value Y1.
As an improvement of the above scheme, when the vehicle enters the dark area from the bright area, the step of adjusting the exposure parameters of the camera according to the bright area illumination intensity value, the dark area illumination intensity value, the bright area target brightness value Y1, the dark area target brightness value Y2 and the image brightness transition value Y3 comprises: when the vehicle is in the bright area, calculating the bright area exposure time T1 and the bright area gain G1 according to the formula T1 × G1 = Y1/(k × L1), and setting the bright area exposure time T1 and the bright area gain G1 according to their product, so that the image brightness of the camera is stabilized at the bright area target brightness value Y1, wherein L1 is the bright area illumination intensity value, L2 is the dark area illumination intensity value, and k is a constant; when the vehicle is about to enter the dark area, calculating a first transitional exposure time T3 and a first transition gain G3 according to the formula T3 × G3 = Y3/(k × L1), and setting the first transitional exposure time T3 and the first transition gain G3 according to their product, so that the image brightness of the camera is reduced to the image brightness transition value Y3; when the vehicle has just entered the dark area, calculating a second transitional exposure time T3' and a second transition gain G3' according to the formula T3' × G3' = Y3/(k × L2), and setting the second transitional exposure time T3' and the second transition gain G3' according to their product, so that the image brightness of the camera is stabilized at the image brightness transition value Y3; when the vehicle has completely entered the dark area, calculating the dark area exposure time T2 and the dark area gain G2 according to the formula T2 × G2 = Y2/(k × L2), and setting the dark area exposure time T2 and the dark area gain G2 according to their product, so that the image brightness of the camera transitions to the dark area target brightness value Y2.
As an improvement of the above scheme, when the vehicle enters the bright area from the dark area, the step of adjusting the exposure parameters of the camera according to the bright area illumination intensity value, the dark area illumination intensity value, the bright area target brightness value Y1, the dark area target brightness value Y2 and the image brightness transition value Y3 comprises: when the vehicle is in the dark area, calculating the dark area exposure time T2 and the dark area gain G2 according to the formula T2 × G2 = Y2/(k × L2), and setting the dark area exposure time T2 and the dark area gain G2 according to their product, so that the image brightness of the camera is stabilized at the dark area target brightness value Y2, wherein L1 is the bright area illumination intensity value, L2 is the dark area illumination intensity value, and k is a constant; when the vehicle is about to enter the bright area, calculating a second transitional exposure time T3' and a second transition gain G3' according to the formula T3' × G3' = Y3/(k × L2), and setting the second transitional exposure time T3' and the second transition gain G3' according to their product, so that the image brightness of the camera is increased to the image brightness transition value Y3; when the vehicle has just entered the bright area, calculating a first transitional exposure time T3 and a first transition gain G3 according to the formula T3 × G3 = Y3/(k × L1), and setting the first transitional exposure time T3 and the first transition gain G3 according to their product, so that the image brightness of the camera is stabilized at the image brightness transition value Y3; when the vehicle has completely entered the bright area, calculating the bright area exposure time T1 and the bright area gain G1 according to the formula T1 × G1 = Y1/(k × L1), and setting the bright area exposure time T1 and the bright area gain G1 according to their product, so that the image brightness of the camera transitions to the bright area target brightness value Y1.
As an improvement of the above solution, the method for obtaining the bright area illumination intensity value comprises: the camera shoots a bright area image in the bright area, the pixel bright area brightness value Y'1 is extracted from the bright area image, and the bright area illumination intensity value L1 is calculated according to the formula L1 = Y'1/(k × T'1 × G'1), wherein T'1 is the actual exposure time of the camera in the bright area, G'1 is the actual gain of the camera in the bright area, T'1 and G'1 are known quantities, and k is a constant; or the sunlight is collected through the camera to obtain the bright area illumination intensity value L1; or the bright area illumination intensity value L1 of the corresponding area is obtained through an illumination sensor and uploaded to a server, and the vehicle receives the bright area illumination intensity value L1 sent by the server; or the bright area illumination intensity value L1 is shared among vehicles through the V2X technology.
As an improvement of the above solution, the method for obtaining the dark area illumination intensity value comprises: before the vehicle enters the dark area, the camera shoots a dark area image of the dark area formed by the vehicle body of the vehicle, the pixel dark area brightness value Y'2 is extracted from the dark area image, and the dark area illumination intensity value L2 is calculated according to the formula L2 = Y'2/(k × T'2 × G'2), wherein T'2 is the actual exposure time of the camera in the dark area, G'2 is the actual gain of the camera in the dark area, T'2 and G'2 are known quantities, and k is a constant; or the dark area illumination intensity value L2 of the corresponding area is obtained through the illumination sensor and uploaded to a server, and the vehicle receives the dark area illumination intensity value L2 sent by the server; or the dark area illumination intensity value L2 is shared among vehicles through the V2X technology.
Correspondingly, the invention also provides a controller for adapting the vehicle camera to the shadow area based on the shadow recognition technology, which comprises the following components: the first acquisition module is used for acquiring sunlight irradiation direction information; the second acquisition module is used for acquiring a three-dimensional map and acquiring reference outline information and reference position information of the fixed object according to the three-dimensional map; the map building module is used for building a shadow map according to the sunlight irradiation direction information, the reference outline information and the reference position information; the third acquisition module is used for acquiring target outline information and target position information of objects around the vehicle; a target building module for calculating a target dark area according to the sunlight irradiation direction information, the target outline information and the target position information; the map updating module is used for updating the shadow map in real time according to the target dark area; the fourth acquisition module is used for acquiring the vehicle position information; the fifth acquisition module is used for acquiring the illumination intensity value of the bright area; a sixth obtaining module, configured to obtain a dark area illumination intensity value; and the calculation adjusting module is used for adjusting the exposure parameters of the camera in real time according to the vehicle position information, the bright area illumination intensity value, the dark area illumination intensity value and the updated shadow map.
Correspondingly, the invention also provides a system for adapting the vehicle camera to the shadow area based on the shadow recognition technology, which comprises: a camera for collecting sunlight, bright area images and dark area images; a radar for identifying objects around the vehicle; a cloud server for sending the three-dimensional map; a locator for locating the vehicle; and the controller described above.
The beneficial effects of the implementation of the invention are as follows:
according to the method, a shadow recognition technology is adopted firstly, a shadow map is constructed according to sunlight irradiation direction information, reference appearance outline information and reference position information, a target dark area is calculated according to the sunlight irradiation direction information, the target appearance outline information and the target position information, then the shadow map is updated in real time according to the target dark area, finally exposure parameters of a camera are adjusted in real time according to vehicle position information, a bright area illumination intensity value, a dark area illumination intensity value and the updated shadow map, the image brightness of the camera can be adjusted rapidly and effectively, stable transition between the bright area and the dark area is achieved, areas with different brightness are adapted, and therefore the influence of the change of shadows on driving safety of automatic driving is avoided.
Drawings
FIG. 1 is a flow chart of an implementation of a method for adapting a vehicle camera to a shadow area based on a shadow recognition technology in the present invention;
FIG. 2 is a schematic diagram of the present invention for obtaining sunlight irradiation direction information;
fig. 3 is a schematic diagram of the shadow map constructed according to the sunlight irradiation direction information, the reference outline information and the reference position information in the present invention.
FIG. 4 is a flowchart illustrating an embodiment of adjusting exposure parameters of the camera when the vehicle enters the dark area from the bright area;
FIG. 5 is a flowchart illustrating an embodiment of adjusting the exposure parameters of the camera when the vehicle enters the bright area from the dark area;
fig. 6 is a block diagram of a system for adapting a vehicle camera to a shadow area based on a shadow recognition technology in the invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention will be described in further detail with reference to the accompanying drawings.
It should be noted that the main formula applied herein is as follows:
assuming that the exposure function of the camera system is linear (non-linearity does not affect the conclusion of the present invention), the relationship between the illumination intensity L of the scene and the image brightness Y is
Y=k×L×T×G
Where k is a constant of the system, T is the exposure time, and G is the gain.
Therefore, when L is changed, Y can be changed by adjusting T and G, so that Y is close to a target value, and the stability of the image brightness is ensured.
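A small numeric illustration of this relationship follows; the values of k, L and the target brightness are assumed purely for the example.

```python
# Worked example of Y = k * L * T * G: given a target brightness Y and a measured
# illumination L, only the product T * G is determined; how it is split between
# exposure time and gain is a separate design choice.

K = 0.5             # system constant (assumed)
L_BRIGHT = 40000.0  # bright-area illumination intensity (assumed)
Y_TARGET = 128.0    # target image brightness on an 8-bit scale (assumed)

tg_product = Y_TARGET / (K * L_BRIGHT)   # required T * G

gain = 1.0                               # one possible split: unity gain
exposure_time = tg_product / gain
print(f"T*G = {tg_product:.6f}; with G = {gain}, T = {exposure_time:.6f} s")
```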
Referring to fig. 1, fig. 1 shows an implementation process of a method for adapting a vehicle camera to a shadow area based on a shadow recognition technology in the present invention, which includes:
s101, acquiring sunlight irradiation direction information;
specifically, the sunlight irradiation direction information includes a first direction angle θ and a second direction angle β. The camera can be used for acquiring a first direction angle theta of sunlight, and the camera can be used for acquiring a second direction angle beta of the sunlight.
Referring to fig. 2, the first direction angle θ is an included angle between a projection line of a connecting line of the sunlight to the origin of coordinates on the xoy plane and the x axis; and the second direction angle beta is an included angle between a connecting line from the sunlight to the coordinate origin and the xoy plane.
It should be noted that, for a certain position at a certain time, the irradiation direction of the sunlight is fixed, and besides using a camera, the sunlight irradiation direction information may also be obtained by other existing technologies.
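As a hedged illustration of these two angles, the sketch below recovers θ and β from a unit vector pointing from the origin toward the sun; obtaining such a vector (for example from an ephemeris or from the camera image) is assumed here and is not itself part of the method.

```python
import math

def sun_direction_angles(sx, sy, sz):
    """Return (theta, beta) in radians for a sun direction vector (sx, sy, sz)."""
    theta = math.atan2(sy, sx)                 # angle of the xoy-plane projection with the x-axis
    beta = math.atan2(sz, math.hypot(sx, sy))  # angle of the line with the xoy plane
    return theta, beta

# Assumed example: sun roughly 54 degrees above the horizon, north-east of the origin.
theta, beta = sun_direction_angles(0.5, 0.3, 0.81)
```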
S102, acquiring a three-dimensional map, and acquiring reference outline information and reference position information of the fixed object according to the three-dimensional map.
Specifically, the three-dimensional map includes the coordinates and the height of each fixed object, and the reference contour information and the reference position information are given by a coordinate array J composed of the reference points Jii, where Jii = (xii, yii, zii), xii is the coordinate of the reference point Jii on the x-axis, yii is its coordinate on the y-axis, zii is its coordinate on the z-axis, and i is a positive integer.
And S103, constructing a shadow map according to the sunlight irradiation direction information, the reference outline information and the reference position information.
Specifically, according to the sunlight irradiation direction information, the reference outline information and the reference position information, a dark area and a light area formed by the fixed object are calculated to construct the shadow map, wherein the dark area is an area formed by the fact that light is shielded by the object, the light area is an area irradiated by sunlight, and the shadow map is a three-dimensional map with the dark area and the light area marked thereon.
Referring to fig. 3, the projected point Kii of the reference point Jii on the road surface is calculated according to the first direction angle θ, the second direction angle β, the reference contour information and reference position information J, and the road height value za; the maximum area enclosed by all projected points Kii is the dark area.

The spatial line equation of the light ray passing through the reference point Jii is

(x − xii)/(cosβ × cosθ) = (y − yii)/(cosβ × sinθ) = (z − zii)/(−sinβ)

The plane equation of the road surface is z = za, so the projected point Kii of the reference point Jii on the road surface is obtained, i.e.

Kii = (xii + (zii − za) × cosθ/tanβ, yii + (zii − za) × sinθ/tanβ, za)
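The projection above can be implemented directly. The sketch below assumes the convention that θ and β describe the direction in which the light travels, so the shadow extends along (cosθ, sinθ); with the opposite angle convention the horizontal offsets change sign.

```python
import math

# Hedged sketch of the shadow projection: project a reference point J = (x, y, z)
# along the sunlight direction onto the road plane z = z_a. The sign convention
# (shadow extending along (cos(theta), sin(theta))) is an assumption.

def project_to_road(x, y, z, theta, beta, z_a):
    """Return the projected point K of reference point J on the road plane z = z_a."""
    offset = (z - z_a) / math.tan(beta)      # horizontal shadow length contributed by this point
    return (x + offset * math.cos(theta),
            y + offset * math.sin(theta),
            z_a)

# A building corner 20 m high with the sun at 45 degrees elevation casts a 20 m shadow.
k_point = project_to_road(10.0, 5.0, 20.0, theta=math.radians(30), beta=math.radians(45), z_a=0.0)
```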
And S104, acquiring target outline information and target position information of objects around the vehicle.
Specifically, the objects around the vehicle, mainly non-fixed objects such as nearby trees, utility poles and travelling vehicles, are recognized using the radar, thereby obtaining the target contour information and the target position information.
And S105, calculating a target dark area according to the sunlight irradiation direction information, the target outline information and the target position information.
The specific calculation and construction method is the same as step S103, and is not described again.
And S106, updating the shadow map in real time according to the target dark area.
Specifically, the original dark area and the target dark area are overlapped to obtain a new dark area, so that an updated shadow map is obtained.
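A minimal sketch of this update step follows; representing the shadow map as a boolean occupancy grid over the road plane is an assumed data structure chosen only for illustration.

```python
import numpy as np

def update_shadow_map(static_dark: np.ndarray, target_dark: np.ndarray) -> np.ndarray:
    """Overlap the original dark area with the target dark area to obtain the new dark area."""
    return np.logical_or(static_dark, target_dark)

# Assumed example: a fixed building shadow plus the shadow of a passing vehicle.
static_dark = np.zeros((5, 5), dtype=bool)
static_dark[1:3, :] = True        # dark strip cast by a fixed building
target_dark = np.zeros((5, 5), dtype=bool)
target_dark[2:4, 2:4] = True      # dark patch cast by a nearby vehicle
updated = update_shadow_map(static_dark, target_dark)
```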
S107, vehicle position information is acquired.
Specifically, a vehicle is positioned by a positioner, and the vehicle position information is acquired.
S108, obtaining the bright area illumination intensity value L1.

Specifically, the bright area illumination intensity value L1 can be obtained by any of the following methods:

1) The camera shoots a bright area image in the bright area, and the pixel bright area brightness value Y'1 is extracted from the bright area image; the bright area illumination intensity value L1 is then calculated according to the formula

L1 = Y'1/(k × T'1 × G'1)

wherein T'1 is the actual exposure time of the camera in the bright area, G'1 is the actual gain of the camera in the bright area, T'1 and G'1 are known quantities, and k is a constant.

2) Sunlight is collected through the camera to obtain the bright area illumination intensity value L1.

3) Since the irradiation direction and the illumination intensity of the sunlight in an area are relatively fixed, the bright area illumination intensity value L1 of the corresponding area can be obtained through an illumination sensor and uploaded to a server, and the vehicle receives the bright area illumination intensity value L1 sent by the server.

4) The bright area illumination intensity value L1 is shared among vehicles through the V2X technology.
S109, obtaining the dark area illumination intensity value L2.

Specifically, the dark area illumination intensity value L2 can be obtained by any of the following methods:

1) Before the vehicle enters the dark area, the camera shoots a dark area image of the dark area formed by the vehicle body of the vehicle, and the pixel dark area brightness value Y'2 is extracted from the dark area image; the dark area illumination intensity value L2 is then calculated according to the formula

L2 = Y'2/(k × T'2 × G'2)

wherein T'2 is the actual exposure time of the camera in the dark area, G'2 is the actual gain of the camera in the dark area, T'2 and G'2 are known quantities, and k is a constant.

2) Since the irradiation direction and the illumination intensity of the sunlight in an area are relatively fixed, the illumination intensity of a dark area formed where an object blocks the sunlight is also relatively fixed; therefore the dark area illumination intensity value L2 of the corresponding area can be obtained through the illumination sensor and uploaded to a server, and the vehicle receives the dark area illumination intensity value L2 sent by the server.

3) The dark area illumination intensity value L2 is shared among vehicles through the V2X technology.
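As a hedged illustration of method 1) above (for both the bright area and the dark area), the sketch below inverts the exposure model Y = k × L × T × G; the numeric values of k, the measured brightness and the exposure settings are assumed.

```python
def illumination_from_image(mean_brightness, exposure_time, gain, k=0.5):
    """Invert Y = k * L * T * G to estimate the illumination intensity L from an image."""
    return mean_brightness / (k * exposure_time * gain)

# Bright-area estimate L1 from an image taken with known (short) exposure settings.
L1 = illumination_from_image(mean_brightness=120.0, exposure_time=0.002, gain=1.0)
# Dark-area estimate L2 from the shadow cast by the vehicle's own body, taken before entering.
L2 = illumination_from_image(mean_brightness=120.0, exposure_time=0.010, gain=4.0)
```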
S110, adjusting the exposure parameters of the camera in real time according to the vehicle position information, the bright area illumination intensity value, the dark area illumination intensity value and the updated shadow map.

Specifically, a bright area target brightness value Y1 and a dark area target brightness value Y2 are set, and the image brightness transition value Y3 is obtained according to the formula

Y3 = (Y1 + Y2)/2
Referring to fig. 4, fig. 4 shows an implementation process of adjusting the exposure parameter of the camera when the vehicle enters the dark area from the bright area.
It should be noted that, when the vehicle enters the dark area from the bright area, the exposure parameters of the camera are adjusted according to the bright area illumination intensity value L1, the dark area illumination intensity value L2, the bright area target brightness value Y1, the dark area target brightness value Y2 and the image brightness transition value Y3, so that the image brightness of the camera smoothly transitions from the bright area target brightness value Y1 to the image brightness transition value Y3, and then from the image brightness transition value Y3 to the dark area target brightness value Y2. The specific steps are as follows:
S201, when the vehicle is in the bright area, the bright area exposure time T1 and the bright area gain G1 are calculated according to the formula

T1 × G1 = Y1/(k × L1)

and the bright area exposure time T1 and the bright area gain G1 are set according to their product, so that the image brightness of the camera is stabilized at the bright area target brightness value Y1.

S202, when the vehicle is about to enter the dark area, the first transitional exposure time T3 and the first transition gain G3 are calculated according to the formula

T3 × G3 = Y3/(k × L1)

and the first transitional exposure time T3 and the first transition gain G3 are set according to their product, so that the image brightness of the camera is reduced to the image brightness transition value Y3.

S203, when the vehicle has just entered the dark area, the second transitional exposure time T3' and the second transition gain G3' are calculated according to the formula

T3' × G3' = Y3/(k × L2)

and the second transitional exposure time T3' and the second transition gain G3' are set according to their product, so that the image brightness of the camera is stabilized at the image brightness transition value Y3.

S204, when the vehicle has completely entered the dark area, the dark area exposure time T2 and the dark area gain G2 are calculated according to the formula

T2 × G2 = Y2/(k × L2)

and the dark area exposure time T2 and the dark area gain G2 are set according to their product, so that the image brightness of the camera transitions to the dark area target brightness value Y2.
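The four stages S201 to S204 can be summarized as an exposure schedule. The sketch below is illustrative; in particular, the way the required T × G product is split between exposure time and gain (gain held at 1 until the exposure-time limit is reached) is an assumption, since the description only constrains the product.

```python
def split_product(tg_product, t_max=0.033, g_max=16.0):
    """Choose an exposure time and gain whose product equals tg_product (assumed split rule)."""
    t = min(tg_product, t_max)
    return t, min(tg_product / t, g_max)

def bright_to_dark_schedule(y1, y2, l1, l2, k=0.5):
    """(T, G) settings for the four stages S201-S204 of a bright-to-dark transition."""
    y3 = (y1 + y2) / 2.0                                      # image brightness transition value
    return {
        "in_bright_area":      split_product(y1 / (k * l1)),  # S201: hold Y1
        "about_to_enter_dark": split_product(y3 / (k * l1)),  # S202: drop to Y3 while still in sunlight
        "just_entered_dark":   split_product(y3 / (k * l2)),  # S203: hold Y3 inside the shadow
        "fully_in_dark":       split_product(y2 / (k * l2)),  # S204: settle at Y2
    }

# Assumed example values for Y1, Y2, L1, L2.
schedule = bright_to_dark_schedule(y1=140.0, y2=90.0, l1=40000.0, l2=4000.0)
```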
Referring to fig. 5, fig. 5 shows an implementation process of adjusting the exposure parameters of the camera when the vehicle enters the bright area from the dark area.
It should be noted that, when the vehicle enters the bright area from the dark area, the exposure parameters of the camera are adjusted according to the bright area illumination intensity value L1, the dark area illumination intensity value L2, the bright area target brightness value Y1, the dark area target brightness value Y2 and the image brightness transition value Y3, so that the image brightness of the camera smoothly transitions from the dark area target brightness value Y2 to the image brightness transition value Y3, and then from the image brightness transition value Y3 to the bright area target brightness value Y1. The specific steps are as follows:
S301, when the vehicle is in the dark area, the dark area exposure time T2 and the dark area gain G2 are calculated according to the formula

T2 × G2 = Y2/(k × L2)

and the dark area exposure time T2 and the dark area gain G2 are set according to their product, so that the image brightness of the camera is stabilized at the dark area target brightness value Y2.

S302, when the vehicle is about to enter the bright area, the second transitional exposure time T3' and the second transition gain G3' are calculated according to the formula

T3' × G3' = Y3/(k × L2)

and the second transitional exposure time T3' and the second transition gain G3' are set according to their product, so that the image brightness of the camera is increased to the image brightness transition value Y3.

S303, when the vehicle has just entered the bright area, the first transitional exposure time T3 and the first transition gain G3 are calculated according to the formula

T3 × G3 = Y3/(k × L1)

and the first transitional exposure time T3 and the first transition gain G3 are set according to their product, so that the image brightness of the camera is stabilized at the image brightness transition value Y3.

S304, when the vehicle has completely entered the bright area, the bright area exposure time T1 and the bright area gain G1 are calculated according to the formula

T1 × G1 = Y1/(k × L1)

and the bright area exposure time T1 and the bright area gain G1 are set according to their product, so that the image brightness of the camera transitions to the bright area target brightness value Y1.
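The mirror-image schedule for S301 to S304 is sketched below under the same assumptions; the split of T × G between exposure time and gain is again illustrative only.

```python
def dark_to_bright_schedule(y1, y2, l1, l2, k=0.5, t_max=0.033, g_max=16.0):
    """(T, G) settings for the four stages S301-S304 of a dark-to-bright transition."""
    y3 = (y1 + y2) / 2.0                      # image brightness transition value

    def split(tg_product):
        # Assumed split rule: keep gain at 1 until the exposure-time limit is reached.
        t = min(tg_product, t_max)
        return t, min(tg_product / t, g_max)

    return {
        "in_dark_area":          split(y2 / (k * l2)),  # S301: hold Y2
        "about_to_enter_bright": split(y3 / (k * l2)),  # S302: raise to Y3 while still in the shadow
        "just_entered_bright":   split(y3 / (k * l1)),  # S303: hold Y3 in the sunlight
        "fully_in_bright":       split(y1 / (k * l1)),  # S304: settle at Y1
    }

# Assumed example values for Y1, Y2, L1, L2.
settings = dark_to_bright_schedule(y1=140.0, y2=90.0, l1=40000.0, l2=4000.0)
```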
As shown in fig. 6, the present invention further provides a system for adapting a vehicle camera to a shadow area based on a shadow recognition technology, including:
the camera 5 is used for collecting sunlight, bright area images and dark area images;
a radar 4 for identifying objects around the vehicle;
the cloud server 3 is used for sending the three-dimensional map, wherein the three-dimensional map is provided by a third party such as Google, Baidu or Gaode and uploaded to the cloud server 3;
a locator 2 for locating the vehicle;
and the controller 1 is used for acquiring information, processing the information and adjusting the image brightness of the camera 5.
It should be noted that the controller 1 acquires information from the camera 5, the radar 4, the cloud server 3 and the locator 2, processes the information, and then sends a formed adjusting signal to the camera 5 in real time to adjust the image brightness of the camera 5 in real time, so as to realize stable transition between a bright area and a dark area.
Wherein the controller 1 includes:
a first obtaining module 11, configured to obtain sunlight irradiation direction information;
the second acquisition module 12 is configured to acquire a three-dimensional map, and acquire reference outline information and reference position information of a fixed object according to the three-dimensional map;
the map building module 13 is used for building a shadow map according to the sunlight irradiation direction information, the reference outline information and the reference position information;
a third obtaining module 14, configured to obtain target contour information and target position information of objects around the vehicle;
a target building module 15, configured to calculate a target dark area according to the sunlight irradiation direction information, the target contour information, and the target position information;
an update map module 16, configured to update the shadow map in real time according to the target dark area;
a fourth obtaining module 17, configured to obtain vehicle position information;
a fifth obtaining module 18, configured to obtain a bright-area illumination intensity value;
a sixth obtaining module 19, configured to obtain a dark area illumination intensity value;
and the calculation adjusting module 20 is configured to adjust the exposure parameter of the camera 5 in real time according to the vehicle position information, the bright area illumination intensity value, the dark area illumination intensity value, and the updated shadow map.
Referring to fig. 6, fig. 6 shows a structure of a system for adapting a vehicle camera to a shadow area based on a shadow recognition technology in the present invention, which specifically comprises the following:
the camera 5 is connected with the first obtaining module 11, and the first obtaining module 11 collects sunlight through the camera 5 to obtain sunlight irradiation direction information.
The sunlight irradiation direction information includes a first direction angle θ and a second direction angle β. A first direction angle theta of sunlight can be obtained through the camera, and the first direction angle theta is an included angle between a projection line of a connecting line of the sunlight to a coordinate origin on an xoy plane and an x axis; a second direction angle beta of sunlight can be obtained through the camera, and the second direction angle beta is an included angle between a connecting line from the sunlight to the origin of coordinates and the xoy plane.
The cloud server 3 is connected to the second obtaining module 12 and sends a three-dimensional map, and the second obtaining module 12 obtains reference outline information and reference position information of the fixed object according to the three-dimensional map.
The three-dimensional map includes the coordinates and the height of each fixed object, and the reference contour information and the reference position information are given by the coordinate array J composed of the reference points Jii, where Jii = (xii, yii, zii), xii is the coordinate of the reference point Jii on the x-axis, yii is its coordinate on the y-axis, zii is its coordinate on the z-axis, and i is a positive integer.
The map building module 13 is connected to the first acquisition module 11 and the second acquisition module 12 respectively; the first acquisition module 11 sends the sunlight irradiation direction information to the map building module 13, the second acquisition module 12 sends the reference outline information and the reference position information to the map building module 13, and the map building module 13 constructs a shadow map according to the sunlight irradiation direction information, the reference outline information and the reference position information.
It should be noted that, according to the sunlight irradiation direction information, the reference outline information and the reference position information, the dark area and the bright area formed by the fixed object are calculated to construct the shadow map, wherein the dark area is the area where the light is blocked by the object, the bright area is the area irradiated by sunlight, and the shadow map is the three-dimensional map with the dark area and the bright area marked on it. The specific calculation process is as follows:

The projected point Kii of the reference point Jii on the road surface is calculated according to the first direction angle θ, the second direction angle β, the reference outline information, the reference position information J and the road height value za; the maximum area enclosed by all projected points Kii is the dark area.

The spatial line equation of the light ray passing through the reference point Jii is

(x − xii)/(cosβ × cosθ) = (y − yii)/(cosβ × sinθ) = (z − zii)/(−sinβ)

The plane equation of the road surface is z = za, so the projected point Kii of the reference point Jii on the road surface is obtained, i.e.

Kii = (xii + (zii − za) × cosθ/tanβ, yii + (zii − za) × sinθ/tanβ, za)
The radar 4 is connected to the third obtaining module 14, and the third obtaining module 14 obtains target contour information and target position information of objects around the vehicle by identifying the objects around the vehicle through the radar 4, and the specific obtaining principle is the same as the above-mentioned obtaining of the reference contour information and the reference position information of the fixed object, which is not repeated.
The target building module 15 is connected to the first acquisition module 11 and the third acquisition module 14 respectively; the first acquisition module 11 sends the sunlight irradiation direction information to the target building module 15, the third acquisition module 14 sends the target contour information and the target position information to the target building module 15, and the target building module 15 calculates a target dark area according to the sunlight irradiation direction information, the target contour information and the target position information.
The map updating module 16 is connected to the map building module 13 and the object building module 15, respectively, the map building module 13 sends the shadow map to the map updating module 16, the object building module 15 sends the object dark area to the map updating module 16, and the map updating module 16 overlaps the dark area in the shadow map with the object dark area to obtain a new dark area, so as to update the shadow map in real time.
The locator 2 is connected with the fourth obtaining module 17, and the fourth obtaining module 17 obtains vehicle position information by locating the vehicle position through the locator 2.
The camera 5 is connected with the fifth obtaining module 18, the camera 5 sends the bright area image collected in the bright area to the fifth obtaining module 18, and the fifth obtaining module 18 calculates the bright area illumination intensity value according to the bright area image.
It should be noted that the fifth obtaining module 18 extracts the pixel bright area brightness value Y'1 from the bright area image and calculates the bright area illumination intensity value L1 according to the formula

L1 = Y'1/(k × T'1 × G'1)

wherein T'1 is the actual exposure time of the camera in the bright area, G'1 is the actual gain of the camera in the bright area, T'1 and G'1 are known quantities, and k is a constant.
The camera 5 is connected to the sixth obtaining module 19, the camera 5 sends the dark area image collected in the dark area to the sixth obtaining module 19, and the sixth obtaining module 19 calculates the illumination intensity value of the dark area according to the dark area image.
It should be noted that the sixth obtaining module 19 extracts the pixel dark area brightness value Y'2 from the dark area image and calculates the dark area illumination intensity value L2 according to the formula

L2 = Y'2/(k × T'2 × G'2)

wherein T'2 is the actual exposure time of the camera in the dark area, G'2 is the actual gain of the camera in the dark area, T'2 and G'2 are known quantities, and k is a constant.
The input end of the calculation adjusting module 20 is connected to the fourth obtaining module 17, the fifth obtaining module 18, the sixth obtaining module 19 and the map updating module 16, and the output end of the calculation adjusting module 20 is connected to the camera 5. The fourth obtaining module 17 sends the vehicle position information to the calculation adjusting module 20, the fifth obtaining module 18 sends the bright area illumination intensity value to the calculation adjusting module 20, the sixth obtaining module 19 sends the dark area illumination intensity value to the calculation adjusting module 20, and the map updating module 16 sends the updated shadow map to the calculation adjusting module 20. The calculation adjusting module 20 adjusts the exposure parameters of the camera 5 in real time according to the vehicle position information, the bright area illumination intensity value, the dark area illumination intensity value and the updated shadow map. The specific adjustment process is as follows:
When the vehicle enters the dark area from the bright area, the exposure parameters of the camera are adjusted according to the bright area illumination intensity value L1, the dark area illumination intensity value L2, the bright area target brightness value Y1, the dark area target brightness value Y2 and the image brightness transition value Y3, so that the image brightness of the camera smoothly transitions from the bright area target brightness value Y1 to the image brightness transition value Y3, and then from the image brightness transition value Y3 to the dark area target brightness value Y2. The specific steps are as follows:

When the vehicle is in the bright area, the bright area exposure time T1 and the bright area gain G1 are calculated according to the formula

T1 × G1 = Y1/(k × L1)

and the bright area exposure time T1 and the bright area gain G1 are set according to their product, so that the image brightness of the camera is stabilized at the bright area target brightness value Y1.

When the vehicle is about to enter the dark area, the first transitional exposure time T3 and the first transition gain G3 are calculated according to the formula

T3 × G3 = Y3/(k × L1)

and the first transitional exposure time T3 and the first transition gain G3 are set according to their product, so that the image brightness of the camera is reduced to the image brightness transition value Y3.

When the vehicle has just entered the dark area, the second transitional exposure time T3' and the second transition gain G3' are calculated according to the formula

T3' × G3' = Y3/(k × L2)

and the second transitional exposure time T3' and the second transition gain G3' are set according to their product, so that the image brightness of the camera is stabilized at the image brightness transition value Y3.

When the vehicle has completely entered the dark area, the dark area exposure time T2 and the dark area gain G2 are calculated according to the formula

T2 × G2 = Y2/(k × L2)

and the dark area exposure time T2 and the dark area gain G2 are set according to their product, so that the image brightness of the camera transitions to the dark area target brightness value Y2.
When the vehicle enters the bright area from the dark area, the light intensity value L of the bright area is obtained 1 Dark area illumination intensity value L 2 Bright area target brightness value Y 1 Dark region target luminance value Y 2 And the image brightness transition value Y 3 Adjusting the exposure parameters of the camera to enable the image brightness of the camera to be changed from the dark area target brightness value Y 2 To the image luminance transition value Y 3 Then from the image brightness transition value Y 3 To the bright area target brightness value Y 1 And (3) smooth transition, which comprises the following specific steps:
when the vehicle is in the dark area, according to the formula
T 2 ×G 2 =Y 2 /(k×L 2 )
Calculating dark area exposure time T 2 And dark area gain G 2 And according to said dark field exposure time T 2 And dark area gain G 2 The product of (c) sets the dark field exposure time T 2 And dark area gain G 2 Stabilizing the image brightness of the camera at the dark region target brightness value Y 2
When the vehicle is about to enter the bright area, according to a formula
T 3 ′×G 3 ′=Y 3 /(k×L 2 )
Calculating a second transitional exposure time T 3 ' and second transition gain G 3 And according to said second transit exposure time T 3 ' and second transition gain G 3 ' sets the second transitional exposure time T 3 ' and a second transition gain G 3 ' increasing the image brightness of the camera to the image brightness transition value Y 3
When the vehicle just enters the bright area, according to the formula
T 3 ×G 3 =Y 3 /(k×L 1 )
Calculating a first transit exposure timeT 3 And a first transition gain G 3 And according to said first transit exposure time T 3 And a first transition gain G 3 Set the first transit exposure time T 3 And a first transition gain G 3 Stabilizing the image brightness of the camera at the image brightness transition value Y 3
When the vehicle completely enters the bright area, the vehicle can enter the bright area according to the formula
T 1 ×G 1 =Y 1 /(k×L 1 )
Calculating the bright area exposure time T 1 And bright area gain G 1 And according to the bright area exposure time T 1 And bright area gain G 1 The product of (a) sets the bright area exposure time T 1 Sum bright area gain G 1 Making the image brightness of the camera transit to the bright area target brightness value Y 1
In summary, the present invention first adopts a shadow recognition technology, constructs a shadow map according to the sunlight irradiation direction information, the reference outline information and the reference position information, calculates a target dark area according to the sunlight irradiation direction information, the target outline information and the target position information, updates the shadow map in real time according to the target dark area, and finally adjusts the exposure parameters of the camera in real time according to the vehicle position information, the bright area illumination intensity value, the dark area illumination intensity value and the updated shadow map, so as to quickly and effectively adjust the image brightness of the camera, and realize a smooth transition between the bright area and the dark area to adapt to areas with different brightness, thereby avoiding the influence of the change of the shadow on the driving safety of the automatic driving.
While the foregoing is directed to the preferred embodiment of the present invention, it will be understood by those skilled in the art that various changes and modifications may be made without departing from the spirit and scope of the invention.

Claims (10)

1. A method for adapting a vehicle camera to a shadow area based on a shadow recognition technology is characterized by comprising the following steps:
acquiring sunlight irradiation direction information;
acquiring a three-dimensional map, and acquiring reference outline information and reference position information of a fixed object according to the three-dimensional map;
constructing a shadow map according to the sunlight irradiation direction information, the reference outline information and the reference position information, wherein the shadow map is formed by marking a dark area and a bright area on the three-dimensional map;
acquiring target outline information and target position information of objects around the vehicle;
calculating a target dark area according to the sunlight irradiation direction information, the target outline information and the target position information;
updating the shadow map in real time according to the target dark area;
acquiring vehicle position information;
obtaining a bright area illumination intensity value;
obtaining a dark area illumination intensity value;
and adjusting the exposure parameters of the camera in real time according to the vehicle position information, the bright area illumination intensity value, the dark area illumination intensity value and the updated shadow map.
2. The method according to claim 1, wherein the step of acquiring sunlight irradiation direction information comprises:
acquiring a first direction angle of sunlight through the camera, wherein the first direction angle is an included angle between a projection line of a connecting line of the sunlight to a coordinate origin on an xoy plane and an x axis;
and acquiring a second direction angle of the sunlight through the camera, wherein the second direction angle is an included angle between a connecting line from the sunlight to the origin of coordinates and the xoy plane.
3. The method of claim 1, wherein the reference contour information and the reference position information are given by a coordinate array J composed of reference points Jii, where Jii = (xii, yii, zii), xii is the coordinate of the reference point Jii on the x-axis, yii is its coordinate on the y-axis, zii is its coordinate on the z-axis, and i is a positive integer;
the method for constructing the shadow map according to the sunlight irradiation direction information, the reference outline information and the reference position information comprises the following steps:
according to the formula

Kii = (xii + (zii − za) × cosθ/tanβ, yii + (zii − za) × sinθ/tanβ, za)

the projected point Kii of the reference point Jii on the road surface is calculated, and the maximum area enclosed by all projected points Kii is the dark area, wherein θ is the first direction angle of the sunlight, β is the second direction angle of the sunlight, and za is the road height value.
4. The method of claim 1, wherein the step of adjusting the exposure parameters of the camera in real time based on the vehicle location information, the bright area illumination intensity value, the dark area illumination intensity value, and the updated shadow map comprises:
setting a bright area target brightness value Y1 and a dark area target brightness value Y2, and obtaining an image brightness transition value Y3 according to the formula Y3 = (Y1 + Y2)/2;

when the vehicle enters the dark area from the bright area, adjusting the exposure parameters of the camera according to the bright area illumination intensity value, the dark area illumination intensity value, the bright area target brightness value Y1, the dark area target brightness value Y2 and the image brightness transition value Y3, so that the image brightness of the camera smoothly transitions from the bright area target brightness value Y1 to the image brightness transition value Y3, and then from the image brightness transition value Y3 to the dark area target brightness value Y2;

or when the vehicle enters the bright area from the dark area, adjusting the exposure parameters of the camera according to the bright area illumination intensity value, the dark area illumination intensity value, the bright area target brightness value Y1, the dark area target brightness value Y2 and the image brightness transition value Y3, so that the image brightness of the camera smoothly transitions from the dark area target brightness value Y2 to the image brightness transition value Y3, and then from the image brightness transition value Y3 to the bright area target brightness value Y1.
5. The method according to claim 4, wherein, when the vehicle enters the dark area from the bright area, the step of adjusting the exposure parameters of the camera according to the bright area illumination intensity value, the dark area illumination intensity value, the bright area target brightness value Y1, the dark area target brightness value Y2 and the image brightness transition value Y3 comprises:
when the vehicle is in the bright area, calculating the bright area exposure time T1 and the bright area gain G1 according to the formula T1 × G1 = Y1/(k × L1), and setting the bright area exposure time T1 and the bright area gain G1 according to their product, so as to stabilize the image brightness of the camera at the bright area target brightness value Y1, wherein L1 is the bright area illumination intensity value, L2 is the dark area illumination intensity value, and k is a constant;
when the vehicle is about to enter the dark area, calculating a first transitional exposure time T3 and a first transition gain G3 according to the formula T3 × G3 = Y3/(k × L1), and setting the first transitional exposure time T3 and the first transition gain G3 according to their product, so as to reduce the image brightness of the camera to the image brightness transition value Y3;
when the vehicle just enters the dark area, calculating a second transitional exposure time T3′ and a second transition gain G3′ according to the formula T3′ × G3′ = Y3/(k × L2), and setting the second transitional exposure time T3′ and the second transition gain G3′ according to their product, so as to stabilize the image brightness of the camera at the image brightness transition value Y3;
after the vehicle completely enters the dark area, calculating the dark area exposure time T2 and the dark area gain G2 according to the formula T2 × G2 = Y2/(k × L2), and setting the dark area exposure time T2 and the dark area gain G2 according to their product, so that the image brightness of the camera transitions to the dark area target brightness value Y2.
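For illustration only, a sketch of the four stages above: the claim fixes only the product T × G, so the policy used here for splitting that product into an exposure time and a gain, and all numeric values, are assumptions.

```python
def exposure_product(target_brightness, illuminance, k):
    """Required exposure-time x gain product from the relation T * G = Y / (k * L)."""
    return target_brightness / (k * illuminance)

def split_product(product, max_exposure_time):
    """Assumed policy: use as much exposure time as allowed, supply the rest as gain."""
    exposure_time = min(product, max_exposure_time)
    gain = product / exposure_time
    return exposure_time, gain

def bright_to_dark_schedule(y1, y2, l1, l2, k, max_exposure_time=1 / 60):
    """The four stages of claim 5 for a vehicle driving from a bright area into a dark area."""
    y3 = (y1 + y2) / 2  # image brightness transition value from claim 4
    stages = [
        ("in bright area",      exposure_product(y1, l1, k)),   # T1 * G1
        ("about to enter dark", exposure_product(y3, l1, k)),   # T3 * G3
        ("just entered dark",   exposure_product(y3, l2, k)),   # T3' * G3'
        ("fully in dark area",  exposure_product(y2, l2, k)),   # T2 * G2
    ]
    return [(name, *split_product(p, max_exposure_time)) for name, p in stages]

# Example with arbitrary numbers: targets 140 (bright) / 100 (dark), illuminance
# 80000 lux in sunlight vs 5000 lux in shadow, and an arbitrary camera constant k.
for name, t, g in bright_to_dark_schedule(y1=140, y2=100, l1=80000, l2=5000, k=0.2):
    print(f"{name:>20}: T = {t:.5f} s, G = {g:.2f}")
```

The dark-to-bright sequence of claim 6 runs the same four stages in reverse order.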
6. The method of claim 4, wherein, when the vehicle enters the bright area from the dark area, the step of adjusting the exposure parameters of the camera according to the bright area illumination intensity value, the dark area illumination intensity value, the bright area target brightness value Y1, the dark area target brightness value Y2 and the image brightness transition value Y3 comprises:
when the vehicle is in the dark area, calculating the dark area exposure time T2 and the dark area gain G2 according to the formula T2 × G2 = Y2/(k × L2), and setting the dark area exposure time T2 and the dark area gain G2 according to their product, so as to stabilize the image brightness of the camera at the dark area target brightness value Y2, wherein L1 is the bright area illumination intensity value, L2 is the dark area illumination intensity value, and k is a constant;
when the vehicle is about to enter the bright area, calculating the second transitional exposure time T3′ and the second transition gain G3′ according to the formula T3′ × G3′ = Y3/(k × L2), and setting the second transitional exposure time T3′ and the second transition gain G3′ according to their product, so as to increase the image brightness of the camera to the image brightness transition value Y3;
when the vehicle just enters the bright area, calculating the first transitional exposure time T3 and the first transition gain G3 according to the formula T3 × G3 = Y3/(k × L1), and setting the first transitional exposure time T3 and the first transition gain G3 according to their product, so as to stabilize the image brightness of the camera at the image brightness transition value Y3;
when the vehicle completely enters the bright area, calculating the bright area exposure time T1 and the bright area gain G1 according to the formula T1 × G1 = Y1/(k × L1), and setting the bright area exposure time T1 and the bright area gain G1 according to their product, so that the image brightness of the camera transitions to the bright area target brightness value Y1.
7. The method of claim 1, wherein the method of obtaining the bright-area illumination intensity value comprises:
the camera shoots a bright area image in the bright area, a pixel bright area brightness value Y′1 is extracted from the bright area image, and the bright area illumination intensity value L1 is calculated according to the formula L1 = Y′1/(k × T′1 × G′1), wherein T′1 is the actual exposure time of the camera in the bright area, G′1 is the actual gain of the camera in the bright area, T′1 and G′1 are known quantities, and k is a constant;
or the sunlight is collected through the camera to obtain the bright area illumination intensity value L1;
or the bright area illumination intensity value L1 of the corresponding area is obtained through an illumination sensor and uploaded to a server, and the vehicle receives the bright area illumination intensity value L1 sent by the server;
or the bright area illumination intensity value L1 is shared between vehicles through V2X technology.
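The first option inverts the imaging relation Y′ = k × L × T′ × G′ used throughout the claims; a minimal sketch, assuming the pixel brightness value is taken as the mean of the captured frame (claim 8 applies the same relation to a dark area image):

```python
import numpy as np

def estimate_illuminance(frame: np.ndarray, exposure_time: float, gain: float, k: float) -> float:
    """Estimate scene illuminance L from a frame by inverting Y' = k * L * T * G.
    Using the mean pixel value as the brightness measure Y' is an assumption."""
    y_prime = float(frame.mean())
    return y_prime / (k * exposure_time * gain)

# Example: a mid-gray 8-bit frame shot at 10 ms exposure, unity gain, arbitrary k.
frame = np.full((480, 640), 128, dtype=np.uint8)
print(estimate_illuminance(frame, exposure_time=0.010, gain=1.0, k=0.16))  # 80000.0
```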
8. The method of claim 1, wherein the method of obtaining dark area illumination intensity values comprises:
before the vehicle enters the dark area, the camera shoots a dark area image of the dark area formed by the vehicle body of the vehicle, a pixel dark area brightness value Y′2 is extracted from the dark area image, and the dark area illumination intensity value L2 is calculated according to the formula L2 = Y′2/(k × T′2 × G′2), wherein T′2 is the actual exposure time of the camera in the dark area, G′2 is the actual gain of the camera in the dark area, T′2 and G′2 are known quantities, and k is a constant;
or the dark area illumination intensity value L2 of the corresponding area is obtained through an illumination sensor and uploaded to a server, and the vehicle receives the dark area illumination intensity value L2 sent by the server;
or the dark area illumination intensity value L2 is shared between vehicles through V2X technology.
9. A controller for adapting a vehicle camera to a shadow area based on shadow recognition technology, comprising:
the first acquisition module is used for acquiring sunlight irradiation direction information;
the second acquisition module is used for acquiring a three-dimensional map and acquiring reference outline information and reference position information of the fixed object according to the three-dimensional map;
the map construction module is used for constructing a shadow map according to the sunlight irradiation direction information, the reference outline information and the reference position information;
the third acquisition module is used for acquiring target outline information and target position information of objects around the vehicle;
a target building module for calculating a target dark area according to the sunlight irradiation direction information, the target outline information and the target position information;
the map updating module is used for updating the shadow map in real time according to the target dark area;
the fourth acquisition module is used for acquiring the vehicle position information;
a fifth obtaining module, configured to obtain a bright area illumination intensity value;
the sixth acquisition module is used for acquiring the illumination intensity value of the dark area;
and the calculation adjusting module is used for adjusting the exposure parameters of the camera in real time according to the vehicle position information, the bright area illumination intensity value, the dark area illumination intensity value and the updated shadow map.
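One detail the claim leaves to the implementation is how the calculation adjusting module tests the vehicle position against the updated shadow map; a self-contained sketch, assuming dark areas are stored as 2D polygons on the road plane (the representation and all names below are assumptions, not from the patent):

```python
from dataclasses import dataclass
from typing import List, Tuple

Point = Tuple[float, float]

def point_in_polygon(x: float, y: float, polygon: List[Point]) -> bool:
    """Ray-casting test: True if (x, y) lies inside the polygon."""
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        if (y1 > y) != (y2 > y):  # edge straddles the horizontal ray through (x, y)
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

@dataclass
class ShadowMap:
    """Hypothetical shadow map: dark areas kept as polygons on the road plane."""
    dark_areas: List[List[Point]]

    def in_dark_area(self, x: float, y: float) -> bool:
        return any(point_in_polygon(x, y, poly) for poly in self.dark_areas)

# Example: inside the polygon the module would apply the dark area exposure (T2, G2),
# outside it the bright area exposure (T1, G1).
shadow_map = ShadowMap(dark_areas=[[(0.0, 0.0), (10.0, 0.0), (10.0, 5.0), (0.0, 5.0)]])
print(shadow_map.in_dark_area(3.0, 2.0))    # True
print(shadow_map.in_dark_area(12.0, 2.0))   # False
```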
10. A system for adapting a vehicle camera to a shadow area based on shadow recognition technology, comprising:
the camera is used for collecting sunlight, bright area images and dark area images;
a radar for identifying objects around the vehicle;
the cloud server is used for sending the three-dimensional map;
a locator for locating the vehicle;
the controller of claim 9.
CN202110474818.7A 2021-04-29 2021-04-29 Method, controller and system for adapting vehicle camera to shadow area Active CN113364990B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110474818.7A CN113364990B (en) 2021-04-29 2021-04-29 Method, controller and system for adapting vehicle camera to shadow area

Publications (2)

Publication Number Publication Date
CN113364990A CN113364990A (en) 2021-09-07
CN113364990B true CN113364990B (en) 2023-04-07

Family

ID=77525637

Country Status (1)

Country Link
CN (1) CN113364990B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115934868A (en) * 2021-09-19 2023-04-07 华为技术有限公司 Map data processing method and device

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1269083C (en) * 2004-07-22 2006-08-09 上海交通大学 Obtaining method for shadow projecting direction in aviation image
JPWO2007083494A1 (en) * 2006-01-17 2009-06-11 日本電気株式会社 Graphic recognition apparatus, graphic recognition method, and graphic recognition program
CN101894382B (en) * 2010-07-23 2012-06-06 同济大学 Satellite stereo image shadow calculating method integrated with light detection and ranging (LiDAR) point clouds
CN104913784B (en) * 2015-06-19 2017-10-10 北京理工大学 A kind of autonomous extracting method of planetary surface navigation characteristic
CN110794848B (en) * 2019-11-27 2020-11-03 北京三快在线科技有限公司 Unmanned vehicle control method and device
CN112672070B (en) * 2020-12-30 2022-07-26 惠州华阳通用电子有限公司 Camera shooting parameter adjusting method
CN112927336B (en) * 2021-03-26 2024-02-20 智道网联科技(北京)有限公司 Shadow processing method and device for three-dimensional building for road information display

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant