CN117508002A - Vehicle high beam control method, system, medium and equipment - Google Patents

Vehicle high beam control method, system, medium and equipment

Info

Publication number
CN117508002A
CN117508002A (application CN202311733925.2A)
Authority
CN
China
Prior art keywords: vehicle, light spot, detection frame, illuminance, high beam
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311733925.2A
Other languages
Chinese (zh)
Inventor
李君宝
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Human Horizons Shanghai Autopilot Technology Co Ltd
Original Assignee
Human Horizons Shanghai Autopilot Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Human Horizons Shanghai Autopilot Technology Co Ltd filed Critical Human Horizons Shanghai Autopilot Technology Co Ltd
Priority to CN202311733925.2A
Publication of CN117508002A
Legal status: Pending


Classifications

    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60QARRANGEMENT OF SIGNALLING OR LIGHTING DEVICES, THE MOUNTING OR SUPPORTING THEREOF OR CIRCUITS THEREFOR, FOR VEHICLES IN GENERAL
    • B60Q1/00Arrangement of optical signalling or lighting devices, the mounting or supporting thereof or circuits therefor
    • B60Q1/02Arrangement of optical signalling or lighting devices, the mounting or supporting thereof or circuits therefor the devices being primarily intended to illuminate the way ahead or to illuminate other areas of way or environments
    • B60Q1/04Arrangement of optical signalling or lighting devices, the mounting or supporting thereof or circuits therefor the devices being primarily intended to illuminate the way ahead or to illuminate other areas of way or environments the devices being headlights
    • B60Q1/06Arrangement of optical signalling or lighting devices, the mounting or supporting thereof or circuits therefor the devices being primarily intended to illuminate the way ahead or to illuminate other areas of way or environments the devices being headlights adjustable, e.g. remotely-controlled from inside vehicle
    • B60Q1/08Arrangement of optical signalling or lighting devices, the mounting or supporting thereof or circuits therefor the devices being primarily intended to illuminate the way ahead or to illuminate other areas of way or environments the devices being headlights adjustable, e.g. remotely-controlled from inside vehicle automatically
    • B60Q1/085Arrangement of optical signalling or lighting devices, the mounting or supporting thereof or circuits therefor the devices being primarily intended to illuminate the way ahead or to illuminate other areas of way or environments the devices being headlights adjustable, e.g. remotely-controlled from inside vehicle automatically due to special conditions, e.g. adverse weather, type of road, badly illuminated road signs or potential dangers

Abstract

The invention discloses a vehicle high beam control method, system, medium and equipment, wherein the method comprises the following steps: acquiring multi-sensor data, where the multi-sensor data includes an initial image and radar data; determining illuminance and first light spot information according to the initial image; determining a vehicle target detection result based on the initial image and the radar data; performing association processing on the first light spot information and the vehicle target detection result to obtain second light spot information; and controlling the host vehicle's high beam based on the illuminance and the second light spot information, so that the working state of the high beam can be controlled accurately.

Description

Vehicle high beam control method, system, medium and equipment
Technical Field
The invention relates to the field of automobiles, and in particular to a vehicle high beam control method, system, medium and equipment.
Background
When driving at night on a dark road without street lamps, the visible range with the high beam on is far greater than with only the low beam, because the high beam illuminates at a higher angle, over a longer distance, and with more concentrated light. In such an environment it is necessary to turn on the high beam. On the other hand, keeping the high beam on continuously for a long time easily affects other vehicles on the road; for example, during a meeting the driver of the oncoming vehicle is dazzled by the strong light, producing a temporary night-blindness-like effect that seriously endangers traffic safety.
In the prior art, the high beam is switched off when an on-board radar or camera detects a target vehicle. If only the on-board radar is used, its short detection range easily causes the high beam to be switched off too late and the control distance to be too short; if only the on-board camera is used, the high beam is switched off as soon as any light spot appears in the image, which easily causes false switch-offs and an excessive control distance.
Disclosure of Invention
To solve the above technical problems, embodiments of the invention provide a vehicle high beam control method, system, medium and equipment that can accurately control the working state of the host vehicle's high beam.
In order to achieve the above object, an embodiment of the present invention provides a method for controlling a high beam of a vehicle, including:
acquiring multi-sensor data; wherein the multi-sensor data includes an initial image and radar data;
determining illuminance and first light spot information according to the initial image;
determining a vehicle target detection result based on the initial image and the radar data;
performing association processing on the first light spot information and the vehicle target detection result to obtain second light spot information;
and controlling the host vehicle's high beam based on the illuminance and the second light spot information.
Further, the illuminance includes global illuminance and local illuminance;
and determining illuminance and first light spot information according to the initial image comprises:
graying the initial image to obtain a grayscale image, and determining an HSV image of the initial image;
determining first light spot information based on the grayscale image and the HSV image;
calculating global illuminance and local illuminance according to the brightness of the pixels in the initial image; the local illuminance is the illuminance of a preset first area in the initial image.
Further, the determining the first light spot information based on the grayscale image and the HSV image includes:
defining a region in the grayscale image whose pixel brightness is higher than a preset first brightness threshold as a first light spot detection frame;
generating a tail light detection frame according to a region in the HSV image whose color gamut is red and whose pixel brightness is higher than a preset second brightness threshold;
calculating the inter-vehicle distance between the vehicle corresponding to the tail light detection frame and the host vehicle based on the width of the tail light detection frame and a preset vehicle width;
performing first matching processing on the first light spot detection frame and the tail light detection frame, and generating first light spot information based on the result of the first matching processing, the inter-vehicle distance and the first light spot detection frame;
wherein the result of the first matching processing includes: if the tail light detection frame and the first light spot detection frame have an overlapping area, the light spot attribute corresponding to the first light spot detection frame is judged to be a tail light; otherwise, the light spot attribute corresponding to the first light spot detection frame is judged to be a headlight.
Further, the generating a tail light detection frame according to the region in the HSV image whose color gamut is red and whose pixel brightness is higher than a preset second brightness threshold includes:
defining all regions in the HSV image whose color gamut is red and whose pixel brightness is higher than the preset second brightness threshold as a plurality of detection frames to be associated;
calculating the area difference, longitudinal distance difference and lateral distance difference between the detection frames to be associated;
and associating the detection frames to be associated based on the area difference, the longitudinal distance difference and the lateral distance difference to obtain tail light detection frames belonging to the same vehicle.
Further, the performing association processing on the first light spot information and the vehicle target detection result to obtain second light spot information includes:
determining a relative position between the first light spot detection frame contained in the first light spot information and a vehicle detection frame contained in the vehicle target detection result;
when the relative position indicates that the first light spot detection frame is located inside the vehicle detection frame, associating the vehicle target detection result with the first light spot information to obtain second light spot information;
when the relative position indicates that the first light spot detection frame partially overlaps the vehicle detection frame, calculating the degree of overlap between the first light spot detection frame and the vehicle detection frame, performing second matching processing on the first light spot detection frame and the vehicle detection frame, and generating second light spot information based on the degree of overlap, the result of the second matching processing and the first light spot information.
Further, the controlling the host vehicle's high beam based on the illuminance and the second light spot information includes:
judging whether the global illuminance is smaller than a preset first illuminance threshold;
when the global illuminance is smaller than the preset first illuminance threshold, determining a current control mode for the host vehicle's high beam based on the local illuminance and the second light spot information;
wherein the control mode comprises at least one of: on and off.
Further, the determining, based on the local illuminance and the second light spot information, a current control mode for the host vehicle's high beam includes:
judging whether a close-range vehicle exists in front of the host vehicle according to the confidence in the second light spot information, wherein it is determined that a close-range vehicle exists in front of the host vehicle when the confidence is 1; the confidence is obtained from the association processing and indicates the degree of association between the first light spot information and the vehicle target detection result;
judging whether a remote vehicle exists in front of the host vehicle based on the local illuminance and the second light spot information;
if at least one of a close-range vehicle and a remote vehicle exists in front of the host vehicle, the control mode is determined to be off; otherwise, the control mode is determined to be on.
Further, the remote vehicle includes at least one of a remote oncoming vehicle and a remote same-direction vehicle; wherein,
if the light spot attribute corresponding to the second light spot detection frame contained in the second light spot information is a headlight, the local illuminance is greater than a preset second illuminance threshold, and the area of the second light spot detection frame is greater than a preset first area threshold, the remote vehicle is a remote oncoming vehicle;
if the light spot attribute corresponding to the second light spot detection frame is a tail light and the longitudinal distance of the light spot is greater than a preset first distance threshold, the remote vehicle is a remote same-direction vehicle; the longitudinal distance of the light spot is determined according to the width of the tail light detection frame matched with the second light spot detection frame.
Further, the method further comprises:
when the determined control mode is on, acquiring current host vehicle state information;
and judging whether the host vehicle state information meets a preset host vehicle state limiting condition; if so, controlling the host vehicle's high beam to turn on, and otherwise controlling it to turn off.
Further, the determining a vehicle target detection result based on the initial image and the radar data includes:
performing target detection on the initial image to obtain a first target detection result;
determining a second target detection result detected by the radar according to the radar data;
and performing fusion processing on the first target detection result and the second target detection result to obtain the vehicle target detection result.
The embodiment of the invention also provides a vehicle high beam control system, which comprises:
the data acquisition module is used for acquiring multi-sensor data; wherein the multi-sensor data includes an initial image and radar data;
the image processing module is used for determining illuminance and first light spot information according to the initial image;
a vehicle target detection module for determining a vehicle target detection result based on the initial image and the radar data;
the association processing module is used for carrying out association processing on the first light spot information and the vehicle target detection result to obtain second light spot information;
and the high beam control module is used for controlling the host vehicle's high beam based on the illuminance and the second light spot information.
The embodiment of the invention also provides a computer readable storage medium on which a computer program is stored, the computer program, when executed by a processor, implementing the steps of the vehicle high beam control method according to any one of the above.
The embodiment of the invention also provides computer equipment, which comprises a processor, a memory and a computer program stored in the memory and configured to be executed by the processor, wherein the steps of the vehicle high beam control method are realized when the processor executes the computer program.
In summary, the invention has the following beneficial effects:
with the embodiments of the invention, multi-sensor data are acquired, where the multi-sensor data include an initial image and radar data; illuminance and first light spot information are determined according to the initial image; a vehicle target detection result is determined based on the initial image and the radar data; the first light spot information and the vehicle target detection result are associated to obtain second light spot information; and the host vehicle's high beam is controlled based on the illuminance and the second light spot information, so that the data detected by multiple sensors can be integrated to control the high beam accurately.
Drawings
FIG. 1 is a schematic flow chart of one embodiment of a method for controlling a high beam of a vehicle according to the present invention;
FIG. 2 is a schematic diagram of one embodiment of a vehicle high beam control system provided by the present invention;
FIG. 3 is a schematic diagram of one embodiment of a computer device provided by the present invention;
FIG. 4 is a schematic diagram of one embodiment of a vehicle high beam control method provided by the present invention;
FIG. 5 is a schematic diagram of one embodiment of a spot detection frame provided by the present invention;
FIG. 6 is a schematic diagram of one embodiment of the acquisition of a taillight detection frame provided by the present invention;
FIG. 7 is a schematic diagram of one embodiment of the calculation of the inter-vehicle distance provided by the present invention;
FIG. 8 is a schematic diagram of one embodiment of localized illuminance region selection provided by the present invention;
FIG. 9 is a schematic diagram of one embodiment of determining first spot information provided by the present invention;
FIG. 10 is a schematic view of one embodiment of a close-range oncoming vehicle and a reflective cone provided by the present invention;
FIG. 11 is a schematic view of an embodiment of a close range vehicle provided by the present invention;
FIG. 12 is a schematic view of one embodiment of a remote oncoming vehicle provided by the present invention;
FIG. 13 is a schematic view of one embodiment of a remote same-direction vehicle provided by the present invention.
Detailed Description
The embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings. It is apparent that the described embodiments are only some, not all, of the embodiments of the invention. All other embodiments obtained by those skilled in the art based on the embodiments of the invention without inventive effort fall within the scope of the invention.
In the description of this application, the terms "first," "second," "third," and the like are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined with "first," "second," "third," etc. may explicitly or implicitly include one or more such features. In the description of this application, unless otherwise indicated, "a plurality" means two or more.
In the description of this application, it should be noted that, unless explicitly specified and limited otherwise, the terms "mounted," "connected," and "coupled" are to be construed broadly: a connection may, for example, be fixed, detachable, or integral; mechanical or electrical; direct, indirect through an intermediate medium, or a communication between two elements. The specific meaning of these terms in this application will be understood by those of ordinary skill in the art in the specific context.
In the description of this application, unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art. The terminology used herein is for describing particular embodiments only and is not intended to limit the invention.
Referring to fig. 1, a schematic flow chart of an embodiment of a control method for a high beam of a vehicle according to the present invention includes steps S1 to S5, specifically as follows:
s1, acquiring multi-sensor data; wherein the multi-sensor data includes an initial image and radar data;
s2, determining illuminance and first light spot information according to the initial image;
s3, determining a vehicle target detection result based on the initial image and the radar data;
s4, carrying out association processing on the first light spot information and the vehicle target detection result to obtain second light spot information;
S5, controlling the host vehicle's high beam based on the illuminance and the second light spot information.
The radar data comprise lidar data and/or millimeter-wave radar data; the lidar data are acquired through a vehicle-mounted lidar installed on the host vehicle, and the millimeter-wave radar data are acquired through a vehicle-mounted millimeter-wave radar installed on the host vehicle.
The initial image is acquired by a vehicle-mounted camera mounted on the host vehicle.
In an alternative embodiment, the illuminance includes a global illuminance and a local illuminance;
And determining illuminance and first light spot information according to the initial image, wherein the determining comprises:
graying the initial image to obtain a gray image, and determining an HSV image of the initial image;
determining first spot information based on the gray scale image and the HSV image;
calculating global illuminance and local illuminance according to the brightness of the pixel points in the initial image; the local illuminance is an illuminance of a preset first area in the initial image.
It should be noted that the HSV image is obtained by converting the initial image into HSV (Hue, Saturation, Value) format.
It will be appreciated that the global illuminance characterizes the overall brightness of the image area, i.e. the ambient illuminance, while the local illuminance, via different preset thresholds, is used to judge whether a vehicle is present in front of the host vehicle and to adjust the distance to the vehicle ahead at which the high beam is turned off.
Note that, referring to fig. 8, the upper boundary line of the first area is higher than the horizon of the road on which the host vehicle is located, and the horizontal distance of the lower boundary line from the host vehicle is smaller than the radar detection distance indicated by the lidar data. In this embodiment, since oncoming traffic is generally on the left side of the host vehicle in right-hand-traffic regions, the area used for the local illuminance is generally offset to the left as a whole; the left and right boundary lines of the first area can therefore be adjusted according to the irradiation range of the host vehicle's high beam and the actual control effect, and are not particularly limited.
It should be noted that the global illuminance is the overall brightness within a preset second area in the initial image.
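As a minimal sketch of how the two illuminance values could be computed, assuming mean gray-level brightness is used as the illuminance measure and that the first and second areas are given as pixel rectangles (the OpenCV pipeline and the region parameters here are illustrative assumptions, not specified by the embodiment):

```python
import cv2
import numpy as np

def compute_illuminance(initial_image_bgr, first_region, second_region):
    """Estimate global and local illuminance as mean gray-level brightness.

    first_region / second_region: (x, y, w, h) pixel rectangles; the actual
    region boundaries (e.g. relative to the horizon) are calibration choices.
    """
    gray = cv2.cvtColor(initial_image_bgr, cv2.COLOR_BGR2GRAY)

    x, y, w, h = second_region
    global_illuminance = float(np.mean(gray[y:y + h, x:x + w]))

    x, y, w, h = first_region
    local_illuminance = float(np.mean(gray[y:y + h, x:x + w]))

    return global_illuminance, local_illuminance
```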
In an alternative embodiment, the determining the first light spot information based on the grayscale image and the HSV image includes:
defining a region in the grayscale image whose pixel brightness is higher than a preset first brightness threshold as a first light spot detection frame;
generating a tail light detection frame according to a region in the HSV image whose color gamut is red and whose pixel brightness is higher than a preset second brightness threshold;
calculating the inter-vehicle distance between the vehicle corresponding to the tail light detection frame and the host vehicle based on the width of the tail light detection frame and a preset vehicle width;
performing first matching processing on the first light spot detection frame and the tail light detection frame, and generating first light spot information based on the result of the first matching processing, the inter-vehicle distance and the first light spot detection frame;
wherein the result of the first matching processing includes: if the tail light detection frame and the first light spot detection frame have an overlapping area, the light spot attribute corresponding to the first light spot detection frame is judged to be a tail light; otherwise, the light spot attribute corresponding to the first light spot detection frame is judged to be a headlight.
It can be understood that, referring to fig. 9, the process of acquiring the first light spot information combines two image processing techniques, graying and HSV conversion. In the grayscale image, light spot detection is performed, and the output first light spot detection frame does not by itself distinguish headlights from tail lights. In the HSV color space, tail light detection is performed: all areas whose Hue, Saturation and Value lie within preset threshold ranges are framed by thresholding, and after a mask is set, a morphological closing is applied to the framed areas to obtain complete tail light detection frames. The first matching processing therefore allows the generated first light spot information to carry the relation between the light spot detection frame and the tail light detection frame, so as to judge whether a light spot represents a tail light or a headlight of a vehicle.
The first matching processing may be, for example, Hungarian matching, which is not particularly limited herein.
Illustratively, all of the detection frames are 2D detection frames.
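The two detection passes described above could be sketched as follows, assuming an OpenCV (version 4) pipeline; the threshold values and the red hue ranges are illustrative assumptions, not values from the embodiment:

```python
import cv2

def detect_boxes(initial_image_bgr,
                 first_brightness_threshold=230,    # assumed calibration value
                 second_brightness_threshold=150):  # assumed calibration value
    gray = cv2.cvtColor(initial_image_bgr, cv2.COLOR_BGR2GRAY)
    hsv = cv2.cvtColor(initial_image_bgr, cv2.COLOR_BGR2HSV)

    # Light spot candidates: bright regions in the grayscale image.
    _, spot_mask = cv2.threshold(gray, first_brightness_threshold, 255,
                                 cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(spot_mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    spot_boxes = [cv2.boundingRect(c) for c in contours]

    # Tail light candidates: red hue with sufficient brightness. Red wraps
    # around the hue axis in OpenCV (0-180), hence two ranges.
    red_lo = cv2.inRange(hsv, (0, 80, second_brightness_threshold),
                         (10, 255, 255))
    red_hi = cv2.inRange(hsv, (170, 80, second_brightness_threshold),
                         (180, 255, 255))
    tail_mask = cv2.bitwise_or(red_lo, red_hi)

    # Morphological closing to merge each tail light into one complete frame.
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (9, 9))
    tail_mask = cv2.morphologyEx(tail_mask, cv2.MORPH_CLOSE, kernel)
    contours, _ = cv2.findContours(tail_mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    tail_boxes = [cv2.boundingRect(c) for c in contours]

    return spot_boxes, tail_boxes
```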
Specifically, referring to fig. 7, the present embodiment adopts a monocular ranging method, and the inter-vehicle distance is calculated by the following formula:
D = (W_light × F) / W_p
where D is the inter-vehicle distance; in this embodiment the position of the camera is taken as the position of the host vehicle, and the positions of the tail lights of other vehicles are taken as the positions of those vehicles.
If the tail light detection frame has a pre-paired tail light, the pixel width W_p = W_ab × P_w; otherwise W_p = W_a × P_w or W_p = W_b × P_w, where W_a is the width of the light spot detection frame of light spot A, W_b is the width of the light spot detection frame of light spot B, and W_ab is the width of the outer frame enclosing the paired light spots A and B (the frame widths being expressed as fractions of the image width); W_light is the vehicle width and F is the focal length of the camera.
In this embodiment, the resolution of the camera is 8 megapixels, with a horizontal pixel count P_w of 3840 and a vertical pixel count P_h of 2160.
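In code, the ranging formula is a one-liner; the sketch below assumes the frame widths are fractions of the image width (consistent with the multiplication by P_w above) and that the focal length F is expressed in pixels:

```python
def inter_vehicle_distance(w_light_m, focal_length_px, frame_width_norm,
                           p_w=3840):
    """Monocular range estimate D = (W_light * F) / W_p.

    w_light_m:        preset real vehicle width, in metres
    focal_length_px:  camera focal length, in pixels (assumed unit)
    frame_width_norm: detection frame width as a fraction of image width
                      (W_ab for a paired tail light, else W_a or W_b)
    p_w:              horizontal pixel count of the camera (3840 here)
    """
    w_p = frame_width_norm * p_w          # pixel width of the frame
    return (w_light_m * focal_length_px) / w_p
```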
For example, referring to fig. 5, the upper boundary line of the first light spot detection frame is higher than the horizon of the road on which the host vehicle is located and lower than the line connecting the street lamps; this embodiment can thereby filter out the interference of most street lamps.
In an alternative embodiment, the generating a tail light detection frame according to the region in the HSV image whose color gamut is red and whose pixel brightness is higher than a preset second brightness threshold includes:
defining all regions in the HSV image whose color gamut is red and whose pixel brightness is higher than the preset second brightness threshold as a plurality of detection frames to be associated;
calculating the area difference, longitudinal distance difference and lateral distance difference between the detection frames to be associated;
and associating the detection frames to be associated based on the area difference, the longitudinal distance difference and the lateral distance difference to obtain tail light detection frames belonging to the same vehicle.
In particular, referring to fig. 6, (X_a, Y_a) is the geometric center point of the light spot detection frame of light spot A, (X_b, Y_b) is the geometric center point of the light spot detection frame of light spot B, (X_amin, Y_amin) and (X_amax, Y_amax) are the diagonal corner points of the light spot detection frame of light spot A, and (X_bmin, Y_bmin) and (X_bmax, Y_bmax) are the diagonal corner points of the light spot detection frame of light spot B.
When the area difference, the longitudinal distance difference and the lateral distance difference of any two detection frames to be associated satisfy preset limits, the two detection frames are judged to be the tail light detection frames of the same vehicle,
where S_a is the area of the light spot detection frame of light spot A, S_b is the area of the light spot detection frame of light spot B, W_a and W_b are the widths, and H_a and H_b the heights, of the two light spot detection frames; Q, R, P are preset calibration values that can be set correspondingly for different vehicle types.
In an alternative embodiment, the performing association processing on the first light spot information and the vehicle target detection result to obtain second light spot information includes:
determining a relative position between the first light spot detection frame contained in the first light spot information and a vehicle detection frame contained in the vehicle target detection result;
when the relative position indicates that the first light spot detection frame is located inside the vehicle detection frame, associating the vehicle target detection result with the first light spot information to obtain second light spot information;
when the relative position indicates that the first light spot detection frame partially overlaps the vehicle detection frame, calculating the degree of overlap between the first light spot detection frame and the vehicle detection frame, performing second matching processing on the first light spot detection frame and the vehicle detection frame, and generating second light spot information based on the degree of overlap, the result of the second matching processing and the first light spot information.
It can be understood that in this embodiment the light spot information is associated with the vehicle target detection result, so that the generated second light spot information carries the correspondence between the light spot information and the vehicle target.
It should be noted that the second light spot information carries a confidence, which indicates the degree of association between the first light spot information and the vehicle target detection result: a confidence of 0 indicates that the first light spot information is not associated with a vehicle target detection result, and a confidence of 1 indicates that it is.
Illustratively, the second light spot information carries: the associated vehicle target detection result, the light spot attribute (headlight or tail light), the detected driving direction of the vehicle, the longitudinal distance of the detection frame, the lateral distance of the detection frame, the longitudinal speed of the vehicle target, the lateral speed of the vehicle target, the longitudinal pixel position and the lateral pixel position.
The second matching processing may be, for example, Hungarian matching, and is not particularly limited herein.
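A sketch of the association step: a first light spot frame lying fully inside a vehicle detection frame is associated directly (confidence 1); a partially overlapping frame is scored by its degree of overlap (IoU is assumed here, since the embodiment does not name the measure) and matched; an unmatched frame keeps confidence 0. The overlap threshold is an assumption:

```python
def iou(a, b):
    """Intersection-over-union of two (x_min, y_min, x_max, y_max) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / float(area_a + area_b - inter) if inter else 0.0

def contains(outer, inner):
    return (outer[0] <= inner[0] and outer[1] <= inner[1]
            and outer[2] >= inner[2] and outer[3] >= inner[3])

def associate_spot(spot_box, vehicle_boxes, overlap_threshold=0.3):
    """Return (confidence, matched vehicle box or None)."""
    for vbox in vehicle_boxes:
        if contains(vbox, spot_box):
            return 1, vbox                      # spot lies inside a vehicle
    best = max(vehicle_boxes, key=lambda v: iou(spot_box, v), default=None)
    if best is not None and iou(spot_box, best) > overlap_threshold:
        return 1, best                          # partial overlap, matched
    return 0, None                              # likely a passive light source
```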
In an alternative embodiment, the controlling the host vehicle's high beam based on the illuminance and the second light spot information includes:
judging whether the global illuminance is smaller than a preset first illuminance threshold;
when the global illuminance is smaller than the preset first illuminance threshold, determining a current control mode for the host vehicle's high beam based on the local illuminance and the second light spot information;
wherein the control mode comprises at least one of: on and off.
The control mode covers the specific means of turning the high beam on and off, such as the on duration, the turn-on delay and the turn-off delay, which are not limited herein.
In an alternative embodiment, the determining, based on the local illuminance and the second light spot information, a current control mode for the host vehicle's high beam includes:
judging whether a close-range vehicle exists in front of the host vehicle according to the confidence in the second light spot information, wherein it is determined that a close-range vehicle exists in front of the host vehicle when the confidence is 1; the confidence is obtained from the association processing and indicates the degree of association between the first light spot information and the vehicle target detection result;
judging whether a remote vehicle exists in front of the host vehicle based on the local illuminance and the second light spot information;
if at least one of a close-range vehicle and a remote vehicle exists in front of the host vehicle, the control mode is determined to be off; otherwise, the control mode is determined to be on.
It can be appreciated that, referring to fig. 10 and 11, for close-range vehicles the confidence is used for the judgment: a light spot with a confidence of 1 is judged to be a vehicle, while a light spot with a confidence of 0 is judged to be a passive light source such as a reflective cone and/or a sign board, so that whether a vehicle is present in a close-range traffic scene can be accurately identified.
In an alternative embodiment, the remote vehicle includes at least one of a remote oncoming vehicle and a remote same-direction vehicle; wherein,
if the light spot attribute corresponding to the second light spot detection frame contained in the second light spot information is a headlight, the local illuminance is greater than a preset second illuminance threshold, and the area of the second light spot detection frame is greater than a preset first area threshold, the remote vehicle is a remote oncoming vehicle;
if the light spot attribute corresponding to the second light spot detection frame is a tail light and the longitudinal distance of the light spot is greater than a preset first distance threshold, the remote vehicle is a remote same-direction vehicle; the longitudinal distance of the light spot is determined according to the width of the tail light detection frame matched with the second light spot detection frame.
It can be understood that, referring to fig. 12, the lamps of distant oncoming vehicles are generally bright and often appear as large circular spots in the image, while the vehicle body is not fully captured, so the distance cannot be calculated from the spot or from the detection frame of the vehicle. The prior art therefore generally turns off the high beam immediately once a spot is detected, which easily causes false switch-offs. In this embodiment, referring to fig. 13, by setting conditions on the local illuminance and the second light spot information, whether an oncoming and/or same-direction vehicle is present in a remote traffic scene can be accurately identified, so that the host vehicle's high beam is switched on and off accurately and false switch-offs are avoided.
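Putting the close-range and remote criteria together, the decision logic might be sketched as follows; the Spot structure, the treatment of the bright-ambient case as "off", and all parameter names are assumptions layered on the embodiment's description:

```python
from dataclasses import dataclass

@dataclass
class Spot:
    confidence: int               # 1 = associated with a vehicle target, 0 = not
    attribute: str                # "headlight" or "tail_light"
    area: float                   # area of the second light spot detection frame
    longitudinal_distance: float  # from the matched tail light frame width

def high_beam_mode(global_illuminance, local_illuminance, spots,
                   first_illuminance_threshold, second_illuminance_threshold,
                   first_area_threshold, first_distance_threshold):
    """Return "on" or "off"; all thresholds are preset calibration values."""
    if global_illuminance >= first_illuminance_threshold:
        return "off"          # ambient light bright enough (assumed behavior)

    close_vehicle = any(s.confidence == 1 for s in spots)

    remote_oncoming = any(
        s.attribute == "headlight"
        and local_illuminance > second_illuminance_threshold
        and s.area > first_area_threshold
        for s in spots)
    remote_same_direction = any(
        s.attribute == "tail_light"
        and s.longitudinal_distance > first_distance_threshold
        for s in spots)

    if close_vehicle or remote_oncoming or remote_same_direction:
        return "off"
    return "on"
```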
In an alternative embodiment, the method further comprises:
when the determined control mode is on, acquiring current host vehicle state information;
and judging whether the host vehicle state information meets a preset host vehicle state limiting condition; if so, controlling the host vehicle's high beam to turn on, and otherwise controlling it to turn off.
Illustratively, the host vehicle state limiting conditions include one or more of the following:
(1) the wiper is not at high speed; (2) the gear is D; (3) the absolute value of the yaw rate is less than 6°/s; (4) the indicated speed is greater than 40 km/h; (5) the absolute value of the lateral acceleration is less than 2 m/s²; (6) the fog lamp is not turned on.
It can be understood that, referring to fig. 4, when it is determined that the external environment meets the condition for turning on the high beam, this embodiment further judges whether the host vehicle's state is suitable for turning it on, thereby further ensuring driving safety.
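A minimal sketch of the state check, assuming a `state` object exposing the six signals above under hypothetical names:

```python
def host_state_allows_high_beam(state):
    """Check the example host vehicle state limiting conditions.

    `state` is assumed to expose the signals below; the attribute names
    are illustrative, not taken from the embodiment.
    """
    return (not state.wiper_high_speed                 # (1)
            and state.gear == "D"                      # (2)
            and abs(state.yaw_rate_deg_s) < 6.0        # (3)
            and state.indicated_speed_kph > 40.0       # (4)
            and abs(state.lateral_accel_mps2) < 2.0    # (5)
            and not state.fog_lamp_on)                 # (6)
```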
In an alternative embodiment, the determining the vehicle target detection result based on the initial image and the radar data includes:
performing target detection on the initial image to obtain a first target detection result;
determining a second target detection result detected by the radar according to the radar data;
and performing fusion processing on the first target detection result and the second target detection result to obtain the vehicle target detection result.
It can be understood that in this embodiment the vehicle targets detected by the various sensors are fused, so that the fused vehicle target detection result integrates the information of each sensor, namely the information detected by the radar and the information detected by the camera, and the finally obtained vehicle target detection result is therefore more accurate.
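The fusion processing itself is not detailed in the embodiment; one common approach, offered here purely as an assumed sketch, is to project radar targets into the image and associate them with camera detections by nearest-neighbor gating:

```python
def fuse_targets(camera_boxes, radar_targets, project, gate_px=50.0):
    """Merge camera and radar detections into fused vehicle targets.

    camera_boxes:  list of (x_min, y_min, x_max, y_max) image boxes
    radar_targets: list of objects with a .position attribute
    project:       assumed calibration function mapping a radar position
                   to (u, v) pixel coordinates
    gate_px:       assumed association gate in pixels
    """
    fused, used = [], set()
    for box in camera_boxes:
        cx, cy = (box[0] + box[2]) / 2, (box[1] + box[3]) / 2
        best, best_d = None, gate_px
        for i, tgt in enumerate(radar_targets):
            if i in used:
                continue
            u, v = project(tgt.position)
            d = ((u - cx) ** 2 + (v - cy) ** 2) ** 0.5
            if d < best_d:
                best, best_d = i, d
        if best is not None:
            used.add(best)
            fused.append({"box": box, "radar": radar_targets[best]})
        else:
            fused.append({"box": box, "radar": None})  # camera-only target
    return fused
```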
Referring to fig. 2, a schematic structural diagram of an embodiment of a high beam control system for a vehicle according to an embodiment of the present invention includes:
a data acquisition module 101 for acquiring multi-sensor data; wherein the multi-sensor data includes an initial image and radar data;
the image processing module 102 is configured to determine illuminance and first light spot information according to the initial image;
a vehicle target detection module 103 for determining a vehicle target detection result based on the initial image and the radar data;
the association processing module 104 is configured to perform association processing on the first light spot information and the vehicle target detection result to obtain second light spot information;
the high beam control module 105 is configured to control the host vehicle's high beam based on the illuminance and the second light spot information.
In an alternative embodiment, the illuminance includes a global illuminance and a local illuminance;
and determining illuminance and first light spot information according to the initial image comprises:
graying the initial image to obtain a grayscale image, and determining an HSV image of the initial image;
determining first light spot information based on the grayscale image and the HSV image;
calculating global illuminance and local illuminance according to the brightness of the pixels in the initial image; the local illuminance is the illuminance of a preset first area in the initial image.
In an alternative embodiment, the determining the first light spot information based on the grayscale image and the HSV image includes:
defining a region in the grayscale image whose pixel brightness is higher than a preset first brightness threshold as a first light spot detection frame;
generating a tail light detection frame according to a region in the HSV image whose color gamut is red and whose pixel brightness is higher than a preset second brightness threshold;
calculating the inter-vehicle distance between the vehicle corresponding to the tail light detection frame and the host vehicle based on the width of the tail light detection frame and a preset vehicle width;
performing first matching processing on the first light spot detection frame and the tail light detection frame, and generating first light spot information based on the result of the first matching processing, the inter-vehicle distance and the first light spot detection frame;
wherein the result of the first matching processing includes: if the tail light detection frame and the first light spot detection frame have an overlapping area, the light spot attribute corresponding to the first light spot detection frame is judged to be a tail light; otherwise, the light spot attribute corresponding to the first light spot detection frame is judged to be a headlight.
In an alternative embodiment, the generating a tail light detection frame according to the region in the HSV image whose color gamut is red and whose pixel brightness is higher than a preset second brightness threshold includes:
defining all regions in the HSV image whose color gamut is red and whose pixel brightness is higher than the preset second brightness threshold as a plurality of detection frames to be associated;
calculating the area difference, longitudinal distance difference and lateral distance difference between the detection frames to be associated;
and associating the detection frames to be associated based on the area difference, the longitudinal distance difference and the lateral distance difference to obtain tail light detection frames belonging to the same vehicle.
In an alternative embodiment, the performing association processing on the first light spot information and the vehicle target detection result to obtain second light spot information includes:
determining a relative position between the first light spot detection frame contained in the first light spot information and a vehicle detection frame contained in the vehicle target detection result;
when the relative position indicates that the first light spot detection frame is located inside the vehicle detection frame, associating the vehicle target detection result with the first light spot information to obtain second light spot information;
when the relative position indicates that the first light spot detection frame partially overlaps the vehicle detection frame, calculating the degree of overlap between the first light spot detection frame and the vehicle detection frame, performing second matching processing on the first light spot detection frame and the vehicle detection frame, and generating second light spot information based on the degree of overlap, the result of the second matching processing and the first light spot information.
In an alternative embodiment, the controlling the host vehicle's high beam based on the illuminance and the second light spot information includes:
judging whether the global illuminance is smaller than a preset first illuminance threshold;
when the global illuminance is smaller than the preset first illuminance threshold, determining a current control mode for the host vehicle's high beam based on the local illuminance and the second light spot information;
wherein the control mode comprises at least one of: on and off.
In an alternative embodiment, the determining, based on the local illuminance and the second light spot information, a current control mode for the host vehicle's high beam includes:
judging whether a close-range vehicle exists in front of the host vehicle according to the confidence in the second light spot information, wherein it is determined that a close-range vehicle exists in front of the host vehicle when the confidence is 1; the confidence is obtained from the association processing and indicates the degree of association between the first light spot information and the vehicle target detection result;
judging whether a remote vehicle exists in front of the host vehicle based on the local illuminance and the second light spot information;
if at least one of a close-range vehicle and a remote vehicle exists in front of the host vehicle, the control mode is determined to be off; otherwise, the control mode is determined to be on.
In an alternative embodiment, the remote vehicle includes at least one of a remote oncoming vehicle and a remote same-direction vehicle; wherein,
if the light spot attribute corresponding to the second light spot detection frame contained in the second light spot information is a headlight, the local illuminance is greater than a preset second illuminance threshold, and the area of the second light spot detection frame is greater than a preset first area threshold, the remote vehicle is a remote oncoming vehicle;
if the light spot attribute corresponding to the second light spot detection frame is a tail light and the longitudinal distance of the light spot is greater than a preset first distance threshold, the remote vehicle is a remote same-direction vehicle; the longitudinal distance of the light spot is determined according to the width of the tail light detection frame matched with the second light spot detection frame.
In an alternative embodiment, the method further comprises:
when the determined control mode is on, acquiring current host vehicle state information;
and judging whether the host vehicle state information meets a preset host vehicle state limiting condition; if so, controlling the host vehicle's high beam to turn on, and otherwise controlling it to turn off.
In an alternative embodiment, the determining the vehicle target detection result based on the initial image and the radar data includes:
performing target detection on the initial image to obtain a first target detection result;
determining a second target detection result detected by the radar according to the radar data;
and performing fusion processing on the first target detection result and the second target detection result to obtain the vehicle target detection result.
The embodiment of the invention also provides a computer readable storage medium on which a computer program is stored, the computer program, when executed by a processor, implementing the steps of the vehicle high beam control method according to any one of the above.
The embodiment of the invention also provides computer equipment, which comprises a processor, a memory and a computer program stored in the memory and configured to be executed by the processor, wherein the steps of the vehicle high beam control method are realized when the processor executes the computer program.
Referring to fig. 3, the computer device of this embodiment includes: a processor 301, a memory 302 and a computer program, such as a vehicle headlight control program, stored in said memory 302 and being executable on said processor 301. The processor 301, when executing the computer program, implements the steps of the above-described embodiments of the method for controlling a high beam of a vehicle, such as steps S1 to S5 shown in fig. 1.
Illustratively, the computer program may be partitioned into one or more modules/units that are stored in the memory 302 and executed by the processor 301 to accomplish the present invention. The one or more modules/units may be a series of computer program instruction segments capable of performing the specified functions, which instruction segments describe the execution of the computer program in the computer device.
The computer equipment can be a desktop computer, a notebook computer, a palm computer, a cloud server and other computing equipment. The computer device may include, but is not limited to, a processor 301, a memory 302. It will be appreciated by those skilled in the art that the schematic diagram is merely an example of a computer device and is not limiting of the computer device, and may include more or fewer components than shown, or may combine some of the components, or different components, e.g., the computer device may also include input and output devices, network access devices, buses, etc.
The processor 301 may be a central processing unit (CPU), or another general purpose processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. The general purpose processor may be a microprocessor or any conventional processor. The processor 301 is the control center of the computer device, connecting the various parts of the whole computer device through various interfaces and lines.
The memory 302 may be used to store the computer programs and/or modules, and the processor 301 implements the various functions of the computer device by running or executing the computer programs and/or modules stored in the memory 302 and invoking the data stored in the memory 302. The memory 302 may mainly include a program storage area and a data storage area, where the program storage area may store an operating system and the application programs required for at least one function (such as a sound playing function or an image playing function), and the data storage area may store data created according to the use of the device (such as audio data or a phonebook). In addition, the memory 302 may include high-speed random access memory and may also include non-volatile memory, such as a hard disk, memory, plug-in hard disk, Smart Media Card (SMC), Secure Digital (SD) card, flash card, at least one magnetic disk storage device, flash memory device, or other volatile solid-state storage device.
Where the integrated modules/units of the computer device are implemented in the form of software functional units and sold or used as stand-alone products, they may be stored in a computer readable storage medium. Based on such understanding, the present invention may implement all or part of the flow of the methods of the above embodiments by means of a computer program instructing the related hardware; the computer program may be stored in a computer readable storage medium and, when executed by the processor 301, implements the steps of each of the method embodiments described above. The computer program comprises computer program code, which may be in source code form, object code form, an executable file, some intermediate form, etc. The computer readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a read-only memory (ROM), a random access memory (RAM), an electrical carrier signal, a telecommunication signal, a software distribution medium, and so forth.
In summary, the invention has the following beneficial effects:
with the embodiments of the invention, multi-sensor data are acquired, where the multi-sensor data include an initial image and radar data; illuminance and first light spot information are determined according to the initial image; a vehicle target detection result is determined based on the initial image and the radar data; the first light spot information and the vehicle target detection result are associated to obtain second light spot information; and the host vehicle's high beam is controlled based on the illuminance and the second light spot information, so that the data detected by multiple sensors can be integrated to control the high beam accurately.
From the above description of the embodiments, it will be clear to those skilled in the art that the present invention may be implemented by means of software plus the necessary hardware platform, or of course entirely in hardware. With this understanding, all or part of the technical solution of the present invention that contributes over the background art may be embodied in the form of a software product, which may be stored in a storage medium such as a ROM/RAM, a magnetic disk or an optical disk, and which includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform the methods described in the embodiments, or in some parts of the embodiments, of the present invention.
While the foregoing describes the preferred embodiments of the present invention, it will be appreciated by those skilled in the art that changes and modifications may be made without departing from the principles of the invention, and such changes and modifications are also intended to be within the scope of the invention.

Claims (13)

1. A vehicle high beam control method, characterized by comprising:
acquiring multi-sensor data; wherein the multi-sensor data includes an initial image and radar data;
determining illuminance and first light spot information according to the initial image;
determining a vehicle target detection result based on the initial image and the radar data;
performing association processing on the first light spot information and the vehicle target detection result to obtain second light spot information;
and controlling the host vehicle's high beam based on the illuminance and the second light spot information.
2. The vehicle high beam control method according to claim 1, wherein the illuminance includes a global illuminance and a local illuminance;
and determining illuminance and first light spot information according to the initial image comprises:
graying the initial image to obtain a grayscale image, and determining an HSV image of the initial image;
determining first light spot information based on the grayscale image and the HSV image;
calculating global illuminance and local illuminance according to the brightness of the pixels in the initial image; the local illuminance is the illuminance of a preset first area in the initial image.
3. The vehicle high beam control method according to claim 2, wherein the determining first light spot information based on the grayscale image and the HSV image includes:
defining a region in the grayscale image whose pixel brightness is higher than a preset first brightness threshold as a first light spot detection frame;
generating a tail light detection frame according to a region in the HSV image whose color gamut is red and whose pixel brightness is higher than a preset second brightness threshold;
calculating the inter-vehicle distance between the vehicle corresponding to the tail light detection frame and the host vehicle based on the width of the tail light detection frame and a preset vehicle width;
performing first matching processing on the first light spot detection frame and the tail light detection frame, and generating first light spot information based on the result of the first matching processing, the inter-vehicle distance and the first light spot detection frame;
wherein the result of the first matching processing includes: if the tail light detection frame and the first light spot detection frame have an overlapping area, the light spot attribute corresponding to the first light spot detection frame is judged to be a tail light; otherwise, the light spot attribute corresponding to the first light spot detection frame is judged to be a headlight.
4. The vehicle high beam control method according to claim 3, wherein the generating a tail light detection frame according to the region in the HSV image whose color gamut is red and whose pixel brightness is higher than a preset second brightness threshold includes:
defining all regions in the HSV image whose color gamut is red and whose pixel brightness is higher than the preset second brightness threshold as a plurality of detection frames to be associated;
calculating the area difference, longitudinal distance difference and lateral distance difference between the detection frames to be associated;
and associating the detection frames to be associated based on the area difference, the longitudinal distance difference and the lateral distance difference to obtain tail light detection frames belonging to the same vehicle.
5. The vehicle high beam control method according to claim 3, wherein the performing of association processing on the first light spot information and the vehicle target detection result to obtain the second light spot information comprises:
determining a relative position between the first light spot detection frame contained in the first light spot information and a vehicle detection frame contained in the vehicle target detection result;
when the relative position indicates that the first light spot detection frame is located inside the vehicle detection frame, associating the vehicle target detection result with the first light spot information to obtain the second light spot information;
and when the relative position indicates that the first light spot detection frame partially overlaps the vehicle detection frame, calculating the degree of overlap between the first light spot detection frame and the vehicle detection frame, performing second matching processing on the first light spot detection frame and the vehicle detection frame, and generating the second light spot information based on the degree of overlap, the result of the second matching processing, and the first light spot information.
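A sketch of claim 5's association, using IoU as one plausible "degree of overlap" (the patent does not name a metric): full containment yields confidence 1, partial overlap keeps the IoU as the confidence; iou_threshold is an assumed preset standing in for the second matching processing.

```python
def iou(a, b):
    """Intersection-over-union of two (x, y, w, h) boxes."""
    ix = max(0, min(a[0] + a[2], b[0] + b[2]) - max(a[0], b[0]))
    iy = max(0, min(a[1] + a[3], b[1] + b[3]) - max(a[1], b[1]))
    inter = ix * iy
    union = a[2] * a[3] + b[2] * b[3] - inter
    return inter / union if union else 0.0

def associate_spots_with_vehicles(spots, vehicle_boxes, iou_threshold=0.3):
    """Build second light spot information carrying an association confidence."""
    second = []
    for spot in spots:
        confidence, matched = 0.0, None
        sx, sy, sw, sh = spot["box"]
        for vb in vehicle_boxes:
            inside = (vb[0] <= sx and vb[1] <= sy and
                      sx + sw <= vb[0] + vb[2] and sy + sh <= vb[1] + vb[3])
            if inside:
                confidence, matched = 1.0, vb  # spot fully inside the vehicle frame
                break
            overlap = iou(spot["box"], vb)
            if overlap >= iou_threshold and overlap > confidence:
                confidence, matched = overlap, vb
        second.append({**spot, "confidence": confidence, "vehicle": matched})
    return second
```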
6. The vehicle high beam control method according to any one of claims 3 to 5, wherein the controlling of the high beam of the host vehicle based on the illuminance and the second light spot information comprises:
determining whether the global illuminance is less than a preset first illuminance threshold;
and when the global illuminance is less than the preset first illuminance threshold, determining a current control mode for the high beam of the host vehicle based on the local illuminance and the second light spot information;
wherein the control mode comprises at least one of the following: on and off.
7. The vehicle high beam control method according to claim 6, wherein the determining of the current control mode for the high beam of the host vehicle based on the local illuminance and the second light spot information comprises:
determining whether a short-range vehicle exists in front of the host vehicle according to a confidence in the second light spot information, wherein a short-range vehicle is determined to exist in front of the host vehicle when the confidence is 1, and the confidence is obtained from the association processing and indicates the degree of association between the first light spot information and the vehicle target detection result;
determining whether a long-range vehicle exists in front of the host vehicle based on the local illuminance and the second light spot information;
and if at least one of a short-range vehicle and a long-range vehicle exists in front of the host vehicle, determining the control mode to be off; otherwise, determining the control mode to be on.
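Claims 6 and 7 combine into a small decision rule. The sketch below assumes mean-brightness illuminance values, an illustrative first illuminance threshold, and the far_vehicle_ahead() test sketched after claim 8.

```python
def decide_high_beam(global_illuminance, local_illuminance, second_spots,
                     global_threshold=30.0):  # assumed preset first illuminance threshold
    """Claims 6-7 as a decision rule over second light spot information."""
    if global_illuminance >= global_threshold:
        return None  # ambient light too high: no automatic decision taken here
    # Claim 7: confidence 1 from the association step marks a short-range vehicle.
    short_range = any(s["confidence"] == 1.0 for s in second_spots)
    # far_vehicle_ahead() is the claim-8 long-range test sketched further below.
    long_range = any(far_vehicle_ahead(s, local_illuminance) for s in second_spots)
    return "off" if (short_range or long_range) else "on"
```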
8. The vehicle high beam control method according to claim 7, wherein the long-range vehicle comprises at least one of a long-range oncoming vehicle and a long-range same-direction vehicle; wherein
if the light spot attribute corresponding to a second light spot detection frame contained in the second light spot information is a headlight, the local illuminance is greater than a preset second illuminance threshold, and the area of the second light spot detection frame is greater than a preset first area threshold, the long-range vehicle is a long-range oncoming vehicle;
and if the light spot attribute corresponding to the second light spot detection frame is a tail light and the longitudinal distance of the light spot is greater than a preset first distance threshold, the long-range vehicle is a long-range same-direction vehicle, wherein the longitudinal distance of the light spot is determined according to the width of the tail light detection frame matched with the second light spot detection frame.
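The claim-8 classification as a predicate, with illustrative thresholds; the tail light case reuses the pinhole-model distance attached to each spot in the claim-3 sketch.

```python
def far_vehicle_ahead(spot, local_illuminance,
                      local_threshold=15.0,        # assumed second illuminance threshold
                      area_threshold=400,          # assumed first area threshold (px^2)
                      distance_threshold_m=80.0):  # assumed first distance threshold
    """Claim-8 long-range vehicle test over one second-light-spot entry."""
    w, h = spot["box"][2], spot["box"][3]
    if (spot["attribute"] == "headlight"
            and local_illuminance > local_threshold
            and w * h > area_threshold):
        return True  # long-range oncoming vehicle
    if (spot["attribute"] == "taillight"
            and spot.get("distance_m") is not None
            and spot["distance_m"] > distance_threshold_m):
        return True  # long-range same-direction vehicle
    return False
```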
9. The vehicle high beam control method according to claim 6, characterized in that the method further comprises:
when the determined control mode is on, acquiring current state information of the host vehicle;
and determining whether the state information of the host vehicle satisfies a preset host vehicle state constraint; if so, controlling the high beam of the host vehicle to turn on, and otherwise controlling the high beam of the host vehicle to turn off.
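Claim 9 leaves the host vehicle state constraint unspecified; the sketch below assumes a minimum speed, low beams already on, and wipers off purely for illustration, with hypothetical field names.

```python
def gate_by_vehicle_state(state, min_speed_kmh=40.0):
    """Apply an assumed claim-9 host vehicle state constraint.

    `state` is a dict such as {"speed_kmh": 55, "low_beam_on": True,
    "wipers_on": False}; fields and the speed floor are illustrative.
    """
    ok = (state.get("speed_kmh", 0.0) >= min_speed_kmh
          and state.get("low_beam_on", False)
          and not state.get("wipers_on", False))
    return "on" if ok else "off"
```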
10. The vehicle high beam control method according to any one of claims 1 to 5, characterized in that the determining of the vehicle target detection result based on the initial image and the radar data comprises:
performing target detection on the initial image to obtain a first target detection result;
determining, from the radar data, a second target detection result detected by the radar;
and fusing the first target detection result and the second target detection result to obtain the vehicle target detection result.
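A hedged late-fusion sketch for claim 10: camera boxes are confirmed by radar targets projected into the image plane. The project callable, the radar target fields, and the IoU gating are assumptions; iou() comes from the claim-5 sketch.

```python
def fuse_detections(camera_boxes, radar_targets, project, iou_threshold=0.3):
    """Fuse camera (first) and radar (second) detections into vehicle targets."""
    fused = []
    for cb in camera_boxes:
        match = None
        for rt in radar_targets:
            # `project` maps a radar target to an image-plane (x, y, w, h) box.
            if iou(cb, project(rt)) >= iou_threshold:
                match = rt
                break
        fused.append({"box": cb,
                      "range_m": match["range_m"] if match else None,
                      "confirmed_by_radar": match is not None})
    return fused
```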
11. A vehicle high beam control system, comprising:
a data acquisition module, configured to acquire multi-sensor data, wherein the multi-sensor data comprises an initial image and radar data;
an image processing module, configured to determine illuminance and first light spot information according to the initial image;
a vehicle target detection module, configured to determine a vehicle target detection result based on the initial image and the radar data;
an association processing module, configured to perform association processing on the first light spot information and the vehicle target detection result to obtain second light spot information;
and a high beam control module, configured to control the high beam of the host vehicle based on the illuminance and the second light spot information.
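Wiring the claim-11 modules together, reusing the sketches above; vehicle_boxes stands in for the camera-side first target detection result, produced by a detector the patent does not specify, and the class name is illustrative.

```python
class HighBeamPipeline:
    """One frame through the claim-11 modules (hedged structural sketch)."""

    def __init__(self, region, project):
        self.region = region    # preset first region for the local illuminance
        self.project = project  # callable mapping a radar target to an image box

    def step(self, bgr_image, radar_targets, vehicle_boxes):
        # Image processing module: illuminance + first light spot information.
        gray, hsv, g_lux, l_lux = compute_illuminance(bgr_image, self.region)
        spots = detect_first_spots(gray, hsv)
        # Vehicle target detection module: camera/radar fusion.
        fused = fuse_detections(vehicle_boxes, radar_targets, self.project)
        # Association processing module: second light spot information.
        second = associate_spots_with_vehicles(spots, [f["box"] for f in fused])
        # High beam control module: final on/off decision.
        return decide_high_beam(g_lux, l_lux, second)
```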
12. A computer-readable storage medium, on which a computer program is stored, characterized in that the computer program, when executed by a processor, implements the vehicle high beam control method according to any one of claims 1 to 10.
13. A computer device, comprising a processor, a memory, and a computer program stored in the memory and configured to be executed by the processor, wherein the processor implements the vehicle high beam control method according to any one of claims 1 to 10 when executing the computer program.
CN202311733925.2A 2023-12-15 2023-12-15 Vehicle high beam control method, system, medium and equipment Pending CN117508002A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311733925.2A CN117508002A (en) 2023-12-15 2023-12-15 Vehicle high beam control method, system, medium and equipment

Publications (1)

Publication Number Publication Date
CN117508002A 2024-02-06

Family

ID=89764677

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311733925.2A Pending CN117508002A (en) 2023-12-15 2023-12-15 Vehicle high beam control method, system, medium and equipment

Country Status (1)

Country Link
CN (1) CN117508002A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination