CN115817330A - Control method, control device, terminal equipment and computer readable storage medium - Google Patents

Control method, control device, terminal equipment and computer readable storage medium

Info

Publication number
CN115817330A
CN115817330A (application CN202211612145.8A)
Authority
CN
China
Prior art keywords
data
vehicle
identification information
point cloud
matching
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211612145.8A
Other languages
Chinese (zh)
Inventor
周成
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang Bicom Optics Co ltd
Original Assignee
Zhejiang Bicom Optics Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang Bicom Optics Co ltd filed Critical Zhejiang Bicom Optics Co ltd
Priority to CN202211612145.8A
Publication of CN115817330A
Pending legal-status Critical Current

Classifications

    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02B: CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO BUILDINGS, e.g. HOUSING, HOUSE APPLIANCES OR RELATED END-USER APPLICATIONS
    • Y02B 20/00: Energy efficient lighting technologies, e.g. halogen lamps or gas discharge lamps
    • Y02B 20/40: Control techniques providing energy savings, e.g. smart controller or presence detection

Landscapes

  • Traffic Control Systems (AREA)

Abstract

The application is applicable to the technical field of automotive electronics, and provides a control method, a control device, terminal equipment and a computer-readable storage medium. The control method comprises the following steps: acquiring first data, wherein the first data represent radar data obtained by detecting an area in front of a first vehicle through a radar installed on the first vehicle; acquiring second data, wherein the second data represent image data obtained by shooting the area in front of the first vehicle through a camera installed on the first vehicle; detecting a second vehicle in front of the first vehicle according to the first data and the second data to obtain identification information of the second vehicle; and controlling the lighting of a headlamp module of the first vehicle according to the identification information of the second vehicle. By the method, the recognition accuracy of the area in front of the vehicle can be improved and accurate control of the vehicle headlamps can be achieved, thereby improving driving safety.

Description

Control method, control device, terminal equipment and computer readable storage medium
Technical Field
The present application belongs to the field of automotive electronics technologies, and in particular, relates to a control method, an apparatus, a terminal device, and a computer-readable storage medium.
Background
When a vehicle is driven at night, the headlamps need to be turned on to help the driver see the road ahead. However, when vehicles meet or follow each other at night, headlamps that are too bright can dazzle the other driver, who may then be unable to see the road clearly, causing traffic accidents.
Existing intelligent headlamp systems can detect oncoming traffic with a forward-view camera and adjust headlamp brightness accordingly. However, the poor lighting conditions at night degrade the camera's imaging quality, and this cannot be remedied simply by optimizing the image processing algorithm, so false detections, missed detections and similar problems easily occur. These ultimately impair the headlamp illumination effect and thus night-driving safety.
Disclosure of Invention
The embodiment of the application provides a control method, a control device, terminal equipment and a computer-readable storage medium, which can effectively improve the identification precision of the area in front of a vehicle, realize accurate control of the vehicle headlamps and improve driving safety.
In a first aspect, an embodiment of the present application provides a control method, including:
acquiring first data representing radar data acquired by detecting an area in front of a first vehicle by a radar mounted on the first vehicle;
acquiring second data representing image data acquired by shooting an area in front of the first vehicle through a camera mounted on the first vehicle;
detecting a second vehicle in front of the first vehicle according to the first data and the second data, and obtaining identification information of the second vehicle;
and controlling the lighting of the headlamp module of the first vehicle according to the identification information of the second vehicle.
In the embodiment of the application, the identification information of the second vehicle in front of the first vehicle is obtained by using both the image data of the area in front of the first vehicle captured by the camera and the radar data of that area detected by the radar, so that the radar data and the image data are considered together when identifying the vehicle in front of the first vehicle. In this way, when lighting conditions are poor at night or the weather is bad, for example rain, snow or haze, the false detections, missed detections and similar problems caused by poor camera imaging quality can be effectively avoided, and the identification precision of the area in front of the vehicle is improved, so that the lighting of the vehicle headlamp module is controlled accurately and driving safety is improved.
In a possible implementation manner of the first aspect, the detecting, according to the first data and the second data, a second vehicle ahead of the first vehicle to obtain identification information of the second vehicle includes:
performing data matching processing on the first data and the second data to obtain matched data;
and performing fusion processing according to the matching data to obtain the identification information of the second vehicle.
In a possible implementation manner of the first aspect, the first data includes multiple frames of first point cloud data, the second data includes a plurality of first images, the matching data includes at least one set of matching first point cloud data and first image, and the performing data matching processing on the first data and the second data to obtain matching data includes:
for each frame of the first point cloud data, respectively performing spatial matching on the first point cloud data and each first image to obtain a spatial matching result;
respectively performing time matching on the first point cloud data and each first image to obtain a time matching result;
and determining a first image matched with the first point cloud data according to the space matching result and the time matching result.
In a possible implementation manner of the first aspect, the spatially matching the first point cloud data with each first image to obtain a spatial matching result includes:
acquiring third data, wherein the third data represents a transformation matrix between a radar coordinate system and a camera coordinate system;
and respectively carrying out space matching on the first point cloud data and each first image according to the third data to obtain the space matching result.
In a possible implementation manner of the first aspect, the time matching the first point cloud data with each of the first images to obtain a time matching result includes:
calculating the time difference between the acquisition time corresponding to the first point cloud data and the acquisition time corresponding to each first image;
and determining the time matching result according to the calculated time difference.
In a possible implementation manner of the first aspect, the obtaining the identification information of the second vehicle by performing fusion processing according to the matching data includes:
for each group of matched first point cloud data and first image, acquiring fourth data, wherein the fourth data comprises a distance value and an azimuth angle between the first vehicle and the second vehicle obtained by analyzing the first image;
acquiring fifth data, wherein the fifth data comprises a distance value and an azimuth angle between the first vehicle and the second vehicle obtained by analyzing the first point cloud data;
and determining the identification information of the second vehicle according to the fourth data and the fifth data.
In one possible implementation manner of the first aspect, the determining the identification information of the second vehicle according to the fourth data and the fifth data includes:
acquiring a first precision, wherein the first precision is expressed as the data acquisition precision of the camera;
acquiring a second precision, wherein the second precision is expressed as the data acquisition precision of the radar;
determining a weight according to the first precision and the second precision;
determining identification information of the second vehicle from the fourth data and the fifth data according to the weight.
In a possible implementation manner of the first aspect, the determining, according to the weight, the identification information of the second vehicle from the fourth data and the fifth data includes:
if the weight is smaller than a first preset value, determining the fifth data as the identification information of the second vehicle;
if the weight is larger than a second preset value, determining the fourth data as the identification information of the second vehicle;
and if the weight is between the first preset value and the second preset value, determining data obtained by performing weighted average on the fourth data and the fifth data as the identification information of the second vehicle.
In a second aspect, an embodiment of the present application provides a control apparatus, including:
an acquisition unit configured to acquire first data indicating radar data acquired by detecting an area in front of a first vehicle by a radar mounted on the first vehicle;
a generation unit configured to acquire second data representing image data acquired by capturing an area in front of the first vehicle by a camera mounted on the first vehicle;
the detection unit is used for detecting a second vehicle in front of the first vehicle according to the first data and the second data to obtain identification information of the second vehicle;
and the control unit is used for controlling the lighting of the headlamp module of the first vehicle according to the identification information of the second vehicle.
In a third aspect, an embodiment of the present application provides a terminal device, which includes a memory, a processor, and a computer program stored in the memory and executable on the processor, and the processor implements the control method according to any one of the above first aspects when executing the computer program.
In a fourth aspect, the present application provides a computer-readable storage medium, which stores a computer program, and when the computer program is executed by a processor, the computer program implements the control method according to any one of the first aspect.
In a fifth aspect, an embodiment of the present application provides a computer program product, which, when run on a terminal device, causes the terminal device to execute the control method of any one of the above first aspects.
It is understood that the beneficial effects of the second aspect to the fifth aspect can be referred to the related description of the first aspect, and are not described herein again.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the present application, and other drawings can be obtained by those of ordinary skill in the art based on these drawings without creative effort.
Fig. 1 is a schematic structural diagram of a control system according to an embodiment of the present application;
fig. 2 is a schematic flowchart of a control method according to an embodiment of the present application;
FIG. 3 is a schematic flow chart of information identification provided by an embodiment of the present application;
FIG. 4 is a schematic flow chart of data matching provided by an embodiment of the present application;
fig. 5 is a structural diagram of a control device according to an embodiment of the present application;
fig. 6 is a schematic structural diagram of a terminal device according to an embodiment of the present application.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system structures, techniques, etc. in order to provide a thorough understanding of the embodiments of the present application. It will be apparent, however, to one skilled in the art that the present application may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present application with unnecessary detail.
It will be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It should also be understood that the term "and/or" as used in this specification and the appended claims refers to and includes any and all possible combinations of one or more of the associated listed items.
As used in this specification and the appended claims, the term "if" may be interpreted contextually as "when", "upon", "in response to determining" or "in response to detecting". Similarly, the phrase "if it is determined" or "if [a described condition or event] is detected" may be interpreted contextually to mean "upon determining", "in response to determining", "upon detecting [the described condition or event]" or "in response to detecting [the described condition or event]".
Furthermore, in the description of the present application and the appended claims, the terms "first," "second," "third," and the like are used for distinguishing between descriptions and not necessarily for describing or implying relative importance.
Reference throughout this specification to "one embodiment" or "some embodiments," or the like, means that a particular feature, structure, or characteristic described in connection with the embodiment is included in one or more embodiments of the present application. Thus, appearances of the phrases "in one embodiment," "in some embodiments," "in other embodiments," or the like, in various places throughout this specification are not necessarily all referring to the same embodiment, but rather "one or more but not all embodiments" unless specifically stated otherwise.
With advances in technology and improvements in living standards, intelligent vehicles have gradually come into view. An intelligent vehicle can use a satellite positioning system to provide information such as position, speed and time and, combined with the route planning capability of a high-precision navigation electronic map, offer the user an improved navigation function, helping the user plan a driving route quickly and accurately and guiding the user to drive along the planned route to the destination.
When a vehicle is driven at night, the headlamps need to be turned on to help the driver see the road clearly. However, when vehicles meet or follow each other at night, headlamps that are too bright can dazzle the other driver, who may then be unable to see the road clearly, causing traffic accidents. Current intelligent headlamp systems can detect oncoming traffic with a forward-view camera and adjust headlamp brightness accordingly. However, the poor lighting conditions at night degrade the camera's imaging quality, and this cannot be remedied simply by optimizing the image processing algorithm, so false detections, missed detections and similar problems easily occur, which ultimately impair the headlamp illumination effect and thus night-driving safety.
In order to overcome the above defects in the prior art, the embodiment of the application provides a control method. In the embodiment of the application, image data and radar data within the detection range in front of the vehicle are first acquired by the camera and the radar, the image data and the radar data are then fused, and the area in front of the vehicle is identified from the fused data, so that accurate information and positions of the target vehicles ahead can be obtained. By the method in the embodiment of the application, the recognition precision of the area in front of the vehicle can be effectively improved and accurate control of the vehicle headlamps can be achieved, thereby improving driving safety.
First, a control system according to an embodiment of the present application is described. Fig. 1 is a schematic structural diagram of the control system provided in the embodiment of the present application. By way of example and not limitation, as shown in fig. 1, the control system may include a camera 11, a radar 12, a signal processor 13 and a headlamp controller 14. The signal processor 13 receives the data acquired by the camera 11 and the radar 12, processes the data using the control method provided by the embodiment of the application to obtain identification information of the area in front of the vehicle, and finally transmits the identification information to the headlamp controller 14, which controls the lighting of the headlamp module according to the identification information.
In some application scenarios, the signal processor 13 may also control the headlight module according to the identification information.
The control method provided in the embodiment of the present application may be executed by the signal processor shown in fig. 1. Referring to fig. 2, which is a schematic flowchart of the control method provided in an embodiment of the present application, by way of example and not limitation, the method includes the following steps:
step S201, first data is acquired, where the first data represents radar data acquired by detecting an area in front of a first vehicle by a radar mounted on the first vehicle.
In this embodiment of the application, various radars such as a laser radar or a millimeter wave radar may be selected. Because the millimeter wave radar is not restricted by lighting conditions and has advantages such as strong anti-interference capability and a long detection range, which suit the application scenario of this application, the millimeter wave radar is selected to collect the radar data in this embodiment.
The radar is mounted at the front of the vehicle, namely the first vehicle, so that radar data within the detectable range in front of the vehicle, namely the first data, can be obtained. The radar data can reveal the road conditions ahead, such as vehicles and pedestrians, so that correct decisions can be made about the running speed of the vehicle, the illumination brightness of its headlamps and the like, improving driving safety. After the radar data are acquired, they are sent to the signal processor through the vehicle CAN bus for subsequent signal processing; the data collected by the millimeter wave radar are comparatively accurate.
Step S202, second data representing image data acquired by capturing an image of an area in front of the first vehicle by a camera mounted on the first vehicle is acquired.
In this embodiment, the image data acquired by the camera are the second data. The camera is arranged directly above the interior rear-view mirror and is used to acquire image data within the camera's field of view in front of the vehicle. After the image data are acquired, they are likewise sent to the signal processor through the vehicle CAN bus.
Step S203, detecting a second vehicle ahead of the first vehicle according to the first data and the second data, and obtaining identification information of the second vehicle.
In this embodiment, after the signal processor performs data preprocessing and fusion-algorithm processing on the radar data and the image data received through the vehicle CAN bus, the identification information of the target in front of the vehicle can be obtained.
For example, the identification information may include the position and traveling direction of the second vehicle relative to the first vehicle, such as a vehicle ahead on the left traveling in the same direction, a vehicle ahead on the left traveling in the opposite direction, a vehicle ahead on the right traveling in the same direction, or a vehicle ahead on the right traveling in the opposite direction, so that the accurate position of the oncoming or leading vehicle can be determined from the identification information and night-driving safety can be improved.
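Purely as an illustration of what such identification information might look like in code, a minimal Python sketch follows; the field names and types are assumptions for readability and are not taken from the patent:

```python
from dataclasses import dataclass
from enum import Enum

class Direction(Enum):
    SAME = "same direction"   # vehicle ahead traveling with the ego vehicle
    ONCOMING = "oncoming"     # vehicle ahead traveling toward the ego vehicle

@dataclass
class IdentificationInfo:
    lateral_side: str         # "left" or "right", relative to the ego vehicle
    direction: Direction      # traveling direction relative to the ego vehicle
    distance_m: float         # distance value to the second vehicle
    azimuth_deg: float        # azimuth angle of the second vehicle
```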
In an embodiment, referring to fig. 3, which is a schematic flowchart of information identification provided in an embodiment of the present application, as shown in fig. 3, an implementation manner of step S203 includes:
step S301, performing data matching processing on the first data and the second data to obtain matching data.
And step S302, performing fusion processing according to the matching data to obtain the identification information of the second vehicle.
In the embodiment of the application, the radar data may be multiple frames of point cloud data, and the image data may be a plurality of captured images. While the vehicle is moving, the point cloud data acquired at different moments and the images captured at different moments correspond to different areas in front of the vehicle. During fusion processing, fusing point cloud data and a captured image acquired at different times is equivalent to fusing radar data and image data of different areas in front of the vehicle, in which case the obtained identification information of the second vehicle is inaccurate.
In order to solve the above problem, in the embodiment of the present application, data matching processing is performed on radar data and image data, and then fusion processing is performed using matching data, so as to improve the accuracy of identification information of the second vehicle.
In an embodiment, the first data includes multiple frames of first point cloud data, the second data includes multiple first images, and the matching data includes at least one set of matching first point cloud data and first images, see fig. 4, which is a schematic flow chart of data matching provided in an embodiment of the present application, and as shown in fig. 4, one implementation manner of step S301 includes:
step S401, for each frame of the first point cloud data, performing spatial matching on the first point cloud data and each first image, respectively, to obtain a spatial matching result.
In this embodiment, because the radar data acquired by the radar are frames of point cloud data and the image data acquired by the camera are a plurality of images, spatial matching needs to be performed before the fusion-algorithm processing: each frame of point cloud data is spatially matched with each image to obtain a spatial matching result, and only the spatially matched point cloud data and image data are then processed by the fusion algorithm to obtain the identification information of the vehicle.
In one embodiment, one implementation of step S401 includes:
acquiring third data, wherein the third data represents a transformation matrix between a radar coordinate system and a camera coordinate system;
and respectively carrying out space matching on the first point cloud data and each first image according to the third data to obtain the space matching result.
In this embodiment, coordinate conversion may be used when spatially matching the first point cloud data with the image data. Because the radar and the camera are installed at different positions on the vehicle and each sensor defines its own coordinate system, the radar coordinate system and the camera coordinate system need to be brought into a unified reference.
In one implementation, a transformation matrix between a radar coordinate system and a camera coordinate system is calibrated. For each frame of first point cloud data, mapping the first point cloud data to a camera coordinate system according to the conversion matrix to obtain a mapping image; respectively calculating the similarity between the mapping image and each first image; and determining the first image corresponding to the maximum similarity as an image which is matched with the first point cloud data in space.
In the embodiment of the application, the MATLAB calibration toolbox can be used to calculate the transformation matrix.
Optionally, one way to calculate the similarity between the mapped image and each first image is as follows: calculate the pixel distance between each pixel in the mapped image and each pixel in the first image; match pixels in the mapped image with pixels in the first image according to the pixel distance; and count the number of matched pixels, which is taken as the similarity between the mapped image and that first image. Another way is as follows: input the mapped image and the first image into a pre-trained detection model and output a detection result; the detection result may be a confidence level, a probability value or the like, and the matching result is determined according to the detection result.
In another implementation, a transformation matrix between the radar coordinate system and the camera coordinate system is calibrated. For each first image, mapping the first image to a radar coordinate system according to the conversion matrix to obtain a mapping point cloud; respectively calculating the similarity between the mapping point cloud and each frame of first point cloud data; and determining the first point cloud data corresponding to the maximum similarity as point cloud data which is matched with the first image in space.
Optionally, the similarity between the mapping point cloud and the first point cloud data may be calculated as follows: calculate the point cloud distance between each point in the mapping point cloud and each point in the first point cloud data; match points in the mapping point cloud with points in the first point cloud data according to the point cloud distance; and count the number of matched points, which is taken as the similarity between the mapping point cloud and the first point cloud data.
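The following Python sketch illustrates one way such radar-to-camera spatial matching could look, assuming a pre-calibrated 3x4 projection matrix (extrinsic transform combined with camera intrinsics), per-image sets of detected vehicle pixels, and a pixel-distance threshold; these names and the 5-pixel threshold are illustrative assumptions, not values prescribed by the patent:

```python
import numpy as np

def project_points(points_xyz: np.ndarray, proj_3x4: np.ndarray) -> np.ndarray:
    """Map radar points (N, 3) in the radar frame to pixel coordinates (N, 2)
    using a calibrated 3x4 radar-to-image projection matrix."""
    homog = np.hstack([points_xyz, np.ones((points_xyz.shape[0], 1))])  # (N, 4)
    uvw = homog @ proj_3x4.T                                            # (N, 3)
    return uvw[:, :2] / uvw[:, 2:3]                                     # divide by depth

def similarity(mapped_uv: np.ndarray, vehicle_pixels_uv: np.ndarray,
               max_pixel_dist: float = 5.0) -> int:
    """Similarity score: number of projected radar points lying within
    max_pixel_dist of some vehicle pixel detected in the image."""
    if len(vehicle_pixels_uv) == 0:
        return 0
    d = np.linalg.norm(mapped_uv[:, None, :] - vehicle_pixels_uv[None, :, :], axis=2)
    return int((d.min(axis=1) <= max_pixel_dist).sum())

def best_spatial_match(point_cloud: np.ndarray, images_vehicle_uv: list,
                       proj_3x4: np.ndarray) -> int:
    """Index of the image whose detected pixels best match the projected
    point cloud frame (highest similarity score)."""
    mapped = project_points(point_cloud, proj_3x4)
    scores = [similarity(mapped, uv) for uv in images_vehicle_uv]
    return int(np.argmax(scores))
```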
Step S402, time matching is carried out on the first point cloud data and each first image respectively, and a time matching result is obtained.
In the embodiment of the application, the first point cloud data and the first images also need to be matched in time. Time matching means that, for each frame of first point cloud data, the image data whose acquisition time is nearest to that of the frame is found, and the radar data are projected into that image data; this constitutes the temporal matching of the two kinds of data.
In one embodiment, one implementation of step S402 includes:
calculating the time difference between the acquisition time corresponding to the first point cloud data and the acquisition time corresponding to each first image; and determining the time matching result according to the calculated time difference.
In one implementation manner, for each frame of first point cloud data, the first image whose acquisition time differs least from that of the frame of first point cloud data is determined as the image temporally matched with that frame of first point cloud data.
For example, suppose the radar acquires 100 frames of point cloud data and the camera captures 80 images. For the first frame of point cloud data, the time difference between its acquisition time and the acquisition time of each of the 80 images is calculated. Assuming that the time difference between the acquisition time of the first frame of point cloud data and that of the 2nd image is the smallest, the first frame of point cloud data is determined to be temporally matched with the 2nd image.
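A minimal sketch of this nearest-timestamp pairing, assuming the timestamps are seconds on a common clock (the function name and sample values are illustrative):

```python
def time_match(cloud_timestamps, image_timestamps):
    """For each point cloud frame, return the index of the image whose
    acquisition time is closest (minimum absolute time difference)."""
    matches = []
    for t_cloud in cloud_timestamps:
        diffs = [abs(t_cloud - t_img) for t_img in image_timestamps]
        matches.append(diffs.index(min(diffs)))
    return matches

# Three point cloud frames matched against two image timestamps (seconds):
print(time_match([0.00, 0.05, 0.10], [0.02, 0.055]))  # -> [0, 1, 1]
```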
Step S403, determining a first image matched with the first point cloud data according to the spatial matching result and the temporal matching result.
In this embodiment, the first point cloud data and each piece of image data are matched in space and in time to obtain a spatial matching result and a temporal matching result, the first image matched with the first point cloud data is then determined from the two results, and the matched point cloud data and image data are finally used in the fusion-algorithm processing.
In one embodiment, the matching data includes at least one set of matching first point cloud data and first image, and one implementation of step S302 includes:
for each group of matched first point cloud data and first image, acquiring fourth data, wherein the fourth data comprises a distance value and an azimuth angle between the first vehicle and the second vehicle obtained by analyzing the first image;
acquiring fifth data, wherein the fifth data comprises a distance value and an azimuth angle between the first vehicle and the second vehicle obtained by analyzing the first point cloud data;
and determining the identification information of the second vehicle according to the fourth data and the fifth data.
In this embodiment, the successfully matched point cloud data and image data are processed by the fusion algorithm. The image data are detected with a convolutional neural network, the target vehicle ahead in the image data is identified, and the distance value s1 and azimuth angle δ1 between the target vehicle ahead and the ego vehicle are output, i.e. the fourth data. The successfully matched point cloud data are then analyzed, and the distance value s2 and azimuth angle δ2 between the target vehicle ahead and the ego vehicle are output, i.e. the fifth data. Finally, the identification information of the second vehicle is determined from the successfully parsed fourth and fifth data.
The distance value and azimuth angle in these data are used as the identification information of the target in front of the vehicle; the driving state of the vehicle ahead can be judged more accurately from the azimuth angle and distance value, which substantially improves driving safety.
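As an illustration only, and not the specific algorithm of the patent, the radar-side distance value s2 and azimuth angle δ2 could be estimated from the matched point cloud cluster roughly as follows, assuming x is lateral (positive to the right) and y is longitudinal (positive forward) in the ego-vehicle frame:

```python
import math
import numpy as np

def range_and_azimuth(cluster_xy: np.ndarray):
    """Distance (m) and azimuth (deg, 0 = straight ahead, positive to the
    right) of a radar point cluster belonging to the vehicle ahead, where
    each row is (lateral x, longitudinal y) in the ego-vehicle frame."""
    cx, cy = cluster_xy.mean(axis=0)             # cluster centroid
    distance = math.hypot(cx, cy)                # straight-line distance s2
    azimuth = math.degrees(math.atan2(cx, cy))   # azimuth angle delta2
    return distance, azimuth

# A cluster roughly 20 m ahead and slightly to the left:
print(range_and_azimuth(np.array([[-1.2, 20.0], [-1.0, 20.3], [-1.1, 19.8]])))
```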
In one embodiment, one implementation of step S302 further includes:
acquiring a first precision, wherein the first precision is expressed as the data acquisition precision of the camera;
acquiring a second precision, wherein the second precision is expressed as the data acquisition precision of the radar;
determining a weight according to the first precision and the second precision;
determining identification information of the second vehicle from the fourth data and the fifth data according to the weight.
In the present embodiment, because of measurement errors, the data acquisition precision w1 of the camera, i.e. the first precision, and the data acquisition precision w2 of the millimeter wave radar, i.e. the second precision, need to be acquired, and the weight ω is then determined from the relative precision of the radar and the camera. Finally, the identification information of the second vehicle is determined from the fourth data and the fifth data according to the weight.
In one embodiment, one implementation of step S302 further includes:
if the weight is smaller than a first preset value, determining the fifth data as the identification information of the second vehicle;
if the weight is larger than a second preset value, determining the fourth data as the identification information of the second vehicle;
and if the weight is between the first preset value and the second preset value, determining data obtained by performing weighted average on the fourth data and the fifth data as the identification information of the second vehicle.
In this embodiment, the identification information needs to be determined according to the weight ω, which is used as follows. If ω < 0.1, the precision of the forward-view camera is insufficient; the image data are not adopted, and the millimeter wave radar data are used as the information processed by the fusion algorithm. If ω > 0.9, the precision of the millimeter wave radar is insufficient; the radar data are not adopted, and the forward-view camera data are used as the information processed by the fusion algorithm. Here 0.1 is the first preset value and 0.9 is the second preset value. If 0.1 ≤ ω ≤ 0.9, the image data result and the radar data result are weighted and averaged using formulas (1) and (2) to obtain the fused distance value s and azimuth angle δ of the target vehicle ahead relative to the ego vehicle:

s = ω·s1 + (1 − ω)·s2    (1)

δ = ω·δ1 + (1 − ω)·δ2    (2)
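A sketch of this thresholded fusion, under the assumption that ω represents the camera's relative weight (so a value below 0.1 discards the camera result in favor of the radar result, and above 0.9 the reverse); the specific formula for ω below is an illustrative assumption, since the patent only states that the weight is determined from the two precisions:

```python
def fuse(s1, d1, w1, s2, d2, w2, low=0.1, high=0.9):
    """Fuse the camera result (s1, d1, precision w1) and the radar result
    (s2, d2, precision w2) into one distance/azimuth estimate."""
    omega = w1 / (w1 + w2)              # camera's relative weight (assumed form)
    if omega < low:                     # camera precision insufficient: radar only
        return s2, d2
    if omega > high:                    # radar precision insufficient: camera only
        return s1, d1
    s = omega * s1 + (1 - omega) * s2   # formula (1): fused distance
    d = omega * d1 + (1 - omega) * d2   # formula (2): fused azimuth
    return s, d

print(fuse(s1=20.5, d1=-3.0, w1=0.8, s2=20.1, d2=-3.3, w2=0.9))
```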
and S204, controlling the headlight module of the first vehicle to illuminate according to the identification information of the second vehicle.
In this embodiment, the signal processor sends the target identification information to the headlamp controller through the vehicle CAN bus, and the headlamp controller controls the lighting of the headlamp module according to the identification information.
As described in step S203, the identification information may include the position and traveling direction of the second vehicle relative to the first vehicle, for example a vehicle ahead on the left traveling in the same direction, a vehicle ahead on the left traveling in the opposite direction, a vehicle ahead on the right traveling in the same direction, or a vehicle ahead on the right traveling in the opposite direction.
In one implementation, the headlight module of the first vehicle can be controlled to be turned on or off according to the driving direction of the second vehicle relative to the first vehicle in the identification information. For example, if the second vehicle is traveling in an opposite direction to the first vehicle, the headlight module of the first vehicle is controlled to switch to the low beam.
In another implementation, the illumination area of the headlamp module of the first vehicle may be controlled according to the position of the second vehicle relative to the first vehicle in the identification information. For example, if the second vehicle is ahead and to the left of the first vehicle, the left headlamp of the first vehicle's headlamp module is turned off, so that the headlamp module illuminates the area ahead and to the right of the first vehicle through the right headlamp; if the second vehicle is ahead and to the right, the right headlamp is turned off, so that the headlamp module illuminates the area ahead and to the left through the left headlamp.
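A minimal sketch of the control logic just described; the on/off granularity and the names are illustrative (an adaptive headlamp would typically shape the beam more finely than a simple per-lamp switch):

```python
def control_headlamps(direction: str, side: str) -> dict:
    """Coarse headlamp decision from the identification information:
    direction is 'oncoming' or 'same'; side is 'left' or 'right',
    both relative to the ego vehicle (names are illustrative)."""
    state = {"left_on": True, "right_on": True, "beam": "high"}
    if direction == "oncoming":
        state["beam"] = "low"        # switch to low beam for oncoming traffic
    if side == "left":
        state["left_on"] = False     # darken the side facing the other vehicle
    elif side == "right":
        state["right_on"] = False    # keep illuminating the opposite side
    return state

print(control_headlamps("oncoming", "left"))
# -> {'left_on': False, 'right_on': True, 'beam': 'low'}
```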
In some control scenarios, the headlamp controller judges, according to the identification information of the area in front of the vehicle, whether the target vehicle ahead has driven out of the dimmed illumination zone. If so, the headlamp controller controls the headlamp module to restore normal illumination; otherwise, it adjusts the lighting effect of the headlamp module in real time according to the target identification information updated in real time, and restores normal illumination once the target vehicle drives out of the dimmed zone.
In this way, the vehicle can automatically identify whether there is an oncoming vehicle and control the downstream headlamp module to form a dark zone within its illumination range, which avoids dazzling the driver of the oncoming vehicle and improves night-driving safety.
In the control method provided by the embodiment of the application, image data and radar data within the detection range in front of the vehicle are first acquired by the camera and the radar, and detection precision is enhanced by fusing the input image data and radar data. This overcomes the low night-time imaging quality of the forward-view camera, effectively solves the false-detection and missed-detection problems of the forward-view camera in existing intelligent headlamp systems, and substantially improves the safety of road illumination and the driver's comfort.
It should be understood that, the sequence numbers of the steps in the foregoing embodiments do not imply an execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present application.
Fig. 5 is a block diagram of the control device provided in the embodiment of the present application, corresponding to the control method described in the foregoing embodiments; for convenience of description, only the portions relevant to the embodiment of the present application are shown.
Referring to fig. 5, the apparatus includes:
an acquisition unit 51 configured to acquire first data indicating radar data acquired by detecting an area in front of a first vehicle by a radar mounted on the first vehicle;
a generation unit 52 configured to acquire second data indicating image data acquired by capturing an area in front of the first vehicle by a camera mounted on the first vehicle;
a detection unit 53, configured to detect a second vehicle ahead of the first vehicle according to the first data and the second data, and obtain identification information of the second vehicle;
and the control unit 54 is used for controlling the headlight module illumination of the first vehicle according to the identification information of the second vehicle.
Optionally, the detecting unit 53 is further configured to:
performing data matching processing on the first data and the second data to obtain matched data;
and performing fusion processing according to the matching data to obtain the identification information of the second vehicle.
Optionally, the detecting unit 53 is further configured to:
for each frame of the first point cloud data, respectively performing spatial matching on the first point cloud data and each first image to obtain a spatial matching result;
respectively performing time matching on the first point cloud data and each first image to obtain a time matching result;
and determining a first image matched with the first point cloud data according to the space matching result and the time matching result.
Optionally, the detecting unit 53 is further configured to:
acquiring third data, wherein the third data represents a transformation matrix between a radar coordinate system and a camera coordinate system;
and respectively carrying out space matching on the first point cloud data and each first image according to the third data to obtain the space matching result.
Optionally, the detecting unit 53 is further configured to:
calculating the time difference between the acquisition time corresponding to the first point cloud data and the acquisition time corresponding to each first image;
and determining the time matching result according to the calculated time difference.
Optionally, the detecting unit 53 is further configured to:
for each group of matched first point cloud data and first image, acquiring fourth data, wherein the fourth data comprises a distance value and an azimuth angle between the first vehicle and the second vehicle obtained by analyzing the first image;
acquiring fifth data, wherein the fifth data comprises a distance value and an azimuth angle between the first vehicle and the second vehicle obtained by analyzing the first point cloud data;
and determining the identification information of the second vehicle according to the fourth data and the fifth data.
Optionally, the detecting unit 53 is further configured to:
acquiring a first precision, wherein the first precision is expressed as the data acquisition precision of the camera;
acquiring a second precision, wherein the second precision is expressed as the data acquisition precision of the radar;
determining a weight according to the first precision and the second precision;
determining identification information of the second vehicle from the fourth data and the fifth data according to the weight.
Optionally, the detecting unit 53 is further configured to:
if the weight is smaller than a first preset value, determining the fifth data as the identification information of the second vehicle;
if the weight is larger than a second preset value, determining the fourth data as the identification information of the second vehicle;
and if the weight is between the first preset value and the second preset value, determining data obtained by performing weighted average on the fourth data and the fifth data as the identification information of the second vehicle.
It should be noted that, for the information interaction, execution process, and other contents between the above devices/units, the specific functions and technical effects thereof based on the same concept as those of the method embodiment of the present application can be specifically referred to the method embodiment portion, and are not described herein again.
The control device shown in fig. 5 may be a software unit, a hardware unit or a combination of the two built into an existing terminal device, may be integrated into the terminal device as an independent component, or may exist as a separate terminal device.
Fig. 6 is a schematic structural diagram of a terminal device according to an embodiment of the present application. As shown in fig. 6, the terminal device 6 of this embodiment includes: at least one processor 60 (only one shown in fig. 6), a memory 61, and a computer program 62 stored in the memory 61 and executable on the at least one processor 60, the processor 60 implementing the steps in any of the various control method embodiments described above when executing the computer program 62.
The terminal device can be a computing device such as a desktop computer, a notebook computer, a palmtop computer or a cloud server. The terminal device may include, but is not limited to, a processor and a memory. Those skilled in the art will appreciate that fig. 6 is only an example of the terminal device 6 and does not constitute a limitation on the terminal device 6, which may include more or fewer components than shown, combine some components, or have different components, such as an input/output device or a network access device.
The processor 60 may be a central processing unit (CPU), or another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
The memory 61 may in some embodiments be an internal storage unit of the terminal device 6, such as a hard disk or a memory of the terminal device 6. In other embodiments the memory 61 may also be an external storage device of the terminal device 6, such as a plug-in hard disk, a smart media card (SMC), a secure digital (SD) card or a flash card provided on the terminal device 6. Further, the memory 61 may include both an internal storage unit and an external storage device of the terminal device 6. The memory 61 is used to store an operating system, application programs, a boot loader, data and other programs, such as the program code of the computer program. The memory 61 may also be used to temporarily store data that has been output or is to be output.
The embodiments of the present application further provide a computer-readable storage medium, where a computer program is stored, and when the computer program is executed by a processor, the steps in the above-mentioned method embodiments may be implemented.
The embodiments of the present application further provide a computer program product which, when run on a terminal device, causes the terminal device to implement the steps in the above method embodiments.
If the integrated unit is implemented in the form of a software functional unit and sold or used as a stand-alone product, it may be stored in a computer-readable storage medium. Based on such understanding, all or part of the processes in the methods of the above embodiments can be implemented by a computer program, which can be stored in a computer-readable storage medium and, when executed by a processor, implements the steps of the above method embodiments. The computer program comprises computer program code, which may be in the form of source code, object code, an executable file, some intermediate form or the like. The computer-readable medium may include at least: any entity or device capable of carrying the computer program code to the apparatus/terminal device, a recording medium, a computer memory, a read-only memory (ROM), a random-access memory (RAM), an electrical carrier signal, a telecommunications signal and a software distribution medium, for example a USB flash drive, a removable hard disk, a magnetic disk or an optical disk. In certain jurisdictions, in accordance with legislation and patent practice, computer-readable media may not include electrical carrier signals or telecommunications signals.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to the related descriptions of other embodiments for parts that are not described or illustrated in a certain embodiment.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus/terminal device and method may be implemented in other ways. For example, the above-described embodiments of the apparatus/terminal device are merely illustrative, and for example, the division of the modules or units is only one type of logical function division, and other division manners may be available in actual implementation, for example, multiple units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be through some interfaces, indirect coupling or communication connection of devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
The above-mentioned embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not depart from the spirit and scope of the embodiments of the present application, and they should be construed as being included in the present application.

Claims (11)

1. A control method, comprising:
acquiring first data representing radar data acquired by detecting an area in front of a first vehicle by a radar mounted on the first vehicle;
acquiring second data representing image data acquired by shooting an area in front of the first vehicle through a camera mounted on the first vehicle;
detecting a second vehicle in front of the first vehicle according to the first data and the second data, and obtaining identification information of the second vehicle;
and controlling the lighting of the headlamp module of the first vehicle according to the identification information of the second vehicle.
2. The control method according to claim 1, wherein the detecting a second vehicle ahead of the first vehicle from the first data and the second data, obtaining identification information of the second vehicle, includes:
performing data matching processing on the first data and the second data to obtain matched data;
and performing fusion processing according to the matching data to obtain the identification information of the second vehicle.
3. The control method according to claim 2, wherein the first data includes a plurality of frames of first point cloud data, the second data includes a plurality of first images, the matching data includes at least one set of matching first point cloud data and first images, and the performing the data matching process on the first data and the second data to obtain matching data includes:
for each frame of first point cloud data, respectively carrying out space matching on the first point cloud data and each first image to obtain a space matching result;
respectively performing time matching on the first point cloud data and each first image to obtain a time matching result;
and determining a first image matched with the first point cloud data according to the space matching result and the time matching result.
4. The control method according to claim 3, wherein the spatially matching the first point cloud data with each of the first images to obtain a spatial matching result comprises:
acquiring third data, wherein the third data represents a transformation matrix between a radar coordinate system and a camera coordinate system;
and respectively carrying out space matching on the first point cloud data and each first image according to the third data to obtain the space matching result.
5. The control method according to claim 3, wherein the time-matching the first point cloud data with each of the first images to obtain a time-matching result comprises:
calculating the time difference between the acquisition time corresponding to the first point cloud data and the acquisition time corresponding to each first image;
and determining the time matching result according to the calculated time difference.
6. The control method according to claim 2, wherein the matching data includes at least one set of matching first point cloud data and first image;
the performing fusion processing according to the matching data to obtain the identification information of the second vehicle includes:
for each group of matched first point cloud data and first image, acquiring fourth data, wherein the fourth data comprises a distance value and an azimuth angle between the first vehicle and the second vehicle obtained by analyzing the first image;
acquiring fifth data, wherein the fifth data comprises a distance value and an azimuth angle between the first vehicle and the second vehicle obtained by analyzing the first point cloud data;
and determining the identification information of the second vehicle according to the fourth data and the fifth data.
7. The control method according to claim 6, wherein the determining identification information of the second vehicle from the fourth data and the fifth data includes:
acquiring a first precision, wherein the first precision is expressed as the data acquisition precision of the camera;
acquiring a second precision, wherein the second precision is expressed as the data acquisition precision of the radar;
determining a weight according to the first precision and the second precision;
determining identification information of the second vehicle from the fourth data and the fifth data according to the weight.
8. The control method according to claim 7, wherein the determining the identification information of the second vehicle from the fourth data and the fifth data according to the weight comprises:
if the weight is smaller than a first preset value, determining the fifth data as the identification information of the second vehicle;
if the weight is larger than a second preset value, determining the fourth data as the identification information of the second vehicle;
and if the weight is between the first preset value and the second preset value, determining data obtained by performing weighted average on the fourth data and the fifth data as the identification information of the second vehicle.
9. A control device, comprising:
an acquisition unit configured to acquire first data indicating radar data acquired by detecting an area in front of a first vehicle by a radar mounted on the first vehicle;
a generation unit configured to acquire second data representing image data acquired by capturing an area in front of the first vehicle by a camera mounted on the first vehicle;
the detection unit is used for detecting a second vehicle in front of the first vehicle according to the first data and the second data to obtain identification information of the second vehicle;
and the control unit is used for controlling the lighting of the headlamp module of the first vehicle according to the identification information of the second vehicle.
10. A terminal device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the processor implements the method according to any of claims 1 to 8 when executing the computer program.
11. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the method according to any one of claims 1 to 8.
CN202211612145.8A 2022-12-14 2022-12-14 Control method, control device, terminal equipment and computer readable storage medium Pending CN115817330A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211612145.8A CN115817330A (en) 2022-12-14 2022-12-14 Control method, control device, terminal equipment and computer readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211612145.8A CN115817330A (en) 2022-12-14 2022-12-14 Control method, control device, terminal equipment and computer readable storage medium

Publications (1)

Publication Number Publication Date
CN115817330A true CN115817330A (en) 2023-03-21

Family

ID=85545731

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211612145.8A Pending CN115817330A (en) 2022-12-14 2022-12-14 Control method, control device, terminal equipment and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN115817330A (en)

Similar Documents

Publication Publication Date Title
JP5747549B2 (en) Signal detector and program
US8995723B2 (en) Detecting and recognizing traffic signs
JP4624594B2 (en) Object recognition method and object recognition apparatus
US7366325B2 (en) Moving object detection using low illumination depth capable computer vision
US8175331B2 (en) Vehicle surroundings monitoring apparatus, method, and program
TWI302879B (en) Real-time nighttime vehicle detection and recognition system based on computer vision
US20130027511A1 (en) Onboard Environment Recognition System
US8848980B2 (en) Front vehicle detecting method and front vehicle detecting apparatus
JP5065172B2 (en) Vehicle lighting determination device and program
CN113435237B (en) Object state recognition device, recognition method, and computer-readable recording medium, and control device
JP2012240530A (en) Image processing apparatus
JP2021018465A (en) Object recognition device
JP2019146012A (en) Imaging apparatus
US8965142B2 (en) Method and device for classifying a light object located ahead of a vehicle
CN115817330A (en) Control method, control device, terminal equipment and computer readable storage medium
CN112926476B (en) Vehicle identification method, device and storage medium
JP7210208B2 (en) Providing device
KR101180676B1 (en) A method for controlling high beam automatically based on image recognition of a vehicle
CN111090096B (en) Night vehicle detection method, device and system
JPWO2020110802A1 (en) In-vehicle object identification system, automobile, vehicle lighting equipment, classifier learning method, arithmetic processing device
CN111243310A (en) Traffic sign recognition method, system, medium, and apparatus
JP2020061052A (en) Traffic light determination device
CN116206483B (en) Parking position determining method, electronic device and computer readable storage medium
KR20230026262A (en) Methdo for high-beam assistance in motor veicle and high-beam assistant for motor vehicle
CN118061900A (en) Control method and device for vehicle lamplight, electronic equipment and readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination