CN115205814A - Distance detection method, vehicle high beam control method, device, medium and vehicle - Google Patents
- Publication number
- CN115205814A (application number CN202210727830.9A)
- Authority
- CN
- China
- Prior art keywords
- vehicle
- distance
- point
- lamp
- pair
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/56—Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
- G06V20/58—Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
- G06V20/584—Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads of vehicle lights or traffic lights
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60Q—ARRANGEMENT OF SIGNALLING OR LIGHTING DEVICES, THE MOUNTING OR SUPPORTING THEREOF OR CIRCUITS THEREFOR, FOR VEHICLES IN GENERAL
- B60Q1/00—Arrangement of optical signalling or lighting devices, the mounting or supporting thereof or circuits therefor
- B60Q1/02—Arrangement of optical signalling or lighting devices, the mounting or supporting thereof or circuits therefor the devices being primarily intended to illuminate the way ahead or to illuminate other areas of way or environments
- B60Q1/04—Arrangement of optical signalling or lighting devices, the mounting or supporting thereof or circuits therefor the devices being primarily intended to illuminate the way ahead or to illuminate other areas of way or environments the devices being headlights
- B60Q1/06—Arrangement of optical signalling or lighting devices, the mounting or supporting thereof or circuits therefor the devices being primarily intended to illuminate the way ahead or to illuminate other areas of way or environments the devices being headlights adjustable, e.g. remotely-controlled from inside vehicle
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/766—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using regression, e.g. by projecting features on hyperplanes
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/77—Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
- G06V10/774—Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V2201/00—Indexing scheme relating to image or video recognition or understanding
- G06V2201/08—Detecting or categorising vehicles
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Multimedia (AREA)
- General Physics & Mathematics (AREA)
- Physics & Mathematics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Databases & Information Systems (AREA)
- Evolutionary Computation (AREA)
- General Health & Medical Sciences (AREA)
- Medical Informatics (AREA)
- Software Systems (AREA)
- Computing Systems (AREA)
- Artificial Intelligence (AREA)
- Health & Medical Sciences (AREA)
- Mechanical Engineering (AREA)
- Length Measuring Devices By Optical Means (AREA)
Abstract
The invention belongs to the field of driver assistance and provides a distance detection method, a vehicle high beam control method, a device, a medium and a vehicle, aiming at the problem of detecting inter-vehicle distance through visual image processing alone in night scenes. To this end, the method of the invention comprises: training a distance acquisition model on sample images labeled with vehicle lamp points, lamp point attributes, lamp pairs, and the distance between the lamp points of each lamp pair and the vehicle-mounted camera, and obtaining the inter-vehicle distance with the trained distance acquisition model. By introducing a preset lamp-pair distance, the distance between the lamp points of a lamp pair and the vehicle-mounted camera in a sample image can be calculated and labeled automatically, which enlarges the pool of usable sample images and reduces the labeling workload. The invention thus provides a simple and reliable solution for detecting inter-vehicle distance at night through visual image processing alone, and has high practicability.
Description
Technical Field
The invention belongs to the field of driver assistance and provides a distance detection method, a vehicle high beam control method, an electronic device, a storage medium and a vehicle.
Background
With the rapid development of new technologies such as computing and sensing, driver assistance functions have received attention and been adopted by many automobile manufacturers, making the driving process easier and safer. For example, automatic high beam control can, when driving at night, automatically turn the high beam on or off according to the distance to the vehicles ahead, or remind the driver to do so, preventing the high beam from dazzling the drivers of vehicles ahead and compromising the driving safety of both vehicles.
In automatic high beam control for driver assistance, the inter-vehicle distance between the host vehicle and the vehicle ahead is generally used as the control criterion. In a monocular 3D vision scheme, the 3D attributes of obstacles in the image, such as position, orientation, length and width, can be acquired through joint lidar-and-vision labeling, and the target distance is then obtained through a deep learning network. In a night scene, however, with the lamps of the vehicle ahead turned on, the whole vehicle target cannot be seen; only the light sources of the lamp pair are visible, and the inter-vehicle distance can only be obtained by detecting, through visual image processing alone, the distance between the lamp points of the lamp pair and the imaging camera.
Accordingly, there is a need in the art for a new solution to the above-mentioned problems.
Disclosure of Invention
The invention aims to solve, or at least partially solve, the above technical problem: detecting, in a night scene and using visual image processing alone, the distance between the lamp points of a lamp pair and the imaging camera, so as to obtain the inter-vehicle distance.
In a first aspect, the present invention provides a method of distance detection, the method comprising:
acquiring a vehicle image through a vehicle-mounted camera;
and obtaining, based on the vehicle image and through a trained distance acquisition model, a target lamp point, the lamp point attribute corresponding to the target lamp point, and the distance between the target lamp point and the vehicle-mounted camera.
In one embodiment of the above distance detection method, the method further comprises:
obtaining a sample image, wherein the sample image is labeled with lamp points, the lamp point attribute corresponding to each lamp point, lamp pairs, and the distance between the lamp points of each lamp pair and the vehicle-mounted camera;
training the distance acquisition model using the sample image to obtain the trained distance acquisition model;
wherein the lamp point attributes comprise: paired lamp point, single lamp point, head lamp point and tail lamp point.
In one embodiment of the above distance detection method, "obtaining a sample image, wherein the sample image is labeled with lamp points, the lamp point attribute corresponding to each lamp point, lamp pairs, and the distance between the lamp points of each lamp pair and the vehicle-mounted camera" comprises:
labeling the lamp points;
labeling the lamp point attributes;
labeling the pair ID of each lamp pair, wherein the two lamp points of the same lamp pair share the same pair ID;
obtaining the calculated distance between a lamp pair and the vehicle-mounted camera according to its pair ID;
and labeling, according to the calculated distance, the distance between the lamp points of the lamp pair and the vehicle-mounted camera.
In one embodiment of the above distance detection method, "obtaining the calculated distance between a lamp pair and the vehicle-mounted camera according to its pair ID" comprises:
obtaining a pixel-level lamp-pair distance w from the two lamp points sharing the same pair ID;
and obtaining the calculated distance d = (D × c)/w between the lamp pair and the vehicle-mounted camera from the pixel-level lamp-pair distance w, a preset lamp-pair distance D and a camera intrinsic parameter c.
In one embodiment of the above distance detection method, "training the distance acquisition model using the sample image" comprises:
obtaining a first output and a second output through the distance acquisition model based on the sample image;
wherein the first output is a Gaussian heatmap comprising the lamp points, the lamp point attributes, and a lamp point Gaussian region corresponding to each lamp point;
the second output comprises the predicted distance between each pixel in the lamp point Gaussian region corresponding to a lamp point of a lamp pair and the vehicle-mounted camera;
and performing regression training, using a smooth L1 loss function, on the predicted distance between each pixel in the lamp point Gaussian region and the vehicle-mounted camera.
In one embodiment of the above distance detection method, the lamp point Gaussian region is a rectangular region, and the rectangular region is obtained by:
setting rectangular regions of different areas according to the calculated distance, wherein the smaller the calculated distance, the larger the area of the rectangular region.
In a second aspect, the present invention provides a vehicle high beam control method, the method comprising:
according to the distance detection method of any of the above aspects, obtaining a target lamp point, the lamp point attribute corresponding to the target lamp point, and the distance between the target lamp point and the vehicle-mounted camera, wherein the lamp point attributes comprise: paired lamp point, single lamp point, head lamp point and tail lamp point;
the vehicle-mounted camera is arranged on a first vehicle, the target vehicle lamp point is positioned on a second vehicle, and the distance between the target vehicle lamp point and the vehicle-mounted camera is the vehicle-to-vehicle distance between the first vehicle and the second vehicle;
when the lamp point attribute of the target lamp point is paired lamp point and head lamp point, if the inter-vehicle distance is smaller than a first vehicle distance threshold, turning off the high beam of the first vehicle and/or issuing first prompt information;
when the lamp point attribute of the target lamp point is paired lamp point and tail lamp point, if the inter-vehicle distance is smaller than a second vehicle distance threshold, turning off the high beam of the first vehicle and/or issuing second prompt information;
wherein the first vehicle distance threshold is greater than the second vehicle distance threshold.
In a third aspect, the present invention provides an electronic device comprising a processor and a memory, said memory being adapted to store a plurality of program codes, said program codes being adapted to be loaded and run by said processor to perform the distance detection method or the vehicle high beam control method of any of the above aspects.
In a fourth aspect, the present invention provides a storage medium adapted to store a plurality of program codes, the program codes being adapted to be loaded and executed by a processor to perform the distance detection method or the vehicle high beam control method according to any one of the above aspects.
In a fifth aspect, the invention provides a vehicle comprising the electronic device described above.
According to the method, a distance acquisition model is trained on sample images labeled with lamp points, lamp point attributes, lamp pairs, and the distance between the lamp points of each lamp pair and the vehicle-mounted camera; the trained model then detects, in a night-time vehicle image, the distance between the lamp points of a lamp pair and the vehicle-mounted camera, yielding the inter-vehicle distance between the two vehicles. By introducing a preset lamp-pair distance, the distance between the lamp points of a lamp pair and the vehicle-mounted camera in a sample image can be calculated and labeled automatically from the two labeled lamp points of the pair; there is no need to manually measure, record and label the actual lamp-pair-to-camera distance, which greatly saves manpower and material resources. As the number of samples grows, a distance detection model trained on a large number of samples will generally have higher detection accuracy and generalization capability. The method therefore provides a simple and reliable solution for detecting inter-vehicle distance at night through visual image processing alone, and has high practicability.
Drawings
Preferred embodiments of the present invention are described below with reference to the accompanying drawings, in which:
fig. 1 is a flowchart of main steps of a distance detection method according to an embodiment of the present invention.
FIG. 2 is a schematic diagram of a sample image of an embodiment of the invention.
FIG. 3 is a flowchart illustrating the main steps of a sample image annotation process according to an embodiment of the present invention.
Fig. 4 is a schematic diagram of a principle of calculating a distance between a pair of vehicle lights and an in-vehicle camera of an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions of the embodiments of the present invention will be described clearly and completely with reference to the accompanying drawings. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the present invention without making any creative effort, shall fall within the protection scope of the present invention.
It should be understood by those skilled in the art that these embodiments are only for explaining the technical principle of the present invention, and are not intended to limit the scope of the present invention. And can be modified as needed by those skilled in the art to suit particular applications.
Turning first to fig. 1, fig. 1 is a flow chart of the main steps of a distance detection method according to an embodiment of the present invention. As shown in fig. 1, the distance detection method of the present invention includes:
step S101: acquiring a vehicle image through a vehicle-mounted camera;
step S102: based on the vehicle image, the model is obtained through the trained distance, and the target vehicle lamp point, the vehicle lamp point attribute corresponding to the target vehicle lamp point and the distance between the target vehicle lamp point and the vehicle-mounted camera are obtained.
In an embodiment of the present invention, an on-vehicle camera directed toward the forward direction of the vehicle is mounted on the first vehicle. In step S101, when the vehicle is traveling at night, an image of the vehicle in front of the vehicle may be acquired by the onboard camera.
In step S102, the vehicle image usually needs to be preprocessed first to obtain an image to be processed whose image size, color storage format and the like meet the input requirements of the distance acquisition model. Preprocessing methods include proportional scaling, conversion to the RGB color space, padding, color normalization, and so on. Those skilled in the art can select one or more of these methods according to the input image size and aspect ratio expected by the distance acquisition model, the required data processing speed, and the like.
As an example, when the aspect ratio of the vehicle image differs from that of the image to be processed, the vehicle image can be padded so that its aspect ratio matches that of the image to be processed, and then scaled proportionally. To speed up subsequent image processing, color normalization can be applied to each pixel.
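An illustrative sketch of such a preprocessing pipeline follows; the 640×384 target size, the nearest-neighbour resampling, and the use of plain numpy instead of an image library are assumptions of this sketch, not taken from the patent.

```python
import numpy as np

# Assumed model input size (not specified by the patent)
TARGET_W, TARGET_H = 640, 384

def preprocess(img):
    """img: HxWx3 uint8 RGB array -> padded, scaled, normalized float32 array."""
    h, w = img.shape[:2]
    # choose a scale that fits the image inside the target canvas
    scale = min(TARGET_W / w, TARGET_H / h)
    new_w, new_h = int(w * scale), int(h * scale)
    # nearest-neighbour resize via index sampling (stand-in for a real resize)
    ys = (np.arange(new_h) / scale).astype(int).clip(0, h - 1)
    xs = (np.arange(new_w) / scale).astype(int).clip(0, w - 1)
    resized = img[ys][:, xs]
    # pad right/bottom with zeros so the aspect ratio matches the model input
    canvas = np.zeros((TARGET_H, TARGET_W, 3), dtype=np.uint8)
    canvas[:new_h, :new_w] = resized
    # per-pixel color normalization to [0, 1]
    return canvas.astype(np.float32) / 255.0

out = preprocess(np.full((720, 1280, 3), 128, dtype=np.uint8))
```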
After the vehicle image is preprocessed, the resulting image to be processed is input into the trained distance acquisition model to obtain the target lamp points, the position of each target lamp point, the lamp point attribute corresponding to each target lamp point, and the distance between each target lamp point and the vehicle-mounted camera. The lamp point attributes comprise: paired lamp point, single lamp point, head lamp point and tail lamp point.
It should be noted that a paired lamp point is a lamp point belonging to a complete pair of vehicle lamps fully visible in the vehicle image; a single lamp point may be the lamp of an electric bicycle or a motorcycle, or the visible one of a partially occluded lamp pair, etc.; a head lamp point indicates the lamp is a headlight of a vehicle, typically white; and a tail lamp point indicates the lamp is a taillight of a vehicle, typically red.
The specific model structure of the distance obtaining model in step S102 is not limited in the embodiments of the present invention, and for example, the distance obtaining model may be implemented by using a ResNet network, an HRNet network, or the like. The skilled person can select a suitable technical solution according to the actual situation.
Before the distance acquisition model is used, the distance acquisition model needs to be trained by using the labeled sample image to obtain the trained distance acquisition model.
As shown in the schematic sample image of fig. 2, the upper-left vertex of the sample image is selected as the coordinate origin, and a pixel coordinate system is established, with the u-axis in the horizontal direction and the v-axis in the vertical direction. In fig. 2, A, B, C, D and E are lamp points: A and B are the headlight pair of one vehicle, C and D are the taillight pair of another vehicle, and E is the lamp point of a motorcycle.
Next, an annotation process of a sample image will be described with reference to fig. 3, as shown in fig. 3, the annotation process according to the embodiment of the present invention includes:
step S301: marking a vehicle lamp point;
step S302: marking the attribute of the car light point;
step S303: labeling the pair ID of each lamp pair, wherein the two lamp points of the same lamp pair share the same pair ID;
step S304: obtaining the calculated distance between a lamp pair and the vehicle-mounted camera according to its pair ID;
step S305: labeling, according to the calculated distance, the distance between the lamp points of the lamp pair and the vehicle-mounted camera.
In step S301, a, B, C, D, and E are all labeled as vehicle light points.
In step S302, the lamp point attribute of each lamp point is labeled. As an example, the attribute of lamp point A is labeled [1,0,1,0] and the attribute of lamp point E is labeled [0,1,0,0]; the four positions in the vector denote: paired lamp point, single lamp point, head lamp point and tail lamp point, where a value of 1 indicates the attribute is present and 0 that it is absent. Thus the attribute of lamp point A is paired lamp point and head lamp point, while the attribute of lamp point E is only single lamp point.
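The 4-slot attribute vector described above can be sketched as follows; the helper name and argument style are illustrative only, not from the patent.

```python
# Attribute vector layout, following the text:
# [paired lamp point, single lamp point, head lamp point, tail lamp point]
ATTR_NAMES = ("paired", "single", "head", "tail")

def encode_attrs(paired, head=False, tail=False):
    # a lamp point is either part of a lamp pair or a single lamp point
    return [int(paired), int(not paired), int(head), int(tail)]

attr_head_pair = encode_attrs(paired=True, head=True)  # a headlight in a pair
attr_single = encode_attrs(paired=False)               # e.g. a motorcycle lamp
```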
In step S303, the pair ID of the pair of lamps is labeled, and as shown in fig. 2, the pair ID of the lamp point a and the pair ID of the lamp point B of the pair AB are both ID1.
In step S304, the pixel-level distance w_AB of the lamp pair AB can be obtained from the position coordinates A(u_A, v_A) of lamp point A and B(u_B, v_B) of lamp point B.
It should be noted that the coordinates of the car light point may be selected from the center position of the car light point region in the sample image.
According to the pinhole imaging principle, the relationship shown in fig. 4 holds among the pixel-level lamp-pair distance w, the camera intrinsic parameter c, the preset lamp-pair distance D and the distance d between the lamp pair and the vehicle-mounted camera: w/c = D/d, where the camera intrinsic parameter c is the camera focal length.
For some application scenarios, such as automatic high beam control for driver assistance at night, high ranging accuracy is not required. In that case the lamp-pair distance is set to a fixed value based on the average actual lamp spacing of most vehicles; in the embodiment of the present invention, the preset lamp-pair distance is D = 1.86 m.
When the pixel-level lamp-pair distance w, the camera intrinsic parameter c and the preset lamp-pair distance D are known, the distance between the lamp pair and the vehicle-mounted camera is obtained as d = (D × c)/w. In the embodiment of the invention, the distance between the lamp pair AB and the vehicle-mounted camera is calculated as d_AB = 1.86 × c/w_AB.
Thus, in step S305, according to the above calculated distance, the distances between lamp point A and lamp point B of the lamp pair AB and the vehicle-mounted camera can both be labeled as d_AB.
In the embodiment of the invention, the calculated distance d_CD between the lamp pair CD and the vehicle-mounted camera is obtained in the same way, and the distances between lamp point C and lamp point D of the lamp pair CD and the vehicle-mounted camera are both labeled d_CD. For lamp point E, since no calculated distance can be obtained, its distance to the vehicle-mounted camera is not labeled.
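A minimal sketch of this automatic labeling flow (steps S301–S305) follows. By similar triangles the range is inversely proportional to the pixel spacing, d = D·c/w; the focal length value and the record field names are assumptions for illustration, not from the patent.

```python
import math

FOCAL_LENGTH_PX = 1000.0   # assumed camera intrinsic c (focal length, pixels)
LAMP_PAIR_WIDTH_M = 1.86   # preset lamp-pair distance D from the patent

def label_sample(lamp_points):
    """Auto-fill the camera distance for every lamp point that belongs
    to a labeled lamp pair; single lamp points stay unlabeled."""
    # group lamp points by pair ID (single lamp points carry pair_id None)
    pairs = {}
    for p in lamp_points:
        if p["pair_id"] is not None:
            pairs.setdefault(p["pair_id"], []).append(p)
    for pts in pairs.values():
        if len(pts) != 2:
            continue  # a lone visible lamp of a pair cannot be ranged
        (ua, va), (ub, vb) = pts[0]["uv"], pts[1]["uv"]
        w = math.hypot(ub - ua, vb - va)              # pixel-level pair distance
        d = LAMP_PAIR_WIDTH_M * FOCAL_LENGTH_PX / w   # pinhole ranging d = D*c/w
        for p in pts:
            p["distance_m"] = d
    return lamp_points

# hypothetical annotation records: a headlight pair plus a single lamp point
sample = [
    {"uv": (400, 300), "attr": [1, 0, 1, 0], "pair_id": 1, "distance_m": None},
    {"uv": (493, 300), "attr": [1, 0, 1, 0], "pair_id": 1, "distance_m": None},
    {"uv": (700, 320), "attr": [0, 1, 0, 0], "pair_id": None, "distance_m": None},
]
labeled = label_sample(sample)
```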
It should be noted that obtaining and labeling the distance between a lamp pair and the vehicle-mounted camera can also be done manually: the distance is actually measured, a sample image is captured at that position, and the sample image is then labeled by hand.
According to the above method, however, one only needs to capture, with the same vehicle-mounted camera, arbitrary sample images containing lamp pairs within its field of view; the distance between each lamp pair and the vehicle-mounted camera is then calculated automatically using the distance calculation method above, and the lamp points are labeled with distances automatically. There is no need to manually measure, record and label the lamp-pair-to-camera distance, which greatly saves manpower and material resources.
Meanwhile, any image containing lamp pairs captured by a vehicle-mounted camera of the same model can serve as a sample image, which greatly increases the number of available samples. Accordingly, a distance detection model trained on a large number of samples typically has higher detection accuracy and generalization capability.
When training the distance acquisition model with the labeled sample images, the model has two outputs. The first output is a Gaussian heatmap, used to detect the lamp points in the sample image and to obtain the lamp point attribute of each lamp point. The second output is the distance prediction result, specifically the predicted distance between each pixel in the lamp point Gaussian region corresponding to a lamp point of a lamp pair and the vehicle-mounted camera. For the sample image of fig. 2, the second output comprises the predicted distances of lamp points A, B, C and D. Since lamp point E is a single lamp point with no labeled distance, it is not included in the second output.
Based on the labeled distance between each lamp pair of the sample image and the vehicle-mounted camera, regression training is performed with a smooth L1 loss function on the predicted distance between each pixel in the Gaussian region of a lamp pair and the vehicle-mounted camera. Meanwhile, based on the other labels of the sample image, the lamp point detection result and the lamp point attributes are trained in a supervised fashion.
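For reference, the smooth L1 loss used for this distance regression has the standard piecewise form sketched below; the transition point beta = 1.0 is a conventional default and an assumption here, not a value given in the patent.

```python
import numpy as np

def smooth_l1(pred, target, beta=1.0):
    """Smooth L1 (Huber-style) loss: quadratic for small errors,
    linear for large ones, averaged over all pixels."""
    diff = np.abs(pred - target)
    loss = np.where(diff < beta,
                    0.5 * diff ** 2 / beta,  # quadratic near zero
                    diff - 0.5 * beta)       # linear for large errors
    return loss.mean()

# predicted vs. labeled distances (meters) for three pixels
# inside one lamp point Gaussian region
pred = np.array([19.5, 20.4, 22.0])
target = np.array([20.0, 20.0, 20.0])
loss = smooth_l1(pred, target)
```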
The area of the Gaussian region varies with the calculated distance between the lamp point and the vehicle-mounted camera: following the camera's near-large, far-small imaging principle, the smaller the calculated distance, the larger the area of the lamp point Gaussian region. Preferably, in the embodiment of the present invention, the Gaussian region is a rectangular region: when the calculated distance is at most 300 meters, the rectangular region may be set to 9 × 9 pixels; when it is greater than 300 meters and at most 600 meters, to 8 × 8 pixels; and when it is greater than 600 meters, to 7 × 7 pixels.
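The distance-dependent region sizes above can be sketched as follows; the Gaussian sigma value and the stamp-by-maximum convention are assumptions of this sketch, not specified by the patent.

```python
import numpy as np

def region_size(calc_dist_m):
    """Rectangular Gaussian region side length per the thresholds above."""
    if calc_dist_m <= 300:
        return 9
    if calc_dist_m <= 600:
        return 8
    return 7

def gaussian_region(heatmap, u, v, calc_dist_m, sigma=2.0):
    """Stamp a Gaussian peak (value 1 at the lamp point itself) onto the
    heatmap, restricted to the distance-dependent rectangular region."""
    r = region_size(calc_dist_m) // 2
    for dv in range(-r, r + 1):
        for du in range(-r, r + 1):
            y, x = v + dv, u + du
            if 0 <= y < heatmap.shape[0] and 0 <= x < heatmap.shape[1]:
                g = np.exp(-(du * du + dv * dv) / (2 * sigma * sigma))
                heatmap[y, x] = max(heatmap[y, x], g)  # keep strongest peak
    return heatmap

# a lamp point at (u=40, v=24), 120 m away, on a 48x80 target heatmap
hm = gaussian_region(np.zeros((48, 80)), u=40, v=24, calc_dist_m=120.0)
```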
It should be noted that image preprocessing methods (color space conversion to RGB color space, filling, color normalization, etc.), a ResNet network, an HRNet network, regression training using smooth L1 regression loss function, supervised training, etc. are all methods of image processing and network training commonly used by those skilled in the art, and are not described herein again.
Further, the present invention also provides a vehicle high beam control method. According to the distance detection method above, a target lamp point, the lamp point attribute corresponding to the target lamp point, and the distance between the target lamp point and the vehicle-mounted camera are obtained, wherein the lamp point attributes comprise: paired lamp point, single lamp point, head lamp point and tail lamp point.
The vehicle-mounted camera is arranged on the first vehicle, the target lamp point is positioned on the second vehicle, and the distance between the target lamp point and the vehicle-mounted camera is the vehicle-to-vehicle distance between the first vehicle and the second vehicle.
If the lamp point attribute of the target lamp point has the value [1,0,1,0], the target lamp point is a paired lamp point and a head lamp point, indicating that the two vehicles are driving towards each other; when the inter-vehicle distance is smaller than the first vehicle distance threshold, the high beam of the host vehicle is turned off and/or first prompt information is issued. As an example, the first prompt information prompts the driver to turn off the high beam for the oncoming vehicle, and the prompt is preferably delivered through the in-vehicle audio system.
If the lamp point attribute of the target lamp point has the value [1,0,0,1], the target lamp point is a paired lamp point and a tail lamp point, indicating that the two vehicles are driving in the same direction; when the inter-vehicle distance is smaller than the second vehicle distance threshold, the high beam of the host vehicle is turned off and/or second prompt information is issued. As an example, the second prompt information prompts the driver to turn off the high beam for the same-direction vehicle ahead, again preferably through the in-vehicle audio system.
It should be noted that when the distance acquisition model outputs multiple target lamp points, the head lamp point and/or tail lamp point with the smallest inter-vehicle distance is generally used as the basis for controlling the vehicle's high beam. In addition, since at the same distance the high beam affects the driver of an oncoming vehicle more than the driver of a same-direction vehicle, the first vehicle distance threshold is usually set larger than the second vehicle distance threshold. As an example, the first vehicle distance threshold is set at 600 meters and the second vehicle distance threshold at 300 meters.
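Putting the control rule together, a minimal decision sketch follows; the function and constant names are illustrative, and the 600 m / 300 m values are the example thresholds from the text.

```python
FIRST_THRESHOLD_M = 600.0   # oncoming traffic (head lamp points)
SECOND_THRESHOLD_M = 300.0  # same-direction traffic (tail lamp points)

def should_dim_high_beam(detections):
    """detections: list of (attr_vector, distance_m) tuples, where the
    attribute vector order is [paired, single, head, tail]. Returns True
    when the high beam should be turned off or the driver prompted."""
    for attr, dist in detections:
        paired, _, head, tail = attr
        if paired and head and dist < FIRST_THRESHOLD_M:
            return True   # oncoming vehicle closer than the first threshold
        if paired and tail and dist < SECOND_THRESHOLD_M:
            return True   # same-direction vehicle closer than the second
    return False

dim = should_dim_high_beam([([1, 0, 1, 0], 450.0)])  # oncoming pair at 450 m
```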
Although regression training is performed only on the distances of lamp points belonging to lamp pairs during model training, the trained distance acquisition model, owing to its generalization ability, can likewise output single lamp points together with their distances to the vehicle-mounted camera; the single-lamp-point data is simply not processed further during application.
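The control logic described above can be sketched as follows. This is a minimal illustration, assuming the attribute vector layout [paired, single, head, tail] and the example thresholds of 600 m and 300 m; the function and variable names are hypothetical, not taken from the patent:

```python
# Hypothetical sketch of the high-beam control logic described above.
# Assumed attribute vector layout: [paired, single, head, tail].
FIRST_DISTANCE_THRESHOLD = 600.0   # metres, oncoming (head lamp) case
SECOND_DISTANCE_THRESHOLD = 300.0  # metres, same-direction (tail lamp) case

def decide_high_beam_off(lamp_points):
    """lamp_points: list of (attribute_vector, distance_m) pairs from the model.

    Returns True if the high beam should be switched off (and a prompt issued).
    """
    # Only paired lamp points (attr[0] == 1) drive the decision;
    # single lamp points are left unprocessed, as in the description.
    head = [d for attr, d in lamp_points if attr[0] == 1 and attr[2] == 1]
    tail = [d for attr, d in lamp_points if attr[0] == 1 and attr[3] == 1]
    # The nearest head/tail lamp point serves as the control basis.
    if head and min(head) < FIRST_DISTANCE_THRESHOLD:
        return True
    if tail and min(tail) < SECOND_DISTANCE_THRESHOLD:
        return True
    return False

print(decide_high_beam_off([([1, 0, 1, 0], 550.0)]))  # oncoming at 550 m
print(decide_high_beam_off([([1, 0, 0, 1], 550.0)]))  # same direction at 550 m
```

Because the first threshold is larger, an oncoming vehicle at 550 m already triggers dimming, while a same-direction vehicle at the same distance does not.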
Further, the present invention also provides a storage medium, which may be configured to store a program for executing the distance detection method or the vehicle high beam control method of the above method embodiments; the program may be loaded and executed by a processor to implement the corresponding method. For convenience of explanation, only the parts related to the embodiments of the present invention are shown, and specific technical details are not disclosed. The storage medium may be a storage device formed of various electronic devices; optionally, in the embodiments of the present invention, the storage medium is a non-transitory readable and writable storage medium.
Further, the present invention also provides an electronic device, which includes a processor configured to execute instructions to implement the distance detection method or the vehicle high beam control method of the above method embodiments. For convenience of explanation, only the parts related to the embodiments of the present invention are shown, and specific technical details are not disclosed. The electronic device may be a control apparatus formed of various electronic devices.
Further, the present invention also provides a vehicle, which includes the above electronic device, where the electronic device includes a processor configured to execute instructions to implement the distance detection method or the vehicle high beam control method of the above method embodiments. Optionally, the vehicle is a new energy automobile equipped with a driving assistance function and a vehicle-mounted camera.
Those of skill in the art will appreciate that the method steps of the examples described in connection with the embodiments disclosed herein may be implemented in electronic hardware, computer software, or a combination of the two. To clearly illustrate the interchangeability of electronic hardware and software, the components and steps of the examples have been described above generally in terms of their functionality. Whether such functionality is implemented as electronic hardware or software depends upon the particular application and the design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
It should be noted that the terms "first," "second," and the like in the description and in the claims, and in the drawings, are used for distinguishing between similar elements and not necessarily for describing or implying any particular order or sequence. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the invention described herein are capable of operation in sequences other than those illustrated or described herein.
In addition, in the description of the present application, the term "A and/or B" denotes all possible combinations of A and B, such as A alone, B alone, or A and B together.
So far, the technical solutions of the present invention have been described in connection with the preferred embodiments shown in the drawings, but it is apparent to those skilled in the art that the scope of the present invention is not limited to these specific embodiments. Equivalent changes or substitutions of related technical features can be made by those skilled in the art without departing from the principle of the invention, and the technical scheme after the changes or substitutions can fall into the protection scope of the invention.
Claims (10)
1. A method of distance detection, the method comprising:
acquiring a vehicle image through a vehicle-mounted camera;
and obtaining a target vehicle light point, vehicle light point attributes corresponding to the target vehicle light point and the distance between the target vehicle light point and the vehicle-mounted camera through a trained distance acquisition model based on the vehicle image.
2. The distance detection method according to claim 1, characterized in that the method further comprises:
obtaining a sample image, wherein the sample image is marked with a car light point, a car light point attribute corresponding to the car light point, a car light pair, and a distance between the car light point in the car light pair and the vehicle-mounted camera;
training the distance acquisition model by using the sample image to obtain the trained distance acquisition model;
the vehicle lamp point attributes comprise vehicle lamp point pairs, bicycle lamp points, head vehicle lamp points and tail vehicle lamp points.
3. The distance detection method according to claim 2, wherein "acquiring a sample image, wherein the sample image is labeled with a car light point, the car light point attribute corresponding to the car light point, a car light pair, and a distance between the car light point in the car light pair and the vehicle-mounted camera" comprises:
marking the vehicle lamp points;
marking the attribute of the car light point;
labeling the pair ID of the pair of lights, wherein two of the light points in the same pair of lights have the same pair ID;
acquiring a calculated distance between the car light pair and the vehicle-mounted camera according to the ID of the car light pair;
and marking the distance between the car light point in the car light pair and the vehicle-mounted camera according to the calculated distance.
4. The distance detection method according to claim 3, wherein the "obtaining the calculated distance between the pair of vehicle lights and the vehicle-mounted camera from the pair of vehicle lights ID" includes:
obtaining a pixel-level distance w of the vehicle lamp pair according to the two vehicle lamp points having the same vehicle lamp pair ID;
and obtaining the calculated distance D = (d × c)/w between the vehicle lamp pair and the vehicle-mounted camera according to the pixel-level distance w of the vehicle lamp pair, a preset real-world vehicle lamp pair spacing d, and a camera intrinsic parameter c.
5. The distance detection method according to claim 2, wherein training the distance acquisition model using the sample image comprises:
obtaining a first output and a second output through the distance acquisition model based on the sample image;
wherein the first output is a Gaussian heatmap comprising the vehicle lamp points, the vehicle lamp point attributes, and vehicle lamp point Gaussian regions corresponding to the vehicle lamp points;
the second output comprises a predicted distance between each pixel in the vehicle lamp point Gaussian region corresponding to each vehicle lamp point in a vehicle lamp pair and the vehicle-mounted camera;
and performing regression training, using a smooth L1 loss function, on the predicted distance between each pixel in the vehicle lamp point Gaussian region and the vehicle-mounted camera.
6. The distance detection method according to claim 5, wherein the vehicle lamp point Gaussian region is a rectangular region, and the method for acquiring the rectangular region comprises:
setting rectangular regions of different areas according to the calculated distance, wherein the smaller the calculated distance, the larger the area of the rectangular region.
7. A vehicle high beam control method, characterized by comprising:
obtaining a target vehicle lamp point, vehicle lamp point attributes corresponding to the target vehicle lamp point, and a distance between the target vehicle lamp point and the vehicle-mounted camera according to the distance detection method of any one of claims 1-6, wherein the vehicle lamp point attributes comprise a paired lamp point, a single lamp point, a head lamp point, and a tail lamp point;
the vehicle-mounted camera is arranged on a first vehicle, the target vehicle lamp point is positioned on a second vehicle, and the distance between the target vehicle lamp point and the vehicle-mounted camera is the vehicle-to-vehicle distance between the first vehicle and the second vehicle;
when the vehicle lamp point attribute of the target vehicle lamp point is the paired lamp point and the head lamp point, if the inter-vehicle distance is less than a first distance threshold, turning off the high beam of the first vehicle and/or sending out first prompt information;
when the vehicle lamp point attribute of the target vehicle lamp point is the paired lamp point and the tail lamp point, if the inter-vehicle distance is less than a second distance threshold, turning off the high beam of the first vehicle and/or sending out second prompt information;
wherein the first distance threshold is greater than the second distance threshold.
8. An electronic device comprising a processor and a memory, said memory being adapted to store a plurality of program codes, characterized in that said program codes are adapted to be loaded and run by said processor to perform the distance detection method of any one of claims 1 to 6 or the vehicle high beam control method of claim 7.
9. A storage medium adapted to store a plurality of program codes, characterized in that the program codes are adapted to be loaded and run by a processor to perform the distance detection method according to any one of claims 1 to 6 or the vehicle high beam control method according to claim 7.
10. A vehicle characterized in that the vehicle comprises the electronic device of claim 9.
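For illustration, the pixel-to-metric conversion of claim 4 can be sketched under the standard pinhole camera model, with c taken as the focal length in pixels; the concrete lamp-pair spacing and focal length below are assumed values, not taken from the patent:

```python
def lamp_pair_distance(w_pixels, d_metres, c_focal_pixels):
    """Pinhole-model distance D = (d * c) / w, where d is the preset
    real-world lamp-pair spacing, w the observed pixel-level spacing,
    and c the camera intrinsic (focal length in pixels)."""
    return d_metres * c_focal_pixels / w_pixels

# Assumed values: 1.6 m lamp spacing, 1000 px focal length, 8 px observed spacing.
print(lamp_pair_distance(8.0, 1.6, 1000.0))  # 200.0 metres
```

Note the inverse relationship: halving the observed pixel spacing doubles the computed distance, which is also why closer lamp pairs (larger w) get larger Gaussian regions in claim 6.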
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210727830.9A CN115205814A (en) | 2022-06-22 | 2022-06-22 | Distance detection method, vehicle high beam control method, device, medium and vehicle |
Publications (1)
Publication Number | Publication Date |
---|---|
CN115205814A true CN115205814A (en) | 2022-10-18 |
Family
ID=83577807
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210727830.9A Pending CN115205814A (en) | 2022-06-22 | 2022-06-22 | Distance detection method, vehicle high beam control method, device, medium and vehicle |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115205814A (en) |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11386673B2 (en) | Brake light detection | |
CN113284366B (en) | Vehicle blind area early warning method, early warning device, MEC platform and storage medium | |
CN106980813B (en) | Gaze generation for machine learning | |
CN111695546B (en) | Traffic signal lamp identification method and device for unmanned vehicle | |
CN114454809B (en) | Intelligent lamplight switching method, system and related equipment | |
JP4052310B2 (en) | Method, apparatus and system for calculating distance to intersection | |
US7957559B2 (en) | Apparatus and system for recognizing environment surrounding vehicle | |
JP2002083297A (en) | Object recognition method and object recognition device | |
JP4755227B2 (en) | Method for recognizing objects | |
CN112001235A (en) | Vehicle traffic information generation method and device and computer equipment | |
CN110727269B (en) | Vehicle control method and related product | |
CN111967384A (en) | Vehicle information processing method, device, equipment and computer readable storage medium | |
CN113989772A (en) | Traffic light detection method and device, vehicle and readable storage medium | |
CN115082894A (en) | Distance detection method, vehicle high beam control method, device, medium and vehicle | |
CN115205814A (en) | Distance detection method, vehicle high beam control method, device, medium and vehicle | |
CN112926476B (en) | Vehicle identification method, device and storage medium | |
US11113549B2 (en) | Method and device for analyzing an image and providing the analysis for a driving assistance system of a vehicle | |
JP2007072948A (en) | Traffic signal lighting state identification method and apparatus | |
TWI848512B (en) | Traffic sign identification methods, systems and vehicles | |
US20240371150A1 (en) | Brake Light Detection | |
US11935412B2 (en) | Information supply method and storage medium | |
CN118470681A (en) | Traffic light detection method and device, electronic equipment and storage medium | |
CN117681766A (en) | Vehicle collision early warning method, device, equipment, storage medium and program product | |
JP2022161700A (en) | Traffic light recognition device | |
CN118230593A (en) | Parking space detection method, electronic equipment, storage medium and vehicle |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||