CN112926476A - Vehicle identification method, device and storage medium

Vehicle identification method, device and storage medium

Info

Publication number
CN112926476A
Authority
CN
China
Prior art keywords
vehicle
frame
environment
target vehicle
information
Prior art date
Legal status
Pending
Application number
CN202110252292.8A
Other languages
Chinese (zh)
Inventor
Dong Bo (董博)
Current Assignee
Jingdong Kunpeng Jiangsu Technology Co Ltd
Original Assignee
Jingdong Kunpeng Jiangsu Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Jingdong Kunpeng Jiangsu Technology Co Ltd
Priority to CN202110252292.8A
Publication of CN112926476A
Legal status: Pending

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00: Scenes; Scene-specific elements
    • G06V20/50: Context or environment of the image
    • G06V20/56: Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/50: Depth or shape recovery
    • G06T7/521: Depth or shape recovery from laser ranging, e.g. using interferometry; from the projection of structured light
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/10: Image acquisition modality
    • G06T2207/10028: Range image; Depth image; 3D point clouds
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/30: Subject of image; Context of image processing
    • G06T2207/30248: Vehicle exterior or interior
    • G06T2207/30252: Vehicle exterior; Vicinity of vehicle

Abstract

The application provides a vehicle identification method, device, and storage medium, applicable to the field of unmanned driving.

Description

Vehicle identification method, device and storage medium
Technical Field
The present application relates to the field of image processing technologies, and in particular, to a vehicle identification method, apparatus, and storage medium.
Background
In recent years, with continuous progress in technologies such as artificial intelligence, information fusion, communication, and automatic control, the development of unmanned vehicles has accelerated, and public acceptance of and demand for them have gradually grown. An unmanned vehicle can collect environmental data inside and outside the vehicle in real time with on-board sensors and can identify and track static or dynamic objects, thereby helping the driver detect potential risks in the shortest time and improving driving safety.
At present, research on lamp identification for unmanned vehicles mainly addresses methods for recognizing different lamp types, such as whether a brake light or a turn light is on, and most of these methods use image segmentation to detect lamp positions in a single frame.
However, vehicle lamps are used differently under different weather or illumination conditions, and detecting lamp positions alone cannot accurately identify the driving intentions of surrounding vehicles, so driving safety cannot be ensured.
Disclosure of Invention
The embodiments of the present application provide a vehicle identification method, device, and storage medium that improve the accuracy of identifying vehicle driving intentions.
In a first aspect, an embodiment of the present application provides a vehicle identification method, which is applied to a first vehicle, and includes:
acquiring a plurality of frames of environment images from an image acquisition device, wherein each frame of environment image is used for presenting car light information of a target vehicle around the first vehicle and information of the environment where the first vehicle is located;
determining the vehicle lamp change information of the target vehicle according to the multi-frame environment image;
determining information of the environment where the first vehicle is located according to at least one frame of the environment image;
and determining the driving intention of the target vehicle according to the information of the environment where the first vehicle is located and the lamp change information of the target vehicle.
In an embodiment of the present application, the determining the vehicle lamp change information of the target vehicle according to the plurality of frames of environment images includes:
determining a plurality of two-dimensional detection frames of the target vehicle in the multi-frame environment image through target detection;
and inputting image blocks corresponding to the plurality of two-dimensional detection frames into a pre-trained car light detection model to obtain car light change information of the target vehicle, wherein the car light change condition is used for indicating the lighting conditions of different car lights of the target vehicle.
In one embodiment of the present application, the determining, through target detection, a plurality of two-dimensional detection frames of the target vehicle in the plurality of frames of environmental images includes:
and performing two-dimensional target detection on each frame of the environment image, determining a two-dimensional detection frame of the target vehicle in each frame of the environment image, and obtaining a plurality of two-dimensional detection frames of the target vehicle.
In one embodiment of the present application, the determining, through target detection, a plurality of two-dimensional detection frames of the target vehicle in the plurality of frames of environmental images includes:
acquiring multi-frame point cloud data from a laser radar detector;
carrying out three-dimensional target detection on the multiple frames of point cloud data to obtain a three-dimensional position frame of the target vehicle in each frame of point cloud data;
mapping the three-dimensional position frame of the target vehicle in the multi-frame point cloud data to the multi-frame environment image to obtain a plurality of two-dimensional detection frames of the target vehicle; the multi-frame point cloud data corresponds one-to-one to the multi-frame environment images.
In an embodiment of the application, the determining, according to at least one frame of the environment image, information of an environment in which the first vehicle is located includes:
inputting the at least one frame of environment image into a pre-trained environment recognition model to obtain an output result corresponding to each frame of environment image, wherein the output result indicates illumination information and/or weather state of the environment where the first vehicle is located;
and determining illumination information and/or a weather state of the environment where the first vehicle is located according to the output result corresponding to the at least one frame of environment image.
In an embodiment of the application, the information of the environment where the first vehicle is located includes illumination information and/or a weather state, the illumination information includes weak light or normal light, and the weather state includes any one of fog, rain, snow, or normal.
In one embodiment of the present application, the light change information is further used to indicate a brightness level of a first light of the target vehicle, the first light being used to indicate vehicle width or braking; the determining the driving intention of the target vehicle according to the information of the environment where the first vehicle is located and the vehicle lamp change information of the target vehicle comprises the following steps:
if the illumination information of the environment where the first vehicle is located is weak light, or the weather state is fog, rain, or snow, and the vehicle lamp change information indicates that the first vehicle lamp is on and its brightness level switches from low brightness to high brightness, determining that the driving intention of the target vehicle is deceleration; or
if the illumination information or the weather state of the environment where the first vehicle is located is normal, and the vehicle lamp change information indicates that the first vehicle lamp turns on, determining that the driving intention of the target vehicle is deceleration.
In a second aspect, an embodiment of the present application provides a vehicle identification device, including:
the system comprises an acquisition module, a processing module and a display module, wherein the acquisition module is used for acquiring a plurality of frames of environment images from an image acquisition device, and each frame of environment image is used for presenting car light information of a target vehicle around a first vehicle and information of the environment where the first vehicle is located;
the processing module is used for determining the vehicle lamp change information of the target vehicle according to the multi-frame environment image;
determining information of the environment where the first vehicle is located according to at least one frame of the environment image;
and determining the driving intention of the target vehicle according to the information of the environment where the first vehicle is located and the lamp change information of the target vehicle.
In an embodiment of the present application, the processing module is specifically configured to:
determining a plurality of two-dimensional detection frames of the target vehicle in the multi-frame environment image through target detection;
and inputting image blocks corresponding to the plurality of two-dimensional detection frames into a pre-trained car light detection model to obtain car light change information of the target vehicle, wherein the car light change information is used for indicating the lighting conditions of different car lights of the target vehicle.
In an embodiment of the present application, the processing module is specifically configured to: and performing two-dimensional target detection on each frame of the environment image, determining a two-dimensional detection frame of the target vehicle in each frame of the environment image, and obtaining a plurality of two-dimensional detection frames of the target vehicle.
In an embodiment of the application, the obtaining module is further configured to:
acquiring multi-frame point cloud data from a laser radar detector;
the processing module is specifically used for carrying out three-dimensional target detection on the multiple frames of point cloud data to obtain a three-dimensional position frame of the target vehicle in each frame of point cloud data;
mapping the three-dimensional position frame of the target vehicle in the multi-frame point cloud data to the multi-frame environment image to obtain a plurality of two-dimensional detection frames of the target vehicle; the multi-frame point cloud data corresponds one-to-one to the multi-frame environment images.
In an embodiment of the present application, the processing module is specifically configured to:
inputting the at least one frame of environment image into a pre-trained environment recognition model to obtain an output result corresponding to each frame of environment image, wherein the output result indicates illumination information and/or weather state of the environment where the first vehicle is located;
and determining illumination information and/or a weather state of the environment where the first vehicle is located according to the output result corresponding to the at least one frame of environment image.
In one embodiment of the present application, the light change information is further used to indicate a brightness level of a first light of the target vehicle, the first light being used to indicate vehicle width or braking; the processing module is specifically configured to:
if the illumination information of the environment where the first vehicle is located is weak light, or the weather state is fog, rain, or snow, and the vehicle lamp change information indicates that the first vehicle lamp is on and its brightness level switches from low brightness to high brightness, determining that the driving intention of the target vehicle is deceleration; or
if the illumination information or the weather state of the environment where the first vehicle is located is normal, and the vehicle lamp change information indicates that the first vehicle lamp turns on, determining that the driving intention of the target vehicle is deceleration.
In a third aspect, an embodiment of the present application provides a vehicle identification device, including:
a processor, a memory, and a computer program;
wherein the computer program is stored in the memory and configured to be executed by the processor to implement the vehicle identification method according to any one of the first aspect.
In a fourth aspect, the present application provides a readable storage medium, on which a computer program is stored, where the computer program is executed by a processor to implement the vehicle identification method of any one of the first aspect.
In a fifth aspect, the present application provides a computer program product comprising a computer program, which when executed by a processor is configured to implement the vehicle identification method according to any one of the first aspect.
The present application provides a vehicle identification method, device, and storage medium, applicable to the field of unmanned driving.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. It is obvious that the following drawings show some embodiments of the present application, and those skilled in the art can obtain other drawings from them without inventive effort.
FIG. 1 is a schematic view of a vehicle identification method provided herein;
FIG. 2 is a first flowchart of a vehicle identification method provided by the present application;
FIG. 3 is a schematic diagram of a vehicle light detection model provided herein;
FIG. 4 is a model schematic of a vehicle identification process provided herein;
FIG. 5 is a first schematic diagram of a vehicle lamp state change provided by the present application;
FIG. 6 is a second schematic diagram of a vehicle lamp state change provided by the present application;
FIG. 7 is a third schematic view of a vehicle lamp state change provided by the present application;
FIG. 8 is a fourth schematic view of a vehicle light state change provided by the present application;
FIG. 9 is a second flowchart of a vehicle identification method provided by the present application;
FIG. 10 is a schematic diagram, provided by the present application, of mapping three-dimensional point cloud data to a two-dimensional image;
FIG. 11 is a schematic structural diagram of a vehicle identification device provided herein;
fig. 12 is a schematic diagram of a hardware structure of the vehicle identification device provided in the present application.
With the above figures, there are shown specific embodiments of the present application, which will be described in more detail below. These drawings and written description are not intended to limit the scope of the inventive concepts in any manner, but rather to illustrate the inventive concepts to those skilled in the art by reference to specific embodiments.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some embodiments of the present application, but not all embodiments. All other embodiments that can be made by one skilled in the art based on the embodiments in the present application in light of the present disclosure are within the scope of the present application.
The terms "first," "second," "third," "fourth," and the like in the description and in the claims of the present application and in the above-described drawings (if any) are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the application described herein are capable of operation in sequences other than those illustrated or described herein.
Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
As noted in the background, current car light identification methods mainly use image segmentation and detect lamp positions based on a single frame. However, on some vehicles the brake light is shared with the width light, or the brake light is shared with the turn light, and detecting lamp positions in a single frame then cannot accurately identify the driving intention of the vehicle ahead or behind. For example, when the brake light and the width light are shared, both width lights being on may mean either width indication or braking; when the brake light and the turn light are shared, both turn lights being on may mean either double flash or braking.
In addition, current car light identification schemes do not consider the influence of dusk, night, or abnormal weather on lamp identification. For example, when the brake light and the width light are shared, the lamps are normally on at dusk or in fog, and braking raises the light brightness; in daytime or normal weather, the left and right width lights are off and are turned on when the vehicle brakes. It follows that under different illumination or weather conditions, the same lamp state may indicate different meanings. Existing car light identification schemes cannot intelligently distinguish these situations and easily misjudge the driving intentions of vehicles ahead and behind, so driving safety needs to be improved.
To solve these problems, the inventor considered the influence of illumination and abnormal weather on lamp identification and added an environment detection module to the vehicle identification process, which extracts environmental features such as illumination or weather from the image frames to obtain the information of the vehicle's current environment. The lamp state of a target vehicle is identified from multiple image frames and combined with this environment information to comprehensively judge the actual driving intention indicated by that lamp state under the current illumination or weather. This optimizes the lamp identification scheme, improves the accuracy of vehicle state identification, reduces misjudgment, and improves driving safety.
Fig. 1 is a schematic diagram of a scene for the vehicle identification method provided by the present application. As shown in Fig. 1, the identification scene may include a plurality of vehicles (three in Fig. 1, as an example), which may communicate with one another through a server or directly in a peer-to-peer manner.
Each vehicle is provided with at least one image capturing device, such as a front camera or a rear camera, for capturing an environmental image around the vehicle, where the environmental image may include other vehicles around the vehicle, such as a vehicle 12 in front of the vehicle 11 and a vehicle 13 behind the vehicle 11.
Optionally, in some scenarios, each vehicle is further provided with at least one laser radar (lidar) detector, for example one mounted on the roof, for acquiring point cloud data around the vehicle. Each scan of the lidar detector yields one frame of point cloud data, and each point includes three-dimensional coordinate information and may also include color (Red Green Blue, RGB) information or reflection intensity information.
It should be noted that every vehicle in the above scene has the same lamp structure, at both the front end and the rear end of the vehicle. For example, at the rear end the brake light is shared with the width light, or the brake light is shared with the turn light.
Each vehicle in the above scene may be provided with a vehicle identification device; by executing the vehicle identification scheme provided by the present application, the device determines the lamp states of surrounding vehicles so that route adjustments can be made when necessary.
The technical solution of the present application will be described in detail below with specific examples. The following several specific embodiments may be combined with each other, and details of the same or similar concepts or processes may not be repeated in some embodiments.
Fig. 2 is a first flowchart of the vehicle identification method provided by the present application. The method of this embodiment may be applied to the on-board computer of an unmanned vehicle, or to any device or apparatus on a vehicle that can execute the method, where the lamp structures of the vehicle and of the surrounding vehicles are consistent.
As shown in fig. 2, the vehicle identification method of the present embodiment specifically includes the following steps:
Step 101, obtaining a plurality of frames of environment images from an image acquisition device, wherein each frame of environment image is used for presenting car light information of a target vehicle around a first vehicle and information of the environment where the first vehicle is located.
In this embodiment, the first vehicle is provided with one or more image capturing devices, and the number of the image capturing devices is not particularly limited in this embodiment. For example, the image capturing device is a camera at the front end of the first vehicle, and is used for capturing an environment image in front of the first vehicle. For another example, the image capturing device is a camera at the rear end of the first vehicle, and is used for capturing an environment image behind the first vehicle. Optionally, in some embodiments, the image capturing devices may also be disposed on both sides of the first vehicle.
The environment image collected by the image collecting device is a two-dimensional image. The environment image includes target vehicles around the first vehicle, and the number of the target vehicles may be one or more. It should be understood that each frame of the environment image may present the headlight information of the target vehicle at the current time, and successive frames of the environment image may present the headlight change information of the target vehicle over a period of time.
The information of the environment in which the first vehicle is located comprises illumination information and/or weather conditions.
The illumination information comprises weak light or normal light, the weak light illumination can correspond to the evening or night environment, and the normal light illumination can correspond to the daytime environment. Optionally, the illumination information may further include strong light, and the strong light illumination corresponds to a daytime environment with strong light or a night environment in which a high beam is turned on by surrounding vehicles.
The weather state includes any one of fog, rain, snow, or normal. In fog, rain, or snow, imaging conditions are poor, which degrades properties of the collected environment images such as sharpness.
Step 102, determining the vehicle lamp change information of the target vehicle according to the multi-frame environment image.
In the embodiment, a target vehicle in a multi-frame environment image is mainly detected through a target detection algorithm, a plurality of two-dimensional detection frames of the target vehicle in the multi-frame environment image are obtained, and image blocks corresponding to the two-dimensional detection frames are input into a pre-trained vehicle lamp detection model to obtain vehicle lamp change information of the target vehicle. Wherein the lamp change information is used to indicate the lighting conditions of different lamps of the target vehicle.
The target detection algorithm may be a two-dimensional or a three-dimensional target detection algorithm; either one yields the two-dimensional detection frame of the target vehicle in each frame of the environment image.
In one embodiment of the application, two-dimensional target detection is directly performed on each frame of environment image, a two-dimensional detection frame of a target vehicle in each frame of environment image is determined, and a plurality of two-dimensional detection frames of the target vehicle are obtained.
In one embodiment of the application, target detection is performed on each frame of environment image by combining point cloud data acquired by a laser radar detector, a two-dimensional detection frame of a target vehicle in each frame of environment image is determined, and a plurality of two-dimensional detection frames of the target vehicle are obtained.
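As an illustration of the first, image-only approach, the following Python sketch runs a per-frame two-dimensional detector over the environment images and crops the image blocks that are later fed to the car light detection model. The Box2D fields and the detector callable are assumptions for illustration; the patent does not prescribe a specific detector or tracker.

    from dataclasses import dataclass
    from typing import Callable, Dict, List

    import numpy as np

    @dataclass
    class Box2D:
        x1: int; y1: int; x2: int; y2: int  # pixel corners of the 2D detection frame
        track_id: int                       # identity of the tracked target vehicle

    def crop_image_blocks(frames: List[np.ndarray],
                          detector: Callable[[np.ndarray], List["Box2D"]],
                          ) -> Dict[int, List[np.ndarray]]:
        """Detect target vehicles in every environment frame and collect, per
        tracked vehicle, the image blocks inside its 2D detection frames."""
        blocks: Dict[int, List[np.ndarray]] = {}
        for frame in frames:
            for box in detector(frame):
                patch = frame[box.y1:box.y2, box.x1:box.x2]
                blocks.setdefault(box.track_id, []).append(patch)
        return blocks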
Fig. 3 is a schematic diagram of the car light detection model provided by the present application. As shown in Fig. 3, the input of the model is a sequence of consecutive image frames: the image blocks corresponding to the two-dimensional detection frames of the same target vehicle in consecutive environment images. A feature extraction network turns the sequence into a time series of embeddings (feature vectors), each embedding corresponding to the lamp features in one frame; these are then transformed into a single embedding that integrates the lamp features across the frames. This single embedding serves as the output of the car light detection model: it represents the car light change information of the target vehicle and indicates the lighting conditions of the vehicle's different lamps over the consecutive frames.
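A minimal PyTorch sketch of that structure follows: a per-frame feature extraction network produces the embedding sequence, and a recurrent layer fuses it into the single embedding from which the light change information is read. The layer sizes, the GRU choice, and the number of light-change classes are assumptions; the patent only names the stages.

    import torch
    import torch.nn as nn

    class LightDetectionModel(nn.Module):
        def __init__(self, num_states: int = 8, embed_dim: int = 128):
            super().__init__()
            # per-frame feature extraction network -> one embedding per frame
            self.backbone = nn.Sequential(
                nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                nn.Linear(64, embed_dim),
            )
            # fuses the embedding time series into a single embedding
            self.temporal = nn.GRU(embed_dim, embed_dim, batch_first=True)
            self.head = nn.Linear(embed_dim, num_states)  # light-change classes

        def forward(self, clip: torch.Tensor) -> torch.Tensor:
            # clip: (B, T, 3, H, W) image blocks of one tracked vehicle
            b, t = clip.shape[:2]
            feats = self.backbone(clip.flatten(0, 1)).view(b, t, -1)
            _, h = self.temporal(feats)   # h: (1, B, embed_dim), fused embedding
            return self.head(h[-1])       # logits over light-change information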
Illustratively, one example of the car light change information is {BLR = 001, 000, 001, 000, 001}, where B denotes the brake light (B = 1 on, B = 0 off), L denotes the left turn light (L = 1 on, L = 0 off), and R denotes the right turn light (R = 1 on, R = 0 off). The change information in this example indicates that the right turn light flashes over the period.
Optionally, in some embodiments, the light change information is further used to indicate the brightness level of a first light of the target vehicle. The first light indicates vehicle width or braking; that is, it serves both as a brake light and as a width light. This embodiment mainly considers vehicles whose brake light and width light are shared: besides indicating whether each lamp is on, the light change information then also indicates the brightness level of the shared lamp. One example: {BLRA = 1000, 1000, 1000, 1000, 1000}, where B denotes the brake/width light (B = 1 on, B = 0 off) and A denotes its brightness level with two values, A = 0 low brightness and A = 1 high brightness. The change information in this example indicates that the brake/width light stays on at low brightness over the period.
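A small helper can make the encoding concrete. The field layout (B, L, R, plus the optional brightness bit A) follows the two examples above; the string representation is an assumption for illustration.

    from typing import Dict, List

    def decode_light_states(states: List[str]) -> Dict[str, list]:
        """states like ["001", "000", ...] (BLR) or ["1000", ...] (BLRA);
        returns per-lamp on/off series and, if present, the brightness series."""
        series: Dict[str, list] = {"brake": [], "left": [], "right": [], "brightness": []}
        for s in states:
            series["brake"].append(s[0] == "1")
            series["left"].append(s[1] == "1")
            series["right"].append(s[2] == "1")
            if len(s) > 3:  # BLRA variant with a brightness level bit
                series["brightness"].append("high" if s[3] == "1" else "low")
        return series

    # decode_light_states(["001", "000", "001", "000", "001"])["right"]
    # -> [True, False, True, False, True]   (right turn light flashing)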
Optionally, in some embodiments, the input requirements of the car light detection model include the size of the image frames. Therefore, before being input into the model, the image blocks need to be preprocessed, for example by rotation, scaling, and cropping.
Optionally, in some embodiments, the input requirements of the car light detection model include that the number of consecutive image frames in which the same target vehicle appears is greater than or equal to M, for example M = 5. If target tracking determines that a target vehicle appears in only N consecutive environment frames, where N is less than M, the target vehicle may be marked as invalid. That is, after the target vehicles in the multiple frames of environment images are obtained by the target detection algorithm, vehicles whose tracking histories do not reach the fixed sequence length can be filtered out. It should be understood that such vehicles do not affect the normal running of the first vehicle, so they can be excluded from the identification process, reducing the computation load of the car light identification device and improving identification efficiency.
Optionally, in some embodiments, vehicles beyond a preset image boundary in the environment image may also be filtered out. Illustratively, taking a front camera as the image acquisition device, its viewing angle range is ±60° while the range actually requiring monitoring is ±45°; a target vehicle at the boundary of the camera's view does not affect the normal running of the first vehicle and can likewise be excluded, again reducing computation and improving identification efficiency. Both filters are sketched below.
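A minimal sketch of both filters, reusing Box2D from the earlier snippet. M = 5 and the ±60°/±45° angles follow the examples above; the pinhole pixel-to-angle approximation is an assumption.

    import math
    from typing import Dict, List

    def filter_tracks(tracks: Dict[int, List["Box2D"]], m: int = 5,
                      img_width: int = 1920, fov_half_deg: float = 60.0,
                      monitored_half_deg: float = 45.0) -> Dict[int, List["Box2D"]]:
        """Drop vehicles whose tracking history is shorter than M frames and
        vehicles whose latest box centre lies beyond the monitored view angle."""
        half_px = img_width / 2.0
        # pinhole approximation: pixel offset from centre grows with tan(angle)
        bound_px = half_px * math.tan(math.radians(monitored_half_deg)) \
                           / math.tan(math.radians(fov_half_deg))
        kept = {}
        for tid, boxes in tracks.items():
            if len(boxes) < m:                   # invalid vehicle: history too short
                continue
            cx = (boxes[-1].x1 + boxes[-1].x2) / 2.0
            if abs(cx - half_px) > bound_px:     # beyond the preset image boundary
                continue
            kept[tid] = boxes
        return kept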
And 103, determining the information of the environment where the first vehicle is located according to at least one frame of environment image.
It should be understood that, within a preset time period, the environment information presented by the images collected by the image acquisition device changes little, so this embodiment may perform the environmental feature analysis on at least one of the multiple frames of environment images.
Specifically, at least one frame of environment image may be input into a pre-trained environment recognition model to obtain an output result corresponding to each frame of environment image, and the output result indicates illumination information and/or a weather state of an environment where the first vehicle is located. And determining the illumination information and/or the weather state of the environment where the first vehicle is located according to the output result corresponding to the at least one frame of environment image.
The environment recognition model of this embodiment includes a feature extraction network. Its input is a single-frame image, from which the network produces environment feature information (an embedding). For example, the environment feature information indicates scores for environmental conditions such as fog, rain, snow, or cloud, or scores for different illumination levels; it thus characterizes the illumination or weather condition of the single frame.
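An inference sketch of this step follows. The two-head model (illumination and weather logits) and the averaging of per-frame scores are assumptions; the patent only states that per-frame results are combined into the environment information.

    import torch

    ILLUMINATION = ["weak", "normal"]              # assumed label order
    WEATHER = ["fog", "rain", "snow", "normal"]    # assumed label order

    @torch.no_grad()
    def recognize_environment(frames: torch.Tensor, model: torch.nn.Module):
        """frames: (N, 3, H, W) sampled environment images; `model` is assumed
        to return (N, 2) illumination logits and (N, 4) weather logits."""
        illum_logits, weather_logits = model(frames)
        illum = ILLUMINATION[illum_logits.mean(dim=0).argmax().item()]
        weather = WEATHER[weather_logits.mean(dim=0).argmax().item()]
        return illum, weather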
Step 104, determining the driving intention of the target vehicle according to the information of the environment where the first vehicle is located and the lamp change information of the target vehicle.
For example, Fig. 4 is a model schematic diagram of the vehicle identification process provided by the present application. As shown in Fig. 4, the process involves the car light detection model and the environment recognition model, and the driving intention of the target vehicle is output by a comprehensive judgment over the embedding output by each model. Driving intentions of a vehicle include deceleration, stopping deceleration (normal driving), deceleration plus left turn, deceleration plus right turn, left turn, right turn, abnormal driving (left and right turn lights flashing simultaneously, i.e., double flash), abnormal driving plus acceleration or deceleration, and the like.
In one scenario, the first vehicle light is used to indicate vehicle width or braking, i.e. the first vehicle light is both a width light and a brake light, the width light and the brake light being common.
In an embodiment of the application, if the illumination information of the environment where the first vehicle is located is weak light, or the weather state is fog, rain, or snow, and the vehicle lamp change information indicates that the first vehicle lamp is on and its brightness level switches from low brightness to high brightness, it is determined that the driving intention of the target vehicle is deceleration.
In an embodiment of the application, if the illumination information of the environment where the first vehicle is located is weak light, or the weather state is fog, rain, or snow, and the vehicle lamp change information indicates that the first vehicle lamp is on and its brightness level switches from high brightness to low brightness, it is determined that the driving intention of the target vehicle is to stop decelerating.
In an embodiment of the application, if the illumination information or the weather state of the environment where the first vehicle is located is normal, and the vehicle lamp change information indicates that the first vehicle lamp turns on, it is determined that the driving intention of the target vehicle is deceleration.
In an embodiment of the application, if the illumination information or the weather state of the environment where the first vehicle is located is normal, and the vehicle lamp change information indicates that the first vehicle lamp switches from on to off, it is determined that the driving intention of the target vehicle is to stop decelerating.
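The four cases reduce to a small decision rule. The sketch below formalises them; the string encoding of the lamp change (e.g. "low->high") is an assumption for illustration.

    from typing import Optional

    def first_lamp_intent(illum: str, weather: str,
                          lamp_change: str) -> Optional[str]:
        """lamp_change: one of "off", "off->on", "on->off",
        "low->high", "high->low" (an assumed encoding)."""
        degraded = illum == "weak" or weather in {"fog", "rain", "snow"}
        if degraded:
            # the shared lamp is normally on; braking is signalled by brightness
            if lamp_change == "low->high":
                return "decelerating"
            if lamp_change == "high->low":
                return "stopped decelerating"
        else:
            # the shared lamp is normally off; it lights up only when braking
            if lamp_change == "off->on":
                return "decelerating"
            if lamp_change == "on->off":
                return "stopped decelerating"
        return None  # no intention recognized from this lamp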
According to the vehicle identification method provided by this embodiment, multiple frames of images of the vehicle's surroundings are obtained through the on-board image acquisition device; the car light changes of the target vehicle are determined from the multiple frames, while the information of the environment where the vehicle is located is determined from at least one frame; and the actual driving intention indicated by the light changes of the surrounding target vehicles under the current environment is judged comprehensively from both. This optimizes the car light identification scheme, improves the accuracy of light state identification, reduces misjudgment, and improves vehicle driving safety.
Optionally, in another scenario, a second lamp is used to indicate braking or steering; that is, the second lamp serves both as a brake light and as a turn light, the two being shared. Figs. 5 to 8 are schematic diagrams of various lamp state changes, showing the two brake/turn lamps at the tail of the vehicle.
In one embodiment of the present application, if the lamp change information is used to indicate two lamp states, one is that the left and right second lamps of the target vehicle are simultaneously turned on, the other is that the left and right second lamps of the target vehicle are simultaneously turned off, and the two lamp states alternately appear within a second time period, it is determined that the target vehicle has the double flash operation, as shown in fig. 5.
In one embodiment of the present application, if the lamp change information indicates that the two second right and left lamps of the target vehicle are simultaneously turned on for a period of time longer than the second period of time, it is determined that the target vehicle has a braking operation and the target vehicle is traveling at a reduced speed, as shown in fig. 6.
In one embodiment of the present application, if the lamp change information is used to indicate two lamp states, one is that the left and right second lamps of the target vehicle are simultaneously turned on, the other is that the left second lamp of the target vehicle is turned off and the right second lamp is turned on, and the two lamp states alternately appear within the second time period, it is determined that the driving intention of the target vehicle is braking plus left turning, as shown in fig. 7.
In one embodiment of the present application, if the lamp change information is used to indicate two lamp states, one is that the left and right second lamps of the target vehicle are simultaneously turned on, the other is that the right second lamp of the target vehicle is turned off and the left second lamp is turned on, and the two lamp states alternately appear within the second time period, it is determined that the driving intention of the target vehicle is braking plus right turning, as shown in fig. 8.
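The four patterns of Figs. 5 to 8 can likewise be written as a rule over the per-frame (left on, right on) states within the second time period; the formalisation below is an assumed reading of the described alternations.

    from typing import List, Tuple

    def second_lamp_intent(states: List[Tuple[bool, bool]]) -> str:
        """states: per-frame (left_on, right_on) within the second time period."""
        both_on = [l and r for l, r in states]
        if all(both_on):                    # Fig. 6: both lamps on throughout
            return "braking (decelerating)"
        if any(both_on):
            # frames where the lamps are NOT both on define the alternating state
            others = [s for s, b in zip(states, both_on) if not b]
            if all(not l and not r for l, r in others):
                return "double flash (abnormal driving)"   # Fig. 5
            if all(not l and r for l, r in others):
                return "braking plus left turn"            # Fig. 7
            if all(l and not r for l, r in others):
                return "braking plus right turn"           # Fig. 8
        return "no pattern recognized"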
It can be known from the above embodiments that the light state of the current image frame cannot accurately indicate the actual driving intention of the vehicle, so that it is necessary to perform comprehensive judgment by combining multiple frame images before the current image frame to improve the accuracy of vehicle identification.
It should be noted that the determination of driving intention in this step is based on a preset lamp structure, such as a shared brake/width light or a shared brake/turn light. It should be understood that with a different lamp structure, the same lamp state can indicate something different, and the judgment logic for driving intention differs accordingly. The scheme provided by the present application can be applied to an unmanned driving system in which the lamp structures of all unmanned vehicles are consistent, so the judgment logic for driving intention is also consistent.
Fig. 9 is a second flowchart of the vehicle identification method provided in the present application, and as shown in fig. 9, the vehicle identification method of the present embodiment specifically includes the following steps:
Step 201, obtaining multi-frame point cloud data from a laser radar detector.
In this embodiment, the laser radar detector detects objects in the surrounding environment, in the daytime or at night, by emitting infrared beams and receiving the reflected beams, measuring the distance between each object and the vehicle. Each frame of point cloud data obtained by scanning contains a plurality of points; each point carries the three-dimensional coordinates of an object, such as a surrounding vehicle, and in some scenarios also color information or reflection intensity (brightness) information. The number of laser radar detectors is not limited in this embodiment; one or more may be provided according to actual requirements.
Specifically, in the vehicle driving process, objects around the vehicle can be scanned in an all-around mode through a laser radar detector arranged on the vehicle, scanning can be conducted for multiple times within a preset time period, and multi-frame point cloud data are obtained.
Step 202, three-dimensional target detection is carried out on the multi-frame point cloud data, and a three-dimensional position frame of the target vehicle in each frame of point cloud data is obtained.
Step 203, mapping the three-dimensional position frame of the target vehicle in the multi-frame point cloud data to the multi-frame environment images to obtain a plurality of two-dimensional detection frames of the target vehicle.
The multi-frame point cloud data corresponds one-to-one to the multi-frame environment images.
In this embodiment, the three-dimensional position frame of the target vehicle in each frame of point cloud data is first determined by a three-dimensional target detection algorithm. Since there may be multiple target vehicles, each three-dimensional position frame carries the ID of one target vehicle, which is used to track the movement of that vehicle across the multiple frames of point cloud data.
The laser radar detector acquires point cloud data of the surrounding vehicles while the image acquisition device acquires two-dimensional images; point cloud data and two-dimensional images captured at the same moment correspond to each other, so a three-dimensional position frame in a point cloud frame can be mapped into the two-dimensional image according to the intrinsic and extrinsic parameters of the image acquisition device.
It should be noted that the point cloud data of the laser radar detector covers the full 360° around the vehicle, whereas the viewing angle range of the image acquisition device is much smaller, for example ±45° for the front camera. Therefore, before mapping, the point cloud data outside the viewing angle range of the image acquisition device needs to be filtered out, reducing the amount of computation and increasing processing speed.
Taking a camera as the image acquisition device, the three-dimensional coordinates of the target vehicle are first converted from the radar coordinate system to the camera coordinate system according to the camera's extrinsic parameters, including a rotation matrix and a translation vector. Then, according to the camera's intrinsic parameters, including a projection matrix and distortion parameters, the three-dimensional coordinates are projected into the pixel coordinate system, finally yielding the two-dimensional coordinates of the target vehicle in the two-dimensional image, i.e., its two-dimensional detection frame. This data processing is shown in Fig. 10.
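A numpy sketch of this mapping follows. The extrinsics (R, t) move the box corners from the radar coordinate system into the camera coordinate system, and the intrinsic matrix K projects them into pixel coordinates; lens distortion is omitted here for brevity, although the text mentions distortion parameters.

    import numpy as np

    def project_box3d_to_2d(corners_lidar: np.ndarray, R: np.ndarray,
                            t: np.ndarray, K: np.ndarray):
        """corners_lidar: (8, 3) box corners in the radar coordinate system;
        R (3, 3) and t (3,) are the camera extrinsics, K (3, 3) the intrinsics."""
        pts_cam = corners_lidar @ R.T + t      # radar -> camera coordinates
        pts_cam = pts_cam[pts_cam[:, 2] > 0]   # drop corners behind the camera
        if len(pts_cam) == 0:
            return None                        # box entirely outside the view
        uv = pts_cam @ K.T                     # pinhole projection
        uv = uv[:, :2] / uv[:, 2:3]            # normalize to pixel coordinates
        x1, y1 = uv.min(axis=0)
        x2, y2 = uv.max(axis=0)
        return x1, y1, x2, y2                  # the 2D detection frame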
Step 204, inputting the image blocks corresponding to the plurality of two-dimensional detection frames into a pre-trained vehicle lamp detection model to obtain vehicle lamp change information of the target vehicle. The lamp change information is used to indicate the lighting conditions of different lamps of the target vehicle.
Step 205, determining information of the environment where the first vehicle is located according to at least one frame of environment image.
Step 206, determining the driving intention of the target vehicle according to the information of the environment where the first vehicle is located and the lamp change information of the target vehicle.
Steps 204 to 206 of this embodiment correspond to steps 102 to 104 of the above embodiment, respectively, and the principle and technical effect of this embodiment can be found in the above embodiment, which is not described herein again.
According to the vehicle identification method provided by the embodiment, the two-dimensional image and the point cloud data around the vehicle are respectively obtained through the image acquisition device and the laser radar detector on the vehicle, the three-dimensional position information of the target vehicle in the point cloud data is obtained based on a three-dimensional target detection algorithm, and the three-dimensional position information is converted into the two-dimensional image, so that the two-dimensional detection frame of the target vehicle is obtained. The two-dimensional detection frame of the target vehicle in the multi-frame images is obtained through the process, the existing three-dimensional detection process of the unmanned vehicle can be effectively utilized, a two-dimensional detector does not need to be added independently, resources are saved, and the accuracy of three-dimensional detection is higher. By analyzing the change of the state of the vehicle lamp in the two-dimensional detection frame and combining the detection of the environment information in the two-dimensional image, the actual driving intention of the vehicle lamp change indication of the target vehicle around the vehicle in the current environment is comprehensively judged, the vehicle lamp identification scheme is optimized, the accuracy rate of vehicle lamp state identification is improved, the misjudgment is reduced, and the vehicle driving safety is improved.
In the embodiment of the present application, the vehicle identification device may be divided into the functional modules according to the method embodiment, for example, each functional module may be divided corresponding to each function, or two or more functions may be integrated into one processing module. The integrated module can be realized in a form of hardware or a form of a software functional module. It should be noted that, in the embodiment of the present application, the division of the module is schematic, and is only one logic function division, and there may be another division manner in actual implementation. The following description will be given by taking an example in which each functional module is divided by using a corresponding function.
Fig. 11 is a schematic structural diagram of a vehicle identification device provided in the present application, and as shown in fig. 11, a vehicle identification device 300 of the present embodiment includes: an acquisition module 301 and a processing module 302.
The system comprises an acquisition module 301, a processing module and a display module, wherein the acquisition module 301 is used for acquiring multiple frames of environment images from an image acquisition device, and each frame of environment image is used for presenting car light information of target vehicles around a first vehicle and information of the environment where the first vehicle is located;
the processing module 302 is configured to determine vehicle light change information of the target vehicle according to the multiple frames of environment images;
determining information of the environment where the first vehicle is located according to at least one frame of the environment image;
and determining the driving intention of the target vehicle according to the information of the environment where the first vehicle is located and the lamp change information of the target vehicle.
In an embodiment of the present application, the processing module 302 is specifically configured to:
determining a plurality of two-dimensional detection frames of the target vehicle in the multi-frame environment image through target detection;
and inputting image blocks corresponding to the plurality of two-dimensional detection frames into a pre-trained car light detection model to obtain car light change information of the target vehicle, wherein the car light change information is used for indicating the lighting conditions of different car lights of the target vehicle.
In an embodiment of the present application, the processing module 302 is specifically configured to: and performing two-dimensional target detection on each frame of the environment image, determining a two-dimensional detection frame of the target vehicle in each frame of the environment image, and obtaining a plurality of two-dimensional detection frames of the target vehicle.
In an embodiment of the present application, the obtaining module 301 is further configured to:
acquiring multi-frame point cloud data from a laser radar detector;
the processing module 302 is specifically configured to perform three-dimensional target detection on the multiple frames of point cloud data to obtain a three-dimensional position frame of the target vehicle in each frame of point cloud data;
mapping the three-dimensional position frame of the target vehicle in the multi-frame point cloud data to the multi-frame environment image to obtain a plurality of two-dimensional detection frames of the target vehicle; the multi-frame point cloud data corresponds one-to-one to the multi-frame environment images.
In an embodiment of the present application, the processing module 302 is specifically configured to:
inputting the at least one frame of environment image into a pre-trained environment recognition model to obtain an output result corresponding to each frame of environment image, wherein the output result indicates illumination information and/or weather state of the environment where the first vehicle is located;
and determining illumination information and/or a weather state of the environment where the first vehicle is located according to the output result corresponding to the at least one frame of environment image.
In one embodiment of the present application, the light change information is further used to indicate a brightness level of a first light of the target vehicle, the first light being used to indicate vehicle width or braking; the processing module 302 is specifically configured to:
if the illumination information of the environment where the first vehicle is located is weak light, or the weather state is fog, rain, or snow, and the vehicle lamp change information indicates that the first vehicle lamp is on and its brightness level switches from low brightness to high brightness, determining that the driving intention of the target vehicle is deceleration; or
if the illumination information or the weather state of the environment where the first vehicle is located is normal, and the vehicle lamp change information indicates that the first vehicle lamp turns on, determining that the driving intention of the target vehicle is deceleration.
The vehicle identification device provided by any one of the above embodiments is used for executing the technical scheme in any one of the above method embodiments, and the implementation principle and the technical effect are similar, and are not described herein again.
Fig. 12 is a schematic diagram of a hardware structure of the vehicle identification device provided in the present application. As shown in fig. 12, the vehicle recognition device 400 of the present embodiment includes:
a processor 401, a memory 402, and computer programs;
the memory 402 is used for storing executable instructions of the processor 401;
wherein the computer program is stored in the memory 402 and configured to be executed by the processor 401 to implement the vehicle identification method in any of the preceding method embodiments.
Optionally, the memory 402 may be separate or integrated with the processor 401.
When the memory 402 is a device independent of the processor 401, the vehicle identification apparatus 400 may further include:
a bus 403 for connecting the above devices.
The vehicle identification device is used for executing the vehicle identification method provided by any one of the method embodiments, the implementation principle and the technical effect are similar, and the detailed description is omitted here.
Embodiments of the present application further provide a readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements the vehicle identification method provided in any of the foregoing embodiments.
The embodiment of the present application further provides a computer program product, which includes a computer program, and the computer program is used for implementing the vehicle identification method provided by any one of the foregoing method embodiments when being executed by a processor.
An embodiment of the present application further provides a chip, including: a processing module and a communication interface, wherein the processing module can execute the technical scheme in the method embodiment.
Further, the chip further includes a storage module (e.g., a memory), where the storage module is configured to store instructions, and the processing module is configured to execute the instructions stored in the storage module, and the execution of the instructions stored in the storage module causes the processing module to execute the technical solution in the foregoing method embodiment.
It should be understood that the Processor may be a Central Processing Unit (CPU), other general purpose Processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), etc. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like. The steps of a method disclosed in connection with the present invention may be embodied directly in a hardware processor, or in a combination of the hardware and software modules within the processor.
The memory may comprise high-speed RAM, and may further comprise non-volatile memory (NVM), such as at least one magnetic disk; it may also be a USB flash drive, a removable hard disk, a read-only memory, a magnetic disk, an optical disc, or the like.
The bus may be an Industry Standard Architecture (ISA) bus, a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The bus may be divided into an address bus, a data bus, a control bus, and so on. For ease of illustration, the buses in the figures of the present application are drawn as a single line, but this does not mean that there is only one bus or one type of bus.
The storage medium may be implemented by any type of volatile or non-volatile memory device or a combination thereof, such as a static random-access memory (SRAM), an electrically erasable programmable read-only memory (EEPROM), an erasable programmable read-only memory (EPROM), a programmable read-only memory (PROM), a read-only memory (ROM), a magnetic memory, a flash memory, a magnetic disk, or an optical disc. A storage medium may be any available medium that can be accessed by a general-purpose or special-purpose computer.
An exemplary storage medium is coupled to the processor such that the processor can read information from, and write information to, the storage medium. Of course, the storage medium may also be integral to the processor. The processor and the storage medium may reside in an application-specific integrated circuit (ASIC). Of course, the processor and the storage medium may also reside as discrete components in the vehicle identification apparatus.
Finally, it should be noted that: the above embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some or all of the technical features may be equivalently replaced; such modifications or substitutions do not depart from the spirit and scope of the present disclosure as defined by the appended claims.

Claims (16)

1. A vehicle identification method, applied to a first vehicle, the method comprising:
acquiring a plurality of frames of environment images from an image acquisition device, wherein each frame of environment image presents vehicle lamp information of a target vehicle around the first vehicle and information of the environment where the first vehicle is located;
determining the vehicle lamp change information of the target vehicle according to the plurality of frames of environment images;
determining information of the environment where the first vehicle is located according to at least one frame of the environment image;
and determining the driving intention of the target vehicle according to the information of the environment where the first vehicle is located and the vehicle lamp change information of the target vehicle.
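Read purely as a processing pipeline, claim 1 amounts to four steps: acquire frames, detect lamp changes, recognize the environment, and combine both into an intention. A minimal sketch follows, assuming the per-step functions are supplied by the caller; every name here is hypothetical and not part of the claim.

```python
from typing import Any, Callable, Sequence

def identify_driving_intention(
    frames: Sequence[Any],  # plurality of frames of environment images
    detect_lamp_changes: Callable[[Sequence[Any]], Any],
    recognize_environment: Callable[[Sequence[Any]], Any],
    infer_intention: Callable[[Any, Any], str],
) -> str:
    lamp_change = detect_lamp_changes(frames)      # lamp change info of the target vehicle
    env_info = recognize_environment(frames[:1])   # at least one frame suffices
    return infer_intention(env_info, lamp_change)  # combine both into a driving intention
```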
2. The method according to claim 1, wherein the determining the vehicle lamp change information of the target vehicle according to the plurality of frames of environment images comprises:
determining a plurality of two-dimensional detection frames of the target vehicle in the plurality of frames of environment images through target detection;
and inputting image blocks corresponding to the plurality of two-dimensional detection frames into a pre-trained vehicle lamp detection model to obtain the vehicle lamp change information of the target vehicle, wherein the vehicle lamp change information indicates the lighting states of different lamps of the target vehicle.
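As a sketch of claim 2's two steps (detection frames, then a lamp classifier on the cropped image blocks), assuming frames are HxWxC image arrays and the model exposes a hypothetical predict method:

```python
def lamp_change_from_boxes(frames, boxes, lamp_model):
    """Crop the image block inside each frame's two-dimensional detection
    frame and classify it with a pre-trained lamp detection model."""
    states = []
    for frame, (x1, y1, x2, y2) in zip(frames, boxes):
        patch = frame[y1:y2, x1:x2]               # image block of the detection frame
        states.append(lamp_model.predict(patch))  # e.g. {"brake": True, "width": False}
    # Lamp change information: differences between consecutive frames' states.
    return [(prev, cur) for prev, cur in zip(states, states[1:]) if prev != cur]
```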
3. The method according to claim 2, wherein the determining a plurality of two-dimensional detection frames of the target vehicle in the plurality of frames of environmental images through target detection comprises:
and performing two-dimensional target detection on each frame of the environment image, determining a two-dimensional detection frame of the target vehicle in each frame of the environment image, and obtaining a plurality of two-dimensional detection frames of the target vehicle.
4. The method according to claim 2, wherein the determining a plurality of two-dimensional detection frames of the target vehicle in the plurality of frames of environmental images through target detection comprises:
acquiring multi-frame point cloud data from a laser radar detector;
carrying out three-dimensional target detection on the multiple frames of point cloud data to obtain a three-dimensional position frame of the target vehicle in each frame of point cloud data;
mapping the three-dimensional position frames of the target vehicle in the multiple frames of point cloud data to the plurality of frames of environment images to obtain a plurality of two-dimensional detection frames of the target vehicle, wherein the multiple frames of point cloud data are in one-to-one correspondence with the plurality of frames of environment images.
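One common way to realize the mapping in claim 4 is to project the eight corners of each three-dimensional position frame into the image plane and take the axis-aligned bounds as the two-dimensional detection frame. A sketch, assuming the lidar boxes are already expressed in the camera coordinate system and a 3x4 projection matrix P is known:

```python
import numpy as np

def project_box_to_image(corners_3d: np.ndarray, P: np.ndarray) -> tuple:
    """corners_3d: (8, 3) corners of the 3-D position frame in camera
    coordinates; P: 3x4 camera projection matrix."""
    pts = np.hstack([corners_3d, np.ones((8, 1))])  # homogeneous coordinates, (8, 4)
    uvw = pts @ P.T                                 # project to image plane, (8, 3)
    uv = uvw[:, :2] / uvw[:, 2:3]                   # perspective division
    x1, y1 = uv.min(axis=0)
    x2, y2 = uv.max(axis=0)                         # axis-aligned 2-D detection frame
    return int(x1), int(y1), int(x2), int(y2)
```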
5. The method according to any one of claims 1-4, wherein the determining information of the environment in which the first vehicle is located according to at least one frame of the environment image comprises:
inputting the at least one frame of environment image into a pre-trained environment recognition model to obtain an output result corresponding to each frame of environment image, wherein the output result indicates illumination information and/or a weather state of the environment where the first vehicle is located;
and determining illumination information and/or a weather state of the environment where the first vehicle is located according to the output result corresponding to the at least one frame of environment image.
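Claim 5 leaves open how the per-frame output results are combined into a single environment estimate; a majority vote over the frames is one plausible choice (a sketch with hypothetical names):

```python
from collections import Counter

def aggregate_environment(outputs):
    """outputs: per-frame (illumination, weather) pairs, e.g.
    [("low_light", "fog"), ("low_light", "fog"), ("normal", "fog")]."""
    illumination = Counter(o[0] for o in outputs).most_common(1)[0][0]
    weather = Counter(o[1] for o in outputs).most_common(1)[0][0]
    return illumination, weather
```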
6. The method according to any one of claims 1-4, wherein the information of the environment where the first vehicle is located comprises illumination information and/or a weather state, wherein the illumination information comprises low light or normal, and the weather state comprises any one of fog, rain, snow, or normal.
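Claim 6 fixes the value domains of the environment information; written out as enumerations (a sketch, with hypothetical names) they read:

```python
from enum import Enum

class Illumination(Enum):
    LOW_LIGHT = "low_light"
    NORMAL = "normal"

class WeatherState(Enum):
    FOG = "fog"
    RAIN = "rain"
    SNOW = "snow"
    NORMAL = "normal"
```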
7. The method of any one of claims 1-4, wherein the vehicle lamp change information further indicates a brightness level of a first lamp of the target vehicle, the first lamp being used to indicate vehicle width or braking; and the determining the driving intention of the target vehicle according to the information of the environment where the first vehicle is located and the vehicle lamp change information of the target vehicle comprises:
if the illumination information of the environment where the first vehicle is located indicates low light, or the weather state is fog, rain, or snow, and the vehicle lamp change information indicates that the first lamp is turned on and its brightness level has switched from low to high, determining that the driving intention of the target vehicle is decelerating; or
if the illumination information or the weather state of the environment where the first vehicle is located is normal, and the vehicle lamp change information indicates that the first lamp is turned on, determining that the driving intention of the target vehicle is decelerating.
8. A vehicle identification device characterized by comprising:
the system comprises an acquisition module, a processing module and a display module, wherein the acquisition module is used for acquiring a plurality of frames of environment images from an image acquisition device, and each frame of environment image is used for presenting car light information of a target vehicle around a first vehicle and information of the environment where the first vehicle is located;
a processing module, configured to determine the vehicle lamp change information of the target vehicle according to the plurality of frames of environment images;
determining information of the environment where the first vehicle is located according to at least one frame of the environment image;
and determine the driving intention of the target vehicle according to the information of the environment where the first vehicle is located and the vehicle lamp change information of the target vehicle.
9. The apparatus of claim 8, wherein the processing module is specifically configured to:
determining a plurality of two-dimensional detection frames of the target vehicle in the plurality of frames of environment images through target detection;
and inputting image blocks corresponding to the plurality of two-dimensional detection frames into a pre-trained vehicle lamp detection model to obtain the vehicle lamp change information of the target vehicle, wherein the vehicle lamp change information indicates the lighting states of different lamps of the target vehicle.
10. The apparatus of claim 9, wherein the processing module is specifically configured to: and performing two-dimensional target detection on each frame of the environment image, determining a two-dimensional detection frame of the target vehicle in each frame of the environment image, and obtaining a plurality of two-dimensional detection frames of the target vehicle.
11. The apparatus of claim 9, wherein the acquisition module is further configured to:
acquiring multi-frame point cloud data from a laser radar detector;
the processing module is specifically configured to perform three-dimensional target detection on the multiple frames of point cloud data to obtain a three-dimensional position frame of the target vehicle in each frame of point cloud data;
and map the three-dimensional position frames of the target vehicle in the multiple frames of point cloud data to the plurality of frames of environment images to obtain a plurality of two-dimensional detection frames of the target vehicle, wherein the multiple frames of point cloud data are in one-to-one correspondence with the plurality of frames of environment images.
12. The apparatus according to any one of claims 8 to 11, wherein the processing module is specifically configured to:
inputting the at least one frame of environment image into a pre-trained environment recognition model to obtain an output result corresponding to each frame of environment image, wherein the output result indicates illumination information and/or a weather state of the environment where the first vehicle is located;
and determining illumination information and/or a weather state of the environment where the first vehicle is located according to the output result corresponding to the at least one frame of environment image.
13. The apparatus of any one of claims 8-11, wherein the vehicle lamp change information further indicates a brightness level of a first lamp of the target vehicle, the first lamp being used to indicate vehicle width or braking; and the processing module is specifically configured to:
if the illumination information of the environment where the first vehicle is located indicates low light, or the weather state is fog, rain, or snow, and the vehicle lamp change information indicates that the first lamp is turned on and its brightness level has switched from low to high, determine that the driving intention of the target vehicle is decelerating; or
if the illumination information or the weather state of the environment where the first vehicle is located is normal, and the vehicle lamp change information indicates that the first lamp is turned on, determine that the driving intention of the target vehicle is decelerating.
14. A vehicle identification device characterized by comprising:
a processor, a memory, and a computer program;
wherein the computer program is stored in the memory and configured to be executed by the processor to implement the vehicle identification method according to any of claims 1-7.
15. A readable storage medium on which a computer program is stored, wherein the computer program, when executed by a processor, implements the vehicle identification method according to any one of claims 1 to 7.
16. A computer program product, characterized in that it comprises a computer program which, when executed by a processor, implements the vehicle identification method according to any one of claims 1 to 7.
CN202110252292.8A 2021-03-08 2021-03-08 Vehicle identification method, device and storage medium Pending CN112926476A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110252292.8A CN112926476A (en) 2021-03-08 2021-03-08 Vehicle identification method, device and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110252292.8A CN112926476A (en) 2021-03-08 2021-03-08 Vehicle identification method, device and storage medium

Publications (1)

Publication Number Publication Date
CN112926476A true CN112926476A (en) 2021-06-08

Family

ID=76171983

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110252292.8A Pending CN112926476A (en) 2021-03-08 2021-03-08 Vehicle identification method, device and storage medium

Country Status (1)

Country Link
CN (1) CN112926476A (en)

Patent Citations (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108528433A (en) * 2017-03-02 2018-09-14 比亚迪股份有限公司 Vehicle travels autocontrol method and device
CN110168559A (en) * 2017-12-11 2019-08-23 北京嘀嘀无限科技发展有限公司 For identification with positioning vehicle periphery object system and method
CN108357418A (en) * 2018-01-26 2018-08-03 河北科技大学 A kind of front truck driving intention analysis method based on taillight identification
CN110471058A (en) * 2018-05-09 2019-11-19 福特全球技术公司 The system and method detected automatically for trailer attribute
CN108803604A (en) * 2018-06-06 2018-11-13 深圳市易成自动驾驶技术有限公司 Vehicular automatic driving method, apparatus and computer readable storage medium
CN110928286A (en) * 2018-09-19 2020-03-27 百度在线网络技术(北京)有限公司 Method, apparatus, medium, and system for controlling automatic driving of vehicle
WO2020116195A1 (en) * 2018-12-07 2020-06-11 ソニーセミコンダクタソリューションズ株式会社 Information processing device, information processing method, program, mobile body control device, and mobile body
CN109747638A (en) * 2018-12-25 2019-05-14 东软睿驰汽车技术(沈阳)有限公司 A kind of vehicle driving intension recognizing method and device
CN109795406A (en) * 2019-03-07 2019-05-24 董博 A kind of vehicle running state alarming device based on embedded system control
CN112149476A (en) * 2019-06-28 2020-12-29 北京海益同展信息科技有限公司 Target detection method, device, equipment and storage medium
CN110920541A (en) * 2019-11-25 2020-03-27 的卢技术有限公司 Method and system for realizing automatic control of vehicle based on vision
CN111859618A (en) * 2020-06-16 2020-10-30 长安大学 Multi-end in-loop virtual-real combined traffic comprehensive scene simulation test system and method
CN111815959A (en) * 2020-06-19 2020-10-23 浙江大华技术股份有限公司 Vehicle violation detection method and device and computer readable storage medium
CN111899545A (en) * 2020-07-29 2020-11-06 Tcl通讯(宁波)有限公司 Driving reminding method and device, storage medium and mobile terminal
CN111931715A (en) * 2020-09-22 2020-11-13 深圳佑驾创新科技有限公司 Method and device for recognizing state of vehicle lamp, computer equipment and storage medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
HU Yuanzhi; LIU Junsheng; HE Jia; XIAO Hang; SONG Jia: "Vehicle target detection method based on fusion of laser radar point cloud and image", Journal of Automotive Safety and Energy, no. 04, pages 451-458 *

Similar Documents

Publication Publication Date Title
US20170144585A1 (en) Vehicle exterior environment recognition apparatus
US20170144587A1 (en) Vehicle exterior environment recognition apparatus
CN108875458B (en) Method and device for detecting turning-on of high beam of vehicle, electronic equipment and camera
Souani et al. Efficient algorithm for automatic road sign recognition and its hardware implementation
CN107667378B (en) Method and device for detecting and evaluating road surface reflections
JP5879219B2 (en) In-vehicle environment recognition system
JP6034923B1 (en) Outside environment recognition device
JP6420650B2 (en) Outside environment recognition device
CN113246846B (en) Vehicle light control method and device and vehicle
US20220410931A1 (en) Situational awareness in a vehicle
CN111339996A (en) Method, device and equipment for detecting static obstacle and storage medium
KR20080004833A (en) Apparatus and method for detecting a navigation vehicle in day and night according to luminous state
Ewecker et al. Provident vehicle detection at night for advanced driver assistance systems
CN111976585A (en) Projection information recognition device and method based on artificial neural network
US11676403B2 (en) Combining visible light camera and thermal camera information
JP7226368B2 (en) Object state identification device
Dai et al. A driving assistance system with vision based vehicle detection techniques
CN112926476A (en) Vehicle identification method, device and storage medium
JP6151569B2 (en) Ambient environment judgment device
WO2022244356A1 (en) Light interference detection during vehicle navigation
CN115761668A (en) Camera stain recognition method and device, vehicle and storage medium
US20190039515A1 (en) System and method for warning against vehicular collisions when driving
CN115465182A (en) Automatic high beam and low beam switching method and system based on night target detection
KR20180010363A (en) Method and device for monitoring forward vehicle
US20230394844A1 (en) System for avoiding accidents caused by wild animals crossing at dusk and at night

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination