CN117452411A - Obstacle detection method and device - Google Patents

Obstacle detection method and device

Info

Publication number
CN117452411A
CN117452411A
Authority
CN
China
Prior art keywords
obstacle
image
environment
detection
fusion
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311439097.1A
Other languages
Chinese (zh)
Inventor
汪涌
魏慧锋
潘家春
陈於东
周韦
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chery Automobile Co Ltd
Original Assignee
Chery Automobile Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chery Automobile Co Ltd filed Critical Chery Automobile Co Ltd
Priority to CN202311439097.1A priority Critical patent/CN117452411A/en
Publication of CN117452411A publication Critical patent/CN117452411A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S13/00Systems using the reflection or reradiation of radio waves, e.g. radar systems; Analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
    • G01S13/88Radar or analogous systems specially adapted for specific applications
    • G01S13/93Radar or analogous systems specially adapted for specific applications for anti-collision purposes
    • G01S13/931Radar or analogous systems specially adapted for specific applications for anti-collision purposes of land vehicles
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S13/00Systems using the reflection or reradiation of radio waves, e.g. radar systems; Analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
    • G01S13/86Combinations of radar systems with non-radar systems, e.g. sonar, direction finder
    • G01S13/867Combination of radar systems with cameras
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S13/00Systems using the reflection or reradiation of radio waves, e.g. radar systems; Analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
    • G01S13/88Radar or analogous systems specially adapted for specific applications
    • G01S13/89Radar or analogous systems specially adapted for specific applications for mapping or imaging

Abstract

The application discloses an obstacle detection method and device, belonging to the technical field of vehicles. The method comprises the following steps: acquiring an environment image collected by a vision sensor on a vehicle and radar data collected by a millimeter wave radar on the vehicle while the vehicle is running; performing feature fusion on the environment image and the radar data to obtain an environment fusion image; determining an obstacle in the environment fusion image; tracking the obstacle based on the vision sensor and the millimeter wave radar to obtain the motion state of the obstacle; determining a motion trajectory of the obstacle based on the motion state of the obstacle; and generating a detection image of the obstacle based on the environment fusion image, the motion trajectory of the obstacle and the motion state of the obstacle. The method and device improve the accuracy and robustness of obstacle detection.

Description

Obstacle detection method and device
Technical Field
The present disclosure relates to the field of vehicle technologies, and in particular, to a method and an apparatus for detecting an obstacle.
Background
With the development of intelligent driving, automatic driving technology is increasingly applied in the vehicle field. In automatic driving, a vehicle obtains information about its surrounding environment through on-board sensors, determines from that information which obstacles need to be avoided, and plans a reasonable route around them.
In a vehicle's automatic driving scene, obstacles mainly include pedestrians, cars, trucks, bicycles and motorcycles. Obstacles within the visual range differ in scale and aspect ratio, may occlude one another to varying degrees, and may appear blurred in extreme weather such as heavy rain, heavy snow or dense fog, so the detection performance of a vision sensor collecting environmental information drops sharply.
As can be seen, in the related art, when a vehicle travels in a complex environment, obstacle detection based on the environmental information acquired by the sensor suffers from low accuracy.
Disclosure of Invention
The application provides a method and a device for detecting an obstacle, which can improve the accuracy of obstacle detection. The technical scheme is as follows.
In a first aspect, there is provided a method of detecting an obstacle, the method comprising:
acquiring an environment image acquired by a vision sensor on the vehicle and radar data acquired by a millimeter wave radar on the vehicle in the running process of the vehicle;
performing feature fusion on the environment image and the radar data to obtain an environment fusion image;
determining an obstacle in the environment fusion image according to the environment fusion image;
tracking the obstacle based on the vision sensor and the millimeter wave radar to obtain the movement state of the obstacle;
determining a motion trajectory of the obstacle based on a motion state of the obstacle;
and generating a detection image of the obstacle based on the environment fusion image, the movement track of the obstacle and the movement state of the obstacle.
Optionally, feature fusion is performed on the environment image and the radar data to obtain an environment fusion image, which includes:
extracting the characteristics of the environment image to obtain a characteristic image comprising a three-dimensional detection frame;
extracting point cloud data in the radar data;
and projecting the point cloud data into the characteristic image to obtain the environment fusion image.
Optionally, determining, according to the environment fusion image, an obstacle in the environment fusion image includes:
and performing example segmentation on the environment fusion image to obtain an obstacle in the environment fusion image.
Optionally, tracking the obstacle to obtain a motion state of the obstacle includes:
when an environment fusion image of an N-th frame is obtained, performing obstacle matching on the environment fusion image of the N-th frame and the environment fusion image of an N-1 th frame, and determining a position corresponding relation of the obstacle, wherein the position corresponding relation is a position mapping relation of the obstacle between the environment fusion images of different frames, N is more than or equal to 2, and N is an integer;
determining a motion estimation result of the obstacle in the environment fusion image of the N frame according to the position corresponding relation of the obstacle, wherein the motion estimation result comprises the motion direction and the motion speed of the obstacle;
and determining the movement state of the obstacle according to the movement direction and the movement speed of the obstacle in the environment fusion image of the N frame.
Optionally, determining the motion trajectory of the obstacle based on the motion state of the obstacle includes:
and when the motion state of the obstacle is dynamic, determining the motion trail of the obstacle based on the motion direction and the motion speed of the obstacle in the environment fusion image of the N frame.
Optionally, generating the detection image of the obstacle based on the environment fusion image, the motion track of the obstacle, and the motion state of the obstacle includes:
retaining a three-dimensional detection frame of the obstacle in the environment fusion image to obtain an intermediate detection image of the obstacle;
determining the category of the obstacle according to the three-dimensional detection frame of the obstacle, the movement track of the obstacle and the movement state of the obstacle in the environment fusion image;
and generating a detection image of the obstacle based on the class of the obstacle and the intermediate detection image.
Optionally, generating a detection image of the obstacle based on the class of the obstacle and the intermediate detection image includes:
determining display information of point cloud data in the intermediate detection image based on the category of the obstacle, wherein the display information comprises at least one of color, size and shape of point cloud corresponding to the point cloud data;
and displaying the point cloud data in the intermediate detection image according to the display information to obtain a detection image of the obstacle.
In a second aspect, there is provided an obstacle detecting apparatus comprising:
the acquisition module is used for acquiring an environment image acquired by a vision sensor on the vehicle and radar data acquired by a millimeter wave radar on the vehicle in the running process of the vehicle;
the fusion module is used for carrying out feature fusion on the environment image and the radar data to obtain an environment fusion image;
the object determining module is used for determining an obstacle in the environment fusion image according to the environment fusion image;
the tracking module is used for tracking the obstacle based on the vision sensor and the millimeter wave radar to obtain the motion state of the obstacle;
the track determining module is used for determining the movement track of the obstacle based on the movement state of the obstacle;
and the generation module is used for generating a detection image of the obstacle based on the environment fusion image, the movement track of the obstacle and the movement state of the obstacle.
Optionally, the fusion module includes:
the first extraction submodule is used for extracting the characteristics of the environment image to obtain a characteristic image comprising a three-dimensional detection frame;
the second extraction submodule is used for extracting point cloud data in the radar data;
and the projection sub-module is used for projecting the point cloud data into the characteristic image to obtain an environment fusion image.
Optionally, the target determining module includes: and the segmentation sub-module is used for carrying out example segmentation on the environment fusion image to obtain an obstacle in the environment fusion image.
Optionally, the tracking module includes:
the matching sub-module is used for matching the environmental fusion image of the N frame with the environmental fusion image of the N-1 frame when the environmental fusion image of the N frame is acquired, and determining the position corresponding relation of the obstacle; the position corresponding relation is the position mapping relation of the obstacle between the environment fusion images of different frames, N is more than or equal to 2, and N is an integer;
the estimation sub-module is used for determining a motion estimation result of the obstacle in the environment fusion image of the N frame according to the position corresponding relation of the obstacle, wherein the motion estimation result comprises the motion direction and the motion speed of the obstacle;
and the state determining submodule is used for determining the motion state of the obstacle according to the motion direction and the motion speed of the obstacle in the environment fusion image of the N frame.
Optionally, the track determining module includes: and the track determination submodule is used for determining the movement track of the obstacle based on the movement direction and the movement speed of the obstacle in the environment fusion image of the N frame when the movement state of the obstacle is dynamic.
Optionally, the generating module includes:
the retaining sub-module is used for retaining the three-dimensional detection frame of the obstacle in the environment fusion image to obtain an intermediate detection image of the obstacle;
the category determining submodule is used for determining the category of the obstacle according to the three-dimensional detection frame of the obstacle, the motion track of the obstacle and the motion state of the obstacle in the environment fusion image;
and the detection sub-module is used for generating a detection image of the obstacle based on the category of the obstacle and the intermediate detection image.
Optionally, the detection sub-module includes:
the type determining unit is used for determining display information of point cloud data in the intermediate detection image based on the category of the obstacle; the display information comprises at least one of the color, size and shape of the point cloud;
and the generating unit is used for displaying the point cloud data in the intermediate detection image according to the display information to obtain a detection image of the obstacle.
In a third aspect, there is provided an obstacle detection device comprising a memory and a processor, the memory storing at least one computer program loaded and executed by the processor to implement the obstacle detection method provided in the first aspect or any of the alternative implementations of the first aspect.
In a fourth aspect, there is provided a computer readable storage medium having stored therein at least one computer program loaded and executed by a processor to implement the obstacle detection method provided in the first aspect or any of the alternative implementations thereof.
In a fifth aspect, there is provided a computer program product comprising computer programs/instructions which when executed by a processor implement the obstacle detection method provided by the first aspect or any of the alternative implementations of the first aspect.
According to the obstacle detection method and device of the present application, while the vehicle is running, the vision sensor and the millimeter wave radar collect the environment image and the radar data respectively, and the two are feature-fused into an environment fusion image. This fuses data of two different modalities, combining the respective strengths of the vision sensor and the millimeter wave radar for obstacle detection in complex environments and describing obstacle feature information more comprehensively. After the environment fusion image is obtained, the obstacle is continuously tracked to obtain its motion state, and its motion trajectory is predicted based on that motion state, so both the motion state and the motion trajectory of the obstacle are tracked and the motion of obstacles in the current driving environment can be determined. After a detection image of the obstacle is generated based on the environment fusion image, the motion trajectory of the obstacle and the motion state of the obstacle, the detection image is displayed on a display device in the vehicle so that the driver can avoid the obstacle based on it. The obstacle detection method is applicable to various complex environments and improves the accuracy and robustness of obstacle detection.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the application.
Drawings
To illustrate the technical solutions of the embodiments of the present application more clearly, the drawings needed in the description of the embodiments are briefly introduced below. The drawings in the following description are only some embodiments of the present application; other drawings may be obtained from them by a person skilled in the art without inventive effort.
Fig. 1 is a flowchart of an obstacle detection method provided in an embodiment of the present application;
FIG. 2 is a schematic illustration of an early fusion provided by an embodiment of the present application;
FIG. 3 is a schematic diagram of a feature level fusion provided by an embodiment of the present application;
FIG. 4 is a schematic diagram of a late fusion provided in an embodiment of the present application;
FIG. 5 is a flowchart of another obstacle detection method provided by an embodiment of the present application;
FIG. 6 is a schematic diagram of a point cloud data projection according to an embodiment of the present application;
FIG. 7 is a flowchart of yet another obstacle detection method provided by an embodiment of the present application;
FIG. 8 is a schematic illustration of an obstacle location provided in an embodiment of the present application;
FIG. 9 is a schematic illustration of another obstacle location provided by an embodiment of the present application;
FIG. 10 is a flowchart of yet another obstacle detection method provided by an embodiment of the present application;
FIG. 11 is a schematic diagram of a detection image according to an embodiment of the present application;
fig. 12 is a schematic view of an obstacle detecting apparatus according to an embodiment of the present disclosure;
fig. 13 is a schematic view of another obstacle detecting apparatus according to an embodiment of the present disclosure.
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the application and together with the description, serve to explain the principles of the application.
Detailed Description
For the purposes of making the objects, technical solutions and advantages of the present application more apparent, the present application will be described in further detail below with reference to the accompanying drawings, wherein it is apparent that the described embodiments are only some, but not all, of the embodiments of the present application. All other embodiments, which can be made by one of ordinary skill in the art without undue burden from the present disclosure, are within the scope of the present disclosure.
With the development of intelligent driving, target detection from the field of computer vision is increasingly applied to automatic driving, where target detection usually means obstacle detection. Because vision sensors offer high resolution, rich color information, object surface detail and texture, they are widely used for obstacle detection in automatic driving. However, under low illumination, occlusion of objects or extreme weather, the vision sensor's ability to detect obstacles is greatly weakened, so the resulting obstacle detection is less accurate.
Compared with vision sensors, millimeter wave radar is more reliable and stable under severe environmental conditions (e.g., low light and extreme weather such as rain or fog). After the millimeter wave radar transmits millimeter waves, it receives the signal reflected by an object and determines the distance from the obstacle to the vehicle based on the time difference between transmitting the signal and receiving the reflection. In addition, the Doppler shift of the reflected signal can be used to measure the velocity of the obstacle. Despite these advantages, millimeter wave radar cannot provide contour information of obstacles, and it has difficulty distinguishing the classes of relatively stationary obstacles.
Based on the above, the obstacle detection method and device provided by the embodiments of the present application acquire an environment image collected by a vision sensor on a vehicle and radar data collected by a millimeter wave radar on the vehicle while the vehicle is running; perform feature fusion on the environment image and the radar data to obtain an environment fusion image; determine an obstacle in the environment fusion image; track the obstacle based on the vision sensor and the millimeter wave radar to obtain the motion state of the obstacle; determine the motion trajectory of the obstacle based on its motion state; and generate a detection image of the obstacle based on the environment fusion image, the motion trajectory of the obstacle and the motion state of the obstacle. Fusing the environment image and the radar data into an environment fusion image fuses data of two different modalities, combining the strengths of the vision sensor and the millimeter wave radar for obstacle detection in complex environments and describing obstacle feature information more comprehensively. After the environment fusion image is obtained, the obstacle is continuously tracked to obtain its motion state, and its motion trajectory is predicted based on that state, so both the motion state and the motion trajectory of the obstacle are tracked and the motion of obstacles in the current driving environment can be determined. After the detection image of the obstacle is generated, it is displayed on a display device in the vehicle so that the driver can avoid the obstacle based on it. The obstacle detection method is applicable to various complex environments and improves the accuracy and robustness of obstacle detection.
Referring to fig. 1, a flowchart of an obstacle detection method according to an embodiment of the present application is shown. The method may be performed by an obstacle detection device deployed in the vehicle, such as an on-board computer, a main control unit (Main Control Unit, MCU), or a functional module integrated on the system motherboard. Referring to fig. 1, the method includes the following steps S101 to S106.
S101, acquiring an environment image acquired by a vision sensor on the vehicle and radar data acquired by a millimeter wave radar on the vehicle in the running process of the vehicle.
In embodiments of the present application, the vision sensor may be a camera, video camera, digital video camera, or the like. The environment image collected by the vision sensor may be a three-dimensional or two-dimensional image, depending on the capability of the vision sensor. The environment image may contain images of the various objects around the vehicle while it is running, including information such as color, texture and shape.
A millimeter wave radar generally refers to a radar that emits millimeter wave signals in the 30-300 GHz band (wavelengths of roughly 1-10 mm). Among millimeter wave radars, 24 GHz and 77 GHz radar sensors are mainly used for automotive collision avoidance. The radar data collected by the millimeter wave radar may include distance data, speed data, direction data, target reflectivity (or intensity information), environmental characteristic data, multi-target data, and the like. The distance data is the distance between an obstacle and the millimeter wave radar, calculated by measuring the round-trip time of the millimeter wave signal. The speed data is the speed of the obstacle relative to the radar, calculated by analyzing the Doppler shift of the echo signal. The direction data is the direction of the obstacle relative to the radar, in both the horizontal and vertical directions, and is useful for localizing obstacles and describing their spatial distribution. The target reflectivity (or signal strength information) reflects how strongly the obstacle absorbs, reflects or scatters millimeter wave signals, and is usually given as an intensity value from which the material, size and shape of the obstacle can be inferred. The environmental characteristic data is point cloud data generated when the millimeter wave radar detects obstacles, buildings, roads and the like in the surrounding environment; this three-dimensional data supports the vehicle's environment perception and obstacle avoidance. The multi-target data arise because the millimeter wave radar can detect several targets at once, and may therefore include the position, speed, size and other data of multiple obstacles.
S102, performing feature fusion on the environment image and the radar data to obtain an environment fusion image.
Optionally, the obstacle detection device performs direct or indirect feature fusion on the environment image and the radar data to obtain the environment fusion image, in any of the following three implementations.
The first implementation: early fusion. As shown in fig. 2, early fusion directly fuses the environment image and the radar data and then processes the fused features to obtain the environment fusion image. During feature fusion, the environment image and the radar data can be combined by weighted addition. The relationship between the environment fusion image, the environment image and the radar data can be expressed by the following data fusion formula:
Fused_Data=Weight_C*Camera_Data+Weight_R*Radar_Data
where Weight_C is the weight assigned to the environment image, Weight_R is the weight assigned to the radar data, Camera_Data is the environment image, Radar_Data is the radar data, and Fused_Data is the resulting environment fusion image.
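The weighted-addition step can be written out in a few lines of Python. This is a minimal sketch, assuming the radar data has already been rasterized to the same grid as the camera image; the weight values are illustrative, not taken from the application.

```python
# Minimal early-fusion sketch: Fused_Data = Weight_C * Camera_Data + Weight_R * Radar_Data
import numpy as np

def early_fusion(camera_data: np.ndarray, radar_data: np.ndarray,
                 weight_c: float = 0.6, weight_r: float = 0.4) -> np.ndarray:
    """Weighted addition of two inputs that share the same H x W grid."""
    assert camera_data.shape == radar_data.shape, "inputs must share one grid"
    return weight_c * camera_data + weight_r * radar_data

# usage: a normalized camera image and a radar map rasterized to the same size
camera = np.random.rand(480, 640, 3).astype(np.float32)
radar = np.random.rand(480, 640, 3).astype(np.float32)
fused = early_fusion(camera, radar)
```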
The second implementation: feature-level fusion. As shown in fig. 3, feature-level fusion first extracts features from the environment image and the radar data and normalizes the extracted image features and radar features. The normalized image features and radar features are then fused by concatenation or weighted summation, and the fused features are processed to obtain the environment fusion image. The relationship between the feature-level fused data can be expressed by the following target detection formula:
Fused_Feature=Weight_C*Visual_Feature+Weight_R*Radar_Feature
where Weight_C is the weight assigned to the features extracted from the environment image, Weight_R is the weight assigned to the features extracted from the radar data, Visual_Feature is the feature extracted from the environment image, Radar_Feature is the feature extracted from the radar data, and Fused_Feature is the fused feature used to produce the environment fusion image.
The third implementation: late fusion. As shown in fig. 4, in late fusion the features of the environment image and of the radar data are extracted and processed separately, and feature fusion is performed at the output layer to obtain the environment fusion image.
Optionally, before performing feature fusion on the environment image and the radar data, the obstacle detection device may build a multi-modal fusion network, for example a FusionNet. A FusionNet generally comprises a shared layer, branch layers and a fusion layer, and can fuse two data sources of different modalities end to end. In the FusionNet, the fusion layer may adopt a multi-modal convolutional neural network (Multimodal Convolutional Neural Network, m-CNN), in which the data collected by each sensor first passes through its own stack of convolution, pooling and fully connected operations, and the features extracted from each sensor's data are then fused into the final features. When training the multi-modal fusion network, multiple groups of environment images and radar data from historical data are fed into the initial network, a training result is output, the loss between the training result and the ground truth is computed, and the network is adjusted until the trained network converges, yielding the multi-modal fusion network. At inference time, the collected environment image and radar data are input into the trained multi-modal fusion network, which outputs the environment fusion image.
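The shared/branch/fusion layout described above can be sketched in PyTorch as follows. The channel counts, kernel sizes, number of classes and training hyperparameters are placeholders chosen for illustration, not values specified in the application.

```python
# Sketch of a two-branch multi-modal fusion network in the spirit of an m-CNN:
# one branch per modality, concatenation in a fusion layer, then a classifier head.
import torch
import torch.nn as nn

class FusionNetSketch(nn.Module):
    def __init__(self, num_classes: int = 6):
        super().__init__()
        # branch layers: one small CNN per modality (camera image, radar map)
        self.camera_branch = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2))
        self.radar_branch = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2))
        # fusion layer: concatenate per-modality features, then classify
        self.fusion = nn.Sequential(
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, num_classes))

    def forward(self, image: torch.Tensor, radar: torch.Tensor) -> torch.Tensor:
        fused = torch.cat([self.camera_branch(image), self.radar_branch(radar)], dim=1)
        return self.fusion(fused)

# training-step sketch: minimize the loss between the prediction and the ground truth
model = FusionNetSketch()
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
image, radar = torch.rand(2, 3, 128, 128), torch.rand(2, 1, 128, 128)
labels = torch.randint(0, 6, (2,))
optimizer.zero_grad()
loss = loss_fn(model(image, radar), labels)
loss.backward()
optimizer.step()
```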
In an alternative embodiment, as shown in fig. 5, step S102 includes the following substeps S1021 through S1023.
And S1021, extracting the characteristics of the environment image to obtain a characteristic image comprising a three-dimensional detection frame.
In one embodiment, the obstacle detection device may perform feature extraction on the environment image using a classification network such as R-CNN or Fast R-CNN, which is not limited in this application. The feature image comprising a three-dimensional detection frame is an image in which a plurality of classified objects are framed.
Further, when extracting features from the environment image, the obstacle detection device may extract a plurality of image features from the input environment image through a pretrained convolutional neural network (such as VGG16 or ResNet). These image features are used by a region proposal network (Region Proposal Network, RPN) to generate candidate regions and by the subsequent target classification stage to classify obstacles. The RPN slides over the extracted feature map through convolutional layers, generates a plurality of candidate regions (i.e., candidate bounding boxes, often called "anchor boxes") and scores each of them. Candidate region screening then applies non-maximum suppression (Non-Maximum Suppression, NMS) to the RPN's candidate regions to eliminate overlapping ones and obtain the final candidate regions. Target classification sends each candidate region into a classification network (usually a Fast R-CNN head) for classification and bounding box regression, producing the final obstacle detection result, which includes the detected obstacle bounding boxes (Bounding Boxes).
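The non-maximum suppression step used to prune overlapping candidate regions can be sketched as follows; the IoU threshold of 0.5 is an assumed, commonly used value rather than one given in the application.

```python
# Greedy NMS sketch: keep the highest-scoring box, drop candidates that overlap it too much.
import numpy as np

def iou(a: np.ndarray, b: np.ndarray) -> float:
    """Intersection-over-union of two boxes given as [x1, y1, x2, y2]."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def nms(boxes: np.ndarray, scores: np.ndarray, iou_thresh: float = 0.5) -> list:
    """Return the indices of the boxes kept after suppression."""
    order = scores.argsort()[::-1]
    keep = []
    while order.size > 0:
        best = order[0]
        keep.append(int(best))
        rest = order[1:]
        order = np.array([i for i in rest if iou(boxes[best], boxes[i]) < iou_thresh])
    return keep

# usage: the second box heavily overlaps the first and is suppressed
boxes = np.array([[10, 10, 50, 50], [12, 12, 52, 52], [100, 100, 140, 140]], dtype=float)
scores = np.array([0.9, 0.8, 0.7])
print(nms(boxes, scores))   # -> [0, 2]
```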
And S1022, extracting point cloud data in the radar data.
Optionally, the obstacle detection device determines the three-dimensional coordinates of each point corresponding to the obstacle from the distance data and direction data in the radar data and the coordinates of the millimeter wave radar mounted on the vehicle; the point cloud data is the set of these points with three-dimensional coordinates. Since the distance data in the radar data is the distance between the millimeter wave radar and each position reached by a transmitted millimeter wave signal, the three-dimensional coordinates of each arrival position can be determined from its distance and direction relative to the radar, and each such coordinate is the three-dimensional coordinate of one point of the point cloud. Optionally, the radar data further includes intensity data for each point, in which case the point cloud data also carries the corresponding intensity. Optionally, the coordinates of the millimeter wave radar mounted on the vehicle are given in the world coordinate system.
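The conversion from per-detection distance and direction data to three-dimensional point coordinates can be sketched as below. The azimuth/elevation angle conventions and the radar mounting offset are assumptions, since the application does not fix them.

```python
# Sketch: range + azimuth + elevation (radar polar measurements) -> (N, 3) point cloud.
import numpy as np

def radar_to_points(ranges, azimuths, elevations, radar_origin=(0.0, 0.0, 0.5)):
    """ranges in meters, azimuth/elevation in radians; radar_origin is an assumed mounting offset."""
    r = np.asarray(ranges, dtype=float)
    az = np.asarray(azimuths, dtype=float)
    el = np.asarray(elevations, dtype=float)
    x = r * np.cos(el) * np.cos(az)   # forward
    y = r * np.cos(el) * np.sin(az)   # lateral
    z = r * np.sin(el)                # vertical
    return np.stack([x, y, z], axis=1) + np.asarray(radar_origin)

# usage: two detections of the same obstacle a few centimeters apart
points = radar_to_points([12.3, 12.5], [0.05, 0.07], [0.0, 0.01])
```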
And S1023, projecting the point cloud data into the characteristic image to obtain the environment fusion image.
For example, since the feature image contains the three-dimensional detection frame of the obstacle obtained by image segmentation, the obstacle detection device generates a truncated cone (frustum) from the three-dimensional detection frame, such that the detection frame lies inside the frustum and its four outer vertices lie on the side planes of the frustum. As shown in fig. 6, after aligning the three-dimensional coordinate system of the point cloud with that of the three-dimensional detection frame, each point of the point cloud is projected into the three-dimensional detection frame, and the points falling outside the frustum are filtered out to obtain the environment fusion image, which contains the three-dimensional detection frame of the obstacle and the obstacle's point cloud data. For example, the obstacle detection device uses a frustum association (Frustum Association) structure to project the point cloud data into the feature image and obtain the environment fusion image.
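A simplified sketch of the projection step is given below: radar points are transformed into the camera frame with a calibration matrix, projected to pixel coordinates, and only points falling inside a detected box region are kept (a 2D stand-in for the frustum filtering described above). The calibration matrices here are placeholders; in practice they come from sensor calibration.

```python
# Sketch: project radar points into the image and keep the ones inside a detection box.
import numpy as np

def project_points(points_radar: np.ndarray, extrinsic: np.ndarray,
                   intrinsic: np.ndarray) -> np.ndarray:
    """points_radar: (N, 3) -> pixel coordinates (N, 2)."""
    homo = np.hstack([points_radar, np.ones((points_radar.shape[0], 1))])
    cam = (extrinsic @ homo.T).T          # radar frame -> camera frame (3D)
    pix = (intrinsic @ cam.T).T
    return pix[:, :2] / pix[:, 2:3]       # perspective division

def points_in_box(pixels: np.ndarray, box) -> np.ndarray:
    """Indices of projected points inside box = (x1, y1, x2, y2)."""
    x1, y1, x2, y2 = box
    mask = (pixels[:, 0] >= x1) & (pixels[:, 0] <= x2) & \
           (pixels[:, 1] >= y1) & (pixels[:, 1] <= y2)
    return np.where(mask)[0]

extrinsic = np.eye(4)[:3]   # assumed 3x4 radar-to-camera transform (identity placeholder)
intrinsic = np.array([[800., 0., 320.], [0., 800., 240.], [0., 0., 1.]])

pixels = project_points(np.array([[0.5, -0.2, 10.0]]), extrinsic, intrinsic)  # ~(360, 224)
inside = points_in_box(pixels, (300, 200, 420, 300))                          # -> [0]
```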
In the embodiments of the present application, the obstacle detection device extracts features from the environment image to obtain a feature image containing a three-dimensional detection frame, extracts the point cloud data from the radar data, and projects the point cloud data into the feature image to obtain the environment fusion image. This fuses data of two different modalities, combining the vision sensor's ability to capture the contours of objects with the millimeter wave radar's accurate ranging in complex environments, so that the color, texture, distance and position information of the obstacle can all be provided and the obstacle's characteristics are described more comprehensively.
S103, determining an obstacle in the environment fusion image according to the environment fusion image.
In an alternative example, the environment fusion image is segmented using an image segmentation algorithm, and each region or set of pixels with semantic information in the segmented image is determined to be an obstacle in the environment fusion image.
Optionally, the obstacle detection device performs instance segmentation on the environment fusion image to obtain the obstacles in it, for example using an instance segmentation network such as Mask R-CNN. An instance segmentation network is an image processing network that identifies the different object instances in an image, assigns each instance a unique identifier, and assigns each pixel to its corresponding instance. This makes it possible to detect and distinguish multiple object instances in the image while giving them pixel-level labels, and thus to determine and mark the obstacles in the environment fusion image.
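As a concrete illustration of this step, the sketch below runs an off-the-shelf Mask R-CNN from torchvision on a placeholder fused image and keeps the confident instances as obstacles. It assumes a recent torchvision with pretrained COCO weights available; the score threshold of 0.5 is illustrative, and the application does not mandate this particular network or library.

```python
# Instance-segmentation sketch with a pretrained Mask R-CNN.
import torch
import torchvision

model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

fusion_image = torch.rand(3, 480, 640)         # placeholder fused image tensor
with torch.no_grad():
    output = model([fusion_image])[0]          # dict with boxes / labels / scores / masks

keep = output["scores"] > 0.5                  # treat confident instances as obstacles
obstacle_boxes = output["boxes"][keep]
obstacle_masks = output["masks"][keep]         # per-instance pixel-level masks
```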
S104, tracking the obstacle based on the vision sensor and the millimeter wave radar to obtain the movement state of the obstacle.
When the obstacle detection device tracks an obstacle, the vision sensor and the millimeter wave radar must continuously collect information about the surrounding environment, and the collected environment images and radar data are feature-fused into environment fusion images, so a sequence of consecutive environment fusion image frames is obtained. Tracking an obstacle therefore means following the movement of the same obstacle across these consecutive frames to obtain its motion state. The association between the obstacle's detected position and the environment fusion image can be illustrated by the following obstacle detection formula:
Object_Position=Fusion_Function(Visual_Detection,Radar_Detection)
where Object_Position is the position of the obstacle; the formula expresses that the obstacle's localization feature Visual_Detection from the environment image and its localization feature Radar_Detection from the radar data are fused by the fusion function Fusion_Function.
Further, the obstacle detection device tracks an obstacle across the environment fusion images as follows: in each frame of the environment fusion image, the position and features of the obstacle are detected with an object detection algorithm; the obstacle detected in the current frame is matched against the obstacle tracked in the previous frame to determine the correspondence between them; and the motion information of the obstacle in the current frame, such as its speed and direction, is estimated from that correspondence.
In an alternative embodiment, as shown in fig. 7, step S104 includes the following sub-steps S1041 to S1043.
S1041, when the environment fusion image of the Nth frame is obtained, performing obstacle matching between the environment fusion image of the Nth frame and the environment fusion image of the (N-1)th frame, and determining the position correspondence of the obstacle. The position correspondence is the position mapping relation of the obstacle between the environment fusion images of different frames, where N is greater than or equal to 2 and N is an integer.
Specifically, when the environment fusion images of the Nth and (N-1)th frames have been acquired, the obstacle detection device finds the three-dimensional detection frames carrying the same semantic information, thereby matching the same obstacle across the two consecutive frames. Once the same obstacle has been matched across the two frames, its relative position in the Nth-frame and (N-1)th-frame environment fusion images, i.e., the position correspondence of the obstacle between the two frames, can be determined.
For example, when the 2nd-frame environment fusion image is acquired, the obstacle detection device matches each obstacle in the 2nd-frame environment fusion image against the 1st-frame environment fusion image and determines the position mapping relation of the obstacle in the 2nd frame relative to the 1st frame.
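The frame-to-frame matching can be sketched as a greedy association of each obstacle box in frame N with the frame N-1 box it overlaps most. The application does not prescribe a specific matching algorithm; IoU-based greedy matching and the 0.3 threshold are used here purely for illustration.

```python
# Sketch of the S1041 matching step using greedy IoU association between two frames.
def box_iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = ((a[2] - a[0]) * (a[3] - a[1]) +
             (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union > 0 else 0.0

def match_obstacles(boxes_prev, boxes_curr, min_iou=0.3):
    """Return {index in frame N: index in frame N-1} for matched obstacles."""
    correspondence, used = {}, set()
    for i, curr in enumerate(boxes_curr):
        best_j, best_iou = None, min_iou
        for j, prev in enumerate(boxes_prev):
            if j in used:
                continue
            overlap = box_iou(curr, prev)
            if overlap > best_iou:
                best_j, best_iou = j, overlap
        if best_j is not None:
            correspondence[i] = best_j
            used.add(best_j)
    return correspondence

# usage: one obstacle that shifted slightly between frames
print(match_obstacles([(100, 50, 160, 120)], [(104, 52, 164, 122)]))   # {0: 0}
```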
S1042, determining the motion estimation result of the obstacle in the environment fusion image of the N frame according to the position corresponding relation of the obstacle, wherein the motion estimation result comprises the motion direction and the motion speed of the obstacle.
In one example, after obtaining the position correspondence of the obstacle, the obstacle detection device overlays the two consecutive environment fusion images according to that correspondence, determines the distance the obstacle moved between its position in the (N-1)th-frame environment fusion image and its position in the Nth-frame environment fusion image, and determines the direction of that movement; the motion of the obstacle in the Nth frame is then estimated from this moving distance and moving direction.
For example, referring to fig. 8 to 9, the obstacle Q is at position a in the (N-1)th-frame environment fusion image (as shown in fig. 8) and at position b in the Nth-frame environment fusion image (as shown in fig. 9). Position b is due west of position a and the distance between them is 2 m (meters), so the moving direction of the obstacle can be determined to be due west, and its moving speed can be determined from the formula "speed = distance / time" using the moving distance between a and b and the time between the two frames. The time between the two frames can be determined from the capture interval between the (N-1)th-frame and Nth-frame environment images.
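The "speed = distance / time" estimate can be sketched as follows, assuming matched obstacle positions are available in a common ground coordinate frame; the coordinate convention and the example numbers are illustrative and not taken from the application.

```python
# Sketch of the S1042 motion estimate from two matched positions and the frame interval.
import math

def estimate_motion(pos_prev, pos_curr, frame_interval_s: float):
    """Positions are (x, y) in meters in a common ground frame; returns (heading_deg, speed)."""
    dx = pos_curr[0] - pos_prev[0]
    dy = pos_curr[1] - pos_prev[1]
    distance = math.hypot(dx, dy)
    heading = math.degrees(math.atan2(dy, dx))   # 0 deg = +x axis
    speed = distance / frame_interval_s
    return heading, speed

# e.g. an obstacle that moved 2 m in the -x direction between frames 0.5 s apart -> 4 m/s
heading, speed = estimate_motion((10.0, 3.0), (8.0, 3.0), 0.5)
```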
S1043, determining the motion state of the obstacle according to the motion direction and the motion speed of the obstacle in the environment fusion image of the N frame.
The motion state of the obstacle may be static or dynamic, and a dynamic state is any one of accelerating (i.e., an accelerating motion state), decelerating (i.e., a decelerating motion state) and uniform (i.e., a uniform-speed motion state).
For example, if the movement direction of the obstacle in the environment fusion image of the nth frame is northwest, and the movement speed is 1m/s (meter per second), the movement state of the obstacle can be determined to be dynamic.
Optionally, when the motion state of the obstacle is dynamic, the obstacle detection device determines the acceleration of the obstacle from its movement speeds in the two frames: if the acceleration is positive, the motion state is an accelerating motion state; if it is negative, a decelerating motion state; and if it is 0, a uniform-speed motion state.
In another example, if the movement direction of the obstacle in the environment fusion image of the nth frame is unchanged and the movement speed is 0m/s, it may be determined that the movement state of the obstacle is static.
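A minimal sketch of classifying the motion state from the speed in two consecutive frames is given below; the near-zero speed threshold used to call an obstacle static is an assumption.

```python
# Sketch of the S1043 motion-state decision based on speed and its change between frames.
def motion_state(speed_prev: float, speed_curr: float, frame_interval_s: float,
                 static_speed: float = 0.05) -> str:
    if speed_curr < static_speed:           # effectively not moving
        return "static"
    acceleration = (speed_curr - speed_prev) / frame_interval_s
    if acceleration > 0:
        return "accelerating"
    if acceleration < 0:
        return "decelerating"
    return "uniform"

print(motion_state(1.0, 1.0, 0.1))   # uniform
print(motion_state(1.0, 1.4, 0.1))   # accelerating
```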
Continuous tracking of the obstacle is thus achieved by detecting the obstacle in every environment fusion image frame over the consecutive frame sequence.
S105, determining the movement track of the obstacle based on the movement state of the obstacle.
In this embodiment of the present application, the motion trajectory of the obstacle changes when its motion state is dynamic. Accordingly, when the motion state of the obstacle is dynamic, the motion trajectory of the obstacle is determined from the obstacle's movement direction and movement speed in the environment fusion image of the Nth frame.
For example, if the obstacle detection device determines that the obstacle's movement direction is due north and its movement speed is 2 m/s in the environment fusion image of the current frame, it can predict, from that direction and speed, the position the obstacle will reach in the environment fusion image of the next frame, i.e., determine the motion trajectory of the obstacle.
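The prediction of the position reached in the next frame can be sketched with a constant-velocity extrapolation, as below; the application does not specify the motion model, so this is only one plausible choice, and the axis convention is assumed.

```python
# Sketch of S105: extrapolate the next position from heading and speed (constant velocity).
import math

def predict_next_position(pos, heading_deg: float, speed: float, frame_interval_s: float):
    """pos = (x, y) in meters; returns the predicted (x, y) one frame later."""
    rad = math.radians(heading_deg)
    return (pos[0] + speed * math.cos(rad) * frame_interval_s,
            pos[1] + speed * math.sin(rad) * frame_interval_s)

# e.g. an obstacle at (8, 3) heading along +y at 2 m/s with frames 0.1 s apart -> (8.0, 3.2)
next_pos = predict_next_position((8.0, 3.0), 90.0, 2.0, 0.1)
```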
S106, obtaining a detection image of the obstacle based on the environment fusion image, the movement track of the obstacle and the movement state of the obstacle.
In one possible implementation, the obstacle detection device marks the motion trajectory and motion state of the obstacle in a preset area outside the corresponding obstacle's three-dimensional detection frame in the environment fusion image, where the preset area includes, but is not limited to, the areas above, below, to the left of and to the right of the three-dimensional detection frame.
In another possible embodiment, as shown in fig. 10, step S106 includes the following substeps S1061 to S1063.
S1061, reserving a three-dimensional detection frame of the obstacle in the environment fusion image to obtain an intermediate detection image of the obstacle.
Optionally, the obstacle detection device generates the minimum bounding box of the obstacle's point cloud data in the environment fusion image, compares this minimum bounding box with the obstacle's three-dimensional detection frame to determine the smallest three-dimensional detection frame of the obstacle, and keeps that smallest frame as the new three-dimensional detection frame to obtain the intermediate detection image.
For example, the obstacle detection device connects the points of the obstacle's point cloud in the environment fusion image into a closed figure and determines the minimum bounding box of that figure from its length and width. It then compares this minimum bounding box with the obstacle's three-dimensional detection frame, selects whichever has the smaller area as the minimum three-dimensional detection frame, and keeps that frame to obtain the intermediate detection image.
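A 2D simplification of this comparison is sketched below: the axis-aligned bounding box of the obstacle's projected points is computed, and whichever of that box and the detector's box has the smaller area is kept. The 2D reduction is an assumption made to keep the example short.

```python
# Sketch of S1061: point-cloud bounding box vs. detector box, keep the smaller one.
import numpy as np

def min_detection_box(points_2d: np.ndarray, detection_box) -> tuple:
    """points_2d: (N, 2) pixels; detection_box: (x1, y1, x2, y2)."""
    x1, y1 = points_2d.min(axis=0)
    x2, y2 = points_2d.max(axis=0)
    cloud_box = (float(x1), float(y1), float(x2), float(y2))
    area = lambda b: max(0.0, b[2] - b[0]) * max(0.0, b[3] - b[1])
    return cloud_box if area(cloud_box) < area(detection_box) else tuple(detection_box)

# usage: here the point-cloud box is tighter than the detector box and is kept
pts = np.array([[120., 60.], [150., 90.], [135., 100.]])
print(min_detection_box(pts, (100., 50., 180., 120.)))
```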
S1062, determining the category of the obstacle according to the three-dimensional detection frame of the obstacle, the motion track of the obstacle and the motion state of the obstacle in the environment fusion image.
For example, the obstacle detection device determines the area of the obstacle enclosed by the three-dimensional detection frame in the environment fusion image, i.e., the size of the obstacle, and at the same time scans the pixels of the obstacle image inside the detection frame to find the obstacle's boundary, i.e., its shape; the pixels may be scanned with an 8-neighborhood search. The category of the obstacle is then determined by combining its size, shape, motion trajectory and motion state. For example, the shape of the obstacle can first be compared with a preset table of obstacle shapes to make a preliminary judgment and obtain an initial category; if the initial category contains only one target category, it is taken as the obstacle's category. If the initial category corresponds to multiple target categories, the size of the obstacle is compared with a preset obstacle size table to obtain a second category; if the second category contains only one target category, it is taken as the obstacle's category. If the second category still corresponds to multiple target categories, a third judgment is made based on the obstacle's motion trajectory and motion state to determine its category. The categories of obstacles include vehicles, pedestrians, trucks, bicycles, motorcycles, road blocks, ground piles, road signs, etc., which are not limited here.
For example, after a preliminary judgment on the acquired obstacle shape, the obstacle detection device determines that the initial category includes bicycle, electric bicycle and motorcycle. It then further judges the category based on the obstacle's size and obtains electric bicycle and motorcycle as the second category. Finally, based on the obstacle's motion trajectory and motion state, it can determine that the obstacle is traveling in the motor vehicle lane and therefore that the obstacle is a motorcycle.
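The three-stage decision (shape, then size, then motion) can be sketched as a simple cascade. The shape keys, size table and thresholds below are invented placeholders, not values from the application.

```python
# Sketch of the S1062 category cascade: shape -> size -> trajectory/motion cues.
SHAPE_TO_CANDIDATES = {
    "two_wheeler": ["bicycle", "electric bicycle", "motorcycle"],
    "upright": ["pedestrian"],
    "box_large": ["truck"],
}
SIZE_RANGES_M2 = {   # rough footprint ranges, invented for illustration
    "bicycle": (0.5, 1.2),
    "electric bicycle": (1.0, 1.8),
    "motorcycle": (1.2, 2.5),
}

def classify_obstacle(shape_key: str, footprint_m2: float, in_motor_lane: bool) -> str:
    # stage 1: shape comparison narrows the candidates (initial category)
    candidates = SHAPE_TO_CANDIDATES.get(shape_key, ["unknown"])
    if len(candidates) == 1:
        return candidates[0]
    # stage 2: size comparison against the preset size table (second category)
    sized = [c for c in candidates
             if SIZE_RANGES_M2.get(c) and
             SIZE_RANGES_M2[c][0] <= footprint_m2 <= SIZE_RANGES_M2[c][1]]
    if len(sized) == 1:
        return sized[0]
    # stage 3: fall back to trajectory / motion-state cues
    if in_motor_lane and "motorcycle" in candidates:
        return "motorcycle"
    return sized[0] if sized else candidates[0]

print(classify_obstacle("two_wheeler", footprint_m2=1.5, in_motor_lane=True))   # motorcycle
```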
S1063, generating a detection image of the obstacle based on the type of the obstacle and the intermediate detection image.
In the embodiment of the present application, after obtaining the category of the obstacle, the obstacle detection device determines display information for the point cloud data in the environment fusion image based on that category, and displays the corresponding point cloud in the intermediate detection image according to the display information to obtain the detection image of the obstacle. The display information of the point cloud data includes at least one of the color, size and shape of the points.
For example, if the category of the obstacle is pedestrian, the points in the point cloud data corresponding to that obstacle in the intermediate detection image may be displayed in yellow with a circular shape. If the category of the obstacle is vehicle, the corresponding points may be displayed in red with a triangular shape. The points of the point cloud data are then displayed in the intermediate detection image according to this display information to obtain the detection image of the obstacle. The display information for each category of obstacle can be preset according to different requirements, for example pedestrian-yellow-circle, vehicle-red-triangle, motorcycle-green-five-pointed star, etc., which is not limited here.
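The category-to-display-information lookup and the rendering of the points can be sketched as follows; the color/marker table mirrors the examples above and is configurable rather than fixed by the application, and matplotlib is used only as a convenient stand-in for the in-vehicle display.

```python
# Sketch of S1063: per-category display information applied to the obstacle point clouds.
import numpy as np
import matplotlib.pyplot as plt

DISPLAY_INFO = {
    "pedestrian": {"color": "yellow", "marker": "o", "size": 20},   # circle
    "vehicle":    {"color": "red",    "marker": "^", "size": 30},   # triangle
    "motorcycle": {"color": "green",  "marker": "*", "size": 30},   # star
}

def draw_obstacle_points(ax, points_2d: np.ndarray, category: str) -> None:
    info = DISPLAY_INFO.get(category, {"color": "gray", "marker": ".", "size": 10})
    ax.scatter(points_2d[:, 0], points_2d[:, 1],
               c=info["color"], marker=info["marker"], s=info["size"])

fig, ax = plt.subplots()
draw_obstacle_points(ax, np.random.rand(30, 2) * 100, "pedestrian")
draw_obstacle_points(ax, np.random.rand(50, 2) * 100 + 150, "vehicle")
```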
Optionally, the detection image of the obstacle also shows the environment image, the point cloud of the radar data, and identification information for the category of each obstacle, and the areas of the road other than obstacles are defined as idle areas. As shown in fig. 11, the environment image and the radar data are located in the upper left corner of the detection image, and the upper right corner shows the identification information of the obstacle categories and of the idle areas. In the detection image of the obstacle, the point cloud data of pedestrians is displayed as circles and the point cloud data of vehicles as triangles.
In summary, in the obstacle detection method provided by the embodiments of the present application, while the vehicle is running, the vision sensor and the millimeter wave radar collect the environment image and the radar data respectively, and the two are feature-fused into an environment fusion image, so that data of two different modalities are fused; this combines the strengths of the vision sensor and the millimeter wave radar for obstacle detection in complex environments and describes obstacle feature information more comprehensively. After the environment fusion image is obtained, the obstacle is continuously tracked to obtain its motion state, and its motion trajectory is predicted from that state, so both the motion state and the motion trajectory of the obstacle are tracked and the motion of obstacles in the current driving environment can be determined. After the detection image of the obstacle is generated based on the environment fusion image, the motion trajectory of the obstacle and the motion state of the obstacle, it is displayed on a display device in the vehicle so that the driver can avoid the obstacle based on it. The obstacle detection method is applicable to various complex environments and improves the accuracy and robustness of obstacle detection.
The following are device embodiments of the present application, which may be used to perform method embodiments of the present application. For details not disclosed in the device embodiments of the present application, please refer to the method embodiments of the present application.
Referring to fig. 12, a block diagram of an obstacle detection device according to one embodiment of the present application is shown, where the obstacle detection device may be used to perform the obstacle detection method provided by the embodiments shown in fig. 1-3, and referring to fig. 6, the obstacle detection device may include, but is not limited to:
an acquisition module 601, configured to acquire an environmental image acquired by a vision sensor on a vehicle and radar data acquired by a millimeter wave radar on the vehicle during a running process of the vehicle;
the fusion module 602 is configured to perform feature fusion on the environmental image and the radar data to obtain an environmental fusion image;
a target determining module 603, configured to determine an obstacle in the environment fusion image according to the environment fusion image;
the tracking module 604 is configured to track an obstacle based on a vision sensor and a millimeter wave radar, so as to obtain a motion state of the obstacle;
a track determining module 605, configured to determine a motion track of the obstacle based on the motion state of the obstacle;
a generating module 606, configured to generate a detection image of the obstacle based on the environment fusion image, the motion trajectory of the obstacle, and the motion state of the obstacle.
Optionally, the fusion module includes:
the first extraction submodule is used for extracting the characteristics of the environment image to obtain a characteristic image comprising a three-dimensional detection frame;
the second extraction submodule is used for extracting point cloud data in the radar data;
and the projection sub-module is used for projecting the point cloud data into the characteristic image to obtain an environment fusion image.
Optionally, the target determining module includes: a segmentation sub-module, which is used for performing instance segmentation on the environment fusion image to obtain an obstacle in the environment fusion image.
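As a hedged illustration of the instance-segmentation step, the sketch below uses a pretrained Mask R-CNN from torchvision purely as a stand-in segmenter; the embodiment does not specify which segmentation network is used, and the confidence threshold is an assumption.

```python
import torch
import torchvision

# A pretrained Mask R-CNN is used here only as a stand-in segmenter;
# the embodiment does not fix a particular instance-segmentation network.
model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

def segment_obstacles(fusion_image_rgb, score_thresh=0.5):
    """Run instance segmentation on (the RGB channels of) the fusion image
    and return per-obstacle boxes and masks above a confidence threshold."""
    tensor = torch.from_numpy(fusion_image_rgb).permute(2, 0, 1).float() / 255.0
    with torch.no_grad():
        out = model([tensor])[0]
    keep = out["scores"] > score_thresh
    return out["boxes"][keep], out["masks"][keep]
```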
Optionally, the tracking module includes:
the matching sub-module is used for, when the environment fusion image of the N-th frame is acquired, performing obstacle matching between the environment fusion image of the N-th frame and the environment fusion image of the (N-1)-th frame, and determining the position corresponding relation of the obstacle; the position corresponding relation is the position mapping relation of the obstacle between the environment fusion images of different frames, N is greater than or equal to 2, and N is an integer;
the estimation sub-module is used for determining a motion estimation result of the obstacle in the environment fusion image of the N-th frame according to the position corresponding relation of the obstacle, wherein the motion estimation result comprises the motion direction and the motion speed of the obstacle;
and the state determining sub-module is used for determining the motion state of the obstacle according to the motion direction and the motion speed of the obstacle in the environment fusion image of the N-th frame (an illustrative sketch of this frame-to-frame association follows below).
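The following sketch illustrates, under stated assumptions, how the matching sub-module and the estimation sub-module could associate obstacles between the (N-1)-th and N-th frames and estimate their motion direction and speed. The greedy nearest-neighbour association, the distance gate, and the speed threshold separating static from dynamic obstacles are illustrative choices, not requirements of the embodiment.

```python
import numpy as np

def match_obstacles(prev_centroids, curr_centroids, max_dist=3.0):
    """Greedy nearest-neighbour association between obstacles of frame N-1
    and frame N (centroids in metres, vehicle frame). One simple choice;
    the embodiment does not fix a particular matching algorithm."""
    matches = []
    used = set()
    for i, c in enumerate(curr_centroids):
        dists = np.linalg.norm(prev_centroids - c, axis=1)
        j = int(np.argmin(dists))
        if dists[j] < max_dist and j not in used:
            matches.append((j, i))  # (index in frame N-1, index in frame N)
            used.add(j)
    return matches

def estimate_motion(prev_c, curr_c, dt):
    """Motion direction (unit vector), speed, and state of one matched obstacle."""
    disp = curr_c - prev_c
    speed = np.linalg.norm(disp) / dt
    direction = disp / (np.linalg.norm(disp) + 1e-9)
    state = "dynamic" if speed > 0.5 else "static"  # assumed speed threshold
    return direction, speed, state
```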
Optionally, the track determining module includes: a track determination sub-module, which is used for determining the motion track of the obstacle based on the motion direction and the motion speed of the obstacle in the environment fusion image of the N-th frame when the motion state of the obstacle is dynamic.
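A minimal sketch of the track determination for a dynamic obstacle follows, assuming a constant-velocity motion model, an illustrative prediction horizon, and an illustrative time step; the embodiment does not mandate this motion model.

```python
import numpy as np

def predict_trajectory(position, direction, speed, horizon=2.0, step=0.1):
    """Constant-velocity extrapolation of a dynamic obstacle's trajectory
    over a short horizon; returns an array of future positions."""
    times = np.arange(step, horizon + step, step)
    return position + np.outer(times, direction * speed)
```

For example, predict_trajectory(np.array([5.0, 1.0]), direction, speed) would return a short sequence of future positions that can later be drawn into the detection image.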
Optionally, the generating module includes:
the retaining sub-module is used for retaining a three-dimensional detection frame of the obstacle in the environment fusion image to obtain an intermediate detection image of the obstacle;
the category determining submodule is used for determining the category of the obstacle according to the three-dimensional detection frame of the obstacle, the motion track of the obstacle and the motion state of the obstacle in the environment fusion image;
and the detection sub-module is used for generating a detection image of the obstacle based on the category of the obstacle and the intermediate detection image.
Optionally, the detection sub-module includes:
the category determining unit is used for determining display information of the point cloud data in the intermediate detection image based on the category of the obstacle; the display information comprises at least one of the color, the size, and the shape of the point cloud;
and the generating unit is used for displaying the point cloud data in the intermediate detection image according to the display information, to obtain the detection image of the obstacle (see the sketch below).
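The sketch below illustrates one possible realization of the category determining unit and the generating unit: a category-to-display-information table and a simple renderer that draws the obstacle's projected radar points onto the intermediate detection image. The specific colors, point radii, and category names are assumptions for illustration only.

```python
import numpy as np

# Illustrative category -> display-information mapping (colour in BGR,
# point radius in pixels); the actual mapping is a design choice.
DISPLAY_INFO = {
    "pedestrian": {"color": (0, 0, 255), "radius": 3},
    "vehicle":    {"color": (0, 255, 0), "radius": 2},
    "static":     {"color": (255, 0, 0), "radius": 1},
}

def render_detection_image(intermediate_image, obstacle_points_uv, category):
    """Draw the obstacle's projected radar points on the intermediate
    detection image (assumed to be a 3-channel colour image) using
    per-category display information."""
    info = DISPLAY_INFO.get(category, {"color": (255, 255, 255), "radius": 1})
    out = intermediate_image.copy()
    h, w = out.shape[:2]
    r = info["radius"]
    for u, v in obstacle_points_uv.astype(int):
        if 0 <= v < h and 0 <= u < w:
            out[max(0, v - r):v + r + 1, max(0, u - r):u + r + 1] = info["color"]
    return out
```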
In summary, in the obstacle detection device provided by the embodiments of the present application, during driving of the vehicle, the vision sensor and the millimeter wave radar are used to collect the environment image and the radar data respectively, and the environment image and the radar data are then feature-fused to obtain the environment fusion image. Data of two different modalities are thereby fused, so that the device combines the advantages of the vision sensor and the millimeter wave radar for obstacle detection in complex environments and describes the feature information of the obstacle more comprehensively. After the environment fusion image is obtained, the obstacle is continuously tracked to obtain the motion state of the obstacle, and the motion trajectory of the obstacle is predicted based on the motion state of the obstacle, so that the motion state and the motion trajectory of the obstacle are tracked and the motion of the obstacle in the current driving environment of the vehicle is determined. After a detection image of the obstacle is generated based on the environment fusion image, the motion trajectory of the obstacle, and the motion state of the obstacle, the detection image is displayed on a display device in the vehicle, so that the driver can avoid the obstacle based on the detection image. The obstacle detection method is applicable to various complex environments, and the accuracy and robustness of obstacle detection are improved.
The embodiment of the application provides an obstacle detection device, which comprises a memory and a processor, wherein at least one computer program is stored in the memory, and the at least one computer program is loaded and executed by the processor to realize all or part of the steps of the obstacle detection method provided by the embodiment of the method.
As an example, please refer to fig. 13, which is a schematic diagram of an obstacle detection device 700 according to an embodiment of the present application. The obstacle detection device 700 is a vehicle or a functional component deployed in a vehicle, and is adapted to perform the method provided by the embodiments shown in fig. 1 to fig. 10.
Generally, the obstacle detecting apparatus 700 includes: a processor 701 and a memory 702.
Processor 701 may include one or more processing cores, such as a 4-core processor or an 8-core processor. The processor 701 may be implemented in at least one hardware form of a DSP (Digital Signal Processor), an FPGA (Field-Programmable Gate Array), or a PLA (Programmable Logic Array). The processor 701 may also include a main processor and a coprocessor: the main processor is a processor for processing data in an awake state, also referred to as a CPU (Central Processing Unit); the coprocessor is a low-power processor for processing data in a standby state. In some embodiments, the processor 701 may be integrated with a GPU (Graphics Processing Unit) for rendering and drawing content to be displayed on the display screen. In some embodiments, the processor 701 may further include an AI (Artificial Intelligence) processor for processing computing operations related to machine learning.
Memory 702 may include one or more computer-readable storage media, which may be non-transitory. The memory 702 may also include high-speed random access memory, as well as non-volatile memory, such as one or more magnetic disk storage devices, flash memory storage devices. In some embodiments, a non-transitory computer readable storage medium in memory 702 is used to store at least one instruction for execution by processor 701 to implement the obstacle detection methods provided by embodiments of the present application.
In some embodiments, the obstacle detecting apparatus 700 may further include: a peripheral interface 703 and at least one peripheral. The processor 701, the memory 702, and the peripheral interface 703 may be connected by a bus or signal lines. The individual peripheral devices may be connected to the peripheral device interface 703 via buses, signal lines or a circuit board. Specifically, the peripheral device includes: at least one of radio frequency circuitry 704, touch display 705, camera 706, audio circuitry 707, positioning component 708, and power supply 709.
The peripheral interface 703 may be used to connect at least one I/O (Input/Output)-related peripheral device to the processor 701 and the memory 702. In some embodiments, the processor 701, the memory 702, and the peripheral interface 703 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 701, the memory 702, and the peripheral interface 703 may be implemented on a separate chip or circuit board, which is not limited in this embodiment.
The radio frequency circuit 704 is configured to receive and transmit RF (Radio Frequency) signals, also referred to as electromagnetic signals. The radio frequency circuit 704 communicates with a communication network and other communication devices via electromagnetic signals. The radio frequency circuit 704 converts an electrical signal into an electromagnetic signal for transmission, or converts a received electromagnetic signal into an electrical signal. Optionally, the radio frequency circuit 704 includes: an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and so on. The radio frequency circuit 704 may communicate with other terminals via at least one wireless communication protocol. The wireless communication protocol includes, but is not limited to: the world wide web, metropolitan area networks, intranets, mobile communication networks of various generations (2G, 3G, 4G, and 5G), wireless local area networks, and/or WiFi (Wireless Fidelity) networks. In some embodiments, the radio frequency circuit 704 may also include NFC (Near Field Communication) related circuitry, which is not limited in the embodiments of the present application.
The display screen 705 is used to display a UI (User Interface). The UI may include graphics, text, icons, video, and any combination thereof. When the display screen 705 is a touch display screen, the display screen 705 also has the ability to collect touch signals on or above the surface of the display screen 705. The touch signal may be input to the processor 701 as a control signal for processing. At this time, the display screen 705 may also be used to provide virtual buttons and/or a virtual keyboard, also referred to as soft buttons and/or a soft keyboard. In some embodiments, there may be one display screen 705, disposed on the front panel of the obstacle detection device 700; in other embodiments, there may be at least two display screens 705, respectively disposed on different surfaces of the obstacle detection device 700 or in a folded design; in still other embodiments, the display screen 705 may be a flexible display screen disposed on a curved surface or a folded surface of the obstacle detection device 700. Furthermore, the display screen 705 may be arranged in an irregular, non-rectangular shape, i.e., an irregularly-shaped screen. The display screen 705 may be made of materials such as an LCD (Liquid Crystal Display) or an OLED (Organic Light-Emitting Diode).
The camera assembly 706 is used to capture images or video. Optionally, the camera assembly 706 includes a front camera and a rear camera. Typically, the front camera is disposed on the front panel of the terminal and the rear camera is disposed on the rear surface of the terminal. In some embodiments, there are at least two rear cameras, each being any one of a main camera, a depth-of-field camera, a wide-angle camera, and a telephoto camera, so that the main camera and the depth-of-field camera can be fused to realize a background blurring function, and the main camera and the wide-angle camera can be fused to realize panoramic shooting and virtual reality (VR) shooting functions or other fused shooting functions. In some embodiments, the camera assembly 706 may also include a flash. The flash may be a single-color-temperature flash or a dual-color-temperature flash. A dual-color-temperature flash refers to a combination of a warm-light flash and a cold-light flash, and can be used for light compensation under different color temperatures.
The audio circuit 707 may include a microphone and a speaker. The microphone is used for collecting sound waves of users and environments, converting the sound waves into electric signals, and inputting the electric signals to the processor 701 for processing, or inputting the electric signals to the radio frequency circuit 704 for voice communication. For the purpose of stereo acquisition or noise reduction, a plurality of microphones may be respectively disposed at different positions of the obstacle detecting apparatus 700. The microphone may also be an array microphone or an omni-directional pickup microphone. The speaker is used to convert electrical signals from the processor 701 or the radio frequency circuit 704 into sound waves. The speaker may be a conventional thin film speaker or a piezoelectric ceramic speaker. When the speaker is a piezoelectric ceramic speaker, not only the electric signal can be converted into a sound wave audible to humans, but also the electric signal can be converted into a sound wave inaudible to humans for ranging and other purposes. In some embodiments, the audio circuit 707 may also include a headphone jack.
The positioning component 708 is used to locate the current geographic location of the obstacle detection device 700 for navigation or LBS (Location Based Services). The positioning component 708 may be a positioning component based on the GPS (Global Positioning System) of the United States, the Beidou system of China, or the Galileo system of the European Union.
The power supply 709 is used to power the various components in the obstacle detection device 700. The power supply 709 may be an alternating current, a direct current, a disposable battery, or a rechargeable battery. When the power supply 709 includes a rechargeable battery, the rechargeable battery may be a wired rechargeable battery or a wireless rechargeable battery. The wired rechargeable battery is a battery charged through a wired line, and the wireless rechargeable battery is a battery charged through a wireless coil. The rechargeable battery may also be used to support fast charge technology.
In some embodiments, obstacle detection device 700 also includes one or more sensors 710. The one or more sensors 710 include, but are not limited to: acceleration sensor 711, gyroscope sensor 712, pressure sensor 713, fingerprint sensor 714, optical sensor 715, proximity sensor 716, vision sensor 717, and millimeter wave radar 718.
The acceleration sensor 711 can detect the magnitudes of accelerations on three coordinate axes of the coordinate system established by the obstacle detecting device 700. For example, the acceleration sensor 711 may be used to detect the components of the gravitational acceleration in three coordinate axes. The processor 701 may control the touch display screen 705 to display a user interface in a landscape view or a portrait view according to the gravitational acceleration signal acquired by the acceleration sensor 711. The acceleration sensor 711 may also be used for the acquisition of motion data of a game or a user.
The gyro sensor 712 may detect a body direction and a rotation angle of the obstacle detection device 700, and the gyro sensor 712 may collect a 3D motion of the obstacle detection device 700 by a user in cooperation with the acceleration sensor 711. The processor 701 may implement the following functions based on the data collected by the gyro sensor 712: motion sensing (e.g., changing UI according to a tilting operation by a user), image stabilization at shooting, game control, and inertial navigation.
The pressure sensor 713 may be disposed at a side frame of the obstacle detecting device 700 and/or at a lower layer of the touch display screen 705. When the pressure sensor 713 is disposed at a side frame of the obstacle detecting device 700, a grip signal of the obstacle detecting device 700 by a user may be detected, and the processor 701 performs left-right hand recognition or quick operation according to the grip signal collected by the pressure sensor 713. When the pressure sensor 713 is disposed at the lower layer of the touch display screen 705, the processor 701 controls the operability control on the UI interface according to the pressure operation of the user on the touch display screen 705. The operability controls include at least one of a button control, a scroll bar control, an icon control, and a menu control.
The fingerprint sensor 714 is used to collect a fingerprint of the user, and the processor 701 identifies the identity of the user according to the fingerprint collected by the fingerprint sensor 714, or the fingerprint sensor 714 identifies the identity of the user according to the collected fingerprint. Upon recognizing that the user's identity is a trusted identity, the processor 701 authorizes the user to perform relevant sensitive operations including unlocking the screen, viewing encrypted information, downloading software, paying for and changing settings, etc. The fingerprint sensor 714 may be provided on the front, back, or side of the obstacle detection device 700. When a physical key or vendor Logo is provided on the obstacle detection device 700, the fingerprint sensor 714 may be integrated with the physical key or vendor Logo.
The optical sensor 715 is used to collect the ambient light intensity. In one embodiment, the processor 701 may control the display brightness of the touch display 705 based on the ambient light intensity collected by the optical sensor 715. Specifically, when the intensity of the ambient light is high, the display brightness of the touch display screen 705 is turned up; when the ambient light intensity is low, the display brightness of the touch display screen 705 is turned down. In another embodiment, the processor 701 may also dynamically adjust the shooting parameters of the camera assembly 706 based on the ambient light intensity collected by the optical sensor 715.
The proximity sensor 716, also referred to as a distance sensor, is typically disposed on the front panel of the obstacle detection device 700. The proximity sensor 716 is used to collect the distance between the user and the front face of the obstacle detection device 700. In one embodiment, when the proximity sensor 716 detects that the distance between the user and the front face of the obstacle detection device 700 gradually decreases, the processor 701 controls the touch display screen 705 to switch from the bright screen state to the off screen state; when the proximity sensor 716 detects that the distance between the user and the front surface of the obstacle detection device 700 gradually increases, the processor 701 controls the touch display screen 705 to switch from the off-screen state to the on-screen state.
Wherein, the vision sensor 717 is used for collecting the environmental image of the surrounding environment when the vehicle is running.
The millimeter wave radar 718 is used for acquiring radar data of the surrounding environment when the vehicle runs.
It will be appreciated by those skilled in the art that the structure shown in fig. 13 does not constitute a limitation of the obstacle detection device 700, which may include more or fewer components than shown, or combine certain components, or adopt a different arrangement of components.
In some embodiments, there is also provided a computer readable storage medium having stored therein at least one computer program loaded and executed by a processor to implement the above-described obstacle detection method.
It is noted that the computer readable storage medium mentioned in the embodiments of the present application may be a non-volatile storage medium, in other words, may be a non-transitory storage medium.
It should be understood that all or part of the steps to implement the above-described embodiments may be implemented by software, hardware, firmware, or any combination thereof. When implemented in software, may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. Computer instructions may be stored in the computer-readable storage medium described above.
That is, in some embodiments, there is also provided a computer program product comprising a computer program/instruction which, when executed by a processor, implements the above-described obstacle detection method.
It should be understood that references herein to "at least one" mean one or more, and "a plurality" means two or more. In the description of the embodiments of the present application, unless otherwise indicated, "/" means "or"; for example, A/B may represent A or B. "And/or" herein merely describes an association relationship between associated objects and indicates that three relationships may exist; for example, A and/or B may mean: A exists alone, both A and B exist, or B exists alone. In addition, in order to clearly describe the technical solutions of the embodiments of the present application, the words "first", "second", and the like are used in the embodiments of the present application to distinguish identical or similar items having substantially the same function and effect. It will be appreciated by those skilled in the art that the words "first", "second", and the like do not limit the quantity or the order of execution, and that the items they modify are not necessarily different.
Different types of embodiments provided in the embodiments of the present application, such as a method embodiment and a system embodiment, may be referred to with respect to one another; this is not limited in the embodiments of the present application. The sequence of the operations of the method embodiments provided in the embodiments of the present application can be appropriately adjusted, and operations can also be added or removed according to the situation. Any variation readily conceivable by a person skilled in the art within the technical scope disclosed in the present application shall fall within the protection scope of the present application, and is therefore not described again.
In the corresponding embodiments provided in the present application, it should be understood that the disclosed system and the like may be implemented by other structural manners. For example, the system embodiments described above are merely illustrative, e.g., the division of modules is merely a logical division of functionality, and there may be additional divisions of actual implementation, e.g., multiple modules or components may be combined or integrated into another system, or some features may be omitted, or not performed.
The modules illustrated as separate components may or may not be physically separate, and the components described as modules may or may not be physical modules. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
It should be noted that, information (including, but not limited to, vehicle equipment information, user personal information, etc.), data (including, but not limited to, data for analysis, stored data, presented data, etc.), and signals related to the present application are all authorized by the user or are fully authorized by the parties, and the collection, use, and processing of relevant data is required to comply with relevant laws and regulations and standards of relevant countries and regions. For example, the environmental image and radar data referred to in this application are acquired with sufficient authorization.
The foregoing is merely illustrative of the present invention and is not intended to limit it; any modification made within the spirit and principles of the present invention shall be included within the protection scope of the present invention.

Claims (10)

1. A method of detecting an obstacle, the method comprising:
acquiring an environment image acquired by a vision sensor on the vehicle and radar data acquired by a millimeter wave radar on the vehicle in the running process of the vehicle;
performing feature fusion on the environment image and the radar data to obtain an environment fusion image;
determining an obstacle in the environment fusion image according to the environment fusion image;
tracking the obstacle based on the vision sensor and the millimeter wave radar to obtain the movement state of the obstacle;
determining a movement track of the obstacle based on the movement state of the obstacle;
and generating a detection image of the obstacle based on the environment fusion image, the movement track of the obstacle and the movement state of the obstacle.
2. The method of claim 1, wherein the performing feature fusion on the environment image and the radar data to obtain an environment fusion image comprises:
performing feature extraction on the environment image to obtain a feature image comprising a three-dimensional detection frame;
extracting point cloud data in the radar data;
and projecting the point cloud data into the feature image to obtain the environment fusion image.
3. The method according to claim 1 or 2, wherein said determining an obstacle in said environment fusion image from said environment fusion image comprises:
and performing instance segmentation on the environment fusion image to obtain an obstacle in the environment fusion image.
4. The method according to claim 1 or 2, wherein said tracking of said obstacle to obtain a movement state of said obstacle comprises:
when an environment fusion image of an N-th frame is obtained, performing obstacle matching between the environment fusion image of the N-th frame and the environment fusion image of the (N-1)-th frame, and determining a position corresponding relation of the obstacle, wherein the position corresponding relation is a position mapping relation of the obstacle between the environment fusion images of different frames, N is greater than or equal to 2, and N is an integer;
determining a motion estimation result of the obstacle in the environment fusion image of the N-th frame according to the position corresponding relation of the obstacle, wherein the motion estimation result comprises the movement direction and the movement speed of the obstacle;
and determining the movement state of the obstacle according to the movement direction and the movement speed of the obstacle in the environment fusion image of the N-th frame.
5. The method of claim 4, wherein the determining the movement track of the obstacle based on the movement state of the obstacle comprises:
and when the movement state of the obstacle is dynamic, determining the movement track of the obstacle based on the movement direction and the movement speed of the obstacle in the environment fusion image of the N-th frame.
6. The method according to claim 1 or 2, wherein the generating a detection image of the obstacle based on the environment fusion image, the movement track of the obstacle, and the movement state of the obstacle comprises:
retaining a three-dimensional detection frame of the obstacle in the environment fusion image to obtain an intermediate detection image of the obstacle;
determining the category of the obstacle according to the three-dimensional detection frame of the obstacle, the movement track of the obstacle and the movement state of the obstacle in the environment fusion image;
and generating a detection image of the obstacle based on the category of the obstacle and the intermediate detection image.
7. The method of claim 6, wherein the generating a detection image of the obstacle based on the category of the obstacle and the intermediate detection image comprises:
determining display information of point cloud data in the intermediate detection image based on the category of the obstacle, wherein the display information comprises at least one of color, size and shape of point cloud corresponding to the point cloud data;
and displaying the point cloud data in the intermediate detection image according to the display information to obtain a detection image of the obstacle.
8. An obstacle detection device, the device comprising:
the acquisition module is used for acquiring an environment image acquired by a vision sensor on the vehicle and radar data acquired by a millimeter wave radar on the vehicle in the running process of the vehicle;
the fusion module is used for carrying out feature fusion on the environment image and the radar data to obtain an environment fusion image;
the target determining module is used for determining an obstacle in the environment fusion image according to the environment fusion image;
the tracking module is used for tracking the obstacle based on the vision sensor and the millimeter wave radar to obtain the movement state of the obstacle;
the track determining module is used for determining the movement track of the obstacle based on the movement state of the obstacle;
and the generation module is used for generating a detection image of the obstacle based on the environment fusion image, the movement track of the obstacle and the movement state of the obstacle.
9. An obstacle detection device, characterized in that the obstacle detection device comprises: a memory and a processor, the memory having stored therein at least one computer program that is loaded and executed by the processor to implement the method of any of claims 1-7.
10. A computer readable storage medium, characterized in that at least one computer program is stored in the computer readable storage medium, which is loaded and executed by a processor to implement the method of any one of claims 1-7.
CN202311439097.1A 2023-10-31 2023-10-31 Obstacle detection method and device Pending CN117452411A (en)

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination