CN115616560B - Vehicle obstacle avoidance method and device, electronic equipment and computer readable medium - Google Patents

Vehicle obstacle avoidance method and device, electronic equipment and computer readable medium

Info

Publication number
CN115616560B
CN115616560B (application CN202211535458.8A)
Authority
CN
China
Prior art keywords
obstacle
information
target
radar
matching
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202211535458.8A
Other languages
Chinese (zh)
Other versions
CN115616560A (en)
Inventor
李敏
张�雄
龙文
申苗
刘智睿
艾永军
陶武康
王倩
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
GAC Aion New Energy Automobile Co Ltd
Original Assignee
GAC Aion New Energy Automobile Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by GAC Aion New Energy Automobile Co Ltd
Priority to CN202211535458.8A
Publication of CN115616560A
Application granted
Publication of CN115616560B
Legal status: Active

Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S13/00Systems using the reflection or reradiation of radio waves, e.g. radar systems; Analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
    • G01S13/86Combinations of radar systems with non-radar systems, e.g. sonar, direction finder
    • G01S13/867Combination of radar systems with cameras
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S13/00Systems using the reflection or reradiation of radio waves, e.g. radar systems; Analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
    • G01S13/88Radar or analogous systems specially adapted for specific applications
    • G01S13/93Radar or analogous systems specially adapted for specific applications for anti-collision purposes
    • G01S13/931Radar or analogous systems specially adapted for specific applications for anti-collision purposes of land vehicles
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05DSYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00Control of position, course or altitude of land, water, air, or space vehicles, e.g. automatic pilot
    • G05D1/02Control of position or course in two dimensions
    • G05D1/021Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0257Control of position or course in two dimensions specially adapted to land vehicles using a radar
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems

Abstract

The embodiments of the present disclosure disclose a vehicle obstacle avoidance method and device, an electronic device, and a computer readable medium. One embodiment of the method comprises: acquiring an initial obstacle camera image set and an initial obstacle radar point cloud information set; matching the initial obstacle camera image set and the initial obstacle radar point cloud information set to obtain a matching obstacle information set; preprocessing each piece of matching obstacle information in the matching obstacle information set to generate target matching obstacle information, obtaining a target matching obstacle information set; fusing the target obstacle camera image and the radar obstacle information included in each piece of target matching obstacle information in the target matching obstacle information set to generate fused obstacle information, obtaining a fused obstacle information set; and sending the fused obstacle information set to a control terminal to control the target vehicle to avoid obstacles. This embodiment improves the accuracy of the vehicle obstacle avoidance function.

Description

Vehicle obstacle avoidance method and device, electronic equipment and computer readable medium
Technical Field
The embodiment of the disclosure relates to the technical field of computers, in particular to a vehicle obstacle avoidance method, a vehicle obstacle avoidance device, electronic equipment and a computer readable medium.
Background
During vehicle driving, obstacles in the environment need to be identified and accurately avoided. At present, vehicle obstacle avoidance generally adopts one of the following approaches: obstacle information is obtained from only a single sensor and analyzed, or obstacle information obtained from different sensors is fused and analyzed through a D-S (Dempster-Shafer) evidence theory algorithm, and the analyzed obstacle information is then sent to a control terminal to control the vehicle to avoid obstacles.
However, the inventors have found that the above approaches to vehicle obstacle avoidance often suffer from the following technical problems:
first, when obstacle information is obtained from only a single sensor and analyzed, the accuracy of the analyzed obstacle information is insufficient, so the accuracy of the vehicle obstacle avoidance function is insufficient;
second, when obstacle information acquired by different sensors is fused and analyzed through a D-S evidence theory algorithm, conflicts between the information acquired by different sensors reduce the accuracy of the fused obstacle information, and therefore reduce the accuracy of the vehicle obstacle avoidance function.
The above information disclosed in this background section is only for enhancement of understanding of the background of the inventive concept and, therefore, may contain information that does not constitute prior art already known in this country to a person of ordinary skill in the art.
Disclosure of Invention
This summary is provided to introduce a selection of concepts in a simplified form that are further described below in the detailed description. This summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter.
Some embodiments of the present disclosure propose vehicle obstacle avoidance methods, apparatuses, electronic devices, and computer readable media to solve one or more of the technical problems mentioned in the background section above.
In a first aspect, some embodiments of the present disclosure provide a vehicle obstacle avoidance method, including: acquiring an initial obstacle camera image set and an initial obstacle radar point cloud information set; matching the initial obstacle camera image set and the initial obstacle radar point cloud information set to obtain a matching obstacle information set, wherein each piece of matching obstacle information in the matching obstacle information set includes: a matching obstacle camera image set and matching obstacle radar point cloud information; preprocessing each piece of matching obstacle information in the matching obstacle information set to generate target matching obstacle information, obtaining a target matching obstacle information set, wherein each piece of target matching obstacle information in the target matching obstacle information set includes: a target obstacle camera image and radar obstacle information; fusing the target obstacle camera image and the radar obstacle information included in each piece of target matching obstacle information in the target matching obstacle information set to generate fused obstacle information, obtaining a fused obstacle information set; and sending the fused obstacle information set to a control terminal to control a target vehicle to avoid obstacles. Here, preprocessing each piece of matching obstacle information in the matching obstacle information set to generate target matching obstacle information includes: stitching the matched obstacle camera images in the matching obstacle camera image set included in the matching obstacle information to obtain a target obstacle camera image; performing data filtering processing on the matching obstacle radar point cloud information included in the matching obstacle information to obtain radar obstacle information; and fusing the target obstacle camera image and the radar obstacle information into the target matching obstacle information.
In a second aspect, some embodiments of the present disclosure provide a vehicle obstacle avoidance apparatus, the apparatus comprising: an acquisition unit configured to acquire an initial obstacle camera image set and an initial obstacle radar point cloud information set; a matching unit configured to match the initial obstacle camera image set with the initial obstacle radar point cloud information set to obtain a matching obstacle information set, wherein each piece of matching obstacle information in the matching obstacle information set includes: a matching obstacle camera image set and matching obstacle radar point cloud information; a preprocessing unit configured to preprocess each piece of matching obstacle information in the matching obstacle information set to generate target matching obstacle information, obtaining a target matching obstacle information set, wherein each piece of target matching obstacle information in the target matching obstacle information set includes: a target obstacle camera image and radar obstacle information; a fusion unit configured to fuse the target obstacle camera image and the radar obstacle information included in each piece of target matching obstacle information in the target matching obstacle information set to generate fused obstacle information, obtaining a fused obstacle information set; and a sending unit configured to send the fused obstacle information set to a control terminal to control a target vehicle to avoid obstacles. Here, preprocessing each piece of matching obstacle information in the matching obstacle information set to generate target matching obstacle information includes: stitching the matched obstacle camera images in the matching obstacle camera image set included in the matching obstacle information to obtain a target obstacle camera image; performing data filtering processing on the matching obstacle radar point cloud information included in the matching obstacle information to obtain radar obstacle information; and fusing the target obstacle camera image and the radar obstacle information into the target matching obstacle information.
In a third aspect, some embodiments of the present disclosure provide an electronic device, comprising: one or more processors; a storage device, on which one or more programs are stored, which when executed by one or more processors cause the one or more processors to implement the method described in any implementation of the first aspect.
In a fourth aspect, some embodiments of the disclosure provide a computer readable medium on which a computer program is stored, wherein the program when executed by a processor implements the method described in any one of the implementations of the first aspect.
The above embodiments of the present disclosure have the following advantages: the vehicle obstacle avoidance method of some embodiments of the present disclosure can improve the accuracy of the vehicle obstacle avoidance function. Specifically, the accuracy of the vehicle obstacle avoidance function is insufficient because obstacle information is acquired from only a single sensor and analyzed, so the accuracy of the analyzed obstacle information is insufficient. Based on this, the vehicle obstacle avoidance method of some embodiments of the present disclosure first acquires an initial obstacle camera image set and an initial obstacle radar point cloud information set, so that information about obstacles can be obtained from different sensors. Second, the initial obstacle camera image set and the initial obstacle radar point cloud information set are matched to obtain a matching obstacle information set, where each piece of matching obstacle information includes a matching obstacle camera image set and matching obstacle radar point cloud information. The matched initial obstacle camera images and initial obstacle radar point cloud information thus obtained facilitate the subsequent fusion of each matching obstacle camera image set with the corresponding matching obstacle radar point cloud information. Then, each piece of matching obstacle information is preprocessed to generate target matching obstacle information, obtaining a target matching obstacle information set, where each piece of target matching obstacle information includes a target obstacle camera image and radar obstacle information. The preprocessing removes irrelevant information from the matching obstacle information, facilitating the subsequent generation of fused obstacle information. Next, the target obstacle camera image and the radar obstacle information included in each piece of target matching obstacle information are fused to generate fused obstacle information, obtaining a fused obstacle information set. The fused obstacle information is more accurate than obstacle information acquired from a single sensor and then analyzed. Finally, the fused obstacle information set is sent to a control terminal to control the target vehicle to avoid obstacles, so that the target vehicle can avoid obstacles according to accurate obstacle information. Thus, the vehicle obstacle avoidance methods of some embodiments of the present disclosure acquire obstacle information from different sensors and fuse it, which improves the accuracy of the obstacle information and thereby the accuracy of the vehicle obstacle avoidance function.
Drawings
The above and other features, advantages and aspects of various embodiments of the present disclosure will become more apparent by referring to the following detailed description when taken in conjunction with the accompanying drawings. Throughout the drawings, the same or similar reference numbers refer to the same or similar elements. It should be understood that the drawings are schematic and that components are not necessarily drawn to scale.
Fig. 1 is a flow diagram of some embodiments of a vehicle obstacle avoidance method according to the present disclosure;
fig. 2 is a schematic structural diagram of some embodiments of a vehicle obstacle avoidance device according to the present disclosure;
Fig. 3 is a schematic block diagram of an electronic device suitable for implementing some embodiments of the present disclosure.
Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the disclosure are shown in the drawings, it is to be understood that the disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete. It should be understood that the drawings and the embodiments of the disclosure are for illustration purposes only and are not intended to limit the scope of the disclosure.
It should be noted that, for convenience of description, only the portions related to the present invention are shown in the drawings. The embodiments and features of the embodiments in the present disclosure may be combined with each other without conflict.
It should be noted that the terms "first", "second", and the like in the present disclosure are only used for distinguishing different devices, modules or units, and are not used for limiting the order or interdependence relationship of the functions performed by the devices, modules or units.
It should be noted that the modifiers "a", "an", and "the" in this disclosure are intended to be illustrative rather than limiting; those skilled in the art will understand them to mean "one or more" unless the context clearly indicates otherwise.
The names of messages or information exchanged between devices in the embodiments of the present disclosure are for illustrative purposes only, and are not intended to limit the scope of the messages or information.
The present disclosure will be described in detail below with reference to the accompanying drawings in conjunction with embodiments.
Fig. 1 illustrates a flow 100 of some embodiments of a vehicle obstacle avoidance method according to the present disclosure. The vehicle obstacle avoidance method comprises the following steps:
Step 101, acquiring an initial obstacle camera image set and an initial obstacle radar point cloud information set.
In some embodiments, the execution subject of the vehicle obstacle avoidance method may obtain the initial obstacle camera image set from the vehicle-mounted camera assembly of the target vehicle and the initial obstacle radar point cloud information set from the vehicle-mounted radar of the target vehicle by means of a wired or wireless connection. The target vehicle may be a driving vehicle. The vehicle-mounted camera assembly may include, but is not limited to, a forward-looking camera, a surround-view camera, and a rear-view camera. The vehicle-mounted radar may be a millimeter-wave radar. The obstacles in the initial obstacle camera image set may be obstacle vehicles.
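For concreteness in the steps that follow, here is a minimal sketch of the two kinds of acquired data. The class and field names are illustrative assumptions, not taken from the patent, and real on-board interfaces would differ:

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class ObstacleCameraImage:
    timestamp: float   # camera sampling time, in seconds
    image: np.ndarray  # H x W x 3 frame from one vehicle-mounted camera

@dataclass
class ObstacleRadarPointCloud:
    timestamp: float   # radar sampling time, in seconds
    points: np.ndarray # N x 4 array of x, y, z and radial velocity per return
```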
Step 102, matching the initial obstacle camera image set and the initial obstacle radar point cloud information set to obtain a matching obstacle information set.
In some embodiments, the execution subject may match the initial obstacle camera image set and the initial obstacle radar point cloud information set to obtain a matching obstacle information set. Each piece of matching obstacle information in the matching obstacle information set includes: a matching obstacle camera image set and matching obstacle radar point cloud information.
In some optional implementations of some embodiments, the execution subject may match the initial obstacle camera image set and the initial obstacle radar point cloud information set to obtain the matching obstacle information set through the following steps (a code sketch follows the fourth step):
the method comprises the steps of firstly, obtaining a camera sampling time set and a radar sampling time set. Each camera sampling time in the camera sampling time set corresponds to at least one initial obstacle camera image in the initial obstacle camera image set, and each radar sampling time in the radar sampling time set corresponds to each initial obstacle radar point cloud information in the initial obstacle radar point cloud information set. The execution subject may acquire the camera sampling time set from an on-vehicle camera component of the target vehicle and the radar sampling time set from an on-vehicle radar of the target vehicle. The camera sampling time may represent a time at which the initial obstacle camera image is acquired from the vehicle-mounted camera component. The radar sampling time can represent the time for acquiring the initial obstacle radar point cloud information from the vehicle-mounted radar.
Second, in response to determining that a camera sampling time in the camera sampling time set is equal to a radar sampling time in the radar sampling time set, the initial obstacle camera images corresponding to that camera sampling time are determined as a matching obstacle camera image set, obtaining a group of matching obstacle camera image sets.
Third, in response to determining that a camera sampling time in the camera sampling time set is equal to a radar sampling time in the radar sampling time set, the initial obstacle radar point cloud information corresponding to that radar sampling time is determined as matching obstacle radar point cloud information, obtaining a matching obstacle radar point cloud information set.
Fourth, each matching obstacle camera image set in the group and the corresponding matching obstacle radar point cloud information in the matching obstacle radar point cloud information set are combined into matching obstacle information, obtaining the matching obstacle information set. That is, a matching obstacle camera image set and its corresponding matching obstacle radar point cloud information may be determined as the matching obstacle camera image set and the matching obstacle radar point cloud information included in one piece of matching obstacle information.
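A minimal sketch of this time-based matching, using the illustrative dataclasses above. The patent states equality of sampling times, so exact timestamps are assumed here; real deployments usually match within a small tolerance because sensor clocks rarely coincide exactly:

```python
def match_by_sampling_time(images, clouds):
    """Pair camera images and radar point clouds whose sampling times
    are equal, following the first through fourth steps above."""
    images_by_time = {}
    for img in images:
        images_by_time.setdefault(img.timestamp, []).append(img)
    matched = []
    for cloud in clouds:
        image_set = images_by_time.get(cloud.timestamp)
        if image_set:
            # one piece of matching obstacle information: a matching obstacle
            # camera image set plus matching obstacle radar point cloud information
            matched.append({"camera_images": image_set, "radar_cloud": cloud})
    return matched
```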
Step 103, preprocessing each piece of matching obstacle information in the matching obstacle information set to generate target matching obstacle information, obtaining a target matching obstacle information set.
In some embodiments, the execution subject may preprocess each piece of matching obstacle information in the matching obstacle information set to generate target matching obstacle information, obtaining a target matching obstacle information set. Each piece of target matching obstacle information in the target matching obstacle information set includes: a target obstacle camera image and radar obstacle information.
In some optional implementations of some embodiments, the execution subject may preprocess each piece of matching obstacle information in the matching obstacle information set to generate target matching obstacle information as follows:
and step 1031, splicing the images of the matched obstacle cameras in the image set of the matched obstacle cameras to obtain an image of the target obstacle camera.
Step 1032, performing data filtering processing on the matching obstacle radar point cloud information to obtain radar obstacle information. The data filtering processing may be performed on the matching obstacle radar point cloud information through a Kalman filter (see the sketch following step 1033 below).
Step 1033, fusing the target obstacle camera image and the radar obstacle information into the target matching obstacle information. Here, fusing the target obstacle camera image and the radar obstacle information into the target matching obstacle information may mean determining them as the target obstacle camera image and the radar obstacle information included in the target matching obstacle information.
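The patent does not specify the filter's state model, so the following is a hedged sketch of step 1032 assuming a constant-velocity model over an obstacle's planar position as measured by radar, with illustrative noise parameters:

```python
import numpy as np

def kalman_filter_radar(measurements, dt=0.05):
    """Toy constant-velocity Kalman filter over successive radar (x, y)
    measurements of one obstacle: a stand-in for the data filtering of
    step 1032 (state model and noise values are assumptions)."""
    x = np.zeros(4)                                # state: [px, py, vx, vy]
    P = np.eye(4)                                  # state covariance
    F = np.eye(4); F[0, 2] = F[1, 3] = dt          # constant-velocity transition
    H = np.zeros((2, 4)); H[0, 0] = H[1, 1] = 1.0  # observe position only
    Q = 0.01 * np.eye(4)                           # process noise (assumed)
    R = 0.1 * np.eye(2)                            # measurement noise (assumed)
    filtered = []
    for z in measurements:
        x = F @ x                                  # predict
        P = F @ P @ F.T + Q
        S = H @ P @ H.T + R                        # innovation covariance
        K = P @ H.T @ np.linalg.inv(S)             # Kalman gain
        x = x + K @ (np.asarray(z, dtype=float) - H @ x)
        P = (np.eye(4) - K @ H) @ P
        filtered.append(x[:2].copy())              # filtered (x, y)
    return filtered
```

A production filter would initialize the state from the first return and tune Q and R against the radar's measured noise characteristics.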
In some optional implementations of some embodiments, the execution subject may stitch the matched obstacle camera images in the matching obstacle camera image set to obtain the target obstacle camera image as follows (a combined code sketch follows the fourth step below):
firstly, carrying out lane line recognition processing on each matched obstacle camera image in the matched obstacle camera image set to generate lane line characteristic information, and obtaining a lane line characteristic information set. The lane line recognition processing may be performed on each image of the matched obstacle cameras in the image set of matched obstacle cameras through a Standard Hough Transform (SHT) algorithm to generate lane line feature information. The lane line characteristic information in the lane line characteristic information set may include, but is not limited to, a lane line equation.
Second, registration relationship information among the matched obstacle camera images in the matching obstacle camera image set is determined based on the lane line feature information set. The registration relationship information may be determined by matching the lane line equations included in the lane line feature information set. The registration relationship information may characterize the transformation relationships among the matched obstacle camera images, and may include a mapping function set.
Third, based on the registration relationship information, image transformation processing is performed on each matched obstacle camera image in the matching obstacle camera image set to generate a transformed obstacle camera image, obtaining a transformed obstacle camera image set. The image transformation processing may be performed on each matched obstacle camera image through the mapping function set.
As an example, the image transformation process described above may include, but is not limited to: rotation, scaling and distortion removal processing.
Fourth, the transformed obstacle camera images in the transformed obstacle camera image set are fused to obtain the target obstacle camera image. The fusion may be performed by determining one transformed obstacle camera image in the set as the target transformed obstacle camera image and projecting the remaining transformed obstacle camera images onto it.
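A compressed sketch of this stitching pipeline using OpenCV. The Canny and Hough thresholds are illustrative, and the homography that the patent derives from matched lane line equations is assumed to be given:

```python
import cv2
import numpy as np

def lane_line_features(image):
    """Detect lane-line candidates with the standard Hough transform,
    as in the first step above (thresholds are illustrative)."""
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)
    lines = cv2.HoughLines(edges, rho=1, theta=np.pi / 180, threshold=120)
    return lines if lines is not None else np.empty((0, 1, 2))

def stitch_onto_target(target, other, H_other_to_target):
    """Warp `other` into the target view and overlay it: a minimal
    stand-in for the registration, transformation and fusion of
    steps two through four."""
    h, w = target.shape[:2]
    warped = cv2.warpPerspective(other, H_other_to_target, (w, h))
    mask = warped.sum(axis=2) > 0  # pixels actually covered by the warp
    out = target.copy()
    out[mask] = warped[mask]
    return out
```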
Step 104, fusing the target obstacle camera image and the radar obstacle information included in each piece of target matching obstacle information in the target matching obstacle information set to generate fused obstacle information, obtaining a fused obstacle information set.
In some embodiments, the execution subject may fuse the target obstacle camera image and the radar obstacle information included in each piece of target matching obstacle information in the target matching obstacle information set to generate fused obstacle information, obtaining a fused obstacle information set.
As an example, each piece of fused obstacle information in the fused obstacle information set may include, but is not limited to, size information of the obstacle, the color of the obstacle, and the distance value between the obstacle and the target vehicle.
In some optional implementations of some embodiments, the execution subject may fuse the target obstacle camera image and the radar obstacle information included in each piece of target matching obstacle information to generate fused obstacle information as follows:
firstly, extracting the characteristics of the target obstacle camera image to obtain the characteristic information of the camera obstacle. Wherein the camera obstacle feature information includes: an obstacle shadow line information set and a camera obstacle feature coordinate set. The feature extraction of the target obstacle camera image can be performed through a neural network model. The camera obstacle feature information may include, but is not limited to, the shape, color, and distance value to the target vehicle of the obstacle. The obstacle shadow line information in the above-described obstacle shadow line information set may represent edge lines of shadows of the obstacle vehicles projected on the road surface. The obstacle contour line information in the obstacle contour line information set may include: the coordinates of the left end point of the obstacle contour line and the coordinates of the right end point of the obstacle contour line. The camera obstacle feature coordinates in the camera obstacle feature coordinate set may represent coordinates of feature points of the obstacle in the camera coordinate system. The coordinates of the left end point of the obstacle contour line and the coordinates of the right end point of the obstacle contour line may be coordinates in a vehicle body coordinate system. The body coordinate system may be a body coordinate system of the target vehicle.
By way of example, the neural network model described above may be, but is not limited to, a residual neural network or a grouped convolutional neural network.
Second, feature extraction is performed on the radar obstacle information to obtain radar obstacle feature information. The radar obstacle feature information includes: an obstacle contour line information set and a radar obstacle feature coordinate set. The feature extraction of the radar obstacle information may be performed through a neural network model. The radar obstacle feature information may include, but is not limited to, obstacle feature point coordinates. The obstacle contour line information in the obstacle contour line information set may include: the obstacle contour line left end point coordinate and the obstacle contour line right end point coordinate. The radar obstacle feature coordinates in the radar obstacle feature coordinate set may represent coordinates of feature points of the obstacle in the radar coordinate system. The obstacle contour line left end point coordinate and right end point coordinate may be coordinates in the vehicle body coordinate system, i.e., the body coordinate system of the target vehicle.
By way of example, the neural network model may be, but is not limited to, a PointNet neural network or a dynamic graph convolutional neural network (DGCNN).
Third, in response to determining that the radar obstacle feature information satisfies a first preset condition, obstacle association degree information is generated based on the obstacle shadow line information set and the obstacle contour line information set. The first preset condition may be that the radar obstacle feature information includes obstacle vehicle information.
Fourth, in response to determining that the obstacle association degree information satisfies a second preset condition, feature fusion is performed on the radar obstacle feature information and the camera obstacle feature information to obtain the fused obstacle information. The obstacle association degree information may include an association value set. The association value set may include, but is not limited to, a first association value, a second association value, a third association value, and a fourth association value. The second preset condition may be that the association value set includes a first association value smaller than a first target value, a second association value smaller than a second target value, a third association value larger than a third target value, and a fourth association value larger than a fourth target value (a code sketch of this gate follows the example below).
As an example, the first target value may be 30, the second target value may be 25, the third target value may be 0.5, and the fourth target value may be 0.5.
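A one-function sketch of this gate, using the example target values above as defaults; the association values themselves are computed in the substeps that follow:

```python
def passes_second_preset_condition(assoc, t1=30, t2=25, t3=0.5, t4=0.5):
    """Return True when the four association values satisfy the
    second preset condition, with the example target values."""
    first, second, third, fourth = assoc
    return first < t1 and second < t2 and third > t3 and fourth > t4
```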
In some optional implementations of some embodiments, the execution subject may generate the obstacle association degree information based on the obstacle shadow line information set and the obstacle contour line information set as follows:
the first step, for each obstacle outline information in the above obstacle outline information set, performs the following substeps:
a first substep of determining the obstacle shade information corresponding to the obstacle contour line information in the obstacle shade information set as target obstacle shade information. The determining of the obstacle shadow information corresponding to the obstacle contour line information in the obstacle shadow information set as target obstacle shadow information may be performed by determining a horizontal difference between an abscissa of an obstacle shadow left end point coordinate included in each obstacle shadow information in the obstacle shadow information set and an abscissa of an obstacle contour line left end point coordinate included in the obstacle contour line information to obtain a horizontal difference set, and then determining a horizontal difference in which the horizontal difference set is within a certain interval as a target horizontal difference. And finally, determining the obstacle shadow information corresponding to the target transverse difference in the obstacle shadow information set as the target obstacle shadow information.
As an example, the preset interval may be (0, 0.5).
A second substep of determining the difference between the abscissa of the obstacle contour line left end point coordinate included in the obstacle contour line information and the abscissa of the target obstacle shadow line left end point coordinate included in the target obstacle shadow line information as a first association value.
A third substep of determining the difference between the abscissa of the obstacle contour line right end point coordinate included in the obstacle contour line information and the abscissa of the target obstacle shadow line right end point coordinate included in the target obstacle shadow line information as a second association value.
A fourth substep of generating an obstacle difference value, a shadow difference value and a contour line difference value based on the obstacle contour line information and the target obstacle shadow line information.
A fifth substep of determining the ratio of the obstacle difference value to the shadow difference value as a third association value.
A sixth substep of determining the ratio of the obstacle difference value to the contour line difference value as a fourth association value.
A seventh substep of combining the first association value, the second association value, the third association value and the fourth association value into an association value set.
Second, the combined association value set is taken as the obstacle association degree information. Combining the association value set into the obstacle association degree information may mean determining the association value set as the association value set included in the obstacle association degree information.
In some optional implementations of some embodiments, the execution subject may generate the obstacle difference value, the shadow difference value and the contour line difference value based on the obstacle contour line information and the target obstacle shadow line information as follows (a combined sketch follows the third step below):
first, determining a difference value between an abscissa of a coordinate of a left end point of the obstacle contour line included in the obstacle contour line information and an abscissa of a coordinate of a right end point of the target obstacle shadow line included in the target obstacle shadow line information as the obstacle difference value.
Second, determining the difference between the abscissa of the obstacle contour line left end point coordinate and the abscissa of the obstacle contour line right end point coordinate included in the obstacle contour line information as the contour line difference value.
Third, determining the difference between the abscissa of the target obstacle shadow line right end point coordinate and the abscissa of the target obstacle shadow line left end point coordinate included in the target obstacle shadow line information as the shadow difference value.
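Putting substeps two through six and the three difference values together, a minimal sketch; endpoints are (x, y) coordinates in the vehicle body frame, the sign conventions follow the substeps verbatim, and a real implementation would guard against zero denominators:

```python
def association_values(contour_left, contour_right, shadow_left, shadow_right):
    """Compute the four association values for one obstacle contour line
    (from radar) and its matched target obstacle shadow line (from camera)."""
    first = contour_left[0] - shadow_left[0]           # second substep
    second = contour_right[0] - shadow_right[0]        # third substep
    obstacle_diff = contour_left[0] - shadow_right[0]  # obstacle difference value
    contour_diff = contour_left[0] - contour_right[0]  # contour line difference value
    shadow_diff = shadow_right[0] - shadow_left[0]     # shadow difference value
    third = obstacle_diff / shadow_diff                # fifth substep
    fourth = obstacle_diff / contour_diff              # sixth substep
    return first, second, third, fourth
```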
In some optional implementations of some embodiments, the execution subject may perform feature fusion on the radar obstacle feature information and the camera obstacle feature information to obtain the fused obstacle information through the following steps (a sketch follows the example below):
the first step is that coordinate conversion is carried out on each camera obstacle feature coordinate in the camera obstacle feature coordinate set to generate target camera obstacle feature coordinates, and a target camera obstacle feature coordinate set is obtained. The coordinate conversion of each camera obstacle feature coordinate in the camera obstacle feature coordinate set to generate target camera obstacle feature coordinates may be a coordinate conversion of each camera obstacle feature coordinate in the camera obstacle feature coordinate set in a camera coordinate system to target camera obstacle feature coordinates in a world coordinate system.
Second, coordinate conversion is performed on each radar obstacle feature coordinate in the radar obstacle feature coordinate set to generate a target radar obstacle feature coordinate, obtaining a target radar obstacle feature coordinate set. The coordinate conversion may transform each radar obstacle feature coordinate from the radar coordinate system into a target radar obstacle feature coordinate in the world coordinate system.
Third, the target camera obstacle feature coordinate set and the target radar obstacle feature coordinate set are fused to obtain the fused obstacle information. The fusion of the two coordinate sets may be performed through a neural network model.
By way of example, the neural network model described above may be, but is not limited to, a RODNet (Radar Object Detection Network).
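A sketch of the coordinate unification that precedes the fusion network. The 4 x 4 extrinsic matrices are assumed to come from calibration, and the final fusion model (for example a RODNet-style detector) is represented by a placeholder:

```python
import numpy as np

def to_world(points_sensor, T_world_from_sensor):
    """Map N x 3 sensor-frame feature coordinates into the world frame
    using a 4 x 4 homogeneous extrinsic matrix, as in the first and
    second steps above."""
    n = points_sensor.shape[0]
    homogeneous = np.hstack([points_sensor, np.ones((n, 1))])
    return (T_world_from_sensor @ homogeneous.T).T[:, :3]

def fuse_feature_coordinates(camera_world, radar_world):
    """Placeholder for the neural fusion of the third step: the two
    world-frame coordinate sets are simply concatenated for the model."""
    return np.vstack([camera_world, radar_world])
```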
The content of step 104 is an inventive point of the embodiments of the present disclosure, and solves the second technical problem mentioned in the background, namely that the accuracy of the vehicle obstacle avoidance function is reduced. That reduction arises because, when obstacle information acquired from different sensors is fused and analyzed through a D-S evidence theory algorithm, conflicts between the information from different sensors reduce the accuracy of the fused obstacle information. Resolving this factor improves the accuracy of the vehicle obstacle avoidance function. To this end, the present disclosure fuses the obstacle camera images containing the same obstacle information, performs data filtering on the obstacle radar information, extracts features from the preprocessed obstacle camera image and obstacle radar information, and fuses the radar obstacle feature information with the camera obstacle feature information by determining the association degree information between them. Only the association degree information between the radar obstacle feature information and the camera obstacle feature information needs to be considered; the effect of conflicts between the two on the fused obstacle information does not. The accuracy of the fused obstacle information can thus be improved, and with it the accuracy of the vehicle obstacle avoidance function.
Step 105, sending the fused obstacle information set to a control terminal to control the target vehicle to avoid obstacles.
In some embodiments, the execution subject may send the fused obstacle information set to a control terminal to control the target vehicle to avoid the obstacle.
With further reference to fig. 2, as an implementation of the methods shown in the above figures, the present disclosure provides some embodiments of a vehicle obstacle avoidance apparatus, which correspond to those of the method embodiments shown in fig. 1, and which may be applied in various electronic devices in particular.
As shown in fig. 2, the vehicle obstacle avoidance device 200 of some embodiments includes: an acquisition unit 201, a matching unit 202, a preprocessing unit 203, a fusion unit 204, and a sending unit 205. The acquisition unit 201 is configured to acquire an initial obstacle camera image set and an initial obstacle radar point cloud information set; the matching unit 202 is configured to match the initial obstacle camera image set with the initial obstacle radar point cloud information set to obtain a matching obstacle information set, where each piece of matching obstacle information in the matching obstacle information set includes: a matching obstacle camera image set and matching obstacle radar point cloud information; the preprocessing unit 203 is configured to preprocess each piece of matching obstacle information in the matching obstacle information set to generate target matching obstacle information, obtaining a target matching obstacle information set, where each piece of target matching obstacle information in the target matching obstacle information set includes: a target obstacle camera image and radar obstacle information; the fusion unit 204 is configured to fuse the target obstacle camera image and the radar obstacle information included in each piece of target matching obstacle information in the target matching obstacle information set to generate fused obstacle information, obtaining a fused obstacle information set; the sending unit 205 is configured to send the fused obstacle information set to a control terminal to control the target vehicle to avoid obstacles.
It is to be understood that the units described in the vehicle obstacle avoidance apparatus 200 correspond to the respective steps in the vehicle obstacle avoidance method described with reference to fig. 1. Therefore, the operations, features and beneficial effects described above for the vehicle obstacle avoidance method are also applicable to the vehicle obstacle avoidance apparatus 200 and the units included therein, and are not described herein again.
Referring now to FIG. 3, a block diagram of an electronic device 300 suitable for use in implementing some embodiments of the present disclosure is shown. The electronic device in some embodiments of the present disclosure may include, but is not limited to, a mobile terminal such as a mobile phone, a notebook computer, a digital broadcast receiver, a PDA (personal digital assistant), a PAD (tablet computer), a PMP (portable multimedia player), a vehicle-mounted terminal (e.g., a car navigation terminal), and the like, and a stationary terminal such as a digital TV, a desktop computer, and the like. The terminal device shown in fig. 3 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present disclosure.
As shown in fig. 3, electronic device 300 may include a processing device (e.g., central processing unit, graphics processor, etc.) 301 that may perform various appropriate actions and processes in accordance with a program stored in a Read Only Memory (ROM) 302 or a program loaded from a storage device 308 into a Random Access Memory (RAM) 303. In the RAM 303, various programs and data necessary for the operation of the electronic apparatus 300 are also stored. The processing device 301, the ROM 302, and the RAM 303 are connected to each other via a bus 304. An input/output (I/O) interface 305 is also connected to bus 304.
Generally, the following devices may be connected to the I/O interface 305: input devices 306 including, for example, a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; an output device 307 including, for example, a Liquid Crystal Display (LCD), a speaker, a vibrator, and the like; storage devices 308 including, for example, magnetic tape, hard disk, etc.; and a communication device 309. The communication means 309 may allow the electronic device 300 to communicate with other devices, wireless or wired, to exchange data. While fig. 3 illustrates an electronic device 300 having various means, it is to be understood that not all illustrated means are required to be implemented or provided. More or fewer devices may alternatively be implemented or provided. Each block shown in fig. 3 may represent one device or may represent multiple devices, as desired.
In particular, according to some embodiments of the present disclosure, the processes described above with reference to the flow diagrams may be implemented as computer software programs. For example, some embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer-readable medium, the computer program comprising program code for performing the method illustrated by the flow chart. In some such embodiments, the computer program may be downloaded and installed from a network through the communication device 309, or installed from the storage device 308, or installed from the ROM 302. The computer program, when executed by the processing apparatus 301, performs the above-described functions defined in the methods of some embodiments of the present disclosure.
It should be noted that the computer readable medium described in some embodiments of the present disclosure may be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In some embodiments of the disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In some embodiments of the present disclosure, however, a computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, optical cables, RF (radio frequency), etc., or any suitable combination of the foregoing.
In some embodiments, the clients and servers may communicate using any currently known or future developed network protocol, such as HTTP (HyperText Transfer Protocol), and may interconnect with digital data communication in any form or medium (e.g., a communication network). Examples of communication networks include a local area network ("LAN"), a wide area network ("WAN"), internetworks (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any currently known or future developed network.
The computer readable medium may be embodied in the electronic device, or may exist separately without being assembled into the electronic device. The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: acquire an initial obstacle camera image set and an initial obstacle radar point cloud information set; match the initial obstacle camera image set and the initial obstacle radar point cloud information set to obtain a matching obstacle information set, wherein each piece of matching obstacle information in the matching obstacle information set includes: a matching obstacle camera image set and matching obstacle radar point cloud information; preprocess each piece of matching obstacle information in the matching obstacle information set to generate target matching obstacle information, obtaining a target matching obstacle information set; fuse the target obstacle camera image and the radar obstacle information included in each piece of target matching obstacle information in the target matching obstacle information set to generate fused obstacle information, obtaining a fused obstacle information set; and send the fused obstacle information set to a control terminal to control a target vehicle to avoid obstacles. Here, preprocessing each piece of matching obstacle information to generate target matching obstacle information includes: stitching the matched obstacle camera images in the matching obstacle camera image set included in the matching obstacle information to obtain a target obstacle camera image; performing data filtering processing on the matching obstacle radar point cloud information included in the matching obstacle information to obtain radar obstacle information; and fusing the target obstacle camera image and the radar obstacle information into the target matching obstacle information.
Computer program code for carrying out operations for embodiments of the present disclosure may be written in any combination of one or more programming languages, including object oriented programming languages such as Java, Smalltalk and C++, as well as conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems that perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in some embodiments of the present disclosure may be implemented by software or hardware. The described units may also be provided in a processor, and may be described as: a processor includes an acquisition unit, a matching unit, a preprocessing unit, a fusion unit, and a transmission unit. The names of these units do not in some cases constitute a limitation on the unit itself, and for example, the acquisition unit may also be described as a "unit that acquires an initial obstacle camera image set and an initial obstacle radar point cloud information set".
The functions described herein above may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), Systems on Chip (SOCs), Complex Programmable Logic Devices (CPLDs), and the like.
The foregoing description is merely an illustration of the preferred embodiments of the present disclosure and of the principles of the technology employed. Those skilled in the art will appreciate that the scope of the invention in the embodiments of the present disclosure is not limited to the specific combinations of the above technical features, and also encompasses other technical solutions formed by arbitrary combinations of the above technical features or their equivalents without departing from the inventive concept, for example, technical solutions formed by replacing the above features with (but not limited to) technical features having similar functions disclosed in the embodiments of the present disclosure.

Claims (6)

1. A vehicle obstacle avoidance method comprises the following steps:
acquiring an initial obstacle camera image set and an initial obstacle radar point cloud information set;
matching the initial obstacle camera image set and the initial obstacle radar point cloud information set to obtain a matching obstacle information set, wherein each piece of matching obstacle information in the matching obstacle information set comprises: a matching obstacle camera image set and matching obstacle radar point cloud information;
preprocessing each piece of matching obstacle information in the matching obstacle information set to generate target matching obstacle information to obtain a target matching obstacle information set, wherein each piece of target matching obstacle information in the target matching obstacle information set comprises: target obstacle camera images and radar obstacle information;
fusing target obstacle camera images and radar obstacle information included in each target matching obstacle information in the target matching obstacle information set to generate fused obstacle information to obtain a fused obstacle information set;
sending the fused obstacle information set to a control terminal to control a target vehicle to avoid obstacles;
wherein the preprocessing each matching obstacle information in the matching obstacle information set to generate target matching obstacle information comprises:
stitching each matching obstacle camera image in the matching obstacle camera image set included in the matching obstacle information to obtain a target obstacle camera image;
performing data filtering processing on the matching obstacle radar point cloud information included in the matching obstacle information to obtain radar obstacle information;
combining the target obstacle camera image and the radar obstacle information into the target matching obstacle information;
wherein the fusing the target obstacle camera image and the radar obstacle information included in each target matching obstacle information in the target matching obstacle information set to generate fused obstacle information includes:
performing feature extraction on the target obstacle camera image to obtain camera obstacle feature information, wherein the camera obstacle feature information includes: an obstacle shadow line information set;
performing feature extraction on the radar obstacle information to obtain radar obstacle feature information, wherein the radar obstacle feature information comprises: an obstacle contour line information set;
in response to determining that the radar obstacle feature information satisfies a first preset condition, generating obstacle relevancy information based on the obstacle shadow line information set and the obstacle contour line information set, wherein each piece of obstacle contour line information in the obstacle contour line information set comprises: coordinates of a left end point of an obstacle contour line and coordinates of a right end point of the obstacle contour line, and each piece of obstacle shadow line information in the obstacle shadow line information set comprises: coordinates of a left end point of an obstacle shadow line and coordinates of a right end point of the obstacle shadow line;
in response to determining that the obstacle relevancy information satisfies a second preset condition, performing feature fusion on the radar obstacle feature information and the camera obstacle feature information to obtain the fused obstacle information;
wherein the generating obstacle relevancy information based on the obstacle shadow line information set and the obstacle contour line information set comprises:
for each obstacle contour line information in the obstacle contour line information set, performing the following steps:
determining the obstacle shadow line information corresponding to the obstacle contour line information in the obstacle shadow line information set as target obstacle shadow line information;
determining a difference value between the abscissa of the left end point coordinates of the obstacle contour line included in the obstacle contour line information and the abscissa of the left end point coordinates of the target obstacle shadow line included in the target obstacle shadow line information as a first relevancy value;
determining a difference value between the abscissa of the right end point coordinates of the obstacle contour line included in the obstacle contour line information and the abscissa of the right end point coordinates of the target obstacle shadow line included in the target obstacle shadow line information as a second relevancy value;
generating an obstacle difference value, a shadow line difference value, and a contour line difference value based on the obstacle contour line information and the target obstacle shadow line information;
determining a ratio of the obstacle difference value to the shadow line difference value as a third relevancy value;
determining a ratio of the obstacle difference value to the contour line difference value as a fourth relevancy value;
combining the first relevancy value, the second relevancy value, the third relevancy value, and the fourth relevancy value into a relevancy value set;
combining the relevancy value set into the obstacle relevancy information;
wherein the generating an obstacle difference value, a shadow line difference value, and a contour line difference value based on the obstacle contour line information and the target obstacle shadow line information comprises:
determining a difference value between the abscissa of the left end point coordinates of the obstacle contour line included in the obstacle contour line information and the abscissa of the right end point coordinates of the target obstacle shadow line included in the target obstacle shadow line information as the obstacle difference value;
determining a difference value between the abscissa of the left end point coordinates of the obstacle contour line and the abscissa of the right end point coordinates of the obstacle contour line included in the obstacle contour line information as the contour line difference value;
determining a difference value between the abscissa of the right end point coordinates of the target obstacle shadow line and the abscissa of the left end point coordinates of the target obstacle shadow line included in the target obstacle shadow line information as the shadow line difference value;
wherein the first preset condition is that the radar obstacle feature information comprises obstacle vehicle information, and the second preset condition is that the first relevancy value included in the obstacle relevancy information is smaller than a first target value, the second relevancy value is smaller than a second target value, the third relevancy value is larger than a third target value, and the fourth relevancy value is larger than a fourth target value.
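To make the arithmetic of the relevancy values concrete, here is a small Python sketch of the computation spelled out in the claim above. The Line container and the threshold values in second_preset_condition are illustrative assumptions; only the difference and ratio formulas come from the claim text.

```python
from dataclasses import dataclass

@dataclass
class Line:
    left_x: float   # abscissa of the left end point
    right_x: float  # abscissa of the right end point

def relevancy_values(contour: Line, shadow: Line):
    # First/second relevancy values: left-end and right-end abscissa differences.
    first = contour.left_x - shadow.left_x
    second = contour.right_x - shadow.right_x
    # Obstacle difference: contour left end minus shadow right end.
    obstacle_diff = contour.left_x - shadow.right_x
    # Contour line difference: contour left end minus contour right end.
    contour_diff = contour.left_x - contour.right_x
    # Shadow line difference: shadow right end minus shadow left end.
    shadow_diff = shadow.right_x - shadow.left_x
    # Third/fourth relevancy values: ratios of the obstacle difference.
    third = obstacle_diff / shadow_diff
    fourth = obstacle_diff / contour_diff
    return first, second, third, fourth

def second_preset_condition(values, t1=5.0, t2=5.0, t3=-1.2, t4=0.5):
    # Threshold values are made up for illustration; the patent leaves them open.
    first, second, third, fourth = values
    return first < t1 and second < t2 and third > t3 and fourth > t4

contour = Line(left_x=102.0, right_x=180.0)
shadow = Line(left_x=100.0, right_x=178.0)
print(second_preset_condition(relevancy_values(contour, shadow)))  # True
```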
2. The method of claim 1, wherein said matching the initial obstacle camera image set and the initial obstacle radar point cloud information set to obtain a matched obstacle information set comprises:
acquiring a camera sampling time set and a radar sampling time set, wherein each camera sampling time in the camera sampling time set corresponds to at least one initial obstacle camera image in the initial obstacle camera image set, and each radar sampling time in the radar sampling time set corresponds to one piece of initial obstacle radar point cloud information in the initial obstacle radar point cloud information set;
in response to determining that a camera sampling time in the camera sampling time set is equal to a radar sampling time in the radar sampling time set, determining the at least one initial obstacle camera image corresponding to each such camera sampling time as a matching obstacle camera image set to obtain a matching obstacle camera image set group;
in response to determining that a camera sampling time in the camera sampling time set is equal to a radar sampling time in the radar sampling time set, determining the initial obstacle radar point cloud information corresponding to each such radar sampling time as matching obstacle radar point cloud information to obtain a matching obstacle radar point cloud information set;
and combining each matching obstacle camera image set in the matching obstacle camera image set group with the corresponding matching obstacle radar point cloud information in the matching obstacle radar point cloud information set into matching obstacle information to obtain the matching obstacle information set.
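A minimal sketch of this sampling-time matching, assuming timestamps are exact and hashable (the claim requires equality, not nearest-neighbour association); the (time, data) pair format of the inputs is an assumption for illustration.

```python
from collections import defaultdict

def match_by_sampling_time(camera_frames, radar_sweeps):
    # camera_frames: iterable of (sampling_time, image); several images may share a time.
    # radar_sweeps: iterable of (sampling_time, point_cloud); one cloud per time.
    images_at = defaultdict(list)
    for t, image in camera_frames:
        images_at[t].append(image)
    matched = []
    for t, cloud in radar_sweeps:
        if t in images_at:  # camera sampling time equal to radar sampling time
            matched.append({"camera_images": images_at[t], "radar_points": cloud})
    return matched

frames = [(0.1, "front@0.1"), (0.1, "left@0.1"), (0.3, "front@0.3")]
sweeps = [(0.1, [[1.0, 2.0, 0.1]]), (0.2, [[5.0, 1.0, 0.2]])]
print(match_by_sampling_time(frames, sweeps))  # only the 0.1 s pair matches
```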
3. The method according to claim 1, wherein the stitching each matching obstacle camera image in the matching obstacle camera image set included in the matching obstacle information to obtain a target obstacle camera image comprises:
performing lane line recognition processing on each matching obstacle camera image in the matching obstacle camera image set to generate lane line characteristic information to obtain a lane line characteristic information set;
determining registration relation information among the matching obstacle camera images in the matching obstacle camera image set based on the lane line characteristic information set;
performing, based on the registration relation information, image transformation processing on each matching obstacle camera image in the matching obstacle camera image set to generate a transformed obstacle camera image to obtain a transformed obstacle camera image set;
and fusing the transformed obstacle camera images in the transformed obstacle camera image set to obtain the target obstacle camera image.
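One plausible (but by no means the only) way to realize this registration-then-transform pipeline is a lane-line-anchored homography, sketched below with OpenCV. The assumption that corresponding lane-line points have already been extracted from both images, and the 50/50 blend at the end, are illustrative simplifications.

```python
import cv2
import numpy as np

def stitch_by_lane_lines(img_a, img_b, lane_pts_a, lane_pts_b):
    # lane_pts_a / lane_pts_b: Nx2 arrays (N >= 4) of corresponding lane-line
    # points in the two images -- the "registration relation information" here.
    H, _ = cv2.findHomography(np.float32(lane_pts_b), np.float32(lane_pts_a),
                              cv2.RANSAC)
    h, w = img_a.shape[:2]
    warped_b = cv2.warpPerspective(img_b, H, (w, h))  # image transformation step
    if warped_b.ndim == 3:
        mask = warped_b.any(axis=2)   # pixels where the warped image has content
    else:
        mask = warped_b > 0
    out = img_a.copy()
    # Simple average fusion in the overlap region.
    out[mask] = (0.5 * img_a[mask] + 0.5 * warped_b[mask]).astype(img_a.dtype)
    return out
```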
4. The method of claim 1, wherein the camera obstacle feature information further comprises: a camera obstacle feature coordinate set, and the radar obstacle feature information further comprises: a radar obstacle feature coordinate set; and
the performing feature fusion on the radar obstacle feature information and the camera obstacle feature information to obtain the fused obstacle information comprises:
performing coordinate conversion on each camera obstacle feature coordinate in the camera obstacle feature coordinate set to generate a target camera obstacle feature coordinate to obtain a target camera obstacle feature coordinate set;
performing coordinate conversion on each radar obstacle feature coordinate in the radar obstacle feature coordinate set to generate a target radar obstacle feature coordinate to obtain a target radar obstacle feature coordinate set;
and fusing the target camera obstacle feature coordinate set and the target radar obstacle feature coordinate set to obtain the fused obstacle information.
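A compact sketch of the coordinate-conversion step under one common assumption: both feature coordinate sets are rewritten into a shared vehicle frame via 4x4 homogeneous extrinsic matrices, after which "fusion" is shown as simple concatenation. The identity extrinsics and sample points are placeholders, not calibration values from the patent.

```python
import numpy as np

def to_vehicle_frame(points, extrinsic):
    # points: Nx3 coordinates in the sensor frame; extrinsic: 4x4 sensor-to-vehicle.
    homogeneous = np.hstack([points, np.ones((points.shape[0], 1))])
    return (homogeneous @ extrinsic.T)[:, :3]

camera_extrinsic = np.eye(4)  # placeholder camera-to-vehicle calibration
radar_extrinsic = np.eye(4)   # placeholder radar-to-vehicle calibration
camera_feats = np.array([[1.0, 2.0, 0.0]])
radar_feats = np.array([[1.1, 2.1, 0.0]])

fused_coords = np.vstack([
    to_vehicle_frame(camera_feats, camera_extrinsic),  # target camera obstacle feature coordinates
    to_vehicle_frame(radar_feats, radar_extrinsic),    # target radar obstacle feature coordinates
])
print(fused_coords)
```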
5. A vehicle obstacle avoidance device comprising:
an acquisition unit configured to acquire an initial obstacle camera image set and an initial obstacle radar point cloud information set;
a matching unit configured to match the initial obstacle camera image set and the initial obstacle radar point cloud information set to obtain a matching obstacle information set, wherein each piece of matching obstacle information in the matching obstacle information set comprises: a matching obstacle camera image set and matching obstacle radar point cloud information;
a preprocessing unit configured to preprocess each matching obstacle information in the matching obstacle information set to generate target matching obstacle information, resulting in a target matching obstacle information set, wherein each target matching obstacle information in the target matching obstacle information set comprises: target obstacle camera images and radar obstacle information;
the fusion unit is configured to fuse target obstacle camera images and radar obstacle information included in each piece of target matching obstacle information in the target matching obstacle information set to generate fusion obstacle information, and obtain a fusion obstacle information set;
a transmitting unit configured to transmit the fusion obstacle information set to a control terminal to control a target vehicle to avoid an obstacle;
wherein the preprocessing each matching obstacle information in the matching obstacle information set to generate target matching obstacle information comprises:
stitching each matching obstacle camera image in the matching obstacle camera image set included in the matching obstacle information to obtain a target obstacle camera image;
performing data filtering processing on the matching obstacle radar point cloud information included in the matching obstacle information to obtain radar obstacle information;
combining the target obstacle camera image and the radar obstacle information into the target matching obstacle information;
wherein the fusing the target obstacle camera image and the radar obstacle information included in each piece of target matching obstacle information in the target matching obstacle information set to generate fused obstacle information comprises:
performing feature extraction on the target obstacle camera image to obtain camera obstacle feature information, wherein the camera obstacle feature information includes: an obstacle shadow line information set;
performing feature extraction on the radar obstacle information to obtain radar obstacle feature information, wherein the radar obstacle feature information comprises: an obstacle contour line information set;
in response to determining that the radar obstacle feature information satisfies a first preset condition, generating obstacle relevancy information based on the obstacle shadow line information set and the obstacle contour line information set, wherein each piece of obstacle contour line information in the obstacle contour line information set comprises: coordinates of a left end point of an obstacle contour line and coordinates of a right end point of the obstacle contour line, and each piece of obstacle shadow line information in the obstacle shadow line information set comprises: coordinates of a left end point of an obstacle shadow line and coordinates of a right end point of the obstacle shadow line;
in response to determining that the obstacle relevancy information satisfies a second preset condition, performing feature fusion on the radar obstacle feature information and the camera obstacle feature information to obtain the fused obstacle information;
wherein the generating obstacle relevancy information based on the obstacle shadow line information set and the obstacle contour line information set comprises:
for each obstacle contour line information in the obstacle contour line information set, performing the following steps:
determining the obstacle shadow line information corresponding to the obstacle contour line information in the obstacle shadow line information set as target obstacle shadow line information;
determining a difference value between the abscissa of the left end point coordinates of the obstacle contour line included in the obstacle contour line information and the abscissa of the left end point coordinates of the target obstacle shadow line included in the target obstacle shadow line information as a first relevancy value;
determining a difference value between the abscissa of the right end point coordinates of the obstacle contour line included in the obstacle contour line information and the abscissa of the right end point coordinates of the target obstacle shadow line included in the target obstacle shadow line information as a second relevancy value;
generating an obstacle difference value, a shadow line difference value, and a contour line difference value based on the obstacle contour line information and the target obstacle shadow line information;
determining a ratio of the obstacle difference value to the shadow line difference value as a third relevancy value;
determining a ratio of the obstacle difference value to the contour line difference value as a fourth relevancy value;
combining the first relevancy value, the second relevancy value, the third relevancy value, and the fourth relevancy value into a relevancy value set;
combining the relevancy value set into the obstacle relevancy information;
wherein the generating an obstacle difference value, a shadow line difference value, and a contour line difference value based on the obstacle contour line information and the target obstacle shadow line information comprises:
determining a difference value between the abscissa of the left end point coordinates of the obstacle contour line included in the obstacle contour line information and the abscissa of the right end point coordinates of the target obstacle shadow line included in the target obstacle shadow line information as the obstacle difference value;
determining a difference value between the abscissa of the left end point coordinates of the obstacle contour line and the abscissa of the right end point coordinates of the obstacle contour line included in the obstacle contour line information as the contour line difference value;
determining a difference value between the abscissa of the right end point coordinates of the target obstacle shadow line and the abscissa of the left end point coordinates of the target obstacle shadow line included in the target obstacle shadow line information as the shadow line difference value;
wherein the first preset condition is that the radar obstacle feature information comprises obstacle vehicle information, and the second preset condition is that the first relevancy value included in the obstacle relevancy information is smaller than a first target value, the second relevancy value is smaller than a second target value, the third relevancy value is larger than a third target value, and the fourth relevancy value is larger than a fourth target value.
6. An electronic device, comprising:
one or more processors;
a storage device having one or more programs stored thereon;
the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method recited in any of claims 1-4.
CN202211535458.8A 2022-12-02 2022-12-02 Vehicle obstacle avoidance method and device, electronic equipment and computer readable medium Active CN115616560B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211535458.8A CN115616560B (en) 2022-12-02 2022-12-02 Vehicle obstacle avoidance method and device, electronic equipment and computer readable medium


Publications (2)

Publication Number Publication Date
CN115616560A CN115616560A (en) 2023-01-17
CN115616560B (en) 2023-04-14

Family

ID=84880241

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211535458.8A Active CN115616560B (en) 2022-12-02 2022-12-02 Vehicle obstacle avoidance method and device, electronic equipment and computer readable medium

Country Status (1)

Country Link
CN (1) CN115616560B (en)



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant