CN117746350A - Method, device and storage medium for detecting heading of target vehicle

Publication number: CN117746350A
Application number: CN202311780781.6A
Authority: CN (China)
Applicant and current assignee: Uisee Shanghai Automotive Technologies Ltd
Inventors: 冯彪 (Feng Biao), 王子涵 (Wang Zihan), 樊志远 (Fan Zhiyuan), 刘洋 (Liu Yang)
Original language: Chinese (zh)
Legal status: Pending

Abstract

Embodiments of the present disclosure disclose a target vehicle heading detection method, device, and storage medium. From the environmental images acquired by each on-board surround-view camera of the ego vehicle, the method determines a detection frame of a first wheel, a detection frame of a second wheel, and a segmentation frame of a ground area corresponding to each target vehicle; for each target vehicle it then determines a first ground point between the first wheel and the ground and a second ground point between the second wheel and the ground; and finally it determines the heading of the target vehicle from the line connecting the first ground point and the second ground point, thereby realizing heading detection based on images acquired in real time.

Description

Method, device and storage medium for detecting heading of target vehicle
Technical Field
The disclosure relates to the technical field of automatic driving, and in particular to a method, a device, and a storage medium for detecting the heading of a target vehicle.
Background
In recent years, with the rapid development of intelligent driving in the automotive industry, perception of the environment surrounding a vehicle on the road has become a key component of unmanned-driving research, and the heading of a target vehicle (that is, another vehicle) is one of the main parameters for localizing the target vehicle and estimating its motion state.
Current research on vehicle heading prediction falls mainly into tracking-based methods, deep-learning segmentation-based methods, and 3D object detection methods. Tracking-based methods predict the heading from the trajectory of the target vehicle across multiple image frames; because they depend heavily on multi-frame results, their real-time performance is poor. Some researchers use a deep-learning network to segment the longitudinal chassis line of the vehicle; this method depends heavily on segmentation accuracy, the chassis line is not located at the ground contact, and the strong distortion of fisheye cameras has a non-negligible influence. Other researchers predict the heading angle directly with a 3D object detection network, but that approach suffers from complex annotation and a large computational load, and is ill-suited to embedded platforms with constrained computing power.
Current vehicle heading detection therefore suffers from low accuracy, a large computational load, and poor real-time performance.
Disclosure of Invention
In order to solve the above technical problems, or at least partially solve them, embodiments of the present disclosure provide a method, an apparatus, and a storage medium for detecting the heading of a target vehicle, which address the low accuracy, large computational load, and poor real-time performance of heading detection in the prior art.
In a first aspect, an embodiment of the present disclosure provides a target vehicle heading detection method, including:
determining a detection frame of a first wheel, a detection frame of a second wheel, and a segmentation frame of a ground area corresponding to each target vehicle based on environmental images acquired by each on-board surround-view camera of the ego vehicle, wherein the first wheel and the second wheel are located on the same side of the body of the target vehicle, and the ground area is the unoccluded area of the ground on which the ego vehicle can drive;
determining, for each target vehicle, a first ground point between the first wheel and the ground and a second ground point between the second wheel and the ground, based on the corresponding segmentation frame of the ground area, the detection frame of the first wheel, and the detection frame of the second wheel;
and determining the heading of the target vehicle according to the line connecting the first ground point and the second ground point.
In a second aspect, an embodiment of the present disclosure further provides a target vehicle heading detection apparatus, including:
a detection frame determining module, configured to determine a detection frame of a first wheel, a detection frame of a second wheel, and a segmentation frame of a ground area corresponding to each target vehicle based on environmental images acquired by each on-board surround-view camera of the ego vehicle, wherein the first wheel and the second wheel are located on the same side of the body of the target vehicle, and the ground area is the unoccluded area of the ground on which the ego vehicle can drive;
a ground point determining module, configured to determine, for each target vehicle, a first ground point between the first wheel and the ground and a second ground point between the second wheel and the ground, based on the corresponding segmentation frame of the ground area, the detection frame of the first wheel, and the detection frame of the second wheel;
and a heading determining module, configured to determine the heading of the target vehicle according to the line connecting the first ground point and the second ground point.
In a third aspect, embodiments of the present disclosure further provide an electronic device, including: one or more processors; a storage means for storing one or more programs; the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the target vehicle heading detection method as described above.
In a fourth aspect, the disclosed embodiments also provide a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements a target vehicle heading detection method as described above.
According to the target vehicle heading detection method provided by the embodiments of the present disclosure, the detection frame of the first wheel, the detection frame of the second wheel, and the segmentation frame of the ground area corresponding to each target vehicle are determined from the environmental images acquired by each on-board surround-view camera of the ego vehicle; then, for each target vehicle, the first ground point between the first wheel and the ground and the second ground point between the second wheel and the ground are determined from the corresponding segmentation frame of the ground area, the detection frame of the first wheel, and the detection frame of the second wheel; finally, the heading of the target vehicle is determined from the line connecting the first ground point and the second ground point.
Drawings
The above and other features, advantages, and aspects of embodiments of the present disclosure will become more apparent by reference to the following detailed description when taken in conjunction with the accompanying drawings. The same or similar reference numbers will be used throughout the drawings to refer to the same or like elements. It should be understood that the figures are schematic and that elements and components are not necessarily drawn to scale.
FIG. 1 is a flow chart of a target vehicle heading detection method in an embodiment of the present disclosure;
FIG. 2 is a schematic diagram of a target detection result in an embodiment of the disclosure;
FIG. 3 is a schematic diagram of an untrusted range in an embodiment of the present disclosure;
FIG. 4 is a schematic diagram of a ground point in an embodiment of the present disclosure;
FIG. 5 is a schematic diagram of another ground point in an embodiment of the present disclosure;
FIG. 6 is a schematic structural diagram of a target vehicle heading detection device in an embodiment of the disclosure;
fig. 7 is a schematic structural diagram of an electronic device in an embodiment of the disclosure.
Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure are shown in the accompanying drawings, it is to be understood that the present disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided for a more thorough and complete understanding of the present disclosure. It should be understood that the drawings and embodiments of the present disclosure are for illustration purposes only and are not intended to limit the scope of the present disclosure.
It should be noted that the terms "first," "second," and the like in this disclosure are merely used to distinguish between different devices, modules, or units and are not used to define an order or interdependence of functions performed by the devices, modules, or units.
The names of messages or information interacted between the various devices in the embodiments of the present disclosure are for illustrative purposes only and are not intended to limit the scope of such messages or information.
Fig. 1 is a flowchart of a target vehicle heading detection method in an embodiment of the present disclosure. The method may be performed by a target vehicle heading detection device, which may be implemented in software and/or hardware and configured in an electronic device. As shown in fig. 1, the method may specifically include the following steps:
s110, determining a detection frame of a first wheel, a detection frame of a second wheel and a segmentation frame of a ground area corresponding to each target vehicle based on environmental images acquired by all-vehicle looking-around cameras of the vehicle.
The on-board surround-view cameras may be on-board fisheye surround-view cameras, and their number may be four. In the embodiment of the present disclosure, the four surround-view cameras may be mounted on the front bumper, under the left rearview mirror, under the right rearview mirror, and on the rear bumper of the ego vehicle, respectively. The camera intrinsic and extrinsic parameters of each surround-view camera are calibrated in advance.
Specifically, each surround-view camera of the ego vehicle acquires environmental images while the ego vehicle is driving. Based on the environmental images acquired by the surround-view cameras, the detection frame of the first wheel, the detection frame of the second wheel, and the segmentation frame of the ground area corresponding to each target vehicle can be determined.
A target vehicle may be any other vehicle located within the acquisition range of the ego vehicle's surround-view cameras. The first wheel and the second wheel are located on the same side of the target vehicle's body, and the ground area is the unoccluded area of the ground on which the ego vehicle can drive.
In a specific embodiment, determining the detection frame of the first wheel, the detection frame of the second wheel, and the segmentation frame of the ground area corresponding to each target vehicle based on the environmental images acquired by each surround-view camera includes the following steps:
step 111, for the environmental image acquired by each surround-view camera, inputting the current environmental image into a pre-trained detection model to obtain the detection frames of target vehicles, the detection frames of wheels, and the segmentation frame of the ground area in the current environmental image;
step 112, for each target vehicle in the current environmental image, determining the detection frames of all wheels located within the detection frame of that target vehicle as detection frames of wheels to be verified;
step 113, judging whether each detection frame of a wheel to be verified is the detection frame of a wheel corresponding to the target vehicle, based on the distance between the wheel's detection frame and the upper edge of the target vehicle's detection frame and the distance between the wheel's detection frame and the lower edge of the target vehicle's detection frame;
and step 114, obtaining the detection frame of the first wheel and the detection frame of the second wheel corresponding to the target vehicle when the number of wheels corresponding to the target vehicle in the current environmental image is two.
The pre-trained detection model may be a deep learning network model, such as a CNN (convolutional neural network) model. The detection model may comprise one encoder and two task-specific decoders, the two tasks being target detection and ground-area segmentation respectively. There is no complex, redundant shared module between the two decoders, which greatly reduces the computational load of the detection model and makes it easy to train end to end. Moreover, the two decoders share the feature-extraction network (that is, the encoder), which improves the model's inference speed.
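The shared-encoder, dual-decoder layout described above can be sketched as follows. This is a minimal PyTorch sketch under stated assumptions: the layer sizes, the class list, and the head designs are illustrative and not the patent's actual network.

```python
import torch.nn as nn

class HeadingPerceptionNet(nn.Module):
    def __init__(self, num_classes=4):   # e.g. car, bus, truck, wheel (assumed)
        super().__init__()
        # Shared feature-extraction encoder (backbone), run once per image.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        # Decoder 1: target detection (per-location box offsets + class scores).
        self.det_head = nn.Conv2d(64, 4 + num_classes, 1)
        # Decoder 2: binary ground-area segmentation.
        self.seg_head = nn.Sequential(
            nn.Conv2d(64, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 1),
        )

    def forward(self, image):             # image: (N, 3, H, W)
        feats = self.encoder(image)       # features computed once, shared by both heads
        return self.det_head(feats), self.seg_head(feats)
```

Because the encoder output is computed once and consumed by both heads, a forward pass costs little more than a single-task network, which is the inference-speed benefit noted above.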
Image data may be collected and annotated in advance to form a training set, on which the deep learning network model is then trained so that it outputs target detection results and a ground-area segmentation result; the target detection results mainly comprise detection frames of objects such as cars, buses, trucks, and wheels, and the ground-area segmentation result is the segmentation frame of the ground area.
Specifically, in step 111, the environmental image acquired by each surround-view camera may be input into the detection model. For the currently input environmental image, that is, the current environmental image, the detection model encodes the image with the encoder, processes the encoded result with the target-detection decoder to output the target detection results, namely the detection frames of target vehicles and the detection frames of wheels, and processes the encoded result with the ground-area-segmentation decoder to output the segmentation result, namely the segmentation frame of the ground area. Before step 111, to ensure the processing accuracy of the model, the environmental image acquired by each surround-view camera may be preprocessed, for example to remove noise.
Further, in steps 112-113, for each target vehicle in the current environmental image, the detection frames of all wheels located within its detection frame may be determined as detection frames of wheels to be verified, so as to judge, from the distance between the wheel's detection frame and the upper edge of the target vehicle's detection frame and the distance between the wheel's detection frame and the lower edge of the target vehicle's detection frame, whether each wheel to be verified is a wheel of the target vehicle, that is, whether its detection frame is a detection frame of a wheel corresponding to the target vehicle.
Here, the detection frames of wheels that lie entirely within, or intersect, the detection frame of the target vehicle may be determined as detection frames of wheels to be verified. If the distance between the detection frame of a wheel to be verified and the upper edge of the target vehicle's detection frame is greater than a preset first threshold, and the distance between the detection frame of the wheel to be verified and the lower edge of the target vehicle's detection frame is less than a preset second threshold, the wheel to be verified can be determined to be a wheel of the target vehicle, and its detection frame is then a detection frame of a wheel corresponding to the target vehicle, as illustrated in the sketch below.
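A minimal sketch of the edge-distance check in step 113, assuming boxes in (x1, y1, x2, y2) image coordinates with y increasing downward; the box convention, helper name, and threshold values are assumptions for illustration.

```python
def wheel_belongs_to_vehicle(wheel_box, vehicle_box,
                             first_threshold=20.0, second_threshold=40.0):
    _, wy1, _, wy2 = wheel_box
    _, vy1, _, vy2 = vehicle_box
    dist_to_upper = wy1 - vy1   # wheel box top to vehicle box's upper edge
    dist_to_lower = vy2 - wy2   # wheel box bottom to vehicle box's lower edge
    # A real wheel sits near the bottom of its vehicle's detection frame:
    # far from the upper edge, close to the lower edge.
    return dist_to_upper > first_threshold and dist_to_lower < second_threshold
```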
Fig. 2 is a schematic diagram of a target detection result in an embodiment of the disclosure. Fig. 2 shows the target detection results in an environmental image acquired by the left (rearview-mirror) surround-view camera, including the detection frame of each target vehicle and the detection frame of each wheel. The detection frame of the wheel inside the upper-right target vehicle is close to the upper edge and far from the lower edge of the middle target vehicle's detection frame, so that wheel does not belong to the middle target vehicle.
Further, after step 113 is performed, the detection frames of the wheels belonging to each target vehicle have been determined in the current environmental image. Adjacent surround-view cameras of the ego vehicle have overlapping detection ranges, for example the front-bumper camera and the camera under the left rearview mirror, or the front-bumper camera and the camera under the right rearview mirror. Detection frames of the same wheel may therefore exist in different environmental images. In this case, to ensure the accuracy of the detection frames of the wheels corresponding to the target vehicle, the detection frames of the same wheel in different environmental images may be fused, or untrusted detection frames may be discarded.
Optionally, after judging whether the detection frame of the wheel to be verified is the detection frame of a wheel corresponding to the target vehicle, the method further includes:
judging whether a detection frame of the target vehicle exists in another environmental image, where the other environmental image is an environmental image acquired by an adjacent surround-view camera; if so, judging whether the wheel corresponding to the target vehicle in the current environmental image and the wheel corresponding to the target vehicle in the other environmental image are the same wheel;
if so, removing the detection frame of the same wheel from the current environmental image when that detection frame lies within the untrusted range of the current environmental image, and removing the detection frame of the same wheel from the other environmental image when that detection frame lies within the untrusted range of the other environmental image.
Specifically, for the detection frame of a wheel corresponding to the target vehicle in the current environmental image, whether a detection frame of the same target vehicle exists can be judged in the environmental image acquired by the adjacent surround-view camera. For example, if the distance between the lower edge of the target vehicle's detection frame in the current environmental image and the lower edge of a vehicle's detection frame in the other environmental image is less than a preset first threshold, it may be determined that the target vehicle's detection frame also exists in the other environmental image.
Further, it can then be judged whether the wheel corresponding to the target vehicle in the other environmental image and the wheel corresponding to the target vehicle in the current environmental image are the same wheel. For example, if the distance between the lower edge of the wheel's detection frame in the current environmental image and the lower edge of the wheel's detection frame in the other environmental image is less than a preset second threshold, it may be determined that the wheels corresponding to the target vehicle in the two environmental images are the same wheel, as in the sketch below.
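A minimal sketch of the two cross-camera matching tests above, assuming the detection frames have been projected into a common coordinate frame and that the lower-edge distance is measured between lower-edge midpoints; the helper names and threshold values are assumptions.

```python
def lower_edge_distance(box_a, box_b):
    # Boxes are (x1, y1, x2, y2); compare the midpoints of the two lower edges.
    ax = (box_a[0] + box_a[2]) / 2.0
    bx = (box_b[0] + box_b[2]) / 2.0
    return ((ax - bx) ** 2 + (box_a[3] - box_b[3]) ** 2) ** 0.5

def same_target_vehicle(vehicle_box_a, vehicle_box_b, first_threshold=1.0):
    return lower_edge_distance(vehicle_box_a, vehicle_box_b) < first_threshold

def same_wheel(wheel_box_a, wheel_box_b, second_threshold=0.5):
    return lower_edge_distance(wheel_box_a, wheel_box_b) < second_threshold
```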
When the same wheel exists in both the other environmental image and the current environmental image, note that a fisheye surround-view camera detects poorly near the 180° edges of its field of view. Corresponding untrusted ranges can therefore be set within the overlap areas for each surround-view camera, and the detection frames of wheels falling within an untrusted range can be removed.
Fig. 3 is a schematic diagram of an untrusted range in an embodiment of the disclosure. Take the left surround-view camera (mounted at the left rearview mirror) and the front surround-view camera (mounted at the front bumper) as examples; each has a 180° detection range. In the three-by-three grid of fig. 3, the detection range of the left camera is the leftmost column and that of the front camera is the topmost row. The untrusted range of the left camera may be a detection range spanning a preset angle from the left camera's detection boundary, and the untrusted range of the front camera may be a detection range spanning a preset angle from the front camera's detection boundary. The trusted range is the remainder of the overlap area outside all untrusted ranges, the overlap area being the region that two adjacent surround-view cameras can both detect.
Take as an example an untrusted range spanning the 10° of detection range adjacent to each detection boundary. The detection angle of the front surround-view camera is 0-180°, that of the left surround-view camera is 90-270°, that of the rear surround-view camera is 180-360°, and that of the right surround-view camera is 270-90°. The untrusted ranges are then 0-10° and 170-180° for the front camera, 90-100° and 260-270° for the left camera, 180-190° and 350-360° for the rear camera, and 270-280° and 80-90° for the right camera.
Of course, besides the example above in which the untrusted ranges span 10°, the untrusted ranges may span other angles, such as 5°; the embodiments of the disclosure are not limited in this respect.
Specifically, if the detection frame of the same wheel in the current environmental image lies within the untrusted range of the current environmental image, the accuracy of that detection frame is poor and it can be removed; if the detection frame of the same wheel in the other environmental image lies within the untrusted range of the other environmental image, the accuracy of that detection frame is poor and it can be removed. A sketch of this test follows.
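A minimal sketch of the untrusted-range test, using the example detection angles and the 10° margin above. The camera table, the unwrapping of the right camera's span past 360°, and the assumption that a detection's bearing lies within the observing camera's span are all illustrative.

```python
CAMERA_FOV = {              # (start, end) detection angles from the example above
    "front": (0.0, 180.0),
    "left":  (90.0, 270.0),
    "rear":  (180.0, 360.0),
    "right": (270.0, 450.0),  # 270-90 deg, unwrapped past 360 for simplicity
}

def in_untrusted_range(bearing_deg, camera, margin=10.0):
    """True if the detection's bearing lies within `margin` degrees of either
    detection boundary of the observing camera."""
    start, end = CAMERA_FOV[camera]
    b = bearing_deg % 360.0
    if b < start:             # unwrap bearings that belong to a span past 360 deg
        b += 360.0
    return (b - start) < margin or (end - b) < margin
```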
Besides the case above, the detection frames of the same wheel may both lie within the trusted range (that is, within the trusted range of the current environmental image and of the other environmental image). In this case, the detection frames of the same wheel in the two environmental images may be fused to obtain a new detection frame. Optionally, the method provided by the embodiments of the present disclosure further includes:
if the wheel corresponding to the target vehicle in the current environmental image and the wheel corresponding to the target vehicle in the other environmental image are the same wheel, updating the detection frame of the same wheel based on its detection frames in the current environmental image and the other environmental image, when the detection frames of the same wheel lie within the trusted range of the current environmental image and of the other environmental image.
The trusted range is the part of the overlap area of the current environmental image and the other environmental image outside all untrusted ranges, the overlap area being the region that two adjacent surround-view cameras can both detect. Following the example above, the trusted range of the front and left surround-view cameras may be 100-170°.
If the detection frames of the same wheel lie within the trusted range of the current environmental image and the other environmental image, new coordinates can be computed from the coordinates of the two detection frames to obtain the new detection frame of the same wheel.
In one example, updating the detection frame of the same wheel based on its detection frames in the current environmental image and the other environmental image includes:
determining a fusion coefficient based on the positions, in a virtual coordinate system, of the detection frames of the same wheel in the current environmental image and the other environmental image; fusing those positions based on the fusion coefficient to obtain the fused position of the same wheel's detection frame in the virtual coordinate system; and updating the detection frames of the same wheel in the current environmental image and the other environmental image based on the fused position.
The fusion coefficient w can be calculated from (x1, y1) and (x2, y2), the coordinates of the detection frames of the same wheel in the current environmental image and the other environmental image, by a formula given as an image in the original publication and not reproduced here. Further, the fused position of the same wheel's detection frame in the virtual coordinate system can be calculated from the fusion coefficient, as in the following formula:
OC=OA+w(OA-OB);
wherein O is the origin of the virtual coordinate system, whose x-axis and y-axis are parallel to those of the BEV (bird's-eye-view) image coordinate system; points A and B are the positions, in the virtual coordinate system, of the detection frames of the same wheel in the current environmental image and the other environmental image; and point C is the fused position of the same wheel's detection frame in the virtual coordinate system. By the formula above, OC can be computed from OA and OB, giving the position of point C.
Furthermore, the detection frame of the same wheel in the current environmental image can be removed and the position of its detection frame in the other environmental image updated to the computed fused position; alternatively, the detection frame of the same wheel in the other environmental image can be removed and the position of its detection frame in the current environmental image updated to the computed fused position. A sketch of the fusion step follows.
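A minimal sketch of the fusion step, directly implementing OC = OA + w(OA - OB) from the passage above. The patent derives the fusion coefficient w from (x1, y1) and (x2, y2) by a formula not reproduced in this text, so w is taken here as an input; the example value of w is arbitrary.

```python
import numpy as np

def fuse_wheel_position(oa, ob, w):
    """Positions OA and OB of the same wheel's detection frame (relative to the
    virtual origin O) are fused into OC = OA + w * (OA - OB)."""
    oa, ob = np.asarray(oa, dtype=float), np.asarray(ob, dtype=float)
    return oa + w * (oa - ob)

# Example: with w = -0.5 the fused position is the midpoint of A and B.
print(fuse_wheel_position((4.0, 2.0), (4.4, 2.2), -0.5))   # [4.2 2.1]
```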
Through this embodiment, targets detected by different surround-view cameras can be fused, compensating for the ego vehicle's perception blind zones in the surrounding area, improving perception accuracy, ensuring the positional accuracy of the wheel detection frames, and thereby improving the accuracy of the target vehicle's heading.
Besides the case in which the same wheel of the same vehicle appears in two environmental images, different wheels of the same vehicle may also appear in two environmental images. In that case, the detection frames of the different wheels in the two environmental images can be combined, so that the heading can be determined from two wheel detection frames. Optionally, after judging whether the detection frame of the wheel to be verified is the detection frame of a wheel corresponding to the target vehicle, the method further includes:
when the number of wheels corresponding to the target vehicle in the current environmental image is one, judging whether a detection frame of the target vehicle exists in another environmental image; if so, judging whether the wheel corresponding to the target vehicle in the current environmental image and the wheel corresponding to the target vehicle in the other environmental image are the same wheel;
if not, determining the detection frame of the first wheel and the detection frame of the second wheel corresponding to the target vehicle from the detection frame of the wheel corresponding to the target vehicle in the current environmental image and the detection frame of the wheel corresponding to the target vehicle in the other environmental image.
Specifically, the subsequent heading determination requires two wheel detection frames for the target vehicle. To avoid, as far as possible, the heading being undeterminable because a fisheye surround-view camera detected only one wheel, if only one wheel corresponds to the target vehicle in the current environmental image, whether a detection frame of the target vehicle exists in another environmental image can be judged; if so, it is further judged whether the wheels corresponding to the target vehicle in the current environmental image and the other environmental image are consistent, that is, whether they are the same wheel.
Further, if the wheels corresponding to the target vehicle in the two environmental images are not the same, the detection frame of the first wheel and the detection frame of the second wheel corresponding to the target vehicle can be obtained from the wheels corresponding to the target vehicle in the two environmental images.
This implementation avoids, as far as possible, the heading being undeterminable because the fisheye surround-view camera detected only one wheel, further ensuring the reliability of heading detection.
S120, for each target vehicle, determining a first ground point between the first wheel and the ground and a second ground point between the second wheel and the ground, based on the corresponding segmentation frame of the ground area, the detection frame of the first wheel, and the detection frame of the second wheel.
Specifically, for each target vehicle, the first ground point between the first wheel and the ground and the second ground point between the second wheel and the ground may be found from the lower edge of the first wheel's detection frame, the lower edge of the second wheel's detection frame, and the segmentation frame of the ground area corresponding to the target vehicle.
In a specific embodiment, determining the first ground point between the first wheel and the ground and the second ground point between the second wheel and the ground based on the corresponding segmentation frame of the ground area, the detection frame of the first wheel, and the detection frame of the second wheel includes the following steps:
step 121, determining a first search point based on the center point of the lower edge of the first wheel's detection frame, determining a second search point based on the center point of the lower edge of the second wheel's detection frame, and determining the search line segment formed by the first search point and the second search point;
step 122, determining a plurality of first scan points on the search line segment starting from the first search point, and determining a plurality of second scan points on the search line segment starting from the second search point;
step 123, determining the first ground point on the segmentation frame of the ground area based on the distance between each first scan point and the segmentation frame of the ground area, and determining the second ground point on the segmentation frame of the ground area based on the distance between each second scan point and the segmentation frame of the ground area.
In step 121, the center point of the lower edge of the first wheel's detection frame may be used directly as the first search point, and the center point of the lower edge of the second wheel's detection frame directly as the second search point.
Alternatively, the first search point may be obtained by extending from the center point of the lower edge of the first wheel's detection frame, along that lower edge and away from the second wheel, by a preset length, which may be half the length of that lower edge. Likewise, the second search point may be obtained by extending from the center point of the lower edge of the second wheel's detection frame, along that lower edge and away from the first wheel, by a preset length, which may be half the length of that lower edge.
Obtaining the search points by extending outward from the center points of the lower edges avoids the influence that low accuracy of a wheel's detection frame would otherwise have on the determined ground points, as in the sketch below.
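A minimal sketch of the alternative search-point construction above; the (x1, y1, x2, y2) box convention and the helper name are assumptions.

```python
def search_point(own_wheel_box, other_wheel_box):
    """Push the center of the lower edge of own_wheel_box outward, away from
    the other wheel, by half the lower edge's length."""
    x1, _, x2, y2 = own_wheel_box             # y2 is the lower edge
    cx, half = (x1 + x2) / 2.0, (x2 - x1) / 2.0
    ox = (other_wheel_box[0] + other_wheel_box[2]) / 2.0
    direction = -1.0 if ox > cx else 1.0      # move away from the other wheel
    return (cx + direction * half, y2)
```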
After the first search point and the second search point are obtained, the search line segment they form is obtained and can be scanned to collect the coordinates of every pixel on it. For example, let k be the slope of the search segment and θ the angle between the segment and the x-axis of a virtual coordinate system (a coordinate system parallel to the ground). If θ ≤ 45° (that is, |k| ≤ 1), scan pixel by pixel along the x-axis direction and compute the corresponding y value of each pixel, obtaining the coordinates of all pixels on the search segment and storing them as an array arr; if θ > 45°, scan pixel by pixel along the y-axis direction and compute the corresponding x value, again obtaining the coordinates of all pixels on the search segment and storing them as the array arr. A sketch of this scan follows.
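A minimal sketch of that scan, assuming integer endpoint coordinates and two distinct search points; whether to step along x or y follows the reconstructed condition above.

```python
import numpy as np

def scan_search_segment(p1, p2):
    """Return arr: the pixel coordinates along the segment from p1 to p2, in order."""
    (x1, y1), (x2, y2) = p1, p2
    if abs(x2 - x1) >= abs(y2 - y1):           # theta <= 45 deg: scan along x
        step = 1 if x2 >= x1 else -1
        xs = np.arange(x1, x2 + step, step)
        ys = y1 + (y2 - y1) / (x2 - x1) * (xs - x1)
    else:                                      # theta > 45 deg: scan along y
        step = 1 if y2 >= y1 else -1
        ys = np.arange(y1, y2 + step, step)
        xs = x1 + (x2 - x1) / (y2 - y1) * (ys - y1)
    return np.stack([np.round(xs), np.round(ys)], axis=1).astype(int)
```

Taking the first 2p entries of arr from the first search point's end, and the first 2q entries from the second search point's end, then yields the scan points of step 122.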
Further, in step 122, a plurality of pixels may be determined on the search segment, starting from the first search point, as first scan points. For example, starting from the first search point, successive pixels stored in the array may be taken as first scan points until their number reaches a certain count (for example 2p, where p is the length of the lower edge of the first wheel's detection frame).
Likewise, a plurality of pixels may be determined on the search segment, starting from the second search point, as second scan points. For example, starting from the second search point, successive pixels stored in the array may be taken as second scan points until their number reaches a certain count (for example 2q, where q is the length of the lower edge of the second wheel's detection frame).
Further, in step 123, the first ground point may be determined from the distance between each first scan point and the segmentation frame of the ground area, and the second ground point from the distance between each second scan point and the segmentation frame of the ground area.
Optionally, for step 123, determining the first ground point on the segmentation frame of the ground area based on the distance between each first scan point and the segmentation frame of the ground area includes:
for each first scan point, determining a first scan line that passes through the first scan point and is perpendicular to the search segment, obtaining the nearest intersection between the first scan line and the segmentation frame of the ground area, and determining the distance between that nearest intersection and the first scan point;
and determining the nearest intersection with the shortest such distance as the first ground point when the search segment lies within the segmentation frame of the ground area, and otherwise determining the nearest intersection with the longest such distance as the first ground point.
Specifically, for each first scan point, a first scan line perpendicular to the search segment may be drawn through the scan point to obtain the nearest intersection between the first scan line and the segmentation frame of the ground area, and then the distance between the first scan point and that nearest intersection.
If the search segment lies within the segmentation frame of the ground area, the nearest intersection with the shortest distance among all first scan points is determined as the first ground point; if the search segment lies outside the segmentation frame of the ground area, the nearest intersection with the longest distance among all first scan points is determined as the first ground point.
By way of example, fig. 4 is a schematic diagram of a ground point in an embodiment of the present disclosure. Referring to fig. 4, ab is the search segment and cd is part of the boundary of the ground area's segmentation frame. Since ab lies within the segmentation frame, f is the nearest intersection determined for the first scan point e, and the distance ef is the shortest, f can be taken as the first ground point.
Fig. 5 is a schematic diagram of another ground point in an embodiment of the disclosure. Referring to fig. 5, ab is the search segment and cd is part of the boundary of the ground area's segmentation frame. Since ab lies outside the segmentation frame, f is the nearest intersection determined for the first scan point e, and the distance ef is the longest, f can be taken as the first ground point.
Similarly, for each second scan point, a second scan line perpendicular to the search segment may be drawn through the scan point to obtain the nearest intersection between the second scan line and the segmentation frame of the ground area, and then the distance between that nearest intersection and the second scan point. If the search segment lies within the segmentation frame of the ground area, the nearest intersection with the shortest distance may be taken as the second ground point; if the search segment lies outside the segmentation frame of the ground area, the nearest intersection with the longest distance may be taken as the second ground point.
Through steps 121-123, the ground points between the wheels and the ground can be accurately determined from the target detection results and the segmentation result, facilitating the subsequent prediction of the target vehicle's heading and ensuring the accuracy of the heading. A sketch of these steps follows.
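A minimal sketch of steps 121-123 on a binary ground mask. It simplifies the patent's scan lines in two respects: each scan marches in a single perpendicular direction from its scan point rather than along the full perpendicular line, and the segmentation frame is approximated by ground/non-ground transitions in the mask. Helper names and the marching scheme are assumptions.

```python
# ground_mask: 2D array of 0/1 (e.g. a numpy array), 1 = drivable ground,
# indexed [row][col]; `normal` is a unit vector perpendicular to the search segment.
def nearest_boundary_crossing(scan_pt, normal, ground_mask, max_steps=200):
    """March from scan_pt along `normal`; return (distance, boundary point) at
    the first ground/non-ground transition, or None if none is found."""
    rows, cols = len(ground_mask), len(ground_mask[0])
    x0, y0 = scan_pt
    inside0 = ground_mask[int(y0)][int(x0)] > 0
    for step in range(1, max_steps):
        x = int(round(x0 + step * normal[0]))
        y = int(round(y0 + step * normal[1]))
        if not (0 <= x < cols and 0 <= y < rows):
            return None
        if (ground_mask[y][x] > 0) != inside0:   # crossed the boundary
            return float(step), (x, y)
    return None

def pick_ground_point(scan_points, normal, ground_mask, segment_inside):
    """Step 123: the shortest crossing wins when the search segment lies inside
    the ground area, the longest otherwise."""
    hits = [h for h in (nearest_boundary_crossing(p, normal, ground_mask)
                        for p in scan_points) if h is not None]
    if not hits:
        return None
    chooser = min if segment_inside else max
    return chooser(hits, key=lambda h: h[0])[1]
```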
S130, determining the heading of the target vehicle according to the line connecting the first ground point and the second ground point.
Specifically, after the first ground point and the second ground point are obtained, the line connecting them may be taken as the body-side ground line of the target vehicle, and this ground line reflects the heading of the target vehicle. A sketch of this step follows.
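A minimal sketch of this final step: the heading as the direction of the body-side ground line in a ground-plane (e.g. BEV) coordinate system. Resolving which ground point is the front one, and hence the 180° ambiguity of the line direction, is outside this sketch.

```python
import math

def heading_from_ground_points(p_front, p_rear):
    """Heading angle (radians) of the line from the rear ground point to the
    front ground point in the ground-plane coordinate system."""
    dx = p_front[0] - p_rear[0]
    dy = p_front[1] - p_rear[1]
    return math.atan2(dy, dx)
```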
Further, the segmentation frame of the ground area and the heading of the target vehicle can be sent to the decision module of the ego vehicle to facilitate driving decisions and planning.
According to the target vehicle heading detection method provided by the embodiments of the present disclosure, the detection frame of the first wheel, the detection frame of the second wheel, and the segmentation frame of the ground area corresponding to each target vehicle are determined from the environmental images acquired by each on-board surround-view camera of the ego vehicle; then, for each target vehicle, the first ground point between the first wheel and the ground and the second ground point between the second wheel and the ground are determined from the corresponding segmentation frame of the ground area, the detection frame of the first wheel, and the detection frame of the second wheel; finally, the heading of the target vehicle is determined from the line connecting the first ground point and the second ground point, so that heading detection based on images acquired in real time is realized.
Fig. 6 is a schematic structural diagram of a target vehicle heading detection device in an embodiment of the disclosure. As shown in fig. 6, the device comprises a detection frame determining module 610, a ground point determining module 620, and a heading determining module 630.
The detection frame determining module 610 is configured to determine the detection frame of the first wheel, the detection frame of the second wheel, and the segmentation frame of the ground area corresponding to each target vehicle based on the environmental images acquired by each on-board surround-view camera of the ego vehicle, where the first wheel and the second wheel are located on the same side of the target vehicle's body, and the ground area is the unoccluded area of the ground on which the ego vehicle can drive;
the ground point determining module 620 is configured to determine, for each target vehicle, a first ground point between the first wheel and the ground and a second ground point between the second wheel and the ground, based on the corresponding segmentation frame of the ground area, the detection frame of the first wheel, and the detection frame of the second wheel;
and the heading determining module 630 is configured to determine the heading of the target vehicle according to the line connecting the first ground point and the second ground point.
Optionally, the detection frame determining module 610 includes a model output unit, a judging unit, and a determining unit, where:
The model output unit is configured to input, for the environmental image acquired by each surround-view camera, the current environmental image into a pre-trained detection model to obtain the detection frames of target vehicles, the detection frames of wheels, and the segmentation frame of the ground area in the current environmental image;
the judging unit is configured to determine, for each target vehicle in the current environmental image, the detection frames of all wheels located within that target vehicle's detection frame as detection frames of wheels to be verified, and to judge whether each detection frame of a wheel to be verified is the detection frame of a wheel corresponding to the target vehicle based on the distance between the wheel's detection frame and the upper edge of the target vehicle's detection frame and the distance between the wheel's detection frame and the lower edge of the target vehicle's detection frame;
and the determining unit is configured to obtain the detection frame of the first wheel and the detection frame of the second wheel corresponding to the target vehicle when the number of wheels corresponding to the target vehicle in the current environmental image is two.
Optionally, the detection frame determining module 610 further includes a fusion unit configured to judge whether a detection frame of the target vehicle exists in another environmental image, where the other environmental image is an environmental image acquired by an adjacent surround-view camera; if so, to judge whether the wheel corresponding to the target vehicle in the current environmental image and the wheel corresponding to the target vehicle in the other environmental image are the same wheel; and if so, to remove the detection frame of the same wheel from the current environmental image when it lies within the untrusted range of the current environmental image, and to remove the detection frame of the same wheel from the other environmental image when it lies within the untrusted range of the other environmental image.
Optionally, the fusion unit is further configured to, if the wheel corresponding to the target vehicle in the current environmental image and the wheel corresponding to the target vehicle in the other environmental image are the same wheel, update the detection frame of the same wheel based on its detection frames in the current environmental image and the other environmental image, when those detection frames lie within the trusted range of the current environmental image and of the other environmental image.
Optionally, the fusion unit is further configured to determine a fusion coefficient based on the positions, in a virtual coordinate system, of the detection frames of the same wheel in the current environmental image and the other environmental image; to fuse those positions based on the fusion coefficient to obtain the fused position of the same wheel's detection frame in the virtual coordinate system; and to update the detection frames of the same wheel in the current environmental image and the other environmental image based on the fused position.
Optionally, the fusion unit is further configured to judge, when the number of wheels corresponding to the target vehicle in the current environmental image is one, whether a detection frame of the target vehicle exists in another environmental image; if so, to judge whether the wheel corresponding to the target vehicle in the current environmental image and the wheel corresponding to the target vehicle in the other environmental image are the same wheel; and if not, to determine the detection frame of the first wheel and the detection frame of the second wheel corresponding to the target vehicle from the detection frames of the wheels corresponding to the target vehicle in the two environmental images.
Optionally, the ground point determining module 620 includes a line segment determining unit, a scan point determining unit, and a ground point determining unit, where:
the line segment determining unit is configured to determine a first search point based on the center point of the lower edge of the first wheel's detection frame, determine a second search point based on the center point of the lower edge of the second wheel's detection frame, and determine the search line segment formed by the first search point and the second search point;
the scan point determining unit is configured to determine a plurality of first scan points on the search line segment starting from the first search point, and to determine a plurality of second scan points on the search line segment starting from the second search point;
and the ground point determining unit is configured to determine the first ground point on the segmentation frame of the ground area based on the distance between each first scan point and the segmentation frame of the ground area, and to determine the second ground point on the segmentation frame of the ground area based on the distance between each second scan point and the segmentation frame of the ground area.
Optionally, the ground point determining unit is further configured to determine, for each first scan point, a first scan line passing through the first scan point and perpendicular to the search segment, obtain the nearest intersection between the first scan line and the segmentation frame of the ground area, and determine the distance between that nearest intersection and the first scan point; and to determine the nearest intersection with the shortest such distance as the first ground point when the search segment lies within the segmentation frame of the ground area, and otherwise to determine the nearest intersection with the longest such distance as the first ground point.
The target vehicle heading detection device provided by the embodiments of the present disclosure may execute the steps of the target vehicle heading detection method provided by the embodiments of the present disclosure; the execution steps and beneficial effects are not repeated here.
Fig. 7 is a schematic structural diagram of an electronic device in an embodiment of the disclosure. Referring now specifically to fig. 7, a schematic diagram of an electronic device 500 suitable for implementing embodiments of the present disclosure is shown. The electronic device shown in fig. 7 is merely an example and should not impose any limitation on the functionality and scope of use of the disclosed embodiments.
As shown in fig. 7, the electronic device 500 may include a processing means 501, a read-only memory (ROM) 502, a random access memory (RAM) 503, a bus 504, an input/output (I/O) interface 505, an input means 506, an output means 507, a storage means 508, and a communication means 509. The processing means (e.g., a central processing unit or graphics processor) 501 may perform various appropriate actions and processes to implement the methods of the embodiments described in the present disclosure, according to a program stored in the ROM 502 or loaded from the storage means 508 into the RAM 503. The RAM 503 also stores various programs and data required for the operation of the electronic device 500. The processing means 501, the ROM 502, and the RAM 503 are connected to one another via the bus 504, and the input/output (I/O) interface 505 is also connected to the bus 504.
In particular, according to embodiments of the present disclosure, the processes described above with reference to flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a non-transitory computer readable medium, the computer program comprising program code for performing the method shown in the flowchart, thereby implementing the target vehicle heading detection method as described above. In such an embodiment, the computer program may be downloaded and installed from a network via the communication means 509, or from the storage means 508, or from the ROM 502. The above-described functions defined in the methods of the embodiments of the present disclosure are performed when the computer program is executed by the processing device 501.
It should be noted that the computer readable medium described in the present disclosure may be a computer readable signal medium or a computer readable storage medium, or any combination of the two. The computer readable storage medium can be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this disclosure, a computer-readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In the present disclosure, however, the computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, with the computer-readable program code embodied therein. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, fiber optic cables, RF (radio frequency), and the like, or any suitable combination of the foregoing.
The computer readable medium may be contained in the electronic device; or may exist alone without being incorporated into the electronic device. The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to:
determine a detection frame of a first wheel, a detection frame of a second wheel and a segmentation frame of a ground area corresponding to each target vehicle based on the environmental images acquired by each of the host vehicle's on-board surround-view cameras, wherein the first wheel and the second wheel are located on the same side of the body of the target vehicle, and the ground area is the uncovered, drivable area of the ground;
determine, for each target vehicle, a first grounding point between the first wheel and the ground and a second grounding point between the second wheel and the ground based on the corresponding segmentation frame of the ground area, the detection frame of the first wheel and the detection frame of the second wheel;
and determine the heading of the target vehicle from the line connecting the first grounding point and the second grounding point.
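By way of illustration of the final step only: once the two grounding points have been projected into a common ground-plane (bird's-eye-view) coordinate system, the heading reduces to the angle of the line through them. The following minimal Python sketch shows this step; the function name and the assumption that the points are already in ground-plane coordinates are illustrative rather than taken from the disclosure.

```python
import math

def heading_from_grounding_points(p1, p2):
    """Heading of the target vehicle, in radians, as the angle of the
    line connecting the first grounding point p1 and the second
    grounding point p2, both (x, y) in a ground-plane coordinate
    system. The line fixes only the orientation of the vehicle side;
    disambiguating front from rear requires additional cues, e.g.
    knowing which wheel is the front wheel, or tracking over time."""
    (x1, y1), (x2, y2) = p1, p2
    return math.atan2(y2 - y1, x2 - x1)
```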
Alternatively, when the above one or more programs are executed by the electronic device, the electronic device may also perform the other steps described in the above embodiments.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium, and may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The foregoing description is merely of the preferred embodiments of the present disclosure and an explanation of the technical principles employed. Persons skilled in the art will appreciate that the scope of the disclosure is not limited to technical solutions formed by the specific combinations of features described above, but also covers other technical solutions formed by any combination of the above features or their equivalents without departing from the spirit of the disclosure, for example, solutions in which the above features are replaced with (but not limited to) technical features having similar functions disclosed in the present disclosure.

Claims (10)

1. A method for detecting the heading of a target vehicle, the method comprising:
determining a detection frame of a first wheel, a detection frame of a second wheel and a segmentation frame of a ground area corresponding to each target vehicle based on the environmental images acquired by each of the host vehicle's on-board surround-view cameras, wherein the first wheel and the second wheel are located on the same side of the body of the target vehicle, and the ground area is the uncovered, drivable area of the ground;
determining, for each target vehicle, a first grounding point between the first wheel and the ground and a second grounding point between the second wheel and the ground based on the corresponding segmentation frame of the ground area, the detection frame of the first wheel and the detection frame of the second wheel;
and determining the heading of the target vehicle from the line connecting the first grounding point and the second grounding point.
2. The method according to claim 1, wherein determining the detection frame of the first wheel, the detection frame of the second wheel and the segmentation frame of the ground area corresponding to each target vehicle based on the environmental images acquired by the respective on-board surround-view cameras comprises:
for the environment image acquired by each on-board surround-view camera, inputting the current environment image into a pre-trained detection model to obtain the detection frames of target vehicles, the detection frames of wheels and the segmentation frame of the ground area in the current environment image;
for each target vehicle in the current environment image, determining the detection frames of all wheels located within the detection frame of the target vehicle as detection frames of wheels to be verified;
judging whether the detection frame of a wheel to be verified is the detection frame of a wheel corresponding to the target vehicle based on the distance between the detection frame of the wheel to be verified and the upper edge of the detection frame of the target vehicle and the distance between the detection frame of the wheel to be verified and the lower edge of the detection frame of the target vehicle;
and, in the case where the number of wheels corresponding to the target vehicle in the current environment image is two, obtaining the detection frame of the first wheel and the detection frame of the second wheel corresponding to the target vehicle.
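To make the edge-distance test of claim 2 concrete: a true wheel sits near the lower edge of its vehicle's detection frame, so its distance to the lower edge should be small relative to its distance to the upper edge. The sketch below checks this; the (x1, y1, x2, y2) box representation, the image convention of y growing downward, and the 0.3 threshold are all assumptions of this sketch, not values from the disclosure.

```python
def is_wheel_of_vehicle(wheel_box, vehicle_box, ratio=0.3):
    """Return True if the wheel box, already known to lie inside the
    vehicle box, is close enough to the vehicle box's lower edge to be
    accepted as a wheel of that vehicle. Boxes are (x1, y1, x2, y2) in
    image coordinates with y increasing downward; `ratio` is a
    hypothetical threshold."""
    _, wy1, _, wy2 = wheel_box
    _, vy1, _, vy2 = vehicle_box
    dist_to_upper = wy1 - vy1   # wheel top to vehicle-box upper edge
    dist_to_lower = vy2 - wy2   # wheel bottom to vehicle-box lower edge
    return dist_to_lower <= ratio * max(dist_to_upper, 1.0)
```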
3. The method according to claim 2, further comprising, after judging whether the detection frame of the wheel to be verified is the detection frame of the wheel corresponding to the target vehicle:
judging whether a detection frame of the target vehicle exists in other environment images, the other environment images being environment images acquired by the adjacent on-board surround-view cameras;
if so, judging whether the wheel corresponding to the target vehicle in the current environment image and the wheel corresponding to the target vehicle in the other environment images are the same wheel;
if so, eliminating the detection frame of the same wheel from the current environment image in the case where that detection frame lies in the untrusted range of the current environment image, and eliminating the detection frame of the same wheel from the other environment images in the case where that detection frame lies in the untrusted range of the other environment images.
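One plausible reading of the "untrusted range" in claim 3, given how strongly fisheye surround-view images distort near the frame border, is a margin strip along the image edges; the sketch below models it that way. The margin width is an assumed parameter, and the disclosure does not define the range this concretely.

```python
def in_untrusted_range(box, img_w, img_h, margin_frac=0.1):
    """Return True if the box center falls inside an assumed untrusted
    strip along the image border, where fisheye distortion is worst.
    A duplicate wheel detection in this strip would be eliminated in
    favor of the adjacent camera's view. `margin_frac` is hypothetical."""
    x1, y1, x2, y2 = box
    cx, cy = (x1 + x2) / 2.0, (y1 + y2) / 2.0
    mx, my = margin_frac * img_w, margin_frac * img_h
    return cx < mx or cx > img_w - mx or cy < my or cy > img_h - my
```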
4. The method according to claim 3, wherein the method further comprises:
if the wheel corresponding to the target vehicle in the current environment image and the wheel corresponding to the target vehicle in the other environment images are the same wheel, updating the detection frame of the same wheel based on the detection frame of the same wheel in the current environment image and the detection frame of the same wheel in the other environment images, in the case where the detection frame of the same wheel in the current environment image lies in the trusted range of the current environment image.
5. The method according to claim 4, wherein updating the detection frame of the same wheel based on the detection frame of the same wheel in the current environment image and the detection frame of the same wheel in the other environment images comprises:
determining a fusion coefficient based on the positions, in a virtual coordinate system, of the detection frames of the same wheel in the current environment image and in the other environment images;
fusing, based on the fusion coefficient, the positions of the detection frames of the same wheel in the current environment image and in the other environment images in the virtual coordinate system to obtain the fused position of the detection frame of the same wheel in the virtual coordinate system;
and updating the detection frames of the same wheel in the current environment image and in the other environment images based on the fused position of the detection frame of the same wheel in the virtual coordinate system.
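Claim 5 leaves the form of the fusion coefficient open; a common choice when merging detections of the same object from two cameras is to weight each view by its reliability, for example inversely by distance to the camera. The sketch below uses that assumption; only the weighted-average structure follows the claim.

```python
def fuse_wheel_positions(pos_a, pos_b, dist_a, dist_b):
    """Fuse two estimates of the same wheel's position (x, y) in the
    shared virtual coordinate system. dist_a and dist_b are the wheel's
    distances to camera A and camera B; the closer view gets the larger
    weight (an assumption, since the disclosure only states that the
    coefficient is derived from the positions)."""
    w_a = dist_b / (dist_a + dist_b)
    w_b = dist_a / (dist_a + dist_b)
    return (w_a * pos_a[0] + w_b * pos_b[0],
            w_a * pos_a[1] + w_b * pos_b[1])
```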
6. The method according to claim 2, further comprising, after judging whether the detection frame of the wheel to be verified is the detection frame of the wheel corresponding to the target vehicle:
in the case where the number of wheels corresponding to the target vehicle in the current environment image is one, judging whether a detection frame of the target vehicle exists in other environment images;
if so, judging whether the wheel corresponding to the target vehicle in the current environment image and the wheel corresponding to the target vehicle in the other environment images are the same wheel;
if not, determining the detection frame of the first wheel and the detection frame of the second wheel corresponding to the target vehicle from the detection frame of the wheel corresponding to the target vehicle in the current environment image and the detection frame of the wheel corresponding to the target vehicle in the other environment images.
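In the claim-6 case, each of the two cameras contributes one distinct wheel of the same vehicle, and together they form the first/second wheel pair. A minimal sketch, assuming both boxes have already been mapped into the shared virtual coordinate system, with the left-to-right ordering being purely an illustrative convention:

```python
def pair_wheels_across_cameras(wheel_cur, wheel_other):
    """Form the (first wheel, second wheel) pair from one wheel box seen
    in the current image and a different wheel box of the same vehicle
    seen by the adjacent camera. Boxes are (x1, y1, x2, y2) in the shared
    virtual coordinate system; sorting by x1 is an arbitrary convention."""
    first, second = sorted([wheel_cur, wheel_other], key=lambda b: b[0])
    return first, second
```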
7. The method according to claim 1, wherein determining the first grounding point between the first wheel and the ground and the second grounding point between the second wheel and the ground based on the corresponding segmentation frame of the ground area, the detection frame of the first wheel and the detection frame of the second wheel comprises:
determining a first search point based on the center point of the lower edge of the detection frame of the first wheel, determining a second search point based on the center point of the lower edge of the detection frame of the second wheel, and determining the search line segment formed by the first search point and the second search point;
determining a plurality of first scan points on the search line segment taking the first search point as the starting point, and determining a plurality of second scan points on the search line segment taking the second search point as the starting point;
determining the first grounding point on the segmentation frame of the ground area based on the distance between each first scan point and the segmentation frame of the ground area, and determining the second grounding point on the segmentation frame of the ground area based on the distance between each second scan point and the segmentation frame of the ground area.
8. The method according to claim 7, wherein determining the first grounding point on the segmentation frame of the ground area based on the distance between each first scan point and the segmentation frame of the ground area comprises:
for each first scan point, determining a first scan line passing through the first scan point and perpendicular to the search line segment, obtaining the nearest intersection point between the first scan line and the segmentation frame of the ground area, and determining the distance between that nearest intersection point and the first scan point;
and, in the case where the search line segment is located within the segmentation frame of the ground area, determining the nearest intersection point with the shortest distance as the first grounding point; otherwise, determining the nearest intersection point with the longest distance as the first grounding point.
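Claims 7 and 8 together describe the grounding-point search. The sketch below illustrates it under simplifying assumptions: the segmentation frame of the ground area is represented as a binary mask, each scan point casts a scan line perpendicular to the search segment, the first mask-boundary crossing along that line stands in for the "nearest intersection point", and the shortest/longest-distance rule of claim 8 selects among the candidates. The mask representation and all parameter values are assumptions of this sketch.

```python
import numpy as np

def find_grounding_point(ground_mask, search_pt, direction,
                         n_scan=10, step=2.0, max_ray=100):
    """Search for a grounding point in the spirit of claims 7-8.

    ground_mask : 2-D uint8 array, 1 on the segmented ground area.
    search_pt   : (x, y) center of the wheel box's lower edge.
    direction   : unit vector from this search point toward the other,
                  along which the scan points are sampled.
    Returns the (x, y) pixel chosen as the grounding point, or None.
    """
    direction = np.asarray(direction, dtype=float)
    perp = np.array([-direction[1], direction[0]])  # perpendicular scan line
    h, w = ground_mask.shape
    # Claim 8 keeps the shortest distance when the search segment lies
    # inside the ground area and the longest otherwise; here the
    # inside/outside test simply probes the mask at the search point.
    inside = bool(ground_mask[int(search_pt[1]), int(search_pt[0])])
    best = None
    for i in range(n_scan):
        scan_pt = np.asarray(search_pt, dtype=float) + i * step * direction
        for sign in (1.0, -1.0):  # walk the scan line in both directions
            for t in range(1, max_ray):
                x, y = (scan_pt + sign * t * perp).round().astype(int)
                if not (0 <= x < w and 0 <= y < h):
                    break
                if bool(ground_mask[y, x]) != inside:  # boundary crossed
                    d = float(t)
                    if best is None or (inside and d < best[0]) \
                            or (not inside and d > best[0]):
                        best = (d, (int(x), int(y)))
                    break
    return None if best is None else best[1]
```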
9. A target vehicle heading detection apparatus, the apparatus comprising:
a detection frame determination module, configured to determine a detection frame of a first wheel, a detection frame of a second wheel and a segmentation frame of a ground area corresponding to each target vehicle based on the environmental images acquired by each of the host vehicle's on-board surround-view cameras, wherein the first wheel and the second wheel are located on the same side of the body of the target vehicle, and the ground area is the uncovered, drivable area of the ground;
a grounding point determination module, configured to determine, for each target vehicle, a first grounding point between the first wheel and the ground and a second grounding point between the second wheel and the ground based on the corresponding segmentation frame of the ground area, the detection frame of the first wheel and the detection frame of the second wheel;
and a heading determination module, configured to determine the heading of the target vehicle from the line connecting the first grounding point and the second grounding point.
10. A computer readable storage medium on which a computer program is stored, wherein the program, when executed by a processor, implements the method according to any one of claims 1-8.
CN202311780781.6A (filed 2023-12-22, priority date 2023-12-22): Method, device and storage medium for detecting heading of target vehicle. Published as CN117746350A; status: pending.

Priority Applications (1)

Application Number: CN202311780781.6A | Priority Date: 2023-12-22 | Filing Date: 2023-12-22 | Title: Method, device and storage medium for detecting heading of target vehicle

Publications (1)

Publication Number: CN117746350A | Publication Date: 2024-03-22

Family ID: 90256141

Country Status (1): CN (publication CN117746350A)

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination