WO2023184868A1 - Method, apparatus, system, device, medium and product for determining obstacle orientation - Google Patents

Method, apparatus, system, device, medium and product for determining obstacle orientation

Info

Publication number
WO2023184868A1
Authority
WO
WIPO (PCT)
Prior art keywords
wheels
wheel
orientation angle
obstacle
vehicle
Prior art date
Application number
PCT/CN2022/117328
Other languages
English (en)
French (fr)
Inventor
张军良
Original Assignee
合众新能源汽车股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 合众新能源汽车股份有限公司
Publication of WO2023184868A1 publication Critical patent/WO2023184868A1/zh

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/56Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/58Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads

Definitions

  • the present application relates to the technical field of obstacle detection, and in particular to a method, device, system, equipment, computer-readable storage medium and computer program product for determining the direction of an obstacle.
  • in the camera-based surround-view perception task, the vehicle's orientation (yaw) angle can be determined by obtaining the 3D coordinate information of the vehicle from the pixel coordinate system
  • when an obstacle such as a large truck is only partially visible, the complete 3D coordinate information of the truck in the pixel coordinate system cannot be obtained; that is, the obstacle's information is truncated in the pixel coordinate system, which leads to serious errors in the orientation of 3D obstacles.
  • This application provides a method, device, system, electronic equipment, computer-readable storage medium and computer program product for determining the orientation of an obstacle, to at least solve the technical problem in the related art of inaccurate determination of 3D obstacle orientation caused by the truncated information of 3D obstacles in the pixel coordinate system. The technical solution is as follows:
  • a method for determining the direction of an obstacle including:
  • the heading angle of the vehicle is determined based on the road position in the world coordinate system.
  • determining the wheel pair with the largest distance between the plurality of wheels includes:
  • the coordinate information in the pixel coordinates of the wheel that meets the set threshold is obtained;
  • the method further includes:
  • before performing consistency verification on the dimensions of the plurality of wheels, the method further includes:
  • the wheel is completed according to a set strategy, including:
  • Two complete wheels are selected from the plurality of wheels, an average of the wheel areas of the two complete wheels is calculated, and all truncated wheels are completed according to the average.
  • mapping the ground contact position to the road position in the world coordinate system includes:
  • the ground contact position is mapped to the road position in the world coordinate system through the inverse perspective transformation formula.
  • determining the orientation angle of the vehicle based on the road position in the world coordinate system includes:
  • the orientation of the wheel line around the direction of gravity is determined as the orientation angle of the vehicle.
  • a method for determining the direction of an obstacle including:
  • the angle average is determined as the heading angle of the current vehicle.
  • the method also includes:
  • the first orientation angle or the second orientation angle that is closest to the third orientation angle is determined as the orientation angle of the current vehicle.
  • determining the first orientation angle of the current vehicle includes:
  • the first orientation angle of the vehicle is determined based on the road position in the world coordinate system.
  • a device for determining the direction of an obstacle including:
  • the first acquisition module is used to acquire the target obstacle image detected in the detection area, where the target obstacle image includes a plurality of wheels;
  • a first determination module configured to determine the wheel pair with the largest distance between the plurality of wheels when the plurality of wheels are wheels of the same vehicle;
  • a second acquisition module configured to acquire the ground contact position of the wheel pair with the largest distance in the target obstacle image
  • a mapping module for mapping the grounding position to the road position in the world coordinate system
  • the second determination module is used to determine the orientation angle of the vehicle based on the road position in the world coordinate system.
  • the first determination module includes:
  • a first matching module configured to match each wheel size in the plurality of wheels with the wheel template size in the template pool
  • a third acquisition module configured to acquire the coordinate information in pixel coordinates of the wheel that satisfies the set threshold when the matching result of the first matching module satisfies the set threshold;
  • the first calculation module is used to calculate the distance between two wheels according to the coordinate information
  • the first selection module is used to select the pair of wheels with the largest distance.
  • the device also includes:
  • a verification module configured to perform consistency verification on the dimensions of the plurality of wheels when the plurality of wheels are wheels of the same vehicle;
  • the first determination module is also configured to determine the wheel pair with the largest distance between the plurality of wheels when the consistency check of the verification module is successful;
  • a completion module is used to complete the wheel according to a set strategy when the consistency check by the verification module fails.
  • the device also includes:
  • a first judgment module configured to judge whether each of the plurality of wheels has wheel truncation before the verification module performs consistency verification on the dimensions of the plurality of wheels
  • the first determination module is also configured to determine the wheel pair with the largest distance between the plurality of wheels when the first determination module determines that there is no wheel truncation.
  • the completion module is also configured to complete the wheel according to the set strategy when the first judgment module determines that there is wheel truncation.
  • the completion module includes: a second selection module, a second matching module and a first completion module; and/or a third selection module and a second completion module; and/or a fourth selection module and a third completion module.
  • the second selection module is used to select the truncated wheel with the largest area
  • the second matching module is used to match the truncated wheel with the largest area to the wheel template in the template pool;
  • the first completion module is used to complete all truncated wheels according to the wheel template matched by the second matching module;
  • the third selection module is used to select a complete wheel from the plurality of wheels
  • the second completion module is used to complete all the truncated wheels according to the complete wheels selected by the third selection module;
  • the fourth selection module is used to select two complete wheels from the plurality of wheels
  • the second calculation module is used to calculate the average of the wheel areas of the two complete wheels selected by the fourth selection module, and complete all the truncated wheels based on the average.
  • the mapping module is specifically configured to map the grounding position to the road position in the world coordinate system through an inverse perspective transformation formula.
  • the second determination module includes:
  • the third calculation module is used to calculate the wheel connection line of the road position of the wheel pair in the world coordinate system
  • An orientation angle determination module is used to determine the orientation of the wheel connection line around the direction of gravity as the orientation angle of the vehicle.
  • a device for determining the direction of an obstacle including:
  • the first determination module is used to determine the first orientation angle of the current vehicle in the detection area, where the first orientation angle is the orientation angle of the current vehicle determined based on the wheel pair in the target obstacle image of the detection area;
  • the first acquisition module is used to acquire the second orientation angle of the current vehicle detected in the detection area output by the 3D obstacle detection model
  • a second determination module configured to determine the difference between the first orientation angle and the second orientation angle
  • a third determination module configured to determine the angle average of the first orientation angle and the second orientation angle when the difference is less than a preset threshold
  • a fourth determination module is used to determine the angle average as the orientation angle of the current vehicle.
  • the device also includes:
  • a second acquisition module configured to acquire the historical orientation angle of the current vehicle when the difference is not less than the preset threshold
  • a fitting module used to perform curve fitting on the historical orientation angle through a random sampling consistency RANSAC verification algorithm, and predict the third orientation angle of the current vehicle;
  • a fifth determination module is configured to determine the first orientation angle or the second orientation angle that is closest to the third orientation angle as the orientation angle of the current vehicle.
  • the first determination module includes:
  • the third acquisition module is used to acquire the target obstacle image detected in the detection area, where the target obstacle image includes multiple wheels;
  • a sixth determination module configured to determine the wheel pair with the largest distance between the plurality of wheels when the plurality of wheels are wheels of the same vehicle;
  • the fourth acquisition module is used to acquire the ground contact position of the wheel pair with the largest distance in the target obstacle image
  • a mapping module for mapping the grounding position to the road position in the world coordinate system
  • a seventh determination module is used to determine the first orientation angle of the vehicle based on the road position in the world coordinate system.
  • a system for determining the orientation of an obstacle is provided.
  • the system is applied to a 3D obstacle detection network, and the system includes:
  • the 2D obstacle detection module is used to detect the image decoded by the decoder in the 3D obstacle detection network and obtain the target obstacle image in the detection area, where the target obstacle image includes multiple wheels; when the multiple wheels are wheels of the same vehicle, determine the wheel pair with the largest distance between the multiple wheels; and obtain the ground contact position of the wheel pair with the largest distance in the target obstacle image;
  • a parameter transformation module configured to map the grounding position to the road position in the world coordinate system in the 3D obstacle detection network; and determine the orientation angle of the vehicle based on the road position in the world coordinate system.
  • the system is based on a 3D obstacle detection network. That is to say, the wheel detection network provided in this embodiment is a 2D head branched from the 3D obstacle detection network, so the changes to the backbone network are small. At the same time, after the wheels are detected, the vehicle detection results are verified and completed through post-processing, so as to obtain a more precise vehicle orientation angle.
  • an electronic device is provided, which includes: a processor; and
  • a memory for storing instructions executable by the processor;
  • the processor is configured to execute the instructions to implement the method for determining the obstacle orientation as described above.
  • a computer-readable storage medium, which, when instructions in the computer-readable storage medium are executed by a processor of an electronic device, enables the electronic device to perform the method for determining the obstacle orientation described above.
  • a computer program product including a computer program or instructions that, when executed by a processor, implement the method for determining an obstacle orientation as described above.
  • the target obstacle image detected in the detection area is acquired, where the target obstacle image includes multiple wheels;
  • when the multiple wheels are wheels of the same vehicle,
  • the wheel pair with the largest distance between the multiple wheels is determined;
  • the embodiments of the present application can quickly determine the orientation of 3D obstacle vehicles in the area near the host vehicle, solving the technical problem in the related art of inaccurate determination of 3D obstacle orientation caused by the truncated information of 3D obstacles in the pixel coordinate system. Using the embodiments of the present application, the accuracy of the orientation angle of the obstacle vehicle can be effectively improved.
  • Figure 1 is a flow chart of a method for determining the direction of an obstacle provided by an embodiment of the present application.
  • Figure 2 is a schematic diagram of a marked wheel detection frame provided by an embodiment of the present application.
  • Figure 3 is a schematic diagram of a cut wheel provided by an embodiment of the present application.
  • Figure 4 is a schematic diagram of a cut wheel provided by an embodiment of the present application.
  • FIG. 5 is an application example diagram of a method for determining the direction of an obstacle provided by an embodiment of the present application.
  • Figure 6 is another flowchart of a method for determining the direction of an obstacle provided by an embodiment of the present application.
  • Figure 7 is another flowchart of a method for determining the direction of an obstacle provided by an embodiment of the present application.
  • Figure 8 is a block diagram of a device for determining the direction of an obstacle provided by an embodiment of the present application.
  • Figure 9 is another block diagram of a device for determining the direction of an obstacle provided by an embodiment of the present application.
  • Figure 10 is a block diagram of an obstacle orientation determining system provided by an embodiment of the present application.
  • Figure 10A is an application block diagram of an obstacle orientation determination system provided by an embodiment of the present application.
  • Figure 11 is a block diagram of an electronic device provided by an embodiment of the present application.
  • Figure 12 is a block diagram of a device for determining the direction of an obstacle provided by an embodiment of the present application.
  • Figure 1 is a flow chart of a method for determining an obstacle orientation provided by an embodiment of the present application. As shown in Figure 1, the method for determining an obstacle orientation includes the following steps:
  • Step 101 Obtain the target obstacle image detected in the detection area, where the target obstacle image includes multiple wheels;
  • Step 102 When the plurality of wheels are wheels of the same vehicle, determine the wheel pair with the largest distance between the plurality of wheels;
  • Step 103 Obtain the ground contact position of the wheel pair with the largest distance in the target obstacle image
  • Step 104 Map the ground contact position to the road position in the world coordinate system
  • Step 105 Determine the orientation angle of the vehicle based on the road position in the world coordinate system.
  • the method for determining the direction of obstacles described in the embodiments of this application can be applied to terminals, etc.
  • the implementation device of the terminal can be a vehicle-mounted terminal, a main control platform of an autonomous vehicle, or an electronic device such as a vehicle machine, which is not limited here.
  • step 101 a target obstacle image detected in the detection area is obtained, where the target obstacle image includes a plurality of wheels.
  • a 2D obstacle detection network (2D head, also known as 2D detection head) is separated based on the 3D obstacle detection network (3D head, also known as 3D detection head).
  • This 2D head is used to detect the wheels of vehicles in the preset area; that is, the main control platform of the host vehicle can obtain target obstacle images, such as vehicle images, in the predetermined area (i.e., the detection area) near the host vehicle through the camera and the 2D head.
  • The target obstacle image may include multiple wheels, and the multiple wheels may be wheels of the same vehicle or individual wheels of different vehicles. The system automatically filters out the case where a vehicle has only one detected wheel.
  • FIG. 2 is a schematic diagram of a marked wheel detection frame provided by an embodiment of the present application.
  • In Figure 2, the wheel detection frames are illustrated using the wheels numbered 1, 2 and 3 as examples.
  • the vehicle detected in this embodiment can also be a car, etc., which is not limited by this embodiment.
  • step 102 when the plurality of wheels are wheels of the same vehicle, a wheel pair with the largest distance between the plurality of wheels is determined.
  • After the main control platform obtains the multiple wheels included in the target obstacle image detected in the detection area, it first determines the number of vehicles based on the multiple wheels. If the multiple wheels belong to different vehicles and each vehicle includes only one wheel, the operation ends; if the multiple wheels belong to the same vehicle, the wheel pair with the largest distance between the multiple wheels is determined.
  • determining the wheel pair with the largest distance between the plurality of wheels includes:
  • the main control platform matches the size of each wheel in the plurality of wheels with the wheel template size in the template pool.
  • The purpose of matching against the wheel templates in the template pool is to ensure that the detected wheels belong to vehicles in the core area. That is to say, this embodiment usually corrects the orientation angle only for obstacle vehicles in the core area near the self-driving vehicle, to avoid errors introduced by obstacle vehicles outside the core area. The wheel size needs to meet the threshold of the template pool.
  • One matching process is: determine the vehicle model, search the template pool for the wheel template of that model, and then calculate the difference between the size of each of the multiple wheels and the found wheel template size. If the difference is less than the set threshold, the wheel size is considered to meet the set threshold and the match is successful; otherwise, the match is considered unsuccessful.
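The matching process above can be sketched as follows. This is an illustrative reconstruction only: the template-pool structure, the size representation, the helper name, and the threshold value are assumptions, not details given in this application.

```python
def match_wheels_to_template(wheel_sizes, template_pool, vehicle_model, threshold=0.2):
    """Return indices of wheels whose size is close enough to the model's template.

    wheel_sizes: list of (width, height) wheel bounding boxes in pixels.
    template_pool: dict mapping vehicle model -> template (width, height).
    threshold: assumed relative size-difference limit.
    """
    template_w, template_h = template_pool[vehicle_model]
    matched = []
    for i, (w, h) in enumerate(wheel_sizes):
        # Relative size difference of both dimensions against the template.
        diff = abs(w - template_w) / template_w + abs(h - template_h) / template_h
        if diff < threshold:
            matched.append(i)
    return matched
```

Wheels that fail the match (e.g. distant vehicles whose wheels are much smaller than the template) are excluded from the orientation correction, consistent with the core-area filtering described above.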
  • The coordinate information in the pixel coordinates of each wheel that meets the set threshold is obtained. Obtaining this coordinate information is a well-known technique in the art and will not be described again here.
  • the distance between the two wheels is calculated respectively based on the coordinate information.
  • The distance between the coordinate points of each two wheels can be calculated using the distance formula between two coordinate points; the specific formula is well known to those skilled in the art and is not repeated here.
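The pairwise-distance computation and selection of the farthest wheel pair can be sketched as follows; the helper name and point representation are hypothetical.

```python
import itertools
import math

def farthest_wheel_pair(wheel_points):
    """Select the pair of wheels with the largest distance between their
    pixel-coordinate points. wheel_points: list of (u, v) pixel coordinates."""
    best_pair, best_dist = None, -1.0
    for (i, p), (j, q) in itertools.combinations(enumerate(wheel_points), 2):
        d = math.hypot(p[0] - q[0], p[1] - q[1])  # Euclidean distance
        if d > best_dist:
            best_pair, best_dist = (i, j), d
    return best_pair, best_dist
```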
  • step 103 the ground contact position of the wheel pair in the target obstacle image is obtained.
  • the main control platform can obtain the coordinate information of the grounding position of this pair of wheels in the target obstacle image, that is, the coordinate information of the wheel touching point in the pixel coordinate system.
  • the acquisition method is a familiar technology to those skilled in the art and will not be described in detail here.
  • step 104 the ground contact position is mapped to the road position in the world coordinate system.
  • one mapping method is that the main control platform maps the grounding position to the road position in the world coordinate system through the inverse perspective transformation formula.
  • The camera installation position is relatively fixed. Choosing an appropriate world coordinate system makes h equal to the height of the camera above the ground; θv represents the camera's vertical field-of-view range; θu represents the camera's horizontal field-of-view range; θ represents the camera's pitch angle; rFactor represents the tangent mapping factor, and cFactor represents the cosine mapping factor.
  • θv and θu can usually be expressed by other intrinsic parameter data, as follows:
  • W and H in the formula represent the length and width of the camera photosensitive component respectively, and f is the focal length of the camera.
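Under a flat-ground assumption, an inverse perspective mapping of this kind can be sketched as below. This is a generic pinhole-camera reconstruction, not the patent's exact formula (the formula images are not reproduced here); the world-frame conventions and parameter names are assumptions. The fields of view mentioned above follow from the sensor dimensions and focal length as θu = 2·atan(W / (2f)) and θv = 2·atan(H / (2f)).

```python
import math

def pixel_to_road(u, v, h, theta, f, cx, cy):
    """Map a wheel ground-contact pixel (u, v) to a road position (X, Y) in a
    world frame (X forward, Y to the right, origin under the camera) under a
    flat-ground assumption.

    h: camera height above the road; theta: downward pitch angle in radians;
    f: focal length in pixels; (cx, cy): principal point.
    """
    dx = (u - cx) / f          # camera-frame ray component, x to the right
    dy = (v - cy) / f          # camera-frame ray component, y downward
    # Intersect the rotated pixel ray with the ground plane Z = 0.
    denom = dy * math.cos(theta) + math.sin(theta)
    if denom <= 0:
        raise ValueError("pixel ray does not intersect the ground plane")
    t = h / denom
    X = t * (math.cos(theta) - dy * math.sin(theta))  # forward distance
    Y = t * dx                                        # lateral offset
    return X, Y
```

For a camera 1.5 m above the road with zero pitch, a pixel well below the principal point maps to a nearby road point, while pixels approaching the horizon map to ever larger distances, as expected of inverse perspective mapping.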
  • step 105 the orientation angle of the vehicle is determined based on the road position in the world coordinate system.
  • the wheel connection line of the road position of the wheel pair in the world coordinate system is first calculated; and then the orientation of the wheel connection line around the direction of gravity is determined as the orientation angle of the vehicle.
  • Any two front and rear wheels of a vehicle lie on a straight line. Therefore, the connection line between the road positions of the wheel pair in the world coordinate system is calculated, and the direction of this wheel line about the gravity axis is the orientation of the vehicle (i.e., the yaw angle). In other words, when the road coordinates of the wheel pair are known, the vehicle orientation can be accurately estimated.
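Given the two road positions, the yaw angle described above is simply the angle of their connecting line in the ground plane; a minimal sketch with a hypothetical helper:

```python
import math

def yaw_from_wheel_pair(front, rear):
    """Orientation (yaw) of the vehicle as the direction of the line joining
    the road positions of the farthest wheel pair, measured about the gravity
    (vertical) axis. Positions are (x_forward, y_lateral) in the world frame."""
    dx = front[0] - rear[0]
    dy = front[1] - rear[1]
    return math.atan2(dy, dx)  # radians, in the ground plane
```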
  • the target obstacle image detected in the detection area is acquired, where the target obstacle image includes multiple wheels;
  • when the multiple wheels are wheels of the same vehicle,
  • the wheel pair with the largest distance between the multiple wheels is determined;
  • the embodiments of the present application can quickly determine the orientation of 3D obstacle vehicles in the area near the host vehicle, solving the technical problem in the related art of inaccurate determination of 3D obstacle orientation caused by the truncated information of 3D obstacles in the pixel coordinate system. Using the embodiments of the present application, the accuracy of the orientation angle of the obstacle vehicle can be effectively improved.
  • When the multiple wheels in the target obstacle image are wheels of the same vehicle, consistency verification is performed on the dimensions of the multiple wheels. If the consistency check succeeds, the step of determining the wheel pair with the largest distance between the multiple wheels is performed; if the consistency check fails, the wheels are completed according to the set strategy.
  • There are many ways to check the consistency of the dimensions of the multiple wheels. One is to first calculate the area of each wheel and then compare whether the areas are equal or approximately equal. If the areas of the multiple wheels are equal or nearly equal, the consistency of the multiple wheels is considered strong, that is, the consistency check succeeds. If at least one wheel's area differs significantly from the others, the wheel with the smaller area is considered to be a truncated wheel, and the truncated wheel needs to be completed according to the set strategy.
  • The purpose of the wheel consistency check in this embodiment is to handle partial truncation of some wheels. For example, when there are slightly truncated wheels among the multiple wheels but the consistency check on their dimensions still succeeds, this embodiment by default does not enter the completion processing for the slightly truncated wheels.
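The area-based consistency check can be sketched as follows; the relative tolerance is an assumed value, not one given in this application.

```python
def wheel_areas_consistent(areas, tolerance=0.15):
    """Consistency check on wheel areas: succeed when every area is within
    `tolerance` (relative) of the largest area. Wheels falling outside the
    tolerance are treated as truncated and returned for completion."""
    largest = max(areas)
    truncated = [i for i, a in enumerate(areas)
                 if (largest - a) / largest > tolerance]
    return len(truncated) == 0, truncated
```

With a 15% tolerance, slightly truncated wheels still pass the check (and skip completion), matching the default behaviour described above, while badly truncated wheels are flagged.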
  • The method may further include: judging whether each wheel among the plurality of wheels is truncated; if there is no wheel truncation, performing the step of verifying the consistency of all wheel sizes; if there is wheel truncation, completing the wheels according to the set strategy.
  • There are many ways to judge whether each wheel is truncated. For example, the diameters of the wheels of the same vehicle can be compared: the largest wheel is the complete wheel, and the rest are truncated wheels. Alternatively, the areas can be calculated; if the difference between the areas of the two larger wheels is less than a predetermined value, both can also be determined to be complete wheels, and so on.
  • Examples are shown in Figures 2 to 4. In Figure 2 above, three detected wheels of a large truck are taken as an example: the wheels numbered 1 to 3 are all complete wheels, with no truncation.
  • Figure 3 is a schematic diagram of a truncated wheel provided by an embodiment of the present application.
  • Figure 3 still takes a large truck as an example. It can be seen from Figure 3 that the wheel numbered 1 is a truncated wheel, while the wheels numbered 2 and 3 are complete wheels.
  • Figure 4 is a schematic diagram of another truncated wheel provided by the embodiment of the present application.
  • Figure 4 still takes a large truck as an example. It can be seen from Figure 4 that the wheels numbered 1 and 3 are truncated wheels, while the wheel numbered 2 is a complete wheel. It should be noted that the truncated wheels described in Figures 3 and 4 are only examples; practical applications are not limited to these.
  • The method may further include: determining whether each wheel among the plurality of wheels is truncated; if there is no truncation, entering the wheel consistency check; if the wheel-size consistency is strong, then determining whether the wheels match the wheel template size in the template pool. If the size meets the threshold of the template pool, the match is successful, and the matched wheel template is used to complete all truncated wheels.
  • Wheel detection is chosen for the following reasons: 1. The size of a wheel is relatively fixed and its features are obvious, so the detection network is easy to design and apply; 2. There must be a grounding point at each vehicle's wheels, and the coordinate point of the wheel's grounding point in the pixel coordinate system can be transformed to the corresponding coordinate point in the world coordinate system through inverse perspective transformation; 3. The wheels of the same vehicle are the same size, so the completion strategy can be used to solve the problem of partial wheel truncation.
  • the completion of the wheel according to a set strategy includes:
  • the average value of the wheel areas of the two complete wheels is calculated, and all truncated wheels are completed according to the average value.
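The last completion strategy above (completing from the average of two complete wheels) can be sketched as follows; representing each wheel by its area alone is a simplification for illustration, and the helper name is hypothetical.

```python
def complete_truncated_wheels(areas, truncated):
    """Complete all truncated wheels using the average area of two complete
    wheels, as in the strategy described above. Returns the corrected areas.

    areas: per-wheel areas; truncated: indices of truncated wheels.
    """
    complete = [i for i in range(len(areas)) if i not in truncated]
    if len(complete) < 2:
        raise ValueError("need at least two complete wheels for this strategy")
    avg = (areas[complete[0]] + areas[complete[1]]) / 2.0
    return [avg if i in truncated else a for i, a in enumerate(areas)]
```

The other two strategies in the text differ only in the reference used: a single complete wheel, or the wheel template matched from the template pool.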
  • The wheel orientation angle is not corrected for all obstacle vehicles perceived by the host vehicle; instead, it is corrected only for wheels that match the wheel templates in the template pool. This finely filters out vehicles that are far away and may produce large errors, ensuring a positive benefit for perceiving vehicle orientation.
  • Figure 5 is an application example diagram of a method for determining the direction of an obstacle provided by an embodiment of the present application.
  • the method includes:
  • Step 501 Obtain a target obstacle image detected in the detection area, where the target obstacle image includes multiple wheels.
  • Step 502 Determine whether the multiple wheels are wheels of the same vehicle. If so, execute step 503; otherwise, execute step 512;
  • Step 503 Perform consistency check on the sizes of the plurality of wheels; if the consistency check fails, execute step 504, and then execute step 505; if the consistency check succeeds, execute step 505;
  • Step 504 Complete the wheel according to the set strategy.
  • Step 505 Match each wheel size in the plurality of wheels with the wheel template size in the template pool; if the matching result meets the set threshold, perform step 506; if the matching result does not meet the set threshold, perform step 512:
  • Step 506 Obtain the coordinate information in pixel coordinates of the wheel that meets the set threshold
  • Step 507 Calculate the distance between two wheels respectively according to the coordinate information
  • Step 508 Select the pair of wheels with the largest distance
  • Step 509 Obtain the ground contact position of the wheel pair with the largest distance in the target obstacle image
  • Step 510 Map the grounding position to the road position in the world coordinate system through the inverse perspective transformation formula
  • Step 511 Determine the orientation angle of the vehicle based on the road position in the world coordinate system.
  • Step 512 End this operation.
  • The vehicle orientation determined through wheel detection in the side camera is very robust. Therefore, using the embodiment of the present application, the accuracy of vehicle orientation-angle detection in the detection area set near the host vehicle can be effectively improved.
  • Figure 6 is another flow chart of a method for determining the direction of an obstacle provided by an embodiment of the present application.
  • the method includes:
  • Step 601 Determine the first orientation angle of the current vehicle in the detection area, where the first orientation angle is the orientation angle of the vehicle determined based on the wheel pair in the target obstacle image in the detection area;
  • the determination process of the first orientation angle includes: the main control platform or vehicle machine of the vehicle acquires a target obstacle image detected in the detection area, where the target obstacle image includes a plurality of wheels; when the plurality of wheels are wheels of the same vehicle, the wheel pair with the largest distance between the multiple wheels is determined; the ground contact position of that wheel pair in the target obstacle image is obtained; the ground contact position is mapped to the road position in the world coordinate system; and the orientation angle of the vehicle, that is, the first orientation angle, is determined based on the road position in the world coordinate system.
  • Step 602 Obtain the second orientation angle of the current vehicle detected in the detection area, output by the 3D obstacle detection model.
  • The vehicle's main control platform or vehicle machine can directly obtain the second orientation angle of the vehicle in the current detection area through the 3D obstacle detection model.
  • The specific process of obtaining it is well known to those skilled in the art and will not be described again here.
  • Step 603 Determine the difference between the first orientation angle and the second orientation angle
  • the vehicle's main control platform or vehicle machine calculates the difference between the first orientation angle and the second orientation angle through a calculation formula.
  • Step 604 If the difference is less than the preset threshold, determine the angle average of the first orientation angle and the second orientation angle;
  • The preset threshold is a hyperparameter Thres, whose value is chosen according to the performance of the particular 3D obstacle detection model.
  • Step 605 Adjust the average angle to the orientation angle of the current vehicle.
  • the main control platform or vehicle machine of the vehicle adjusts the average angle to the current orientation angle of the vehicle.
  • In this embodiment, the first orientation angle of the current vehicle in the detection area is determined, and the second orientation angle of the current vehicle detected in the detection area, output by the 3D obstacle detection model, is obtained; the difference between the first orientation angle and the second orientation angle is determined;
  • if the difference is less than the preset threshold, the angle average of the first orientation angle and the second orientation angle is determined, and the angle average is adjusted to be the orientation angle of the current vehicle.
  • That is, the orientation angle of the obstacle vehicle determined based on the wheels is fused with the orientation angle of the obstacle vehicle output by the 3D obstacle detection model, thereby adaptively adjusting the orientation angle of the obstacle vehicle and improving the accuracy of the vehicle orientation angle.
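A minimal sketch of the fusion rule described above (difference test, then averaging), assuming angles in radians and a hypothetical 10-degree value for the hyperparameter Thres; the wrap-around handling is an added safeguard not spelled out in the text:

```python
import math

def fuse_headings(theta_wheel, theta_model, thres=math.radians(10)):
    """If the wheel-based and model-based orientation angles agree to
    within the threshold, return their mean; otherwise return None so
    the caller can fall back to the history-based branch.
    The difference is wrapped to (-pi, pi] to be robust near +/-pi."""
    diff = (theta_wheel - theta_model + math.pi) % (2 * math.pi) - math.pi
    if abs(diff) < thres:
        return theta_model + diff / 2.0  # mean, robust to wrap-around
    return None
```

A plain arithmetic mean would misbehave when one angle is near +pi and the other near -pi; averaging via the wrapped difference avoids that.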
  • Figure 7 is another flow chart of a method for determining the direction of an obstacle provided by an embodiment of the present application.
  • the method includes:
  • Step 701 Determine the first orientation angle of the current vehicle in the detection area, where the first orientation angle is the orientation angle of the vehicle determined based on the wheel pair in the target obstacle image of the detection area.
  • Step 702 Obtain the second orientation angle of the current vehicle detected in the detection area output by the 3D obstacle detection model.
  • Step 703 Determine the difference between the first orientation angle and the second orientation angle.
  • Step 704 Determine whether the difference is less than the preset threshold. If it is less, perform steps 705 and 706; otherwise, perform steps 707 to 709.
  • Step 705 Determine the average angle of the first orientation angle and the second orientation angle.
  • Step 706 Determine the average angle as the orientation angle of the current vehicle, and end this operation.
  • Step 707 Obtain the historical heading angle of the current vehicle.
  • Step 708 Perform curve fitting on the historical orientation angle through a random sampling consistency (ransac) check algorithm, and predict the third orientation angle of the current vehicle.
  • The random sample consensus (RANSAC) check algorithm usually chooses a curve of degree one, that is, a straight line. This is because under normal driving conditions the orientation of the vehicle is relatively fixed; even when changing lanes, the vehicle orientation does not change greatly, so the vehicle orientation is stable on a linear curve. Even if the vehicle makes a U-turn, it can be treated as a uniform angular velocity state, and its curve is still linear.
  • The historical orientation angles of the current vehicle are evaluated through the random sample consensus check, which yields a prediction of the current angle from the historical orientation angles, called the third orientation angle.
  • Whichever of the first orientation angle and the second orientation angle is closest to the third orientation angle is determined as the orientation angle of the current vehicle.
  • The purpose is to prevent severe distortion of either the determined orientation angle or the orientation angle predicted by the model from affecting the final detection result.
  • Step 709 Determine the first orientation angle or the second orientation angle that is closest to the third orientation angle as the orientation angle of the current vehicle.
  • Embodiments of the present application compare the determined vehicle orientation angle with the vehicle orientation angle predicted by the model, and perform fusion according to the fusion strategy based on the comparison results, thereby improving the accuracy of the vehicle orientation angle.
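The history-based fallback of steps 707 to 709 can be sketched as follows. This is an illustrative degree-one RANSAC fit over hypothetical (timestep, angle) pairs; the iteration count, inlier tolerance and function names are assumptions, not part of the disclosed method:

```python
import random

def predict_next_heading(history, n_iters=100, tol=0.05, seed=0):
    """Fit heading(t) = a*t + b to the historical angles with a simple
    RANSAC loop (degree-one curve, as in step 708) and extrapolate one
    step ahead to obtain the third orientation angle."""
    rng = random.Random(seed)
    pts = list(enumerate(history))
    best_model, best_inliers = None, -1
    for _ in range(n_iters):
        (t1, y1), (t2, y2) = rng.sample(pts, 2)
        if t1 == t2:
            continue
        a = (y2 - y1) / (t2 - t1)
        b = y1 - a * t1
        inliers = sum(abs(a * t + b - y) <= tol for t, y in pts)
        if inliers > best_inliers:
            best_model, best_inliers = (a, b), inliers
    a, b = best_model
    return a * len(history) + b

def closest_to(third, first, second):
    """Step 709: keep whichever measured angle the prediction supports."""
    return first if abs(first - third) <= abs(second - third) else second
```

The outlier-tolerant fit matters here: a single corrupted historical angle (for example 0.9 in an otherwise linear 0.0, 0.1, 0.2, 0.4 sequence) should not drag the extrapolation away from the trend.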
  • FIG. 8 is a block diagram of a device for determining the direction of an obstacle provided by an embodiment of the present application.
  • the device includes: a first acquisition module 801, a first determination module 802, a second acquisition module 803, a mapping module 804 and a second determination module 805, wherein,
  • the first acquisition module 801 is used to acquire target obstacle images detected in the detection area, where the target obstacle images include multiple wheels;
  • the first determination module 802 is configured to determine the wheel pair with the largest distance between the plurality of wheels when the plurality of wheels are wheels of the same vehicle;
  • the second acquisition module 803 is used to acquire the ground contact position of the wheel pair with the largest distance in the target obstacle image
  • the mapping module 804 is used to map the ground contact position to the road position in the world coordinate system
  • the second determination module 805 is used to determine the orientation angle of the vehicle based on the road position in the world coordinate system.
  • the first determination module includes: a first matching module, a third acquisition module, a first calculation module and a first selection module, wherein,
  • the first matching module is used to match each wheel size in the plurality of wheels with the wheel template size in the template pool;
  • the third acquisition module is used to acquire the coordinate information in the pixel coordinates of the wheel that satisfies the set threshold when the matching result of the first matching module meets the set threshold;
  • the first calculation module is used to calculate the distance between two wheels according to the coordinate information
  • the first selection module is used to select a pair of wheels with the largest distance.
  • the device further includes: a verification module and a policy completion module, wherein,
  • the verification module is used to perform consistency verification on the dimensions of the plurality of wheels when the plurality of wheels are wheels of the same vehicle;
  • the first determination module is also configured to determine the wheel pair with the largest distance between the plurality of wheels when the consistency check of the verification module is successful;
  • the strategy completion module is used to complete the wheel according to a set strategy when the consistency check by the verification module fails.
  • the device further includes: a first judgment module, wherein,
  • the first judgment module is used to judge whether each wheel in the plurality of wheels has wheel truncation before the verification module performs consistency verification on the dimensions of the plurality of wheels;
  • the first determination module is also configured to determine the wheel pair with the largest distance between the plurality of wheels when the first determination module determines that there is no wheel truncation.
  • the completion module is also configured to complete the wheel according to the set strategy when the first judgment module determines that there is wheel truncation.
  • the completion module includes: a second selection module, a second matching module and a first completion module; and/or a third selection module and a second completion module; and/or a fourth selection module and a third completion module; wherein,
  • the second selection module is used to select the truncated wheel with the largest area
  • the second matching module is used to match the truncated wheel with the largest area with the wheel template in the template pool;
  • the first completion module is used to complete all truncated wheels according to the wheel template matched by the second matching module;
  • the third selection module is used to select a complete wheel from the plurality of wheels
  • the second completion module is used to complete all the truncated wheels according to the complete wheels selected by the third selection module;
  • the fourth selection module is used to select two complete wheels from the plurality of wheels
  • the second calculation module is used to calculate the average of the wheel areas of the two complete wheels selected by the fourth selection module, and complete all the truncated wheels based on the average.
  • the mapping module is specifically configured to map the ground-contact position to the road position in the world coordinate system through an inverse perspective transformation formula.
  • the second determination module includes: a third calculation module and an orientation angle determination module, wherein,
  • the third calculation module is used to calculate the wheel connection line of the road position of the wheel pair in the world coordinate system
  • the orientation angle determination module is used to determine the orientation of the wheel connection line around the direction of gravity as the orientation angle of the vehicle.
  • FIG. 9 is another block diagram of a device for determining the direction of an obstacle provided by an embodiment of the present application.
  • the device includes: a first determination module 901, a first acquisition module 902, a second determination module 903, a third determination module 904 and a fourth determination module 905, wherein,
  • the first determination module 901 is used to determine the first orientation angle of the current vehicle in the detection area, where the first orientation angle is the orientation angle of the current vehicle determined based on the wheel pair in the target obstacle image of the detection area;
  • the first acquisition module 902 is used to acquire the second orientation angle of the current vehicle detected in the detection area output by the 3D obstacle detection model;
  • the second determination module 903 is used to determine the difference between the first orientation angle and the second orientation angle
  • the third determination module 904 is configured to determine the angle average of the first orientation angle and the second orientation angle when the difference is less than a preset threshold
  • the fourth determination module 905 is used to determine the angle average as the orientation angle of the current vehicle.
  • the device may further include: a second acquisition module, a fitting module and a fifth determination module, wherein,
  • the second acquisition module is used to acquire the historical orientation angle of the current vehicle when the difference is not less than the preset threshold
  • the fitting module is used to perform curve fitting on the historical orientation angle through a random sampling consistency RANSAC verification algorithm and predict the third orientation angle of the current vehicle;
  • the fifth determination module is used to determine the first orientation angle or the second orientation angle that is closest to the third orientation angle as the orientation angle of the current vehicle.
  • the first determination module includes:
  • the third acquisition module is used to acquire the target obstacle image detected in the detection area, where the target obstacle image includes multiple wheels;
  • a sixth determination module configured to determine the wheel pair with the largest distance between the plurality of wheels when the plurality of wheels are wheels of the same vehicle;
  • the fourth acquisition module is used to acquire the ground contact position of the wheel pair with the largest distance in the target obstacle image
  • a mapping module for mapping the grounding position to the road position in the world coordinate system
  • a seventh determination module is used to determine the first orientation angle of the vehicle based on the road position in the world coordinate system.
  • the device embodiments described above are only illustrative.
  • the modules described as separate components may or may not be physically separated.
  • the components shown as modules may or may not be physical modules, that is, they may be located in one place or distributed over multiple network units. Some or all of the modules can be selected according to actual needs to achieve the purpose of the solution of this embodiment. Persons of ordinary skill in the art can understand and implement it without creative effort.
  • FIG. 10 is an obstacle orientation determination system provided by an embodiment of the present application.
  • the system is based on a 3D obstacle detection network 1000.
  • the system includes: a 2D obstacle detection module 1001 and a parameter transformation module 1002.
  • the 2D obstacle detection module 1001 and the parameter transformation module 1002 may be located in a 2D detection network, and the 2D detection network may also be called a wheel detection network.
  • the 2D obstacle detection module 1001 is used to detect the image decoded by the decoder in the 3D obstacle detection network 1000, and obtain a target obstacle image in the detection area, where the target obstacle image includes a plurality of wheels; When the wheels are wheels of the same vehicle, determine the wheel pair with the largest distance between the multiple wheels; obtain the ground contact position of the wheel pair with the largest distance in the target obstacle image;
  • the parameter transformation module 1002 is used to map the ground contact position to the road position in the world coordinate system in the 3D obstacle detection network; and determine the orientation angle of the vehicle based on the road position in the world coordinate system.
  • the 3D obstacle detection network 1000 is used to detect 3D objects. Its goal is usually to find all objects of interest in the scene based on point cloud data, such as vehicles, pedestrians and static obstacles in autonomous driving scenes. It can include, but is not limited to, the following modules connected in sequence: an image module, a feature (backbone) module, a decoder, a 3D obstacle detection module and a 3D bounding box (3D BBox) module. Each 3D bounding box corresponds to an object in the scene, and a 3D BBox can be represented in a variety of ways.
  • the 2D obstacle detection network (i.e. wheel detection network) 1003 includes: a 2D obstacle detection module 1001 and a parameter transformation module 1002.
  • Figure 10A is an application block diagram of a system for determining the direction of obstacles adopted by the embodiment of the present application.
  • the 2D obstacle detection network (i.e., wheel detection network) provided in this embodiment is connected after the decoder module in the 3D obstacle detection network.
  • the 2D obstacle detection network can use the simplest YOLO network. The 2D obstacle detection module in the 2D obstacle detection network then detects the image decoded by the decoder, obtaining the plurality of wheels in the target obstacle image of the detection area and the wheel detection box of each wheel.
  • the wheel detection boxes of the same vehicle are first sorted, the distance between any two wheels belonging to the same vehicle is calculated, the distances are sorted in descending order, and the pair of wheels that is farthest apart is found, together with its ground-contact positions. Then, in the parameter transformation module 1002,
  • the orientation angle of the vehicle in the world coordinate system is obtained through the inverse perspective transformation formula. At the same time, it is also necessary to judge whether the wheel in each wheel box is truncated; if a wheel is truncated, the truncated wheels are completed according to the strategy. This improves the detection accuracy of the vehicle orientation angle in the detection area near the host vehicle.
  • this embodiment of the present application also provides an electronic device, including:
  • memory for storing instructions executable by the processor
  • the processor is configured to execute the instructions to implement the obstacle orientation determining method as described above.
  • embodiments of the present application also provide a computer-readable storage medium.
  • the computer-readable storage medium may be ROM, random access memory (RAM), CD-ROM, magnetic tape, floppy disk, optical data storage device, etc.
  • embodiments of the present application also provide a computer program product, including a computer program or instructions that, when executed by a processor, implement the method for determining the obstacle orientation as described above.
  • this embodiment of the present application also provides an electronic device, as shown in Figure 11, including a processor 1101, a communication interface 1102, a memory 1103 and a communication bus 1104, where the processor 1101, the communication interface 1102 and the memory 1103 communicate with each other through the communication bus 1104, and where,
  • the memory 1103 is used to store computer programs
  • the processor 1101 is configured to implement the method for determining the direction of the obstacle as described above when executing the program stored in the memory 1103 .
  • the communication bus mentioned in the above terminal can be a Peripheral Component Interconnect (PCI) bus or an Extended Industry Standard Architecture (EISA) bus, etc.
  • the communication bus can be divided into address bus, data bus, control bus, etc. For ease of presentation, only one thick line is used in the figure, but it does not mean that there is only one bus or one type of bus.
  • the communication interface is used for communication between the above terminal and other devices.
  • the memory may include Random Access Memory (RAM) or non-volatile memory (non-volatile memory), such as at least one disk memory.
  • the memory may also be at least one storage device located remotely from the aforementioned processor.
  • the above-mentioned processor can be a general-purpose processor, including a central processing unit (CPU), a network processor (NP), etc.; it can also be a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component.
  • the electronic device may be implemented by one or more application specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field programmable gate arrays (FPGAs), controllers, microcontrollers, microprocessors or other electronic components, for executing the method for determining the obstacle orientation shown above.
  • Also provided is a non-transitory computer-readable storage medium including instructions, such as the memory 1103 including instructions.
  • the instructions can be executed by the processor 1101 of the electronic device to complete the method for determining the obstacle orientation shown above.
  • the non-transitory computer-readable storage medium may be ROM, random access memory (RAM), CD-ROM, magnetic tape, floppy disk, optical data storage device, etc.
  • a computer program product is also provided.
  • When the instructions in the computer program product are executed by the processor 1101 of the electronic device, the electronic device performs the obstacle orientation determination method shown above.
  • Figure 12 is a block diagram of a device 1200 for determining the direction of an obstacle provided by an embodiment of the present application.
  • device 1200 may be provided as a server.
  • apparatus 1200 includes a processing component 1222, which further includes one or more processors, and memory resources represented by memory 1232 for storing instructions, such as application programs, executable by processing component 1222.
  • the application program stored in memory 1232 may include one or more modules, each corresponding to a set of instructions.
  • the processing component 1222 is configured to execute instructions to perform the above-described method.
  • Device 1200 may also include a power supply component 1226 configured to perform power management of device 1200, a wired or wireless network interface 1250 configured to connect device 1200 to a network, and an input-output (I/O) interface 1258.
  • Device 1200 may operate based on an operating system stored in memory 1232, such as Windows ServerTM, Mac OS XTM, UnixTM, LinuxTM, FreeBSDTM or the like.


Abstract

A method, apparatus, system, device, medium and product for determining the orientation of an obstacle. The method includes: acquiring a target obstacle image detected in a detection area, the target obstacle image including a plurality of wheels; when the plurality of wheels are wheels of the same vehicle, determining the wheel pair with the largest distance among the plurality of wheels; acquiring the ground-contact position of the wheel pair with the largest distance in the target obstacle image; mapping the ground-contact position to a road position in the world coordinate system; and determining the orientation angle of the vehicle based on the road position in the world coordinate system. That is, based on wheel detection, the embodiments of the present application can quickly determine the orientation of a 3D obstacle vehicle in the area near the host vehicle, solving the technical problem of inaccurate 3D obstacle orientation caused by truncated information of the 3D obstacle in the pixel coordinate system. With the embodiments of the present application, the accuracy of the orientation angle of an obstacle vehicle can be effectively improved.

Description

Method, apparatus, system, device, medium and product for determining the orientation of an obstacle
This application claims priority to the Chinese patent application filed with the Chinese Patent Office on April 2, 2022, with application number 202210343854.4 and entitled "Method, apparatus, system, device, medium and product for determining the orientation of an obstacle", the entire contents of which are incorporated herein by reference.
Technical Field
The present application relates to the technical field of obstacle detection, and in particular to a method, apparatus, system, device, computer-readable storage medium and computer program product for determining the orientation of an obstacle.
Background
With the gradual maturation of 3D obstacle detection and lidar technology, in the related art, although the orientation (yaw) angle of a vehicle can be determined by obtaining its 3D coordinate information from the pixel coordinate system, in camera-based surround-view perception tasks, when an obstacle (such as a large truck) crosses the field of view of the host vehicle's side camera, if the complete 3D coordinate information of the truck in the pixel coordinate system is not obtained, that is, truncated information of the obstacle exists in the pixel coordinate system, serious errors in the orientation of the 3D obstacle will result.
Therefore, how to accurately determine the orientation of a 3D obstacle is a technical problem that remains to be solved.
Overview
The present application provides a method, apparatus, system, electronic device, computer-readable storage medium and computer program product for determining the orientation of an obstacle, so as to at least solve the technical problem in the related art that the orientation of a 3D obstacle is determined inaccurately due to truncated information of the 3D obstacle in the pixel coordinate system. The technical solution of the present application is as follows:
According to a first aspect of the embodiments of the present application, a method for determining the orientation of an obstacle is provided, including:
acquiring a target obstacle image detected in a detection area, the target obstacle image including a plurality of wheels;
when the plurality of wheels are wheels of the same vehicle, determining the wheel pair with the largest distance among the plurality of wheels;
acquiring the ground-contact position of the wheel pair with the largest distance in the target obstacle image;
mapping the ground-contact position to a road position in the world coordinate system;
determining the orientation angle of the vehicle based on the road position in the world coordinate system.
Optionally, determining the wheel pair with the largest distance among the plurality of wheels includes:
matching the size of each of the plurality of wheels against the wheel template sizes in a template pool;
if the matching result satisfies a set threshold, acquiring the coordinate information, in pixel coordinates, of the wheels that satisfy the set threshold;
calculating the pairwise distances between wheels from the coordinate information;
selecting the pair of wheels with the largest distance.
Optionally, when the plurality of wheels are wheels of the same vehicle, the method further includes:
performing a consistency check on the sizes of the plurality of wheels;
if the consistency check succeeds, performing the step of determining the wheel pair with the largest distance among the plurality of wheels;
if the consistency check fails, completing the wheels according to a set strategy.
Optionally, before performing the consistency check on the sizes of the plurality of wheels, the method further includes:
judging whether wheel truncation exists for each of the plurality of wheels;
if no wheel truncation exists, performing the step of checking the consistency of all wheel sizes;
if wheel truncation exists, completing the wheels according to the set strategy.
Optionally, completing the wheels according to the set strategy includes:
selecting the truncated wheel with the largest area, matching the truncated wheel with the largest area against the wheel templates in the template pool, and completing all truncated wheels according to the matched wheel template; or
selecting one complete wheel from the plurality of wheels, and completing all truncated wheels according to the selected complete wheel; or
selecting two complete wheels from the plurality of wheels, calculating the average of the wheel areas of the two complete wheels, and completing all truncated wheels according to the average.
Optionally, mapping the ground-contact position to a road position in the world coordinate system includes:
mapping the ground-contact position to the road position in the world coordinate system through an inverse perspective transformation formula.
Optionally, determining the orientation angle of the vehicle based on the road position in the world coordinate system includes:
calculating the wheel connecting line of the road positions of the wheel pair in the world coordinate system;
determining the orientation of the wheel connecting line about the direction of gravity as the orientation angle of the vehicle.
According to a second aspect of the embodiments of the present application, a method for determining the orientation of an obstacle is provided, including:
determining a first orientation angle of a current vehicle in a detection area, the first orientation angle being the orientation angle of the current vehicle determined based on a wheel pair in a target obstacle image of the detection area;
acquiring a second orientation angle of the current vehicle detected in the detection area, output by a 3D obstacle detection model;
determining the difference between the first orientation angle and the second orientation angle;
if the difference is less than a preset threshold, determining the angle average of the first orientation angle and the second orientation angle;
determining the angle average as the orientation angle of the current vehicle.
Optionally, the method further includes:
if the difference is not less than the preset threshold, acquiring historical orientation angles of the current vehicle;
performing curve fitting on the historical orientation angles through a random sample consensus (RANSAC) check algorithm, and predicting a third orientation angle of the current vehicle;
determining whichever of the first orientation angle and the second orientation angle is closest to the third orientation angle as the orientation angle of the current vehicle.
Optionally, determining the first orientation angle of the current vehicle includes:
acquiring a target obstacle image detected in the detection area, the target obstacle image including a plurality of wheels;
when the plurality of wheels are wheels of the same vehicle, determining the wheel pair with the largest distance among the plurality of wheels;
acquiring the ground-contact position of the wheel pair with the largest distance in the target obstacle image;
mapping the ground-contact position to a road position in the world coordinate system;
determining the first orientation angle of the vehicle based on the road position in the world coordinate system.
According to a third aspect of the embodiments of the present application, an apparatus for determining the orientation of an obstacle is provided, including:
a first acquisition module, configured to acquire a target obstacle image detected in a detection area, the target obstacle image including a plurality of wheels;
a first determination module, configured to determine, when the plurality of wheels are wheels of the same vehicle, the wheel pair with the largest distance among the plurality of wheels;
a second acquisition module, configured to acquire the ground-contact position of the wheel pair with the largest distance in the target obstacle image;
a mapping module, configured to map the ground-contact position to a road position in the world coordinate system;
a second determination module, configured to determine the orientation angle of the vehicle based on the road position in the world coordinate system.
Optionally, the first determination module includes:
a first matching module, configured to match the size of each of the plurality of wheels against the wheel template sizes in a template pool;
a third acquisition module, configured to acquire, when the matching result of the first matching module satisfies a set threshold, the coordinate information in pixel coordinates of the wheels that satisfy the set threshold;
a first calculation module, configured to calculate the pairwise distances between wheels from the coordinate information;
a first selection module, configured to select the pair of wheels with the largest distance.
Optionally, the apparatus further includes:
a check module, configured to perform, when the plurality of wheels are wheels of the same vehicle, a consistency check on the sizes of the plurality of wheels;
the first determination module is further configured to determine the wheel pair with the largest distance among the plurality of wheels when the consistency check of the check module succeeds;
a completion module, configured to complete the wheels according to a set strategy when the consistency check of the check module fails.
Optionally, the apparatus further includes:
a first judgment module, configured to judge, before the check module performs the consistency check on the sizes of the plurality of wheels, whether wheel truncation exists for each of the plurality of wheels;
the first determination module is further configured to determine the wheel pair with the largest distance among the plurality of wheels when the first judgment module determines that no wheel truncation exists;
the completion module is further configured to complete the wheels according to the set strategy when the first judgment module determines that wheel truncation exists.
Optionally, the completion module includes: a second selection module, a second matching module and a first completion module; and/or a third selection module and a second completion module; and/or a fourth selection module and a third completion module; wherein,
the second selection module is configured to select the truncated wheel with the largest area;
the second matching module is configured to match the truncated wheel with the largest area against the wheel templates in the template pool;
the first completion module is configured to complete all truncated wheels according to the wheel template matched by the second matching module;
the third selection module is configured to select one complete wheel from the plurality of wheels;
the second completion module is configured to complete all truncated wheels according to the complete wheel selected by the third selection module;
the fourth selection module is configured to select two complete wheels from the plurality of wheels;
the second calculation module is configured to calculate the average of the wheel areas of the two complete wheels selected by the fourth selection module, and complete all truncated wheels according to the average.
Optionally, the mapping module is specifically configured to map the ground-contact position to the road position in the world coordinate system through an inverse perspective transformation formula.
Optionally, the second determination module includes:
a third calculation module, configured to calculate the wheel connecting line of the road positions of the wheel pair in the world coordinate system;
an orientation angle determination module, configured to determine the orientation of the wheel connecting line about the direction of gravity as the orientation angle of the vehicle.
According to a fourth aspect of the embodiments of the present application, an apparatus for determining the orientation of an obstacle is provided, including:
a first determination module, configured to determine a first orientation angle of a current vehicle in a detection area, the first orientation angle being the orientation angle of the current vehicle determined based on a wheel pair in a target obstacle image of the detection area;
a first acquisition module, configured to acquire a second orientation angle of the current vehicle detected in the detection area, output by a 3D obstacle detection model;
a second determination module, configured to determine the difference between the first orientation angle and the second orientation angle;
a third determination module, configured to determine the angle average of the first orientation angle and the second orientation angle when the difference is less than a preset threshold;
a fourth determination module, configured to determine the angle average as the orientation angle of the current vehicle.
Optionally, the apparatus further includes:
a second acquisition module, configured to acquire historical orientation angles of the current vehicle when the difference is not less than the preset threshold;
a fitting module, configured to perform curve fitting on the historical orientation angles through a random sample consensus (RANSAC) check algorithm, and predict a third orientation angle of the current vehicle;
a fifth determination module, configured to determine whichever of the first orientation angle and the second orientation angle is closest to the third orientation angle as the orientation angle of the current vehicle.
Optionally, the first determination module includes:
a third acquisition module, configured to acquire a target obstacle image detected in the detection area, the target obstacle image including a plurality of wheels;
a sixth determination module, configured to determine, when the plurality of wheels are wheels of the same vehicle, the wheel pair with the largest distance among the plurality of wheels;
a fourth acquisition module, configured to acquire the ground-contact position of the wheel pair with the largest distance in the target obstacle image;
a mapping module, configured to map the ground-contact position to a road position in the world coordinate system;
a seventh determination module, configured to determine the first orientation angle of the vehicle based on the road position in the world coordinate system.
According to a fifth aspect of the embodiments of the present application, a system for determining the orientation of an obstacle is provided, the system being applied to a 3D obstacle detection network, the system including:
a 2D obstacle detection module, configured to detect the image decoded by the decoder in the 3D obstacle detection network and acquire a target obstacle image of the detection area, the target obstacle image including a plurality of wheels; determine, when the plurality of wheels are wheels of the same vehicle, the wheel pair with the largest distance among the plurality of wheels; and acquire the ground-contact position of the wheel pair with the largest distance in the target obstacle image;
a parameter transformation module, configured to map the ground-contact position to a road position in the world coordinate system in the 3D obstacle detection network, and determine the orientation angle of the vehicle based on the road position in the world coordinate system.
In the embodiments of the present application, the system is based on a 3D obstacle detection network. That is, the wheel detection network provided in this embodiment branches a 2D head off the 3D obstacle detection network, so the changes to the backbone network are small. Meanwhile, after the wheels are detected, the detection results are checked and the wheels are completed through a post-processing step, thereby obtaining a more refined orientation angle of the vehicle.
According to a sixth aspect of the embodiments of the present application, an electronic device is provided, including:
a processor;
a memory for storing instructions executable by the processor;
wherein the processor is configured to execute the instructions to implement the method for determining the orientation of an obstacle as described above.
According to a seventh aspect of the embodiments of the present application, a computer-readable storage medium is provided. When the instructions in the computer-readable storage medium are executed by a processor of an electronic device, the electronic device is enabled to perform the method for determining the orientation of an obstacle as described above.
According to an eighth aspect of the embodiments of the present application, a computer program product is provided, including a computer program or instructions that, when executed by a processor, implement the method for determining the orientation of an obstacle as described above.
The technical solutions provided by the embodiments of the present application bring at least the following beneficial effects:
In the embodiments of the present application, when the target obstacle image detected in the detection area includes a plurality of wheels and the plurality of wheels are wheels of the same vehicle, the wheel pair with the largest distance among the plurality of wheels is determined; the ground-contact position of the wheel pair in the target obstacle image is acquired; the ground-contact position is mapped to a road position in the world coordinate system; and the orientation angle of the vehicle is determined based on the road position in the world coordinate system. That is, in the embodiments of the present application, based on wheel detection, the ground-contact positions of the pair of wheels with the largest distance among the plurality of wheels of the same vehicle are acquired, and the fact that the line connecting the ground-contact positions of this pair of wheels lies on a straight line is used to determine the orientation angle (i.e., the yaw angle) of the obstacle vehicle. Therefore, based on wheel detection, the embodiments of the present application can quickly determine the orientation of a 3D obstacle vehicle in the area near the host vehicle, solving the technical problem in the related art that the orientation of a 3D obstacle is determined inaccurately due to truncated information of the 3D obstacle in the pixel coordinate system. With the embodiments of the present application, the accuracy of the orientation angle of an obstacle vehicle can be effectively improved.
It should be understood that the above general description and the following detailed description are merely exemplary and explanatory, and do not limit the present application.
The above description is only an overview of the technical solution of the present application. In order to understand the technical means of the present application more clearly so that it can be implemented according to the contents of the specification, and to make the above and other objects, features and advantages of the present application more apparent, specific embodiments of the present application are set forth below.
Brief Description of the Drawings
The accompanying drawings here are incorporated into and constitute a part of this specification, illustrate embodiments consistent with the present application, and are used together with the specification to explain the principles of the present application; they do not constitute an improper limitation of the present application. In order to explain the technical solutions in the embodiments of the present application or in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description are some embodiments of the present application, and for those of ordinary skill in the art, other drawings can be obtained from these drawings without creative effort.
Figure 1 is a flow chart of a method for determining the orientation of an obstacle provided by an embodiment of the present application.
Figure 2 is a schematic diagram of marking wheel detection boxes provided by an embodiment of the present application.
Figure 3 is a schematic diagram of a truncated wheel provided by an embodiment of the present application.
Figure 4 is a schematic diagram of a truncated wheel provided by an embodiment of the present application.
Figure 5 is an application example diagram of a method for determining the orientation of an obstacle provided by an embodiment of the present application.
Figure 6 is another flow chart of a method for determining the orientation of an obstacle provided by an embodiment of the present application.
Figure 7 is yet another flow chart of a method for determining the orientation of an obstacle provided by an embodiment of the present application.
Figure 8 is a block diagram of an apparatus for determining the orientation of an obstacle provided by an embodiment of the present application.
Figure 9 is another block diagram of an apparatus for determining the orientation of an obstacle provided by an embodiment of the present application.
Figure 10 is a block diagram of a system for determining the orientation of an obstacle provided by an embodiment of the present application.
Figure 10A is an application block diagram of a system for determining the orientation of an obstacle provided by an embodiment of the present application.
Figure 11 is a block diagram of an electronic device provided by an embodiment of the present application.
Figure 12 is a block diagram of an apparatus for determining the orientation of an obstacle provided by an embodiment of the present application.
Detailed Description
In order to enable those of ordinary skill in the art to better understand the technical solutions of the present application, the technical solutions in the embodiments of the present application will be described clearly and completely below with reference to the accompanying drawings.
It should be noted that the terms "first", "second", etc. in the specification and claims of the present application and the above drawings are used to distinguish similar objects, and are not necessarily used to describe a specific order or sequence. It should be understood that the data so used are interchangeable under appropriate circumstances, so that the embodiments of the present application described here can be implemented in orders other than those illustrated or described here. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the present application. On the contrary, they are merely examples of apparatuses and methods consistent with some aspects of the present application as detailed in the appended claims.
Figure 1 is a flow chart of a method for determining the orientation of an obstacle provided by an embodiment of the present application. As shown in Figure 1, the method for determining the orientation of an obstacle includes the following steps:
Step 101: Acquire a target obstacle image detected in a detection area, the target obstacle image including a plurality of wheels.
Step 102: When the plurality of wheels are wheels of the same vehicle, determine the wheel pair with the largest distance among the plurality of wheels.
Step 103: Acquire the ground-contact position of the wheel pair with the largest distance in the target obstacle image.
Step 104: Map the ground-contact position to a road position in the world coordinate system.
Step 105: Determine the orientation angle of the vehicle based on the road position in the world coordinate system.
The method for determining the orientation of an obstacle described in the embodiments of the present application can be applied to a terminal, etc. The implementing device of the terminal may be an electronic device such as a vehicle-mounted terminal, the main control platform of an autonomous vehicle, or a vehicle machine, which is not limited here.
The specific implementation steps of the method for determining the orientation of an obstacle provided by the embodiments of the present application are described in detail below with reference to Figure 1.
In step 101, a target obstacle image detected in the detection area is acquired, the target obstacle image including a plurality of wheels.
In this step, a 2D obstacle detection network (2D head, also called a 2D detection head) is branched off from the 3D obstacle detection network (3D head, also called a 3D detection head), and the 2D head is used to detect the wheels of vehicles in a preset area. That is, through the camera of the 2D head, the main control platform of the host vehicle can acquire the target obstacle image, such as a vehicle image, in a predetermined area (i.e., the detection area) near the host vehicle. The target obstacle image may include a plurality of wheels, and the plurality of wheels may be wheels of the same vehicle or wheels of different vehicles. When the wheels belong to different vehicles and a vehicle has only one detected wheel, the system automatically filters out the case where a vehicle has only one wheel. After that, the plurality of wheels of each vehicle in the target obstacle image are marked; the wheels can be marked with wheel detection boxes, and further, the wheel detection boxes of each vehicle can be numbered and sorted. For example, the main control platform detects three wheel detection boxes of a large truck in the predetermined area, as shown in Figure 2, which is a schematic diagram of marking wheel detection boxes provided by an embodiment of the present application; in Figure 2, the wheel detection boxes are illustrated with labels 1, 2 and 3 as an example. As another example, the vehicle detected in this embodiment may also be a car, etc., which is not limited in this embodiment.
In step 102, when the plurality of wheels are wheels of the same vehicle, the wheel pair with the largest distance among the plurality of wheels is determined.
In this step, after the main control platform acquires the plurality of wheels included in the target obstacle image detected in the detection area, it first judges the number of vehicles from the plurality of wheels. If it is determined that the plurality of wheels belong to different vehicles and each vehicle includes only one wheel, this operation flow ends; if the plurality of wheels belong to the same vehicle, the wheel pair with the largest distance among the plurality of wheels is determined.
Determining the wheel pair with the largest distance among the plurality of wheels includes:
First, the main control platform matches the size of each of the plurality of wheels against the wheel template sizes in the template pool.
In this step, the purpose of matching against the wheel templates in the template pool is to ensure that the detected wheels are wheels of vehicles in the core area. That is, this embodiment usually corrects the orientation angle of obstacle vehicles in the core area near the self-driving vehicle, avoiding errors introduced by obstacle vehicles outside the core area. The wheel size needs to satisfy the threshold of the template pool.
One matching process is: determine the model of the vehicle and look up the wheel template of that model in the template pool; then calculate the difference between the size of each of the plurality of wheels and the size of the found wheel template, and judge whether the difference is less than a set threshold. If it is, the wheel size is considered to satisfy the set threshold and the matching succeeds; otherwise, the matching is considered unsuccessful.
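The size-versus-template matching step described above can be sketched as follows; the template pool layout, the vehicle types and the threshold value are all hypothetical, illustrative assumptions:

```python
def match_wheel_to_template(wheel_w, wheel_h, template_pool, vehicle_type,
                            thres=8.0):
    """Compare a detected wheel box (width, height, in pixels) against
    the template size stored for the vehicle type; the match succeeds
    when both dimensions are within the set threshold."""
    tw, th = template_pool[vehicle_type]
    return abs(wheel_w - tw) <= thres and abs(wheel_h - th) <= thres

# Hypothetical template pool: one reference wheel size per vehicle type.
pool = {"truck": (90.0, 90.0), "car": (45.0, 45.0)}
```

Only wheels that pass this test would proceed to the pixel-coordinate and distance computations.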
Second, if the matching result satisfies the set threshold, the coordinate information in pixel coordinates of the wheels that satisfy the set threshold is acquired.
In this step, if the matching result satisfies the set threshold, the coordinate information in pixel coordinates of the wheels that satisfy the set threshold is acquired. Acquiring this coordinate information is well known to those skilled in the art and will not be described again here.
Third, the pairwise distances between wheels are calculated from the coordinate information.
In this step, after the coordinate information of each wheel that satisfies the set threshold is acquired, the distance between the coordinate points of every two wheels can be calculated using the distance formula between two coordinate points. The specific formula is well known to those skilled in the art and will not be described again here.
Finally, the pair of wheels with the largest distance is selected.
After the distances between the coordinate points are calculated, the wheel pair corresponding to the two coordinate points with the largest distance is selected.
In step 103, the ground-contact position of the wheel pair in the target obstacle image is acquired.
In this step, after determining the pair of wheels with the largest distance, the main control platform can acquire the coordinate information of the ground-contact positions of this pair of wheels in the target obstacle image, that is, the coordinate information of the wheel ground-contact points in the pixel coordinate system. The way of acquiring it is well known to those skilled in the art and will not be described again here.
In step 104, the ground-contact position is mapped to a road position in the world coordinate system.
In this step, one mapping method is that the main control platform maps the ground-contact position to the road position in the world coordinate system through an inverse perspective transformation formula. Of course, in practical applications, it is not limited to this.
In this step, regressing the wheel ground-contact position (i.e., the wheel ground-contact point) to the road position in the world coordinate system depends on the intrinsic and extrinsic parameters of the camera of the autonomous vehicle. Since the coordinate of the wheel ground-contact position in the world coordinate system satisfies z = 0, the position (u, v) of the wheel in the image is converted into (x, y) in the world coordinate system through the inverse perspective transformation formula. The specific inverse perspective transformation formula is as follows:
Figure PCTCN2022117328-appb-000001
where X0(u,v) and Y0(u,v) respectively represent the road coordinates in the world coordinate system; u and v respectively represent the horizontal and vertical coordinates in the image coordinate system (i.e., the pixel coordinate system) mapped to the horizontal and vertical coordinate values in the world coordinate system; m and n respectively represent the width and height of the image coordinate system; and (Cx, Cy, h) represents the coordinate position of the camera in the world coordinate system. The mounting position of the camera is relatively fixed, and by choosing a suitable world coordinate system, h can be made equal to the height of the camera above the ground. αv represents the vertical field-of-view range of the camera; αu represents the horizontal field-of-view range of the camera; θ represents the pitch angle of the camera; rFactor represents the tangent mapping factor; and cFactor represents the cosine mapping factor.
The values of αv and αu can usually be expressed in terms of other intrinsic parameters, as follows:
Figure PCTCN2022117328-appb-000002
where W and H respectively represent the length and width of the camera's photosensitive element, and f is the focal length of the camera.
It should be noted that the above inverse perspective transformation formula is well known to those skilled in the art and will not be described again here.
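Since the closed-form inverse perspective transformation formula itself appears only as an image in the original, the sketch below instead shows a standard ground-plane back-projection that serves the same purpose: a pixel is mapped onto the road plane z = 0 using the camera intrinsics and extrinsics. The matrix conventions here are assumptions, not the patent's own notation:

```python
import numpy as np

def pixel_to_road(u, v, K, R, t):
    """Back-project pixel (u, v) onto the road plane z = 0.
    K is the 3x3 intrinsic matrix; (R, t) are world-to-camera
    extrinsics, i.e. x_cam = R @ x_world + t."""
    # Viewing-ray direction for the pixel, expressed in world coordinates.
    ray_cam = np.linalg.inv(K) @ np.array([u, v, 1.0])
    ray_world = R.T @ ray_cam
    cam_center = -R.T @ t               # camera position in the world frame
    s = -cam_center[2] / ray_world[2]   # scale at which the ray hits z = 0
    x, y, _ = cam_center + s * ray_world
    return x, y
```

With a camera mounted at height h above the ground, `cam_center[2]` equals h, matching the role of h in the formula described above.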
在步骤105中,基于所述世界坐标系下的路面位置确定所述车辆的朝向角度。
该步骤中,先计算所述车轮对在世界坐标系下的路面位置的车轮连线;在将所述车轮连线绕着重力方向的朝向确定为车辆的朝向角度。
也就是说,基于车轮检测,在获得车轮对的路面位置后,基于车辆的刚性结构,前后任意两个车轮会在一条直线上,所以,计算车轮对在世界坐标系下的路面位置的车轮连线,该车轮连线围绕着重力的朝向就是车辆的朝向(即偏航yaw角度)。也就是说,在知道车轮对的路面坐标的情况下,可以准确的估算出车辆朝向。
In the embodiments of the present application, when the target obstacle image detected in the detection region contains multiple wheels and these wheels belong to the same vehicle, the wheel pair with the largest mutual distance is determined; the ground-contact positions of this wheel pair in the target obstacle image are acquired; those positions are mapped to road positions in the world coordinate system; and the vehicle's orientation angle is determined from the road positions in the world coordinate system. In other words, based on wheel detection, the ground-contact positions of the most widely separated pair of wheels of the same vehicle are obtained, and the fact that the line through these contact positions is straight is used to determine the obstacle vehicle's orientation angle (i.e., the yaw angle). Wheel detection thus allows the orientation of 3D obstacle vehicles near the host vehicle to be determined quickly, solving the technical problem in the related art that truncation information of 3D obstacles in the pixel coordinate system leads to inaccurate orientation determination. The embodiments of the present application effectively improve the accuracy of obstacle vehicle orientation angles.
Optionally, in another embodiment building on the above, when the multiple wheels in the target obstacle image belong to the same vehicle, a consistency check is performed on the sizes of the multiple wheels. If the consistency check succeeds, the step of determining the wheel pair with the largest mutual distance is executed; if it fails, the wheels are completed according to a set strategy.
The size consistency of the multiple wheels can be checked in several ways. One is to compute each wheel's area and then compare whether the areas are equal or approximately equal. If the areas of the multiple wheels are equal or nearly so, the wheels are considered highly consistent and the consistency check succeeds. If the areas differ, or at least one wheel's area deviates markedly from the others, the wheel with the smaller area is considered a truncated wheel that must be completed according to the set strategy.
The purpose of the wheel consistency check in this embodiment is to handle partial wheel truncation. For example, when some of the wheels are only slightly truncated, the size consistency check may still succeed; in that case this embodiment, by default, skips the completion processing for the slightly truncated wheels.
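The area-based consistency check can be sketched as below. The relative tolerance and the function shape are assumptions for illustration; the patent leaves the exact comparison criterion open:

```python
def wheels_consistent(boxes, tol=0.2):
    """Treat wheel detection boxes (w, h) as size-consistent when every
    box area is within a relative tolerance `tol` of the largest one;
    smaller outliers are flagged as likely truncated wheels."""
    areas = [w * h for w, h in boxes]
    ref = max(areas)  # the largest wheel is taken as the complete reference
    truncated = [i for i, a in enumerate(areas) if (ref - a) / ref > tol]
    return len(truncated) == 0, truncated
```

With boxes of areas 100, 100 and 25, the third wheel is flagged as truncated and the check fails; areas 100 and 90 pass within the 20% tolerance.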
Optionally, in another embodiment building on the above, before the size consistency check on the multiple wheels, the method may further include: judging whether each of the multiple wheels is truncated; if no wheel is truncated, executing the step of checking the size consistency of all wheels; if a wheel is truncated, completing the wheels according to the set strategy.
In this embodiment, before the size consistency check, whether each of the multiple wheels is truncated must first be judged. If a wheel is truncated, the wheel sizes are inconsistent and the truncated wheel must be completed according to the set strategy before the step of determining the wheel pair with the largest mutual distance is executed; if no wheel is truncated, the wheel sizes are strongly consistent and the step of checking the size consistency of all wheels is executed directly.
Whether each wheel is truncated can be judged in several ways. The diameters of the wheels of the same vehicle can be compared, taking the largest wheel as complete and the rest as truncated; alternatively, each wheel's area can be computed and compared, taking the largest-area wheel as complete and the smaller ones as truncated. After the areas are computed, if the difference between the two largest wheel areas is smaller than a predetermined value, both may also be deemed complete.
To help understand complete versus truncated wheels, refer to FIGS. 2 to 4. As shown in FIG. 2, taking the three detected wheels of a large truck as an example, wheels 1 to 3 are all complete, with no truncation. Referring to FIG. 3, a schematic diagram of a truncated wheel provided by an embodiment of the present application, again taking a large truck as an example: wheel 1 is truncated while wheels 2 and 3 are complete. Referring to FIG. 4, another schematic diagram of truncated wheels, again taking a large truck as an example: wheels 1 and 3 are truncated while wheel 2 is complete. It should be noted that the truncated wheels in FIGS. 3 and 4 are merely illustrative; practical applications are not limited to them.
Further, in another embodiment building on the above, the method may also include: judging whether each of the multiple wheels is truncated; if no wheel is truncated, entering the wheel consistency check; if the wheel sizes are strongly consistent, further matching the wheels against the wheel template sizes in the template pool; if the sizes satisfy the template pool's threshold, the match succeeds and the matched wheel template is used to complete all truncated wheels.
It should be noted that the embodiments of the present application build on wheel detection because: 1. wheel size is fairly fixed and the features are distinctive, so the detection network is easy to design and apply; 2. every vehicle's wheels necessarily have ground-contact points, and the pixel-coordinate point of a wheel contact point can be transformed by inverse perspective mapping to the corresponding point in the world coordinate system; 3. the wheels of the same vehicle are the same size, so partial wheel truncation can be repaired with a completion strategy.
Further, in another embodiment building on the above, completing the wheels according to the set strategy includes:
selecting the truncated wheel with the largest area, matching that largest-area truncated wheel against the wheel templates in the template pool, and completing all truncated wheels according to the matched wheel template; or
if one complete wheel is selected from the multiple wheels, completing all truncated wheels according to the selected complete wheel; or
if two complete wheels are selected from the multiple wheels, calculating the average of the two complete wheels' areas and completing all truncated wheels according to the average.
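The average-based branch of the completion strategy can be sketched as follows. The data layout (a list of areas plus indices of the complete wheels) is an assumption for illustration:

```python
def complete_truncated(areas, complete_idx):
    """Replace every truncated wheel's area with the mean area of the
    complete wheels (one or two), per the set completion strategy."""
    ref = sum(areas[i] for i in complete_idx) / len(complete_idx)
    return [a if i in complete_idx else ref for i, a in enumerate(areas)]
```

Given areas [100, 90, 40] with wheels 0 and 1 complete, the truncated wheel is completed to the mean area 95.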
In the embodiments of the present application, the wheel-based orientation angle correction is not applied to every obstacle vehicle perceived by the host vehicle; it is applied only for wheels that match the wheel templates in the template pool. Distant vehicles that could introduce large errors can thereby be filtered out precisely, ensuring a net benefit to the perceived vehicle orientation.
Referring to FIG. 5, an application example of a method for determining obstacle orientation provided by an embodiment of the present application, the method includes:
Step 501: acquire the target obstacle image detected in the detection region, the target obstacle image including multiple wheels.
Step 502: judge whether the multiple wheels belong to the same vehicle; if so, execute step 503; otherwise, execute step 512.
Step 503: check the size consistency of the multiple wheels; if the consistency check fails, execute step 504 and then step 505; if it succeeds, execute step 505.
Step 504: complete the wheels according to the set strategy.
Step 505: match the size of each of the multiple wheels against the wheel template sizes in the template pool; if the matching result satisfies the set threshold, execute step 506; otherwise, execute step 512.
Step 506: acquire the pixel coordinate information of the wheels satisfying the set threshold.
Step 507: calculate the pairwise distances between the wheels from the coordinate information.
Step 508: select the pair of wheels with the largest distance.
Step 509: acquire the ground-contact positions, in the target obstacle image, of the wheel pair with the largest distance.
Step 510: map the ground-contact positions to road positions in the world coordinate system through the inverse perspective mapping formula.
Step 511: determine the vehicle's orientation angle based on the road positions in the world coordinate system.
Step 512: end this procedure.
In the embodiments of the present application, deriving the vehicle orientation from wheel detection is very robust for side-facing cameras; these embodiments therefore effectively improve the detection accuracy of vehicle orientation angles in the detection region set around the host vehicle.
Referring to FIG. 6, another flowchart of a method for determining obstacle orientation provided by an embodiment of the present application, the method includes:
Step 601: determine a first orientation angle of the current vehicle in the detection region, the first orientation angle being the vehicle orientation angle determined from the wheel pair in the target obstacle image of the detection region.
In this step, determining the first orientation angle includes: the vehicle's main control platform or head unit acquires the target obstacle image detected in the detection region, the image including multiple wheels; when the multiple wheels belong to the same vehicle, the wheel pair with the largest mutual distance is determined; the ground-contact positions, in the target obstacle image, of the wheel pair with the largest distance are acquired; those positions are mapped to road positions in the world coordinate system; and the vehicle's orientation angle, i.e., the first orientation angle, is determined from the road positions in the world coordinate system.
It should be noted that, for the specific implementation of each step by which the main control platform or head unit determines the first orientation angle, see the implementation of the corresponding steps of the method above; they are not repeated here.
Step 602: acquire a second orientation angle of the current vehicle detected in the detection region as output by a 3D obstacle detection model.
In this step, the vehicle's main control platform or head unit can obtain the second orientation angle of vehicles in the current detection region directly from the 3D obstacle detection model. The specific acquisition procedure is well known to those skilled in the art and is not repeated here.
Step 603: determine the difference between the first orientation angle and the second orientation angle.
In this step, the main control platform or head unit computes the difference between the first and second orientation angles.
Step 604: if the difference is smaller than a preset threshold, determine the average of the first and second orientation angles.
In this embodiment, the preset threshold is a hyperparameter Thres, chosen differently according to the performance of the selected 3D obstacle detection model.
Step 605: adjust the average angle to be the orientation angle of the current vehicle.
In this step, the main control platform or head unit adopts the average angle as the current vehicle's orientation angle.
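Steps 603 to 605 amount to a simple threshold-gated average of the two angles. A minimal sketch (the `None` return signaling the fallback path is an illustration choice, not part of the patent):

```python
def fuse_heading(theta_wheel, theta_model, thres):
    """Fuse the wheel-based first orientation angle with the 3D model's
    second orientation angle: average them when they agree within
    `thres`, otherwise signal that the history-based fallback is needed."""
    if abs(theta_wheel - theta_model) < thres:
        return (theta_wheel + theta_model) / 2.0
    return None  # difference too large: defer to historical prediction
```

For example, angles 0.10 and 0.12 rad with Thres = 0.05 fuse to 0.11 rad, while 0.10 versus 0.50 rad triggers the fallback.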
In the embodiments of the present application, the first orientation angle of the current vehicle in the detection region is determined, and the second orientation angle of the current vehicle detected in the detection region, as output by the 3D obstacle detection model, is acquired; the difference between the first and second orientation angles is determined; if the difference is smaller than the preset threshold, the average of the first and second orientation angles is determined; and the average angle is adjusted to be the current vehicle's orientation angle. The wheel-based orientation angle of the obstacle vehicle is thereby fused with the orientation angle of that obstacle vehicle as output by the 3D obstacle detection model, adaptively adjusting the obstacle vehicle's orientation angle and improving its accuracy.
Referring to FIG. 7, a further flowchart of a method for determining obstacle orientation provided by an embodiment of the present application, the method includes:
Step 701: determine a first orientation angle of the current vehicle in the detection region, the first orientation angle being the vehicle orientation angle determined from the wheel pair in the target obstacle image of the detection region.
Step 702: acquire the second orientation angle of the current vehicle detected in the detection region as output by the 3D obstacle detection model.
Step 703: determine the difference between the first orientation angle and the second orientation angle.
Step 704: judge whether the difference is smaller than the preset threshold; if so, execute steps 705 and 706; otherwise, execute steps 707 to 709.
Step 705: determine the average of the first and second orientation angles.
Step 706: determine the average angle as the current vehicle's orientation angle, and end this procedure.
Step 707: acquire the historical orientation angles of the current vehicle.
Step 708: fit a curve to the historical orientation angles with a random sample consensus (RANSAC) algorithm, and predict a third orientation angle for the current vehicle.
In this step, the curve chosen for the random sample consensus (RANSAC) algorithm is usually a curve of the first degree, i.e., a straight line. This is because under normal driving a vehicle's orientation is relatively fixed, and even when changing lanes the orientation varies little, so the orientation is stable along a first-degree curve; even a U-turn can be treated as a uniform angular-velocity state, and the curve remains first-degree. In the embodiments of the present application, applying a random sample consensus check to the historical orientation angles of the current vehicle clearly reflects the predicted value of the current angle from the history, called the third orientation angle. The third orientation angle is then compared against the first and second orientation angles, and whichever is closest is determined as the current vehicle's orientation angle. The goal is to prevent severe distortion of either the determined orientation angle or the model-predicted orientation angle from corrupting the final detection result.
Step 709: determine, as the orientation angle of the current vehicle, whichever of the first and second orientation angles is closest to the third orientation angle.
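Steps 707 to 709 can be sketched as a first-degree fit to the heading history followed by nearest-candidate selection. Plain least squares stands in here for the RANSAC fit described above (RANSAC would additionally reject outlier history samples); the function shape is an illustration:

```python
import numpy as np

def pick_heading(history_t, history_theta, theta1, theta2):
    """Fit a straight line (first-degree curve) to historical heading
    angles, extrapolate one step ahead as the third orientation angle,
    and return whichever candidate angle lies closer to it."""
    a, b = np.polyfit(history_t, history_theta, 1)
    theta3 = a * (history_t[-1] + 1) + b  # one-step-ahead prediction
    return theta1 if abs(theta1 - theta3) <= abs(theta2 - theta3) else theta2
```

With a history drifting steadily by 0.1 rad per step, the prediction continues the trend, so a candidate near the extrapolated value is preferred over a stale one.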
The embodiments of the present application compare the determined vehicle orientation angle against the model-predicted vehicle orientation angle and fuse them according to a fusion strategy based on the comparison result, thereby improving the accuracy of the vehicle orientation angle.
It should be noted that, for simplicity of description, the method embodiments are expressed as series of action combinations; however, those skilled in the art should appreciate that the present disclosure is not limited by the described order of actions, because in accordance with the present application some steps may be performed in other orders or simultaneously. Those skilled in the art should also appreciate that the embodiments described in the specification are preferred embodiments, and the actions involved are not necessarily required by the present application.
Referring to FIG. 8, a block diagram of an apparatus for determining obstacle orientation provided by an embodiment of the present application. The apparatus includes: a first acquiring module 801, a first determining module 802, a second acquiring module 803, a mapping module 804 and a second determining module 805, wherein
the first acquiring module 801 is configured to acquire the target obstacle image detected in the detection region, the target obstacle image including multiple wheels;
the first determining module 802 is configured to determine, when the multiple wheels belong to the same vehicle, the wheel pair with the largest mutual distance among the multiple wheels;
the second acquiring module 803 is configured to acquire the ground-contact positions, in the target obstacle image, of the wheel pair with the largest distance;
the mapping module 804 is configured to map the ground-contact positions to road positions in the world coordinate system;
the second determining module 805 is configured to determine the vehicle's orientation angle based on the road positions in the world coordinate system.
Optionally, in another embodiment building on the above, the first determining module includes: a first matching module, a third acquiring module, a first calculating module and a first selecting module, wherein
the first matching module is configured to match the size of each of the multiple wheels against the wheel template sizes in the template pool;
the third acquiring module is configured to acquire, when the matching result of the first matching module satisfies the set threshold, the pixel coordinate information of the wheels satisfying the set threshold;
the first calculating module is configured to calculate the pairwise distances between the wheels from the coordinate information;
the first selecting module is configured to select the pair of wheels with the largest distance.
Optionally, in another embodiment building on the above, the apparatus further includes a checking module and a strategy completion module, wherein
the checking module is configured to check the size consistency of the multiple wheels when they belong to the same vehicle;
the first determining module is further configured to determine the wheel pair with the largest mutual distance when the checking module's consistency check succeeds;
the strategy completion module is configured to complete the wheels according to the set strategy when the checking module's consistency check fails.
Optionally, in another embodiment building on the above, the apparatus further includes a first judging module, wherein
the first judging module is configured to judge, before the checking module checks the size consistency of the multiple wheels, whether each of the multiple wheels is truncated;
the first determining module is further configured to determine the wheel pair with the largest mutual distance when the first judging module judges that no wheel is truncated;
the completion module is further configured to complete the wheels according to the set strategy when the first judging module judges that a wheel is truncated.
Optionally, in another embodiment building on the above, the completion module includes: a second selecting module, a second matching module and a first completion module; and/or a third selecting module and a second completion module; and/or a fourth selecting module and a second calculating module; wherein
the second selecting module is configured to select the truncated wheel with the largest area;
the second matching module is configured to match that largest-area truncated wheel against the wheel templates in the template pool;
the first completion module is configured to complete all truncated wheels according to the wheel template matched by the second matching module;
the third selecting module is configured to select one complete wheel from the multiple wheels;
the second completion module is configured to complete all truncated wheels according to the complete wheel selected by the third selecting module;
the fourth selecting module is configured to select two complete wheels from the multiple wheels;
the second calculating module is configured to calculate the average wheel area of the two complete wheels selected by the fourth selecting module, and complete all truncated wheels according to the average.
Optionally, in another embodiment building on the above, the mapping module is specifically configured to map the ground-contact positions to road positions in the world coordinate system through the inverse perspective mapping formula.
Optionally, in another embodiment building on the above, the second determining module includes a third calculating module and an orientation angle determining module, wherein
the third calculating module is configured to calculate the line connecting the wheel pair's road positions in the world coordinate system;
the orientation angle determining module is configured to determine the orientation of the wheel line about the direction of gravity as the vehicle's orientation angle.
Referring to FIG. 9, another block diagram of an apparatus for determining obstacle orientation provided by an embodiment of the present application; the apparatus includes: a first determining module 901, a first acquiring module 902, a second determining module 903, a third determining module 904 and a fourth determining module 905, wherein
the first determining module 901 is configured to determine a first orientation angle of the current vehicle in the detection region, the first orientation angle being the orientation angle of the current vehicle determined from the wheel pair in the target obstacle image of the detection region;
the first acquiring module 902 is configured to acquire the second orientation angle of the current vehicle detected in the detection region as output by the 3D obstacle detection model;
the second determining module 903 is configured to determine the difference between the first and second orientation angles;
the third determining module 904 is configured to determine the average of the first and second orientation angles when the difference is smaller than the preset threshold;
the fourth determining module 905 is configured to determine the average angle as the orientation angle of the current vehicle.
Optionally, in another embodiment building on the above, the apparatus may further include a second acquiring module, a fitting module and a fifth determining module, wherein
the second acquiring module is configured to acquire the historical orientation angles of the current vehicle when the difference is not smaller than the preset threshold;
the fitting module is configured to fit a curve to the historical orientation angles with the random sample consensus (RANSAC) algorithm, and predict a third orientation angle for the current vehicle;
the fifth determining module is configured to determine, as the orientation angle of the current vehicle, whichever of the first and second orientation angles is closest to the third orientation angle.
Optionally, in another embodiment building on the above, the first determining module includes:
a third acquiring module configured to acquire the target obstacle image detected in the detection region, the target obstacle image including multiple wheels;
a sixth determining module configured to determine, when the multiple wheels belong to the same vehicle, the wheel pair with the largest mutual distance;
a fourth acquiring module configured to acquire the ground-contact positions, in the target obstacle image, of the wheel pair with the largest distance;
a mapping module configured to map the ground-contact positions to road positions in the world coordinate system;
a seventh determining module configured to determine the first orientation angle of the vehicle based on the road positions in the world coordinate system.
With regard to the apparatus in the above embodiments, the specific manner in which each module performs its operations has been described in detail in the embodiments of the method, and will not be elaborated here.
The apparatus embodiments described above are merely illustrative. The modules described as separate components may or may not be physically separate, and components shown as modules may or may not be physical modules: they may be located in one place or distributed over multiple networks. Some or all of the modules may be selected according to actual needs to achieve the purpose of the embodiment's solution. Those of ordinary skill in the art can understand and implement this without creative effort.
Referring to FIG. 10, a system for determining obstacle orientation provided by an embodiment of the present application. The system builds on a 3D obstacle detection network 1000 and includes a 2D obstacle detection module 1001 and a parameter transformation module 1002, both of which may reside in a 2D detection network; the 2D detection network may also be called the wheel detection network; wherein
the 2D obstacle detection module 1001 is configured to perform detection on the images decoded by the decoder of the 3D obstacle detection network 1000, acquire the target obstacle image of the detection region, the target obstacle image including multiple wheels; determine, when the multiple wheels belong to the same vehicle, the wheel pair with the largest mutual distance; and acquire the ground-contact positions, in the target obstacle image, of the wheel pair with the largest distance;
the parameter transformation module 1002 is configured to map the ground-contact positions to road positions in the world coordinate system of the 3D obstacle detection network, and determine the vehicle's orientation angle based on the road positions in the world coordinate system.
The 3D obstacle detection network 1000 is used to detect 3D objects; its detection goal is usually to find, for example from point cloud data, all objects of interest in the scene, such as vehicles, pedestrians and static obstacles in autonomous driving scenarios. It may include, but is not limited to, the following modules connected in sequence: an image module, a feature (backbone) module, a decoder, a 3D obstacle detection module and 3D bounding boxes (3D BBox). Each 3D bounding box (3D BBox, 3D BoundingBox) corresponds to one object in the scene. A 3D BBox can be represented in several ways; the most common uses the 3D coordinates of the center point, the length, width and height, and the 3D rotation angle, or, more simply, only the in-plane rotation. The 2D obstacle detection network (i.e., the wheel detection network) 1003 includes the 2D obstacle detection module 1001 and the parameter transformation module 1002, wherein the 2D obstacle detection module 1001 performs detection on the images decoded by the decoder of the 3D obstacle detection network, acquires the target obstacle image of the detection region including multiple wheels, determines the wheel pair with the largest mutual distance when the wheels belong to the same vehicle, and acquires the ground-contact positions of that pair in the target obstacle image; the parameter transformation module 1002 maps the ground-contact positions to road positions in the world coordinate system of the 3D obstacle detection network, i.e., to the 3D BBox, from which the vehicle's orientation angle is determined. The corresponding block diagram is shown in FIG. 10A, an application block diagram of a system for determining obstacle orientation provided by an embodiment of the present application.
The 2D obstacle detection network (i.e., the wheel detection network) provided in this embodiment is connected after the decoder module of the 3D obstacle detection network, and can be as simple as a YOLO network. The 2D obstacle detection module then performs detection on the decoded images and obtains the multiple wheels in the target obstacle image of the detection region together with each wheel's detection box. The detection boxes of the same vehicle can first be sorted; the distances between any two wheels of the same vehicle are computed and sorted in descending order, the most distant pair of wheels is found, and from it the vehicle's ground-contact positions are obtained. The parameter transformation module 1002 obtains the vehicle's orientation angle in the world coordinate system through the inverse perspective mapping formula. It must also be judged whether the wheel in each wheel box is truncated; if so, the truncated wheel is completed according to the strategy. The detection accuracy of vehicle orientation angles in the detection region near the host vehicle is thereby improved.
Optionally, an embodiment of the present application further provides an electronic device, including:
a processor;
a memory for storing instructions executable by the processor;
wherein the processor is configured to execute the instructions to implement the method for determining obstacle orientation as described above.
Optionally, an embodiment of the present application further provides a computer-readable storage medium; when the instructions in the computer-readable storage medium are executed by a processor of an electronic device, the electronic device is enabled to perform the method for determining obstacle orientation as described above. Optionally, the computer-readable storage medium may be a ROM, a random access memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device or the like.
Optionally, an embodiment of the present application further provides a computer program product including a computer program or instructions which, when executed by a processor, implement the method for determining obstacle orientation as described above.
Optionally, an embodiment of the present application further provides an electronic device, as shown in FIG. 11, including a processor 1101, a communication interface 1102, a memory 1103 and a communication bus 1104, the processor 1101, the communication interface 1102 and the memory 1103 communicating with one another through the communication bus 1104, wherein
the memory 1103 is configured to store a computer program;
the processor 1101 is configured to implement, when executing the program stored on the memory 1103, the method for determining obstacle orientation as described above.
The communication bus mentioned for the above terminal may be a Peripheral Component Interconnect (PCI) bus or an Extended Industry Standard Architecture (EISA) bus, among others, and may be divided into an address bus, a data bus, a control bus, and so on. For ease of representation only one thick line is drawn in the figure, which does not mean there is only one bus or only one type of bus.
The communication interface is used for communication between the above terminal and other devices.
The memory may include random access memory (RAM) or non-volatile memory, for example at least one disk memory. Optionally, the memory may also be at least one storage device located remotely from the aforementioned processor.
The above processor may be a general-purpose processor, including a central processing unit (CPU) or a network processor (NP); it may also be a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component.
In an embodiment, the electronic device 1101 may be implemented by one or more application-specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field-programmable gate arrays (FPGAs), controllers, microcontrollers, microprocessors or other electronic components, for performing the method for determining obstacle orientation shown above.
In an embodiment, a non-transitory computer-readable storage medium including instructions is also provided, for example the memory 1103 including instructions, which can be executed by the processor 1101 of the electronic device to complete the method for determining obstacle orientation shown above. For example, the non-transitory computer-readable storage medium may be a ROM, a random access memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device or the like.
In an embodiment, a computer program product is also provided; when the instructions in the computer program product are executed by the processor 1101 of the electronic device, the electronic device performs the method for determining obstacle orientation shown above.
FIG. 12 is a block diagram of an apparatus 1200 for determining obstacle orientation provided by an embodiment of the present application. For example, the apparatus 1200 may be provided as a server. Referring to FIG. 12, the apparatus 1200 includes a processing component 1222, which further includes one or more processors, and memory resources represented by a memory 1232 for storing instructions executable by the processing component 1222, for example an application program. The application program stored in the memory 1232 may include one or more modules, each corresponding to a set of instructions. Furthermore, the processing component 1222 is configured to execute the instructions to perform the method above.
The apparatus 1200 may also include a power component 1226 configured to perform power management of the apparatus 1200, a wired or wireless network interface 1250 configured to connect the apparatus 1200 to a network, and an input/output (I/O) interface 1258. The apparatus 1200 may operate based on an operating system stored in the memory 1232, for example Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™ or similar.
Other embodiments of the present application will readily occur to those skilled in the art after considering the specification and practicing the invention disclosed herein. The present application is intended to cover any variations, uses or adaptations of the present application that follow its general principles and include common knowledge or customary technical means in the art not disclosed by the present application. The specification and embodiments are to be regarded as exemplary only, with the true scope and spirit of the present application being indicated by the following claims.
It should be understood that the present application is not limited to the precise structure described above and shown in the accompanying drawings, and that various modifications and changes may be made without departing from its scope. The scope of the present application is limited only by the appended claims.

Claims (16)

  1. A method for determining obstacle orientation, comprising:
    acquiring a target obstacle image detected in a detection region, the target obstacle image including multiple wheels;
    when the multiple wheels belong to the same vehicle, determining the wheel pair with the largest mutual distance among the multiple wheels;
    acquiring ground-contact positions, in the target obstacle image, of the wheel pair with the largest distance;
    mapping the ground-contact positions to road positions in a world coordinate system;
    determining an orientation angle of the vehicle based on the road positions in the world coordinate system.
  2. The method for determining obstacle orientation according to claim 1, wherein determining the wheel pair with the largest mutual distance among the multiple wheels comprises:
    matching the size of each of the multiple wheels against wheel template sizes in a template pool;
    if the matching result satisfies a set threshold, acquiring coordinate information, in pixel coordinates, of the wheels satisfying the set threshold;
    calculating pairwise distances between the wheels from the coordinate information;
    selecting the pair of wheels with the largest distance.
  3. The method for determining obstacle orientation according to claim 1, wherein, when the multiple wheels belong to the same vehicle, the method further comprises:
    performing a consistency check on the sizes of the multiple wheels;
    if the consistency check succeeds, executing the step of determining the wheel pair with the largest mutual distance among the multiple wheels;
    if the consistency check fails, completing the wheels according to a set strategy.
  4. The method for determining obstacle orientation according to claim 3, wherein, before the consistency check on the sizes of the multiple wheels, the method further comprises:
    judging whether each of the multiple wheels is truncated;
    if no wheel is truncated, executing the step of checking the size consistency of all wheels;
    if a wheel is truncated, completing the wheels according to the set strategy.
  5. The method for determining obstacle orientation according to claim 3 or 4, wherein completing the wheels according to the set strategy comprises:
    selecting the truncated wheel with the largest area, matching that largest-area truncated wheel against the wheel templates in the template pool, and completing all truncated wheels according to the matched wheel template; or
    selecting one complete wheel from the multiple wheels, and completing all truncated wheels according to the selected complete wheel; or
    selecting two complete wheels from the multiple wheels, calculating the average of the two complete wheels' areas, and completing all truncated wheels according to the average.
  6. The method for determining obstacle orientation according to any one of claims 1 to 5, wherein mapping the ground-contact positions to road positions in the world coordinate system comprises:
    mapping the ground-contact positions to road positions in the world coordinate system through an inverse perspective mapping formula.
  7. The method for determining obstacle orientation according to any one of claims 1 to 5, wherein determining the orientation angle of the vehicle based on the road positions in the world coordinate system comprises:
    calculating the line connecting the wheel pair's road positions in the world coordinate system;
    determining the orientation of the wheel line about the direction of gravity as the vehicle's orientation angle.
  8. A method for determining obstacle orientation, comprising:
    determining a first orientation angle of a current vehicle in a detection region, the first orientation angle being the orientation angle of the current vehicle determined from a wheel pair in a target obstacle image of the detection region;
    acquiring a second orientation angle of the current vehicle detected in the detection region as output by a 3D obstacle detection model;
    determining the difference between the first orientation angle and the second orientation angle;
    if the difference is smaller than a preset threshold, determining the average of the first and second orientation angles;
    determining the average angle as the orientation angle of the current vehicle.
  9. The method for determining obstacle orientation according to claim 8, further comprising:
    if the difference is not smaller than the preset threshold, acquiring historical orientation angles of the current vehicle;
    fitting a curve to the historical orientation angles with a random sample consensus algorithm, and predicting a third orientation angle of the current vehicle;
    determining, as the orientation angle of the current vehicle, whichever of the first and second orientation angles is closest to the third orientation angle.
  10. The method for determining obstacle orientation according to claim 8 or 9, wherein determining the first orientation angle of the current vehicle comprises:
    acquiring a target obstacle image detected in the detection region, the target obstacle image including multiple wheels;
    when the multiple wheels belong to the same vehicle, determining the wheel pair with the largest mutual distance among the multiple wheels;
    acquiring the ground-contact positions, in the target obstacle image, of the wheel pair with the largest distance;
    mapping the ground-contact positions to road positions in the world coordinate system;
    determining the first orientation angle of the vehicle based on the road positions in the world coordinate system.
  11. An apparatus for determining obstacle orientation, comprising:
    a first acquiring module configured to acquire a target obstacle image detected in a detection region, the target obstacle image including multiple wheels;
    a first determining module configured to determine, when the multiple wheels belong to the same vehicle, the wheel pair with the largest mutual distance among the multiple wheels;
    a second acquiring module configured to acquire ground-contact positions, in the target obstacle image, of the wheel pair with the largest distance;
    a mapping module configured to map the ground-contact positions to road positions in a world coordinate system;
    a second determining module configured to determine the vehicle's orientation angle based on the road positions in the world coordinate system.
  12. An apparatus for determining obstacle orientation, comprising:
    a first determining module configured to determine a first orientation angle of a current vehicle in a detection region, the first orientation angle being the orientation angle of the current vehicle determined from a wheel pair in a target obstacle image of the detection region;
    a first acquiring module configured to acquire a second orientation angle of the current vehicle detected in the detection region as output by a 3D obstacle detection model;
    a second determining module configured to determine the difference between the first orientation angle and the second orientation angle;
    a third determining module configured to determine, when the difference is smaller than a preset threshold, the average of the first and second orientation angles;
    a fourth determining module configured to determine the average angle as the orientation angle of the current vehicle.
  13. A system for determining obstacle orientation, comprising:
    a 2D obstacle detection module configured to perform detection on images decoded by a decoder of a 3D obstacle detection network, acquire a target obstacle image of a detection region, the target obstacle image including multiple wheels; determine, when the multiple wheels belong to the same vehicle, the wheel pair with the largest mutual distance among the multiple wheels; and acquire ground-contact positions, in the target obstacle image, of the wheel pair with the largest distance;
    a parameter transformation module configured to map the ground-contact positions to road positions in a world coordinate system of the 3D obstacle detection network, and determine the vehicle's orientation angle based on the road positions in the world coordinate system.
  14. An electronic device, comprising:
    a processor;
    a memory for storing instructions executable by the processor;
    wherein the processor is configured to execute the instructions to implement the method for determining obstacle orientation according to any one of claims 1 to 10.
  15. A computer-readable storage medium, wherein, when instructions in the computer-readable storage medium are executed by a processor of an electronic device, the electronic device is enabled to perform the method for determining obstacle orientation according to any one of claims 1 to 10.
  16. A computer program product comprising a computer program or instructions, wherein the computer program or instructions, when executed by a processor, implement the method for determining obstacle orientation according to any one of claims 1 to 10.
PCT/CN2022/117328 2022-04-02 2022-09-06 Method, apparatus, system, device, medium and product for determining obstacle orientation WO2023184868A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202210343854.4 2022-04-02
CN202210343854.4A CN114863388A (zh) 2022-04-02 2022-04-02 Method, apparatus, system, device, medium and product for determining obstacle orientation

Publications (1)

Publication Number Publication Date
WO2023184868A1 true WO2023184868A1 (zh) 2023-10-05

Family

ID=82629851

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/117328 WO2023184868A1 (zh) 2022-04-02 2022-09-06 Method, apparatus, system, device, medium and product for determining obstacle orientation

Country Status (2)

Country Link
CN (1) CN114863388A (zh)
WO (1) WO2023184868A1 (zh)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117315035A (zh) * 2023-11-30 2023-12-29 武汉未来幻影科技有限公司 Vehicle orientation processing method, apparatus and processing device
CN118379705A (zh) * 2024-06-21 2024-07-23 探步科技(上海)有限公司 Vehicle information detection method and apparatus based on 2D vision

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114863388A (zh) * 2022-04-02 2022-08-05 合众新能源汽车有限公司 Method, apparatus, system, device, medium and product for determining obstacle orientation

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103176185A (zh) * 2011-12-26 2013-06-26 上海汽车集团股份有限公司 Method and system for detecting road obstacles
CN110246183A (zh) * 2019-06-24 2019-09-17 百度在线网络技术(北京)有限公司 Wheel ground-contact point detection method, apparatus and storage medium
CN110738181A (zh) * 2019-10-21 2020-01-31 东软睿驰汽车技术(沈阳)有限公司 Method and apparatus for determining vehicle orientation information
CN112507862A (zh) * 2020-12-04 2021-03-16 东风汽车集团有限公司 Vehicle orientation detection method and system based on multi-task convolutional neural network
CN112861683A (zh) * 2021-01-29 2021-05-28 上海商汤临港智能科技有限公司 Driving orientation detection method and apparatus, computer device and storage medium
CN114863388A (zh) * 2022-04-02 2022-08-05 合众新能源汽车有限公司 Method, apparatus, system, device, medium and product for determining obstacle orientation


Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117315035A (zh) * 2023-11-30 2023-12-29 武汉未来幻影科技有限公司 Vehicle orientation processing method, apparatus and processing device
CN117315035B (zh) * 2023-11-30 2024-03-22 武汉未来幻影科技有限公司 Vehicle orientation processing method, apparatus and processing device
CN118379705A (zh) * 2024-06-21 2024-07-23 探步科技(上海)有限公司 Vehicle information detection method and apparatus based on 2D vision

Also Published As

Publication number Publication date
CN114863388A (zh) 2022-08-05


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22934688

Country of ref document: EP

Kind code of ref document: A1