WO2021237754A1 - Road condition detection method and device - Google Patents

Road condition detection method and device

Info

Publication number
WO2021237754A1
WO2021237754A1 · PCT/CN2020/093543 · CN2020093543W
Authority
WO
WIPO (PCT)
Prior art keywords
road
position information
detection point
target
horizontal
Prior art date
Application number
PCT/CN2020/093543
Other languages
English (en)
French (fr)
Inventor
高鲁涛
Original Assignee
华为技术有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 华为技术有限公司 filed Critical 华为技术有限公司
Priority to CN202080004833.9A priority Critical patent/CN112639814B/zh
Priority to PCT/CN2020/093543 priority patent/WO2021237754A1/zh
Publication of WO2021237754A1 publication Critical patent/WO2021237754A1/zh

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/56Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/588Recognition of the road, e.g. of lane markings; Recognition of the vehicle driving pattern in relation to the road

Definitions

  • This application relates to the field of smart cars, and in particular to a road condition detection method and device.
  • For smart cars, road condition detection technology (road conditions are hereinafter referred to simply as road conditions) is one of the technologies they mainly rely on.
  • the main purpose of road condition detection is to obtain road condition information (such as road gradient, road curvature, etc.) of the road ahead of the car, so as to assist the car in reasonable auxiliary control or automatic control of vehicle speed and vehicle direction. Since the accuracy of road condition detection will directly affect the safety and reliability of smart cars, how to ensure the accuracy of road condition detection has become a current research hotspot.
  • smart cars can detect road conditions through sensors such as millimeter-wave radar, lidar, and cameras.
  • cameras have gradually become the main sensors in road condition detection due to their advantages such as low cost and mature technology.
  • the existing camera-based road condition detection technology has high requirements on the installation position and posture of the camera on the car.
  • When the optical axis direction of the camera and the traveling direction of the car are not in a horizontal relationship (for example, when the vehicle bumps), misjudgment is likely to occur, so that accurate road condition information cannot be obtained, which reduces the safety and reliability of smart cars.
  • This application provides a road condition detection method and device. Using the solution provided by this application can improve the accuracy of road condition detection and improve the safety and reliability of smart cars.
  • an embodiment of the present application provides a road condition detection method, the method including: first obtaining a road image of a target road.
  • the first detection point and the second detection point are determined in the road image.
  • the first detection point is a vanishing point in the road image.
  • The second detection point is an intersection of the road boundary lines or lane lines of the target road in the road image, or the second detection point is an intersection of the extension lines of the road boundary lines or lane lines of the target road in the road image.
  • the road condition of the target road is determined according to the first detection point and the second detection point.
  • In the embodiments of this application, the terminal device can judge the horizontal road condition and/or vertical road condition of the target road through the relative positional relationship, in a single road image, between the vanishing point that lies on the plane of the target road and in the vehicle traveling direction and the intersection of the lane lines or road boundary lines (or the intersection of their extension lines). This can avoid misjudgment of road conditions caused by factors such as vehicle bumps or small differences between different road images, and can improve the accuracy of road condition detection as well as the safety and reliability of smart cars.
  • the first detection point is a vanishing point in the road image on the plane where the target road is located and in the traveling direction of the terminal device.
  • the road condition includes a vertical road condition.
  • the terminal device may obtain the first vertical position information of the first detection point and the second vertical position information of the second detection point. Determine the vertical road condition of the target road according to the first vertical position information and the second vertical position information.
  • If the terminal device determines, according to the first vertical position information and the second vertical position information, that the first detection point is above the second detection point, it determines that the vertical road condition of the target road is downhill. If the terminal device determines, according to the first vertical position information and the second vertical position information, that the first detection point is below the second detection point, it determines that the vertical road condition of the target road is an uphill slope. If the terminal device determines that the first vertical position information and the second vertical position information are the same, it determines that the vertical road condition of the target road is flat.
  • Here, the terminal device directly judges the vertical road condition of the target road based on the relative positional relationship between the vanishing point, which lies on the plane of the target road and in the traveling direction of the vehicle, and the intersection of the lane lines or road boundary lines (or the intersection of their extension lines). This can avoid misjudgment of the vertical road condition caused by factors such as the installation position and posture of the terminal device on the vehicle, improve the accuracy of road condition detection, and improve the safety and reliability of smart cars.
  • the terminal device may determine the first target area in the road image according to the first vertical position information and the first preset difference value. If it is determined according to the second vertical position information that the second detection point is below the first target area, it is determined that the vertical road condition of the target road is downhill. If it is determined that the second detection point is above the first target area according to the second vertical position information, it is determined that the vertical road condition of the target road is an uphill slope. If it is determined according to the second vertical position information that the second detection point is within the first target area, it is determined that the vertical road condition of the target road is flat.
  • Here, the terminal device judges the vertical road condition of the target road based on the relative positional relationship between the first target area, which is determined from the vanishing point, and the intersection of the lane lines or road boundary lines (or the intersection of their extension lines). This can further avoid misjudgment of the vertical road condition caused by factors such as vehicle bumps, further improve the accuracy of road condition detection, and further enhance the safety and reliability of smart cars.
  • the road condition includes a horizontal road condition.
  • the terminal device may obtain the first horizontal position information of the first detection point and the second horizontal position information of the second detection point. Determine the horizontal road condition of the target road according to the first horizontal position information and the second horizontal position information.
  • If the terminal device determines, according to the first horizontal position information and the second horizontal position information, that the first detection point is to the left of the second detection point, it determines that the horizontal road condition of the target road is curving to the right. If the terminal device determines, according to the first horizontal position information and the second horizontal position information, that the first detection point is to the right of the second detection point, it determines that the horizontal road condition of the target road is curving to the left. If the terminal device determines that the first horizontal position information and the second horizontal position information are the same, it determines that the horizontal road condition of the target road is straight.
  • Here, the terminal device directly judges the horizontal road condition of the target road based on the relative positional relationship, in a single road image, between the vanishing point on the plane of the target road and in the vehicle traveling direction and the intersection of the lane lines or road boundary lines (or the intersection of their extension lines). This can avoid misjudgment of the horizontal road condition caused by small differences between different road images, improve the accuracy of road condition detection, and improve the safety and reliability of smart cars.
  • the terminal device may determine the second target area in the road image according to the first horizontal position information and the second preset difference value. If the terminal device determines that the second detection point is on the left side of the second target area according to the second horizontal position information, it is determined that the horizontal road condition of the target road is curved to the left. If the terminal device determines that the second detection point is on the right side of the second target area according to the second horizontal position information, it is determined that the horizontal road condition of the target road is curving to the right. If the terminal device determines that the second detection point is within the second target area according to the second horizontal position information, it is determined that the horizontal road condition of the target road is straight.
  • The terminal device judges the horizontal road condition of the target road according to the relative positional relationship between the second target area, which is determined from the vanishing point and the second preset difference, and the intersection of the lane lines or road boundary lines (or the intersection of their extension lines). This can further avoid misjudgment of the horizontal road condition caused by factors such as vehicle bumps, further improve the accuracy of road condition detection, and improve the safety and reliability of smart cars.
  • the vertical distance between the first detection point and the second detection point in the road image is proportional to the slope of the target road.
  • the horizontal distance between the first detection point and the second detection point in the road image is proportional to the curvature of the target road.
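  • Purely as an illustration of the decision logic summarized above, the following Python sketch compares the detection point coordinates directly and forms the proportional slope and curvature estimates; the coordinate convention (origin at the top-left corner, u increasing to the right, v increasing downward) follows the image coordinate system described later, and the example coordinates and coefficients are assumptions, not values defined by this application.

```python
def vertical_condition(v1, v2):
    """Sign of v2 - v1 decides the vertical road condition (v grows downward):
    first detection point above the second -> downhill, below it -> uphill."""
    if v2 > v1:
        return "downhill"
    if v2 < v1:
        return "uphill"
    return "flat"


def horizontal_condition(u1, u2):
    """Sign of u2 - u1 decides the horizontal road condition:
    first detection point left of the second -> curve right, right of it -> curve left."""
    if u2 > u1:
        return "curve right"
    if u2 < u1:
        return "curve left"
    return "straight"


def estimate_slope(v1, v2, slope_coefficient=0.01):
    """Slope magnitude proportional to the vertical distance |v2 - v1|;
    the coefficient is a hypothetical preset value."""
    return slope_coefficient * abs(v2 - v1)


def estimate_curvature(u1, u2, curvature_coefficient=0.001):
    """Curvature magnitude proportional to the horizontal distance |u2 - u1|;
    the coefficient is a hypothetical preset value."""
    return curvature_coefficient * abs(u2 - u1)


u1, v1 = 640.0, 360.0   # first detection point (vanishing point)
u2, v2 = 655.0, 340.0   # second detection point (line or extension intersection)
print(vertical_condition(v1, v2), horizontal_condition(u1, u2))  # uphill curve right
```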
  • an embodiment of the present application provides a device.
  • the device may be the terminal device itself, or may be a component or module such as a chip inside the terminal device.
  • the device includes a unit for executing the road condition detection method provided by any one of the possible implementations of the first aspect, and therefore can also achieve the beneficial effects (or advantages) of the road condition detection method provided by the first aspect.
  • an embodiment of the present application provides a device, which may be a terminal device.
  • the device includes at least one memory, a processor, and a transceiver.
  • the processor is used to call the code stored in the memory to execute the road condition detection method provided in any one of the possible implementations of the first aspect, and therefore can also achieve the beneficial effects of the road condition detection method provided in the first aspect (or advantage).
  • an embodiment of the present application provides a device.
  • the device may be a chip, and the device includes: at least one processor and an interface circuit.
  • the interface circuit is used to receive code instructions and transmit them to the processor.
  • the processor is configured to run the foregoing code instructions to implement the road condition detection method provided by any one of the possible implementation manners of the foregoing first aspect.
  • The embodiments of the present application provide a computer-readable storage medium that stores instructions. When the instructions run on a computer, the road condition detection method provided by any one of the possible implementations of the first aspect described above is implemented, and the beneficial effects (or advantages) of the road condition detection method provided in the first aspect can also be achieved.
  • The embodiments of the present application provide a computer program product containing instructions. When the computer program product runs on a computer, the computer executes the road condition detection method provided by any one of the possible implementations of the first aspect, and the beneficial effects of the road condition detection method provided in the first aspect can also be achieved.
  • In the method provided by the embodiments of the present application, the terminal device can judge the horizontal road condition and/or vertical road condition of the target road through the relative positional relationship, in a road image, between the vanishing point on the plane where the target road is located and in the vehicle traveling direction and the intersection of the lane lines or road boundary lines (or the intersection of their extension lines). This can avoid misjudgment of road conditions caused by vehicle bumps or small differences between different road images, improve the accuracy of road condition detection, and improve the safety and reliability of smart cars.
  • FIG. 1 is a schematic diagram of a road condition detection scene provided by an embodiment of the present application
  • Figure 2 is a schematic diagram of a vehicle coordinate system provided by an embodiment of the present application.
  • FIG. 3 is a schematic flowchart of a road condition detection method provided by an embodiment of the present application.
  • Fig. 4a is a schematic diagram of a road image provided by an embodiment of the present application.
  • FIG. 4b is a schematic diagram of another road image provided by an embodiment of the present application.
  • FIG. 4c is a schematic diagram of another road image provided by an embodiment of the present application.
  • FIG. 4d is a schematic diagram of another road image provided by an embodiment of the present application.
  • FIG. 4e is a schematic diagram of another road image provided by an embodiment of the present application.
  • FIG. 5 is a schematic structural diagram of a device provided by an embodiment of the present application.
  • FIG. 6 is a schematic diagram of another structure of a device provided by an embodiment of the present application.
  • FIG. 1 is a schematic diagram of a road condition detection scene provided by an embodiment of the present application.
  • When the smart car is driving on the target road, it collects an image of the target road in the forward direction of the vehicle through its on-board terminal device, then processes and analyzes the image of the target road, and judges the road condition of the road ahead according to the result of that processing and analysis.
  • the above-mentioned smart car may include various car models.
  • the aforementioned terminal device may be a camera system mounted on a smart car or other systems or devices that carry a camera system.
  • the road condition detection method provided by the embodiment of the present application is applicable to the above-mentioned terminal equipment.
  • the road condition detection method provided in this application can be used not only for vehicle-mounted terminal equipment, but also for terminal equipment mounted on drones, signal lights, speed measuring devices and other equipment, and there is no specific limitation here.
  • a vehicle-mounted terminal device will be used as an example for description.
  • When a smart car is fitted with a terminal device, it needs to ensure that the terminal device can obtain a complete road image, and the optical axis of the terminal device needs to maintain a horizontal relationship with the forward direction of the smart car.
  • Only then, while the vehicle is driving, can the smart car collect an image of the road ahead through the terminal device and determine the road slope of the road ahead based on the positional relationship between the image center point and the intersection point of the extended lane lines, or determine the road curvature through multiple collected images.
  • However, if the car bumps during driving, or the optical axis of the terminal device cannot maintain a horizontal relationship with the forward direction of the smart car due to improper installation, the smart car will not be able to reliably determine the road slope in the above manner. And when the differences between the multiple collected images are small, the smart car also cannot accurately determine the road curvature from those images.
  • the main technical problem solved by the embodiments of the present application is: how to improve the detection accuracy of the road condition detection technology, so as to improve the safety and reliability of the smart car.
  • In practical applications, for a given image, an image coordinate system can be established by taking any pixel in the image as the origin, taking any two mutually perpendicular directions as the horizontal and vertical directions, and taking the length of one pixel as the basic unit. The horizontal and vertical distances between each pixel and the origin can then be used to describe the position of each pixel in the image. In the embodiments of the present application, the image coordinate system used to describe the position of each pixel in the road image is established with the upper-left corner of the road image as the origin, the horizontal rightward direction as the positive horizontal direction, and the vertical downward direction as the positive vertical direction.
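  • For readers working with image libraries, a small sketch of this convention is shown below: when a road image is loaded as a NumPy array (as OpenCV does), the array is indexed as [row, column], so for a pixel at (u, v) in the image coordinate system used here, u is the column index and v is the row index, both counted from the top-left corner. The function name and image size are illustrative only.

```python
import numpy as np

def pixel_at(image: np.ndarray, u: int, v: int):
    """Return the pixel at image coordinates (u, v): origin at the top-left corner,
    u increasing to the right (columns), v increasing downward (rows)."""
    return image[v, u]            # NumPy/OpenCV arrays are indexed [row, column]

road_image = np.zeros((720, 1280, 3), dtype=np.uint8)   # 1280x720 example image
print(pixel_at(road_image, u=100, v=50).shape)           # -> (3,)
```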
  • FIG. 2 is a schematic diagram of a vehicle coordinate system provided by an embodiment of the present application.
  • In the embodiments of the present application, a vehicle coordinate system is established for the vehicle by taking the center point of the vehicle's rear wheel bearing as the origin of the coordinate system, the direction of the rear wheel bearing as the Y axis, the direction perpendicular to the rear wheels as the Z axis, and the direction of the line connecting the center point of the rear wheel bearing and the center point of the front wheel bearing as the X axis. The horizontal plane of the vehicle is formed by the X axis and the Y axis. At any moment, the direction of travel of the vehicle lies on the horizontal plane of the vehicle and is parallel to the X axis of the vehicle coordinate system.
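  • As a small illustration of this coordinate system, the snippet below constructs two line segments that lie on the vehicle horizontal plane (Z = 0) and run parallel to the X axis, that is, parallel to the direction of travel; points like these are what get projected into the road image when the vanishing point is determined in step S102 below. The lateral offsets and lengths are arbitrary example values.

```python
import numpy as np

def parallel_ground_lines(y_offsets=(-1.5, 1.5), x_near=5.0, x_far=50.0):
    """Return one pair of 3D points per line, expressed in the vehicle coordinate
    system: Z = 0 (vehicle horizontal plane), constant Y offset, parallel to X."""
    lines = []
    for y in y_offsets:
        near = np.array([x_near, y, 0.0])   # point close to the vehicle
        far = np.array([x_far, y, 0.0])     # point farther ahead along X
        lines.append((near, far))
    return lines

line_a, line_b = parallel_ground_lines()
print(line_a)   # ((5, -1.5, 0), (50, -1.5, 0)): a line parallel to the X axis
```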
  • FIG. 3 is a schematic flowchart of a road condition detection method provided by an embodiment of the present application. It can be seen from Figure 3 that the method includes the following steps:
  • S101 Acquire a road image of a target road.
  • When the terminal device determines that it needs to perform road condition detection, it can turn on the camera and obtain the road image of the target road in the forward direction of the vehicle through the camera.
  • In a specific implementation, the terminal device may periodically or aperiodically capture at least one image in the direction of travel of the vehicle through the camera, then perform image recognition and processing on the at least one image, select from it an image that includes two road boundary lines or two lane lines of the target road, and determine that image as the road image of the target road.
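  • A minimal sketch of this selection step is given below. It assumes a detector function for road boundary lines or lane lines is supplied by the caller (for example, the fitting approach described under step S102), and it uses OpenCV's VideoCapture to read frames; both the camera index and the frame budget are illustrative.

```python
import cv2

def select_road_image(detect_lines, camera_index=0, max_frames=30):
    """Capture up to max_frames frames and return the first one that contains
    two road boundary lines or two lane lines of the target road.

    detect_lines(frame) -> list of detected lines (detector supplied by caller).
    """
    cap = cv2.VideoCapture(camera_index)
    try:
        for _ in range(max_frames):
            ok, frame = cap.read()
            if not ok:
                break
            if len(detect_lines(frame)) >= 2:
                return frame                 # use this frame as the road image
    finally:
        cap.release()
    return None                              # no suitable frame was captured
```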
  • S102 Determine a first detection point and a second detection point in the road image.
  • the terminal device may determine the first detection point and the second detection point from the above-mentioned road image.
  • the above-mentioned first detection point is the vanishing point of the road image.
  • the above-mentioned second detection point is the intersection of two road boundary lines or two lane lines of the target road in the road image.
  • Alternatively, the above-mentioned second detection point may be an intersection of the extension lines of two road boundary lines or two lane lines of the target road in the road image.
  • In practical applications, any image may contain multiple vanishing points, and the above-mentioned first detection point may specifically be the vanishing point in the road image that lies on the plane of the target road and in the direction of travel of the terminal device (that is, the direction of travel of the vehicle).
  • FIG. 4a is a schematic diagram of a road image provided by an embodiment of the present application.
  • Here, the horizontal axis of the image coordinate system of the road image is U, and the vertical axis is V.
  • After acquiring the above road image, the terminal device can first determine, on the vehicle horizontal plane, a set of parallel lines parallel to the traveling direction of the vehicle, and then project this set of parallel lines onto the road image. The terminal device may then determine the intersection point of the projections of the two parallel lines on the road image (that is, the vanishing point) as the first detection point.
  • Optionally, the terminal device can also use methods such as vanishing point detection based on spatial transformation, vanishing point detection based on statistical estimation, or vanishing point detection based on machine learning to determine the first detection point from the above road image, which is not specifically limited in this application.
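  • The sketch below illustrates one way the projection-and-intersection step could look with a calibrated pinhole camera model: two parallel ground-plane lines (parallel to the direction of travel) are projected into the image with a 3x4 projection matrix, and the vanishing point is obtained as the intersection of the two projected lines using homogeneous coordinates. The intrinsics, camera height and orientation used here are placeholders; this application does not prescribe a particular camera model.

```python
import numpy as np

def project(P, X):
    """Project 3D point X = (X, Y, Z) with a 3x4 matrix P; return homogeneous (u, v, 1)."""
    x = P @ np.append(np.asarray(X, dtype=float), 1.0)
    return x / x[2]

def vanishing_point(P, line_a, line_b):
    """Each line is two 3D points on the vehicle horizontal plane, parallel to the
    direction of travel. Returns (u1, v1): the intersection of their projections."""
    la = np.cross(project(P, line_a[0]), project(P, line_a[1]))  # image line (homogeneous)
    lb = np.cross(project(P, line_b[0]), project(P, line_b[1]))
    vp = np.cross(la, lb)                                        # intersection of the lines
    return vp[:2] / vp[2]

# Illustrative calibration only: intrinsics K, camera 1.5 m above the road,
# optical axis aligned with the vehicle X axis (direction of travel).
K = np.array([[1000.0, 0.0, 640.0],
              [0.0, 1000.0, 360.0],
              [0.0, 0.0, 1.0]])
R = np.array([[0.0, -1.0, 0.0],    # vehicle +Y -> camera -x
              [0.0, 0.0, -1.0],    # vehicle +Z -> camera -y
              [1.0, 0.0, 0.0]])    # vehicle +X -> camera z (optical axis)
t = -R @ np.array([0.0, 0.0, 1.5]) # camera centre at (0, 0, 1.5) in vehicle coordinates
P = K @ np.hstack([R, t.reshape(3, 1)])

line_a = [(5.0, -1.5, 0.0), (50.0, -1.5, 0.0)]   # parallel ground-plane lines
line_b = [(5.0, 1.5, 0.0), (50.0, 1.5, 0.0)]
print(vanishing_point(P, line_a, line_b))        # -> [640. 360.] for this setup
```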
  • FIG. 4b is another schematic diagram of a road image provided by an embodiment of the present application.
  • The terminal device can also detect the road boundary lines of the target road from the above-mentioned road image. Specifically, the terminal device may first use a grayscale gradient or a color threshold to segment the pixel regions where the road boundary lines in the road image are located. Then, the terminal device can fit the segmented pixel regions with a quadratic or cubic function, so as to determine the pixel points occupied by the road boundary lines in the above-mentioned road image.
  • Optionally, the terminal device can also extract the road boundary lines from the above road image through a road boundary line extraction method based on machine learning, a road boundary line extraction method based on an edge extraction algorithm with an adaptive threshold, and the like, which is not specifically limited in this application.
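  • A minimal sketch of the segmentation-and-fitting approach described above is shown below, using a plain grayscale threshold and a quadratic fit with NumPy; the threshold value, the region-of-interest mask and the choice of fitting u as a polynomial in v are illustrative assumptions, and a real system would more likely use one of the more robust extraction methods just mentioned.

```python
import cv2
import numpy as np

def fit_boundary_line(road_image, roi_mask, threshold=200, degree=2):
    """Segment bright boundary/lane-marking pixels inside a region of interest
    and fit u as a polynomial in v (quadratic by default).

    road_image: BGR image; roi_mask: uint8 mask for the region where one line
    is expected. Returns coefficients usable as np.polyval(coeffs, v) -> u,
    or None if too few pixels were segmented.
    """
    gray = cv2.cvtColor(road_image, cv2.COLOR_BGR2GRAY)
    _, binary = cv2.threshold(gray, threshold, 255, cv2.THRESH_BINARY)
    binary = cv2.bitwise_and(binary, roi_mask)
    vs, us = np.nonzero(binary)              # rows are v values, columns are u values
    if len(vs) < degree + 1:
        return None
    return np.polyfit(vs, us, degree)        # polynomial coefficients of u(v)
```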
  • After extracting the road boundary lines from the above road image, if the terminal device detects that the road boundary lines already intersect at a point in the road image, it may determine that intersection of the road boundary lines as the second detection point. If the terminal device detects that the road boundary lines of the target road in the road image do not intersect at a point, the terminal device can extend them along the extension direction of the road boundary lines until the extension lines of the road boundary lines intersect at a point. Then, the terminal device may determine the intersection of the extension lines of the road boundary lines as the above-mentioned second detection point.
  • the terminal device may first detect the aforementioned lane line of the target road from the aforementioned road image.
  • the process of detecting the lane line of the target road by the terminal device from the road image can refer to the process of detecting the road boundary line of the target lane from the road image by the terminal device described above, which will not be repeated here.
  • After the terminal device extracts the lane lines of the target road from the above-mentioned road image, if it detects that the lane lines already intersect at a point in the road image, the terminal device can determine the intersection of the lane lines as the above-mentioned second detection point.
  • If the terminal device detects that the lane lines of the target road in the road image do not intersect at a point, it can extend the lane lines along their extension direction until the extension lines of the lane lines intersect at a point. Then, the terminal device may determine the intersection of the extension lines of the lane lines as the above-mentioned second detection point.
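  • Continuing the sketch above, the second detection point is the intersection of the two detected lines; in the simplest case each boundary or lane line is fitted as a straight line u = a*v + b, and intersecting the lines (or, equivalently, their extensions when they do not meet inside the image) is a single linear solve, as shown below. Quadratic or cubic fits would instead be intersected numerically. The coefficients in the example are synthetic.

```python
import numpy as np

def second_detection_point(left_line, right_line):
    """Intersect two straight-line fits u = a*v + b (one per boundary/lane line).
    Returns (u2, v2), or None if the lines are parallel and never intersect."""
    a1, b1 = left_line
    a2, b2 = right_line
    if np.isclose(a1, a2):
        return None
    v2 = (b2 - b1) / (a1 - a2)               # extend the lines until they meet
    u2 = a1 * v2 + b1
    return u2, v2

# Synthetic fits: left line u = -1.333*v + 1133.3, right line u = 1.333*v + 166.7.
print(second_detection_point((-1.333, 1133.3), (1.333, 166.7)))  # ~ (650.0, 362.6)
```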
  • S103 Determine the road condition of the target road according to the first detection point and the second detection point.
  • After determining the first detection point and the second detection point from the above road image, the terminal device may determine the road condition of the target road according to the position information of the first detection point and the second detection point in the road image.
  • the road conditions of the target road may include vertical road conditions.
  • The so-called vertical road condition refers to whether the road in the forward direction of the vehicle is uphill, downhill, or flat.
  • the terminal device can determine the vertical road condition of the target road through the position information of the first detection point and the second detection point in the road image. It should be noted here that the position information involved in the embodiments of the present application refers to the corresponding coordinate value of each pixel in the road image in the preset image coordinate system.
  • In a specific implementation, the terminal device may obtain the position information of the first detection point in the vertical direction of the road image (for ease of distinction, referred to below as the first vertical position information). Likewise, the terminal device can also obtain the position information of the second detection point in the vertical direction of the road image (referred to below as the second vertical position information).
  • the terminal device may first determine the coordinate value of the pixel point corresponding to the first detection point in the image coordinate system corresponding to the road image, which is assumed to be (u1, v1) here. Then, the terminal device may determine the coordinate value v1 of the pixel point on the V axis as the first vertical position information of the first detection point.
  • the terminal device can also determine the coordinate value of the pixel point corresponding to the second detection point in the image coordinate system corresponding to the road image, which is assumed to be (u2, v2) here. Then, the terminal device may determine the coordinate value v2 of the pixel point on the V axis as the second vertical position information of the second detection point. Then, the terminal device can determine the relative positional relationship between the first detection point and the second detection point in the vertical direction on the road image according to the first vertical position information v1 and the second vertical position information v2, and further based on this The relative position relationship determines the vertical road condition of the target road.
  • Optionally, if the terminal device determines, according to the first vertical position information and the second vertical position information, that the first detection point is above the second detection point, it may determine that the vertical road condition of the target road is downhill. If the terminal device determines, according to the first vertical position information and the second vertical position information, that the first detection point is below the second detection point, it may determine that the vertical road condition of the target road is an uphill slope. If the terminal device determines that the first vertical position information and the second vertical position information are the same, it may determine that the vertical road condition of the target road is flat. Specifically, the terminal device may calculate the difference between the second vertical position information and the first vertical position information, that is, v2-v1.
  • If the terminal device determines that v2-v1 is greater than 0, it can determine that the first detection point is above the second detection point and that the current vertical road condition of the target road is downhill. If the terminal device determines that v2-v1 is equal to 0 (that is, the first vertical position information and the second vertical position information are the same), it can determine that the first detection point and the second detection point have the same vertical position and that the current vertical road condition of the target road is flat. If the terminal device determines that v2-v1 is less than 0, it can determine that the first detection point is below the second detection point and that the current vertical road condition of the target road is an uphill slope.
  • Here, the terminal device directly judges the vertical road condition of the target road based on the relative positional relationship between the vanishing point, which lies on the plane of the target road and in the traveling direction of the vehicle, and the intersection of the lane lines or road boundary lines (or the intersection of their extension lines). This can avoid misjudgment of the vertical road condition caused by factors such as the installation position and posture of the terminal device on the vehicle, and can improve the accuracy of road condition detection.
  • Fig. 4c is another schematic diagram of a road image provided by an embodiment of the present application.
  • Optionally, the terminal device may first determine a first target area in the road image according to the first vertical position information and a preconfigured first preset difference. Here, assume the first preset difference is d1. Specifically, the terminal device may determine the area composed of pixels whose vertical position information in the road image falls within the range [v1-d1, v1+d1] as the first target area. After the first target area is determined, the terminal device may determine the vertical road condition of the target road according to the relative positional relationship between the first target area and the second detection point. Specifically, after determining the first target area, if the terminal device determines that v2 is greater than or equal to v1-d1 and less than or equal to v1+d1, it can determine that the second detection point is within the first target area and that the vertical road condition of the target road is flat. If the terminal device determines that v2 is less than v1-d1, it can determine that the second detection point is above the first target area and that the vertical road condition of the target road is an uphill slope. If the terminal device determines that v2 is greater than v1+d1, it can determine that the second detection point is below the first target area and that the vertical road condition of the target road is downhill.
  • the magnitude of the first preset difference d1 may be determined by the terminal device according to the driving environment of the vehicle.
  • When the terminal device determines that the driving environment of the vehicle is relatively complicated, for example the current road is relatively bumpy or undulates strongly, a larger value of d1 can be selected. When the terminal device determines that the driving environment of the vehicle is relatively simple, for example the current road surface is relatively smooth, a smaller value of d1 can be selected.
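  • A small sketch of this tolerance-band comparison is given below; the same helper works for the horizontal case described later simply by passing u1, u2 and d2 instead. The numeric values are illustrative only.

```python
def compare_to_band(p1, p2, d):
    """Compare coordinate p2 of the second detection point against the target
    area [p1 - d, p1 + d] built around coordinate p1 of the first detection point.
    Returns -1 if p2 is less than the band, 0 if inside it, +1 if greater."""
    if p2 < p1 - d:
        return -1
    if p2 > p1 + d:
        return 1
    return 0


def vertical_condition_with_band(v1, v2, d1):
    """First-target-area version: v2 above the band (smaller v) -> uphill,
    inside -> flat, below the band (larger v) -> downhill."""
    return {-1: "uphill", 0: "flat", 1: "downhill"}[compare_to_band(v1, v2, d1)]


print(vertical_condition_with_band(v1=360.0, v2=380.0, d1=12.0))   # -> "downhill"
print(vertical_condition_with_band(v1=360.0, v2=365.0, d1=12.0))   # -> "flat"
```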
  • The terminal device determines the vertical road condition of the target road based on the relative positional relationship between the first target area, which is determined from the vanishing point, and the intersection of the lane lines or road boundary lines (or the intersection of their extension lines). This can further avoid misjudgment of the vertical road condition caused by factors such as vehicle bumps, further improve the accuracy of road condition detection, and improve the safety and reliability of smart cars.
  • the absolute value of the difference v2-v1 between the second vertical position information and the first vertical position information can also be understood as the distance between the second detection point and the first detection point in the vertical direction of the image (that is, the vertical distance). This vertical distance is proportional to the slope of the target road.
  • In other words, in practical applications, the terminal device can also estimate the size of the uphill or downhill gradient based on this vertical distance. Specifically, the terminal device can calculate the vertical distance |v2-v1| and estimate the magnitude of the gradient from it.
  • the road conditions of the target road may include horizontal road conditions.
  • The so-called horizontal road condition refers to whether the road in the forward direction of the vehicle is curving to the left, curving to the right, or straight.
  • the terminal device may also determine the horizontal road condition of the target road through the position information of the first detection point and the second detection point in the road image.
  • In a specific implementation, the terminal device can obtain the position information of the first detection point in the horizontal direction of the road image (for ease of distinction, referred to below as the first horizontal position information). Likewise, the terminal device can also obtain the position information of the second detection point in the horizontal direction of the road image (referred to below as the second horizontal position information).
  • After the terminal device obtains the coordinate values (u1, v1) and (u2, v2) of the pixel points corresponding to the first detection point and the second detection point in the image coordinate system corresponding to the road image, it may determine u1 in the coordinate values (u1, v1) as the first horizontal position information of the first detection point, and may determine u2 in the coordinate values (u2, v2) as the second horizontal position information of the second detection point. Then, the terminal device can determine the relative positional relationship between the first detection point and the second detection point in the horizontal direction of the road image according to the first horizontal position information u1 and the second horizontal position information u2, and further determine the horizontal road condition of the target road based on this relative positional relationship.
  • FIG. 4d is another schematic diagram of a road image provided by an embodiment of the present application.
  • After the terminal device obtains the first horizontal position information u1 and the second horizontal position information u2, if it determines from u1 and u2 that the first detection point is to the left of the second detection point, it can determine that the horizontal road condition of the target road is curving to the right. If the terminal device determines, according to the first horizontal position information u1 and the second horizontal position information u2, that the first detection point is to the right of the second detection point, it can determine that the horizontal road condition of the target road is curving to the left. If the terminal device determines that the first horizontal position information u1 and the second horizontal position information u2 are the same, it may determine that the horizontal road condition of the target road is straight. Specifically, the terminal device may calculate the difference between the second horizontal position information and the first horizontal position information, that is, u2-u1. Then, if the terminal device determines that u2-u1 is greater than 0, it can determine that the first detection point is to the left of the second detection point and that the current horizontal road condition of the target road is curving to the right. If the terminal device determines that u2-u1 is equal to 0 (that is, the first horizontal position information and the second horizontal position information are the same), it can determine that the first detection point and the second detection point have the same horizontal position and that the current horizontal road condition of the target road is straight. If the terminal device determines that u2-u1 is less than 0, it can determine that the first detection point is to the right of the second detection point and that the current horizontal road condition of the target road is curving to the left.
  • Here, the terminal device directly judges the horizontal road condition of the target road based on the relative positional relationship, in a single road image, between the vanishing point on the plane of the target road and in the vehicle traveling direction and the intersection of the lane lines or road boundary lines (or the intersection of their extension lines). This can avoid misjudgment of the horizontal road condition caused by small differences between different road images, improve the accuracy of road condition detection, and improve the safety and reliability of smart cars.
  • FIG. 4e is another schematic diagram of a road image provided by an embodiment of the present application.
  • Optionally, the terminal device may first determine a second target area in the road image according to the first horizontal position information and a preconfigured second preset difference. Here, assume the second preset difference is d2.
  • the terminal device may determine an area composed of pixels whose position information in the horizontal direction in the road image is within the range of [u1-d2, u1+d2] as the second target area. After the second target area is determined, the terminal device may determine the horizontal road condition of the target road according to the relative position relationship between the second target area and the second detection point.
  • Specifically, after the terminal device determines the second target area, if it determines that u2 is greater than or equal to u1-d2 and less than or equal to u1+d2, it can determine that the second detection point is within the second target area and that the horizontal road condition of the target road is straight. If the terminal device determines that u2 is less than u1-d2, it can determine that the second detection point is to the left of the second target area and that the horizontal road condition of the target road is curving to the left. If the terminal device determines that u2 is greater than u1+d2, it can determine that the second detection point is to the right of the second target area and that the horizontal road condition of the target road is curving to the right.
  • the magnitude of the second preset difference may also be determined by the terminal device according to the driving environment of the vehicle.
  • When the terminal device determines that the driving environment of the vehicle is relatively complicated, for example the current road is relatively bumpy or undulates strongly, a larger value of d2 can be selected. When the terminal device determines that the driving environment of the vehicle is relatively simple, for example the current road surface is relatively smooth, a smaller value of d2 can be selected.
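  • The application only ties the size of d1 and d2 to how complicated the driving environment is, without fixing concrete values. The snippet below sketches one hypothetical heuristic for picking the preset difference from a normalized roughness estimate, together with the horizontal counterpart of the band comparison shown earlier; the scaling constants and the roughness measure are assumptions for illustration.

```python
def choose_preset_difference(road_roughness, d_min=3.0, d_max=15.0):
    """Hypothetical heuristic: a larger preset difference (in pixels) for rougher
    or more undulating roads; road_roughness is a normalized value in [0, 1]."""
    r = min(max(road_roughness, 0.0), 1.0)
    return d_min + (d_max - d_min) * r


def horizontal_condition_with_band(u1, u2, d2):
    """Second-target-area version: compare u2 against [u1 - d2, u1 + d2]."""
    if u2 < u1 - d2:
        return "curve left"
    if u2 > u1 + d2:
        return "curve right"
    return "straight"


d2 = choose_preset_difference(road_roughness=0.2)   # fairly smooth road -> small d2
print(horizontal_condition_with_band(u1=640.0, u2=643.0, d2=d2))   # -> "straight"
```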
  • The terminal device judges the horizontal road condition of the target road according to the relative positional relationship between the second target area, which is determined from the vanishing point and the second preset difference, and the intersection of the lane lines or road boundary lines (or the intersection of their extension lines). This can further avoid misjudgment of the horizontal road condition caused by factors such as vehicle bumps, further improve the accuracy of road condition detection, and improve the safety and reliability of smart cars.
  • It should also be noted that the absolute value of the difference u2-u1 between the second horizontal position information and the first horizontal position information can also be understood as the distance between the second detection point and the first detection point in the horizontal direction of the image (that is, the horizontal distance). This horizontal distance is proportional to the curvature of the target road. That is to say, in practical applications, the terminal device can also estimate the curvature of the target road to the left or right according to this horizontal distance. Specifically, the terminal device can calculate the horizontal distance |u2-u1|.
  • For a curve to the left, the terminal device may obtain the preset first curvature coefficient K3 and determine that the curvature of the target road to the left is K3*|u2-u1|.
  • For a curve to the right, the terminal device may obtain the preset second curvature coefficient K4 and determine that the curvature of the target road to the right is K4*|u2-u1|.
  • It should also be noted that, in a specific implementation, the terminal device may determine the vertical road condition of the target road based on the first vertical position information v1 of the first detection point and the second vertical position information v2 of the second detection point, and at the same time determine the horizontal road condition of the target road based on the first horizontal position information u1 of the first detection point and the second horizontal position information u2 of the second detection point.
  • Avoiding an increase in the amount of data processing in this way can simplify the complexity of the road condition detection method and improve the applicability of the road condition detection method.
  • The terminal device can thus judge the horizontal road condition and/or vertical road condition of the target road through the relative positional relationship, in a single road image, between the vanishing point on the plane where the target road is located and in the vehicle traveling direction and the intersection of the lane lines or road boundary lines (or the intersection of their extension lines). This can avoid misjudgment of road conditions caused by vehicle bumps or small differences between different road images, improve the accuracy of road condition detection, and improve the safety and reliability of smart cars.
  • FIG. 5 is a schematic structural diagram of an apparatus provided by an embodiment of the present application.
  • The device may be the terminal device itself described in the embodiments, or may be a component or module inside the terminal device. As shown in FIG. 5, the device includes:
  • the transceiver unit 501 is used to obtain the road image of the target road;
  • the processing unit 502 is configured to determine a first detection point and a second detection point in the road image, where the first detection point is a vanishing point in the road image, and the second detection point is a The intersection of the road boundary line or the lane line of the target road in the road image, or the second detection point is the intersection of the road boundary line or the extension line of the lane line of the target road in the road image;
  • the processing unit 502 is further configured to determine the road condition of the target road according to the first detection point and the second detection point.
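  • Purely to illustrate how the work is split between the transceiver unit 501 and the processing unit 502, a Python-style skeleton is sketched below; the method names and the helpers the units delegate to are assumptions, not interfaces defined by this application.

```python
class RoadConditionDevice:
    """Skeleton mirroring the unit split of FIG. 5: the transceiver unit obtains
    the road image, the processing unit finds the detection points and decides
    the road condition of the target road."""

    def __init__(self, transceiver_unit, processing_unit):
        self.transceiver_unit = transceiver_unit   # e.g. wraps a camera (unit 501)
        self.processing_unit = processing_unit     # detection and decision (unit 502)

    def detect_road_condition(self):
        road_image = self.transceiver_unit.get_road_image()                  # S101
        p1, p2 = self.processing_unit.find_detection_points(road_image)      # S102
        return self.processing_unit.determine_road_condition(p1, p2)         # S103
```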
  • the first detection point is a vanishing point in the road image on the plane where the target road is located and in the traveling direction of the terminal device.
  • the road conditions include vertical road conditions
  • the processing unit 502 is configured to:
  • the processing unit 502 is configured to:
  • the processing unit 502 is configured to:
  • the road conditions include horizontal road conditions
  • the processing unit 502 is configured to:
  • the processing unit 502 is configured to determine, if the first horizontal position information and the second horizontal position information are the same, that the horizontal road condition of the target road is straight.
  • the processing unit 502 is configured to:
  • the vertical distance between the first detection point and the second detection point in the road image is proportional to the slope of the target road.
  • the horizontal distance between the first detection point and the second detection point in the road image is proportional to the curvature of the target road.
  • The device can judge the horizontal road condition and/or vertical road condition of the target road based on the relative positional relationship, in a single road image, between the vanishing point on the plane where the target road is located and in the vehicle traveling direction and the intersection of the lane lines or road boundary lines (or the intersection of their extension lines). This can avoid misjudgment of road conditions caused by factors such as vehicle bumps or small differences between different road images, and can improve the accuracy of road condition detection.
  • FIG. 6 is a schematic diagram of another structure of an apparatus provided by an embodiment of the present application.
  • the device may be the terminal device in the embodiment, and may be used to implement the road condition detection method implemented by the terminal device in the foregoing embodiment.
  • the device includes a processor 61, a memory 62, a transceiver 63 and a bus system 64.
  • The memory 62 includes, but is not limited to, RAM, ROM, EPROM, or CD-ROM, and the memory 62 is used to store instructions and data related to the road condition detection method provided by the embodiments of the present application.
  • In the embodiments of the present application, the memory 62 stores the following elements, executable modules or data structures, or a subset or extended set thereof:
  • Operation instructions: including various operation instructions, used to implement various operations.
  • Operating system: including various system programs, used to implement various basic services and to process hardware-based tasks.
  • the transceiver 63 may be a camera or other image capture device. Applied in the embodiment of the present application, the transceiver 63 is used to execute the process of obtaining the road image of the target road in step S101 in the embodiment.
  • the processor 61 may be a controller, a CPU, a general-purpose processor, a DSP, an ASIC, an FPGA or other programmable logic devices, transistor logic devices, hardware components, or any combination thereof. It can implement or execute various exemplary logical blocks, modules, and circuits described in conjunction with the disclosure of the embodiments of the present application.
  • the processor 61 may also be a combination for realizing calculation functions, for example, including a combination of one or more microprocessors, a combination of a DSP and a microprocessor, and so on. Applied to the embodiment of the present application, the processor 61 may be used to execute the step of determining the first detection point and the second detection point in step S102 in the embodiment.
  • the processor 61 may also be configured to perform the step of determining the road condition of the target road according to the first detection point and the second detection point in step S103 in the embodiment.
  • In addition to a data bus, the bus system 64 may also include a power bus, a control bus, and a status signal bus. However, for the sake of clarity, the various buses are all marked as the bus system 64 in FIG. 6. For ease of presentation, FIG. 6 is only drawn schematically.
  • the processor in the embodiment of the present application may be an integrated circuit chip with signal processing capability.
  • the steps of the foregoing method embodiments can be completed by hardware integrated logic circuits in the processor or instructions in the form of software.
  • The above-mentioned processor may be a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component.
  • the memory in the embodiments of the present application may be a volatile memory or a non-volatile memory, or may include both volatile and non-volatile memory.
  • the non-volatile memory can be read-only memory (ROM), programmable read-only memory (programmable ROM, PROM), erasable programmable read-only memory (erasable PROM, EPROM), and electrically available Erase programmable read-only memory (electrically EPROM, EEPROM) or flash memory.
  • The volatile memory may be random access memory (RAM), which is used as an external cache. Many forms of RAM are available, such as static random access memory (static RAM, SRAM), dynamic random access memory (dynamic RAM, DRAM), synchronous dynamic random access memory (synchronous DRAM, SDRAM), double data rate synchronous dynamic random access memory (double data rate SDRAM, DDR SDRAM), enhanced synchronous dynamic random access memory (enhanced SDRAM, ESDRAM), synchronous link dynamic random access memory (synclink DRAM, SLDRAM), and direct rambus random access memory (direct rambus RAM, DR RAM).
  • the embodiment of the present application also provides a computer-readable medium on which a computer program is stored, and when the computer program is executed by a computer, it implements the road condition detection method or step executed by the terminal device in the above-mentioned embodiment.
  • the embodiment of the present application also provides a computer program product, which, when executed by a computer, implements the road condition detection method or step executed by the terminal device in the above-mentioned embodiment.
  • the embodiment of the present application also provides a device, and the device may be the terminal device in the embodiment.
  • the device includes at least one processor and interface.
  • the processor is used to execute the road condition detection method or step executed by the terminal device in the foregoing embodiment.
  • The foregoing device may be a chip, and the foregoing processor may be implemented by hardware or by software.
  • When implemented by hardware, the processor may be a logic circuit, an integrated circuit, or the like. When implemented by software, the processor may be a general-purpose processor that is implemented by reading software code stored in a memory; the memory may be integrated in the processor, or may be located outside the processor and exist independently.
  • the disclosed system, device, and method may be implemented in other ways.
  • the device described above is only illustrative.
  • The division of units is only a logical function division, and there may be other divisions in actual implementation; for example, multiple units or components can be combined or integrated into another system, or some features can be ignored or not implemented.
  • the displayed or discussed mutual coupling or direct coupling or communication connection may be indirect coupling or communication connection through some interfaces, devices or units, and may also be electrical, mechanical or other forms of connection.
  • the functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit.
  • the above-mentioned integrated unit can be implemented in the form of hardware or software functional unit.

Abstract

The embodiments of this application provide a road condition detection method and device, applicable to scenarios such as assisted driving and autonomous driving of smart cars. The method includes: acquiring a road image of a target road; determining a first detection point and a second detection point in the road image, where the first detection point is a vanishing point in the road image, and the second detection point is the intersection of the road boundary lines or lane lines of the target road in the road image, or alternatively the second detection point may be the intersection of the extension lines of the road boundary lines or lane lines of the target road in the road image; and determining the road condition of the target road according to the first detection point and the second detection point. Using the method provided by this application can improve the accuracy of road condition detection and improve the safety and reliability of smart cars.

Description

一种路况检测方法和装置 技术领域
本申请涉及智能汽车领域,尤其涉及一种路况检测方法和装置。
背景技术
随着互联网技术和制造技术的不断发展,诸如智能汽车等新兴事物不断的涌现。所谓的智能汽车,就是装载有高级驾驶辅助系统(advanced driving assistant system,ADAS)的汽车。智能汽车可以通过ADAS来感知道路环境,以实现辅助驾驶甚至是无人驾驶。对于智能汽车来说,道路状况(下文简称路况)检测技术又是其主要依赖的技术之一。路况检测的主要目的是为了获取汽车前行道路的路况信息(如道路坡度、道路曲率等),从而辅助汽车合理的进行车速、车向的辅助控制或者自动控制。由于路况检测精准与否会直接影响到智能汽车的安全性和可靠性,因此,如何保证路况检测的精度已经成为了当前一大研究热点。
现有技术中,智能汽车可通过毫米波雷达、激光雷达、摄像机等传感器进行路况检测,而摄像机由于成本低、技术成熟等优势,已经逐步成为路况检测中的主力传感器。但是,现有的基于摄像头的路况检测技术对摄像机的在汽车上的安装位置和姿态上有较高要求,在摄像机的光轴方向和汽车的行进方向不是水平关系(如车辆发生颠簸)等情况下容易产生误判,这样就无法获取准确的路况信息,从而降低了智能汽车的安全性和可靠性。
发明内容
本申请提供一种路况检测方法和装置。采用本申请所提供的方案,可提升路况检测的精度,提升智能汽车的安全性和可靠性。
第一方面,本申请实施例提供一种路况检测方法,该方法包括:先获取目标道路的道路图像。在所述道路图像中确定出第一检测点和第二检测点。这里,所述第一检测点为所述道路图像中的消失点。所述第二检测点为所述道路图像中所述目标道路的道路边界线或者车道线的交点,或者,所述第二检测点为所述道路图像中所述目标道路的道路边界线或者车道线的延长线交点。根据所述第一检测点和所述第二检测点确定所述目标道路的路况。
在本申请实施例中,终端设备可通过某一张道路图像中在目标道路所在平面上且在车辆行进方向上的消失点与车道线或者车道边界线的交点(或者延长线交点)的相对位置关系来判断目标道路的水平路况和/或垂直路况,可避免因车辆颠簸或者不同的道路图像的差异较小等因素造成的路况误判的情况发生,可提升路况检测的精度,提升智能汽车的安全性和可靠性。
结合第一方面,在一种可能的实施方式中,所述第一检测点为所述道路图像中在所述目标道路所在平面且在所述终端设备行进方向上的消失点。
结合第一方面,在一种可能的实施方式中,所述路况包括垂直路况。终端设备可获取所述第一检测点的第一垂直位置信息和所述第二检测点的第二垂直位置信息。根据所述第一垂直位置信息和所述第二垂直位置信息确定所述目标道路的垂直路况。
结合第一方面,在一种可能的实施方式中,若终端设备根据所述第一垂直位置信息和所述第二垂直位置信息确定所述第一检测点在所述第二检测点的上方,则确定所述目标道路的垂直路况为下坡。若终端设备根据所述第一垂直位置信息和所述第二垂直位置信息确定所述第一检测点在所述第二检测点的下方,则确定所述目标道路的垂直路况为上坡。若终端设备确定所述第一垂直位置信息和所述第二垂直位置信息相同,则确定所述目标道路的垂直路况为平坦。这里,终端设备直接根据在目标道路所在平面上且在车辆行进方向上的消失点与车道线或者车道边界线的交点(或者延长线交点)的相对位置关系来判断目标道路的垂直路况,可避免终端设备在车辆上的安装位置及姿态等因素造成的垂直路况误判的情况的发生,可提升路况检测的精度,提升智能汽车的安全性和可靠性。
结合第一方面,在一种可能的实施方式中,终端设备可根据所述第一垂直位置信息和第一预设差值在所述道路图像中确定出第一目标区域。若根据所述第二垂直位置信息确定所述第二检测点在所述第一目标区域的下方,则确定所述目标道路的垂直路况为下坡。若根据所述第二垂直位置信息确定所述第二检测点在所述第一目标区域的上方,则确定所述目标道路的垂直路况为上坡。若根据所述第二垂直位置信息确定所述第二检测点在所述第一目标区域内,则确定所述目标道路的垂直路况为平坦。这里,终端设备根据由消失点确定的第一目标区域与车道线或者车道边界线的交点(或者延长线交点)的相对位置关系来判断目标道路的垂直路况,可避免车辆颠簸等因素造成的垂直路况误判的情况的发生,可进一步提升路况检测的精度,进一步提升智能汽车的安全性和可靠性。
结合第一方面,在一种可能的实施方式中,所述路况包括水平路况。终端设备可获取所述第一检测点的第一水平位置信息和所述第二检测点的第二水平位置信息。根据所述第一水平位置信息和所述第二水平位置信息确定所述目标道路的水平路况。
结合第一方面,在一种可能的实施方式中,若终端设备根据所述第一水平位置信息和所述第二水平位置信息确定所述第一检测点在所述第二检测点的左侧,则确定所述目标道路的水平路况为向右弯曲。若终端设备根据所述第一水平位置信息和所述第二水平位置信息确定所述第一检测点在所述第二检测点的右侧,则确定所述目标道路的水平路况为向左弯曲。若终端设备确定所述第一水平位置信息和所述第二水平位置信息相同,则确定所述目标道路的水平路况为直行。这里,终端设备直接根据某一张道路图像中在目标道路所在平面上且在车辆行进方向上的消失点与车道线或者车道边界线的交点(或者延长线交点)的相对位置关系来判断目标道路的水平路况,可避免不同道路图像差异较小造成的水平路况误判的情况的发生,可提升路况检测的精度,提升智能汽车的安全性和可靠性。
结合第一方面,在一种可能的实施方式中,终端设备可根据所述第一水平位置信息和第二预设差值在所述道路图像中确定出第二目标区域。若终端设备根据所述第二水平位置信息确定所述第二检测点在所述第二目标区域的左侧,则确定所述目标道路的水平路况为向左弯曲。若终端设备根据所述第二水平位置信息确定所述第二检测点在所述第二目标区域的右侧,则确定所述目标道路的水平路况为向右弯曲。若终端设备根据所述第二水平位置信息确定所述第二检测点在所述第二目标区域内,则确定所述目标道路的水平路况为直行。终端设备根据由消失点和第二预设差值确定的第二目标区域与车道线或者车道边界线的交点(或者延长线交点)的相对位置关系来判断目标道路的水平路况,可进一步避免车 辆颠簸等因素造成的水平路况误判的情况的发生,可进一步提升路况检测的精度,提升智能汽车的安全性和可靠性。
结合第一方面,在一种可能的实施方式中,所述第一检测点和所述第二检测点在所述道路图像中的垂直距离与所述目标道路的坡度成正比。
结合第一方面,在一种可能的实施方式中,所述第一检测点和所述第二检测点在所述道路图像中的水平距离与所述目标道路的曲率成正比。
第二方面,本申请实施例提供了一种装置。该装置可为终端设备本身,也可为终端设备内部的如芯片等元件或者模块。该装置包括用于执行上述第一方面的任意一种可能的实现方式所提供的路况检测方法的单元,因此也能够实现第一方面提供的路况检测方法所具备的有益效果(或者优点)。
第三方面,本申请实施例提供了一种装置,该装置可为终端设备。该装置包括至少一个存储器、处理器和收发器。其中,该处理器用于调用存储器存储的代码执行上述第一方面中任意一种可能的实施方式所提供的路况检测方法,因此也能够实现第一方面提供的路况检测方法所具备的有益效果(或者优点)。
第四方面,本申请实施例提供了一种装置。该装置可以是芯片,该装置包括:至少一个处理器和接口电路。该接口电路用于接收代码指令并传输至该处理器。该处理器用于运行上述代码指令以实现上述第一方面中任意一种可能的实施方式所提供的路况检测方法。
第五方面,本申请实施例提供了一种计算机可读存储介质,该计算机可读存储介质中存储有指令,当该指令在计算机上运行时,实现上述第一方面中任意一种可能的实施方式所提供的路况检测方法,也能实现上述第一方面提供的路况检测方法所具备的有益效果(或者优点)。
第六方面,本申请实施例提供了一种包含指令的计算机程序产品,当该计算机程序产品在计算机上运行时,使得计算机执行上述第一方面中任意一种可能的实施方式提供的路况检测方法,也能实现第一方面提供的路况检测方法所具备的有益效果。
在本申请实施例提供的方法中,终端设备可通过一张道路图像中在目标道路所在平面上且在车辆行进方向上的消失点与车道线或者车道边界线的交点(或者延长线交点)的相对位置关系来判断目标道路的水平路况和/或垂直路况,可避免因车辆颠簸或者不同的道路图像的差异较小等因素造成的路况误判的情况发生,可提升路况检测的精度,提升智能汽车的安全性和可靠性。
附图说明
图1是本申请实施例提供的一种路况检测场景示意图;
图2是本申请实施例提供的一种车辆坐标系示意图;
图3是本申请实施例提供的一种路况检测方法的流程示意图;
图4a是本申请实施例提供的一种道路图像示意图;
图4b是本申请实施例提供的又一种道路图像示意图;
图4c是本申请实施例提供的又一种道路图像示意图;
图4d是本申请实施例提供的又一种道路图像示意图;
图4e是本申请实施例提供的又一种道路图像示意图;
图5是本申请实施例提供的一种装置一结构示意图;
图6是本申请实施例提供的一种装置又一结构示意图。
具体实施方式
下面将结合附图,对本申请中的技术方案进行描述。
请参见图1,图1是本申请实施例提供的一种路况检测场景示意图。在该路况检测场景下,智能汽车在目标道路上的行驶过程中,会通过其车载的终端设备采集车辆前行方向上的目标道路的图像,然后对该目标道路的图像进行处理和分析,并根据处理和分析的结果判断前行道路的路况。在本申请实施例中,上述智能汽车可以包括各种车型。上述终端设备可以是智能汽车上装载的摄像头系统或者携带摄像头系统的其他系统或者设备。本申请实施例提供的路况检测方法即适用于上述终端设备。这里还需要说明的是,本申请提供的路况检测方法不仅可用于车载的终端设备,还可用于无人机、信号灯、测速装置等设备所装载的终端设备,此处不做具体限制。在本申请实施例中,将以车载的终端设备为例进行描述。
在实际应用中,智能汽车在装载终端设备的时候需要保证终端设备能够获取到完成的道路图像,并且终端设备的光轴需要与智能汽车前行方向保持水平关系。这样在车辆行驶过程中,智能汽车才可通过终端设备采集前进道路的图像,然后基于该图像的图像中心点和车道线延长后的交点的位置关系来确定前行道路的道路坡度,或者通过采集到的多张图像来确定道路曲率。但是,若在行驶过程中汽车发生颠簸,或者因安装的不牢固等原因使得终端设备的光轴与智能汽车前行方向无法保持水平关系,则智能汽车就无法通过上述方式确定的判定道路坡度。而当采集的多张图像之间差异较小时,智能汽车也无法准确的根据这多张图像确定道路曲率。因此,本申请实施例主要解决的技术问题是:如何提升路况检测技术的检测精度,从而提升智能汽车的安全性和可靠性。
Below, to facilitate understanding and description of the embodiments of this application, some concepts involved in the embodiments of this application are first briefly described.
1. Image coordinate system
In practical applications, for a given image, an image coordinate system may be established by taking any pixel in the image as the origin, taking any two mutually perpendicular directions as the horizontal direction and the vertical direction, and taking the length of one pixel as the basic unit. The position of each pixel in the image can then be described by its horizontal distance and vertical distance from the origin. In the embodiments of this application, the image coordinate system used to describe the positions of the pixels in a road image is established by taking the top-left vertex of the road image as the origin, the horizontally rightward direction as the positive horizontal direction, and the vertically downward direction as the positive vertical direction.
2. Vehicle coordinate system
Referring to FIG. 2, FIG. 2 is a schematic diagram of a vehicle coordinate system provided by an embodiment of this application. In the embodiments of this application, the vehicle coordinate system of a vehicle is established by taking the center point of the rear axle of the vehicle as the origin, the direction of the rear axle as the Y axis, the direction perpendicular to the rear wheels as the Z axis, and the direction of the line connecting the center point of the rear axle and the center point of the front axle as the X axis. Here, the X axis and the Y axis constitute the horizontal plane of the vehicle. At any moment, the traveling direction of the vehicle lies in the horizontal plane of the vehicle and is parallel to the X axis of the vehicle coordinate system.
Referring to FIG. 3, FIG. 3 is a schematic flowchart of a road condition detection method provided by an embodiment of this application. As shown in FIG. 3, the method includes the following steps:
S101: Obtain a road image of a target road.
In some feasible implementations, after the terminal device determines that road condition detection needs to be performed, it may turn on the camera and obtain, through the camera, a road image of the target road in the forward direction of the vehicle.
In a specific implementation, the terminal device may periodically or aperiodically capture at least one image in the traveling direction of the vehicle through the camera, perform image recognition and processing on the at least one image, select from the at least one image an image that includes two road boundary lines or two lane lines of the target road, and determine that image as the road image of the target road.
S102: Determine a first detection point and a second detection point in the road image.
In some feasible implementations, after obtaining the road image, the terminal device may determine a first detection point and a second detection point from the road image. Here, the first detection point is a vanishing point in the road image. The second detection point is the intersection point of the two road boundary lines or two lane lines of the target road in the road image; alternatively, the second detection point may be the intersection point of the extension lines of the two road boundary lines or two lane lines of the target road in the road image. In practical applications, any image may contain multiple vanishing points, and the first detection point may specifically be the vanishing point in the road image that lies in the plane of the target road and in the traveling direction of the terminal device (that is, the traveling direction of the vehicle).
In a specific implementation, referring to FIG. 4a, FIG. 4a is a schematic diagram of a road image provided by an embodiment of this application. Here, the horizontal axis of the image coordinate system of the road image is U and the vertical axis is V. After obtaining the road image, the terminal device may first determine, on the horizontal plane of the vehicle, a pair of parallel lines parallel to the traveling direction of the vehicle, and project this pair of parallel lines onto the road image. The terminal device may then determine the intersection point of the projections of the two parallel lines on the road image (that is, the vanishing point) as the first detection point. Optionally, the terminal device may also determine the first detection point from the road image by methods such as vanishing point detection based on spatial transformation, vanishing point detection based on statistical estimation, or vanishing point detection based on machine learning, which is not specifically limited in this application.
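By way of illustration only, the following is a minimal Python sketch of how the first detection point might be obtained as the intersection of the projections of the two parallel lines; the function name and the example pixel coordinates are assumptions for illustration and do not form part of the described method.

    import numpy as np

    def line_intersection(p1, p2, p3, p4):
        """Intersect the line through p1, p2 with the line through p3, p4.

        Points are (u, v) pixel coordinates in the image coordinate system.
        A 2-D line through two points is their cross product in homogeneous
        coordinates, and the intersection of two lines is again a cross product.
        Returns None if the projected lines are (numerically) parallel.
        """
        to_h = lambda p: np.array([p[0], p[1], 1.0])
        l1 = np.cross(to_h(p1), to_h(p2))
        l2 = np.cross(to_h(p3), to_h(p4))
        x = np.cross(l1, l2)
        if abs(x[2]) < 1e-9:  # projections do not meet in the image plane
            return None
        return (x[0] / x[2], x[1] / x[2])

    # Hypothetical endpoints of the two projected parallel lines:
    first_detection_point = line_intersection((100, 700), (350, 420), (900, 700), (650, 420))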
Further, in an optional implementation, referring to FIG. 4b, FIG. 4b is a schematic diagram of another road image provided by an embodiment of this application. The terminal device may also detect the road boundary lines of the target road from the road image. Specifically, the terminal device may first segment the pixel regions in which the road boundary lines are located in the road image by using gray-level gradients or color thresholds. The terminal device may then fit the segmented pixel regions with a quadratic function, a cubic function, or the like, thereby determining the pixels occupied by the road boundary lines in the road image. Optionally, the terminal device may also extract the road boundary lines from the road image by a machine-learning-based road boundary line extraction method, a road boundary line extraction method based on an adaptive-threshold edge extraction algorithm, or the like, which is not specifically limited in this application. After the terminal device extracts the road boundary lines from the road image, if it detects that the road boundary lines already intersect at a point in the road image, the terminal device may determine the intersection point of the road boundary lines as the second detection point. If the terminal device detects that the road boundary lines of the target road do not intersect at a point in the road image, the terminal device may extend them along their extension direction until the extension lines of the road boundary lines intersect at a point, and then determine the intersection point of the extension lines of the road boundary lines as the second detection point.
In another optional implementation, the terminal device may first detect the lane lines of the target road from the road image. In a specific implementation, the process in which the terminal device detects the lane lines of the target road from the road image may refer to the process, described above, in which the terminal device detects the road boundary lines of the target road, and details are not repeated here. After the terminal device extracts the lane lines of the target road from the road image, if it detects that the lane lines already intersect at a point in the road image, the terminal device may determine the intersection point of the lane lines as the second detection point. If the terminal device detects that the lane lines of the target road do not intersect at a point in the road image, the terminal device may extend them along their extension direction until the extension lines of the lane lines intersect at a point, and then determine the intersection point of the extension lines of the lane lines as the second detection point.
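As an illustration of this extraction and intersection step, the sketch below fits each detected lane line (or road boundary line) with a first-order polynomial and intersects the two fits, so that the intersection of the extension lines is obtained even when the lines do not meet inside the image; the pixel arrays and the function name are hypothetical and only indicate one possible way of carrying out the step.

    import numpy as np

    def second_detection_point(left_pixels, right_pixels):
        """Fit u = a*v + b to each set of lane-line pixels and intersect the fits.

        left_pixels / right_pixels: arrays of (u, v) coordinates produced by the
        segmentation step (for example gray-gradient or color-threshold segmentation).
        Fitting u as a function of v keeps near-vertical lane lines well posed.
        """
        la, lb = np.polyfit(left_pixels[:, 1], left_pixels[:, 0], 1)
        ra, rb = np.polyfit(right_pixels[:, 1], right_pixels[:, 0], 1)
        if abs(la - ra) < 1e-9:  # the two fitted lines are parallel
            return None
        v = (rb - lb) / (la - ra)  # solve la*v + lb == ra*v + rb
        u = la * v + lb
        return (u, v)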
S103: Determine the road condition of the target road according to the first detection point and the second detection point.
In some feasible implementations, after determining the first detection point and the second detection point from the road image, the terminal device may determine the road condition of the target road according to the position information of the first detection point and the second detection point in the road image.
In an optional implementation, the road condition of the target road may include a vertical road condition. The vertical road condition indicates whether the road in the forward direction of the vehicle is uphill, downhill, or flat. The terminal device may determine the vertical road condition of the target road from the position information of the first detection point and the second detection point in the road image. It should be noted that the position information involved in the embodiments of this application refers to the coordinate values of the pixels of the road image in the preset image coordinate system.
In a specific implementation, the terminal device may obtain the position information of the first detection point in the vertical direction of the road image (for ease of distinction, referred to below as the first vertical position information). Likewise, the terminal device may obtain the position information of the second detection point in the vertical direction of the road image (referred to below as the second vertical position information). As shown in FIG. 4b, the terminal device may first determine the coordinate values of the pixel corresponding to the first detection point in the image coordinate system of the road image, assumed here to be (u1, v1), and determine the coordinate value v1 of this pixel on the V axis as the first vertical position information of the first detection point. Similarly, the terminal device may determine the coordinate values of the pixel corresponding to the second detection point in the image coordinate system of the road image, assumed here to be (u2, v2), and determine the coordinate value v2 of this pixel on the V axis as the second vertical position information of the second detection point. The terminal device may then determine, from the first vertical position information v1 and the second vertical position information v2, the relative positional relationship of the first detection point and the second detection point in the vertical direction of the road image, and further determine the vertical road condition of the target road from this relative positional relationship.
Optionally, if the terminal device determines, according to the first vertical position information and the second vertical position information, that the first detection point is above the second detection point, it may determine that the vertical road condition of the target road is downhill. If the terminal device determines, according to the first vertical position information and the second vertical position information, that the first detection point is below the second detection point, it may determine that the vertical road condition of the target road is uphill. If the terminal device determines that the first vertical position information and the second vertical position information are the same, it may determine that the vertical road condition of the target road is flat. Specifically, the terminal device may calculate the difference between the second vertical position information and the first vertical position information, that is, v2-v1. If the terminal device determines that v2-v1 is greater than 0, it may determine that the first detection point is above the second detection point and therefore that the current vertical road condition of the target road is downhill. If the terminal device determines that v2-v1 is equal to 0 (that is, the first vertical position information and the second vertical position information are the same), it may determine that the first detection point and the second detection point have the same position in the vertical direction and therefore that the current vertical road condition of the target road is flat. If the terminal device determines that v2-v1 is less than 0, it may determine that the first detection point is below the second detection point and therefore that the current vertical road condition of the target road is uphill.
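A minimal sketch of this comparison is given below, assuming v1 and v2 are the vertical coordinates described above; recall that the V axis of the image coordinate system grows downward, so v2-v1 greater than 0 means the first detection point lies above the second detection point.

    def vertical_condition(v1, v2):
        """Classify the vertical road condition from the two vertical coordinates.

        v1: vertical coordinate of the first detection point (vanishing point).
        v2: vertical coordinate of the second detection point (lane-line intersection).
        """
        if v2 - v1 > 0:
            return "downhill"   # first detection point above the second
        if v2 - v1 < 0:
            return "uphill"     # first detection point below the second
        return "flat"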
Here, the terminal device judges the vertical road condition of the target road directly from the relative positional relationship between the vanishing point lying in the plane of the target road and in the traveling direction of the vehicle and the intersection point (or extended-line intersection point) of the lane lines or road boundary lines, which avoids vertical road condition misjudgments caused by factors such as the installation position and posture of the terminal device on the vehicle and improves the accuracy of road condition detection.
Optionally, referring also to FIG. 4c, FIG. 4c is a schematic diagram of another road image provided by an embodiment of this application. The terminal device may first determine a first target area in the road image according to the first vertical position information and a preconfigured first preset difference, assumed here to be d1. Specifically, the terminal device may determine, as the first target area, the area composed of the pixels in the road image whose vertical position information falls within the range [v1-d1, v1+d1]. After determining the first target area, the terminal device may determine the vertical road condition of the target road from the relative positional relationship between the first target area and the second detection point. Specifically, after the first target area has been determined, if the terminal device determines that v2 is greater than or equal to v1-d1 and less than or equal to v1+d1, it may determine that the second detection point is within the first target area and that the vertical road condition of the target road is flat. If the terminal device determines that v2 is less than v1-d1, it may determine that the second detection point is above the first target area and that the vertical road condition of the target road is uphill. If the terminal device determines that v2 is greater than v1+d1, it may determine that the second detection point is below the first target area and that the vertical road condition of the target road is downhill. It should be added that the value of the first preset difference d1 may be determined by the terminal device according to the driving environment of the vehicle. When the terminal device determines that the driving environment of the vehicle is relatively complex, for example the current road is bumpy or undulates considerably, a larger d1 may be selected. When the terminal device determines that the driving environment of the vehicle is relatively simple, for example the surface of the current road is relatively even, a smaller d1 may be selected.
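The decision with the first target area can be sketched as follows; the value of d1 is an assumption to be chosen according to the driving environment, as described above.

    def vertical_condition_with_band(v1, v2, d1):
        """Classify the vertical road condition using the first target area [v1-d1, v1+d1]."""
        if v1 - d1 <= v2 <= v1 + d1:  # second detection point inside the first target area
            return "flat"
        if v2 < v1 - d1:              # second detection point above the area (V grows downward)
            return "uphill"
        return "downhill"             # second detection point below the area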
Here, the terminal device judges the vertical road condition of the target road from the relative positional relationship between the first target area determined by the vanishing point and the intersection point (or extended-line intersection point) of the lane lines or road boundary lines, which can further avoid vertical road condition misjudgments caused by factors such as vehicle bumping, further improve the accuracy of road condition detection, and improve the safety and reliability of smart cars.
It should be further noted that the absolute value of the difference v2-v1 between the second vertical position information and the first vertical position information can also be understood as the distance between the second detection point and the first detection point in the vertical direction of the image (that is, the vertical distance). This vertical distance is proportional to the gradient of the target road. In practical applications, the terminal device may also estimate the magnitude of the uphill or downhill gradient from this vertical distance. Specifically, the terminal device may calculate the vertical distance |v2-v1| between the first detection point and the second detection point. Then, when the vertical road condition of the target road is determined to be uphill, the terminal device may obtain a preset uphill gradient coefficient K1 and determine that the uphill gradient of the target road is K1*|v2-v1|. When the vertical road condition of the target road is determined to be downhill, the terminal device may obtain a preset downhill gradient coefficient K2 and determine that the downhill gradient of the target road is K2*|v2-v1|.
In another optional implementation, the road condition of the target road may include a horizontal road condition. The horizontal road condition indicates whether the road in the forward direction of the vehicle curves to the left, curves to the right, or goes straight. The terminal device may also determine the horizontal road condition of the target road from the position information of the first detection point and the second detection point in the road image.
In a specific implementation, the terminal device may obtain the position information of the first detection point in the horizontal direction of the road image (for ease of distinction, referred to below as the first horizontal position information). Likewise, the terminal device may obtain the position information of the second detection point in the horizontal direction of the road image (referred to below as the second horizontal position information). After obtaining the coordinate values (u1, v1) and (u2, v2) of the pixels corresponding to the first detection point and the second detection point in the image coordinate system of the road image, the terminal device may determine u1 in the coordinate values (u1, v1) as the first horizontal position information of the first detection point and u2 in the coordinate values (u2, v2) as the second horizontal position information of the second detection point. The terminal device may then determine, from the first horizontal position information u1 and the second horizontal position information u2, the relative positional relationship of the first detection point and the second detection point in the horizontal direction of the road image, and further determine the horizontal road condition of the target road from this relative positional relationship.
Optionally, referring to FIG. 4d, FIG. 4d is a schematic diagram of another road image provided by an embodiment of this application. As shown in FIG. 4d, after obtaining the first horizontal position information u1 and the second horizontal position information u2, if the terminal device determines, according to the first horizontal position information u1 and the second horizontal position information u2, that the first detection point is to the left of the second detection point, it may determine that the horizontal road condition of the target road is curving to the right. If the terminal device determines, according to the first horizontal position information u1 and the second horizontal position information u2, that the first detection point is to the right of the second detection point, it may determine that the horizontal road condition of the target road is curving to the left. If the terminal device determines that the first horizontal position information u1 and the second horizontal position information u2 are the same, it may determine that the horizontal road condition of the target road is straight. Specifically, the terminal device may calculate the difference between the second horizontal position information and the first horizontal position information, that is, u2-u1. If the terminal device determines that u2-u1 is greater than 0, it may determine that the first detection point is to the left of the second detection point and therefore that the current horizontal road condition of the target road is curving to the right. If the terminal device determines that u2-u1 is equal to 0 (that is, the first horizontal position information and the second horizontal position information are the same), it may determine that the first detection point and the second detection point have the same position in the horizontal direction and therefore that the current horizontal road condition of the target road is straight. If the terminal device determines that u2-u1 is less than 0, it may determine that the first detection point is to the right of the second detection point and therefore that the current horizontal road condition of the target road is curving to the left.
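A corresponding sketch for the horizontal decision is given below, assuming u1 and u2 are the horizontal coordinates described above; the U axis of the image coordinate system grows to the right.

    def horizontal_condition(u1, u2):
        """Classify the horizontal road condition from the two horizontal coordinates."""
        if u2 - u1 > 0:
            return "curves right"  # first detection point left of the second
        if u2 - u1 < 0:
            return "curves left"   # first detection point right of the second
        return "straight"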
Here, the terminal device judges the horizontal road condition of the target road directly from the relative positional relationship, within a single road image, between the vanishing point lying in the plane of the target road and in the traveling direction of the vehicle and the intersection point (or extended-line intersection point) of the lane lines or road boundary lines, which avoids horizontal road condition misjudgments caused by small differences between different road images, improves the accuracy of road condition detection, and improves the safety and reliability of smart cars.
Optionally, referring also to FIG. 4e, FIG. 4e is a schematic diagram of another road image provided by an embodiment of this application. The terminal device may first determine a second target area in the road image according to the first horizontal position information and a preconfigured second preset difference, assumed here to be d2. Specifically, the terminal device may determine, as the second target area, the area composed of the pixels in the road image whose horizontal position information falls within the range [u1-d2, u1+d2]. After determining the second target area, the terminal device may determine the horizontal road condition of the target road from the relative positional relationship between the second target area and the second detection point. Specifically, after the second target area has been determined, if the terminal device determines that u2 is greater than or equal to u1-d2 and less than or equal to u1+d2, it may determine that the second detection point is within the second target area and that the horizontal road condition of the target road is straight. If the terminal device determines that u2 is less than u1-d2, it may determine that the second detection point is to the left of the second target area and that the horizontal road condition of the target road is curving to the left. If the terminal device determines that u2 is greater than u1+d2, it may determine that the second detection point is to the right of the second target area and that the horizontal road condition of the target road is curving to the right. It should be added that the value of the second preset difference may also be determined by the terminal device according to the driving environment of the vehicle. When the terminal device determines that the driving environment of the vehicle is relatively complex, for example the current road is bumpy or undulates considerably, a larger d2 may be selected. When the terminal device determines that the driving environment of the vehicle is relatively simple, for example the surface of the current road is relatively even, a smaller d2 may be selected.
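The decision with the second target area can be sketched in the same way; the value of d2 is again an assumption chosen according to the driving environment.

    def horizontal_condition_with_band(u1, u2, d2):
        """Classify the horizontal road condition using the second target area [u1-d2, u1+d2]."""
        if u1 - d2 <= u2 <= u1 + d2:  # second detection point inside the second target area
            return "straight"
        if u2 < u1 - d2:              # second detection point left of the area
            return "curves left"
        return "curves right"         # second detection point right of the area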
Here, the terminal device judges the horizontal road condition of the target road from the relative positional relationship between the second target area, determined by the vanishing point and the second preset difference, and the intersection point (or extended-line intersection point) of the lane lines or road boundary lines, which can further avoid horizontal road condition misjudgments caused by factors such as vehicle bumping, further improve the accuracy of road condition detection, and improve the safety and reliability of smart cars.
It should be further noted that the absolute value of the difference u2-u1 between the second horizontal position information and the first horizontal position information (that is, |u2-u1|) can also be understood as the distance between the second detection point and the first detection point in the horizontal direction of the image (that is, the horizontal distance). This horizontal distance is proportional to the road curvature of the target road. In other words, in practical applications, the terminal device may also estimate the magnitude of the leftward or rightward curvature of the target road from this horizontal distance. Specifically, the terminal device may calculate the horizontal distance |u2-u1| between the first detection point and the second detection point. Then, when the horizontal road condition of the target road is determined to be curving to the left, the terminal device may obtain a preset first curvature coefficient K3 and determine that the leftward curvature of the target road is K3*|u2-u1|. When the horizontal road condition of the target road is determined to be curving to the right, the terminal device may obtain a preset second curvature coefficient K4 and determine that the rightward curvature of the target road is K4*|u2-u1|.
In yet another feasible implementation, after obtaining the position information (u1, v1) of the first detection point and the position information (u2, v2) of the second detection point, the terminal device may determine the vertical road condition of the target road based on the first vertical position information v1 of the first detection point and the second vertical position information v2 of the second detection point while simultaneously determining the horizontal road condition of the target road based on the first horizontal position information u1 of the first detection point and the second horizontal position information u2 of the second detection point. For the specific judgment processes, refer to the processes described above in which the terminal device judges the vertical road condition and the horizontal road condition of the target road, and details are not repeated here. Determining the horizontal road condition and the vertical road condition of the target road at the same time from the horizontal and vertical positional relationships of the first detection point and the second detection point in a single road image avoids the increase in data processing caused by judging the horizontal road condition and the vertical road condition of the target road with different methods, simplifies the road condition detection method, and improves its applicability.
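Putting the two decisions together with the proportional gradient and curvature estimates of this embodiment, a combined sketch might look as follows; it reuses the helper functions sketched above, and the coefficient and threshold values are placeholders that would be calibrated in practice.

    def road_condition(p1, p2, d1=5, d2=5, K1=0.01, K2=0.01, K3=0.001, K4=0.001):
        """Estimate vertical and horizontal road conditions from the two detection points.

        p1: (u1, v1) of the first detection point; p2: (u2, v2) of the second detection point.
        Returns the two classifications plus gradient and curvature estimates that are
        proportional to the vertical and horizontal pixel distances respectively.
        """
        (u1, v1), (u2, v2) = p1, p2
        vertical = vertical_condition_with_band(v1, v2, d1)
        horizontal = horizontal_condition_with_band(u1, u2, d2)
        gradient = (0.0 if vertical == "flat"
                    else (K1 if vertical == "uphill" else K2) * abs(v2 - v1))
        curvature = (0.0 if horizontal == "straight"
                     else (K3 if horizontal == "curves left" else K4) * abs(u2 - u1))
        return vertical, horizontal, gradient, curvature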
In the embodiments of this application, the terminal device determines the horizontal road condition and/or the vertical road condition of the target road from the relative positional relationship, within a single road image, between the vanishing point lying in the plane of the target road and in the traveling direction of the vehicle and the intersection point (or extended-line intersection point) of the lane lines or road boundary lines. This avoids road condition misjudgments caused by factors such as vehicle bumping or small differences between different road images, improves the accuracy of road condition detection, and improves the safety and reliability of smart cars.
Referring to FIG. 5, FIG. 5 is a schematic structural diagram of an apparatus provided by an embodiment of this application. The apparatus may be the terminal device described in the embodiments, or a device or module inside the terminal device. As shown in FIG. 5, the apparatus includes:
a transceiver unit 501, configured to obtain a road image of a target road; and
a processing unit 502, configured to determine a first detection point and a second detection point in the road image, where the first detection point is a vanishing point in the road image, and the second detection point is the intersection point of the road boundary lines or lane lines of the target road in the road image, or the second detection point is the intersection point of the extension lines of the road boundary lines or lane lines of the target road in the road image;
the processing unit 502 is further configured to determine the road condition of the target road according to the first detection point and the second detection point.
In some possible implementations, the first detection point is the vanishing point in the road image that lies in the plane of the target road and in the traveling direction of the terminal device.
In some possible implementations, the road condition includes a vertical road condition, and the processing unit 502 is configured to:
obtain first vertical position information of the first detection point and second vertical position information of the second detection point, and determine the vertical road condition of the target road according to the first vertical position information and the second vertical position information.
In some possible implementations, the processing unit 502 is configured to:
if it is determined according to the first vertical position information and the second vertical position information that the first detection point is above the second detection point, determine that the vertical road condition of the target road is downhill;
if it is determined according to the first vertical position information and the second vertical position information that the first detection point is below the second detection point, determine that the vertical road condition of the target road is uphill; and
if the first vertical position information and the second vertical position information are the same, determine that the vertical road condition of the target road is flat.
In some possible implementations, the processing unit 502 is configured to:
determine a first target area in the road image according to the first vertical position information and a first preset difference;
if it is determined according to the second vertical position information that the second detection point is below the first target area, determine that the vertical road condition of the target road is downhill;
if it is determined according to the second vertical position information that the second detection point is above the first target area, determine that the vertical road condition of the target road is uphill; and
if it is determined according to the second vertical position information that the second detection point is within the first target area, determine that the vertical road condition of the target road is flat.
In some possible implementations, the road condition includes a horizontal road condition, and the processing unit 502 is configured to:
obtain first horizontal position information of the first detection point and second horizontal position information of the second detection point; and
determine the horizontal road condition of the target road according to the first horizontal position information and the second horizontal position information.
In some possible implementations, the processing unit 502 is configured to:
if it is determined according to the first horizontal position information and the second horizontal position information that the first detection point is to the left of the second detection point, determine that the horizontal road condition of the target road is curving to the right;
if it is determined according to the first horizontal position information and the second horizontal position information that the first detection point is to the right of the second detection point, determine that the horizontal road condition of the target road is curving to the left; and
if the first horizontal position information and the second horizontal position information are the same, determine that the horizontal road condition of the target road is straight.
In some possible implementations, the processing unit 502 is configured to:
determine a second target area in the road image according to the first horizontal position information and a second preset difference;
if it is determined according to the second horizontal position information that the second detection point is to the left of the second target area, determine that the horizontal road condition of the target road is curving to the left;
if it is determined according to the second horizontal position information that the second detection point is to the right of the second target area, determine that the horizontal road condition of the target road is curving to the right; and
if it is determined according to the second horizontal position information that the second detection point is within the second target area, determine that the horizontal road condition of the target road is straight.
In some possible implementations, the vertical distance between the first detection point and the second detection point in the road image is proportional to the gradient of the target road.
In some possible implementations, the horizontal distance between the first detection point and the second detection point in the road image is proportional to the curvature of the target road.
In the embodiments of this application, the apparatus can determine the horizontal road condition and/or the vertical road condition of the target road from the relative positional relationship, within a single road image, between the vanishing point lying in the plane of the target road and in the traveling direction of the vehicle and the intersection point (or extended-line intersection point) of the lane lines or road boundary lines, which avoids road condition misjudgments caused by factors such as vehicle bumping or small differences between different road images and improves the accuracy of road condition detection.
Referring to FIG. 6, FIG. 6 is another schematic structural diagram of an apparatus provided by an embodiment of this application. The apparatus may be the terminal device in the embodiments and may be used to implement the road condition detection method implemented by the terminal device in the foregoing embodiments. The apparatus includes a processor 61, a memory 62, a transceiver 63, and a bus system 64.
The memory 62 includes but is not limited to a RAM, a ROM, an EPROM, or a CD-ROM, and is used to store instructions and data related to the road condition detection method provided by the embodiments of this application. The memory 62 stores the following elements, executable modules or data structures, or subsets thereof, or extended sets thereof:
operation instructions, including various operation instructions used to implement various operations; and
an operating system, including various system programs used to implement various basic services and to process hardware-based tasks.
Only one memory is shown in FIG. 6; of course, multiple memories may also be provided as required.
The transceiver 63 may be a camera or another image capture device. In the embodiments of this application, the transceiver 63 is configured to perform the process of obtaining the road image of the target road in step S101 of the embodiments.
The processor 61 may be a controller, a CPU, a general-purpose processor, a DSP, an ASIC, an FPGA, another programmable logic device, a transistor logic device, a hardware component, or any combination thereof, and can implement or execute the various exemplary logical blocks, modules, and circuits described with reference to the disclosure of the embodiments of this application. The processor 61 may also be a combination that implements computing functions, for example, a combination of one or more microprocessors, or a combination of a DSP and a microprocessor. In the embodiments of this application, the processor 61 may be configured to perform the step of determining the first detection point and the second detection point in step S102 of the embodiments, and may further be configured to perform the step of determining the road condition of the target road according to the first detection point and the second detection point in step S103 of the embodiments.
In a specific application, the components of the apparatus are coupled together through the bus system 64, where the bus system 64 may include, in addition to a data bus, a power bus, a control bus, a status signal bus, and the like. For clarity of description, however, the various buses are all labeled as the bus system 64 in FIG. 6, and FIG. 6 is merely a schematic illustration.
It should be noted that, in practical applications, the processor in the embodiments of this application may be an integrated circuit chip with signal processing capability. During implementation, the steps of the foregoing method embodiments may be completed by an integrated logic circuit of hardware in the processor or by instructions in the form of software. The foregoing processor may be a general-purpose processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA), another programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component, and can implement or execute the methods, steps, and logical block diagrams disclosed in the embodiments of this application.
It can be understood that the memory in the embodiments of this application may be a volatile memory or a non-volatile memory, or may include both volatile and non-volatile memories. The non-volatile memory may be a read-only memory (ROM), a programmable ROM (PROM), an erasable PROM (EPROM), an electrically EPROM (EEPROM), or a flash memory. The volatile memory may be a random access memory (RAM), which is used as an external cache. By way of example but not limitation, many forms of RAM are available, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), and direct rambus RAM (DR RAM). It should be noted that the memories described in the embodiments of this application are intended to include, but are not limited to, these and any other suitable types of memories.
An embodiment of this application further provides a computer-readable medium storing a computer program; when the computer program is executed by a computer, the road condition detection method or steps performed by the terminal device in the foregoing embodiments are implemented.
An embodiment of this application further provides a computer program product; when the computer program product is executed by a computer, the road condition detection method or steps performed by the terminal device in the foregoing embodiments are implemented.
An embodiment of this application further provides an apparatus, which may be the terminal device in the embodiments. The apparatus includes at least one processor and an interface. The processor is configured to perform the road condition detection method or steps performed by the terminal device in the foregoing embodiments. It should be understood that the apparatus may be a chip, and the processor may be implemented by hardware or by software. When implemented by hardware, the processor may be a logic circuit, an integrated circuit, or the like; when implemented by software, the processor may be a general-purpose processor that reads software code stored in a memory, and the memory may be integrated in the processor or may be located outside the processor and exist independently.
It should be understood that the term "and/or" in this embodiment merely describes an association relationship between associated objects and indicates that three relationships may exist; for example, A and/or B may indicate that A exists alone, that both A and B exist, or that B exists alone. In addition, the character "/" in this document generally indicates an "or" relationship between the associated objects before and after it.
A person of ordinary skill in the art may be aware that the units and algorithm steps of the examples described with reference to the embodiments disclosed herein can be implemented by electronic hardware, computer software, or a combination of the two. To clearly illustrate the interchangeability of hardware and software, the composition and steps of the examples have been described above generally in terms of functions. Whether these functions are performed by hardware or software depends on the specific application and the design constraints of the technical solution. A person skilled in the art may use different methods to implement the described functions for each specific application, but such implementation should not be considered to go beyond the scope of this application.
In the embodiments provided in this application, it should be understood that the disclosed system, apparatus, and method may be implemented in other manners. For example, the apparatus described above is merely illustrative; the division into units is merely a logical function division, and there may be other division manners in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not performed. In addition, the mutual couplings, direct couplings, or communication connections shown or discussed may be indirect couplings or communication connections through some interfaces, apparatuses, or units, and may also be electrical, mechanical, or other forms of connection.
In addition, the functional units in the embodiments of this application may be integrated into one processing unit, each unit may exist alone physically, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
In summary, the foregoing is merely a description of preferred embodiments of the technical solutions of this application and is not intended to limit the protection scope of this application. Any modification, equivalent replacement, improvement, or the like made within the spirit and principles of this application shall fall within the protection scope of this application.

Claims (22)

  1. A road condition detection method, applied to a terminal device, wherein the method comprises:
    obtaining a road image of a target road;
    determining a first detection point and a second detection point in the road image, wherein the first detection point is a vanishing point in the road image, and the second detection point is an intersection point of road boundary lines or lane lines of the target road in the road image, or the second detection point is an intersection point of extension lines of the road boundary lines or lane lines of the target road in the road image; and
    determining a road condition of the target road according to the first detection point and the second detection point.
  2. The method according to claim 1, wherein the first detection point is a vanishing point in the road image that lies in a plane of the target road and in a traveling direction of the terminal device.
  3. The method according to claim 1 or 2, wherein the road condition comprises a vertical road condition, and the determining a road condition of the target road according to the first detection point and the second detection point comprises:
    obtaining first vertical position information of the first detection point and second vertical position information of the second detection point; and
    determining the vertical road condition of the target road according to the first vertical position information and the second vertical position information.
  4. The method according to claim 3, wherein the determining the vertical road condition of the target road according to the first vertical position information and the second vertical position information comprises:
    if it is determined according to the first vertical position information and the second vertical position information that the first detection point is above the second detection point, determining that the vertical road condition of the target road is downhill;
    if it is determined according to the first vertical position information and the second vertical position information that the first detection point is below the second detection point, determining that the vertical road condition of the target road is uphill; and
    if the first vertical position information and the second vertical position information are the same, determining that the vertical road condition of the target road is flat.
  5. The method according to claim 3, wherein the determining the vertical road condition of the target road according to the first vertical position information and the second vertical position information comprises:
    determining a first target area in the road image according to the first vertical position information and a first preset difference;
    if it is determined according to the second vertical position information that the second detection point is below the first target area, determining that the vertical road condition of the target road is downhill;
    if it is determined according to the second vertical position information that the second detection point is above the first target area, determining that the vertical road condition of the target road is uphill; and
    if it is determined according to the second vertical position information that the second detection point is within the first target area, determining that the vertical road condition of the target road is flat.
  6. The method according to any one of claims 1 to 5, wherein the road condition comprises a horizontal road condition, and the determining a road condition of the target road according to the first detection point and the second detection point comprises:
    obtaining first horizontal position information of the first detection point and second horizontal position information of the second detection point; and
    determining the horizontal road condition of the target road according to the first horizontal position information and the second horizontal position information.
  7. The method according to claim 6, wherein the determining the horizontal road condition of the target road according to the first horizontal position information and the second horizontal position information comprises:
    if it is determined according to the first horizontal position information and the second horizontal position information that the first detection point is to the left of the second detection point, determining that the horizontal road condition of the target road is curving to the right;
    if it is determined according to the first horizontal position information and the second horizontal position information that the first detection point is to the right of the second detection point, determining that the horizontal road condition of the target road is curving to the left; and
    if the first horizontal position information and the second horizontal position information are the same, determining that the horizontal road condition of the target road is straight.
  8. The method according to claim 6, wherein the determining the horizontal road condition of the target road according to the first horizontal position information and the second horizontal position information comprises:
    determining a second target area in the road image according to the first horizontal position information and a second preset difference;
    if it is determined according to the second horizontal position information that the second detection point is to the left of the second target area, determining that the horizontal road condition of the target road is curving to the left;
    if it is determined according to the second horizontal position information that the second detection point is to the right of the second target area, determining that the horizontal road condition of the target road is curving to the right; and
    if it is determined according to the second horizontal position information that the second detection point is within the second target area, determining that the horizontal road condition of the target road is straight.
  9. The method according to any one of claims 1 to 8, wherein a vertical distance between the first detection point and the second detection point in the road image is proportional to a gradient of the target road.
  10. The method according to any one of claims 1 to 9, wherein a horizontal distance between the first detection point and the second detection point in the road image is proportional to a curvature of the target road.
  11. An apparatus, wherein the apparatus comprises:
    a transceiver unit, configured to obtain a road image of a target road; and
    a processing unit, configured to determine a first detection point and a second detection point in the road image, wherein the first detection point is a vanishing point in the road image, and the second detection point is an intersection point of road boundary lines or lane lines of the target road in the road image, or the second detection point is an intersection point of extension lines of the road boundary lines or lane lines of the target road in the road image;
    the processing unit is further configured to determine a road condition of the target road according to the first detection point and the second detection point.
  12. The apparatus according to claim 11, wherein the first detection point is a vanishing point in the road image that lies in a plane of the target road and in a traveling direction of the terminal device.
  13. The apparatus according to claim 11 or 12, wherein the road condition comprises a vertical road condition, and the processing unit is configured to:
    obtain first vertical position information of the first detection point and second vertical position information of the second detection point; and
    determine the vertical road condition of the target road according to the first vertical position information and the second vertical position information.
  14. The apparatus according to claim 13, wherein the processing unit is configured to:
    if it is determined according to the first vertical position information and the second vertical position information that the first detection point is above the second detection point, determine that the vertical road condition of the target road is downhill;
    if it is determined according to the first vertical position information and the second vertical position information that the first detection point is below the second detection point, determine that the vertical road condition of the target road is uphill; and
    if the first vertical position information and the second vertical position information are the same, determine that the vertical road condition of the target road is flat.
  15. The apparatus according to claim 13, wherein the processing unit is configured to:
    determine a first target area in the road image according to the first vertical position information and a first preset difference;
    if it is determined according to the second vertical position information that the second detection point is below the first target area, determine that the vertical road condition of the target road is downhill;
    if it is determined according to the second vertical position information that the second detection point is above the first target area, determine that the vertical road condition of the target road is uphill; and
    if it is determined according to the second vertical position information that the second detection point is within the first target area, determine that the vertical road condition of the target road is flat.
  16. The apparatus according to any one of claims 11 to 15, wherein the road condition comprises a horizontal road condition, and the processing unit is configured to:
    obtain first horizontal position information of the first detection point and second horizontal position information of the second detection point; and
    determine the horizontal road condition of the target road according to the first horizontal position information and the second horizontal position information.
  17. The apparatus according to claim 16, wherein the processing unit is configured to:
    if it is determined according to the first horizontal position information and the second horizontal position information that the first detection point is to the left of the second detection point, determine that the horizontal road condition of the target road is curving to the right;
    if it is determined according to the first horizontal position information and the second horizontal position information that the first detection point is to the right of the second detection point, determine that the horizontal road condition of the target road is curving to the left; and
    if the first horizontal position information and the second horizontal position information are the same, determine that the horizontal road condition of the target road is straight.
  18. The apparatus according to claim 16, wherein the processing unit is configured to:
    determine a second target area in the road image according to the first horizontal position information and a second preset difference;
    if it is determined according to the second horizontal position information that the second detection point is to the left of the second target area, determine that the horizontal road condition of the target road is curving to the left;
    if it is determined according to the second horizontal position information that the second detection point is to the right of the second target area, determine that the horizontal road condition of the target road is curving to the right; and
    if it is determined according to the second horizontal position information that the second detection point is within the second target area, determine that the horizontal road condition of the target road is straight.
  19. The apparatus according to any one of claims 11 to 18, wherein a vertical distance between the first detection point and the second detection point in the road image is proportional to a gradient of the target road.
  20. The apparatus according to any one of claims 11 to 19, wherein a horizontal distance between the first detection point and the second detection point in the road image is proportional to a curvature of the target road.
  21. A readable storage medium, configured to store instructions that, when executed, cause the method according to any one of claims 1 to 10 to be implemented.
  22. An apparatus, comprising a processor, a memory, and a transceiver, wherein
    the memory is configured to store a computer program; and
    the processor is configured to execute the computer program stored in the memory, so that the apparatus performs the method according to any one of claims 1 to 10.
PCT/CN2020/093543 2020-05-29 2020-05-29 一种路况检测方法和装置 WO2021237754A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202080004833.9A CN112639814B (zh) 2020-05-29 2020-05-29 一种路况检测方法和装置
PCT/CN2020/093543 WO2021237754A1 (zh) 2020-05-29 2020-05-29 一种路况检测方法和装置

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2020/093543 WO2021237754A1 (zh) 2020-05-29 2020-05-29 一种路况检测方法和装置

Publications (1)

Publication Number Publication Date
WO2021237754A1 true WO2021237754A1 (zh) 2021-12-02

Family

ID=75291180

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/093543 WO2021237754A1 (zh) 2020-05-29 2020-05-29 一种路况检测方法和装置

Country Status (2)

Country Link
CN (1) CN112639814B (zh)
WO (1) WO2021237754A1 (zh)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108932472A (zh) * 2018-05-23 2018-12-04 中国汽车技术研究中心有限公司 一种基于车道线检测的自动驾驶行驶区域判别方法
CN109598256A (zh) * 2018-12-25 2019-04-09 斑马网络技术有限公司 进出坡道判断方法、装置、车辆、存储介质及电子设备
CN109886131A (zh) * 2019-01-24 2019-06-14 淮安信息职业技术学院 一种道路弯道识别方法及其装置
US10331957B2 (en) * 2017-07-27 2019-06-25 Here Global B.V. Method, apparatus, and system for vanishing point/horizon estimation using lane models
CN110979162A (zh) * 2019-12-20 2020-04-10 北京海纳川汽车部件股份有限公司 车辆的前大灯控制方法、系统及车辆

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5747482B2 (ja) * 2010-03-26 2015-07-15 日産自動車株式会社 車両用環境認識装置
CN103577790B (zh) * 2012-07-26 2016-06-01 株式会社理光 道路转弯类型检测方法和装置
KR101641490B1 (ko) * 2014-12-10 2016-07-21 엘지전자 주식회사 차량 운전 보조 장치 및 이를 구비한 차량
JP6657925B2 (ja) * 2015-06-04 2020-03-04 ソニー株式会社 車載カメラ・システム並びに画像処理装置
CN109492454B (zh) * 2017-09-11 2021-02-23 比亚迪股份有限公司 对象识别方法及装置
KR102541561B1 (ko) * 2018-02-12 2023-06-08 삼성전자주식회사 차량의 주행을 위한 정보를 제공하는 방법 및 그 장치들
CN108629292B (zh) * 2018-04-16 2022-02-18 海信集团有限公司 弯曲车道线检测方法、装置及终端
CN110044333A (zh) * 2019-05-14 2019-07-23 芜湖汽车前瞻技术研究院有限公司 一种基于单目视觉的坡度检测方法和装置

Also Published As

Publication number Publication date
CN112639814B (zh) 2022-02-11
CN112639814A (zh) 2021-04-09

Similar Documents

Publication Publication Date Title
WO2021259344A1 (zh) 车辆检测方法、装置、车辆和存储介质
EP4089659A1 (en) Map updating method, apparatus and device
CN110271539B (zh) 一种自动垂直泊车控制系统
CN112009462B (zh) 一种前向自动泊车方法及装置
CN112115857B (zh) 智能汽车的车道线识别方法、装置、电子设备及介质
WO2020125138A1 (zh) 一种物体碰撞预测方法及装置
CN111267862B (zh) 一种依赖跟随目标的虚拟车道线构造方法和系统
WO2023065342A1 (zh) 车辆及其定位方法、装置、设备、计算机可读存储介质
Liu et al. Vision-based long-distance lane perception and front vehicle location for full autonomous vehicles on highway roads
CN113989766A (zh) 道路边缘检测方法、应用于车辆的道路边缘检测设备
CN113985405A (zh) 障碍物检测方法、应用于车辆的障碍物检测设备
CN110555801A (zh) 一种航迹推演的校正方法、终端和存储介质
US20210063192A1 (en) Own location estimation device
CN112902911B (zh) 基于单目相机的测距方法、装置、设备及存储介质
CN110784680B (zh) 一种车辆定位方法、装置、车辆和存储介质
WO2021237754A1 (zh) 一种路况检测方法和装置
CN113140002A (zh) 基于双目立体相机的道路状况检测方法、系统和智能终端
CN112241717B (zh) 前车检测方法、前车检测模型的训练获取方法及装置
CN114299466A (zh) 基于单目相机的车辆姿态确定方法、装置和电子设备
CN114333390A (zh) 共享车辆停放事件的检测方法、装置及系统
CN114037977A (zh) 道路灭点的检测方法、装置、设备及存储介质
CN112070839A (zh) 一种对后方车辆横纵向定位测距方法及设备
WO2023010236A1 (zh) 一种显示方法、装置和系统
CN114407916B (zh) 车辆控制及模型训练方法、装置、车辆、设备和存储介质
US20240046491A1 (en) System and Method of Automatic Image View Alignment for Camera-Based Road Condition Detection on a Vehicle

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20937586

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20937586

Country of ref document: EP

Kind code of ref document: A1