CN112639814A - Road condition detection method and device - Google Patents


Info

Publication number
CN112639814A
CN112639814A (application number CN202080004833.9A)
Authority
CN
China
Prior art keywords: road, position information, detection point, target, determining
Legal status: Granted (assumed; not a legal conclusion)
Application number
CN202080004833.9A
Other languages
Chinese (zh)
Other versions
CN112639814B
Inventor
高鲁涛
Current Assignee
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Application filed by Huawei Technologies Co., Ltd.
Publication of CN112639814A
Application granted
Publication of CN112639814B
Current legal status: Active


Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06V — IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 — Scenes; Scene-specific elements
    • G06V20/50 — Context or environment of the image
    • G06V20/56 — Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/588 — Recognition of the road, e.g. of lane markings; Recognition of the vehicle driving pattern in relation to the road

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Traffic Control Systems (AREA)
  • Image Analysis (AREA)

Abstract

An embodiment of the present application provides a road condition detection method and device, suitable for scenarios such as assisted driving and unmanned driving of an intelligent automobile. The method comprises the following steps: acquiring a road image of a target road; determining a first detection point and a second detection point in the road image, where the first detection point is a vanishing point in the road image and the second detection point is the intersection point of the road boundary lines or lane lines of the target road in the road image, or the intersection point of their extension lines; and determining the road condition of the target road according to the first detection point and the second detection point. The method improves the accuracy of road condition detection and thereby the safety and reliability of the intelligent automobile.

Description

Road condition detection method and device
Technical Field
The application relates to the field of intelligent automobiles, in particular to a road condition detection method and a road condition detection device.
Background
With the continuous development of Internet and manufacturing technologies, new products such as intelligent automobiles keep emerging. An intelligent automobile is a vehicle equipped with an advanced driver-assistance system (ADAS), through which it can perceive the road environment to realize assisted driving or even unmanned driving. Road condition detection is one of the core technologies an intelligent automobile relies on. Its main purpose is to obtain road condition information (such as road gradient and road curvature) for the road ahead of the automobile, so as to help the automobile reasonably assist or automatically control its speed and direction. Because the accuracy of road condition detection directly affects the safety and reliability of the intelligent automobile, ensuring detection accuracy has become a major research focus.
In the prior art, an intelligent automobile can detect road conditions through sensors such as millimeter-wave radar, lidar, and cameras; owing to their low cost and mature technology, cameras have gradually become the primary sensor for road condition detection. However, existing camera-based road condition detection imposes strict requirements on the mounting position and attitude of the camera on the automobile: when the optical axis of the camera is not horizontal with respect to the forward direction of the automobile (for example, when the automobile jolts), misjudgment occurs easily, accurate road condition information cannot be acquired, and the safety and reliability of the intelligent automobile are reduced.
Disclosure of Invention
The application provides a road condition detection method and device. By adopting the scheme provided by the application, the road condition detection precision can be improved, and the safety and the reliability of the intelligent automobile are improved.
In a first aspect, an embodiment of the present application provides a road condition detection method, including: acquiring a road image of a target road; determining a first detection point and a second detection point in the road image, where the first detection point is a vanishing point in the road image and the second detection point is the intersection point of the road boundary lines or lane lines of the target road in the road image, or the intersection point of their extension lines; and determining the road condition of the target road according to the first detection point and the second detection point.
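The comparison logic of this first aspect can be sketched in a few lines of Python. The sketch follows the image coordinate convention used later in the description (v grows downward, u grows rightward); the function and label names are illustrative, not from the patent:

```python
from typing import Tuple

Point = Tuple[float, float]  # (u, v) pixel coordinates; v increases downward

def classify_road(vanishing_point: Point, lane_intersection: Point) -> Tuple[str, str]:
    """Judge vertical and horizontal road condition from the two detection points.

    vanishing_point   -- first detection point (vanishing point of the travel direction)
    lane_intersection -- second detection point (intersection of lane/boundary lines
                         or of their extension lines)
    """
    u1, v1 = vanishing_point
    u2, v2 = lane_intersection
    # Vertical: v grows downward, so v2 > v1 means the vanishing point sits
    # above the lane intersection -> the road falls away -> downhill.
    if v2 > v1:
        vertical = "downhill"
    elif v2 < v1:
        vertical = "uphill"
    else:
        vertical = "flat"
    # Horizontal: lane intersection to the right of the vanishing point
    # (u2 > u1) means the road bends to the right, and vice versa.
    if u2 > u1:
        horizontal = "curve right"
    elif u2 < u1:
        horizontal = "curve left"
    else:
        horizontal = "straight"
    return vertical, horizontal
```

For example, `classify_road((320, 240), (320, 260))` yields `("downhill", "straight")`.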
In this embodiment of the present application, the terminal device judges the horizontal road condition and/or the vertical road condition of the target road from the relative position, within a single road image, of the vanishing point (on the plane of the target road, in the vehicle's forward direction) and the intersection point of the lane lines or road boundary lines (or of their extension lines). This avoids road condition misjudgments caused by factors such as vehicle jolting or small differences between different road images, improves the accuracy of road condition detection, and improves the safety and reliability of the intelligent automobile.
With reference to the first aspect, in one possible implementation, the first detection point is a vanishing point in the road image on the plane of the target road and in the traveling direction of the terminal device.
With reference to the first aspect, in one possible implementation, the road condition includes a vertical road condition. The terminal device may obtain first vertical position information of the first detection point and second vertical position information of the second detection point, and determine the vertical road condition of the target road according to the first vertical position information and the second vertical position information.
With reference to the first aspect, in a possible implementation, if the terminal device determines from the first vertical position information and the second vertical position information that the first detection point is above the second detection point, it determines that the vertical road condition of the target road is downhill; if the first detection point is below the second detection point, it determines that the vertical road condition is uphill; and if the first vertical position information is the same as the second vertical position information, it determines that the road is flat. Because the terminal device judges the vertical road condition directly from the relative position of the vanishing point and the intersection point of the lane lines or road boundary lines (or of their extension lines), misjudgments caused by factors such as the mounting position and attitude of the terminal device on the vehicle are avoided, the accuracy of road condition detection is improved, and the safety and reliability of the intelligent automobile are improved.
With reference to the first aspect, in a possible implementation, the terminal device may determine a first target area in the road image according to the first vertical position information and a first preset difference value. If the second vertical position information places the second detection point below the first target area, the vertical road condition of the target road is downhill; if above the first target area, uphill; and if within the first target area, flat. Because the target area tolerates small displacements of the detection points, misjudgments of the vertical road condition caused by factors such as vehicle jolting are avoided, further improving the accuracy of road condition detection and the safety and reliability of the intelligent automobile.
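A minimal sketch of this tolerance-band variant. The band half-width `delta` stands in for the "first preset difference value"; the concrete value would come from camera calibration and is an assumption here:

```python
def vertical_condition(v1: float, v2: float, delta: float) -> str:
    """Classify the vertical road condition using a first target area
    around the vanishing point's vertical position v1.

    The first target area is the band [v1 - delta, v1 + delta]; v2 is the
    vertical position of the lane-line intersection. v increases downward,
    so "below the area" means v2 > v1 + delta.
    """
    if v2 > v1 + delta:
        return "downhill"   # second point below the first target area
    if v2 < v1 - delta:
        return "uphill"     # second point above the first target area
    return "flat"           # second point inside the first target area
```

Setting `delta = 0` recovers the plain sign comparison of the previous implementation.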
With reference to the first aspect, in one possible implementation, the road condition includes a horizontal road condition. The terminal device may obtain first horizontal position information of the first detection point and second horizontal position information of the second detection point, and determine the horizontal road condition of the target road according to the first horizontal position information and the second horizontal position information.
With reference to the first aspect, in a possible implementation, if the terminal device determines from the first horizontal position information and the second horizontal position information that the first detection point is on the left side of the second detection point, it determines that the horizontal road condition of the target road is a curve to the right; if the first detection point is on the right side of the second detection point, a curve to the left; and if the first horizontal position information is the same as the second horizontal position information, straight. Because the terminal device judges the horizontal road condition directly from the relative position, within a single road image, of the vanishing point and the intersection point of the lane lines or road boundary lines (or of their extension lines), misjudgments of the horizontal road condition caused by small differences between different road images are avoided, the accuracy of road condition detection is improved, and the safety and reliability of the intelligent automobile are improved.
With reference to the first aspect, in a possible implementation, the terminal device may determine a second target area in the road image according to the first horizontal position information and a second preset difference value. If the second horizontal position information places the second detection point on the left side of the second target area, the horizontal road condition of the target road is a curve to the left; if on the right side, a curve to the right; and if within the second target area, straight. Because the terminal device judges the horizontal road condition from the relative position of the second target area (determined by the vanishing point and the second preset difference value) and the intersection point of the lane lines or road boundary lines (or of their extension lines), misjudgments of the horizontal road condition caused by factors such as vehicle jolting are further avoided, the accuracy of road condition detection is further improved, and the safety and reliability of the intelligent automobile are improved.
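The horizontal case mirrors the vertical one; in this sketch `delta` stands in for the "second preset difference value", again an assumed calibration constant rather than a value given by the patent:

```python
def horizontal_condition(u1: float, u2: float, delta: float) -> str:
    """Classify the horizontal road condition using a second target area
    around the vanishing point's horizontal position u1.

    u increases to the right; u2 is the horizontal position of the
    lane-line intersection.
    """
    if u2 < u1 - delta:
        return "curve left"   # second point left of the second target area
    if u2 > u1 + delta:
        return "curve right"  # second point right of the second target area
    return "straight"         # second point inside the second target area
```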
With reference to the first aspect, in one possible implementation, a vertical distance between the first detection point and the second detection point in the road image is proportional to a gradient of the target road.
With reference to the first aspect, in one possible implementation, a horizontal distance between the first detection point and the second detection point in the road image is proportional to a curvature of the target road.
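These two proportionality statements suggest a simple linear read-out once the camera is calibrated. In the sketch below, `k_slope` and `k_curv` are hypothetical per-camera calibration constants, not values given by the patent:

```python
def estimate_gradient(v1: float, v2: float, k_slope: float) -> float:
    """Signed gradient estimate, proportional to the vertical pixel distance.

    Under the v-downward convention, v1 > v2 means the vanishing point sits
    below the lane intersection (uphill), so the result is positive for
    uphill and negative for downhill.
    """
    return k_slope * (v1 - v2)

def estimate_curvature(u1: float, u2: float, k_curv: float) -> float:
    """Signed curvature estimate, proportional to the horizontal pixel
    distance; positive for a curve to the right (u2 > u1)."""
    return k_curv * (u2 - u1)
```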
In a second aspect, an apparatus is provided in an embodiment of the present application. The device can be the terminal device itself, and also can be an element or module such as a chip in the terminal device. The apparatus includes a unit configured to execute the road condition detection method provided in any one of the possible implementation manners of the first aspect, so that the beneficial effects (or advantages) of the road condition detection method provided in the first aspect can also be achieved.
In a third aspect, an embodiment of the present application provides an apparatus, which may be a terminal device. The apparatus includes at least one memory, a processor, and a transceiver. The processor is configured to call a code stored in the memory to execute the road condition detection method provided in any one of the possible implementations of the first aspect, so that the beneficial effects (or advantages) of the road condition detection method provided in the first aspect can also be achieved.
In a fourth aspect, an apparatus is provided in an embodiment of the present application. The apparatus may be a chip, the apparatus comprising: at least one processor and interface circuitry. The interface circuit is used for receiving code instructions and transmitting the code instructions to the processor. The processor is configured to execute the code instructions to implement the road condition detection method provided in any one of the possible implementation manners of the first aspect.
In a fifth aspect, an embodiment of the present application provides a computer-readable storage medium storing instructions that, when run on a computer, cause the computer to execute the road condition detection method provided in any one of the possible implementations of the first aspect, so that the beneficial effects (or advantages) of the road condition detection method provided in the first aspect can also be achieved.
In a sixth aspect, an embodiment of the present application provides a computer program product including instructions, where when the computer program product runs on a computer, the computer is enabled to execute the road condition detection method provided in any one of the possible implementation manners in the first aspect, and beneficial effects of the road condition detection method provided in the first aspect can also be achieved.
In the method provided by the embodiments of the present application, the terminal device judges the horizontal road condition and/or the vertical road condition of the target road from the relative position, within a single road image, of the vanishing point (on the plane of the target road, in the vehicle's forward direction) and the intersection point of the lane lines or road boundary lines (or of their extension lines). This avoids road condition misjudgments caused by factors such as vehicle jolting or small differences between different road images, improves the accuracy of road condition detection, and improves the safety and reliability of the intelligent automobile.
Drawings
Fig. 1 is a schematic view of a road condition detection scene provided in an embodiment of the present application;
Fig. 2 is a schematic view of a vehicle coordinate system provided in an embodiment of the present application;
Fig. 3 is a schematic flow chart of a road condition detection method provided in an embodiment of the present application;
Fig. 4a is a schematic diagram of a road image provided in an embodiment of the present application;
Fig. 4b is a schematic diagram of another road image provided in an embodiment of the present application;
Fig. 4c is a schematic diagram of another road image provided in an embodiment of the present application;
Fig. 4d is a schematic diagram of another road image provided in an embodiment of the present application;
Fig. 4e is a schematic diagram of another road image provided in an embodiment of the present application;
Fig. 5 is a schematic structural diagram of an apparatus provided in an embodiment of the present application;
Fig. 6 is a schematic diagram of another structure of an apparatus provided in an embodiment of the present application.
Detailed Description
The technical solution in the present application will be described below with reference to the accompanying drawings.
Referring to Fig. 1, Fig. 1 is a schematic view of a road condition detection scene provided in an embodiment of the present application. In this scene, while the intelligent automobile drives on the target road, the vehicle-mounted terminal device collects an image of the target road in the forward direction of the automobile, then processes and analyzes the image, and judges the condition of the road ahead from the result of the processing and analysis. In the embodiments of the present application, the intelligent automobile may be of various models. The terminal device may be a camera system mounted on the intelligent automobile, or another system or device carrying such a camera system. The road condition detection method provided in the embodiments of the present application is applicable to the terminal device. It should be further noted that the method may be used not only by vehicle-mounted terminal devices, but also by terminal devices carried by equipment such as unmanned aerial vehicles, traffic signals, and speed-measuring devices; no specific limitation is made here. In the embodiments of the present application, a vehicle-mounted terminal device is taken as an example.
In practical applications, when the terminal device is mounted on the intelligent automobile, it must be ensured that the terminal device can collect a complete road image and that the optical axis of the terminal device remains horizontal with respect to the forward direction of the intelligent automobile. While driving, the intelligent automobile can then collect an image of the road ahead through the terminal device, and determine the road gradient of the road ahead from the positional relationship between the center point of the image and the intersection point of the extended lane lines, or determine the road curvature from multiple collected images. However, if the vehicle jolts during driving, or poor installation prevents the optical axis of the terminal device from staying horizontal with respect to the forward direction of the intelligent automobile, the road gradient cannot be determined in the above manner; and when the differences between the collected images are small, the road curvature cannot be determined accurately from the images. Therefore, the technical problem mainly solved by the embodiments of the present application is: how to improve the accuracy of the road condition detection technique, so as to improve the safety and reliability of the intelligent automobile.
In the following, for convenience of understanding and description of the embodiments of the present application, some concepts related to the embodiments of the present application will be briefly described.
1. Image coordinate system
In practical applications, for a given image, an image coordinate system can be established by taking any pixel in the image as the origin, taking any two mutually perpendicular directions as the horizontal and vertical axes, and taking the length of one pixel as the basic unit. The position of each pixel in the image can then be described by its horizontal and vertical distances from the origin. In the embodiments of the present application, the image coordinate system used to describe the positions of pixels in the road image takes the top-left pixel of the road image as the origin, the horizontal rightward direction as the positive horizontal direction, and the vertical downward direction as the positive vertical direction.
2. Vehicle coordinate system
Referring to Fig. 2, Fig. 2 is a schematic view of a vehicle coordinate system provided in an embodiment of the present application. In the embodiments of the present application, a vehicle coordinate system is established for the vehicle with the center point of the rear axle of the vehicle as the origin of the coordinate system, the direction of the rear axle as the Y axis, the direction perpendicular to the rear axle as the Z axis, and the line connecting the center point of the rear axle and the center point of the front axle as the X axis. Here, the X axis and the Y axis form the horizontal plane on which the vehicle lies. At any moment, the travel direction of the vehicle lies on this horizontal plane and is parallel to the X axis of the vehicle coordinate system.
Referring to fig. 3, fig. 3 is a schematic flow chart of a road condition detection method according to an embodiment of the present application. As can be seen from fig. 3, the method comprises the following steps:
s101, acquiring a road image of the target road.
In some feasible implementation manners, after the terminal device determines that the road condition detection is required, the terminal device may turn on the camera and acquire the road image of the target road in the vehicle forward direction through the camera.
In a specific implementation, the terminal device may periodically or non-periodically capture at least one image through a camera in a vehicle traveling direction, then perform image recognition and processing on the at least one image, select one image including two road boundary lines or two lane lines of a target road from the at least one image, and determine the image as a road image of the target road.
S102, determining a first detection point and a second detection point in the road image.
In some possible implementations, after acquiring the road image, the terminal device may determine the first detection point and the second detection point from the road image. Here, the first detection point is a vanishing point of the road image. The second detection point is the intersection point of the two road boundary lines or the two lane lines of the target road in the road image. Alternatively, the second detection point may be the intersection point of the extension lines of the two road boundary lines or the two lane lines of the target road in the road image. In practical applications, any one image may contain multiple vanishing points; the first detection point may specifically be the vanishing point in the road image that lies on the plane of the target road and in the travel direction of the terminal device (i.e., the travel direction of the vehicle).
In a specific implementation, referring to Fig. 4a, Fig. 4a is a schematic diagram of a road image provided in an embodiment of the present application. Here, the horizontal axis of the image coordinate system of the road image is U and the vertical axis is V. After acquiring the road image, the terminal device can determine a group of parallel lines, parallel to the travel direction of the vehicle, on the horizontal plane of the vehicle, and project this group of parallel lines onto the road image. The terminal device may then determine the intersection point of the projections of two such parallel lines on the road image (i.e., the vanishing point) as the first detection point. Optionally, the terminal device may also determine the first detection point from the road image by methods such as vanishing point detection based on spatial transformation, vanishing point detection based on statistical estimation, or vanishing point detection based on machine learning, which the present application does not limit.
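Once the projections of two such parallel lines are available as point pairs in image coordinates, the vanishing point is just their intersection. A self-contained sketch of that final step (the point pairs themselves would come from the projection or detection methods described above):

```python
from typing import Tuple

Point = Tuple[float, float]

def line_intersection(p1: Point, p2: Point, p3: Point, p4: Point) -> Point:
    """Intersection of the line through p1, p2 with the line through p3, p4.

    Raises ValueError for (near-)parallel projections, which have no
    finite vanishing point in the image.
    """
    (x1, y1), (x2, y2), (x3, y3), (x4, y4) = p1, p2, p3, p4
    denom = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
    if abs(denom) < 1e-9:
        raise ValueError("projected lines are parallel in the image")
    a = x1 * y2 - y1 * x2  # cross product of each line's defining points
    b = x3 * y4 - y3 * x4
    u = (a * (x3 - x4) - (x1 - x2) * b) / denom
    v = (a * (y3 - y4) - (y1 - y2) * b) / denom
    return (u, v)
```

For example, projections through (0, 0)-(1, 1) and (0, 2)-(2, 0) meet at (1.0, 1.0).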
Further, in an alternative implementation, referring to Fig. 4b, Fig. 4b is a schematic diagram of another road image provided in an embodiment of the present application. The terminal device may also detect the road boundary lines of the target road from the road image. Specifically, the terminal device may first segment the pixel regions where the road boundaries lie in the road image by using a gray-scale gradient or a color threshold. Then, the terminal device can fit the segmented pixel regions with a quadratic or cubic function, and so on, to determine the pixels occupied by the road boundaries in the road image. Optionally, the terminal device may instead extract the road boundary lines from the road image with a machine-learning-based road boundary line extraction method, a road boundary line extraction method based on an adaptive-threshold edge extraction algorithm, and the like, which the present application does not specifically limit. After the terminal device extracts the road boundary lines from the road image, if it detects that the road boundary lines intersect at a point in the road image, the terminal device may determine that intersection point as the second detection point. If the terminal device detects that the road boundary lines of the target road do not intersect at a point in the road image, it may extend the road boundary lines along their extension directions until the extension lines intersect at a point, and then determine the intersection point of the extension lines of the road boundary lines as the second detection point.
In another alternative implementation, the terminal device may first detect the lane lines of the target road from the road image. In a specific implementation, the process of detecting the lane lines of the target road from the road image may refer to the process of detecting the road boundary lines of the target road described above, and is not repeated here. After the terminal device extracts the lane lines of the target road from the road image, if it detects that the lane lines intersect at a point in the road image, it may determine that intersection point as the second detection point. If the lane lines of the target road do not intersect at a point in the road image, the terminal device may extend the lane lines along their extension directions until the extension lines intersect at a point, and then determine the intersection point of the extension lines of the lane lines as the second detection point.
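If, near the vehicle, each lane line is modeled as a straight segment u = a·v + b (a convenient parameterization for the near-vertical lines in a forward-facing road image; the description above also allows quadratic or cubic fits), extending the two lines to their intersection reduces to two least-squares fits plus one division. A pure-Python sketch under that simplifying linear assumption:

```python
from typing import List, Tuple

Point = Tuple[float, float]  # (u, v) pixel coordinates

def fit_line(points: List[Point]) -> Tuple[float, float]:
    """Least-squares fit of u = a*v + b to lane-line pixels.

    Treating u as a function of v avoids infinite slopes for the
    near-vertical lane lines typical of a road image.
    """
    n = len(points)
    mean_v = sum(v for _, v in points) / n
    mean_u = sum(u for u, _ in points) / n
    num = sum((v - mean_v) * (u - mean_u) for u, v in points)
    den = sum((v - mean_v) ** 2 for _, v in points)
    a = num / den
    return a, mean_u - a * mean_v

def lane_intersection(left: List[Point], right: List[Point]) -> Point:
    """Second detection point: intersection of the two (extended) lane lines."""
    a1, b1 = fit_line(left)
    a2, b2 = fit_line(right)
    v = (b2 - b1) / (a1 - a2)  # extend both lines until they meet
    return (a1 * v + b1, v)
```

For example, lane pixels at [(100, 500), (150, 450), (200, 400)] and [(500, 500), (450, 450), (400, 400)] converge at (300.0, 300.0).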
S103, determining the road condition of the target road according to the first detection point and the second detection point.
In some possible implementations, after determining the first detection point and the second detection point from the road image, the terminal device may determine the road condition of the target road according to the position information of the first detection point and the second detection point in the road image.
In an alternative implementation, the road condition of the target road may include a vertical road condition, that is, whether the road in the forward direction of the vehicle is uphill, downhill, or flat. The terminal device can determine the vertical road condition of the target road from the position information of the first detection point and the second detection point in the road image. It should be noted that the position information referred to in the embodiments of the present application means the coordinate values of the corresponding pixels in the road image under the preset image coordinate system.
In a specific implementation, the terminal device may acquire the position information of the first detection point in the vertical direction of the road image (for ease of distinction, hereinafter called the first vertical position information). Meanwhile, the terminal device may also acquire the position information of the second detection point in the vertical direction of the road image (hereinafter called the second vertical position information). As shown in Fig. 4b, the terminal device may determine the coordinate values of the pixel corresponding to the first detection point in the image coordinate system of the road image, assumed to be (u1, v1), and determine the coordinate value v1 on the V axis as the first vertical position information of the first detection point. Similarly, the terminal device may determine the coordinate values of the pixel corresponding to the second detection point, assumed to be (u2, v2), and determine the coordinate value v2 on the V axis as the second vertical position information of the second detection point. The terminal device may then determine the relative position of the first detection point and the second detection point in the vertical direction of the road image from the first vertical position information v1 and the second vertical position information v2, and further determine the vertical road condition of the target road from this relative position.
Optionally, if the terminal device determines that the first detection point is above the second detection point according to the first vertical position information and the second vertical position information, it may determine that the vertical road condition of the target road is a downhill. If the terminal device determines that the first detection point is below the second detection point according to the first vertical position information and the second vertical position information, it may determine that the vertical road condition of the target road is an uphill. If the terminal device determines that the first vertical position information is the same as the second vertical position information, it may determine that the vertical road condition of the target road is flat. Specifically, the terminal device may calculate the difference between the second vertical position information and the first vertical position information, i.e., v2-v1. If the terminal device determines that v2-v1 is greater than 0, it may be determined that the first detection point is above the second detection point, and it may be determined that the vertical road condition of the current target road is a downhill. If the terminal device determines that v2-v1 is equal to 0 (i.e., the first vertical position information and the second vertical position information are the same), it may be determined that the first detection point and the second detection point have the same position in the vertical direction, and it may be determined that the vertical road condition of the current target road is flat. If the terminal device determines that v2-v1 is less than 0, it may be determined that the first detection point is below the second detection point, and it may be determined that the vertical road condition of the current target road is an uphill.
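The comparison described above can be illustrated with a minimal sketch (the function name is hypothetical and not part of the claimed method; recall that in image coordinates the v axis points downward, so a larger v means lower in the image):

```python
def vertical_road_condition(v1: float, v2: float) -> str:
    """Classify the vertical road condition from the vertical image
    coordinates of the first detection point (vanishing point, v1) and
    the second detection point (lane-line intersection, v2)."""
    diff = v2 - v1
    if diff > 0:
        # Second detection point is below the vanishing point.
        return "downhill"
    if diff < 0:
        # Second detection point is above the vanishing point.
        return "uphill"
    return "flat"
```

For example, with the vanishing point at v1 = 240 and the lane-line intersection at v2 = 260, the intersection lies below the vanishing point and the road is classified as downhill.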
Here, the terminal device determines the vertical road condition of the target road directly from the relative position relationship, in the vehicle traveling direction, between the vanishing point on the plane of the target road and the intersection point (or the intersection point of the extension lines) of the lane lines or road boundary lines. In this way, misjudgment of the vertical road condition caused by factors such as the installation position and posture of the terminal device on the vehicle can be avoided, and the accuracy of road condition detection can be improved.
Optionally, referring to fig. 4c together, fig. 4c is a schematic diagram of another road image provided in this embodiment, where the terminal device may determine a first target area in the road image according to the first vertical position information and a pre-configured first preset difference value. Here, it is assumed that the first preset difference is d1. Specifically, the terminal device may determine, as the above-described first target area, the area made up of pixel points whose position information in the vertical direction in the road image is within the range [v1-d1, v1+d1]. After the first target area is determined, the terminal device may determine the vertical road condition of the target road according to the relative position relationship between the first target area and the second detection point. Specifically, if the terminal device determines that v2 is greater than or equal to v1-d1 and less than or equal to v1+d1, it may be determined that the second detection point is within the first target area, and it may be determined that the vertical road condition of the target road is flat. If the terminal device determines that v2 is less than v1-d1, it may be determined that the second detection point is above the first target area, and it may be determined that the vertical road condition of the target road is an uphill. If the terminal device determines that v2 is greater than v1+d1, it may be determined that the second detection point is below the first target area, and it may be determined that the vertical road condition of the target road is a downhill. It should be added that the magnitude of the first preset difference d1 can be determined by the terminal device according to the driving environment of the vehicle.
When the terminal device determines that the driving environment of the vehicle is complex, for example when the current road is bumpy or undulates significantly, a larger value of d1 can be selected. When the terminal device determines that the driving environment of the vehicle is simple, for example when the road surface of the current road is flat, a smaller value of d1 can be selected.
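The tolerance-band variant can be sketched as follows (function name hypothetical; the value of d1 is an assumption chosen per driving environment as described above):

```python
def vertical_condition_with_tolerance(v1: float, v2: float, d1: float) -> str:
    """Classify the vertical road condition, treating any v2 within the
    first target area [v1-d1, v1+d1] as flat to absorb jitter from
    vehicle bumps."""
    if v1 - d1 <= v2 <= v1 + d1:
        return "flat"       # second detection point inside the band
    if v2 < v1 - d1:
        return "uphill"     # second detection point above the band
    return "downhill"       # second detection point below the band
```

A bumpy road would use a wider band (larger d1) so that small oscillations of the intersection point around the vanishing point are still reported as flat.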
Here, the terminal device determines the vertical road condition of the target road according to the relative position relationship between the first target area determined by the vanishing point and the intersection point (or the intersection point of the extension lines) of the lane lines or road boundary lines. Misjudgment of the vertical road condition caused by factors such as vehicle bumps can thereby be further avoided, the accuracy of road condition detection can be further improved, and the safety and reliability of the intelligent vehicle can be improved.
It should be further noted that the absolute value of the difference v2-v1 between the second vertical position information and the first vertical position information may also be understood as the distance between the second detection point and the first detection point in the vertical direction of the image (i.e., the vertical distance). This vertical distance is proportional to the gradient of the target road. In practical applications, the terminal device may also estimate the magnitude of the uphill gradient or the downhill gradient according to the vertical distance. Specifically, the terminal device may calculate the vertical distance |v2-v1| between the first detection point and the second detection point. Then, in the case where it is determined that the vertical road condition of the target road is an uphill, the terminal device may obtain a preset uphill gradient coefficient K1 and determine that the uphill gradient of the target road is K1 × |v2-v1|. In the case where it is determined that the vertical road condition of the target road is a downhill, the terminal device may obtain a preset downhill gradient coefficient K2 and determine that the downhill gradient of the target road is K2 × |v2-v1|.
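The gradient estimate can be sketched as follows (function name and coefficient values are hypothetical; K1 and K2 would in practice be calibration constants of the camera setup):

```python
def estimate_grade(v1: float, v2: float, k_up: float, k_down: float):
    """Estimate slope magnitude from the vertical pixel distance |v2-v1|,
    scaled by the preset uphill (K1) or downhill (K2) gradient coefficient."""
    dist = abs(v2 - v1)
    if v2 < v1:
        return "uphill", k_up * dist      # K1 * |v2-v1|
    if v2 > v1:
        return "downhill", k_down * dist  # K2 * |v2-v1|
    return "flat", 0.0
```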
In another alternative implementation, the target road condition may include a horizontal road condition. The horizontal road condition indicates whether the road in the forward direction of the vehicle curves to the left, curves to the right, or runs straight. The terminal device can also determine the horizontal road condition of the target road according to the position information of the first detection point and the second detection point in the road image.
In a specific implementation, the terminal device may acquire position information of the above-described first detection point in the horizontal direction on the road image (for convenience of distinction, referred to hereinafter as the first horizontal position information). Meanwhile, the terminal device may also acquire position information of the above-described second detection point in the horizontal direction on the road image (referred to hereinafter as the second horizontal position information). After acquiring the coordinate values (u1, v1) and (u2, v2) of the pixel points corresponding to the first detection point and the second detection point in the image coordinate system corresponding to the road image, the terminal device may determine u1 as the first horizontal position information of the first detection point and u2 as the second horizontal position information of the second detection point. Then, the terminal device may determine the relative position relationship between the first detection point and the second detection point in the horizontal direction in the road image according to the first horizontal position information u1 and the second horizontal position information u2, and further determine the horizontal road condition of the target road according to that relative position relationship.
Optionally, please refer to fig. 4d, where fig. 4d is a schematic diagram of another road image provided in the embodiment of the present application. As shown in fig. 4d, after acquiring the first horizontal position information u1 and the second horizontal position information u2, if it is determined that the first detection point is on the left side of the second detection point according to the first horizontal position information u1 and the second horizontal position information u2, the terminal device may determine that the horizontal road condition of the target road is curved to the right. If the terminal device determines that the first detection point is on the right side of the second detection point according to the first horizontal position information u1 and the second horizontal position information u2, it may be determined that the horizontal road condition of the target road is curved to the left. If the terminal device determines that the first horizontal position information u1 is the same as the second horizontal position information u2, it may be determined that the horizontal road condition of the target road is straight. Specifically, the terminal device may calculate the difference between the second horizontal position information and the first horizontal position information, i.e., u2-u1. Then, if the terminal device determines that u2-u1 is greater than 0, it may be determined that the first detection point is to the left of the second detection point, and it may be determined that the horizontal road condition of the current target road is curved to the right.
If the terminal device determines that u2-u1 is equal to 0 (i.e., the first horizontal position information and the second horizontal position information are the same), it may be determined that the first detection point and the second detection point have the same position in the horizontal direction, and it may be determined that the horizontal road condition of the current target road is straight. If the terminal device determines that u2-u1 is less than 0, it may be determined that the first detection point is to the right of the second detection point, and it may be determined that the horizontal road condition of the current target road is curved to the left.
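The horizontal comparison mirrors the vertical one and can be sketched as follows (function name hypothetical; the u axis points to the right in the image coordinate system):

```python
def horizontal_road_condition(u1: float, u2: float) -> str:
    """Classify the horizontal road condition from the horizontal image
    coordinates of the first detection point (vanishing point, u1) and
    the second detection point (lane-line intersection, u2)."""
    diff = u2 - u1
    if diff > 0:
        # Vanishing point left of the intersection: road curves right.
        return "curves right"
    if diff < 0:
        # Vanishing point right of the intersection: road curves left.
        return "curves left"
    return "straight"
```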
Here, the terminal device judges the horizontal road condition of the target road directly from the relative position relationship, in the vehicle advancing direction, between the vanishing point on the plane of the target road and the intersection point (or the intersection point of the extension lines) of the lane lines or road boundary lines in a given road image. Misjudgment of the horizontal road condition caused by small differences between different road images can thereby be avoided, the road condition detection precision can be improved, and the safety and reliability of the intelligent vehicle can be improved.
Optionally, referring to fig. 4e together, fig. 4e is a schematic diagram of another road image provided in an embodiment of the present application, where the terminal device may determine a second target area in the road image according to the first horizontal position information and a pre-configured second preset difference value. Here, it is assumed that the second preset difference is d2. Specifically, the terminal device may determine, as the above-described second target area, the area made up of pixel points whose position information in the horizontal direction in the road image is within the range [u1-d2, u1+d2]. After the second target area is determined, the terminal device may determine the horizontal road condition of the target road according to the relative position relationship between the second target area and the second detection point. Specifically, if the terminal device determines that u2 is greater than or equal to u1-d2 and less than or equal to u1+d2, it may determine that the second detection point is within the second target area, and then it may determine that the horizontal road condition of the target road is straight. If the terminal device determines that u2 is less than u1-d2, it may be determined that the second detection point is on the left side of the second target area, and it may be determined that the horizontal road condition of the target road is curved to the left. If the terminal device determines that u2 is greater than u1+d2, it may be determined that the second detection point is on the right side of the second target area, and it may be determined that the horizontal road condition of the target road is curved to the right. It should be noted that the magnitude of the second preset difference may also be determined by the terminal device according to the driving environment of the vehicle.
When the terminal device determines that the driving environment of the vehicle is complex, for example when the current road is bumpy or undulates significantly, a larger value of d2 can be selected. When the terminal device determines that the driving environment of the vehicle is simple, for example when the road surface of the current road is flat, a smaller value of d2 can be selected.
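As with the vertical case, the second target area can be sketched as a tolerance band (function name hypothetical; d2 is an assumption chosen per driving environment):

```python
def horizontal_condition_with_tolerance(u1: float, u2: float, d2: float) -> str:
    """Classify the horizontal road condition, treating any u2 within the
    second target area [u1-d2, u1+d2] as straight to absorb jitter."""
    if u1 - d2 <= u2 <= u1 + d2:
        return "straight"       # second detection point inside the band
    if u2 < u1 - d2:
        return "curves left"    # second detection point left of the band
    return "curves right"       # second detection point right of the band
```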
Here, the terminal device determines the horizontal road condition of the target road according to the relative position relationship between the second target area, determined by the vanishing point and the second preset difference value, and the intersection point (or the intersection point of the extension lines) of the lane lines or road boundary lines. Misjudgment of the horizontal road condition caused by factors such as vehicle bumps can thereby be further avoided, the accuracy of road condition detection can be further improved, and the safety and reliability of the intelligent vehicle can be improved.
It should be further noted that the absolute value of the difference u2-u1 between the second horizontal position information and the first horizontal position information (i.e., |u2-u1|) may also be understood as the distance between the second detection point and the first detection point in the horizontal direction of the image (i.e., the horizontal distance). This horizontal distance is proportional to the curvature of the target road. That is, in practical applications, the terminal device may also estimate the amount by which the target road curves to the left or right according to this horizontal distance. Specifically, the terminal device may calculate the horizontal distance |u2-u1| between the first detection point and the second detection point. Then, in the case where it is determined that the horizontal road condition of the target road is curved leftward, the terminal device may obtain the preset first curvature coefficient K3 and determine that the leftward curvature of the target road is K3 × |u2-u1|. In the case where it is determined that the horizontal road condition of the target road is curved rightward, the terminal device may obtain the preset second curvature coefficient K4 and determine that the rightward curvature of the target road is K4 × |u2-u1|.
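The curvature estimate mirrors the gradient estimate (function name and coefficient values are hypothetical; K3 and K4 would in practice be calibration constants):

```python
def estimate_curvature(u1: float, u2: float, k_left: float, k_right: float):
    """Estimate curvature magnitude from the horizontal pixel distance
    |u2-u1|, scaled by the preset first (K3) or second (K4) curvature
    coefficient."""
    dist = abs(u2 - u1)
    if u2 < u1:
        return "left", k_left * dist    # K3 * |u2-u1|
    if u2 > u1:
        return "right", k_right * dist  # K4 * |u2-u1|
    return "straight", 0.0
```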
In yet another possible implementation, after acquiring the position information (u1, v1) of the first detection point and the position information (u2, v2) of the second detection point, the terminal device may determine the vertical road condition of the target road based on the first vertical position information v1 and the second vertical position information v2 while determining the horizontal road condition of the target road based on the first horizontal position information u1 and the second horizontal position information u2. For the specific determination process, reference may be made to the processes of determining the vertical road condition and the horizontal road condition of the target road described above, which are not described herein again. By determining the horizontal road condition and the vertical road condition of the target road simultaneously from the horizontal and vertical position relationships of the first detection point and the second detection point in a single road image, the increase in data processing amount caused by judging the horizontal and vertical road conditions through different methods can be avoided, the complexity of the road condition detection method can be reduced, and its applicability can be improved.
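A self-contained sketch of this combined determination, under the same hypothetical naming and with optional tolerance bands d1 and d2:

```python
def road_condition(p1, p2, d1: float = 0.0, d2: float = 0.0):
    """Determine both road conditions from a single image.
    p1 = (u1, v1) is the first detection point (vanishing point);
    p2 = (u2, v2) is the second detection point (lane-line intersection);
    d1 and d2 are the optional first and second preset differences."""
    (u1, v1), (u2, v2) = p1, p2
    # Vertical road condition from the v coordinates.
    if v1 - d1 <= v2 <= v1 + d1:
        vertical = "flat"
    elif v2 < v1 - d1:
        vertical = "uphill"
    else:
        vertical = "downhill"
    # Horizontal road condition from the u coordinates.
    if u1 - d2 <= u2 <= u1 + d2:
        horizontal = "straight"
    elif u2 < u1 - d2:
        horizontal = "curves left"
    else:
        horizontal = "curves right"
    return horizontal, vertical
```

A single pair of detection points thus yields both judgments in one pass, which is the data-processing saving noted above.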
In the embodiment of the application, the terminal device can judge the horizontal road condition and/or the vertical road condition of the target road through the relative position relationship, in the vehicle advancing direction, between the vanishing point on the plane of the target road and the intersection point (or the intersection point of the extension lines) of the lane lines or road boundary lines in a given road image. Misjudgment of the road condition caused by factors such as vehicle bumps or small differences between different road images can thereby be avoided, the road condition detection precision can be improved, and the safety and reliability of the intelligent vehicle can be improved.
Referring to fig. 5, fig. 5 is a schematic structural diagram of an apparatus according to an embodiment of the present disclosure. The apparatus may be the terminal device itself described in the embodiments, or may be a device or a module inside the terminal device. As shown in fig. 5, the apparatus includes:
a transceiving unit 501, configured to acquire a road image of a target road;
a processing unit 502, configured to determine a first detection point and a second detection point in the road image, where the first detection point is a vanishing point in the road image, and the second detection point is an intersection point of a road boundary line or a lane line of the target road in the road image, or the second detection point is an intersection point of extension lines of the road boundary line or the lane line of the target road in the road image;
the processing unit 502 is further configured to determine the road condition of the target road according to the first detection point and the second detection point.
In some possible implementations, the first detection point is a vanishing point in the road image on a plane where the target road is located and in a traveling direction of the terminal device.
In some possible implementations, the road conditions include vertical road conditions, and the processing unit 502 is configured to:
and acquiring first vertical position information of the first detection point and second vertical position information of the second detection point. And determining the vertical road condition of the target road according to the first vertical position information and the second vertical position information.
In some possible implementations, the processing unit 502 is configured to:
if the first detection point is determined to be above the second detection point according to the first vertical position information and the second vertical position information, determining that the vertical road condition of the target road is a downhill;
if the first detection point is determined to be below the second detection point according to the first vertical position information and the second vertical position information, determining that the vertical road condition of the target road is an uphill slope;
and if the first vertical position information is the same as the second vertical position information, determining that the vertical road condition of the target road is flat.
In some possible implementations, the processing unit 502 is configured to:
determining a first target area in the road image according to the first vertical position information and a first preset difference value;
if the second detection point is determined to be below the first target area according to the second vertical position information, determining that the vertical road condition of the target road is a downhill;
if the second detection point is determined to be above the first target area according to the second vertical position information, determining that the vertical road condition of the target road is an uphill slope;
and if the second detection point is determined to be in the first target area according to the second vertical position information, determining that the vertical road condition of the target road is flat.
In some possible implementations, the road conditions include horizontal road conditions, and the processing unit 502 is configured to:
acquiring first horizontal position information of the first detection point and second horizontal position information of the second detection point;
and determining the horizontal road condition of the target road according to the first horizontal position information and the second horizontal position information.
In some possible implementations, the processing unit 502 is configured to:
if the first detection point is determined to be on the left side of the second detection point according to the first horizontal position information and the second horizontal position information, determining that the horizontal road condition of the target road is bent rightwards;
if the first detection point is determined to be on the right side of the second detection point according to the first horizontal position information and the second horizontal position information, determining that the horizontal road condition of the target road is left-curved;
and if the first horizontal position information is the same as the second horizontal position information, determining that the horizontal road condition of the target road is straight.
In some possible implementations, the processing unit 502 is configured to:
determining a second target area in the road image according to the first horizontal position information and a second preset difference value;
if the second detection point is determined to be on the left side of the second target area according to the second horizontal position information, determining that the horizontal road condition of the target road is left-curved;
if the second detection point is determined to be on the right side of the second target area according to the second horizontal position information, determining that the horizontal road condition of the target road is rightward bent;
and if the second detection point is determined to be in the second target area according to the second horizontal position information, determining that the horizontal road condition of the target road is straight.
In some possible implementations, a vertical distance of the first detection point and the second detection point in the road image is proportional to a gradient of the target road.
In some possible implementations, a horizontal distance of the first detection point and the second detection point in the road image is proportional to a curvature of the target road.
In the embodiment of the application, the apparatus can judge the horizontal road condition and/or the vertical road condition of the target road through the relative position relationship, in the vehicle advancing direction, between the vanishing point on the plane of the target road and the intersection point (or the intersection point of the extension lines) of the lane lines or road boundary lines in a given road image. Misjudgment of the road condition caused by factors such as vehicle bumps or small differences between different road images can thereby be avoided, and the accuracy of road condition detection can be improved.
Referring to fig. 6, fig. 6 is a schematic view of another structure of an apparatus according to an embodiment of the present disclosure. The apparatus may be a terminal device in the embodiment, and may be configured to implement the road condition detection method implemented by the terminal device in the embodiment. The device includes: a processor 61, a memory 62, a transceiver 63 and a bus system 64.
The memory 62 includes, but is not limited to, a RAM, a ROM, an EPROM, or a CD-ROM, and the memory 62 is used for storing instructions and data related to the road condition detection method provided in the embodiment of the present application. The memory 62 stores the following elements, executable modules or data structures, or a subset thereof, or an expanded set thereof:
and (3) operating instructions: including various operational instructions for performing various operations.
Operating the system: including various system programs for implementing various basic services and for handling hardware-based tasks.
Only one memory is shown in fig. 6, but of course, the memory may be provided in plural numbers as necessary.
The transceiver 63 may be a camera or other image capture device. In the embodiment of the present application, the transceiver 63 is configured to execute the process of acquiring the road image of the target road in step S101 in the embodiment.
The processor 61 may be a controller, CPU, general purpose processor, DSP, ASIC, FPGA or other programmable logic device, transistor logic device, hardware component or any combination thereof. Which may implement or perform the various illustrative logical blocks, modules, and circuits described in connection with the disclosure of the embodiments of the application. The processor 61 may also be a combination of computing functions, e.g., comprising one or more microprocessors, a combination of a DSP and a microprocessor, or the like. In the embodiment of the present application, the processor 61 may be configured to execute the step of determining the first detection point and the second detection point in step S102 in the embodiment. The processor 61 may also be configured to execute the step of determining the road condition of the target road according to the first detection point and the second detection point in step S103 in the embodiment.
In a particular application, the various components of the device are coupled together by a bus system 64, wherein the bus system 64 may include a power bus, a control bus, a status signal bus, etc., in addition to a data bus. For clarity of illustration, however, the various buses are labeled as bus system 64 in fig. 6. For ease of illustration, it is only schematically drawn in fig. 6.
It should be noted that, in practical applications, the processor in the embodiment of the present application may be an integrated circuit chip having signal processing capability. In implementation, the steps of the above method embodiments may be performed by integrated logic circuits of hardware in a processor or by instructions in the form of software. The processor may be a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components. The various methods, steps, and logic blocks disclosed in the embodiments of the present application may be implemented or performed.
It will be appreciated that the memory in the embodiments of the present application can be either volatile memory or nonvolatile memory, or can include both volatile and nonvolatile memory. The nonvolatile memory may be a read-only memory (ROM), a programmable ROM (PROM), an erasable PROM (EPROM), an electrically erasable PROM (EEPROM), or a flash memory. The volatile memory may be a random access memory (RAM), which acts as an external cache. By way of example, but not limitation, many forms of RAM are available, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), and direct rambus RAM (DR RAM). It should be noted that the memories described in the embodiments of the present application are intended to comprise, without being limited to, these and any other suitable types of memories.
The embodiment of the present application further provides a computer-readable medium, on which a computer program is stored, and when the computer program is executed by a computer, the method or the step for detecting a road condition executed by the terminal device in the above embodiment is implemented.
The embodiment of the present application further provides a computer program product, and when executed by a computer, the computer program product implements the road condition detection method or the steps executed by the terminal device in the above embodiments.
The embodiment of the application also provides a device, and the device can be the terminal equipment in the embodiment. The apparatus includes at least one processor and an interface. The processor is configured to execute the road condition detection method or step executed by the terminal device in the foregoing embodiment. It should be understood that the terminal device may be a chip, the processor may be implemented by hardware or software, and when implemented by hardware, the processor may be a logic circuit, an integrated circuit, or the like. When implemented in software, the processor may be a general-purpose processor implemented by reading software code stored in a memory, which may be integrated in the processor, located external to the processor, or stand-alone.
It should be understood that the term "and/or" in this embodiment is only one kind of association relationship describing the associated object, and means that three relationships may exist, for example, a and/or B may mean: a exists alone, A and B exist simultaneously, and B exists alone. In addition, the character "/" herein generally indicates that the former and latter related objects are in an "or" relationship.
Those of ordinary skill in the art will appreciate that the units and algorithm steps of the examples described in connection with the embodiments disclosed herein may be implemented in electronic hardware, computer software, or a combination of the two. The components and steps of the examples have been described above in functional generality to clearly illustrate the interchangeability of hardware and software. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other ways. For example, the above-described apparatus is merely illustrative, and for example, a division of a unit is merely a division of one logic function, and an actual implementation may have another division, for example, a plurality of units or components may be combined or may be integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may also be an electric, mechanical or other form of connection.
In addition, the functional units in the embodiments of the present application may be integrated into one processing unit, each unit may exist alone physically, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
In short, the foregoing is only a preferred embodiment of the present application and is not intended to limit its protection scope. Any modification, equivalent replacement, improvement, or the like made within the spirit and principles of the present application shall fall within the protection scope of the present application.

Claims (22)

1. A road condition detection method, applied to a terminal device, the method comprising:
acquiring a road image of a target road;
determining a first detection point and a second detection point in the road image, wherein the first detection point is a vanishing point in the road image, and the second detection point is an intersection point of road boundary lines or lane lines of the target road in the road image, or an intersection point of extension lines of the road boundary lines or lane lines of the target road in the road image; and
and determining the road condition of the target road according to the first detection point and the second detection point.
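For illustration only (this sketch is not part of the claims): if each detected road boundary line or lane line is modelled in image coordinates as y = a·x + b, the second detection point of claim 1 can be located as the intersection of two such lines or of their extensions. A minimal Python sketch under that assumption:

```python
def line_intersection(line1, line2):
    """Intersection of two lane/boundary lines, each given as (a, b) for y = a*x + b.

    Returns the (x, y) image coordinates where the lines (or their extensions)
    meet, i.e. a candidate second detection point, or None if the lines are
    parallel in the image and have no finite intersection.
    """
    a1, b1 = line1
    a2, b2 = line2
    if a1 == a2:
        return None  # parallel: no finite intersection point
    x = (b2 - b1) / (a1 - a2)
    return (x, a1 * x + b1)
```

Because the same formula applies to the lines themselves and to their extensions, a single routine covers both alternatives named in the claim.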
2. The method according to claim 1, wherein the first detection point is a vanishing point in the road image on the plane of the target road and in the direction of travel of the terminal device.
3. The method according to claim 1 or 2, wherein the road condition comprises a vertical road condition, and the determining the road condition of the target road according to the first detection point and the second detection point comprises:
acquiring first vertical position information of the first detection point and second vertical position information of the second detection point;
and determining the vertical road condition of the target road according to the first vertical position information and the second vertical position information.
4. The method of claim 3, wherein the determining the vertical road condition of the target road according to the first vertical position information and the second vertical position information comprises:
if it is determined, according to the first vertical position information and the second vertical position information, that the first detection point is above the second detection point, determining that the vertical road condition of the target road is downhill;
if it is determined, according to the first vertical position information and the second vertical position information, that the first detection point is below the second detection point, determining that the vertical road condition of the target road is uphill; and
if the first vertical position information is the same as the second vertical position information, determining that the vertical road condition of the target road is flat.
5. The method of claim 3, wherein the determining the vertical road condition of the target road according to the first vertical position information and the second vertical position information comprises:
determining a first target area in the road image according to the first vertical position information and a first preset difference value;
if it is determined according to the second vertical position information that the second detection point is below the first target area, determining that the vertical road condition of the target road is downhill;
if it is determined according to the second vertical position information that the second detection point is above the first target area, determining that the vertical road condition of the target road is uphill; and
if it is determined according to the second vertical position information that the second detection point is within the first target area, determining that the vertical road condition of the target road is flat.
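The decision logic of claims 4 and 5 can be sketched as follows. This is an illustrative Python sketch, not the patented implementation; it assumes image y-coordinates grow downward (so "above" means a smaller y) and uses `delta` for the "first preset difference" that defines the first target area:

```python
def vertical_road_condition(y_vanish, y_intersect, delta=0.0):
    """Classify the vertical road condition of the target road.

    y_vanish    -- vertical image coordinate of the first detection point (vanishing point)
    y_intersect -- vertical image coordinate of the second detection point (lane-line intersection)
    delta       -- half-height of the first target area around the vanishing point
    """
    if y_intersect > y_vanish + delta:   # second point below the target area
        return "downhill"                # i.e. the vanishing point lies above it
    if y_intersect < y_vanish - delta:   # second point above the target area
        return "uphill"
    return "flat"                        # second point within the target area
```

With `delta = 0` this reduces to the exact comparison of claim 4; a positive `delta` adds the tolerance band of claim 5.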
6. The method according to any one of claims 1-5, wherein the road condition comprises a horizontal road condition, and the determining the road condition of the target road according to the first detection point and the second detection point comprises:
acquiring first horizontal position information of the first detection point and second horizontal position information of the second detection point;
and determining the horizontal road condition of the target road according to the first horizontal position information and the second horizontal position information.
7. The method of claim 6, wherein the determining the horizontal road condition of the target road according to the first horizontal position information and the second horizontal position information comprises:
if it is determined, according to the first horizontal position information and the second horizontal position information, that the first detection point is on the left side of the second detection point, determining that the horizontal road condition of the target road is curved to the right;
if it is determined, according to the first horizontal position information and the second horizontal position information, that the first detection point is on the right side of the second detection point, determining that the horizontal road condition of the target road is curved to the left; and
if the first horizontal position information is the same as the second horizontal position information, determining that the horizontal road condition of the target road is straight.
8. The method of claim 6, wherein the determining the horizontal road condition of the target road according to the first horizontal position information and the second horizontal position information comprises:
determining a second target area in the road image according to the first horizontal position information and a second preset difference value;
if it is determined according to the second horizontal position information that the second detection point is on the left side of the second target area, determining that the horizontal road condition of the target road is curved to the left;
if it is determined according to the second horizontal position information that the second detection point is on the right side of the second target area, determining that the horizontal road condition of the target road is curved to the right; and
if it is determined according to the second horizontal position information that the second detection point is within the second target area, determining that the horizontal road condition of the target road is straight.
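The horizontal counterpart of claims 7 and 8 follows the same pattern. Again an illustrative sketch only, with `delta` standing in for the "second preset difference" (image x-coordinates assumed to grow to the right):

```python
def horizontal_road_condition(x_vanish, x_intersect, delta=0.0):
    """Classify the horizontal road condition of the target road.

    x_vanish    -- horizontal image coordinate of the first detection point (vanishing point)
    x_intersect -- horizontal image coordinate of the second detection point
    delta       -- half-width of the second target area around the vanishing point
    """
    if x_intersect < x_vanish - delta:   # second point left of the target area
        return "curved left"             # i.e. the vanishing point lies to its right
    if x_intersect > x_vanish + delta:   # second point right of the target area
        return "curved right"
    return "straight"                    # second point within the target area
```

With `delta = 0` this is the direct comparison of claim 7; a positive `delta` gives the target-area variant of claim 8.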
9. The method according to any one of claims 1-8, wherein the vertical distance between the first detection point and the second detection point in the road image is proportional to the gradient of the target road.
10. The method according to any one of claims 1-9, wherein the horizontal distance between the first detection point and the second detection point in the road image is proportional to the curvature of the target road.
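Claims 9 and 10 state proportionality relations only; the proportionality constants are not given in the patent. A hypothetical sketch, where `k_v` and `k_h` are assumed calibration constants that would in practice come from camera calibration:

```python
def estimate_gradient_and_curvature(p_vanish, p_intersect, k_v=0.1, k_h=0.05):
    """Map pixel distances between the two detection points to road geometry.

    Per claims 9-10, the vertical pixel distance is proportional to the road
    gradient and the horizontal pixel distance to the road curvature. k_v and
    k_h are hypothetical calibration constants, not specified in the patent.
    """
    x_v, y_v = p_vanish
    x_i, y_i = p_intersect
    gradient = k_v * abs(y_v - y_i)    # proportional to the vertical distance
    curvature = k_h * abs(x_v - x_i)   # proportional to the horizontal distance
    return gradient, curvature
```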
11. An apparatus, wherein the apparatus comprises:
a transceiver unit, configured to acquire a road image of a target road; and
a processing unit, configured to determine a first detection point and a second detection point in the road image, wherein the first detection point is a vanishing point in the road image, and the second detection point is an intersection point of road boundary lines or lane lines of the target road in the road image, or an intersection point of extension lines of the road boundary lines or lane lines of the target road in the road image;
the processing unit is further configured to determine the road condition of the target road according to the first detection point and the second detection point.
12. The apparatus according to claim 11, wherein the first detection point is a vanishing point in the road image on the plane of the target road and in the direction of travel of the terminal device.
13. The apparatus according to claim 11 or 12, wherein the road condition comprises a vertical road condition, and the processing unit is configured to:
acquiring first vertical position information of the first detection point and second vertical position information of the second detection point;
and determining the vertical road condition of the target road according to the first vertical position information and the second vertical position information.
14. The apparatus of claim 13, wherein the processing unit is configured to:
if it is determined, according to the first vertical position information and the second vertical position information, that the first detection point is above the second detection point, determine that the vertical road condition of the target road is downhill;
if it is determined, according to the first vertical position information and the second vertical position information, that the first detection point is below the second detection point, determine that the vertical road condition of the target road is uphill; and
if the first vertical position information is the same as the second vertical position information, determine that the vertical road condition of the target road is flat.
15. The apparatus of claim 13, wherein the processing unit is configured to:
determining a first target area in the road image according to the first vertical position information and a first preset difference value;
if it is determined according to the second vertical position information that the second detection point is below the first target area, determine that the vertical road condition of the target road is downhill;
if it is determined according to the second vertical position information that the second detection point is above the first target area, determine that the vertical road condition of the target road is uphill; and
if it is determined according to the second vertical position information that the second detection point is within the first target area, determine that the vertical road condition of the target road is flat.
16. The apparatus according to any one of claims 11-15, wherein the road condition comprises a horizontal road condition, and the processing unit is configured to:
acquiring first horizontal position information of the first detection point and second horizontal position information of the second detection point;
and determining the horizontal road condition of the target road according to the first horizontal position information and the second horizontal position information.
17. The apparatus of claim 16, wherein the processing unit is configured to:
if it is determined, according to the first horizontal position information and the second horizontal position information, that the first detection point is on the left side of the second detection point, determine that the horizontal road condition of the target road is curved to the right;
if it is determined, according to the first horizontal position information and the second horizontal position information, that the first detection point is on the right side of the second detection point, determine that the horizontal road condition of the target road is curved to the left; and
if the first horizontal position information is the same as the second horizontal position information, determine that the horizontal road condition of the target road is straight.
18. The apparatus of claim 16, wherein the processing unit is configured to:
determining a second target area in the road image according to the first horizontal position information and a second preset difference value;
if it is determined according to the second horizontal position information that the second detection point is on the left side of the second target area, determine that the horizontal road condition of the target road is curved to the left;
if it is determined according to the second horizontal position information that the second detection point is on the right side of the second target area, determine that the horizontal road condition of the target road is curved to the right; and
if it is determined according to the second horizontal position information that the second detection point is within the second target area, determine that the horizontal road condition of the target road is straight.
19. The apparatus of any one of claims 11-18, wherein the vertical distance between the first detection point and the second detection point in the road image is proportional to the gradient of the target road.
20. The apparatus of any one of claims 11-19, wherein the horizontal distance between the first detection point and the second detection point in the road image is proportional to the curvature of the target road.
21. A readable storage medium storing instructions that, when executed, cause the method of any of claims 1-10 to be implemented.
22. An apparatus, comprising: a processor, a memory, and a transceiver;
the memory is configured to store a computer program; and
the processor is configured to execute the computer program stored in the memory, so that the apparatus performs the method of any one of claims 1-10.
CN202080004833.9A 2020-05-29 2020-05-29 Road condition detection method and device Active CN112639814B (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2020/093543 WO2021237754A1 (en) 2020-05-29 2020-05-29 Road condition detection method and apparatus

Publications (2)

Publication Number Publication Date
CN112639814A true CN112639814A (en) 2021-04-09
CN112639814B CN112639814B (en) 2022-02-11

Family

ID=75291180

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202080004833.9A Active CN112639814B (en) 2020-05-29 2020-05-29 Road condition detection method and device

Country Status (2)

Country Link
CN (1) CN112639814B (en)
WO (1) WO2021237754A1 (en)

Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102201056A (en) * 2010-03-26 2011-09-28 日产自动车株式会社 Vehicle environment recognizing apparatus and method
CN103577790A (en) * 2012-07-26 2014-02-12 株式会社理光 Road turning type detecting method and device
CN105691299A (en) * 2014-12-10 2016-06-22 Lg电子株式会社 Vehicle driving assistance apparatus and vehicle
US20180139368A1 (en) * 2015-06-04 2018-05-17 Sony Corporation In-vehicle camera system and image processing apparatus
CN108629292A (en) * 2018-04-16 2018-10-09 海信集团有限公司 It is bent method for detecting lane lines, device and terminal
CN108932472A (en) * 2018-05-23 2018-12-04 中国汽车技术研究中心有限公司 A kind of automatic Pilot running region method of discrimination based on lane detection
US20190034740A1 (en) * 2017-07-27 2019-01-31 Here Global B.V. Method, apparatus, and system for vanishing point/horizon estimation using lane models
CN109492454A (en) * 2017-09-11 2019-03-19 比亚迪股份有限公司 Object identifying method and device
CN109598256A (en) * 2018-12-25 2019-04-09 斑马网络技术有限公司 Pass in and out ramp judgment method, device, vehicle, storage medium and electronic equipment
CN109886131A (en) * 2019-01-24 2019-06-14 淮安信息职业技术学院 A kind of road curve recognition methods and its device
CN110044333A (en) * 2019-05-14 2019-07-23 芜湖汽车前瞻技术研究院有限公司 A kind of slope detection method and device based on monocular vision
CN110155053A (en) * 2018-02-12 2019-08-23 三星电子株式会社 Method and apparatus for driving the information of vehicle is provided
CN110979162A (en) * 2019-12-20 2020-04-10 北京海纳川汽车部件股份有限公司 Vehicle headlamp control method and system and vehicle

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Gong Jinliang et al.: "Comprehensive understanding of lane line information and a departure warning algorithm", Laser Journal (《激光杂志》) *

Also Published As

Publication number Publication date
WO2021237754A1 (en) 2021-12-02
CN112639814B (en) 2022-02-11

Similar Documents

Publication Publication Date Title
CN113160594B (en) Change point detection device and map information distribution system
US10795370B2 (en) Travel assist apparatus
GB2558777A (en) Vehicle collision avoidance
US10733889B2 (en) Method and device for parking assistance
EP4089659A1 (en) Map updating method, apparatus and device
US11738747B2 (en) Server device and vehicle
CN110969059A (en) Lane line identification method and system
CN113297881B (en) Target detection method and related device
CN110341621B (en) Obstacle detection method and device
CN111857135A (en) Obstacle avoidance method and apparatus for vehicle, electronic device, and computer storage medium
CN111142528A (en) Vehicle dangerous scene sensing method, device and system
US10970870B2 (en) Object detection apparatus
CN110784680B (en) Vehicle positioning method and device, vehicle and storage medium
CN112902911B (en) Ranging method, device, equipment and storage medium based on monocular camera
US11295429B2 (en) Imaging abnormality diagnosis device
CN112639814B (en) Road condition detection method and device
CN111028544A (en) Pedestrian early warning system with V2V technology and vehicle-mounted multi-sensor integration
CN110539748A (en) congestion car following system and terminal based on look around
CN113920490A (en) Vehicle obstacle detection method, device and equipment
CN115257790A (en) Sensor abnormality estimation device
CN112400094B (en) Object detecting device
JP2018124641A (en) Driving support device
US10867397B2 (en) Vehicle with a driving assistance system with a low power mode
CN113753073B (en) Vehicle speed control method, device, equipment and storage medium
US20230303066A1 (en) Driver assistance system and computer-readable recording medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant