CN111667706A - Lane-level road surface condition recognition method, road condition prompting method and device - Google Patents
Info
- Publication number
- CN111667706A (application number CN202010507874.1A)
- Authority
- CN
- China
- Prior art keywords
- lane
- road
- vehicle
- target
- condition
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G08—SIGNALLING
- G08G—TRAFFIC CONTROL SYSTEMS
- G08G1/00—Traffic control systems for road vehicles
- G08G1/09—Arrangements for giving variable traffic instructions
- G08G1/0962—Arrangements for giving variable traffic instructions having an indicator mounted inside the vehicle, e.g. giving voice messages
- G08G1/0967—Systems involving transmission of highway information, e.g. weather, speed limits
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/56—Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/56—Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
- G06V20/58—Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/56—Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
- G06V20/588—Recognition of the road, e.g. of lane markings; Recognition of the vehicle driving pattern in relation to the road
-
- G—PHYSICS
- G08—SIGNALLING
- G08G—TRAFFIC CONTROL SYSTEMS
- G08G1/00—Traffic control systems for road vehicles
- G08G1/01—Detecting movement of traffic to be counted or controlled
- G08G1/0104—Measuring and analyzing of parameters relative to traffic conditions
- G08G1/0108—Measuring and analyzing of parameters relative to traffic conditions based on the source of data
- G08G1/0112—Measuring and analyzing of parameters relative to traffic conditions based on the source of data from the vehicle, e.g. floating car data [FCD]
Abstract
The embodiment of the application discloses a lane-level road surface condition recognition method, a road condition prompting method and a device, and relates to the technical field of intelligent navigation and deep learning. The specific implementation scheme is as follows: carrying out road surface condition recognition on road images around a collection vehicle to obtain a target road surface condition and an image position of the target road surface condition; carrying out lane line recognition on the road images to obtain the image position of at least one lane in the same direction as the collection vehicle; matching the image position of the target road surface condition with the image position of the at least one lane to obtain a target lane covered by the target road surface condition; and uploading the target road surface condition and the target lane to a server, so that the server can issue road condition prompting messages to user vehicles according to the target road surface condition and the target lane. The embodiment of the application thus prompts road surface conditions at the lane level.
Description
Technical Field
The application relates to the computer technology, in particular to the technical field of intelligent navigation.
Background
Nowadays, electronic navigation products are increasingly popular, and driving users rely on navigation products more and more. If an obstacle or a similar hindrance to normal driving exists on the driving route, it may force the user to brake suddenly, which can cause congestion or even a traffic accident.
Current navigation software can only prompt information such as the route and traffic lights, and does not prompt road surface conditions, which reduces driving safety.
Disclosure of Invention
The embodiment of the application provides a lane-level road surface condition recognition method, a road condition prompting method and a device, and further provides a vehicle-mounted terminal, a server and a readable storage medium.
In a first aspect, an embodiment of the present application provides a lane-level road surface condition identification method, which is applicable to a vehicle-mounted terminal, and includes:
carrying out road surface condition recognition on road images around a collection vehicle to obtain a target road surface condition and an image position of the target road surface condition;
carrying out lane line recognition on the road image to obtain the image position of at least one lane in the same direction as the collection vehicle;
matching the image position of the target road surface condition with the image position of the at least one lane to obtain a target lane covered by the target road surface condition;
and uploading the target road surface condition and the target lane to a server, so that the server can issue road condition prompt messages to user vehicles according to the target road surface condition and the target lane.
In a second aspect, an embodiment of the present application further provides a road condition prompting method, which is applicable to a server, and includes:
acquiring a user vehicle to be prompted;
issuing a road condition prompting message to the user vehicle according to the target road surface condition and the target lane;
the target road surface condition is obtained by carrying out road surface condition recognition on road images around the collection vehicle;
the target lane is obtained by: carrying out road surface condition recognition on the road images around the collection vehicle to obtain the image position of the target road surface condition; carrying out lane line recognition on the road image to obtain the image position of at least one lane in the same direction as the collection vehicle; and matching the image position of the target road surface condition with the image position of the at least one lane to obtain the target lane covered by the target road surface condition.
In a third aspect, an embodiment of the present application provides a lane-level road surface condition recognition apparatus, which is applicable to a vehicle-mounted terminal, and includes:
the road condition identification module is used for identifying the road condition of the road image around the collection vehicle to obtain the target road condition and the image position of the target road condition;
the lane line recognition module is used for recognizing lane lines of the road image to obtain the image position of at least one lane in the same direction as the acquisition vehicle;
the matching module is used for matching the image position of the target road surface condition with the image position of the at least one lane to obtain a target lane covered by the target road surface condition;
and the uploading module is used for uploading the target road surface condition and the target lane to a server so that the server can issue road condition prompting messages to user vehicles according to the target road surface condition and the target lane.
In a fourth aspect, an embodiment of the present application further provides a road condition prompting device, which is applicable to a server, and includes:
the acquisition module is used for acquiring a user vehicle to be prompted;
the issuing module is used for issuing road condition prompting messages to the user vehicle according to the target road surface condition and the target lane;
the target road surface condition is obtained by carrying out road surface condition recognition on road images around the collection vehicle;
the target lane is obtained by: carrying out road surface condition recognition on the road images around the collection vehicle to obtain the image position of the target road surface condition; carrying out lane line recognition on the road image to obtain the image position of at least one lane in the same direction as the collection vehicle; and matching the image position of the target road surface condition with the image position of the at least one lane to obtain the target lane covered by the target road surface condition.
In a fifth aspect, an embodiment of the present application provides a vehicle-mounted terminal, including:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform a lane-level road condition identification method as provided by any of the embodiments.
In a sixth aspect, an embodiment of the present application provides a server, including:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor to enable the at least one processor to execute a road condition prompting method provided by any of the embodiments.
In a seventh aspect, the present application further provides a non-transitory computer-readable storage medium storing computer instructions for causing a computer to execute a lane-level road surface condition identification method or a road condition prompting method provided in any of the embodiments.
The technology according to the present application provides road surface condition indication at a lane level.
It should be understood that the statements in this section do not necessarily identify key or critical features of the embodiments of the present disclosure, nor do they limit the scope of the present disclosure. Other features of the present disclosure will become apparent from the following description.
Drawings
The drawings are included to provide a better understanding of the present solution and are not intended to limit the present application. Wherein:
fig. 1 is a flowchart of a first lane-level road surface condition recognition method in an embodiment of the present application;
fig. 2a is a flowchart of a second lane-level road surface condition recognition method in the embodiment of the present application;
FIG. 2b is a flowchart illustrating the recognition of a road image by the road surface condition recognition model according to the second embodiment of the present application;
fig. 2c is a flowchart of a supplementary lane line provided in the second embodiment of the present application;
fig. 3 is a flowchart of a first road condition prompting method in the embodiment of the present application;
fig. 4 is a flowchart of a second road condition prompting method provided in the embodiment of the present application;
fig. 5 is a structural diagram of a lane-level road surface condition recognition apparatus in the embodiment of the present application;
fig. 6 is a structural diagram of a road condition indicating device in the embodiment of the present application;
fig. 7 is a configuration diagram of the in-vehicle terminal in the embodiment of the present application.
Detailed Description
The following description of the exemplary embodiments of the present application, taken in conjunction with the accompanying drawings, includes various details of the embodiments of the application for the understanding of the same, which are to be considered exemplary only. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present application. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
Fig. 1 is a flowchart of a first lane-level road surface condition recognition method according to an embodiment of the present application, which is applied to a case of recognizing a road surface condition. The method is executed by a lane-level road surface condition recognition device, which is realized by software and/or hardware and is specifically configured in a vehicle-mounted terminal with certain data calculation capacity.
The lane-level road surface condition recognition method shown in fig. 1 includes:
and S110, carrying out road surface condition recognition on the road image around the collected vehicle to obtain the target road surface condition and the image position of the target road surface condition.
In this embodiment, the vehicle-mounted terminal is configured in a collection vehicle, and while driving on the road, the collection vehicle captures surrounding road images in real time through a camera mounted on it. In a practical application scene, road images are captured in the driving direction of the collection vehicle: if the collection vehicle drives straight ahead, road images in front of it are captured; if the collection vehicle reverses, road images behind it are captured; and if the collection vehicle turns, road images toward the turn are captured.
A road image set I = { <l1, I1>, <l2, I2>, <l3, I3>, ... } is obtained by continuously capturing images on a plurality of roads, where Ii is the road image captured on road li and i is a natural number. In the present embodiment, road surface condition recognition is performed on each road image captured on each road, and the recognized road surface condition is referred to as the target road surface condition. The image position of the target road surface condition is its position in the road image.
Alternatively, types of road surface conditions include, but are not limited to, potholes (degree of pothole including mild, moderate, and severe), construction, and obstacles.
And S120, identifying the lane lines of the road image to obtain the image position of at least one lane in the same direction as the collected vehicle.
Lane line recognition is carried out on the road image to recognize the type and the image position of each lane line. The direction and occupied area of a lane are delimited by lane lines, so the direction and the image position of at least one lane can be obtained from the types and image positions of the lane lines.
And acquiring the driving direction of the collection vehicle, such as forward or backward, and combining the obtained direction and the image position of the at least one lane so as to screen out the image position of the at least one lane in the same direction as the driving direction of the collection vehicle.
S130, matching the image position of the target road surface condition with the image position of at least one lane to obtain a target lane covered by the target road surface condition.
The image coordinate system of the road image represents the image position of the target road surface condition or the image position of the lane. In the case of two image positions having the same coordinate system, the two image positions can be directly matched.
Specifically, the image position of the target road surface condition is compared with the image position of each lane, and the lanes that are partially or entirely covered by the target road surface condition are selected and referred to as target lanes.
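This matching can be illustrated with a minimal Python sketch; the (x_min, y_min, x_max, y_max) box representation and the overlap threshold are assumptions introduced for the example, not details of the embodiment.

```python
# Minimal sketch of S130: matching a road-condition box against lane boxes.
# Box format (x_min, y_min, x_max, y_max) in the image coordinate system is an assumption.

def overlap_area(a, b):
    """Area of intersection between two axis-aligned boxes."""
    w = min(a[2], b[2]) - max(a[0], b[0])
    h = min(a[3], b[3]) - max(a[1], b[1])
    return max(0, w) * max(0, h)

def target_lanes(condition_box, lane_boxes, min_ratio=0.1):
    """Return indices of lanes covered by the condition.

    A lane counts as a target lane if the overlap exceeds min_ratio of the
    condition box area (partial coverage) or contains the whole box.
    """
    cond_area = max(1, (condition_box[2] - condition_box[0]) *
                       (condition_box[3] - condition_box[1]))
    hits = []
    for idx, lane_box in enumerate(lane_boxes):
        if overlap_area(condition_box, lane_box) / cond_area >= min_ratio:
            hits.append(idx)
    return hits

# Example: a pothole box overlapping the second of three same-direction lanes.
pothole = (400, 600, 520, 680)
lanes = [(100, 500, 380, 720), (380, 500, 660, 720), (660, 500, 940, 720)]
print(target_lanes(pothole, lanes))  # -> [1]
```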
And S140, uploading the target road surface condition and the target lane to a server, so that the server issues road condition prompting messages to the user vehicle according to the target road surface condition and the target lane.
And uploading a corresponding relation formed by the target road surface condition and the target lane to a server. With the identification and position matching of multiple road images on multiple roads, the server receives the corresponding relation between multiple target road conditions and target lanes, and is used for issuing road condition prompting messages to the vehicles of the user to prompt the user that the target road conditions exist on the target lanes, so that road condition prompting at lane level is achieved.
Optionally, a triple is formed by the target road surface condition, the target lane and the geographic position of the collection vehicle, and the triple is uploaded to the server. The server acquires the geographical position of the user vehicle, and when the geographical position of the user vehicle is within a set range of the geographical position of the acquisition vehicle, the server issues road condition prompting messages to the user vehicle according to the target road surface condition and the target lane. Wherein, the setting range can be 200 meters to prompt the road surface condition in advance.
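A minimal sketch of the server-side range check described above follows; the triple layout, the haversine distance and the prompt text are illustrative assumptions, the embodiment only requiring that a prompt be issued when the user vehicle is within the set range (for example, 200 meters) of the collection vehicle.

```python
# Sketch: a <condition, lane, position> triple from the collection vehicle
# triggers a prompt when the user vehicle is within the set range.
from math import radians, sin, cos, asin, sqrt

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two WGS-84 points."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371000 * asin(sqrt(a))

def maybe_prompt(triple, user_pos, set_range_m=200):
    """Return a prompt message if the user vehicle is within range, else None."""
    condition, lane, collect_pos = triple
    if haversine_m(*collect_pos, *user_pos) <= set_range_m:
        return f"{condition} ahead on lane {lane}"
    return None

triple = ("pothole", 2, (39.9042, 116.4074))
print(maybe_prompt(triple, (39.9049, 116.4080)))  # roughly 90 m away -> a prompt string is returned
```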
In this embodiment, lane line recognition and road surface condition recognition are respectively performed on the road image to obtain the image positions of the lanes and of the target road surface condition, and the target lane covered by the target road surface condition is obtained by matching these image positions, so that the server can prompt the road condition to the user vehicle. This realizes finer-grained recognition and prompting of road surface conditions at the lane level, improves driving safety, and provides auxiliary decision information for lane-level navigation applications and automatic driving. In addition, the lane direction is recognized and only lanes in the same direction as the collection vehicle are kept, so the target road surface condition in the driving direction of the collection vehicle can be recognized more accurately without considering lanes and road surface conditions in the opposite direction. This meets the driving requirement, avoids erroneous prompts to the user vehicle, and further improves the driving safety of the user vehicle.
According to the embodiment of the present application, fig. 2a is a flowchart of a second lane level road surface condition identification method in the embodiment of the present application, and the embodiment of the present application optimizes the process of lane line identification on the basis of the technical solutions of the above embodiments.
The lane-level road surface condition recognition method shown in fig. 2a includes:
s210, carrying out road surface condition recognition on the road image around the collected vehicle to obtain the target road surface condition and the image position of the target road surface condition.
Optionally, the road image is input to the road surface condition recognition model, and the target road surface condition output by the model and the image position of the target road surface condition are obtained.
Fig. 2b is a flowchart of road image recognition by the road surface condition recognition model according to the second embodiment of the present application. As shown in fig. 2b, the road surface condition recognition model includes a road surface condition detection sub-model and a road surface condition classification sub-model. The road surface condition detection sub-model is used for detecting whether a target road surface condition exists in the road image and the image position of the target road surface condition. The road surface condition region is then cropped according to the image position of the road surface condition and input to the road surface condition classification sub-model. The road surface condition detection sub-model may be a deep neural network module, and may specifically adopt target detection frameworks such as Faster R-CNN (Faster Region-based Convolutional Neural Network), YOLO, or SSD (Single Shot MultiBox Detector).
The road surface condition classification sub-model is used for classifying the road surface condition region to obtain the type of the target road surface condition. The road surface condition classification sub-model may be a deep neural network module, and may specifically adopt VGG (Visual Geometry Group), ResNet (Residual Network), or Inception series models. As shown in fig. 2b, the road surface condition is classified as construction.
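The two-stage structure described above can be sketched in Python as follows. The off-the-shelf torchvision models stand in for the embodiment's own trained detection and classification sub-models; the class names, score threshold and input size are assumptions.

```python
# Sketch of the two-stage structure (detection sub-model -> classification
# sub-model); torchvision models are stand-ins, not the patent's trained models.
import torch
import torchvision
from torchvision.transforms.functional import resize

CONDITION_CLASSES = ["pothole", "construction", "obstacle"]  # illustrative only

detector = torchvision.models.detection.fasterrcnn_resnet50_fpn(num_classes=2)  # background + "condition"
classifier = torchvision.models.resnet18(num_classes=len(CONDITION_CLASSES))
detector.eval()
classifier.eval()

@torch.no_grad()
def recognise(road_image, score_thr=0.5):
    """road_image: float tensor (3, H, W) in [0, 1]. Returns [(type, box), ...]."""
    det = detector([road_image])[0]                  # stage 1: where is a condition?
    results = []
    for box, score in zip(det["boxes"], det["scores"]):
        if score < score_thr:
            continue
        x1, y1, x2, y2 = box.int().tolist()
        crop = road_image[:, y1:y2, x1:x2]           # cut out the condition region
        crop = resize(crop, [224, 224]).unsqueeze(0)
        cls = classifier(crop).argmax(dim=1).item()  # stage 2: which type is it?
        results.append((CONDITION_CLASSES[cls], (x1, y1, x2, y2)))
    return results
```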
In the embodiment, samples marked with different road condition types are selected from different shot road data, and the road condition recognition model is trained, so that the trained road condition recognition model has more robust recognition capability on different road condition types.
S220, carrying out lane line recognition on the road image to obtain a lane area in the same direction as the collected vehicle and the type and position of the lane line in the lane area.
The lane area in the same direction as the collecting vehicle is an area formed by at least one lane in the same direction as the collecting vehicle.
Optionally, with reference to fig. 2c, the present operation includes the following three steps:
the method comprises the following steps: and carrying out lane line identification on the road image to obtain the type and the image position of the lane line.
Specifically, a semantic segmentation map of the image is first obtained through a pre-trained image semantic segmentation model, such as DeepLab. Then, the pixels belonging to lane lines in the segmentation result are grouped by a clustering algorithm, such as DBSCAN or Mean-Shift, to determine different lane line instances, that is, to determine which lane line each segmented lane-line pixel belongs to. The edge pixels of a lane line instance, X_min, Y_min, X_max and Y_max, determine the bounding box of the lane line, that is, the image position of the lane line. The lane lines within the bounding boxes are then classified by an image classification model to obtain the types of the lane lines. The types of lane lines include road separation lines and dashed lines. A road separation line may be a green belt, a hard barrier, a double yellow line or a long solid line, and is used for separating lanes in different directions and separating lanes from non-lanes. Dashed lines are used to separate different lanes in the same direction. (An illustrative sketch of steps one to three is given below.)
Step two: and determining the lane areas in the same direction as the collection vehicle according to the image positions of the road separation lines on the two sides of the collection vehicle.
The road separation lines on both sides of the collection vehicle include road separation lines for separating different directions and road separation lines for separating lanes and non-lanes, and obviously, the lane area in the same direction as the collection vehicle is located between the road separation lines on both sides of the collection vehicle.
The present embodiment assumes that the cameras on the collection vehicle are located directly in front of and behind the collection vehicle so that the collection vehicle is located on the lane in the middle of the road image, then the road separation lines on both sides of the collection vehicle will be located on both the left and right sides of the road image. Therefore, the road separation lines are selected from the left side and the right side of the road image, the image positions of the road separation lines are obtained, and the area between the two road separation lines is determined as the lane area in the same direction as the collection vehicle in the road image.
Step three: and determining the type and the position of the lane line in the lane area according to the image position of the lane line.
Specifically, from among all the types and image positions of the lane lines recognized in the step one, the type and image position of the lane line located within the lane area are selected. As shown in fig. 2c, the types of lane lines in the lane area are a solid line, a dotted line, and a solid line in order from left to right.
According to the embodiment, the lane areas in the same direction as the collection vehicle can be accurately determined according to the image positions of the road separation lines on the two sides of the collection vehicle, so that the types and the positions of the lane lines in the lane areas can be accurately determined, the lane lines in the non-lane areas are eliminated, and unnecessary data volume is reduced.
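The three steps above can be sketched as follows, assuming a boolean lane-line mask from a DeepLab-style segmentation model, illustrative DBSCAN parameters, and a simple dictionary layout for the recognized lane lines; the collection vehicle is assumed to sit near the centre column of the image, consistent with the camera placement described above.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def lane_line_boxes(lane_mask, eps=5, min_samples=20):
    """Step one: group lane-line pixels into instances and box each instance.
    lane_mask: boolean (H, W) array, True where pixels belong to lane lines."""
    ys, xs = np.nonzero(lane_mask)
    if len(xs) == 0:
        return []
    pts = np.stack([xs, ys], axis=1)
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(pts)
    boxes = []
    for lbl in set(labels) - {-1}:                   # -1 marks DBSCAN noise points
        inst = pts[labels == lbl]
        boxes.append((int(inst[:, 0].min()), int(inst[:, 1].min()),
                      int(inst[:, 0].max()), int(inst[:, 1].max())))
    return boxes

def same_direction_lane_area(lines, image_width):
    """Steps two and three: pick the road separation lines left and right of the
    image centre (where the collection vehicle is assumed to sit) and keep only
    the lane lines between them.
    lines: list of dicts {"kind": "separation" | "dashed", "box": (x1, y1, x2, y2)}."""
    centre = image_width / 2
    seps = [l for l in lines if l["kind"] == "separation"]
    left = max((l for l in seps if l["box"][2] <= centre),
               key=lambda l: l["box"][2], default=None)
    right = min((l for l in seps if l["box"][0] >= centre),
                key=lambda l: l["box"][0], default=None)
    if left is None or right is None:
        return None, []
    area = (left["box"][0], right["box"][2])
    inside = [l for l in lines if area[0] <= l["box"][0] and l["box"][2] <= area[1]]
    return area, inside
```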
And S230, carrying out vehicle identification on the road image to obtain the type and the position of a lane line blocked by the vehicle in the lane area.
Lane lines in the lane area may be blocked by vehicles traveling in the same direction, so the lane line recognition in S220 may be incomplete; this operation supplements the lane lines obtained in S220. Using the practical fact that one vehicle occupies one lane, after the vehicles in the road image are recognized, the lane lines on both sides of each vehicle can be further determined and the blocked lane lines supplemented.
Optionally, with reference to fig. 2c, the present operation includes the following two steps:
the method comprises the following steps: and carrying out vehicle identification on the road image to obtain the image position of the vehicle in the lane area.
Similar to the image position recognition of lane lines, a semantic segmentation map of the image is first obtained through a pre-trained image semantic segmentation model, such as DeepLab. Then, the pixels belonging to vehicles in the segmentation result are grouped by a clustering algorithm, such as DBSCAN or Mean-Shift, to determine different vehicle instances, that is, to determine which vehicle each segmented vehicle pixel belongs to. The edge pixels of a vehicle instance, X_min, Y_min, X_max and Y_max, determine the bounding box of the vehicle, that is, the image position of the vehicle.
Step two: if no lane line is recognized on the left and/or right side of the vehicle, a lane line of a road separation line or a dotted line category is supplemented on the side where no lane line is recognized.
Step two is executed for each vehicle identified in step one. If no lane line is recognized on the left side, the right side, or both sides of a vehicle, which indicates that the lane line is covered by a tire or the vehicle body, a lane line of the road separation line or dashed line category is supplemented on the side where no lane line is recognized.
According to the embodiment, according to the rule that a vehicle occupies a lane, when no lane line is identified on the left side and/or the right side of the vehicle, the corresponding lane line is supplemented, so that the type and the position of the lane line shielded by the vehicle in the lane area are obtained.
Optionally, it is determined that the lane line is not recognized on the left side and/or the right side of the vehicle by: 1) the lane line is not identified in the set distance range on the left side and/or the right side of the vehicle; 2) no lane line is recognized between adjacent vehicles.
For 1), the set distance range may be set to half of the lane width. Whether a lane line is recognized within the set distance range on the left and/or right side of the vehicle is determined according to the lane lines within the lane area recognized at S220. If neither a lane line nor another vehicle is recognized within the set distance range on the left and/or right side of a vehicle in the lane area, a lane line of the road separation line category is supplemented on that side; conversely, if no lane line is recognized but another vehicle is recognized within the set distance range on the left and/or right side of a vehicle, a lane line of the dashed category is supplemented on that side.
For 2), as shown in fig. 2c, the types of lane lines and the vehicles in the lane area are, in order from left to right, a solid line, a vehicle, a dashed line, a vehicle and a solid line. Where no lane line recognition result exists between adjacent vehicles, it can be determined that the lane line is covered, and a lane line of the dashed category is supplemented. Based on the above steps, the final lane line result can be obtained as a solid line, a dashed line, a dashed line and a solid line, and the existence of 3 lanes in the same direction is finally determined.
The first method described above does not depend on the type of lane line and the relative positional relationship between the lane lines, and only needs to detect the recognition result of the lane line within a set range on both sides of the vehicle. The second method does not depend on whether a lane line is detected in the set range on the two sides of the vehicle, and only needs the left and right sequence between the lane line and the vehicle.
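Rule 2) above can be illustrated with the following sketch; the element representation (a left-to-right sequence of recognized lane lines and vehicles) and the example scene are assumptions introduced for the illustration and are not taken from fig. 2c.

```python
# Sketch of rule 2): when two adjacent elements are both vehicles with no lane
# line recognised between them, a dashed lane line is supplemented between them.

def supplement_between_vehicles(elements):
    """elements: list of ("line", kind, x_centre) or ("vehicle", None, x_centre),
    sorted left to right. Returns the list with occluded dashed lines filled in."""
    out = []
    for i, elem in enumerate(elements):
        out.append(elem)
        if i + 1 < len(elements) and elem[0] == "vehicle" and elements[i + 1][0] == "vehicle":
            mid = (elem[2] + elements[i + 1][2]) / 2
            out.append(("line", "dashed", mid))   # occluded line, supplemented
    return out

# Illustrative scene: solid line, vehicle, vehicle, solid line.
scene = [("line", "solid", 100), ("vehicle", None, 240),
         ("vehicle", None, 520), ("line", "solid", 660)]
result = supplement_between_vehicles(scene)
print([e[1] for e in result if e[0] == "line"])  # -> ['solid', 'dashed', 'solid']
```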
S240, obtaining the image position of at least one lane in the same direction as the collected vehicle according to the type and the position of the lane line recognized in the lane area and the type and the position of the lane line shielded by the vehicle in the lane area.
The lane lines identified in the lane area and the lane lines blocked by the vehicle form all the lane lines in the lane area. Based on the types and positions of all the lane lines in the lane area, the area between two adjacent lane lines is determined as the image position of one lane.
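A minimal sketch of this step, under the assumption that each lane line is represented by a single horizontal image position and each lane by the interval between two adjacent lane lines:

```python
# Sketch of S240: with all lane lines in the lane area sorted left to right,
# each region between two adjacent lane lines is taken as one lane's image position.

def lanes_from_lines(line_x_positions):
    xs = sorted(line_x_positions)
    return [(xs[i], xs[i + 1]) for i in range(len(xs) - 1)]

print(lanes_from_lines([660, 100, 380]))  # -> [(100, 380), (380, 660)], i.e. two lanes
```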
And S250, matching the image position of the target road surface condition with the image position of at least one lane to obtain a target lane covered by the target road surface condition.
And S260, uploading the target road surface condition and the target lane to a server, so that the server can issue road condition prompting messages to the user vehicle according to the target road surface condition and the target lane.
In this embodiment, the lane line identification and the vehicle identification are complementary to each other, so as to comprehensively identify the type and the image position of the lane line, thereby avoiding the situation that the lane line cannot be identified or is identified incorrectly due to vehicle shielding.
According to an embodiment of the present application, fig. 3 is a flowchart of a first road condition prompting method in the embodiment of the present application, and the embodiment of the present application is applicable to a condition of prompting a road condition of a vehicle of a user. The method is executed by a road condition prompting device, the device is realized by software and/or hardware and is specifically configured in a server with certain data operation capacity.
The road condition prompting method shown in fig. 3 includes:
and S310, obtaining the user vehicle to be prompted.
Optionally, the geographic location of the collection vehicle is obtained. And when the distance between the geographic position of the user vehicle and the geographic position of the collection vehicle is within a set range, determining the user vehicle as the user vehicle to be prompted. Wherein the set range may be 200 meters.
And S320, issuing road condition prompting messages to the vehicles of the users according to the target road surface condition and the target lane.
The target road surface condition is obtained by carrying out road surface condition recognition on the road images around the collection vehicle. The target lane is obtained by: obtaining the image position of the target road surface condition from that recognition; carrying out lane line recognition on the road image to obtain the image position of at least one lane in the same direction as the collection vehicle; and matching the image position of the target road surface condition with the image position of the at least one lane to obtain the target lane covered by the target road surface condition.
In a specific embodiment, the user vehicle reports the geographic position to the server in real time in the navigation process, and when the server detects that the geographic position of the user vehicle is within a set range from the geographic position of the collection vehicle, the server determines the user vehicle as the user vehicle to be prompted, and issues a road condition prompting message to the user vehicle, for example, a target road condition exists on a target lane.
In this embodiment, lane line recognition and road surface condition recognition are respectively performed on the road image to obtain the image positions of the lanes and of the target road surface condition, and the target lane covered by the target road surface condition is obtained by matching these image positions, so that the server can prompt the road condition to the user vehicle. This realizes finer-grained recognition and prompting of road surface conditions at the lane level, improves driving safety, and provides auxiliary decision information for lane-level navigation applications and automatic driving. In addition, the lane direction is recognized and only lanes in the same direction as the collection vehicle are kept, so the target road surface condition in the driving direction of the collection vehicle can be recognized more accurately without considering lanes and road surface conditions in the opposite direction. This meets the driving requirement, avoids erroneous prompts to the user vehicle, and further improves the driving safety of the user vehicle.
In the embodiment of the present application, fig. 4 is a flowchart of a second road condition prompting method provided in the embodiment of the present application, and the embodiment is optimized.
The road condition prompting method shown in fig. 4 includes:
and S410, obtaining the user vehicle to be prompted.
And S420, determining a driving lane of the vehicle of the user.
Optionally, the driving lane of the user vehicle is determined according to the geographical position of the user vehicle.
And S430, calculating the direction of the target lane relative to the driving lane and the number of the interval lanes.
For example, the lane area has 4 lanes from left to right in sequence, the driving lane is the leftmost lane in the lane area, the target lane is the rightmost lane in the lane area, and the direction of the target lane relative to the driving lane is the right side and is separated by 3 lanes.
And S440, issuing road condition prompting messages to the vehicles of the users according to the condition and the direction of the target road surface and the number of the lane spaces.
Specifically, a prompt message such as "there is a target road surface condition on the 3rd lane to the right of the driving lane" is issued to the user vehicle.
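The calculation in S430 and the message construction in S440 can be sketched as follows; counting lane indices from the left and the exact prompt wording are assumptions made for the illustration.

```python
# Sketch of S430/S440: relative direction, number of interval lanes, prompt text.

def build_prompt(condition, driving_lane_idx, target_lane_idx):
    """Lane indices are counted from the left-most lane of the lane area (assumed)."""
    diff = target_lane_idx - driving_lane_idx
    if diff == 0:
        return f"{condition} on your current lane"
    side = "right" if diff > 0 else "left"
    return f"{condition} {abs(diff)} lane(s) to your {side}"

# Driving lane is the leftmost (index 0), target is the rightmost of 4 lanes (index 3).
print(build_prompt("pothole", 0, 3))  # -> pothole 3 lane(s) to your right
```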
The direction of the target lane relative to the driving lane and the number of interval lanes are prompted to the user vehicle, so that the user can directly know the specific position of the road surface condition, which is more user-friendly and intelligent.
Fig. 5 is a structural diagram of a lane-level road surface condition recognition device according to an embodiment of the present application, which is implemented by software and/or hardware and is specifically configured in a vehicle-mounted terminal having a certain data calculation capability, and the embodiment of the present application is applied to the case of recognizing a road surface condition.
A lane-level road surface condition recognition apparatus 500 shown in fig. 5 includes: the system comprises a road surface condition identification module 501, a lane line identification module 502, a matching module 503 and an uploading module 504; wherein,
the road condition recognition module 501 is configured to perform road condition recognition on road images around the collected vehicle to obtain a target road condition and an image position of the target road condition;
the lane line recognition module 502 is configured to perform lane line recognition on the road image to obtain an image position of at least one lane in the same direction as the collected vehicle;
the matching module 503 is configured to match the image position of the target road surface condition with an image position of at least one lane, so as to obtain a target lane covered by the target road surface condition;
the uploading module 504 is configured to upload the target road surface condition and the target lane to the server, so that the server issues a road condition prompting message to the user vehicle according to the target road surface condition and the target lane.
In this embodiment, lane line recognition and road surface condition recognition are respectively performed on the road image to obtain the image positions of the lanes and of the target road surface condition, and the target lane covered by the target road surface condition is obtained by matching these image positions, so that the server can prompt the road condition to the user vehicle. This realizes finer-grained recognition and prompting of road surface conditions at the lane level, improves driving safety, and provides auxiliary decision information for lane-level navigation applications and automatic driving. In addition, the lane direction is recognized and only lanes in the same direction as the collection vehicle are kept, so the target road surface condition in the driving direction of the collection vehicle can be recognized more accurately without considering lanes and road surface conditions in the opposite direction. This meets the driving requirement, avoids erroneous prompts to the user vehicle, and further improves the driving safety of the user vehicle.
Further, the lane line identification module 502 includes: the lane line recognition submodule is used for recognizing lane lines of the road image to obtain a lane area in the same direction as the collected vehicle and the type and position of the lane lines in the lane area; the vehicle identification submodule is used for carrying out vehicle identification on the road image to obtain the type and the position of a lane line shielded by a vehicle in a lane area; and the lane position recognition submodule is used for obtaining the image position of at least one lane in the same direction with the collected vehicle according to the type and the position of the lane line recognized in the lane area and the type and the position of the lane line shielded by the vehicle in the lane area.
Further, the lane line identification submodule includes: the lane line identification unit is used for identifying lane lines of the road image to obtain the types and the image positions of the lane lines, wherein the types of the lane lines comprise road separation lines and dotted lines; the lane area determining unit is used for determining a lane area in the same direction as the collection vehicle according to the image positions of the road separation lines on the two sides of the collection vehicle; and the type and position determining unit is used for determining the type and position of the lane line in the lane area according to the image position of the lane line.
Further, a vehicle identification submodule comprising: the vehicle position identification unit is used for carrying out vehicle identification on the road image to obtain the image position of the vehicle in the lane area; and a supplementing unit for supplementing a road separation line or a lane line of a dotted line category at a side where the lane line is not recognized, if the lane line is not recognized at the left side and/or the right side of the vehicle.
Further, a supplementing unit, specifically configured to supplement a lane line of a road separation line or a broken line category on an unidentified side if no lane line is identified within a set distance range on the left side and/or the right side of the vehicle; alternatively, if no lane lines are identified between adjacent vehicles, lane lines of the dotted line category are supplemented between adjacent vehicles.
The lane-level road surface condition recognition device can execute the lane-level road surface condition recognition method provided by any embodiment of the application, and has corresponding functional modules and beneficial effects for executing the lane-level road surface condition recognition method.
According to an embodiment of the present application, fig. 6 is a structural diagram of a road condition prompting device in the embodiment of the present application, and the embodiment of the present application is suitable for a situation of prompting a road condition of a user vehicle.
As shown in fig. 6, the road condition prompting device 600 includes: an acquisition module 601 and a distribution module 602; wherein,
an obtaining module 601, configured to obtain a user vehicle to be prompted;
the issuing module 602 is configured to issue a road condition prompting message to a user vehicle according to a target road surface condition and a target lane;
the target road condition is obtained by carrying out road condition recognition on road images around the collected vehicle;
the target lane is obtained by: carrying out road surface condition recognition on the road images around the collection vehicle to obtain the image position of the target road surface condition; carrying out lane line recognition on the road image to obtain the image position of at least one lane in the same direction as the collection vehicle; and matching the image position of the target road surface condition with the image position of the at least one lane to obtain the target lane covered by the target road surface condition.
In this embodiment, lane line recognition and road surface condition recognition are respectively performed on the road image to obtain the image positions of the lanes and of the target road surface condition, and the target lane covered by the target road surface condition is obtained by matching these image positions, so that the server can prompt the road condition to the user vehicle. This realizes finer-grained recognition and prompting of road surface conditions at the lane level, improves driving safety, and provides auxiliary decision information for lane-level navigation applications and automatic driving. In addition, the lane direction is recognized and only lanes in the same direction as the collection vehicle are kept, so the target road surface condition in the driving direction of the collection vehicle can be recognized more accurately without considering lanes and road surface conditions in the opposite direction. This meets the driving requirement, avoids erroneous prompts to the user vehicle, and further improves the driving safety of the user vehicle.
The device further comprises a determining module and a calculation module. The determining module is used for determining a driving lane of the user vehicle before the road condition prompting message is issued to the user vehicle according to the target road surface condition and the target lane; the calculation module is used for calculating the direction of the target lane relative to the driving lane and the number of interval lanes. Correspondingly, the issuing module is specifically configured to issue the road condition prompting message to the user vehicle according to the target road surface condition, the direction, and the number of interval lanes.
According to an embodiment of the application, the application also provides a vehicle-mounted terminal, a server and a readable storage medium.
Fig. 7 is a block diagram of a vehicle-mounted terminal that implements the lane-level road surface condition recognition method according to the embodiment of the present application. Vehicle-mounted terminals are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The vehicle-mounted terminal may also represent various forms of mobile devices, such as personal digital assistants, cellular phones, smart phones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be examples only, and are not meant to limit implementations of the present application that are described and/or claimed herein.
As shown in fig. 7, the in-vehicle terminal includes: one or more processors 701, a memory 702, and interfaces for connecting the various components, including a high-speed interface and a low-speed interface. The various components are interconnected using different buses and may be mounted on a common motherboard or in other manners as desired. The processor may process instructions for execution within the electronic device, including instructions stored in or on the memory to display graphical information of a GUI on an external input/output apparatus (such as a display device coupled to the interface). In other embodiments, multiple processors and/or multiple buses may be used, along with multiple memories, as desired. Also, multiple in-vehicle terminals may be connected, with each terminal providing portions of the necessary operations (e.g., as a server array, a group of blade servers, or a multi-processor system). In fig. 7, one processor 701 is taken as an example.
The memory 702 is a non-transitory computer readable storage medium as provided herein. The memory stores instructions executable by the at least one processor to cause the at least one processor to perform the lane-level road surface condition identification method provided by the present application. The non-transitory computer-readable storage medium of the present application stores computer instructions for causing a computer to execute the lane-level road surface condition recognition method provided by the present application.
The memory 702, which is a non-transitory computer-readable storage medium, may be used to store non-transitory software programs, non-transitory computer-executable programs, and modules, such as program instructions/modules corresponding to the lane-level road condition recognition method in the embodiment of the present application (for example, the program instructions/modules shown in fig. 5 include a road condition recognition module 501, a lane line recognition module 502, a matching module 503, and an uploading module 504). The processor 701 executes various functional applications of the server and data processing by running non-transitory software programs, instructions, and modules stored in the memory 702, that is, implements the method of lane-level road surface condition recognition in the above-described method embodiments.
The memory 702 may include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function; the storage data area may store data created by use of the in-vehicle terminal that implements the lane-level road surface condition recognition method, and the like. Further, the memory 702 may include high speed random access memory, and may also include non-transitory memory, such as at least one magnetic disk storage device, flash memory device, or other non-transitory solid state storage device. In some embodiments, the memory 702 may optionally include memory remotely located from the processor 701, and such remote memory may be connected via a network to an in-vehicle terminal that performs the lane-level road condition identification method. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The vehicle-mounted terminal performing the lane-level road surface condition recognition method may further include: an input device 703 and an output device 704. The processor 701, the memory 702, the input device 703 and the output device 704 may be connected by a bus or other means, and fig. 7 illustrates an example of a connection by a bus.
The input device 703 may receive input numeric or character information and generate key signal inputs related to user settings and function control of the in-vehicle terminal that performs the lane-level road surface condition recognition method, such as an input device of a touch screen, a keypad, a mouse, a track pad, a touch pad, a pointing stick, one or more mouse buttons, a track ball, a joystick, or the like. The output devices 704 may include a display device, auxiliary lighting devices (e.g., LEDs), and tactile feedback devices (e.g., vibrating motors), among others. The display device may include, but is not limited to, a Liquid Crystal Display (LCD), a Light Emitting Diode (LED) display, and a plasma display. In some implementations, the display device can be a touch screen.
In addition, the embodiment of the present application further provides a server, the structure of which is detailed in fig. 7, at least one processor; and a memory communicatively coupled to the at least one processor; the storage stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor, so that the at least one processor can execute the road condition prompting method provided by any of the above embodiments.
Various implementations of the systems and techniques described here can be realized in digital electronic circuitry, integrated circuitry, application specific ASICs (application specific integrated circuits), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implemented in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special or general purpose, receiving data and instructions from, and transmitting data and instructions to, a storage system, at least one input device, and at least one output device.
These computer programs (also known as programs, software applications, or code) include machine instructions for a programmable processor, and may be implemented using high-level procedural and/or object-oriented programming languages, and/or assembly/machine languages. As used herein, the terms "machine-readable medium" and "computer-readable medium" refer to any computer program product, apparatus, and/or device (e.g., magnetic discs, optical disks, memory, Programmable Logic Devices (PLDs)) used to provide machine instructions and/or data to a programmable processor, including a machine-readable medium that receives machine instructions as a machine-readable signal. The term "machine-readable signal" refers to any signal used to provide machine instructions and/or data to a programmable processor.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), Wide Area Networks (WANs), the internet, and blockchain networks.
The computer system may include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other.
It should be understood that various forms of the flows shown above may be used, with steps reordered, added, or deleted. For example, the steps described in the present application may be executed in parallel, sequentially, or in a different order, and the present application is not limited in this respect, as long as the desired results of the technical solutions disclosed in the present application can be achieved.
The above-described embodiments should not be construed as limiting the scope of the present application. It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and substitutions may be made in accordance with design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present application shall be included in the protection scope of the present application.
Claims (17)
1. A lane-level road surface condition recognition method, applicable to a vehicle-mounted terminal, comprising the following steps:
carrying out road surface condition recognition on a road image around a collection vehicle to obtain a target road surface condition and an image position of the target road surface condition;
carrying out lane line recognition on the road image to obtain an image position of at least one lane in the same direction as the collection vehicle;
matching the image position of the target road surface condition with the image position of the at least one lane to obtain a target lane covered by the target road surface condition;
and uploading the target road surface condition and the target lane to a server, so that the server issues a road condition prompting message to a user vehicle according to the target road surface condition and the target lane.
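For illustration only, the following Python sketch shows how the matching step of claim 1 could be carried out once the two recognition steps have produced image positions. Lane regions are simplified to axis-aligned bounding boxes, and all names, data structures, and values are hypothetical assumptions; the claim does not prescribe any particular implementation.

```python
# Minimal sketch of the position-matching step of claim 1 (assumptions noted above).
from dataclasses import dataclass
from typing import Dict, Tuple

Box = Tuple[float, float, float, float]  # (x_min, y_min, x_max, y_max) in pixels

@dataclass
class Detection:
    label: str   # e.g. "ponding", "construction" (hypothetical category names)
    box: Box     # image position of the target road surface condition

def overlap_ratio(a: Box, b: Box) -> float:
    """Area of the intersection of a with b, divided by the area of a."""
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    area_a = max(1e-6, (a[2] - a[0]) * (a[3] - a[1]))
    return ix * iy / area_a

def match_target_lane(condition: Detection, lanes: Dict[int, Box]) -> int:
    """Return the index of the lane whose image region overlaps the condition most."""
    return max(lanes, key=lambda idx: overlap_ratio(condition.box, lanes[idx]))

# Hypothetical example: a ponding area over the second same-direction lane.
lanes = {1: (0, 300, 400, 720), 2: (400, 300, 800, 720), 3: (800, 300, 1200, 720)}
ponding = Detection("ponding", (520, 500, 760, 680))
assert match_target_lane(ponding, lanes) == 2
# The vehicle-mounted terminal would then upload ("ponding", lane 2) to the server.
```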
2. The method of claim 1, wherein the carrying out lane line recognition on the road image to obtain an image position of at least one lane in the same direction as the collection vehicle comprises:
carrying out lane line recognition on the road image to obtain a lane area in the same direction as the collection vehicle and the type and position of a lane line in the lane area;
carrying out vehicle recognition on the road image to obtain the type and position of a lane line occluded by a vehicle in the lane area;
and obtaining the image position of the at least one lane in the same direction as the collection vehicle according to the type and position of the lane line recognized in the lane area and the type and position of the lane line occluded by a vehicle in the lane area.
3. The method of claim 2, wherein the carrying out lane line recognition on the road image to obtain a lane area in the same direction as the collection vehicle and the type and position of a lane line in the lane area comprises:
carrying out lane line recognition on the road image to obtain the type and image position of each lane line, wherein the type of a lane line comprises a road separation line and a dashed line;
determining the lane area in the same direction as the collection vehicle according to the image positions of the road separation lines on the two sides of the collection vehicle;
and determining the type and position of the lane line in the lane area according to the image position of the lane line.
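The sketch below illustrates, in one dimension along a single image row, how the lane area bounded by the road separation lines on the two sides of the collection vehicle could be split into lanes by the dashed lines inside it, as described in claim 3. The function name, the coordinates, and the one-dimensional simplification are assumptions, not part of the claimed method.

```python
# Minimal 1-D sketch of claim 3: bound the same-direction lane area by the
# road separation lines flanking the collection vehicle, then split it at
# the dashed lines that fall inside that span.
from typing import List, Tuple

def same_direction_lanes(
    lane_lines: List[Tuple[str, float]],   # (type, x): type is "separation" or "dashed"
    vehicle_x: float,                      # x-position of the collection vehicle
) -> List[Tuple[float, float]]:
    """Return (left, right) x-bounds of each same-direction lane."""
    xs = [x for _, x in lane_lines]
    seps = sorted(x for t, x in lane_lines if t == "separation")
    left = max((x for x in seps if x <= vehicle_x), default=min(xs))
    right = min((x for x in seps if x >= vehicle_x), default=max(xs))
    inner = sorted(x for t, x in lane_lines if t == "dashed" and left < x < right)
    edges = [left, *inner, right]
    return list(zip(edges[:-1], edges[1:]))

lines = [("separation", 100.0), ("dashed", 450.0), ("dashed", 800.0), ("separation", 1150.0)]
print(same_direction_lanes(lines, vehicle_x=600.0))
# [(100.0, 450.0), (450.0, 800.0), (800.0, 1150.0)] -> three same-direction lanes
```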
4. The method of claim 2 or 3, wherein the carrying out vehicle recognition on the road image to obtain the type and position of a lane line occluded by a vehicle in the lane area comprises:
carrying out vehicle recognition on the road image to obtain an image position of a vehicle in the lane area;
and if no lane line is recognized on the left side and/or right side of the vehicle, supplementing a lane line of the road separation line or dashed line type on the side where no lane line is recognized.
5. The method of claim 4, wherein the supplementing a lane line of the road separation line or dashed line type on the side where no lane line is recognized, if no lane line is recognized on the left side and/or right side of the vehicle, comprises:
if no lane line is recognized within a set distance range on the left side and/or right side of the vehicle, supplementing a lane line of the road separation line or dashed line type on the side where none is recognized; or,
if no lane line is recognized between adjacent vehicles, supplementing a lane line of the dashed line type between the adjacent vehicles.
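One possible realization of the supplementing in claims 4 and 5 is sketched below, again in one dimension along an image row: for each vehicle in the lane area, if no lane line was recognized within a set distance of its left or right side, a line is supplemented at an assumed lane-width offset. The threshold, the lane-width value, and always supplementing a dashed line are illustrative assumptions; choosing the separation-line type and the case between adjacent vehicles would follow the same pattern.

```python
# Minimal sketch of the lane line supplementing in claims 4-5 (assumptions noted above).
from typing import List

def supplement_lane_lines(
    line_xs: List[float],          # x-positions of recognized lane lines
    vehicle_xs: List[float],       # x-centres of vehicles in the lane area
    lane_width: float = 350.0,     # assumed lane width in pixels at this image row
    max_gap: float = 200.0,        # the "set distance range" of claim 5 (assumed value)
) -> List[float]:
    xs = sorted(line_xs)
    added: List[float] = []
    for vx in sorted(vehicle_xs):
        left_ok = any(vx - max_gap - lane_width / 2 <= x <= vx for x in xs + added)
        right_ok = any(vx <= x <= vx + max_gap + lane_width / 2 for x in xs + added)
        if not left_ok:
            added.append(vx - lane_width / 2)   # supplement a dashed line on the left
        if not right_ok:
            added.append(vx + lane_width / 2)   # supplement a dashed line on the right
    return sorted(xs + added)

print(supplement_lane_lines(line_xs=[100.0, 450.0], vehicle_xs=[625.0]))
# [100.0, 450.0, 800.0] -> a dashed line is supplemented on the occluded right side
```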
6. A road condition prompting method, applicable to a server, comprising the following steps:
acquiring a user vehicle to be prompted;
issuing a road condition prompting message to the user vehicle according to a target road surface condition and a target lane;
wherein the target road surface condition is obtained by carrying out road surface condition recognition on a road image around a collection vehicle;
and the target lane is obtained by: carrying out road surface condition recognition on the road image around the collection vehicle to obtain an image position of the target road surface condition; carrying out lane line recognition on the road image to obtain an image position of at least one lane in the same direction as the collection vehicle; and matching the image position of the target road surface condition with the image position of the at least one lane to obtain the target lane covered by the target road surface condition.
7. The method of claim 6, further comprising, before the issuing a road condition prompting message to the user vehicle according to the target road surface condition and the target lane:
determining a driving lane of the user vehicle;
calculating the direction of the target lane relative to the driving lane and the number of lanes separating them;
wherein the issuing a road condition prompting message to the user vehicle according to the target road surface condition and the target lane comprises:
issuing a road condition prompting message to the user vehicle according to the target road surface condition, the direction, and the number of separating lanes.
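The server-side calculation of claim 7 could look like the sketch below, where lanes are indexed from left to right within the same-direction lane area. The indexing convention, counting only the lanes strictly between the two, and the message wording are hypothetical assumptions rather than requirements of the claim.

```python
# Minimal sketch of claim 7: relative direction, number of separating lanes, message.
def lane_relation(driving_lane: int, target_lane: int):
    """Direction of the target lane relative to the driving lane and the lanes between them."""
    offset = target_lane - driving_lane
    if offset == 0:
        return "current lane", 0
    direction = "right" if offset > 0 else "left"
    return direction, abs(offset) - 1    # lanes strictly between the two

def prompt_message(condition: str, driving_lane: int, target_lane: int) -> str:
    direction, spacing = lane_relation(driving_lane, target_lane)
    if direction == "current lane":
        return f"Caution: {condition} ahead in your lane."
    return (f"Caution: {condition} ahead, {spacing} lane(s) to your {direction}; "
            f"avoid changing into that lane.")

print(prompt_message("ponding", driving_lane=2, target_lane=4))
# Caution: ponding ahead, 1 lane(s) to your right; avoid changing into that lane.
```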
8. A lane-level road surface condition recognition device, applicable to a vehicle-mounted terminal, comprising:
a road surface condition recognition module, configured to carry out road surface condition recognition on a road image around a collection vehicle to obtain a target road surface condition and an image position of the target road surface condition;
a lane line recognition module, configured to carry out lane line recognition on the road image to obtain an image position of at least one lane in the same direction as the collection vehicle;
a matching module, configured to match the image position of the target road surface condition with the image position of the at least one lane to obtain a target lane covered by the target road surface condition;
and an uploading module, configured to upload the target road surface condition and the target lane to a server, so that the server issues a road condition prompting message to a user vehicle according to the target road surface condition and the target lane.
9. The device of claim 8, wherein the lane line recognition module comprises:
a lane line recognition submodule, configured to carry out lane line recognition on the road image to obtain a lane area in the same direction as the collection vehicle and the type and position of a lane line in the lane area;
a vehicle recognition submodule, configured to carry out vehicle recognition on the road image to obtain the type and position of a lane line occluded by a vehicle in the lane area;
and a lane position recognition submodule, configured to obtain the image position of the at least one lane in the same direction as the collection vehicle according to the type and position of the lane line recognized in the lane area and the type and position of the lane line occluded by a vehicle in the lane area.
10. The device of claim 9, wherein the lane line recognition submodule comprises:
a lane line recognition unit, configured to carry out lane line recognition on the road image to obtain the type and image position of each lane line, wherein the type of a lane line comprises a road separation line and a dashed line;
a lane area determining unit, configured to determine the lane area in the same direction as the collection vehicle according to the image positions of the road separation lines on the two sides of the collection vehicle;
and a type and position determining unit, configured to determine the type and position of the lane line in the lane area according to the image position of the lane line.
11. The device of claim 9 or 10, wherein the vehicle recognition submodule comprises:
a vehicle position recognition unit, configured to carry out vehicle recognition on the road image to obtain an image position of a vehicle in the lane area;
and a supplementing unit, configured to supplement a lane line of the road separation line or dashed line type on the side where no lane line is recognized, if no lane line is recognized on the left side and/or right side of the vehicle.
12. The device of claim 11, wherein
the supplementing unit is specifically configured to supplement a lane line of the road separation line or dashed line type on the side where none is recognized, if no lane line is recognized within a set distance range on the left side and/or right side of the vehicle; or, if no lane line is recognized between adjacent vehicles, to supplement a lane line of the dashed line type between the adjacent vehicles.
13. A road condition prompting device, applicable to a server, comprising:
an acquisition module, configured to acquire a user vehicle to be prompted;
and an issuing module, configured to issue a road condition prompting message to the user vehicle according to a target road surface condition and a target lane;
wherein the target road surface condition is obtained by carrying out road surface condition recognition on a road image around a collection vehicle;
and the target lane is obtained by: carrying out road surface condition recognition on the road image around the collection vehicle to obtain an image position of the target road surface condition; carrying out lane line recognition on the road image to obtain an image position of at least one lane in the same direction as the collection vehicle; and matching the image position of the target road surface condition with the image position of the at least one lane to obtain the target lane covered by the target road surface condition.
14. The device of claim 13, further comprising:
a determining module, configured to determine a driving lane of the user vehicle before the road condition prompting message is issued to the user vehicle according to the target road surface condition and the target lane;
and a calculation module, configured to calculate the direction of the target lane relative to the driving lane and the number of lanes separating them;
wherein the issuing module is specifically configured to issue the road condition prompting message to the user vehicle according to the target road surface condition, the direction, and the number of separating lanes.
15. A vehicle-mounted terminal, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor, the instructions being executed by the at least one processor to enable the at least one processor to perform the lane-level road surface condition recognition method according to any one of claims 1 to 5.
16. A server, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor, the instructions being executed by the at least one processor to enable the at least one processor to perform the road condition prompting method according to claim 6 or 7.
17. A non-transitory computer-readable storage medium storing computer instructions for causing a computer to execute the lane-level road surface condition recognition method according to any one of claims 1 to 5, or the road condition prompting method according to claim 6 or 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010507874.1A CN111667706A (en) | 2020-06-05 | 2020-06-05 | Lane-level road surface condition recognition method, road condition prompting method and device |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010507874.1A CN111667706A (en) | 2020-06-05 | 2020-06-05 | Lane-level road surface condition recognition method, road condition prompting method and device |
Publications (1)
Publication Number | Publication Date |
---|---|
CN111667706A true CN111667706A (en) | 2020-09-15 |
Family
ID=72386956
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010507874.1A Pending CN111667706A (en) | 2020-06-05 | 2020-06-05 | Lane-level road surface condition recognition method, road condition prompting method and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111667706A (en) |
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106327895A (en) * | 2016-08-30 | 2017-01-11 | 深圳市元征科技股份有限公司 | Traveling road segment situation prompt system and method |
CN109829351A (en) * | 2017-11-23 | 2019-05-31 | 华为技术有限公司 | Detection method, device and the computer readable storage medium of lane information |
CN108921089A (en) * | 2018-06-29 | 2018-11-30 | 驭势科技(北京)有限公司 | Method for detecting lane lines, device and system and storage medium |
CN109740484A (en) * | 2018-12-27 | 2019-05-10 | 斑马网络技术有限公司 | The method, apparatus and system of road barrier identification |
CN109785633A (en) * | 2019-03-14 | 2019-05-21 | 百度在线网络技术(北京)有限公司 | Dangerous road conditions based reminding method, device, car-mounted terminal, server and medium |
CN110364008A (en) * | 2019-08-16 | 2019-10-22 | 腾讯科技(深圳)有限公司 | Road conditions determine method, apparatus, computer equipment and storage medium |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11175149B2 (en) * | 2018-10-16 | 2021-11-16 | Samsung Electronics Co., Ltd. | Vehicle localization method and apparatus |
CN112329564A (en) * | 2020-10-24 | 2021-02-05 | 武汉光庭信息技术股份有限公司 | Lane keeping function failure analysis method, system, electronic device and storage medium |
CN112818792A (en) * | 2021-01-25 | 2021-05-18 | 北京百度网讯科技有限公司 | Lane line detection method, lane line detection device, electronic device, and computer storage medium |
Similar Documents
Publication | Title |
---|---|
US12067764B2 (en) | Brake light detection | |
CN111695483B (en) | Vehicle violation detection method, device and equipment and computer storage medium | |
CN111667706A (en) | Lane-level road surface condition recognition method, road condition prompting method and device | |
US9104919B2 (en) | Multi-cue object association | |
CN111292531B (en) | Tracking method, device and equipment of traffic signal lamp and storage medium | |
CN112069279B (en) | Map data updating method, device, equipment and readable storage medium | |
CN111950537B (en) | Zebra crossing information acquisition method, map updating method, device and system | |
CN111674388B (en) | Information processing method and device for vehicle curve driving | |
CN115147809B (en) | Obstacle detection method, device, equipment and storage medium | |
WO2021199584A1 (en) | Detecting debris in a vehicle path | |
CN111540010B (en) | Road monitoring method and device, electronic equipment and storage medium | |
CN111652112A (en) | Lane flow direction identification method and device, electronic equipment and storage medium | |
CN111666714A (en) | Method and device for identifying automatic driving simulation scene | |
CN111640301B (en) | Fault vehicle detection method and fault vehicle detection system comprising road side unit | |
CN115641359A (en) | Method, apparatus, electronic device, and medium for determining motion trajectory of object | |
Huu et al. | Proposing Lane and Obstacle Detection Algorithm Using YOLO to Control Self‐Driving Cars on Advanced Networks | |
CN117593685B (en) | Method and device for constructing true value data and storage medium | |
CN111597986A (en) | Method, apparatus, device and storage medium for generating information | |
CN113011298B (en) | Truncated object sample generation, target detection method, road side equipment and cloud control platform | |
CN111814724B (en) | Lane number identification method, device, equipment and storage medium | |
CN110458815A (en) | There is the method and device of mist scene detection | |
CN112699773A (en) | Traffic light identification method and device and electronic equipment | |
CN116563801A (en) | Traffic accident detection method, device, electronic equipment and medium | |
CN113255404A (en) | Lane line recognition method and device, electronic device and computer-readable storage medium | |
CN116311157A (en) | Obstacle recognition method and obstacle recognition model training method |
Legal Events
Code | Title | Description
---|---|---
PB01 | Publication |
SE01 | Entry into force of request for substantive examination |
RJ01 | Rejection of invention patent application after publication | Application publication date: 20200915