CN111814724B - Lane number identification method, device, equipment and storage medium - Google Patents

Lane number identification method, device, equipment and storage medium

Info

Publication number
CN111814724B
Authority
CN
China
Prior art keywords
road surface
lane
road
lanes
line
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010700414.0A
Other languages
Chinese (zh)
Other versions
CN111814724A (en)
Inventor
杨建忠
张通滨
王珊珊
夏德国
卢振
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd filed Critical Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN202010700414.0A priority Critical patent/CN111814724B/en
Publication of CN111814724A publication Critical patent/CN111814724A/en
Application granted granted Critical
Publication of CN111814724B publication Critical patent/CN111814724B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 - Scenes; Scene-specific elements
    • G06V20/50 - Context or environment of the image
    • G06V20/56 - Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/588 - Recognition of the road, e.g. of lane markings; Recognition of the vehicle driving pattern in relation to the road

Abstract

The application discloses a method, device, equipment and storage medium for identifying the number of lanes, relating to deep learning technology. The lane number identification method comprises the following steps: acquiring a target image, wherein the target image comprises a road surface and a road isolator, and the road isolator is used to prevent vehicles from crossing laterally; preprocessing the target image to obtain a first area corresponding to the road surface and a second area corresponding to the road isolator in the target image; determining a first demarcation point according to the second area, and drawing a first transverse line across the first area through the first demarcation point to obtain the intersection points of the first transverse line with each lane line on the road surface; and determining the number of lanes on the road surface according to the number of intersection points. With the method and device, the number of lanes on a road can be accurately identified.

Description

Lane number identification method, device, equipment and storage medium
Technical Field
The application relates to a lane number identification method, device, equipment and storage medium.
Background
Map or navigation products, such as map or navigation apps on a smart terminal, can provide road navigation services to a user. The process generally comprises the following: an optimal path is searched by a path planning algorithm over urban road network data according to the user's travel demand (such as departure place and destination); after path planning is completed, the navigation product provides the user with map-based navigation guidance, voice prompts and the like, and the user travels from the departure place to the destination by referring to this navigation information. With the continuous upgrading of technology, and on the premise of completing the basic path planning function, the navigation services these products provide to users are becoming increasingly rich.
However, current map products cannot automatically identify the number of lanes, such as the number of same-direction lanes on the road surface on which a vehicle is traveling, and navigation products therefore cannot provide related reminders. Some products can provide simple classification of lane lines, but they do not identify the number of lanes. The "number of lanes" is an attribute of a road and generally represents the maximum number of lanes available to vehicles on that road. It can be divided into a total number of lanes, a left number of lanes and a right number of lanes, where the right number of lanes is the maximum number of lanes available in the current driving direction and the left number of lanes is the maximum number available in the opposite direction. A road with more lanes is usually wide, while a road with fewer lanes is likely to have a narrow surface and is prone to special conditions such as traffic congestion; since related reminders cannot be provided, a certain safety risk exists in practice.
Disclosure of Invention
The application provides a lane number identification method, a lane number identification device, lane number identification equipment and a storage medium.
According to a first aspect of the present application, there is provided a method for identifying the number of lanes, including:
acquiring a target image, wherein the target image comprises a road pavement and a road isolator, and the road isolator is used for limiting the transverse crossing of a vehicle;
preprocessing a target image to obtain a first area corresponding to the road surface and a second area corresponding to the road spacer in the target image;
determining a first demarcation point according to the second area, and forming a first transverse line across the first area through the first demarcation point to obtain an intersection point of the first transverse line and each lane line on the road surface;
and determining the number of lanes on the road surface according to the number of the intersection points.
According to a second aspect of the present application, there is provided an apparatus for recognizing the number of lanes, comprising:
the image acquisition module is used for acquiring a target image, wherein the target image comprises a road pavement and a road isolator, and the road isolator is used for limiting the transverse crossing of a vehicle;
the image preprocessing module is used for preprocessing a target image to obtain a first area corresponding to the road surface and a second area corresponding to the road spacer in the target image;
the intersection point determining module is used for determining a first demarcation point according to the second area, and forming a first transverse line across the first area through the first demarcation point to obtain intersection points of the first transverse line and each lane line on the road surface;
and the lane number determining module is used for determining the number of lanes on the road surface according to the number of the intersection points.
According to a third aspect of the present application, there is provided an electronic device comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the lane number identification method as described above.
According to a fourth aspect of the present application, there is provided a non-transitory computer-readable storage medium storing computer instructions for causing a computer to execute the above method of recognizing the number of lanes.
According to a fifth aspect of the present application, there is provided a computer program product comprising a computer program which, when executed by a processor, implements a method as described above.
With the embodiments of the present application, the demarcation between the road surface on which the host vehicle is located and other roads or roads in other directions can be accurately distinguished, so the resulting lane number is accurate and reliable. Applying the embodiments of the present application to navigation products can provide them with accurate lane number information; on the basis of this information, related additional functions such as 'avoid small roads' can be implemented, diversified choices can be offered to users who need them, such as novice drivers, and congestion-prone road sections or hard-to-drive road sections with few lanes can be avoided as much as possible, which has important application value.
It should be understood that the description of this section is not intended to identify key or critical features of the embodiments of the application or to delineate the scope of the application. Other features of the present application will become apparent from the description that follows.
Drawings
The drawings are for better understanding of the present solution and do not constitute a limitation of the present application. Wherein:
FIG. 1 is a flow chart of a method of identifying lane numbers according to one embodiment of the present application;
FIG. 2 is a flow chart of a method of identifying lane numbers according to another embodiment of the present application;
FIG. 3 is a schematic illustration of a road image to be processed according to one embodiment of the present application;
FIG. 4 is a schematic diagram of a semantic segmentation map of the embodiment of FIG. 3;
fig. 5 is a schematic diagram of the area division effect corresponding to the embodiment of fig. 3 and 4;
FIGS. 6 and 7 are schematic views of effects of embodiments of the present application applied to navigation products;
fig. 8 is a block diagram of the structure of the lane number recognition apparatus according to the embodiment of the present application;
fig. 9 is a block diagram of an electronic device for implementing the method of embodiments of the present application.
Detailed Description
Exemplary embodiments of the present application are described below in conjunction with the accompanying drawings, which include various details of the embodiments of the present application to facilitate understanding, and should be considered as merely exemplary. Accordingly, one of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present application. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
Fig. 1 shows a flow chart of a lane number identification method according to an embodiment of the present application, where the method includes:
s101, acquiring a target image, wherein the target image comprises a road pavement and a road spacer, and the road spacer is used for limiting the transverse crossing of a vehicle;
s102, preprocessing a target image to obtain a first area corresponding to the road surface and a second area corresponding to the road spacer in the target image;
s103, determining a first demarcation point according to the second area, and forming a first transverse line across the first area through the first demarcation point to obtain an intersection point of the first transverse line and each lane line on the road surface;
s104, determining the number of lanes on the road surface according to the number of the intersection points.
According to the lane number identification method provided in the embodiment of the present application, the processing object is an image containing a road surface and a road separator; for example, a road image captured by a device such as a driving recorder or a vehicle-mounted camera may be used. A separator is usually disposed beside or on the road surface. For example, to separate vehicles traveling on different roadways (such as a main road and an auxiliary road), isolation facilities such as a separation strip, a green belt or a guardrail may be placed on one or both sides of the road; to separate vehicles traveling in different directions, a specific type of lane line, such as the center line of a bidirectional two-lane road (commonly a double or quadruple center line), is marked on the road surface. These isolation facilities or isolation means prevent the dangers caused by vehicles crossing laterally. The embodiment of the application builds its processing logic on the characteristics of the separators on the road and finally obtains the number of lanes on the road.
In the processing, the target image is processed to obtain a first region corresponding to the road surface and a second region corresponding to the road separator. A "demarcation point" is then determined in the second region. The reason for the "demarcation point" is that the second region is the area occupied by the road separator, an area that vehicles are forbidden to cross laterally. According to road traffic rules, the road on which the host vehicle travels lies to the right of the second region; if there is a road (when the separator is a green belt) or a lane (when the separator is a double solid line) to the left of the second region, it is an opposite-direction road or lane, and if there is no road or lane on the left, the road on which the host vehicle travels is a one-way road. Thus, the second region actually forms the left edge of the road on which the host vehicle is located, and a point in the second region can be regarded as a "boundary point" between that road and the opposite road, opposite lane or other facilities. In the embodiment of the present application, this "boundary point" may preferably be used as the left starting point of the road on which the host vehicle is located, and a transverse line may be drawn from it across the road surface to the right of the driving direction; this transverse line intersects each lane line on the road surface.
A transverse line can then be drawn through the demarcation point across the first region (corresponding to the road surface area). This line intersects each lane line on the road surface, yielding a number of intersection points equal to the number of lane lines, and the number of lanes on the road surface on which the host vehicle travels can be determined from the number of intersection points or lane lines. With the embodiment of the present application, the demarcation between the road surface of the host vehicle and other roads or roads in other directions can be accurately distinguished, so the resulting lane number is accurate and reliable. Applying the embodiment to navigation products can provide them with accurate lane number information; related functions such as avoiding small roads can be implemented on this basis, diversified choices can be offered to users such as novice drivers, and congestion-prone or hard-to-drive road sections with few lanes can be avoided as much as possible, which has important application value.
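For illustration only, the following is a minimal Python sketch of steps S101-S104. The helper functions segment_image, find_demarcation_point and intersect_lane_lines are hypothetical names, not part of the patent; assumption-laden sketches of each appear in the later sections, and taken together the sketches compose into one pipeline.

```python
import cv2

def count_lanes(image_path):
    """Minimal sketch of S101-S104: count the lanes visible in one road image.

    segment_image, find_demarcation_point and intersect_lane_lines are
    hypothetical helpers sketched in the following sections.
    """
    # S101: acquire the target image (e.g. a driving-recorder frame)
    image = cv2.imread(image_path)

    # S102: preprocess -> first region (road surface), second region (separator),
    # plus a lane-line mask used for the intersection step
    road_mask, separator_mask, lane_line_mask = segment_image(image)

    # S103: demarcation point in the second region, then a transverse line
    # across the first region; collect its intersections with the lane lines
    cx, cy = find_demarcation_point(separator_mask)
    intersections = intersect_lane_lines(lane_line_mask, row=cy, start_col=cx)

    # S104: N intersection points -> N - 1 lanes (refined in the modes below)
    n = len(intersections)
    return max(n - 1, 0)
```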
Various specific implementations of the embodiments of the present application are described in detail below with respect to various implementations.
Mode one:
In the embodiment of the present application, optionally, the number of intersections between the transverse line drawn through the demarcation point across the first region and the lane lines of the road is denoted N. N lane lines form N-1 lanes, so the number of lanes on the road where the vehicle is located is N-1. N-1 is the actual number of lanes and can reflect the profile of the road to some extent; for example, a road with many lanes can be considered wide and large, and a road with few lanes narrow and small. This can provide a filtering function for a navigation product, for example filtering out small roads during path planning to avoid congestion and difficult passage, and can also provide a basis for voice prompts such as 'narrow road section ahead', thereby improving the user experience.
Mode two:
In an embodiment of the present application, optionally, after obtaining the intersection points of the first transverse line with each lane line on the road surface, the following processing is further performed: for the obtained intersection points, the distance between every two adjacent intersection points is calculated as the width of the corresponding lane; and, when determining the number of lanes on the road surface, the number of lanes is determined according to the number of intersection points and the width of each lane.
On the basis of the first mode, the second mode also considers the influence of lane width on the total lane count. If the rightmost side of the road is a non-motor-vehicle lane, its width is usually small and it is not used by motor vehicles while driving. Since navigation products mostly navigate motor vehicles, the influence of the non-motor-vehicle lane can be ignored; therefore, in the second mode, the non-motor-vehicle lane is not counted in the total number of lanes, which more accurately reflects the relationship between the total number of motor-vehicle lanes and the road profile and better fits the actual scene.
As an example, take the number of intersection points as N and the driving direction of the road as the front: if the lane widths on the road surface meet a first condition, the number of lanes on the road surface is N-2; if they do not meet the first condition, the number of lanes on the road surface is N-1. The first condition is that the width of the rightmost lane on the road surface is smaller than a preset value.
That is, if the width of the rightmost lane on the road surface is less than the preset value, the number of lanes on the road surface is N-2. The preset value may be set to 1/2 of the average width of the lanes on the road; a rightmost lane narrower than this value can be regarded as a non-motor-vehicle lane and excluded, so the total number of motor-vehicle lanes is determined to be N-2.
Conversely, if the width of the rightmost lane is greater than or equal to the preset value, it is regarded as a motor-vehicle lane, and the number of lanes on the road surface remains N-1.
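A minimal sketch of this width check, assuming the intersection abscissas are available in left-to-right order and taking the preset value as half the mean lane width as suggested above (the function name and labels are illustrative, not the patented implementation):

```python
def lanes_by_width(intersection_xs):
    """Mode two: lane count from the intersection abscissas.

    intersection_xs: x-coordinates of the N intersection points, ordered
    left-to-right along the transverse line (driving direction = front).
    Returns the number of motor-vehicle lanes.
    """
    xs = sorted(intersection_xs)
    n = len(xs)
    if n < 2:
        return 0
    widths = [b - a for a, b in zip(xs, xs[1:])]   # one width per lane
    preset = 0.5 * (sum(widths) / len(widths))     # half the mean lane width
    rightmost_width = widths[-1]                   # rightmost lane in the image
    # First condition met: rightmost lane too narrow -> treat as non-motor lane
    return n - 2 if rightmost_width < preset else n - 1
```

For example, with abscissas [120, 340, 560, 640] the lane widths are [220, 220, 80]; the preset value is about 87, so the 80-pixel rightmost lane is excluded and the function returns 2.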
Mode three:
in an embodiment of the present application, optionally, after obtaining the intersection point of the first transverse line and each lane line on the road surface, the following processing is further performed: identifying a category of each lane line on the road surface; and when the number of lanes on the road surface is determined, determining the number of lanes on the road surface according to the number of the intersection points and the category of each lane line.
As an example, taking the number of intersection points as N as an example and taking the driving direction of the road as the front, if the type of the N lane lines on the road surface meets a second condition, the number of lanes on the road surface is N-2, and if the type of the N lane lines on the road surface does not meet the second condition, the number of lanes on the road surface is N-1; the second condition is that two lane lines on the rightmost side on the road surface are all solid lines, and the remaining N-2 lane lines are all broken lines.
It can be seen that, on the basis of the first mode, the third mode also considers the influence of the lane-line category on the total lane count. The advantage of this processing is that if the two rightmost lane lines are both solid lines and the other N-2 lane lines are all dashed lines, the rightmost lane can be judged directly to be a non-motor-vehicle lane; in that case the lane width (as in the second mode) need not be considered, the total number of motor-vehicle lanes is directly N-2, and the processing logic is simplified to a certain extent.
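A sketch of this category check, assuming the lane lines have already been classified and that the category labels are simply 'solid' and 'dashed' (the labels are placeholders for whatever a real classification model outputs):

```python
def lanes_by_category(line_categories):
    """Mode three: lane count from the lane-line categories.

    line_categories: categories of the N lane lines ordered left-to-right,
    e.g. ['dashed', 'dashed', 'solid', 'solid'].
    """
    n = len(line_categories)
    if n < 2:
        return max(n - 1, 0)
    second_condition = (
        line_categories[-1] == 'solid'
        and line_categories[-2] == 'solid'
        and all(c == 'dashed' for c in line_categories[:-2])
    )
    # Second condition met: rightmost lane is a non-motor-vehicle lane
    return n - 2 if second_condition else n - 1
```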
The category of a lane line may be identified using a suitable lane line classification model, such as a deep learning based lane line classification model; the embodiments of the present application place no particular limitation on this.
In an embodiment of the present application, the preprocessing of the target image may be a semantic segmentation process performed on the target image to obtain a semantic segmentation map, where the semantic segmentation map contains a first region corresponding to the road surface and a second region corresponding to the road separator. In application, this can be accomplished with a suitable deep-learning image semantic segmentation model (such as an FCN/DeepLab-series model or a PSPNet model); no limitation is placed here on which model is used.
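A sketch of this preprocessing step, assuming a road-scene segmentation model is already available as a PyTorch module that returns raw logits of shape 1 x C x H x W (a torchvision-style model that returns a dict would need model(x)['out'] instead). The class indices below are assumptions; a real FCN/DeepLab/PSPNet model trained on road imagery would define its own.

```python
import numpy as np
import torch

# Assumed class indices of a road-scene segmentation model (illustrative only)
ROAD_ID, LANE_LINE_ID, GREENBELT_ID, GUARDRAIL_ID, FENCE_ID = 1, 2, 3, 4, 5

def segment_image(image_bgr, model, device="cpu"):
    """Run semantic segmentation and split the label map into element masks."""
    rgb = image_bgr[:, :, ::-1].copy()                     # BGR -> RGB
    x = torch.from_numpy(rgb).float() / 255.0
    x = x.permute(2, 0, 1).unsqueeze(0).to(device)         # 1 x 3 x H x W
    with torch.no_grad():
        logits = model(x)                                   # assumed 1 x C x H x W
    labels = logits.argmax(dim=1)[0].cpu().numpy()          # H x W label map

    road_mask = (labels == ROAD_ID).astype(np.uint8)        # first region
    lane_line_mask = (labels == LANE_LINE_ID).astype(np.uint8)
    # Second region: separators the vehicle cannot cross laterally
    separator_mask = np.isin(
        labels, [GREENBELT_ID, GUARDRAIL_ID, FENCE_ID]
    ).astype(np.uint8)
    return road_mask, separator_mask, lane_line_mask
```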
In any of the above modes, the aforementioned "demarcation point" can be used as the left starting point of the road on which the host vehicle is located, and a transverse line is drawn across the road surface to the right of the driving direction; this line intersects each lane line on the road surface to yield the intersection point sequence. Further, the coordinates of each intersection point can be determined in application from navigation positioning data, high-precision map data and the like, and on this basis the distance between every two adjacent intersection points can be calculated as a lane width, which can be used in the second mode to judge non-motor-vehicle lanes.
In one embodiment of the present application, the "demarcation point" may be any point in the second area (the area where the road barrier is located) from which a cross point sequence is obtained by drawing a transverse line to the right.
In another embodiment of the present application, the "demarcation point" may be the geometric center of the second region, from which a transverse line is drawn to the right to obtain the intersection point sequence.
In yet another embodiment of the present application, the "demarcation point" may be the centroid (center of mass) of the second region, from which a transverse line is drawn to the right to obtain the intersection point sequence. Taking the centroid of the second region as the "demarcation point" gives better results when the second region has an irregular contour.
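A minimal sketch of the centroid variant, computed directly from the binary mask of the second region (function name as assumed in the earlier pipeline sketch); the geometric-center variant would use the midpoint of the mask's bounding box instead:

```python
import numpy as np

def find_demarcation_point(separator_mask):
    """Demarcation point as the centroid of the second region (separator mask)."""
    ys, xs = np.nonzero(separator_mask)      # pixel coordinates of the second region
    if len(xs) == 0:
        return None                          # no separator visible in this frame
    # Geometric-center alternative:
    #   ((xs.min() + xs.max()) // 2, (ys.min() + ys.max()) // 2)
    return int(xs.mean()), int(ys.mean())    # (cx, cy) centroid
```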
Specific processes of the embodiments of the present application are described below by way of specific examples.
Fig. 2 shows a specific flowchart of the lane number recognition process of one embodiment of the present application, which is described in detail below.
First, element region extraction: an image semantic segmentation model (such as an FCN/DeepLab-series/PSPNet model) is used to process the original picture to be processed (see fig. 3) and generate a semantic segmentation map, as shown in fig. 4. From the segmentation map, single elements such as lane lines, fences, guardrails and green belts are extracted, and an image of each region is generated.
Secondly, image scene construction: (1) for a hard-isolation scene, the connected regions of separation elements that vehicles cannot directly cross, such as fences, guardrails and green belts, are processed, and the demarcation center region (second region) between the right-hand and left-hand lane lines is extracted with algorithms such as main-line projection; (2) for a lane-line isolation scene, Hough line detection and morphological image processing are applied to the lane-line region image to obtain the lane-line contours, each lane line is cropped along its contour, rotated and resized to a fixed size, and then fed into a lane-line classification model (such as an AlexNet model); if the lane-line category is a double line or a quadruple line, it is likewise the demarcation center region (second region) separating the left and right roadways. Referring to fig. 5, the triangular frame on the left side of the road surface encloses the green belt, which is the demarcation center region, i.e., the aforementioned second region; to its left is the opposite roadway.
Thirdly, lane-related processing: a straight line is drawn transversely to the right from the center point or centroid of the demarcation center region and intersects each lane line to obtain the intersection point sequence. For clarity of display, this transverse line is shown in fig. 4: its starting point lies in the demarcation center region, and it intersects three lane lines to the right, forming 3-1=2 lanes. The category of the rightmost lane line is obtained from the lane-line classification result, and the width of each lane is obtained from the intersection point sequence by computing the difference between the abscissa of each intersection point and that of its neighbor.
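A sketch of this step under the assumption that the lane lines are available as a binary mask from the segmentation stage: one image row is scanned rightwards from the demarcation point, consecutive runs of lane-line pixels are merged into a single intersection point each, and the lane widths are the differences between adjacent abscissas. The function names match the earlier pipeline sketch and are assumptions, not the patented code.

```python
import numpy as np

def intersect_lane_lines(lane_line_mask, row, start_col, min_gap=5):
    """Intersections of the transverse line (image row `row`) with the lane lines.

    Scans row `row` of the binary lane-line mask from `start_col` to the right,
    merging runs of lane-line pixels closer than `min_gap` pixels into a single
    intersection point. Returns the intersection abscissas, left to right.
    """
    line = lane_line_mask[row, start_col:]
    cols = np.nonzero(line)[0] + start_col          # columns hit by lane-line pixels
    if len(cols) == 0:
        return []
    intersections, run_start, prev = [], cols[0], cols[0]
    for c in cols[1:]:
        if c - prev > min_gap:                      # gap -> previous run was one lane line
            intersections.append((run_start + prev) // 2)
            run_start = c
        prev = c
    intersections.append((run_start + prev) // 2)
    return intersections

def lane_widths(intersections):
    """Lane widths as differences between adjacent intersection abscissas."""
    return [b - a for a, b in zip(intersections, intersections[1:])]
```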
Then, non-motor-vehicle-lane judgment: the judgment uses the recorded post-processing information for the N=3 lane lines. The precondition for a non-motor-vehicle lane is that the two lane lines bounding it are solid lines; if, in addition, the preceding N-2 lane lines are all dashed lines, the rightmost lane is directly judged to be a non-motor-vehicle lane. Otherwise, the width of the last lane region is examined: if it is obviously smaller than the other lanes, it is judged to be a non-motor-vehicle lane; if it does not differ much from the other lanes, it is judged to be a motor-vehicle lane.
Finally, lane-number judgment: determining the number of lanes can be simplified to judging whether the rightmost lane region is a non-motor-vehicle lane. If it is a non-motor-vehicle lane, the total number of lanes is N-2; if it is a motor-vehicle lane, the total number of lanes is N-1.
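Combining the two checks in the order just described (category precondition first, then width), a hedged sketch of the final decision; it reuses lane_widths from the previous sketch and the assumed 'solid'/'dashed' labels:

```python
def total_lane_count(intersections, line_categories):
    """Combine the category and width checks described above.

    intersections: abscissas of the N intersection points, left to right.
    line_categories: categories of the N lane lines, left to right
    (labels 'solid' / 'dashed' are assumed placeholders).
    """
    n = len(intersections)
    if n < 2:
        return 0
    # Precondition: the two lane lines bounding the rightmost lane are solid
    bounded_by_solid = (line_categories[-1] == 'solid'
                        and line_categories[-2] == 'solid')
    if bounded_by_solid and all(c == 'dashed' for c in line_categories[:-2]):
        return n - 2                            # rightmost lane is non-motor
    widths = lane_widths(intersections)         # from the sketch above
    if bounded_by_solid and widths[-1] < 0.5 * (sum(widths) / len(widths)):
        return n - 2                            # obviously narrower than the rest
    return n - 1
```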
In application, the lane number recognition result can be used to provide personalized navigation and prompts. For example, after the lane number data is obtained, reminders can be given during navigation path planning in an electronic map to help the user make decisions and choose a more reasonable route. Specifically, there are two different ways:
(1) referring to fig. 6, an 'avoid small roads' option may be added to the path planning preference options;
(2) referring to fig. 7, a map bubble alert can be added to the navigation radar, and a voice warning of 'narrow road section ahead' can be broadcast in advance during navigation, thereby improving the product's navigation experience.
The specific arrangements and implementations of the embodiments of the present application have been described above from a variety of angles by way of various embodiments. Corresponding to the processing method of at least one embodiment described above, the embodiment of the present application further provides a lane number recognition device 100, referring to fig. 8, which includes:
an image acquisition module 110 for acquiring a target image, wherein the target image comprises a road pavement and a road isolator, and the road isolator is used for limiting the transverse crossing of a vehicle;
the image preprocessing module 120 is configured to preprocess a target image, and obtain a first area corresponding to the road surface and a second area corresponding to the road spacer in the target image;
the intersection determining module 130 is configured to determine a first demarcation point according to the second area, and form a first transverse line across the first area through the first demarcation point, so as to obtain an intersection of the first transverse line and each lane line on the road surface;
the lane number determining module 140 is configured to determine the number of lanes on the road surface according to the number of intersections.
The functions of each module in each apparatus of the embodiments of the present application may refer to the processing correspondingly described in the foregoing method embodiments, which is not described herein again.
According to embodiments of the present application, there is also provided an electronic device, a readable storage medium and a computer program product.
As shown in fig. 9, a block diagram of an electronic device according to a method of recognizing the number of lanes according to an embodiment of the present application. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital processing, cellular telephones, smartphones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be exemplary only, and are not meant to limit implementations of the application described and/or claimed herein.
As shown in fig. 9, the electronic device includes: one or more processors 1001, a memory 1002, and interfaces for connecting the components, including a high-speed interface and a low-speed interface. The components are interconnected by different buses and may be mounted on a common motherboard or in other manners as desired. The processor may process instructions executed within the electronic device, including instructions stored in or on the memory to display graphical information of a graphical user interface (Graphical User Interface, GUI) on an external input/output device, such as a display device coupled to the interface. In other embodiments, multiple processors and/or multiple buses may be used, if desired, along with multiple memories. Also, multiple electronic devices may be connected, each providing a portion of the necessary operations (e.g., as a server array, a set of blade servers, or a multiprocessor system). One processor 1001 is illustrated in fig. 9.
Memory 1002 is a non-transitory computer-readable storage medium provided herein. The memory stores instructions executable by the at least one processor to cause the at least one processor to perform the lane number identification method provided herein. The non-transitory computer-readable storage medium of the present application stores computer instructions for causing a computer to execute the method of recognizing the number of lanes provided by the present application.
The memory 1002 is used as a non-transitory computer readable storage medium, and may be used to store a non-transitory software program, a non-transitory computer executable program, and modules, such as program instructions/modules (e.g., the modules shown in fig. 8) corresponding to the lane number identification method in the embodiment of the present application. The processor 1001 executes various functional applications of the server and data processing, that is, implements the lane number recognition method in the above-described method embodiment by running a non-transitory software program, instructions, and modules stored in the memory 1002.
Memory 1002 may include a storage program area that may store an operating system, at least one application program required for functionality, and a storage data area; the storage data area may store data created from the analysis of search results, the use of processing electronics, and the like. In addition, the memory 1002 may include high-speed random access memory, and may also include non-transitory memory, such as at least one magnetic disk storage device, flash memory device, or other non-transitory solid-state storage device. In some embodiments, memory 1002 optionally includes memory remotely located relative to processor 1001, which may be connected to analysis processing electronics of the search results via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The electronic device corresponding to the lane number identification method in the embodiment of the present application may further include: an input device 1003 and an output device 1004. The processor 1001, the memory 1002, the input device 1003, and the output device 1004 may be connected by a bus or other means, which is exemplified in the embodiment of fig. 9 of the present application.
The input device 1003 may receive input numeric or character information and generate key signal inputs related to user settings and function control of the search result analysis processing electronics, such as a touch screen, keypad, mouse, trackpad, touch pad, pointer stick, one or more mouse buttons, track ball, joystick, etc. input devices. The output means 1004 may include a display device, auxiliary lighting means (e.g., LEDs), tactile feedback means (e.g., vibration motors), and the like. The display device may include, but is not limited to, a liquid crystal display (Liquid Crystal Display, LCD), a light emitting diode (Light Emitting Diode, LED) display, and a plasma display. In some implementations, the display device may be a touch screen.
Various implementations of the systems and techniques described here can be implemented in digital electronic circuitry, integrated circuitry, application specific integrated circuits (Application Specific Integrated Circuits, ASIC), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implemented in one or more computer programs, the one or more computer programs may be executed and/or interpreted on a programmable system including at least one programmable processor, which may be a special purpose or general-purpose programmable processor, that may receive data and instructions from, and transmit data and instructions to, a storage system, at least one input device, and at least one output device.
These computing programs (also referred to as programs, software applications, or code) include machine instructions for a programmable processor, and may be implemented in a high-level procedural and/or object-oriented programming language, and/or in assembly/machine language. As used herein, the terms "machine-readable medium" and "computer-readable medium" refer to any computer program product, apparatus, and/or device (e.g., magnetic discs, optical disks, memory, programmable logic devices (programmable logic device, PLDs)) used to provide machine instructions and/or data to a programmable processor, including a machine-readable medium that receives machine instructions as a machine-readable signal. The term "machine-readable signal" refers to any signal used to provide machine instructions and/or data to a programmable processor.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., CRT (Cathode Ray Tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and pointing device (e.g., a mouse or trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user may be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic input, speech input, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a background component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such background, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local area network (Local Area Network, LAN), wide area network (Wide Area Network, WAN) and the internet.
The computer system may include a client and a server. The client and server are typically remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other.
It should be appreciated that various forms of the flows shown above may be used to reorder, add, or delete steps. For example, the steps described in the present application may be performed in parallel, sequentially, or in a different order, provided that the desired results of the technical solutions disclosed in the present application can be achieved, and are not limited herein.
The above embodiments do not limit the scope of the application. It will be apparent to those skilled in the art that various modifications, combinations, sub-combinations and alternatives are possible, depending on design requirements and other factors. Any modifications, equivalent substitutions and improvements made within the spirit and principles of the present application are intended to be included within the scope of the present application.

Claims (18)

1. A method of identifying a number of lanes, comprising:
acquiring a target image, wherein the target image comprises a road pavement and a road isolator, and the road isolator is used for limiting the transverse crossing of a vehicle;
preprocessing a target image to obtain a first area corresponding to the road surface and a second area corresponding to the road spacer in the target image;
determining a first demarcation point according to the second area, and forming a first transverse line across the first area through the first demarcation point to obtain an intersection point of the first transverse line and each lane line on the road surface;
for the obtained multiple intersection points, calculating the distance between every two adjacent intersection points as the width of each corresponding lane;
determining the number of lanes on the road surface according to the number of the intersection points;
the determining the number of lanes on the road surface according to the number of the intersection points comprises the following steps: determining the number of lanes on the road surface according to the number of the intersection points and the width of each lane; wherein the number of the intersection points is N; taking the driving direction of a road as the front, if the width of the lanes on the road surface meets a first condition, the number of the lanes on the road surface is N-2, and if the width of the lanes on the road surface does not meet the first condition, the number of the lanes on the road surface is N-1; the first condition is that the width of the rightmost lane on the road surface is smaller than a preset value.
2. The method of claim 1, wherein,
after obtaining the intersection of the first transverse line with each lane line on the road surface, the method further comprises: identifying a category of each lane line on the road surface;
the determining the number of lanes on the road surface according to the number of the intersection points comprises the following steps: and determining the number of lanes on the road surface according to the number of the intersection points and the category of each lane line.
3. The method of claim 2, wherein,
the number of the intersection points is N; taking the driving direction of the road as the front, if the type of N lane lines on the road surface meets a second condition, the number of lanes on the road surface is N-2, and if the type of N lane lines on the road surface does not meet the second condition, the number of lanes on the road surface is N-1; the second condition is that two lane lines on the rightmost side on the road surface are all solid lines, and the remaining N-2 lane lines are all broken lines.
4. A method according to any of claims 1-3, the forming a first transverse line across the first region by the first demarcation point comprising:
and forming a first transverse line by taking the first demarcation point as a starting point and crossing the road surface to the right of the road driving direction, wherein the first transverse line intersects with each lane line on the road surface.
5. The method according to any one of claim 1 to 3, wherein,
the first demarcation point is any one point in the second area;
or,
the first demarcation point is the geometric center of the second region;
or,
the first demarcation point is the centroid of the second region.
6. A method according to any one of claims 1-3, said preprocessing the target image comprising:
and carrying out semantic segmentation processing on the target image to obtain a semantic segmentation map of the target image, wherein the semantic segmentation map comprises a first region corresponding to the road surface and a second region corresponding to the road spacer.
7. The method according to any one of claim 1 to 3, wherein,
the road isolator comprises at least one of: a separation strip, a green belt, a guardrail, and the center line of the road surface of a bidirectional two-lane road.
8. The method according to any one of claim 1 to 3, wherein,
the number of lanes on the road surface is used to assist road navigation.
9. An apparatus for recognizing the number of lanes, comprising:
the image acquisition module is used for acquiring a target image, wherein the target image comprises a road pavement and a road isolator, and the road isolator is used for limiting the transverse crossing of a vehicle;
the image preprocessing module is used for preprocessing a target image to obtain a first area corresponding to the road surface and a second area corresponding to the road spacer in the target image;
the intersection point determining module is used for determining a first demarcation point according to the second area, and forming a first transverse line across the first area through the first demarcation point to obtain intersection points of the first transverse line and each lane line on the road surface;
the calculating sub-module is used for calculating the distance between every two adjacent intersection points for the obtained intersection points to be used as the width of each corresponding lane;
the lane number determining module is used for determining the number of lanes on the road surface according to the number of the intersection points;
the lane number determining module is specifically configured to determine the number of lanes on the road surface according to the number of the intersections and the width of each lane; wherein the number of the intersection points is N; taking the driving direction of a road as the front, if the width of the lanes on the road surface meets a first condition, the number of the lanes on the road surface is N-2, and if the width of the lanes on the road surface does not meet the first condition, the number of the lanes on the road surface is N-1; the first condition is that the width of the rightmost lane on the road surface is smaller than a preset value.
10. The apparatus of claim 9, wherein,
after obtaining the intersection of the first transverse line with each lane line on the road surface, the apparatus further comprises: the lane line identification sub-module is used for identifying the category of each lane line on the road surface;
the lane number determining module determines the number of lanes on the road surface according to the number of the intersection points and the category of each lane line.
11. The apparatus of claim 10, wherein,
the number of the intersection points is N; taking the driving direction of the road as the front, if the type of N lane lines on the road surface meets a second condition, the number of lanes on the road surface is N-2, and if the type of N lane lines on the road surface does not meet the second condition, the number of lanes on the road surface is N-1; the second condition is that two lane lines on the rightmost side on the road surface are all solid lines, and the remaining N-2 lane lines are all broken lines.
12. The apparatus of any of claims 9-11, the intersection determination module comprising:
and the transverse line forming sub-module is used for forming a first transverse line by taking the first demarcation point as a starting point and crossing the road surface to the right of the road driving direction, wherein the first transverse line is intersected with each lane line on the road surface.
13. The device according to any one of claims 9-11, wherein,
the first demarcation point is any one point in the second area;
or,
the first demarcation point is the geometric center of the second region;
or,
the first demarcation point is the centroid of the second region.
14. The apparatus of any of claims 9-11, the image preprocessing module comprising:
the semantic segmentation processing module is used for carrying out semantic segmentation processing on the target image to obtain a semantic segmentation graph of the target image, wherein the semantic segmentation graph comprises a first area corresponding to the road surface and a second area corresponding to the road isolator.
15. The device according to any one of claims 9-11, wherein,
the road isolator comprises at least one of: a separation strip, a green belt, a guardrail, and the center line of the road surface of a bidirectional two-lane road.
16. The device according to any one of claims 9-11, wherein,
the number of lanes on the road surface is used to assist road navigation.
17. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-8.
18. A non-transitory computer readable storage medium storing computer instructions for causing a computer to perform the method of any one of claims 1-8.
CN202010700414.0A 2020-07-20 2020-07-20 Lane number identification method, device, equipment and storage medium Active CN111814724B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010700414.0A CN111814724B (en) 2020-07-20 2020-07-20 Lane number identification method, device, equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010700414.0A CN111814724B (en) 2020-07-20 2020-07-20 Lane number identification method, device, equipment and storage medium

Publications (2)

Publication Number Publication Date
CN111814724A CN111814724A (en) 2020-10-23
CN111814724B true CN111814724B (en) 2023-07-04

Family

ID=72865029

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010700414.0A Active CN111814724B (en) 2020-07-20 2020-07-20 Lane number identification method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN111814724B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113420673B (en) * 2021-06-24 2022-08-02 苏州科达科技股份有限公司 Garbage classification method, device, equipment and storage medium
CN114332140B (en) * 2022-03-16 2022-07-12 北京文安智能技术股份有限公司 Method for processing traffic road scene image

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107798855A (en) * 2016-09-07 2018-03-13 高德软件有限公司 A kind of lane width computational methods and device
CN108664016A (en) * 2017-03-31 2018-10-16 腾讯科技(深圳)有限公司 Determine the method and device of lane center
CN109271857A (en) * 2018-08-10 2019-01-25 广州小鹏汽车科技有限公司 A kind of puppet lane line elimination method and device
CN110633342A (en) * 2019-07-29 2019-12-31 武汉光庭信息技术股份有限公司 Lane topology network generation method
CN110704560A (en) * 2019-09-17 2020-01-17 武汉中海庭数据技术有限公司 Method and device for structuring lane line group based on road level topology

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10102434B2 (en) * 2015-12-22 2018-10-16 Omnivision Technologies, Inc. Lane detection system and method
KR101864066B1 (en) * 2017-01-11 2018-07-05 숭실대학교산학협력단 Lane marking detection device, Lane departure determination device, Lane marking detection method and Lane departure determination method

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107798855A (en) * 2016-09-07 2018-03-13 高德软件有限公司 A kind of lane width computational methods and device
CN108664016A (en) * 2017-03-31 2018-10-16 腾讯科技(深圳)有限公司 Determine the method and device of lane center
CN109271857A (en) * 2018-08-10 2019-01-25 广州小鹏汽车科技有限公司 A kind of puppet lane line elimination method and device
WO2020029706A1 (en) * 2018-08-10 2020-02-13 广州小鹏汽车科技有限公司 Dummy lane line elimination method and apparatus
CN110633342A (en) * 2019-07-29 2019-12-31 武汉光庭信息技术股份有限公司 Lane topology network generation method
CN110704560A (en) * 2019-09-17 2020-01-17 武汉中海庭数据技术有限公司 Method and device for structuring lane line group based on road level topology

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
"Curve lane detection based on the binary particle swarm optimization";Shoutao Li 等;《IEEE》;全文 *
Night-time lane line detection based on the Canny operator and Hough transform; 李亚娣; 黄海波; 李相鹏; 陈立国; Science Technology and Engineering (31); full text
Multi-lane recognition method for the road ahead of a vehicle in complex environments; 张润生; 黄小云; 马雷; Transactions of the Chinese Society for Agricultural Machinery (05); full text

Also Published As

Publication number Publication date
CN111814724A (en) 2020-10-23

Similar Documents

Publication Publication Date Title
CN111595358B (en) Navigation data processing method, route guidance method, device and storage medium
CN111680362B (en) Automatic driving simulation scene acquisition method, device, equipment and storage medium
CN112581763A (en) Method, device, equipment and storage medium for detecting road event
CN113033030A (en) Congestion simulation method and system based on real road scene
CN111814724B (en) Lane number identification method, device, equipment and storage medium
CN113421432A (en) Traffic restriction information detection method and device, electronic equipment and storage medium
CN111951144A (en) Method and device for determining violation road section and computer readable storage medium
CN111739344A (en) Early warning method and device and electronic equipment
CN110675644A (en) Method and device for identifying road traffic lights, electronic equipment and storage medium
CN112013865B (en) Method, system, electronic device and medium for determining traffic gate
CN112885130B (en) Method and device for presenting road information
CN111540010B (en) Road monitoring method and device, electronic equipment and storage medium
CN114037966A (en) High-precision map feature extraction method, device, medium and electronic equipment
CN113989777A (en) Method, device and equipment for identifying speed limit sign and lane position of high-precision map
CN111667706A (en) Lane-level road surface condition recognition method, road condition prompting method and device
CN113283272B (en) Real-time image information prompting method and device for road congestion and electronic equipment
CN114852079A (en) Behavior decision information generation method and device, electronic equipment and storage medium
CN112699773B (en) Traffic light identification method and device and electronic equipment
CN112800153A (en) Method, device and equipment for mining isolation zone information and computer storage medium
US20230159052A1 (en) Method for processing behavior data, method for controlling autonomous vehicle, and autonomous vehicle
CN112418081A (en) Method and system for air-ground joint rapid investigation of traffic accidents
CN114694370A (en) Method, device, computing equipment and storage medium for displaying intersection traffic flow
CN114998863B (en) Target road identification method, device, electronic equipment and storage medium
CN116358584A (en) Automatic driving vehicle path planning method, device, equipment and medium
CN115112138A (en) Trajectory planning information generation method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant