CN115063765A - Road side boundary determining method, device, equipment and storage medium - Google Patents

Road side boundary determining method, device, equipment and storage medium Download PDF

Info

Publication number
CN115063765A
Authority
CN
China
Prior art keywords
road
boundary
determining
contour
boundary line
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210639480.0A
Other languages
Chinese (zh)
Inventor
赵松
吴彬
钟开
杨建忠
张通滨
卢振
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd filed Critical Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN202210639480.0A
Publication of CN115063765A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/588 Recognition of the road, e.g. of lane markings; Recognition of the vehicle driving pattern in relation to the road
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/26 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/46 Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)
  • Traffic Control Systems (AREA)

Abstract

The present disclosure provides a road side boundary determining method, apparatus, device, and storage medium, and relates to the field of artificial intelligence, in particular to the fields of automatic driving and deep learning. The specific implementation scheme is as follows: acquiring an environment image in front of the vehicle, captured by a collection vehicle during driving; performing semantic segmentation on the environment image to determine a road contour and a background contour in the environment image; determining at least two key points on the road boundary from the road contour; and determining the boundary line of the road according to the at least two key points on the road boundary. With this technical solution, the road boundary line can be determined quickly and accurately.

Description

Road side boundary determining method, device, equipment and storage medium
Technical Field
The present disclosure relates to the field of artificial intelligence technologies, and in particular, to a method, an apparatus, a device, and a storage medium for determining a road boundary.
Background
Automatic driving is a current research hotspot in computer vision. In automatic driving, vehicle-mounted sensors collect data in real time and the road boundary needs to be identified; the boundary line helps the vehicle perceive the position of the road boundary and assists automatic-driving decisions, so that dangerous accidents are avoided. Therefore, how to determine the road boundary line quickly and accurately is important for automatic driving.
Disclosure of Invention
The present disclosure provides a road side boundary determining method, apparatus, device and storage medium.
According to an aspect of the present disclosure, there is provided a road side boundary determining method, including:
acquiring an environment image in front of the vehicle, captured by a collection vehicle during driving;
performing semantic segmentation on the environment image to determine a road contour and a background contour in the environment image;
determining at least two key points on the road boundary from the road contour;
and determining a boundary line of the road according to the at least two key points on the road boundary.
According to an aspect of the present disclosure, there is provided a road side boundary determining apparatus including:
the environment image acquisition module is used for acquiring an environment image in front of running acquired by the acquisition vehicle in the running process;
the contour determination module is used for performing semantic segmentation on the environment image so as to determine a road contour and a background contour in the environment image;
the key point determining module is used for determining at least two key points on the road boundary from the road contour;
and the boundary line determining module is used for determining the boundary line of the road according to at least two key points on the road boundary.
According to another aspect of the present disclosure, there is provided an electronic device including:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform a method of determining a road boundary line according to any one of the embodiments of the present disclosure.
According to another aspect of the present disclosure, there is provided a non-transitory computer-readable storage medium storing computer instructions for causing a computer to execute a road boundary line determining method according to any one of the embodiments of the present disclosure.
According to another aspect of the present disclosure, there is provided a computer program product comprising a computer program which, when executed by a processor, implements a road boundary line determination method according to any one of the embodiments of the present disclosure.
According to the technology disclosed by the invention, the road boundary line can be determined quickly and accurately.
It should be understood that the statements in this section do not necessarily identify key or critical features of the embodiments of the present disclosure, nor do they limit the scope of the present disclosure. Other features of the present disclosure will become apparent from the following description.
Drawings
The drawings are included to provide a better understanding of the present solution and are not to be construed as limiting the present disclosure. Wherein:
fig. 1 is a flowchart of a road side boundary determining method provided according to an embodiment of the present disclosure;
fig. 2A is a flowchart of another road boundary line determining method provided according to an embodiment of the present disclosure;
FIG. 2B is a schematic diagram of an environment image and a contour image provided in accordance with an embodiment of the present disclosure;
fig. 3A is a flowchart of another road side boundary determining method provided according to an embodiment of the present disclosure;
FIG. 3B is a schematic diagram of a keypoint determination provided according to an embodiment of the present disclosure;
FIG. 3C is a graph illustrating the fitting effect of a boundary line according to an embodiment of the disclosure;
fig. 4 is a schematic structural diagram of a road side boundary determining apparatus provided in accordance with an embodiment of the present disclosure;
fig. 5 is a block diagram of an electronic device for implementing the road boundary line determining method of the embodiment of the present disclosure.
Detailed Description
Exemplary embodiments of the present disclosure are described below with reference to the accompanying drawings, in which various details of the embodiments of the disclosure are included to assist understanding, and which are to be considered as merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present disclosure. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
Fig. 1 is a flowchart of a road side boundary determining method according to an embodiment of the disclosure. The embodiment is suitable for determining the road boundary line, and is particularly suitable for determining the road boundary line in an automatic driving scene. The method may be performed by a road boundary line determining device, which may be implemented in software and/or hardware, and may be integrated in an electronic device carrying the road boundary line determining function, such as a vehicle controller. As shown in fig. 1, the road boundary line determining method of the present embodiment may include:
and S101, acquiring an environmental image in front of the running acquired by the acquisition vehicle in the running process.
In this embodiment, the collection vehicle may be an autonomous vehicle; optionally, an image acquisition device may be installed right above a front windshield of the vehicle, and is used for acquiring an environmental image in front of the vehicle during driving. The environment image is an image of the environment before the vehicle travels, and may be a color image or a grayscale image.
Specifically, the environmental image in front of the vehicle in the driving process can be acquired periodically in real time. For example, the environment image in front of the vehicle, which is acquired during the vehicle-mounted driving, may be acquired every set time period.
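As a rough illustration of this acquisition step (not part of the patent), the following Python sketch reads one forward-view frame from an assumed on-board camera every set time period; the camera index, the OpenCV capture interface, and the period value are all assumptions made only for the example.

```python
# Illustrative sketch only: grab one forward-view frame every `period_s` seconds.
# Camera index 0 and the OpenCV VideoCapture interface are assumptions; the
# patent does not specify how frames are obtained from the collection vehicle.
import time
import cv2

def capture_frames(period_s=0.5, camera_index=0):
    cap = cv2.VideoCapture(camera_index)
    try:
        while True:
            ok, frame = cap.read()    # BGR image of the scene ahead of the vehicle
            if ok:
                yield frame           # hand the frame on to semantic segmentation
            time.sleep(period_s)      # acquire every set time period
    finally:
        cap.release()
```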
S102, performing semantic segmentation on the environment image to determine a road contour and a background contour in the environment image.
In this embodiment, the road contour refers to the overall contour formed by the road and the moving objects on the road. The background contour refers to the overall contour formed by objects other than the road.
Optionally, semantic segmentation may be performed on the environment image based on a semantic segmentation model to obtain the road contour and the background contour in the environment image. The semantic segmentation model may be trained, based on a deep learning algorithm, on pre-labeled sample environment images; the sample environment images are labeled with two categories, road and background.
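For a concrete picture of this segmentation step, the following minimal Python/PyTorch sketch shows one way the two-class model could be applied to a single environment image; `seg_model`, the class index 1 for road, and the input normalization are assumptions, not details given in the patent.

```python
# Minimal sketch (assumptions noted above): run a trained two-class segmentation
# model on one environment image and turn its prediction into a binary road mask.
import numpy as np
import torch

def segment_road(seg_model, image_bgr):
    """image_bgr: HxWx3 uint8 array. Returns an HxW uint8 mask (1 = road, 0 = background)."""
    x = torch.from_numpy(image_bgr).float().permute(2, 0, 1).unsqueeze(0) / 255.0
    with torch.no_grad():
        logits = seg_model(x)                # assumed shape: 1 x 2 x H x W class scores
    pred = logits.argmax(dim=1).squeeze(0)   # H x W per-pixel class indices
    return (pred == 1).cpu().numpy().astype(np.uint8)
```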
S103, at least two key points on the road boundary are determined from the road contour.
In this embodiment, the key point is a point having a high probability of being located on a road boundary.
Specifically, at least two key points on the road boundary may be determined from the road contour based on a certain key point extraction rule. For example, all coordinate points of the road contour may be extracted first, and at least two key points may then be selected from these contour coordinate points.
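The following sketch illustrates one possible form of such an extraction rule: it collects all contour coordinate points of the binary road mask with OpenCV, from which candidate key points can later be selected. The function name and the choice of findContours parameters are illustrative assumptions.

```python
# Illustrative only: gather the contour coordinate points of the road mask.
import cv2
import numpy as np

def road_contour_points(road_mask):
    """road_mask: HxW uint8 (1 = road). Returns an (N, 2) array of (x, y) contour points."""
    contours, _ = cv2.findContours(road_mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    if not contours:
        return np.empty((0, 2), dtype=np.int32)
    largest = max(contours, key=cv2.contourArea)   # keep the main road region
    return largest.reshape(-1, 2)                  # each row is an (x, y) point
```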
And S104, determining a boundary line of the road according to at least two key points on the road boundary.
In this embodiment, the boundary line of the road is the position on the road where the boundary medium changes, where the medium may be a green belt, a curbstone, a guardrail, a hard barrier, or the like.
Optionally, the boundary line of the road may be determined according to at least two key points on the road boundary based on the boundary line extraction model.
According to the technical solution of this embodiment of the present disclosure, the environment image in front of the vehicle captured by the collection vehicle during driving is acquired, semantic segmentation is performed on the environment image to determine the road contour and the background contour in the environment image, at least two key points on the road boundary are determined from the road contour, and the boundary line of the road is determined according to the at least two key points on the road boundary. Compared with prior-art schemes that extract the road boundary line by relying on three-dimensional spatial information collected by a vehicle-mounted laser scanning system, the technical solution of the present disclosure determines the road boundary line by processing the environment images captured by the collection vehicle, which reduces the cost of boundary-line extraction, determines the road boundary line quickly and accurately, and supports identifying the road boundary line from the environment image both in real time and offline.
Fig. 2A is a flowchart of another road side boundary determining method provided according to an embodiment of the present disclosure. On the basis of the above embodiments, the present embodiment provides an alternative solution to further optimize "semantic segmentation is performed on the environment image to determine the road contour and the background contour in the environment image". As shown in fig. 2A, the road boundary line determining method of the present embodiment may include:
s201, acquiring an environmental image in front of the running acquired by the acquisition vehicle in the running process.
S202, performing semantic segmentation on the environment image to obtain an object contour of at least one class of objects.
Specifically, semantic segmentation can be performed on the environment image based on a semantic segmentation network model to obtain the object contour of at least one category of objects. It can be understood that, because the objects in the environment image have clearly different characteristics, the semantic segmentation network model can accurately learn the characteristics of each object, which improves precision and recall and facilitates the subsequent contour recombination, so that an accurate road contour is obtained.
The semantic segmentation network model may use different segmentation networks, such as the FCN series, the U-Net series, the DeepLab series, and other segmentation networks. Optionally, the semantic segmentation network model may be trained with object category labels, so that the network model is able to output the object contours in an image, clearly delineating each object and providing a basic identification for the boundary line extraction strategy.
Wherein the object categories may include at least one of: background, road, ground bus, ground car, bicycle, curbstone, guardrail, greenbelt, fence and collection car. In this embodiment, a small object such as a pedestrian on a road is directly regarded as a road.
And S203, merging the object outlines belonging to the road and the road moving object types according to the object types so as to distinguish the road outline and the background outline.
In this embodiment, the road moving object categories may be ground buses, ground automobiles, and bicycles.
Optionally, the object outlines belonging to the road and the road moving object category may be merged according to the object category based on a certain merging rule, so as to distinguish the road outline and the background outline. For example, in order to ensure the integrity and accuracy of the contour, the contour of the road, the ground bus, the ground automobile and the bicycle can be combined to generate a new contour as the road contour; the object contours of other objects are merged into a new contour as a background contour. For example, as shown in fig. 2B, the upper diagram is an environment image, and the lower diagram is a processed contour image including both a road and a background.
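A minimal sketch of this merge step is given below: per-pixel class ids produced by the fine-grained segmentation are folded into a single road-versus-background mask. The numeric class ids are assumptions chosen only for the example.

```python
# Sketch of the contour merge: road plus road-moving-object classes become "road",
# every other class becomes "background". Class ids below are assumed, not from the patent.
import numpy as np

ROAD_LIKE_IDS = [1, 2, 3, 4]   # e.g. road, ground bus, ground car, bicycle (assumed ids)

def merge_to_road_background(class_map):
    """class_map: HxW int array of per-pixel class ids. Returns HxW uint8 (1 = road, 0 = background)."""
    return np.isin(class_map, ROAD_LIKE_IDS).astype(np.uint8)
```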
S204, at least two key points on the road boundary are determined from the road contour.
And S205, determining a boundary line of the road according to at least two key points on the road boundary.
According to the technical solution of this embodiment of the present disclosure, the environment image in front of the vehicle captured by the collection vehicle during driving is acquired, semantic segmentation is performed on the environment image to obtain object contours of at least one category of objects, the object contours belonging to the road and the road moving object categories are merged according to the object category so as to obtain the road contour and the background contour, at least two key points on the road boundary are then determined from the road contour, and the boundary line of the road is determined according to the at least two key points. Because the road contour is determined by merging the fine-grained object contours, its accuracy is ensured, which in turn supports the subsequent determination of the road boundary line.
On the basis of the above embodiment, as an alternative of the embodiment of the present disclosure, before at least two key points on the road boundary are determined from the road contour, coordinate points that are more than a set distance from the collection vehicle may be removed from the coordinate points of the road contour.
Specifically, the coordinate points beyond the set distance from the collection vehicle can be removed from the coordinate points of the road contour; that is, the category of those coordinate points is reassigned to the background type. Equivalently, according to the set distance, an image region of a certain height (h) is cut out of the contour image and assigned as background, yielding a new contour image. The set distance may be set by those skilled in the art according to actual conditions, for example an actual distance of 20 m or 30 m mapped to the corresponding distance on the image.
It can be understood that the road far from the collection vehicle may be segmented inaccurately in the captured environment image, and removing the distant part of the road contour also matches the requirements of actual automatic-driving scenes; the road contour therefore becomes more accurate and better suited to the scene.
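A possible realization of this cropping step is sketched below; it assumes that pixels farther from the collection vehicle sit in the upper rows of the image and that the set distance has already been converted to a row index h, both of which are assumptions made for the example.

```python
# Sketch only: reassign road pixels above row h (assumed to be the far region)
# to background, which removes contour points beyond the set distance.
import numpy as np

def drop_far_road_pixels(road_mask, h):
    """road_mask: HxW uint8 (1 = road). Rows [0, h) are treated as beyond the set distance."""
    trimmed = road_mask.copy()
    trimmed[:h, :] = 0
    return trimmed
```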
On the basis of the above embodiment, as an alternative to the embodiment of the present disclosure, before determining at least two key points on the boundary of the road from the road contour, the coordinate points adjacent to the collection vehicle side may also be removed from the coordinate points of the road contour.
Specifically, the coordinate points adjacent to the collection vehicle side, that is, the leftmost and rightmost coordinate points on the collection vehicle side, can be removed from the coordinate points of the road contour. It should be noted that, because of the shooting angle of the collection vehicle, besides the leftmost and rightmost coordinate points on the collection vehicle side there are also shot boundary points, that is, boundary points in the contour image that actually belong to the road.
It will be appreciated that by removing coordinate points adjacent to the side of the collection vehicle, the determination of key points on the boundary of the road can be made more accurate.
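The filter below is a hypothetical illustration of this removal: contour points lying on the image border nearest the collection vehicle (the bottom row and the extreme left/right columns) are dropped, since they come from the image edge rather than from the road edge; the margin value is an assumption.

```python
# Hypothetical border filter for the removal step described above.
import numpy as np

def drop_border_points(points, width, height, margin=2):
    """points: (N, 2) array of (x, y) road-contour points. Returns the filtered array."""
    x, y = points[:, 0], points[:, 1]
    keep = (y < height - margin) & (x > margin) & (x < width - 1 - margin)
    return points[keep]
```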
Fig. 3A is a flowchart of another road boundary line determining method provided according to an embodiment of the present disclosure. This example provides an alternative implementation for further optimizing the "determining at least two key points on the road boundary from the road contour" based on the above example. As shown in fig. 3A, the road boundary line determining method of the present embodiment may include:
s301, acquiring an environment image in front of driving, acquired by a collection vehicle in the driving process.
S302, performing semantic segmentation on the environment image to determine a road contour and a background contour in the environment image.
And S303, selecting coordinate points which are closest to and farthest from the collection vehicle from each side boundary of the road contour as two key points on the side boundary.
In this embodiment, the key points on each side boundary include an upper key point and a lower key point.
Optionally, from each side boundary of the road contour, a coordinate point farthest from the collection vehicle is selected as an upper key point on the side boundary. Illustratively, keypoint 1 and keypoint 2 in fig. 3B.
Optionally, in the case that the collection vehicle side includes both the leftmost coordinate point and the rightmost coordinate point, the outermost coordinate point in each side boundary of the road contour is directly selected as the lower key point on that side boundary.
Further, in the case that the collection vehicle side includes the leftmost coordinate point or the rightmost coordinate point together with shot boundary points of the captured road, the shot boundary point farthest from the collection vehicle in the side boundary containing shot boundary points is used as the lower key point on that side boundary, such as key point 4 in fig. 3B; in the side boundary containing the leftmost or rightmost coordinate point, that leftmost or rightmost coordinate point is used as the lower key point of that side, such as key point 3 in fig. 3B.
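A simplified sketch of this selection is shown below. It assumes image coordinates in which a smaller row index y is farther from the collection vehicle (upper key point) and a larger y is closer (lower key point), and it splits the left and right side boundaries at the image centre; both are assumptions made only for the example.

```python
# Sketch: pick the upper (farthest) and lower (closest) key point on each side boundary.
import numpy as np

def side_keypoints(points, width):
    """points: (N, 2) array of (x, y) road-contour points.
    Returns {'left': (upper, lower), 'right': (upper, lower)} as (x, y) tuples."""
    result = {}
    for side, mask in (("left", points[:, 0] < width // 2),
                       ("right", points[:, 0] >= width // 2)):
        pts = points[mask]
        if len(pts) == 0:
            continue
        upper = tuple(pts[pts[:, 1].argmin()])   # farthest from the collection vehicle
        lower = tuple(pts[pts[:, 1].argmax()])   # closest to the collection vehicle
        result[side] = (upper, lower)
    return result
```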
S304, determining a boundary line of the road according to at least two key points on the road boundary.
According to the technical solution of this embodiment of the present disclosure, the environment image in front of the vehicle captured by the collection vehicle during driving is acquired, semantic segmentation is performed on the environment image to determine the road contour and the background contour, the coordinate points closest to and farthest from the collection vehicle are then selected from each side boundary of the road contour as the two key points on that side boundary, and the boundary line of the road is determined according to the at least two key points on the road boundary. Determining the boundary key points from the coordinate points closest to and farthest from the collection vehicle improves the efficiency of the subsequent boundary-line determination.
On the basis of the above embodiment, as an optional mode of the present disclosure, before the boundary line of the road is determined according to the at least two key points on the road boundary, the probability that a key point belongs to the boundary line may also be checked according to the object types of the key point and of the coordinate points within a set distance of the key point.
Specifically, the object types of the coordinate points within the set distance of the key point are determined; if they are of the road type, the key point is determined to belong to the boundary line, that is, the key point is valid. If the key point and its surrounding pixel points are not of the road type, for example they belong to a road automobile or a road bus, the real boundary line is occluded, which indicates that the currently extracted key point is not a point on the real boundary line. The set number and the set distance may be chosen by those skilled in the art according to actual conditions.
After verification, if a key point is invalid, the boundary-line recognition result determined from the environment image acquired in this period is invalid, and the check can be performed again in the next period.
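One way such a check could look in code is sketched below: the per-pixel class map is sampled in a small window around the key point, and the point is accepted only if the window is dominated by road-type pixels. The window radius, the road class id, and the acceptance ratio are assumed values, not parameters stated in the patent.

```python
# Illustrative validity check for a candidate boundary key point.
import numpy as np

def keypoint_on_boundary(class_map, keypoint, road_id=1, radius=5, min_ratio=0.5):
    """class_map: HxW int per-pixel class ids; keypoint: (x, y). Returns True if the point is valid."""
    x, y = keypoint
    h, w = class_map.shape
    window = class_map[max(0, y - radius):min(h, y + radius + 1),
                       max(0, x - radius):min(w, x + radius + 1)]
    return (window == road_id).mean() >= min_ratio   # occlusion by a vehicle drives the ratio down
```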
It can be understood that the accuracy of the determination of the boundary line of the road can be ensured by verifying the probability that the key point belongs to the boundary line.
On the basis of the foregoing embodiments, as an optional mode of the present disclosure, determining the boundary line of the road according to the at least two key points on the road boundary may be implemented by performing a linear fit over the at least two key points on the road boundary to obtain the boundary line of the road.
Specifically, for each side boundary of the road contour, a linear fit is performed on the upper key point and the lower key point of that side boundary to obtain the boundary line on that side. Illustratively, the fitting effect of the boundary line is shown in fig. 3C.
It is understood that the boundary line of the road can be quickly and accurately obtained by determining the boundary line of the road by linear fitting the key points.
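As a minimal sketch of the fit, each side's boundary line can be represented as x = a*y + b and obtained from that side's upper and lower key points with np.polyfit; with only two key points this reduces to the straight line through them, and more contour points per side could be included in the same call. The parameterization in y is an assumption chosen so that near-vertical boundaries stay well defined.

```python
# Sketch of the per-side linear fit of the road boundary line.
import numpy as np

def fit_boundary_line(upper, lower):
    """upper, lower: (x, y) key points of one side boundary.
    Fits x = a * y + b and returns (a, b)."""
    ys = np.array([upper[1], lower[1]], dtype=float)
    xs = np.array([upper[0], lower[0]], dtype=float)
    a, b = np.polyfit(ys, xs, 1)
    return a, b
```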
Fig. 4 is a schematic structural diagram of a road side boundary determining apparatus provided according to an embodiment of the present disclosure. The embodiment is suitable for determining the road boundary line, and is particularly suitable for determining the road boundary line in an automatic driving scene. The device can be implemented in software and/or hardware and can be integrated in an electronic device carrying a road boundary line determination function. As shown in fig. 4, the road boundary line determining apparatus 400 of the present embodiment may include:
the environment image acquisition module 401 is configured to acquire an environment image in front of a vehicle in a driving process of the vehicle;
a contour determination module 402, configured to perform semantic segmentation on the environment image to determine a road contour and a background contour in the environment image;
a key point determining module 403, configured to determine at least two key points on a road boundary from the road contour;
the boundary line determining module 404 is configured to determine a boundary line of the road according to at least two key points on the road boundary.
According to the technical solution of this embodiment of the present disclosure, the environment image in front of the vehicle captured by the collection vehicle during driving is acquired, semantic segmentation is performed on the environment image to determine the road contour and the background contour in the environment image, at least two key points on the road boundary are determined from the road contour, and the boundary line of the road is determined according to the at least two key points on the road boundary. Compared with prior-art schemes that extract the road boundary line by relying on three-dimensional spatial information collected by a vehicle-mounted laser scanning system, the technical solution of the present disclosure determines the road boundary line by processing the environment images captured by the collection vehicle, which reduces the cost of boundary-line extraction, determines the road boundary line quickly and accurately, and supports identifying the road boundary line from the environment image both in real time and offline.
Further, the apparatus further comprises:
and the contour correction module is used for removing, from the coordinate points of the road contour, coordinate points that are beyond a set distance from the collection vehicle, before at least two key points on the road boundary are determined from the road contour.
Further, the apparatus further comprises:
and the contour correction module is also used for removing coordinate points adjacent to the side of the collecting vehicle from the coordinate points of the road contour before determining at least two key points on the boundary of the road from the road contour.
Further, the key point determining module 403 is specifically configured to:
from each side boundary of the road contour, coordinate points closest to and farthest from the collection vehicle are selected as two key points on the side boundary.
Further, the apparatus further comprises:
and the key point checking module is used for checking the probability that the key point belongs to the boundary line according to the key point and the object type to which the coordinate point in the set distance range of the key point belongs before determining the boundary line of the road according to at least two key points on the road boundary.
Further, the contour determination module 402 is specifically configured to:
performing semantic segmentation on the environment image to obtain an object outline of at least one class of objects;
and merging the object outlines belonging to the road and the road moving object types according to the object types so as to distinguish the road outline and the background outline.
Further, the object categories include: background, road, ground bus, ground car, bicycle, curb, guardrail, green belt, fence or collection vehicle.
Further, the boundary line determining module 404 is specifically configured to:
and performing linear fitting according to at least two key points on the road boundary to determine the boundary line of the road.
In the technical solution of the present disclosure, the collection, storage, use, processing, transmission, provision, disclosure, and other handling of environment images all comply with relevant laws and regulations and do not violate public order and good morals.
The present disclosure also provides an electronic device, a readable storage medium, and a computer program product according to embodiments of the present disclosure.
Fig. 5 is a block diagram of an electronic device for implementing the road boundary line determining method of the embodiment of the present disclosure. FIG. 5 illustrates a schematic block diagram of an example electronic device 500 that can be used to implement embodiments of the present disclosure. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. Electronic devices may also represent various forms of mobile devices, such as personal digital processors, cellular telephones, smart phones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be examples only, and are not meant to limit implementations of the disclosure described and/or claimed herein.
As shown in fig. 5, the electronic device 500 includes a computing unit 501, which can perform various appropriate actions and processes according to a computer program stored in a Read Only Memory (ROM)502 or a computer program loaded from a storage unit 508 into a Random Access Memory (RAM) 503. In the RAM 503, various programs and data required for the operation of the electronic apparatus 500 can also be stored. The calculation unit 501, the ROM 502, and the RAM 503 are connected to each other by a bus 504. An input/output (I/O) interface 505 is also connected to bus 504.
A number of components in the electronic device 500 are connected to the I/O interface 505, including: an input unit 506 such as a keyboard, a mouse, or the like; an output unit 507 such as various types of displays, speakers, and the like; a storage unit 508, such as a magnetic disk, optical disk, or the like; and a communication unit 509 such as a network card, modem, wireless communication transceiver, etc. The communication unit 509 allows the electronic device 500 to exchange information/data with other devices through a computer network such as the internet and/or various telecommunication networks.
The computing unit 501 may be a variety of general-purpose and/or special-purpose processing components having processing and computing capabilities. Some examples of the computing unit 501 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various dedicated Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, microcontroller, and so forth. The calculation unit 501 executes the respective methods and processes described above, such as the road boundary line determination method. For example, in some embodiments, the road boundary line determination method may be implemented as a computer software program tangibly embodied in a machine-readable medium, such as storage unit 508. In some embodiments, part or all of the computer program may be loaded and/or installed onto the electronic device 500 via the ROM 502 and/or the communication unit 509. When the computer program is loaded into the RAM 503 and executed by the calculation unit 501, one or more steps of the road boundary line determination method described above may be performed. Alternatively, in other embodiments, the calculation unit 501 may be configured to perform the road boundary line determination method in any other suitable manner (e.g., by means of firmware).
Various implementations of the systems and techniques described here above may be implemented in digital electronic circuitry, integrated circuitry, Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), system on a chip (SOCs), Complex Programmable Logic Devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implemented in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special or general purpose, receiving data and instructions from, and transmitting data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for implementing the methods of the present disclosure may be written in any combination of one or more programming languages. These program codes may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the program codes, when executed by the processor or controller, cause the functions/operations specified in the flowchart and/or block diagram to be performed. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), Wide Area Networks (WANs), and the Internet.
The computer system may include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server may be a cloud server, a server of a distributed system, or a server with a combined blockchain.
Artificial intelligence is the discipline that studies making computers simulate certain human thinking processes and intelligent behaviors (such as learning, reasoning, thinking, and planning), covering both hardware-level and software-level technologies. Artificial intelligence hardware technologies generally include sensors, dedicated artificial intelligence chips, cloud computing, distributed storage, big data processing, and the like; artificial intelligence software technologies mainly include computer vision, speech recognition, natural language processing, machine learning/deep learning, big data processing, knowledge graph technology, and the like.
Cloud computing (cloud computing) refers to a technology system that accesses a flexibly extensible shared physical or virtual resource pool through a network, where resources may include servers, operating systems, networks, software, applications, storage devices, and the like, and may be deployed and managed in a self-service manner as needed. Through the cloud computing technology, high-efficiency and strong data processing capacity can be provided for technical application and model training of artificial intelligence, block chains and the like.
It should be understood that various forms of the flows shown above may be used, with steps reordered, added, or deleted. For example, the steps described in the present disclosure may be executed in parallel, sequentially, or in different orders, as long as the desired results of the technical solutions disclosed in the present disclosure can be achieved, and the present disclosure is not limited herein.
The above detailed description should not be construed as limiting the scope of the disclosure. It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and substitutions may be made in accordance with design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present disclosure should be included in the scope of protection of the present disclosure.

Claims (19)

1. A road side boundary determining method comprising:
acquiring an environmental image in front of driving acquired by an acquisition vehicle in the driving process;
performing semantic segmentation on the environment image to determine a road profile and a background profile in the environment image;
determining at least two key points on a road boundary from the road profile;
and determining a boundary line of the road according to at least two key points on the road boundary.
2. The method of claim 1, prior to determining at least two keypoints on the boundary of the road from the road profile, further comprising:
and removing, from the coordinate points of the road profile, coordinate points that are beyond a set distance from the collection vehicle.
3. The method of claim 1, prior to determining at least two keypoints on a boundary of a road from the road contour, further comprising:
coordinate points adjacent to the collection vehicle side are removed from the coordinate points of the road contour.
4. The method of claim 2 or 3, wherein determining at least two keypoints on a road boundary from the road profile comprises:
and selecting coordinate points which are closest to and farthest from the collection vehicle from each side boundary of the road contour as two key points on the side boundary.
5. The method of claim 4, wherein before determining the boundary line of the road according to the at least two key points on the road boundary, the method further comprises:
and according to the key points and the object types of the coordinate points within the set distance range of the key points, checking the probability that the key points belong to the boundary line.
6. The method of claim 4, wherein semantically segmenting the environmental image to determine road and background contours in the environmental image comprises:
performing semantic segmentation on the environment image to obtain an object outline of at least one class of objects;
and merging the object outlines belonging to the road and the road moving object types according to the object types so as to distinguish the road outline and the background outline.
7. The method of claim 6, wherein the object categories include: background, road, ground bus, ground car, bicycle, curb, guardrail, green belt, fence or collection vehicle.
8. The method of claim 1, wherein determining a boundary line of a road from at least two keypoints on the boundary of the road comprises:
and performing linear fitting according to at least two key points on the road boundary to determine the boundary line of the road.
9. A road side boundary determining apparatus comprising:
the environment image acquisition module is used for acquiring an environment image in front of running acquired by the acquisition vehicle in the running process;
the contour determination module is used for performing semantic segmentation on the environment image so as to determine a road contour and a background contour in the environment image;
the key point determining module is used for determining at least two key points on the road boundary from the road contour;
and the boundary line determining module is used for determining the boundary line of the road according to at least two key points on the road boundary.
10. The apparatus of claim 9, the apparatus further comprising:
and the contour correction module is used for removing, from the coordinate points of the road contour, coordinate points that are beyond a set distance from the collection vehicle, before at least two key points on the road boundary are determined from the road contour.
11. The apparatus of claim 9, the apparatus further comprising:
and the contour correction module is also used for removing coordinate points adjacent to the collection vehicle side from the coordinate points of the road contour before determining at least two key points on the boundary of the road from the road contour.
12. The apparatus according to claim 10 or 11, wherein the keypoint determining module is specifically configured to:
and selecting coordinate points which are closest to and farthest from the collection vehicle from each side boundary of the road contour as two key points on the side boundary.
13. The apparatus of claim 12, the apparatus further comprising:
and the key point checking module is used for checking the probability that a key point belongs to the boundary line according to the key point and the object types of the coordinate points within a set distance of the key point, before the boundary line of the road is determined according to the at least two key points on the road boundary.
14. The apparatus of claim 12, wherein the contour determination module is specifically configured to:
performing semantic segmentation on the environment image to obtain an object outline of at least one class of objects;
and merging the object outlines belonging to the road and the road moving object types according to the object types so as to distinguish the road outline and the background outline.
15. The apparatus of claim 14, wherein the object categories comprise: background, road, ground bus, ground car, bicycle, curb, guardrail, green belt, fence or collection vehicle.
16. The apparatus of claim 9, wherein the boundary line determination module is specifically configured to:
and performing linear fitting according to at least two key points on the road boundary to determine the boundary line of the road.
17. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of determining a road boundary line as claimed in any one of claims 1 to 8.
18. A non-transitory computer-readable storage medium storing computer instructions for causing a computer to execute the road boundary line determining method according to any one of claims 1 to 8.
19. A computer program product comprising a computer program which, when executed by a processor, implements a road boundary line determining method according to any one of claims 1-8.
CN202210639480.0A 2022-06-07 2022-06-07 Road side boundary determining method, device, equipment and storage medium Pending CN115063765A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210639480.0A CN115063765A (en) 2022-06-07 2022-06-07 Road side boundary determining method, device, equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210639480.0A CN115063765A (en) 2022-06-07 2022-06-07 Road side boundary determining method, device, equipment and storage medium

Publications (1)

Publication Number Publication Date
CN115063765A (en) 2022-09-16

Family

ID=83200649

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210639480.0A Pending CN115063765A (en) 2022-06-07 2022-06-07 Road side boundary determining method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN115063765A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116225028A (en) * 2023-05-04 2023-06-06 尚特杰电力科技有限公司 Forward driving deviation correcting method and deviation correcting device for cleaning robot


Similar Documents

Publication Publication Date Title
CN113902897B (en) Training of target detection model, target detection method, device, equipment and medium
CN113205037B (en) Event detection method, event detection device, electronic equipment and readable storage medium
CN113191256A (en) Method and device for training lane line detection model, electronic device and storage medium
CN114648676A (en) Point cloud processing model training and point cloud instance segmentation method and device
CN114037966A (en) High-precision map feature extraction method, device, medium and electronic equipment
CN114443794A (en) Data processing and map updating method, device, equipment and storage medium
CN115359471A (en) Image processing and joint detection model training method, device, equipment and storage medium
CN115063765A (en) Road side boundary determining method, device, equipment and storage medium
CN113920158A (en) Training and traffic object tracking method and device of tracking model
CN112883236A (en) Map updating method, map updating device, electronic equipment and storage medium
CN113052047A (en) Traffic incident detection method, road side equipment, cloud control platform and system
CN115578431B (en) Image depth processing method and device, electronic equipment and medium
CN114724113B (en) Road sign recognition method, automatic driving method, device and equipment
CN113297878A (en) Road intersection identification method and device, computer equipment and storage medium
CN115049997B (en) Method and device for generating edge lane line, electronic device and storage medium
CN115436900A (en) Target detection method, device, equipment and medium based on radar map
CN113706705B (en) Image processing method, device, equipment and storage medium for high-precision map
CN113989300A (en) Lane line segmentation method and device, electronic equipment and storage medium
CN116469073A (en) Target identification method, device, electronic equipment, medium and automatic driving vehicle
CN114708498A (en) Image processing method, image processing apparatus, electronic device, and storage medium
CN112861701A (en) Illegal parking identification method and device, electronic equipment and computer readable medium
CN112818972A (en) Method and device for detecting interest point image, electronic equipment and storage medium
CN113011316A (en) Lens state detection method and device, electronic equipment and medium
CN113033431A (en) Optical character recognition model training and recognition method, device, equipment and medium
CN113344121A (en) Method for training signboard classification model and signboard classification

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination