WO2023179030A1 - Road boundary detection method, apparatus, electronic device, storage medium and computer program product - Google Patents

Road boundary detection method, apparatus, electronic device, storage medium and computer program product Download PDF

Info

Publication number
WO2023179030A1
WO2023179030A1 (PCT/CN2022/129043)
Authority
WO
WIPO (PCT)
Prior art keywords
vehicle
road
lane
image
boundaries
Prior art date
Application number
PCT/CN2022/129043
Other languages
English (en)
French (fr)
Inventor
李晨光
程光亮
石建萍
有吉斗纪知
松原海明
安井裕司
中里祐介
铃木达矢
天野宣昭
Original Assignee
商汤集团有限公司
本田技研工业株式会社
Application filed by 商汤集团有限公司 (SenseTime Group Limited) and 本田技研工业株式会社 (Honda Motor Co., Ltd.)
Publication of WO2023179030A1 publication Critical patent/WO2023179030A1/zh

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/04: Architecture, e.g. interconnection topology

Definitions

  • The present disclosure relates to, but is not limited to, the technical field of computer vision, and in particular to a road boundary detection method, apparatus, electronic device, storage medium and computer program product.
  • embodiments of the present disclosure provide a road boundary detection method, device, electronic device, storage medium and computer program product.
  • Embodiments of the present disclosure provide a road boundary detection method, which method includes:
  • identifying a road image collected by an image acquisition device provided on the vehicle, and determining a plurality of road boundaries in the road image; and selecting, from the plurality of road boundaries, a road boundary that the vehicle can drive into.
  • selecting a road boundary that the vehicle can drive into from the plurality of road boundaries includes: determining the own vehicle lane in which the vehicle is located based on the road image;
  • a road boundary that the vehicle can drive into is determined from the plurality of road boundaries based on the own vehicle lane in which the vehicle is located.
  • In some embodiments, determining the own-vehicle lane in which the vehicle is located based on the road image includes: identifying a traffic sign in the road image; and determining the own-vehicle lane in which the vehicle is located based on the traffic sign.
  • In some embodiments, determining the own-vehicle lane in which the vehicle is located based on the road image includes: identifying the traveling direction of another vehicle in the road image; and determining the own-vehicle lane in which the vehicle is located based on the traveling direction of the other vehicle.
  • In some embodiments, determining the own-vehicle lane in which the vehicle is located based on the traffic sign includes: in response to the traffic sign indicating that the lane in which the vehicle is located is not a one-way lane and the traffic sign including a designated road marking, determining the own-vehicle lane in which the vehicle is located based on the designated road marking.
  • In some embodiments, determining the own-vehicle lane in which the vehicle is located based on the traveling direction of the other vehicle includes: in response to the traveling direction of the other vehicle being opposite to the traveling direction of the vehicle, determining the own-vehicle lane in which the vehicle is located based on the lane in which the other vehicle is located.
  • In some embodiments, determining the road boundary that the vehicle can drive into from the plurality of road boundaries based on the own-vehicle lane in which the vehicle is located includes: determining, from the plurality of road boundaries, the road boundary that the vehicle can drive into based on the traffic sign and the own-vehicle lane in which the vehicle is located.
  • In some embodiments, determining the road boundary that the vehicle can drive into from the plurality of road boundaries based on the own-vehicle lane in which the vehicle is located includes: obtaining the location information of the vehicle, determining map sub-data related to the location information from map data obtained in advance, and determining, from the plurality of road boundaries, the road boundary that the vehicle can drive into based on the map sub-data; the map data includes at least road data, road marking data and traffic signboard data.
  • determining multiple road boundaries in the road image includes: detecting multiple lanes in the road image, and determining the multiple road boundaries by connecting ends of each lane.
  • In some embodiments, determining a plurality of road boundaries in the road image includes: detecting a drivable area in the road image, and determining the plurality of road boundaries in the image based on a contour of the drivable area.
  • the method further includes: determining a driving path of the vehicle based on a road boundary that the vehicle can drive into, and controlling the driving of the vehicle according to the driving path.
  • In some embodiments, the method further includes: setting a first region of interest based on a road boundary that the vehicle can drive into, and obtaining an image corresponding to the first region of interest at a first resolution; wherein the road image is obtained at a second resolution that is lower than the first resolution.
  • In some embodiments, the method further includes: setting a second region of interest based on a road boundary that the vehicle can drive into, and obtaining an image corresponding to the second region of interest at a first frame rate; wherein the road image is obtained at a second frame rate that is lower than the first frame rate.
  • Embodiments of the present disclosure also provide a road boundary detection device, which device includes: a detection part and a selection part; wherein,
  • the detection part is configured to identify road images collected by an image collection device provided on the vehicle and determine multiple road boundaries in the road images;
  • the selecting part is configured to select a road boundary into which the vehicle can drive from the plurality of road boundaries.
  • An embodiment of the present disclosure also provides a computer-readable storage medium on which a computer program is stored, and when the program is executed by a processor, the steps of the method described in the embodiment of the present disclosure are implemented.
  • Embodiments of the present disclosure also provide an electronic device, including a memory, a processor, and a computer program stored in the memory and executable on the processor; when the processor executes the program, the steps of the method described in the embodiments of the present disclosure are implemented.
  • Embodiments of the present disclosure also provide a computer program product.
  • the computer program product includes a computer program or instructions.
  • When the computer program or instructions are run on an electronic device, the electronic device is caused to execute the steps of the method described in the embodiments of the present disclosure.
  • The road boundary detection method provided by the embodiments of the present disclosure can determine, on the basis of the identified road boundaries, the road boundary that the vehicle can drive into, in particular in scenarios with invisible road boundaries, thereby providing a sufficient basis for the vehicle's turning decisions at intersections.
  • Figure 1a is a schematic diagram of road boundaries in a road boundary detection method according to an embodiment of the present disclosure;
  • Figure 1b is a schematic diagram of road boundaries that can be driven into in the road boundary detection method according to an embodiment of the present disclosure;
  • Figure 2a is a first schematic diagram of an application scenario of an embodiment of the present disclosure;
  • Figure 2b is a second schematic diagram of an application scenario of an embodiment of the present disclosure;
  • Figure 3 is a schematic flowchart of a road boundary detection method according to an embodiment of the present disclosure;
  • Figure 4a is a first schematic diagram of an own-vehicle-lane scene in the road boundary detection method according to an embodiment of the present disclosure;
  • Figure 4b is a second schematic diagram of an own-vehicle-lane scene in the road boundary detection method according to an embodiment of the present disclosure;
  • Figure 5 is a first schematic structural diagram of a road boundary detection device according to an embodiment of the present disclosure;
  • Figure 6 is a second schematic structural diagram of a road boundary detection device according to an embodiment of the present disclosure;
  • Figure 7 is a third schematic structural diagram of a road boundary detection device according to an embodiment of the present disclosure;
  • FIG. 8 is a schematic diagram of the hardware composition of an electronic device according to an embodiment of the present disclosure.
  • Figures 1a and 1b are schematic diagrams of road boundaries and of road boundaries that can be driven into, respectively, in the road boundary detection method of the embodiments of the present disclosure. In addition to the boundaries 110 on both sides of the lane in which the vehicle 130 is located, the road boundaries also include the boundaries 120 perpendicular to the two sides of the lane, as shown in Figure 1a; in the following embodiments these are collectively referred to as road boundaries. In the intersection shown in the scene, eight road boundaries can be seen. Referring to Figure 1b, if the road is a two-way road, the road boundary corresponding to the vehicle's traveling direction is the road boundary that the vehicle can drive into. Assuming that the vehicle 130 drives on the left, then for the two road boundaries 120 in each direction, the left road boundary is the road boundary 140 that the vehicle can drive into.
  • Figure 2a is a first schematic diagram of an application scenario of an embodiment of the present disclosure. As shown in Figure 2a, assume that in the intersection scenario shown in Figures 1a and 1b there is an obstruction 210 at the southwest corner of the intersection (the top, bottom, left and right of the image correspond to north, south, west and east, respectively). Normally, the obstruction 210 blocks the view of vehicles traveling from south to north. Figure 2b is a second schematic diagram of an application scenario of an embodiment of the present disclosure. As shown in Figure 2b, the driver or the sensors of the vehicle 230 cannot obtain information about the part of the area blocked by the obstruction 210; this part can be called the unknown area 220. Among the eight road boundaries present in the scene, a road boundary that the driver or the sensors of the vehicle 230 cannot perceive, because it is blocked by the obstruction or for other reasons (for example, because it is too far away), is called an invisible road boundary 250 (the thick solid lines in Figures 2a and 2b), while a road boundary that the driver or the sensors of the vehicle 230 can perceive is a visible road boundary 240 (the thick dashed lines in Figure 2a).
  • To address the above problem, in the embodiments of the present disclosure, a road image collected by an image acquisition device installed on the vehicle is identified and multiple road boundaries in the road image are determined, and a road boundary that the vehicle can drive into is then selected from the multiple road boundaries. In this way, road boundaries (in particular invisible road boundaries) can be identified, and the road boundary that the vehicle can drive into can be determined.
  • the terms “comprising”, “comprises” or any other variations thereof are intended to cover non-exclusive inclusion, so that a method or device including a series of elements not only includes the explicitly stated elements, but also other elements not expressly listed, or elements inherent to the implementation of the method or apparatus.
  • Without further limitation, an element defined by the statement "comprises a ..." does not exclude the presence of other related elements in the method or device that includes that element (for example, steps in the method or parts in the device; a part may be, for example, part of a circuit, part of a processor, part of a program or software, and so on).
  • the road boundary detection method provided by the embodiment of the present disclosure includes a series of steps, but the road boundary detection method provided by the embodiment of the present disclosure is not limited to the recorded steps.
  • Similarly, the road boundary detection device provided by the embodiments of the present disclosure includes a series of modules, but the device provided by the embodiments of the present disclosure is not limited to the explicitly recorded modules and may also include modules that need to be provided to obtain relevant information or to perform processing based on that information.
  • The term "and/or" herein merely describes an association relationship between associated objects, indicating that three relationships may exist; for example, A and/or B may mean: A exists alone, both A and B exist, or B exists alone.
  • The term "at least one" herein means any one of a plurality, or any combination of at least two of a plurality; for example, including at least one of A, B and C may mean including any one or more elements selected from the set consisting of A, B and C.
  • FIG. 3 is a schematic flowchart 1 of a road boundary detection method according to an embodiment of the present disclosure; as shown in Figure 3, the method includes:
  • Step S301 Identify the road image collected by the image collection device installed on the vehicle, and determine multiple road boundaries in the road image;
  • Step S302 Select a road boundary into which the vehicle can drive from the plurality of road boundaries.
  • the road boundary detection method in the embodiment of the present disclosure is applied to electronic devices, which may be vehicle-mounted devices, cloud platforms, or other computer devices.
  • the vehicle-mounted device may be a thin client, a thick client, a microprocessor-based system, a small computer system, etc. installed on the vehicle
  • The cloud platform may be a distributed cloud computing technology environment including small computer systems or large computer systems, and so on.
  • the vehicle-mounted equipment can be connected through communication with the vehicle's sensors, positioning devices, etc., and the vehicle-mounted equipment can obtain the data collected by the vehicle's sensors and the geographical location information reported by the positioning device through the communication connection.
  • the vehicle's sensor may be at least one of the following: millimeter wave radar, lidar, camera and other equipment;
  • The positioning device may be a device that provides positioning services based on at least one of the following positioning systems: the Global Positioning System (GPS), the BeiDou Navigation Satellite System, or the Galileo satellite navigation system.
  • the vehicle-mounted device may be an Advanced Driving Assistant System (ADAS).
  • ADAS is installed on the vehicle.
  • The ADAS may obtain the vehicle's real-time location information from the vehicle's positioning device, and/or may obtain image data, radar data and the like representing information about the vehicle's surrounding environment from the vehicle's sensors.
  • ADAS can send vehicle driving data including the vehicle's real-time location information to the cloud platform.
  • the cloud platform can receive the vehicle's real-time location information and/or image data, radar data, etc. representing the vehicle's surrounding environment information.
  • the road image is obtained through an image acquisition device (ie, the above-mentioned sensor, such as a camera) installed on the vehicle.
  • the image acquisition device collects road images or environment images around the vehicle in real time as the vehicle moves. Further, by detecting and recognizing the road image, multiple road boundaries related to the vehicle in the road image are determined, and then a road boundary that the vehicle can enter is selected from the multiple road boundaries.
  • With the technical solutions of the embodiments of the present disclosure, the electronic device can determine, on the basis of the identified road boundaries, the road boundary that the vehicle can drive into, in particular in scenarios with invisible road boundaries, and thereby provide a sufficient basis for the vehicle's turning decisions at intersections.
  • determining multiple road boundaries in the road image includes detecting multiple lanes in the road image, and determining the multiple road boundaries by connecting ends of each lane.
  • multiple lanes in the road image can be detected through the first network, that is, multiple lane lines in the road image can be detected.
  • the road image is processed through the first network to obtain the lane lines in the road image; and then multiple road boundaries related to the vehicle are obtained by connecting the end edges of the lane lines.
  • other image detection schemes may also be used to detect multiple lanes in road images.
  • For example, the road image is first converted to grayscale, lane edges in the grayscale road image are detected (for example, using an edge detection operator), and the processed image is then binarized, thereby obtaining the lane lines in the road image.
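  • As a concrete illustration of the classical alternative just described (grayscale conversion, edge detection, binarization, then grouping edge pixels into lane lines), a minimal Python/OpenCV sketch is given below. It is only an illustrative sketch under assumed parameter values (Canny thresholds, Hough settings); it is not the implementation of the first network or of the disclosure itself.

    # Minimal sketch of the classical lane-line extraction described above.
    # Parameter values are illustrative assumptions, not requirements.
    import cv2
    import numpy as np

    def detect_lane_line_segments(road_image_bgr):
        gray = cv2.cvtColor(road_image_bgr, cv2.COLOR_BGR2GRAY)        # grayscale conversion
        edges = cv2.Canny(gray, 50, 150)                               # edge-detection operator
        _, binary = cv2.threshold(edges, 127, 255, cv2.THRESH_BINARY)  # binarization
        # Group edge pixels into straight segments approximating lane lines.
        lines = cv2.HoughLinesP(binary, rho=1, theta=np.pi / 180,
                                threshold=60, minLineLength=40, maxLineGap=20)
        return [] if lines is None else [tuple(seg[0]) for seg in lines]

    if __name__ == "__main__":
        img = cv2.imread("road.jpg")   # hypothetical input frame from the on-vehicle camera
        if img is not None:
            segments = detect_lane_line_segments(img)
            print(f"{len(segments)} candidate lane-line segments")
            # Road boundaries could then be formed by connecting the end points
            # of adjacent lane lines, as described above.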
  • determining multiple road boundaries in the road image includes: detecting a drivable area in the road image, and determining multiple road boundaries in the image based on a contour of the drivable area. road boundary.
  • In the embodiments of the present disclosure, the drivable area in the road image can be detected through the second network. The drivable area (free space), which may also be called the passable area, represents the area in which the vehicle can travel.
  • In addition to the current vehicle, a road image usually also contains other vehicles, pedestrians, trees, road edges and the like, and the areas occupied by these objects are areas in which the current vehicle cannot travel. Therefore, the road image is processed through the second network, and the areas occupied by other vehicles, pedestrians, trees, road edges and the like are removed to obtain the drivable area of the vehicle.
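  • A minimal sketch of turning such a drivable-area (freespace) mask into boundary polylines is shown below. It assumes the second network outputs a binary mask (1 = drivable) and uses contour extraction plus polygon simplification, which is one plausible way to realize the contour-based step described above, not the disclosure's exact method.

    # Sketch: derive road-boundary polylines from a binary freespace mask.
    # The mask layout (1 = drivable, 0 = not drivable) is an assumption.
    import cv2
    import numpy as np

    def boundaries_from_freespace(freespace_mask):
        mask = freespace_mask.astype(np.uint8) * 255
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        boundaries = []
        for contour in contours:
            # Simplify the contour; each edge of the resulting polyline
            # approximates one boundary segment around the drivable area.
            poly = cv2.approxPolyDP(contour, 5.0, True)   # epsilon = 5 px (illustrative)
            boundaries.append(poly.reshape(-1, 2))
        return boundaries

    if __name__ == "__main__":
        demo = np.zeros((200, 200), dtype=np.uint8)
        demo[80:200, 60:140] = 1                          # toy drivable region
        print([b.shape for b in boundaries_from_freespace(demo)])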
  • determining multiple road boundaries in the road image includes: using a third network to detect the road image and determining multiple road boundaries related to the vehicle.
  • a pre-trained third network can be used to process road images to obtain multiple road boundaries related to the vehicle.
  • The above-mentioned first network, second network and third network may all be deep neural networks (DNNs).
  • In some embodiments of the present disclosure, selecting a road boundary that the vehicle can drive into from the multiple road boundaries includes: determining the own-vehicle lane in which the vehicle is located based on the road image; and determining, from the multiple road boundaries, the road boundary that the vehicle can drive into based on the own-vehicle lane in which the vehicle is located.
  • In the embodiments of the present disclosure, after the road boundaries shown in Figure 1a are determined, the own-vehicle lane in which the vehicle is located can be determined, and the road boundary that the vehicle can drive into can then be determined from the multiple road boundaries based on that own-vehicle lane.
  • The road boundary corresponding to the driving direction of the vehicle is the road boundary that the vehicle can drive into. For example, as shown in Figure 1b, when the vehicle drives on the left, the road boundary in the left lane is a road boundary that the vehicle can drive into, and the road boundary in the right lane is a road boundary that the vehicle cannot drive into; correspondingly, when the vehicle drives on the right, the road boundary in the right lane is the road boundary that the vehicle can drive into, and the road boundary in the left lane is a road boundary that the vehicle cannot drive into.
  • The terms "left" and "right" here are relative: when a person faces the road boundaries shown in Figure 1a in the driving direction of the vehicle, of the two lanes divided by the solid lane line, the lane on the left is called the left lane and the lane on the right is called the right lane.
  • determining the own lane in which the vehicle is located based on the road image includes: identifying traffic signs in the road image; and determining the own lane in which the vehicle is located based on the traffic signs.
  • In other embodiments, determining the own-vehicle lane in which the vehicle is located based on the road image includes: identifying the driving direction of another vehicle in the road image; and determining the own-vehicle lane in which the vehicle is located based on the driving direction of that other vehicle.
  • the electronic device may determine the own lane in which the vehicle is located based on the recognized traffic sign and/or the driving direction of other vehicles.
  • Illustratively, the traffic signs include at least one of the following: signs indicated by traffic signboards, road markings, and the like. A traffic signboard is a graphic symbol used to indicate traffic regulations and road information; it is usually installed at an intersection or at the edge of the road to manage traffic and indicate driving directions so as to keep the road clear and driving safe. The road markings include line markings on the road (such as solid white lines, dashed white lines, solid yellow lines, double solid yellow lines, and so on) and markings of road attributes drawn on the road (such as straight-ahead markings, turn markings, speed-limit markings, bus-lane markings, and so on, i.e., markings drawn manually on the road surface).
  • In the embodiments of the present disclosure, the electronic device may determine the own-vehicle lane in which the vehicle is located from the driving directions of other vehicles detected in the road image; the electronic device may also determine the own-vehicle lane from the traffic signs detected in the road image; the electronic device may further determine the own-vehicle lane from both the traffic signs and the driving directions of other vehicles detected in the road image.
  • In some embodiments, determining the own-vehicle lane in which the vehicle is located based on the traffic sign includes: in response to the traffic sign indicating that the lane in which the vehicle is located is not a one-way lane and the traffic sign including a designated road marking, determining the own-vehicle lane in which the vehicle is located based on the designated road marking.
  • In the embodiments of the present disclosure, the designated road marking is used to indicate traffic flows traveling in the same direction or to separate traffic flows traveling in opposite directions. Illustratively, the designated road marking includes solid lines (such as solid yellow lines, double solid yellow lines, and so on) and dashed lines (such as dashed white lines).
  • In other embodiments, determining the own-vehicle lane in which the vehicle is located based on the traveling direction of the other vehicle includes: in response to the traveling direction of the other vehicle being opposite to the traveling direction of the vehicle, determining the own-vehicle lane in which the vehicle is located based on the lane in which the other vehicle is located.
  • As an example, Figure 4a is a first schematic diagram of an own-vehicle-lane scene in the road boundary detection method according to an embodiment of the present disclosure. As shown in Figure 4a, when it is determined through recognition of the traffic signs that the lane is not a one-way lane, a solid line is recognized from the road image (for example, the thick solid line 410 in Figure 4a), and the vehicle 400 is driving on the left, it can be determined that the lane to the left of the solid line is the own-vehicle lane. In this example, the thick solid line 410 is the designated road marking.
  • As another example, Figure 4b is a second schematic diagram of an own-vehicle-lane scene in the road boundary detection method according to an embodiment of the present disclosure. As shown in Figure 4b, when a dashed line is recognized from the road image (for example, the thick dashed line 420 in Figure 4b) and the vehicle 400 is driving on the left, it can be determined that the lane on the right side of the dashed line may be the own-vehicle lane; the own-vehicle lane can then be further determined based on other traffic signs or on the recognition results of the road image.
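  • The rule illustrated by Figures 4a and 4b can be written as a small decision helper, as in the sketch below. The marking type and the driving side are assumed to come from upstream recognition, and the undetermined result for a dashed line mirrors the "may be the own-vehicle lane, needs further cues" case above. This is an illustrative simplification, not the disclosure's decision logic.

    # Illustrative rule for the Figure 4a / 4b cases: infer on which side of a
    # detected dividing line the own-vehicle lane lies, given the driving side.
    def own_lane_side(marking_type, drive_side):
        """Return 'left', 'right' or None (undetermined) relative to the marking."""
        if marking_type == "solid":
            # Figure 4a: with left-hand traffic, the lane left of the solid line is the ego lane
            # (the right-hand case is a symmetric assumption, not stated in the disclosure).
            return "left" if drive_side == "left" else "right"
        if marking_type == "dashed":
            return None   # Figure 4b: ambiguous, resolve with other signs or map data
        return None

    print(own_lane_side("solid", "left"))    # -> 'left'
    print(own_lane_side("dashed", "left"))   # -> None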
  • As yet another example, when recognition of the road image shows that there are other vehicles traveling in the direction opposite to that of the current vehicle, it can be determined that the lanes in which those other vehicles are located are not the own-vehicle lane. The own-vehicle lane can then be obtained by removing, from the lanes, the lanes occupied by the other vehicles (vehicles traveling opposite to the current vehicle).
  • In other embodiments, after it is determined that the lanes of the other vehicles are not the own-vehicle lane, the road boundaries corresponding to those lanes may be determined, and those boundaries may be removed from the multiple road boundaries determined in step S301, thereby obtaining the road boundaries that the vehicle can drive into.
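  • The removal step just described can be sketched as a simple filter, assuming each detected boundary has been tagged with the lane it belongs to (this tagging scheme is an assumption for illustration only):

    # Drop road boundaries whose lanes carry oncoming traffic (illustrative only).
    def enterable_boundaries(boundaries, oncoming_lane_ids):
        """boundaries: iterable of dicts like {'id': 0, 'lane_id': 1, 'polyline': [...]}"""
        return [b for b in boundaries if b["lane_id"] not in oncoming_lane_ids]

    demo = [
        {"id": 0, "lane_id": 0, "polyline": []},   # lane in the ego direction
        {"id": 1, "lane_id": 1, "polyline": []},   # oncoming lane
    ]
    print([b["id"] for b in enterable_boundaries(demo, {1})])   # -> [0]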
  • In some embodiments of the present disclosure, determining the road boundary that the vehicle can drive into from the multiple road boundaries based on the own-vehicle lane in which the vehicle is located includes: determining, based on the traffic signs and the own-vehicle lane in which the vehicle is located, the road boundary that the vehicle can drive into from the multiple road boundaries.
  • the electronic device can identify the traffic signs in the road image in real time, and determine the road boundary that the vehicle can drive into from the multiple road boundaries in combination with the own lane where the vehicle is located.
  • the traffic signs may include at least one of the following signs: one-way driving signs, right-turn traffic signs at roundabouts, prohibition of driving outside the designated direction sign, no entry signs, traffic closure signs, no vehicle crossing signs, No turning signs, pedestrian only signs, bicycle only signs, bicycle and pedestrian only signs, stop lines, lane lines, and more.
  • In the embodiments of the present disclosure, after the electronic device determines the multiple road boundaries related to the vehicle and the own-vehicle lane in which the vehicle is located, it can determine the road boundaries that the vehicle can drive into based on the various traffic signs arranged around the vehicle.
  • In other embodiments of the present disclosure, determining the road boundary that the vehicle can drive into from the multiple road boundaries based on the own-vehicle lane in which the vehicle is located includes: obtaining the location information of the vehicle, determining map sub-data related to the location information from map data obtained in advance, and determining, from the multiple road boundaries, the road boundary that the vehicle can drive into based on the map sub-data; the map data includes at least road data, road marking data and traffic signboard data.
  • the electronic device can obtain map data in advance.
  • The map data may be, for example, data containing a priori information such as road information and traffic sign information. The electronic device can determine the driving direction of the vehicle based on the location information of the vehicle, then determine the routes that the vehicle can travel based on the location information and the driving direction, and determine, based on those routes, the road boundaries that the vehicle can drive into or cannot drive into.
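  • A rough sketch of such a map lookup is given below. The record layout of the map data (road center point, an 'enterable' flag) and the search radius are assumptions for illustration; a real map would carry the richer road, marking and signboard data described above.

    # Illustrative map-based filtering: pick map sub-data near the vehicle,
    # then keep only boundaries whose roads the map marks as enterable.
    import math

    def map_subdata_near(map_data, position, radius_m=50.0):
        """map_data: list of dicts like {'road_id': 7, 'center': (x, y), 'enterable': True}"""
        px, py = position
        return [rec for rec in map_data
                if math.hypot(rec["center"][0] - px, rec["center"][1] - py) <= radius_m]

    def filter_boundaries_by_map(boundaries, subdata):
        """boundaries: list of dicts like {'id': 0, 'road_id': 7, 'polyline': [...]}"""
        enterable_roads = {rec["road_id"] for rec in subdata if rec["enterable"]}
        return [b for b in boundaries if b["road_id"] in enterable_roads]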
  • the method further includes: determining a driving path of the vehicle based on a road boundary that the vehicle can drive into, and controlling the driving of the vehicle according to the driving path.
  • the electronic device can determine the driving path of the vehicle for the road boundary that the vehicle can drive into, and the electronic device can control the vehicle to drive according to the driving path.
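  • As a very rough geometric illustration only (not the disclosure's planner or controller), one could take a point on the enterable boundary as a waypoint and steer toward it:

    # Take the centroid of the enterable boundary polyline as a waypoint and
    # compute the heading toward it (illustrative geometry only).
    import math

    def waypoint_toward_boundary(ego_xy, boundary_polyline):
        xs = [p[0] for p in boundary_polyline]
        ys = [p[1] for p in boundary_polyline]
        target = (sum(xs) / len(xs), sum(ys) / len(ys))
        heading = math.atan2(target[1] - ego_xy[1], target[0] - ego_xy[0])
        return target, heading

    print(waypoint_toward_boundary((0.0, 0.0), [(10.0, 5.0), (10.0, -5.0)]))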
  • In some embodiments of the present disclosure, the method further includes: setting a first region of interest based on a road boundary that the vehicle can drive into, and obtaining an image corresponding to the first region of interest at a first resolution; wherein the road image is obtained at a second resolution, and the second resolution is lower than the first resolution.
  • In other embodiments of the present disclosure, the method further includes: setting a second region of interest based on a road boundary that the vehicle can drive into, and obtaining an image corresponding to the second region of interest at a first frame rate; wherein the road image is obtained at a second frame rate, and the second frame rate is lower than the first frame rate.
  • In the embodiments of the present disclosure, the electronic device sets regions of interest (ROIs), namely the aforementioned first region of interest and second region of interest, based on the road boundary that the vehicle can drive into. On the one hand, when capturing road images of the road environment, the electronic device may use the second resolution (which may also be called a low resolution), whereas for the first region of interest it may use the first resolution (which may also be called a high resolution), higher than the second resolution, so that a higher-quality image is collected for the first region of interest and subsequent object recognition on the image corresponding to the first region of interest is facilitated. On the other hand, when capturing road images, the electronic device may use the second frame rate (which may also be called a low frame rate), whereas for the second region of interest it may use the first frame rate (which may also be called a high frame rate), higher than the second frame rate, which facilitates subsequent object recognition on the image corresponding to the second region of interest.
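  • A small sketch of this resolution split is shown below: the full road image is kept at the (lower) second resolution, while the region of interest around the enterable boundary is cropped from the full-resolution sensor frame; analogously, the ROI could be re-sampled more often than the full frame. The scale factor and the frame-rate handling are assumptions for illustration.

    # Keep the full road image at a low (second) resolution; crop the region of
    # interest around the enterable boundary at full (first) resolution.
    import cv2

    def split_full_and_roi(sensor_frame, roi_xywh, low_res_scale=0.5):
        x, y, w, h = roi_xywh
        road_image = cv2.resize(sensor_frame, None, fx=low_res_scale, fy=low_res_scale)
        roi_image = sensor_frame[y:y + h, x:x + w]   # first ROI, first (full) resolution
        return road_image, roi_image

    # Frame-rate analogue (conceptual): grab the ROI every frame, but keep only
    # every Nth full frame, so the ROI is updated at the higher first frame rate.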
  • FIG. 5 is a schematic structural diagram of a road boundary detection device according to an embodiment of the present disclosure; as shown in Figure 5, the device includes: a detection part 51 and a selection part 52; wherein,
  • the detection part 51 is configured to identify road images collected by an image collection device provided on the vehicle and determine multiple road boundaries in the road images;
  • the selection part 52 is configured to select a road boundary into which the vehicle can drive from the plurality of road boundaries.
  • In some embodiments of the present disclosure, the selection part 52 is configured to determine the own-vehicle lane in which the vehicle is located based on the road image, and to determine, from the plurality of road boundaries, the road boundary that the vehicle can drive into based on the own-vehicle lane in which the vehicle is located.
  • the selection part 52 is configured to identify traffic signs in the road image; and determine the own vehicle lane in which the vehicle is located based on the traffic signs.
  • the selection part 52 is configured to identify the traveling directions of other vehicles in the road image; and determine the own lane in which the vehicle is located based on the traveling directions of the other vehicles.
  • In some embodiments of the present disclosure, the selection part 52 is configured to, in response to the traffic sign indicating that the lane in which the vehicle is located is not a one-way lane and the traffic sign including a designated road marking, determine the own-vehicle lane in which the vehicle is located based on the designated road marking.
  • In some embodiments of the present disclosure, the selection part 52 is configured to, in response to the traveling direction of the other vehicle being opposite to the traveling direction of the vehicle, determine the own-vehicle lane in which the vehicle is located based on the lane in which the other vehicle is located.
  • In some embodiments of the present disclosure, the selection part 52 is configured to determine, from the plurality of road boundaries, the road boundary that the vehicle can drive into based on the traffic sign and the own-vehicle lane in which the vehicle is located.
  • In some embodiments of the present disclosure, the selection part 52 is configured to obtain the location information of the vehicle, determine map sub-data related to the location information from map data obtained in advance, and determine, from the plurality of road boundaries, the road boundary that the vehicle can drive into based on the map sub-data; the map data includes at least road data, road marking data and traffic signboard data.
  • the detection part 51 is configured to detect multiple lanes in the road image and determine multiple road boundaries related to the vehicle by connecting the ends of each lane.
  • the detection part 51 is configured to detect a drivable area in the road image, and determine a plurality of road boundaries related to the vehicle based on the outline of the drivable area.
  • In some embodiments of the present disclosure, as shown in Figure 6, the device further includes a first control part 53 configured to determine the driving path of the vehicle based on the road boundary that the vehicle can drive into, and to control the vehicle to drive according to the driving path.
  • In some embodiments of the present disclosure, as shown in Figure 7, the device further includes a second control part 54 configured to set a first region of interest based on the road boundary that the vehicle can drive into, and to obtain an image corresponding to the first region of interest at a first resolution; wherein the road image is obtained at a second resolution, and the second resolution is lower than the first resolution.
  • In some embodiments of the present disclosure, as shown in Figure 7, the device further includes a second control part 54 configured to set a second region of interest based on the road boundary that the vehicle can drive into, and to obtain an image corresponding to the second region of interest at a first frame rate; wherein the road image is obtained at a second frame rate, and the second frame rate is lower than the first frame rate.
  • In the embodiments of the present disclosure, the device is applied to an electronic device. In practical applications, the detection part 51, the selection part 52, the first control part 53 and the second control part 54 in the device may each be implemented by a central processing unit (CPU), a digital signal processor (DSP), a microcontroller unit (MCU) or a field-programmable gate array (FPGA).
  • FIG. 8 is a schematic diagram of the hardware composition of the electronic device according to the embodiment of the present disclosure.
  • As shown in FIG. 8, the electronic device includes a memory 82, a processor 81, and a computer program stored in the memory 82 and executable on the processor 81; when the processor 81 executes the program, the steps of the road boundary detection method described in the embodiments of the present disclosure are implemented.
  • the electronic device may also include a user interface 83 and a network interface 84.
  • the user interface 83 may include a display, keyboard, mouse, trackball, click wheel, keys, buttons, touch pad or touch screen, etc.
  • In some embodiments, the various components in the electronic device are coupled together through a bus system 85. It can be understood that the bus system 85 is used to implement connection and communication between these components. In addition to a data bus, the bus system 85 also includes a power bus, a control bus and a status signal bus. However, for clarity of illustration, the various buses are all labeled as the bus system 85 in FIG. 8.
  • It can be understood that the memory 82 may be a volatile memory or a non-volatile memory, or may include both volatile and non-volatile memories. The non-volatile memory may be a read-only memory (ROM), a programmable read-only memory (PROM), an erasable programmable read-only memory (EPROM), an electrically erasable programmable read-only memory (EEPROM), a ferromagnetic random access memory (FRAM), a flash memory, a magnetic surface memory, an optical disc, or a compact disc read-only memory (CD-ROM); the magnetic surface memory may be a magnetic disk memory or a magnetic tape memory. The volatile memory may be a random access memory (RAM), which is used as an external cache. By way of example and not limitation, many forms of RAM are available, such as static random access memory (SRAM), synchronous static random access memory (SSRAM), dynamic random access memory (DRAM), synchronous dynamic random access memory (SDRAM), double data rate synchronous dynamic random access memory (DDRSDRAM), enhanced synchronous dynamic random access memory (ESDRAM), SyncLink dynamic random access memory (SLDRAM), and direct Rambus random access memory (DRRAM). The memory 82 described in the embodiments of the present disclosure is intended to include, but is not limited to, these and any other suitable types of memory.
  • the methods disclosed in the above embodiments of the present disclosure can be applied to the processor 81 or implemented by the processor 81 .
  • The processor 81 may be an integrated circuit chip with signal processing capability. During implementation, each step of the above method can be completed by an integrated logic circuit of hardware in the processor 81 or by instructions in the form of software.
  • the above-mentioned processor 81 may be a general processor, a DSP, or other programmable logic devices, discrete gate or transistor logic devices, discrete hardware components, etc.
  • the processor 81 can implement or execute the disclosed methods, steps and logical block diagrams in the embodiments of the present disclosure.
  • a general-purpose processor may be a microprocessor or any conventional processor, etc.
  • the steps of the method disclosed in conjunction with the embodiments of the present disclosure can be directly implemented by a hardware decoding processor, or executed by a combination of hardware and software modules in the decoding processor.
  • the software module may be located in a storage medium, which is located in the memory 82.
  • the processor 81 reads the information in the memory 82 and completes the steps of the foregoing method in combination with its hardware.
  • In an exemplary embodiment, the electronic device may be implemented by one or more application-specific integrated circuits (ASICs), DSPs, programmable logic devices (PLDs), complex programmable logic devices (CPLDs), FPGAs, general-purpose processors, controllers, MCUs, microprocessors, or other electronic components, configured to execute the aforementioned method.
  • the present disclosure also provides a computer-readable storage medium, such as a memory 82 including a computer program.
  • the computer program can be executed by the processor 81 of the electronic device to complete the steps of the foregoing method.
  • the computer-readable storage medium can be FRAM, ROM, PROM, EPROM, EEPROM, Flash Memory, magnetic surface memory, optical disk, or CD-ROM; it can also be various devices including one or any combination of the above memories.
  • the computer-readable storage medium provided by the embodiment of the present disclosure has a computer program stored thereon, and when the program is executed by the processor, the steps of the road boundary detection method described in the embodiment of the present disclosure are implemented.
  • Embodiments of the present disclosure also provide a computer program product.
  • the computer program product includes a computer program or instructions.
  • When the computer program or instructions are run on an electronic device, the electronic device is caused to execute the steps of the road boundary detection method described in the embodiments of the present disclosure.
  • the disclosed devices and methods can be implemented in other ways.
  • the device embodiments described above are schematic.
  • For example, the division of parts is merely a logical functional division, and there may be other division manners in actual implementation; for example, multiple parts or components may be combined or integrated into another system, or some features may be ignored or not performed.
  • In addition, the coupling, direct coupling or communication connection between the components shown or discussed may be implemented through some interfaces, and the indirect coupling or communication connection between devices or parts may be electrical, mechanical or in other forms.
  • the parts described above as separate components may or may not be physically separated.
  • the components shown as parts may or may not be physical parts, that is, they may be located in one place or distributed to multiple network parts; Some or all of them may be selected according to actual needs to achieve the purpose of the embodiments of the present disclosure.
  • In addition, the functional parts in the embodiments of the present disclosure may all be integrated into one processing part, or each part may exist separately as one part, or two or more parts may be integrated into one part; the above integrated part may be implemented in the form of hardware, or in the form of hardware plus software functional parts.
  • Those of ordinary skill in the art will understand that all or some of the steps of the above method embodiments can be implemented by hardware related to program instructions. The aforementioned program can be stored in a computer-readable storage medium; when the program is executed, the steps of the above method embodiments are performed. The aforementioned storage medium includes various media that can store program code, such as removable storage devices, ROM, RAM, magnetic disks or optical discs.
  • Alternatively, if the above-mentioned integrated parts of the present disclosure are implemented in the form of software functional modules and sold or used as independent products, they may also be stored in a computer-readable storage medium. Based on this understanding, the technical solutions of the embodiments of the present disclosure, in essence or in the part that contributes to the prior art, can be embodied in the form of a software product; the computer software product is stored in a storage medium and includes a number of instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to execute all or part of the methods described in the various embodiments of the present disclosure.
  • the aforementioned storage media include: mobile storage devices, ROM, RAM, magnetic disks or optical disks and other media that can store program codes.
  • Embodiments of the present disclosure provide a road boundary detection method, device, electronic device and storage medium.
  • the method includes: identifying a road image collected by an image collection device installed on a vehicle, determining a plurality of road boundaries in the road image; and selecting a road boundary into which the vehicle can drive from the plurality of road boundaries.
  • With the technical solutions of the embodiments of the present disclosure, the road boundary that the vehicle can drive into can be determined on the basis of the identified road boundaries, in particular in scenarios with invisible road boundaries, thereby providing a sufficient basis for the vehicle's turning decisions at intersections.

Abstract

Embodiments of the present disclosure provide a road boundary detection method and apparatus, an electronic device, a storage medium and a computer program product. The method includes: identifying a road image collected by an image acquisition device provided on a vehicle, and determining a plurality of road boundaries in the road image; and selecting, from the plurality of road boundaries, a road boundary that the vehicle can drive into.

Description

一种道路边界检测方法、装置、电子设备、存储介质和计算机程序产品
相关申请的交叉引用
本公开基于申请号为202210303727.1、申请日为2022年03月24日、申请名称为“一种道路边界检测方法、装置、电子设备和存储介质”的中国专利申请提出,并要求该中国专利申请的优先权,该中国专利申请的全部内容在此引入本公开作为参考。
技术领域
本公开涉及但不限于计算机视觉技术领域,涉及一种道路边界检测方法、装置、电子设备、存储介质和计算机程序产品。
背景技术
基于摄像头感知的自动驾驶车辆在路口转弯的过程中,不仅需要检测出道路边界的位置,还需要检测出车辆可驶入的道路边界,为自动驾驶车辆路口转弯决策提供充分的依据。目前针对道路边界的场景,无法确定车辆可驶入的道路边界。
发明内容
为解决现有存在的技术问题,本公开实施例提供一种道路边界检测方法、装置、电子设备、存储介质和计算机程序产品。
为达到上述目的,本公开实施例的技术方案是这样实现的:
本公开实施例提供了一种道路边界检测方法,所述方法包括:
识别设置在车辆上的图像采集设备采集的道路图像,确定所述道路图像中的多个道路边界;
从所述多个道路边界中选择所述车辆能够驶入的道路边界。
在一些实施例中,从所述多个道路边界中选择所述车辆能够驶入的道路边界,包括:基于所述道路图像确定所述车辆所在的自车车道;
基于所述车辆所在的自车车道从所述多个道路边界中确定所述车辆能够 驶入的道路边界。
在一些实施例中,所述基于所述道路图像确定所述车辆所在的自车车道,包括:
识别所述道路图像中的交通标识;
基于所述交通标识确定所述车辆所在的自车车道。
在一些实施例中,所述基于所述道路图像确定所述车辆所在的自车车道,包括:
识别所述道路图像中的其他车辆的行驶方向;
基于所述其他车辆的行驶方向确定所述车辆所在的自车车道。
在一些实施例中,所述基于所述交通标识确定所述车辆所在的自车车道,包括:
响应于所述交通标识表示所述车辆所在的车道不是单向车道、且所述交通标识包括指定道路标线的情况下,基于所述指定道路标线确定所述车辆所在的自车车道。
在一些实施例中,基于所述其他车辆的行驶方向确定所述车辆所在的自车车道,包括:响应于所述其他车辆的行驶方向与所述车辆的行驶方向相反,基于所述其他车辆的所在车道确定所述车辆所在的自车车道。
在一些实施例中,所述基于所述车辆所在的自车车道从所述多个道路边界中确定所述车辆可以驶入的道路边界,包括:基于所述交通标识以及所述车辆所在的自车车道从所述多个道路边界中确定所述车辆可以驶入的道路边界。
在一些实施例中,所述基于所述车辆所在的自车车道从所述多个道路边界中确定所述车辆可以驶入的道路边界,包括:获得所述车辆所在的位置信息,从预先获得的地图数据确定与所述位置信息相关的地图子数据,基于所述地图子数据从所述多个道路边界中确定所述车辆可以驶入的道路边界;所述地图数据至少包括道路数据、道路标识数据和交通标志牌数据。
在一些实施例中,所述确定所述道路图像中的多个道路边界,包括:检测所述道路图像中的多个车道,通过连接各车道的末端确定所述多个道路边 界。
在一些实施例中,所述确定所述道路图像中的多个道路边界,包括:检测所述道路图像中的可行驶区域,基于所述可行驶区域的轮廓线确定所述图像中的多个道路边界。
在一些实施例中,所述方法还包括:基于所述车辆能够驶入的道路边界确定所述车辆的行驶路径,按照所述行驶路径控制所述车辆行驶。
在一些实施例中,所述方法还包括:基于所述车辆能够驶入的道路边界设置第一感兴趣区域,按照第一分辨率获得所述第一感兴趣区域对应的图像;其中,所述道路图像按照第二分辨率获得,所述第二分辨率小于所述第一分辨率。
在一些实施例中,所述方法还包括:基于所述车辆能够驶入的道路边界设置第二感兴趣区域,按照第一帧速率获得所述第二感兴趣区域对应的图像;其中,所述道路图像按照第二帧速率获得,所述第二帧速率小于所述第一帧速度。
本公开实施例还提供了一种道路边界检测装置,所述装置包括:检测部分和选择部分;其中,
所述检测部分,被配置为识别设置在车辆上的图像采集设备采集的道路图像,确定所述道路图像中的多个道路边界;
所述选择部分,被配置为从所述多个道路边界中选择所述车辆能够驶入的道路边界。
本公开实施例还提供了一种计算机可读存储介质,其上存储有计算机程序,在所述程序被处理器执行的过程中实现本公开实施例所述方法的步骤。
本公开实施例还提供了一种电子设备,包括存储器、处理器及存储在存储器上并可在处理器上运行的计算机程序,在所述处理器执行所述程序的过程中实现本公开实施例所述方法的步骤。
本公开实施例还提供了一种计算机程序产品,所述计算机程序产品包括计算机程序或指令,在所述计算机程序或指令在电子设备上运行的情况下,使得所述电子设备执行本公开实施例所述方法的步骤。
本公开实施例提供的道路边界检测方法能够在识别出的道路边界的基础上,确定车辆能够驶入的道路边界,尤其在不可见的道路边界的场景下确定车辆能够驶入的道路边界,为车辆路口转弯决策提供充分的依据。
应当理解的是,以上的一般描述和后文的细节描述仅是示例性和解释性的,而非限制本公开。
附图说明
为了更清楚地说明本公开实施例的技术方案,下面将对本公开实施例中所需要使用的附图进行说明。
此处的附图被并入说明书中并构成本说明书的一部分,这些附图示出了符合本公开的实施例,并与说明书一起用于说明本公开的技术方案。
图1a为本公开实施例的道路边界检测方法中的道路边界的示意图;
图1b为本公开实施例的道路边界检测方法中的能够驶入的道路边界的示意图;
图2a为本公开实施例的应用场景示意图一;
图2b为本公开实施例的应用场景示意图二;
图3为本公开实施例的道路边界检测方法的流程示意图一;
图4a为本公开实施例的道路边界检测方法中的自车车道的场景示意图一;
图4b为本公开实施例的道路边界检测方法中的自车车道的场景示意图二;
图5为本公开实施例的道路边界检测装置的组成结构示意图一;
图6为本公开实施例的道路边界检测装置的组成结构示意图二;
图7为本公开实施例的道路边界检测装置的组成结构示意图三;
图8为本公开实施例的电子设备的硬件组成结构示意图。
具体实施方式
下面结合附图及具体实施例对本公开作进一步详细的说明。
在对本公开实施例的道路边界检测方案进行说明之前,首先对一些概念进行简单阐述。
图1a和图1b分别为本公开实施例的道路边界检测方法中的道路边界和 能够驶入的道路边界的示意图;道路边界除了包括车辆130所在的车道两侧的边界110,还包括垂直于车道两侧的边界120,如图1a中所示,包括垂直于车道两侧的边界120,以下实施例中将该边界统一称为道路边界,在场景所示的路口中,可见八个道路边界。
参照图1b所示,车道为双向车道,则对应于车辆行驶方向的道路边界为车辆能够驶入的道路边界。假设车辆130靠左侧行驶,则针对每个方向的两个道路边界120,均是左侧的道路边界为车辆能够驶入的道路边界140。
图2a为本公开实施例的应用场景示意图一,如图2a所示,假设在图1a和图1b所示的路口场景下,路口的西南角具有一个遮挡物210(图像的上下左右分别对应北、南、西和东),通常情况下,该遮挡物210会遮挡由南向北行驶的车辆的视角,图2b为本公开实施例的应用场景示意图二,如图2b所示,使得车辆230中的驾驶员或传感器无法获得被遮挡物210遮挡的一部分区域的信息,这部分区域可称为未知区域220,在场景中存在的八个道路边界中,被遮挡物遮挡或者由于其他原因(例如距离太远等原因)使得车辆230中的驾驶员或传感器无法感知的道路边界称为不可见的道路边界250(图2a和图2b中的粗实线),车辆230中的驾驶员或传感器可以感知的道路边界是可见的道路边界240(图2a中的粗虚线)。
为解决上述问题,本公开实施例中,通过识别设置在车辆上的图像采集设备采集的道路图像,确定所述道路图像中的多个道路边界;从所述多个道路边界中选择所述车辆能够驶入的道路边界,能够提识别出道路边界(尤其是不可见的道路边界),并且可实现对车辆能够驶入的道路边界的确定。
需要说明的是,在本公开实施例中,术语“包括”、“包含”或者其任何其他变体意在涵盖非排他性的包含,从而使得包括一系列要素的方法或者装置不仅包括所明确记载的要素,而且还包括没有明确列出的其他要素,或者是还包括为实施方法或者装置所固有的要素。在没有更多限制的情况下,由语句“包括一个......”限定的要素,并不排除在包括该要素的方法或者装置中还存在另外的相关要素(例如方法中的步骤或者装置中的部分,例如的部分可以是部分电路、部分处理器、部分程序或软件等等)。
例如,本公开实施例提供的道路边界检测方法包含了一系列的步骤,但是本公开实施例提供的道路边界检测方法不限于所记载的步骤,同样地,本公开实施例提供的道路边界检测装置包括了一系列模块,但是本公开实施例提供的装置不限于包括所明确记载的模块,还可以包括为获取相关信息、或基于信息进行处理时所需要设置的模块。
本文中术语“和/或”,仅仅是一种描述关联对象的关联关系,表示可以存在三种关系,例如,A和/或B,可以表示:单独存在A,同时存在A和B,单独存在B这三种情况。另外,本文中术语“至少一种”表示多种中的任意一种或多种中的至少两种的任意组合,例如,包括A、B、C中的至少一种,可以表示包括从A、B和C构成的集合中选择的任意一个或多个元素。
本公开实施例提供了一种道路边界检测方法。图3为本公开实施例的道路边界检测方法的流程示意图一;如图3所示,所述方法包括:
步骤S301:识别设置在车辆上的图像采集设备采集的道路图像,确定所述道路图像中的多个道路边界;
步骤S302:从所述多个道路边界中选择所述车辆能够驶入的道路边界。
本公开实施例的道路边界检测方法应用于电子设备中,所述电子设备可以是车载设备,也可以是云平台或其他计算机设备。示例性的,车载设备可以是安装在车辆上的瘦客户机、厚客户机、基于微处理器的系统、小型计算机系统,等等,云平台可以是包括小型计算机系统或大型计算机系统的分布式云计算技术环境等等。
本公开实施例中,车载设备可以与车辆的传感器、定位装置等通信连接,车载设备可以通过通信连接获取车辆的传感器采集的数据、以及定位装置上报的地理位置信息等。示例性的,车辆的传感器可以是以下至少之一:毫米波雷达、激光雷达、摄像头等设备中;定位装置可以是基于以下至少一种定位系统的用于提供定位服务的装置:全球定位系统(GPS,Global Positioning System)、北斗卫星导航系统或伽利略卫星导航系统。
在一些实施例中,车载设备可以为高级辅助驾驶系统(ADAS,Advanced Driving Assistant System),ADAS设置在车辆上,ADAS可以从车辆的定位装 置中获取车辆的实时位置信息,和/或,ADAS可以从车辆的传感器中获得表示车辆周围环境信息的图像数据、雷达数据等等。其中,ADAS可以将包括车辆的实时位置信息的车辆行驶数据发送至云平台,如此,云平台可以接收到车辆的实时位置信息和/或表示车辆周围环境信息的图像数据、雷达数据等等。
本公开实施例中,通过设置在车辆上的图像采集设备(即上述传感器,如摄像头)获得道路图像,图像采集设备伴随车辆的移动而实时采集车辆周围的道路图像或环境图像。进一步地,通过对道路图像进行检测识别,确定道路图像中与车辆相关的多个道路边界,进而从所述多个道路边界中选择所述车辆能够驶入的道路边界。
采用本公开实施例的技术方案,电子设备能够在识别出的道路边界的基础上,确定车辆能够驶入的道路边界,尤其在不可见的道路边界的场景下确定车辆能够驶入的道路边界,为车辆路口转弯决策提供充分的依据。
在本公开的一些实施例中,所述确定所述道路图像中的多个道路边界,包括:检测所述道路图像中的多个车道,通过连接各车道的末端确定所述多个道路边界。
本公开实施例中,可通过第一网络检测道路图像中的多个车道,也即检测道路图像中的多个车道线。示例性的,通过第一网络对道路图像进行处理,获得道路图像中的车道线;进而通过连接车道线的末端边缘,得到与车辆相关的多个道路边界。
在其他实施方式中,也可采用其他图像检测方案检测道路图像中的多个车道。示例性的,首先对道路图像进行灰度化处理,对检测灰度化处理后的道路图像中的车道边缘,例如采用边缘检测算子进行边缘检测;进一步对处理后的图像进行二值化处理,从而得到道路图像中的车道线。
在另一些实施例中,所述确定所述道路图像中的多个道路边界,包括:检测所述道路图像中的可行驶区域,基于所述可行驶区域的轮廓线确定所述图像中的多个道路边界。
本公开实施例中,可通过第二网络检测道路图像中的可行驶区域;所述 可行驶区域(Freespace),也可称为可通行区域,表示车辆可行驶的区域或者车辆能够行驶的区域。在道路图像中,除了当前车辆之外,通常还包括其他车辆、行人、树木、道路边缘等,上述例如其他车辆、行人、树木、道路边缘所在区域均是当前车辆不可行驶的区域。因此,通过第二网络对道路图像进行处理,去除道路图像中例如其他车辆、行人、树木、道路边缘所在区域,得到车辆的可行驶区域。
在又一些实施例中,所述确定所述道路图像中的多个道路边界,包括:利用第三网络检测所述道路图像,确定与所述车辆相关的多个道路边界。
本公开实施例中,可利用预先训练完成的第三网络对道路图像进行处理,得到与所述车辆相关的多个道路边界。
其中,上述第一网络、第二网络和第三网络均可以是深度神经网络(DNN,Deep Neural Networks)。
在本公开的一些实施例中,从所述多个道路边界中选择所述车辆能够驶入的道路边界,包括:基于所述道路图像确定所述车辆所在的自车车道;基于所述车辆所在的自车车道从所述多个道路边界中确定所述车辆能够驶入的道路边界。
本公开实施例中,确定如图1a中所示的道路边界后,进而可确定车辆所在的自车车道,基于所述车辆所在的自车车道从所述多个道路边界中确定所述车辆可以驶入的道路边界。
其中,车辆行驶方向对应的道路边界为所述车辆能够驶入的道路边界。示例性的,如图1b所示,在车辆为靠左侧行驶的情况下,左侧车道内的道路边界为车辆能够驶入的道路边界,则右侧车道内的道路边界为车辆不能够驶入的道路边界;相应的,在车辆为靠右行驶的情况下,右侧车道内的道路边界为车辆能够驶入的道路边界,则左侧车道内的道路边界为车辆不能够驶入的道路边界。其中,上述“左侧”和“右侧”是相对的;在人按照车辆的行驶方向面向图1a所示的道路边界的情况下,以车道实线分割的两个车道,靠左的车道称为左侧车道,靠右的车道称为右侧车道。
在一些实施例中,所述基于所述道路图像确定所述车辆所在的自车车道, 包括:识别所述道路图像中的交通标识;基于所述交通标识确定所述车辆所在的自车车道。
在另一些实施例中,所述基于所述道路图像确定所述车辆所在的自车车道,包括:识别所述道路图像中的其他车辆的行驶方向;基于所述其他车辆的行驶方向确定所述车辆所在的自车车道。
本公开实施例中,电子设备可基于识别出的交通标识和/或其他车辆的行驶方向,确定所述车辆所在的自车车道。
其中,示例性的,所述交通标识包括以下至少之一:交通标志牌指示的标识和道路标识等等。其中,所述交通标志牌用于指示交通法规以及道路信息的图形符号,通常设置在路口或道路边缘,用以管理交通、指示行车方向以保证道路畅通与行车安全。所述道路标识例如道路上的标线标识(例如白色实线、白色虚线、黄色实线、双黄实线等等)、道路上标识的道路属性的标识(例如直行标识、转弯标识、限速标识、公共汽车专用标识等等,也即道路上通过人工方式绘制的标识)。
本公开实施例中,电子设备可通过在道路图像中检测出的其他车辆的行驶方向确定所述车辆所在的自车车道;电子设备还可通过在道路图像中检测出的交通标识确定所述车辆所在的自车车道;电子设备还可通过在道路图像中检测出的交通标识和其他车辆的行驶方向确定所述车辆所在的自车车道。
在一些实施例中,所述基于所述交通标识确定所述车辆所在的自车车道,包括:响应于所述交通标识表示所述车辆所在的车道不是单向车道、且所述交通标识包括指定道路标线的情况下,基于所述指定道路标线确定所述车辆所在的自车车道。
本公开实施例中,所述指定道路标线用于指示同向行驶的交通流或者分隔对向行驶的交通流。示例性的,所述指定道路标线例如实线(例如黄色实线、双黄实线等等)、点状线(例如白色虚线)。
在另一些实施例中,所述基于所述其他车辆的行驶方向确定所述车辆所在的自车车道,包括:响应于所述其他车辆的行驶方向与所述车辆的行驶方向相反,基于所述其他车辆的所在车道确定所述车辆所在的自车车道。
作为一种示例,图4a为本公开实施例的道路边界检测方法中的自车车道的场景示意图一,如图4a所示,在通过对交通标识的识别,确定车道不是单向车道,通过道路图像识别出实线(例如图4a中的粗实线410),且车辆400处于左侧行驶的情况下,可确定实线左侧的车道为自车车道。本示例中,该粗实线410则为上述指定道路标线。
作为另一种示例,图4b为本公开实施例的道路边界检测方法中的自车车道的场景示意图二,如图4b所示,在通过对道路图像的识别,识别出点状线(例如图4b中的粗虚线420),且车辆400处于左侧行驶的情况下,可确定点状线右侧的车道可能为自车车道;进一步可基于其他交通标识或者通过对道路图像的识别结果确定自车车道。
作为又一种示例,在通过对道路图像的识别,识别出道路图像中存在与当前车辆行驶方向相反的其他车辆的情况下,可确定其他车辆所在的车道不是所述车辆的自车车道。进而可通过在车道中去除其他车辆(与当前车辆行驶方向相反的车辆)所在的车道,得到所述车辆的自车车道。在其他实施例中,确定其他车辆所在的车道不是所述车辆的自车车道后,进一步可从确定其他车辆所在的车道对应的道路边界,从步骤S301中确定的多个道路边界中去除其他车辆(与当前车辆行驶方向相反的车辆)所在的车道对应的道路边界,进而得到所述车辆可以驶入的道路边界。
在本公开的一些实施例中,所述基于所述车辆所在的自车车道从所述多个道路边界中确定所述车辆能够驶入的道路边界,包括:基于所述交通标识以及所述车辆所在的自车车道从所述多个道路边界中确定所述车辆能够驶入的道路边界。
本公开实施例中,电子设备可通过实时对道路图像中的交通标识进行识别,结合所述车辆所在的自车车道从所述多个道路边界中确定所述车辆可以驶入的道路边界。
示例性的,所述交通标识可包括以下至少一种标识:单向行驶标识、环形交叉口右转交通标识、禁止在指定方向外行驶标识、禁止进入标识、交通封闭标识、禁止车辆交叉标识、禁止转弯标识、仅限行人标识、仅限自行车 标识、仅限自行车和行人标识、停车线、车道线等等。
本公开实施例中,电子设备确定与车辆相关的多个道路边界、且确定车辆所在的自车车道后,可根据车辆周围设置的各交通标识,确定车辆可以驶入的道路边界。
在本公开的另一些实施例中,所述基于所述车辆所在的自车车道从所述多个道路边界中确定所述车辆能够驶入的道路边界,包括:获得所述车辆所在的位置信息,从预先获得的地图数据确定与所述位置信息相关的地图子数据,基于所述地图子数据从所述多个道路边界中确定所述车辆能够驶入的道路边界;所述地图数据至少包括道路数据、道路标识数据和交通标志牌数据。
本公开实施例中,电子设备可预先获得地图数据,所述地图数据例如可以是包含有道路信息和交通标识信息等先验信息的数据;电子设备可根据车辆所在的位置信息确定车辆的行驶方向,进而根据车辆所在的位置信息和行驶方向确定车辆可以行驶的路线,根据车辆可以行驶的路线确定车辆能够驶入的道路边界或者不能够驶入的道路边界。
在本公开的一些实施例中,所述方法还包括:基于所述车辆能够驶入的道路边界确定所述车辆的行驶路径,按照所述行驶路径控制所述车辆行驶。
本公开实施例中,针对车辆能够驶入的道路边界,电子设备可确定车辆的行驶路径,电子设备可控制车辆按照行驶路径行驶。
在本公开的一些实施例中,所述方法还包括:基于所述车辆能够驶入的道路边界设置第一感兴趣区域,按照第一分辨率获得所述第一感兴趣区域对应的图像;其中,所述道路图像按照第二分辨率获得,所述第二分辨率小于所述第一分辨率。
在本公开的另一些实施例中,所述方法还包括:基于所述车辆能够驶入的道路边界设置第二感兴趣区域,按照第一帧速率获得所述第二感兴趣区域对应的图像;其中,所述道路图像按照第二帧速率获得,所述第二帧速率小于所述第一帧速度。
本公开实施例中,电子设备基于车辆能够驶入的道路边界设置感兴趣区域(ROI,Region of Interest),即前述第一感兴趣区域和第二感兴趣区域。一 方面,电子设备在针对道路环境获得道路图像的过程中,可采用第二分辨率(也可称为低分辨率)获得,而针对第一感兴趣区域,可采用高于第二分辨率的第一分辨率(也可称为高分辨率)获得,一边针对第一感兴趣区域采集更高质量的图像,便于后续针对第一感兴趣区域对应的图像进行对象识别。另一方面,电子设备在针对道路环境获得道路图像的过程中,可采用第二帧速率(也可称为低帧速率)获得,而针对第二感兴趣区域,可采用高于第二帧速率的第一帧速率(也可称为高帧速率)获得,便于后续针对第二感兴趣区域对应的图像进行对象识别。
基于上述实施例,本公开实施例还提供了一种道路边界检测装置。图5为本公开实施例的道路边界检测装置的组成结构示意图一;如图5所示,所述装置包括:检测部分51和选择部分52;其中,
所述检测部分51,被配置为识别设置在车辆上的图像采集设备采集的道路图像,确定所述道路图像中的多个道路边界;
所述选择部分52,被配置为从所述多个道路边界中选择所述车辆能够驶入的道路边界。
在本公开的一些实施例中,所述选择部分52,被配置为基于所述道路图像确定所述车辆所在的自车车道;基于所述车辆所在的自车车道从所述多个道路边界中确定所述车辆能够驶入的道路边界。
在本公开的一些实施例中,所述选择部分52,被配置为识别所述道路图像中的交通标识;基于所述交通标识确定所述车辆所在的自车车道。
在本公开的一些实施例中,所述选择部分52,被配置为识别所述道路图像中的其他车辆的行驶方向;基于所述其他车辆的行驶方向确定所述车辆所在的自车车道。
在本公开的一些实施例中,所述选择部分52,被配置为响应于所述交通标识表示所述车辆所在的车道不是单向车道、且所述交通标识包括指定道路标线的情况下,基于所述指定道路标线确定所述车辆所在的自车车道。
在本公开的一些实施例中,所述选择部分52,被配置为响应于所述其他车辆的行驶方向与所述车辆的行驶方向相反,基于所述其他车辆的所在车道 确定所述车辆所在的自车车道。
在本公开的一些实施例中,所述选择部分52,被配置为基于所述交通标识以及所述车辆所在的自车车道从所述多个道路边界中确定所述车辆能够驶入的道路边界。
在本公开的一些实施例中,所述选择部分52,被配置为获得所述车辆所在的位置信息,从预先获得的地图数据确定与所述位置信息相关的地图子数据,基于所述地图子数据从所述多个道路边界中确定所述车辆能够驶入的道路边界;所述地图数据至少包括道路数据、道路标识数据和交通标志牌数据。
在本公开的一些实施例中,所述检测部分51,被配置为检测所述道路图像中的多个车道,通过连接各车道的末端确定与所述车辆相关的多个道路边界。
在本公开的一些实施例中,所述检测部分51,被配置为检测所述道路图像中的可行驶区域,基于所述可行驶区域的轮廓线确定与所述车辆相关的多个道路边界。
在本公开的一些实施例中,如图6所示,所述装置还包括第一控制部分53,用于基于所述车辆能够驶入的道路边界确定所述车辆的行驶路径,按照所述行驶路径控制所述车辆行驶。
在本公开的一些实施例中,如图7所示,所述装置还包括第二控制部分54,用于基于所述车辆能够驶入的道路边界设置第一感兴趣区域,按照第一分辨率获得所述第一感兴趣区域对应的图像;其中,所述道路图像按照第二分辨率获得,所述第二分辨率小于所述第一分辨率。
在本公开的一些实施例中,如图7所示,所述装置还包括第二控制部分54,用于基于所述车辆能够驶入的道路边界设置第二感兴趣区域,按照第一帧速率获得所述第二感兴趣区域对应的图像;其中,所述道路图像按照第二帧速率获得,所述第二帧速率小于所述第一帧速度。
本公开实施例中,所述装置应用于电子设备中。所述装置中的检测部分51、选择部分52、第一控制部分53和第二控制部分54,在实际应用中均可由中央处理器(CPU,Central Processing Unit)、数字信号处理器(DSP,Digital  Signal Processor)、微控制单元(MCU,Microcontroller Unit)或可编程门阵列(FPGA,Field-Programmable Gate Array)实现。
需要说明的是:上述实施例提供的道路边界检测装置在进行道路边界检测过程中,仅以上述各程序模块的划分进行举例说明,实际应用中,可以根据需要而将上述处理分配由不同的程序模块完成,即将装置的内部结构划分成不同的程序模块,以完成以上描述的全部或者部分处理。另外,上述实施例提供的道路边界检测装置与道路边界检测方法实施例属于同一构思,其具体实现过程详见方法实施例。
本公开实施例还提供了一种电子设备,图8为本公开实施例的电子设备的硬件组成结构示意图,如图8所示,所述电子设备包括存储器82、处理器81及存储在存储器82上并可在处理器81上运行的计算机程序,所述处理器81执行所述程序时实现本公开实施例所述道路边界检测方法的步骤。
在一些实施例中,所述电子设备还可包括用户接口83和网络接口84。其中,用户接口83可以包括显示器、键盘、鼠标、轨迹球、点击轮、按键、按钮、触感板或者触摸屏等。
在一些实施例中,电子设备中的各个组件通过总线系统85耦合在一起。可理解,总线系统85用于实现这些组件之间的连接通信。总线系统85除包括数据总线之外,还包括电源总线、控制总线和状态信号总线。但是为了清楚说明起见,在图8中将各种总线都标为总线系统85。
可以理解,存储器82可以是易失性存储器或非易失性存储器,也可包括易失性和非易失性存储器两者。其中,非易失性存储器可以是只读存储器(ROM,Read Only Memory)、可编程只读存储器(PROM,Programmable Read-Only Memory)、可擦除可编程只读存储器(EPROM,Erasable Programmable Read-Only Memory)、电可擦除可编程只读存储器(EEPROM,Electrically Erasable Programmable Read-Only Memory)、磁性随机存取存储器(FRAM,Ferromagnetic Random Access Memory)、快闪存储器(Flash Memory)、磁表面存储器、光盘、或只读光盘(CD-ROM,Compact Disc Read-Only Memory);磁表面存储器可以是磁盘存储器或磁带存储器。易失性存储器可 以是随机存取存储器(RAM,Random Access Memory),其用作外部高速缓存。通过示例性但不是限制性说明,许多形式的RAM可用,例如静态随机存取存储器(SRAM,Static Random Access Memory)、同步静态随机存取存储器(SSRAM,Synchronous Static Random Access Memory)、动态随机存取存储器(DRAM,Dynamic Random Access Memory)、同步动态随机存取存储器(SDRAM,Synchronous Dynamic Random Access Memory)、双倍数据速率同步动态随机存取存储器(DDRSDRAM,Double Data Rate Synchronous Dynamic Random Access Memory)、增强型同步动态随机存取存储器(ESDRAM,Enhanced Synchronous Dynamic Random Access Memory)、同步连接动态随机存取存储器(SLDRAM,SyncLink Dynamic Random Access Memory)、直接内存总线随机存取存储器(DRRAM,Direct Rambus Random Access Memory)。本公开实施例描述的存储器82旨在包括但不限于这些和任意其它适合类型的存储器。
上述本公开实施例揭示的方法可以应用于处理器81中,或者由处理器81实现。处理器81可能是一种集成电路芯片,具有信号的处理能力。在实现过程中,上述方法的各步骤可以通过处理器81中的硬件的集成逻辑电路或者软件形式的指令完成。上述的处理器81可以是通用处理器、DSP,或者其他可编程逻辑器件、分立门或者晶体管逻辑器件、分立硬件组件等。处理器81可以实现或者执行本公开实施例中的公开的各方法、步骤及逻辑框图。通用处理器可以是微处理器或者任何常规的处理器等。结合本公开实施例所公开的方法的步骤,可以直接体现为硬件译码处理器执行完成,或者用译码处理器中的硬件及软件模块组合执行完成。软件模块可以位于存储介质中,该存储介质位于存储器82,处理器81读取存储器82中的信息,结合其硬件完成前述方法的步骤。
在示例性实施例中,电子设备可以被一个或多个应用专用集成电路(ASIC,Application Specific Integrated Circuit)、DSP、可编程逻辑器件(PLD,Programmable Logic Device)、复杂可编程逻辑器件(CPLD,Complex Programmable Logic Device)、FPGA、通用处理器、控制器、MCU、微处理器 (Microprocessor)、或其他电子元件实现,用于执行前述方法。
在示例性实施例中,本公开实施例还提供了一种计算机可读存储介质,例如包括计算机程序的存储器82,上述计算机程序可由电子设备的处理器81执行,以完成前述方法所述步骤。计算机可读存储介质可以是FRAM、ROM、PROM、EPROM、EEPROM、Flash Memory、磁表面存储器、光盘、或CD-ROM等存储器;也可以是包括上述存储器之一或任意组合的各种设备。
本公开实施例提供的计算机可读存储介质,其上存储有计算机程序,在所述程序被处理器执行的过程中实现本公开实施例所述的道路边界检测方法的步骤。
本公开实施例还提供了一种计算机程序产品,所述计算机程序产品包括计算机程序或指令,在所述计算机程序或指令在电子设备上运行的情况下,使得所述电子设备执行本公开实施例所述的道路边界检测方法的步骤。
本公开所提供的几个方法实施例中所揭露的方法,在不冲突的情况下可以任意组合,得到新的方法实施例。
本公开所提供的几个产品实施例中所揭露的特征,在不冲突的情况下可以任意组合,得到新的产品实施例。
本公开所提供的几个方法或设备实施例中所揭露的特征,在不冲突的情况下可以任意组合,得到新的方法实施例或设备实施例。
在公开所提供的几个实施例中,应该理解到,所揭露的设备和方法,可以通过其它的方式实现。以上所描述的设备实施例是示意性的,例如,所述部分的划分,为一种逻辑功能划分,实际实现时可以有另外的划分方式,如:多个部分或组件可以结合,或可以集成到另一个系统,或一些特征可以忽略,或不执行。另外,所显示或讨论的各组成部分相互之间的耦合、或直接耦合、或通信连接可以是通过一些接口,设备或部分的间接耦合或通信连接,可以是电性的、机械的或其它形式的。
上述作为分离部件说明的部分可以是、或也可以不是物理上分开的,作为部分显示的部件可以是、或也可以不是物理部分,即可以位于一个地方,也可以分布到多个网络部分上;可以根据实际的需要选择其中的部分或全部 部分来实现本公开实施例方案的目的。
另外,在本公开各实施例中的各功能部分可以全部集成在一个处理部分中,也可以是各部分分别单独作为一个部分,也可以两个或两个以上部分集成在一个部分中;上述集成的部分既可以采用硬件的形式实现,也可以采用硬件加软件功能部分的形式实现。
本领域普通技术人员可以理解:实现上述方法实施例的全部或部分步骤可以通过程序指令相关的硬件来完成,前述的程序可以存储于一计算机可读取存储介质中,该程序在执行时,执行包括上述方法实施例的步骤;而前述的存储介质包括:移动存储设备、ROM、RAM、磁碟或者光盘等各种可以存储程序代码的介质。
或者,本公开上述集成的部分如果以软件功能模块的形式实现并作为独立的产品销售或使用时,也可以存储在一个计算机可读取存储介质中。基于这样的理解,本公开实施例的技术方案本质上或者说对现有技术做出贡献的部分可以以软件产品的形式体现出来,该计算机软件产品存储在一个存储介质中,包括若干指令用以使得一台计算机设备(可以是个人计算机、服务器、或者网络设备等)执行本公开各个实施例所述方法的全部或部分。而前述的存储介质包括:移动存储设备、ROM、RAM、磁碟或者光盘等各种可以存储程序代码的介质。
以上所述,仅为本公开的具体实施方式,但本公开的保护范围并不局限于此,任何熟悉本技术领域的技术人员在本公开揭露的技术范围内,可轻易想到变化或替换,都应涵盖在本公开的保护范围之内。因此,本公开的保护范围应以所述权利要求的保护范围为准。
工业实用性
本公开实施例提供一种道路边界检测方法、装置、电子设备和存储介质。所述方法包括:识别设置在车辆上的图像采集设备采集的道路图像,确定所述道路图像中的多个道路边界;从所述多个道路边界中选择所述车辆能够驶入的道路边界。采用本公开实施例的技术方案,能够在识别出的道路边界的 基础上,确定车辆能够驶入的道路边界,尤其在不可见的道路边界的场景下确定车辆能够驶入的道路边界,为车辆路口转弯决策提供充分的依据。

Claims (17)

  1. A road boundary detection method, the method being executed by an electronic device, the method comprising:
    identifying a road image collected by an image acquisition device provided on a vehicle, and determining a plurality of road boundaries in the road image;
    selecting, from the plurality of road boundaries, a road boundary that the vehicle can drive into.
  2. The method according to claim 1, wherein the selecting, from the plurality of road boundaries, a road boundary that the vehicle can drive into comprises:
    determining, based on the road image, the own-vehicle lane in which the vehicle is located;
    determining, from the plurality of road boundaries, the road boundary that the vehicle can drive into based on the own-vehicle lane in which the vehicle is located.
  3. The method according to claim 2, wherein the determining, based on the road image, the own-vehicle lane in which the vehicle is located comprises:
    identifying a traffic sign in the road image;
    determining, based on the traffic sign, the own-vehicle lane in which the vehicle is located.
  4. The method according to claim 2, wherein the determining, based on the road image, the own-vehicle lane in which the vehicle is located comprises:
    identifying the traveling direction of another vehicle in the road image;
    determining, based on the traveling direction of the other vehicle, the own-vehicle lane in which the vehicle is located.
  5. The method according to claim 3, wherein the determining, based on the traffic sign, the own-vehicle lane in which the vehicle is located comprises:
    in response to the traffic sign indicating that the lane in which the vehicle is located is not a one-way lane and the traffic sign comprising a designated road marking, determining, based on the designated road marking, the own-vehicle lane in which the vehicle is located.
  6. The method according to claim 4, wherein the determining, based on the traveling direction of the other vehicle, the own-vehicle lane in which the vehicle is located comprises:
    in response to the traveling direction of the other vehicle being opposite to the traveling direction of the vehicle, determining, based on the lane in which the other vehicle is located, the own-vehicle lane in which the vehicle is located.
  7. The method according to claim 3 or 5, wherein the determining, from the plurality of road boundaries, the road boundary that the vehicle can drive into based on the own-vehicle lane in which the vehicle is located comprises:
    determining, from the plurality of road boundaries, the road boundary that the vehicle can drive into based on the traffic sign and the own-vehicle lane in which the vehicle is located.
  8. The method according to any one of claims 2 to 6, wherein the determining, from the plurality of road boundaries, the road boundary that the vehicle can drive into based on the own-vehicle lane in which the vehicle is located comprises:
    obtaining location information of the vehicle, determining map sub-data related to the location information from map data obtained in advance, and determining, from the plurality of road boundaries, the road boundary that the vehicle can drive into based on the map sub-data; the map data comprising at least road data, road marking data and traffic signboard data.
  9. The method according to any one of claims 1 to 8, wherein the determining a plurality of road boundaries in the road image comprises:
    detecting a plurality of lanes in the road image, and determining the plurality of road boundaries by connecting the ends of the lanes.
  10. The method according to any one of claims 1 to 8, wherein the determining a plurality of road boundaries in the road image comprises:
    detecting a drivable area in the road image, and determining the plurality of road boundaries in the image based on a contour of the drivable area.
  11. The method according to any one of claims 1 to 10, wherein the method further comprises:
    determining a driving path of the vehicle based on the road boundary that the vehicle can drive into, and controlling the vehicle to drive according to the driving path.
  12. The method according to any one of claims 1 to 11, wherein the method further comprises:
    setting a first region of interest based on the road boundary that the vehicle can drive into, and obtaining an image corresponding to the first region of interest at a first resolution; wherein the road image is obtained at a second resolution, and the second resolution is lower than the first resolution.
  13. The method according to any one of claims 1 to 11, wherein the method further comprises:
    setting a second region of interest based on the road boundary that the vehicle can drive into, and obtaining an image corresponding to the second region of interest at a first frame rate; wherein the road image is obtained at a second frame rate, and the second frame rate is lower than the first frame rate.
  14. A road boundary detection device, the device comprising a detection part and a selection part; wherein
    the detection part is configured to identify a road image collected by an image acquisition device provided on a vehicle, and to determine a plurality of road boundaries in the road image;
    the selection part is configured to select, from the plurality of road boundaries, a road boundary that the vehicle can drive into.
  15. A computer-readable storage medium having a computer program stored thereon, wherein when the program is executed by a processor, the steps of the method according to any one of claims 1 to 13 are implemented.
  16. An electronic device comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein when the processor executes the program, the steps of the method according to any one of claims 1 to 13 are implemented.
  17. A computer program product comprising a computer program or instructions, wherein when the computer program or instructions are run on an electronic device, the electronic device is caused to execute the steps of the method according to any one of claims 1 to 13.
PCT/CN2022/129043 2022-03-24 2022-11-01 一种道路边界检测方法、装置、电子设备、存储介质和计算机程序产品 WO2023179030A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202210303727.1A CN114694116A (zh) 2022-03-24 2022-03-24 一种道路边界检测方法、装置、电子设备和存储介质
CN202210303727.1 2022-03-24

Publications (1)

Publication Number Publication Date
WO2023179030A1 true WO2023179030A1 (zh) 2023-09-28

Family

ID=82140064

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/129043 WO2023179030A1 (zh) 2022-03-24 2022-11-01 一种道路边界检测方法、装置、电子设备、存储介质和计算机程序产品

Country Status (2)

Country Link
CN (1) CN114694116A (zh)
WO (1) WO2023179030A1 (zh)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114694116A (zh) * 2022-03-24 2022-07-01 商汤集团有限公司 一种道路边界检测方法、装置、电子设备和存储介质

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108216229A (zh) * 2017-09-08 2018-06-29 北京市商汤科技开发有限公司 交通工具、道路线检测和驾驶控制方法及装置
CN111874006A (zh) * 2020-08-05 2020-11-03 腾讯科技(深圳)有限公司 路线规划处理方法和装置
US20210016780A1 (en) * 2018-08-02 2021-01-21 GM Global Technology Operations LLC Controlling an autonomous vehicle based upon computed lane boundaries
CN112309233A (zh) * 2020-10-26 2021-02-02 北京三快在线科技有限公司 一种道路边界的确定、道路切分方法及装置
CN112363192A (zh) * 2020-09-29 2021-02-12 蘑菇车联信息科技有限公司 车道定位方法、装置、车辆、电子设备及存储介质
CN113297878A (zh) * 2020-02-21 2021-08-24 百度在线网络技术(北京)有限公司 道路交叉口识别方法、装置、计算机设备和存储介质
CN114170826A (zh) * 2021-12-03 2022-03-11 地平线(上海)人工智能技术有限公司 自动驾驶控制方法和装置、电子设备和存储介质
CN114694116A (zh) * 2022-03-24 2022-07-01 商汤集团有限公司 一种道路边界检测方法、装置、电子设备和存储介质

Also Published As

Publication number Publication date
CN114694116A (zh) 2022-07-01

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22933081

Country of ref document: EP

Kind code of ref document: A1