WO2021088504A1 - Intersection detection, neural network training and intelligent driving method, apparatus and device - Google Patents
Intersection detection, neural network training and intelligent driving method, apparatus and device
- Publication number
- WO2021088504A1 WO2021088504A1 PCT/CN2020/114095 CN2020114095W WO2021088504A1 WO 2021088504 A1 WO2021088504 A1 WO 2021088504A1 CN 2020114095 W CN2020114095 W CN 2020114095W WO 2021088504 A1 WO2021088504 A1 WO 2021088504A1
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- road
- intersection
- image
- sample image
- detection
- Prior art date
Links
- 238000001514 detection method Methods 0.000 title claims abstract description 184
- 238000013528 artificial neural network Methods 0.000 title claims abstract description 89
- 238000000034 method Methods 0.000 title claims abstract description 84
- 238000012549 training Methods 0.000 title claims abstract description 34
- 238000000605 extraction Methods 0.000 claims abstract description 29
- 238000002372 labelling Methods 0.000 claims description 58
- 238000004590 computer program Methods 0.000 claims description 17
- 238000012545 processing Methods 0.000 claims description 15
- 230000015654 memory Effects 0.000 claims description 14
- 238000006243 chemical reaction Methods 0.000 claims description 8
- 238000010586 diagram Methods 0.000 description 9
- 238000005516 engineering process Methods 0.000 description 6
- 230000006870 function Effects 0.000 description 5
- 239000011159 matrix material Substances 0.000 description 4
- 230000035484 reaction time Effects 0.000 description 4
- 238000010998 test method Methods 0.000 description 4
- 230000003287 optical effect Effects 0.000 description 3
- 230000009286 beneficial effect Effects 0.000 description 2
- 238000013135 deep learning Methods 0.000 description 2
- 230000001133 acceleration Effects 0.000 description 1
- 238000004891 communication Methods 0.000 description 1
- 238000013527 convolutional neural network Methods 0.000 description 1
- 230000004044 response Effects 0.000 description 1
Images
Classifications
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60W—CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
- B60W40/00—Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models
- B60W40/02—Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models related to ambient conditions
- B60W40/06—Road conditions
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/56—Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60W—CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
- B60W30/00—Purposes of road vehicle drive control systems not related to the control of a particular sub-unit, e.g. of systems using conjoint control of vehicle sub-units
- B60W30/10—Path keeping
- B60W30/12—Lane keeping
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60W—CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
- B60W30/00—Purposes of road vehicle drive control systems not related to the control of a particular sub-unit, e.g. of systems using conjoint control of vehicle sub-units
- B60W30/18—Propelling the vehicle
- B60W30/18009—Propelling the vehicle related to particular drive situations
- B60W30/18154—Approaching an intersection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60W—CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
- B60W50/00—Details of control systems for road vehicle drive control not related to the control of a particular sub-unit, e.g. process diagnostic or vehicle driver interfaces
- B60W2050/0001—Details of the control system
- B60W2050/0002—Automatic control, details of type of controller or control system architecture
- B60W2050/0004—In digital systems, e.g. discrete-time systems involving sampling
- B60W2050/0005—Processor details or data handling, e.g. memory registers or chip architecture
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60Y—INDEXING SCHEME RELATING TO ASPECTS CROSS-CUTTING VEHICLE TECHNOLOGY
- B60Y2300/00—Purposes or special features of road vehicle drive control systems
- B60Y2300/10—Path keeping
- B60Y2300/12—Lane keeping
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60Y—INDEXING SCHEME RELATING TO ASPECTS CROSS-CUTTING VEHICLE TECHNOLOGY
- B60Y2300/00—Purposes or special features of road vehicle drive control systems
- B60Y2300/18—Propelling the vehicle
- B60Y2300/18008—Propelling the vehicle related to particular drive situations
- B60Y2300/18158—Approaching intersection
Definitions
- This application relates to computer vision processing technology, and relates to but is not limited to an intersection detection, neural network training and intelligent driving method, device, electronic equipment, computer storage medium and computer program.
- the embodiments of the present application expect to provide a technical solution for intersection detection.
- the embodiment of the present application provides a method for detecting an intersection, and the method includes:
- Performing feature extraction on a road image to obtain a feature map of the road image; determining, according to the feature map of the road image, the detection frame of the intersection on the road shown in the road image, where the detection frame of the intersection indicates the area of the intersection in the road image and the lower border of the detection frame of the intersection is on the road surface of the road; and determining, according to the lower border of the detection frame of the intersection, the distance between the device that collects the road image and the intersection.
- In some embodiments, the method further includes: determining, according to the feature map of the road image, that the road shown in the road image does not have an intersection.
- In some embodiments, determining the distance between the device that collects the road image and the intersection according to the lower border of the detection frame of the intersection includes: determining the position of the lower border of the detection frame of the intersection on the road according to the position of the lower border in the road image and the coordinate conversion relationship between the plane of the road image and the road surface of the road; and obtaining the distance between the device that collects the road image and the intersection according to the position of the lower border on the road and the position of the device on the road.
- In some embodiments, the method is executed by a neural network. The neural network is trained using sample images and the annotation results of the sample images; the annotation results of the sample images include the labeling frame of the intersection on the road shown in the positive sample image, where the labeling frame represents the position of the intersection in the positive sample image and the lower border of the labeling frame is on the road surface of the road shown in the positive sample image.
- The embodiments of the present application also provide a neural network training method, including: performing feature extraction on a sample image to obtain a feature map of the sample image; determining a detection result of the sample image according to the feature map of the sample image; and adjusting the network parameter values of the neural network according to the annotation result of the sample image and the detection result.
- When the sample image is a positive sample image, the labeling result of the sample image is the labeling frame of the intersection on the road shown in the positive sample image; the labeling frame represents the position of the intersection in the positive sample image, and the lower border of the labeling frame of the intersection on the road shown in the positive sample image is on the road surface of the road shown in the positive sample image.
- the positive sample image includes a stop line of a road intersection, and the lower border of the label frame of the intersection on the road shown in the positive sample image is aligned with the stop line.
- the height difference of the labeled frames in the multiple positive sample images containing the same intersection is within a preset range.
- When the sample image is a negative sample image, there is no intersection on the road in the negative sample image, and the labeling result of the sample image includes that there is no labeling frame in the negative sample image.
- The embodiments of the present application also provide an intelligent driving method, including: acquiring a road image; performing intersection detection on the road image according to any one of the foregoing intersection detection methods; and performing driving control on the intelligent driving device according to the distance between the intelligent driving device that collects the road image and the intersection.
- the embodiment of the present application also provides an intersection detection device, which includes a first extraction module, a detection module, and a first determination module; wherein,
- the first extraction module is configured to perform feature extraction on a road image to obtain a feature map of the road image
- The detection module is configured to determine the detection frame of the intersection on the road shown in the road image according to the feature map of the road image; the detection frame of the intersection represents the area of the intersection in the road image, and the lower border of the detection frame of the intersection is on the road surface of the road;
- the first determining module is configured to determine the distance between the device that collects the road image and the intersection according to the lower border of the detection frame of the intersection.
- the detection module is further configured to determine, according to the feature map of the road image, that the road shown in the road image does not have an intersection.
- In some embodiments, the first determining module is configured to: determine the position of the lower border of the detection frame of the intersection on the road according to the position of the lower border of the detection frame of the intersection in the road image and the coordinate conversion relationship between the plane of the road image and the road surface of the road; and obtain the distance between the device that collects the road image and the intersection according to the position of the lower border of the detection frame of the intersection on the road and the position of the device that collects the road image on the road.
- the device is implemented based on a neural network, and the neural network is trained using sample images and labeling results of sample images.
- The labeling results of the sample images include the labeling frame of the intersection on the road shown in the positive sample image; the labeling frame represents the position of the intersection in the positive sample image, and the lower border of the labeling frame is on the road surface of the road shown in the positive sample image.
- An embodiment of the present application also provides a neural network training device, the device includes: a second extraction module, a second determination module, and an adjustment module, wherein:
- the second extraction module is configured to perform feature extraction on a sample image to obtain a feature map of the sample image
- the second determining module is configured to determine the detection result of the sample image according to the feature map of the sample image
- An adjustment module configured to adjust the network parameter value of the neural network according to the annotation result of the sample image and the detection result
- When the sample image is a positive sample image, the labeling result of the sample image is the labeling frame of the intersection on the road shown in the positive sample image; the labeling frame represents the position of the intersection in the positive sample image, and the lower border of the labeling frame of the intersection on the road shown in the positive sample image is on the road surface of the road shown in the positive sample image.
- the positive sample image includes a stop line of a road intersection, and the lower border of the label frame of the intersection on the road shown in the positive sample image is aligned with the stop line.
- the height difference of the labeled frames in the multiple positive sample images containing the same intersection is within a preset range.
- When the sample image is a negative sample image, there is no intersection on the road in the negative sample image, and the labeling result of the sample image includes that there is no labeling frame in the negative sample image.
- the embodiment of the present application also provides an intelligent driving device, the device includes: an acquisition module and a processing module, wherein:
- An acquisition module configured to acquire road images
- the processing module is configured to perform intersection detection on the road image according to any one of the foregoing intersection detection methods; and perform driving control on the device according to the distance between the intelligent driving device that collects the road image and the intersection.
- the embodiment of the present application also provides an electronic device, including a processor and a memory configured to store a computer program that can run on the processor; wherein,
- The processor is configured to run the computer program to execute any one of the above-mentioned intersection detection methods, any one of the above-mentioned neural network training methods, or any one of the above-mentioned intelligent driving methods.
- The embodiments of the present application also provide a computer storage medium on which a computer program is stored; when the computer program is executed by a processor, any one of the above-mentioned intersection detection methods, any one of the above-mentioned neural network training methods, or any one of the above-mentioned intelligent driving methods is implemented.
- The embodiments of the present application also provide a computer program including computer-readable code; when the computer-readable code runs in an electronic device, a processor in the electronic device executes any one of the above-mentioned intersection detection methods, any one of the above-mentioned neural network training methods, or any one of the above-mentioned intelligent driving methods.
- the intersection detection method includes: extracting features from a road image to obtain a feature map of the road image;
- According to the feature map of the road image, the detection frame of the intersection on the road shown in the road image is determined;
- the detection frame of the intersection represents the area of the intersection in the road image;
- The lower border of the detection frame is on the road surface, so the distance between the device that collects the road image and the intersection can be determined according to the lower border of the detection frame of the intersection. In this way, even when a clear image of the traffic lights or of the ground stop line cannot be obtained, or when the intersection has no traffic lights or ground stop line at all, the embodiments of the present application can still implement intersection detection according to the feature map of the road image, and thereby determine the distance between the device that collects the road image and the intersection.
- FIG. 1 is a flow chart of the intersection detection method according to an embodiment of the application
- Fig. 2 is a flowchart of a neural network training method according to an embodiment of the application
- FIG. 3 is an example diagram of intersection detection using a trained neural network in the embodiment of the present application.
- Fig. 4 is a flowchart of a smart driving method according to an embodiment of the application.
- FIG. 5 is a schematic diagram of the composition structure of an intersection detection device according to an embodiment of the application.
- FIG. 6 is a schematic diagram of the composition structure of a neural network training device according to an embodiment of the application.
- FIG. 7 is a schematic diagram of the composition structure of a smart driving device according to an embodiment of the application.
- FIG. 8 is a schematic structural diagram of an electronic device according to an embodiment of the application.
- The terms "comprising", "including", or any other variants thereof are intended to cover non-exclusive inclusion, so that a method or device including a series of elements includes not only the explicitly stated elements, but also other elements that are not explicitly listed, or elements inherent to the implementation of the method or device. Without further restrictions, an element defined by the phrase "including a ..." does not exclude the existence of other related elements in the method or device that includes the element (for example, steps in the method, or units in the device, where a unit may be a part of a circuit, a part of a processor, a part of a program or software, and so on).
- The intersection detection method, neural network training method, and smart driving method provided in the embodiments of this application include a series of steps, but the methods provided in the embodiments of this application are not limited to the recorded steps.
- The intersection detection device, neural network training device, and smart driving device provided in the embodiments of this application include a series of modules, but the devices provided in the embodiments of this application are not limited to the explicitly recorded modules, and may also include modules that need to be set in order to obtain related information or to perform processing based on the information.
- the embodiments of the present application can be applied to a computer system composed of a terminal and a server, and can be operated with many other general-purpose or special-purpose computing system environments or configurations.
- the terminal can be a thin client, a thick client, a handheld or laptop device, a microprocessor-based system, a set-top box, a programmable consumer electronic product, a network personal computer, a small computer system, etc.
- The server can be a server computer system, a small computer system, a large computer system, a distributed cloud computing environment including any of the above systems, and so on.
- Electronic devices such as terminals and servers can be described in the general context of computer system executable instructions (such as program modules) executed by a computer system.
- program modules may include routines, programs, object programs, components, logic, data structures, etc., which perform specific tasks or implement specific abstract data types.
- the computer system/server can be implemented in a distributed cloud computing environment. In the distributed cloud computing environment, tasks are executed by remote processing equipment linked through a communication network.
- program modules may be located on a storage medium of a local or remote computing system including a storage device.
- Existing intersection detection solutions rely on the intersection traffic lights or ground stop lines captured by the vehicle's camera. In some cases, clear images of the traffic lights or ground stop lines cannot be obtained, so these solutions cannot accurately detect intersections; in addition, some intersections do not have traffic lights or ground stop lines at all, which causes the aforementioned intersection detection solutions to fail to detect the intersection.
- In view of this, the embodiments of the present application propose a method for detecting intersections, which can be applied to scenarios such as automatic driving and assisted driving.
- Fig. 1 is a flow chart of the intersection detection method according to an embodiment of this application. As shown in Fig. 1, the process may include:
- Step 101 Perform feature extraction on the road image to obtain a feature map of the road image.
- the road image is an image that requires intersection detection.
- In some embodiments, the format of the road image may be Joint Photographic Experts Group (JPEG), Bitmap (BMP), Portable Network Graphics (PNG), or another format; it should be noted that the format of the road image is only described here by way of example, and the embodiments of the present application do not limit the format of the road image.
- road images can be acquired from the local storage area or the network, or image acquisition equipment can be used to acquire road images.
- The image acquisition equipment can include a camera installed on the vehicle; in practical applications, one or more cameras can be set up on the vehicle to collect road images in front of the vehicle.
- In some embodiments, the feature map of the road image may be used to characterize at least one of the following features of the road image: color feature, texture feature, shape feature, and spatial relationship feature. Feature extraction may be performed using methods such as the Scale-Invariant Feature Transform (SIFT) or the Histogram of Oriented Gradients (HOG); a pre-trained neural network for extracting image feature maps can also be used to extract features from the road image.
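- As a concrete illustration of the neural-network option above, the following is a minimal sketch of extracting a feature map from a road image with a pre-trained CNN backbone; the choice of ResNet-50, torchvision, and the input size are illustrative assumptions, not details specified by this application.

```python
import torch
import torchvision

# Minimal sketch: feature-map extraction with a pre-trained CNN backbone.
# ResNet-50 and the 384x640 input size are illustrative assumptions.
backbone = torchvision.models.resnet50(weights="IMAGENET1K_V1")
feature_extractor = torch.nn.Sequential(*list(backbone.children())[:-2])  # drop avgpool and fc
feature_extractor.eval()

road_image = torch.rand(1, 3, 384, 640)           # stand-in for a preprocessed road image tensor
with torch.no_grad():
    feature_map = feature_extractor(road_image)   # shape: (1, 2048, 12, 20)
```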
- Step 102 Determine the detection frame of the intersection on the road shown in the road image according to the feature map of the road image; the detection frame of the intersection represents the area of the intersection in the road image, and the lower border of the detection frame of the intersection is on the road surface of the road.
- In some embodiments, the judgment result includes the following two situations: the road shown in the road image has an intersection, or the road shown in the road image does not have an intersection. When there is an intersection on the road shown in the road image, the detection frame of the intersection on the road shown in the road image can be determined according to the feature map of the road image, and the detection frame of the intersection is output; when there is no intersection on the road shown in the road image, nothing is output.
- the pre-trained neural network for extracting the intersection detection frame can be used to determine the detection frame of the intersection on the road shown in the road image.
- the shape of the detection frame of the intersection is not limited.
- The shape of the detection frame of the intersection may be a rectangle, a trapezoid, or another shape. In a specific example, the road shown in the road image has an intersection, and the neural network used to extract the intersection detection frame can output a rectangular detection frame for the intersection; in another specific example, the road shown in the road image does not have an intersection, and the neural network used to extract the intersection detection frame does not output any data.
- Step 103 Determine the distance between the device that collects road images and the intersection according to the lower border of the detection frame of the intersection.
- In some embodiments, the position of the intersection can be determined according to the lower border of the detection frame of the intersection; further, combined with the known position of the device that collects the road image, the distance between the device that collects the road image and the intersection can be determined.
- In practical applications, steps 101 to 103 can be implemented by a processor in an electronic device; the processor can be at least one of an Application-Specific Integrated Circuit (ASIC), a Digital Signal Processor (DSP), a Digital Signal Processing Device (DSPD), a Programmable Logic Device (PLD), a Field-Programmable Gate Array (FPGA), a Central Processing Unit (CPU), a controller, a microcontroller, and a microprocessor.
- In the embodiments of the present application, feature extraction is first performed on the road image to obtain the feature map of the road image; then, according to the feature map of the road image, the detection frame of the intersection on the road shown in the road image is determined. Because the lower border of the detection frame of the intersection is on the road surface, the distance between the device that collects the road image and the intersection can be determined based on the lower border of the detection frame of the intersection. In this way, even if a clear image of the traffic lights or ground stop line cannot be obtained, or the intersection has no traffic lights or ground stop line, the embodiments of the present application can still implement intersection detection based on the feature map of the road image and thereby determine the distance between the device that collects the road image and the intersection.
- The intersection detection method of the embodiments of the present application therefore has strong universality. The intersection can be accurately detected in the image in front of the vehicle even when the intersection is still far away; detecting the intersection early helps to provide sufficient reaction time for driving decisions and ensures driving safety, for example by providing sufficient reaction time for braking.
- In some embodiments, the position of the lower border of the detection frame of the intersection in the road image and the coordinate conversion relationship between the plane of the road image and the road surface can be used to determine the position of the lower border of the detection frame of the intersection on the road; then, according to the position of the lower border of the detection frame of the intersection on the road and the position of the device that collects the road image on the road, the distance between the device that collects the road image and the intersection is obtained.
- In some related solutions, the distance between the intersection and the vehicle cannot be determined. In the embodiments of the present application, when the device that collects the road image is located in the vehicle, the distance between the device that collects the road image and the intersection can be taken as the distance between the vehicle and the intersection. Since the lower border of the intersection detection frame fits the road surface, the distance between the vehicle and the intersection can be accurately estimated according to the position of the lower border of the intersection detection frame in the road image, which is conducive to providing sufficient reaction time for driving decisions and ensuring driving safety.
- In some embodiments, the position coordinates of the lower border of the detection frame of the intersection can be converted to the world coordinate system to obtain the position of the lower border of the detection frame of the intersection in the world coordinate system, that is, the position of the lower border of the detection frame of the intersection on the road.
- the plane of the road image and the road surface are two different planes.
- a Homography matrix can be used to express the coordinate conversion relationship between the plane of the road image and the road surface.
- Based on the homography matrix, the position coordinates of the lower border of the detection frame of the intersection are converted to the world coordinate system. The homography matrix can be calculated from corresponding points in the road image and in the world coordinate system; based on this homography matrix, the position of each point in the road image in the world coordinate system can be obtained accurately.
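- The following is a minimal sketch of this image-plane-to-road-plane conversion with OpenCV; the four point correspondences, the detection frame coordinates, and the assumption that the camera sits at the origin of the road-plane coordinate system are all illustrative, and in practice the correspondences come from calibration.

```python
import cv2
import numpy as np

# Four image points and their known positions on the road plane (metres);
# these correspondences are illustrative and would come from calibration.
img_pts  = np.array([[300, 700], [980, 700], [500, 460], [780, 460]], dtype=np.float32)
road_pts = np.array([[-1.8, 5.0], [1.8, 5.0], [-1.8, 40.0], [1.8, 40.0]], dtype=np.float32)
H, _ = cv2.findHomography(img_pts, road_pts)       # homography: image plane -> road plane

# Lower border of the intersection detection frame (x1, y1, x2, y2), in pixels (assumed values).
x1, y1, x2, y2 = 420, 250, 860, 330
lower_mid = np.array([[[(x1 + x2) / 2.0, float(y2)]]], dtype=np.float32)
on_road = cv2.perspectiveTransform(lower_mid, H)[0, 0]   # (lateral, forward) in metres

# Taking the camera position as the road-plane origin, the forward coordinate is the
# distance between the device that collects the road image and the intersection.
distance_to_intersection = float(on_road[1])
print(f"distance to intersection: {distance_to_intersection:.1f} m")
```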
- the foregoing intersection detection method may be executed by a neural network.
- the neural network is trained using sample images and labeling results of the sample images.
- the labeling results of the sample images include the labeling frame of the intersection on the road shown in the positive sample image.
- The labeling frame of the intersection on the road shown in the positive sample image represents the position of the intersection in the positive sample image, and the lower border of the labeling frame of the intersection on the road shown in the positive sample image is on the road surface of the road shown in the positive sample image.
- In some embodiments, the format of the sample image can be Joint Photographic Experts Group (JPEG), Bitmap (BMP), Portable Network Graphics (PNG), or another format; it should be noted that the format of the sample image is only described here by way of example, and the embodiments of the present application do not limit the format of the sample image.
- sample images can be obtained from a local storage area or the network, or image collection equipment can be used to collect sample images.
- the training of the neural network based on the positive sample image is beneficial to enable the trained neural network to detect the intersection in the road image.
- FIG. 2 is a flowchart of a neural network training method according to an embodiment of the application. As shown in FIG. 2, the process may include:
- Step 201 Perform feature extraction on a sample image to obtain a feature map of the sample image.
- In some embodiments, the feature map of the sample image can be used to characterize at least one of the following features of the sample image: color feature, texture feature, shape feature, and spatial relationship feature. As an example of the implementation of this step, the sample image is input into the neural network, and the neural network is used to extract the features of the sample image to obtain the feature map of the sample image.
- the type of neural network is not limited.
- The neural network may be a Single-Shot MultiBox Detector (SSD), a YOLO (You Only Look Once) network, a Faster Region-based Convolutional Neural Network (Faster R-CNN), or another neural network based on deep learning.
- the network structure of the neural network is not limited.
- the network structure of the neural network may be a 50-layer residual network structure, a VGG16 network structure, or a MobileNet network structure.
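- As a sketch of one detector/backbone combination named above (Faster R-CNN with a ResNet-50 backbone), the snippet below builds a single-class intersection detector; the use of torchvision and the two-class head (background plus intersection) are assumptions made for illustration.

```python
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

# Sketch: Faster R-CNN with a ResNet-50 FPN backbone, adapted to detect a single
# "intersection" class. torchvision is an illustrative choice of implementation.
def build_intersection_detector(num_classes=2):    # class 0: background, class 1: intersection
    model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
    in_features = model.roi_heads.box_predictor.cls_score.in_features
    model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)
    return model
```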
- Step 202 Determine the detection result of the sample image according to the feature map of the sample image.
- In some embodiments, the detection result includes the following two situations: the road shown in the sample image has an intersection, or the road shown in the sample image does not have an intersection.
- When the sample image is a positive sample image, the labeling result of the sample image is the labeling frame of the intersection on the road shown in the positive sample image; the labeling frame represents the position of the intersection in the positive sample image, and the lower border of the labeling frame is on the road surface of the road shown in the positive sample image. Obviously, when the sample image is a positive sample image, the detection result of the sample image, that is, the detection frame of the intersection, can be determined according to the feature map of the sample image.
- Step 203 Adjust the network parameter value of the neural network according to the labeling result of the sample image and the detection result.
- the network parameter value of the neural network can be adjusted according to the difference between the annotation result of the sample image and the above detection result.
- In some embodiments, the loss of the neural network can be calculated; the loss of the neural network is used to characterize the difference between the labeling result of the sample image and the above detection result. Then, the network parameter values of the neural network can be adjusted with the goal of reducing the loss of the neural network.
- Step 204 Determine whether the detection result of the neural network on the sample image after the network parameter values are adjusted meets the set accuracy requirement; if not, return to step 201; if so, execute step 205.
- the set accuracy requirement may be that the difference between the detection result of the sample image and the annotation result of the sample image is within a preset range.
- Step 205 Use the neural network after the network parameter values are adjusted as the neural network that has been trained.
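- A minimal training-loop sketch for steps 201 to 203, using the detector sketched earlier, is given below; `train_loader`, the optimizer settings, and the number of epochs are assumptions, and each target is assumed to hold the annotated intersection boxes and labels for one sample image.

```python
import torch

# Minimal sketch of steps 201-203. `train_loader` is assumed to yield
# (images, targets), where images is a list of tensors and each target is a
# dict with "boxes" (the labelled intersection frames) and "labels".
model = build_intersection_detector()
optimizer = torch.optim.SGD(model.parameters(), lr=0.005, momentum=0.9)
model.train()

for epoch in range(10):                            # number of epochs is illustrative
    for images, targets in train_loader:
        # Steps 201/202: feature extraction and detection happen inside the model;
        # in training mode it returns the losses against the labelling result (step 203).
        loss_dict = model(images, targets)
        loss = sum(loss_dict.values())
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```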
- steps 201 to 205 can be implemented by a processor in an electronic device.
- The aforementioned processor can be at least one of an ASIC, a DSP, a DSPD, a PLD, an FPGA, a CPU, a controller, a microcontroller, and a microprocessor.
- It can be seen that the detection result of the sample image is determined according to the feature map of the sample image; therefore, even when a clear image of the traffic lights or ground stop line cannot be obtained, or when there are no traffic lights or ground stop lines at the intersection, the trained neural network can still realize intersection detection based on the feature map of the road image. Moreover, since the positive sample image includes an intersection, training the neural network based on positive sample images is beneficial to enabling the trained neural network to detect intersections in road images.
- The lower border of the labeling frame of the intersection on the road shown in the positive sample image is on the road surface of the road shown in the positive sample image. In this way, even when there is no obvious sign at the intersection, the lower border of the labeling frame of the intersection can still be determined, which is conducive to labeling. Further, since the lower border of the labeling frame of the intersection is on the road surface, the labeling frame of the intersection is consistent with the actual situation; based on the labeling frame of the intersection on the road shown in the sample image, the trained neural network can accurately obtain the detection frame of the intersection.
- In some embodiments, the positive sample image includes the stop line of the intersection, and the lower border of the labeling frame of the intersection on the road shown in the positive sample image is aligned with the stop line. Because the lower border of the labeling frame of the intersection is aligned with the stop line, the labeling frame of the intersection is consistent with the actual situation; based on the labeling frame of the intersection on the road shown in the sample image, the trained neural network can accurately obtain the detection frame of the intersection.
- In some embodiments, the intersection in the positive sample image is marked with a rectangular labeling frame. If the intersection is far away, the lower border of the labeling frame needs to be marked on the road surface based on experience and observation of the intersection area, and the height of the rectangular labeling frame is set to a fixed value; for example, the height of the rectangular labeling frame is 80 pixels.
- In some embodiments, the difference between the heights of the labeling frames in multiple positive sample images containing the same intersection is within a preset range; the preset range can be set in advance according to the actual situation. For example, the heights of the labeling frames in multiple positive sample images containing the same intersection are the same, all being 80 pixels.
- The multiple positive sample images containing the same intersection may be images taken continuously, and the labeling frame of the intersection needs to be marked in each of them.
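- A small sketch of the labelling rule described above, where the lower border of the frame sits on the stop line (or on the road surface) and the frame height is fixed at 80 pixels, is given below; the function name and its inputs are hypothetical.

```python
# Hypothetical helper implementing the labelling rule described above:
# the lower border y2 lies on the stop line / road surface, and the frame
# height is a fixed value (80 pixels in this example).
def make_intersection_label(stop_line_y, left_x, right_x, box_height=80):
    """Return an (x1, y1, x2, y2) labelling frame with y2 on the stop line."""
    y2 = stop_line_y
    y1 = y2 - box_height
    return (left_x, y1, right_x, y2)

label_frame = make_intersection_label(stop_line_y=540, left_x=300, right_x=900)
```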
- In some embodiments, when the sample image is a negative sample image, there is no intersection on the road in the negative sample image, and the labeling result of the sample image indicates that there is no labeling frame in the negative sample image.
- In this way, the false detection rate of the trained neural network on images that do not contain an intersection area can be reduced; that is, the trained neural network can more accurately detect images that do contain an intersection area.
- In some embodiments, the ratio of positive sample images to negative sample images is greater than a set ratio threshold. In this way, by inputting enough positive sample images into the neural network for training, the trained neural network can more accurately detect the intersection area in images that contain an intersection.
- After the training is completed, the road image can be input into the trained neural network, and the trained neural network can be used for intersection detection; further, the detection frame of the intersection on the road shown in the road image can be determined, or it can be determined that there is no intersection on the road shown in the road image.
- Fig. 3 is an example diagram of the embodiment of the application using a trained neural network for intersection detection.
- In Fig. 3, the image to be detected represents a road image taken by a single camera of a vehicle, and the detection network represents the trained neural network. It can be seen that the intersection detection result includes a detection frame representing the intersection, and the lower border of the detection frame of the intersection fits the road surface.
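- The following is a sketch of the inference step illustrated in Fig. 3: a single-camera road image is passed through the trained network and high-confidence intersection frames are kept; the weights file name and the confidence threshold are assumptions.

```python
import torch
import torchvision.transforms.functional as F
from PIL import Image

# Sketch of inference with the trained detector (weights path is hypothetical).
model = build_intersection_detector()
model.load_state_dict(torch.load("intersection_detector.pt"))
model.eval()

image = F.to_tensor(Image.open("road.jpg").convert("RGB"))
with torch.no_grad():
    prediction = model([image])[0]

keep = prediction["scores"] > 0.5                  # confidence threshold (assumed)
intersection_frames = prediction["boxes"][keep]    # (x1, y1, x2, y2); y2 is the lower border on the road surface
```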
- In the embodiments of the present application, smart driving equipment includes, but is not limited to, self-driving vehicles, vehicles equipped with an Advanced Driving Assistance System (ADAS), robots equipped with ADAS, and so on.
- Fig. 4 is a flowchart of a smart driving method according to an embodiment of the application. As shown in Fig. 4, the process may include:
- Step 401 Obtain a road image.
- Step 402 Perform intersection detection on the road image according to any of the foregoing intersection detection methods.
- By performing intersection detection on the road image, the detection result obtained can be the detection frame of the intersection on the road shown in the road image, or it can be a determination that the road shown in the road image does not have an intersection; further, the distance between the device that collects the road image and the intersection can also be determined.
- Step 403 Perform driving control on the smart driving device according to the distance between the smart driving device that collects the road image and the intersection.
- In some embodiments, the smart driving equipment can be controlled directly (for example, in the case of self-driving vehicles and robots), or instructions can be sent to the driver so that the driver controls the vehicle (for example, a vehicle equipped with ADAS).
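- As a simple illustration of step 403, the snippet below chooses a driving action from the estimated distance to the intersection; the thresholds, the time-to-intersection heuristic, and the action names are assumptions for illustration only, not a control strategy prescribed by this application.

```python
# Simple sketch of step 403: pick a driving action from the distance to the
# intersection estimated above. Thresholds and actions are illustrative.
def plan_at_intersection(distance_m, speed_mps):
    if distance_m is None:                 # no intersection detected
        return "keep_lane"
    time_to_intersection = distance_m / max(speed_mps, 0.1)
    if time_to_intersection < 3.0:         # very close: brake now
        return "brake"
    if time_to_intersection < 8.0:         # approaching: slow down and prepare to stop
        return "decelerate"
    return "keep_lane"

action = plan_at_intersection(distance_m=32.5, speed_mps=12.0)   # example values
```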
- It can be seen that, in the embodiments of the present application, the distance between the intelligent driving device that collects the road image and the intersection can be obtained, which is conducive to assisting vehicle driving according to this distance and to ensuring the safety of vehicle driving.
- an embodiment of the present application proposes an intersection detection device.
- FIG. 5 is a schematic diagram of the composition structure of an intersection detection device according to an embodiment of the application. As shown in FIG. 5, the device includes: a first extraction module 501, a detection module 502, and a first determination module 503, wherein:
- the first extraction module 501 is configured to perform feature extraction on a road image to obtain a feature map of the road image
- the detection module 502 is configured to determine the detection frame of the intersection on the road shown in the road image according to the feature map of the road image; the detection frame of the intersection represents the area of the intersection in the road image, and The lower border of the detection frame of the intersection is on the road surface of the road;
- the first determining module 503 is configured to determine the distance between the device that collects the road image and the intersection according to the lower border of the detection frame of the intersection.
- the detection module 502 is further configured to determine, according to the feature map of the road image, that the road shown in the road image does not have an intersection.
- In some embodiments, the first determining module 503 is configured to: determine the position of the lower border of the detection frame of the intersection on the road according to the position of the lower border of the detection frame of the intersection in the road image and the coordinate conversion relationship between the plane of the road image and the road surface of the road; and obtain the distance between the device that collects the road image and the intersection according to the position of the lower border of the detection frame of the intersection on the road and the position of the device that collects the road image on the road.
- the device is implemented based on a neural network, and the neural network is trained using sample images and labeling results of sample images.
- The labeling results of the sample images include the labeling frame of the intersection on the road shown in the positive sample image; the labeling frame represents the position of the intersection in the positive sample image, and the lower border of the labeling frame is on the road surface of the road shown in the positive sample image.
- the first extraction module 501, the detection module 502, and the first determination module 503 can all be implemented by a processor in an electronic device.
- The aforementioned processor can be at least one of an ASIC, a DSP, a DSPD, a PLD, an FPGA, a CPU, a controller, a microcontroller, and a microprocessor.
- FIG. 6 is a schematic diagram of the composition structure of a neural network training device according to an embodiment of the application. As shown in FIG. 6, the device may include a second extraction module 601, a second determination module 602, and an adjustment module 603, where:
- the second extraction module 601 is configured to perform feature extraction on a sample image to obtain a feature map of the sample image
- the second determining module 602 is configured to determine the detection result of the sample image according to the feature map of the sample image;
- the adjustment module 603 is configured to adjust the network parameter value of the neural network according to the annotation result of the sample image and the detection result;
- When the sample image is a positive sample image, the labeling result of the sample image is the labeling frame of the intersection on the road shown in the positive sample image; the labeling frame represents the position of the intersection in the positive sample image, and the lower border of the labeling frame of the intersection on the road shown in the positive sample image is on the road surface of the road shown in the positive sample image.
- the positive sample image includes a stop line of a road intersection, and the lower border of the label frame of the intersection on the road shown in the positive sample image is aligned with the stop line.
- the height difference of the labeled frames in the multiple positive sample images containing the same intersection is within a preset range.
- When the sample image is a negative sample image, there is no intersection on the road in the negative sample image, and the labeling result of the sample image includes that there is no labeling frame in the negative sample image.
- the second extraction module 601, the second determination module 602, and the adjustment module 603 can all be implemented by a processor in an electronic device.
- The aforementioned processor can be at least one of an ASIC, a DSP, a DSPD, a PLD, an FPGA, a CPU, a controller, a microcontroller, and a microprocessor.
- FIG. 7 is a schematic diagram of the composition structure of a smart driving device according to an embodiment of the application. As shown in FIG. 7, the device includes: an acquisition module 701 and a processing module 702, wherein,
- the obtaining module 701 is configured to obtain road images
- the processing module 702 is configured to perform intersection detection on the road image according to any one of the foregoing intersection detection methods; and perform driving control on the device according to the distance between the intelligent driving device that collects the road image and the intersection.
- both the acquisition module 701 and the processing module 702 can be implemented by a processor in a smart driving device.
- The aforementioned processor can be at least one of an ASIC, a DSP, a DSPD, a PLD, an FPGA, a CPU, a controller, a microcontroller, and a microprocessor.
- the functional modules in this embodiment may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit.
- the above-mentioned integrated unit can be realized in the form of hardware or software function module.
- the integrated unit is implemented in the form of a software function module and is not sold or used as an independent product, it can be stored in a computer readable storage medium.
- Based on this understanding, the technical solution of this embodiment, in essence, or the part that contributes to the existing technology, or all or part of the technical solution, can be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions to enable a computer device (which can be a personal computer, a server, a network device, or the like) or a processor to execute all or part of the steps of the method described in this embodiment.
- The aforementioned storage media include a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disc, and other media that can store program code.
- the computer program instructions corresponding to any intersection detection method, neural network training method, or smart driving method in this embodiment can be stored on storage media such as optical disks, hard disks, and USB flash drives.
- FIG. 8 shows an electronic device 80 provided by an embodiment of the present application, which may include: a memory 81 and a processor 82; wherein,
- the memory 81 is configured to store computer programs and data
- the processor 82 is configured to execute a computer program stored in the memory to implement any intersection detection method, neural network training method, or smart driving method in the foregoing embodiments.
- The aforementioned memory 81 may be a volatile memory, such as a RAM; or a non-volatile memory, such as a ROM, a flash memory, a hard disk drive (HDD), or a solid-state drive (SSD); or a combination of the foregoing types of memories, and it provides instructions and data to the processor 82.
- The aforementioned processor 82 may be at least one of an ASIC, a DSP, a DSPD, a PLD, an FPGA, a CPU, a controller, a microcontroller, and a microprocessor. It can be understood that, for different devices, the electronic components used to implement the above-mentioned processor functions may also be other components, which is not specifically limited in the embodiments of the present application.
- The embodiments of the present application also provide a computer program including computer-readable code; when the computer-readable code runs in an electronic device, a processor in the electronic device executes any one of the above-mentioned intersection detection methods, any one of the above-mentioned neural network training methods, or any one of the above-mentioned intelligent driving methods.
- the functions or modules contained in the apparatus provided in the embodiments of the present application can be used to execute the methods described in the above method embodiments.
- The technical solution of the present application, in essence, or the part that contributes to the existing technology, can be embodied in the form of a software product. The computer software product is stored in a storage medium (such as a ROM/RAM, a magnetic disk, or an optical disc) and includes a number of instructions to enable a terminal (which may be a mobile phone, a computer, a server, an air conditioner, a network device, or the like) to execute the methods described in the embodiments of the present application.
- the embodiments of the present application provide a method, device, electronic device, computer storage medium and computer program for intersection detection, neural network training, and intelligent driving.
- The intersection detection method includes: performing feature extraction on a road image to obtain a feature map of the road image; determining, according to the feature map of the road image, the detection frame of the intersection on the road shown in the road image, where the detection frame of the intersection represents the area of the intersection in the road image and the lower border of the detection frame is on the road surface of the road; and determining, according to the lower border of the detection frame of the intersection, the distance between the device that collects the road image and the intersection. In this way, the embodiments of the present application can implement intersection detection based on the feature map of the road image and thereby determine the distance between the device that collects the road image and the intersection.
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Automation & Control Theory (AREA)
- Mechanical Engineering (AREA)
- Transportation (AREA)
- General Physics & Mathematics (AREA)
- Data Mining & Analysis (AREA)
- Mathematical Physics (AREA)
- Artificial Intelligence (AREA)
- Evolutionary Computation (AREA)
- Life Sciences & Earth Sciences (AREA)
- General Engineering & Computer Science (AREA)
- Health & Medical Sciences (AREA)
- Computing Systems (AREA)
- Biomedical Technology (AREA)
- Biophysics (AREA)
- Computational Linguistics (AREA)
- General Health & Medical Sciences (AREA)
- Molecular Biology (AREA)
- Software Systems (AREA)
- Evolutionary Biology (AREA)
- Multimedia (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Bioinformatics & Computational Biology (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Image Analysis (AREA)
- Traffic Control Systems (AREA)
- Control Of Driving Devices And Active Controlling Of Vehicle (AREA)
Abstract
Description
Claims (21)
- An intersection detection method, the method comprising: performing feature extraction on a road image to obtain a feature map of the road image; determining, according to the feature map of the road image, a detection frame of an intersection on the road shown in the road image, wherein the detection frame of the intersection represents the area of the intersection in the road image, and the lower border of the detection frame of the intersection is on the road surface of the road; and determining, according to the lower border of the detection frame of the intersection, the distance between the device that collects the road image and the intersection.
- The method according to claim 1, wherein the method further comprises: determining, according to the feature map of the road image, that the road shown in the road image does not have an intersection.
- The method according to claim 1, wherein determining the distance between the device that collects the road image and the intersection according to the lower border of the detection frame of the intersection comprises: determining the position of the lower border of the detection frame of the intersection on the road according to the position of the lower border of the detection frame of the intersection in the road image and the coordinate conversion relationship between the plane of the road image and the road surface of the road; and obtaining the distance between the device that collects the road image and the intersection according to the position of the lower border of the detection frame of the intersection on the road and the position of the device that collects the road image on the road.
- The method according to any one of claims 1 to 3, wherein the method is executed by a neural network, the neural network is trained using sample images and the annotation results of the sample images, and the annotation results of the sample images include a labeling frame of the intersection on the road shown in the positive sample image, wherein the labeling frame represents the position of the intersection in the positive sample image, and the lower border of the labeling frame is on the road surface of the road shown in the positive sample image.
- A neural network training method, comprising: performing feature extraction on a sample image to obtain a feature map of the sample image; determining a detection result of the sample image according to the feature map of the sample image; and adjusting the network parameter values of the neural network according to the annotation result of the sample image and the detection result; wherein, when the sample image is a positive sample image, the annotation result of the sample image is a labeling frame of the intersection on the road shown in the positive sample image, the labeling frame represents the position of the intersection in the positive sample image, and the lower border of the labeling frame of the intersection on the road shown in the positive sample image is on the road surface of the road shown in the positive sample image.
- The method according to claim 5, wherein the positive sample image includes a stop line of an intersection of the road, and the lower border of the labeling frame of the intersection on the road shown in the positive sample image is aligned with the stop line.
- The method according to claim 5, wherein the difference between the heights of the labeling frames in multiple positive sample images containing the same intersection is within a preset range.
- The method according to any one of claims 5 to 7, wherein, when the sample image is a negative sample image, there is no intersection on the road in the negative sample image, and the annotation result of the sample image includes that there is no labeling frame in the negative sample image.
- An intelligent driving method, comprising: acquiring a road image; performing intersection detection on the road image according to the method of any one of claims 1 to 4; and performing driving control on the intelligent driving device according to the distance between the intelligent driving device that collects the road image and the intersection.
- An intersection detection apparatus, the apparatus comprising a first extraction module, a detection module, and a first determination module, wherein: the first extraction module is configured to perform feature extraction on a road image to obtain a feature map of the road image; the detection module is configured to determine, according to the feature map of the road image, a detection frame of an intersection on the road shown in the road image, wherein the detection frame of the intersection represents the area of the intersection in the road image, and the lower border of the detection frame of the intersection is on the road surface of the road; and the first determination module is configured to determine, according to the lower border of the detection frame of the intersection, the distance between the device that collects the road image and the intersection.
- The apparatus according to claim 10, wherein the detection module is further configured to determine, according to the feature map of the road image, that the road shown in the road image does not have an intersection.
- The apparatus according to claim 10, wherein the first determination module is configured to: determine the position of the lower border of the detection frame of the intersection on the road according to the position of the lower border of the detection frame of the intersection in the road image and the coordinate conversion relationship between the plane of the road image and the road surface of the road; and obtain the distance between the device that collects the road image and the intersection according to the position of the lower border of the detection frame of the intersection on the road and the position of the device that collects the road image on the road.
- The apparatus according to any one of claims 10 to 12, wherein the apparatus is implemented based on a neural network, the neural network is trained using sample images and the annotation results of the sample images, and the annotation results of the sample images include a labeling frame of the intersection on the road shown in the positive sample image, wherein the labeling frame represents the position of the intersection in the positive sample image, and the lower border of the labeling frame is on the road surface of the road shown in the positive sample image.
- A neural network training apparatus, the apparatus comprising a second extraction module, a second determination module, and an adjustment module, wherein: the second extraction module is configured to perform feature extraction on a sample image to obtain a feature map of the sample image; the second determination module is configured to determine a detection result of the sample image according to the feature map of the sample image; and the adjustment module is configured to adjust the network parameter values of the neural network according to the annotation result of the sample image and the detection result; wherein, when the sample image is a positive sample image, the annotation result of the sample image is a labeling frame of the intersection on the road shown in the positive sample image, the labeling frame represents the position of the intersection in the positive sample image, and the lower border of the labeling frame of the intersection on the road shown in the positive sample image is on the road surface of the road shown in the positive sample image.
- The apparatus according to claim 14, wherein the positive sample image includes a stop line of an intersection of the road, and the lower border of the labeling frame of the intersection on the road shown in the positive sample image is aligned with the stop line.
- The apparatus according to claim 14, wherein the difference between the heights of the labeling frames in multiple positive sample images containing the same intersection is within a preset range.
- The apparatus according to any one of claims 14 to 16, wherein, when the sample image is a negative sample image, there is no intersection on the road in the negative sample image, and the annotation result of the sample image includes that there is no labeling frame in the negative sample image.
- An intelligent driving apparatus, the apparatus comprising an acquisition module and a processing module, wherein: the acquisition module is configured to acquire a road image; and the processing module is configured to perform intersection detection on the road image according to the method of any one of claims 1 to 4, and to perform driving control on the apparatus according to the distance between the intelligent driving device that collects the road image and the intersection.
- An electronic device, comprising a processor and a memory configured to store a computer program that can run on the processor, wherein the processor is configured to run the computer program to execute the intersection detection method according to any one of claims 1 to 4, the neural network training method according to any one of claims 5 to 8, or the intelligent driving method according to claim 9.
- A computer storage medium on which a computer program is stored, wherein, when the computer program is executed by a processor, the intersection detection method according to any one of claims 1 to 4, the neural network training method according to any one of claims 5 to 8, or the intelligent driving method according to claim 9 is implemented.
- A computer program, comprising computer-readable code, wherein, when the computer-readable code runs in an electronic device, a processor in the electronic device executes the intersection detection method according to any one of claims 1 to 4, the neural network training method according to any one of claims 5 to 8, or the intelligent driving method according to claim 9.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR1020217016327A KR20210082518A (ko) | 2019-11-07 | 2020-09-08 | 교차로 검출, 신경망 훈련 및 스마트 주행 방법, 장치 및 기기 |
JP2021532862A JP2022512165A (ja) | 2019-11-07 | 2020-09-08 | 交差点検出、ニューラルネットワークトレーニング及びインテリジェント走行方法、装置及びデバイス |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911083615.4 | 2019-11-07 | ||
CN201911083615.4A CN112784639A (zh) | 2019-11-07 | 2019-11-07 | 路口检测、神经网络训练及智能行驶方法、装置和设备 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2021088504A1 true WO2021088504A1 (zh) | 2021-05-14 |
Family
ID=75747994
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2020/114095 WO2021088504A1 (zh) | 2019-11-07 | 2020-09-08 | 路口检测、神经网络训练及智能行驶方法、装置和设备 |
Country Status (4)
Country | Link |
---|---|
JP (1) | JP2022512165A (zh) |
KR (1) | KR20210082518A (zh) |
CN (1) | CN112784639A (zh) |
WO (1) | WO2021088504A1 (zh) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113380035A (zh) * | 2021-06-16 | 2021-09-10 | 山东省交通规划设计院集团有限公司 | 一种道路交叉口交通量分析方法及系统 |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2023107002A2 (en) * | 2021-12-09 | 2023-06-15 | Grabtaxi Holdings Pte. Ltd. | System and method for adaptively predicting a road segment attribute based on a graph indicative of relationship between a road segment and a detection |
GB2617866A (en) * | 2022-04-21 | 2023-10-25 | Continental Automotive Romania Srl | Computer implemented method for training a decision tree model for detecting an intersection, computer implemented method detecting an intersection, |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20140267415A1 (en) * | 2013-03-12 | 2014-09-18 | Xueming Tang | Road marking illuminattion system and method |
CN108230817A (zh) * | 2017-11-30 | 2018-06-29 | 商汤集团有限公司 | 车辆驾驶模拟方法和装置、电子设备、系统、程序和介质 |
CN108216229A (zh) * | 2017-09-08 | 2018-06-29 | 北京市商汤科技开发有限公司 | 交通工具、道路线检测和驾驶控制方法及装置 |
CN110059554A (zh) * | 2019-03-13 | 2019-07-26 | 重庆邮电大学 | 一种基于交通场景的多支路目标检测方法 |
Family Cites Families (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2002193025A (ja) * | 2000-12-27 | 2002-07-10 | Koito Mfg Co Ltd | 車両用前照灯装置 |
JP4915739B2 (ja) * | 2007-05-31 | 2012-04-11 | アイシン・エィ・ダブリュ株式会社 | 運転支援装置 |
JP5942822B2 (ja) * | 2012-11-30 | 2016-06-29 | 富士通株式会社 | 交差点検出方法および交差点検出システム |
KR102628654B1 (ko) * | 2016-11-07 | 2024-01-24 | 삼성전자주식회사 | 차선을 표시하는 방법 및 장치 |
US10008110B1 (en) * | 2017-02-16 | 2018-06-26 | Mapbox, Inc. | Detecting restrictions on turning paths in digital maps |
CN107689157B (zh) * | 2017-08-30 | 2021-04-30 | 电子科技大学 | 基于深度学习的交通路口可通行道路规划方法 |
JP7346401B2 (ja) * | 2017-11-10 | 2023-09-19 | エヌビディア コーポレーション | 安全で信頼できる自動運転車両のためのシステム及び方法 |
CN108376235A (zh) * | 2018-01-15 | 2018-08-07 | 深圳市易成自动驾驶技术有限公司 | 图像检测方法、装置及计算机可读存储介质 |
US11107347B2 (en) * | 2018-04-27 | 2021-08-31 | Cubic Corporation | Adaptively controlling traffic movements for driver safety |
CN108596116B (zh) * | 2018-04-27 | 2021-11-05 | 深圳市商汤科技有限公司 | 测距方法、智能控制方法及装置、电子设备和存储介质 |
CN108877267B (zh) * | 2018-08-06 | 2020-11-03 | 武汉理工大学 | 一种基于车载单目相机的交叉路口检测方法 |
-
2019
- 2019-11-07 CN CN201911083615.4A patent/CN112784639A/zh active Pending
-
2020
- 2020-09-08 JP JP2021532862A patent/JP2022512165A/ja active Pending
- 2020-09-08 WO PCT/CN2020/114095 patent/WO2021088504A1/zh active Application Filing
- 2020-09-08 KR KR1020217016327A patent/KR20210082518A/ko not_active Application Discontinuation
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20140267415A1 (en) * | 2013-03-12 | 2014-09-18 | Xueming Tang | Road marking illuminattion system and method |
CN108216229A (zh) * | 2017-09-08 | 2018-06-29 | 北京市商汤科技开发有限公司 | 交通工具、道路线检测和驾驶控制方法及装置 |
CN108230817A (zh) * | 2017-11-30 | 2018-06-29 | 商汤集团有限公司 | 车辆驾驶模拟方法和装置、电子设备、系统、程序和介质 |
CN110059554A (zh) * | 2019-03-13 | 2019-07-26 | 重庆邮电大学 | 一种基于交通场景的多支路目标检测方法 |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113380035A (zh) * | 2021-06-16 | 2021-09-10 | 山东省交通规划设计院集团有限公司 | 一种道路交叉口交通量分析方法及系统 |
CN113380035B (zh) * | 2021-06-16 | 2022-11-11 | 山东省交通规划设计院集团有限公司 | 一种道路交叉口交通量分析方法及系统 |
Also Published As
Publication number | Publication date |
---|---|
KR20210082518A (ko) | 2021-07-05 |
JP2022512165A (ja) | 2022-02-02 |
CN112784639A (zh) | 2021-05-11 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US10964054B2 (en) | Method and device for positioning | |
CN109271944B (zh) | 障碍物检测方法、装置、电子设备、车辆及存储介质 | |
WO2021088504A1 (zh) | 路口检测、神经网络训练及智能行驶方法、装置和设备 | |
EP3171292B1 (en) | Driving lane data processing method, device, storage medium and apparatus | |
WO2020103893A1 (zh) | 车道线属性检测方法、装置、电子设备及可读存储介质 | |
US10212397B2 (en) | Abandoned object detection apparatus and method and system | |
Ding et al. | Fast lane detection based on bird’s eye view and improved random sample consensus algorithm | |
US20210191397A1 (en) | Autonomous vehicle semantic map establishment system and establishment method | |
CN111967396A (zh) | 障碍物检测的处理方法、装置、设备及存储介质 | |
CN111091023A (zh) | 一种车辆检测方法、装置及电子设备 | |
CN112487884A (zh) | 一种交通违法行为检测方法、装置及计算机可读存储介质 | |
JP2021103160A (ja) | 自律走行車意味マップ確立システム及び確立方法 | |
CN112434657A (zh) | 飘散运载物检测方法、设备、程序及计算机可读介质 | |
CN115147328A (zh) | 三维目标检测方法及装置 | |
WO2021088505A1 (zh) | 目标属性检测、神经网络训练及智能行驶方法、装置 | |
CN116228756B (zh) | 一种自动驾驶中相机坏点的检测方法及检测系统 | |
CN110111018B (zh) | 评估车辆感测能力的方法、装置、电子设备及存储介质 | |
Muniruzzaman et al. | Deterministic algorithm for traffic detection in free-flow and congestion using video sensor | |
Lu et al. | Monocular multi-kernel based lane marking detection | |
CN115565072A (zh) | 一种道路垃圾识别和定位方法、装置、电子设备及介质 | |
CN115618602A (zh) | 一种车道级场景仿真方法及系统 | |
US11461944B2 (en) | Region clipping method and recording medium storing region clipping program | |
CN113869440A (zh) | 图像处理方法、装置、设备、介质及程序产品 | |
CN113435350A (zh) | 一种交通标线检测方法、装置、设备和介质 | |
CN111383268A (zh) | 车辆间距状态获取方法、装置、计算机设备和存储介质 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
ENP | Entry into the national phase |
Ref document number: 20217016327 Country of ref document: KR Kind code of ref document: A |
|
ENP | Entry into the national phase |
Ref document number: 2021532862 Country of ref document: JP Kind code of ref document: A |
|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 20883748 Country of ref document: EP Kind code of ref document: A1 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 20883748 Country of ref document: EP Kind code of ref document: A1 |