CN112784639A - Intersection detection, neural network training and intelligent driving method, device and equipment - Google Patents

Intersection detection, neural network training and intelligent driving method, device and equipment


Publication number
CN112784639A
Authority
CN
China
Prior art keywords
intersection
road
image
detection
sample image
Prior art date
Legal status
Pending
Application number
CN201911083615.4A
Other languages
Chinese (zh)
Inventor
程光亮
石建萍
Current Assignee
Beijing Sensetime Technology Development Co Ltd
Original Assignee
Beijing Sensetime Technology Development Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Sensetime Technology Development Co Ltd filed Critical Beijing Sensetime Technology Development Co Ltd
Priority to CN201911083615.4A priority Critical patent/CN112784639A/en
Priority to PCT/CN2020/114095 priority patent/WO2021088504A1/en
Priority to KR1020217016327A priority patent/KR20210082518A/en
Priority to JP2021532862A priority patent/JP2022512165A/en
Publication of CN112784639A publication Critical patent/CN112784639A/en


Classifications

    • B60W40/06 — Road conditions (estimation of driving parameters related to ambient conditions)
    • G06V20/56 — Context or environment of the image exterior to a vehicle, using sensors mounted on the vehicle
    • B60W30/12 — Lane keeping
    • B60W30/18154 — Approaching an intersection
    • G06F18/00 — Pattern recognition
    • G06F18/214 — Generating training patterns; bootstrap methods, e.g. bagging or boosting
    • G06N3/04 — Neural networks; architecture, e.g. interconnection topology
    • G06N3/08 — Neural networks; learning methods
    • B60W2050/0005 — Processor details or data handling, e.g. memory registers or chip architecture
    • B60Y2300/12 — Lane keeping (cross-cutting indexing scheme)
    • B60Y2300/18158 — Approaching intersection (cross-cutting indexing scheme)

Abstract

The embodiments disclose a method, an apparatus, an electronic device and a computer storage medium for intersection detection, neural network training and intelligent driving. The intersection detection method includes: performing feature extraction on a road image to obtain a feature map of the road image; determining, according to the feature map, a detection frame of an intersection on the road shown in the road image, where the detection frame represents the area of the intersection in the road image and its lower border lies on the road surface of the road; and determining the distance between the device that captured the road image and the intersection according to the lower border of the detection frame. Thus, even when a clear image of traffic lights or a ground stop line cannot be acquired, or the intersection has no traffic lights or ground stop line at all, the embodiments of the present disclosure can still perform intersection detection from the feature map of the road image and thereby determine the distance between the capturing device and the intersection.

Description

Intersection detection, neural network training and intelligent driving method, device and equipment
Technical Field
The present disclosure relates to computer vision processing technologies, and in particular, to a method and an apparatus for intersection detection, neural network training, and intelligent driving, an electronic device, and a computer storage medium.
Background
In recent years, with rising living standards and advances in driver assistance technology, demand for driver assistance has grown, and an increasing number of researchers and companies are applying deep learning to driver assistance schemes. When performing a driver assistance or automatic driving task, detecting intersections and determining the distance between the vehicle and a detected intersection are very important tasks.
Disclosure of Invention
The disclosed embodiments are intended to provide a technical solution for intersection detection.
The embodiment of the disclosure provides an intersection detection method, which comprises the following steps:
carrying out feature extraction on a road image to obtain a feature map of the road image;
determining a detection frame of an intersection on the road shown in the road image according to the feature map of the road image, where the detection frame represents the area of the intersection in the road image and the lower border of the detection frame lies on the road surface of the road;
and determining the distance between the device that captured the road image and the intersection according to the lower border of the detection frame of the intersection.
Optionally, the method further comprises:
and determining, according to the feature map of the road image, that the road shown in the road image has no intersection.
Optionally, determining the distance between the device that captured the road image and the intersection according to the lower border of the detection frame of the intersection includes:
determining the position of the lower border of the detection frame on the road according to its position in the road image and the coordinate conversion relationship between the plane of the road image and the road surface of the road;
and obtaining the distance between the device that captured the road image and the intersection from the position of the lower border on the road and the position of that device on the road.
Optionally, the method is executed by a neural network trained using sample images and their annotation results. The annotation result of a positive sample image includes an annotation frame of an intersection on the road shown in that image; the annotation frame represents the position of the intersection in the positive sample image, and its lower border lies on the road surface of the road shown in the positive sample image.
The embodiment of the present disclosure further provides a neural network training method, including:
carrying out feature extraction on a sample image to obtain a feature map of the sample image;
determining the detection result of the sample image according to the feature map of the sample image;
adjusting the network parameter values of the neural network according to the annotation result and the detection result of the sample image;
when the sample image is a positive sample image, the annotation result of the sample image is an annotation frame of the intersection on the road shown in the positive sample image; the annotation frame represents the position of the intersection in the positive sample image, and its lower border lies on the road surface of the road shown in the positive sample image.
Optionally, the positive sample image contains a stop line of an intersection of the road, and the lower border of the annotation frame of the intersection shown in the positive sample image is aligned with the stop line.
Optionally, the differences between the heights of the annotation frames in multiple positive sample images containing the same intersection are within a preset range.
Optionally, when the sample image is a negative sample image, no intersection exists on the road in the negative sample image, and the annotation result of the sample image is that no annotation frame exists in the negative sample image.
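The training steps above (extract features, compute a detection result, compare it with the annotation result, adjust the network parameters) can be illustrated with a toy stand-in. The sketch below is not the patent's network: it trains a logistic classifier for "intersection present vs absent" by gradient descent on synthetic features, purely to show the extract–predict–compare–adjust loop in miniature; all sizes and the learning rate are invented.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for feature maps of positive/negative sample images.
X = rng.normal(size=(100, 8))
w_true = rng.normal(size=8)
y = (X @ w_true > 0).astype(float)   # 1 = intersection present, 0 = none

w = np.zeros(8)                      # "network parameter values"
lr = 0.1
for epoch in range(200):
    logits = X @ w
    pred = 1.0 / (1.0 + np.exp(-logits))   # "detection result"
    grad = X.T @ (pred - y) / len(y)       # compare with annotation result
    w -= lr * grad                         # adjust parameter values

accuracy = float(((X @ w > 0) == (y > 0.5)).mean())
print(accuracy)
```

On this noise-free synthetic data the loop quickly recovers a separating direction; a real detector would replace the logistic classifier with the detection network and the labels with annotation frames.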
The embodiment of the present disclosure further provides an intelligent driving method, including:
acquiring a road image;
performing intersection detection on the road image according to any one of the above intersection detection methods;
and controlling the device to travel according to the distance between the intelligent driving device that captured the road image and the intersection.
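The patent leaves the control policy unspecified. Purely as a hypothetical illustration of "controlling the device according to the distance", the rule below caps the target speed so the vehicle could still stop, at a comfortable deceleration, with a margin before the intersection (using v² = 2ad); every parameter value is an assumption, not taken from the disclosure.

```python
def plan_speed(distance_m, current_speed_mps, comfort_decel=2.0, stop_margin_m=5.0):
    """Pick a target speed from the detected distance to the intersection.

    Hypothetical rule: keep the current speed while the braking distance at
    comfort_decel still fits before the intersection, otherwise slow down so
    the vehicle can stop stop_margin_m short of it.
    """
    if distance_m is None:            # no intersection detected
        return current_speed_mps
    usable = max(distance_m - stop_margin_m, 0.0)
    # v^2 = 2 a d  ->  highest speed from which a stop fits within `usable`.
    max_speed = (2.0 * comfort_decel * usable) ** 0.5
    return min(current_speed_mps, max_speed)

print(plan_speed(70.0, 15.0))  # ample room: keeps 15.0 m/s
print(plan_speed(13.0, 15.0))  # close to the intersection: ~5.66 m/s
```

The `None` branch mirrors the no-intersection detection result described earlier: with nothing detected, the rule leaves the speed unchanged.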
The embodiment of the present disclosure also provides an intersection detection apparatus, which includes a first extraction module, a detection module and a first determination module, wherein:
the first extraction module is used for performing feature extraction on the road image to obtain a feature map of the road image;
the detection module is used for determining a detection frame of an intersection on the road shown in the road image according to the feature map of the road image, where the detection frame represents the area of the intersection in the road image and the lower border of the detection frame lies on the road surface of the road;
and the first determining module is used for determining the distance between the device that captured the road image and the intersection according to the lower border of the detection frame of the intersection.
Optionally, the detection module is further configured to determine that the road shown in the road image does not have an intersection according to the feature map of the road image.
Optionally, the first determining module is configured to determine the position of the lower border of the detection frame of the intersection on the road according to its position in the road image and the coordinate conversion relationship between the plane of the road image and the road surface of the road, and to obtain the distance between the device that captured the road image and the intersection from the position of the lower border on the road and the position of that device on the road.
Optionally, the apparatus is implemented based on a neural network trained with sample images and their annotation results; the annotation result of a positive sample image includes an annotation frame of an intersection on the road shown in that image, the annotation frame represents the position of the intersection in the positive sample image, and its lower border lies on the road surface of the road shown in the positive sample image.
The embodiment of the present disclosure further provides a neural network training device, which includes a second extraction module, a second determination module, and an adjustment module, wherein:
the second extraction module is used for performing feature extraction on the sample image to obtain a feature map of the sample image;
the second determining module is used for determining the detection result of the sample image according to the feature map of the sample image;
the adjusting module is used for adjusting the network parameter values of the neural network according to the annotation result and the detection result of the sample image;
when the sample image is a positive sample image, the annotation result of the sample image is an annotation frame of the intersection on the road shown in the positive sample image; the annotation frame represents the position of the intersection in the positive sample image, and its lower border lies on the road surface of the road shown in the positive sample image.
Optionally, the positive sample image contains a stop line of an intersection of the road, and the lower border of the annotation frame of the intersection shown in the positive sample image is aligned with the stop line.
Optionally, the differences between the heights of the annotation frames in multiple positive sample images containing the same intersection are within a preset range.
Optionally, when the sample image is a negative sample image, no intersection exists on the road in the negative sample image, and the annotation result of the sample image is that no annotation frame exists in the negative sample image.
The embodiment of the present disclosure also provides an intelligent driving apparatus, which includes an acquisition module and a processing module, wherein the acquisition module is used for acquiring a road image;
and the processing module is used for performing intersection detection on the road image according to any one of the above intersection detection methods, and for controlling the device to travel according to the distance between the intelligent driving device that captured the road image and the intersection.
The disclosed embodiments also provide an electronic device comprising a processor and a memory for storing a computer program capable of running on the processor, wherein
the processor is configured to run the computer program to execute any one of the above intersection detection methods, neural network training methods, or intelligent driving methods.
The disclosed embodiments also provide a computer storage medium on which a computer program is stored; when executed by a processor, the computer program implements any one of the above intersection detection methods, neural network training methods, or intelligent driving methods.
In the intersection detection method, neural network training method, intelligent driving method, apparatus, electronic device and computer storage medium provided by the embodiments of the present disclosure, the intersection detection method includes: performing feature extraction on a road image to obtain a feature map of the road image; and determining, according to the feature map, a detection frame of an intersection on the road shown in the road image, the detection frame representing the area of the intersection in the road image. Moreover, because the lower border of the detection frame lies on the road surface of the road, the distance between the device that captured the road image and the intersection can be determined from that lower border. Thus, even when a clear image of traffic lights or a ground stop line cannot be acquired, or the intersection has no traffic lights or ground stop line, the embodiments of the present disclosure can still perform intersection detection from the feature map of the road image and thereby determine the distance between the capturing device and the intersection.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain the principles of the disclosure.
FIG. 1 is a flow chart of a method of intersection detection in an embodiment of the present disclosure;
FIG. 2 is a flow chart of a neural network training method of an embodiment of the present disclosure;
FIG. 3 is an exemplary diagram of intersection detection using a trained neural network according to an embodiment of the present disclosure;
FIG. 4 is a flowchart of an intelligent driving method according to an embodiment of the present disclosure;
FIG. 5 is a schematic diagram of a structure of the intersection detection device according to the embodiment of the present disclosure;
FIG. 6 is a schematic diagram of a structure of a neural network training device according to an embodiment of the present disclosure;
FIG. 7 is a schematic structural diagram of the intelligent driving device according to the embodiment of the present disclosure;
FIG. 8 is a schematic structural diagram of an electronic device according to an embodiment of the disclosure.
Detailed Description
The present disclosure will be described in further detail below with reference to the accompanying drawings and examples. It is to be understood that the examples provided herein are merely illustrative of the present disclosure and are not intended to limit the present disclosure. In addition, the embodiments provided below are some embodiments for implementing the disclosure, not all embodiments for implementing the disclosure, and the technical solutions described in the embodiments of the disclosure may be implemented in any combination without conflict.
It should be noted that, in the embodiments of the present disclosure, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, so that a method or apparatus including a series of elements includes not only the explicitly recited elements but also other elements not explicitly listed or inherent to the method or apparatus. Without further limitation, an element defined by the phrase "comprising a(n) ..." does not exclude the presence of other elements (e.g., steps in a method or elements in a device, such as portions of circuitry, processors, programs, software, etc.) in the method or device that includes the element.
For example, the intersection detection, neural network training method and intelligent driving method provided by the embodiments of the present disclosure include a series of steps, but the intersection detection, neural network training method and intelligent driving method provided by the embodiments of the present disclosure are not limited to the described steps, and similarly, the intersection detection device, neural network training device and intelligent driving device provided by the embodiments of the present disclosure include a series of modules, but the device provided by the embodiments of the present disclosure is not limited to include the explicitly described modules, and may also include modules that are required to obtain relevant information or perform processing based on the information.
The term "and/or" herein merely describes an association between associated objects and means that three relationships may exist; for example, A and/or B may mean: A exists alone, both A and B exist, or B exists alone. In addition, the term "at least one" herein means any one of a plurality or any combination of at least two of a plurality; for example, "including at least one of A, B, C" may mean including any one or more elements selected from the group consisting of A, B and C.
The disclosed embodiments may be implemented in a computer system comprised of terminals and servers and may be operational with numerous other general purpose or special purpose computing system environments or configurations. Here, the terminal may be a thin client, a thick client, a hand-held or laptop device, a microprocessor-based system, a set-top box, a programmable consumer electronics, a network personal computer, a small computer system, etc., and the server may be a server computer system, a small computer system, a mainframe computer system, a distributed cloud computing environment including any of the above, etc.
The electronic devices of the terminal, server, etc. may be described in the general context of computer system-executable instructions, such as program modules, being executed by a computer system. Generally, program modules may include routines, programs, objects, components, logic, data structures, etc. that perform particular tasks or implement particular abstract data types. The computer system/server may be practiced in distributed cloud computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed cloud computing environment, program modules may be located in both local and remote computer system storage media including memory storage devices.
In a driver assistance or automatic driving task, the surroundings must be sensed through cameras and radar, and accurate decision information (e.g., accelerate, avoid, decelerate) must be produced. Intersection regions often have a complex structure, and when the vehicle is still far from an intersection, accurately predicting and estimating the distance to it is very important. Accurately detecting intersection regions serves to give automatic driving decisions sufficient reaction time and to reserve enough time for the vehicle to decelerate. In the related art, the judgment is usually made using information such as intersection traffic lights or ground stop lines captured by a vehicle camera; when the vehicle is far from the intersection, clear images of traffic lights or ground stop lines cannot be acquired, so such intersection detection schemes cannot detect the intersection accurately. Moreover, some intersections have no traffic lights or ground stop lines at all, in which case such schemes cannot perform intersection detection.
In view of the above-mentioned problems, in some embodiments of the present disclosure, an intersection detection method is provided, and embodiments of the present disclosure may be applied to scenarios such as automatic driving and assisted driving.
Fig. 1 is a flowchart of an intersection detection method according to an embodiment of the present disclosure, and as shown in fig. 1, the flowchart may include:
step 101: and extracting the characteristics of the road image to obtain a characteristic diagram of the road image.
Here, the road image is an image on which intersection detection is required. Illustratively, the format of the road image may be Joint Photographic Experts Group (JPEG), Bitmap (BMP), Portable Network Graphics (PNG), or another format; the formats listed here are merely examples, and the embodiment of the present disclosure does not limit the format of the road image.
In practical applications, the road image may be acquired from a local storage area or a network, or the road image may be acquired by using an image acquisition device, where the image acquisition device may include a camera mounted on a vehicle, etc.; in practical applications, one or more cameras may be provided on the vehicle for capturing images of the road in front of the vehicle.
In the embodiment of the present disclosure, the feature map of the road image may be used to characterize at least one of the following features of the road image: color features, texture features, shape features, spatial relationship features. For the implementation of this step, in one example, a Scale-invariant feature transform (SIFT) method or a Histogram of Oriented Gradients (HOG) feature extraction method may be used to extract a feature map of the road image; in another example, the road image may also be subjected to feature extraction by using a pre-trained neural network for extracting the image feature map.
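The disclosure leaves the feature extractor open (SIFT, HOG, or a pretrained neural network). As an illustration only, not the patent's method, a much-simplified HOG-style extractor (per-cell histograms of gradient orientations, without block normalization) can be sketched in NumPy; the `cell_size` and `n_bins` values are arbitrary choices.

```python
import numpy as np

def hog_feature_map(image, cell_size=8, n_bins=9):
    """Simplified HOG sketch: per-cell orientation histograms of gradients."""
    gy, gx = np.gradient(image.astype(float))       # row and column gradients
    magnitude = np.hypot(gx, gy)
    orientation = np.degrees(np.arctan2(gy, gx)) % 180.0  # unsigned, [0, 180)

    h, w = image.shape
    cells_y, cells_x = h // cell_size, w // cell_size
    features = np.zeros((cells_y, cells_x, n_bins))
    bin_width = 180.0 / n_bins
    for cy in range(cells_y):
        for cx in range(cells_x):
            ys, xs = cy * cell_size, cx * cell_size
            mag = magnitude[ys:ys + cell_size, xs:xs + cell_size]
            ori = orientation[ys:ys + cell_size, xs:xs + cell_size]
            bins = np.minimum((ori / bin_width).astype(int), n_bins - 1)
            for b in range(n_bins):
                # Magnitude-weighted vote of each pixel into its orientation bin.
                features[cy, cx, b] = mag[bins == b].sum()
    return features

# Horizontal intensity ramp: all gradient energy lands in the 0-degree bin.
image = np.tile(np.arange(64, dtype=float), (64, 1))
fmap = hog_feature_map(image)
print(fmap.shape)  # (8, 8, 9)
```

A production extractor would add block normalization and interpolation between bins; this sketch only shows the feature-map shape such a method produces.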
Step 102: determine a detection frame of an intersection on the road shown in the road image according to the feature map of the road image; the detection frame represents the area of the intersection in the road image, and the lower border of the detection frame lies on the road surface of the road.
In the embodiment of the present disclosure, whether the road shown in a road image has an intersection may be determined according to the feature map of the road image. The determination result falls into two cases: the road shown in the road image has an intersection, or it does not. When the road has an intersection, the detection frame of the intersection is determined according to the feature map of the road image and output; when the road has no intersection, nothing is output.
In practical application, when an intersection exists on a road shown by a road image, a pre-trained neural network for extracting intersection detection frames can be used for determining the detection frames of the intersection on the road shown by the road image.
In the embodiment of the present disclosure, the shape of the detection frame of the intersection is not limited; for example, it may be a rectangle, a trapezoid, or another shape. In one specific example, the road shown in the road image has an intersection: after the feature map of the road image is input to the neural network for extracting intersection detection frames, that network outputs a rectangular intersection detection frame. In another specific example, the road shown in the road image has no intersection: after the feature map is input to the same network, it outputs no data.
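The patent does not specify how the network emits a detection frame. One hypothetical decoding scheme, shown purely for illustration, treats the network output as a per-cell objectness score map plus per-cell box offsets, keeps the best cell above a threshold, and returns `None` to model the "no intersection, no output" case; the `stride` and `threshold` values are assumptions.

```python
import numpy as np

def decode_intersection_box(score_map, box_map, stride=16, threshold=0.5):
    """Pick the highest-scoring cell and return its box in image pixels, or None.

    score_map: (H, W) objectness scores in [0, 1]
    box_map:   (H, W, 4) per-cell (left, top, right, bottom) offsets in pixels
    """
    cy, cx = np.unravel_index(np.argmax(score_map), score_map.shape)
    if score_map[cy, cx] < threshold:
        return None  # models the "no intersection -> no output" case
    px, py = (cx + 0.5) * stride, (cy + 0.5) * stride  # cell centre in pixels
    left, top, right, bottom = box_map[cy, cx]
    # (x1, y1, x2, y2); y2 is the lower border used later for distance estimation.
    return (float(px - left), float(py - top), float(px + right), float(py + bottom))

scores = np.zeros((4, 4))
scores[2, 1] = 0.9
boxes = np.zeros((4, 4, 4))
boxes[2, 1] = (20.0, 30.0, 20.0, 10.0)
print(decode_intersection_box(scores, boxes))  # (4.0, 10.0, 44.0, 50.0)
```

Feeding an all-zero score map through the same function returns `None`, matching the behaviour described for images without intersections.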
Step 103: determine the distance between the device that captured the road image and the intersection according to the lower border of the detection frame of the intersection.
It can be understood that, because the lower border of the detection frame of the intersection is on the road surface of the road, the position of the intersection can be determined according to the lower border of the detection frame of the intersection, and further, the distance between the equipment for acquiring the road image and the intersection can be determined by combining the known position of the equipment for acquiring the road image.
In practical applications, steps 101 to 103 can be implemented by a processor in an electronic device; the processor can be at least one of an application-specific integrated circuit (ASIC), a digital signal processor (DSP), a digital signal processing device (DSPD), a programmable logic device (PLD), a field-programmable gate array (FPGA), a central processing unit (CPU), a controller, a microcontroller, and a microprocessor.
It can be seen that, in the embodiment of the present disclosure, feature extraction is first performed on a road image to obtain a feature map of the road image; then, a detection frame of an intersection on the road shown by the road image is determined according to the feature map of the road image; in addition, because the lower frame of the detection frame of the intersection is on the road surface of the road, the distance between the equipment for acquiring the road image and the intersection can be determined according to the lower frame of the detection frame of the intersection; therefore, even when a clear image of a traffic light or a ground stop line cannot be acquired, or the intersection has no traffic light or ground stop line, the embodiment of the present disclosure can still perform intersection detection according to the feature map of the road image, and thereby determine the distance between the equipment for acquiring the road image and the intersection.
In addition, the intersection detection method of the embodiment of the present disclosure has strong universality: with at least one camera installed on a vehicle, the intersection in the image in front of the vehicle can be accurately detected, and the intersection can be detected even when the vehicle is still far from it, which helps provide sufficient reaction time for driving decisions, for example sufficient reaction time for braking, and thereby ensures driving safety.
Optionally, for a road image that does not include an intersection, it can be directly determined, according to the feature map of the road image, that no intersection exists on the road shown in the road image, which also helps inform driving decisions and ensures driving safety.
For the implementation mode of determining the distance between the equipment for acquiring the road image and the intersection according to the lower frame of the detection frame of the intersection, exemplarily, the position of the lower frame of the detection frame of the intersection on the road can be determined according to the position of the lower frame of the detection frame of the intersection in the road image and the coordinate conversion relationship between the plane of the road image and the road surface of the road; and obtaining the distance between the equipment for acquiring the road image and the intersection according to the position of the lower frame of the detection frame of the intersection on the road and the position of the equipment for acquiring the road image on the road.
In related intersection detection schemes, the distance between the intersection and the vehicle cannot be determined. In the embodiment of the present disclosure, when the equipment for acquiring the road image is located in the vehicle, the distance between the equipment and the intersection may be taken as the distance between the vehicle and the intersection; that is, since the lower frame of the intersection detection frame is considered to be attached to the road surface, the distance between the vehicle and the intersection can be accurately estimated according to the position of the lower frame of the intersection detection frame in the road image, which is beneficial to providing sufficient reaction time for driving decisions and ensuring driving safety.
In one embodiment, the position coordinates of the lower frame of the detection frame of the intersection may be converted into a world coordinate system according to a coordinate conversion relationship between the plane of the road image and the road surface of the road, and the position of the lower frame of the detection frame of the intersection in the world coordinate system, that is, the position of the lower frame of the detection frame of the intersection on the road may be obtained.
In practical application, the plane of the road image and the road surface of the road are two different planes, so a homography matrix can be used to represent the coordinate conversion relationship between them; the position coordinates of the lower frame of the detection frame of the intersection can then be converted into the world coordinate system according to the homography matrix. The homography matrix can be calculated from pairs of corresponding points in the road image and the world coordinate system, and based on the homography matrix, the position of each point of the road image in the world coordinate system can be accurately obtained.
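A minimal sketch of applying such a homography is shown below. The 3x3 matrix `H` maps image pixel coordinates to road-plane (world) coordinates; in practice it would be estimated from corresponding point pairs (for example with `cv2.findHomography`). The matrix values, pixel coordinates, and camera position here are hypothetical, chosen only for illustration.

```python
import numpy as np

def image_to_road(H, pixel):
    """Project an image-plane point onto the road plane via homography H.

    H is the 3x3 homography from image coordinates to road-plane (world)
    coordinates. The point is lifted to homogeneous coordinates,
    transformed, and normalized back to Euclidean coordinates.
    """
    p = np.array([pixel[0], pixel[1], 1.0])
    q = H @ p
    return q[:2] / q[2]  # homogeneous -> Euclidean

def distance_to_intersection(H, lower_border_pixel, camera_position):
    """Distance on the road plane between the camera and the projected
    lower frame of the intersection detection frame."""
    road_point = image_to_road(H, lower_border_pixel)
    return float(np.linalg.norm(road_point - np.asarray(camera_position)))

# Hypothetical homography: a pure scaling for illustration (1 px = 0.05 m);
# a real homography would also encode perspective and camera pose.
H = np.array([[0.05, 0.0, 0.0],
              [0.0, 0.05, 0.0],
              [0.0, 0.0, 1.0]])
d = distance_to_intersection(H, (320.0, 280.0), (16.0, 0.0))
```

With this toy matrix, the lower-border pixel (320, 280) projects to road coordinates (16, 14), giving a distance of 14 meters from the camera at (16, 0).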
As an embodiment, the intersection detection method may be executed by a neural network, the neural network being trained by using a sample image and an annotation result of the sample image, the annotation result of the sample image including an annotation frame of an intersection on a road shown in the positive sample image, the annotation frame of the intersection on the road shown in the positive sample image representing a position of the intersection in the positive sample image, and a lower frame of the annotation frame of the intersection on the road shown in the positive sample image being on a road surface of the road shown in the positive sample image.
Here, the format of the sample image may be Joint Photographic Experts Group (JPEG), Bitmap (BMP), Portable Network Graphics (PNG), or another format; it should be noted that these formats are merely illustrative, and the embodiment of the present disclosure does not limit the format of the sample image.
In practical applications, the sample image may be acquired from a local storage area or a network, or the sample image may be acquired by using an image acquisition device.
As can be appreciated, since the positive sample image includes the intersection, it is advantageous to enable the trained neural network to detect the intersection in the road image by performing the training of the neural network based on the positive sample image.
The training process of the neural network is exemplarily described below with reference to the drawings.
Fig. 2 is a flowchart of a neural network training method according to an embodiment of the present disclosure, and as shown in fig. 2, the flowchart may include:
step 201: and performing feature extraction on the sample image to obtain a feature map of the sample image.
In the embodiment of the present disclosure, the feature map of the sample image may be used to characterize at least one of the following features of the sample image: color features, texture features, shape features, spatial relationship features; for implementation of this step, for example, the sample image may be input into a neural network, and feature extraction may be performed on the sample image by using the neural network to obtain a feature map of the sample image.
In the embodiment of the present disclosure, the kind of the neural network is not limited; for example, the neural network may be a Single-Shot multibox Detector (SSD), You Only Look Once (YOLO), a Faster Region-based Convolutional Neural Network (Faster R-CNN), or another neural network based on deep learning. The network structure of the neural network is also not limited; for example, it may be a 50-layer residual network structure, a VGG16 network structure, a MobileNet network structure, or the like.
Step 202: and determining the detection result of the sample image according to the characteristic diagram of the sample image.
In the embodiment of the present disclosure, whether the road shown in the sample image has an intersection can be determined according to the feature map of the sample image, so as to obtain a detection result; obviously, the detection result covers the following two cases: the road shown by the sample image has an intersection, or the road shown by the sample image has no intersection.
In the embodiment of the disclosure, when the sample image is a positive sample image, the labeling result of the sample image is a labeling frame of the intersection on the road shown by the positive sample image, the labeling frame represents the position of the intersection in the positive sample image, and the lower frame of the labeling frame is on the road surface of the road shown by the positive sample image; obviously, when the sample image is a positive sample image, the detection result of the sample image, that is, the detection frame of the intersection, can be determined according to the feature map of the sample image.
Step 203: and adjusting the network parameter value of the neural network according to the labeling result and the detection result of the sample image.
For the implementation of this step, illustratively, the network parameter values of the neural network may be adjusted according to the difference between the labeling result and the detection result of the sample image. In actual implementation, a loss of the neural network can be calculated, where the loss characterizes the difference between the labeling result and the detection result of the sample image; the network parameter values of the neural network may then be adjusted based on this loss, with the goal of reducing it.
Step 204: judging whether the detection result of the neural network on the sample image after the network parameter value adjustment meets the set precision requirement, if not, returning to execute the step 201; if so, step 205 is performed.
Here, the set accuracy requirement may be that a difference between the detection result of the sample image and the labeling result of the sample image is within a preset range.
Step 205: and taking the neural network after the network parameter value adjustment as the trained neural network.
In practical applications, steps 201 to 205 may be implemented by a processor in an electronic device, where the processor may be at least one of an ASIC, a DSP, a DSPD, a PLD, an FPGA, a CPU, a controller, a microcontroller, and a microprocessor.
It can be seen that, in the embodiment of the present disclosure, during the training of the neural network, the detection result of the sample image can be determined according to the feature map of the sample image; therefore, the trained neural network can perform intersection detection according to the feature map of the road image even when a clear image of a traffic light or a ground stop line cannot be acquired, or the intersection has no traffic light or ground stop line; in addition, since the positive sample image includes an intersection, training the neural network based on the positive sample image enables the trained neural network to detect intersections in road images.
In practical application, when labeling the labeling frames of intersections in positive sample images, data labeling is quite difficult because many intersections have no obvious markers; the embodiments of the present disclosure can address this problem in various ways, as described in the following examples.
In the first example, the lower frame of the labeling frame of the intersection on the road shown in the positive sample image is on the road surface of the road shown in the positive sample image; therefore, even when the intersection has no obvious marker, the lower frame of the labeling frame of the intersection can still be determined, which facilitates labeling; furthermore, because the lower frame of the labeling frame of the intersection is labeled on the road surface of the road, the labeling frame of the intersection accords with the actual situation, and the trained neural network can accurately obtain the detection frame of the intersection on the basis of the labeling frame of the intersection shown by the sample image.
On the basis of the first example, as an alternative embodiment, in the case where a stop line of an intersection of a road is included in the positive sample image, the lower frame of the labeling frame of the intersection on the road shown in the positive sample image is aligned with the stop line; aligning the lower frame of the labeling frame with the stop line makes the labeling frame of the intersection accord with the actual situation, so that the trained neural network can accurately obtain the detection frame of the intersection on the basis of the labeling frame of the intersection on the road shown by the sample image.
On the basis of the first example, as an alternative implementation manner, the intersection in the positive sample image is labeled with a rectangular labeling frame; if the intersection is far away, the lower frame of the rectangular labeling frame needs to be placed on the road surface based on experience and observation of the intersection region, and the height of the rectangular labeling frame is set to a fixed value, for example, 80 pixels.
In a second example, the difference between the heights of the label frames in the plurality of positive sample images containing the same intersection is within a preset range; the preset range can be preset according to actual conditions, for example, the heights of the labeling frames in the multiple positive sample images containing the same intersection are consistent and are all 80 pixels.
Therefore, the height difference of the mark frames of the intersections in the multiple positive sample images containing the same intersection is within the preset range, the consistency of the mark frames of the intersections of the multiple positive sample images can be ensured, and the training process of the neural network is accelerated on the basis of the mark frames of the intersections of the multiple positive sample images.
In practical applications, a plurality of positive sample images including the same intersection may be continuously captured images.
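The labeling rules of the first and second examples can be sketched as follows. The box representation `(x1, y1, x2, y2)` and the helper names are hypothetical; the disclosure only specifies that the lower frame lies on the road surface and that heights are consistent within a preset range.

```python
FIXED_BOX_HEIGHT = 80  # pixels; the fixed labeling-frame height from the example

def make_intersection_label(x_left, x_right, lower_border_y,
                            height=FIXED_BOX_HEIGHT):
    """Build a rectangular labeling frame (x1, y1, x2, y2) whose lower frame
    sits at `lower_border_y` (on the road surface, e.g., aligned with the
    stop line) and whose height is a fixed value."""
    return (x_left, lower_border_y - height, x_right, lower_border_y)

def heights_consistent(labels, max_diff=0):
    """Check that labeling-frame heights across positive samples of the same
    intersection differ by at most `max_diff` pixels (the preset range)."""
    heights = [y2 - y1 for (_, y1, _, y2) in labels]
    return max(heights) - min(heights) <= max_diff

# Two continuously captured positive samples of the same intersection
boxes = [make_intersection_label(100, 540, 300),
         make_intersection_label(90, 560, 320)]
```

Because both frames are built with the same fixed height, the consistency check passes, matching the example where all labeling frames of the same intersection are 80 pixels high.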
In the third example, in the positive sample image, the labeling frame of the intersection needs to be labeled whenever the intersection ahead can be identified.
In the fourth example, in the positive sample image, when the intersection is severely occluded, or it cannot be distinguished by the naked eye whether a region is an intersection region, the intersection is not labeled.
Optionally, when the sample image is a negative sample image, no intersection exists on the road in the negative sample image, and the annotation result of the sample image indicates that no annotation frame exists in the negative sample image.
It can be seen that, by inputting negative sample images to the neural network for network training, the false detection rate of the trained neural network on images that do not include an intersection region can be reduced; that is, images that do not include an intersection region can be detected more accurately.
In one embodiment, when the sample image includes a positive sample image and a negative sample image, the ratio of the positive sample image to the negative sample image is greater than a set ratio threshold; therefore, enough positive sample images are input into the neural network to carry out network training of the neural network, so that the trained neural network can accurately detect the intersection area containing the images of the intersections.
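A trivial sketch of the ratio condition follows. The threshold value and counts are hypothetical; the disclosure only requires that the positive:negative ratio exceed a preset threshold.

```python
def ratio_meets_threshold(num_positive, num_negative, ratio_threshold):
    """Check whether the positive:negative sample ratio exceeds a set
    threshold. The threshold value itself is application-specific."""
    if num_negative == 0:
        return num_positive > 0
    return num_positive / num_negative > ratio_threshold

# 3000 positive vs. 1000 negative samples (3:1) against a threshold of 2
ok = ratio_meets_threshold(3000, 1000, 2.0)
```

Such a check can gate the construction of the training set, ensuring enough positive samples are fed to the network so that intersection regions are learned accurately.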
In the embodiment of the disclosure, after the trained neural network is obtained, the road image may be input to the trained neural network, intersection detection is performed by using the trained neural network, and then a detection frame of an intersection on a road shown by the road image is determined, or it is determined that the road shown by the road image does not have an intersection.
Fig. 3 is a schematic diagram of intersection detection using the trained neural network according to the embodiment of the present disclosure; as shown in fig. 3, the image to be detected represents a road image captured by a single camera of a vehicle, and the detection network represents the trained neural network.
On the basis of the intersection detection method provided by the foregoing embodiment, the embodiment of the present disclosure further provides an intelligent Driving method, which may be applied to an intelligent Driving device, where the intelligent Driving device includes, but is not limited to, an automatic Driving vehicle, a vehicle equipped with an Advanced Driving Assistance System (ADAS), a robot equipped with ADAS, and the like.
Fig. 4 is a flowchart of an intelligent driving method according to an embodiment of the present disclosure, and as shown in fig. 4, the flowchart may include:
step 401: and acquiring a road image.
The implementation of this step has already been described in the foregoing description, and is not described herein again.
Step 402: according to any one of the above intersection detection methods, intersection detection is performed on the road image.
From the above description, it can be seen that the result of performing intersection detection on the road image may be a detection frame of an intersection on the road shown in the road image, or a determination that no intersection exists on the road shown in the road image; when a detection frame of an intersection is determined, the distance between the equipment for acquiring the road image and the intersection can further be determined.
Step 403: and carrying out driving control on the intelligent driving equipment according to the distance between the intelligent driving equipment for acquiring the road image and the intersection.
In practical applications, the intelligent driving device may be controlled to drive directly (e.g., an autonomous vehicle or a robot), or an instruction may be sent to the driver to control the driving of the vehicle (e.g., a vehicle equipped with ADAS).
Therefore, based on the intersection detection method, the distance between the intelligent driving equipment for acquiring the road image and the intersection can be obtained, and the method is favorable for providing help for vehicle driving and improving the safety of vehicle driving according to the distance between the intelligent driving equipment for acquiring the road image and the intersection.
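One way the detected distance can support a driving decision is a simple kinematic check, sketched below. The reaction time, deceleration, and speed values are assumptions for illustration, not parameters from the disclosure.

```python
def can_stop_before_intersection(distance_m, speed_mps,
                                 reaction_time_s=1.0, deceleration_mps2=6.0):
    """Rough kinematic check: the vehicle travels speed * reaction_time
    during the reaction phase, then decelerates uniformly, so the stopping
    distance is v * t + v^2 / (2 * a). Returns True if the vehicle can stop
    before reaching the intersection.
    """
    stopping_distance = (speed_mps * reaction_time_s
                         + speed_mps ** 2 / (2.0 * deceleration_mps2))
    return stopping_distance <= distance_m

# 50 m to the intersection at 15 m/s: stopping distance is 33.75 m
decision = can_stop_before_intersection(distance_m=50.0, speed_mps=15.0)
```

This illustrates why detecting the intersection while still far away matters: at 15 m/s the sketch needs roughly 34 m to stop, so a detection made only 20 m out would leave insufficient braking distance.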
It will be understood by those skilled in the art that, in the method of the present invention, the order in which the steps are written does not imply a strict order of execution or any limitation on the implementation; the specific order of execution of the steps should be determined by their functions and possible inherent logic.
On the basis of the intersection detection method provided by the foregoing embodiment, the embodiment of the present disclosure provides an intersection detection device.
Fig. 5 is a schematic structural diagram of a composition of an intersection detecting device according to an embodiment of the present disclosure, and as shown in fig. 5, the device includes: a first extraction module 501, a detection module 502 and a first determination module 503, wherein,
the first extraction module 501 is configured to perform feature extraction on a road image to obtain a feature map of the road image;
a detection module 502, configured to determine a detection frame of an intersection on a road shown in the road image according to the feature map of the road image; the detection frame of the intersection represents the area of the intersection in the road image, and the lower frame of the detection frame of the intersection is on the road surface of the road;
the first determining module 503 is configured to determine a distance between the device for acquiring the road image and the intersection according to a lower frame of the detection frame of the intersection.
Optionally, the detecting module 502 is further configured to determine that the road shown in the road image does not have an intersection according to the feature map of the road image.
Optionally, the first determining module 503 is configured to determine the position of the lower frame of the detection frame of the intersection on the road according to the position of the lower frame of the detection frame of the intersection in the road image and a coordinate conversion relationship between a plane of the road image and a road surface of the road; and obtaining the distance between the equipment for acquiring the road image and the intersection according to the position of the lower frame of the detection frame of the intersection on the road and the position of the equipment for acquiring the road image on the road.
Optionally, the device is implemented based on a neural network, the neural network is obtained by training sample images and labeling results of the sample images, the labeling results of the sample images include labeling frames of intersections on roads shown in the positive sample images, the labeling frames represent positions of the intersections in the positive sample images, and lower frames of the labeling frames are on the road surfaces of the roads shown in the positive sample images.
In practical applications, the first extracting module 501, the detecting module 502, and the first determining module 503 may be implemented by a processor in an electronic device, where the processor may be at least one of an ASIC, a DSP, a DSPD, a PLD, an FPGA, a CPU, a controller, a microcontroller, and a microprocessor.
Fig. 6 is a schematic structural diagram of a neural network training apparatus according to an embodiment of the present disclosure, as shown in fig. 6, the apparatus may include a second extraction module 601, a second determination module 602, and an adjustment module 603, wherein,
a second extraction module 601, configured to perform feature extraction on a sample image to obtain a feature map of the sample image;
a second determining module 602, configured to determine a detection result of the sample image according to the feature map of the sample image;
an adjusting module 603, configured to adjust a network parameter value of the neural network according to the labeling result and the detection result of the sample image;
when the sample image is a positive sample image, the labeling result of the sample image is a labeling frame of the intersection on the road shown by the positive sample image, the labeling frame represents the position of the intersection in the positive sample image, and the lower frame of the labeling frame of the intersection on the road shown by the positive sample image is on the road surface of the road shown by the positive sample image.
Optionally, the positive sample image includes a stop line of an intersection of the road, and a lower border of a labeling frame of the intersection on the road shown in the positive sample image is aligned with the stop line.
Optionally, the difference between the heights of the annotation frames in the multiple positive sample images containing the same intersection is within a preset range.
Optionally, when the sample image is a negative sample image, no intersection exists on the road in the negative sample image, and the labeling result of the sample image includes that no labeling frame exists in the negative sample image.
In practical applications, the second extracting module 601, the second determining module 602, and the adjusting module 603 may be implemented by a processor in an electronic device, where the processor may be at least one of an ASIC, a DSP, a DSPD, a PLD, an FPGA, a CPU, a controller, a microcontroller, and a microprocessor.
Fig. 7 is a schematic structural diagram of a smart driving device according to an embodiment of the present disclosure, and as shown in fig. 7, the smart driving device includes: an acquisition module 701 and a processing module 702, wherein,
an obtaining module 701, configured to obtain a road image;
a processing module 702, configured to perform intersection detection on the road image according to any one of the above intersection detection methods, and to perform driving control on the intelligent driving device according to the distance between the intelligent driving device for acquiring the road image and the intersection.
In practical applications, the obtaining module 701 and the processing module 702 may be implemented by a processor in the intelligent driving device, where the processor may be at least one of an ASIC, a DSP, a DSPD, a PLD, an FPGA, a CPU, a controller, a microcontroller, and a microprocessor.
In addition, each functional module in this embodiment may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware or a form of a software functional module.
Based on such understanding, the technical solution of the present embodiment, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) or a processor to execute all or part of the steps of the method of the present embodiment. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
Specifically, the computer program instructions corresponding to any one of the intersection detection method, the neural network training method, or the intelligent driving method in the present embodiment may be stored in a storage medium such as an optical disc, a hard disc, or a U-disc, and when the computer program instructions corresponding to any one of the intersection detection method, the neural network training method, or the intelligent driving method in the storage medium are read or executed by an electronic device, any one of the intersection detection method, the neural network training method, or the intelligent driving method in the foregoing embodiments is implemented.
Based on the same technical concept as the foregoing embodiments, referring to fig. 8, an electronic device 80 provided by an embodiment of the present disclosure is illustrated, which may include: a memory 81 and a processor 82; wherein,
the memory 81 for storing computer programs and data;
the processor 82 is configured to execute the computer program stored in the memory to implement any one of the intersection detection method, the neural network training method, or the intelligent driving method of the foregoing embodiments.
In practical applications, the memory 81 may be a volatile memory (RAM); or a non-volatile memory (non-volatile memory) such as a ROM, a flash memory (flash memory), a Hard Disk (Hard Disk Drive, HDD) or a Solid-State Drive (SSD); or a combination of the above types of memories and provides instructions and data to the processor 82.
The processor 82 may be at least one of ASIC, DSP, DSPD, PLD, FPGA, CPU, controller, microcontroller, and microprocessor. It is understood that the electronic devices for implementing the above-described processor functions may be other devices, and the embodiments of the present disclosure are not particularly limited.
In some embodiments, functions of or modules included in the apparatus provided in the embodiments of the present disclosure may be used to execute the method described in the above method embodiments, and specific implementation thereof may refer to the description of the above method embodiments, and for brevity, will not be described again here.
The foregoing description of the various embodiments is intended to highlight the differences between the embodiments; for the same or similar parts, reference may be made to each other, and they are not repeated herein for brevity.
The methods disclosed in the method embodiments provided by the present application can be combined arbitrarily without conflict to obtain new method embodiments.
Features disclosed in various product embodiments provided by the application can be combined arbitrarily to obtain new product embodiments without conflict.
The features disclosed in the various method or apparatus embodiments provided herein may be combined in any combination to arrive at new method or apparatus embodiments without conflict.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present invention.
While the present invention has been described with reference to the embodiments shown in the drawings, the present invention is not limited to the embodiments, which are illustrative and not restrictive, and it will be apparent to those skilled in the art that various changes and modifications can be made therein without departing from the spirit and scope of the invention as defined in the appended claims.

Claims (10)

1. An intersection detection method, characterized in that the method comprises:
carrying out feature extraction on a road image to obtain a feature map of the road image;
determining a detection frame of an intersection on a road shown by the road image according to the characteristic diagram of the road image; the detection frame of the intersection represents the area of the intersection in the road image, and the lower frame of the detection frame of the intersection is on the road surface of the road;
and determining the distance between the equipment for acquiring the road image and the intersection according to the lower frame of the detection frame of the intersection.
2. The method of claim 1, further comprising:
and determining that the road shown by the road image has no intersection according to the characteristic diagram of the road image.
3. The method of claim 1, wherein determining the distance between the device for acquiring the road image and the intersection according to the lower border of the detection frame of the intersection comprises:
determining the position of the lower frame of the detection frame of the intersection on the road according to the position of the lower frame of the detection frame of the intersection in the road image and the coordinate conversion relation between the plane of the road image and the road surface of the road;
and obtaining the distance between the equipment for acquiring the road image and the intersection according to the position of the lower frame of the detection frame of the intersection on the road and the position of the equipment for acquiring the road image on the road.
4. A neural network training method, comprising:
performing feature extraction on a sample image to obtain a feature map of the sample image;
determining a detection result of the sample image according to the feature map of the sample image; and
adjusting network parameter values of the neural network according to the labeling result and the detection result of the sample image;
wherein, when the sample image is a positive sample image, the labeling result of the sample image is a labeling frame of an intersection on the road shown in the positive sample image, the labeling frame represents the position of the intersection in the positive sample image, and the lower border of the labeling frame lies on the road surface of the road shown in the positive sample image.
5. An intelligent driving method, comprising:
acquiring a road image;
performing intersection detection on the road image using the method of any one of claims 1 to 3; and
controlling the intelligent driving device that acquired the road image to travel according to the distance between the device and the intersection.
6. An intersection detection apparatus, characterized by comprising a first extraction module, a detection module and a first determination module, wherein:
the first extraction module is configured to perform feature extraction on a road image to obtain a feature map of the road image;
the detection module is configured to determine a detection frame of an intersection on the road shown in the road image according to the feature map of the road image, wherein the detection frame represents the region of the intersection in the road image and the lower border of the detection frame lies on the road surface of the road; and
the first determination module is configured to determine the distance between the device that acquired the road image and the intersection according to the lower border of the detection frame of the intersection.
7. A neural network training apparatus, comprising a second extraction module, a second determination module and an adjustment module, wherein:
the second extraction module is configured to perform feature extraction on a sample image to obtain a feature map of the sample image;
the second determination module is configured to determine a detection result of the sample image according to the feature map of the sample image; and
the adjustment module is configured to adjust network parameter values of the neural network according to the labeling result and the detection result of the sample image;
wherein, when the sample image is a positive sample image, the labeling result of the sample image is a labeling frame of an intersection on the road shown in the positive sample image, the labeling frame represents the position of the intersection in the positive sample image, and the lower border of the labeling frame lies on the road surface of the road shown in the positive sample image.
8. An intelligent driving device, characterized by comprising an acquisition module and a processing module, wherein:
the acquisition module is configured to acquire a road image; and
the processing module is configured to perform intersection detection on the road image according to the method of any one of claims 1 to 3, and to control the device to travel according to the distance between the intelligent driving device that acquired the road image and the intersection.
9. An electronic device, comprising a processor and a memory configured to store a computer program operable on the processor, wherein:
the processor is configured to run the computer program to execute the intersection detection method of any one of claims 1 to 3, the neural network training method of claim 4, or the intelligent driving method of claim 5.
10. A computer storage medium having a computer program stored thereon, wherein the computer program, when executed by a processor, implements the intersection detection method of any one of claims 1 to 3, the neural network training method of claim 4, or the intelligent driving method of claim 5.
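Claim 3 computes the distance by projecting the lower border of the detection frame from the image plane onto the road surface. A minimal sketch of that coordinate conversion, assuming a flat road and a known image-to-road homography; the matrix values below model a hypothetical camera 1 m above the road with a 1000 px focal length and principal point (320, 200), since the patent does not give a calibration:

```python
import math

# ASSUMED image-to-road homography for a camera 1 m above a flat road,
# focal length 1000 px, principal point (320, 200). Real values would
# come from camera calibration; the patent does not specify them.
H = [
    [1.0, 0.0, -320.0],   # lateral offset ~ (u - u0) * camera height
    [0.0, 0.0, 1000.0],   # forward distance ~ focal length * camera height
    [0.0, 1.0, -200.0],   # perspective divisor ~ (v - v_horizon)
]

def image_to_road(u, v):
    """Map pixel (u, v) to road-plane coordinates (x, y) in metres."""
    px = H[0][0] * u + H[0][1] * v + H[0][2]
    py = H[1][0] * u + H[1][1] * v + H[1][2]
    pw = H[2][0] * u + H[2][1] * v + H[2][2]
    return px / pw, py / pw

def distance_to_intersection(box):
    """box = (left, top, right, bottom) in pixels.

    Uses the midpoint of the lower border, which claim 1 requires to
    lie on the road surface, so the ground-plane projection is valid.
    """
    left, _, right, bottom = box
    x, y = image_to_road((left + right) / 2.0, bottom)
    # The camera is at the road-plane origin, so the distance is the norm.
    return math.hypot(x, y)

print(round(distance_to_intersection((280, 250, 360, 400)), 2))  # prints 5.0
```

With this assumed calibration the lower border at image row 400 lands 5 m ahead of the camera; as the border moves down the image (larger v), the computed distance shrinks, matching the intuition that closer intersections appear lower in the frame.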
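Claim 4 adjusts the network parameters by comparing the detection result with the labeling frame, but does not name the loss. As an illustration only, the sketch below scores a detected frame against a labeled frame with plain intersection-over-union; any box-regression loss could take its place:

```python
def box_iou(a, b):
    """Intersection-over-union of two (left, top, right, bottom) boxes."""
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))  # overlap width
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))  # overlap height
    inter = ix * iy

    def area(r):
        return (r[2] - r[0]) * (r[3] - r[1])

    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0

def box_loss(detected, labelled):
    """1 - IoU: zero when the detected frame matches the labeling frame."""
    return 1.0 - box_iou(detected, labelled)

print(round(box_loss((0, 0, 10, 10), (5, 0, 15, 10)), 4))  # prints 0.6667
```

In a real training loop this scalar (or a differentiable variant of it) would be backpropagated to adjust the network parameter values the claim refers to; negative sample images simply contribute no labeling frame.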
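Claim 5 controls travel from the detected distance but leaves the control policy open. A hypothetical speed schedule, purely for illustration (the thresholds and speeds are assumptions, not taken from the patent):

```python
def target_speed_kmh(distance_m, cruise=60.0):
    """Pick a target speed from the distance to the detected intersection.

    distance_m is None when no intersection is detected (claim 2's branch).
    """
    if distance_m is None:      # no intersection ahead: keep cruising
        return cruise
    if distance_m < 10.0:       # entering the intersection: crawl
        return 10.0
    if distance_m < 50.0:       # approaching: decelerate
        return 30.0
    return cruise               # intersection still far away

for d in (None, 120.0, 30.0, 5.0):
    print(d, target_speed_kmh(d))
```

The point of the sketch is only the data flow: the detection pipeline of claims 1 to 3 produces a distance, and the driving controller maps that distance to an actuation decision.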
CN201911083615.4A 2019-11-07 2019-11-07 Intersection detection, neural network training and intelligent driving method, device and equipment Pending CN112784639A (en)

Priority Applications (4)

Application Number Priority Date Filing Date Title
CN201911083615.4A CN112784639A (en) 2019-11-07 2019-11-07 Intersection detection, neural network training and intelligent driving method, device and equipment
PCT/CN2020/114095 WO2021088504A1 (en) 2019-11-07 2020-09-08 Road junction detection method and apparatus, neural network training method and apparatus, intelligent driving method and apparatus, and device
KR1020217016327A KR20210082518A (en) 2019-11-07 2020-09-08 Intersection detection, neural network training and smart driving methods, devices and devices
JP2021532862A JP2022512165A (en) 2019-11-07 2020-09-08 Intersection detection, neural network training and intelligent driving methods, equipment and devices

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911083615.4A CN112784639A (en) 2019-11-07 2019-11-07 Intersection detection, neural network training and intelligent driving method, device and equipment

Publications (1)

Publication Number Publication Date
CN112784639A true CN112784639A (en) 2021-05-11

Family

ID=75747994

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911083615.4A Pending CN112784639A (en) 2019-11-07 2019-11-07 Intersection detection, neural network training and intelligent driving method, device and equipment

Country Status (4)

Country Link
JP (1) JP2022512165A (en)
KR (1) KR20210082518A (en)
CN (1) CN112784639A (en)
WO (1) WO2021088504A1 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023107002A3 (en) * 2021-12-09 2023-08-31 Grabtaxi Holdings Pte. Ltd. System and method for adaptively predicting a road segment attribute based on a graph indicative of relationship between a road segment and a detection
GB2617866A (en) * 2022-04-21 2023-10-25 Continental Automotive Romania Srl Computer implemented method for training a decision tree model for detecting an intersection, computer implemented method detecting an intersection,

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113380035B (en) * 2021-06-16 2022-11-11 山东省交通规划设计院集团有限公司 Road intersection traffic volume analysis method and system

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100299000A1 (en) * 2007-05-31 2010-11-25 Aisin Aw Co., Ltd. Driving assistance apparatus
JP2014109875A (en) * 2012-11-30 2014-06-12 Fujitsu Ltd Intersection detecting method and intersection detecting system
CN107689157A (en) * 2017-08-30 2018-02-13 电子科技大学 Traffic intersection based on deep learning can passing road planing method
US10008110B1 (en) * 2017-02-16 2018-06-26 Mapbox, Inc. Detecting restrictions on turning paths in digital maps
CN108376235A (en) * 2018-01-15 2018-08-07 深圳市易成自动驾驶技术有限公司 Image detecting method, device and computer readable storage medium
CN108596116A (en) * 2018-04-27 2018-09-28 深圳市商汤科技有限公司 Distance measuring method, intelligent control method and device, electronic equipment and storage medium
CN108877267A (en) * 2018-08-06 2018-11-23 武汉理工大学 A kind of intersection detection method based on vehicle-mounted monocular camera
US20190333377A1 (en) * 2018-04-27 2019-10-31 Cubic Corporation Adaptively controlling traffic movements for driver safety

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2002193025A (en) * 2000-12-27 2002-07-10 Koito Mfg Co Ltd Vehicular head lamp device
US20140267415A1 (en) * 2013-03-12 2014-09-18 Xueming Tang Road marking illuminattion system and method
KR102628654B1 (en) * 2016-11-07 2024-01-24 삼성전자주식회사 Method and apparatus of indicating lane
CN108216229B (en) * 2017-09-08 2020-01-10 北京市商汤科技开发有限公司 Vehicle, road line detection and driving control method and device
WO2019094843A1 (en) * 2017-11-10 2019-05-16 Nvidia Corporation Systems and methods for safe and reliable autonomous vehicles
CN108230817B (en) * 2017-11-30 2020-09-22 商汤集团有限公司 Vehicle driving simulation method and apparatus, electronic device, system, program, and medium
CN110059554B (en) * 2019-03-13 2022-07-01 重庆邮电大学 Multi-branch target detection method based on traffic scene


Also Published As

Publication number Publication date
KR20210082518A (en) 2021-07-05
WO2021088504A1 (en) 2021-05-14
JP2022512165A (en) 2022-02-02

Similar Documents

Publication Publication Date Title
CN112528878B (en) Method and device for detecting lane line, terminal equipment and readable storage medium
Siriborvornratanakul An automatic road distress visual inspection system using an onboard in-car camera
CN109087510B (en) Traffic monitoring method and device
WO2022126377A1 (en) Traffic lane line detection method and apparatus, and terminal device and readable storage medium
KR101848019B1 (en) Method and Apparatus for Detecting Vehicle License Plate by Detecting Vehicle Area
WO2021088504A1 (en) Road junction detection method and apparatus, neural network training method and apparatus, intelligent driving method and apparatus, and device
CN107609510B (en) Positioning method and device for lower set of quayside container crane
Ding et al. Fast lane detection based on bird’s eye view and improved random sample consensus algorithm
CN111738995A (en) RGBD image-based target detection method and device and computer equipment
CN111091023B (en) Vehicle detection method and device and electronic equipment
CN112434657A (en) Drift carrier detection method, device, program, and computer-readable medium
CN111160132B (en) Method and device for determining lane where obstacle is located, electronic equipment and storage medium
KR20180098945A (en) Method and apparatus for measuring speed of vehicle by using fixed single camera
CN109903308B (en) Method and device for acquiring information
JP2018073275A (en) Image recognition device
CN112785595A (en) Target attribute detection, neural network training and intelligent driving method and device
CN113902047B (en) Image element matching method, device, equipment and storage medium
CN115565072A (en) Road garbage recognition and positioning method and device, electronic equipment and medium
CN115618602A (en) Lane-level scene simulation method and system
Chen et al. Integrated vehicle and lane detection with distance estimation
CN114898321A (en) Method, device, equipment, medium and system for detecting road travelable area
CN113869440A (en) Image processing method, apparatus, device, medium, and program product
CN114170267A (en) Target tracking method, device, equipment and computer readable storage medium
CN112215042A (en) Parking space limiter identification method and system and computer equipment
CN112101139B (en) Human shape detection method, device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination