CN106778548B - Method and apparatus for detecting obstacles - Google Patents

Method and apparatus for detecting obstacles Download PDF

Info

Publication number
CN106778548B
CN106778548B CN201611078768.6A
Authority
CN
China
Prior art keywords
street view map data
obstacles
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201611078768.6A
Other languages
Chinese (zh)
Other versions
CN106778548A (en)
Inventor
胡太群
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd filed Critical Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN201611078768.6A priority Critical patent/CN106778548B/en
Publication of CN106778548A publication Critical patent/CN106778548A/en
Application granted granted Critical
Publication of CN106778548B publication Critical patent/CN106778548B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/20 Scenes; Scene-specific elements in augmented reality scenes
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/20 Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F 16/29 Geographical information databases
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/50 Context or environment of the image
    • G06V 20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V 20/58 Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads

Abstract

Methods and apparatus for detecting obstacles are disclosed. One embodiment of the method comprises: acquiring an obstacle detection model and target street view map data; and detecting and marking obstacles in the target street view map data by using the obstacle detection model. The obstacle detection model is obtained through the following steps: acquiring street view map data in which obstacles have been marked in advance; processing the street view map data based on the marked obstacles; selecting, from the processed street view map data, part of the street view map data with marked obstacles as training data; and training a preset obstacle detection model with the training data to obtain the obstacle detection model. This embodiment makes full use of the street view map data in which obstacles have been marked and enables fast and accurate detection of obstacles in street view map data.

Description

Method and apparatus for detecting obstacles
Technical Field
The present application relates to the field of computer technologies, and in particular, to the field of object detection, and more particularly, to a method and an apparatus for detecting an obstacle.
Background
Automatic detection of obstacles is of great importance to visually impaired pedestrians and to autonomous vehicles. Street view map data generally includes image data or point cloud data containing information about obstacles, such as pedestrians, vehicles, and buildings, in the traveling environment of a vehicle or pedestrian.
Existing obstacle detection methods first identify and label collected street view map data to determine the positions of the obstacles it contains. When a machine learning algorithm is used to identify and label the street view map data, the algorithm must first be trained, and training requires a large amount of data. The cost of collecting and labeling street view map data is therefore high, and the value of the collected data is not fully exploited.
Disclosure of Invention
The present application aims to provide a method and apparatus for detecting obstacles to solve the technical problems mentioned in the background section above.
In a first aspect, the present application provides a method for detecting an obstacle, the method comprising: acquiring an obstacle detection model and target street view map data; and detecting and marking obstacles in the target street view map data by using the obstacle detection model; wherein the obstacle detection model is obtained by the following steps: acquiring street view map data in which obstacles have been marked in advance, the street view map data including obstacle information in a driving environment; processing the street view map data based on the marked obstacles; selecting, from the processed street view map data, part of the street view map data with marked obstacles as training data; and training a preset obstacle detection model with the training data to obtain the obstacle detection model.
In some embodiments, the street view map data further includes road information in the driving environment; and the method further comprises: importing the street view map data into a pre-trained road identification model, and identifying roads in the street view map data; and marking the identified roads and determining the road areas in the street view map data.
In some embodiments, processing the street view map data based on the marked obstacles comprises: selecting at least one obstacle from the marked obstacles; adding the selected obstacle to the road area; and deleting the road information in the part of the road area covered by the added obstacle.
In some embodiments, processing the street view map data based on the marked obstacles comprises: selecting at least one obstacle from the marked obstacles; moving the selected obstacle to the road area; and deleting the road information in the part of the road area covered by the moved obstacle.
In some embodiments, the street view map data comprises image data, and processing the street view map data based on the marked obstacles includes: detecting whether the marked obstacles include a traffic light; and, in response to a traffic light being included in the marked obstacles, changing color data of the traffic light.
In some embodiments, the method further comprises at least one of: deleting the processed street view map data; and outputting the target street view map data with the marked obstacles.
In a second aspect, the present application provides an apparatus for detecting an obstacle, the apparatus comprising: an acquisition unit configured to acquire an obstacle detection model and target street view map data; and a detection unit configured to detect and mark obstacles in the target street view map data by using the obstacle detection model; wherein the obstacle detection model is obtained by a training unit, the training unit comprising: an acquisition module configured to acquire street view map data in which obstacles have been marked in advance, the street view map data including obstacle information in a driving environment; a processing module configured to process the street view map data based on the marked obstacles; a selection module configured to select, from the processed street view map data, part of the street view map data with marked obstacles as training data; and a training module configured to train a preset obstacle detection model with the training data to obtain the obstacle detection model.
In some embodiments, the street view map data further includes road information in the driving environment, and the apparatus further comprises a road identification unit comprising: an identification module configured to import the street view map data into a pre-trained road identification model and identify roads in the street view map data; and a marking module configured to mark the identified roads and determine the road area in the street view map data.
In some embodiments, the processing module is further configured to: select at least one obstacle from the marked obstacles; add the selected obstacle to the road area; and delete the road information in the part of the road area covered by the added obstacle.
In some embodiments, the processing module is further configured to: select at least one obstacle from the marked obstacles; move the selected obstacle to the road area; and delete the road information in the part of the road area covered by the moved obstacle.
In some embodiments, the street view map data comprises image data, and the processing module is further configured to: detect whether the marked obstacles include a traffic light; and, in response to a traffic light being included in the marked obstacles, change color data of the traffic light.
In some embodiments, the apparatus further comprises at least one of: a deletion unit configured to delete the processed street view map data; and an output unit configured to output the target street view map data with the marked obstacles.
According to the method and apparatus for detecting an obstacle provided by the present application, street view map data in which obstacles have been marked in advance is processed, part of the processed street view map data with marked obstacles is selected to train a preset obstacle detection model to obtain the obstacle detection model, and the obstacle detection model is used to detect and mark obstacles in the target street view map data. On the one hand, the street view map data with marked obstacles is fully utilized; on the other hand, obstacles in street view map data are detected quickly and accurately.
Drawings
Other features, objects and advantages of the present application will become more apparent upon reading of the following detailed description of non-limiting embodiments thereof, made with reference to the accompanying drawings in which:
FIG. 1 is a flow diagram of one embodiment of a method for detecting obstacles according to the present application;
FIG. 2 is a flow diagram of one embodiment of training the obstacle detection model used in the method for detecting obstacles according to the present application;
FIG. 3 is a schematic diagram of one application scenario of a method for detecting obstacles according to the present application;
FIG. 4 is a schematic block diagram of one embodiment of an apparatus for detecting obstacles according to the present application;
FIG. 5 is a schematic structural diagram of a computer system suitable for implementing the terminal device or the server according to the embodiments of the present application.
Detailed Description
The present application will be described in further detail with reference to the following drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the relevant invention and not restrictive of the invention. It should be noted that, for convenience of description, only the portions related to the related invention are shown in the drawings.
It should be noted that the embodiments and the features of the embodiments in the present application may be combined with each other without conflict. The present application will be described in detail below with reference to the accompanying drawings in conjunction with the embodiments.
Referring to fig. 1, a flow 100 of one embodiment of a method for detecting an obstacle according to the present application is shown. The method for detecting an obstacle of the present embodiment includes the steps of:
Step 101, acquiring an obstacle detection model and target street view map data.
In this embodiment, the obstacle detection model may be a model constructed with any of various algorithms capable of detecting obstacles contained in image data or point cloud data, such as a Convolutional Neural Network (CNN) or a random forest algorithm. An obstacle may be any of various objects that a pedestrian or a vehicle encounters while traveling, such as a pedestrian, a vehicle, or a building. The target street view map data may be any image data or point cloud data in which obstacles are to be detected; the point cloud data may be acquired by a lidar, and the image data may be acquired by an image acquisition device. The electronic device (e.g., a terminal device) on which the method for detecting an obstacle of this embodiment runs may acquire a locally stored obstacle detection model and target street view map data, or may acquire a manually imported obstacle detection model and target street view map data through a wired or wireless connection.
It should be noted that the wireless connection may include, but is not limited to, a 3G/4G connection, a WiFi connection, a Bluetooth connection, a WiMAX connection, a ZigBee connection, a UWB (Ultra WideBand) connection, and other wireless connections now known or developed in the future.
Step 102, detecting and marking the obstacles in the target street view map data by using the obstacle detection model.
By inputting the target street view map data into the obstacle detection model, the obstacles in the target street view map data can be detected and the detected obstacles marked. The obstacle detection model is obtained by training through the following steps shown in fig. 2:
Step 201, acquiring street view map data in which obstacles have been marked in advance.
In this embodiment, the electronic device (e.g., a terminal device) on which the method for detecting an obstacle of this embodiment runs may acquire locally stored street view map data in which obstacles have been marked in advance, or may acquire street view map data with manually marked obstacles through a wired or wireless connection. Marking here refers to annotating, in the map data, information about obstacles (e.g., pedestrians and vehicles) and road signs (e.g., traffic lights and speed limit signs); the annotation may use the minimum bounding rectangle or the outline of the object to be marked. It is understood that the street view map data includes obstacle information in the traveling environment of a pedestrian or vehicle, such as the size, shape, and color of each obstacle.
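The disclosure does not prescribe a storage format for these annotations. Purely as an illustrative sketch (the names StreetViewSample and ObstacleBox are hypothetical, not from the patent), each labeled sample could pair an image, or a path to point cloud data, with the minimum bounding rectangles of its marked obstacles; later sketches in this description reuse these structures:

from dataclasses import dataclass, field
from typing import List

@dataclass
class ObstacleBox:
    """Minimum bounding rectangle of one marked obstacle, in pixel coordinates."""
    label: str   # e.g. "pedestrian", "vehicle", "traffic_light"
    x_min: int
    y_min: int
    x_max: int
    y_max: int

@dataclass
class StreetViewSample:
    """One piece of street view map data together with its pre-marked obstacles."""
    image_path: str                                  # or a path to point cloud data
    obstacles: List[ObstacleBox] = field(default_factory=list)

# Example: a frame containing a marked pedestrian and a marked traffic light.
sample = StreetViewSample(
    image_path="frames/000001.jpg",
    obstacles=[
        ObstacleBox("pedestrian", 120, 200, 180, 400),
        ObstacleBox("traffic_light", 600, 50, 640, 140),
    ],
)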
Step 202, processing street view map data based on the marked obstacles.
In this embodiment, the terminal device may perform various kinds of processing on the street view map data based on the marked obstacles, for example, changing the positions, the number, or the colors of the marked obstacles. It will be appreciated that the processed street view map data still includes the marked obstacles.
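A minimal sketch of step 202 under the assumptions above; process_street_view_data is a hypothetical name, and each transform is a callable that maps one labeled sample to a new labeled sample whose marked obstacles stay consistent with the change (position, number, or color). Concrete transforms are sketched further below and would be wrapped, e.g. with functools.partial, to match this one-argument signature:

import random

def process_street_view_data(samples, transforms):
    """Step 202: derive new labeled samples from the pre-marked ones by applying
    one of the available transforms to each sample."""
    processed = []
    for sample in samples:
        transform = random.choice(transforms)   # e.g. add, move, or recolor an obstacle
        processed.append(transform(sample))
    return processed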
Step 203, selecting partial street view map data with marked obstacles from the processed street view map data as training data.
Processing the street view map data with marked obstacles is equivalent to mining the information contained in the original street view map data: new street view map data different from the original data is obtained, and part of this new street view map data with marked obstacles is selected as training data.
Step 204, training a preset obstacle detection model by using the training data to obtain the obstacle detection model.
Training the preset obstacle detection model with the training data yields the obstacle detection model mentioned in step 101. The preset obstacle detection model may be any of various algorithms whose parameters have not yet been tuned; for example, when the obstacle detection model is a convolutional neural network, the preset obstacle detection model may be an initial, untrained convolutional neural network newly created by an initialization function.
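The patent only requires that a preset detection model, such as an untrained CNN, be trained on the selected labeled data. The following sketch is one hypothetical concretization using a torchvision Faster R-CNN and the sample structures above; LABEL_TO_ID, sample_to_target, and train_obstacle_detector are illustrative names, and the sketch also folds in the data selection of step 203:

import random
import torch
import torchvision

LABEL_TO_ID = {"pedestrian": 1, "vehicle": 2, "traffic_light": 3}   # 0 is background

def sample_to_target(sample):
    """Convert one labeled sample into the (image, target) pair expected by
    torchvision detection models."""
    image = torchvision.io.read_image(sample.image_path).float() / 255.0
    boxes = torch.tensor([[b.x_min, b.y_min, b.x_max, b.y_max] for b in sample.obstacles],
                         dtype=torch.float32)
    labels = torch.tensor([LABEL_TO_ID[b.label] for b in sample.obstacles], dtype=torch.int64)
    return image, {"boxes": boxes, "labels": labels}

def train_obstacle_detector(processed_samples, num_epochs=10, subset_ratio=0.8):
    # Step 203: select part of the processed, labeled street view map data as training data.
    train_samples = random.sample(processed_samples, int(len(processed_samples) * subset_ratio))
    # Step 204: train a preset (untrained) obstacle detection model.
    model = torchvision.models.detection.fasterrcnn_resnet50_fpn(
        weights=None, num_classes=len(LABEL_TO_ID) + 1)
    optimizer = torch.optim.SGD(model.parameters(), lr=0.005, momentum=0.9)
    model.train()
    for _ in range(num_epochs):
        for sample in train_samples:
            image, target = sample_to_target(sample)
            losses = model([image], [target])        # dict of detection losses
            loss = sum(losses.values())
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
    return model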
With continued reference to fig. 3, fig. 3 is a schematic diagram of an application scenario of the method for detecting an obstacle according to this embodiment. In the application scenario of fig. 3, an unmanned vehicle 301 travels on a road, and a camera 302 mounted on the unmanned vehicle 301 collects obstacle information in the traveling environment. After capturing an image containing obstacles, the camera 302 feeds the image into the obstacle detection model 303 to obtain an image 304 in which the obstacles are marked. It is to be understood that the obstacle detection model 303 may run on an on-board computer or on a server to which the vehicle is connected through a network.
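For the scenario of fig. 3, a minimal inference sketch under the same torchvision assumptions as the training sketch above; the 0.5 score threshold is an arbitrary illustrative choice:

import torch
import torchvision

@torch.no_grad()
def detect_and_mark(model, image_path, score_threshold=0.5):
    """Steps 101-102: run the obstacle detection model on target street view map
    data and return the image with the detected obstacles marked on it."""
    image = torchvision.io.read_image(image_path)          # uint8 tensor (3, H, W)
    model.eval()
    prediction = model([image.float() / 255.0])[0]
    keep = prediction["scores"] > score_threshold
    return torchvision.utils.draw_bounding_boxes(image, prediction["boxes"][keep], width=3)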
According to the method for detecting an obstacle provided by this embodiment, street view map data in which obstacles have been marked in advance is processed, part of the processed street view map data with marked obstacles is selected to train a preset obstacle detection model to obtain the obstacle detection model, and the obstacle detection model is used to detect and mark obstacles in the target street view map data. On the one hand, the street view map data with marked obstacles is fully utilized; on the other hand, obstacles in street view map data are detected quickly and accurately.
In some optional implementations of this embodiment, the street view map data may further include road information in a driving environment of a vehicle or a pedestrian, and the method may further include the following steps not shown in fig. 2:
importing the street view map data into a pre-trained road identification model and identifying roads in the street view map data; and marking the identified roads and determining the road area in the street view map data.
In this implementation, when a pedestrian or vehicle travels on a road, the street view map data may include road information in addition to obstacle information. The street view map data may be imported into a pre-trained road identification model to identify the roads in the street view map data. The road information may include the width, type (one-way or two-way), and name of each road. After a road is identified, it may be marked so as to determine the road area in the street view map data.
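The patent leaves the road identification model itself unspecified. One hypothetical sketch treats it as a torchvision-style semantic segmentation model and thresholds its output into a binary mask marking the road area; treating class index 1 as the road class is an assumption of this sketch:

import torch

@torch.no_grad()
def mark_road_area(road_model, image):
    """Return a boolean (H, W) mask marking the road area in a street view image.

    image is assumed to be a normalized float tensor of shape (3, H, W), and
    road_model a torchvision-style segmentation model returning {"out": logits}."""
    road_model.eval()
    logits = road_model(image.unsqueeze(0))["out"][0]   # (num_classes, H, W)
    return logits.argmax(dim=0) == 1                    # True where the road is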
In some optional implementations of the present embodiment, the processing of the street view map data may be implemented by the following steps not shown in fig. 2:
selecting at least one obstacle from the marked obstacles; adding the selected obstacle to the road area; and deleting the road information in the part of the road area covered by the added obstacle.
In this implementation, one or more obstacles may be selected from the marked obstacles and added to the road area. It is understood that when a pedestrian or vehicle travels on a road, obstacles on the road have the greatest influence on its travel; by adding the selected obstacle to the road area, the new street view map data contains an additional obstacle on the road relative to the original street view map data. Therefore, when the preset obstacle detection model is trained with the new street view map data, the trained model can detect the position of an obstacle on the road quickly and accurately. After the selected obstacle is added to the road area, the road information originally present in the area it covers may be deleted.
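A minimal sketch of this processing on image data, under the assumptions of the sample structures and road mask above; pasting the obstacle's pixel patch over a road location both adds the obstacle and overwrites, i.e. deletes, the road information it covers (add_obstacle_to_road is a hypothetical helper name):

import copy
import random
import torch

def add_obstacle_to_road(image, sample, road_mask):
    """Copy one marked obstacle into the road area and update the annotations.

    image: uint8 tensor (3, H, W); road_mask: bool tensor (H, W).
    Returns (new_image, new_sample) containing one extra marked obstacle on the road."""
    new_image, new_sample = image.clone(), copy.deepcopy(sample)
    box = random.choice(new_sample.obstacles)
    patch = new_image[:, box.y_min:box.y_max, box.x_min:box.x_max]
    h, w = patch.shape[1], patch.shape[2]

    # Pick a road pixel as the top-left corner for the pasted obstacle.
    ys, xs = torch.nonzero(road_mask, as_tuple=True)
    i = random.randrange(len(ys))
    y0 = min(int(ys[i]), image.shape[1] - h)
    x0 = min(int(xs[i]), image.shape[2] - w)

    # Pasting the patch overwrites (deletes) the road information it covers.
    new_image[:, y0:y0 + h, x0:x0 + w] = patch
    new_sample.obstacles.append(ObstacleBox(box.label, x0, y0, x0 + w, y0 + h))
    return new_image, new_sample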
In some optional implementations of the present embodiment, the processing of the street view map data may also be implemented by the following steps not shown in fig. 2:
selecting at least one obstacle from the marked obstacles; moving the selected obstacle to the road area; and deleting the road information in the part of the road area covered by the moved obstacle.
The difference from the above implementation is that here the processing focuses on changing the position of a marked obstacle: new street view map data is formed by moving an obstacle while keeping the number of obstacles in the street view map data unchanged. Part of the new street view map data with marked obstacles is then used to train the preset obstacle detection model, so that the trained model is more sensitive to obstacles on the road and identifies them more easily.
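Under the same assumptions, the moving variant can be sketched by blanking the obstacle's original pixels and pasting its patch onto the road, so the number of marked obstacles stays unchanged; zero-filling the vacated region is a crude illustrative stand-in for removing the obstacle from its original position:

import copy
import random
import torch

def move_obstacle_to_road(image, sample, road_mask):
    """Move one marked obstacle onto the road; the obstacle count is unchanged."""
    new_image, new_sample = image.clone(), copy.deepcopy(sample)
    box = random.choice(new_sample.obstacles)
    patch = new_image[:, box.y_min:box.y_max, box.x_min:box.x_max].clone()
    h, w = patch.shape[1], patch.shape[2]

    # Blank out the obstacle's original location ...
    new_image[:, box.y_min:box.y_max, box.x_min:box.x_max] = 0

    # ... and paste it onto the road, deleting the covered road information.
    ys, xs = torch.nonzero(road_mask, as_tuple=True)
    i = random.randrange(len(ys))
    y0 = min(int(ys[i]), image.shape[1] - h)
    x0 = min(int(xs[i]), image.shape[2] - w)
    new_image[:, y0:y0 + h, x0:x0 + w] = patch
    box.x_min, box.y_min, box.x_max, box.y_max = x0, y0, x0 + w, y0 + h
    return new_image, new_sample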
In some optional implementations of this embodiment, the street view map data may further include image data, and accordingly the processing of the street view map data may also be implemented by the following steps, which are not shown in fig. 2:
detecting whether the marked obstacles include a traffic light; and changing the color data of the traffic light in response to a traffic light being included in the marked obstacles.
In this implementation, when the street view map data is image data acquired by a monocular or binocular camera, processing first checks whether the marked obstacles include a traffic light; if so, the street view map data is processed by changing the color data of the traffic light. Thus, when little image data has been acquired, or the traffic lights in the acquired images show only one of red, green, or yellow, changing the color data of the traffic lights yields image data showing the other colors. The trained obstacle detection model therefore becomes more sensitive to traffic light colors and can quickly and accurately identify the signal a traffic light indicates.
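One hypothetical way to realize this color change on image data, again under the sample-structure assumptions above: rotate the hue of the region inside each marked traffic light's bounding box (the patent does not prescribe a specific color transform, so the hue shift is purely illustrative):

import copy
import random
import torchvision.transforms.functional as TF

def change_traffic_light_color(image, sample, hue_shift=None):
    """If a traffic light is among the marked obstacles, shift the hue of its region.

    image: uint8 tensor (3, H, W). A hue shift (e.g. turning a red light greenish)
    stands in for "changing the color data of the traffic light"."""
    new_image, new_sample = image.clone(), copy.deepcopy(sample)
    lights = [b for b in new_sample.obstacles if b.label == "traffic_light"]
    if not lights:
        return new_image, new_sample               # no traffic light marked
    if hue_shift is None:
        hue_shift = random.uniform(-0.5, 0.5)
    for box in lights:
        region = new_image[:, box.y_min:box.y_max, box.x_min:box.x_max]
        new_image[:, box.y_min:box.y_max, box.x_min:box.x_max] = TF.adjust_hue(region, hue_shift)
    return new_image, new_sample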
In some optional implementations of the present embodiment, the method for detecting an obstacle described above may further include the following steps not shown in fig. 1:
Deleting the processed street view map data.
In this implementation, after the street view map data has been processed and the preset obstacle detection model has been trained, the processed street view map data may be deleted in a timely manner, so that excessive processed street view map data does not occupy storage space and increase storage and maintenance costs.
In some optional implementations of the present embodiment, the method for detecting an obstacle described above may further include the following steps not shown in fig. 1:
Outputting the target street view map data with the marked obstacles.
In this implementation, after the obstacles contained in the target street view map data have been detected, the target street view map data with the marked obstacles may be output, so that staff can check the accuracy of the marking and better maintain the obstacle detection model.
With further reference to fig. 4, as an implementation of the methods shown in the above-mentioned figures, the present application provides an embodiment of an apparatus for detecting an obstacle, which corresponds to the method embodiment shown in fig. 1, and which is particularly applicable to various electronic devices.
As shown in fig. 4, the apparatus 400 for detecting an obstacle of the present embodiment includes an acquisition unit 401, a detection unit 402, and a training unit 403.
The acquisition unit 401 is configured to acquire an obstacle detection model and target street view map data.
The detection unit 402 is configured to detect and mark obstacles in the target street view map data by using the obstacle detection model acquired by the acquisition unit 401.
In this embodiment, the obstacle detection model is obtained by a training unit 403, and the training unit 403 includes:
an obtaining module 4031, configured to obtain street view map data in which an obstacle is marked in advance.
The street view map data includes obstacle information in a driving environment.
A processing module 4032, configured to process the street view map data based on the marked obstacle.
A selecting module 4033, configured to select, from the processed street view map data, a part of the street view map data with the marked obstacle as training data.
A training module 4034, configured to train a preset obstacle detection model by using the training data to obtain the obstacle detection model.
In the apparatus for detecting an obstacle provided by this embodiment, the processing module processes street view map data in which obstacles have been marked in advance, the selection module selects part of the processed street view map data with marked obstacles as training data, the training module trains a preset obstacle detection model with the training data to obtain the obstacle detection model, and the detection unit uses the obstacle detection model to detect and mark obstacles in the target street view map data acquired by the acquisition unit, thereby making full use of the street view map data with marked obstacles and enabling fast and accurate detection of obstacles in street view map data.
In some optional implementations of the embodiment, the street view map data further includes road information in a driving environment, and the apparatus 400 for detecting an obstacle may further include a road identification unit not shown in fig. 4. The road identification unit may include an identification module and a labeling module.
The identification module is configured to import the street view map data into a pre-trained road identification model and identify roads in the street view map data.
The marking module is configured to mark the identified roads and determine the road area in the street view map data.
In some optional implementations of this embodiment, the processing module 4032 may be further configured to:
select at least one obstacle from the marked obstacles; add the selected obstacle to the road area; and delete the road information in the part of the road area covered by the added obstacle.
In some optional implementations of this embodiment, the processing module 4032 may be further configured to:
select at least one obstacle from the marked obstacles; move the selected obstacle to the road area; and delete the road information in the part of the road area covered by the moved obstacle.
In some optional implementations of this embodiment, the street view map data includes image data, and the processing module 4032 may be further configured to:
detect whether the marked obstacles include a traffic light; and, in response to a traffic light being included in the marked obstacles, change the color data of the traffic light.
In some optional implementations of the present embodiment, the apparatus 400 for detecting an obstacle may further include a deletion unit and/or an output unit, which are not shown in fig. 4.
The deletion unit is configured to delete the processed street view map data.
The output unit is configured to output the target street view map data with the marked obstacles.
It should be understood that units 401 to 403 recited in the apparatus 400 for detecting an obstacle correspond to respective steps in the method described with reference to fig. 1 and 2, respectively. Thus, the operations and features described above for the method for detecting obstacles apply equally to the apparatus 400 and the units comprised therein, and are not described in detail here. The corresponding elements of the apparatus 400 may cooperate with elements in a server to implement aspects of embodiments of the present application.
Referring now to FIG. 5, a block diagram of a computer system 500 suitable for use in implementing a terminal device or server of an embodiment of the present application is shown.
As shown in fig. 5, the computer system 500 includes a Central Processing Unit (CPU)501 that can perform various appropriate actions and processes according to a program stored in a Read Only Memory (ROM)502 or a program loaded from a storage section 508 into a Random Access Memory (RAM) 503. In the RAM 503, various programs and data necessary for the operation of the system 500 are also stored. The CPU 501, ROM 502, and RAM 503 are connected to each other via a bus 504. An input/output (I/O) interface 505 is also connected to bus 504.
The following components are connected to the I/O interface 505: an input portion 506 including a keyboard, a mouse, and the like; an output portion 507 including a display such as a Cathode Ray Tube (CRT) or a Liquid Crystal Display (LCD), and a speaker; a storage portion 508 including a hard disk and the like; and a communication section 509 including a network interface card such as a LAN card or a modem. The communication section 509 performs communication processing via a network such as the Internet. A drive 510 is also connected to the I/O interface 505 as necessary. A removable medium 511 such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory is mounted on the drive 510 as necessary, so that a computer program read from it is installed into the storage portion 508 as needed.
In particular, according to an embodiment of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program tangibly embodied on a machine-readable medium, the computer program comprising program code for performing the method illustrated in the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network through the communication section 509, and/or installed from the removable medium 511. The computer program performs the above-described functions defined in the method of the present application when executed by the Central Processing Unit (CPU) 501.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in the embodiments of the present application may be implemented by software or hardware. The described units may also be provided in a processor, and may be described as: a processor includes an acquisition unit, a detection unit, and a training unit. Here, the names of the units do not constitute a limitation to the unit itself in some cases, and for example, the acquisition unit may also be described as a "unit that acquires the obstacle detection model and the target street view map data".
As another aspect, the present application also provides a non-volatile computer storage medium, which may be the non-volatile computer storage medium included in the apparatus of the above embodiments, or may be a non-volatile computer storage medium that exists separately and is not assembled into a terminal. The non-volatile computer storage medium stores one or more programs that, when executed by a device, cause the device to: acquire an obstacle detection model and target street view map data; and detect and mark obstacles in the target street view map data by using the obstacle detection model; wherein the obstacle detection model is obtained by the following steps: acquiring street view map data in which obstacles have been marked in advance, the street view map data including obstacle information in a driving environment; processing the street view map data based on the marked obstacles; selecting, from the processed street view map data, part of the street view map data with marked obstacles as training data; and training a preset obstacle detection model with the training data to obtain the obstacle detection model.
The above description is only a preferred embodiment of the application and is illustrative of the principles of the technology employed. It will be appreciated by a person skilled in the art that the scope of the invention as referred to in the present application is not limited to the embodiments with a specific combination of the above-mentioned features, but also covers other embodiments with any combination of the above-mentioned features or their equivalents without departing from the inventive concept. For example, the above features may be replaced with (but not limited to) features having similar functions disclosed in the present application.

Claims (10)

1. A method for detecting an obstacle, the method comprising:
acquiring an obstacle detection model and target street view map data;
detecting and marking the obstacles in the target street view map data by using the obstacle detection model;
wherein the obstacle detection model is obtained by the following steps:
obtaining street view map data in which obstacles have been marked in advance, wherein the street view map data comprises obstacle information and road information in a driving environment;
processing the street view map data based on the marked obstacles;
selecting partial street view map data with marked obstacles from the processed street view map data as training data;
training a preset obstacle detection model by using the training data to obtain the obstacle detection model;
the method further comprises the following steps:
importing the street view map data into a pre-trained road identification model, and identifying roads in the street view map data;
marking the identified roads and determining road areas in the street view map data;
selecting one or more obstacles from the marked obstacles, and adding the selected obstacles to the road area.
2. The method of claim 1, wherein the processing the street view map data based on the marked obstacles comprises:
selecting at least one obstacle from the marked obstacles;
adding the selected obstacle to the road area; and
deleting the road information in the part of the road area covered by the added obstacle.
3. The method of claim 1, wherein the processing the street view map data based on the marked obstacles comprises:
selecting at least one obstacle from the marked obstacles;
moving the selected obstacle to the road area; and
deleting the road information in the part of the road area covered by the moved obstacle.
4. The method of claim 1, wherein the street view map data comprises image data; and
the processing of the street view map data based on the marked obstacles comprises:
detecting whether the marked obstacles comprise a traffic light; and
changing color data of the traffic light in response to a traffic light being included in the marked obstacles.
5. The method according to one of claims 1-4, characterized in that the method further comprises at least one of the following:
deleting the processed street view map data;
outputting the target street view map data with the marked obstacles.
6. An apparatus for detecting an obstacle, the apparatus comprising:
an acquisition unit configured to acquire an obstacle detection model and target street view map data;
a detection unit configured to detect and mark the obstacles in the target street view map data by using the obstacle detection model;
wherein the obstacle detection model is obtained by a training unit, the training unit comprising:
an acquisition module configured to acquire street view map data in which obstacles have been marked in advance, wherein the street view map data comprises obstacle information and road information in a driving environment;
a processing module configured to process the street view map data based on the marked obstacles;
a selection module configured to select, from the processed street view map data, part of the street view map data with marked obstacles as training data; and
a training module configured to train a preset obstacle detection model by using the training data to obtain the obstacle detection model;
wherein the apparatus further comprises a road identification unit, the road identification unit comprising an identification module and a marking module;
the identification module is configured to import the street view map data into a pre-trained road identification model and identify roads in the street view map data;
the marking module is configured to mark the identified roads and determine a road area in the street view map data; and
the processing module is further configured to select one or more obstacles from the marked obstacles and add the selected obstacles to the road area.
7. The apparatus of claim 6, wherein the processing module is further configured to:
select at least one obstacle from the marked obstacles;
add the selected obstacle to the road area; and
delete the road information in the part of the road area covered by the added obstacle.
8. The apparatus of claim 6, wherein the processing module is further configured to:
select at least one obstacle from the marked obstacles;
move the selected obstacle to the road area; and
delete the road information in the part of the road area covered by the moved obstacle.
9. The apparatus of claim 6, wherein the street view map data comprises image data; and
the processing module is further configured to:
detect whether the marked obstacles comprise a traffic light; and
change color data of the traffic light in response to a traffic light being included in the marked obstacles.
10. The apparatus according to one of claims 6-9, characterized in that the apparatus further comprises at least one of:
a deletion unit configured to delete the processed street view map data; and
an output unit configured to output the target street view map data with the marked obstacles.
CN201611078768.6A 2016-11-30 2016-11-30 Method and apparatus for detecting obstacles Active CN106778548B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201611078768.6A CN106778548B (en) 2016-11-30 2016-11-30 Method and apparatus for detecting obstacles

Publications (2)

Publication Number Publication Date
CN106778548A CN106778548A (en) 2017-05-31
CN106778548B true CN106778548B (en) 2021-04-09

Family

ID=58898990

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201611078768.6A Active CN106778548B (en) 2016-11-30 2016-11-30 Method and apparatus for detecting obstacles

Country Status (1)

Country Link
CN (1) CN106778548B (en)

Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109145680B (en) * 2017-06-16 2022-05-27 阿波罗智能技术(北京)有限公司 Method, device and equipment for acquiring obstacle information and computer storage medium
US10816984B2 (en) * 2018-04-13 2020-10-27 Baidu Usa Llc Automatic data labelling for autonomous driving vehicles
CN108805882B (en) * 2018-05-29 2021-09-03 杭州视氪科技有限公司 Water surface and water pit detection method
CN111160360B (en) * 2018-11-07 2023-08-01 北京四维图新科技股份有限公司 Image recognition method, device and system
CN109544981B (en) * 2018-12-29 2021-11-09 北京百度网讯科技有限公司 Image processing method, apparatus, device and medium
CN112861573A (en) * 2019-11-27 2021-05-28 宇龙计算机通信科技(深圳)有限公司 Obstacle identification method and device, storage medium and intelligent lamp pole
WO2021134354A1 (en) * 2019-12-30 2021-07-08 深圳元戎启行科技有限公司 Path prediction method and apparatus, computer device, and storage medium
CN111142150A (en) * 2020-01-06 2020-05-12 中国石油化工股份有限公司 Automatic intelligent obstacle avoidance design method for seismic exploration
CN111325136B (en) * 2020-02-17 2024-03-19 北京小马慧行科技有限公司 Method and device for labeling object in intelligent vehicle and unmanned vehicle
CN111401133A (en) * 2020-02-19 2020-07-10 北京三快在线科技有限公司 Target data augmentation method, device, electronic device and readable storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103176185A (en) * 2011-12-26 2013-06-26 上海汽车集团股份有限公司 Method and system for detecting road barrier
CN103793684A (en) * 2012-10-30 2014-05-14 现代自动车株式会社 Apparatus and method for detecting obstacle for around view monitoring system
CN103954275A (en) * 2014-04-01 2014-07-30 西安交通大学 Lane line detection and GIS map information development-based vision navigation method
CN105740802A (en) * 2016-01-28 2016-07-06 北京中科慧眼科技有限公司 Disparity map-based obstacle detection method and device as well as automobile driving assistance system
CN105957145A (en) * 2016-04-29 2016-09-21 百度在线网络技术(北京)有限公司 Road barrier identification method and device

Also Published As

Publication number Publication date
CN106778548A (en) 2017-05-31

Similar Documents

Publication Publication Date Title
CN106778548B (en) Method and apparatus for detecting obstacles
CN107103272B (en) Distinguishing lane markings to be followed by a vehicle
US11023745B2 (en) System for automated lane marking
US10990815B2 (en) Image pre-processing in a lane marking determination system
US20190287400A1 (en) Method, device and system for parking space navigation
US9652980B2 (en) Enhanced clear path detection in the presence of traffic infrastructure indicator
JP6542196B2 (en) Automatic driving of route
CN113748315A (en) System for automatic lane marking
US11727799B2 (en) Automatically perceiving travel signals
US10650256B2 (en) Automatically perceiving travel signals
CN108227707B (en) Automatic driving method based on laser radar and end-to-end deep learning method
US20180299893A1 (en) Automatically perceiving travel signals
CN108460968A (en) A kind of method and device obtaining traffic information based on car networking
JP5522475B2 (en) Navigation device
US10515293B2 (en) Method, apparatus, and system for providing skip areas for machine learning
JP2016049891A (en) Vehicular irradiation control system and light irradiation control method
EP3612424A1 (en) Automatically perceiving travel signals
WO2020007589A1 (en) Training a deep convolutional neural network for individual routes
WO2020139356A1 (en) Image pre-processing in a lane marking determination system
US20180300566A1 (en) Automatically perceiving travel signals
CN112654892A (en) Method for creating a map of an environment of a vehicle
CN110765224A (en) Processing method of electronic map, vehicle vision repositioning method and vehicle-mounted equipment
US20210173095A1 (en) Method and apparatus for determining location by correcting global navigation satellite system based location and electronic device thereof
TWI451990B (en) System and method for lane localization and markings
JP2018025898A (en) Indication recognition device and vehicle control device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant