CN108073868A - Scene segmentation and obstacle detection method - Google Patents

Scene segmentation and obstacle detection method

Info

Publication number
CN108073868A
Authority
CN
China
Prior art keywords
feature
scene
scene segmentation
region of interest
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201611020157.6A
Other languages
Chinese (zh)
Inventor
刘康伟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
FAFA Automobile (China) Co., Ltd.
Original Assignee
Faraday (Beijing) Network Technology Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Faraday (Beijing) Network Technology Co., Ltd.
Priority to CN201611020157.6A priority Critical patent/CN108073868A/en
Publication of CN108073868A publication Critical patent/CN108073868A/en
Pending legal-status Critical Current

Links

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/58 Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/25 Fusion techniques
    • G06F18/254 Fusion techniques of classification results, e.g. of results related to same input data
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/25 Determination of region of interest [ROI] or a volume of interest [VOI]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/26 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V10/267 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion by performing operations on regions, e.g. growing, shrinking or watersheds
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00 Indexing scheme relating to image or video recognition or understanding
    • G06V2201/07 Target detection

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Multimedia (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Analysis (AREA)

Abstract

The present invention relates to the field of image processing and discloses a scene segmentation and obstacle detection method. The method includes: extracting a feature f of a scene image; classifying the feature f to obtain a feature f_s in which each scene element in the scene image has been classified; fusing the feature f and the feature f_s into a feature f_d; and extracting a region of interest from the feature f_d by region feature extraction to obtain a target classification result for the region of interest.

Description

Scene segmentation and obstacle detection method
Technical field
The present invention relates to the field of image processing, and in particular to a scene segmentation and obstacle detection method.
Background art
In autonomous driving tasks, environment perception techniques such as road scene segmentation and obstacle detection are key factors in guaranteeing driving safety. Both scene segmentation and obstacle detection have received wide attention in academia and industry. The main task of scene segmentation is to separate all objects appearing in the image obtained by a sensor (such as a camera or lidar), for example roads, vehicles, pedestrians, trees, guardrails and sky; the main task of obstacle detection is to detect the exact positions of all obstacles (such as other vehicles and pedestrians) around the vehicle.
In recent years, with the rise of deep learning algorithms, the performance of scene segmentation and obstacle detection algorithms has improved greatly. However, for both the scene segmentation task and the obstacle detection task, deep-learning-based algorithms require a large amount of computation time and occupy a large amount of memory because of the complexity of their models. In commercial applications, the computing resources carried on a vehicle are very limited. How to guarantee the real-time performance of scene segmentation and obstacle detection algorithms under such conditions, while reducing the memory resources the algorithms require, has therefore become a focus of research.
On the other hand, the inventor found in the course of implementing the present invention that the scene segmentation task and the obstacle detection task are strongly related (for example, the more general scene segmentation result can be used to assist obstacle detection), yet in existing algorithms scene segmentation and obstacle detection are usually studied independently.
Summary of the invention
The object of the present invention is to provide a scene segmentation and obstacle detection method and system. By sharing features and combining scene segmentation with obstacle detection, the method and system greatly reduce the memory resources consumed by the algorithm and increase its speed, so that scene segmentation and obstacle detection can run in real time.
To achieve this object, the present invention provides a scene segmentation and obstacle detection method. The method includes: extracting a feature f of a scene image, which can be realized using convolution and pooling operations; classifying the feature f to obtain a feature f_s in which each scene element in the scene image has been classified; fusing the feature f and the feature f_s into a feature f_d; and extracting a region of interest from the feature f_d by region feature extraction to obtain a target classification result for the region of interest.
Classifying the feature f to obtain the feature f_s, in which each scene element in the scene image has been classified, includes: performing scene segmentation according to the feature f and outputting a scene segmentation result f_seg in which each class of scene is marked; and further determining, by pixel-wise classification on the scene segmentation result f_seg, the specific category of each class of scene to obtain the feature f_s.
Further determining the specific category of each class of scene by pixel-wise classification on the scene segmentation result f_seg to obtain the feature f_s includes: performing classification processing on the scene segmentation result f_seg using a classification algorithm; and restoring the size of the classified scene segmentation result to the original size by an upsampling method, so as to obtain the feature f_s, a classification result corresponding to the original size.
Extracting the region of interest from the feature f_d by region feature extraction to obtain the target classification result for the region of interest includes: obtaining the coordinates of the region of interest in the feature f_d; obtaining a feature f_r of the region of interest by region feature extraction according to the feature f_d and the coordinates of the region of interest; and classifying the region of interest in the feature f_r to obtain the target classification result for the region of interest.
The method may further include: obtaining an offset of the position coordinates of the region of interest from the feature f_r by a convolution operation; and adjusting the coordinates of the region of interest according to the offset.
According to another aspect of the present invention, a scene segmentation and obstacle detection system is also provided. The system includes: a shared network for extracting a feature f of a scene image; a scene segmentation network for classifying the feature f to obtain a feature f_s in which each scene element in the scene image has been classified; a fusion network for fusing the feature f and the feature f_s into a feature f_d; and a target detection network for extracting a region of interest from the feature f_d by region feature extraction to obtain a target classification result for the region of interest.
The scene segmentation network includes: a scene segmentation module for performing scene segmentation according to the feature f and outputting a scene segmentation result f_seg in which each class of scene is marked; and a classification module for further determining, by pixel-wise classification on the scene segmentation result f_seg, the specific category of each class of scene to obtain the feature f_s.
The classification module includes: a classification processing module for performing classification processing on the scene segmentation result f_seg using a classification algorithm; and an upsampling module for restoring the size of the classified scene segmentation result to the original size by an upsampling method, so as to obtain the feature f_s, a classification result corresponding to the original size.
The target detection network includes: an RPN (Region Proposal Network) for obtaining the coordinates of the region of interest in the feature f_d; a region feature extraction network for obtaining a feature f_r of the region of interest by region feature extraction according to the feature f_d and the coordinates of the region of interest; and a target classification network for classifying the region of interest in the feature f_r to obtain the target classification result for the region of interest.
The system may further include a position regression network for obtaining an offset of the position coordinates of the region of interest from the feature f_r by a convolution operation, and adjusting the coordinates of the region of interest according to the offset.
Through the above technical solution, the scene segmentation and obstacle detection method and system share features: the feature extracted from the original scene image is shared between scene segmentation and target detection (i.e. obstacle detection), and target detection further makes use of the scene segmentation result. This saves a large amount of computing resources and increases the speed of both scene segmentation and target detection. Moreover, because the method and system combine scene segmentation and target detection, the scene segmentation result and the target detection result (i.e. the obstacle detection result) can be output simultaneously.
Other features and advantages of the present invention will be described in detail in the following detailed description.
Description of the drawings
The accompanying drawings are provided for a further understanding of the present invention and constitute a part of the specification. Together with the following detailed description, they serve to explain the present invention, but do not limit it. In the drawings:
Fig. 1 is a flow chart of a scene segmentation and obstacle detection method according to Embodiment 1 of the present invention;
Fig. 2 is a flow chart of a scene segmentation and obstacle detection method according to Embodiment 2 of the present invention;
Fig. 3 is a flow chart of a preferred implementation of the scene segmentation and obstacle detection method according to Embodiment 2 of the present invention;
Fig. 4 is a structural diagram of a scene segmentation and obstacle detection system according to Embodiment 3 of the present invention;
Fig. 5 is a structural diagram of a scene segmentation and obstacle detection system according to Embodiment 4 of the present invention;
Fig. 6 is a diagram of a preferred implementation of the scene segmentation and obstacle detection system according to Embodiment 4 of the present invention;
Fig. 7 is a schematic diagram of the operating principle of the scene segmentation and obstacle detection system according to Embodiment 4 of the present invention;
Fig. 8 is an example of the scene segmentation effect of the scene segmentation and obstacle detection system according to Embodiment 4 of the present invention; and
Fig. 9 is an example of the target detection effect of the scene segmentation and obstacle detection system according to Embodiment 4 of the present invention.
Description of reference numerals
100: shared network                 200: scene segmentation network
210: scene segmentation module      220: classification module
221: classification processing module   222: upsampling module
300: fusion network                 400: target detection network
410: RPN network                    420: region feature extraction network
430: target classification network  500: position regression network
Detailed description of the embodiments
The specific embodiments of the present invention are described in detail below with reference to the accompanying drawings. It should be understood that the specific embodiments described here are only intended to illustrate and explain the present invention, and are not intended to limit it.
Fig. 1 is a flow chart of a scene segmentation and obstacle detection method according to Embodiment 1 of the present invention. As shown in Fig. 1, the method may include the following steps:
In step S100, a feature f of the scene image is extracted. Through several convolution and pooling operations, preliminary feature extraction is performed on the original scene image to be processed, whose size is W_i × H_i, and a feature map f of dimension W × H × N is finally output as the feature representation of the image, where W_i and H_i are respectively the width and height of the original scene image, and W, H and N are respectively the width, height and number of channels of the feature representation.
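Purely as an illustration and not as part of the claimed method, a minimal sketch of such a shared feature extractor (written here in PyTorch; the layer counts and the channel number N = 64 are assumptions chosen only for the example) might look as follows:

```python
import torch
import torch.nn as nn

class SharedBackbone(nn.Module):
    """Minimal shared feature extractor: stacked convolution + pooling layers.

    Maps an input image of size Wi x Hi to a feature map f of size W x H x N
    (here N = 64; two 2x2 poolings give W = Wi / 4 and H = Hi / 4).
    """
    def __init__(self, n_channels: int = 64):
        super().__init__()
        self.layers = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.MaxPool2d(2),                              # Wi/2 x Hi/2
            nn.Conv2d(32, n_channels, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.MaxPool2d(2),                              # Wi/4 x Hi/4
        )

    def forward(self, image: torch.Tensor) -> torch.Tensor:
        return self.layers(image)                         # shape: (B, N, H, W)

# Example: a 640 x 480 RGB image yields a 64 x 120 x 160 feature map f.
f = SharedBackbone()(torch.randn(1, 3, 480, 640))
```

The two 2 × 2 poolings give W = W_i / 4 and H = H_i / 4, which is why the later sketches use a spatial scale of 0.25 when mapping image coordinates onto the feature map.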
In step S200, the feature f is classified, and a feature f_s in which each scene element in the scene image has been classified is obtained. For example, a scene segmentation output f_s of dimension W × H × n can finally be obtained by further image convolutions, where n is the number of scene classes to be segmented. This step outputs the scene segmentation result, which can be used, for example, for road identification or localization in autonomous driving.
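By way of illustration only (the patent does not prescribe a particular layer configuration for this step), the segmentation output can be produced by a convolution that maps the N shared channels to n class channels; N = 64 and n = 6 are assumed example values:

```python
import torch.nn as nn

# Maps the shared feature map (W x H x N) to per-pixel class scores (W x H x n).
segmentation_head = nn.Conv2d(64, 6, kernel_size=1)
f_s = segmentation_head(f)   # raw per-pixel scores; Embodiment 2 refines these by Softmax and upsampling
```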
In step S300, the feature f and the feature f_s are fused into a feature f_d. For example, the feature f and the scene segmentation output f_s can be input to a concat network layer, which concatenates f and f_s to obtain a fused feature f_d of dimension W × H × (N + n).
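Continuing the illustrative sketches above, and assuming channel-first PyTorch tensors, the fusion of step S300 is simply a channel-wise concatenation:

```python
import torch

# f:   (B, N, H, W) shared feature map
# f_s: (B, n, H, W) scene segmentation output
f_d = torch.cat([f, f_s], dim=1)   # fused feature of shape (B, N + n, H, W)
```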
In step S400, a region of interest is extracted from the feature f_d by region feature extraction, and a target classification result for the region of interest is obtained. Through this step, each target in the original scene image can finally be recognized and marked; for example, when applied to autonomous driving, obstacles such as cars and pedestrians ahead of the vehicle can be recognized and marked.
Fig. 2 is a flow chart of a scene segmentation and obstacle detection method according to Embodiment 2 of the present invention. As shown in Fig. 2, on the basis of Embodiment 1, step S200 may include the following steps:
In step S210, scene segmentation is performed on the feature f by convolution operations, and a scene segmentation result f_seg in which each class of scene is marked is output. Through the convolution operations, each target in the scene image can be roughly divided into different classes, and each class of scene can be marked. For more precise classification, this step may include multiple convolution layers.
In step S220, the specific category of each class of scene is further determined from the scene segmentation result f_seg by pixel-wise classification, and the feature f_s is obtained.
As shown in Fig. 3, step S220 may preferably include the following steps:
In step S221, classification processing is performed on the scene segmentation result f_seg using a classification algorithm. For example, Softmax classification can be used to assign each class marked in the scene segmentation result f_seg to its specific category. Suppose the value of f_seg at pixel (p, q) is the vector (f_1(p, q), f_2(p, q), ..., f_n(p, q)), where f_k(p, q) denotes the value of the k-th channel at point (p, q); then the probability that (p, q) belongs to the k-th scene class is

p_k(p, q) = exp(f_k(p, q)) / Σ_{j=1}^{n} exp(f_j(p, q)).

Accordingly, the scene class of (p, q) is argmax_k p_k(p, q). When this probability is close to 1, the value of f_seg at (p, q) can be assigned to the k-th scene class. The k-th scene class may be a road, a pedestrian, a building, a vehicle, etc. A scene segmentation result of size W × H is obtained through this step.
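By way of illustration only (the patent specifies Softmax classification but no particular implementation), the per-pixel Softmax and class assignment can be sketched as follows; the tensor shape (1, 6, 120, 160) is an assumed example:

```python
import torch
import torch.nn.functional as F

# f_seg: raw per-pixel class scores of shape (B, n, H, W); random placeholder here.
f_seg = torch.randn(1, 6, 120, 160)
probs = F.softmax(f_seg, dim=1)      # p_k(p, q) for every pixel and class
class_map = probs.argmax(dim=1)      # (B, H, W): index of the most probable scene class per pixel
```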
In step S222, the size of the classified scene segmentation result is restored to the original size by an upsampling method, so as to obtain the feature f_s, a classification result corresponding to the original size. When features are extracted from the original scene image and the extracted features are further classified, pooling operations are used in order to reduce the amount of computation, and the image after pooling is smaller than the original size. Therefore, to restore the size of the original image and make the scene segmentation result and the target detection result more faithful, an upsampling method based on quadratic interpolation can be used to restore the scene segmentation result to a segmentation result of size W_i × H_i, where W_i and H_i are the dimensions of the original scene image.
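A minimal sketch of the upsampling in step S222 follows; bilinear interpolation is used purely as a stand-in for the interpolation-based upsampling described above, and the original size 640 × 480 is an assumed example value:

```python
import torch
import torch.nn.functional as F

# probs: (B, n, H, W) per-pixel class probabilities at the reduced size W x H.
probs = torch.rand(1, 6, 120, 160)
# Restore to the original image size Wi x Hi so the segmentation map aligns with the input image.
f_s = F.interpolate(probs, size=(480, 640), mode="bilinear", align_corners=False)
```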
As shown in Fig. 3, in Embodiment 2, step S400 may include the following steps:
In step S410, the coordinates of the region of interest in the feature f_d are obtained. The region of interest may be, for example, a car, a pedestrian or a road in the scene image. This step can be accomplished, for example, by convolution operations.
In step S420, a feature f_r of the region of interest is obtained by region feature extraction according to the feature f_d and the coordinates of the region of interest.
In step S430, the region of interest in the feature f_r is classified, and the target classification result for the region of interest is obtained. The classification can, for example, be realized with the Softmax classification algorithm mentioned above, applied on a per-region basis.
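As an illustrative sketch only (the patent does not name a specific region feature extraction operator), steps S420 and S430 could be realised along the following lines, with torchvision's roi_pool standing in for the region feature extraction and a small fully connected head standing in for the target classification; the box coordinates, channel count and class set are assumed example values:

```python
import torch
import torch.nn as nn
from torchvision.ops import roi_pool

# f_d: fused feature map (B, N + n, H, W); random placeholder with N + n = 70 channels.
f_d = torch.randn(1, 70, 120, 160)

# One example region of interest: (batch_index, x1, y1, x2, y2) in original-image pixels;
# spatial_scale = 0.25 maps these coordinates onto the 1/4-resolution feature map.
rois = torch.tensor([[0.0, 100.0, 150.0, 220.0, 300.0]])
f_r = roi_pool(f_d, rois, output_size=(7, 7), spatial_scale=0.25)   # (K, N + n, 7, 7)

# A small fully connected head stands in for the target classification network.
classifier = nn.Sequential(
    nn.Flatten(),
    nn.Linear(70 * 7 * 7, 256), nn.ReLU(inplace=True),
    nn.Linear(256, 4),   # e.g. car / pedestrian / cyclist / background (assumed classes)
)
scores = classifier(f_r).softmax(dim=1)   # per-region target classification probabilities
```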
As shown in Fig. 3, in Embodiment 2, the scene segmentation and obstacle detection method may further include the following steps:
In step S510, an offset of the position coordinates of the region of interest is obtained from the feature f_r by a convolution operation.
In step S520, the coordinates of the region of interest are adjusted according to the offset, so that the marking result of the target detection is more accurate.
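A minimal sketch of this coordinate adjustment, assuming the common (dx, dy, dw, dh) offset parameterisation (the patent only states that an offset is predicted and applied to the coordinates):

```python
import torch

def adjust_roi(box: torch.Tensor, offset: torch.Tensor) -> torch.Tensor:
    """Apply a predicted (dx, dy, dw, dh) offset to a box given as (x1, y1, x2, y2)."""
    w, h = box[2] - box[0], box[3] - box[1]
    cx, cy = box[0] + 0.5 * w, box[1] + 0.5 * h
    cx, cy = cx + offset[0] * w, cy + offset[1] * h            # shift the centre
    w, h = w * torch.exp(offset[2]), h * torch.exp(offset[3])  # rescale width and height
    return torch.stack([cx - 0.5 * w, cy - 0.5 * h, cx + 0.5 * w, cy + 0.5 * h])

refined = adjust_roi(torch.tensor([100.0, 150.0, 220.0, 300.0]),
                     torch.tensor([0.05, -0.02, 0.10, 0.00]))
```

The centre is shifted proportionally to the box size and the width and height are rescaled exponentially, which keeps the adjusted box well-formed.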
Fig. 4 is a structural diagram of a scene segmentation and obstacle detection system according to Embodiment 3 of the present invention. As shown in Fig. 4, the system includes: a shared network 100 for extracting a feature f of a scene image; a scene segmentation network 200 for classifying the feature f to obtain a feature f_s in which each scene element in the scene image has been classified; a fusion network 300 for fusing the feature f and the feature f_s into a feature f_d; and a target detection network 400 for extracting a region of interest from the feature f_d by region feature extraction to obtain a target classification result for the region of interest. The shared network may be a VGG model, a LeNet model, an AlexNet model, a ResNet model, etc., and can be chosen according to specific development needs.
By means of the shared network 100, the features of the original image are extracted once; these features can be used simultaneously by the scene segmentation network 200 and the target detection network 400, and the output of the scene segmentation network 200 can also be used by the target detection network 400. The two functions of scene segmentation and target detection are thus combined, and a large amount of computing resources is saved.
Fig. 5 is a structural diagram of a scene segmentation and obstacle detection system according to Embodiment 4 of the present invention. As shown in Fig. 5, in Embodiment 4 the scene segmentation network 200 may include: a scene segmentation module 210 for performing scene segmentation according to the feature f and outputting a scene segmentation result f_seg in which each class of scene is marked; and a classification module 220 for further determining, by pixel-wise classification on the scene segmentation result f_seg, the specific category of each class of scene to obtain the feature f_s.
Further, as shown in Fig. 6, the classification module may preferably include: a classification processing module 221 for performing classification processing on the scene segmentation result f_seg using a classification algorithm; and an upsampling module 222 for restoring the size of the classified scene segmentation result to the original size by an upsampling method, so as to obtain the feature f_s, a classification result corresponding to the original size. The specific classification performed by the classification processing module 221 on the scene segmentation result f_seg can be realized with the Softmax algorithm mentioned above.
As shown in Fig. 5, in Embodiment 4 the target detection network may preferably include: an RPN network 410 for obtaining the coordinates of the region of interest in the feature f_d; a region feature extraction network 420 for obtaining a feature f_r of the region of interest by region feature extraction according to the feature f_d and the coordinates of the region of interest; and a target classification network 430 for classifying the region of interest in the feature f_r to obtain the target classification result for the region of interest.
In Embodiment 4, the scene segmentation and obstacle detection system further includes a position regression network 500 for obtaining an offset of the position coordinates of the region of interest and adjusting the coordinates of the region of interest according to the offset.
Fig. 7 is a schematic diagram of the operating principle of the scene segmentation and obstacle detection system according to Embodiment 4 of the present invention. Referring to Fig. 7, the original scene image is input into the shared network, which outputs an extracted feature of size W × H × N. After being processed by the scene segmentation network, this feature yields a segmentation result of size W × H × n in which the original image has been specifically classified, and the scene segmentation image can be output at the same time. The target detection network takes the feature of size W × H × (N + n), obtained by the fusion network from the feature extracted by the shared network and the scene segmentation result output by the scene segmentation network, further detects each target (i.e. obstacle) in the original image, and finally outputs the classification result.
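Tying the earlier sketches together, and again only as one illustrative reading of Fig. 7 rather than the patent's actual implementation, the data flow can be expressed as a single forward pass; backbone, seg_head, rpn and classifier are assumed callables:

```python
import torch
import torch.nn.functional as F
from torchvision.ops import roi_pool

def forward(image, backbone, seg_head, rpn, classifier):
    """Illustrative end-to-end pass: shared feature -> segmentation -> fusion -> detection.

    rpn is assumed to return region-of-interest boxes of shape (K, 5), with the
    batch index first and coordinates given on the original image scale.
    """
    f = backbone(image)                              # shared feature, (B, N, H, W)
    f_seg = seg_head(f)                              # per-pixel class scores, (B, n, H, W)
    seg_map = F.softmax(f_seg, dim=1).argmax(dim=1)  # scene segmentation output
    f_d = torch.cat([f, f_seg], dim=1)               # fused feature, (B, N + n, H, W)
    rois = rpn(f_d)                                  # region-of-interest coordinates
    f_r = roi_pool(f_d, rois, output_size=(7, 7), spatial_scale=0.25)
    scores = classifier(f_r)                         # per-region target classification
    return seg_map, rois, scores
```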
Fig. 8 and Fig. 9 are examples of detection results obtained when the scene segmentation and obstacle detection system is applied to autonomous driving.
Fig. 8 shows an example of an obstacle detection result. In Fig. 8, the system of the present invention can accurately recognize obstacles such as vehicles and pedestrians ahead of the vehicle, so that the vehicle can be controlled to decelerate or take evasive action during autonomous driving and traffic accidents can be avoided. The position regression network 500 is used as follows: for example, when the box marking the vehicle ahead deviates downward and does not completely cover the region where the vehicle marked as an obstacle is located, the position regression network identifies the deviation between the marked position and the coordinates of the vehicle's actual position, and further uses this deviation to adjust the marked position, so that the marking of the obstacle is more accurate.
Fig. 9 shows an example of the scene segmentation result output by the scene segmentation network. In Fig. 9, the different scene classes in the original image, such as cars, roads and buildings, are separated from one another and are typically marked differently. In autonomous driving, for example, the road ahead can be identified from the scene segmentation result, which assists in locating the driving route. During driving, some obstacles (i.e. targets), such as buildings and roads, occupy a large volume of space and are not easily marked by target detection; these obstacles that are difficult to mark can instead be identified from the image of the scene segmentation result.
Those skilled in the art will understand that all or part of the steps of the methods of the above embodiments can be completed by a program instructing related hardware. The program is stored in a storage medium and includes a number of instructions that cause a device (which may be a microcontroller, a chip, etc.) or a processor to perform all or part of the steps of the methods of the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk or an optical disc.
The preferred embodiments of the present invention are described in detail above with reference to the accompanying drawings. However, the present invention is not limited to the specific details of the above embodiments; within the scope of the technical concept of the present invention, various simple modifications can be made to the technical solution of the present invention, and these simple modifications all fall within the protection scope of the present invention.
It should further be noted that the specific technical features described in the above specific embodiments can, where not contradictory, be combined in any suitable manner. To avoid unnecessary repetition, the present invention does not separately describe the various possible combinations.
In addition, the various embodiments of the present invention can also be combined arbitrarily, and such combinations should likewise be regarded as part of the disclosure of the present invention as long as they do not depart from the idea of the present invention.

Claims (5)

1. A scene segmentation and obstacle detection method, characterized in that the method includes:
extracting a feature f of a scene image;
classifying the feature f to obtain a feature f_s in which each scene element in the scene image has been classified;
fusing the feature f and the feature f_s into a feature f_d; and
extracting a region of interest from the feature f_d by region feature extraction to obtain a target classification result for the region of interest.
2. The scene segmentation and obstacle detection method according to claim 1, characterized in that classifying the feature f to obtain the feature f_s, in which each scene element in the scene image has been classified, includes:
performing scene segmentation according to the feature f and outputting a scene segmentation result f_seg in which each class of scene is marked; and
further determining, by pixel-wise classification on the scene segmentation result f_seg, the specific category of each class of scene to obtain the feature f_s.
3. The scene segmentation and obstacle detection method according to claim 2, characterized in that further determining the specific category of each class of scene by pixel-wise classification on the scene segmentation result f_seg to obtain the feature f_s includes:
performing classification processing on the scene segmentation result f_seg using a classification algorithm; and
restoring the size of the classified scene segmentation result to the original size by an upsampling method, so as to obtain the feature f_s, a classification result corresponding to the original size.
4. The scene segmentation and obstacle detection method according to claim 1, characterized in that extracting the region of interest from the feature f_d by region feature extraction to obtain the target classification result for the region of interest includes:
obtaining the coordinates of the region of interest in the feature f_d;
obtaining a feature f_r of the region of interest by region feature extraction according to the feature f_d and the coordinates of the region of interest; and
classifying the region of interest in the feature f_r to obtain the target classification result for the region of interest.
5. The scene segmentation and obstacle detection method according to claim 4, characterized in that the method further includes:
obtaining an offset of the position coordinates of the region of interest from the feature f_r by a convolution operation; and
adjusting the coordinates of the region of interest according to the offset.
CN201611020157.6A 2016-11-18 2016-11-18 Scene segmentation and obstacle detection method Pending CN108073868A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201611020157.6A CN108073868A (en) 2016-11-18 2016-11-18 Scene segmentation and obstacle detection method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201611020157.6A CN108073868A (en) 2016-11-18 2016-11-18 Scene segmentation and obstacle detection method

Publications (1)

Publication Number Publication Date
CN108073868A true CN108073868A (en) 2018-05-25

Family

ID=62160525

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201611020157.6A Pending CN108073868A (en) 2016-11-18 2016-11-18 Scene segmentation and obstacle detection method

Country Status (1)

Country Link
CN (1) CN108073868A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022083150A1 (en) * 2020-10-20 2022-04-28 广州小鹏自动驾驶科技有限公司 Parking space detection method and apparatus, vehicle and readable medium

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103413308A (en) * 2013-08-01 2013-11-27 东软集团股份有限公司 Obstacle detection method and device

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103413308A (en) * 2013-08-01 2013-11-27 东软集团股份有限公司 Obstacle detection method and device

Non-Patent Citations (11)

* Cited by examiner, † Cited by third party
Title
DAN LEVI ET AL.: "StixelNet: A Deep Convolutional Network for Obstacle Detection and Road Segmentation", 《BMVC》 *
DI GUO ET AL.: "Object Discovery and Grasp Detection with a Shared Convolutional Neural Network", 《2016 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION (ICRA)》 *
KAREN SIMONYAN ET AL.: "VERY DEEP CONVOLUTIONAL NETWORKS FOR LARGE-SCALE IMAGE RECOGNITION", 《ICLR 2015》 *
ROSS GIRSHICK: "Fast R-CNN", 《PROCEEDINGS OF THE IEEE INTERNATIONAL CONFERENCE ON COMPUTER VISION》 *
SHAOQING REN ET AL.: "Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks", 《ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS》 *
REN Shaoqing: "Efficient Object Detection Based on Feature Sharing", China Doctoral Dissertations Database, Information Science and Technology Series *
LIU Hong et al.: "Scene-Adaptive Segmentation and Obstacle Detection for Obstacle Avoidance by the Blind", Journal of Computer-Aided Design & Computer Graphics *
CHANG Liang et al.: "Convolutional Neural Networks in Image Understanding", Acta Automatica Sinica *
NIU Jie et al.: "An Indoor Scene Recognition Method Fusing Global and Saliency Region Features", Robot *
CHEN Long: "Research on Key Image Processing Technologies for Assisted Vision", China Doctoral Dissertations Database, Information Science and Technology Series *
YAN Xiaowen: "Applied Research on Scene Image Classification Based on Robot Vision", China Masters' Theses Full-text Database, Information Science and Technology Series *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022083150A1 (en) * 2020-10-20 2022-04-28 广州小鹏自动驾驶科技有限公司 Parking space detection method and apparatus, vehicle and readable medium

Similar Documents

Publication Publication Date Title
Zhu et al. Traffic sign detection and recognition using fully convolutional network guided proposals
Kim et al. On-road object detection using deep neural network
CN110843794B (en) Driving scene understanding method and device and trajectory planning method and device
CN106778835A Remote sensing image airport target recognition method fusing scene information and depth features
Broggi et al. Vehicle detection for autonomous parking using a Soft-Cascade AdaBoost classifier
CN106845453A (en) Taillight detection and recognition methods based on image
Zaghari et al. The improvement in obstacle detection in autonomous vehicles using YOLO non-maximum suppression fuzzy algorithm
CN108960074B (en) Small-size pedestrian target detection method based on deep learning
Liu et al. A large-scale simulation dataset: Boost the detection accuracy for special weather conditions
CN110909656B (en) Pedestrian detection method and system integrating radar and camera
CN106886757A Multi-class traffic light detection method and system based on prior probability image
CN108073869A Scene segmentation and obstacle detection system
CN108073868A Scene segmentation and obstacle detection method
Hasan et al. Comparative analysis of vehicle detection in urban traffic environment using Haar cascaded classifiers and blob statistics
Jeon et al. High-speed car detection using resnet-based recurrent rolling convolution
Lagahit et al. Road Marking Extraction and Classification from Mobile LIDAR Point Clouds Derived Imagery using Transfer Learning
Yuan et al. A new active safety distance model of autonomous vehicle based on sensor occluded scenes
Zhang et al. A vehicle classification method based on improved ResNet
Dhalwar et al. Image processing based traffic convex mirror detection
Sahar et al. Efficient Detection and Recognition of Traffic Lights for Autonomous Vehicles Using CNN
Muzalevskiy et al. Runway Marking Detection using Neural Networks
Chen Research and Implementation of 3D Object Detection Based on Autonomous Driving Scenarios
Hodges Deep learning based vision for driverless vehicles in hazy environmental conditions
Jo et al. Design and Implementation of Object Detection and Re-simulation System based on LiDAR
CN117475410B (en) Three-dimensional target detection method, system, equipment and medium based on foreground point screening

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information
CB02 Change of applicant information

Address after: 100026 8 floor 909, 105 building 3, Yao Yuan Road, Chaoyang District, Beijing.

Applicant after: Lexus Automobile (Beijing) Co.,Ltd.

Address before: 100026 8 floor 909, 105 building 3, Yao Yuan Road, Chaoyang District, Beijing.

Applicant before: FARADAY (BEIJING) NETWORK TECHNOLOGY Co.,Ltd.

TA01 Transfer of patent application right
TA01 Transfer of patent application right

Effective date of registration: 20181009

Address after: 511458 9, Nansha District Beach Road, Guangzhou, Guangdong, 9

Applicant after: Evergrande Faraday Future Smart Car (Guangdong) Co.,Ltd.

Address before: 100026 8 floor 909, 105 building 3, Yao Yuan Road, Chaoyang District, Beijing.

Applicant before: Lexus Automobile (Beijing) Co.,Ltd.

TA01 Transfer of patent application right
TA01 Transfer of patent application right

Effective date of registration: 20190318

Address after: 100015 Building No. 7, 74, Jiuxianqiao North Road, Chaoyang District, Beijing, 001

Applicant after: FAFA Automobile (China) Co.,Ltd.

Address before: 511458 9, Nansha District Beach Road, Guangzhou, Guangdong, 9

Applicant before: Evergrande Faraday Future Smart Car (Guangdong) Co.,Ltd.

RJ01 Rejection of invention patent application after publication
RJ01 Rejection of invention patent application after publication

Application publication date: 20180525