CN106919908A - Obstacle recognition method and device, computer equipment and computer-readable recording medium - Google Patents

Obstacle recognition method and device, computer equipment and computer-readable recording medium

Info

Publication number
CN106919908A
CN106919908A (application CN201710073031.3A); granted as CN106919908B
Authority
CN
China
Prior art keywords
barrier
frames
frame
identified
perspective
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201710073031.3A
Other languages
Chinese (zh)
Other versions
CN106919908B (en)
Inventor
谢国洋
李晓晖
郭疆
王亮
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd filed Critical Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN201710073031.3A priority Critical patent/CN106919908B/en
Publication of CN106919908A publication Critical patent/CN106919908A/en
Application granted granted Critical
Publication of CN106919908B publication Critical patent/CN106919908B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/56Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/58Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Multimedia (AREA)
  • Traffic Control Systems (AREA)
  • Optical Radar Systems And Details Thereof (AREA)

Abstract

The present invention provides an obstacle recognition method and apparatus, a computer device and a computer-readable recording medium. The method includes: acquiring information on the obstacles to be recognized in N+1 consecutive frames around the current vehicle; for each of the first N frames, obtaining from that frame's obstacle information the first point-cloud projection maps of point-cloud layers at two or more heights onto the horizontal plane, the first reflectance projection map of the obstacles onto the horizontal plane, and the first occupancy-ratio (duty-cycle) projection map of the obstacles onto the horizontal plane; and predicting the first obstacle category map of the (N+1)-th frame from a pre-trained classifier model, the previously obtained first obstacle category map of each of the first N frames on the horizontal plane, the preset weight of each of the first N frames, and the at least two first point-cloud projection maps, the first reflectance projection map and the first occupancy-ratio projection map of each of the first N frames. The technical solution of the present invention can effectively improve both the accuracy and the efficiency of recognizing the obstacles to be recognized.

Description

Obstacle recognition method and device, computer equipment and computer-readable recording medium
【Technical field】
The present invention relates to the technical field of automatic driving, and in particular to an obstacle recognition method and apparatus, a computer device and a computer-readable recording medium.
【Background art】
In existing automatic driving technology, the output of recognizing obstacles serves as an input to control and planning, so recognizing obstacles accurately and quickly is a key technology.
In the prior art, obstacles are generally recognized using cameras or laser radar. The camera-based scheme works well in scenes with sufficient illumination and a stable environment, but in bad weather or in cluttered road environments the camera's view is not stable enough, so the collected obstacle information is inaccurate. Laser radar, although expensive, is highly stable and safe when used to recognize obstacles. In the prior art, when laser radar is used, the category of an obstacle is judged from the point-cloud size and the local features of the obstacle acquired in a single scanned frame. A frame of laser-radar point cloud usually refers to the point cloud obtained while the laser radar rotates through 360° in one second and scans one full circle of obstacles around the current vehicle; a single frame of point cloud may therefore contain one obstacle to be recognized or several. Whether an obstacle is a person can then be judged from whether the local features of its point cloud resemble a human head, whether it is a bicycle from whether the local features resemble a bicycle's front end, and so on.
In the prior art, however, when obstacles are detected from a single frame of laser-radar point cloud, all obstacles are treated as static; a pole in the background, for example, is easily misidentified as a pedestrian-like object. As a result, recognizing obstacles in the road suffers from poor accuracy and low efficiency.
【Summary of the invention】
The present invention provides an obstacle recognition method and apparatus, a computer device and a computer-readable recording medium, for improving the accuracy and efficiency of recognizing obstacles in automatic driving.
The present invention provides an obstacle recognition method, the method including:
acquiring information on the obstacles to be recognized in N+1 consecutive frames scanned by the laser radar around the current vehicle;
for each of the first N frames of the N+1 frames, obtaining, according to the information on the obstacles to be recognized in that frame, first point-cloud projection maps of point-cloud layers at two or more heights onto the horizontal plane, a first reflectance projection map of the obstacles to be recognized onto the horizontal plane, and a first occupancy-ratio projection map of the obstacles to be recognized onto the horizontal plane;
predicting the first obstacle category map of the (N+1)-th frame of the N+1 frames according to a pre-trained classifier model, the previously obtained first obstacle category map, on the horizontal plane, of the obstacles to be recognized in each of the first N frames, the preset weight of each of the first N frames, and the at least two first point-cloud projection maps, the first reflectance projection map and the first occupancy-ratio projection map of each of the first N frames.
Further optionally, in the method described above, after predicting the first obstacle category map of the (N+1)-th frame according to the pre-trained classifier model, the previously obtained first obstacle category maps of the first N frames, the preset per-frame weights and the at least two first point-cloud projection maps, the first reflectance projection map and the first occupancy-ratio projection map of each of the first N frames, the method further includes:
recognizing the category of each obstacle to be recognized in the point cloud of the (N+1)-th frame according to the first obstacle category map of the (N+1)-th frame and the information on the obstacles to be recognized in the (N+1)-th frame.
Further optionally, in the method described above, recognizing the category of each obstacle to be recognized in the point cloud of the (N+1)-th frame according to the first obstacle category map of the (N+1)-th frame and the information on the obstacles to be recognized in the (N+1)-th frame specifically includes:
labeling the category of each obstacle to be recognized in the point cloud of the obstacles of the (N+1)-th frame according to the first obstacle category map of the (N+1)-th frame;
judging whether the same obstacle in the point cloud of the (N+1)-th frame has been labeled with two or more different categories, and if so, labeling the category of that obstacle according to the numbers of points corresponding to each of the different categories in its point cloud.
Further optionally, in the method described above, obtaining, according to the information on the obstacles to be recognized in each of the first N frames of the N+1 frames, the first point-cloud projection maps of point-cloud layers at two or more heights onto the horizontal plane, the first reflectance projection map and the first occupancy-ratio projection map of each frame specifically includes:
obtaining, according to the point cloud of the obstacles to be recognized in each of the first N frames, the point-cloud layers at two or more heights parallel to the horizontal plane, and projecting each of these point-cloud layers onto the horizontal plane to obtain the at least two corresponding first point-cloud projection maps of that frame;
labeling, according to the reflectance of each point on the surfaces of the obstacles to be recognized in each of the first N frames, the reflectance of each surface point in the projection of that frame's obstacle point cloud onto the horizontal plane, to obtain the corresponding first reflectance projection map of that frame;
obtaining, according to the point cloud of the obstacles to be recognized in each of the first N frames, the first occupancy-ratio projection map of that frame's obstacle point cloud on the horizontal plane.
Further optionally, in the method described above, before predicting the first obstacle category map of the (N+1)-th frame according to the pre-trained classifier model, the previously obtained first obstacle category maps of the first N frames, the preset per-frame weights and the at least two first point-cloud projection maps, the first reflectance projection map and the first occupancy-ratio projection map of each of the first N frames, the method further includes:
obtaining the first obstacle category map, on the horizontal plane, of the obstacles to be recognized in each of the first N frames;
further, obtaining the first obstacle category map of the obstacles to be recognized in each of the first N frames on the horizontal plane specifically includes:
obtaining, from a static map, the first obstacle category map of the obstacles to be recognized in the 1st frame on the horizontal plane;
predicting the first obstacle category map of frame i+1 according to the pre-trained classifier model, the previously obtained first obstacle category map of the obstacles to be recognized on the horizontal plane in each of the first i frames, and the at least two first point-cloud projection maps, the first reflectance projection map and the first occupancy-ratio projection map of each of the first i frames, where i is an integer with 1 ≤ i ≤ (N-1).
Further optionally, in the method described above, before predicting the first obstacle category map of the (N+1)-th frame according to the pre-trained classifier model, the previously obtained first obstacle category maps of the first N frames, the preset per-frame weights and the per-frame first point-cloud projection maps, first reflectance projection map and first occupancy-ratio projection map, the method further includes:
setting a weight Wj for the j-th frame of the first N frames and a weight Wj+1 for the (j+1)-th frame, where Wj+1 > Wj and j is an integer with 1 ≤ j ≤ N; or
setting a weight Q for the 1st frame to the int(N/2)-th frame of the first N frames, and a weight R for the int(N/2)-th frame to the N-th frame of the first N frames, where R is greater than Q.
Further optionally, in the method described above, before predicting the first obstacle category map of the (N+1)-th frame according to the pre-trained classifier model, the previously obtained first obstacle category maps of the first N frames, the preset per-frame weights and the per-frame first point-cloud projection maps, first reflectance projection map and first occupancy-ratio projection map, the method further includes:
collecting information on preset obstacles of known categories over multiple groups of N+1 consecutive frames, to generate an obstacle training set; the information on the preset obstacles in each frame includes the point cloud of the preset obstacles and the reflectance of each point of the preset obstacles;
training the classifier model according to the information on the preset obstacles of the multiple groups of N+1 consecutive frames in the obstacle training set.
Further optionally, in the method described above, training the classifier model according to the information on the preset obstacles of the multiple groups of N+1 consecutive frames in the obstacle training set specifically includes:
obtaining, for each group in the obstacle training set and according to the information on the preset obstacles in each of the first N frames of that group's N+1 frames, second point-cloud projection maps of the preset obstacles' point-cloud layers at two or more heights onto the horizontal plane, a second reflectance projection map of the preset obstacles onto the horizontal plane and a second occupancy-ratio projection map of the preset obstacles onto the horizontal plane, for each frame of each group;
training the classifier model according to, for each group, the previously obtained second obstacle category map of the preset obstacles on the horizontal plane in each of the first N frames, the preset weight of each of that group's first N frames, the at least two second point-cloud projection maps, the second reflectance projection map and the second occupancy-ratio projection map of each of the first N frames, and the known category of each group's preset obstacles, so as to determine the classifier model.
The present invention also provides an obstacle recognition apparatus, the apparatus including:
an obstacle information acquisition module, configured to acquire information on the obstacles to be recognized in N+1 consecutive frames scanned by the laser radar around the current vehicle;
a parameter information acquisition module, configured to obtain, according to the information on the obstacles to be recognized in each of the first N frames of the N+1 frames, the first point-cloud projection maps of point-cloud layers at two or more heights onto the horizontal plane, the first reflectance projection map of the obstacles to be recognized onto the horizontal plane and the first occupancy-ratio projection map of the obstacles to be recognized onto the horizontal plane, for each frame;
a prediction module, configured to predict the first obstacle category map of the (N+1)-th frame according to the pre-trained classifier model, the previously obtained first obstacle category map of the obstacles to be recognized on the horizontal plane in each of the first N frames, the preset weight of each of the first N frames, and the at least two first point-cloud projection maps, the first reflectance projection map and the first occupancy-ratio projection map of each of the first N frames.
Further optionally, the apparatus described above also includes:
an obstacle recognition module, configured to recognize the category of each obstacle to be recognized in the point cloud of the (N+1)-th frame according to the first obstacle category map of the (N+1)-th frame and the information on the obstacles to be recognized in the (N+1)-th frame.
Further optionally, in the apparatus described above, the obstacle recognition module is specifically configured to:
label the category of each obstacle to be recognized in the point cloud of the obstacles of the (N+1)-th frame according to the first obstacle category map of the (N+1)-th frame;
judge whether the same obstacle in the point cloud of the (N+1)-th frame has been labeled with two or more different categories, and if so, label the category of that obstacle according to the numbers of points corresponding to each of the different categories in its point cloud.
Further optionally, in the apparatus described above, the parameter information acquisition module is specifically configured to:
obtain, according to the point cloud of the obstacles to be recognized in each of the first N frames, the point-cloud layers at two or more heights parallel to the horizontal plane, and project each of these point-cloud layers onto the horizontal plane to obtain the at least two corresponding first point-cloud projection maps of that frame;
label, according to the reflectance of each point on the surfaces of the obstacles to be recognized in each of the first N frames, the reflectance of each surface point in the projection of that frame's obstacle point cloud onto the horizontal plane, to obtain the corresponding first reflectance projection map of that frame;
obtain, according to the point cloud of the obstacles to be recognized in each of the first N frames, the first occupancy-ratio projection map of that frame's obstacle point cloud on the horizontal plane.
Further optionally, the apparatus described above also includes:
an obstacle category acquisition module, configured to obtain the first obstacle category map, on the horizontal plane, of the obstacles to be recognized in each of the first N frames;
further, the obstacle category acquisition module is specifically configured to:
obtain, from a static map, the first obstacle category map of the obstacles to be recognized in the 1st frame on the horizontal plane;
predict the first obstacle category map of frame i+1 according to the pre-trained classifier model, the previously obtained first obstacle category map of the obstacles to be recognized on the horizontal plane in each of the first i frames, and the at least two first point-cloud projection maps, the first reflectance projection map and the first occupancy-ratio projection map of each of the first i frames, where i is an integer with 1 ≤ i ≤ (N-1).
Further optionally, the apparatus described above also includes:
a weight setting module, configured to set a weight Wj for the j-th frame of the first N frames and a weight Wj+1 for the (j+1)-th frame, where Wj+1 > Wj and j is an integer with 1 ≤ j ≤ N; or
to set a weight Q for the 1st frame to the int(N/2)-th frame of the first N frames, and a weight R for the int(N/2)-th frame to the N-th frame of the first N frames, where R is greater than Q.
Further optionally, the apparatus described above also includes:
an acquisition module, configured to collect information on preset obstacles of known categories over multiple groups of N+1 consecutive frames to generate an obstacle training set, the information on the preset obstacles in each frame including the point cloud of the preset obstacles and the reflectance of each point of the preset obstacles;
a training module, configured to train the classifier model according to the information on the preset obstacles of the multiple groups of N+1 consecutive frames in the obstacle training set.
Further optionally, in the apparatus described above, the training module is specifically configured to:
obtain, for each group in the obstacle training set and according to the information on the preset obstacles in each of the first N frames of that group's N+1 frames, the second point-cloud projection maps of the preset obstacles' point-cloud layers at two or more heights onto the horizontal plane, the second reflectance projection map of the preset obstacles onto the horizontal plane and the second occupancy-ratio projection map of the preset obstacles onto the horizontal plane, for each frame of each group;
train the classifier model according to, for each group, the previously obtained second obstacle category map of the preset obstacles on the horizontal plane in each of the first N frames, the preset weight of each of that group's first N frames, the at least two second point-cloud projection maps, the second reflectance projection map and the second occupancy-ratio projection map of each of the first N frames, and the known category of each group's preset obstacles, so as to determine the classifier model.
The present invention also provides a computer device, including a memory, a processor and a computer program stored in the memory and runnable on the processor, the processor implementing the obstacle recognition method described above when executing the program.
The present invention also provides a computer-readable medium on which a computer program is stored, the program implementing the obstacle recognition method described above when executed by a processor.
With the obstacle recognition method and apparatus, computer device and computer-readable recording medium of the present invention, information on the obstacles to be recognized in N+1 consecutive frames scanned by the laser radar around the current vehicle is acquired; for each of the first N frames, the first point-cloud projection maps of point-cloud layers at two or more heights onto the horizontal plane, the first reflectance projection map and the first occupancy-ratio projection map of the obstacles onto the horizontal plane are obtained from that frame's obstacle information; and the first obstacle category map of the (N+1)-th frame is predicted from the pre-trained classifier model, the previously obtained first obstacle category maps of the first N frames, the preset per-frame weights and the per-frame projection maps. Compared with the prior art, which detects the category of an obstacle from a single frame of its point cloud, the technical solution of the present invention recognizes the category of the obstacles from the information of multiple frames; because the information of multiple frames is combined, the accuracy of recognizing the obstacles can be effectively improved, and thereby the efficiency of recognizing them as well.
【Brief description of the drawings】
Fig. 1 is a flowchart of an embodiment of the obstacle recognition method of the present invention.
Fig. 2 is a structural diagram of embodiment one of the obstacle recognition apparatus of the present invention.
Fig. 3 is a structural diagram of embodiment two of the obstacle recognition apparatus of the present invention.
Fig. 4 is a structural diagram of the computer device of the present invention.
【Detailed description of embodiments】
To make the objects, technical solutions and advantages of the present invention clearer, the present invention is described in detail below with reference to the accompanying drawings and specific embodiments.
Fig. 1 is a flowchart of an embodiment of the obstacle recognition method of the present invention. As shown in Fig. 1, the obstacle recognition method of this embodiment may specifically include the following steps:
100. Acquire information on the obstacles to be recognized in N+1 consecutive frames scanned by the laser radar around the current vehicle.
The obstacle recognition method of this embodiment is applied in the field of automatic driving. In automatic driving, the vehicle needs to recognize obstacles in the road automatically, so that it can make decisions and take control actions in time while travelling, and travel safely. The executive body of the obstacle recognition method of this embodiment may be an obstacle recognition apparatus, which may be obtained by integrating several modules; the apparatus may specifically be installed in an autonomous vehicle, to control that vehicle.
The information on the obstacles to be recognized in this embodiment can be acquired by scanning with a laser radar. The laser radar may have 16, 32 or 64 lines, among other specifications; the higher the line count, the greater the energy density per unit. In this embodiment, the laser radar on the current vehicle rotates through 360° every second, and the information on the obstacles scanned in one full circle around the current vehicle constitutes one frame of obstacle information. The information on the obstacles in this embodiment may include the point cloud of the obstacles and their reflectance values. There may be one obstacle around the current vehicle, or several. After the laser radar scan, a coordinate system may be set with the centroid of the current vehicle as the origin, two directions parallel to the horizontal plane as the x and y directions (the length and width directions), and the direction perpendicular to the ground as the z direction (the height direction). The obstacles can then be located in this coordinate system according to the relative position and distance of each point of their point clouds with respect to the origin. In this way, in each frame's point cloud of obstacles, the point cloud of each individual obstacle can be obtained from the relative position of each of its points with respect to the current vehicle. In addition, the laser radar can also detect the reflectance of each point of each obstacle. In practice, the coordinate system may instead take the centroid of the laser radar as its origin, with the other directions unchanged. The value of N in this embodiment can be chosen according to actual requirements, for example 8, 10 or another value.
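A minimal sketch of the vehicle-centred coordinate frame described above: origin at the current vehicle's centroid, x/y parallel to the horizontal plane, z vertical. This is an illustration only, not the patent's implementation; names such as `lidar_offset` are assumed.

```python
import numpy as np

def to_vehicle_frame(points_lidar: np.ndarray, lidar_offset: np.ndarray) -> np.ndarray:
    """points_lidar: (M, 4) array of x, y, z, reflectance in the lidar frame.
    lidar_offset: (3,) translation from the vehicle centroid to the lidar origin.
    Returns the same points expressed in the vehicle-centred frame."""
    out = points_lidar.copy()
    out[:, :3] += lidar_offset  # translate only; axes are assumed already aligned
    return out
```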
101. According to the information on the obstacles to be recognized in each of the first N frames of the N+1 frames, obtain the first point-cloud projection maps of point-cloud layers at two or more heights onto the horizontal plane, the first reflectance projection map of the obstacles onto the horizontal plane and the first occupancy-ratio projection map of the obstacles onto the horizontal plane, for each frame.
The obstacle recognition method of this embodiment mainly uses parameter information of the first N frames to predict the obstacle categories of the (N+1)-th frame, thereby recognizing the obstacles. Since prediction directly from the three-dimensional point-cloud images in the obstacle information cannot currently be realized, in this embodiment the three-dimensional information is converted into two-dimensional information, and the prediction of the (N+1)-th frame's obstacle categories uses the two-dimensional information. The two-dimensional information in this embodiment may include the first point-cloud projection maps of point-cloud layers at two or more heights onto the horizontal plane, the first reflectance projection map of the obstacles onto the horizontal plane and the first occupancy-ratio projection map of the obstacles onto the horizontal plane, for each frame.
For example, step 101 may specifically include the following steps:
(a1) According to the point cloud of the obstacles to be recognized in each of the first N frames, obtain the point-cloud layers at two or more heights parallel to the horizontal plane; project each of these point-cloud layers onto the horizontal plane to obtain the at least two corresponding first point-cloud projection maps of that frame.
For example, the height range of the point cloud of the obstacles scanned by the laser radar starts from a negative height threshold and extends to a certain positive height threshold. If the centroid of the current vehicle is 1.3 m above the ground, the height of the ground is -1.3 m; the positive height threshold can be set according to the maximum height of obstacles in the road in the given application, for example +5 m or another value. Point-cloud layers at two or more heights parallel to the horizontal plane can then be taken from each frame's obstacle point cloud. If the point-cloud layer near the horizontal plane contains few features of the obstacles, the layers can be taken as high as possible, or the layer nearest the horizontal plane can be avoided; for example, the layers from -1.2 m to +1.0 m, from -1.2 m to +2.0 m, from -1.2 m to +3.0 m and from -1.2 m to +5.0 m can be taken. The heights of the layers can be chosen according to the height characteristics of the obstacles in the road under study: in a road with many pedestrians, for example, the lowest layer can be chosen according to the height of a pedestrian, the next layer according to the height of a car or a bicycle, and so on for each layer. After the point-cloud layers at two or more heights are obtained, each layer is projected onto the horizontal plane, converting the three-dimensional point-cloud layer into a two-dimensional point-cloud projection map. Each height's point-cloud layer yields one first point-cloud projection map, so at least two first point-cloud projection maps are obtained in total.
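A minimal sketch, under assumed grid parameters, of step (a1): the frame's point cloud is sliced into layers between a fixed floor (e.g. -1.2 m) and several upper bounds, and each layer is projected onto an x/y grid to give one first point-cloud projection map per layer. The grid size, extent and layer heights are illustrative assumptions, not the patent's values.

```python
import numpy as np

def point_cloud_projections(points: np.ndarray,
                            z_floor: float = -1.2,
                            z_tops: tuple = (1.0, 2.0, 3.0, 5.0),
                            grid: int = 512,
                            extent: float = 60.0) -> np.ndarray:
    """points: (M, 3+) x, y, z in the vehicle frame.
    Returns (len(z_tops), grid, grid) binary occupancy projections."""
    maps = np.zeros((len(z_tops), grid, grid), dtype=np.float32)
    # map x, y in [-extent, extent] metres to pixel indices
    ij = ((points[:, :2] + extent) / (2 * extent) * grid).astype(int)
    valid = (ij >= 0).all(axis=1) & (ij < grid).all(axis=1)
    for k, z_top in enumerate(z_tops):
        layer = valid & (points[:, 2] >= z_floor) & (points[:, 2] <= z_top)
        maps[k, ij[layer, 1], ij[layer, 0]] = 1.0  # mark occupied cells of this layer
    return maps
```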
(a2) According to the reflectance of each point on the surfaces of the obstacles to be recognized in each of the first N frames, label the reflectance of each surface point in the projection of that frame's obstacle point cloud onto the horizontal plane, obtaining the corresponding first reflectance projection map of each frame.
While scanning the point cloud of the obstacles in each frame, the laser radar can also detect the reflectance of every point of the obstacles in the current frame. Because the laser radar installed on an autonomous vehicle has to capture the road conditions ahead, it is usually mounted higher than the vehicle roof, so that it can scan all the obstacles around the current vehicle comprehensively. The laser radar can therefore detect the reflectance at every position of every obstacle that it can reach during the scan. Normally, when the laser radar is mounted high enough, it can in theory scan the reflectance of any point on an obstacle's surface, that is, any position except the bottom facing the ground. When the laser radar is not mounted high enough, it may not be able to scan the side of an obstacle facing away from it, but it can at least scan every point of the obstacle's upper surface; in that case, the projection of each frame's obstacle point cloud onto the horizontal plane is labeled with the reflectance of the points on the obstacles' upper surfaces. In other words, the first reflectance projection map may record only the projection of the highest point of each obstacle. With this processing, the corresponding first reflectance projection map can be obtained for each frame.
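A minimal sketch of step (a2): for each x/y grid cell, keep the reflectance of the highest point falling in that cell, which yields a first reflectance projection map dominated by the obstacles' upper surfaces as described above. Grid parameters mirror the sketch above and are assumptions.

```python
import numpy as np

def reflectance_projection(points: np.ndarray, grid: int = 512,
                           extent: float = 60.0) -> np.ndarray:
    """points: (M, 4) x, y, z, reflectance. Returns a (grid, grid) reflectance map."""
    refl_map = np.zeros((grid, grid), dtype=np.float32)
    top_z = np.full((grid, grid), -np.inf, dtype=np.float32)
    ij = ((points[:, :2] + extent) / (2 * extent) * grid).astype(int)
    valid = (ij >= 0).all(axis=1) & (ij < grid).all(axis=1)
    for x_i, y_i, z, r in zip(ij[valid, 0], ij[valid, 1],
                              points[valid, 2], points[valid, 3]):
        if z > top_z[y_i, x_i]:          # the highest point wins the cell
            top_z[y_i, x_i] = z
            refl_map[y_i, x_i] = r
    return refl_map
```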
(a3) According to the point cloud of the obstacles to be recognized in each of the first N frames, obtain the first occupancy-ratio projection map of that frame's obstacle point cloud on the horizontal plane.
For each frame, if several obstacles are included around the current vehicle in the laser-radar scan, the point clouds of all the obstacles obtained by the scan can be projected onto the horizontal plane, so as to obtain the first occupancy-ratio projection map of that frame's obstacle point cloud on the horizontal plane. A corresponding first occupancy-ratio projection map can thus be obtained for each frame.
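A minimal sketch of step (a3). The patent does not spell out how the occupancy-ratio (duty-cycle) map is computed; one plausible reading, used here only for illustration, is the fraction of occupied fine cells inside each coarser cell of the projected point cloud.

```python
import numpy as np

def occupancy_ratio_projection(occupancy: np.ndarray, block: int = 4) -> np.ndarray:
    """occupancy: (grid, grid) binary map from the projections above.
    Returns a (grid//block, grid//block) map of per-block occupancy ratios."""
    g = occupancy.shape[0] // block
    blocks = occupancy[:g * block, :g * block].reshape(g, block, g, block)
    return blocks.mean(axis=(1, 3)).astype(np.float32)  # fraction of occupied cells
```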
102. According to the pre-trained classifier model, the previously obtained first obstacle category map of the obstacles on the horizontal plane in each of the first N frames, the preset weight of each of the first N frames, and the at least two first point-cloud projection maps, the first reflectance projection map and the first occupancy-ratio projection map of each of the first N frames, predict the first obstacle category map of the (N+1)-th frame of the N+1 frames.
The principle of obstacle recognition in this embodiment is to predict the first obstacle category map of the (N+1)-th frame from the first obstacle category map of the obstacles on the horizontal plane in each of the first N frames, the per-frame weights, and the at least two first point-cloud projection maps, the first reflectance projection map and the first occupancy-ratio projection map of each of the first N frames. Considering the continuity between frames, the classifier model of this embodiment should be a model over successive frames; for example, it may be a recurrent neural network model, such as a Recurrent Neural Network model or a Long Short-Term Memory network model. By substituting the first obstacle category maps of the first N frames, the per-frame weights and the per-frame first point-cloud projection maps, first reflectance projection map and first occupancy-ratio projection map into the pre-trained classifier model, the model can predict the first obstacle category map of the (N+1)-th frame. In this embodiment, the first obstacle category map of the (N+1)-th frame is taken as the final prediction result, and the category of each obstacle labeled in it is taken as the category of each obstacle around the current vehicle. Because the first obstacle category map labels each obstacle according to its bearing relative to the current vehicle in the horizontal direction, the category of the obstacle in each bearing can be read off from the first obstacle category map of the (N+1)-th frame.
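A minimal sketch of a recurrent classifier in the spirit described above. The architecture is an assumption, not the patent's: the patent only states that a recurrent model (e.g. RNN or LSTM) over successive frames may be used. Here each of the first N frames contributes a stack of 2-D maps (category map, point-cloud projections, reflectance map, occupancy-ratio map), scaled by its preset frame weight, and an LSTM over the frame sequence predicts the per-cell category map of the (N+1)-th frame.

```python
import torch
import torch.nn as nn

class FrameSequenceClassifier(nn.Module):
    def __init__(self, channels: int, grid: int, classes: int, hidden: int = 256):
        super().__init__()
        self.encode = nn.Sequential(              # per-frame 2-D feature encoder
            nn.Conv2d(channels, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(8), nn.Flatten())
        self.rnn = nn.LSTM(32 * 8 * 8, hidden, batch_first=True)
        self.decode = nn.Linear(hidden, classes * grid * grid)  # grid: coarse map size
        self.grid, self.classes = grid, classes

    def forward(self, frames: torch.Tensor, weights: torch.Tensor) -> torch.Tensor:
        # frames: (B, N, C, grid, grid); weights: (N,) preset per-frame weights
        b, n = frames.shape[:2]
        feats = self.encode(frames.flatten(0, 1)).view(b, n, -1)
        feats = feats * weights.view(1, n, 1)     # emphasise later (heavier) frames
        _, (h, _) = self.rnn(feats)
        logits = self.decode(h[-1]).view(b, self.classes, self.grid, self.grid)
        return logits                              # category map of frame N+1
```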
Optionally, in this embodiment, the categories of obstacles can be divided into pedestrian, bicycle, car, large vehicle and other categories; when an obstacle's category cannot be determined, it is labeled as "other". New kinds of vehicles appearing on the road in practice can also be added gradually as obstacle categories. In the first obstacle category map, different obstacle categories can be represented by different colors, or by points of different shapes, and so on.
Optionally, before step 102, the method may also include: obtaining the first obstacle category map of the obstacles on the horizontal plane in each of the first N frames.
The step "obtaining the first obstacle category map of the obstacles on the horizontal plane in each of the first N frames" may specifically include the following steps:
(b1) Obtain, from a static map, the first obstacle category map of the obstacles in the 1st frame on the horizontal plane.
(b2) According to the pre-trained classifier model, the previously obtained first obstacle category map of the obstacles on the horizontal plane in each of the first i frames, and the at least two first point-cloud projection maps, the first reflectance projection map and the first occupancy-ratio projection map of each of the first i frames, predict the first obstacle category map of frame i+1, where i is an integer with 1 ≤ i ≤ (N-1).
In this embodiment, because no first obstacle category map exists before the 1st frame, the category map of the 1st frame cannot be predicted from earlier first obstacle category maps using the technical solution of this embodiment; the first obstacle category map of the 1st frame is therefore obtained from a static map. Starting from the 2nd frame, the first obstacle category map of the 2nd frame can be predicted using the first obstacle category map of the 1st frame together with the 1st frame's at least two first point-cloud projection maps, first reflectance projection map and first occupancy-ratio projection map. Similarly, the first obstacle category map of the 3rd frame can be predicted from the projection maps of the 1st and 2nd frames together with the first obstacle category maps of the 1st and 2nd frames; and so on, the first obstacle category map of the 4th frame can be predicted from the information of the 1st to 3rd frames, until the first obstacle category map of the (N+1)-th frame is predicted from the information of the 1st to N-th frames. The categories of the obstacles around the current vehicle are then determined, and the laser radar does not need to continue scanning to acquire the obstacle point cloud of the (N+2)-th frame.
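A minimal sketch of the bootstrapping just described, reusing the FrameSequenceClassifier sketch above: the category map of frame 1 comes from the static map, and the map of each later frame is predicted from all frames seen so far, up to the (N+1)-th frame. Shapes and names are assumptions for illustration.

```python
import torch

def rollout_category_maps(model, static_category_map, feature_stacks, frame_weights):
    """static_category_map: (classes, grid, grid) one-hot map for frame 1.
    feature_stacks: list of N tensors, each (C_feat, grid, grid), for frames 1..N.
    frame_weights: (N,) preset weights. Returns the category map of frame N+1."""
    category_maps = [static_category_map]
    for i in range(1, len(feature_stacks) + 1):       # predict frames 2 .. N+1
        # per-frame input channels = category map channels + feature channels
        frames = torch.stack([torch.cat([category_maps[k], feature_stacks[k]])
                              for k in range(i)]).unsqueeze(0)  # (1, i, C, g, g)
        logits = model(frames, frame_weights[:i])
        category_maps.append(torch.softmax(logits[0], dim=0))   # map of frame i+1
    return category_maps[-1]                          # predicted map of frame N+1
```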
It should be noted that, in the above embodiment, the process of obtaining the first obstacle category map of the obstacles on the horizontal plane in each of the first N frames does not consider the weights of the frames; during that prediction, the weight of each frame can be set to the same value. However, when predicting the first obstacle category map of the (N+1)-th frame in step 102, the weight of each of the first N frames must be taken into account.
In addition, in practice, the scan results of the first few frames when the laser radar has just started scanning may not be ideal, but such frames do exist. For example, some frames may actually exist before the 1st frame of the N+1 consecutive frames of this embodiment; in that case the above technical solution of this embodiment can also be used, predicting the first obstacle category map of the 1st frame from those actually existing frames collected before it.
With the obstacle recognition method of this embodiment, information on the obstacles to be recognized in N+1 consecutive frames scanned by the laser radar around the current vehicle is acquired; for each of the first N frames, the first point-cloud projection maps of point-cloud layers at two or more heights onto the horizontal plane, the first reflectance projection map and the first occupancy-ratio projection map of the obstacles onto the horizontal plane are obtained from that frame's obstacle information; and the first obstacle category map of the (N+1)-th frame is predicted from the pre-trained classifier model, the previously obtained first obstacle category maps of the first N frames, the preset per-frame weights and the per-frame projection maps. Compared with the prior art, which detects the category of an obstacle from a single frame of its point cloud, the technical solution of this embodiment recognizes the category of the obstacles from the information of multiple frames; because the information of multiple frames is combined, the accuracy of recognizing the obstacles can be effectively improved, and thereby the efficiency as well.
Further optionally, on the basis of the technical solution of the embodiment shown in Fig. 1, before step 102 ("predict the first obstacle category map of the (N+1)-th frame according to the pre-trained classifier model, the previously obtained first obstacle category maps of the first N frames, the preset per-frame weights and the per-frame first point-cloud projection maps, first reflectance projection map and first occupancy-ratio projection map"), the method may also include the step of setting a weight for each of the first N frames, for example in either of the following two ways:
First way: set a weight Wj for the j-th frame of the first N frames and a weight Wj+1 for the (j+1)-th frame, where Wj+1 > Wj and j is an integer with 1 ≤ j ≤ N.
Second way: set a weight Q for the 1st frame to the int(N/2)-th frame of the first N frames, and a weight R for the int(N/2)-th frame to the N-th frame of the first N frames, where R is greater than Q.
In the first way, the weight of a frame increases with its frame number; in other words, the closer a frame is to the (N+1)-th frame to be predicted, the larger its share in the prediction. In the second way, the 1st frame to the int(N/2)-th frame of the first N frames share one weight, and the int(N/2)-th to N-th frames share another, but the weight of the int(N/2)-th to N-th frames, which are closer to the (N+1)-th frame to be predicted, is greater than that of the 1st to int(N/2)-th frames; for example, in this embodiment R may be greater than or equal to 2Q. In practice, R may also be greater than or equal to 3Q or 1.5Q, or another value, as required. In short, the weight of the frames closer to the frame to be predicted should be as large as possible, so that the prediction of that frame is more accurate.
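A minimal sketch of the two weighting schemes described: (1) weights that increase monotonically with the frame index, and (2) a two-block scheme in which frames 1..int(N/2) share weight Q and the remaining frames share a larger weight R (e.g. R >= 2Q). The concrete numbers are illustrative assumptions.

```python
def increasing_weights(n: int) -> list:
    # W_{j+1} > W_j, here simply proportional to the frame index
    return [j / n for j in range(1, n + 1)]

def two_block_weights(n: int, q: float = 1.0, r: float = 2.0) -> list:
    assert r > q, "the later block must weigh more than the earlier one"
    half = n // 2                       # int(N/2)
    return [q] * half + [r] * (n - half)
```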
Further optionally, on the basis of the technical solution of the embodiment shown in Fig. 1, after step 102, the method may also include the following step: recognizing the category of each obstacle in the point cloud of the (N+1)-th frame according to the first obstacle category map of the (N+1)-th frame and the information on the obstacles of the (N+1)-th frame.
Because the first obstacle category map finally obtained in step 102 of the above embodiment is two-dimensional, it cannot label each obstacle precisely. Therefore, in this embodiment, once the first obstacle category map of the (N+1)-th frame has been predicted, it can be mapped back into the point cloud of the obstacles of the (N+1)-th frame. The first obstacle category map is two-dimensional, in the x/y plane, while the point cloud of the obstacles of the (N+1)-th frame is three-dimensional, in x/y/z space; it is therefore straightforward, for each x/y coordinate in the first obstacle category map, to carry the obstacle category at that coordinate over into the x/y/z point cloud of the (N+1)-th frame. In the conversion, the obstacle categories of all points of the (N+1)-th frame's point cloud that share the same x/y coordinate, whatever their z coordinate, are taken to be identical. For example, this step may specifically include the following steps:
(c1) According to the first obstacle category map of the (N+1)-th frame, label the category of each obstacle in the point cloud of the obstacles of the (N+1)-th frame.
(c2) Judge whether the same obstacle in the point cloud of the (N+1)-th frame has been labeled with two or more different categories; if so, label the category of the obstacle according to the numbers of points corresponding to each of the different categories in its point cloud.
The point cloud of the obstacles of the (N+1)-th frame is three-dimensional. In three-dimensional space, each obstacle is quite distinct and is usually easy to identify. Therefore, after the category of each obstacle has been labeled in the point cloud of the (N+1)-th frame according to the first obstacle category map, it can be judged whether the same obstacle in the labeled point cloud carries two or more different categories; if so, the category of the obstacle can be determined from the numbers of points corresponding to each of the different categories in its point cloud. For example, if the point cloud of one obstacle contains 500 points of category 1 and also 20 points labeled category 2, the number of category-1 points far exceeds the number of category-2 points; the labeled category-2 points can be regarded as noise, the category-2 labels inside the category-1 region removed, and the obstacle recognized as category 1. When more categories are involved, likewise, following the principle of the minority yielding to the majority, the obstacle is assigned the category with the most points.
Sometimes, however, the traffic conditions are not so good; when the road is congested, for example, some obstacles may be pressed close to one another. In that case, when the same obstacle in the labeled point cloud of the (N+1)-th frame carries two or more different categories, it can also be judged, for each category, whether its number of points exceeds a point-count threshold. The point-count threshold of this embodiment can be set to the minimum number of points that can independently form an obstacle. When the number of points of a category exceeds the threshold, the points labeled with that category can be considered to form an independent obstacle, even if they are fewer than the points of the other, closely adjacent categories; the obstacle of that category exists on its own. Conversely, when the number of points labeled with a category is below the threshold, those points can be considered noise and removed during verification. Through steps (c1) and (c2), the first obstacle category map of the (N+1)-th frame obtained in step 102 can be verified and post-processed, further improving the accuracy, and in turn the efficiency, of recognizing the obstacles.
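A minimal sketch of the verification in step (c2): if the points of one segmented obstacle carry two or more category labels, keep the majority label, but let a minority label survive as a separate obstacle when its point count exceeds a threshold. The threshold value and data layout are assumptions.

```python
from collections import Counter

def resolve_obstacle_category(point_labels, min_points: int = 30):
    """point_labels: per-point category labels for one obstacle's point cloud.
    Returns (majority_label, extra_labels): extra_labels are minority categories
    with enough points to stand as independent obstacles; smaller ones are noise."""
    counts = Counter(point_labels)
    majority, _ = counts.most_common(1)[0]
    extra = [label for label, n in counts.items()
             if label != majority and n >= min_points]
    return majority, extra
```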
Further optionally, on the basis of the technical solution of the above embodiment, before step 102, the method may also include the following steps:
(d1) Collect information on preset obstacles of known categories over multiple groups of N+1 consecutive frames, to generate an obstacle training set. As before, the information on the preset obstacles in each frame of this embodiment may include the point cloud of the preset obstacles and the reflectance of each of their points.
(d2) Train the classifier model according to the information on the preset obstacles of the multiple groups of N+1 consecutive frames in the obstacle training set.
The number of groups of known-category preset-obstacle information in the obstacle training set of this embodiment can be very large, for example more than 5,000, or tens of thousands, or more; the more groups the training set includes, the more accurately the parameters of the classifier model are determined during training, and the more accurately the categories of obstacles are subsequently recognized with the classifier model. Each group contains the information of preset obstacles over N+1 consecutive frames, so the classifier model can be trained from the information of multiple groups of N+1 consecutive frames.
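A minimal sketch (names assumed, not the patent's) of one group in the obstacle training set: N+1 consecutive frames of preset obstacles with known categories, each frame carrying the obstacles' point cloud and the reflectance of each point.

```python
from dataclasses import dataclass
from typing import List
import numpy as np

@dataclass
class TrainingGroup:
    point_clouds: List[np.ndarray]      # N+1 arrays, each (M_i, 3): x, y, z
    reflectances: List[np.ndarray]      # N+1 arrays, each (M_i,): per-point reflectance
    known_categories: List[np.ndarray]  # per-point ground-truth category labels, per frame

def build_training_set(groups: List[TrainingGroup], n_plus_1: int) -> List[TrainingGroup]:
    # keep only groups that really contain N+1 consecutive frames
    return [g for g in groups if len(g.point_clouds) == n_plus_1]
```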
For example, step (d2) may specifically include the following steps:
(e1) according to the information of the preset obstacles in each of the first N frames of the N+1 frames of each group in the obstacle training set, obtaining, for each frame of each group, the second point cloud projection maps, on a horizontal plane, of the point cloud layers of at least two heights of the preset obstacles, the second reflectance information projection map of the preset obstacles on the horizontal plane, and the second duty cycle projection map of the preset obstacles on the horizontal plane;
(e2) training the classifier model, and thereby determining the classifier model, according to the second obstacle classification maps, obtained in advance, of the preset obstacles on the horizontal plane in each of the first N frames of each group, the preset weight of each of the first N frames of each group, the at least two second point cloud projection maps, the second reflectance information projection map and the second duty cycle projection map of each frame, and the known categories of the preset obstacles corresponding to each group.
In this embodiment, when the classifier model is trained, the information of the obstacles in each group of N+1 frames in the obstacle training set is used in turn, and the classifier model is trained through step (e1) and step (e2); after repeated training, the classifier model can finally be determined. When the information of each group of N+1 frames is used for training, step (e1) is implemented in the same way as step 101; for details, reference may be made to the description of step 101 in the above embodiment, which is not repeated here. In step (e2), specifically, the second obstacle classification map of the (N+1)-th frame of a group can be predicted from the second obstacle classification maps, obtained in advance, of the preset obstacles on the horizontal plane in each of the first N frames of the group, the preset weight of each of the first N frames, and the at least two second point cloud projection maps, the second reflectance information projection map and the second duty cycle projection map of each frame. The detailed process is the same as that of step 102 of the above embodiment; for details, reference may be made to the description of the above embodiment, which is not repeated here. The manner of setting the preset weight of each of the first N frames of each group may also refer to the description of the above embodiment and is not repeated here. It should be noted that the weight of each of the first N frames in one group may be the same as, or different from, the weight of the corresponding frame in other groups.
Then, according to the known categories of the preset obstacles of the group and the point cloud of the preset obstacles in the (N+1)-th frame, the point cloud of the preset obstacles in the (N+1)-th frame can be projected onto the horizontal plane, i.e. the xy plane, according to the category of each obstacle, to obtain a second obstacle classification map of the (N+1)-th frame. The predicted second obstacle classification map of the (N+1)-th frame is compared with the obstacle classification map of the (N+1)-th frame obtained by projection according to the known categories; when the two differ, the parameters of the classifier model can be adjusted and the model retrained, until the predicted second obstacle classification map of the (N+1)-th frame is identical to the obstacle classification map of the (N+1)-th frame obtained by projection.
Alternatively, the predicted second obstacle classification map of the (N+1)-th frame of the group can be transformed back into the point cloud of the preset obstacles in the (N+1)-th frame of the group, so that the category of each preset obstacle can be displayed more clearly in the three-dimensional point cloud of the (N+1)-th frame. The predicted category of each preset obstacle is then compared with its known category; when they differ, the parameters of the classifier model can be adjusted and the model retrained, until the predicted categories of the preset obstacles are identical to their known categories.
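The comparison between the predicted classification map and the map projected from the known categories can be sketched as the following training loop. Here classifier, predict_classification_map, update, and the grid size and resolution are placeholders standing in for the patent's classifier model and projection operations; they are assumptions, not real APIs.

    import numpy as np

    def project_by_category(points, labels, grid_size, resolution):
        """Project the labeled point cloud of the (N+1)-th frame onto the xy plane
        to build the ground-truth obstacle classification map."""
        grid = np.zeros((grid_size, grid_size), dtype=np.int32)
        ij = (points[:, :2] / resolution).astype(int) + grid_size // 2
        valid = (ij >= 0).all(axis=1) & (ij < grid_size).all(axis=1)
        grid[ij[valid, 0], ij[valid, 1]] = labels[valid]
        return grid

    def train_one_group(classifier, group_inputs, gt_points, gt_labels,
                        grid_size=400, resolution=0.2, max_iters=10):
        """Adjust the classifier until its predicted classification map of the
        (N+1)-th frame matches the map projected from the known categories."""
        target = project_by_category(gt_points, gt_labels, grid_size, resolution)
        for _ in range(max_iters):
            predicted = classifier.predict_classification_map(group_inputs)
            if np.array_equal(predicted, target):
                break                                     # prediction matches the projection
            classifier.update(group_inputs, target)       # adjust parameters and retrain
        return classifier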
By training the classifier model with the information of the preset obstacles of the above multiple groups of N+1 consecutive frames, the parameters of the classifier model, and thereby the classifier model itself, can be determined. The trained classifier model can then be used in steps 100-102 to identify the obstacles to be identified.
The second obstacle classification map of the preset obstacles on the horizontal plane in each of the first N frames obtained in step (e2) is realized on the same principle as "the first obstacle classification map of the obstacles to be identified on the horizontal plane in each of the first N frames" in the above embodiment; for details, reference may be made to the description of the above related embodiment, which is not repeated here.
With the obstacle recognition method of this embodiment, after an autonomous vehicle scans the point clouds of the obstacles to be identified with its lidar, the obstacles can be identified according to the above obstacle recognition method, and the travel of the vehicle can be further controlled according to the categories of the obstacles, for example by controlling the vehicle to avoid an obstacle, thereby effectively improving the driving safety of autonomous vehicles.
Compared with the prior art, in which the category of an obstacle to be identified is detected from the point cloud of a single frame, the technical solution of the above embodiment recognizes the category of the obstacle to be identified from the information of multiple frames. Because the information of the obstacle to be identified in multiple frames is combined, the recognition accuracy for the obstacle to be identified can be effectively improved, and therefore the recognition efficiency can also be effectively improved.
Fig. 2 is a structural diagram of Embodiment 1 of the obstacle recognition apparatus of the present invention. As shown in Fig. 2, the obstacle recognition apparatus of this embodiment may specifically include: an obstacle information acquisition module 10, a parameter information acquisition module 11 and a prediction module 12.
The obstacle information acquisition module 10 is used to obtain the information of the obstacles to be identified in N+1 consecutive frames scanned by the lidar around the current vehicle. The parameter information acquisition module 11 is used to obtain, according to the information of the obstacles to be identified in each of the first N frames of the N+1 frames obtained by the obstacle information acquisition module 10, the first point cloud projection maps, on the horizontal plane, of the point cloud layers of at least two heights in each frame, the first reflectance information projection map of the obstacles to be identified on the horizontal plane in each frame, and the first duty cycle projection map of the obstacles to be identified on the horizontal plane in each frame. The prediction module 12 is used to predict the first obstacle classification map of the (N+1)-th frame of the N+1 frames according to the pre-trained classifier model, the first obstacle classification map, obtained in advance, of the obstacles to be identified on the horizontal plane in each of the first N frames, the preset weight of each of the first N frames, and the at least two first point cloud projection maps, the first reflectance information projection map and the first duty cycle projection map of each of the first N frames obtained by the parameter information acquisition module 11.
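The division of labour between the three modules can be pictured with the following minimal Python class skeleton. All class, method and parameter names here are invented for illustration under the assumption that a frame is represented as a dict and a projector callable builds the projection maps; none of them correspond to an API published with the patent.

    from typing import Callable, List, Sequence

    class ObstacleInfoAcquisitionModule:
        """Module 10: collects the obstacle information of N+1 consecutive frames."""
        def __init__(self, frame_source: Callable[[], dict]):
            self.frame_source = frame_source        # e.g. a lidar driver callback

        def acquire(self, n: int) -> List[dict]:
            return [self.frame_source() for _ in range(n + 1)]

    class ParameterInfoAcquisitionModule:
        """Module 11: turns each of the first N frames into its projection maps."""
        def __init__(self, projector: Callable[[dict], dict]):
            self.projector = projector              # builds point cloud / reflectance / duty cycle maps

        def acquire(self, first_n_frames: Sequence[dict]) -> List[dict]:
            return [self.projector(f) for f in first_n_frames]

    class PredictionModule:
        """Module 12: predicts the classification map of the (N+1)-th frame."""
        def __init__(self, classifier, weights: Sequence[float]):
            self.classifier, self.weights = classifier, weights

        def predict(self, classification_maps, projections):
            return self.classifier.predict(classification_maps, list(self.weights), projections)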
The obstacle recognition apparatus of this embodiment uses the above modules to identify the obstacles to be identified; its realization principle and technical effect are the same as those of the above related method embodiment, and reference may be made to the description of the above related method embodiment for details, which is not repeated here.
Fig. 3 is a structural diagram of Embodiment 2 of the obstacle recognition apparatus of the present invention. As shown in Fig. 3, the obstacle recognition apparatus of this embodiment further describes the technical solution of the present invention in more detail on the basis of the technical solution of the embodiment shown in Fig. 2.
As shown in Fig. 3, the obstacle recognition apparatus of this embodiment further includes an obstacle recognition module 13. The obstacle recognition module 13 is used to recognize the category of each obstacle to be identified in the point cloud of the (N+1)-th frame according to the first obstacle classification map of the (N+1)-th frame predicted by the prediction module 12 and the information of the obstacles to be identified in the (N+1)-th frame obtained by the obstacle information acquisition module 10.
Further optionally, in the obstacle recognition apparatus of this embodiment, the obstacle recognition module 13 is specifically used to:
identify, according to the first obstacle classification map of the (N+1)-th frame, the category of each obstacle to be identified in the point cloud of the obstacles to be identified in the (N+1)-th frame;
judge whether a same obstacle to be identified in the point cloud of the obstacles to be identified in the (N+1)-th frame is identified with two or more different categories, and if so, identify the category of the obstacle to be identified according to the numbers of points respectively corresponding to the two or more different categories in the point cloud of the obstacle to be identified.
Further optionally, in the obstacle recognition apparatus of this embodiment, the parameter information acquisition module 11 is specifically used to (see the sketch after this list):
according to the point cloud of the obstacles to be identified in each of the first N frames, obtain the point cloud layers of at least two heights parallel to the horizontal plane, and project the point cloud layers of the at least two heights onto the horizontal plane respectively, to obtain the at least two first point cloud projection maps corresponding to each frame;
according to the reflectance values of the points on the surfaces of the obstacles to be identified in each of the first N frames, mark the reflectance value of each point on the surfaces of the obstacles to be identified in the projection, onto the horizontal plane, of the point cloud of the obstacles to be identified in each frame, to obtain the first reflectance information projection map corresponding to each frame;
according to the point cloud of the obstacles to be identified in each of the first N frames, obtain the first duty cycle projection map, on the horizontal plane, of the point cloud of the obstacles to be identified in each frame.
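A minimal sketch of these three projections for a single frame is given below, assuming the horizontal plane is discretised into a square grid. The grid size, resolution, height boundaries, function name and the exact way the duty cycle is computed (here, the fraction of the frame's points falling into each cell) are illustrative assumptions rather than values fixed by the patent.

    import numpy as np

    def frame_projections(points, reflectance, height_bins, grid=400, res=0.2):
        """points:      (M, 3) xyz point cloud of the obstacles to be identified in one frame.
        reflectance: (M,) reflectance value of each point.
        height_bins: z boundaries of the point cloud layers, e.g. [0.0, 0.5, 1.5, 3.0].
        Returns (point cloud projection maps per height layer,
                 reflectance projection map, duty cycle projection map)."""
        ij = (points[:, :2] / res).astype(int) + grid // 2
        ok = (ij >= 0).all(axis=1) & (ij < grid).all(axis=1)
        ij, z, r = ij[ok], points[ok, 2], reflectance[ok]

        # At least two first point cloud projection maps: one occupancy image per height layer.
        layer_maps = []
        for lo, hi in zip(height_bins[:-1], height_bins[1:]):
            m = np.zeros((grid, grid), dtype=np.uint8)
            sel = (z >= lo) & (z < hi)
            m[ij[sel, 0], ij[sel, 1]] = 1
            layer_maps.append(m)

        # First reflectance information projection map: strongest reflectance per cell.
        refl_map = np.zeros((grid, grid), dtype=np.float32)
        np.maximum.at(refl_map, (ij[:, 0], ij[:, 1]), r)

        # First duty cycle projection map: share of the frame's points in each cell.
        duty_map = np.zeros((grid, grid), dtype=np.float32)
        np.add.at(duty_map, (ij[:, 0], ij[:, 1]), 1.0)
        duty_map /= max(len(points), 1)

        return layer_maps, refl_map, duty_map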
Further optionally, as shown in Fig. 3, the obstacle recognition apparatus of this embodiment further includes an obstacle classification acquisition module 14.
The obstacle classification acquisition module 14 is used to obtain the first obstacle classification map of the obstacles to be identified on the horizontal plane in each of the first N frames;
further, the obstacle classification acquisition module 14 is specifically used to (a sketch follows this list):
obtain the first obstacle classification map of the obstacles to be identified in the 1st frame on the horizontal plane from a static map;
predict the first obstacle classification map of the (i+1)-th frame according to the pre-trained classifier model, the first obstacle classification maps, obtained in advance, of the obstacles to be identified on the horizontal plane in each of the first i frames, and the at least two first point cloud projection maps, the first reflectance information projection map and the first duty cycle projection map of each of the first i frames, wherein i is an integer satisfying 1 ≤ i ≤ (N-1).
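This bootstrap — taking the 1st frame's classification map from the static map and then predicting the map of frame i+1 from frames 1..i — can be sketched as follows; static_map_classification and classifier.predict are placeholders standing in for the patent's operations under the assumptions stated in the comments, not real APIs.

    def build_first_n_classification_maps(static_map_classification, classifier,
                                          projections, n):
        """projections[i] holds the projection maps of frame i+1 (i = 0 .. n-1).
        Returns the first obstacle classification maps of frames 1 .. N."""
        maps = [static_map_classification()]      # frame 1 comes from the static map
        for i in range(1, n):                     # i = 1 .. N-1
            # predict the map of frame i+1 from the maps and projections of frames 1..i
            maps.append(classifier.predict(maps[:i], projections[:i]))
        return maps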
Correspondingly, the prediction module 12 is used to predict the first obstacle classification map of the (N+1)-th frame of the N+1 frames according to the pre-trained classifier model, the first obstacle classification map of the obstacles to be identified on the horizontal plane in each of the first N frames obtained in advance by the obstacle classification acquisition module 14, the preset weight of each of the first N frames, and the at least two first point cloud projection maps, the first reflectance information projection map and the first duty cycle projection map of each of the first N frames obtained by the parameter information acquisition module 11.
Further optionally, as shown in Fig. 3, the obstacle recognition apparatus of this embodiment further includes a weight setting module 15.
The weight setting module 15 is used to set a weight Wj for the j-th frame of the first N frames and a weight Wj+1 for the (j+1)-th frame, where Wj+1 > Wj and j is an integer satisfying 1 ≤ j ≤ N (both weighting options are sketched after this list); or
to set a weight Q for the 1st frame to the int(N/2)-th frame of the first N frames and a weight R for the int(N/2)-th frame to the N-th frame, where R is greater than Q.
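Both weighting options of the weight setting module 15 can be expressed in a few lines of Python; the normalisation and the concrete values of Q and R below are illustrative choices, not values fixed by the patent.

    def increasing_weights(n):
        """Option 1: W_{j+1} > W_j, e.g. weights proportional to the frame index."""
        w = [j for j in range(1, n + 1)]
        s = sum(w)
        return [x / s for x in w]          # later frames count more

    def two_level_weights(n, q=1.0, r=2.0):
        """Option 2: weight Q for frames 1..int(N/2), weight R (> Q) for the remaining frames."""
        half = n // 2                      # int(N/2)
        w = [q] * half + [r] * (n - half)
        s = sum(w)
        return [x / s for x in w]

    print(increasing_weights(4))   # [0.1, 0.2, 0.3, 0.4]
    print(two_level_weights(4))    # [0.1666..., 0.1666..., 0.3333..., 0.3333...]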
Correspondingly, the prediction module 12 is used to predict the first obstacle classification map of the (N+1)-th frame of the N+1 frames according to the pre-trained classifier model, the first obstacle classification map of the obstacles to be identified on the horizontal plane in each of the first N frames obtained in advance by the obstacle classification acquisition module 14, the weight of each of the first N frames preset by the weight setting module 15, and the at least two first point cloud projection maps, the first reflectance information projection map and the first duty cycle projection map of each of the first N frames obtained by the parameter information acquisition module 11.
Further optionally, as shown in Fig. 3, the obstacle recognition apparatus of this embodiment further includes an acquisition module 16 and a training module 17.
The acquisition module 16 is used to collect the information of the preset obstacles of known categories in multiple groups of N+1 consecutive frames and to generate the obstacle training set; the information of the preset obstacles in each frame includes the point cloud of the preset obstacles and the reflectance value of each point of the preset obstacles;
the training module 17 is used to train the classifier model according to the information of the preset obstacles of the multiple groups of N+1 consecutive frames in the obstacle training set collected by the acquisition module 16.
Further optionally, in the obstacle recognition apparatus of this embodiment, the training module 17 is specifically used to:
according to the information of the preset obstacles in each of the first N frames of the N+1 frames of each group in the obstacle training set, obtain, for each frame of each group, the second point cloud projection maps, on the horizontal plane, of the point cloud layers of at least two heights of the preset obstacles, the second reflectance information projection map of the preset obstacles on the horizontal plane, and the second duty cycle projection map of the preset obstacles on the horizontal plane;
train the classifier model, and thereby determine the classifier model, according to the second obstacle classification maps, obtained in advance, of the preset obstacles on the horizontal plane in each of the first N frames of each group, the preset weight of each of the first N frames of each group, the at least two second point cloud projection maps, the second reflectance information projection map and the second duty cycle projection map of each frame, and the known categories of the preset obstacles corresponding to each group.
Correspondingly, the prediction module 12 is used to predict the first obstacle classification map of the (N+1)-th frame of the N+1 frames according to the classifier model pre-trained by the training module 17, the first obstacle classification map of the obstacles to be identified on the horizontal plane in each of the first N frames obtained in advance by the obstacle classification acquisition module 14, the weight of each of the first N frames preset by the weight setting module 15, and the at least two first point cloud projection maps, the first reflectance information projection map and the first duty cycle projection map of each of the first N frames obtained by the parameter information acquisition module 11.
The obstacle recognition apparatus of this embodiment uses the above modules to identify the obstacles to be identified; its realization principle and technical effect are the same as those of the above related method embodiment, and reference may be made to the description of the above related method embodiment for details, which is not repeated here.
The present invention also provides a computer device, including a memory, a processor, and a computer program stored in the memory and executable on the processor; when the processor executes the program, the obstacle recognition method shown in the above embodiments is realized.
For example, Fig. 4 is a structural diagram of a computer device provided by the present invention. Fig. 4 shows a block diagram of an exemplary computer device 12a suitable for implementing embodiments of the present invention. The computer device 12a shown in Fig. 4 is only an example and should not bring any limitation to the functions and the scope of use of the embodiments of the present invention.
As shown in Fig. 4, the computer device 12a is embodied in the form of a general-purpose computing device. The components of the computer device 12a may include, but are not limited to: one or more processors 16a, a system memory 28a, and a bus 18a connecting different system components (including the system memory 28a and the processors 16a).
The bus 18a represents one or more of several classes of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, and a processor or local bus using any of a variety of bus structures. By way of example, these architectures include, but are not limited to, the Industry Standard Architecture (ISA) bus, the Micro Channel Architecture (MCA) bus, the Enhanced ISA bus, the Video Electronics Standards Association (VESA) local bus and the Peripheral Component Interconnect (PCI) bus.
The computer device 12a typically comprises a variety of computer-system-readable media. These media may be any usable media that can be accessed by the computer device 12a, including volatile and non-volatile media, and removable and non-removable media.
The system memory 28a may include computer-system-readable media in the form of volatile memory, for example a random access memory (RAM) 30a and/or a cache memory 32a. The computer device 12a may further include other removable/non-removable, volatile/non-volatile computer system storage media. By way of example only, the storage system 34a may be used to read and write non-removable, non-volatile magnetic media (not shown in Fig. 4, commonly referred to as a "hard disk drive"). Although not shown in Fig. 4, a disk drive for reading and writing removable non-volatile magnetic disks (such as "floppy disks") and an optical disk drive for reading and writing removable non-volatile optical disks (such as CD-ROM, DVD-ROM or other optical media) may be provided. In these cases, each drive may be connected to the bus 18a through one or more data media interfaces. The system memory 28a may include at least one program product having a group of (for example, at least one) program modules configured to perform the functions of each of the above embodiments of Figs. 1-3 of the present invention.
A program/utility 40a having a group of (at least one) program modules 42a may be stored, for example, in the system memory 28a. Such program modules 42a include, but are not limited to, an operating system, one or more application programs, other program modules and program data; each of these examples, or some combination thereof, may include an implementation of a network environment. The program modules 42a generally perform the functions and/or methods described in each of the above embodiments of Figs. 1-3 of the present invention.
The computer device 12a may also communicate with one or more external devices 14a (such as a keyboard, a pointing device, a display 24a, etc.), with one or more devices that enable a user to interact with the computer device 12a, and/or with any device (such as a network card, a modem, etc.) that enables the computer device 12a to communicate with one or more other computing devices. Such communication may be carried out through input/output (I/O) interfaces 22a. Moreover, the computer device 12a may also communicate with one or more networks (such as a local area network (LAN), a wide area network (WAN) and/or a public network, such as the Internet) through a network adapter 20a. As shown, the network adapter 20a communicates with the other modules of the computer device 12a through the bus 18a. It should be understood that, although not shown, other hardware and/or software modules may be used in combination with the computer device 12a, including but not limited to: microcode, device drivers, redundant processors, external disk drive arrays, RAID systems, tape drives, data backup storage systems and the like.
The processor 16a executes various functional applications and data processing by running programs stored in the system memory 28a, for example realizing the obstacle recognition method shown in the above embodiments.
The present invention also provides a computer-readable medium on which a computer program is stored; when the program is executed by a processor, the obstacle recognition method shown in the above embodiments is realized.
The computer-readable medium of this embodiment may include the RAM 30a, and/or the cache memory 32a, and/or the storage system 34a in the system memory 28a of the embodiment shown in Fig. 4.
With the development of technology, the propagation channel of a computer program is no longer limited to tangible media; the program may also be downloaded directly from a network or obtained in other manners. Therefore, the computer-readable medium of this embodiment may include not only tangible media but also intangible media.
The computer-readable medium of this embodiment may adopt any combination of one or more computer-readable media. A computer-readable medium may be a computer-readable signal medium or a computer-readable storage medium. A computer-readable storage medium may be, for example, but not limited to, an electric, magnetic, optical, electromagnetic, infrared or semiconductor system, apparatus or device, or any combination of the above. More specific examples (a non-exhaustive list) of the computer-readable storage medium include: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above. In this document, a computer-readable storage medium may be any tangible medium that contains or stores a program which can be used by, or in connection with, an instruction execution system, apparatus or device.
A computer-readable signal medium may include a data signal propagated in a baseband or as a part of a carrier wave, which carries computer-readable program code. Such a propagated data signal may take various forms, including, but not limited to, an electromagnetic signal, an optical signal, or any suitable combination of the above. A computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium, which can send, propagate or transmit a program for use by, or in connection with, an instruction execution system, apparatus or device.
The program code contained on a computer-readable medium may be transmitted by any appropriate medium, including, but not limited to, wireless, electric wire, optical cable, RF, etc., or any suitable combination of the above.
Computer program code for performing the operations of the present invention may be written in one or more programming languages or a combination thereof. The programming languages include object-oriented programming languages, such as Java, Smalltalk and C++, and also conventional procedural programming languages, such as the "C" language or similar programming languages. The program code may be executed entirely on the user's computer, partly on the user's computer, as an independent software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case involving a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
In the several embodiments provided by the present invention, it should be understood that the disclosed system, apparatus and method may be realized in other manners. For example, the apparatus embodiments described above are only schematic; for example, the division of the units is only a division of logical functions, and there may be other division manners in actual implementation.
The units described as separate components may or may not be physically separate, and components displayed as units may or may not be physical units, i.e. they may be located in one place or distributed over multiple network elements. Some or all of the units may be selected according to actual needs to achieve the purpose of the solutions of the embodiments.
In addition, the functional units in each embodiment of the present invention may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The above integrated unit may be realized either in the form of hardware or in the form of hardware plus a software functional unit.
The above integrated unit realized in the form of a software functional unit may be stored in a computer-readable storage medium. The above software functional unit is stored in a storage medium and includes a number of instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) or a processor to perform part of the steps of the methods described in the embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash disk, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk or an optical disk.
The above are only preferred embodiments of the present invention and are not intended to limit the present invention. Any modification, equivalent replacement and improvement made within the spirit and principle of the present invention shall be included in the scope of protection of the present invention.

Claims (18)

1. An obstacle recognition method, characterized in that the method comprises:
obtaining information of obstacles to be identified in N+1 consecutive frames scanned by a lidar around a current vehicle;
according to the information of the obstacles to be identified in each of the first N frames of the N+1 frames, obtaining at least two first point cloud projection maps, on a horizontal plane, of point cloud layers of at least two heights in each frame, a first reflectance information projection map of the obstacles to be identified on the horizontal plane in each frame, and a first duty cycle projection map of the obstacles to be identified on the horizontal plane in each frame;
predicting a first obstacle classification map of the (N+1)-th frame of the N+1 frames according to a pre-trained classifier model, a first obstacle classification map, obtained in advance, of the obstacles to be identified on the horizontal plane in each of the first N frames, a preset weight of each of the first N frames, and the at least two first point cloud projection maps, the first reflectance information projection map and the first duty cycle projection map of each of the first N frames.
2. The method according to claim 1, characterized in that after predicting the first obstacle classification map of the (N+1)-th frame of the N+1 frames according to the pre-trained classifier model, the first obstacle classification map, obtained in advance, of the obstacles to be identified on the horizontal plane in each of the first N frames, the preset weight of each of the first N frames, and the at least two first point cloud projection maps, the first reflectance information projection map and the first duty cycle projection map of each of the first N frames, the method further comprises:
recognizing a category of each obstacle to be identified in the point cloud of the (N+1)-th frame according to the first obstacle classification map of the (N+1)-th frame and the information of the obstacles to be identified in the (N+1)-th frame.
3. The method according to claim 2, characterized in that recognizing the category of each obstacle to be identified in the point cloud of the (N+1)-th frame according to the first obstacle classification map of the (N+1)-th frame and the information of the obstacles to be identified in the (N+1)-th frame specifically comprises:
identifying the category of each obstacle to be identified in the point cloud of the obstacles to be identified in the (N+1)-th frame according to the first obstacle classification map of the (N+1)-th frame;
judging whether a same obstacle to be identified in the point cloud of the obstacles to be identified in the (N+1)-th frame is identified with two or more different categories, and if so, identifying the category of the obstacle to be identified according to the numbers of points respectively corresponding to the two or more different categories in the point cloud of the obstacle to be identified.
4. The method according to claim 1, characterized in that, according to the information of the obstacles to be identified in each of the first N frames of the N+1 frames, obtaining the at least two first point cloud projection maps, on the horizontal plane, of the point cloud layers of at least two heights in each frame, the first reflectance information projection map of the obstacles to be identified on the horizontal plane in each frame, and the first duty cycle projection map of the obstacles to be identified on the horizontal plane in each frame specifically comprises:
according to the point cloud of the obstacles to be identified in each of the first N frames, obtaining the point cloud layers of at least two heights parallel to the horizontal plane, and projecting the point cloud layers of the at least two heights onto the horizontal plane respectively, to obtain the at least two first point cloud projection maps corresponding to each frame;
according to reflectance values of points on surfaces of the obstacles to be identified in each of the first N frames, marking the reflectance value of each point on the surfaces of the obstacles to be identified in the projection, onto the horizontal plane, of the point cloud of the obstacles to be identified in each frame, to obtain the first reflectance information projection map corresponding to each frame;
according to the point cloud of the obstacles to be identified in each of the first N frames, obtaining the first duty cycle projection map, on the horizontal plane, of the point cloud of the obstacles to be identified in each frame.
5. The method according to claim 1, characterized in that before predicting the first obstacle classification map of the (N+1)-th frame of the N+1 frames according to the pre-trained classifier model, the first obstacle classification map, obtained in advance, of the obstacles to be identified on the horizontal plane in each of the first N frames, the preset weight of each of the first N frames, and the at least two first point cloud projection maps, the first reflectance information projection map and the first duty cycle projection map of each of the first N frames, the method further comprises:
obtaining the first obstacle classification map of the obstacles to be identified on the horizontal plane in each of the first N frames;
further, obtaining the first obstacle classification map of the obstacles to be identified on the horizontal plane in each of the first N frames specifically comprises:
obtaining the first obstacle classification map of the obstacles to be identified in the 1st frame on the horizontal plane from a static map;
predicting the first obstacle classification map of the (i+1)-th frame according to the pre-trained classifier model, the first obstacle classification maps, obtained in advance, of the obstacles to be identified on the horizontal plane in each of the first i frames, and the at least two first point cloud projection maps, the first reflectance information projection map and the first duty cycle projection map of each of the first i frames, wherein i is an integer satisfying 1 ≤ i ≤ (N-1).
6. The method according to claim 1, characterized in that before predicting the first obstacle classification map of the (N+1)-th frame of the N+1 frames according to the pre-trained classifier model, the first obstacle classification map, obtained in advance, of the obstacles to be identified on the horizontal plane in each of the first N frames, the preset weight of each of the first N frames, and the at least two first point cloud projection maps, the first reflectance information projection map and the first duty cycle projection map of each of the first N frames, the method further comprises:
setting a weight Wj for the j-th frame of the first N frames and a weight Wj+1 for the (j+1)-th frame, wherein Wj+1 > Wj and j is an integer satisfying 1 ≤ j ≤ N; or
setting a weight Q for the 1st frame to the int(N/2)-th frame of the first N frames and a weight R for the int(N/2)-th frame to the N-th frame of the first N frames, wherein R is greater than Q.
7. The method according to any one of claims 1-6, characterized in that before predicting the first obstacle classification map of the (N+1)-th frame of the N+1 frames according to the pre-trained classifier model, the first obstacle classification map, obtained in advance, of the obstacles to be identified on the horizontal plane in each of the first N frames, the preset weight of each of the first N frames, and the at least two first point cloud projection maps, the first reflectance information projection map and the first duty cycle projection map of each of the first N frames, the method further comprises:
collecting information of preset obstacles of known categories in multiple groups of N+1 consecutive frames, and generating an obstacle training set, wherein the information of the preset obstacles in each frame includes a point cloud of the preset obstacles and a reflectance value of each point of the preset obstacles;
training the classifier model according to the information of the preset obstacles of the multiple groups of N+1 consecutive frames in the obstacle training set.
8. The method according to claim 7, characterized in that training the classifier model according to the information of the preset obstacles of the multiple groups of N+1 consecutive frames in the obstacle training set specifically comprises:
according to the information of the preset obstacles in each of the first N frames of the N+1 frames of each group in the obstacle training set, obtaining, for each frame of each group, second point cloud projection maps, on a horizontal plane, of point cloud layers of at least two heights of the preset obstacles, a second reflectance information projection map of the preset obstacles on the horizontal plane, and a second duty cycle projection map of the preset obstacles on the horizontal plane;
training the classifier model, and thereby determining the classifier model, according to second obstacle classification maps, obtained in advance, of the preset obstacles on the horizontal plane in each of the first N frames of each group, the preset weight of each of the first N frames of each group, the at least two second point cloud projection maps, the second reflectance information projection map and the second duty cycle projection map of each of the first N frames, and the known categories of the preset obstacles corresponding to each group.
9. An obstacle recognition apparatus, characterized in that the apparatus comprises:
an obstacle information acquisition module, configured to obtain information of obstacles to be identified in N+1 consecutive frames scanned by a lidar around a current vehicle;
a parameter information acquisition module, configured to obtain, according to the information of the obstacles to be identified in each of the first N frames of the N+1 frames, at least two first point cloud projection maps, on a horizontal plane, of point cloud layers of at least two heights in each frame, a first reflectance information projection map of the obstacles to be identified on the horizontal plane in each frame, and a first duty cycle projection map of the obstacles to be identified on the horizontal plane in each frame;
a prediction module, configured to predict a first obstacle classification map of the (N+1)-th frame of the N+1 frames according to a pre-trained classifier model, a first obstacle classification map, obtained in advance, of the obstacles to be identified on the horizontal plane in each of the first N frames, a preset weight of each of the first N frames, and the at least two first point cloud projection maps, the first reflectance information projection map and the first duty cycle projection map of each of the first N frames.
10. The apparatus according to claim 9, characterized in that the apparatus further comprises:
an obstacle recognition module, configured to recognize a category of each obstacle to be identified in the point cloud of the (N+1)-th frame according to the first obstacle classification map of the (N+1)-th frame and the information of the obstacles to be identified in the (N+1)-th frame.
11. The apparatus according to claim 10, characterized in that the obstacle recognition module is specifically configured to:
identify the category of each obstacle to be identified in the point cloud of the obstacles to be identified in the (N+1)-th frame according to the first obstacle classification map of the (N+1)-th frame;
judge whether a same obstacle to be identified in the point cloud of the obstacles to be identified in the (N+1)-th frame is identified with two or more different categories, and if so, identify the category of the obstacle to be identified according to the numbers of points respectively corresponding to the two or more different categories in the point cloud of the obstacle to be identified.
12. The apparatus according to claim 9, characterized in that the parameter information acquisition module is specifically configured to:
according to the point cloud of the obstacles to be identified in each of the first N frames, obtain the point cloud layers of at least two heights parallel to the horizontal plane, and project the point cloud layers of the at least two heights onto the horizontal plane respectively, to obtain the at least two first point cloud projection maps corresponding to each frame;
according to reflectance values of points on surfaces of the obstacles to be identified in each of the first N frames, mark the reflectance value of each point on the surfaces of the obstacles to be identified in the projection, onto the horizontal plane, of the point cloud of the obstacles to be identified in each frame, to obtain the first reflectance information projection map corresponding to each frame;
according to the point cloud of the obstacles to be identified in each of the first N frames, obtain the first duty cycle projection map, on the horizontal plane, of the point cloud of the obstacles to be identified in each frame.
13. The apparatus according to claim 9, characterized in that the apparatus further comprises:
an obstacle classification acquisition module, configured to obtain the first obstacle classification map of the obstacles to be identified on the horizontal plane in each of the first N frames;
further, the obstacle classification acquisition module is specifically configured to:
obtain the first obstacle classification map of the obstacles to be identified in the 1st frame on the horizontal plane from a static map;
predict the first obstacle classification map of the (i+1)-th frame according to the pre-trained classifier model, the first obstacle classification maps, obtained in advance, of the obstacles to be identified on the horizontal plane in each of the first i frames, and the at least two first point cloud projection maps, the first reflectance information projection map and the first duty cycle projection map of each of the first i frames, wherein i is an integer satisfying 1 ≤ i ≤ (N-1).
14. The apparatus according to claim 9, characterized in that the apparatus further comprises:
a weight setting module, configured to set a weight Wj for the j-th frame of the first N frames and a weight Wj+1 for the (j+1)-th frame, wherein Wj+1 > Wj and j is an integer satisfying 1 ≤ j ≤ N; or
to set a weight Q for the 1st frame to the int(N/2)-th frame of the first N frames and a weight R for the int(N/2)-th frame to the N-th frame of the first N frames, wherein R is greater than Q.
15. The apparatus according to any one of claims 9-14, characterized in that the apparatus further comprises:
an acquisition module, configured to collect information of preset obstacles of known categories in multiple groups of N+1 consecutive frames and generate an obstacle training set, wherein the information of the preset obstacles in each frame includes a point cloud of the preset obstacles and a reflectance value of each point of the preset obstacles;
a training module, configured to train the classifier model according to the information of the preset obstacles of the multiple groups of N+1 consecutive frames in the obstacle training set.
16. The apparatus according to claim 15, characterized in that the training module is specifically configured to:
according to the information of the preset obstacles in each of the first N frames of the N+1 frames of each group in the obstacle training set, obtain, for each frame of each group, second point cloud projection maps, on a horizontal plane, of point cloud layers of at least two heights of the preset obstacles, a second reflectance information projection map of the preset obstacles on the horizontal plane, and a second duty cycle projection map of the preset obstacles on the horizontal plane;
train the classifier model, and thereby determine the classifier model, according to second obstacle classification maps, obtained in advance, of the preset obstacles on the horizontal plane in each of the first N frames of each group, the preset weight of each of the first N frames of each group, the at least two second point cloud projection maps, the second reflectance information projection map and the second duty cycle projection map of each of the first N frames, and the known categories of the preset obstacles corresponding to each group.
17. A computer device, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, characterized in that when the processor executes the program, the method according to any one of claims 1-8 is implemented.
18. A computer-readable medium on which a computer program is stored, characterized in that when the program is executed by a processor, the method according to any one of claims 1-8 is implemented.
CN201710073031.3A 2017-02-10 2017-02-10 Obstacle identification method and device, computer equipment and readable medium Active CN106919908B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710073031.3A CN106919908B (en) 2017-02-10 2017-02-10 Obstacle identification method and device, computer equipment and readable medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710073031.3A CN106919908B (en) 2017-02-10 2017-02-10 Obstacle identification method and device, computer equipment and readable medium

Publications (2)

Publication Number Publication Date
CN106919908A true CN106919908A (en) 2017-07-04
CN106919908B CN106919908B (en) 2020-07-28

Family

ID=59453621

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710073031.3A Active CN106919908B (en) 2017-02-10 2017-02-10 Obstacle identification method and device, computer equipment and readable medium

Country Status (1)

Country Link
CN (1) CN106919908B (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103308923A (en) * 2012-03-15 2013-09-18 通用汽车环球科技运作有限责任公司 Method for registration of range images from multiple LiDARS
US8996228B1 (en) * 2012-09-05 2015-03-31 Google Inc. Construction zone object detection using light detection and ranging
US20160274589A1 (en) * 2012-09-26 2016-09-22 Google Inc. Wide-View LIDAR With Areas of Special Attention
CN106295586A (en) * 2016-08-16 2017-01-04 长春理工大学 Humanoid target identification method based on single line cloud data machine learning and device

Cited By (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11113546B2 (en) 2018-09-04 2021-09-07 Baidu Online Network Technology (Beijing) Co., Ltd. Lane line processing method and device
CN109255181B (en) * 2018-09-07 2019-12-24 百度在线网络技术(北京)有限公司 Obstacle distribution simulation method and device based on multiple models and terminal
US11307302B2 (en) 2018-09-07 2022-04-19 Baidu Online Network Technology (Beijing) Co., Ltd Method and device for estimating an absolute velocity of an obstacle, and non-volatile computer-readable storage medium
US11341297B2 (en) 2018-09-07 2022-05-24 Baidu Online Network Technology (Beijing) Co., Ltd. Obstacle distribution simulation method, device and terminal based on a probability graph
CN109145489A (en) * 2018-09-07 2019-01-04 百度在线网络技术(北京)有限公司 A kind of distribution of obstacles emulation mode, device and terminal based on probability graph
US10984588B2 (en) 2018-09-07 2021-04-20 Baidu Online Network Technology (Beijing) Co., Ltd Obstacle distribution simulation method and device based on multiple models, and storage medium
CN109255181A (en) * 2018-09-07 2019-01-22 百度在线网络技术(北京)有限公司 A kind of distribution of obstacles emulation mode, device and terminal based on multi-model
US11205289B2 (en) 2018-09-07 2021-12-21 Baidu Online Network Technology (Beijing) Co., Ltd. Method, device and terminal for data augmentation
US11047673B2 (en) 2018-09-11 2021-06-29 Baidu Online Network Technology (Beijing) Co., Ltd Method, device, apparatus and storage medium for detecting a height of an obstacle
US11519715B2 (en) 2018-09-11 2022-12-06 Baidu Online Network Technology (Beijing) Co., Ltd. Method, device, apparatus and storage medium for detecting a height of an obstacle
US11126875B2 (en) 2018-09-13 2021-09-21 Baidu Online Network Technology (Beijing) Co., Ltd. Method and device of multi-focal sensing of an obstacle and non-volatile computer-readable storage medium
CN109513630A (en) * 2018-11-14 2019-03-26 深圳蓝胖子机器人有限公司 Packages system and its control method, storage medium
CN109513629A (en) * 2018-11-14 2019-03-26 深圳蓝胖子机器人有限公司 Packages method, apparatus and computer readable storage medium
US11780463B2 (en) 2019-02-19 2023-10-10 Baidu Online Network Technology (Beijing) Co., Ltd. Method, apparatus and server for real-time learning of travelling strategy of driverless vehicle
US11718318B2 (en) 2019-02-22 2023-08-08 Apollo Intelligent Driving (Beijing) Technology Co., Ltd. Method and apparatus for planning speed of autonomous vehicle, and storage medium
CN113110451A (en) * 2021-04-14 2021-07-13 浙江工业大学 Mobile robot obstacle avoidance method with depth camera and single line laser radar fused

Also Published As

Publication number Publication date
CN106919908B (en) 2020-07-28

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant