CN105301573A - Laser radar image fragment identification method and device - Google Patents


Info

Publication number
CN105301573A
Authority
CN
China
Prior art keywords
standard
fragment
training sample
weight
identification
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201510777540.5A
Other languages
Chinese (zh)
Other versions
CN105301573B (en)
Inventor
潘晨劲
赵江宜
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Foochow Hua Ying Heavy Industry Machinery Co Ltd
Original Assignee
Foochow Hua Ying Heavy Industry Machinery Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Foochow Hua Ying Heavy Industry Machinery Co Ltd
Priority to CN201510777540.5A
Publication of CN105301573A
Application granted
Publication of CN105301573B
Legal status: Active
Anticipated expiration

Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S7/00Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
    • G01S7/48Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S17/00
    • G01S7/4802Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S17/00 using analysis of echo signal for target characterisation; Target signature; Target cross-section

Abstract

The inventors provide a laser radar image fragment identification method and device, relating to the field of wireless identification and in particular to laser radar image fragment identification. They address the problem that, when identifying the fragments collected by a laser radar, the object-recognition success rate is very low because under-segmentation and over-segmentation are unavoidable. The method comprises the steps of: S101, establishing a training sample set in which every fragment has a clear, known class; S102, describing the features of each fragment with a group of numerical values; S103, extracting the required standards from the training sample set through a machine-learning process, the extracted standards including standards that use motion parameters as features, and combining the extracted standards to jointly form an overall judgment standard H; and S104, applying the overall standard H to fragment identification.

Description

Laser radar image fragment identification method and device
Technical field
The present invention relates to the field of wireless identification, and in particular to the identification of laser radar image fragments.
Background art
Compared to optical receptors, laser radar has its own unique signal-processing flow. Because of its high accuracy and high sampling rate, laser radar produces a comparatively large volume of data per unit time. The raw data collected by a laser radar is normally structured as a point cloud, in which each point represents one reflection of the laser. After the time differences between the devices are processed, the point cloud reflects, to the greatest possible extent, the distances between the unmanned vehicle and the surrounding environment at the moments around each measurement.
Each fragment collected by the laser radar reflects an object that is relatively continuous in space and time, and several methods already exist for cutting the data into fragments automatically.
In the course of realizing the present invention, the inventors found that when identifying the fragments collected by a laser radar, the success rate of object recognition is very low because under-segmentation and over-segmentation remain unavoidable.
Summary of the invention
The following presents a simplified summary of one or more aspects in order to provide a basic understanding of such aspects. This summary is not an extensive overview of all contemplated aspects, and is intended neither to identify key or critical elements of all aspects nor to delineate the scope of any or all aspects. Its sole purpose is to present some concepts of one or more aspects in simplified form as a prelude to the more detailed description that is presented later.
The invention provides an autonomous learning method for identifying laser radar fragments, solving the problem of the low recognition rate caused by under-segmentation and over-segmentation.
To achieve the above object, the inventors provide a laser radar image fragment identification method comprising the steps of:
S101: establish a training sample set in which every fragment has a clear, known class;
S102: describe the features of each fragment with a group of numerical values;
S103: extract the required standards from the training sample set through a machine-learning process, and combine the extracted standards to jointly form an overall judgment standard H;
S104: apply the overall standard H to fragment identification.
Further, each standard is represented as:
h_k(z, c) = a_k · 1(‖f_k(z) − x_k‖ ≤ θ_k)
where 1(·) denotes the indicator function; k is the number of standard a_k within the overall standard H; f_k(z) selects some of the descriptors of the descriptor vector z, a part of them, or all of them; x_k is the center of the standard numbered k, which has θ_k as its radius; and c denotes the class of the fragment.
Further, extracting the required standards from the training sample set comprises:
S301: in the first iteration, give every segment the same weight and, taking this as the working hypothesis, obtain the first standard by a decision-tree or regression method;
S302: using this standard, recompute the weight of every segment, giving a small weight to the points that were classified successfully and increasing the weight of the others; then create a new standard under the new hypothesis. In this way, the weights of segments that have not yet been classified successfully grow gradually until they are classified successfully;
S303: repeat the previous step until no effective standard can be created or the maximum number of iterations is reached.
Further, the step of applying the standards to fragment identification is:
a concrete fragment m has duration t, within which there are T samplings, numbered 1 to T respectively; compute:
minimize_{α,β,γ} Σ_m log(1 + exp(−y_m · H_A(w_m, z_{1:T}^m)))
where α, β and γ are undetermined coefficients; this is a convex problem in α, β and γ and is solved directly; the resulting H_A is the final classification result for the concrete fragment m;
w_m denotes the weight of the concrete fragment m, y_m is the class label given in the training sample set, and z_{1:T}^m denotes the graphic features in the T samplings of sample m.
Unlike the prior art, the above technical scheme also feeds the motion parameters of a segment into the machine-learning process as auxiliary parameters. That is, any external parameter thought to be related to the classification result, even when the relation is not obvious, can be added to the descriptor vector used for machine learning, and the program automatically judges whether it is useful; without this improvement, some useful standards might be overlooked. Because the standards extracted from the training sample set by machine learning include standards that use motion parameters as features, fragments can be identified more effectively, the recognition rate of fragments is improved, and the problem of the very low object-recognition success rate caused by unavoidable under-segmentation and over-segmentation is solved.
To the accomplishment of the foregoing and related ends, the one or more aspects comprise the features hereinafter fully described and particularly pointed out in the claims. The following description and the annexed drawings set forth in detail certain illustrative features of the one or more aspects. These features are indicative, however, of but a few of the various ways in which the principles of the various aspects may be employed, and this description is intended to include all such aspects and their equivalents.
Accompanying drawing explanation
The disclosed aspects are described below with reference to the accompanying drawings, which are provided to illustrate and not to limit the disclosed aspects, and in which like labels denote like elements:
Fig. 1 is a schematic diagram of the steps of the present embodiment;
Fig. 2 is a schematic diagram of a fragment and its result as described in the embodiment.
Embodiment
The technical contents, structural features, objects and effects of the technical scheme are explained in detail below in conjunction with specific embodiments and the accompanying drawings. In the following description, numerous details are set forth for explanatory purposes to provide a thorough understanding of one or more aspects; it will be evident, however, that such aspects can be practiced without these details.
It should be noted that the fragments described in this application generally refer to laser radar image fragments, each of which reflects an object that is relatively continuous in space and time. The inventors provide a laser radar image fragment identification method which, as shown in Fig. 1, can be divided into a few steps:
First, a training sample set must be established. Every fragment in the training sample set has a clear, known class; the method described here specifies three classes: pedestrian, bicycle and automobile. The fragments come either from a training sample set established by predecessors, or from the data to be processed, part of which is first labelled by hand. The fragments are generally unrestricted in time and size, which gives the approach comparatively good versatility and compatibility.
Secondly, each fragment must be abstracted, that is, its features must be described with a group of numerical values. These can be features of the fragment itself, for example gradients and derivatives of gradients, or the motion state of the object, such as its speed and angular velocity. Abstraction is a choice and an approximation. First, one selects a few numerical values in the belief that they reflect, or largely reflect, the essence of the object represented by the fragment; this hypothesis is not always obvious. Secondly, the designer of an unmanned vehicle only cares about the rough character of these objects, for example whether one is a person or a car, and need not examine extraneous details such as the licence plate or the colour. Fortunately, in the problem of an unmanned vehicle identifying pedestrians and vehicles, this selection is reasonable.
Finally, the required standards are extracted from the training sample set by a machine-learning process and combined to jointly form a judgment criterion, which is then used to process the data. Typically, no method can guarantee a high success rate in advance: if the training sample set cannot provide information relevant to the data to be processed, the probability that machine learning succeeds necessarily drops. The training sample set, the learning period and the learning method must all be chosen carefully; for reasons of space they are not described further here.
Establishing the descriptors: first a descriptor space is established. The segments that the laser radar signal is partitioned into are continuous in both time and space, so the motion features of the object represented by a segment can be extracted, including but not limited to: maximum velocity, average velocity, maximum acceleration, average acceleration and maximum angular rate. Graphic features can be extracted in addition; many documents mention spin images and oriented gradients, which are not described further here. Note that all features need to be aligned: for example, the z-axis can be chosen as the central axis and all descriptor figures rotated into a common orientation, so that the recognition result is not affected by differences in orientation.
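The motion-feature extraction just described can be sketched as follows. This is a minimal illustration, assuming each segment is tracked as a sequence of 2-D centroid positions sampled at a fixed interval; the function name and the exact feature layout are our assumptions, not taken from the patent:

```python
import numpy as np

def motion_descriptors(centroids, dt):
    """Motion features for one lidar segment, computed from its per-frame
    centroid positions. `centroids` is an (N, 2) array of x-y positions at
    successive sampling instants (N >= 3); `dt` is the sampling interval."""
    v = np.diff(centroids, axis=0) / dt           # per-step velocity vectors
    speed = np.linalg.norm(v, axis=1)
    a = np.diff(v, axis=0) / dt                   # per-step acceleration vectors
    accel = np.linalg.norm(a, axis=1)
    heading = np.arctan2(v[:, 1], v[:, 0])
    ang_rate = np.abs(np.diff(heading)) / dt      # crude angular-rate estimate
    return np.array([
        speed.max(), speed.mean(),                # maximum / average velocity
        accel.max(), accel.mean(),                # maximum / average acceleration
        ang_rate.max() if ang_rate.size else 0.0  # maximum angular rate
    ])
```

For a segment moving in a straight line at constant speed, the acceleration and angular-rate entries come out as zero, as expected.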
A training sample set is then built on the established descriptor space. In this instance, each signal in the training sample set is a descriptor plus the corresponding classification result. Typically the training sample set is a subspace of the signals to be processed rather than an arbitrary choice; this ensures, to the greatest extent, that the conditions under which the signals are generated stay constant, so that the differences between signals arise mainly from the observed object and the background environment.
An important parameter of the training sample set is the learning ratio, that is, what percentage of the total signal volume is used as the training sample set. In practice it is preferable to deliberately include ambiguous signal segments, i.e. segments that are difficult even for a human to recognize, for example because they are very noisy or over-segmented, while keeping a suitable mixture.
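A minimal sketch of assembling such a training subset, assuming each segment record carries a hand-assigned `ambiguous` flag; the flag name, the dict layout and the default ratio are illustrative assumptions:

```python
import random

def build_training_set(segments, ratio=0.2, seed=0):
    """Pick `ratio` of all segments as the training sample set, but
    force-include the segments flagged as ambiguous (noisy, over-segmented),
    keeping them mixed with ordinary ones."""
    rng = random.Random(seed)                  # fixed seed for reproducibility
    ambiguous = [s for s in segments if s.get("ambiguous")]
    ordinary = [s for s in segments if not s.get("ambiguous")]
    n_total = max(1, int(ratio * len(segments)))
    n_ordinary = max(0, n_total - len(ambiguous))
    return ambiguous + rng.sample(ordinary, min(n_ordinary, len(ordinary)))
```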
In addition, in this instance we also feed the motion parameters of each segment into the machine-learning process as auxiliary parameters. If, during design, any external parameter is thought to be related to the classification result, it can be added to the descriptor vector for machine learning even when the relation is not obvious; the program automatically judges whether it is useful. Without this improvement, some useful standards might be overlooked.
In the present embodiment the overall standard (i.e. the overall standard for judging the class of a training sample) is denoted H. The true form of H may be very complicated, but it is assumed that it can be written as a sum of comparatively simple functions:
H(z, c) = Σ_k h_k(z, c)
where c denotes the class, k is the number of a standard (the number of standard a_k within the overall standard H), and z denotes the descriptors. The following form is chosen to express a standard in this embodiment, because functions decomposed in this way can be spread over the whole function space and can approximate any measurable function arbitrarily closely:
h_k(z, c) = a_k · 1(‖f_k(z) − x_k‖ ≤ θ_k)
Here 1(·) denotes the indicator function; f_k(z) selects some of the descriptors of z, a part of them, or even all of them; x_k is the center of the standard numbered k, which has θ_k as its radius. For every z falling inside this ball, H gains the contribution a_k; otherwise it is unchanged.
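The overall standard H(z, c) = Σ_k h_k(z, c) built from such ball-shaped standards can be evaluated as in the following sketch; the class names and field names are our assumptions, since the text gives only the mathematical form:

```python
import numpy as np

class RadiusStandard:
    """One standard a_k: it contributes a_k to class c whenever the selected
    descriptors f_k(z) fall within radius theta of the center x_k, else 0."""
    def __init__(self, dims, center, theta, c, a):
        self.dims = dims                          # indices f_k selects from z
        self.center = np.asarray(center, float)   # center x_k
        self.theta = theta                        # radius theta_k
        self.c = c                                # class this standard votes for
        self.a = a                                # contribution a_k

    def h(self, z, c):
        if c != self.c:
            return 0.0
        dist = np.linalg.norm(np.asarray(z, float)[self.dims] - self.center)
        return self.a if dist <= self.theta else 0.0

def overall_H(standards, z, classes):
    """H(z, c) = sum_k h_k(z, c); returns the best class and all scores."""
    scores = {c: sum(s.h(z, c) for s in standards) for c in classes}
    return max(scores, key=scores.get), scores
```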
The standards can be produced by an iterative algorithm; the iteration in the present embodiment introduces the concept of a weight w. In the first iteration, every segment has the same weight, and taking this as the working hypothesis we create the first standard by a decision-tree or regression method. Then, using this standard, the weight of every segment is recomputed: points that were classified successfully are given a small weight, while the weights of the others are increased. A new standard is then created under the new hypothesis. In this way, the weights of segments that have not yet been classified successfully grow gradually until they are classified successfully. The previous step is repeated until no effective standard can be created or the maximum number of iterations is reached. This iterative method is considered effective at avoiding over-learning, although its actual effect must be analysed case by case; and because the weights must be recomputed in every iteration, the algorithm is rather time-consuming.
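The reweighting iteration described above is essentially AdaBoost-style boosting. The sketch below illustrates it for two classes with labels ±1, with one-feature threshold stumps standing in for the decision-tree/regression step; these simplifications, and all names, are our assumptions:

```python
import numpy as np

def boost_standards(X, y, n_rounds=10):
    """AdaBoost-style sketch of S301-S303. X is (n, d) features, y holds
    +/-1 labels. Start with equal weights, fit a simple rule, then shrink
    the weights of correctly classified segments and grow the others."""
    n = len(y)
    w = np.full(n, 1.0 / n)            # S301: equal initial weights
    rules = []
    for _ in range(n_rounds):
        best = None
        for j in range(X.shape[1]):    # search threshold stumps per feature
            for t in X[:, j]:
                for sign in (1, -1):
                    pred = np.where(sign * (X[:, j] - t) > 0, 1, -1)
                    err = w[pred != y].sum()
                    if best is None or err < best[0]:
                        best = (err, j, t, sign, pred)
        err, j, t, sign, pred = best
        if err >= 0.5:                 # S303: no effective standard left
            break
        alpha = 0.5 * np.log((1 - err) / max(err, 1e-12))
        rules.append((alpha, j, t, sign))
        w *= np.exp(-alpha * y * pred) # S302: reweight every segment
        w /= w.sum()
    return rules

def predict(rules, x):
    s = sum(a * (1 if sign * (x[j] - t) > 0 else -1) for a, j, t, sign in rules)
    return 1 if s > 0 else -1
```

Each `(alpha, j, t, sign)` tuple plays the role of one extracted standard, and the weighted vote plays the role of the overall standard H.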
A preliminary overall standard H is established by the above method. For a concrete fragment m to be identified, H_A is obtained by combining a motion term, a graphic term and a prior with undetermined coefficients α, β and γ: the subscript H denotes the motion descriptors and the subscript S the graphic descriptors; note that the graphic features must be averaged over time; L_0 denotes the prior. We therefore compute:
minimize_{α,β,γ} Σ_m log(1 + exp(−y_m · H_A(w_m, z_{1:T}^m)))
This is a convex problem in α, β and γ and is solved directly. The resulting H_A is the final classification result; H_A can be complicated and may cover several classes.
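The convex problem above can be solved by plain gradient descent on the logistic loss, as in the following sketch. We assume here that H_A reduces to a linear combination of three precomputed per-fragment scores (a motion term, a time-averaged graphic term and the prior L_0), which the text implies but does not spell out; the names and step sizes are ours:

```python
import numpy as np

def fit_coefficients(F, y, lr=0.1, steps=2000):
    """Gradient descent on sum_m log(1 + exp(-y_m * H_A_m)), where H_A_m is
    assumed to be the linear combination F[m] @ [alpha, beta, gamma] of three
    per-fragment scores. F is (M, 3); y holds +/-1 labels."""
    coef = np.zeros(F.shape[1])
    for _ in range(steps):
        margin = y * (F @ coef)
        # gradient of log(1 + exp(-margin)) w.r.t. coef is -y*F * sigmoid(-margin)
        g = -(F * (y * (1.0 / (1.0 + np.exp(margin))))[:, None]).sum(axis=0)
        coef -= lr * g
    return coef  # [alpha, beta, gamma]
```

Because the loss is convex in (α, β, γ), this simple first-order scheme converges to the same solution as any direct solver would.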
As shown in Fig. 2, the upper part of the figure shows the schematic image of a fragment after averaging over time; the calculation shows that the object in this fragment is finally classified as a bicycle.
The inventors also provide a laser radar image fragment identification device comprising a fragment acquisition module, a standard storage module and a fragment computation module.
The fragment acquisition module is used to obtain laser radar image fragments.
The standard storage module is used to store the extracted standards and the overall criterion H.
The fragment computation module is used to extract the required standards from the training sample set through a machine-learning process, to combine the extracted standards to jointly form an overall judgment standard H, and to apply the overall standard H to fragment identification.
The standards in the standard storage module are represented by the following formula:
h_k(z, c) = a_k · 1(‖f_k(z) − x_k‖ ≤ θ_k)
where 1(·) denotes the indicator function; k is the number of standard a_k within the overall standard H; f_k(z) selects some of the descriptors of the descriptor vector z, a part of them, or all of them; x_k is the center of the standard numbered k, which has θ_k as its radius; and c denotes the class of the fragment.
The statement "the fragment computation module is used to extract the required standards from the training sample set through a machine-learning process" specifically means that the fragment computation module is configured to perform the following steps:
S301: in the first iteration, give every segment the same weight and, taking this as the working hypothesis, obtain the first standard by a decision-tree or regression method;
S302: using this standard, recompute the weight of every segment, giving a small weight to the points that were classified successfully and increasing the weight of the others; then create a new standard under the new hypothesis. In this way, the weights of segments that have not yet been classified successfully grow gradually until they are classified successfully;
S303: repeat the previous step until no effective standard can be created or the maximum number of iterations is reached.
The statement "the fragment computation module is used to apply the standards to fragment identification" specifically means that the fragment computation module is configured to perform the following steps:
a concrete fragment m has duration t, within which there are T samplings, numbered 1 to T respectively; compute:
minimize_{α,β,γ} Σ_m log(1 + exp(−y_m · H_A(w_m, z_{1:T}^m)))
where α, β and γ are undetermined coefficients; this is a convex problem in α, β and γ and is solved directly; the resulting H_A is the final classification result for the concrete fragment m;
w_m denotes the weight of the concrete fragment m, y_m is the class label given in the training sample set, and z_{1:T}^m denotes the graphic features in the T samplings of sample m.
It should be noted that, in this document, relational terms such as first and second are used only to distinguish one entity or operation from another, and do not necessarily require or imply any such actual relation or order between those entities or operations. Moreover, the terms "comprise", "include" or any other variant thereof are intended to cover non-exclusive inclusion, so that a process, method, article or terminal device comprising a series of elements includes not only those elements but also other elements not expressly listed, or elements inherent to such a process, method, article or terminal device. In the absence of further restrictions, an element qualified by the statement "comprising a ..." does not exclude the presence of additional identical elements in the process, method, article or terminal device that comprises it. In addition, in this document "greater than", "less than", "exceeding" and the like are understood to exclude the stated number, while "above", "below", "within" and the like are understood to include it.
Those skilled in the art should understand that the embodiments described above may be provided as a method, a device or a computer program product, and may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware. All or part of the steps in the methods of the embodiments above may be carried out by hardware instructed by a program, and the program may be stored in a storage medium readable by a computer device for performing all or part of the steps described in the methods above. The computer device includes, but is not limited to: personal computers, servers, general-purpose computers, special-purpose computers, network devices, embedded devices, programmable devices, intelligent mobile terminals, smart home devices, wearable smart devices, in-vehicle smart devices and so on; the storage medium includes, but is not limited to: RAM, ROM, magnetic disks, magnetic tape, optical disks, flash memory, USB drives, portable hard drives, memory cards, memory sticks, network server storage, network cloud storage and so on.
The embodiments above are described with reference to flowcharts and/or block diagrams of methods, apparatus (systems) and computer program products according to the embodiments. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations of flows and/or blocks therein, can be realized by computer program instructions. These computer program instructions can be provided to the processor of a computer device to produce a machine, so that the instructions executed by the processor of the computer device produce means for realizing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be stored in a memory readable by a computer device and capable of directing the computer device to work in a specific way, so that the instructions stored in that memory produce an article of manufacture comprising instruction means that realize the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be loaded onto a computer device, so that a sequence of operational steps is performed on the device to produce a computer-implemented process, whereby the instructions executed on the device provide steps for realizing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
Although the embodiments above have been described, those skilled in the art can make other changes and amendments to these embodiments once they have learned the basic inventive concept. The foregoing therefore covers only embodiments of the invention and does not thereby limit the scope of patent protection of the invention; every equivalent structure or equivalent process transformation made using the contents of the specification and drawings of the invention, or any direct or indirect use in other related technical fields, is likewise included within the scope of patent protection of the invention.

Claims (8)

1. A laser radar image fragment identification method, characterized in that it comprises the steps of:
S101: establishing a training sample set in which every fragment has a clear, known class;
S102: describing the features of each fragment with a group of numerical values;
S103: extracting the required standards from the training sample set through a machine-learning process, the extracted standards including standards that use motion parameters as features, and combining the extracted standards to jointly form an overall judgment standard H;
S104: applying the overall standard H to fragment identification.
2. The laser radar image fragment identification method according to claim 1, characterized in that each standard is represented as:
h_k(z, c) = a_k · 1(‖f_k(z) − x_k‖ ≤ θ_k)
where 1(·) denotes the indicator function; k is the number of standard a_k within the overall standard H; f_k(z) selects some of the descriptors of the descriptor vector z, a part of them, or all of them; x_k is the center of the standard numbered k, which has θ_k as its radius; and c denotes the class of the fragment.
3. The laser radar image fragment identification method according to claim 1, characterized in that extracting the required standards from the training sample set comprises the steps of:
S301: in the first iteration, giving every segment the same weight and, taking this as the working hypothesis, obtaining the first standard by a decision-tree or regression method;
S302: using this standard, recomputing the weight of every segment, giving a small weight to the points that were classified successfully and increasing the weight of the others, then creating a new standard under the new hypothesis, so that the weights of segments that have not yet been classified successfully grow gradually until they are classified successfully;
S303: repeating the previous step until no effective standard can be created or the maximum number of iterations is reached.
4. The laser radar image fragment identification method according to claim 1, characterized in that the step of applying the standards to fragment identification is:
a concrete fragment m has duration t, within which there are T samplings, numbered 1 to T respectively; compute:
minimize_{α,β,γ} Σ_m log(1 + exp(−y_m · H_A(w_m, z_{1:T}^m)))
where α, β and γ are undetermined coefficients; this is a convex problem in α, β and γ and is solved directly; the resulting H_A is the final classification result for the concrete fragment m;
w_m denotes the weight of the concrete fragment m, y_m is the class label given in the training sample set, and z_{1:T}^m denotes the graphic features in the T samplings of sample m.
5. A laser radar image fragment identification device, characterized in that it comprises a fragment acquisition module, a standard storage module and a fragment computation module;
the fragment acquisition module is used to obtain laser radar image fragments;
the standard storage module is used to store the extracted standards and the overall criterion H;
the fragment computation module is used to extract the required standards from the training sample set through a machine-learning process, to combine the extracted standards to jointly form an overall judgment standard H, and to apply the overall standard H to fragment identification.
6. The laser radar image fragment identification device according to claim 5, characterized in that the standards in the standard storage module are represented by the following formula:
h_k(z, c) = a_k · 1(‖f_k(z) − x_k‖ ≤ θ_k)
where 1(·) denotes the indicator function; k is the number of standard a_k within the overall standard H; f_k(z) selects some of the descriptors of the descriptor vector z, a part of them, or all of them; x_k is the center of the standard numbered k, which has θ_k as its radius; and c denotes the class of the fragment.
7. The laser radar image fragment identification device according to claim 5, characterized in that "the fragment computation module is used to extract the required standards from the training sample set through a machine-learning process" specifically means that the fragment computation module is configured to perform the following steps:
S301: in the first iteration, give every segment the same weight and, taking this as the working hypothesis, obtain the first standard by a decision-tree or regression method;
S302: using this standard, recompute the weight of every segment, giving a small weight to the points that were classified successfully and increasing the weight of the others, then create a new standard under the new hypothesis, so that the weights of segments that have not yet been classified successfully grow gradually until they are classified successfully;
S303: repeat the previous step until no effective standard can be created or the maximum number of iterations is reached.
8. The laser radar image fragment identification device according to claim 5, characterized in that "the fragment computation module is used to apply the standards to fragment identification" specifically means that the fragment computation module is configured to perform the following steps:
a concrete fragment m has duration t, within which there are T samplings, numbered 1 to T respectively; compute:
minimize_{α,β,γ} Σ_m log(1 + exp(−y_m · H_A(w_m, z_{1:T}^m)))
where α, β and γ are undetermined coefficients; this is a convex problem in α, β and γ and is solved directly; the resulting H_A is the final classification result for the concrete fragment m;
w_m denotes the weight of the concrete fragment m, y_m is the class label given in the training sample set, and z_{1:T}^m denotes the graphic features in the T samplings of sample m.
CN201510777540.5A 2015-11-12 2015-11-12 Laser radar image fragment identification method and device Active CN105301573B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510777540.5A CN105301573B (en) 2015-11-12 2015-11-12 Laser radar image fragment identification method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201510777540.5A CN105301573B (en) 2015-11-12 2015-11-12 Laser radar image fragment identification method and device

Publications (2)

Publication Number Publication Date
CN105301573A true CN105301573A (en) 2016-02-03
CN105301573B CN105301573B (en) 2016-08-17

Family

ID=55199053

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510777540.5A Active CN105301573B (en) 2015-11-12 2015-11-12 Laser radar image fragment identification method and device

Country Status (1)

Country Link
CN (1) CN105301573B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109901588A (en) * 2019-03-27 2019-06-18 广州高新兴机器人有限公司 A kind of charging unit and automatic recharging method that patrol robot uses


Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
吴军 (Wu Jun) et al.: "Research on multi-class classification of airborne LiDAR data with weighted SVM learning", Journal of Wuhan University *
王寒凝 (Wang Hanning): "Vehicle type recognition and classification system based on laser ranging", China Master's Theses Full-text Database, Information Science and Technology *
程健 (Cheng Jian): "Real-time object detection based on 3D lidar", China Master's Theses Full-text Database, Information Science and Technology *
胡顺玺 (Hu Shunxi): "Research on pedestrian detection in front of vehicles based on a 3D laser scanner", China Master's Theses Full-text Database, Information Science and Technology *

Similar Documents

Publication Publication Date Title
Cai et al. YOLOv4-5D: An effective and efficient object detector for autonomous driving
Chen et al. 3d object proposals using stereo imagery for accurate object class detection
CN109145680B (en) Method, device and equipment for acquiring obstacle information and computer storage medium
Chen et al. Object-level motion detection from moving cameras
CN107316048A (en) Point cloud classifications method and device
Gong et al. Object detection based on improved YOLOv3-tiny
CN106250838A (en) vehicle identification method and system
US20240017747A1 (en) Method and system for augmenting lidar data
CN111177887A (en) Method and device for constructing simulation track data based on real driving scene
Wang et al. Voxel-RCNN-complex: An effective 3-D point cloud object detector for complex traffic conditions
Dabbiru et al. Lidar data segmentation in off-road environment using convolutional neural networks (cnn)
CN108399424A (en) A kind of point cloud classifications method, intelligent terminal and storage medium
Wang et al. Detection method of obstacles in the dangerous area of electric locomotive driving based on MSE-YOLOv4-Tiny
CN105301573A (en) Laser radar image fragment identification method and device
Feng et al. Embedded YOLO: A real-time object detector for small intelligent trajectory cars
Murhij et al. Real-time 3d object detection using feature map flow
CN113408651B (en) Unsupervised three-dimensional object classification method based on local discriminant enhancement
Nguyen et al. Robust vehicle detection under adverse weather conditions using auto-encoder feature
Chan et al. Rotating object detection in remote-sensing environment
CN114155524A (en) Single-stage 3D point cloud target detection method and device, computer equipment and medium
CN113344121A (en) Method for training signboard classification model and signboard classification
Jeon et al. High-speed car detection using resnet-based recurrent rolling convolution
Vyas et al. Advances in approach for Object Detection and classification
Ying et al. A Data Annotation and Recognition Method Based on Zero Statistical Hypothesis Test and Multi Variable Binary Classification Theory
Zhao et al. Target detection and recognition method based on embedded vision

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant