CN110502979B - Laser radar waveform signal classification method based on decision tree - Google Patents


Info

Publication number: CN110502979B (granted); earlier publication CN110502979A
Application number: CN201910622645.1A
Authority: CN (China)
Original language: Chinese (zh)
Inventors: 董志伟, 闫勇吉, 徐涛, 陈德应, 樊荣伟
Assignee: Harbin Institute of Technology
Legal status: Active

Classifications

    • G06F18/24323: Tree-organised classifiers (pattern recognition; classification techniques)
    • G06V20/13: Satellite images (scenes; terrestrial scenes)
    • G06F2218/12: Classification; Matching (pattern recognition adapted for signal processing)
    • Y02A90/10: Information and communication technologies [ICT] supporting adaptation to climate change


Abstract

The present disclosure provides a decision-tree-based laser radar waveform signal classification method, which includes: acquiring fringe pattern data by waveform-sampling the radar echo signals, and extracting characteristic values from the fringe pattern data; coding the terrain and ground feature types as four classes, 1, 2, 3 and 4, representing plains, hills, buildings and trees respectively; calculating constraint conditions of the fringe pattern data based on the characteristic values, the constraint conditions including the number of connected domains, the Hough line peak value, the proportion of the peak value, the θ of the peak value, the ρ of the peak value, the rectangularity, the circularity, the aspect ratio, the elevation difference, the intensity mean and the area; constructing a decision tree from the characteristic values of the terrain and ground features and the constraint conditions; and analyzing the fringe pattern data through the decision tree to identify the terrain and ground feature types it contains. With this decision-tree-based classification method the stripe image need not be converted into point cloud data, and trees and buildings can be identified accurately.

Description

Laser radar waveform signal classification method based on decision tree
Technical Field
The disclosure relates to the technical field of image processing, in particular to a laser radar waveform signal classification method based on a decision tree.
Background
Compared with the traditional microwave radar, the laser radar offers high precision, high resolution, high detection sensitivity, good confidentiality, small size, light weight, and suitability for airborne and shipborne deployment. In addition, owing to its different working mechanism, the laser pulses emitted by a laser radar have stronger anti-interference capability and carry richer data than traditional microwave radar signals, giving the laser radar higher detection and identification capability. In particular, the development of novel full-waveform-sampling laser radars makes the detection and identification of complex targets possible. However, processing the massive echo signals of a laser radar has become a major technical bottleneck limiting the development of new laser radar technologies.
In some implementations, we are interested in only a portion of the object and do not need to process all of the point cloud data within the measurement range. Taking the forestry department as an example, the forestry department only cares about vegetation information of a measurement area, and information such as buildings, roads and the like belongs to interference information, so that only useful information can be screened and processed when data is processed, redundant calculation is reduced, and the data processing speed is accelerated. When screening data, it is necessary to classify the data.
A decision tree is a tree-like structure made up of a number of nodes, similar to a flow chart. The root node sits at the top of the tree, and the tree is built top-down. Nodes are either internal nodes or leaf nodes: an internal node represents a test on one attribute, with the test outcomes represented by branches; leaf nodes sit at the bottom of the tree and store class labels. The tree is constructed top-down so that the training set is recursively divided into ever smaller subsets by the attribute threshold defined at each internal node. Decision trees are simple to use, require no prior assumptions about the data, are fast to compute, yield highly interpretable results, and are robust, which is why they are widely applied in classification.
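The top-down recursive partitioning described above can be sketched as follows. This is a minimal illustration, not the patent's implementation: the attribute names and the pre-chosen (attribute, threshold) tests are hypothetical rather than learned from data.

```python
def build_tree(samples, labels, splits):
    """samples: list of dicts (attribute -> value); labels: class per sample;
    splits: ordered list of (attribute, threshold) tests to apply."""
    if len(set(labels)) == 1 or not splits:
        # pure node, or no attribute left to test: make a leaf
        return {"leaf": max(set(labels), key=list(labels).count)}
    (attr, thr), rest = splits[0], splits[1:]
    left = [(s, y) for s, y in zip(samples, labels) if s[attr] <= thr]
    right = [(s, y) for s, y in zip(samples, labels) if s[attr] > thr]
    if not left or not right:
        # split does not separate anything: make a leaf
        return {"leaf": max(set(labels), key=list(labels).count)}
    ls, ly = zip(*left)
    rs, ry = zip(*right)
    return {"attr": attr, "thr": thr,
            "left": build_tree(list(ls), list(ly), rest),
            "right": build_tree(list(rs), list(ry), rest)}

def classify(tree, sample):
    """Walk from the root to a leaf, testing one attribute per internal node."""
    while "leaf" not in tree:
        tree = tree["left"] if sample[tree["attr"]] <= tree["thr"] else tree["right"]
    return tree["leaf"]
```

A tree built from a single hypothetical attribute (e.g. a connected-domain count `n_cc`) then classifies new samples by following the branches from the root.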
At present, most of data classification techniques related to laser radar are based on laser radar point cloud data, and terrain and ground objects are classified by using an algorithm. The point cloud data refers to laser foot point data acquired by a laser radar, and includes spatial position information (X, Y, Z) of a target point, intensity information of object reflection, echo frequency information, and the like.
Because the process from the raw laser signal to point cloud generation is complex, generating point cloud data requires a large amount of time and computing resources, and the classification result depends heavily on point cloud quality. Existing classification schemes based on laser point cloud data therefore suffer from a complicated processing pipeline, long classification time and heavy computation, and struggle to meet practical requirements.
BRIEF SUMMARY OF THE PRESENT DISCLOSURE
The present disclosure is directed to a method for classifying laser radar waveform signals based on a decision tree, which can solve at least one of the above-mentioned technical problems. The specific scheme is as follows:
according to a specific embodiment of the present disclosure, the present disclosure provides a method for classifying laser radar waveform signals based on a decision tree, including:
acquiring fringe pattern data by waveform-sampling the radar echo signals, and extracting characteristic values from the fringe pattern data;
respectively coding the types of the terrain and ground objects into four types of 1,2, 3 and 4, which respectively represent plains, hills, buildings and trees;
calculating constraint conditions of the stripe pattern data based on the characteristic values, wherein the constraint conditions comprise the number of connected domains, the peak value of the Hough line, the proportion of the peak value, theta of the peak value, rho of the peak value, rectangularity, circularity, aspect ratio, elevation difference, intensity mean value and area;
constructing a decision tree according to the characteristic values of the terrain and ground objects and the constraint conditions;
and analyzing the stripe pattern data through the decision tree, and identifying the terrain and feature type in the stripe pattern data.
Optionally, the constructing a decision tree according to the feature values of the terrain and feature and the constraint condition includes:
acquiring the characteristics of the terrain and ground objects after noise reduction;
calculating the number of the connected domains, and identifying that the terrain and ground object is a plain or a hill when the number of the connected domains is 1;
and when the number of the connected domains is more than 1, identifying that the terrain feature is a building or a tree.
Optionally, when the number of connected domains is 1, identifying that the terrain feature is a plain or a hill includes:
when the number of the connected domains is 1, acquiring the peak value of the Hough straight line;
when the peak value of the Hough straight line is larger than a first threshold value, identifying that the terrain feature is a plain;
when the Hough straight line peak value is smaller than the first threshold value, the terrain feature is identified as a hill.
Optionally, when the number of connected domains is greater than 1, identifying that the terrain feature is a building or a tree includes:
when the number of the connected domains is larger than 1, acquiring the elevation difference value;
when the elevation difference value is larger than a second threshold value, further determining that the terrain and ground object is a building or a tree;
and when the elevation difference value is smaller than a second threshold value, further determining that the terrain feature is a plain or a hill.
Optionally, when the elevation difference is greater than the second threshold, further determining that the terrain feature is a building or a tree includes:
when the elevation difference value is larger than a second threshold value, acquiring the proportion of a peak value of the stripe pattern data, theta of the peak value, rho of the peak value, the rectangularity, the circularity, the aspect ratio, the intensity mean value and the area;
and determining the terrain features to be buildings or trees according to the proportion of the peak value, theta of the peak value, rho of the peak value, the squareness, the circularity, the length-width ratio, the strength average value and the area.
Optionally, determining that the terrain and ground structure is a building or a tree according to the proportion of the peak value, θ of the peak value, ρ of the peak value, the squareness, the circularity, the aspect ratio, the mean intensity value and the area includes:
and when the intensity average value is larger than 70, determining that the terrain feature is a tree, otherwise, determining that the terrain feature is a building.
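Taken together, these optional determining steps form a fixed rule cascade. Below is a hedged sketch in Python: the intensity-mean threshold of 70 is stated above, the Hough-peak threshold of 900 and the 0.5-10 m range for the elevation threshold come from the detailed description later in this document, and the 2 m value chosen here within that range is an assumption.

```python
# Rule cascade sketch of the claimed classification steps.
# HOUGH_PEAK_THR (900) and INTENSITY_THR (70) are values given in the text;
# ELEV_DIFF_THR = 2.0 m is an assumed value within the stated 0.5-10 m range.

HOUGH_PEAK_THR = 900.0   # "first threshold"
ELEV_DIFF_THR = 2.0      # "second threshold" (assumed)
INTENSITY_THR = 70.0

def classify_stripe(n_connected, hough_peak, elev_diff, intensity_mean):
    if n_connected == 1:
        # single connected domain: plain vs. hill via the Hough line peak
        return "plain" if hough_peak > HOUGH_PEAK_THR else "hill"
    if elev_diff <= ELEV_DIFF_THR:
        # several domains but low relief: fall back to plain/hill
        return "plain" if hough_peak > HOUGH_PEAK_THR else "hill"
    # several domains and high relief: tree vs. building via the mean intensity
    return "tree" if intensity_mean > INTENSITY_THR else "building"
```

For example, a stripe image with one connected domain and a strong Hough peak is labelled a plain, while one with several domains, large relief and high mean intensity is labelled a tree.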
Optionally, the expression of the intensity mean value is as follows:
μ = Σ_{i=0}^{L-1} i · n_i / N
where i denotes the gray level, i = 0, 1, 2, …, L-1; L denotes the number of gray levels (256 for the echo signal); n_i denotes the number of pixels with gray level i; and N denotes the total number of pixels in the image.
Optionally, the circularity expression is as follows:
C = 4πS / L²
where S denotes the area of the target region, L denotes the perimeter of the target region, and the larger the circularity C, the better the circularity of the target.
Optionally, the calculation formula of the squareness degree is as follows:
R = S / S_MER
where S denotes the area of the target region and S_MER denotes the area of the minimum enclosing rectangle; the larger the rectangularity R, the closer the target is to a rectangle, with a maximum value of 1.
Optionally, the aspect ratio is calculated by the following formula:
K = L / W
where W is the length of the minor axis and L is the length of the major axis, a larger value of the aspect ratio K indicates a slimmer object.
Compared with the prior art, the scheme of the embodiment of the disclosure at least has the following beneficial effects: according to the laser radar waveform signal classification method based on the decision tree, point cloud data conversion is not needed to be carried out on stripe images, image recognition can be directly carried out, most of building type, sloping field type and flat field type targets obtained after classification are correctly classified and marked, and the classification effect is good. Other targets such as vegetation and the like mainly aim at identifying original fringe echo signals with more missing echo signals, trees or buildings can be accurately identified, and the whole image identification method is simple and efficient.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and together with the description, serve to explain the principles of the disclosure. It should be apparent that the drawings in the following description are merely examples of the disclosure and that other drawings may be derived by those of ordinary skill in the art without inventive effort. In the drawings:
FIG. 1 illustrates a general flow diagram of a decision tree based laser radar waveform signal classification method according to an embodiment of the present disclosure;
FIG. 2 illustrates a flow chart of a decision tree based laser radar waveform signal classification method according to an embodiment of the present disclosure;
FIG. 3 illustrates a schematic diagram of a conventional airborne lidar detection system according to an embodiment of the present disclosure;
FIG. 4 shows a schematic diagram of a streak tube imaging mechanism in the new regime lidar, according to an embodiment of the disclosure;
FIG. 5 shows a perimeter schematic in accordance with an embodiment of the present disclosure;
FIG. 6 illustrates an elevation differential schematic view according to an embodiment of the present disclosure;
FIG. 7 illustrates a flow diagram for selecting attributes for node splitting according to an embodiment of the present disclosure;
FIG. 8 illustrates a decision tree training flow diagram according to an embodiment of the present disclosure;
FIG. 9 illustrates a decision tree structure according to an embodiment of the present disclosure;
fig. 10 shows an electronic device connection structure schematic according to an embodiment of the present disclosure.
Detailed Description
To make the objects, technical solutions and advantages of the present disclosure more apparent, the present disclosure will be described in further detail with reference to the accompanying drawings, and it is apparent that the described embodiments are only some embodiments of the present disclosure, not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments disclosed herein without making any creative effort, shall fall within the protection scope of the present disclosure.
The terminology used in the embodiments of the disclosure is for the purpose of describing particular embodiments only and is not intended to be limiting of the disclosure. As used in the disclosed embodiments and the appended claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise, and "a plurality" typically includes at least two.
It should be understood that the term "and/or" as used herein merely describes an association between objects, covering three possible relationships; for example, "A and/or B" may mean: A exists alone, A and B exist simultaneously, or B exists alone. In addition, the character "/" herein generally indicates that the associated objects before and after it are in an "or" relationship.
It should be understood that although the terms first, second, third, etc. may be used in the embodiments of the present disclosure to describe various elements, those elements should not be limited by these terms, which serve only to distinguish one element from another. For example, a first element may also be referred to as a second element, and similarly a second element may be referred to as a first element, without departing from the scope of the embodiments of the disclosure.
Depending on the context, the word "if" as used herein may be interpreted as "when", "upon", "in response to determining" or "in response to detecting". Similarly, the phrases "if it is determined" or "if (a stated condition or event) is detected" may be interpreted as "when it is determined", "in response to determining", "when (a stated condition or event) is detected" or "in response to detecting (a stated condition or event)", depending on the context.
It is also noted that the terms "comprises", "comprising", and any variation thereof are intended to cover a non-exclusive inclusion, so that an article or apparatus comprising a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such an article or apparatus. Without further limitation, an element introduced by the phrase "comprising a ..." does not exclude the presence of additional like elements in the article or device comprising it.
The streak-tube-based airborne laser radar adopts a waveform sampling technology and can classify targets directly from the morphological characteristics of the raw echo-signal stripe patterns, avoiding the complicated process of converting all raw echo signals into a point cloud map. In addition, each echo signal of a streak-tube-based airborne laser radar can carry up to 1000 times the data volume of single-point detection, yielding much richer information. By analyzing the attribute characteristics of the various terrain and ground features and constructing a decision tree, both the speed and the accuracy of classification can be improved. The present disclosure therefore aims to provide a classification technique that is fast and accurate and works directly on the raw laser stripe echo signals.
The present disclosure uses raw data classification of laser radar based on new system of streak tube, and the general flow is shown in fig. 1.
Alternative embodiments of the present disclosure are described in detail below with reference to the accompanying drawings.
Example 1
As shown in fig. 2, according to an embodiment of the present disclosure, the present disclosure provides a method for classifying laser radar waveform signals based on a decision tree, which specifically includes the following steps:
step S102: and acquiring fringe pattern data through waveform sampling radar echo signals, and extracting characteristic values based on the fringe pattern data.
As shown in fig. 3, which illustrates the working principle of a conventional airborne lidar detection system, a conventional airborne single-point-scanning lidar measurement system mainly comprises: a laser Ranging Unit, an Opto-mechanical Scanner, Control-Monitoring and Recording Units, a Differential Global Positioning System (DGPS) and an Inertial Measurement Unit (IMU). Most current laser radars use a pulsed laser to emit light pulses toward the target; part of each pulse is reflected back to the laser after reaching the target, and the distance D from the laser to the target is obtained from the time difference t between emission and return by the following formula (c denotes the speed of light):
D = c · t / 2
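The ranging relation D = c·t/2 can be checked numerically; the following is a small sketch assuming only that formula:

```python
C_LIGHT = 299_792_458.0  # speed of light in vacuum, m/s

def range_from_tof(t):
    """Distance to the target from the round-trip time t: D = c * t / 2."""
    return C_LIGHT * t / 2.0
```

A round-trip time of 1 microsecond, for instance, corresponds to a target roughly 150 m away.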
in order to obtain more characteristic details about the target terrain feature, more accurate classification is achieved. The novel system laser radar based on the streak tube is adopted in the method, and the working mechanism of the streak tube is shown in figure 4.
After the laser pulse is emitted to the surface of the object, part of the optical signal returns. The part of optical signal is shaped into a linear beam through a slit and then focused on a photocathode of the streak tube by a subsequent optical focusing system. The photocathode generates photoelectric effect to convert photons into photoelectrons, and the instantaneous emission density is in direct proportion to the pulse intensity at the moment, so that electrons emitted by the photocathode are equivalent to an incident light signal in space-time structure. The photoelectron pulses then enter a deflection system which linearly deflects the electrons at different times, causing them to spread out over the screen in a time sequence.
Step S104: the terrain and ground feature categories are respectively coded into four categories of 1,2, 3 and 4, which respectively represent plains, hills, buildings and trees.
Step S106: and calculating constraint conditions of the stripe pattern data based on the characteristic values, wherein the constraint conditions comprise the number of connected domains, the peak value of the Hough line, the proportion of the peak value, theta of the peak value, rho of the peak value, rectangularity, circularity, aspect ratio, elevation difference, intensity average value and area.
The decision tree is a classification prediction model, and in order to construct the decision tree, feature analysis needs to be performed on signals to be classified, and threshold values of all constraint conditions are determined.
In the present disclosure, the number of connected domains of a stripe image, the Hough line peak value, the proportion of the peak value, the θ of the peak value, the ρ of the peak value, the rectangularity, the circularity, the aspect ratio, the elevation difference, the intensity mean and the area are selected as the classification features of the terrain and ground features.
1. Peak value of Hough line
Hough transform line detection is an image recognition and feature extraction technique that detects straight-line shapes by a voting algorithm. The transform accumulates votes in a parameter-space accumulator: each straight line in the image produces a peak in the accumulator, and the height of a peak reflects how many pixels lie on the corresponding line.
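A minimal accumulator-voting sketch (pure NumPy, not the patent's code) showing how the Hough line peak used as a feature can be obtained from a binary stripe image:

```python
import numpy as np

def hough_peak(binary_img, n_theta=180):
    """Vote every foreground pixel into the (rho, theta) accumulator and
    return the highest cell count, i.e. the Hough line peak value."""
    ys, xs = np.nonzero(binary_img)
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    diag = int(np.ceil(np.hypot(*binary_img.shape)))   # bound on |rho|
    acc = np.zeros((2 * diag + 1, n_theta), dtype=np.int64)
    for x, y in zip(xs, ys):
        # rho = x*cos(theta) + y*sin(theta), quantised to integer bins
        rhos = np.round(x * np.cos(thetas) + y * np.sin(thetas)).astype(int)
        acc[rhos + diag, np.arange(n_theta)] += 1
    return int(acc.max())
```

For a 20-pixel horizontal line, all 20 pixels vote into the same (ρ, θ) cell at θ = 90°, so the peak equals the line length.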
2. Circumference length
The perimeter is the boundary length of the target area, as shown in fig. 5.
3. Area of
The common method for calculating the area is to count the number of pixels in a target area and at a boundary, and since the target values in a binary image are all 1 and the background values are all 0, the area can be obtained by accumulating the image intensity values, and the calculation formula is as follows:
A = Σ_x Σ_y f(x, y)
4. the intensity mean expression is as follows:
μ = Σ_{i=0}^{L-1} i · n_i / N
where i denotes the gray level, i = 0, 1, 2, …, L-1; L denotes the number of gray levels (256 for the echo signal); n_i denotes the number of pixels with gray level i; and N denotes the total number of pixels in the image.
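Both the area and the intensity mean are direct pixel or histogram sums; the following is a small sketch under the assumptions above (binary target image for the area, 256 gray levels for the mean):

```python
import numpy as np

def region_area(binary_img):
    """Area of the target region: foreground pixels are 1, background 0,
    so summing the image values gives the pixel count."""
    return int(binary_img.sum())

def intensity_mean(gray_img, levels=256):
    """Mean gray level via the histogram: sum_i i * n_i / N."""
    hist = np.bincount(gray_img.ravel(), minlength=levels)
    return float((np.arange(levels) * hist).sum()) / gray_img.size
```

Summing i·n_i over the histogram and dividing by N is equivalent to averaging the pixel values directly, but makes the formula above explicit.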
5. The circularity expression is as follows:
C = 4πS / L²
where S represents the area of the target region, L represents the perimeter of the target region, and the greater the circularity C, the better the circularity of the target.
6. The calculation formula of the squareness degree is as follows:
R = S / S_MER
where S denotes the area of the target region and S_MER denotes the area of the minimum enclosing rectangle; the larger the rectangularity R, the closer the target is to a rectangle, with a maximum value of 1.
7. The aspect ratio is calculated as:
K = L / W
where W is the length of the minor axis and L is the length of the major axis, a larger value of the aspect ratio K indicates a slimmer object.
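The three shape descriptors above can be computed directly from the area, perimeter, minimum-enclosing-rectangle area and axis lengths; a sketch of the formulas just given:

```python
import math

def circularity(area, perimeter):
    """C = 4*pi*S / L**2; equals 1 for a perfect circle, smaller otherwise."""
    return 4.0 * math.pi * area / perimeter ** 2

def rectangularity(area, mer_area):
    """R = S / S_MER; reaches its maximum of 1 when the region fills its
    minimum enclosing rectangle."""
    return area / mer_area

def aspect_ratio(major_axis, minor_axis):
    """K = L / W; a larger K indicates a slimmer (more elongated) region."""
    return major_axis / minor_axis
```

As a sanity check, a disk of radius r has area πr² and perimeter 2πr, giving circularity exactly 1.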
8. Elevation difference value:
as shown in fig. 6, the difference between the connected component 1 and the connected component 2 is the elevation difference information between the two connected components.
Four terrain types, namely plain, sloping field, building and tree, are selected for feature analysis, with the results shown in the table below. As the table shows, these four types of terrain can be distinguished by these features.
[Table: characteristic values of the four terrain types (plain, sloping field, building, tree) for the features listed above]
Step S108: and constructing a decision tree according to the characteristic values of the terrain and ground objects and the constraint conditions.
For the attribute set of the training sample set, the information entropy of each attribute is calculated and its information gain ratio derived. The attribute with the largest gain ratio is selected, layer by layer, to split the data in each node, and growth stops once all sample data in a node belong to the same class or no attribute in the attribute set remains for testing. The flow of the core step, selecting the attribute for node splitting, is shown in fig. 7.
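The entropy and gain-ratio computation at the heart of the node-splitting step can be sketched as follows. This is an illustration of a C4.5-style gain ratio for a binary threshold split, not the patent's program:

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a class-label list, in bits."""
    if not labels:
        return 0.0
    n = len(labels)
    return -sum(c / n * math.log2(c / n) for c in Counter(labels).values())

def gain_ratio(values, labels, threshold):
    """Information gain of splitting at `threshold`, normalised by the
    split information (C4.5 gain ratio)."""
    left = [y for v, y in zip(values, labels) if v <= threshold]
    right = [y for v, y in zip(values, labels) if v > threshold]
    n = len(labels)
    info_gain = entropy(labels) - (len(left) / n * entropy(left)
                                   + len(right) / n * entropy(right))
    split_info = entropy(["L"] * len(left) + ["R"] * len(right))
    return info_gain / split_info if split_info > 0 else 0.0
```

A split that perfectly separates two balanced classes has gain 1 bit and split information 1 bit, so its gain ratio is exactly 1.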
In the actual tree-building process, the attributes in a sample set may be discrete, continuous or missing, and each situation requires its own specific handling.
In programming the decision tree model, the core is the information-entropy algorithm: the optimal splitting attribute and the optimal splitting value for that attribute are selected in a loop as the branching condition of the tree, and the growth of the decision tree stops when the training samples can no longer be subdivided.
In the experiment, the information gain ratio is computed for every feature attribute in the extracted feature data set, and the best attribute and its corresponding best splitting value are selected in turn to grow the decision tree. The whole processing flow is shown in fig. 8; the selection of the raw fringe echo signals of typical targets and the extraction of their feature data are covered in chapter three. The extracted feature data are shaped into input parameters suitable for the decision tree program, and the target classification model is then trained: choose the target streak echo signals → extract the feature data → train the decision tree model.
Here, traindata and testdata are the feature-data matrices extracted from the selected raw stripe training set and test set; A is the type of the feature attribute; dim is the index of the feature attribute; split_loc is the splitting value of the best attribute. The overall flow extracts features such as target area, mean intensity and line count from the raw stripe echo signals of three selected target classes (buildings, sloping fields and flat ground) and shapes them into traindata and testdata. With these as input matrices, the decision tree classification model is trained: the optimal splitting attribute and its optimal splitting value are computed in turn, and nodes are split successively until the leaf-node condition is met and splitting stops.
Specifically, as shown in fig. 9, the constructing a decision tree according to the feature values of the terrain and features and the constraint conditions includes:
acquiring the characteristics of the terrain and ground objects after noise reduction;
calculating the number of the connected domains, and identifying that the terrain and ground object is a plain or a hill when the number of the connected domains is 1;
and when the number of the connected domains is more than 1, identifying that the terrain feature is a building or a tree.
Optionally, when the number of connected domains is 1, identifying that the terrain feature is a plain or a hill includes:
when the number of the connected domains is 1, acquiring the peak value of the Hough line;
when the peak value of the Hough straight line is larger than a first threshold value, the terrain ground object is identified as a plain; when the peak value of the Hough straight line is smaller than the first threshold value, the terrain feature is identified to be a hill. Wherein the first threshold value is in the range of 800-1000, preferably 900.
Optionally, when the number of connected domains is greater than 1, identifying that the terrain feature is a building or a tree includes:
when the number of the connected domains is more than 1, acquiring the elevation difference value;
when the elevation difference value is larger than a second threshold value, further determining that the terrain and ground object is a building or a tree; and when the elevation difference value is smaller than a second threshold value, further determining that the terrain feature is a plain or a hill. Wherein the second threshold range is 0.5-10 meters.
Optionally, when the elevation difference is greater than a second threshold, further determining that the terrain feature is a building or a tree, including:
when the elevation difference is larger than a second threshold value, acquiring the proportion of the peak value of the stripe pattern data, theta of the peak value, rho of the peak value, the rectangularity, the circularity, the aspect ratio, the intensity mean value and the area;
and determining the terrain and ground structure to be a building or a tree according to the proportion of the peak value, theta of the peak value, rho of the peak value, the rectangularity, the circularity, the length-width ratio, the mean intensity value and the area.
Optionally, determining whether the terrain feature is a building or a tree according to the proportion of the peak value, the θ of the peak value, the ρ of the peak value, the rectangularity, the circularity, the aspect ratio, the intensity mean, and the area includes:
when the intensity mean is greater than 70, determining that the terrain feature is a tree; otherwise, determining that the terrain feature is a building.
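The branching logic of these optional steps can be collected into one small decision-tree sketch. This is an illustration only: the feature dictionary is my assumption, and the default thresholds of 900 and 5 m are my picks from the stated ranges of 800-1000 and 0.5-10 m:

```python
def classify_terrain(features, first_threshold=900, second_threshold=5.0):
    """Decision-tree sketch of the classification rules described above.

    `features` is assumed to hold the pre-computed constraint values:
    'n_domains', 'hough_peak', 'elevation_diff' (metres), 'intensity_mean'.
    """
    if features["n_domains"] == 1:
        # one connected domain: flat-terrain branch, split on the Hough peak
        return "plain" if features["hough_peak"] > first_threshold else "hill"
    if features["elevation_diff"] <= second_threshold:
        # low elevation difference: falls back to the plain/hill classes
        return "plain-or-hill"
    # high elevation difference: split buildings from trees on the intensity mean
    return "tree" if features["intensity_mean"] > 70 else "building"
```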
Step S110: analyzing the fringe pattern data through the decision tree, and identifying the terrain feature types in the fringe pattern data.
The decision-tree-based laser radar waveform signal classification method recognizes the fringe image directly, without first converting it to point cloud data. After classification, most building-type, hill-type and plain-type targets are correctly classified and labeled, giving a good classification effect. For the remaining targets, such as vegetation, the method mainly recognizes raw fringe echo signals in which many echo returns are missing, and can still accurately distinguish trees from buildings; the whole image recognition method is simple and efficient.
Example 2
As shown in fig. 10, the present embodiment provides an electronic device, including: at least one processor; and a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to cause the at least one processor to perform the method steps of the above embodiments.
Example 3
The disclosed embodiments provide a non-volatile computer storage medium having stored thereon computer-executable instructions that, when executed, perform the method steps as described in the embodiments above.
Example 4
Referring now to FIG. 10, shown is a block diagram of an electronic device suitable for use in implementing embodiments of the present disclosure. The terminal device in the embodiments of the present disclosure may include, but is not limited to, a mobile terminal such as a mobile phone, a notebook computer, a digital broadcast receiver, a PDA (personal digital assistant), a PAD (tablet computer), a PMP (portable multimedia player), a vehicle terminal (e.g., a car navigation terminal), and the like, and a stationary terminal such as a digital TV, a desktop computer, and the like. The electronic device shown in fig. 10 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present disclosure.
As shown in fig. 10, the electronic device may include a processing means (e.g., a central processing unit, a graphics processor, etc.) 1001 that can perform various appropriate actions and processes according to a program stored in a read-only memory (ROM) 1002 or a program loaded from a storage means 1008 into a random access memory (RAM) 1003. The RAM 1003 also stores various programs and data necessary for the operation of the electronic device. The processing means 1001, the ROM 1002, and the RAM 1003 are connected to one another by a bus 1004. An input/output (I/O) interface 1005 is also connected to the bus 1004.
Generally, the following means may be connected to the I/O interface 1005: input means 1006 including, for example, a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; output means 1007 including, for example, a liquid crystal display (LCD), a speaker, a vibrator, etc.; storage means 1008 including, for example, a magnetic tape, a hard disk, etc.; and communication means 1009. The communication means 1009 may allow the electronic device to communicate wirelessly or by wire with other devices to exchange data. While fig. 10 illustrates an electronic device having various means, it is to be understood that not all illustrated means are required to be implemented or provided; more or fewer means may alternatively be implemented or provided.
In particular, the processes described above with reference to the flow diagrams may be implemented as computer software programs, according to embodiments of the present disclosure. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer-readable medium, the computer program comprising program code for performing the method illustrated by the flow chart. In such embodiments, the computer program may be downloaded and installed from a network through the communication means 1009, or installed from the storage means 1008, or installed from the ROM 1002. The computer program, when executed by the processing means 1001, performs the above-described functions defined in the methods of the embodiments of the present disclosure.
It should be noted that the computer readable medium of the present disclosure may be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In contrast, in the present disclosure, a computer readable signal medium may comprise a propagated data signal with computer readable program code embodied therein, either in baseband or as part of a carrier wave. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, optical cables, RF (radio frequency), etc., or any suitable combination of the foregoing.
The computer readable medium may be embodied in the electronic device; or may be separate and not incorporated into the electronic device.
Computer program code for carrying out operations for aspects of the present disclosure may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, C++, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in the embodiments of the present disclosure may be implemented by software or hardware. Wherein the name of an element does not in some cases constitute a limitation on the element itself.

Claims (10)

1. A laser radar waveform signal classification method based on a decision tree is characterized by comprising the following steps:
acquiring fringe pattern data through a waveform sampling radar echo signal, and extracting a characteristic value based on the fringe pattern data;
encoding the terrain feature types as four classes, 1, 2, 3 and 4, representing plains, hills, buildings and trees, respectively;
calculating constraint conditions of the fringe pattern data based on the characteristic values, wherein the constraint conditions comprise the number of connected domains, the peak value of the Hough line, the proportion of the peak value, the θ of the peak value, the ρ of the peak value, the rectangularity, the circularity, the aspect ratio, the elevation difference, the intensity mean and the area;
constructing a decision tree according to the characteristic values of the terrain and ground objects and the constraint conditions;
and analyzing the stripe pattern data through the decision tree, and identifying the terrain and feature type in the stripe pattern data.
2. The method of claim 1, wherein constructing a decision tree according to the characteristic values of the terrain features and the constraint conditions comprises:
acquiring the characteristics of the terrain features after noise reduction;
calculating the number of connected domains, and when the number of connected domains is 1, identifying the terrain feature as a plain or a hill;
and when the number of connected domains is greater than 1, identifying the terrain feature as a building or a tree.
3. The method of claim 2, wherein identifying the terrain feature as a plain or a hill when the number of connected domains is 1 comprises:
when the number of connected domains is 1, acquiring the peak value of the Hough line;
when the peak value of the Hough line is greater than a first threshold, identifying the terrain feature as a plain;
and when the peak value of the Hough line is smaller than the first threshold, identifying the terrain feature as a hill.
4. The method of claim 2, wherein identifying the terrain feature as a building or a tree when the number of connected domains is greater than 1 comprises:
when the number of connected domains is greater than 1, acquiring the elevation difference;
when the elevation difference is greater than a second threshold, further determining that the terrain feature is a building or a tree;
and when the elevation difference is smaller than the second threshold, further determining that the terrain feature is a plain or a hill.
5. The method of claim 4, wherein further determining that the terrain feature is a building or a tree when the elevation difference is greater than the second threshold comprises:
when the elevation difference is greater than the second threshold, acquiring the proportion of the peak value of the fringe pattern data, the θ of the peak value, the ρ of the peak value, the rectangularity, the circularity, the aspect ratio, the intensity mean and the area;
and determining whether the terrain feature is a building or a tree according to the proportion of the peak value, the θ of the peak value, the ρ of the peak value, the rectangularity, the circularity, the aspect ratio, the intensity mean and the area.
6. The method of claim 5, wherein determining whether the terrain feature is a building or a tree according to the proportion of the peak value, the θ of the peak value, the ρ of the peak value, the rectangularity, the circularity, the aspect ratio, the intensity mean and the area comprises:
when the intensity mean is greater than 70, determining that the terrain feature is a tree; otherwise, determining that the terrain feature is a building.
7. The method of claim 6, wherein the intensity mean expression is as follows:
$$\bar{I} \;=\; \frac{1}{N}\sum_{i=0}^{L-1} i\, n_i$$
wherein i represents the gray level, i = 0, 1, 2, …, L−1; L represents the number of gray levels, the echo signal having 256 gray levels; n_i represents the total number of pixels having gray level i; and N represents the total number of pixels in the image.
8. The method of claim 6, wherein the circularity expression is as follows:
$$C = \frac{4\pi S}{L^{2}}$$
where S denotes the area of the target region and L denotes the perimeter of the target region; the larger the circularity C, the closer the target is to a circle.
9. The method of claim 6, wherein the rectangularity is calculated by the formula:
$$R = \frac{S}{S_{\mathrm{MER}}}$$
wherein S represents the area of the target region and S_MER represents the area of the minimum enclosing rectangle; the larger the rectangularity R, the closer the target is to a rectangle, with a maximum value of 1.
10. The method of claim 6, wherein the aspect ratio is calculated as:
$$K = \frac{L}{W}$$
where W is the minor axis length and L is the major axis length, and a larger value of the aspect ratio K indicates a slimmer object.
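The four descriptors defined in claims 7-10 can be computed directly, for example as below. This is a hedged sketch: the argument names, the pixel-counted perimeter, and the way the minimum-enclosing-rectangle area and axis lengths are passed in are all my assumptions, not part of the patent:

```python
import numpy as np

def shape_descriptors(gray, mask, mer_area, major_axis, minor_axis, perimeter):
    """Descriptors from claims 7-10.

    gray: 2-D uint8 image (256 gray levels); mask: boolean target region;
    mer_area: area of the minimum enclosing rectangle; perimeter: target
    contour length; major_axis/minor_axis: target axis lengths.
    """
    area = int(mask.sum())                               # S
    # claim 7: mean gray level over the image, n_i taken from a 256-bin histogram
    hist = np.bincount(gray.ravel(), minlength=256)
    intensity_mean = float((np.arange(256) * hist).sum()) / gray.size
    circularity = 4 * np.pi * area / perimeter ** 2      # claim 8: C = 4*pi*S / L^2
    rectangularity = area / mer_area                     # claim 9: R = S / S_MER (<= 1)
    aspect_ratio = major_axis / minor_axis               # claim 10: larger K = slimmer
    return intensity_mean, circularity, rectangularity, aspect_ratio
```

For a 4x4 filled square region the rectangularity is exactly 1 and the aspect ratio is 1, matching the limiting values stated in the claims.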
CN201910622645.1A 2019-07-11 2019-07-11 Laser radar waveform signal classification method based on decision tree Active CN110502979B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910622645.1A CN110502979B (en) 2019-07-11 2019-07-11 Laser radar waveform signal classification method based on decision tree


Publications (2)

Publication Number Publication Date
CN110502979A CN110502979A (en) 2019-11-26
CN110502979B true CN110502979B (en) 2023-04-14

Family

ID=68585958

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910622645.1A Active CN110502979B (en) 2019-07-11 2019-07-11 Laser radar waveform signal classification method based on decision tree

Country Status (1)

Country Link
CN (1) CN110502979B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114418852B (en) * 2022-01-20 2024-04-12 哈尔滨工业大学 Point cloud arbitrary scale up-sampling method based on self-supervision deep learning

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7191066B1 (en) * 2005-02-08 2007-03-13 Harris Corp Method and apparatus for distinguishing foliage from buildings for topographical modeling
WO2017036363A1 (en) * 2015-09-02 2017-03-09 同方威视技术股份有限公司 Optical fiber perimeter intrusion signal identification method and device, and perimeter intrusion alarm system
WO2019113063A1 (en) * 2017-12-05 2019-06-13 Uber Technologies, Inc. Multiple stage image based object detection and recognition

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10666928B2 (en) * 2015-02-06 2020-05-26 The University Of Akron Optical imaging system and methods thereof
US20190174207A1 (en) * 2016-05-09 2019-06-06 StrongForce IoT Portfolio 2016, LLC Methods and systems for the industrial internet of things


Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
Land cover classification combining full-waveform lidar and aerial imagery; Zhou Mengwei et al.; Remote Sensing Technology and Application; 2010-12-15 (No. 06); pp. 821-827 *
Research on key technologies of a medium- and high-altitude lidar surveying system based on streak-array detection; Hu Guojun; China Doctoral Dissertations Full-text Database, Basic Sciences; 2018-12-15; pp. A008-11 *
Streak-principle lidar imaging simulation and experiments; Dong Zhiwei et al.; Infrared and Laser Engineering; 2016-07-31; Vol. 45, No. 7; pp. 100-104 *
Analysis of airborne hyperspectral remote sensing anomaly information and prospecting application for the Shijinpo gold deposit, Gansu; Dong Shuangfa et al.; Gold; 2017-01-15 (No. 01); pp. 10-16 *
Object-oriented, highly reliable and precise processing of SAR data; Zhang Jixian et al.; Geomatics and Information Science of Wuhan University; 2018-12-05 (No. 12); pp. 1819-1831 *


Similar Documents

Publication Publication Date Title
US11402494B2 (en) Method and apparatus for end-to-end SAR image recognition, and storage medium
US10853687B2 (en) Method and apparatus for determining matching relationship between point cloud data
CN109087510B (en) Traffic monitoring method and device
CN109993192B (en) Target object identification method and device, electronic equipment and storage medium
CN109871902A (en) It is a kind of to fight the SAR small sample recognition methods for generating cascade network based on super-resolution
CN110940971B (en) Radar target point trace recording method and device and storage medium
CN109740639A (en) A kind of wind and cloud satellite remote-sensing image cloud detection method of optic, system and electronic equipment
CN110390706B (en) Object detection method and device
CN110555841A (en) SAR image change detection method based on self-attention image fusion and DEC
CN110502973A (en) A kind of roadmarking automation extraction and recognition methods based on vehicle-mounted laser point cloud
CN110502978A (en) A kind of laser radar waveform Modulation recognition method based on BP neural network model
CN111541511A (en) Communication interference signal identification method based on target detection in complex electromagnetic environment
CN114037836A (en) Method for applying artificial intelligence recognition technology to three-dimensional power transmission and transformation engineering measurement and calculation
CN113920320B (en) Radar image target detection system for typical active interference
CN116027324A (en) Fall detection method and device based on millimeter wave radar and millimeter wave radar equipment
CN116547562A (en) Point cloud noise filtering method, system and movable platform
CN115147333A (en) Target detection method and device
CN115187812A (en) Hyperspectral laser radar point cloud data classification method, training method and device
CN109584262A (en) Cloud detection method of optic, device and electronic equipment based on remote sensing image
CN110502979B (en) Laser radar waveform signal classification method based on decision tree
WO2021179583A1 (en) Detection method and detection device
Kusetogullari et al. Unsupervised change detection in landsat images with atmospheric artifacts: a fuzzy multiobjective approach
CN111812670A (en) Single photon laser radar space transformation noise judgment and filtering method and device
US20230104674A1 (en) Machine learning techniques for ground classification
CN116047463A (en) Multi-angle SAR target scattering anisotropy deduction method, device, equipment and medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant