CN113379921A - Track identification method, device, storage medium and equipment - Google Patents

Track identification method, device, storage medium and equipment

Info

Publication number
CN113379921A
Authority
CN
China
Prior art keywords
track
image
lines
point cloud
line
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110692045.XA
Other languages
Chinese (zh)
Inventor
冯强
刘行健
张海武
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Benewake Beijing Co Ltd
Original Assignee
Benewake Beijing Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Benewake Beijing Co Ltd filed Critical Benewake Beijing Co Ltd
Priority to CN202110692045.XA priority Critical patent/CN113379921A/en
Publication of CN113379921A publication Critical patent/CN113379921A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/20 Finite element generation, e.g. wire-frame surface description, tesselation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/70 Denoising; Smoothing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/13 Edge detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10028 Range image; Depth image; 3D point clouds

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

One or more embodiments of the present invention provide a track identification method, apparatus, storage medium, and device, where the track identification method includes: collecting first point cloud data around a track; removing points far away from the ground and the track in the first point cloud data to obtain second point cloud data containing track data; converting the second point cloud data to obtain a depth image; extracting lines from the depth image; and filtering the lines based on a preset line matching limit rule according to the position relationship between the two tracks to obtain the lines of the tracks.

Description

Track identification method, device, storage medium and equipment
Technical Field
The present invention relates to the field of image processing technologies, and in particular, to a track identification method, apparatus, storage medium, and device.
Background
At present, in the field of track detection, the track direction can be extracted through PCA (principal component analysis) to obtain the track features, and the point cloud data of the track acquired at close range shows no obvious abnormality. At long range, however, when the track is viewed at a small inclination angle, the reflection off the rail surface prevents point cloud data from being acquired, and in this case the track position extraction by PCA fails.
Disclosure of Invention
In view of this, embodiments of the present invention provide a track identification method, apparatus, storage medium, and device, which can effectively improve track identification efficiency.
One or more embodiments of the present invention provide a track identification method, including: collecting first point cloud data around a track; removing points far away from the ground and the track in the first point cloud data to obtain second point cloud data containing track data; converting the second point cloud data to obtain a depth image; extracting lines from the depth image; and filtering the lines based on a preset line matching limit rule according to the position relation between the two tracks to obtain the lines of the tracks.
Optionally, the method further includes: and after the line of the track is obtained, determining the obstacle avoidance interval of the track according to the line of the track.
Optionally, the method further includes: after obtaining a depth image according to the second point cloud data, setting a filtering parameter according to the characteristics of a track, wherein the filtering parameter comprises: the range of the filtering window is larger than the preset width value of the track; filtering the depth image according to the filtering parameters to obtain a filtered image; and subtracting the depth image from the filtered image to obtain the track image.
Optionally, the method further includes: carrying out binarization processing on the track image to obtain a binary image; performing edge extraction processing on the binary image to obtain an image after edge extraction; and extracting lines in the image subjected to edge extraction through a Hough transform algorithm.
Optionally, the preset line matching restriction rule includes:
[formula image not reproduced]
d0<Abs(Image2(xi1,yi1)-Image2(xj1,yj1))<d1;
d0<Abs(Image2(xi2,yi2)-Image2(xj2,yj2))<d1;
wherein xi1 and yi1 represent the abscissa and ordinate of the starting point of the i-th line in the image, xi2 and yi2 represent the abscissa and ordinate of the ending point of the i-th line in the image, xj1, yj1, xj2 and yj2 represent the corresponding coordinates of the j-th line, d0 represents the lower limit of the preset track width value, and d1 represents the upper limit of the preset track width value. Filtering the lines based on the preset line matching restriction rule according to the positional relationship between the two tracks to obtain the lines of the track includes: taking as lines of the track those lines whose points simultaneously satisfy the above relational expressions.
Optionally, the filtering the line according to the position relationship between the two tracks based on a preset line matching restriction rule to obtain the line of the track includes: extracting lines of the track from the lines satisfying any one of the following relations;
Abs(y–y1(d0))≤m0;
Abs(y–y2(d0))≤m0;
where y is the ordinate of a point on a track in the image, y1(d0) represents the ordinate of a point on one track in the corresponding image when the track width value is d0, y2(d0) represents the ordinate of a point on another track in the corresponding image when the track width value is d0, d0 represents a preset lower track width limit, and m0 represents a preset width value of a track line.
Optionally, determining an obstacle avoidance interval of the track according to the line of the track includes: determining the obstacle avoidance interval of the track from positions simultaneously satisfying the following two relational expressions: y ≥ y1(d0) – m0; y ≤ y1(d0) + m0;
where y is the ordinate of a point on a track in the image, y1(d0) represents the ordinate of a point on a corresponding track in the image when the track width value is d0, d0 represents the preset lower track width limit, and m0 represents the preset width value of a track line.
One or more embodiments of the present invention also provide a track recognition apparatus including: an acquisition module configured to acquire first point cloud data around a trajectory; the rejecting module is configured to reject points far away from the ground and the track in the first point cloud data to obtain second point cloud data containing track data; a conversion module configured to convert the second point cloud data to obtain a depth image; an extraction module configured to extract lines from the depth image; and the filtering module is configured to filter the lines based on a preset line matching limit rule according to the position relationship between the two tracks to obtain the lines of the tracks.
One or more embodiments of the present invention also provide an electronic device including: a shell, a processor, a memory, a circuit board and a power circuit, wherein the circuit board is arranged in a space enclosed by the shell, and the processor and the memory are arranged on the circuit board; the power supply circuit is used for supplying power to each circuit or device of the electronic equipment; the memory is used for storing executable program codes; the processor executes a program corresponding to the executable program code by reading the executable program code stored in the memory, for executing any of the above-described track recognition methods.
One or more embodiments of the present invention also provide a non-transitory computer-readable storage medium storing computer instructions for causing the computer to perform any of the above-described track identification methods.
According to the track identification method provided by one or more embodiments of the invention, the first point cloud data around the track is collected, the points far away from the ground or the track are removed to obtain the second point cloud data, and the second point cloud data is converted into the two-dimensional projection plane to obtain the depth image, so that the track lines can be identified based on the obtained depth image by using an image processing algorithm, and the track identification efficiency can be improved.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly introduced below. It is obvious that the drawings in the following description are only some embodiments of the present invention, and that those skilled in the art can obtain other drawings from these drawings without creative effort.
FIG. 1 is a flow diagram illustrating a method of track identification in accordance with one or more embodiments of the invention;
FIG. 2(a) is a schematic illustration of an image before filtering, shown in accordance with one or more embodiments of the present invention;
FIG. 2(b) is a schematic illustration of an image after filtering the image of FIG. 2(a) according to one or more embodiments of the invention;
FIG. 2(c) is a schematic diagram of an image obtained by subtracting FIG. 2(a) from FIG. 2(b);
FIG. 3(a) is a schematic illustration of an image before filtering, shown in accordance with one or more embodiments of the present invention;
FIG. 3(b) is a schematic illustration of an image after filtering the image of FIG. 3(a) according to one or more embodiments of the invention;
FIG. 3(c) is a schematic diagram illustrating an image resulting from subtracting FIG. 3(a) from FIG. 3(b), in accordance with one or more embodiments of the present invention;
FIG. 4 is a schematic diagram illustrating a binary image derived based on a depth map in accordance with one or more embodiments of the invention;
FIG. 5 is a schematic diagram of an image after a line has been extracted, according to one or more embodiments of the invention;
FIG. 6 is a schematic diagram illustrating trapezoidal features extracted from the characteristics of the track itself, according to one or more embodiments of the invention;
FIG. 7 is a schematic diagram illustrating lines of extracted tracks in accordance with one or more embodiments of the invention;
FIG. 8 is a schematic diagram illustrating an identified obstacle avoidance area of a track in accordance with one or more embodiments of the present invention;
FIG. 9 is a schematic diagram illustrating a configuration of a track recognition device in accordance with one or more embodiments of the present invention;
fig. 10 is a schematic structural diagram of an electronic device according to one or more embodiments of the invention.
Detailed Description
Embodiments of the present invention will be described in detail below with reference to the accompanying drawings.
It should be understood that the described embodiments are only some embodiments of the invention, and not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Fig. 1 is a flow diagram illustrating a track identification method according to one or more embodiments of the invention, as shown in fig. 1, the method comprising:
step 101: collecting first point cloud data around a track;
for example, the first point cloud data may be collected by a point cloud acquisition device such as a laser radar (lidar), and the collected first point cloud data may include the spatial position of each point in the radar coordinate system. The laser radar has a long ranging range and a high resolution in both the horizontal and vertical directions, which ensures that smaller objects can also be detected.
Step 102: removing points far away from the ground and the track in the first point cloud data to obtain second point cloud data containing track data;
for the first point cloud data collected in step 101, a ground extraction algorithm can be used to extract the points whose normal vector is [0,0,1], and the points far away from the ground or the rails are removed by setting a threshold value, so as to obtain the second point cloud data. The second point cloud data includes the point cloud of the track.
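By way of illustration only, this retention of near-ground points can be approximated with the Python sketch below; the percentile-based ground estimate, the `height_band` value, and the assumption that the sensor z-axis is roughly vertical are illustrative choices, not details taken from the patent.

```python
import numpy as np

def keep_ground_and_rail_points(points: np.ndarray, height_band: float = 0.3) -> np.ndarray:
    """Rough stand-in for the ground/rail extraction step.

    points: (N, 3) array of (x, y, z) lidar coordinates.
    Keeps points whose height lies within `height_band` metres of an
    estimated ground height (a low percentile of z), approximating
    "remove points far away from the ground and the track".
    """
    z = points[:, 2]
    ground_z = np.percentile(z, 5)          # crude ground-height estimate
    mask = np.abs(z - ground_z) < height_band
    return points[mask]

# Example usage with random data standing in for real lidar returns:
if __name__ == "__main__":
    cloud = np.random.uniform([-5, -5, -2], [50, 5, 3], size=(10000, 3))
    second_cloud = keep_ground_and_rail_points(cloud)
    print(second_cloud.shape)
```

The patent itself describes extracting points whose normal vector is [0,0,1]; a normal-based filter (for example via a plane fit) could be substituted for the percentile band without changing the rest of the pipeline.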
Step 103: converting the second point cloud data to obtain a depth image;
converting the second point cloud data obtained in the step 102 to obtain a depth image in the following way:
the second point cloud data includes the three-dimensional coordinates (x, y, z) of the detected objects, that is, of the positions around the track. According to the field of view Fov of the radar and its angular resolutions u' and v' in the horizontal and vertical directions, the following can be calculated:
d=sqrt(x²+y²+z²);
(u,v)=(arctan(y/x)/u’,arcsin(z/d)/v’);
Image1(u,v)=z;
Image2(u,v)=y;
Image3(u,v)=x;
in the above equations, d represents the distance value in the depth image, and x, y and z represent the three-dimensional coordinates of a point.
In this way, the distribution of the detected objects on the image plane is continuous, image-like information is obtained, and the value of the depth image at each pixel is the z value of the corresponding point.
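The projection formulas above can be written down directly in NumPy; the sketch below assumes illustrative angular resolutions and image dimensions (`u_res`, `v_res`, `width`, `height`), which are not specified in the patent, and uses arctan2 rather than arctan(y/x) for numerical robustness.

```python
import numpy as np

def project_to_images(points, u_res=np.deg2rad(0.2), v_res=np.deg2rad(0.2),
                      width=900, height=64):
    """Spherical projection of (x, y, z) points into Image1 (z), Image2 (y)
    and Image3 (x), following d = sqrt(x^2 + y^2 + z^2),
    u = arctan(y/x)/u' and v = arcsin(z/d)/v'."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    d = np.sqrt(x ** 2 + y ** 2 + z ** 2)
    u = np.arctan2(y, x) / u_res          # arctan2 avoids division by zero at x = 0
    v = np.arcsin(z / np.maximum(d, 1e-9)) / v_res

    # Shift to non-negative pixel indices and clip to the image bounds.
    u_pix = np.clip((u + width // 2).astype(int), 0, width - 1)
    v_pix = np.clip((v + height // 2).astype(int), 0, height - 1)

    image1 = np.zeros((height, width), dtype=np.float32)  # stores z values
    image2 = np.zeros_like(image1)                        # stores y values
    image3 = np.zeros_like(image1)                        # stores x values
    image1[v_pix, u_pix] = z
    image2[v_pix, u_pix] = y
    image3[v_pix, u_pix] = x
    return image1, image2, image3
```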
Step 104: extracting lines from the depth image;
after the depth image is obtained, it may be further optimized; for example, at least one of filtering, opening operation, closing operation and edge extraction may be applied to the depth image, so as to expand the point cloud information as much as possible and prevent line extraction from failing because the point cloud is too sparse. After the depth image is optimized, the lines in the image are extracted.
Step 105: and filtering the lines based on a preset line matching limit rule according to the position relation between the two tracks to obtain the lines of the tracks.
Since there is a certain positional relationship between the two tracks to be detected, for example the width between the two tracks (hereinafter also referred to as the track width for short), a line matching restriction rule may be established in advance according to this positional relationship, and a line extracted in step 104 is determined to be a line of the track only if it satisfies the line matching restriction rule.
According to the track identification method provided by one or more embodiments of the invention, the first point cloud data around the track is collected, the points far away from the ground or the track are removed to obtain the second point cloud data, and the second point cloud data is converted into the two-dimensional projection plane to obtain the depth image, so that the track lines can be identified based on the obtained depth image by using an image processing algorithm, and the track identification efficiency can be improved.
In one or more embodiments of the present invention, the track identification method may further include: and after the line of the track is obtained, determining the obstacle avoidance interval of the track according to the line of the track. The area between the two tracks can be an obstacle avoidance interval, so that after the lines of the tracks are extracted, the image is back projected into the point cloud, and the track danger avoidance interval of the point cloud, namely a three-dimensional danger avoidance space, can be obtained.
In one or more embodiments of the present invention, the track identification method may further include:
after the depth image is obtained from the second point cloud data, a filtering parameter is set according to the characteristics of the track, the filtering parameter including a filter window whose range is larger than the preset width value of the track; the depth image is filtered according to the filtering parameter to obtain a filtered image; and the depth image and the filtered image are subtracted to obtain the track image. For example, according to the characteristics of the track, the filter window may be set to exceed the track width and output the minimum value within the window. Two frames of image data are thus obtained, and subtracting the two images yields the filtered track data features. As shown in fig. 2(a), when the acquired point cloud data of the track is relatively complete, filtering produces the data shown in fig. 2(b), and subtracting the two frames yields the image shown in fig. 2(c); the remaining part in fig. 2(c) is the track, and the area between the two tracks is the obstacle avoidance area. As shown in fig. 3(a), when the acquired point cloud data of the track is incomplete, filtering the original image in fig. 3(a) produces fig. 3(b), and the image obtained by subtracting fig. 3(a) and fig. 3(b) is shown in fig. 3(c); the image obtained in this case is sparse.
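A minimal sketch of this window-minimum filtering and subtraction, assuming SciPy's minimum filter as the window operation and an illustrative window width (`window`) larger than a rail's width in pixels; neither the library nor the window size is specified in the patent:

```python
import numpy as np
from scipy.ndimage import minimum_filter

def extract_track_image(depth_image: np.ndarray, window: int = 15) -> np.ndarray:
    """Window-minimum filtering followed by subtraction.

    The horizontal filter window is chosen wider than a rail, so the minimum
    filter suppresses the narrow rail features; subtracting the filtered image
    from the original depth image leaves mainly the rails (the track image).
    """
    filtered = minimum_filter(depth_image, size=(1, window))
    return depth_image - filtered
```

An equivalent alternative would be grayscale erosion with a 1×window rectangular kernel, since erosion is the same window-minimum operation.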
As can be seen by comparing fig. 2(c) and fig. 3(c), when the acquired point cloud data of the track is not complete enough, the quality of the track image obtained after filtering is poor. Therefore, in one or more embodiments of the present invention, after the track image is obtained, it may be further optimized, and on this basis the track identification method in one or more embodiments of the present invention may further include: performing binarization processing on the track image to obtain a binary image. For example, binarization, opening and closing operations may be applied to the track image according to the image quality to expand the point cloud information, and the resulting binary image is shown in fig. 4. After the binary image is obtained, edge extraction is performed on it, for example with the Canny edge-detection operator, to obtain an edge image; lines are then extracted from the edge image by the Hough transform. For a curved section of track, the image can be divided into segments and the straight-line features extracted segment by segment. The image after line extraction can be as shown in fig. 5, where the continuous lines are the track lines extracted by the Hough transform and the scattered points are the other valid points remaining after filtering the image.
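A sketch of the binarization, morphological closing, Canny edge extraction and Hough line extraction using OpenCV; the thresholds, kernel size and Hough parameters below are illustrative assumptions rather than values given in the patent:

```python
import cv2
import numpy as np

def extract_lines(track_image: np.ndarray):
    """Binarize the track image, close small gaps, run Canny, then Hough."""
    # Scale to 8-bit and binarize with Otsu's threshold.
    img8 = cv2.normalize(track_image, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    _, binary = cv2.threshold(img8, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

    # Closing (dilate then erode) expands the sparse point-cloud pixels.
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (3, 3))
    closed = cv2.morphologyEx(binary, cv2.MORPH_CLOSE, kernel)

    edges = cv2.Canny(closed, 50, 150)
    lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=30,
                            minLineLength=20, maxLineGap=10)
    return [] if lines is None else [l[0] for l in lines]  # each as (x1, y1, x2, y2)
```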
As shown in fig. 5, the extracted lines still contain considerable noise, so further filtering is needed to obtain the lines of the track.
The preset line matching restriction rule may include:
[formula image not reproduced]
d0<Abs(Image2(xi1,yi1)-Image2(xj1,yj1))<d1;
d0<Abs(Image2(xi2,yi2)-Image2(xj2,yj2))<d1;
wherein xi1 and yi1 represent the abscissa and ordinate of the starting point of the i-th line in the image, xi2 and yi2 represent the abscissa and ordinate of the ending point of the i-th line in the image, xj1, yj1, xj2 and yj2 represent the corresponding coordinates of the j-th line, d0 represents the lower limit of the preset track width value, and d1 represents the upper limit of the preset track width value;
filtering the lines based on the preset line matching restriction rule according to the positional relationship between the two tracks to obtain the lines of the track includes:
taking as lines of the track those lines whose points simultaneously satisfy the above relational expressions.
For example, for a plurality of lines in fig. 5, line matching needs to be performed by using the position relationship between two tracks, and the trapezoidal feature shown in fig. 6 is extracted according to the characteristics of the tracks themselves.
According to the actual characteristics of the track, the track should satisfy the following conditions in the image:
referring to fig. 6, the pixel distance between the two lines in the image is larger at near range than at far range, and the angle between the two lines is larger at near range than at far range;
the distance between two lines in the image at different distances meets the threshold range of the track width d, the threshold range of d can be limited within the range of 1.2m to 2m according to the actual characteristics of the track, and the width can be searched through the position of a line in the image;
the difference in height of the two tracks should be below a certain value;
in the image, along the linear extension of a track there may be abnormal (outlier) points, which can cause part of the fitted line to fall on these outliers; the distance rule therefore needs to be applied at a plurality of distances to avoid falsely rejecting valid lines.
Still taking the image shown in fig. 5 as an example, after filtering the lines in fig. 5 by using the line matching restriction rule, the obtained effective line interval can be as shown in fig. 7, and the line of the track is identified.
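A sketch of the pairwise gauge check behind the line matching restriction rule, assuming the line segments produced by the Hough step and the Image2 (lateral y) map from the projection step; the helper name and the example bounds d0 = 1.2 m and d1 = 2.0 m (taken from the 1.2 m to 2 m range mentioned above) are illustrative:

```python
import numpy as np

def match_rail_pair(segments, image2, d0=1.2, d1=2.0):
    """Return the first pair of segments whose lateral separation at both
    the start and the end points lies within the track-width bounds [d0, d1].

    segments: list of (x1, y1, x2, y2) pixel coordinates.
    image2:   2-D array storing the lateral y coordinate per pixel (row, col).
    """
    def lateral(px, py):
        return image2[int(py), int(px)]

    for i in range(len(segments)):
        xi1, yi1, xi2, yi2 = segments[i]
        for j in range(i + 1, len(segments)):
            xj1, yj1, xj2, yj2 = segments[j]
            start_gap = abs(lateral(xi1, yi1) - lateral(xj1, yj1))
            end_gap = abs(lateral(xi2, yi2) - lateral(xj2, yj2))
            if d0 < start_gap < d1 and d0 < end_gap < d1:
                return segments[i], segments[j]
    return None
```

In practice the gauge would be checked at several sample points along each segment rather than only at the endpoints, in line with the multi-distance rule described above.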
In one or more embodiments of the present invention, the filtering the line according to the position relationship between the two tracks based on a preset line matching restriction rule to obtain the line of the track may include:
extracting lines of the track from the lines satisfying any one of the following relations;
Abs(y–y1(d0))≤m0;
Abs(y–y2(d0))≤m0;
where y is the ordinate of the point on the track in the image, y1(d0) represents the ordinate of the point on one track in the corresponding image when the track width value is d0, y2(d0) represents the ordinate of the point on the other track in the corresponding image when the track width value is d0, d0 represents the preset lower track width limit, and m0 represents the preset width value of the track line. For example, from the relationship between the two tracks constrained by the above relational expressions, and combining the relationships between the x and y values, the restrictions on the y values at the respective distances can be obtained as y1(d0) and y2(d0), together with the track line width m0 (a small value representing the width of a line of the track in the image). At the position where the track width value is d0, if the restriction Abs(y–y1(d0)) ≤ m0 or Abs(y–y2(d0)) ≤ m0 is satisfied, the track line is considered to be extracted.
In one or more embodiments of the present invention, determining an obstacle avoidance interval of a track according to a line of the track may include:
determining an obstacle avoidance interval of the track from positions simultaneously satisfying the following two relational expressions;
y≥y1(d0)–m0;
y≤y1(d0)+m0;
where y is the ordinate of a point on a track in the image, y1(d0) represents the ordinate of the point on the corresponding track in the image when the track width value is d0, d0 represents the preset lower track width limit, and m0 represents the preset width value of a track line. For example, when the constraints y ≥ y1(d0) – m0 and y ≤ y1(d0) + m0 are applied, the obstacle avoidance area of the whole track can be obtained. Meanwhile, if the height at each distance satisfies the condition z < z0, where z0 represents a preset height threshold, the track area is considered to contain no obstacle; otherwise, whether the track area contains an obstacle or a conventional object such as an electric wire can be judged from the image captured by the camera. A schematic diagram of the extracted obstacle avoidance area of the track is shown in fig. 8, where the part identified by the mark L is the obstacle avoidance area of the whole track.
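A sketch of the obstacle check over the extracted band, assuming a per-column reference rail ordinate y1, the half-width m0 and a height threshold z0; these inputs and their values are illustrative stand-ins for the quantities described above:

```python
import numpy as np

def has_obstacle(image1, image2, y1_of_col, m0=0.1, z0=0.5):
    """Flag an obstacle if any point inside the avoidance band is too high.

    image1:    2-D array of z (height) values.
    image2:    2-D array of lateral y values.
    y1_of_col: per-column lateral position of the reference rail.
    """
    rows, cols = image1.shape
    for col in range(cols):
        band = np.abs(image2[:, col] - y1_of_col[col]) <= m0   # y1-m0 <= y <= y1+m0
        if np.any(image1[band, col] >= z0):                    # height reaches or exceeds z0
            return True
    return False
```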
Fig. 9 is a schematic structural diagram illustrating a track recognition apparatus according to one or more embodiments of the present invention, and as shown in fig. 9, the apparatus 90 includes:
an acquisition module 91 configured to acquire first point cloud data around the orbit;
a rejecting module 92 configured to reject points far from the ground and far from the track in the first point cloud data to obtain second point cloud data including track data;
a conversion module 93 configured to convert the second point cloud data into a depth image;
an extraction module 94 configured to extract lines from the depth image;
and the filtering module 95 is configured to filter the lines based on a preset line matching restriction rule according to the position relationship between the two tracks, so as to obtain the lines of the tracks.
In one or more embodiments of the present invention, the track recognition device may further include: the determining module is configured to determine a track obstacle avoidance interval according to the track line after the track line is obtained.
In one or more embodiments of the present invention, the track recognition device may further include: a setting module configured to set a filtering parameter according to a feature of a track after obtaining a depth image according to the second point cloud data, wherein the filtering parameter includes: the range of the filtering window is larger than the preset width value of the track;
a filtering module configured to filter the depth image according to the filtering parameter to obtain a filtered image;
and the first processing module is configured to subtract the depth image and the filtered image to obtain the track image.
In one or more embodiments of the present invention, the track recognition device may further include: the second processing module is configured to carry out binarization processing on the track image to obtain a binary image;
the third processing module is configured to perform edge extraction processing on the binary image to obtain an image after edge extraction;
and the fourth processing module is configured to extract lines in the image subjected to edge extraction through a Hough transform algorithm.
In one or more embodiments of the present invention, the preset line matching restriction rule may include:
[formula image not reproduced]
d0<Abs(Image2(xi1,yi1)-Image2(xj1,yj1))<d1;
d0<Abs(Image2(xi2,yi2)-Image2(xj2,yj2))<d1;
wherein xi1 and yi1 represent the abscissa and ordinate of the starting point of the i-th line in the image, xi2 and yi2 represent the abscissa and ordinate of the ending point of the i-th line in the image, xj1, yj1, xj2 and yj2 represent the corresponding coordinates of the j-th line, d0 represents the lower limit of the preset track width value, and d1 represents the upper limit of the preset track width value;
filtering the lines based on the preset line matching restriction rule according to the positional relationship between the two tracks to obtain the lines of the track includes:
taking as lines of the track those lines whose points simultaneously satisfy the above relational expressions.
The filtering module is specifically configured to:
extracting lines of the track from the lines satisfying any one of the following relations;
Abs(y–y1(d0))≤m0;
Abs(y–y2(d0))≤m0;
where y is the ordinate of a point on a track in the image, y1(d0) represents the ordinate of a point on one track in the corresponding image when the track width value is d0, y2(d0) represents the ordinate of a point on another track in the corresponding image when the track width value is d0, d0 represents a preset lower track width limit, and m0 represents a preset width value of a track line.
The determination module is specifically configured to:
determining an obstacle avoidance interval of the track from positions simultaneously satisfying the following two relational expressions;
y≥y1(d0)–m0;
y≤y1(d0)+m0;
where y is the ordinate of a point on a track in the image, y1(d0) represents the ordinate of a point on a corresponding track in the image when the track width value is d0, d0 represents the preset lower track width limit, and m0 represents the preset width value of a track line.
One or more embodiments of the present invention also provide an electronic device including: a shell, a processor, a memory, a circuit board and a power circuit, wherein the circuit board is arranged in a space enclosed by the shell, and the processor and the memory are arranged on the circuit board; the power supply circuit is used for supplying power to each circuit or device of the electronic equipment; the memory is used for storing executable program codes; the processor executes a program corresponding to the executable program code by reading the executable program code stored in the memory, for executing any one of the above-described track recognition methods.
One or more embodiments of the present invention also provide a non-transitory computer-readable storage medium storing computer instructions for causing the computer to perform any of the above-described track identification methods.
Accordingly, as shown in fig. 10, an electronic device provided by one or more embodiments of the present invention may include: a shell 11, a processor 12, a memory 13, a circuit board 14 and a power circuit 15, wherein the circuit board 14 is arranged inside a space enclosed by the shell 11, and the processor 12 and the memory 13 are arranged on the circuit board 14; the power supply circuit 15 is used for supplying power to each circuit or device of the electronic apparatus; the memory 13 is used for storing executable program codes; and the processor 12 runs a program corresponding to the executable program code by reading the executable program code stored in the memory 13, for executing any one of the track recognition methods provided by the foregoing embodiments.
It is noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
All the embodiments in the present specification are described in a related manner, and the same and similar parts among the embodiments may be referred to each other, and each embodiment focuses on the differences from the other embodiments.
In particular, as for the apparatus embodiment, since it is substantially similar to the method embodiment, the description is relatively simple, and for the relevant points, reference may be made to the partial description of the method embodiment.
For convenience of description, the above devices are described separately in terms of functional division into various units/modules. Of course, the functionality of the units/modules may be implemented in one or more software and/or hardware implementations of the invention.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program, which can be stored in a computer-readable storage medium, and when executed, can include the processes of the embodiments of the methods described above. The storage medium may be a magnetic disk, an optical disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), or the like.
The above description is only for the specific embodiment of the present invention, but the scope of the present invention is not limited thereto, and any changes or substitutions that can be easily conceived by those skilled in the art within the technical scope of the present invention are included in the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (10)

1. A method of track identification, comprising:
collecting first point cloud data around a track;
removing points far away from the ground and the track in the first point cloud data to obtain second point cloud data containing track data;
converting the second point cloud data to obtain a depth image;
extracting lines from the depth image;
and filtering the lines based on a preset line matching limit rule according to the position relation between the two tracks to obtain the lines of the tracks.
2. The method of claim 1, further comprising:
and after the line of the track is obtained, determining the obstacle avoidance interval of the track according to the line of the track.
3. The method of claim 1, further comprising:
after obtaining a depth image according to the second point cloud data, setting a filtering parameter according to the characteristics of a track, wherein the filtering parameter comprises: the range of the filtering window is larger than the preset width value of the track;
filtering the depth image according to the filtering parameters to obtain a filtered image;
and subtracting the depth image from the filtered image to obtain the track image.
4. The method of claim 3, further comprising:
carrying out binarization processing on the track image to obtain a binary image;
performing edge extraction processing on the binary image to obtain an image after edge extraction;
and extracting lines in the image subjected to edge extraction through a Hough transform algorithm.
5. The method of claim 4, wherein the preset line matching restriction rule comprises:
[formula image not reproduced]
d0<Abs(Image2(xi1,yi1)-Image2(xj1,yj1))<d1;
d0<Abs(Image2(xi2,yi2)-Image2(xj2,yj2))<d1;
wherein xi1 and yi1 represent the abscissa and ordinate of the starting point of the i-th line in the image, xi2 and yi2 represent the abscissa and ordinate of the ending point of the i-th line in the image, xj1, yj1, xj2 and yj2 represent the corresponding coordinates of the j-th line, d0 represents the lower limit of the preset track width value, and d1 represents the upper limit of the preset track width value; and
wherein filtering the lines based on the preset line matching restriction rule according to the positional relationship between the two tracks to obtain the lines of the track comprises:
taking as lines of the track those lines whose points simultaneously satisfy the above relational expressions.
6. The method according to claim 1, wherein filtering the line according to the position relationship between the two tracks based on a preset line matching restriction rule to obtain the line of the track comprises:
extracting lines of the track from the lines satisfying any one of the following relations;
Abs(y–y1(d0))≤m0;
Abs(y–y2(d0))≤m0;
where y is the ordinate of a point on a track in the image, y1(d0) represents the ordinate of a point on one track in the corresponding image when the track width value is d0, y2(d0) represents the ordinate of a point on another track in the corresponding image when the track width value is d0, d0 represents a preset lower track width limit, and m0 represents a preset width value of a track line.
7. The method of claim 2, wherein determining the obstacle avoidance interval of the track according to the line of the track comprises:
determining an obstacle avoidance interval of the track from positions simultaneously satisfying the following two relational expressions;
y≥y1(d0)–m0;
y≤y1(d0)+m0;
where y is the ordinate of a point on a track in the image, y1(d0) represents the ordinate of a point on a corresponding track in the image when the track width value is d0, d0 represents the preset lower track width limit, and m0 represents the preset width value of a track line.
8. A track recognition device, comprising:
an acquisition module configured to acquire first point cloud data around a trajectory;
the rejecting module is configured to reject points far away from the ground and the track in the first point cloud data to obtain second point cloud data containing track data;
a conversion module configured to convert the second point cloud data to obtain a depth image;
an extraction module configured to extract lines from the depth image;
and the filtering module is configured to filter the lines based on a preset line matching limit rule according to the position relationship between the two tracks to obtain the lines of the tracks.
9. An electronic device, characterized in that the electronic device comprises: a shell, a processor, a memory, a circuit board and a power circuit, wherein the circuit board is arranged in a space enclosed by the shell, and the processor and the memory are arranged on the circuit board; the power supply circuit is used for supplying power to each circuit or device of the electronic equipment; the memory is used for storing executable program codes; the processor executes a program corresponding to the executable program code by reading the executable program code stored in the memory, for executing the track identification method of any one of the preceding claims 1 to 7.
10. A non-transitory computer-readable storage medium storing computer instructions for causing a computer to perform the track identification method of any one of claims 1 to 7.
CN202110692045.XA 2021-06-22 2021-06-22 Track identification method, device, storage medium and equipment Pending CN113379921A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110692045.XA CN113379921A (en) 2021-06-22 2021-06-22 Track identification method, device, storage medium and equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110692045.XA CN113379921A (en) 2021-06-22 2021-06-22 Track identification method, device, storage medium and equipment

Publications (1)

Publication Number Publication Date
CN113379921A true CN113379921A (en) 2021-09-10

Family

ID=77578240

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110692045.XA Pending CN113379921A (en) 2021-06-22 2021-06-22 Track identification method, device, storage medium and equipment

Country Status (1)

Country Link
CN (1) CN113379921A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116922448A (en) * 2023-09-06 2023-10-24 湖南大学无锡智能控制研究院 Environment sensing method, device and system for high-speed railway body-in-white transfer robot

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108470159A (en) * 2018-03-09 2018-08-31 腾讯科技(深圳)有限公司 Lane line data processing method, device, computer equipment and storage medium
US20190052792A1 (en) * 2017-08-11 2019-02-14 Ut-Battelle, Llc Optical array for high-quality imaging in harsh environments
CN110647798A (en) * 2019-08-05 2020-01-03 中国铁路设计集团有限公司 Automatic track center line detection method based on vehicle-mounted mobile laser point cloud
CN110909713A (en) * 2019-12-05 2020-03-24 深圳市镭神智能系统有限公司 Method, system and medium for extracting point cloud data track

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190052792A1 (en) * 2017-08-11 2019-02-14 Ut-Battelle, Llc Optical array for high-quality imaging in harsh environments
CN108470159A (en) * 2018-03-09 2018-08-31 腾讯科技(深圳)有限公司 Lane line data processing method, device, computer equipment and storage medium
CN110647798A (en) * 2019-08-05 2020-01-03 中国铁路设计集团有限公司 Automatic track center line detection method based on vehicle-mounted mobile laser point cloud
CN110909713A (en) * 2019-12-05 2020-03-24 深圳市镭神智能系统有限公司 Method, system and medium for extracting point cloud data track

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116922448A (en) * 2023-09-06 2023-10-24 湖南大学无锡智能控制研究院 Environment sensing method, device and system for high-speed railway body-in-white transfer robot
CN116922448B (en) * 2023-09-06 2024-01-02 湖南大学无锡智能控制研究院 Environment sensing method, device and system for high-speed railway body-in-white transfer robot

Similar Documents

Publication Publication Date Title
US9014432B2 (en) License plate character segmentation using likelihood maximization
US8867790B2 (en) Object detection device, object detection method, and program
Soquet et al. Road segmentation supervised by an extended v-disparity algorithm for autonomous navigation
US9760804B2 (en) Marker generating and marker detecting system, method and program
EP2783328B1 (en) Text detection using multi-layer connected components with histograms
KR101822185B1 (en) Method and apparatus for poi detection in 3d point clouds
US9911204B2 (en) Image processing method, image processing apparatus, and recording medium
US10643100B2 (en) Object detection apparatus, object detection method, and storage medium
Cheng et al. Building boundary extraction from high resolution imagery and lidar data
WO2018176514A1 (en) Fingerprint registration method and device
CN109255802B (en) Pedestrian tracking method, device, computer equipment and storage medium
KR20090098167A (en) Method and system for detecting lane by using distance sensor
JP2007121111A (en) Target identifying technique using synthetic aperture radar image and device therof
KR20180098945A (en) Method and apparatus for measuring speed of vehicle by using fixed single camera
CN111079626B (en) Living body fingerprint identification method, electronic equipment and computer readable storage medium
CN114118253B (en) Vehicle detection method and device based on multi-source data fusion
CN111354038B (en) Anchor detection method and device, electronic equipment and storage medium
Palenichka et al. Multiscale isotropic matched filtering for individual tree detection in LiDAR images
CN113379921A (en) Track identification method, device, storage medium and equipment
CN114092857A (en) Gateway-based collection card image acquisition method, system, equipment and storage medium
Huang et al. A back propagation based real-time license plate recognition system
Deb et al. Automatic vehicle identification by plate recognition for intelligent transportation system applications
CN113379923A (en) Track identification method, device, storage medium and equipment
JP4552409B2 (en) Image processing device
CN113516685A (en) Target tracking method, device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination