CN112800902A - Method for fitting track line in complex scene based on image segmentation - Google Patents
- Publication number
- CN112800902A CN112800902A CN202110066159.3A CN202110066159A CN112800902A CN 112800902 A CN112800902 A CN 112800902A CN 202110066159 A CN202110066159 A CN 202110066159A CN 112800902 A CN112800902 A CN 112800902A
- Authority
- CN
- China
- Prior art keywords
- track
- key point
- sequence
- row
- key
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/56—Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
- G06V20/588—Recognition of the road, e.g. of lane markings; Recognition of the vehicle driving pattern in relation to the road
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/26—Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/74—Image or video pattern matching; Proximity measures in feature spaces
- G06V10/75—Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
- G06V10/751—Comparing pixel values or logical combinations thereof, or feature values having positional relevance, e.g. template matching
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02T—CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
- Y02T10/00—Road transport of goods or passengers
- Y02T10/10—Internal combustion engine [ICE] based vehicles
- Y02T10/40—Engine management systems
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Multimedia (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Computing Systems (AREA)
- Artificial Intelligence (AREA)
- Health & Medical Sciences (AREA)
- Databases & Information Systems (AREA)
- Evolutionary Computation (AREA)
- General Health & Medical Sciences (AREA)
- Medical Informatics (AREA)
- Software Systems (AREA)
- Train Traffic Observation, Control, And Security (AREA)
- Image Analysis (AREA)
Abstract
The invention discloses an image-segmentation-based method for fitting track lines in a complex scene, comprising the following steps: input a track area image and segment it with an encoding-decoding neural network to obtain the track area pixels; perform an inverse perspective transformation on the image according to the segmentation result; extract track key points; match key points whose distance lies within a certain threshold; run a breadth-first search over the matched points to obtain initial track sequences; screen out poor track sequences; and fit the key point pairs in each remaining track sequence to obtain the fitted left and right rails. The technical scheme solves the problem of fitting track lines for a rail train in a complex track environment: it accurately assigns track-line pixels to the track line they belong to and distinguishes the track area (left and right rails). The method is simple and efficient, suits multi-track interleaved environments, and can assist the driving of rail trains in complex environments.
Description
Technical Field
The invention belongs to the technical field of rail transit automatic driving, and particularly relates to a method for fitting a track line in a complex scene based on image segmentation.
Background
The closed environment of rail transit reduces accidents to a certain extent, but unpredictable foreign objects, such as line patrol personnel or debris from natural disasters, still intrude into the track clearance area and pose great hidden dangers to rail transit. Because of the harm caused by foreign object intrusion, traditional static monitoring cannot meet the requirements of today's complex operating environments and all-weather high-speed running, so real-time monitoring is necessary. The premise of dynamically monitoring and detecting foreign object intrusion is accurate recognition of the rail train's operating environment, i.e. track area recognition, which in turn requires track line fitting.
Many researchers are currently working on track area recognition, for example fitting a single track line with a sliding-window marking detection method on image information, or detecting a simple track line ahead with a lidar. The actual rail transit environment, however, is complex: track lines interleave with one another and their number is not fixed. This makes fitting track lines in a complex scene much harder, and existing schemes cannot fit multiple interleaved track lines.
Disclosure of Invention
In order to solve the problem that existing track line fitting algorithms are oriented to a single line and cannot fit interleaved track lines, the invention provides an image-segmentation-based track line fitting method for complex scenes, which distinguishes the track line each track-line pixel belongs to and distinguishes the track area (left and right rails). The specific technical scheme of the invention is as follows:
a method for fitting a track line in a complex scene based on image segmentation is characterized by comprising the following steps:
s1: inputting a track area image, and segmenting the image through a coding-decoding neural network to obtain track area pixels;
s2: performing inverse perspective transformation on the image according to the image segmentation result obtained in step S1 to convert the image to the bird's-eye view, that is:
[u′,v′,w′]=[u,v,1]·M
wherein u and v are horizontal and vertical coordinate values of the original image, u '/w', v '/w' are horizontal and vertical coordinate values after transformation, and M is a perspective transformation projection matrix;
s3: extracting a track key point from the inverse perspective transformation image obtained in step S2;
s4: matching points with the distance within a certain threshold value according to the track key points extracted in the step S3;
s5: carrying out breadth-first search on successfully matched track key points to obtain an initial track sequence;
s6: screening the initial track sequence obtained in the step S5, and removing a repeated track sequence;
s7: and fitting the track key point pairs in all the track sequences processed in the step S6 to obtain the fitted left and right tracks.
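The projective mapping of step S2 can be sketched in a few lines of NumPy. `warp_points` is a hypothetical helper name (not from the patent); it applies the row-vector form [u', v', w'] = [u, v, 1]·M stated above and then dehomogenises:

```python
import numpy as np

def warp_points(points, M):
    """Apply the step-S2 projective transform [u', v', w'] = [u, v, 1] . M
    to an (N, 2) array of pixel coordinates and dehomogenise."""
    pts = np.asarray(points, dtype=float)
    homo = np.hstack([pts, np.ones((len(pts), 1))])  # append w = 1
    mapped = homo @ M                                # row-vector convention as in the patent
    return mapped[:, :2] / mapped[:, 2:3]            # (u'/w', v'/w')

# With the identity matrix the points are unchanged.
M_id = np.eye(3)
print(warp_points([[10.0, 20.0]], M_id))
```

In practice M would come from a camera calibration, e.g. four ground-plane point correspondences; OpenCV's `cv2.getPerspectiveTransform` and `cv2.warpPerspective` implement the same mapping for whole images.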
Further, the processing procedure of step S3 is:
s3-1: taking n key pixel rows with equal longitudinal spacing on the aerial view obtained in the step S2, wherein r is1,r2…,riI is 1,2, …, n, wherein, r isiIn the row pixel, the jth track key pointThe abscissa of the graph is the mean of the abscissas of the j-th section of continuous track pixelsObtaining n-row track key points in total;
wherein ,himgTo divide the picture pixel height, ClIs the abscissa of the pixel of the first section of track, k is the abscissa of the pixel of the track, and m is ClThe number of pixels is included;
s3-2: will r toiRow and ri+1Taking the image between the key pixel lines as a sub-image, taking the r-th imageiTaking the key point A of the row track as a seed point, and carrying out region growth on the sub-map region to obtain the r < th > image communicated with the key point A of the tracki+1A row track key point B and a track key point C;
s3-3: obtaining the included angle theta between the connecting line between the track key point A and the track key point B and the y axis through the coordinate difference between the two track key pointsABTilt angle as track key point a:
θAB=atan((Ax-Bx)/(Ay-By))
wherein, Ax and Ay are horizontal and vertical coordinate values of the key point A of the orbit in the row; bx and By are the horizontal and vertical coordinate values of the next row of track key point B.
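Steps S3-1 and S3-3 can be sketched as follows. `row_keypoints` and `tilt_angle` are illustrative names assumed for this example, and one key pixel row is represented as a binary list of segmentation labels:

```python
import math

def row_keypoints(mask_row):
    """S3-1: the abscissa of each track key point in a key pixel row is the
    mean abscissa of one continuous run of track (value 1) pixels."""
    keypoints, run = [], []
    for x, v in enumerate(mask_row):
        if v:
            run.append(x)
        elif run:
            keypoints.append(sum(run) / len(run))
            run = []
    if run:                                     # run reaching the row's end
        keypoints.append(sum(run) / len(run))
    return keypoints

def tilt_angle(A, B):
    """S3-3: angle between the y axis and segment AB,
    theta_AB = atan((Ax - Bx) / (Ay - By))."""
    return math.atan((A[0] - B[0]) / (A[1] - B[1]))

row = [0, 1, 1, 1, 0, 0, 1, 1, 0]
print(row_keypoints(row))                   # [2.0, 6.5]
print(tilt_angle((2.0, 29.0), (2.0, 0.0)))  # 0.0 (a vertical rail segment)
```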
Further, the processing procedure of step S4 is:
s4-1: randomly selecting two track key points from the same track key points to match;
s4-2: if the actual distance dis between the two matched track key points is notpair=xleft-xrightSatisfies (dis)min_th/cosθ)<dispair<(dismax_th/cos θ), then step S4-3 is executed, otherwise, step S4-1 is returned to; wherein x isleft、xrightRespectively, the abscissa of the key point of the left and right tracks, and theta ═ thetaleft+θright)/2,θleft、θrightRespectively, the tilt angles, dis, of the key points of the track calculated according to step S3-3max_th and dismin_thRespectively is the upper and lower boundaries of the distance between the matched two track key points;
s4-3: if the inclination of two track key points is matchedThe oblique angle satisfies | thetaleft-θright|<θthIf not, go to step S4-4, otherwise return to step S4-1, thetathA threshold value for the difference between the tilt angles of the set left and right track key points;
s4-4: and successfully matching, and storing the track key point pairs.
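The matching test of S4-2/S4-3 can be sketched as below; each key point is assumed to carry its abscissa and tilt angle, and `match_pair` with its threshold values is a hypothetical construction for illustration:

```python
import math

def match_pair(left, right, dis_min_th, dis_max_th, theta_th):
    """S4-2/S4-3: decide whether two key points form a left/right rail pair.
    Each key point is (abscissa, tilt angle); thresholds are assumptions."""
    (x_l, th_l), (x_r, th_r) = left, right
    theta = (th_l + th_r) / 2          # mean tilt angle of the candidate pair
    dis_pair = abs(x_l - x_r)
    in_range = dis_min_th / math.cos(theta) < dis_pair < dis_max_th / math.cos(theta)
    return in_range and abs(th_l - th_r) < theta_th

# Rail-gauge-like spacing in pixels, nearly parallel key points: a match.
print(match_pair((100.0, 0.02), (160.0, 0.01), 40, 80, 0.1))   # True
print(match_pair((100.0, 0.02), (300.0, 0.01), 40, 80, 0.1))   # False (too far apart)
```

The cos θ division widens the allowed gap for tilted pairs, matching the prior knowledge that the rail gauge is measured perpendicular to the rails.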
Further, the processing procedure of step S5 is:
s5-1: for the current riSearching the r-th track key point pair of the rowi+1A track key point pair of row pixels;
s5-2: judgment of the r-thi+1The key point pair of the track of the row pixel is corresponding to the r-thiWhether the key point pairs of the row tracks are communicated or not is judged, if not, the step S5-1 is returned; otherwise, the r-th node to be connectedi+1Adding the key point pair of the row track into the riThe key point pairs of the row tracks are in the track sequence;
s5-3: and traversing all the track key point pairs, and if the connected track key point pairs do not exist in the current track key point pair, storing the obtained track sequence.
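The row-by-row linking of step S5 can be sketched as below. The patent judges connectivity between rows by region growing on the segmentation mask; this sketch substitutes a simpler criterion, horizontal proximity of consecutive rows' key point pairs, which is an assumption of the example:

```python
def link_sequences(rows, max_jump):
    """Link matched key point pairs row by row into track sequences (S5).
    rows[i] is the list of (x_left, x_right) pairs found in key row r_i;
    a pair is taken as connected to a sequence if both rails moved by less
    than max_jump pixels on average since the previous row (an assumption
    standing in for the patent's region-growing connectivity test)."""
    finished, open_seqs = [], []
    for row in rows:
        used = [False] * len(row)
        next_open = []
        for seq in open_seqs:
            prev_l, prev_r = seq[-1]
            best = None
            for j, (x_l, x_r) in enumerate(row):
                if used[j]:
                    continue
                cost = abs(x_l - prev_l) + abs(x_r - prev_r)
                if cost < 2 * max_jump and (best is None or cost < best[0]):
                    best = (cost, j)
            if best is None:
                finished.append(seq)          # no connected pair: close the sequence
            else:
                used[best[1]] = True
                seq.append(row[best[1]])
                next_open.append(seq)
        for j, pair in enumerate(row):        # unconnected pairs start new sequences
            if not used[j]:
                next_open.append([pair])
        open_seqs = next_open
    return finished + open_seqs

rows = [[(0, 10), (100, 110)], [(1, 11), (99, 109)], [(2, 12)]]
print([len(s) for s in link_sequences(rows, 5)])   # [2, 3]: the right track ends early
```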
Further, the processing procedure of step S6 is:
s6-1: randomly selecting two track sequences, and setting sequence numbers as f and g;
s6-2: comparing the coordinates of the key point pairs of the tracks line by line for the two track sequences if the r th of the two track sequences isiIf the coordinates of the track key point pairs of the row pixels are completely the same, judging that the point pairs are the same point pairs; otherwise, judging the point pairs to be different point pairs, and storing the different point pairs;
s6-3: if the number of different point pairs obtained in S6-2 is smaller than the set threshold value between the two track sequences, judging as a high repetition sequence, otherwise, returning to the step S6-1;
s6-4: the different point pairs in the high repetition track sequence obtained from step S6-2 wherein ,indicates that the f-th track sequence is in the r-th track sequenceiThe key point of the trajectory of the row of pixels,indicates that the g track sequence is in the riA track key point of a row pixel;
s6-5: if the different point pairs of the track sequence obtained from the step S6-4 are not identicalRemoving the g track sequence; otherwise, removing the f track sequence;
different pointsWith two upper and lower track key pointsAngle of inclination therebetweenAnd wherein ,r-th track sequenceiTrack key point of row pixel and r-th track of f-th track sequencei-1The angle between the connecting line of the key points of the track of the row pixels and the y axis,r-th track sequenceiTrack key point of row pixel and r-th track of f-th track sequencei+1The included angle between the connecting line of the track key points of the row pixels and the y axis; different points of the f-th track sequenceSmoothness of the surfaceIn the same way, the method for preparing the composite material,for different points of the g-th track sequenceSmoothness of (d).
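The smoothness criterion of S6-4/S6-5 can be sketched as follows, representing each track sequence by one key point abscissa per key row (rows taken one unit apart for simplicity); the function names are illustrative:

```python
import math

def seg_angle(p, q):
    """Angle with the y axis of the segment joining key points in adjacent rows."""
    return math.atan(q - p)

def smoothness(seq, i):
    """|theta_up - theta_down| at row i: the tilt angle change across the point.
    A smaller value means a smoother sequence (S6)."""
    return abs(seg_angle(seq[i - 1], seq[i]) - seg_angle(seq[i], seq[i + 1]))

def pick_sequence(f, g, diff_rows):
    """S6-5: of two highly repetitive sequences, keep the one that is smoother
    at the differing rows and drop the other."""
    s_f = sum(smoothness(f, i) for i in diff_rows)
    s_g = sum(smoothness(g, i) for i in diff_rows)
    return f if s_f <= s_g else g

straight = [0, 1, 2, 3, 4]      # constant slope: zero angle change everywhere
kinked = [0, 1, 5, 3, 4]        # sharp kink at row 2
print(pick_sequence(straight, kinked, [2]))    # [0, 1, 2, 3, 4]
```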
The invention has the beneficial effects that:
1. For the matching problem of the left and right rails, on the basis of the extracted track key points, the method matches track key points using prior knowledge of the distance between the rails, so as to determine the track area (left and right rails) to which the track pixels belong.
2. For the problem of fitting interleaved track lines, the method searches the image from top to bottom and sets a threshold for connecting track key points, so as to distinguish different tracks within the same picture.
3. For the problem of distinguishing the local track from adjacent tracks among multiple tracks, the method determines the centre point of the local track from the mounting position of the camera on the train; the track sequence whose key point pairs lie within a certain threshold distance of this centre point is the local track, and the rest are adjacent tracks, so the track lines of adjacent tracks are distinguished.
4. The method is simple and efficient, is suitable for multi-rail staggered environments, and can be used for assisting in driving of the rail train in complex environments.
Drawings
In order to illustrate the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings needed in the embodiments are briefly described below. By referring to the drawings, the features and advantages of the present invention can be understood more clearly; the drawings are schematic and should not be construed as limiting the present invention in any way, and a person skilled in the art can obtain other drawings on the basis of these drawings without inventive effort. Wherein:
FIG. 1 is a flow chart of a method of the present invention;
FIG. 2 is a schematic diagram of the track keypoint extraction and matching of the present invention;
FIG. 3 is a flow chart of the breadth first search algorithm of the present invention;
FIG. 4 is a flowchart of the orbital sequence screening of the present invention;
FIG. 5 is a schematic diagram of the orbital sequence screening of the invention;
FIG. 6 is an original picture of embodiment 1 of the present invention;
FIG. 7 is a divided picture according to embodiment 1 of the present invention;
FIG. 8 is an inverse perspective transformation diagram of embodiment 1 of the present invention;
FIG. 9 is a schematic diagram of a track key point extraction method in embodiment 1 of the present invention;
FIG. 10 is a track key point diagram according to embodiment 1 of the present invention;
FIG. 11 is a schematic diagram of the screening orbital sequence of example 1 of the present invention;
FIG. 12 is a screening track sequence chart according to example 1 of the present invention;
fig. 13 is a perspective view back to the original picture of embodiment 1 of the present invention.
Detailed Description
In order that the above objects, features and advantages of the present invention can be more clearly understood, a more particular description of the invention will be rendered by reference to the appended drawings. It should be noted that the embodiments of the present invention and features of the embodiments may be combined with each other without conflict.
In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present invention, however, the present invention may be practiced in other ways than those specifically described herein, and therefore the scope of the present invention is not limited by the specific embodiments disclosed below.
As shown in figs. 1-4, in order to solve the problem of fitting track lines for a rail train in a complex track environment, and specifically the problem that existing track line fitting algorithms are oriented to a single line and cannot fit interleaved track lines, the invention provides a track line fitting algorithm suitable for complex scenes. The algorithm distinguishes the track line each track-line pixel belongs to and distinguishes the track area (left and right rails); it is simple, efficient, suitable for multi-track interleaved environments, and applicable to assisted driving of rail trains in complex environments.
For the convenience of understanding the above technical aspects of the present invention, the following detailed description will be given of the above technical aspects of the present invention by way of specific examples.
Example 1
S1: inputting 1280 × 720 original picture, as shown in fig. 6, segmenting the image through a neural network to obtain the pixels of the track region, as shown in fig. 7.
S2: based on the image segmentation result obtained in step S1, the image is subjected to inverse perspective transformation, and the image is converted to the bird' S eye view, as shown in fig. 8.
S3: extracting a track key point from the inverse perspective transformation image obtained in step S2;
s3-1: taking 25 of the bird's-eye view images at equal intervals longitudinally, taking the key pixel rows of the 29 th, 58 th, 87 th and … … 720 th rows of the bird's-eye view images, and taking the coordinate mean value of each segment of continuous track pixels in the key pixel rows as the coordinates of the track key points. As shown in fig. 9 and 10.
S3-2: and taking the image between the current key pixel and the next key pixel line as a sub-image, taking the current specific key point as a seed point in the sub-image, and obtaining the key point communicated with the current specific key point through region growing.
S3-3: and taking the included angle between the y axis and the connecting line of the current key point and the connected key point in the next key pixel row as the inclined angle.
S4: matching points with the distance within a certain threshold value according to the track key points extracted in the step S3;
s4-1: and randomly selecting two track key points from the key points of the same key pixel row for matching.
S4-2: if the actual distance dis between the two matched track key points is notpair=xleft-xrightSatisfies (dis)min_th/cosθ)<dispair<(dismax_th/cos θ), then step S4-3 is executed, otherwise, step S4-1 is returned to; wherein x isleft、xrightRespectively, the abscissa of the key point of the left and right tracks, and theta ═ thetaleft+θright)/2,θleft、θrightRespectively, the tilt angles, dis, of the key points of the track calculated according to step S3-3max_th and dismin_thRespectively is the upper and lower boundaries of the distance between the matched two track key points;
s4-3: if the inclination angles of the matched two track key points satisfy thetaleft-θright|<θthIf not, go to step S4-4, otherwise return to step S4-1, thetathA threshold value for the difference between the tilt angles of the set left and right track key points;
s4-4: and successfully matching, and storing the track key point pairs.
S5: carrying out breadth-first search on successfully matched track key points to obtain an initial track sequence;
s5-1: for the current riSearching the r-th track key point pair of the rowi+1A track key point pair of row pixels;
s5-2: judgment of the r-thi+1The key point pair of the track of the row pixel is corresponding to the r-thiWhether the key point pairs of the row tracks are communicated or not is judged, if not, the step S5-1 is returned; otherwise, the r-th node to be connectedi+1Adding the key point pair of the row track into the riThe key point pairs of the row tracks are in the track sequence;
s5-3: and traversing all the track key point pairs, and if the connected track key point pairs do not exist in the current track key point pair, storing the obtained track sequence.
S6: screening the initial track sequence obtained in the step S5, and removing a repeated track sequence;
s6-1: randomly selecting two track sequences;
s6-2: as shown in fig. 11, the number (1 pair) of different point pairs between the track sequence 1 and the track sequence 2 is smaller than the threshold 2, and it is determined that the track sequence 1 and the track sequence 2 are high repetition sequences, otherwise, the step S6-1 is returned to;
s6-3: search for differences B, C between two high-repetition orbital sequences of orbital sequence 1 and orbital sequence 2
S6-4: and according to the different points of the track sequence obtained in the step S6-3, removing the track sequence with small smoothness at the different points in the two high repetition sequences.
As shown in FIG. 5, the two highly repetitive track sequences 1 and 2 differ only at one point, but track sequence 2 has a larger slope change at point 3 (|θ_{1,3,up} - θ_{1,3,down}| < |θ_{2,3,up} - θ_{2,3,down}|), so track sequence 2 is removed.
In this embodiment, at the different points B and C of the two highly repetitive track sequences 1 and 2, the smoothness values satisfy θ_{B,slope} = |θ_{B,up} - θ_{B,down}| < θ_{C,slope} = |θ_{C,up} - θ_{C,down}|; therefore track sequence 2, which contains point C, is removed, and the deduplicated track sequences are shown in fig. 12.
S7: the track sequences are re-projected back into the original image, as shown in fig. 13 for key point pairs of each track sequence. Fitting is carried out through a least square method to obtain a fitting track.
In summary, the method can fit the track lines in a complex scene. On this basis, the mounting position of the camera on the train can be used to determine the centre point of the local track: the track sequence whose key point pairs lie within a certain threshold distance of this centre point is the local track, and the rest are adjacent tracks, so the track lines of adjacent tracks are distinguished.
In the present invention, the terms "first", "second", "third" and "fourth" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance. The term "plurality" means two or more unless expressly limited otherwise.
The above description is only a preferred embodiment of the present invention and is not intended to limit the present invention, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.
Claims (5)
1. A method for fitting a track line in a complex scene based on image segmentation is characterized by comprising the following steps:
s1: inputting a track area image, and segmenting the image through a coding-decoding neural network to obtain track area pixels;
s2: performing inverse perspective transformation on the image according to the image segmentation result obtained in step S1 to convert the image to the bird's-eye view, that is:
[u′,v′,w′]=[u,v,1]·M
wherein u and v are horizontal and vertical coordinate values of the original image, u '/w', v '/w' are horizontal and vertical coordinate values after transformation, and M is a perspective transformation projection matrix;
s3: extracting a track key point from the inverse perspective transformation image obtained in step S2;
s4: matching points with the distance within a certain threshold value according to the track key points extracted in the step S3;
s5: carrying out breadth-first search on successfully matched track key points to obtain an initial track sequence;
s6: screening the initial track sequence obtained in the step S5, and removing a repeated track sequence;
s7: and fitting the track key point pairs in all the track sequences processed in the step S6 to obtain the fitted left and right tracks.
2. The method for fitting the track line in the complex scene based on the image segmentation as claimed in claim 1, wherein the processing procedure of step S3 is:
s3-1: taking n key pixel rows with equal longitudinal spacing on the aerial view obtained in the step S2, wherein r is1,r2...,riI is 1,2, …, n, wherein, r isiIn the row pixel, the jth track key pointThe abscissa of the graph is the mean of the abscissas of the j-th section of continuous track pixelsObtaining n-row track key points in total;
wherein ,himgTo divide the picture pixel height, ClIs the abscissa of the pixel of the first section of track, k is the abscissa of the pixel of the track, and m is ClThe number of pixels is included;
s3-2: will r toiRow and ri+1Taking the image between the key pixel lines as a sub-image, taking the r-th imageiTaking the key point A of the row track as a seed point, and carrying out region growth on the sub-map region to obtain the r < th > image communicated with the key point A of the tracki+1A row track key point B and a track key point C;
s3-3: obtaining the included angle theta between the connecting line between the track key point A and the track key point B and the y axis through the coordinate difference between the two track key pointsABTilt angle as track key point a:
θAB=atan((Ax-Bx)/(Ay-By))
wherein, Ax and Ay are horizontal and vertical coordinate values of the key point A of the orbit in the row; bx and By are the horizontal and vertical coordinate values of the next row of track key point B.
3. The method for fitting the track line in the complex scene based on the image segmentation as claimed in claim 2, wherein the processing procedure of step S4 is:
s4-1: randomly selecting two track key points from the same track key points to match;
s4-2: if the actual distance dis between the two matched track key points is notpair=xleft-xrightSatisfies (dis)min_th/cosθ)<dispair<(dismax_th/cos θ), then step S4-3 is executed, otherwise, step S4-1 is returned; wherein x isleft、xrightRespectively, the abscissa of the key point of the left and right tracks, and theta ═ thetaleft+θright)/2,θleft、θrightRespectively, the inclination angles, d, of the key points of the track calculated according to the step S3-3ismax_th and dismin_thRespectively is the upper and lower boundaries of the distance between the matched two track key points;
s4-3: if the inclination angles of the matched two track key points satisfy thetaleft-θright|<θthIf not, go to step S4-4, otherwise return to step S4-1, thetathA threshold value for the difference between the tilt angles of the set left and right track key points;
s4-4: and successfully matching, and storing the track key point pairs.
4. The method for fitting the track line in the complex scene based on the image segmentation as claimed in claim 3, wherein the processing procedure of step S5 is:
s5-1: for the current riSearching the r-th track key point pair of the rowi+1A track key point pair of row pixels;
s5-2: judgment of the r-thi+1The key point pair of the track of the row pixel is corresponding to the r-thiWhether the key point pairs of the row tracks are communicated or not is judged, if not, the step S5-1 is returned; otherwise, the r-th node to be connectedi+1Adding the key point pair of the row track into the riThe key point pairs of the row tracks are in the track sequence;
s5-3: and traversing all the track key point pairs, and if the connected track key point pairs do not exist in the current track key point pair, storing the obtained track sequence.
5. The method for fitting the track line in the complex scene based on the image segmentation as claimed in claim 4, wherein the processing procedure of step S6 is:
s6-1: randomly selecting two track sequences, and setting sequence numbers as f and g;
s6-2: for two stripesComparing the coordinates of the key point pairs of the tracks line by line in the track sequence if the r-th coordinates of the two track sequences are the sameiIf the coordinates of the track key point pairs of the row pixels are completely the same, judging that the point pairs are the same point pairs; otherwise, judging the point pairs to be different point pairs, and storing the different point pairs;
s6-3: if the number of different point pairs obtained in S6-2 is smaller than the set threshold value between the two track sequences, judging as a high repetition sequence, otherwise, returning to the step S6-1;
s6-4: the different point pairs in the high repetition track sequence obtained from step S6-2 wherein ,indicating the f track sequence at the riThe key point of the trajectory of the row of pixels,indicates that the g track sequence is in the riA track key point of a row pixel;
s6-5: if the different point pairs of the track sequence obtained from the step S6-4 are not identicalRemoving the g track sequence; otherwise, removing the f track sequence;
different pointsWith two upper and lower track key pointsAngle of inclination therebetweenAnd wherein ,r-th track sequenceiTrack key point of row pixel and r-th track of f-th track sequencei-1The angle between the connecting line of the key points of the track of the row pixels and the y axis,friend of the r-th track sequence of f-th trackiTrack key point of row pixel and r-th track of f-th track sequencei+1The included angle between the connecting line of the track key points of the row pixels and the y axis; different points of the f-th track sequenceSmoothness of the surfaceIn the same way, the method for preparing the composite material,for different points of the g-th track sequenceSmoothness of (d).
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110066159.3A CN112800902B (en) | 2021-01-15 | 2021-01-15 | Image segmentation-based track line fitting method under complex scene |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112800902A true CN112800902A (en) | 2021-05-14 |
CN112800902B CN112800902B (en) | 2023-09-05 |
Family
ID=75810286
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||