CN112800902A - Method for fitting track line in complex scene based on image segmentation - Google Patents

Method for fitting track line in complex scene based on image segmentation

Info

Publication number
CN112800902A
CN112800902A (application CN202110066159.3A)
Authority
CN
China
Prior art keywords
track
key point
sequence
row
key
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110066159.3A
Other languages
Chinese (zh)
Other versions
CN112800902B (en)
Inventor
余贵珍
杨松岳
刘文韬
王章宇
周彬
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beihang University
Original Assignee
Beihang University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beihang University filed Critical Beihang University
Priority to CN202110066159.3A
Publication of CN112800902A
Application granted
Publication of CN112800902B
Legal status: Active
Anticipated expiration

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00: Scenes; Scene-specific elements
    • G06V 20/50: Context or environment of the image
    • G06V 20/56: Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V 20/588: Recognition of the road, e.g. of lane markings; Recognition of the vehicle driving pattern in relation to the road
    • G06V 10/00: Arrangements for image or video recognition or understanding
    • G06V 10/20: Image preprocessing
    • G06V 10/26: Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V 10/70: Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/74: Image or video pattern matching; Proximity measures in feature spaces
    • G06V 10/75: Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; Using context analysis; Selection of dictionaries
    • G06V 10/751: Comparing pixel values or logical combinations thereof, or feature values having positional relevance, e.g. template matching
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T: CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T 10/00: Road transport of goods or passengers
    • Y02T 10/10: Internal combustion engine [ICE] based vehicles
    • Y02T 10/40: Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computing Systems (AREA)
  • Artificial Intelligence (AREA)
  • Health & Medical Sciences (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Train Traffic Observation, Control, And Security (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a method, based on image segmentation, for fitting track lines in a complex scene, which comprises the following steps: inputting a track area image and segmenting it through an encoding-decoding neural network to obtain track area pixels; performing an inverse perspective transformation on the image according to the segmentation result; extracting track key points; matching key points whose distance lies within a set threshold; performing a breadth-first search on the matched points to obtain initial track sequences; screening out poor track sequences; and fitting the key point pairs in each remaining track sequence to obtain the fitted left and right rails. The technical scheme solves the problem of fitting track lines for rail trains in complex track environments: it accurately determines the track line to which each track-line pixel belongs and distinguishes the track region (left and right rails). The method is simple and efficient, is suitable for multi-track interlaced environments, and can be used to assist the driving of rail trains in complex environments.

Description

Method for fitting track line in complex scene based on image segmentation
Technical Field
The invention belongs to the technical field of rail transit automatic driving, and particularly relates to a method for fitting a track line in a complex scene based on image segmentation.
Background
The closed environment of rail transit reduces the occurrence of accidents to a certain extent, but unpredictable foreign objects, such as line-patrol personnel or debris from natural disasters, still intrude into the rail clearance area and pose great hidden dangers to rail transit. Because of the harm caused by foreign-object intrusion, traditional static monitoring cannot meet the demands of today's complex track operating environments and all-weather high-speed operation, so real-time monitoring is necessary. The premise of dynamically monitoring and detecting foreign-object intrusion is accurate recognition of the rail train's operating environment, that is, recognition of the track region, which in turn requires track line fitting.
Many researchers are currently working on track region identification, for example fitting a single track line with a window-based detection method on image data, or detecting a simple track ahead with a laser radar. The actual rail traffic environment, however, is complex: track lines interlace with one another and their number is not fixed, which makes fitting track lines in a complex scene far more difficult, and existing schemes cannot fit multiple interlaced track lines.
Disclosure of Invention
In order to solve the problem that existing track line fitting algorithms target a single line and cannot fit interlaced track lines, the invention provides a method, based on image segmentation, for fitting track lines in a complex scene, which can determine the track line to which each track-line pixel belongs and can distinguish the track region (left and right rails). The specific technical scheme of the invention is as follows:
a method for fitting a track line in a complex scene based on image segmentation is characterized by comprising the following steps:
S1: inputting a track area image and segmenting it through an encoding-decoding neural network to obtain track area pixels;
S2: performing an inverse perspective transformation on the image according to the image segmentation result obtained in step S1 to convert it to a bird's-eye view (an illustrative code sketch follows this step list), that is:
[u′,v′,w′]=[u,v,1]·M
wherein u and v are the horizontal and vertical coordinates in the original image, u′/w′ and v′/w′ are the transformed horizontal and vertical coordinates, and M is the perspective projection matrix;
S3: extracting track key points from the inverse-perspective-transformed image obtained in step S2;
S4: matching key points whose distance lies within a set threshold, according to the track key points extracted in step S3;
S5: performing a breadth-first search on the successfully matched track key points to obtain initial track sequences;
S6: screening the initial track sequences obtained in step S5 and removing repeated track sequences;
S7: fitting the track key point pairs in all track sequences processed in step S6 to obtain the fitted left and right rails.
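By way of illustration only, the following is a minimal Python/OpenCV sketch of step S2, assuming M is obtained from four manually calibrated point correspondences; the function name and the point values are assumptions for the example, not values given by the patent.

```python
import cv2
import numpy as np

def to_birds_eye(seg_mask, src_pts, dst_pts, out_size):
    """Warp the segmented track mask to a bird's-eye view (step S2).

    seg_mask: HxW uint8 mask of track pixels from the encoder-decoder network.
    src_pts/dst_pts: four corresponding points defining the ground plane.
    out_size: (width, height) of the bird's-eye image.
    """
    # M realises [u', v', w'] = [u, v, 1] . M; OpenCV stores the transposed
    # (column-vector) form, so the warped pixel is (u'/w', v'/w').
    M = cv2.getPerspectiveTransform(np.float32(src_pts), np.float32(dst_pts))
    return cv2.warpPerspective(seg_mask, M, out_size), M

# Hypothetical calibration for a 1280x720 frame:
src = [(400, 400), (880, 400), (1280, 720), (0, 720)]
dst = [(320, 0), (960, 0), (960, 720), (320, 720)]
# bev, M = to_birds_eye(seg_mask, src, dst, (1280, 720))
```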
Further, the processing procedure of step S3 is:
S3-1: take n key pixel rows r_1, r_2, ..., r_n with equal longitudinal spacing on the bird's-eye view obtained in step S2, where r_i = (h_img / n) · i, i = 1, 2, ..., n. In row r_i, the abscissa of the j-th track key point P_i^j is the mean of the abscissas of the j-th continuous segment of track pixels:
x̄_i^j = (1/m) · Σ_{k ∈ C_j} k
wherein h_img is the pixel height of the segmented picture, C_l is the l-th continuous segment of track-pixel abscissas, k is a track-pixel abscissa, and m is the number of pixels contained in C_l; n rows of track key points are obtained in total;
S3-2: take the image between key pixel rows r_i and r_{i+1} as a sub-image; using the track key point A of row r_i as a seed point, perform region growing on the sub-image region to obtain the track key points B and C of row r_{i+1} that are connected with track key point A;
S3-3: from the coordinate difference of the two track key points, obtain the angle θ_AB between the y axis and the line connecting track key point A and track key point B, as the tilt angle of track key point A:
θ_AB = atan((Ax − Bx) / (Ay − By))
wherein Ax and Ay are the horizontal and vertical coordinates of track key point A in the current row, and Bx and By are the horizontal and vertical coordinates of track key point B in the next row.
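A minimal sketch of steps S3-1 and S3-3 under the notation above, assuming the bird's-eye segmentation is a binary NumPy mask; helper names such as row_keypoints and tilt_angle are illustrative.

```python
import numpy as np

def row_keypoints(bev_mask, n_rows):
    """For each key row r_i (equally spaced), return one key point per
    continuous run of track pixels: the mean abscissa of that run (S3-1)."""
    h_img = bev_mask.shape[0]
    rows = [round(h_img / n_rows * (i + 1)) - 1 for i in range(n_rows)]  # r_i = (h_img/n)*i
    keypoints = []
    for r in rows:
        xs = np.flatnonzero(bev_mask[r])                       # track-pixel abscissas
        runs = np.split(xs, np.where(np.diff(xs) > 1)[0] + 1)  # continuous segments C_j
        keypoints.append([(float(c.mean()), r) for c in runs if len(c)])
    return keypoints

def tilt_angle(A, B):
    """theta_AB = atan((Ax - Bx) / (Ay - By)): angle to the y axis (S3-3)."""
    return np.arctan2(A[0] - B[0], A[1] - B[1])
```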
Further, the processing procedure of step S4 is:
S4-1: randomly select two track key points from the key points of the same row and match them;
S4-2: if the actual distance dis_pair = x_left − x_right between the two matched track key points satisfies (dis_min_th / cos θ) < dis_pair < (dis_max_th / cos θ), execute step S4-3; otherwise, return to step S4-1; wherein x_left and x_right are the abscissas of the left and right track key points, θ = (θ_left + θ_right)/2, θ_left and θ_right are the tilt angles of the two key points calculated according to step S3-3, and dis_max_th and dis_min_th are the upper and lower bounds of the distance between the two matched track key points;
S4-3: if the tilt angles of the two matched track key points satisfy |θ_left − θ_right| < θ_th, proceed to step S4-4; otherwise, return to step S4-1; θ_th is the set threshold for the difference between the tilt angles of the left and right track key points;
S4-4: the match succeeds; store the track key point pair.
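A sketch of the S4 gating, assuming each key point carries the tilt angle from S3-3; the threshold values are illustrative assumptions, roughly a rail gauge expressed in bird's-eye pixels.

```python
import itertools
import math

DIS_MIN_TH, DIS_MAX_TH = 80.0, 120.0  # assumed gauge bounds in pixels
THETA_TH = math.radians(5.0)          # assumed tilt-difference threshold

def match_row(points):
    """points: list of (x, theta) in one key row; returns left/right pairs
    passing both the distance gate (S4-2) and the tilt gate (S4-3)."""
    pairs = []
    for (xl, tl), (xr, tr) in itertools.combinations(sorted(points), 2):
        theta = (tl + tr) / 2.0               # mean tilt of the candidate pair
        dis_pair = xr - xl                    # sorted, so xr >= xl
        if (DIS_MIN_TH / math.cos(theta) < dis_pair < DIS_MAX_TH / math.cos(theta)
                and abs(tl - tr) < THETA_TH):
            pairs.append(((xl, tl), (xr, tr)))
    return pairs
```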
Further, the processing procedure of step S5 is:
S5-1: for the current track key point pair of row r_i, search the track key point pairs of row r_{i+1};
S5-2: judge whether a track key point pair of row r_{i+1} is connected with the track key point pair of row r_i; if not, return to step S5-1; otherwise, add the connected key point pair of row r_{i+1} to the track sequence containing the key point pair of row r_i;
S5-3: traverse all track key point pairs; if no connected key point pair exists for the current track key point pair, store the track sequence obtained.
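A sketch of the S5 breadth-first linking, assuming pairs_by_row holds the matched pairs of each key row in top-to-bottom order and that connected(p, q) wraps the region-growing connectivity test of S3-2; both names are assumptions.

```python
from collections import deque

def build_sequences(pairs_by_row, connected):
    """Group matched key-point pairs into track sequences by breadth-first
    search: a pair in row r_i links to connected pairs in row r_{i+1}."""
    sequences, used = [], set()
    for i, row in enumerate(pairs_by_row):
        for p in row:
            if (i, p) in used:
                continue
            seq, queue = [], deque([(i, p)])
            while queue:
                j, q = queue.popleft()
                if (j, q) in used:
                    continue
                used.add((j, q))
                seq.append(q)
                if j + 1 < len(pairs_by_row):      # expand into the next row
                    queue.extend((j + 1, nxt) for nxt in pairs_by_row[j + 1]
                                 if connected(q, nxt))
            sequences.append(seq)
    return sequences
```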
Further, the processing procedure of step S6 is:
S6-1: randomly select two track sequences, with sequence numbers f and g;
S6-2: compare the coordinates of the track key point pairs of the two track sequences row by row; if the coordinates of the key point pairs of the two sequences at row r_i are identical, judge them to be the same point pair; otherwise, judge them to be a differing point pair and store it;
S6-3: if the number of differing point pairs obtained in S6-2 is smaller than the set threshold, judge the two track sequences to be highly repetitive; otherwise, return to step S6-1;
S6-4: take the differing point pairs (P_f^{r_i}, P_g^{r_i}) in the highly repetitive track sequences obtained from step S6-2, where P_f^{r_i} denotes the track key point of the f-th track sequence at row r_i, and P_g^{r_i} denotes the track key point of the g-th track sequence at row r_i;
S6-5: if the differing points obtained from step S6-4 satisfy θ_{f,slope} < θ_{g,slope}, remove the g-th track sequence; otherwise, remove the f-th track sequence.
A differing point P_f^{r_i} forms tilt angles θ_{f,up} and θ_{f,down} with the two track key points above and below it, wherein θ_{f,up} is the angle between the y axis and the line connecting the key point of the f-th track sequence at row r_i with its key point at row r_{i−1}, and θ_{f,down} is the angle between the y axis and the line connecting its key point at row r_i with its key point at row r_{i+1}; the smoothness of the f-th track sequence at the differing point P_f^{r_i} is θ_{f,slope} = |θ_{f,up} − θ_{f,down}|. Similarly, θ_{g,slope} = |θ_{g,up} − θ_{g,down}| is the smoothness of the g-th track sequence at its differing point P_g^{r_i}.
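A sketch of the S6 screening under the definitions above, reusing tilt_angle from the S3 sketch; for readability each sequence here holds the key points of one rail, DIFF_TH is an assumed threshold, and the differing point is assumed to be interior (one row above and below it exists).

```python
DIFF_TH = 2  # assumed threshold on the number of differing point pairs

def smoothness(seq, i):
    """theta_slope = |theta_up - theta_down| at the i-th key point (S6-5)."""
    theta_up = tilt_angle(seq[i], seq[i - 1])    # line to the key point one row above
    theta_down = tilt_angle(seq[i], seq[i + 1])  # line to the key point one row below
    return abs(theta_up - theta_down)

def screen(seq_f, seq_g):
    """Return the sequences to keep out of seq_f and seq_g."""
    diffs = [i for i, (a, b) in enumerate(zip(seq_f, seq_g)) if a != b]
    if not diffs:
        return (seq_f,)                # exact duplicates: keep one
    if len(diffs) >= DIFF_TH:
        return seq_f, seq_g            # not highly repetitive: keep both
    i = diffs[0]
    # the sequence with smaller theta_slope bends less at the differing point
    return (seq_f,) if smoothness(seq_f, i) < smoothness(seq_g, i) else (seq_g,)
```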
The invention has the following beneficial effects:
1. For the problem of matching the left and right rails, the method matches track key points, on the basis of the extracted key points, using prior knowledge of the distance between the rails, so as to determine the track region (left and right rails) to which the track pixels belong.
2. For the problem of fitting interlaced track lines, the method searches the image from top to bottom and sets a threshold to connect track key points, thereby distinguishing different tracks in the same picture.
3. For the problem of distinguishing the ego track from adjacent tracks among multiple tracks, the method determines the ego-track center point from the mounting position of the camera on the train; the track sequence whose key point pairs lie within a set threshold of this center point is the ego track, and the rest are adjacent tracks, so the track lines of adjacent tracks are distinguished.
4. The method is simple and efficient, is suitable for multi-track interlaced environments, and can be used to assist the driving of rail trains in complex environments.
Drawings
In order to illustrate the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings needed in the embodiments are briefly described below, so that the features and advantages of the present invention can be understood more clearly by referring to them. The drawings are schematic and should not be construed as limiting the present invention in any way, and a person skilled in the art can obtain other drawings from them without inventive effort. Wherein:
FIG. 1 is a flow chart of a method of the present invention;
FIG. 2 is a schematic diagram of the track keypoint extraction and matching of the present invention;
FIG. 3 is a flow chart of the breadth first search algorithm of the present invention;
FIG. 4 is a flowchart of the orbital sequence screening of the present invention;
FIG. 5 is a schematic diagram of the orbital sequence screening of the invention;
FIG. 6 is an original picture of embodiment 1 of the present invention;
FIG. 7 is a divided picture according to embodiment 1 of the present invention;
FIG. 8 is an inverse perspective transformation diagram of embodiment 1 of the present invention;
FIG. 9 is a schematic diagram of a track key point extraction method in embodiment 1 of the present invention;
FIG. 10 is a track key point diagram according to embodiment 1 of the present invention;
FIG. 11 is a schematic diagram of the screening orbital sequence of example 1 of the present invention;
FIG. 12 is a screening track sequence chart according to example 1 of the present invention;
fig. 13 is a perspective view back to the original picture of embodiment 1 of the present invention.
Detailed Description
In order that the above objects, features and advantages of the present invention can be more clearly understood, a more particular description of the invention will be rendered by reference to the appended drawings. It should be noted that the embodiments of the present invention and features of the embodiments may be combined with each other without conflict.
In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present invention; however, the present invention may be practiced in ways other than those described herein, and therefore the scope of the present invention is not limited by the specific embodiments disclosed below.
As shown in figs. 1-4, in order to solve the problem of fitting track lines for rail trains in complex track environments, and specifically the problem that existing track line fitting algorithms target a single line and cannot fit interlaced track lines, the invention provides a track line fitting algorithm suitable for complex scenes. It can determine the track line to which each track-line pixel belongs, can distinguish the track region (left and right rails), is simple, efficient, and suitable for multi-track interlaced environments, and is applicable to assisted driving of rail trains in complex environments.
For the convenience of understanding the above technical aspects of the present invention, the following detailed description will be given of the above technical aspects of the present invention by way of specific examples.
Example 1
S1: input a 1280 × 720 original picture, as shown in fig. 6, and segment the image through the neural network to obtain the track region pixels, as shown in fig. 7.
S2: based on the image segmentation result obtained in step S1, perform an inverse perspective transformation on the image and convert it to the bird's-eye view, as shown in fig. 8.
S3: extract track key points from the inverse-perspective-transformed image obtained in step S2;
S3-1: take 25 key pixel rows at equal longitudinal intervals on the bird's-eye view, i.e. rows 29, 58, 87, ..., 720, and take the mean of the abscissas of each continuous segment of track pixels in a key pixel row as the coordinates of a track key point, as shown in figs. 9 and 10.
S3-2: take the image between the current key pixel row and the next key pixel row as a sub-image; in the sub-image, use the current key point as a seed point and obtain the key points connected with it by region growing.
S3-3: take the angle between the y axis and the line connecting the current key point with its connected key point in the next key pixel row as the tilt angle.
S4: match key points whose distance lies within a set threshold, according to the track key points extracted in step S3;
S4-1: randomly select two track key points from the key points of the same key pixel row and match them.
S4-2: if the actual distance dis_pair = x_left − x_right between the two matched track key points satisfies (dis_min_th / cos θ) < dis_pair < (dis_max_th / cos θ), execute step S4-3; otherwise, return to step S4-1; wherein x_left and x_right are the abscissas of the left and right track key points, θ = (θ_left + θ_right)/2, θ_left and θ_right are the tilt angles of the two key points calculated according to step S3-3, and dis_max_th and dis_min_th are the upper and lower bounds of the distance between the two matched track key points;
S4-3: if the tilt angles of the two matched track key points satisfy |θ_left − θ_right| < θ_th, proceed to step S4-4; otherwise, return to step S4-1; θ_th is the set threshold for the difference between the tilt angles of the left and right track key points;
S4-4: the match succeeds; store the track key point pair.
S5: perform a breadth-first search on the successfully matched track key points to obtain initial track sequences;
S5-1: for the current track key point pair of row r_i, search the track key point pairs of row r_{i+1};
S5-2: judge whether a track key point pair of row r_{i+1} is connected with the track key point pair of row r_i; if not, return to step S5-1; otherwise, add the connected key point pair of row r_{i+1} to the track sequence containing the key point pair of row r_i;
S5-3: traverse all track key point pairs; if no connected key point pair exists for the current track key point pair, store the track sequence obtained.
S6: screen the initial track sequences obtained in step S5 and remove repeated track sequences;
S6-1: randomly select two track sequences;
S6-2: as shown in fig. 11, the number of differing point pairs between track sequence 1 and track sequence 2 (1 pair) is smaller than the threshold of 2, so the two sequences are judged to be highly repetitive; otherwise, return to step S6-1;
S6-3: search for the differing points B and C between the two highly repetitive sequences, track sequence 1 and track sequence 2;
S6-4: according to the differing points obtained in step S6-3, remove the one of the two highly repetitive sequences with the worse smoothness (larger slope change) at the differing point.
As shown in fig. 5, the two highly repetitive track sequences, track sequence 1 and track sequence 2, differ at only one point, but track sequence 2 has the larger slope change at point 3 (|θ_{1,3,up} − θ_{1,3,down}| < |θ_{2,3,up} − θ_{2,3,down}|), so track sequence 2 is removed.
In this embodiment, at the differing points B and C of track sequence 1 and track sequence 2, the smoothness at point B satisfies θ_{B,slope} = |θ_{B,up} − θ_{B,down}| < θ_{C,slope} = |θ_{C,up} − θ_{C,down}|; therefore track sequence 2, which contains point C, is removed. The de-duplicated track sequences are shown in fig. 12.
S7: re-project the key point pairs of each track sequence back into the original image, as shown in fig. 13, and fit them by the least square method to obtain the fitted tracks.
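As an illustration of step S7, a sketch that reprojects one rail's key points with the inverse of the matrix M from step S2 and fits them by least squares; the quadratic degree is an assumption, since the patent specifies only a least square fit.

```python
import cv2
import numpy as np

def fit_rail(points, M, degree=2):
    """points: (x, y) key points of one rail in the bird's-eye view;
    M: the perspective matrix from step S2, inverted to return to the image."""
    pts = np.float32(points).reshape(-1, 1, 2)
    back = cv2.perspectiveTransform(pts, np.linalg.inv(M)).reshape(-1, 2)
    # Rails are near-vertical in the image, so fit x as a polynomial in y.
    coeffs = np.polyfit(back[:, 1], back[:, 0], degree)
    return np.poly1d(coeffs)

# left = fit_rail(left_pts, M); left(500) gives the rail abscissa at row y = 500.
```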
In summary, the method can fit the track lines in a complex scene. On this basis, the mounting position of the camera on the train can be used to determine the ego-track center point; the track sequence whose key point pairs lie within a set threshold of this center point is the ego track, and the rest are adjacent tracks, whereby the track lines of adjacent tracks are distinguished.
In the present invention, the terms "first", "second", "third" and "fourth" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance. The term "plurality" means two or more unless expressly limited otherwise.
The above description is only a preferred embodiment of the present invention and is not intended to limit the present invention, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (5)

1. A method for fitting a track line in a complex scene based on image segmentation is characterized by comprising the following steps:
S1: inputting a track area image and segmenting it through an encoding-decoding neural network to obtain track area pixels;
S2: performing an inverse perspective transformation on the image according to the image segmentation result obtained in step S1 to convert it to a bird's-eye view, that is:
[u′,v′,w′]=[u,v,1]·M
wherein u and v are the horizontal and vertical coordinates in the original image, u′/w′ and v′/w′ are the transformed horizontal and vertical coordinates, and M is the perspective projection matrix;
S3: extracting track key points from the inverse-perspective-transformed image obtained in step S2;
S4: matching key points whose distance lies within a set threshold, according to the track key points extracted in step S3;
S5: performing a breadth-first search on the successfully matched track key points to obtain initial track sequences;
S6: screening the initial track sequences obtained in step S5 and removing repeated track sequences;
S7: fitting the track key point pairs in all track sequences processed in step S6 to obtain the fitted left and right rails.
2. The method for fitting track lines in a complex scene based on image segmentation as claimed in claim 1, wherein the processing procedure of step S3 is:
S3-1: take n key pixel rows r_1, r_2, ..., r_n with equal longitudinal spacing on the bird's-eye view obtained in step S2, where r_i = (h_img / n) · i, i = 1, 2, ..., n; in row r_i, the abscissa of the j-th track key point P_i^j is the mean of the abscissas of the j-th continuous segment of track pixels,
x̄_i^j = (1/m) · Σ_{k ∈ C_j} k
wherein h_img is the pixel height of the segmented picture, C_l is the l-th continuous segment of track-pixel abscissas, k is a track-pixel abscissa, and m is the number of pixels contained in C_l; n rows of track key points are obtained in total;
S3-2: take the image between key pixel rows r_i and r_{i+1} as a sub-image; using the track key point A of row r_i as a seed point, perform region growing on the sub-image region to obtain the track key points B and C of row r_{i+1} that are connected with track key point A;
S3-3: from the coordinate difference of the two track key points, obtain the angle θ_AB between the y axis and the line connecting track key point A and track key point B, as the tilt angle of track key point A:
θ_AB = atan((Ax − Bx) / (Ay − By))
wherein Ax and Ay are the horizontal and vertical coordinates of track key point A in the current row, and Bx and By are the horizontal and vertical coordinates of track key point B in the next row.
3. The method for fitting track lines in a complex scene based on image segmentation as claimed in claim 2, wherein the processing procedure of step S4 is:
S4-1: randomly select two track key points from the key points of the same row and match them;
S4-2: if the actual distance dis_pair = x_left − x_right between the two matched track key points satisfies (dis_min_th / cos θ) < dis_pair < (dis_max_th / cos θ), execute step S4-3; otherwise, return to step S4-1; wherein x_left and x_right are the abscissas of the left and right track key points, θ = (θ_left + θ_right)/2, θ_left and θ_right are the tilt angles of the two key points calculated according to step S3-3, and dis_max_th and dis_min_th are the upper and lower bounds of the distance between the two matched track key points;
S4-3: if the tilt angles of the two matched track key points satisfy |θ_left − θ_right| < θ_th, proceed to step S4-4; otherwise, return to step S4-1; θ_th is the set threshold for the difference between the tilt angles of the left and right track key points;
S4-4: the match succeeds; store the track key point pair.
4. The method for fitting track lines in a complex scene based on image segmentation as claimed in claim 3, wherein the processing procedure of step S5 is:
S5-1: for the current track key point pair of row r_i, search the track key point pairs of row r_{i+1};
S5-2: judge whether a track key point pair of row r_{i+1} is connected with the track key point pair of row r_i; if not, return to step S5-1; otherwise, add the connected key point pair of row r_{i+1} to the track sequence containing the key point pair of row r_i;
S5-3: traverse all track key point pairs; if no connected key point pair exists for the current track key point pair, store the track sequence obtained.
5. The method for fitting track lines in a complex scene based on image segmentation as claimed in claim 4, wherein the processing procedure of step S6 is:
S6-1: randomly select two track sequences, with sequence numbers f and g;
S6-2: compare the coordinates of the track key point pairs of the two track sequences row by row; if the coordinates of the key point pairs of the two sequences at row r_i are identical, judge them to be the same point pair; otherwise, judge them to be a differing point pair and store it;
S6-3: if the number of differing point pairs obtained in S6-2 is smaller than the set threshold, judge the two track sequences to be highly repetitive; otherwise, return to step S6-1;
S6-4: take the differing point pairs (P_f^{r_i}, P_g^{r_i}) in the highly repetitive track sequences obtained from step S6-2, where P_f^{r_i} denotes the track key point of the f-th track sequence at row r_i, and P_g^{r_i} denotes the track key point of the g-th track sequence at row r_i;
S6-5: if the differing points obtained from step S6-4 satisfy θ_{f,slope} < θ_{g,slope}, remove the g-th track sequence; otherwise, remove the f-th track sequence;
a differing point P_f^{r_i} forms tilt angles θ_{f,up} and θ_{f,down} with the two track key points above and below it, wherein θ_{f,up} is the angle between the y axis and the line connecting the key point of the f-th track sequence at row r_i with its key point at row r_{i−1}, and θ_{f,down} is the angle between the y axis and the line connecting its key point at row r_i with its key point at row r_{i+1}; the smoothness of the f-th track sequence at the differing point P_f^{r_i} is θ_{f,slope} = |θ_{f,up} − θ_{f,down}|; similarly, θ_{g,slope} = |θ_{g,up} − θ_{g,down}| is the smoothness of the g-th track sequence at its differing point P_g^{r_i}.
CN202110066159.3A (filed 2021-01-15, priority date 2021-01-15): Image segmentation-based track line fitting method under complex scene. Status: Active. Granted as CN112800902B.

Priority Applications (1)

Application Number: CN202110066159.3A (granted as CN112800902B); Priority Date: 2021-01-15; Filing Date: 2021-01-15; Title: Image segmentation-based track line fitting method under complex scene

Applications Claiming Priority (1)

Application Number: CN202110066159.3A (granted as CN112800902B); Priority Date: 2021-01-15; Filing Date: 2021-01-15; Title: Image segmentation-based track line fitting method under complex scene

Publications (2)

Publication Number; Publication Date
CN112800902A; 2021-05-14
CN112800902B; 2023-09-05

Family

ID=75810286

Family Applications (1)

Application Number: CN202110066159.3A (Active; granted as CN112800902B); Priority Date: 2021-01-15; Filing Date: 2021-01-15; Title: Image segmentation-based track line fitting method under complex scene

Country Status (1)

Country Link
CN (1) CN112800902B (en)


Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100239123A1 (en) * 2007-10-12 2010-09-23 Ryuji Funayama Methods and systems for processing of video data
CN101770581A (en) * 2010-01-08 2010-07-07 西安电子科技大学 Semi-automatic detecting method for road centerline in high-resolution city remote sensing image
CN107590438A (en) * 2017-08-16 2018-01-16 中国地质大学(武汉) A kind of intelligent auxiliary driving method and system
CN109711372A (en) * 2018-12-29 2019-05-03 驭势科技(北京)有限公司 A kind of recognition methods of lane line and system, storage medium, server

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
SONGYUE YANG et al.: "A Topology Guided Method for Rail-Track Detection", IEEE Transactions on Vehicular Technology, Vol. 71 *

Also Published As

Publication number Publication date
CN112800902B (en) 2023-09-05

Similar Documents

Publication Publication Date Title
CN106290388B (en) A kind of insulator breakdown automatic testing method
CN102902974B (en) Image based method for identifying railway overhead-contact system bolt support identifying information
CN103745224B (en) Image-based railway contact net bird-nest abnormal condition detection method
CN102708356B (en) Automatic license plate positioning and recognition method based on complex background
CN108921875A (en) A kind of real-time traffic flow detection and method for tracing based on data of taking photo by plane
CN105005771B (en) A kind of detection method of the lane line solid line based on light stream locus of points statistics
CN107067002A (en) Road licence plate recognition method in a kind of dynamic video
CN101339601B (en) License plate Chinese character recognition method based on SIFT algorithm
CN103903018A (en) Method and system for positioning license plate in complex scene
CN110675415B (en) Road ponding area detection method based on deep learning enhanced example segmentation
CN106529532A (en) License plate identification system based on integral feature channels and gray projection
CN108509950B (en) Railway contact net support number plate detection and identification method based on probability feature weighted fusion
CN102663760A (en) Location and segmentation method for windshield area of vehicle in images
CN108875803A (en) A kind of detection of harmful influence haulage vehicle and recognition methods based on video image
CN103902981A (en) Method and system for identifying license plate characters based on character fusion features
CN108198417A (en) A kind of road cruising inspection system based on unmanned plane
CN111597904B (en) Identification method for inclination of tunnel cable bracket
CN110443142B (en) Deep learning vehicle counting method based on road surface extraction and segmentation
CN109961065A (en) A kind of surface vessel object detection method
CN112347967B (en) Pedestrian detection method fusing motion information in complex scene
CN109784261B (en) Pedestrian segmentation and identification method based on machine vision
CN112800902A (en) Method for fitting track line in complex scene based on image segmentation
Wu et al. Block-based hough transform for recognition of zebra crossing in natural scene images
CN111597939A (en) High-speed rail line nest defect detection method based on deep learning
CN113505793B (en) Rectangular target detection method under complex background

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant