CN115937719A - Runway contour line obtaining method based on maximum area - Google Patents

Runway contour line obtaining method based on maximum area

Info

Publication number
CN115937719A
CN115937719A (application CN202211703502.1A)
Authority
CN
China
Prior art keywords
point
pixel
value
line
maximum
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211703502.1A
Other languages
Chinese (zh)
Inventor
马波
周彦
倪静
王瑞
李云霞
胡逸雯
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
AVIC Chengdu Aircraft Design and Research Institute
Original Assignee
AVIC Chengdu Aircraft Design and Research Institute
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by AVIC Chengdu Aircraft Design and Research Institute filed Critical AVIC Chengdu Aircraft Design and Research Institute
Priority to CN202211703502.1A priority Critical patent/CN115937719A/en
Publication of CN115937719A publication Critical patent/CN115937719A/en
Pending legal-status Critical Current

Classifications

    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02A: TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A 30/00: Adapting or protecting infrastructure or their operation
    • Y02A 30/60: Planning or developing urban green infrastructure

Landscapes

  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention belongs to the field of image processing, and particularly relates to a runway contour line acquisition method based on the maximum area. Runway position information is first acquired through target detection; the runway contour is then segmented by a semantic segmentation neural network, the connected region with the largest area in the segmented contour is extracted, and other erroneous connected regions are excluded; finally, the runway contour line is obtained by combining the extracted contour information with least-squares and centre-line fitting algorithms.

Description

Runway contour line obtaining method based on maximum area
Technical Field
The invention belongs to the field of image processing, and particularly relates to a runway contour line acquisition method based on the maximum area.
Background
With the continuous progress of image acquisition and image processing technology, the resolution of acquired images and the speed of image processing keep increasing. Extracting target objects from collected images has therefore become an important application. As an important man-made ground feature, an airport is a key reference for unmanned aerial vehicle take-off, landing and related operations. Most current research focuses on identifying and locating airports themselves, while automatic extraction of airport contours remains insufficiently studied.
Disclosure of Invention
The purpose of the invention is to provide a runway contour line acquisition method based on the maximum area.
The technical scheme is as follows:
A runway contour line obtaining method based on the maximum area comprises the following steps:
reading a captured frame of the image into a target detection neural network;
acquiring the result of the target detection neural network and checking whether the result is valid;
when the result of the target detection neural network is valid, cropping an image containing the target and passing the cropped image to a semantic segmentation neural network;
obtaining the result of the semantic segmentation neural network, and binarizing, respectively, the left edge line, right edge line and bottom edge line regions of the runway area in that result, wherein the binarization assigns 1 to pixels whose gray value is greater than a threshold and 0 to pixels whose gray value is less than the threshold;
setting label values, respectively, for the points with pixel value 1 in the left edge line, right edge line and bottom edge line regions of the binarized image, and determining the maximum-area connected regions of the left edge line, right edge line and bottom edge line according to the label values;
for each of the maximum-area connected domains of the left edge line, right edge line and bottom edge line regions, traversing the pixels in the connected domain and taking the first point with pixel value 1 on each side as an element of the corresponding side point set;
performing least-squares fitting on the elements of the two side point sets, respectively, to obtain the slope and intercept of the straight line corresponding to each side point set, together with the coordinates of its top and bottom points;
and adding the top-point and bottom-point pixel coordinates of the straight lines corresponding to the two side point sets and taking the average to obtain the top-point and bottom-point pixel coordinates of the current edge contour line.
Further, setting label values for the points with pixel value 1 in the left edge line, right edge line and bottom edge line regions of the binarized image and determining the maximum-area connected regions of the left edge line, right edge line and bottom edge line according to the label values specifically comprises:
scanning the image in sequence, finding the first pixel with value 1, and setting its label to 1;
continuing the scan, finding the next pixel with value 1, and checking the pixels above, below, to the left and to the right of it;
when the values of the pixels above, below, to the left and to the right are all 0, assigning the pixel a new label, i.e. the current label value plus one;
when one of the pixels above, below, to the left or to the right of the current pixel has value 1, assigning that pixel's label to the current pixel;
when several of the adjacent pixels have value 1, assigning the smaller label value to the current pixel;
after the first scan is finished, performing a second scan that updates the label of each point to the minimum label value in its connected region;
determining each connected region according to the updated label values;
and calculating the area of each connected domain to obtain the connected domain with the largest area.
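The labelling procedure described above is a standard two-pass connected-component pass followed by an area comparison. The Python sketch below is only an illustration of that procedure, not code from the patent: the function name, the use of NumPy, and the restriction of the first pass to the already-scanned upper and left neighbours are assumptions.

```python
import numpy as np

def largest_connected_component(binary: np.ndarray) -> np.ndarray:
    """Return a mask keeping only the largest 4-connected region of 1-pixels."""
    h, w = binary.shape
    labels = np.zeros((h, w), dtype=np.int32)
    next_label = 0
    parent = {}                      # provisional label -> equivalent smaller label

    def find(lab):
        while parent[lab] != lab:
            lab = parent[lab]
        return lab

    # First pass: provisional labels; during a raster scan only the upper and
    # left neighbours have already been visited, so only they can carry a label.
    for y in range(h):
        for x in range(w):
            if binary[y, x] != 1:
                continue
            neighbours = []
            if y > 0 and labels[y - 1, x] > 0:
                neighbours.append(labels[y - 1, x])
            if x > 0 and labels[y, x - 1] > 0:
                neighbours.append(labels[y, x - 1])
            if not neighbours:                       # no labelled neighbour: new label
                next_label += 1
                labels[y, x] = next_label
                parent[next_label] = next_label
            else:                                    # take the smallest neighbour label
                roots = [find(n) for n in neighbours]
                m = min(roots)
                labels[y, x] = m
                for r in roots:                      # record the label equivalence
                    parent[r] = m

    # Second pass: replace every label by the minimum label of its region.
    for y in range(h):
        for x in range(w):
            if labels[y, x] > 0:
                labels[y, x] = find(labels[y, x])

    # Area = pixel count of each connected domain; keep the largest one.
    ids, counts = np.unique(labels[labels > 0], return_counts=True)
    if ids.size == 0:
        return np.zeros_like(binary)
    return (labels == ids[np.argmax(counts)]).astype(binary.dtype)
```

For each of the three binarized edge-line masks, such a routine would be called once to obtain its maximum-area connected domain.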
Further, determining each connected region according to the updated label values is specifically:
checking all points: if the labels of two adjacent points are different, treating the two points as belonging to the same connected region (i.e. the two labels are equivalent).
Further, for the maximum-area connected domain of the left edge line region, traversing the pixels in that connected domain and finding the first point with pixel value 1 on each side as an element of the corresponding side point set specifically comprises:
traversing the pixels in the maximum connected domain of the left edge line region from left to right and from top to bottom, and taking the first point with pixel value 1 in each row as an element of the left point set;
and traversing the pixels in the maximum connected domain of the left edge line region from right to left and from top to bottom, and taking the first point with pixel value 1 in each row as an element of the right point set.
Further, for the maximum-area connected domain of the right edge line region, traversing the pixels in that connected domain and finding the first point with pixel value 1 on each side as an element of the corresponding side point set specifically comprises:
traversing the pixel values in the maximum connected domain of the right edge line region from left to right and from top to bottom, and taking the first point with pixel value 1 in each row as an element of the left point set;
and traversing the pixel values in the maximum connected domain of the right edge line region from right to left and from top to bottom, and taking the first point with pixel value 1 in each row as an element of the right point set.
Further, for the maximum-area connected domain of the bottom edge line region, traversing the pixels in that connected domain and finding the first point with pixel value 1 on each side as an element of the corresponding side point set specifically comprises:
traversing the pixels in the maximum connected domain of the bottom edge line region from top to bottom and from left to right, and taking the first point with pixel value 1 in each column as an element of the upper point set;
and traversing the pixels in the maximum connected domain of the bottom edge line region from bottom to top and from left to right, and taking the first point with pixel value 1 in each column as an element of the lower point set.
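As a companion to the three traversal rules above, the following sketch shows one way the side point sets could be collected; the function names and the NumPy index lookups (equivalent to stopping at the first pixel with value 1 from each side) are assumptions for illustration, not text from the patent.

```python
import numpy as np

def side_point_sets_by_row(mask: np.ndarray):
    """Left/right edge-line regions: scan every row from the left and from the
    right, keeping the first pixel with value 1 met from each side."""
    left_pts, right_pts = [], []
    for y in range(mask.shape[0]):
        xs = np.flatnonzero(mask[y] == 1)
        if xs.size:                         # the row crosses the connected domain
            left_pts.append((xs[0], y))     # first 1 when scanning left to right
            right_pts.append((xs[-1], y))   # first 1 when scanning right to left
    return left_pts, right_pts

def side_point_sets_by_column(mask: np.ndarray):
    """Bottom edge-line region: scan every column from the top and from the
    bottom, keeping the first pixel with value 1 met from each side."""
    upper_pts, lower_pts = [], []
    for x in range(mask.shape[1]):
        ys = np.flatnonzero(mask[:, x] == 1)
        if ys.size:
            upper_pts.append((x, ys[0]))    # first 1 when scanning top to bottom
            lower_pts.append((x, ys[-1]))   # first 1 when scanning bottom to top
    return upper_pts, lower_pts
```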
Further, the method further comprises:
correcting, according to the longitudinal pixel coordinate of the left end point of the bottom edge line contour, the longitudinal coordinate of the vertex of the left edge line contour nearest the bottom edge line, and updating its transverse coordinate from the corrected longitudinal coordinate.
Further, the method further comprises:
correcting, according to the longitudinal pixel coordinate of the right end point of the bottom edge line contour, the longitudinal coordinate of the vertex of the right edge line contour nearest the bottom edge line, and updating its transverse coordinate from the corrected longitudinal coordinate.
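The two corrections above amount to taking the longitudinal (y) coordinate of a bottom-edge end point and re-evaluating the corresponding side line at that y. A minimal sketch, assuming the left and right edge lines are fitted as x = k*y + b (a parameterisation chosen here because those lines are near-vertical; the patent only states that the y coordinate is substituted into the line equation):

```python
def correct_bottom_vertex(side_k: float, side_b: float, bottom_endpoint_y: float):
    """Corrected (x, y) bottom vertex of a left or right edge line, assuming the
    side line was fitted as x = k*y + b."""
    y = bottom_endpoint_y       # longitudinal coordinate taken from the bottom edge end point
    x = side_k * y + side_b     # transverse coordinate updated from the line equation
    return x, y
```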
Advantageous effects:
Runway position information is first acquired through target detection; the runway contour is then segmented by a semantic segmentation neural network, the connected region with the largest area in the segmented contour is extracted, and other erroneous connected regions are excluded; finally, the runway contour line is obtained by combining the extracted contour information with least-squares and centre-line fitting algorithms.
Drawings
Fig. 1 is a flowchart of a runway contour line acquisition method based on a maximum area.
Detailed Description
The invention relates to image processing, target recognition and straight-line fitting technologies, and in particular to a runway contour line acquisition method based on the maximum area.
The method acquires runway position information through target detection, segments the runway contour with a semantic segmentation neural network, extracts the connected region with the largest area from the segmented contour while excluding other erroneous connected regions, and finally combines the extracted contour information with least-squares and centre-line fitting algorithms to obtain the runway contour line.
The technical problem to be solved by the invention is to obtain the maximum-area connected domain within the detected runway area and from it the runway contour line; for this purpose the invention provides a runway contour line obtaining method based on the maximum area.
According to the invention, a runway contour line obtaining method based on the maximum area is provided; the technical scheme adopted by the method comprises the following steps.
Runway position information is detected by a target detection neural network, an image is cropped according to the detection result, and the cropped image is passed to a semantic segmentation neural network. After the semantic segmentation result is obtained, it is first binarized so that each category is separated. The binarized image of each category is then scanned in sequence: the first point with pixel value 1 is found and its label is set to 1; the scan continues, and when both the left-adjacent and the upper-adjacent pixel of a foreground pixel are invalid (background), the pixel receives a new label, i.e. the current label value plus one; when one of the left-adjacent or upper-adjacent pixels is valid, its label is assigned to the current pixel, and when both are valid the smaller label value is chosen; after the first scan is finished, a second scan updates the label of each point to the minimum label in its set. The area of each connected domain is then calculated, and the connected domain with the largest area is selected for each category.
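A minimal sketch of the binarization and per-category separation mentioned at the start of the preceding paragraph; the H x W x 3 score layout, the channel order and the 0.5 threshold are assumptions, not specified by the patent.

```python
import numpy as np

def binarize_categories(seg_scores: np.ndarray, threshold: float = 0.5):
    """seg_scores: H x W x 3 gray/score maps, one channel per edge-line category
    (channel order left, right, bottom is assumed here)."""
    masks = {}
    for idx, name in enumerate(("left_edge", "right_edge", "bottom_edge")):
        gray = seg_scores[:, :, idx]
        # 1 where the gray value exceeds the threshold, 0 elsewhere
        masks[name] = (gray > threshold).astype(np.uint8)
    return masks
```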
When the maximum connected domain of the left or right edge line category has been obtained, its pixels are traversed row by row from top to bottom, once from left to right and once from right to left; each scan stops at the first point with pixel value 1, and the pixel coordinates of that point are stored in the corresponding left or right point set. The least-squares algorithm is then applied to each point set container to fit the slope and intercept of the left and right straight lines of the maximum connected domain of the edge line category. The maximum and minimum y-axis pixel coordinates of the left and right point sets are substituted into the respective fitted left and right line equations to obtain the corresponding x-axis pixel coordinates. Finally, the maximum and minimum y-axis pixel coordinates of the left and right lines and the corresponding x-axis pixel coordinates are added and averaged, which gives the top point and bottom point of the centre line of the left or right edge line category.
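The fitting and averaging just described can be illustrated with the short sketch below. It is not the patent's implementation: the names are invented, x is regressed on y (x = k*y + b) because the side lines are near-vertical, and the extreme y values are taken jointly over both point sets for simplicity, whereas the text takes them per point set and averages the resulting end points.

```python
import numpy as np

def fit_x_of_y(points):
    """Least-squares fit x = k*y + b to a list of (x, y) pixel coordinates."""
    pts = np.asarray(points, dtype=float)
    k, b = np.polyfit(pts[:, 1], pts[:, 0], 1)     # regress x on y
    return k, b

def centre_line_endpoints(left_pts, right_pts):
    """Top and bottom points of the centre line of a left/right edge marking."""
    kl, bl = fit_x_of_y(left_pts)
    kr, br = fit_x_of_y(right_pts)
    ys = np.concatenate([np.asarray(left_pts, dtype=float)[:, 1],
                         np.asarray(right_pts, dtype=float)[:, 1]])
    y_top, y_bottom = ys.min(), ys.max()           # y grows downwards in the image

    def midpoint(y):
        x = ((kl * y + bl) + (kr * y + br)) / 2.0  # average the two fitted lines
        return x, y

    return midpoint(y_top), midpoint(y_bottom)
```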
The maximum connected domain of the bottom edge line is processed in the same way as those of the left and right edge line categories, except that the traversal is performed along columns instead of rows and the left and right point sets become upper and lower point sets; the straight line of the bottom edge line is then determined by its maximum and minimum x-axis pixel coordinates. This is not described again in detail here.
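For completeness, a corresponding sketch for the bottom edge line, with the roles of x and y swapped (y fitted as a function of x); again the names and the joint use of the extreme x values are assumptions, not part of the patent.

```python
import numpy as np

def bottom_centre_line_endpoints(upper_pts, lower_pts):
    """Left and right end points of the bottom edge centre line."""
    def fit_y_of_x(points):
        pts = np.asarray(points, dtype=float)
        k, b = np.polyfit(pts[:, 0], pts[:, 1], 1)     # regress y on x
        return k, b

    ku, bu = fit_y_of_x(upper_pts)
    kl, bl = fit_y_of_x(lower_pts)
    xs = np.concatenate([np.asarray(upper_pts, dtype=float)[:, 0],
                         np.asarray(lower_pts, dtype=float)[:, 0]])
    x_left, x_right = xs.min(), xs.max()

    def midpoint(x):
        y = ((ku * x + bu) + (kl * x + bl)) / 2.0      # average the two fitted lines
        return x, y

    return midpoint(x_left), midpoint(x_right)
```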
Finally, the acquired y-axis pixel coordinates of the left and right end points of the bottom edge line are substituted into the line equations of the left and right edge lines, respectively, to update the bottom-point coordinates of the left and right edge lines; this yields the complete runway contour. As shown in Fig. 1, the runway contour line acquisition method based on the maximum area comprises the following specific steps:
step 1: reading a captured frame of the image into a target detection neural network;
step 2: obtaining the result of the target detection neural network and checking whether the result is valid;
step 3: when the result is valid, cropping the image according to the target detection result and passing the cropped image to a semantic segmentation neural network;
step 4: acquiring the result of the semantic segmentation neural network, and binarizing, respectively, the left edge line, right edge line and bottom edge line regions of the runway area in that result, wherein the binarization assigns 1 to pixels whose gray value is greater than a threshold and 0 to pixels whose gray value is less than the threshold;
the following steps 5-12 are executed for the left edge line, the right edge line and the bottom edge line in the binarized image, respectively, to obtain the maximum-area connected domain of each;
step 5: scanning the binarized image of each category (left, right and bottom edge line) in sequence, finding the first pixel with value 1, and setting its label to 1;
step 6: continuing the scan, finding the next pixel with value 1, and checking the four pixels above, below, to the left and to the right of it;
step 7: when the values of the pixels above, below, to the left and to the right are all 0, assigning the pixel a new label, i.e. the current label value plus one (the initial label value is 0);
step 8: when one of the pixels above, below, to the left or to the right has value 1, assigning that pixel's label to the current pixel; when several adjacent pixels have value 1, assigning the smaller (non-zero) label value to the current pixel; if the pixel lies on the image border, only the neighbouring pixels that actually exist are considered;
step 9: checking all points; if the label values of two adjacent points differ, regarding the two pixels as belonging to the same connected region;
step 10: after the first scan is finished, performing a second scan of the current image (repeating steps 5-10) to update the label of each point to the smallest label in its connected region, and determining each connected region according to the smallest label (as defined in step 9);
step 11: calculating the area of each connected domain to obtain the connected domain with the largest area;
step 12: repeating steps 5-11 to obtain the maximum-area connected domains of the left edge line, the right edge line and the bottom edge line, respectively;
step 13: traversing the pixels in the maximum connected domain of the left edge line from left to right and from top to bottom, and from right to left and from top to bottom, stopping at the first point with pixel value 1 in each row and storing these points in the corresponding left and right point sets;
step 14: obtaining, by a least-squares fitting algorithm, the slope and intercept of the straight lines of the left and right point sets of the left edge line contour, respectively;
step 15: after obtaining the maximum and minimum y-axis pixel coordinates in the left and right point sets, substituting them into the respective line equations of the left and right point sets to obtain the corresponding x-axis pixel coordinates, which are used in turn as the top-point and bottom-point pixel coordinates of the left and right point sets of the left edge line (the positive y-axis is the vertical downward direction of the image, and the positive x-axis is the horizontal rightward direction of the image);
step 16: adding the top-point and bottom-point pixel coordinates of the straight lines generated by the left and right point sets and taking the mean value to obtain the top-point and bottom-point pixel coordinates of the left edge line contour;
step 17: the contour line of the right edge line is obtained in the same way as that of the left edge line: traversing the pixel values in the maximum connected domain of the right edge line from left to right and from top to bottom, and from right to left and from top to bottom, stopping at the first point with pixel value 1 in each row and storing it in the container of the corresponding left or right point set;
step 18: obtaining, by a least-squares fitting algorithm, the slope and intercept of the straight lines of the left and right point sets of the right edge line contour, respectively;
step 19: after obtaining the maximum and minimum y-axis pixel coordinates in the left and right point sets, substituting them into the respective line equations of the left and right point sets to obtain the corresponding x-axis pixel coordinates, which are used in turn as the top-point and bottom-point pixel coordinates of the left and right point sets of the right edge line contour;
step 20: the traversal of the bottom edge line differs from that of the left and right edge lines: traversing the pixel values in the maximum connected domain of the bottom edge line from top to bottom and from left to right, and from bottom to top and from left to right, stopping at the first point with pixel value 1 in each column and storing it in the container of the corresponding upper or lower point set;
step 21: obtaining, by a least-squares fitting algorithm, the slope and intercept of the straight lines of the upper and lower point sets of the bottom edge line contour, respectively;
step 22: after obtaining the maximum and minimum x-axis pixel coordinates in the upper and lower point sets, substituting them into the respective line equations of the upper and lower point sets to obtain the corresponding y-axis pixel coordinates, which are used in turn as the end-point pixel coordinates of the upper and lower point-set lines of the bottom edge line contour;
step 23: finally, correcting the maximum y-axis pixel coordinates of the left and right runway contour edge lines using the acquired y-axis pixel coordinates of the corresponding left and right end points of the bottom edge line, thereby updating the bottom-point coordinates of the left and right edge lines.

Claims (8)

1. A runway contour line obtaining method based on the maximum area, characterized by comprising the following steps:
reading a captured frame of the image into a target detection neural network;
acquiring the result of the target detection neural network and checking whether the result is valid;
when the result of the target detection neural network is valid, cropping an image containing the target and passing the cropped image to a semantic segmentation neural network;
obtaining the result of the semantic segmentation neural network, and binarizing, respectively, the left edge line, right edge line and bottom edge line regions of the runway area in that result, wherein the binarization assigns 1 to pixels whose gray value is greater than a threshold and 0 to pixels whose gray value is less than the threshold;
setting label values, respectively, for the points with pixel value 1 in the left edge line, right edge line and bottom edge line regions of the binarized image, and determining the maximum-area connected regions of the left edge line, right edge line and bottom edge line according to the label values;
for each of the maximum-area connected domains of the left edge line, right edge line and bottom edge line regions, traversing the pixels in the connected domain and taking the first point with pixel value 1 on each side as an element of the corresponding side point set;
performing least-squares fitting on the elements of the two side point sets, respectively, to obtain the slope and intercept of the straight line corresponding to each side point set, together with the coordinates of its top and bottom points;
and adding the top-point and bottom-point pixel coordinates of the straight lines corresponding to the two side point sets and taking the average to obtain the top-point and bottom-point pixel coordinates of the current edge contour line.
2. The method for obtaining a runway contour line based on the maximum area according to claim 1, wherein setting label values, respectively, for the points with pixel value 1 in the left edge line, right edge line and bottom edge line regions of the binarized image and determining the maximum-area connected regions of the left edge line, right edge line and bottom edge line according to the label values specifically comprises:
scanning the image in sequence, finding the first pixel with value 1, and setting its label to 1;
continuing the scan, finding the next pixel with value 1, and checking the four pixels above, below, to the left and to the right of it;
when the values of the pixels above, below, to the left and to the right are all 0, assigning the pixel a new label, i.e. the current label value plus one;
when one of the pixels above, below, to the left or to the right of the current pixel has value 1, assigning that pixel's label to the current pixel;
when several of the adjacent pixels have value 1, assigning the smaller label value to the current pixel;
after the first scan is finished, performing a second scan that updates the label of each point to the minimum label value in its connected region;
determining each connected region according to the updated label values;
and calculating the area of each connected domain to obtain the connected domain with the largest area.
3. The maximum-area-based runway contour line acquisition method according to claim 2, wherein determining each connected region according to the updated label values is specifically:
checking all points: if the labels of two adjacent points are different, treating the two points as belonging to the same connected region.
4. The method for obtaining a runway contour line based on the maximum area according to claim 2, wherein, for the maximum-area connected domain of the left edge line region, traversing the pixels in that connected domain and finding the first point with pixel value 1 on each side as an element of the corresponding side point set specifically comprises:
traversing the pixels in the maximum connected domain of the left edge line region from left to right and from top to bottom, and taking the first point with pixel value 1 in each row as an element of the left point set;
and traversing the pixels in the maximum connected domain of the left edge line region from right to left and from top to bottom, and taking the first point with pixel value 1 in each row as an element of the right point set.
5. The method for obtaining a runway contour line based on the maximum area according to claim 2, wherein, for the maximum-area connected domain of the right edge line region, traversing the pixels in that connected domain and finding the first point with pixel value 1 on each side as an element of the corresponding side point set specifically comprises:
traversing the pixel values in the maximum connected domain of the right edge line region from left to right and from top to bottom, and taking the first point with pixel value 1 in each row as an element of the left point set;
and traversing the pixel values in the maximum connected domain of the right edge line region from right to left and from top to bottom, and taking the first point with pixel value 1 in each row as an element of the right point set.
6. The method for obtaining a runway contour line based on the maximum area according to claim 2, wherein, for the maximum-area connected domain of the bottom edge line region, traversing the pixels in that connected domain and finding the first point with pixel value 1 on each side as an element of the corresponding side point set specifically comprises:
traversing the pixels in the maximum connected domain of the bottom edge line region from top to bottom and from left to right, and taking the first point with pixel value 1 in each column as an element of the upper point set;
and traversing the pixels in the maximum connected domain of the bottom edge line region from bottom to top and from left to right, and taking the first point with pixel value 1 in each column as an element of the lower point set.
7. The maximum-area-based runway contour line acquisition method of claim 6, further comprising:
correcting, according to the longitudinal pixel coordinate of the left end point of the bottom edge line contour, the longitudinal coordinate of the vertex of the left edge line contour nearest the bottom edge line, and updating its transverse coordinate from the corrected longitudinal coordinate.
8. The maximum-area-based runway contour line acquisition method of claim 1, further comprising:
correcting, according to the longitudinal pixel coordinate of the right end point of the bottom edge line contour, the longitudinal coordinate of the vertex of the right edge line contour nearest the bottom edge line, and updating its transverse coordinate from the corrected longitudinal coordinate.
CN202211703502.1A 2022-12-29 2022-12-29 Runway contour line obtaining method based on maximum area Pending CN115937719A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211703502.1A CN115937719A (en) 2022-12-29 2022-12-29 Runway contour line obtaining method based on maximum area

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211703502.1A CN115937719A (en) 2022-12-29 2022-12-29 Runway contour line obtaining method based on maximum area

Publications (1)

Publication Number Publication Date
CN115937719A true CN115937719A (en) 2023-04-07

Family

ID=86554155

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211703502.1A Pending CN115937719A (en) 2022-12-29 2022-12-29 Runway contour line obtaining method based on maximum area

Country Status (1)

Country Link
CN (1) CN115937719A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116385475A (en) * 2023-06-06 2023-07-04 四川腾盾科技有限公司 Runway identification and segmentation method for autonomous landing of large fixed-wing unmanned aerial vehicle
CN116385475B (en) * 2023-06-06 2023-08-18 四川腾盾科技有限公司 Runway identification and segmentation method for autonomous landing of large fixed-wing unmanned aerial vehicle


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination