CN117036359A - Contact net geometric parameter measurement method based on binocular machine vision - Google Patents

Contact net geometric parameter measurement method based on binocular machine vision

Info

Publication number
CN117036359A
CN117036359A (application CN202311301492.3A)
Authority
CN
China
Prior art keywords
image
area
hole
edge detection
edge
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202311301492.3A
Other languages
Chinese (zh)
Other versions
CN117036359B (en)
Inventor
王威
廖峪
王建
杨伟
王迎春
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chengdu Zhonggui Track Equipment Co ltd
Original Assignee
Chengdu Zhonggui Track Equipment Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chengdu Zhonggui Track Equipment Co ltd filed Critical Chengdu Zhonggui Track Equipment Co ltd
Priority to CN202311301492.3A priority Critical patent/CN117036359B/en
Publication of CN117036359A publication Critical patent/CN117036359A/en
Application granted granted Critical
Publication of CN117036359B publication Critical patent/CN117036359B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G01B 11/002: Measuring arrangements characterised by the use of optical techniques for measuring two or more coordinates
    • G01B 11/02: Measuring arrangements characterised by the use of optical techniques for measuring length, width or thickness
    • G01B 11/14: Measuring arrangements characterised by the use of optical techniques for measuring distance or clearance between spaced objects or spaced apertures
    • G01B 11/255: Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures, for measuring radius of curvature
    • G06T 5/30: Image enhancement or restoration by the use of local operators; erosion or dilatation, e.g. thinning
    • G06T 5/50: Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction
    • G06T 7/0002: Image analysis; inspection of images, e.g. flaw detection
    • G06T 7/13: Image analysis; segmentation; edge detection
    • G06T 7/136: Segmentation; edge detection involving thresholding
    • G06T 7/62: Analysis of geometric attributes of area, perimeter, diameter or volume
    • G06T 7/70: Determining position or orientation of objects or cameras
    • G06T 7/80: Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G06T 2207/20221: Image fusion; image merging
    • G06T 2207/30204: Marker
    • Y02P 90/30: Computing systems specially adapted for manufacturing

Abstract

The invention discloses a contact net geometric parameter measurement method based on binocular machine vision, which relates to the technical field of image processing and comprises the following steps: step 1: obtaining binocular vision images in three different directions in a target area, the included angle between the three directions being greater than or equal to 90 degrees and less than or equal to 180 degrees; step 2: creating a panoramic image using the binocular vision images of the three different directions; step 3: performing hole detection in the panoramic image to obtain a hole area, screening holes positioned at the edge of the hole area from the hole area, defining an edge detection area with each hole as the center and a set value as the radius, performing edge detection within the edge detection area to obtain an edge detection result, and extracting the contact net area based on the edge detection result; step 4: calculating the curve radius of the contact net; step 5: calculating the contact line height, transverse distance, longitudinal distance and jump degree of the contact line based on the contact net area. The invention improves the measurement accuracy.

Description

Contact net geometric parameter measurement method based on binocular machine vision
Technical Field
The invention relates to the technical field of image processing, in particular to a contact net geometric parameter measuring method based on binocular machine vision.
Background
In the field of railway traffic, the catenary (overhead contact system) is the key component that supplies electric power to electrically driven trains, and the stability of its geometric parameters is critical to the safety and efficiency of train operation. Geometric parameters of the overhead contact system, such as the contact line height, transverse distance, longitudinal distance and jump degree, directly influence the traction relationship and the power transmission efficiency between the train and the overhead contact system. Accurate measurement and evaluation of the geometric parameters of the catenary is therefore of great importance for the safety and reliability of railway operations.
Over the last several decades, measurement methods for catenary geometry have steadily evolved. The earliest approaches relied mainly on manual measurement, which suffers from large human error and low efficiency and cannot meet the requirements of high-speed railways. With the development of computer vision and image processing technology, automatic contact net geometric parameter measurement methods began to appear. These methods measure the geometric parameters of the overhead contact system automatically through cameras and image processing, greatly improving both the accuracy and the efficiency of measurement.
However, existing automatic contact net geometric parameter measurement methods still have problems and limitations. First, some methods measure based on monocular vision only and are easily affected by factors such as viewing angle and lighting, resulting in unstable measurement accuracy. Second, when processing a complex contact net structure, existing methods find it difficult to accurately identify and segment the contact net edges, which affects the subsequent geometric parameter measurement. In addition, special situations such as holes cannot be identified and handled accurately, which affects the final measurement result. Finally, for complex catenary structures, conventional methods are often deficient in curve fitting and parameter calculation and cannot meet the requirements of high-precision measurement.
Disclosure of Invention
The invention aims to provide a contact net geometric parameter measuring method based on binocular machine vision, which improves the measuring accuracy.
In order to solve the above technical problems, the invention provides a contact net geometric parameter measurement method based on binocular machine vision, the method comprising:
step 1: obtaining binocular vision images in three different directions in a target area; the included angle between the three directions is more than or equal to 90 degrees and less than or equal to 180 degrees;
step 2: creating a panoramic image using binocular vision images of three different directions;
step 3: performing hole detection in the panoramic image to obtain a hole area, screening holes positioned at the edge of the hole area from the hole area, defining an edge detection area by taking each hole as a center and taking a set value as a radius, performing edge detection from the edge detection area to obtain an edge detection result, and extracting a contact net area based on the edge detection result;
step 4: performing curve fitting in the contact net area, and calculating the curve radius of the contact net;
step 5: and calculating the contact line height, transverse distance, longitudinal distance and jump degree of the contact line based on the contact net area.
Further, the step 2 specifically includes:
step 2.1: calibrating the camera, calculating an internal reference matrix and distortion coefficients of the camera, and correcting distortion of the images in three directions by using the parameters to obtain corrected images;
Step 2.2: obtaining key points and descriptors in the image by using a characteristic point extraction algorithm for each corrected image, and then matching the three groups of descriptors to find corresponding characteristic point pairs;
step 2.3: estimating a base matrix between images to eliminate matching errors and outlier effects;
step 2.4: based on the basic matrix, calculating an epipolar equation of the feature points on each image;
step 2.5: setting images in three directions as a first image, a second image and a third image respectively; using an epipolar equation to project the characteristic points in the second image onto the first image, so as to realize epipolar correction;
step 2.6: creating a blank image as an initialized panoramic image; merging the epipolar line corrected first image and the epipolar line corrected second image into the panoramic image pixel by pixel, starting from the first image; in the fusion process, performing smooth transition processing on an overlapped area of the first image and the second image by using an image fusion technology;
step 2.7: updating the fused image area into the panoramic image to obtain an updated panoramic image;
step 2.8: and repeatedly executing the steps 2.6 to 2.7 until the third image is fused into the updated panoramic image pixel by pixel, and completing the creation of the panoramic image.
Further, the step 3 specifically includes:
step 3.1: converting the panoramic image into a gray scale image I_gray;
step 3.2: applying a threshold segmentation algorithm to binarize the gray scale image to obtain a binary image I_bin;
step 3.3: hole marking is carried out on the binary image by using a connected component analysis algorithm to obtain a marked image I_label, wherein each hole is marked as a separate area;
step 3.4: calculating the area of each hole, deleting the holes with areas lower than the set area threshold, calculating the compactness of each hole, and deleting the holes with compactness lower than the set compactness threshold;
step 3.5: calculating boundary pixels for each hole area to obtain a boundary image I_border;
step 3.6: performing an erosion operation on the boundary image to obtain the eroded boundary image I_erode;
step 3.7: performing a dilation operation on the eroded boundary image to restore the original boundary shape, obtaining the dilated boundary image I_dilate;
step 3.8: extracting hole edges from the dilated boundary image by using an edge detection algorithm to obtain an edge image I_edge;
step 3.9: based on the edge image I_edge and the panoramic image I_pano, extracting the hole areas: for each hole, extracting the pixels located inside the edge image from the panoramic image to obtain the hole area R_k;
Step 3.10: and screening holes positioned at the edges of the hole areas from the hole areas, defining an edge detection area by taking each hole as a center and a set value as a radius, carrying out edge detection from the edge detection area to obtain an edge detection result, and extracting the contact net area based on the edge detection result.
Further, the step 3.3 specifically includes: creating an empty label image L with the same dimensions as the binary image I_bin; simultaneously creating a mapping table for storing the labels; traversing each pixel (x, y) of the binary image I_bin; if the value of the current pixel I_bin(x, y) is 1, the following operations are performed: acquiring the label set S of already-marked pixels in the neighborhood of the current pixel (x, y); if S is empty, a new label is created for the current pixel, the label is put into the mapping table, and the label is assigned to the position of the current pixel, i.e. L(x, y) = L_new; if S is not empty, the smallest label in S is selected and assigned to the position of the current pixel, i.e. L(x, y) = min(S); if the value of the current pixel I_bin(x, y) is 0, the pixel is skipped; if a new label is allocated to the current pixel, the mapping table is updated and the label is added into the corresponding label set; after the traversal is finished, the labels are merged according to the mapping table, ensuring that each connected area has only one label.
Further, the step 3.4 specifically includes: for each hole, calculating the number of pixels of the hole, i.e. the area of the hole A; based on the set minimum area threshold A_min, screening out holes whose area is smaller than the minimum area threshold, i.e. if A < A_min, the hole is deleted from the marked image; calculating the compactness of the remaining holes, and deleting a hole from the marked image if its compactness is lower than the set compactness threshold; the calculation formula of the compactness is:

C = 4πA / P²

wherein A is the area of the hole and P is the boundary length of the hole.
Further, step 3.8 extracts the hole edges from the dilated boundary image by using a Canny edge detection algorithm to obtain the edge image I_edge; step 3.10 uses the Laplacian operator to perform edge detection within the edge detection area.
Further, the step 4 specifically includes: extracting the edge pixels in the contact net area, and mapping the edge pixels to the complex plane; then curve fitting is carried out on the complex plane, the curve parameters are solved, and the curve radius is calculated as the average radius of the fitted curve determined by the curve parameters a, b and c; curve fitting is performed on the complex plane using a fitted curve of the form:

f(z) = z^n / (b·z + c)

wherein n is the order and is a set value; f(z) is the fitted curve; j is the imaginary unit;

the coordinates (x_i, y_i) of each pixel point of the edge pixels in the contact net area are mapped to a complex number z_i = x_i + j·y_i, x_i corresponding to the real part of the complex number and y_i to the imaginary part; the mathematical expression of the edge pixels in the contact net area is thus z = x + j·y, wherein x is the horizontal axis variable and y is the vertical axis variable.

Further, the curve parameters are obtained by solving the following formulas:

a = ∮ z(t)^n / (b·z(t) + c) dt
b = argmin_b Σ_i ‖z_i − f(z_i)‖²
c = argmin_c Σ_i ‖z_i − f(z_i)‖²
the contact net geometric parameter measuring method based on binocular machine vision has the following beneficial effects: conventional methods generally treat hole detection and edge detection as independent tasks, which are processed separately. However, the invention integrates hole detection and edge screening, and more accurate positioning of the hole area is realized by searching the edge position in the hole area. The fusion method avoids error accumulation, so that the extraction of the contact network area is more accurate. The invention adopts a strategy of dynamically defining an edge detection area, takes each hole as a center, takes a set value as a radius, and determines the edge detection range. The method has self-adaptability and can be better suitable for holes with different sizes and shapes. Compared with a detection area with a fixed size, the dynamic strategy can better capture the edge information of the hole, and the edge detection precision is improved. In addition, according to initialization and updating of the panoramic image, fusion of a plurality of images is carried out through camera calibration information. The geometric information obtained during the calibration of these images helps to determine their position and weight in the panoramic image. Therefore, the fusion process is more accurate and efficient, and the quality and consistency of the panoramic image are ensured. The accurate internal reference matrix and distortion coefficient obtained in the camera calibration process are beneficial to improving the estimation accuracy of the basic matrix. The basis matrix reflects the geometric relationship between images of different viewing angles, and accurate estimation thereof lays a solid foundation for subsequent polar correction and panoramic image creation. And measuring and calculating the internal parameters of the camera. By acquiring the internal reference matrix and distortion coefficient of the camera, the imaging characteristics of the camera can be accurately described. These parameters are the basis for distortion correction and image reconstruction, ensuring the accuracy and stability of the subsequent processing.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings that are required to be used in the embodiments or the description of the prior art will be briefly described below, and it is obvious that the drawings in the following description are only embodiments of the present invention, and that other drawings can be obtained according to the provided drawings without inventive effort for a person skilled in the art.
Fig. 1 is a schematic flow chart of a contact net geometric parameter measurement method based on binocular machine vision according to an embodiment of the present invention.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present invention more apparent, the technical solutions of the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention, and it is apparent that the described embodiments are some embodiments of the present invention, but not all embodiments of the present invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
Example 1: referring to fig. 1, a contact net geometric parameter measurement method based on binocular machine vision, the method comprises:
step 1: obtaining binocular vision images in three different directions in a target area; the included angle between the three directions is more than or equal to 90 degrees and less than or equal to 180 degrees; binocular vision images are acquired from different angles by positioning two cameras or sensors within the target area. The images have different perspectives capturing different information of the target area. Such multi-angle observations help to provide more geometric information, thereby improving the accuracy of subsequent parameter measurements.
Step 2: creating a panoramic image using binocular vision images of three different directions; combining three binocular vision images in different directions together can create one panoramic image. This is achieved by calibrating and fusing the images to ensure that they are geometrically aligned. Panoramic images provide a more complete, continuous view of the scene, facilitating more accurate detection of holes and subsequent parameter calculations.
Step 3: performing hole detection in the panoramic image to obtain a hole area, screening holes positioned at the edge of the hole area from the hole area, defining an edge detection area by taking each hole as a center and taking a set value as a radius, performing edge detection from the edge detection area to obtain an edge detection result, and extracting a contact net area based on the edge detection result; the principle of hole detection in panoramic images is based on image analysis and processing techniques. Holes are typically represented as darker areas that can be detected by thresholding or the like. The specific location of the hole can then be located by screening the hole at the edge of the hole area. Next, a radius is set with the hole as the center to define an edge detection area for searching the boundary of the hole.
Step 4: performing curve fitting in the contact net area, and calculating the curve radius of the contact net;
step 5: and calculating the contact line height, transverse distance, longitudinal distance and jump degree of the contact line based on the contact net area.
Specifically, in step 5, the contact line height refers to the distance from the upper edge of the contact net to the ground. From the contact net region that has been extracted, the upper edge position of the contact net can be obtained. Meanwhile, the internal and external parameters of the camera are required to be considered to realize real-world scale transformation. If calibration parameters of the camera have been obtained, they can be used to convert the image coordinates to real world coordinates, thereby calculating the contact line height.
The lateral and longitudinal distances refer to the distances of the contact net in the horizontal and vertical directions. The pixel distance between different points can be measured from the extracted catenary area. Then, the pixel distance is converted into an actual physical distance by applying calibration parameters of the camera, thereby obtaining a lateral distance and a longitudinal distance.
The degree of jump refers to the distance between two adjacent struts of the contact net in the vertical direction. This can be calculated by detecting the position of the struts on the extracted catenary area and measuring the actual physical distance between them.
Assuming that the calibrated camera intrinsic parameter matrix and distortion coefficients (radial and tangential) have already been obtained, these parameters can be used to convert pixel coordinates to normalized coordinates and then calculate the geometric parameters from known actual dimensional relationships. For the contact line height, the pixel positions on the contact net edge in the image are converted into normalized coordinates and then into real-world coordinates. The computation of the lateral and longitudinal distances is based on pixel distances between different points in the image, again using the camera calibration parameters to convert pixel distances into actual physical distances. The calculation of the jump degree detects the positions of adjacent struts in the extracted contact net area and then converts their pixel distance into the actual distance through the calibration parameters.
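As an illustration of this conversion, the following Python sketch back-projects an edge pixel to metric camera coordinates using the calibration parameters (a minimal sketch; the intrinsic matrix K, the distortion coefficients and the depth value are hypothetical example numbers, not values from the patent):

```python
import cv2
import numpy as np

# Hypothetical example intrinsics and distortion coefficients
K = np.array([[1200.0,    0.0, 960.0],
              [   0.0, 1200.0, 540.0],
              [   0.0,    0.0,   1.0]])
dist = np.array([-0.12, 0.05, 0.0, 0.0, 0.0])  # radial + tangential terms

def pixel_to_camera(u, v, depth_z):
    """Undistort pixel (u, v) and back-project it to camera coordinates at depth_z."""
    pts = np.array([[[u, v]]], dtype=np.float64)
    x_n, y_n = cv2.undistortPoints(pts, K, dist).reshape(2)  # normalized coordinates
    return x_n * depth_z, y_n * depth_z, depth_z

# Example: a contact-wire edge pixel at (1012, 310), 5.3 m away (depth from stereo)
x, y, z = pixel_to_camera(1012, 310, 5.3)
print(f"camera-frame position: ({x:.3f}, {y:.3f}, {z:.3f}) m")
```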
Example 2: based on the above embodiment, the step 2 specifically includes:
step 2.1: calibrating the camera, calculating an internal reference matrix and a distortion coefficient of the camera, and carrying out distortion correction on images in three directions by using the parameters to obtain corrected images; in this step, each camera is calibrated, and the internal reference matrix and distortion coefficient of the camera are calculated. The internal reference matrix of the camera contains information such as focal length, optical center coordinates and the like, and the distortion coefficient is used for eliminating image distortion. These parameters will be used to correct the image so that it can more accurately correspond to the real world coordinates.
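A minimal calibration sketch along these lines, using the standard OpenCV checkerboard workflow (the board size and file names are placeholders, not specified by the patent):

```python
import glob
import cv2
import numpy as np

pattern = (9, 6)  # inner-corner count of a hypothetical checkerboard
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)

obj_pts, img_pts, size = [], [], None
for path in glob.glob("calib/*.png"):          # hypothetical calibration images
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        obj_pts.append(objp)
        img_pts.append(corners)
        size = gray.shape[::-1]                # (width, height)

# Internal reference matrix K and distortion coefficients
ret, K, dist, rvecs, tvecs = cv2.calibrateCamera(obj_pts, img_pts, size, None, None)

# Distortion correction of one of the three directional views
corrected = cv2.undistort(cv2.imread("view_1.png"), K, dist)
```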
Step 2.2: obtaining key points and descriptors in the image by using a characteristic point extraction algorithm for each corrected image, and then matching the three groups of descriptors to find corresponding characteristic point pairs; on each corrected image, a feature point extraction algorithm (e.g., SIFT, SURF, ORB, etc.) is used to find the keypoints in the image and a descriptor is generated for each keypoint. Then, matching is performed among the three groups of descriptors, and corresponding feature point pairs in different images are found. These feature point pairs will be used to estimate the basis matrix between the images.
Step 2.3: estimating a base matrix between images to eliminate matching errors and outlier effects; the basis matrix is used to describe the geometrical relationship between the two views. By using the feature point pairs, the base matrix can be estimated using a method such as RANSAC, thereby eliminating the influence of the matching error and outliers. The basis matrix will help to achieve epipolar corrections.
Step 2.4: based on the basic matrix, calculating an epipolar equation of the feature points on each image; and calculating polar equations of the feature points on each image based on the estimated basis matrix. The epipolar line refers to a straight line corresponding to a feature point on another image. Epipolar corrections will help align the images to create a panoramic image.
Step 2.5: setting images in three directions as a first image, a second image and a third image respectively; using an epipolar equation to project the characteristic points in the second image onto the first image, so as to realize epipolar correction; and using the epipolar equation to project the characteristic points in the second image onto the first image, so as to realize epipolar correction. This will ensure that the feature points of both images are on the same horizontal line, thus providing for subsequent image fusion.
Step 2.6: creating a blank image as an initialized panoramic image; merging the epipolar line corrected first image and the epipolar line corrected second image into the panoramic image pixel by pixel, starting from the first image; in the fusion process, performing smooth transition processing on an overlapped area of the first image and the second image by using an image fusion technology; for the overlapping region, a smooth transition process is performed using an image fusion technique (e.g., weighted average, multi-band fusion, etc.) to avoid discontinuities or imperfections in the overlapping region.
Step 2.7: updating the fused image area into the panoramic image to obtain an updated panoramic image;
step 2.8: and repeatedly executing the steps 2.6 to 2.7 until the third image is fused into the updated panoramic image pixel by pixel, and completing the creation of the panoramic image.
Specifically, in step 2.2, keypoints are extracted from each image and descriptors are generated; by matching these descriptors, a set of feature point pairs that correspond across the different images is obtained. RANSAC is an iterative method for estimating model parameters that can eliminate outlier interference. When estimating the basis matrix, RANSAC randomly selects a small number of feature point pairs as the candidate inlier set and uses them to calculate an initial basis matrix. For each pair of feature points, the difference between its actual position and the position predicted by the initial basis matrix, i.e. the residual, is calculated; pairs with residuals smaller than a set threshold are regarded as inliers, and those with larger residuals as outliers. The inliers are then used to re-fit a more accurate basis matrix. The residuals of the estimated basis matrix against all inliers are calculated as an indicator of model quality. These steps are repeated a number of times (typically tens of iterations) to find the best basis matrix estimate; among the iterations, the model that produces the most inliers is selected. Finally, RANSAC outputs the basis matrix estimate that best explains the relationship between the inlier feature point pairs.
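In practice this whole loop is available as a single call; a sketch using OpenCV's RANSAC-based estimator on the matched points from step 2.2 (the 3-pixel threshold and 0.99 confidence are example settings):

```python
import cv2
import numpy as np

pts1 = np.float32(pts1)   # matched feature points from step 2.2
pts2 = np.float32(pts2)

F, mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC,
                                 ransacReprojThreshold=3.0, confidence=0.99)
inliers1 = pts1[mask.ravel() == 1]   # feature point pairs kept as inliers
inliers2 = pts2[mask.ravel() == 1]
```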
Specifically, in step 2.3, the basis matrix is estimated by RANSAC or a similar method; this matrix describes the geometric relationship between the first image and the second image. When calculating the epipolar equations, a set of feature points is selected from the first image as starting points; these are the points extracted and matched in step 2.2, which have corresponding feature points in the second image. Homogeneous coordinates: for each feature point in the first image, its two-dimensional coordinates (u, v) are converted into the three-dimensional homogeneous coordinates x = (u, v, 1)ᵀ. Substituting the homogeneous coordinate vector x of a feature point into the epipolar equation l′ = F·x, the epipolar line l′ on the second image corresponding to that feature point in the first image can be calculated; this epipolar line describes the possible locations of the corresponding point on the second image. The coefficients of l′ can be interpreted as the position and orientation of the epipolar line on the second image, and can be used to visualize the epipolar line in order to understand the correspondence between the first and second images. This process is repeated for each feature point in the first image to calculate the corresponding epipolar equation and find its corresponding location on the second image. The calculated epipolar equations provide for epipolar correction: feature points in the first image are mapped onto the second image, enabling alignment of the two images.
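A sketch of the epipolar computation for one point and for a batch (F is the basis matrix estimated above; the pixel coordinates are example values):

```python
import cv2
import numpy as np

u, v = 640.0, 360.0                 # example feature point in the first image
x = np.array([u, v, 1.0])           # homogeneous coordinates
l2 = F @ x                          # epipolar line a*u' + b*v' + c = 0 in image 2

# Batch form: epipolar lines in image 2 for all inlier points of image 1
lines2 = cv2.computeCorrespondEpilines(inliers1.reshape(-1, 1, 2), 1, F)
lines2 = lines2.reshape(-1, 3)
```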
Example 3: on the basis of the above embodiment, the step 3 specifically includes:
step 3.1: converting the panoramic image into a gray scale image I_gray; the color panoramic image is converted into a gray scale image, combining the color information of each pixel into a single luminance value. The gray scale image is more suitable for contour detection and morphological operations, reduces processing complexity, and provides better edge information.
Step 3.2: applying a threshold segmentation algorithm to binarize the gray level image to obtain a binary imageThe method comprises the steps of carrying out a first treatment on the surface of the Pixels in the gray level image are compared with a set threshold according to the brightness value, and the pixels are divided into a foreground (contact net part) and a background (non-contact net part). And separating the region of interest (contact net) from the background through the binary image, and preparing for subsequent connected component analysis and morphological operation.
Step 3.3: hole marking is carried out on the binary image by using a communication component analysis algorithm to obtain a marked imageWherein each hole is marked as a separate area; the connected component analysis divides pixels in the image into different regions according to their connectivity, each region being labeled as a separate connected component. Different hole areas are identified and separated, and accurate hole boundary information is provided for subsequent analysis and processing.
Step 3.4: calculating the area of each hole, deleting the holes with the areas lower than the set area threshold, calculating the compactness of each hole, and deleting the holes with the compactness lower than the set compactness threshold; the area and the compactness of the connected components (hole areas) are calculated, and small-area or non-compact holes are filtered according to a preset threshold value. Holes with too small areas or irregular shapes are removed, and noise and unnecessary interference of subsequent processing are reduced.
Step 3.5: calculating boundary pixels for each hole area to obtain boundary images. The boundary between the foreground and the background is detected from the binary image by a boundary detection algorithm (e.g., canny algorithm). A boundary image describing the boundary of the hole region is generated, providing input for subsequent morphological operations and edge extraction.
Step 3.6: performing corrosion operation on the boundary image to obtain a corroded boundary image as
Step 3.7: performing expansion operation on the corroded boundary image to restore the original boundary shape, and obtaining the expanded boundary image asThe method comprises the steps of carrying out a first treatment on the surface of the The erosion operation reduces the size of the foreground region by gradually eroding the background pixels around the foreground pixels. The dilation operation then increases the size of the foreground region by gradually filling the background pixels around the foreground pixels.
Step 3.8: extracting hole edges from the expanded boundary images by using an edge detection algorithm to obtain edge images
Step 3.9: based on edge imagesAnd panoramic image->Extraction holeHole area. For each hole, extracting pixels located inside the edge image from the panoramic image to obtain a hole area +.>
Step 3.10: and screening holes positioned at the edges of the hole areas from the hole areas, defining an edge detection area by taking each hole as a center and a set value as a radius, carrying out edge detection from the edge detection area to obtain an edge detection result, and extracting the contact net area based on the edge detection result.
Example 4: on the basis of the above embodiment, the step 3.3 specifically includes: creating an empty label image L with the same dimensions as the binary image I_bin; simultaneously creating a mapping table for storing the labels; traversing each pixel (x, y) of the binary image I_bin; if the value of the current pixel I_bin(x, y) is 1, the following operations are performed: acquiring the label set S of already-marked pixels in the neighborhood of the current pixel (x, y); if S is empty, a new label is created for the current pixel, the label is put into the mapping table, and the label is assigned to the position of the current pixel, i.e. L(x, y) = L_new; if S is not empty, the smallest label in S is selected and assigned to the position of the current pixel, i.e. L(x, y) = min(S); if the value of the current pixel I_bin(x, y) is 0, the pixel is skipped; if a new label is allocated to the current pixel, the mapping table is updated and the label is added into the corresponding label set; after the traversal is finished, the labels are merged according to the mapping table, ensuring that each connected area has only one label.
Specifically, the connected component analysis in step 3.3 works by detecting adjacent foreground pixels and assigning them to the same area; the holes are thereby separated, and each hole receives a unique label, providing detailed information about the holes for the subsequent steps. This ensures that each hole can be processed independently while the accuracy of the image segmentation is preserved. Merging the labels according to the mapping table after the traversal collapses each label set in the mapping table into a single label per connected region, guaranteeing that every connected region carries exactly one label for distinguishing the different hole areas.
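For concreteness, a minimal Python sketch of the two-pass labeling just described, using the left and upper neighbors (4-connectivity) as the already-visited neighborhood; the binary image is assumed to be a 0/1 NumPy array, and the function name two_pass_label is ours:

```python
import numpy as np

def two_pass_label(binary):
    """Two-pass connected component labeling with a label-equivalence mapping table."""
    h, w = binary.shape
    labels = np.zeros((h, w), dtype=np.int32)   # empty label image L
    parent = {}                                  # mapping table of label equivalences

    def find(a):
        while parent[a] != a:                    # resolve a label to its root
            a = parent[a]
        return a

    next_label = 1
    for y in range(h):
        for x in range(w):
            if binary[y, x] == 0:
                continue                         # background pixel: skip
            s = [labels[y, x - 1] if x > 0 else 0,   # left neighbor
                 labels[y - 1, x] if y > 0 else 0]   # upper neighbor
            s = [v for v in s if v > 0]
            if not s:                            # S empty: create a new label
                parent[next_label] = next_label
                labels[y, x] = next_label
                next_label += 1
            else:                                # S non-empty: take the smallest label
                m = min(s)
                labels[y, x] = m
                for v in s:                      # record equivalences in the table
                    parent[find(v)] = find(m)

    for y in range(h):                           # second pass: merge labels
        for x in range(w):
            if labels[y, x]:
                labels[y, x] = find(labels[y, x])
    return labels
```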
Example 5: on the basis of the above embodiment, the step 3.4 specifically includes: for each hole, calculating the number of pixels of the hole, i.e. the area of the hole A; based on the set minimum area threshold A_min, screening out holes whose area is smaller than the minimum area threshold, i.e. if A < A_min, the hole is deleted from the marked image; calculating the compactness of the remaining holes, and deleting a hole from the marked image if its compactness is lower than the set compactness threshold; the calculation formula of the compactness is:

C = 4πA / P²

wherein A is the area of the hole and P is the boundary length of the hole.
Specifically, through the dual screening of area and compactness, undesirable holes are effectively filtered, thereby preserving those holes of sufficient size and compactness. The screening method can improve the accuracy and reliability of subsequent analysis. In addition, the implementation process of the step 3.4 is relatively simple, and the holes can be screened and processed through simple mathematical calculation and comparison.
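A short sketch of this dual screening; the thresholds A_MIN and C_MIN are hypothetical example values, and compactness is computed as 4πA/P², one standard definition consistent with the text (the original formula image is not preserved):

```python
import cv2
import numpy as np

A_MIN, C_MIN = 50, 0.25                    # hypothetical thresholds

def filter_holes(labels):
    """Keep only the holes passing the area and compactness screening."""
    kept = np.zeros_like(labels)
    for lab in np.unique(labels):
        if lab == 0:
            continue                       # background
        mask = (labels == lab).astype(np.uint8)
        area = int(mask.sum())             # hole area A
        if area < A_MIN:
            continue                       # too small: delete
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_NONE)
        perim = cv2.arcLength(contours[0], closed=True)  # boundary length P
        if 4.0 * np.pi * area / (perim ** 2) < C_MIN:
            continue                       # too irregular: delete
        kept[labels == lab] = lab
    return kept
```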
Example 6: based on the above embodiment, the step 3.8 extracts the hole edges from the dilated boundary image by using a Canny edge detection algorithm to obtain the edge image I_edge; the step 3.10 uses the Laplacian operator to perform edge detection within the edge detection area.
Specifically, the Canny edge detection algorithm mainly comprises the following steps: first, the image is smoothed using a gaussian filter to reduce the effects of noise. The gradient magnitude and direction for each pixel in the image is then calculated to find the place where the gradient change is greatest. Then, the position where the gradient change is largest is thinned to a narrower edge by non-maximum suppression. Finally, using a double thresholding process, the edge pixels are divided into strong and weak edges, forming a continuous edge profile by connecting the strong edge pixels. The Laplacian operator is a second derivative operator that can be calculated by applying a discrete Laplacian convolution kernel over each pixel point in the image. In the vicinity of edges, the pixel values will change drastically, so the Laplacian operator can help detect these edges.
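A minimal sketch of the two detectors (Canny thresholds and Laplacian kernel size are example settings; dilated is the dilated boundary image from step 3.7, and roi one dynamically defined edge detection area from step 3.10):

```python
import cv2

edges = cv2.Canny(dilated, 50, 150)              # step 3.8: hole edge image I_edge

lap = cv2.Laplacian(roi, cv2.CV_64F, ksize=3)    # step 3.10: second-derivative response
lap_edges = cv2.convertScaleAbs(lap)             # 8-bit edge magnitude
```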
Example 7: based on the above embodiment, the step 4 specifically includes: extracting the edge pixels in the contact net area, and mapping the edge pixels to the complex plane; then curve fitting is carried out on the complex plane, the curve parameters are solved, and the curve radius is calculated as the average radius of the fitted curve determined by the curve parameters a, b and c; curve fitting is performed on the complex plane using a fitted curve of the form:

f(z) = z^n / (b·z + c)

wherein n is the order and is a set value; f(z) is the fitted curve; j is the imaginary unit.
Specifically, the principle of these formulas consists in matching the edge pixels of the catenary area by adjusting the fitting parameters using complex operations and integration in complex planes, and obtaining the average radius of the fitted curve by calculating the distance. The method can accurately analyze the complex contact net shape mathematically, extract curve information and provide an accurate data basis for further parameter analysis.
Integration path and integration operation: the integration in this formula is performed around a closed path; the complex values z(t) on the path represent the mapping of the edge pixels extracted from the catenary area onto the complex plane. The integration path may be arbitrary, and a suitable closed path is typically chosen. Complex fitting: the numerator of the formula, z(t)^n, is the n-th power of the complex edge-pixel values; this is in fact a fitting of the edge curve in the complex plane, and the fitting order n controls the accuracy and adaptability of the fit. Denominator: the parameters b and c in the denominator are used to adjust the fitted curve; by tuning these parameters, the fitted curve can be made to coincide with the actual edge pixel points as closely as possible.
The coordinates (x_i, y_i) of each pixel point of the edge pixels in the contact net area are mapped to a complex number z_i = x_i + j·y_i, x_i corresponding to the real part of the complex number and y_i to the imaginary part; the mathematical expression of the edge pixels in the contact net area is thus z = x + j·y, wherein x is the horizontal axis variable and y is the vertical axis variable.
Specifically, for each edge pixel coordinate (x_i, y_i) in the catenary area, the two coordinates are used as the real and imaginary parts, respectively, to form the complex number z_i = x_i + j·y_i. Mapping all edge pixels onto the complex plane in this way yields a sequence of complex numbers z_1, z_2, …, z_N, each representing the position of one edge pixel in the complex plane. Because the real and imaginary parts of a complex number correspond to the horizontal and vertical axis coordinates of the plane, this mapping carries the coordinates of the edge pixels in the catenary area into the coordinate system of the complex plane. The whole edge can then be expressed as z = x + j·y, where x represents the real parts and y the imaginary parts of all the complex numbers. With this complex representation, mathematical operations such as curve fitting can be performed on the complex plane, yielding more accurate characteristic information of the contact net area.
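The mapping itself is a one-liner in NumPy; a sketch (edge_image stands for the binary edge map of the contact net area):

```python
import numpy as np

ys, xs = np.nonzero(edge_image)       # row (y) and column (x) indices of edge pixels
z = xs.astype(np.float64) + 1j * ys.astype(np.float64)   # z_i = x_i + j*y_i
```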
Example 8: on the basis of the above embodiment, the curve parameters are solved using the following formulas:

a = ∮ z(t)^n / (b·z(t) + c) dt
b = argmin_b Σ_i ‖z_i − f(z_i)‖²
c = argmin_c Σ_i ‖z_i − f(z_i)‖²

Specifically, the formula for a is based on the principle of complex integration. The integration path closes around an area on the complex plane, and every point z(t) on the path represents the position of an edge pixel extracted from the catenary area. The integrand z(t)^n / (b·z(t) + c) combines the parameters b and c with the n-th power of the complex values; the integration accumulates the effect of the integrand over the whole closed path, and its result gives the value of the parameter a.
In solving for b, the parameter value is determined by a minimization: the aim is to make the sum of the squared moduli of the differences z_i − f(z_i) over a series of data points minimal. Specifically, for each data point z_i, the difference z_i − f(z_i) is calculated, the square of its modulus is taken, and the results for all data points are summed. The minimization finds the value of b for which this sum is smallest; this value of b corresponds to the minimum distance between the data points and the fitted curve.
In solving for c, the method is similar: the minimization again makes the sum of the squared moduli of the differences z_i − f(z_i) minimal. For each data point z_i, the difference z_i − f(z_i) is calculated, the square of its modulus is taken, and the results for all data points are summed; the minimization finds the value of c for which this sum is smallest, and this value of c likewise corresponds to the minimum distance between the data points and the fitted curve.
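A hedged numerical sketch of this parameter search: the sum Σ|z_i − f(z_i)|² is minimized jointly over (b, c) with f(z) = z^n / (b·z + c) as reconstructed above; since the original formula images are not preserved, the model form and the Nelder-Mead optimizer are assumptions for illustration:

```python
import numpy as np
from scipy.optimize import minimize

n = 2                                    # fitting order (a set value)

def cost(params, z):
    b, c = params
    f = z ** n / (b * z + c)             # reconstructed fitted curve f(z)
    return float(np.sum(np.abs(z - f) ** 2))   # sum of squared moduli

res = minimize(cost, x0=np.array([1.0, 1.0]), args=(z,), method="Nelder-Mead")
b_opt, c_opt = res.x
```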
In the description, each embodiment is described in a progressive manner, and each embodiment is mainly described by the differences from other embodiments, so that the same similar parts among the embodiments are mutually referred. For the device disclosed in the embodiment, since it corresponds to the method disclosed in the embodiment, the description is relatively simple, and the relevant points refer to the description of the method section.
Those of skill would further appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both, and that the various illustrative elements and steps are described above generally in terms of functionality in order to clearly illustrate the interchangeability of hardware and software. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
The steps of a method or algorithm described in connection with the embodiments disclosed herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. The software modules may be disposed in Random Access Memory (RAM), memory, read Only Memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art.
The present invention has been described in detail above. The principles and embodiments of the present invention have been described herein with reference to specific examples, the description of which is intended only to facilitate an understanding of the method of the present invention and its core ideas. It should be noted that it will be apparent to those skilled in the art that various modifications and adaptations of the invention can be made without departing from the principles of the invention and these modifications and adaptations are intended to be within the scope of the invention as defined in the following claims.

Claims (8)

1. The contact net geometric parameter measuring method based on binocular machine vision is characterized by comprising the following steps of:
step 1: obtaining binocular vision images in three different directions in a target area; the included angle between the three directions is more than or equal to 90 degrees and less than or equal to 180 degrees;
step 2: creating a panoramic image using binocular vision images of three different directions;
step 3: performing hole detection in the panoramic image to obtain a hole area, screening holes positioned at the edge of the hole area from the hole area, defining an edge detection area by taking each hole as a center and taking a set value as a radius, performing edge detection from the edge detection area to obtain an edge detection result, and extracting a contact net area based on the edge detection result;
step 4: performing curve fitting in the contact net area, and calculating the curve radius of the contact net;
step 5: and calculating the contact line height, transverse distance, longitudinal distance and jump degree of the contact line based on the contact net area.
2. The contact net geometric parameter measuring method based on binocular machine vision according to claim 1, wherein the step 2 specifically comprises:
step 2.1: calibrating the camera, calculating an internal reference matrix and a distortion coefficient of the camera, and carrying out distortion correction on images in three directions by using the parameters to obtain corrected images;
step 2.2: obtaining key points and descriptors in the image by using a characteristic point extraction algorithm for each corrected image, and then matching the three groups of descriptors to find corresponding characteristic point pairs;
step 2.3: estimating a base matrix between images to eliminate matching errors and outlier effects;
step 2.4: based on the basic matrix, calculating an epipolar equation of the feature points on each image;
step 2.5: setting images in three directions as a first image, a second image and a third image respectively; using an epipolar equation to project the characteristic points in the second image onto the first image, so as to realize epipolar correction;
step 2.6: creating a blank image as an initialized panoramic image; merging the epipolar line corrected first image and the epipolar line corrected second image into the panoramic image pixel by pixel, starting from the first image; in the fusion process, performing smooth transition processing on an overlapped area of the first image and the second image by using an image fusion technology;
step 2.7: updating the fused image area into the panoramic image to obtain an updated panoramic image;
step 2.8: and repeatedly executing the steps 2.6 to 2.7 until the third image is fused into the updated panoramic image pixel by pixel, and completing the creation of the panoramic image.
3. The contact net geometric parameter measuring method based on binocular machine vision according to claim 2, wherein the step 3 specifically includes:
step 3.1: converting the panoramic image into a gray scale image I_gray;
step 3.2: applying a threshold segmentation algorithm to binarize the gray scale image to obtain a binary image I_bin;
step 3.3: hole marking is carried out on the binary image by using a connected component analysis algorithm to obtain a marked image I_label, wherein each hole is marked as a separate area;
step 3.4: calculating the area of each hole, deleting the holes with areas lower than the set area threshold, calculating the compactness of each hole, and deleting the holes with compactness lower than the set compactness threshold;
step 3.5: calculating boundary pixels for each hole area to obtain a boundary image I_border;
step 3.6: performing an erosion operation on the boundary image to obtain the eroded boundary image I_erode;
step 3.7: performing a dilation operation on the eroded boundary image to restore the original boundary shape, obtaining the dilated boundary image I_dilate;
step 3.8: extracting hole edges from the dilated boundary image by using an edge detection algorithm to obtain an edge image I_edge;
step 3.9: based on the edge image I_edge and the panoramic image I_pano, extracting the hole areas; for each hole, extracting the pixels located inside the edge image from the panoramic image to obtain the hole area R_k;
Step 3.10: and screening holes positioned at the edges of the hole areas from the hole areas, defining an edge detection area by taking each hole as a center and a set value as a radius, carrying out edge detection from the edge detection area to obtain an edge detection result, and extracting the contact net area based on the edge detection result.
4. A contact net geometric parameter measuring method based on binocular machine vision according to claim 3, wherein the step 3.3 specifically includes: creating an empty label image L with the same dimensions as the binary image I_bin; simultaneously creating a mapping table for storing the labels; traversing each pixel (x, y) of the binary image I_bin; if the value of the current pixel I_bin(x, y) is 1, performing the following operations: acquiring the label set S of already-marked pixels in the neighborhood of the current pixel (x, y); if S is empty, creating a new label for the current pixel, putting the label into the mapping table, and assigning the label to the position of the current pixel, i.e. L(x, y) = L_new; if S is not empty, selecting the smallest label in S and assigning it to the position of the current pixel, i.e. L(x, y) = min(S); if the value of the current pixel I_bin(x, y) is 0, skipping the pixel; if a new label is allocated to the current pixel, updating the mapping table and adding the label into the corresponding label set; and after the traversal is finished, merging the labels according to the mapping table to ensure that only one label exists in each connected area.
5. The contact net geometric parameter measuring method based on binocular machine vision according to claim 4, wherein the step 3.4 specifically includes: for each hole, calculating the number of pixels of the hole, i.e. the area $A$ of the hole; based on a set minimum area threshold $A_{\min}$, screening out the holes whose area is smaller than the minimum area threshold, i.e. if $A < A_{\min}$, deleting the hole from the labeled image; for the remaining holes, calculating the compactness of each hole and deleting the hole from the labeled image if its compactness is lower than a set compactness threshold; the compactness is calculated as

$C = \dfrac{4\pi A}{P^{2}}$

wherein $P$ is the boundary length of the hole; $C$ equals 1 for a perfect circle and decreases for elongated or ragged regions.
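A short sketch of the claim-5 screening, reusing the labeled image from the previous sketch; the thresholds are placeholders (the patent only calls them "set" values) and the perimeter P is estimated by counting boundary pixels:

import numpy as np

def filter_holes(labels: np.ndarray, min_area: int = 50,
                 min_compactness: float = 0.3) -> np.ndarray:
    out = labels.copy()
    for lab in np.unique(labels):
        if lab == 0:
            continue                             # background
        mask = labels == lab
        area = int(mask.sum())                   # A: pixel count of the hole
        padded = np.pad(mask, 1)                 # boundary = hole pixels having a
        boundary = mask & ~(padded[:-2, 1:-1] &  # background 4-neighbour
                            padded[2:, 1:-1] &
                            padded[1:-1, :-2] &
                            padded[1:-1, 2:])
        perimeter = int(boundary.sum())          # P: rough boundary length
        comp = 4 * np.pi * area / perimeter ** 2 if perimeter else 0.0
        if area < min_area or comp < min_compactness:
            out[mask] = 0                        # delete the hole
    return out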
6. The contact net geometric parameter measuring method based on binocular machine vision according to claim 5, wherein in the step 3.8 the Canny edge detection algorithm is used to extract the hole edges from the dilated boundary image to obtain the edge image, and in the step 3.10 the Laplacian operator is used to perform edge detection within the edge detection region.
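With OpenCV, the two operators named in claim 6 could be invoked as below; the Canny thresholds (50, 150) and the Laplacian kernel size are assumptions of the sketch, not values from the patent:

import cv2

def detect_edges(dilated_boundary, roi):
    # Step 3.8: Canny edge extraction on the dilated boundary image.
    edges = cv2.Canny(dilated_boundary, 50, 150)
    # Step 3.10: Laplacian inside the circular edge-detection region (roi),
    # taken back to 8-bit so later stages can threshold it.
    lap = cv2.Laplacian(roi, cv2.CV_16S, ksize=3)
    lap_edges = cv2.convertScaleAbs(lap)
    return edges, lap_edges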
7. The contact net geometric parameter measuring method based on binocular machine vision according to claim 6, wherein the step 4 specifically includes: extracting the edge pixels in the contact net region and mapping them to the complex plane; then carrying out curve fitting on the complex plane, solving the curve parameters, and calculating the curve radius using the following formula:

$R = \dfrac{\left(1 + (2ax + b)^{2}\right)^{3/2}}{\lvert 2a \rvert}$

wherein $a$, $b$ and $c$ are all curve parameters; curve fitting is performed on the complex plane using the following formula:

$f(z) = \sum_{k=0}^{n} c_{k} z^{k}$

wherein the order $n$ is a set value, $f(z)$ is the fitted curve, and $i$ is the imaginary unit; the coordinates $(x, y)$ of each pixel point of the edge pixels in the contact net region are mapped to a complex number on the complex plane, $x$ corresponding to the real part of the complex number and $y$ to its imaginary part; the mathematical expression of the edge pixels in the contact net region is thus $z = x + iy$, wherein $x$ is the horizontal-axis variable and $y$ is the vertical-axis variable.
8. The contact net geometric parameter measuring method based on binocular machine vision according to claim 7, wherein the curve parameters are solved using the following formula:

$\mathbf{c} = \left(Z^{H} Z\right)^{-1} Z^{H} \mathbf{z}$

wherein $\mathbf{c} = (c_{0}, c_{1}, \dots, c_{n})^{T}$ is the vector of curve parameters, $Z$ is the design matrix whose rows contain the powers of the mapped sample points up to order $n$, $Z^{H}$ is its conjugate transpose, and $\mathbf{z}$ is the vector of mapped complex values; this is the least-squares solution of the curve fitting problem.
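Claims 7 and 8 leave the fit's parameterization open; the sketch below takes one plausible reading: each edge pixel (x, y) becomes z = x + iy, a polynomial of set order n is fitted to z over the abscissa x, and claim 8's closed form is the ordinary least-squares solution, computed here with numpy.linalg.lstsq (numerically equivalent to the normal equations). For n = 2 the radius follows from the standard curvature formula applied to the imaginary (vertical) parts of the coefficients; the evaluation point is an assumption of the sketch.

import numpy as np

def fit_and_radius(xs: np.ndarray, ys: np.ndarray, n: int = 2):
    z = xs + 1j * ys                              # map pixels to the complex plane
    # Design (Vandermonde) matrix Z with columns x^0, x^1, ..., x^n.
    Z = np.vander(xs, n + 1, increasing=True).astype(complex)
    # Least-squares solve of Z c = z; same minimizer as c = (Z^H Z)^{-1} Z^H z.
    c, *_ = np.linalg.lstsq(Z, z, rcond=None)
    if n == 2:
        _, b, a = c                               # f(x) = a x^2 + b x + c0
        x0 = float(xs.mean())                     # assumed evaluation point
        slope = 2 * a.imag * x0 + b.imag          # y'(x0) from the imaginary parts
        # R = (1 + y'^2)^(3/2) / |y''|, with y'' = 2 Im(a); assumes Im(a) != 0.
        radius = (1 + slope ** 2) ** 1.5 / abs(2 * a.imag)
        return c, radius
    return c, None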
CN202311301492.3A 2023-10-10 2023-10-10 Contact net geometric parameter measurement method based on binocular machine vision Active CN117036359B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311301492.3A CN117036359B (en) 2023-10-10 2023-10-10 Contact net geometric parameter measurement method based on binocular machine vision


Publications (2)

Publication Number Publication Date
CN117036359A 2023-11-10
CN117036359B 2023-12-08

Family

ID=88639469

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311301492.3A Active CN117036359B (en) 2023-10-10 2023-10-10 Contact net geometric parameter measurement method based on binocular machine vision

Country Status (1)

Country Link
CN (1) CN117036359B (en)

Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101255773A (en) * 2008-03-17 2008-09-03 莱芜钢铁股份有限公司 H-shaped steel for electrified railroad contact network pillar as well as preparation technique thereof
CN104318546A (en) * 2014-09-29 2015-01-28 中国农业大学 Multi-scale analysis-based greenhouse field plant leaf margin extraction method and system
CN105674880A (en) * 2016-01-25 2016-06-15 成都国铁电气设备有限公司 Geometric parameter measuring method and system for overhead lines based on binocular principle
CN105741291A (en) * 2016-01-30 2016-07-06 西南交通大学 Method for detecting faults of equipotential lines of high-speed railway overhead line system suspension devices
CN205440025U (en) * 2016-03-18 2016-08-10 中铁建电气化局集团轨道交通器材有限公司 A registration arm checkpost for high -speed electronic railway connecting net
CN106679567A (en) * 2017-02-14 2017-05-17 成都国铁电气设备有限公司 Contact net and strut geometric parameter detecting measuring system based on binocular stereoscopic vision
GB201916315D0 (en) * 2019-11-08 2019-12-25 Darkvision Tech Inc Using an acoustic device to identify external apparatus mounted to a tubular
CN110930415A (en) * 2019-11-14 2020-03-27 中国航空工业集团公司西安飞行自动控制研究所 Method for detecting spatial position of track contact net
CN111553500A (en) * 2020-05-11 2020-08-18 北京航空航天大学 Railway traffic contact net inspection method based on attention mechanism full convolution network
CN112325772A (en) * 2020-10-28 2021-02-05 中国电力科学研究院有限公司 Punching size measuring method, system, equipment and medium based on machine vision
CN113012098A (en) * 2021-01-25 2021-06-22 郑州轻工业大学 Iron tower angle steel punching defect detection method based on BP neural network
AR121550A1 (en) * 2021-03-11 2022-06-15 Guijarro Jimenez Antonio Gustavo ULTRA-RESISTANT PNEUMATIC CONSTRUCTION ARRANGEMENT FOR LARGE WORKS
CN116660286A (en) * 2023-04-24 2023-08-29 上海电机学院 Wire harness head peeling measurement and defect detection method and system based on image segmentation



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant