KR101765223B1 - Method For Estimating Edge Displacement Against Brightness


Info

Publication number: KR101765223B1
Authority: KR (South Korea)
Prior art keywords: brightness, edge, value, estimating, squares
Application number: KR1020160021722A
Other languages: Korean (ko)
Inventor: 서수영
Original Assignee: 경북대학교 산학협력단
Application filed by 경북대학교 산학협력단; priority to KR1020160021722A
Application granted; publication of KR101765223B1

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 17/00 Diagnosis, testing or measuring for television systems or their details
    • H04N 17/002 Diagnosis, testing or measuring for television systems or their details for television cameras
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/13 Edge detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/80 Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10004 Still image; Photographic image

Abstract

A method for estimating the displacement of an edge included in an image according to its brightness value. The method comprises: preparing a plurality of target sheets, each formed with a reference pattern for detecting reference lines and a grid pattern for detecting an edge position, the brightness values of the edges in the grid patterns differing from sheet to sheet; acquiring an image of each target sheet; analyzing the reference pattern of each target sheet to estimate the reference lines; analyzing the grid pattern of each target sheet to estimate the position of the edge formed in the grid pattern; and estimating the edge displacement according to the change in brightness value, based on the displacement between the reference line and the edge position of each target sheet.
According to the present invention, it is possible to estimate an edge displacement according to a brightness value and a distance between a camera and an object, so that accurate edge position recognition is possible in geographical information construction and a traveling robot.

Description

METHOD FOR ESTIMATING EDGE DISPLACEMENT AGAINST BRIGHTNESS

The present invention relates to a method for estimating edge displacement with sub-pixel accuracy under varying brightness and camera-to-object distance (hereinafter referred to as COD) in machine vision.

In the field of image processing, an edge represents an object boundary where the brightness value changes abruptly. Edge detection and positioning is an essential procedure for extracting and recognizing objects in an image. Owing to this importance, edge detection has been studied extensively in the image processing field.

To measure object shape precisely, edges must be positioned with sub-pixel accuracy, and much research has therefore been devoted to sub-pixel edge localization. There are also studies that quantify the quality of edge detection and positioning results.

How well the edge positions determined by an edge extraction process correspond to the physical edges of the real world is a fundamental question in edge detection. Although many studies have extracted edges with sub-pixel accuracy, little research has systematically examined the geometric relationship between edges in the image and edges in the real world.

Korean Patent No. 1526465: Improvement method of depth image quality based on graphic processor
Korean Patent No. 1023944: Image processing apparatus and method thereof

SUMMARY OF THE INVENTION: It is an object of the present invention to provide a method and apparatus for estimating edge displacement with sub-pixel accuracy under varying brightness and camera-to-object distance in machine vision.

According to an aspect of the present invention, there is provided a target sheet carrying a pattern image for determining the reference lines used to detect the displacement of an edge included in an image. The sheet has two pattern regions: a reference pattern composed of a pair of vertical regions and a pair of horizontal regions, each a line shape of constant width, separated from each other by a central space; and a grid pattern of four squares formed in that central space.

According to another aspect of the present invention, there is provided a method for estimating the displacement of an edge included in an image according to its brightness value, the method comprising: preparing a plurality of target sheets on which a reference pattern for reference line detection and a grid pattern for edge position detection are formed, the brightness values of the edges included in the grid patterns differing from sheet to sheet; acquiring an image of each target sheet; analyzing the reference pattern of each target sheet to estimate the reference lines; analyzing the grid pattern of each target sheet to estimate the position of the edge formed in the grid pattern; and estimating the edge displacement according to the change in brightness value, based on the displacement between the reference line and the edge position of each target sheet.

According to the present invention, it is possible to estimate an edge displacement according to a brightness value and a distance between a camera and an object, so that accurate edge position recognition is possible in geographical information construction and a traveling robot.

Fig. 1 shows the result of edge displacement measurement, where a shows the edge displacement effect in three-dimensional object reconstruction and b shows the edge displacement model in outline.
Fig. 2 shows the pattern shape of a target sheet according to the present invention.
Fig. 3 shows cropped images of a target sheet designed with a foreground brightness of 0.5, taken at CODs of a = 1 m, b = 2 m, c = 3 m, d = 4 m, e = 5 m, and f = 6 m.
Fig. 4 shows the central axis (red line) of the region surrounded by red circles and black dotted lines, where a to f are example images taken at CODs of 1 to 6 m.
Fig. 5 shows the goodness of fit of the reference lines to the observation points, where a is the horizontal reference line and b is the vertical reference line.
Fig. 6 shows center points (red stars) and reference lines (yellow lines) superimposed on an image, with enlarged views of regions A to E of the full image a.
Fig. 7 is a graph showing the correlation between the brightness value on the target sheet and the brightness value in the image.
Fig. 8 shows, in a, the profile accumulation sections and, in b, the projection of pixel coordinates onto the baseline normal (the x axis).
Fig. 9 shows the accumulated brightness profiles at each COD of a target sheet with a foreground brightness of 0.5, where a to f are CODs of 1 to 6 m.
Fig. 10 shows the average profiles at each COD of a target sheet with a foreground brightness of 0.5, where a to f are CODs of 1 to 6 m.
Fig. 11 shows profiles under the harshest condition, a foreground brightness of 0.5 at a COD of 6 m, where a is the brightness profile, b is the average of the brightness profiles, and c is the slope of the average profile.
Fig. 12 shows the slope profile (blue line) and peak (red line) at each COD of a target sheet with a foreground brightness of 0.5.
Fig. 13 shows the slope profile (blue line), noise threshold (black dotted line), and peak (red line) at each COD of a target sheet with a foreground brightness of 0.1.
Fig. 14 shows, in a, the estimated edge displacement against brightness and COD and, in b, the standard deviation of the estimate.

Hereinafter, a preferred embodiment of the present invention will be described in detail with reference to the drawings.

Fig. 1 shows the result of edge displacement measurement, where a shows the edge displacement effect in the three-dimensional object reconstruction and b shows the edge displacement in outline.

As shown in FIG. 1a, if a gap occurs between an edge determined from image brightness and the corresponding edge in the real world, the 3D reconstruction of the edge in object space is displaced from the measured edge. In the present invention, as shown in FIG. 1b, the edge displacement is modeled as bright pixels pushing the edge in the image toward the dark pixels.

Fig. 2 shows the pattern shape of a target sheet according to the present invention. Fig. 3 shows cropped images of a target sheet designed with a foreground brightness of 0.5, taken at CODs of a = 1 m, b = 2 m, c = 3 m, d = 4 m, e = 5 m, and f = 6 m.

To measure the amount of displacement systematically, the inventor designed the target sheet shown in Fig. 2. The target sheet is divided into two pattern regions: a reference pattern composed of a pair of vertical areas and a pair of horizontal areas, each a line shape of constant width, spaced apart from each other with a space at the center; and a grid pattern formed in that central space. The grid pattern consists of two upper and two lower squares. The upper-left and lower-right squares on one diagonal serve as the background and have a brightness value fixed at 0 (black); the upper-right and lower-left squares on the other diagonal serve as the foreground and are printed with variable brightness values from 0.1 to 1 (white) in steps of 0.1.

The dimensions of each part in Fig. 2 are shown in Table 1.

part          a    b     c    d
Length (mm)   9    52.5  35   9
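As an illustration only (the patent contains no code), the following Python sketch synthesizes a target sheet of this layout; the function name, pixel dimensions, line widths, and margins are assumptions for display purposes, not the millimetre values of Table 1.

```python
import numpy as np

def make_target_sheet(size=600, foreground=0.5):
    # Synthetic target sheet with values in [0, 1]; all sizes below are
    # illustrative pixel counts, not the Table 1 dimensions.
    img = np.ones((size, size))                  # white sheet
    c, h = size // 2, size // 6                  # grid half-width (assumed)
    img[c - h:c, c - h:c] = 0.0                  # upper-left square: background (0)
    img[c:c + h, c:c + h] = 0.0                  # lower-right square: background (0)
    img[c - h:c, c:c + h] = foreground           # upper-right square: foreground
    img[c:c + h, c - h:c] = foreground           # lower-left square: foreground
    w, m = 5, 20                                 # line half-width, margin (assumed)
    img[c - w:c + w, m:c - h - m] = 0.0          # left horizontal reference area
    img[c - w:c + w, c + h + m:size - m] = 0.0   # right horizontal reference area
    img[m:c - h - m, c - w:c + w] = 0.0          # upper vertical reference area
    img[c + h + m:size - m, c - w:c + w] = 0.0   # lower vertical reference area
    return img

# ten sheets with foreground brightness 0.1 .. 1.0, as in the experiment
sheets = [make_target_sheet(foreground=b / 10) for b in range(1, 11)]
```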

Ten target sheets, whose foreground brightness values varied in steps of 0.1, were printed, and each target sheet was imaged at CODs from 1 m to 6 m using a DSLR camera. The camera focus was adjusted manually at each shooting distance so that the target sheet was captured sharply.

To remove lens distortion, all 60 captured images were resampled using PhotoModeler software, and the resulting images were then cropped to include only the target sheet area.

FIG. 3 shows the cropped images of a target sheet with a foreground brightness of 0.5 photographed at each COD. Of the red, green, and blue bands, the red band proved the most reliable, so only the red band of each cropped image was used in the experiments that follow.

Fig. 4 shows the central axis (red line) of the region surrounded by red circles and black dotted lines, where a to f are example images taken at CODs of 1 to 6 m. Fig. 5 shows the goodness of fit of the reference lines to the observation points. Fig. 6 shows center points (red stars) and reference lines (yellow lines) superimposed on an image, with enlarged views of regions A to E of the full image a.

To establish the physical edge location, the geometry of the reference lines is estimated first. As described above, a pair of vertical and a pair of horizontal reference lines are used. To detect the reference lines, the reference regions are first detected using binary thresholding and region characteristics. In the binary thresholding step, a binary image is generated from each cropped image using a brightness threshold (set to 100 in the present embodiment), and each binary image is then labeled by connected component labeling with 4-connectivity. Since labeling proceeds left to right and top to bottom, the first labeled area is taken as the left area of the horizontal reference region. Next, the dimensions of each labeled area are compared with those of this first area to determine the right area of the horizontal reference region. The vertical areas are likewise determined by comparing the dimensions of the first area with those of the remaining areas.
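A minimal sketch of this detection step, assuming a dark pattern printed on a bright sheet; `detect_reference_regions` and its return values are illustrative names, and SciPy's connected component labeling stands in for whatever implementation the inventors used.

```python
import numpy as np
from scipy import ndimage

def detect_reference_regions(gray, thresh=100):
    # Binarize with the brightness threshold (100 in the embodiment);
    # the printed pattern is dark on a bright sheet, so keep pixels below it.
    binary = gray < thresh
    four_conn = np.array([[0, 1, 0],
                          [1, 1, 1],
                          [0, 1, 0]])            # 4-connectivity structure
    labels, n = ndimage.label(binary, structure=four_conn)
    boxes = ndimage.find_objects(labels)         # one bounding box per labeled area
    # Region dimensions (height, width) let the horizontal and vertical
    # reference areas be matched against the first labeled area.
    dims = [(b[0].stop - b[0].start, b[1].stop - b[1].start) for b in boxes]
    return labels, boxes, dims
```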

Next, the center line of each reference area is extracted as follows. For the horizontal reference region, the first and last five columns of each area are discarded and only the remaining portion is used for center line extraction. A brightness profile is obtained for each column, and the brightness slope is calculated to find the limits of the area in that column. Row indices whose brightness slope is below a threshold (set to 5 in this embodiment) are then searched; the minimum of these row indices is assigned to the upper boundary of the area, and the lower boundary is found in the same way. A segment bounded by the upper and lower boundary rows, together with the brightness values between them, is formed, and finally the geometric center of each column profile is computed in the row direction. Fig. 4 shows the brightness profiles and the determined centers of gravity for the image at each COD. The center line of the vertical areas is computed in the same manner, except that brightness profiles along the row direction are used instead of the column direction.
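A sketch of the per-column centre-of-gravity computation, assuming the patent's boundary search can be paraphrased as keeping the rows between the strongest brightness transitions; the names and the dark-pixel weighting are illustrative.

```python
import numpy as np

def centerline_points(region, slope_thresh=5, margin=5):
    # region: 2-D uint8 crop around a horizontal reference area.
    rows, cols = region.shape
    points = []
    for col in range(margin, cols - margin):     # discard start/end columns
        profile = region[:, col].astype(float)
        slope = np.abs(np.diff(profile))
        idx = np.where(slope > slope_thresh)[0]  # candidate boundary rows
        if idx.size < 2:
            continue
        top, bottom = idx.min(), idx.max() + 1   # upper and lower boundaries
        r = np.arange(top, bottom + 1)
        w = 255.0 - profile[top:bottom + 1]      # darker pixels weigh more
        points.append((np.sum(r * w) / np.sum(w), col))
    return np.asarray(points)                    # (row, column) centre points
```

The vertical reference areas would be handled the same way with rows and columns exchanged.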

The geometry of the baseline is modeled using the Hessian normal form of a line, Equation (1).

$r\sin\theta + c\cos\theta - d = 0$    (1)

Here, r and c are the row and column coordinates of a center point extracted in the previous step, θ is the angle measured from the column axis to the normal direction of the line, and d is the shortest distance from the origin to the line. In estimating the horizontal baseline, assuming that the column coordinates are fixed and only the row coordinates carry measurement errors, the line geometry of Equation (1) can be rewritten as Equation (2).

$r = \dfrac{d - c\cos\theta}{\sin\theta} + e$    (2)

For a set of n center points belonging to the baseline, the n observation equations can be expressed as Equation (3).

$r_i = \dfrac{d - c_i\cos\theta}{\sin\theta} + e_i, \quad i = 1, \dots, n$    (3)

Furthermore, Equation (3) can be rewritten in vector form as Equation (4).

$\mathbf{y} = f(\boldsymbol{\xi}) + \mathbf{e}$    (4)

where $\mathbf{y} = [\, r_1 \;\; r_2 \;\; \cdots \;\; r_n \,]^\mathsf{T}$ is the vector of observed row coordinates, $\mathbf{e}$ is the error vector, and $\boldsymbol{\xi} = [\, \theta \;\; d \,]^\mathsf{T}$ is the parameter vector.

For parameter estimation by an iterative scheme, Equation (4) can be linearized as Equation (5).

$\mathbf{y} \approx f(\boldsymbol{\xi}^0) + \dfrac{\partial f}{\partial \boldsymbol{\xi}}\Big|_{\boldsymbol{\xi}^0}\,\Delta\boldsymbol{\xi} + \mathbf{e}$    (5)

Equation (5) can be re-expressed as Equation (6) using a matrix and a vector.

$\Delta\mathbf{y} = A\,\Delta\boldsymbol{\xi} + \mathbf{e}, \quad \Delta\mathbf{y} = \mathbf{y} - f(\boldsymbol{\xi}^0)$    (6)

The error vector is assumed to follow the Gaussian normal distribution as shown in equation (7).

$\mathbf{e} \sim N(\mathbf{0},\ \sigma_0^2 P^{-1})$    (7)

Here, P is the weight matrix; it is taken to be the identity matrix, since all observation errors are assumed equal.

Therefore, in each iteration process, the improvement vector can be estimated as shown in equation (8).

$\Delta\hat{\boldsymbol{\xi}} = (A^\mathsf{T} P A)^{-1} A^\mathsf{T} P\,\Delta\mathbf{y}$    (8)

Therefore, the error vector is predicted as shown in Equation (9).

$\hat{\mathbf{e}} = \Delta\mathbf{y} - A\,\Delta\hat{\boldsymbol{\xi}}$    (9)

Then, the dispersion component in Equation (7) can be estimated as Equation (10).

$\hat{\sigma}_0^2 = \dfrac{\hat{\mathbf{e}}^\mathsf{T} P\, \hat{\mathbf{e}}}{n - 2}$    (10)

 This process is repeated until the inequality in equation (11) is satisfied or the maximum number of iterations is reached.

$\lVert \Delta\hat{\boldsymbol{\xi}} \rVert < \varepsilon$    (11)

Here, $\varepsilon$ is set to $10^{-10}$.

The geometry of the vertical reference line is estimated with an analogous observation equation, Equation (12), following the same sequential algorithm as for the horizontal reference line.

$c = \dfrac{d - r\sin\theta}{\cos\theta} + e$    (12)
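Numerically, the iteration of Equations (1)-(11) reduces to a small Gauss-Markov loop. The sketch below assumes the reconstructed form of Equation (1), $r\sin\theta + c\cos\theta = d$, and P = I; the initialization from an ordinary polynomial fit is an assumption, not taken from the patent.

```python
import numpy as np

def fit_reference_line(r, c, eps=1e-10, max_iter=100):
    # r, c: arrays of row/column coordinates of the extracted centre points.
    a, b = np.polyfit(c, r, 1)                  # crude initial line r = a*c + b
    theta = np.arctan2(1.0, -a)                 # slope a = -cot(theta)
    d = b * np.sin(theta)
    for _ in range(max_iter):
        f = (d - c * np.cos(theta)) / np.sin(theta)      # predicted rows, Eq. (2)
        dy = r - f                                       # misclosure vector
        A = np.column_stack([                            # Jacobian of f
            (c * np.sin(theta) ** 2
             - (d - c * np.cos(theta)) * np.cos(theta)) / np.sin(theta) ** 2,
            np.full(c.shape, 1.0 / np.sin(theta)),
        ])
        dxi, *_ = np.linalg.lstsq(A, dy, rcond=None)     # update, Eq. (8), P = I
        theta, d = theta + dxi[0], d + dxi[1]
        if np.max(np.abs(dxi)) < eps:                    # convergence, Eq. (11)
            break
    e = r - (d - c * np.cos(theta)) / np.sin(theta)      # residuals, Eq. (9)
    var0 = float(e @ e) / (len(r) - 2)                   # dispersion, Eq. (10)
    return theta, d, var0
```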

This baseline estimation process was applied to all 60 images, and the quality of the estimation results is shown in Fig. 5. As shown there, the square root of the dispersion component (SRVC) is below 0.05 pixels for CODs in the range of 2 to 6 m and below 0.08 pixels at a COD of 1 m. The relatively large value at 1 m is attributed to slight unevenness in the surface flatness of the target sheet, observable in the close-up image.

Fig. 6 shows an exemplary plot in which the estimated baselines are superimposed on a target image with a COD of 3 m. As shown in Fig. 6, the estimated baseline geometry is sufficiently accurate to serve as the measurement reference for estimating the edge displacement.

FIG. 7 is a graph showing the correlation between the brightness value on the target sheet and the brightness value in the image. In FIG. 8, a shows the profile accumulation sections and b shows the projection of pixel coordinates onto the baseline normal (the x axis). FIG. 9 shows the accumulated brightness profiles at each COD of the target sheet with a foreground brightness of 0.5, and FIG. 10 the corresponding average profiles (in both, a to f are CODs of 1 to 6 m). FIG. 11 shows profiles under the harshest condition, a foreground brightness of 0.5 at a COD of 6 m, where a is the brightness profile, b is the average of the brightness profiles, and c is the slope of the average profile.

After baseline estimation, the foreground and background regions can be determined from the intersection positions of the two reference lines, together with the background area dimensions detected in the target image with a foreground brightness of 1.0.

FIG. 7 shows the digital number (DN, 0 to 255) measured in the image for each brightness intensity (0.0 to 1.0) printed on the target sheet of the present invention. As shown in FIG. 7, the image brightness increases with COD for both the foreground and background brightness values, which is considered to be due to the increase of background light as COD increases. Note that the relation between the foreground brightness on the target sheet and the brightness in the image is not linear but slightly curved.

For reliable edge displacement estimation, brightness profiles are accumulated as shown in Fig. 8. In each image there are four sections where brightness profiles are collected. The red arrows in a show the accumulation direction, which is designed to run from the background area into the foreground area. The collection of brightness profiles along the horizontal direction is shown in b: each pixel position is projected onto the normal of the baseline. A similar projection is applied to the brightness profiles collected along the vertical baseline.
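The projection itself is one line once a baseline $(\theta, d)$ has been fitted; the helper below is an illustrative sketch of that step.

```python
import numpy as np

def project_onto_baseline_normal(rows, cols, theta, d):
    # Signed distance of each pixel centre from the baseline
    # r*sin(theta) + c*cos(theta) = d, used as the x coordinate
    # of the accumulated brightness profile.
    return rows * np.sin(theta) + cols * np.cos(theta) - d
```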

Next, each brightness profile is resampled at a specific interval (0.1 pixels in this embodiment) along the x axis, and the brightness values at each resampling position are averaged. Fig. 10 shows the average brightness profiles obtained from the profiles shown in Fig. 9.
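A sketch of the resampling-and-averaging step, assuming each collected profile is a pair of arrays (positions along the baseline normal, brightness values); `np.interp` stands in for whatever interpolation the inventors used.

```python
import numpy as np

def average_profile(profiles, step=0.1):
    # profiles: list of (x, b) array pairs.
    lo = max(x.min() for x, _ in profiles)
    hi = min(x.max() for x, _ in profiles)
    grid = np.arange(lo, hi, step)               # common 0.1-pixel grid
    resampled = []
    for x, b in profiles:
        order = np.argsort(x)                    # np.interp needs sorted x
        resampled.append(np.interp(grid, x[order], b[order]))
    return grid, np.mean(resampled, axis=0)      # average brightness profile
```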

Since images with large foreground brightness values have relatively high contrast, the average profiles obtained from them show a clear step-edge shape, as in Fig. 10. When the foreground brightness value is small, however, the contrast is low, as shown in Fig. 11a, and the resulting average profile deviates from a regular step-edge shape, as shown in Fig. 11b.

In the present invention, a least squares method based on the slope profile is used to calculate edge positions with subpixel accuracy. Whereas most conventional subpixel localization methods are based on the brightness profile, this technique differs in that it uses the brightness gradient (slope) profile. For position determination, the algorithm calculates the slope between consecutive brightness values, the slope data points lying at the regular interval given in Equation (13).

$s_i = \dfrac{b_{i+1} - b_i}{\Delta x}, \quad i = 1, \dots, N-1$    (13)

where N is the number of data points and $\Delta x$ is the resampling interval.

The data position is then expressed as a function of the data index i, as shown in equation (14).

$x_i = x_1 + (i - 1)\,\Delta x$    (14)

The slope at each data point is expressed as a position function as shown in equation (15).

$s_i = s(x_i)$    (15)

Next, the left and right boundaries of the region where the slope exceeds a noise threshold $S_{\mathrm{noise}}$ are searched as shown in Equation (16). The threshold is determined by visual analysis of slope profiles such as the one in Fig. 11c and is set to 0.05 in the present embodiment.

$i_L = \min\{\, i \mid s_i > S_{\mathrm{noise}} \,\},$

$i_R = \max\{\, i \mid s_i > S_{\mathrm{noise}} \,\}$    (16)

The collected positions are set as the observation vector, as shown in Equation (17).

$\mathbf{y} = [\, x_{i_L} \;\; x_{i_L+1} \;\; \cdots \;\; x_{i_R} \,]^\mathsf{T}$    (17)

The weight matrix of the observation vector is modeled using the square of the difference between the slope value and the threshold value, so that only the effective portion of the slope contributes, as shown in Equation (18).

$P = \operatorname{diag}\!\big((s_{i_L} - S_{\mathrm{noise}})^2, \dots, (s_{i_R} - S_{\mathrm{noise}})^2\big)$    (18)

The observation vector can be expressed with the parameter μ and an error vector e, as shown in Equation (19).

$\mathbf{y} = \boldsymbol{\tau}\mu + \mathbf{e}$    (19)

The stochastic model of the observations is given by Equation (20).

$\mathbf{e} \sim N(\mathbf{0},\ \sigma_0^2 P^{-1})$    (20)

The sum vector is expressed by Equation (21).

$\boldsymbol{\tau} = [\, 1 \;\; 1 \;\; \cdots \;\; 1 \,]^\mathsf{T}$    (21)

Therefore, a parameter representing the edge position with subpixel accuracy is estimated as shown in Equation (22).

$\hat{\mu} = (\boldsymbol{\tau}^\mathsf{T} P \boldsymbol{\tau})^{-1} \boldsymbol{\tau}^\mathsf{T} P\,\mathbf{y}$    (22)

The corresponding error vector can be estimated as shown in Equation (23).

$\hat{\mathbf{e}} = \mathbf{y} - \boldsymbol{\tau}\hat{\mu}$    (23)

Then, the dispersion component in equation (20) is estimated as shown in equation (24).

$\hat{\sigma}_0^2 = \dfrac{\hat{\mathbf{e}}^\mathsf{T} P\, \hat{\mathbf{e}}}{n - 1}$    (24)

Finally, the variance of the estimated position is expressed as Equation (25).

$D\{\hat{\mu}\} = \hat{\sigma}_0^2\,(\boldsymbol{\tau}^\mathsf{T} P \boldsymbol{\tau})^{-1}$    (25)
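Taken together, Equations (13)-(25) amount to a weighted mean of the slope-sample positions, with weights $(s_i - S_{\mathrm{noise}})^2$. The sketch below implements the estimator under that reading (a boolean mask replaces the explicit boundary search of Equation (16)); it is an illustration, not the patent's reference implementation.

```python
import numpy as np

def subpixel_edge_position(x, b, s_noise=0.05):
    # x, b: averaged profile positions and brightness values.
    s = np.diff(b) / np.diff(x)                  # slope profile, Eq. (13)
    xm = 0.5 * (x[:-1] + x[1:])                  # slope sample positions
    keep = np.abs(s) > s_noise                   # active slope region, Eq. (16)
    y = xm[keep]                                 # observation vector, Eq. (17)
    p = (np.abs(s[keep]) - s_noise) ** 2         # weight diagonal, Eq. (18)
    mu = float(p @ y) / p.sum()                  # edge position, Eq. (22)
    e = y - mu                                   # residuals, Eq. (23)
    var0 = float(p @ e ** 2) / (len(y) - 1)      # dispersion, Eq. (24)
    return mu, var0 / p.sum()                    # estimate and variance, Eq. (25)
```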

FIG. 12 shows the slope profile (blue line) and peak (red line) at each COD of the target sheet with a foreground brightness of 0.5, and FIG. 13 shows the slope profile (blue line), noise threshold (black dotted line), and peak (red line) at each COD of the target sheet with a foreground brightness of 0.1.

Figure 12 shows the slope profiles and the calculated peaks for images at various COD values. When the foreground brightness is 0.1, the contrast between the foreground and background brightness is very low, which is the most difficult condition for finding the peak of the slope profile; Fig. 13 shows this case. Visual inspection of the figure shows that the peaks are still located adequately, with sub-pixel accuracy, even under this most difficult condition.

In Fig. 14, a represents an edge displacement estimation value for brightness and COD, and b represents a standard deviation of an estimated value.

Figure 14 shows the estimated edge displacements and the variance of these estimates at each foreground brightness and COD. Visual inspection of the figure clearly shows that, as the foreground brightness increases from 0.4 to 1, the edge position measured from the image tends to move continuously toward the dark side. This observed displacement should be assessed against the standard deviations of the estimated edge positions shown in Fig. 14b. For foreground brightness in the range 0.1 to 0.3, the displacement exhibits no distinctive trend.

Claims (14)

1. A sheet having a pattern image for determining reference lines for detecting the displacement of an edge included in an image, the sheet comprising: a reference pattern composed of a pair of vertical regions and two horizontal regions, each a line shape having a constant width, spaced apart from each other with a space at the center; and a grid pattern composed of four squares formed in the central space.
2. The target sheet according to claim 1, wherein the grid pattern comprises two upper squares and two lower squares, one square and the square diagonally opposite it being used as the background and the remaining squares being used as the foreground.
3. The target sheet according to claim 2, wherein a plurality of such target sheets are fabricated such that the background has a brightness value fixed to zero and the foreground has a variable brightness value.
4. A method for estimating the displacement of an edge included in an image according to a brightness value, the method comprising:
Preparing a plurality of target sheets in which a reference pattern for detecting a reference line and a grid pattern for edge position detection are formed and the brightness values of the edges included in the grid pattern are different from each other;
Obtaining an image of each of the target sheets;
Analyzing a reference pattern of each of the target sheets to estimate a reference line;
Analyzing a grid pattern of each of the target sheets to estimate a position of an edge formed in the grid pattern;
Estimating an edge displacement according to a change in brightness value based on a displacement value between a reference line and an edge position of each target sheet.
5. The method of claim 4, wherein the target sheet has, as a line shape of predetermined width, a reference pattern composed of a pair of vertical areas and two horizontal areas spaced apart from each other with a space at the center, and a grid pattern composed of four squares formed in the central space.
6. The method of claim 5, wherein the grid pattern is composed of two upper squares and two lower squares, one square and the square diagonally opposite it being used as the background and the remaining squares being used as the foreground.
7. The method according to claim 6, wherein the plurality of target sheets are manufactured such that the background has a brightness value fixed to 0 and the foreground has a variable brightness value.
8. The method of claim 4, wherein acquiring the image of each target sheet comprises acquiring a plurality of images while varying the camera-to-object distance for each target sheet, and wherein estimating the edge displacement according to the brightness value change based on the displacement value between the reference line and the edge position of each target sheet estimates the edge displacement in consideration of the camera-to-object distance together with the brightness value change.
9. The method of claim 4, wherein estimating the reference line by analyzing the reference pattern of each target sheet comprises:
generating a binary image of the reference pattern using a preset threshold value;
labeling the generated binary image using connected component labeling;
determining a start area and an end area among the labeled areas;
obtaining a brightness profile of the area between the start area and the end area;
searching indices whose brightness slope is at or below a predetermined slope threshold, setting the point having the minimum value among the indices as a boundary of the corresponding area, and creating a region bounded by the brightness values of the boundary points and the brightness values between them; and
calculating the geometric center of each profile in the row or column direction.
10. The method of claim 7, wherein estimating the position of the edge formed in the grid pattern by analyzing the grid pattern of each target sheet comprises:
determining a foreground region and a background region in the grid pattern based on the intersection positions of the estimated baselines;
obtaining a brightness slope profile in the direction from the background area to the foreground area; and
estimating an edge position using a least squares method based on the slope profile.
11. The method of claim 10, wherein estimating the edge position using the least squares method based on the slope profile comprises:
searching boundary positions where the brightness slope has a value larger than a specific threshold value and setting those positions as an observation vector;
modeling a weight matrix of the observation vector using the squared difference between the slope value and the threshold value; and
estimating a parameter representing the edge position from the observation vector and estimating the corresponding error.
12. The method of claim 11, wherein the observation vector is expressed by the following equation:
$\mathbf{y} = \boldsymbol{\tau}\mu + \mathbf{e}, \quad \mathbf{e} \sim N(\mathbf{0},\ \sigma_0^2 P^{-1})$

Here, τ is the sum vector, μ is the parameter, e is the error vector, and $\sigma_0^2$ is the variance component.
13. The method of claim 11, wherein the parameter is expressed by the following equation:
$\hat{\mu} = (\boldsymbol{\tau}^\mathsf{T} P \boldsymbol{\tau})^{-1} \boldsymbol{\tau}^\mathsf{T} P\,\mathbf{y}$

Here, τ denotes the sum vector, P the weight matrix, and y the observation vector.
14. The method of claim 11, further comprising:
calculating a variance value of the estimated position using the estimated error; and
evaluating the adequacy of the edge displacement estimate based on the variance value of the estimated position.
KR1020160021722A 2016-02-24 2016-02-24 Method For Estimating Edge Displacement Against Brightness KR101765223B1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
KR1020160021722A KR101765223B1 (en) 2016-02-24 2016-02-24 Method For Estimating Edge Displacement Against Brightness


Publications (1)

Publication Number Publication Date
KR101765223B1 true KR101765223B1 (en) 2017-08-04

Family

ID=59654541

Family Applications (1)

Application Number Title Priority Date Filing Date
KR1020160021722A KR101765223B1 (en) 2016-02-24 2016-02-24 Method For Estimating Edge Displacement Against Brightness

Country Status (1)

Country Link
KR (1) KR101765223B1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11227394B2 (en) 2016-10-13 2022-01-18 Kyungpook National University Industry-Academic Cooperation Foundation Method for setting edge blur for edge modeling

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101255742B1 (en) 2011-05-31 2013-04-17 주식회사 에이티엠 Method for deciding lens distortion correction parameters



Legal Events

Date Code Title Description
E701 Decision to grant or registration of patent right
GRNT Written decision to grant