CN116542979A - Image measurement-based prediction correction method and terminal

Image measurement-based prediction correction method and terminal

Info

Publication number
CN116542979A
Authority
CN
China
Prior art keywords
point
target
image
edge
pixel
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202310822728.1A
Other languages
Chinese (zh)
Other versions
CN116542979B (en)
Inventor
黄宗荣
林大甲
郑敏忠
江世松
刘兵
Current Assignee
Jinqianmao Technology Co ltd
Original Assignee
Jinqianmao Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Jinqianmao Technology Co ltd filed Critical Jinqianmao Technology Co ltd
Priority to CN202310822728.1A priority Critical patent/CN116542979B/en
Publication of CN116542979A publication Critical patent/CN116542979A/en
Application granted granted Critical
Publication of CN116542979B publication Critical patent/CN116542979B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • G Physics; G06 Computing; Calculating or Counting
    • G06T Image data processing or generation, in general
    • G06T7/00 Image analysis; G06T7/0002 Inspection of images, e.g. flaw detection
    • G06T5/70
    • G06T7/10 Segmentation; Edge detection; G06T7/12 Edge-based segmentation; G06T7/136 involving thresholding
    • G06T7/60 Analysis of geometric attributes; G06T7/62 of area, perimeter, diameter or volume
    • G06V Image or video recognition or understanding; G06V10/70 using pattern recognition or machine learning; G06V10/764 using classification, e.g. of video objects
    • G06T2207/00 Indexing scheme for image analysis or image enhancement; G06T2207/20 Special algorithmic details; G06T2207/20112 Image segmentation details; G06T2207/20132 Image cropping

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Geometry (AREA)
  • Health & Medical Sciences (AREA)
  • Quality & Reliability (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a correction method and a terminal for prediction based on image measurement. A target point of a predicted object in an image is acquired, and the image is intercepted based on the target point to obtain a target image. Edge detection is performed on the target image to obtain target candidate edges, and straight-line detection is performed on the candidate edges to obtain target boundary line segments. The point on the target boundary line segments closest to the target point is taken as a correction point, and the predicted object is measured according to the correction point to obtain a measurement result. By combining edge detection and straight-line detection to find the correction point, image measurement with a biased target point is avoided. The target candidate edges are obtained from the target image using a preset Gaussian filter combined with first-order partial derivative finite differences, non-maximum suppression and a dual-threshold algorithm, so the selected target point is corrected automatically, the precision of the measurement result is improved, and the size of the object in the image is predicted accurately.

Description

Image measurement-based prediction correction method and terminal
Technical Field
The present invention relates to the field of image measurement technologies, and in particular, to a method and a terminal for correcting prediction based on image measurement.
Background
Nowadays, image measurement technology is widely used in various fields. Image measurement refers to methods that obtain the actual size of a predicted object from measurements in image pixels. The image measurement process is as follows: a measured target point is manually selected on the image, and a measurement result is obtained from the target point using a model; the whole measurement process amounts to using the image to predict the actual size of an object in the image. For fine objects in the image, point-selection deviation easily occurs during manual operation. In specific application scenarios, the accuracy of target point selection on the image affects the accuracy of measuring the size of the object in the image, and the pixel difference between the manually selected point and the actual target point affects the final measurement result and therefore the accuracy of the size prediction.
Disclosure of Invention
The technical problem to be solved by the invention is to provide a correction method and a terminal for prediction based on image measurement that can automatically correct the selected target point, improve the accuracy of the measurement result, and accurately predict the size of the object in the image.
In order to solve the technical problems, the invention adopts the following technical scheme:
a method of correcting predictions based on image measurements, comprising the steps of:
acquiring a target point of a predicted object in an image, and intercepting the image based on the target point to obtain a target image;
performing edge detection on the target image to obtain a target candidate edge, and performing straight line detection on the target candidate edge to obtain a target boundary line segment;
determining a point closest to the target point from the target boundary line segment, and taking the point closest to the target point as a correction point;
measuring the predicted object according to the correction point to obtain a measurement result;
the step of performing edge detection on the target image to obtain a target candidate edge comprises the following steps:
smoothing and filtering the target image by using a preset Gaussian filter to obtain a filtered target image;
calculating the initial gradient amplitude of each pixel point by using a first-order partial derivative finite difference on the filtered target image;
performing non-maximum value inhibition processing on the initial gradient amplitude to obtain a final gradient amplitude of each pixel point;
determining an initial target candidate edge in the filtered target image by using a double-threshold algorithm based on the final gradient amplitude of each pixel point;
and performing edge clipping on the initial target candidate edge by using a pixel classification method to obtain a target candidate edge.
In order to solve the technical problems, the invention adopts another technical scheme that:
a predicted correction terminal based on image measurements, comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the following steps when executing the computer program:
acquiring a target point of a predicted object in an image, and intercepting the image based on the target point to obtain a target image;
performing edge detection on the target image to obtain a target candidate edge, and performing straight line detection on the target candidate edge to obtain a target boundary line segment;
determining a point closest to the target point from the target boundary line segment, and taking the point closest to the target point as a correction point;
measuring the predicted object according to the correction point to obtain a measurement result;
the step of performing edge detection on the target image to obtain a target candidate edge comprises the following steps:
smoothing and filtering the target image by using a preset Gaussian filter to obtain a filtered target image;
calculating the initial gradient amplitude of each pixel point by using a first-order partial derivative finite difference on the filtered target image;
performing non-maximum value inhibition processing on the initial gradient amplitude to obtain a final gradient amplitude of each pixel point;
determining an initial target candidate edge in the filtered target image by using a double-threshold algorithm based on the final gradient amplitude of each pixel point;
and performing edge clipping on the initial target candidate edge by using a pixel classification method to obtain a target candidate edge.
The invention has the beneficial effects that: a target point of a predicted object in an image is acquired, the image is intercepted based on the target point to obtain a target image, edge detection is performed on the target image to obtain target candidate edges, straight-line detection is performed on the candidate edges to obtain target boundary line segments, the point on the boundary line segments closest to the target point is taken as a correction point, and the predicted object is measured according to the correction point to obtain a measurement result. Finding the correction point by combining edge detection and straight-line detection automatically corrects the deviation caused by manual point selection on the image and avoids image measurement with a biased target point; the selected target point is thus corrected automatically and the accuracy of the measurement result is improved, that is, the accuracy of predicting the size of an object in an image based on image measurement technology is improved. During edge detection, the target candidate edges are obtained from the target image using a preset Gaussian filter combined with first-order partial derivative finite differences, non-maximum suppression and a dual-threshold algorithm, which makes the edge detection more accurate and rapid.
Drawings
FIG. 1 is a flow chart showing the steps of a method for correcting predictions based on image measurements according to an embodiment of the present invention;
fig. 2 is a schematic structural diagram of a prediction correction terminal based on image measurement according to an embodiment of the present invention.
Detailed Description
In order to describe the technical contents, the achieved objects and effects of the present invention in detail, the following description will be made with reference to the embodiments in conjunction with the accompanying drawings.
Referring to fig. 1, a method for correcting prediction based on image measurement includes the steps of:
acquiring a target point of a predicted object in an image, and intercepting the image based on the target point to obtain a target image;
performing edge detection on the target image to obtain a target candidate edge, and performing straight line detection on the target candidate edge to obtain a target boundary line segment;
determining a point closest to the target point from the target boundary line segment, and taking the point closest to the target point as a correction point;
measuring the predicted object according to the correction point to obtain a measurement result;
the step of performing edge detection on the target image to obtain a target candidate edge comprises the following steps:
smoothing and filtering the target image by using a preset Gaussian filter to obtain a filtered target image;
calculating the initial gradient amplitude of each pixel point by using a first-order partial derivative finite difference on the filtered target image;
performing non-maximum value inhibition processing on the initial gradient amplitude to obtain a final gradient amplitude of each pixel point;
determining an initial target candidate edge in the filtered target image by using a double-threshold algorithm based on the final gradient amplitude of each pixel point;
and performing edge clipping on the initial target candidate edge by using a pixel classification method to obtain a target candidate edge.
From the above description, the beneficial effects of the invention are as follows: a target point of a predicted object in an image is acquired, the image is intercepted based on the target point to obtain a target image, edge detection is performed on the target image to obtain target candidate edges, straight-line detection is performed on the candidate edges to obtain target boundary line segments, the point on the boundary line segments closest to the target point is taken as a correction point, and the predicted object is measured according to the correction point to obtain a measurement result. Finding the correction point by combining edge detection and straight-line detection automatically corrects the deviation caused by manual point selection on the image and avoids image measurement with a biased target point; the selected target point is thus corrected automatically and the accuracy of the measurement result is improved, that is, the accuracy of predicting the size of an object in an image based on image measurement technology is improved. During edge detection, the target candidate edges are obtained from the target image using a preset Gaussian filter combined with first-order partial derivative finite differences, non-maximum suppression and a dual-threshold algorithm, which makes the edge detection more accurate and rapid.
Further, the capturing the image based on the target point, and obtaining a target image includes:
and taking the target point as a center, and intercepting the image according to the preset pixel size to obtain a target image.
As can be seen from the above description, a manually selected target point usually does not deviate too far from the true point, so when the image is intercepted according to the preset pixel size, the resulting target image contains the correction point; this facilitates the subsequent determination of the correction point's position and improves data processing efficiency.
Further, the determining an initial target candidate edge in the filtered target image using a dual threshold algorithm based on the final gradient magnitude of each pixel point includes:
determining pixel points with final gradient amplitude values larger than a first preset threshold value, and generating a first threshold value edge image by taking the pixel points with the final gradient amplitude values larger than the first preset threshold value as edges;
determining pixel points with final gradient amplitude values larger than a second preset threshold value, and generating a second threshold value edge image by taking the pixel points with the final gradient amplitude values larger than the second preset threshold value as edges, wherein the second preset threshold value is larger than the first preset threshold value;
and connecting edges in the second threshold edge image as contours, judging whether the edges reach the end points of the contours, if so, determining the edges which can be connected to the contours from the first threshold edge image, and connecting the edges which can be connected to the contours with the contours to obtain initial target candidate edges.
From the above description, it is clear that the second threshold edge image is obtained using a high threshold and therefore contains few false edges, but has discontinuities; edges connectable to the contour are therefore determined from the first threshold edge image in order to obtain the initial target candidate edges.
Further, the performing edge clipping on the initial target candidate edge by using a pixel classification method to obtain a target candidate edge includes:
acquiring an object and background classifier corresponding to the image;
judging whether the pixel points on the left and right sides of each edge in the initial target candidate edges belong to an object or a background by using the object and background classifier, if so, reserving the edges, and if not, discarding the edges;
and obtaining target candidate edges according to the reserved edges.
From the above description, it can be seen that the edge clipping is performed on the initial target candidate edge by using the pixel classification method, so that a more accurate target candidate edge can be obtained, and the subsequent processing is facilitated.
Further, the performing the line detection on the target candidate edge to obtain a target boundary line segment includes:
initializing an accumulator array and an edge point hit array of a polar coordinate domain rho-theta space;
scanning all pixel points in the image line by line to generate a straight line of rho-theta space;
traversing edge points in the target candidate edge, if the edge points pass through the straight line of the rho-theta space, adding one to an accumulator unit in the accumulator array corresponding to the edge points in a polar coordinate domain, and adding one to an edge point hit unit in the edge point hit array corresponding to the edge points;
determining a current maximum peak value of a polar coordinate domain, and determining a maximum peak point according to the current maximum peak value;
resetting the maximum peak point and accumulator units in a range adjacent to the maximum peak point, and determining an edge point hit array of a straight line passing through the maximum peak point;
reserving edge points, which do not exceed a second preset value, in the edge point hit array of the straight line passing through the maximum peak point;
and judging whether a non-zero peak value point of the edge point exists in the accumulator array, if so, returning to execute the step of determining the current maximum peak value of the polar coordinate domain, otherwise, obtaining a target boundary line segment according to the reserved edge point.
It can be seen from the above description that the straight-line detection method above can accurately detect the target boundary line segments from the target candidate edges and obtain all the found straight lines; since the correction point lies on a target boundary line segment, it can then be found directly from them.
Further, the determining the point closest to the target point from the target boundary line segments, and taking the point closest to the target point as a correction point includes:
calculating the distance between each target boundary line segment and the target point, and determining the line segment closest to the target point according to the distance;
marking the line segment closest to the target point as a preset color;
and acquiring the color of the target point, judging whether the color of the target point is the preset color, if so, taking the target point as a correction point, and if not, determining the pixel point which is closest to the target point and has the color of the preset color from the target image as the correction point.
As can be seen from the above description, the target point may be an accurate point to be measured without correction, so that it is determined whether the color of the target point is a preset color, if not, the pixel point closest to the target point and having the color of the preset color is determined as the correction point from the target image, thereby improving the determination efficiency of the correction point.
Further, the measuring the prediction object according to the correction point, and obtaining a measurement result includes:
obtaining the three-dimensional coordinates of the correction points by using a preset mapping model;
and measuring the predicted object according to the three-dimensional coordinates of the correction point to obtain a measurement result.
Further, the preset mapping model is:
Z_c · [u, v, 1]^T = K · [R | t] · [X_w, Y_w, Z_w, 1]^T, with K = [[f/d_x, 0, u_0], [0, f/d_y, v_0], [0, 0, 1]];
wherein Z_c represents a scaling factor, (u, v) represents the coordinate values of the correction point, K represents the camera intrinsic parameters, R represents the rotation, t represents the translation, (X_w, Y_w, Z_w) represents the three-dimensional coordinates of the correction point, f represents the focal length of the camera, d_x represents the physical size of each pixel of the charge-coupled device (CCD) in the horizontal direction, d_y represents the physical size of each pixel of the CCD in the vertical direction, (u_0, v_0) represents the projected position of the optical center of the camera on the CCD imaging plane, and 0^T represents a zero row vector.
From the above description, the three-dimensional coordinates of the correction points are obtained through the preset mapping model, so that more accurate and reliable measurement is realized.
Referring to fig. 2, another embodiment of the present invention provides a correction terminal for image measurement-based prediction, which includes a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor implements each step in the above-mentioned image measurement-based prediction correction method when executing the computer program.
The correction method and terminal for prediction based on image measurement can be applied to scenes that require predicting the actual size of an object in an image, such as engineering quality inspection, and are described in the following specific embodiments:
referring to fig. 1, a first embodiment of the present invention is as follows:
a method of correcting predictions based on image measurements, comprising the steps of:
s1, acquiring a target point of a predicted object in an image, and intercepting the image based on the target point to obtain a target image, wherein the method specifically comprises the following steps of:
s1.1, acquiring a target point of a predicted object in the image.
The target point comprises a target starting point and a target ending point;
specifically, a target starting point and a target end point of the predicted object are selected on the image by using a mouse click mode, and then corresponding operations are respectively executed on the target starting point and the target end point.
For example, if a steel bar is present in the image and its length is to be predicted, the target starting point may be selected at the head of the steel bar in the image and the target end point at its tail.
S1.2, taking the target point as a center, and intercepting the image according to a preset pixel size to obtain a target image.
The preset pixel size may be set according to practical situations, and in an alternative embodiment, the preset pixel size is 100×100 pixels.
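As an illustrative sketch of this crop, the function below centers a window of the preset pixel size on the target point; the 100×100 default and the clamping of the window to the image border are assumptions not stated in the text:

```python
def crop_window(width, height, px, py, size=100):
    """Return the (x0, y0, x1, y1) box of a size x size window
    centered on the target point (px, py), clamped to the image."""
    half = size // 2
    x0 = max(0, min(px - half, width - size))
    y0 = max(0, min(py - half, height - size))
    return x0, y0, x0 + size, y0 + size

# a target point near the image border still yields a full-size window
box = crop_window(640, 480, 5, 5)
```

Clamping keeps the target image at the preset size even for border clicks, so the later edge and line detection always operate on a window of fixed dimensions.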
S2, performing edge detection on the target image to obtain a target candidate edge, and performing straight line detection on the target candidate edge to obtain a target boundary line segment, wherein the method specifically comprises the following steps:
s2.1, smoothing and filtering the target image by using a preset Gaussian filter to obtain a filtered target image;
in an alternative embodiment, the preset Gaussian filter is a Gaussian function G(x, y) with the normalization coefficient omitted:
G(x, y) = f(x, y) × exp(-(x² + y²) / (2σ²));
where (x, y) represents the target image pixel coordinates, f(x, y) represents the target image data, and σ represents the kernel parameter. Because the kernel is later normalized by its own sum, the result remains unchanged with the coefficient omitted, which reduces the amount of calculation and improves overall efficiency.
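A minimal sketch of such a kernel, assuming a 5×5 window: the exponential term is computed without the constant coefficient, and the kernel is then normalized by its own sum, so the omitted coefficient cancels:

```python
import math

def gaussian_kernel(size=5, sigma=1.0):
    half = size // 2
    # exp(-(x^2 + y^2) / (2 sigma^2)) with the constant coefficient omitted
    k = [[math.exp(-(x * x + y * y) / (2 * sigma * sigma))
          for x in range(-half, half + 1)]
         for y in range(-half, half + 1)]
    total = sum(sum(row) for row in k)  # normalization absorbs the coefficient
    return [[v / total for v in row] for row in k]

kernel = gaussian_kernel()
```

The kernel size and σ here are illustrative; smoothing the target image is then a standard convolution of this kernel with the image data.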
S2.2, calculating the initial gradient amplitude of each pixel point by using a first-order partial derivative finite difference on the filtered target image;
specifically, the filtered target image is used to calculate the initial gradient magnitude of each pixel point:
φ(x, y) = sqrt(φ₁²(x, y) + φ₂²(x, y));
and the gradient direction can also be calculated:
θ_φ = arctan(φ₂(x, y) / φ₁(x, y));
The first-order partial derivative finite differences in the x-direction and the y-direction are:
φ₁(x, y) = (G[x, y+1] - G[x, y] + G[x+1, y+1] - G[x+1, y]) / 2;
φ₂(x, y) = (G[x, y] - G[x+1, y] + G[x, y+1] - G[x+1, y+1]) / 2;
where φ₁(x, y) represents the x-direction difference, φ₂(x, y) represents the y-direction difference, and G[x, y], G[x, y+1], G[x+1, y+1] and G[x+1, y] represent the Gaussian-filtered pixel values at the (x, y), (x, y+1), (x+1, y+1) and (x+1, y) positions respectively.
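The two finite differences and the magnitude can be sketched directly; here G is a small Gaussian-filtered image given as a nested list, indexed G[x][y] as in the text:

```python
import math

def gradients(G, x, y):
    # x-direction and y-direction first-order finite differences (step S2.2)
    phi1 = (G[x][y + 1] - G[x][y] + G[x + 1][y + 1] - G[x + 1][y]) / 2
    phi2 = (G[x][y] - G[x + 1][y] + G[x][y + 1] - G[x + 1][y + 1]) / 2
    # initial gradient magnitude sqrt(phi1^2 + phi2^2)
    mag = math.hypot(phi1, phi2)
    return phi1, phi2, mag
```

For a vertical intensity step such as G = [[0, 1], [0, 1]], the x-direction difference is 1 and the y-direction difference is 0, as expected for an edge along y.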
Since the global gradient alone is not sufficient to determine edges, the points of maximum local gradient must be preserved while non-maxima are suppressed, i.e. values that are not local maxima are zeroed to obtain refined edges; therefore S2.3 is performed:
s2.3, performing non-maximum value inhibition processing on the initial gradient amplitude to obtain a final gradient amplitude of each pixel point.
Specifically, non-maximum suppression is applied to the initial gradient amplitudes: each pixel point is compared with its two adjacent pixel points along the gradient direction, and if the initial gradient amplitude of the pixel point is smaller than that of an adjacent pixel point, its amplitude is set to 0, thereby obtaining the final gradient amplitude of each pixel point.
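A sketch of the suppression rule, reduced to a single row of magnitudes along the gradient direction for clarity (the full 2-D version compares each pixel against the two neighbours along its own gradient direction; zeroing the border pixels is an assumption here):

```python
def nms_row(mags):
    """Keep a magnitude only where it is a local maximum; zero it otherwise."""
    out = [0.0] * len(mags)
    for i in range(1, len(mags) - 1):
        if mags[i] >= mags[i - 1] and mags[i] >= mags[i + 1]:
            out[i] = mags[i]
    return out

refined = nms_row([1, 3, 2, 5, 4])
```

Only the local peaks (3 and 5) survive, which is what thins broad gradient ridges into one-pixel-wide candidate edges.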
S2.4, determining an initial target candidate edge in the filtered target image by using a double-threshold algorithm based on the final gradient amplitude of each pixel, wherein the method specifically comprises the following steps:
s2.4.1, determining pixel points with final gradient amplitude larger than a first preset threshold value, and generating a first threshold value edge image by taking the pixel points with the final gradient amplitude larger than the first preset threshold value as edges.
S2.4.2 determining pixel points with final gradient amplitude larger than a second preset threshold, and generating a second threshold edge image by taking the pixel points with final gradient amplitude larger than the second preset threshold as edges, wherein the second preset threshold is larger than the first preset threshold.
S2.4.3, connecting edges in the second threshold edge image to be a contour, judging whether the edges reach the end point of the contour, if so, determining the edges which can be connected to the contour from the first threshold edge image, and connecting the edges which can be connected to the contour with the contour to obtain initial target candidate edges.
Specifically, the edges in the second threshold edge image N₂[i, j] are connected into contours, and it is judged whether the end point of a contour has been reached, i.e. whether no edge can be connected in the 8-neighborhood of the end point; if so, an edge connectable to the contour is determined from the 8-neighborhood of the same position in the first threshold edge image N₁[i, j] and connected to the contour. Edges are continually collected from N₁[i, j] in this way until N₂[i, j] is complete, giving the initial target candidate edges. The function of T₂ (the second preset threshold) is to find each line segment, and the function of T₁ (the first preset threshold) is to extend in both directions of the line segments, find breaks in the edges, and connect the edges.
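The dual-threshold linking can be sketched as follows, reduced to 1-D adjacency for clarity: pixels at or above the high threshold T2 seed the result, and pixels at or above the low threshold T1 are appended only when connected to an already-kept pixel:

```python
def hysteresis(mags, t1, t2):
    """1-D sketch of dual-threshold edge linking (t2 > t1)."""
    keep = [m >= t2 for m in mags]      # strong edges from the high threshold
    changed = True
    while changed:
        changed = False
        for i, m in enumerate(mags):
            # weak edge: kept only if a kept neighbour connects to it
            if not keep[i] and m >= t1:
                if (i > 0 and keep[i - 1]) or (i + 1 < len(mags) and keep[i + 1]):
                    keep[i] = True
                    changed = True
    return keep
```

With magnitudes [5, 3, 1, 3, 5], t1 = 2 and t2 = 4, the two weak values of 3 are kept because they touch strong edges, while the isolated 1 is discarded.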
S2.5, performing edge clipping on the initial target candidate edge by using a pixel classification method to obtain a target candidate edge, wherein the method specifically comprises the following steps:
s2.5.1, obtaining an object corresponding to the image and a background classifier;
specifically, images of the application scene corresponding to the image are collected, object and background pixel points in them are sampled to obtain an RGB pixel library, the RGB values in the pixel library are mapped to points in a three-dimensional coordinate space, and an SVM (Support Vector Machine) algorithm is used to obtain the object and background classifier.
S2.5.2, using the object and background classifier to determine whether the pixel points on the left and right sides of each edge in the initial target candidate edges belong to the object or the background; if so, the edge is reserved, and if not, the edge is discarded.
S2.5.3, obtaining target candidate edges according to the reserved edges.
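A sketch of the clipping rule in S2.5.2, under the assumption that an edge is kept when its two sides classify differently (object on one side, background on the other); the (left pixel, right pixel, id) edge representation and the brightness-threshold stand-in for the SVM classifier are illustrative only:

```python
def clip_edges(edges, classify):
    """Keep an edge only when it actually separates object from background."""
    kept = []
    for left_pixel, right_pixel, edge_id in edges:
        labels = {classify(left_pixel), classify(right_pixel)}
        if labels == {"object", "background"}:
            kept.append(edge_id)
    return kept

# stand-in for the trained SVM classifier: bright pixels count as object
classify = lambda value: "object" if value > 128 else "background"
kept = clip_edges([(200, 40, "a"), (200, 210, "b"), (30, 220, "c")], classify)
```

Edge "b" sits inside the object (both sides bright) and is discarded; "a" and "c" lie on true object/background boundaries and are kept.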
S2.6, initializing an accumulator array and an edge point hit array of a polar coordinate domain rho-theta space.
S2.7, scanning all pixel points in the image line by line to generate a straight line of the rho-theta space.
S2.8, traversing edge points in the target candidate edge, if the edge points pass through the straight line of the rho-theta space, adding one to an accumulator unit in the accumulator array corresponding to the edge points in a polar coordinate domain, and adding one to an edge point hit unit in the edge point hit array corresponding to the edge points.
S2.9, determining the current maximum peak value of the polar coordinate domain, and determining the maximum peak point according to the current maximum peak value.
S2.10, resetting the maximum peak point and accumulator units in the range adjacent to the maximum peak point, and determining an edge point hit array of a straight line passing through the maximum peak point.
S2.11, reserving the edge points of which the edge point hit units do not exceed a second preset value in the edge point hit array of the straight line passing through the maximum peak point.
S2.12, judging whether a non-zero peak value point of the edge point exists in the accumulator array, if yes, returning to execute S2.9, otherwise, obtaining a target boundary line segment according to the reserved edge point.
The algorithm adopts a voting principle for obtaining the most likely straight line: each pixel point in the original image can define candidate straight lines, each candidate line is judged by how many edge pixel points it passes through, and the straight line passing through the most edge pixel points is the most probable straight line.
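This voting principle can be sketched as a minimal rho-theta Hough accumulation; the 5-degree theta step and 1-pixel rho step are illustrative resolutions, not values from the text:

```python
import math

def hough_peak(points, theta_step_deg=5, rho_step=1.0):
    """Accumulate rho-theta votes for each edge point; return the peak cell."""
    acc = {}
    for x, y in points:
        for t in range(0, 180, theta_step_deg):
            th = math.radians(t)
            # normal-form line equation: rho = x cos(theta) + y sin(theta)
            rho = round((x * math.cos(th) + y * math.sin(th)) / rho_step)
            acc[(rho, t)] = acc.get((rho, t), 0) + 1
    return max(acc.items(), key=lambda kv: kv[1])

line, votes = hough_peak([(0, 2), (1, 2), (2, 2), (3, 2)])
```

Four collinear points at y = 2 all vote for the cell with rho = 2, so that cell collects the maximum number of votes and identifies the boundary line.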
S3, determining a point closest to the target point from the target boundary line segment, and taking the point closest to the target point as a correction point, wherein the method specifically comprises the following steps of:
s3.1, calculating the distance between each target boundary line segment and the target point, and determining the line segment closest to the target point according to the distance.
S3.2, marking the line segment closest to the target point as a preset color;
the preset color may be set according to the actual situation, and in an alternative embodiment, the preset color is red.
S3.3, acquiring the color of the target point, judging whether the color of the target point is the preset color, if so, taking the target point as a correction point, and if not, determining the pixel point which is closest to the target point in the target image and has the color of the preset color as the correction point.
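The geometric core of S3 can be sketched directly: find the point on the detected boundary segments closest to the manually selected target point. This sketch replaces the colour-marking trick of S3.2/S3.3 with an explicit clamped projection onto each segment; the segment representation as endpoint pairs is an assumption.

```python
import numpy as np

def correction_point(target, segments):
    """Sketch of S3: nearest point on any boundary segment to `target`.

    `segments` is a list of (p0, p1) endpoint pairs.
    """
    best, best_d = None, np.inf
    t_pt = np.asarray(target, dtype=float)
    for p0, p1 in segments:
        p0, p1 = np.asarray(p0, float), np.asarray(p1, float)
        d = p1 - p0
        # Project the target onto the segment, clamping to [0, 1] so the
        # result stays between the endpoints.
        t = np.clip(np.dot(t_pt - p0, d) / np.dot(d, d), 0.0, 1.0)
        q = p0 + t * d
        dist = np.linalg.norm(t_pt - q)
        if dist < best_d:
            best, best_d = q, dist
    return best, best_d
```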
S4, measuring the prediction object according to the correction point to obtain a measurement result, wherein the measurement result specifically comprises:
s4.1, obtaining three-dimensional coordinates of the correction points by using a preset mapping model;
wherein, the preset mapping model is the standard pinhole camera model:
Z_c [u, v, 1]^T = K [R, t; 0^T, 1] [X_w, Y_w, Z_w, 1]^T, with K = [f/d_x, 0, u_0; 0, f/d_y, v_0; 0, 0, 1],
wherein Z_c represents a scale factor, (u, v) represents the pixel coordinate values of the correction point, K represents the camera intrinsic parameter matrix, R represents the rotation matrix, t represents the translation vector, (X_w, Y_w, Z_w) represents the three-dimensional coordinates of the correction point, f represents the focal length of the camera, d_x represents the physical size of each pixel of the charge-coupled device in the horizontal direction, d_y represents the physical size of each pixel of the charge-coupled device in the vertical direction, (u_0, v_0) represents the projected position of the optical center of the camera on the CCD imaging plane, and 0^T represents the vector [0, 0, 0].
And S4.2, measuring the prediction object according to the three-dimensional coordinates of the correction point to obtain a measurement result.
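The mapping model of S4.1 can be sketched in the forward (world-to-pixel) direction below. Note this is only an illustration of the pinhole relation; recovering the three-dimensional coordinates from (u, v), as S4.1 requires, additionally needs a depth or plane constraint that the patent leaves to the calibration setup, and the function and parameter names here are assumptions.

```python
import numpy as np

def project(world_pt, f, dx, dy, u0, v0, R, t):
    """Pinhole model: Z_c [u, v, 1]^T = K [R | t] [X_w, Y_w, Z_w, 1]^T.

    Returns the pixel coordinates (u, v) and the scale factor Z_c.
    """
    K = np.array([[f / dx, 0.0, u0],
                  [0.0, f / dy, v0],
                  [0.0, 0.0, 1.0]])
    cam = R @ np.asarray(world_pt, float) + t   # world -> camera frame
    uvw = K @ cam                               # camera -> homogeneous pixel
    return uvw[:2] / uvw[2], uvw[2]             # (u, v) and Z_c
```

With an identity rotation and zero translation, a world point on the optical axis at depth 2 projects to the principal point (u_0, v_0) with Z_c = 2.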
Referring to fig. 2, a second embodiment of the present invention is as follows:
an image measurement-based prediction correction terminal, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the computer program, implements the steps of the image measurement-based prediction correction method of the first embodiment.
In summary, the image measurement-based prediction correction method and terminal provided by the present invention acquire the target point of the predicted object in the image, intercept the image based on the target point to obtain a target image, perform edge detection on the target image to obtain target candidate edges, and perform straight line detection on the target candidate edges to obtain target boundary line segments. The point closest to the target point is then determined from the target boundary line segments and used as the correction point, and the predicted object is measured according to the correction point to obtain a measurement result. By combining edge detection and straight line detection to find the correction point, the deviation caused by manually selecting points on the image is corrected automatically, image measurement with a biased target point is avoided, the accuracy of the measurement result is improved, the object size in the image can be predicted accurately, and the method can be better applied to project quality prediction scenarios. Meanwhile, because a manually selected target point does not deviate too far, intercepting the image according to a preset pixel size ensures that the obtained target image contains the correction point, which facilitates subsequent determination of the position of the correction point and improves data processing efficiency.
The foregoing description is only illustrative of the present invention and is not intended to limit the scope of the invention, and all equivalent changes made by the specification and drawings of the present invention, or direct or indirect application in the relevant art, are included in the scope of the present invention.

Claims (9)

1. A method for correcting predictions based on image measurements, comprising the steps of:
acquiring a target point of a predicted object in an image, and intercepting the image based on the target point to obtain a target image;
performing edge detection on the target image to obtain a target candidate edge, and performing straight line detection on the target candidate edge to obtain a target boundary line segment;
determining a point closest to the target point from the target boundary line segment, and taking the point closest to the target point as a correction point;
measuring the predicted object according to the correction point to obtain a measurement result;
the step of performing edge detection on the target image to obtain a target candidate edge comprises the following steps:
smoothing and filtering the target image by using a preset Gaussian filter to obtain a filtered target image;
calculating the initial gradient amplitude of each pixel point by using a first-order partial derivative finite difference on the filtered target image;
performing non-maximum value inhibition processing on the initial gradient amplitude to obtain a final gradient amplitude of each pixel point;
determining an initial target candidate edge in the filtered target image by using a double-threshold algorithm based on the final gradient amplitude of each pixel point;
and performing edge clipping on the initial target candidate edge by using a pixel classification method to obtain a target candidate edge.
2. The method of claim 1, wherein the capturing the image based on the target point to obtain a target image comprises:
and taking the target point as a center, and intercepting the image according to the preset pixel size to obtain a target image.
3. The method of claim 1, wherein determining an initial target candidate edge in the filtered target image using a dual threshold algorithm based on the final gradient magnitude for each pixel comprises:
determining pixel points with final gradient amplitude values larger than a first preset threshold value, and generating a first threshold value edge image by taking the pixel points with the final gradient amplitude values larger than the first preset threshold value as edges;
determining pixel points with final gradient amplitude values larger than a second preset threshold value, and generating a second threshold value edge image by taking the pixel points with the final gradient amplitude values larger than the second preset threshold value as edges, wherein the second preset threshold value is larger than the first preset threshold value;
and connecting edges in the second threshold edge image as contours, judging whether the edges reach the end points of the contours, if so, determining the edges which can be connected to the contours from the first threshold edge image, and connecting the edges which can be connected to the contours with the contours to obtain initial target candidate edges.
4. The method of claim 1, wherein edge cropping the initial target candidate edge using pixel classification to obtain a target candidate edge comprises:
acquiring an object and background classifier corresponding to the image;
judging whether the pixel points on the left and right sides of each edge in the initial target candidate edges belong to an object or a background by using the object and background classifier, if so, reserving the edges, and if not, discarding the edges;
and obtaining target candidate edges according to the reserved edges.
5. The method of claim 1, wherein the performing straight line detection on the target candidate edge to obtain a target boundary line segment comprises:
initializing an accumulator array and an edge point hit array of a polar coordinate domain rho-theta space;
scanning all pixel points in the image line by line to generate a straight line of rho-theta space;
traversing edge points in the target candidate edge, if the edge points pass through the straight line of the rho-theta space, adding one to an accumulator unit in the accumulator array corresponding to the edge points in a polar coordinate domain, and adding one to an edge point hit unit in the edge point hit array corresponding to the edge points;
determining a current maximum peak value of a polar coordinate domain, and determining a maximum peak point according to the current maximum peak value;
resetting the maximum peak point and accumulator units in a range adjacent to the maximum peak point, and determining an edge point hit array of a straight line passing through the maximum peak point;
reserving edge points, which do not exceed a second preset value, in the edge point hit array of the straight line passing through the maximum peak point;
and judging whether a non-zero peak value point of the edge point exists in the accumulator array, if so, returning to execute the step of determining the current maximum peak value of the polar coordinate domain, otherwise, obtaining a target boundary line segment according to the reserved edge point.
6. The method according to claim 1, wherein determining a point closest to the target point from the target boundary line segments and taking the point closest to the target point as a correction point comprises:
calculating the distance between each target boundary line segment and the target point, and determining the line segment closest to the target point according to the distance;
marking the line segment closest to the target point as a preset color;
and acquiring the color of the target point, judging whether the color of the target point is the preset color, if so, taking the target point as a correction point, and if not, determining the pixel point which is closest to the target point and has the color of the preset color from the target image as the correction point.
7. The method according to claim 1, wherein measuring the prediction object according to the correction point, and obtaining a measurement result comprises:
obtaining the three-dimensional coordinates of the correction points by using a preset mapping model;
and measuring the predicted object according to the three-dimensional coordinates of the correction point to obtain a measurement result.
8. The method of claim 7, wherein the preset mapping model is:
Z_c [u, v, 1]^T = K [R, t; 0^T, 1] [X_w, Y_w, Z_w, 1]^T, with K = [f/d_x, 0, u_0; 0, f/d_y, v_0; 0, 0, 1],
wherein Z_c represents a scale factor, (u, v) represents the pixel coordinate values of the correction point, K represents the camera intrinsic parameter matrix, R represents the rotation matrix, t represents the translation vector, (X_w, Y_w, Z_w) represents the three-dimensional coordinates of the correction point, f represents the focal length of the camera, d_x represents the physical size of each pixel of the charge-coupled device in the horizontal direction, d_y represents the physical size of each pixel of the charge-coupled device in the vertical direction, (u_0, v_0) represents the projected position of the optical center of the camera on the CCD imaging plane, and 0^T represents the vector [0, 0, 0].
9. A correction terminal for image measurement based prediction comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the processor implements the steps of a method for image measurement based prediction correction according to any of claims 1 to 8 when the computer program is executed.
CN202310822728.1A 2023-07-06 2023-07-06 Image measurement-based prediction correction method and terminal Active CN116542979B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310822728.1A CN116542979B (en) 2023-07-06 2023-07-06 Image measurement-based prediction correction method and terminal

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310822728.1A CN116542979B (en) 2023-07-06 2023-07-06 Image measurement-based prediction correction method and terminal

Publications (2)

Publication Number Publication Date
CN116542979A true CN116542979A (en) 2023-08-04
CN116542979B CN116542979B (en) 2023-10-03

Family

ID=87454649

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310822728.1A Active CN116542979B (en) 2023-07-06 2023-07-06 Image measurement-based prediction correction method and terminal

Country Status (1)

Country Link
CN (1) CN116542979B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117648905A (en) * 2024-01-30 2024-03-05 珠海芯烨电子科技有限公司 Method and related device for analyzing label instruction of thermal printer

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110243417A1 (en) * 2008-09-03 2011-10-06 Rutgers, The State University Of New Jersey System and method for accurate and rapid identification of diseased regions on biological images with applications to disease diagnosis and prognosis
CN103617613A (en) * 2013-11-20 2014-03-05 西北工业大学 Microsatellite non-cooperative target image processing method
CN107491730A (en) * 2017-07-14 2017-12-19 浙江大学 A kind of laboratory test report recognition methods based on image procossing
CN109614868A (en) * 2018-11-09 2019-04-12 公安部交通管理科学研究所 Automobile tire decorative pattern graph line identifying system
CN112114320A (en) * 2020-08-31 2020-12-22 金钱猫科技股份有限公司 Measuring method and device based on image algorithm

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110243417A1 (en) * 2008-09-03 2011-10-06 Rutgers, The State University Of New Jersey System and method for accurate and rapid identification of diseased regions on biological images with applications to disease diagnosis and prognosis
CN103617613A (en) * 2013-11-20 2014-03-05 西北工业大学 Microsatellite non-cooperative target image processing method
CN107491730A (en) * 2017-07-14 2017-12-19 浙江大学 A kind of laboratory test report recognition methods based on image procossing
CN109614868A (en) * 2018-11-09 2019-04-12 公安部交通管理科学研究所 Automobile tire decorative pattern graph line identifying system
CN112114320A (en) * 2020-08-31 2020-12-22 金钱猫科技股份有限公司 Measuring method and device based on image algorithm

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Sun Shaohong: "Research on Precise Dimension Measurement Based on Machine Vision" *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117648905A (en) * 2024-01-30 2024-03-05 珠海芯烨电子科技有限公司 Method and related device for analyzing label instruction of thermal printer
CN117648905B (en) * 2024-01-30 2024-04-16 珠海芯烨电子科技有限公司 Method and related device for analyzing label instruction of thermal printer

Also Published As

Publication number Publication date
CN116542979B (en) 2023-10-03

Similar Documents

Publication Publication Date Title
CN109801333B (en) Volume measurement method, device and system and computing equipment
JP6363863B2 (en) Information processing apparatus and information processing method
CN109448045B (en) SLAM-based planar polygon measurement method and machine-readable storage medium
CN108470356B (en) Target object rapid ranging method based on binocular vision
CN108513121B (en) Method and apparatus for depth map evaluation of a scene
WO2022105676A1 (en) Method and system for measuring wear of workpiece plane
JP2004234423A (en) Stereoscopic image processing method, stereoscopic image processor and stereoscopic image processing program
CN116542979B (en) Image measurement-based prediction correction method and terminal
CN111811784A (en) Laser spot center coordinate determination method, device and equipment
JP7188201B2 (en) Image processing device, image processing method, and image processing program
US11928805B2 (en) Information processing apparatus, information processing method, and storage medium for defect inspection and detection
CN108362205B (en) Space distance measuring method based on fringe projection
US20210132214A1 (en) Synthetic aperture radar image analysis system, synthetic aperture radar image analysis method, and synthetic aperture radar image analysis program
JP6061770B2 (en) Camera posture estimation apparatus and program thereof
CN116433737A (en) Method and device for registering laser radar point cloud and image and intelligent terminal
JP6116765B1 (en) Object detection apparatus and object detection method
KR20100034500A (en) Structure inspection system using image deblurring technique and method of thereof
JP2008269218A (en) Image processor, image processing method, and image processing program
US11475629B2 (en) Method for 3D reconstruction of an object
JP2001116513A (en) Distance image calculating device
CN116596987A (en) Workpiece three-dimensional size high-precision measurement method based on binocular vision
CN109600598B (en) Image processing method, image processing device and computer readable recording medium
KR20200082854A (en) A method of matching a stereo image and an apparatus therefor
CN112800890B (en) Road obstacle detection method based on surface normal vector
CN115690469A (en) Binocular image matching method, device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant