CN111161339A - Distance measuring method, device, equipment and computer readable medium - Google Patents


Info

Publication number
CN111161339A
Authority
CN
China
Prior art keywords
coordinates
standard
image
points
corner
Prior art date
Legal status
Granted
Application number
CN201911130044.5A
Other languages
Chinese (zh)
Other versions
CN111161339B (en)
Inventor
黄秀林
薛鸿臻
Current Assignee
Zhuhai necessary Industrial Technology Co.,Ltd.
Original Assignee
Zhuhai Suibian Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Zhuhai Suibian Technology Co ltd filed Critical Zhuhai Suibian Technology Co ltd
Priority to CN201911130044.5A priority Critical patent/CN111161339B/en
Publication of CN111161339A publication Critical patent/CN111161339A/en
Application granted granted Critical
Publication of CN111161339B publication Critical patent/CN111161339B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/60 - Analysis of geometric attributes
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/10 - Image acquisition modality
    • G06T2207/10004 - Still image; Photographic image
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/20 - Special algorithmic details
    • G06T2207/20112 - Image segmentation details
    • G06T2207/20164 - Salient point detection; Corner detection

Abstract

The application relates to a distance measurement method, apparatus, device, and computer-readable medium. By determining the coordinate transformation between the corner coordinates of a reference object in an image and the standard coordinates of that reference object in a standard image, the method avoids the matching deviation between the reference object and a calibration template that arises when the reference object is imaged at different angles in different images. Because the actual size of the reference object in the standard image is known, the coordinates of the measurement points in the target image are converted into standard coordinates using this transformation, so that measurement points seen from different viewpoints are mapped into the standard view. The actual distance between standard coordinates is then calculated from the reference object of known size in the standard image, which improves the accuracy of measuring the distance between two points with a reference object.

Description

Distance measuring method, device, equipment and computer readable medium
Technical Field
The present application relates to the field of digital image processing technologies, and in particular, to a method, an apparatus, a device, and a computer readable medium for measuring a distance.
Background
A digital image is a digital representation of an object, and digital image processing refers to manipulating image information with a computer or other digital device. As the technology has developed, digital image processing has been applied to many aspects of everyday life.
For example, traditional measurement requires tools such as a graduated ruler. With digital image processing, the actual length of a line segment can be calculated merely by selecting a reference object of known size in an image that contains the segment. The specific process is as follows: first, match the reference object in the image against a reference template of known size; then calculate the scale ratio between the reference object in the image and the template; next, calculate the ratio between the reference object in the image and the line segment to be measured; finally, calculate the length of the line segment from these ratios and the size of the template.
However, due to factors such as the shooting angle, the template-matching step between the reference object in the image and the reference template of known size is prone to deviation, which in turn distorts the measurement of the line segment and lowers the accuracy of the result.
Disclosure of Invention
In order to solve the above technical problem, the present application provides a method, an apparatus, a device and a computer readable medium for measuring a distance.
In a first aspect, the present application provides a distance measurement method, the method comprising:
acquiring a target image, wherein the target image comprises: a reference object and at least two measurement points;
determining the coordinates of the corner points of the reference object in the target image;
determining a coordinate transformation relation according to the corner point coordinates and standard corner point coordinates of the reference object in a preset standard image;
converting the coordinates of the measuring points in the target image into standard coordinates in the preset standard image according to the coordinate conversion relation;
and calculating the distance between any two measuring points based on the standard coordinates.
Optionally, the step of determining the coordinates of the corner points of the reference object in the target image includes:
inputting the target image into a pre-established target detection model, and detecting the contour of the reference object by using the target detection model to obtain a rough contour;
determining corner coordinates of the corner points of the rough contour in the target image.
Optionally, the step of determining corner coordinates of the corners of the rough contour in the target image includes:
detecting a reference corner point in the rough contour;
respectively constructing line frames containing rough contour lines based on every two reference corner points located on the same rough contour line;
determining an actual contour line in the rough contour lines contained in each line frame;
and determining the coordinates of the corner points according to a plurality of actual contour lines.
Optionally, the step of determining an actual contour line from the rough contour lines included in each line frame includes:
respectively carrying out filtering processing on the rough contour lines contained in each line frame to obtain an intermediate image;
calculating a transverse gradient matrix and a longitudinal gradient matrix of the intermediate image;
establishing a plane rectangular coordinate system by taking the center of the intermediate image as an origin;
if the included angle between the rough contour line and the transverse axis is smaller than or equal to a preset angle, determining the actual contour line according to the transverse gradient matrix;
or if the included angle between the rough contour line and the longitudinal axis is smaller than or equal to a preset angle, determining the actual contour line according to the longitudinal gradient matrix;
or if the included angle between the rough contour line and either coordinate axis is larger than the preset angle, combining corresponding elements of the transverse and longitudinal gradient matrices using the Pythagorean theorem to obtain composite matrix elements;
determining a horizontal and vertical composite gradient matrix according to a plurality of composite matrix elements;
and determining the actual contour line according to the horizontal and vertical composite gradient matrix.
Optionally, the step of determining the coordinates of the corner points according to a plurality of actual contour lines includes:
fitting each actual contour line to obtain a middle contour line;
and determining the coordinates of the intersection points between the intermediate contour lines on the target image as corner point coordinates.
Optionally, the step of determining a coordinate transformation relationship according to the corner coordinates and the standard corner coordinates of the reference object in a preset standard image includes:
determining the corresponding relation between each corner point coordinate and the standard coordinate;
calculating a homography matrix according to the corresponding relations, the corner coordinates and the standard coordinates;
and determining the homography matrix as a coordinate transformation relation.
Optionally, the step of calculating the distance between any two measurement points based on the standard coordinates includes:
calculating the product of the coordinate of each measuring point in the target image and the homography matrix to obtain the standard coordinates of the measuring points;
and calculating the distance between any two standard coordinates to obtain the distance between two measuring points.
Optionally, each standard coordinate corresponds to one pixel point in the standard image; the step of calculating the distance between any two standard coordinates to obtain the distance between the two measurement points comprises:
acquiring the physical size of the standard image and the physical size of a pixel point in the standard image;
calculating the resolution of the standard image according to the physical size of the standard image and the physical size of the pixel point;
calculating the number of pixel points between the two standard coordinates;
and calculating the product of the number of the pixel points and the resolution ratio to obtain the distance between the two measuring points.
In a second aspect, the present application provides a distance measurement apparatus comprising:
an obtaining module, configured to obtain a target image, where the target image includes: a reference object and at least two measurement points;
a first determining module, configured to determine corner coordinates of the reference object in the target image;
the second determining module is used for determining a coordinate conversion relation according to the corner point coordinates and standard corner point coordinates of the reference object in a preset standard image;
the conversion module is used for converting the coordinates of the measuring points in the target image into standard coordinates in the preset standard image according to the coordinate conversion relation;
and the calculation module is used for calculating the distance between any two measuring points based on the standard coordinates.
In a third aspect, the present application provides a distance measurement device comprising a memory, a processor, and a computer program stored in the memory and runnable on the processor, wherein the processor implements the steps of the method of any one of the first aspect when executing the computer program.
In a fourth aspect, the present application provides a computer readable medium having non-volatile program code executable by a processor, the program code causing the processor to perform the method of any of the first aspects.
Compared with the prior art, the technical solution provided by the embodiments of the present application has the following advantages. The application provides a distance measurement method comprising: acquiring a target image, wherein the target image comprises a reference object and at least two measurement points; determining the corner coordinates of the reference object in the target image; determining a coordinate transformation relation from the corner coordinates and the standard corner coordinates of the reference object in a preset standard image; converting the coordinates of the measurement points in the target image into standard coordinates in the preset standard image according to the transformation; and calculating the distance between any two measurement points based on the standard coordinates, thereby measuring the distance between any two measurement points accurately.
By determining the coordinate transformation between the corner coordinates of the reference object in the image and its standard coordinates in the standard image, the method avoids the matching deviation between the reference object and a calibration template caused by different imaging angles in different images. Because the actual size of the reference object in the standard image is known, the measurement-point coordinates in the target image are converted into standard coordinates using this transformation; measurement points seen from different viewpoints are mapped into the standard view, and the actual distance between standard coordinates is calculated from the reference object of known size, improving the accuracy of two-point distance measurement with a reference object.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the invention and together with the description, serve to explain the principles of the invention.
To illustrate the embodiments of the present invention or the prior-art solutions more clearly, the drawings needed for their description are briefly introduced below; those skilled in the art can derive other drawings from these without inventive effort.
Fig. 1 is a flowchart of a method for measuring a distance according to an embodiment of the present disclosure;
fig. 2 is a flowchart of a method of step S200 according to an embodiment of the present disclosure;
fig. 3 is a schematic diagram of an extraction result of a target detection model provided in the embodiment of the present application;
fig. 4 is a flowchart of a method of step S220 according to an embodiment of the present application;
fig. 5 is a schematic diagram of a rough contour line divided by a line frame according to an embodiment of the present disclosure;
fig. 6 is a flowchart of a method of step S223 provided in an embodiment of the present application;
FIG. 7 is a diagram illustrating a relationship between a gradient matrix and an image according to an embodiment of the present disclosure;
FIG. 8 is a schematic diagram illustrating an applicable direction of a gradient matrix according to an embodiment of the present disclosure;
fig. 9 is a flowchart of a method of step S240 according to an embodiment of the present application;
FIG. 10 is a schematic diagram of an extended actual contour line determination intersection point provided in an embodiment of the present application;
fig. 11 is a flowchart of a method of step S300 according to an embodiment of the present application;
fig. 12 is a flowchart of a method of step S400 according to an embodiment of the present application;
fig. 13 is a flowchart of a method of step S420 according to an embodiment of the present application;
fig. 14 is a schematic diagram of a module of a distance measuring device according to an embodiment of the present disclosure.
Detailed Description
To make the objects, technical solutions, and advantages of the embodiments of the present application clearer, the technical solutions are described completely below with reference to the drawings. The described embodiments are only some, not all, of the embodiments of the present application; all other embodiments derived by those skilled in the art without creative effort fall within the protection scope of the present application.
In the prior art, the distance between two measurement points is calculated as follows: obtain an image showing the measurement points and a reference object in the same frame, determine a template of known size corresponding to the reference object, compute a scale ratio by matching the reference object in the template against the reference object in the image, and then compute the distance between the two measurement points from that ratio. Researchers in the field have found, however, that this process demands harsh execution conditions. In practice, the imaging angle of the reference object in the acquired image easily deviates from its imaging angle in the template (for example, a rectangular card may appear tilted relative to the image edges, while the four sides of the rectangle in the template are all parallel to the image edges), which degrades the accuracy of the subsequent calculation. The present invention therefore provides a distance measurement method, applied to a computer, mobile phone, or other terminal with a processor. As shown in fig. 1, the method comprises:
step S100, acquiring a target image;
in the embodiment of the present invention, the target image comprises a reference object and at least two measurement points. To ensure the accuracy of the measured distance, the reference object and the measurement points are optionally located on the same plane. For example, an identity card is first selected or placed on a desktop as the reference object, and measurement points are then chosen freely on the desktop; the reference object and the measurement points can then be regarded as coplanar, which reduces errors in the final result caused by machine-vision error. The specific choice of reference object and measurement points depends on the actual situation; the above is only one optional implementation.
The way the measurement points are acquired is not limited. They may be points marked manually after the target image is acquired and then detected by a computer, or points marked manually in the scene before the image is captured, after which the target image is captured by an image acquisition device and obtained from it by the computer.
In the embodiment of the present invention, the reference object is an object with a fixed shape and known size, such as a bank card or an identity card. Optionally, to ensure the accuracy of the measurement result, an object on the same horizontal plane as the measurement points may be selected as the reference object, or the reference object may be placed on that plane, and a target image showing the reference object and the measurement points in the same frame is then acquired.
Step S200, determining the corner coordinates of the reference object in the target image;
in the embodiment of the present invention, a corner point may be an extreme point with a particularly prominent attribute, for example a point whose gray value is significantly higher than that of nearby pixels; it may also be a feature point satisfying a preset condition, for example an isolated point with no other points or lines in its neighborhood.
In the embodiment of the present invention, the target image contains the reference object, but its type is not fixed: the selected reference object is not necessarily a rectangle with four corner points, nor a triangle or another shape with definite inflection points. For example, the reference object may be an identity card whose four corners are circular arcs, and a unique coordinate cannot be determined on an arc. In that case, the four sides of the identity card can be fitted and the coordinates of the intersection points of their extension lines in the target image taken as the corner coordinates. Other algorithms (e.g., corner detection) can also accurately locate the position coordinates of the reference object in the target image, ensuring the accuracy of subsequent calculation.
Step S300, determining a coordinate transformation relation according to the corner point coordinates and standard corner point coordinates of the reference object in a preset standard image;
in the embodiment of the present invention, the preset standard image is an image with known size and resolution, where the size of the standard image is determined by the size of the reference object within it. For example, if an identity card fills the entire standard image, the standard image has the same real size as the identity card; if the identity card occupies only half of the standard image, the standard image is twice the size of the reference object.
A coordinate system is set on the preset standard image so that the standard coordinates of the reference object's corner points can be determined. A plane rectangular coordinate system can be established with the lower-left corner of the standard image as the origin: first calculate the numbers of horizontal and vertical pixels from the size and resolution of the standard image, then divide the coordinate axes accordingly. For example, if the standard image has 100 horizontal pixels and 50 vertical pixels, a coordinate system with the lower-left corner as origin has 100 gradations on the X axis and 50 on the Y axis, with each coordinate point representing one pixel; the coordinates (1, 1) represent the first pixel at the lower-left corner of the standard image.
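As a concrete illustration of the arithmetic above, the following sketch derives the physical size of one pixel of the standard image from its known size and pixel counts. The ID-card dimensions and pixel counts used here are assumptions for illustration, not values from the application.

```python
# Illustrative sketch: derive the per-pixel physical size of the standard image.
# The card dimensions and pixel counts below are assumed, not from the patent.
card_w_mm, card_h_mm = 85.6, 54.0   # physical size of the standard image
px_w, px_h = 856, 540               # horizontal and vertical pixel counts
mm_per_px_x = card_w_mm / px_w      # physical width of one pixel (0.1 mm)
mm_per_px_y = card_h_mm / px_h      # physical height of one pixel (0.1 mm)
```

With these assumed values, every pixel of the standard image corresponds to a 0.1 mm by 0.1 mm patch of the real card.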
In the embodiment of the invention, the coordinates of the reference object's corner points in the target image are the corner coordinates, and the coordinates of those corner points in the standard image are the standard coordinates. A transformation matrix can be calculated from the corresponding coordinate pairs, yielding the coordinate transformation relation between corresponding points of the target image and the standard image.
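For a planar reference object, the transformation matrix described above is a homography. The following is a minimal numpy sketch of estimating it from four corner correspondences with the direct linear transform; the function name and the example correspondences are illustrative assumptions (a library routine such as OpenCV's findHomography would serve the same role).

```python
import numpy as np

def find_homography(src, dst):
    """Estimate the 3x3 homography H (dst ~ H @ src in homogeneous
    coordinates) from at least four point correspondences via DLT."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # The homography is the null vector of A, taken from the SVD.
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

# Assumed correspondences: a unit square mapped to a 2x-scaled, shifted square.
corners = [(0, 0), (1, 0), (1, 1), (0, 1)]
standard = [(2, 3), (4, 3), (4, 5), (2, 5)]
H = find_homography(corners, standard)
```

With four correspondences in general position the null space of A is one-dimensional, so the recovered H is unique up to the scale removed by the final normalization.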
Step S400, converting the coordinates of the measuring points in the target image into standard coordinates in the preset standard image according to the coordinate conversion relation;
in the embodiment of the invention, the relation between the target image and the standard image is established by computing the coordinate transformation between the corner coordinates and the standard coordinates. Even if the imaging angle of the reference object in the target image differs from its angle in the standard image, converting coordinates in the target image into standard coordinates on the preset standard image according to this transformation achieves precise coordinate conversion, improving the accuracy of the distance measured between two measurement points.
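Applying the transformation amounts to multiplying each measurement-point coordinate, in homogeneous form, by the homography and dividing by the third component. A hedged numpy sketch follows; the matrix H here is an assumed example (uniform scale by 2 plus a translation), not a value from the application.

```python
import numpy as np

def to_standard(points, H):
    """Map target-image points into the standard image using homography H."""
    pts = np.hstack([np.asarray(points, dtype=float), np.ones((len(points), 1))])
    mapped = pts @ H.T                      # homogeneous coordinates
    return mapped[:, :2] / mapped[:, 2:3]   # perspective divide

# Assumed example homography: scale by 2, then translate by (2, 3).
H = np.array([[2.0, 0.0, 2.0],
              [0.0, 2.0, 3.0],
              [0.0, 0.0, 1.0]])
standard_pts = to_standard([(0.0, 0.0), (1.0, 2.0)], H)
```

The perspective divide is what makes this correct for genuinely projective H, not only for affine cases like the example.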
Step S500, calculating the distance between any two measuring points based on the standard coordinates;
in the embodiment of the invention, the size of the standard image is known, so the actual distance between two measurement points can be determined by relating the distance between their two standard coordinates to the image edges; this actual distance is the distance between the measurement points in the real world. Alternatively, the physical size of each pixel can be calculated from the known size and resolution of the standard image, and the number of pixels between the two points then gives their distance. The specific method can be chosen according to the actual situation.
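A minimal sketch of the second variant described above: the pixel distance between two standard coordinates times the known physical size of one pixel. The function name and the numeric values are illustrative assumptions.

```python
import math

def real_distance(p1, p2, size_per_px):
    """Distance between two standard coordinates, scaled by the known
    physical size of one standard-image pixel (units follow size_per_px)."""
    px = math.hypot(p1[0] - p2[0], p1[1] - p2[1])  # pixel distance
    return px * size_per_px

# Assumed values: points 50 px apart, pixels 0.1 mm wide -> 5.0 mm.
d = real_distance((0, 0), (30, 40), 0.1)
```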
By determining the coordinate transformation between the corner coordinates of the reference object in the image and its standard coordinates in the standard image, the method avoids the matching deviation between the reference object and a calibration template caused by different imaging angles in different images. Because the actual size of the reference object in the standard image is known, the measurement-point coordinates in the target image are converted into standard coordinates using this transformation; measurement points seen from different viewpoints are mapped into the standard view, and the actual distance between standard coordinates is calculated from the reference object of known size, improving the accuracy of two-point distance measurement with a reference object.
In another embodiment of the present invention, a complete implementation of measuring the distance between two points is further provided. Step S200, determining the corner coordinates of the reference object in the target image, as shown in fig. 2, includes:
step S210, inputting the target image into a pre-established target detection model, and detecting the contour of the reference object by using the target detection model to obtain a rough contour;
in the embodiment of the present invention, the target detection model is pre-established and used to extract the contour of the reference object; its choice depends on the type of reference object. For example, if the identity card in the target image is selected as the reference object, a target detection model for extracting identity-card edges is selected. A number of sample images containing identity cards are collected, the card edges are labeled manually, and the model is trained on these samples until its output accuracy reaches a preset standard value; the model can then extract the contour of the reference object from the target image. However, target extraction with a neural network model yields only a rough contour. As shown in fig. 3, the rough contour may contain multiple line segments, so determining the exact corner coordinates of the reference object requires further processing of the rough edges.
Step S220, determining corner coordinates of the corner points of the rough contour in the target image;
in the embodiment of the present invention, the corner points are determined on the rough contour line. The rough contour may be processed with a corner detection algorithm to obtain corner points directly, or with an edge detection algorithm to obtain several line segments, which are then fitted; the intersection points of the fitted segments are taken as the corner points of the rough contour, and finally the corner coordinates of these points in the target image are determined.
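For the fitting-and-intersection variant, a corner can be recovered as the intersection of two fitted edge lines even when the physical corner is rounded. The following hedged numpy sketch uses a total-least-squares line fit on synthetic edge samples; the function names and sample points are illustrative assumptions.

```python
import numpy as np

def fit_line(points):
    """Total-least-squares fit: returns (n, c) with n . p = c for points p."""
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    _, _, Vt = np.linalg.svd(pts - centroid)
    n = Vt[-1]                      # unit normal = direction of least variance
    return n, float(n @ centroid)

def line_intersection(l1, l2):
    """Intersection of two lines given in (normal, offset) form."""
    (n1, c1), (n2, c2) = l1, l2
    return np.linalg.solve(np.vstack([n1, n2]), np.array([c1, c2]))

# Synthetic edge samples from two sides of a card whose corner lies at (5, 0);
# note neither sample set needs to include the corner itself.
bottom = [(1.0, 0.0), (2.0, 0.0), (3.0, 0.0)]
right = [(5.0, 1.0), (5.0, 2.0), (5.0, 3.0)]
corner = line_intersection(fit_line(bottom), fit_line(right))
```

The total-least-squares fit handles vertical edges, where an ordinary y-on-x regression would fail.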
However, in practical applications, researchers in the field have found that when a bank card or identity card is used as the reference object, its corners are circular arcs with no true corner point. Directly applying a corner detection algorithm to the rough contour then either fails to locate the corner (an arc is a smooth curve with no extremum) or yields corner coordinates in the target image that do not match the actual situation. Based on this, step S220, determining the corner coordinates of the corners of the rough contour in the target image, as shown in fig. 4, further includes:
step S221, detecting a reference corner point in the rough contour;
in the embodiment of the present invention, the reference corner points in the rough contour may be detected with a corner detection algorithm, but the resulting corners may be inaccurate. For example, corner points detected on a quasi-rectangle with arc corners, such as an identity card or bank card, have low accuracy and cannot precisely represent the corner positions of the rough contour in the target image; they are therefore used only as reference corner points for further extraction and confirmation.
Step S222, respectively constructing line frames containing rough contour lines based on every two reference corner points positioned on the same rough contour line;
in the embodiment of the present invention, those skilled in the art have found that the rough contour of the reference object extracted by the neural network model may contain many fine lines or other noise points. To avoid interference from background lines during edge extraction, the reference corner points extracted from the rough contour are used to construct several line frames, each containing part of the rough contour line; this divides the image into regions each containing part of the rough contour and shields the edge extraction of the reference object from lines in the complex background.
The construction of the line frames can be determined according to the actual situation. Optionally, as shown in fig. 5, which illustrates dividing the rough contour with line frames, every two adjacent reference corner points are selected to construct a rectangular line frame, dividing the rough contour of the reference object into four regions; each region contains part of the rough contour line, and an actual contour line is determined within each line frame.
Step S223 of determining an actual contour line in the rough contour lines included in each line frame;
in the embodiment of the present invention, the rough contour lines may contain a plurality of noise points or a plurality of lines. For each line frame, an edge detection algorithm may be used to extract the actual contour line from the rough contour lines, which avoids the influence of the background of the target image on the edge detection algorithm. Other image processing algorithms capable of determining the actual contour line among the rough contour lines may also be used.
Alternatively, in step S223, determining an actual contour line from the rough contour lines included in each line frame, as shown in fig. 6, includes:
step S2231, respectively filtering the rough contour lines contained in each line frame to obtain an intermediate image;
in the embodiment of the present invention, each line frame contains a rough contour line, and Gaussian filtering is performed on the image containing the rough contour line in the line frame to obtain a filtered intermediate image, thereby filtering out noise points.
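The Gaussian filtering of step S2231 can be sketched with a separable kernel in plain NumPy. This is a hedged stand-in for whatever filter implementation is actually used; the kernel size and sigma are illustrative defaults, not values from the patent:

```python
import numpy as np

def gaussian_kernel(size=5, sigma=1.0):
    """1-D Gaussian kernel, normalised so its weights sum to 1."""
    x = np.arange(size) - size // 2
    k = np.exp(-x**2 / (2.0 * sigma**2))
    return k / k.sum()

def gaussian_blur(img, size=5, sigma=1.0):
    """Separable Gaussian filtering: convolve rows, then columns."""
    k = gaussian_kernel(size, sigma)
    rows = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, rows)

noisy = np.zeros((9, 9))
noisy[4, 4] = 1.0                 # a single noise spike
smooth = gaussian_blur(noisy)     # spike spread out, total mass preserved
```

Filtering row-then-column is equivalent to convolving with the full 2-D Gaussian kernel but cheaper, which is why separability is the usual choice.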
Step S2232, calculating a transverse gradient matrix and a longitudinal gradient matrix of the intermediate image;
in the embodiment of the present invention, from the viewpoint of how a machine digitizes an image, an image is a matrix of numbers. To filter out interference information, the image is usually reduced at a certain ratio, and the reduced image is represented by a number matrix in which each matrix element represents the gray value of one region.
As shown in fig. 7, suppose the original image is a 49 × 49 pixel matrix and is reduced at a ratio of 7:1 to obtain a 7 × 7 matrix, in which each matrix element maps the gray value of one region of the original image (each element maps to a 7 × 7 block of the original image); this matrix represents the gradient matrix of the image. In particular, six gray values in graph (a) are clearly distinguished from the gray values of the other elements, and they appear as a separate curve in graph (b).
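As a minimal sketch of the transverse and longitudinal gradient matrices of step S2232 (the patent does not fix the gradient operator; central differences are one common choice, and the function name is an assumption):

```python
import numpy as np

def gradient_matrices(img):
    """Transverse (d/dx) and longitudinal (d/dy) gradient matrices of
    a grayscale image, using central differences in the interior and
    zeros on the border."""
    img = np.asarray(img, dtype=float)
    gx = np.zeros_like(img)
    gy = np.zeros_like(img)
    gx[:, 1:-1] = (img[:, 2:] - img[:, :-2]) / 2.0
    gy[1:-1, :] = (img[2:, :] - img[:-2, :]) / 2.0
    return gx, gy

# A vertical step edge: gray value jumps from 0 to 100 at column 3.
img = np.zeros((5, 6))
img[:, 3:] = 100.0
gx, gy = gradient_matrices(img)   # gx peaks at the edge, gy stays zero
```

A vertical edge shows up only in the transverse gradient matrix, and a horizontal edge only in the longitudinal one, which is what the angle-based selection in the following steps relies on.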
Step S2233, establishing a plane rectangular coordinate system by taking the center of the intermediate image as an origin;
step S2234, if the included angle between the rough contour line and the transverse axis is smaller than or equal to a preset angle, determining the actual contour line according to the transverse gradient matrix;
or, in step S2235, if the included angle between the rough contour line and the longitudinal axis is smaller than or equal to a preset angle, determining the actual contour line according to the longitudinal gradient matrix;
or, in step S2236, if the included angle between the rough contour line and any coordinate axis is greater than the preset angle, performing an operation on the corresponding matrix elements in the transverse gradient matrix and the longitudinal gradient matrix by using the pythagorean theorem to obtain a composite matrix element.
Step S2237, determining a horizontal and vertical composite gradient matrix according to a plurality of composite matrix elements;
and S2238, determining the actual contour line according to the horizontal and vertical composite gradient matrix.
In the embodiment of the invention, the intermediate image contains the filtered rough contour line of each line frame, but a rough contour line may consist of a plurality of line segments, so the actual contour line within it can be calculated using the image gradient matrices.
Specifically, a plane rectangular coordinate system is established with the center of the intermediate image as the origin, the transverse gradient matrix and the longitudinal gradient matrix of the intermediate image are calculated, and whether the transverse or the longitudinal gradient matrix is used is determined according to the orientation of the rough contour line in the intermediate image. For example, if the direction of the rough contour line is approximately aligned with the transverse axis of the coordinate system, the corresponding actual contour line is determined using the transverse gradient matrix; similarly, if it is approximately aligned with the longitudinal axis, the actual contour line is determined using the longitudinal gradient matrix. The straight line segments of the actual contour line are thereby accurately extracted from the rough contour line of the graph.
Further, in practical applications, the preset angle may be set to 22.5°, so that each 45° sector is treated as one direction, and the correspondence between the rough contour line and the gradient matrix is determined as shown in fig. 8:
1) if the included angle between the rough contour line and the horizontal line of the image is between -22.5° and 22.5°, the straight line segment of the actual contour line is determined using the pre-calculated transverse gradient matrix;
2) if the included angle between the rough contour line and the horizontal line of the image is between 67.5° and 112.5°, i.e. the line is close to the vertical, the straight line segment of the actual contour line is determined using the pre-calculated longitudinal gradient matrix;
3) if the position angle of the rough contour line does not fall into either of the two ranges, a horizontal-and-vertical composite gradient matrix of the image can be calculated from the transverse gradient matrix and the longitudinal gradient matrix. By the Pythagorean theorem, the element h11 = a in the transverse gradient matrix and the element h11 = b at the corresponding position in the longitudinal gradient matrix yield the element c in the first row and first column of the composite gradient matrix, where

c = √(a² + b²)
The actual contour line is then determined using the horizontal-and-vertical composite gradient matrix, so that an accurate straight line segment of the actual contour is extracted from a cluttered rough contour of lines.
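Steps S2234 to S2238 can be sketched as a single selection function. The 22.5° threshold follows the text above, while the function name and the use of `np.hypot` for the per-element Pythagorean combination c = √(a² + b²) are illustrative choices:

```python
import numpy as np

def select_gradient(gx, gy, angle_deg, preset=22.5):
    """Choose the gradient matrix for edge extraction based on the
    rough contour line's angle to the coordinate axes.  Within
    `preset` degrees of an axis the single-axis matrix is used;
    otherwise the two matrices are combined element-wise by the
    Pythagorean theorem, c = sqrt(a**2 + b**2)."""
    a = angle_deg % 180.0
    if a <= preset or a >= 180.0 - preset:   # close to the transverse axis
        return gx
    if abs(a - 90.0) <= preset:              # close to the longitudinal axis
        return gy
    return np.hypot(gx, gy)                  # composite gradient matrix

gx = np.array([[3.0, 0.0], [0.0, 3.0]])
gy = np.array([[4.0, 0.0], [0.0, 4.0]])
composite = select_gradient(gx, gy, 45.0)    # neither axis: 3-4-5 elements
```

At 45° neither single-axis matrix dominates, so each composite element is √(3² + 4²) = 5, matching the worked h11 example above.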
Step S224, determining coordinates of the corner points according to a plurality of the actual contour lines.
In the embodiment of the invention, small regions are divided off by constructing line frames, and the image edges within each line frame are extracted, which avoids the influence of a complex background on the extraction effect. After a plurality of actual contour lines are obtained, the coordinates on the image of the intersection points between the actual contour lines can be determined as the corner point coordinates, accurately locating the corner points of the reference object.
However, if the reference object is a shape with arcs at its corners (an identity card, for example, has four arc corners), the arc of each corner is small, and after the actual contour line is extracted for each line frame the arc portion may be fitted into a straight line segment, so that no intersection point exists between the four sides. Based on this, in the embodiment of the present invention, step S240 of determining the corner point coordinates according to a plurality of actual contour lines further includes, as shown in fig. 9:
step S241, fitting each actual contour line to obtain a middle contour line;
in the embodiment of the present invention, the actual contour line may include a curve, and since an accurate corner point cannot be located on the curve, the embodiment of the present invention fits a curve portion in the actual contour line with a straight line portion in the actual contour line to obtain an intermediate contour line, which may be an infinite straight line or a straight line segment, and the specific setting manner may be determined according to the actual situation.
Step S242, determining coordinates of the intersection points between the intermediate contour lines on the target image as corner coordinates.
In the embodiment of the invention, as shown in fig. 10, the intermediate contour lines are obtained by fitting the actual contour lines, and the intersection points between the intermediate contour lines are then determined as the corner points. This solves the problem that corner points cannot be determined on a reference object with arc corners by directly applying a corner detection method, accurately locates the corner point coordinates of the actual contour in the target image, and further improves the measurement accuracy of the distance measuring algorithm.
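Steps S241 and S242 can be sketched as a line fit followed by a line intersection. The total-least-squares fit (SVD of the centred points) is one reasonable choice, not necessarily the one used in the patent:

```python
import numpy as np

def fit_line(points):
    """Total-least-squares line a*x + b*y + c = 0 through the points,
    using the smallest-singular-value direction of the centred data."""
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts - centroid)
    a, b = vt[-1]                            # unit normal of the line
    c = -(a * centroid[0] + b * centroid[1])
    return a, b, c

def intersect(l1, l2):
    """Intersection of two lines given in a*x + b*y + c = 0 form."""
    A = np.array([l1[:2], l2[:2]], dtype=float)
    rhs = -np.array([l1[2], l2[2]], dtype=float)
    return np.linalg.solve(A, rhs)

# Straight parts of two edges of a rounded-corner card: their fitted
# lines still meet at a "virtual" corner no contour pixel lies on.
top = fit_line([(10, 0), (50, 0), (90, 0)])
left = fit_line([(0, 10), (0, 50), (0, 90)])
corner = intersect(top, left)
```

This is exactly why the fit helps with arc corners: the intersection of the two fitted straight edges exists even though the contour itself curves away before reaching it.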
In another embodiment provided by the present invention, further, in step S300, the step of determining a coordinate transformation relationship according to the corner coordinates and the standard corner coordinates of the reference object in the preset standard image, as shown in fig. 11, includes:
step S310, determining the corresponding relation between each corner point coordinate and the standard coordinate;
in the embodiment of the present invention, each corner point coordinate of the reference object in the target image has a corresponding standard coordinate on the standard image. For example, if the corner point coordinate of the upper left corner of the front face of the identity card in the target image is a, and the standard coordinate of the upper left corner of the front face of the identity card in the standard image is A, then a and A correspond to each other.
Step S320, calculating a homography matrix according to the corresponding relations, the corner point coordinates and the standard coordinates;
and step S330, determining the homography matrix as a coordinate transformation relation.
In the embodiment of the present invention, a homography is a projection mapping from one plane to another: if a point A on a calibration plate, expressed in homogeneous coordinates, is mapped onto an imager to obtain a point B, the mapping relationship between A and B may be represented by a homography matrix. In the embodiment of the invention, because the position of the reference object in the preset standard image is usually set manually, the preset standard image can be understood as the calibration plate; because the position and angle of the reference object in the target image are uncertain, the target image can be understood as the imager. The process of determining the coordinate conversion relation between a point in the target image and the corresponding point in the standard image therefore includes establishing a homography matrix.
Specifically, the process of establishing the homography matrix includes: S1, determining the correspondence between each corner point coordinate and its standard coordinate (for example, a corner point coordinate of the upper left corner of the front face of the ID card in the target image corresponds to the standard coordinate of the upper left corner of the front face of the ID card in the standard image); S2, calculating the homography matrix using the plurality of corner point coordinates and the corresponding standard coordinates. Assume there is a point A on the target image and that A is mapped by the homography transformation to a point B on the standard image, denoted A · H = B, where the matrix H is the homography matrix expressed as follows:
H =
| h11  h12  h13 |
| h21  h22  h23 |
| h31  h32  h33 |
Since A and B are homogeneous coordinates of points in two-dimensional images, they carry only the two coordinates of the X axis and the Y axis, and the Z component can be set to 1. The homography matrix H therefore has only 8 degrees of freedom to be determined, and since each pair of corresponding points supplies two equations, at least four sets of corresponding coordinate points are required to calculate H.
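A hedged sketch of this computation: with h33 fixed to 1, each of the four corner-to-standard correspondences contributes two linear equations in the remaining 8 unknowns, giving an 8 × 8 system. The ID-1 card dimensions in the example (85.6 mm × 54.0 mm) and the sample corner coordinates are illustrative, not from the patent:

```python
import numpy as np

def homography_from_4pts(src, dst):
    """Solve for H with h33 fixed to 1: each correspondence
    (x, y) -> (u, v) yields two linear equations in the 8 unknowns,
    derived from u = (h11*x + h12*y + h13) / (h31*x + h32*y + 1)
    and the analogous expression for v."""
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h = np.linalg.solve(np.array(A, dtype=float), np.array(b, dtype=float))
    return np.append(h, 1.0).reshape(3, 3)

# Corner coordinates found in the target image vs. the standard
# coordinates of an ID-1 card (85.6 mm x 54.0 mm, illustrative).
src = [(0, 0), (100, 10), (95, 70), (5, 60)]
dst = [(0.0, 0.0), (85.6, 0.0), (85.6, 54.0), (0.0, 54.0)]
H = homography_from_4pts(src, dst)
```

With more than four correspondences the same equations are usually solved in a least-squares sense instead of with an exact 8 × 8 solve.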
In this embodiment of the present invention, further, in step S400, calculating a distance between any two measurement points based on the standard coordinates, as shown in fig. 12, includes:
step S410, calculating the product of the coordinate of each measuring point in the target image and the homography matrix to obtain the standard coordinates of the measuring points;
in the embodiment of the invention, after the homography matrix is obtained from the corner point coordinates and the corresponding standard coordinates, the coordinate A of any measurement point in the target image can be converted into the standard coordinate B on the standard image using the formula A · H = B. Measurement points seen under different views can thereby be converted onto the image of the standard view, avoiding the influence of the machine's viewpoint and improving the accuracy of measuring the distance between any two measurement points.
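Applying the conversion to a measurement point can be sketched as a homogeneous multiply followed by division by the last component. The text writes the product as A · H = B; the column-vector convention H @ [x, y, 1] used below is the same relation up to transpose, and the translation-only H is a stand-in for a computed homography:

```python
import numpy as np

def to_standard(point, H):
    """Map a target-image measurement point into standard coordinates:
    multiply the homogeneous coordinate by H, then divide out the
    last component (the projective scale w)."""
    x, y = point
    u, v, w = H @ np.array([x, y, 1.0])
    return u / w, v / w

# A stand-in H (pure translation) in place of a computed homography.
H = np.array([[1.0, 0.0, 5.0],
              [0.0, 1.0, -2.0],
              [0.0, 0.0, 1.0]])
u, v = to_standard((10.0, 3.0), H)
```

The division by w is what makes the mapping projective rather than merely affine; for a general H, w varies from point to point.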
Step S420, calculating a distance between any two of the standard coordinates to obtain a distance between two of the measurement points.
In the embodiment of the invention, the standard image includes a pre-established rectangular coordinate system and its size is known, so the actual distance between two coordinates can be calculated by taking the components of the connecting line segment on the horizontal and vertical axes, relating those components to the side lengths of the standard image, and combining them by the Pythagorean theorem to obtain the length of the segment.
Optionally, in step S420, calculating a distance between any two of the standard coordinates, as shown in fig. 13, includes:
step S421, acquiring the physical size of the standard image and the physical size of the pixel point in the standard image;
in the embodiment of the present invention, the physical size of the standard image and the physical size of its pixel points may be preset manually. As for how these sizes are acquired, they may be read from a database storing the physical parameters of annotated images, or from a cache holding manually entered physical parameters of the standard image; this is merely an example and is not a limitation.
step S422, calculating the resolution of the standard image according to the physical size of the standard image and the physical size of the pixel point;
in the embodiment of the present invention, the physical size of the standard image includes the width and the height of the standard image, and the size of a pixel point in the standard image includes the width and the height of the pixel point. Specifically, the width of the standard image may be divided by the width of a pixel point to obtain the horizontal resolution of the standard image, and the height of the standard image divided by the height of a pixel point to obtain the vertical resolution; the resolution of the standard image includes both the horizontal resolution and the vertical resolution.
Step S423, calculating the number of pixel points between the two standard coordinates;
in the embodiment of the invention, a plane rectangular coordinate system is established with the lower left corner of the standard image as the origin. The numbers of horizontal and vertical pixel points of the standard image are calculated from its size and resolution, and the division of the coordinate axes is then determined from those numbers. For example, if the standard image has 100 horizontal pixel points and 50 vertical pixel points, a coordinate system is established with the lower left corner of the standard image as the origin, with 100 scale marks on the X axis and 50 on the Y axis; each coordinate point represents one pixel point (for example, the coordinate (1, 1) represents the first pixel point at the lower left corner of the standard image). The difference between two standard coordinates therefore gives the number of pixel points between them.
For example, if the two standard coordinates are (1, 1) and (3, 3), the two standard coordinates are separated by two pixel points horizontally and two pixel points vertically.
Step S424, calculating the product of the number of the pixel points and the resolution to obtain the distance between the two measurement points.
In the embodiment of the invention, when the vertical-axis coordinates of two measurement points in the annotated image are the same, the number of horizontal pixel points between the two standard coordinates is multiplied by the horizontal resolution of the standard image to obtain the horizontal actual distance between the two coordinate points; when the horizontal-axis coordinates of the two measurement points are the same, the number of vertical pixel points between the two standard coordinates is multiplied by the vertical resolution of the standard image to obtain the vertical actual distance between the two coordinate points. When both the horizontal-axis and vertical-axis coordinates of the two measurement points differ, the horizontal and vertical actual distances between the two measurement point coordinates are calculated as above and then combined by the Pythagorean theorem to obtain the distance between the two measurement points, that is, their actual distance in the real world.
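Steps S421 to S424 reduce to a few lines once a per-pixel physical size is known. The parameter names `mm_per_px_x` and `mm_per_px_y` are assumptions: they denote the physical size of one pixel along each axis (physical side length of the standard image divided by its pixel count), which is what the multiplication in step S424 requires dimensionally:

```python
import math

def physical_distance(p1, p2, mm_per_px_x, mm_per_px_y):
    """Steps S421-S424 in miniature: count pixels between two standard
    coordinates along each axis, scale each count by the physical size
    of one pixel, and combine with the Pythagorean theorem.  When the
    points share an axis coordinate, one term is simply zero."""
    dx_mm = abs(p2[0] - p1[0]) * mm_per_px_x
    dy_mm = abs(p2[1] - p1[1]) * mm_per_px_y
    return math.hypot(dx_mm, dy_mm)

# 100 x 50 px standard image assumed to cover an 85.6 mm x 54.0 mm card.
d = physical_distance((1, 1), (3, 3), 85.6 / 100, 54.0 / 50)
```

This mirrors the (1, 1) to (3, 3) example above: two pixel points apart on each axis, each count scaled to millimetres before the hypotenuse is taken.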
In still another embodiment of the present invention, the present invention further provides a distance measuring apparatus, as shown in fig. 14, including:
an obtaining module 01, configured to obtain a target image, where the target image includes: the device comprises a reference object and at least two measuring points, wherein the reference object and the measuring points are positioned on the same plane;
a first determining module 02, configured to determine the coordinates of the corner point of the reference object in the target image;
the second determining module 03 is configured to determine a coordinate transformation relationship according to the corner coordinates and standard corner coordinates of the reference object in a preset standard image;
the conversion module 04 is configured to convert the coordinates of the measurement point in the target image into standard coordinates in the preset standard image according to the coordinate conversion relationship;
and the calculating module 05 is used for calculating the distance between any two measuring points based on the standard coordinates.
It can be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working process described above may refer to the corresponding process in the foregoing method embodiment, and is not described herein again.
In a further embodiment of the present invention, the present application provides distance measuring equipment comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, the processor implementing the steps of the method of any of the above embodiments when executing the computer program.
In a further embodiment of the present invention, the present application provides a computer readable medium having non-volatile program code executable by a processor, the program code causing the processor to perform the method of any of the above embodiments.
It is noted that, in this document, relational terms such as "first" and "second," and the like, may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
The foregoing are merely exemplary embodiments of the present invention, which enable those skilled in the art to understand or practice the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (11)

1. A distance measuring method, the method comprising:
acquiring a target image, wherein the target image comprises: a reference and at least two measurement points;
determining the coordinates of the corner points of the reference object in the target image;
determining a coordinate transformation relation according to the corner point coordinates and standard corner point coordinates of the reference object in a preset standard image;
converting the coordinates of the measuring points in the target image into standard coordinates in the preset standard image according to the coordinate conversion relation;
and calculating the distance between any two measuring points based on the standard coordinates.
2. The distance measuring method according to claim 1, wherein the step of determining the coordinates of the corner points of the reference object in the target image comprises:
inputting the target image into a pre-established target detection model, and detecting the contour of the reference object by using the target detection model to obtain a rough contour;
corner coordinates of corner points of the rough contour are determined in the target image.
3. The distance measuring method of claim 2, wherein the step of determining corner point coordinates of corner points of the rough contour in the target image comprises:
detecting a reference corner point in the rough contour;
respectively constructing line frames containing rough contour lines based on every two reference angular points positioned on the same rough contour line;
determining an actual contour line in the rough contour lines contained in each line frame;
and determining the coordinates of the corner points according to a plurality of actual contour lines.
4. The distance measuring method of claim 3, wherein the step of determining an actual contour line among the rough contour lines included in each line frame comprises:
respectively carrying out filtering processing on the rough contour lines contained in each line frame to obtain an intermediate image;
calculating a transverse gradient matrix and a longitudinal gradient matrix of the intermediate image;
establishing a plane rectangular coordinate system by taking the center of the intermediate image as an origin;
if the included angle between the rough contour line and the transverse axis is smaller than or equal to a preset angle, determining the actual contour line according to the transverse gradient matrix;
or if the included angle between the rough contour line and the longitudinal axis is smaller than or equal to a preset angle, determining the actual contour line according to the longitudinal gradient matrix;
or if the included angle between the rough contour line and any coordinate axis is larger than a preset angle, calculating corresponding matrix elements in the transverse gradient matrix and the longitudinal gradient matrix by using the pythagorean theorem to obtain composite matrix elements;
determining a horizontal and vertical composite gradient matrix according to a plurality of composite matrix elements;
and determining the actual contour line according to the horizontal and vertical composite gradient matrix.
5. The distance measuring method according to claim 3, wherein the step of determining the coordinates of the corner points from a plurality of said actual contour lines comprises:
fitting each actual contour line to obtain a middle contour line;
and determining the coordinates of the intersection points between the intermediate contour lines on the target image as corner point coordinates.
6. The distance measuring method according to claim 1, wherein the step of determining a coordinate transformation relation based on the corner coordinates and the standard corner coordinates of the reference object in a preset standard image comprises:
determining the corresponding relation between each corner point coordinate and the standard coordinate;
calculating a homography matrix according to the corresponding relations, the corner coordinates and the standard coordinates;
and determining the homography matrix as a coordinate transformation relation.
7. The distance measuring method according to claim 6, wherein the step of calculating the distance between any two measuring points based on the standard coordinates comprises:
calculating the product of the coordinate of each measuring point in the target image and the homography matrix to obtain the standard coordinates of the measuring points;
and calculating the distance between any two standard coordinates to obtain the distance between two measuring points.
8. The method according to claim 6, wherein each standard coordinate corresponds to a pixel point in a standard image;
calculating the distance between any two standard coordinates to obtain the distance between two measuring points, wherein the step comprises the following steps:
acquiring the physical size of the standard image and the physical size of a pixel point in the standard image;
calculating the resolution of the standard image according to the physical size of the standard image and the physical size of the pixel point;
calculating the number of pixel points between the two standard coordinates;
and calculating the product of the number of the pixel points and the resolution ratio to obtain the distance between the two measuring points.
9. A distance measuring device, comprising:
an obtaining module, configured to obtain a target image, where the target image includes: a reference and at least two measurement points;
a first determining module, configured to determine corner coordinates of the reference object in the target image;
the second determining module is used for determining a coordinate conversion relation according to the corner point coordinates and standard corner point coordinates of the reference object in a preset standard image;
the conversion module is used for converting the coordinates of the measuring points in the target image into standard coordinates in the preset standard image according to the coordinate conversion relation;
and the calculation module is used for calculating the distance between any two measuring points based on the standard coordinates.
10. Distance measuring equipment comprising a memory and a processor, the memory having stored thereon a computer program operable on the processor, wherein the processor, when executing the computer program, performs the steps of the method of any one of claims 1 to 8.
11. A computer-readable medium having non-volatile program code executable by a processor, wherein the program code causes the processor to perform the method of any of claims 1 to 8.
CN201911130044.5A 2019-11-18 2019-11-18 Distance measuring method, device, equipment and computer readable medium Active CN111161339B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911130044.5A CN111161339B (en) 2019-11-18 2019-11-18 Distance measuring method, device, equipment and computer readable medium


Publications (2)

Publication Number Publication Date
CN111161339A true CN111161339A (en) 2020-05-15
CN111161339B CN111161339B (en) 2020-11-27

Family

ID=70555921

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911130044.5A Active CN111161339B (en) 2019-11-18 2019-11-18 Distance measuring method, device, equipment and computer readable medium

Country Status (1)

Country Link
CN (1) CN111161339B (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112017232A (en) * 2020-08-31 2020-12-01 浙江水晶光电科技股份有限公司 Method, device and equipment for positioning circular pattern in image
CN112348909A (en) * 2020-10-26 2021-02-09 北京市商汤科技开发有限公司 Target positioning method, device, equipment and storage medium
CN114872048A (en) * 2022-05-27 2022-08-09 河南职业技术学院 Robot steering engine angle calibration method
CN115760856A (en) * 2023-01-10 2023-03-07 惟众信(湖北)科技有限公司 Part spacing measuring method and system based on image recognition and storage medium

Citations (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070070197A1 (en) * 2005-09-28 2007-03-29 Nissan Motor Co., Ltd. Vehicle periphery video providing apparatus and method
CN101464132A (en) * 2008-12-31 2009-06-24 北京中星微电子有限公司 Position confirming method and apparatus
CN101504285A (en) * 2009-02-23 2009-08-12 北京建筑工程学院 Method for confirming mounting surface of outdoor advertisement screen
CN101630406A (en) * 2008-07-14 2010-01-20 深圳华为通信技术有限公司 Camera calibration method and camera calibration device
CN101998136A (en) * 2009-08-18 2011-03-30 华为技术有限公司 Homography matrix acquisition method as well as image pickup equipment calibrating method and device
CN102305608A (en) * 2011-05-13 2012-01-04 哈尔滨工业大学 Error measurement and compensation method for multi-target two-dimensional cross motion simulation system
CN105513078A (en) * 2015-12-15 2016-04-20 浙江农林大学 Standing tree information acquisition method and device based on images
CN105551039A (en) * 2015-12-14 2016-05-04 深圳先进技术研究院 Calibration method and calibration device for structured light 3D scanning system
CN106264537A (en) * 2015-05-25 2017-01-04 杭州海康威视系统技术有限公司 The measurement system and method for human body attitude height in image
CN107527369A (en) * 2017-08-30 2017-12-29 广州视源电子科技股份有限公司 Method for correcting image, device, equipment and computer-readable recording medium
CN108364318A (en) * 2018-01-18 2018-08-03 北京科技大学 A kind of planar dimension monocular measuring method for eliminating protective glass refractive effect
CN109059895A (en) * 2018-03-28 2018-12-21 南京航空航天大学 A kind of multi-modal indoor ranging and localization method based on mobile phone camera and sensor
CN109448043A (en) * 2018-10-22 2019-03-08 浙江农林大学 Standing tree height extracting method under plane restriction
CN109840897A (en) * 2017-11-28 2019-06-04 深圳市航盛电子股份有限公司 Vehicle panoramic method for correcting image and vehicle panoramic system
CN109979206A (en) * 2017-12-28 2019-07-05 杭州海康威视系统技术有限公司 Vehicle speed measuring method, device, system, electronic equipment and storage medium

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112017232A (en) * 2020-08-31 2020-12-01 浙江水晶光电科技股份有限公司 Method, device and equipment for positioning circular pattern in image
CN112017232B (en) * 2020-08-31 2024-03-15 浙江水晶光电科技股份有限公司 Positioning method, device and equipment for circular patterns in image
CN112348909A (en) * 2020-10-26 2021-02-09 北京市商汤科技开发有限公司 Target positioning method, device, equipment and storage medium
CN114872048A (en) * 2022-05-27 2022-08-09 河南职业技术学院 Robot steering engine angle calibration method
CN114872048B (en) * 2022-05-27 2024-01-05 河南职业技术学院 Robot steering engine angle calibration method
CN115760856A (en) * 2023-01-10 2023-03-07 惟众信(湖北)科技有限公司 Part spacing measuring method and system based on image recognition and storage medium

Also Published As

Publication number Publication date
CN111161339B (en) 2020-11-27

Similar Documents

Publication Publication Date Title
CN111161339B (en) Distance measuring method, device, equipment and computer readable medium
CN106651752B (en) Three-dimensional point cloud data registration method and splicing method
WO2019200837A1 (en) Method and system for measuring volume of parcel, and storage medium and mobile terminal
US20200364849A1 (en) Method and device for automatically drawing structural cracks and precisely measuring widths thereof
CN101673397B (en) Digital camera nonlinear calibration method based on LCDs
CN108613630B (en) Two-wire tube level bubble offset measurement method based on image processing technology
CN111307039A (en) Object length identification method and device, terminal equipment and storage medium
CN112132907B (en) Camera calibration method and device, electronic equipment and storage medium
CN104899888B (en) Image sub-pixel edge detection method based on Legendre squares
JP2014025748A (en) Dimension measuring program, dimension measuring instrument, and dimension measuring method
CN106296587B (en) Splicing method of tire mold images
CN111681186A (en) Image processing method and device, electronic equipment and readable storage medium
US8068673B2 (en) Rapid and high precision centroiding method and system for spots image
CN111222507A (en) Automatic identification method of digital meter reading and computer readable storage medium
CN110793441B (en) High-precision object geometric dimension measuring method and device
CN112200822A (en) Table reconstruction method and device, computer equipment and storage medium
JP2002165083A (en) Picture processing method and non-contact picture input device using the same
CN109902695B (en) Line feature correction and purification method for image pair linear feature matching
CN112016341A (en) Text picture correction method and device
CN115239789B (en) Method and device for determining liquid volume, storage medium and terminal
CN114998571B (en) Image processing and color detection method based on fixed-size markers
CN102637094A (en) Correction information calculation method and system applied to optical touch device
CN115511807A (en) Method and device for determining position and depth of groove
CN115018735A (en) Fracture width identification method and system for correcting two-dimensional code image based on Hough transform
CN114862761A (en) Power transformer liquid level detection method, device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CP03 Change of name, title or address

Address after: 519080 Building 8, No. 1 Hagongda (Harbin Institute of Technology) Road, Tangjiawan Town, Zhuhai City, Guangdong Province

Patentee after: Zhuhai necessary Industrial Technology Co.,Ltd.

Address before: Building 8, No. 1 Hagongda Road, Tangjiawan Town, Zhuhai City, Guangdong Province

Patentee before: ZHUHAI SUIBIAN TECHNOLOGY Co.,Ltd.