CN111307039A - Object length identification method and device, terminal equipment and storage medium

Info

Publication number
CN111307039A
Authority
CN
China
Prior art keywords
image
tail
head
measured
point
Prior art date
Legal status
Pending
Application number
CN202010150504.7A
Other languages
Chinese (zh)
Inventor
王延樑
黄秀林
薛鸿臻
Current Assignee
Zhuhai Suibian Technology Co ltd
Original Assignee
Zhuhai Suibian Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Zhuhai Suibian Technology Co ltd
Priority to CN202010150504.7A
Publication of CN111307039A

Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01BMEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B11/00Measuring arrangements characterised by the use of optical techniques
    • G01B11/02Measuring arrangements characterised by the use of optical techniques for measuring length, width or thickness
    • G01B11/03Measuring arrangements characterised by the use of optical techniques for measuring length, width or thickness by measuring coordinates of points

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses an object length identification method, an object length identification device, terminal equipment and a storage medium. The method comprises the following steps: acquiring a head image and a tail image of an object to be measured, wherein the head image and the tail image comprise the same reference object; acquiring a head point to be measured corresponding to the head image and a tail point to be measured corresponding to the tail image; calculating a head coordinate conversion relation and a tail coordinate conversion relation according to the reference object in the head image and the tail image; and calculating the object length of the object to be measured according to the head point to be measured, the tail point to be measured, the head coordinate conversion relation and the tail coordinate conversion relation. The technical scheme of the embodiments of the invention can identify the length of an object accurately and quickly and reduce the error of the identification result.

Description

Object length identification method and device, terminal equipment and storage medium
Technical Field
The embodiment of the invention relates to a digital image processing technology, in particular to an object length identification method, an object length identification device, terminal equipment and a storage medium.
Background
Digital image processing converts an image signal into a digital signal and processes it with a computer; the technology is widely used in people's daily life.
The traditional ways of measuring the length of an object rely mainly on manual measurement with a graduated scale or the like, or on scanning with a three-dimensional laser scanner. A graduated scale gives low measurement accuracy and large error, while a three-dimensional laser scanner is costly and poorly portable. With digital image processing, the length of an object can be identified from a photograph that contains the object together with a reference object. In the prior art, the measured object and a reference object are photographed in the same scene, and the size of the measured object is calculated from the known size of the reference object.
In the process of implementing the invention, the inventors found the following disadvantage in the prior art: fig. 1 shows a schematic diagram of an object length measurement error in the prior art. As shown in fig. 1, factors such as the shooting angle and shooting height easily introduce a measurement error and reduce the accuracy of object length identification.
Disclosure of Invention
The embodiment of the invention provides an object length identification method, an object length identification device, terminal equipment and a storage medium, so that the object length can be accurately and quickly identified, and errors of an identification result are reduced.
In a first aspect, an embodiment of the present invention provides an object length identification method, where the method includes:
acquiring a head image and a tail image of an object to be measured, wherein the head image and the tail image comprise the same reference object;
acquiring a head to-be-measured point corresponding to the head image and a tail to-be-measured point corresponding to the tail image;
calculating a head coordinate conversion relation and a tail coordinate conversion relation according to reference objects in the head image and the tail image; and calculating the object length of the object to be measured according to the head point to be measured, the tail point to be measured, the head coordinate conversion relation and the tail coordinate conversion relation.
In a second aspect, an embodiment of the present invention further provides an object length identification apparatus, where the apparatus includes:
the object-to-be-measured image acquisition module is used for acquiring a head image and a tail image of an object to be measured, wherein the head image and the tail image comprise the same reference object;
the acquisition module of points to be measured is used for acquiring head points to be measured corresponding to the head images and tail points to be measured corresponding to the tail images;
the coordinate conversion relation acquisition module is used for calculating a head coordinate conversion relation and a tail coordinate conversion relation according to reference objects in the head image and the tail image;
and the object length identification module is used for calculating the object length of the object to be measured according to the head point to be measured, the tail point to be measured, the head coordinate conversion relation and the tail coordinate conversion relation.
In a third aspect, an embodiment of the present invention further provides a terminal device, where the terminal device includes: one or more processors; and a storage means for storing one or more programs; when the one or more programs are executed by the one or more processors, the one or more processors implement the object length identification method according to any one of the embodiments of the present invention.
In a fourth aspect, embodiments of the present invention further provide a storage medium containing computer-executable instructions, which when executed by a computer processor, are used to perform the object length identification method according to any one of the embodiments of the present invention.
The embodiment of the invention shoots a head image and a tail image of an object to be measured, acquires the points to be measured in the head image and the tail image, calculates a head coordinate conversion relation and a tail coordinate conversion relation from the corner point coordinates of a reference object in the head image and the tail image and the standard corner point coordinates of the reference object, and calculates the length of the object to be measured from the head coordinate conversion relation, the tail coordinate conversion relation and the points to be measured in the head image and the tail image. This solves the problems of large measurement error and low identification precision when identifying the length of an object in the prior art, identifies the length of the object accurately and quickly, and reduces the error of the identification result.
Drawings
FIG. 1 is a schematic diagram of an error in measuring the length of an object according to the prior art;
FIG. 2a is a flowchart of an object length recognition method according to a first embodiment of the present invention;
FIG. 2b is a schematic representation of an image of a toe suitable for use in embodiments of the invention;
FIG. 2c is a schematic view of a heel image suitable for use in embodiments of the present invention;
FIG. 3a is a flowchart of an object length recognition method according to a second embodiment of the present invention;
fig. 3b is a flowchart of a method for acquiring a reference corner point of a reference object, which is suitable for use in the embodiment of the present invention;
fig. 3c is a flowchart of a method for obtaining a reference corner point of a reference object, which is suitable for use in the embodiment of the present invention;
FIG. 3d is a schematic diagram of a rough outline of an identification card;
FIG. 3e is a diagram of a line box dividing a rough contour;
FIG. 3f is a flow chart of a method for determining an actual contour line from a rough contour line suitable for use in embodiments of the present invention;
FIG. 3g is a schematic diagram illustrating an angle between a rough contour line and a coordinate axis;
fig. 4 is a schematic structural diagram of an object length recognition apparatus according to a third embodiment of the present invention;
fig. 5 is a schematic structural diagram of a terminal device in the fourth embodiment of the present invention.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the invention and are not limiting of the invention. It should be further noted that, for the convenience of description, only some of the structures related to the present invention are shown in the drawings, not all of the structures.
Example one
Fig. 2a is a flowchart of an object length recognition method according to an embodiment of the present invention, where the embodiment is applicable to a case of precisely recognizing an object length, and the method may be executed by an object length recognition apparatus, which may be implemented by software and/or hardware and is generally integrated in a terminal device.
As shown in fig. 2a, the technical solution of the embodiment of the present invention specifically includes the following steps:
s110, acquiring a head image and a tail image of the object to be measured, wherein the head image and the tail image comprise the same reference object.
The object to be measured may be an object whose length needs to be measured. The head image may be an image obtained by photographing the head of the object to be measured together with a reference object, and the tail image may be an image obtained by photographing the tail of the object to be measured together with the same reference object. The head and the tail of the object to be measured are its two ends; for example, when the object to be measured is a foot, the head and the tail may be the toe and the heel, respectively. The reference object can be an object with a known size and a fixed shape, such as an identity card, a bank card or a mobile phone.
In the embodiment of the invention, firstly, the object to be measured and the reference object are placed on the same plane, then the head of the object to be measured and the reference object are shot to obtain the head image, and the tail of the object to be measured and the reference object are shot to obtain the tail image.
In an optional embodiment of the present invention, the head image and the tail image may be captured by an image capturing device facing the front of the object to be measured, and an imaging surface of the image capturing device is parallel to a placement surface of the object to be measured.
The image capturing device may be a mobile phone, a camera, or another device capable of capturing images. The imaging surface can be the plane onto which the scene is projected according to the pinhole imaging principle when the object to be measured is photographed. The placement surface of the object to be measured is the plane on which the object to be measured is placed; for example, when the object to be measured is placed on the ground, the placement surface is the ground plane.
In a specific example, when the length of a human foot needs to be measured, the object to be measured is the foot and the reference object is an identity card. The identity card is placed beside the foot, and a mobile phone is used to shoot images of the toe and the heel. When shooting, the mobile phone faces the front of the toe, is kept level and parallel to the ground, and the front of the toe and the identity card are captured in the same frame. Similarly, the mobile phone faces the front of the heel, is kept level and parallel to the ground, and the front of the heel and the identity card are captured in the same frame. The captured toe image and heel image are the head image and the tail image required by this embodiment.
Specifically, a limit circle and a cross can be shown on the camera interface of the mobile phone. When the cross is inside the limit circle, the Inertial Measurement Unit (IMU) sensor of the mobile phone indicates that the phone is parallel to the ground, so the user only needs to keep the cross inside the limit circle to hold the phone level with the ground. A horizontal line is also drawn through the centre of the limit circle, and the head or tail of the object to be measured should be tangent to this line when shooting. To make the operation easier for the user, two horizontal lines can be drawn instead, and the head or tail of the object to be measured only needs to lie in the area between the two lines when shooting, which reduces the identification (calculation) error when the object length is subsequently identified (for example, when the length of the object is calculated). Fig. 2b is a schematic diagram of a toe image and fig. 2c is a schematic diagram of a heel image; as shown in fig. 2b and 2c, the toe image shows the front of the toe and the front of the identity card, and the heel image shows the front of the heel and the front of the identity card.
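The following is a minimal sketch of the kind of levelness check the limit circle and cross could drive, assuming the shooting app can read a three-axis accelerometer; the choice of reference axis and the tolerance are illustrative assumptions and are not specified in the patent.

```python
import math

def is_level(ax, ay, az, tolerance_deg=2.0):
    """Check whether the device is approximately in the required shooting pose.

    ax, ay, az: accelerometer readings (gravity vector) in the device frame.
    Assumption: in the required pose, gravity should lie along the device's
    y axis; which axis applies depends on how the phone is held and is an
    illustrative choice here.
    """
    g = math.sqrt(ax * ax + ay * ay + az * az)
    if g == 0:
        return False
    # Angle between the measured gravity vector and the assumed reference axis.
    tilt_deg = math.degrees(math.acos(max(-1.0, min(1.0, abs(ay) / g))))
    return tilt_deg <= tolerance_deg

# Example: a reading dominated by the y component passes the check.
print(is_level(0.1, 9.78, 0.3))   # True for a small tilt
```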
And S120, acquiring a head to-be-measured point corresponding to the head image and a tail to-be-measured point corresponding to the tail image.
The point to be measured on the head can be selected from the head image to represent the measurement point at the farthest end of the head of the object to be measured. The tail point to be measured can be selected from the tail image to represent the measuring point at the farthest end of the tail of the object to be measured. The distance between the head part point to be measured and the tail part point to be measured is the length of the object to be measured.
In an optional embodiment of the present invention, the head point to be measured may be the point at which a straight line drawn from the outermost point of the head of the object to be measured, perpendicular to the plane on which the object is placed, intersects that plane; the tail point to be measured may be the point at which a straight line drawn from the outermost point of the tail of the object to be measured, perpendicular to the plane on which the object is placed, intersects that plane.
In the embodiment of the invention, the head outermost point and the tail outermost point of the object to be measured can be obtained by manually labeling the head image and the tail image. Or the head contour and the tail contour of the object to be measured can be identified firstly through a preset automatic selection algorithm, and then the outermost points of the head and the outermost points of the tail are positioned. The present embodiment does not limit the manner and specific procedure for obtaining the cephalad outermost point and the caudal outermost point.
In a specific example, when the object to be measured is a foot, a straight line is drawn from the outermost point of the toe perpendicular to the ground, and the point where this line intersects the ground is the head point to be measured. Similarly, a straight line is drawn from the outermost point of the heel perpendicular to the ground, and the point where this line intersects the ground is the tail point to be measured. The distance between the head point to be measured and the tail point to be measured is the length of the object to be measured. As shown in fig. 1, a tangent line drawn at the leftmost point of the small ellipse representing the toe is perpendicular to the ground and intersects it at one point, a tangent line drawn at the rightmost point of the large ellipse representing the heel is perpendicular to the ground and intersects it at another point, and the distance between the two points represents the real foot length.
And S130, calculating a head coordinate conversion relation and a tail coordinate conversion relation according to the reference objects in the head image and the tail image.
The head coordinate transformation relation and the tail coordinate transformation relation can be used for describing the transformation relation between the standard corner coordinates and the corner coordinates of the reference object in the head image and the tail image. The calculation of the head coordinate conversion relation and the tail coordinate conversion relation can be calculated through the coordinates of the corner points of the reference objects in the head image and the tail image, can also be calculated through the characteristic points of the reference objects, and can also be calculated through the edges of the reference objects. The present embodiment does not limit the calculation manner of the head coordinate conversion relationship and the tail coordinate conversion relationship.
In an alternative embodiment of the present invention, calculating a head coordinate transformation relation and a tail coordinate transformation relation according to the reference objects in the head image and the tail image may include: and calculating a head coordinate conversion relation and a tail coordinate conversion relation according to the corner point coordinates of the reference object in the head image and the tail image and the standard corner point coordinates of the reference object.
A corner point may be an extreme point whose attributes are particularly prominent in some respect; for example, the gray value of a corner point may be significantly higher than the gray values of nearby pixels. A corner point may also be a feature point satisfying a preset condition, for example an isolated point with no other points or lines in its neighbourhood. A corner point may also refer to the intersection of two lines: a rectangle has four sides, any two adjacent sides intersect in one corner point, and so a rectangle contains four corner points, while a triangle contains three. The corner point coordinates are the coordinates of the corner points in the head image or the tail image.
In the embodiment of the present invention, the head image and the tail image both contain the reference object, but the type of reference object is not fixed, and the selected reference object is not necessarily a rectangle with four corner points, or a triangle or another shape with clearly defined corners. For example, the reference object may be an identity card whose four corners are circular arcs, on which no uniquely determined corner coordinate can be located. In that case the four sides of the identity card can be fitted and extended, and the coordinates of the intersection points of the extended lines in the target image taken as the corner point coordinates. More generally, other algorithms (e.g., corner point detection) can be used to accurately locate the position coordinates of the reference object in the target image, so as to ensure the accuracy of the subsequent calculation.
The standard corner point coordinates are the coordinates of the corner points of the reference object in a preset standard image. The preset standard image is an image of known size and resolution, where the size of the standard image is determined by the size of the reference object within it. For example, if an identity card fills the entire standard image, the size of the standard image equals the real size of the identity card; if the identity card occupies only half of the standard image, the size of the standard image is twice the size of the reference object. A coordinate system is set up in the preset standard image so that the standard corner point coordinates of the reference object's corners can be determined. For the plane rectangular coordinate system, the lower-left corner of the preset standard image can be taken as the origin; the numbers of horizontal and vertical pixels of the standard image are first calculated from its size and resolution, and the division of the coordinate axes is then determined from these pixel counts. For example, if the standard image has 100 horizontal pixels and 50 vertical pixels, a coordinate system is established with the lower-left corner as the origin, the X axis carries 100 divisions and the Y axis 50 divisions, and each coordinate point represents one pixel; for instance, the coordinate (1, 1) represents the first pixel at the lower-left corner of the standard image.
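A short sketch of how the standard image's pixel grid and physical scale could be derived from the reference object's known size and a chosen resolution; the card dimensions (85.6 mm x 54.0 mm, ISO/IEC 7810 ID-1) and the 10 px/mm resolution below are illustrative assumptions.

```python
def standard_image_grid(ref_width_mm, ref_height_mm, px_per_mm):
    """Number of horizontal/vertical pixels of a standard image that the
    reference object fills completely, plus the mm-per-pixel scale used to
    turn pixel distances in the standard image back into real lengths."""
    width_px = round(ref_width_mm * px_per_mm)
    height_px = round(ref_height_mm * px_per_mm)
    mm_per_px = 1.0 / px_per_mm
    return width_px, height_px, mm_per_px

# Illustrative values: an ID-1 card (85.6 mm x 54.0 mm) rendered at 10 px/mm.
w, h, scale = standard_image_grid(85.6, 54.0, 10)
print(w, h, scale)   # 856 540 0.1
# With the lower-left corner as origin, the card's standard corner
# coordinates are then (0, 0), (w, 0), (w, h) and (0, h).
```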
In the embodiment of the invention, the coordinate conversion relation is calculated from the relation between the corner point coordinates of the reference object and the standard corner point coordinates. This is only one preferred way of calculating the coordinate conversion relation; it may also be calculated from a feature-matching relation between the reference object in the head image and the tail image and a reference image of the reference object. This embodiment does not limit the calculation method or the specific implementation of the coordinate conversion relation.
S140, calculating the object length of the object to be measured according to the head point to be measured, the tail point to be measured, the head coordinate conversion relation and the tail coordinate conversion relation.
In the embodiment of the present invention, after the head point to be measured and the tail point to be measured are acquired from the head image and the tail image, and the head coordinate conversion relation and the tail coordinate conversion relation are obtained from the corner point coordinates of the reference object in the head image and the tail image, the distance between the head point to be measured and the tail point to be measured can be calculated from the head point to be measured, the tail point to be measured, the head coordinate conversion relation and the tail coordinate conversion relation, and this distance is taken as the length of the object to be measured.
In an optional embodiment of the present invention, calculating the object length of the object to be measured according to the head point to be measured, the tail point to be measured, the head coordinate conversion relationship, and the tail coordinate conversion relationship may include: converting the coordinates of the head point to be measured in the head image into head standard coordinates in the preset standard image according to the head coordinate conversion relation; converting the coordinates of the tail point to be measured in the tail image into tail standard coordinates in the preset standard image according to the tail coordinate conversion relation; and calculating the distance between the head standard coordinate and the tail standard coordinate, and taking the calculation result as the length of the object.
In the embodiment of the invention, the head standard coordinates of the head point to be measured in the preset standard image can be obtained according to the coordinates of the head point to be measured in the head image and the head coordinate conversion relation; according to the coordinates of the tail point to be measured in the tail image and the tail coordinate conversion relation, the tail standard coordinates of the tail point to be measured in the preset standard image can be obtained. The head standard coordinate and the tail standard coordinate are in the same coordinate system at the moment, and the distance can be calculated by the following Euclidean distance formula:
d = \sqrt{(x_1 - x_2)^2 + (y_1 - y_2)^2}
wherein d is the distance between the head standard coordinate and the tail standard coordinate, x_1 and y_1 are the abscissa and ordinate of the head standard coordinate, and x_2 and y_2 are the abscissa and ordinate of the tail standard coordinate.
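A minimal sketch of this distance computation; the optional mm_per_px factor, which converts standard-image pixels into a physical length, is an assumption about how the standard image is scaled.

```python
import math

def object_length(head_std, tail_std, mm_per_px=1.0):
    """Euclidean distance between the head and tail standard coordinates.
    mm_per_px converts standard-image pixels into a physical length and is an
    assumption about how the standard image was scaled."""
    (x1, y1), (x2, y2) = head_std, tail_std
    return math.hypot(x1 - x2, y1 - y2) * mm_per_px

print(object_length((120.0, 35.0), (372.0, 41.0), mm_per_px=0.1))  # ~25.2
```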
According to the technical scheme of this embodiment, a head image and a tail image of the object to be measured are shot, the points to be measured are obtained from the head image and the tail image, a head coordinate conversion relation and a tail coordinate conversion relation are calculated from the corner point coordinates of the reference object in the head image and the tail image and the standard corner point coordinates of the reference object, and the length of the object to be measured is calculated from the head coordinate conversion relation, the tail coordinate conversion relation and the points to be measured in the head image and the tail image. This solves the problems of large measurement error and low identification precision when identifying the length of an object in the prior art, identifies the length of the object accurately and quickly, and reduces the error of the identification result.
Example two
Fig. 3a is a flowchart of an object length identification method according to a second embodiment of the present invention, and the second embodiment of the present invention further embodies the process of calculating the head coordinate transformation relation and the tail coordinate transformation relation, and the process of calculating the object length of the object to be measured.
Correspondingly, as shown in fig. 3a, the technical solution of the embodiment of the present invention specifically includes the following steps:
s210, acquiring a head image and a tail image of the object to be measured, wherein the head image and the tail image comprise the same reference object.
S220, acquiring a head to-be-measured point corresponding to the head image and a tail to-be-measured point corresponding to the tail image.
And S230, respectively acquiring reference corner points of the reference object in the head image and the tail image.
The reference corner point may be a rough corner point of the reference object, and is used to determine a rough contour of the reference object, so as to further obtain a corner point coordinate of the reference object.
In the embodiment of the invention, to obtain the reference corner points of the reference object, the outline of the reference object can be detected with a deep learning network model to obtain its rough outline, and the reference corner points can then be detected within the rough outline. Alternatively, a homography matrix between the head image or the tail image and the standard image can be obtained through feature matching, and the standard corner coordinates of the reference object in the standard image converted into reference corner coordinates in the head image or the tail image through the homography matrix. This embodiment does not limit the method or the specific implementation of obtaining the reference corner coordinates.
In the embodiment of the present invention, S230-S270 are performed on both the head image and the tail image, so as to obtain the corner point coordinates of the reference object in the head image and in the tail image respectively.
Fig. 3b provides a flowchart of a method for acquiring a reference corner point of a reference object, and accordingly, as shown in fig. 3b, S230 may include:
s231, calculating a characteristic point homography matrix according to the corresponding relation between the characteristic points of the reference object in the head image and the characteristic points in the preset standard image.
A feature point may be a point where the gray value of the image changes dramatically, or a point of large curvature on an image edge. Feature points are extracted from the head image and from the standard image and matched against each other; the homography matrix is then calculated from the correspondence between the matched feature points in the head image and the standard image.
And S232, calculating the reference corner point coordinates of the reference object in the head image according to the feature point homography matrix and the standard corner point coordinates of the reference object in the preset standard image.
The standard corner coordinates are coordinates of a corner of the reference object in the standard image, and the standard corner coordinates can be converted into coordinates of a reference corner of the reference object in the head image according to the homography matrix calculated in S231.
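A sketch of S231-S232 under the assumption that OpenCV is used: ORB features, brute-force Hamming matching and RANSAC are one plausible choice, since the patent does not name a particular detector or matcher.

```python
import cv2
import numpy as np

def reference_corners_by_matching(head_img, standard_img, standard_corners):
    """Estimate the reference corner coordinates in the head image by matching
    feature points against the preset standard image (S231-S232).

    standard_corners: (N, 2) array of the reference object's standard corner
    coordinates in the standard image.
    """
    orb = cv2.ORB_create(2000)                      # feature detector (assumed)
    k1, d1 = orb.detectAndCompute(standard_img, None)
    k2, d2 = orb.detectAndCompute(head_img, None)

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(d1, d2), key=lambda m: m.distance)[:200]

    src = np.float32([k1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([k2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)

    # Feature-point homography from the standard image to the head image.
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

    pts = np.float32(standard_corners).reshape(-1, 1, 2)
    return cv2.perspectiveTransform(pts, H).reshape(-1, 2)
```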
Fig. 3c provides a flowchart of a method for acquiring a reference corner point of a reference object, and as shown in fig. 3c, S230 may further include:
s2301, inputting the head image and the tail image into pre-established detection models respectively, and detecting the outline of the reference object by using the detection models to obtain a rough outline.
The detection model is established in advance and is used to extract the outline of the reference object; the choice of detection model depends on the type of reference object. For example, if an identity card is selected as the reference object, a detection model for extracting the edge of an identity card is selected. A number of sample images containing identity cards are collected, the card edges are marked manually, and the detection model is trained on these sample images until its output accuracy reaches a preset standard value; the model can then be used to extract the outline of the identity card from an image. However, extraction with a neural network model yields only a rough contour of the reference object. Fig. 3d is a schematic diagram of the rough contour of an identity card; as shown in fig. 3d, the rough contour may contain multiple line segments, so the rough edge must be processed further to determine accurate corner point coordinates of the reference object within the rough contour.
S2302, detecting a reference corner point in the rough contour.
The reference corner points may be corner points obtained by running a corner detection algorithm on the rough contour. They may, however, be inaccurate: corner points detected on a quasi-rectangle with arc-shaped corners, such as an identity card or a bank card, have low accuracy and cannot precisely represent the positions of the corners of the rough contour in the target image, so they are used only as reference corner points for further extraction and confirmation.
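The patent does not name a corner detector; the sketch below uses Shi-Tomasi corners (cv2.goodFeaturesToTrack) on a binary mask of the rough contour as one plausible, assumed choice for S2302.

```python
import cv2
import numpy as np

def reference_corners_from_mask(rough_mask, max_corners=4):
    """Detect reference corner points in a rough-contour mask (S2302).
    rough_mask: 8-bit single-channel image in which the rough contour of the
    reference object is non-zero. The detector choice is an assumption."""
    corners = cv2.goodFeaturesToTrack(rough_mask, maxCorners=max_corners,
                                      qualityLevel=0.05, minDistance=30)
    if corners is None:
        return np.empty((0, 2), dtype=np.float32)
    return corners.reshape(-1, 2)   # rough (x, y) reference corners
```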
And S240, respectively constructing line frames containing the rough contour lines based on every two reference corner points positioned on the same rough contour line.
In the embodiment of the present invention, the rough contour of the reference object extracted by the neural network model may contain many fine lines or other noise points. To avoid lines in a complex background interfering with edge extraction, reference corner points are extracted from the rough outline and several line frames are constructed, each containing part of the rough contour line; this divides the image into regions that each contain a portion of the rough contour and shields the edge extraction of the reference object from background lines.
In the embodiment of the present invention, the way the line frames are constructed may be determined according to the actual situation. Optionally, fig. 3e is a schematic diagram of dividing the rough contour line into line frames; as shown in fig. 3e, every two adjacent reference corner points are used to construct a rectangular line frame, dividing the rough contour of the reference object into four regions, each containing part of the rough contour line, and the actual contour line is determined within each line frame. The shape, size and other specific settings of the line frames given here are only examples and may be determined according to the actual situation; the invention does not specifically limit them.
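A small sketch of S240 under the assumption that the line frames are axis-aligned rectangles built from each pair of adjacent reference corners; the padding value is illustrative.

```python
def build_line_frames(reference_corners, pad=20):
    """For each pair of adjacent reference corners (in order around the
    reference object), return an axis-aligned box (x0, y0, x1, y1) that
    encloses the rough contour segment joining them (S240)."""
    frames = []
    n = len(reference_corners)
    for i in range(n):
        (xa, ya), (xb, yb) = reference_corners[i], reference_corners[(i + 1) % n]
        frames.append((min(xa, xb) - pad, min(ya, yb) - pad,
                       max(xa, xb) + pad, max(ya, yb) + pad))
    return frames

# Four reference corners give four frames, one per side of the reference object.
```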
And S250, determining an actual contour line in the rough contour lines contained in each line frame.
In an optional embodiment of the present invention, when the reference corner is obtained by using feature matching, the actual contour line may be obtained by performing edge detection on the rough contour line included in each line frame.
Any algorithm that can implement edge detection is within the scope of the embodiments of the present invention. Illustratively, the edge detection may be performed using the Canny edge detection algorithm.
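A sketch of edge detection on one line frame using Canny, as suggested above; the blur kernel and thresholds are illustrative assumptions.

```python
import cv2

def actual_contour_edges(gray_image, frame, low=50, high=150):
    """Run Canny edge detection on the rough contour contained in one line
    frame (S250); thresholds are illustrative."""
    x0, y0, x1, y1 = [int(v) for v in frame]
    roi = gray_image[max(y0, 0):y1, max(x0, 0):x1]
    blurred = cv2.GaussianBlur(roi, (5, 5), 0)   # suppress background noise
    return cv2.Canny(blurred, low, high)         # edge map of this frame
```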
Fig. 3f provides a flowchart of a method for determining an actual contour line from a rough contour line. As shown in fig. 3f, S250 may include:
S251, respectively carrying out filtering processing on the rough contour lines contained in each line frame to obtain an intermediate image.
The filtering process may be gaussian filtering process, which filters noise points of the rough contour line, or smooth filtering process, which fits the rough contour line into a uniquely determined line. The present embodiment does not limit the type and specific procedure of the filtering process.
In the embodiment of the invention, each line frame contains the rough contour line, and the image containing the rough contour line in the line frame is filtered to obtain the filtered intermediate image.
And S252, calculating the transverse gradient, the longitudinal gradient and the oblique gradient of each pixel in the intermediate image.
In the embodiment of the invention, the gradient value and the direction of each pixel in the intermediate image are calculated, and the gradient of the pixel can be used for expressing the change degree and the change direction of the gray value of the pixel.
And S253, acquiring a target gradient of which the included angle with the rough contour line is smaller than or equal to a preset angle in the transverse gradient, the longitudinal gradient and the oblique gradient of the current processing pixel.
If the included angle between the rough contour line and one of the gradient directions of a pixel is smaller than or equal to the preset angle, that gradient direction approximately matches the direction of the rough contour line, and that gradient is used to determine the actual contour line corresponding to the rough contour line.
In a specific example, the preset angle may be set to 22.5°, and gradient directions are grouped in 45° steps when matching them to the rough contour line. Fig. 3g is a schematic diagram of the included angle between the rough contour line and the gradient; as shown in fig. 3g, if the included angle between the rough contour line and the transverse gradient of a pixel lies between -22.5° and 22.5°, the pre-computed transverse gradient is used to determine the straight-line segment of the actual contour.
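A sketch of S252-S253 assuming Sobel operators are used for the gradients and that the edge direction (perpendicular to the gradient) is compared with the rough contour direction in 45° bins with a 22.5° half-width; the operator and angle convention are assumptions.

```python
import cv2
import numpy as np

def select_target_gradient(roi_gray, contour_angle_deg, half_width_deg=22.5):
    """For every pixel in the (filtered) intermediate image, compute the
    gradient and keep only pixels whose edge direction falls within
    half_width_deg of the rough contour line's direction (S252-S253).
    The exact binning convention is an assumption."""
    gx = cv2.Sobel(roi_gray, cv2.CV_32F, 1, 0, ksize=3)
    gy = cv2.Sobel(roi_gray, cv2.CV_32F, 0, 1, ksize=3)
    magnitude = cv2.magnitude(gx, gy)
    # Direction of the underlying edge is perpendicular to the gradient.
    edge_dir = (np.degrees(np.arctan2(gy, gx)) + 90.0) % 180.0
    diff = np.abs(edge_dir - (contour_angle_deg % 180.0))
    diff = np.minimum(diff, 180.0 - diff)
    mask = diff <= half_width_deg
    return np.where(mask, magnitude, 0.0)   # target-gradient map for this frame
```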
And S254, determining the actual contour line according to the target gradient.
It should be noted that S251 to S254 in the embodiment of the present invention are only one way to determine an actual contour line, and may extract the actual contour line in the rough contour line by using an edge detection algorithm, or may use another image processing algorithm capable of determining the actual contour line in the rough contour line, which is not limited in the embodiment of the present invention.
In another optional embodiment of the present invention, when the reference corner point is obtained by using the deep learning model, after performing edge detection on the rough contour line included in each line frame, the rough contour line after performing the edge detection needs to be further subjected to gradient algorithm processing to obtain an actual contour line.
The gradient algorithm considers the gray-scale variation in a neighbourhood of each pixel of the head image or the tail image and sets a gradient operator for that neighbourhood according to how the first-order or second-order derivative behaves near an edge. The advantage of this further processing with a gradient algorithm is that the actual contour line can be further refined.
And S260, fitting each actual contour line to obtain a middle contour line.
In the embodiment of the invention, if the corners of the reference object are circular arcs, the actual contour line may contain a curve on which no precise corner point can be located. Therefore the curved part of the actual contour line is fitted together with its straight part to obtain an intermediate contour line; the intermediate contour line may be an infinitely long straight line or a straight-line segment, and the specific choice can be determined according to the actual situation.
S270, determining the coordinates of the intersection points between the middle contour lines on the head image as the coordinates of the corner points.
In the embodiment of the invention, the intermediate contour lines are obtained by fitting the actual contour lines, and the intersection points between the intermediate contour lines are then taken as the corner points. This avoids the problem that a corner point cannot be determined directly by corner detection on a reference object with arc-shaped corners, accurately locates the corner point coordinates of the actual contour in the target image, and further improves the measurement accuracy of the distance measurement algorithm.
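A sketch of S260-S270 assuming OpenCV's cv2.fitLine is used to fit each actual contour to an intermediate straight line and the corner coordinates are taken as the pairwise intersections of adjacent fitted lines; the fitting parameters are illustrative.

```python
import cv2
import numpy as np

def fit_intermediate_line(edge_map):
    """Fit the edge pixels of one line frame to a straight line (S260).
    Returns (vx, vy, x0, y0): unit direction vector and a point on the line."""
    ys, xs = np.nonzero(edge_map)
    pts = np.column_stack([xs, ys]).astype(np.float32)
    vx, vy, x0, y0 = cv2.fitLine(pts, cv2.DIST_L2, 0, 0.01, 0.01).flatten()
    return vx, vy, x0, y0

def line_intersection(l1, l2):
    """Intersection of two intermediate lines -> one corner coordinate (S270)."""
    (vx1, vy1, x1, y1), (vx2, vy2, x2, y2) = l1, l2
    # Solve x1 + t*vx1 = x2 + s*vx2 and y1 + t*vy1 = y2 + s*vy2 for t.
    a = np.array([[vx1, -vx2], [vy1, -vy2]], dtype=np.float64)
    b = np.array([x2 - x1, y2 - y1], dtype=np.float64)
    t, _ = np.linalg.solve(a, b)
    return x1 + t * vx1, y1 + t * vy1
```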
And S280, determining the corresponding relation between each corner point coordinate and the standard corner point coordinate of the reference object in the preset standard image in the head image.
In the embodiment of the present invention, each corner coordinate of the reference object in the head image and the tail image has a corresponding standard corner coordinate on the standard image, for example: when the reference object is the identity card, the coordinate of the corner point of the upper left corner of the front face of the identity card in the head image is a, the standard coordinate of the upper left corner of the front face of the identity card in the standard image is A, and then a and A have a corresponding relation.
S290, calculating a homography matrix according to the corresponding relation, the corner point coordinates and the standard corner point coordinates, and taking the homography matrix as a head coordinate conversion relation.
Homography refers to the projective mapping from one plane to another. Determining the coordinate conversion relation between a point in the head image and the corresponding point in the standard image involves establishing a homography matrix.
In the embodiment of the present invention, the homography matrix is calculated from several corner point coordinates and the standard corner point coordinates corresponding to them. Suppose there is a point A in the head image and that A is mapped by the homography to a point B in the standard image; this is expressed as A · H = B, where the homography matrix H has the form:
H = \begin{bmatrix} h_{11} & h_{12} & h_{13} \\ h_{21} & h_{22} & h_{23} \\ h_{31} & h_{32} & 1 \end{bmatrix}
Because A and B are homogeneous coordinates in two-dimensional images, only the X-axis and Y-axis coordinates vary in the target image and the standard image and the Z coordinate can be set to 1, so the homography matrix H has only 8 degrees of freedom to solve, and at least four pairs of corresponding coordinate points are needed to calculate H.
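A sketch of how the head (or tail) coordinate conversion relation could be computed once the four corner coordinates and the four standard corner coordinates are known; cv2.getPerspectiveTransform solves exactly the 8 unknowns discussed above, assuming OpenCV is used.

```python
import cv2
import numpy as np

def coordinate_conversion(image_corners, standard_corners):
    """Homography that maps corner coordinates in the head (or tail) image to
    the standard corner coordinates of the reference object. Exactly four
    correspondences fix the 8 degrees of freedom of H."""
    src = np.float32(image_corners)       # corners detected in the image
    dst = np.float32(standard_corners)    # known corners in the standard image
    return cv2.getPerspectiveTransform(src, dst)
```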
And S2100, determining the corresponding relation between each corner point coordinate and the standard corner point coordinate of the reference object in a preset standard image in the tail image.
And S2110, calculating a homography matrix according to the corresponding relation, the corner point coordinates and the standard corner point coordinates, and taking the homography matrix as a tail coordinate conversion relation.
S2120, according to the head coordinate conversion relation, converting the coordinates of the head point to be measured in the head image into head standard coordinates in the preset standard image.
S2130, converting the coordinates of the tail point to be measured in the tail image into tail standard coordinates in the preset standard image according to the tail coordinate conversion relation.
S2140, calculating the distance between the head standard coordinate and the tail standard coordinate, and taking the calculation result as the length of the object.
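A sketch of S2120-S2140 under the same OpenCV assumption: each point to be measured is converted into the standard image with its homography, and the Euclidean distance from Example One gives the object length; mm_per_px is the assumed physical scale of the standard image.

```python
import cv2
import numpy as np

def measure_length(head_point, tail_point, H_head, H_tail, mm_per_px=1.0):
    """Convert the head/tail points to be measured into standard coordinates
    via their coordinate conversion relations and return the object length
    (S2120-S2140). mm_per_px is the assumed scale of the standard image."""
    hp = np.float32([[head_point]])
    tp = np.float32([[tail_point]])
    head_std = cv2.perspectiveTransform(hp, H_head)[0, 0]
    tail_std = cv2.perspectiveTransform(tp, H_tail)[0, 0]
    return float(np.hypot(*(head_std - tail_std))) * mm_per_px
```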
According to the technical scheme of this embodiment of the invention, a head image and a tail image of the object to be measured are shot, the points to be measured are obtained from the head image and the tail image, a head coordinate conversion relation and a tail coordinate conversion relation are calculated from the corner point coordinates of the reference object in the head image and the tail image and the standard corner point coordinates of the reference object, the point to be measured in the head image is converted into a head standard coordinate according to the head coordinate conversion relation, and the point to be measured in the tail image is converted into a tail standard coordinate according to the tail coordinate conversion relation. The distance between the head standard coordinate and the tail standard coordinate is taken as the length of the object to be measured. This solves the problems of large measurement error and low identification precision when identifying the length of an object in the prior art, identifies the length of the object accurately and quickly, and reduces the error of the identification result.
EXAMPLE III
Fig. 4 is a schematic structural diagram of an object length recognition apparatus according to a third embodiment of the present invention, where the apparatus includes: an object-to-be-measured image acquisition module 310, a point-to-be-measured acquisition module 320, a coordinate conversion relationship acquisition module 330, and an object length identification module 340. Wherein:
an object-to-be-measured image acquiring module 310, configured to acquire a head image and a tail image of an object to be measured, where the head image and the tail image include a same reference object;
a point-to-be-measured acquisition module 320, configured to acquire a head point-to-be-measured corresponding to the head image and a tail point-to-be-measured corresponding to the tail image;
a coordinate transformation relation obtaining module 330, configured to calculate a head coordinate transformation relation and a tail coordinate transformation relation according to the reference objects in the head image and the tail image;
and an object length identification module 340, configured to calculate an object length of the object to be measured according to the head point to be measured, the tail point to be measured, the head coordinate conversion relationship, and the tail coordinate conversion relationship.
According to the technical scheme of this embodiment of the invention, a head image and a tail image of the object to be measured are shot, the points to be measured are obtained from the head image and the tail image, a head coordinate conversion relation and a tail coordinate conversion relation are calculated from the corner point coordinates of the reference object in the head image and the tail image and the standard corner point coordinates of the reference object, and the length of the object to be measured is calculated from the head coordinate conversion relation, the tail coordinate conversion relation and the points to be measured in the head image and the tail image. This solves the problems of large measurement error and low identification precision when identifying the length of an object in the prior art, identifies the length of the object accurately and quickly, and reduces the error of the identification result.
On the basis of the above embodiment, the head image and the tail image are obtained by shooting the object to be measured with the front surface facing the image shooting device, and the imaging surface of the image shooting device is parallel to the placing surface of the object to be measured.
On the basis of the above embodiment, the head point to be measured is the point at which a straight line drawn from the outermost point of the head of the object to be measured, perpendicular to the plane on which the object is placed, intersects that plane; the tail point to be measured is the point at which a straight line drawn from the outermost point of the tail of the object to be measured, perpendicular to the plane on which the object is placed, intersects that plane.
On the basis of the above embodiment, the coordinate transformation relation obtaining module 330 includes:
and the coordinate conversion relation acquisition unit is used for calculating a head coordinate conversion relation and a tail coordinate conversion relation according to the corner point coordinates of the reference object in the head image and the tail image and the standard corner point coordinates of the reference object.
On the basis of the above embodiment, the apparatus further includes:
a reference corner acquiring module, configured to acquire reference corners of the reference object in the head image and the tail image respectively;
the line frame construction module is used for respectively constructing line frames containing rough contour lines based on every two reference corner points positioned on the same rough contour line;
the actual contour line determining module is used for determining an actual contour line in the rough contour lines contained in each line frame;
and the corner coordinate determination module is used for determining the corner coordinates according to the plurality of actual contour lines.
On the basis of the above embodiment, the actual contour line determining module includes:
and the actual contour line acquisition unit is used for carrying out edge detection on the rough contour line contained in each line frame to obtain the actual contour line.
On the basis of the above embodiment, the corner point coordinate determining module includes:
the middle contour line obtaining unit is used for fitting each actual contour line to obtain a middle contour line;
and the corner coordinate determination unit is used for determining the coordinates of the intersection points between the middle contour lines on the head image as the corner coordinates.
On the basis of the above embodiment, the reference corner point obtaining module includes:
the characteristic point homography matrix calculation unit is used for calculating a characteristic point homography matrix according to the corresponding relation between the characteristic points of the reference object in the head image and the characteristic points in a preset standard image;
and the reference corner point coordinate acquisition unit is used for calculating the reference corner point coordinate of the reference object in the head image according to the feature point homography matrix and the standard corner point coordinate of the reference object in the preset standard image.
On the basis of the above embodiment, the reference corner point obtaining module further includes:
a rough contour acquisition unit, configured to input the head image and the tail image into pre-established detection models, respectively, and detect a contour of the reference object using the detection models to obtain a rough contour;
a reference corner detection unit for detecting a reference corner in the rough contour;
the device, still include:
and the gradient algorithm processing module is used for carrying out gradient algorithm processing on the rough contour line after the edge detection is carried out to obtain an actual contour line.
On the basis of the above embodiment, the coordinate transformation relation obtaining module 330 includes:
the first corresponding relation determining unit is used for determining the corresponding relation between each corner point coordinate and a standard corner point coordinate of the reference object in a preset standard image in the head image;
a head coordinate conversion relation determining unit, configured to calculate a homography matrix according to the correspondence, the corner coordinates, and the standard corner coordinates, and use the homography matrix as a head coordinate conversion relation;
the second corresponding relation determining unit is used for determining the corresponding relation between each corner point coordinate and a standard corner point coordinate of the reference object in a preset standard image in the tail image;
and the tail coordinate conversion relation determining unit is used for calculating a homography matrix according to the corresponding relation, the corner coordinates and the standard corner coordinates, and taking the homography matrix as a tail coordinate conversion relation.
On the basis of the above embodiment, the object length identifying module 340 includes:
the head standard coordinate conversion unit is used for converting the coordinates of the points to be measured of the head in the head image into head standard coordinates in the preset standard image according to the head coordinate conversion relation;
the tail standard coordinate conversion unit is used for converting the coordinates of the tail point to be measured in the tail image into tail standard coordinates in the preset standard image according to the tail coordinate conversion relation;
and the standard coordinate distance calculating unit is used for calculating the distance between the head standard coordinate and the tail standard coordinate and taking the calculation result as the length of the object.
The object length recognition device provided by the embodiment of the invention can execute the object length recognition method provided by any embodiment of the invention, and has corresponding functional modules and beneficial effects of the execution method.
Example four
Fig. 5 is a schematic structural diagram of a terminal device according to a fourth embodiment of the present invention, and as shown in fig. 5, the computer device includes a processor 70, a memory 71, an input device 72, and an output device 73; the number of processors 70 in the computer device may be one or more, and one processor 70 is taken as an example in fig. 5; the processor 70, the memory 71, the input device 72 and the output device 73 in the computer apparatus may be connected by a bus or other means, and the connection by the bus is exemplified in fig. 5.
The memory 71, as a computer-readable storage medium, may be used to store software programs, computer-executable programs and modules, such as the modules corresponding to the object length identification method in the embodiments of the present invention (for example, the object-to-be-measured image acquisition module 310, the point-to-be-measured acquisition module 320, the coordinate conversion relation acquisition module 330 and the object length identification module 340 in the object length identification apparatus). The processor 70 executes the various functional applications and data processing of the computer device by running the software programs, instructions and modules stored in the memory 71, that is, it implements the object length identification method described above. The method comprises the following steps:
acquiring a head image and a tail image of an object to be measured, wherein the head image and the tail image comprise the same reference object;
acquiring a head to-be-measured point corresponding to the head image and a tail to-be-measured point corresponding to the tail image;
calculating a head coordinate conversion relation and a tail coordinate conversion relation according to reference objects in the head image and the tail image;
and calculating the object length of the object to be measured according to the head point to be measured, the tail point to be measured, the head coordinate conversion relation and the tail coordinate conversion relation.
The memory 71 may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function; the storage data area may store data created according to the use of the terminal, and the like. Further, the memory 71 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid state storage device. In some examples, the memory 71 may further include memory located remotely from the processor 70, which may be connected to a computer device over a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The input device 72 may be used to receive input numeric or character information and to generate key signal inputs related to user settings and function control of the terminal device. The output device 73 may include a display device such as a display screen.
Example five
An embodiment of the present invention further provides a storage medium containing computer-executable instructions which, when executed by a computer processor, perform an object length recognition method, the method comprising:
acquiring a head image and a tail image of an object to be measured, wherein the head image and the tail image comprise the same reference object;
acquiring a head to-be-measured point corresponding to the head image and a tail to-be-measured point corresponding to the tail image;
calculating a head coordinate conversion relation and a tail coordinate conversion relation according to reference objects in the head image and the tail image;
and calculating the object length of the object to be measured according to the head point to be measured, the tail point to be measured, the head coordinate conversion relation and the tail coordinate conversion relation.
Of course, in the storage medium containing computer-executable instructions provided by the embodiment of the present invention, the computer-executable instructions are not limited to the method operations described above and may also perform related operations of the object length recognition method provided by any embodiment of the present invention.
From the above description of the embodiments, it will be clear to those skilled in the art that the present invention may be implemented by software together with necessary general-purpose hardware, and may certainly also be implemented by hardware alone, although the former is the preferred implementation in many cases. Based on this understanding, the technical solutions of the present invention may be embodied in the form of a software product, which may be stored in a computer-readable storage medium, such as a floppy disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a flash memory (FLASH), a hard disk, or an optical disk, and which includes several instructions that cause a computer device (which may be a personal computer, a server, or a network device) to execute the methods of the embodiments of the present invention.
It should be noted that, in the above embodiment of the object length recognition apparatus, the included units and modules are divided only according to functional logic; the division is not limited to the above as long as the corresponding functions can be implemented. In addition, the specific names of the functional units are used only for ease of distinguishing them from one another and are not intended to limit the protection scope of the present invention.
It should also be noted that the foregoing describes only the preferred embodiments of the present invention and the technical principles employed. Those skilled in the art will understand that the present invention is not limited to the particular embodiments described herein, and that various obvious changes, rearrangements, and substitutions may be made without departing from the scope of the invention. Therefore, although the present invention has been described in some detail through the above embodiments, it is not limited to them and may include other equivalent embodiments without departing from its spirit; the scope of the present invention is determined by the appended claims.

Claims (14)

1. An object length recognition method, comprising:
acquiring a head image and a tail image of an object to be measured, wherein the head image and the tail image comprise the same reference object;
acquiring a head to-be-measured point corresponding to the head image and a tail to-be-measured point corresponding to the tail image;
calculating a head coordinate conversion relation and a tail coordinate conversion relation according to reference objects in the head image and the tail image;
and calculating the object length of the object to be measured according to the head point to be measured, the tail point to be measured, the head coordinate conversion relation and the tail coordinate conversion relation.
2. The method according to claim 1, characterized in that the head image and the tail image are captured with the front side of the object to be measured facing an image capturing device, an imaging plane of which is parallel to the plane on which the object to be measured is placed.
3. The method of claim 2, wherein:
the head point to be measured is the point at which a straight line, drawn from the outermost point of the head of the object to be measured in the direction perpendicular to the plane on which the object to be measured is placed, intersects that plane;
the tail point to be measured is the point at which a straight line, drawn from the outermost point of the tail of the object to be measured in the direction perpendicular to the plane on which the object to be measured is placed, intersects that plane.
4. The method of claim 1, wherein calculating a head coordinate conversion relation and a tail coordinate conversion relation according to reference objects in the head image and the tail image comprises:
and calculating a head coordinate conversion relation and a tail coordinate conversion relation according to the corner point coordinates of the reference object in the head image and the tail image and the standard corner point coordinates of the reference object.
5. The method according to claim 4, further comprising, before calculating the head coordinate conversion relation and the tail coordinate conversion relation according to the corner point coordinates of the reference object in the head image and the tail image and the standard corner point coordinates of the reference object:
respectively acquiring reference corner points of the reference object in the head image and the tail image;
respectively constructing line frames containing rough contour lines based on every two reference corner points located on the same rough contour line;
determining an actual contour line in the rough contour lines contained in each line frame;
and determining the coordinates of the corner points according to a plurality of actual contour lines.
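As an illustration only, the "line frame" of claim 5 can be read as a padded box around the segment joining two reference corner points that lie on the same rough contour line; the Python sketch below is one such reading, with the padding value chosen arbitrarily.

```python
# One possible reading of a "line frame": a padded axis-aligned box around the
# segment joining two reference corner points on the same rough contour line.
def line_frame(p1, p2, pad=10):
    (x1, y1), (x2, y2) = p1, p2
    x_min, y_min = min(x1, x2) - pad, min(y1, y2) - pad
    x_max, y_max = max(x1, x2) + pad, max(y1, y2) + pad
    return int(x_min), int(y_min), int(x_max), int(y_max)   # (x0, y0, x1, y1)
```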
6. The method of claim 5, wherein determining an actual contour line in the rough contour lines contained in each line frame comprises:
and carrying out edge detection on the rough contour line contained in each line frame to obtain an actual contour line.
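A minimal sketch of the edge detection in claim 6, assuming a grayscale image and the line-frame box format used above; the Canny thresholds are illustrative values, not taken from the disclosure.

```python
import cv2

def contour_edges_in_frame(gray_img, frame):
    x0, y0, x1, y1 = frame
    roi = gray_img[y0:y1, x0:x1]          # restrict detection to the line frame
    return cv2.Canny(roi, 50, 150)        # binary edge map of the rough contour region
```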
7. The method of claim 5, wherein determining the coordinates of the corner points according to a plurality of the actual contour lines comprises:
fitting each actual contour line to obtain an intermediate contour line;
and determining the coordinates of the intersection points between the intermediate contour lines in the head image as the coordinates of the corner points.
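Claim 7 can be sketched as a least-squares line fit per actual contour line followed by pairwise line intersection; cv2.fitLine returns a direction vector and a point on the fitted line.

```python
import cv2
import numpy as np

def fit_contour_line(points):
    # points: Nx2 edge coordinates belonging to one actual contour line
    vx, vy, x0, y0 = cv2.fitLine(np.float32(points), cv2.DIST_L2, 0, 0.01, 0.01).ravel()
    return (float(x0), float(y0)), (float(vx), float(vy))

def line_intersection(line_a, line_b):
    (xa, ya), (dxa, dya) = line_a
    (xb, yb), (dxb, dyb) = line_b
    # Solve xa + t*dxa = xb + s*dxb and ya + t*dya = yb + s*dyb for t.
    A = np.array([[dxa, -dxb], [dya, -dyb]], dtype=np.float64)
    b = np.array([xb - xa, yb - ya], dtype=np.float64)
    t = np.linalg.solve(A, b)[0]
    return xa + t * dxa, ya + t * dya     # corner point coordinates
```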
8. The method of claim 6, wherein obtaining the reference corner points of the reference object in the head image comprises:
calculating a feature point homography matrix according to the correspondence between the feature points of the reference object in the head image and the feature points in a preset standard image;
and calculating the reference corner point coordinates of the reference object in the head image according to the feature point homography matrix and the standard corner point coordinates of the reference object in the preset standard image.
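A hedged sketch of claim 8 using ORB features and RANSAC (one possible choice; the claim does not name a specific feature detector): match feature points between the preset standard image and the head image, estimate the feature point homography, and map the known standard corner coordinates into the head image.

```python
import cv2
import numpy as np

def reference_corners_from_features(head_img, std_img, std_corners):
    orb = cv2.ORB_create()
    kp_std, des_std = orb.detectAndCompute(std_img, None)
    kp_head, des_head = orb.detectAndCompute(head_img, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des_std, des_head)

    src = np.float32([kp_std[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp_head[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)   # standard image -> head image

    corners = np.float32(std_corners).reshape(-1, 1, 2)
    return cv2.perspectiveTransform(corners, H).reshape(-1, 2)  # reference corners in head image
```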
9. The method according to claim 6, wherein obtaining the reference corner points of the reference object in the head image and the tail image respectively comprises:
inputting the head image and the tail image into a pre-established detection model respectively, and detecting the outline of the reference object by using the detection model to obtain a rough outline;
detecting a reference corner point in the rough contour;
after performing edge detection on the rough contour line contained in each line frame, the method further comprises: performing gradient algorithm processing on the edge-detected rough contour line to obtain an actual contour line.
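For claim 9, the sketch below assumes a hypothetical detection_model(image) that returns a binary mask of the reference object; the rough contour comes from the mask, a polygonal approximation supplies candidate reference corner points, and a Sobel gradient-magnitude test is one illustrative way to realize the gradient algorithm processing of the edge-detected contour.

```python
import cv2
import numpy as np

def rough_contour_and_corners(image, detection_model):
    mask = detection_model(image)                      # hypothetical model: uint8 mask (0/255)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)  # OpenCV 4.x API
    rough = max(contours, key=cv2.contourArea)         # rough contour of the reference object
    eps = 0.02 * cv2.arcLength(rough, True)
    corners = cv2.approxPolyDP(rough, eps, True).reshape(-1, 2)
    return rough, corners

def refine_edges_by_gradient(gray_roi, edge_map, thresh=30.0):
    # Keep only edge pixels whose intensity-gradient magnitude exceeds `thresh`
    # (gray_roi and edge_map cover the same line-frame region).
    gx = cv2.Sobel(gray_roi, cv2.CV_32F, 1, 0)
    gy = cv2.Sobel(gray_roi, cv2.CV_32F, 0, 1)
    refined = edge_map.copy()
    refined[cv2.magnitude(gx, gy) < thresh] = 0
    return refined
```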
10. The method according to claim 3, wherein calculating a head coordinate transformation relation based on the coordinates of the corner points of the reference object in the head image and the coordinates of the standard corner points of the reference object comprises:
in the head image, determining the corresponding relation between each corner point coordinate and the standard corner point coordinate of the reference object in a preset standard image;
calculating a homography matrix according to the corresponding relation, the corner coordinates and the standard corner coordinates, and taking the homography matrix as a head coordinate conversion relation;
and calculating a tail coordinate conversion relation according to the corner coordinates of the reference object in the tail image and the standard corner coordinates of the reference object comprises:
in the tail image, determining the corresponding relation between each corner point coordinate and the standard corner point coordinate of the reference object in a preset standard image;
and calculating a homography matrix according to the corresponding relation, the corner coordinates and the standard corner coordinates, and taking the homography matrix as a tail coordinate conversion relation.
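A minimal sketch of claim 10: once the corner coordinates are paired with the standard corner coordinates in a fixed order, the head (or tail) coordinate conversion relation can be estimated as a homography; with exactly four corners, cv2.getPerspectiveTransform would serve equally well.

```python
import cv2
import numpy as np

def coordinate_conversion_relation(corner_coords, standard_corner_coords):
    src = np.float32(corner_coords).reshape(-1, 1, 2)           # corners in head/tail image
    dst = np.float32(standard_corner_coords).reshape(-1, 1, 2)  # corners in preset standard image
    H, _ = cv2.findHomography(src, dst)                         # image -> standard image
    return H
```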
11. The method according to claim 3, wherein calculating the object length of the object to be measured from the head point to be measured, the tail point to be measured, the head coordinate conversion relationship, and the tail coordinate conversion relationship includes:
converting the coordinates of the head point to be measured in the head image into head standard coordinates in the preset standard image according to the head coordinate conversion relation;
converting the coordinates of the tail point to be measured in the tail image into tail standard coordinates in the preset standard image according to the tail coordinate conversion relation;
and calculating the distance between the head standard coordinate and the tail standard coordinate, and taking the calculation result as the length of the object.
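Written out, the conversion of claim 11 maps each point to be measured with its homography in homogeneous coordinates and then takes the Euclidean distance between the two converted points (superscripts h and t denote head and tail):

$$
\begin{pmatrix} x' \\ y' \\ w \end{pmatrix} = H \begin{pmatrix} x \\ y \\ 1 \end{pmatrix},
\qquad (x_{\mathrm{std}},\, y_{\mathrm{std}}) = \Bigl(\tfrac{x'}{w},\, \tfrac{y'}{w}\Bigr),
\qquad L = \sqrt{\bigl(x^{h}_{\mathrm{std}} - x^{t}_{\mathrm{std}}\bigr)^{2} + \bigl(y^{h}_{\mathrm{std}} - y^{t}_{\mathrm{std}}\bigr)^{2}}.
$$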
12. An object length recognition apparatus, comprising:
the object-to-be-measured image acquisition module is used for acquiring a head image and a tail image of an object to be measured, wherein the head image and the tail image comprise the same reference object;
the point-to-be-measured acquisition module is used for acquiring a head point to be measured corresponding to the head image and a tail point to be measured corresponding to the tail image;
the coordinate conversion relation acquisition module is used for calculating a head coordinate conversion relation and a tail coordinate conversion relation according to reference objects in the head image and the tail image;
and the object length identification module is used for calculating the object length of the object to be measured according to the head point to be measured, the tail point to be measured, the head coordinate conversion relation and the tail coordinate conversion relation.
13. A terminal device, the terminal device comprising:
one or more processors;
storage means for storing one or more programs;
when the one or more programs are executed by the one or more processors, the one or more programs cause the one or more processors to implement the object length recognition method according to any one of claims 1-11.
14. A storage medium containing computer-executable instructions for performing the object length recognition method of any one of claims 1-11 when executed by a computer processor.
CN202010150504.7A 2020-03-06 2020-03-06 Object length identification method and device, terminal equipment and storage medium Pending CN111307039A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010150504.7A CN111307039A (en) 2020-03-06 2020-03-06 Object length identification method and device, terminal equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010150504.7A CN111307039A (en) 2020-03-06 2020-03-06 Object length identification method and device, terminal equipment and storage medium

Publications (1)

Publication Number Publication Date
CN111307039A true CN111307039A (en) 2020-06-19

Family

ID=71154912

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010150504.7A Pending CN111307039A (en) 2020-03-06 2020-03-06 Object length identification method and device, terminal equipment and storage medium

Country Status (1)

Country Link
CN (1) CN111307039A (en)

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH0474908A (en) * 1990-07-17 1992-03-10 Hitachi Constr Mach Co Ltd Length measuring apparatus
CN102102978A (en) * 2009-12-16 2011-06-22 Tcl集团股份有限公司 Handheld terminal, and method and device for measuring object by using same
CN102831578A (en) * 2011-06-15 2012-12-19 富士通株式会社 Image processing method and image processing device
CN104813340A (en) * 2012-09-05 2015-07-29 体通有限公司 System and method for deriving accurate body size measures from a sequence of 2d images
CN103063143A (en) * 2012-12-03 2013-04-24 苏州佳世达电通有限公司 Measuring method and system based on image identification
CN104729417A (en) * 2013-12-18 2015-06-24 卡巴股份公司 Distance Determination Of Images With Reference Object
CN105928598A (en) * 2016-04-20 2016-09-07 上海斐讯数据通信技术有限公司 Method and system for measuring object mass based on photographing

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112017232A (en) * 2020-08-31 2020-12-01 浙江水晶光电科技股份有限公司 Method, device and equipment for positioning circular pattern in image
CN112037196A (en) * 2020-08-31 2020-12-04 中冶赛迪重庆信息技术有限公司 Cooling bed multiple-length detection method, system and medium
CN112017232B (en) * 2020-08-31 2024-03-15 浙江水晶光电科技股份有限公司 Positioning method, device and equipment for circular patterns in image
CN112649446A (en) * 2020-11-12 2021-04-13 巨轮(广州)智能装备有限公司 FPC detection method, laminating method and device
CN112649446B (en) * 2020-11-12 2024-02-13 巨轮(广州)智能装备有限公司 FPC detection method, bonding method and device
CN113188465A (en) * 2021-04-21 2021-07-30 中铁第四勘察设计院集团有限公司 Drilling hole depth identification method and device based on video learning
CN113592833A (en) * 2021-08-05 2021-11-02 深圳潜行创新科技有限公司 Object measuring method, device, equipment and storage medium
CN113725473A (en) * 2021-11-04 2021-11-30 广州市易鸿智能装备有限公司 Lithium battery winding tab dislocation real-time correction system and method
CN113725473B (en) * 2021-11-04 2022-02-08 广州市易鸿智能装备有限公司 Lithium battery winding tab dislocation real-time correction system and method
CN114545863A (en) * 2022-03-07 2022-05-27 中南大学 Track smoothing method for numerical control machining based on B spline curve fitting
CN114545863B (en) * 2022-03-07 2024-02-13 中南大学 Trajectory smoothing method for numerical control machining based on B spline curve fitting
WO2024069513A1 (en) * 2022-09-30 2024-04-04 Salvagnini Italia S.P.A. Machine and method for working and/or moving metal plates or sheets comprising edge recognition means

Similar Documents

Publication Publication Date Title
CN111307039A (en) Object length identification method and device, terminal equipment and storage medium
WO2019200837A1 (en) Method and system for measuring volume of parcel, and storage medium and mobile terminal
CN107766855B (en) Chessman positioning method and system based on machine vision, storage medium and robot
CN111968172B (en) Method and system for measuring volume of stock ground material
US11087169B2 (en) Image processing apparatus that identifies object and method therefor
EP3358298B1 (en) Building height calculation method and apparatus, and storage medium
JP6176598B2 (en) Dimension measurement program, dimension measurement apparatus, and dimension measurement method
CN108229475B (en) Vehicle tracking method, system, computer device and readable storage medium
CN108986152B (en) Foreign matter detection method and device based on difference image
WO2021136386A1 (en) Data processing method, terminal, and server
KR102073468B1 (en) System and method for scoring color candidate poses against a color image in a vision system
CN111220235B (en) Water level monitoring method and device
CN110926330A (en) Image processing apparatus, image processing method, and program
JP4058293B2 (en) Generation method of high-precision city model using laser scanner data and aerial photograph image, generation system of high-precision city model, and program for generation of high-precision city model
CN111161339B (en) Distance measuring method, device, equipment and computer readable medium
CN115797359B (en) Detection method, equipment and storage medium based on solder paste on circuit board
CN113688846A (en) Object size recognition method, readable storage medium, and object size recognition system
KR101931564B1 (en) Device and method for processing image using image registration
CN111387987A (en) Height measuring method, device, equipment and storage medium based on image recognition
JP7298687B2 (en) Object recognition device and object recognition method
CN110210291B (en) Guide vane parameter acquisition method and device, electronic equipment and storage medium
US9098746B2 (en) Building texture extracting apparatus and method thereof
JP2006113832A (en) Stereoscopic image processor and program
JP2008224323A (en) Stereoscopic photograph measuring instrument, stereoscopic photograph measuring method, and stereoscopic photograph measuring program
KR101574195B1 (en) Auto Calibration Method for Virtual Camera based on Mobile Platform

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 519085 Building 8, No.1, Tangjiawan Harbin Institute of technology, high tech Zone, Zhuhai City, Guangdong Province

Applicant after: Zhuhai necessary Industrial Technology Co.,Ltd.

Address before: 519085 Building 8, No.1, Tangjiawan Harbin Institute of technology, high tech Zone, Zhuhai City, Guangdong Province

Applicant before: ZHUHAI SUIBIAN TECHNOLOGY Co.,Ltd.

RJ01 Rejection of invention patent application after publication

Application publication date: 20200619
