CN112150527B - Measurement method and device, electronic equipment and storage medium - Google Patents

Measurement method and device, electronic equipment and storage medium

Info

Publication number
CN112150527B
CN112150527B (application CN202010899124.3A)
Authority
CN
China
Prior art keywords
point
measured
edge
corner point
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010899124.3A
Other languages
Chinese (zh)
Other versions
CN112150527A
Inventor
薛地
周杨
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen TetrasAI Technology Co Ltd
Original Assignee
Shenzhen TetrasAI Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen TetrasAI Technology Co Ltd filed Critical Shenzhen TetrasAI Technology Co Ltd
Priority to CN202010899124.3A
Publication of CN112150527A
Application granted
Publication of CN112150527B
Legal status: Active

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/50: Depth or shape recovery
    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01B: MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B 11/00: Measuring arrangements characterised by the use of optical techniques
    • G01B 11/02: Measuring arrangements characterised by the use of optical techniques for measuring length, width or thickness
    • G01B 11/022: Measuring arrangements characterised by the use of optical techniques for measuring length, width or thickness by means of tv-camera scanning
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/10: Segmentation; Edge detection
    • G06T 7/13: Edge detection
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/50: Depth or shape recovery
    • G06T 7/55: Depth or shape recovery from multiple images
    • G06T 7/564: Depth or shape recovery from multiple images from contours
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/60: Analysis of geometric attributes
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/90: Determination of colour characteristics

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Theoretical Computer Science (AREA)
  • Geometry (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The application discloses a measurement method and apparatus, an electronic device, and a storage medium. The method comprises the following steps: acquiring a two-dimensional image containing an object to be measured and a depth map of the two-dimensional image, the two-dimensional image being acquired by a two-dimensional imaging device; and obtaining the distance between a first corner point of the object to be measured and a second corner point of the object to be measured according to the two-dimensional image and the depth map.

Description

Measurement method and device, electronic equipment and storage medium
Technical Field
The present application relates to the field of computer vision, and in particular, to a measurement method and apparatus, an electronic device, and a storage medium.
Background
In daily life, people often need to measure the size of an object. Conventionally, one uses a length-measuring tool (e.g., a tape measure, a ruler, or a vernier caliper) to measure the object. However, this conventional approach is time-consuming and labor-intensive for the measurer and has low measurement efficiency. Therefore, how to measure the size of an object efficiently and accurately is of great importance.
Disclosure of Invention
The application provides a measuring method and device, electronic equipment and a storage medium.
In a first aspect, there is provided a measurement method comprising:
Acquiring a two-dimensional image containing an object to be measured and a depth map of the two-dimensional image; the two-dimensional image is acquired by a two-dimensional imaging device;
and obtaining the distance between a first corner point of the object to be measured and a second corner point of the object to be measured according to the two-dimensional image and the depth map.
In combination with any one of the embodiments of the present application, the obtaining, according to the two-dimensional image and the depth map, a distance between the first corner of the object to be measured and the second corner of the object to be measured includes:
Obtaining a first image point of a first corner point of the object to be measured under a camera coordinate system of the two-dimensional imaging device and a second image point of a second corner point of the object to be measured under the camera coordinate system according to the two-dimensional image and the depth map;
and determining the distance between the first image point and the second image point as the distance between the first corner point and the second corner point.
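The two steps above can be sketched with the standard pinhole camera model: a corner pixel together with its depth value is back-projected into the camera coordinate system, and the corner distance is the Euclidean distance between the two image points. The intrinsic matrix and the corner observations below are hypothetical values for illustration, not taken from the patent.

```python
import numpy as np

def backproject(pixel, depth, K):
    """Map a pixel (u, v) with depth z to a 3-D image point in the camera frame."""
    u, v = pixel
    fx, fy = K[0, 0], K[1, 1]
    cx, cy = K[0, 2], K[1, 2]
    return np.array([(u - cx) * depth / fx, (v - cy) * depth / fy, depth])

# Hypothetical intrinsics (focal length 800 px, principal point at image center).
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])

first_image_point = backproject((100, 120), 1.5, K)   # first corner point
second_image_point = backproject((400, 130), 1.6, K)  # second corner point
corner_distance = np.linalg.norm(first_image_point - second_image_point)
```

Because the depth map supplies metric z values, the resulting distance is directly in physical units (here, metres).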
In combination with any one of the embodiments of the present application, the object to be measured includes a surface to be measured; the obtaining, according to the two-dimensional image and the depth map, a first image point of a first corner point of the object to be measured in a camera coordinate system of the two-dimensional imaging device, and a second image point of a second corner point of the object to be measured in the camera coordinate system includes:
obtaining a projection plane of the surface to be measured under a camera coordinate system of the two-dimensional imaging device according to coordinates of at least three first points in the surface to be measured in the two-dimensional image and depth values of the at least three first points obtained from the depth map;
The first image point and the second image point are determined from the projection plane.
In combination with any one of the embodiments of the present application, the determining the first image point and the second image point from the projection plane includes:
Acquiring a first straight line passing through the first corner point and the optical center of the two-dimensional imaging device, and a second straight line passing through the second corner point and the optical center;
determining an intersection point between the first straight line and the projection plane to obtain the first image point, and determining an intersection point between the second straight line and the projection plane to obtain the second image point.
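With the optical center placed at the origin of the camera frame, the straight line through a corner pixel is the ray t*d with direction d = K^-1 (u, v, 1)^T, and the image point is its intersection with the fitted projection plane n . x = c. A minimal sketch under that convention (the intrinsics and the plane are hypothetical):

```python
import numpy as np

def pixel_ray(pixel, K):
    """Direction of the straight line through the optical center (origin) and a pixel."""
    u, v = pixel
    return np.linalg.solve(K, np.array([u, v, 1.0]))

def intersect_plane(direction, n, c):
    """Intersection of the ray x = t * direction with the plane n . x = c."""
    t = c / np.dot(n, direction)
    return t * direction

K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
n, c = np.array([0.0, 0.0, 1.0]), 2.0  # hypothetical projection plane z = 2
first_image_point = intersect_plane(pixel_ray((100, 120), K), n, c)
```

The same two calls with the second corner pixel give the second image point; degenerate rays parallel to the plane (n . d = 0) would need to be rejected in practice.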
In combination with any one of the embodiments of the present application, the obtaining a first straight line passing through the first corner and the optical center of the two-dimensional imaging device, and a second straight line passing through the second corner and the optical center includes:
Acquiring a first coordinate of the first corner point under an image coordinate system of the two-dimensional imaging device, a second coordinate of the second corner point under the image coordinate system and a third coordinate of an optical center of the two-dimensional imaging device under the camera coordinate system;
Obtaining the first straight line according to the first coordinate and the third coordinate;
and obtaining the second straight line according to the second coordinate and the third coordinate.
In combination with any one of the embodiments of the present application, the obtaining a projection plane of the surface to be measured under a camera coordinate system of the two-dimensional imaging device according to coordinates of at least three first points in the surface to be measured in the two-dimensional image and depth values of the at least three first points obtained from the depth map includes:
Acquiring internal parameters of the two-dimensional imaging device and coordinates of the at least three first points under a pixel coordinate system of the two-dimensional image;
Obtaining image points of the at least three first points under the camera coordinate system according to the internal parameters, the depth values of the at least three first points and the coordinates of the at least three first points under the pixel coordinate system;
and carrying out plane fitting processing on the image points of the at least three first points under the camera coordinate system to obtain the projection plane.
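One common way to realize the plane-fitting step is a least-squares fit over the back-projected image points, for example via SVD of the centered point cloud; the claim's three-point minimum is the exact (degenerate) case. This is a sketch of one possible fitting method, not necessarily the one used in the patent:

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane through 3-D points; returns (unit normal n, offset c) with n . x = c."""
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    # The right singular vector of least variance is the plane normal.
    _, _, vt = np.linalg.svd(pts - centroid)
    n = vt[-1]
    return n, float(n.dot(centroid))

# Image points of the at-least-three first points, here all lying on z = 2.
n, c = fit_plane([(0.0, 0.0, 2.0), (1.0, 0.0, 2.0),
                  (0.0, 1.0, 2.0), (1.0, 1.0, 2.0)])
```

Using more than three points makes the fit robust to depth-map noise, which is why the claim reads "at least three".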
In combination with any one of the embodiments of the present application, the shape of the object to be measured is a regular shape or a quasi-regular shape, and the surface to be measured is the upper bottom surface of the object to be measured; the method further comprises:
Acquiring coordinates of a third corner point under the pixel coordinate system, and acquiring a depth value of the third corner point from the depth map; the connecting line between the third corner point and the first corner point is the height edge of the object to be measured;
obtaining a third image point of the third corner point under the camera coordinate system according to the internal parameter, the depth value of the third corner point and the coordinate of the third corner point under the pixel coordinate system;
A distance between the first image point and the third image point is determined as the height of the object to be measured.
In combination with any one of the embodiments of the present application, the obtaining, according to the two-dimensional image and the depth map, a distance between the first corner of the object to be measured and the second corner of the object to be measured includes:
Acquiring a first depth value of a first line segment from the depth map; the two endpoints of the first line segment are the first corner point and the second corner point respectively; the first depth value is positively correlated with the depth value of the first corner point, and/or the first depth value is positively correlated with the depth value of the second corner point;
obtaining a mapping relation between the depth value and the transmission factor; the transmission factor characterizes the conversion relation between the size in the image and the real size; the transmission factor is positively correlated with the depth value;
obtaining a transmission factor of the first line segment according to the mapping relation and the first depth value;
And obtaining the distance between the first corner point and the second corner point according to the transmission factor of the first line segment and the length of the first line segment.
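For a roughly fronto-parallel segment, the pinhole model supplies one concrete form of such a mapping: real length is approximately pixel length times depth divided by focal length, so the factor depth/f grows with depth, matching the positive correlation stated above. This is an illustrative sketch of the idea, not the patent's actual mapping:

```python
def transmission_factor(depth, focal_length_px):
    """Pinhole-model scale: metres of object per image pixel at the given depth."""
    return depth / focal_length_px

def segment_length(pixel_length, depth, focal_length_px):
    """Physical length of a segment from its pixel length and transmission factor."""
    return pixel_length * transmission_factor(depth, focal_length_px)

# A 200-pixel first line segment at 2 m depth, 800-pixel focal length:
length_m = segment_length(200, 2.0, 800.0)  # 0.5 m
```

In practice the mapping could also be a calibrated lookup table; the essential property is only that the factor increases with the depth value.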
In combination with any one of the embodiments of the present application, the surface of the object to be measured includes a non-body structure, and the method further includes:
Performing edge detection processing on the two-dimensional image to obtain a first edge and a second edge; the first edge and the second edge are positioned in the same surface of the object to be measured, and the gradient of the first edge and the gradient of the second edge exceed an edge threshold;
a distance between the first edge and the second edge is determined as a dimension of the non-body structure.
In combination with any one of the embodiments of the present application, after performing edge detection processing on the two-dimensional image to obtain a first edge and a second edge, and before determining a distance between the first edge and the second edge as a dimension of the non-body structure, the method further includes:
respectively performing curvature detection processing on the first edge and the second edge to obtain a first curvature of the first edge and a second curvature of the second edge;
the determining a distance between the first edge and the second edge as the dimension of the non-body structure includes:
In the event that both the first curvature and the second curvature exceed a curvature threshold, a distance between the first edge and the second edge is determined as a dimension of the non-body structure.
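The curvature check can be sketched as a discrete turning-angle measure along each detected edge polyline: only when both edges bend strongly enough (e.g. the rounded walls of a groove or fillet) is their separation taken as the non-body structure's size. A possible realization, with hypothetical threshold values:

```python
import numpy as np

def mean_curvature(polyline):
    """Mean discrete curvature of a polyline: total turning angle per unit arc length."""
    pts = np.asarray(polyline, dtype=float)
    v = np.diff(pts, axis=0)                      # chord vectors
    seg = np.linalg.norm(v, axis=1)               # chord lengths
    cos_a = np.clip((v[:-1] * v[1:]).sum(axis=1) / (seg[:-1] * seg[1:]), -1.0, 1.0)
    return float(np.arccos(cos_a).sum() / seg.sum())

# A straight edge turns nowhere; a quarter circle of radius 1 has curvature near 1.
straight = [(x, 0.0) for x in range(10)]
arc = [(np.cos(t), np.sin(t)) for t in np.linspace(0.0, np.pi / 2, 20)]
curv_straight = mean_curvature(straight)
curv_arc = mean_curvature(arc)

CURVATURE_THRESHOLD = 0.3  # hypothetical
is_non_body_edge = curv_arc > CURVATURE_THRESHOLD
```

Gating on curvature in this way filters out the straight outer contours of the object so that only the curved edges bounding the non-body structure are measured.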
In combination with any of the embodiments of the present application, the method further comprises:
Acquiring dynamic display information;
And carrying out dynamic display processing on the first corner point and the second corner point according to the dynamic display information, and displaying the first corner point and the second corner point after the dynamic display processing.
In combination with any one of the embodiments of the present application, the dynamic display information includes an interface element; the dynamic display processing is performed on the first corner point and the second corner point according to the dynamic display information, and the first corner point and the second corner point after the dynamic display processing are displayed, including:
And moving the interface element from the first corner point to the second corner point.
In combination with any one of the embodiments of the present application, the moving the scroll bar from the first corner point to the second corner point includes:
Obtaining a moving speed of the scroll bar according to the distance between the first corner point and the second corner point; the moving speed is inversely related to the distance between the first corner point and the second corner point;
and moving the scroll bar from the first corner point to the second corner point at the moving speed.
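One way to realize the inverse relation is a reciprocal mapping clamped to a minimum speed; the constants here are illustrative only and do not come from the patent:

```python
def scroll_speed(corner_distance, k=1.0, min_speed=0.05):
    """Moving speed inversely related to the corner distance, clamped from below."""
    return max(k / corner_distance, min_speed)
```

With this mapping a longer measured edge animates more slowly, which keeps the traversal time of the interface element roughly proportional to the square of the edge length.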
In combination with any one of the embodiments of the present application, the two-dimensional imaging device includes an RGB camera; the RGB camera belongs to a terminal; the terminal further comprises a depth camera; the obtaining a two-dimensional image containing an object to be measured and a depth map of the two-dimensional image comprises:
And under the condition that a measurement instruction aiming at the object to be measured is detected, the terminal respectively uses the RGB camera and the depth camera to shoot the object to be measured to obtain the two-dimensional image and the depth map.
In a second aspect, there is provided a measurement device, the device comprising:
an acquisition unit configured to acquire a two-dimensional image containing an object to be measured and a depth map of the two-dimensional image; the two-dimensional image is acquired by a two-dimensional imaging device;
and the first processing unit is used for obtaining the distance between the first corner point of the object to be measured and the second corner point of the object to be measured according to the two-dimensional image and the depth map.
In combination with any one of the embodiments of the present application, the first processing unit is configured to:
Obtaining a first image point of a first corner point of the object to be measured under a camera coordinate system of the two-dimensional imaging device and a second image point of a second corner point of the object to be measured under the camera coordinate system according to the two-dimensional image and the depth map;
and determining the distance between the first image point and the second image point as the distance between the first corner point and the second corner point.
In combination with any one of the embodiments of the present application, the object to be measured includes a surface to be measured; the first processing unit is used for:
obtaining a projection plane of the surface to be measured under a camera coordinate system of the two-dimensional imaging device according to coordinates of at least three first points in the surface to be measured in the two-dimensional image and depth values of the at least three first points obtained from the depth map;
The first image point and the second image point are determined from the projection plane.
In combination with any one of the embodiments of the present application, the first processing unit is configured to:
Acquiring a first straight line passing through the first corner point and the optical center of the two-dimensional imaging device, and a second straight line passing through the second corner point and the optical center;
determining an intersection point between the first straight line and the projection plane to obtain the first image point, and determining an intersection point between the second straight line and the projection plane to obtain the second image point.
In combination with any one of the embodiments of the present application, the first processing unit is configured to:
Acquiring a first coordinate of the first corner point under an image coordinate system of the two-dimensional imaging device, a second coordinate of the second corner point under the image coordinate system and a third coordinate of an optical center of the two-dimensional imaging device under the camera coordinate system;
Obtaining the first straight line according to the first coordinate and the third coordinate;
and obtaining the second straight line according to the second coordinate and the third coordinate.
In combination with any one of the embodiments of the present application, the first processing unit is configured to:
Acquiring internal parameters of the two-dimensional imaging device and coordinates of the at least three first points under a pixel coordinate system of the two-dimensional image;
Obtaining image points of the at least three first points under the camera coordinate system according to the internal parameters, the depth values of the at least three first points and the coordinates of the at least three first points under the pixel coordinate system;
and carrying out plane fitting processing on the image points of the at least three first points under the camera coordinate system to obtain the projection plane.
In combination with any one of the embodiments of the present application, the shape of the object to be measured is a regular shape or a quasi-regular shape, and the surface to be measured is an upper bottom surface of the object to be measured;
The acquisition unit is used for:
Acquiring coordinates of the third corner point under the pixel coordinate system, and acquiring a depth value of the third corner point from the depth map; the connecting line between the third corner point and the first corner point is the height edge of the object to be measured;
the first processing unit is used for:
obtaining a third image point of the third corner point under the camera coordinate system according to the internal parameter, the depth value of the third corner point and the coordinate of the third corner point under the pixel coordinate system;
A distance between the first image point and the third image point is determined as the height of the object to be measured.
In combination with any one of the embodiments of the present application, the first processing unit is configured to:
Acquiring a first depth value of a first line segment from the depth map; the two endpoints of the first line segment are the first corner point and the second corner point respectively; the first depth value is positively correlated with the depth value of the first corner point, and/or the first depth value is positively correlated with the depth value of the second corner point;
obtaining a mapping relation between the depth value and the transmission factor; the transmission factor characterizes the conversion relation between the size in the image and the real size; the transmission factor is positively correlated with the depth value;
obtaining a transmission factor of the first line segment according to the mapping relation and the first depth value;
And obtaining the distance between the first corner point and the second corner point according to the transmission factor of the first line segment and the length of the first line segment.
In combination with any of the embodiments of the present application, the surface of the object to be measured includes a non-body structure, and the apparatus further includes:
The first detection unit is used for carrying out edge detection processing on the two-dimensional image to obtain a first edge and a second edge; the first edge and the second edge are positioned in the same surface of the object to be measured, and the gradient of the first edge and the gradient of the second edge exceed an edge threshold;
And a second processing unit for determining a distance between the first edge and the second edge as a dimension of the non-body structure.
In combination with any of the embodiments of the application, the device further comprises:
The second detection unit is used for performing curvature detection processing on the first edge and the second edge respectively to obtain a first curvature of the first edge and a second curvature of the second edge, after the edge detection processing is performed on the two-dimensional image to obtain the first edge and the second edge, and before the distance between the first edge and the second edge is determined as the size of the non-body structure;
The second processing unit is used for:
In the event that both the first curvature and the second curvature exceed a curvature threshold, a distance between the first edge and the second edge is determined as a dimension of the non-body structure.
In combination with any one of the embodiments of the present application, the obtaining unit is further configured to:
Acquiring dynamic display information;
the measuring device further includes:
And the display unit is used for carrying out dynamic display processing on the first corner point and the second corner point according to the dynamic display information and displaying the first corner point and the second corner point after the dynamic display processing.
In combination with any one of the embodiments of the present application, the dynamic display information includes an interface element; the display unit is used for:
The scroll bar is moved from the first corner point to the second corner point.
In combination with any one of the embodiments of the present application, the display unit is configured to:
Obtaining a moving speed of the scroll bar according to the distance between the first corner point and the second corner point; the moving speed is inversely related to the distance between the first corner point and the second corner point;
And moving the interface element from the first corner point to the second corner point at the moving speed.
In combination with any one of the embodiments of the present application, the two-dimensional imaging device includes an RGB camera; the RGB camera belongs to the measuring device; the measuring device further comprises a depth camera; and under the condition that a measurement instruction aiming at the object to be measured is detected, the measurement device respectively uses the RGB camera and the depth camera to shoot the object to be measured to obtain the two-dimensional image and the depth map.
In a third aspect, a processor is provided, configured to perform the method of the first aspect and any one of its possible implementations described above.
In a fourth aspect, there is provided an electronic device comprising: a processor, transmission means, input means, output means and memory for storing computer program code comprising computer instructions which, when executed by the processor, cause the electronic device to carry out the method as described in the first aspect and any one of its possible implementations.
In a fifth aspect, there is provided a computer readable storage medium having stored therein a computer program comprising program instructions which, when executed by a processor, cause the processor to carry out a method as in the first aspect and any one of its possible implementations.
In a sixth aspect, a computer program product is provided, the computer program product comprising a computer program or instructions which, when run on a computer, cause the computer to perform the method of the first aspect and any one of the possible implementations thereof.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the application as claimed.
Drawings
In order to more clearly describe the technical solutions in the embodiments of the present application or in the background art, the drawings required by the embodiments of the present application or the background art are briefly described below.
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the application and together with the description, serve to explain the principles of the application.
FIG. 1 is a schematic diagram of same-name points provided in an embodiment of the application;
FIG. 2 is a schematic diagram of an image pixel coordinate system according to an embodiment of the present application;
FIG. 3 is a schematic flow chart of a measurement method according to an embodiment of the present application;
FIG. 4 is a schematic diagram of an object to be measured according to an embodiment of the present application;
FIG. 5 is a schematic view of another object to be measured according to an embodiment of the present application;
FIG. 6 is a schematic view of another object to be measured according to an embodiment of the present application;
FIG. 7 is a schematic view of a corner-cut triangle according to an embodiment of the present application;
FIG. 8 is a schematic view of a corner-cut rectangle according to an embodiment of the present application;
FIG. 9 is a schematic view of another corner-cut rectangle according to an embodiment of the present application;
FIG. 10 is a schematic view of a regular-shaped object according to an embodiment of the present application;
FIG. 11 is a schematic structural diagram of a measuring device according to an embodiment of the present application;
Fig. 12 is a schematic hardware structure of a measurement device according to an embodiment of the present application.
Detailed Description
In order that those skilled in the art will better understand the present application, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the accompanying drawings. It is apparent that the described embodiments are only some, not all, of the embodiments of the present application. All other embodiments obtained by those skilled in the art based on the embodiments of the application without inventive effort fall within the scope of the application.
The terms first, second and the like in the description and in the claims and in the above-described figures are used for distinguishing between different objects and not necessarily for describing a sequential or chronological order. Furthermore, the terms "comprise" and "have," as well as any variations thereof, are intended to cover a non-exclusive inclusion. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those listed steps or elements but may include other steps or elements not listed or inherent to such process, method, article, or apparatus.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment may be included in at least one embodiment of the application. The appearances of such phrases in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. Those of skill in the art will explicitly and implicitly appreciate that the embodiments described herein may be combined with other embodiments.
It should be understood that, in the present application, "at least one (item)" means one or more, "a plurality" means two or more, and "at least two (items)" means two or more. The term "and/or" describes an association relationship between associated objects and covers three cases: for example, "A and/or B" may mean that only A exists, only B exists, or both A and B exist, where A and B may be singular or plural. The character "/" generally indicates an "or" relationship between the objects before and after it; it may also represent division in mathematical operations, e.g., a/b = a divided by b, 6/3 = 2. "At least one (item) of the following" and similar expressions refer to any combination of the listed items, including any combination of single items or plural items; for example, "at least one of a, b or c" may represent: a; b; c; "a and b"; "a and c"; "b and c"; or "a and b and c", where a, b, and c may each be singular or plural.
Some concepts that appear below are defined first. In the embodiments of the present application, an object point refers to a point in the real world, a physical distance refers to a distance in the real world, and a physical size refers to a size in the real world.
An object point corresponds to a pixel point in an image. For example, a camera is used to photograph a table, obtaining image A. If the table contains an object point a, and a pixel point b in image A is the image of object point a, then object point a corresponds to pixel point b.
For convenience of description, pixel points of the same object point in different images are referred to as same-name points. As shown in fig. 1, pixel point A and pixel point C are same-name points, and pixel point B and pixel point D are same-name points.
In the embodiments of the present application, positions in an image refer to positions under the pixel coordinate system of the image. The abscissa of the pixel coordinate system represents the column number of a pixel point, and the ordinate represents its row number. For example, in the image shown in fig. 2, a pixel coordinate system XOY is constructed with the upper left corner of the image as the coordinate origin O, the direction parallel to the rows of the image as the X axis, and the direction parallel to the columns of the image as the Y axis. The units of the abscissa and the ordinate are pixel points. For example, in fig. 2, the coordinates of pixel a11 are (1, 1), the coordinates of pixel a23 are (3, 2), the coordinates of pixel a42 are (2, 4), and the coordinates of pixel a34 are (4, 3).
In the embodiments of the application, the image point of a pixel point of a two-dimensional image under the camera coordinate system is the projection point of that pixel point under the camera coordinate system: the distance between the projection point and the optical center of the two-dimensional imaging device equals the distance between the object point corresponding to the pixel point and the two-dimensional imaging device, and the projection point, the pixel point, and the optical center of the two-dimensional imaging device lie on the same straight line.
In the embodiment of the application, a projection plane of a pixel point plane in a two-dimensional image under a camera coordinate system is a plane containing projection points of pixel points in the pixel point plane under the camera coordinate system.
In daily life, people often need to measure the size of an object. Conventionally, one uses a length measuring tool (e.g., a tape measure, a ruler, or a vernier caliper) to measure the object. However, this conventional method is time-consuming and labor-intensive for the measurer and has low measurement efficiency. Therefore, how to measure the size of an object efficiently and accurately is of great importance.
Electronic equipment may model the scene containing the object to be measured using a simultaneous localization and mapping (SLAM) method, so that the size of the object to be measured can be obtained. However, when the size of the object to be measured is measured using this method, the amount of data the electronic equipment needs to process is large and the processing speed is low.
Based on the above, the embodiments of the application provide a technical scheme that can reduce the data processing amount and improve the measurement processing speed. The execution subject of the embodiments of the application is a measuring device. Optionally, the measuring device may be one of the following: a mobile phone, a computer, a server, or a tablet computer. Embodiments of the application will be described below with reference to the accompanying drawings. Referring to fig. 3, fig. 3 is a flow chart of a measurement method according to an embodiment of the application.
301. A two-dimensional image containing an object to be measured and a depth map of the two-dimensional image are acquired.
In the embodiment of the application, the two-dimensional image can be an RGB image or a YUV image, wherein "Y" represents brightness (i.e. gray scale value), "U" and "V" each represent chromaticity (Chrominance or Chroma). The two-dimensional image comprises an object to be measured, and the depth map carries depth information of pixel points in the two-dimensional image, namely the depth map carries the depth information of the object to be measured.
In the embodiments of the application, the two-dimensional image is acquired by a two-dimensional imaging device. For example, the two-dimensional image may be acquired by an RGB camera; for another example, it may be acquired by a YUV imaging device.
In one implementation of acquiring a two-dimensional image, a measurement device receives an RGB image input by a user through an input component as the two-dimensional image, wherein the input component comprises: a keyboard, a mouse, a touch screen, a touch pad, an audio input device, and the like.
In another implementation manner of acquiring the two-dimensional image, the measurement device receives the RGB image sent by the first terminal as the two-dimensional image, where the first terminal includes a mobile phone, a computer, a tablet computer, a server, and the like.
In yet another implementation of acquiring a two-dimensional image, the measurement device acquires the two-dimensional image using a two-dimensional imaging device. For example, the measuring device is a mobile phone, and the two-dimensional imaging device is an RGB camera on the mobile phone. The mobile phone can acquire a two-dimensional image by using the RGB camera.
In one implementation of acquiring a depth map of a two-dimensional image, a measurement device receives a depth map of a two-dimensional image input by a user through an input component, wherein the input component comprises: a keyboard, a mouse, a touch screen, a touch pad, an audio input device, and the like.
In another implementation manner of acquiring the depth map of the two-dimensional image, the measuring device receives the depth map sent by a second terminal, where the second terminal includes a mobile phone, a computer, a tablet computer, a server, and the like. Optionally, the second terminal is identical to the first terminal.
In yet another implementation of acquiring the depth map of the two-dimensional image, the measurement device acquires the depth map using a depth imaging device. For example, the measurement device is a mobile phone and the depth imaging device is a depth camera on the mobile phone; the mobile phone can acquire the depth map using the depth camera. The depth camera may be any one of the following: a structured light camera, a time-of-flight (TOF) camera, or a binocular stereo vision camera.
As an alternative embodiment, the measuring device is loaded with a two-dimensional imaging device and a depth imaging device. The measuring device shoots the object to be measured simultaneously with the two-dimensional imaging device and the depth imaging device to obtain the two-dimensional image and the depth map of the two-dimensional image.
302. And obtaining the distance between the first angular point of the object to be measured and the second angular point of the object to be measured according to the two-dimensional image and the depth map.
In the embodiment of the application, the first corner point and the second corner point are both corner points belonging to an object to be measured in the two-dimensional image, and the first corner point and the second corner point are different.
In one way of obtaining the distance between the first corner point and the second corner point, the measuring device obtains, according to the two-dimensional image and the depth map, a first image point of the first corner point of the object to be measured under the camera coordinate system of the two-dimensional imaging device and a second image point of the second corner point of the object to be measured under the camera coordinate system. The distance between the first image point and the second image point is then determined to obtain the distance between the first corner point and the second corner point.
In one implementation of obtaining the first image point and the second image point, the measuring device obtains the depth value of the first corner point and the depth value of the second corner point from the depth map. A three-dimensional coordinate of the first image point under the camera coordinate system of the two-dimensional imaging device is obtained according to the coordinate of the first corner point in the two-dimensional image, the depth value of the first corner point, and the internal parameters of the two-dimensional imaging device, where the first image point is the image point of the first corner point in the two-dimensional imaging device. A three-dimensional coordinate of the second image point under the camera coordinate system of the two-dimensional imaging device is obtained according to the coordinate of the second corner point in the two-dimensional image, the depth value of the second corner point, and the internal parameters of the two-dimensional imaging device, where the second image point is the image point of the second corner point in the two-dimensional imaging device.
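The conversion described in this implementation can be sketched with a standard pinhole camera model (an illustrative sketch under assumed intrinsics, not the claimed implementation; the function name and all numeric values are assumptions):

```python
import numpy as np

def backproject(u, v, depth, fx, fy, cx, cy):
    """Back-project pixel (u, v) with a known depth value into the camera
    coordinate system using the pinhole model:
        X = (u - cx) * Z / fx,  Y = (v - cy) * Z / fy,  Z = depth.
    fx, fy are focal lengths in pixels; (cx, cy) is the principal point."""
    z = float(depth)
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.array([x, y, z])

# Distance between the image points of two corner points:
p1 = backproject(320, 240, 1.5, 600.0, 600.0, 320.0, 240.0)
p2 = backproject(400, 240, 1.5, 600.0, 600.0, 320.0, 240.0)
dist = np.linalg.norm(p1 - p2)
```

With the assumed intrinsics above, two pixels 80 columns apart at 1.5 m depth map to image points 0.2 m apart.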
In another implementation of obtaining the first image point and the second image point, it is assumed that the first corner point belongs to a first face and the second corner point belongs to a second face. The measurement device determines the projection plane of the first face under the camera coordinate system of the two-dimensional imaging device (hereinafter referred to as the first intermediate plane) and the projection plane of the second face under the camera coordinate system (hereinafter referred to as the second intermediate plane). A connecting line between the coordinate of the first corner point under the image coordinate system and the optical center of the two-dimensional imaging device is determined to obtain a first intermediate straight line, and a connecting line between the coordinate of the second corner point under the image coordinate system and the optical center is determined to obtain a second intermediate straight line. The intersection point between the first intermediate straight line and the first intermediate plane is the image point of the first corner point in the two-dimensional imaging device, i.e., the first image point; the intersection point between the second intermediate straight line and the second intermediate plane is the image point of the second corner point, i.e., the second image point.
In another way of obtaining the distance between the first corner point and the second corner point, the distance between the first corner point and the second corner point in the image is referred to as the first intermediate distance. Before determining the physical distance corresponding to the first intermediate distance, the measurement device acquires a mapping relation between depth values and transmission factors.
The mapping relation characterizes the mapping relation between the depth value of the pixel point and the transmission factor. The transmission factor characterizes a conversion relation between the size of the pixel point and the size of the object point corresponding to the pixel point. For example, a box is photographed using a camera to obtain an image a. The box comprises an object point a, a pixel point b in the image A is imaged by the object point a, and then the transmission factor of the pixel point b characterizes the conversion relation between the size of the pixel point b and the size of the object point a.
The depth value of the first intermediate distance is positively correlated with the depth value of the first corner point and/or with the depth value of the second corner point. Optionally, the midpoint of the connecting line between the first corner point and the second corner point is referred to as the first midpoint, and the depth value of the first intermediate distance is the depth value of the first midpoint. The measuring device can acquire the depth value of the first midpoint from the depth map as the depth value of the first intermediate distance, and can then obtain the first intermediate transmission factor of the first intermediate distance according to the mapping relation and the depth value of the first intermediate distance. The measuring device converts the first intermediate distance into the corresponding physical distance according to the first intermediate transmission factor, obtaining the distance between the first corner point and the second corner point.
It should be understood that the first corner point and the second corner point may be any two different corner points on the object to be measured, that is, by using the technical solution disclosed in the foregoing, a distance between any two corner points on the object to be measured may be obtained, and thus, a size of the object to be measured may be obtained. For example, in the case where the object to be measured is a cardboard box, the length, width, and height of the object to be measured can be obtained; in the case where the object to be measured is a microwave oven as shown in fig. 4, the length, width, and height of the microwave oven can be obtained.
In the embodiment of the application, the measuring device obtains the size of the object to be measured according to the two-dimensional image and the depth map, so that the data processing amount can be reduced, and the processing speed can be improved.
As an alternative embodiment, the measuring device obtains the first image point and the second image point when the following steps are performed:
1. And obtaining a projection plane of the surface to be measured under a camera coordinate system of the two-dimensional imaging device according to the coordinates of at least three first points in the surface to be measured in the two-dimensional image and the depth values of the at least three first points obtained from the depth map.
In the embodiment of the present application, coordinates in a two-dimensional image refer to coordinates in a pixel coordinate system of the two-dimensional image (hereinafter, will be referred to as a reference pixel coordinate system). The measurement means may obtain coordinates of the pixel point in a camera coordinate system of the two-dimensional imaging device (hereinafter, the coordinates will be referred to as two-dimensional coordinates) based on the coordinates of the pixel point in a reference pixel coordinate system and internal parameters of the two-dimensional imaging device.
In the embodiment of the application, the internal parameters of the imaging device include: the focal length of the imaging device is in the coordinate of the camera coordinate system and the optical center of the imaging device is in the coordinate of the camera coordinate system, wherein the optical center is the intersection point of the optical axis of the imaging device and the image plane.
The imaging device that acquires the depth map is referred to as a depth imaging device. The measuring device can acquire the depth value of the pixel point in the two-dimensional image from the depth map. And converting the depth value of the pixel point into the depth value under a reference pixel coordinate system according to the pose conversion relation between the two-dimensional imaging device and the depth imaging device. The measuring device can obtain the depth value of the pixel point under the camera coordinate system of the two-dimensional imaging device according to the internal parameters of the two-dimensional imaging device and the depth value of the pixel point under the reference pixel coordinate system. In this way, the measuring device can obtain the three-dimensional coordinates of any one first point in the camera coordinate system of the two-dimensional imaging device.
After obtaining the three-dimensional coordinates of at least three first points in the camera coordinate system, the measuring device can obtain a plane according to the three-dimensional coordinates of at least three first points in the camera coordinate system, wherein the plane is a projection plane of the surface to be measured in the camera coordinate system.
Optionally, the measuring device obtains the position of the corner of the surface to be measured in the two-dimensional image by performing corner detection processing on the two-dimensional image. The measuring device obtains an area covered by the surface to be measured in the image according to the position of the corner point in the surface to be measured, and then at least three first points can be selected from the area.
In one possible implementation, the measuring device uses a convolutional neural network to process the two-dimensional image, so as to realize corner detection processing. The convolutional neural network is obtained by training a plurality of images with labeling information as training data, wherein the labeling information of the images in the training data is corner points and the positions of the corner points. In the process of training the convolutional neural network by using training data, the convolutional neural network extracts characteristic data of the image from the image, and determines whether angular points exist in the image according to the characteristic data. And under the condition that the existence of the corner points in the image is determined, obtaining the positions of the corner points in the image according to the characteristic data of the image. In the training process, the labeling information is used as the supervision information to supervise the result obtained by the convolutional neural network in the training process, and the parameters of the convolutional neural network are updated to complete the training of the convolutional neural network.
In this way, the two-dimensional image can be processed by using the trained convolutional neural network, and the position of the corner point of the object to be measured in the two-dimensional image can be obtained. It should be understood that the execution subject of training the convolutional neural network may be a measurement device or a training device, where the training device may be one of the following: computer, server.
In another possible implementation manner, the corner detection processing may be implemented by a corner detection algorithm; the application does not limit which corner detection algorithm is used to realize the corner detection processing.
2. The first image point and the second image point are determined from the projection plane.
Since the projection plane contains the image points of all the points on the surface to be measured in the camera coordinate system, and the first corner point and the second corner point both belong to the surface to be measured, the projection plane contains the image point corresponding to the first corner point (i.e., the first image point) and the image point corresponding to the second corner point (i.e., the second image point).
In one possible implementation, the measurement device obtains coordinates of the optical center of the two-dimensional imaging device in the camera coordinate system, and obtains a straight line (hereinafter referred to as a first straight line) passing through the first corner point and the optical center according to the coordinates of the first corner point in the camera coordinate system and the coordinates of the optical center in the camera coordinate system. The measuring device determines the intersection point of the first straight line and the projection plane as a first image point. Similarly, the measuring device obtains a straight line (hereinafter referred to as a second straight line) passing through the second corner point and the optical center, based on the coordinates of the second corner point in the camera coordinate system and the coordinates of the optical center in the camera coordinate system. The measuring device takes the intersection point of the second straight line and the projection plane as a second image point.
In this step, since the points on the projection plane all carry three-dimensional coordinate information, the first image point and the second image point also carry three-dimensional coordinate information, which includes depth information. Limited by the imaging capability of the depth imaging device, the accuracy of the depth values obtained from the depth map for points on the edges of the object to be measured is low, which in turn makes the coordinates of the image points of those edge points under the camera coordinate system inaccurate. By determining the first image point and the second image point from the projection plane, the accuracy of the coordinates of the first image point and the second image point under the camera coordinate system can be improved.
As an alternative embodiment, the measuring device performs the following steps in performing step 2:
3. a first straight line passing through the first corner point and the optical center of the two-dimensional imaging device and a second straight line passing through the second corner point and the optical center are obtained.
Alternatively, the measuring device may determine the first straight line by deriving the equation of the first straight line, and determine the second straight line by deriving the equation of the second straight line.
4. The first image point is obtained by determining an intersection point between the first straight line and the projection plane, and the second image point is obtained by determining an intersection point between the second straight line and the projection plane.
The measuring device solves the equation of the first straight line simultaneously with the equation of the projection plane to obtain the coordinates of the first image point under the camera coordinate system, and solves the equation of the second straight line simultaneously with the equation of the projection plane to obtain the coordinates of the second image point under the camera coordinate system.
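Solving a straight line through the optical center simultaneously with the plane equation amounts to a ray-plane intersection, which can be sketched as follows (illustrative only; the function name, plane, and direction values are assumptions, not the claimed implementation):

```python
import numpy as np

def ray_plane_intersection(origin, direction, plane):
    """Intersect the line origin + t * direction with the plane
    n·p + d = 0, where plane = (a, b, c, d) and n = (a, b, c).
    Returns the intersection point, or None if the line is parallel."""
    n = np.asarray(plane[:3], dtype=float)
    d = float(plane[3])
    direction = np.asarray(direction, dtype=float)
    origin = np.asarray(origin, dtype=float)
    denom = n.dot(direction)
    if abs(denom) < 1e-12:
        return None  # line parallel to the plane: no unique intersection
    t = -(n.dot(origin) + d) / denom
    return origin + t * direction

# First image point: the ray from the optical center (origin of the camera
# coordinate system) through the first corner point's image-plane location,
# intersected with the fitted projection plane (here the plane z = 2).
corner_dir = np.array([0.1, 0.05, 1.0])
plane = (0.0, 0.0, 1.0, -2.0)
first_image_point = ray_plane_intersection(np.zeros(3), corner_dir, plane)
```

The second image point is obtained the same way with the second corner point's ray.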
As an alternative embodiment, the measuring device performs the following steps in performing step 3:
5. And acquiring a first coordinate of the first corner point in an image coordinate system of the two-dimensional imaging device, a second coordinate of the second corner point in the image coordinate system and a third coordinate of an optical center of the two-dimensional imaging device in the camera coordinate system.
After the measuring device acquires the internal parameters of the two-dimensional imaging device, the coordinate of the first corner point under the image coordinate system of the two-dimensional imaging device, i.e., the first coordinate, can be obtained according to the internal parameters and the coordinate of the first corner point in the two-dimensional image. Similarly, the measuring device obtains the coordinate of the second corner point under the image coordinate system, i.e., the second coordinate, according to the internal parameters and the coordinate of the second corner point in the two-dimensional image.
In one implementation of acquiring the third coordinates of the optical center in the camera coordinate system, the measurement device acquires the optical center coordinates input by the user through the input component as the third coordinates.
In another implementation manner of acquiring the third coordinate of the optical center under the camera coordinate system, the measuring device receives the optical center coordinate sent by the second terminal as the third coordinate, wherein the second terminal comprises a mobile phone, a computer, a tablet computer, a server and the like. Optionally, the second terminal is identical to the first terminal.
6. And obtaining the first straight line according to the first coordinate and the third coordinate.
According to the first coordinate and the third coordinate, the measuring device can obtain the equation of the straight line passing through the first corner point and the optical center, i.e., the equation of the first straight line.
7. And obtaining the second straight line according to the second coordinate and the third coordinate.
According to the second coordinate and the third coordinate, the measuring device can obtain the equation of the straight line passing through the second corner point and the optical center, i.e., the equation of the second straight line.
As an alternative embodiment, the measuring device performs the following steps in performing step 1:
8. Acquiring the internal parameters of the two-dimensional imaging device and the coordinates of the at least three first points in the pixel coordinate system of the two-dimensional image.
In the embodiment of the application, the internal parameters of the two-dimensional imaging device include: the focal length of the two-dimensional imaging device is in the coordinate of the camera coordinate system and the optical center of the two-dimensional imaging device is in the coordinate of the camera coordinate system, wherein the optical center is the intersection point of the optical axis of the two-dimensional imaging device and the image plane.
As described above, depth information of an object point at a plane edge may be inaccurate in a depth map acquired by a depth imaging apparatus due to imaging limitations of the depth imaging apparatus. In view of the depth information required to use at least three first points in the subsequent processing, the measuring device may select at least three first points from points other than on the edge of the surface to be measured in order to improve the accuracy of the subsequent processing. For example, in fig. 5, if the surface to be measured is ABCD, at least three first points are points in ABCD other than the point on AB, the point on BC, the point on CD, and the point on DA.
Optionally, the measuring device obtains the position of the corner of the surface to be measured in the two-dimensional image by performing corner detection processing on the two-dimensional image. The measuring device obtains an area covered by the surface to be measured in the image according to the position of the corner point in the surface to be measured, and then at least three first points can be selected from the area.
9. And obtaining image points of the at least three first points in the camera coordinate system according to the internal parameters, the depth values of the at least three first points and the coordinates of the at least three first points in the pixel coordinate system.
The measuring device converts the coordinates of the first point in the two-dimensional image and the depth value of the first point according to the internal parameters of the two-dimensional imaging device, and an image point of the first point under the camera coordinate system of the two-dimensional imaging device can be obtained.
10. And carrying out plane fitting processing on the image points of the at least three first points under the camera coordinate system to obtain the projection plane.
Alternatively, the measuring device may perform a plane fitting process on the image points of the at least three first points in the camera coordinate system, so as to minimize the sum of distances from the plane obtained by the fitting to the image points of the at least three first points, thereby obtaining the projection plane. In this way, the accuracy of the projection plane can be improved.
For example, the image points of the at least three first points comprise image point a, image point b, and image point c. Assume that the distance from image point a to the fitted plane is D1, the distance from image point b to the fitted plane is D2, and the distance from image point c to the fitted plane is D3. The measuring device then minimizes D1 + D2 + D3 by performing the plane fitting process on the image points of the at least three first points.
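A common way to realize such a plane fitting is a total-least-squares fit via SVD, which minimizes the sum of squared point-to-plane distances (a close relative of the sum of distances described above). The sketch below is illustrative under that assumption, not the claimed implementation:

```python
import numpy as np

def fit_plane(points):
    """Fit a plane n·p + d = 0 to N >= 3 points by minimizing the sum of
    squared point-to-plane distances (total least squares via SVD)."""
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    # The plane normal is the right-singular vector of the centered points
    # associated with the smallest singular value.
    _, _, vt = np.linalg.svd(pts - centroid)
    normal = vt[-1]
    d = -normal.dot(centroid)
    return normal, d

# Image points of the first points (all lying on the plane z = 2 here):
pts = [(0, 0, 2.0), (1, 0, 2.0), (0, 1, 2.0), (1, 1, 2.0)]
normal, d = fit_plane(pts)
```

For the sample points above, the fitted normal is (0, 0, ±1) and d = ∓2, i.e., the plane z = 2.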
As an alternative embodiment, the shape of the object to be measured is a regular shape or a quasi-regular shape, and the surface to be measured is the upper bottom surface of the object to be measured.
In an embodiment of the present application, the regular shape includes at least one of: rectangular, diamond, parallelogram, pentagon. For example, the rectangular parallelepiped is a regular shape. For another example, in the object to be measured shown in fig. 6, both the upper bottom surface and the lower bottom surface are pentagons, and the object to be measured is a regular shape.
In an embodiment of the present application, a quasi-regular shape is a regular shape in which at least one face is truncated (for example, corner-cut), or in which a convex portion and/or a concave portion is present in at least one face. For example, fig. 7 shows a corner-cut triangle, and fig. 8 and fig. 9 show corner-cut rectangles. The shape of the object shown in fig. 10 is a quasi-regular shape. For another example, the microwave oven shown in fig. 4 has a quasi-regular shape.
In the embodiment of the present application, a vector that takes the geometric center of the object to be measured as a starting point and is parallel to the gravity direction is referred to as a reference vector. The bottom surface to which the positive direction of the reference vector points is referred to as a lower bottom surface, and the bottom surface to which the negative direction of the reference vector points is referred to as an upper bottom surface. The surfaces of the object to be measured other than the upper bottom surface and the lower bottom surface are referred to as side surfaces. For example, in the object to be measured shown in fig. 5, the plane ABCD is an upper bottom surface, the plane EFG is a lower bottom surface, and the plane ABFE and the plane BCGF are side surfaces.
Optionally, the measuring device determines the gravity direction in the two-dimensional image according to the gyroscope data when the two-dimensional imaging device collects the two-dimensional image, so that the upper bottom surface of the object to be measured and the lower bottom surface of the object to be measured can be determined.
The measuring device further performs the following steps on the basis of performing the above steps:
11. and acquiring coordinates of the third corner point under the pixel coordinate system, and acquiring a depth value of the third corner point from the depth map.
In the embodiment of the application, the third corner belongs to a side surface, namely the third corner is a corner in the side surface. The connecting line between the third corner point and the first corner point is the height side of the object to be measured, wherein the height side refers to the side with the length being the height of the object to be measured. For example, in the object to be measured shown in fig. 4, the lengths of three sides AE, BF, and CG may be used to represent the height of the object to be measured, that is, AE, BF, and CG are all height sides.
12. And obtaining a third image point of the third corner point under the camera coordinate system according to the internal parameter, the depth value of the third corner point and the coordinate of the third corner point under the pixel coordinate system.
The measuring device converts the coordinate of the third corner point under the pixel coordinate system of the two-dimensional image and the depth value of the third corner point according to the internal parameters, and an image point of the third corner point under the camera coordinate system, namely a third image point, can be obtained.
13. And determining the distance between the first image point and the third image point as the height of the object to be measured.
Optionally, the measuring device performs corner detection processing on the two-dimensional image, so that the coordinates of the corners of the object to be measured in the two-dimensional image can be obtained, along with the confidence of each corner in the lower bottom surface of the object to be measured. For example (example 1), the measuring device performs corner detection processing on fig. 4 and determines that the corners in the lower bottom surface of the object to be measured include point E, point F, and point G; it obtains the position of each of these points in the image together with the confidence of each position.
The measuring device selects k corner points with the highest confidence from the lower bottom surface of the object to be measured as height corner points. And calculating the average value of the lengths of the height edges where the height corner points are located, and taking the average value as the height of the object to be measured. Continuing with example 1, assuming that k=2, the confidence of the position of point E in the image is greater than the confidence of the position of point F in the image, which is greater than the confidence of the position of point G in the image. At this time, points E and F are altitude points. The measuring device calculates the length average of AE and BF as the height of the object to be measured.
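The selection of the k most confident height corner points and the averaging of their height-edge lengths can be sketched as follows (illustrative only; the data layout, names, and values are assumptions):

```python
import numpy as np

def estimate_height(height_edges, k=2):
    """Average the lengths of the height edges whose bottom-face corner
    points were detected with the k highest confidences.

    height_edges: list of (confidence, top_point_3d, bottom_point_3d),
    where each (top, bottom) pair spans one height edge of the object."""
    best = sorted(height_edges, key=lambda e: e[0], reverse=True)[:k]
    lengths = [np.linalg.norm(np.asarray(top) - np.asarray(bottom))
               for _, top, bottom in best]
    return sum(lengths) / len(lengths)

# Continuing example 1: E and F have the highest confidences, so only the
# edges AE and BF contribute to the height estimate.
height_edges = [
    (0.9, (0, 0, 0), (0, 0, 1.00)),   # edge AE
    (0.8, (1, 0, 0), (1, 0, 1.02)),   # edge BF
    (0.3, (1, 1, 0), (1, 1, 0.70)),   # edge CG (low confidence, discarded)
]
height = estimate_height(height_edges, k=2)
```

Here the estimate is the mean of |AE| = 1.00 and |BF| = 1.02, i.e., 1.01.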
As an alternative embodiment, the measuring device performs the following steps in performing step 302:
14. And acquiring a first depth value of the first line segment from the depth map.
In the embodiment of the application, the two endpoints of the first line segment are the first corner point and the second corner point respectively. The depth value of the first line segment is positively correlated with the depth value of the first corner point and/or with the depth value of the second corner point.
In one possible implementation, the depth value of the first line segment is the depth value of the midpoint of the first line segment, hereinafter called the reference midpoint. The measuring device acquires the depth value of the reference midpoint from the depth map and takes it as the depth value of the first line segment, namely the first depth value.
In another possible implementation, the depth value of the first line segment is the mean of the depth value of the first corner point and the depth value of the second corner point.
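Both ways of obtaining the first depth value can be sketched as follows (a minimal illustration; the depth map is a synthetic array, pixels are given as (u, v), and the map is indexed as [v, u]):

```python
import numpy as np

def segment_depth(depth_map, p1, p2, mode="midpoint"):
    """First depth value of the line segment with endpoints p1 and p2
    (the first and second corner points, in pixel coordinates)."""
    if mode == "midpoint":
        # Depth of the reference midpoint, read directly from the depth map.
        mu = int(round((p1[0] + p2[0]) / 2))
        mv = int(round((p1[1] + p2[1]) / 2))
        return float(depth_map[mv, mu])
    # mode == "mean": average of the two endpoint depths.
    d1 = float(depth_map[p1[1], p1[0]])
    d2 = float(depth_map[p2[1], p2[0]])
    return (d1 + d2) / 2.0

depth_map = np.arange(25, dtype=float).reshape(5, 5)  # synthetic depth map
mid = segment_depth(depth_map, (0, 0), (4, 4), mode="midpoint")
avg = segment_depth(depth_map, (0, 0), (4, 4), mode="mean")
```

On this synthetic linear ramp the two variants agree; on real depth maps they differ, and both satisfy the positive correlation required above.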
15. And obtaining a mapping relation between the depth value and the transmission factor.
In the embodiment of the application, the transmission factor characterizes the conversion relation between the size in the image and the real size. For example, the transmission factor of a pixel characterizes the conversion relation between the size of the pixel and the size of the object point to which the pixel corresponds. For another example, the transmission factor of a stool in the image characterizes the conversion relation between the length of the stool in the image and the true length of the stool.
The ratio between the size in the image and the true size is called transmittance, and the transmission factor is inversely related to the transmittance. For example, the image includes a pixel a and a pixel B, where the transmission factor of the pixel a is greater than the transmission factor of the pixel B, the object point corresponding to the pixel a is the object point a, and the object point corresponding to the pixel B is the object point B. Since the transmission factor of the pixel a is larger than that of the pixel B, the transmittance of the pixel a and the object point a is smaller than that of the pixel B and the object point B. Since the size of the pixel a is the same as that of the pixel B, the size of the object point a is larger than that of the object point B.
In the embodiment of the application, the transmission factor and the depth value are positively correlated. For example, the image includes a pixel a and a pixel b, wherein the depth value of the pixel a is greater than the depth value of the pixel b. At this time, the size of the object point corresponding to the pixel point a is larger than the size of the object point corresponding to the pixel point b.
16. And obtaining the transmission factor of the first line segment according to the mapping relation and the first depth value.
In the embodiment of the application, the transmission factor of the first line segment characterizes the conversion relation between the length of the first line segment and the length of the second line segment, wherein the second line segment is the physical line segment corresponding to the first line segment, i.e. the length of the second line segment is the physical distance corresponding to the length of the first line segment.
17. And obtaining the distance between the first corner point and the second corner point according to the transmission factor of the first line segment and the length of the first line segment.
In one possible implementation, the transmission factor of the first line segment characterizes a ratio of a length of the first line segment to a length of the second line segment. The measuring device calculates the quotient of the length of the first line segment and the transmission factor of the first line segment to obtain the length of the second line segment, namely the distance between the first corner point and the second corner point.
In another possible implementation, the transmission factor of the first line segment characterizes a ratio of a length of the second line segment to a length of the first line segment. The measuring device calculates the product of the length of the first line segment and the transmission factor of the first line segment to obtain the length of the second line segment, namely the distance between the first corner point and the second corner point.
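Under the assumption of a pinhole camera, one concrete mapping between depth value and transmission factor is depth divided by the focal length in pixels (real size per pixel, positively correlated with depth as the text requires); with that assumption, steps 15 to 17 can be sketched as:

```python
import numpy as np

def transmission_factor(depth, fx=600.0):
    """Assumed mapping from depth value to transmission factor for a pinhole
    camera: metres of object span per pixel at the given depth. fx is the
    focal length in pixels, a hypothetical intrinsic."""
    return depth / fx

def corner_distance(p1, p2, first_depth, fx=600.0):
    """Step 17: scale the pixel length of the first line segment by its
    transmission factor to obtain the distance between the corner points
    (here the factor is real-size / image-size, so we multiply)."""
    pixel_len = float(np.hypot(p1[0] - p2[0], p1[1] - p2[1]))
    return pixel_len * transmission_factor(first_depth, fx)

# A 300-pixel segment observed at a depth of 2 m spans 1 m with fx = 600 px.
dist = corner_distance((100, 240), (400, 240), first_depth=2.0)
```

Whether the factor multiplies or divides the pixel length depends on which of the two ratio conventions above is chosen; the sketch uses the second.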
As an alternative embodiment, the surface of the object to be measured comprises a non-body structure, wherein the non-body structure comprises at least one of the following: a concave structure and a convex structure. For example, in the microwave oven shown in fig. 4, the handle is a convex structure on the surface. The measuring device also performs the following steps:
18. And performing edge detection processing on the two-dimensional image to obtain a first edge and a second edge.
In the embodiment of the application, the edge detection processing is used for detecting the positions of edges and the gradients of edges in the image. Alternatively, the edge detection processing may be implemented by one of the following methods: the Canny edge detection algorithm, the Sobel operator, the Roberts edge detection operator, or the Laplacian of Gaussian (LoG) edge detection operator.
The measuring device performs edge detection processing on the two-dimensional image to obtain an edge detection result of the two-dimensional image, wherein the edge detection result comprises: the positions of the edges in the two-dimensional image and the gradient magnitudes of those edges.
In the embodiment of the application, the first edge and the second edge are positioned in the same surface of the object to be measured, and the gradient of the first edge and the gradient of the second edge exceed the edge threshold. For example, the first edge and the second edge are both located on the upper bottom surface of the object to be measured; the first edge and the second edge are both located on the front side of the object to be measured.
In the embodiment of the application, an edge whose gradient exceeds the edge threshold is regarded as an edge of the non-body structure. For example, the junction between the handle of the microwave oven shown in fig. 4 and the front of the microwave oven is an edge of the handle.
19. And determining a distance between the first edge and the second edge as a dimension of the non-body structure.
The measuring device obtains the first edge and the second edge by performing edge detection processing on the two-dimensional image, and regards them as two edges of the non-body structure. In this way, the measuring device can obtain the dimension of the non-body structure by determining the distance between the first edge and the second edge.
For example, the measuring device obtains the edge of the handle on the front surface of the microwave oven by performing edge detection processing on the two-dimensional image shown in fig. 4, and further obtains the length of the handle.
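Steps 18 and 19 can be sketched on a synthetic image, with simple central differences standing in for Canny or Sobel (the image, threshold and "handle" stripe are all assumptions for illustration):

```python
import numpy as np

def strong_edges(gray, edge_threshold):
    """Mask of pixels whose gradient magnitude exceeds the edge threshold,
    computed with central differences as a stand-in for Canny/Sobel."""
    gy, gx = np.gradient(gray.astype(float))  # gradients along rows, columns
    return np.hypot(gx, gy) > edge_threshold

# Synthetic front face with one bright vertical stripe (a "handle"): the two
# columns flanking the stripe are the first edge and the second edge.
img = np.zeros((5, 9))
img[:, 4] = 100.0
edge_cols = sorted(set(np.nonzero(strong_edges(img, 10.0))[1].tolist()))
width = edge_cols[1] - edge_cols[0]  # step 19: distance between the two edges
```

The distance here is in pixels; converting it to a physical dimension would use the same transmission-factor scaling described earlier.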
It should be understood that the first edge and the second edge are only examples; in practical application, the number of edges located in the same surface and found by the edge detection processing may exceed two. For example, suppose the upper bottom surface of the object to be measured has a rectangular recess. By performing edge detection processing on a two-dimensional image containing the object to be measured, the measuring device determines that the upper bottom surface contains four edges whose gradients exceed the edge threshold, and can also determine the directions of the four edges. Specifically, among the four edges, the direction of the first edge is the same as that of the second edge, and the direction of the third edge is the same as that of the fourth edge. The measuring device can then obtain the length and the width of the rectangular recess by determining the distance between the first edge and the second edge and the distance between the third edge and the fourth edge.
As an alternative embodiment, the measuring device, before performing step 19, also performs the following steps:
20. And respectively performing curvature detection processing on the first edge and the second edge to obtain a first curvature of the first edge and a second curvature of the second edge.
In the embodiment of the present application, the curvature detection processing is used to calculate the curvature of an edge whose gradient exceeds the edge threshold (hereinafter referred to as a boundary edge). The measuring device performs curvature detection processing on the first edge and the second edge respectively to obtain the curvature of the first edge (namely, the first curvature) and the curvature of the second edge (namely, the second curvature).
Because the curvature of an edge of a non-body structure is larger, the measuring device determines that a boundary edge is an edge of the non-body structure only when the curvature of that boundary edge exceeds the curvature threshold, which improves the accuracy of identifying the edges of the non-body structure.
For example, assuming that two slits and a handle are present in the upper bottom surface of the storage box, the measuring device can determine that four boundary edges, respectively, a first edge, a second edge, a third edge, and a fourth edge, are present in the upper bottom surface by performing edge detection processing on a two-dimensional image including the storage box. The measuring device further performs curvature detection processing on the four boundary edges, and determines that the curvature of the first edge and the curvature of the second edge exceed a curvature threshold value, and the curvature of the third edge and the curvature of the fourth edge do not exceed a curvature threshold value. The measuring device in turn determines the distance between the first edge and the second edge as the length of the handle. In this example, the measuring device may avoid taking the distance between the slits as the size of the handle.
Thus, as an alternative embodiment, the measuring device performs step 19 in case it is determined that both the first curvature and the second curvature exceed the curvature threshold.
As an alternative embodiment, the measuring device performs the following steps on the basis of performing the above steps:
21. Dynamic display information is acquired.
In the embodiment of the application, the dynamic display information comprises display information of special effect type, wherein the special effect type comprises at least one of the following: a scroll bar, a progress bar, thickening the edges of the object to be measured.
For example, the dynamic display information may be to thicken the height edge of the object to be measured. For another example, the dynamic display information may be a measurement progress of the side length dimension displayed in the form of a progress bar. For another example, when measuring a side length of a certain strip on an object to be measured, a scroll strip is displayed between two corner points of the side length.
22. And carrying out dynamic display processing on the first corner point and the second corner point according to the dynamic display information, and displaying the first corner point and the second corner point after the dynamic display processing.
In one possible implementation manner, the measuring device performs dynamic display processing on the first corner point and the second corner point according to the dynamic display information after measuring the distance between the first corner point and the second corner point, and displays the first corner point and the second corner point after the dynamic display processing.
For example, assume that the line between the first corner point and the second corner point is line segment AB, and that the dynamic display information includes: displaying a scroll bar. After measuring the length of AB, the measuring device displays a scroll bar between the first corner point and the second corner point, i.e. displays the scroll bar on AB, to indicate that the measuring device has measured the length of AB. Optionally, the measuring device may determine the moving speed of the scroll bar according to the length of AB.
In another possible implementation manner, before measuring the distance between the first corner point and the second corner point, the measuring device performs dynamic display processing on the first corner point and the second corner point according to the dynamic display information, and displays the first corner point and the second corner point after the dynamic display processing.
For example, assume that the line between the first corner point and the second corner point is line segment AB, and that the dynamic display information includes: thickening the edges of the object to be measured. The measuring device displays line segment AB in bold before measuring its length, to indicate that the measuring device is about to measure the length of AB.
In one possible implementation manner, in the process of measuring the distance between the first corner point and the second corner point, the measuring device performs dynamic display processing on the first corner point and the second corner point according to the dynamic display information, and displays the first corner point and the second corner point after the dynamic display processing.
For example, assume that the line between the first corner point and the second corner point is line segment AB, and that the dynamic display information includes: displaying a progress bar. In the process of measuring the length of AB, the measuring device displays a progress bar between the first corner point and the second corner point, i.e. displays the progress bar on AB, to indicate the progress of measuring the length of AB.
As an alternative embodiment, the dynamic display information comprises interface elements, and the measuring device performs the following steps in performing step 22:
23. And moving the interface element from the first corner point to the second corner point.
In the embodiment of the application, the interface element can be any graphic. In the process of measuring the distance between the first corner point and the second corner point, the measuring device moves the interface element from the first corner point to the second corner point.
As an alternative embodiment, the measuring device performs the following steps in performing step 23:
24. And obtaining the moving speed of the scroll bar according to the distance between the first corner point and the second corner point.
In the embodiment of the application, the moving speed of the scroll bar is inversely related to the distance between the first corner point and the second corner point. The measuring device determines the moving speed of the scroll bar according to the distance between the first corner point and the second corner point, so that the user can more intuitively feel the real distance between the first corner point and the second corner point from the speed of the scroll bar.
25. And moving the scroll bar from the first corner point to the second corner point at the moving speed.
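Assuming a simple reciprocal law for the inverse relation (the application only requires an inverse correlation, not this particular formula), steps 24 and 25 can be sketched as:

```python
def scroll_speed(distance, base_speed=1.0):
    """Moving speed of the scroll bar, inversely related to the measured
    distance between the first corner point and the second corner point:
    a longer edge scrolls more slowly, so it also feels longer to the user."""
    return base_speed / max(distance, 1e-9)  # guard against zero distance

speed_long = scroll_speed(2.0)   # slower for a long edge
speed_short = scroll_speed(0.5)  # faster for a short edge
```

The scroll bar is then animated from the first corner point towards the second at the returned speed.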
In daily life, people often need to measure the size of some objects (e.g., cartons, tables, cabinets). It is time consuming and labor intensive for people to use a ruler to measure the dimensions of an object. With the development of technology, the hardware configuration of the terminal is higher and higher, and the terminal can measure the size of the object by using the technical scheme disclosed by the embodiment of the application.
Specifically, the terminal is loaded with an RGB camera and a depth camera. Under the condition that a measurement instruction aiming at an object to be measured is received, the terminal shoots the object to be measured by using the RGB camera to obtain a two-dimensional image, and the terminal shoots the object to be measured by using the depth camera to obtain a depth map. The measurement instructions include at least one of: voice, text, click operation, touch operation. The terminal processes the acquired RGB image and depth map by using the technical scheme, and the size of the object to be measured can be obtained.
Based on the technical scheme provided by the embodiment of the application, the embodiment of the application also provides several possible application scenes.
Scene 1: Xiaoming contacts a moving company to help him move house, but the moving company needs him to state clearly the sizes of the objects to be moved, so Xiaoming needs to measure them. Since many things need to be carried, it is troublesome to measure the sizes of these things (such as tables, cabinets and washing machines, hereinafter referred to as objects to be measured) with a ruler. Xiaoming therefore inputs an instruction for measuring an object to be measured by clicking on his mobile phone, which is loaded with an RGB camera and a TOF camera. On receiving the instruction, the mobile phone shoots the object to be measured to obtain an RGB image of the object and a depth map of the RGB image. The mobile phone can further process the RGB image and the depth map by using the technical scheme disclosed above to obtain the size of the object to be measured. In this way, Xiaoming can obtain the size of an object simply by shooting it with the mobile phone, without measuring it with a ruler.
Scene 2: with the rapid development of electronic commerce, more and more people shop through electronic commerce, which also presents more challenges to the logistics industry, including improving the efficiency of measuring the size of goods to be dispensed.
Logistics distribution is increasingly standardized nowadays: before goods to be distributed are transported, they are packed in cartons. Because the shape of a carton is a cuboid, the terminal can accurately measure the size of the carton by using the technical scheme disclosed in the embodiment of the application. For example, a worker of a logistics company can use a terminal (such as a mobile phone or a tablet computer) loaded with an RGB camera and a TOF camera to shoot a carton to be measured, so as to obtain an RGB image of the carton and a depth map of the RGB image. The terminal can further process the RGB image and the depth map by using the technical scheme disclosed above to obtain the size of the carton to be measured. In this way, the labor cost of measuring carton sizes can be reduced and the efficiency of measuring them improved.
It will be appreciated by those skilled in the art that in the above-described method of the specific embodiments, the written order of steps is not meant to imply a strict order of execution but rather should be construed according to the function and possibly inherent logic of the steps.
The method of the embodiments of the present application is described in detail above; the apparatus of the embodiments of the present application is provided below.
Referring to fig. 11, fig. 11 is a schematic structural diagram of a measuring device according to an embodiment of the present application, where the measuring device 1 includes: an acquisition unit 11, a first processing unit 12, a first detection unit 13, a second processing unit 14, a second detection unit 15, a display unit 16, an RGB camera 17, and a depth camera 18, wherein:
An acquisition unit 11 for acquiring a two-dimensional image containing an object to be measured and a depth map of the two-dimensional image; the two-dimensional image is acquired by a two-dimensional imaging device;
the first processing unit 12 is configured to obtain, according to the two-dimensional image and the depth map, a distance between the first corner of the object to be measured and the second corner of the object to be measured.
In combination with any embodiment of the present application, the first processing unit 12 is configured to:
Obtaining a first image point of a first corner point of the object to be measured under a camera coordinate system of the two-dimensional imaging device and a second image point of a second corner point of the object to be measured under the camera coordinate system according to the two-dimensional image and the depth image;
and determining the distance between the first image point and the second image point as the distance between the first corner point and the second corner point.
In combination with any one of the embodiments of the present application, the object to be measured includes a surface to be measured; the first processing unit 12 is configured to:
obtaining a projection plane of the surface to be measured under a camera coordinate system of the two-dimensional imaging device according to coordinates of at least three first points in the surface to be measured in the two-dimensional image and depth values of the at least three first points obtained from the depth map;
The first image point and the second image point are determined from the projection plane.
In combination with any embodiment of the present application, the first processing unit 12 is configured to:
Acquiring a first straight line passing through the first angular point and the optical center of the two-dimensional imaging device, and a second straight line passing through the second angular point and the optical center;
determining an intersection point between the first straight line and the projection plane to obtain the first image point, and determining an intersection point between the second straight line and the projection plane to obtain the second image point.
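The line-plane intersection in the step above (a ray from the optical centre through the corner point, intersected with the projection plane) can be sketched as follows; all numeric values are hypothetical:

```python
import numpy as np

def line_plane_intersection(origin, direction, plane_point, plane_normal):
    """Image point: intersection of the straight line (origin + t*direction)
    with the projection plane given by a point on it and its normal."""
    t = np.dot(plane_normal, plane_point - origin) / np.dot(plane_normal, direction)
    return origin + t * direction

# Optical centre at the camera origin; ray towards the corner's normalised
# image coordinates; projection plane z = 2 (assumed values).
optical_centre = np.zeros(3)
ray = np.array([0.1, -0.2, 1.0])
image_point = line_plane_intersection(optical_centre, ray,
                                      np.array([0.0, 0.0, 2.0]),
                                      np.array([0.0, 0.0, 1.0]))
```

Repeating this for the second corner point's ray yields the second image point, and the distance between the two image points is the measured distance.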
In combination with any embodiment of the present application, the first processing unit 12 is configured to:
Acquiring a first coordinate of the first corner point under an image coordinate system of the two-dimensional imaging device, a second coordinate of the second corner point under the image coordinate system and a third coordinate of an optical center of the two-dimensional imaging device under the camera coordinate system;
Obtaining the first straight line according to the first coordinate and the third coordinate;
and obtaining the second straight line according to the second coordinate and the third coordinate.
In combination with any embodiment of the present application, the first processing unit 12 is configured to:
Acquiring internal parameters of the two-dimensional imaging device and coordinates of the at least three first points under a pixel coordinate system of the two-dimensional image;
Obtaining image points of the at least three first points under the camera coordinate system according to the internal parameters, the depth values of the at least three first points and the coordinates of the at least three first points under the pixel coordinate system;
and carrying out plane fitting processing on the image points of the at least three first points under the camera coordinate system to obtain the projection plane.
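The plane fitting in the step above can be sketched as a least-squares fit via SVD, a standard technique for fitting a plane to three or more points; the point coordinates are hypothetical:

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane through at least three camera-frame image points:
    returns the centroid (a point on the plane) and the unit normal, taken as
    the right singular vector of the smallest singular value."""
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts - centroid)
    return centroid, vt[-1]

# Three image points of the surface to be measured, all lying on z = 2.
centroid, normal = fit_plane([(0, 0, 2), (1, 0, 2), (0, 1, 2)])
```

The resulting projection plane is what the first and second straight lines are intersected with to obtain the first and second image points.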
In combination with any one of the embodiments of the present application, the shape of the object to be measured is a regular shape or a quasi-regular shape, and the surface to be measured is an upper bottom surface of the object to be measured;
The acquisition unit 11 is configured to:
Acquiring coordinates of the third corner point under the pixel coordinate system, and acquiring a depth value of the third corner point from the depth map; the connecting line between the third corner point and the first corner point is the height edge of the object to be measured;
The first processing unit 12 is configured to:
obtaining a third image point of the third corner point under the camera coordinate system according to the internal parameter, the depth value of the third corner point and the coordinate of the third corner point under the pixel coordinate system;
A distance between the first image point and the third image point is determined as the height of the object to be measured.
In combination with any embodiment of the present application, the first processing unit 12 is configured to:
Acquiring a first depth value of a first line segment from the depth map; the two endpoints of the first line segment are the first corner point and the second corner point respectively; the first depth value is positively correlated with the depth value of the first corner point, and/or the first depth value is positively correlated with the depth value of the second corner point;
obtaining a mapping relation between the depth value and the transmission factor; the transmission factor characterizes the conversion relation between the size in the image and the real size; the transmission factor is positively correlated with the depth value;
obtaining a transmission factor of the first line segment according to the mapping relation and the first depth value;
And obtaining the distance between the first corner point and the second corner point according to the transmission factor of the first line segment and the length of the first line segment.
In combination with any embodiment of the present application, the surface of the object to be measured comprises a non-body structure, and the apparatus 1 further comprises:
A first detecting unit 13, configured to perform edge detection processing on the two-dimensional image, so as to obtain a first edge and a second edge; the first edge and the second edge are positioned in the same surface of the object to be measured, and the gradient of the first edge and the gradient of the second edge exceed an edge threshold;
a second processing unit 14 for determining a distance between the first edge and the second edge as a dimension of the non-body structure.
In combination with any of the embodiments of the present application, the device 1 further comprises:
a second detecting unit 15, configured to, after the edge detection processing on the two-dimensional image obtains the first edge and the second edge and before the distance between the first edge and the second edge is determined as the dimension of the non-body structure, perform curvature detection processing on the first edge and the second edge respectively to obtain a first curvature of the first edge and a second curvature of the second edge;
The second processing unit 14 is configured to:
In the event that both the first curvature and the second curvature exceed a curvature threshold, a distance between the first edge and the second edge is determined as a dimension of the non-body structure.
In combination with any embodiment of the present application, the obtaining unit 11 is further configured to:
Acquiring dynamic display information;
The measuring device 1 further comprises:
And the display unit 16 is configured to perform dynamic display processing on the first corner point and the second corner point according to the dynamic display information, and display the first corner point and the second corner point after the dynamic display processing.
In combination with any one of the embodiments of the present application, the dynamic display information includes an interface element; the display unit 16 is configured to:
Move the interface element from the first corner point to the second corner point.
In combination with any embodiment of the present application, the display unit 16 is configured to:
Obtaining the moving speed of the interface element according to the distance between the first corner point and the second corner point; the moving speed is inversely related to the distance between the first corner point and the second corner point;
And moving the interface element from the first corner point to the second corner point at the moving speed.
In combination with any of the embodiments of the present application, the two-dimensional imaging device includes an RGB camera 17; the RGB camera 17 belongs to the measuring device 1; the measuring device 1 further comprises a depth camera 18; in the case of detecting a measurement instruction for the object to be measured, the measurement apparatus 1 photographs the object to be measured using the RGB camera 17 and the depth camera 18, respectively, to obtain the two-dimensional image and the depth map.
In some embodiments, the functions or modules included in the apparatus provided by the embodiments of the present application may be used to perform the methods described in the foregoing method embodiments, and specific implementations thereof may refer to descriptions of the foregoing method embodiments, which are not repeated herein for brevity.
Fig. 12 is a schematic hardware structure of a measurement device according to an embodiment of the present application. The measuring device 2 comprises a processor 21, a memory 22, an input device 23 and an output device 24, which are coupled by connectors including various interfaces, transmission lines or buses, etc.; the embodiment of the application is not limited in this respect. It should be appreciated that in the various embodiments of the application, "coupled" means interconnected in a particular manner, including directly, or indirectly through other devices, for example through various interfaces, transmission lines, buses, etc.
The processor 21 may be one or more graphics processors (graphics processing unit, GPUs), which in the case of a GPU as the processor 21 may be a single core GPU or a multi-core GPU. Alternatively, the processor 21 may be a processor group formed by a plurality of GPUs, and the plurality of processors are coupled to each other through one or more buses. In the alternative, the processor may be another type of processor, and the embodiment of the application is not limited.
Memory 22 may be used to store computer program instructions as well as various types of computer program code for performing aspects of the present application. Optionally, the memory includes, but is not limited to, random access memory (random access memory, RAM), read-only memory (ROM), erasable programmable read-only memory (erasable programmable read only memory, EPROM), or portable read-only memory (compact disc read-only memory, CD-ROM) for associated instructions and data.
The input means 23 are for inputting data and/or signals and the output means 24 are for outputting data and/or signals. The input device 23 and the output device 24 may be separate devices or may be an integral device.
It will be appreciated that in the embodiment of the present application, the memory 22 may be used to store not only related instructions, but also related data, for example, the memory 22 may be used to store a two-dimensional image obtained through the input device 23, or the memory 22 may be further used to store a distance between the first corner point and the second corner point obtained through the processor 21, etc., and the embodiment of the present application is not limited to the data specifically stored in the memory.
It will be appreciated that figure 12 shows only a simplified design of a measuring device. In practical applications, the measuring device may also include other necessary elements, including but not limited to any number of input/output devices, processors, memories, etc., and all measuring devices that can implement the embodiments of the present application are within the scope of the present application.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
It will be clear to those skilled in the art that, for convenience and brevity of description, the specific working procedures of the systems, apparatuses, and units described above may refer to the corresponding procedures in the foregoing method embodiments, and are not repeated herein. It will further be apparent to those skilled in the art that the description of each embodiment of the present application has its own emphasis; for convenience and brevity, the same or similar parts may not be described in detail in different embodiments, and parts that are not described, or not described in detail, in one embodiment may be found in the descriptions of other embodiments.
In the several embodiments provided by the present application, it should be understood that the disclosed systems, devices, and methods may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative, e.g., the division of the units is merely a logical function division, and there may be additional divisions when actually implemented, e.g., multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or units, which may be in electrical, mechanical or other form.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiments of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit.
In the above embodiments, implementation may be in whole or in part by software, hardware, firmware, or any combination thereof. When implemented in software, the embodiments may be realized in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on a computer, the flows or functions according to the embodiments of the present application are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable apparatus. The computer instructions may be stored in, or transmitted through, a computer-readable storage medium. The computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center in a wired manner (e.g., coaxial cable, optical fiber, digital subscriber line (DSL)) or a wireless manner (e.g., infrared, radio, microwave). The computer-readable storage medium may be any available medium that can be accessed by a computer, or a data storage device, such as a server or data center, integrating one or more available media. The available medium may be a magnetic medium (e.g., a floppy disk, a hard disk, a magnetic tape), an optical medium (e.g., a digital versatile disc (DVD)), or a semiconductor medium (e.g., a solid state disk (SSD)), or the like.
Those of ordinary skill in the art will appreciate that all or part of the flows of the above method embodiments may be accomplished by a computer program instructing related hardware. The program may be stored in a computer-readable storage medium, and the program, when executed, may include the flows of the above method embodiments. The aforementioned storage medium includes various media that can store program code, such as a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.

Claims (12)

1. A method of measurement, the method comprising:
Acquiring a two-dimensional image containing an object to be measured and a depth map of the two-dimensional image; the two-dimensional image is acquired by a two-dimensional imaging device;
Obtaining the distance between the first corner point of the object to be measured and the second corner point of the object to be measured according to the two-dimensional image and the depth map; the obtaining, according to the two-dimensional image and the depth map, a distance between the first corner point of the object to be measured and the second corner point of the object to be measured includes: acquiring a first depth value of a first line segment from the depth map; obtaining a mapping relation between a depth value and a transmission factor, wherein the transmission factor represents a conversion relation between a size in the two-dimensional image and a real size; obtaining a transmission factor of the first line segment according to the mapping relation and the first depth value; and obtaining the distance between the first corner point and the second corner point according to the transmission factor of the first line segment and the length of the first line segment, wherein the first line segment is a line segment passing through the first corner point and the second corner point in the two-dimensional image.
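The computation recited in claim 1 can be sketched outside the claim language as follows. The claim leaves the depth-to-transmission-factor mapping unspecified; the sketch assumes the pinhole relation (factor = depth / focal length), and samples the segment's "first depth value" at the segment midpoint — both are illustrative assumptions, not the patent's prescribed method.

```python
import numpy as np

def corner_distance(depth_map, first_corner, second_corner, focal_length_px):
    """Distance between two corner points per claim 1's scheme: look up a
    depth value for the line segment joining the corners, map it to a
    transmission factor (real size per pixel), then scale the segment's
    pixel length by that factor."""
    p1 = np.asarray(first_corner, dtype=float)   # (col, row) of first corner
    p2 = np.asarray(second_corner, dtype=float)  # (col, row) of second corner
    # First depth value of the segment: sample the depth map at its midpoint
    # (an assumed sampling strategy; the claim does not fix one).
    mid = ((p1 + p2) / 2).round().astype(int)
    depth = float(depth_map[mid[1], mid[0]])
    # Transmission factor: real-world length spanned by one pixel at this
    # depth, under the assumed pinhole mapping.
    factor = depth / focal_length_px
    # Distance = transmission factor x pixel length of the first line segment.
    return factor * float(np.linalg.norm(p2 - p1))
```

For example, a 4-pixel segment seen at 2 m depth with a 500-pixel focal length spans 2/500 m per pixel, i.e. 16 mm.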
2. The method according to claim 1, wherein the obtaining the distance between the first corner of the object to be measured and the second corner of the object to be measured from the two-dimensional image and the depth map comprises:
Obtaining a first image point of a first corner point of the object to be measured under a camera coordinate system of the two-dimensional imaging device and a second image point of a second corner point of the object to be measured under the camera coordinate system according to the two-dimensional image and the depth map;
and determining the distance between the first image point and the second image point as the distance between the first corner point and the second corner point.
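Claim 2's variant — image points in the camera coordinate system, then their Euclidean distance — can be sketched with the standard pinhole back-projection. The intrinsic parameters (fx, fy, cx, cy) are assumed known from calibration; the claim itself does not state how the image points are obtained.

```python
import numpy as np

def backproject(pixel, depth, fx, fy, cx, cy):
    """Map a 2D pixel plus its depth to a 3D point in the camera frame
    using the pinhole model (assumed intrinsics fx, fy, cx, cy)."""
    u, v = pixel
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.array([x, y, depth])

def corner_distance_3d(corner1, depth1, corner2, depth2, fx, fy, cx, cy):
    """Claim 2's scheme: back-project both corner points to image points in
    the camera coordinate system, then take their Euclidean distance."""
    p1 = backproject(corner1, depth1, fx, fy, cx, cy)
    p2 = backproject(corner2, depth2, fx, fy, cx, cy)
    return float(np.linalg.norm(p1 - p2))
```

Unlike the transmission-factor route of claim 1, this variant handles corner points at different depths, since each corner gets its own 3D coordinate before the distance is taken.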
3. The method according to claim 2, characterized in that the object to be measured comprises a surface to be measured;
The obtaining, according to the two-dimensional image and the depth map, a first image point of a first corner point of the object to be measured in a camera coordinate system of the two-dimensional imaging device, and a second image point of a second corner point of the object to be measured in the camera coordinate system includes:
obtaining a projection plane of the surface to be measured under a camera coordinate system of the two-dimensional imaging device according to coordinates of at least three first points in the surface to be measured in the two-dimensional image and depth values of the at least three first points obtained from the depth map;
The first image point and the second image point are determined from the projection plane.
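The projection plane of claim 3 can be obtained from three or more back-projected surface points by a plane fit. A least-squares fit via SVD is one standard instantiation; the claim does not prescribe the fitting method, and with exactly three non-collinear points the fit reduces to the plane through them.

```python
import numpy as np

def fit_projection_plane(points_3d):
    """Least-squares plane through >= 3 camera-frame points (claim 3's
    projection plane). Returns (normal, d) with normal . p + d = 0 for
    points p on the plane."""
    pts = np.asarray(points_3d, dtype=float)
    centroid = pts.mean(axis=0)
    # The plane normal is the singular vector of the centred points with the
    # smallest singular value (the direction of least variance).
    _, _, vt = np.linalg.svd(pts - centroid)
    normal = vt[-1]
    d = -float(normal @ centroid)
    return normal, d
```

The first and second image points can then be determined by intersecting each corner's viewing ray with this plane, which makes the corner coordinates robust to missing or noisy depth readings at the corners themselves.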
4. A method according to any one of claims 1 to 3, wherein the surface of the object to be measured comprises a non-body structure, the method further comprising:
Performing edge detection processing on the two-dimensional image to obtain a first edge and a second edge; the first edge and the second edge are positioned in the same surface of the object to be measured, and the gradient of the first edge and the gradient of the second edge exceed an edge threshold;
a distance between the first edge and the second edge is determined as a dimension of the non-body structure.
5. The method of claim 4, wherein after said performing edge detection processing on said two-dimensional image to obtain a first edge and a second edge, before said determining a distance between said first edge and said second edge as a dimension of said non-body structure, said method further comprises:
respectively performing curvature detection processing on the first edge and the second edge to obtain a first curvature of the first edge and a second curvature of the second edge;
the determining a distance between the first edge and the second edge as the dimension of the non-body structure includes:
In the event that both the first curvature and the second curvature exceed a curvature threshold, a distance between the first edge and the second edge is determined as a dimension of the non-body structure.
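The curvature gate of claims 4-5 can be sketched as below. The claims specify neither the curvature computation nor the edge-to-edge distance; the sketch assumes a discrete turning-angle curvature and a mean nearest-point distance, both illustrative choices.

```python
import numpy as np

def mean_curvature(edge_points):
    """Discrete curvature of a sampled edge: the turning angle between
    consecutive chords divided by the local arc length, averaged over the
    interior points (an assumed curvature estimate)."""
    p = np.asarray(edge_points, dtype=float)
    v1 = p[1:-1] - p[:-2]
    v2 = p[2:] - p[1:-1]
    cross_z = v1[:, 0] * v2[:, 1] - v1[:, 1] * v2[:, 0]
    dot = (v1 * v2).sum(axis=1)
    angles = np.abs(np.arctan2(cross_z, dot))
    arc = 0.5 * (np.linalg.norm(v1, axis=1) + np.linalg.norm(v2, axis=1))
    return float((angles / arc).mean())

def non_body_size(edge1, edge2, curvature_threshold):
    """Claim 5's gate: the distance between the two edges counts as the
    non-body structure's dimension only when both edge curvatures exceed
    the curvature threshold; otherwise no size is reported."""
    if not (mean_curvature(edge1) > curvature_threshold
            and mean_curvature(edge2) > curvature_threshold):
        return None
    p1 = np.asarray(edge1, dtype=float)
    p2 = np.asarray(edge2, dtype=float)
    # Edge-to-edge distance: mean nearest-point distance from edge1 to edge2.
    d = np.linalg.norm(p1[:, None, :] - p2[None, :, :], axis=2)
    return float(d.min(axis=1).mean())
```

Sampled on a circle of radius r, the turning-angle estimate recovers the analytic curvature 1/r, which makes the threshold in claim 5 directly interpretable as a minimum bend of the detected edges.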
6. A method according to any one of claims 1 to 3, characterized in that the method further comprises:
Acquiring dynamic display information;
And carrying out dynamic display processing on the first corner point and the second corner point according to the dynamic display information, and displaying the first corner point and the second corner point after the dynamic display processing.
7. The method of claim 6, wherein the dynamic display information comprises an interface element; the dynamic display processing is performed on the first corner point and the second corner point according to the dynamic display information, and the first corner point and the second corner point after the dynamic display processing are displayed, including:
And moving the interface element from the first corner point to the second corner point.
8. The method of claim 7, wherein the moving the interface element from the first corner point to the second corner point comprises:
obtaining the moving speed of the interface element according to the distance between the first corner point and the second corner point; the moving speed is inversely related to the distance between the first corner point and the second corner point;
And moving the interface element from the first corner point to the second corner point at the moving speed.
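One reading of claim 8's animation rule is sketched below. The claim only requires that the moving speed decrease as the corner-to-corner distance grows; strict inverse proportionality and the speed constant here are assumptions.

```python
import math

def interface_motion(first_corner, second_corner, speed_constant=100.0):
    """Claim 8, sketched: the interface element's moving speed is inversely
    related to the distance between the corner points. speed_constant
    (assumed, in pixels^2 per second) sets the proportionality, so longer
    measured spans animate at a proportionally slower speed."""
    distance = math.dist(first_corner, second_corner)
    speed = speed_constant / distance   # inverse relation to distance
    duration = distance / speed         # seconds to reach the second corner
    return speed, duration
```

Under this rule the traversal time grows quadratically with distance, which visually emphasises larger measurements during the dynamic display.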
9. A method according to any one of claims 1 to 3, wherein the two-dimensional imaging device comprises an RGB camera; the RGB camera belongs to a terminal; the terminal further comprises a depth camera; the obtaining a two-dimensional image containing an object to be measured and a depth map of the two-dimensional image comprises:
And under the condition that a measurement instruction aiming at the object to be measured is detected, the terminal respectively uses the RGB camera and the depth camera to shoot the object to be measured to obtain the two-dimensional image and the depth map.
10. A measurement device, the device comprising:
an acquisition unit configured to acquire a two-dimensional image containing an object to be measured and a depth map of the two-dimensional image; the two-dimensional image is acquired by a two-dimensional imaging device;
The first processing unit is used for obtaining the distance between the first corner point of the object to be measured and the second corner point of the object to be measured according to the two-dimensional image and the depth map; the first processing unit is specifically configured to: acquire a first depth value of a first line segment from the depth map; obtain a mapping relation between a depth value and a transmission factor, wherein the transmission factor represents a conversion relation between a size in the two-dimensional image and a real size; obtain a transmission factor of the first line segment according to the mapping relation and the first depth value; and obtain the distance between the first corner point and the second corner point according to the transmission factor of the first line segment and the length of the first line segment, wherein the first line segment is a line segment passing through the first corner point and the second corner point in the two-dimensional image.
11. An electronic device, comprising: a processor and a memory for storing computer program code comprising computer instructions which, when executed by the processor, cause the electronic device to perform the method of any one of claims 1 to 9.
12. A computer readable storage medium, characterized in that the computer readable storage medium has stored therein a computer program comprising program instructions which, when executed by a processor, cause the processor to perform the method of any of claims 1 to 9.
CN202010899124.3A 2020-08-31 2020-08-31 Measurement method and device, electronic equipment and storage medium Active CN112150527B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010899124.3A CN112150527B (en) 2020-08-31 2020-08-31 Measurement method and device, electronic equipment and storage medium


Publications (2)

Publication Number Publication Date
CN112150527A CN112150527A (en) 2020-12-29
CN112150527B true CN112150527B (en) 2024-05-17

Family

ID=73890296

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010899124.3A Active CN112150527B (en) 2020-08-31 2020-08-31 Measurement method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN112150527B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113188465A (en) * 2021-04-21 2021-07-30 中铁第四勘察设计院集团有限公司 Drilling hole depth identification method and device based on video learning

Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2013179712A1 (en) * 2012-05-30 2013-12-05 日立コンシューマエレクトロニクス株式会社 Information processing device, information processing method, and program
CN106125994A (en) * 2016-06-17 2016-11-16 深圳迪乐普数码科技有限公司 Coordinate matching method and use control method and the terminal of this coordinate matching method
JP2017151094A (en) * 2016-02-08 2017-08-31 ゼネラル・エレクトリック・カンパニイ Method and device for automatically identifying point of interest in depth measurement of viewed object
WO2017167239A1 (en) * 2016-03-31 2017-10-05 纳恩博(北京)科技有限公司 Mobile control method, mobile electronic apparatus and mobile control system, and storage medium
WO2019136315A2 (en) * 2018-01-05 2019-07-11 Aquifi, Inc. Systems and methods for volumetric sizing
CN110006343A (en) * 2019-04-15 2019-07-12 Oppo广东移动通信有限公司 Measurement method, device and the terminal of object geometric parameter
CN110060240A (en) * 2019-04-09 2019-07-26 南京链和科技有限公司 A kind of tyre contour outline measurement method based on camera shooting
CN110188495A (en) * 2019-06-04 2019-08-30 中住(北京)数据科技有限公司 A method of the two-dimentional floor plan based on deep learning generates three-dimensional floor plan
CN110276774A (en) * 2019-06-26 2019-09-24 Oppo广东移动通信有限公司 Drawing practice, device, terminal and the computer readable storage medium of object
CN110889890A (en) * 2019-11-29 2020-03-17 深圳市商汤科技有限公司 Image processing method and device, processor, electronic device and storage medium
CN110893617A (en) * 2018-09-13 2020-03-20 深圳市优必选科技有限公司 Obstacle detection method and device and storage device
CN111160178A (en) * 2019-12-19 2020-05-15 深圳市商汤科技有限公司 Image processing method and device, processor, electronic device and storage medium

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9241147B2 (en) * 2013-05-01 2016-01-19 Legend3D, Inc. External depth map transformation method for conversion of two-dimensional images to stereoscopic images
JP6089722B2 (en) * 2013-01-23 2017-03-08 富士通株式会社 Image processing apparatus, image processing method, and image processing program
US9872011B2 (en) * 2015-11-24 2018-01-16 Nokia Technologies Oy High-speed depth sensing with a hybrid camera setup
CN109300190B (en) * 2018-09-06 2021-08-10 百度在线网络技术(北京)有限公司 Three-dimensional data processing method, device, equipment and storage medium


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Multitemporal accuracy and precision assessment of unmanned aerial system photogrammetry for slope-scale snow depth maps in Alpine terrain; Adams et al.; Pure and Applied Geophysics; 2017-12-14; Vol. 175; 3303-3324 *
Research on Depth Measurement Method of Monocular Vision Images; He Lixin; China Doctoral Dissertations Full-text Database, Information Science and Technology; 2018-11-15 (No. 11); I138-10 *

Also Published As

Publication number Publication date
CN112150527A (en) 2020-12-29

Similar Documents

Publication Publication Date Title
US11842438B2 (en) Method and terminal device for determining occluded area of virtual object
CN107223269B (en) Three-dimensional scene positioning method and device
CN112771573A (en) Depth estimation method and device based on speckle images and face recognition system
US20120127171A1 (en) Techniques for rapid stereo reconstruction from images
CN112686877B (en) Binocular camera-based three-dimensional house damage model construction and measurement method and system
CN110276774B (en) Object drawing method, device, terminal and computer-readable storage medium
CN113077548B (en) Collision detection method, device, equipment and storage medium for object
CN103914876A (en) Method and apparatus for displaying video on 3D map
CN116152306B (en) Method, device, apparatus and medium for determining masonry quality
CN115439607A (en) Three-dimensional reconstruction method and device, electronic equipment and storage medium
US10462450B2 (en) Combining two-dimensional images with depth data to detect junctions or edges
CN107203962B (en) Method for making pseudo-3D image by using 2D picture and electronic equipment
CN112197708B (en) Measuring method and device, electronic device and storage medium
Boulanger et al. ATIP: A Tool for 3D Navigation inside a Single Image with Automatic Camera Calibration.
JP7432793B1 (en) Mapping methods, devices, chips and module devices based on three-dimensional point clouds
CN112150527B (en) Measurement method and device, electronic equipment and storage medium
US10593054B2 (en) Estimation of 3D point candidates from a location in a single image
CN112634366B (en) Method for generating position information, related device and computer program product
CN112102391A (en) Measuring method and device, electronic device and storage medium
CN113379826A (en) Method and device for measuring volume of logistics piece
CN112102390A (en) Measuring method and device, electronic device and storage medium
CN111915666A (en) Volume measurement method and device based on mobile terminal
CN112146628B (en) Measurement method and device, electronic equipment and storage medium
CN109493419B (en) Method and device for acquiring digital surface model from oblique photography data
TW201816725A (en) Method for improving occluded edge quality in augmented reality based on depth camera

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant