CN110766744A - MR volume measurement method and device based on 3D depth camera - Google Patents
MR volume measurement method and device based on 3D depth camera
- Publication number
- CN110766744A CN110766744A CN201911070133.5A CN201911070133A CN110766744A CN 110766744 A CN110766744 A CN 110766744A CN 201911070133 A CN201911070133 A CN 201911070133A CN 110766744 A CN110766744 A CN 110766744A
- Authority
- CN
- China
- Prior art keywords
- target object
- coordinates
- vertex
- coordinate system
- image information
- Prior art date
- Legal status: Granted
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/60—Analysis of geometric attributes
- G06T7/62—Analysis of geometric attributes of area, perimeter, diameter or volume
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10028—Range image; Depth image; 3D point clouds
Landscapes
- Physics & Mathematics (AREA)
- Engineering & Computer Science (AREA)
- Geometry (AREA)
- Computer Vision & Pattern Recognition (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Length Measuring Devices By Optical Means (AREA)
- Processing Or Creating Images (AREA)
Abstract
The application provides an MR (mixed reality) volume measurement method and device based on a 3D depth camera. The method comprises the following steps: when a depth image information frame containing a target object is acquired, calculating a view matrix from the depth image information frame; calculating the coordinates of each vertex of the target object in the camera coordinate system from the depth image information frame; left-multiplying the coordinates of each vertex in the camera coordinate system by the inverse of the view matrix to obtain the coordinates of each vertex in the world coordinate system; and calculating the volume of the target object from the coordinates of its vertices in the world coordinate system. In this way, volume measurement is realized and its accuracy can be improved.
Description
Technical Field
The application relates to the technical field of volume measurement, in particular to an MR volume measurement method and device based on a 3D depth camera.
Background
With the development of the Internet, online shopping has become a mainstream form of consumption, and the express delivery industry has grown with it.
In the express delivery industry, the postage for some articles is calculated from their volume, so the volume of an article needs to be measured before its postage can be calculated. How to measure that volume accurately is the problem addressed here.
Disclosure of Invention
In order to solve the above technical problem, embodiments of the present application provide an MR volume measurement method and apparatus based on a 3D depth camera, so as to realize volume measurement and improve its accuracy. The technical solution is as follows:
An MR volume measurement method based on a 3D depth camera comprises the following steps:
when a depth image information frame containing a target object is acquired, calculating a view matrix from the depth image information frame;
calculating the coordinates of each vertex of the target object in the camera coordinate system from the depth image information frame;
left-multiplying the coordinates of each vertex of the target object in the camera coordinate system by the inverse of the view matrix to obtain the coordinates of each vertex in the world coordinate system;
and calculating the volume of the target object from the coordinates of its vertices in the world coordinate system.
Preferably, the target object is shaped as a cuboid (rectangular box);
or, the target object has a non-cuboid shape, and the vertices of the target object are taken to be the vertices of its smallest circumscribed cuboid.
Preferably, calculating the volume of the target object from the coordinates of its vertices in the world coordinate system comprises:
selecting one of the vertices as a target vertex;
calculating the lengths of the three edges that meet at the target vertex, and taking them as the length, the width, and the height respectively;
taking the product of the length, the width, and the height as the volume of the target object.
Preferably, the method further comprises:
constructing, using SLAM (simultaneous localization and mapping) and the depth image information frame, a three-dimensional virtual world whose world-coordinate origin is the position at which the 3D depth camera was started;
calculating a projection matrix from the depth image information frame;
rendering each vertex of the target object in the three-dimensional virtual world according to its coordinates in the world coordinate system, and connecting the rendered vertices to obtain a cuboid;
rendering the length, width, and height of the cuboid at the midpoints of any three adjacent edges of the cuboid, and rendering the volume at its center point;
acquiring a color image information frame captured at the same time as the depth image information frame, and superimposing the cuboid onto the color image information frame to obtain a target image;
and left-multiplying the coordinates of the target image in the world coordinate system by the view matrix to obtain its coordinates in the camera coordinate system, left-multiplying those coordinates by the projection matrix to obtain the clip coordinates of the target image, and converting the clip coordinates into screen coordinates using OpenGL.
Preferably, connecting the rendered vertices to obtain a cuboid comprises:
connecting the rendered vertices to obtain a cuboid, wherein the edges of the cuboid that would be visible in the real world are rendered with solid lines, and the edges that would be hidden are rendered with dashed lines.
Preferably, before calculating the volume of the target object from the coordinates of its vertices in the world coordinate system, the method further comprises:
upon receiving a request generated by a user tapping the target object on the screen, saving the coordinates of each vertex of the target object in the world coordinate system.
An MR volume measurement device based on a 3D depth camera comprises:
a first calculation module, configured to calculate a view matrix from a depth image information frame when the depth image information frame containing a target object is acquired;
a second calculation module, configured to calculate the coordinates of each vertex of the target object in the camera coordinate system from the depth image information frame;
a third calculation module, configured to left-multiply the coordinates of each vertex of the target object in the camera coordinate system by the inverse of the view matrix to obtain the coordinates of each vertex in the world coordinate system;
and a fourth calculation module, configured to calculate the volume of the target object from the coordinates of its vertices in the world coordinate system.
Preferably, the target object is shaped as a cuboid;
or, the target object has a non-cuboid shape, and the vertices of the target object are taken to be the vertices of its smallest circumscribed cuboid.
Preferably, the fourth calculation module is specifically configured to:
select one of the vertices as a target vertex;
calculate the lengths of the three edges that meet at the target vertex, taking them as the length, the width, and the height respectively;
and take the product of the length, the width, and the height as the volume of the target object.
Preferably, the apparatus further comprises:
a construction module, configured to construct, using SLAM (simultaneous localization and mapping) and the depth image information frame, a three-dimensional virtual world whose world-coordinate origin is the position at which the 3D depth camera was started;
a fifth calculation module, configured to calculate a projection matrix from the depth image information frame;
a first rendering module, configured to render each vertex of the target object in the three-dimensional virtual world according to its coordinates in the world coordinate system, and to connect the rendered vertices to obtain a cuboid;
a second rendering module, configured to render the length, width, and height of the cuboid at the midpoints of any three adjacent edges of the cuboid, and to render the volume at its center point;
a superposition module, configured to acquire a color image information frame captured at the same time as the depth image information frame, and to superimpose the cuboid onto the color image information frame to obtain a target image;
and a conversion module, configured to left-multiply the coordinates of the target image in the world coordinate system by the view matrix to obtain its coordinates in the camera coordinate system, left-multiply those coordinates by the projection matrix to obtain the clip coordinates of the target image, and convert the clip coordinates into screen coordinates using OpenGL.
Preferably, the first rendering module is specifically configured to:
connect the rendered vertices to obtain a cuboid, wherein the edges of the cuboid that would be visible in the real world are rendered with solid lines, and the edges that would be hidden are rendered with dashed lines.
Preferably, the apparatus further comprises:
an interaction module, configured to save the coordinates of each vertex of the target object in the world coordinate system upon receiving a request generated by a user tapping the target object on the screen.
A smart device, comprising: a 3D depth camera, a memory and a processor;
the 3D depth camera is used for acquiring a depth image information frame containing a target object;
the memory is used for storing programs;
the processor is configured to implement the steps of any one of the above-mentioned 3D depth camera-based MR volume measurement methods when executing the program.
Compared with the prior art, the beneficial effects of the present application are:
in the method, the coordinates of each vertex of the target object in the world coordinate system are calculated from the depth image information frame containing the target object, and the volume of the target object is calculated from those coordinates, so that volume measurement is realized and its accuracy can be improved.
Drawings
In order to illustrate the technical solutions in the embodiments of the present application more clearly, the drawings needed in the description of the embodiments are briefly introduced below. The drawings described here show only some embodiments of the present application; those skilled in the art can obtain other drawings from them without inventive effort.
Fig. 1 is a flowchart of embodiment 1 of the 3D depth camera-based MR volume measurement method provided by the present application;
FIG. 2 is a schematic diagram of a cuboid provided by the present application;
FIG. 3 is a schematic diagram of a data storage structure provided by the present application;
FIG. 4 is a flowchart of embodiment 2 of the 3D depth camera-based MR volume measurement method provided by the present application;
FIG. 5 is a schematic structural diagram of another cuboid provided by the present application;
FIG. 6 is a schematic illustration of a target image provided by the present application;
Fig. 7 is a flowchart of embodiment 3 of the 3D depth camera-based MR volume measurement method provided by the present application;
Fig. 8 is a schematic diagram of the logical structure of the 3D depth camera-based MR volume measurement device provided by the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described below clearly and completely with reference to the drawings. The described embodiments are only some, not all, of the embodiments of the present application. All other embodiments obtained by a person skilled in the art from these embodiments without creative effort shall fall within the protection scope of the present application.
The embodiment of the application discloses an MR volume measurement method based on a 3D depth camera, comprising: when a depth image information frame containing a target object is acquired, calculating a view matrix from the depth image information frame; calculating the coordinates of each vertex of the target object in the camera coordinate system from the depth image information frame; left-multiplying the coordinates of each vertex in the camera coordinate system by the inverse of the view matrix to obtain the coordinates of each vertex in the world coordinate system; and calculating the volume of the target object from the coordinates of its vertices in the world coordinate system. In this way, volume measurement is realized and its accuracy can be improved.
Next, the 3D depth camera-based MR volume measurement method disclosed in an embodiment of the present application is described. Fig. 1 shows a flowchart of embodiment 1 of the method, which may include the following steps:
Step S11: when a depth image information frame containing a target object is acquired, calculate a view matrix from the depth image information frame.
In the present embodiment, the shape of the target object may be, but is not limited to, a cuboid.
Of course, the target object may also have a non-cuboid shape.
In this embodiment, a depth image information frame and a color image information frame containing the target object can be captured by a 3D depth camera based on TOF (time of flight) or structured light.
Step S12: calculate the coordinates of each vertex of the target object in the camera coordinate system from the depth image information frame.
When the target object is a cuboid, the vertices of the target object are understood to be the vertices of the target object itself.
When the target object is not a cuboid, the vertices of the target object are understood to be the vertices of its smallest circumscribed cuboid.
Step S13: left-multiply the coordinates of each vertex of the target object in the camera coordinate system by the inverse of the view matrix to obtain the coordinates of each vertex in the world coordinate system.
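For illustration, a minimal sketch of this camera-to-world transformation, assuming a 4x4 view matrix and a column-vector convention; the function and variable names are illustrative, not part of the original disclosure:

```python
import numpy as np

def camera_to_world(vertices_cam: np.ndarray, view: np.ndarray) -> np.ndarray:
    """Map (N, 3) vertex coordinates from the camera coordinate system to the
    world coordinate system by left-multiplying with the inverse view matrix."""
    inv_view = np.linalg.inv(view)                 # inverse of the 4x4 view matrix
    ones = np.ones((vertices_cam.shape[0], 1))
    homogeneous = np.hstack([vertices_cam, ones])  # promote points to (N, 4)
    world = (inv_view @ homogeneous.T).T           # left-multiply each vertex
    return world[:, :3]
```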
Step S14: calculate the volume of the target object from the coordinates of its vertices in the world coordinate system.
In this embodiment, when the target object is a cuboid, or when it is not a cuboid and its vertices are taken to be those of its smallest circumscribed cuboid, calculating the volume of the target object from the coordinates of its vertices in the world coordinate system may include:
and A11, selecting one of the vertexes as a target vertex.
And A12, calculating the lengths of the three edges intersected at the target vertex as the length, the width and the height respectively.
As shown in fig. 2, vertex 3 may be selected as the target vertex, and the distance between vertex 3 and vertex 1 may be calculated as the length; calculating the distance from the vertex 3 to the vertex 4 as the width; the distance between vertex 3 and vertex 7 is calculated as high.
A13, taking the product of the length, the width and the height as the volume of the target object.
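A sketch of steps A11 to A13 under the FIG. 2 labeling, where the three vertices adjacent to the target vertex are passed in explicitly; the names are illustrative:

```python
import numpy as np

def cuboid_volume(target, adj_len, adj_wid, adj_hgt) -> float:
    """Steps A11-A13: the lengths of the three edges meeting at the target
    vertex are taken as length, width, and height; their product is the volume."""
    target = np.asarray(target, dtype=float)
    length = np.linalg.norm(np.asarray(adj_len) - target)
    width = np.linalg.norm(np.asarray(adj_wid) - target)
    height = np.linalg.norm(np.asarray(adj_hgt) - target)
    return length * width * height

# With the FIG. 2 labeling: volume = cuboid_volume(v3, v1, v4, v7)
```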
In this embodiment, in addition to the coordinates of each vertex in the world coordinate system, the coordinates of the midpoint of each of the three edges meeting at the target vertex and the coordinates of the center point of the cuboid may also be calculated in the world coordinate system, and all of these coordinates may be stored. The stored structure may be a volumeBean, as shown in FIG. 3.
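As one possible reading of that stored structure, a sketch of such a volumeBean in Python; the field names are assumptions, since the actual layout is given only in FIG. 3:

```python
from dataclasses import dataclass
from typing import List, Tuple

Vec3 = Tuple[float, float, float]

@dataclass
class VolumeBean:
    vertices_world: List[Vec3]   # eight cuboid vertices, world coordinates
    edge_midpoints: List[Vec3]   # midpoints of the three edges at the target vertex
    center: Vec3                 # center point of the cuboid
    volume: float                # length * width * height
```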
As another alternative embodiment of the present application, Fig. 4 shows a flowchart of embodiment 2 of the 3D depth camera-based MR volume measurement method provided by the present application. This embodiment is mainly an extension of the method described in embodiment 1 above, and may include, but is not limited to, the following steps:
step S21, when a depth image information frame including a target object is acquired, a view matrix is calculated according to the depth image information frame.
And step S22, calculating the coordinates of each vertex of the target object in a camera coordinate system according to the depth image information frame.
And step S23, respectively multiplying the coordinates of each vertex of the target object in the camera coordinate system by the inverse matrix of the view matrix to obtain the coordinates of each vertex of the target object in the world coordinate system.
And step S24, calculating the volume of the target object by using the coordinates of each vertex of the target object in the world coordinate system.
Step S25: construct, using SLAM and the depth image information frame, a three-dimensional virtual world whose world-coordinate origin is the position at which the 3D depth camera was started.
Using SLAM (simultaneous localization and mapping), a three-dimensional virtual world with the camera's starting position as the world-coordinate origin can be constructed from the depth image information frames; matching and differing points are extracted across frames to refine the virtual world.
Step S26: calculate a projection matrix from the depth image information frame.
In this embodiment, the projection matrix may be calculated from the depth image information frame using SLAM.
Step S27: render each vertex of the target object in the three-dimensional virtual world according to its coordinates in the world coordinate system, and connect the rendered vertices to obtain a cuboid.
In this embodiment, the rendered vertices may simply be connected by solid lines to obtain the cuboid.
However, in the world coordinate system each face of the box has a front-to-back relationship with the camera, and to enhance the sense of realism the rendered vertices can instead be connected with a combination of solid and dashed lines. Specifically, rendering may follow this principle:
every 4 points form a face, and each face has a normal. Call the front of a face its A side and the back its B side; for each face of the cuboid, the side facing outward is its A side and the side facing inward is its B side. A vector can be formed from the camera position to the center of the face. If the angle between this vector and the outward normal is between 90° and 180°, the camera sees the front (A side) of that face at this position, and the 4 edges of the face are rendered with solid lines. If the angle is between 0° and 90°, the camera sees the back (B side), and the 4 edges of the face are rendered with dashed lines.
The result is that the edges of the cuboid that are visible in the real world are rendered with solid lines, and the edges that are hidden are rendered with dashed lines.
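A minimal sketch of this visibility test, using the fact that the angle between the camera-to-face-center vector and the outward normal exceeds 90° exactly when their dot product is negative; the names are illustrative:

```python
import numpy as np

def face_visible(face_pts: np.ndarray, outward_normal: np.ndarray,
                 camera_pos: np.ndarray) -> bool:
    """True if the camera sees the front (A side) of a 4-vertex face, i.e. the
    angle between the camera-to-center vector and the outward normal is in
    (90°, 180°]."""
    center = face_pts.mean(axis=0)       # center of the four face vertices
    to_center = center - camera_pos      # vector from camera position to center
    return float(np.dot(to_center, outward_normal)) < 0.0

# Edges of a visible face are drawn solid, those of a hidden face dashed:
# style = "solid" if face_visible(pts, normal, cam_pos) else "dashed"
```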
It should be noted that, by using SLAM, the rendered cuboid is guaranteed to track the camera's movement, so it stays registered to the target object as the camera moves.
Step S28: render the length, width, and height of the cuboid at the midpoints of any three adjacent edges of the cuboid, and render the volume at its center point.
For example, as shown in FIG. 5, if the length is 0.286 m, the width 0.328 m, and the height 0.237 m, then 0.286 m, 0.328 m, and 0.237 m are rendered at the midpoints of three adjacent edges of the cuboid, and the volume 0.0222 m³ is rendered at its center point.
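The anchor points for these labels are plain midpoints; a sketch, assuming the target vertex and the vertex diagonally opposite it are known (names are illustrative):

```python
import numpy as np

def label_anchors(target, adj_len, adj_wid, adj_hgt, opposite):
    """Midpoints of the three edges meeting at the target vertex (anchors for
    the length/width/height labels) and the cuboid center (anchor for the
    volume label, computed as the midpoint of a space diagonal)."""
    target = np.asarray(target, dtype=float)
    midpoints = [(target + np.asarray(v)) / 2.0
                 for v in (adj_len, adj_wid, adj_hgt)]
    center = (target + np.asarray(opposite)) / 2.0
    return midpoints, center
```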
Step S29: acquire a color image information frame captured at the same time as the depth image information frame, and superimpose the cuboid onto the color image information frame to obtain a target image.
Superimposing the rendered cuboid, together with its length, width, height, and volume labels, onto the color image information frame displays the measured volume more intuitively.
Taking the cuboid shown in fig. 5 as an example, the resulting target image can be seen in fig. 6, which shows the combined effect of the cuboid and the color image information frame.
Step S210: left-multiply the coordinates of the target image in the world coordinate system by the view matrix to obtain its coordinates in the camera coordinate system; left-multiply those coordinates by the projection matrix to obtain the clip coordinates of the target image; and convert the clip coordinates into screen coordinates using OpenGL.
By left-multiplying the world coordinates of the target image by the view matrix to obtain camera coordinates, left-multiplying the camera coordinates by the projection matrix to obtain clip coordinates, and converting the clip coordinates into screen coordinates with OpenGL, the target image can be displayed accurately on the screen.
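A minimal sketch of this chain for a single point, assuming a column-vector convention and omitting the clipping step itself; the final viewport mapping mirrors what OpenGL performs after the clip stage (names are illustrative):

```python
import numpy as np

def world_to_screen(p_world, view, projection, screen_w, screen_h):
    """World -> camera (left-multiply by view) -> clip (left-multiply by
    projection) -> NDC (perspective divide) -> screen (viewport transform)."""
    p = np.append(np.asarray(p_world, dtype=float), 1.0)  # homogeneous point
    p_cam = view @ p                      # camera coordinate system
    p_clip = projection @ p_cam           # clip coordinates
    ndc = p_clip[:3] / p_clip[3]          # normalized device coords in [-1, 1]
    x = (ndc[0] + 1.0) * 0.5 * screen_w   # viewport transform
    y = (1.0 - ndc[1]) * 0.5 * screen_h   # flip y: screen origin at the top-left
    return x, y
```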
As another alternative embodiment of the present application, Fig. 7 shows a flowchart of embodiment 3 of the 3D depth camera-based MR volume measurement method provided by the present application. This embodiment is mainly an extension of the method described in embodiment 1 above, and may include, but is not limited to, the following steps:
step S31, when a depth image information frame including a target object is acquired, a view matrix is calculated according to the depth image information frame.
And step S32, calculating the coordinates of each vertex of the target object in a camera coordinate system according to the depth image information frame.
And step S33, respectively multiplying the coordinates of each vertex of the target object in the camera coordinate system by the inverse matrix of the view matrix to obtain the coordinates of each vertex of the target object in the world coordinate system.
The detailed procedures of steps S31-S33 can be found in the related descriptions of steps S11-S13 in embodiment 1, and are not repeated herein.
Step S34: upon receiving a request generated by a user tapping the target object on the screen, save the coordinates of each vertex of the target object in the world coordinate system.
When the user taps the target object on the screen, the tap position is converted from the screen coordinate system into the corresponding position in the depth map (the default depth map size is 640 x 480), and the coordinates of each vertex of the target object in the world coordinate system can then be saved according to that position, which improves interactivity.
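The description does not spell out this conversion, but with a full-screen preview a simple proportional scaling suffices; a sketch under that assumption:

```python
def screen_to_depth_pixel(tap_x, tap_y, screen_w, screen_h,
                          depth_w=640, depth_h=480):
    """Scale a screen tap into the corresponding depth-map pixel, assuming the
    depth map is displayed full-screen without letterboxing."""
    u = int(tap_x * depth_w / screen_w)
    v = int(tap_y * depth_h / screen_h)
    u = min(max(u, 0), depth_w - 1)       # clamp to valid pixel indices
    v = min(max(v, 0), depth_h - 1)
    return u, v
```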
Step S35: calculate the volume of the target object from the coordinates of its vertices in the world coordinate system.
The detailed process of step S35 can be referred to the related description of step S14 in embodiment 1, and is not repeated here.
Next, the MR volume measurement device based on a 3D depth camera provided in an embodiment of the present application is described; the device described below and the method described above may be referred to correspondingly.
Referring to fig. 8, the MR volume measurement device based on a 3D depth camera includes: a first calculation module 11, a second calculation module 12, a third calculation module 13, and a fourth calculation module 14.
The first calculation module 11 is configured to calculate a view matrix from a depth image information frame when the depth image information frame containing a target object is acquired;
the second calculation module 12 is configured to calculate the coordinates of each vertex of the target object in the camera coordinate system from the depth image information frame;
the third calculation module 13 is configured to left-multiply the coordinates of each vertex of the target object in the camera coordinate system by the inverse of the view matrix to obtain the coordinates of each vertex in the world coordinate system;
and the fourth calculation module 14 is configured to calculate the volume of the target object from the coordinates of its vertices in the world coordinate system.
In this embodiment, the target object may be shaped as a cuboid;
or, the target object may have a non-cuboid shape, with the vertices of the target object taken to be the vertices of its smallest circumscribed cuboid.
In this embodiment, the fourth calculation module may be specifically configured to:
select one of the vertices as a target vertex;
calculate the lengths of the three edges that meet at the target vertex, taking them as the length, the width, and the height respectively;
and take the product of the length, the width, and the height as the volume of the target object.
In this embodiment, the MR volume measurement device based on the 3D depth camera may further include:
a construction module, configured to construct, using SLAM (simultaneous localization and mapping) and the depth image information frame, a three-dimensional virtual world whose world-coordinate origin is the position at which the 3D depth camera was started;
a fifth calculation module, configured to calculate a projection matrix from the depth image information frame;
a first rendering module, configured to render each vertex of the target object in the three-dimensional virtual world according to its coordinates in the world coordinate system, and to connect the rendered vertices to obtain a cuboid;
a second rendering module, configured to render the length, width, and height of the cuboid at the midpoints of any three adjacent edges of the cuboid, and to render the volume at its center point;
a superposition module, configured to acquire a color image information frame captured at the same time as the depth image information frame, and to superimpose the cuboid onto the color image information frame to obtain a target image;
and a conversion module, configured to left-multiply the coordinates of the target image in the world coordinate system by the view matrix to obtain its coordinates in the camera coordinate system, left-multiply those coordinates by the projection matrix to obtain the clip coordinates of the target image, and convert the clip coordinates into screen coordinates using OpenGL.
In this embodiment, the first rendering module may be specifically configured to:
connect the rendered vertices to obtain a cuboid, wherein the edges of the cuboid that would be visible in the real world are rendered with solid lines, and the edges that would be hidden are rendered with dashed lines.
In this embodiment, the MR volume measurement device based on the 3D depth camera may further include:
an interaction module, configured to save the coordinates of each vertex of the target object in the world coordinate system upon receiving a request generated by a user tapping the target object on the screen.
In another embodiment of the present application, a smart device is presented, comprising:
a 3D depth camera, a memory and a processor;
the 3D depth camera is used for acquiring a depth image information frame containing a target object;
the memory is used for storing programs;
the processor is configured to implement the steps of the 3D depth camera-based MR volume measurement method according to any one of embodiments 1 to 3 when executing the program.
It should be noted that each embodiment is mainly described as a difference from the other embodiments, and the same and similar parts between the embodiments may be referred to each other. For the device-like embodiment, since it is basically similar to the method embodiment, the description is simple, and for the relevant points, reference may be made to the partial description of the method embodiment.
Finally, it should also be noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
For convenience of description, the above devices are described as being divided into various units by function, and are described separately. Of course, the functionality of the units may be implemented in one or more software and/or hardware when implementing the present application.
From the above description of the embodiments, it is clear to those skilled in the art that the present application can be implemented by software plus a necessary general-purpose hardware platform. Based on this understanding, the technical solutions of the present application may be embodied in the form of a software product, which may be stored in a storage medium such as ROM/RAM, a magnetic disk, or an optical disk, and which includes several instructions for enabling a computer device (which may be a personal computer, a server, or a network device) to execute the methods described in the embodiments, or in some parts of the embodiments, of the present application.
The MR volume measurement method and device based on the 3D depth camera provided by the present application have been introduced in detail above. Specific examples are used herein to explain the principles and implementation of the present application, and the description of the above embodiments is only intended to help understand the method and its core idea. Meanwhile, for those skilled in the art, there may be variations in specific implementation and application scope according to the idea of the present application. In summary, the contents of this specification should not be construed as limiting the present application.
Claims (10)
1. An MR volume measurement method based on a 3D depth camera, characterized by comprising:
when a depth image information frame containing a target object is acquired, calculating a view matrix from the depth image information frame;
calculating the coordinates of each vertex of the target object in the camera coordinate system from the depth image information frame;
left-multiplying the coordinates of each vertex of the target object in the camera coordinate system by the inverse of the view matrix to obtain the coordinates of each vertex in the world coordinate system;
and calculating the volume of the target object from the coordinates of its vertices in the world coordinate system.
2. The method of claim 1, wherein the target object is shaped as a cuboid;
or the target object has a non-cuboid shape, and the vertices of the target object are the vertices of its smallest circumscribed cuboid.
3. The method of claim 2, wherein calculating the volume of the target object from the coordinates of its vertices in the world coordinate system comprises:
selecting one of the vertices as a target vertex;
calculating the lengths of the three edges that meet at the target vertex, and taking them as the length, the width, and the height respectively;
taking the product of the length, the width, and the height as the volume of the target object.
4. The method of claim 1, further comprising:
constructing, using SLAM (simultaneous localization and mapping) and the depth image information frame, a three-dimensional virtual world whose world-coordinate origin is the position at which the 3D depth camera was started;
calculating a projection matrix from the depth image information frame;
rendering each vertex of the target object in the three-dimensional virtual world according to its coordinates in the world coordinate system, and connecting the rendered vertices to obtain a cuboid;
rendering the length, width, and height of the cuboid at the midpoints of any three adjacent edges of the cuboid, and rendering the volume at its center point;
acquiring a color image information frame captured at the same time as the depth image information frame, and superimposing the cuboid onto the color image information frame to obtain a target image;
and left-multiplying the coordinates of the target image in the world coordinate system by the view matrix to obtain its coordinates in the camera coordinate system, left-multiplying those coordinates by the projection matrix to obtain the clip coordinates of the target image, and converting the clip coordinates into screen coordinates using OpenGL.
5. The method of claim 4, wherein connecting the rendered vertices to obtain a cuboid comprises:
connecting the rendered vertices to obtain a cuboid, wherein the edges of the cuboid that would be visible in the real world are rendered with solid lines, and the edges that would be hidden are rendered with dashed lines.
6. The method of claim 1, wherein before calculating the volume of the target object from the coordinates of its vertices in the world coordinate system, the method further comprises:
upon receiving a request generated by a user tapping the target object on the screen, saving the coordinates of each vertex of the target object in the world coordinate system.
7. An MR volume measurement device based on a 3D depth camera, characterized by comprising:
a first calculation module, configured to calculate a view matrix from a depth image information frame when the depth image information frame containing a target object is acquired;
a second calculation module, configured to calculate the coordinates of each vertex of the target object in the camera coordinate system from the depth image information frame;
a third calculation module, configured to left-multiply the coordinates of each vertex of the target object in the camera coordinate system by the inverse of the view matrix to obtain the coordinates of each vertex in the world coordinate system;
and a fourth calculation module, configured to calculate the volume of the target object from the coordinates of its vertices in the world coordinate system.
8. The device of claim 7, wherein the target object is shaped as a cuboid;
or the target object has a non-cuboid shape, and the vertices of the target object are the vertices of its smallest circumscribed cuboid.
9. The device of claim 7, further comprising:
a construction module, configured to construct, using SLAM (simultaneous localization and mapping) and the depth image information frame, a three-dimensional virtual world whose world-coordinate origin is the position at which the 3D depth camera was started;
a fifth calculation module, configured to calculate a projection matrix from the depth image information frame;
a first rendering module, configured to render each vertex of the target object in the three-dimensional virtual world according to its coordinates in the world coordinate system, and to connect the rendered vertices to obtain a cuboid;
a second rendering module, configured to render the length, width, and height of the cuboid at the midpoints of any three adjacent edges of the cuboid, and to render the volume at its center point;
a superposition module, configured to acquire a color image information frame captured at the same time as the depth image information frame, and to superimpose the cuboid onto the color image information frame to obtain a target image;
and a conversion module, configured to left-multiply the coordinates of the target image in the world coordinate system by the view matrix to obtain its coordinates in the camera coordinate system, left-multiply those coordinates by the projection matrix to obtain the clip coordinates of the target image, and convert the clip coordinates into screen coordinates using OpenGL.
10. A smart device, comprising: a 3D depth camera, a memory and a processor;
the 3D depth camera is used for acquiring a depth image information frame containing a target object;
the memory is used for storing programs;
the processor, when executing the program, is configured to carry out the steps of the 3D depth camera based MR volume measurement method according to any one of claims 1 to 6.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911070133.5A CN110766744B (en) | 2019-11-05 | 2019-11-05 | MR volume measurement method and device based on 3D depth camera |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911070133.5A CN110766744B (en) | 2019-11-05 | 2019-11-05 | MR volume measurement method and device based on 3D depth camera |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110766744A true CN110766744A (en) | 2020-02-07 |
CN110766744B CN110766744B (en) | 2022-06-10 |
Family
ID=69335785
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201911070133.5A Active CN110766744B (en) | 2019-11-05 | 2019-11-05 | MR volume measurement method and device based on 3D depth camera |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110766744B (en) |
- 2019-11-05: application CN201911070133.5A filed in China; granted as CN110766744B (status: Active)
Patent Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102163340A (en) * | 2011-04-18 | 2011-08-24 | 宁波万里电子科技有限公司 | Method for labeling three-dimensional (3D) dynamic geometric figure data information in computer system |
CN106813568A (en) * | 2015-11-27 | 2017-06-09 | 阿里巴巴集团控股有限公司 | object measuring method and device |
WO2017092307A1 (en) * | 2015-12-01 | 2017-06-08 | 乐视控股(北京)有限公司 | Model rendering method and device |
CN105844686A (en) * | 2016-03-31 | 2016-08-10 | 深圳市菲森科技有限公司 | Image 3D effect display method and system |
WO2017197988A1 (en) * | 2016-05-16 | 2017-11-23 | 杭州海康机器人技术有限公司 | Method and apparatus for determining volume of object |
CN106096016A (en) * | 2016-06-24 | 2016-11-09 | 北京建筑大学 | A kind of network three-dimensional point cloud method for visualizing and device |
CN107564089A (en) * | 2017-08-10 | 2018-01-09 | 腾讯科技(深圳)有限公司 | Three dimensional image processing method, device, storage medium and computer equipment |
CN108537834A (en) * | 2018-03-19 | 2018-09-14 | 杭州艾芯智能科技有限公司 | A kind of volume measuring method, system and depth camera based on depth image |
CN109186461A (en) * | 2018-07-27 | 2019-01-11 | 南京阿凡达机器人科技有限公司 | A kind of measurement method and measuring device of cabinet size |
Non-Patent Citations (3)
Title |
---|
DAI, Angela et al., "BundleFusion: Real-Time Globally Consistent 3D Reconstruction Using On-the-Fly Surface Reintegration", ACM Transactions on Graphics, vol. 36, no. 3, 31 March 2017 (2017-03-31) *
SUN, Haihong et al., "Establishment of a 3D Virtual Forest Stand Based on OpenGL", Journal of Northeast Forestry University, vol. 38, no. 08, 31 October 2010 (2010-10-31) *
TAO, Senbai, "Research and Implementation of a 3D Visualized Volume Measurement System", China Masters' Theses Full-text Database, no. 04, 15 April 2014 (2014-04-15) *
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111524180A (en) * | 2020-04-23 | 2020-08-11 | Oppo广东移动通信有限公司 | Object volume calculation method and device, electronic equipment and storage medium |
CN111768373A (en) * | 2020-06-18 | 2020-10-13 | 北京交通大学 | Hierarchical pavement marking damage detection method based on deep learning |
CN112077018A (en) * | 2020-08-30 | 2020-12-15 | 哈尔滨工程大学 | Sea cucumber size screening method adopting computer vision |
Also Published As
Publication number | Publication date |
---|---|
CN110766744B (en) | 2022-06-10 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110766744B (en) | MR volume measurement method and device based on 3D depth camera | |
CN109636919B (en) | Holographic technology-based virtual exhibition hall construction method, system and storage medium | |
Sheffer et al. | Mesh parameterization methods and their applications | |
JP5665872B2 (en) | Shape optimization based on connectivity for real-time rendering | |
Mitani et al. | 3D sketch: sketch-based model reconstruction and rendering | |
EP3340183B1 (en) | Graphics processing employing cube map texturing | |
US10013506B2 (en) | Annotating real-world objects | |
CN111127623B (en) | Model rendering method and device, storage medium and terminal | |
KR20160052441A (en) | Automated texturing mapping and animation from images | |
WO2008067330A1 (en) | Interacting with 2d content on 3d surfaces | |
US6978230B1 (en) | Apparatus, system, and method for draping annotations on to a geometric surface | |
KR101265870B1 (en) | 2/3 2d/3d combined rendering | |
CN108090952A (en) | 3 d modeling of building method and apparatus | |
US6791563B2 (en) | System, method and computer program product for global rendering | |
EP3594906B1 (en) | Method and device for providing augmented reality, and computer program | |
CN114820980A (en) | Three-dimensional reconstruction method and device, electronic equipment and readable storage medium | |
US11074747B2 (en) | Computer-aided techniques for designing detailed three-dimensional objects | |
Hartl et al. | Rapid reconstruction of small objects on mobile phones | |
Efrat et al. | On incremental rendering of silhouette maps of polyhedral scene. | |
CN109427084B (en) | Map display method, device, terminal and storage medium | |
CN116310037A (en) | Model appearance updating method and device and computing equipment | |
JP3350672B2 (en) | Contact part drawing method, contact part drawing apparatus and storage medium therefor | |
CN116129085B (en) | Virtual object processing method, device, storage medium, and program product | |
KR20140013292A (en) | Ray-tracing arithmetic operation method and system | |
Ohta et al. | Photo-based Desktop Virtual Reality System Implemented on a Web-browser |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
GR01 | Patent grant | |