CN110619661A - Method for measuring volume of outdoor stock ground raw material based on augmented reality


Info

Publication number: CN110619661A
Application number: CN201910882976.9A
Authority: CN (China)
Prior art keywords: model, augmented reality, transformation, volume, virtual
Legal status: Pending (the legal status is an assumption and is not a legal conclusion)
Inventor: 王伟乾
Original and current assignee: Individual
Filing date: 2019-09-18
Publication date: 2019-12-27
Priority: CN201910882976.9A


Classifications

    • G01B 11/03: Measuring arrangements characterised by the use of optical techniques for measuring length, width or thickness, by measuring coordinates of points
    • G01B 11/24: Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures
    • G06T 7/30: Image analysis; determination of transform parameters for the alignment of images, i.e. image registration
    • G06T 7/62: Image analysis; analysis of geometric attributes of area, perimeter, diameter or volume
    • G06T 7/70: Image analysis; determining position or orientation of objects or cameras
    • G06T 7/80: Image analysis; analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G06T 7/90: Image analysis; determination of colour characteristics
    • G06T 2207/10028: Image acquisition modality; range image, depth image, 3D point clouds
    • G06T 2207/30204: Subject of image; marker

Landscapes

  • Engineering & Computer Science
  • Physics & Mathematics
  • General Physics & Mathematics
  • Computer Vision & Pattern Recognition
  • Theoretical Computer Science
  • Geometry
  • Processing Or Creating Images

Abstract

The invention discloses a method for measuring the volume of raw materials in an outdoor stock ground based on augmented reality. A three-dimensional measurement system consisting of a camera and a laser ranging sensor acquires data for each characteristic point on the surface of the stockpile; the acquired data are passed through the sensor serial port to a wireless transmitting module for wireless transmission; after the computer receives the data through the wireless receiving module, test and analysis software running on the computer processes the coordinate data and performs grid division, volume calculation, three-dimensional reconstruction, history query and report output. The invention effectively solves the problem of measuring the volume of raw materials in an outdoor stock ground.

Description

Method for measuring volume of outdoor stock ground raw material based on augmented reality
Technical Field
The invention relates to a method for measuring the volume of outdoor stock ground raw materials based on augmented reality.
Background
In the production management of industrial enterprises, raw materials must be stored and moved in and out in a timely manner, which requires effective metering of the stockpiles stored in a stock yard. Direct and indirect measurement methods are currently used.
In the direct measurement method, an inspector measures the powdery pile with a tape measure or a handheld laser range finder and calculates its volume. The method requires the pile to be shaped manually or mechanically into a regular volume; because of the nature of powdery material, an inspector cannot carry out the work directly on the pile, so scaffolding must be erected. This consumes a great deal of manpower and material resources, takes a long time, exposes the inspector to severe dust pollution, and gives low measurement accuracy, so it cannot meet the requirements of field inspection.
In the indirect measurement method, the density of the powdery pile is measured first and its volume is then calculated from the total mass weighed on site. The method requires transport machinery and a large scale, consumes considerable manpower and material resources, causes serious dust pollution and loss of bulk material during loading and transport, and also has low measurement accuracy, so it likewise cannot meet the requirements of field inspection.
Computer Aided Design (CAD) refers to the methods and techniques by which engineers design products or projects using computer hardware and software, covering activities such as design, drawing, engineering analysis and documentation. From the 1950s to the present, CAD has passed through stages of two-dimensional graphic design, interactive graphic design, three-dimensional wire-frame modeling, three-dimensional solid modeling, free-form surface modeling, parametric design, feature-based modeling and parametric feature modeling. Although CAD technology has achieved great success, the design interface is still occupied by menu-driven interactive operation in a two-dimensional environment, which increases the difficulty of design.
Augmented Reality (AR) technology is a good solution to this problem. AR uses computer graphics and visualization technology to generate virtual objects that do not exist in the real environment, places them accurately in the real environment by means of sensing technology, merges them with the real environment through a display device, and presents the user with a new environment that has a realistic sensory effect. In the industrial field, AR has been widely applied in factory planning, product development and assembly, service and maintenance, and so on.
Three-dimensional interactive modeling in real space is essentially an interactive spatial localization process. Finding a coordinate representation of spatial position points in the world coordinate system is therefore the basis and key of three-dimensional interactive modeling, and it can be accomplished through the mutual transformation relations among the camera, the interaction tool and the reference markers.
The main criterion for judging the quality of an augmented reality system is how realistically the virtual model is superimposed on the real world: the virtual object must be accurately registered with the physical entity to obtain the desired augmentation effect. The main problem in registration is identifying the world coordinate system and matching it to the virtual model coordinate system. However, owing to environmental changes, device noise and the limited accuracy of estimation algorithms, the inputs to the various modules of the system carry large errors. Although some systems achieved pixel-level registration as early as the turn of the century, they imposed specific requirements: information detection and coordinate-system reconstruction had to be done with large visual markers, and accurate registration depended on expensive trackers and sensors. This makes an AR system very costly to build, so the application of augmented reality has been very limited. Therefore, considering the flexibility of the marker model and the cost of system construction, the invention integrates early marker detection, 3D spatial pose recovery of the marker and spatial transformation of the virtual model, and provides an implementation scheme for an augmented reality system based on simple visual markers.
Disclosure of Invention
The object of the invention is to overcome the defects of the prior art and provide a method for measuring the volume of raw materials in an outdoor stock ground based on augmented reality that overcomes the shortcomings of existing measurement methods.
In order to solve the technical problems, the invention provides the following technical scheme:
the invention provides a method for measuring the volume of outdoor stock ground raw materials based on augmented reality, which comprises the following steps: a three-dimensional measurement system consisting of a camera and a laser ranging sensor is used for acquiring data information of each characteristic point on the surface of the piled material; the obtained data is accessed to the wireless transmitting communication module through the serial port of the sensor to realize the wireless transmission of the data; after the computer obtains the data through the wireless receiving module, the test analysis software running in the computer processes the coordinate data and carries out grid division, volume calculation, three-dimensional reconstruction, historical query and report output operation.
Further, an augmented reality system implementation scheme based on simple visual markers is adopted, specifically comprising: establishing the system model structure, detecting the marked object, recovering the 3D pose of the marked object, and registering the images of the virtual model and the real-time scene.
Further, the system model structure comprises a marker detection and 3D pose recovery module, a virtual object generation module and an image registration module. A physical camera first photographs an object in the real world, and the spatial model information of the physical object is obtained by combining the marker detection module and the 3D pose recovery module; meanwhile, the virtual object generation module controls the virtual camera according to the intrinsic and extrinsic parameters obtained from the physical camera and generates the corresponding virtual layer; finally, the image registration module composites the corresponding virtual and real images into the final augmented reality effect.
Further, the detection of the marked object comprises marker model design and marker detection algorithm implementation. The marker model design specifically comprises: adopting a visual marker detection model combining dual color and shape features. The marker detection algorithm specifically comprises: recognizing the shape features of the marker in parallel while detecting its color, the shape features comprising a circle feature and a rectangle feature, detected by Hough circle recognition and contour fitting respectively; a candidate visual marker is obtained when the color and shape features match simultaneously, and the recognized image markers are matched with a predefined point sequence in physical space to obtain the spatial pose.
Further, the 3D pose recovery of the marked object is specifically as follows: according to the matching relation between the obtained visual marker sequence and the predefined physical-space point sequence, a monocular weak perspective model is established with a vision processing library, and the spatial 3D pose of the marked target is estimated approximately.
Further, the image registration of the virtual model with the real-time scene is specifically as follows: adopting the concept of the model-view transformation in OpenGL, the two transformations are combined in a single transformation matrix so that the view and the model are transformed simultaneously, whereby any vertex P_0 = (X, Y, Z, W) is transformed into the eye coordinate system P_e = (X_e, Y_e, Z_e, W_e) according to:
M · [X Y Z W]^T = [X_e Y_e Z_e W_e]^T (10)
The eye coordinate system determines the absolute position of the virtual observer, and the clipping planes of the model display are set by the projection transformation; perspective division is performed using a perspective projection whose visible range is a truncated cone, so that objects of equal physical size appear larger or smaller with distance; after these two transformations, the viewport transformation finally displays the rendered model on the screen, with the GLTools library of OpenGL carrying out the low-level tasks of data buffering, color processing and window-pixel handling, and matrix stacks used to accelerate the computation of the virtual model.
The invention has the following beneficial effects:
the invention effectively solves the problem of measuring the volume of the raw materials in the outdoor stock ground, firstly, a space coordinate system is established before measurement, then, a measuring point is selected from the coordinate system, and a three-dimensional measuring system consisting of a camera and a laser distance sensor is utilized to acquire three-dimensional coordinate data of each characteristic point on the surface of the stock pile to obtain a point set representing an entity. And then recombining the coordinate data according to a certain grid model to obtain a plurality of small volume microelements which can be calculated, and finally accumulating and summing to obtain the volume of the whole stockpile.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description serve to explain the principles of the invention and not to limit the invention. In the drawings:
FIG. 1 is a schematic diagram of the principles of the present invention;
FIG. 2 is a block diagram of the system architecture of the present invention;
FIG. 3 is a diagram of an augmented reality system model architecture;
FIG. 4 is a diagram of marker sequence detection;
FIG. 5 is a flow chart of a virtual model generation pipeline;
FIG. 6 is an exemplary diagram of a three-dimensional point cloud of a measured material pile;
FIG. 7 is a three-dimensional mesh model diagram obtained by triangulating the three-dimensional point cloud of the measured material pile in FIG. 6.
Detailed Description
The preferred embodiments of the present invention will be described below in conjunction with the accompanying drawings; it should be understood that the preferred embodiments described here serve only to illustrate and explain the invention and do not limit it.
As shown in fig. 1, a spatial coordinate system is first established before measurement, measuring points are then selected in this coordinate system, and a three-dimensional measurement system composed of a camera and a laser ranging sensor acquires the three-dimensional coordinates of each feature point on the surface of the pile, yielding a point set that represents the solid, called a "point cloud". The coordinate data are then recombined according to a grid model into many small, computable volume elements, which are finally summed to obtain the volume of the whole stockpile.
As shown in fig. 2, the system structure is designed as follows:
and a three-dimensional measuring system consisting of a camera and a laser ranging sensor is used for acquiring data information of each characteristic point on the surface of the piled material. The obtained data is accessed to the wireless transmitting communication module through the serial port of the sensor to realize the wireless transmission of the data. After the computer obtains the data through the wireless receiving module, the test analysis software running in the computer processes the coordinate data and carries out the operations of grid division, volume calculation, three-dimensional display, historical inquiry, report output and the like. As shown in fig. 2, is a hardware schematic block diagram of the overall system design. The stockpile volume measurement system mainly comprises a detection object, a high-definition camera, a laser ranging sensor, a wireless communication module and a reinforced notebook computer. The detection object can be a rectangle with a traveling crane indoors or a square stock ground with a reclaimer or a bucket wheel machine outdoors; the material stored in the stock ground can be the most common coal or any other material. The wireless communication module converts serial data output by the camera and the laser ranging sensor, performs wireless data transmission, finally converts the serial data into serial data and uploads the serial data to a computer, and transparent transmission of the serial data is realized [29 ]. After obtaining a complete three-dimensional point cloud information, the computer can perform modeling according to the corresponding mathematical model and realize the functional modules of volume calculation, three-dimensional display, historical query, report output and the like.
The stockpile volume measurement system consists of hardware and software. The hardware mainly comprises the camera, laser ranging sensor, Zigbee wireless communication module, ruggedized computer and reflecting plate; the data acquisition device formed by the ranging device and the wireless communication module collects three-dimensional coordinate information and transmits it to the computer for storage. The system software runs on the computer, controls the synchronous operation of the system hardware, stores and processes the measured data, calculates the volume and mass of the stockpile with a grid-division model, and reconstructs a visual three-dimensional display of the stockpile using OpenGL.
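Since the patent leaves the acquisition protocol unspecified, the following is a minimal sketch of the receiving side of this data path, assuming a pyserial-readable port and a hypothetical ASCII frame "x,y,z" per surface point; the port name, baud rate and frame layout are illustrative assumptions, not details from the patent.

```python
# Sketch of point-cloud acquisition over the serial link (assumptions: the
# Zigbee receiver appears as a serial port and emits one "x,y,z" ASCII line
# per measured surface point; neither is specified in the patent).
import serial  # pyserial


def read_point_cloud(port="/dev/ttyUSB0", baud=115200, n_points=1000):
    """Collect n_points (x, y, z) tuples from the wireless receiver."""
    points = []
    with serial.Serial(port, baud, timeout=5) as ser:
        while len(points) < n_points:
            line = ser.readline().decode("ascii", errors="ignore").strip()
            try:
                x, y, z = (float(v) for v in line.split(","))
                points.append((x, y, z))
            except ValueError:
                continue  # skip malformed or partial frames
    return points
```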
Principles of augmented reality modeling
As discussed in the background, conventional CAD confines the designer to menu-driven interactive operation in a two-dimensional environment, while AR integrates virtual objects with the real environment and has been widely applied in industry. The present method takes the AR environment as the interface and, based on graphics technology and a developed interaction tool, directly performs editing operations and tests of the virtual model in the real-space environment. Three-dimensional interactive modeling in real space is essentially an interactive spatial localization process, realized through the mutual transformation relations among the camera, the interaction tool and the reference markers, and the quality of the augmented reality system depends on how accurately the virtual object is registered with the physical entity. Considering the flexibility of the marker model and the cost of system construction, early marker detection, 3D spatial pose recovery of the marker, spatial transformation of the virtual model and related techniques are fused here into an augmented reality system implementation scheme based on simple visual markers.
1) System model structure
Fig. 3 shows the augmented reality system architecture established here; it mainly comprises a marker detection and 3D pose recovery module, a virtual object generation module and an image registration module (the blocks drawn on a white background in the figure represent the virtual object generation module). The system first photographs an object in the real world with a physical camera and obtains the spatial model information of the physical object by combining the marker detection module and the 3D pose recovery module. Meanwhile, the virtual object generation module controls the virtual camera according to the intrinsic and extrinsic parameters obtained from the physical camera and generates the corresponding virtual layer. Finally, the image registration module composites the corresponding virtual and real images into the final augmented reality effect.
2) Detection of marked objects
2.1 Marker model design
Augmented reality based on visual markers places markers in the real world in advance and maps the virtual model onto them; the AR system tracks and recognizes the markers to achieve, to the greatest extent, seamless fusion between the real and the virtual. However, when the camera and the marker move relative to each other in the AR system, a "drift" phenomenon may occur in which the virtual model cannot be superimposed on the marker in time. Early detection of the visual marker is therefore of real significance for overcoming error interference and improving the fusion performance of the augmented reality system. At present, most markers used in vision-based augmented reality systems are black-and-white planar markers, such as ARToolKit, ARTag and ARToolKitPlus markers and similar designs. Such markers make corner recognition easy, but their marker area is usually large, they are difficult to apply to complex surfaces, and operation lacks flexibility. For this reason, a visual marker detection model combining dual color and shape features is designed here: a candidate is accepted as a real marker to be recognized only when its color information and shape information both meet the detection criteria. The model target is a 53.2 × 38.1 cardboard sheet with 8 labels of different shapes and colors distributed around its edge and a circular color label at the center point. In addition, 4 redundant labels are provided in the marker template to improve the accuracy of spatial pose detection and to provide support when part of the template leaves the camera's field of view during registration.
2.2 Marker detection algorithm implementation
The traditional color detection method segments the image only by absolute color information; under the influence of external factors such as illumination and light intensity in a real-time environment, this kind of segmentation easily produces interference regions and makes marker recognition difficult. Therefore, to make the marker detection algorithm adaptive to environmental changes, the Lab color space, which is more robust to illumination, is adopted, and the color information in the real-time video frame is modeled with a Gaussian probability distribution: in the three Lab channels, with the Lab value of the pure-color label as the expectation, the corresponding channel of the video image is expressed in the form of the probability distribution shown in formula (1).
While the color of the visual marker is being detected, the shape features of the marker are recognized in parallel. The shape features comprise circles and rectangles, detected by Hough circle recognition and contour fitting respectively; a candidate visual marker is obtained when color and shape match simultaneously. To obtain the spatial pose, the recognized image markers must be matched to the predefined point sequence in physical space. In the designed marker template, however, there are two identical green labels even within the same shape class, so the recognized labels must be ordered and the actual sequence of the feature points confirmed. Taking the rectangle group as an example: the red and blue labels are uniquely determined, and once the red-blue line is drawn their positions can be distinguished from the other sides, while the two green labels necessarily lie on the same side of the red-blue line. Taking the blue label as the corner point and using the monotonicity of the cosine on [0, π], the green label with the smaller cosine value makes the larger angle with the red-blue line; the two green rectangular labels can thus be distinguished, as shown in fig. 4, and the two green circular labels can be sequence-labeled in the same way.
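As a sketch of the dual color-and-shape test described above, the following assumes OpenCV; the Lab reference value of each pure-color label, the Gaussian widths and the probability threshold are tuning parameters that the patent does not specify.

```python
# Minimal color+shape marker detection sketch (OpenCV). The Gaussian color
# model follows the Lab-channel formulation of formula (1); sigma and the
# threshold are illustrative tuning values.
import cv2
import numpy as np


def color_probability(img_bgr, lab_ref, sigma=(8.0, 8.0, 8.0)):
    """Per-pixel Gaussian likelihood of matching a reference Lab color."""
    lab = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2LAB).astype(np.float32)
    p = np.ones(lab.shape[:2], np.float32)
    for ch in range(3):  # treat the L, a, b channels as independent Gaussians
        p *= np.exp(-((lab[:, :, ch] - lab_ref[ch]) ** 2) / (2 * sigma[ch] ** 2))
    return p


def detect_markers(img_bgr, lab_ref, prob_thresh=0.5):
    mask = (color_probability(img_bgr, lab_ref) > prob_thresh).astype(np.uint8) * 255
    # Circle feature: Hough circle detection on the color mask.
    circles = cv2.HoughCircles(mask, cv2.HOUGH_GRADIENT, dp=1.2, minDist=20,
                               param1=100, param2=15, minRadius=5, maxRadius=60)
    # Rectangle feature: contour extraction followed by polygon fitting.
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    rects = [cv2.approxPolyDP(c, 0.04 * cv2.arcLength(c, True), True)
             for c in contours]
    rects = [r for r in rects if len(r) == 4]  # keep four-vertex (rectangular) fits
    return circles, rects
```

A candidate from this stage counts as a marker only if it passes both the color mask and one of the shape tests, mirroring the "simultaneous match" rule above.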
3) 3D pose recovery of the marked object
According to the matching relation between the obtained visual marker sequence and the predefined physical-space point sequence, a monocular weak perspective model is established with the OpenCV vision processing library, and the spatial 3D pose of the marked target is estimated approximately. When the target object is far enough from the camera, the distance between the object and the vision acquisition device is much larger than the depth differences between the marker points on the object. Under the weak perspective model it can then be assumed that all points on the object surface share the same scale factor s (the ratio of the focal length f to the average depth), and the projection model simplifies to
x_i = s·X_i,  y_i = s·Y_i (2)
Let P_0 be a point in the world coordinate system, with the origin of the world coordinate system coinciding with the origin of the model target; any other point of the target can then be represented relative to it by the vector P_i - P_0 (3). Let p_0 be the projection of P_0 in image space, with image coordinates (x_0, y_0). Under the weak perspective model, the correspondence between image space and Cartesian space is
x_i - x_0 = (P_i - P_0)·I,  y_i - y_0 = (P_i - P_0)·J (4), (5)
where I and J each represent a scaled unit vector of the image-space axes. Stacking the vectors P_i - P_0 for all points into a matrix M gives
x = M·I,  y = M·J (6)
where x and y collect the image-space projections. Rewriting formula (6) as a pair of linear equations yields
[I J] = M⁺·[x y] (7)
where M⁺ denotes the pseudo-inverse of M. Normalizing the I and J obtained from equation (7) gives the unit vectors i and j; the first two rows of the rotation matrix R consist of these two vectors, and the third row follows from the right-hand rule (their cross product), so the rotation matrix of the pose parameters can be expressed as
R = [i; j; i × j] (8)
Once I and J are obtained, the scale factor s can be estimated from their average magnitude, so the translation of the spatial reference point P_0 can be computed from its projected image coordinates p_0 and s:
T = p_0 / s = [x_0 y_0 f] / s (9)
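A sketch of this weak-perspective pose recovery (equations (2) to (9)) follows, under stated assumptions: matched image points and model points are given with index 0 as the reference point P_0, the focal length f is known in pixels, and the function and variable names are illustrative.

```python
# Weak-perspective (POS-style) pose recovery sketch following equations
# (2)-(9); img_pts and model_pts must be in matched order with row 0 = P_0.
import numpy as np


def weak_perspective_pose(img_pts, model_pts, f):
    """img_pts: (n,2) pixel coords; model_pts: (n,3) object coords; f: focal length."""
    M = model_pts[1:] - model_pts[0]           # stacked vectors P_i - P_0
    x = img_pts[1:, 0] - img_pts[0, 0]         # image offsets relative to p_0
    y = img_pts[1:, 1] - img_pts[0, 1]
    M_pinv = np.linalg.pinv(M)                 # M+ of equation (7)
    I, J = M_pinv @ x, M_pinv @ y
    s = 0.5 * (np.linalg.norm(I) + np.linalg.norm(J))  # scale from mean magnitude
    i = I / np.linalg.norm(I)
    j = J / np.linalg.norm(J)
    j = j - (i @ j) * i                        # enforce orthogonality of the rows
    j /= np.linalg.norm(j)
    R = np.vstack([i, j, np.cross(i, j)])      # third row by right-hand rule, eq. (8)
    T = np.array([img_pts[0, 0], img_pts[0, 1], f]) / s  # translation, eq. (9)
    return R, T, s
```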
4) Image registration of the virtual model with the real-time scene
Image registration is a key link in an augmented reality system. Compared with high-cost registration methods based on tracking sensors, an augmented reality fusion registration strategy based on simple visual markers is investigated here.
4.1 Spatial transformation of the virtual model
As shown in fig. 5, generating a virtual object at a specified location in the real physical world involves three spatial transformations: the model-view transformation, the projection transformation and the viewport transformation.
The view transformation is the first transformation in rendering the virtual scene: it points the camera at the scene according to a fixed rule, thereby specifying the observation point where the camera is located and the direction in which it looks. The input parameters of this transformation are obtained, according to a certain rule, from the physical camera actually observing the scene. The model transformation is the dual of the view transformation: the view transformation changes the whole scene, whereas the model transformation changes only a single object. This system adopts the concept of the model-view transformation (modelview transformation) in OpenGL and combines the two transformations in a single transformation matrix so that view and model are transformed simultaneously; any vertex P_0 = (X, Y, Z, W) is thereby transformed into the eye coordinate system P_e = (X_e, Y_e, Z_e, W_e) according to the following formula:
M · [X Y Z W]^T = [X_e Y_e Z_e W_e]^T (10)
The eye coordinate system determines the absolute position of the virtual observer; the clipping planes of the model display are then set by the projection transformation (projection transform). In this system, perspective division is performed using a perspective projection whose visible range is a truncated cone (frustum), so that objects of equal physical size appear larger or smaller according to their distance. After these two transformations are completed, the viewport transformation finally displays the rendered model on the screen. These transformations use the GLTools library of OpenGL to carry out the low-level tasks of data buffering, color processing and window-pixel handling, and matrix stacks are used to accelerate the computation of the virtual model.
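The NumPy sketch below mirrors the mathematics of these three transformations (the patent itself relies on OpenGL's GLTools and matrix stacks); the field of view, near/far planes and viewport size are illustrative values, not figures from the patent.

```python
# Model-view -> projection -> viewport pipeline in plain NumPy, following
# equation (10) and the frustum/viewport description above.
import numpy as np


def perspective(fov_y_deg, aspect, z_near, z_far):
    """OpenGL-style perspective matrix for a truncated-cone (frustum) volume."""
    t = 1.0 / np.tan(np.radians(fov_y_deg) / 2)
    return np.array([
        [t / aspect, 0, 0, 0],
        [0, t, 0, 0],
        [0, 0, (z_far + z_near) / (z_near - z_far),
               2 * z_far * z_near / (z_near - z_far)],
        [0, 0, -1, 0],
    ])


def transform_vertex(P0, modelview, projection, width=800, height=600):
    Pe = modelview @ P0          # model-view: world -> eye coordinates, eq. (10)
    clip = projection @ Pe       # projection: eye -> clip coordinates
    ndc = clip[:3] / clip[3]     # perspective division: distant objects shrink
    sx = (ndc[0] + 1) * 0.5 * width    # viewport transform: NDC -> window pixels
    sy = (ndc[1] + 1) * 0.5 * height
    return sx, sy


# Example: a vertex five units in front of an identity (untransformed) camera.
P0 = np.array([1.0, 1.0, -5.0, 1.0])
print(transform_vertex(P0, np.eye(4), perspective(60, 800 / 600, 0.1, 100.0)))
```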
Three-dimensional model display of measured material pile
Fig. 6 shows the three-dimensional point cloud of the measured material pile constructed by the system according to the global coordinates.
To reflect the three-dimensional model of the measured material pile more vividly, the system builds a topology over the three-dimensional point cloud data of the measured pile to form a three-dimensional mesh model. The three-dimensional mesh model obtained by triangulating the point cloud of fig. 6 is shown in fig. 7.
Principle of volume calculation
The volume calculation first projects all stockpile surface points of the measured three-dimensional point cloud onto the horizontal XY plane and triangulates the projected points, dividing them into triangles. Each triangle formed by projected points corresponds one-to-one with the triangle formed by the original surface points, and together they form a triangular prism; by this means the measured three-dimensional point cloud is divided into many triangular prisms. The system obtains the volume of the whole pile by computing the volume of each triangular prism and summing.
Triangulation algorithm
1) Definition of triangulation
Let V be a finite point set in the two-dimensional real domain, let an edge e be a closed line segment whose endpoints are points of the set, and let E be a set of such edges. Then a triangulation T = (V, E) of the point set V is a plane graph G that satisfies the following conditions:
(1) No edge in the plane graph contains any point of the set other than its endpoints.
(2) There are no intersecting edges.
(3) All faces in the plane graph are triangular, and the union of all triangular faces is the convex hull of the scattered point set V.
2) Definition of Delaunay triangulation
The triangulation most used in practice is the Delaunay triangulation, which is a special triangulation: if a triangulation T of the point set V contains only Delaunay edges, it is called a Delaunay triangulation. The Delaunay triangulation has the following excellent properties (a small verification sketch follows the list):
(1) Closeness: each triangle is formed by the three nearest points, and the line segments (triangle edges) do not intersect.
(2) Uniqueness: no matter from which part of the region the construction starts, the same result is finally obtained.
(3) Optimality: if the diagonals of the convex quadrilateral formed by any two adjacent triangles are exchanged, the smallest of the six interior angles of the two triangles does not increase.
(4) Most regular: if the smallest angles of all triangles in a triangulation are sorted in ascending order, the Delaunay triangulation yields the largest such sequence.
(5) Regionality: adding, deleting or moving a vertex affects only the adjacent triangles.
(6) Convex polygonal hull: the outermost boundary of the triangular mesh forms a convex polygonal hull.
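As an illustration (not taken from the patent), the sketch below builds a Delaunay triangulation of random points with scipy and verifies the empty-circumcircle property that underlies Delaunay edges: no other point of the set lies strictly inside any triangle's circumcircle.

```python
# Empty-circumcircle check for a scipy Delaunay triangulation; the point set
# and numerical tolerance are illustrative.
import numpy as np
from scipy.spatial import Delaunay


def circumcircle(a, b, c):
    """Circumcenter and circumradius of the 2D triangle abc."""
    d = 2 * (a[0] * (b[1] - c[1]) + b[0] * (c[1] - a[1]) + c[0] * (a[1] - b[1]))
    ux = ((a @ a) * (b[1] - c[1]) + (b @ b) * (c[1] - a[1]) + (c @ c) * (a[1] - b[1])) / d
    uy = ((a @ a) * (c[0] - b[0]) + (b @ b) * (a[0] - c[0]) + (c @ c) * (b[0] - a[0])) / d
    center = np.array([ux, uy])
    return center, np.linalg.norm(a - center)


pts = np.random.default_rng(0).random((30, 2))
tri = Delaunay(pts)
for simplex in tri.simplices:
    center, r = circumcircle(*pts[simplex])
    dists = np.linalg.norm(pts - center, axis=1)
    others = np.delete(dists, simplex)
    assert (others > r - 1e-9).all()  # no other point strictly inside the circle
print("all", len(tri.simplices), "triangles pass the empty-circumcircle test")
```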
Three-dimensional point cloud volume calculation
Fig. 7 illustrates the mesh used in the triangulation volume calculation. First, all stockpile surface points in the measured three-dimensional point cloud are projected onto the horizontal XY plane, and the projected points are triangulated, dividing them into triangles, each of area S_Δ;
the triangles formed by the projection points and the triangles formed by the surface points of the stockpile correspond one to form a triangular prism together, and the average value of the Z coordinates of the three stockpile surface points is taken as the height h of the triangular prism, as shown in a formula 4.5.
h=1/3*(Zi+Zj+Zk) (4.5)
The volume of a triangular prism can be found by the base area and height:
VΔ=SΔ*h (4.6)
the three-dimensional point cloud of the measured material pile can be divided into a plurality of triangular prisms by the sequential method, and the system can obtain the volume result of the whole material pile by calculating the volume of each triangular prism and accumulating and adding:
Vgeneral assembly=∑VΔ (4.7)。
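A minimal sketch of this prism-summation computation (equations (4.5) to (4.7)) follows, assuming the surface points arrive as an (n, 3) NumPy array in yard coordinates with the yard floor at z = 0, and using scipy's Delaunay triangulation in place of the patent's unspecified triangulation routine.

```python
# Stockpile volume by Delaunay triangulation of the XY projections and
# summation of triangular-prism volumes, following equations (4.5)-(4.7).
import numpy as np
from scipy.spatial import Delaunay


def stockpile_volume(points):
    """points: (n,3) surface points; returns the volume above the z=0 plane."""
    tri = Delaunay(points[:, :2])       # triangulate the projected points
    total = 0.0
    for simplex in tri.simplices:       # one triangular prism per triangle
        p = points[simplex]
        # Base area S of the projected triangle via the 2D cross product.
        a, b = p[1, :2] - p[0, :2], p[2, :2] - p[0, :2]
        S = 0.5 * abs(a[0] * b[1] - a[1] * b[0])
        h = p[:, 2].mean()              # h = (Z_i + Z_j + Z_k) / 3, eq. (4.5)
        total += S * h                  # V = S * h, eq. (4.6); summed per eq. (4.7)
    return total
```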
Finally, it should be noted that although the present invention has been described in detail with reference to the foregoing embodiments, those skilled in the art will understand that various changes, modifications and substitutions can be made without departing from the spirit and scope of the invention as defined by the appended claims; any modification, equivalent replacement or improvement made within the spirit and principles of the present invention shall fall within its scope of protection.

Claims (6)

1. A method for measuring the volume of outdoor stock ground raw materials based on augmented reality, characterized by comprising: acquiring, with a three-dimensional measurement system consisting of a camera and a laser ranging sensor, data for each characteristic point on the surface of the stockpile; passing the acquired data through the sensor serial port to a wireless transmitting module for wireless transmission; and, after a computer receives the data through a wireless receiving module, processing the coordinate data with test and analysis software running on the computer and performing grid division, volume calculation, three-dimensional reconstruction, history query and report output.
2. The augmented reality-based method for measuring the volume of the outdoor stock ground raw material according to claim 1, wherein an augmented reality system implementation scheme based on simple visual markers is adopted, specifically comprising: establishing the system model structure, detecting the marked object, recovering the 3D pose of the marked object, and registering the images of the virtual model and the real-time scene.
3. The augmented reality-based method for measuring the volume of the outdoor stock ground raw material according to claim 2, wherein the system model structure comprises a marker detection and 3D pose recovery module, a virtual object generation module and an image registration module; a physical camera first photographs an object in the real world, and the spatial model information of the physical object is obtained by combining the marker detection module and the 3D pose recovery module; meanwhile, the virtual object generation module controls the virtual camera according to the intrinsic and extrinsic parameters obtained from the physical camera and generates the corresponding virtual layer; finally, the image registration module composites the corresponding virtual and real images into the final augmented reality effect.
4. The augmented reality-based method for measuring the volume of the outdoor stock ground raw material according to claim 2, wherein the detection of the marked object comprises marker model design and marker detection algorithm implementation; the marker model design specifically comprises: adopting a visual marker detection model combining dual color and shape features; the marker detection algorithm specifically comprises: recognizing the shape features of the marker in parallel while detecting its color, the shape features comprising a circle feature and a rectangle feature detected by Hough circle recognition and contour fitting respectively; obtaining a candidate visual marker when the color and shape features match simultaneously; and matching the recognized image markers with a predefined point sequence in physical space to obtain the spatial pose.
5. The augmented reality-based method for measuring the volume of the outdoor stock ground raw material according to claim 2, wherein the 3D pose recovery of the marked object is specifically: according to the matching relation between the obtained visual marker sequence and the predefined physical-space point sequence, establishing a monocular weak perspective model with a vision processing library and approximately estimating the spatial 3D pose of the marked target.
6. The augmented reality-based method for measuring the volume of the outdoor stock ground raw material according to claim 2, wherein the image registration of the virtual model with the real-time scene is specifically: adopting the concept of the model-view transformation in OpenGL, combining the two transformations in a single transformation matrix so that the view and the model are transformed simultaneously, whereby any vertex P_0 = (X, Y, Z, W) is transformed into the eye coordinate system P_e = (X_e, Y_e, Z_e, W_e) according to:
M · [X Y Z W]^T = [X_e Y_e Z_e W_e]^T (10)
The eye coordinate system determines the absolute position of the virtual observer, and the clipping planes of the model display are set by the projection transformation; perspective division is performed using a perspective projection whose visible range is a truncated cone, so that objects of equal physical size appear larger or smaller with distance; after these two transformations, the viewport transformation finally displays the rendered model on the screen, with the GLTools library of OpenGL carrying out the low-level tasks of data buffering, color processing and window-pixel handling, and matrix stacks used to accelerate the computation of the virtual model.
Application CN201910882976.9A, priority date 2019-09-18, filing date 2019-09-18: Method for measuring volume of outdoor stock ground raw material based on augmented reality. Status: Pending. Publication: CN110619661A.

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910882976.9A CN110619661A (en) 2019-09-18 2019-09-18 Method for measuring volume of outdoor stock ground raw material based on augmented reality

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910882976.9A CN110619661A (en) 2019-09-18 2019-09-18 Method for measuring volume of outdoor stock ground raw material based on augmented reality

Publications (1)

Publication Number Publication Date
CN110619661A 2019-12-27

Family

ID=68923431

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910882976.9A Pending CN110619661A (en) 2019-09-18 2019-09-18 Method for measuring volume of outdoor stock ground raw material based on augmented reality

Country Status (1)

Country Link
CN (1) CN110619661A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112258401A (en) * 2020-09-28 2021-01-22 北京深睿博联科技有限责任公司 Image enhancement method and device
CN115512345A (en) * 2022-09-21 2022-12-23 浙江安吉天子湖热电有限公司 Traveling crane fixed coal inventory system and coal inventory method
CN115908432A (en) * 2023-03-13 2023-04-04 单县龙宇生物科技有限公司 Material output quality detection system and prediction method

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104680585A (en) * 2013-11-29 2015-06-03 深圳先进技术研究院 Three-dimensional reconstruction system and method for material stack
WO2017128934A1 (en) * 2016-01-29 2017-08-03 成都理想境界科技有限公司 Method, server, terminal and system for implementing augmented reality

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104680585A (en) * 2013-11-29 2015-06-03 深圳先进技术研究院 Three-dimensional reconstruction system and method for material stack
WO2017128934A1 (en) * 2016-01-29 2017-08-03 成都理想境界科技有限公司 Method, server, terminal and system for implementing augmented reality

Non-Patent Citations (6)

* Cited by examiner, † Cited by third party
Title
冯英瑞 et al.: "Research on Target Position Detection Method Based on Machine Vision", 《信息化研究》 (Informatization Research) *
卢韶芳 et al.: "Research on Virtual-Real Registration Method Based on Three-Dimensional Markers", 《吉林大学学报(信息科学版)》 (Journal of Jilin University, Information Science Edition) *
孟垂哲 et al.: "Research on Real-Time Monitoring Method for Circular Stockpile Working Conditions Based on Laser Data", 《电气传动》 (Electric Drive) *
张文军 et al.: "Design of an Irregular Coal Yard Measurement System Based on 3D Laser Scanning", 《煤炭科学技术》 (Coal Science and Technology) *
武雪玲 et al.: "Augmented Expression of Spatial Information via Virtual-Real Registration with Affine Transformation", 《计算机工程与应用》 (Computer Engineering and Applications) *
马铁民 et al.: "UAV Metering System for Large Open-Air Stockpiles", 《衡器》 (Weighing Apparatus) *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112258401A (en) * 2020-09-28 2021-01-22 北京深睿博联科技有限责任公司 Image enhancement method and device
CN112258401B (en) * 2020-09-28 2022-09-16 北京深睿博联科技有限责任公司 Image enhancement method and device
CN115512345A (en) * 2022-09-21 2022-12-23 浙江安吉天子湖热电有限公司 Traveling crane fixed coal inventory system and coal inventory method
CN115908432A (en) * 2023-03-13 2023-04-04 单县龙宇生物科技有限公司 Material output quality detection system and prediction method

Similar Documents

Publication Publication Date Title
CN111473739B (en) Video monitoring-based surrounding rock deformation real-time monitoring method for tunnel collapse area
Zhang et al. A 3D reconstruction method for pipeline inspection based on multi-vision
CN104330074B (en) Intelligent surveying and mapping platform and realizing method thereof
US6922234B2 (en) Method and apparatus for generating structural data from laser reflectance images
CN104574501B (en) A kind of high-quality texture mapping method for complex three-dimensional scene
Barazzetti et al. Photogrammetric survey of complex geometries with low-cost software: Application to the ‘G1′ temple in Myson, Vietnam
CN103226838A (en) Real-time spatial positioning method for mobile monitoring target in geographical scene
CN103489214A (en) Virtual reality occlusion handling method, based on virtual model pretreatment, in augmented reality system
CN110619661A (en) Method for measuring volume of outdoor stock ground raw material based on augmented reality
Barazzetti et al. True-orthophoto generation from UAV images: Implementation of a combined photogrammetric and computer vision approach
CN103971404A (en) 3D real-scene copying device having high cost performance
Pascoe et al. Farlap: Fast robust localisation using appearance priors
CN103196370A (en) Measuring method and measuring device of conduit connector space pose parameters
CN111192321A (en) Three-dimensional positioning method and device for target object
CN105115560A (en) Non-contact measurement method for cabin capacity
CN106500626A (en) A kind of mobile phone stereoscopic imaging method and three-dimensional imaging mobile phone
CN104463969A (en) Building method of model of aviation inclined shooting geographic photos
CN105550992A (en) High fidelity full face texture fusing method of three-dimensional full face camera
Abdul-Rahman et al. Innovations in 3D geo information systems
CN114543787B (en) Millimeter-scale indoor map positioning method based on fringe projection profilometry
CN115661252A (en) Real-time pose estimation method and device, electronic equipment and storage medium
Hu et al. Collaborative 3D real modeling by multi-view images photogrammetry and laser scanning: the case study of Tangwei Village, China
CN113487726B (en) Motion capture system and method
Fleischmann et al. Fast projector-camera calibration for interactive projection mapping
CN107274449B (en) Space positioning system and method for object by optical photo

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20191227)