CN117830385A - Material pile volume measuring method and device, electronic equipment and storage medium - Google Patents

Info

Publication number
CN117830385A
CN117830385A (application number CN202410026700.1A)
Authority
CN
China
Prior art keywords
window
frame image
material pile
coordinates
left corner
Prior art date
Legal status
Pending
Application number
CN202410026700.1A
Other languages
Chinese (zh)
Inventor
赵英宝
张俊豪
李华伟
Current Assignee
Hebei University of Science and Technology
Original Assignee
Hebei University of Science and Technology
Priority date
Filing date
Publication date
Application filed by Hebei University of Science and Technology
Priority to CN202410026700.1A
Publication of CN117830385A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/60 Analysis of geometric attributes
    • G06T 7/62 Analysis of geometric attributes of area, perimeter, diameter or volume
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01B MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B 11/00 Measuring arrangements characterised by the use of optical techniques
    • G01B 11/002 Measuring arrangements characterised by the use of optical techniques for measuring two or more coordinates
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/12 Edge-based segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/136 Segmentation; Edge detection involving thresholding
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/194 Segmentation; Edge detection involving foreground-background segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/50 Depth or shape recovery
    • G06T 7/55 Depth or shape recovery from multiple images
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/80 Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G06T 7/85 Stereo camera calibration
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/74 Image or video pattern matching; Proximity measures in feature spaces
    • G06V 10/75 Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10028 Range image; Depth image; 3D point clouds

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Artificial Intelligence (AREA)
  • Multimedia (AREA)
  • Geometry (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The invention provides a material pile volume measuring method and device, an electronic device, and a storage medium. The method comprises the following steps: acquiring material pile images from different viewing angles; performing background segmentation on each material pile image to obtain, for each image, a frame image containing the material pile; dividing each frame image into windows, and performing stereo matching on the frame images based on the gray average value and gradient information corresponding to each window to obtain the optimal parallax of each pair of corresponding windows in the frame images; and determining three-dimensional point cloud data of the material pile based on the optimal parallax, and triangulating the point cloud data to obtain the volume of the material pile. The invention can improve the measurement precision of the material pile volume.

Description

Material pile volume measuring method and device, electronic equipment and storage medium
Technical Field
The present invention relates to the field of machine vision, and in particular to a material pile volume measuring method and device, an electronic device, and a storage medium.
Background
With the rapid development of global logistics, goods circulate ever faster, and enterprises place increasingly high demands on accurate inventory. At present, large numbers of material piles exist at thermal power plants, construction sites, granaries, ports, and the like, and their volumes need to be calculated accurately so that space resources can be allocated reasonably. However, a material pile is large and irregular in shape, and accurate measurement cannot be achieved with traditional weighing apparatus alone.
At present, the two common material pile volume measuring methods are the laser measurement method and the photogrammetry method. The laser measurement method measures the distance to the material pile by emitting laser pulses and then calculates the pile volume. It offers high measurement precision, but the measurement process is complex and slow, requires operators with a high professional level, and is relatively costly.
The photogrammetry method photographs the material pile with multiple cameras and measures the pile volume from the images. Photogrammetry is simple to operate, efficient, and low in cost. However, the surface of a material pile has weak texture, and the shooting process is easily affected by illumination, so the volume measurement precision is low.
Disclosure of Invention
The embodiments of the invention provide a material pile volume measuring method and device, an electronic device, and a storage medium, to solve the problem of low volume measurement precision when measuring the volume of a material pile from images of the pile.
In a first aspect, an embodiment of the present invention provides a method for measuring a volume of a material stack, including:
respectively acquiring material pile images under different visual angles;
respectively carrying out background segmentation on each material pile image to obtain a frame image containing the material pile corresponding to each material pile image;
dividing each frame image into windows, and performing stereo matching on the frame images based on the gray average value and gradient information corresponding to each window, to obtain the optimal parallax of each pair of corresponding windows in the frame images;
and determining three-dimensional point cloud data of the material pile based on the optimal parallax, and triangulating the three-dimensional point cloud data to obtain the volume of the material pile.
In one possible implementation, the frame image includes a first frame image and a second frame image;
the window division is performed on each frame image, and stereo matching is performed on the frame images based on gray average value and gradient information corresponding to each window, so as to obtain optimal parallax of each corresponding window in each frame image, including:
dividing the first frame image and the second frame image into a plurality of windows according to the preset window size;
respectively calculating gray average value and gradient information corresponding to each window, and calculating matching cost between each window in the first frame image and each window in the second frame image based on the gray average value and the gradient information corresponding to each window;
performing cost aggregation, parallax calculation and parallax optimization based on the matching cost to obtain an optimal parallax map; the optimal parallax map comprises optimal parallaxes between windows in the first frame image and corresponding windows in the second frame image.
In one possible implementation manner, calculating the gray average value corresponding to each window includes:
according to

I_avg(x, y) = (1 / (ω₁ · ω₂)) · Σ_{i=1…ω₁} Σ_{j=1…ω₂} I(x + i, y + j)

calculating the gray average value corresponding to each window;
wherein I_avg(x, y) represents the gray average value corresponding to the window whose top-left corner vertex coordinates are (x, y), ω₁ represents the length of the window, ω₂ represents the width of the window, i represents the length variable, j represents the width variable, and I(x + i, y + j) represents the gray value of the pixel at coordinates (x + i, y + j) in the window;
the gradient information includes: normalizing the gradient amplitude; calculating gradient information corresponding to each window, including:
respectively obtaining a horizontal gradient vector and a vertical gradient vector corresponding to each window, and calculating a gradient amplitude corresponding to each window based on the horizontal gradient vector and the vertical gradient vector corresponding to each window;
and respectively carrying out normalization processing on the gradient amplitude values corresponding to the windows to obtain normalized gradient amplitude values corresponding to the windows.
In one possible implementation manner, calculating a matching cost between each window in the first frame image and each window in the second frame image based on gray average value and gradient information corresponding to each window includes:
respectively calculating gray level differences between each window in the first frame image and each window in the second frame image according to gray level average values corresponding to each window in the first frame image and each window in the second frame image;
according to the gradient information corresponding to each window in the first frame image and the second frame image, respectively calculating local structure information between each window in the first frame image and each window in the second frame image;
and respectively calculating the matching cost between each window in the first frame image and each window in the second frame image according to the gray level difference and the local structure information.
In a possible implementation manner, the calculating, according to the gray average value corresponding to each window in the first frame image and the second frame image, the gray difference between each window in the first frame image and each window in the second frame image includes:
according to D(x, y) = |I_lavg(x, y) − I_ravg(x, y)|, respectively calculating the gray level differences between each window in the first frame image and each window in the second frame image;
wherein D(x, y) represents the gray level difference between the window whose top-left corner vertex coordinates are (x, y) in the first frame image and the window whose top-left corner vertex coordinates are (x, y) in the second frame image, I_lavg(x, y) represents the gray average value of the window at (x, y) in the first frame image, and I_ravg(x, y) represents the gray average value of the window at (x, y) in the second frame image.
In one possible implementation manner, the calculating, according to gradient information corresponding to each window in the first frame image and the second frame image, local structure information between each window in the first frame image and each window in the second frame image includes:
according to S(x, y) = N_l(x, y) + N_r(x, y), respectively calculating the local structure information between each window in the first frame image and each window in the second frame image;
wherein S(x, y) represents the local structure information between the window whose top-left corner vertex coordinates are (x, y) in the first frame image and the window whose top-left corner vertex coordinates are (x, y) in the second frame image, N_l(x, y) represents the gradient information of the window at (x, y) in the first frame image, and N_r(x, y) represents the gradient information of the window at (x, y) in the second frame image.
In a possible implementation manner, the calculating, according to the gray scale difference and the local structure information, a matching cost between each window in the first frame image and each window in the second frame image includes:
calculating the matching cost between each window in the first frame image and each window in the second frame image according to C(x, y) = α · D(x, y) + β · S(x, y);
wherein C(x, y) represents the matching cost between the window whose top-left corner vertex coordinates are (x, y) in the first frame image and the window whose top-left corner vertex coordinates are (x, y) in the second frame image, α represents the first weight, β represents the second weight, and D(x, y) and S(x, y) are the gray level difference and the local structure information between those two windows, as defined above.
In a second aspect, an embodiment of the present invention provides a device for measuring a volume of a material stack, including:
the acquisition module is used for respectively acquiring the material pile images under different visual angles;
the processing module is used for respectively carrying out background segmentation on each material pile image to obtain a frame image which corresponds to each material pile image and contains the material pile;
the processing module is further used for dividing each frame image into windows, and performing stereo matching on the frame images based on the gray average value and gradient information corresponding to each window, to obtain the optimal parallax of each pair of corresponding windows in the frame images;
And the measurement module is used for determining three-dimensional point cloud data of the material pile based on the optimal parallax, and triangulating the three-dimensional point cloud data to obtain the volume of the material pile.
In a third aspect, an embodiment of the present invention provides an electronic device comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the computer program, implements the steps of the method according to the first aspect or any one of its possible implementations.
In a fourth aspect, embodiments of the present invention provide a computer readable storage medium storing a computer program which, when executed by a processor, implements the steps of the method as described above in the first aspect or any one of the possible implementations of the first aspect.
The embodiments of the invention provide a material pile volume measuring method and device, an electronic device, and a storage medium, which divide into windows the frame images containing the material pile that correspond to the pile images captured from different viewing angles, and perform stereo matching on the frame images based on the gray average value and gradient information corresponding to each window to obtain the optimal parallax, from which the volume of the material pile is determined. By dividing the images into windows and performing stereo matching based on per-window gray averages and gradient information, subtle changes in the frame images can be captured, and the poor texture caused by illumination effects can be compensated, so that the stereo matching precision, and in turn the volume measurement precision of the material pile, is improved.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings that are needed in the embodiments or the description of the prior art will be briefly described below, it being obvious that the drawings in the following description are only some embodiments of the present invention, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a flow chart of an implementation of a method for measuring volume of a material stack according to an embodiment of the present invention;
FIG. 2 is a flow chart of an implementation of determining an optimal disparity provided by an embodiment of the present invention;
FIG. 3 is a flow chart of an implementation of calculating a matching cost provided by an embodiment of the present invention;
FIG. 4 is a schematic view of a material pile volume measuring device according to an embodiment of the present invention;
fig. 5 is a schematic diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth such as the particular system architecture, techniques, etc., in order to provide a thorough understanding of the embodiments of the present invention. It will be apparent, however, to one skilled in the art that the present invention may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present invention with unnecessary detail.
In the related art, the two material pile volume measuring methods are the laser measurement method and the photogrammetry method. The laser measurement method measures the distance to the material pile by emitting laser pulses and then calculates the pile volume. It offers high measurement precision, but the measurement process is complex and slow, requires operators with a high professional level, and is relatively costly. The photogrammetry method photographs the material pile with multiple cameras and measures the pile volume from the images. Photogrammetry is simple to operate, efficient, and low in cost. However, the surface of a material pile has weak texture, and the shooting process is easily affected by illumination, so the volume measurement precision is low.
To improve the volume measurement precision of the material pile, this embodiment divides the frame image containing the material pile into windows and performs stereo matching based on the gray average value and gradient information corresponding to each window. This not only captures subtle changes in the frame image but also compensates for the poor texture caused by illumination effects, so that the stereo matching precision, and in turn the volume measurement precision of the material pile, is improved.
For the purpose of making the objects, technical solutions and advantages of the present invention more apparent, the following description will be made by way of specific embodiments with reference to the accompanying drawings.
Fig. 1 is a flowchart of an implementation of a method for measuring a volume of a material stack according to an embodiment of the present invention, which is described in detail below:
and step 101, respectively acquiring material pile images under different visual angles.
When acquiring material pile images from different viewing angles, this embodiment may use a plurality of pre-calibrated cameras, such as a first camera and a second camera, to capture the material pile from different angles, or may use a binocular (stereo) camera to acquire the images.
In this embodiment, before the binocular camera is used to obtain the images of the material pile under different viewing angles, calibration and stereo correction may be performed on the camera in advance.
For example, the binocular camera may be calibrated with a preset calibration method, such as Zhang Zhengyou's calibration method. Optionally, the binocular camera to be calibrated photographs a checkerboard calibration board multiple times, with the angle and distance of the checkerboard adjusted between shots. Corner coordinates are extracted from the captured images with a corner detection algorithm. The homography matrix is solved from the corner coordinates, from which the intrinsic and extrinsic parameters of the binocular camera are computed, and the distortion coefficients are obtained by the least squares method. The intrinsic and extrinsic parameters and the distortion coefficients are then optimized, completing the calibration.
Here, based on the optimized intrinsic and extrinsic parameters and distortion coefficients, this embodiment may use the Bouguet algorithm to bring the image-plane origin coordinates of the left and right images captured by the binocular camera into agreement, achieving row alignment and thereby completing the stereo rectification of the binocular camera. The rectified left and right images are free of lens distortion, the epipolar lines of the left and right views are parallel to each other, and corresponding points share the same ordinate.
And 102, respectively carrying out background segmentation on each material pile image to obtain a frame image containing the material pile corresponding to each material pile image.
In the embodiment, the background in the material pile image can be removed by carrying out background segmentation on the material pile image, only the material pile is reserved, and the accuracy of a subsequent processing result is improved.
When performing background segmentation on the material pile images, this embodiment may adopt a preset segmentation method, such as the maximum inter-class variance (Otsu) method. The pixel threshold that maximizes the inter-class variance is found by traversal, and each pixel is assigned to the material pile or to the background according to whether its value exceeds the threshold.
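The threshold traversal described above can be sketched as follows. This is a minimal NumPy illustration of the maximum inter-class variance search; the function name and the toy image are illustrative, not part of the patent.

```python
import numpy as np

def otsu_threshold(gray: np.ndarray) -> int:
    """Return the pixel threshold that maximizes between-class variance."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    total = hist.sum()
    best_t, best_var = 0, -1.0
    for t in range(1, 256):              # traverse all candidate thresholds
        w0 = hist[:t].sum()              # pixels below the threshold (background)
        w1 = total - w0                  # pixels at or above it (pile)
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (hist[:t] * np.arange(t)).sum() / w0
        mu1 = (hist[t:] * np.arange(t, 256)).sum() / w1
        var = w0 * w1 * (mu0 - mu1) ** 2  # between-class variance (scaled)
        if var > best_var:
            best_var, best_t = var, t
    return best_t

# Toy 3x3 image: a dark background cluster and a bright pile cluster.
img = np.array([[10, 12, 200], [11, 210, 205], [9, 11, 198]], dtype=np.uint8)
t = otsu_threshold(img)
mask = img >= t                          # True where the pixel belongs to the pile
```

In practice the histogram would come from the full pile image, and the resulting mask is what the subsequent frame-image cropping operates on.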
It will be appreciated that the shape of the material pile is not regular. To facilitate subsequent calculation, the embodiment of the invention determines the rectangular area corresponding to the material pile as the frame image, and performs the subsequent steps on that frame image. The rectangular area corresponding to the material pile is the smallest rectangle containing the outer boundary of the pile.
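The smallest enclosing rectangle can be recovered directly from the segmentation mask. The following NumPy sketch (function name assumed, not from the patent) finds the bounding rows and columns of the foreground and crops the frame image:

```python
import numpy as np

def bounding_frame(mask: np.ndarray):
    """Return (top, bottom, left, right) of the smallest rectangle
    enclosing all foreground (pile) pixels in a boolean mask."""
    rows = np.any(mask, axis=1)          # rows containing any pile pixel
    cols = np.any(mask, axis=0)          # columns containing any pile pixel
    top, bottom = np.where(rows)[0][[0, -1]]
    left, right = np.where(cols)[0][[0, -1]]
    return top, bottom, left, right

mask = np.zeros((6, 6), dtype=bool)
mask[2:5, 1:4] = True                    # pile occupies rows 2-4, cols 1-3
top, bottom, left, right = bounding_frame(mask)
frame = mask[top:bottom + 1, left:right + 1]   # the cropped frame image region
```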
And 103, respectively carrying out window division on each frame image, and carrying out stereo matching on the frame images based on the gray average value and the gradient information corresponding to each window to obtain the optimal parallax of each corresponding window in each frame image.
According to the embodiment of the invention, by dividing the frame images into windows and performing stereo matching of the frame images based on the gray average value and gradient information corresponding to each window, subtle changes in the frame images can be captured, improving the accuracy of stereo matching and reducing mismatches, especially in regions rich in detail.
The frame images comprise a first frame image and a second frame image.
In some embodiments, referring to fig. 2, the above-mentioned window dividing is performed on each frame image, and stereo matching is performed on the frame images based on gray average value and gradient information corresponding to each window, so as to obtain optimal parallax of each corresponding window in each frame image, which may include:
step 201, dividing the first frame image and the second frame image into a plurality of windows according to a preset window size.
The preset window size may be 5×5 or 7×7, i.e., 5 pixels×5 pixels, or 7 pixels×7 pixels, for example. The first frame image and the second frame image may be divided into a plurality of windows, respectively, based on a preset window size.
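The window division in step 201 can be sketched as follows. The 5 by 5 size follows the text; how edge remainders smaller than a full window are handled is not specified in the patent, so this sketch simply skips them, and the (x, y) row/column convention is an assumption:

```python
import numpy as np

def divide_windows(img: np.ndarray, w1: int = 5, w2: int = 5):
    """Yield (x, y, window) for non-overlapping w1 x w2 windows,
    where (x, y) is the window's top-left corner vertex.
    Edge remainders smaller than the window size are skipped here."""
    h, w = img.shape
    for x in range(0, h - w1 + 1, w1):
        for y in range(0, w - w2 + 1, w2):
            yield x, y, img[x:x + w1, y:y + w2]

img = np.arange(100, dtype=np.uint8).reshape(10, 10)
wins = list(divide_windows(img))         # a 10x10 image yields a 2x2 grid of windows
```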
Step 202, respectively calculating gray average value and gradient information corresponding to each window, and calculating matching cost between each window in the first frame image and each window in the second frame image based on the gray average value and the gradient information corresponding to each window.
In some embodiments, calculating the gray-scale average value corresponding to each window includes:
according to

I_avg(x, y) = (1 / (ω₁ · ω₂)) · Σ_{i=1…ω₁} Σ_{j=1…ω₂} I(x + i, y + j)

calculating the gray average value corresponding to each window;
wherein I_avg(x, y) represents the gray average value corresponding to the window whose top-left corner vertex coordinates are (x, y), ω₁ represents the length of the window, ω₂ represents the width of the window, i represents the length variable, j represents the width variable, and I(x + i, y + j) represents the gray value of the pixel at coordinates (x + i, y + j) in the window, with 1 ≤ i ≤ ω₁ and 1 ≤ j ≤ ω₂.
In essence, for each window, the embodiment of the present invention calculates an average value of gray values of all pixel points in the window, and determines the average value as a gray average value corresponding to the window.
The upper left corner vertex coordinates (x, y) refer to coordinate information located in the image pixel coordinate system. Accordingly, the coordinates (x+i, y+j) mentioned above refer to the coordinate information located in the image pixel coordinate system as well.
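The per-window gray average is simply the mean of the window's pixel values; a minimal NumPy sketch (function name assumed):

```python
import numpy as np

def window_gray_avg(img: np.ndarray, x: int, y: int, w1: int = 5, w2: int = 5) -> float:
    """I_avg(x, y): mean gray value of the w1 x w2 window whose
    top-left corner vertex is (x, y) in image pixel coordinates."""
    return float(img[x:x + w1, y:y + w2].mean())

img = np.full((10, 10), 7, dtype=np.uint8)
img[0:5, 0:5] = 3                        # a darker window in the top-left corner
avg = window_gray_avg(img, 0, 0)         # average over the dark window
```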
In some embodiments, the gradient information includes: normalizing the gradient magnitude. Calculating gradient information corresponding to each window may include:
And respectively acquiring a horizontal gradient vector and a vertical gradient vector corresponding to each window, and calculating the gradient amplitude corresponding to each window based on the horizontal gradient vector and the vertical gradient vector corresponding to each window.
And respectively carrying out normalization processing on the gradient amplitude values corresponding to the windows to obtain normalized gradient amplitude values corresponding to the windows.
In this embodiment, a filter such as Sobel, Prewitt, or Scharr may be used to obtain the horizontal and vertical gradient vectors corresponding to each window.
According to

G(x, y) = √(I_X(x, y)² + I_Y(x, y)²)

the gradient amplitude corresponding to each window may be calculated;
wherein G(x, y) represents the gradient amplitude corresponding to the window whose top-left corner vertex coordinates are (x, y), I_X(x, y) represents the horizontal gradient vector corresponding to that window, and I_Y(x, y) represents its vertical gradient vector.
According to

N(x, y) = G(x, y) / maxG

the normalized gradient amplitude corresponding to each window can be obtained;
wherein N(x, y) represents the normalized gradient amplitude corresponding to the window whose top-left corner vertex coordinates are (x, y), and maxG represents the maximum of the gradient amplitudes over all windows in the frame image containing that window.
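The gradient magnitude and its normalization can be sketched as follows. This NumPy illustration uses simple central differences in place of the Sobel/Prewitt/Scharr filters named above, and computes a per-pixel magnitude map that would then be aggregated per window; both simplifications are assumptions, not the patent's exact procedure.

```python
import numpy as np

def normalized_gradient(img: np.ndarray) -> np.ndarray:
    """Per-pixel gradient magnitude G = sqrt(Ix^2 + Iy^2), normalized
    by its maximum over the frame image: N = G / max(G)."""
    img = img.astype(float)
    ix = np.zeros_like(img)
    iy = np.zeros_like(img)
    ix[:, 1:-1] = (img[:, 2:] - img[:, :-2]) / 2.0   # horizontal gradient I_X
    iy[1:-1, :] = (img[2:, :] - img[:-2, :]) / 2.0   # vertical gradient I_Y
    g = np.sqrt(ix ** 2 + iy ** 2)
    max_g = g.max()
    return g / max_g if max_g > 0 else g

img = np.tile(np.arange(8, dtype=float), (8, 1))     # horizontal intensity ramp
n = normalized_gradient(img)             # interior pixels all reach the maximum
```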
In some embodiments, referring to fig. 3, calculating the matching cost between each window in the first frame image and each window in the second frame image based on the gray average value and the gradient information corresponding to each window may include:
step 301, respectively calculating gray scale differences between each window in the first frame image and each window in the second frame image according to gray scale average values corresponding to each window in the first frame image and each window in the second frame image.
In some embodiments, D(x, y) = |I_lavg(x, y) − I_ravg(x, y)| may be used to respectively calculate the gray level differences between each window in the first frame image and each window in the second frame image;
wherein D(x, y) represents the gray level difference between the window whose top-left corner vertex coordinates are (x, y) in the first frame image and the window whose top-left corner vertex coordinates are (x, y) in the second frame image, I_lavg(x, y) represents the gray average value of the window at (x, y) in the first frame image, and I_ravg(x, y) represents the gray average value of the window at (x, y) in the second frame image.
The above gray scale difference calculation formula exemplarily shows a method of calculating gray scale differences between a window having (x, y) coordinates of an upper left corner vertex in the first frame image and a window having (x, y) coordinates of an upper left corner vertex in the second frame image. The embodiment of the invention respectively calculates the gray level difference between each window in the first frame image and each window in the second frame image based on the calculation method.
Step 302, calculating local structure information between each window in the first frame image and each window in the second frame image according to gradient information corresponding to each window in the first frame image and the second frame image.
In some embodiments, the local structure information between each window in the first frame image and each window in the second frame image may be calculated according to S(x, y) = N_l(x, y) + N_r(x, y);
wherein S(x, y) represents the local structure information between the window with upper left corner vertex coordinates (x, y) in the first frame image and the window with upper left corner vertex coordinates (x, y) in the second frame image, N_l(x, y) represents the gradient information corresponding to the window with upper left corner vertex coordinates (x, y) in the first frame image, and N_r(x, y) represents the gradient information corresponding to the window with upper left corner vertex coordinates (x, y) in the second frame image.
The above partial structure information calculation formula exemplarily shows a calculation method of partial structure information between a window having (x, y) coordinates of an upper left corner vertex in the first frame image and a window having (x, y) coordinates of an upper left corner vertex in the second frame image. The embodiment of the invention respectively calculates the local structure information between each window in the first frame image and each window in the second frame image based on the calculation method.
Step 303, according to the gray level difference and the local structure information, respectively calculating the matching cost between each window in the first frame image and each window in the second frame image.
In some embodiments, the matching cost between each window in the first frame image and each window in the second frame image may be calculated according to C(x, y) = αD(x, y) + βS(x, y), respectively;
wherein C (x, y) represents a matching cost between a window with (x, y) coordinates of an upper left corner vertex in the first frame image and a window with (x, y) coordinates of an upper left corner vertex in the second frame image, α represents a first weight, β represents a second weight, D (x, y) represents a gray scale difference between a window with (x, y) coordinates of an upper left corner vertex in the first frame image and a window with (x, y) coordinates of an upper left corner vertex in the second frame image, and S (x, y) represents local structure information between a window with (x, y) coordinates of an upper left corner vertex in the first frame image and a window with (x, y) coordinates of an upper left corner vertex in the second frame image.
The above matching cost calculation formula exemplarily shows a calculation method of a matching cost between a window with (x, y) coordinates of an upper left corner vertex in the first frame image and a window with (x, y) coordinates of an upper left corner vertex in the second frame image. The embodiment of the invention respectively calculates the matching cost between each window in the first frame image and each window in the second frame image based on the calculation method.
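Steps 301 through 303 can be sketched together as follows, taking the per-window gray-scale averages and normalized gradient amplitudes as inputs. The concrete values of the weights α and β are assumptions; the embodiment leaves them unspecified.

```python
import numpy as np

def matching_cost(mean_l, mean_r, norm_grad_l, norm_grad_r, alpha=0.6, beta=0.4):
    """C(x, y) = alpha*D(x, y) + beta*S(x, y) for co-located windows (a sketch)."""
    d = np.abs(mean_l - mean_r)        # step 301: gray-scale difference D
    s = norm_grad_l + norm_grad_r      # step 302: local structure information S
    # step 303: weighted combination; alpha/beta = 0.6/0.4 are assumed defaults
    return alpha * d + beta * s
```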
Considering that the material pile has weak texture and is easily affected by illumination during shooting, the captured material pile images are poorly textured. The embodiment of the invention calculates the gray-scale difference between windows based on the gray-scale average corresponding to each window, which is robust against poor texture. Moreover, the embodiment of the invention combines local structure information with the gray-scale difference when calculating the matching cost, thereby completing stereo matching, solving the problem of low stereo-matching accuracy caused by poorly textured material pile images, improving the calculation accuracy of the matching cost, and in turn improving the accuracy of the volume measurement of the material pile.
Step 203, performing cost aggregation, parallax calculation and parallax optimization based on the matching cost to obtain an optimal parallax map. The optimal parallax map comprises the optimal parallax between each window in the first frame image and the corresponding window in the second frame image.
When the matching costs are aggregated, a mean filtering method can be adopted, which reduces the influence of image noise on the matching cost and makes the aggregated matching cost smoother. The mean filter is calculated as C_smooth(x, y) = (1/N) · Σ_{n=1}^{N} C(n);
wherein C_smooth(x, y) represents the smoothed cost, after cost aggregation, between the window with upper left corner vertex coordinates (x, y) in the first frame image and each window in the second frame image, N represents the number of windows in the neighborhood of the window with upper left corner vertex coordinates (x, y) in the first frame image, n denotes the n-th window in that neighborhood, and C(n) represents the matching cost between the n-th window in that neighborhood and each window in the second frame image.
According to the embodiment of the invention, parallax calculation is performed based on the smooth cost between each window in the first frame image and each window in the second frame image. The smooth cost reflects the similarity between two windows: the smaller the smooth cost, the more similar the windows. For each window in the first frame image, the window in the second frame image with the minimum smooth cost is determined as its homonymous point. The coordinate difference between each window in the first frame image and its homonymous point is then calculated; this difference is the parallax. The parallaxes between the windows in the first frame image and their homonymous points form the parallax map. In the embodiment of the invention, the coordinates of the pixel point at the center of a window are taken as the coordinates of the window.
According to the embodiment of the invention, parallax optimization is further performed on the basis of the parallax calculation, so as to improve parallax accuracy and obtain an optimal parallax map. The parallax optimization proceeds as follows: first, a consistency check is applied to the parallax map, and erroneous and invalid parallaxes are removed and filled; then, each parallax value in the parallax map is refined by sub-pixel fitting; finally, the parallax map is smoothed by median filtering to obtain the optimal parallax map. The optimal parallax map comprises the optimal parallax between each window in the first frame image and the corresponding window in the second frame image, where the corresponding window in the second frame image is the homonymous point of the window in the first frame image.
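A minimal sketch of the parallax selection and smoothing just described: winner-takes-all over a cost volume of candidate parallaxes, followed by a median filter. The 3×3 median window is an assumption, and the consistency check and sub-pixel fitting steps are omitted here.

```python
import numpy as np
from scipy import ndimage

def select_disparity(cost_volume):
    """cost_volume[d, y, x]: smoothed cost of candidate parallax d per window.

    Winner-takes-all picks the candidate with the minimum smooth cost
    (its homonymous point); median filtering stands in for the final
    smoothing step. Consistency check and sub-pixel fitting are omitted.
    """
    disp = np.argmin(cost_volume, axis=0).astype(np.float64)
    return ndimage.median_filter(disp, size=3)
```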
Step 104, determining three-dimensional point cloud data of the material pile based on the optimal parallax, and triangulating the three-dimensional point cloud data to obtain the volume of the material pile.
According to the embodiment of the invention, the depth information of the material pile is obtained by acquiring the camera baseline and the camera focal length and calculating based on the camera baseline, the camera focal length and the optimal parallax. The depth information of the material stack comprises depth information corresponding to each window.
According to Z = b·f/d, the depth information corresponding to each window can be calculated.
Wherein Z represents depth information corresponding to each window, namely vertical coordinates corresponding to each window in a world coordinate system, b represents a camera baseline, f represents a camera focal length, and d represents optimal parallax corresponding to each window.
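The triangulation formula can be sketched directly; the guard for zero parallax (a point at infinity) is an addition for safety, not part of the patent's formula.

```python
def depth_from_disparity(disparity, baseline, focal_length):
    """Z = b * f / d; zero parallax corresponds to a point at infinity."""
    if disparity == 0:
        return float('inf')
    return baseline * focal_length / disparity
```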
According to the embodiment of the invention, the optimal parallax map is mapped into the world coordinate system by a reprojection mapping method, so that three-dimensional point cloud data can be obtained. The reprojection mapping process involves the image pixel coordinate system, the image physical coordinate system, the camera coordinate system, and the world coordinate system. These four coordinate systems satisfy the transformation relation Z_c · [u, v, 1]^T = K · [R_{3×3} | T_{3×1}] · [X_w, Y_w, Z_w, 1]^T;
wherein Z_c represents the vertical coordinate in the camera coordinate system, u represents the horizontal coordinate in the image pixel coordinate system, v represents the vertical coordinate in the image pixel coordinate system, f_x, f_y, u_0 and v_0 are camera parameters (the entries of the intrinsic matrix K), R_{3×3} represents the rotation matrix of the camera, T_{3×1} represents the translation vector of the camera, X_w represents the abscissa in the world coordinate system, Y_w represents the ordinate in the world coordinate system, and Z_w represents the vertical coordinate in the world coordinate system.
The embodiment of the invention inverts the above transformation relation and substitutes into it the depth information corresponding to each window, the camera parameters, and the coordinates of each window in the image pixel coordinate system, thereby calculating the coordinates of each window in the world coordinate system, namely the three-dimensional point cloud data.
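The inverse transformation for a single window can be sketched as below. Identity rotation and zero translation (camera frame coinciding with the world frame) are assumed defaults; the actual R and T from calibration can be passed in when known.

```python
import numpy as np

def pixel_to_world(u, v, z, fx, fy, u0, v0, R=None, T=None):
    """Invert Z_c [u, v, 1]^T = K [R | T] [X_w, Y_w, Z_w, 1]^T for one point.

    Identity R and zero T are assumptions for illustration.
    """
    R = np.eye(3) if R is None else np.asarray(R, dtype=np.float64)
    T = np.zeros(3) if T is None else np.asarray(T, dtype=np.float64)
    xc = (u - u0) * z / fx        # back-project through the intrinsics
    yc = (v - v0) * z / fy
    p_cam = np.array([xc, yc, z])
    return R.T @ (p_cam - T)      # camera frame -> world frame
```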
When the three-dimensional point cloud data is triangulated, a Delaunay triangulation algorithm can be adopted. The Delaunay triangulation algorithm projects the three-dimensional point cloud data onto an xoy reference plane and constructs a triangular mesh there, so that each mesh vertex corresponds to a point of the material pile point cloud; each triangle on the xoy reference plane, together with the corresponding triangle on the material pile surface, forms a spatial geometric body. The three-dimensional point cloud data is thus divided into a plurality of spatial geometric bodies, each of which consists of a triangular prism and two tetrahedrons. The volumes of the triangular prism and the tetrahedrons can be calculated by the corresponding volume formulas, giving the volume of each spatial geometric body. The sum of the volumes of all the spatial geometric bodies is the volume of the material pile.
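A sketch of the triangulation-based volume computation: the point cloud is projected onto the xoy plane, Delaunay-triangulated, and each surface triangle contributes its projected area times its mean vertex height, which equals the prism-plus-tetrahedron volume of the corresponding spatial geometric body. Taking z = 0 as the reference plane is an assumption.

```python
import numpy as np
from scipy.spatial import Delaunay

def pile_volume(points):
    """Volume between the triangulated surface and the z = 0 reference plane."""
    pts = np.asarray(points, dtype=np.float64)
    tri = Delaunay(pts[:, :2])             # triangulate the xoy projection
    total = 0.0
    for simplex in tri.simplices:
        a, b, c = pts[simplex]
        # area of the projected triangle on the xoy plane (cross product / 2)
        area = 0.5 * abs((b[0] - a[0]) * (c[1] - a[1]) - (c[0] - a[0]) * (b[1] - a[1]))
        # volume of the body under the surface triangle: base area x mean height
        total += area * (a[2] + b[2] + c[2]) / 3.0
    return total
```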
When space resources are allocated, the available volume of each space is obtained, and a space whose volume equals the volume of the material pile is allocated to the material pile. If no space with a volume equal to the volume of the material pile exists, all spaces with volumes larger than the volume of the material pile are determined, and among them the space with the smallest volume is allocated to the material pile.
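The allocation rule in this paragraph amounts to a best-fit search, sketched below. The list-of-volumes interface and the index return value are hypothetical conveniences, not part of the patent.

```python
def allocate_space(space_volumes, pile_volume):
    """Best-fit allocation: exact match first, else the smallest larger space.

    Returns the index of the chosen space, or None if no space fits.
    """
    exact = [i for i, v in enumerate(space_volumes) if v == pile_volume]
    if exact:
        return exact[0]
    larger = [(v, i) for i, v in enumerate(space_volumes) if v > pile_volume]
    return min(larger)[1] if larger else None
```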
Compared with the prior art, the embodiment of the invention divides each frame image into windows, performs stereo matching on the frame images based on the gray-scale average and gradient information corresponding to each window to obtain the optimal parallax, and determines the volume of the material pile based on the optimal parallax. Dividing windows and matching on per-window gray-scale averages and gradient information can, on the one hand, capture small changes in a frame image and, on the other hand, mitigate the poor texture caused by illumination, thereby improving stereo-matching accuracy and in turn the accuracy of the volume measurement of the material pile.
It should be understood that the sequence numbers of the steps in the foregoing embodiments do not imply an order of execution; the execution order of each process should be determined by its function and internal logic, and should not constitute any limitation on the implementation process of the embodiments of the present invention.
The following are device embodiments of the invention, for details not described in detail therein, reference may be made to the corresponding method embodiments described above.
Fig. 4 is a schematic structural diagram of a material pile volume measurement device according to an embodiment of the present invention, and for convenience of explanation, only a portion related to the embodiment of the present invention is shown, which is described in detail below:
as shown in fig. 4, the material pile volume measuring device 4 includes: an acquisition module 41, a processing module 42 and a measurement module 43.
The acquisition module 41 is used for respectively acquiring the material pile images under different view angles;
the processing module 42 is configured to perform background segmentation on each material pile image respectively, so as to obtain a frame image corresponding to each material pile image and containing a material pile;
the processing module 42 is further configured to divide windows of each frame image, and perform stereo matching on the frame images based on the gray average value and the gradient information corresponding to each window, so as to obtain an optimal parallax of each corresponding window in each frame image;
the measurement module 43 is configured to determine three-dimensional point cloud data of the material pile based on the optimal parallax, and triangulate the three-dimensional point cloud data to obtain a volume of the material pile.
In one possible implementation, the frame image includes a first frame image and a second frame image; the processing module 42 is specifically configured to:
Dividing the first frame image and the second frame image into a plurality of windows according to the preset window size;
respectively calculating gray average value and gradient information corresponding to each window, and calculating matching cost between each window in the first frame image and each window in the second frame image based on the gray average value and the gradient information corresponding to each window;
performing cost aggregation, parallax calculation and parallax optimization based on the matching cost to obtain an optimal parallax map; the optimal parallax map comprises optimal parallaxes between windows in the first frame image and corresponding windows in the second frame image.
In one possible implementation, the processing module 42 is configured to calculate the gray-scale average corresponding to each window according to I_avg(x, y) = (1/(ω_1·ω_2)) · Σ_i Σ_j I(x+i, y+j);
wherein I_avg(x, y) represents the gray-scale average corresponding to the window with upper left corner vertex coordinates (x, y), ω_1 represents the length of the window, ω_2 represents the width of the window, i represents the length variable, j represents the width variable, and I(x+i, y+j) represents the gray value of the pixel point at coordinates (x+i, y+j) in the window;
the gradient information includes: normalizing the gradient amplitude; the processing module 42 is specifically configured to:
respectively obtaining a horizontal gradient vector and a vertical gradient vector corresponding to each window, and calculating a gradient amplitude corresponding to each window based on the horizontal gradient vector and the vertical gradient vector corresponding to each window;
And respectively carrying out normalization processing on the gradient amplitude values corresponding to the windows to obtain normalized gradient amplitude values corresponding to the windows.
In one possible implementation, the processing module 42 is specifically configured to:
respectively calculating gray level differences between each window in the first frame image and each window in the second frame image according to gray level average values corresponding to each window in the first frame image and each window in the second frame image;
according to gradient information corresponding to each window in the first frame image and the second frame image, local structure information between each window in the first frame image and each window in the second frame image is calculated respectively;
and respectively calculating the matching cost between each window in the first frame image and each window in the second frame image according to the gray level difference and the local structure information.
In one possible implementation, the processing module 42 is configured to respectively calculate the gray-scale differences between each window in the first frame image and each window in the second frame image according to D(x, y) = |I_lavg(x, y) - I_ravg(x, y)|;
wherein D(x, y) represents the gray-scale difference between the window with upper left corner vertex coordinates (x, y) in the first frame image and the window with upper left corner vertex coordinates (x, y) in the second frame image, I_lavg(x, y) represents the gray-scale average corresponding to the window with upper left corner vertex coordinates (x, y) in the first frame image, and I_ravg(x, y) represents the gray-scale average corresponding to the window with upper left corner vertex coordinates (x, y) in the second frame image.
In one possible implementation, the processing module 42 is configured to calculate the local structure information between each window in the first frame image and each window in the second frame image according to S(x, y) = N_l(x, y) + N_r(x, y);
wherein S(x, y) represents the local structure information between the window with upper left corner vertex coordinates (x, y) in the first frame image and the window with upper left corner vertex coordinates (x, y) in the second frame image, N_l(x, y) represents the gradient information corresponding to the window with upper left corner vertex coordinates (x, y) in the first frame image, and N_r(x, y) represents the gradient information corresponding to the window with upper left corner vertex coordinates (x, y) in the second frame image.
In a possible implementation manner, the processing module 42 is configured to calculate the matching cost between each window in the first frame image and each window in the second frame image according to C(x, y) = αD(x, y) + βS(x, y);
wherein C (x, y) represents a matching cost between a window with (x, y) coordinates of an upper left corner vertex in the first frame image and a window with (x, y) coordinates of an upper left corner vertex in the second frame image, α represents a first weight, β represents a second weight, D (x, y) represents a gray scale difference between a window with (x, y) coordinates of an upper left corner vertex in the first frame image and a window with (x, y) coordinates of an upper left corner vertex in the second frame image, and S (x, y) represents local structure information between a window with (x, y) coordinates of an upper left corner vertex in the first frame image and a window with (x, y) coordinates of an upper left corner vertex in the second frame image.
According to the embodiment of the invention, the window division is carried out on each frame image, and the stereo matching is carried out on the frame images based on the gray average value and the gradient information corresponding to each window, so that the optimal parallax is obtained; and determining the volume of the material pile based on the optimal parallax. The processing module 42 performs window division and performs stereo matching based on gray average value and gradient information corresponding to each window, so that on one hand, small changes in the frame image can be captured, and on the other hand, poor texture caused by illumination influence can be improved, so that stereo matching precision is improved, and further volume measurement precision of the material stack is improved.
Fig. 5 is a schematic diagram of an electronic device according to an embodiment of the present invention. As shown in fig. 5, the electronic apparatus 5 of this embodiment includes: a processor 50, a memory 51 and a computer program 52 stored in said memory 51 and executable on said processor 50. The processor 50, when executing the computer program 52, implements the steps of the various embodiments of the method for measuring the volume of a stack of materials described above, such as steps 101 through 104 shown in fig. 1. Alternatively, the processor 50, when executing the computer program 52, performs the functions of the modules/units of the apparatus embodiments described above, such as the functions of the modules 41 to 43 shown in fig. 4.
By way of example, the computer program 52 may be partitioned into one or more modules/units that are stored in the memory 51 and executed by the processor 50 to complete the present invention. The one or more modules/units may be a series of computer program instruction segments capable of performing the specified functions, which instruction segments are used to describe the execution of the computer program 52 in the electronic device 5. For example, the computer program 52 may be split into modules 41 to 43 shown in fig. 4.
The electronic device 5 may be a computing device such as a desktop computer, a notebook computer, a palm computer, a cloud server, etc. The electronic device 5 may include, but is not limited to, a processor 50, a memory 51. It will be appreciated by those skilled in the art that fig. 5 is merely an example of the electronic device 5 and is not meant to be limiting as the electronic device 5 may include more or fewer components than shown, or may combine certain components, or different components, e.g., the electronic device may further include an input-output device, a network access device, a bus, etc.
The processor 50 may be a central processing unit (Central Processing Unit, CPU), other general purpose processors, digital signal processors (Digital Signal Processor, DSP), application specific integrated circuits (Application Specific Integrated Circuit, ASIC), field-programmable gate arrays (Field-Programmable Gate Array, FPGA) or other programmable logic devices, discrete gate or transistor logic devices, discrete hardware components, or the like. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
The memory 51 may be an internal storage unit of the electronic device 5, such as a hard disk or a memory of the electronic device 5. The memory 51 may be an external storage device of the electronic device 5, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card) or the like, which are provided on the electronic device 5. Further, the memory 51 may also include both an internal storage unit and an external storage device of the electronic device 5. The memory 51 is used for storing the computer program and other programs and data required by the electronic device. The memory 51 may also be used to temporarily store data that has been output or is to be output.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-described division of the functional units and modules is illustrated, and in practical application, the above-described functional distribution may be performed by different functional units and modules according to needs, i.e. the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-described functions. The functional units and modules in the embodiment may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit, where the integrated units may be implemented in a form of hardware or a form of a software functional unit. In addition, specific names of the functional units and modules are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working process of the units and modules in the above system may refer to the corresponding process in the foregoing method embodiment, which is not described herein again.
The foregoing embodiments each have their own emphasis; for parts that are not described or detailed in a particular embodiment, reference may be made to the related descriptions of the other embodiments.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
In the embodiments provided in the present invention, it should be understood that the disclosed apparatus/electronic device and method may be implemented in other manners. For example, the apparatus/electronic device embodiments described above are merely illustrative, e.g., the division of the modules or units is merely a logical function division, and there may be additional divisions in actual implementation, e.g., multiple units or components may be combined or integrated into another system, or some features may be omitted, or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection via interfaces, devices or units, which may be in electrical, mechanical or other forms.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiments of the present invention may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated modules/units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer readable storage medium. Based on such understanding, the present invention may also be implemented by implementing all or part of the flow of the method of the above embodiment, or by instructing the relevant hardware by a computer program, where the computer program may be stored in a computer readable storage medium, and the computer program may be executed by a processor to implement the steps of the method embodiment of measuring the volume of each material pile. Wherein the computer program comprises computer program code which may be in source code form, object code form, executable file or some intermediate form etc. The computer readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a U disk, a removable hard disk, a magnetic disk, an optical disk, a computer Memory, a Read-Only Memory (ROM), a random access Memory (Random Access Memory, RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and so forth. The above embodiments are only for illustrating the technical solution of the present invention, and not for limiting the same; although the invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present invention, and are intended to be included in the scope of the present invention.

Claims (10)

1. A method of measuring the volume of a stack of materials, comprising:
respectively acquiring material pile images under different visual angles;
respectively carrying out background segmentation on each material pile image to obtain a frame image containing the material pile corresponding to each material pile image;
dividing windows of each frame image respectively, and carrying out three-dimensional matching on the frame images based on gray average values and gradient information corresponding to each window to obtain optimal parallax of each corresponding window in each frame image;
and determining three-dimensional point cloud data of the material pile based on the optimal parallax, and triangulating the three-dimensional point cloud data to obtain the volume of the material pile.
2. The method of claim 1, wherein the frame images comprise a first frame image and a second frame image;
the window division is performed on each frame image, and stereo matching is performed on the frame images based on gray average value and gradient information corresponding to each window, so as to obtain optimal parallax of each corresponding window in each frame image, including:
dividing the first frame image and the second frame image into a plurality of windows according to the preset window size;
Respectively calculating gray average value and gradient information corresponding to each window, and calculating matching cost between each window in the first frame image and each window in the second frame image based on the gray average value and the gradient information corresponding to each window;
performing cost aggregation, parallax calculation and parallax optimization based on the matching cost to obtain an optimal parallax map; the optimal parallax map comprises optimal parallaxes between windows in the first frame image and corresponding windows in the second frame image.
3. The method of claim 2, wherein calculating a gray scale average value for each window comprises:
according to I_avg(x, y) = (1/(ω_1·ω_2)) · Σ_i Σ_j I(x+i, y+j), calculating the gray-scale average corresponding to each window;
wherein I_avg(x, y) represents the gray-scale average corresponding to the window with upper left corner vertex coordinates (x, y), ω_1 represents the length of the window, ω_2 represents the width of the window, i represents the length variable, j represents the width variable, and I(x+i, y+j) represents the gray value of the pixel point at coordinates (x+i, y+j) in the window;
the gradient information includes: normalizing the gradient amplitude; calculating gradient information corresponding to each window, including:
respectively obtaining a horizontal gradient vector and a vertical gradient vector corresponding to each window, and calculating a gradient amplitude corresponding to each window based on the horizontal gradient vector and the vertical gradient vector corresponding to each window;
and respectively carrying out normalization processing on the gradient amplitude corresponding to each window to obtain the normalized gradient amplitude corresponding to each window.
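The two quantities defined in claim 3 — the per-window gray average and the normalized gradient amplitude — can be sketched as below. This is illustrative only: the claim does not fix a finite-difference scheme or a normalization, so `np.gradient` and max-scaling to [0, 1] are assumptions of this sketch:

```python
import numpy as np

def window_gray_average(img, x, y, w1, w2):
    """Mean gray value of the w1-by-w2 window with top-left vertex (x, y),
    i.e. the double sum of claim 3 divided by w1 * w2."""
    return img[x:x + w1, y:y + w2].mean()

def normalized_gradient_magnitude(window):
    """Gradient amplitude from horizontal and vertical gradient vectors,
    scaled into [0, 1] (one plausible normalization)."""
    win = window.astype(float)
    gx = np.gradient(win, axis=1)   # horizontal gradient vector
    gy = np.gradient(win, axis=0)   # vertical gradient vector
    mag = np.hypot(gx, gy)          # per-pixel gradient amplitude
    peak = mag.max()
    return mag / peak if peak > 0 else mag
```

On a 4x4 ramp image, the gray average of the top-left 2x2 window is simply the mean of its four pixels; a window with a constant horizontal ramp normalizes to a uniform magnitude of 1.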
4. The method for measuring the volume of a material pile according to claim 2 or 3, wherein the calculating the matching cost between each window in the first frame image and each window in the second frame image based on the gray average value and gradient information corresponding to each window comprises:
respectively calculating gray level differences between each window in the first frame image and each window in the second frame image according to gray level average values corresponding to each window in the first frame image and each window in the second frame image;
according to gradient information corresponding to each window in the first frame image and the second frame image, local structure information between each window in the first frame image and each window in the second frame image is calculated respectively;
and respectively calculating the matching cost between each window in the first frame image and each window in the second frame image according to the gray level difference and the local structure information.
5. The method for measuring the volume of a material pile according to claim 4, wherein the calculating the gray scale difference between each window in the first frame image and each window in the second frame image according to the gray scale average value corresponding to each window in the first frame image and the second frame image respectively includes:
according to D(x, y) = |I_lavg(x, y) − I_ravg(x, y)|, respectively calculating the gray scale difference between each window in the first frame image and each window in the second frame image;

wherein D(x, y) represents the gray scale difference between the window whose top left corner vertex coordinates are (x, y) in the first frame image and the window whose top left corner vertex coordinates are (x, y) in the second frame image, I_lavg(x, y) represents the gray average value corresponding to the window whose top left corner vertex coordinates are (x, y) in the first frame image, and I_ravg(x, y) represents the gray average value corresponding to the window whose top left corner vertex coordinates are (x, y) in the second frame image.
6. The method for measuring the volume of a material pile according to claim 4, wherein the calculating the local structure information between each window in the first frame image and each window in the second frame image according to the gradient information corresponding to each window in the first frame image and the second frame image respectively includes:
according to S(x, y) = N_l(x, y) + N_r(x, y), respectively calculating the local structure information between each window in the first frame image and each window in the second frame image;

wherein S(x, y) represents the local structure information between the window whose top left corner vertex coordinates are (x, y) in the first frame image and the window whose top left corner vertex coordinates are (x, y) in the second frame image, N_l(x, y) represents the gradient information corresponding to the window whose top left corner vertex coordinates are (x, y) in the first frame image, and N_r(x, y) represents the gradient information corresponding to the window whose top left corner vertex coordinates are (x, y) in the second frame image.
7. The method for measuring volume of a material pile according to claim 4, wherein the calculating the matching cost between each window in the first frame image and each window in the second frame image according to the gray scale difference and the local structure information includes:
calculating the matching cost between each window in the first frame image and each window in the second frame image according to C(x, y) = α · D(x, y) + β · S(x, y);

wherein C(x, y) represents the matching cost between the window whose top left corner vertex coordinates are (x, y) in the first frame image and the window whose top left corner vertex coordinates are (x, y) in the second frame image, α represents a first weight, β represents a second weight, D(x, y) represents the gray scale difference between the two windows, and S(x, y) represents the local structure information between the two windows.
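The cost definitions of claims 5 through 7 combine into one short function. This is a sketch: the inputs are assumed to be per-window maps of gray averages and normalized gradient magnitudes, and the default weights α = β = 0.5 are illustrative, not values fixed by the patent:

```python
import numpy as np

def matching_cost(gray_l, gray_r, grad_l, grad_r, alpha=0.5, beta=0.5):
    """C(x, y) = alpha * D(x, y) + beta * S(x, y) as in claims 5-7.
    gray_l/gray_r: gray averages of corresponding windows in the two frames;
    grad_l/grad_r: their normalized gradient magnitudes."""
    d = np.abs(gray_l - gray_r)   # gray scale difference D, claim 5
    s = grad_l + grad_r           # local structure information S, claim 6
    return alpha * d + beta * s   # weighted combination C, claim 7
```

For instance, with gray averages 2.0 and 5.0, gradient terms 0.2 and 0.4, and α = β = 1, the cost is |2 − 5| + (0.2 + 0.4) = 3.6.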
8. A material stack volume measuring device, comprising:
the acquisition module is used for respectively acquiring the material pile images under different visual angles;
the processing module is used for respectively carrying out background segmentation on each material pile image to obtain a frame image which corresponds to each material pile image and contains the material pile;
the processing module is further used for respectively dividing each frame image into windows, and carrying out stereo matching on the frame images based on the gray average value and gradient information corresponding to each window, to obtain the optimal parallax of each corresponding window in each frame image;
and the measurement module is used for determining three-dimensional point cloud data of the material pile based on the optimal parallax, and triangulating the three-dimensional point cloud data to obtain the volume of the material pile.
9. An electronic device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the processor implements the steps of the method for measuring the volume of a material pile according to any one of claims 1 to 7 when executing the computer program.
10. A computer readable storage medium storing a computer program, characterized in that the computer program, when executed by a processor, implements the steps of the method for measuring the volume of a material pile according to any one of claims 1 to 7.
CN202410026700.1A 2024-01-08 2024-01-08 Material pile volume measuring method and device, electronic equipment and storage medium Pending CN117830385A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410026700.1A CN117830385A (en) 2024-01-08 2024-01-08 Material pile volume measuring method and device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN117830385A true CN117830385A (en) 2024-04-05

Family

ID=90509612

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination