CN113240801A - Three-dimensional reconstruction method and device for material pile, electronic equipment and storage medium - Google Patents

Three-dimensional reconstruction method and device for material pile, electronic equipment and storage medium

Info

Publication number
CN113240801A
CN113240801A (application CN202110636012.3A)
Authority
CN
China
Prior art keywords
grid
laser
points
material pile
pixel points
Prior art date
Legal status
Granted
Application number
CN202110636012.3A
Other languages
Chinese (zh)
Other versions
CN113240801B (en)
Inventor
张元生
李若熙
吕潇
李越
刘鹏
Current Assignee
Beijing Beikuang Intelligent Technology Co ltd
BGRIMM Technology Group Co Ltd
Original Assignee
Beijing Beikuang Intelligent Technology Co ltd
BGRIMM Technology Group Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Beikuang Intelligent Technology Co ltd, BGRIMM Technology Group Co Ltd filed Critical Beijing Beikuang Intelligent Technology Co ltd
Priority to CN202110636012.3A
Publication of CN113240801A
Application granted
Publication of CN113240801B
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00 Machine learning
    • G06N20/10 Machine learning using kernel methods, e.g. support vector machines [SVM]
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/12 Computing arrangements based on biological models using genetic models
    • G06N3/126 Evolutionary algorithms, e.g. genetic algorithms or genetic programming

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Biophysics (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Biology (AREA)
  • Artificial Intelligence (AREA)
  • Geometry (AREA)
  • Computer Graphics (AREA)
  • Medical Informatics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physiology (AREA)
  • Genetics & Genomics (AREA)
  • Biomedical Technology (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The application provides a three-dimensional reconstruction method and device for a material pile, an electronic device and a storage medium. A laser emitter is controlled to emit a laser grid towards the material pile, and the laser grid can comprehensively cover the pile; acquisition devices capture two grid images of the pile at different angles under the irradiation of the laser grid; the pixel coordinates of the target pixel points of the two grid images in each grid image are acquired; the pixel coordinates of the target pixel points in each grid image are input into a trained coordinate prediction model to obtain the world coordinates of the material pile; and a three-dimensional model of the pile is established according to those world coordinates. The application covers the material surface with the laser grid, the laser grids captured by the two acquisition devices are matched after feature extraction, and the coordinate prediction model reconstructs the material surface in three dimensions from the collected feature points; this removes the drawback of numerous calibration parameters, gives higher accuracy, and offers strong operability in a dim environment.

Description

Three-dimensional reconstruction method and device for material pile, electronic equipment and storage medium
Technical Field
The application relates to the technical field of three-dimensional reconstruction, in particular to a material stack three-dimensional reconstruction method and device, electronic equipment and a storage medium.
Background
In the engineering field, effective parameters such as three-dimensional topographic parameters, mass and volume need to be measured for large solid material piles, for example the burden pile in a blast furnace, materials stacked in a port, grain stored in a granary, the coal pile of a power plant and ore rock in a mine.
For enterprises whose development depends on resources, such as thermal power plants, iron and steel plants and grain warehouses, inventory management of materials is an important link in evaluating the enterprise's benefits and directly affects its production cost. To improve economic efficiency, the volume of the solid material piles required in production or held in inventory must be measured accurately and quickly.
At present, most enterprises still measure the surface of the material pile manually. For example, when measuring the volume of an ore pile at a mine site, the stones are first pushed by a bulldozer into a fixed regular shape, measurements are then taken manually with measuring tools, and the approximate volume of the ore pile is finally estimated. Another approach is visual estimation, in which the stacking situation is judged manually from experience; it is only suitable for small piles, is subject to considerable interference, consumes time and labour, and has low accuracy.
Disclosure of Invention
In view of the above, an object of the present application is to provide a three-dimensional reconstruction method and apparatus for a material pile, an electronic device and a storage medium, in which a coordinate prediction model with optimized parameters is used for prediction; the method uses few parameters and achieves high prediction accuracy.
In a first aspect, an embodiment of the present application provides a method for three-dimensional reconstruction of a material stack, where the method includes:
controlling a laser emitter to emit a laser grid towards a material stack, wherein the laser grid can comprehensively cover the material stack;
acquiring two grid images of the material pile under different angles under the irradiation of the laser grid through acquisition equipment;
acquiring pixel coordinates of target pixel points in the two grid images in each grid image; the target pixel points are pixel points corresponding to the same area in the two grid images;
inputting the pixel coordinates of the target pixel points in each grid image into a trained coordinate prediction model to obtain the world coordinates of the material pile; wherein the coordinate prediction model is constructed by: constructing a support vector machine model comprising kernel function parameters and penalty factors, and optimizing the kernel function parameters and the penalty factors in the support vector machine model according to a genetic algorithm and a Rayleigh distribution function; constructing a training data set according to the pixel coordinates of the target pixel points in each grid image and the world coordinates of the target pixel points mapped on the target points of the material pile, and training the optimized support vector machine model through the training data set to obtain a trained coordinate prediction model;
and establishing a three-dimensional model of the material pile according to the world coordinates of the material pile.
In a preferred technical scheme of the present application, the laser emitter is disposed toward the material stack, and a preset height difference is formed between a height at which the laser emitter is disposed and a height of the material stack; the preset height difference is determined according to the height of the material pile;
the number of the acquisition devices is two, the two acquisition devices are respectively positioned on two opposite sides of the material pile, and the absolute depression angle of each acquisition device relative to the horizontal plane is within a preset angle range; the preset angle range enables the acquisition equipment to acquire the grid image covering the material pile.
In a preferred technical solution of the present application, the obtaining pixel coordinates of the target pixel points in the two grid images in each grid image includes:
extracting laser characteristic points of the laser grid image according to the intersection point information of the laser grid;
detecting a target corner point from the laser characteristic points according to the gray level change degree in the two grid images;
determining target pixel points according to the corresponding relation of the target corner points in the two grid images;
and determining the pixel coordinates of the target pixel points in each grid image according to the positions of the target pixel points in each grid image.
In a preferred technical solution of the present application, the extracting laser feature points of a laser grid image according to intersection point information of a laser grid includes:
searching different pixel points within a preset threshold range in different directions of the width of each laser grid;
determining the central line of the line of laser grids according to different pixel points of a preset threshold range on each laser grid;
and extracting the intersection points of the central lines of the different laser grids as laser characteristic points of the laser grid image.
In a preferred technical solution of the present application, the detecting a target corner point from the laser feature points according to the gray level variation degrees in the two grid images includes:
calculating the angular point quantity of each pixel point in the laser characteristic points;
and selecting pixel points in the same area corresponding to the angular point quantity in the preset range as target angular points.
In a preferred technical solution of the present application, the determining a target pixel point according to a corresponding relationship between target corner points in two grid images includes:
judging the corresponding relation between the pixel points according to the target angular points; when the two corner points describe the same pixel point, the pixel point is a target pixel point.
In a preferred technical solution of the present application, the optimizing the kernel function parameter and the penalty factor in the support vector machine model according to the genetic algorithm and the rayleigh distribution function includes:
initializing a population, and randomly generating initial population individuals;
decoding each individual gene string in the population into a corresponding kernel function parameter and an error penalty factor; the individual gene string consists of a kernel function parameter and an error penalty factor code;
substituting the kernel function parameters and the error punishment factors into a support vector machine prediction model, and calculating according to a Rayleigh distribution function to obtain a first fitness;
and when the first fitness does not meet the requirement, coding the kernel function parameters and the error penalty factors into individual gene strings again, copying and crossing each individual gene string to form a new generation group, taking the new generation group as a new initial group individual, and returning to the step of decoding each individual gene string in the group into the corresponding kernel function parameters and the error penalty factors until the corresponding first fitness meets the requirement.
In a second aspect, an embodiment of the present application provides a three-dimensional reconstruction apparatus for a material stack, where the apparatus includes:
the laser emitter is used for emitting a laser grid towards the material pile, and the laser grid can comprehensively cover the material pile;
the acquisition equipment is used for acquiring two grid images of the material pile under different angles under the irradiation of the laser grid;
the acquisition module is used for acquiring pixel coordinates of target pixel points in the two grid images in each grid image; the target pixel points are pixel points corresponding to the same area in the two grid images;
the conversion module is used for inputting the pixel coordinates of the target pixel points in each grid image into a trained coordinate prediction model to obtain the world coordinates of the material pile; wherein the coordinate prediction model is constructed by: constructing a support vector machine model comprising kernel function parameters and penalty factors, and optimizing the kernel function parameters and the penalty factors in the support vector machine model according to a genetic algorithm and a Rayleigh distribution function; constructing a training data set according to the pixel coordinates of the target pixel points in each grid image and the world coordinates of the target pixel points mapped on the target points of the material pile, and training the optimized support vector machine model through the training data set to obtain a trained coordinate prediction model;
and the establishing module is used for establishing a three-dimensional model of the material pile according to the world coordinates of the material pile.
In a third aspect, an embodiment of the present application provides an electronic device, which includes a memory, a processor, and a computer program stored on the memory and executable on the processor, where the processor implements the step of three-dimensional reconstruction of the material pile when executing the computer program.
In a fourth aspect, an embodiment of the present application provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and the computer program is executed by a processor to perform the above step of three-dimensional reconstruction of a material stack.
The technical scheme provided by the embodiment of the application can have the following beneficial effects:
controlling a laser emitter to emit a laser grid towards a material stack, wherein the laser grid can comprehensively cover the material stack; acquiring two grid images of the material pile under different angles under the irradiation of the laser grid through acquisition equipment; acquiring pixel coordinates of target pixel points in the two grid images in each grid image, the target pixel points being pixel points corresponding to the same area in the two grid images; inputting the pixel coordinates of the target pixel points in each grid image into a trained coordinate prediction model to obtain the world coordinates of the material pile; and establishing a three-dimensional model of the material pile according to the world coordinates of the material pile. The laser grid is used to cover the material surface; the laser grids acquired by the left and right acquisition devices are matched after feature extraction, and the coordinate prediction model reconstructs the material surface in three dimensions from the acquired feature points. This overcomes the drawback of solving for numerous calibration parameters, gives higher accuracy, and offers strong operability in a dark environment.
In order to make the aforementioned objects, features and advantages of the present application more comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are required to be used in the embodiments will be briefly described below, it should be understood that the following drawings only illustrate some embodiments of the present application and therefore should not be considered as limiting the scope, and for those skilled in the art, other related drawings can be obtained from the drawings without inventive effort.
Fig. 1 shows a schematic flow chart of a three-dimensional reconstruction method for a material pile provided in an embodiment of the present application;
FIG. 2 is a schematic diagram of a pixel coordinate system and an image coordinate system provided in an embodiment of the present application;
fig. 3 is a schematic diagram illustrating a camera coordinate system and an image coordinate system provided in an embodiment of the present application;
fig. 4 is a schematic diagram illustrating a camera coordinate system and a world coordinate system provided in an embodiment of the present application;
fig. 5 is a schematic structural diagram of a three-dimensional reconstruction device for a material pile provided in an embodiment of the present application;
fig. 6 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all the embodiments. The components of the embodiments of the present application, generally described and illustrated in the figures herein, can be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the present application, presented in the accompanying drawings, is not intended to limit the scope of the claimed application, but is merely representative of selected embodiments of the application. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the present application without making any creative effort, shall fall within the protection scope of the present application.
In the prior art, the most common measurement approach in computer vision is photogrammetry: several groups of cameras are arranged around the stock yard to acquire image data, the acquired image features are analysed, the feature points of the target object are measured according to the binocular vision principle, and the material pile can then be reconstructed in three dimensions. However, the calibration process of such a system is very complicated: only if the calibration is accurate will the coordinate correspondence be accurate, the depth information of the image complete, and the three-dimensional reconstruction precise. In a complex environment the image is strongly affected, so removing noise and other interference factors becomes critical in image processing, which in turn affects the three-dimensional reconstruction work.
A charge-level (material surface) three-dimensional reconstruction system involves the steps of image acquisition, camera calibration, feature extraction, stereo matching and three-dimensional reconstruction, and each step has certain disadvantages.
(1) Camera calibration mostly uses Zhang Zhengyou's calibration method, which requires shooting multiple black-and-white checkerboard images to obtain the camera's intrinsic and extrinsic parameters; it has poor operability, high requirements on the environment and many calibration parameters.
(2) In image preprocessing, feature extraction mostly relies on edge detection operators and ignores the difficulty of separating laser light from natural light. If extraction of the light-stripe centre line is not considered, one pixel in one image easily corresponds to several pixels in the other during corner matching, and the accuracy of corner matching is hard to guarantee.
Based on the above, the embodiments of the application provide a three-dimensional reconstruction method and device for a material pile, an electronic device and a storage medium, in which the world coordinates of the material pile are predicted from two images at different angles by a coordinate prediction model with optimized parameters; this improves the accuracy of the prediction result, and the accuracy of the three-dimensional model built from those world coordinates is improved accordingly. This is described below by way of example.
Fig. 1 shows a schematic flow chart of a three-dimensional reconstruction method for a material pile provided in an embodiment of the present application, where the method includes steps S101 to S105; specifically, the method comprises the following steps:
s101, controlling a laser emitter to emit a laser grid towards a material pile, wherein the laser grid can comprehensively cover the material pile;
s102, acquiring two grid images of a material pile under different angles under the irradiation of a laser grid through acquisition equipment;
step S103, acquiring pixel coordinates of target pixel points in the two grid images in each grid image; the target pixel points are pixel points corresponding to the same area in the two grid images;
step S104, inputting pixel coordinates of target pixel points in each grid image into a trained coordinate prediction model to obtain world coordinates of the material pile; the coordinate prediction model is constructed in the following way: constructing a support vector machine model comprising kernel function parameters and penalty factors, and optimizing the kernel function parameters and the penalty factors in the support vector machine model according to a genetic algorithm and a Rayleigh distribution function; constructing a training data set according to pixel coordinates of target pixel points in each grid image and world coordinates of target points mapped on the material pile by the target pixel points, and training the optimized support vector machine model through the training data set to obtain a trained coordinate prediction model;
and S105, establishing a three-dimensional model of the material pile according to the world coordinates of the material pile.
According to the method and device, the world coordinates of the material pile are predicted from two images taken at different angles by the parameter-optimized coordinate prediction model, which improves the accuracy of the prediction result and, in turn, the accuracy of the three-dimensional model built from those world coordinates.
Some embodiments of the present application are described in detail below. The embodiments described below and the features of the embodiments can be combined with each other without conflict.
Before the method is executed, a material pile three-dimensional reconstruction system is required to be built, wherein the three-dimensional reconstruction system comprises a laser transmitter, acquisition equipment and a computer; the laser emitter is used for emitting laser grids towards the material pile, the acquisition equipment is used for acquiring grid images of the material pile, and the computer is used for performing three-dimensional reconstruction according to predicted coordinates of the material pile.
The laser emitter is arranged towards the material stack, and the setting height of the laser emitter and the height of the material stack have a preset height difference; wherein the preset height difference is determined according to the height of the material pile;
the number of the acquisition devices is two, the two acquisition devices are respectively positioned on two opposite sides of the material pile, and the absolute depression angle of each acquisition device relative to the horizontal plane is within a preset angle range; the preset angle range enables the acquisition equipment to acquire grid images covering the material pile.
And S101, controlling a laser emitter to emit a laser grid towards the material pile, wherein the laser grid can comprehensively cover the material pile.
The essence of material pile reconstruction is determining the actual spatial information of the pile from the spatial information acquired in the pile images. When the spatial information of a pile image is determined, it is expressed in the relative coordinate systems of the image.
A laser grid is emitted towards the material by the laser emitter so that the grid covers the surface of the pile. A fixed relative position then exists between the pile surface and the laser grid, and this relative position does not change when the shooting angle changes.
Specifically, two industrial cameras are used to capture the images. The laser emitter is fixed at one end of a pan-tilt head, at a height above the ground of twice the stockpile height, to ensure that the emitted laser grid completely covers the surface of the object; the stockpile carrying the laser grid information is photographed within a certain range, and the absolute depression angle of the cameras is set to 60°. The virtual camera seen by the left and right cameras lies at the intersection of the two camera projections. Therefore, the distance between the emitted laser and the object must be fixed and the whole object covered, ensuring complete imaging in both the left and right cameras. The positions and angles of the two cameras are arranged so that the photographed region of the target object is as wide as possible, and the two captured images must overlap by more than 20%, which provides the basis for subsequent image matching.
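As a rough sanity check of this mounting geometry, the footprint of the laser grid on the ground can be estimated from the emitter height and the fan angle of the laser module; the fan angle is a property of whichever line-laser module is used and is an assumption here, not a value given in the application.

```python
import math

def laser_footprint_radius(pile_height_m, fan_half_angle_deg):
    """Rough coverage check for the setup above: the emitter sits at twice the
    pile height, so the grid footprint radius on the ground follows from the
    emitter height and the (assumed) half fan angle of the laser module."""
    emitter_height = 2.0 * pile_height_m
    return emitter_height * math.tan(math.radians(fan_half_angle_deg))

# Example: a 5 m pile and a 45-degree half fan angle give a ~10 m footprint radius;
# this must exceed the pile's ground radius for the grid to cover it fully.
print(round(laser_footprint_radius(5.0, 45.0), 1))
```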
And S102, acquiring two grid images of the material pile under different angles under the irradiation of the laser grid through acquisition equipment.
Based on the binocular camera calibration principle, the three-dimensional space information of the material pile can be determined through the images of the two material piles at different angles.
For example, two cameras are arranged, one camera is arranged on the left side of the material pile, and the other camera is arranged on the right side of the material pile and respectively shoots images of the material pile irradiated by the laser grids.
Step S103, acquiring pixel coordinates of target pixel points in the two grid images in each grid image; and the target pixel points are pixel points corresponding to the same area in the two grid images.
When the material pile is subjected to three-dimensional reconstruction, conversion is carried out according to the pixel coordinates of the pixel points corresponding to the same area in the two grid images and the relationship among the pixel coordinates, the image coordinates, the camera coordinates and the world coordinates, and the conversion is carried out into the world coordinates corresponding to the material pile.
The pixel coordinates refer to coordinates of the pixel in a pixel coordinate system.
Pixel coordinate system: the basic element of an image is the pixel, and each pixel has a fixed, definite position within the image. An image captured by the camera with resolution M x N consists of M rows and N columns of pixels. The pixel coordinate system o-uv is a planar rectangular coordinate system whose origin is usually placed at the upper-left corner of the image; in Fig. 2 the top-left vertex of the image is the origin and the axes are u and v. If the pixel coordinates of a point are (u, v), the pixel lies in the u-th row and v-th column of the image.
Image coordinate system: the image coordinate system is closely related to the pixel coordinate system. In the pixel coordinate system the unit is the pixel, which clearly and intuitively indicates where a pixel lies in the image, but the physical size of a pixel depends on the device that captured the image. The image coordinate system o-xy is established at the intersection of the camera optical axis and the imaging plane: its origin is defined as the centre of the image, the x-axis is parallel to u and the y-axis is parallel to v. Its coordinates describe positions relative to the image centre rather than pixel indices, which is where it differs from the pixel coordinate system. The positional relationship between the two is shown in Fig. 2.
Camera coordinate system: to clarify the imaging geometry, the camera coordinate system is a Cartesian (right-handed) coordinate system with key elements Oc, Xc, Yc and Zc, where Oc is the optical centre of the camera. Its relationship to the image coordinate system is that the Xc axis is parallel to the x-axis, the Yc axis is parallel to the y-axis, and the Zc axis is the camera's optical axis, perpendicular to the plane of the image coordinate system, as shown in Fig. 3. The optical axis intersects the image plane at O1, the origin of the image coordinate system, and OcO1 is the focal length of the camera, like f in pinhole imaging.
World coordinate system: the world coordinate system describes the positional relationship between the camera and other objects. Since the camera can be placed arbitrarily, an independent reference coordinate system, distinct from the other coordinate systems, is needed. Both the world coordinate system and the camera coordinate system are three-dimensional. Taking the world coordinate system as the reference coordinate system, as shown in Fig. 4, the relationships and positions between the three coordinate systems are found and a three-dimensional rectangular coordinate system Ow-XwYwZw is established. Converting from the world coordinate system to the camera coordinate system is a rigid-body transformation: the object does not deform, it only rotates and translates.
The material pile three-dimensional reconstruction method is essentially a process of converting relations among all coordinate systems, and the pixel coordinates of the target pixel points are obtained, and the positions of the target pixel points in world coordinates are finally obtained through conversion among all the coordinate systems. The target pixel points are pixel points corresponding to the same region in the two grid images, namely the pixel points representing the same position of the material pile in the two grid images.
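For reference, the classical pinhole chain of transformations between these coordinate systems can be sketched as follows; the intrinsic and extrinsic values used in the example are illustrative assumptions only.

```python
import numpy as np

def world_to_pixel(p_world, R, t, fx, fy, cx, cy):
    """Project a 3-D world point to pixel coordinates via the classic chain
    world -> camera (rigid transform) -> image plane (perspective) -> pixel grid.
    R, t, fx, fy, cx, cy are assumed example values, not calibrated parameters."""
    p_cam = R @ p_world + t              # world -> camera coordinates (rotation + translation)
    x = fx * p_cam[0] / p_cam[2]         # perspective projection, scaled to pixel units
    y = fy * p_cam[1] / p_cam[2]
    u = x + cx                           # shift origin to the image corner
    v = y + cy
    return np.array([u, v])

# Example with assumed values: identity rotation, camera 10 m from the pile origin.
R = np.eye(3)
t = np.array([0.0, 0.0, 10.0])
print(world_to_pixel(np.array([1.0, 2.0, 0.0]), R, t, 1200, 1200, 640, 512))
```

In the present application this chain is not calibrated explicitly; the support vector machine of step S104 learns the pixel-to-world mapping directly from training samples.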
Acquiring pixel coordinates of target pixel points in the two grid images in each grid image respectively, wherein the pixel coordinates include:
extracting laser characteristic points of the laser grid image according to the intersection point information of the laser grid;
detecting a target corner point from the laser characteristic points according to the gray degree change degree in the two grid images;
determining target pixel points according to the corresponding relation of the target corner points in the two grid images;
and determining the pixel coordinates of the target pixel points according to the positions of the target pixel points relative to the image.
And finding pixel points which represent the same position of the material pile in the two images, and converting the pixel points into world coordinates of the position of the material pile according to the pixel coordinates of the pixel points. When finding out pixel points which represent the same position of the material pile in the two images, firstly, extracting laser characteristic points of the laser grid images according to the intersection point information of the laser grids, and then detecting a target corner point from the laser characteristic points according to the gray degree change degree in the two grid images; and finally, determining target pixel points according to the corresponding relation of the target corner points in the two grid images.
The method for extracting the laser characteristic points of the laser grid image according to the intersection point information of the laser grid comprises the following steps:
searching different pixel points within a preset threshold range in different directions of the width of each laser grid;
determining the central line of the line of laser grids according to different pixel points of a preset threshold range on each laser grid;
and extracting the intersection points of the central lines of the different laser grids as laser characteristic points of the laser grid image.
Feature extraction from the pile image relies on the laser grid. Extracting the information of the red intersection points of the laser grid is the basis for converting pixel coordinates into world coordinates; the laser feature points (including corner points), lines, boundaries and other numerical information are described from it.
Specifically, the laser grid is projected onto the surface of the target object; the grid lines have a certain width, and there are several pixel points across that width. In general, the light intensity across the cross-section of an ideal laser beam is distributed as a Gaussian function.
The centre line is extracted with a threshold method: a threshold K = 200 is first determined, the pixel values distributed across the grid line are searched in opposite directions, and the two pixel positions A and B whose values are closest to K in the two search directions are found, with coordinates (Xa, Ya) and (Xb, Yb); the point ((Xa + Xb)/2, (Ya + Yb)/2) is taken as the centre position.
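A minimal sketch of this threshold-based centre extraction, assuming a single-channel image and a scan across the width of one grid line along an image row; the helper name and the sample data are illustrative, not taken from the application.

```python
import numpy as np

def grid_line_center(row, K=200):
    """Scan one image row crossing a laser grid line and return the centre
    column, taken as the midpoint of the two edge pixels closest to the
    threshold K when searching from opposite directions."""
    above = np.where(row >= K)[0]      # pixels bright enough to belong to the line
    if above.size == 0:
        return None
    xa = above[0]                      # first crossing found scanning left-to-right
    xb = above[-1]                     # first crossing found scanning right-to-left
    return (xa + xb) / 2.0             # centre of the line on this row

row = np.array([10, 30, 90, 210, 250, 240, 205, 60, 20], dtype=float)
print(grid_line_center(row))           # midpoint of the bright span on this row
```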
After the central line is extracted, the problem that a plurality of pixels correspond to the same angular point is avoided, and preliminary processing is provided for angular point detection and matching.
Before the corner matching is performed, the target corner needs to be determined.
Detecting a target corner point from the laser characteristic points according to the gray degree change degree in the two grid images, wherein the method comprises the following steps:
calculating the angular point quantity of each pixel point in the laser characteristic points;
and selecting pixel points in the same area corresponding to the angular point quantity in the preset range as target angular points.
Specifically, Harris corner detection is adopted: the image is filtered in the two directions to obtain the gradients Ix and Iy, the gradient products are convolved with a discrete two-dimensional Gaussian function, and the pixel corner quantity cim is then computed. The cim value (R) is called the response function and depends only on m; when R is large and positive, the feature point is a corner point. A point is accepted when cim is greater than a threshold and is a local maximum within a certain range.
During specific processing, redundant nodes are deleted. The corner coordinates output by the program are sub-pixel values; for convenience of operation, and where the accuracy requirement is still met, the decimal part is discarded.
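The corner-quantity computation described above can be illustrated with the common Harris formulation below: directional gradient filtering, Gaussian smoothing of the gradient products, and a response map that is thresholded and kept only at local maxima. The smoothing sigma, the constant k and the threshold handling are assumptions, not values from the application.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, maximum_filter

def harris_response(img, sigma=1.5, k=0.04):
    """Harris corner response: filter the image in the two directions to get
    Ix and Iy, smooth the gradient products with a discrete 2-D Gaussian, then
    compute cim = det(M) - k * trace(M)^2 for the 2x2 structure matrix M."""
    Iy, Ix = np.gradient(img.astype(float))     # directional gradients
    Ixx = gaussian_filter(Ix * Ix, sigma)       # Gaussian-smoothed products
    Iyy = gaussian_filter(Iy * Iy, sigma)
    Ixy = gaussian_filter(Ix * Iy, sigma)
    det = Ixx * Iyy - Ixy ** 2
    trace = Ixx + Iyy
    return det - k * trace ** 2                 # large positive values indicate corners

def pick_corners(cim, threshold):
    """Keep pixels whose response exceeds the threshold and is a local maximum
    in a 3x3 neighbourhood, as required in the text above."""
    local_max = (cim == maximum_filter(cim, size=3))
    return np.argwhere((cim > threshold) & local_max)
```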
And determining target pixel points according to the target corner points obtained by detection.
Determining a target pixel point according to the corresponding relation of the target corner points in the two grid images, wherein the determining comprises the following steps:
judging the corresponding relation between the pixel points according to the target angular points; when the two corner points describe the same pixel point, the pixel point is a target pixel point.
Specifically, matching the left and right images means that, once pixel points in the two images are matched according to a certain matching criterion, the same physical coordinate can be determined; a wrong match would affect the camera calibration operation. Corner matching consists of four basic steps: detector extraction, where Harris corner detection is used to detect the corners of the two images; descriptor extraction, where the detected corners are described mathematically with SIFT features; matching, where the descriptors are used to judge the correspondence between pixels; and interference-point removal, where wrongly matched outer points are removed and the inner points are kept.
The present application performs corner matching with the Fast Library for Approximate Nearest Neighbors (FLANN). The FLANN matching kernel finds the closest point to each query by Euclidean distance; the parameters IndexParams and SearchParams are supplied to select the algorithm to be used and the other required parameters.
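A minimal sketch of this matching step with the OpenCV FLANN matcher and SIFT descriptors, including a ratio test to remove wrongly matched outer points; the KD-tree index parameters and the 0.7 ratio are common defaults assumed here rather than values from the application.

```python
import cv2

def match_corners(img_left, img_right):
    """Detect SIFT keypoints in both grayscale images, describe them, and match
    with a FLANN KD-tree matcher; Lowe's ratio test removes mismatched points."""
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(img_left, None)
    kp2, des2 = sift.detectAndCompute(img_right, None)

    index_params = dict(algorithm=1, trees=5)   # FLANN_INDEX_KDTREE
    search_params = dict(checks=50)
    flann = cv2.FlannBasedMatcher(index_params, search_params)

    matches = flann.knnMatch(des1, des2, k=2)
    good = []
    for pair in matches:
        if len(pair) == 2 and pair[0].distance < 0.7 * pair[1].distance:
            good.append(pair[0])
    # Return the matched pixel coordinates in each image.
    return [(kp1[m.queryIdx].pt, kp2[m.trainIdx].pt) for m in good]
```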
Step S104, inputting pixel coordinates of target pixel points in each grid image into a trained coordinate prediction model to obtain world coordinates of the material pile; the coordinate prediction model is constructed in the following way: constructing a support vector machine model comprising kernel function parameters and penalty factors, and optimizing the kernel function parameters and the penalty factors in the support vector machine model according to a genetic algorithm and a Rayleigh distribution function; and constructing a training data set according to the pixel coordinates of the target pixel points in each grid image and the world coordinates of the target points mapped on the material pile by the target pixel points, and training the optimized support vector machine model through the training data set to obtain a trained coordinate prediction model.
The coordinate prediction model is a support vector machine model whose kernel function parameters and penalty factors are optimized according to a genetic algorithm and a Rayleigh distribution function.
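Before the optimization is detailed, the regression itself can be sketched: each training sample pairs the pixel coordinates of a matched point in the left and right grid images with the measured world coordinates of the corresponding point on the pile, and an RBF-kernel support vector regressor is fitted per world coordinate. The data layout, the use of scikit-learn's SVR and the numeric values below are illustrative assumptions.

```python
import numpy as np
from sklearn.svm import SVR

# Assumed training data layout: one row per matched feature point,
# [u_left, v_left, u_right, v_right] -> (Xw, Yw, Zw).
X_train = np.array([[320, 240, 310, 238],
                    [400, 260, 388, 255],
                    [512, 300, 498, 297]], dtype=float)
Y_train = np.array([[1.2, 0.8, 2.5],
                    [1.9, 0.9, 2.1],
                    [2.7, 1.1, 1.6]])    # world coordinates in metres (illustrative)

# One RBF-kernel SVR per output coordinate; C (penalty factor) and gamma
# (kernel parameter) are the two values the genetic algorithm tunes.
models = [SVR(kernel="rbf", C=10.0, gamma=0.01).fit(X_train, Y_train[:, i])
          for i in range(3)]

pixel_pair = np.array([[350, 250, 340, 246]], dtype=float)
world = [m.predict(pixel_pair)[0] for m in models]
print(world)    # predicted (Xw, Yw, Zw) for this matched point
```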
Optimizing kernel function parameters and penalty factors in a support vector machine model according to a genetic algorithm and a Rayleigh distribution function, and comprising the following steps:
initializing a population, and randomly generating initial population individuals;
decoding each individual gene string in the population into a corresponding kernel function parameter and an error penalty factor; the individual gene string consists of a kernel function parameter and an error penalty factor code;
substituting the kernel function parameters and the error punishment factors into a support vector machine prediction model, and calculating according to a Rayleigh distribution function to obtain a first fitness;
and when the first fitness does not meet the requirement, coding the kernel function parameters and the error penalty factors into individual gene strings again, copying and crossing each individual gene string to form a new generation group, taking the new generation group as a new initial population, and returning to the step of decoding each individual gene string in the population into the corresponding kernel function parameters and the error penalty factors until the corresponding first fitness meets the requirement.
Specifically, a population is initialized, and initial population individuals are randomly generated.
In the present application, the RBF kernel function is adopted. For coding, the kernel function parameter and the error penalty factor are real numbers and binary coding is used: within their value ranges, the kernel function parameter and the error penalty factor are coded as binary strings of X1 bits and X2 bits respectively, and the X1 + X2 bits of binary code are combined to obtain the individual's chromosome gene string.
The kernel function parameters and the error penalty factors are brought into a Support Vector Machine (SVM) to be trained and tested with training data and testing data.
The fitness value of the population is calculated according to the fitness calculation rule. When the fitness value of the population is calculated, a random number Ranlrnd(0, 1) drawn from the Rayleigh distribution is introduced, and the first fitness is computed.
And if the calculated first fitness value meets the condition, selecting the optimal population.
When the first fitness does not meet the requirement, a new population needs to be selected: the kernel function parameters and the error penalty factors are encoded into individual gene strings again, and each individual gene string is copied and crossed to form a new-generation population.
To ensure that the evolution moves in the direction of the optimum, the replication operator follows the principle of keeping the best and replacing the worst. The optimal-preservation strategy computes the fitness of every individual in a generation and keeps the individual with the best fitness as that generation's optimal chromosome; specifically, the optimal chromosome is made the first chromosome of the next generation and is exempt from the subsequent crossover and mutation operations. The worst-replacement strategy replaces the chromosome with the worst computed fitness by the best chromosome. This not only preserves the best chromosomes and avoids degeneration, but also accelerates the genetic evolution because the worst chromosomes are eliminated.
The crossover operator swaps the genes at the same positions on two different chromosomes selected by the selection operator in order to breed the next generation, thereby producing new chromosomes; it plays a central role in genetic algorithms and is also called the recombination operator. For chromosome recombination, random pairing is performed first and then the crossover operation. This application uses two-point crossover: two cross points are placed in the individual's code string and the gene segment between them is exchanged. The second coding bit of the gene string is used as the first cross point, a second cross point is then generated at random in the remaining binary code, and the corresponding gene segments between the two cross points are exchanged.
Mutation operators increase the ability of the genetic algorithm to find the globally optimal solution. The mutation operator randomly changes the value at some position of the string with a certain probability; for the gene strings in this application, a 0 at some position of the binary-coded gene string is randomly flipped to 1, or a 1 to 0.
And taking the new generation population as a new initial population individual, and returning to the step of decoding each individual gene string in the population into a corresponding kernel function parameter and a corresponding error penalty factor until the corresponding first fitness meets the requirement.
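A compact sketch of the genetic search over the penalty factor C and the kernel parameter gamma as described above, with binary-coded chromosomes, optimal-preservation and worst-replacement selection, two-point crossover starting at the second coding bit, and bit-flip mutation. The bit lengths, value ranges, population size and the exact way the Rayleigh-distributed random number enters the first fitness are assumptions, since the application does not spell them out.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
BITS_C, BITS_G = 10, 10                       # X1- and X2-bit binary codes (assumed lengths)
C_RANGE, G_RANGE = (0.1, 100.0), (1e-4, 1.0)  # assumed search ranges for C and gamma

def decode(bits, lo, hi):
    """Map a binary gene segment to a real value in [lo, hi]."""
    value = int("".join(map(str, bits)), 2)
    return lo + (hi - lo) * value / (2 ** len(bits) - 1)

def fitness(chrom, X, y):
    """First fitness: validation error of the SVR built from the decoded
    (C, gamma), weighted by a Rayleigh-distributed random number (assumed role)."""
    C = decode(chrom[:BITS_C], *C_RANGE)
    gamma = decode(chrom[BITS_C:], *G_RANGE)
    Xtr, Xva, ytr, yva = train_test_split(X, y, test_size=0.3, random_state=0)
    err = np.mean((SVR(kernel="rbf", C=C, gamma=gamma).fit(Xtr, ytr).predict(Xva) - yva) ** 2)
    return err * rng.rayleigh(scale=1.0)      # smaller is better

def evolve(X, y, pop_size=20, generations=30, p_mut=0.02):
    pop = rng.integers(0, 2, size=(pop_size, BITS_C + BITS_G))
    for _ in range(generations):
        scores = np.array([fitness(ind, X, y) for ind in pop])
        best_idx, worst_idx = np.argmin(scores), np.argmax(scores)
        pop[worst_idx] = pop[best_idx]                 # worst chromosome replaced by the best
        children = pop.copy()
        children[0] = pop[best_idx]                    # elite kept as the first chromosome
        for i in range(1, pop_size - 1, 2):            # pair up; elite at index 0 untouched
            a = 1                                      # first cross point: the second coding bit
            b = int(rng.integers(2, BITS_C + BITS_G))  # second cross point chosen at random
            children[i, a:b], children[i + 1, a:b] = pop[i + 1, a:b].copy(), pop[i, a:b].copy()
        flip = rng.random(children.shape) < p_mut      # bit-flip mutation
        flip[0] = False                                # elite is not mutated
        children[flip] ^= 1
        pop = children
    best = pop[np.argmin([fitness(ind, X, y) for ind in pop])]
    return decode(best[:BITS_C], *C_RANGE), decode(best[BITS_C:], *G_RANGE)

# Tiny illustrative run on synthetic data (pixel features -> one world coordinate).
X_demo = rng.uniform(0, 640, size=(30, 4))
y_demo = X_demo @ np.array([0.01, -0.005, 0.008, 0.002])
print(evolve(X_demo, y_demo, generations=5))
```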
And S105, establishing a three-dimensional model of the material pile according to the world coordinates of the material pile.
And reconstructing the material pile according to the real coordinates of the material pile.
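One straightforward way to turn the predicted world coordinates into a surface model and an approximate pile volume is a Delaunay triangulation of the ground-plane footprint with the height taken from Zw; this particular meshing choice is an illustration, not a method prescribed by the application.

```python
import numpy as np
from scipy.spatial import Delaunay

def pile_surface_and_volume(points_world):
    """points_world: (N, 3) array of predicted (Xw, Yw, Zw) pile coordinates.
    Triangulate the ground-plane footprint and sum prism volumes under each
    triangle to approximate the pile volume."""
    xy = points_world[:, :2]
    z = points_world[:, 2]
    tri = Delaunay(xy)
    volume = 0.0
    for simplex in tri.simplices:
        a, b, c = xy[simplex]
        area = 0.5 * abs(np.cross(b - a, c - a))   # triangle footprint area
        volume += area * z[simplex].mean()         # prism with the mean vertex height
    return tri, volume

pts = np.array([[0, 0, 0.0], [2, 0, 0.0], [0, 2, 0.0], [2, 2, 0.0], [1, 1, 1.5]])
_, vol = pile_surface_and_volume(pts)
print(round(vol, 2))    # approximate pile volume (illustrative units)
```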
Fig. 5 shows a schematic structural diagram of an apparatus for three-dimensional reconstruction of a material stack according to an embodiment of the present application, where the apparatus includes:
the laser emitter is used for emitting a laser grid towards the material pile, and the laser grid can comprehensively cover the material pile;
the acquisition equipment is used for acquiring two grid images of the material pile under different angles under the irradiation of the laser grid;
the acquisition module is used for acquiring pixel coordinates of target pixel points in the two grid images in each grid image; the target pixel points are pixel points corresponding to the same area in the two grid images;
the conversion module is used for inputting the pixel coordinates of the target pixel points in each grid image into the trained coordinate prediction model to obtain the world coordinates of the material pile; the coordinate prediction model is constructed in the following way: constructing a support vector machine model comprising kernel function parameters and penalty factors, and optimizing the kernel function parameters and the penalty factors in the support vector machine model according to a genetic algorithm and a Rayleigh distribution function; constructing a training data set according to pixel coordinates of target pixel points in each grid image and world coordinates of target points mapped on the material pile by the target pixel points, and training the optimized support vector machine model through the training data set to obtain a trained coordinate prediction model;
and the establishing module is used for establishing a three-dimensional model of the material pile according to the world coordinates of the material pile.
The obtaining module is used for obtaining pixel coordinates of target pixel points in the two grid images in each grid image, and comprises the following steps:
extracting laser characteristic points of the laser grid image according to the intersection point information of the laser grid;
detecting a target corner point from the laser characteristic points according to the gray degree change degree in the two grid images;
determining target pixel points according to the corresponding relation of the target corner points in the two grid images;
and determining the pixel coordinates of the target pixel points according to the positions of the target pixel points relative to the image.
The acquisition module is used for extracting laser characteristic points of the laser grid image according to the intersection point information of the laser grid, and comprises:
searching different pixel points within a preset threshold range in different directions of the width of each laser grid;
determining the central line of the line of laser grids according to different pixel points of a preset threshold range on each laser grid;
and extracting the intersection points of the central lines of the different laser grids as laser characteristic points of the laser grid image.
The acquisition module is used for detecting a target corner point from the laser characteristic points according to the gray degree change degree in the two grid images, and comprises:
calculating the angular point quantity of each pixel point in the laser characteristic points;
and selecting pixel points in the same area corresponding to the angular point quantity in the preset range as target angular points.
The acquisition module is used for determining a target pixel point according to the corresponding relation of the target corner points in the two grid images, and comprises the following steps:
judging the corresponding relation between the pixel points according to the target angular points; when the two corner points describe the same pixel point, the pixel point is a target pixel point.
The conversion module is used for optimizing kernel function parameters and penalty factors in the support vector machine model according to a genetic algorithm and a Rayleigh distribution function, and comprises the following steps:
initializing a population, and randomly generating initial population individuals;
decoding each individual gene string in the population into a corresponding kernel function parameter and an error penalty factor; the individual gene string consists of a kernel function parameter and an error penalty factor code;
substituting the kernel function parameters and the error punishment factors into a support vector machine prediction model, and calculating according to a Rayleigh distribution function to obtain a first fitness;
and when the first fitness does not meet the requirement, coding the kernel function parameters and the error penalty factors into individual gene strings again, copying and crossing each individual gene string to form a new generation group, taking the new generation group as a new initial population, and returning to the step of decoding each individual gene string in the population into the corresponding kernel function parameters and the error penalty factors until the corresponding first fitness meets the requirement.
As shown in fig. 6, an embodiment of the present application provides an electronic device for performing three-dimensional reconstruction of a material stack in the present application, where the device includes a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor implements the three-dimensional reconstruction of the material stack when executing the computer program.
Specifically, the memory and the processor may be general-purpose memory and processor, which are not limited specifically, and when the processor runs a computer program stored in the memory, the three-dimensional reconstruction of the material pile can be performed.
Corresponding to the three-dimensional reconstruction of the material pile in the present application, an embodiment of the present application further provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and the computer program is executed by a processor to perform the three-dimensional reconstruction of the material pile.
Specifically, the storage medium can be a general-purpose storage medium, such as a removable disk, a hard disk, or the like, and when the computer program on the storage medium is executed, the three-dimensional reconstruction of the material pile can be performed.
In the embodiments provided in the present application, it should be understood that the disclosed system and method may be implemented in other ways. The above-described system embodiments are merely illustrative, and for example, the division of the units is only one logical functional division, and there may be other divisions in actual implementation, and for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection of systems or units through some communication interfaces, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments provided in the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present application or portions thereof that substantially contribute to the prior art may be embodied in the form of a software product stored in a storage medium and including instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present application. And the aforementioned storage medium includes: various media capable of storing program codes, such as a usb disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus once an item is defined in one figure, it need not be further defined and explained in subsequent figures, and moreover, the terms "first", "second", "third", etc. are used merely to distinguish one description from another and are not to be construed as indicating or implying relative importance.
Finally, it should be noted that the above-mentioned embodiments are only specific embodiments of the present application, used to illustrate its technical solutions rather than limit them, and the protection scope of the present application is not limited thereto. Although the present application has been described in detail with reference to the foregoing embodiments, those skilled in the art should understand that any person familiar with this technical field can, within the technical scope disclosed by the present application, still modify or readily conceive changes to the technical solutions described in the foregoing embodiments, or make equivalent substitutions for some of their technical features; such modifications, changes or substitutions do not cause the corresponding technical solutions to depart from the spirit and scope of the embodiments of the present application, and they are intended to be covered by its protection scope. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (10)

1. A three-dimensional reconstruction method for a material pile is characterized by comprising the following steps:
controlling a laser emitter to emit a laser grid towards a material stack, wherein the laser grid can comprehensively cover the material stack;
acquiring two grid images of the material pile under different angles under the irradiation of the laser grid through acquisition equipment;
acquiring pixel coordinates of target pixel points in the two grid images in each grid image; the target pixel points are pixel points corresponding to the same area in the two grid images;
inputting the pixel coordinates of the target pixel points in each grid image into a trained coordinate prediction model to obtain the world coordinates of the material pile; wherein the coordinate prediction model is constructed by: constructing a support vector machine model comprising kernel function parameters and penalty factors, and optimizing the kernel function parameters and the penalty factors in the support vector machine model according to a genetic algorithm and a Rayleigh distribution function; constructing a training data set according to the pixel coordinates of the target pixel points in each grid image and the world coordinates of the target pixel points mapped on the target points of the material pile, and training the optimized support vector machine model through the training data set to obtain a trained coordinate prediction model;
and establishing a three-dimensional model of the material pile according to the world coordinates of the material pile.
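The coordinate-prediction step of claim 1 maps matched pixel coordinates from the two grid images to world coordinates with a support vector machine. Below is a minimal Python sketch of that mapping, assuming the training pairs are stereo pixel coordinates (u1, v1, u2, v2) against known world coordinates (X, Y, Z); it uses scikit-learn's SVR with an RBF kernel, where `gamma` stands in for the kernel function parameter and `C` for the penalty factor, and all helper names, bounds, and data shapes are illustrative rather than taken from the patent:

```python
import numpy as np
from sklearn.multioutput import MultiOutputRegressor
from sklearn.svm import SVR

def build_coordinate_predictor(pixel_pairs, world_points, gamma=0.1, C=10.0):
    """Fit a mapping from stereo pixel coordinates to world coordinates.

    pixel_pairs : (N, 4) array of (u1, v1, u2, v2) for matched target pixel points
    world_points: (N, 3) array of (X, Y, Z) for the corresponding target points
    gamma, C    : kernel function parameter and penalty factor (here fixed by hand;
                  in the claimed method they would come from the genetic-algorithm
                  search described in claim 7)
    """
    model = MultiOutputRegressor(SVR(kernel="rbf", gamma=gamma, C=C))
    model.fit(pixel_pairs, world_points)
    return model

# Hypothetical usage: predict world coordinates for newly matched pixel points,
# which would then feed a surface-reconstruction step to build the 3D pile model.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    train_px = rng.uniform(0, 1024, size=(200, 4))    # stand-in pixel coordinates
    train_xyz = rng.uniform(0, 50, size=(200, 3))     # stand-in world coordinates
    predictor = build_coordinate_predictor(train_px, train_xyz)
    new_px = rng.uniform(0, 1024, size=(5, 4))
    print(predictor.predict(new_px))                  # (5, 3) estimated world coordinates
```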
2. The method according to claim 1, wherein:
the laser emitter is arranged facing the material pile, and the height of the laser emitter differs from the height of the material pile by a preset height difference, the preset height difference being determined according to the height of the material pile;
there are two acquisition devices, the two acquisition devices are respectively located on opposite sides of the material pile, and the absolute value of the depression angle of each acquisition device relative to the horizontal plane is within a preset angle range, the preset angle range enabling each acquisition device to acquire a grid image covering the material pile.
3. The method according to claim 1, wherein acquiring the pixel coordinates, in each grid image, of the target pixel points in the two grid images comprises:
extracting laser feature points of the grid images according to intersection point information of the laser grid;
detecting target corner points from the laser feature points according to the degree of gray-level change in the two grid images;
determining the target pixel points according to the correspondence between the target corner points in the two grid images;
and determining the pixel coordinates of the target pixel points in each grid image according to the positions of the target pixel points in each grid image.
4. The method according to claim 3, wherein extracting the laser feature points of the grid images according to the intersection point information of the laser grid comprises:
searching for pixel points within a preset threshold range in different directions across the width of each laser grid line;
determining the centerline of each laser grid line according to the pixel points within the preset threshold range on that laser grid line;
and extracting the intersection points of the centerlines of the different laser grid lines as the laser feature points of the grid image.
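A minimal sketch of one way to read this claim, assuming a grayscale grid image in which a single roughly vertical and a single roughly horizontal laser stripe have already been separated: pixels above an intensity threshold across each stripe's width are averaged into a sub-pixel centerline, and the crossing of the two centerlines is taken as a laser feature point. The threshold values, the vertical/horizontal split, and the function names are assumptions, not values from the patent:

```python
import numpy as np

def stripe_centerline(gray, intensity_thresh=200, axis=0):
    """Estimate a sub-pixel centerline for one laser stripe.

    axis=0 treats the stripe as roughly vertical: within each image row, the
    columns whose intensity exceeds the threshold (the stripe width) are
    averaged into one center column, giving {row: center_col}.
    axis=1 does the same per column for a roughly horizontal stripe,
    giving {col: center_row}.
    """
    centers = {}
    mask = gray >= intensity_thresh
    n = gray.shape[0] if axis == 0 else gray.shape[1]
    for i in range(n):
        line = mask[i, :] if axis == 0 else mask[:, i]
        idx = np.flatnonzero(line)
        if idx.size:
            # intensity-weighted centroid across the stripe width
            vals = (gray[i, idx] if axis == 0 else gray[idx, i]).astype(float)
            centers[i] = float(np.average(idx, weights=vals))
    return centers

def centerline_intersection(vertical_centers, horizontal_centers, tol=1.0):
    """Intersect a vertical centerline {row: col} with a horizontal one {col: row}."""
    for row, col in vertical_centers.items():
        expected_row = horizontal_centers.get(int(round(col)))
        if expected_row is not None and abs(expected_row - row) <= tol:
            return (col, float(row))    # (x, y) laser feature point
    return None
```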
5. The method according to claim 3, wherein detecting the target corner points from the laser feature points according to the degree of gray-level change in the two grid images comprises:
calculating a corner response for each pixel point among the laser feature points;
and selecting, as target corner points, pixel points in the same region whose corner responses fall within a preset range.
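A minimal sketch of the corner-response step, using OpenCV's Harris measure as a stand-in for the gray-level-change-based corner response of the claim; evaluating the response only at the previously extracted laser feature points and keeping those whose normalized response falls in a preset range are assumptions about how the claim might be realized:

```python
import cv2
import numpy as np

def select_target_corners(gray, feature_points, low=1e-4, high=1e-1):
    """Keep laser feature points whose Harris corner response lies in [low, high].

    gray           : uint8 grayscale grid image
    feature_points : iterable of (x, y) pixel positions (e.g. centerline intersections)
    low, high      : preset response range, relative to the maximum response
    """
    response = cv2.cornerHarris(np.float32(gray), blockSize=3, ksize=3, k=0.04)
    response = response / (response.max() + 1e-12)   # normalize so the range is image-independent
    corners = []
    for x, y in feature_points:
        r = response[int(round(y)), int(round(x))]
        if low <= r <= high:
            corners.append((x, y))
    return corners
```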
6. The method according to claim 3, wherein determining the target pixel points according to the correspondence between the target corner points in the two grid images comprises:
determining the correspondence between pixel points according to the target corner points, wherein when two corner points describe the same point, that point is taken as a target pixel point.
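The claim leaves the matching rule open, so the sketch below shows one simple assumption: because the corner points come from a projected laser grid, each can be labeled with its grid row and column, and corners carrying the same (row, column) label in the two images are paired as target pixel points. The labeling step itself is a hypothetical preprocessing stage, not spelled out in the patent:

```python
def match_target_pixels(corners_left, corners_right):
    """Pair corner points across the two grid images by their laser-grid index.

    corners_left / corners_right : dict mapping (grid_row, grid_col) -> (x, y) pixel
    position, i.e. each detected corner has already been labeled with the pair of
    laser lines that produced it (an assumed preprocessing step).
    Returns a list of ((x1, y1), (x2, y2)) matched target pixel points.
    """
    matches = []
    for index, left_px in corners_left.items():
        right_px = corners_right.get(index)
        if right_px is not None:      # same grid intersection seen in both views
            matches.append((left_px, right_px))
    return matches
```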
7. The method of claim 1, wherein optimizing kernel function parameters and penalty factors in the support vector machine model according to a genetic algorithm and a Rayleigh distribution function comprises:
initializing a population, and randomly generating initial population individuals;
decoding the gene string of each individual in the population into a corresponding kernel function parameter and error penalty factor, wherein each gene string consists of an encoded kernel function parameter and an encoded error penalty factor;
substituting the kernel function parameters and the error penalty factors into the support vector machine prediction model, and calculating a first fitness according to the Rayleigh distribution function;
and when the first fitness does not meet the requirement, re-encoding the kernel function parameters and the error penalty factors into gene strings, performing replication and crossover on the gene strings to form a new-generation population, taking the new-generation population as the new initial population, and returning to the step of decoding the gene string of each individual in the population into the corresponding kernel function parameter and error penalty factor, until the corresponding first fitness meets the requirement.
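A minimal sketch of the optimization loop in this claim, with the concrete choices flagged as assumptions: the gene string is a real-valued (gamma, C) pair, "replication and crossover" is realized as keeping the fitter half of the population plus arithmetic crossover with a small mutation, and the Rayleigh distribution enters as a weight on the validation errors when scoring fitness. None of these specifics is dictated by the patent text:

```python
import numpy as np
from sklearn.multioutput import MultiOutputRegressor
from sklearn.svm import SVR

def rayleigh_weighted_fitness(errors, sigma=1.0):
    # Assumed reading of the "Rayleigh distribution function" step: weight each
    # validation error by the Rayleigh density before averaging (lower is better).
    weights = (errors / sigma ** 2) * np.exp(-(errors ** 2) / (2 * sigma ** 2))
    return float(np.mean(weights * errors))

def fitness(individual, X_tr, y_tr, X_val, y_val):
    gamma, C = individual   # decode the gene string into (kernel parameter, penalty factor)
    model = MultiOutputRegressor(SVR(kernel="rbf", gamma=gamma, C=C)).fit(X_tr, y_tr)
    errors = np.linalg.norm(model.predict(X_val) - y_val, axis=1)
    return rayleigh_weighted_fitness(errors)

def ga_optimize_svm(X_tr, y_tr, X_val, y_val, pop_size=20, generations=30,
                    target=1e-3, seed=0):
    rng = np.random.default_rng(seed)
    # Initial population: random (gamma, C) pairs within assumed search bounds.
    pop = np.column_stack([rng.uniform(1e-3, 1.0, pop_size),     # kernel function parameter
                           rng.uniform(1.0, 100.0, pop_size)])   # penalty factor
    best_ind, best_fit = None, np.inf
    for _ in range(generations):
        fits = np.array([fitness(ind, X_tr, y_tr, X_val, y_val) for ind in pop])
        order = np.argsort(fits)
        if fits[order[0]] < best_fit:
            best_ind, best_fit = pop[order[0]].copy(), fits[order[0]]
        if best_fit <= target:                           # "first fitness meets the requirement"
            break
        parents = pop[order[: pop_size // 2]]            # replication: keep the fitter half
        children = []
        while len(children) < pop_size - len(parents):
            p1, p2 = parents[rng.integers(len(parents), size=2)]
            alpha = rng.uniform()                        # arithmetic crossover of two gene strings
            child = alpha * p1 + (1 - alpha) * p2
            child = child * rng.normal(1.0, 0.05, size=2)  # small mutation
            children.append(np.clip(child, [1e-4, 0.1], [10.0, 1000.0]))
        pop = np.vstack([parents, children])             # new-generation population
    return best_ind, best_fit
```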
8. A three-dimensional reconstruction apparatus for a material pile, the apparatus comprising:
a laser emitter configured to emit a laser grid toward the material pile, the laser grid completely covering the material pile;
acquisition devices configured to acquire two grid images of the material pile from different angles under irradiation of the laser grid;
an acquisition module configured to acquire pixel coordinates, in each grid image, of target pixel points in the two grid images, wherein the target pixel points are pixel points corresponding to the same region in the two grid images;
a conversion module configured to input the pixel coordinates of the target pixel points in each grid image into a trained coordinate prediction model to obtain world coordinates of the material pile; wherein the coordinate prediction model is constructed by: constructing a support vector machine model comprising kernel function parameters and penalty factors, and optimizing the kernel function parameters and the penalty factors in the support vector machine model according to a genetic algorithm and a Rayleigh distribution function; and constructing a training data set from the pixel coordinates of the target pixel points in each grid image and the world coordinates of the target points of the material pile onto which the target pixel points are mapped, and training the optimized support vector machine model with the training data set to obtain the trained coordinate prediction model;
and an establishing module configured to establish a three-dimensional model of the material pile according to the world coordinates of the material pile.
9. An electronic device, comprising: a processor, a memory, and a bus, wherein the memory stores machine-readable instructions executable by the processor, the processor and the memory communicate via the bus when the electronic device is running, and the machine-readable instructions, when executed by the processor, perform the steps of the three-dimensional reconstruction method for a material pile according to any one of claims 1 to 7.
10. A computer-readable storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of the three-dimensional reconstruction method for a material pile according to any one of claims 1 to 7.
CN202110636012.3A 2021-06-08 2021-06-08 Material stack three-dimensional reconstruction method and device, electronic equipment and storage medium Active CN113240801B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110636012.3A CN113240801B (en) 2021-06-08 2021-06-08 Material stack three-dimensional reconstruction method and device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN113240801A true CN113240801A (en) 2021-08-10
CN113240801B CN113240801B (en) 2023-09-19

Family

ID=77137230

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110636012.3A Active CN113240801B (en) 2021-06-08 2021-06-08 Material stack three-dimensional reconstruction method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN113240801B (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104680585A (en) * 2013-11-29 2015-06-03 深圳先进技术研究院 Three-dimensional reconstruction system and method for material stack
CN106949851A (en) * 2017-03-29 2017-07-14 沈阳建筑大学 A kind of line structured light vision sensor calibration method based on SVMs
CN109377555A (en) * 2018-11-14 2019-02-22 江苏科技大学 Autonomous underwater robot prospect visual field three-dimensional reconstruction target's feature-extraction recognition methods
CN110798275A (en) * 2019-10-16 2020-02-14 西安科技大学 Mine multimode wireless signal accurate identification method
CN111724481A (en) * 2020-06-24 2020-09-29 嘉应学院 Method, device, equipment and storage medium for three-dimensional reconstruction of two-dimensional image

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
李蓝青; 赵刚: "Particle swarm algorithm for wavelet threshold denoising of partial discharge", Electrical Automation (电气自动化), no. 01 *
秦占师; 张智军; 曹晓英; 陈稳: "Track-before-detect algorithm for dim and small radar targets based on SVM-UPF", Fire Control & Command Control (火力与指挥控制), no. 03 *

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115657496A (en) * 2022-10-19 2023-01-31 中冶赛迪信息技术(重庆)有限公司 Method and system for determining material distribution and material mixing during discharging of storage bin
CN116109788A (en) * 2023-02-15 2023-05-12 张春阳 Method for modeling and reconstructing solid piece
CN116109788B (en) * 2023-02-15 2023-07-04 张春阳 Method for modeling and reconstructing solid piece
CN116665139A (en) * 2023-08-02 2023-08-29 中建八局第一数字科技有限公司 Method and device for identifying volume of piled materials, electronic equipment and storage medium
CN116665139B (en) * 2023-08-02 2023-12-22 中建八局第一数字科技有限公司 Method and device for identifying volume of piled materials, electronic equipment and storage medium
CN117132590A (en) * 2023-10-24 2023-11-28 威海天拓合创电子工程有限公司 Image-based multi-board defect detection method and device
CN117132590B (en) * 2023-10-24 2024-03-01 威海天拓合创电子工程有限公司 Image-based multi-board defect detection method and device

Also Published As

Publication number Publication date
CN113240801B (en) 2023-09-19

Similar Documents

Publication Publication Date Title
CN113240801B (en) Material stack three-dimensional reconstruction method and device, electronic equipment and storage medium
CN110634161B (en) Rapid high-precision estimation method and device for workpiece pose based on point cloud data
CN110246092B (en) Three-dimensional laser point cloud denoising method considering neighborhood point mean distance and slope
Gao et al. An approach to combine progressively captured point clouds for BIM update
CN104816306A (en) Robot, robot system, control device and control method
CN112907747A (en) Point cloud data processing method and device, electronic equipment and storage medium
US20120033873A1 (en) Method and device for determining a shape match in three dimensions
CN113487633A (en) Point cloud contour extraction method and device, computer equipment and storage medium
CN112053427A (en) Point cloud feature extraction method, device, equipment and readable storage medium
CN113077523A (en) Calibration method, calibration device, computer equipment and storage medium
CN112257721A (en) Image target region matching method based on Fast ICP
CN109190452A (en) Crop row recognition methods and device
CN114387408A (en) Method and device for generating digital elevation model and computer readable storage medium
JP6146731B2 (en) Coordinate correction apparatus, coordinate correction program, and coordinate correction method
CN110223356A (en) A kind of monocular camera full automatic calibration method based on energy growth
Xiao et al. Filtering method of rock points based on BP neural network and principal component analysis
JP6212398B2 (en) Landscape quantification device
Mitropoulou et al. An automated process to detect edges in unorganized point clouds
CN114998397B (en) Multi-view satellite image stereopair optimization selection method
CN111260714A (en) Flood disaster assessment method, device, equipment and computer storage medium
CN113762310B (en) Point cloud data classification method, device, computer storage medium and system
KR101781359B1 (en) A Method Of Providing For Searching Footprint And The System Practiced The Method
CN111428530B (en) Two-dimensional code image detection and identification equipment, device and method
CN113920269A (en) Project progress obtaining method and device, electronic equipment and medium
CN117152364B (en) Method, device and equipment for three-dimensional reconstruction of water body boundary based on image data

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant