CN117115225A - Intelligent comprehensive informatization management platform for natural resources - Google Patents
- Publication number: CN117115225A (application CN202311128047.1A)
- Authority: CN (China)
- Prior art keywords: point cloud, cloud data, image, point, depth
- Legal status: Granted (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G06T7/55—Depth or shape recovery from multiple images
- G06N3/0475—Generative networks
- G06N3/094—Adversarial learning
- G06T7/73—Determining position or orientation of objects or cameras using feature-based methods
- G06T2207/10028—Range image; Depth image; 3D point clouds
- G06T2207/20081—Training; Learning
- G06T2207/20084—Artificial neural networks [ANN]
Abstract
The invention relates to the technical field of image processing and discloses an intelligent comprehensive informatization management platform for natural resources, comprising: a first module configured to collect first point cloud data, densify the first point cloud data to obtain dense point cloud data, and convert the dense point cloud data into a first depth image; a second module configured to input image groups into a generative adversarial model and train the generative adversarial model, wherein a first depth image and a first monocular image form an image group and the camera coordinate systems of the first monocular image and the first depth image of the image group are the same; and a third module configured to input a first monocular image to be processed into the generative adversarial model, a second depth image being generated by the generator of the generative adversarial model. The method can accurately estimate the depth of a monocular image of a mine tunnel for three-dimensional reconstruction.
Description
Technical Field
The invention relates to the technical field of image processing, and in particular to an intelligent comprehensive informatization management platform for natural resources.
Background
Monocular depth estimation can be used for three-dimensional reconstruction: a three-dimensional image of the interior of a mine tunnel can be used to simulate the mine environment and to train the recognition of possible dangers. However, a mine tunnel lacks natural illumination, so the acquired images differ greatly from images captured under good lighting and depth features are severely lost, which makes it very difficult to train an unsupervised monocular depth estimation model to convergence; meanwhile, measuring the true depth values required for supervised training is impractical in a mine tunnel environment.
Disclosure of Invention
At least one embodiment of the present disclosure provides a method, a platform and a medium for intelligent comprehensive informatization management of natural resources, in which a general unsupervised monocular depth estimation model is applied to a second monocular image captured under sufficient illumination to obtain depth values, a neural network model that obtains dense point cloud data from sparse point cloud data is then trained, and training samples for the depth estimation model are constructed, thereby solving the technical problem in the related art that the monocular depth estimation model is difficult to train.
At least one embodiment of the present disclosure provides a method for intelligent comprehensive informatization management of natural resources, including:
collecting first point cloud data, densifying the first point cloud data to obtain dense point cloud data, and converting the dense point cloud data into a first depth image, wherein the size of the first depth image is consistent with that of a first monocular image;
a first depth image and a first monocular image form an image group, the camera coordinate systems of the first monocular image and the first depth image of the image group are the same, and the origins of these camera coordinate systems map to the same position in real space;
inputting the image group into a generative adversarial model, and training the generative adversarial model;
inputting the first monocular image to be processed into the generator of the generative adversarial model, and generating a second depth image by the generator of the generative adversarial model.
During training of the generative adversarial model in the embodiments of the present disclosure, the discriminator receives either the first depth image or the second depth image generated by the generator.
For example, in a method for intelligent comprehensive informatization management of natural resources provided in at least one embodiment of the present disclosure, dense point cloud data is obtained through a neural network model, where the neural network model includes:
the hidden vector coding layer is configured to input the point cloud space matrix and the first point cloud data and output a hidden vector coding matrix;
a regeneration layer configured to input the hidden vector encoding matrix and output a generated space matrix;
the calculation formula of the regeneration layer is as follows:
G = sigmoid(ZZ^T)
where G denotes the generated space matrix and Z denotes the hidden vector coding matrix;
updating the three-dimensional coordinates of the default points through the generated space matrix, the updated three-dimensional coordinates of a default point being given by the following calculation formula:
where D_{x,k}, D_{y,k} and D_{z,k} are respectively the X, Y and Z axis coordinates of the point numbered k, N_a is the set of points corresponding to the first point cloud data, and θ_{a,k} is the value of the element in row a, column k of the generated space matrix;
adding the updated default point into the first point cloud data to generate dense point cloud data.
For example, in a neural network provided in at least one embodiment of the present disclosure, a method for generating a point cloud space matrix includes:
mapping the points of the first point cloud data onto the image coordinate system of the second monocular image to obtain a sparse matrix, wherein the size of the sparse matrix is the same as that of the second monocular image, and elements with a value of 1 in the sparse matrix represent the points of the first point cloud data; establishing default points, each represented by an element with a value of 0 in the sparse matrix, wherein the coordinate values of a default point are initially set to a default value of 1 or 0;
defining the element at the upper left corner of the sparse matrix as the first row and first column, traversing the sparse matrix row by row starting from that element and from the first column of each row, numbering the traversed elements in traversal order, and assigning each element's number to the default point or the point of the first point cloud data that it represents;
generating a point cloud space matrix, wherein the element in the i-th row and j-th column indicates whether the points numbered i and j are adjacent in spatial position on the image coordinate system: a value of 1 indicates that they are adjacent, otherwise they are not adjacent;
a default point is not adjacent to any point; each point of the first point cloud data is adjacent to its four nearest points of the first point cloud data on the image coordinate system.
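To make the construction above concrete, the following is a minimal NumPy sketch, not the patented implementation: the function name, the 0-based row-major numbering and the brute-force four-nearest-neighbour search are illustrative choices here, and the projected pixel coordinates `uv` are assumed to have been computed beforehand.

```python
import numpy as np

def build_sparse_and_space_matrix(uv, image_hw):
    """Build the sparse matrix and the point cloud space matrix described above.

    uv       : (N, 2) integer pixel coordinates (u, v) of the first point cloud data
               projected onto the image coordinate system of the second monocular image.
    image_hw : (H, W) size of the second monocular image.
    Returns S (H, W) with 1 marking projected points (0 marks default points) and
    A (H*W, H*W) with A[i, j] = 1 iff the points numbered i and j are adjacent.
    """
    H, W = image_hw
    S = np.zeros((H, W), dtype=np.int8)
    S[uv[:, 1], uv[:, 0]] = 1                      # value 1: a point of the first point cloud data

    # Number every element in row-major order starting at the upper-left corner;
    # the number is shared by the point or default point that the element represents.
    numbers = np.arange(H * W).reshape(H, W)

    A = np.zeros((H * W, H * W), dtype=np.int8)    # dense allocation, fine for small illustrative sizes
    rows, cols = np.nonzero(S)
    cloud_numbers = numbers[rows, cols]
    cloud_xy = np.stack([cols, rows], axis=1).astype(np.float64)

    # Default points stay isolated; each first-point-cloud point is linked to its
    # four nearest first-point-cloud points on the image plane.
    for idx, k in enumerate(cloud_numbers):
        d = np.linalg.norm(cloud_xy - cloud_xy[idx], axis=1)
        d[idx] = np.inf                            # exclude the point itself
        nearest = cloud_numbers[np.argsort(d)[:4]]
        A[k, nearest] = 1
        A[nearest, k] = 1                          # keep the adjacency relation symmetric
    return S, A
```

For a full-resolution image the space matrix has (H·W)² entries, so a sparse matrix representation would be used in practice.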
For example, in the neural network model provided by at least one embodiment of the present disclosure, the training loss function of the neural network model is as follows:
where R_{x,a}, R_{y,a} and R_{z,a} respectively denote the X, Y and Z axis coordinates of the point numbered a in the third point cloud data;
a third depth image is generated based on the second monocular image, and the third depth image is converted into third point cloud data.
For example, in a method for intelligent comprehensive informatization management of natural resources provided in at least one embodiment of the present disclosure, the method for obtaining dense point cloud data by performing densification on the first point cloud data includes the following steps:
the points of the first point cloud data are mapped onto an image coordinate system of the second monocular image to obtain a sparse matrix; and generating default points based on the sparse matrix;
generating a point cloud space matrix which represents whether the space position relation of the points is adjacent or not through the sparse matrix;
generating a third depth image based on the second monocular image, and converting the third depth image into third point cloud data;
For example, in at least one embodiment of the present disclosure, a method of generating a third depth image based on the second monocular image is provided, the third depth image being generated using an unsupervised monocular depth estimation model such as Monodepth2.
Training the neural network model through the first point cloud data and the third point cloud data;
and inputting the first point cloud data into the trained neural network model to obtain dense point cloud data.
For example, in a natural resource intelligent comprehensive informatization management method provided in at least one embodiment of the present disclosure, the loss function used when training the generator of the generative adversarial model is expressed as:
LOSS_2 = 1*log D(x_1) + 0*log D(x_0) + 1*log D(G(z_1)) + 0*log D(G(z_0))
where D(x_0) and D(x_1) are respectively the probabilities with which the discriminator judges whether the samples input to it are the first depth image or the second depth image, and D(G(z_1)) and D(G(z_0)) are respectively the probabilities with which the discriminator judges whether the generator-produced samples input to it are the first depth image or the second depth image.
The loss function used when training the discriminator of the generative adversarial model is expressed as -LOSS_2.
The present disclosure also provides, in at least one embodiment, an intelligent comprehensive informatization management platform for natural resources, including:
a first module configured to collect first point cloud data, densify the first point cloud data to obtain dense point cloud data, and convert the dense point cloud data into a first depth image;
a second module configured to input the image group into a generative adversarial model and train the generative adversarial model, wherein a first depth image and a first monocular image form an image group and the camera coordinate systems of the first monocular image and the first depth image of the image group are the same;
and a third module configured to input the first monocular image to be processed into the generative adversarial model and generate a second depth image by the generator of the generative adversarial model.
At least one embodiment of the present disclosure also provides a storage medium storing non-transitory computer-readable instructions that, when executed by a computer, perform one or more steps of the foregoing natural resource intelligent integrated informatization management method.
Drawings
FIG. 1 is a flow chart of a method for intelligent integrated informatization management of natural resources according to at least one embodiment of the present disclosure;
FIG. 2 is a flow chart of a method of obtaining dense point cloud data by densification of first point cloud data provided in accordance with at least one embodiment of the present disclosure;
FIG. 3 is a schematic block diagram of a smart integrated informatization management system for natural resources according to at least one embodiment of the present disclosure;
fig. 4 is a schematic diagram of a storage medium provided in at least one embodiment of the present disclosure.
Detailed Description
The subject matter described herein will now be discussed with reference to example embodiments. It is to be understood that these embodiments are merely discussed so that those skilled in the art may better understand and implement the subject matter described herein and that changes may be made in the function and arrangement of the elements discussed without departing from the scope of the disclosure herein. Various examples may omit, replace, or add various procedures or components as desired. In addition, features described with respect to some examples may be combined in other examples as well.
FIG. 1 is a flow chart of a method for intelligent integrated informatization management of natural resources according to at least one embodiment of the present disclosure;
a natural resource intelligent comprehensive informatization management method comprises the following steps:
s101, collecting first point cloud data, carrying out densification through the first point cloud data to obtain dense point cloud data, converting the dense point cloud data into a first depth image, and enabling the size of the first depth image to be consistent with that of a first monocular image;
the uniform size means that the two images are uniform in length and width, and the total number of pixels is uniform.
S102, forming an image group by a first depth image and a first monocular image, wherein the camera coordinate systems of the first monocular image and the first depth image of the image group are the same;
the origin of the camera coordinate systems of the first monocular image and the first depth image of one image group are mapped to the same position in real space.
Inputting the image group into a generative adversarial model, and training the generative adversarial model;
S103, inputting the first monocular image to be processed into the generator of the generative adversarial model, and generating a second depth image by the generator of the generative adversarial model.
One use of the second depth image provided by the embodiments of the present disclosure is three-dimensional reconstruction. The above steps may therefore further include a three-dimensional reconstruction step for obtaining a three-dimensional image, which can then be used, for example, to identify manually or automatically whether hidden dangers exist inside the mine tunnel.
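Both conversions used above, dense point cloud to first depth image in step S101 and depth image back to points for three-dimensional reconstruction, reduce to the pinhole camera model. The sketch below is a generic illustration that assumes known intrinsics fx, fy, cx, cy (the patent does not specify the camera model); it is not the patented conversion itself.

```python
import numpy as np

def point_cloud_to_depth(points, fx, fy, cx, cy, image_hw):
    """Project points (N, 3) in the camera coordinate system into an (H, W) depth image."""
    H, W = image_hw
    depth = np.zeros((H, W), dtype=np.float32)
    valid = points[:, 2] > 0                        # only points in front of the camera
    x, y, z = points[valid, 0], points[valid, 1], points[valid, 2]
    u = np.round(x * fx / z + cx).astype(int)
    v = np.round(y * fy / z + cy).astype(int)
    ok = (u >= 0) & (u < W) & (v >= 0) & (v < H)
    # simple z-buffer: when several points fall on one pixel, keep the nearest
    for uu, vv, zz in sorted(zip(u[ok], v[ok], z[ok]), key=lambda t: -t[2]):
        depth[vv, uu] = zz
    return depth

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project an (H, W) depth image into one 3D point per pixel with depth > 0."""
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    z = depth.astype(np.float32)
    pts = np.stack([(u - cx) * z / fx, (v - cy) * z / fy, z], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]
```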
Embodiments of the present disclosure provide a neural network model for obtaining dense point cloud data by densification of first point cloud data, comprising:
the hidden vector coding layer is configured to input the point cloud space matrix and the first point cloud data and output a hidden vector coding matrix;
a second monocular image (a monocular image captured in a bright environment), based on which an image matrix is generated;
mapping the points of the first point cloud data onto the image coordinate system of the second monocular image to obtain a sparse matrix, wherein the size of the sparse matrix is the same as that of the second monocular image, and elements with a value of 1 in the sparse matrix represent the points of the first point cloud data; establishing default points, each represented by an element with a value of 0 in the sparse matrix, wherein the coordinate values of a default point are initially set to a default value of 1 or 0.
Defining the element at the upper left corner of the sparse matrix as the first row and first column, traversing the sparse matrix row by row starting from that element and from the first column of each row, numbering the traversed elements in traversal order, and assigning each element's number to the default point or the point of the first point cloud data that it represents;
generating a point cloud space matrix, wherein the element in the i-th row and j-th column indicates whether the points numbered i and j (including both the points of the first point cloud data and the default points) are adjacent in spatial position on the image coordinate system: a value of 1 indicates that they are adjacent, otherwise they are not adjacent;
a default point is not adjacent to any point; each point of the first point cloud data is adjacent to its four nearest points of the first point cloud data on the image coordinate system.
A regeneration layer configured to input the hidden vector encoding matrix and output a generated space matrix;
the calculation formula of the regeneration layer is as follows:
G = sigmoid(ZZ^T)
where G denotes the generated space matrix and Z denotes the hidden vector coding matrix;
updating the three-dimensional coordinates of the default points through the generated space matrix, the updated three-dimensional coordinates of a default point being given by the following calculation formula:
where D_{x,k}, D_{y,k} and D_{z,k} are respectively the X, Y and Z axis coordinates of the point numbered k, N_a is the set of points corresponding to the first point cloud data, and θ_{a,k} is the value of the element in row a, column k of the generated space matrix;
adding the updated default point into the first point cloud data to generate dense point cloud data;
generating a third depth image based on the second monocular image, and converting the third depth image into third point cloud data;
the training loss function of the neural network model for obtaining dense point cloud data provided by the embodiment of the disclosure is constructed based on the difference between the dense point cloud data and the third point cloud data;
specifically, the training loss function of the neural network model for obtaining dense point cloud data is as follows:
where R_{x,a}, R_{y,a} and R_{z,a} respectively denote the X, Y and Z axis coordinates of the point numbered a in the third point cloud data.
The method provided by the embodiments of the present disclosure has the following beneficial effect: the acquisition of point cloud data is not affected by the dim-light environment inside a mine tunnel, so the neural network model that obtains dense point cloud data through densification does not use data affected by the dim-light environment during training or use, and its performance is not disturbed by the dim-light environment; a neural network model with good performance for obtaining dense point cloud data can therefore be obtained through training with a second monocular image captured under sufficient illumination.
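To fix the data flow, the following PyTorch sketch assembles the layers described above. It is one reading of the text rather than the patented network: the internals of the hidden vector coding layer, the exact default-point update formula and the exact loss appear in the patent only as formula images, so a simple linear-plus-adjacency aggregation, a normalised θ-weighted average of the first-point-cloud coordinates and a symmetric Chamfer distance are used here as stand-ins; only the regeneration layer G = sigmoid(ZZ^T) is taken directly from the text.

```python
import torch
import torch.nn as nn

class DensificationNet(nn.Module):
    """Hidden vector coding layer followed by the regeneration layer G = sigmoid(Z Z^T)."""

    def __init__(self, feat_dim=3, hidden_dim=64):
        super().__init__()
        self.coding = nn.Linear(feat_dim, hidden_dim)   # assumed form of the hidden vector coding layer

    def forward(self, coords, space_matrix, cloud_mask):
        # coords       : (P, 3) float32 coordinates of all numbered points (default points hold the default value)
        # space_matrix : (P, P) point cloud space matrix (0/1 adjacency)
        # cloud_mask   : (P,) bool, True for points of the first point cloud data
        A = space_matrix.float()
        Z = torch.relu(A @ self.coding(coords))         # hidden vector coding matrix (assumed aggregation)
        G = torch.sigmoid(Z @ Z.t())                    # regeneration layer, as stated in the text

        # Update default points as a theta-weighted combination of first-point-cloud coordinates
        # (one plausible reading of the update formula, which is not reproduced in the text).
        theta = G[cloud_mask]                           # theta[a, k]: row a over first-cloud points, column k
        weights = theta / (theta.sum(dim=0, keepdim=True) + 1e-8)
        updated = weights.t() @ coords[cloud_mask]      # (P, 3) candidate coordinates for every point
        # dense point cloud: original coordinates for first-cloud points, updated ones for default points
        return torch.where(cloud_mask.unsqueeze(1), coords, updated)

def chamfer_loss(dense, third):
    """Stand-in training loss: symmetric Chamfer distance between the dense point cloud
    and the third point cloud (the patent's own loss formula is not reproduced here)."""
    d = torch.cdist(dense, third)                       # (P, Q) pairwise Euclidean distances
    return d.min(dim=1).values.mean() + d.min(dim=0).values.mean()
```

In practice the hidden vector coding matrix Z would come from a deeper encoder and the loss from the patent's own formula; the sketch only fixes the shapes and the data flow.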
fig. 2 is a method for obtaining dense point cloud data by performing densification of first point cloud data according to at least one embodiment of the present disclosure, including the following steps:
step S201, mapping the points of the first point cloud data onto an image coordinate system of a second monocular image to obtain a sparse matrix; and generating default points based on the sparse matrix;
step S202, generating a point cloud space matrix which represents whether the space position relation of the points is adjacent or not through a sparse matrix;
step S203, generating a third depth image based on the second monocular image, and converting the third depth image into third point cloud data;
step S204, training a neural network model through the first point cloud data and the third point cloud data;
in step S205, the first point cloud data is input into the trained neural network model to obtain dense point cloud data.
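Steps S201 to S205 can then be strung together as below. This reuses the helpers sketched earlier in this description (build_sparse_and_space_matrix, depth_to_point_cloud, DensificationNet, chamfer_loss); the optimizer, learning rate and epoch count are illustrative assumptions, and the data loading and the monocular depth model are assumed to exist elsewhere.

```python
import torch

def densify_first_point_cloud(coords, space_matrix, cloud_mask, third_cloud, epochs=200):
    """coords, space_matrix and cloud_mask come from steps S201-S202; third_cloud is the
    third point cloud data from step S203 (third depth image back-projected to points)."""
    model = DensificationNet()                          # model sketched above
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    for _ in range(epochs):                             # S204: train with the first and third point cloud data
        opt.zero_grad()
        dense = model(coords, space_matrix, cloud_mask)
        chamfer_loss(dense, third_cloud).backward()
        opt.step()
    with torch.no_grad():                               # S205: apply the trained model to the first point cloud
        return model(coords, space_matrix, cloud_mask)
```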
During training of the generative adversarial model in the embodiments of the present disclosure, the discriminator receives either the first depth image or the second depth image generated by the generator.
The loss function used when training the generative adversarial model is expressed as:
LOSS_2 = 1*log D(x_1) + 0*log D(x_0) + 1*log D(G(z_1)) + 0*log D(G(z_0))
where D(x_0) and D(x_1) are respectively the probabilities with which the discriminator judges whether the samples input to it are the first depth image or the second depth image, and D(G(z_1)) and D(G(z_0)) are respectively the probabilities with which the discriminator judges whether the generator-produced samples input to it are the first depth image or the second depth image;
when training the discriminator, the loss function takes a negative sign, i.e. the optimization objective is to maximize LOSS_2.
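In PyTorch, one training step in the stated form might look as follows. The sketch assumes that D maps a depth image to a probability in (0, 1) and that G maps a first monocular image to a depth image; the 0-weighted terms of LOSS_2 are dropped because they contribute nothing, and the small epsilon guarding the logarithm is an implementation choice, not part of the patent text.

```python
import torch

def gan_training_step(D, G, d_opt, g_opt, x1, z1, eps=1e-8):
    """x1: a batch of first depth images; z1: the corresponding batch of first monocular images."""
    # Discriminator step: trained with -LOSS_2, i.e. it maximises LOSS_2;
    # the generator output is detached so that only the discriminator is updated here.
    d_opt.zero_grad()
    loss2_d = torch.log(D(x1) + eps).mean() + torch.log(D(G(z1).detach()) + eps).mean()
    (-loss2_d).backward()
    d_opt.step()

    # Generator step: trained with LOSS_2 (the D(x1) term carries no generator gradient).
    g_opt.zero_grad()
    loss2_g = torch.log(D(x1) + eps).mean() + torch.log(D(G(z1)) + eps).mean()
    loss2_g.backward()
    g_opt.step()
    return loss2_d.item(), loss2_g.item()
```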
Fig. 3 is a diagram of a system 200 for intelligent integrated informatization management of natural resources according to at least one embodiment of the present disclosure, which includes:
a first module 210 configured to collect first point cloud data, obtain dense point cloud data by performing densification on the first point cloud data, and convert the dense point cloud data into a first depth image;
a second module 220 configured to input the image group into a generative adversarial model and train the generative adversarial model, wherein a first depth image and a first monocular image form an image group, and the camera coordinate systems of the first monocular image and the first depth image of the image group are the same;
a third module 230 configured to input the first monocular image to be processed into the generative adversarial model and generate a second depth image by the generator of the generative adversarial model.
Fig. 4 is a storage medium 300 storing non-transitory computer readable instructions 310 according to at least one embodiment of the present disclosure, where the non-transitory computer readable instructions 310 are configured to perform one or more steps of the foregoing method for intelligent integrated information management of natural resources.
The embodiments have been described above with reference to examples, but they are not limited to the specific implementations described, which are merely illustrative and not restrictive; those of ordinary skill in the art, given the benefit of this disclosure, may derive many other forms that remain within the scope of the embodiments.
Claims (9)
1. The intelligent comprehensive informatization management method for the natural resources is characterized by comprising the following steps of:
collecting first point cloud data, performing densification through the first point cloud data to obtain dense point cloud data, and converting the dense point cloud data into a first depth image, wherein the size of the first depth image is consistent with that of a first monocular image;
a first depth image and a first monocular image form an image group, and the camera coordinate systems of the first monocular image and the first depth image of the image group are the same; and the origins of the camera coordinate systems of the first monocular image and the first depth image of one image group map to the same position in real space;
inputting the image group into a generative adversarial model, and training the generative adversarial model;
inputting the first monocular image to be processed into the generator of the generative adversarial model, and generating a second depth image by the generator of the generative adversarial model.
2. The method for intelligent comprehensive informatization management of natural resources according to claim 1, wherein the dense point cloud data is obtained through a neural network model, the neural network model comprising:
the hidden vector coding layer is configured to input the point cloud space matrix and the first point cloud data and output a hidden vector coding matrix;
a regeneration layer configured to input the hidden vector encoding matrix and output a generated space matrix;
the calculation formula of the regeneration layer is as follows:
G = sigmoid(ZZ^T)
where G denotes the generated space matrix and Z denotes the hidden vector coding matrix;
updating the three-dimensional coordinates of the default points through the generated space matrix, the updated three-dimensional coordinates of a default point being given by the following calculation formula:
where D_{x,k}, D_{y,k} and D_{z,k} are respectively the X, Y and Z axis coordinates of the point numbered k, N_a is the set of points corresponding to the first point cloud data, and θ_{a,k} is the value of the element in row a, column k of the generated space matrix;
adding the updated default point into the first point cloud data to generate dense point cloud data.
3. The method for intelligent comprehensive informatization management of natural resources according to claim 2, wherein the method for generating the point cloud space matrix comprises the following steps:
mapping the points of the first point cloud data onto the image coordinate system of the second monocular image to obtain a sparse matrix, wherein the size of the sparse matrix is the same as that of the second monocular image, and elements with a value of 1 in the sparse matrix represent the points of the first point cloud data; establishing default points, each represented by an element with a value of 0 in the sparse matrix, wherein the coordinate values of a default point are initially set to a default value of 1 or 0;
defining the element at the upper left corner of the sparse matrix as the first row and first column, traversing the sparse matrix row by row starting from that element and from the first column of each row, numbering the traversed elements in traversal order, and assigning each element's number to the default point or the point of the first point cloud data that it represents;
generating a point cloud space matrix, wherein the element in the i-th row and j-th column indicates whether the points numbered i and j are adjacent in spatial position on the image coordinate system: a value of 1 indicates that they are adjacent, otherwise they are not adjacent;
a default point is not adjacent to any point; each point of the first point cloud data is adjacent to its four nearest points of the first point cloud data on the image coordinate system.
4. The method for intelligent comprehensive informatization management of natural resources according to claim 2, wherein the training loss function of the neural network model is as follows:
where R_{x,a}, R_{y,a} and R_{z,a} respectively denote the X, Y and Z axis coordinates of the point numbered a in the third point cloud data;
a third depth image is generated based on the second monocular image, and the third depth image is converted into third point cloud data.
5. The method for intelligent comprehensive informatization management of natural resources according to claim 2, wherein the method for obtaining dense point cloud data by performing densification on the first point cloud data comprises the following steps:
the points of the first point cloud data are mapped onto an image coordinate system of the second monocular image to obtain a sparse matrix; and generating default points based on the sparse matrix;
generating a point cloud space matrix which represents whether the space position relation of the points is adjacent or not through the sparse matrix;
generating a third depth image based on the second monocular image, and converting the third depth image into third point cloud data;
training the neural network model through the first point cloud data and the third point cloud data;
and inputting the first point cloud data into the trained neural network model to obtain dense point cloud data.
6. The method for intelligent comprehensive informatization management of natural resources according to claim 1, wherein the loss function used when training the generator of the generative adversarial model is expressed as:
LOSS_2 = 1*log D(x_1) + 0*log D(x_0) + 1*log D(G(z_1)) + 0*log D(G(z_0))
where D(x_0) and D(x_1) are respectively the probabilities with which the discriminator judges whether the samples input to it are the first depth image or the second depth image, and D(G(z_1)) and D(G(z_0)) are respectively the probabilities with which the discriminator judges whether the generator-produced samples input to it are the first depth image or the second depth image.
7. The method for intelligent comprehensive informatization management of natural resources according to claim 6, wherein the loss function used during training of the discriminator of the generative adversarial model is expressed as -LOSS_2.
8. A natural resource intelligent comprehensive informatization management system is characterized by comprising:
the first module is configured to acquire first point cloud data, obtain dense point cloud data through the densification of the first point cloud data, and convert the dense point cloud data into a first depth image;
a second module configured to input the image group into a generative adversarial model and train the generative adversarial model, wherein a first depth image and a first monocular image form an image group, and the camera coordinate systems of the first monocular image and the first depth image of the image group are the same;
and a third module configured to input the first monocular image to be processed into the generative adversarial model and generate a second depth image by the generator of the generative adversarial model.
9. A storage medium having stored thereon non-transitory computer readable instructions which, when executed by a computer, perform one or more steps of the natural resource intelligent integrated informatization management method of any one of claims 1-8.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202311128047.1A CN117115225B (en) | 2023-09-01 | 2023-09-01 | Intelligent comprehensive informatization management platform for natural resources |
Publications (2)
Publication Number | Publication Date |
---|---|
CN117115225A true CN117115225A (en) | 2023-11-24 |
CN117115225B CN117115225B (en) | 2024-04-30 |
Family
ID=88801917
Patent Citations (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107862293A (en) * | 2017-09-14 | 2018-03-30 | 北京航空航天大学 | Radar based on confrontation generation network generates colored semantic image system and method |
US20200273190A1 (en) * | 2018-03-14 | 2020-08-27 | Dalian University Of Technology | Method for 3d scene dense reconstruction based on monocular visual slam |
CN111161364A (en) * | 2019-12-24 | 2020-05-15 | 东南大学 | Real-time shape completion and attitude estimation method for single-view depth map |
CN111563923A (en) * | 2020-07-15 | 2020-08-21 | 浙江大华技术股份有限公司 | Method for obtaining dense depth map and related device |
CN112330729A (en) * | 2020-11-27 | 2021-02-05 | 中国科学院深圳先进技术研究院 | Image depth prediction method and device, terminal device and readable storage medium |
CN112819875A (en) * | 2021-02-03 | 2021-05-18 | 苏州挚途科技有限公司 | Monocular depth estimation method and device and electronic equipment |
WO2022242416A1 (en) * | 2021-05-21 | 2022-11-24 | 北京百度网讯科技有限公司 | Method and apparatus for generating point cloud data |
CN113379646A (en) * | 2021-07-07 | 2021-09-10 | 厦门大学 | Algorithm for performing dense point cloud completion by using generated countermeasure network |
WO2023015880A1 (en) * | 2021-08-09 | 2023-02-16 | 深圳奥锐达科技有限公司 | Acquisition method for training sample set, model training method and related apparatus |
CN114724111A (en) * | 2022-04-14 | 2022-07-08 | 重庆亲禾智千科技有限公司 | Intelligent forklift identification obstacle avoidance method based on deepstream |
CN114998406A (en) * | 2022-07-14 | 2022-09-02 | 武汉图科智能科技有限公司 | Self-supervision multi-view depth estimation method and device |
Non-Patent Citations (2)
Title |
---|
XU, YUANHONG et al.: "Combining Depth-estimation-based Multi-spectral Photometric Stereo and SLAM for Real-time Dense 3D Reconstruction", Proceedings of the 4th International Conference on Communication and Information Processing (ICCIP 2018), 31 December 2018 (2018-12-31), pages 81-85 *
王俊锴 et al.: "Single-image point cloud generation method based on a generative adversarial network" (基于对抗生成网络的单幅图像生成点云方法), 现代计算机 (Modern Computer), 23 September 2021 (2021-09-23), pages 144-147 *
Also Published As
Publication number | Publication date |
---|---|
CN117115225B (en) | 2024-04-30 |
Legal Events
- PB01: Publication
- SE01: Entry into force of request for substantive examination
- GR01: Patent grant