CN117710199A - Three-dimensional imaging method and related equipment thereof


Info

Publication number
CN117710199A
Authority
CN
China
Prior art keywords
dimensional
dimensional detection
noise
graphs
detection
Prior art date
Legal status
Granted
Application number
CN202311803454.8A
Other languages
Chinese (zh)
Other versions
CN117710199B (en)
Inventor
赵迪斐
刘鸿斌
尹俊凯
邹筱瑜
翟晓悦
赵迎宾
陈虹羽
刘超玮
魏源
刘丹丹
Current Assignee
China University of Mining and Technology CUMT
Original Assignee
China University of Mining and Technology CUMT
Priority date
Filing date
Publication date
Application filed by China University of Mining and Technology (CUMT)
Priority to CN202311803454.8A
Publication of CN117710199A
Application granted
Publication of CN117710199B
Legal status: Active
Anticipated expiration


Landscapes

  • Image Analysis (AREA)

Abstract

The application belongs to the technical field of three-dimensional imaging and discloses a three-dimensional imaging method and related equipment thereof. Two-dimensional detection graphs of a plurality of cross sections of an object to be detected are obtained through detection; each two-dimensional detection graph is identified by a pre-trained noise identification model to obtain its noise duty ratio; the two-dimensional detection graphs are stacked and arranged, and weight values are assigned to them according to the noise duty ratio, so that the pixel information of the blank areas between adjacent two-dimensional detection graphs is obtained by interpolation and a three-dimensional detection model is generated. The imaging quality and the structural authenticity of the generated three-dimensional detection model can thereby be improved.

Description

Three-dimensional imaging method and related equipment thereof
Technical Field
The application relates to the technical field of three-dimensional imaging, in particular to a three-dimensional imaging method and related equipment thereof.
Background
In many real-life scenarios, the internal structure of an object needs to be detected by three-dimensional scanning so as to generate a three-dimensional detection model capable of displaying that internal structure, for example, detection of the shale gas reservoir space in shale, detection of the internal structure of cultural relics in museums, and internal flaw detection of parts. During three-dimensional scanning detection, an ultrasonic detector or a CT detection device is generally used to scan a plurality of cross sections of the object, so as to obtain two-dimensional detection graphs of these cross sections; the two-dimensional detection graphs are then stacked and arranged, and finally interpolation is performed on the areas between the two-dimensional detection graphs to obtain the pixel information of the corresponding areas, so that a three-dimensional detection model is obtained.
In the prior art, when interpolation is performed on the area between two-dimensional detection graphs, the original pixel information of the scanned two-dimensional detection graphs is used directly under an equal-weight rule, so that the original pixel information of every input two-dimensional detection graph influences the interpolation result to the same degree. However, due to factors such as the hardware limitations of the detection equipment and the influence of the surrounding environment, each two-dimensional detection graph generated by scanning may contain noise to a different degree, so the imaging quality of the individual two-dimensional detection graphs differs. When a three-dimensional detection model is generated with interpolation under an equal-weight rule, the imaging quality of the three-dimensional detection model is therefore poor, and the structural authenticity of the finally obtained three-dimensional detection model is affected (structural authenticity refers to how closely the structure of the three-dimensional detection model approaches the structure of the detected object).
Disclosure of Invention
The purpose of the application is to provide a three-dimensional imaging method and related equipment thereof, which can improve the imaging quality and the structural authenticity of a three-dimensional detection model generated by three-dimensional scanning detection.
In a first aspect, the present application provides a three-dimensional imaging method, comprising the steps of:
A1. acquiring a two-dimensional detection map of a plurality of cross sections of the detected object through detection;
A2. identifying each two-dimensional detection graph by utilizing a pre-trained noise identification model so as to obtain the noise duty ratio of each two-dimensional detection graph;
A3. and stacking and arranging the two-dimensional detection graphs, and distributing weight values of the two-dimensional detection graphs according to the noise occupation ratio, so as to perform interpolation calculation on pixel information of a blank area between adjacent two-dimensional detection graphs and generate a three-dimensional detection model.
After the two-dimensional detection graphs of a plurality of cross sections of the detected object are obtained, the pixel information of the blank areas between adjacent two-dimensional detection graphs is filled by interpolation using weight values that correspond to the noise duty ratios of the two-dimensional detection graphs, so that a two-dimensional detection graph with a larger noise duty ratio has a smaller influence on the interpolation result. The influence of noise on the final imaging result can thus be largely eliminated, and the imaging quality and structural authenticity of the finally obtained three-dimensional detection model are improved.
Preferably, before step A1, the method further comprises the steps of:
acquiring a plurality of noiseless two-dimensional detection images;
sequentially taking different types of noise as noise sources, and adding noise with different duty ratios into each noiseless two-dimensional detection diagram to obtain a plurality of mixed two-dimensional detection diagrams;
taking the mixed two-dimensional detection graph as sample data, and taking the noise duty ratio and the noise type of the mixed two-dimensional detection graph as sample labels to generate a plurality of training samples;
constructing an initial noise identification model which takes a two-dimensional detection diagram as input and takes a noise duty ratio and a noise type as output;
and training the initial noise recognition model by using a plurality of training samples to obtain a trained noise recognition model.
Preferably, step A3 comprises:
A301. stacking and arranging the two-dimensional detection graphs obtained by detection;
A302. taking a plurality of two-dimensional detection graphs near each blank area as a reference two-dimensional detection graph corresponding to the blank area, and distributing a weight value to the reference two-dimensional detection graph of each blank area according to the noise duty ratio;
A303. according to the reference two-dimensional detection graphs of each blank area and the corresponding weight values, carrying out interpolation calculation on the pixel information of the bisection plane of each blank area to generate a two-dimensional graph of the bisection plane as a new two-dimensional detection graph; the bisection plane is parallel to the adjacent two-dimensional detection graphs and is at the same distance from each of them;
A304. repeating steps A302 to A303 until the interval between adjacent two-dimensional detection graphs is not larger than a preset interval threshold value, so as to obtain the three-dimensional detection model.
Each two-dimensional detection graph can serve multiple times as a reference two-dimensional detection graph for different blank areas, and each time it does so, the weight value assigned to it is not fixed but changes dynamically with the actual position of the bisection plane; this improves the accuracy of the calculation results and ensures the imaging quality and structural authenticity of the finally obtained three-dimensional detection model.
Preferably, step a302 includes:
taking m two-dimensional detection graphs adjacent to the front side of the blank area and m two-dimensional detection graphs adjacent to the rear side of the blank area as reference two-dimensional detection graphs of the blank area, wherein m is calculated according to the following formula:
m = min(p, q, N)
wherein p is the number of the two-dimensional detection graphs at the front side of the blank area, q is the number of the two-dimensional detection graphs at the rear side of the blank area, and N is a preset positive integer.
Preferably, in step a302, the step of assigning a weight value to the reference two-dimensional probe map of each blank area according to the noise duty ratio includes:
according to the noise duty ratio, the reference two-dimensional detection graphs of the same blank area are ordered in a descending order;
dividing the reference two-dimensional probe map into a plurality of groups from front to back according to the sequencing result;
and assigning a weight value to each group of reference two-dimensional detection graphs, so that the reference two-dimensional detection graphs in the same group have the same weight value, and the later a group is ranked, the larger its weight value.
By grouping and distributing the same weight value for the same group of reference two-dimensional probe graphs, the calculated amount can be reduced, and the processing speed can be improved.
Optionally, the step of assigning weight values to each group of the reference two-dimensional probe map includes:
and assigning preset weight values to each group of the reference two-dimensional probe graphs.
Optionally, the step of assigning weight values to each group of the reference two-dimensional probe map includes:
and calculating the weight value of each group of the reference two-dimensional detection graphs according to the number of the reference two-dimensional detection graphs of the blank area.
In a second aspect, the present application provides a three-dimensional imaging apparatus comprising:
the detection module is used for acquiring two-dimensional detection graphs of a plurality of cross sections of the detected object through detection;
the noise identification module is used for identifying each two-dimensional detection graph by utilizing a pre-trained noise identification model so as to acquire the noise duty ratio of each two-dimensional detection graph;
the three-dimensional imaging module is used for stacking and arranging the two-dimensional detection graphs, distributing weight values to the two-dimensional detection graphs according to the noise duty ratio, and performing interpolation calculation on the pixel information of the blank area between adjacent two-dimensional detection graphs to generate a three-dimensional detection model.
In a third aspect, the present application provides an electronic device comprising a processor and a memory, the memory storing a computer program executable by the processor; when the processor executes the computer program, the steps of the three-dimensional imaging method described above are performed.
In a fourth aspect, the present application provides a computer readable storage medium having stored thereon a computer program which when executed by a processor performs steps in a three-dimensional imaging method as hereinbefore described.
The beneficial effects are that: the three-dimensional imaging method and the related equipment thereof acquire two-dimensional detection graphs of a plurality of cross sections of the detected object through detection; identifying each two-dimensional detection graph by utilizing a pre-trained noise identification model so as to obtain the noise duty ratio of each two-dimensional detection graph; stacking and arranging the two-dimensional detection graphs, and distributing weight values of the two-dimensional detection graphs according to the noise occupation ratio, so as to perform interpolation calculation on pixel information of a blank area between adjacent two-dimensional detection graphs and generate a three-dimensional detection model; the imaging quality and the structural authenticity of the generated three-dimensional detection model can be improved.
Drawings
Fig. 1 is a flowchart of a three-dimensional imaging method according to an embodiment of the present application.
Fig. 2 is a schematic structural diagram of a three-dimensional imaging device according to an embodiment of the present application.
Fig. 3 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Description of the reference numerals: 1. a detection module; 2. a noise identification module; 3. a three-dimensional imaging module; 301. a processor; 302. a memory; 303. a communication bus.
Detailed Description
The following description of the embodiments of the present application will be made clearly and completely with reference to the drawings in the embodiments of the present application, and it is apparent that the described embodiments are only some embodiments of the present application, not all embodiments. The components of the embodiments of the present application, which are generally described and illustrated in the figures herein, may be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the present application, as provided in the accompanying drawings, is not intended to limit the scope of the application, as claimed, but is merely representative of selected embodiments of the application. All other embodiments, which can be made by those skilled in the art based on the embodiments of the present application without making any inventive effort, are intended to be within the scope of the present application.
It should be noted that: like reference numerals and letters denote like items in the following figures, and thus once an item is defined in one figure, no further definition or explanation thereof is necessary in the following figures. Meanwhile, in the description of the present application, the terms "first", "second", and the like are used only to distinguish the description, and are not to be construed as indicating or implying relative importance.
Referring to fig. 1, fig. 1 is a flowchart of a three-dimensional imaging method according to some embodiments of the present application, the method comprising the following steps:
A1. acquiring a two-dimensional detection map of a plurality of cross sections of the detected object through detection;
A2. identifying each two-dimensional detection graph by utilizing a pre-trained noise identification model so as to acquire the noise duty ratio of each two-dimensional detection graph;
A3. the two-dimensional detection graphs are stacked and arranged, weight values are assigned to the two-dimensional detection graphs according to the noise duty ratio, and interpolation calculation is performed on the pixel information of the blank area between adjacent two-dimensional detection graphs (for convenience of description, the area between adjacent two-dimensional detection graphs is referred to as the blank area), so as to generate a three-dimensional detection model.
After the two-dimensional detection graphs of a plurality of cross sections of the detected object are obtained, the pixel information of the blank areas between adjacent two-dimensional detection graphs is filled by interpolation using weight values that correspond to the noise duty ratios of the two-dimensional detection graphs, so that a two-dimensional detection graph with a larger noise duty ratio has a smaller influence on the interpolation result. The influence of noise on the final imaging result can thus be largely eliminated, and the imaging quality and structural authenticity of the finally obtained three-dimensional detection model are improved.
Preferably, the smaller the noise ratio is, the larger the weight value allocated to the two-dimensional detection graph is, so that the influence of the two-dimensional detection graph with the larger noise ratio on the interpolation operation result can be reduced more effectively.
In step A1, an ultrasonic detector or a CT detector may be used to scan and detect multiple cross sections of the object to be detected, so as to obtain a two-dimensional detection map of the multiple cross sections. The pixel information (such as pixel value, chromaticity, etc.) contained in each pixel point in the two-dimensional detection map represents the internal structure information of the measured object, for example, the larger the shale gas reservoir space is, the larger the pixel value in the corresponding position is in the two-dimensional detection map of the shale cross section.
Preferably, each two-dimensional detection chart is arranged at equal intervals along the normal direction (i.e. when detection is performed, a plurality of cross sections which are arranged at equal intervals are used for scanning detection), wherein the interval between the two-dimensional detection charts can be set according to actual needs.
In step A2, the two-dimensional probe map is input into a trained noise recognition model, so as to obtain the noise duty ratio output by the noise recognition model.
Further, before step A1, the method further includes the steps of:
acquiring a plurality of noiseless two-dimensional detection images;
sequentially taking different types of noise as noise sources, and adding noise with different duty ratios into each noiseless two-dimensional detection diagram to obtain a plurality of mixed two-dimensional detection diagrams;
taking the mixed two-dimensional detection graph as sample data, and taking the noise duty ratio and the noise type of the mixed two-dimensional detection graph as sample labels to generate a plurality of training samples;
constructing an initial noise identification model which takes a two-dimensional detection diagram as input and takes a noise duty ratio and a noise type as output;
and training the initial noise recognition model by using a plurality of training samples to obtain a trained noise recognition model.
The types of noise include, but are not limited to, Gaussian noise, Poisson noise, multiplicative noise, and salt-and-pepper noise.
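As an illustrative sketch only (not part of the patent text; the noise models, parameter values and helper names such as `add_noise` are assumptions), the mixed training samples described above could be generated roughly as follows:

```python
# Illustrative sketch: inject noise of a given type into a fraction `ratio`
# of the pixels of a noise-free detection map, then build labelled samples.
import numpy as np

def add_noise(image, noise_type, ratio, rng=None):
    """image: float array in [0, 1]; ratio: fraction of pixels to corrupt."""
    rng = rng or np.random.default_rng()
    noisy = image.copy()
    mask = rng.random(image.shape) < ratio           # pixels to corrupt
    n = int(mask.sum())
    if noise_type == "gaussian":
        noisy[mask] += rng.normal(0.0, 0.1, n)
    elif noise_type == "poisson":
        noisy[mask] = rng.poisson(image[mask] * 255.0) / 255.0
    elif noise_type == "multiplicative":
        noisy[mask] *= 1.0 + rng.normal(0.0, 0.2, n)
    elif noise_type == "salt_pepper":
        noisy[mask] = rng.choice([0.0, 1.0], n)
    return np.clip(noisy, 0.0, 1.0)

def make_training_samples(clean_maps, noise_types, ratios):
    """Return (mixed map, noise ratio, noise type) tuples as training samples."""
    return [(add_noise(img, t, r), r, t)
            for img in clean_maps for t in noise_types for r in ratios]
```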
The initial noise recognition model may be an existing neural network model structure according to actual needs, which will not be described in detail herein.
When the initial noise recognition model is trained, the mixed two-dimensional detection graph of each training sample is sequentially input into the initial noise recognition model, a loss function is calculated by using output data of the initial noise recognition model and sample labels of the training samples, model parameters of the initial noise recognition model are adjusted according to the loss function until the loss function converges, and training of the initial noise recognition model is completed.
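As an illustrative sketch only (not part of the patent; the CNN architecture, layer sizes, loss weighting and training hyper-parameters are assumptions), an initial noise recognition model with the two outputs described above could be defined and trained with PyTorch roughly as follows:

```python
# Illustrative sketch: a small CNN that predicts the noise ratio (regression)
# and the noise type (classification) of a 2D detection map.
import torch
import torch.nn as nn

NOISE_TYPES = ["gaussian", "poisson", "multiplicative", "salt_pepper"]

class NoiseRecognitionNet(nn.Module):
    def __init__(self, num_types=len(NOISE_TYPES)):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
        )
        self.ratio_head = nn.Sequential(nn.Linear(32, 1), nn.Sigmoid())  # noise ratio in [0, 1]
        self.type_head = nn.Linear(32, num_types)                        # noise-type logits

    def forward(self, x):
        f = self.features(x)
        return self.ratio_head(f).squeeze(1), self.type_head(f)

def train(model, loader, epochs=10, lr=1e-3):
    """loader yields (maps [B,1,H,W], noise_ratios [B], noise_type_ids [B])."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    mse, ce = nn.MSELoss(), nn.CrossEntropyLoss()
    for _ in range(epochs):
        for maps, ratio_labels, type_labels in loader:
            opt.zero_grad()
            pred_ratio, pred_logits = model(maps)
            loss = mse(pred_ratio, ratio_labels) + ce(pred_logits, type_labels)
            loss.backward()
            opt.step()
    return model
```

At inference time, feeding a scanned two-dimensional detection graph to such a trained model would yield its estimated noise duty ratio (and noise type), which is the quantity used for weight assignment in step A3.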
Specifically, step A3 includes:
A301. stacking and arranging the two-dimensional detection graphs obtained by detection;
A302. taking a plurality of two-dimensional detection images near each blank area as reference two-dimensional detection images of the corresponding blank areas, and distributing weight values to the reference two-dimensional detection images of each blank area according to the noise duty ratio;
A303. according to the reference two-dimensional detection graphs of each blank area and the corresponding weight values, carrying out interpolation calculation on the pixel information of the bisection plane of each blank area to generate a two-dimensional graph of the bisection plane as a new two-dimensional detection graph; the bisection plane is parallel to the adjacent two-dimensional detection graphs and is at the same distance from each of them;
A304. repeating steps A302 to A303 until the interval between adjacent two-dimensional detection graphs is not larger than a preset interval threshold value, so as to obtain the three-dimensional detection model.
Generally, in order to improve the detection efficiency, the arrangement density of the cross sections of the scanning detection in the step A1 is relatively small, so that the interval between the two-dimensional detection images is relatively large, if the two-dimensional detection images obtained by detection are directly stacked and arranged as a final three-dimensional detection model, a large amount of local information of the three-dimensional detection model is lost, and the three-dimensional detection model is not fine enough and has poor imaging quality. Therefore, the pixel information of the blank area is interpolated and filled in the cyclic mode, so that the local information of the three-dimensional detection model is supplemented.
In the above process, each two-dimensional detection graph can serve multiple times as a reference two-dimensional detection graph for different blank areas, and each time it does so, the weight value assigned to it is not fixed but changes dynamically with the actual position of the bisection plane; this improves the accuracy of the calculation results and ensures the imaging quality and structural authenticity of the finally obtained three-dimensional detection model.
In step a301, two-dimensional probe graphs obtained by detection are stacked and arranged according to the position of the cross section of the detected scan. Specifically, the position of the cross section of the detected scan is recorded by an ultrasonic probe or a CT probe apparatus at the time of scanning detection.
Wherein, step a302 includes:
m two-dimensional detection graphs adjacent to the front side of the blank area and m two-dimensional detection graphs adjacent to the rear side of the blank area are used as reference two-dimensional detection graphs of the blank area, and m is calculated according to the following formula:
m = min(p, q, N)
wherein p is the number of two-dimensional detection graphs on the front side of the blank area, q is the number of two-dimensional detection graphs on the rear side of the blank area, and N is a preset positive integer (which can be set according to actual needs). The front-rear direction is the normal direction of the two-dimensional detection graphs; specifically, one side along the normal direction of a two-dimensional detection graph is taken as the front side and the other side as the rear side.
For example, assume that N=10 and the current total number of two-dimensional detection graphs is 100. For the 5th blank area, the number of two-dimensional detection graphs on its front side is 5 (i.e., p=5) and the number on its rear side is 95 (i.e., q=95); since p < N, m=5 is taken, so the 1st to 10th two-dimensional detection graphs are used as the reference two-dimensional detection graphs of the 5th blank area. For the 20th blank area, the number of two-dimensional detection graphs on its front side is 20 (i.e., p=20) and the number on its rear side is 80 (i.e., q=80); since p > N and q > N, m=10 is taken, and the 11th to 30th two-dimensional detection graphs are used as the reference two-dimensional detection graphs of the 20th blank area.
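A minimal sketch (a hypothetical helper, using the formula m = min(p, q, N) given above) of how the reference two-dimensional detection graphs of the k-th blank area could be selected:

```python
# Illustrative sketch: blank area k lies between the k-th and (k+1)-th
# detection graphs (1-indexed), so p = k graphs lie in front of it and
# q = total - k graphs lie behind it.
def reference_indices(k, total_maps, N):
    p, q = k, total_maps - k
    m = min(p, q, N)
    front = list(range(k - m + 1, k + 1))    # m adjacent graphs on the front side
    rear = list(range(k + 1, k + 1 + m))     # m adjacent graphs on the rear side
    return front + rear

# Matches the worked example: N = 10, 100 graphs in total.
assert reference_indices(5, 100, 10) == list(range(1, 11))    # 5th blank area -> graphs 1..10
assert reference_indices(20, 100, 10) == list(range(11, 31))  # 20th blank area -> graphs 11..30
```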
In some embodiments, in step a302, the step of assigning a weight value to the reference two-dimensional probe map of each blank region according to the noise duty cycle includes:
according to the noise duty ratio, the reference two-dimensional detection graphs of the same blank area are ordered in a descending order;
dividing the reference two-dimensional detection map into a plurality of groups from front to back according to the sequencing result;
and assigning a weight value to each group of reference two-dimensional detection graphs, so that the reference two-dimensional detection graphs in the same group have the same weight value, and the later a group is ranked, the larger its weight value.
By grouping and distributing the same weight value for the same group of reference two-dimensional probe graphs, the calculated amount can be reduced, and the processing speed can be improved.
When the number M of reference two-dimensional detection graphs of the same blank area (where M = 2m) is smaller than a preset number of groups X (X may be set according to actual needs, for example 4 or 6, but is not limited thereto), each reference two-dimensional detection graph is taken as its own group (i.e., the graphs are divided into M groups). When M is not smaller than X, the reference two-dimensional detection graphs are divided into X groups of as equal size as possible: when M is not divisible by X, the first K2 groups each contain K1+1 graphs and the remaining groups each contain K1 graphs, where K1 is the integer part and K2 the remainder of dividing M by X.
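A minimal sketch (a hypothetical helper, assuming the reference graphs are represented by indices already sorted in descending order of noise duty ratio) of the grouping rule just described:

```python
# Illustrative sketch: split M sorted reference graphs into at most X groups.
def split_into_groups(sorted_refs, X):
    M = len(sorted_refs)
    if M < X:                               # fewer graphs than groups: one graph per group
        return [[r] for r in sorted_refs]
    k1, k2 = divmod(M, X)                   # first k2 groups get k1 + 1 graphs, the rest k1
    groups, start = [], 0
    for g in range(X):
        size = k1 + 1 if g < k2 else k1
        groups.append(sorted_refs[start:start + size])
        start += size
    return groups
```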
In some embodiments, the step of assigning weight values to each set of reference two-dimensional probe maps comprises:
and assigning preset weight values to each group of reference two-dimensional probe graphs.
A corresponding set of weight values can be preset in advance for each possible number of groups, and the corresponding weight values are then assigned to each group of reference two-dimensional detection graphs according to the actual number of groups. For example, for the case where the number of groups is 4, a preset set of weight values is (0.1, 0.2, 0.3, 0.4); when the actual number of groups is 4, the first group of reference two-dimensional detection graphs is assigned the weight value 0.1, the second group 0.2, the third group 0.3, and the fourth group 0.4.
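A minimal sketch (hypothetical helper names; only the four-group weight vector comes from the example above, the rest is an assumption) of the preset-weight lookup:

```python
# Illustrative sketch: preset weight vectors keyed by the number of groups.
PRESET_WEIGHTS = {
    4: (0.1, 0.2, 0.3, 0.4),
    # other group counts would have their own preset vectors
}

def preset_weights_per_graph(groups):
    """groups: list of lists of graph indices, ordered as in the descending sort."""
    weights = PRESET_WEIGHTS[len(groups)]
    return {idx: weights[g] for g, group in enumerate(groups) for idx in group}
```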
In other embodiments, the step of assigning weight values to each set of reference two-dimensional probe maps comprises:
and calculating the weight value of each group of reference two-dimensional detection graphs according to the number of the reference two-dimensional detection graphs of the blank area.
The calculation rule for the weight values can be set according to actual requirements. In some embodiments, a corresponding weight calculation function can be set in advance for each possible number of groups, and the corresponding weight calculation function is then called according to the actual number of groups to calculate the weight value of each group of reference two-dimensional detection graphs. For example, for the case where the number of groups is 6, a corresponding weight calculation function gives the weight value w_i of the i-th group of reference two-dimensional detection graphs; however, the weight calculation function is not limited thereto.
In practical application, in step a302, assigning weight values to the reference two-dimensional detection graphs of each blank area according to the noise duty ratio is not limited to the above grouping method; for example, the weight value of each reference two-dimensional detection graph may also be calculated directly according to a preset calculation rule. The calculation rule may be set according to actual requirements, but it must be ensured that the smaller the noise duty ratio of a reference two-dimensional detection graph, the larger its weight value.
For example, the reference two-dimensional detection graphs may be sorted in descending order of noise duty ratio, and the weight value of each reference two-dimensional detection graph may then be calculated by the following formula:
w_i = r_{M-i} / Σ_{j=1}^{M} r_j
wherein w_i is the weight value of the i-th reference two-dimensional detection graph, r_j is the noise duty ratio of the j-th reference two-dimensional detection graph, r_{M-i} is the noise duty ratio of the (M-i)-th reference two-dimensional detection graph, and M is the number of reference two-dimensional detection graphs.
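A minimal sketch (a hypothetical helper following the normalized, reverse-ordered weighting reconstructed above, so that a smaller noise duty ratio yields a larger weight) of the direct per-graph weight calculation:

```python
# Illustrative sketch: weights are the noise ratios read in reverse rank order
# and normalized, so the least noisy graph receives the largest weight.
def noise_based_weights(noise_ratios):
    order = sorted(range(len(noise_ratios)), key=lambda i: -noise_ratios[i])  # descending
    total = sum(noise_ratios)
    weights = [0.0] * len(noise_ratios)
    for rank, idx in enumerate(order):
        mirror_idx = order[len(order) - 1 - rank]     # graph at the opposite rank
        weights[idx] = noise_ratios[mirror_idx] / total
    return weights

# The weights always sum to 1, and the graph with the smallest noise duty
# ratio receives the largest weight, as required by the text above.
```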
In step a303, when calculating the pixel information of a pixel point to be calculated on the bisection plane, a weighted average of the pixel information of the corresponding pixel points in the reference two-dimensional detection graphs is calculated and used as the pixel information of the pixel point to be calculated. In a reference two-dimensional detection graph, the pixel point corresponding to the pixel point to be calculated may be the single pixel point closest to it (hereinafter referred to as the first pixel point), or may be all the pixel points in a neighboring area centered on the first pixel point (the shape and size of the neighboring area may be set according to actual needs).
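A minimal sketch (a hypothetical helper using NumPy, assuming all slices share the same pixel grid so that the corresponding pixel is simply the one at the same row and column; neighborhood averaging is omitted) of the weighted-average interpolation of a bisection plane:

```python
# Illustrative sketch: per-pixel weighted average over the reference graphs.
import numpy as np

def interpolate_bisection_plane(reference_maps, weights):
    """reference_maps: 2D arrays of identical shape; weights sum to 1."""
    plane = np.zeros_like(reference_maps[0], dtype=float)
    for ref, w in zip(reference_maps, weights):
        plane += w * np.asarray(ref, dtype=float)
    return plane
```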
In step a304, when the interval between two adjacent two-dimensional detection graphs is not greater than a preset interval threshold (which can be set according to actual needs), the three-dimensional model formed by stacking and arranging all the two-dimensional detection graphs is the three-dimensional detection model.
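Putting the pieces together, the following sketch (an assumed organization of steps A302-A304, reusing the hypothetical helpers `reference_indices`, `noise_based_weights` and `interpolate_bisection_plane` from above; the noise estimate assigned to a newly interpolated slice is likewise an assumption) shows the refinement loop that stops once the slice spacing is within the threshold:

```python
# Illustrative sketch: repeatedly insert interpolated bisection planes until
# the spacing between adjacent slices is no larger than the threshold.
import numpy as np

def build_volume(slices, positions, noise_ratios, N, spacing_threshold):
    slices, positions, noise_ratios = list(slices), list(positions), list(noise_ratios)
    while max(b - a for a, b in zip(positions, positions[1:])) > spacing_threshold:
        new_s, new_p, new_r = [slices[0]], [positions[0]], [noise_ratios[0]]
        for k in range(1, len(slices)):                    # blank area k between slices k-1 and k
            idx = reference_indices(k, len(slices), N)     # 1-indexed reference selection
            refs = [slices[i - 1] for i in idx]
            ratios = [noise_ratios[i - 1] for i in idx]
            w = noise_based_weights(ratios)
            mid = interpolate_bisection_plane(refs, w)
            new_s.append(mid)
            new_p.append(0.5 * (positions[k - 1] + positions[k]))
            new_r.append(sum(r * wi for r, wi in zip(ratios, w)))   # assumed noise estimate
            new_s.append(slices[k]); new_p.append(positions[k]); new_r.append(noise_ratios[k])
        slices, positions, noise_ratios = new_s, new_p, new_r
    return np.stack(slices)        # the stacked slices form the three-dimensional detection model
```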
From the above, the three-dimensional imaging method acquires two-dimensional detection images of a plurality of cross sections of the detected object through detection; identifying each two-dimensional detection graph by utilizing a pre-trained noise identification model so as to acquire the noise duty ratio of each two-dimensional detection graph; stacking and arranging the two-dimensional detection graphs, and distributing weight values of the two-dimensional detection graphs according to the noise occupation ratio, so as to perform interpolation calculation on pixel information of a blank area between adjacent two-dimensional detection graphs and generate a three-dimensional detection model; thereby being capable of improving the imaging quality and the structural authenticity of the generated three-dimensional detection model.
Referring to fig. 2, the present application provides a three-dimensional imaging apparatus including:
a detection module 1 for acquiring a plurality of two-dimensional detection maps of cross sections of an object to be detected by detection;
the noise recognition module 2 is used for recognizing each two-dimensional detection chart by utilizing a pre-trained noise recognition model so as to acquire the noise duty ratio of each two-dimensional detection chart;
the three-dimensional imaging module 3 is used for stacking and arranging the two-dimensional detection graphs, distributing weight values to the two-dimensional detection graphs according to the noise duty ratio, and performing interpolation calculation on the pixel information of the blank area between two adjacent two-dimensional detection graphs to generate a three-dimensional detection model.
After the two-dimensional detection graphs of a plurality of cross sections of the detected object are obtained, the pixel information of the blank areas between adjacent two-dimensional detection graphs is filled by interpolation using weight values that correspond to the noise duty ratios of the two-dimensional detection graphs, so that a two-dimensional detection graph with a larger noise duty ratio has a smaller influence on the interpolation result. The influence of noise on the final imaging result can thus be largely eliminated, and the imaging quality and structural authenticity of the finally obtained three-dimensional detection model are improved.
Preferably, the smaller the noise ratio is, the larger the weight value allocated to the two-dimensional detection graph is, so that the influence of the two-dimensional detection graph with the larger noise ratio on the interpolation operation result can be reduced more effectively.
When the detection module 1 acquires the two-dimensional detection images of the plurality of cross sections of the detected object through detection, the ultrasonic detector or the CT detection equipment can be used for scanning and detecting the plurality of cross sections of the detected object, so that the two-dimensional detection images of the plurality of cross sections are obtained. The pixel information (such as pixel value, chromaticity, etc.) contained in each pixel point in the two-dimensional detection map represents the internal structure information of the measured object, for example, the larger the shale gas reservoir space is, the larger the pixel value in the corresponding position is in the two-dimensional detection map of the shale cross section.
Preferably, each two-dimensional detection chart is arranged at equal intervals along the normal direction (i.e. when detection is performed, a plurality of cross sections which are arranged at equal intervals are used for scanning detection), wherein the interval between the two-dimensional detection charts can be set according to actual needs.
When the noise recognition module 2 recognizes each two-dimensional probe graph by using a pre-trained noise recognition model to obtain the noise duty ratio of each two-dimensional probe graph, the two-dimensional probe graph is input into the trained noise recognition model to obtain the noise duty ratio output by the noise recognition model.
Further, the three-dimensional imaging device further includes:
the first acquisition module is used for acquiring a plurality of noiseless two-dimensional detection images;
the mixing module is used for sequentially taking different types of noise as noise sources, adding noise with different duty ratios into each noiseless two-dimensional detection graph, and obtaining a plurality of mixed two-dimensional detection graphs;
the sample generation module is used for generating a plurality of training samples by taking the mixed two-dimensional detection graph as sample data and taking the noise duty ratio and the noise type of the mixed two-dimensional detection graph as sample labels;
the model construction module is used for constructing an initial noise identification model which takes a two-dimensional detection graph as input and takes the noise duty ratio and the noise type as output;
and the training module is used for training the initial noise recognition model by using a plurality of training samples to obtain a trained noise recognition model.
The types of noise include, but are not limited to, Gaussian noise, Poisson noise, multiplicative noise, and salt-and-pepper noise.
The initial noise recognition model may be an existing neural network model structure according to actual needs, which will not be described in detail herein.
When the initial noise recognition model is trained, the mixed two-dimensional detection graph of each training sample is sequentially input into the initial noise recognition model, a loss function is calculated by using output data of the initial noise recognition model and sample labels of the training samples, model parameters of the initial noise recognition model are adjusted according to the loss function until the loss function converges, and training of the initial noise recognition model is completed.
Specifically, when stacking and arranging the two-dimensional detection graphs, assigning weight values to the two-dimensional detection graphs according to the noise duty ratio, and performing interpolation calculation on the pixel information of the blank areas between adjacent two-dimensional detection graphs to generate the three-dimensional detection model, the three-dimensional imaging module 3 performs:
A301. stacking and arranging the two-dimensional detection graphs obtained by detection;
A302. taking a plurality of two-dimensional detection images near each blank area as reference two-dimensional detection images of the corresponding blank areas, and distributing weight values to the reference two-dimensional detection images of each blank area according to the noise duty ratio;
A303. according to the reference two-dimensional detection graphs of each blank area and the corresponding weight values, carrying out interpolation calculation on the pixel information of the bisection plane of each blank area to generate a two-dimensional graph of the bisection plane as a new two-dimensional detection graph; the bisection plane is parallel to the adjacent two-dimensional detection graphs and is at the same distance from each of them;
A304. repeating steps A302 to A303 until the interval between adjacent two-dimensional detection graphs is not larger than a preset interval threshold value, so as to obtain the three-dimensional detection model.
Generally, in order to improve the detection efficiency, the arrangement density of the cross sections of the scanning detection in the step A1 is relatively small, so that the interval between the two-dimensional detection images is relatively large, if the two-dimensional detection images obtained by detection are directly stacked and arranged as a final three-dimensional detection model, a large amount of local information of the three-dimensional detection model is lost, and the three-dimensional detection model is not fine enough and has poor imaging quality. Therefore, the pixel information of the blank area is interpolated and filled in the cyclic mode, so that the local information of the three-dimensional detection model is supplemented.
In the above process, each two-dimensional detection graph can serve multiple times as a reference two-dimensional detection graph for different blank areas, and each time it does so, the weight value assigned to it is not fixed but changes dynamically with the actual position of the bisection plane; this improves the accuracy of the calculation results and ensures the imaging quality and structural authenticity of the finally obtained three-dimensional detection model.
In step a301, two-dimensional probe graphs obtained by detection are stacked and arranged according to the position of the cross section of the detected scan. Specifically, the position of the cross section of the detected scan is recorded by an ultrasonic probe or a CT probe apparatus at the time of scanning detection.
Wherein, step a302 includes:
m two-dimensional detection graphs adjacent to the front side of the blank area and m two-dimensional detection graphs adjacent to the rear side of the blank area are used as reference two-dimensional detection graphs of the blank area, and m is calculated according to the following formula:
m = min(p, q, N)
wherein p is the number of two-dimensional detection graphs on the front side of the blank area, q is the number of two-dimensional detection graphs on the rear side of the blank area, and N is a preset positive integer (which can be set according to actual needs). The front-rear direction is the normal direction of the two-dimensional detection graphs; specifically, one side along the normal direction of a two-dimensional detection graph is taken as the front side and the other side as the rear side.
For example, assume that N=10 and the current total number of two-dimensional detection graphs is 100. For the 5th blank area, the number of two-dimensional detection graphs on its front side is 5 (i.e., p=5) and the number on its rear side is 95 (i.e., q=95); since p < N, m=5 is taken, so the 1st to 10th two-dimensional detection graphs are used as the reference two-dimensional detection graphs of the 5th blank area. For the 20th blank area, the number of two-dimensional detection graphs on its front side is 20 (i.e., p=20) and the number on its rear side is 80 (i.e., q=80); since p > N and q > N, m=10 is taken, and the 11th to 30th two-dimensional detection graphs are used as the reference two-dimensional detection graphs of the 20th blank area.
In some embodiments, in step a302, the step of assigning a weight value to the reference two-dimensional probe map of each blank region according to the noise duty cycle includes:
according to the noise duty ratio, the reference two-dimensional detection graphs of the same blank area are ordered in a descending order;
dividing the reference two-dimensional detection map into a plurality of groups from front to back according to the sequencing result;
and assigning a weight value to each group of reference two-dimensional detection graphs, so that the reference two-dimensional detection graphs in the same group have the same weight value, and the later a group is ranked, the larger its weight value.
By grouping and distributing the same weight value for the same group of reference two-dimensional probe graphs, the calculated amount can be reduced, and the processing speed can be improved.
When the number M of reference two-dimensional detection graphs of the same blank area (where M = 2m) is smaller than a preset number of groups X (X may be set according to actual needs, for example 4 or 6, but is not limited thereto), each reference two-dimensional detection graph is taken as its own group (i.e., the graphs are divided into M groups). When M is not smaller than X, the reference two-dimensional detection graphs are divided into X groups of as equal size as possible: when M is not divisible by X, the first K2 groups each contain K1+1 graphs and the remaining groups each contain K1 graphs, where K1 is the integer part and K2 the remainder of dividing M by X.
In some embodiments, the step of assigning weight values to each set of reference two-dimensional probe maps comprises:
and assigning preset weight values to each group of reference two-dimensional probe graphs.
A corresponding set of weight values can be preset in advance for each possible number of groups, and the corresponding weight values are then assigned to each group of reference two-dimensional detection graphs according to the actual number of groups. For example, for the case where the number of groups is 4, a preset set of weight values is (0.1, 0.2, 0.3, 0.4); when the actual number of groups is 4, the first group of reference two-dimensional detection graphs is assigned the weight value 0.1, the second group 0.2, the third group 0.3, and the fourth group 0.4.
In other embodiments, the step of assigning weight values to each set of reference two-dimensional probe maps comprises:
and calculating the weight value of each group of reference two-dimensional detection graphs according to the number of the reference two-dimensional detection graphs of the blank area.
The calculation rule for the weight values can be set according to actual requirements. In some embodiments, a corresponding weight calculation function can be set in advance for each possible number of groups, and the corresponding weight calculation function is then called according to the actual number of groups to calculate the weight value of each group of reference two-dimensional detection graphs. For example, for the case where the number of groups is 6, a corresponding weight calculation function gives the weight value w_i of the i-th group of reference two-dimensional detection graphs; however, the weight calculation function is not limited thereto.
In practical application, in step a302, assigning weight values to the reference two-dimensional detection graphs of each blank area according to the noise duty ratio is not limited to the above grouping method; for example, the weight value of each reference two-dimensional detection graph may also be calculated directly according to a preset calculation rule. The calculation rule may be set according to actual requirements, but it must be ensured that the smaller the noise duty ratio of a reference two-dimensional detection graph, the larger its weight value.
For example, the reference two-dimensional detection graphs may be sorted in descending order of noise duty ratio, and the weight value of each reference two-dimensional detection graph may then be calculated by the following formula:
w_i = r_{M-i} / Σ_{j=1}^{M} r_j
wherein w_i is the weight value of the i-th reference two-dimensional detection graph, r_j is the noise duty ratio of the j-th reference two-dimensional detection graph, r_{M-i} is the noise duty ratio of the (M-i)-th reference two-dimensional detection graph, and M is the number of reference two-dimensional detection graphs.
In step a303, when calculating the pixel information of a pixel point to be calculated on the bisection plane, a weighted average of the pixel information of the corresponding pixel points in the reference two-dimensional detection graphs is calculated and used as the pixel information of the pixel point to be calculated. In a reference two-dimensional detection graph, the pixel point corresponding to the pixel point to be calculated may be the single pixel point closest to it (hereinafter referred to as the first pixel point), or may be all the pixel points in a neighboring area centered on the first pixel point (the shape and size of the neighboring area may be set according to actual needs).
In step a304, when the interval between two adjacent two-dimensional detection graphs is not greater than a preset interval threshold (which can be set according to actual needs), the three-dimensional model formed by stacking and arranging all the two-dimensional detection graphs is the three-dimensional detection model.
From the above, the three-dimensional imaging device acquires two-dimensional detection images of a plurality of cross sections of the detected object through detection; identifying each two-dimensional detection graph by utilizing a pre-trained noise identification model so as to acquire the noise duty ratio of each two-dimensional detection graph; stacking and arranging the two-dimensional detection graphs, and distributing weight values of the two-dimensional detection graphs according to the noise occupation ratio, so as to perform interpolation calculation on pixel information of a blank area between adjacent two-dimensional detection graphs and generate a three-dimensional detection model; thereby being capable of improving the imaging quality and the structural authenticity of the generated three-dimensional detection model.
Referring to fig. 3, fig. 3 is a schematic structural diagram of an electronic device according to an embodiment of the present application. The electronic device includes a processor 301 and a memory 302, which are interconnected and communicate with each other through a communication bus 303 and/or another form of connection mechanism (not shown). The memory 302 stores a computer program executable by the processor 301; when the electronic device runs, the processor 301 executes the computer program to perform the three-dimensional imaging method in any of the alternative implementations of the above embodiments, so as to realize the following functions: acquiring two-dimensional detection graphs of a plurality of cross sections of the detected object through detection; identifying each two-dimensional detection graph by utilizing a pre-trained noise identification model so as to acquire the noise duty ratio of each two-dimensional detection graph; and stacking and arranging the two-dimensional detection graphs, distributing weight values to the two-dimensional detection graphs according to the noise duty ratio, and interpolating the pixel information of the blank areas between adjacent two-dimensional detection graphs to generate a three-dimensional detection model.
The present application provides a computer readable storage medium having stored thereon a computer program which, when executed by a processor, performs the three-dimensional imaging method in any of the alternative implementations of the above embodiments to implement the following functions: acquiring a two-dimensional detection map of a plurality of cross sections of the detected object through detection; identifying each two-dimensional detection graph by utilizing a pre-trained noise identification model so as to acquire the noise duty ratio of each two-dimensional detection graph; and stacking and arranging the two-dimensional detection graphs, and distributing weight values of the two-dimensional detection graphs according to the noise occupation ratio, so as to interpolate pixel information of a blank area between adjacent two-dimensional detection graphs and generate a three-dimensional detection model.
The computer readable storage medium may be implemented by any type or combination of volatile or non-volatile Memory devices, such as static random access Memory (Static Random Access Memory, SRAM), electrically erasable Programmable Read-Only Memory (EEPROM), erasable Programmable Read-Only Memory (Erasable Programmable Read Only Memory, EPROM), programmable Read-Only Memory (PROM), read-Only Memory (ROM), magnetic Memory, flash Memory, magnetic disk, or optical disk.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other manners. The above-described apparatus embodiments are merely illustrative; for example, the division of the units is merely a logical functional division, and there may be other manners of division in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. In addition, the coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection through some communication interfaces, devices or units, and may be in electrical, mechanical or other forms.
Further, the units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units, may be located in one place, or may be distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
Furthermore, functional modules in various embodiments of the present application may be integrated together to form a single portion, or each module may exist alone, or two or more modules may be integrated to form a single portion.
In this document, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions.
The foregoing is merely exemplary embodiments of the present application and is not intended to limit the scope of the present application, and various modifications and variations may be suggested to one skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principles of the present application should be included in the protection scope of the present application.

Claims (10)

1. A three-dimensional imaging method, comprising the steps of:
A1. acquiring a two-dimensional detection map of a plurality of cross sections of the detected object through detection;
A2. identifying each two-dimensional detection graph by utilizing a pre-trained noise identification model so as to obtain the noise duty ratio of each two-dimensional detection graph;
A3. and stacking and arranging the two-dimensional detection graphs, and distributing weight values of the two-dimensional detection graphs according to the noise occupation ratio, so as to perform interpolation calculation on pixel information of a blank area between adjacent two-dimensional detection graphs and generate a three-dimensional detection model.
2. The three-dimensional imaging method according to claim 1, further comprising, before step A1, the steps of:
acquiring a plurality of noiseless two-dimensional detection images;
sequentially taking different types of noise as noise sources, and adding noise with different duty ratios into each noiseless two-dimensional detection diagram to obtain a plurality of mixed two-dimensional detection diagrams;
taking the mixed two-dimensional detection graph as sample data, and taking the noise duty ratio and the noise type of the mixed two-dimensional detection graph as sample labels to generate a plurality of training samples;
constructing an initial noise identification model which takes a two-dimensional detection diagram as input and takes a noise duty ratio and a noise type as output;
and training the initial noise recognition model by using a plurality of training samples to obtain a trained noise recognition model.
3. The three-dimensional imaging method according to claim 1, wherein step A3 comprises:
A301. stacking and arranging the two-dimensional detection graphs obtained by detection;
A302. taking a plurality of two-dimensional detection graphs near each blank area as a reference two-dimensional detection graph corresponding to the blank area, and distributing a weight value to the reference two-dimensional detection graph of each blank area according to the noise duty ratio;
A303. according to the reference two-dimensional detection graphs of each blank area and the corresponding weight values, carrying out interpolation calculation on the pixel information of the bisection plane of each blank area to generate a two-dimensional graph of the bisection plane as a new two-dimensional detection graph; the bisection plane is parallel to the adjacent two-dimensional detection graphs and is at the same distance from each of them;
A304. repeating steps A302 to A303 until the interval between adjacent two-dimensional detection graphs is not larger than a preset interval threshold value, so as to obtain the three-dimensional detection model.
4. The three-dimensional imaging method according to claim 3, wherein step A302 comprises:
taking the m two-dimensional detection maps adjacent to the front side of the blank area and the m two-dimensional detection maps adjacent to the rear side of the blank area as the reference two-dimensional detection maps of that blank area, wherein m is calculated according to the following formula:
wherein p is the number of two-dimensional detection maps on the front side of the blank area, q is the number of two-dimensional detection maps on the rear side of the blank area, and N is a preset positive integer.
5. The three-dimensional imaging method according to claim 3, wherein in step A302, the step of assigning weight values to the reference two-dimensional detection maps of each blank area according to the noise proportion comprises:
sorting the reference two-dimensional detection maps of the same blank area in descending order of noise proportion;
dividing the reference two-dimensional detection maps into a plurality of groups, from front to back, according to the sorting result;
and assigning weight values to the reference two-dimensional detection maps of each group, such that the reference two-dimensional detection maps in the same group have the same weight value, and groups ranked later are given larger weight values.
6. The three-dimensional imaging method according to claim 5, wherein the step of assigning weight values to each group of reference two-dimensional detection maps comprises:
assigning preset weight values to each group of reference two-dimensional detection maps.
7. The three-dimensional imaging method according to claim 5, wherein the step of assigning weight values to each group of reference two-dimensional detection maps comprises:
calculating the weight value of each group of reference two-dimensional detection maps according to the number of reference two-dimensional detection maps of the blank area.
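Claims 5 to 7 sort a blank area's reference maps by noise proportion (largest first), split them into front-to-back groups, and give later, cleaner groups larger weights, either preset (claim 6) or derived from the number of reference maps (claim 7). The sketch below illustrates that scheme under stated assumptions: two equal-size groups by default, and, for the claim 7 variant, a group weight of (group index + 1) divided by the number of reference maps, which is only one plausible choice.

def group_weights(noise_proportions, n_groups=2, preset=None):
    # noise_proportions: one value per reference map of a blank area.
    # Returns a weight per map, in the original map order.
    order = sorted(range(len(noise_proportions)),
                   key=lambda i: noise_proportions[i], reverse=True)  # descending noise proportion
    group_size = -(-len(order) // n_groups)                           # ceil division: front-to-back groups
    weights = [0.0] * len(order)
    for rank, idx in enumerate(order):
        g = rank // group_size                                        # group index, 0 = noisiest group
        if preset is not None:
            weights[idx] = preset[g]                                  # claim 6: preset per-group weights
        else:
            weights[idx] = (g + 1) / len(order)                       # claim 7 variant: tied to map count
    return weights

# Example: group_weights([0.4, 0.1, 0.25, 0.05], n_groups=2) -> [0.25, 0.5, 0.25, 0.5]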
8. A three-dimensional imaging apparatus, comprising:
a detection module, configured to acquire, by detection, two-dimensional detection maps of a plurality of cross sections of an object to be detected;
a noise recognition module, configured to identify each two-dimensional detection map with a pre-trained noise recognition model to obtain the noise proportion of each two-dimensional detection map;
and a three-dimensional imaging module, configured to stack and arrange the two-dimensional detection maps, assign weight values to the two-dimensional detection maps according to their noise proportions, and interpolate the pixel information of the blank areas between adjacent two-dimensional detection maps to generate a three-dimensional detection model.
9. An electronic device, comprising a processor and a memory, the memory storing a computer program executable by the processor, wherein the processor, when executing the computer program, performs the steps of the three-dimensional imaging method according to any one of claims 1 to 7.
10. A computer-readable storage medium on which a computer program is stored, wherein the computer program, when executed by a processor, performs the steps of the three-dimensional imaging method according to any one of claims 1 to 7.
CN202311803454.8A 2023-12-26 2023-12-26 Three-dimensional imaging method and related equipment thereof Active CN117710199B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311803454.8A CN117710199B (en) 2023-12-26 2023-12-26 Three-dimensional imaging method and related equipment thereof

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311803454.8A CN117710199B (en) 2023-12-26 2023-12-26 Three-dimensional imaging method and related equipment thereof

Publications (2)

Publication Number Publication Date
CN117710199A true CN117710199A (en) 2024-03-15
CN117710199B CN117710199B (en) 2024-05-28

Family

ID=90149738

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311803454.8A Active CN117710199B (en) 2023-12-26 2023-12-26 Three-dimensional imaging method and related equipment thereof

Country Status (1)

Country Link
CN (1) CN117710199B (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102599934A (en) * 2012-03-13 2012-07-25 华中科技大学 Data acquisition device of three-dimensional ultrasound image based on rear-end scanning
CN110279429A (en) * 2019-06-13 2019-09-27 北京理工大学 Four-dimensional ultrasound method for reconstructing and device
CN114581605A (en) * 2022-02-22 2022-06-03 清华大学 Method, device and equipment for generating scanning image of workpiece and computer storage medium
CN116883723A (en) * 2023-06-19 2023-10-13 中国矿业大学 Combined zero sample image classification method based on parallel semantic embedding

Non-Patent Citations (6)

* Cited by examiner, † Cited by third party
Title
AVEGNON KOSSI LOIC M: "Porosity formation from disrupted gas flow in laser powder bed fusion of 316 stainless steel", JOURNAL OF MANUFACTURING PROCESSES, 1 March 2023 (2023-03-01), pages 333 - 340 *
刘丹丹: "基于BIM+AR的机电工程现场巡检方法研究", 中国优秀硕士学位论文全文数据库 工程科技Ⅱ辑, no. 2, 15 February 2023 (2023-02-15), pages 038 - 714 *
吴志红;刘日晨;甘霖;: "基于三维校正的SHGC物体三维结构提取方法", 电子科技大学学报, no. 03, 30 May 2011 (2011-05-30), pages 128 - 132 *
段俐, 丁汉泉: "动态三维流场的全息层析干涉测量", 北京航空航天大学学报, no. 03, 30 June 1997 (1997-06-30), pages 114 - 120 *
郑震;查冰婷;张合;: "基于DHGF算法的激光线扫描成像引信目标识别方法", 中国激光, no. 07, 28 March 2018 (2018-03-28), pages 147 - 154 *
郭荣幸;: "基于纤维图像的横截面轮廓提取和识别技术", 现代丝绸科学与技术, no. 01, 28 February 2016 (2016-02-28), pages 38 - 40 *

Also Published As

Publication number Publication date
CN117710199B (en) 2024-05-28

Similar Documents

Publication Publication Date Title
CN107169463B (en) Method for detecting human face, device, computer equipment and storage medium
CN105335955B (en) Method for checking object and object test equipment
CN106407947A (en) Target object recognition method and device applied to unmanned vehicle
Weinshall et al. On view likelihood and stability
EP3286691A1 (en) A method of detecting objects within a 3d environment
CN112883820B (en) Road target 3D detection method and system based on laser radar point cloud
CN110287942A (en) Training method, age estimation method and the corresponding device of age estimation model
CN111291768B (en) Image feature matching method and device, equipment and storage medium
CN109753960A (en) The underwater unnatural object detection method of isolated forest based on fractal theory
CN108198172B (en) Image significance detection method and device
US20130080111A1 (en) Systems and methods for evaluating plane similarity
CN110197206B (en) Image processing method and device
CN110378942A (en) Barrier identification method, system, equipment and storage medium based on binocular camera
JP2019152543A (en) Target recognizing device, target recognizing method, and program
CN109671055B (en) Pulmonary nodule detection method and device
CN110390327A (en) Foreground extracting method, device, computer equipment and storage medium
CN114972922A (en) Coal and gangue sorting and identifying method, device and equipment based on machine learning
CN111860498B (en) Method, device and storage medium for generating antagonism sample of license plate
Eum et al. Vehicle detection from airborne LiDAR point clouds based on a decision tree algorithm with horizontal and vertical features
Rajasekaran et al. PTRM: Perceived terrain realism metric
CN117710199B (en) Three-dimensional imaging method and related equipment thereof
CN103714528B (en) Object segmentation device and method
Stal et al. Classification of airborne laser scanning point clouds based on binomial logistic regression analysis
CN112634431A (en) Method and device for converting three-dimensional texture map into three-dimensional point cloud
EP3742398A1 (en) Determining one or more scanner positions in a point cloud

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant