CN111784841A - Method, apparatus, electronic device, and medium for reconstructing three-dimensional image - Google Patents

Method, apparatus, electronic device, and medium for reconstructing three-dimensional image

Info

Publication number
CN111784841A
CN111784841A
Authority
CN
China
Prior art keywords
image
frame
dimensional
target
blocks
Prior art date
Legal status
Pending
Application number
CN202010507773.4A
Other languages
Chinese (zh)
Inventor
唐荣富
邓宝松
龙知洲
商尔科
李靖
Current Assignee
National Defense Technology Innovation Institute PLA Academy of Military Science
Original Assignee
National Defense Technology Innovation Institute PLA Academy of Military Science
Priority date
Filing date
Publication date
Application filed by National Defense Technology Innovation Institute, PLA Academy of Military Science
Priority to CN202010507773.4A
Publication of CN111784841A
Legal status: Pending

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/20Finite element generation, e.g. wire-frame surface description, tesselation
    • G06T17/205Re-meshing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/13Edge detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10016Video; Image sequence
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20112Image segmentation details
    • G06T2207/20164Salient point detection; Corner detection

Abstract

The application discloses a method, an apparatus, an electronic device, and a medium for reconstructing a three-dimensional image. In the method, a target common-view image is obtained, and a first frame sequence ordered by the weight sum of the corresponding connecting edges is determined based on it. The first image frame, namely the image frame with the highest connecting-edge weight sum in the first frame sequence, is then acquired, and a first image block is generated based on it; a target number of second image blocks is obtained based on the first image block, and finally a reconstructed three-dimensional mesh is generated based on the first image block and the target number of second image blocks. By applying the technical scheme of the application, the common view is introduced to establish the reconstruction order, and a new image-block generation method based on visual geometry is provided, so that the speed and precision problems in the matching process can be better solved.

Description

Method, apparatus, electronic device, and medium for reconstructing three-dimensional image
Technical Field
The present application relates to image processing technologies, and in particular, to a method, an apparatus, an electronic device, and a medium for reconstructing a three-dimensional image.
Background
Three-dimensional reconstruction of images is an important research problem in computer vision and computer graphics. It refers to dense mesh reconstruction of a scene on the basis of a known structure-from-motion result computed from multiple images. Although three-dimensional reconstruction methods vary from sensor to sensor, the mainstream image-based reconstruction approach is generally called multi-view stereo (MVS).
Further, because of differences in scene representation (and data format), current MVS methods fall into three main categories: depth-map-based, point-cloud-based, and voxel-based reconstruction methods. Generally, a depth-map-based reconstruction method is view-dependent and has unique advantages in large-scene applications, but it usually involves a large amount of computation and is not convenient for observation from different views, and the fusion of multiple depth maps in a unified coordinate system still requires further research. A point-cloud-based method reconstructs a dense point cloud in a unified coordinate system and has very good properties for geometric editing, fusion, rendering, and the like, but the point cloud it generates often has high noise or holes. A voxel-based method represents three-dimensional space with voxels and then processes the three-dimensional point cloud with the idea of a Markov random field; it generally needs an octree structure for acceleration.
However, the current three-dimensional image reconstruction method still has the defects of more noise points and large computation amount. Therefore, how to generate a high-performance three-dimensional image reconstruction method becomes a problem to be solved by those skilled in the art.
Disclosure of Invention
The embodiment of the application provides a method, a device, electronic equipment and a medium for reconstructing a three-dimensional image, and is used for solving the problems that in the related art, the three-dimensional image has more noise points and the calculation amount is large.
According to an aspect of an embodiment of the present application, there is provided a method for reconstructing a three-dimensional image, including:
acquiring a target common-view image, and determining a first frame sequence based on the target common-view image, wherein each frame in the first frame sequence is sequentially ordered according to the weight sum of corresponding connecting edges;
acquiring a first image frame in the first frame sequence, and generating a first image block based on the first image frame, wherein the first image frame is an image frame with the highest total value of connecting edge weights in the first frame sequence;
obtaining a target number of second image blocks based on the first image blocks, wherein the second image blocks are adjacent to the first image blocks;
and generating a reconstructed three-dimensional grid based on the first image blocks and the target number of second image blocks.
Optionally, in another embodiment based on the above method of the present application, after the acquiring a first image frame in the first frame sequence, the method further includes:
and acquiring a first feature point set of the first image frame, wherein the first feature point set at least comprises a corner-detection feature and a difference-of-Gaussian feature.
Optionally, in another embodiment based on the method described above, the acquiring the first feature point set of the first image frame includes:
acquiring a second set of image frames adjacent to the first image frame;
obtaining a target point to be matched corresponding to each image frame in the second image frame set by using an epipolar constraint algorithm;
calculating the similarity of each target point to be matched by using a cost function;
obtaining a plurality of candidate matching points based on the similarity of the target points to be matched;
and obtaining the first feature point set of the first image frame based on the candidate matching points.
Optionally, in another embodiment based on the foregoing method of the present application, after the acquiring the first set of feature points of the first image frame, the method further includes:
calculating three-dimensional space coordinates corresponding to each first characteristic point in the first characteristic point set by using a forward intersection algorithm and a robust function algorithm;
calculating an initial appearance corresponding to each first feature point in the first feature point set;
utilizing a homography matrix to constrain a minimized optimization target, and acquiring a first orientation of the first image block;
and generating the first image block based on the three-dimensional space coordinate, the initial appearance and the first orientation corresponding to each first feature point in the first feature point set.
Optionally, in another embodiment based on the foregoing method of the present application, the calculating an initial appearance corresponding to each first feature point in the first feature point set includes:
calculating the difference sum value of each first characteristic point and other characteristic points in the first characteristic point set;
and taking the feature point with the minimum sum of differences as a second feature point, and taking the M×M neighborhood of the second feature point as the initial appearance of the first image block.
Optionally, in another embodiment based on the foregoing method of the present application, the obtaining a target number of second image blocks based on the first image block includes:
gridding the first image frame to obtain a second image block;
taking the three-dimensional coordinates, the initial orientation value and the reference frame of the first image block as the three-dimensional coordinates, the initial orientation value and the reference frame of the second image block;
carrying out back projection operation by using the initial value of the second image block, and obtaining an effective feature set according to luminosity difference;
and so on until the three-dimensional space coordinates, the initial appearance and the second orientation corresponding to all the image frames in the first frame sequence are calculated.
Optionally, in another embodiment based on the above method of the present application, the generating a reconstructed three-dimensional grid based on the first image block and the target number of second image blocks includes:
filtering the first image blocks and the second image blocks with the target number, and eliminating outlier points of the second image blocks with the target number;
and generating the reconstructed three-dimensional grid according to the filtered first image block and the target number of second image blocks.
According to another aspect of the embodiments of the present application, there is provided an apparatus for reconstructing a three-dimensional image, including:
the first acquisition module is used for acquiring a target common-view image and determining a first frame sequence based on the target common-view image, wherein each frame in the first frame sequence is sequentially ordered according to the weight sum of corresponding connecting edges;
the second acquisition module is arranged for acquiring a first image frame in the first frame sequence and generating a first image block based on the first image frame, wherein the first image frame is the image frame with the highest total value of the connecting edge weights in the first frame sequence;
the first generation module is configured to obtain a target number of second image blocks based on the first image blocks, wherein the second image blocks are adjacent to the first image blocks;
a second generation module configured to generate a reconstructed three-dimensional mesh based on the first image blocks and the target number of second image blocks.
According to another aspect of the embodiments of the present application, there is provided an electronic device including:
a memory for storing executable instructions; and
a processor for communicating with the memory to execute the executable instructions so as to perform the operations of any of the methods of reconstructing a three-dimensional image described above.
According to a further aspect of the embodiments of the present application, there is provided a computer-readable storage medium for storing computer-readable instructions, which when executed, perform the operations of any one of the above methods for reconstructing a three-dimensional image.
In the application, after the target common-view image is obtained and the first frame sequence ordered by the weight sum of the corresponding connecting edges is determined based on it, the first image frame, namely the image frame with the highest connecting-edge weight sum in the first frame sequence, can be acquired, a first image block can be generated based on it, a target number of second image blocks can be obtained based on the first image block, and finally a reconstructed three-dimensional mesh can be generated based on the first image block and the target number of second image blocks. By applying the technical scheme of the application, the common view can be introduced to establish the reconstruction order, and a new image-block generation method based on visual geometry is provided, so that the speed and precision problems in the matching process can be better solved.
The technical solution of the present application is further described in detail by the accompanying drawings and examples.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments of the application and together with the description, serve to explain the principles of the application.
The present application may be more clearly understood from the following detailed description with reference to the accompanying drawings, in which:
fig. 1 is a schematic diagram of a method for reconstructing a three-dimensional image according to the present application;
fig. 2 is a schematic flow chart of reconstructing a three-dimensional image according to the present application;
FIG. 3 is a schematic diagram of another method for reconstructing a three-dimensional image according to the present application;
FIG. 4 is a schematic structural diagram of an apparatus for reconstructing a three-dimensional image according to the present application;
fig. 5 is a schematic view of an electronic device according to the present application.
Detailed Description
Various exemplary embodiments of the present application will now be described in detail with reference to the accompanying drawings. It should be noted that: the relative arrangement of the components and steps, the numerical expressions, and numerical values set forth in these embodiments do not limit the scope of the present application unless specifically stated otherwise.
Meanwhile, it should be understood that the sizes of the respective portions shown in the drawings are not drawn in an actual proportional relationship for the convenience of description.
The following description of at least one exemplary embodiment is merely illustrative in nature and is in no way intended to limit the application, its application, or uses.
Techniques, methods, and apparatus known to those of ordinary skill in the relevant art may not be discussed in detail but are intended to be part of the specification where appropriate.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, further discussion thereof is not required in subsequent figures.
In addition, technical solutions between the various embodiments of the present application may be combined with each other, but it must be based on the realization of the technical solutions by a person skilled in the art, and when the technical solutions are contradictory or cannot be realized, such a combination of technical solutions should be considered to be absent and not within the protection scope of the present application.
It should be noted that all the directional indicators (such as upper, lower, left, right, front, and rear) in the embodiments of the present application are only used to explain the relative positional relationship, motion situation, and the like between the components in a specific posture (as shown in the drawings); if the specific posture changes, the directional indicators change accordingly.
A method for performing reconstruction of a three-dimensional image according to an exemplary embodiment of the present application is described below in conjunction with fig. 1-3. It should be noted that the following application scenarios are merely illustrated for the convenience of understanding the spirit and principles of the present application, and the embodiments of the present application are not limited in this respect. Rather, embodiments of the present application may be applied to any scenario where applicable.
The application also provides a method, a device, a target terminal and a medium for reconstructing the three-dimensional image.
Fig. 1 schematically shows a flow diagram of a method of reconstructing a three-dimensional image according to an embodiment of the present application. As shown in fig. 1, the method includes:
s101, obtaining a target common-view image, determining a first frame sequence based on the target common-view image, and sequencing frames in the first frame sequence in sequence according to the weight sum of corresponding connecting edges.
First, the present application may compute a common view from the result of structure from motion (SfM) and determine the reconstruction frame order from the common-view relationship.
Further, structure from motion, i.e., determining the spatial and geometric relationships of the target through the movement of the camera, is a common method for three-dimensional reconstruction. Its greatest difference from a depth camera such as the Kinect is that it only needs an ordinary RGB camera, so the cost is lower, the environmental constraints are small, and it can be used both indoors and outdoors. However, SfM needs complex theory and algorithms for support, and its accuracy and speed still need to be improved, so mature commercial applications are not yet numerous.
Further, the common view in the application may be an undirected graph, where each vertex V is a camera and each edge E is weighted by the number of common-view feature points between two cameras. To ensure robustness, the maximum edge weight E is no more than β times the number of all feature points (for example, β = 1/3), i.e., E < β·S_f, where S_f is the number of all feature points.
And further, sequencing all the frames from large to small according to the weight sum of the connected edges to further obtain a first frame sequence. It should be noted that the following matching process needs to be calculated from the frame with the largest weight.
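The ordering step above can be sketched as follows. This is an illustrative Python reading of the description, not the patent's code: the function name, the input format, and the default β = 1/3 are assumptions for illustration.

```python
from collections import defaultdict

def build_reconstruction_order(covis_counts, total_features, beta=1/3):
    """Order frames by descending sum of connected-edge weights in the
    common view G(V, E).

    covis_counts maps a camera pair (i, j) to the number of common-view
    feature points between the two cameras; each edge weight is capped at
    beta * total_features (i.e., E < beta * S_f) for robustness.
    """
    cap = beta * total_features
    weight_sum = defaultdict(float)
    for (i, j), count in covis_counts.items():
        w = min(count, cap)      # enforce the E < beta * S_f cap
        weight_sum[i] += w
        weight_sum[j] += w
    # the matching process starts from the frame with the largest weight sum
    return sorted(weight_sum, key=lambda frame: -weight_sum[frame])
```

Matching then proceeds from the first frame of the returned sequence, as required by the description.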
S102, acquiring a first image frame in a first frame sequence, and generating a first image block based on the first image frame, wherein the first image frame is an image frame with the highest total value of connecting edge weights in the first frame sequence.
Further, after the first frame sequence is determined based on the target common-view image, a first image frame with the highest total weight sum of the connecting edges in the first frame sequence may be obtained, and the first image block may be generated based on the first image frame.
Specifically, the application may extract the feature points in the image frame, match the feature points, and calculate the image block (patch) corresponding to each feature point. An image patch includes a position, an orientation, an appearance, and the like.
S103, obtaining a target number of second image blocks based on the first image block, wherein a second image block is an image block adjacent to the first image block.
That is, the surrounding image blocks are calculated from the existing image blocks.
And S104, generating a reconstructed three-dimensional grid based on the first image block and the target number of second image blocks.
Further, in the method, the dense point cloud can be meshed on the basis of the first image blocks and the second image blocks of the target number, so that a final reconstructed three-dimensional grid is obtained, and the reconstructed three-dimensional grid is subsequently used for reconstructing a three-dimensional image.
In the application, after the target common-view image is obtained and the first frame sequence ordered by the weight sum of the corresponding connecting edges is determined based on it, the first image frame, namely the image frame with the highest connecting-edge weight sum in the first frame sequence, can be acquired, a first image block can be generated based on it, a target number of second image blocks can be obtained based on the first image block, and finally a reconstructed three-dimensional mesh can be generated based on the first image block and the target number of second image blocks. By applying the technical scheme of the application, the common view can be introduced to establish the reconstruction order, and a new image-block generation method based on visual geometry is provided, so that the speed and precision problems in the matching process can be better solved.
Optionally, in a possible implementation manner of the present application, after S102 (acquiring the first image frame in the first frame sequence), the following steps may be implemented:
a first feature point set of a first image frame is acquired, wherein the first feature point set at least comprises corner features (FAST, features from accelerated segment test) and difference-of-Gaussian (DoG) features.
Further optionally, in the process of acquiring the first feature point set of the first image frame, the following may be performed:
acquiring a second image frame set adjacent to the first image frame;
obtaining a target point to be matched corresponding to each image frame in a second image frame set by using an epipolar constraint algorithm;
calculating the similarity of each target point to be matched by using a cost function;
obtaining a plurality of candidate matching points based on the similarity of each target point to be matched;
and obtaining a first feature point set of the first image frame based on the candidate matching points.
Further, after acquiring the first feature point set of the first image frame, the following steps may be further performed:
calculating three-dimensional space coordinates corresponding to each first characteristic point in the first characteristic point set by using a forward intersection algorithm and a robust function algorithm;
calculating an initial appearance corresponding to each first feature point in the first feature point set;
utilizing a homography matrix to constrain a minimized optimization target, and acquiring a first orientation of the first image block;
and generating a first image block based on the three-dimensional space coordinate, the initial appearance and the first orientation corresponding to each first feature point in the first feature point set.
Further, in the calculation of the initial appearance corresponding to each first feature point in the first feature point set, the following steps may be performed:
calculating the difference sum value of each first characteristic point and other characteristic points in the first characteristic point set;
and taking the feature point with the minimum difference sum as a second feature point, and taking a neighborhood with a preset size of the second feature point as an initial appearance of the first image block.
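The initial-appearance selection above (choose the feature point with the smallest summed appearance difference to all others, then take its M×M neighborhood) can be sketched as follows. The descriptor format, function name, and L1 difference measure are illustrative assumptions, not taken from the patent.

```python
import numpy as np

def initial_appearance(descriptors, image, coords, m=5):
    """Select the feature point whose summed appearance difference to all
    other feature points is smallest, and return its m x m neighborhood in
    the image as the initial appearance of the image block."""
    d = np.asarray(descriptors, dtype=float)
    # pairwise L1 appearance differences, summed per feature point
    diff_sum = np.abs(d[:, None, :] - d[None, :, :]).sum(axis=(1, 2))
    best = int(np.argmin(diff_sum))    # the "second feature point"
    r, c = coords[best]
    h = m // 2
    return best, image[r - h:r + h + 1, c - h:c + h + 1]
```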
Further, the present application constructs a common view G(V, E), which is an undirected graph: each vertex V is a camera, and each edge E is weighted by the number of common-view feature points between the two cameras. To ensure robustness, the maximum edge weight E is no more than β times the number of all feature points (e.g., β = 1/3), i.e., E < β·S_f, where S_f is the number of all feature points.
Further, the feature points may be extracted first in the process of obtaining the first image block. In order to extract as many feature points as possible, the invention extracts the FAST features and the DoG features at the same time. In addition, the first image frame in the present application may be an image frame I. For a certain feature point s of the image frame I, in order to complete the initial matching quickly, the set of other image frames adjacent to the image frame I is obtained first. This set is then traversed through the following process to obtain an initial set of candidate matching points, denoted as L.
First, an epipolar constraint algorithm can be used to obtain the points to be matched in each neighboring frame, and an AD-Census cost function (absolute differences combined with a Census transform) can be used to calculate the similarity of each point to be matched. If the photometric similarity is greater than a preset threshold, the point to be matched is considered a valid candidate matching point. It should be noted that one frame of image in the present application may have one or more valid candidate matching points.
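The thresholding step above can be illustrated with the "AD" half of an AD-Census-style cost (the Census term is omitted for brevity). This is a hedged sketch; the function names and the use of a cost threshold (low cost = high similarity) are illustrative assumptions.

```python
import numpy as np

def ad_cost(patch_a, patch_b):
    """Sum of absolute photometric differences between two patches: the
    'AD' part of an AD-Census style cost (lower means more similar)."""
    return float(np.abs(patch_a.astype(float) - patch_b.astype(float)).sum())

def candidate_matches(ref_patch, epipolar_patches, max_cost):
    """Keep every candidate point along the epipolar line whose cost is
    below the threshold; one frame may therefore contribute one or more
    valid candidate matching points, as in the description."""
    return [i for i, p in enumerate(epipolar_patches)
            if ad_cost(ref_patch, p) < max_cost]
```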
Furthermore, the method can utilize forward intersection (triangulation) and a robust function algorithm to calculate the three-dimensional space coordinates p(s) of the feature point s and obtain the feature point set {s} on the two-dimensional images, such that each frame of image has at most one matching point. Then, the present application may initialize the appearance of the image block; specifically, the neighborhood of the feature point M in {s} having the smallest sum of appearance differences from all other feature points may be taken as the initial appearance a(s) of the image block.
Still further, according to the known camera positions and the three-dimensional coordinates p(s), the method can perform back projection, use the appearance difference to find whether more candidate feature points exist, and update {s}; it can be understood that this process can be repeated to calculate the three-dimensional coordinates and the corresponding appearance of the first image block. Finally, when calculating the orientation of the first image block, the orientation o(s) of the image block may be obtained by using an optimization objective of homography-matrix-constrained minimization, thereby obtaining a first image block comprising the three-dimensional space coordinates, the initial appearance, and the first orientation.
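Forward intersection of a matched point pair can be illustrated with standard linear (DLT) triangulation, a textbook method that is consistent with, but not necessarily identical to, the patent's computation of p(s); the robust-function weighting mentioned above is omitted here.

```python
import numpy as np

def forward_intersection(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one point from two views: P1 and P2
    are 3x4 camera projection matrices, x1 and x2 the matched point
    coordinates in the two images."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]                  # null vector of A = homogeneous 3-D point
    return X[:3] / X[3]
```

A robust loss (e.g., Huber) over the reprojection residuals, as the description suggests, would replace this plain least-squares solve in practice.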
Further optionally, in the process of obtaining the target number of second image blocks based on the first image block, the method may be obtained by:
gridding the first image frame to obtain a second image block;
taking the three-dimensional coordinates, the initial orientation value and the reference frame of the first image block as the three-dimensional coordinates, the initial orientation value and the reference frame of the second image block;
carrying out back projection operation by using the initial value of the second image block, and obtaining an effective feature set according to the luminosity difference;
and repeating the steps until three-dimensional space coordinates, an initial appearance and a second orientation corresponding to all the image frames in the first frame sequence are calculated.
Further, after the first image block is generated, the first image block can be continued to be expanded to obtain a plurality of second image blocks around the first image block, so that more image blocks are obtained, and dense three-dimensional point cloud reconstruction is completed. It should be noted that the following process of generating the second image block needs to be repeated according to the frame sequence:
First, the present application needs to grid the first image frame (e.g., each grid cell is 3 × 3) and expand outward from the existing image blocks. Further, the three-dimensional coordinates, the initial orientation value, and the reference frame of a newly expanded second image block are all taken from the corresponding values of the adjacent existing image block, and the appearance of the second image block is the appearance in its reference frame.
Furthermore, the initial value of the second image block can be used to perform back projection, and an effective feature set is obtained according to the photometric difference; these steps are subsequently repeated to calculate the three-dimensional coordinates and the appearance of the second image block, and the orientation of the second image block is obtained by adopting an optimization target of homography-matrix-constrained minimization.
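One sweep of the expansion step (seed each empty neighboring grid cell with the initial values of an existing block) might look like the following sketch; the grid representation and names are illustrative, and the back-projection/photometric refinement that follows in the description is omitted.

```python
def four_neighbors(cell):
    """4-neighborhood of a grid cell (row, col)."""
    r, c = cell
    return [(r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)]

def expand_once(existing, neighbors=four_neighbors):
    """One sweep of patch expansion: every empty grid cell adjacent to an
    existing image block inherits that block's three-dimensional
    coordinates, orientation, and reference frame as initial values."""
    new_blocks = {}
    for cell, block in existing.items():
        for nb in neighbors(cell):
            if nb not in existing and nb not in new_blocks:
                new_blocks[nb] = dict(block)   # copy initial values
    return new_blocks
```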
Further optionally, in S104 (generating a reconstructed three-dimensional grid based on the first image block and the target number of second image blocks), the present application may further include a specific implementation manner, as shown in fig. 3, including:
s201, acquiring a target common-view image, determining a first frame sequence based on the target common-view image, and sequencing frames in the first frame sequence in sequence according to the weight sum of corresponding connecting edges;
s202, acquiring a first image frame in a first frame sequence, and generating a first image block based on the first image frame, wherein the first image frame is an image frame with the highest connecting edge weight sum value in the first frame sequence;
s203, obtaining a target number of second image blocks based on the first image blocks, wherein the second image blocks are adjacent to the first image blocks;
s204, filtering the first image blocks and the second image blocks with the target number, and removing the outlier points of the second image blocks with the target number;
and S205, generating a reconstructed three-dimensional grid according to the filtered first image block and the target number of second image blocks.
Furthermore, the method can filter the first image block and the target number of second image blocks, thereby eliminating the outlier points generated by image block expansion. According to the photometric difference function, image block observation points that do not satisfy the photometric consistency condition are removed, as are image block observation points that do not satisfy the FOV constraint. Finally, the dense point cloud is meshed by adopting the standard Poisson surface reconstruction algorithm, so that the final reconstructed three-dimensional mesh is obtained.
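The filtering step above can be sketched as follows. This is a simplified stand-in for the photometric-consistency and FOV constraints, with hypothetical callback-style inputs; it is not the patent's implementation.

```python
def filter_blocks(blocks, views_of, photometric_error, max_error, min_views):
    """Remove outlier image blocks before meshing: a block survives only
    if it remains photometrically consistent (error below max_error) in
    at least min_views of the cameras that observe it."""
    kept = []
    for b in blocks:
        consistent = [v for v in views_of(b)
                      if photometric_error(b, v) < max_error]
        if len(consistent) >= min_views:
            kept.append(b)
    return kept
```

The surviving blocks would then be passed to a Poisson surface reconstruction step to produce the final mesh.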
In the present application, after the target common-view image is acquired and the first frame sequence, ordered by the sum of the weights of the corresponding connecting edges, is determined from it, the first image frame, i.e. the frame with the highest connecting-edge weight sum in the sequence, can be acquired; a first image block is generated from this frame, a target number of second image blocks are obtained from the first image block, and finally the reconstructed three-dimensional grid is generated from the first image block and the target number of second image blocks. By applying this technical scheme, a common view can be introduced to establish the reconstruction order, and a new image-block generation method based on visual geometry is provided, which better addresses the speed and accuracy problems of the matching process.
In another embodiment of the present application, as shown in fig. 4, the present application further provides an apparatus for reconstructing a three-dimensional image. The apparatus comprises a first obtaining module 301, a second obtaining module 302, a first generating module 303, and a second generating module 304, where:
a first obtaining module 301, configured to obtain a target common-view image, and determine a first frame sequence based on the target common-view image, where each frame in the first frame sequence is sequentially ordered according to a total weight of corresponding connecting edges;
a second obtaining module 302, configured to obtain a first image frame in the first frame sequence, and generate a first image block based on the first image frame, where the first image frame is an image frame with a highest total value of connected edge weights in the first frame sequence;
a first generating module 303 configured to obtain a target number of second image blocks based on the first image block, where the second image blocks are adjacent to the first image block;
a second generating module 304 arranged to generate a reconstructed three-dimensional grid based on the first image block and the target number of second image blocks.
In the present application, after the target common-view image is acquired and the first frame sequence, ordered by the sum of the weights of the corresponding connecting edges, is determined from it, the first image frame, i.e. the frame with the highest connecting-edge weight sum in the sequence, can be acquired; a first image block is generated from this frame, a target number of second image blocks are obtained from the first image block, and finally the reconstructed three-dimensional grid is generated from the first image block and the target number of second image blocks. By applying this technical scheme, a common view can be introduced to establish the reconstruction order, and a new image-block generation method based on visual geometry is provided, which better addresses the speed and accuracy problems of the matching process.
In another embodiment of the present application, the second obtaining module 302 further includes:
a second obtaining module 302 configured to obtain a first feature point set of the first image frame, the first feature point set including at least corner-detection features and Gaussian-function features.
In another embodiment of the present application, the second obtaining module 302 further includes:
a second acquisition module 302 configured to acquire a second set of image frames adjacent to the first image frame;
a second obtaining module 302, configured to obtain, by using an epipolar constraint algorithm, a target point to be matched corresponding to each image frame in the second image frame set;
a second obtaining module 302, configured to calculate the similarity of each target point to be matched by using a cost function;
a second obtaining module 302, configured to obtain a plurality of candidate matching points based on the similarity of the target points to be matched;
a second obtaining module 302 configured to obtain the first feature point set of the first image frame based on the plurality of candidate matching points.
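One plausible realization of the cost-function step above: score each candidate point sampled along the epipolar line with normalized cross-correlation (NCC) over a small window, and keep the top-scoring candidates as candidate matching points. NCC is an assumption for illustration; the patent does not name a specific cost function.

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation between two image windows, one
    plausible cost function for scoring epipolar-line candidates."""
    a = a.astype(float).ravel()
    b = b.astype(float).ravel()
    a -= a.mean()
    b -= b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom > 0 else 0.0

def best_candidates(ref_win, cand_wins, k=2):
    """Rank candidate windows by similarity to the reference window and
    keep the indices of the top-k as candidate matching points."""
    scores = [ncc(ref_win, w) for w in cand_wins]
    return sorted(range(len(cand_wins)), key=lambda i: -scores[i])[:k]
```

An identical window scores 1, a contrast-inverted one scores -1, so thresholding or top-k selection on this score yields the "plurality of candidate matching points" from which the first feature point set is assembled.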
In another embodiment of the present application, the second obtaining module 302 further includes:
a second obtaining module 302, configured to calculate three-dimensional space coordinates corresponding to each first feature point in the first feature point set by using a forward intersection algorithm and a robust function algorithm;
a second obtaining module 302 configured to calculate an initial appearance corresponding to each first feature point in the first feature point set;
a second obtaining module 302 configured to obtain a first orientation of the first image block by using an optimization objective of homography matrix constraint minimization;
a second obtaining module 302 configured to generate the first image block based on the three-dimensional space coordinates, the initial appearance, and the first orientation corresponding to each first feature point in the first feature point set.
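The forward-intersection step can be sketched with standard linear (DLT) triangulation: each two-view observation contributes two rows to a homogeneous system whose null-space solution is the three-dimensional point. The robust-function refinement mentioned above would iteratively reweight these rows (e.g. with a Huber loss) and is omitted from this sketch.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear forward intersection (DLT): recover the 3D point observed at
    pixel x1 under camera matrix P1 (3x4) and at x2 under P2.  A robust
    loss would reweight the rows in an iterative refinement, not shown."""
    A = np.array([
        x1[0] * P1[2] - P1[0],               # two rows from view 1
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],               # two rows from view 2
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)              # null space of A
    X = Vt[-1]
    return X[:3] / X[3]                      # dehomogenize
```

Running this per first feature point yields the three-dimensional space coordinates that, together with the initial appearance and first orientation, define the first image block.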
In another embodiment of the present application, the first generating module 303 further includes:
a first generating module 303 configured to calculate a difference sum value of each first feature point and other feature points in the first feature point set;
a first generating module 303, configured to use a feature point with a minimum sum of differences as a second feature point, and use a preset size neighborhood of the second feature point as an initial appearance of the first image block.
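The difference-sum selection above amounts to picking the medoid of the feature set: the point whose summed difference to all other feature points is smallest becomes the second feature point, and its preset-size neighborhood becomes the patch's initial appearance. Euclidean distance is used here as an assumed difference measure.

```python
import numpy as np

def reference_feature(points):
    """Return the index of the feature point whose summed difference
    (Euclidean distance, an assumed measure) to all other feature points
    is minimal -- i.e. the medoid of the set."""
    pts = np.asarray(points, dtype=float)
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    return int(np.argmin(d.sum(axis=1)))     # index of minimum difference sum
```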
In another embodiment of the present application, the first generating module 303 further includes:
a first generating module 303 configured to grid the first image frame to obtain a second image block;
a first generating module 303, configured to use the three-dimensional coordinates, the initial orientation value, and the reference frame of the first image block as the three-dimensional coordinates, the initial orientation value, and the reference frame of the second image block;
the first generation module 303 is configured to perform a back projection operation by using the initial values of the second image blocks, and obtain an effective feature set according to the luminosity difference;
a first generating module 303, configured to repeat the above operations until the three-dimensional space coordinates, the initial appearance, and the second orientation corresponding to all image frames in the first frame sequence are calculated.
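The gridding-and-inheritance expansion handled by this module can be sketched as follows: the image is divided into fixed-size cells, and each empty cell adjacent to a cell containing a reconstructed patch inherits that patch's three-dimensional initial values for later refinement. The cell size, 4-neighbourhood adjacency rule, and data layout are illustrative assumptions.

```python
import numpy as np

def expand_seeds(frame_shape, seed_patches, cell=8):
    """Grid the image into cell x cell blocks; every empty cell adjacent
    to a reconstructed patch inherits that patch's 3D initial value
    (coordinates/orientation/reference frame) as a new second-image-block
    seed.  `seed_patches` maps pixel (x, y) -> initial value."""
    H, W = frame_shape
    gh, gw = H // cell, W // cell
    occupied = np.zeros((gh, gw), dtype=bool)
    for (x, y) in seed_patches:
        occupied[y // cell, x // cell] = True
    init = {}
    for (x, y), value in seed_patches.items():
        cy, cx = y // cell, x // cell
        for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):  # 4-neighbourhood
            ny, nx = cy + dy, cx + dx
            if 0 <= ny < gh and 0 <= nx < gw and not occupied[ny, nx]:
                init[(ny, nx)] = value       # inherit 3D init from neighbour
    return init
```

Each inherited seed would then be refined by the back-projection and photometric-difference steps described earlier, repeating until every frame in the first frame sequence has been processed.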
In another embodiment of the present application, the first generating module 303 further includes:
a first generating module 303, configured to filter the first image block and the target number of second image blocks, and eliminate outlier points of the target number of second image blocks;
a first generating module 303, configured to generate the reconstructed three-dimensional grid according to the filtered first image block and the target number of second image blocks.
Fig. 5 is a block diagram illustrating a logical structure of an electronic device in accordance with an exemplary embodiment. For example, the electronic device 400 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, an exercise device, a personal digital assistant, and the like.
Referring to fig. 5, electronic device 400 may include one or more of the following components: a processor 401 and a memory 402.
Processor 401 may include one or more processing cores, such as a 4-core or 8-core processor. The processor 401 may be implemented in at least one of the following hardware forms: DSP (Digital Signal Processing), FPGA (Field-Programmable Gate Array), or PLA (Programmable Logic Array). The processor 401 may also include a main processor and a coprocessor: the main processor, also called the CPU (Central Processing Unit), processes data in the awake state, while the coprocessor is a low-power processor that processes data in the standby state. In some embodiments, the processor 401 may be integrated with a GPU (Graphics Processing Unit) responsible for rendering and drawing the content to be shown on the display screen. In some embodiments, the processor 401 may further include an AI (Artificial Intelligence) processor for handling machine-learning computations.
Memory 402 may include one or more computer-readable storage media, which may be non-transitory. Memory 402 may also include high speed random access memory, as well as non-volatile memory, such as one or more magnetic disk storage devices, flash memory storage devices. In some embodiments, a non-transitory computer readable storage medium in the memory 402 is configured to store at least one instruction for execution by the processor 401 to implement the interactive special effect calibration method provided by the method embodiments of the present application.
In some embodiments, the electronic device 400 may further optionally include: a peripheral interface 403 and at least one peripheral. The processor 401, memory 402 and peripheral interface 403 may be connected by bus or signal lines. Each peripheral may be connected to the peripheral interface 403 via a bus, signal line, or circuit board. Specifically, the peripheral device includes: at least one of radio frequency circuitry 404, touch screen display 405, camera 406, audio circuitry 407, positioning components 408, and power supply 409.
The peripheral interface 403 may be used to connect at least one peripheral related to I/O (Input/Output) to the processor 401 and the memory 402. In some embodiments, processor 401, memory 402, and peripheral interface 403 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 401, the memory 402 and the peripheral interface 403 may be implemented on a separate chip or circuit board, which is not limited by this embodiment.
The Radio Frequency circuit 404 is used for receiving and transmitting RF (Radio Frequency) signals, also called electromagnetic signals. The radio frequency circuitry 404 communicates with communication networks and other communication devices via electromagnetic signals. The rf circuit 404 converts an electrical signal into an electromagnetic signal to transmit, or converts a received electromagnetic signal into an electrical signal. Optionally, the radio frequency circuit 404 includes: an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and so forth. The radio frequency circuitry 404 may communicate with other terminals via at least one wireless communication protocol. The wireless communication protocols include, but are not limited to: metropolitan area networks, various generation mobile communication networks (2G, 3G, 4G, and 5G), Wireless local area networks, and/or WiFi (Wireless Fidelity) networks. In some embodiments, the rf circuit 404 may further include NFC (Near Field Communication) related circuits, which are not limited in this application.
The display screen 405 is used to display a UI (User Interface). The UI may include graphics, text, icons, video, and any combination thereof. When the display screen 405 is a touch display screen, the display screen 405 also has the ability to capture touch signals on or over the surface of the display screen 405. The touch signal may be input to the processor 401 as a control signal for processing. At this point, the display screen 405 may also be used to provide virtual buttons and/or a virtual keyboard, also referred to as soft buttons and/or a soft keyboard. In some embodiments, the display screen 405 may be one, providing the front panel of the electronic device 400; in other embodiments, the display screen 405 may be at least two, respectively disposed on different surfaces of the electronic device 400 or in a folded design; in still other embodiments, the display screen 405 may be a flexible display screen disposed on a curved surface or a folded surface of the electronic device 400. Even further, the display screen 405 may be arranged in a non-rectangular irregular pattern, i.e. a shaped screen. The Display screen 405 may be made of LCD (Liquid Crystal Display), OLED (Organic Light-Emitting Diode), and other materials.
The camera assembly 406 is used to capture images or video. Optionally, camera assembly 406 includes a front camera and a rear camera. Generally, a front camera is disposed at a front panel of the terminal, and a rear camera is disposed at a rear surface of the terminal. In some embodiments, the number of the rear cameras is at least two, and each rear camera is any one of a main camera, a depth-of-field camera, a wide-angle camera and a telephoto camera, so that the main camera and the depth-of-field camera are fused to realize a background blurring function, and the main camera and the wide-angle camera are fused to realize panoramic shooting and VR (Virtual Reality) shooting functions or other fusion shooting functions. In some embodiments, camera assembly 406 may also include a flash. The flash lamp can be a monochrome temperature flash lamp or a bicolor temperature flash lamp. The double-color-temperature flash lamp is a combination of a warm-light flash lamp and a cold-light flash lamp, and can be used for light compensation at different color temperatures.
The audio circuit 407 may include a microphone and a speaker. The microphone is used for collecting sound waves of a user and the environment, converting the sound waves into electric signals, and inputting the electric signals to the processor 401 for processing, or inputting the electric signals to the radio frequency circuit 404 for realizing voice communication. For stereo capture or noise reduction purposes, the microphones may be multiple and disposed at different locations of the electronic device 400. The microphone may also be an array microphone or an omni-directional pick-up microphone. The speaker is used to convert electrical signals from the processor 401 or the radio frequency circuit 404 into sound waves. The loudspeaker can be a traditional film loudspeaker or a piezoelectric ceramic loudspeaker. When the speaker is a piezoelectric ceramic speaker, the speaker can be used for purposes such as converting an electric signal into a sound wave audible to a human being, or converting an electric signal into a sound wave inaudible to a human being to measure a distance. In some embodiments, audio circuitry 407 may also include a headphone jack.
The positioning component 408 is used to locate the current geographic location of the electronic device 400 to implement navigation or LBS (Location Based Service). The positioning component 408 may be based on the GPS (Global Positioning System) of the United States, the BeiDou system of China, the GLONASS system of Russia, or the Galileo system of the European Union.
The power supply 409 is used to supply power to the various components in the electronic device 400. The power source 409 may be alternating current, direct current, disposable or rechargeable. When power source 409 comprises a rechargeable battery, the rechargeable battery may support wired or wireless charging. The rechargeable battery may also be used to support fast charge technology.
In some embodiments, the electronic device 400 also includes one or more sensors 410. The one or more sensors 410 include, but are not limited to: acceleration sensor 411, gyro sensor 412, pressure sensor 413, fingerprint sensor 414, optical sensor 415, and proximity sensor 416.
The acceleration sensor 411 may detect the magnitude of acceleration in three coordinate axes of a coordinate system established with the electronic apparatus 400. For example, the acceleration sensor 411 may be used to detect components of the gravitational acceleration in three coordinate axes. The processor 401 may control the touch display screen 405 to display the user interface in a landscape view or a portrait view according to the gravitational acceleration signal collected by the acceleration sensor 411. The acceleration sensor 411 may also be used for acquisition of motion data of a game or a user.
The gyro sensor 412 may detect a body direction and a rotation angle of the electronic device 400, and the gyro sensor 412 may cooperate with the acceleration sensor 411 to acquire a 3D motion of the user on the electronic device 400. From the data collected by the gyro sensor 412, the processor 401 may implement the following functions: motion sensing (such as changing the UI according to a user's tilting operation), image stabilization at the time of photographing, game control, and inertial navigation.
The pressure sensors 413 may be disposed on a side bezel of the electronic device 400 and/or on a lower layer of the touch display screen 405. When the pressure sensor 413 is arranged on the side frame of the electronic device 400, a holding signal of the user to the electronic device 400 can be detected, and the processor 401 performs left-right hand identification or shortcut operation according to the holding signal collected by the pressure sensor 413. When the pressure sensor 413 is disposed at the lower layer of the touch display screen 405, the processor 401 controls the operability control on the UI interface according to the pressure operation of the user on the touch display screen 405. The operability control comprises at least one of a button control, a scroll bar control, an icon control and a menu control.
The fingerprint sensor 414 collects the user's fingerprint, and the processor 401 (or the fingerprint sensor 414 itself) identifies the user from the collected fingerprint. Upon recognizing the user's identity as trusted, the processor 401 authorizes the user to perform relevant sensitive operations, including unlocking the screen, viewing encrypted information, downloading software, paying, and changing settings. The fingerprint sensor 414 may be disposed on the front, back, or side of the electronic device 400. When a physical button or vendor logo is provided on the electronic device 400, the fingerprint sensor 414 may be integrated with it.
The optical sensor 415 is used to collect the ambient light intensity. In one embodiment, the processor 401 may control the display brightness of the touch display screen 405 based on the ambient light intensity collected by the optical sensor 415. Specifically, when the ambient light intensity is high, the display brightness of the touch display screen 405 is increased; when the ambient light intensity is low, the display brightness of the touch display screen 405 is turned down. In another embodiment, the processor 401 may also dynamically adjust the shooting parameters of the camera assembly 406 according to the ambient light intensity collected by the optical sensor 415.
Proximity sensor 416, also known as a distance sensor, is typically disposed on the front panel of electronic device 400. The proximity sensor 416 is used to capture the distance between the user and the front of the electronic device 400. In one embodiment, when the proximity sensor 416 detects that this distance gradually decreases, the processor 401 controls the touch display screen 405 to switch from the bright-screen state to the off-screen state; when the proximity sensor 416 detects that the distance gradually increases, the processor 401 controls the touch display screen 405 to switch from the off-screen state back to the bright-screen state.
Those skilled in the art will appreciate that the configuration shown in fig. 5 does not constitute a limitation of the electronic device 400, and may include more or fewer components than those shown, or combine certain components, or employ a different arrangement of components.
In an exemplary embodiment, there is also provided a non-transitory computer-readable storage medium, such as the memory 402, comprising instructions executable by the processor 401 of the electronic device 400 to perform the above-described method of reconstructing a three-dimensional image, the method comprising: acquiring a target common-view image, and determining a first frame sequence based on the target common-view image, wherein each frame in the first frame sequence is ordered by the sum of the weights of its corresponding connecting edges; acquiring a first image frame in the first frame sequence, and generating a first image block based on the first image frame, wherein the first image frame is the image frame with the highest connecting-edge weight sum in the first frame sequence; obtaining a target number of second image blocks based on the first image block, wherein the second image blocks are adjacent to the first image block; and generating a reconstructed three-dimensional grid based on the first image block and the target number of second image blocks. Optionally, the instructions may also be executable by the processor 401 of the electronic device 400 to perform other steps involved in the exemplary embodiments described above. For example, the non-transitory computer-readable storage medium may be a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
In an exemplary embodiment, an application/computer program product is also provided that includes one or more instructions executable by the processor 401 of the electronic device 400 to perform the above-described method of reconstructing a three-dimensional image, the method comprising: acquiring a target common-view image, and determining a first frame sequence based on the target common-view image, wherein each frame in the first frame sequence is ordered by the sum of the weights of its corresponding connecting edges; acquiring a first image frame in the first frame sequence, and generating a first image block based on the first image frame, wherein the first image frame is the image frame with the highest connecting-edge weight sum in the first frame sequence; obtaining a target number of second image blocks based on the first image block, wherein the second image blocks are adjacent to the first image block; and generating a reconstructed three-dimensional grid based on the first image block and the target number of second image blocks. Optionally, the instructions may also be executable by the processor 401 of the electronic device 400 to perform other steps involved in the exemplary embodiments described above.
Other embodiments of the present application will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. This application is intended to cover any variations, uses, or adaptations of the invention following, in general, the principles of the application and including such departures from the present disclosure as come within known or customary practice within the art to which the invention pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the application being indicated by the following claims.
It will be understood that the present application is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the application is limited only by the appended claims.

Claims (10)

1. A method of reconstructing a three-dimensional image, comprising:
acquiring a target common-view image, and determining a first frame sequence based on the target common-view image, wherein each frame in the first frame sequence is sequentially ordered according to the weight sum of corresponding connecting edges;
acquiring a first image frame in the first frame sequence, and generating a first image block based on the first image frame, wherein the first image frame is an image frame with the highest total value of connecting edge weights in the first frame sequence;
obtaining a target number of second image blocks based on the first image blocks, wherein the second image blocks are adjacent to the first image blocks;
and generating a reconstructed three-dimensional grid based on the first image blocks and the target number of second image blocks.
2. The method of claim 1, further comprising, after said acquiring a first image frame of the first frame sequence:
and acquiring a first feature point set of the first image frame, wherein the first feature point set at least comprises an angular point detection feature and a Gaussian function feature.
3. The method of claim 2, wherein said obtaining a first set of feature points for the first image frame comprises:
acquiring a second set of image frames adjacent to the first image frame;
obtaining a target point to be matched corresponding to each image frame in the second image frame set by using an epipolar constraint algorithm;
calculating the similarity of each target point to be matched by using a cost function;
obtaining a plurality of candidate matching points based on the similarity of the target points to be matched;
and obtaining the first feature point set of the first image frame based on the candidate matching points.
4. A method as claimed in claim 2 or 3, further comprising, after said acquiring the first set of feature points for the first image frame:
calculating three-dimensional space coordinates corresponding to each first characteristic point in the first characteristic point set by using a forward intersection algorithm and a robust function algorithm;
calculating an initial appearance corresponding to each first feature point in the first feature point set;
obtaining a first orientation of the first image block by using an optimization objective of homography matrix constraint minimization;
and generating the first image block based on the three-dimensional space coordinate, the initial appearance and the first orientation corresponding to each first feature point in the first feature point set.
5. The method of claim 4, wherein said computing an initial appearance corresponding to each first feature point in the set of first feature points comprises:
calculating the difference sum value of each first characteristic point and other characteristic points in the first characteristic point set;
and taking the feature point with the minimum difference sum value as a second feature point, and taking a preset size neighborhood of the second feature point as an initial appearance of the first image block.
6. The method of claim 1, wherein obtaining a target number of second image blocks based on the first image block comprises:
gridding the first image frame to obtain a second image block;
taking the three-dimensional coordinates, the initial orientation value and the reference frame of the first image block as the three-dimensional coordinates, the initial orientation value and the reference frame of the second image block;
carrying out back projection operation by using the initial value of the second image block, and obtaining an effective feature set according to luminosity difference;
and repeating the above operations until the three-dimensional space coordinates, the initial appearance, and the second orientation corresponding to all image frames in the first frame sequence are calculated.
7. The method of claim 1, wherein generating a reconstructed three-dimensional grid based on the first image patch and the target number of second image patches comprises:
filtering the first image blocks and the second image blocks with the target number, and eliminating outlier points of the second image blocks with the target number;
and generating the reconstructed three-dimensional grid according to the filtered first image block and the target number of second image blocks.
8. An apparatus for reconstructing a three-dimensional image, comprising:
the first acquisition module, configured to acquire a target common-view image and determine a first frame sequence based on it, wherein each frame in the first frame sequence is ordered by the sum of the weights of its corresponding connecting edges;
the second acquisition module, configured to acquire a first image frame in the first frame sequence and generate a first image block based on it, wherein the first image frame is the image frame with the highest connecting-edge weight sum in the first frame sequence;
the first generation module, configured to obtain a target number of second image blocks based on the first image block, wherein the second image blocks are adjacent to the first image block;
and the second generation module, configured to generate a reconstructed three-dimensional grid based on the first image block and the target number of second image blocks.
9. An electronic device, comprising:
a memory for storing executable instructions; and the number of the first and second groups,
a processor, coupled with the memory, for executing the executable instructions so as to perform the operations of the method of reconstructing a three-dimensional image of any of claims 1-7.
10. A computer-readable storage medium storing computer-readable instructions that, when executed, perform the operations of the method of reconstructing a three-dimensional image of any of claims 1-7.
CN202010507773.4A 2020-06-05 2020-06-05 Method, apparatus, electronic device, and medium for reconstructing three-dimensional image Pending CN111784841A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010507773.4A CN111784841A (en) 2020-06-05 2020-06-05 Method, apparatus, electronic device, and medium for reconstructing three-dimensional image


Publications (1)

Publication Number Publication Date
CN111784841A true CN111784841A (en) 2020-10-16

Family

ID=72754031

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010507773.4A Pending CN111784841A (en) 2020-06-05 2020-06-05 Method, apparatus, electronic device, and medium for reconstructing three-dimensional image

Country Status (1)

Country Link
CN (1) CN111784841A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112396107A (en) * 2020-11-18 2021-02-23 广州极飞科技有限公司 Reconstructed image selection method and device and electronic equipment
WO2024060981A1 (en) * 2022-09-20 2024-03-28 深圳市其域创新科技有限公司 Three-dimensional mesh optimization method, device, and storage medium

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109583457A (en) * 2018-12-03 2019-04-05 荆门博谦信息科技有限公司 A kind of method and robot of robot localization and map structuring
WO2019169540A1 (en) * 2018-03-06 2019-09-12 斯坦德机器人(深圳)有限公司 Method for tightly-coupling visual slam, terminal and computer readable storage medium
CN110599545A (en) * 2019-09-06 2019-12-20 电子科技大学中山学院 Feature-based dense map construction system
WO2020007483A1 (en) * 2018-07-06 2020-01-09 Nokia Technologies Oy Method, apparatus and computer program for performing three dimensional radio model construction
WO2020092177A2 (en) * 2018-11-02 2020-05-07 Fyusion, Inc. Method and apparatus for 3-d auto tagging
CN111161347A (en) * 2020-04-01 2020-05-15 亮风台(上海)信息科技有限公司 Method and equipment for initializing SLAM

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
S. Cascianelli et al.: "Robust visual semi-semantic loop closure detection by a covisibility graph and CNN features", Robotics and Autonomous Systems *
Dai Juting: "Research on 3D Semantic Surface Reconstruction of Large-Scale Scenes Based on RGB-D Video Sequences", China Doctoral Dissertations Full-text Database, Information Science and Technology *
Tang Rongfu et al.: "Camera focal length self-calibration with unknown principal point", Application Research of Computers *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112396107A (en) * 2020-11-18 2021-02-23 广州极飞科技有限公司 Reconstructed image selection method and device and electronic equipment
CN112396107B (en) * 2020-11-18 2023-02-14 广州极飞科技股份有限公司 Reconstructed image selection method and device and electronic equipment
WO2024060981A1 (en) * 2022-09-20 2024-03-28 深圳市其域创新科技有限公司 Three-dimensional mesh optimization method, device, and storage medium

Similar Documents

Publication Publication Date Title
US11205282B2 (en) Relocalization method and apparatus in camera pose tracking process and storage medium
CN110148178B (en) Camera positioning method, device, terminal and storage medium
CN111464749B (en) Method, device, equipment and storage medium for image synthesis
CN110064200B (en) Object construction method and device based on virtual environment and readable storage medium
CN110599593B (en) Data synthesis method, device, equipment and storage medium
CN109522863B (en) Ear key point detection method and device and storage medium
CN109302632B (en) Method, device, terminal and storage medium for acquiring live video picture
CN110570460A (en) Target tracking method and device, computer equipment and computer readable storage medium
CN109886208B (en) Object detection method and device, computer equipment and storage medium
CN114170349A (en) Image generation method, image generation device, electronic equipment and storage medium
CN111680758B (en) Image training sample generation method and device
CN112308103B (en) Method and device for generating training samples
CN111862148A (en) Method, device, electronic equipment and medium for realizing visual tracking
CN109754439B (en) Calibration method, calibration device, electronic equipment and medium
CN111784841A (en) Method, apparatus, electronic device, and medium for reconstructing three-dimensional image
CN111928861B (en) Map construction method and device
CN112967261B (en) Image fusion method, device, equipment and storage medium
CN110728744A (en) Volume rendering method and device and intelligent equipment
CN113209610B (en) Virtual scene picture display method and device, computer equipment and storage medium
CN111127539B (en) Parallax determination method and device, computer equipment and storage medium
CN110335224B (en) Image processing method, image processing device, computer equipment and storage medium
CN110443841B (en) Method, device and system for measuring ground depth
CN109472855B (en) Volume rendering method and device and intelligent device
CN109685881B (en) Volume rendering method and device and intelligent equipment
CN111583339A (en) Method, device, electronic equipment and medium for acquiring target position

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination