CN116596741A - Point cloud display diagram generation method and device, electronic equipment and storage medium


Info

Publication number: CN116596741A
Application number: CN202310376732.XA
Authority: CN (China)
Prior art keywords: space, image, area, panoramic, point cloud
Other languages: Chinese (zh)
Other versions: CN116596741B (granted publication)
Inventor: name withheld at the inventor's request
Current and original assignee: Beijing Chengshi Wanglin Information Technology Co., Ltd.
Application filed by Beijing Chengshi Wanglin Information Technology Co., Ltd.; priority to CN202310376732.XA
Legal status: granted; active


Classifications

    • G06T3/06 Topological mapping of higher dimensional structures onto lower dimensional surfaces
    • G06N3/0455 Auto-encoder networks; Encoder-decoder networks
    • G06N3/0464 Convolutional networks [CNN, ConvNet]
    • G06T3/4038 Image mosaicing, e.g. composing plane images from plane sub-images
    • G06T7/337 Image registration using feature-based methods involving reference images or patches
    • G06T7/593 Depth or shape recovery from multiple stereo images
    • G06V10/26 Segmentation of patterns in the image field; cutting or merging of image elements to establish the pattern region
    • G06V20/70 Labelling scene content, e.g. deriving syntactic or semantic representations
    • G06T2207/10028 Range image; depth image; 3D point clouds
    • G06T2207/20084 Artificial neural networks [ANN]


Abstract

The application provides a point cloud display diagram generation method and device, an electronic device and a storage medium, wherein the method comprises the following steps: generating a first planar point cloud display diagram according to a first panorama obtained by panoramic shooting of a first space of a target space; in the case that a second panorama of a second space is acquired based on panoramic shooting, generating a second planar point cloud display diagram according to the second panorama, wherein the second space is a space in the target space that is determined based on the spatial structure and is associated with the first space; and splicing the second planar point cloud display diagram with the first planar point cloud display diagram based on image registration to obtain a target point cloud display diagram. The application can splice planar point cloud display diagrams based on actual demands and acquire the demanded point cloud display diagram, avoiding resource waste; by calculating on the basis of the panoramic segmentation image and the depth image, the accuracy of the three-dimensional coordinates can be improved, the display effect of the planar point cloud display diagram is optimized, the visual experience of the user is improved, and costs can be saved.

Description

Point cloud display diagram generation method and device, electronic equipment and storage medium
Technical Field
The present application relates to the field of computer data processing technologies, and in particular, to a method and apparatus for generating a point cloud display diagram, an electronic device, and a storage medium.
Background
When indoor visual positioning is performed, a point cloud display diagram is generally adopted as an aid so that a user can conveniently check whether the current positioning is accurate. Currently, the auxiliary point cloud display diagram is usually a 3D point cloud display diagram, and the display form is monotonous.
In existing schemes, a point cloud image in 3D form is mainly generated in two ways: one is to visualize the sparse three-dimensional points generated during visual positioning; the other is to directly visualize the point cloud acquired by a depth device. The sparse three-dimensional points generated during visual positioning are few and can hardly represent the complete indoor room structure, making it difficult to judge whether the positioning information of the current point position is correct; the method of visualizing the point cloud acquired by a depth device depends on professional equipment, and the processing cost is high.
In the prior art, the point cloud display diagram that is acquired is generally the whole point cloud display diagram corresponding to a certain property listing; the point cloud display diagram of a specific space cannot be acquired on demand, which easily causes resource waste.
Disclosure of Invention
In view of the foregoing, embodiments of the present application provide a point cloud display diagram generating method, apparatus, electronic device, and storage medium that overcome or at least partially solve the foregoing problems.
In a first aspect, an embodiment of the present application provides a method for generating a point cloud display diagram, including:
generating a first plane point cloud display diagram according to a first panoramic image obtained by panoramic shooting of a first space of a target space;
generating a second planar point cloud display diagram according to a second panoramic image in the case of acquiring the second panoramic image of a second space based on panoramic shooting, wherein the second space is a space which is determined based on a space structure and is associated with the first space in the target space; the planar point cloud display diagram corresponding to the first space and the second space is an image generated by projecting partial pixel points carrying color characteristic values based on three-dimensional coordinates corresponding to partial pixel points in a corresponding panoramic image, the three-dimensional coordinates corresponding to the pixel points are determined based on a depth image and a panoramic segmentation image, and the depth image and the panoramic segmentation image are generated based on the panoramic image;
and splicing the second planar point cloud display diagram with the first planar point cloud display diagram based on image registration to obtain a target point cloud display diagram.
In a second aspect, an embodiment of the present application provides a point cloud display diagram generating apparatus, including:
the first generation module is used for generating a first plane point cloud display diagram according to a first panoramic image obtained by panoramic shooting of a first space of a target space;
the second generation module is used for generating a second planar point cloud display diagram according to a second panoramic image when the second panoramic image of a second space is acquired based on panoramic shooting, wherein the second space is a space which is determined based on a space structure and is associated with the first space in the target space; the planar point cloud display diagram corresponding to the first space and the second space is an image generated by projecting partial pixel points carrying color characteristic values based on three-dimensional coordinates corresponding to partial pixel points in a corresponding panoramic image, the three-dimensional coordinates corresponding to the pixel points are determined based on a depth image and a panoramic segmentation image, and the depth image and the panoramic segmentation image are generated based on the panoramic image;
and the splicing acquisition module is used for splicing the second planar point cloud display diagram with the first planar point cloud display diagram based on image registration to acquire a target point cloud display diagram.
In a third aspect, an embodiment of the present application provides an electronic device, including a processor, a memory, and a computer program stored on the memory and executable on the processor, where the computer program when executed by the processor implements the steps of the point cloud display diagram generating method according to the first aspect.
In a fourth aspect, an embodiment of the present application provides a computer readable storage medium, on which a computer program is stored, the computer program implementing the steps of the point cloud display diagram generating method according to the first aspect, when the computer program is executed by a processor.
According to the technical scheme, in the case that a first planar point cloud display diagram corresponding to a first space and a second planar point cloud display diagram corresponding to a second space associated with the first space in spatial structure are generated, the second planar point cloud display diagram is spliced with the first planar point cloud display diagram based on image registration to obtain a target point cloud display diagram. Planar point cloud display diagrams corresponding to two spatially associated spaces can thus be generated based on actual requirements and spliced, so that the required point cloud display diagram is obtained and resource waste is avoided. By calculating the three-dimensional coordinates of the pixel points based on the panoramic segmentation image and the depth image, the calculation accuracy of the three-dimensional coordinates of the pixel points at the edge of the scene can be improved, a planar point cloud display diagram of better quality can be obtained, the accuracy of positioning judgment is improved, and the display effect of the planar point cloud display diagram is optimized.
The embodiment of the application does not require professional equipment, reducing the dependence on hardware and saving cost while ensuring the display effect of the point cloud picture; by acquiring the planar point cloud display diagram, the point cloud effect is presented in a new image display form, improving the visual experience of the user.
Drawings
Fig. 1 shows the first schematic diagram of the point cloud display diagram generation method provided by an embodiment of the present application;
Fig. 2 shows the second schematic diagram of the point cloud display diagram generation method provided by an embodiment of the present application;
Fig. 3 shows a schematic diagram of a method for splicing two planar point cloud display diagrams based on image registration according to an embodiment of the present application;
Fig. 4 shows a schematic diagram of a method for generating a planar point cloud display diagram based on a panorama according to an embodiment of the present application;
Fig. 5 shows a specific example of calculating the three-dimensional coordinates of a wall-surface pixel point according to an embodiment of the present application;
Fig. 6 shows a specific schematic diagram of a planar point cloud display diagram provided by an embodiment of the present application;
Fig. 7 shows a schematic structural diagram of a point cloud display diagram generating device according to an embodiment of the present application;
Fig. 8 shows a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The following description of the embodiments of the present application will be made clearly and fully with reference to the accompanying drawings, in which it is evident that the embodiments described are some, but not all embodiments of the application. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application.
It should be appreciated that reference throughout this specification to "one embodiment" or "an embodiment" means that a particular feature, structure or characteristic described in connection with the embodiment is included in at least one embodiment of the present application. Thus, the appearances of the phrases "in one embodiment" or "in an embodiment" in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. In the present application, "a plurality of" means two or more.
In various embodiments of the present application, it should be understood that the sequence numbers of the following processes do not mean the order of execution, and the order of execution of the processes should be determined by the functions and internal logic, and should not constitute any limitation on the implementation process of the embodiments of the present application.
The embodiment of the application provides a point cloud display diagram generation method, which is shown in fig. 1, and can comprise the following steps:
and step 1, generating a first plane point cloud display diagram according to a first panoramic image obtained by panoramic shooting of a first space of a target space.
The target space in this embodiment is a real physical space, and the physical space may refer to a building such as a house, a mall, an office building, a gym, or the like, for example, the physical space may be an office building, a commercial building, a residential building, or the like. The first space may be a building of a single spatial structure in a physical space, such as a room in a house.
After panoramic shooting is carried out on the first space of the target space to obtain the first panorama corresponding to the first space, a first planar point cloud display diagram corresponding to the first space is generated based on the first panorama. The panorama used for generating the planar point cloud display diagram in this embodiment is a panorama that has undergone vertical correction processing; the panorama shown in a Virtual Reality (VR) scene is a VR panorama.
Step 2: in the case that a second panorama of a second space is acquired based on panoramic shooting, generating a second planar point cloud display diagram according to the second panorama, wherein the second space is a space in the target space that is determined based on the spatial structure and is associated with the first space; the planar point cloud display diagrams corresponding to the first space and the second space are images generated by projecting partial pixel points carrying color characteristic values based on the three-dimensional coordinates corresponding to those pixel points in the corresponding panorama, the three-dimensional coordinates corresponding to the pixel points are determined based on a depth image and a panoramic segmentation image, and the depth image and the panoramic segmentation image are generated based on the panorama.
In the case of acquiring a second panorama corresponding to a second space of the target space based on panoramic shooting, a second planar point cloud display diagram may be generated based on the second panorama, the second space being a space spatially structurally associated with the first space in the target space; and the second space and the first space may be regarded as subspaces of the target space.
When the planar point cloud display diagrams corresponding to the first space and the second space are generated, the adopted processing modes are the same. The planar point cloud display diagram corresponding to the first space is an image generated by projecting partial pixel points carrying color characteristic values based on three-dimensional coordinates corresponding to partial pixel points in the first panoramic image; correspondingly, the planar point cloud display diagram corresponding to the second space is an image generated by projecting part of pixel points carrying color characteristic values based on three-dimensional coordinates corresponding to part of pixel points in the second panoramic image. And the three-dimensional coordinates corresponding to the pixel points are determined based on the depth image and the panoramic segmentation image, the depth image and the panoramic segmentation image are generated based on the panoramic image, and the three-dimensional coordinates of the pixel points are determined by adopting the depth image and the panoramic segmentation image, so that the problem of larger deviation in scene edge calculation based on the depth image can be avoided to a certain extent compared with the situation that the three-dimensional coordinates of the pixel points are obtained based on the depth image only.
The color characteristic value carried by a pixel point is the pixel value corresponding to that pixel point in the panorama; the pixel value is determined by the pixel's R (red), G (green) and B (blue) components, each of which takes a value between 0 and 255. The process of projecting the pixel points carrying color characteristic values can be understood as determining the plane coordinates corresponding to the pixel points based on their three-dimensional coordinates, so as to obtain the planar point cloud display diagram through plane projection.
Because the three-dimensional coordinates of the pixel points are calculated based on the panoramic segmentation image and the depth image, the calculation accuracy of the three-dimensional coordinates of the pixel points at the edge of the scene can be improved; after plane projection, a planar point cloud display diagram of better quality can then be obtained, optimizing the display effect of the planar point cloud display diagram.
Step 3: splicing the second planar point cloud display diagram with the first planar point cloud display diagram based on image registration to obtain a target point cloud display diagram.
Because the second space is related to the first space in space structure, under the condition of acquiring the first planar point cloud display diagram corresponding to the first space and the second planar point cloud display diagram corresponding to the second space, the second planar point cloud display diagram and the first planar point cloud display diagram can be spliced based on image registration, and the target point cloud display diagram is acquired through splicing processing.
When the second planar point cloud display diagram and the first planar point cloud display diagram are spliced based on image registration, image registration may be performed on the second panorama and the first panorama, and the two display diagrams may then be spliced based on the registration result to obtain the target point cloud display diagram.
According to the embodiment of the application, in the case that the first planar point cloud display diagram corresponding to the first space and the second planar point cloud display diagram corresponding to the second space associated with the first space in spatial structure are generated, the second planar point cloud display diagram is spliced with the first planar point cloud display diagram based on image registration to obtain the target point cloud display diagram. The planar point cloud display diagrams corresponding to the two spatially associated spaces can be generated based on actual requirements and spliced to obtain the required point cloud display diagram, avoiding resource waste. By calculating the three-dimensional coordinates of the pixel points based on the panoramic segmentation image and the depth image, the calculation accuracy of the three-dimensional coordinates of the pixel points at the edge of the scene can be improved, a planar point cloud display diagram of better quality can be obtained, the accuracy of positioning judgment is improved, and the display effect of the planar point cloud display diagram is optimized.
The embodiment of the application does not require professional equipment, reducing the dependence on hardware and saving cost while ensuring the display effect of the point cloud picture; by acquiring the planar point cloud display diagram, the point cloud effect is presented in a new image display form, improving the visual experience of the user.
As an alternative embodiment, referring to fig. 2, after the second plane point cloud display diagram is spliced with the first plane point cloud display diagram to obtain the target point cloud display diagram, the method further includes:
and step 4, judging whether other spaces which do not participate in splicing exist in the target space, if so, executing step 5, and if not, ending the processing.
After the second plane point cloud display diagram and the first plane point cloud display diagram are spliced to obtain the target point cloud display diagram, whether other spaces which do not participate in the splicing exist in the target space can be judged. If not, ending the flow; if so, step 5 is performed.
Step 5: merging the first space and the second space as the new first space, taking the target point cloud display diagram as the first planar point cloud display diagram, and returning to step 2.
In the case that it is determined that the target space contains other spaces whose planar point cloud display diagrams have not participated in the splicing, the first space and the second space whose planar point cloud display diagrams have been spliced are combined as the first space, the target point cloud display diagram corresponding to them is taken as the first planar point cloud display diagram, and the flow returns to step 2 to continue the splicing of planar point cloud display diagrams.
After returning to step 2, another space that is spatially associated with the updated first space (formed by combining the original first space and the original second space) and that has not participated in the splicing may be determined and taken as the updated second space. If the updated second space already has a corresponding planar point cloud display diagram, the planar point cloud display diagram corresponding to the updated first space (the target point cloud display diagram of the original first space and the original second space) and the planar point cloud display diagram corresponding to the updated second space can be spliced directly based on image registration. If the updated second space does not yet have a corresponding planar point cloud display diagram, the corresponding planar point cloud display diagram can first be generated based on the panorama of the updated second space, and the splicing is then performed.
After each splicing of planar point cloud display diagrams is completed, whether there is still a space in the target space that has not participated in the splicing can be judged; if so, the first space continues to be updated until the planar point cloud display diagrams corresponding to all spaces of the target space have been obtained and the splicing of the planar point cloud display diagram is completed.
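The iterative splicing flow of steps 2, 4 and 5 can be summarized as a loop. The sketch below is illustrative only: generate_planar_point_cloud, find_adjacent_unstitched_space and register_and_stitch are hypothetical placeholders for the operations described above, not functions defined by this application.

```python
# Illustrative sketch of the iterative splicing loop (steps 2, 4 and 5).
# The three helper functions are hypothetical placeholders, not a defined API.
def build_target_point_cloud(spaces, panoramas):
    """spaces: space IDs of the target space; panoramas: dict space -> panorama."""
    first_space = {spaces[0]}                           # current merged first space
    target_map = generate_planar_point_cloud(panoramas[spaces[0]])
    remaining = set(spaces[1:])
    while remaining:                                    # step 4: unspliced space left?
        # step 2: a space spatially associated with the merged first space
        second = find_adjacent_unstitched_space(first_space, remaining)
        second_map = generate_planar_point_cloud(panoramas[second])
        # splice based on image registration of the corresponding panoramas
        target_map = register_and_stitch(target_map, second_map, panoramas, second)
        first_space.add(second)                         # step 5: merge and repeat
        remaining.remove(second)
    return target_map                                   # target point cloud display diagram
```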
According to this embodiment, by detecting whether the target space has other spaces that have not participated in the splicing of planar point cloud display diagrams and, if so, continuing the splicing, the overall planar point cloud display diagram of the target space can still be acquired even when the planar point cloud display diagram of a specific space was acquired on demand, so that a user can conveniently view the specific space from different angles while also viewing and comparing the other spaces.
As an optional embodiment, after taking the target point cloud display diagram as the first plane point cloud display diagram and returning to step 2, the method further includes:
and under the condition that the planar point cloud display diagrams corresponding to all the spaces of the target space are determined to be spliced, acquiring the target point cloud display diagrams corresponding to the target space.
After merging the original first space and the original second space as the first space and taking their target point cloud display diagram as the first planar point cloud display diagram, another space that is associated with the updated first space (formed by merging the original first space and the original second space) in spatial structure and has not participated in the splicing can be determined and taken as the updated second space. The splicing of planar point cloud display diagrams then continues so as to update the target point cloud display diagram.
In the case that it is determined that all planar point cloud display diagrams corresponding to the target space have participated in the splicing and the splicing is completed, the target point cloud display diagram corresponding to the target space is acquired, i.e., the overall planar point cloud display diagram corresponding to the target space is obtained.
The process of splicing planar point cloud display diagrams based on image registration is described below. The associated space determined based on the spatial structure is an adjacent space. Referring to fig. 3, splicing the second planar point cloud display diagram with the first planar point cloud display diagram based on image registration to obtain a target point cloud display diagram comprises the following steps:
Step 31: performing image registration on the second panorama and the first panorama, and acquiring the relative pose of the second panorama and the first panorama.
Step 32: splicing, according to the relative pose, the second planar point cloud display diagram with the first planar point cloud display diagram to obtain the target point cloud display diagram.
In this embodiment, the space associated based on the spatial structure in the target space is an adjacent space, that is, the first space and the second space are adjacent spaces in the target space. For example, the target space is a target house, the first space is a living room space, and the second space is a bedroom space adjacent to the living room space.
It should be noted that the first space and the second space that are associated based on the spatial structure may also be associated based on the panoramic shooting order, that is, the two spaces are consecutive in the shooting order; alternatively, they may not be associated based on the panoramic shooting order, that is, the two spaces are not consecutive in the shooting order.
When the second planar point cloud display diagram and the first planar point cloud display diagram are spliced based on image registration, image registration can be performed on the second panorama and the first panorama, and the relative pose of the two panoramas is acquired from this registration. After the relative pose of the two panoramas is determined, the second planar point cloud display diagram is spliced with the first planar point cloud display diagram based on that relative pose to obtain the target point cloud display diagram corresponding to the first space and the second space.
Splicing the second planar point cloud display diagram with the first planar point cloud display diagram can be understood as placing the two diagrams together in a combined layout.
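The application does not prescribe a specific registration algorithm for acquiring this relative pose. As one conventional possibility (an assumption, not the method claimed here), the relative pose could be estimated with feature matching and essential-matrix decomposition, for example with OpenCV on perspective views rendered from the two panoramas (an equirectangular panorama would first have to be reprojected to a pinhole view for the camera matrix K to apply):

```python
import cv2
import numpy as np

# Hedged sketch: estimate the relative pose between two views via ORB feature
# matching and essential-matrix recovery. This is one conventional choice for
# image registration, not necessarily the registration used by the application.
def relative_pose(img1, img2, K):
    orb = cv2.ORB_create(4000)
    kp1, des1 = orb.detectAndCompute(img1, None)
    kp2, des2 = orb.detectAndCompute(img2, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)[:500]
    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])
    E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC)
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
    return R, t   # rotation and unit-scale translation of view 2 w.r.t. view 1
```

The horizontal components of R and t (rotation about the vertical axis and translation in the ground plane) would then determine where the second planar point cloud display diagram is placed relative to the first.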
In the above embodiment, the relative pose is acquired by registering the panoramas corresponding to the two spaces, and the two corresponding planar point cloud display diagrams are spliced based on that relative pose; the panorama thus serves as the registration object, and the target point cloud display diagram is acquired by splicing the planar point cloud display diagrams based on panorama registration.
The foregoing mainly describes the splicing of planar point cloud display diagrams; the following describes the process of generating a planar point cloud display diagram based on a panorama. Since the process of generating the first planar point cloud display diagram based on the first panorama is similar to the process of generating the second planar point cloud display diagram based on the second panorama, the generation of the first planar point cloud display diagram based on the first panorama is taken as an example below.
As an alternative embodiment, referring to fig. 4, generating a first planar point cloud display diagram according to a first panorama acquired by panoramic shooting of a first space of a target space includes the following steps:
Step 11: acquiring a depth image and a panoramic segmentation image corresponding to the first panorama based on the first panorama.
After the first panorama corresponding to the first space of the target space is acquired, the depth image and the panorama segmentation image corresponding to the first panorama can be obtained by processing the first panorama. Depth images, also known as range images, refer to images having as pixel values the distance (depth) from an image collector to points in a scene, which directly reflect the geometry of the visible surface of the scene; the panoramic segmentation image is an image determined by dividing the panoramic image into areas on the basis of the panoramic image and setting category labels for the pixel points of the areas, and the category labels corresponding to the pixel points in the panoramic image can be obtained on the basis of the panoramic segmentation image.
Step 12: according to the depth image and the panoramic segmentation image corresponding to the first panorama, obtaining the three-dimensional coordinates corresponding to each target pixel point in a target set, wherein the target set comprises at least part of the pixel points in the first panorama.
After the depth image and the panoramic segmentation image corresponding to the first panorama are obtained, the three-dimensional coordinates corresponding to at least part of the pixel points in the first panorama can be obtained from them: the depth image is combined with the panoramic segmentation image, and the three-dimensional coordinates corresponding to each target pixel point in the target set of the first panorama are calculated.
The target set is a pixel point set and comprises at least part of pixel points in the first panoramic image, and based on the cooperation of the depth image and the panoramic segmentation image, three-dimensional coordinates corresponding to all the target pixel points in the target set are calculated, so that the three-dimensional coordinates corresponding to at least part of pixel points in the first panoramic image are obtained.
Step 13: based on the three-dimensional coordinates of part of the target pixel points in the target set, projecting those target pixel points carrying color characteristic values to obtain a planar point cloud display diagram corresponding to the first space, wherein those target pixel points are the pixel points meeting the projection requirement.
After the three-dimensional coordinates corresponding to each target pixel point in the target set are obtained, partial target pixel points meeting the projection requirement can be determined in the target set, and projection is carried out on partial target pixel points carrying color characteristic values based on the determined three-dimensional coordinates of the partial target pixel points so as to obtain a planar point cloud display diagram corresponding to the first space through planar projection.
According to this embodiment, the three-dimensional coordinates of the pixel points are calculated by combining the depth image with the panoramic segmentation image, so that the calculation accuracy of the three-dimensional coordinates of the pixel points at the edge of the scene can be improved; after plane projection, a planar point cloud display diagram of better quality can be obtained, the accuracy of positioning judgment is improved, and the display effect of the planar point cloud display diagram is optimized.
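As an illustration of the plane projection itself, the minimal sketch below (with an assumed, arbitrarily chosen resolution px_per_meter) drops the height component of each three-dimensional coordinate and paints the carried color characteristic value at the resulting plane coordinate:

```python
import numpy as np

# Minimal sketch: orthographic top-down projection of colored 3D points into
# a planar point cloud display image. Resolution and bounds are illustrative.
def project_to_plan_view(points_xyz, colors_rgb, px_per_meter=50):
    x, z = points_xyz[:, 0], points_xyz[:, 2]        # drop the height (y) component
    u = ((x - x.min()) * px_per_meter).astype(int)   # plane coordinates in pixels
    v = ((z - z.min()) * px_per_meter).astype(int)
    img = np.zeros((v.max() + 1, u.max() + 1, 3), dtype=np.uint8)
    img[v, u] = colors_rgb                           # later points overwrite earlier ones
    return img
```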
Wherein, for step 11, when acquiring the depth image and the panorama segmentation image corresponding to the first panorama based on the first panorama, the method comprises:
predicting the depth values of the pixel points in the first panorama based on a depth map model, and acquiring the depth image;
and performing category prediction on the pixel points in the first panorama based on a semantic segmentation model, and obtaining the panoramic segmentation image carrying the category labels corresponding to the pixel points in the first panorama.
When determining the depth image corresponding to the first space based on the first panoramic image, the first panoramic image can be processed by adopting a depth image model to obtain the depth value of each pixel point in the first panoramic image, and further obtain the depth image.
The depth map model is obtained by training on open-source images and real depth map data, and may adopt a common encoder-decoder network, such as a UNet-like structure. By inputting the first panorama into the depth map model, the depth value of each pixel point in the first panorama can be predicted, yielding a depth image that represents the depth value corresponding to each pixel point in the first panorama.
When determining the panoramic segmentation image corresponding to the first space based on the first panorama, the semantic segmentation model may be used to process the first panorama so as to obtain the class label corresponding to each pixel point, thereby obtaining the panoramic segmentation image. Because the panoramic segmentation image includes the class labels of all pixel points in the first panorama, the pixel points can be classified based on those labels, which realizes a segmentation of the first panorama by pixel class. This is not an actual cutting of the image; it can be regarded as a regional division of the first panorama based on the class labels of the pixel points, where pixel points in the same region share the same class label.
The segmentation labels of the semantic segmentation model mainly comprise ceiling, wall surface and floor, and also comprise other object categories, such as indoor furniture (tables, chairs, beds and the like). The semantic segmentation model is obtained by training on open-source images and real pixel-level class labels, and its structure may likewise adopt a common encoder-decoder network. By inputting the first panorama into the semantic segmentation model, the class label of each pixel point in the first panorama can be predicted, yielding the panoramic segmentation image carrying the class labels corresponding to the pixel points of the first panorama.
According to this embodiment, the first panorama can be input into the depth map model and the semantic segmentation model respectively: the depth values of the pixel points are predicted by the depth map model to obtain the depth image, and category prediction is performed on the pixel points by the semantic segmentation model to obtain the panoramic segmentation image carrying the class labels of the pixel points. Acquiring the depth image and the panoramic segmentation image from well-trained models ensures quality while improving processing efficiency.
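A minimal sketch of this two-model inference is given below, assuming PyTorch; depth_net and seg_net stand in for the trained encoder-decoder networks mentioned above and are placeholders, not a published API:

```python
import torch

# Hedged sketch: run the trained depth map model and semantic segmentation
# model on a panorama tensor of shape (1, 3, H, W). Both networks are
# hypothetical stand-ins for the encoder-decoder models described in the text.
def infer_depth_and_segmentation(panorama, depth_net, seg_net):
    with torch.no_grad():
        depth = depth_net(panorama)        # (1, 1, H, W) predicted depth values
        logits = seg_net(panorama)         # (1, C, H, W) per-class scores
        labels = logits.argmax(dim=1)      # (1, H, W) per-pixel class labels
    return depth.squeeze(), labels.squeeze()   # depth image, segmentation image
```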
In an optional embodiment, step 12 obtains three-dimensional coordinates corresponding to each target pixel point in the target set according to the depth image and the panorama segmentation image corresponding to the first panorama, including:
determining all pixel points corresponding to a first class label and all pixel points corresponding to a second class label in the first panorama according to the panorama segmentation image, wherein the panorama segmentation image comprises class labels respectively corresponding to all pixel points in the first panorama, the pixel points corresponding to the first class label are located in a first area, and the pixel points corresponding to the second class label are located in a second area;
determining a first coordinate set comprising three-dimensional coordinates corresponding to the pixel points of the first area and a second coordinate set comprising three-dimensional coordinates corresponding to the pixel points of the second area based on the depth image;
according to at least one of the first coordinate set and the second coordinate set, a third coordinate set corresponding to at least part of pixel points in a third area of the first panoramic image is obtained, and a fourth coordinate set corresponding to at least part of pixel points in a fourth area of the first panoramic image is obtained;
The pixel points corresponding to the first coordinate set, the second coordinate set, the third coordinate set and the fourth coordinate set are all target pixel points;
the first area, the second area and the third area are respectively the top area, the ground area and the wall area of the first space on the first panorama, and the fourth area is the area of the first panorama other than the first area, the second area and the third area.
After the depth image and the panoramic segmented image corresponding to the first panoramic image are generated, three-dimensional coordinates corresponding to each target pixel point in the target set can be obtained based on cooperation of the depth image and the panoramic segmented image.
Since the panoramic segmentation image includes the class label corresponding to each pixel point in the first panorama, all pixel points corresponding to the first class label and all pixel points corresponding to the second class label can be determined based on the panoramic segmentation image. The first class label corresponds to the first region of the first panorama and the second class label corresponds to the second region, i.e. the pixel points corresponding to the first class label are located in the first region and the pixel points corresponding to the second class label are located in the second region. The first region is the top region, such as the ceiling region, of the first space on the first panorama; the second region is the ground region of the first space on the first panorama.
After all the pixel points which are positioned in the first area and correspond to the first class labels in the first panoramic image are acquired, three-dimensional coordinates corresponding to the pixel points in the first area respectively can be determined based on the depth image, and a first coordinate set is acquired based on the determined three-dimensional coordinates; after all the pixel points in the second area and corresponding to the second class labels in the first panoramic image are acquired, three-dimensional coordinates corresponding to the pixel points in the second area respectively can be determined based on the depth image, and a second coordinate set is acquired based on the determined three-dimensional coordinates.
In the case of determining the first coordinate set and the second coordinate set, a third coordinate set corresponding to at least a part of the pixels located in the third region of the first panorama and a fourth coordinate set corresponding to at least a part of the pixels located in the fourth region of the first panorama may be obtained according to at least one of the first coordinate set and the second coordinate set.
The third area is a wall area corresponding to the first space on the first panoramic image, and the fourth area is an area which is different from the first area, the second area and the third area in the first panoramic image. The fourth region of the first panorama may be regarded as a region remaining after the first, second, and third regions are removed from the first panorama, or may be regarded as a region surrounded by the first, second, and third regions.
The third coordinate set comprises three-dimensional coordinates corresponding to at least part of pixel points of the third region respectively, and the three-dimensional coordinates corresponding to any pixel point in the third region need to be determined based on the three-dimensional coordinates of corresponding pixel points in the first coordinate set or the second coordinate set; the fourth coordinate set includes three-dimensional coordinates corresponding to at least part of the pixel points in the fourth area, and the three-dimensional coordinates corresponding to any pixel point in the fourth area need to be determined based on the three-dimensional coordinates of the corresponding pixel point in the first coordinate set or the second coordinate set.
The pixel points corresponding to the first coordinate set, the second coordinate set, the third coordinate set and the fourth coordinate set are all target pixel points, and the target set comprises at least part of pixel points in the first panoramic image because the third coordinate set comprises three-dimensional coordinates corresponding to at least part of pixel points in the third area and the fourth coordinate set comprises three-dimensional coordinates corresponding to at least part of pixel points in the fourth area.
According to this embodiment, all pixel points corresponding to the first class label and all pixel points corresponding to the second class label can be obtained from the panoramic segmentation image, and the obtained pixel points are processed based on the depth image to determine the three-dimensional coordinates of the pixel points in the first area and in the second area, thereby determining the first coordinate set and the second coordinate set through the cooperation of the panoramic segmentation image and the depth image. After the first coordinate set and the second coordinate set are determined, the third coordinate set of the third area and the fourth coordinate set of the fourth area are obtained based on at least one of the first coordinate set and the second coordinate set; computing the three-dimensional coordinates of the pixel points of the remaining areas from the existing coordinate sets simplifies the processing flow and improves processing efficiency.
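Extracting the pixel points of the four areas from the panoramic segmentation image amounts to simple mask operations on the label map. In the sketch below the numeric label IDs are illustrative assumptions; only the ceiling/floor/wall/other partition mirrors the text:

```python
import numpy as np

# Illustrative sketch: derive the four area masks from the segmentation labels.
# The numeric label IDs are assumptions, not values defined by the application.
CEILING, FLOOR, WALL = 0, 1, 2

def area_masks(seg_labels):
    first_area  = seg_labels == CEILING                      # top region
    second_area = seg_labels == FLOOR                        # ground region
    third_area  = seg_labels == WALL                         # wall region
    fourth_area = ~(first_area | second_area | third_area)   # everything else
    return first_area, second_area, third_area, fourth_area
```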
The process of determining the first set of coordinates and the second set of coordinates is described below. When determining, based on the depth image, a first coordinate set including three-dimensional coordinates corresponding to the pixel points of the first region and a second coordinate set including three-dimensional coordinates corresponding to the pixel points of the second region, the method includes:
acquiring a first height value corresponding to each pixel point in the first area and a second height value corresponding to each pixel point in the second area based on the depth image, wherein the height value of a pixel point is the vertical component, in the height direction, of the reference three-dimensional coordinate determined for that pixel point from the depth image;
determining a first average height value according to the first height values respectively corresponding to the pixel points in the first area;
determining a second average height value according to second height values respectively corresponding to the pixel points in the second area;
according to the first average height value, panoramic pixel coordinates corresponding to each pixel point in the first area and a conversion formula, determining three-dimensional coordinates corresponding to each pixel point in the first area, and determining the first coordinate set based on the three-dimensional coordinates corresponding to each pixel point in the first area;
Determining three-dimensional coordinates corresponding to each pixel point in the second area according to the second average height value, the panoramic pixel coordinates corresponding to each pixel point in the second area and a conversion formula, and determining the second coordinate set based on the three-dimensional coordinates corresponding to each pixel point in the second area;
the conversion formula is used for converting panoramic pixel coordinates into three-dimensional coordinates.
The reference three-dimensional coordinates of the pixel points may be determined based on the depth image; strictly speaking, the reference three-dimensional coordinates obtained directly from the depth image do not lie exactly on a single 3D plane. Since the first region corresponds to one plane in three-dimensional space and the second region corresponds to another, the depth image is only used for determining the height of each region.
After determining the pixel points corresponding to the first class labels and located in the first area, a reference three-dimensional coordinate corresponding to each pixel point in the first area may be acquired based on the depth image, and calculation may be performed based on the reference three-dimensional coordinate to determine a three-dimensional coordinate (final three-dimensional coordinate) corresponding to the pixel point.
For each pixel point in the first area, after the reference three-dimensional coordinate corresponding to the current pixel point is acquired, a vertical component corresponding to the reference three-dimensional coordinate in the height direction may be acquired, and the vertical component corresponding to the reference three-dimensional coordinate in the height direction is determined as a first height value corresponding to the pixel point. Since the first region is a top region of the first space corresponding to the first panorama, a vertical component corresponding to the reference three-dimensional coordinate in the height direction is a component in a direction parallel to the height of the wall surface.
After the first height value corresponding to each pixel point in the first area is obtained, average value calculation is performed based on the first height value corresponding to each pixel point in the first area, a first average height value is obtained, and the first average height value is used as the height h1 of the first area.
Then, for each pixel point in the first area, the real coordinate position (three-dimensional coordinate) of the pixel point in three-dimensional space is determined based on the first average height value h1, the panoramic pixel coordinate corresponding to the pixel point, and a conversion formula, where the conversion formula converts a panoramic pixel point into a 3D coordinate point. After the three-dimensional coordinates (final three-dimensional coordinates) corresponding to the pixel points in the first area are obtained, they are aggregated to obtain the first coordinate set corresponding to the first area.
After determining the pixel points corresponding to the second class labels and located in the second area, a reference three-dimensional coordinate corresponding to each pixel point in the second area may be acquired based on the depth image, and calculation may be performed based on the reference three-dimensional coordinate to determine a three-dimensional coordinate (final three-dimensional coordinate) corresponding to the pixel point.
For each pixel point in the second area, after the reference three-dimensional coordinate corresponding to the current pixel point is acquired, a vertical component corresponding to the reference three-dimensional coordinate in the height direction may be acquired, and the vertical component corresponding to the reference three-dimensional coordinate in the height direction is determined as a second height value corresponding to the pixel point. Since the second area is the ground area corresponding to the first space on the first panorama, the vertical component corresponding to the reference three-dimensional coordinate in the height direction is the component in the direction parallel to the height of the wall surface.
After the second height value corresponding to each pixel point in the second area is obtained, average value calculation is performed based on the second height value corresponding to each pixel point in the second area, a second average height value is obtained, and the second average height value is used as the height h2 of the second area.
Then, for each pixel point in the second area, a real coordinate position (final three-dimensional coordinate) of the pixel point in the three-dimensional space is determined based on the second average height value h2, the panoramic pixel coordinate corresponding to the pixel point, and a conversion formula. After the three-dimensional coordinates (final three-dimensional coordinates) corresponding to each pixel point in the second area are obtained, the three-dimensional coordinates corresponding to each pixel point in the second area are aggregated, and a second coordinate set corresponding to the second area is obtained.
In the above embodiment, the first average height value and the second average height value may be determined based on the depth image, the three-dimensional coordinates may be determined based on the first average height value, the panoramic pixel coordinates and the conversion formula for each pixel point in the first region, and the three-dimensional coordinates may be determined based on the second average height value, the panoramic pixel coordinates and the conversion formula for each pixel point in the second region, so as to obtain the first coordinate set corresponding to the first region and the second coordinate set corresponding to the second region based on the depth image.
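A hedged sketch of such a conversion formula follows, using the common equirectangular convention (longitude from the column index, latitude from the row index) with the camera at the origin and the y axis as the height direction; the exact convention used by the application is not stated, so this is an assumption:

```python
import numpy as np

# Hedged sketch: map an equirectangular panorama pixel (u, v) to a 3D point,
# given that the point lies on a horizontal plane of known signed height h
# (the averaged first or second height value). Conventions are assumptions.
def pixel_to_3d_on_plane(u, v, width, height, h):
    theta = (u / width) * 2 * np.pi - np.pi     # longitude in [-pi, pi)
    phi = np.pi / 2 - (v / height) * np.pi      # latitude in [-pi/2, pi/2]
    d = np.array([np.cos(phi) * np.sin(theta),  # unit viewing ray
                  np.sin(phi),
                  np.cos(phi) * np.cos(theta)])
    t = h / d[1]                                # scale so the ray meets y = h
    return t * d                                # 3D point on the plane
```

For a ceiling pixel, h would be the positive first average height value h1; for a floor pixel, h would be the negative of the second average height value h2, since the floor lies below the camera (an assumed sign convention).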
The process of determining the third and fourth coordinate sets is described below. The obtaining, according to at least one of the first coordinate set and the second coordinate set, a third coordinate set corresponding to at least a part of pixels located in a third area of the first panorama, and obtaining a fourth coordinate set corresponding to at least a part of pixels located in a fourth area of the first panorama, includes:
for each pixel point in the third area and the fourth area, searching for a first pixel point which is associated with the current pixel point in the column direction and is located in the first area or the second area, wherein the first pixel point intersects a second pixel point corresponding to the current pixel point, and the second pixel point is in the same column as the current pixel point and intersects the first area or the second area;
under the condition that the first pixel point is found, based on the first coordinate set or the second coordinate set, acquiring a three-dimensional coordinate corresponding to the first pixel point;
according to the three-dimensional coordinates corresponding to the first pixel points, a first distance between a projection point corresponding to a virtual camera in a three-dimensional live-action space model and a first pixel point associated with the current pixel point in the column direction is obtained, wherein the three-dimensional live-action space model is a three-dimensional space model corresponding to the first space, and in the three-dimensional live-action space model, a connecting line of the projection point and the first pixel point is perpendicular to the column direction of the current pixel point;
Determining a depth value of the current pixel point based on the first distance and the panorama latitude angle corresponding to the current pixel point, and determining a three-dimensional coordinate of the current pixel point based on the depth value of the current pixel point;
determining the third coordinate set according to the three-dimensional coordinates corresponding to at least part of the pixel points in the third region, and determining the fourth coordinate set according to the three-dimensional coordinates corresponding to at least part of the pixel points in the fourth region;
wherein the pixel points in the third region correspond to a third category label; the pixel points in the fourth region correspond to at least one fourth category label.
For each pixel point in the third area and the fourth area, a first pixel point associated with the current pixel point in the column direction can be searched in the first area or the second area, and under the condition that the first pixel point is searched, the three-dimensional coordinate corresponding to the first pixel point is obtained based on the first coordinate set or the second coordinate set.
When searching for the first pixel point associated with the current pixel point, the second pixel point which is in the same column as the current pixel point and intersects the first area or the second area may be searched for first. The search may first look for a second pixel point in the same column as the current pixel point that intersects the first area; if none is found, it may then look for a second pixel point in the same column that intersects the second area; and if that search also fails, it is determined that no second pixel point corresponds to the current pixel point.
After the second pixel point corresponding to the current pixel point is found, if the second pixel point intersects the first area, the pixel point in the first area that intersects the second pixel point is determined as the first pixel point; if the second pixel point intersects the second area, the pixel point in the second area that intersects the second pixel point is determined as the first pixel point. The first pixel point associated with the current pixel point is thereby obtained. For the case where the first pixel point is located in the first area, its three-dimensional coordinate can be looked up directly in the first coordinate set corresponding to the first area; for the case where the first pixel point is located in the second area, its three-dimensional coordinate can be looked up directly in the second coordinate set corresponding to the second area.
After the three-dimensional coordinates corresponding to the first pixel point are obtained, a first distance between the projection point of the virtual camera on the target plane in the three-dimensional live-action space model and the first pixel point associated with the current pixel point in the column direction can be obtained from those coordinates. The target plane may be the top end face or the ground of the three-dimensional live-action space model corresponding to the first space; the projection point of the virtual camera and the first pixel point are both located on the target plane, and in the three-dimensional live-action space model the connecting line between the projection point and the first pixel point is perpendicular to the column direction (height direction) in which the current pixel point lies. The three-dimensional live-action space model is the three-dimensional space model corresponding to the first space. The virtual camera may be placed at the coordinate origin of the three-dimensional live-action space model, but is not limited thereto: it may be spaced a certain distance from the ground, the top end face and the wall surfaces, and may be arranged at any position.
After the first distance corresponding to the current pixel point is obtained, the depth value of the current pixel point is determined based on that first distance and the panorama latitude angle corresponding to the current pixel point, and the three-dimensional coordinates of the current pixel point are then determined from its depth value; in this way the three-dimensional coordinates of the current pixel point are derived by operating on the three-dimensional coordinates of the associated first pixel point. When determining the three-dimensional coordinates of a pixel point from its depth value, the calculation may be performed based on the depth value, the panoramic pixel coordinates of the pixel point, and the conversion formula.
When searching for the first pixel point associated with the current pixel point in the column direction, the first area may be searched first; if no associated first pixel point is found in the first area, the search continues in the second area; and if none is found in the second area either, the calculation of the three-dimensional coordinate corresponding to the current pixel point is abandoned. Of course, it is also possible to search the second area first and then the first area, which is not particularly limited in this embodiment.
Because other objects may occlude the boundary between the ground and the wall surface, and likewise the boundary between the top and the wall surface, an associated first pixel point may not be found in the second area or the first area for some pixel points in the third area. Because objects may be placed on the ground or suspended from the top, an associated first pixel point may likewise not be found in the second area or the first area for some pixel points in the fourth area.
The third area is the wall surface area corresponding to the first space on the first panoramic image, and the pixel points in the third area correspond to the third category label. The pixel points in the fourth area correspond to fourth category labels, of which there is at least one: since the fourth area may contain one or more pieces of furniture, different pieces of furniture may correspond to the same category label, or each piece of furniture may correspond to its own fourth category label.
Calculating corresponding three-dimensional coordinates for each pixel point in the third area, and determining a third coordinate set after obtaining the three-dimensional coordinates corresponding to at least part of the pixel points in the third area; after calculating the corresponding three-dimensional coordinates for each pixel point in the fourth area and obtaining the three-dimensional coordinates corresponding to at least part of the pixel points in the fourth area, a fourth coordinate set may be determined.
In the above embodiment, for the pixel points in the third and fourth areas, the associated first pixel point may be searched, and the three-dimensional coordinates of the current pixel point may be calculated based on the three-dimensional coordinates of the first pixel point, so as to determine the third and fourth coordinate sets with the aid of the first and/or second coordinate sets.
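For ease of understanding, the column-direction search for the first pixel point may be sketched as follows (an illustrative sketch assuming labels is the category-label map taken from the panoramic segmentation image; the names find_first_pixel, ground_label and ceiling_label are hypothetical, and the order used here — ground area first, then top area — is only one of the permissible search orders noted above):

def find_first_pixel(labels, row, col, ground_label, ceiling_label):
    # Search the column of the current pixel for the associated first pixel
    # point: the boundary pixel where the column meets the ground area, or,
    # failing that, the ceiling area.
    h = labels.shape[0]
    for r in range(row + 1, h):           # downward: boundary with the ground
        if labels[r, col] == ground_label:
            return r, col
    for r in range(row - 1, -1, -1):      # upward: boundary with the ceiling
        if labels[r, col] == ceiling_label:
            return r, col
    return None  # no second pixel point: abandon this pixel's 3D coordinate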
As an optional embodiment, the determining the depth value of the current pixel point based on the first distance and the panorama latitude angle corresponding to the current pixel point includes:
determining the distance between the virtual camera and the current pixel point in the three-dimensional live-action space model based on the ratio of the first distance to the cosine value of the latitude angle of the panoramic image corresponding to the current pixel point;
and determining the distance between the virtual camera and the current pixel point as the depth value of the current pixel point.
The first distance is the distance between the projection point of the virtual camera on the target plane and the first pixel point associated with the current pixel point in the column direction. The panorama latitude angle a corresponding to the current pixel point is shown in fig. 5, where d represents the first distance, P represents the current pixel point, and Q represents the first pixel point associated with it. The distance between the virtual camera and the current pixel point in the three-dimensional live-action space model is determined based on the ratio of the first distance to the cosine of the panorama latitude angle corresponding to the current pixel point, and that distance is taken as the depth value of the current pixel point. Fig. 5 illustrates the case where the current pixel is a pixel in the wall surface area.
Taking fig. 5 as an example, the process of searching for the first pixel associated with the current pixel and calculating the depth value of the current pixel is described below. The current pixel point P is a pixel point in a wall area. The boundary point between the wall pixels and the ground pixels in the column of the panoramic image where P lies is found, and the 3d coordinate Xq of the ground pixel point Q at that boundary is taken, so that the distance d between the projection of the virtual camera on the ground and the ground pixel point Q can be obtained; the connecting line between that projection and the ground pixel point Q is perpendicular to the straight line corresponding to the column direction of the current pixel point P.
Based on the panorama latitude angle a of the point P and the distance d, the depth value of the current pixel point P is obtained by a trigonometric operation (depth = d / cos a), from which the 3d point coordinate Xp of the current pixel point P follows. If no boundary point with a ground pixel is found, a boundary point with a ceiling pixel is sought instead; if that is not found either, the calculation of the 3d point position of the current pixel point is abandoned.
In the above embodiment, trigonometric function operation may be performed based on the first distance and the panorama latitude angle corresponding to the current pixel point, and the depth value of the current pixel point may be determined based on the operation, so as to determine the three-dimensional coordinate based on the depth value.
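Written out, the operation amounts to depth = d / cos a, followed by back-projection along the viewing ray. A minimal sketch (the latitude/longitude back-projection and the function name point_from_boundary are illustrative assumptions; angles are in radians):

import math

def point_from_boundary(d, lon, lat):
    # Depth value: ratio of the first distance d to the cosine of the
    # panorama latitude angle of the current pixel point.
    depth = d / math.cos(lat)
    # Back-project the depth value along the viewing ray to a 3D point.
    x = depth * math.cos(lat) * math.cos(lon)
    y = depth * math.sin(lat)
    z = depth * math.cos(lat) * math.sin(lon)
    return depth, (x, y, z)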
The following describes a plane projection process, wherein the projecting, based on the three-dimensional coordinates of a part of target pixel points in the target set, the part of target pixel points carrying color feature values to obtain a plane point cloud display diagram corresponding to the first space includes:
screening out partial target pixel points meeting projection requirements from the target set based on a preset rule;
based on the three-dimensional coordinates of the partial target pixel points, projecting the partial target pixel points carrying color characteristic values to a preset plane, and obtaining a plane point cloud display diagram;
and determining the color characteristic value carried by the target pixel point based on the first panoramic image, wherein the pixel point positioned in the first area does not meet the projection requirement.
After the target set is determined, some of the target pixel points meeting the projection requirements can be screened out of the target set based on a preset rule, and the screened target pixel points carrying color characteristic values are projected onto a preset plane based on their three-dimensional coordinates, so that a planar point cloud display diagram is obtained by plane projection. Referring to fig. 6, a specific illustration of a planar point cloud display diagram corresponding to a room is shown; the colors of the different areas are not shown in fig. 6.
The virtual camera may be spaced a certain distance from the top end face, the ground and the wall surfaces of the three-dimensional live-action space model, or may be arranged at any position. Based on shooting by the virtual camera in the three-dimensional live-action space model, the first panoramic image corresponding to the first space can be obtained.
When the pixel screening is performed based on a preset rule, the pixel points corresponding to the first area can be filtered out of the target set, thereby realizing the pixel screening and obtaining the pixel points meeting the projection requirement. Since projecting the pixels of the first region onto the ground region degrades the projection effect, those pixels need to be filtered out.
When the pixel point screening is performed based on a preset rule, the target pixel point below the horizontal plane where the virtual camera is located can be screened out, so that the target pixel point meeting the projection requirement is screened out from the target set. Other strategies for screening the target pixels may, of course, be employed, and are not further described herein. When the target pixel point carrying the color characteristic value is projected to a preset plane, the target pixel point is actually projected to the ground of the three-dimensional real scene space model.
In order to simplify the projection operation, necessary target pixels can be selected from the target set, so that the problem of heavy workload of the projection operation caused by excessive pixels participating in the projection is avoided.
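As an illustrative sketch of the screening and projection steps (assuming the height component lies on axis 1, the virtual camera sits at the origin of the model, and the preset rule is the one strategy named above of keeping only target pixel points below the camera's horizontal plane):

import numpy as np

def project_to_plane(points, colors, camera_height=0.0):
    # points: (N, 3) 3D coordinates of target pixel points; colors: (N, 3)
    # color feature values taken from the first panorama.
    keep = points[:, 1] < camera_height   # preset rule: drop the top (first) area
    kept_pts, kept_cols = points[keep], colors[keep]
    # Project onto the ground plane by discarding the height component.
    plane_xy = kept_pts[:, [0, 2]]
    return plane_xy, kept_cols            # planar point cloud display data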
In the overall implementation of the point cloud display diagram generation method provided by the embodiment of the application, once the first planar point cloud display diagram corresponding to the first space and the second planar point cloud display diagram corresponding to the second space related to the first space in the spatial structure have been generated, the two are spliced based on image registration to obtain the target point cloud display diagram. The planar point cloud display diagrams corresponding to the two structurally related spaces can thus be generated on actual demand and spliced into the required point cloud display diagram, avoiding resource waste.
By calculating the three-dimensional coordinates of the pixel points based on the panoramic segmentation image and the depth image, the calculation accuracy of the three-dimensional coordinates of pixel points at the edges of the scene can be improved, a planar point cloud display diagram of better quality can be obtained, the accuracy of positioning judgment is improved, and the display effect of the planar point cloud display diagram is optimized. The embodiment of the application requires no professional equipment, reducing the dependence on hardware and saving cost while ensuring the display effect of the point cloud picture; by producing the planar point cloud display diagram, the point cloud effect is presented in a new image display form, improving the user's visual experience.
By registering the panoramas corresponding to the two spaces, the relative pose between them is obtained, and the planar point cloud display diagrams are spliced based on that relative pose. The panoramas can thus serve as the registration objects, and the target point cloud display diagram is obtained by splicing the planar point cloud display diagrams according to the result of the panorama registration.
Acquiring the depth image from a depth image model and the panoramic segmentation image from a semantic segmentation model means that the required images are obtained from well-trained models, ensuring quality while improving processing efficiency. Determining the third coordinate set and the fourth coordinate set based on the first coordinate set and/or the second coordinate set means that the three-dimensional coordinates of the pixels in the other areas are computed from the coordinate sets already available, simplifying the processing flow and improving processing efficiency.
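The splicing of the two planar point cloud display diagrams may likewise be sketched as a rigid transform in the ground plane (an illustrative sketch: the text does not fix a representation for the relative pose, so expressing it as a yaw angle plus a planar translation is an assumption):

import numpy as np

def stitch_plane_clouds(first_pts, second_pts, yaw, tx, tz):
    # Move the second diagram's points into the first diagram's frame using
    # the relative pose obtained from panorama registration, then merge.
    c, s = np.cos(yaw), np.sin(yaw)
    rot = np.array([[c, -s], [s, c]])
    moved = second_pts @ rot.T + np.array([tx, tz])
    return np.vstack([first_pts, moved])  # target point cloud display data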
An embodiment of the present application provides a point cloud display diagram generating device, as shown in fig. 7, including:
a first generating module 701, configured to generate a first planar point cloud display diagram according to a first panorama acquired by panoramic shooting a first space of a target space;
a second generating module 702, configured to generate a second planar point cloud display diagram according to a second panorama of a second space, where the second space is a space that is determined based on a spatial structure and is associated with the first space in the target space, in a case where the second panorama is acquired based on panorama shooting; the planar point cloud display diagram corresponding to the first space and the second space is an image generated by projecting partial pixel points carrying color characteristic values based on three-dimensional coordinates corresponding to partial pixel points in a corresponding panoramic image, the three-dimensional coordinates corresponding to the pixel points are determined based on a depth image and a panoramic segmentation image, and the depth image and the panoramic segmentation image are generated based on the panoramic image;
And the stitching acquiring module 703 is configured to stitch the second planar point cloud display diagram and the first planar point cloud display diagram based on image registration, and acquire a target point cloud display diagram.
Optionally, the associated space determined based on the spatial structure is a neighboring space; the splicing acquisition module comprises:
the first acquisition submodule is used for carrying out image registration on the second panoramic image and the first panoramic image to acquire the relative pose of the second panoramic image and the first panoramic image;
and the splicing sub-module is used for splicing the second plane point cloud display diagram and the first plane point cloud display diagram according to the relative pose so as to acquire the target point cloud display diagram.
Optionally, the first generating module includes:
the second acquisition sub-module is used for acquiring a depth image and a panoramic segmentation image corresponding to the first panoramic image based on the first panoramic image;
the third acquisition sub-module is used for acquiring three-dimensional coordinates corresponding to each target pixel point in a target set according to the depth image and the panoramic segmentation image corresponding to the first panoramic image, wherein the target set comprises at least part of the pixel points in the first panoramic image;
The projection acquisition sub-module is used for projecting part of target pixel points carrying color characteristic values based on the three-dimensional coordinates of the part of target pixel points in the target set to acquire a plane point cloud display diagram corresponding to the first space, wherein the part of target pixel points are pixel points meeting projection requirements.
Optionally, the second acquisition submodule includes:
the first acquisition unit is used for predicting the depth value of the pixel point in the first panoramic image based on the depth image model to acquire the depth image;
the second obtaining unit is used for carrying out category prediction on the pixel points in the first panoramic image based on the semantic segmentation model, and obtaining the panoramic segmentation image carrying category labels corresponding to the pixel points in the first panoramic image.
Optionally, the third obtaining submodule includes:
a first determining unit, configured to determine, according to the panoramic segmented image, all pixels corresponding to a first class label and all pixels corresponding to a second class label in the first panoramic image, where the panoramic segmented image includes class labels corresponding to the pixels in the first panoramic image, and the pixels corresponding to the first class label are located in a first area and the pixels corresponding to the second class label are located in a second area;
A second determining unit, configured to determine, based on the depth image, a first coordinate set including three-dimensional coordinates corresponding to the pixel points of the first area and a second coordinate set including three-dimensional coordinates corresponding to the pixel points of the second area;
a third obtaining unit, configured to obtain, according to at least one of the first coordinate set and the second coordinate set, a third coordinate set corresponding to at least a part of pixels located in a third area of the first panorama, and obtain a fourth coordinate set corresponding to at least a part of pixels located in a fourth area of the first panorama;
the pixel points corresponding to the first coordinate set, the second coordinate set, the third coordinate set and the fourth coordinate set are all target pixel points;
the first area, the second area and the third area are respectively a top area, a ground area and a wall area, corresponding to the first space on the first panoramic image, and the fourth area is an area which is different from the first area, the second area and the third area in the first panoramic image.
Optionally, the second determining unit includes:
The first acquisition subunit is used for acquiring a first height value corresponding to each pixel point in the first area and a second height value corresponding to each pixel point in the second area based on the depth image, wherein the height value corresponding to each pixel point is a vertical component corresponding to a reference three-dimensional coordinate determined by the pixel point based on the depth image in the height direction;
the first determining subunit is used for determining a first average height value according to the first height values respectively corresponding to the pixel points in the first area;
a second determining subunit, configured to determine a second average height value according to second height values corresponding to each pixel point in the second area respectively;
the third determining subunit is configured to determine, according to the first average height value, the panoramic pixel coordinate corresponding to each pixel point in the first area, and a conversion formula, a three-dimensional coordinate corresponding to each pixel point in the first area, and determine the first coordinate set based on the three-dimensional coordinate corresponding to each pixel point in the first area;
a fourth determining subunit, configured to determine, according to the second average height value, the panoramic pixel coordinate and the conversion formula corresponding to each pixel point in the second area, a three-dimensional coordinate corresponding to each pixel point in the second area, and determine the second coordinate set based on the three-dimensional coordinate corresponding to each pixel point in the second area;
The conversion formula is used for converting panoramic pixel coordinates into three-dimensional coordinates.
Optionally, the third obtaining unit includes:
a searching subunit, configured to search, for each pixel point in the third area and the fourth area, a first pixel point associated with a current pixel point in a column direction and located in the first area or the second area, where the first pixel point intersects a second pixel point corresponding to the current pixel point, and the second pixel point intersects the first area or the second area in the same column as the current pixel point;
the second acquisition subunit is used for acquiring the three-dimensional coordinates corresponding to the first pixel point based on the first coordinate set or the second coordinate set under the condition that the first pixel point is found;
a third obtaining subunit, configured to obtain, according to a three-dimensional coordinate corresponding to the first pixel point, a first distance between a projection point corresponding to a virtual camera in a three-dimensional real scene space model and a first pixel point associated with a current pixel point in a column direction, where the three-dimensional real scene space model is a three-dimensional space model corresponding to the first space, and in the three-dimensional real scene space model, a connection line between the projection point and the first pixel point is perpendicular to the column direction in which the current pixel point is located;
The first processing subunit is used for determining the depth value of the current pixel point based on the first distance and the panorama latitude angle corresponding to the current pixel point, and determining the three-dimensional coordinate of the current pixel point based on the depth value of the current pixel point;
the second processing subunit is configured to determine the third coordinate set according to three-dimensional coordinates corresponding to at least part of the pixel points in the third area, and determine the fourth coordinate set according to three-dimensional coordinates corresponding to at least part of the pixel points in the fourth area;
wherein the pixel points in the third region correspond to a third category label; the pixel points in the fourth region correspond to at least one fourth category label.
Optionally, the first processing subunit is further configured to:
determining the distance between the virtual camera and the current pixel point in the three-dimensional live-action space model based on the ratio of the first distance to the cosine value of the latitude angle of the panoramic image corresponding to the current pixel point;
and determining the distance between the virtual camera and the current pixel point as the depth value of the current pixel point.
Optionally, the projection acquisition submodule includes:
the screening unit is used for screening out partial target pixel points meeting projection requirements from the target set based on a preset rule;
The projection acquisition unit is used for projecting the partial target pixel points carrying the color characteristic values to a preset plane based on the three-dimensional coordinates of the partial target pixel points, and acquiring the plane point cloud display diagram;
and determining the color characteristic value carried by the target pixel point based on the first panoramic image, wherein the pixel point positioned in the first area does not meet the projection requirement.
For the device embodiments, since they are substantially similar to the method embodiments, the description is relatively simple, and reference is made to the description of the method embodiments for relevant points.
The embodiment of the application also provides an electronic device, including a processor, a memory, and a computer program stored in the memory and executable on the processor. When the computer program is executed by the processor, the processes of the point cloud display diagram generation method embodiments above are realized, and the same technical effects can be achieved; to avoid repetition, they are not described again here.
For example, fig. 8 shows a schematic diagram of the physical structure of an electronic device. As shown in fig. 8, the electronic device may include: processor 810, communication interface (Communications Interface) 820, memory 830, and communication bus 840, wherein processor 810, communication interface 820, memory 830 accomplish communication with each other through communication bus 840. The processor 810 may invoke logic instructions in the memory 830, the processor 810 being configured to perform the steps of the point cloud display diagram generation method described in any of the embodiments above.
Further, the logic instructions in the memory 830 described above may be implemented in the form of software functional units and may be stored in a computer-readable storage medium when sold or used as a stand-alone product. Based on this understanding, the technical solution of the present application may be embodied essentially or in a part contributing to the prior art or in a part of the technical solution, in the form of a software product stored in a storage medium, comprising several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the method according to the embodiments of the present application.
The embodiment of the application also provides a computer readable storage medium, on which a computer program is stored; when executed by a processor, the computer program realizes the processes of the point cloud display diagram generation method embodiments above and can achieve the same technical effects, which, to avoid repetition, are not described again here. The computer readable storage medium may be, for example, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
From the above description of the embodiments, it will be clear to those skilled in the art that the above-described embodiment method may be implemented by means of software plus a necessary general hardware platform, but of course may also be implemented by means of hardware, but in many cases the former is a preferred embodiment. Based on such understanding, the technical solution of the present application may be embodied essentially or in a part contributing to the prior art in the form of a software product stored in a storage medium (e.g. ROM/RAM, magnetic disk, optical disk) comprising instructions for causing a terminal (which may be a mobile phone, a computer, a server, an air conditioner, or a network device, etc.) to perform the method according to the embodiments of the present application.
The embodiments of the present application have been described above with reference to the accompanying drawings, but the present application is not limited to the above-described embodiments, which are merely illustrative and not restrictive, and many forms may be made by those having ordinary skill in the art without departing from the spirit of the present application and the scope of the claims, which are to be protected by the present application.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
It will be clear to those skilled in the art that, for convenience and brevity of description, specific working procedures of the above-described systems, apparatuses and units may refer to corresponding procedures in the foregoing method embodiments, and are not repeated herein.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative, e.g., the division of the units is merely a logical function division, and there may be additional divisions when actually implemented, e.g., multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or units, which may be in electrical, mechanical or other form.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiments of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present application may be embodied essentially or in a part contributing to the prior art or in a part of the technical solution, in the form of a software product stored in a storage medium, comprising several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the method according to the embodiments of the present application. And the aforementioned storage medium includes: a usb disk, a removable hard disk, a ROM, a RAM, a magnetic disk, or an optical disk, etc.
The foregoing is merely illustrative of the present application, and the present application is not limited thereto, and any person skilled in the art will readily recognize that variations or substitutions are within the scope of the present application. Therefore, the protection scope of the application is subject to the protection scope of the claims.

Claims (12)

1. A point cloud display diagram generation method, characterized by comprising the following steps:
generating a first plane point cloud display diagram according to a first panoramic image obtained by panoramic shooting of a first space of a target space;
generating a second planar point cloud display diagram according to a second panoramic image in the case of acquiring the second panoramic image of a second space based on panoramic shooting, wherein the second space is a space which is determined based on a space structure and is associated with the first space in the target space; the planar point cloud display diagram corresponding to the first space and the second space is an image generated by projecting partial pixel points carrying color characteristic values based on three-dimensional coordinates corresponding to partial pixel points in a corresponding panoramic image, the three-dimensional coordinates corresponding to the pixel points are determined based on a depth image and a panoramic segmentation image, and the depth image and the panoramic segmentation image are generated based on the panoramic image;
and based on image registration, splicing the second plane point cloud display diagram with the first plane point cloud display diagram to obtain a target point cloud display diagram.
2. The method of claim 1, wherein the associated space determined based on the spatial structure is a neighboring space; the image registration-based stitching the second planar point cloud display diagram with the first planar point cloud display diagram to obtain a target point cloud display diagram includes:
Performing image registration on the second panoramic image and the first panoramic image to obtain the relative pose of the second panoramic image and the first panoramic image;
and according to the relative pose, splicing the second plane point cloud display diagram and the first plane point cloud display diagram to acquire the target point cloud display diagram.
3. The method of claim 1, wherein generating a first planar point cloud display from a first panorama acquired by panoramic shooting a first space of a target space comprises:
based on the first panoramic image, acquiring a depth image and a panoramic segmentation image corresponding to the first panoramic image;
according to the depth image and the panoramic segmentation image corresponding to the first panoramic image, three-dimensional coordinates corresponding to each target pixel point in a target set are obtained, wherein the target set comprises at least part of the pixel points in the first panoramic image;
and based on the three-dimensional coordinates of part of target pixel points in the target set, projecting the part of target pixel points carrying color characteristic values to obtain a planar point cloud display diagram corresponding to the first space, wherein the part of target pixel points are pixel points meeting projection requirements.
4. The method of claim 3, wherein the obtaining, based on the first panorama, a depth image and a panorama segmentation image corresponding to the first panorama comprises:
predicting the depth value of a pixel point in the first panoramic image based on a depth image model, and acquiring the depth image;
and carrying out category prediction on the pixel points in the first panoramic image based on a semantic segmentation model, and obtaining the panoramic segmentation image carrying category labels corresponding to the pixel points in the first panoramic image.
5. The method according to claim 3, wherein the obtaining three-dimensional coordinates corresponding to each target pixel point in the target set according to the depth image and the panoramic segmented image corresponding to the first panoramic image includes:
determining all pixel points corresponding to a first class label and all pixel points corresponding to a second class label in the first panorama according to the panorama segmentation image, wherein the panorama segmentation image comprises class labels respectively corresponding to all pixel points in the first panorama, the pixel points corresponding to the first class label are located in a first area, and the pixel points corresponding to the second class label are located in a second area;
Determining a first coordinate set comprising three-dimensional coordinates corresponding to the pixel points of the first area and a second coordinate set comprising three-dimensional coordinates corresponding to the pixel points of the second area based on the depth image;
according to at least one of the first coordinate set and the second coordinate set, a third coordinate set corresponding to at least part of pixel points in a third area of the first panoramic image is obtained, and a fourth coordinate set corresponding to at least part of pixel points in a fourth area of the first panoramic image is obtained;
the pixel points corresponding to the first coordinate set, the second coordinate set, the third coordinate set and the fourth coordinate set are all target pixel points;
the first area, the second area and the third area are respectively a top area, a ground area and a wall area, corresponding to the first space on the first panoramic image, and the fourth area is an area which is different from the first area, the second area and the third area in the first panoramic image.
6. The method of claim 5, wherein the determining, based on the depth image, a first set of coordinates including three-dimensional coordinates corresponding to pixels of the first region, a second set of coordinates including three-dimensional coordinates corresponding to pixels of the second region, comprises:
Acquiring a first height value corresponding to each pixel point in the first area and a second height value corresponding to each pixel point in the second area based on the depth image, wherein the height value corresponding to each pixel point is a vertical component corresponding to a reference three-dimensional coordinate determined by the pixel point based on the depth image in the height direction;
determining a first average height value according to the first height values respectively corresponding to the pixel points in the first area;
determining a second average height value according to second height values respectively corresponding to the pixel points in the second area;
according to the first average height value, panoramic pixel coordinates corresponding to each pixel point in the first area and a conversion formula, determining three-dimensional coordinates corresponding to each pixel point in the first area, and determining the first coordinate set based on the three-dimensional coordinates corresponding to each pixel point in the first area;
determining three-dimensional coordinates corresponding to each pixel point in the second area according to the second average height value, the panoramic pixel coordinates corresponding to each pixel point in the second area and a conversion formula, and determining the second coordinate set based on the three-dimensional coordinates corresponding to each pixel point in the second area;
The conversion formula is used for converting panoramic pixel coordinates into three-dimensional coordinates.
7. The method of claim 5, wherein the obtaining a third set of coordinates corresponding to at least a portion of pixels located in a third region of the first panorama and a fourth set of coordinates corresponding to at least a portion of pixels located in a fourth region of the first panorama according to at least one of the first set of coordinates and the second set of coordinates comprises:
for each pixel point in the third area and the fourth area, searching a first pixel point which is associated with the current pixel point in the column direction and is positioned in the first area or the second area, wherein the first pixel point is intersected with a second pixel point corresponding to the current pixel point, and the second pixel point is in the same column with the current pixel point and is intersected with the first area or the second area;
under the condition that the first pixel point is found, based on the first coordinate set or the second coordinate set, acquiring a three-dimensional coordinate corresponding to the first pixel point;
according to the three-dimensional coordinates corresponding to the first pixel points, a first distance between a projection point corresponding to a virtual camera in a three-dimensional live-action space model and a first pixel point associated with the current pixel point in the column direction is obtained, wherein the three-dimensional live-action space model is a three-dimensional space model corresponding to the first space, and in the three-dimensional live-action space model, a connecting line of the projection point and the first pixel point is perpendicular to the column direction of the current pixel point;
Determining a depth value of the current pixel point based on the first distance and the panorama latitude angle corresponding to the current pixel point, and determining a three-dimensional coordinate of the current pixel point based on the depth value of the current pixel point;
determining the third coordinate set according to the three-dimensional coordinates corresponding to at least part of the pixel points in the third region, and determining the fourth coordinate set according to the three-dimensional coordinates corresponding to at least part of the pixel points in the fourth region;
wherein the pixel points in the third region correspond to a third category label; the pixel points in the fourth region correspond to at least one fourth category label.
8. The method of claim 7, wherein determining the depth value of the current pixel based on the first distance and the panorama latitude angle corresponding to the current pixel comprises:
determining the distance between the virtual camera and the current pixel point in the three-dimensional live-action space model based on the ratio of the first distance to the cosine value of the latitude angle of the panoramic image corresponding to the current pixel point;
and determining the distance between the virtual camera and the current pixel point as the depth value of the current pixel point.
9. The method according to claim 5, wherein the projecting the partial target pixel points carrying color feature values based on the three-dimensional coordinates of the partial target pixel points in the target set to obtain the planar point cloud display map corresponding to the first space includes:
Screening out partial target pixel points meeting projection requirements from the target set based on a preset rule;
based on the three-dimensional coordinates of the partial target pixel points, projecting the partial target pixel points carrying color characteristic values to a preset plane, and obtaining a plane point cloud display diagram;
and determining the color characteristic value carried by the target pixel point based on the first panoramic image, wherein the pixel point positioned in the first area does not meet the projection requirement.
10. A point cloud display diagram generation apparatus, characterized by comprising:
the first generation module is used for generating a first plane point cloud display diagram according to a first panoramic image obtained by panoramic shooting of a first space of a target space;
the second generation module is used for generating a second planar point cloud display diagram according to a second panoramic image when the second panoramic image of a second space is acquired based on panoramic shooting, wherein the second space is a space which is determined based on a space structure and is associated with the first space in the target space; the planar point cloud display diagram corresponding to the first space and the second space is an image generated by projecting partial pixel points carrying color characteristic values based on three-dimensional coordinates corresponding to partial pixel points in a corresponding panoramic image, the three-dimensional coordinates corresponding to the pixel points are determined based on a depth image and a panoramic segmentation image, and the depth image and the panoramic segmentation image are generated based on the panoramic image;
And the splicing acquisition module is used for splicing the second plane point cloud display diagram and the first plane point cloud display diagram based on image registration to acquire a target point cloud display diagram.
11. An electronic device comprising a processor, a memory and a computer program stored on the memory and executable on the processor, which when executed by the processor implements the steps of the point cloud display generation method of any of claims 1 to 9.
12. A computer-readable storage medium, on which a computer program is stored, which computer program, when being executed by a processor, implements the steps of the point cloud representation generation method according to any of claims 1 to 9.
CN202310376732.XA 2023-04-10 2023-04-10 Point cloud display diagram generation method and device, electronic equipment and storage medium Active CN116596741B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310376732.XA CN116596741B (en) 2023-04-10 2023-04-10 Point cloud display diagram generation method and device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310376732.XA CN116596741B (en) 2023-04-10 2023-04-10 Point cloud display diagram generation method and device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN116596741A true CN116596741A (en) 2023-08-15
CN116596741B CN116596741B (en) 2024-05-07

Family

ID=87588843

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310376732.XA Active CN116596741B (en) 2023-04-10 2023-04-10 Point cloud display diagram generation method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN116596741B (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9013543B1 (en) * 2012-11-14 2015-04-21 Google Inc. Depth map generation using multiple scanners to minimize parallax from panoramic stitched images
US20220198750A1 (en) * 2019-04-12 2022-06-23 Beijing Chengshi Wanglin Information Technology Co., Ltd. Three-dimensional object modeling method, image processing method, image processing device
WO2022222121A1 (en) * 2021-04-23 2022-10-27 华为技术有限公司 Panoramic image generation method, vehicle-mounted image processing apparatus, and vehicle
CN115375860A (en) * 2022-08-15 2022-11-22 北京城市网邻信息技术有限公司 Point cloud splicing method, device, equipment and storage medium
CN115393467A (en) * 2022-08-19 2022-11-25 北京城市网邻信息技术有限公司 House type graph generation method, device, equipment and medium

Also Published As

Publication number Publication date
CN116596741B (en) 2024-05-07

Similar Documents

Publication Publication Date Title
US9984177B2 (en) Modeling device, three-dimensional model generation device, modeling method, program and layout simulator
WO2019233445A1 (en) Data collection and model generation method for house
JP4512584B2 (en) Panorama video providing method and apparatus with improved image matching speed and blending method
CN106548516B (en) Three-dimensional roaming method and device
JP6418449B2 (en) Image processing apparatus, image processing method, and program
US20220198709A1 (en) Determining position of an image capture device
US10733777B2 (en) Annotation generation for an image network
KR101260132B1 (en) Stereo matching process device, stereo matching process method, and recording medium
US20150332474A1 (en) Orthogonal and Collaborative Disparity Decomposition
US20200258285A1 (en) Distributed computing systems, graphical user interfaces, and control logic for digital image processing, visualization and measurement derivation
JP2016170610A (en) Three-dimensional model processing device and camera calibration system
JPWO2018179040A1 (en) Camera parameter estimation device, method and program
CN112686877A (en) Binocular camera-based three-dimensional house damage model construction and measurement method and system
CN113298928A (en) House three-dimensional reconstruction method, device, equipment and storage medium
CN116485633A (en) Point cloud display diagram generation method and device, electronic equipment and storage medium
JP6425511B2 (en) Method of determining feature change and feature change determination apparatus and feature change determination program
JP7241812B2 (en) Information visualization system, information visualization method, and program
JP2015108992A (en) Additional information display system
CN116596741B (en) Point cloud display diagram generation method and device, electronic equipment and storage medium
CN116485634B (en) Point cloud display diagram generation method and device, electronic equipment and storage medium
WO2024041181A1 (en) Image processing method and apparatus, and storage medium
JP2006318015A (en) Image processing device, image processing method, image display system, and program
CN110191284B (en) Method and device for collecting data of house, electronic equipment and storage medium
CN110751703A (en) Winding picture generation method, device, equipment and storage medium
KR101559739B1 (en) System for merging virtual modeling and image data of cameras

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant