CN116912817A - Three-dimensional scene model splitting method and device, electronic equipment and storage medium - Google Patents

Three-dimensional scene model splitting method and device, electronic equipment and storage medium

Info

Publication number
CN116912817A
CN116912817A CN202310623505.2A
Authority
CN
China
Prior art keywords
point cloud
sub point cloud
three-dimensional scene model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310623505.2A
Other languages
Chinese (zh)
Inventor
武延豪
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenli Vision Shenzhen Cultural Technology Co ltd
Original Assignee
Shenli Vision Shenzhen Cultural Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenli Vision Shenzhen Cultural Technology Co ltd filed Critical Shenli Vision Shenzhen Cultural Technology Co ltd
Priority to CN202310623505.2A
Publication of CN116912817A
Legal status: Pending

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 - Scenes; Scene-specific elements
    • G06V20/60 - Type of objects
    • G06V20/64 - Three-dimensional objects
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 - Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/20 - Image preprocessing
    • G06V10/26 - Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V10/267 - Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion by performing operations on regions, e.g. growing, shrinking or watersheds

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The embodiments of the present application provide a three-dimensional scene model splitting method and apparatus, an electronic device, and a storage medium. The three-dimensional scene model splitting method includes: converting the three-dimensional scene model into a first point cloud; deleting the points corresponding to the supporting bottom surface from the first point cloud to obtain a second point cloud; dividing the second point cloud into a plurality of sub point clouds, where each sub point cloud includes a plurality of points and different sub point clouds correspond to different split objects in the three-dimensional scene model; and, for each sub point cloud, determining a bounding box of the split object corresponding to that sub point cloud in the three-dimensional scene model, and splitting the corresponding split object from the three-dimensional scene model according to the bounding box. The scheme improves the efficiency of splitting a three-dimensional scene model.

Description

Three-dimensional scene model splitting method and device, electronic equipment and storage medium
Technical Field
The embodiments of the present application relate to the field of computer technology, and in particular to a three-dimensional scene model splitting method and apparatus, an electronic device, and a storage medium.
Background
With the development of film and television production technology, virtual shooting based on Light-Emitting Diode (LED) screens has emerged: a virtual scene rendered on a purpose-built LED screen replaces a physical set, which reduces the dependence of film and television shooting on locations and scenery and lowers both the cost and the production cycle. The virtual scenes used in virtual shooting are rendered from three-dimensional scene models (also referred to as 3D scene models) obtained by scanning real scenes such as malls, streets, cities, and forests. In order to repair the three-dimensional scene model and manage it in a model library, individual objects, buildings, and the like need to be split from the three-dimensional scene model.
At present, objects of interest are split from a three-dimensional scene model by manual operation.
However, a three-dimensional scene model obtained by scanning and reconstruction is a single complete model containing a very large number of points and faces, and splitting it manually consumes a great deal of labor and time, so the efficiency of splitting the three-dimensional scene model is low.
Disclosure of Invention
In view of the above, embodiments of the present application provide a three-dimensional scene model splitting method and apparatus, an electronic device, and a storage medium, so as to at least solve or alleviate the above-mentioned problems.
According to a first aspect of the embodiments of the present application, a three-dimensional scene model splitting method is provided, including: converting the three-dimensional scene model into a first point cloud; deleting the points corresponding to the supporting bottom surface from the first point cloud to obtain a second point cloud; dividing the second point cloud into a plurality of sub point clouds, where each sub point cloud includes a plurality of points and different sub point clouds correspond to different split objects in the three-dimensional scene model; and, for each sub point cloud, determining a bounding box of the split object corresponding to the sub point cloud in the three-dimensional scene model, and splitting the corresponding split object from the three-dimensional scene model according to the bounding box.
According to a second aspect of the embodiments of the present application, a three-dimensional scene model splitting apparatus is provided, including: a conversion unit configured to convert the three-dimensional scene model into a first point cloud; a separation unit configured to delete the points corresponding to the supporting bottom surface from the first point cloud to obtain a second point cloud; a clustering unit configured to divide the second point cloud into a plurality of sub point clouds, where each sub point cloud includes a plurality of points and different sub point clouds correspond to different split objects in the three-dimensional scene model; and a splitting unit configured to determine, according to a sub point cloud, a bounding box of the split object corresponding to that sub point cloud in the three-dimensional scene model, and to split the corresponding split object from the three-dimensional scene model according to the bounding box.
According to a third aspect of the embodiments of the present application, an electronic device is provided, including a processor, a memory, a communication interface, and a communication bus, where the processor, the memory, and the communication interface communicate with one another through the communication bus; the memory is configured to store at least one executable instruction, and the executable instruction causes the processor to perform operations corresponding to the method of the first aspect.
According to a fourth aspect of embodiments of the present application, there is provided a computer storage medium having stored thereon a computer program which, when executed by a processor, implements the method of the first aspect described above.
According to a fifth aspect of embodiments of the present application, there is provided a computer program product comprising computer instructions for instructing a computing device to execute the method of the first aspect described above.
According to the above technical solution, after the three-dimensional scene model is converted into the first point cloud, the points corresponding to the supporting bottom surface are deleted from the first point cloud to obtain the second point cloud. Because a certain distance exists between different split objects in the three-dimensional scene model, the sub point clouds corresponding to the different split objects can be determined from the second point cloud; the bounding box of each corresponding split object can then be determined from its sub point cloud, and the split object can be split from the three-dimensional scene model using that bounding box. By converting the three-dimensional scene model into a point cloud, the bounding boxes of the different split objects in the model can be determined, and the corresponding split objects can then be split from the model according to those bounding boxes. This realizes automatic splitting of the three-dimensional scene model, reduces the manual effort and time consumed in the splitting process, and improves the splitting efficiency of the three-dimensional scene model.
Drawings
To illustrate the technical solutions of the embodiments of the present application or of the prior art more clearly, the drawings required for describing the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present application, and a person of ordinary skill in the art can derive other drawings from them.
FIG. 1 is a schematic diagram of an exemplary system to which an embodiment of the application is applied;
FIG. 2 is a flow chart of a three-dimensional scene model splitting method according to an embodiment of the application;
FIG. 3 is a schematic diagram of a three-dimensional scene model splitting apparatus according to an embodiment of the application;
FIG. 4 is a schematic diagram of an electronic device according to an embodiment of the application.
Detailed Description
The present application is described below based on examples, but it is not limited to these examples. In the following detailed description, certain specific details are set forth; however, those skilled in the art can fully understand the present application without some of these details. Well-known methods, procedures, and flows are not described in detail so as not to obscure the essence of the application. The figures are not necessarily drawn to scale.
First, some of the terms and terminology used in describing the embodiments of the present application are explained below.
Virtual shooting: virtual shooting refers to a family of digital film and television production methods in which virtual scenery replaces real scenery and computer technology assists the production.
Three-dimensional scene model: a three-dimensional scene model, also called a 3D scene model, is a large scene model obtained by high-precision scanning and reconstruction. Covering outdoor building complexes, streets, and the like, the model is rich in detail, and the number of faces and vertices it contains reaches the level of tens of millions or even hundreds of millions.
Boolean intersection: Boolean operations are logical operations on sets and include union, intersection, and subtraction (difference). In graphics processing, Boolean operations combine simple basic shapes to produce new shapes, and they have been extended from two-dimensional to three-dimensional graphics. A Boolean intersection is the set of elements shared by two sets; in the embodiments of the present application, the Boolean intersection refers to the shared volume of two mesh models.
Point cloud: a point cloud is a collection of points on the outer surface of an object that characterizes the surface properties of the object.
Bounding box: a bounding box is a closed surface in three-dimensional space formed by joining a plurality of planes and/or curved surfaces; in the embodiments of the present application, a bounding box refers to a closed surface that can enclose an individual object in the three-dimensional scene model.
Exemplary System
FIG. 1 illustrates an exemplary system to which the three-dimensional scene model splitting method of the embodiments of the present application can be applied. As shown in FIG. 1, the system may include a cloud server 102, a communication network 104, and at least one user device 106 (a plurality of user devices 106 are illustrated in FIG. 1). It should be noted that the solution of the embodiments of the present application may be applied either on the cloud server 102 or on the user device 106.
The cloud server 102 may be any suitable device for storing information, data, programs, and/or any other suitable type of content, including, but not limited to, a distributed storage system device, a server cluster, a computing cloud server cluster, and the like. In some embodiments, the cloud server 102 may perform any suitable functions. For example, in some embodiments, the cloud server 102 may be used to split a three-dimensional scene model. As an optional example, in some embodiments, the cloud server 102 may receive a splitting instruction sent by a user device 106 and split the specified three-dimensional scene model based on the splitting instruction, so as to split a plurality of independent objects from the three-dimensional scene model.
The communication network 104 may be any suitable combination of one or more wired and/or wireless networks. For example, the communication network 104 can include any one or more of the following: the Internet, an intranet, a wide area network (Wide Area Network, WAN), a local area network (Local Area Network, LAN), a wireless network, a digital subscriber line (Digital Subscriber Line, DSL) network, a frame relay network, an asynchronous transfer mode (Asynchronous Transfer Mode, ATM) network, a Virtual Private Network (VPN), and/or any other suitable communication network. The user device 106 can be coupled to the communication network 104 via one or more communication links (e.g., communication link 112), and the communication network 104 can be linked to the cloud server 102 via one or more communication links (e.g., communication link 114). A communication link may be any communication link suitable for transferring data between the cloud server 102 and the user device 106, such as a network link, a dial-up link, a wireless link, a hardwired link, any other suitable communication link, or any suitable combination of such links.
The user device 106 may include any one or more user devices suitable for interacting with a user. In some embodiments, when the three-dimensional scene model is split by the cloud server 102, the user device 106 may send a splitting instruction to the cloud server 102 in response to a user operation; after receiving the splitting instruction, the cloud server 102 splits the specified three-dimensional scene model to obtain the plurality of independent objects included in the model, and then sends the split independent objects, or information related to the independent objects, to the user device 106 through the communication network 104. After receiving an independent object, the user device 106 can locally perform model repair or library management on it; after receiving information related to an independent object, the user device 106 can send a corresponding management instruction to the cloud server 102, so that the cloud server 102 performs model repair or library management on the split independent object. In other embodiments, the user device 106 may split the three-dimensional scene model locally: the user device 106 obtains the three-dimensional scene model locally in response to a user operation and splits it to obtain the plurality of independent objects included in the model.
User device 106 may comprise any suitable type of device, for example, user device 106 may comprise a mobile device, a tablet computer, a laptop computer, a desktop computer, or any other suitable type of user device.
The embodiments of the present application mainly concern the process by which the cloud server 102 or the user device 106 splits a three-dimensional scene model; this process is described in detail below.
Three-dimensional scene model splitting method
Based on the above system, the embodiment of the present application provides a three-dimensional scene model splitting method, which may be executed by the cloud server 102 or the user device 106. The three-dimensional scene model splitting method is described in detail below by means of a number of embodiments.
FIG. 2 is a flow chart of a three-dimensional scene model splitting method according to an embodiment of the application. As shown in fig. 2, the three-dimensional scene model splitting method includes the following steps:
step 201, converting the three-dimensional scene model into a first point cloud.
The three-dimensional scene model is a mesh model, i.e., a model that approximately represents a three-dimensional object with a collection of polygons of similar size and shape (typically triangles). The three-dimensional scene model can be obtained by scanning and reconstruction, and it includes a supporting bottom surface and a plurality of split objects: the split objects sit on the supporting bottom surface, and the supporting bottom surface connects the split objects into a complete three-dimensional scene model. A split object in the embodiments of the present application is a part of the three-dimensional scene model; that is, a split object is itself a mesh model and is a sub-model contained in the three-dimensional scene model.
The supporting bottom surface is the medium that connects the different split objects. If the three-dimensional scene model models an outdoor building complex, a street, or the like, the supporting bottom surface is the mesh model of the ground; if the three-dimensional scene model models an indoor layout and furnishings, the supporting bottom surface is the mesh model of the indoor floor.
The three-dimensional scene model may be converted into the first point cloud by a model conversion algorithm that converts a mesh model into a point cloud. Any suitable conversion algorithm may be used; the embodiments of the present application do not limit its specific implementation.
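As an illustrative aid (not part of the patent disclosure), the conversion of a mesh model into a point cloud can be sketched with an off-the-shelf library such as Open3D; the file name and sample count below are hypothetical placeholders.

```python
# Illustrative sketch only: converting a mesh model into a point cloud with Open3D.
# The file path and the number of samples are hypothetical placeholders.
import open3d as o3d

mesh = o3d.io.read_triangle_mesh("scene.obj")        # three-dimensional scene model (mesh)
first_point_cloud = mesh.sample_points_uniformly(
    number_of_points=2_000_000                        # area-weighted surface sampling
)
print(first_point_cloud)                              # o3d.geometry.PointCloud with N points
```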
And 202, deleting points corresponding to the supporting bottom surface in the first point cloud to obtain a second point cloud.
After the first point cloud is obtained, the points in the first point cloud that correspond to the supporting bottom surface of the three-dimensional scene model are determined and deleted, and the remaining points form the second point cloud.
Because the supporting bottom surface is the medium that connects the split objects in the three-dimensional scene model, once the points corresponding to the supporting bottom surface are deleted from the first point cloud, the resulting second point cloud contains the points corresponding to the split objects, and a certain distance exists between the points corresponding to different split objects; the points belonging to different split objects can therefore be identified from the second point cloud.
Step 203, dividing the second point cloud into a plurality of sub point clouds.
Because the second point cloud contains points corresponding to different split objects, it can be divided into a plurality of sub point clouds according to which split object each point corresponds to. Different sub point clouds correspond to different split objects, each sub point cloud contains a plurality of points, and the points in the same sub point cloud correspond to the same split object.
For example, if the second point cloud is divided into N sub point clouds and the three-dimensional scene model includes N split objects, the i-th sub point cloud corresponds to the i-th split object, where i is a positive integer less than or equal to N and N is a positive integer greater than or equal to 2; the i-th sub point cloud is the point cloud representation of the i-th split object.
Step 204, determining a bounding box of the split object corresponding to the sub-point cloud in the three-dimensional scene model according to the sub-point cloud, and splitting the split object corresponding to the sub-point cloud from the three-dimensional scene model according to the bounding box.
Because a point cloud consists of points located on the surface of an object, the distribution of a sub point cloud in three-dimensional space reflects the distribution of its corresponding split object in three-dimensional space, so the bounding box of the split object corresponding to that sub point cloud can be determined. The bounding box of a split object is a closed surface that encloses the split object, i.e., the split object lies inside the spatial region enclosed by the bounding box.
After the bounding box of a split object is determined from its sub point cloud, the corresponding split object can be split from the three-dimensional scene model according to the bounding box. Since the split object lies inside the spatial region enclosed by the bounding box, the split object can be obtained by computing the intersection of its bounding box with the three-dimensional scene model, thereby splitting it from the model.
For example, the bounding box of the i-th split object can be determined from the i-th sub point cloud; this bounding box indicates the position of the i-th split object in three-dimensional space, and the i-th split object can then be split from the three-dimensional scene model according to its bounding box.
In the embodiments of the present application, after the three-dimensional scene model is converted into the first point cloud, the points corresponding to the supporting bottom surface are deleted from the first point cloud to obtain the second point cloud. Because a certain distance exists between different split objects in the three-dimensional scene model, the sub point clouds corresponding to the different split objects can be determined from the second point cloud; the bounding box of each corresponding split object can then be determined from its sub point cloud, and the split object can be split from the three-dimensional scene model using that bounding box. By converting the three-dimensional scene model into a point cloud, the bounding boxes of the different split objects can be determined and the corresponding split objects can be split from the model according to those bounding boxes, which realizes automatic splitting of the three-dimensional scene model, reduces the manual effort and time consumed in the splitting process, and improves the splitting efficiency of the three-dimensional scene model.
In a possible implementation, when the second point cloud is obtained from the first point cloud, a reference plane corresponding to the supporting bottom surface may first be determined in the first point cloud, and the points in the first point cloud whose distance to the reference plane is smaller than a first distance threshold are then deleted to obtain the second point cloud.
Since the three-dimensional scene model is obtained by scanning and reconstruction, the supporting bottom surface in the model may be uneven, so after the model is converted into the first point cloud, the points corresponding to the supporting bottom surface may not lie exactly on one plane. A reference plane corresponding to the supporting bottom surface is therefore determined: the reference plane indicates the overall position of the supporting bottom surface in three-dimensional space, and points in the first point cloud that are close to the reference plane can be regarded as points corresponding to the supporting bottom surface. Deleting the points whose distance to the reference plane is smaller than the first distance threshold thus yields the second point cloud.
The degree of unevenness of the supporting bottom surface differs depending on the real scene that the three-dimensional scene model represents, so the first distance threshold needs to be chosen according to that degree of unevenness: it should ensure that the points corresponding to the supporting bottom surface are deleted from the first point cloud while as few points belonging to the split objects as possible are deleted, so that the objects split from the three-dimensional scene model remain complete. In one example, the first distance threshold is 5 cm; that is, after the reference plane corresponding to the supporting bottom surface is determined, the points whose distance to the reference plane is smaller than 5 cm are deleted, and the points whose distance to the reference plane is greater than or equal to 5 cm form the second point cloud. It should be noted that a first distance threshold of 5 cm is only an example; the embodiments of the present application do not limit its value.
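A minimal sketch of this filtering step, assuming the reference plane is already available as the coefficients (a, b, c, d) of a*x + b*y + c*z + d = 0 and the point cloud is a plain N x 3 coordinate array; the names and the 5 cm threshold are illustrative.

```python
# Illustrative sketch: removing points whose distance to the reference plane
# a*x + b*y + c*z + d = 0 is below a threshold (0.05 m here, i.e. 5 cm).
# Variable names and the threshold value are assumptions for illustration.
import numpy as np

def remove_ground_points(points: np.ndarray, plane: np.ndarray,
                         dist_threshold: float = 0.05) -> np.ndarray:
    a, b, c, d = plane
    normal = np.array([a, b, c])
    # Unsigned point-to-plane distance for every point of the first point cloud.
    dist = np.abs(points @ normal + d) / np.linalg.norm(normal)
    return points[dist >= dist_threshold]             # second point cloud
```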
In the embodiments of the present application, the reference plane corresponding to the supporting bottom surface is determined in three-dimensional space and used to represent the position of the supporting bottom surface; deleting the points in the first point cloud whose distance to the reference plane is smaller than the first distance threshold separates the points of the supporting bottom surface from the points of the split objects. This ensures that the supporting bottom surface is fully separated from the split objects so that each independent split object can be extracted, avoids the situation in which several split objects are recognized as one because the supporting bottom surface was not completely removed, and thus guarantees the accuracy of splitting the three-dimensional scene model.
In a possible implementation, when the three-dimensional scene model is converted into the first point cloud, a plurality of points may be sampled from each geometric face of the three-dimensional scene model according to the area of that face, where the area of a face is positively correlated with the number of points sampled from it; the set of points sampled from all geometric faces of the three-dimensional scene model is then taken as the first point cloud.
The three-dimensional scene model is a mesh model: the supporting bottom surface and the split objects each consist of a plurality of polygons joined together, each of these small polygons is planar, and the vertices of the model are points where at least three polygons meet. If the vertices of the three-dimensional scene model were used directly as the point cloud, the point cloud would be unevenly distributed; because the supporting bottom surface contains planes of relatively large area, the reference plane determined from such a point-cloud distribution would have a large positional error in three-dimensional space, which would affect the accuracy of splitting the three-dimensional scene model.
In the embodiments of the present application, when the three-dimensional scene model is converted into the first point cloud, an area-based sampling algorithm is used to sample the supporting bottom surface and the split objects, and the sampled points form the first point cloud. For any geometric face (i.e., polygon) of the three-dimensional scene model, the number of points sampled on that face is positively correlated with its area, so the point cloud is distributed more evenly. When the reference plane corresponding to the supporting bottom surface is determined from this distribution, the accuracy of the reference plane in three-dimensional space is ensured, the points of the supporting bottom surface and the points of the split objects can be separated accurately, and the accuracy of the objects split from the three-dimensional scene model is therefore guaranteed.
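The area-based sampling idea can be sketched as follows: each triangle is selected with probability proportional to its area, and a point is then drawn uniformly inside it via barycentric coordinates. This only illustrates the principle (library routines such as the sampling call shown earlier implement the same idea); the function name and arguments are hypothetical.

```python
# Illustrative sketch of area-based sampling: triangles are chosen with probability
# proportional to their area, then a point is drawn uniformly inside each chosen
# triangle using barycentric coordinates. vertices is (V, 3), triangles is (T, 3).
import numpy as np

def sample_mesh_by_area(vertices: np.ndarray, triangles: np.ndarray,
                        n_samples: int, rng=np.random.default_rng()) -> np.ndarray:
    a = vertices[triangles[:, 0]]
    b = vertices[triangles[:, 1]]
    c = vertices[triangles[:, 2]]
    areas = 0.5 * np.linalg.norm(np.cross(b - a, c - a), axis=1)
    probs = areas / areas.sum()                        # larger faces receive more samples
    chosen = rng.choice(len(triangles), size=n_samples, p=probs)
    u, v = rng.random(n_samples), rng.random(n_samples)
    flip = u + v > 1.0                                 # fold back into the triangle
    u[flip], v[flip] = 1.0 - u[flip], 1.0 - v[flip]
    u, v = u[:, None], v[:, None]
    return a[chosen] + u * (b[chosen] - a[chosen]) + v * (c[chosen] - a[chosen])
```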
In a possible implementation, on the basis of the first point cloud obtained by area-based sampling, the reference plane corresponding to the supporting bottom surface may be determined by iterating over candidate planes in three-dimensional space with the objective of maximizing the number of points of the first point cloud covered by the plane, and taking the resulting plane as the reference plane.
Because the three-dimensional scene model is obtained by scanning and reconstruction, the ground in the corresponding real scene has a larger area than the surfaces of the objects, and the first point cloud is obtained by area-based sampling; therefore the plane in three-dimensional space that covers the largest number of points of the first point cloud can be solved iteratively and used as the reference plane corresponding to the supporting bottom surface. When this plane is solved iteratively, the result may be a plane equation, and the three-dimensional plane represented by that equation is the reference plane corresponding to the supporting bottom surface.
It should be noted that the reference plane is obtained by iterative solving whose objective is to maximize the number of points of the first point cloud covered by the plane, but the result is not necessarily the plane that covers the greatest possible number of points. For example, the iteration may stop once the current plane covers more than 50% of the points in the first point cloud, and the plane at that moment is taken as the reference plane corresponding to the supporting bottom surface.
In one example, the reference plane may be solved with the Random Sample Consensus (RANSAC) algorithm. After the first point cloud is obtained, it is used as the input of the RANSAC algorithm, which solves for the plane equation covering the largest number of points of the first point cloud in three-dimensional space; this plane is used as the reference plane corresponding to the supporting bottom surface. The RANSAC algorithm is simple in principle, fast, and robust, and is well suited to ground-detection tasks in outdoor scenes, which ensures that the determined reference plane matches the supporting bottom surface.
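A hedged sketch of this step, continuing the earlier Open3D example: segment_plane runs RANSAC plane fitting, and removing the inliers approximates deleting the points within the first distance threshold of the reference plane; the parameter values are assumptions, not prescribed settings.

```python
# Illustrative sketch: fitting the reference plane of the supporting bottom surface
# with Open3D's RANSAC-based segment_plane. Parameter values are assumptions.
import open3d as o3d

plane_model, inlier_indices = first_point_cloud.segment_plane(
    distance_threshold=0.05,   # points within 5 cm of the plane count as inliers
    ransac_n=3,                # minimal sample: three points define a plane
    num_iterations=1000)
a, b, c, d = plane_model       # plane equation a*x + b*y + c*z + d = 0
# Dropping the inliers approximates deleting the ground points; what remains
# is the second point cloud.
second_point_cloud = first_point_cloud.select_by_index(inlier_indices, invert=True)
```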
In the embodiments of the present application, the first point cloud is obtained by area-based sampling, so the area of each geometric face of the supporting bottom surface and of the split objects is positively correlated with the number of points sampled from it. Since the supporting bottom surface contains planes of relatively large area and is large compared with the surfaces of the split objects, a plane can be solved in three-dimensional space with the objective of maximizing the number of covered points of the first point cloud, and the solution can be taken as the reference plane corresponding to the supporting bottom surface. This ensures that the reference plane accurately indicates the position of the supporting bottom surface in three-dimensional space, so that the supporting bottom surface can be accurately separated from the split objects.
In a possible implementation, when the second point cloud is divided into a plurality of sub point clouds, the second point cloud may first be divided into a plurality of point sets according to the distances between its points, such that for any first point in a point set there is at least one second point in the same set whose distance to the first point is smaller than a second distance threshold, while the distance between points belonging to different point sets is greater than or equal to the second distance threshold; the point sets whose number of points exceeds a number threshold are then determined to be the sub point clouds.
In practice, the three-dimensional scene model is obtained by scanning and reconstructing a real scene, and the split objects in the model correspond to objects in that scene. Since a certain spacing exists between objects in the real scene, a corresponding spacing exists between different split objects in the three-dimensional scene model, and the first point cloud is obtained by sampling the model at a set sampling spacing. The sampling spacing is chosen according to the spacing between split objects, and it is smaller than that spacing, so that for any first point sampled from a split object there is at least one second point sampled from the same split object whose distance to the first point is smaller than the spacing, while the distance between points sampled from different split objects is greater than the spacing. For example, if the spacing between split objects is greater than 100 cm, then for any first point sampled from a split object, the distance to at least one second point sampled from the same object is smaller than 100 cm, whereas the distance between points sampled from different split objects is greater than 100 cm.
In other words, the points corresponding to the same split object in the second point cloud are concentrated in one spatial region, while the points corresponding to different split objects are distributed in separate regions. The second point cloud can therefore be divided into a plurality of point sets according to the distances between its points, so that different point sets correspond to different split objects, and the point sets containing more points than the number threshold are then determined to be sub point clouds.
Because the spacing between split objects differs from one three-dimensional scene model to another, the second distance threshold can be determined from the minimum spacing between split objects in the model, so that the points of the second point cloud are divided into different point sets based on this threshold and different point sets correspond to different split objects. When the second point cloud is divided based on the second distance threshold, points whose mutual distance is smaller than the threshold are placed in the same point set; hence, for any first point in a point set, at least one second point in the same set is closer to it than the second distance threshold, and the distance between points of different sets is greater than or equal to the threshold.
The second distance threshold may be determined from the spacing between the split objects in the three-dimensional scene model and should be smaller than the minimum spacing between them; for example, the second distance threshold may be 15 cm. It should be noted that a second distance threshold of 15 cm is only an example; the embodiments of the present application do not limit its value.
In one example, the second point cloud may be divided into the plurality of point sets by a density-based clustering algorithm such as DBSCAN (Density-Based Spatial Clustering of Applications with Noise), which groups the points of the second point cloud whose mutual distance is smaller than the second distance threshold into the same point set.
After the second point cloud is divided into the plurality of point sets, a point set containing only a small number of points corresponds to a small face or small volume in the three-dimensional scene model, which is usually interference data. Therefore, the point sets whose number of points is smaller than the number threshold are deleted, and each of the remaining point sets is determined to be a sub point cloud.
The number threshold may be determined according to the size of the split objects in the three-dimensional scene model; for example, the number threshold may be 2. It should be noted that a number threshold of 2 is only an example; the embodiments of the present application do not limit its value.
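A minimal sketch of the clustering step using Open3D's DBSCAN implementation; eps plays the role of the second distance threshold and min_points the role of the number threshold, and both values are examples rather than prescriptions.

```python
# Illustrative sketch: clustering the second point cloud with DBSCAN.
# eps corresponds to the second distance threshold (15 cm) and min_points to the
# number threshold; small/noisy groups are labelled -1 and discarded.
import numpy as np

labels = np.asarray(second_point_cloud.cluster_dbscan(eps=0.15, min_points=2))
sub_point_clouds = []
for label in range(labels.max() + 1):                 # label -1 marks noise points
    idx = np.where(labels == label)[0]
    sub_point_clouds.append(second_point_cloud.select_by_index(idx))
```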
In the embodiments of the present application, the second point cloud is divided into a plurality of point sets according to the distances between its points, so that for any first point in a point set there is at least one second point in the same set whose distance to it is smaller than the second distance threshold, while the distance between points of different sets is greater than or equal to the threshold; different point sets therefore correspond to different split objects in the three-dimensional scene model. After the point sets whose number of points does not exceed the number threshold are deleted, each remaining point set is determined to be a sub point cloud, so that different sub point clouds correspond to different split objects. This guarantees the accuracy of the correspondence between sub point clouds and split objects and, in turn, the accuracy of the objects split from the three-dimensional scene model.
In a possible implementation, when the second point cloud is divided into the plurality of point sets, a plurality of processes may search the second point cloud in parallel for points whose mutual distance is smaller than the second distance threshold, each process producing an intermediate point set, where for any third point in an intermediate point set there is at least one fourth point in the same set whose distance to the third point is smaller than the second distance threshold, and the intersection of different intermediate point sets is empty.
Different processes search the second point cloud for points whose mutual distance is smaller than the second distance threshold and add the points they find to their own intermediate point set, so that each process generates one intermediate point set. If a point of the second point cloud has already been added to the intermediate point set of one process, the other processes no longer compute distances for that point; that is, the point will not be added to any other intermediate point set.
After the intermediate point sets generated by the plurality of processes are obtained, the inter-set distances between them are computed. For any first intermediate point set and second intermediate point set, the distance between them is the minimum distance between any point of the first set and any point of the second set. After the inter-set distances are computed, the intermediate point sets whose inter-set distance is smaller than the second distance threshold are merged, yielding the plurality of point sets.
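The merging of intermediate point sets could look roughly as follows; this sketch covers only the sequential merge step (not the multi-process search), represents each intermediate point set as an N x 3 array, and uses SciPy's cdist for the pairwise distances. All names and the threshold are illustrative.

```python
# Illustrative sketch: merging intermediate point sets whose inter-set distance
# (minimum pairwise point distance) is below the second distance threshold.
import numpy as np
from scipy.spatial.distance import cdist

def merge_intermediate_sets(sets: list[np.ndarray], threshold: float = 0.15) -> list[np.ndarray]:
    merged = [s.copy() for s in sets]
    changed = True
    while changed:                                    # repeat until no pair can be merged
        changed = False
        for i in range(len(merged)):
            for j in range(i + 1, len(merged)):
                if cdist(merged[i], merged[j]).min() < threshold:
                    merged[i] = np.vstack([merged[i], merged[j]])
                    del merged[j]
                    changed = True
                    break
            if changed:
                break
    return merged                                     # the resulting point sets
```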
In the embodiments of the present application, a plurality of processes search for points whose mutual distance is smaller than the second distance threshold, each producing an intermediate point set whose points correspond to the same split object in the three-dimensional scene model. By computing the inter-set distances between different intermediate point sets and merging the sets whose inter-set distance is smaller than the second distance threshold, the plurality of point sets is obtained. Searching in parallel with multiple processes shortens the time needed to divide the second point cloud into the point sets and therefore improves the efficiency of splitting the three-dimensional scene model.
In a possible implementation, when the bounding box of the split object corresponding to a sub point cloud is determined, surface reconstruction may be performed on the sub point cloud to obtain a first bounding box of the corresponding split object in the three-dimensional scene model, and the split object corresponding to the sub point cloud is then split from the three-dimensional scene model according to the first bounding box.
Surface reconstruction of the sub point cloud produces a closed surface: the points of the sub point cloud lie on this closed surface or inside the spatial region it encloses. Because the points of the sub point cloud are sampled from the corresponding split object in the three-dimensional scene model, the split object also lies inside the region enclosed by the closed surface, and the gap between the closed surface and the split object is small, so the closed surface separates the corresponding split object from the other split objects. The closed surface can therefore be determined to be the first bounding box of the split object corresponding to the sub point cloud, and the split object can be split from the three-dimensional scene model using this first bounding box.
The surface reconstruction may be performed with a point cloud surface reconstruction algorithm, i.e., an algorithm that generates a bounding surface for a point cloud. The point cloud surface reconstruction algorithm may be, but is not limited to, the alpha shapes algorithm.
A key parameter of a point cloud surface reconstruction algorithm is the reconstruction step, which is related to the accuracy of the reconstructed model: a smaller step yields a finer model, while a larger step yields a coarser model. Surface reconstruction with a smaller step takes longer, and with a larger step it takes less time. Since the first bounding box is only used to isolate the split object so that it can be split from the three-dimensional scene model, the accuracy requirement on the first bounding box is low, and a relatively large reconstruction step can be used to generate it; for example, the surface reconstruction of the sub point cloud may be performed with a reconstruction step of 10 cm.
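A hedged sketch of the surface reconstruction step using Open3D's alpha-shape reconstruction; here sub_point_cloud stands for one of the sub point clouds obtained above, and the alpha value, which plays the role of the reconstruction step, is an example rather than a prescribed value.

```python
# Illustrative sketch: reconstructing a closed surface (first bounding box) around a
# sub point cloud with Open3D's alpha-shape reconstruction.
import open3d as o3d

alpha = 0.1    # roughly 10 cm; a larger alpha gives a coarser but faster reconstruction
first_bounding_box = o3d.geometry.TriangleMesh.create_from_point_cloud_alpha_shape(
    sub_point_cloud, alpha)
```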
In the embodiments of the present application, the first bounding box of the split object corresponding to a sub point cloud is obtained by reconstructing a surface from the sub point cloud. The split object lies inside the first bounding box in three-dimensional space, and the first bounding box of one sub point cloud separates its split object from the split objects of the other sub point clouds, so the corresponding split object can be split from the three-dimensional scene model according to the first bounding box, and the accuracy of the split object is guaranteed.
In a possible implementation, when the split object corresponding to a sub point cloud is split from the three-dimensional scene model according to the first bounding box, a second bounding box of the split object in the three-dimensional scene may additionally be determined from the sub point cloud. The second bounding box is a cuboid bounding box formed by joining six rectangular faces, and the split object corresponding to the sub point cloud lies inside it.
After the second bounding box of a sub point cloud is obtained, the Boolean intersection of the three-dimensional scene model and the second bounding box is computed to obtain an intermediate sub-model, and the Boolean intersection of the intermediate sub-model and the first bounding box of the same sub point cloud is then computed to obtain the split object corresponding to that sub point cloud.
Because the first bounding box contains a large number of faces and the three-dimensional scene model also contains a large number of faces, directly computing their Boolean intersection would be computationally expensive and would make splitting the three-dimensional scene model slow. Instead, after the first and second bounding boxes of a sub point cloud are determined, the Boolean intersection of the three-dimensional scene model and the second bounding box is computed first to obtain the intermediate sub-model, and the Boolean intersection of the intermediate sub-model and the first bounding box is then computed to obtain the split object corresponding to the sub point cloud. Since the second bounding box has only six faces, the first Boolean intersection is cheap to compute, and the resulting intermediate sub-model no longer contains the split objects outside the second bounding box and therefore contains far fewer faces; the cost of the Boolean intersection of the intermediate sub-model and the first bounding box is thus also reduced, which improves the efficiency of splitting the three-dimensional scene model.
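One way the two-stage intersection might be sketched is shown below: the scene mesh is first cropped to the axis-aligned second bounding box, and the much smaller intermediate sub-model is then intersected with the alpha-shape hull. This assumes an Open3D build whose tensor TriangleMesh exposes boolean_intersection; any other mesh Boolean/CSG library could be substituted, and scene_mesh and first_bounding_box are names carried over from the earlier sketches.

```python
# Illustrative two-stage sketch, not a definitive implementation.
import open3d as o3d

aabb = sub_point_cloud.get_axis_aligned_bounding_box()   # second bounding box (6 faces)
# Cheap first-stage cut; crop() approximates the Boolean intersection with the box.
intermediate = scene_mesh.crop(aabb)

# Second stage: Boolean intersection with the alpha-shape hull (first bounding box),
# assuming boolean_intersection is available on the tensor TriangleMesh API.
scene_t = o3d.t.geometry.TriangleMesh.from_legacy(intermediate)
hull_t = o3d.t.geometry.TriangleMesh.from_legacy(first_bounding_box)
split_object = scene_t.boolean_intersection(hull_t).to_legacy()
```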
In one example, the three-dimensional scene model contains 10 million faces, the first bounding box contains 10,000 faces, and the intermediate sub-model contains 200,000 faces. If the Boolean intersection of the three-dimensional scene model and the first bounding box is computed directly, the number of face-pair computations in the Boolean operation is 10,000,000 × 10,000 = 100 billion. If the Boolean intersection of the three-dimensional scene model and the second bounding box is computed first and the Boolean intersection of the intermediate sub-model and the first bounding box is computed afterwards, the number of face-pair computations is 10,000,000 × 6 + 200,000 × 10,000 = 2.06 billion. Compared with directly intersecting the three-dimensional scene model with the first bounding box, the two-stage computation saves about 98% of the work, which reduces the complexity of the Boolean operations and improves the efficiency of splitting the three-dimensional scene model.
In the embodiments of the present application, the first bounding box of the split object corresponding to a sub point cloud is determined by surface reconstruction, while the second bounding box contains only six faces. The Boolean intersection of the second bounding box and the three-dimensional scene model is computed first to obtain the intermediate sub-model, and the Boolean intersection of the intermediate sub-model and the first bounding box is then computed to obtain the split object corresponding to the sub point cloud, which reduces the computational complexity of the Boolean operations and improves the efficiency of splitting the three-dimensional scene model.
In a possible implementation, when the second bounding box of the split object corresponding to a sub point cloud is determined, the minimum circumscribed cuboid of the sub point cloud may be obtained and determined to be the second bounding box of the corresponding split object. It should be understood that the minimum circumscribed cuboid of the sub point cloud is a cuboid bounding box formed by joining six rectangular faces.
In the embodiments of the present application, the points of a sub point cloud are obtained by sampling the surface of the corresponding split object, so the region occupied by the sub point cloud in three-dimensional space is the region occupied by the split object, and the split object lies inside the minimum circumscribed cuboid of the sub point cloud. The minimum circumscribed cuboid can therefore serve as the second bounding box of the split object corresponding to the sub point cloud; using it as the second bounding box reduces the number of faces enclosed by the second bounding box and thus reduces the computational cost of the Boolean intersection.
In a possible implementation, when the second bounding box of the split object corresponding to a sub point cloud is determined, the maximum and minimum X-axis coordinates, the maximum and minimum Y-axis coordinates, and the maximum and minimum Z-axis coordinates of the points of the sub point cloud in the three-dimensional coordinate system may be obtained; these maxima and minima are combined to obtain 8 points in the three-dimensional coordinate system, and the cuboid whose vertices are these 8 points is determined to be the second bounding box of the split object corresponding to the sub point cloud.
The coordinates of a point of the sub point cloud can be written as (Px, Py, Pz). Then Px_min, the minimum of Px over all points of the sub point cloud, is the minimum X-axis coordinate of the points in the three-dimensional coordinate system, and Px_max, the maximum of Px, is the maximum X-axis coordinate; similarly, Py_min and Py_max are the minimum and maximum Y-axis coordinates, and Pz_min and Pz_max are the minimum and maximum Z-axis coordinates of the points of the sub point cloud.
After Px_min, Px_max, Py_min, Py_max, Pz_min, and Pz_max are determined, 8 points can be formed from these 6 coordinate values, with coordinates (Px_min, Py_min, Pz_min), (Px_min, Py_min, Pz_max), (Px_min, Py_max, Pz_min), (Px_min, Py_max, Pz_max), (Px_max, Py_min, Pz_min), (Px_max, Py_min, Pz_max), (Px_max, Py_max, Pz_min), and (Px_max, Py_max, Pz_max). The cuboid bounding box whose vertices are these 8 points can then be determined to be the second bounding box of the corresponding sub point cloud.
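A minimal sketch of constructing the second bounding box from the coordinate extrema of a sub point cloud, operating on a plain N x 3 coordinate array; the function name is hypothetical.

```python
# Illustrative sketch: the second bounding box as the axis-aligned cuboid spanned by
# the coordinate extrema of a sub point cloud; pts is an (N, 3) array of coordinates.
import numpy as np
from itertools import product

def axis_aligned_box_vertices(pts: np.ndarray) -> np.ndarray:
    (px_min, py_min, pz_min) = pts.min(axis=0)
    (px_max, py_max, pz_max) = pts.max(axis=0)
    # All 8 combinations of the min/max values along X, Y and Z.
    return np.array(list(product((px_min, px_max),
                                 (py_min, py_max),
                                 (pz_min, pz_max))))
```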
In the embodiments of the present application, the minimum and maximum X-axis, Y-axis, and Z-axis coordinates of the points of the sub point cloud are determined, the 6 values are combined into 8 points, and the cuboid bounding box with these 8 points as vertices is determined to be the second bounding box of the corresponding sub point cloud, so that the split object corresponding to the sub point cloud lies inside the second bounding box. The second bounding box can thus be determined quickly from the coordinate extrema of the points of the sub point cloud without any complex iterative computation, which further improves the efficiency of splitting the three-dimensional scene model.
Three-dimensional scene model splitting device
Corresponding to the above embodiments of the three-dimensional scene model splitting method, Fig. 3 shows a schematic diagram of a three-dimensional scene model splitting apparatus according to an embodiment of the present application. As shown in Fig. 3, the three-dimensional scene model splitting apparatus 300 includes:
a conversion unit 301, configured to convert the three-dimensional scene model into a first point cloud;
a separation unit 302, configured to delete a point corresponding to the bottom surface of the support in the first point cloud, and obtain a second point cloud;
a clustering unit 303, configured to divide the second point cloud into a plurality of sub point clouds, where the sub point clouds include a plurality of points, and different sub point clouds correspond to different split objects in the three-dimensional scene model;
a splitting unit 304, configured to determine, according to the sub-point cloud, a bounding box of a split object corresponding to the sub-point cloud in the three-dimensional scene model, and split, according to the bounding box, the split object corresponding to the sub-point cloud from the three-dimensional scene model.
In an embodiment of the present application, after the conversion unit 301 converts the three-dimensional scene model into the first point cloud, the separation unit 302 deletes the points corresponding to the supporting bottom surface from the first point cloud to obtain the second point cloud. Because a certain distance exists between different split objects in the three-dimensional scene model, the clustering unit 303 can divide the second point cloud into sub-point clouds corresponding to the different split objects, and the splitting unit 304 can then determine the bounding box of each corresponding split object from its sub-point cloud and split that object from the three-dimensional scene model through the bounding box. By converting the three-dimensional scene model into a point cloud, bounding boxes of the different split objects in the three-dimensional scene model can be determined, and the corresponding split objects can then be split from the three-dimensional scene model according to the bounding boxes, so that automatic splitting of the three-dimensional scene model is realized, manual participation and time consumption in the splitting process are reduced, and the splitting efficiency of the three-dimensional scene model is improved.
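To illustrate how the four units cooperate, the following Python sketch outlines the overall pipeline; the helper names (convert_to_point_cloud, remove_support_floor, cluster_points, compute_bounding_box, boolean_split) are hypothetical placeholders for the operations described above, not APIs defined by this application.

def split_scene(scene_model):
    # Conversion unit 301: sample the scene model into the first point cloud.
    first_cloud = convert_to_point_cloud(scene_model)
    # Separation unit 302: delete points belonging to the supporting bottom surface.
    second_cloud = remove_support_floor(first_cloud)
    # Clustering unit 303: one sub-point cloud per split object.
    sub_clouds = cluster_points(second_cloud)
    # Splitting unit 304: bounding box per sub-point cloud, then split the model.
    split_objects = []
    for sub_cloud in sub_clouds:
        box = compute_bounding_box(sub_cloud)
        split_objects.append(boolean_split(scene_model, box))
    return split_objects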
It should be noted that, the three-dimensional scene model splitting device in this embodiment is used to implement the three-dimensional scene model splitting method in the foregoing method embodiment, and has the beneficial effects of the corresponding method embodiment, which is not described herein again.
Electronic equipment
Fig. 4 is a schematic block diagram of an electronic device according to an embodiment of the present application; the specific implementation of the electronic device is not limited by the embodiments of the present application. As shown in Fig. 4, the electronic device may include: a processor 402, a communication interface (Communications Interface) 404, a memory 406, and a communication bus 408. Wherein:
processor 402, communication interface 404, and memory 406 communicate with each other via communication bus 408.
A communication interface 404 for communicating with other electronic devices or servers.
The processor 402 is configured to execute the program 410, and may specifically perform the relevant steps in any of the foregoing three-dimensional scene model splitting method embodiments.
In particular, the program 410 may include program code, and the program code includes computer operation instructions.
The processor 402 may be a CPU, an application-specific integrated circuit (ASIC), or one or more integrated circuits configured to implement embodiments of the present application. The one or more processors included in the electronic device may be processors of the same type, such as one or more CPUs, or may be processors of different types, such as one or more CPUs and one or more ASICs.
RISC-V is an open-source instruction set architecture based on reduced instruction set computing (RISC) principles. It can be applied to devices such as microcontrollers and FPGA chips, and in particular to fields such as Internet of Things security, industrial control, mobile phones, and personal computers. Because it was designed with small size, speed, and low power consumption in mind, RISC-V is especially suitable for modern computing devices such as warehouse-scale cloud computers, high-end mobile phones, and micro embedded systems. With the rise of the artificial intelligence Internet of Things (AIoT), the RISC-V instruction set architecture is receiving more and more attention and support, and is expected to become a widely used CPU architecture of the next generation.
The computer operation instructions in embodiments of the present application may be computer operation instructions based on the RISC-V instruction set architecture, and correspondingly, the processor 402 may be designed based on the RISC-V instruction set. Specifically, the chip of the processor in the electronic device provided by the embodiment of the application may be a chip designed with the RISC-V instruction set, and the chip may execute executable code based on the configured instructions, thereby implementing the three-dimensional scene model splitting method in the above embodiments.
The memory 406 is configured to store the program 410. The memory 406 may comprise a high-speed RAM memory, and may also include a non-volatile memory, such as at least one disk memory.
Program 410 may be specifically configured to cause processor 402 to perform the three-dimensional scene model splitting method of any of the foregoing embodiments.
The specific implementation of each step in the procedure 410 may refer to corresponding descriptions in the corresponding steps and units in any of the foregoing three-dimensional scene model splitting method embodiments, which are not repeated herein. It will be clear to those skilled in the art that, for convenience and brevity of description, specific working procedures of the apparatus and modules described above may refer to corresponding procedure descriptions in the foregoing method embodiments, which are not repeated herein.
According to the electronic device provided by the embodiment of the application, after the three-dimensional scene model is converted into the first point cloud, the points corresponding to the supporting bottom surface are deleted from the first point cloud to obtain the second point cloud. Because a certain distance exists between different split objects in the three-dimensional scene model, the sub-point clouds corresponding to the different split objects can be determined from the second point cloud, the bounding box of each corresponding split object can then be determined from its sub-point cloud, and the corresponding split object can be split from the three-dimensional scene model through the bounding box. By converting the three-dimensional scene model into a point cloud, bounding boxes of the different split objects in the three-dimensional scene model can be determined, and the corresponding split objects can then be split from the three-dimensional scene model according to the bounding boxes, so that automatic splitting of the three-dimensional scene model is realized, manual participation and time consumption in the splitting process are reduced, and the splitting efficiency of the three-dimensional scene model is improved.
Computer storage medium
The present application also provides a computer-readable storage medium storing instructions for causing a machine to perform the three-dimensional scene model splitting method described herein. Specifically, a system or apparatus may be provided with a storage medium on which software program code realizing the functions of any of the above embodiments is stored, and a computer (or CPU or MPU) of the system or apparatus may read out and execute the program code stored in the storage medium.
In this case, the program code itself read from the storage medium may realize the functions of any of the above-described embodiments, and thus the program code and the storage medium storing the program code form part of the present application.
Examples of the storage medium for providing the program code include a floppy disk, a hard disk, a magneto-optical disk, an optical disk (e.g., CD-ROM, CD-R, CD-RW, DVD-ROM, DVD-RAM, DVD-RW, DVD+RW), a magnetic tape, a nonvolatile memory card, and a ROM. Alternatively, the program code may be downloaded from a server computer by a communication network.
Computer program product
Embodiments of the present application also provide a computer program product comprising computer instructions that instruct a computing device to perform any corresponding operations of the above-described method embodiments.
It should be noted that, according to implementation requirements, each component/step described in the embodiments of the present application may be split into more components/steps, or two or more components/steps or part of operations of the components/steps may be combined into new components/steps, so as to achieve the objects of the embodiments of the present application.
The above-described methods according to embodiments of the present application may be implemented in hardware or firmware, or as software or computer code that can be stored in a recording medium such as a CD-ROM, RAM, floppy disk, hard disk, or magneto-optical disk, or as computer code originally stored in a remote recording medium or a non-transitory machine-readable medium and downloaded through a network to be stored in a local recording medium, so that the methods described herein may be performed by such software stored on a recording medium using a general-purpose computer, a special-purpose processor, or programmable or dedicated hardware such as an ASIC or FPGA. It will be understood that a computer, processor, microprocessor controller, or programmable hardware includes a storage component (e.g., RAM, ROM, flash memory, etc.) that can store or receive software or computer code which, when accessed and executed by the computer, processor, or hardware, implements the methods described herein. Furthermore, when a general-purpose computer accesses code for implementing the methods shown herein, execution of the code converts the general-purpose computer into a special-purpose computer for performing the methods shown herein.
It should be noted that, the information related to the user (including, but not limited to, user equipment information, user personal information, etc.) and the data related to the embodiments of the present disclosure (including, but not limited to, sample data for training the model, data for analyzing, stored data, presented data, etc.) are information and data authorized by the user or sufficiently authorized by each party, and the collection, use and processing of the related data need to comply with the related laws and regulations and standards of the related country and region, and are provided with corresponding operation entries for the user to select authorization or rejection.
It should be understood that each embodiment in this specification is described in an incremental manner, the same or similar parts of the embodiments may be referred to each other, and each embodiment focuses on its differences from the other embodiments. In particular, for the apparatus and system embodiments, the description is relatively simple because they are substantially similar to the method embodiments; for relevant parts, reference may be made to the descriptions of the other embodiments.
It should be understood that the foregoing describes specific embodiments of this specification. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims can be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing are also possible or may be advantageous.
It should be understood that elements described herein in the singular or shown in the drawings are not intended to limit the number of elements to one. Furthermore, modules or elements described or illustrated herein as separate may be combined into a single module or element, and modules or elements described or illustrated herein as a single may be split into multiple modules or elements.
It is also to be understood that the phraseology and terminology employed herein are for the purpose of description and should not be regarded as limiting. The use of these terms and expressions is not meant to exclude any equivalents of the features shown and described (or portions thereof), and it is recognized that various modifications are possible and are intended to be included within the scope of the claims. Other modifications, variations, and alternatives are also possible. Accordingly, the claims should be looked to in order to cover all such equivalents.

Claims (14)

1. A three-dimensional scene model splitting method, comprising:
converting the three-dimensional scene model into a first point cloud;
deleting points corresponding to the supporting bottom surface in the first point cloud to obtain a second point cloud;
dividing the second point cloud into a plurality of sub point clouds, wherein each sub point cloud comprises a plurality of points, and different sub point clouds correspond to different split objects in the three-dimensional scene model;
and determining a bounding box of a split object corresponding to the sub-point cloud in the three-dimensional scene model according to the sub-point cloud, and splitting the split object corresponding to the sub-point cloud from the three-dimensional scene model according to the bounding box.
2. The method of claim 1, wherein the deleting the point of the first point cloud corresponding to the supporting floor to obtain a second point cloud comprises:
determining a reference plane corresponding to the bottom surface of the support in the first point cloud;
and deleting points in the first point cloud whose distance to the reference plane is smaller than a first distance threshold, so as to obtain the second point cloud.
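As an illustration of this filtering step (not part of the claim), the following Python sketch removes points near a reference plane given in the form a*x + b*y + c*z + d = 0; the function name, plane parameterization, and array layout are assumptions made for the example.

import numpy as np

def remove_floor_points(first_point_cloud: np.ndarray, plane: np.ndarray,
                        first_dist_threshold: float) -> np.ndarray:
    """Delete points closer to the reference plane a*x + b*y + c*z + d = 0 than the threshold."""
    a, b, c, d = plane
    normal = np.array([a, b, c])
    distances = np.abs(first_point_cloud @ normal + d) / np.linalg.norm(normal)
    # Points within the first distance threshold of the reference plane belong to the
    # supporting bottom surface and are removed; the remainder forms the second point cloud.
    return first_point_cloud[distances >= first_dist_threshold]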
3. The method of claim 2, wherein the converting the three-dimensional scene model to the first point cloud comprises:
acquiring a plurality of points from a geometric surface in the three-dimensional scene model according to the area of the geometric surface, wherein the area of the geometric surface is positively correlated with the number of the acquired points from the geometric surface;
a set of points acquired from a geometric surface comprised by the three-dimensional scene model is determined as the first point cloud.
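As an illustration of area-proportional sampling on triangulated geometric surfaces (not part of the claim), the Python sketch below distributes a total point budget across triangles in proportion to their areas; the patent does not prescribe this particular sampling scheme, and the function name and mesh representation are assumptions.

import numpy as np

def sample_point_cloud(vertices: np.ndarray, faces: np.ndarray, total_points: int) -> np.ndarray:
    """Sample points on a triangle mesh so that larger faces receive proportionally more points."""
    v0, v1, v2 = vertices[faces[:, 0]], vertices[faces[:, 1]], vertices[faces[:, 2]]
    # Triangle areas via the cross product; sampling probability is proportional to area.
    areas = 0.5 * np.linalg.norm(np.cross(v1 - v0, v2 - v0), axis=1)
    probs = areas / areas.sum()
    chosen = np.random.choice(len(faces), size=total_points, p=probs)
    # Uniform barycentric sampling inside each chosen triangle.
    u, v = np.random.rand(total_points), np.random.rand(total_points)
    flip = u + v > 1.0
    u[flip], v[flip] = 1.0 - u[flip], 1.0 - v[flip]
    return v0[chosen] + u[:, None] * (v1[chosen] - v0[chosen]) + v[:, None] * (v2[chosen] - v0[chosen])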
4. A method according to claim 3, wherein said determining a reference plane corresponding to a bottom surface of a support in the first point cloud comprises:
carrying out plane iteration in a three-dimensional space with the aim of maximizing the number of points of the first point cloud covered by the plane, and determining the iterated plane as a reference plane corresponding to the supporting bottom surface.
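One common way to iterate candidate planes and keep the one covering the most points is RANSAC-style sampling; the sketch below is an editorial assumption of that flavor, not necessarily the iteration scheme intended by the application, and the inlier distance and iteration count are illustrative parameters.

import numpy as np

def fit_reference_plane(points: np.ndarray, inlier_dist: float, iterations: int = 200) -> np.ndarray:
    """Return plane (a, b, c, d) with a*x + b*y + c*z + d = 0 covering the most points."""
    best_plane, best_count = None, -1
    rng = np.random.default_rng()
    for _ in range(iterations):
        p0, p1, p2 = points[rng.choice(len(points), size=3, replace=False)]
        normal = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(normal)
        if norm < 1e-9:                      # degenerate (collinear) sample, try again
            continue
        normal = normal / norm
        d = -normal @ p0
        count = int(np.sum(np.abs(points @ normal + d) < inlier_dist))
        if count > best_count:               # keep the plane covering the most points
            best_plane, best_count = np.append(normal, d), count
    return best_plane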
5. The method of claim 1, wherein the partitioning the second point cloud into a plurality of sub-point clouds comprises:
dividing the second point cloud into a plurality of point sets according to the distances between points of the second point cloud, wherein the distance between any first point in the point sets and at least one second point in the point sets is smaller than a second distance threshold value, and the distances between different point sets are larger than or equal to the second distance threshold value;
and determining a point set, among the plurality of point sets, whose number of points is larger than a number threshold, as the sub-point cloud.
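A distance-threshold clustering of this kind is often implemented as a region-growing search over a k-d tree; the Python sketch below (using SciPy, an assumed tool choice rather than one named by the application) groups points whose mutual distance is below the second distance threshold and keeps only clusters larger than the count threshold.

import numpy as np
from scipy.spatial import cKDTree

def cluster_sub_point_clouds(points: np.ndarray, dist_threshold: float, min_points: int):
    tree = cKDTree(points)
    labels = np.full(len(points), -1, dtype=int)
    current = 0
    for seed in range(len(points)):
        if labels[seed] != -1:
            continue
        # Grow a point set: every point lies within dist_threshold of some other point in the set.
        stack = [seed]
        labels[seed] = current
        while stack:
            idx = stack.pop()
            for nb in tree.query_ball_point(points[idx], dist_threshold):
                if labels[nb] == -1:
                    labels[nb] = current
                    stack.append(nb)
        current += 1
    # Keep only point sets larger than the count threshold as sub-point clouds.
    return [points[labels == c] for c in range(current) if np.sum(labels == c) > min_points]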
6. The method of claim 5, wherein the dividing the second point cloud into a plurality of point sets according to the distance between points of the second point cloud comprises:
searching points with the distance smaller than the second distance threshold value from the second point cloud through a plurality of processes in parallel to obtain an intermediate point set corresponding to each process, wherein the distance between any third point in the intermediate point set and at least one fourth point in the intermediate point set is smaller than the second distance threshold value, and the intersection of different intermediate point sets is an empty set;
calculating an inter-group distance between the intermediate point sets, wherein the inter-group distance between the first intermediate point set and the second intermediate point set is equal to the minimum value of the distance between any point in the first intermediate point set and any point in the second intermediate point set;
and merging the intermediate point sets with the distance between the corresponding groups smaller than the second distance threshold value to obtain the point set.
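To illustrate the merge step (not part of the claim), the sketch below computes the inter-group distance as the minimum pairwise distance between two intermediate point sets and merges sets whose inter-group distance is below the second distance threshold; the parallel per-process search that produces the intermediate sets is omitted, and the use of SciPy's cdist is an assumption.

import numpy as np
from scipy.spatial.distance import cdist

def merge_intermediate_sets(sets: list[np.ndarray], dist_threshold: float) -> list[np.ndarray]:
    merged = list(sets)
    changed = True
    while changed:
        changed = False
        for i in range(len(merged)):
            for j in range(i + 1, len(merged)):
                # Inter-group distance: minimum distance between any point of set i and any point of set j.
                if cdist(merged[i], merged[j]).min() < dist_threshold:
                    merged[i] = np.vstack([merged[i], merged[j]])
                    del merged[j]
                    changed = True
                    break
            if changed:
                break
    return merged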
7. The method according to any one of claims 1-6, wherein the determining, according to the sub-point cloud, a bounding box of a split object corresponding to the sub-point cloud in the three-dimensional scene model, and splitting, according to the bounding box, the split object corresponding to the sub-point cloud from the three-dimensional scene model includes:
carrying out surface reconstruction according to the sub-point cloud to obtain a first bounding box of a split object corresponding to the sub-point cloud in the three-dimensional scene model;
and splitting a split object corresponding to the sub-point cloud from the three-dimensional scene model according to the first bounding box.
8. The method of claim 7, wherein the splitting the split object corresponding to the sub-point cloud from the three-dimensional scene model according to the first bounding box comprises:
determining a second surrounding frame of a split object corresponding to the sub-point cloud in the three-dimensional scene model according to the sub-point cloud, wherein the second surrounding frame is a cuboid surrounding box, and the split object corresponding to the sub-point cloud is positioned in the second surrounding frame;
calculating a Boolean intersection of the three-dimensional scene model and the second bounding box to obtain an intermediate sub-model;
and calculating a Boolean intersection of the intermediate sub-model and the first bounding box to obtain a split object corresponding to the sub-point cloud.
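Assuming a mesh-processing library with Boolean operations is available (trimesh is used here purely as an example, and it requires a Boolean backend such as Blender or OpenSCAD to be installed), the two-step intersection might look like the sketch below, where scene_mesh, box_mesh, and hull_mesh stand for the scene model, the second bounding box, and the first bounding box obtained by surface reconstruction; these names are assumptions of the example.

import trimesh

def extract_split_object(scene_mesh: trimesh.Trimesh,
                         box_mesh: trimesh.Trimesh,
                         hull_mesh: trimesh.Trimesh) -> trimesh.Trimesh:
    # Step 1: Boolean intersection of the scene model with the cuboid second bounding box
    # keeps only geometry inside the box, reducing the faces handled in the next step.
    intermediate = trimesh.boolean.intersection([scene_mesh, box_mesh])
    # Step 2: Boolean intersection of the intermediate sub-model with the first bounding box
    # (the reconstructed surface around the sub-point cloud) yields the split object.
    return trimesh.boolean.intersection([intermediate, hull_mesh])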
9. The method of claim 8, wherein the determining, according to the sub-point cloud, a second bounding box of the split object corresponding to the sub-point cloud in the three-dimensional scene model includes:
and acquiring the minimum external cuboid of the sub-point cloud, and determining the minimum external cuboid as a second surrounding frame of the split object corresponding to the sub-point cloud.
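The minimum circumscribed cuboid is an oriented bounding box. As an editorial illustration only, the PCA-based sketch below approximates such a box from the sub-point cloud; it is not guaranteed to be the true minimum-volume cuboid, which generally requires a convex-hull-based search, and the function name is an assumption.

import numpy as np

def approximate_oriented_box(points: np.ndarray) -> np.ndarray:
    """Approximate an oriented bounding box of an (N, 3) sub-point cloud via PCA."""
    center = points.mean(axis=0)
    centered = points - center
    # Principal axes of the point distribution serve as the box orientation.
    _, _, axes = np.linalg.svd(centered, full_matrices=False)
    local = centered @ axes.T              # coordinates in the box frame
    mins, maxs = local.min(axis=0), local.max(axis=0)
    corners_local = np.array([[x, y, z]
                              for x in (mins[0], maxs[0])
                              for y in (mins[1], maxs[1])
                              for z in (mins[2], maxs[2])])
    return corners_local @ axes + center   # box corners back in world coordinates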
10. The method of claim 8, wherein the determining, according to the sub-point cloud, a second bounding box of the split object corresponding to the sub-point cloud in the three-dimensional scene model includes:
acquiring maximum and minimum values of X-axis coordinates, maximum and minimum values of Y-axis coordinates and maximum and minimum values of Z-axis coordinates of points in the sub-point cloud in a three-dimensional space coordinate system;
combining the maximum value and the minimum value of the X-axis coordinates, the maximum value and the minimum value of the Y-axis coordinates, and the maximum value and the minimum value of the Z-axis coordinates to obtain 8 points in the three-dimensional space coordinate system;
and determining the cuboid taking the 8 points as vertices as a second surrounding frame of the split object corresponding to the sub-point cloud.
11. A three-dimensional scene model splitting apparatus comprising:
the conversion unit is used for converting the three-dimensional scene model into a first point cloud;
the separation unit is used for deleting the points corresponding to the supporting bottom surface in the first point cloud to obtain a second point cloud;
the clustering unit is used for dividing the second point cloud into a plurality of sub point clouds, wherein each sub point cloud comprises a plurality of points, and different sub point clouds correspond to different split objects in the three-dimensional scene model;
the splitting unit is used for determining a bounding box of a splitting object corresponding to the sub-point cloud in the three-dimensional scene model according to the sub-point cloud, and splitting the splitting object corresponding to the sub-point cloud from the three-dimensional scene model according to the bounding box.
12. An electronic device, comprising: the device comprises a processor, a memory, a communication interface and a communication bus, wherein the processor, the memory and the communication interface are communicated with each other through the communication bus;
the memory is configured to store at least one executable instruction that causes the processor to perform operations corresponding to the method of any one of claims 1-10.
13. A computer storage medium having stored thereon a computer program which, when executed by a processor, implements the method of any of claims 1-10.
14. A computer program product comprising computer instructions that instruct a computing device to perform the method of any one of claims 1-10.
CN202310623505.2A 2023-05-30 2023-05-30 Three-dimensional scene model splitting method and device, electronic equipment and storage medium Pending CN116912817A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310623505.2A CN116912817A (en) 2023-05-30 2023-05-30 Three-dimensional scene model splitting method and device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN116912817A true CN116912817A (en) 2023-10-20

Family

ID=88361661

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310623505.2A Pending CN116912817A (en) 2023-05-30 2023-05-30 Three-dimensional scene model splitting method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN116912817A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117424970A (en) * 2023-10-23 2024-01-19 神力视界(深圳)文化科技有限公司 Light control method and device, mobile terminal and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination