CN110930515B - Three-dimensional modeling method and device, storage medium and electronic equipment - Google Patents

Three-dimensional modeling method and device, storage medium and electronic equipment

Info

Publication number
CN110930515B
CN110930515B (application CN201911195902.4A)
Authority
CN
China
Prior art keywords
point cloud
cloud data
real
laser point
time
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201911195902.4A
Other languages
Chinese (zh)
Other versions
CN110930515A (en)
Inventor
王德海
傅洪全
陈曦
马骏
邵九
黄泽华
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Qifan Technology Co ltd
Skills Training Center Of State Grid Jiangsu Electric Power Co ltd
Original Assignee
Beijing Qifan Technology Co ltd
Skills Training Center Of State Grid Jiangsu Electric Power Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Qifan Technology Co ltd, Skills Training Center Of State Grid Jiangsu Electric Power Co ltd filed Critical Beijing Qifan Technology Co ltd
Priority to CN201911195902.4A priority Critical patent/CN110930515B/en
Publication of CN110930515A publication Critical patent/CN110930515A/en
Application granted granted Critical
Publication of CN110930515B publication Critical patent/CN110930515B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/20Finite element generation, e.g. wire-frame surface description, tesselation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/70Denoising; Smoothing

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The embodiments of the present application provide a three-dimensional modeling method, a three-dimensional modeling apparatus, a storage medium and an electronic device. The method comprises: acquiring laser point cloud data of an object in real time; and generating a real-time three-dimensional model corresponding to the object from the laser point cloud data. By acquiring the laser point cloud data of the object in real time and using it to generate the real-time three-dimensional model, the embodiments of the present application can update the three-dimensional model corresponding to the object in real time.

Description

Three-dimensional modeling method and device, storage medium and electronic equipment
Technical Field
The present application relates to the field of computer technologies, and in particular, to a method and apparatus for three-dimensional modeling, a storage medium, and an electronic device.
Background
VR (Virtual Reality) technology is a computer simulation technique that can create, and let users experience, a virtual world. A computer generates a simulated environment that fuses multi-source information with interactive three-dimensional dynamic views and simulated physical behaviors, immersing the user in that environment.
Currently, existing methods for presenting VR scenes generally load a pre-designed three-dimensional model into the virtual scene. However, because the three-dimensional model is designed in advance, the corresponding virtual object in the virtual scene can only ever show the one shape that the model encodes; that is, the three-dimensional model cannot be updated.
Disclosure of Invention
The embodiments of the present application aim to provide a three-dimensional modeling method and apparatus, a storage medium and an electronic device, so as to update the three-dimensional model corresponding to an object in real time.
In a first aspect, embodiments of the present application provide a method for three-dimensional modeling, the method including: acquiring laser point cloud data of an object in real time; and generating a real-time three-dimensional model corresponding to the object by utilizing the laser point cloud data.
Therefore, by acquiring the laser point cloud data of the object in real time and using it to generate a real-time three-dimensional model, the embodiments of the present application can update the three-dimensional model corresponding to the object in real time.
In addition, the three-dimensional model is built automatically, without manual processing, which can improve modeling efficiency.
In one possible embodiment, generating a real-time three-dimensional model corresponding to the object using the laser point cloud data includes: denoising the laser point cloud data to obtain denoised data; and generating a real-time three-dimensional model corresponding to the object by using the denoised data.
Therefore, by denoising the laser point cloud data, the embodiments of the present application remove, from the laser point cloud, data belonging to objects other than the target object.
In one possible embodiment, acquiring laser point cloud data of an object in real time includes: and acquiring laser point cloud data of the object in real time through a plurality of laser scanners uniformly distributed around the object.
Therefore, the embodiment of the application acquires the laser point cloud data through the plurality of laser scanners uniformly distributed around the object, so that the laser point cloud data of the object can be acquired in real time, and the three-dimensional model can be updated in real time.
In one possible embodiment, the number of the plurality of laser scanners is 4, 8 or 16.
Therefore, the embodiments of the present application accurately acquire the laser point cloud data of the object through a preset number of laser scanners.
In a second aspect, embodiments of the present application provide an apparatus for three-dimensional modeling, the apparatus including: the acquisition module is used for acquiring laser point cloud data of the object in real time; and the generating module is used for generating a real-time three-dimensional model corresponding to the object by utilizing the laser point cloud data.
In one possible embodiment, the generating module includes: the denoising module is used for denoising the laser point cloud data to obtain denoised data; and the generating sub-module is used for generating a real-time three-dimensional model corresponding to the object by using the denoised data.
In a possible embodiment, the acquiring module is further configured to acquire laser point cloud data of the object in real time through a plurality of laser scanners uniformly distributed around the object.
In one possible embodiment, the number of the plurality of laser scanners is 4, 8 or 16.
In a third aspect, embodiments of the present application provide a storage medium having stored thereon a computer program which, when executed by a processor, performs the method of the first aspect or any alternative implementation of the first aspect.
In a fourth aspect, an embodiment of the present application provides an electronic device, including: a processor, a memory and a bus, the memory storing machine-readable instructions executable by the processor, the processor and the memory in communication via the bus when the electronic device is running, the machine-readable instructions when executed by the processor performing the method of the first aspect or any alternative implementation of the first aspect.
In a fifth aspect, the present application provides a computer program product which, when run on a computer, causes the computer to perform the method of the first aspect or any of the possible implementations of the first aspect.
In order to make the above objects, features and advantages of the embodiments of the present application more comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
To illustrate the technical solutions of the embodiments of the present application more clearly, the drawings needed in the embodiments are briefly described below. It should be understood that the following drawings illustrate only some embodiments of the present application and should not be considered limiting of its scope; a person skilled in the art may derive other related drawings from them without inventive effort.
FIG. 1 is a schematic diagram of an application scenario to which embodiments of the present application are applicable;
FIG. 2 illustrates a flow chart of a method of three-dimensional modeling provided by an embodiment of the present application;
FIG. 3 shows a block diagram of an apparatus for generating a three-dimensional model according to an embodiment of the present application;
fig. 4 is a block diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described below with reference to the drawings in the embodiments of the present application.
It should be noted that: like reference numerals and letters denote like items in the following figures, and thus once an item is defined in one figure, no further definition or explanation thereof is necessary in the following figures. Meanwhile, in the description of the present application, the terms "first", "second", and the like are used only to distinguish the description, and are not to be construed as indicating or implying relative importance.
At present, the typical VR scene processing flow is as follows: a modeler first designs a three-dimensional model; a developer then pre-stores the file of the designed model on a storage medium of the VR device, such as a disk or memory; the model is loaded from the storage medium into the device's memory or video memory; and the model file is finally rendered.
For example, a modeler may first design a cable-training-related three-dimensional model through laser modeling (or drawing-based modeling, image-based modeling, etc.), and a developer may then pre-store the file of that model on a storage medium of the VR device, such as a disk or memory, so that a teacher can conduct related cable-skill training.
However, the above prior art has at least the following problems:
the existing VR scene processing method generally obtains a pre-designed three-dimensional model, that is, the related data such as the size and shape of the model need to be designed in advance. And, existing various programmed engines cannot edit three-dimensional models (e.g., complex models, etc.) through code. Therefore, in the process of virtual reality scene and user interaction, the three-dimensional model can only keep the original form, so that real-time updating cannot be realized.
For example, when the actual shape of a real object changes, the prior art can only display the model of the object's initial shape in the virtual reality scene; the deformed virtual object cannot be shown. To reflect the shape change of the corresponding virtual object in the VR scene, the three-dimensional model must be redesigned and the redesigned model stored again on the storage medium of the VR device.
In addition, the existing modeling approach basically requires the participation of a modeler who designs the three-dimensional model; the model cannot be produced directly by dedicated model-editing software alone, which also leads to long modeling cycles.
In addition, the existing manual modeling approach reproduces factors such as the size and appearance of the object; in practice, however, many objects are irregular, which increases the modeling difficulty. For example, reproducing an irregular lump of mud is no less difficult than modeling a classroom or a building.
In addition, although existing laser point cloud modeling can model an irregular object, each modeling pass still produces a model of only one fixed form; when the actual shape of the irregular object changes, the deformed object must be modeled again from scratch.
In view of the above, the embodiments of the present application provide a three-dimensional modeling method that acquires laser point cloud data of an object in real time and uses it to generate a real-time three-dimensional model, thereby updating the three-dimensional model corresponding to the object in real time.
Referring to fig. 1, fig. 1 is a schematic diagram of an application scenario to which an embodiment of the present application is applicable. The scenario shown in fig. 1 includes: 8 laser scanners and a computer.
The 8 laser scanners can be arranged at the 8 corners of a cuboid house, and their positions can remain fixed.
It should be understood that the type or model of each of the 8 laser scanners may be set according to actual requirements, and the embodiment of the present application is not limited thereto.
The computer may be a notebook or a desktop. That is, the specific type of the computer may also be set according to the actual requirement, and the embodiment of the present application is not limited thereto.
In the embodiment of the application, the object is placed at a preset position in the house so that it lies at the common focus of the 8 laser scanners. The 8 laser scanners can then scan the object at the focus to obtain laser point cloud data, and each scanner can upload its scanned laser point cloud data to the computer.
The computer may then aggregate, analyze, and process all the laser point cloud data to obtain processed data. Finally, the computer may generate a real-time three-dimensional model using the processed data by a laser modeling method.
It should be noted that the three-dimensional modeling scheme provided in the embodiments of the present application may also be extended to other suitable application scenarios and is not limited to the application scenario 100 shown in fig. 1. Furthermore, although only 8 laser scanners are shown in fig. 1, those skilled in the art will appreciate that the application scenario 100 may include more or fewer laser scanners in actual applications; the embodiments of the application are not limited thereto.
For example, embodiments of the present application may include 16 laser scanners, which 16 laser scanners may be disposed on 8 corners and 8 walls of a cuboid house.
For another example, the application may also include 4 laser scanners, which may be located on 4 different walls of the house.
Referring to fig. 2, fig. 2 shows a flowchart of the three-dimensional modeling method provided in an embodiment of the present application. It should be understood that the method shown in fig. 2 may be performed by a three-dimensional modeling apparatus, which may correspond to the apparatus shown in fig. 3 below, and which may be any device capable of performing the method, for example a personal computer, a server, or a network device; the embodiment of the present application is not limited thereto. The method specifically includes the following steps:
in step S210, laser point cloud data of the object is acquired in real time by a plurality of laser scanners uniformly distributed around the object.
It should be understood that the object is the object to be modeled, which may be a thing (e.g., a handle or a deformable lump of mud) or a person (e.g., a participant in a video conference scene). That is, the specific type of the object may be set according to actual requirements, and the embodiment of the present application is not limited thereto.
It should also be understood that the installation position, orientation, number, etc. of the laser scanners may be set according to actual requirements, and the embodiments of the present application are not limited thereto.
Specifically, the plurality of laser scanners can be distributed around the object; this avoids the situation in which scanners placed on the same side fail to acquire laser point cloud data for the object's other surfaces.
In addition, because the laser point cloud data are acquired in real time, the data acquired at different moments may be identical or different.
For example, where the object is a stool whose shape does not change (and no one sits on it), the laser point cloud data of the stool at different times are unchanged.
For another example, in the case where the object is an ice cube, the laser point cloud data may be different for different times of the ice cube because the ice cube melts over time, i.e., the shape of the ice cube changes.
It should be noted that, when the object is placed, clutter around it may be cleared in advance to avoid interference with the scanning of the object.
In step S220, all the laser scanners upload the laser point cloud data acquired in real time to the computer. Correspondingly, the computer acquires laser point cloud data in real time.
In step S230, the computer denoises the laser point cloud data to obtain denoised data.
Specifically, after obtaining the laser point cloud data from each laser scanner, the computer may aggregate them to obtain the complete laser point cloud data of the object. For example, if a first laser scanner acquires first laser point cloud data corresponding to the object's left side and a second laser scanner acquires second laser point cloud data corresponding to the object's right side, the computer can aggregate the first and second laser point cloud data to obtain laser point cloud data covering both the left and right sides of the object.
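The aggregation step can be sketched as follows. The per-scanner poses (rotation `R`, translation `t`) and the `merge_scans` helper are illustrative assumptions; the patent does not specify how scans from the fixed scanners are registered into a common frame.

```python
import numpy as np

def merge_scans(scans, poses):
    """Transform each scanner's points into the shared room frame and stack them.

    scans: list of (N_i, 3) arrays, one per scanner, in scanner-local coordinates.
    poses: list of (R, t) pairs: a 3x3 rotation and 3-vector translation mapping
           scanner-local coordinates into the room frame (assumed known, since
           the scanners are fixed in place).
    """
    merged = [pts @ R.T + t for pts, (R, t) in zip(scans, poses)]
    return np.vstack(merged)

# Two scanners facing opposite sides of the object: one with identity pose,
# one rotated 180 degrees about z and offset along x.
left = np.array([[0.1, 0.0, 0.5], [0.2, 0.0, 0.6]])
right = np.array([[0.1, 0.0, 0.5]])
R180 = np.array([[-1.0, 0.0, 0.0], [0.0, -1.0, 0.0], [0.0, 0.0, 1.0]])
cloud = merge_scans([left, right],
                    [(np.eye(3), np.zeros(3)), (R180, np.array([2.0, 0.0, 0.0]))])
print(cloud.shape)  # (3, 3): both scans stacked in one frame
```

In a real setup the poses would come from a one-time calibration of the fixed scanners.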
In addition, although only the aggregation of the laser point cloud data by the computer is described above, those skilled in the art should understand that the computer may also perform other preprocessing on the laser point cloud data; the embodiment of the present application is not limited thereto.
For example, after the aggregation, the computer may further analyze and process the aggregated data to preliminarily generate an object point cloud or the like.
In addition, besides the laser point cloud data of the object, the acquired data may include a small amount of laser point cloud data from other objects (such as the ground). Because the dispersion between the object's laser point cloud data and that of the other objects is large, the computer can denoise and filter the acquired laser point cloud data, so that the laser point cloud data of the other objects are filtered out.
In step S240, the computer generates a real-time three-dimensional model corresponding to the object using the denoised data.
Specifically, the computer may partition the denoised data into topological areas each having a single geometry. Moreover, since the laser point cloud is itself generated from laser reflections off the object's surface, the point cloud deviates very little from the object's actual appearance.
And since the relative positions between the points in the laser point cloud data are determined (e.g., a first point lies to the right of a second point, or certain points are distributed around the first point), a three-dimensional model can be formed by connecting every three adjacent points into a triangular face; repeating this cycle turns every point into a vertex of one or more triangular faces, and the faces are then combined into the model.
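The neighbour-linking idea can be illustrated with the simplest case, a regular grid of points, where each grid cell is split into two triangles. Real scanner output is not a regular grid, so `grid_mesh` is only a hypothetical stand-in for the adjacency-based triangulation the description sketches.

```python
def grid_mesh(rows, cols):
    """Triangulate a rows x cols grid of point indices: each cell is split into
    two triangular faces, so every point becomes a vertex of one or more
    triangles, mirroring the neighbour-linking step described above.
    """
    faces = []
    for r in range(rows - 1):
        for c in range(cols - 1):
            i = r * cols + c                           # top-left point of the cell
            faces.append((i, i + 1, i + cols))             # upper-left triangle
            faces.append((i + 1, i + cols + 1, i + cols))  # lower-right triangle
    return faces

faces = grid_mesh(3, 3)
print(len(faces))  # 8 faces: 4 cells, 2 triangles each
```

For irregular clouds, surface-reconstruction methods such as ball pivoting or Poisson reconstruction would replace this grid assumption.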
It should be noted that, in the embodiment of the present application, step S230 and step S240 may be performed in an engine of the computer; that is, the laser point cloud data acquired in real time are transmitted to the engine, which generates the model in real time. Here, the engine may be a piece of programmable software on the computer. The three-dimensional model corresponding to the object deforms as the real shape of the object deforms, i.e., the model is updated while the program runs in real time (the model is not editable but is replaceable while the system is running, so programmatic editing of the model is simulated by refreshing it).
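The refresh-to-simulate-editing idea can be sketched as a loop. The three callables are hypothetical stand-ins for the scanners (steps S210 and S220), the denoise-and-mesh steps (S230 and S240), and the engine's scene loader.

```python
def run_realtime_modeling(scan, build_model, load_into_scene, frames=3):
    """Refresh loop sketch: each tick, scan the object, rebuild the model from
    scratch, and swap it into the virtual scene. This mirrors the note above
    that a running engine cannot edit the model in place, only replace it.
    """
    for _ in range(frames):
        cloud = scan()               # real-time laser point cloud (S210/S220)
        model = build_model(cloud)   # denoise + mesh (S230/S240)
        load_into_scene(model)       # replace the previous model in the scene

loaded = []
run_realtime_modeling(scan=lambda: [(0.0, 0.0, 0.0)],
                      build_model=lambda c: {"vertices": c},
                      load_into_scene=loaded.append)
print(len(loaded))  # 3: one replacement per tick
```

In a real engine, `load_into_scene` would swap the mesh asset bound to the virtual object, so the scene always shows the latest scanned shape.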
It should be further noted that, while the object (e.g., mud) is being manipulated, part of it may be covered by the user's hand. In this case, since the hand leaves the object a few seconds after the operation, the laser point cloud data collected after the hand leaves the object can be used.
In addition, after the real-time three-dimensional model is acquired, the embodiment of the application can also load the real-time three-dimensional model into the virtual scene.
For example, where the object is mud and the virtual reality scene is a virtual cable training scene, a three-dimensional model corresponding to the mud may be acquired. Then, coloring and other operations are performed on the three-dimensional model by the engine in the computer, so that the model corresponding to the mud is displayed as a lead block in the virtual cable training scene. Because mud and a lead block feel approximately the same to the touch, a user can manipulate the mud in reality to operate the lead block in the virtual cable training scene, thereby achieving a better training effect.
It should be noted that, after the three-dimensional model is generated, it may be synchronized into the virtual scene in real time; that is, the virtual object matching the object can be updated in real time, so that the deformation process of that virtual object can be observed in the virtual scene. For example, the melting of a virtual ice cube can be watched as it happens.
Therefore, by acquiring the laser point cloud data of the object in real time and using it to generate a real-time three-dimensional model, the embodiments of the present application can update the three-dimensional model corresponding to the object in real time.
In addition, the three-dimensional model is built automatically, without manual processing, which can improve modeling efficiency.
It should be understood that the above three-dimensional modeling method is merely exemplary; those skilled in the art can make various variations, adaptations or modifications based on it, and the resulting content also falls within the scope of protection of the present application.
For example, although the operations of the methods of the present application are depicted in the drawings in a particular order, this does not require or imply that these operations must be performed in that particular order, or that all of the illustrated operations must be performed, to achieve desirable results. Rather, the steps depicted in the flowcharts may be executed in a different order. Additionally or alternatively, certain steps may be omitted, multiple steps may be combined into one, and/or one step may be decomposed into multiple steps. For example, step S230 and step S240 may be combined into one step: the computer generates a real-time three-dimensional model corresponding to the object using the laser point cloud data.
Referring to fig. 3, fig. 3 shows a block diagram of an apparatus 300 for generating a three-dimensional model according to an embodiment of the present application, and it should be understood that the apparatus 300 corresponds to the above method embodiment, and is capable of executing the steps related to the above method embodiment, and specific functions of the apparatus 300 may be referred to the above description, and detailed descriptions thereof are omitted herein as appropriate to avoid redundancy. The device 300 includes at least one software functional module that can be stored in memory in the form of software or firmware (firmware) or cured in an Operating System (OS) of the device 300. Specifically, the apparatus 300 includes:
an acquisition module 310, configured to acquire laser point cloud data of an object in real time;
the generating module 320 is configured to generate a real-time three-dimensional model corresponding to the object using the laser point cloud data.
In one possible embodiment, the generating module 320 includes: the de-noising module (not shown) is used for de-noising the laser point cloud data to obtain de-noised data; and the generating submodule (not shown) is used for generating a real-time three-dimensional model corresponding to the object by using the denoised data.
In one possible embodiment, the acquiring module 310 is further configured to acquire laser point cloud data of the object in real time through a plurality of laser scanners uniformly distributed around the object.
In one possible embodiment, the number of the plurality of laser scanners is 4, 8 or 16.
It will be clear to those skilled in the art that, for convenience and brevity of description, reference may be made to the corresponding procedure in the foregoing method for the specific working procedure of the apparatus described above, and this will not be repeated here.
An embodiment of the present application further provides an electronic device; please refer to fig. 4, which is a block diagram of an electronic device 400 provided in an embodiment of the present application. The electronic device 400 may include a processor 410, a communication interface 420, a memory 430, and at least one communication bus 440. The communication bus 440 is used to enable direct connection communication of these components. The communication interface 420 in the embodiment of the present application is used for signaling or data communication with other devices. The processor 410 may be an integrated circuit chip with signal processing capabilities. The processor 410 may be a general-purpose processor, including a central processing unit (Central Processing Unit, CPU), a network processor (Network Processor, NP), etc.; it may also be a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components, which can implement or perform the methods, steps and logic blocks disclosed in the embodiments of the present application. A general-purpose processor may be a microprocessor, or the processor 410 may be any conventional processor, etc.
The memory 430 may be, but is not limited to, Random Access Memory (RAM), Read-Only Memory (ROM), Programmable Read-Only Memory (PROM), Erasable Programmable Read-Only Memory (EPROM), Electrically Erasable Programmable Read-Only Memory (EEPROM), etc. The memory 430 stores computer-readable instructions which, when executed by the processor 410, cause the electronic device 400 to perform the steps of the method embodiments described above.
The electronic device 400 may also include a memory controller, an input-output unit, an audio unit, a display unit.
The memory 430, the memory controller, the processor 410, the peripheral interface, the input/output unit, the audio unit, and the display unit are electrically connected directly or indirectly to each other, so as to realize data transmission or interaction. For example, the elements may be electrically coupled to each other via one or more communication buses 440. The processor 410 is configured to execute executable modules stored in the memory 430. And, the electronic device 400 is configured to perform the following method: acquiring laser point cloud data of an object in real time; and generating a real-time three-dimensional model corresponding to the object by utilizing the laser point cloud data.
The input/output unit is used to receive input data from the user, realizing interaction between the user and the server (or local terminal). The input/output unit may be, but is not limited to, a mouse, a keyboard, and the like.
The audio unit provides an audio interface to the user, which may include one or more microphones, one or more speakers, and audio circuitry.
The display unit provides an interactive interface (e.g., a user operation interface) between the electronic device and the user, or is used to display image data for the user's reference. In this embodiment, the display unit may be a liquid crystal display or a touch display. A touch display may be a capacitive or resistive touch screen that supports single-point and multi-point touch operations, meaning that the touch display can sense touch operations generated simultaneously at one or more positions on it and pass the sensed touch operations to the processor for calculation and processing.
It is to be understood that the configuration shown in fig. 4 is merely illustrative, and that the electronic device 400 may also include more or fewer components than those shown in fig. 4, or have a different configuration than that shown in fig. 4. The components shown in fig. 4 may be implemented in hardware, software, or a combination thereof.
The present application also provides a storage medium having stored thereon a computer program which, when executed by a processor, performs the method of the method embodiment.
The present application also provides a computer program product which, when run on a computer, causes the computer to perform the method of the method embodiments.
It will be clear to those skilled in the art that, for convenience and brevity of description, the specific working procedure of the system described above may be understood by reference to the corresponding procedure in the foregoing method embodiments, and will not be repeated here.
It should be noted that the embodiments in this specification are described in a progressive manner: each embodiment focuses on its differences from the other embodiments, and identical or similar parts of the embodiments may be understood by reference to one another. The apparatus embodiments are described relatively briefly because they are substantially similar to the method embodiments; for relevant details, refer to the description of the method embodiments.
In the several embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other manners. The apparatus embodiments described above are merely illustrative, for example, flow diagrams and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of apparatus, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
In addition, the functional modules in the embodiments of the present application may be integrated together to form a single part, or each module may exist alone, or two or more modules may be integrated to form a single part.
The functions, if implemented in the form of software functional modules and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present application, in essence, or the part contributing to the prior art, or a part of the technical solution, may be embodied in the form of a software product stored in a storage medium, including several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of the methods described in the embodiments of the present application. The aforementioned storage medium includes: a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, an optical disk, or other media capable of storing program code. It should be noted that relational terms such as first and second are used solely to distinguish one entity or action from another, without necessarily requiring or implying any actual such relationship or order between such entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element introduced by the phrase "comprising a …" does not exclude the presence of other like elements in the process, method, article, or apparatus that comprises the element.
The foregoing description covers only the preferred embodiments of the present application and is not intended to limit it; various modifications and variations may be made by those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principles of the present application shall be included in its protection scope. It should be noted that like reference numerals and letters denote like items in the figures; once an item is defined in one figure, it need not be further defined or explained in subsequent figures.
The foregoing is merely a description of specific embodiments of the present application, and the scope of the present application is not limited thereto. Any changes or substitutions that a person skilled in the art could readily conceive of within the technical scope of the present application are intended to be covered by this application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (8)

1. A method of three-dimensional modeling, comprising:
acquiring laser point cloud data of an object in real time, wherein the laser point cloud data at different moments are the same or different;
generating a real-time three-dimensional model corresponding to the object by using the laser point cloud data, wherein generating the real-time three-dimensional model corresponding to the object by using the laser point cloud data comprises: denoising the laser point cloud data to obtain denoised data; classifying the denoised data by a computer to obtain topological areas each having a single geometric structure; connecting every three adjacent points to generate triangular surfaces; repeating this over the triangular surfaces until all points become vertices of different triangular surfaces; and forming the real-time three-dimensional model corresponding to the object by combining all the triangular surfaces;
after generating the real-time three-dimensional model corresponding to the object by using the laser point cloud data, the method further comprises: synchronizing the real-time three-dimensional model into a virtual scene in real time, wherein the real-time synchronization is used to update, in real time, the virtual object matched with the object in the virtual scene and to observe the deformation process of that virtual object in the virtual scene.
2. The method of claim 1, wherein the acquiring laser point cloud data of the object in real time comprises:
acquiring laser point cloud data of the object in real time through a plurality of laser scanners uniformly distributed around the object.
3. The method of claim 2, wherein the number of the plurality of laser scanners is 4, 8, or 16.
4. An apparatus for three-dimensional modeling, comprising:
the acquisition module is used for acquiring laser point cloud data of the object in real time, wherein the laser point cloud data at different moments are the same or different;
the generating module is configured to generate a real-time three-dimensional model corresponding to the object by using the laser point cloud data, wherein generating the real-time three-dimensional model corresponding to the object by using the laser point cloud data comprises: denoising the laser point cloud data to obtain denoised data; classifying the denoised data by a computer to obtain topological areas each having a single geometric structure; connecting every three adjacent points to generate triangular surfaces; repeating this over the triangular surfaces until all points become vertices of different triangular surfaces; and forming the real-time three-dimensional model corresponding to the object by combining all the triangular surfaces;
the generating module is further configured to synchronize the real-time three-dimensional model into a virtual scene in real time after generating the real-time three-dimensional model corresponding to the object by using the laser point cloud data, wherein the real-time synchronization is used to update, in real time, the virtual object matched with the object in the virtual scene and to observe the deformation process of that virtual object in the virtual scene.
5. The apparatus of claim 4, wherein the acquisition module is further configured to acquire laser point cloud data of the object in real time by a plurality of laser scanners evenly distributed around the object.
6. The apparatus of claim 5, wherein the number of the plurality of laser scanners is 4, 8, or 16.
7. A storage medium having stored thereon a computer program which, when executed by a processor, performs the method of three-dimensional modeling according to any of claims 1-3.
8. An electronic device, the electronic device comprising: a processor, a memory and a bus, the memory storing machine-readable instructions executable by the processor, the processor and the memory in communication via the bus when the electronic device is running, the machine-readable instructions when executed by the processor performing the method of three-dimensional modeling of any of claims 1-3.
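The meshing step of claim 1 — connecting every three adjacent points into triangular surfaces until all points are vertices of triangles — can be illustrated with Delaunay triangulation. The sketch below is only an illustration under the assumption of a single-view (2.5D) scan that can be triangulated in its XY projection; the patent does not specify scipy or this particular construction, and the function name `triangulate_scan` is hypothetical.

```python
import numpy as np
from scipy.spatial import Delaunay

def triangulate_scan(points):
    """Mesh a single-view (2.5D) point cloud by Delaunay-triangulating
    its XY projection; each simplex is a triangular face whose corners
    are points of the cloud."""
    tri = Delaunay(points[:, :2])  # triangulate in the projection plane
    return tri.simplices           # (n_faces, 3) array of vertex indices

# A small height-field cloud: a 5x5 grid in XY with height z = x * y.
xs, ys = np.meshgrid(np.linspace(0.0, 1.0, 5), np.linspace(0.0, 1.0, 5))
cloud = np.column_stack([xs.ravel(), ys.ravel(), (xs * ys).ravel()])
faces = triangulate_scan(cloud)
# Every point of the cloud ends up as a vertex of at least one triangle,
# and the combined triangles form the surface mesh described in claim 1.
print(faces.shape[1], np.unique(faces).size)
```

A full multi-scanner setup (claims 2 and 3) would merge the registered views before or after meshing; that step is beyond this sketch.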
CN201911195902.4A 2019-11-28 2019-11-28 Three-dimensional modeling method and device, storage medium and electronic equipment Active CN110930515B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911195902.4A CN110930515B (en) 2019-11-28 2019-11-28 Three-dimensional modeling method and device, storage medium and electronic equipment


Publications (2)

Publication Number Publication Date
CN110930515A CN110930515A (en) 2020-03-27
CN110930515B true CN110930515B (en) 2024-02-09

Family

ID=69846944

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911195902.4A Active CN110930515B (en) 2019-11-28 2019-11-28 Three-dimensional modeling method and device, storage medium and electronic equipment

Country Status (1)

Country Link
CN (1) CN110930515B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111125807A (en) * 2019-11-06 2020-05-08 贝壳技术有限公司 Decoration three-dimensional model rendering display method and system
CN112446114B (en) * 2020-12-08 2023-09-05 国网江苏省电力工程咨询有限公司 Three-dimensional model comparison-based power transmission line engineering construction progress monitoring method
CN112926162B (en) * 2021-04-01 2023-06-23 广东三维家信息科技有限公司 Electric control drill connection control optimization method and device, electronic equipment and storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107909645A (en) * 2017-11-16 2018-04-13 青岛市光电工程技术研究院 Building view generation method, apparatus and system
CN108648272A (en) * 2018-04-28 2018-10-12 上海激点信息科技有限公司 Three-dimensional live acquires modeling method, readable storage medium storing program for executing and device
CN109035399A (en) * 2018-07-25 2018-12-18 上海华测导航技术股份有限公司 Utilize the method for three-dimensional laser scanner quick obtaining substation three-dimensional information
CN109493422A (en) * 2018-12-28 2019-03-19 国网新疆电力有限公司信息通信公司 A kind of substation's 3 D model construction method based on three-dimensional laser scanning technique


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
"Research Progress on Geometric Model Reconstruction Methods for Building Point Clouds" (《建筑点云几何模型重建方法研究进展》); Du Jianli et al.; Journal of Remote Sensing (《遥感学报》); full text *



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant