CN117313364A - Digital twin three-dimensional scene construction method and device - Google Patents


Info

Publication number
CN117313364A
Authority
CN
China
Prior art keywords
scene
digital twin
dimensional
target
dimensional scene
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311255731.6A
Other languages
Chinese (zh)
Inventor
张豫辉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xinda Property Management Beijing Technology Co ltd
Original Assignee
Xinda Property Management Beijing Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xinda Property Management Beijing Technology Co ltd filed Critical Xinda Property Management Beijing Technology Co ltd
Priority to CN202311255731.6A
Publication of CN117313364A
Legal status: Pending


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 30/00: Computer-aided design [CAD]
    • G06F 30/20: Design optimisation, verification or simulation
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 17/00: Three dimensional [3D] modelling, e.g. data description of 3D objects

Abstract

The application relates to the technical field of digital twinning and provides a digital twin three-dimensional scene construction method and device. The method comprises: acquiring scene data of a target scene; obtaining feature points and feature descriptors from the color image information, and performing feature matching according to the feature points and the feature descriptors to obtain a matching result; generating an initial digital twin three-dimensional scene by using a three-dimensional reconstruction algorithm according to the color image information, the image depth information and the matching result; and performing refinement preprocessing on the initial digital twin three-dimensional scene to obtain a target digital twin three-dimensional scene. By reconstructing the real scene, the resulting target digital twin three-dimensional scene restores the original scene more faithfully, facilitating evidence retrieval and case-scene re-enactment simulation.

Description

Digital twin three-dimensional scene construction method and device
Technical Field
The application relates to the technical field of digital twinning, in particular to a digital twinning three-dimensional scene construction method and device.
Background
Image resources from a criminal investigation scene must not only be recorded and preserved; the structural space of the scene must also be restored so that evidence can be located and the case scene re-enacted and simulated. However, existing criminal investigation generally relies on recording and collecting image resources alone. This approach cannot restore the scene well, the correlation between images is poor, and neither the spatial structure of the scene nor the positional relationship between target points and suspicious points can be recovered.
Disclosure of Invention
In view of this, the embodiments of the present application provide a digital twin three-dimensional scene construction method and device, to solve the problems in the prior art that criminal investigation performed by recording and collecting image resources cannot restore the scene well, yields poor correlation between images, and cannot restore the spatial structure of the scene.
In a first aspect of an embodiment of the present application, a digital twin three-dimensional scene construction method is provided, including:
acquiring scene data of a target scene; the scene data includes color image information and image depth information;
acquiring feature points and feature descriptors from the color image information, and performing feature matching according to the feature points and the feature descriptors to obtain a matching result;
generating an initial digital twin three-dimensional scene by using a three-dimensional reconstruction algorithm according to the color image information, the image depth information and the matching result;
and performing refinement preprocessing on the initial digital twin three-dimensional scene to obtain a target digital twin three-dimensional scene.
In a second aspect of the embodiments of the present application, there is provided a digital twin three-dimensional scene building apparatus, the apparatus including:
an acquisition module configured to acquire scene data of a target scene; the scene data includes color image information and image depth information;
the matching module is configured to acquire feature points and feature descriptors from the color image information and perform feature matching according to the feature points and the feature descriptors so as to obtain a matching result;
the generating module is configured to generate an initial digital twin three-dimensional scene by using a three-dimensional reconstruction algorithm according to the color image information, the image depth information and the matching result;
and a refinement module configured to perform refinement preprocessing on the initial digital twin three-dimensional scene to obtain a target digital twin three-dimensional scene.
In a third aspect of the embodiments of the present application, there is provided an electronic device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, the processor implementing the steps of the above method when executing the computer program.
In a fourth aspect of the embodiments of the present application, there is provided a readable storage medium storing a computer program which, when executed by a processor, implements the steps of the above method.
Compared with the prior art, the embodiments of the present application have the following beneficial effects: scene data of a target scene are acquired; feature points and feature descriptors are obtained from the color image information, and feature matching is performed according to the feature points and the feature descriptors to obtain a matching result; an initial digital twin three-dimensional scene is generated by using a three-dimensional reconstruction algorithm according to the color image information, the image depth information and the matching result; and refinement preprocessing is performed on the initial digital twin three-dimensional scene to obtain a target digital twin three-dimensional scene. By reconstructing the real scene, the target digital twin three-dimensional scene restores the original scene more faithfully, facilitating evidence retrieval and case-scene re-enactment simulation.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. The drawings described below are only some embodiments of the present application; a person skilled in the art may obtain other drawings from them without inventive effort.
Fig. 1 is a schematic view of an application scenario according to an embodiment of the present application;
fig. 2 is a schematic flow chart of a digital twin three-dimensional scene construction method according to an embodiment of the present application;
fig. 3 is a schematic structural diagram of a digital twin three-dimensional scene building device according to an embodiment of the present application;
fig. 4 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system configurations, techniques, etc. in order to provide a thorough understanding of the embodiments of the present application. It will be apparent, however, to one skilled in the art that the present application may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present application with unnecessary detail.
Image resources from a criminal investigation scene must not only be recorded and preserved; the structural space of the scene must also be restored so that evidence can be located and the case scene re-enacted and simulated. However, existing criminal investigation generally relies on recording and collecting image resources alone. This approach cannot restore the scene well, the correlation between images is poor, and neither the spatial structure of the scene nor the positional relationship between target points and suspicious points can be recovered.
In view of the problems in the prior art, the present application provides a digital twin three-dimensional scene construction method and device. The method acquires scene data of a target scene; obtains feature points and feature descriptors from the color image information, and performs feature matching according to the feature points and the feature descriptors to obtain a matching result; generates an initial digital twin three-dimensional scene by using a three-dimensional reconstruction algorithm according to the color image information, the image depth information and the matching result; and performs refinement preprocessing on the initial digital twin three-dimensional scene to obtain a target digital twin three-dimensional scene. By reconstructing the real scene, the target digital twin three-dimensional scene restores the original scene more faithfully, facilitating evidence retrieval and case-scene re-enactment simulation.
A digital twin three-dimensional scene construction method and apparatus according to embodiments of the present application will be described in detail with reference to the accompanying drawings.
Fig. 1 is a schematic view of an application scenario according to an embodiment of the present application. The application scenario may include terminal devices 1, 2 and 3, a server 4 and a network 5.
The terminal devices 1, 2 and 3 may be hardware or software. When the terminal devices 1, 2 and 3 are hardware, they may be various electronic devices having a display screen and supporting communication with the server 4, including but not limited to smartphones, tablet computers, laptop computers, desktop computers, and the like; when the terminal devices 1, 2 and 3 are software, they may be installed in electronic devices as described above. The terminal devices 1, 2 and 3 may be implemented as a plurality of software programs or software modules, or as a single software program or software module, which is not limited in the embodiments of the present application. Further, various applications, such as data processing applications, instant messaging tools, social platform software, search applications, shopping applications, and the like, may be installed on the terminal devices 1, 2 and 3.
The server 4 may be a server that provides various services, for example a background server that receives requests transmitted from a terminal device with which a communication connection has been established; the background server may receive and analyze such requests and generate processing results. The server 4 may be a single server, a server cluster formed by a plurality of servers, or a cloud computing service center, which is not limited in the embodiments of the present application.
The server 4 may be hardware or software. When the server 4 is hardware, it may be various electronic devices that provide various services to the terminal devices 1, 2, and 3. When the server 4 is software, it may be a plurality of software or software modules providing various services to the terminal devices 1, 2, and 3, or may be a single software or software module providing various services to the terminal devices 1, 2, and 3, which is not limited in the embodiment of the present application.
The network 5 may be a wired network using coaxial cable, twisted pair and optical fiber connection, or may be a wireless network capable of realizing interconnection of various communication devices without wiring, for example, bluetooth (Bluetooth), near field communication (Near Field Communication, NFC), infrared (Infrared), etc., which is not limited in the embodiment of the present application.
A user can establish a communication connection with the server 4 via the network 5 through the terminal devices 1, 2 and 3, so as to receive or transmit information. Specifically, after the user imports the acquired scene data of the target scene to the server 4, the server 4 obtains feature points and feature descriptors from the color image information, and performs feature matching according to the feature points and the feature descriptors to obtain a matching result; further, it generates an initial digital twin three-dimensional scene by using a three-dimensional reconstruction algorithm according to the color image information, the image depth information and the matching result; further, the server 4 performs refinement preprocessing on the initial digital twin three-dimensional scene to obtain a target digital twin three-dimensional scene. By performing live-action reconstruction of the target scene, the reconstructed target digital twin three-dimensional scene restores the original scene more faithfully, facilitating evidence retrieval and case-scene re-enactment simulation.
It should be noted that the specific types, numbers and combinations of the terminal devices 1, 2 and 3, the server 4 and the network 5 may be adjusted according to the actual requirements of the application scenario, which is not limited in the embodiment of the present application.
Fig. 2 is a schematic flow chart of a digital twin three-dimensional scene construction method according to an embodiment of the present application. The digital twin three-dimensional scene construction method of fig. 2 may be performed by the terminal device or the server of fig. 1. As shown in fig. 2, the digital twin three-dimensional scene construction method includes:
s201, acquiring scene data of a target scene, wherein the scene data comprises color image information and image depth information;
s202, obtaining feature points and feature descriptors from color image information, and performing feature matching according to the feature points and the feature descriptors to obtain a matching result;
s203, generating an initial digital twin three-dimensional scene by using a three-dimensional reconstruction algorithm according to the color image information, the image depth information and the matching result;
s204, carrying out refining pretreatment on the initial digital twin three-dimensional scene to obtain a target digital twin three-dimensional scene.
The target scene can be any scene for which a three-dimensional scene graph needs to be built, such as a floor plan for renting or selling property, a terrain scene, or a criminal investigation scene. For consistency, the following examples all take a criminal investigation scene as the target scene, so that the detailed implementation of each step can be better understood; the target scene of the present application is, however, not limited to criminal investigation scenes. Scene data refers to information obtained by at least one of a camera, a video camera, a laser scanner, a time-of-flight sensor, a line-structured-light sensor and a binocular sensor, and includes color image information, gray image information, image depth information, and the like. In this embodiment, the scene data includes color image information and image depth information, from which a three-dimensional color image can be obtained. The image depth information includes a depth map, depth values, point cloud data, and the like. Although images are stored in the computer as gray-value matrices, the same object cannot be reliably found in two images by comparing gray values alone: gray values are affected by illumination, and the gray value of the same object changes when the viewing angle changes. It is therefore necessary to find features that remain unchanged under camera movement and rotation (viewing-angle changes), and to use these invariant features to locate the same object in images taken from different viewing angles. Feature points are locally unique regions in an image: corner points, edge points or blobs with strong gray variation or texture information. Feature points are invariant in the image, i.e. they can still be accurately detected when the image is rotated, scaled, translated or changed in brightness. Common feature point detection algorithms include Harris corner detection, SIFT (scale-invariant feature transform), SURF (speeded-up robust features), and the like. A feature descriptor is a numeric representation of a feature point: it transforms the local texture information around the feature point into a vector of a certain dimension. The role of the descriptor is to encode the feature point uniquely, which facilitates tasks such as feature matching and image retrieval. Common descriptor algorithms include SIFT descriptors, SURF descriptors, ORB (Oriented FAST and Rotated BRIEF) descriptors, and the like. Feature matching is performed on the feature descriptors: descriptors are usually vectors, and the distance between two descriptors from different images reflects their degree of similarity, i.e. whether the two feature points correspond to the same physical point. Three-dimensional reconstruction algorithms include existing algorithms and algorithms that may emerge in the future; existing algorithms include the Poisson algorithm, the convex hull algorithm, the concave hull algorithm, and the like. These algorithms can derive three-dimensional geometry from different viewing angles, depth information or point cloud data, and the color image information is then fused in to obtain the initial digital twin three-dimensional scene.
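As a concrete illustration of the matching step described above, the sketch below implements brute-force nearest-neighbour descriptor matching with a ratio test in plain NumPy. It is a minimal stand-in for the SIFT/SURF/ORB matchers named in the text, not the patent's actual implementation; the function name, the default `ratio` of 0.75 and the toy descriptors in the usage example are illustrative assumptions.

```python
import numpy as np

def match_descriptors(desc_a, desc_b, ratio=0.75):
    """Brute-force nearest-neighbour matching with a ratio test.

    desc_a: (N, D) array of descriptors from the first image.
    desc_b: (M, D) array of descriptors from the second image (M >= 2).
    Returns (index_in_a, index_in_b) pairs that pass the test.
    """
    matches = []
    for i, d in enumerate(desc_a):
        # Euclidean distance from descriptor d to every descriptor in desc_b.
        dists = np.linalg.norm(desc_b - d, axis=1)
        order = np.argsort(dists)
        best, second = order[0], order[1]
        # Accept only if the best match is clearly better than the runner-up,
        # which rejects ambiguous matches between similar-looking regions.
        if dists[best] < ratio * dists[second]:
            matches.append((i, int(best)))
    return matches
```

For example, two descriptors `[1, 0]` and `[0, 1]` match their slightly perturbed counterparts in a second image while an unrelated descriptor is ignored.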
After the initial digital twin three-dimensional scene is established in step S203, it is refined so that the completeness and fidelity of the digital twin three-dimensional scene are higher: noise, holes and the like in the initial digital twin three-dimensional scene are eliminated, and a target digital twin three-dimensional scene matching the target scene is thereby established. The target digital twin three-dimensional scene restores the original scene more faithfully, facilitating evidence retrieval and case-scene re-enactment simulation.
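The patent does not specify how the refinement preprocessing eliminates noise. As one hedged example, a common point-cloud refinement step is statistical outlier removal, sketched below in NumPy; the parameters `k` and `std_ratio` are illustrative assumptions, not values from the patent.

```python
import numpy as np

def remove_outliers(points, k=5, std_ratio=2.0):
    """Statistical outlier removal for a point cloud.

    Drops points whose mean distance to their k nearest neighbours is more
    than std_ratio standard deviations above the cloud-wide average, which
    eliminates isolated noise points left over from reconstruction.
    """
    # Full pairwise distance matrix (fine for small clouds; a KD-tree is
    # preferable for large ones).
    diff = points[:, None, :] - points[None, :, :]
    dists = np.linalg.norm(diff, axis=2)
    dists.sort(axis=1)
    mean_knn = dists[:, 1:k + 1].mean(axis=1)  # column 0 is distance to self
    threshold = mean_knn.mean() + std_ratio * mean_knn.std()
    return points[mean_knn <= threshold]
```

Applied to a tight cluster of points plus one far-away stray, the stray is dropped and the cluster is kept.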
According to the technical solution provided by the embodiments of the present application, scene data of a target scene are acquired; feature points and feature descriptors are obtained from the color image information, and feature matching is performed according to them to obtain a matching result; an initial digital twin three-dimensional scene is generated by using a three-dimensional reconstruction algorithm according to the color image information, the image depth information and the matching result; and refinement preprocessing is performed on the initial digital twin three-dimensional scene to obtain a target digital twin three-dimensional scene. By reconstructing the real scene, the target digital twin three-dimensional scene restores the original scene more faithfully, facilitating evidence retrieval and case-scene re-enactment simulation.
The built target digital twin three-dimensional scene can restore the situation at the scene at any time and in any place, prevents related evidence from becoming untraceable through inadvertent damage to the physical scene, and allows criminal investigation personnel to have an immersive experience by viewing it on a terminal device or the like. The terminal device may be a VR device.
In some embodiments, the digital twin three-dimensional scene construction method includes:
identifying scene elements from the target digital twin three-dimensional scene; the scene elements include household articles in the target digital twin three-dimensional scene and signboards marking target points;
determining geometric information of the scene elements in the target digital twin three-dimensional scene according to the image depth information;
and determining spatial screen structure information according to the target digital twin three-dimensional scene and the geometric information.
Specifically, in order to facilitate converting the target digital twin three-dimensional scene into a two-dimensional image that can be viewed on a screen, image recognition or point cloud recognition can be performed on the point cloud data and image information scanned for the three-dimensional reconstruction, so as to identify objects in the space, such as walls, doors, windows, beds and cabinets; the signboards identifying target points must also be recognized. A target point is a position in the target digital twin three-dimensional scene that needs attention, indicating that something or some area at that position is noteworthy. Then, based on the derived semantic information such as image information and point cloud data, the distance or depth of objects in the scene is calculated in combination with the three-dimensional spatial structure, and spatial screen structure information such as the position, size and orientation of the objects is inferred, so that a spatial screen structure diagram is drawn and a two-dimensional map of the target digital twin three-dimensional scene can conveniently be derived. It can be understood that a target point may correspond to evidence found at the criminal investigation scene: the point corresponding to a piece of evidence is a target point, and the target point represents the position of that evidence. Marking target points in the target digital twin three-dimensional scene makes the evidence conspicuous, so that it can be seen at a glance. Constructing the spatial layout of the target scene through semantic reconstruction allows the basic situation of the case scene to be understood more intuitively, and allows it to be recorded and preserved.
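Deriving an object's position and size from image depth information typically rests on the standard pinhole camera model; the sketch below shows that back-projection. This is a generic illustration, not the patent's method, and the intrinsic parameters (`fx`, `fy`, `cx`, `cy`) in the usage example are illustrative assumptions.

```python
import numpy as np

def pixel_to_point(u, v, depth, fx, fy, cx, cy):
    """Back-project pixel (u, v) with depth z into camera-space coordinates
    using the pinhole model: x = (u - cx) * z / fx, y = (v - cy) * z / fy."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.array([x, y, depth])

def object_extent(px_a, px_b, depth, intrinsics):
    """Metric distance between two pixels observed at the same depth,
    e.g. to estimate the width of a piece of furniture."""
    fx, fy, cx, cy = intrinsics
    pa = pixel_to_point(*px_a, depth, fx, fy, cx, cy)
    pb = pixel_to_point(*px_b, depth, fx, fy, cx, cy)
    return float(np.linalg.norm(pa - pb))
```

With a hypothetical focal length of 500 pixels and principal point (320, 240), two pixels 200 columns apart at a depth of 2 m correspond to an object width of 0.8 m.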
According to the technical solution provided by the embodiments of the present application, determining spatial screen structure information from the target digital twin three-dimensional scene and the geometric information allows the three-dimensional scene image to be presented on a screen more effectively.
In some embodiments, the method comprises:
applying a salient annotation to the signboard of a target point in the target digital twin three-dimensional scene; and
generating annotation content in the salient annotation based on editing operation information directed at the salient annotation.
Specifically, a salient annotation is placed on the signboard of a target point, so that the signboard is quickly noticed in the target digital twin three-dimensional scene. A target point with only a signboard is conspicuous, but carries no description, so the purpose of identifying the target point is unknown. In order to record the description of the target point, text is edited within the salient annotation and saved. When viewing the target digital twin three-dimensional scene, the salient annotation can be clicked and the text expanded. The salient annotation may be a conspicuous pattern such as a small yellow flag or a red circle, or a dynamic visual effect such as an animated sprite. The editing operation information directed at the salient annotation can be initiated from a terminal device connected to the server executing the digital twin three-dimensional scene construction method. For example, if the terminal device is a computer, the editing operation information is generated from input via the computer's keyboard, mouse or voice, and the annotation content may be text typed on the keyboard, selected with the mouse, or recognized from voice input. Marking suspicious points on the basis of the target digital twin three-dimensional scene provides more clues and assists case detection, and geometric measurements of the space, such as distance, height and angle, can be made in the target digital twin three-dimensional scene, providing a basis for criminal investigation.
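One possible data structure for such salient annotations is sketched below. The class name, field names and default marker style are illustrative assumptions introduced for this sketch; the patent does not prescribe any particular representation.

```python
from dataclasses import dataclass

@dataclass
class SalientAnnotation:
    """A salient annotation attached to a target point's signboard.

    marker_style models the conspicuous pattern (small yellow flag, red
    circle, animated sprite); notes holds the annotation content produced
    by editing operations from the terminal device.
    """
    target_id: str
    position: tuple        # (x, y, z) in the twin scene
    marker_style: str = "yellow_flag"
    notes: str = ""

    def edit(self, text):
        # An editing operation replaces the stored annotation content.
        self.notes = text

    def expand(self):
        # What the viewer shows when the salient marker is clicked.
        return f"[{self.target_id}] {self.notes}"
```

A viewer would call `edit` when the investigator types a description and `expand` when the marker is clicked.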
According to the technical solution provided by the embodiments of the present application, applying salient annotations to the signboards of target points makes the target points conspicuous in the target digital twin three-dimensional scene, so that they can be seen at a glance, improving their saliency. Generating annotation content in the salient annotation based on the editing operation information records the description of the target point; the text edited in the salient annotation is saved so that the purpose of the identified target point can be readily understood.
In some embodiments, the digital twin three-dimensional scene construction method includes:
forming an association chain from target points marked by signboards in the target digital twin three-dimensional scene;
and associating the objects and/or areas corresponding to the target points in the association chain.
It will be appreciated that, in the target digital twin three-dimensional scene, associations may be formed according to the relevance of target points in order to find related information. For example, when the target scene is a criminal investigation scene, the target points are the positions of the marked evidence, and the objects and/or areas corresponding to the target points are pieces of evidence or key evidence in the case. So that clicking any target point in the target digital twin three-dimensional scene displays all objects and/or areas corresponding to the associated target points, which makes searching convenient and saves search time, the objects and/or areas corresponding to associated target points are linked. In an exemplary criminal investigation scene, the target scene contains multiple footprints of suspects; the positions of all footprints can be marked as target points, and signboards are placed at them. Since the target digital twin three-dimensional scene reproduces the target scene, the signboards also appear in it. That is, all footprints in the target digital twin three-dimensional scene carry signboards; the signboards are recognized, and identical footprints are then associated. When viewing the target digital twin three-dimensional scene on a terminal device, clicking any footprint displays all identical footprints, and the walking path of that footprint can be traced through the associated footprints, so that the places visited by the corresponding suspect can be determined and the suspect's behavior and intentions inferred.
Illustratively, there are several different footprints in the target digital twin three-dimensional scene; to distinguish them, the different footprints are referred to as the first footprint, the second footprint, the third footprint, the fourth footprint, and so on. All first footprints are marked with first signboards, all second footprints with second signboards, all third footprints with third signboards, and all fourth footprints with fourth signboards. Target points carrying the same signboard are associated target points and can be formed into an association chain, so that when a first footprint is clicked in the target digital twin three-dimensional scene, the objects in its association chain are displayed for viewing. The identical footprints may be displayed in a bright color on the screen of the terminal device, for example in gold, so that they are easily visible.
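Grouping target points into association chains by signboard identity can be sketched as follows. The dictionary layout and the `sign_id`/`position` field names are illustrative assumptions for this sketch, not data structures specified by the patent.

```python
from collections import defaultdict

def build_association_chains(target_points):
    """Group target points into association chains keyed by signboard id,
    so that every footprint carrying the same sign belongs to one chain."""
    chains = defaultdict(list)
    for point in target_points:
        chains[point["sign_id"]].append(point["position"])
    return dict(chains)

def associated_positions(chains, sign_id):
    # Everything highlighted when a target point with this sign is clicked.
    return chains.get(sign_id, [])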
According to the technical solution provided by the embodiments of the present application, associating the objects and/or areas corresponding to the target points in the association chain means that, when the target digital twin three-dimensional scene is viewed, the objects and/or areas corresponding to the associated target points can all be displayed, which makes searching convenient and saves search time.
In some embodiments, the digital twin three-dimensional scene construction method includes:
acquiring six-view images and a three-dimensional image of the object corresponding to a target point marked by a signboard;
and annotating the object corresponding to the target point with the six-view images and the three-dimensional image.
Specifically, the object corresponding to a target point is an object requiring attention; all of its faces, its shape and its colors need to be displayed so that it can be examined properly. However, when the target digital twin three-dimensional scene is viewed on a terminal device, the scene must be rotated continuously, which makes viewing the object inconvenient. To solve this problem, six-view images and a three-dimensional image of the object corresponding to the target point are acquired and attached to the object as annotations. When viewing the target digital twin three-dimensional scene on the terminal device, clicking the target point of the object expands its six views and three-dimensional image; clicking any of the six views enlarges that view on the terminal device's screen, which makes the object convenient to examine. Alternatively, clicking the three-dimensional image enlarges it on the screen, and its angle can be adjusted to see the shape and color of each face of the object.
In some embodiments, after obtaining scene data of a target scene, the digital twin three-dimensional scene construction method includes:
the scene data is initially preprocessed.
Preprocessing of the acquired data is typically required before three-dimensional reconstruction can take place. This includes noise removal, image correction, camera calibration, image alignment and the like, to improve data quality and accuracy.
Specifically, the initial preprocessing includes at least one of the following steps:
denoising the color image information;
correcting the image elements in the color image information;
aligning image elements in the color image information;
and calibrating parameters for collecting color image information.
Noise is a major source of image interference. In practical applications an image may contain several kinds of noise, introduced during transmission, quantization and the like. Denoising of the color image information can be performed with an image denoising algorithm, whose purpose is to remove the noise and restore a noise-free image. Image correction refers to a restorative process applied to a distorted image. Image distortion can be caused by aberrations, distortion and bandwidth limitations of the imaging system; by geometric distortion due to the imaging pose and scanning nonlinearity of the imaging device; or by motion blur, radiometric distortion and introduced noise. The basic idea of image correction is to build a mathematical model matching the cause of the distortion, extract the required information from the contaminated or distorted image signal, and restore the original appearance of the image along the inverse of the distortion process. In practice, restoration designs a filter that computes an estimate of the true image from the distorted image and brings it as close to the true image as possible under a predetermined error criterion. The purpose of image alignment is to find the correspondence between the pixel coordinates of two images; for two images acquired by the camera, a simple affine transformation between them is first assumed, and alignment can then be achieved by solving for the affine transformation matrix between the two images. Because each lens used to acquire images has a different degree of distortion, lens distortion can be corrected by calibrating the parameters used to acquire the color image information, producing a corrected image.
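The alignment step described above — assuming a simple affine transformation and solving for its matrix — can be sketched in a minimal form from three matched point pairs. This is a toy example with invented helper names (`det3`, `solve3`); real pipelines estimate the transform from many matches with robust fitting such as RANSAC:

```python
def det3(m):
    """Determinant of a 3x3 matrix given as nested lists."""
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
          - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
          + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

def solve3(m, b):
    """Solve a 3x3 linear system by Cramer's rule."""
    d = det3(m)
    return [det3([[b[i] if k == j else m[i][k] for k in range(3)]
                  for i in range(3)]) / d
            for j in range(3)]

def estimate_affine(src, dst):
    """Estimate the 2-D affine transform mapping three source points onto
    three destination points: x' = a*x + b*y + tx, y' = c*x + d*y + ty."""
    m = [[x, y, 1.0] for (x, y) in src]
    row_x = solve3(m, [x for (x, _) in dst])  # a, b, tx
    row_y = solve3(m, [y for (_, y) in dst])  # c, d, ty
    return row_x, row_y

def apply_affine(affine, p):
    (a, b, tx), (c, d, ty) = affine
    x, y = p
    return (a * x + b * y + tx, c * x + d * y + ty)

# Three matched feature points between the two images
# (here, a 90-degree rotation plus a shift).
src = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
dst = [(1.0, 2.0), (1.0, 3.0), (0.0, 2.0)]
affine = estimate_affine(src, dst)
```

Once the affine matrix is known, every pixel coordinate of one image can be mapped into the other via `apply_affine`, which is the correspondence the alignment step seeks.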
In some embodiments, the refinement preprocessing includes at least one of the following steps:
removing irrelevant and erroneous geometric information from the initial three-dimensional model;
filling the cavity in the initial three-dimensional model;
smoothing the model in the initial three-dimensional model;
texture mapping is performed in the initial three-dimensional model.
Specifically, after the initial three-dimensional model is built, some small problems remain to be refined — for example errors, irrelevant geometric information, blank areas (holes), regions of the model that are not smooth enough, or textures that are not clear enough — so that the initial three-dimensional model does not reproduce the scene faithfully enough. Irrelevant and erroneous geometric information in the initial three-dimensional model is removed according to the scene data; holes in the initial three-dimensional model are filled according to the scene data; and the model is smoothed and texture-mapped according to the scene data, so as to obtain a target digital twin three-dimensional scene with a higher degree of reproduction. Texture mapping is the process of wrapping a two-dimensional image onto a three-dimensional object.
According to the technical scheme provided by the embodiment of the application, the generated initial three-dimensional model may need to be optimized and refined to improve its accuracy and detail. The refinement preprocessing includes operations such as removing irrelevant or erroneous geometric information, filling holes, smoothing the model and texture mapping. Finally, the target digital twin three-dimensional scene is obtained, and a user can view it on the terminal device and add relevant annotations.
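As one hedged illustration of the smoothing operation, Laplacian smoothing moves each vertex toward the centroid of its neighbours, damped by a factor. The mesh and adjacency below are invented toy data, not the application's actual refinement algorithm:

```python
def laplacian_smooth(vertices, neighbors, iterations=1, lam=0.5):
    """One Laplacian smoothing pass per iteration: each vertex moves a
    fraction `lam` of the way toward the centroid of its neighbours."""
    verts = [list(v) for v in vertices]
    for _ in range(iterations):
        new = []
        for i, v in enumerate(verts):
            nbrs = neighbors[i]
            if not nbrs:
                new.append(v[:])
                continue
            # Centroid of the neighbouring vertices.
            cx = sum(verts[j][0] for j in nbrs) / len(nbrs)
            cy = sum(verts[j][1] for j in nbrs) / len(nbrs)
            cz = sum(verts[j][2] for j in nbrs) / len(nbrs)
            new.append([v[0] + lam * (cx - v[0]),
                        v[1] + lam * (cy - v[1]),
                        v[2] + lam * (cz - v[2])])
        verts = new
    return verts

# A flat 5-vertex patch with one vertex spiking out of the plane.
vertices = [(0, 0, 0), (2, 0, 0), (2, 2, 0), (0, 2, 0), (1, 1, 3)]
neighbors = {0: [1, 3, 4], 1: [0, 2, 4], 2: [1, 3, 4], 3: [0, 2, 4],
             4: [0, 1, 2, 3]}
smoothed = laplacian_smooth(vertices, neighbors, iterations=1)
# The spike at vertex 4 is pulled back toward the flat patch.
```

Repeating the pass flattens the spike further, at the cost of shrinking fine detail, which is why production refinement uses tuned or feature-preserving variants.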
Any combination of the above optional solutions may be adopted to form an optional embodiment of the present application, which is not described herein in detail.
The following are device embodiments of the present application, which may be used to perform method embodiments of the present application. For details not disclosed in the device embodiments of the present application, please refer to the method embodiments of the present application.
Fig. 3 is a schematic diagram of a digital twin three-dimensional scene construction device according to an embodiment of the present application. As shown in fig. 3, the digital twin three-dimensional scene constructing apparatus includes:
an acquisition module 301 configured to acquire scene data of a target scene; the scene data includes color image information and image depth information;
a matching module 302 configured to obtain feature points and feature descriptors from the color image information, and perform feature matching according to the feature points and the feature descriptors to obtain a matching result;
A generation module 303 configured to generate an initial digital twin three-dimensional scene using a three-dimensional reconstruction algorithm based on the color image information, the image depth information, and the matching result;
a refinement module 304 configured to perform refinement preprocessing on the initial digital twin three-dimensional scene to obtain a target digital twin three-dimensional scene.
According to the technical scheme provided by the embodiment of the application, scene data of a target scene are obtained; then, feature points and feature descriptors are obtained from the color image information, and feature matching is performed according to the feature points and the feature descriptors to obtain a matching result; then, an initial digital twin three-dimensional scene is generated using a three-dimensional reconstruction algorithm according to the color image information, the image depth information and the matching result; and then, refinement preprocessing is performed on the initial digital twin three-dimensional scene to obtain the target digital twin three-dimensional scene. By reconstructing the real scene in this way, the reconstructed target digital twin three-dimensional scene restores the scene more faithfully, which facilitates evidence searching and the simulated re-enactment of a case scene.
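The four modules of Fig. 3 can be sketched as a pipeline skeleton. The stage bodies below are placeholders, not the claimed implementation; a real system would plug in an RGB-D capture source, a feature matcher (e.g. ORB or SIFT descriptors), a reconstruction backend and mesh refinement:

```python
class ScenePipeline:
    """Minimal skeleton mirroring acquisition (301), matching (302),
    generation (303) and refinement (304) from Fig. 3."""

    def acquire(self, source):                       # module 301
        return {"color": source["color"], "depth": source["depth"]}

    def match(self, scene_data):                     # module 302
        # Placeholder: report one "match" per color sample.
        return {"matches": len(scene_data["color"])}

    def generate(self, scene_data, matches):         # module 303
        return {"model": "initial", "points": matches["matches"]}

    def refine(self, model):                         # module 304
        # Refinement preprocessing would clean geometry, fill holes,
        # smooth and texture-map; here we only relabel the model.
        return dict(model, model="target")

    def run(self, source):
        data = self.acquire(source)
        matches = self.match(data)
        initial = self.generate(data, matches)
        return self.refine(initial)

scene = ScenePipeline().run(
    {"color": [0.1, 0.2, 0.3], "depth": [1.0, 1.1, 0.9]}
)
```

Each stage maps one-to-one onto a claimed module, which keeps the device embodiment testable in isolation.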
In some embodiments, the digital twin three-dimensional scene building apparatus includes:
an identification module configured to identify a scene element from a target digital twin three-dimensional scene; the scene element comprises household articles of a target digital twin three-dimensional scene and a sign board of a target point;
a first determination module configured to determine geometric information of scene elements in the target digital twin three-dimensional scene from the image depth information;
and a second determination module configured to determine spatial screen structure information from the target digital twin three-dimensional scene and the geometric information.
In some embodiments, the digital twin three-dimensional scene building apparatus includes:
the labeling module is configured to apply a salient annotation to the identification plate of the target point in the target digital twin three-dimensional scene;
and the annotating module is configured to generate annotating content in the salient annotation based on the editing operation information for the salient annotation.
In some embodiments, the digital twin three-dimensional scene building apparatus includes:
the forming module is configured to form a correlation chain according to target points marked by the identification plates in the target digital twin three-dimensional scene;
and the association module is configured to associate the object and/or the area corresponding to the target point in the association chain.
In some embodiments, the digital twin three-dimensional scene building apparatus includes:
an image acquisition module configured to acquire a six-view image and a three-dimensional image of an article corresponding to a target point marked by the signboard;
and an image labeling module configured to label the six-view image and the three-dimensional image on the article corresponding to the target point.
In some embodiments, the digital twin three-dimensional scene building apparatus includes:
an initial preprocessing module configured to:
performing initial preprocessing on scene data;
the initial preprocessing comprises at least one of the following steps:
denoising the color image information;
correcting the image elements in the color image information;
aligning image elements in the color image information;
And calibrating parameters for collecting color image information.
In some embodiments, the refinement module 304 in Fig. 3 is configured to perform at least one of the following:
removing irrelevant and erroneous geometric information from the initial three-dimensional model;
filling the cavity in the initial three-dimensional model;
smoothing the model in the initial three-dimensional model;
texture mapping is performed in the initial three-dimensional model.
It should be understood that the sequence numbers of the steps in the foregoing embodiments do not imply an order of execution; the execution order of each process should be determined by its function and internal logic, and should not constitute any limitation on the implementation process of the embodiments of the present application.
Fig. 4 is a schematic diagram of an electronic device 4 provided in an embodiment of the present application. As shown in fig. 4, the electronic apparatus 4 of this embodiment includes: a processor 401, a memory 402 and a computer program 403 stored in the memory 402 and executable on the processor 401. The steps of the various method embodiments described above are implemented by processor 401 when executing computer program 403. Alternatively, the processor 401, when executing the computer program 403, performs the functions of the modules/units in the above-described apparatus embodiments.
The electronic device 4 may be a desktop computer, a notebook computer, a palm computer, a cloud server, or the like. The electronic device 4 may include, but is not limited to, a processor 401 and a memory 402. It will be appreciated by those skilled in the art that fig. 4 is merely an example of the electronic device 4 and is not limiting of it; the electronic device 4 may include more or fewer components than shown, or different components.
The processor 401 may be a central processing unit (Central Processing Unit, CPU) or other general purpose processor, digital signal processor (Digital Signal Processor, DSP), application specific integrated circuit (Application Specific Integrated Circuit, ASIC), field programmable gate array (Field-Programmable Gate Array, FPGA) or other programmable logic device, discrete gate or transistor logic device, discrete hardware components, or the like.
The memory 402 may be an internal storage unit of the electronic device 4, for example, a hard disk or a memory of the electronic device 4. The memory 402 may also be an external storage device of the electronic device 4, for example, a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash Card or the like provided on the electronic device 4. The memory 402 may also include both an internal storage unit and an external storage device of the electronic device 4. The memory 402 is used to store computer programs and other programs and data required by the electronic device.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-described division of the functional units and modules is illustrated, and in practical application, the above-described functional distribution may be performed by different functional units and modules according to needs, i.e. the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-described functions. The functional units and modules in the embodiment may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit, where the integrated units may be implemented in a form of hardware or a form of a software functional unit.
The integrated modules/units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a readable storage medium (e.g., a computer readable storage medium). Based on such understanding, the present application implements all or part of the flow in the methods of the above embodiments, or may be implemented by a computer program to instruct related hardware, and the computer program may be stored in a computer readable storage medium, where the computer program may implement the steps of the respective method embodiments described above when executed by a processor. The computer program may comprise computer program code, which may be in source code form, object code form, executable file or in some intermediate form, etc. The computer readable storage medium may include: any entity or device capable of carrying computer program code, a recording medium, a U disk, a removable hard disk, a magnetic disk, an optical disk, a computer Memory, a Read-Only Memory (ROM), a random access Memory (Random Access Memory, RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and so forth.
The above embodiments are only for illustrating the technical solution of the present application, and are not limiting thereof; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present application, and are intended to be included in the scope of the present application.

Claims (10)

1. A digital twin three-dimensional scene construction method, the method comprising:
acquiring scene data of a target scene, wherein the scene data comprises color image information and image depth information;
acquiring feature points and feature descriptors from the color image information, and performing feature matching according to the feature points and the feature descriptors to obtain a matching result;
generating an initial digital twin three-dimensional scene by using a three-dimensional reconstruction algorithm according to the color image information, the image depth information and the matching result;
and performing refinement preprocessing on the initial digital twin three-dimensional scene to obtain a target digital twin three-dimensional scene.
2. The digital twin three-dimensional scene building method as defined in claim 1, comprising:
identifying a scene element from the target digital twin three-dimensional scene; the scene element comprises household articles of a target digital twin three-dimensional scene and a sign board of a target point;
determining geometric information of the scene elements in the target digital twin three-dimensional scene according to the image depth information;
and determining spatial screen structure information according to the target digital twin three-dimensional scene and the geometric information.
3. The digital twin three-dimensional scene building method as defined in claim 2, comprising:
applying a salient annotation to the identification plate of the target point in the target digital twin three-dimensional scene;
and generating annotation content in the salient annotation based on the editing operation information aiming at the salient annotation.
4. The digital twin three-dimensional scene building method as defined in claim 3, wherein the method comprises:
forming a correlation chain according to target points marked by the identification plates in the target digital twin three-dimensional scene;
and correlating the objects and/or areas corresponding to the target points in the correlation chain.
5. The digital twin three-dimensional scene building method as defined in claim 3, wherein the method comprises:
acquiring a six-view image and a three-dimensional image of an article corresponding to a target point marked by the signboard;
and labeling the six-view image and the three-dimensional image on the article corresponding to the target point.
6. The method for constructing a digital twin three-dimensional scene as defined in claim 1, wherein after acquiring scene data of a target scene, the method comprises:
performing initial preprocessing on the scene data;
the initial pre-treatment comprises at least one of the following steps:
denoising the color image information;
correcting the image elements in the color image information;
aligning image elements in the color image information;
and calibrating parameters for acquiring the color image information.
7. The digital twin three-dimensional scene building method according to any of claims 1-6, wherein the refinement preprocessing comprises at least one of the following steps:
removing irrelevant and erroneous geometric information from the initial three-dimensional model;
filling the cavity in the initial three-dimensional model;
smoothing the model in the initial three-dimensional model;
texture mapping is performed in the initial three-dimensional model.
8. A digital twin three-dimensional scene building apparatus, the apparatus comprising:
an acquisition module configured to acquire scene data of a target scene; the scene data includes color image information and image depth information;
the matching module is configured to acquire feature points and feature descriptors from the color image information, and perform feature matching according to the feature points and the feature descriptors so as to obtain a matching result;
a generation module configured to generate an initial digital twin three-dimensional scene using a three-dimensional reconstruction algorithm from the color image information, the image depth information, and the matching result;
and a refinement module configured to perform refinement preprocessing on the initial digital twin three-dimensional scene to obtain a target digital twin three-dimensional scene.
9. An electronic device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the processor implements the steps of the method according to any of claims 1 to 7 when the computer program is executed.
10. A readable storage medium storing a computer program, characterized in that the computer program when executed by a processor implements the steps of the method according to any one of claims 1 to 7.
CN202311255731.6A 2023-09-26 2023-09-26 Digital twin three-dimensional scene construction method and device Pending CN117313364A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311255731.6A CN117313364A (en) 2023-09-26 2023-09-26 Digital twin three-dimensional scene construction method and device


Publications (1)

Publication Number Publication Date
CN117313364A true CN117313364A (en) 2023-12-29

Family

ID=89280623

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311255731.6A Pending CN117313364A (en) 2023-09-26 2023-09-26 Digital twin three-dimensional scene construction method and device

Country Status (1)

Country Link
CN (1) CN117313364A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117786147A (en) * 2024-02-26 2024-03-29 北京飞渡科技股份有限公司 Method and device for displaying data in digital twin model visual field range



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination